2011-12-21

Cyber Attack: Whose Side is Your Thermostat on?

Today's WSJ lead story was on a cyber attack on the US Chamber of Commerce. After "overhauling" its network security, the US Chamber reports that a thermostat is communicating with Chinese computers.  There has been significant press recently on US assertions about Chinese attacks, along with some history from fairly reputable folks on the subject. Other attacks appear to have other sponsors -- Stuxnet has become a reference example, and the subsequent death of an Iranian general might, at least in theory, also reflect a cyber incursion.  From a professional perspective there are interesting aspects here beyond the questions of who was behind various attacks, or why -- we need to be continuously prepared to expand our view of possible attack vectors, potential targets, and overall vulnerabilities.
Security needs to be built in as part of design, in applications from embedded systems to cloud computing. We also must be prepared to revise and maintain protections as new threats become evident. Perhaps most critical is recognizing which systems are at risk, and what that risk might be.  Which brings us back to the thermostat. I doubt that any serious security risk assessment was undertaken for the software engineering of that device.  Actually, it is quite likely that software engineering was not the discipline applied, but rather fairly simple programming -- after all, what can go wrong if your thermostat fails? Perhaps a more serious question is what can go wrong if your thermostat, or your programmable logic controller, or your mobile 'everything' device gets captured by someone who has a different agenda for its use.

When I questioned someone about the Aurora vulnerability for power substations, the response was: "that was not a valid test, they operated the systems outside of the acceptable procedures." This is one problem we face: folks attacking and abusing our systems are likely to operate them in ways that are not expected, and with intentions that differ from those of the developer or the user. IT managers, security folks, and just-plain users and developers need to consider this.  In many cases, the best approach is the KISS principle, "keep it simple".  Why was the thermostat attached to the network ... why is it allowed to communicate beyond some immediate control system?  Is this level of automation really required?  And if it is, are we prepared to apply the appropriate security protocols to assure it is not creating an unexpected risk?
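To make the egress question concrete, here is a minimal sketch in Python (the addresses and network range are hypothetical, not from the Chamber incident) of the kind of allowlist check an embedded controller could apply: outbound connections are permitted only to the local control network, so a compromised thermostat cannot quietly report to an arbitrary overseas host.

```python
import ipaddress

# Hypothetical control-network range; a real device would load this from configuration.
CONTROL_NETWORK = ipaddress.ip_network("10.20.0.0/24")

def egress_allowed(destination_ip: str) -> bool:
    """Permit outbound traffic only to the immediate control system's network."""
    return ipaddress.ip_address(destination_ip) in CONTROL_NETWORK

# The building controller is reachable ...
assert egress_allowed("10.20.0.15")
# ... but a connection attempt to an arbitrary external address is refused (and worth logging).
assert not egress_allowed("203.0.113.42")
```

The point is not that a few lines of filtering solve the problem, but that the question "who is this device allowed to talk to?" has to be asked at design time.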
You don't need to reply to my questions here ... just tell your thermostat; I'll get the message.

2011-10-13

What Technology Wants

This is the title of a 2010 book by Kevin Kelly, a regular presenter at TED, past editor of Wired, and a commentator on the evolution of technology. The book has been reviewed in various publications I read, and aligns with some of the topics in my OLLI class this fall on Technology and Magic.

Kelly promotes the term "technium" to reflect the entirety of technology (bees building hives, DNA building bodies, etc.) as opposed to the modern "what engineers build" concept (or Alan Kay's notion that "technology is anything that wasn't around when you were born"). He then proceeds to argue that technology is an evolving thing, somewhat independent of sapiens (as he likes to call the current crop of self-aware, conscious entities of which most of us are instances). So you can see the book has deep roots (actually back to the Big Bang) and points toward long term impacts and considerations.

Kelly shares some of Ted Kaczynski's (the Unabomber's) perspective on the domination that "the system" (including technology) has over people, but does not share his paranoia or his methods. Rather, Kelly sees the inevitable progression of technology as increasing the options for people, and as such something that will be marginally better than what came before. I just returned from Peru, where many folks living off the land (farming plus raising llamas, guinea pigs, etc.) found their children opting to move to the cities (and live marginally unemployed in the slums). Kelly's assertion that this is attractive because it provides more options and greater freedom is a reasonable explanation for the trend. It also explains why many of the folks remaining in the country have cell phones as one of their few technologies. Folks on the floating islands of Lake Titicaca have solar panels to charge their TVs, with few other technologies evident (in a tourist-supported community).

So the glacier of technology continues to move our way (actually at the speed of Moore's law and its many corollaries, which Kelly outlines as well) -- and it is unavoidable. This leaves (at least) two questions: what do we do about it, and can we predict where it is headed?


What can we do?
 Here I summarize Kelly's perspective:
  1. Anticipate where things are going (I like "predictive fiction" as an option here.)
  2. Maintain eternal vigilance ... we will be surprised, so minimize the response time (and recognize not everything will be good)
  3. Prioritize risks -- basement bio-engineering labs may carry higher risks than Steve Jobs's (we will miss you, Steve) Cupertino garage.
  4. Rapid correction of harm (this one is challenging if the technology is popular, or supported by corporate or governmental interests)
  5. Don't prohibit, rather re-direct -- bans are not effective, but re-focusing on beneficial applications can work (bombs vs. power plants)
These are non-trivial challenges, ones we may not be able to track in any organized way.  The web may help ... is there a site "Incoming"? Those of us in the current technology community may want to establish something along this line.


What does technology want?
 Here's Kelly's ultimate (and admittedly incomplete) list -- "technology wants what life wants":
  • Efficiency - doing it 'better' tends to have an advantage
  • Opportunity -- which is why we will go along
  • Emergence and Complexity (these tend to go together, and yield unexpected results)
  • Diversity - over time, more and varied things rather than less
  • Specialization - tied to diversity, as each thing becomes more specific (environmental niches)
  • Ubiquity - this is sort of a 'selfish gene' or perhaps 'meme' aspect of things, the evolution of replicators (as Susan Blackmore points out); some will surface as 'winners' that rise to the highest level of dissemination they can.
  • Freedom - as in free will. Evolving systems tend to operate with motives more successfully than mandates.
  • Mutualism - things are better together: genes join to form DNA, cells form bodies, humans create civilizations, computers create networks ... and in many cases these reflect diversity and specialization, and foster symbiotic relationships.
  • Beauty - or perhaps "elegance" in the engineering sense, where highly efficient forms are often coupled with simplicity.
  • Sentience - sensing and using information is an inherent aspect of technology -- from our white cells that learn how to eliminate bacteria, to Watson as it finds its way to winning at Jeopardy.
  • Structure - technology's response to entropy; while the universe moves towards heat death, technology is constantly increasing the structure of the available materials and information.
  • Evolvability - Blackmore would argue that any replicator in an environment over time will evolve, and Kelly asserts that technology is just such a beast.
Basic message: we can't beat it, so join it ... and see if we can't shift the balance towards the beneficial and away from "oops!".

2011-08-10

Augmented Reality

Ok, I got a new toy: an Acer Iconia Tab A500 (I'd point you to a specs page, but those I checked do not list facilities like GPS!) -- with built-in GPS, a camera, and a variety of free apps that combine these things (sorry Apple, the open Android environment is the big win; no more being limited to the apps that Apple approves ... there is some security in that, but it is managed by profit motives as well). These devices are the tip of the iceberg for the emerging area of augmented reality.

Where you are (physically, or virtually) is an index point for information relevant to that place. An obvious example is any mapping software that uses where you are (via GPS, cell triangulation, or Google's index of wifi MAC addresses) and positions you on the map. Combine this with Street View or Microsoft's Street Slide and you can see what you are looking at from pictures posted by others. Merge this with the image from the outward-facing camera, and you now see the reality, augmented by overlays from sources you select. Simple examples from my recent vacation in Colorado: an overlay looking at the Rocky Mountain National Park peaks from Trail Ridge Road that names the peaks, and perhaps provides elevation and distance information. Try the same thing at Mesa Verde National Park and you could have text or audio about the ancient Puebloan cliff dwelling you are viewing, from a simple "Junior Ranger" view to an in-depth discussion from experts (anthropologists or the modern Puebloan perspective). We are a short time away from overlaid video sequences that can animate history at historical sites, or even fantasy stories that operate with 'boots on the ground'.
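For the peak-naming example, the core calculation is small enough to sketch. Assuming only the standard library and approximate, illustrative coordinates (not a real catalog), the overlay needs the distance and compass bearing from the device's GPS fix to each named peak; comparing that bearing against the device's compass heading tells it whether the label belongs in the current camera view.

```python
from math import radians, degrees, sin, cos, asin, atan2, sqrt

EARTH_RADIUS_KM = 6371.0

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes (haversine formula)."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (0-360 degrees) from the viewer toward the target."""
    p1, p2 = radians(lat1), radians(lat2)
    dl = radians(lon2 - lon1)
    x = sin(dl) * cos(p2)
    y = cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dl)
    return (degrees(atan2(x, y)) + 360) % 360

# Illustrative only: a viewer on Trail Ridge Road looking toward Longs Peak.
viewer = (40.43, -105.75)
peak = (40.255, -105.615)
print(round(distance_km(*viewer, *peak), 1), "km at bearing",
      round(bearing_deg(*viewer, *peak)), "degrees")
```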

Folks like Blair MacIntyre at Georgia Tech have been doing research in this area for a while, and are looking to establish standards that open the door for platform-independent AR content. It looks like W3C may take up this cause, building on KML (ARML, KARML) -- with likely competing commercial interests that could delay things for a few years.

While the devices that will make this environment "essential" (see Vernor Vinge's Rainbows End) may include goggles/headsets or implants, in the short term we can get significant impact with the next generation pocket device (as cell phones, GPS units, cameras, tablets, et al. converge) ... an interesting question is what components will be needed in said device.

An initial list: camera, display screen, GPS, wifi/bluetooth (an optional phone link -- cell phones are an expensive channel for AR apps (IMHO) and won't work in many interesting locations), sensors for positioning (you are here, but where are you looking?), sound output -- and of course the inner workings that will make it all click. My A500 is a bit big for portable use in this context. You need memory to store the anticipated content elements, given that you may not have online access as you move through a park or other remote situation. I can envision parks adding bluetooth or wifi transmitters, as they now have cell phone ones and/or have had AM radio points of interest, to provide local content in some situations.
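On the memory point, the essential offline behavior is also easy to sketch. Assuming a hypothetical catalog of content elements (real ones would come from an AR content service), the device would prefetch everything within some radius of the planned route before connectivity disappears:

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine), repeated here so the sketch stands alone."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    return 2 * 6371.0 * asin(sqrt(sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2))

# Hypothetical catalog entries -- name, location, and the media to download.
POI_CATALOG = [
    {"name": "Cliff Palace audio tour", "lat": 37.168, "lon": -108.473, "media": "audio"},
    {"name": "Longs Peak label",        "lat": 40.255, "lon": -105.615, "media": "text"},
]

def prefetch(route_points, radius_km=10.0):
    """Return the content elements close enough to the planned route to cache locally."""
    cached = []
    for poi in POI_CATALOG:
        if any(distance_km(lat, lon, poi["lat"], poi["lon"]) <= radius_km
               for lat, lon in route_points):
            cached.append(poi)  # a real device would download the media to local storage here
    return cached

# Planning a Mesa Verde visit caches only the nearby items.
print([poi["name"] for poi in prefetch([(37.18, -108.49)])])
```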

We need to consider how services evolve for this as well. Wikitude is more of an augmented map than an overlay on reality, but it does give us a sense of location-based browsing. Many of the elements near my location have no value ... which, unfortunately, is like web search. If you want to track a given author/source, or look for historical, geological, or some other characterization of content -- this is not the tool. Just as we have web sites that link a set of logical pages together, there is value in having a linked AR facility as well.

2011-04-12

Getting a computer-related job - new grads, et al.

I encountered a query from an NHTI student about how to find a job in today's world, and I have a few suggestions for her and others who may be in a similar situation. I did a presentation at Nashua Community College in March related to this, "Taking Control of Your Future", which looks at the longer term career arc(s) that today's graduates face. But what can you do NOW for a job SOON?
1. Painful reality -- most jobs (as much as 90%, I've heard from some HR folks) are filled through networking: folks you know, or contacts you make. While your professors are one source of contacts, reach out much farther if you want to make it happen.
  • Participate in local professional activities -- many are free, and participating professionals can be key mentors and/or paths into local companies. The IEEE NH Computer Society chapter is one example with its regular seminars; also try Googling Linux User Groups (LUGs), Visual Basic user groups, etc. -- many such informal groups exist at a local level.
  • When you attend events -- bring business cards if you can (cheap ones available at Vistaprint.com) -- it allows you to introduce yourself to folks of interest (and the quid-pro-quo is that they give you one of theirs ... so follow up on that -- see below)
  • Ask relevant questions at the event ... stand out from the crowd in a positive way.
2. Make it a "project" -- take on the role of free-lance journalist, and ask for an opportunity to interview folks in local companies of interest -- where are the jobs going to be, what skills will recent graduates need, where are they finding candidates? ...
  • You can do this simply as a personal informal role -- do some homework on target companies (what they do, what related work they are likely to have) -- and ask for a chance to interview someone for 20 minutes on where the future of careers (you would be interested in) are going. ... Have a serious and relevant list of questions at hand, ask them, and start to leave at the 20 minute point having obtained information from them. Always include the question "who else might I talk to in our industry that could have useful insight" ... if they give you a name, get an email or phone #... more about this below.
    Do have a copy of your resume in your "back pocket". It may well be that the person you talk to will ask you to stay and to learn more about you and your interests. Don't push the resume their direction, let them ask for it (if they won't use it they won't ask.)
  • You can do this literally as a project to write a paper -- I suspect most school newspapers would pick up your results, local papers might, it always makes a useful entry on your blog or Facebook page, ... and some of the professional societies have newsletters where this would be of interest. (And of course send a copy to the folks you interviewed, and add this to your resume as an example of your communications skills.)
  • Send a thank you note to anyone you interview -- if you want to stand out, send them a hand written note by U.S. Mail (and of course make sure your note has your contact information on it.)
  • Follow up on any pointers you are given ... contact these folks and ask them for an interview (same deal) ... and let them know that "Bob suggested I talk to you" -- using the reference's real name, not 'Bob'.
3. Looks count ... every place you are visible. Make sure your Facebook page (Twitter stream, LinkedIn profile, etc.) is professional ... stuff you want an employer to see -- because they will look. Indications are that employers view these sites a significant percentage of the time before they bother to contact you about an interview. Potential for discrimination? ... you bet. But nobody will blame them if they decide not to interview a person who highlights their party life or last binge. Being "cool" on the web is not all it's cracked up to be. Indications are that it takes months of diligent posting on your sites both to flush embarrassing old stuff and to get search engine visibility for your desired image.
  • Check yourself out via Google -- is the top of the list the real you? (Some folks will think it is; I talked to one professional woman whose name matched that of an "adult" entertainer, so she changed her day-to-day name back to her maiden name.) Add an initial, or your full middle name ... become unique if you can (even adding or creating a nickname ... one that is professionally sound -- Jim "Jedi" Isaak perhaps).
  • Your visible screen names (handles, email names, etc.) also want to be professional. I suggest that "sexy-mama" or "stud-muffin" are not images you want to project to prospective employers.
  • Pictures count ... what pictures of you are tagged? ... Looking good? ... oops
  • and of course when you are doing the face to face thing, look appropriate for the job. Interviewing at Harley-Davidson, wear your jeans and leather jacket --- interviewing at a bank, a suit is good ---- be at least business casual everywhere else.
4. Every contact is an opportunity. Some will pay out in the short term, others may be longer term connections. Once you have a position -- maintain your contacts. These are points for keeping in touch with what is happening in the field. Continue to attend those professional meetings ... continue to learn today's skills, ones that you see emerging in your future so that you have control over your future.

One resource for IEEE Computer Society members is the "Build Your Career" site, with pointers to relevant resources as well as job postings, etc.

Carpe Cras ... Seize tomorrow ... and do it today!

2011-03-28

Most Human Human and Machine Intelligence

I've had a chance to read a galley copy of "The Most Human Human" by Brian Christian. It relates his quest in the 2009 Loebner Prize Turing Test competition. Besides identifying the "most human computer", the annual event also designates the person who has been most confidently identified by the judges as human. In a 5-minute interactive text exchange, each judge has to select "human" or "computer" and state their confidence in that selection, which leads to the two potential 'winners' (humans get no substantial award for being human, and are discouraged from trying to simulate a machine).
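The scoring behind those two awards is simple to picture. Here is a toy sketch -- my own simplification with invented ballots, not the contest's actual rules or data -- of how confidence-weighted judge verdicts could single out both a "most human human" and a "most human computer".

```python
# Each ballot: (entity judged, judge's verdict, judge's confidence from 0.0 to 1.0).
# Entities and values are invented for illustration.
ballots = [
    ("confederate_A", "human", 0.9),
    ("confederate_A", "human", 0.6),
    ("chatbot_X", "human", 0.4),       # a judge fooled by the program
    ("chatbot_X", "computer", 0.8),
]

ground_truth = {"confederate_A": "human", "chatbot_X": "computer"}

def humanness(entity):
    """Confidence-weighted score: 'human' verdicts add, 'computer' verdicts subtract."""
    return sum(conf if verdict == "human" else -conf
               for who, verdict, conf in ballots if who == entity)

most_human_human = max((e for e, kind in ground_truth.items() if kind == "human"), key=humanness)
most_human_computer = max((e for e, kind in ground_truth.items() if kind == "computer"), key=humanness)
print(most_human_human, most_human_computer)
```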
Christian raises the interesting question, in the process, of "how do we know what is human?" This is the focus of the book -- pursuing historical concepts from the Greek philosophers to modern instantiations such as Garry Kasparov's chess matches with IBM's Deep Blue. (Christian was aware of the then upcoming Jeopardy-Watson match, but went to print prior to it.) This approach ends up focusing on the diversions and "rat holes" more than on what distinguishes humans, intelligence, or consciousness. Since this particular instantiation of the Turing Test has a time limit (expanded from 5 minutes in 2009 to a 25-minute head-to-chip comparison for 2011), the AIs created for the contest are "purpose built". Much of Christian's discussion focuses on how a person can be convincingly distinguished as human from the 'single purpose' programs of the past.
Many of the points he raises provide insight on the nature of being human:
  • Persons have a consistent, unique identity (not changing point of residence, gender, or relationship status from one input line to the next). Exemplar AIs do not have this same sense of personal history/identity.
  • Persons have a sense of context -- except, interestingly enough, Christian points out when they are arguing ... then responses often degenerate to reply to the last comment made, not the initial topic triggering the dispute. Some (at times convincing) AI's simply respond to the most recent input with no continuity.
  • Persons 'add value' (hopefully) in interactions, ideally surfacing new concepts which were not implicit from strict analysis. (Christian touches on left brain/right brain distinctions here.)
And his listing goes on -- returning regularly to the point of "how can I use this to emerge as the most human human?"
From the technologist's perspective, and getting back to Turing's initial concept: how would we know if an AI can think? Here I find the Loebner approach simplistic. Fooling 30% of the judges in 2009, and 50% in 2011, does not satisfy my criteria for "thinking" (of course I'm not sure that some persons, politicians for example, would clear my hurdle here either). Consider a few alternative situations:
  • An AI which is not purpose built but consistently is considered to be a human respondent in general discourse
  • An entity known to be an AI which is generally agreed is thinking, conscious, intelligent...
Perhaps a more challenging concept is an AI that is thinking, but doesn't pass the Turing test ... perhaps because it does not care to be judged against human standards. It is a point of some arrogance on the part of humans to presume that the only instantiations of 'thinking', 'consciousness', or 'intelligence' must parallel similar human characteristics.



2011-03-03

Thinking about China, #1

If you have not wondered how the global future will evolve given the (re)-emerging strength of China, you may not have been wondering enough.

China has 13 million folks with a "genius" IQ (1% of the population), is graduating many engineers, and has senior government leadership with engineering degrees. China has 4000+ years of valuing education, and is rapidly becoming home to the largest population of English speakers in the world.

China does have significantly different cultural roots and traditions from "western" countries, which makes it difficult to understand where we have "common ground." China seems comfortable combining "Communism" with almost unfettered capitalism. Entrepreneurs abound in China, driving an exploding economy and rapid increases in GNP. At the same time, it is clear that the Chinese government exerts very strong controls in some areas, and some of those controls run strongly counter to "western" sensibilities. But ...

Consider the reality that China, from a government-run economic perspective, can make strategic investments in areas it considers important. That can be education (engineering among other fields), it can be industries, and it can be geo-political influence (Africa as one focus, for example). This brings a lot of resources to bear on targeted objectives ... and since China is rapidly becoming one of the richest countries in the world (GNP to exceed the US by 2020 or so), this warrants consideration.

I am currently studying Chinese history -- something we do not cover in U.S. education systems in any depth. When I was in China some twenty years ago, my host observed: "we have been here 4,000 years, we will be here another 4,000 years; right now we are communist" ... a sense of history that is hard to grasp in a country with just over 200 years of its own.

There are periodic discussions of emerging China on TED.com; recent talks provide a bit of current context. For those seriously interested, I do not doubt that learning Mandarin would be a useful exercise. Language reflects how we think, and both the Chinese language and the Hanzi character set provide a level of insight. Note that the concept of "spelling" does not exist with Hanzi; rather, it is properly forming the characters that is critical to clear communication -- so the mental concepts involved diverge from western languages in very basic ways.

However you look at the future, China will play a major role. I fully expect China to be a leading source of innovation, new technology, and scientific breakthroughs over the next decades. I will not be surprised to find China landing on Mars (the Red planet is appropriate for many reasons), and leading in Genetic Engineering as well as technology. There will be spectacular failures along these paths -- but this again reflects different cultural backgrounds and values, and may be viewed by China as part of the "cost of doing business".

2011-02-09

Personalization - Privacy and Netflix

My engineering curiosity was captured by the integration of the Internet with a few recent purchases we made, like a Blu-ray DVD player and an HDTV set. The business-model-constrained choices in these devices are also interesting -- offering selected video streams (Netflix, VuDu) but not others (PBS, TED) -- the most obvious omission being simple broadcast TV over the net. A while ago you could watch most ABC shows from ABC.com and NBC from NBC.com ... even all of the Twilight Zone series! The latter (in black and white) was combined with current (color) ads, which creates a contrast that Rod Serling could appreciate. But the broadcasters have moved away from this, experimenting with Hulu and such. (I must wonder if they were afraid that more recent creations couldn't compete with some of the oldies -- but that is me and my nostalgia perhaps.)

So I subscribed to Netflix, and started queuing up things to watch. Another business model constraint became evident when I realized that many interesting movies were only available on DVD "by mail" as opposed to online streaming -- no doubt a point of conflict between the online Netflix model and the copyright holders. At some point the 'big guys' will get it, and we will see the flood gates open to online video streams from the proper rights holders, but until then we will have hurdles to jump, inconsistencies, and inefficiencies.

Netflix is attempting to get to the next generation of "personalization" -- anticipating what you will enjoy using a formula that combines what you have watched (and ideally rated), viewing preferences you explicitly enter, and the preferences of "viewers like you". You may also be aware that they ran a contest to see if someone could come up with a better algorithm for selecting recommendations. Given all of this, I was a bit disappointed in the recommendation service circa 2011: it indicated that my "like" for "Wallace and Gromit" was the basis for a number of recommendations, including "Lewis and Clark" -- sort of slipping between animation/comedy and historical documentaries ... I think that one was avoidable. Mind you, historical documentaries are high on my list of preferences and 'likes', so it is not the recommendation that is in error as much as the rationale presented.
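The "viewers like you" ingredient in such a formula is usually some flavor of collaborative filtering. Here is a bare-bones sketch of that idea -- toy titles and ratings, simple cosine similarity between viewers -- and emphatically not Netflix's actual algorithm.

```python
from math import sqrt

# Toy ratings on a 1-5 scale; the viewers, titles, and values are invented.
ratings = {
    "me":       {"Wallace and Gromit": 5, "Lewis and Clark": 4},
    "viewer_b": {"Wallace and Gromit": 5, "Chicken Run": 5},
    "viewer_c": {"Lewis and Clark": 5, "The Civil War": 5},
}

def similarity(a, b):
    """Cosine similarity over the titles both viewers have rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][t] * ratings[b][t] for t in shared)
    norm_a = sqrt(sum(ratings[a][t] ** 2 for t in shared))
    norm_b = sqrt(sum(ratings[b][t] ** 2 for t in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Rank unseen titles by similarity-weighted ratings from the other viewers."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        weight = similarity(user, other)
        for title, value in ratings[other].items():
            if title not in ratings[user]:
                scores[title] = scores.get(title, 0.0) + weight * value
    return sorted(scores, key=scores.get, reverse=True)

# Both the animation fan and the documentary fan pull in recommendations, for different reasons.
print(recommend("me"))
```

Which also illustrates the rationale problem: the engine can be right about a title while presenting the wrong neighbor as its reason.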
A second realization emerged as I started to consider the categorization preferences. First, you only get 5 points of rating, and "ok" is not one of them -- you must choose "liked it" or "didn't like it". Second, for user preference indicators you only get three levels (never watch, sometimes, and always), which is very weak as well. I sense the classic engineering-marketing concern with the limited supply of positive integers. A sliding scale, encoded between negative 10,000 and positive 10,000, would be easily captured in the same database -- and while a user might not be consistent, it would provide a much more finely tuned level of feedback.
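On the granularity point, the storage cost of a finer scale really is trivial. A quick sketch (hypothetical schema, standard-library SQLite) mapping a continuous slider onto the signed -10,000..+10,000 range suggested above:

```python
import sqlite3

def slider_to_rating(position: float) -> int:
    """Map a slider position in [-1.0, 1.0] onto an integer in [-10000, 10000]."""
    position = max(-1.0, min(1.0, position))
    return round(position * 10_000)

# Hypothetical schema: the finer-grained rating is still just an integer column.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ratings (viewer_id INTEGER, title TEXT, rating INTEGER)")
db.execute("INSERT INTO ratings VALUES (?, ?, ?)",
           (1, "Wallace and Gromit", slider_to_rating(0.87)))
db.execute("INSERT INTO ratings VALUES (?, ?, ?)",
           (1, "Lewis and Clark", slider_to_rating(-0.15)))
print(db.execute("SELECT title, rating FROM ratings").fetchall())
```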
Then there is the basis for selection -- my wife has some series she enjoys (CSI and such), my grandson is currently into sunken ships and submarines, and the movies we watch with that part of the family (Disney/Studio Ghibli) don't match the films my wife and I watch together. The result is a mix of selections and preferences that is not really a match for any one person -- personalization that lacks personalization.
Finally there is the question of privacy. One of the few legal protections in the U.S. relates to video rentals, a result of rental records being disclosed during the Supreme Court nomination of Robert Bork. It is not clear whether this applies to Netflix and their personalization. Your selections and stated preferences may disclose things about you which you do not wish to have public. There is a school of "privacy is dead" thinking -- including a recent piece on this in the IEEE Computer Society's Internet Computing by Jakob Ericsson. But I for one advocate for some level of guidelines, not quite willing to submit to the ubiquitous audio/video surveillance suggested by Neal Stephenson in Snow Crash with the virtual "Earth" -- which apparently inspired Google Earth, albeit one limited in its real-time presentation of close-up audio/video streams, particularly indoors.
The bottom line is that the Netflix personalization inspires both fear (implied personal characteristics) and fun (off-target recommendations). We need to develop the policies and principles that will guide this evolution -- even if we cannot do this as quickly as the technology is evolving.

2011-01-24

In the January issue of Computer magazine, Sam Fuller and Lynette Millett present a summary of a report from the CSTB (Computer Science and Telecommunications Board) of the US National Academy of Sciences -- Computing Performance: Game Over or Next Level? The report lays out the challenges we face to catch up (and keep up) with the computing performance "expectations gap" -- noting that single-processor scaling has already hit the wall (power consumption/heat dissipation more than chip density) -- and that the future requires serious work on parallel processing, including at the software level.
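For a sense of what "serious work on parallel processing including the software level" means at the very smallest scale, here is a sketch (standard-library Python; the workload is a stand-in) of the easy case -- fanning independent work items across cores. The report's concern is everything harder than this: shared state, synchronization, and algorithms that do not decompose so neatly.

```python
from multiprocessing import Pool

def simulate(seed: int) -> float:
    """Stand-in for a CPU-bound work item; any independent computation would do."""
    total = 0.0
    for i in range(1, 200_000):
        total += (seed % 7 + 1) / i
    return total

if __name__ == "__main__":
    work = list(range(32))

    # Serial baseline.
    serial = [simulate(s) for s in work]

    # The same work fanned out across all available cores.
    with Pool() as pool:
        parallel = pool.map(simulate, work)

    # Same answers, ideally in a fraction of the wall-clock time on a multi-core machine.
    assert serial == parallel
```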

There are two aspects of this article I want to point out -- the trillion dollar annual impact that the report asserts, and the suggested need for standards. (Quotations are from the article.)

Industry and Government need to support IT
"The IT industry annually generates a trillion dollars and has even larger indirect effects throughout society." ... "Current technological challenges affect not only computing but also the many sectors of society that now depend on advances in IT and computation. These suggest national and global economic repercussions." Here is my concern. If industry and government really believe we face these types of impact, why don't they take IT seriously? For years IT has been identified as one of the largest potential areas of job growth (Bureau of Labor Statistics, BLS) but there is limited federal or state investment to address this labor need; and STEM programs from industry to attract students tend to be corporate flag waving with little coherent effort. Beyond this, I don't see industry providing job security for IT professionals -- and perceptions are critical here to attract and retain competent people. The need for software engineers (175,000 of the top paying jobs in the BLS table) overlaps with the need for professionals trained in parallel programming outlined by Fuller and Millett -- but as they point out we do not have this as part of the curriculum at this point. With multi-core systems, CELL processors, cloud computing, etc. it has been clear for some time that this would be an essential area for both research and skills development.

Parallel is not the priority for Consumer Computing

But we need one major caveat: this is not, in my opinion, where the challenges lie for consumer computing -- the devices we put in the hands of billions of individuals. As I have asserted previously (CS President's Blog), the real evolution for these devices is a much longer expected life span (robust, modular, upgradeable) -- not disposability. The marketing desire of corporations to sell yet-another-computer to every consumer is inconsistent with sustainability objectives, and generates deserved distrust from the public. I look at the proliferation of pocket things -- smart phones, iWhatevers, eReaders, etc. -- and see needless (but intentional) differentiation. A recent article in Scientific American (An Open Question) confuses the value of "open" platforms (Android's multivendor model vs. the iPhone's single vendor) with "open software" (free to modify and, most importantly, to add applications). While I doubt any user needs the thousands of applications available for both of these platforms, I know some users want applications that the closed vendors do not want to make available from third parties. Open platforms like Android give users the freedom to develop and acquire applications for their devices without having to go to court -- which opens the door for Android devices to fill many identified and not-yet-identified application needs without waiting for corporate approvals. While some of these may go to the cloud for parallel processing power, many will quite happily execute on a single processor for years to come.

But Standards are Needed -- Parallel Processing and Android Applications

This leads to the second major point of the Fuller-Millett article: the need for standards.
"Private Firms are often incentivized to create proprietary interfaces and implementations to establish a competitive advantage. However, a lack of standardization can impede progress because the presence of so many incompatible approaches deprives most from achieving the benefits of wide adoption and reuse..." This problem exists for portability of applications for the cloud, for parallel processing in general and also for Android platforms. In all of these cases the short term interest of suppliers will be differentiation combined with "lock-in" that will discourage compatibility and the implicit competition that comes with real standardization. There are two steps that can significantly impact the time to standardization:
  1. Someone (industry, NSF, etc.?) has to fund a volunteer consensus standards activity, I recommend IEEE as a candidate for all three of these domains -- there is already some work going on with Cloud Computing. The highest impact investment here is to fund an experienced technical editor, which is a service IEEE staff can facilitate, although most standards depend on volunteers.
  2. Encourage the US Government to establish a FIPS that calls for government procurements to use the resulting standard(s). Without a significant buyer demand, standards do not get implemented. It has been some years since the federal government has actually exerted its buying power. But, if we believe the high dollar impact suggested by the CSTB, then this would make a lot of sense.
Why mix the consumer and the performance-demand needs in the same discussion? Because public perception is a critical factor in getting governments and industry to actually change. If the focus is strictly on applications with performance-gap problems (which, I argue, is not consumer computing), then the public won't perceive the need for this. If we can empower users -- providing protection for their investments via portability, open applications, and robust devices -- and complement this with back-end parallel processing standards, we have a chance of bringing the public on board.

2011-01-12

Not for Profit - why do we talk about money so much?

I'm concerned with an aspect of IEEE that has multiple negative impacts, IMHO -- our emphasis on money. When I see evaluations of our success, it is often measured in dollars: growth in revenue, competitors evaluated by dollar market share, etc. Since you tend to get what you measure, this puts much of the focus of the leadership on money.
Note that not-for-profits cannot survive if income does not match or exceed expenditures. IEEE is not running on some large endowment, so it must find revenue to cover the necessary expenses, and at times eliminate expenses (projects, products, services, even staffing) when this is not happening. However, expecting many if not all products, services, or even operating units to generate surpluses is not consistent with the not-for-profit objective, which means funding activities of benefit to the public and the profession that might not be surplus generating. In effect, we must subsidize some activities from surpluses drawn from other activities.
Here is the flip side of this problem: IEEE groups are hesitant to be the source of that subsidy (I have yet to find one that would refuse a subsidy). Every conference wants to run right to the wire ... and reinvest any surplus in low fees, student travel fellowships, scholarships, etc. These may be good things, but most are driven by a myopia about the nature of IEEE, seeking to maximize the benefit for their own community rather than give back to the rest of IEEE. Needless to say, this does not provide support for the next emerging conference, or publication, or whatever. Some of the same myopic perspective exists with sections and chapters -- it is less common with publications, where the centralized nature of publishing has tended to drive a more business-oriented line of thinking.
The same conflict exists between societies, where again surplus is a key metric. Societies do not want to be the source of surplus that subsidizes other societies (including emerging technology, high growth areas or areas of social responsibility). So annual budget review and the periodic "society reviews" put the pressure on surplus generation. All of this in a context where IEEE's overall surpluses are growing dramatically (thanks to the stock market).
Here's the real rub: I strongly suspect Dan Pink in his discussion on motivation (Book: Drive, see the related video at http://www.youtube.com/watch?v=u6XAPnuFjJc ) is correct ... money is a negative motivator, decreasing productivity once things rise above Maslow's "physical and security" levels. The real motivators are autonomy (self actualization), mastery (self esteem and the esteem of others) and purpose (which underlies all of these.) IEEE is filled with purpose, provides the essential community (belonging in Maslow's hierarchy) and great opportunities for mastery and autonomy --- but we let money get in our way.


2011-01-03

Better Mouse Trap

The term "Better mouse trap", referring to a useful invention, is often attributed to Thomas Edison in a phrase such as "If you build a better mouse trap the world will beat a path to your door." Extensive research (15 minutes on the web) fails to uncover Edison actually having made this pronouncement. However, a fairly good history of the concept is presented by American Heritage in Oct. 1996 (A Better Mouse Trap; Jack Hope; American Heritage, Oct. 1996) where the more accurate quotation is presented:
"I trust a good deal to common fame, as we all must. If a man has good corn, or wood, or boards, or pigs, to sell, or can make better chairs or knives, crucibles, or church organs, than anybody else, you will find a broad, hard-beaten road to his house, though it be in the woods."
Ralph Waldo Emerson's Journal, 1855

While Hope's article focuses on patents and mouse traps, which provide a delightful insight into the great engineering tradition of inventing things, it misses the main point of Emerson's insight. That point is captured in the term "common fame" ... which today we might call "reputation" -- a concept we too easily lose in our efforts at re-invention, even in the highest-impact potential creations.

I've pointed to the idea of "social capital", and who you know as being important. But perhaps more important in many ways is "who knows you?" Once you become known, for better or worse, you have a reputation. Folks will point others in your direction in areas where this is good (they will "beat a path to your door") and of course it can work the other way as well. So building a good reputation is fairly critical, and in today's world, your 'web profile' is essential -- but that is a topic for a future blog.

How do you build a reputation? .... Be visible! Consider how you gain credibility in the Open Source world .... be active in the review and FAQ interactions ... let folks see your capability and commitment, and then they will pay attention to your contributions. In the academic world this is accomplished by publications, peer review, and most critically conferences, where you meet and interact with other people in the field. In industry you need to take equal interest in your reputation, but the path is not as clearly understood. The same processes apply -- be visible in interacting with peers: online, at local events (IEEE chapter meetings), at conferences, etc. "Make a name for yourself" -- without it, a better mouse trap won't really get a path beaten to your door.