Is “automation addiction” a real problem?

Wednesday, August 31st, 2011 by Robert Cravotta

A recent AP article highlights a draft FAA study (I could not find a source link; please add one in the comments if you find it) that finds that pilots sometimes “abdicate too much responsibility to automated systems.” Despite all of the redundancies and fail-safes built into modern aircraft, a cascade of failures can overwhelm pilots who have only been trained to rely on the equipment.

The study examined 46 accidents and major incidents, 734 voluntary reports by pilots and others, as well as data from more than 9,000 flights in which a safety official rides in the cockpit to observe pilots in action. It found that in more than 60 percent of accidents, and 30 percent of major incidents, pilots had trouble manually flying the plane or made mistakes with automated flight controls.

A typical mistake was not recognizing that either the autopilot or the auto-throttle — which controls power to the engines — had disconnected. Others failed to take the proper steps to recover from a stall in flight or to monitor and maintain airspeed.

The investigation cites a fatal airline crash near Buffalo, New York, in 2009, in which the actions of the captain and co-pilot combined to cause an aerodynamic stall, and the plane crashed into the ground. Another crash two weeks later in Amsterdam involved the plane’s altimeters feeding incorrect information to the plane’s computers; the auto-throttle reduced speed such that the plane lost lift and stalled. The flight’s three pilots had not been closely monitoring the craft’s airspeed and experienced “automation surprise” when they discovered the plane was about to stall.

Crash investigators from France recently recommended that all pilots get mandatory training in manual flying and in handling a high-altitude stall. In May, the FAA proposed that pilots be trained on how to recover from a stall and be exposed to more realistic problem scenarios.

But other new regulations are going in the opposite direction. Today, pilots are required to use their autopilot when flying at altitudes above 24,000 feet, which is where airliners spend much of their time cruising. The required minimum vertical safety buffer between planes has been reduced from 2,000 feet to 1,000 feet. That means more planes flying closer together, necessitating the kind of precision flying more reliably produced by automation than human beings.

The same situation is increasingly common closer to the ground.

The FAA is moving from an air traffic control system based on radar technology to more precise GPS navigation. Instead of time-consuming, fuel-burning stair-step descents, planes will be able to glide in more steeply for landings with their engines idling. Aircraft will be able to land and take off closer together and more frequently, even in poor weather, because pilots will know the precise location of other aircraft and obstacles on the ground. Fewer planes will be diverted.

But the new landing procedures require pilots to cede even more control to automation.

These are some of the challenges the airline industry is facing as it relies on more automation. The benefits of using more automation are quite significant, but it also enables new kinds of catastrophic situations caused by human error.

The benefits of automation are not limited to aircraft. Automobiles are adopting more automation with each passing generation. Operating heavy machinery can also benefit from automation. Implementing automation in control systems enables more people with less skill and experience to operate those systems without necessarily knowing how to recover from anomalous operating conditions.

Is “automation addiction” a real problem, or is it a symptom of system engineering that has not completely addressed all of the system requirements? As automation moves into more application spaces, the more important it becomes to answer this question precisely. Where and how should the line be drawn for recovering from anomalous operating conditions, and how much of the responsibility should the control system shoulder versus the operator?

The Engineer: Marketing’s Secret Weapon

Monday, August 29th, 2011 by Rae Morrow and Bob Frostholm

For many engineers the most exciting part of developing a product is being able to say, “it’s done!” But this really isn’t the end of the cycle; getting the product to market is the developer’s next step. Projects that involve engineers in the marketing process reap added benefits. Technical teams exploring your product, without exception, are more open with their information when speaking with your engineers. By taking advantage of this “brotherhood of engineers” bond, designers can glean insights into future needs and requirements to gain an early leg up on their next-generation products.

Marketeers spend hours upon hours developing branding campaigns, datasheets, technical articles, ads, brochures, PowerPoint presentations, application notes, flashy web pages, and more to assist the sales force in winning a potential buyer’s trust and eventually their business. The quality of these tools is critical to the success of the front-line salesperson. When these tools are inadequate, creativity steps in, and all too often “winging it” results in some degree of lost credibility. We have all experienced the over-exuberance of a salesperson trying to embellish their product beyond the point of believability.

Creating dynamite sales tools requires forethought and planning. It begins with a thorough understanding of the product’s value propositions. Whether the product concept originates within the marketing department or the engineering department, it is the engineers who have to embed those values into the new product. Marketeers then need to extract those values from engineering in the form of features and benefits that can be translated easily into ‘sales-speak’. The information flow goes from engineer to marketer to salesperson to potential buyer.

There are dozens of different channels through which information is delivered to a potential buyer. It requires discipline to keep the message consistent across all of them.

Think back to your first-grade class, when on a rainy day the teacher gathered everyone in a circle, whispered something into the ear of the child to the right, and then asked them to do the same to the child to their right. By the time the information came full circle to the teacher, it barely resembled the original message. Today, information reaches a potential buyer through dozens of different channels. The critical technical information, that which can make or break a sale, originates in Engineering (see figure).

It is obvious how confusing the message can become when the buyer is barraged with dozens of interpretations. Some channels truncate the message to fit their format (how much can be communicated in 140 characters?) while others rewrite it and add interpretations and comparative analysis. In the end, the buyer does not know whom to believe.

There are several ways to ensure your company is delivering strong and consistent messaging. For some companies this means retaining a dedicated person in the marcom department with strong technical writing and organizational skills. Another solution is to work with a PR (public relations) firm that focuses on the electronics industry; its team can manage the timeliness of the communications flow and keep the messaging consistent within each channel as well.

When all the basics have been covered, it is then time to deploy the secret weapon: the engineer. Placing engineers face to face with potential buyers is becoming a more common occurrence. The buyer’s appetite for the product has been stimulated by marketing’s branding and product positioning. Properly executed, the positioning has resulted in several third-party endorsements that the buyer cannot refute.

Exposing the knowledge, skills, and expertise of the engineering team furthers the confidence of potential buyers in their decision-making process. Face to face does not necessarily mean flying the engineer halfway around the world for a one-hour meeting, although there are occasions where this may be necessary. Other equally effective techniques include:

  1. Publish “How To” articles, authored by the engineer-expert. Many times these are ghost written on the basis of inputs supplied by the engineer. A creative marketing effort will find many ways to promote and repurpose this content, whether in multiple regional markets or in different formats such as application notes, presentations, and Q&As.
  2. Host webinars that allow many potential buyers to simultaneously participate in a technical lecture or series of lectures provided by the engineer-expert. There is significant effort required for planning, promoting, and executing to ensure a qualified audience is in attendance.
  3. Publish “opinion” or “white” papers that address a general industry concern and offer pragmatic approaches to solutions; these demonstrate the engineer’s level of expertise.

While we often stereotype engineers as the ‘Dilberts’ or ‘Wallys’ of the world, they are in fact one of a company’s best assets in closing a sale. They deal in a world of facts and figures, and equations and laws of nature that do not change. They abhor vagueness and embrace truth. To the buyer, their word is golden. In the buyer’s mind, ‘they are one of us’.

It is difficult to avoid the adversarial nature of a sale. We’ve been taught that there is a winner and a loser in the transaction. Involving the engineer in the process can greatly lessen the tension and extract clearly the real value of the product to the buyer, yielding a win-win scenario.

Is testing always essential?

Wednesday, August 24th, 2011 by Robert Cravotta

This month’s audit of the Army’s armor inserts by the Pentagon’s inspector general finds that testing for the body armor ballistic inserts was not conducted consistently across 5 million inserts procured under seven contracts. According to the audit, the PM SEQ (Army Program Manager Soldier Equipment) did not conduct all of the required tests on two contracts because they had no protection performance concerns on those inserts. Additionally, the PM SEQ did not always use a consistent methodology for measuring the proper velocity or enforcing the humidity, temperature, weathering, and altitude requirements for the tests.

The audit also reports that the sampling process used did not provide a statistically representative sample for the LAT (lot acceptance test), so the results of the test cannot be relied on to project identified deficiencies to the entire lot. At this point, no additional testing was performed as part of the audit, so there is no conclusion on whether the ballistic performance of these inserts was adversely affected by the test and quality assurance methods that were applied.
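For illustration only (none of this comes from the audit itself), the classic zero-failure “success-run” relationship gives a feel for how quickly the required sample size grows when a lot acceptance test has to be statistically defensible. The reliability and confidence targets below are hypothetical, not the Army’s actual acceptance criteria.

```c
#include <math.h>
#include <stdio.h>

/* Zero-failure (success-run) sample size: the smallest n such that, if all n
 * samples pass, the lot reliability is at least R with confidence C.
 * Derived from R^n <= 1 - C, so n >= ln(1 - C) / ln(R). */
static int success_run_samples(double reliability, double confidence)
{
    return (int)ceil(log(1.0 - confidence) / log(reliability));
}

int main(void)
{
    /* Hypothetical targets: demonstrate 90% reliability at 90% confidence. */
    printf("samples needed: %d\n", success_run_samples(0.90, 0.90)); /* about 22 */
    /* Tighter targets grow quickly: 99% reliability at 95% confidence. */
    printf("samples needed: %d\n", success_run_samples(0.99, 0.95)); /* about 299 */
    return 0;
}
```

The point of the sketch is simply that “a handful of inserts per lot” and “a statistically representative sample” can be very different numbers, which is why the audit flags the sampling methodology rather than the test results themselves.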

Tests on two lots of recalled inserts so far have found that all of them met “the maximum level of protection specified for threats in combat” according to Matthew Hickman, an Army spokesman. Another spokesman released a statement that “The body armor in use today is performing as it was intended. We are continuing to research our data and as of now have not found a single instance where a soldier has been wounded due to faulty body armor.”

This audit highlights a situation that can impact any product that experiences a significant increase in demand coupled with time sensitivity for availability of that product. High profile examples in the consumer electronics space include game consoles and smart phones. Some of these products underwent recalls or aftermarket fixes. However, similar to the recalled inserts that are passing additional testing, sometimes a product that has not undergone complete testing can still meet all of the performance requirements.

Is every test you can perform essential to run every time? Is it ever appropriate to skip a test because “there are no performance concerns”? Do you use a process for modifying or eliminating tests that might otherwise disproportionately affect the product’s pricing or availability without a significant offsetting benefit? Is the testing phase of a project an area ripe for optimization, or is it an area where we can never do enough?

How does your company handle test failures?

Wednesday, August 17th, 2011 by Robert Cravotta

For many years, most of the projects I worked on were systems that had never been built before in any shape or form. As a consequence, many of the iterations of each of these projects included significant and sometimes spectacular failures as we moved closer to a system that could perform its tasks successfully in an increasingly wider circle of environmental conditions. These path-finding designs needed to operate in a hostile environment (low earth orbit), and they needed to make decisions autonomously, as there was no way to guarantee that instructions could come from a central location in a timely fashion.

The complete units themselves were unique prototypes with no more than two iterations in existence at a time. It would take several months to build each unit and develop the procedures by which we would stress and test what the unit could do. The testing process took many more months as the system integration team moved through ground-based testing and eventually moved on to space-based testing. A necessary cost of deploying a unit was losing it when it reentered the Earth’s atmosphere, but a primary goal for each stage of testing was to collect as much data as possible from the unit until it was no longer able to operate and/or transmit telemetry about its internal state of health.

During each stage of testing, the unit was placed into an environment that would minimize the amount of damage the unit would physically be subjected to (such as operating the unit within a netted room that would prevent the unit from crashing into the floor, walls, or ceiling). The preparation work for each formal test consisted of weeks of refining all of the details in a written test procedure that fortyish people would follow exactly. Any deviation during the final test run would flag a possible abort of the run.

Despite all of these precautions, sometimes things just did not behave the way the team expected. In each failure case, it was essential that the post mortem team be able to explicitly identify what went wrong and why so that future iterations of the unit would not repeat those failures. Because we were learning how to build a completely autonomous system that had to properly react to a range of uncertain environmental conditions, it could sometimes take a significant effort to identify root causes for failures.

Surprisingly, it also took a lot of effort to prove that the system did not experience any failures that we were not able to identify by simple observation during operation. It took a team of people days of analyzing the telemetry data to determine whether the interactions between the various subsystems had behaved correctly or had merely coincidentally behaved in an expected fashion during the test run.

The company knew we were going to experience many failures during this process, but the pressure was always present to produce a system that worked flawlessly. However, when the difference between a flawless operation and one that experienced a subtle, but potentially catastrophic anomaly rests on nuanced interpretation of the telemetry data, it is essential that the development team is not afraid to identify possible anomalies and follow them up with robust analysis.

In this project, a series of failures was the norm, but for how many projects is a sequence of system failures acceptable? Do you feel comfortable raising a flag for potential problems in a design or test run? Does how your company handles failure affect what threshold you apply to searching for anomalies and teasing out true root causes? Or is it safer to search a little less diligently and let said anomalies slip through and be discovered later when you might not be on the project anymore? How does your company handle failures?

How much trial and error do you rely on in designs?

Wednesday, August 10th, 2011 by Robert Cravotta

My wife and I have been watching a number of old television series via DVD and video streaming services. We have both noticed (in a distressing way) a common theme among the shows that purport to have a major character who happens to be a scientist – the scientists know more than any reasonable person would, they accomplish tasks quicker than anyone (or a team of a thousand people) reasonably could, and they make the proper leaps of logic in one or two iterations. While these may be useful mechanisms to keep a 20- to 40-minute story moving along, they in no way reflect our experience in the real engineering world.

Tim Harford’s recent TED talk addresses trial and error as a successful mechanism for creating complex systems and how it differs from design approaches built around a God complex. The talk resonates with my experience and echoes a statement I have floated a few times over the years in a different form. The few times I have suggested that engineering is a discipline of best guesses, I have drawn some vigorous dissent. Those people offering the most dissent claim that given a complete set of requirements, they can provide an optimum engineering design to meet those requirements. But my statement refers not just to the process of choosing how to solve a requirement specification, but also to making the specifications in the first place. Most systems that must operate in the real world are just too complex for a specification to completely describe the requirements in a single iteration – there is a need for some trial and error to discover what is more or less important for the specification.

In the talk, Tim provides an industrial example regarding the manufacturing of powdered detergent. The process of making the powder involves pumping a fluid, under high pressure, through a nozzle that distributes the fluid in such a way that, as the water evaporates from the sprayed fluid, a powder with specific properties lands in a pile to be boxed up and shipped to stores for end users to purchase. The company in this example originally tried an explicit design approach that reflects a God-complex mode of design: it hired an expert to design the nozzle. Apparently the results were unsatisfactory; however, the company was eventually able to come up with a satisfactory nozzle by using a trial and error method. The designers created ten random nozzle designs and tested them all. They chose the nozzle that performed the best and created ten new variations based on that “winning” nozzle. The company performed this iterative process 45 times and was able to create a nozzle that performed its function well. The nozzle performs well, but the process that produced it did not require any understanding of why it works.
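A minimal sketch of that variation-and-select loop is shown below. The score() function is a hypothetical stand-in for a real spray test, and the parameter count, mutation step, and 45-generation limit simply mirror the numbers in the story rather than anything Harford or the company specified.

```c
#include <stdio.h>
#include <stdlib.h>

#define NUM_PARAMS   8    /* hypothetical nozzle geometry parameters */
#define POPULATION  10    /* ten candidates per generation, as in the story */
#define GENERATIONS 45    /* the company iterated roughly 45 times */

typedef struct { double p[NUM_PARAMS]; } design_t;

/* Stand-in fitness: in real life this is a physical spray test. Here we just
 * reward parameter values near an arbitrary target. */
static double score(const design_t *d)
{
    double s = 0.0;
    for (int i = 0; i < NUM_PARAMS; i++) {
        double err = d->p[i] - 0.7;
        s -= err * err;
    }
    return s;
}

static double rand01(void) { return rand() / (double)RAND_MAX; }

/* Copy the parent design and perturb each parameter slightly. */
static design_t mutate(const design_t *parent)
{
    design_t child = *parent;
    for (int i = 0; i < NUM_PARAMS; i++)
        child.p[i] += 0.1 * (rand01() - 0.5);
    return child;
}

int main(void)
{
    design_t best;
    for (int i = 0; i < NUM_PARAMS; i++) best.p[i] = rand01();

    for (int gen = 0; gen < GENERATIONS; gen++) {
        design_t candidates[POPULATION];
        /* Ten variations of the current winner... */
        for (int c = 0; c < POPULATION; c++)
            candidates[c] = mutate(&best);
        /* ...keep whichever tests best, never needing to know *why* it works. */
        for (int c = 0; c < POPULATION; c++)
            if (score(&candidates[c]) > score(&best))
                best = candidates[c];
        printf("generation %2d: score %.5f\n", gen + 1, score(&best));
    }
    return 0;
}
```

The interesting design choice is that all of the engineering insight lives in the test (the score), not in the candidate generator; the loop converges on a good design without ever building a model of why it is good.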

Over the years, I have heard many stories about how using a similar process yielded a superior solution to a problem than an explicit design approach. Do you use a trial and error approach in your designs? Do you introduce variations in a design, down select the variations based on measured performance, and repeat this process until the level of improvement suggests you are close enough to an optimum configuration? I suspect more people do use a variation and select process of trial and error; however, I am not aware of many tools that facilitate this type of approach. What are your thoughts and experiences on this?

What is driving lower data center energy use?

Wednesday, August 3rd, 2011 by Robert Cravotta

A recently released report from a consulting professor at Stanford University finds that the growth in electricity use in data centers over the years 2005 to 2010 was significantly lower than the expected doubling based on the growth rate of data centers from 2000 to 2005. Based on the estimates in an earlier report on electricity usage by data centers, worldwide data center electricity usage increased by only about 56% over the period from 2005 to 2010 instead of the expected doubling. In contrast, data center electricity use in the United States increased by 36%.
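To put those cumulative figures on a comparable footing, it helps to convert them to rough annualized growth rates over the five-year span. The snippet below is just back-of-the-envelope arithmetic on the percentages quoted above, not anything taken from the report itself.

```c
#include <math.h>
#include <stdio.h>

/* Convert cumulative growth over n years into an equivalent annual rate. */
static double annualized(double cumulative_growth, double years)
{
    return pow(1.0 + cumulative_growth, 1.0 / years) - 1.0;
}

int main(void)
{
    /* Figures quoted above for 2005-2010 (5 years). */
    printf("worldwide, +56%%: ~%.1f%% per year\n", 100 * annualized(0.56, 5)); /* ~9.3  */
    printf("US, +36%%:        ~%.1f%% per year\n", 100 * annualized(0.36, 5)); /* ~6.3  */
    printf("doubling, +100%%: ~%.1f%% per year\n", 100 * annualized(1.00, 5)); /* ~14.9 */
    return 0;
}
```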

Based on estimates of the installed base of data center servers for 2010, the report points out that the growth in installed volume servers slowed substantially over the 2005 to 2010 period, growing about 20% in the United States and 33% worldwide. The installed base of mid-range servers fell faster than the 2007 projections, while the installed base of high-end servers grew rapidly instead of declining per the projections. While Google’s data centers could not be included in the estimates (because Google assembles its own custom servers), the report estimates that Google’s data centers account for less than 1% of electricity used by data centers worldwide.

The author suggests the lower energy use is due to the impacts of the 2008 economic crisis and improvements in data center efficiency. While I agree that improving data center efficiency is an important factor, I wonder whether the 2008 economic crisis has a first- or second-order effect on the electricity use of data centers. Did a dip in the growth rate for data services cause the drop in the rate of new server installs, or is the market converging on the optimum ratio of servers to services?

My data service costs are lower than they have ever been before – although I suspect we are flirting with a local minimum in data service costs, as it has been harder to renew or maintain discounts for these services this year. I suspect my perceived price inflection point is the result of service capacities finally reflecting service usage. The days of huge excess capacity for data services are fading fast, and service providers may no longer need to sell those services below market rate to gain users of that excess capacity. The migration from all-you-can-eat data plans to tiered or throttled accounts may also be an indication that the excess capacity of data services is finally being consumed.

If the lower than expected energy use of data centers is caused by the economic crisis, will energy use spike back up once we are completely out of the crisis? Is the lower than expected energy use due more to the market converging on the optimum ratio of servers to services – and if so, does the economic crisis materially affect energy use during and after the crisis?

One thing this report was not able to do was ascertain how much work was being performed per unit of energy. I suspect the lower than expected energy use is analogous to the change in manufacturing within the United States, where productivity continues to soar despite significant drops in the number of people actually performing manufacturing work. While counting the number of installed servers is relatively straightforward, determining how the efficiency of their workload is changing is a much tougher beast to tackle. What do you think is the first-order effect that is slowing the growth rate of energy consumption in data centers?

Travelling the Road of Natural Interfaces

Thursday, July 28th, 2011 by Robert Cravotta

The forms for interfacing between humans and machines are constantly evolving, and the creation rate of new forms of human-machine interfacing seems to be increasing. Long gone are the days of using punch cards and a card reader to tell a computer what to do. Most contemporary users are unaware of what a command line prompt and optional argument are. Contemporary touch, gesture, stylus, and spoken language interfaces threaten to make the traditional hand-shaped mouse a quaint and obsolete idea.

The road from idea, to experimental implementations, to production forms for human interfaces usually spans many attempts over years. For example, the first computer mouse prototype was made by Douglas Engelbart, with the assistance of Bill English, at the Stanford Research Institute in 1963. The computer mouse became a public term and concept around 1965 when it was associated with a pointing device in Bill English’s publication of “Computer-Aided Display Control.” Even though the mouse had been available as a pointing device for decades, it only became ubiquitous with the release of Microsoft Windows 95. The sensing mechanisms for the mouse pointer evolved through mechanical methods using wheels or balls to detect when and how the user moved the mouse. The mechanical methods have been widely replaced with optical implementations based around LEDs and lasers.

3D pointing devices started to appear in the market in the early 1990s, and they have continued to evolve and grow in their usefulness. 3D pointing devices provide positional data along at least 3 axes, with contemporary devices often supporting 6 degrees of freedom (3 positional and 3 angular axes). Newer 9-degrees-of-freedom sensors (the additional 3 axes are magnetic compass axes), such as those from Atmel, are approaching integration levels and price points that practically ensure they will find their way into future pointing devices. Additional measures of sensitivity for these types of devices may include temperature and pressure sensors. 3D pointing devices like Nintendo’s Wii remote combine spatial and inertial sensors with vision sensing in the infrared spectrum that relies on a light bar with two infrared light sources spaced at a known distance from each other.
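As a rough sketch of what such a device reports, the structure below groups a 9-degrees-of-freedom sample and derives roll and pitch from the accelerometer alone. Axis conventions, units, and field names vary by part, so treat everything here as illustrative rather than tied to any particular sensor.

```c
#include <math.h>
#include <stdio.h>

/* One 9-DOF sample: 3 accelerometer, 3 gyroscope, and 3 magnetometer axes. */
typedef struct {
    double ax, ay, az;   /* acceleration, in g            */
    double gx, gy, gz;   /* angular rate, in deg/s        */
    double mx, my, mz;   /* magnetic field, relative units */
} ninedof_sample_t;

static const double RAD2DEG = 57.29577951308232;

/* Static tilt estimated from the accelerometer alone (only valid while the
 * device is not accelerating); a real design would fuse the gyroscope and
 * magnetometer readings as well. */
static void tilt_from_accel(const ninedof_sample_t *s,
                            double *roll_deg, double *pitch_deg)
{
    *roll_deg  = atan2(s->ay, s->az) * RAD2DEG;
    *pitch_deg = atan2(-s->ax, sqrt(s->ay * s->ay + s->az * s->az)) * RAD2DEG;
}

int main(void)
{
    ninedof_sample_t s = { 0.17, 0.0, 0.98, 0, 0, 0, 0.2, 0.0, 0.4 };  /* made-up values */
    double roll, pitch;
    tilt_from_accel(&s, &roll, &pitch);
    printf("roll %.1f deg, pitch %.1f deg\n", roll, pitch);
    return 0;
}
```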

Touch Interfaces

The release of Apple’s iPhone marked the tipping point for touch screen interfaces. However, the IBM Simon smartphone predates the iPhone by nearly 14 years, and it sported similar, even if primitive, support for a touchscreen interface. Like many early versions of human-machine interfaces that are released before the tipping point of market acceptance, the Simon did not enjoy the same market wide adoption as the iPhone.

Touchscreen interfaces span a variety of technologies including capacitive, resistive, inductive, and visual sensing. Capacitive touch sensing technologies, along with the software necessary to support them, are offered by many semiconductor companies; the capacitive touch market has not yet undergone the culling that so many other technologies experience as they mature. Resistive touch sensing technology has been in production use for decades, and many semiconductor companies still offer resistive touch solutions; there are still opportunities for resistive technologies to stay competitive with capacitive touch by harnessing larger and more expensive processors to deliver better signal-to-noise performance. Vision-based touch sensing is still a relatively young technology that exists in higher-end implementations, such as the Microsoft Surface, but as the price of the sensors and compute performance needed for vision-based sensing continues to drop, it may move into direct competition with the aforementioned touch sensing technologies.

Touch interfaces have evolved from the simple drop, lift, drag, and tap model of touch pads to supporting complex multi-touch gestures such as pinch, swipe, and rotate. However, the number and types of gestures that touch interface systems can support will explode in the near future as touch solutions continue to ride Moore’s law and push more compute processing and larger gesture databases into the system for negligible additional cost and energy consumption. In addition to gestures that touch a surface, touch commands are beginning to incorporate proximity, or hovering, processing for capacitive touch.

Examples of these expanded gestures include using more than two touch points, such as placing multiple fingers from one or both hands on the touch surface and performing a personalized motion. A gesture can consist of nearly any repeatable motion, including time-sensitive swipes and pauses, and it can be tailored to each individual user. As the market moves closer to a cloud computing and storage model, this type of individual tailoring becomes even more valuable because the cloud will enable users to untether themselves from a specific device and access their personal gesture database on many different devices.
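One simple way to support such personalized gestures, sketched below, is to resample a captured touch path to a fixed number of points and compare it against stored templates by average point-to-point distance. This is a toy version of template matching, not any particular vendor’s gesture engine; the point count and acceptance threshold are arbitrary.

```c
#include <math.h>
#include <stdio.h>

#define GESTURE_POINTS 16   /* assume each path is already resampled to 16 points */

typedef struct { double x, y; } point_t;
typedef struct { const char *name; point_t pts[GESTURE_POINTS]; } gesture_t;

/* Normalize in place: move the centroid to the origin and scale so the farthest
 * point is at distance 1. This makes matching position- and size-independent
 * (but not rotation-independent). */
static void normalize(point_t *p)
{
    double cx = 0, cy = 0, maxr = 1e-9;
    for (int i = 0; i < GESTURE_POINTS; i++) { cx += p[i].x; cy += p[i].y; }
    cx /= GESTURE_POINTS; cy /= GESTURE_POINTS;
    for (int i = 0; i < GESTURE_POINTS; i++) {
        p[i].x -= cx; p[i].y -= cy;
        double r = hypot(p[i].x, p[i].y);
        if (r > maxr) maxr = r;
    }
    for (int i = 0; i < GESTURE_POINTS; i++) { p[i].x /= maxr; p[i].y /= maxr; }
}

/* Average point-to-point distance between two normalized paths. */
static double path_distance(const point_t *a, const point_t *b)
{
    double d = 0;
    for (int i = 0; i < GESTURE_POINTS; i++)
        d += hypot(a[i].x - b[i].x, a[i].y - b[i].y);
    return d / GESTURE_POINTS;
}

/* Return the name of the best-matching template, or NULL if nothing is close
 * enough. Templates are assumed to be stored already normalized. */
const char *match_gesture(point_t *captured, const gesture_t *db, int n)
{
    const double threshold = 0.25;   /* arbitrary acceptance threshold */
    const char *best = NULL;
    double best_d = threshold;
    normalize(captured);
    for (int i = 0; i < n; i++) {
        double d = path_distance(captured, db[i].pts);
        if (d < best_d) { best_d = d; best = db[i].name; }
    }
    return best;
}
```

A per-user gesture database would then simply be an array of gesture_t records loaded from local storage or the cloud, with match_gesture() run against that user’s own templates.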

Feedback latency to the user is an important measurement and a strong limiter on the adoption rate of expanded human interface options that include more complex gestures and/or speech processing. A latency target of about 100ms has been the basic advice for user interface feedback for decades (Miller, 1968; Myers, 1985; Card et al., 1991); however, according to the Nokia Forum, for tactile responses the latency should be kept under 20ms or the user will start to notice the delay between a user interface event and the feedback. Staying within these response time limits affects how complicated a gesture a system can handle while still providing satisfactory response times to the user. Some touch sensing systems can handle single touch events satisfactorily but can, under the right circumstances, cross the latency threshold and become inadequate for handling two-touch gestures.

Haptic feedback provides a tactile sensation, such as a slight vibration, to give the user immediate acknowledgement that the system has registered an event. This type of feedback is useful in noisy environments where a sound or beep is insufficient, and it can allow the user to operate the device without relying on visual feedback. An example is when a user taps a button on the touch screen and the system signals the tap with a vibration. The forum goes on to recommend that tactile feedback be short (less than 50ms) and not exaggerated, so as to keep the sensations pleasant and meaningful to the user. Vibrating the system too much or too often makes the feedback meaningless to the user and risks draining any batteries in the system. Tactile feedback should also be coupled with visual feedback.
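A hedged sketch of how those numbers might be encoded as a feedback budget in firmware follows. The thresholds come straight from the guidance cited above, while the event structure and the timestamps are hypothetical placeholders for whatever the input and actuator subsystems actually report.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Response-time guidance cited above (all in milliseconds). */
#define UI_FEEDBACK_BUDGET_MS   100  /* classic ~100 ms rule for UI feedback   */
#define TACTILE_START_BUDGET_MS  20  /* tactile response should begin by 20 ms */
#define TACTILE_PULSE_MAX_MS     50  /* keep the vibration itself under 50 ms  */

/* Hypothetical timestamps captured around one touch event. */
typedef struct {
    uint32_t touch_event_ms;     /* when the touch controller reported the event */
    uint32_t haptic_start_ms;    /* when the actuator began vibrating            */
    uint32_t haptic_stop_ms;     /* when the actuator stopped                    */
    uint32_t visual_update_ms;   /* when the display reflected the event         */
} feedback_times_t;

static bool feedback_within_budget(const feedback_times_t *t)
{
    bool ok = true;
    if (t->haptic_start_ms - t->touch_event_ms > TACTILE_START_BUDGET_MS) {
        printf("tactile feedback started too late\n");
        ok = false;
    }
    if (t->haptic_stop_ms - t->haptic_start_ms > TACTILE_PULSE_MAX_MS) {
        printf("vibration pulse too long\n");
        ok = false;
    }
    if (t->visual_update_ms - t->touch_event_ms > UI_FEEDBACK_BUDGET_MS) {
        printf("visual feedback missed the ~100 ms budget\n");
        ok = false;
    }
    return ok;
}

int main(void)
{
    feedback_times_t sample = { 0, 12, 45, 80 };   /* made-up measurements */
    printf("within budget: %s\n", feedback_within_budget(&sample) ? "yes" : "no");
    return 0;
}
```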

Emerging Interface Options

An emerging tactile feedback approach involves simulating texture on the user’s fingertip (Figure 1). Tesla Touch is currently demonstrating this technology, which does not rely on the mechanical actuators typically used in haptic feedback approaches. The technology simulates textures by applying and modulating a periodic electrostatic charge across the touch surface. By varying the sign (and possibly magnitude) of the charge, the electrons in the user’s fingertip are drawn towards or away from the surface – effectively creating a change in friction on the touch surface. Current prototypes are able to use signals as low as 8V to generate tactile sensations. No electric charge passes through the user.

Pranav Mistry at the Fluid Interfaces Group | MIT Media Lab has demonstrated a wearable gesture interface setup that combines digital information with the physical world through hand gestures and a camera sensor. The project is built with commercially available parts consisting of a pocket projector, a mirror, and a camera. The current prototype system costs approximately $350 to build. The projector projects visual information on surfaces, such as walls and physical objects within the immediate environment. The camera tracks the user’s hand gestures and physical objects. The software processes the camera video stream and tracks the locations of the colored markers at the tips of the user’s fingers. Interpreted hand gestures act as commands for the projector and digital information interfaces.

Another researcher/designer is Fabian Hemmert, whose projects explore emerging haptic feedback techniques including shape-changing and weight-shifting devices. His latest public projects include adding friction to a touch screen stylus; the friction works through the stylus rather than through the user’s fingers as in the Tesla Touch approach. The thought is that this reflective tactile feedback can prioritize displayed information, provide inherent confirmation of a selection by making the movement of the stylus heavier or lighter, and take advantage of the manual dexterity of the user by providing friction that is similar to writing on a surface – something the user is already familiar with.

The Human Media Lab recently unveiled and is demonstrating a “paper bending” interface that takes advantage of E Ink’s flexible display technology (Figure 2). The research team suggests that bending a display, such as to page forward, shows promise as an interaction mechanism. The team identified six simple bend gestures, out of 87 possible, that users preferred, based around bending forward or backward at two corners or the outside edge of the display. The research team identifies potential uses for bend gestures when the user is wearing gloves that inhibit interacting with a touch screen. Bend gestures may also prove useful to users who have motor skill limitations that inhibit the use of other input mechanisms. Bend gestures may be useful as a means to engage a device without requiring visual confirmation of an action.

In addition to supporting commands that are issued via bending the display, the approach allows a single display to operate in multiple modes. The Snaplet project is a paper computer that can act as a watch and media player when wrapped like a bracelet on the user’s arm. It can function as a PDA with notepad functionality when held flat, and it can operate as a phone when held in a concave shape. The demonstrated paper computer can accept, recognize, and process touch, stylus, and bend gestures.

If the experiences of the computer mouse and touch screens are any indication of what these emerging interface technologies are in for, there will be a number of iterations for each of these approaches before they evolve into something else or happen upon the proper mix of technology, low cost and low power parts, sufficient command expression, and acceptable feedback latency to hit the tipping point of market adoption.

What tools do you use to program multiple processor cores?

Wednesday, July 27th, 2011 by Robert Cravotta

Developers have been designing and building multi-processor systems for decades. New multicore processors are entering the market on a regular basis. However, it seems that the market for new development tools that help designers analyze, specify, code, test, and maintain software targeting multi-processor systems is lagging further and further behind the hardware offerings.

A key function of development tools is to help abstract the complexity that developers must deal with to build the systems they are working on. The humble assembler abstracted the zeros and ones of machine code into more easily remembered mnemonics that enabled developers to build larger and more complex programs. Likewise, compilers have been evolving to provide yet another important level of abstraction for programmers and have all but replaced the use of assemblers for the vast majority of software projects. A key value of operating systems is that they abstract the configuration, access, and scheduling of the increasing number of hardware resources available in a system from the developer.

If multicore and multi-processor designs are to experience an explosion in use in the embedded and computing markets, it seems that development tools should provide more abstractions to simplify the complexity of building with these significantly more complex processor configurations.

In general, programming languages do not understand the concept of concurrency, and the extensions that do exist usually require the developer to explicitly identify where and when such concurrency exists. Developing software as a set of threads is an approach for abstracting concurrency; however, it is not clear how a threading design method will be able to scale as systems approach ever larger numbers of cores within a single system. How do you design a system with enough threads to occupy more than a thousand cores – or is that the right question?
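The sketch below shows just how explicit that identification of concurrency typically is in C with POSIX threads: the developer decides how to split the data, creates the threads, and joins them by hand. The four-way split and array size are arbitrary choices for illustration, not a recommendation.

```c
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4            /* the developer picks the decomposition */

static double data[N];

typedef struct { int begin, end; double partial; } chunk_t;

/* Each thread sums its explicitly assigned slice of the array. */
static void *sum_chunk(void *arg)
{
    chunk_t *c = arg;
    c->partial = 0.0;
    for (int i = c->begin; i < c->end; i++)
        c->partial += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    pthread_t threads[NTHREADS];
    chunk_t chunks[NTHREADS];

    /* The concurrency is identified by hand: slice the array into NTHREADS parts. */
    for (int t = 0; t < NTHREADS; t++) {
        chunks[t].begin = t * (N / NTHREADS);
        chunks[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * (N / NTHREADS);
        pthread_create(&threads[t], NULL, sum_chunk, &chunks[t]);
    }

    /* ...and the synchronization is managed by hand as well. */
    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += chunks[t].partial;
    }
    printf("total = %.0f\n", total);
    return 0;
}
```

Nothing in this code scales automatically to a thousand cores; changing the decomposition means changing the code, which is exactly the abstraction gap the paragraph above describes.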

What tools do you use when programming a multicore or multi-processor system? Does your choice of programming language and compiler reduce your complexity in such designs or does it require you to actively engage more complexity by explicitly identifying areas for parallelism? Do your debugging tools provide you with adequate visibility and control of a multicore/multi-processor system to be able to understand what is going on within the system without requiring you to spend ever more time at the debugging bench with each new design? Does using a hypervisor help you, and if so, what are the most important functions you look for in a hypervisor?

Will flying cars start showing up on the road?

Wednesday, July 20th, 2011 by Robert Cravotta

People have been dreaming of flying cars for decades. The Aerocar made its first flight in 1949; however, it never entered production. The Terrafugia Transition recently passed a significant milestone when it was cleared for takeoff by the U.S. National Highway Traffic Safety Administration. Does this mean flying cars will soon start appearing on the roads? To clarify, these vehicles are not flying cars so much as they are roadable light sport aircraft – in essence, they are aircraft that could be considered legal to drive on the streets. The approximately $230,000 price tag is also more indicative of an aircraft than an automobile.

The Transition incorporates automotive safety features such as a purpose-built energy absorbing crumple zone, a rigid carbon fiber occupant safety cage, and automotive-style driver and passenger airbags. According to the company, the Transition can take off or land at any public use general aviation airport with at least 2,500′ of runway. On the ground, the Transition can be driven on any road and parked in a standard parking space or household garage. The wings can fold and be stowed vertically on the sides of the vehicle in less than 30 seconds. Pilots will need a Sport Pilot license to fly the vehicle, which requires a minimum of 20 hours of flight time and passing a simple practical test in the aircraft. Drivers will also need a valid driver’s license for use on the ground.

So what makes this vehicle different from the many earlier, unsuccessful attempts at bringing a flying car or roadable aircraft to market? In addition to relying on modern engines and composite materials, this vehicle benefits from computer-based avionics. Are modern embedded systems sufficiently advanced and powerful to finally push the dream of a roadable aircraft into reality within the next few years? Or will such a dual-mode vehicle make more sense only after automobiles are better able to drive themselves around on the ground? While the $230,000 price tag will limit how many people can gain access to one of these vehicles (if they make it to production), I wonder if aircraft flying into homes will become an issue. Is this just another pipe dream, or are things different enough this time around that such a vehicle may start appearing on our roads?

Will the Internet become obsolete?

Wednesday, July 13th, 2011 by Robert Cravotta

I saw an interesting question posed in a video the other day: “How much money would someone have to pay you to give up the internet for the rest of your life?” A professor in the video points out the huge gap between the value of using the Internet and the cost to use it. An implied assumption in the question is that the Internet will remain relevant throughout your entire lifetime, but the more I thought about the question, the more I began to wonder if that assumption is reasonable.

While there are many new technologies, devices, and services available today that did not exist a few decades ago, there is no guarantee that any of them will exist a few decades hence. I recently discovered a company that makes custom tables, and their comment on not integrating technology into their table designs illustrates an important point.

“We are determined to give you a table that will withstand the test of time. For example, if you wanted a music player in your table in the 1970s, you wanted an 8-track tape deck, 1980s a cassette tape deck, 1990s a CD player, 2000s an iPod docking station, 2010s a streaming device, and 2020s small spike that you impale into the listener’s tympanic bone, which is now the only way to listen to music, rendering the installation of any of the previous technology a useless scar upon your beautiful table. (No, we don’t actually know if that last one is where music is heading, but if it does, you heard it here first.) The same goes for laptop electrical cords. We can install attachments to deal with power cords, but at the rate battery technology is changing, like your cellular phone or mp3 player, you may just have a docking station you set it on at night, rendering the need for cords obsolete.”

I have seen a number of electronic technologies disappear from my own home and work office over the past few years. When I first set up a home office, I needed a fax machine and a dedicated phone line for it. Both are gone today. I watched as my VHS tape collection became worthless, and as a result my DVD collection is a bit more modest – thank goodness, because now I hardly ever watch DVDs anymore since I can stream almost anything I want to watch on demand. While we still have the expensive and beautiful cameras my wife and I bought, we never use them because some of our devices with integrated digital cameras are of good enough quality, much easier to use, and much cheaper to use. My children would rather text their friends than actually talk to each other.

So, will the Internet become obsolete in a few decades time as something with more or better functions and is cheaper and easier to use replaces it? I am not sure because the Internet seems to embody a different concept than all of those other technologies that have become obsolete. The Internet is not tied to a specific technology, form factor, access method, or function other than connecting computing devices together.

In a sense, the Internet may be the ultimate embedded system because nearly everyone that uses it does not care about how it is implemented. Abstracting the function of connecting two sites from the underlying technology may allow the Internet to avoid becoming obsolete and being replaced. Or does it? Some smartphones differentiate themselves by how they access the Internet – 3G or 4G. Those smartphones will definitely become obsolete in a few years because the underlying technology of the Internet will definitely keep changing.

Will the Internet be replaced by something else? If so, what is your guess as to what will replace it? If not, how will it evolve to encompass the new functions that currently do not exist? As more people and devices attach to the Internet, will it make sense to have separate infrastructures to support data for human and machine consumption?