Articles by Robert Cravotta

As a former Technical Editor covering Embedded Processing at EDN, Robert has been following and commenting on the embedded processing space since 2001 (see article index). His expertise includes software development and system design using microprocessors, microcontrollers, digital signal processors (DSPs), multiprocessor architectures, processor fabrics, coprocessors, and accelerators, plus embedded cores in FPGAs, SOCs, and ASICs. Robert's embedded engineering background includes 16 years as a Member of the Technical Staff at Boeing and Rockwell International working on path-finding avionics, power and laser control systems, autonomous vehicles, and vision sensing systems.

What is important when looking at a processor’s low power modes?

Wednesday, June 1st, 2011 by Robert Cravotta

Low power operation is an increasingly important capability of embedded processors, and many processors support multiple low power modes to enable developers to accomplish more with less energy. While low power modes differ from processor to processor, each mode enables a system to operate at a lower power level either by running the processor at lower clock rates and voltages or by removing power from selected parts of the processor, such as specific peripherals, the main processor core, and memory spaces.

An important characteristic of a low power or sleep mode is the current draw while the system is operating in that mode. However, evaluating and comparing the current draw between low power modes on different processors requires you to look at more than just the current draw to perform an apples-to-apples comparison. For example, the time it takes the system to wake up from a given mode can disqualify a processor from consideration in a design. The time it takes a system to wake up depends on such factors as the settling time for the clock source and for the analog blocks. Some architectures offer multiple clock sources to allow a system to perform work at a slower rate while the faster clock source is still settling – further complicating comparisons of wake-up times between processors.
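
As a hedged illustration of that multiple-clock-source approach, the sketch below performs low-rate work from a slow internal clock while a hypothetical fast oscillator settles, then switches over. The register block, bit meanings, and simulated settling behavior are all assumptions rather than any particular vendor's interface.

```c
/* Hypothetical sketch: doing useful work from a slow clock while a faster
 * clock source settles. The register block, bit meanings, and settling
 * behavior are invented for illustration and simulated so the code runs
 * on a host. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    uint32_t osc_ctrl;    /* bit 0: enable the fast oscillator               */
    uint32_t osc_status;  /* bit 0: fast oscillator has settled              */
    uint32_t clk_select;  /* 0 = slow internal RC clock, 1 = fast oscillator */
} clock_ctrl_t;

static clock_ctrl_t CLK;            /* stands in for a memory-mapped peripheral */
static int settle_countdown = 100;  /* simulates the oscillator settling time   */

static bool fast_clock_ready(void)
{
    /* On real hardware the oscillator sets this bit itself; the countdown
     * just lets the sketch terminate when run on a host. */
    if (settle_countdown > 0 && --settle_countdown == 0)
        CLK.osc_status |= 1u;
    return (CLK.osc_status & 1u) != 0u;
}

int main(void)
{
    CLK.osc_ctrl |= 1u;             /* kick off the fast oscillator */

    while (!fast_clock_ready()) {
        /* Still running from the slow RC clock: low-rate work that cannot
         * wait for the clock to settle would be serviced here. */
    }

    CLK.clk_select = 1u;            /* switch over once the fast clock settles */
    printf("switched to fast clock (select=%u)\n", (unsigned)CLK.clk_select);
    return 0;
}
```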

Another differentiator for low power modes is the level of granularity they support – whether the developer can turn individual peripherals or coprocessors on and off, or only entire blocks of them. Some low power modes remove power from the main processor core and leave an autonomous peripheral controller operating to manage and perform data collection and storage. Low power modes can also differ in which circuits they leave running, such as brown-out detection, whether the contents of RAM or registers are preserved, and whether the real-time clock remains active. The architectural decisions about which circuits can be powered down depend greatly on the end application, and they provide opportunities for specific processors to best target niche requirements.
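
To make that granularity concrete, here is a minimal sketch of selectively gating peripherals and wake-up sources before entering a low power mode. The register names, bit assignments, and mode numbers are invented for illustration; a real part's reference manual defines the actual interface.

```c
/* Hypothetical sketch: choosing which circuits stay powered before entering
 * a low power mode. Register names, bit assignments, and mode numbers are
 * invented; a real part's reference manual defines the actual interface. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t periph_clk_en;  /* one bit per peripheral clock gate                */
    uint32_t wake_src_en;    /* which sources may wake the core (RTC, GPIO, ...) */
    uint32_t sleep_ctrl;     /* selects the low power mode to enter              */
} power_ctrl_t;

static power_ctrl_t PWR;     /* stands in for a memory-mapped power controller */

enum { CLK_UART0 = 1u << 0, CLK_ADC = 1u << 1, CLK_SPI0 = 1u << 2 };
enum { WAKE_RTC = 1u << 0, WAKE_GPIO = 1u << 1 };
enum { MODE_DEEP_SLEEP = 2u };

void enter_deep_sleep(void)
{
    /* Leave only the ADC clock running so an autonomous peripheral block can
     * keep collecting samples; gate the other peripherals off. */
    PWR.periph_clk_en &= ~(CLK_UART0 | CLK_SPI0);

    /* Keep the real-time clock alive as a wake-up source so timekeeping
     * survives the sleep period. */
    PWR.wake_src_en = WAKE_RTC;

    /* Select the mode; on real hardware this would be followed by the part's
     * sleep instruction (for example, a wait-for-interrupt). */
    PWR.sleep_ctrl = MODE_DEEP_SLEEP;
}

int main(void)
{
    PWR.periph_clk_en = CLK_UART0 | CLK_ADC | CLK_SPI0;  /* everything on */
    enter_deep_sleep();
    printf("clk_en=0x%x wake=0x%x mode=%u\n",
           (unsigned)PWR.periph_clk_en, (unsigned)PWR.wake_src_en,
           (unsigned)PWR.sleep_ctrl);
    return 0;
}
```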

When you are looking at a processor’s low power modes, what information do you consider most important? When comparing different processors, do you weigh wake-up times, or does current draw trump everything else? How important is your ability to control which circuits are powered on or off?

Do you care if your development tools are Eclipse based?

Wednesday, May 25th, 2011 by Robert Cravotta

I first explored the opportunity of using the Eclipse and NetBeans open source projects as a foundation for embedded software development tools in an article a few years back. Back then these Java-based IDEs (Integrated Development Environments) were squarely targeting application developers, but the embedded community was beginning to experiment with using these platforms for their own development tools. Since then, many companies have built and released Eclipse-based development tools – and a few have stuck with their own IDEs.

This week’s question is an attempt to start evaluating how these open source development platforms are working out for embedded suppliers and developers. In a recent discussion with IAR Systems, I felt like the company’s recent announcement about an Eclipse plug-in for the Renesas RL78 was driven by developer requests. IAR also supports its own proprietary IDE – the IAR Embedded WorkBench. Does a software development tools company supporting two different IDEs signal something about the open source platform?

In contrast, Microchip’s MPLAB X IDE is based on the NetBeans platform – effectively a competing open source platform to Eclipse. One capability that using the open source platform provides is that the IDE supports development on a variety of hosts running the Linux, Mac OS, and Windows operating systems.

I personally have not tried using either an Eclipse- or NetBeans-based tool in many years, so I do not know yet how well they have matured over the past few years. I do recall that managing installations was somewhat cumbersome, and I expect that is much better now. I also recall that the tools were a little slow to react to what I wanted to do, and again, today’s newer computers may have made that a non-issue. Lastly, the open source projects were not really built with the needs of embedded developers in mind, so the embedded tools that migrated to these platforms had to conform as best they could to architectural assumptions that were driven by the needs of application developers.

Do you care if an IDE is Eclipse or NetBeans based? Does the open source platform enable you to manage a wider variety of processor architectures from different suppliers in a meaningfully better way? Does it matter to your design-in decision if a processor is supported by one of these platforms? Are tools based on these open source platforms able to deliver the functionality and responsiveness you need for embedded development?

An interface around the bend

Tuesday, May 24th, 2011 by Robert Cravotta

User interface options continue to grow in richness of expression as sensor and compute processing costs and energy requirements continue to drop. The “paper” computing device is one such example, and it hints that touch interfaces may only be the beginning of where user interfaces are headed. Flexible display technologies like E-Ink’s have supported visions of paper computers and hand held computing devices for over a decade. A paper recently released by Human Media Lab explores the opportunities and challenges of supporting user gestures that involve bending the device display similar to how you would bend a piece of paper. A video of the flexible prototype paper phone provides a quick overview of the project.

The paper phone prototype provides a demonstration platform for exploring gestures that involve the user bending the device to issue a command.

The demonstration device is based on a flexible display prototype called the paper phone (see Figure). The 3.7” flexible electrophoretic display is coupled with a layer of five Flexpoint 2” bidirectional bend sensors that are sampled at 20Hz. An E-Ink Broadsheet AM 300 kit with a Gumstix processor drives the display and can complete a refresh in 780ms for a typical full-screen grey scale image. The prototype is connected to a laptop computer that offloads the processing for the sensor data, bend recognition, and sending images to the display to support testing the flexibility and mobility characteristics of the display.

The paper outlines how the study extends prior work with bend gestures in two important ways: 1) the display provided direct visual feedback to the user’s bend commands, and 2) the flexible electronics of the bending layer provided feedback. The study involved two parts. The first part asked the participants to define eight custom bend gesture pairs. Gestures were classified according to two characteristics: the location of the force being exerted on the display and the polarity of that force. The configuration of the bend sensors supported recognizing folds or bends at the corners and along the center edge of the display. The user’s folds could exert force forward or backward at each of these points. Gestures could consist of folding the display in a single location or at multiple locations. The paper acknowledges that there are other criteria they could have used, such as the amount of force in a fold, the number of repetitions of a fold, and the velocity of a fold, but these were not investigated in this study.
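
For a rough sense of what the recognition step could look like (this is an illustrative sketch, not the paper's implementation), the code below classifies one 20Hz frame of five bend sensor readings by fold location and polarity; the normalization, threshold, and sensor indexing are assumptions.

```c
/* Illustrative sketch only (not the paper's implementation): classifying one
 * 20 Hz frame of five bend sensor readings by fold location and polarity.
 * Sensor indexing, normalization, and the threshold are assumptions. */
#include <stdio.h>

#define NUM_SENSORS    5
#define FOLD_THRESHOLD 0.15f   /* normalized deflection that counts as a fold */

typedef enum { FOLD_NONE, FOLD_FORWARD, FOLD_BACKWARD } fold_polarity_t;

/* Report the polarity of the fold (if any) at each sensed location, so a
 * gesture can be built from folds at one or several locations. */
static void classify_frame(const float sample[NUM_SENSORS],
                           fold_polarity_t folds[NUM_SENSORS])
{
    for (int i = 0; i < NUM_SENSORS; i++) {
        if (sample[i] > FOLD_THRESHOLD)
            folds[i] = FOLD_FORWARD;
        else if (sample[i] < -FOLD_THRESHOLD)
            folds[i] = FOLD_BACKWARD;
        else
            folds[i] = FOLD_NONE;
    }
}

int main(void)
{
    /* One sampled frame: a backward fold at the second sensor location. */
    float frame[NUM_SENSORS] = { 0.02f, -0.30f, 0.05f, 0.00f, 0.10f };
    fold_polarity_t folds[NUM_SENSORS];

    classify_frame(frame, folds);
    for (int i = 0; i < NUM_SENSORS; i++)
        printf("sensor %d: %s\n", i,
               folds[i] == FOLD_FORWARD  ? "forward fold"  :
               folds[i] == FOLD_BACKWARD ? "backward fold" : "no fold");
    return 0;
}
```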

The second part of the study asked participants to use and evaluate the bend gestures they developed in the context of complete tasks, such as operating a music player or completing a phone call. The study found that there was strong agreement among participants on the folding locations as well as the polarity of the folds for actions with clear directionality, such as navigating left and right. The tasks that the participants were asked to complete were navigating between twelve application icons; navigating a contact list; playing, pausing, and selecting the previous or next song; navigating a book reader; and zooming in and out for map navigation. The paper presents analysis of the 87 total bend gestures that the ten participants created (seven additional bends were created in the second part of the study) in building a bend gesture language, and it discusses shared preferences among the participants.

A second paper from Human Media Lab presents a demonstration “Snaplet” prototype for a bend sensitive device to change its function and context based on its shape. The video of the Snaplet demonstrates the different contexts that the prototype can recognize and adjust to. Snaplet is similar to the paper phone prototype in that it uses bend sensors to classify the shape of the device. Rather than driving specific application commands with bends, deforming the shape of the device drives which applications the device will present to the user and what types of inputs it will accept and use. The prototype includes pressure sensors to detect touches, and it incorporates a Wacom flexible tablet to enable interaction with a stylus. Deforming the shape of the device is less dynamic than bending the device (such as in the first paper); rather the static or unchanging nature of the deformations allows the device’s shape to define what context it will work in.

When the Snaplet is bent in a convex shape, such as a wristband on the user’s arm, Snaplet acts like a watch or media player. The user can place the curved display on their arm and hold it in place with Velcro. The video of the Snaplet shows the user interacting with the device via a touch screen with icons and application data displayed appropriately for viewing on the wrist. By holding the device flat in the palm of their hand, the user signals to the device that it should act as a PDA. In this context, the user can use a Wacom stylus to take notes or sketch; this form factor is also good for reading a book. The user can signal the device to act as a cell phone by bending the edge of the display with their fingers and then placing the device to their ear.

Using static bends provides visual and tangible cues to the user of the device’s current context and application mode. Holding the device in a concave shape requires a continuous pinch from the user and provides haptic feedback that signifies there is an ongoing call. When the user releases their pinch, the release of that haptic tension corresponds directly with dropping the call. This means that users can rely on the shape of the device to determine its operating context without visual feedback.

The paper phone and Snaplet prototypes are definitely not ready for production use, but they are good demonstration platforms to explore how and when using bend gestures and deforming the shape of a device may be practical. Note that these demonstration platforms do not suggest replacing existing forms of user interfaces, such as touch and stylus inputs; rather, they demonstrate how bend gestures can augment those input forms and provide a more natural and richer communication path with electronic devices.

Is the cloud safe enough?

Wednesday, May 18th, 2011 by Robert Cravotta

The cloud and content streaming continue to grow as a connectivity mechanism for delivering applications and services. Netflix now accounts for almost 30 percent of downstream internet traffic during peak times according to Sandvine’s Global Internet Phenomena Report. Microsoft and Amazon are entering the online storage market. But given Sony’s recent experience with the security of its PlayStation and Online Entertainment services, is the cloud safe enough, especially when new exploits are being uncovered in their network even as they bring those services back online?

When I started working, I was exposed to a subtle but powerful concept that is relevant to deciding if and when the cloud is safe enough to use, and that lesson has stayed with me ever since. One of my first jobs was supporting a computing operations group, and one of their functions was managing the central printing services. Some of the printers they managed were huge impact printers that weighed many hundreds of pounds each. A senior operator explained to me that there was a way to greatly accelerate the wear and tear on these printers merely by sending a print job containing particular – but completely legal – sequences of text.

This opened my eyes to the fact that even when a device or system is being used “correctly,” unintended consequences can occur unless the proper safeguards are added to the design of that system. This realization has served me well in countless projects because it taught me to focus on mitigating legal but unintended operating scenarios so that these systems were robust.

An example that affects consumers more directly is the exploding cell phone batteries of a few years back. In some of those cases, the way the charge was delivered to the battery weakened the batteries; however, if a smarter regulator were placed between the charge input and the battery, charge scenarios that are known to damage a battery could be isolated by the charge regulator instead of being allowed to pass through blindly. This is a function that adds cost and complexity to the design of the device and, worse yet, does not necessarily justify an increase in the price that the buyer is willing to pay. However, the cost of allowing batteries to age prematurely or to explode is significant enough that it is possible to justify the extra cost of a smart charge regulator.
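
A minimal sketch of that kind of guard logic might look like the following, where the regulator simply refuses charge scenarios outside a safe envelope; all of the limits here are invented for illustration and are not tied to any real battery chemistry.

```c
/* Hypothetical sketch of the kind of guard a smart charge regulator could
 * apply: reject charge scenarios known to stress a battery even though the
 * requests themselves are "legal". All limits are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    float charge_current_a;   /* requested charge current (A)      */
    float battery_temp_c;     /* measured cell temperature (deg C) */
    float battery_voltage_v;  /* measured cell voltage (V)         */
} charge_request_t;

#define MAX_CHARGE_CURRENT_A  1.0f
#define MAX_CHARGE_TEMP_C    45.0f
#define MIN_CHARGE_TEMP_C     0.0f
#define MAX_CELL_VOLTAGE_V    4.2f

static bool charge_request_is_safe(const charge_request_t *req)
{
    if (req->charge_current_a > MAX_CHARGE_CURRENT_A) return false; /* too fast     */
    if (req->battery_temp_c   > MAX_CHARGE_TEMP_C)    return false; /* too hot      */
    if (req->battery_temp_c   < MIN_CHARGE_TEMP_C)    return false; /* too cold     */
    if (req->battery_voltage_v >= MAX_CELL_VOLTAGE_V) return false; /* already full */
    return true;
}

int main(void)
{
    charge_request_t req = { 1.5f, 25.0f, 3.8f };   /* over-current request */
    printf("charge allowed: %s\n", charge_request_is_safe(&req) ? "yes" : "no");
    return 0;
}
```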

I question whether the cloud infrastructure, which is significantly more complicated than a mere stand-alone device or function, is robust enough to act as a central access point because it currently represents a single point of failure that can have huge ramifications from a single flaw, exploit, or weakness in its implementation. Do you think the cloud is safe enough to bet your product and/or company on?

Do you use any custom or in-house development tools?

Wednesday, May 11th, 2011 by Robert Cravotta

Developing embedded software differs from developing application software in many ways. The most obvious difference is that there is usually no display available in embedded systems, whereas most application software would be useless without a display to communicate with the user. Another difference is that it can be challenging to know whether the software for an embedded system is performing the correct functions for the right reasons or if it is performing what appear to be proper functions coincidentally. This is especially relevant to closed-loop control systems that include multiple types of sensors in the control loop, such as with fully autonomous systems.

Back when I was building fully autonomous vehicles, we had to build a lot of custom development tools because standard software development tools just did not perform the tasks we needed. Some of the system-level simulations that we used were built from the ground up. These simulations modeled the control software, rigid body mechanics, and inertial forces from actuating small rocket engines. We built a hardware-in-the-loop rig so that we could swap real hardware in and out with simulated modules, verify the operation of each part of the system, and inject faults into the system to see how it would fare. Instead of a display or monitor to provide feedback to the operator, the system used a telemetry link, which allowed us to effectively instrument the code and capture the state of the system at regular points in time.

Examining the telemetry data was cumbersome due to the massive volume of data – not unlike trying to perform debugging analysis with today’s complex SOC devices. We used a custom parser to extract the various data channels that we wanted to examine together and then used a spreadsheet application to scale and manipulate the raw data and to create plots of the data we were searching for correlations in. If I were working on a similar project today, I suspect we would still be using a lot of the same types of custom tools as back then. I suspect that the market for embedded software development tools is so wide and fragmented that it is difficult for a tools company to justify creating many tools that meet the unique needs of embedded systems. Instead, there is much more money available from the application side of the software development tool market, and it seems that embedded developers must choose between figuring out how to make tools that address the needs of application software work in their project or creating and maintaining their own custom tools.
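
For readers who have not built this kind of tooling, the sketch below shows the flavor of such a channel extractor: it pulls a time stamp and two channels of interest out of comma-separated telemetry records so they can be loaded into a spreadsheet. The record format and column indices are assumptions, and this is not the actual parser we used.

```c
/* Illustrative sketch, not the actual tool described above: extracting a time
 * stamp and two channels of interest from comma-separated telemetry records so
 * they can be pulled into a spreadsheet. The record format and the channel
 * column indices are assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define COL_CHANNEL_A 3   /* column index of the first channel of interest  */
#define COL_CHANNEL_B 7   /* column index of the second channel of interest */

int main(void)
{
    char line[1024];

    /* Assume each record is "time,chan0,chan1,...,chanN" on stdin. */
    while (fgets(line, sizeof line, stdin)) {
        double t = 0.0, a = 0.0, b = 0.0;
        int col = 0;

        for (char *tok = strtok(line, ",\n"); tok != NULL;
             tok = strtok(NULL, ",\n"), col++) {
            if (col == 0)                  t = atof(tok);
            else if (col == COL_CHANNEL_A) a = atof(tok);
            else if (col == COL_CHANNEL_B) b = atof(tok);
        }
        printf("%.3f,%.6f,%.6f\n", t, a, b);  /* reduced record for plotting */
    }
    return 0;
}
```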

In your own projects, are standard tools meeting your needs or are you using custom or in-house development tools? What kind of custom tools are you using and what problems do they help you solve?

Is the job market for embedded developers improving?

Wednesday, May 4th, 2011 by Robert Cravotta

I have an unofficial sense that there has been an uptick in the embedded market for developers. This sense is not based on hard data; rather, it is based on what I hear in briefings and what types of briefings I am seeing. The message of recovery is not a new one, but over the previous year or two it felt like the undertone of the message was more of a hope than a statement of fact. The undertone now suggests that there may be more than just hopeful optimism to the message today.

Are you seeing more opportunities for embedded developers than the previous year or two? Is the workload growing as well as the talent being brought to bear on these projects, or are you doing more with much less? If you can provide an anecdote, please do; otherwise, use the scale below to indicate how you think the market for embedded developers is doing.

1) The embedded market is hurting so much that improvement/growth is hard to detect.

2) The embedded market is showing signs of revival but still has a ways to go to be healthy.

3) The embedded market is healthy.

4) The embedded market is growing and hiring opportunities are up.

5) The future has never looked brighter.

6) Other (please expand)

How do you ensure full coverage for your design/spec reviews?

Wednesday, April 27th, 2011 by Robert Cravotta

Last week I asked whether design-by-committee is ever a good idea. This week’s question derives from my experience on one of those design-by-committee projects. In this particular project, I worked on a development specification. The approval list for the specification was several dozen names long – presumably the length of the approving signatures list should provide confidence that the review and approval process was robust and good. As part of the review and approval process, I personally obtained each signature on the original document and gained some insight into some of the reasoning behind each signature.

For example, when I approached person B for their signature, I had the distinct feeling they did not have time to read the document and that they were looking at it for the first time in its current form. Now I like to think I am a competent specification author, but this was a large document, and to date, I was the only person who seemed to be aware of the entire set of requirements within the document. Well, person B looked at the document, perused the signature list, and said that person E’s signature would ensure that the appropriate requirements were good enough for approval.

When I approached person D, they took two minutes and looked at two requirements that were appropriate to their skill set and responsibility and nothing else within the specification before signing the document. When it was person E’s turn at the document, I once again felt they had not had time to look at the document before I arrived for their signature. Person E looked at the signature list and commented that it was good that person B and D had signed off, so the document should be good enough for their signature. In this example, these three signatures encompassed only two of the requirements in a thick specification.

Throughout the review and approval process, it felt like no one besides me knew all of the contents of the document. I did good work on that document, but my experience indicates that even the most skilled engineers are susceptible to small errors that can switch the meaning of a requirement and that additional sets of eyes looking over the requirements will usually uncover them during a review. Additionally, the system-level implications of each requirement can only be assessed if a reviewer is aware of the other requirements that interact with each other. The design-by-committee approach, in this case, did not provide system-level accountability for the review and approval process.

Is this lack of full coverage during a review and approval cycle a problem unique to this project or does it happen on other projects that you are aware of? What process do you use to ensure that the review process provides appropriate and full coverage of the design and specification documents?

Is design-by-committee ever the best way to do a design?

Wednesday, April 20th, 2011 by Robert Cravotta

I have participated in a number of projects that were organized and executed as a design-by-committee project. This is in contrast to most of the design projects I worked on that were the result of a development team working together to build a system. I was reminded of my experiences in these types of projects during a recent conversation about the details for the Space Shuttle replacement. The sentiment during that conversation was that the specifications for that project would produce something that no one will want once it is completed.

A common expression to illustrate what design-by-committee means is “a camel is what you get when you design a horse by committee.” I was sharing my experience with these design-by-committee projects with a friend, and they asked me a good question – What is the difference between design-by-committee and a design performed by a development team? After a moment of thought, my answer to that question is that each approach treats accountability among the members differently, and this materially affects how system trade-offs are performed and decided.

In essence, design-by-committee could be described as design-by-consensus. Too many people in the committee have the equivalent of veto power without the appropriate level of accountability that should go with that type of power. Compounding this is that just because you can veto something does not mean that you have to come up with an alternative. Designing to a consensus seems to rely on the implied assumption that design is a process of compromises and the laws of physics are negotiable.

In contrast, in the “healthy” development team projects I worked on, different approaches fought it out in the field of trade studies and detailed critique. To an outsider, the engineering group seemed like crazed individuals engaged in passionate holy wars. To the members of the team, we were putting each approach through a crucible to see which one survived the best. In those cases where there was no clear “winner,” the chief project engineer had the final say over which approach the team would use – but not until everyone, even the most junior members on the team, had the chance to bring their concerns up. Ultimately, the chief project engineer was responsible for the whole system, and their tie-breaker decisions were based on system-level trade-offs rather than just slapping together the best of each component into a single system.

None of the design-by-committee projects that I am aware of yielded results that matched, never mind rivaled, what I think a hierarchical development team with clearly defined accountabilities would produce. Do I have a skewed perspective on this or do you know of cases when design-by-committee was the best way to pursue a project? Can you share any interesting or humorous results of design-by-committee projects that you know of? I am currently searching for an in-house cartoon we had when I worked on rockets that demonstrated the varied results you could get if you allowed one of the specialty engineering groups to dominate the design process for a rocket engine. I will share that if/once I find it. I suspect there could be analogous cartoons for any field, and I encourage you to send them to me, and I will share yours also.

Is bigger and better always better?

Wednesday, April 13th, 2011 by Robert Cravotta

The collision between an Airbus A380 and a Bombardier CRJ-700 this week at John F. Kennedy International Airport in New York City reminded me of some parallels and lessons learned from when we upgraded the target processor to a faster version. I shared one of the lessons learned from that event in an article about adding a version control inquiry into the system. A reader added that the solution we used could still suffer from a versioning mismatch and suggested that version identifications also include an automatically calculated date and time stamp of the compilation. In essence, these types of changes in our integration and checkout procedures helped mitigate several sources of human or operator error.
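
A minimal sketch of that kind of version inquiry, extended with a compiler-generated build stamp along the lines of the reader's suggestion, could look like the following; the version numbers and the reporting path are hypothetical.

```c
/* A minimal sketch of the kind of version inquiry discussed above, extended
 * with a compiler-generated build stamp along the lines of the reader's
 * suggestion. The version numbers and the reporting path are hypothetical. */
#include <stdio.h>

#define FW_VERSION_MAJOR 2
#define FW_VERSION_MINOR 4

/* __DATE__ and __TIME__ are filled in by the compiler, so the stamp cannot
 * silently fall out of sync with the binary that was actually loaded. */
static const char fw_build_stamp[] = __DATE__ " " __TIME__;

void report_firmware_version(void)
{
    /* In a real system this would go out over the telemetry or debug link. */
    printf("FW %d.%d, built %s\n", FW_VERSION_MAJOR, FW_VERSION_MINOR, fw_build_stamp);
}

int main(void)
{
    report_firmware_version();
    return 0;
}
```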

The A380 is currently the world’s largest passenger jet with a wingspan of 262 feet. The taxiways at JFK Airport are a standard 75 feet wide, but this accident is not purely the result of the plane being too large, as there has been an Operation Plan for handling A380s at JFK Airport that has been successfully used since the third quarter of 2008. The collision between the A380 and the CRJ appears to be the result of a series of human errors stacking onto each other (similar to the version inquiry scenario). Scanning the 36-page operation plan for the A380 provides a sense of how complicated it is to manage the ground operations for these behemoths.

Was the A380 too large for the taxiway? Did the CRJ properly clear the taxiway (per the operation plan) before the A380 arrived? Did someone in the control tower make a mistake in directing those two planes to be in those spots at the same time? Should someone have been able to see what was going to happen and stopped it in time? Should the aircraft sensors have warned the pilot that a collision was imminent? Was anyone in this process less alert or distracted at the wrong time? A number of air traffic controllers have been caught sleeping on the job within the last few months, with the third instance happening this week.

When you make changes to a design, especially when you add a bigger and better version of a component into the mix, it is imperative that the new component be put through regression testing to make sure no assumptions are broken. Likewise, the change should flag an effort to ensure that the implied (or tribal knowledge) mechanisms for managing the system accommodate the new ways that human or operator error can affect the operation of the system.

Do you have any anecdotes that highlight how a new bigger and better component required your team to change other parts of the system or procedures to mitigate new types of problems?

The battle for multi-touch

Tuesday, April 12th, 2011 by Robert Cravotta

As with most technologies used in the consumer space, touch technologies took a number of years to gestate before they matured enough and gained visibility with end users. Capacitive-based multi-touch technology burst into the consumer consciousness with the introduction of the iPhone. Dozens of companies have since entered the market to provide capacitive touch technologies to support new designs and applications. The capabilities that capacitive touch technology can support, such as improved touch sensing for multiple touches, detecting and filtering unintended touches (such as palm and cheek detection), as well as supporting a stylus, continue to evolve and improve.

Capacitive touch enjoys a very strong position providing the technology for multi-touch applications; however, other technologies are or will likely be vying for a larger piece of the multi-touch pie. A potential contender is the vision-based multi-touch technology found in the Microsoft Surface. However, at the time of this writing, Microsoft has indicated that it is not focusing its efforts for the pixel sense technology on the embedded market, so it may be a few years before the technology is available to embedded developers.

The $7600 price tag for the newest Surface system may imply that the sensing technology is too expensive for embedded systems, but it is important to realize that this price point supports a usage scenario that vastly differs from a single user device. First, the Surface provides a 40 inch diagonal touch/display surface that four people can easily access and use simultaneously. Additionally, the software and processing resources contained within the system are sized to handle 50 simultaneous touch points. Shrink both of these capabilities down to a single user device and the pixel sense technology may become quite price competitive.

Vision-based multi-touch works with more than a user’s fingers; it can also detect, recognize, and interact with mundane, everyday objects, such as pens, cups, and paint brushes, as well as touch interface specific objects such as styli. Given enough compute capability, the technology can distinguish between, and handle differently, touches and hovering of fingers and objects over the touch surface. I’m betting that as the manufacturing process for the pixel sense sensors matures, lower price points will make a compelling case for focusing development support on the embedded community.

Resistive touch technology is another multi-touch contender. It has been the work horse for touch applications for decades, but its inability (until recently) to support multi-touch designs has been one of its significant shortcomings. One advantage that resistive touch has enjoyed over capacitive touch for single-touch applications is a lower cost to incorporate it into a design. Over the last year or so, resistive touch has evolved to support multi-touch designs by using more compute processing in the sensor to resolve the ghosting issues found in earlier resistive touch implementations.

Similar to vision-based multi-touch, resistive touch is able to detect contact with any normal object because resistive touch relies on a mechanical interface. Being able to detect contact with any object provides an advantage over capacitive touch because capacitive touch sensing can only detect objects, such as a human finger or a special conductive-tipped stylus, with conductive properties that can draw current from the capacitive field when placed on or over the touch surface. Capacitive touch technology also continues to evolve, and support for thin, passive styli (with an embedded conductive tip) is improving.

Each technology offers different strengths and capabilities; however, the underlying software and embedded processors in each approach must be able to perform analogous functions in order to effectively support multi-touch applications. A necessary capability is the ability to distinguish between explicit and unintended touches. This requires the sensor processor and software to continuously track many simultaneous touches and assign a context to each one of them. The ability to track multiple explicit touches relative to each other is necessary to recognize both single- and multi-touch gestures, such as swipes and pinches. Recognizing unintended touches involves properly ignoring when the user places their palm or cheek over the touch area, as well as when fingers gripping the device touch the edges of the touch surface.
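
As a hedged sketch of how a controller might separate explicit touches from palms and edge grips, the code below rejects contacts that are too large or that hug the bezel. The thresholds and screen geometry are invented, and production controllers use far more elaborate models that also track context over time.

```c
/* Hedged sketch of one way a touch controller might separate explicit touches
 * from palms and edge grips. The thresholds and screen geometry are invented;
 * production controllers use far more elaborate models. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    float x, y;        /* position in mm from the top-left corner   */
    float contact_mm;  /* approximate diameter of the contact patch */
} touch_point_t;

#define SCREEN_W_MM       90.0f
#define SCREEN_H_MM      160.0f
#define EDGE_MARGIN_MM     5.0f   /* grip zone along the bezel                  */
#define PALM_DIAMETER_MM  25.0f   /* contacts larger than this look like a palm */

static bool touch_is_explicit(const touch_point_t *t)
{
    /* Reject palm- or cheek-sized contacts. */
    if (t->contact_mm > PALM_DIAMETER_MM)
        return false;

    /* Reject contacts hugging the bezel, which usually come from gripping fingers. */
    if (t->x < EDGE_MARGIN_MM || t->x > SCREEN_W_MM - EDGE_MARGIN_MM ||
        t->y < EDGE_MARGIN_MM || t->y > SCREEN_H_MM - EDGE_MARGIN_MM)
        return false;

    return true;
}

int main(void)
{
    touch_point_t fingertip = { 45.0f, 80.0f,  8.0f };
    touch_point_t palm      = { 60.0f, 40.0f, 40.0f };
    printf("fingertip explicit: %s\n", touch_is_explicit(&fingertip) ? "yes" : "no");
    printf("palm explicit:      %s\n", touch_is_explicit(&palm)      ? "yes" : "no");
    return 0;
}
```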

A differentiating capability for touch sensing is minimizing the latency between when the user makes an explicit touch and when the system responds or provides the appropriate feedback to that touch. Longer latencies can hurt the user’s experience in two ways: the collected data for the touch or gesture can be of poor quality, or the delay in feedback can confuse the user. One strategy to minimize latency is to sample or process fewer points when a touch (or touches) is moving; however, this risks losing track of small movements that can materially affect analyzing the user’s movement, such as when writing their signature. Another strategy to accommodate tracking the movement of a touch without losing data is to allow a delay in displaying the results of the tracking. If the delay is too long, though, the user may try to compensate and restart their movement – potentially confusing or further delaying the touch sensing algorithms. Providing more compute processing helps in both of these cases, but it also increases the cost and energy draw of the system.
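
The sample-decimation trade-off described above could look something like this sketch, which processes fast-moving touches at a reduced rate while keeping every sample for slow, precise motion such as a signature stroke; the speed threshold and decimation ratio are invented.

```c
/* Sketch of the sample-decimation trade-off described above, with invented
 * thresholds: fast-moving touches are processed at a reduced rate to keep
 * latency down, while slow, precise motion (a signature, for example) keeps
 * every sample. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { float x, y; } point_t;

#define FAST_SPEED_MM_PER_SAMPLE 2.0f  /* above this, process every other sample */

static bool process_this_sample(point_t prev, point_t curr, unsigned sample_index)
{
    float dx = curr.x - prev.x;
    float dy = curr.y - prev.y;
    float speed = sqrtf(dx * dx + dy * dy);

    if (speed < FAST_SPEED_MM_PER_SAMPLE)
        return true;                    /* slow, precise motion: keep everything   */

    return (sample_index & 1u) == 0u;   /* fast motion: halve the processing load */
}

int main(void)
{
    point_t prev = { 10.0f, 10.0f };
    point_t slow = { 10.5f, 10.2f };
    point_t fast = { 18.0f, 14.0f };

    printf("slow sample kept: %s\n", process_this_sample(prev, slow, 1) ? "yes" : "no");
    printf("fast sample kept: %s\n", process_this_sample(prev, fast, 1) ? "yes" : "no");
    return 0;
}
```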

While at CES, I experienced multi-touch with all three of these technologies. I was already familiar with the capabilities of capacitive touch. The overall capabilities and responsiveness of the more expensive vision-based system met my expectations; I expect the price point for vision-based sensing to continue its precipitous fall into the embedded space within the next few years. I had no experience with resistive-based multi-touch until the show. I was impressed by the demonstrations that I saw from SMK and Stantum. The Stantum demonstration was on a prototype module, and I did not even know the SMK module was a resistive based system until the rep told me. The pressure needed to activate a touch felt the same as using a capacitive touch system (however, I am not an everyday user of capacitive touch devices). As these technologies continue to increasingly overlap in their ability to detect and appropriately ignore multiple touches within a meaningful time period, their converging price points promise an interesting battle as each technology finds its place in the multi-touch market.