Question of the Week: Is or should “dogfooding” be important to embedded developers?

Wednesday, May 12th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

Earlier this week, I discussed “dogfooding” – the practice of using your own product in the course of doing your business – as a robust design principle. This principle seems to be popular with a growing number of software tool providers, such as compiler vendors and makers of GUI platform development tools. I wonder if there is an equivalent principle for embedded developers, especially because many embedded components are not really end products by themselves – hence they do not lend themselves to being explicitly used in a classic “dogfooding” fashion.

The primary tenet of using your own product is that you, as the designer, will explicitly understand how your product does or does not adequately support the needs of the person using it to solve their problem. Because this tenet is not limited to software tools, I started looking through my own experience, and that of my wife, who has been a program manager for many types of electro-mechanical embedded components, to identify any best practices that we used to provide the same type of visibility to the system designers that “dogfooding” is supposed to provide to software developers.

My design experience is overwhelmingly based on path-finding and prototyping projects where we were not sure that what we were trying to accomplish was even possible. This is in contrast to improving an existing design, with an existing production customer, to accomplish incrementally more while costing less, using less power, and providing higher reliability. Once I started hearing companies use the phrase “eating our own dogfood”, I began to realize that we performed an embedded design equivalent of this concept during our system integration process. In these types of projects, demonstration and validation prototypes are a very important part of the delivery schedule because that is where we worked directly with the customer to integrate the delivered component into an actual system, real or simulated.

During the earliest integration stages, we would find many details that we needed to change because what was captured in the specifications did not always work quite as expected within the entire system once it existed in the real world. This process was essential to enable the designers to better understand how their design trade-offs affected the usability of the component, and it led to ultimately being able to more quickly deliver the component that the customer really needed and could use in the production system.

How important is it that the companies that supply you the tools and components you use in your designs practice this robust design principle? Also, do you employ any best practices, formal or informal, that you use to provide your designers with this type of insight so that they can more quickly and effectively implement the changes to your product design to meet your customer’s needs?

User Interfaces: Test Bench and Process for Projects

Tuesday, May 11th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on Low-Power Design.]

To avoid confusion or the need for repetitious information in later posts, I will now describe the test bench and process I am using for these touch development kit projects. This process is something I have refined over a number of previous hands-on projects that involved using embedded resources from multiple vendors. The goal of the process is to extract the most value from the effort while minimizing the time spent dealing with the inevitable usability issues that arise when working with so many different kits.

I have several goals when I perform this kind of hands-on project. First and foremost, I want to understand the state-of-the-art for touch development kits. Each company is serving this market from a slightly different angle, and their board and software support reflects different design trade-offs. I believe uncovering those trade-offs will provide you with better insight into how each kit can best meet your needs in your own touch projects.

Additionally, each company is at a different point of maturity in supporting touch. Some companies are focusing on providing the best signal-to-noise ratio at the sensor level, and the supported software abstractions may require you to become an expert in the sensor’s idiosyncrasies to extract that next new differentiating feature. Alternatively, the company may focus on simplifying the learning curve to implement touch in your design; the software may abstract more of the noise filtering and allow (or limit) you to treat touch as a simple on/off switch or an abstracted mouse pointer. Or the company’s development kit may focus on providing rich filtering capabilities while still allowing you to work with the raw signals for truly innovative features. My experience suggests the kits will run the entire gamut of maturity and abstraction levels.
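
To make those abstraction levels a bit more concrete, here is a minimal sketch of the simplest one – treating a capacitive sensor reading as an on/off button with a threshold and a debounce count. The sensor-read function, baseline, threshold, and debounce count are hypothetical placeholders, not any particular vendor’s API; this is exactly the kind of detail I expect each kit to handle differently.

```c
/* Minimal sketch: treating a raw capacitive-sensor reading as an on/off
 * button. The read function, baseline, threshold, and debounce count are
 * hypothetical placeholders -- a real kit's API and filtering will differ. */
#include <stdbool.h>
#include <stdint.h>

#define TOUCH_THRESHOLD   512u  /* counts above baseline that indicate a touch   */
#define DEBOUNCE_SAMPLES  4u    /* consecutive samples required to change state  */

extern uint16_t read_raw_sensor(void);  /* supplied by the vendor's driver       */
extern uint16_t sensor_baseline;        /* tracked level of the untouched sensor */

bool touch_is_pressed(void)
{
    static bool pressed = false;
    static uint8_t count = 0;

    uint16_t raw = read_raw_sensor();
    bool above = (raw > (uint16_t)(sensor_baseline + TOUCH_THRESHOLD));

    if (above != pressed) {
        if (++count >= DEBOUNCE_SAMPLES) {  /* flip state only after a stable run */
            pressed = above;
            count = 0;
        }
    } else {
        count = 0;
    }
    return pressed;
}
```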

Another goal is to help each company that participates in this project to improve their offering. One way to do this is to work with an application engineer from the company who understands the development kit we will be working with. Working with the application engineer not only permits the company to present its development kit’s capabilities in the best possible light and enables me to complete the project more quickly, but it also puts the kit through a set of paces that invariably causes something to not work as expected. This helps the application engineer gain a new understanding of how a developer can use the touch kit, which results in direct feedback to the development team and spawns refinements that improve the kit for the entire community. This is especially relevant because many of the kits will have early adopter components – software modules that are “hot off the press” and may not have completely gone through the field validation process yet. This exercise becomes a classic developer and user integration effort that is the embedded equivalent to dogfooding (using your own product).

In addition to the development boards and software that is included in each touch development kit, I will be using a Dell Inspiron 15 laptop computer running Windows 7 Home Premium (64-bit) for the host development system. One reason I am using this laptop is to see how well these development kits support the Windows 7 environment. Experience suggests that at least one kit will have issues that will be solved by downloading or editing a file that is missing from the production installation files.

So in short, I will be installing the development software on a clean host system that is running Windows 7. I will be spending a few hours with an application engineer, either over the phone or face-to-face, as we step through installing the development software, bringing up the board and verifying that it operates properly from the factory, loading a known example test program, building a demonstration application, and doing some impromptu tweaks to the demonstration application to find the edges of the system’s capabilities. From there, I will post about the experience with a focus on what types of problem spaces the development kit is best suited for, and what opportunities you may have to add new differentiating capabilities to your own touch applications.

If you would like to participate in this project, post here or email me at Embedded Insights.

Robust Design: Dogfooding

Monday, May 10th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

Dogfooding is an idiom coined from the saying “Eating your own dog food.” Examples of companies that practice dogfooding include Microsoft and Google. Other companies, such as Amulet Technologies and Green Hills Software, use the term informally in their presentations when they are emphasizing that they use their own tools to make the tools they sell. CNET recently reported that Adobe is planning to provide Android phones running Flash to its employees. I interpret this to be a dogfooding moment for Adobe to ensure that Flash has the best possible chance to succeed on the Android mobile platform.

In the above examples, dogfooding spans from beta testing a product to outright using your own product for production work. A common thread in these examples though is that they are software-based products that target developers; however, dogfooding is not limited to software. As an example, many years ago, my brother worked at a company that built numerical control tooling machines that it used to build the products it ultimately sold.

The purported advantage of using your own products is that it proves to customers that you believe in your product enough to use it. However, the claim that using your own product means you catch more bugs and design flaws is not a strong one because it might suggest that the company’s QA process is less effective than it should be. One of the biggest advantages of working with your own products is that your developers become more aware of what works well with the product, as well as why and how they could improve it from a usability perspective. In essence, working with your own products makes it more obvious to the developers where the product’s actual behavior differs from how you envisioned it would work.

However, as a best practice concept, dogfooding needs to take on a different tack for embedded developers. Using the embedded system as an end product is usually not an option because embedded systems make up the invisible components of the end product. The mechanism for dogfooding an embedded design occurs during the demonstration and validation steps of the design and build effort, when the designers, integrators, and users of the embedded system work together to quantifiably prove whether the requirements specification for the embedded system was sufficient. It is not sufficient that the embedded design meets the requirements specification if the specification was lacking key information. Often, at the demonstration and validation phase, the designers and users discover what additional requirements they need to incorporate into the system design.

In short, dogfooding is that step where designers prove that the system they envisioned actually does what the end user needs it to do. In the case of embedded systems, proving out the concept to reality requires a cooperative, and often an iterative, effort between the designer, system integrator, and end user.

Extreme Processing Thresholds: Low Power to No Power

Friday, May 7th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

The lower limit of power consumption of embedded processors continues to drop; however, there is a point at which operating on ever smaller amounts of energy becomes equivalent to operating on no power at all. Today’s lowest power parts, such as the ones I discussed in earlier posts in this low power series, are at the edge of this point because designers are beginning to be able to pair them with energy harvesters that can pull more energy from the ambient environment than the application needs, allowing it to operate for an indefinite period of time.

The practical limit for low power operation as “no power operation” will be the point where harvesting the ambient energy is sufficient to allow a system to operate continuously. No systems operate at this lower limit yet. Additional energy efficiency beyond this point becomes an opportunity to add more processing features to the system, analogous to how higher clock frequencies and parallel computing engines enable today’s high-end systems to take on more capabilities.
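
As a rough illustration of that break-even point, the arithmetic is simply that the harvested power must cover the duty-cycle-weighted average of active and sleep power. The numbers in the sketch below are invented for the example; real figures come from the harvester and processor datasheets.

```c
/* Illustrative break-even calculation for an energy-harvesting budget.
 * All numbers are made up for the example. */
#include <stdio.h>

int main(void)
{
    double p_harvest_uw = 100.0;   /* average harvested power, microwatts */
    double p_active_uw  = 3000.0;  /* active-mode power, microwatts       */
    double p_sleep_uw   = 1.0;     /* deep-sleep power, microwatts        */

    /* Largest active duty cycle d that keeps the average draw within the
     * harvest budget:  p_active*d + p_sleep*(1-d) <= p_harvest            */
    double d_max = (p_harvest_uw - p_sleep_uw) / (p_active_uw - p_sleep_uw);

    printf("Maximum active duty cycle: %.2f%%\n", d_max * 100.0);  /* ~3.3% here */
    return 0;
}
```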

Energy harvesting is a process that enables a system to capture, store, and operate off of the ambient energy from the surrounding environment. Ambient energy can be harvested in many forms; the most commonly tapped forms at this time are thermal, light, vibration, and RF (radio frequency). I will explore the companies providing methods for harvesting these types of energy in later posts. Essentially, these types of systems harvest “free energy” from sources that we currently are not able to tap for any other work.

Currently, batteries are able to provide a reliable and cost effective source for low power systems, and they usually enjoy a cost advantage over the various energy harvesting methods. Despite this cost advantage, there are usage scenarios, namely those cases where changing batteries is impractical, costly, dangerous, or even impossible, that make using an energy harvester a more practical approach. Example scenarios include implantable medical devices, surveillance and security equipment, and buildings and structures with smart sensors distributed throughout them.

The first requirement of energy harvesting systems is that they must be able to extract more energy from the environment than the amount of energy the collection and storage components consume. Storage options include batteries, capacitors, and thin-film technologies. Follow-up posts in this series will explore the cost and efficiency challenges facing the types of transducers available to extract the ambient energy as well as the challenges facing the energy storage technologies.

The second requirement of energy harvesting systems is that they must be able to monitor their own energy storage and adjust their operation to avoid starving that storage, so that the energy collection components can still operate when there is energy available to harvest. Designers of these types of systems need to be able to view energy as a variable resource and design their systems to scale with the inevitable fluctuations of energy availability so that the system can remain operational despite periods of “starvation.”
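
A minimal sketch of what that scaling might look like in practice follows – a wake/sleep loop that checks the storage voltage and defers or sheds work when the reserve runs low. The driver functions and voltage thresholds are hypothetical placeholders rather than any specific vendor’s API.

```c
/* Sketch of a harvest-aware main loop: scale the work done on each wake-up to
 * the energy actually available in storage. Function names and thresholds are
 * hypothetical placeholders for vendor-specific drivers. */
#include <stdint.h>

extern uint16_t read_storage_millivolts(void);  /* e.g. ADC on the storage capacitor */
extern void sample_and_transmit(void);          /* full task: sense plus radio       */
extern void sample_only(void);                  /* reduced task: sense, defer the TX */
extern void deep_sleep_seconds(uint16_t s);

#define MV_HEALTHY  3000u   /* enough margin for the full task                  */
#define MV_LOW      2400u   /* sensing only; below this, sleep and just harvest */

void energy_aware_loop(void)
{
    for (;;) {
        uint16_t mv = read_storage_millivolts();

        if (mv >= MV_HEALTHY) {
            sample_and_transmit();
            deep_sleep_seconds(10);
        } else if (mv >= MV_LOW) {
            sample_only();              /* shed the costly work to protect storage */
            deep_sleep_seconds(30);
        } else {
            deep_sleep_seconds(120);    /* starvation: let the harvester catch up  */
        }
    }
}
```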

Question of the Week: Is there a fundamental difference in the skills required to build visibly moving robots versus autonomous embedded control subsystems?

Wednesday, May 5th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

A few weeks ago, I asked if robotics engineering is different enough from embedded engineering to warrant being treated as a separate discipline. I asked this question because I think a general engineering curriculum is valuable – especially at the bachelor level. Engineers should at least have exposure and experience with a broad set of skills and tools because you never know which one will be the right one to solve any given situation, and it seems a shame to focus multi-discipline skills and tools into a narrow topic when they really apply to any embedded system that deals with real world interfaces and control.

I offered three examples in the original post: my clothes-washing machine, and the braking systems and stability-control systems resident in many high-end automobiles. Differential braking systems are fairly sophisticated and they involve much more than just a software control system. They must have an understanding of the complex interactions of friction, steering, engine force, and inertia in order to accomplish their coordinated task correctly. The same can be doubly said for stability control systems that work to prevent vehicles from skidding and rolling over. The humble washing machine is a modern marvel. I have watched as the washing machine gets into an unbalanced condition. Instead of walking and banging around like its predecessor, it performs a set of gyrations that actually corrects the imbalance and allows the washing machine to work at higher efficiency even with heavy, bulky loads. Each of these examples is a collection of interconnected embedded systems working together to affect the overall system in some mechanical fashion.

In thinking about this question, I remembered spending time with the Velodyne brothers, David and Bruce Hall, during the earlier DARPA Grand Challenges and their TeamDAD autonomous automobile. I felt a kindred spirit with these gentlemen, as I had also worked on autonomous vehicles almost 15 years earlier. In addition to their innovative LIDAR sensor system, I remember their answer to how they got involved in autonomous automobiles when their background was premium subwoofers. They use motion-feedback technology in their subwoofers, and it is not a large leap to apply those concepts to motion control technology for robots.

In a similar fashion, none of the examples above acts like a robot that exhibits obvious motion, and yet they all use motion feedback to accomplish their primary tasks. The relevance here is that robots also use a collection of interconnected embedded systems that work together to achieve mechanical degrees of freedom of motion in the real world – a trait that, in multiple online conversations I have observed, is treated as a necessity for an autonomous system to be considered a robot. None of the examples is limited to just embedded software – they are all firmly entrenched in multidiscipline mechanical control and manipulation.

Is there a fundamental difference in the skills required to build visibly moving robots versus autonomous embedded control subsystems? My experience says they are the same set of skills.

Robust Design: Patch-It Principle – Teaching and Learning

Monday, May 3rd, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

In the first post introducing the patch-it principle, I made the claim that developers use software patches to add new behaviors to their systems in response to new information and changes in the operating environment. In this way, patching allows developers to offload the complex task of learning off the device – at least until we figure out how to build machines that can learn. In this post, I will peer into my crystal ball and describe how I see the robust design patch-it principle evolving into a mix of teaching and learning principles. There is a lot of room for future discussions, so if you see something you hate or like, speak up – it will signal that topic for future posts.

First, I do not see the software patch going away, but I do see it taking on a teaching and learning component. The software patch is a mature method of disseminating new information to fixed-function machines. I think software patches will evolve from executable code blocks to meta-code blocks. This will be essential to support multi-processing designs.

Without using meta-code, the complexity of building robust patch blocks that can handle customized processor partitioning will grow to be insurmountable as the omniscient knowledge syndrome drowns developers by requiring them to handle even more low-value complexity. Using meta-code may provide a bridge to supporting distributed or local knowledge (more explanation in a later post) processing, where the different semi-autonomous processors in a system make decisions about the patch block based on their specific knowledge of the system.

The meta-code may take on a form that is conducive to teaching rather than an explicit sequence of instructions to perform. I see devices learning how to improve what they do by observing their user or operator as well as by communicating with other similar devices. By building machines this way, developers will be able to focus more on specifying the “what and why” of a process, and the development tools will assist the system in genetically searching for and applying different coding implementations and in robustly verifying equivalence between the implementation and the specification. This may permit systems to consist of less-than-perfect parts because verifying the implementation will account for the imperfections in the system.

The possible downside of learning machines is that they will become finely tuned to a specific user and be less than desirable to another user – unless there is a means for users to carry their preferences with them to other machines. This is already manifesting in chat programs that learn your personal idioms and automagically adjust spell checking and link associations, because personal idioms do not always translate cleanly, nor are they used with the same connotation, by other people.

In order for the patch-it principle to evolve to the teach and learn principle, machines will need to develop a sense of context of self in their environment, be able to remember random details, be able to spot repetition of random details, be able to recognize sequences of events, and be able to anticipate an event based on a current situation. These are all tall orders for today’s machines, but as we build wider multiprocessing systems, I think we will stumble upon an approach to perform these tasks for less energy than we ever thought possible.

Extreme Processing Thresholds: Energy Estimation and Measurement

Friday, April 30th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

Low power operation continues to grow in importance as a product differentiator. One of the most visible examples of the importance of low power operation is chronicled in the success of the Nintendo Game Boy, which debuted in 1989. It was not the most technically advanced product of its type. It did not have the best graphics. It did not deliver the fastest performance. It did, however, deliver the most important thing significantly better than all of the other competing hand-held game devices at the time – it delivered the longest play time on a set of AA batteries. That differentiator enabled the Game Boy not only to outlive every single competing device, but also to spawn a long line of successive devices that enjoy large sales volumes.

Until recently, developers were largely left to their own devices to estimate and measure the energy consumption of their designs. Some silicon vendors over the years have offered device-specific spreadsheets to help their customers better estimate energy consumption for different operational scenarios. These types of tools require the developer to intimately understand how their system transitions between the various power saving modes. Going beyond spreadsheets, in 2008 Tensilica added a graphical user interface to its Xenergy tool that helps hardware and software developers make trade-offs that yield better energy consumption based on a cycle-accurate simulator. This year may mark an inflection point for energy estimation and measurement tools for developers.

Several companies, including Energy Micro, Hitex Development Tools, and IAR Systems, are offering, or have announced for production release this year, products that enable developers to match energy consumption with specific lines of code in their software. These tools measure the system power consumption and enable developers to make software and system level trade-offs during the software development process. They can help with identifying when peripherals are powered on but not being actively used by the system – burning precious energy for no useful work. The interfaces of these tools present the energy data graphically so that it is easier for a developer to spot the points of interest.

The Energy Micro offering, called the energyAware Profiler, interfaces via USB with the company’s EFM32 Gecko development and starter kits, and it is available now as a download. The Hitex offering, called PowerScale, measures up to four different power domains by tapping into the power supply line of each domain. The tool can track current measurements from 200nA to 1A, and it is not limited to a single type of microprocessor. The IAR Systems offering is part of the Embedded Workbench, and it is currently in beta. It samples the power supply for the whole board because the main component of the system power consumption is the peripherals rather than the microcontroller itself.
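
The kind of waste these tools expose is often as simple as a peripheral left clocked while it has nothing to do. The sketch below shows the shape of the typical fix; the clock-gating calls are hypothetical stand-ins for whatever driver API a specific microcontroller provides.

```c
/* Sketch of the kind of fix an energy profiler typically points to: a
 * peripheral left powered and clocked while idle. The calls below are
 * hypothetical stand-ins for a specific MCU's driver API. */
#include <stdint.h>

extern void     adc_clock_enable(void);
extern void     adc_clock_disable(void);
extern uint16_t adc_read_channel(uint8_t channel);

uint16_t sample_battery_voltage(void)
{
    adc_clock_enable();                     /* power the ADC only while it is needed */
    uint16_t counts = adc_read_channel(0u);
    adc_clock_disable();                    /* gate it off again instead of leaving
                                               it drawing current between samples    */
    return counts;
}
```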

These are just three recently announced software development tools offering visibility into the dynamic energy consumption of embedded systems under operating conditions. I believe there will be more such tools announced over the next year or so as low power operation takes up even more design mindshare. I expect there will also be good, complementary tutorial and tips-and-tricks material to help developers make the most of these tools in the years to come. I will highlight these resources, and how they are changing the way designers do low power design, as they become public. If you know of similar resources that I missed, please point them out here or email me at Embedded Insights.

How do you support software patches in your embedded designs?

Wednesday, April 28th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

Earlier, I posted about design-for-patching. While some patches involve fixing something that never worked, I believe most patches actually add new information to the system that was not there before the patch was applied. This means there has to be some resource headroom to 1) incorporate the new information, and 2) receive, validate, store, and activate the patch in a safe manner. For resource constrained embedded systems, these resources are the result of deliberate trade-offs.

Patching subsystems that are close to the user interface may present a straightforward way to access the physical interface ports, but I am not aware of any industry-standard “best practices” for applying patches to deeply embedded subsystems.

Please share how you support software patches in your embedded designs. Do you use the same types of interfaces from project to project – or do you make do with what is available? Do you have a standard approach to managing in-field patches – or do you require your users to ship you the devices so that you can perform the patch under controlled circumstances? How do you ensure that your patch was applied successfully, and how do you recover from failed patches?

User Interfaces: First Project

Tuesday, April 27th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on Low-Power Design.]

To launch the HMI (human-machine interface) development tool projects, I will be focusing my efforts on exploring development kits along with their accompanying software tools and APIs (application programming interface) for touch interfaces. This project includes addressing button and touch screen form factors. Microchip has graciously volunteered their mTouch development kits for the first project. We are currently in the logistics planning phase. At this time, the proposal consists of two example projects.

The first example project focuses on touch button designs by using the mTouch Capacitive Touch Evaluation Kit (part # DM183026) to develop a six (or more)-button board based on a “custom” shape and size for a typical end design. The kit contains one 16-bit and two 8-bit mother boards, based on the PIC24F, PIC16F, and PIC18F, and four daughter boards that support 8 keys, 12 matrixed keys, a 100-point slider, and a 255-point slider. The kit also includes the PICkit Serial Analyzer to connect to the PC host for the MPLAB mTouch Diagnostic Tool Plug-In software. The goal of this kit is to demonstrate the function-specific daughter boards.

The second example project ups the development complexity by focusing on a touch screen design by using the mTouch AR1000 Development Kit (part # DV102011) to configure, calibrate, and test an 8-wire analog resistive touch screen. The kit includes a 7-inch four-wire resistive touch screen and a PICkit Serial Analyzer. The development board has 4-, 5-, and 8-wire headers to connect to a touch screen for testing, and the kit includes adapter cables to support the various pinouts common for resistive touch screens.

I will be completing these projects on my own equipment to ensure the effort is based on a realistic out-of-the-box experience. I look forward to sharing my experience and thoughts about these two development kits in follow-up posts in the near future.

Which touch development kit would you like me to work with after the Microchip effort? Please continue to suggest vendors and development kits you would like me to explore in this series by posting here or emailing me at Embedded Insights.

Robust Design: Patch-It Principle – Design-for-Patching

Monday, April 26th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master.]

Patching a tire is necessary when the tire has had a part of itself forcibly torn or removed so that it is damaged and can no longer perform its primary function properly. This is also true when you are patching clothing. Patching software in embedded systems, however, is not based on replacing a component that has been ripped from the system – rather, it involves adding new knowledge to the system that was not part of the original system. Because software patching involves adding new information to the system, there is a need for extra available resources to accommodate the new information. The hardware, software, and labor resources needed to support patching are growing as systems continue to increase in complexity.

Designing to support patching involves some deliberate resource trade-offs, especially for embedded systems that do not have the luxury of idle, unassigned memory and interface resources that a desktop computer might have access to. To support patching, the system needs to be able to recognize that a patch is available, be able to receive the patch through some interface, and verify that the patch is real and authentic to thwart malicious attacks. It must also be able to confirm that there is no corruption in the received patch data and that the patch information has been successfully stored and activated without breaking the system.
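
As a rough sketch of those steps, the fragment below stages a received patch, checks its integrity and authenticity, and only then marks it for activation. Every function name here is a hypothetical placeholder – a real design would plug in its own transport, CRC, and signature scheme.

```c
/* Sketch of the patch-acceptance steps described above: receive into a staging
 * area, check integrity and authenticity, persist, then defer activation to the
 * boot code. All function names are hypothetical placeholders. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

extern size_t   receive_patch(uint8_t *buf, size_t max_len);       /* UART/CAN/OTA      */
extern uint32_t crc32(const uint8_t *buf, size_t len);             /* integrity check   */
extern bool     signature_valid(const uint8_t *buf, size_t len);   /* authenticity      */
extern bool     store_to_staging_flash(const uint8_t *buf, size_t len);
extern void     mark_staged_image_pending(void);                   /* applied on reboot */

bool accept_patch(uint8_t *buf, size_t max_len, uint32_t expected_crc)
{
    size_t len = receive_patch(buf, max_len);
    if (len == 0u)
        return false;                        /* nothing received                */
    if (crc32(buf, len) != expected_crc)
        return false;                        /* corrupted in transit            */
    if (!signature_valid(buf, len))
        return false;                        /* not authentic: reject the patch */
    if (!store_to_staging_flash(buf, len))
        return false;                        /* could not persist the image     */

    mark_staged_image_pending();             /* activation is deferred to the boot
                                                code, which can still fall back  */
    return true;
}
```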

In addition to the different software routines needed at each of these steps of the patching process, the system needs access to a hardware input interface to receive the patch data, an output interface to signal whether or not the patch was received and applied successfully, and memory to stage, examine, validate, apply, and store the patch data. For subsystems that are close to the user interface, gaining access to physical interface ports might be straightforward, but there are no industry-standard “best practices” for applying patches to deeply embedded subsystems.

It is important that the patch process does not leave the system in an inoperable state – even if there is corruption in the patch file or a loss of power to the system while applying the patch. A number of techniques designers use depend on including enough storage space in the system to house the pre- and post-patch code so that the system can confirm the new patch is working properly before releasing the storage holding the previous version of the software. The system might also employ a safe, default boot kernel, which the patching process can never change, so that if the worst happens while applying a patch, the operator can use the safe kernel to put the system into a known state that provides basic functionality and can accept a new patch file.
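
The sketch below shows one way that boot-time decision might look: the freshly patched image gets exactly one trial boot, and if it never confirms itself, the system falls back to the previous image or, failing that, to the untouchable safe kernel. The image-management functions are hypothetical placeholders, not any particular bootloader’s API.

```c
/* Sketch of a dual-image boot decision with a safe-kernel fallback. The image
 * bookkeeping functions are hypothetical placeholders. */
#include <stdbool.h>

typedef enum { IMAGE_A, IMAGE_B, SAFE_KERNEL } image_t;

extern image_t newest_image(void);
extern image_t previous_image(void);
extern bool    image_crc_ok(image_t img);
extern bool    image_confirmed(image_t img);     /* set by the app after a self-test */
extern bool    trial_boot_used(image_t img);
extern void    mark_trial_boot_used(image_t img);
extern void    jump_to(image_t img);             /* does not return */

void boot_select(void)
{
    image_t candidate = newest_image();

    if (image_crc_ok(candidate)) {
        if (image_confirmed(candidate)) {
            jump_to(candidate);                  /* known-good image              */
        } else if (!trial_boot_used(candidate)) {
            mark_trial_boot_used(candidate);     /* give the new image one chance to
                                                    run its self-test and confirm  */
            jump_to(candidate);
        }
        /* trial used but never confirmed: treat the patch as failed and fall back */
    }

    image_t fallback = previous_image();
    if (image_crc_ok(fallback) && image_confirmed(fallback))
        jump_to(fallback);

    jump_to(SAFE_KERNEL);   /* never-patched kernel that can still accept a new patch */
}
```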

In addition to receiving and applying the patch data, system designs are increasingly accommodating custom settings, so that applying the patch does not disrupt the operator customizations. Preserving the custom settings may involve more than just not overwriting the settings; it may involve performing specific checks, transformations, and configurations before completing the patch. Supporting patches that preserve customization can involve more complexity and work for the developers to seamlessly address the differences between each configuration.
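
A minimal sketch of such a settings-migration step appears below, assuming a simple versioned settings structure; the structure, fields, and version numbers are invented for illustration.

```c
/* Sketch of a settings migration performed when patched firmware first boots:
 * upgrade the stored user settings in place instead of discarding them. The
 * structure, fields, and version numbers are invented for illustration. */
#include <stdint.h>
#include <string.h>

#define SETTINGS_VERSION 2u

typedef struct {
    uint16_t version;
    uint16_t brightness;     /* existed in version 1                       */
    uint16_t sleep_timeout;  /* existed in version 1                       */
    uint16_t units;          /* new in version 2: 0 = metric, 1 = imperial */
} settings_t;

extern void load_settings(settings_t *s);        /* from EEPROM or flash */
extern void save_settings(const settings_t *s);

void migrate_settings(void)
{
    settings_t s;
    load_settings(&s);

    if (s.version == SETTINGS_VERSION)
        return;                        /* already current: leave the user's values alone */

    if (s.version == 1u) {
        s.units = 0u;                  /* fill only the new field with a default value   */
        s.version = 2u;
    } else {
        memset(&s, 0, sizeof(s));      /* unknown layout: fall back to factory defaults  */
        s.version = SETTINGS_VERSION;
        s.brightness = 50u;
        s.sleep_timeout = 300u;
    }
    save_settings(&s);
}
```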

The evolving trend for the robust design patch-it principle is that developers are building more intelligence into their patch processes. This simplifies or eliminates the learning curve for the operator to initiate a patch. Smarter patches also enable the patch process to launch, proceed, and complete in a more automated fashion without causing operators with customized settings any grief. Over time, this can build confidence in the user community so that more devices can gain the real benefit of the patch-it principle – devices can change their behavior in a way that mimics learning from their environment years before designers, as a community, figure out how to make self-reliant learning machines.