Extreme Processing Thresholds: Challenges Designing Low Power Devices

Friday, April 23rd, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

The low power thresholds for processor devices continue to fall, but what does it take to push those energy thresholds ever lower? In many cases, it is possible to reduce a processor's active current draw by migrating to a more aggressive silicon node, but this comes at the cost of higher standby power from increased leakage current. To balance lower active power against standby power, processors rely on low leakage silicon variants and optimally sized transistors within each block of the architecture.

Øyvind Janbu, CTO at Energy Micro, points out that circuits that must be enabled 100% of the time, such as power supervision circuits with brown-out and power-on-reset functions, are challenging to design to an energy budget of a few nA of current. As with all parts of the processor, chip architects trade off speed, energy draw, and accuracy in each resource block to best meet the needs of the specific function and the overall system requirements. He believes that because flash memory is power hungry and slow, it will in time be replaced by other non-volatile technologies in many cases.

Janbu also feels that some of the challenges of extremely low power chip design are similar to those faced by RF designers. He believes transistor simulation models are being pushed outside their intended operating region, so architects must specify sufficient margins for the designs to still work in volume production. The extremely low currents also create high-impedance nodes, which drives a need for more extracted-layout simulations.

Internal voltage regulators are another low power challenge as microprocessors continue to move into more aggressive silicon nodes. While using internal voltage regulators helps reduce active power consumption, the challenge lies in designing voltage regulators and voltage references with essentially zero quiescent current so as not to sacrifice standby power consumption. The voltage regulator and voltage reference are the main reason why microprocessors made in 0.18 um or smaller silicon nodes have standby current consumption in the tens of uA range.

The future direction of low power processors may center on modular architectures, because driving to extremely low power requirements increasingly requires rethinking the fundamental architecture of each module. This rethinking increases the design time and risk of the processor, especially when the architects are exploring and implementing new and unproven approaches. However, as the market finds more uses for low power devices, the increased volumes will help justify the longer design cycles and higher risk needed to push the power threshold even lower.

Question of the Week: How do you define an embedded system?

Wednesday, April 21st, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

I have always noticed a discontinuity in how people use the term embedded system. I commented on how the lack of distinction between application and embedded software might be driving us towards a skill and tools crisis because of material differences between the two types of development. My programming background began with application and system level skills. I designed and wrote my share of automation applications, many of which are still in production use more than twenty years later. I also developed system skills by developing and maintaining assemblers, compilers, and interpreters.

Eventually I found my way into the embedded world. I would say it took me at least a year to adjust to the new things I had to consider in my designs. Embedded development involved working with real world physics including understanding noise, error, and filtering. I had to learn about hysteresis, rigid-body vibrations, and how temperature and humidity can change how things operate.

I also had to come to terms with the fact that the things I worked on as an embedded developer were invisible to the end user – and everyone else for that matter. The tongue-in-cheek extreme example I use to illustrate what an embedded system is involves implementing a function with hamsters: as long as the end user never needs to know of their existence, care, feeding, healthcare, and eventual replacement, it is an embedded system.

I recognize that some people might think this a harshly limited definition of an embedded system, but I learned that the amount of memory a system has, the size of the package, the power consumption, and the cost of the system are all relative. If you use those types of metrics to declare something an embedded system, you have a problem because those thresholds shift all the time. My hamster description does not change over time.

Based on the comments I am seeing in multiple online conversations, I think there are a few categories of embedded system in use. Please share how you define an embedded system. Maybe instead of agreeing to disagree, we can discover some defining characteristics of each category that yield meaningful insight into the different skill and development tool needs of each type of developer.

Robust Design: Patch-It Principle

Monday, April 19th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

The software patch is a much maligned technique for keeping systems robust because many users perceive the majority of these patches as merely fixes for bugs that the developers should have taken care of before shipping the software. While there are many examples where this sentiment has a strong ring of truth to it, the patch-it principle is a critical approach to maintaining robust systems so that they can continue to operate within an acceptable range of behavior despite the huge variability that the real world throws at them. This post focuses on the many other reasons (rather than sloppy or curtailed design, build, and test) for patching a system, all of which share the same basis:

The software patch is currently the primary approach for a system to manifest modified behaviors in light of additional information about the environment it needs to operate within.

The basis for this statement stems from what I have referred to as the omniscient knowledge syndrome: the assumption that designers should identify and resolve all relevant issues facing a system at design time. This is a necessary assumption because contemporary embedded systems are not capable of the learning needed to determine an appropriate course of action for previously unspecified environmental conditions.

Common examples of patching are to add new capabilities to a system; to implement countermeasures to malicious attack attempts; and to implement special case processing to support interoperability across a wider range of hardware configurations, such as new combinations of video cards, system boards, and versions of operating systems.

New capabilities are added to systems experimentally and in response to how the market reacts to those experimental features. Patching enables developers to competitively distribute successful experiments to their existing base of systems without requiring their user base to buy new devices with each new feature.

A robust system should be able to counter malicious attacks, such as viruses and hacks. A perfect static defense against malicious attacks is impossible, or at least it has so far been impossible for the entire history of mankind. Attacks and countermeasures are based on responding to what the other side is doing. Patching helps mitigate device obsolescence that would otherwise ensue when malicious entities successfully compromise those systems.

The rate of evolution in the electronics market is too rapid for any developer to completely accommodate in any single project. The constant flow of new chips, boards, algorithms, communication protocols, and new ways of using devices means that some mechanism, in this case patching, is needed to allow older devices to integrate with newer ones.

In essence, the patch-it principle is a “poor man’s” approach to allow systems to learn how to behave in a given condition. The designer is the part of the embedded system that is able to learn how the world is changing and develop appropriate responses for those changes. Until embedded systems are able to recognize context within their environment, identify trends, and become expert predictors, designers will have to rely on the patch-it principle to keep their products relevant as the world keeps changing.

Extreme Processing Thresholds: Low Power On-Chip Resources

Friday, April 16th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

In the previous post in this series I pointed out that the “sweet spot” clock rate for active power consumption for some microcontrollers is lower than the maximum operating clock rate for that part. However, looking only at the rated power consumption of these microcontrollers at a steady always-on operating state ignores the fact that many low-power microcontrollers employ on-chip resources that can significantly impact the overall energy consumption of the part as the system transitions through multiple operating scenarios.

For low power applications, designers usually focus on the system's overall operating energy draw rather than its peak draw. This focus on overall power efficiency justifies the additional design, build, and testing complexity of using low power and sleep modes when the system does not need the full processing capacity of the processor. In fact, many power-constrained systems spend the majority of their operating time in some type of sleep mode and transition to full active mode only when needed. This relationship begins to hint at why a single uA/MHz benchmark is insufficient to evaluate a processor's energy performance.

There are a variety of low power modes available across processor architectures. Shutting down just the CPU and leaving all of the other on-chip resources functional is one type of sleep mode. Deeper sleep modes can turn off individual peripherals, or all of them, until the only on-chip current draw is for RAM retention. Always-on resources may include a power supervisor circuit with brown-out and power-on-reset functions; these functions must be enabled 100% of the time because the events they are designed to detect cannot be predicted.

So in addition to active power draw, low power designers need to understand the system's static or leakage current draw when the system is inactive. Another important metric is wake-up time, the amount of time it takes the system to transition from a low-power mode to the active operating mode, largely because the system clock needs to stabilize. The longer it takes the system clock to stabilize, the more energy is wasted because the system performs no useful work during that time.
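
To see why sleep current and wake-up time matter as much as the active current figure, consider a back-of-the-envelope average-current calculation for a duty-cycled system. This is a minimal sketch; all of the numbers are illustrative assumptions, not values from any datasheet.

    #include <stdio.h>

    /* Illustrative numbers only -- not taken from any specific datasheet. */
    #define I_ACTIVE_UA   3000.0   /* active current at the chosen clock rate      */
    #define I_SLEEP_UA       1.5   /* deep-sleep current with RAM retention        */
    #define T_ACTIVE_MS      2.0   /* useful work per wake-up                      */
    #define T_WAKEUP_MS      0.5   /* clock stabilization: active current, no work */
    #define T_PERIOD_MS   1000.0   /* one wake-up per second                       */

    int main(void)
    {
        /* Wake-up time draws active-level current while doing no useful work,
           so it is charged against the active portion of the duty cycle. */
        double t_on    = T_ACTIVE_MS + T_WAKEUP_MS;
        double t_sleep = T_PERIOD_MS - t_on;
        double i_avg   = (I_ACTIVE_UA * t_on + I_SLEEP_UA * t_sleep) / T_PERIOD_MS;

        printf("average current: %.1f uA\n", i_avg);   /* about 9.0 uA */
        printf("share spent on wake-up: %.0f%%\n",
               100.0 * (I_ACTIVE_UA * T_WAKEUP_MS) / (i_avg * T_PERIOD_MS));
        return 0;
    }

Under these assumptions, roughly a sixth of the average current pays for clock stabilization alone, which is why cutting wake-up time can matter more than shaving a few uA/MHz off the active figure.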

A DMA controller is an on-chip resource that affects a system's power consumption by offloading the task of moving data from one location to another, say from a peripheral to memory, from the expensive CPU to the much cheaper-to-operate DMA controller. The following chart from an Atmel whitepaper demonstrates the value of using a DMA controller to offload the CPU, especially as the data rate increases. However, using the DMA effectively can add complexity for the developer because configuring the DMA controller is not an automated process.

[Figure 100416-dma.jpg: Atmel chart showing the energy benefit of offloading data transfers to the DMA controller as the data rate increases]

Some microcontrollers, such as those from Atmel and Energy Micro, allow developers to configure the DMA controller and peripherals, through some type of peripheral controller, so that they can collect or transmit data autonomously without waking the CPU. On some devices, the autonomous data transfer can even include direct transfers from one peripheral to another. The following chart from Energy Micro's technology description demonstrates the type of energy reduction autonomous peripherals can deliver. The caveat is that the developer needs to create the highly autonomous setup, as there are no tools that can perform this task automatically at this time.

[Figure 100416-autonomous.jpg: Energy Micro chart showing the energy reduction from autonomous peripheral operation]
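
A sketch of what such an autonomous setup can look like appears below. The register names, addresses, and helper functions are hypothetical placeholders invented for illustration; they stand in for vendor-specific peripherals and are not any real device's memory map or API. The point is simply that a timer triggers the ADC, the DMA controller drains the result register into RAM, and the CPU wakes only once per block of samples.

    #include <stdint.h>

    /* Hypothetical peripheral layouts and addresses -- for illustration only. */
    typedef struct { volatile uint32_t CTRL, PERIOD; } hypo_timer_t;
    typedef struct { volatile uint32_t CTRL, RESULT; } hypo_adc_t;
    typedef struct { volatile uint32_t CTRL, SRC, DST, COUNT; } hypo_dma_t;

    #define TIMER0 ((hypo_timer_t *)0x40000000u)
    #define ADC0   ((hypo_adc_t   *)0x40001000u)
    #define DMA0   ((hypo_dma_t   *)0x40002000u)

    enum { ADC_TRIG_TIMER0 = 1u << 0, DMA_TRIG_ADC0 = 1u << 0,
           DMA_IRQ_ON_DONE = 1u << 1, DMA_ENABLE    = 1u << 2 };

    #define SAMPLE_COUNT 64u
    static volatile uint32_t sample_buf[SAMPLE_COUNT];

    extern void process_samples(volatile uint32_t *buf, uint32_t n);
    extern void enter_deep_sleep(void);    /* CPU stopped; DMA and ADC stay clocked */

    void setup_autonomous_sampling(void)
    {
        TIMER0->PERIOD = 1000u;            /* periodic hardware trigger for the ADC  */
        ADC0->CTRL     = ADC_TRIG_TIMER0;  /* conversions start without CPU activity */

        DMA0->SRC   = (uint32_t)(uintptr_t)&ADC0->RESULT;
        DMA0->DST   = (uint32_t)(uintptr_t)sample_buf;
        DMA0->COUNT = SAMPLE_COUNT;
        DMA0->CTRL  = DMA_TRIG_ADC0 | DMA_IRQ_ON_DONE | DMA_ENABLE;

        enter_deep_sleep();                /* CPU sleeps while samples accumulate    */
    }

    /* The CPU wakes only here, once per SAMPLE_COUNT conversions, processes
       the block, re-arms the transfer, and returns to sleep. */
    void DMA0_IRQHandler(void)
    {
        process_samples(sample_buf, SAMPLE_COUNT);
        DMA0->COUNT = SAMPLE_COUNT;
        DMA0->CTRL |= DMA_ENABLE;
    }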

On-chip accelerators not only speed up the execution of frequent computations or data transformations, they also do so for less energy than performing those functions in software. Other energy-saving on-chip resources include ROM-based firmware, such as that being adopted by NXP and Texas Instruments on some of their parts. There are countless approaches available to chip architects to minimize energy consumption, but they each involve trade-offs in system speed, current consumption, and accuracy that suit different types of applications differently. This makes it difficult to develop a single benchmark for comparing energy consumption between processors that overlap in capabilities but target slightly different application spaces.

Do you use formal selection criteria when choosing software languages and programming tools?

Wednesday, April 14th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

Apple’s recent change to section 3.3.1 of the iPhone Developer Program License Agreement for the 4.0 SDK explicitly limits developers to using

“…Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs…”

While this limitation applies to application code, my own project experience for embedded control systems suggests that other development teams also impose analogous selection criteria on language and development tool choices. In some cases, our choices were limited to a short approved list of languages and development tool environments that applied across many suppliers that were contributing different subsystems to a single, long-lived end-system.

Understanding what criteria developers use to limit language and tool selection is more than an academic concern. In countless presentations I have seen software development tools rank at or near the top in importance when selecting a processor as the final candidate for a design. Suggested reasons why development tools figure so prominently in processor choices include familiarity with the toolset and its specific quirks, which avoids a new learning curve and preserves an aggressive design schedule.

Technical reasons for specific selections might include execution efficiency of the compiled code, compilation speed, code size efficiency of the generated executable, as well as the tool’s flexibility to spot-optimize all of these considerations. The tool’s static and dynamic analysis capabilities, as well as traceability and testing components may also be important considerations.

I suspect that the selection criteria across embedded design teams vary widely and depend on the target applications, the size and maturity of the development team, and the expected maintenance lifecycle of the end product. For example, if an end-product has a 10, 20, or even 30 year maintenance lifecycle (such as an automobile, aircraft, or spacecraft), then it is imperative to choose a language and set of tools that have a high chance of being supported throughout that lifecycle. Another important consideration for such long-lived products is whether there will continue to be a sufficient pool of skilled engineers with working experience in the selected language into the future.

With this in mind, I am hoping to uncover common sets of reasoning behind formal language and tool selection criteria across the range of embedded development projects. Please share any formal selection criteria you use and the types of applications you use them for.

User Interfaces: Introduction

Tuesday, April 13th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on Low-Power Design]

The pursuit of better user interfaces constantly spawns new innovative ideas to make it easier for a user to correctly, consistently, and unambiguously direct the behavior of a machine. For this series, I propose to explore logical and direct as two user interface categories. Both categories are complex enough to warrant a full discussion on their own.

I define logical user interfaces as the subsystems that manage the signals and feedback that exist within the digital world of software after real-world filtering. Logical user interfaces focus on how easily user and machine can teach and learn communication mechanisms, especially through feedback, so that the user can quickly, accurately, and intuitively direct the system with a minimum of stumbling to find the way to tell the system what to do.

I define direct user interfaces as the subsystems that collect real-world signals at the point where user and machine directly interface with one another. For a keyboard, this would include the physical key switches. For mouse-based interfaces, this would include the actual mouse mechanism, including buttons, wheels, and position sensing components. For touch interfaces, this would include the touch surface and sensing mechanisms. For direct user interface subsystems, recognizing and filtering real-world noise is an essential task.

A constant challenge for direct user interfaces is how to accurately infer a user's true intent in a noisy world. Jack Ganssle's "Guide to Debouncing" is a good indication of the complexity that designers still must tame to reconcile the variable, real-world behavior of a simple mechanical switch with the user's expectation of simple, accurate operation every time the switch is toggled to communicate with the system.
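
To make the scale of the problem concrete, here is a minimal sketch of one common software approach, a counter-based debouncer sampled from a periodic tick. It is only one of the techniques a guide like Ganssle's surveys, and the sample period and threshold below are illustrative assumptions.

    #include <stdbool.h>
    #include <stdint.h>

    /* Counter-based debounce: call from a periodic tick (e.g., every 5 ms).
       The raw input must disagree with the reported state for DEBOUNCE_TICKS
       consecutive samples before the reported state changes. */
    #define DEBOUNCE_TICKS 4u

    extern bool read_raw_switch(void);   /* hypothetical GPIO read */

    bool debounced_switch_state(void)
    {
        static bool    stable_state = false;
        static uint8_t count        = 0;

        bool raw = read_raw_switch();

        if (raw == stable_state) {
            count = 0;                   /* input agrees with the reported state */
        } else if (++count >= DEBOUNCE_TICKS) {
            stable_state = raw;          /* input held long enough: accept it    */
            count = 0;
        }
        return stable_state;
    }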

As systems employ more complex interface components than mere switches, the amount of real-world input variability these systems must accommodate increases. This is especially true for rapidly evolving types of user interfaces such as touch screens and speech recognition. Similar to the debounce example, these interfaces rely on increasing amounts of software processing to better distinguish real-world signal from real-world noise.

To begin this series, I will focus mostly on the latter category of direct user interfaces. I believe understanding the challenge of extracting user intent from a sea of real-world noise is essential before discussing how to address the types of ambiguity and uncertainty that logical user interfaces are subject to. Another reason to start with direct user interfaces is that over the past year there has been an explosion of semiconductor companies that have introduced, expanded, or evolved their touch interface offerings.

To encourage a wider range of developers to adopt their touch interface solutions, these companies are offering software development ecosystems around their mechanical and electrical technologies to make it easier for developers to add touch interfaces to their designs. This is the perfect time to examine their touch technologies and evaluate the maturity of their surrounding development ecosystems. I also propose to explore speech recognition development kits in a similar fashion.

Please help me identify touch and speech recognition development kits to try out and report back to you here. My list of companies to approach for touch development kits includes (in alphabetical order) Atmel, Cypress, Freescale, Microchip, Silicon Labs, Synaptics, and Texas Instruments. I plan to explore touch buttons and touch screen projects for the development kits; companies that support both will have the option to support one or both types of project.

My list of companies to approach for speech recognition development kits includes (in alphabetical order) Microsoft, Sensory, and Tigal. I have not scoped the details for a project with these kits just yet, so if you have a suggestion, please share.

Please help me prioritize which development kits you would like to see first. Your responses here or via email will help me to demonstrate to the semiconductor companies how much interest you have in their development kits.

Please suggest vendors and development kits you would like me to explore first in this series by posting here or emailing me at Embedded Insights.

Robust Design: Sandbox Principle – Playing Nicely

Monday, April 12th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

I originally planned this post to be about the "Patch-It" principle of robust design. But I am accelerating the "play nicely" sandbox principle to this post to use the change in Apple's iPhone Developer Program License Agreement for the iPhone OS 4 SDK, section 3.3.1, as a timely example of one approach to getting third party software to play nicely together on the same system. Texas Instruments' xDAIS (eXpressDSP Algorithm Interoperability Standard) is the other example we will use to explore the "play nicely" sandbox principle.

The play nicely sandbox principle is most relevant in systems with multiple components that share system resources. Even systems that provide completely dedicated resources, such as memory and peripherals, to each component may still share timing and interrupt processing. If the components in a system are built without any consideration of how to play nicely with other components, there is a risk that one component can trash another component's resources and cause erroneous system behavior.

Memory management and protection units are hardware resources, available on some processors, that allow a managing RTOS or operating system to keep different software components from trashing each other. Policy constraints, through standards, coding guidelines, and APIs (application programming interfaces), are another approach to enforcing the "play nicely" design principle.

Texas Instruments introduced the standard that has evolved into xDAIS and xDM (eXpressDSP Digital Media) in 1999. These standards help developers to specify and build multifunction DSP-based applications that integrate algorithms from multiple sources into a single system. A key goal of the standards is to significantly reduce the integration time for developers by enabling them to avoid selecting algorithm implementations that can trash each other’s resources.

The standards specify a set of coding conventions and APIs intended to eliminate the integration problems caused when algorithms hard-code access to system resources that are shared with other components in the system. The xDM standard also enables developers to switch an algorithm implementation to a different source when a change in functionality or performance is needed. In addition to the resource sharing interfaces, xDAIS also specifies 46 "common sense" coding guidelines that algorithms must follow, such as being reentrant or avoiding C programming techniques that are prone to introducing errors.
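
To make the spirit of these conventions concrete, below is a simplified sketch of the kind of interface such a standard encourages. The structure and function names are invented for illustration and are not the actual xDAIS interface; the essential ideas are that an algorithm declares its memory needs rather than allocating them itself, and that all state lives in caller-provided buffers so the implementation stays reentrant.

    #include <stddef.h>

    /* Invented interface for illustration -- not the real xDAIS API. The
       algorithm never calls malloc() or touches fixed addresses; it describes
       its memory needs and the system integrator supplies the buffers. */
    typedef struct {
        size_t size;        /* bytes required                        */
        size_t alignment;   /* required alignment                    */
        int    on_chip;     /* nonzero: request fast internal memory */
    } mem_request_t;

    typedef struct {
        /* Report how many buffers are needed and their attributes. */
        int   (*query_memory)(mem_request_t *reqs, int max_reqs);
        /* Bind caller-provided buffers to a new instance; all state lives in
           those buffers, which keeps the algorithm reentrant by construction. */
        void *(*create)(void *const bufs[], int num_bufs);
        /* Processes (instance, input) into output; no globals touched. */
        int   (*process)(void *instance, const short *in, short *out, size_t n);
    } algorithm_vtable_t;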

These types of standards benefit the entire development supply chain. Texas Instruments processor architects can better justify building in components that support the standards. Third party algorithm providers have a standardized way to describe the resources that their implementation needs. This makes it easier for developers and system integrators to compare algorithm implementations from multiple sources.

The recent change in Apple’s iPhone Developer Program License Agreement represents a refinement of a similar policy constraint approach to enforce playing nicely together. The entire text of section 3.3.1 of the iPhone Developer Program License Agreement prior to the iPhone OS 4 SDK reads as:

3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs.

The new wording for section 3.3.1 of the iPhone Developer Program License Agreement that developers must agree to before downloading the 4.0 SDK beta reads as:

3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

There have been a number of discussions about the change. John Gruber closes his insightful comments about the change with “My opinion is that iPhone users will be well-served by this rule. The App Store is not lacking for quantity of titles.” In an email conversation, Greg Slepak said to Steve Jobs “I don’t think Apple has much to gain with 3.3.1, quite the opposite actually.” Steve’s reply was “We’ve been there before, and intermediate layers between the platform and the developer ultimately produces sub-standard apps and hinders the progress of the platform.”

The reason I bring up the Apple change in this post is that, ignoring all of the business posturing hype, it is a consistent and explicit clarification of how Apple plans to enforce the play nicely principle on its platform. It is analogous to Texas Instruments' xDAIS standard, except that Apple makes it clear that non-compliance is prohibited, whereas complying with the xDAIS standard is not a requirement for using or providing an algorithm implementation.

I suspect it is essential that Apple have an explicit and enforceable play nicely mechanism in place to implement their vision of multitasking on the iPhone and iPad. I hope to be able to expand on this topic in the next sandbox posting after posting about the other types of robust design principles.

Extreme Processing Thresholds: Low Power #2

Friday, April 9th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

In the previous post in this series I asked whether reporting uA/MHz is an appropriate way to characterize the energy profile of a processor. In this post, I assume uA/MHz is appropriate for you and offer some suggestions of additional information you might want processor vendors to include with this benchmark when they use it. I will explore how uA/MHz is insufficient for many comparisons in the follow-on post in this series.

One problem with reporting a processor's power draw as uA/MHz is that this value is not constant across the entire operating range of the processor. Consider the chart for the Texas Instruments MSP430F5438A operating at 3V, executing from 256-kbyte flash, and with an integrated LDO. This processor has an operating range up to 25MHz, and its current draw ranges from 230 to 356 uA/MIPS across that range. Additionally, the energy sweet spot for this device is at 8MHz; using the part at higher (and lower) clock rates consumes more energy per unit of processing performance.

Adrian Valenzuela, MSP430 MCU product marketing engineer at Texas Instruments, shares that many designers using this part operate it at its 8MHz energy sweet spot precisely because it is more energy efficient at that clock rate than at its highest operating speed.

[Figure 100409-ti-graph.jpg: Texas Instruments chart of MSP430F5438A current draw per MIPS across its operating clock range]

The chart for Microchip’s PIC16LF1823 device illustrates another way to visualize the energy sweet spot for a processor. In this example, the energy sweet spot is at the “knee” in the curve, which is at approximately 16 MHz – again short of the device’s maximum operating clock rate of 32 MHz. 

[Figure 100409-microchip-graph.jpg: Microchip chart of PIC16LF1823 current draw versus clock rate, showing the knee near 16 MHz]

At a minimum, if a processor vendor is going to specify a uA/MHz (or uA/MIPS) metric, it should also specify the operating frequency at which the device achieves that energy efficiency sweet spot. To provide a sense of the processor's energy efficiency across the full operating range, the vendor could also include the uA/MHz metric at the device's highest operating frequency; the implied assumption is that the energy efficiency varies with clock rate in some proportion between these two operating points.
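
One way to use the two data points is a rough charge-per-task comparison: multiply each uA/MIPS figure by the time the task occupies the CPU and add the sleep current for the rest of the period. The sketch below reuses the MSP430F5438A figures quoted above, but the mapping of 230 uA/MIPS to 8MHz and 356 uA/MIPS to 25MHz, the 1 MIPS-per-MHz assumption, the task size, and the sleep current are all illustrative assumptions.

    #include <stdio.h>

    /* Charge (in microcoulombs) to run a fixed workload once per period and
       sleep the rest of the time. Assumes roughly 1 MIPS per MHz. */
    static double charge_uc(double ua_per_mips, double mips,
                            double task_minstr, double sleep_ua, double period_s)
    {
        double active_s  = task_minstr / mips;     /* seconds spent running      */
        double active_ua = ua_per_mips * mips;     /* current draw while running */
        double sleep_s   = period_s - active_s;
        return active_ua * active_s + sleep_ua * sleep_s;   /* uA * s = uC */
    }

    int main(void)
    {
        double task = 0.5;   /* 0.5 million instructions per wake-up (assumed) */
        double q8   = charge_uc(230.0,  8.0, task, 1.5, 1.0);
        double q25  = charge_uc(356.0, 25.0, task, 1.5, 1.0);
        printf("8 MHz : %.1f uC per period\n", q8);    /* about 116 uC */
        printf("25 MHz: %.1f uC per period\n", q25);   /* about 179 uC */
        return 0;
    }

Under these assumptions the same work costs roughly 50 percent more charge per period at 25MHz, which is the practical meaning of the sweet spot.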

Using a single-value uA/MHz energy metric is further complicated when you consider usage profiles that include waking up from standby or low power modes. In the next post in this series I will explore the challenges of comparing energy efficiency between different processors when the benchmarking parameters differ, such as what software is executing, how efficient the compiler and memory system are, and which peripherals are active.

Question of the Week: Is robotics engineering different enough from embedded engineering to warrant being treated as a separate discipline?

Wednesday, April 7th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

Robotics Engineering seems to be gaining momentum as a formal engineering discipline. Worcester Polytechnic Institute just became the third U.S. university to offer a doctoral program in Robotics Engineering; the university also offers Bachelor's and Master's programs in the discipline. The interdisciplinary programs draw on faculty from the Computer Science, Electrical and Computer Engineering, and Mechanical Engineering departments. I fear, though, that there is ambiguity about the type of engineering that goes into building robots versus "smart" embedded systems.

When I worked on a Robotics hands-on project, I noticed parallels between the issues designers have to address regardless of whether they are working on robotic designs or embedded semi-autonomous subsystems. Additionally, relying on interdisciplinary skills is not unique to robotics – many embedded systems also rely on the same sets of interdisciplinary skills.

From my own experience working with autonomous vehicles, I know that these systems can sense the world in multiple ways (for example, inertially and visually), have a set of goals to accomplish, have a means to move through, interact with, and affect the world around them, and are "smart enough" to adjust their behavior based on how the environment changes. We never referred to these as robots, and I never thought to apply the word robot to them until I worked on this hands-on project.

Defining what makes something a robot is not clearly established. I found a description for robots from the Robot Institute of America (1979) that says a robot is “A reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks.” Our autonomous vehicles met that description. They were reprogrammable and they could manipulate the system through six degrees of freedom to accomplish a variety of tasks. Despite this, I think it would still be difficult to get people to call them robots.

Additionally, it seems there are many embedded subsystems, such as the braking systems or stability-control systems resident in many high-end automobiles, that might also fit this description—but we do not call them robots either. Even my clothes-washing machine can sense and change its behavior based on how the cleaning cycle is or is not proceeding according to a predicted plan; the system can compensate for many anomalous behaviors. These systems can sense the world in multiple ways, they make increasingly complex decisions as their designs continue to mature, they meaningfully interact with the physical world, and they adjust their behavior based on arbitrary changes in the environment.

It seems to me that the principles identified as fundamental to a robotics curriculum should be taught to all engineers and embedded developers – not just robotics majors. Do you think this is a valid concern? Are there any skills that are unique to robotics engineering that warrant a new specialized curriculum versus being part of an embedded engineering curriculum?

Robust Design: Sandbox Principle

Monday, April 5th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

Before diving deeper into the fault tolerance robust design principle, I will present three other robust design principles. This post addresses what I call the Sandbox principle, although I have heard other people use the term walled garden in a similar fashion.

Sandbox in this context refers to my experience as a parent of small children playing at the park. The park was large and there were plenty of dangers to watch out for (including rattlesnakes), but when the children were playing in the sandbox area, it felt like the urgency of protecting the children eased up a bit. The environment was reasonably well controlled and the sandbox was designed to minimize the types and severity of injuries that could occur while in the sandbox.

Apple products seem to favor the sandbox design principle to great success. For example, the Mac OS graphical interface constrains what kinds of requests a user can make of the system. You cannot make the system do anything that the graphical interface does not explicitly support; this in turn enables Apple to tout that its systems are more reliable and stable than Windows systems, even though Windows gives users more options in how to specify a task request. In this case, fewer choices contribute to higher reliability.

The iPhone implements constraints that contribute to stability and suggest a bias towards static and deterministic operations. For example, you can only open eight web documents in Safari at a time on a second-generation iPhone, as if the web pages are held in fixed, statically allocated buffers. If you never need more than eight web documents open at once, this limit will not affect you except to contribute to more predictable behavior of the overall system. If you need more than eight web pages open at once, you will need to find a workaround using bookmarks as temporary page holders.
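
A minimal sketch of the fixed-slot allocation pattern this behavior suggests appears below; the slot count and record layout are illustrative guesses, not a description of Apple's implementation. The appeal of the pattern is that memory use has a hard, compile-time upper bound and there is no heap fragmentation to degrade the system over time.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Every possible "open document" slot is allocated at build time. */
    #define MAX_OPEN_DOCS    8        /* illustrative fixed limit        */
    #define DOC_STATE_BYTES  4096     /* illustrative per-document state */

    typedef struct {
        bool in_use;
        char state[DOC_STATE_BYTES];
    } doc_slot_t;

    static doc_slot_t doc_pool[MAX_OPEN_DOCS];

    doc_slot_t *doc_open(void)
    {
        for (size_t i = 0; i < MAX_OPEN_DOCS; i++) {
            if (!doc_pool[i].in_use) {
                doc_pool[i].in_use = true;
                memset(doc_pool[i].state, 0, DOC_STATE_BYTES);
                return &doc_pool[i];
            }
        }
        return NULL;   /* the ninth request fails: a slot must be freed first */
    }

    void doc_close(doc_slot_t *slot)
    {
        if (slot != NULL)
            slot->in_use = false;
    }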

The iPad does not support Adobe Flash Player. Morgan Adams shares an interesting perspective on why this is so:

“Current Flash sites could never be made to work well on any touchscreen device, and this cannot be solved by Apple, Adobe, or magical new hardware. That’s not because of slow mobile performance, battery drain or crashes. It’s because of the hover or mouseover problem.”

This statement and its accompanying details support the sandbox principle of limiting system options to ensure an optimal experience and to reduce the complexity that would be required to handle degraded operating scenarios.

Lastly, Apple's constraints on how to replace the batteries for the iPhone and iPad minimize the risk of the end user using inappropriate batteries and equipment, which helps control the risk of exploding batteries. The new iPad displays a "Not charging" message when the charging source on the USB port is inappropriate. This suggests there is a smart controller in the charging circuit that evaluates the charging source and refuses to route charge to the battery if the source is insufficient (and possibly if the charge would be too fast or too high, but I am speculating at this point). This is similar to how we implemented a battery charger/controller for an aircraft project I worked on, and it is yet another example of high reliability techniques finding their way into consumer products.
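
For what it is worth, the gating logic such a controller would need is simple to sketch. The thresholds and helper functions below are assumptions invented for illustration, not a description of Apple's (or anyone's) actual charging hardware; the pattern is simply "validate the source before routing any charge."

    #include <stdbool.h>
    #include <stdint.h>

    #define MIN_SOURCE_MA   500u    /* below this: show "Not charging"   */
    #define MAX_SOURCE_MA  2100u    /* above this: refuse as out of spec */

    extern uint32_t measure_source_capability_ma(void);  /* hypothetical HAL */
    extern void     set_charge_path(bool enabled);
    extern void     show_not_charging_indicator(bool shown);

    void evaluate_charge_source(void)
    {
        uint32_t source_ma = measure_source_capability_ma();
        bool ok = (source_ma >= MIN_SOURCE_MA) && (source_ma <= MAX_SOURCE_MA);

        set_charge_path(ok);                /* never route charge from a bad source */
        show_not_charging_indicator(!ok);   /* user-visible "Not charging" state    */
    }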

Do you have any comments on this robust design principle? What are some other examples of products that employ the sandbox design principle?