Entries Tagged ‘Sandbox Principle’

Robust Design: Sandbox Principle – Playing Nicely

Monday, April 12th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

I originally planned this post to be about the “Patch-It” principle of robust design. But I am accelerating the “play nicely” sandbox principle to this post so I can use the change in Apple’s iPhone Developer Program License Agreement for the iPhone OS 4 SDK, section 3.3.1, as a timely example of one approach to getting third-party software to play nicely together on the same system. Texas Instruments’ xDAIS (eXpressDSP Algorithm Interoperability Standard) is the other example we will use to explore the “play nicely” sandbox principle.

The play nicely sandbox principle is most relevant in systems where multiple components share system resources. Even systems that provide completely dedicated resources, such as memory and peripherals, to each component may still share timing and interrupt processing. If the components in a system are built without any consideration of how to play nicely with the others, there is a risk that one component can trash another component’s resources and cause erroneous system behavior.

Memory management and protection units are hardware resources, available on some processors, that allow the RTOS or operating system managing them to protect software components from trashing each other. Policy constraints – through standards, coding guidelines, and APIs (application programming interfaces) – are another approach to enforcing the “play nicely” design principle.
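To put a concrete picture behind “protect software components from trashing each other,” here is a minimal, host-runnable sketch – the names are hypothetical and not tied to any particular RTOS or processor – of the policy that a kernel owning the memory protection hardware enforces: each component is granted a region, and accesses outside that region are refused rather than silently corrupting a neighbor.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical descriptor the kernel keeps for each software component:
 * the single RAM region the component is allowed to touch. A real memory
 * protection unit enforces this in hardware; the check below only models
 * the policy so the idea is easy to see. */
typedef struct {
    const char *name;
    uintptr_t   region_base;
    size_t      region_size;
} component_t;

/* Returns true only if the requested access stays inside the component's
 * granted region. */
static bool access_allowed(const component_t *c, uintptr_t addr, size_t len)
{
    return addr >= c->region_base &&
           (addr + len) <= (c->region_base + c->region_size);
}

int main(void)
{
    static uint8_t audio_buf[256];
    component_t audio = { "audio", (uintptr_t)audio_buf, sizeof(audio_buf) };

    /* An in-bounds write is allowed; a write past the end of the region is
     * refused instead of silently trashing another component's memory. */
    printf("in-bounds access allowed: %d\n",
           access_allowed(&audio, (uintptr_t)audio_buf, 64));
    printf("overrunning access allowed: %d\n",
           access_allowed(&audio, (uintptr_t)audio_buf + 200, 128));
    return 0;
}
```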

Texas Instruments introduced the standard that has evolved into xDAIS and xDM (eXpressDSP Digital Media) in 1999. These standards help developers to specify and build multifunction DSP-based applications that integrate algorithms from multiple sources into a single system. A key goal of the standards is to significantly reduce the integration time for developers by enabling them to avoid selecting algorithm implementations that can trash each other’s resources.

The standards specify a set of coding conventions and APIs with the intention of eliminating the integration problems caused when algorithms hard-code access to system resources that are shared with other components in the system. The xDM standard also enables developers to swap in an algorithm implementation from a different source when a change in functionality or performance is needed. In addition to the resource sharing interfaces, xDAIS also specifies 46 “common sense” coding guidelines that algorithms must follow, such as being reentrant and avoiding C programming techniques that are prone to introducing errors.
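To give a flavor of this approach – the following is only an illustrative sketch in the spirit of the standard, not TI’s actual xDAIS interfaces – an algorithm can publish a table describing the memory it needs and let the integrator’s framework grant that memory, rather than allocating buffers or hard-coding addresses on its own:

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical memory-request record, loosely inspired by the idea behind
 * xDAIS: the algorithm *describes* what it needs, and the integrator's
 * framework decides where that memory actually comes from. */
typedef struct {
    size_t size;
    size_t alignment;
    int    persistent;   /* 1 = must survive between calls, 0 = scratch */
} mem_request_t;

/* The algorithm exports a table of requests instead of allocating directly
 * or pointing at fixed addresses. */
static const mem_request_t filter_requests[] = {
    { 512, 8, 1 },   /* coefficient and state memory */
    { 256, 8, 0 },   /* scratch work buffer */
};

int main(void)
{
    /* The framework (not the algorithm) satisfies each request, so two
     * independently supplied algorithms can never hard-code the same buffer. */
    void *granted[2];
    for (size_t i = 0; i < 2; i++) {
        granted[i] = aligned_alloc(filter_requests[i].alignment,
                                   filter_requests[i].size);
        printf("request %zu: %zu bytes -> %p\n",
               i, filter_requests[i].size, granted[i]);
    }
    for (size_t i = 0; i < 2; i++)
        free(granted[i]);
    return 0;
}
```

On a real DSP, the framework would use these descriptions to decide between scarce on-chip memory and external memory; here, plain heap allocations stand in for that decision.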

These types of standards benefit the entire development supply chain. Texas Instruments processor architects can better justify building in components that support the standards. Third party algorithm providers have a standardized way to describe the resources that their implementation needs. This makes it easier for developers and system integrators to compare algorithm implementations from multiple sources.

The recent change in Apple’s iPhone Developer Program License Agreement represents a refinement of a similar policy constraint approach to enforce playing nicely together. The entire text of section 3.3.1 of the iPhone Developer Program License Agreement prior to the iPhone OS 4 SDK reads as:

3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs.

The new wording for section 3.3.1 of the iPhone Developer Program License Agreement that developers must agree to before downloading the 4.0 SDK beta reads as:

3.3.1 — Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs. Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine, and only code written in C, C++, and Objective-C may compile and directly link against the Documented APIs (e.g., Applications that link to Documented APIs through an intermediary translation or compatibility layer or tool are prohibited).

There have been a number of discussions about the change. John Gruber closes his insightful comments about the change with “My opinion is that iPhone users will be well-served by this rule. The App Store is not lacking for quantity of titles.” In an email conversation, Greg Slepak said to Steve Jobs “I don’t think Apple has much to gain with 3.3.1, quite the opposite actually.” Steve’s reply was “We’ve been there before, and intermediate layers between the platform and the developer ultimately produces sub-standard apps and hinders the progress of the platform.”

The reason I bring up the Apple change in this post is that, ignoring all of the business posturing hype, it is a consistent and explicit clarification of how Apple plans to enforce the play nicely principle on their platform. It is analogous to Texas Instruments’ xDAIS standard except that Apple makes it clear that non-compliance is prohibited, whereas complying with the xDAIS standard is not a requirement for using or providing an algorithm implementation.

I suspect it is essential that Apple have an explicit and enforceable play nicely mechanism in place to implement their vision of multitasking on the iPhone and iPad. I hope to be able to expand on this topic in the next sandbox posting after posting about the other types of robust design principles.

Robust Design: Sandbox Principle

Monday, April 5th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

Before diving deeper into the fault tolerance robust design principle, I will present three other robust design principles. This post will address what I call the Sandbox principle, although I have heard other people use the term walled garden in a similar fashion.

Sandbox in this context refers to my experience as a parent of small children playing at the park. The park was large and there were plenty of dangers to watch out for (including rattlesnakes), but when the children were playing in the sandbox area, it felt like the urgency of protecting the children eased up a bit. The environment was reasonably well controlled and the sandbox was designed to minimize the types and severity of injuries that could occur while in the sandbox.

Apple products seem to favor the sandbox design principle to great success. For example, the Mac OS’ graphical interface constrains what kinds of requests a user can make of the system. You cannot make the system do anything that the graphical interface does not explicitly support; this in turn enables Apple to tout that their systems are more reliable and stable than Windows systems, despite the fact that Windows gives users more options in how to specify a task request. In this case, fewer choices contribute to higher reliability.

The iPhone implements constraints that contribute to stability and suggest a bias towards static and deterministic operations. For example, you can only open eight web documents in Safari at a time on a second generation iPhone – as if the web pages are held in fixed, statically allocated buffers. If you never need more than eight web documents open at once, this limit will not affect you, except to contribute to more predictable behavior of the overall system. If you need more than eight web pages open at once, you will need to find a workaround, such as using bookmarks as temporary page holders.
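I have no insight into how Safari is actually implemented, but the following sketch (with hypothetical names and illustrative limits) shows the static-allocation bias that an eight-page limit suggests: a fixed pool of page slots and an open call that simply refuses a ninth request instead of growing without bound.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_PAGES 8   /* fixed limit chosen at design time */

/* Statically allocated page slots: worst-case memory use is known up front. */
static struct {
    bool in_use;
    char url[128];
} page_slots[MAX_PAGES];

/* Hypothetical "open page" call: returns the slot index, or -1 when the
 * fixed pool is exhausted (the user must close a page first). */
static int open_page(const char *url)
{
    for (int i = 0; i < MAX_PAGES; i++) {
        if (!page_slots[i].in_use) {
            page_slots[i].in_use = true;
            strncpy(page_slots[i].url, url, sizeof(page_slots[i].url) - 1);
            return i;
        }
    }
    return -1;   /* predictable refusal instead of unbounded allocation */
}

int main(void)
{
    for (int i = 0; i < 9; i++) {
        int slot = open_page("http://example.com");
        printf("page %d -> %s\n", i + 1, slot < 0 ? "refused" : "opened");
    }
    return 0;
}
```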

The iPad does not support Adobe Flash Player. Morgan Adams shares an interesting perspective on why this is so:

 “Current Flash sites could never be made work well on any touchscreen device, and this cannot be solved by Apple, Adobe, or magical new hardware. That’s not because of slow mobile performance, battery drain or crashes. It’s because of the hover or mouseover problem.”

This statement and the accompanying details support the sandbox principle of limiting system options to ensure an optimal experience and to reduce the complexity that would be required to handle degraded operating scenarios.

Lastly, Apple’s constraints on how to replace the batteries for the iPhone and iPad minimize the risk of the end user using inappropriate batteries and equipment. This helps control the risk of exploding batteries. The new iPad displays a “Not charging” message when the charging source on the USB port is inappropriate. This suggests there is a smart controller in the charging circuit that evaluates the charging source and refuses to route charge to the battery if the source is insufficient (and possibly also when the charge would be too fast or too high – but I am speculating here). This is similar to how we implemented a battery charger/controller for an aircraft project I worked on. It is yet another example of high reliability techniques finding their way into consumer products.
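Since I am speculating about Apple’s circuit, treat the following as a generic sketch of the gating decision a smart charge controller might make; the thresholds are made-up illustrative numbers, not Apple’s.

```c
#include <stdio.h>

/* Hypothetical bounds for an acceptable USB charging source; the numbers
 * are illustrative only. */
#define MIN_SOURCE_MA   500   /* below this, report "Not charging" */
#define MAX_SOURCE_MA  2000   /* never deliver charge faster than this */

typedef enum { CHARGE_OFF, CHARGE_ON } charge_state_t;

/* Evaluate the measured source capability and decide whether (and how fast)
 * to route charge to the battery. */
static charge_state_t evaluate_source(int measured_ma, int *charge_ma)
{
    if (measured_ma < MIN_SOURCE_MA) {
        *charge_ma = 0;
        return CHARGE_OFF;   /* insufficient source: refuse to charge */
    }
    *charge_ma = (measured_ma > MAX_SOURCE_MA) ? MAX_SOURCE_MA : measured_ma;
    return CHARGE_ON;        /* charge, but never faster than the safe limit */
}

int main(void)
{
    const int sources_ma[] = { 100, 500, 1000, 3000 };
    for (int i = 0; i < 4; i++) {
        int rate_ma;
        charge_state_t state = evaluate_source(sources_ma[i], &rate_ma);
        printf("source %4d mA -> %s at %d mA\n", sources_ma[i],
               state == CHARGE_ON ? "charging" : "not charging", rate_ma);
    }
    return 0;
}
```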

Do you have any comments on this robust design principle? What are some other examples of products that employ the sandbox design principle?

Robust Design: Ambiguity and Uncertainty

Monday, March 22nd, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

Undetected ambiguity is the bane of designers. Unfortunately, the opportunities for ambiguity to manifest in our specifications and designs are numerous, and they are easy to miss. Worse, when an ambiguity is discovered because two or more groups on a design team interpreted some information differently, the last person or team that touched the system often gets the blame – and that almost always is the software team.

For example, in the Best Guesses comments, DaveW points out that

“… This kind of problem is made worse when software is in the loop. Software is invisible. If it fails, you cannot know [how] unless the software gave you data to record and you recorded it.”

A problem with this common sentiment is unambiguously determining what constitutes a software failure. I shared in the lead-in Robust Design post that

“… Just because a software change can fix a problem does not make it a software bug – despite the fact that so many people like to imply the root cause of the problem is the software. Only software that does not correctly implement the explicit specification and design contains true software bugs. Otherwise, it is a system-level problem for which a software change might be the more economically or technically feasible solution – but that requires first changing the system-level specifications and design. This is more than just a semantic nit”

Charles Mingus offers a system perspective that hints at this type of problem:

“… And the solution most companies nowadays offer (Linear and now National) is to put ‘solutions in a box’ like small SMPS circuits, etc. You never completely know the behaviour, so always take very good care. These things are the basis of design failures, because the basic knowledge ‘seems’ not important anymore.”

Pat Ford, in the “Prius software bug?” LinkedIn discussion observes that

“…this isn’t just a software bug, this is a systems design bug, where multiple subsystems are improperly implemented.”

So how do these subsystems get improperly implemented? I contend that improperly implemented systems are largely the result of ambiguity in the system specifications, design assumptions, and user instructions. A classic visual example of ambiguity involves an image that contains a vase or two human faces looking at each other. Another classic visual example involves an image that you can interpret as a young or old woman. If you are not familiar with these images, please take some time to see both sets of images in both examples.

These two images are not so much optical illusions as they are examples of interpreting the same data in two different equally valid ways. I believe one reason why these images have at least two equally valid interpretations is that they are based on symbolic representations of the things that you can interpret them to represent. Symbols are imprecise and simplified abstractions of objects, concepts, and ideas. If you were dealing with the actual objects, the different interpretations might not be equally valid anymore.

Now consider how engineers and designers create systems. They often describe the system in a symbolic language – natural language in a free or structured format. It is one thing to describe all the things the system is, but it is a much different problem to explicitly describe all the things that the system is not.

To illustrate the weakness of a purely natural language way to describe something, consider how you teach someone to do a new task they have never done before. Do you explain everything in words and then leave them to their own devices to accomplish the task? Do you show them how to do it the first time?

This is the same type of problem development tool providers have to address each time they release a new development kit, and they are increasingly adopting video or animated walkthroughs to improve the rate at which developers successfully adopt their systems. And this problem does not apply just to designers – it affects all types of end systems as well.

In the Best Guesses post, I talked about how a set of conditions had to coincide with an overcharged Freon level in the air conditioning unit. How would you have written the instructions for properly charging the Freon in such a system? Would the instructions specify what defined a full charge? To what precision would you have specified a minimum and maximum tolerable charge – or would you have? When using language to describe something, there is a chance that certain types of information are so well understood by everyone involved that you do not explicitly describe them over and over. This is fine until someone from outside that circle applies a different set of assumptions because they came from a different environment – one that made different arbitrary decisions that were appropriate for its own operating conditions.
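To picture the difference between an instruction like “charge until full” and an explicit tolerance, here is a trivial sketch with made-up numbers; the only point is that the limits are written as numbers with units rather than as a word that each technician may interpret differently.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical service limits written as explicit numbers with units instead
 * of the word "full" (the values are invented for illustration). */
#define CHARGE_MIN_GRAMS 850.0
#define CHARGE_MAX_GRAMS 900.0   /* anything above this counts as overcharged */

static bool charge_within_spec(double measured_grams)
{
    return measured_grams >= CHARGE_MIN_GRAMS &&
           measured_grams <= CHARGE_MAX_GRAMS;
}

int main(void)
{
    const double readings_grams[] = { 840.0, 875.0, 930.0 };
    for (int i = 0; i < 3; i++)
        printf("%.0f g -> %s\n", readings_grams[i],
               charge_within_spec(readings_grams[i]) ? "in spec" : "out of spec");
    return 0;
}
```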

I was recently reminded of this concept with the iRobot Roomba vacuum that I own. I went through a larger learning curve than I expected with regard to cleaning all of the brushes, because some of the places you need to clear out are not immediately obvious until you understand how the vacuum works. But the real kick in the head came when I tried to use the brush cleaning tool. I read the instructions in the manual about the tool, and they say

“Use the included cleaning tool to easily remove hair from Roomba’s bristle brush by pulling it over the brush.”

Are these instructions simple enough that there is no room for ambiguity and misinterpretation? Well, I found the wrong way to use the tool, and judging by customer comments about the cleaning tool, so have other people. Mind you, this is with a tool that has a very limited number of possible ways of being used, but until you understand how it works, it is possible to use it incorrectly. I realized that the symbolic graphic on the side of the tool could be interpreted in at least two different, equally valid ways because of the positioning and use of a triangle symbol, which could represent the tool itself, the direction in which the tool should be pulled over the brush, or the place where the brush should enter the tool. Now that I understand how the tool works, the instructions and symbols make sense, but until I actually saw the tool work, it was not clear.

So not only is the specification for a system – one that has never existed before – often written in a symbolic language, but so is the software that implements that system, as well as the user and maintenance manuals for that system. Add to this that design teams consist of an ever larger number of people who do not necessarily work in the same company, industry, or even country. The opportunity for local, regional, and global cultural differences amplifies the chances that equally valid but incompatible interpretations of the data can arise.

Consider the fate of the Mars Climate Orbiter, launched in 1998, which failed in its mission because of a mismatch between Imperial and metric units: one piece of software supplied thruster data in pound-force seconds while the receiving software expected newton-seconds, and inadequate integration testing allowed the mismatch to go undetected.
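One common defense against this class of error – a general technique, not necessarily what was flown on that mission – is to make the units part of the type, so that a pound-force-second value cannot be silently handed to code that expects newton-seconds:

```c
#include <stdio.h>

/* Wrap each unit in its own struct so the compiler rejects a silent mix-up
 * between newton-seconds and pound-force seconds. */
typedef struct { double value; } newton_seconds_t;
typedef struct { double value; } poundforce_seconds_t;

#define LBF_S_TO_N_S 4.4482216152605   /* exact conversion factor */

static newton_seconds_t from_lbf_s(poundforce_seconds_t imp)
{
    newton_seconds_t si = { imp.value * LBF_S_TO_N_S };
    return si;
}

/* The trajectory code accepts only SI impulse values. */
static void apply_impulse(newton_seconds_t impulse)
{
    printf("applying %.3f N*s\n", impulse.value);
}

int main(void)
{
    poundforce_seconds_t thruster_report = { 10.0 };

    /* apply_impulse(thruster_report);  would not compile: wrong unit type */
    apply_impulse(from_lbf_s(thruster_report));   /* explicit conversion required */
    return 0;
}
```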

I saw a similarly painful failure on a spacecraft control system project when the team decided to replace the 100 Hz inertial measurement unit with a 400 Hz unit. The failure was spectacular and completely avoidable.

The challenge, then, is how we as designers increase our chances of spotting when these ambiguities exist in our specifications and design choices – especially in evolving systems that experience changes in the people working on them. Is there a way to properly capture the tribal knowledge that is taken for granted? Are there tools that help you avoid shipping your end products with undiscovered time bombs?

I proposed four different robust design principles in the lead-in post for this series. My next post in this series will explore the fault-tolerance principle for improving the success of our best guesses and minimizing the consequences of ambiguities and uncertainty.

Robust Design: Best Guesses

Monday, March 15th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master]

An important realization about building robust systems is that the design decisions and trade-offs we make are based on our best guesses. As designers, we must rely on best guesses because it is impossible to describe a “perfect and complete” specification for all but the most simple, constrained, and isolated systems. A “perfect and complete” specification is a mythical notion that assumes that it is possible to not only specify all of the requirements to describe what a system must do, but that it is also possible to explicitly and unambiguously describe everything a system must never do under all possible operating conditions.

The second part of this assumption, explicitly describing everything the system must never do, is not feasible because the complete list of operating conditions and their forbidden behaviors is infinite. A concession to practicality is that system specifications address the anticipated operating conditions and those operating conditions with the most severe consequences – such as injury or death.

As the systems we design and build continue to grow in complexity, so too does the difficulty in explicitly identifying all of the relevant use cases that might cause a forbidden behavior. The shortcut of specifying that the system may never act a certain way under any circumstance is too ambiguous. Where do you draw the line between reasonable use cases and unreasonable ones? For example, did Toyota pursue profits at the expense of safety by knowingly ignoring the potential for unwanted acceleration? And where is the threshold between a potential problem you can safely ignore and one you must react to? Maybe sharing my experience (from twenty years ago) with a highly safe and reliable automobile can stimulate some ideas on defining such a threshold.

After a few months of ownership, my car would randomly stall at full freeway speeds. I brought the car into the dealership three separate times. The first two times, they could not duplicate the problem, nor could they find anything in the car that they could adjust. The third time I brought the car in, I started working with a troubleshooter who was flown in from the national office. Fortunately, I was able to duplicate the problem once for the troubleshooter, so they knew this was not just a potential problem but a real event. It took two more weeks of full-time access to the car before the troubleshooter returned it to me with a fix.

I spoke with the technician and he shared the following insights with me. I was one of about half a dozen people in the entire country experiencing this problem. The conditions required to manifest this failure were specific. First, it only happened on very hot (approximately 100 degrees) and dry days. Second, the car had to be hot from sitting out in the direct sun for some time. Third, the air conditioning unit needed to be set to the highest setting when the car was turned on. Fourth, the driver of the car had to have a specific driving style (the stalls never happened to my wife, who has a heavier foot on the accelerator than I do).

It turns out the control software for managing the fuel had two phases of operation. The first phase ran for the first few minutes after the car was started, and it characterized the driving style of the driver to set the parameters for managing the fuel delivered to the engine. After those few minutes, the second phase of operation, which never modified the parameter settings, took over until the vehicle was turned off. My driving style, combined with those other conditions, caused the fuel management parameters to deliver too little fuel to the engine during a specific maneuver that I routinely performed on the freeway.
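I obviously do not have the manufacturer’s code, so the sketch below only models the two-phase structure described above, with invented names and numbers: a short characterization phase that adapts a fuel trim to the driver, followed by a run phase that never updates it.

```c
#include <stdio.h>

/* Two-phase fuel controller modeled after the description above; all names
 * and numbers are illustrative, not the actual vehicle's algorithm. */
#define CHARACTERIZE_SAMPLES 180   /* e.g. about 3 minutes at 1 sample/second */

typedef struct {
    int    samples_seen;
    double fuel_trim;              /* frozen after the characterization phase */
} fuel_ctrl_t;

static double fuel_command(fuel_ctrl_t *c, double pedal_position)
{
    if (c->samples_seen < CHARACTERIZE_SAMPLES) {
        /* Phase 1: adapt the trim toward the observed driving style. */
        c->fuel_trim += 0.01 * (pedal_position - c->fuel_trim);
        c->samples_seen++;
    }
    /* Phase 2 (and phase 1): command fuel from the current trim. A light-
     * footed driver locks in a low trim, which is where the too-lean corner
     * case described above can hide. */
    return 0.5 + 0.5 * c->fuel_trim;
}

int main(void)
{
    fuel_ctrl_t ctrl = { 0, 0.5 };
    for (int t = 0; t < 200; t++)
        fuel_command(&ctrl, 0.2);      /* consistently gentle pedal inputs */
    printf("locked-in fuel trim: %.2f\n", ctrl.fuel_trim);
    return 0;
}
```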

So it was a software problem, right? Well, not exactly; there was one more condition necessary to create this problem. The Freon for the air conditioning unit had to be at least slightly overcharged. Once the technician set the Freon charge level to no more than a full charge, the problem went away, and I never experienced it again over 150,000 miles of driving. I always made sure that we never overcharged the Freon when recharging the system.

I imagine there could have been a software fix that used a modified algorithm that also measured and correlated the Freon charge level, but I do not know if that automobile manufacturer followed that course or not for future vehicles.

So how do you specify such an esoteric use-case before experiencing it?

The tragedy of these types of situations is that political, legal, and regulatory realities prevent the manufacturer of the vehicle in question from freely sharing what information they have – and possibly from more quickly pinpointing the unique set of conditions required to make the event occur – without severely risking their own survival.

Have you experienced something that can help distinguish potential from probable from actually occurring unintended behaviors – and when and how to address each? I do not believe any long-term operating company puts out a product in volume with the intention of ignoring reasonable safety hazards. If a problem persists, I believe it is more likely because their best guesses have not yet been able to uncover which of the infinite possible conditions are contributing to the event.

My next post in this series will touch on ambiguity and uncertainty.

Robust Design: Good, Fast, Cheap – pick two

Wednesday, February 10th, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on EDN]

Battar’s response to the introduction post for this series suggests to me that it is worth exploring the popular expression “good, fast, and cheap – pick two” in the context of robust design principles. The basis for this expression is that it is not possible to globally maximize or minimize all three of these vectors in the same design. Nor does this relationship apply only to engineering. For example, Jacob Cass applied it to Pricing Freelance Work.

There are a few problems with this form of the expression, but the concept of picking (n-1) of n choices to optimize is a common trade-off relationship. With regard to embedded processors, the “three P’s” – performance, power, and price – capture the essence of the expression, but with a focus on the value to the end user.

One problem is that this expression implies that the end user is interested in the extremes of these trade-offs. The focus is on realizing the full potential of an approach, and robustness is assumed. That is an extremely dangerous assumption as you push further beyond the capabilities of real designs that can survive in the real world.

The danger is not in the complexity of delivering the robustness, but rather our inexperience with it because our ability to accommodate that complexity changes over time. For example, I would not want the fastest processor possible if it means it will take a whole star to power it. However, someday that amount of energy might be readily accessible (but not while we currently only have the energy from a single star to power everything on our planet). The fact that it might not be absurd to harness the full output of a star to power a future processor points out that there is a context to the trade-offs designers make. This is the relevant point to remember in robust design principles.

The danger is underestimating the “distance” of our target thresholds from the well-understood threshold points. Moore’s law implicitly captures this concept by observing that the number of transistors in a given area doubles in a constant time relationship. This rate is really driven by our ability to adjust to and maintain a minimum level of robustness with each new threshold for these new devices. The fact that Moore’s law observed a constant time relationship that has stood the test of time, versus a linear or worse relationship, suggests the processor industry has found a good-enough equilibrium point between pushing design and manufacturing thresholds with the offsetting complexity of verifying, validating, and maintaining the robustness of the new approaches.

Robust design principles are the tools and applied lessons learned when designers are pushing the threshold of a system’s performance, power, and/or price beyond the tried and tested thresholds of previous designs.

The four categories of robust design principles I propose – fault-tolerance, sandbox, patch-it, and disposable (which does not mean cheap) – provide context relevant tools and approaches for capturing and adding to our understanding when we push system thresholds beyond our comfort points while maintaining a system that can better survive what the real world will throw at it.

Robust Design

Thursday, February 4th, 2010 by Robert Cravotta

I am accelerating my plans to start a series on robust design principles because of the timely interest in the safety recall by Toyota for a sticking accelerator pedal. Many people are weighing in on the issue, but Charles J. Murray’s article “Toyota’s Problem Was Unforeseeable” and Michael Barr’s posting “Is Toyota’s Accelerator Problem Caused by Embedded Software Bugs?” make me think there is significant value in discussing robust design approaches right away.

A quick answer to the questions posed by the first article is no. The failure was not unforeseeable if a robust system-level failure analysis effort is part of the specification, design, build, test, and deploy process. The subheading for Charles’ article hits the nail on the head:

“As systems grow in complexity, experts say designing for failure may be the best course of action for managing it.”

To put things in perspective, my own engineering experience with robust designs is strongly based on specifying, designing, building, and testing autonomous systems in an aerospace environment. Some of these systems were man-rated, triple-fault-tolerant designs – meaning the system had to operate with no degradation in spite of any three failures. The vast majority of the designs I worked on were at least single-fault-tolerant designs. Much of my design bias is shaped by those projects. In the next post in this series, I will explore fault-tolerant philosophies for robust design.
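As a taste of the kind of mechanism such designs lean on – a generic textbook illustration, not the specific aerospace designs I worked on – here is a mid-value-select voter across three redundant sensor channels; it masks any single faulty reading by always using the median.

```c
#include <stdio.h>

/* Classic mid-value-select voter across three redundant channels: the median
 * reading is selected, so any single faulty channel is masked. Real man-rated,
 * multi-fault-tolerant systems layer many such mechanisms; this only shows
 * the basic building block. */
static double vote3(double a, double b, double c)
{
    if ((a >= b && a <= c) || (a <= b && a >= c)) return a;   /* a is median */
    if ((b >= a && b <= c) || (b <= a && b >= c)) return b;   /* b is median */
    return c;                                                 /* c is median */
}

int main(void)
{
    /* Channel B has failed high; the voter still returns a sane value. */
    double selected = vote3(101.2, 999.9, 100.8);
    printf("selected reading: %.1f\n", selected);
    return 0;
}
```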

A quick answer to the questions posed by the second article is – it depends. Just because a software change can fix a problem does not make it a software bug – despite the fact that so many people like to imply the root cause of the problem is the software. Only software that does not correctly implement the explicit specification and design contains true software bugs. Otherwise, it is a system-level problem for which a software change might be the more economically or technically feasible solution – but that requires first changing the system-level specifications and design. This is more than just a semantic nit – it is an essential perspective for root cause analysis and resolution, and I hope in my next post to clearly explain why.

I would like to initially propose four robust design categories (fault tolerant, sandbox, patch-it, and disposable); if you know of another category, please share it here. I plan to follow up with separate posts focusing on each of these categories. I would also like to solicit guest posts from anyone who has experience with any of these different types of robust design.

Fault tolerant design focuses on keeping the system running or safe in spite of failures. These techniques are commonly applied in high value designs where people’s lives are at stake (like airplanes, space ships, and automobiles), but there are techniques that can be applied to even lower-impact, consumer-level designs (think exploding batteries – which I’ll expand on in the next post).

Sandbox design focuses on controlling the environment so that failures cannot occur. Ever wonder why Apple’s new iPad does not support third party multitasking?

Patch-it design focuses on fixing problems after the system is in the field. This is a common approach for a lot of software products where the consequences of failures in the end system are not catastrophic and where implementing a correction is low cost.

Disposable design focuses on short life span issues. This affects robust design decisions in a meaningfully different way than the other three types of designs.

The categories I’ve proposed are system level in nature, but I think the concepts we can uncover in a discussion would apply to all of the disciplines required to design each component and subsystem in contemporary projects.

[Editor's Note: This was originally posted on EDN]