Software Techniques Channel

There is no one best way to design or build most embedded systems. This series focuses on collecting and categorizing the many design trade-offs of implementing functions one way or another, such as when and how it is appropriate to use dynamic memory allocation in embedded designs.

Software Coding Standards

Friday, October 8th, 2010 by Robert Cravotta

The two primary goals of many software coding standards are to reduce the probability that software developers will introduce errors into their code through “poor” coding practices and to make it easier to identify errors or vulnerabilities that do make it into a project’s code base. By adopting and enforcing a set of known best practices, coding standards enable software development teams to work together more effectively because they are working from a common set of assumptions. Examples of the types of assumptions that coding standards address include prohibiting language constructs known to be associated with common runtime errors, specifying when and how compiler- or platform-specific constructs may and may not be used, and specifying policies for managing system memory resources such as static and dynamic memory allocation.
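As a loose illustration (not drawn from any particular standard), the following C sketch shows the kind of construct that many embedded coding standards restrict and a commonly preferred alternative; the buffer names and sizes are assumptions made for the example.

#include <stdint.h>
#include <stdlib.h>

#define MAX_SAMPLES 64U

/* Flagged by many embedded standards: heap allocation after startup can
   fail at runtime and fragment memory on long-running systems. */
int16_t *make_buffer_dynamic(size_t count)
{
    return (int16_t *)malloc(count * sizeof(int16_t)); /* may return NULL */
}

/* Commonly preferred: a statically allocated buffer whose worst-case
   size is known at build time. */
static int16_t sample_buffer[MAX_SAMPLES];

int16_t *make_buffer_static(void)
{
    return sample_buffer;
}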

Because coding standards involve aligning a team of software developers to a common set of design and implementation assumptions and because every project has its own unique set of requirements and constraints, there is no single, universal best coding standard. Industry-level coding standards center on a given programming language, such as C, C++, and Java. There may be variants for each language based on the target application requirements, such as MISRA-C (Motor Industry Software Reliability Association), CERT C (Computer Emergency Response Team), JSF AV C++ (Joint Strike Fighter), IPA/SEC C (Information-Technology Promotion Agency/ Software Engineering Center), and Netrino C.

MISRA started as a guideline for the use of the C language in vehicle-based software, and it has found acceptance in the aerospace, telecom, medical device, defense, and railway industries. CERT is a secure coding standard that provides rules and recommendations to eliminate insecure coding practices and undefined behaviors that can lead to exploitable vulnerabilities. JSF specifies a safe subset of the C++ language targeting use in air vehicles. The IPA/SEC standard specifies coding practices to assist in the consistent production of high-quality source code independent of an individual programmer’s skill. Netrino is an embedded coding standard targeting the reliability of firmware while also improving the maintainability and portability of embedded software.

Fergus Bolger, CTO at PRQA, shares that different industries need to approach software quality from different perspectives – which adds more complexity to the sea of coding standards. For example, aerospace applications exist in a high-certification environment. Adopting coding standards is common for aerospace projects, where both the end system software and the tools that developers use go through a certification process. In contrast, the medical industry takes a more process-oriented approach, where it is important to understand how the tools are made. MISRA is a common coding standard in the medical community.

At the other end of the spectrum, the automotive industry has an installed software code base that is huge and growing rapidly. Consider that a single high-end automobile can include approximately 200 million lines of code to manage the engine and system safety as well as all of the bells and whistles of the driver interface and infotainment systems. Due to the sheer volume of software, proportionally less of the code undergoes verification. Each automotive manufacturer has its own set of mandatory and advisory rules that it includes with the MISRA standard.

A coding standard loses much of its value if it is not consistently adhered to, so a number of companies have developed tools to help with compliance and software checking. The next article will start the process of identifying the players and their approaches to supporting coding standards.

Balancing risk versus innovation – disconnect between design and production

Monday, October 4th, 2010 by Rob Evans

Risk minimization, particularly at the stage of releasing design data to production and manufacturing, has been the focus of increasing attention as the product development landscape has changed. One of the most significant shifts in the way organizations work manifests in the disconnection between design and manufacturing, where a product is now likely to be manufactured in a completely different location (region or country) from where it is designed. Fuelled by the rise of a truly global electronics industry, outsourcing or ‘offshoring’ manufacture is now commonplace because the potential cost and efficiency benefits are hard to ignore for most organizations.

This change in the product development process has put the spotlight firmly on the need to manage and raise the integrity of design data, both prior to its release to production and across the entire product development process. Manufacturing documents now need to be sent to fabrication and assembly houses in other parts of the world, with different time zones and possibly different languages, which has raised the risk associated with pushing the design release button to a whole new level. You can’t just pop next door to correct a problem during the production stage.

Consequently, there is a strong push for improving both the quality and integrity of the release process, and not surprisingly, an even more rigorous application of the existing design lock-down methodology to minimize risk. In the design space, engineers are forced to adopt more stringent levels of the formalized, locked-down process with the aim of overcoming the much higher risks created by the distance between design and manufacturing. Ultimately, the best and most potentially successful design is wasted effort if poor data integrity causes manufacturing errors, or perhaps worse, if the resulting design respins cause it to be manufactured too late.

The flip side of the risk management coin, promoting design innovation, is the opposing force in the current electronics design landscape. While always an important element, the capacity for innovation in electronics design is now crucial to an organization’s success or, in some cases, its survival. However, every new feature and every product change is a potential source for something to go wrong. We have the crucial need for effective product and feature innovation running headlong into the equally important (and increasing) demand for design control and data integrity.

Companies both large and small now face aggressive competition from all over the world in what has become a globalized electronics industry, and this very environment has opened opportunities for efficient outsourcing. Product individuality and delivering a unique user experience have become the characteristics that define a device’s competitive status amongst the crowd, and those assets can only be sourced through innovation in design.

The need to establish a clear competitive edge through innovative design, rather than through traditional factors such as price that increasingly fail to provide one, means that creating the product value and functionality customers are seeking relies on an unrestrained design environment. This freedom allows developers to explore design options, promotes experimentation, and allows for frequent, iterative changes during design exploration. It is also a more fulfilling and enjoyable way to design.

The final part of this three part series proposes a different approach to the risk-innovation stalemate.

Balancing risk versus innovation – design data management

Friday, September 17th, 2010 by Rob Evans

Like most creative design processes, electronics design would be a whole lot easier without the need to consider the practicalities of the real end result – in this case, a tangible product that someone will buy and use. Timelines, cost limitations, component restrictions, physical constraints, and manufacturability would fade into the background, leaving behind unrestrained design freedom without the disruptive influence of external considerations.

It is a nice thought, but the reality is that electronics design is just one part of the large, complex product design puzzle, and the puzzle pieces are no longer discrete entities that can be considered in isolation. The pieces unavoidably co-influence and interact, which makes the design development path to a final product a necessarily fluid and complex process. It is also one that involves managing and bringing together an increasing number of variable, co-dependent elements – the pieces of the puzzle – from different systems and locations. Pure electronics design is one thing, but successfully developing, producing, and keeping track of a complete electronic product is a much larger story.

From an electronics design perspective, those broader product development considerations are influencing and constraining the design process more than ever before. At the very least, the shift towards more complex and multi-domain electronic designs (typically involving hardware, software, programmable hardware, and mechanical design) means a proportional increase in the risk of design-production errors. This has inevitably led to tighter controls being imposed on the design process as a risk-management strategy.

From an organization’s point of view there seems little alternative to a risk-minimization approach that is based on tying down the electronics design process to control change. Leave the management of design open, and design anarchy or expensive errors are likely outcomes. From an overall product development perspective, the peak in the risk-timeline curve (if there is such a thing) tends to be the transition from design to the production stage. This is a one-way step where an error – and there are plenty to choose from – will delay and add cost to the final product, not to mention missed market opportunities, painful design re-spins, and damaged customer relationships.

To increase the integrity of the design data that is released to production, most organizations manage the electronic product development process by imposing a layer of control over the design process. This control aims to manage change and can take a variety of forms, including manual paper-based sign-off procedures as well as external audit and approval systems. The common thread is that these approaches inevitably restrict design freedom – in other words, they impose limits on how and when design changes can be made.

By restricting design experimentation and exploratory change, this ‘locked down’ product development environment does not encourage the design innovation that is crucial to creating competitive products. The upshot is that organizations must now balance two opposing forces when managing the product development process – the need to foster innovation versus the need to manage risk by maintaining design data integrity.

The second part in this three part series explores controlling risk.

When do you use your compiler’s inline assembler feature and for what reasons?

Wednesday, September 1st, 2010 by Robert Cravotta

I am working on a mapping for software that is analogous to the mapping I developed to describe the different types of processing options. The value of this type of mapping is that it improves visibility into the assumptions and optimization trade-offs that drive the design and implementation details of a given tool or architecture. A candidate mapping criterion is the coupling between the different layers of abstraction that sit between the software and the hardware target. I will be asking questions that try to tease out the assumptions and trade-offs behind the tools you use to move between different layers of abstraction in your designs.

For example, a compiler allows a software developer to write instructions in a high-level language that generally allows the developer to focus on what the software needs to accomplish without having to worry about partitioning and scheduling the execution engine resources such as register reads and writes. For the mapping model, a compiler would have a strong coupling with the high-level language. Additionally, if the developer is using an operating system, the compiler may also support targeting the software to the operating system API (application programming interface) rather than a privileged mode on the target processor. This would constitute another layer of coupling that the compiler must account for.
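As a rough sketch of those two coupling layers, the fragment below targets a FreeRTOS-style API for its timing rather than any privileged instruction on the processor; the FreeRTOS calls are real, but the task itself and the toggle_led() board-support routine are hypothetical.

#include "FreeRTOS.h"   /* assumed RTOS environment */
#include "task.h"

extern void toggle_led(void);   /* hypothetical board-support call */

void blink_task(void *params)
{
    (void)params;
    for (;;) {
        toggle_led();
        /* The code couples to the operating system API rather than to a
           privileged sleep or wait instruction; the scheduler decides
           what runs while this task is blocked. */
        vTaskDelay(pdMS_TO_TICKS(500));
    }
}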

However, most compilers also include an inline assembler that allows the developer to break these abstraction layers and work at the level of the target processor’s assembly instructions and resources. Using the inline assembler usually means more complexity for the software developer to manage because the compiler is no longer directly controlling some of the target processor’s resources. Using assembly language can also reduce the portability of the software, so developers usually have a good reason to break the abstraction layer and work at the level of the target processor. Reasons for using an inline assembler include improving the execution speed of the software, optimizing the memory usage in the system, and directly controlling special hardware resources in the processor such as co-processors, accelerators, and peripherals.
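A minimal sketch of what breaking that abstraction layer can look like, assuming a GCC-style compiler targeting an ARM Cortex-M device; the instructions chosen and the reasons for dropping to assembly are illustrative, not a recommendation for any particular part.

#include <stdint.h>

/* Briefly mask interrupts around a timing-critical register update.
   Standard C offers no portable way to do this, so the abstraction
   layer is broken deliberately and locally. */
static inline void critical_update(volatile uint32_t *reg, uint32_t value)
{
    __asm__ volatile ("cpsid i" ::: "memory");  /* disable interrupts */
    *reg = value;
    __asm__ volatile ("cpsie i" ::: "memory");  /* re-enable interrupts */
}

/* Use a single-cycle hardware instruction (count leading zeros) that the
   compiler may not emit on its own; mask must be non-zero. */
static inline uint32_t highest_set_bit(uint32_t mask)
{
    uint32_t leading;
    __asm__ ("clz %0, %1" : "=r"(leading) : "r"(mask));
    return 31U - leading;
}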

Under what conditions do you use the inline assembler (or a separate assembler) for your software? What are the project management and technical trade-offs you consider when choosing to work at the assembly level? What features would a compiler need to support to allow you to avoid using assembly language? Your answers will help refine the software sweet spot mapping that I am currently developing.

Identifying sweet spot assumptions

Monday, August 30th, 2010 by Robert Cravotta

I am continuing to develop a taxonomy to describe the different types of software tools. Rather than waiting until I have a fully fleshed out model, I am sharing my thought process with you in the hopes that it will entice you to share your thoughts and speed up the process of building a workable model.

I am offering up the following processing mapping as an example of how an analogous software mapping might look. The mapping identifies two independent characteristics, in this case the number of states and the amount of computation that the system must handle. One nice thing about mapping the design characteristics like this is that it provides independence from the target application and allows us to focus on what an architecture is optimizing and why.

For example, a microcontroller’s sweet spot is in the lower end of the computation load but spans from very simple to complicated state machines. Microcontroller architectures emphasize excellent context switching. In contrast, DSP architectures target streaming problems where context switching is less important and maximizing computation for the same amount of time/energy is more important.
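To make the contrast concrete, the two fragments below sketch the workload shapes described above in plain C; the function names, the state machine, and the tap count are all hypothetical.

#include <stdint.h>

/* DSP-style streaming work: a multiply-accumulate loop over a block of
   samples, with almost no control flow to switch between. */
int32_t fir_block(const int16_t *samples, const int16_t *taps, uint32_t count)
{
    int32_t acc = 0;
    for (uint32_t i = 0; i < count; i++) {
        acc += (int32_t)samples[i] * taps[i];
    }
    return acc;
}

/* Microcontroller-style work: light computation, but frequent context
   switches as events move a small state machine between modes. */
typedef enum { IDLE, MEASURE, REPORT } app_mode_t;
static volatile app_mode_t mode = IDLE;

void timer_isr(void)   /* hypothetical interrupt handler */
{
    switch (mode) {
    case IDLE:    mode = MEASURE; break;
    case MEASURE: mode = REPORT;  break;
    case REPORT:  mode = IDLE;    break;
    }
}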

I suspect that if we can identify the right characteristics for the axes of the mapping space, software tools will fall into analogous categories of assumptions and optimizations. The largest challenge at this moment is identifying the axes. Candidate characteristics include measures of productivity, efficiency, reusability, abstraction, coupling, and latency tolerance.

An important realization is that the best any software can accomplish is to not stall the hardware processing engine. In practice, the software’s data manipulations and operations will cause the processing engine to stall, or sit idle, some percentage of the time. As a result, all software tools are productivity tools that strive to help the developer produce software that is efficient enough to meet the performance, schedule, and budget requirements of the project. This includes operating systems, which provide a layer of abstraction from the underlying hardware implementation.

I propose using a measure of robustness or system resilience and a measure of component coupling as the two axes to map software development tools to a set of assumptions and optimization goals.

The range for the component coupling axis starts at microcode and moves toward higher levels of abstraction such as machine code, assembly code, BIOS, drivers, libraries, operating systems, and virtual machines. Many embedded software developers must be aware of multiple levels of the system in order to extract the required efficiency from the system. As a result, many software tools also target one or more of these layers of abstraction. The more abstraction layers that a tool accommodates, the more difficult it is to build and support.

Consider that while a compiler ideally allows a developer to work at a functional and/or data flow level, it must also be able to provide the developer visibility into the lower level details in case the generated code performs in an unexpected fashion that varies with the hardware implementation. The compiler may include an inline assembler and support #pragma statements that enable the developer to better specify how the compiler can use special resources in the target system.
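As an illustration of those escape hatches, assuming a GCC-style toolchain, the fragment below uses a section attribute and a loop-unrolling pragma; the section name .fast_data and the unroll factor are assumptions that would have to match a real linker script and target.

#include <stdint.h>

/* Ask the linker to place the coefficients in a fast, tightly coupled
   memory region; the .fast_data section must exist in the linker script. */
static int32_t coeff[32] __attribute__((section(".fast_data")));

int32_t dot32(const int32_t *x)
{
    int32_t acc = 0;

    /* Hint that the compiler may unroll this fixed-length loop. */
    #pragma GCC unroll 8
    for (uint32_t i = 0; i < 32U; i++) {
        acc += coeff[i] * x[i];
    }
    return acc;
}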

The robustness axis is harder to define at this moment. The range for the robustness axis should capture the system’s tolerance to errors, inconsistent results, latency, and determinism. My expectation for this axis is to capture the trade-offs that allow the tool to improve the developer’s productivity while still producing results that are “good enough.” I hope to be able to better define this axis in the next write-up.

Do you have any thoughts on these two axes? What other axes should we consider? The chart can go beyond a simple 2D mapping.