Entries Tagged ‘Taxonomy’

Identifying sweet spot assumptions

Monday, August 30th, 2010 by Robert Cravotta

I am continuing to develop a taxonomy to describe the different types of software tools. Rather than waiting until I have a fully fleshed out model, I am sharing my thought process with you in the hopes that it will entice you to share your thoughts and speed up the process of building a workable model.

I am offering up the following processor mapping as an example of how an analogous software mapping might look. The mapping identifies two independent characteristics, in this case the number of states and the amount of computation that the system must handle. One nice thing about mapping the design characteristics like this is that it provides independence from the target application and allows us to focus on what an architecture is optimizing and why.

For example, a microcontroller’s sweet spot is in the lower end of the computation load but spans from very simple to complicated state machines. Microcontroller architectures emphasize excellent context switching. In contrast, DSP architectures target streaming problems where context switching is less important and maximizing computation for the same amount of time/energy is more important.

I suspect that if we can identify the right characteristics for the axes of the mapping space, software tools will fall into analogous categories of assumptions and optimizations. The largest challenge at this moment is identifying the axes. Candidate characteristics include measures of productivity, efficiency, reusability, abstraction, coupling, and latency tolerance.

An important realization is that the best any software can accomplish is to not stall the hardware processing engine. In practice, the software will perform data manipulations and operations that cause the processing engine to stall, or sit idle, some percentage of the time. As a result, all software tools are productivity tools that strive to help the developer produce software that is efficient enough to meet the performance, schedule, and budget requirements of the project. This includes operating systems, which provide a layer of abstraction from the underlying hardware implementation.

I propose using a measure of robustness or system resilience and a measure of component coupling as the two axes to map software development tools to a set of assumptions and optimization goals.

The range for the component coupling axis starts at microcode and moves toward higher levels of abstraction such as machine code, assembly code, BIOS, drivers, libraries, operating systems, and virtual machines. Many embedded software developers must be aware of multiple levels of the system in order to extract the required efficiency. As a result, many software tools also target one or more of these layers of abstraction. The more abstraction layers a tool accommodates, the more difficult it is to build and support.

Consider that while a compiler ideally allows a developer to work at a functional and/or data flow level, it must also be able to provide the developer visibility into the lower level details in case the generated code performs in an unexpected fashion that varies with the hardware implementation. The compiler may include an inline assembler and support #pragma statements that enable the developer to better specify how the compiler can use special resources in the target system.
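As a concrete sketch of these escape hatches, consider the C fragment below. The inline assembly uses GCC syntax for an ARM target, and the DATA_SECTION pragma follows the TI compiler convention; both are illustrative assumptions rather than features of any particular tool discussed here, and the buffer and function names are hypothetical.

    /* A minimal sketch of the escape hatches a compiler provides when the
       generated code underperforms. The inline assembly is GCC syntax for
       ARM targets; the DATA_SECTION pragma follows the TI compiler
       convention. Both are illustrative assumptions, not portable C. */

    #include <stdint.h>

    /* Vendor-specific pragma placing the buffer in fast on-chip RAM
       (hypothetical section name). */
    #pragma DATA_SECTION(samples, ".fast_ram")
    static int16_t samples[256];

    static inline int32_t saturating_add(int32_t a, int32_t b)
    {
    #if defined(__GNUC__) && defined(__arm__)
        /* Emit the ARM QADD instruction directly when the compiler
           cannot be trusted to select it on its own. */
        int32_t result;
        __asm__("qadd %0, %1, %2" : "=r"(result) : "r"(a), "r"(b));
        return result;
    #else
        /* Portable fallback that the compiler is free to optimize. */
        int64_t sum = (int64_t)a + (int64_t)b;
        if (sum > INT32_MAX) return INT32_MAX;
        if (sum < INT32_MIN) return INT32_MIN;
        return (int32_t)sum;
    #endif
    }

The point of the sketch is the dual view: the same function can be expressed at the portable, functional level or pinned to a specific instruction and memory placement when the developer needs to reach below the abstraction the tool normally provides.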

The robustness axis is harder to define at this moment. The range for the robustness axis should capture the system’s tolerance for errors, inconsistent results, latency, and loss of determinism. My expectation for this axis is to capture the trade-offs that allow the tool to improve the developer’s productivity while still producing results that are “good enough.” I hope to be able to better define this axis in the next write-up.

Do you have any thoughts on these two axes? What other axes should we consider? The chart can go beyond a simple 2D mapping.

Software Ecosystem Sweet Spots

Monday, August 16th, 2010 by Robert Cravotta

I have been refining a sweet spot taxonomy for processors and multiprocessing designs for the past few years. This taxonomy highlights the different operating assumptions for each type of processor architecture including microcontrollers, microprocessors, digital signal processors, hardware accelerators, and processing fabrics.

I recently started to focus on developing a similar taxonomy to describe the embedded software development ecosystem that encompasses the different types of tools and work products that affect embedded software developers. I believe developing a taxonomy that identifies the sweet spot for each type of software component in the ecosystem will enable developers and tool providers to better describe the assumptions behind each type of software development tool and how to evolve them to compensate for the escalating complexity facing embedded software developers.

The growing complexity facing software developers manifests in several ways. One source is the increase in the amount of code that exists in designs. The larger the amount of code within a system, the more opportunities there are for unintended resource dependencies that affect the performance and correct operation of the overall system. A modular design approach is one technique for managing some of this complexity, but modular design abstracts the resource usage within a module; it does not directly address how to manage the memory and timing resources that the modules share with each other by virtue of executing on the same processor.
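As a hedged illustration of that hidden coupling, the C sketch below (module names and the shared buffer are hypothetical) shows two modules that are each correct in isolation yet interfere through resources their interfaces never mention.

    /* A minimal sketch of coupling that modular design does not remove:
       each module encapsulates its own logic, yet both compete for the
       same CPU time and the same scratch buffer. Names are hypothetical;
       callers are assumed to pass len <= SCRATCH_SIZE. */

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define SCRATCH_SIZE 512

    /* Shared scratch RAM: neither module's interface mentions it, so
       the dependency is invisible at the module boundary. */
    static uint8_t scratch[SCRATCH_SIZE];

    /* Module A: smooths a sensor block; meets its deadline in isolation. */
    void module_a_process(const uint8_t *in, uint8_t *out, size_t len)
    {
        memcpy(scratch, in, len);          /* claims the scratch buffer  */
        out[0] = scratch[0];
        for (size_t i = 1; i < len; i++)   /* simple two-tap average     */
            out[i] = (uint8_t)((scratch[i] + scratch[i - 1]) / 2);
    }

    /* Module B: packs a telemetry frame; also correct in isolation. */
    void module_b_pack(const uint8_t *payload, size_t len)
    {
        memset(scratch, 0, SCRATCH_SIZE);  /* silently clobbers A's data */
        memcpy(scratch, payload, len);
        /* ... hand scratch to the transmit driver ... */
    }

    /* If an interrupt invokes module_b_pack() while module_a_process()
       is mid-filter, A's intermediate data is corrupted and its deadline
       can slip: a memory and timing dependency that unit testing either
       module alone will never expose. */

Each module passes its own tests; the failure only appears when both execute on the same processor, which is exactly the class of shared-resource dependency that module boundaries do not capture.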

Another source of growing complexity is the “path-finding” implementation of new functions and algorithms, because many algorithms and approaches for solving a problem are first implemented as software. New algorithms undergo an evolution as the assumptions in their specification and coding implementation are exercised under a wider range of operating conditions. It is not until the implementation of those algorithms matures, by being used across a wide enough range of operating conditions, that it makes sense to implement hardened and optimized versions using coprocessors, accelerators, and specialized logic blocks.

According to many conversations I have had over the years, the software in most embedded designs consumes more than half of the development budget; this ratio holds true even for “pure” hardware products such as microcontrollers and microprocessors. Consider that no company releases contemporary processor architectures anymore without also providing significant software assets that include tools, intellectual property, and bundled software. The bundled software is necessary to ease the learning curve for developers to use the new processors and to get their designs to market in a reasonable amount of time.

The software ecosystem taxonomy will map all types of software tools, including assemblers/compilers, debuggers, profilers, static and dynamic analyzers, as well as design exploration and optimization tools, to a set of assumptions that may abstract to a small set of sweet spots. It is my hope that applying such a taxonomy will make it easier to understand how different software development tools overlap and complement each other, and how to evolve the capabilities of each tool to improve the productivity of developers. I think we are close to the point of diminishing returns for making compilation and debugging faster; rather, we need more tools that understand system-level constructs and support more exploration. Otherwise, the continuously growing complexity of new designs will erode the productivity of embedded software developers in an increasingly meaningful way.

Please contact me if you would like to contribute any ideas on how to describe the assumptions and optimization goals behind each type of software development tool.