Software Techniques Channel

There is no one best way to design or build most embedded systems. This series focuses on collecting and categorizing the many design trade-offs of implementing functions one way or another, such as when and how it is appropriate to use dynamic memory allocation in embedded designs.

What tools do you use to program multiple processor cores?

Wednesday, July 27th, 2011 by Robert Cravotta

Developers have been designing and building multi-processor systems for decades. New multicore processors are entering the market on a regular basis. However, it seems that the market for new development tools that help designers analyze, specify, code, test, and maintain software targeting multi-processor systems is lagging further and further behind the hardware offerings.

A key function of development tools is to help abstract the complexity that developers must deal with to build the systems they are working on. The humble assembler abstracted the zeros and ones of machine code into more easily remembered mnemonics that enabled developers to build larger and more complex programs. Likewise, compilers have evolved to provide yet another important level of abstraction for programmers and have all but replaced assemblers for the vast majority of software projects. A key value of an operating system is that it abstracts the configuration, access, and scheduling of a system's growing number of hardware resources away from the developer.

If multicore and multi-processor designs are to experience an explosion in use in the embedded and computing markets, it seems that development tools should provide more abstractions to simplify the complexity of building with these significantly more complex processor configurations.

In general, programming languages do not understand the concept of concurrency, and the extensions that do exist usually require the developer to explicitly identify where and when such concurrency exists. Developing software as a set of threads is one approach for abstracting concurrency; however, it is not clear how a threading design method will scale as systems approach ever larger numbers of cores. How do you design a system with enough threads to occupy more than a thousand cores – or is that the right question?
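
As a minimal illustration of how explicit that identification usually has to be, the hypothetical sketch below partitions an array operation across a fixed pool of POSIX threads in C – the developer, not the language, decides how many workers exist and which slice of the data each one owns. The worker count and the per-element operation are assumptions chosen purely for illustration.

```c
/* Hypothetical sketch: the developer explicitly carves the data set into
 * per-thread slices; nothing in the language discovers this parallelism. */
#include <pthread.h>
#include <stddef.h>

#define NUM_WORKERS 4          /* assumption: chosen by the developer, not the tools */
#define DATA_LEN    4096       /* assumption: divides evenly by NUM_WORKERS          */

static float data[DATA_LEN];

struct slice { size_t start; size_t count; };

static void *worker(void *arg)
{
    struct slice *s = arg;
    for (size_t i = s->start; i < s->start + s->count; i++) {
        data[i] *= 2.0f;       /* stand-in for the real per-element work */
    }
    return NULL;
}

int run_parallel(void)
{
    pthread_t tid[NUM_WORKERS];
    struct slice slices[NUM_WORKERS];
    size_t chunk = DATA_LEN / NUM_WORKERS;

    for (int i = 0; i < NUM_WORKERS; i++) {
        slices[i].start = (size_t)i * chunk;
        slices[i].count = chunk;
        if (pthread_create(&tid[i], NULL, worker, &slices[i]) != 0)
            return -1;         /* thread creation failed */
    }
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```

Scaling this pattern to a thousand cores means either a thousand hand-managed slices or a different abstraction entirely, which is really the point of the question.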

What tools do you use when programming a multicore or multi-processor system? Does your choice of programming language and compiler reduce your complexity in such designs or does it require you to actively engage more complexity by explicitly identifying areas for parallelism? Do your debugging tools provide you with adequate visibility and control of a multicore/multi-processor system to be able to understand what is going on within the system without requiring you to spend ever more time at the debugging bench with each new design? Does using a hypervisor help you, and if so, what are the most important functions you look for in a hypervisor?

How is embedded debugging different?

Wednesday, June 22nd, 2011 by Robert Cravotta

Of all the different embedded designs I have worked on, the project that stands out the most is the first embedded project I worked on – despite the fact that I already had ten years of experience programming computers before that. I had been paid to write simulators, database engines, an assembler, a time-share system, as well as several automation tools for production systems. All of these projects executed on mainframe systems or desktop computers. None of them quite prepared me for how different working on an embedded design is.

My first embedded design was a simple box that would reside on a ground equipment test rack that supported the flight system we were building and demonstrating. There was nothing particularly special about this box – it had a number of input and select lines and it had a few output lines. What surprised me most when putting it through its first checkout tests was how clueless I was as to how to troubleshoot the problems that did arise.

While I was aware of keyboard debounce routines from using my desktop system, I had never had to understand so completely the characteristics of different types of switches. I had never before had to be aware of the wiring within the system, nor had I ever considered doing an end-to-end check on every wire in a system. While putting this simple box together, I became aware of so many new ways a design could go wrong that I had never had to consider in my earlier designs.
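
For readers who have not yet run into this, a software debounce routine is tiny, but it only works once you understand how the particular switch actually bounces. The sketch below is a hypothetical counter-based debounce called from a periodic tick; the sample period and stability threshold are assumptions that would have to come from the measured behavior of the real switch.

```c
/* Hypothetical counter-based debounce, called from a periodic (e.g. 1 ms) tick.
 * The threshold must be tuned to the measured bounce time of the real switch. */
#include <stdbool.h>
#include <stdint.h>

#define DEBOUNCE_COUNT 10u  /* assumption: ~10 ms of consistent readings */

bool debounce_switch(bool raw_sample)
{
    static bool    stable_state = false;
    static uint8_t counter = 0;

    if (raw_sample == stable_state) {
        counter = 0;                    /* input agrees with the current state      */
    } else if (++counter >= DEBOUNCE_COUNT) {
        stable_state = raw_sample;      /* input has been stable long enough to trust */
        counter = 0;
    }
    return stable_state;
}
```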

On top of the new ways that the system could behave incorrectly, the system had no file system, no display, and no way to print out a trace log or memory dump. This made debugging a very different experience. Printf statements would be of no use, and there was no single-step debugger available. Worse yet, simulating the code by running the target program on my desktop computer was mostly useless because I could not bring the real-world inputs and outputs that the box worked with into the desktop system.
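
About the only option in that situation is to press whatever outputs the system does have into service as a debug channel. The sketch below is a hypothetical version of that idea – pulsing a spare output line a coded number of times so a scope or LED shows which checkpoint the code reached. The register address and bit are placeholders, not any real part's map.

```c
/* Hypothetical: use a spare output line as a poor-man's trace when there is
 * no file system, display, or printf. Register address and bit are placeholders. */
#include <stdint.h>

#define SPARE_OUT_REG  (*(volatile uint8_t *)0x4000u)  /* placeholder address */
#define SPARE_OUT_BIT  0x01u

static void short_delay(void)
{
    for (volatile uint16_t i = 0; i < 1000; i++) { /* crude, target-dependent delay */ }
}

/* Pulse the line 'code' times so a scope or LED shows which checkpoint ran. */
void debug_pulse(uint8_t code)
{
    while (code--) {
        SPARE_OUT_REG |=  SPARE_OUT_BIT;
        short_delay();
        SPARE_OUT_REG &= ~SPARE_OUT_BIT;
        short_delay();
    }
}
```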

As I tackled each debugging issue, I went from a befuddled state of having no idea how to proceed to a state where I adopted new ways of thinking that let me gain the insights I needed to infer how the system was (or was not) working and what needed to change. I worked on that project alone, and it welcomed me, with wide open arms, into the world of embedded design and real-world signals.

How did your introduction to embedded systems go? What insights can you share to warn those that are entering the embedded design community about how designing, debugging, and integrating embedded components is different from writing application-level software?

Do you care if your development tools are Eclipse based?

Wednesday, May 25th, 2011 by Robert Cravotta

I first explored the opportunity of using the Eclipse and NetBeans open source projects as a foundation for embedded software development tools in an article a few years ago. At the time, these Java-based IDEs (Integrated Development Environments) were squarely targeting application developers, but the embedded community was beginning to experiment with using these platforms for its own development tools. Since then, many companies have built and released Eclipse-based development tools – and a few have retained their own IDEs.

This week’s question is an attempt to start evaluating how these open source development platforms are working out for embedded suppliers and developers. In a recent discussion with IAR Systems, I got the impression that the company’s recent announcement about an Eclipse plug-in for the Renesas RL78 was driven by developer request. IAR also supports its own proprietary IDE – the IAR Embedded Workbench. Does a software development tools company supporting two different IDEs signal something about the open source platform?

In contrast, Microchip’s MPLAB X IDE is based on the NetBeans platform – effectively a competing open source platform to Eclipse. One benefit of building on the open source platform is that the IDE supports development hosts running the Linux, Mac OS, and Windows operating systems.

I personally have not tried using either an Eclipse or a NetBeans tool in many years, so I do not know yet how well they have matured over the past few years. I do recall that managing installations was somewhat cumbersome, and I expect that is much better now. I also recall that the tools were a little slow to react to what I wanted to do, and again, today’s faster computers may have made that a non-issue. Lastly, the open source projects were not really built with the needs of embedded developers in mind, so the embedded tools that migrated to these platforms had to conform as best they could to architectural assumptions driven by the needs of application developers.

Do you care if an IDE is Eclipse or NetBeans based? Does the open source platform enable you to manage a wider variety of processor architectures from different suppliers in a meaningfully better way? Does it matter to your design-in decision whether a processor is supported by one of these platforms? Are tools based on these open source platforms able to deliver the functionality and responsiveness you need for embedded development?

Do you use any custom or in-house development tools?

Wednesday, May 11th, 2011 by Robert Cravotta

Developing embedded software differs from developing application software in many ways. The most obvious difference is that there is usually no display available in an embedded system, whereas most application software would be useless without a display to communicate with the user. Another difference is that it can be challenging to know whether the software for an embedded system is performing the correct functions for the right reasons or whether it is performing what appear to be proper functions coincidentally. This is especially relevant to closed-loop control systems that include multiple types of sensors in the control loop, such as fully autonomous systems.

Back when I was building fully autonomous vehicles, we had to build a lot of custom development tools because standard software development tools just did not perform the tasks we needed. Some of the system-level simulations that we used were built from the ground up. These simulations modeled the control software, rigid body mechanics, and inertial forces from actuating small rocket engines. We built a hardware-in-the-loop rig so that we could swap real hardware in and out against simulated modules, verify the operation of each part of the system, and inject faults to see how the system would fare. Instead of a display or monitor to provide feedback to the operator, the system used a telemetry link, which allowed us to effectively instrument the code and capture the state of the system at regular points in time.
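
A minimal sketch of that telemetry idea is shown below; the frame layout, the CRC helper, and the uart_send() driver call are assumptions for illustration rather than what we actually flew. The control loop snapshots the state it cares about each cycle and pushes it out over the link, so the ground side ends up with a regularly sampled record instead of a console log.

```c
/* Hypothetical telemetry frame: the control loop snapshots its state every
 * cycle and sends it over the link; layout and helper routines are assumptions. */
#include <stdint.h>

struct telemetry_frame {
    uint32_t cycle_count;     /* control-loop iteration number  */
    float    position[3];     /* vehicle state of interest      */
    float    velocity[3];
    float    thruster_cmd[4]; /* commanded actuator outputs     */
    uint16_t fault_flags;     /* latched fault bits             */
    uint16_t crc;             /* integrity check over the frame */
};

extern void     uart_send(const uint8_t *buf, uint16_t len); /* assumed link driver */
extern uint16_t crc16(const uint8_t *buf, uint16_t len);     /* assumed CRC helper  */

void telemetry_send(const struct telemetry_frame *src)
{
    struct telemetry_frame f = *src;

    /* CRC covers everything except the trailing crc field itself. */
    f.crc = crc16((const uint8_t *)&f, (uint16_t)(sizeof f - sizeof f.crc));
    uart_send((const uint8_t *)&f, (uint16_t)sizeof f);
}
```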

Examining the telemetry data was cumbersome due to the massive volume of data – not unlike trying to perform debugging analysis with today’s complex SOC devices. We used a custom parser to extract the various data channels that we wanted to examine together and then used a spreadsheet application to scale and manipulate the raw data and to plot the channels we were looking for correlations in. If I were working on a similar project today, I suspect we would still be using a lot of the same types of custom tools. I suspect that the market for embedded software development tools is so wide and fragmented that it is difficult for a tools company to justify creating many tools that meet the unique needs of embedded systems. Instead, there is much more money available on the application side of the software development tool market, and it seems that embedded developers must choose between adapting tools built for application software to their projects or creating and maintaining their own custom tools.

In your own projects, are standard tools meeting your needs or are you using custom or in-house development tools? What kind of custom tools are you using and what problems do they help you solve?

Green with envy: Why power debugging is changing the way we develop code

Friday, March 4th, 2011 by Shawn Prestridge

As time passes, consumers demand more from their mobile devices in terms of content and functionality, and battery technology has not been able to keep up with our insatiable appetite for capability. Software power debugging helps the development engineer create more ecologically sound devices by showing how much power the device consumes and correlating that consumption with the source code. By statistically profiling the code with respect to power, an engineer can understand the impact of design decisions on the mobile devices they create. Armed with this information, the engineer can make more informed decisions about how the code is structured to both maximize battery life and minimize the impact on our planet’s natural resources.

Anyone who has a modern smartphone can attest to a love/hate relationship with it – they love the productivity boost it provides, the GPS functionality that helps them find their destination, and the ability to stay connected to all aspects of their lives, be it via text messaging, e-mail, or social networking. But all of this functionality comes at a great cost – it is constrained by the capacity of the battery and can even have a deleterious impact on the life of the battery, since the battery can only withstand a certain number of charge cycles. There are two ways to approach this problem: either increase the energy density of the battery so that it holds a greater mAh rating for the same size and weight, or pay special attention to eliminating extraneous power usage wherever possible. The problem with the former is that advances in energy storage technology have been far outstripped by the power requirements of the devices they serve. Thus, we are left with the choice of minimizing the amount of power the device consumes.

Efforts to reduce the power footprint of a device have been mostly ad hoc or out of the control of the device’s development engineers; for example, improvements in wafer technology make it possible to space transistors closer together and cut power consumption through reduced capacitances. Power debugging, however, gives a modern development engineer the ability to see how code decisions impact the overall power consumption of the system by tying the measurements of power supplied to the system to the program counter of the microcontroller. Power debugging can reveal potential problems before production hardware goes to manufacturing. For example, a peripheral that the engineer thought was deactivated in the code may in reality still be active and consuming power. By looking at the power graph, the engineer gets the contextual clue that the device is consuming more power than it should, which warrants an inspection of the devices that are active in the system and consuming energy.
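
As a concrete, hypothetical illustration of the forgotten-peripheral case, the sketch below shows the kind of housekeeping a power graph would prompt an engineer to go back and verify – gating off an ADC and its clock before entering a low-power state. The register names and addresses are placeholders rather than any particular vendor's register map.

```c
/* Hypothetical low-power entry: the power trace showed the sleep current was
 * too high, and the culprit was an ADC left enabled. Register names and
 * addresses are placeholders for whatever the real microcontroller provides. */
#include <stdint.h>

#define ADC_CTRL        (*(volatile uint32_t *)0x40010000u) /* placeholder */
#define ADC_ENABLE_BIT  (1u << 0)
#define CLK_GATE_REG    (*(volatile uint32_t *)0x40020000u) /* placeholder */
#define CLK_ADC_BIT     (1u << 5)

extern void cpu_enter_sleep(void);    /* assumed vendor-supplied sleep entry */

void enter_low_power(void)
{
    ADC_CTRL     &= ~ADC_ENABLE_BIT;  /* actually turn the converter off...    */
    CLK_GATE_REG &= ~CLK_ADC_BIT;     /* ...and stop clocking the peripheral   */
    cpu_enter_sleep();                /* now the sleep current matches intent  */
}
```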

Another example of how power debugging can assist an engineer is in looking at the duty cycles of the microcontroller. A common design paradigm in battery-powered electronics is to wake up from some sort of power-saving sleep mode, do the required processing, and then return to the hibernation state. This is relatively simple to code, but the engineer may not be aware that an external stimulus is rousing the microcontroller from sleep mode prematurely, causing the power consumption to be higher than it should be. It is also possible that an external signal is occurring more often than the original design specification planned for. While this case can be traced with very judicious use of breakpoints, the problem may persist for quite some time before the behavior is noticed. A timeline view of the power consumption can expose this latent defect much sooner because it reveals the spikes in current and allows the engineer to double-click a spike to see where in the code the microcontroller was executing when the spike occurred, providing the information necessary to work out what is driving the power requirements so high.
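
A hedged sketch of that wake-process-sleep pattern is shown below, with a wake-source counter added so that the kind of premature wake-up described above leaves a trail that can be correlated against the power timeline. The wake-source query and sleep calls are assumed platform hooks, not a specific vendor API.

```c
/* Hypothetical wake-process-sleep loop. Counting wake-ups per source makes a
 * spurious external stimulus visible instead of quietly draining the battery. */
#include <stdint.h>

enum wake_source { WAKE_TIMER, WAKE_EXT_PIN, WAKE_UNKNOWN, WAKE_SOURCE_COUNT };

static volatile uint32_t wake_counts[WAKE_SOURCE_COUNT];

extern enum wake_source get_wake_source(void);  /* assumed platform hook    */
extern void do_periodic_work(void);             /* the real processing      */
extern void enter_sleep_mode(void);             /* assumed low-power call   */

void main_loop(void)
{
    for (;;) {
        enum wake_source src = get_wake_source();
        wake_counts[src]++;             /* trail for later correlation       */

        if (src == WAKE_TIMER) {
            do_periodic_work();         /* the work the design planned for   */
        }
        /* WAKE_EXT_PIN counts growing faster than the design assumed will
         * show up here and as current spikes on the power-debugging timeline. */
        enter_sleep_mode();
    }
}
```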

Power debugging can also provide statistical information about the power profile of a particular combination of application and board. This can be used to baseline the power consumption so that if the engineer adds or changes a section of code and then sees the power differ drastically from the baseline, the engineer knows that something in that section caused the change and can investigate further what is happening and how to mitigate it. Moreover, an engineer can change microcontroller devices to see whether the power consumption of one device is lower or higher than that of another, giving a direct, like-for-like comparison between the two devices. This allows the engineer to make well-founded decisions about how the system is constructed with respect to power consumption.

It is evident that our consumer society will rely increasingly on mobile devices, which will precipitate demand for more capability and – correspondingly – more power from the batteries that drive these devices. It behooves an engineer to make a design last as long as possible on a single battery charge, so particular attention must be paid to how the design is constructed – both in hardware and in software – to maximize the efficiency of the device. Power debugging gives the engineer the tools necessary to achieve that goal of making a more ecologically friendly device that makes every electron count.

Will Watson affect embedded systems?

Wednesday, February 23rd, 2011 by Robert Cravotta

IBM’s Watson computer system recently beat two of the strongest Jeopardy players in the world in a real match of Jeopardy. The match was the culmination of four years of work by IBM researchers. This week’s question has a dual purpose – to focus discussion on how the Watson innovations can/will/might affect the techniques and tools available to embedded developers – and to solicit questions from you that I can ask the IBM research team when I meet up with them (after the main media furor dies down a bit).

The Watson computing system is the latest example of innovation in extreme processing problem spaces. The NOVA video “Smartest Machine on Earth” provides a nice overview of the project and the challenges that the researchers faced while getting Watson ready to compete against human players in the game of Jeopardy. While Watson is able to interpret the natural language wording of Jeopardy answers and tease out appropriate responses for the questions (Jeopardy provides answers and contestants provide the questions), it was not clear from the press material or the video whether Watson was processing natural language in audio form or only in text form. A segment near the end of the NOVA video casts doubt on whether Watson was able to work with audio inputs.

In order to bump Watson’s performance into the champion “cloud” (a distribution, presented in the video, of the performance of Jeopardy champions), the team had to rely on machine learning techniques so that the computing system could improve how it recognizes the many different contexts that apply to words. Throughout the video, we see that the team kept adding more pattern recognition engines (rules?) to the Watson software so that it could handle different types of Jeopardy questions. A satisfying segment in the video shows Watson changing its weighting engine for a Jeopardy category that it did not understand after receiving the correct answers to four questions in that category – much like a human player would refine their understanding of a category during a match.

Watson uses 2800 processors, and I estimate that the power consumption is on the order of a megawatt or more. This is not a practical energy footprint for most embedded systems, but the technologies that make up this system might be available to distributed embedded systems if they can connect to the main system. Also, consider that the human brain is a blood-cooled 10 to 100 W system – this suggests that we may be able to drastically improve the energy efficiency of a system like Watson in the coming years.

Do you think this achievement is huff and puff? Do you think it will impact the design and capabilities of embedded systems? For what technical questions would you like to hear answers from the IBM research team in a future article?

Is assembly language a dead skillset?

Wednesday, February 2nd, 2011 by Robert Cravotta

Compiler technology has improved over the years. So much so that the “wisdom on the street” is that using a compiled language, such as C, is the norm for the overwhelming majority of embedded code that is placed into production systems these days. I have little doubt that most of this sentiment is true, but I suspect the “last mile” challenge for compilers is far from being solved – which prevents compiled languages from completely removing the need for developers that are expert at assembly language programming.

In this case, I think the largest last-mile candidate for compilers is managing and allocating memory outside of the processor’s register space. This is a critical distinction because most processors, except the very small and slow ones, do not provide a flat memory space where every possible memory access takes a single clock cycle to complete. The register file, level 1 cache, and tightly coupled memories represent the fastest memory on most processors – and those memories represent the smallest portion of the memory subsystem. The majority of a system’s memory is implemented in slower and less expensive circuits – which, when used indiscriminately, can introduce latency and delays when executing program code.

The largest reason for using a cache in a system is to hide as much of the latency in memory accesses as possible so as to keep the processor core from stalling. If there were no time cost for accessing any location in memory, there would be no need for a cache.

I have not seen any standard mechanism in compiled languages to lay out and allocate an application’s storage elements across a memory hierarchy. One problem is that such a mechanism would make the code less portable – but maybe we are reaching a point in compiler technology where that type of portability should be segmented away from code portability. Program code could consist of a portable portion and a target-specific portion that enables a developer to tell a compiler and linker how to organize the entire memory subsystem.
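
Today, the closest thing to that target-specific portion is usually toolchain-specific placement rather than anything in the language itself. For example, GCC-style section attributes let a developer pin a hot buffer into tightly coupled memory and park a cold table in external RAM, provided the linker script defines matching regions. The sketch below assumes hypothetical section names; nothing about it is standardized by the C language.

```c
/* Sketch of toolchain-specific memory placement (GCC-style attributes).
 * The section names, and the linker-script regions that must back them,
 * are assumptions; the C language itself says nothing about any of this. */
#include <stdint.h>

/* Hot, latency-critical working buffer pinned into tightly coupled memory. */
static int16_t fir_state[128] __attribute__((section(".tcm_data")));

/* Large, rarely touched table left in slower external RAM. */
static const uint8_t lookup_table[16384] __attribute__((section(".ext_rodata"))) = { 0 };

/* Time-critical routine placed in fast internal RAM instead of flash. */
__attribute__((section(".ram_code")))
int16_t fir_step(int16_t sample)
{
    /* Stand-in for the real filter update that would use fir_state. */
    fir_state[0] = sample;
    return (int16_t)(fir_state[0] + lookup_table[0]);
}
```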

A possible result of this type of separation is the appearance of many more tools that actually help developers focus on the memory architecture and find the optimum way to organize it for a specific application. Additional tools might arise that would enable developers to develop application-specific policies for managing the memory subsystem in the presence of other applications.

The production alternative at this time seems to be to either accept the consequences of sub-optimal automated memory allocation or impose policies that prevent loading applications onto the system unless they have been run through a certification process that makes sure each program follows some set of memory usage rules. Think of running Flash programs on the iPhone (I think the issue of Flash on these devices is driven more by memory issues – which affect system reliability – than by dislike of another company).

Assembly language programming seems to continue to reign supreme for time sensitive portions of code that rely on using a processor’s specialized circuits in an esoteric fashion and/or rely on an intimate knowledge of how to organize the storage of data within the target’s memory architecture to extract the optimum performance from the system from a time and/or energy perspective. Is this an accurate assessment? Is assembly language programming a dying skillset? Are you still using assembly language programming in your production systems? If so, in what capacity?

Balancing risk versus innovation – configuration in the design platform

Monday, October 25th, 2010 by Rob Evans

An approach to breaking the risk-innovation stalemate is to introduce robust, high-integrity design data management into the electronic design space itself, where it becomes part of the design process rather than an ‘add-on’ that gets in the way and stifles innovation. This is no trivial task; it needs to be done at the fundamental levels of the design environment and across all design domains. It starts by changing the way the design environment models the process – from a collection of disconnected design ‘silos’ to a single concept of product development. In turn, this effectively creates a singular design platform, with a unified data model representing the system being designed.

A platform-based approach offers the possibility of mapping the design data as a single, coherent entity, which simplifies both the management of design data and the process for generating and handing over the data required for procurement and manufacturing. A singular point of contact then exists for all design data management and access, both inside and outside the design environment.

This approach provides a new layer of configuration management that is implemented into the design environment itself, at a platform level. Along with managing the design data, it allows the creation of formal definitions of the links between the design world and the supply chain that is ultimately responsible for building the actual products.

These definitions can be managed as any number of ‘design configurations’. They map the design data, stored as versions of design documents in a nominated repository (a version-controlled design vault), to specific revisions of production Items (blank and assembled boards) that the supply chain is going to build. This formalized management of design data and configurations allows changes to be made without losing historical tracking, or the definitions of what will be made (a design revision) from that version of the design data.

With the design data and configurations under control at a system level, a controlled (or even automated) design release process can potentially eliminate the risks associated with releasing a design to production. This tightly controlled release process extracts design data directly from the design vault, validates and verifies it with configurable rule checkers, and then generates the outputs as defined by the link definitions. The generated outputs are pushed into a ‘container’ representing a specific revision of the targeted item (typically a board or assembly) that is stored in a design ‘release vault’.

In this way, all generated outputs, stored as targeted design revisions, are contained in that centralized storage system, where those released for production (as opposed to prototypes or revisions that were abandoned) are locked down and revisioned. It also facilitates a simple lifecycle management process that allows the maturity of a revision’s data to be controlled and defined, as well as providing a high-integrity foundation for integration with PLM and PDM systems for those organizations that use them, or plan to.

Such a system supports high-integrity design data management in a platform that allows for productivity and design innovation. It eliminates manual or externally imposed systems that attempt to control design data integrity, along with their associated restrictions on design flexibility and freedom. The system applies to the management of data within the design space and, perhaps more importantly, to the process of releasing the required outputs through to an organization’s supply chain. In practice, it reduces the risk of going to production with a design that was not validated, was not in sync, or consisted of an incomplete set of manufacturing data.

With formalized, versioned storage ‘vaults’ (for design and release), the system can provide an audit trail that gives you total visibility from the release data back to the source data, even down to the level of hour-to-hour changes to that design information. This, coupled with the unique application of configurations to define the links between a design and the various production items to be made, allows design management to become an inherent part of the product development process – as opposed to a constricting system imposed over the top of design.

But most importantly, design can be undertaken without having to give up the flexibility, freedom and creative innovation that’s needed to create the next generation of unique and competitive product designs.

Is it always a software problem?

Wednesday, October 20th, 2010 by Robert Cravotta

When I first started developing embedded software, I ran into an expression that seemed to be the answer for every problem – “It’s a software problem.” At first, this expression drove me crazy because it was blatantly wrong many times, but it was the only expression I ever heard. I never heard it was a hardware problem. If the polarity on a signal was reversed – it was a software problem. If a hardware sensor changed behavior over time – it was a software problem. In short, if it was easier, faster, or cheaper to fix any problem in the system with a change to the software – it was a software problem.

Within a year of working with embedded designs, I accepted the position that any problem that software could fix or limit was defined as a software problem, regardless of whether the software did exactly what the design documents specified. I stopped worrying about whether management would think the software developers were inept because, in the long run, they seemed to understand that a software problem did not necessarily translate into a software developer problem.

I never experienced this type of culture when I worked on application software. There were clear demarcations between hardware and software problems. Software problems occurred because the code did not capture error return codes or because the code did not handle an unexpected input from the user. A spurious or malfunctioning input device was clearly a hardware problem. A dying power supply was a hardware problem. The developer of the application code was “protected” by a set of valid and invalid operating conditions. Either a key was pressed or it was not. Inputs and operating modes had a hard binary quality to them. At worst, the application code should not act on invalid inputs.

In contrast, many embedded systems need to operate based on continuous real-world sensing that does not always translate to obvious true/false conditions. Adding to the complexity, a good sensor reading in one context may indicate a serious problem in a different operating context. In the context of a closed-loop control system, it could be impossible to definitively classify every possible input as good or bad.
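
As a small, hypothetical example of why good or bad depends on context: the same chamber-pressure reading can be perfectly healthy in one operating mode and a fault in another, so the check has to carry the operating mode along with the value. The modes and limits below are invented purely for illustration.

```c
/* Hypothetical context-dependent input check: the identical sensor value is
 * acceptable in one operating mode and a fault in another. Limits are made up. */
#include <stdbool.h>

enum engine_mode { MODE_IDLE, MODE_FULL_THRUST };

bool pressure_reading_ok(enum engine_mode mode, float chamber_kpa)
{
    switch (mode) {
    case MODE_IDLE:
        /* At idle, anything above a small residual pressure is suspicious. */
        return chamber_kpa < 50.0f;
    case MODE_FULL_THRUST:
        /* At full thrust, that same low reading would mean a failed engine. */
        return chamber_kpa > 900.0f && chamber_kpa < 1200.0f;
    default:
        return false;   /* unknown mode: treat the input as unusable */
    }
}
```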

Was this culture just in the teams I worked on, or is it prevalent in the embedded community? Does it apply to application developers? Is it always a software problem if the software can detect, limit, or fix an undesirable operating condition?

How important are Software Coding Standards?

Wednesday, October 13th, 2010 by Robert Cravotta

When I was developing embedded systems, we had to comply with specifications that would allow a third party to verify the functional behavior of the system. The closest we came to a software coding standard was a short-lived mandate that said systems needed to be developed in Ada. Invariably, we always received a waiver to the Ada requirement and generally used C to develop our systems. To be fair, most of what I worked on was prototypes and proof-of-concepts – we typically built only a small handful of the system in question. The process for bringing these designs to a manufacturing level was a separate task.

When I started Embedded Insights, I spent some time discussing with my business partner how each of us approached software projects. This was important because we were planning to build a back-end database and client application to deliver capabilities in the Embedded Processing Directory that would change how developers find, research, and select their target processing options. That project is currently ongoing.

One of the software topics we discussed was coding style and how to represent design and implementation decisions. In a sense, we were negotiating software coding standards – though not surface-level syntax rules. Rather, we were discussing how each of us incorporated design assumptions into the code so that someone else (possibly even ourselves a few years later) could figure out what thought process drove the software into its current implementation. I believe that is the essence of a coding standard.

Coding standards should not arbitrarily limit implementation decisions. They should enable a third party to grasp what problems the previous developer was solving. Once a reader understands the different challenges that the developer needed to solve simultaneously, what might appear to be “poor” coding practices might actually turn out to be making the best of a difficult situation.

In short, I think a coding standard should provide a mechanism by which developers can encode their assumptions in the implementation code without limiting their choices. This is especially critical for software because software systems must contend with shared resources – most notably in the time domain. The software from each developer must “take turns” using the CPU and other resources.
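
One lightweight way to encode those assumptions directly in the code, rather than only in a style document, is to make them explicit and, where possible, checkable at the point of use. The sketch below is a hypothetical C example using a compile-time assertion and a documented timing budget; the names and numbers are illustrative, not a recommendation.

```c
/* Hypothetical example of encoding design assumptions in the code itself so a
 * later reader (or the compiler) can see why the implementation looks this way. */
#include <assert.h>
#include <stdint.h>

#define SAMPLE_RATE_HZ   1000u   /* control loop runs at 1 kHz                 */
#define ADC_BUFFER_LEN   32u     /* sized for one loop period of ADC samples   */
#define ADC_SAMPLES_HZ   32000u  /* assumption: ADC free-runs at 32 kS/s       */

/* If someone changes the sample rate or buffer, the assumption breaks loudly
 * at compile time instead of as a subtle overrun in the field. */
static_assert(ADC_BUFFER_LEN * SAMPLE_RATE_HZ == ADC_SAMPLES_HZ,
              "ADC buffer must hold exactly one control period of samples");

static volatile uint16_t adc_buffer[ADC_BUFFER_LEN];

/* Called from the 1 kHz control interrupt.
 * Timing budget (assumption): must complete in under 200 us so the other
 * interrupt-level consumers of the CPU still get their turn. */
void control_step(void)
{
    /* ... consume adc_buffer, update actuator commands ... */
    (void)adc_buffer;
}
```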

How important are software coding standards to the projects you work on? Do you use an industry standard or do you have a custom set of conventions that captures the lessons learned of your own “tribe” of developers? How formal are your coding guidelines and how do you enforce them? Or, do you find that spending too much effort on such guidelines contributes more to “mine is better than yours” religious wars than helping the team get the project finished?