Simulation & Debugging Channel

Simulation enables designers to test their systems without incurring the full cost and risk of destroying hardware by operating it in an untested capacity. This series focuses on the challenges and opportunities available to designers who simulate their systems through logical, physical, and hybrid hardware-in-the-loop simulation techniques.

Unit test tools and automatic test generation

Monday, March 19th, 2012 by Mark Pitchford

When are unit test tools justifiable? Ultimately, the justification of unit test tools comes down to a commercial decision. The later a defect is found in product development, the more costly it is to fix (Figure 1). This is a concept first established in 1975 with the publication of Brooks’ “The Mythical Man-Month” and proven many times since through various studies.

Figure 1. The later a defect is identified, the higher the cost of rectifying it.

The automation of any process changes the dynamic of commercial justification. This is especially true of test tools given that they make earlier unit test more feasible. Consequently, modern unit test almost implies the use of such a tool unless only a handful of procedures are involved. Such unit test tools primarily serve to automatically generate the harness code which provides the main and associated calling functions or procedures (generically “procedures”). These facilitate compilation and allow unit testing to take place.

The tools not only provide the harness itself, but also statically analyze the source code to provide the details of each input and output parameter or global variable in an easily understood form. Where unit testing is performed on an isolated snippet of code, stubbing of called procedures can be an important part of the exercise. This stubbing can also be automated to further enhance the efficiency of the approach.
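
To make the idea concrete, here is a rough, hand-written sketch of the kind of scaffolding such a tool produces for a hypothetical procedure clamp_to_limit() that calls a dependency log_event(). All of the names are invented for illustration; a real tool would generate the equivalent code automatically from its static analysis of the source:

    /* Illustrative sketch only: what a generated harness and stub might
     * look like. No particular tool's output is reproduced here. */
    #include <stdio.h>

    void log_event(int code);   /* dependency of the unit under test */

    /* The unit under test - in practice compiled in from the application
     * source rather than pasted into the harness. */
    int clamp_to_limit(int value, int limit)
    {
        if (value > limit) {
            log_event(1);
            return limit;
        }
        return value;
    }

    /* Generated stub: records how the unit used its dependency instead of
     * executing the real log_event(). */
    static int log_event_calls = 0;
    static int log_event_last_code = 0;
    void log_event(int code)
    {
        log_event_calls++;
        log_event_last_code = code;
    }

    /* Generated harness: assigns the input values, calls the unit, and
     * reports the outputs and the stub activity. */
    int main(void)
    {
        int result = clamp_to_limit(150, 100);
        printf("result=%d, log_event calls=%d, last code=%d\n",
               result, log_event_calls, log_event_last_code);
        return 0;
    }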

This automation makes the assignment of values to the procedure under test a simpler process, and one which demands little intimate knowledge of the code on the part of the test tool operator. This distance enables the necessary unit test objectivity because it divorces the test process from that of code development where circumstances require it, and, from a pragmatic perspective, it substantially lowers the level of skill required to develop unit tests.

This ease of use means that unit test can now be considered a viable development option with each procedure targeted as it is written. When these early unit tests identify weak code, the code can be corrected immediately while the original intent remains fresh in the mind of the developer.

Automatically generating test cases

Generally, the output data generated through unit tests is an important end in itself, but this is not always the case. There may be occasions when the fact that the unit tests have completed successfully is more important than the test data itself. This happens when source code is tested for robustness. To provide for such eventualities, it is possible to use test tools to automatically generate test data as well as the test cases. High levels of code execution coverage can be achieved by this means alone, and the resultant test cases can be complemented by manually generated test cases in the usual way.
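
As a minimal sketch of what such generated data might look like, assuming an invented procedure scale_reading() as the unit under test, the harness below pushes boundary and extreme values through the code; completing the run without a fault is the result of interest, and any remaining coverage gaps can then be filled with manually written cases:

    /* Sketch of tool-style generated test data: boundary and extreme values
     * pushed through the unit to demonstrate robustness and raise coverage.
     * The outputs are recorded rather than checked against expectations. */
    #include <limits.h>
    #include <stdio.h>

    /* Stand-in for the unit under test. */
    static int scale_reading(int raw)
    {
        if (raw < 0) {
            return 0;                /* robustness path for negative readings */
        }
        return raw / 4;
    }

    int main(void)
    {
        const int generated_inputs[] = { INT_MIN, -1, 0, 1, 4095, INT_MAX };
        const unsigned n = sizeof generated_inputs / sizeof generated_inputs[0];

        for (unsigned i = 0; i < n; i++) {
            printf("input=%d output=%d\n",
                   generated_inputs[i], scale_reading(generated_inputs[i]));
        }
        return 0;   /* completing without a fault is the result of interest */
    }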

An interesting application for this technology involves legacy code. Such code is often a valuable asset, proven in the field over many years, but likely developed on an experimental, ad hoc basis by a series of expert “gurus” – expert at getting things done and in the application itself, but not necessarily at complying with modern development practices.

Frequently this SOUP (software of unknown pedigree) forms the basis of new developments which are obliged to meet modern standards either due to client demands or because of a policy of continuous improvement within the developer organization. This situation may be further exacerbated by the fact that coding standards themselves are the subject of ongoing evolution, as the advent of MISRA C:2004 clearly demonstrates.

If there is a need to redevelop code to meet such standards, then there is a need not only to identify the aspects of the code which do not meet them, but also to ensure that in doing so the functionality of the software is not altered in unintended ways. The existing code may well be the soundest – or the only – documentation available, so a means must be provided to ensure that it is treated as such.

Automatically generated test cases can be used to address just such an eventuality. By generating test cases using the legacy code and applying them to the rewritten version, it can be proven that the only changes in functionality are those deemed desirable at the outset.
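
The sketch below shows the shape of that comparison using two invented routines: the legacy routine provides the baseline behaviour and the rewrite is judged only on whether it reproduces it. In practice, a test tool would capture the baseline as stored test cases rather than calling the legacy code directly:

    /* Hypothetical example: prove that the rewritten routine preserves the
     * legacy routine's behaviour across a generated range of inputs. */
    #include <stdio.h>

    /* Stand-in for the field-proven legacy code. */
    static unsigned legacy_checksum(unsigned x)    { return (x * 31u + 7u) & 0xFFu; }

    /* Stand-in for the standards-compliant rewrite. */
    static unsigned rewritten_checksum(unsigned x) { return (x * 31u + 7u) & 0xFFu; }

    int main(void)
    {
        int mismatches = 0;

        for (unsigned input = 0; input <= 2000u; input++) {
            unsigned expected = legacy_checksum(input);     /* captured baseline */
            unsigned actual   = rewritten_checksum(input);  /* new behaviour */
            if (expected != actual) {
                printf("unintended change at input %u: %u vs %u\n",
                       input, expected, actual);
                mismatches++;
            }
        }
        printf("%d unintended functional changes detected\n", mismatches);
        return mismatches ? 1 : 0;
    }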

The Apollo missions may have seemed irrelevant at the time, and yet hundreds of everyday products were developed or modified using aerospace research—from baby formula to swimsuits. Formula One racing is considered a rich man’s playground, and yet British soldiers benefit from the protective qualities of the light, strong materials first developed for racing cars. Hospital patients and premature babies stand a better chance of survival than they would have done a few years ago, thanks to the transfer of F1 know-how to the medical world.

Likewise, unit testing has long been perceived to be a worthy ideal—an exercise for those few involved with the development of high-integrity applications with budgets to match. But the advent of unit test tools offers mechanisms that optimize the development process for all. The availability of such tools has made this technology and unit testing itself an attractive proposition for applications where sound, reliable code is a commercial requirement, rather than only for those applications with a life-and-death imperative.

Unit, Regression and System Testing

Monday, February 20th, 2012 by Mark Pitchford

While unit testing at the time of development is a sound principle to follow, all too often ongoing development compromises the functionality of the software that is already considered complete. Such problems are particularly prevalent when adding functionality to code originally written with no forethought for later enhancements.

Regression testing is what’s needed here. By using a test case file to store a sequence of tests created for the original SOUP (software of unknown pedigree), it is possible to recall and reapply that sequence to the revised code to prove that none of the original functionality has been compromised.

Once configured, this regression testing can be initiated as a background task and run perhaps every evening. Reports highlight any changes to the output generated by earlier test runs. In this way, any code modifications leading to unintentional changes in application behavior can be identified and rectified immediately.

Modern unit test tools come equipped with user-friendly, point-and-click graphical user interfaces. However, when faced with thousands of test cases, a GUI is not always the most efficient way to handle the development of test cases. In recognition of this, test tools are designed to allow these test case files to be developed directly from applications such as Microsoft Excel. As before, the “regression test” mechanism can then be used to run the test cases held in these files.
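
As an illustration of the file-driven approach, the fragment below replays "input,expected" pairs from a file of the kind that might be exported from a spreadsheet. The file name, its format, and the adjust_gain() procedure are all invented for the example:

    /* Sketch: replay regression test cases held in a simple CSV file. */
    #include <stdio.h>

    /* Stand-in for the procedure under regression test. */
    static int adjust_gain(int input)
    {
        return (input * 3) / 2;
    }

    int main(void)
    {
        FILE *cases = fopen("regression_cases.csv", "r");
        if (cases == NULL) {
            perror("regression_cases.csv");
            return 1;
        }

        int input, expected, failures = 0;
        while (fscanf(cases, "%d,%d", &input, &expected) == 2) {
            int actual = adjust_gain(input);
            if (actual != expected) {
                printf("REGRESSION: input %d expected %d got %d\n",
                       input, expected, actual);
                failures++;
            }
        }
        fclose(cases);
        return failures ? 1 : 0;
    }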

Unit and system test in tandem

Traditionally, many applications have been tested by functional means only. The source code is written in accordance with the specification, and then tested to see if it all works. The problem with this approach is that no matter how carefully the test data is chosen, the percentage of code actually exercised can be very limited.

That issue is compounded by the fact that the procedures tested in this way are only likely to handle data within the range of the current application and test environment. If anything changes a little – perhaps in the way the application is used or perhaps as a result of slight modifications to the code – the application could be running entirely untested execution paths in the field.

Of course, if all parts of the system are unit tested and collated on a piecemeal basis through integration testing, then this will not happen. But what if timescales and resources do not permit such an exercise? Unit test tools often provide the facility to instrument code. This instrumented code is equipped to “track” execution paths, providing evidence of the parts of the application which have been exercised during execution. Such an approach provides the information to produce data such as that depicted in Figure 1.

Figure 1. Color-coded dynamic flow graphs and call graphs illustrate the parts of the application which have been exercised. In this example, the red coloring highlights exercised code.
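
The fragment below gives a simplified picture of what such instrumentation amounts to: a probe is planted on each branch, and after a run the probes that were never hit mark the paths that remain untested. The probe mechanism shown is a plain array invented for illustration, not any particular tool's scheme:

    /* Simplified illustration of instrumented code: a probe records each
     * branch that executes so untested paths can be reported afterwards. */
    #include <stdio.h>

    static unsigned char branch_hit[4];          /* one flag per probe point */
    #define PROBE(n) (branch_hit[(n)] = 1)

    static int select_mode(int temperature)
    {
        if (temperature > 100) { PROBE(0); return 2; }   /* over-temperature */
        PROBE(1);
        if (temperature < 0)   { PROBE(2); return 1; }   /* under-temperature */
        PROBE(3);
        return 0;                                         /* normal operation */
    }

    int main(void)
    {
        select_mode(25);            /* a typical "system test" style input */

        for (int i = 0; i < 4; i++) {
            printf("probe %d: %s\n", i,
                   branch_hit[i] ? "exercised" : "NOT exercised");
        }
        return 0;
    }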

Code coverage is an important part of the testing process in that it shows the percentage of the code that has been exercised and proven during test. Proof that all of the code has been exercised correctly need not be based on unit tests alone. To that end, some unit tests can be used in combination with system test to provide a required level of execution coverage for a system as a whole.

This means that the system testing of an application can be complemented by unit tests that execute code which would not normally be exercised in the running of the application. Examples include defensive code (e.g., to prevent crashes due to inadvertent division by zero), exception handlers, and interrupt handlers.
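
For instance, a guard such as the invented average_load() below is never reached by realistic system inputs, so only a deliberately constructed unit test will ever exercise it:

    /* Sketch: defensive code that realistic system testing will not reach. */
    #include <stdio.h>

    /* Returns the average load, guarding against inadvertent division by zero. */
    static int average_load(int total_load, int sample_count)
    {
        if (sample_count == 0) {
            return 0;       /* defensive path, unreachable in normal operation */
        }
        return total_load / sample_count;
    }

    int main(void)
    {
        printf("system-style input  : %d\n", average_load(500, 10));
        printf("unit-test-only input: %d\n", average_load(500, 0));
        return 0;
    }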

Unit test is just one weapon in the developer’s armory. By applying automated unit test both in isolation and in tandem with other techniques, the development of robust and reliable software no longer needs to carry the heavy overhead it once did.

Do you use hardware-in-the-loop simulation?

Wednesday, November 16th, 2011 by Robert Cravotta

While working on some complex control systems for aerospace projects, I had the opportunity to build and use hardware-in-the-loop (HIL or HWIL) simulations. A HIL simulation is a platform where you can swap different portions of the system between simulation models and real hardware. The ability to mix simulated and real components provides a mechanism to test and characterize the behavior and interactions between components. This is especially valuable when building closed-loop control systems that will perform in conditions that you do not fully understand yet (due to a lack of experience with the operating scenario).

Building a HIL simulation is an extensive effort. The simulation must not only emulate electrical signals for sensors and actuators, but it may also need to provide predictable and repeatable physical conditions, such as moving the system around in six degrees of freedom based on real or simulated sensor or actuator outputs. As a result, HIL can be cost prohibitive for many projects; in fact, to date the only people I have met who have used HIL worked on aircraft, spacecraft, automotive, and high-end networking equipment.

I suspect, though, that with the introduction of more sensors and/or actuators in consumer-level products, HIL concepts are being used in new types of projects. For example, tablet devices and smartphones are increasingly aware of gravity. To date, that capability is mostly used to set the orientation of the display, but I have seen lab work where these same sensors are used to detect deliberate motions made by the user, such as shaking, lowering, or raising the device. At that point, HIL concepts provide a mechanism for developers to isolate and examine reality versus their assumptions about how sets of sensors and/or actuators interact under the variation that can occur in each of these use scenarios.
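
A software-level sketch of that idea follows: the application reads acceleration through a single interface, and a switch decides whether the readings come from the real driver or from a simulation model that replays a scripted shake. All names, values, and the gesture itself are hypothetical:

    /* Sketch of swapping a real sensor for a simulation model behind one
     * interface. Names, values, and the scripted gesture are illustrative. */
    #include <stdio.h>

    typedef struct { float x, y, z; } accel_t;

    /* Simulation model: replays a scripted "shake" gesture. */
    static accel_t read_accel_simulated(void)
    {
        static int step = 0;
        accel_t a = { (step++ % 2) ? 3.0f : -3.0f, 0.0f, 9.8f };
        return a;
    }

    /* Placeholder for the real driver, which would read the device itself. */
    static accel_t read_accel_hardware(void)
    {
        accel_t a = { 0.0f, 0.0f, 9.8f };
        return a;
    }

    static int use_simulation = 1;     /* the swap between simulated and real */

    static accel_t read_accel(void)
    {
        return use_simulation ? read_accel_simulated() : read_accel_hardware();
    }

    int main(void)
    {
        for (int i = 0; i < 4; i++) {
            accel_t a = read_accel();
            printf("x=%+.1f y=%+.1f z=%.1f\n", a.x, a.y, a.z);
        }
        return 0;
    }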

In my own experience, I have used HIL simulation to characterize and understand how to successfully use small rocket engines to move and hover a vehicle in the air. The HIL simulation allowed us to switch between real and simulated engines that moved the system. This kind of visibility was especially useful because operating the vehicle was dangerous and expensive. Another HIL simulation allowed us to work with the real camera sensor and physically simulate the motion that the camera would experience in a usage scenario. In each of these simulation setups, we were able to discover important discrepancies between our simulation models and how the real world behaved.

Are HIL simulation concepts moving into “simpler” designs? Are you using HIL simulation in your own projects? Is it sufficient to work with only real hardware, say in the case of a smartphone, or are you finding additional value in being able to simulate specific portions of the system on demand? Are you using HIL in a different way than described here? Is HIL too esoteric a topic for most development?

Unit testing: why bother?

Tuesday, October 25th, 2011 by Mark Pitchford

Unit test? Great in theory, but…

Unit test has been around almost as long as software development itself. It just makes sense to take each application building block, build it in isolation, and execute it with test data to make sure that it does just what it should do without any confusing input from the remainder of the application.

Without automation, the sting comes from not being able to simply lift a software unit from its development environment, compile and run it – let alone supply it with test data. For that to happen, you need a harness program acting as a holding mechanism: it calls the unit, pulls in any included files, provides “stubs” to handle any procedure calls made by the unit, and offers any initialization sequences which prepare data structures for the unit under test to act upon. Not only is creating that harness laborious, but it takes a lot of skill. More often than not, the harness program requires at least as much testing as the unit under test.
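
To make the burden concrete, here is a minimal hand-written harness for one invented unit. Even for something this small, the main(), the stub, and the data-structure initialization all have to be written – and tested – by hand:

    /* Minimal hand-written harness; every part of the scaffolding below is
     * the tester's responsibility. All names are illustrative. */
    #include <stdio.h>
    #include <string.h>

    typedef struct { int readings[8]; int count; } sample_buffer_t;

    /* Stub for a procedure the unit calls; the real one talks to hardware. */
    static int read_sensor(void)
    {
        return 42;
    }

    /* The unit under test, normally compiled in from the application source. */
    static void capture_sample(sample_buffer_t *b)
    {
        if (b->count < 8) {
            b->readings[b->count++] = read_sensor();
        }
    }

    int main(void)
    {
        sample_buffer_t buffer;

        memset(&buffer, 0, sizeof buffer);   /* initialization sequence */
        capture_sample(&buffer);             /* call the unit with test data */
        printf("count=%d first=%d\n", buffer.count, buffer.readings[0]);
        return 0;
    }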

Perhaps more importantly, a fundamental requirement of software testing is to provide an objective, independent view of the software. The very intimate code knowledge required to manually construct a harness compromises the independence of the test process, undermining the legitimacy of the exercise.

Deciding when to unit test

Unit test is not always justifiable, and its extent and scope can vary depending on commercial issues such as the cost of failure in the field or the time unit testing will take.

To determine whether to move forward, you need to ask a couple of questions:

  • If unit testing is to take place, how much is involved?
  • Is it best to invest in a test tool, or is it more cost effective to work from first principles?

Developers must make pragmatic choices. Sometimes the choice is easy based on the criticality of the software. If the software fails, what are the implications? Will anyone be killed, as might be the case in aircraft flight control? Will the commercial implications be disproportionately high, as exemplified by a continuous plastics production plant? Or are the costs of recall extremely high, perhaps in a car’s engine controller? In these cases, extensive unit testing is essential and any tools that aid in that purpose make sense. On the other hand, if software is developed purely for internal use or is perhaps a prototype, then the overhead in unit testing all but the most vital of procedures would be prohibitive.

As you might expect, there is a grey area. Suppose the application software controls a mechanical measuring machine where the quantity of the devices sold is low and the area served is localized. The question becomes: Would the occasional failure be more acceptable than the overhead of unit test?

In these circumstances, it’s useful to prioritize the parts of the software which are either critical or complex. If a software error leads to a strangely colored display or a need for an occasional reboot, it may be inconvenient, but it is not justification for unit testing. On the other hand, the unit test of code which generates reports showing whether machined components are within tolerance may be vital.

Beyond unit test

For some people, the terms “unit test” and “module test” are synonymous. For others, the term “unit” implies the testing of a single procedure, whereas “module” suggests a collection of related procedures, perhaps designed to perform some particular purpose within the application.

Using the latter definitions, manually developed module tests are likely to be easier to construct than unit tests, especially if the module represents a functional aspect of the application itself. In this case, most of the calls to procedures are related and the code accesses related data structures, which makes the preparation of the harness code more straightforward.

Test tools render the distinction between unit and module tests redundant. It is entirely possible to test a single procedure in isolation and equally possible to use the exact same processes to test multiple procedures, a file, or multiple files of procedures, a class (where appropriate), or a functional subset of an entire system. As a result, the distinction between unit and module test is one which has become increasingly irrelevant to the extent that the term “unit test” has come to include both concepts.

Such flexibility facilitates progressive integration testing. Procedures are first unit tested and then collated as part of the subsystems, which in turn are brought together to perform system tests. It also provides options when a pragmatic approach is required for less critical applications. A single set of test cases can exercise a specified procedure in isolation, together with all of the procedures called as a result of exercising it, or anything in between (see Figure 1). Test cases that prove the functionality of the whole call chain are easily constructed. Again, it is easy to “mix and match” the processes depending on the criticality of the code under review.

Figure 1. A single test case (inset) can exercise some or all of the call chain associated with it – in this example, “AdjustLighting.” The red coloring highlights exercised code.

This all-embracing unit test approach can be extended to multithreaded applications. In a single-threaded application, the execution path is well-defined and sequential, such that no part of the code may be executed concurrently with any other part. In applications with multiple threads, there may be two or more paths executed concurrently, with interaction between the threads a commonplace feature of the system. Unit test in this environment ensures that particular procedures behave appropriately both internally and in terms of their interaction with other threads.

Sometimes, testing a procedure in isolation is impractical. For instance, if a particular procedure relies on the existence of some ordered data before it can perform its task, then similar data must be in place for any unit test of that procedure to be meaningful.

Just as unit test tools can encompass many different procedures as part of a single test, they can also use a sequence of tests with each one having an effect on the environment for those executed subsequently. For example, unit testing a procedure which accesses a data structure may be achieved by first implementing a test case to call an initialization procedure within the application, and then a second test case to exercise the procedure of interest.
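
Sketched in code, with invented names, the sequence might look like the fragment below: the first test case simply runs the application's own initialization routine so that the second can exercise the procedure of interest against meaningful data:

    /* Sketch of a two-step test sequence: initialize, then exercise. */
    #include <stdio.h>

    typedef struct { int table[4]; int initialized; } lookup_t;

    /* Application initialization procedure, run as the first test case. */
    static void init_lookup(lookup_t *l)
    {
        for (int i = 0; i < 4; i++) {
            l->table[i] = i * 10;
        }
        l->initialized = 1;
    }

    /* Procedure of interest, exercised by the second test case. */
    static int lookup_value(const lookup_t *l, int index)
    {
        return l->initialized ? l->table[index] : -1;
    }

    int main(void)
    {
        lookup_t lut = { {0}, 0 };

        init_lookup(&lut);                       /* test case 1: set up data */
        printf("lookup_value(2) = %d\n",
               lookup_value(&lut, 2));           /* test case 2: exercise */
        return 0;
    }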

Unit test does not imply testing only in the development environment. Integration between test tools and development environments means that unit testing of software can take place seamlessly using the compiler and target hardware. This is another example of the development judgments required to find an optimal solution – from performing no unit test at all, through to testing all code on the target hardware. The trick is to balance the cost of test against the cost of failure, and the overhead of manual test against the investment cost of automated tools.

Is testing always essential?

Wednesday, August 24th, 2011 by Robert Cravotta

This month’s audit of the Army’s armor inserts by the Pentagon’s inspector general finds that testing of the body armor ballistic inserts was not conducted consistently across the 5 million inserts purchased under seven contracts. According to the audit, the PM SEQ (Army Program Manager Soldier Equipment) did not conduct all of the required tests on two contracts because it had no protection performance concerns about those inserts. Additionally, the PM SEQ did not always use a consistent methodology for measuring the proper velocity or enforcing the humidity, temperature, weathering, and altitude requirements for the tests.

The audit also reports that the sampling process used did not provide a statistically representative sample for the lot acceptance test, so the results of the test cannot be relied on to project identified deficiencies to the entire lot. At this point, no additional testing was performed as part of the audit, so there is no conclusion on whether the ballistic performance of these inserts was adversely affected by the test and quality assurance methods that were applied.

Tests on two lots of recalled inserts so far have found that all of them met “the maximum level of protection specified for threats in combat” according to Matthew Hickman, an Army spokesman. Another spokesman released a statement that “The body armor in use today is performing as it was intended. We are continuing to research our data and as of now have not found a single instance where a soldier has been wounded due to faulty body armor.”

This audit highlights a situation that can impact any product that experiences a significant increase in demand coupled with time sensitivity for availability of that product. High profile examples in the consumer electronics space include game consoles and smart phones. Some of these products underwent recalls or aftermarket fixes. However, similar to the recalled inserts that are passing additional testing, sometimes a product that has not undergone complete testing can still meet all of the performance requirements.

Is all the testing you can do essential to perform every time? Is it ever appropriate to skip a test because “there are no performance concerns?” Do you use a process for modifying or eliminating tests that might otherwise disproportionately affect the product’s pricing or availability without significant offsetting benefit? Is the testing phase of a project an area ripe for optimization or is it an area where we can never do enough?

How does your company handle test failures?

Wednesday, August 17th, 2011 by Robert Cravotta

For many years, most of the projects I worked on were systems that had never been built before in any shape or form. As a consequence, many of the iterations of each of these projects included significant and sometimes spectacular failures as we moved closer to a system that could perform its tasks successfully in an increasingly wider circle of environmental conditions. These path-finding designs needed to operate in a hostile environment (low earth orbit), and they needed to make autonomous decisions because there was no way to guarantee that instructions could come from a central location in a timely fashion.

The complete units themselves were unique prototypes with no more than two iterations in existence at a time. It would take several months to build each unit and develop the procedures by which we would stress and test what the unit could do. The testing process took many more months as the system integration team moved through ground-based testing and eventually moved on to space-based testing. A necessary cost of deploying each unit was losing it when it reentered the Earth’s atmosphere, but a primary goal for each stage of testing was to collect as much data as possible from the unit until it was no longer able to operate and/or transmit telemetry about its internal state of health.

During each stage of testing, the unit was placed into an environment that would minimize the amount of physical damage the unit would be subjected to (such as operating the unit within a netted room that would prevent it from crashing into the floor, walls, or ceiling). The preparation work for each formal test consisted of weeks of refining all of the details in a written test procedure that fortyish people would follow exactly. Any deviation during the final test run would flag a possible abort of the test.

Despite all of these precautions, sometimes things just did not behave the way the team expected. In each failure case, it was essential that the post mortem team be able to explicitly identify what went wrong and why so that future iterations of the unit would not repeat those failures. Because we were learning how to build a completely autonomous system that had to properly react to a range of uncertain environmental conditions, it could sometimes take a significant effort to identify root causes for failures.

Surprisingly, it also took a lot of effort to prove that the system did not experience any failures that we were not able to identify by simple observation during operation. It took a team of people days of analyzing the telemetry data to determine whether the interactions between the various subsystems had behaved correctly or had only coincidentally behaved in an expected fashion during the test run.

The company knew we were going to experience many failures during this process, but the pressure was always present to produce a system that worked flawlessly. However, when the difference between a flawless operation and one that experienced a subtle, but potentially catastrophic anomaly rests on nuanced interpretation of the telemetry data, it is essential that the development team is not afraid to identify possible anomalies and follow them up with robust analysis.

In this project, a series of failures was the norm, but for how many projects is a sequence of system failures acceptable? Do you feel comfortable raising a flag for potential problems in a design or test run? Does how your company handles failure affect what threshold you apply to searching for anomalies and teasing out true root causes? Or is it safer to search a little less diligently and let said anomalies slip through and be discovered later when you might not be on the project anymore? How does your company handle failures?

How much trial and error do you rely on in designs?

Wednesday, August 10th, 2011 by Robert Cravotta

My wife and I have been watching a number of old television series via DVD and video streaming services. We have both noticed (in a distressing way) a common theme among the shows that purport to have a major character who happens to be a scientist – the scientists know more than any reasonable person would, they accomplish tasks more quickly than anyone (or a team of a thousand people) reasonably could, and they make the proper leaps of logic in one or two iterations. While these may be useful mechanisms to keep a 20 to 40 minute story moving along, they in no way reflect our experience in the real engineering world.

Tim Harford’s recent TED talk addresses trial and error as a mechanism for creating successful complex systems, and how it differs from systems built according to a God complex. The talk resonates with my experience and echoes a statement I have floated a few times over the years in a different form. The few times I have suggested that engineering is a discipline of best guesses, the idea has generated some vigorous dissent. Those people offering the most dissent claim that, given a complete set of requirements, they can provide an optimum engineering design to meet those requirements. But my statement refers not just to the process of choosing how to meet a requirement specification, but also to making the specifications in the first place. Most systems that must operate in the real world are just too complex for a specification to completely describe the requirements in a single iteration – there is a need for some trial and error to discover what is more or less important for the specification.

In the talk, Tim provides an industrial example regarding the manufacturing of powdered detergent. The process of making the powder involves pumping a fluid, under high pressure, through a nozzle that distributes the fluid in such a way that, as the water evaporates from the spray, a powder with specific properties lands in a pile to be boxed up and shipped to stores for end users to purchase. The company in this example originally tried an explicit design approach that reflects a God-complex mode of design: it hired an expert to design the nozzle. Apparently the results were unsatisfactory; however, the company was eventually able to come up with a satisfactory nozzle by using a trial and error method. The designers created ten random nozzle designs and tested them all. They chose the nozzle that performed the best and created ten new variations based on that “winning” nozzle. The company performed this iterative process 45 times and was able to create a nozzle that performed its function well. The nozzle performs well, but the process that produced it did not require any understanding of why it works.
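
The loop below is a toy version of that process, optimizing a single "nozzle" parameter against a made-up merit function. The point is the shape of the procedure – generate variations, keep the best performer, repeat – rather than the physics, which the sketch makes no attempt to model:

    /* Toy variation-and-selection loop; the merit function is invented. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical score: higher is better, with a peak near 7.3. */
    static double score(double diameter)
    {
        double d = diameter - 7.3;
        return -(d * d);
    }

    int main(void)
    {
        double best = 1.0;                    /* arbitrary starting design */
        srand(1u);

        for (int generation = 0; generation < 45; generation++) {
            for (int i = 0; i < 10; i++) {    /* ten variations of the winner */
                double variant = best + ((rand() % 2001) - 1000) / 1000.0;
                if (score(variant) > score(best)) {
                    best = variant;           /* select the better performer */
                }
            }
        }
        printf("best diameter after 45 generations: %.3f\n", best);
        return 0;
    }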

Over the years, I have heard many stories about how a similar process yielded a superior solution to a problem compared with an explicit design approach. Do you use a trial and error approach in your designs? Do you introduce variations in a design, down-select the variations based on measured performance, and repeat this process until the level of improvement suggests you are close enough to an optimum configuration? I suspect more people do use a variation-and-select process of trial and error; however, I am not aware of many tools that facilitate this type of approach. What are your thoughts and experiences on this?

How is embedded debugging different?

Wednesday, June 22nd, 2011 by Robert Cravotta

Of all the different embedded designs I have worked on, the project that stands out the most is the first embedded project I worked on – despite the fact that I already had ten years of experience programming computers before that. I had been paid to write simulators, database engines, an assembler, a time-share system, and several automation tools for production systems. All of these projects executed on mainframe systems or desktop computers. None of them quite prepared me for how different working on an embedded design is.

My first embedded design was a simple box that would reside on a ground equipment test rack that supported the flight system we were building and demonstrating. There was nothing particularly special about this box – it had a number of input and select lines and a few output lines. What surprised me most when putting it through its first checkout tests was how clueless I was as to how to troubleshoot the problems that did arise.

While I was aware of keyboard debounce routines from using my desktop system, I had never had to so completely understand the characteristics of different types of switches before. I had never before had to be aware of the wiring within the system, nor had I ever considered doing an end-to-end check on every wire in a system. While putting this simple box together, I became aware of so many new ways a design could go wrong that I had never had to consider in my earlier designs.

On top of the new ways that the system could behave incorrectly, the system had no file system, no display system, and no way to print out a trace log or memory dump. This made debugging a very different experience. Printf statements would be of no use, and there was no single-step debugger available. Worse yet, running the target program on my desktop computer, so that it could simulate the code, was mostly useless because I could not bring the real-world inputs and outputs that the box worked with into the desktop system.

As I tackled each debugging issue, I went from a befuddled state of having no idea how to proceed to a state where I adopted new ways of thinking that let me gain the insights I needed to infer how the system was (or was not) working and what needed to change. I worked on that project alone, and it welcomed me into the world of embedded design and working with real world signals with wide open arms.

How did your introduction to embedded systems go? What insights can you share to warn those that are entering the embedded design community about how designing, debugging, and integrating embedded components is different from writing application-level software?

Are GNU tools good enough for embedded designs?

Wednesday, June 8th, 2011 by Robert Cravotta

The negative responses to the question about Eclipse-based tools surprised me. It had been at least four years since I tried an Eclipse-based development tool, and I assumed that, with so many embedded companies adopting the Eclipse IDE, the environment would have cleaned up nicely.

This got me wondering whether GNU-based tools, especially compilers targeting embedded processors, fare better within the engineering community. As with the Eclipse IDE, it has been far too many years since I used a GCC compiler to know how it has or has not evolved. Unlike an IDE, a compiler does not need to support a peppy graphical user interface – it just needs to generate solid code that works on the desired target. The competition to GCC comes from proprietary tools that claim to perform significantly better at generating target code.

Are the GNU-based development tools good enough for embedded designs – especially those designs that do not provide a heavy user interface? The software for most embedded designs must operate within constrained memory and needs to run efficiently, or it risks driving the cost of the embedded system higher than it needs to be.

Are you using GNU-based development tools – even when there is a proprietary compiler available for your target? What types of projects are GNU-based tools sufficient for and where is the line when the proprietary tools become a necessity (or not)?

Green with envy: Why power debugging is changing the way we develop code

Friday, March 4th, 2011 by Shawn Prestridge

As time passes, consumers demand more from their mobile devices in terms of content and functionality, and battery technology has not been able to keep up with our insatiable appetite for capability. Software power debugging helps the development engineer create more ecologically sound devices by showing how much power the device consumes and correlating that consumption to the source code. By statistically profiling the code with respect to power, an engineer can understand the impact of design decisions on the mobile devices they create. Armed with this information, the engineer can make more informed decisions about how the code is structured to both maximize battery life and minimize the impact on our planet’s natural resources.

Anyone who has a modern smartphone can attest to a love/hate relationship with it – they love the productivity boost it can provide, the GPS functionality that helps them find their destination, and the ability to be connected to all aspects of their lives, be it via text messaging, e-mail, or social networking. But all of this functionality comes at a great cost – it is limited by the capacity of the battery and can even shorten the battery’s life, since a battery can only withstand a certain number of charge cycles. There are two ways to approach this problem: either increase the energy density of the battery so that it can hold a greater mAh rating for the same size and weight, or pay special attention to eliminating extraneous power usage wherever possible. The problem with the former is that advances in energy storage technology have been far outstripped by the power requirements of the devices they serve. Thus, we are left with the choice of minimizing the amount of power consumed by the device.

Efforts to reduce the power footprint of a device have been mostly ad hoc or out of the control of the device’s development engineers; for example, improvements in wafer technology make it possible to space transistors closer together and cut power consumption through reduced capacitances. Power debugging, however, gives a modern development engineer the ability to see how code decisions impact the overall power consumption of the system by tying measurements of the power supplied to the system to the program counter of the microcontroller. Power debugging can reveal potential problems before production hardware is manufactured. For example, a peripheral that the engineer thought was deactivated in the code may in reality still be active and consuming power. By looking at the power graph, the engineer has the contextual clue that the power consumption of the device is higher than it should be, which warrants an inspection of the devices that are active in the system and consuming energy.
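
A stripped-down illustration of that scenario is sketched below. The "register" is modeled as a plain variable so the fragment runs anywhere, and the peripheral, names, and bit positions are all invented; on real hardware, the only outward symptom would be the raised current visible on the power graph:

    /* Sketch of a peripheral believed to be off but left enabled. */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t ADC_CTRL;                  /* stand-in for a control register */
    #define ADC_ENABLE (1u << 0)

    void adc_start(void) { ADC_CTRL |= ADC_ENABLE; }
    void adc_stop(void)  { ADC_CTRL &= ~ADC_ENABLE; }

    static void prepare_for_sleep(void)
    {
        /* Intended: adc_stop();  The omission is the defect that a power
         * graph reveals as higher-than-expected sleep current. */
    }

    int main(void)
    {
        adc_start();                           /* take a burst of readings */
        prepare_for_sleep();
        printf("ADC still enabled during sleep: %s\n",
               (ADC_CTRL & ADC_ENABLE) ? "yes (bug)" : "no");
        return 0;
    }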

Another example of how power debugging can assist an engineer is in looking at the duty cycles of the microcontroller. A common design paradigm in battery-powered electronics is to wake up from some sort of power-saving sleep mode, do the required processing, and then return to the hibernation state. This is relatively simple to code, but the engineer may not be aware that an external stimulus is rousing the microcontroller from sleep mode prematurely, causing the power consumption to be higher than it should be. It is also possible that an external signal is occurring more often than was planned in the original design specification. While this case can be traced with a very judicious use of breakpoints, the problem may persist for quite some time before the behavior is noticed. A timeline view of the power consumption can expose this latent defect much sooner because it reveals the spikes in current and allows the engineer to double-click a spike to see where in the code the microcontroller was executing when the spike occurred, providing the information necessary to work out what is driving the power requirements so high.
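
The fragment below simulates that effect on a host: the firmware expects to wake only on its timer, but a second, unanticipated wake source keeps rousing it and multiplies the time spent awake. The wake sources and their ratio are invented; on hardware, the power timeline would show the same pattern as extra current spikes correlated with the program counter:

    /* Host-runnable sketch of a duty cycle inflated by unexpected wake-ups. */
    #include <stdio.h>

    enum wake_source { WAKE_TIMER, WAKE_SPURIOUS_IRQ };

    /* Simulated wake cause: every third wake-up comes from the unexpected source. */
    static enum wake_source wait_for_wake(int n)
    {
        return (n % 3 == 0) ? WAKE_SPURIOUS_IRQ : WAKE_TIMER;
    }

    int main(void)
    {
        int timer_wakes = 0, spurious_wakes = 0;

        for (int n = 1; n <= 12; n++) {            /* twelve sleep/wake cycles */
            if (wait_for_wake(n) == WAKE_TIMER) {
                timer_wakes++;                     /* expected processing */
            } else {
                spurious_wakes++;                  /* extra awake time = extra power */
            }
            /* return to low-power sleep ... */
        }
        printf("timer wakes=%d, unexpected wakes=%d\n", timer_wakes, spurious_wakes);
        return 0;
    }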

Power debugging can also provide statistical information about the power profile of a particular combination of application and board. This can be used to baseline the power consumption, so that if the engineer adds or changes a section of code and then sees the power differ drastically from the baseline, the engineer knows that something in the code just added or modified caused the change and can investigate what is happening and how to mitigate it. Moreover, an engineer can change microcontroller devices to see whether the power consumption of one device is lower or higher than that of another, giving a like-for-like comparison between the two devices. This allows the engineer to make well-informed decisions about how the system is constructed with respect to power consumption.

It is evident that our consumer society will rely increasingly on mobile devices, which will precipitate demand for more capability and – correspondingly – more power from the batteries that drive these devices. It behooves an engineer to make a design last as long as possible on a single battery charge, so particular attention must be paid to how the design is constructed – both in hardware and software – to maximize the efficiency of the device. Power debugging gives the engineer the tools necessary to achieve that goal of making a more ecologically friendly device that makes every electron count.