How is embedded debugging different?

Wednesday, June 22nd, 2011 by Robert Cravotta

Of all the embedded designs I have worked on, the project that stands out the most is the first one, even though I already had ten years of programming experience behind me at that point. I had been paid to write simulators, database engines, an assembler, a time-share system, and several automation tools for production systems. All of those projects ran on mainframes or desktop computers, and none of them quite prepared me for how different working on an embedded design is.

My first embedded design was a simple box that would reside on a ground-equipment test rack supporting the flight system we were building and demonstrating. There was nothing particularly special about this box: it had a number of input and select lines and a few output lines. What surprised me most when putting it through its first checkout tests was how clueless I was about how to troubleshoot the problems that arose.

While I was aware of keyboard debounce routines from using my desktop system, I had never had to understand the characteristics of different types of switches so completely. Nor had I ever had to pay attention to the wiring within the system, or even considered doing an end-to-end check on every wire. While putting this simple box together, I became aware of so many new ways a design could go wrong that I had never had to consider in my earlier work.
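
For readers who have not run into them, a debounce routine can be as simple as refusing to believe a switch has changed state until the raw reading has stayed the same for several consecutive polls. A minimal sketch in C (the raw-read function, poll count, and poll period here are illustrative, not from that project):

    /* Minimal switch debounce sketch: accept a new switch state only after it
     * has read the same for DEBOUNCE_COUNT consecutive polls. read_switch_raw()
     * is a placeholder for sampling the input pin; call debounce_poll() from a
     * periodic task or timer tick (for example, every 5 ms). */
    #include <stdint.h>
    #include <stdbool.h>

    #define DEBOUNCE_COUNT 4

    bool read_switch_raw(void);             /* placeholder: sample the raw input pin */

    static bool    debounced_state = false; /* last accepted (stable) state */
    static uint8_t stable_polls    = 0;     /* consecutive polls disagreeing with it */

    bool debounce_poll(void)
    {
        bool raw = read_switch_raw();

        if (raw == debounced_state) {
            stable_polls = 0;               /* no change pending; reset the count */
        } else if (++stable_polls >= DEBOUNCE_COUNT) {
            debounced_state = raw;          /* new reading held long enough; accept it */
            stable_polls = 0;
        }
        return debounced_state;
    }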

On top of the new ways that the system could behave incorrectly, the system had no file system, no display system, and no way to print out a trace log or memory dump. This made debugging a very different experience. Printf statements would be of no use, and there was no single-step debugger available. Worse yet, running the target program on my desktop computer, so that it could simulate the code, was mostly useless because I could not bring the real-world inputs and outputs that the box worked with into the desktop system.

As I tackled each debugging issue, I went from a befuddled state of having no idea how to proceed to one in which I adopted new ways of thinking, ways that gave me the insight to infer how the system was (or was not) working and what needed to change. I worked on that project alone, and it welcomed me with wide-open arms into the world of embedded design and real-world signals.

How did your introduction to embedded systems go? What insights can you share to warn those who are entering the embedded design community about how designing, debugging, and integrating embedded components differs from writing application-level software?


6 Responses to “How is embedded debugging different?”

  1. My first embedded system was in Forth, natively hosted, with the compiler running on the controller. It was very nice. That experience persisted as long as I used Forth. Of course, Forth was designed for embedded systems, so maybe this is not a surprise.

    My first pure cross-developed system was printer firmware, cross-assembled. We had an in-circuit emulator, so debugging was possible. As always, I stepped through all paths of new code. However, it was not as pleasant as the Forth system. Some of that was the difference between high- and low-level code, but much of it was the inherent lack of interactivity in cross-developed work, combined with the lack of scripting.

  2. J.B. @ LI says:

    I learned that once you stop on a breakpoint, you have totally messed up the normal real-time behavior of the system. On a telephone system I worked on, that resulted in some hardware driver circuits being left on and burning up, because the drivers were only designed for a 10 percent duty cycle. That was really bad hardware design, though, since one component failing could wipe out a whole frame of circuit boards; the hardware had to be redesigned. Later I used an Intel ICE, and we seemed to have a lot of reliability problems with the probe. Still later I was doing C/C++ source-level debugging with a JTAG/BDM hardware-assisted debugger, which was really nice. I had sometimes had problems previously when debugging C code with ASM-level debuggers, because compiler optimizations made it hard to follow the generated ASM code.

  3. G.L. @ LI says:

    The biggest differences between embedded and mainstream debugging are:

    1. Cross development: your translation tools run on a user-friendly machine (the host), while your executable must be transferred to a board you may have designed yourself (the target). Bringing it up the first time is a nightmare until you find all the pins you forgot to connect and learn all the idiosyncrasies of the new microprocessor, since the dead computer gives you no output at all; it's largely a process of elimination.

    2. You test using a remote debugger that gives you a subset of the debug tools available in a native debugger. It is quite easy to hang the target, in which case it provides no feedback whatsoever, so you return to intelligent guessing. I used in-circuit emulators into the '90s, but they pretty much went away with SMT packaging.

    3. Your processor is attempting to control a real-time process whose behavior you assume when writing the code. Your assumptions are generally wrong to some extent, and you have to adapt in the lab. This is stuff that happens in the millisecond-to-microsecond domain, and simulating it outside the target environment requires a lot of imagination.

    4. Your control circuitry is generally embedded in a power stage, switching hundreds of amps at tens of kilohertz, and it emits serious magnetic fields. If you don't understand the basics of board layout, your circuit will lock up in ways you will never figure out. While electric fields can be stopped by a shield, magnetic fields don't respond to such simple measures.

    5. So, when it comes to troubleshooting item 3, item 4 prevents you from connecting any line-powered instrumentation (i.e. scopes) to your system while it is running at full power (which is the only time it acts up), so you must invent your own diagnostic tools (one example of such a tool is sketched at the end of this comment).

    I hope the above doesn't sound like a whine. That's what makes this business so much fun!
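
    One common self-built diagnostic of the kind item 5 calls for is a small in-RAM event log that the firmware fills while running at full power and that is read out afterwards, once it is safe to connect something to the board. A minimal sketch in C (the buffer size, event codes, and timer read below are illustrative placeholders, not anything from the comment above):

        /* Minimal in-RAM trace: record numbered events with a timestamp into a
         * ring buffer while the system runs, then dump the buffer over a slow
         * link (or read it with the debugger) after the power stage is off.
         * get_timestamp() stands in for a free-running hardware timer read. */
        #include <stdint.h>

        #define TRACE_DEPTH 64                   /* power of two keeps the wrap cheap */

        struct trace_entry {
            uint16_t event;                      /* caller-defined event code */
            uint16_t stamp;                      /* timer value when it happened */
        };

        static struct trace_entry trace_buf[TRACE_DEPTH];
        static uint16_t trace_head;

        uint16_t get_timestamp(void);            /* placeholder: read a hardware timer */

        void trace_event(uint16_t event)
        {
            struct trace_entry *e = &trace_buf[trace_head++ & (TRACE_DEPTH - 1)];
            e->event = event;
            e->stamp = get_timestamp();
        }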

  4. M.H.P. @ LI says:

    About 20 years ago, I used a Modula-2 cross debugger and later an ICE for debugging.

    In my view, one should attempt to debug as much as possible in simulation and only then debug on the target system.

    Thus, the program contains printf statements and input-simulation sections, both guarded by the compiler directive PC and compiled with gcc. AVR-specific code, such as input/output and timers, is guarded by the compiler directive AVR and compiled with avr-gcc.
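
    A minimal sketch of that kind of split, assuming the PC and AVR symbols above are passed on the compiler command line (gcc -DPC versus avr-gcc -DAVR); the pin choices and helper names are made up for illustration, and the AVR register names assume a classic ATmega part:

        /* One source file, two builds: a PC simulation with printf and an input
         * simulation section, and an AVR build with real port I/O. Only the PC
         * and AVR symbols come from the comment above; the rest is illustrative. */
        #include <stdint.h>

        #ifdef PC
        #include <stdio.h>

        static uint8_t simulated_input;                  /* input simulation section */

        static uint8_t read_input(void)      { return simulated_input; }
        static void    set_output(uint8_t v) { printf("output = %u\n", (unsigned)v); }
        #endif

        #ifdef AVR
        #include <avr/io.h>

        static uint8_t read_input(void)      { return PINB & 0x01; }   /* read pin B0 */
        static void    set_output(uint8_t v)
        {
            if (v) PORTD |= (1 << PD2);                  /* drive pin D2 high */
            else   PORTD &= ~(1 << PD2);                 /* or low */
        }
        #endif

        int main(void)
        {
        #ifdef PC
            simulated_input = 1;                         /* feed the logic a test value */
        #endif
            for (;;) {
                set_output(read_input());                /* the logic under test is shared */
        #ifdef PC
                break;                                   /* run once on the PC, loop forever on the AVR */
        #endif
            }
            return 0;
        }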

    That's my approach now while working with AVR microcontrollers for hobby projects. I simulate on the PC to debug the logic and then run on the AVR microcontroller. To debug timing-related aspects on the microcontroller, I set the timer prescaler so that the intervals become long enough to observe with a single LED.

    This approach works reasonably well for small hobby projects where the software consists of a few state machines running quasi-simultaneously.

  5. M.E.P. @ LI says:

    I've been in the embedded business for more than 20 years too, and like many other good people I have made the same mistakes a couple of times, so here is some of my experience!

    The best thing to do first when you start on a new platform is: test it!
    Make a very simple program that flips the outputs, another that reads the inputs and mirrors them at the outputs, and so on. Keep these small programs alive and develop them during the project; when a new version of the hardware arrives, you can test it again.
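
    A checkout program of that kind can be almost trivially small; a sketch (read_inputs() and write_outputs() are placeholders for whatever port access the real board provides):

        /* Minimal hardware checkout sketch: mirror the input lines onto the
         * output lines so every wire can be verified end to end by wiggling an
         * input and watching the matching output with a meter or scope. */
        #include <stdint.h>

        uint8_t read_inputs(void);            /* placeholder: read the input port */
        void    write_outputs(uint8_t bits);  /* placeholder: drive the output port */

        int main(void)
        {
            for (;;) {
                write_outputs(read_inputs());
            }
        }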

    Another good idea is to have spare pins on the controller. Connect them to test points so that you can monitor them with an oscilloscope. For debugging purposes, you can use these extra pins as trace facilities: for example, flip a pin twice each time you pass the idle point of the program, or set a pin when you enter a waiting loop and reset it when you leave. Now an oscilloscope becomes your best friend, because you can use it for debugging!
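
    In code, those trace markers need only be a line or two at the interesting spots; a sketch (the pin macros are placeholders for a single port write on the real controller):

        /* Spare-pin trace markers: on the oscilloscope, a double blip means the
         * idle point was passed, and a high level shows how long a wait lasted. */
        #define TRACE_PIN_HIGH()  do { /* e.g. PORTB |=  (1 << PB5) */ } while (0)
        #define TRACE_PIN_LOW()   do { /* e.g. PORTB &= ~(1 << PB5) */ } while (0)

        static void trace_pulse(void)         /* one short blip on the spare pin */
        {
            TRACE_PIN_HIGH();
            TRACE_PIN_LOW();
        }

        void idle_point(void)                 /* called each pass through the idle loop */
        {
            trace_pulse();
            trace_pulse();                    /* two blips = "passed the idle point" */
        }

        void wait_for_event(void)
        {
            TRACE_PIN_HIGH();                 /* level high = "inside the waiting loop" */
            /* ... spin or sleep until the event arrives ... */
            TRACE_PIN_LOW();
        }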

    As others have pointed out, simulate as much as possible. The hardest thing about going embedded is making the I/O work; the next is having enough processor power for the job your project has to do. Third, test your code in a simulator before putting it on the target.

  6. K.P. @ LI says:

    My first 'embedded' system was written in CORAL-66 for an Intel target. The development was on a VAX mainframe with a simulator: not a target simulator, but an OS simulator. So my level of programming was abstracted from the Intel target in every way, other than sometimes having to consider the byte order in data constructs.

    It wasn't until several years later that I was thrown in at the deep end on a Motorola 68302 design. We switched from the mainframe to Sun SPARC-based UNIX workstations with cross compilers and target emulators.

    We managed to find a way to do unit testing with a SPARC compiler, but this was just for the basic logic of each function. A large percentage of checking our code against requirements and design was therefore done without going anywhere near a real target. The key advantage of this was the speed of the SPARC processor compared with the 68302.
    With this setup we could write, execute, correct, and re-run tests very quickly, but we could never be truly sure that the code was going to behave correctly on the target.

    Once host-based testing was completed, a separate group of engineers would put together a build of several tested modules and then download and run them on the real target through an emulator. Problems would be investigated and recorded, and the failed software would be given back to us to correct.
    It was fast to write and test on the host, but the process of handing over and receiving back was very time-consuming.

    Fast-forward to now, and we often have a target for every engineer, a JTAG or similar connection for every engineer, and perhaps even the possibility of building, downloading, and running an almost complete target build several times a day rather than once per week.

    To me, therefore, the key changes I see are:

    Flash programming by the engineer
    Non-intrusive debug with breakpoints
    The ability to build a full application in a matter of minutes
    Off-the-shelf test suites that generate obvious test cases for you
    Instruction level trace

    I can't think of a downside, other than that the faster development cycle could perhaps reduce the quality of the implementation, due to less thought being applied before each line of code is written.
