Do you refactor embedded software?

Wednesday, February 29th, 2012 by Robert Cravotta

Software refactoring is an activity that transforms software in a way that preserves its external behavior while improving its internal structure. I am aware of software development tools that assist with refactoring application software, but it is not clear whether design teams engage in software refactoring for embedded code, especially for control systems.
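To make the definition concrete, here is a hypothetical fragment of my own (the helper and logging routine are invented for illustration): factoring duplicated magic-number arithmetic into a named function changes the internal structure while leaving every computed result, and therefore the external behavior, unchanged.

    extern void log_mv(int mv);   /* hypothetical output routine */

    /* Before: the same magic-number conversion is duplicated inline. */
    void report_levels_before(int batt_raw, int temp_raw)
    {
        log_mv((batt_raw * 3300) / 1023);
        log_mv((temp_raw * 3300) / 1023);
    }

    /* After: one named helper documents the intent; callers see
       identical results, so external behavior is preserved. */
    static int adc_to_millivolts(int raw)
    {
        return (raw * 3300) / 1023;   /* 10-bit ADC, 3.3 V reference */
    }

    void report_levels_after(int batt_raw, int temp_raw)
    {
        log_mv(adc_to_millivolts(batt_raw));
        log_mv(adc_to_millivolts(temp_raw));
    }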

Refactoring was not practiced in the projects I worked on; in fact, the team philosophy was to make only the smallest change necessary to effect the needed change when working with a legacy system. First, we never had the schedule or budget needed just to make the software “easier to understand or cheaper to modify.” Second, changing the software for “cosmetic” purposes could increase downstream engineering effort, especially in verifying that the changes did not break the behavior of the system under all relevant operating conditions. Note that many of the control projects I worked on were complex enough that it was difficult just to ascertain whether the system worked properly or merely coincidentally looked like it did.

Most of the material I have read about software refactoring assumes the software targets the application layer, is not tightly coupled to a specific hardware target, and is implemented in an object-oriented language such as Java or C++. Are embedded developers performing software refactoring? If so, do you perform it on all types of software, or are there types of software that you definitely include in or exclude from a refactoring effort?

Are you looking at USB 3.0?

Wednesday, February 22nd, 2012 by Robert Cravotta

SuperSpeed USB, or USB 3.0, has been available in certified consumer products for the past two years. The serial bus specification includes a 5 Gbps signaling rate, which represents a ten-fold increase in data rate over Hi-Speed USB. The interface relies on a dual-bus architecture that enables USB 2.0 and USB 3.0 operations to take place simultaneously and provides backward compatibility. Intel recently announced that its upcoming Intel 7 Series Chipset Family for client PCs and Intel C216 Chipset for servers received SuperSpeed USB certification; this may signal that 2012 is an adoption inflection point for the three-year-old specification. In addition to the ten-fold improvement in data transfers, SuperSpeed USB increases the maximum power available via the bus to devices, supports new transfer types, and includes new power management features for lower active and idle power consumption.

As SuperSpeed USB becomes available on more host-like consumer devices, will the need to support the new interface gain more urgency? Are you looking at USB 3.0 for any of your upcoming projects? If so, what features in the specification are most important to you?

Unit, Regression and System Testing

Monday, February 20th, 2012 by Mark Pitchford

While unit testing at the time of development is a sound principle to follow, all too often ongoing development compromises the functionality of the software that is already considered complete. Such problems are particularly prevalent when adding functionality to code originally written with no forethought for later enhancements.

Regression testing is what’s needed here. By using a test case file to store a sequence of tests created for the original SOUP (Software of Unproven Pedigree), it is possible to recall and reapply it to the revised code to prove that none of the original functionality has been compromised.

Once configured, this regression testing can be initiated as a background task and run perhaps every evening. Reports highlight any changes to the output generated by earlier test runs. In this way, any code modifications leading to unintentional changes in application behavior can be identified and rectified immediately.

Modern unit test tools come equipped with user-friendly, point-and-click graphical user interfaces. However, when faced with thousands of test cases, a GUI is not always the most efficient way to develop them. In recognition of this, test tools are designed to allow test case files to be developed directly from applications such as Microsoft Excel. As before, the “regression test” mechanism can then be used to run the test cases held in these files.
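As a rough sketch of the mechanism (my own illustration, not any particular vendor's tool, and scale_input stands in for the unit under test), a harness can replay input/expected pairs exported from a spreadsheet as CSV and flag any result that no longer matches the stored expectation:

    #include <stdio.h>

    extern int scale_input(int raw);   /* the unit under test (assumed) */

    /* Replay "input,expected" pairs from a CSV file (no header row assumed)
       and report any regression against the stored expected outputs. */
    int run_regression(const char *csv_path)
    {
        FILE *f = fopen(csv_path, "r");
        int input, expected, failures = 0;
        if (f == NULL)
            return -1;
        while (fscanf(f, "%d,%d", &input, &expected) == 2) {
            int actual = scale_input(input);
            if (actual != expected) {
                printf("REGRESSION: input %d: expected %d, got %d\n",
                       input, expected, actual);
                failures++;
            }
        }
        fclose(f);
        return failures;
    }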

Unit and system test in tandem

Traditionally, many applications have been tested by functional means only. The source code is written in accordance with the specification, and then tested to see if it all works. The problem with this approach is that no matter how carefully the test data is chosen, the percentage of code actually exercised can be very limited.

That issue is compounded by the fact that the procedures tested in this way are only likely to handle data within the range of the current application and test environment. If anything changes a little – perhaps in the way the application is used or perhaps as a result of slight modifications to the code – the application could be running entirely untested execution paths in the field.

Of course, if all parts of the system are unit tested and collated on a piecemeal basis through integration testing, then this will not happen. But what if timescales and resources do not permit such an exercise? Unit test tools often provide the facility to instrument code. This instrumented code is equipped to “track” execution paths, providing evidence of the parts of the application which have been exercised during execution. Such an approach provides the information to produce data such as that depicted in Figure 1.

Figure 1. Color-coded dynamic flow graphs and call graphs illustrate the parts of the application that have been exercised. In this example, note that the red coloring highlights exercised code.
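Conceptually, the instrumentation behind such graphs amounts to a probe dropped into each basic block; a simplified sketch of the idea follows (real tools insert and report these probes automatically):

    /* Each basic block records that it ran; a test tool's instrumented
       copy of the code would add these probes automatically. */
    #define NUM_BLOCKS 3
    static unsigned char block_hit[NUM_BLOCKS];
    #define PROBE(id) (block_hit[(id)] = 1)

    int clamp_to_limit(int value, int limit)
    {
        PROBE(0);
        if (value > limit) {
            PROBE(1);          /* only hit when the limit engages */
            return limit;
        }
        PROBE(2);
        return value;
    }

After a test run, any zero entries in block_hit correspond to execution paths that no test has exercised.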

Code coverage is an important part of the testing process in that it shows the percentage of the code that has been exercised and proven during test. Proof that all of the code has been exercised correctly need not be based on unit tests alone. To that end, some unit tests can be used in combination with system test to provide a required level of execution coverage for a system as a whole.

This means that the system testing of an application can be complemented by unit tests that execute code which would not normally be exercised in the running of the application. Examples include defensive code (e.g., to prevent crashes due to inadvertent division by zero), exception handlers, and interrupt handlers.
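For example (an illustrative fragment of my own), a defensive divide guard may be unreachable during normal system operation, which makes a unit test the natural place to force that path and claim its coverage:

    #include <stdio.h>

    /* Defensive code: refuse to divide by zero rather than crash. */
    int safe_ratio(int numerator, int denominator)
    {
        if (denominator == 0)
            return 0;          /* defensive path: rarely seen in system test */
        return numerator / denominator;
    }

    /* Unit test exercising the path system testing would normally miss. */
    void test_safe_ratio_zero_denominator(void)
    {
        if (safe_ratio(42, 0) != 0)
            printf("FAIL: safe_ratio did not trap division by zero\n");
    }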

Unit test is just one weapon in the developer’s armory. By applying unit testing automatically, both in isolation and in tandem with other techniques, the development of robust and reliable software doesn’t need to carry the heavy development overhead it once did.

Are you using Built-in Self Tests?

Wednesday, February 15th, 2012 by Robert Cravotta

On many of the projects I worked on, it made a lot of sense to implement BISTs (built-in self tests) because the systems either had safety requirements or the cost of a test run of a prototype system was high enough to justify the extra cost of making sure the system was in as good a shape as possible before committing to the test. A quick search for articles about BIST techniques suggests that the approach may not be adopted as a general design technique except in safety-critical, high-margin, or automotive applications. I suspect that my literature search does not reflect reality and/or that developers are using a different term for BIST.

A BIST consists of tests that a system can initiate and execute on itself, via software and extra hardware, to confirm that it is operating within some set of conditions. In designs without ECC (error-correcting code) memory, we might include tests to ensure the memory was operating correctly; these tests might be exhaustive or based on sampling, depending on the specifics of each project and the time constraints for system boot-up. To test peripherals, we could use loopbacks between specific pins so that the system could control what the peripheral would receive and confirm that outputs and inputs matched.
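A minimal sketch of such a boot-time memory check (illustrative only: a real BIST must run before the region is in use or preserve its contents, and production test patterns are more thorough than a simple checkerboard):

    #include <stdint.h>
    #include <stddef.h>

    /* Write-then-verify a checkerboard pattern over a RAM test region.
       Returns 0 on success, nonzero on the first mismatch. */
    static int bist_ram_check(volatile uint32_t *region, size_t words)
    {
        size_t i;
        for (i = 0; i < words; i++)
            region[i] = (i & 1) ? 0xAAAAAAAAu : 0x55555555u;
        for (i = 0; i < words; i++) {
            uint32_t expect = (i & 1) ? 0xAAAAAAAAu : 0x55555555u;
            if (region[i] != expect)
                return 1;   /* stuck or coupled bit detected */
        }
        return 0;
    }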

We often employed a longer and a shorter version of the BIST to accommodate boot-time requirements. The longer version was usually activated manually or only as part of a cold start (possibly with an override signal). The shorter version might be activated automatically upon a cold or warm start. Despite the effort we put into designing, implementing, and testing BISTs, as well as developing responses for when a BIST failed, we never actually experienced a BIST failure.
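One common way to make the cold/warm distinction is a magic word in RAM that survives a warm reset; here is a sketch under that assumption (the BIST routines are placeholders, and placing the flag in a section the startup code does not zero is toolchain-specific):

    #include <stdint.h>

    #define WARM_BOOT_MAGIC 0xB157B007u

    extern void run_short_bist(void);   /* defined elsewhere */
    extern void run_long_bist(void);    /* defined elsewhere */

    /* Assumed to live in a no-init section so it survives a warm reset. */
    static uint32_t warm_boot_flag;

    void run_bist_at_boot(void)
    {
        if (warm_boot_flag == WARM_BOOT_MAGIC) {
            run_short_bist();           /* warm start: quick checks only */
        } else {
            run_long_bist();            /* cold start: full test suite */
            warm_boot_flag = WARM_BOOT_MAGIC;
        }
    }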

Are you using BIST in your designs? Are you specifying your own test sets, or are you relying on built-in tests that reside in BIOS or third-party firmware? Are BISTs a luxury or a necessity in consumer products? What are appropriate actions for a system to take if a BIST failure is detected?

Do you ever think about endianness?

Wednesday, February 8th, 2012 by Robert Cravotta

I remember when I first learned about this thing called endianness, which governs the order in which the bytes of a data value larger than a single byte are stored. The two most common ordering schemes are big and little endian. Big endian stores the most significant bytes ahead of the least significant bytes; little endian stores data in the opposite order, with the least significant bytes first. The times when I was most aware of endianness were when we were defining data communication streams (telemetry data in my case) that transferred data from one system to another that did not use the same type of processor. The other context where knowing endianness mattered was when the program needed to perform bitwise operations on data structures (usually for execution efficiency).
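Two small fragments show where this bites: a runtime check of the host's byte order, and explicit serialization that pins down the byte order of a telemetry stream so both ends agree no matter which processors they use (illustrative code of my own):

    #include <stdint.h>

    /* Returns 1 on a little-endian host: the low-order byte comes first. */
    int host_is_little_endian(void)
    {
        uint32_t probe = 1;
        return *(const uint8_t *)&probe == 1;
    }

    /* Serialize a 32-bit word big-endian into a telemetry buffer so the
       stream layout is identical no matter which end produced it. */
    void put_be32(uint8_t *out, uint32_t value)
    {
        out[0] = (uint8_t)(value >> 24);
        out[1] = (uint8_t)(value >> 16);
        out[2] = (uint8_t)(value >> 8);
        out[3] = (uint8_t)(value);
    }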

If what I hear from semiconductor and software development tool providers is correct, only a very small minority of developers deal with assembly language anymore. Additionally, I suspect that most designers are not involved in driver development anymore either. With the abstractions that compiled languages and standard drivers offer, does endianness affect how software developers write their code? In other words, are you working with data types that abstract how the data is stored and used, or are you implementing functions in a way that requires you to know how your data is internally represented? Have software development tools successfully abstracted this concept away from most developers?

Are software development tools affecting your choice of 8-bit vs. 32-bit processors?

Wednesday, February 1st, 2012 by Robert Cravotta

I have always proposed that the market for 8-bit processors will not fade away; in fact, there are still a number of market niches that rely on 4-bit processors (such as clock movements and razors with vibrating handles). The smaller processor architectures can support the lowest price points and the lowest energy consumption years before the larger 32-bit architectures can begin to offer anything close to parity with them. In other words, I believe there are very small application niches for which even 8-bit processors are currently too expensive or energy hungry.

Many marketing reports have identified that the available software development tool chains play a significant role in whether a given processor architecture is chosen for a design. It seems that the vast majority of resources spent evolving software development tools are focused on the 32-bit architectures. Is this difference in how software development tools for 8- and 32-bit processors are evolving affecting your choice of processor architectures?

I believe the answer is not as straightforward as some processor and development tool providers would make it out to be. First, 32-bit processors are generally much more complex to configure than 8-bit processors, so the development environments, which often include drivers and configuration wizards, are nearly a necessity for 32-bit processors and almost a non-issue for 8-bit processors. Second, the software that 8-bit processors run is generally smaller and contends with less system-level complexity. Additionally, as embedded processors continue to find their way into smaller tasks, the software may need to be simpler than current 8-bit software to meet the energy requirements of the smallest subsystems.

Do you feel there is a significant maturity difference between software development tools targeting 8- and 32-bit architectures? Do you think there is/will be a widening gap in the capabilities of software development tools targeting different size processors? Are software development tools affecting your choice of using an 8-bit versus a 32-bit processor or are other considerations, such as the need for additional performance headroom for future proofing, driving your decisions?

Do you employ “Brown M&Ms” in your designs?

Wednesday, January 25th, 2012 by Robert Cravotta

I have witnessed many conversations where someone accuses a vendor of forcing customers to use only their own accessories, parts, or consumables as a way to extract the largest amount of revenue out of the customer base. A non-exhaustive list of examples of such products includes parts for automobiles, ink cartridges for printers, and batteries for mobile devices. While there may be some situations where a company is trying to own the entire vertical market around their product, there is often a reasonable and less sinister explanation for requiring such compliance by the user – namely to minimize the number of ways an end user can damage a product and create avoidable support costs and bad marketing press.

The urban legend that the rock band Van Halen employed a contract clause requiring a venue to provide a bowl of M&Ms backstage, but with all of the brown candies removed, is not only true but also provides an excellent example of such a non-sinister explanation. According to the autobiography of David Lee Roth, the band’s lead singer, the bowl of M&Ms with all of the brown candies removed was a nearly costless way to test whether the people setting up the stage had followed all of the details in the band’s extensive setup and venue requirements. If the band found a single brown candy in the bowl, they ordered a complete line check of the stage before they would agree that the entire stage setup met their safety requirements.

This non-sinister explanation is consistent with the types of products whose vendors are accused of merely locking customers into consumables for higher revenues. However, when I examine the details, I usually see a machine, such as an automobile, that requires tight tolerances on every part; otherwise, small variations in non-approved components can combine to create unanticipated oscillations in the body of the vehicle. In the case of printers, variations in the formula for the ink can gum up the mechanical portions of the system across the wide range of temperature and humidity environments that printers operate in. And mobile device providers are very keen to keep the rechargeable batteries in their products from exploding and hurting their customers.

So, do you employ some clever “Brown M&M” in your design that helps signal when components may or may not play together well? This could be as simple as performing a version check of the software before allowing the system to go into full operation, as sketched below. Or is the concept of “Brown M&Ms” just a story to cover greedy practices by companies?
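A minimal sketch of that kind of boot-time check (the version constant and accessor are hypothetical):

    #include <stdint.h>

    #define EXPECTED_PROTOCOL_VERSION 3u   /* version this firmware supports */

    extern uint32_t read_module_protocol_version(void);  /* assumed accessor */

    /* "Brown M&M": refuse full operation if an attached module reports a
       protocol version this firmware was never qualified against. */
    int system_compatibility_check(void)
    {
        if (read_module_protocol_version() != EXPECTED_PROTOCOL_VERSION)
            return 0;   /* stay in a safe, degraded mode */
        return 1;       /* proceed to full operation */
    }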

Are you using accelerometers and/or gyroscopes?

Wednesday, January 18th, 2012 by Robert Cravotta

My daughter received a Nintendo 3DS for the holidays. I naively expected the 3D portion of the handheld gaming machine to be nothing more than a 3D display in a small form factor. Wow, was I wrong. The augmented reality games combine the 3D display with the position and angle of the gaming machine; in other words, what the system displays changes to reflect how you physically move the machine around.

Another use of embedded accelerometers and/or gyroscopes that I have heard about is to enable a system to protect itself when it is dropped. When the system detects that it is falling, it has a brief moment in which to lock down the mechanically sensitive portions of the system so that when it hits the ground it incurs minimal damage to sensitive internal components.
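The detection itself is conceptually simple: in free fall, the measured acceleration magnitude drops toward zero. Here is a sketch, with illustrative thresholds and assumed driver routines:

    #include <math.h>

    #define FREEFALL_THRESHOLD_G  0.3   /* magnitude below this suggests free fall */
    #define FREEFALL_SAMPLES      5     /* consecutive samples before reacting */

    extern void read_accel_g(double *x, double *y, double *z);  /* assumed driver */
    extern void lock_down_mechanics(void);                      /* assumed action */

    /* Call periodically from the sampling loop or a timer interrupt. */
    void freefall_monitor_step(void)
    {
        static int low_g_count;
        double x, y, z;

        read_accel_g(&x, &y, &z);
        if (sqrt(x * x + y * y + z * z) < FREEFALL_THRESHOLD_G) {
            if (++low_g_count >= FREEFALL_SAMPLES)
                lock_down_mechanics();   /* brace before impact */
        } else {
            low_g_count = 0;
        }
    }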

Gyroscopes can be used to stabilize images viewed/recorded via binoculars and cameras by detecting jitter in the way the user is holding the system and making adjustments to the sensor subsystem.

As the prices of accelerometers and gyroscopes continue to benefit from the scale of adoption in gaming systems, the opportunities for including them in other embedded systems improve. Are you using accelerometers and/or gyroscopes in any of your designs? Are you aware of any innovative forms of inertial sensing and processing that might provide inspiration for new capabilities for other embedded developers?

Is college necessary for embedded developers?

Wednesday, January 11th, 2012 by Robert Cravotta

A bill was recently introduced to mandate that high school students apply to college before they can receive their high school diploma. This bill reminded me that I have worked with a number of people on embedded projects who did not have any college experience. It also reminded me that when I started new projects, the front end of the project usually involved a significant amount of research to understand not just what the project requirements were but also what the technology options and capabilities were.

In fact, while reminiscing, I realized that most of what I needed to know to do embedded development was learned on the job. My college education did provide value, but it did not provide specific knowledge related to designing embedded systems. I even learned different programming languages on the job because the ones I used in college were not used in the industry. One concept I learned in college and have found useful over the years, big O notation, does not seem to have been taught to even half of the people I worked with while building embedded systems. Truth be told, my mentors played a much larger role than college did in my ability to tackle embedded designs.

But then I wonder: was all of this on-the-job learning the result of working in the early, dirty, wild-west world of embedded systems? Has the college curriculum since adjusted to address the needs of today’s embedded developers? Maybe it has; in a programming class my daughter recently took in college, the professor spent some time exposing the class to embedded concepts, though the class was not able to go deeply into any topic because it was an introductory course.

Is a college education necessary to become an embedded developer today? If so, does the current curriculum sufficiently prepare students for embedded work, or is something missing from the course work? If not, what skill sets have you found to be essential for someone starting to work with an embedded design team?

Are you using Arduino?

Wednesday, January 4th, 2012 by Robert Cravotta

The Gingerbreadtron is an interesting example of an Arduino project: a gingerbread house that transforms into a robot, built using an Arduino Uno board and six servo motors. Arduino is an open-source electronics prototyping platform intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments. The Arduino project began in 2005, and there are claims that over 300,000 Arduino units are “in the wild.”

According to the website, developers can use Arduino to develop interactive objects, taking inputs from a variety of switches or sensors, and controlling a variety of lights, motors, and other physical outputs. Arduino projects can be stand-alone, or they can communicate with software running on a computer. The boards can be assembled by hand or purchased preassembled; the open-source IDE can be downloaded for free. The Arduino programming language is an implementation of Wiring, a similar physical computing platform, which is based on the Processing multimedia programming environment.
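The flavor of the platform is easy to convey in a few lines of Wiring code. A minimal sketch of the switch-in, output-out pattern described above (pin assignments are arbitrary):

    const int buttonPin = 2;   // switch input
    const int ledPin = 13;     // output: the Uno's on-board LED

    void setup() {
      pinMode(buttonPin, INPUT);
      digitalWrite(buttonPin, HIGH);   // enable the internal pull-up
      pinMode(ledPin, OUTPUT);
    }

    void loop() {
      // Light the LED while the button is pressed (pressed reads LOW).
      digitalWrite(ledPin, digitalRead(buttonPin) == LOW ? HIGH : LOW);
    }

The platform handles clock, pin, and peripheral configuration behind these calls, which is precisely the layer of abstraction the question below asks about.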

My first exposure to the platform was through a friend who was using it to offer a product to control a lighting system. Since then, I have seen more examples of people using the platform in hobby projects, which leads to this week’s question: Are you using Arduino for any of your projects or production products? Is a platform that provides a layer of abstraction over the microcontroller sufficient for hardcore embedded designs, or is it a tool that allows developers who are not experts at embedded design to more easily break into building real-world control systems?