Articles by Robert Cravotta

As a former Technical Editor covering Embedded Processing at EDN, Robert has been following and commenting on the embedded processing space since 2001 (see article index). His expertise includes software development and system design using microprocessors, microcontrollers, digital signal processors (DSPs), multiprocessor architectures, processor fabrics, coprocessors, and accelerators, plus embedded cores in FPGAs, SOCs, and ASICs. Robert's embedded engineering background includes 16 years as a Member of the Technical Staff at Boeing and Rockwell International working on path-finding avionics, power and laser control systems, autonomous vehicles, and vision sensing systems.

Are you using Built-in Self Tests?

Wednesday, February 15th, 2012 by Robert Cravotta

On many of the projects I worked on, it made a lot of sense to implement BISTs (built-in self tests) because the systems either had safety requirements or the cost of executing a test run of a prototype system was high enough to justify the extra cost of making sure the system was in as good a shape as possible before committing to the test. A quick search for articles about BIST techniques suggests that it may not be adopted as a general design technique outside of safety-critical, high-margin, or automotive applications. I suspect that my literature search does not reflect reality and/or developers are using a different term for BIST.

A BIST consists of tests that a system can initiate and execute on itself, via software and extra hardware, to confirm that it is operating within some set of conditions. In designs without ECC (error-correcting code) memory, we might include tests to ensure the memory was operating correctly; these tests might be exhaustive or based on sampling, depending on the specifics of each project and the time constraints for system boot-up. To test peripherals, we could use loopbacks between specific pins so that the system could control what the peripheral would receive and confirm that outputs and inputs matched.
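To make the idea concrete, here is a minimal sketch (not drawn from any specific project) of the two kinds of checks described above: a walking-ones RAM test and a GPIO loopback test. The gpio_write()/gpio_read() calls and the reserved test region are assumptions standing in for whatever board-support functions a real design would provide.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical board-support functions for the loopback test. */
    extern void gpio_write(int pin, int level);
    extern int  gpio_read(int pin);

    /* Walking-ones test over a RAM region reserved for the BIST.
       A production test would also cover address lines and either run
       before live data is placed in the region or save/restore it. */
    static bool bist_ram_walking_ones(volatile uint32_t *region, size_t words)
    {
        for (size_t i = 0; i < words; i++) {
            for (uint32_t bit = 1u; bit != 0u; bit <<= 1) {
                region[i] = bit;
                if (region[i] != bit) {
                    return false;          /* stuck or shorted data bit */
                }
            }
        }
        return true;
    }

    /* Loopback test: drive both logic levels out of one pin and confirm
       each is read back on the pin physically wired to it. */
    static bool bist_gpio_loopback(int out_pin, int in_pin)
    {
        for (int level = 0; level <= 1; level++) {
            gpio_write(out_pin, level);
            if (gpio_read(in_pin) != level) {
                return false;              /* open, short, or stuck pin */
            }
        }
        return true;
    }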

We often employed a longer and a shorter version of the BIST to accommodate boot-time requirements. The longer version was usually activated manually or only as part of a cold start (possibly with an override signal). The short version might be activated automatically upon a cold or warm start. Despite the effort we put into designing, implementing, and testing the BIST, as well as developing responses for when a BIST failed, we never actually experienced a BIST failure.
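A sketch of that kind of selection logic might look like the following, where the reset-cause and override queries are hypothetical board-support calls; the actual reset sources and override mechanism vary by microcontroller.

    #include <stdbool.h>

    /* Hypothetical board-support queries; real reset-cause flags and
       override mechanisms (jumper, maintenance command) vary by MCU. */
    extern bool reset_was_cold(void);
    extern bool long_bist_override(void);
    extern bool run_long_bist(void);    /* exhaustive tests, long boot time */
    extern bool run_short_bist(void);   /* subset that fits the boot budget */

    bool run_power_up_bist(void)
    {
        /* Cold starts get the long BIST unless the override signal asks
           for the quick version; warm starts always get the short BIST. */
        if (reset_was_cold() && !long_bist_override()) {
            return run_long_bist();
        }
        return run_short_bist();
    }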

Are you using BIST in your designs? Are you specifying your own test sets, or are you relying on built-in tests that reside in BIOS or third-party firmware? Are BISTs a luxury or a necessity in consumer products? What are appropriate actions for a system to take if a BIST failure is detected?

Do you ever think about endianness?

Wednesday, February 8th, 2012 by Robert Cravotta

I remember when I first learned about this thing called endianness as it pertains to ordering the bytes of data that consumes more than a single byte of storage. The two most common ordering schemes are big and little endian. Big endian stores the most significant bytes ahead of the least significant bytes; little endian stores data in the opposite order, with the least significant bytes ahead of the most significant bytes. The times when I was most aware of endianness were when we were defining data communication streams (telemetry data in my case) that transferred data from one system to another that did not use the same type of processors. The other context where knowing endianness mattered was when the program needed to perform bitwise operations on data structures (usually for execution efficiency).
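For readers who have not run into it, here is a minimal, self-contained C example showing how the same 32-bit value lands in memory differently on the two kinds of machine; the value chosen is arbitrary.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint32_t value = 0x0A0B0C0Du;
        uint8_t bytes[4];

        memcpy(bytes, &value, sizeof value);

        /* Little endian puts the least significant byte (0x0D) at the lowest
           address; big endian puts the most significant byte (0x0A) there. */
        if (bytes[0] == 0x0D) {
            printf("little endian\n");
        } else if (bytes[0] == 0x0A) {
            printf("big endian\n");
        } else {
            printf("neither (unexpected byte layout)\n");
        }
        return 0;
    }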

If what I hear from semiconductor and software development tool providers is correct, only a very small minority of developers deal with assembly language anymore. Additionally, I suspect that most designers are no longer involved in driver development either. With the abstractions that compiled languages and standard drivers offer, does endianness affect how software developers write their code? In other words, are you working with datatypes that abstract how the data is stored and used, or are you implementing functions in a way that requires you to know how your data is internally represented? Have software development tools successfully abstracted this concept away from most developers?
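One way code can stay agnostic to the host’s byte order, and the kind of abstraction hinted at above, is to build the external data format explicitly with shift operations rather than copying structures byte for byte. A minimal sketch, assuming a big-endian (network order) stream format:

    #include <stdint.h>

    /* Serialize a 32-bit value into a big-endian byte stream. Because the
       shifts operate on values rather than on memory layout, the same code
       produces the same bytes on a little-endian or big-endian host. */
    static void put_u32_be(uint8_t *out, uint32_t v)
    {
        out[0] = (uint8_t)(v >> 24);
        out[1] = (uint8_t)(v >> 16);
        out[2] = (uint8_t)(v >> 8);
        out[3] = (uint8_t)(v);
    }

    static uint32_t get_u32_be(const uint8_t *in)
    {
        return ((uint32_t)in[0] << 24) |
               ((uint32_t)in[1] << 16) |
               ((uint32_t)in[2] << 8)  |
                (uint32_t)in[3];
    }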

Are software development tools affecting your choice of 8-bit vs. 32-bit processors?

Wednesday, February 1st, 2012 by Robert Cravotta

I have always proposed that the market for 8-bit processors will not fade away – in fact, there are still a number of market niches that rely on 4-bit processors (such as clock movements and razors that sport a vibrating handle). The smaller processor architectures can support the lowest cost price points and the lowest energy consumption years before the larger 32-bit architectures can offer anything close to parity with them. In other words, I believe there are very small application niches for which even 8-bit processors are currently too expensive or too energy hungry.

Many marketing reports have identified that the available software development tool chains play a significant role in whether a given processor architecture is chosen for a design. It seems that the vast majority of resources spent evolving software development tools are focused on the 32-bit architectures. Is this difference in how software development tools for 8- and 32-bit processors are evolving affecting your choice of processor architectures?

I believe the answer is not as straightforward as some processor and development tool providers would like to make it out to be. First, 32-bit processors are generally much more complex to configure than 8-bit processors, so the development environments, which often include drivers and configuration wizards, are nearly a necessity for 32-bit processors and almost a non-issue for 8-bit processors. Second, the types of software that 8-bit processors are used for are generally smaller and contend with less system-level complexity. Additionally, as embedded processors continue to find their way into smaller tasks, the software may need to be simpler than current 8-bit software to meet the energy requirements of the smallest subsystems.

Do you feel there is a significant maturity difference between software development tools targeting 8- and 32-bit architectures? Do you think there is/will be a widening gap in the capabilities of software development tools targeting different size processors? Are software development tools affecting your choice of using an 8-bit versus a 32-bit processor or are other considerations, such as the need for additional performance headroom for future proofing, driving your decisions?

Do you employ “Brown M&Ms” in your designs?

Wednesday, January 25th, 2012 by Robert Cravotta

I have witnessed many conversations where someone accuses a vendor of forcing customers to use only their own accessories, parts, or consumables as a way to extract the largest amount of revenue out of the customer base. A non-exhaustive list of examples of such products includes parts for automobiles, ink cartridges for printers, and batteries for mobile devices. While there may be some situations where a company is trying to own the entire vertical market around their product, there is often a reasonable and less sinister explanation for requiring such compliance by the user – namely to minimize the number of ways an end user can damage a product and create avoidable support costs and bad marketing press.

The urban legend that the rock band Van Halen employed a contract clause requiring a venue to provide a bowl of M&Ms backstage, but with all of the brown candies removed, is not only true but also provides an excellent example of such a non-sinister explanation. According to the autobiography of David Lee Roth (the band’s lead singer), the bowl of M&Ms with all of the brown candies removed was a nearly costless way to test whether the people setting up the stage had followed all of the details in the band’s extensive setup and venue requirements. If the band found a single brown candy in the bowl, they ordered a complete line check of the stage before they would agree that the entire stage setup met their safety requirements.

This non-sinister explanation is consistent with the types of products whose vendors I hear people accuse of merely locking customers into consumables for higher revenues. However, when I examine the details, I usually see a machine, such as an automobile, that requires tight tolerances on every part; otherwise, small variations in non-approved components can combine to create unanticipated oscillations in the body of the vehicle. In the case of printers, variations in the formula for the ink can gum up the mechanical portions of the system when put through the wide range of temperature and humidity environments that printers operate in. And mobile device providers are very keen to keep the rechargeable batteries in their products from exploding and hurting their customers.

Do you employ some clever “Brown M&M” in your designs that helps signal when components may or may not play together well? This could be as simple as performing a version check of the software before allowing the system to go into full operation. Or is the concept of “Brown M&Ms” just a story to cover greedy practices by companies?
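As an illustration of the version-check idea (not tied to any particular product), a boot-time compatibility gate might look like the following sketch; the structure fields and the major/minor policy are assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical version record reported by each firmware component. */
    struct component_info {
        uint16_t api_major;   /* incremented for incompatible protocol changes */
        uint16_t api_minor;   /* incremented for backward-compatible additions */
    };

    /* Refuse to enter full operation unless the peer firmware speaks a
       compatible protocol version: the major numbers must match and the
       peer must be at least as new as the host expects -- a cheap
       "brown M&M" for mismatched parts. */
    static bool versions_compatible(const struct component_info *host,
                                    const struct component_info *peer)
    {
        return (peer->api_major == host->api_major) &&
               (peer->api_minor >= host->api_minor);
    }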

Are you using accelerometers and/or gyroscopes?

Wednesday, January 18th, 2012 by Robert Cravotta

My daughter received a Nintendo 3DS for the holidays. I naively expected the 3D portion of the handheld gaming machine to be a 3D display in a small form factor. Wow, was I wrong. The augmented reality games combine the 3D display with the position and angle of the gaming machine; in other words, what the system displays changes to reflect how you physically move the game machine around.

Another use of embedded accelerometers and/or gyroscopes that I have heard about is to enable a system to protect itself when it is dropped. When the system detects that it is falling, it has a brief moment in which it tries to lock down the mechanically sensitive portions of the system so that when it hits the ground it incurs a minimum of damage to the sensitive components inside.
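A drop-detection loop of that sort often reduces to watching for the measured acceleration magnitude to collapse toward zero for several consecutive samples. The sketch below assumes hypothetical driver calls (accel_read_mg(), park_mechanics()) and illustrative threshold values; real products tune these to the sensor and mechanics involved.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical driver calls: per-axis acceleration in milli-g, and a
       routine that parks whatever mechanism is vulnerable to impact. */
    extern void accel_read_mg(int32_t *x, int32_t *y, int32_t *z);
    extern void park_mechanics(void);

    #define FREEFALL_THRESHOLD_MG  300   /* well below the 1000 mg of gravity */
    #define FREEFALL_SAMPLES       5     /* consecutive samples before acting */

    void freefall_monitor_step(void)     /* call periodically, e.g. every 10 ms */
    {
        static int below_count = 0;
        int32_t x, y, z;

        accel_read_mg(&x, &y, &z);

        /* Compare squared magnitude against the squared threshold to avoid
           a square root on a small processor. */
        int64_t mag_sq = (int64_t)x * x + (int64_t)y * y + (int64_t)z * z;
        int64_t thr_sq = (int64_t)FREEFALL_THRESHOLD_MG * FREEFALL_THRESHOLD_MG;

        if (mag_sq < thr_sq) {
            if (++below_count >= FREEFALL_SAMPLES) {
                park_mechanics();        /* the device appears to be in free fall */
            }
        } else {
            below_count = 0;             /* normal gravity seen again */
        }
    }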

Gyroscopes can be used to stabilize images viewed/recorded via binoculars and cameras by detecting jitter in the way the user is holding the system and making adjustments to the sensor subsystem.

As the prices of accelerometers and gyroscopes continue to benefit from the scale of their adoption in gaming systems, the opportunities for including them in other embedded systems improve. Are you using accelerometers and/or gyroscopes in any of your designs? Are you aware of any innovative forms of inertial sensing and processing that might provide inspiration for new capabilities for other embedded developers?

Is College necessary for embedded developers?

Wednesday, January 11th, 2012 by Robert Cravotta

A bill was recently introduced to mandate that high school students apply to college before they can receive their high school diploma. This bill reminded me that I have worked with a number of people on embedded projects who did not have any college experience. It also reminded me that when I started new projects, the front end of the project usually involved a significant amount of research to understand not just what the project requirements were but also what the technology options and capabilities were.

In fact, while reminiscing, I realized that most of what I know about embedded development I learned on the job. My college education did provide value, but it did not provide specific knowledge related to designing embedded systems. I even learned different programming languages on the job because the ones I used in college were not used in the industry. One concept I learned in college and have found useful over the years, big O notation, turned out to be a topic that fewer than half of the people I worked with while building embedded systems had been taught. Truth be told, my mentors played a much larger role in my ability to tackle embedded designs than college did.

But then I wonder, was all of this on-the-job learning the result of working in the early, dirty, and wild-west world of embedded systems? Has the college curriculum since adjusted to address the needs of today’s embedded developers? Maybe it has, based on a programming class my daughter recently took in college: the professor spent some time exposing the class to embedded concepts, but the class was not able to go deeply into any topic because it was an introductory course.

Is a college education necessary to become an embedded developer today? If so, does the current curriculum sufficiently prepare students for embedded work, or is there something missing from the course work? If not, what skill sets have you found to be essential for someone starting to work with an embedded design team?

Are you using Arduino?

Wednesday, January 4th, 2012 by Robert Cravotta

The Gingerbreadtron is an interesting example of an Arduino project. A Gingerbreadtron is a gingerbread house that transforms into a robot. The project was built using an Arduino Uno board and six servo motors. Arduino is an open-source electronics prototyping platform intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments. The Arduino project began in 2005, and there are claims that over 300,000 Arduino units are “in the wild.”

According to the website, developers can use Arduino to develop interactive objects, taking inputs from a variety of switches or sensors, and controlling a variety of lights, motors, and other physical outputs. Arduino projects can be stand-alone, or they can communicate with software running on a computer. The boards can be assembled by hand or purchased preassembled; the open-source IDE can be downloaded for free. The Arduino programming language is an implementation of Wiring, a similar physical computing platform, which is based on the Processing multimedia programming environment.
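A minimal example of that input-to-output pattern – read a switch, drive an output – looks like the sketch below. The pin assignments and the external pull-down resistor on the button are assumptions for an Uno-style board, not a prescription.

    /* Minimal Arduino-style sketch: read a pushbutton and mirror its state
       on the on-board LED. Assumes the button connects pin 2 to +5V when
       pressed, with a pull-down resistor from pin 2 to ground. */
    const int buttonPin = 2;    /* digital input connected to the pushbutton */
    const int ledPin    = 13;   /* on-board LED on most Uno-style boards     */

    void setup() {
        pinMode(buttonPin, INPUT);
        pinMode(ledPin, OUTPUT);
    }

    void loop() {
        /* digitalRead() returns HIGH while the button is pressed. */
        if (digitalRead(buttonPin) == HIGH) {
            digitalWrite(ledPin, HIGH);
        } else {
            digitalWrite(ledPin, LOW);
        }
    }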

My first exposure to the platform was through a friend who was using it to offer a product for controlling a lighting system. Since then, I have seen more examples of people using the platform in hobby projects – which leads to this week’s question. Are you using Arduino for any of your projects or production products? Is a platform that provides a layer of abstraction over the microcontroller sufficient for hardcore embedded designs, or is it a tool that allows developers who are not experts at embedded design to more easily break into building real-world control systems?

Is the rate of innovation stagnating?

Wednesday, December 28th, 2011 by Robert Cravotta

Around this time of year many people like to publish their predictions for the next year – and according to an article, the “experts and analysts” do not see a lot of innovation coming out of the United States soon. The article mentions and quotes a number of sources that suggest the rate of innovation is going to be sluggish the next few years. One source suggested that “bigger innovation labs and companies are holding back on numerous innovations until they can properly monetize them.”

I wonder if these observations and expectations are realistic. I see innovation every time I see some capability become available for less cost, training, or skill than before. I am constantly amazed at the speed at which new technology reaches the hands of people in the lowest quartile of income. More significantly, I am amazed at how these new technologies appear in everyday activities without fanfare. For example, my daughter, who is learning to drive, has pointed out features that she really likes about the car she is driving – features I never gave any thought to, either because I did not notice them or because noticing them would be analogous to noticing and commenting on the air we breathe.

My daughter received a Nintendo 3DS as a present this Christmas. The 3D part of this product goes far beyond the display as it enables her to move the device around and interact with the software in new and meaningful ways. These “invisible” types of innovations do not seem to make big headlines, but I suspect they are still sources of technology disruptions.

As for a company holding off on an innovation, is such a thing possible in a highly competitive world? Can any company afford to hold off on an innovative idea and risk another company beating them to the punch in the market?

Is the rate of innovation stagnating? Is the marketing hype around innovation simply not delivering a return on investment, so that companies are backing off on how much they hype it? Are you aware of anyone holding back innovative ideas while waiting for a better consumer market in which to release them?

How long should testing changes take?

Wednesday, December 21st, 2011 by Robert Cravotta

The two-month payroll tax extension at the center of a budget bill currently going through the US Congress has elicited a response from a trade organization representing the people who would have to implement the new law, and it is the inspiration for this week’s question. A payroll processing trade organization has claimed that even if the bill becomes law, it is logistically impossible to make the changes to tax software before the two-month extension expires. The trade organization claims the changes required by the bill would need at least 90 days for software testing alone, in addition to time for analysis, design, coding, and implementation. Somehow this scenario makes me think of past conversations in which marketing/management would request changes to a system and engineering would push back because there was not enough time to properly perform the change and testing before the delivery date.

If you are part of the group requesting the “simple” change, you may think the developers are overstating the complexity of implementing the change. Often, in my experience, there is strong merit to the developer’s claims because the “simple” change involves some non-obvious complexity especially when the change affects multiple parts of the system.

In my own experience, we worked on many R&D projects, most with extremely aggressive schedules and engineering budgets. Many of these were quick-and-dirty proofs of concept, and “simple” changes did not have to go through the rigorous production processes – or so the requesters felt. What saved the engineering team on many of these requests was the need to minimize the number of variations between iterations of the prototype so that we could perform useful analysis on the test data in the event of failures. Also, we locked down feature changes to the software during system integration so that all changes were in response to resolving system integration issues.

I suspect this perception that changes can be made quickly and at low risk has been reinforced by the electronics industry’s success at delivering what appear to be predictable and mundane annual advances in silicon products that cost 30% less and/or deliver twice as much performance as the previous year’s parts. Compounding this perception are all of the “science-based” television series that show complex engineering and scientific tasks being completed by one or two people in hours or days when in reality they would take dozens to hundreds of people months to complete.

How long should testing changes to a system take? Is it reasonable to expect any change to be ordered, analyzed, designed, implemented, and tested in less than two weeks? I realize that the length of time will depend on the complexity of the change request, but two weeks seems like an aggressive limit for implementing any change that might indirectly affect the operation of the system. That is especially true for embedded systems, where the types of changes requested are much more complex than changing the color of a button or moving a message to a different portion of the display. How does your team manage change requests and the time it takes to process and implement them?

What embedded development tool do you want?

Wednesday, December 14th, 2011 by Robert Cravotta

Over the years, embedded development tools have delivered more capabilities at lower price points. The $40,000+ development workstations and $10,000+ in-circuit emulators of the past have given way to tools that are lower in cost and more capable. Compilers can produce production-quality code in record compilation times by farming the compilation work out across a networked pool of workstations with available processing bandwidth. IDEs (integrated development environments) greatly improve the productivity of developers by seamlessly automating the process of switching between editing and debugging software on a workstation and target system. Even stepping backwards through software execution to track down the source of difficult bugs has been possible for several years.

The tools available today make it a good time to be an embedded developer. But can the tools be even better? We are fast approaching the end of one year and the beginning of the next, and this often marks an opportunity to propose your wish items for the department or project budget. When I meet with development tool providers, I always ask what directions they are pushing their future tools development. In this case, I am looking for something that goes beyond “faster and more” and moves toward assisting developers with analyzing their systems.

One tool capability that I would like to see more of is a compiler driven by simulator/profiler feedback that enables you to quickly explore many different ways to partition your system across multiple processors. Embedded systems have been using multiple processors in the same design for decades, but I think the trend is accelerating to include even more processors (even in the same package) than before to squeeze out costs, improve energy efficiency, and handle increasingly complex operating scenarios.

This partition exploration tool would go beyond current tools that perform multiple compilations with various compiler switches and present you with a code size versus performance trade-off. It should help developers understand how a particular partitioning approach will or will not affect the performance and robustness of a design. Better yet, the tool would assist in automating the exploration of different partitioning approaches so that developers could evaluate dozens (or more) of partitioning implementations instead of the small handful that can be done on today’s tight schedules with an expert effectively doing the analysis by hand. I suspect this type of capability would provide a much-needed productivity boost for developers to handle the growing complexity of tomorrow’s applications.

Is there an embedded development tool that lifts your productivity to new heights such that you would recommend to your management that every member of your team have it? Is there a capability you wish your development tools had that isn’t quite available yet? What are the top one to three development tools you would recommend as must-haves for embedded developers?