Question of the Week Channel

The Question of the Week challenges how designers think about embedded design by touching on the full range of issues affecting embedded developers, such as how and why different trade-offs are made to survive in a world of resource- and time-constrained designs.

Is College necessary for embedded developers?

Wednesday, January 11th, 2012 by Robert Cravotta

A bill was recently introduced to mandate that high school students apply to college before they can receive their high school diploma. This bill reminded me that I have worked with a number of people on embedded projects who did not have any college experience. It also reminded me that when I started new projects, the front end of the project usually involved a significant amount of research to understand not just what the project requirements were but also what the technology options and capabilities were.

In fact, while reminiscing, I realized that most of what I know about embedded development I learned on the job. My college education did provide value, but it did not provide specific knowledge related to designing embedded systems. I even learned different programming languages on the job because the ones I used in college were not used in the industry. One concept I learned in college and have found useful over the years, big O notation, turned out to be unfamiliar to more than half of the people I worked with while building embedded systems. Truth be told, my mentors played a much larger role in my ability to tackle embedded designs than college did.
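
For anyone who has not run into big O notation, here is a minimal sketch of the kind of trade-off it describes; the calibration table, its size, and the function names are hypothetical, not from any project I worked on:

    // Minimal illustration of why big O matters on a constrained target:
    // checking whether a code exists in a sorted table of N calibration values.
    // The table and its contents are hypothetical.
    #include <algorithm>
    #include <cstddef>

    static const std::size_t N = 1024;
    static int table[N];           // assume this is filled with sorted values

    // O(N): walks the whole table in the worst case, up to 1024 comparisons.
    bool contains_linear(int key) {
        for (std::size_t i = 0; i < N; ++i)
            if (table[i] == key) return true;
        return false;
    }

    // O(log N): roughly 10 comparisons for 1024 entries.
    bool contains_binary(int key) {
        return std::binary_search(table, table + N, key);
    }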

But then I wonder: was all of this on-the-job learning the result of working in the early, dirty, and wild-west world of embedded systems? Has the college curriculum since adjusted to address the needs of today's embedded developers? Maybe it has, based on a programming class my daughter recently took in college: the professor spent some time exposing the class to embedded concepts, but the class was not able to go deeply into any topic because it was an introductory course.

Is a college education necessary to become an embedded developer today? If so, does the current curriculum sufficiently prepare students for embedded work, or is something missing from the course work? If not, what skill sets have you found to be essential for someone to start working with an embedded design team?

Are you using Arduino?

Wednesday, January 4th, 2012 by Robert Cravotta

The Gingerbreadtron is an interesting example of an Arduino project. A Gingerbreadtron is a gingerbread house that transforms into a robot; this one was built using an Arduino Uno board and six servo motors. Arduino is an open-source electronics prototyping platform intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments. The Arduino project began in 2005, and there are claims that over 300,000 Arduino units are "in the wild."

According to the website, developers can use Arduino to develop interactive objects, taking inputs from a variety of switches or sensors, and controlling a variety of lights, motors, and other physical outputs. Arduino projects can be stand-alone, or they can communicate with software running on a computer. The boards can be assembled by hand or purchased preassembled; the open-source IDE can be downloaded for free. The Arduino programming language is an implementation of Wiring, a similar physical computing platform, which is based on the Processing multimedia programming environment.
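
To give a flavor of the Wiring-style language, here is a minimal sketch in the spirit of a servo-driven project like the Gingerbreadtron; the pin number, angles, and timing below are illustrative assumptions rather than details from the actual build:

    // Minimal Arduino-style (Wiring/C++) sketch: sweep one hobby servo back and
    // forth. The pin number, angles, and timing are illustrative assumptions,
    // not details from the Gingerbreadtron build.
    #include <Servo.h>

    Servo panelServo;              // one servo that moves a gingerbread panel
    const int SERVO_PIN = 9;       // a PWM-capable pin on an Arduino Uno (assumed)

    void setup() {
      panelServo.attach(SERVO_PIN);
    }

    void loop() {
      for (int angle = 0; angle <= 90; angle += 5) {   // open the panel
        panelServo.write(angle);
        delay(20);                                     // give the servo time to move
      }
      for (int angle = 90; angle >= 0; angle -= 5) {   // close the panel
        panelServo.write(angle);
        delay(20);
      }
    }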

My first exposure to the platform was from a friend who was using it to offer a product to control a lighting system. Since then, I have seen more examples of people using the platform in hobby projects – which leads to this week's question – are you using Arduino for any of your projects or production products? Is a platform that provides a layer of abstraction over the microcontroller sufficient for hardcore embedded designs, or is it a tool that allows developers who are not experts at embedded design to more easily break into building real-world control systems?

Is the rate of innovation stagnating?

Wednesday, December 28th, 2011 by Robert Cravotta

Around this time of year many people like to publish their predictions for the next year – and according to one article, the "experts and analysts" do not see a lot of innovation coming out of the United States soon. The article mentions and quotes a number of sources that suggest the rate of innovation will be sluggish over the next few years. One source suggested that "bigger innovation labs and companies are holding back on numerous innovations until they can properly monetize them."

I wonder whether these observations and expectations are realistic. I see innovation every time some capability becomes available for less cost, training, or skill than before. I am constantly amazed at the speed at which new technology reaches the hands of people in the lowest quartile of income. More significantly, I am amazed at how these new technologies appear in everyday activities without fanfare. For example, my daughter, who is learning to drive, has pointed out features that she really likes about the car she is driving – features I never gave any thought to, either because I did not notice them or because noticing them would be analogous to noticing and commenting on the air we breathe.

My daughter received a Nintendo 3DS as a present this Christmas. The 3D part of this product goes far beyond the display as it enables her to move the device around and interact with the software in new and meaningful ways. These “invisible” types of innovations do not seem to make big headlines, but I suspect they are still sources of technology disruptions.

As for a company holding off on an innovation, is such a thing possible in a highly competitive world? Can any company afford to hold off on an innovative idea and risk another company beating them to the punch in the market?

Is the rate of innovation stagnating? Is the marketing hype around innovation just not getting the return on investment and so companies are backing off on how they hype it? Are you aware of anyone holding back on innovative ideas waiting for a better consumer market to release them?

How long should testing changes take?

Wednesday, December 21st, 2011 by Robert Cravotta

The two-month payroll tax cut extension at the center of a budget bill currently going through the US Congress has elicited a response from a trade organization representing the people who would have to implement the new law, and that response is the inspiration for this week's question. A payroll processing trade organization has claimed that even if the bill became law, it would be logistically impossible to make the changes in tax software before the two-month extension expires. The trade organization claims the changes required by the bill would require at least 90 days for software testing alone, in addition to time for analysis, design, coding, and implementation. Somehow this scenario makes me think of past conversations where marketing/management would make changes to a system and engineering would push back because there was not enough time to properly perform the change and testing before the delivery date.

If you are part of the group requesting the "simple" change, you may think the developers are overstating the complexity of implementing it. Often, in my experience, there is strong merit to the developers' claims because the "simple" change involves some non-obvious complexity, especially when the change affects multiple parts of the system.

In my own experience, we worked on many R&D projects, most with extremely aggressive schedules and engineering budgets. Many of these were quick-and-dirty proofs of concept, and "simple" changes did not have to go through the rigorous production processes – or so the requesters felt. What saved the engineering team on many of these requests was the need to minimize the number of variations between iterations of the prototype so that we could perform useful analysis on the test data in the event of failures. Also, we locked down feature changes to the software during system integration so that all changes were in response to resolving system integration issues.

I suspect this perception that changes can be made quickly and at low risk has been reinforced by the electronics industry's success at delivering what appears to be a predictable and mundane yearly advance: silicon products that cost 30% less and/or deliver twice as much performance as the previous year's parts. Compounding this perception are all of the "science-based" television series that show complex engineering and scientific tasks being completed by one or two people in hours or days, when in reality they would take dozens to hundreds of people months to complete.

How long should testing changes to a system take? Is it reasonable to expect any change to be ordered, analyzed, designed, implemented, and tested in less than two weeks? I realize that the length of time will depend on the complexity of the change request, but two weeks seems like an aggressive limit for implementing any change that might indirectly affect the operation of the system. That is especially true for embedded systems, where the types of changes requested are much more complex than changing the color of a button or moving a message to a different portion of the display. How does your team manage change requests and the time it takes to process and implement them?

What embedded development tool do you want?

Wednesday, December 14th, 2011 by Robert Cravotta

Over the years, embedded development tools have delivered more capabilities at lower price points: $40,000+ development workstations and $10,000+ in-circuit emulators have given way to tools that are lower in cost and more capable. Compilers can produce production-quality code in record compilation times by distributing the compilation across a networked farm of workstations with available processing bandwidth. IDEs (integrated development environments) greatly improve the productivity of developers by seamlessly automating the process of switching between editing and debugging software on a workstation and target system. Even stepping through software execution backwards to track down the source of difficult bugs has been possible for several years.

The tools available today make it a good time to be an embedded developer. But can the tools be even better? We are fast approaching the end of one year and the beginning of the next, which often marks an opportunity to propose your wish items for the department or project budget. When I meet with development tool providers, I always ask what directions they are pushing their future tools development. In this case, I am looking for something that goes beyond "faster and more" and moves toward assisting the developer with analyzing their system.

One tool capability that I would like to see more of is a compiler tool, driven by simulator/profiler feedback, that enables you to quickly explore many different ways to partition your system across multiple processors. Embedded systems have been using multiple processors in the same design for decades, but I think the trend is accelerating to include even more processors (even in the same package) than before to squeeze out costs and energy efficiency as well as to handle increasingly complex operating scenarios.

This partition exploration tool goes beyond current tools that perform multiple compilations with various compiler switches and present you with a code-size-versus-performance trade-off. This tool should help developers understand how a particular partitioning approach will or will not affect the performance and robustness of a design. Better yet, the tool would assist in automating how to explore different partitioning approaches so that developers could explore dozens (or more) partitioning implementations instead of the small handful that can be done on today's tight schedules with an expert effectively doing the analysis by hand. I suspect this type of capability would provide a much-needed productivity boost for developers to handle the growing complexity of tomorrow's applications.
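
As a very rough sketch of the kind of automated exploration I have in mind, consider a brute-force pass that assigns a handful of tasks to two cores and scores each assignment. The task names, cycle counts, and two-core target are invented for illustration; a real tool would pull its numbers from profiler or simulator feedback and weigh communication costs and robustness, not just a single cycle count.

    // Rough sketch of automated partition exploration across two cores.
    // Task names and cycle counts are invented; a real tool would pull these
    // numbers from profiler/simulator feedback rather than a hard-coded table.
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Task { const char* name; uint32_t cycles; };

    int main() {
        const std::vector<Task> tasks = {
            {"sensor_io", 1200}, {"filter", 2600}, {"control_law", 1800},
            {"telemetry", 900},  {"logging", 700}
        };

        uint32_t best_mask = 0;
        uint32_t best_worst_core = UINT32_MAX;

        // Enumerate every assignment of the tasks to core 0 or core 1.
        for (uint32_t mask = 0; mask < (1u << tasks.size()); ++mask) {
            uint32_t load[2] = {0, 0};
            for (std::size_t i = 0; i < tasks.size(); ++i)
                load[(mask >> i) & 1u] += tasks[i].cycles;

            // The busier core bounds the frame time for this partitioning.
            uint32_t worst = std::max(load[0], load[1]);
            if (worst < best_worst_core) {
                best_worst_core = worst;
                best_mask = mask;
            }
        }

        std::printf("best partition (worst-core load: %u cycles)\n",
                    (unsigned)best_worst_core);
        for (std::size_t i = 0; i < tasks.size(); ++i)
            std::printf("  %-11s -> core %u\n", tasks[i].name,
                        (unsigned)((best_mask >> i) & 1u));
        return 0;
    }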

Is there an embedded development tool that lifts your productivity to new heights, such that you would recommend to your management that every member of your team have it? Is there a capability you wish your development tools had that isn't quite available yet? What are the top one to three development tools you would recommend as must-haves for embedded developers?

Have you experienced a “bad luck” test failure?

Wednesday, December 7th, 2011 by Robert Cravotta

Despite all of the precautions that the Mythbusters team takes when doing their tests, the team accidentally launched a cannonball into a neighborhood and through a home. The test consisted of firing a 6-inch cannonball out of a homemade cannon to measure the cannonball's velocity. The cannonball was fired at a sheriff's bomb disposal range, and it was supposed to hit large containers filled with water. The projectile missed the containers and made an unlucky bounce off a safety berm, sending it into a nearby neighborhood. Luckily, despite the property damage, including careening through a house with people sleeping in it, no one was hurt.

This event reminds me of a number of bad-luck test failures I have experienced. Two different events involved similar autonomous vehicle tests, but the failures were due to interactions with other groups. In the first case, a test flight failed because we had delayed it to ensure that it could complete successfully. In this test, we had a small autonomous vehicle powered by rocket engines. The rocket fuel (MMH and NTO) is very dangerous to work with, so we handled it as little as possible. We had fueled up the system for a test flight when the word came down that the launch was going to be delayed because we were using the same kind of launch vehicle that had just experienced three failed flights before our test.

While we waited for the failure analysis to complete, our test vehicle was placed into storage with the fuel still on board (there really was no way to empty the fuel tanks because the single-test system had not been designed for that). A few months later we got the go-ahead on the test, and we pulled the vehicle out of storage. The ground and flight checkouts passed with flying colors and the launch proceeded. However, during the test, once our vehicle blew its ordnance to allow the fuel to flow through the propulsion system, the seals catastrophically failed and the fuel immediately vented. The failure occurred because the seals were not designed to be in constant contact with the fuel for the months that the vehicle was in storage. The good news was that all of the electronics were operating correctly; the vehicle simply had no fuel to do what it was intended to do.

The other bad-luck failure was the result of poor communication about an interface change. In this case, the system had been built around a 100Hz control cycle. A group new to the project decided to change the inertial measurement unit so that it operated at 400Hz. The change in sample rate was not communicated to the entire team, and the resulting test flight was a spectacular spinning-out-of-control failure.
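
A hedged sketch of how that kind of mismatch bites; the integrator, the numbers, and the constant rotation are illustrative rather than anything from the actual flight software:

    // Illustrative only, not the actual flight software: an attitude integrator
    // that hard-codes the original 100 Hz assumption.
    #include <cstdio>

    const float DT = 1.0f / 100.0f;   // assumed control period, in seconds

    float integrate_rate(float angle_rad, float rate_rad_per_s) {
        return angle_rad + rate_rad_per_s * DT;   // called once per IMU sample
    }

    int main() {
        // One second of a constant 0.5 rad/s rotation, but the new IMU delivers
        // 400 samples per second while DT still assumes 100 Hz.
        float angle = 0.0f;
        for (int i = 0; i < 400; ++i)
            angle = integrate_rate(angle, 0.5f);

        std::printf("estimated angle after 1 s: %.2f rad (true angle: 0.50 rad)\n",
                    angle);
        // Prints 2.00 rad: the estimate grows four times faster than the vehicle
        // actually rotates, and a control loop acting on it will overcorrect.
        return 0;
    }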

In most of the bad-luck failures I am aware of, the failure occurred because of assumptions that masked or hid the consequences of miscommunication or of unexpected decisions made by one group within the larger team. In our case, the tests were part of a series of tests, and the failures mostly cost us precious time, but sometimes such failures are more serious. For example, the Mars Climate Orbiter (in 1999) unexpectedly disintegrated while executing a navigation maneuver. The root cause of that failure was a mismatch in the measurement systems used: one team used English units while another team used metric units.

I guess calling these bad-luck failures is a nice way to say that a group of people did not perform all of the checks they should have before starting their tests. Have you ever experienced a "bad luck" failure? What was the root cause of the failure, and could a change in procedures have prevented it?

How should embedded systems handle battery failures?

Wednesday, November 30th, 2011 by Robert Cravotta

Batteries – increasingly we cannot live without them. We use batteries in more devices than ever before, especially as the trend to make a mobile version of everything continues its relentless advance. However, the investigation and events surrounding the battery fires in the Chevy Volt are yet another reminder that every engineering decision involves trade-offs. In this case, damaged batteries, especially large ones, can cause fires. However, this is not the first time we have seen issues related to damaged batteries – remember the exploding cell phone batteries from a few years ago? Well, that problem has not been completely licked, as there are still reports of exploding cell phones even today (in Brazil).

These incidents remind me of when I worked on a battery charger and controller system for an aircraft. We put a large amount of effort into ensuring that the fifty-plus-pound battery could not and would not explode no matter what type of failure it might endure. We had to develop a range of algorithms to constantly monitor each cell of the battery and respond appropriately if anything improper started to occur with any of them. One additional constraint on our responses, though, was that the battery had to deliver power whenever the system demanded it, even with parts of the battery damaged or failing.

Even though keeping the battery operating as well as it can under all conditions represents an extreme operating requirement, I do not believe it is all that rare a requirement when you realize that automobiles and possibly even cell phones sometimes demand similar levels of operation. I recall discussing the exploding batteries a number of years ago, and one comment was that the explosions were a system-level design concern rather than just a battery manufacturing issue – in most of the exploding-phone cases at that time, the explosions were the consequence of improperly charging the battery at an earlier time. Adding intelligence to the battery to reject a charging load that was out of specification was a system-level method of minimizing the opportunity to damage the batteries through improper charging.
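
As a sketch of what that kind of system-level intelligence might look like, assuming invented voltage and temperature limits rather than anything from the aircraft system described above:

    // Sketch of per-cell monitoring with a charge-reject decision. The voltage
    // and temperature limits are invented for illustration.
    #include <cstddef>
    #include <cstdio>

    struct CellStatus {
        float voltage_v;
        float temperature_c;
    };

    const float CELL_V_MAX = 4.25f;   // assumed over-voltage limit per cell
    const float CELL_V_MIN = 2.80f;   // assumed under-voltage limit per cell
    const float CELL_T_MAX = 60.0f;   // assumed charging temperature limit, deg C

    // Returns true only if every cell is inside its safe charging window.
    // Discharge to meet a power demand may still be permitted separately.
    bool charging_allowed(const CellStatus* cells, std::size_t count) {
        for (std::size_t i = 0; i < count; ++i) {
            if (cells[i].voltage_v > CELL_V_MAX) return false;
            if (cells[i].voltage_v < CELL_V_MIN) return false;
            if (cells[i].temperature_c > CELL_T_MAX) return false;
        }
        return true;
    }

    int main() {
        CellStatus pack[4] = { {3.9f, 25.0f}, {3.8f, 26.0f}, {4.3f, 25.0f}, {3.9f, 24.0f} };
        // Cell 2 is over-voltage, so the charger should refuse the charging load.
        std::printf("charging allowed: %s\n", charging_allowed(pack, 4) ? "yes" : "no");
        return 0;
    }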

Given the wide range of applications that batteries are finding use in, what design guidelines do you think embedded systems should follow to provide the safest operation of batteries despite the innumerable ways that they can be damaged or fail? Is disabling the system appropriate?

Food for thought on disabling the system: consider how CFLs (compact fluorescent lights) handle end-of-life conditions, when too much of the mercury has migrated to the other end of the lighting tube – they purposefully burn out a fuse so that the controller board is unusable. While this simple approach avoids operating a CFL beyond its safe range, it has caused much concern among users as more and more people are alarmed by the burning components in their lamps.

How should embedded systems handle battery failures? Is there a one-size-fits-all approach, or even a tiered approach to handling different types of failures, so that users can confidently use their devices without fear of explosion or fire while still knowing when there is a problem with the battery system and getting it fixed before it becomes a major one?

What embedded device/tool/technology are you thankful for?

Thursday, November 24th, 2011 by Robert Cravotta

In the spirit of the Thanksgiving holiday being celebrated in the United States this week, what device, tool, or technology are you thankful for because it has made your life as an embedded developer better? This can be anything, not just the newest developments. The idea is to identify those things that made a material impact during your embedded career.

As an example, my own introspection on the topic yielded a somewhat surprising answer for me – I am thankful for the humble USB port. I remember when my mentor came to me one day with a spec sheet and asked me what I thought about this thing called the Universal Serial Bus specification. We talked about that specification on and off over the course of a year or so, but never did I imagine the larger extent of the impact of so small a change.

Not only did USB make connecting and powering devices simpler and more reliable, such that everyday technophobes could successfully use those devices, but it also simplified development boards and workbenches all around, because more and more new development boards that included a USB port no longer required a separate source for system power. As a consumer, I have recognized that I have not crawled under my desk to plug in a transformer in years – and for that, I am thankful.

There are many things I am thankful for, and these opportunities for introspection are valuable because they increase the odds of recognizing the mundane and nearly invisible changes that have permeated our environment – just like the invisible embedded devices that populate our everyday lives. What embedded device, tool, and/or technology are you thankful for?

Do you use hardware-in-the-loop simulation?

Wednesday, November 16th, 2011 by Robert Cravotta

While working on some complex control systems for aerospace projects, I had the opportunity to build and use hardware-in-the-loop (HIL or HWIL) simulations. A HIL simulation is a platform where you can swap different portions of the system between simulation models and real hardware. The ability to mix simulated and real components provides a mechanism to test and characterize the behavior and interactions between components. This is especially valuable when building closed-loop control systems that will perform in conditions that you do not fully understand yet (due to a lack of experience with the operating scenario).

Building a HIL simulation is an extensive effort. The simulation must not only emulate electrical signals for sensors and actuators, but it may also need to provide predictable and repeatable physical conditions, such as moving the system around in six degrees of freedom based on real or simulated sensor or actuator outputs. As a result, HIL can be cost prohibitive for many projects; in fact, to date the only people I have met who have used HIL worked on aircraft, spacecraft, automotive, and high-end networking equipment.

I suspect, though, that with the introduction of more sensors and/or actuators in consumer-level products, HIL concepts are being used in new types of projects. For example, tablet devices and smartphones increasingly are aware of gravity. To date, the ability to detect gravity is mostly used to set the orientation of the display, but I have seen lab work where these same sensors are being used to detect deliberate motions made by the user, such as shaking, lowering, or raising the device. At that point, HIL concepts provide a mechanism for developers to isolate and examine reality versus their assumptions about how sets of sensors and/or actuators interact under the variation that can occur in each of these use scenarios.

In my own experience, I have used HIL simulation to characterize and understand how to successfully use small rocket engines to move and hover a vehicle in the air. The HIL simulation allowed us to switch between real and simulated engines that moved the system. This kind of visibility was especially useful because operating the vehicle was dangerous and expensive. Another HIL simulation allowed us to work with the real camera sensor and physically simulate the motion that the camera would experience in a usage scenario. In each of these simulation setups, we were able to discover important discrepancies between our simulation models and how the real world behaved.
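
A minimal sketch of the swapping mechanism; the Engine interface and class names are my own illustration, not code from those programs. The control software talks only to an abstract engine, and the test setup decides whether a real actuator driver or a dynamics model sits behind it.

    // Minimal sketch of swapping a real component for a simulation model in a
    // HIL setup. The Engine interface and class names are illustrative.
    #include <cstdio>

    struct Engine {
        virtual void setThrust(float newtons) = 0;
        virtual ~Engine() {}
    };

    struct RealEngine : Engine {
        void setThrust(float newtons) override {
            (void)newtons;   // would command the actual valve/actuator hardware here
        }
    };

    struct SimulatedEngine : Engine {
        float commanded = 0.0f;
        void setThrust(float newtons) override {
            commanded = newtons;   // fed into a vehicle dynamics model instead of hardware
        }
    };

    // The control software only ever sees the Engine interface, so the same
    // code runs whether the test uses real engines or simulated ones.
    void controlStep(Engine& engine, float thrustCommand) {
        engine.setThrust(thrustCommand);
    }

    int main() {
        SimulatedEngine sim;
        controlStep(sim, 42.0f);   // swap in a RealEngine here to drive hardware instead
        std::printf("simulated thrust command: %.1f N\n", sim.commanded);
        return 0;
    }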

Are HIL simulation concepts moving into “simpler” designs? Are you using HIL simulation in your own projects? Is it sufficient to work with only real hardware, say in the case of a smartphone, or are you finding additional value in being able to simulate specific portions of the system on demand? Are you using HIL in a different way than described here? Is HIL too esoteric a topic for most development?

Do you receive too much email?

Wednesday, November 9th, 2011 by Robert Cravotta

During a recent conversation I heard someone share a "good" thing about emailed newsletters – it is easy to sort the email list and block-delete all of them. This got me thinking about how much time I spend managing email each day, and it got me wondering: does anyone/everyone else receive on the order of 100 emails a day too? Mind you, these are the business-relevant emails, not the countless spam emails that several layers of spam filters intercept and dispose of for me.

Email allows me to work with dozens of people asynchronously throughout the week without having to spend time identifying a common time when we can work together. However, I receive so many newsletters that I do not have time to open them all. On the other hand, many of them are not worth opening most of the time, but the few times they are worth opening make it worthwhile to receive them. A good subject line goes a long way toward signaling whether a particular issue of a newsletter might be worth the time to open and read. And that identifies one of the problems of receiving a constant stream of information – a majority of it is not relevant to what you need at the moment you receive it.

On the other hand, I do not like block-deleting these emails because, used correctly, they can provide a sort of customized and pre-filtered search database for when I need to research something. I think this works because I choose which newsletters to receive, and it is (usually) easy to stop receiving a newsletter. When I do a search on this informal database, I sometimes find a pointer in a newsletter that helps me find the material I am looking for from sources that have generally earned my trust as being reliable.

The downside, or cost, of having these newsletters to search is that too many show up in my mailbox each week. Automatic email filtering rules that move newsletters into folders are helpful, but they usually take effect as the emails arrive in my mailbox. I prefer to have the newsletters pop up in my inbox so that I can see their subjects before they are shunted into a newsletter folder. To date, I have not seen an email tool that will move emails into appropriate folders after they have been in the inbox for a day or a week.

Do you receive too much email? Or is your email box not overflowing with information? Are you receiving so many newsletters that aim to consolidate information for you but end up flooding your mailbox with too much information that is not immediately relevant? What strategies do you use to manage the influx of newsletters so that they do not interfere or possibly hide important emails?