Question of the Week Channel

The Question of the Week challenges how designers think about embedded design concepts by touching on topics that cover the entire range of issues affecting embedded developers, such as how and why different trade-offs are made to survive in a world of resource- and time-constrained designs.

What techniques do you use to protect information?

Thursday, June 16th, 2011 by Robert Cravotta

The question of how to protect information on computers and networks has been receiving a lot of visibility with the public disclosures of more networks being hacked over the past few weeks. The latest victims of hacking in the last week include the United States CIA site, the United States Senate site, and Citibank. Based on conversations with people about my own and their experiences with having account and personal information compromised, I suspect each of us uses a number of techniques that would be worth sharing with each other to improve the protection of our data.

Two techniques that I have started to adopt in specific situations involve the use of secure tokens and the use of dummy email addresses. The secure token option is not available for every system, and it does add an extra layer of passwords to the login process. The secure token approach that I use generates a new temporary passcode every 30 seconds. Options for generating the temporary passcode include using a hardware key-fob or a software program that runs on your computer or even your cell phone. The secure token approach is far from transparent, and there is some cost in setting up the token.
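For readers curious about what is happening under the hood, here is a minimal sketch of the time-window idea behind these tokens: the token and the server share a secret, both divide the current time into 30-second windows, and both derive a short code from the window counter and the secret. Real tokens use a cryptographic construction over that counter (standardized TOTP tokens, for example, use an HMAC per RFC 6238); the mixing function and secret value below are placeholders so the sketch stays self-contained.

/* Simplified illustration of a time-windowed passcode. This is NOT a real
 * token algorithm: standardized TOTP tokens (RFC 6238) compute an HMAC over
 * the time counter with a shared secret, while mix() below is only a
 * stand-in so the sketch stays self-contained. The secret is hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint32_t mix(uint64_t counter, uint64_t secret)
{
    /* Placeholder mixing step; a real token would use an HMAC here. */
    uint64_t x = counter ^ secret;
    x ^= x >> 33;
    x *= 0xff51afd7ed558ccdULL;
    x ^= x >> 33;
    return (uint32_t)x;
}

int main(void)
{
    const uint64_t shared_secret = 0x5ec0de5ec0de5ecULL;   /* hypothetical */
    uint64_t counter = (uint64_t)time(NULL) / 30;          /* 30-second window */
    unsigned passcode = (unsigned)(mix(counter, shared_secret) % 1000000u);

    printf("Temporary passcode: %06u\n", passcode);        /* six digits */
    return 0;
}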

I have only just started playing with the idea of using temporary or dummy email addresses to provide a level of indirection between my login information and my email account. In this case, my ISP allows me to maintain up to 500 temporary email IDs that I can create, manage, and destroy at a moment’s notice, so I can create a separate email address for each service. What makes these email addresses interesting is that there is no way to actually log into an email account with those names; they are merely aliases for my real account, which remains private. I’m not positive that this is better than just using a real email address, but I know I was worried the one time I had a service hacked, because I realized that the email address connected to that service was also used by other services – and that represented a potential single point of failure or security access point to a host of private accounts.

One challenge of the dummy email addresses is keeping track of each one; however, because none of these addresses provides an access point to the account itself, I feel comfortable using a text file to track which email address goes with which service. On the other hand, I am careful never to place the actual email address behind those aliases in the same place.

Do you have some favorite techniques that you have adopted over the years to protect your data and information? Are they techniques that require relying on an intermediary – such as with the secure tokens, or are they personal and standalone like the dummy email address idea? Are any of your techniques usable in an embedded device, and if so, does the design need to include any special hardware or software resources to include it in the design?

Are GNU tools good enough for embedded designs?

Wednesday, June 8th, 2011 by Robert Cravotta

The negative responses to the question about Eclipse-based tools surprised me. It had been at least four years since I tried an Eclipse-based development tool, and I assumed that, with so many embedded companies adopting the Eclipse IDE, the environment would have cleaned up nicely.

This got me wondering whether GNU-based tools, especially compilers targeting embedded processors, fare better within the engineering community. As with the Eclipse IDE, it has been far too many years since I used a GCC compiler to know how it has or has not evolved. Unlike an IDE, a compiler does not need to support a peppy graphical user interface – it just needs to generate solid code that works on the desired target. The competition to GCC comes from proprietary tools that claim to perform significantly better at generating target code.

Are the GNU-based development tools good enough for embedded designs – especially those designs that do not provide a heavy user interface? The software for most embedded designs must fit within constrained memory and run efficiently, or it risks driving the cost of the embedded system higher than it needs to be.
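As a rough illustration of how GCC gets used in size-constrained embedded builds, the sketch below leans on a couple of common GCC extensions; the section name, handler name, and the build flags mentioned in the comment are typical examples rather than requirements of any particular target.

/* Sketch of GCC-specific idioms that embedded projects commonly lean on.
 * The section name, handler name, and linker script support are assumptions
 * for the example. A size-conscious build might use flags along the lines of
 *   gcc -Os -ffunction-sections -fdata-sections -Wl,--gc-sections
 * so that unreferenced functions and data are dropped at link time. */
#include <stdint.h>

/* Place a timing-critical routine in RAM (assumes the linker script
 * defines a .ramfunc output section). */
__attribute__((section(".ramfunc")))
void fast_copy(volatile uint32_t *dst, const uint32_t *src, uint32_t words)
{
    while (words--) {
        *dst++ = *src++;
    }
}

/* Provide a default interrupt handler that application code can override. */
__attribute__((weak))
void UART0_IRQHandler(void)
{
    /* Default: do nothing; a real handler is supplied elsewhere if needed. */
}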

Are you using GNU-based development tools – even when there is a proprietary compiler available for your target? What types of projects are GNU-based tools sufficient for and where is the line when the proprietary tools become a necessity (or not)?

What is important when looking at a processor’s low power modes?

Wednesday, June 1st, 2011 by Robert Cravotta

Low power operation is an increasingly important capability of embedded processors, and many processors support multiple low power modes to enable developers to accomplish more with less energy. While low power modes differ from processor to processor, each mode enables a system to operate at a lower power level either by running the processor at lower clock rates and voltages or by removing power from selected parts of the processor, such as specific peripherals, the main processor core, and memory spaces.

An important characteristic of a low power or sleep mode is the current draw while the system is operating in that mode. However, evaluating and comparing the current draw between low power modes on different processors requires you to look at more than just the current draw to perform an apples-to-apples comparison. For example, the time it takes the system to wake up from a given mode can disqualify a processor from consideration in a design. The wake-up time depends on factors such as the settling time for the clock source and for the analog blocks. Some architectures offer multiple clock sources so that a system can perform work at a slower rate while the faster clock source is still settling – further complicating the comparison of wake-up times between processors.
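One rough way to fold wake-up time into an apples-to-apples comparison is to compute an average current over a complete sleep/wake/work cycle. The numbers in the sketch below are purely illustrative and not taken from any datasheet.

/* Back-of-the-envelope average current for a duty-cycled system.
 * Every number here is illustrative, not taken from any datasheet. */
#include <stdio.h>

int main(void)
{
    const double period_s    = 1.0;     /* one measurement cycle per second  */
    const double sleep_i_ma  = 0.001;   /* 1 uA in the low power mode        */
    const double wake_i_ma   = 2.0;     /* current while the clocks settle   */
    const double wake_t_s    = 0.005;   /* 5 ms wake-up (clock settling)     */
    const double active_i_ma = 5.0;     /* current while doing useful work   */
    const double active_t_s  = 0.001;   /* 1 ms of useful work               */

    double sleep_t_s = period_s - wake_t_s - active_t_s;
    double avg_ma = (sleep_i_ma  * sleep_t_s +
                     wake_i_ma   * wake_t_s +
                     active_i_ma * active_t_s) / period_s;

    /* With these numbers the wake-up interval costs twice as much charge as
     * the useful work does, which is why wake-up time can matter as much as
     * the sleep-mode current itself. */
    printf("Average current: %.4f mA\n", avg_ma);
    return 0;
}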

Another differentiator for low power modes is the level of granularity they support, such as whether the developer can turn peripherals or coprocessors on and off individually or only in blocks. Some low power modes remove power from the main processor core and leave an autonomous peripheral controller operating to manage and perform data collection and storage. Low power modes can also differ in which circuits they leave running, such as brown-out detection and the real-time clock, and in whether the contents of RAM and registers are preserved. The architectural decisions about which circuits can be powered down depend greatly on the end application, and they provide opportunities for specific processors to best target niche requirements.
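To make the granularity point concrete, here is a minimal sketch of what entering a low power mode with selective peripheral gating can look like on a Cortex-M class part using CMSIS; the vendor header, the peripheral clock-gate register, and its bit names are hypothetical, since every vendor arranges these differently.

/* Minimal sketch of selective peripheral gating plus a deep sleep entry on a
 * Cortex-M class part using CMSIS. The vendor header, the PCLK_ENABLE
 * register, and its bit names are hypothetical. SCB->SCR,
 * SCB_SCR_SLEEPDEEP_Msk, and __WFI() are standard CMSIS symbols. */
#include "device.h"                     /* hypothetical vendor header (CMSIS) */

void enter_low_power_sample_mode(void)
{
    /* Gate clocks to everything except the peripherals that must keep
     * running autonomously (here: ADC and DMA for data collection). */
    PCLK_ENABLE = PCLK_ADC | PCLK_DMA;  /* hypothetical clock-gate register */

    /* Request the deeper core sleep state and wait for an interrupt. */
    SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;
    __WFI();

    /* Execution resumes here after wake-up; restore clocks as needed. */
    SCB->SCR &= ~SCB_SCR_SLEEPDEEP_Msk;
}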

When you are looking at a processor’s low power modes, what information do you consider most important? When comparing different processors, do you weigh wake-up times, or does current draw trump everything else? How important is your ability to control which circuits are powered on or off?

Do you care if your development tools are Eclipse based?

Wednesday, May 25th, 2011 by Robert Cravotta

I first explored the opportunity of using the Eclipse and NetBeans open source projects as a foundation for embedded software development tools in an article a few years back. Back then these Java-based IDEs (Integrated Development Environments) were squarely targeting application developers, but the embedded community was beginning to experiment with using these platforms for their own development tools. Since then, many companies have built and released Eclipse-based development tools – and a few have stuck with their own IDEs.

This week’s question is an attempt to start evaluating how these open source development platforms are working out for embedded suppliers and developers. In a recent discussion with IAR Systems, I felt like the company’s recent announcement about an Eclipse plug-in for the Renesas RL78 was driven by developer requests. IAR also supports its own proprietary IDE – the IAR Embedded Workbench. Does a software development tools company supporting two different IDEs signal something about the open source platform?

In contrast, Microchip’s MPLAB X IDE is based on the NetBeans platform – effectively a competing open source platform to Eclipse. One capability that using the open source platform provides is that the IDE supports development on hosts running the Linux, Mac OS, and Windows operating systems.

I personally have not tried using either an Eclipse or NetBeans tool in many years, so I do not know how well they have matured over the past few years. I do recall that managing installations was somewhat cumbersome, and I expect that is much better now. I also recall that the tools were a little slow to react to what I wanted to do, and again, today’s newer computers may have made that a non-issue. Lastly, the open source projects were not really built with the needs of embedded developers in mind, so the embedded tools that migrated to these platforms had to conform as best they could to architectural assumptions driven by the needs of application developers.

Do you care if an IDE is Eclipse or NetBeans based? Does the open source platform enable you to manage a wider variety of processor architectures from different suppliers in a meaningfully better way? Does it matter to your design-in decision if a processor is supported by one of these platforms? Are tools based on these open source platforms able to deliver the functionality and responsiveness you need for embedded development?

Is the cloud safe enough?

Wednesday, May 18th, 2011 by Robert Cravotta

The cloud and content streaming continue to grow as a connectivity mechanism for delivering applications and services. Netflix now accounts for almost 30 percent of downstream internet traffic during peak times, according to Sandvine’s Global Internet Phenomena Report. Microsoft and Amazon are entering the online storage market. But given Sony’s recent experience with the security of their PlayStation and Online Entertainment services, is the cloud safe enough, especially when new exploits are being uncovered in their network even as they bring those services back online?

When I started working, I was exposed to a subtle but powerful concept that is relevant to deciding if and when the cloud is safe enough to use, and that lesson has stayed with me ever since. One of my first jobs was supporting a computing operations group, and one of their functions was managing the central printing services. Some of the printers they managed were huge impact printers that weighed many hundreds of pounds each. A senior operator explained to me that there was a way to greatly accelerate the wear and tear on these printers merely by sending a print job containing particular, but completely legal, sequences of text.

This opened my eyes to the fact that even when a device or system is being used “correctly,” unintended consequences can occur unless the proper safeguards are added to the design of that system. This realization has served me well in countless projects because it taught me to focus on mitigating legal but unintended operating scenarios so that these systems were robust.

An example that affects consumers more directly is the exploding cell phone batteries of a few years back. In some of those cases, the way the charge was delivered to the battery weakened the batteries; however, if a smarter regulator had been placed between the charge input and the battery input, charge scenarios that are known to damage a battery could have been isolated by the charge regulator instead of being allowed to pass through in ignorance. This is a function that adds cost and complexity to the design of the device and, worse yet, does not necessarily justify an increase in the price that the buyer is willing to pay. However, the cost of allowing batteries to age prematurely or to explode is significant enough that it is possible to justify the extra cost of a smart charge regulator.
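For illustration, the guard logic in such a regulator does not need to be elaborate; the sketch below captures the idea, with threshold values that are placeholders rather than numbers from any battery datasheet.

/* Sketch of the guard logic a smart charge regulator might apply before
 * passing charge current through to the battery. The thresholds are
 * placeholders, not values from any battery datasheet. */
#include <stdbool.h>

#define MAX_CHARGE_CURRENT_MA  1000    /* illustrative limit               */
#define MAX_CHARGE_VOLTAGE_MV  4200    /* illustrative limit               */
#define MIN_CELL_TEMP_C        0       /* no charging below freezing       */
#define MAX_CELL_TEMP_C        45      /* no charging when the cell is hot */

bool charge_path_allowed(int input_mv, int input_ma, int cell_temp_c)
{
    /* Reject "legal" but damaging charge scenarios instead of passing
     * them through to the battery in ignorance. */
    if (input_mv > MAX_CHARGE_VOLTAGE_MV) return false;
    if (input_ma > MAX_CHARGE_CURRENT_MA) return false;
    if (cell_temp_c < MIN_CELL_TEMP_C)    return false;
    if (cell_temp_c > MAX_CELL_TEMP_C)    return false;
    return true;
}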

I question whether the cloud infrastructure, which is significantly more complicated than a mere stand-alone device or function, is robust enough to act as a central access point because it currently represents a single point of failure that can have huge ramifications from a single flaw, exploit, or weakness in its implementation. Do you think the cloud is safe enough to bet your product and/or company on?

Do you use any custom or in-house development tools?

Wednesday, May 11th, 2011 by Robert Cravotta

Developing embedded software differs from developing application software in many ways. The most obvious difference is that there is usually no display available in embedded systems, whereas most application software would be useless without a display to communicate with the user. Another difference is that it can be challenging to know whether the software for an embedded system is performing the correct functions for the right reasons or whether it only appears to be performing the proper functions coincidentally. This is especially relevant to closed-loop control systems that include multiple types of sensors in the control loop, such as fully autonomous systems.

Back when I was building fully autonomous vehicles, we had to build a lot of custom development tools because standard software development tools just did not perform the tasks we needed. Some of the system-level simulations that we used were built from the ground up. These simulations modeled the control software, rigid body mechanics, and inertial forces from actuating small rocket engines. We built a hardware-in-the-loop rig that let us swap real hardware with simulated modules so that we could verify the operation of each part of the system as well as inject faults into the system to see how it would fare. Instead of a display or monitor to provide feedback to the operator, the system used a telemetry link, which allowed us to effectively instrument the code and capture the state of the system at regular points in time.
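To give a flavor of what instrumenting code through a telemetry link can look like, here is a minimal sketch of a fixed-layout telemetry frame captured at a regular rate; the field names and the send routine are invented for the example rather than taken from that project.

/* Sketch of telemetry-style instrumentation: snapshot the system state into
 * a fixed-layout frame at a regular rate and hand it to the downlink. The
 * field names and the send routine are invented for the example. */
#include <stdint.h>

typedef struct {
    uint32_t frame_count;      /* monotonically increasing sample index */
    uint32_t mode;             /* current control mode                  */
    float    attitude_rad[3];  /* roll, pitch, yaw estimate             */
    float    rate_rad_s[3];    /* body rates                            */
    float    thruster_cmd[4];  /* commanded thruster duty               */
} telemetry_frame_t;

extern void telemetry_send(const void *buf, uint32_t len);   /* hypothetical */

/* Called at a fixed rate from the control loop; the ground-side tools rely
 * on the frame layout staying stable from build to build. */
void telemetry_task(const telemetry_frame_t *state)
{
    telemetry_send(state, sizeof *state);
}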

Examining the telemetry data was cumbersome due to the massive volume of data – not unlike trying to perform debugging analysis with today’s complex SOC devices. We used a custom parser to extract the various data channels that we wanted to examine together and then used a spreadsheet application to scale and manipulate the raw data and to create plots of the data in which we were looking for correlations. If I were working on a similar project today, I suspect we would still be using a lot of the same types of custom tools as back then. I suspect that the market for embedded software development tools is so wide and fragmented that it is difficult for a tools company to justify creating many tools that meet the unique needs of embedded systems. Instead, there is much more money available from the application side of the software development tool market, and it seems that embedded developers must choose between figuring out how to use tools built for application software work in their projects and creating and maintaining their own custom tools.
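For anyone who has not built one, that kind of ground-side parser can be quite small; the sketch below pulls one named channel out of a comma-separated telemetry log so it can be dropped into a spreadsheet or plotting tool. The log format (a header row of channel names followed by comma-separated samples) is an assumption made for the example.

/* Sketch of a ground-side channel extractor: pull one named column out of a
 * comma-separated telemetry log. The log format is an assumption made for
 * the example. */
#include <stdio.h>
#include <string.h>

int extract_channel(FILE *log, const char *channel, FILE *out)
{
    char line[1024];
    int col = -1;

    /* Locate the requested channel in the header row. */
    if (!fgets(line, sizeof line, log))
        return -1;
    int idx = 0;
    for (char *tok = strtok(line, ",\r\n"); tok; tok = strtok(NULL, ",\r\n"), idx++) {
        if (strcmp(tok, channel) == 0) {
            col = idx;
            break;
        }
    }
    if (col < 0)
        return -1;

    /* Emit just that column, one sample per line. */
    while (fgets(line, sizeof line, log)) {
        int i = 0;
        for (char *tok = strtok(line, ",\r\n"); tok; tok = strtok(NULL, ",\r\n"), i++) {
            if (i == col) {
                fprintf(out, "%s\n", tok);
                break;
            }
        }
    }
    return 0;
}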

In your own projects, are standard tools meeting your needs or are you using custom or in-house development tools? What kind of custom tools are you using and what problems do they help you solve?

Is the job market for embedded developers improving?

Wednesday, May 4th, 2011 by Robert Cravotta

I have an unofficial sense that there has been an uptick in the embedded market for developers. This is not based on hard data; rather, it is based on what I hear in briefings and what types of briefings I am seeing. The message of recovery is not a new one, but over the previous year or two it felt like the undertone of the message was more hope than fact. The undertone now suggests that there may be more than just hopeful optimism to the message today.

Are you seeing more opportunities for embedded developers than the previous year or two? Is the workload growing as well as the talent being brought to bear on these projects, or are you doing more with much less? If you can provide an anecdote, please do; otherwise, use the scale below to indicate how you think the market for embedded developers is doing.

1) The embedded market is hurting so much that improvement/growth is hard to detect.
2) The embedded market is showing signs of revival but still has a ways to go to be healthy.
3) The embedded market is healthy.
4) The embedded market is growing and hiring opportunities are up.
5) The future has never looked brighter.
6) Other (please expand)

How do you ensure full coverage for your design/spec reviews?

Wednesday, April 27th, 2011 by Robert Cravotta

Last week I asked whether design-by-committee is ever a good idea. This week’s question derives from my experience on one of those design-by-committee projects. In this particular project, I worked on a development specification. The approval list for the specification was several dozen names long – presumably such a long list of approving signatures should provide confidence that the review and approval process was robust. As part of the review and approval process, I personally obtained each signature on the original document and gained some insight into the reasoning behind each signature.

For example, when I approached person B for their signature, I had the distinct feeling that they had not had time to read the document and that they were looking at it for the first time in its current form. Now I like to think I am a competent specification author, but this was a large document, and to date, I was the only person who seemed to be aware of the entire set of requirements within it. Well, person B looked at the document, perused the signature list, and said that person E’s signature would ensure that the appropriate requirements were good enough for approval.

When I approached person D, they took two minutes and looked at two requirements that were appropriate to their skill set and responsibility and nothing else within the specification before signing the document. When it was person E’s turn at the document, I once again felt they had not had time to look at the document before I arrived for their signature. Person E looked at the signature list and commented that it was good that person B and D had signed off, so the document should be good enough for their signature. In this example, these three signatures encompassed only two of the requirements in a thick specification.

Throughout the review and approval process, it felt like no one besides me knew all of the contents of the document. I did good work on that document, but my experience indicates that even the most skilled engineers are susceptible to small errors that can switch the meaning of a requirement, and that additional sets of eyes looking over the requirements will usually uncover them during a review. Additionally, the system-level implications of each requirement can only be assessed if a reviewer is aware of the other requirements it interacts with. The design-by-committee approach, in this case, did not provide system-level accountability for the review and approval process.

Is this lack of full coverage during a review and approval cycle a problem unique to this project or does it happen on other projects that you are aware of? What process do you use to ensure that the review process provides appropriate and full coverage of the design and specification documents?

Is design-by-committee ever the best way to do a design?

Wednesday, April 20th, 2011 by Robert Cravotta

I have participated in a number of projects that were organized and executed as a design-by-committee project. This is in contrast to most of the design projects I worked on that were the result of a development team working together to build a system. I was reminded of my experiences in these types of projects during a recent conversation about the details for the Space Shuttle replacement. The sentiment during that conversation was that the specifications for that project would produce something that no one will want once it is completed.

A common expression to illustrate what design-by-committee means is “a camel is what you get when you design a horse by committee.” I was sharing my experience with these design-by-committee projects with a friend, and they asked me a good question: what is the difference between design-by-committee and a design performed by a development team? After a moment of thought, my answer was that each approach treats accountability among the members differently, and this materially affects how system trade-offs are performed and decided.

In essence, design-by-committee could be described as design-by-consensus. Too many people in the committee have the equivalent of veto power without the appropriate level of accountability that should go with that type of power. Compounding this is that just because you can veto something does not mean that you have to come up with an alternative. Designing to a consensus seems to rely on the implied assumption that design is a process of compromises and the laws of physics are negotiable.

In contrast, in the “healthy” development team projects I worked on, different approaches fought it out in the field of trade studies and detailed critique. To an outsider, the engineering group seemed like crazed individuals engaged in passionate holy wars. To the members of the team, we were putting each approach through a crucible to see which one survived the best. In those cases where there was no clear “winner”, the chief project engineer had the final say over which approach the team would use – but not until everyone, even the most junior members on the team, had the chance to bring their concerns up. Ultimately, the chief project engineer was responsible for the whole system, and their tie-breaker decisions were based on system-level trade-offs rather than just slapping together the best of each component into a single system.

None of the design-by-committee projects that I am aware of yielded results that matched, never mind rivaled, what I think a hierarchical development team with clearly defined accountabilities would produce. Do I have a skewed perspective on this, or do you know of cases when design-by-committee was the best way to pursue a project? Can you share any interesting or humorous results of design-by-committee projects that you know of? I am currently searching for an in-house cartoon we had when I worked on rockets that demonstrated the varied results you could get if you allowed one of the specialty engineering groups to dominate the design process for a rocket engine. I will share it if/once I find it. I suspect there could be analogous cartoons for any field, and if you send me yours, I will share those as well.

Is bigger and better always better?

Wednesday, April 13th, 2011 by Robert Cravotta

The collision between an Airbus A380 and a Bombardier CRJ-700 this week at John F. Kennedy International Airport in New York City reminded me of some parallels and lessons learned from when we upgraded the target processor to a faster version. I shared one of the lessons learned from that event in an article about adding a version control inquiry into the system. A reader added that the solution we used could still suffer from a versioning mismatch and suggested that version identifications also include an automatically calculated date and time stamp of the compilation. In essence, these types of changes in our integration and checkout procedures helped mitigate several sources of human or operator error.
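For reference, embedding that automatically calculated stamp is straightforward in C because the compiler fills in the standard __DATE__ and __TIME__ macros; the sketch below shows the idea, with placeholder version numbers and a hypothetical inquiry hook.

/* Minimal sketch of a version inquiry reply that folds in an automatically
 * generated compile-time stamp via the standard __DATE__ and __TIME__
 * macros. The version numbers and the inquiry hook are placeholders. */
#include <stdio.h>

#define FW_VERSION_MAJOR  2    /* placeholder */
#define FW_VERSION_MINOR  7    /* placeholder */

/* Compose the reply to a version inquiry received over the comm link. The
 * compiler fills in __DATE__ and __TIME__, so the stamp cannot be forgotten
 * the way a hand-edited string can. */
void report_version(char *buf, unsigned len)
{
    snprintf(buf, (size_t)len, "FW %d.%d built %s %s",
             FW_VERSION_MAJOR, FW_VERSION_MINOR, __DATE__, __TIME__);
}

One caveat: the stamp reflects when that particular source file was compiled, so the build procedure needs to force it to recompile on every build or the mismatch problem resurfaces in a new form.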

The A380 is currently the world’s largest passenger jet, with a wingspan of 262 feet. The taxiways at JFK Airport are a standard 75 feet wide, but this accident is not purely the result of the plane being too large, as an Operation Plan for handling A380s at JFK Airport has been used successfully since the third quarter of 2008. The collision between the A380 and the CRJ appears to be the result of a series of human errors stacking onto each other (similar to the version inquiry scenario). Scanning the 36-page operation plan for the A380 provides a sense of how complicated it is to manage the ground operations for these behemoths.

Was the A380 too large for the taxiway? Did the CRJ properly clear the taxiway (per the operation plan) before the A380 arrived? Did someone in the control tower make a mistake in directing those two planes to be in those spots at the same time? Should someone have been able to see what was going to happen and stopped it in time? Should the aircraft sensors have warned the pilot that a collision was imminent? Was anyone in this process less alert or distracted at the wrong time? A number of air traffic controllers have been caught sleeping on the job within the last few months, with the third instance happening this week.

When you make changes to a design, especially when you add a bigger and better version of a component into the mix, it is imperative that the new component be put through regression testing to make sure no assumptions are broken. Likewise, the change should trigger an effort to ensure that the implied (or tribal knowledge) mechanisms for managing the system accommodate the new ways that human or operator error can affect the operation of the system.

Do you have any anecdotes that highlight how a new bigger and better component required your team to change other parts of the system or procedures to mitigate new types of problems?