Question of the Week Channel

The Question of the Week challenges how designers think about embedded design concepts by touching on topics that cover the entire range of issues affecting embedded developers, such as how and why different trade-offs are made to survive in a world of resource- and time-constrained designs.

What tips do you have for estimating/budgeting?

Wednesday, November 2nd, 2011 by Robert Cravotta

Many years ago, during a visit to my doctor, he pointed out that I had visited him around the same time each year for the past few years with roughly the same symptoms – all of which were stress related. It was at that moment that it finally dawned on me how stressful year-end budgeting activities were for me. It was also the moment when I understood how to focus my energy to minimize the stress this time of year placed on me by approaching the year-end budgeting activities from a different perspective.

I do not recall who I first heard the expression “water off a duck’s back” from, but it has probably been a life saver for getting me successfully through many stressful events, including year-end budgeting. The expression brings to mind images of ducks popping up to the surface of the water after diving under to eat. Remarkably, the water all rolls off the duck’s back, and it is dry immediately after surfacing. I had a friend who had hair like that, but the expression “water off Paul’s head” is not quite as visually effective.

The stress of taking an accurate assessment of my project or department’s current condition, coupled with having to project and negotiate for the resources we would need to accomplish our goals for the next planning period, was much easier to handle if I could imagine the stress falling off me. Equally important in handling the extra stress of this time of year was realizing which goals were real and which were what we called management BHAGs (big hairy-a** goals).

My management at the time thought it was a good idea to purposefully create goals that they knew probably could not be attained, in the hope that we might complete a significant portion of them with far fewer resources than we might otherwise expect to need. I’m not convinced that the BHAG approach works if you overuse it. If you have too many of them, or they represent too large a leap, there is a huge temptation for the team to just write off the goal and internally adopt a more realistic goal anyway.

Going over earlier budgeting proposals and comparing them to what actually happened proved to be a helpful exercise. First, it provides a loose baseline for the new budget proposal. Second, it can improve your budgeting accuracy because you might notice a pattern in your budgets versus actuals. For example, are the proposed budgets even close to the actuals? Are they too high or too low? Do the budgets/actuals trend in any direction? My experience showed that our trend line was fairly constant year over year, but that allocating a portion of the budget to acquiring and updating tools each year was an important part of keeping that cost line from trending upward as project complexities increased.
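For what it’s worth, here is a minimal sketch of the budget-versus-actuals comparison described above. The numbers, structure, and field names are purely illustrative (not from any real project); the same analysis is trivial in a spreadsheet, but a few lines of C show how little it takes to compute each year’s overrun ratio and flag a trend:

    #include <stdio.h>

    /* Hypothetical budget-versus-actuals history; numbers are made up. */
    struct year_record {
        int    year;
        double budget;  /* proposed budget, in k$ */
        double actual;  /* actual spend, in k$    */
    };

    int main(void)
    {
        struct year_record history[] = {
            { 2008, 500.0, 545.0 },
            { 2009, 520.0, 560.0 },
            { 2010, 540.0, 590.0 },
        };
        size_t n = sizeof history / sizeof history[0];
        double prev_ratio = 0.0;

        for (size_t i = 0; i < n; i++) {
            double ratio = history[i].actual / history[i].budget;
            printf("%d: budgeted %.0fk, spent %.0fk (ratio %.2f)%s\n",
                   history[i].year, history[i].budget, history[i].actual,
                   ratio, (i > 0 && ratio > prev_ratio) ? "  <- trending up" : "");
            prev_ratio = ratio;
        }
        return 0;
    }

Even a crude ratio like this makes a systematic under-budgeting pattern visible and hands you a correction factor for the next proposal.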

Do you know any useful tips to share about how to be more effective at estimating projects and performing planning and budgeting activities? Does any type of tool, including spreadsheets, prove especially useful in tracking past, present, and future projects and actuals? What are the types of information you find most valuable in developing a budget and schedule? How important is having a specific person, skill set, or tool set available to making projections that you can meet?

What makes a conference worth attending?

Wednesday, October 26th, 2011 by Robert Cravotta

Travel and training budgets for engineers (and possibly every profession) have taken a beating over the past decade. Gone are the “grand-ole-days” of huge developer conferences that would take up multiple halls, where you felt like a salmon swimming upstream to spawn as you walked the exhibit areas. Instead, show and expo attendance for these conferences is decidedly lighter than it used to be. The change in attendance has been so profound that a few single-company developer conferences were delayed, cancelled, or transformed into smaller regional workshops a few years ago.

The hopeful news is that there is a sense that design activity is beginning to pick up again. Conferences targeting developers seem to be returning, and attendance is up, but most of the attendance seems to be by locals who can avoid most of the expenses of travel. I was having a conversation with an exhibitor at a recent conference; we were discussing whether there was something the conference could offer that would justify a growing attendance that includes more attendees who had to travel to get there. The conference offered many announcements for new or upcoming components and tools that will help developers build their projects faster, cheaper, and better. There were numerous workshops and hands-on training offerings that were all well attended.

One of the most obvious things missing from today’s conferences is the over-the-top parties at the end of the day. I can’t help but wonder – tongue-in-cheek – whether the excessive parties were a symptom of the huge attendance or a motivator for it. As developers, we are creating more complex systems in the same or shorter time frames, often with smaller design teams than a few years ago.

On a more serious note, the time and complexity pressures on developers today probably raise the bar on what a conference must deliver to justify spending the time to travel to and attend it. What makes a conference worth attending? What features or events have you seen that you thought were particularly valuable? Is there something missing from today’s conferences that makes them less valuable than before? Or maybe the contemporary conference is doing exactly what you need it to do, but there is no time to attend. Do you find the recordings of the proceedings worthwhile, or perhaps even sufficient?

What advice would you give to an embedded newcomer?

Wednesday, October 19th, 2011 by Robert Cravotta

Developing embedded systems is different from developing end applications. For one, when the work is done correctly, no one knows or cares about the embedded system when deciding whether to buy the end device. I remember having to adjust my own expectations and my method of describing what I did for a living. Rather than saying I worked on the control system for a part of a larger project, I learned to accept giving people a slightly less accurate answer about what I worked on – “I work on the Space Shuttle, or the Space Station, or a particular aircraft.” I know I am not alone in this sentiment based on the responses to our first question of the week more than a year and a half ago – “You know you’re an embedded developer when …”

In addition to adjusting to a different way of describing my job to others, I learned a number of important concepts about designing embedded systems – many of which apply well to other aspects of life. For example, one lesson I learned the hard way is to avoid the urge to rewrite portions of a system just so it will operate “better”. Instead, make the smallest change possible to get the system to do what it needs to do correctly.

The value of this advice is most apparent when you are working on projects with aggressive schedules. There are usually so many things that absolutely need to be changed and added to a design that routing your effort into redesigning and rewriting a portion of the system that is invisible to the end user anyway is a recipe for unhappiness and stress. Not only are you taking precious time away from doing what needs to be done to the system, but you are also introducing new sources of errors by touching parts that are outside the scope of work you are supposed to be doing. The expression “If it ain’t broke, don’t fix it” captures the essence of this advice, even if it does not impart why it is usually a good idea to adhere to it.

I suspect that each of us knows a handful of pieces of good advice we could pass on to embedded newcomers or mentees. What advice would you give to someone learning the ropes of engineering and/or embedded design? Do you have a simple or catchy phrase that captures the essence of your advice? Or do you have an example that illustrates a non-obvious value of that piece of advice?

Does only one person dominate your designs?

Thursday, October 13th, 2011 by Robert Cravotta

The recent death of Apple’s former CEO, Steve Jobs, has been marked by many articles about his life. The products that Apple has launched and supported over the years have greatly influenced the technology markets. Many people are asking whether Apple can maintain its technology position without Steve. If the design process at Apple was completely dominated by Steve, then this is a real question; however, if the design process proceeded in a fashion similar to that of all the places I have worked before, then Apple should be able to continue doing what it has been doing for the past decade with much success.

Leonard E. Read’s article “I, Pencil: My Family Tree as told to Leonard E. Read” points out why Apple should be able to continue to prosper. This excerpt highlights its profound claim:

There isn’t a single person in all these millions, including the president of the pencil company, who contributes more than a tiny, infinitesimal bit of know-how. From the standpoint of know-how the only difference between the miner of graphite in Ceylon and the logger in Oregon is in the type of know-how. Neither the miner nor the logger can be dispensed with, any more than can the chemist at the factory or the worker in the oil field—paraffin being a by-product of petroleum.

The article applies this level of complexity to a pencil, which is simpler in composition and design than any of Apple’s products. If Steve acted correctly as a CEO, he did not allow himself to be a single point of failure for the company. Other people with similar talents (even if those talents are spread across several people instead of residing in just one person) should already be identified and integrated into the design process.

A key function for any manager is to be able to identify at-risk talents, skills, and experience within their groups and to create an environment where losing any single person will not kill the group’s ability to complete its tasks. The group’s productivity may suffer, but the tasks can be correctly completed.

Does the management of any large and successful company really allow its future to rest on the shoulders of a single individual? Does a single person within your group dominate the design process so thoroughly that if they were to “win the lottery” and suddenly disappear your group would be in trouble? What are some of the strategies your group uses to ensure that the loss of a single person does not become a large risk to the completion of a project? Do you have a formal or informal process for cross-training your team members?

Is the collider closure cause for concern?

Wednesday, October 5th, 2011 by Robert Cravotta

The closure of the Tevatron proton-antiproton collider last week marked twenty-eight years of discovery. The closure is occurring while scientists around the world are trying to see if they can replicate measurements made by physicists at CERN of neutrinos traveling faster than the speed of light.

The Tevatron had been the most powerful atom smasher in the United States since 1983. Analysis work based on the data collected by the collider will continue for the next few years, but the lab will no longer be pursuing data for collisions of the highest possible energy. The Large Hadron Collider, an accelerator capable of pushing particles to even higher energies, is replacing the Tevatron in that role. The scientists at the Fermi National Accelerator Laboratory (Fermilab), the home of the Tevatron, will instead be pursuing the “intensity frontier,” which focuses on working with very intense beams containing very large numbers of particles.

To date, the United States government has been a primary source of funding for large and expensive research projects such as the Tevatron collider and the Space Shuttle – both of which have closed down their programs this year. It is unlikely that these are the only research projects operating with aging equipment. Do these two recent program closures portend a slowing down of research, or are they signs that research efforts are progressing so well that closing these projects is part of refining and reallocating research resources to more challenging discoveries?

How important are reference designs?

Wednesday, September 28th, 2011 by Robert Cravotta

A recent white paper about a study exploring issues facing engineers during the design process identified a number of items that are “essential to the design process” but can be difficult to find. At the top of the list were reference designs and application notes. This is in spite of engineers being able to access, via the internet, a wider range of materials from more sources than ever before.

I think part of the reason why these two types of information are identified as both essential and difficult to find stems from the growing complexity of contemporary processors. The number and variety of peripherals and special-purpose processing engines being integrated into today’s newest processors create a steep learning curve for any developer trying to use these devices in their projects. Providing a compiler and debugger alone does not sufficiently reduce the effort required to master the complexity of today’s processors without negatively impacting project schedules.

The term reference design can apply to a wide range of materials. An article about reference designs presents a taxonomy for reference materials based on application complexity and design completeness versus broad to application-specific implementation details. If, as the article identifies, reference designs are a sales and marketing tool, why are such materials difficult for developers to find?

One possible reason is that developers do not consider reference materials essential. Another is that reference designs, by their nature, apply to a small swath of processors in a huge sea of options, which makes classifying these reference designs and getting them in front of interested developers challenging at best. Attempts by third-party information sources to aggregate and connect the appropriate reference materials with relevant processors have had limited success. As evidenced by the conclusions of the referenced study, even processor vendors themselves are having limited success getting the word out about their own reference materials.

How important are reference materials in choosing and working with processors and other complex components in your designs? Are all types of reference materials equally important or are some types of information more valuable than others? Is aggregating reference material with the appropriate component the best way to connect developers with reference material? What good ways to classify reference material have you seen that help you better find the material you are looking for without having to wade through a bunch of irrelevant reference material?

Is hardware customization obsolete?

Wednesday, September 21st, 2011 by Max Baron

It used to be that you could install plug-in boards and peripherals in your computer, much as can still be done today at the box level in component stereo and video systems. In today’s computers, however, that option seems to be rapidly disappearing. During the next few years, with desktops falling out of favor, these aftermarket components will see reduced sales as easy-to-customize desktops are replaced by fully integrated systems that are difficult to change or upgrade internally or externally.

The trend may impact design houses connected directly or indirectly to desktop systems, whether by hardware or software products. Owner-customization options for computers have been decreasing all along, but the process has been slow enough that one may not have fully realized the implications. During recent months, however, it has become impossible to avoid noticing the events reflecting on the technology and business of desktop computers: Hewlett Packard announced its intention to sell its PC business; Fry’s Electronics, a major computer and electronics store in our area, has cut back its daily advertisements in the local newspaper; and most stores are reducing the number of desktops displayed to make space for increasing offerings of smart phones, tablets, and notebooks. And, more indicative than anything else, we see tablets and smart phones used by people who have never before used desktops or laptops.

The plunging prices of computers have already taken a bite out of aftermarket internal components like add-on boards and memory, as desktop manufacturers began to integrate more functions into the motherboard to maintain company revenues. Customization received a further setback with the quickly rising adoption of mobile devices, which are nearly impossible to upgrade. You can’t add internal memory, change graphics boards, or add a multimedia board or peripherals. Mobile devices are simply too small. They require all internal components to be tightly packed, and for proprietary reasons some manufacturers will not allow the addition of external flash memory and USB devices. Also, any customization, even if it were allowed, might increase battery consumption and reduce the time between charges.

Computer software has followed hardware. System software that depends on aftermarket components will share their fate. Application software is suffering from the limitations that battery lifecycles place on internal memory and processor performance, and from the reduced number of processor cycles imposed by low energy consumption.

But we may be looking at a more significant cause for the trend than the adoption of mobile devices: the separation of professional applications from entertainment and communication. MS Excel spreadsheets, complex MS Word documents, database management, MS PowerPoint, simulators, calculators, etc., can continue to be delivered on powerful desktops whose volume sales are defined by corporate use — sales that will pale in comparison with the combined volume sales expected for consumer-targeted mobile computing appliances. These appliances are already providing news, information, email, access to internet communities, opinions, video and audio, games, internet-enabled purchases of goods, etc., all delivered on simple, easy-to-use systems.

The general-purpose computer is experiencing defeat: consumers who want just entertainment and communication no longer need to buy bulky, complex desktops or laptops – or avoid buying them for fear of complexity. They can buy an appliance that does exactly what they want.

Most of today’s mobile computing devices can be upgraded only by software that provides additional functions, faster processing, or more secure communications – but as perceived at present, these computing appliances will otherwise remain unchanged. As with the several consumer digital cameras that one may own and use for different purposes, one may have to buy different mobile devices from several manufacturers and/or keep up with new generations coming from the same manufacturer. But mobile device prices make such luxury forbidding, so the opportunity for aftermarket customization needs to be explored.

Assuming that the world will again be separated into closed systems and open systems, with the latter gaining more traction, it is interesting to envision how these systems might be customized to fit individual preferences. If old computers could be customized via boards plugged into system buses and external peripherals connected to high-speed I/O, in mobile devices we might see the emergence of new and old functions packaged, for example, in thin 1 in² to 2 in² modules that could be introduced or swapped via a removable panel. Different modules could offer features such as higher security, additional codecs, ROM-ed applications, better graphics, higher quality still and video photography, USB and Ethernet support, and wireless battery charging.

Do you see open mobile systems triumphing once more over their closed versions? If so, what would be the most important functions to support, and how would they best be packaged?

Does adding an IP address change embedded designs?

Thursday, September 15th, 2011 by Robert Cravotta

A recent analysis from McAfee titled “Caution: Malware Ahead” suggests that the number of IP-connected devices will grow by a factor of fifty over a ten-year period, based on the number of IP-connected devices last year. The bulk of these devices are expected to be embedded systems. Additionally, connected devices are evolving from a one-way data communication path to a two-way dialog – creating potential new opportunities for hacking embedded systems.

Consider that each Chevy Volt from General Motors has its own IP address. The Volt uses an estimated 10 million lines of code executing over approximately 100 control units, and the number of test procedures to develop the vehicle was “streamlined” from more than 600 to about 400. According to Meg Selfe at IBM, they use the IP-connection for a few things today, like finding a charging station, but they hope to use it to push more software out to the vehicles in the future.
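The article does not describe how the Volt actually validates pushed software, but as a rough sketch of the kind of gatekeeping an IP-connected embedded system might apply before accepting an update, consider checking an image header, a monotonic version counter (to block rollback to a vulnerable image), and an integrity value before anything touches flash. The header layout and names below are hypothetical, and a production design would verify a cryptographic signature rather than the simple CRC shown here:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical update-image header; the layout is illustrative only. */
    struct image_header {
        uint32_t magic;    /* constant identifying a valid image        */
        uint32_t version;  /* monotonically increasing release counter  */
        uint32_t length;   /* payload length in bytes                   */
        uint32_t crc;      /* integrity check over the payload          */
    };

    #define IMAGE_MAGIC   0x494D4731u      /* "IMG1" */
    #define MAX_IMAGE_LEN (512u * 1024u)

    static uint32_t crc32(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
        }
        return ~crc;
    }

    /* Returns nonzero only if every check passes; reject otherwise. */
    int update_is_acceptable(const struct image_header *hdr,
                             const uint8_t *payload,
                             uint32_t installed_version)
    {
        if (hdr->magic != IMAGE_MAGIC)               return 0; /* not an image */
        if (hdr->length > MAX_IMAGE_LEN)             return 0; /* oversized    */
        if (hdr->version <= installed_version)       return 0; /* no rollback  */
        if (crc32(payload, hdr->length) != hdr->crc) return 0; /* corrupted    */
        return 1;
    }

A CRC only detects accidental corruption, not deliberate tampering, so a real design would replace that check with signature verification against a key held in protected storage. The structural point stands either way: every pushed image is treated as untrusted input until it passes all of the checks.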

As IP-connected appliances become more common in the home and on the industrial floor, will the process for developing and verifying embedded systems change – or is the current process sufficient to address the possible security issues of selling and supporting IP-connected systems? Is placing critical and non-critical systems on separate internal networks sufficient, in light of the intent to push software updates to both portions of the system? Is the current set of development tools sufficient to enable developers to test and ensure their system’s robustness against malicious attacks? Will new tools surface, or will they derive from tools already used in safety-critical application designs? Does adding an IP address to an embedded system change how we design and test them?

What should design reviews accomplish?

Wednesday, September 7th, 2011 by Robert Cravotta

I remember my first design review. Well, not exactly the review itself, but I remember the lessons I learned while doing it because it significantly shifted my view of what a design review is supposed to accomplish. I was tasked with reviewing a project and providing comments about the design. It was the nature of my mentor’s response to my comments that started to shape my understanding that there can be disconnects between idealism and practicality.

In this review, I was able to develop a pretty detailed understanding of how the design was structured and how it would work. The idealist in me compelled me to identify not only potential problems in the design but also better ways of implementing portions of it. My mentor’s response to my suggestions caught me completely by surprise – he did not want to hear them. According to him, the purpose of the review was to determine whether the design did or did not meet the system requirements. The time for optimizing design decisions had passed – the only question was whether the design would accomplish the requirements or not.

His response baffled and irked me. Wasn’t a design review part of the process of creating the best design possible? Also, I had some really blindingly brilliant observations and suggestions that were now going to go to waste. Looking back, I think the hardline approach my mentor took helped make me a better reviewer and designer.

As it turns out, my suggestions were not discarded without a look; however, the design review is not the best point in the design cycle to explore the subtle nuances of one design approach versus another. Those types of discussions should have occurred and been completed before the design review process even started. On the other hand, for areas where the design does not or might not meet the system requirements, it is imperative that a discussion be initiated to identify where and why there might be some risks in the current design approach. My mentor’s harsh approach clarified the value of focusing observations and suggestions to those parts of the design that will yield the highest return for the effort spent doing the review.

Does this sound like how your design reviews proceed or do they take a different direction? What should be the primary accomplishment of a successful design review and what are those secondary accomplishments that may find their way into the engineering efforts that follow the review process?

Is “automation addiction” a real problem?

Wednesday, August 31st, 2011 by Robert Cravotta

A recent AP article highlights a draft FAA study (I could not find a source link; please add it in the comments if you find one) that finds that pilots sometimes “abdicate too much responsibility to automated systems.” Despite all of the redundancies and fail-safes built into modern aircraft, a cascade of failures can overwhelm pilots who have only been trained to rely on the equipment.

The study examined 46 accidents and major incidents, 734 voluntary reports by pilots and others, as well as data from more than 9,000 flights in which a safety official rode in the cockpit to observe pilots in action. It found that in more than 60 percent of accidents, and 30 percent of major incidents, pilots had trouble manually flying the plane or made mistakes with automated flight controls.

A typical mistake was not recognizing that either the autopilot or the auto-throttle — which controls power to the engines — had disconnected. Others failed to take the proper steps to recover from a stall in flight or to monitor and maintain airspeed.

The investigation cites a fatal airline crash near Buffalo, New York, in 2009, where the actions of the captain and co-pilot combined to cause an aerodynamic stall, and the plane crashed into the ground. Another crash two weeks later in Amsterdam involved the plane’s altimeters feeding incorrect information to the plane’s computers; the auto-throttle reduced speed such that the plane lost lift and stalled. The flight’s three pilots had not been closely monitoring the craft’s airspeed and experienced “automation surprise” when they discovered the plane was about to stall.

Recently, crash investigators from France recommended that all pilots get mandatory training in manual flying and in handling a high-altitude stall. In May, the FAA proposed that pilots be trained on how to recover from a stall, as well as be exposed to more realistic problem scenarios.

But other new regulations are going in the opposite direction. Today, pilots are required to use their autopilot when flying at altitudes above 24,000 feet, which is where airliners spend much of their time cruising. The required minimum vertical safety buffer between planes has been reduced from 2,000 feet to 1,000 feet. That means more planes flying closer together, necessitating the kind of precision flying more reliably produced by automation than by human beings.

The same situation is increasingly common closer to the ground.

The FAA is moving from an air traffic control system based on radar technology to more precise GPS navigation. Instead of time-consuming, fuel-burning stair-step descents, planes will be able to glide in more steeply for landings with their engines idling. Aircraft will be able to land and take off closer together and more frequently, even in poor weather, because pilots will know the precise location of other aircraft and obstacles on the ground. Fewer planes will be diverted.

But the new landing procedures require pilots to cede even more control to automation.

These are some of the challenges that the airline industry is facing as it relies on more automation. The benefits of using more automation are quite significant, but it is also enabling new kinds of catastrophic situations caused by human error.

The benefits of automation are not limited to aircraft. Automobiles are adopting more automation with each passing generation. Operating heavy machinery can also benefit from automation. Implementing automation in control systems enables more people with less skill and experience to operate those systems without necessarily knowing how to recover from anomalous operating conditions.
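One concrete design response to the silent-disconnect mistake described in the FAA study is to make an unannounced mode change impossible. The sketch below (hypothetical names, not drawn from any avionics standard) latches an alert whenever the automation disengages, so the operator must explicitly acknowledge taking control rather than discover the handover later:

    #include <stdbool.h>
    #include <stdio.h>

    enum control_mode { MODE_AUTOMATED, MODE_MANUAL };

    struct supervisor {
        enum control_mode mode;
        bool alert_latched;  /* stays set until the operator acknowledges */
    };

    /* Called whenever the automation disengages, for any reason. */
    void automation_disengaged(struct supervisor *s, const char *reason)
    {
        s->mode = MODE_MANUAL;
        s->alert_latched = true;  /* never a silent handover */
        printf("ALERT: automation off (%s); operator has control\n", reason);
    }

    /* The alert clears only on an explicit operator action. */
    void operator_acknowledge(struct supervisor *s)
    {
        if (s->alert_latched) {
            s->alert_latched = false;
            printf("Handover acknowledged\n");
        }
    }

    int main(void)
    {
        struct supervisor s = { MODE_AUTOMATED, false };
        automation_disengaged(&s, "sensor disagreement");
        operator_acknowledge(&s);
        return 0;
    }

The same pattern applies to automobiles and heavy machinery: the supervisor’s job is not to prevent the handover, but to guarantee that the operator knows it happened and has accepted it.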

Is “automation addiction” a real problem, or is it a symptom of system engineering that has not completely addressed all of the system requirements? As automation moves into more application spaces, the answer to this question becomes more important to define with a sharp edge. Where and how should the line be drawn for recovering from anomalous operating conditions – how much of the responsibility should the control system shoulder versus the operator?