Is the rate of innovation stagnating?

Wednesday, December 28th, 2011 by Robert Cravotta

Around this time of year, many people like to publish their predictions for the next year – and according to one article, the “experts and analysts” do not see a lot of innovation coming out of the United States soon. The article mentions and quotes a number of sources that suggest the rate of innovation is going to be sluggish for the next few years. One source suggested that “bigger innovation labs and companies are holding back on numerous innovations until they can properly monetize them.”

I wonder whether these observations and expectations are realistic. I see innovation every time some capability becomes available at lower cost, or with less training or skill required, than before. I am constantly amazed at the speed at which new technology reaches the hands of people in the lowest quartile of income. More significantly, I am amazed at how these new technologies appear in everyday activities without fanfare. For example, my daughter, who is learning to drive, has pointed out features that she really likes about the car she is driving – features I never gave any thought to, either because I did not notice them or because noticing them would be analogous to noticing and commenting on the air we breathe.

My daughter received a Nintendo 3DS as a present this Christmas. The 3D part of this product goes far beyond the display as it enables her to move the device around and interact with the software in new and meaningful ways. These “invisible” types of innovations do not seem to make big headlines, but I suspect they are still sources of technology disruptions.

As for a company holding off on an innovation, is such a thing possible in a highly competitive world? Can any company afford to hold off on an innovative idea and risk another company beating them to the punch in the market?

Is the rate of innovation stagnating? Is the marketing hype around innovation just not delivering a return on investment, so that companies are backing off on how much they hype it? Are you aware of anyone holding back on innovative ideas while waiting for a better consumer market in which to release them?

How long should testing changes take?

Wednesday, December 21st, 2011 by Robert Cravotta

The two-month payroll tax cut extension at the center of a budget bill going through the US Congress has elicited a response from a trade organization representing the people who would have to implement the new law, and it is the inspiration for this week’s question. A payroll processing trade organization has claimed that even if the bill becomes law, it is logistically impossible to make the changes in tax software before the two-month extension expires. The trade organization claims the changes required by the bill would require at least 90 days for software testing alone, in addition to time for analysis, design, coding, and implementation. Somehow this scenario makes me think of past conversations where marketing/management would request changes to a system and engineering would push back because there was not enough time to properly perform the change and the testing before the delivery date.

If you are part of the group requesting the “simple” change, you may think the developers are overstating the complexity of implementing it. In my experience, there is often strong merit to the developers’ claims because the “simple” change involves some non-obvious complexity, especially when the change affects multiple parts of the system.

In my own experience, we worked on many R&D projects, most with extremely aggressive schedules and engineering budgets. Many of these were quick-and-dirty proofs of concept, and “simple” changes did not have to go through the rigorous production processes – or so the requesters felt. What saved the engineering team on many of these requests was the need to minimize the number of variations between iterations of the prototype so that we could perform useful analysis on the test data in the event of failures. Also, we locked down feature changes to the software during system integration so that all changes were in response to resolving system integration issues.

I suspect this perception that changes can be made quickly and at low risk has been reinforced by the electronics industry’s success at delivering what appear to be predictable, almost mundane advances in silicon products that cost 30% less and/or deliver twice as much performance as the previous year’s parts. Compounding this perception are all of the “science-based” television series that show complex engineering and scientific tasks being completed by one or two people in hours or days, when in reality they would take dozens to hundreds of people months to complete.

How long should testing changes to a system take? Is it reasonable to expect any change to be ordered, analyzed, designed, implemented, and tested in less than two weeks? I realize that the length of time will depend on the complexity of the change request, but two weeks seems like an aggressive limit for implementing any change that might indirectly affect the operation of the system. That is especially true for embedded systems, where the types of changes requested are much more complex than changing the color of a button or moving a message to a different portion of the display. How does your team manage change requests and the time it takes to process and implement them?

What embedded development tool do you want?

Wednesday, December 14th, 2011 by Robert Cravotta

Embedded development tools seem to deliver more capabilities at lower price points every year. The $40,000+ development workstations and $10,000+ in-circuit emulators of years past have given way to tools that are lower in cost and more capable. Compilers can produce production-quality code in record compilation times by distributing the compilation work across a networked farm of workstations with available processing bandwidth. IDEs (integrated development environments) greatly improve the productivity of developers by seamlessly automating the process of switching between editing and debugging software on a workstation and a target system. Even stepping backwards through software execution to track down the source of difficult bugs has been possible for several years.

The tools available today make it a good time to be an embedded developer. But can the tools be even better? We are fast approaching the end of one year and the beginning of the next, and this often marks an opportunity to propose your wish items for the department or project budget. When I meet with development tool providers, I always ask in what directions they are pushing their future tools development. In this case, I am looking for something that goes beyond “faster and more” and toward assisting the developer with analyzing their system.

One tool capability that I would like to see more of is a simulator/profiler-feedback-based compiler tool that enables you to quickly explore many different ways to partition your system across multiple processors. Embedded systems have been using multiple processors in the same design for decades, but I think the trend is accelerating to include even more processors (even in the same package) than before, to squeeze out cost and improve energy efficiency as well as to handle increasingly complex operating scenarios.

This partition exploration tool would go beyond current tools that perform multiple compilations with various compiler switches and present you with a code size versus performance trade-off. The tool should help developers understand how a particular partitioning approach will or will not affect the performance and robustness of a design. Better yet, it would assist in automating how to explore different partitioning approaches so that developers could evaluate dozens (or more) partitioning implementations instead of the small handful that can be done on today’s tight schedules with an expert effectively doing the analysis by hand. I suspect this type of capability would provide a much-needed productivity boost for developers to handle the growing complexity of tomorrow’s applications.
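As a toy illustration of the kind of exploration such a tool would automate – not a sketch of any vendor’s actual product – the following C program brute-forces every assignment of a handful of tasks to two cores, using made-up per-task cycle counts and communication penalties, and reports the assignment with the lowest worst-case core load. A real tool would feed these numbers from simulator/profiler measurements rather than the hard-coded tables assumed here.

#include <stdio.h>

#define NUM_TASKS 5
#define NUM_CORES 2

/* Hypothetical per-task cycle counts, e.g. from a profiler run. */
static const unsigned task_cycles[NUM_TASKS] = { 1200, 800, 400, 1500, 600 };

/* Hypothetical extra cycles when task i and task i+1 land on different cores
   (a crude stand-in for inter-processor communication cost). */
static const unsigned comm_cycles[NUM_TASKS - 1] = { 300, 100, 250, 50 };

/* Score a partitioning: the load of the most heavily loaded core. */
static unsigned score(unsigned assignment)
{
    unsigned load[NUM_CORES] = { 0 };

    for (int i = 0; i < NUM_TASKS; i++) {
        unsigned core = (assignment >> i) & 1u;
        load[core] += task_cycles[i];
        if (i > 0 && core != ((assignment >> (i - 1)) & 1u))
            load[core] += comm_cycles[i - 1];
    }
    return (load[0] > load[1]) ? load[0] : load[1];
}

int main(void)
{
    unsigned best = 0, best_score = ~0u;

    /* Exhaustively try all 2^NUM_TASKS task-to-core assignments. */
    for (unsigned a = 0; a < (1u << NUM_TASKS); a++) {
        unsigned s = score(a);
        if (s < best_score) {
            best_score = s;
            best = a;
        }
    }

    printf("best worst-case core load: %u cycles\n", best_score);
    for (int i = 0; i < NUM_TASKS; i++)
        printf("task %d -> core %u\n", i, (best >> i) & 1u);
    return 0;
}

Even this brute-force search blows up combinatorially as the number of tasks and cores grows, which is exactly why automated, feedback-driven guidance from the toolchain would be so valuable.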

Is there an embedded development tool that lifts your productivity to new heights such that you would recommend to your management that every member of your team have it? Is there a capability you wish your development tools had that isn’t quite available yet? What are the top one to three development tools you would recommend as must-haves for embedded developers?

Have you experienced a “bad luck” test failure?

Wednesday, December 7th, 2011 by Robert Cravotta

Despite all of the precautions that the MythBusters team takes when doing their tests, the team accidentally launched a cannonball into a neighborhood and through a home. The test consisted of firing a 6-inch cannonball out of a homemade cannon to measure the cannonball’s velocity. The cannonball was fired at a sheriff’s bomb disposal range, and it was supposed to hit large containers filled with water. The projectile missed the containers and made an unlucky bounce off a safety berm, sending it into the nearby neighborhood. Luckily, despite the property damage, including careening through a house with people sleeping in it, no one was hurt.

This event reminds me of a number of bad luck test failures I have experienced. Two different events involved similar autonomous vehicle tests, but the failures were due to interactions with other groups. In the first case, a test flight failed, ironically, because we had delayed the test to ensure that it could complete successfully. In this test, we had a small autonomous vehicle powered by rocket engines. The propellants (MMH and NTO) are very dangerous to work with, so we handled them as little as possible. We had fueled up the system for a test flight when word came down that the launch was going to be delayed because we were using the same kind of launch vehicle that had just experienced three failed flights before our test.

While we waited for the failure analysis to complete, our test vehicle was placed into storage with the fuel still aboard (there really was no way to empty the fuel tanks, as the single-test system had not been designed for that). A few months later we got the go-ahead on the test, and we pulled the vehicle out of storage. The ground and flight checkouts passed with flying colors, and the launch proceeded. However, during the test, once our vehicle blew its ordnance to allow the fuel to flow through the propulsion system, the seals catastrophically failed and the fuel immediately vented. The failure occurred because the seals were not designed to be in constant contact with the fuel for the months that the vehicle was in storage. The good news was that all of the electronics were operating correctly; the bad news was that the vehicle had no fuel to do what it was intended to do.

The other bad luck failure was the result of poor communication about an interface change. In this case, the system had been built around a 100Hz control cycle. A group new to the project decided to change the inertial measurement unit so that it operated at 400Hz. The change in sample rate was not communicated to the entire team, and the resulting test flight was a spectacular, spinning, out-of-control failure.
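To illustrate how an unannounced sample-rate change like that can wreck a control loop – this is a generic sketch, not the actual flight code – consider an attitude estimator that integrates IMU rate measurements while assuming a fixed 100Hz update period. If the IMU silently starts delivering samples at 400Hz, the integrated angle grows four times too fast, and any control law built on it will command exactly the kind of spin described above.

#include <stdio.h>

#define ASSUMED_RATE_HZ 100.0   /* what the control software was built around */
#define ACTUAL_RATE_HZ  400.0   /* what the new IMU actually delivers */

int main(void)
{
    const double dt_assumed = 1.0 / ASSUMED_RATE_HZ; /* hard-coded in the integrator */
    const double dt_actual  = 1.0 / ACTUAL_RATE_HZ;  /* true time between samples */
    const double rate_dps   = 10.0;                  /* constant 10 deg/s roll rate */

    double angle_estimate = 0.0; /* what the software believes */
    double angle_truth    = 0.0; /* what the vehicle actually does */

    /* Integrate one second of real time worth of IMU samples. */
    for (int i = 0; i < (int)ACTUAL_RATE_HZ; i++) {
        angle_estimate += rate_dps * dt_assumed; /* wrong dt: accumulates 4x too fast */
        angle_truth    += rate_dps * dt_actual;
    }

    printf("true roll angle after 1 s:      %.1f deg\n", angle_truth);
    printf("estimated roll angle after 1 s: %.1f deg\n", angle_estimate);
    return 0;
}

Running this prints a true roll angle of 10 degrees after one second against an estimated 40 degrees – a 4x error that compounds on every control cycle.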

In most of the bad luck failures I am aware of, the failure occurred because of assumptions that masked or hid the consequences of miscommunication or of unexpected decisions made by one group within the larger team. In our case, the tests were part of a series, and the failures mostly cost us precious time, but sometimes such failures are more serious. For example, the Mars Climate Orbiter (in 1999) unexpectedly disintegrated while executing a navigation command. The root cause of that failure was a mismatch in the measurement systems used: one team used English units while another team used metric units.

I guess calling these bad luck failures is a nice way to say a group of people did not perform all of the checks they should have before starting their tests. Have you ever experienced a “bad luck” failure? What was the root cause for the failure and could a change in procedures have prevented it?

How should embedded systems handle battery failures?

Wednesday, November 30th, 2011 by Robert Cravotta

Batteries – increasingly we cannot live without them. We use batteries in more devices than ever before, especially as the trend to make a mobile version of everything continues its relentless advance. However, the investigation and events surrounding the battery fires in the Chevy Volt are yet another reminder that every engineering decision involves tradeoffs. In this case, damaged batteries, especially large ones, can cause fires. However, this is not the first time we have seen issues related to damaged batteries – remember the exploding cell phone batteries from a few years ago? Well, that problem has not been completely licked, as there are still reports of exploding cell phones even today (in Brazil).

These incidents remind me of when I worked on a battery charger and controller system for an aircraft. We put a large amount of effort into ensuring that the fifty-plus-pound battery could not and would not explode no matter what type of failures it might endure. We had to develop a range of algorithms to constantly monitor each cell of the battery and respond appropriately if anything improper started to occur with any of them. One additional constraint on our responses, though, was that the battery had to deliver power whenever the system demanded it, despite parts of the battery being damaged or failing.

Keeping the battery operating as well as it can under all conditions may sound like an extreme requirement, but I do not believe it is all that extreme when you realize that automobiles and possibly even cell phones sometimes demand similar levels of operation. I recall discussing the exploding batteries a number of years ago, and one comment was that the exploding batteries were a system-level design concern rather than just a battery manufacturing issue – in most of the exploding-phone cases at that time, the explosions were the consequence of improperly charging the battery at an earlier time. Adding intelligence to the battery to reject a charging load that was outside some specification was a system-level method of minimizing the opportunity to damage the batteries via improper charging.
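As a rough sketch of that kind of system-level safeguard – the thresholds, cell count, and hardware-access functions here are illustrative assumptions, not taken from any particular battery design – a charge supervisor might poll every cell each control cycle and refuse to enable the charge path whenever any cell is outside its specified voltage or temperature window:

#include <stdbool.h>
#include <stdint.h>

#define NUM_CELLS 12

/* Illustrative limits; a real design takes these from the cell datasheet. */
#define CELL_MAX_MV     4200
#define CELL_MIN_MV     2800
#define CELL_MAX_TEMP_C 45
#define CELL_MIN_TEMP_C 0

/* Hypothetical hardware-access hooks provided elsewhere in the system. */
extern uint16_t read_cell_voltage_mv(int cell);
extern int16_t  read_cell_temp_c(int cell);
extern void     set_charge_path_enabled(bool enable);
extern void     report_cell_fault(int cell);

/* Called once per control cycle while a charger is attached. */
void charge_supervisor_step(void)
{
    bool safe_to_charge = true;

    for (int cell = 0; cell < NUM_CELLS; cell++) {
        uint16_t mv = read_cell_voltage_mv(cell);
        int16_t  tc = read_cell_temp_c(cell);

        if (mv > CELL_MAX_MV || mv < CELL_MIN_MV ||
            tc > CELL_MAX_TEMP_C || tc < CELL_MIN_TEMP_C) {
            report_cell_fault(cell);   /* log/flag the cell for service */
            safe_to_charge = false;    /* one bad cell blocks charging */
        }
    }

    /* Reject the charging load rather than risk damaging the pack. */
    set_charge_path_enabled(safe_to_charge);
}

The harder requirement described above – continuing to deliver power on demand even with damaged cells – lives in the discharge path and needs its own, separately designed policy.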

Given the wide range of applications in which batteries are finding use, what design guidelines do you think embedded systems should follow to provide the safest operation of batteries despite the innumerable ways that they can be damaged or fail? Is disabling the system ever an appropriate response?

Food for thought on disabling the system is how CFLs (compact fluorescent lamps) handle end-of-life conditions for the bulbs when too much of the mercury has migrated to the other end of the lighting tube – they purposefully burn out a fuse so that the controller board is unusable. While this simple approach avoids operating a CFL beyond its safe range, it has caused much concern among users, as more and more people are scared by the burning components in their lamps.

How should embedded systems handle battery failures? Is there a one-size-fits-all approach, or even a tiered approach to handling different types of failures, so that users can confidently use their devices without fear of explosions and fire while knowing when there is a problem with the battery system and getting it fixed before it becomes a major problem?

What embedded device/tool/technology are you thankful for?

Thursday, November 24th, 2011 by Robert Cravotta

In the spirit of the Thanksgiving holiday being celebrated in the United States this week, what device, tool, or technology are you thankful for because it has made your life as an embedded developer better? This can be anything, not just the newest developments. The idea is to identify those things that made a material impact during your embedded career.

As an example, my own introspection on the topic yielded a somewhat surprising answer for me – I am thankful for the humble USB port. I remember when my mentor came to me one day with a spec sheet and asked me what I thought about this thing called the Universal Serial Bus specification. We talked about that specification on and off over the course of a year or so, but never did I imagine how large an impact so small a change would have.

Not only did USB make connecting and powering devices simpler and more reliable, such that everyday technophobes could successfully use those devices, but it also simplified development boards and workbenches all around, because more and more of the new development boards that included a USB port no longer required a separate source for system power. As a consumer, I have realized that I have not crawled under my desk to plug in a transformer in years – and for that, I am thankful.

There are many things I am thankful for, and these opportunities for introspection are valuable because they increase the odds of recognizing the mundane and nearly invisible changes that have permeated our environment – just like the invisible embedded devices that populate our everyday lives. What embedded device, tool, and/or technology are you thankful for?

Do you use hardware-in-the-loop simulation?

Wednesday, November 16th, 2011 by Robert Cravotta

While working on some complex control systems for aerospace projects, I had the opportunity to build and use hardware-in-the-loop (HIL or HWIL) simulations. A HIL simulation is a platform where you can swap different portions of the system between simulation models and real hardware. The ability to mix simulated and real components provides a mechanism to test and characterize the behavior and interactions between components. This is especially valuable when building closed-loop control systems that will perform in conditions that you do not fully understand yet (due to a lack of experience with the operating scenario).
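A minimal sketch of that core idea – swapping real and simulated components behind a common interface – might look like the following in C. The names and the trivial accelerometer model are illustrative assumptions, not taken from any specific HIL rig:

#include <stdio.h>

/* Common interface the control code sees, whether the sensor is real or simulated. */
typedef struct {
    double (*read_accel_g)(void *ctx); /* returns acceleration in g */
    void   *ctx;
} accel_if_t;

/* Real hardware path (stubbed here): would call the actual sensor driver. */
static double real_accel_read(void *ctx)
{
    (void)ctx;
    /* e.g. return accel_driver_read(); */
    return 1.0;
}

/* Simulation path: a trivial plant model that the HIL host can steer. */
typedef struct { double commanded_g; } sim_state_t;

static double sim_accel_read(void *ctx)
{
    sim_state_t *sim = ctx;
    return sim->commanded_g; /* a real model would integrate dynamics here */
}

/* The control loop is written against the interface, not the implementation. */
static void control_step(const accel_if_t *accel)
{
    double g = accel->read_accel_g(accel->ctx);
    printf("accel = %.2f g\n", g); /* placeholder for the actual control law */
}

int main(void)
{
    sim_state_t sim = { .commanded_g = 0.5 };
    accel_if_t simulated = { sim_accel_read, &sim };
    accel_if_t real      = { real_accel_read, NULL };

    control_step(&simulated); /* run against the model... */
    control_step(&real);      /* ...or against real hardware, with no control-code changes */
    return 0;
}

That software shim is the easy part of HIL; the expense comes from everything that has to sit behind it.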

Building a HIL simulation is an extensive effort. The simulation must be able not only to emulate electrical signals for sensors and actuators, but it may also need to provide predictable and repeatable physical conditions, such as moving the system around in six degrees of freedom based on real or simulated sensor or actuator outputs. As a result, HIL can be cost prohibitive for many projects; in fact, to date the only people I have met who have used HIL worked on aircraft, spacecraft, automotive, and high-end networking equipment.

I suspect, though, that with the introduction of more sensors and/or actuators into consumer-level products, HIL concepts are being used in new types of projects. For example, tablet devices and smartphones are increasingly aware of gravity. To date, being able to detect gravity is mostly used to set the orientation of the display, but I have seen lab work where these same sensors are being used to detect deliberate motions made by the user, such as shaking, lowering, or raising the device. At that point, HIL concepts provide a mechanism for developers to isolate and examine reality versus their assumptions about how sets of sensors and/or actuators interact under the variation that can occur in each of these use scenarios.
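As a deliberately simplistic illustration of the kind of algorithm being characterized in that lab work – not code from any specific product – a shake detector might count how often the acceleration magnitude departs from the steady 1 g of gravity within a short window:

#include <math.h>
#include <stdbool.h>

#define SHAKE_THRESHOLD_G 1.8  /* magnitude that counts as a "jolt"; tuning assumption */
#define SHAKE_MIN_JOLTS   4    /* jolts required within the window to call it a shake */
#define SHAKE_WINDOW      50   /* samples per detection window (e.g. 0.5 s at 100Hz) */

/* Feed one accelerometer sample (in g) per call; returns true when a shake is detected.
   With a HIL setup, the same function can be driven by recorded or simulated data. */
bool shake_detector_step(double ax, double ay, double az)
{
    static int jolts = 0;
    static int samples = 0;

    double magnitude = sqrt(ax * ax + ay * ay + az * az);

    if (fabs(magnitude - 1.0) > (SHAKE_THRESHOLD_G - 1.0))
        jolts++;
    samples++;

    if (samples >= SHAKE_WINDOW) {      /* end of window: decide and reset */
        bool shaken = (jolts >= SHAKE_MIN_JOLTS);
        jolts = 0;
        samples = 0;
        return shaken;
    }
    return false;
}

The value of applying HIL concepts here is that the same detector can be exercised with simulated motion profiles to explore the boundary between an intentional shake and, say, a bumpy car ride, before committing to tests on real devices.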

In my own experience, I have used HIL simulation to characterize and understand how to successfully use small rocket engines to move and hover a vehicle in the air. The HIL simulation allowed us to switch between real and simulated engines that moved the system. This kind of visibility was especially useful because operating the vehicle was dangerous and expensive. Another HIL simulation allowed us to work with the real camera sensor and physically simulate the motion that the camera would experience in a usage scenario. In each of these simulation setups, we were able to discover important discrepancies between our simulation models and how the real world behaved.

Are HIL simulation concepts moving into “simpler” designs? Are you using HIL simulation in your own projects? Is it sufficient to work with only real hardware, say in the case of a smartphone, or are you finding additional value in being able to simulate specific portions of the system on demand? Are you using HIL in a different way than described here? Is HIL too esoteric a topic for most development?

Do you receive too much email?

Wednesday, November 9th, 2011 by Robert Cravotta

During a recent conversation I heard someone share a “good” thing about emailed newsletters – it is easy to sort the email list and block-delete all of them. This got me thinking about how much time I spend managing email each day, and it got me wondering: does anyone/everyone else receive on the order of 100 emails a day too? Mind you, these are the business-relevant emails, as opposed to the countless spam emails that several layers of spam filters intercept and dispose of for me.

Email allows me to work with dozens of people asynchronously throughout the week without having to spend time identifying a common time when we can work together. However, I receive far too many newsletters to have time to open them all. On the other hand, many of them are not worth opening most of the time, but the few times they are worth opening make it worthwhile to receive them. A good subject line goes a long way toward signaling whether a particular issue of a newsletter might be worth the time to open and read. And that identifies one of the problems of receiving a constant stream of information – a majority of it is not relevant to what you need at the moment you receive it.

On the other hand, I do not like block-deleting these emails because, used correctly, they can provide a sort of customized and pre-filtered search database for when I need to research something. I think this works because I choose which newsletters to receive, and it is (usually) easy to stop receiving a newsletter. When I do a search on this informal database, I sometimes find a pointer in a newsletter that helps me find the material I am looking for from sources that have generally earned my trust as being reliable.

The downside or cost of having these newsletters to search is that too many show up in my mailbox each week. Automatic email filtering rules that move newsletters into folders are helpful, but they usually run as the emails arrive in my mailbox. I prefer to have the newsletters pop up in my inbox so that I can see their subjects before they are shunted into a newsletter folder. To date, I have not seen an email tool that will move emails into the appropriate folders after they have been in the inbox for a day or a week.

Do you receive too much email? Or is your email box not overflowing with information? Are you receiving so many newsletters that aim to consolidate information for you but end up flooding your mailbox with too much information that is not immediately relevant? What strategies do you use to manage the influx of newsletters so that they do not interfere or possibly hide important emails?

Interface Transitions and Spatial Clues

Tuesday, November 8th, 2011 by Robert Cravotta

Every time I upgrade any of my electronic devices, there is a real risk that something in the user interface will change. This is true not just when updating software but also when updating hardware. While whoever is responsible for the update decided the change in the interface is an improvement over the old interface, there is often a jolt as established users either need to adjust to the change or rely on mechanisms the system provides to support the older interface. Following are some recent examples I have encountered.

A certain browser for desktop computers has been undergoing regular automagical updates – among the recent updates is a shuffling of the menu bar/button and how the tabs are displayed within the browser. Depending on who I talk to, people either love or hate the change. Typically it is not that the new or old interface is better but that the user must go through the process of remapping their mental map of where and how to perform the tasks they need to do. For example, a new menu tree structure breaks many of the learned paths – the spatial mapping, so to speak – used to access and issue specific commands. This can result in a user not being able to easily execute a common command (such as clearing temporary files) and feeling like they have to search for a needle in a haystack until their spatial mapping for that command has been rebuilt.

Many programs provide update/help pages to ease this type of transition frustration, but sometimes the update cycle for the program is faster than the frequency with which a user uses a specific command, and this can cause further confusion as the information the user needs is buried in an older help file. One strategy to accommodate users is to allow them to explicitly choose which menu structure or display layout they want. The unfortunate thing about this approach is that it is usually all or nothing: the new feature may only be available under the new interface structure. Another, more subtle strategy that some programs use is to quietly support the old keystrokes while displaying the newer interface structure. This approach can work well for users who memorized keyboard sequences, but it does not help those users who manually traversed the menus with the mouse. Additionally, these approaches do not really help with transitioning to the new interface; rather, they enable a user to put off the day of reckoning a little longer.

My recent experience with a new keyboard and mouse provides some examples of how these devices incorporate spatial clues to improve the experience of adapting to these devices.

The new keyboard expands the number of keys available. Despite providing a standard QWERTY layout, the keys on the left and right edges of the layout sit in different positions relative to the edge of the keyboard than I was used to. At first, this caused me to hit the wrong key when I was trying to press keys around the corners and edges of the keyboard layout – such as the ctrl and the ~ keys. With a little practice, I no longer hit the wrong keys. It helps that the keys on the left and right edges of the layout are different shapes and sizes from the rest of the alphanumeric keys. The difference in shape provides immediate feedback about where my hands are within the spatial context of the keyboard.

Additionally, the different sets of keys are grouped together so that the user’s fingers can feel a break between the groupings, and the user is able to identify which grouping their hands are over without looking at the keyboard. While this is an obvious point, it is one that contemporary touch interfaces are currently unable to accommodate. The keyboard also includes a lighting feature that allows the user to specify a color for the keys. My first impression was that this was a silly luxury, but it has proven itself a useful capability because it makes it possible to immediately and unambiguously know what context mode the keyboard is in (via different color assignments), so that the programmable keys can take on different functions in each context.

The new mouse expands on the one- or two-button capability by supporting more than a dozen buttons. I have worked with many-button mice before, and because of that, the new mouse had to have at least two thumb buttons. The new mouse, though, does a superior job not just with button placement but in providing spatial clues that I have never seen on a mouse before. Each button on the mouse actually has a slightly different shape, size, and/or angle at which it touches the user’s fingers. It is possible to immediately and unambiguously know which button you are touching without looking at the mouse. There is a learning curve to know how each button feels, but the end result is that all of the buttons are usable with a very low chance of pressing unintended buttons.

In many of the aerospace interfaces that I worked on, we placed different kinds of cages around the buttons and switches so that the users could not accidentally flip or press one. By grouping the buttons and switches and using different kinds of cages, we were able to help the user’s hands learn how performing a given function should feel. This provided a mechanism to help the user detect when they might be making an accidental and potentially catastrophic input. Providing this same level of robustness is generally not necessary for consumer and industrial applications, but providing some level of spatial clues, either via visual cues or physical variations, can greatly ease the user’s learning curve when the interface changes and provide clues when the user is accidentally hitting an unintended button.

What tips do you have for estimating/budgeting?

Wednesday, November 2nd, 2011 by Robert Cravotta

Many years ago, during a visit to my doctor, he pointed out that I had visited him around the same time each year for the past few years with roughly the same symptoms – which were all stress related. It was at that moment that it finally dawned on me how stressful year-end budgeting activities were for me. It was also the moment when I understood how to focus my energy to minimize that stress by approaching the year-end budgeting activities from a different perspective.

I do not recall who I first heard the expression “water off a duck’s back” from, but it has probably been a lifesaver for getting me successfully through many stressful events, including year-end budgeting. The expression brings to mind images of ducks popping back up to the surface of the water after diving under it to eat. Remarkably, the water all rolls off the duck’s back, and they are dry immediately after surfacing. I had a friend who had hair like that, but the expression “water off Paul’s head” is not quite as visually effective.

The stress of needing to take an accurate assessment of my project or department’s current condition, coupled with having to project and negotiate for the resources we would need to accomplish our goals for the next planning period, was much easier to handle if I could imagine the stress falling off me. Equally important in handling the extra stress of this time of year was realizing which goals were real and which were what we called management BHAGs (big hairy-a** goals).

My management at the time thought it was a good idea to purposefully create goals that they knew probably could not be attained, in the hope that we might complete a significant portion of them with far fewer resources than we might otherwise expect to need. I’m not convinced that the BHAG approach works if you overuse it. If you have too many of them, or they are just too large a leap, there is a huge temptation for the team to just write off the goal and internally adopt a more realistic goal anyway.

Going over earlier budgeting proposals and comparing them to what actually happened proved to be a helpful exercise. First, it provides a loose baseline for the new budget proposal. Second, it can provide a mechanism for improving your budgeting accuracy because you might notice a pattern in your budgets versus actuals. For example, are the proposed budgets even close to the actuals? Are they too high or too low? Do the budgets/actuals trend in any direction? My experience showed that our trend line was fairly constant year over year, but that allocating a portion of the budget to acquiring and updating tools each year was an important part of keeping that cost line from trending upward as project complexities increased.

Do you know any useful tips to share about how to be more effective at estimating projects and performing planning and budgeting activities? Does any type of tool, including spreadsheets, prove especially useful in tracking past, present, and future projects and actuals? What are the types of information you find most valuable in developing a budget and schedule? How important is having a specific person, skill set, or tool set available to making projections that you can meet?