Is adoption risk real?

Wednesday, January 26th, 2011 by Robert Cravotta

I recently received sales material for solar power panels. According to the literature, I can buy a solar power system that will pay for itself and then continue generating returns for the next 30 to 40 years without the risks associated with investing in stocks. Something about this pitch smacks of overlooking areas of risk when adopting an immature technology. Perhaps I am merely a pessimist on the current state of technology for solar energy, but I think there are significant adoption risks that are analogous to ones that I have had to consider with other technologies.

According to the literature supplied to me, I can buy solar panels with a 30% tax credit and a generous rebate program. Even with that steep discount, the panels will take at least five years to reach a breakeven point, and that point assumes I can choose the perfectly sized system for my house. The system comes with a 10-year bumper-to-bumper warranty and a 25-year manufacturer’s warranty. This sounds like a great deal, right?

To reach a breakeven point in five years, the solar panels need to provide better than a 14.4% annual rate of return on my initial investment. That is quite a healthy rate of return for any static installation to sustain for many years, but I am willing to consider that it might be realistic. I suspect that rate of return does not include the cost of dismantling, maintaining, and replacing the solar panels, but for this scenario I am willing to treat those as free activities with no future costs.
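As a sanity check on that figure (my own back-of-envelope arithmetic, not the vendor's model): if the annual savings are treated as reinvested returns, breakeven in five years means the investment must double, while simple, non-reinvested savings would need one-fifth of the cost back each year.

```python
# Back-of-envelope breakeven arithmetic (my own interpretation, not
# the vendor's actual model).

# If returns compound (savings are reinvested), breakeven in 5 years
# means the investment doubles: (1 + r)^5 = 2.
compound_rate = 2 ** (1 / 5) - 1   # ~14.9% per year

# If savings are not reinvested, breakeven simply needs 1/5 of the
# cost back each year.
simple_rate = 1 / 5                # 20% per year
```

The compounding interpretation lands close to the 14.4% quoted above; the simple-payback view says 20%. Either way, it is a demanding return for a static installation to sustain.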

I expect that solar power technology will continue to improve each year – in fact, I expect that the rate of improvement of the solar conversion efficiency might mirror Moore’s law about transistor density in some analogous fashion. If this is true – and the organizations providing the rebates and tax credit subsidies are counting on it – solar power technology will be somewhere between four and eight times more efficient in 5 years than a system I would install today.

This scenario strongly reminds me of the desktop computers sold in the 1980s that were “future proofed” by having a motherboard that could accept the next-generation processor; you merely dropped in the new processor when you wanted it. My experience with those systems was that the added complexity on the motherboard was not worth the extra cost.

Additionally, there were other things in the system that also changed that made using the motherboard beyond a few years a bad idea – namely, the operating system kept evolving, the device drivers kept evolving, and both of these provided no support for “old and obsolete” peripherals and modules. It was much cheaper, easier, and safer to buy what you needed when you needed it – and then replace it with the next round of available devices when you needed to upgrade.

Am I inappropriately applying this lesson from the past to solar power? According to my lessons learned, I should wait a few more years and realize the resulting improvement in cost and efficiency of solar power; in other words, I might come out ahead if I wait and do nothing today. I believe this is the classic condition of an early adopter. Have you experienced this type of trade-off when choosing components and features for your embedded designs?

Are consumer products crossing the line to too cheap?

Wednesday, January 19th, 2011 by Robert Cravotta

I love that the price of so many products continues to fall year in and year out. However, I have recently started to wonder whether, in some cases, the design trade-offs that development teams are making to lower production costs are crossing a quality line. I do not have a lot of data points, but I will share my observations about my office phones as an example that might inspire you to share your experience with some product. Maybe in the aggregate, our stories will uncover whether there really is a trend toward razor-thin quality margins.

Over the past ten years, I have relied on three different cordless phone systems. Each phone system cost progressively less than the previous one and usually provided more functionality. In each case, the phone that I replaced was still working, but there was something else that made replacing it worthwhile.

The first cordless phone I purchased was a single-line, 2.4GHz cordless phone. It consisted of a single handset that mounted on a full function base station. I liked that phone a lot. Everything about it was robust – except the fact that it operated in the 2.4GHz band along with so many other devices in and around my office. For example, if someone turned on the microwave, it would cause interference with the phone.

I eventually replaced the 2.4 GHz cordless phone with a dual-line, multi-handset, 5.8 GHz phone system. That system cost me less than the 2.4 GHz phone and offered additional valuable features. The handset was smaller and weighed less. The dual-line capability allowed me to consolidate my phone lines into a single system, and the multi-handset feature made both phone lines available everywhere in my home office. There were a few features from the original phone that the new handsets did not provide and that I missed, but I could use the phones no matter how other devices were being used around the area, so I was happy with them.

After a few years of heavy use, the batteries could not hold enough charge. I would regularly switch between handsets during a typical day because the battery in each handset provided less than an hour of talk time. I bought replacement batteries which were better than the original batteries, but only slightly so. With continued use, the new batteries provided longer life (I assume this was because each handset’s state of charge values were slowly adjusting to the new batteries – I’d love if someone could explain or point me to where someone explains why this happened). However, the batteries were only useful for about a year.

At this point, I bought my current cordless phone system which is made by the same manufacturer as the previous two phones. It is a DECT 6.0, dual-line, multi-handset system. It cost less money than either of the other two systems and added a larger database on the base station. However, the phone system exhibits intermittent behavior that I never experienced with my other phones.

Most notably, the communication between the base station and handsets is not always robust. Sometimes a handset will ring continuously instead of with the normal on/off cycle. Other times the handset will not receive the caller ID that the base station normally sends to it. And other times I will notice a clicking sound that I have not been able to attribute to anything. These examples of lower robustness make me wonder how much the design team shrank the product’s performance margins to meet a lower price point.

Are these systemic examples of margins that are too small, or did I just get unlucky with the phone I received? Because the phone works well most of the time, I suspect narrow quality margins are the culprit, and this made me wonder whether other people are noticing similar changes in the robustness of newer products. Have you noticed a change in robustness across generations of a product you use?

Looking at Tesla Touch

Friday, January 14th, 2011 by Robert Cravotta

The team at Disney Research has been working on the Tesla Touch prototype for almost a year. Tesla Touch is a touchscreen feedback technology that relies on the principles of electrovibration to simulate textures on a user’s fingertips. This article expands on the overview of the technology I wrote earlier, and it is based on a recent demonstration meeting at CES that I had with Ivan Poupyrev and Ali Israr, members of the Tesla Touch development team.

The first thing to note is that the Tesla Touch is a prototype; it is not a productized technology just yet. As with any promising technology, there are a number of companies working with the Tesla Touch team to figure out how they might integrate the technology into their upcoming designs. The concept behind the Tesla Touch is based on technology that researchers in the 1950’s were working on to assist blind people. The research fell dormant and the Tesla Touch team has revived it. The technology shows a lot of interesting promise, but I suspect the process of making it robust enough for production designs will uncover a number of use-case challenges (like it probably did for the original research team).

The Tesla Touch controller modulates a periodic electrostatic charge across the touch surface which attracts and repels the electrons in the user’s fingertip towards or away from the touch surface – in effect, varying the friction the user experiences while moving their finger across the surface. Ali has been characterizing the psychophysics of the technology over the last year to understand how people perceive tactile sensations of the varying electrostatic field. Based on my experience with sound bars last week (which I will write about in another article), I suspect the controller for this technology will need to be able to manage a number of usage profiles to accommodate different operating conditions as well as differences between how users perceive the signal it produces.
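A rough sketch of the kind of drive signal described above, with placeholder numbers drawn from the voltages and rates quoted elsewhere in this article (the actual Tesla Touch controller design is not public, and the mapping from texture to amplitude here is purely illustrative):

```python
import math

def drive_sample(t, texture_intensity, base_freq_hz=250.0):
    """Illustrative electrovibration drive signal, NOT the actual
    Tesla Touch controller: map a texture intensity in [0, 1] onto a
    peak-to-peak swing between the ~60 V and ~100 V levels quoted in
    the article, modulated as a sinusoid at a few hundred hertz."""
    if texture_intensity <= 0.0:
        return 0.0  # smooth region: no drive signal at all
    v_min, v_max = 60.0, 100.0  # peak-to-peak volts (from the demo)
    vpp = v_min + texture_intensity * (v_max - v_min)
    return (vpp / 2.0) * math.sin(2 * math.pi * base_freq_hz * t)
```

Varying the amplitude, frequency, and shape of a signal like this over time is what lets the controller present different "frictions" to a moving fingertip.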

Ali shared that the threshold to feel the signal was an 8 V peak-to-peak modulation; however, the voltage swing on the prototype ranged from 60 to 100 V. The 80 to 100 V signal felt like a comfortable tug on my finger; the 60 to 80 V signal presented a much lighter sensation. Because our meeting was more than a quick demonstration in a booth, I was able to uncover one of the use-case challenges. When I held the unit in my hand, the touch feedback worked great; however, if I left the unit on the table and touched it with only one hand, the touch feedback was nonexistent. This was in part because the prototype relies on the user providing the ground for the system. Ivan mentioned that the technology can work without the user grounding it, but that it requires the system to use larger voltage swings.

In order for the user to feel the feedback, their finger must be in motion. This is consistent with how people experience touch, so there is no disconnect between expectations and what the system can deliver. The expectation that the user will more easily sense the varying friction with lateral movement of their finger is also consistent with observations that the team at Immersion, a mechanical-based haptics company, shared with me when simulating touch feedback on large panels with small motors or piezoelectric strips.

The technology prototype used a capacitive touch screen, demonstrating that the touch sensing and touch feedback systems can work together. The prototype was modulating the charge on the touch surface at up to a 500 Hz rate, which is noticeably higher than the 70 Hz rate of its touch sensor. A use-case challenge for this technology is that it requires a conductive material or substance at the touch surface in order to convey texture feedback to the user. While a 100 V swing is sufficient for a user to sense feedback with a finger, it might not be a large enough swing to sense through a stylus. Wearing gloves will also impair or prevent the user from sensing the feedback.

A fun surprise occurred during one of the demonstration textures. In this case, the display showed a drinking glass. When I rubbed the display away from the drinking glass, the surface was a normal smooth surface. When I rubbed over the area that showed the drinking glass, I felt a resistance that met my expectation for the glass. I then decided to rub repeatedly over that area to see if the texture would change and was rewarded with a sound similar to rubbing or cleaning a drinking glass with your finger. Mind you, the sound did not occur when I rubbed the other parts of the display.

The technology is capable of conveying coarse texture transitions, such as from a smooth surface to a rough or heavy one. It is able to convey a sense of bumps and boundaries by varying the amount of tugging your finger feels on the touch surface. I am not sure when or whether it can convey subtle or soft textures; however, there are so many ways to modulate the magnitude, shape, frequency, and repetition of the charge on the plate that those types of subtle feedback may be possible in a production implementation.

I suspect a tight coupling between the visual and touch feedback is an important characteristic for the user to accept the touch feedback from the system. If the touch signal precedes or lags the visual cue, it is disconcerting and confusing. I was able to experience this on the prototype by using two fingers on the display at the same time. The sensing control algorithm reports back only a single touch point, so it would average the position between the two (or more) fingers. This is acceptable in the prototype, as it was not a demonstration of a multi-touch system, but it did allow me to receive feedback on my fingertips that did not match what my fingers were actually “touching” on the display.

There is a good reason why the prototype did not support multi-touch: the feedback implementation applies a single charge across the entire touch surface, which means any and all fingers touching the display will feel (roughly) the same thing. This is more of an addressing problem; the system was using a single electrode. It might be possible in later generations to lay out different configurations so that the controller can drive different parts of the display with different signals. At this point, it is a similar constraint to the one mechanical feedback systems contend with. However, one advantage that the Tesla Touch approach has over the mechanical approach is that only the finger touching the display senses the feedback signal. In contrast, the mechanical approach relays the feedback not just to the user’s fingers but also to the other hand that is holding the device.

A final observation involves the impact of applying friction to our fingers in a context we are not used to. After playing with the prototype for quite some time, I felt a sensation in my fingertip that took up to an hour to fade away. I suspect my fingertip would feel similar if I rubbed it on a rough surface for an extended time. I suspect that with repeated use over time, my fingertip would develop a mini callus and the sensation would no longer occur.

This technology shows a lot of promise. It offers a feedback approach with no moving parts, but the set of use cases in which it can provide useful feedback to the user may be more constrained than for other types of feedback.

What is your most memorable demonstration/test mishap?

Wednesday, January 12th, 2011 by Robert Cravotta

The crush of product and technology demonstrations at CES is over. As an attendee of the show, the vast majority of the product demonstrations I saw seemed to perform as expected. The technology demonstrations on the other hand did not always fare quite so well – but then again – the technology demonstrations were prototypes of possibly useful ways to harness new ideas rather than fully developed and productized devices. Seeing all of these demonstrations at the show reminded me of the prototypes I worked on and some of the spectacular ways that things could go wrong. I suspect that sharing these stories with each other will pass around some valuable (and possibly expensive) lessons learned to the group here.

On the lighter side of the mishap scale, I still like the autonomous robots that Texas Instruments demonstrated last year at ESC San Jose. The demonstration consisted of four small, wheeled robots that would independently roam around the table top. When they bumped into each other, they would politely back away and zoom off in another direction. That is, except for one of the robots which appeared to be a bit pushy and bossy as it would push the other robots around longer before it would back away. In this case, the robots were all running the same software. The difference in behavior had to do with a different sensitivity of the pressure bar that told the robot that it had collided with something (a wall or another robot in this case).

I like this small scale example because it demonstrates that even identical devices can take on significantly different observable behaviors because of small variances in the components that make up the device. It also demonstrates the possible value for closed-loop control systems to be able to access independent or outside reference points so as to be able to calibrate their behavior to some set of norms (how’s that for a follow-up idea on that particular demonstration). I personally would love to see an algorithm that allowed the robots to gradually influence each other’s behavior, but the robots might need more sensors to be able to do that in any meaningful way.

On the more profound side of the mishap scale, my most noteworthy stories involve live testing of fully autonomous vehicles that would maneuver in low orbit space. These tests were quite expensive to perform, so we did not have the luxury of “do-overs”. Especially in failure, we had to learn as much about the system as we could; the post mortem analysis could last months after the live test.

In one case, we had a system that was prepped (loaded with fuel) for a low orbit test; however, the family of transport vehicles we were using had experienced several catastrophic failures over the past year that resulted in the payloads being lost. We put the low-orbit payload system, with the fuel loaded, into storage for a few months while the company responsible for the transport vehicle went through a review and correction process. Eventually we got the go-ahead to perform the test. The transport vehicle delivered its payload perfectly; however, when the payload system activated its fuel system, the seals for the fuel lines blew.

In this case, the prototype system had not been designed to store fuel for a period of months. The intended scenario was to load the vehicle with fuel and launch it within days – not months. During the period of time in storage, the corrosive fuel and oxidizer weakened the seals so that they blew when the full pressure of the fuel system was placed upon them during flight. A key takeaway from this experience was to understand the full range of operating and non-operating scenarios that the system might be subjected to – including being subjected to extended storage. In this case, a solution to the problem would be implemented as additional steps and conditions in the test procedures.

My favorite profound failure involves a similar low-orbit vehicle that we designed to succeed when presented with a three-sigma (99.7%) scenario. In this test though, there was a cascade of failures during the delivery to low-orbit phase of the test, which presented us with a nine-sigma scenario. Despite the series of mishaps leading to deployment of the vehicle, the vehicle was almost able to completely compensate for its bad placement – except that it ran out of fuel as it was issuing the final engine commands to put it into the correct location. To the casual observer, the test was an utter failure, but to the people working on that project, the test demonstrated a system that was more robust than we ever thought it could be.

Do you have any demonstration or testing mishaps that you can share? What did you learn from it and how did you change things so that the mishap would not occur again?

What is your favorite failsafe design?

Wednesday, January 5th, 2011 by Robert Cravotta

We had snow falling for a few hours where I live this week. This is remarkable only to the extent that the last time we had any snow fall was over 21 years ago. The falling snow got me thinking about how most things in our neighborhood, such as cars, roads, houses, and our plumbing, are not subjected to the wonders of snow with any regularity. On days like that, I am thankful that the people who designed most of the things we rely on took into account what impact different extremes, such as hot and cold weather, would have on their design. Designing a system to operate or degrade gracefully in rare operating conditions is a robust design concept that seems to be missing in so many “scientific or technical” television shows.

Designing systems so that they fail in a safe way is an important engineering concept – and it is often invisible to the end user. Developing a failsafe system is an exercise in trading off the consequences and probability of a failure against the cost to mitigate those consequences. There is no single best way to design a failsafe system, but two main tools available to designers are to incorporate interlocks or safeties into the system and/or to implement processes that the user needs to be aware of to mitigate the failure state. Take for example the simple inflatable beach ball; the ones I have seen have such a long list of warnings and disclaimers printed on them that it is quite humorous – until you realize that every item printed on that ball probably has a legal case associated with it.

I was completely unaware until a few months ago that a rodent could make an automobile inoperable. Worse, our vehicle became unsteerable while it was being driven. Fortunately no one got hurt (except the rat that caused the failure). In this case, it looks like the rat got caught in one of the belts in the engine compartment, which ultimately made the power steering fail. I was surprised to find out that this is actually a common failure when I looked it up on the Internet. I am not aware of a way to design better safety into the vehicle, so we have changed our process when using automobiles. In our case, we do periodic checks of the engine compartment for any signs of an animal living in there, and we sprinkled peppermint oil around the compartment because we heard that rodents hate the smell.

The ways to make a system failsafe are numerous, and I suspect there are a lot of great ideas that have been used over the years. As an example, let me share a memorable failsafe mechanism we implemented on a Space Shuttle payload I worked on for two years. The payload was going to actually fly around the Space Shuttle, which meant it would be firing its engines more than once. This was groundbreaking, as launching satellites involves firing the engines only once. As a result, we had to go to great lengths to ensure that there was no way the engines could misfire – or worse, that the payload could receive a malicious command from the ground directing it onto a collision course with the Shuttle. All of the fault-tolerant systems and failsafe mechanisms made the design quite complicated. In contrast, the mechanism we implemented to prevent acting on a malicious command was a table of random numbers that was loaded onto the payload 30 minutes before launch and known to only two people. Using encryption was not a feasible option at the time because we simply did not have the computing power for it.
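A toy sketch of that one-time random-number idea (the actual Shuttle-era implementation is not public, so all names and sizes here are illustrative): each command must carry the next unused code from a pre-shared table, so a replayed or guessed code is rejected.

```python
import secrets

def make_table(n=16):
    """Generate a pre-shared table of one-time random codes
    (illustrative; real table sizes and formats are unknown)."""
    return [secrets.token_hex(4) for _ in range(n)]

class CommandAuthenticator:
    """Vehicle-side check: accept a command only if it carries the
    next unused code from the table loaded before launch."""
    def __init__(self, table):
        self.remaining = list(table)

    def accept(self, command, code):
        # Codes are consumed in order and each is valid exactly once,
        # so a replayed or guessed code fails.
        if self.remaining and code == self.remaining[0]:
            self.remaining.pop(0)
            return True
        return False
```

The appeal of the scheme is its simplicity: no cryptographic computation on board, just a table lookup, with secrecy resting on the table being known to only two people.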

Another story of making a system more failsafe involves an X-ray machine. I was never able to confirm whether this actually occurred or was a local urban legend, but the lesson is still valid. The model of X-ray machine in question exposed patients to larger doses of radiation than intended when the technician pressed the backspace key during a small time window. The short-term fix was to send out an order to remove the backspace key from all of the keyboards. The takeaway for me was that there are quick and cheap ways to alleviate a problem while you take the appropriate efforts to find a better way to fix it.

Have you ever used a clever approach to making your designs more failsafe? Have you ever run across a product you used that implemented an elegant failsafe mechanism? Have you ever seen a product that you thought of a better way that they could have made the system failsafe or degrade gracefully?

How do you exercise your orthogonal thinking?

Wednesday, December 29th, 2010 by Robert Cravotta

How are Christmas and Halloween the same? The intended answer to this question requires you to look at the question from different angles to find the significant relationship between these seemingly unrelated events. In fact, to be a competent problem solver, you often need to be able to look at a problem from multiple angles and find a way to take advantage of a relationship between different parts of the problem that might not be immediately obvious. If the relationship was obvious, there might not be a problem to solve.

I have found over the years that doing different types of puzzles and thinking games often helps me juggle the conditions of a problem and find that elusive relationship that makes the problem solvable. While I do not believe that solving Sudoku puzzles will make you smarter, I do believe that practicing Sudoku puzzles in different ways can help exercise your “cognitive muscles” so that you can more easily reorganize difficult and abstract concepts in your mind and find the critical relationship between the different parts.

There are several approaches to solving Sudoku puzzles and each requires a different set of cognitive wiring to perform competently. One approach, and one that I see most electronic versions of the puzzle support, involves penciling in all of the possible valid numbers in each square and using a set of rules to eliminate numbers from each square until there is one valid answer. Another approach finds the valid numbers without using the relationships between the “penciled” numbers. Each approach exercises my thought process in very different ways, and I find that switching between them provides a benefit when I am working on a tough problem.
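The first, pencil-in-and-eliminate approach can be sketched as a candidate table. This fragment computes the initial pencil marks for a grid (a minimal illustration of the idea, not a full solver):

```python
def peers(r, c):
    """Cells sharing a row, column, or 3x3 box with (r, c)."""
    box_r, box_c = 3 * (r // 3), 3 * (c // 3)
    same = {(r, j) for j in range(9)} | {(i, c) for i in range(9)}
    same |= {(box_r + i, box_c + j) for i in range(3) for j in range(3)}
    same.discard((r, c))
    return same

def candidates(grid):
    """Pencil in every digit still legal for each empty cell.
    grid[r][c] is 0 for an empty cell, otherwise 1-9."""
    marks = {}
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                used = {grid[i][j] for i, j in peers(r, c)}
                marks[(r, c)] = set(range(1, 10)) - used
    return marks
```

Rules such as "a candidate that appears in only one cell of a unit must go there" then operate purely on this table; the second approach the paragraph mentions works from the placed digits directly instead.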

I believe being able to switch gears and represent data in equivalent but different representations is a key skill for effective problem solving. In the case of Christmas and Halloween, rather than looking at the social context associated with each day, looking at the date of each day – October 31 and December 25 – can suggest a non-obvious relationship.

I find that many of the best types of puzzles or games for exercising orthogonal thinking engage a visual mode of looking at the problem. The ancient board game of Go is an excellent example. The more I play Go, the more abstract relationships I am able to recognize and, most surprisingly, apply to life and problem solving. If you have never played Go, I strongly recommend it.

Another game I find valuable for exercising orthogonal thinking is Contract Bridge – mostly because it is a game of incomplete information, much like real-life problems, and relies on the players’ ability to communicate information with each other within a highly constricted vocabulary. Often, the toughest design problems are tough precisely because it is difficult to verbalize or describe what the problem actually is.

As for the relationship between October 31 and December 25, it is interesting that the abbreviations for these two dates also correspond to notation of the same exact number in two different number bases – Oct(al) 31 is the same value as Dec(imal) 25.
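The pun even checks out in code:

```python
# "Oct 31" read as octal 31 equals "Dec 25" read as decimal 25.
assert int("31", 8) == 25   # 3*8 + 1 = 25
assert 0o31 == 25
```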

These examples are some of the ways I exercise my orthogonal thinking. What are your favorite ways to stretch your mind and practice switching gears on the same problem?

How do you mitigate single-point failures in your team’s skillset?

Wednesday, December 22nd, 2010 by Robert Cravotta

One of the hardest design challenges facing developers is how to keep the system operating within acceptable bounds despite being used in non-optimal conditions. Given a large enough user base, someone will operate the equipment in ways that the developers never intended. For example, a friend recently shared that his young daughter has developed an obsession with turning the lights in the house on and off repeatedly. Complicating this scenario is that some of the lights she likes to flip on and off are fluorescent lights (the tubes, not CFLs (compact fluorescent light)). Unfortunately, repeatedly turning them on and off in this fashion significantly reduces their useful life. Those lights were not designed to be put under those types of operating conditions. I’m not sure designers can ever build a fluorescent bulb that will flourish under those types of operating conditions – but you never know.

Minimizing and eliminating single-point failures in a design is a valuable strategy for increasing the robustness of the design. Experienced developers exhibit a knack for avoiding and mitigating single-point failures – often as the result of experience with similar failures in previous projects. Successful methods for avoiding single-point failures usually involve implementing some level of overlap or redundancy between separate, and ideally independent, parts of the system.

A look at the literature addressing single-point failures reveals a focus on technical and tangible items like devices and components, but there is an intangible source of single-point failures that can be devastating to a project – when a given skillset or knowledge set is a single-point failure. I was first introduced to this idea when someone asked me “What will you do if Joe wins the Lottery?” We quickly established that winning the Lottery was a nice way to describe a myriad of unpleasant scenarios to consider – in each case the outcome is the same – Joe, with all of his skills, experience, and project specific knowledge, leaves the project.

As a junior member of the technical staff, I did not need to worry about this question, but once I moved into the ranks of project lead, that question became immensely more important. If you have the luxury of a large team and budget, you might assign people to overlapping tasks. However, small teams may lack not just the budget but also the cognitive bandwidth for team members to be aware of everything everyone else is doing.

One approach we used to mitigate the consequences of a key person “winning the Lottery” involved holding regular project status meetings. Done correctly, these meetings can provide a quick and cost effective mechanism for spreading the project knowledge among more people. The trick is to avoid involving too many people for too long or too frequently so that the meetings cost more than the possible benefit they provide. Maintaining written documentation is another approach for making sure the project can recover from the loss of a key member. Another approach we used for more tactical types of skills was to contract with an outside team that specialized in said skillset. By working with someone who understands the project’s tribal knowledge, this approach can help the team recover quickly and salvage the project.

What methods do your teams employ to protect from the consequences of a key person winning the Lottery?

Energy Management in Power Architecture

Tuesday, December 21st, 2010 by Fawzi Behmann

Embedded computing applications, such as printers, storage, networking infrastructure, and data center equipment, continue to face the challenge of delivering increased performance within a constrained energy budget. In February 2009, the Power Architecture added power-management features in Power ISA v.2.06 (the most recent specification). The largest gains in performance within a constrained energy budget come from creating systems that can intelligently and efficiently pace the workload against energy consumption.

In general, the work performed in embedded computing applications is done in cycles, a combination of active states, management states, and dormant states, and different areas of the system may demand more energy than others at different points in the workflow. It therefore becomes important for system architects to model the system from an energy consumption perspective and to apply energy-saving techniques to the building blocks (CPUs, ASICs, and I/Os) of their computing system.

The processor is the heart of the system. There will be times when high frequencies are required, but these are likely to be very short cycles in the workflow. The vast majority of the time, the processor is asked to perform relatively low-performance tasks. Reducing the processor’s clock frequency during these management periods saves energy, which can in turn be used by an ASIC or I/O block that is working harder. Throughout the workflow, energy demands vary among system components; some devices need more power than others, and the system needs to tightly control and manage the power sharing. It is also important that software save previously known states in non-volatile memory so that the processor can retrieve those states upon entering a more active state.

In many applications, high computing performance during periods of activity must be balanced with low power consumption when there is less workload. Microprocessor cores typically operate at higher frequencies than the rest of the system, so power consumption can best be minimized by controlling the core frequency. Software can dynamically increase or decrease the core’s clock frequency while the rest of the system continues operating at its previous frequency.
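To get a rough feel for why scaling the core clock helps, consider the textbook model in which dynamic power is proportional to C·V²·f. The sketch below is purely illustrative: the capacitance, voltage, frequencies, and workload mix are hypothetical numbers, not figures for any real Power Architecture device.

```python
# Toy model: dynamic power P = C * V^2 * f.
# All constants below are illustrative guesses, not real silicon parameters.

def dynamic_power(c_eff, voltage, freq_hz):
    """Approximate dynamic power in watts."""
    return c_eff * voltage ** 2 * freq_hz

C_EFF = 1e-9   # effective switched capacitance (farads), hypothetical
V = 1.0        # core voltage (volts), hypothetical
F_HIGH = 1.2e9 # full-speed clock (Hz)
F_LOW = 300e6  # reduced clock for low-demand periods (Hz)

# Hypothetical duty cycle: full speed 10% of the time, low demand 90%.
always_fast = dynamic_power(C_EFF, V, F_HIGH)
scaled = 0.1 * dynamic_power(C_EFF, V, F_HIGH) + 0.9 * dynamic_power(C_EFF, V, F_LOW)

savings = 1 - scaled / always_fast
print(f"energy saved by scaling the core clock: {savings:.0%}")
```

Even this crude model shows why pacing the clock to the workload pays off: when the processor spends most of its time on low-performance tasks, the average power drops far below the always-full-speed case.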

The Power ISA v.2.06 includes specifications for hypervisor support and virtualization on single- and multi-core processor implementations. The Power Architecture includes support for dynamic energy management, some features of which are enabled internally in the core. For example, it is common for execution units in the processor pipeline to be power-gated when idle. Furthermore, Power Architecture cores offer software-selectable power-saving modes. These modes reduce the functions available in other areas, such as limiting cache and bus-snooping operations, and some modes turn off all functional units except interrupt handling. These techniques are effective because they reduce switching on the chip and give operating systems a means to exercise dynamic power management.

Sometimes only the application software running on the processor has the knowledge required to decide how power can be managed without affecting performance. The Power ISA v.2.06 added the wait instruction to give application software a means to initiate power savings when it knows there is no work to do until the next interrupt. This instruction enables power savings through user-mode code, and it is well matched to the requirements of the LTE market segment, which requires that total SoC power be managed effectively. The combination of CPU power-saving modes, the wait instruction, and the ability to wake on an interrupt has been demonstrated to achieve deep-sleep power savings with wake-up on external events.
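The wait-until-interrupt pattern itself is easy to see at a higher level. The sketch below is only an analogy, not Power ISA code: a threading.Event stands in for the wake-up interrupt, and blocking on it stands in for executing wait when the work queue is empty. All names and structure here are illustrative.

```python
import queue
import threading

work_queue = queue.Queue()
interrupt = threading.Event()  # stands in for the wake-up interrupt line
results = []

def worker():
    """Drain pending work; when idle, block (the analogue of `wait`)."""
    while True:
        try:
            item = work_queue.get_nowait()
        except queue.Empty:
            interrupt.wait()        # "enter low-power wait" until an interrupt
            interrupt.clear()
            if work_queue.empty():  # interrupt with no pending work: shut down
                return
            continue
        results.append(item * 2)    # placeholder for real processing

for n in (1, 2, 3):
    work_queue.put(n)

t = threading.Thread(target=worker)
t.start()
interrupt.set()                     # "interrupt": wake the worker to finish up
t.join(timeout=5)
print(results)
```

The key property this mimics is that the decision to sleep is made by the code that knows there is no work to do, and an external event (not polling) brings it back.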

Adding texture to touch interfaces

Friday, December 17th, 2010 by Robert Cravotta

I recently heard about another approach to providing feedback to touch interfaces (Thank you Eduardo). TeslaTouch is a technology developed at Disney Research that uses principles of electrovibration to simulate textures on a user’s finger tips. I will be meeting with TeslaTouch at CES and going through a technical demonstration, so I hope to be able to share good technical details after that meeting. In the meantime, there are videos at the site that provide a high level description of the technology.

The feedback controller uniformly applies a periodic electrostatic charge across the touch surface. By varying the sign (and possibly the magnitude) of the charge, the electrons in the user’s fingertip are drawn toward or away from the surface, effectively creating a change in friction on the touch surface. Current prototypes can use signals as low as 8V to generate tactile sensations. No electric charge passes through the user.

By varying over time the electric charge across the electrode layer, this touch sensor and feedback surface can simulate textures on a user’s finger by attracting and repelling the electrons in the user’s finger to and from the touch surface (courtesy TeslaTouch).

The figure shows a cross section of the touch surface, which consists of a layer of glass overlaid with a transparent electrode layer, which is in turn covered by an insulator. Varying the voltage across the electrode layer changes the relative friction coefficients from pressing a finger into the touch surface (Fe) and dragging a finger across it (Fr). It is not clear how mature this technology currently is beyond the fact that the company is talking about prototype units.

One big feature of this approach to touch feedback is that it does not rely on the mechanical actuators typically used in haptic feedback approaches. The lack of moving parts should contribute to higher reliability compared to the electromechanical alternatives. However, it is not clear that this technology would work through gloves or translate through a stylus, both of which the electromechanical approach can accommodate.

What are the questions you would like most answered about this technology? I am hopeful that I can dig deep into the technology at my CES meeting and pass on what I find in a follow-up here. Either email me or post the questions you would most like to see answered. The world of touch user interfaces is getting more interesting each day.

Does your embedded development team’s project budget metric support your estimation process?

Wednesday, December 15th, 2010 by Robert Cravotta

As an engineering project lead I had to develop and report on a set of performance metrics that we called the VSP (vision support plan). The idea behind these metrics was to show how each area of the company was directly supporting the company vision statement. For many of the metrics, the exercise was a waste of time because there was no clean way to measure how what we were doing as a team directly corresponded to every abstract idea in the vision statement.

However, there were a few metrics that we used that I thought were useful because we could use them to experiment with our processes and measure whether there was an improvement or not. For example, I refused to use a budget metric that only focused on whether we came in under budget or not. My budget metrics were “green” (good) if the expenditures to date were within 10% of the budget. If the project was more than 10% higher or lower than the budget, I reported the project as yellow. If the project was more than 20% higher or lower than the budget, I reported the project as red.

Here was my reasoning for the grading. If the project was within 10% of the budget, we were in control of the budget. I believe that any team can affect the cost of a project by up to 10% by choosing appropriate trade-offs without adversely sacrificing the quality of the project. Design trade-offs made to effect a 10 to 20% change from the plan involve more risk and might adversely affect the quality of the project. Likewise, accommodating changes that stray more than 20% from the plan involves significant risk and may require a reevaluation to determine whether the project is scoped realistically.
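The grading rule above is simple enough to automate. Here is a minimal sketch of it, with the thresholds as described: deviation in either direction counts, within 10% is green, within 20% is yellow, beyond that is red.

```python
def budget_status(actual, budget):
    """Grade spend-to-date against budget; over- and under-runs count equally."""
    deviation = abs(actual - budget) / budget
    if deviation <= 0.10:
        return "green"
    if deviation <= 0.20:
        return "yellow"
    return "red"

print(budget_status(95, 100))   # 5% under  -> green
print(budget_status(115, 100))  # 15% over  -> yellow
print(budget_status(78, 100))   # 22% under -> red
```

Note that a project 15% under budget grades exactly the same as one 15% over, which is the whole point of the metric.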

Note that this metric specified a range that covered both overruns and underruns of expenditures against the budget. A major reason for this was to put a special focus on how well we were estimating projects. How many times have you seen someone try to explain why their project is over budget? In general, the reasons I saw included one or more of the following:

1) Additional scope was added to the project without capturing additional budget for it (often at the direction of management).

2) The project involved solving some unexpected problems and there was not enough (or no) budget to handle such contingencies.

3) Management would not accept a realistic budget number for the project and you are doing the best you can with the budget they offered you.

The thing that is common to all of these reasons is that the estimation process did not adequately capture the project’s predictable and iterative costs. Too many times I saw management strip out our contingency budget, which usually consisted of specifying one or two design iterations at the points of the design where we had the most risk. Capturing a budget metric and putting it into the context of how good the estimate was provides the potential for finding clues about how to improve the estimating process in future projects, which directly supports just about any company vision statement I have ever seen.

Likewise, if a project was substantially under budget, most management seemed content to leave it alone; however, I see the following scenarios as reasons why you might be running under budget:

1) You overestimated the cost to perform the project.

2) You were able to remove scope from the project that was left in the budget numbers.

3) You made an innovative leap that increased your productivity beyond what you thought you could do during the budgeting process.

Each of these reasons has a profoundly different impact on how you refine your estimating process. The first suggests you need better estimators. The second suggests you need to improve your project and contract management processes. The third is one any manager should want to see more of and should reward the team for making happen.

I saw many project estimates that gamed the system so that the project lead had an oversized surplus in their budget, and management would fail to ask how resources allocated to the project had gone unused, never uncovering which of these three scenarios caused the underrun.

Does your project budget process enable you to improve your estimating process, improve your contract management process, and increase the chances that your team gains recognition when a risk pays off and you discover a new and better way to solve a problem? What are other ways you use expense/budget metrics to improve your design team’s performance?