Is the third generation the charm?

Tuesday, July 20th, 2010 by Robert Cravotta

In a recent conversation with Ken Maxwell, President of Blue Water Embedded, Ken mentioned several times how third-generation touch controllers are applying dedicated hardware resources to encapsulate and offload some of the processing necessary to deliver robust touch interfaces. We talked about his use of the term third-generation as he seemed not quite comfortable with using it. However, I believe it is the most appropriate term, is consistent with my observations about third generation technologies, and is the impetus for me doing this hands-on project with touch development kits in the first place.

While examining technology’s inflections, I have noticed that technological capability is only one part of an industry inflection based around that technology. The implementation must also: hide complexity from the developer and user; integrate subsystems to deliver lower costs, shrink schedules, and simplify learning curves; as well as pull together multi-domain components and knowledge into a single package. Two big examples of inflection points occurred around the third generation of the technology or product: Microsoft Windows and the Apple iPod.

Windows reached an inflection point at version 3.0 (five years after version 1.0 was released) when it simplified the management of the vast array of optional peripherals available for the desktop PC and hid much of the complexity of sharing data between programs. Users could already transfer data among applications, but they needed to use explicit translation programs and tolerate the loss of data from special features. Windows 3.0 hid the complexity of selecting those translation programs and provided a data-interchange format and mechanism that further improved users’ ability to share data among applications.

The third generation iPod reached an industry inflection point with the launch of the iTunes Music Store. The “world’s best and easiest to use ‘jukebox’ software” introduced a dramatically simpler user interface that needed little or no instruction to get started and introduced more people to digital music.

Touch interface controllers and development kits are at a similar third-generation crossroads. First-generation software drivers for touch controllers required the target or host processor to drive the sensing circuits and perform the coordinate mapping. Second-generation touch controllers freed up some of the target processor's processing requirements by including dedicated hardware resources to drive the sensors, and they abstracted the sensor data the target processor worked with to pen-up/down and coordinate location information. Second-generation controllers still require significant processing resources to manage debounce processing as well as to reject bad touches such as palm and face presses.

Third-generation touch controllers integrate even more dedicated hardware and software to offload more context processing from the target processor to handle debounce processing, reporting finger or pen flicking inputs, correctly resolving multi-touch inputs, and rejecting bad touches from palm, grip, and face presses. Depending on the sensor technology, third-generation controllers are also going beyond the simple pen-up/down model by supporting hover or mouse-over emulation. The new typing method supported by Swype pushes the pen-up/down model yet another step further by combining multiple touch points within a single pen-up/down event.
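To make the division of labor concrete, here is a minimal sketch of the kind of debounce and bad-touch filtering that a third-generation controller can encapsulate in dedicated hardware and firmware, so the target processor only sees clean pen-up/down events. The class, thresholds, and event names below are invented for illustration and do not correspond to any vendor's actual interface.

```python
# Illustrative sketch of debounce and bad-touch rejection that a
# third-generation controller might perform before reporting events
# to the target processor. All names and thresholds are hypothetical.

DEBOUNCE_SAMPLES = 3    # consecutive samples required to confirm a state change
MAX_CONTACT_AREA = 40   # contacts larger than this (in sensor cells) look like a palm

class TouchFilter:
    def __init__(self):
        self.stable_state = False  # debounced pen-down state
        self.candidate = False     # state we are counting toward
        self.count = 0

    def sample(self, raw_touch, contact_area=1):
        """Feed one raw sensor sample; return 'pen-down'/'pen-up' on a
        confirmed transition, or None while the state is unchanged."""
        # Reject palm/grip/face presses: treat oversized contacts as no touch.
        if contact_area > MAX_CONTACT_AREA:
            raw_touch = False
        if raw_touch != self.candidate:
            # New candidate state; restart the debounce counter.
            self.candidate = raw_touch
            self.count = 1
        elif raw_touch != self.stable_state:
            self.count += 1
            if self.count >= DEBOUNCE_SAMPLES:
                self.stable_state = raw_touch
                return "pen-down" if raw_touch else "pen-up"
        return None
```

With this kind of filtering on the controller, a momentary bounce or a palm resting on the screen never reaches the application code at all.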

Is it a coincidence that touch interfaces seem to be crossing an industry inflection point with the advent of third-generation controllers, or is this a relationship that also manifests in other domains? Does your experience support this observation?

[Editor's Note: This was originally posted on Low-Power Design]

Get in Touch with Your Inner User Interface

Thursday, July 15th, 2010 by Ville-Veikko Helppi

Touchscreens have gone from fad to “must have” seemingly overnight. The rapid growth of touchscreen user interfaces in mobile phones, media players, navigation systems, point-of-sale, and various other devices has changed the landscape in a number of vertical markets. In fact, original device manufacturers (ODMs) see the touchscreen as a way to differentiate their devices and compete against one another in an ever-expanding marketplace. But ODMs take note – a touchscreen alone will not solve the problem of delivering a fantastic user experience. If the underlying user interface is not up to snuff, the most amazing whiz-bang touchscreen won’t save you.

Touchscreens have come a long way from the early '90s, when they were used in primitive sales kiosks and public information displays. These devices were not cutting-edge masterpieces, but they did help jump-start the industry and expose large audiences (and potential future users) to the possibilities this type of technology might offer. It wasn't until a decade later that consumers saw the major introduction of touchscreens – and the reason was pretty simple: the hardware was just too big and too expensive. Touchscreens became more usable and more pervasive only after the hardware shrank significantly.

Today there is a host of options in touchscreen technology, including resistive, projected-capacitive, surface-capacitive, surface acoustic wave, and infrared, to name a few. According to DisplaySearch, a display market research organization, resistive displays now occupy 50 percent of the market due to their cost-effectiveness, consistency, and durability, while projected-capacitive has 31 percent of the market. In total, more than 600 million touchscreens shipped in 2009. DisplaySearch also forecasts that projected-capacitive touchscreens will soon pass resistive screens as the number one touchscreen technology (measured by revenues) because the Apple iPad utilizes projected-capacitive touchscreen technology. And finally, according to Gartner, the projected-capacitive touchscreen segment is estimated to hit 1.3 billion units by 2012, which implies a 44 percent compound annual growth rate. These estimates indicate serious growth potential in the touchscreen technology sector.

However, growth ultimately hinges on customer demand. Some devices, such as safety- and mission-critical systems, still do not utilize the capabilities found in touchscreens. This is because with mission-critical systems, there is very little room for input mistakes made by the user. In many cases, touchscreens are considered a more fault-sensitive input method when compared to old-fashioned button- and switch-based input mechanisms. For some companies, the concern is not about faulty user inputs, but cost; adding a $30 touchscreen is not an option when it won't add any value to the product's price point.

So what drives touchscreen adoption? Adoption is mainly driven by:

  1. Lower hardware costs
  2. Testing and validating new types of touchscreen technologies in the consumer space, and then pushing those technologies into other vertical markets
  3. Aesthetic and ease-of-use appeal – a sexier device gains more attention than its not-so-sexy non-touchscreen cousin

This is true regardless of the type of device, whether it’s a juice blender, glucose monitor, or infotainment system in that snazzy new BMW.

The second part in this four-part series explores the paradigm shift in user interfaces that touchscreens are causing.

What matters most when choosing an embedded processor?

Wednesday, July 14th, 2010 by Robert Cravotta

I remember the first embedded project that I worked on where I had visibility into choosing the processor. I was a junior member of the technical staff and I “assisted” a more senior member in selecting and documenting our choice of processor for a proposal. I say assisted because my contribution consisted mostly of writing up the proposal rather than actually evaluating the different options. What stuck with me over the years about that experience was the large number of options and the apparent ease with which the other team member chose the target processor (an 80C196KB). I felt that processor had been chosen mostly based on his prior experience and familiarity with the part.

Today, the number of processing options available to embedded developers is vastly larger; just check out the Embedded Processing Directory to get a sense of the current market players and types of parts they offer. While prior experience with a processor family is valuable, I suspect it is only one of many considerations when choosing a target processor. Today’s device families offer many peripherals and hardware accelerators in a single package that were not available just a few years ago. Today’s devices are so complex that it is insufficient for processor vendors to just supply a datasheet and cross assembler. Today, most processor suppliers provide substantial amounts of software to go with their processors. I view most of this software as “low-hanging integration fruit” rather than a “necessary evil” to sell processors, but that is a topic for another day.

I suspect that while instruction set architecture and maximum processing performance are important, they are not necessarily the same level of deciding criteria that they used to be. There are entire processor platforms built around low energy or value pricing that trade processing performance to enable entirely different sets of end applications. There is a growing body of bundled, vertically targeted, software that many processor platforms support, and I suspect the bundled software is playing a larger role in getting a processor across the line into a design win.

With the recent launch of the Embedded Processing Directory, I would like to ask: what matters most to you when choosing an embedded processor? Is having access to the on-chip resources in a table format still sufficient, or are there other types of information that you must evaluate before selecting a target processor? We have a roadmap planned for the directory to incorporate more application-specific criteria, as well as information about the entire ecosystem that surrounds a processor offering. Is this the right idea, or do you need something different?

Please share your thoughts on this and include what types of applications you are working on to provide a context for the criteria you examine when selecting an embedded processor.

The Embedded Processing Directory is Live!

Monday, July 12th, 2010 by Robert Cravotta

If your embedded project includes a software programmable processor, check out our Embedded Processing Directory. The directory collects detailed information about processors and cores from over 80 different manufacturers and suppliers. It delivers processor information to developers through 25 reports that sort and filter the data across different characteristics, including company name, processor size and type, instruction set architecture, and target applications.

The directory is an online resource that supports regular periodic updates so that developers can easily find the most recent processor offerings among the sea of options. We offer two different newsletters to help developers keep abreast of the constant flow of new processors. The Directory Updates Newsletter is a free bi-weekly update that only highlights changes and upcoming features for the directory. The Embedded Insights Newsletter is a free bi-weekly e-newsletter that also includes information and insights into design principles and industry trends. Each issue highlights articles, tools, and the “word on the street” for embedded systems. Rotating features include summaries of past questions of the week and highlights from active discussions in the embedded insights channels, as well as some light-hearted features.

Additional new features in the directory include navigation from each entry to specific, relevant pages or sites about the part and company you are looking at. To encourage more companies to provide supplemental information that goes beyond the technical specification, we are collecting your requests for more information about each specific part and will pass those requests on to the appropriate companies. In this way, you can more directly influence the amount and type of information that the directory delivers to you.

Some fun facts about the directory. There are over 90,000 words of information in the master file used to create the 25 reports. The information in the master report spreads across over 30,000 cells. The “All processor by company” report, which is the closest thing to a master report for the directory, spans almost 200 printed pages. We designed the reports to allow you to examine the smallest portion possible of the entire listing to find the processors that matter most to your project.

We have an extensive roadmap of features and upgrades planned for the directory. These features include real-time interactive analysis and comparison, as well as a visualization engine that will allow you to look at and quickly compare processors in a new way. Sign up for our newsletter, check out the directory, and let us know what additional information you would like to see in the listings.

What tools (if any) help simplify initializing an embedded processor?

Wednesday, July 7th, 2010 by Robert Cravotta

Recently, I was working with a touch-interface development kit as part of a hands-on project that I am publishing about over the next few months. The out-of-the-box experience was clean (although there was a media hiccup), and the demos worked as advertised. The next step was to build a “hello world” version of a touch interface program to make sure I correctly understood how to operate the system from scratch.

The development kit offered several ways to initialize the target processor. Instead of manually setting all of the registers, I chose to use the supplied configuration wizard. Even though the configuration wizard supplied the source code to initialize the processor, I cannot say it greatly lessened the learning curve for choosing what the initialization settings should be. However, it did allow me to focus on the settings without worrying too much about the coding mechanics.

The next step was to start using the supplied API (application programming interface) to activate and query the touch interface controller. For the hello world project, I decided to toggle a light on and off whenever I pressed a specific touch button on the development board. I loaded the program and began execution, but touching the button never activated a message for the application code to process. After a bit of trying this and trying that, I realized the timer that the touch controller used was disabled.

I had assumed that the configuration wizard and touch controller API were tightly coupled and that together they would select the appropriate default settings for a touch application on the target system. However, the configuration wizard disabled the timer and the API initialization call did not enable it. This is a simple problem to fix, and I suspect the support team for the development kit will either change the API initialization call to enable that timer or change the documentation for the API call to make it clear that the application code must manually enable the timer. I personally think the API call should abstract enabling and disabling the timer.

Enabling the timer, however, was not sufficient to make the button send a message to the application code. It turns out the timeout delay for the touch sensor had been set too short to register when the user touched the pad. The timeout delay was not a direct setting; rather, it was derived from several other settings. While the configuration wizard simplified the coding for those settings, it did not derive the resulting timeout delay and display it to me when I was choosing the values to use. A conversation with the support team for the development kit suggests they will be adding that type of feedback to the wizard.
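In hindsight, both problems (the disabled timer and the too-short derived timeout) are the kind of thing a configuration sanity check could surface before the code ever runs. The sketch below illustrates the idea; the setting names, the timeout formula, and the minimum threshold are all invented for illustration, and the real derivation would come from the controller's datasheet.

```python
# Hypothetical sanity check over wizard-generated touch-controller settings.
# Field names and the timeout formula are invented for illustration.

MIN_TOUCH_TIMEOUT_MS = 20  # shortest timeout that reliably registers a press

def derived_timeout_ms(settings):
    """The effective sensor timeout is often derived from several fields
    rather than set directly -- here, a clock divider times a tick count."""
    tick_ms = settings["timer_divider"] / settings["clock_khz"]  # ms per tick
    return tick_ms * settings["timeout_ticks"]

def check_touch_config(settings):
    """Return a list of human-readable problems with the configuration."""
    problems = []
    if not settings.get("timer_enabled", False):
        problems.append("touch timer is disabled; API init will not start it")
    if derived_timeout_ms(settings) < MIN_TOUCH_TIMEOUT_MS:
        problems.append("derived timeout too short to register a touch")
    return problems
```

A wizard that displayed the derived timeout alongside the raw settings, and flagged a disabled timer, would have turned two debugging sessions into two warning messages.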

Initializing a processor that is not using an operating system has always required significant effort. Many companies now offer configuration or initialization wizards/tools to help developers use their processors. My question is, do you find that these tools simplify the learning curve for initializing your target processor? Do they make the process faster and more reliable? What lessons learned could you share about how to best use these types of tools?

If you have a question you would like to see in a future week, please contact me.

An example of innovation for embedded systems

Tuesday, July 6th, 2010 by Robert Cravotta

Embedded systems are, for the most part, invisible to the end user, yet the end application would not work properly without them. So what form does innovation take for something that is only indirectly visible to the end user? Swype’s touchscreen-based text input method is one such example. The text entry method is already available on the Samsung Omnia II and arrives on the Motorola Droid X on July 15.

Swyping is a text entry method for use with touch screens where the user places their finger on the first letter of the word they wish to type. Without lifting their finger, the user traces their finger through each of the letters of the word, and they do not lift their finger from the touch screen until they reach the final letter of the word. For example, the figure shows the trace for the word quick. This type of text entry requires the embedded engine to understand a wider range of subtle motions of the user’s finger and couples that with a deeper understanding of words and language. The embedded engine needs to be able to infer and make educated guesses about the user’s intended word.

Inferring or predicting what word a user wishes to enter into the system is not new. The iPhone, as well as the IBM Simon (from 1993), uses algorithms that predict which letter the user is likely to press next and compensate for finger-tap inaccuracy. However, swyping takes the predictive algorithm a step further and widens the text entry system’s ability to accommodate even more imprecision in the user’s finger position because each swyping motion, start to finish, is associated with a single word.
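A toy sketch can illustrate why a whole-word trace tolerates more imprecision than letter-by-letter tapping: the engine only needs the traced path to pass through the word's letters in order, anchored by the first and last letters, and can ignore every letter the finger merely passed over. This is not Swype's actual algorithm (real engines add geometric and language models on top); every name below is illustrative.

```python
# Toy trace-to-word matcher: one continuous trace maps to one whole word.
# Real engines like Swype use far more sophisticated models; this only
# shows why a single-word trace tolerates imprecise finger positions.

def is_subsequence(word, trace):
    """True if the letters of `word` appear, in order, within `trace`."""
    it = iter(trace)
    # Membership tests on an iterator consume it, enforcing in-order matching.
    return all(letter in it for letter in word)

def candidates(trace, dictionary):
    """Words anchored by the trace's first and last letters whose remaining
    letters appear in order along it, tolerating extra pass-over letters."""
    return [w for w in dictionary
            if trace and w
            and trace[0] == w[0] and trace[-1] == w[-1]
            and is_subsequence(w, trace)]
```

For a messy trace of “quick” that wanders across neighboring keys, the matcher still narrows the dictionary to the intended word, because the stray letters only have to be skippable, not absent.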

In essence, the embedded system is taking advantage of available processing cycles to extract more information from the input device (the touch screen in this case) and correlating it with a larger and more complex database of knowledge (a dictionary plus knowledge of grammar). This is analogous to innovations in other embedded control systems. For example, motor controllers deliver better energy efficiency because they collect more information about the system and environment they are monitoring and controlling. Motor controllers measure more inputs, both inferred and direct, which allows them to understand not just the instantaneous condition of the motor but also environmental and operational trends, so that the controller can adjust the motor’s behavior more effectively and extract greater efficiency than earlier controllers could. They correlate the additional environmental information with a growing database of knowledge about how the system operates under different conditions and how to adjust for variations.

The Swype engine, as well as other engines like it, supports one more capability that is important; it can learn the user’s preferences and unique usage patterns and adjust to accommodate those. As embedded systems embody more intelligence, they move away from being systems that must know everything at design time and move closer to being systems that can learn and adjust for the unique idiosyncrasies of their operating environment.

[Editor's Note: This was originally posted on Low-Power Design]

Have children ever inspired a solution for a problem you were working on?

Wednesday, June 30th, 2010 by Robert Cravotta

I recently spent a week at a friend’s house, and they have three small children. Living with young children is a stark contrast to living with teenagers. While my own children (both teenagers) impress me with their observations, their leaps of logic often make sense to me. Young children on the other hand make many leaps of logic that seem random or chaotic as they test their own models of how the world works.

During my visit, one of the young boys showed me Scribblenauts, a puzzle game on the Nintendo DS that presents you with a series of challenges (220 in total I believe). You are able to use any objects that you can think of to solve the challenge. You summon objects by typing the word for those objects, such as a ladder or a shovel. I might have been able to specify an object or two that the game was not able to create, but I am impressed by the vocabulary that the game supports – even singularities and Cthulhu.

Playing the game with the young boy was enlightening because he usually employed a different approach to solving each problem than I would. The game engine can support multiple solutions because it is a physics simulator, and the objects in the world interact with each other in a physical fashion. As an example, one of the problems in the world was to get a cat off the roof of a house. One way to accomplish that is to put cat food on the ground and the cat comes down to eat it. Another approach could be to place a dog on the roof and have it chase the cat off the roof. Yet another approach could involve catching the cat with a net, or even dropping rocks (most any object as a matter of fact) on the cat to knock it off the roof.

While playing the game with the young boy, I began to see how he approached problems and changed his tactics as his older approaches no longer solved the problem set before us. Seeing and being able to recognize his different way of seeing the world expanded my own repertoire of approaches and led me to this week’s question.

Has a child ever inspired a solution for you when trying to grapple with a problem? I suggest substituting anything for the term child, such as a pet, a spouse, a cat reaching through some bars to catch a mouse, or even an apple falling from a tree.

I suspect we find answers to problems all the time in mundane observations of the world. I wonder if, by sharing what led up to those inspirations, we can accelerate that process or spot ways to make tools that help us explore and inspire new approaches to increasingly complex problems.

If you would like to participate in this or other series, or provide a guest post, please contact me at Embedded Insights.

[Editor's Note: this was originally posted on the Embedded Master]

Subtle trade-off complexity

Tuesday, June 29th, 2010 by Robert Cravotta

This project explores the state of the art in touch sensing and development kits from as many touch providers as I can get my hands on. As I engage with more touch providers, I have started to notice an interesting and initially non-obvious set of trade-offs that each company must make to support this project. On the one hand, vendors want to show how well their technology works and how easy it is to use. On the other hand, there are layers of complexity to using touch that mean the touch supplier often must offer significant field engineering support. Some of the suppliers I am negotiating with have kits they would love to demonstrate for technical reasons, but they are leery of exposing how much domain-expertise support designers need to get the system going.

This is causing me to rethink how to highlight the kits. I originally thought I could lay out the good and the ugly from a purely technical perspective, but I am finding that ugly is context relevant and more subtle than a brief log of working with a kit might convey. Take for example development tools that support 64-bit development hosts – or should I say the lack of such support. More than one touch-sensing supplier did not initially support 64-bit hosts, and almost all of them are on a short schedule path to supporting them.

As I encounter this seemingly obvious shortfall across more suppliers’ kits, I am beginning to understand that the touch-sensing suppliers have possibly been providing more hands-on support than I first imagined, and that this is why they did not have a 64-bit port of their development tools immediately available. To encourage the continued openness of the suppliers, especially for the most exciting products that require the most field engineering support, I will try to group the common trade-offs that appear among different touch sensing implementations and discuss the context around those trade-offs from a general engineering perspective rather than as a specific vendor kit issue.

By managing this project in this way, I hope to be able to explore and uncover more of the complexities of integrating touch sensing into your applications without scaring away the suppliers who are pushing the edges of technology from showing off their hottest stuff. If I do this correctly, you will gain a better understanding of how to quickly compare different offerings and identify which trade-offs make the most sense for the application you are trying to build.

[Editor's Note: This was originally posted on Low-Power Design]

Eating dog food? It’s all in the preparation.

Monday, June 28th, 2010 by Jason Williamson

Altia provides HMI (human machine interface) engineering tools to companies in industries like automotive, medical, and white goods. When you’re providing interface software, it makes sense to use your own tools for “real” work, just as your customers would. Not only do you prove you know your own product, but you get an invaluable “user’s perspective” into the workings of your software. You get the opportunity to see where your tools shine and where they are lacking, allowing your team to plan for new features to make them better. Through our own “dog fooding” experiences, we have developed some valuable guidelines that we believe make the process go more smoothly.

First, it is important to only use released versions of the product. It is tempting to pull in the latest beta capabilities to a project, but this is a perilous course. There is a reason why that feature hasn’t been released. It hasn’t been through the full test cycle. You cannot risk the project schedule or quality of what is delivered. Producing quality on time is why you’ve been engaged in the first place. Another reason to stick with the released versions of your tools is that you should approach all of your consulting work with the idea that the customer will ultimately need to maintain the project. They need to know that the features and output used in the creation of the project are mature and trustworthy.

The next guideline addresses releases and your revision control system. A revision control system is the repository where all of the versions of product source code are stored. This often includes the “golden” release versions of the product as well as in-development “sand boxes.” We structure our revision control system such that release-worthy code for new features is kept in a nearly ready-to-release state as the next version of our product. That is, whole feature groups should be checked in together and tested to an extent such that only running the overall test suites is needed to create a product. That way, if a new feature absolutely must be used in a project, you have a lower barrier to an interim release.

Finally, it is very important to spend sufficient time architecting the project. When deadlines rapidly approach, it is tempting to take shortcuts to the end result. Since you know your software so well, you can be quite certain that these shortcuts will not be a detriment to the delivered product. However, this is almost always a shortsighted choice. When handing off the design to another person, especially a valued customer, a well-documented and rigorously-followed architecture is paramount. Your customers need to own and usually extend this design. There should be no “duct tape” in it. Who would want to receive that call to explain a kludge four years after the project has been delivered?

I encourage you to have a hearty helping of your own dog food. Not only do you serve up a result that will please your customer, but you learn by experience where you can make your software stronger and more capable. By developing with current releases, by keeping new features tested and ready to go, and by taking appropriate measures to architect the project, you make the eating of your own dog food a gourmet experience — and keep your customers coming back for seconds.

Storing Harvested Energy

Friday, June 25th, 2010 by Robert Cravotta

Systems that harvest ambient energy on an anticipated basis do not always have a 1-to-1 correlation between when they are active and operating and when there is enough ambient energy to harvest. These systems must include mechanisms not only to harvest and convert the ambient energy but also to store and manage that energy. Energy storage is essential to allow systems to continue to operate during periods of insufficient ambient energy. Energy storage devices can also enable a system to support instant-on capabilities because the system does not have to harvest enough energy from the environment before starting operation.

As with many emerging technologies, including touch screens, fully integrated energy harvesting modules may combine component parts, such as the harvesting transducers and storage technologies, from different companies within the same module. As the energy harvesting device market matures, designers will have access to more options that are fully integrated systems. For now, many of the fully integrated options available to designers include components from multiple companies.

The different types of storage technologies appropriate for energy harvesting applications include thin film micro-energy storage devices, supercapacitors, lithium-ion or lithium polymer batteries, high capacity batteries, and traditional capacitors. Capacitors are able to support applications that need energy spikes. Batteries leak less energy than capacitors, and they are more appropriate for applications that need a steady supply of energy. Thin-film energy storage cells support high numbers of charge/discharge cycles.


(Caption: Rechargeable and nonrechargeable storage technologies. Cymbet, Infinite Power Systems (IPS), Cap-XX, Saft, and Tadiran are listed as some of the companies providing different storage technologies (source: Adaptive Energy).)

The table documents some of the companies providing storage devices as well as the voltage and maximum current levels that these different technologies support. Energy density is the amount of energy stored per unit mass: the higher a device’s energy density, the more energy it can store relative to its mass. Power density is the maximum amount of power that the device can supply per unit mass: the higher a device’s power density, the more power it can supply relative to its mass.
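A quick worked example shows how the two metrics are computed and why a high-energy-density device is not automatically a high-power-density one; the cell numbers below are illustrative, not vendor specifications.

```python
# Worked example of the energy-density vs. power-density distinction.
# The example capacity, voltage, current, and mass values are illustrative.

def energy_density_wh_per_kg(capacity_mah, voltage_v, mass_g):
    """Energy stored per unit mass, in Wh/kg."""
    energy_wh = (capacity_mah / 1000.0) * voltage_v
    return energy_wh / (mass_g / 1000.0)

def power_density_w_per_kg(max_current_ma, voltage_v, mass_g):
    """Maximum power deliverable per unit mass, in W/kg."""
    power_w = (max_current_ma / 1000.0) * voltage_v
    return power_w / (mass_g / 1000.0)

# A hypothetical 3 g lithium coin cell: 240 mAh at 3.0 V stores
# 0.72 Wh, or 240 Wh/kg -- but if it can only source 30 mA, its
# power density is just 30 W/kg. A supercapacitor reverses the
# trade-off: little stored energy, delivered very quickly.
coin_energy = energy_density_wh_per_kg(capacity_mah=240, voltage_v=3.0, mass_g=3.0)
coin_power = power_density_w_per_kg(max_current_ma=30, voltage_v=3.0, mass_g=3.0)
```

This is why the table lists both figures: an application that needs energy spikes cares about power density, while one that needs a long, steady trickle cares about energy density.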

In addition to the energy and power density for each type of storage technology, the temperature ranges that your application will operate in will affect the appropriateness of one approach versus another. For example, in high temperature environments, a lithium polymer battery is generally not a good choice, while for low temperature environments, thin film batteries exhibit lower maximum current ratings. Another consideration as to which storage technology to use relates to the anticipated number of charge/discharge cycles you will subject the device to. For example, a designer using a rechargeable storage approach, such as a battery or capacitor, may run into trouble if the storage mechanism is unable to maintain sufficient performance characteristics while undergoing high numbers of charge/discharge cycles.

The purpose of the energy storage component in an integrated energy harvesting module is to accumulate and preserve the energy captured by the harvester and conversion electronics. In order to deliver maximum storage efficiency, designers should couple the storage technology with the conversion electronics to maximize the effectiveness of storing the energy charge coming from the harvester component. Lastly, for many applications, the storage component should exhibit a slow leakage characteristic so that it can store energy for long periods to accommodate the periods of energy starvation that the system may experience.

If you would like to be an information source for this series or provide a guest post, please contact me at Embedded Insights.

[Editor's Note: This was originally posted at the Embedded Master]