User Interfaces Channel

The User Interfaces channel focuses on the physical and logical trade-offs of integrating user interfaces into embedded designs so that a user can correctly, consistently, and unambiguously control the behavior of the system.

Touchscreen User Interface checklist: criteria for selection

Thursday, August 19th, 2010 by Ville-Veikko Helppi

Touchscreens demand more from UI (user interface) design and development methodologies. To select the right technology, designers should weigh the following topics.

1) All-inclusive designer toolkit. As the touchscreen changes the UI paradigm, one of the most important aspects of UI design is how quickly the designer can see the behavior of the UI under development. Ideally, the UI technology includes a design tool that lets the designer immediately observe the behavior of the newly created UI and modify it easily before target deployment.

2) Creation of the “wow factor.” It is essential that UI technology enables developers and even end-users to easily create clever little “wow factors” on the touchscreen UI. These technologies, which allow the rapid creation and radical customization of the UI, have a significant impact on the overall user experience.

3) Controlling the BoM (bill of materials). For UIs, everything is about the look and feel, ease of use, and how well the UI reveals the capabilities of the device. In some situations, adding a high-resolution screen to a low-end processor is all that’s required to deliver a compelling user experience. Equally important is how the selected UI technology reduces engineering costs related to UI work. Adopting a technology that separates software development from UI creation delivers a better user experience without raising the BoM.

4) Code-free customization. Ideally, all visual and interactive aspects of a UI should be configurable without recompiling the software. This can be achieved by providing mechanisms to describe the UI’s characteristics in a declarative way. Such a capability affords rapid customization without any changes to the underlying embedded code base; a minimal sketch of this data-driven idea follows the checklist.

5) Open standard multimedia support. To enable the rapid integration of any type of multimedia content into a product’s UI (regardless of the target hardware), some form of API standardization must be in place. The OpenMAX standard addresses this need by providing a framework for integrating multimedia software components from different sources, making it easier to exploit silicon-specific features such as video acceleration.
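To make the code-free customization point in item 4 concrete, here is a minimal C sketch of a data-driven UI description. The widget structure, table contents, and drawing helper are hypothetical; the point is only that the UI's look lives in data that can be swapped out without recompiling the application logic.

```c
/* Hypothetical data-driven UI description: the table below is plain data,
 * so labels, positions, and colors can change without touching the logic.
 * In a real product the table would be loaded from a file or flash image. */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    const char  *label;
    int          x, y, width, height;
    unsigned int color;      /* 0xRRGGBB */
} widget_desc_t;

static const widget_desc_t ui_layout[] = {
    { "Start", 10, 10, 100, 40, 0x2288FF },
    { "Stop",  10, 60, 100, 40, 0xFF4444 },
};

/* Stand-in for the real drawing API */
static void draw_widget(const widget_desc_t *w)
{
    printf("rect (%d,%d) %dx%d color %06X label \"%s\"\n",
           w->x, w->y, w->width, w->height, w->color, w->label);
}

int main(void)
{
    for (size_t i = 0; i < sizeof ui_layout / sizeof ui_layout[0]; i++)
        draw_widget(&ui_layout[i]);
    return 0;
}
```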

Just recently, Apple replaced Microsoft as the world’s largest technology company. This is a good example of how a company that produces innovative, user-friendly products with compelling user interfaces can fuel the growth of technology into new areas. Remember, the key isn’t necessarily the touchscreen itself – but the user interfaces running on the touchscreen. Let’s see what the vertical markets can do to take the user interface and touchscreen technology to the next level!

First and second generation touch sensing

Tuesday, August 10th, 2010 by Robert Cravotta

I recently proposed a tipping point for technology for the third generation of a technology or product, and I observed that touch technology seems to be going through a similar pattern as more touch solutions are integrating third generation capabilities. It is useful to understand the difference between the different generations of touch sensing to better understand the impact of the emerging third generation capabilities for developers.

First-generation touch sensing relies on the embedded host processor to drive the touch sensor and on the application software to understand how to configure, control, and read it. The application software is aware of and manages the details of the touch sensor drivers and the analog-to-digital conversion of the sense circuits. A typical control flow to capture a touch event consists of the following steps (a C sketch of the sequence appears after the list):

1) Activate the X driver(s)

2) Wait a predetermined amount of time for the X drivers to settle

3) Start the ADC measurement(s)

4) Wait for the ADC measurement to complete

5) Retrieve the ADC results

6) Activate the Y driver(s)

7) Wait a predetermined amount of time for the Y drivers to settle

8) Start the ADC measurement(s)

9) Wait for the ADC measurement to complete

10) Retrieve the ADC results

11) Decode and map the measured results to an X,Y coordinate

12) Apply any sensor specific filters

13) Apply calibration corrections

14) Use the result in the rest of the application code
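
For illustration, here is a minimal C sketch of that first-generation sequence. Every helper below (the driver, timing, and ADC calls, and the settle time) is a hypothetical placeholder for the hardware-specific code that the application itself must own in a first-generation design.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hardware-specific helpers assumed to exist in the board support code */
extern void     activate_x_drivers(void);
extern void     activate_y_drivers(void);
extern void     delay_us(uint32_t us);
extern void     adc_start(void);
extern bool     adc_done(void);
extern uint16_t adc_result(void);
extern void     map_to_xy(uint16_t raw_x, uint16_t raw_y,
                          uint16_t *x, uint16_t *y);
extern void     apply_filters_and_calibration(uint16_t *x, uint16_t *y);

#define SETTLE_TIME_US 50u   /* assumed settling time; tune per sensor */

/* Steps 1-14 from the list above, driven entirely by the application. */
bool read_touch_point(uint16_t *x, uint16_t *y)
{
    activate_x_drivers();              /* 1) activate the X driver(s)       */
    delay_us(SETTLE_TIME_US);          /* 2) wait for the drivers to settle */
    adc_start();                       /* 3) start the ADC measurement      */
    while (!adc_done()) { }            /* 4) wait for it to complete        */
    uint16_t raw_x = adc_result();     /* 5) retrieve the ADC result        */

    activate_y_drivers();              /* 6) repeat for the Y axis          */
    delay_us(SETTLE_TIME_US);          /* 7)                                */
    adc_start();                       /* 8)                                */
    while (!adc_done()) { }            /* 9)                                */
    uint16_t raw_y = adc_result();     /* 10)                               */

    map_to_xy(raw_x, raw_y, x, y);             /* 11) decode and map        */
    apply_filters_and_calibration(x, y);       /* 12-13) filter, calibrate  */
    return true;                       /* 14) hand the result to the app    */
}
```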

Second-generation touch sensing usually encapsulates this sequence of steps (activating the drivers, measuring the sensing circuits, and applying the filters and calibration corrections) into a single touch event. Second-generation solutions may also offload the sensor calibration function, although the application software may still need to know when and how to initiate calibration. A third-generation solution may provide automatic calibration so that the application software does not need to know when or how to recalibrate the sensor as the operating environment changes (more in a later article).

A challenge for providers of touch sensing solutions is striking a balance between the needs of developers who want low levels of abstraction and those who want high levels. For low-level work, the developer needs intimate knowledge of the hardware resources and access to the raw data in order to build custom software functions that extend the capabilities of the touch sensor or even improve its signal-to-noise ratio. Developers using the touch sensor as a high-level device may instead work through an API (application programming interface) to configure the touch sensor as well as turn it on and off.

A second- or third-generation touch API typically includes high-level commands to enable and disable the sensor, calibrate it, and read and write its configuration registers, as well as low-level commands to access its calibration information. The details of configuring the sensor and the driver for event reporting differ from device to device. Another important capability that second- and third-generation solutions may include is the ability to support various touch sensors and display shapes without requiring the developer to rework the application code. This matters because, with few fully integrated touch and display systems available, the developer of many contemporary designs must be separately aware of the display, the touch sensing, and the controller components. In short, we are still in the Wild West era of embedded touch sensing and display solutions.
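
As a purely hypothetical illustration of what such an API might look like to the application, the sketch below lists the kinds of calls a second- or third-generation controller library could expose; none of these names correspond to a particular vendor's SDK.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t x;          /* calibrated coordinate */
    uint16_t y;
    bool     pen_down;   /* pen-up/down state */
} touch_event_t;

/* High-level commands */
int  touch_enable(void);
int  touch_disable(void);
int  touch_calibrate(void);                 /* may be automatic in a 3rd-gen part */
int  touch_read_config(uint8_t reg, uint8_t *value);
int  touch_write_config(uint8_t reg, uint8_t value);

/* Encapsulated event read: driving the sensor, ADC conversion, filtering,
 * and calibration all happen inside the controller or vendor library. */
bool touch_get_event(touch_event_t *event);

/* Typical application-side use */
extern void handle_press(uint16_t x, uint16_t y);

void poll_touch(void)
{
    touch_event_t ev;
    if (touch_get_event(&ev) && ev.pen_down)
        handle_press(ev.x, ev.y);
}
```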

Impacts of touchscreens for embedded software

Thursday, August 5th, 2010 by Ville-Veikko Helppi

No question, all layers of the embedded software are affected when a touchscreen is used on a device. A serious challenge is finding room to visually express a company’s unique brand identity, since it is the software running on the processor that puts the pixels on the screen. From the software point of view, the touchscreen removes one abstraction level between the user and the software. For example, many devices have removed ‘OK’ buttons from dialogs because the user can tap the whole dialog instead of tapping the button.

Software plays an even more critical role as we move into a world where the controls on a device are virtual rather than physical. At the lowest level of software, the touchscreen driver provides a mouse emulation: touching certain pixels is treated much like clicking a mouse cursor on them. One difference is that the mouse driver reports its data as “relative” movement while the touchscreen driver reports its data as “absolute” position. Writing the touchscreen driver is usually trivial, as this component only passes information from the physical screen to higher levels of software. The only inputs the driver needs are a Boolean indicating whether the screen is touched and the x- and y-coordinates of the touch.
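
A minimal sketch of that driver-level report, using hypothetical structure and function names, might look like this:

```c
#include <stdbool.h>
#include <stdint.h>

/* Everything the touchscreen driver passes upward: a touched flag plus
 * absolute coordinates (unlike a mouse, which reports relative movement). */
typedef struct {
    bool     touched;   /* is the screen currently being pressed? */
    uint16_t x;         /* absolute column, in sensor or pixel units */
    uint16_t y;         /* absolute row */
} touch_report_t;

/* Provided by the OS or input layer; it turns the absolute report into
 * pointer events for the higher software layers. */
extern void report_to_input_layer(const touch_report_t *report);

/* Called from the driver's interrupt or polling routine */
void touch_driver_report(uint16_t raw_x, uint16_t raw_y, bool pressed)
{
    touch_report_t r = { .touched = pressed, .x = raw_x, .y = raw_y };
    report_to_input_layer(&r);
}
```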

At the operating system level, a touchscreen user interface generates more frequent operating system events than a typical icon- or widget-based user interface. In addition to the touchscreen, a variety of other sensors (e.g., accelerometers) may feed stimuli to the operating system through their drivers. Generally, a standardized operating system brings confidence and consistency to device creation, but if it needs to be changed, the cost can be astronomical because the compatibility of the other components must be retested.

The next layer is where the middleware components of the operating system are found, or in this context, where the OpenGL/ES library operates. Various components within this layer do different things, from processing raw data with mathematical algorithms and providing a set of APIs for drawing to interfacing between software and hardware acceleration and providing services such as rendering and font engines. While this type of standardization is generally a good thing, in some cases it can lead to non-differentiation; in the worst case, it might even stifle the inspiration for an innovative user interface. Ideally, a standardized open library, together with a rich and easily customizable user interface technology, produces superb results.

The application layer is the most visible part of the software and shapes the user experience. It is here that developers must ask:

1) Should the application run in full-screen mode or use widgets distributed around the screen?

2) What colors, themes, and templates best illustrate the behavior of the user interface?

3) How small or large should the user interface elements be?

4) In what ways will the user interface elements behave and interact?

5) How intuitive do I want to make this application?

Compelling UI design tools are essential for the rapid creation of user interfaces.

In the consumer space, there are increasingly more competitive brands with many of the same products and product attributes. Manufacturers are hard-pressed to find any key differentiator among this sea of “me too” offerings. One way to stand out is by delivering a rich UI experience via a touchscreen display.

We are starting to see this realization play out in all types of consumer goods, even in white goods as pedestrian as washing machines. Innovative display technologies are now replacing physical buttons and levers. Imagine a fairly standard washing machine with a state-of-the-art LCD panel. This would allow the user to easily browse and navigate all the functions on that washing machine – and perhaps learn a new feature or two. Once an attractive touchscreen display is in place, any customization can be accomplished simply by changing the software running on the display. Things like changing the branding or adding compelling video clips and company logos all become much simpler because everything is driven by software. If the manufacturer uses the right technology, it may not even need to modify the software to change the user experience.

Driven by the mobile phone explosion, the price point of display technology has come down significantly. As a result, washing machine manufacturers can add more perceived value to their product without necessarily adding too much to the BoM (bill of materials). A display might increase the BoM by $30 before the machine leaves the factory, but it could increase the MSRP by at least $100. No doubt, this can have a huge impact on the company’s bottom line. The result is a “win-win” for the manufacturer and the consumer: the manufacturer is able to differentiate the product more easily and more cost-effectively, while the consumer gets a product that is easier to use thanks to the enhanced UI.

The final part in this four-part series presents a checklist for touchscreen projects.

The Next Must-Have System

Friday, July 30th, 2010 by Max Baron

The 9th annual Research@Intel Day held at the Computer History Museum showcased more than 30 research projects demonstrating the company’s latest innovations in the areas of energy, cloud computing, user experience, transportation, and new platforms.

Intel CTO Justin Rattner made one of the most interesting announcements: the creation of a new research division called Interaction and Experience Research (IXR).

I believe IXR’s task will be to determine the nature of the next must-have system and the processors and interfaces that will make it successful.

According to Justin Rattner, you have to go beyond technology; better technology is no longer enough since individuals nowadays value a deeply personal information experience. This suggests that Intel’s target is the individual, a person who could be a consumer and/or a corporate employee. But how do you find out what the individual who represents most of us will need beyond the systems and software already available today?

To try to hit that moving target, Intel has been building up its capabilities in the user experience and interaction areas since the late nineties. One of the achievements was the Digital Health system, now a separate business division. It started out as a research initiative in Intel Labs, formerly the “Corporate Technology Group” (CTG), with the objective of finding out how technology could help in the health space.

Intel’s most recent effort has been to assemble the IXR research team, consisting of both user interface technologists and social scientists. The IXR division is tasked with helping to define and create new user experiences and platforms in many areas, some of which are the end-use of television, automobiles, and signage. The new division will be led by Intel Fellow Genevieve Bell, a leading user-centered design advocate at Intel for more than ten years.

Genevieve Bell is a researcher. She was raised in Australia. She received her bachelor’s degree in Anthropology from Bryn Mawr College in 1990 and her master’s and doctorate degrees in Anthropology in 1993 and 1998 from Stanford University where she also was a lecturer in the Department of Anthropology. In her presentation, Ms. Bell explained that she and her team will be looking into the ways in which people use, re-use and resist new information and communication technologies.

To envision the next must-have system, Intel’s IXR division is expected to create a bridge of research incorporating social research, design enabling, and technology research. The team’s social science, design, and human-computer interaction researchers will continue the work that’s already been going on by asking questions to find out what people will value and what will fit into their lives. New systems, software, user interactions, and changes in media content and consumption could emerge from the obtained data on one hand and from the research into the next generation of user interfaces on the other.

Bell showed a photo of a child using his nose to key in data on his mobile—an example of a type of user-preferred interface that may seem strange, but it can be part of the data used by social scientists to define an innovation that may create the human I/Os for 2020. The example also brought out a different aspect that was not addressed: how do you obtain relevant data without placing in the hands of your population sample a scale model or an actual system to help review it, improve it or even reject it and start from scratch?

In addition to the very large investment Intel makes in US-based research, it also owns labs and collaborates with or supports over 1,000 researchers worldwide. According to Intel, 80% of Intel Labs China focuses on embedded system research. Intel Labs Europe conducts research spanning a wide spectrum from nanotechnologies to cloud computing. Intel Labs Europe’s website shows research locations and collaboration in 17 sites – and the list doesn’t even include the recent announcement of the ExaScience Lab in Flanders, Belgium.

But Intel is not focused only on the long term. Two examples that speak of practical solutions for problems encountered today are the Digital Health system that can become a link between the patient at home and doctor at the clinic, and the connected vehicle (misinterpreted by some reporters as an airplane-like black box intended for automobiles).

In reality, according to Intel, the connected vehicle’s focus was on personal and vehicle safety. For example, when an attempt is made to break into the vehicle, captured video can be viewed via the owner’s mobile device. Or, for personal safety and experience, a destination-aware connected system could track vehicle speed and location to provide real-time navigation based on information about other vehicles and detours in the immediate area.

Both life-improving systems need to find wider acceptance from the different groups of people that perceive their end-use in different ways.

Why is Intel researching such a wide horizon of disciplines, and what, if anything, is still missing? One of the answers has been given to us by Justin Rattner himself.

I believe that Rattner’s comment “It’s no longer enough to have the best technology,” reflects the industry’s trend in microprocessors, cores, and systems. The SoC is increasingly taking the exact configuration of the OEM system that will employ it. SoCs are being designed to deliver precisely the price-performance needed at the lowest power for the workload—and for most buyers the workload has become a mix of general-purpose processing and multimedia, a mix in which the latter is dominant.

The microprocessor’s role can no longer be fixed or easily defined since the SoCs incorporating it can be configured in countless ways to fit systems. Heterogeneous chips execute code by means of a mix of general purpose processors, DSPs and hardwired accelerators. Homogeneous SoC configurations employ multiple identical cores that together can satisfy the performance needed. And, most processor architectures have not been spared; their ISAs are being extended and customized to fit target applications.

Special-purpose ISAs have emerged, trying and, most of the time, succeeding in reducing power and silicon real estate for specific applications. Processor ISA IP owners and enablers are helping SoC architects who want to customize their core configuration and ISA. A few examples include ARC (now Virage Logic and possibly soon Synopsys), Tensilica, and suppliers of FPGAs such as Altera and Xilinx. ARM and MIPS offer their own flavors of configurability. ARM offers so many ISA enhancements across its different cores that, aside from its basic compatibility, it can be considered a “ready-to-program” application-specific ISA, while MIPS leaves most of its allowed configurability to the SoC architect.

In view of this rapidly morphing scenario, the importance of advanced social research for Intel, and for that matter for anybody in the same business, cannot be overstated. Although it may not be perceived as such, Intel has already designed processors to support specific applications.

The introduction of the simple, lower-performance but power-efficient Atom core was intended to populate the mobile notebook and netbook. The Moorestown platform brings processing closer to the needs of mobile low-power systems, while the still-experimental SCC (Intel’s Single-chip Cloud Computer) is configured to best execute data searches in servers.

It’s also interesting to see what can be done with an SCC-like chip employed in tomorrow’s desktop.

If Intel’s focus is mainly on the processing platform, as it may be, what seems to be missing and who is responsible for the rest? The must-have system of the future must run successful programs and user interface software. While Intel is funding global research that’s perceived to focus mostly on processors and systems, who is working on the end-use software? I don’t see the large software companies engaging in similar research by themselves or in cooperation with Intel’s or other chip and IP companies’ efforts. And we have not been told precisely how a new system incorporating hardware and software created by leading semiconductor and software scientists will be transferred from the drawing board to system OEMs. An appropriate variant on Taiwan’s ITRI model comes to mind, but only time will tell.

The User Interface Paradigm Shift

Thursday, July 22nd, 2010 by Ville-Veikko Helppi

Touchscreens are quickly changing the world around us. Clicking on an image with a touchscreen requires much less thinking and relies more on user intuition. Touchscreens are also said to be the fastest pointing method available, but that isn’t necessarily true – it all depends on how the user interface is structured. For example, most users accept a ten-millisecond delay when scrolling with a cursor and mouse, but with touchscreens this same period of time feels much longer, so the user experience is perceived as less smooth. Also, multi-touch capabilities are not possible with mouse emulation, at least not as intuitively as with a touchscreen. The industry has done a good job providing a screen pen or stylus to assist the user in selecting the right object on smaller screens, thus silencing the critics of touchscreens who say they are far from ideal as a precise pointing method.

The touchscreen has changed the nature of UI (user interface) element transitions. The motions of different UI elements can make a difference in device differentiation and, if implemented properly, tell a compelling story. Every UI element transition must have a purpose and context, as it usually reinforces the UI elements. Something as simple as a buffer is effective at giving a sense of weight to a UI element – and moving these types of elements without a touchscreen would be awkward. For UI creation, the best user experience is achieved when UI element transitions are natural, are consistent with other UI components (e.g., widgets, icons, menus), and deliver a solid, tangible feel for the UI. 3D effects during the motion also provide a far better user experience.

3D layouts enable more touchscreen-friendly user interfaces.

Recent studies in human behavior, along with documented consumer experiences, indicate that the gestures of modern touchscreens have expanded the ways users can control a device through its UI. As we have seen with the “iPhone phenomenon,” the multi-touch screen changes the reality behind the display, allowing new ways to control the device through hand-eye coordination (e.g., pinching, zooming, rotating). But it’s not just the iPhone that’s driving this change. We’re seeing other consumer products trending toward simplifying the user experience and enhancing personal interaction. E-book readers are perfect examples. Many of these devices have a touchscreen UI through which the user interacts with the device directly, at an almost subconscious level. This shift toward an improved user experience has also reinforced the idea that touchscreens reduce the number of user inputs required for the basic functioning of a device.

The third part in this four-part series explores the impact of touchscreens on embedded software.

Is the third generation the charm?

Tuesday, July 20th, 2010 by Robert Cravotta

In a recent conversation with Ken Maxwell, President of Blue Water Embedded, Ken mentioned several times how third-generation touch controllers are applying dedicated hardware resources to encapsulate and offload some of the processing necessary to deliver robust touch interfaces. We talked about his use of the term third generation, as he seemed not quite comfortable using it. However, I believe it is the most appropriate term; it is consistent with my observations about third-generation technologies and is the impetus for my doing this hands-on project with touch development kits in the first place.

While examining technology’s inflections, I have noticed that technological capability is only one part of an industry inflection based around that technology. The implementation must also hide complexity from the developer and user; integrate subsystems to deliver lower costs, shorter schedules, and simpler learning curves; and pull together multi-domain components and knowledge into a single package. Two big examples of inflection points occurred around the third generation of the technology or product: Microsoft Windows and the Apple iPod.

Windows reached an inflection point at version 3.0 (five years after version 1.0 was released) when it simplified the management of the vast array of optional peripherals available for the desktop PC and hid much of the complexity of sharing data between programs. Users could already transfer data among applications, but they needed to use explicit translation programs and tolerate the loss of data from special features. Windows 3.0 hid the complexity of selecting those translation programs and provided a data-interchange format and mechanism that further improved users’ ability to share data among applications.

The third generation iPod reached an industry inflection point with the launch of the iTunes Music Store. The “world’s best and easiest to use ‘jukebox’ software” introduced a dramatically simpler user interface that needed little or no instruction to get started and introduced more people to digital music.

Touch interface controllers and development kits are at a similar third-generation crossroads. First-generation software drivers for touch controllers required the target or host processor to drive the sensing circuits and perform the coordinate mapping. Second-generation touch controllers freed up some of the processing requirements of the target processor by including dedicated hardware resources to drive the sensors, and they abstracted the sensor data the target processor works with to pen-up/down and coordinate location information. Second-generation controllers still require significant processing resources to manage debounce processing and to reject bad touches such as palm and face presses.

Third-generation touch controllers integrate even more dedicated hardware and software to offload more context processing from the target processor to handle debounce processing, reporting finger or pen flicking inputs, correctly resolving multi-touch inputs, and rejecting bad touches from palm, grip, and face presses. Depending on the sensor technology, third-generation controllers are also going beyond the simple pen-up/down model by supporting hover or mouse-over emulation. The new typing method supported by Swype pushes the pen-up/down model yet another step further by combining multiple touch points within a single pen-up/down event.
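
To illustrate what this offloading means for the host software, the sketch below shows a hypothetical event model a third-generation controller might present: the host receives classified events rather than raw coordinates. The event types, fields, and UI callbacks are assumptions, not any specific controller's interface.

```c
#include <stdint.h>

typedef enum {
    TOUCH_EVT_PEN_DOWN,
    TOUCH_EVT_PEN_UP,
    TOUCH_EVT_MOVE,
    TOUCH_EVT_FLICK,      /* flick vector reported in dx/dy */
    TOUCH_EVT_HOVER,      /* mouse-over emulation, if the sensor supports it */
    TOUCH_EVT_REJECTED    /* palm, grip, or face press already filtered out */
} touch_evt_type_t;

typedef struct {
    touch_evt_type_t type;
    uint8_t          contact_id;  /* distinguishes fingers for multi-touch */
    uint16_t         x, y;
    int16_t          dx, dy;      /* velocity or flick vector */
} touch_evt_t;

extern void ui_press(uint8_t contact, uint16_t x, uint16_t y);
extern void ui_scroll(int16_t dx, int16_t dy);

/* The host simply dispatches; debounce, palm rejection, and multi-touch
 * resolution have already happened inside the controller. */
void dispatch_touch_event(const touch_evt_t *e)
{
    switch (e->type) {
    case TOUCH_EVT_PEN_DOWN: ui_press(e->contact_id, e->x, e->y);  break;
    case TOUCH_EVT_FLICK:    ui_scroll(e->dx, e->dy);              break;
    case TOUCH_EVT_REJECTED: /* nothing to do: filtered already */ break;
    default:                                                       break;
    }
}
```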

Is it a coincidence that touch interfaces seem to be crossing an industry inflection point with the advent of third generation controllers, or is this a relationship that also manifests in other domains? Does your experience support this observation?

[Editor's Note: This was originally posted on Low-Power Design]

Get in Touch with Your Inner User Interface

Thursday, July 15th, 2010 by Ville-Veikko Helppi

Touchscreens have gone from fad to “must have” seemingly overnight. The rapid growth of touchscreen user interfaces in mobile phones, media players, navigation systems, point-of-sale systems, and various other devices has changed the landscape in a number of vertical markets. In fact, original design manufacturers (ODMs) see the touchscreen as a way to differentiate their devices and compete against one another in an ever-expanding marketplace. But ODMs take note – a touchscreen alone will not solve the problem of delivering a fantastic user experience. If the underlying user interface is not up to snuff, the most amazing whiz-bang touchscreen won’t save you.

Touchscreens have come a long way from the early ’90s, when they were used in primitive sales kiosks and public information displays. These devices were not cutting-edge masterpieces, but they did help jump-start the industry and expose large audiences (and potential future users) to the possibilities of what this type of technology might offer. It wasn’t until a decade later that consumers saw the major introduction of touchscreens – and the reason for this was pretty simple: the hardware was just too big and too expensive. Touchscreens became more usable and more pervasive only after the hardware shrank significantly.

Today there is a host of options in touchscreen technology, including resistive, projected-capacitive, surface-capacitive, surface acoustic wave, and infrared, to name a few. According to DisplaySearch, a display market research organization, resistive displays now occupy 50 percent of the market due to their cost-effectiveness, consistency, and durable performance, while projected-capacitive has 31 percent of the market. In total, more than 600 million touchscreens shipped in 2009. DisplaySearch also forecasts that projected-capacitive touchscreens will soon pass resistive screens as the number one touchscreen technology (measured by revenues), in part because the Apple iPad uses projected-capacitive technology. And finally, according to Gartner, the projected-capacitive touchscreen segment is estimated to hit 1.3 billion units by 2012, a 44 percent compound annual growth rate. These estimates indicate serious growth potential in the touchscreen technology sector.

However, growth ultimately hinges on customer demand. Some devices, such as safety- and mission-critical systems, still do not utilize the capabilities found in touchscreens. This is because, with mission-critical systems, there is very little room for input mistakes by the user. In many cases, touchscreens are considered a more fault-sensitive input method than old-fashioned button- and switch-based input mechanisms. For some companies, the concern is not faulty user input but cost; adding a $30 touchscreen is not an option when it won’t add any value to the product’s price point.

So what drives touchscreen adoption? Adoption is mainly driven by:

  1. Lowering the cost of the hardware
  2. Testing and validating new types of touchscreen technologies in the consumer space, and then pushing those technologies into other vertical markets
  3. The aesthetic and ease-of-use appeal of a touchscreen – a sexier device gains more attention than its not-so-sexy non-touchscreen cousin.

This is true regardless of the type of device, whether it’s a juice blender, glucose monitor, or infotainment system in that snazzy new BMW.

The second part in this four-part series explores the paradigm shift in user interfaces that touchscreens are causing.

An example of innovation for embedded systems

Tuesday, July 6th, 2010 by Robert Cravotta

Embedded systems are, for the most part, invisible to the end user, yet the end application would not work properly without them. So what form does innovation take for something that is only indirectly visible to the end user? Swype’s touchscreen-based text input method is one example. The text entry method is already available on the Samsung Omnia II and releases on July 15 for the Motorola Droid X.

Swyping is a text entry method for touch screens in which the user places a finger on the first letter of the word they wish to type and, without lifting it, traces through each letter of the word, lifting the finger only after reaching the final letter. For example, the figure shows the trace for the word “quick.” This type of text entry requires the embedded engine to understand a wider range of subtle finger motions and to couple that with a deeper understanding of words and language. The embedded engine needs to be able to infer and make educated guesses about the user’s intended word.

Inferring or predicting what word a user wishes to enter into the system is not new. The iPhone, as well as the IBM Simon (from 1993), uses algorithms to predict which letter the user is likely to press next and to compensate for finger-tap accuracy. However, swyping takes the predictive algorithm a step further and widens the text entry system’s ability to accommodate even more imprecision in the user’s finger position, because each swyping motion, start to finish, is associated with a single word.
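
To make the idea concrete, here is a deliberately simplified C sketch of matching a traced key sequence against a small dictionary. This is not Swype's actual algorithm, which also weighs path geometry, timing, and language statistics; the trace string, the dictionary, and the matching rule are invented purely for illustration.

```c
#include <stdio.h>
#include <string.h>

/* A word is a candidate if its first and last letters match where the
 * finger went down and lifted off, and its remaining letters appear, in
 * order, somewhere along the traced key sequence. */
static int trace_matches(const char *trace, const char *word)
{
    size_t tlen = strlen(trace), wlen = strlen(word);
    if (tlen == 0 || wlen == 0)
        return 0;
    if (trace[0] != word[0] || trace[tlen - 1] != word[wlen - 1])
        return 0;

    const char *t = trace;
    for (size_t i = 0; i < wlen; i++) {
        t = strchr(t, word[i]);   /* next occurrence of this letter in the trace */
        if (t == NULL)
            return 0;
        t++;
    }
    return 1;
}

int main(void)
{
    /* Keys swept over while tracing "quick" on a QWERTY layout, including
     * letters the finger merely passes across (an invented example). */
    const char *trace = "qwertyuihjcvbnk";
    const char *dictionary[] = { "quick", "quack", "wick", "quit" };

    for (size_t i = 0; i < sizeof dictionary / sizeof dictionary[0]; i++)
        if (trace_matches(trace, dictionary[i]))
            printf("candidate: %s\n", dictionary[i]);   /* prints "quick" */
    return 0;
}
```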

In essence, the embedded system is taking advantage of available processing cycles to extract more information from the input device (the touch screen in this case) and correlate it with a larger and more complex database of knowledge (a dictionary and knowledge of grammar in this case). This is analogous to innovations in other embedded control systems. For example, motor controllers are able to deliver better energy efficiency because they collect more information about the system and environment they are monitoring and controlling. Motor controllers measure more inputs, both inferred and direct, which allows them to understand not just the instantaneous condition of the motor but also environmental and operational trends, so that the controller can adjust the motor’s behavior more effectively and extract greater efficiency than earlier controllers could. They correlate the additional environmental information with a growing database of knowledge about how the system operates under different conditions and how to adjust for variations.

The Swype engine, as well as other engines like it, supports one more capability that is important; it can learn the user’s preferences and unique usage patterns and adjust to accommodate those. As embedded systems embody more intelligence, they move away from being systems that must know everything at design time and move closer to being systems that can learn and adjust for the unique idiosyncrasies of their operating environment.

[Editor's Note: This was originally posted on Low-Power Design]

Subtle trade-off complexity

Tuesday, June 29th, 2010 by Robert Cravotta

This project explores the state of the art in touch sensing and development kits from as many touch providers as I can get my hands on. As I engage with more touch providers, I have started to notice an interesting and initially non-obvious set of trade-offs that each company must make to support this project. On the one hand, vendors want to show how well their technology works and how easy it is to use. On the other hand, there are layers of complexity to using touch that mean the touch supplier often provides significant field engineering support. Some of the suppliers I am negotiating with have kits they would love to demonstrate for technical reasons, but they are leery of exposing how much domain-expertise support designers need to get the system going.

This is causing me to rethink how to highlight the kits. I originally thought I could lay out the good and the ugly from a purely technical perspective, but I am finding out that ugly is context dependent and more subtle than a brief log of working with a kit might justify. Take, for example, development tools that support 64-bit development hosts – or should I say the lack of such support. More than one touch-sensing supplier did not (or still does not) support 64-bit hosts out of the box, and almost all of them are on a short schedule to supporting 64-bit hosts.

As I encounter what appeared to be an obvious shortfall across more suppliers’ kits, I am beginning to understand that the touch-sensing suppliers have possibly been providing more hands-on support than I first imagined, and that this is why they did not have a 64-bit port of their development tools immediately available. To encourage the continued openness of the suppliers, especially for the most exciting products that require the most field engineering support, I will try to group the common trade-offs that appear among different touch sensing implementations and discuss the context around those trade-offs from a general engineering perspective rather than as a specific vendor kit issue.

By managing the project this way, I hope to explore and uncover more of the complexities of integrating touch sensing into your applications without scaring the suppliers who are pushing the edges of the technology away from showing off their hottest stuff. If I do this correctly, you will gain a better understanding of how to quickly compare different offerings and identify which trade-offs make the most sense for the application you are trying to build.

[Editor's Note: This was originally posted on Low-Power Design]

Resistive Touch Sensing Primer

Tuesday, June 8th, 2010 by Robert Cravotta

Resistive touch sensors consist of several panels coated with a metallic film, such as ITO (indium tin oxide), which is transparent and electrically conductive. Thin spacer dots separate the panels from each other. When something such as a finger (gloved or bare) or a stylus presses on the layers, the two panels make contact and close an electrical circuit so that a controller can detect and calculate where the pressure is being applied. The controller then communicates the position of the pressure point as a coordinate to the application software.

Because the touch sensor relies on pressure on its surface to measure a touch, a user can use any object to make the contact, although sharp objects can damage the layers. This is in contrast to other types of touch sensors, such as capacitive sensors, which require the object making contact with the touch surface, such as a finger, to be conductive.

Resistive touch sensors are generally durable and less expensive than other touch technologies, which contributes to their wide use in many applications. However, resistive touch sensors offer lower visual clarity (transmitting about 75% of the display luminance) than other touch technologies. They also suffer from high reflectivity in high ambient light conditions, which can degrade the perceived contrast ratio of the displayed image.

When a user touches the resistive touch sensor, the top layer of the sensor experiences mechanical bouncing from the vibration of the pressure. This affects the decay time the system needs to reach a stable DC value before it can determine a position measurement. The decay time is also affected by the parasitic capacitance between the top and bottom layers of the touch sensor, which influences the input of the ADC while the electrode drivers are active.

Resistive touch sensors come in three flavors: 4, 5, and 8 wire interfaces. Four wire configurations offer the lowest cost, but they can require frequent recalibration. The four wire sensor arranges two electrode arrays at opposite sides of the substrate to establish a voltage gradient across the ITO coating. When the user presses the sensor surface, the two sets of electrodes can act together, by alternating the voltage signal between them, to produce a measurable voltage gradient across the substrate. The four wire configuration supports the construction of small and simple touch panels, but they are only rated to survive up to five million touches.
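
As a rough illustration of the position calculation a 4-wire controller performs, the sketch below maps the two raw ADC readings linearly onto screen coordinates. The ADC width, screen size, and function name are assumptions; a real design would add the settling, filtering, and calibration steps described earlier.

```c
#include <stdint.h>

#define ADC_MAX        4095u   /* 12-bit converter assumed */
#define SCREEN_WIDTH    320u   /* assumed panel resolution */
#define SCREEN_HEIGHT   240u

/* Linear map: coordinate = raw reading / full scale * screen dimension */
void raw_to_screen(uint16_t raw_x, uint16_t raw_y,
                   uint16_t *px, uint16_t *py)
{
    *px = (uint16_t)(((uint32_t)raw_x * SCREEN_WIDTH)  / ADC_MAX);
    *py = (uint16_t)(((uint32_t)raw_y * SCREEN_HEIGHT) / ADC_MAX);
}
```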

Five wire configurations are more expensive and harder to calibrate, but they improve the sensor’s durability and calibration stability because they use electrodes on all four corners of the bottom layer of the sensor, while the top layer acts as a voltage-measuring probe. The additional electrodes make triangulating the touch position more accurate, which makes this configuration more appropriate for larger, full-size displays. Five wire configurations have a longer life span of 35 million touches or more.

Eight wire configurations derive their design from four wire configurations. The additional four lines (two on each layer) report baseline voltages that enable the controller to correct for drift from ITO coating degradation or from additional electrical resistance the system experiences from harsh environmental conditions. The uses for 8 wire configurations are the same as 4 wire configurations except that 8 wire systems deliver more drift stability over the same period of time. Although the four additional lines stabilize the system against drift, they do not improve the durability or life expectancy of the sensor.
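
A sketch of the drift correction those baseline readings make possible: the coordinate is computed as a ratio against the actual low and high electrode voltages measured on the extra lines, rather than against a fixed ADC full-scale value, so resistance drift in the ITO coating largely cancels out. The names and screen width are illustrative assumptions.

```c
#include <stdint.h>

#define SCREEN_WIDTH 320u   /* assumed panel resolution */

/* v_meas:     ADC reading at the touch point
 * v_low_ref:  ADC reading of the low-side electrode (baseline line)
 * v_high_ref: ADC reading of the high-side electrode (baseline line) */
uint16_t corrected_x(uint16_t v_meas, uint16_t v_low_ref, uint16_t v_high_ref)
{
    if (v_high_ref <= v_low_ref)
        return 0;                        /* invalid baseline readings */
    if (v_meas <= v_low_ref)
        return 0;                        /* clamp below the low rail */
    if (v_meas >= v_high_ref)
        return SCREEN_WIDTH - 1u;        /* clamp above the high rail */

    uint32_t num = (uint32_t)(v_meas - v_low_ref) * (SCREEN_WIDTH - 1u);
    return (uint16_t)(num / (v_high_ref - v_low_ref));
}
```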

If you would like to participate in this project, post here or email me at Embedded Insights.

[Editor's Note: This was originally posted on Low-Power Design]