User Interfaces Channel

The User Interfaces channel focuses on the physical and logical trade-offs of integrating user interfaces into embedded designs, with the goal of making it easier for a user to correctly, consistently, and unambiguously control the behavior of the system.

Adding texture to touch interfaces

Friday, December 17th, 2010 by Robert Cravotta

I recently heard about another approach to providing feedback to touch interfaces (thank you, Eduardo). TeslaTouch is a technology developed at Disney Research that uses principles of electrovibration to simulate textures on a user’s fingertips. I will be meeting with TeslaTouch at CES and going through a technical demonstration, so I hope to be able to share good technical details after that meeting. In the meantime, there are videos at the site that provide a high level description of the technology.

The feedback controller uniformly applies a periodic electrostatic charge across the touch surface. By varying the sign (and possibly magnitude) of the charge, the electrons in the user’s fingertip are drawn towards or away from the surface – effectively creating a change in friction on the touch surface. Current prototypes are able to use signals as low as 8V to generate tactile sensations. No electric charge passes through the user.

By varying over time the electric charge across the electrode layer, this touch sensor and feedback surface can simulate textures on a user’s finger by attracting and repelling the electrons in the user’s finger to and from the touch surface (courtesy TeslaTouch).

The figure shows a cross section of the touch surface, which consists of a layer of glass overlaid with a transparent electrode layer, which is in turn covered by an insulator. Varying the voltage across the electrode layer changes the force (Fe) pulling a finger into the touch surface and thus the friction (Fr) felt when dragging a finger across it. It is not clear how mature this technology currently is, other than that the company is talking about prototype units.

One big feature of this approach to touch feedback is that it does not rely on the mechanical actuators typically used in haptic feedback approaches. The lack of moving parts should contribute to higher reliability when compared to the electromechanical alternatives. However, it is not clear that this technology would work through gloves or translate through a stylus – both of which electromechanical approaches can accommodate.

What are the questions you would like most answered about this technology? I am hopeful that I can dig deep into the technology at my CES meeting and pass on what I find in a follow-up here. Either email me or post the questions you would most like to see answered. The world of touch user interfaces is getting more interesting each day.

Capacitive button sense algorithm

Tuesday, November 23rd, 2010 by Robert Cravotta

There are many ways to use capacitive touch for user interfaces; one of the most visible ways is via a touch screen. An emerging use for capacitive touch in prototype devices is to sense the user’s finger on the backside or side of the device. Replacing mechanical buttons is another “low hanging fruit” for capacitive touch sensors. Depending on how the touch sensor is implemented, the application code may be responsible for working with low level sensing algorithms, or it may be able to take advantage of higher levels of abstraction when the touch use cases are well understood.

The Freescale TSSEVB provides a platform for developers to work with capacitive buttons placed in slider, rotary, and multiplexed configurations. (courtesy Freescale)

Freescale’s Xtrinsic TSS (touch sensing software) library and evaluation board provide an example platform for building touch sensing into a design using low- and mid-level routines. The evaluation board (shown in the figure) provides electrodes in a variety of configurations, including multiplexed buttons, LED backlit buttons, different sized buttons, and buttons grouped together to form slider, rotary, and keypad configurations. The Xtrinsic TSS supports 8- and 32-bit processors (the S08 and Coldfire V1 processor families), and the evaluation board uses an 8-bit MC9S08LG32 processor for the application programming. The board includes a separate MC9S08JM60 communication processor that acts as a bridge between the application and the developer’s workstation. The evaluation board also includes an on-board display.

The TSS library supports up to 64 electrodes. The image of the evaluation board highlights some of the ways to configure electrodes to maximize functionality while using fewer electrodes. For example, the 12 button keypad uses 10 electrodes (numbered around the edge of the keypad) to detect the 12 different possible button positions. Using 10 electrodes allows the system to detect multiple simultaneous button presses. If you could guarantee that only one button would be pressed at a time, you could reduce the number of electrodes to 8 by eliminating the two small corner electrodes numbered 2 and 10 in the image. Further in the background of the image are four buttons with LEDs in the middle as well as a rotary and slider bar.

The charge time of the sensing electrode is extended by the additional capacitance of a finger touching the sensor area.

Each electrode in the touch sensing system acts like a capacitor with a charging time defined by T = RC. An external pull-up resistor limits the current available to charge the electrode, which in turn affects the charging time. Additionally, the presence or absence of a user’s finger near the electrode affects the capacitance of the electrode, which also affects the charging time.

In the figure, C1 is the charging curve and T1 is the time to charge the electrode to VDD when there is no extra capacitance at the electrode (no finger present). C2 is the charging curve and T2 is the time to charge the electrode when there is extra capacitance at the electrode (finger present). The basic sensing algorithm relies on noting the time difference between T1 and T2 to determine if there is a touch or not.
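As a rough sketch of this relationship (a simplified RC model for illustration, not Freescale's actual library code; the pull-up value, timer rate, and trip point below are assumed figures), the timer count grows with the electrode capacitance:

```python
import math

def charge_counts(r_pullup_ohms, capacitance_farads,
                  timer_hz=8e6, v_trip_ratio=0.632):
    """Timer counts for the electrode to charge to the trip voltage.
    RC charging gives t = -R * C * ln(1 - Vtrip/Vdd)."""
    t = -r_pullup_ohms * capacitance_farads * math.log(1.0 - v_trip_ratio)
    return int(t * timer_hz)

def is_touched(counts, baseline_counts, threshold_counts):
    """A finger adds capacitance, so the measured count exceeds the
    no-touch baseline (T2 > T1) by more than a noise threshold."""
    return (counts - baseline_counts) > threshold_counts
```

For example, with a 1 MΩ pull-up and an 8 MHz timer, a 10 pF electrode charges in roughly 80 counts, and a finger adding 5 pF pushes that to roughly 120 counts – a comfortably detectable delta.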

The TSSEVB supports three different ways to control and measure the electrode: GPIO, the KBI or pin interrupts, and timer input capture. In each case, the electrode defaults to an output high state. To start measuring, the system sets the electrode pins output low to discharge the capacitor. By setting the electrode pin to a high impedance state, the capacitor will start charging. The different measurement implementations set and measure the electrode state slightly differently, but the algorithm is functionally the same.

The algorithm to detect a touch consists of 1) starting a hardware timer; 2) starting the electrode charging; 3) waiting for the electrode to charge (or a timeout to occur); and 4) returning the value of the timer. One difference between the different modes is whether the processor is looping (GPIO and timer input capture) or in a wait state (KBI or pin interrupt) which can affect whether you can perform any other tasks during the sensing.
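That four-step loop can be sketched as follows (the callback names are hypothetical placeholders for real GPIO and timer register accesses; a production implementation would manipulate the hardware directly):

```python
def measure_electrode(read_pin, drive_pin_low, release_pin, read_timer, timeout):
    """Return the timer counts needed for the released electrode to read high.
    The callbacks stand in for hardware access in this sketch."""
    drive_pin_low()            # discharge the electrode capacitance
    start = read_timer()       # 1) note the hardware timer
    release_pin()              # 2) high impedance: the pull-up starts charging
    while not read_pin():      # 3) wait for the electrode to charge...
        if read_timer() - start >= timeout:
            break              # ...guarding against a stuck or open electrode
    return read_timer() - start  # 4) return the elapsed counts
```

This mirrors the polling (GPIO and timer input capture) variant; the KBI/pin-interrupt variant would replace the busy-wait with a wait state.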

There are three parameters that affect the performance of the TSS library: the timer frequency, the pull-up resistor value, and the system power voltage. The timer frequency affects the minimum measurable capacitance. The system power voltage and pull-up resistor affect the voltage trip point and how quickly the electrode charges. The library uses at least one hardware timer, so the system clock frequency affects the ability of the system to detect a touch because it determines the minimum capacitance value detected per timer count.

The higher the clock frequency, the smaller the amount of capacitance the system can detect. If the clock rate is too fast for the charging time, the timer can overflow. If the clock rate is too slow, the system will be more susceptible to noise and have a harder time reliably detecting a touch. When I was first working with the TSSEVB, we chose less than optimal values and the touch sensing did not work very well. After we found the mismatch in the scaling value we had chosen, the touch sensing performance drastically improved.
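To put numbers on that trade-off (a back-of-the-envelope model using the same simple RC charging assumption, not vendor-specified figures), the capacitance resolved per timer count falls as the clock rate rises:

```python
import math

def capacitance_per_count(r_pullup_ohms, timer_hz, v_trip_ratio=0.632):
    """Capacitance change that shifts the charge time by one timer count.
    From t = -R * C * ln(1 - ratio), one count of time corresponds to
    dC = 1 / (R * timer_hz * ln_factor)."""
    ln_factor = -math.log(1.0 - v_trip_ratio)
    return 1.0 / (r_pullup_ohms * timer_hz * ln_factor)
```

With an assumed 1 MΩ pull-up, an 8 MHz timer resolves roughly 0.125 pF per count, while a 1 MHz timer resolves only about 1 pF per count – an eightfold difference in sensitivity.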

The library supports what Freescale calls Turbo Sensing, an alternative technique that measures charge time by counting bus ticks instead of using a timer. This increases system integration flexibility, makes measurement faster with less noise, and supports interrupt-based conversions. We did not have time to try out the turbo sensing method.

The decoder functions, such as those for the keypad, slider, or rotary configurations, provide a higher level of abstraction to the application code. For example, the keypad configuration relies on each button mapping to two electrodes charging at the same time; in the figure, the button numbered 5 requires electrodes 5 and 8 to charge together because each of those electrodes covers half of the 5 button. The rotary decoder handles more information than the key press decoder: it not only detects when electrode pads have been pressed, but also reports from which of two possible directions the pad was touched and how many pads experienced some displacement. This allows the application code to control the direction and speed of moving through a list. The slider decoder is similar to the rotary decoder except that the ends of the slider do not touch each other.
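The keypad decoding described above can be sketched as a lookup from electrode pairs to keys (the pairings below are illustrative guesses, not Freescale's actual electrode layout):

```python
# Hypothetical electrode-pair-to-key table: each key is covered by two
# electrodes, so a key registers only when both of its electrodes see a touch.
KEY_FROM_ELECTRODES = {
    frozenset({5, 8}): "5",
    frozenset({4, 8}): "4",
    frozenset({6, 8}): "6",
}

def decode_key(touched_electrodes):
    """Return the key whose electrode pair exactly matches the touched set,
    or None when the set is empty, partial, or does not match any key."""
    return KEY_FROM_ELECTRODES.get(frozenset(touched_electrodes))
```

Because the lookup requires an exact pair match, a touch that only reaches one of a key's two electrodes is rejected rather than misreported.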

The size and shape of each electrode pad, as well as the parameters mentioned before, affect the charging time, so the delta between the T1 and T2 times will not necessarily be the same for each button. The charging time for each electrode pad might also change as environmental conditions change. However, because detecting a touch is based on a relative difference in the charging time of each electrode, the system provides some resilience to environmental changes.
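One common way to get that environmental resilience (a generic technique, not necessarily what the TSS library does internally) is to track each electrode's no-touch baseline with a slow exponential average and compare fresh measurements against it:

```python
def update_baseline(baseline, counts, touch_threshold, alpha=0.01):
    """Slowly track environmental drift in the no-touch charge time.
    The baseline only follows measurements that look touch-free; counts far
    above the baseline are treated as touches and excluded from the average."""
    if counts - baseline < touch_threshold:
        baseline += alpha * (counts - baseline)
    return baseline
```

Run once per scan, this lets slow temperature or humidity drift be absorbed into the baseline while a sudden jump in charge time still registers as a touch.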

Replacing Mechanical Buttons with Capacitive Touch

Friday, October 29th, 2010 by Robert Cravotta

Capacitive touch sensing differs from resistive touch sensing in that it relies on the conductive properties of the human finger rather than pressure on the touch surface. One major difference is that a capacitive touch sensor will not work with a stylus made of non-conductive material, such as plastic, nor will it work if the user is wearing non-conductive gloves, unless the sensor is tuned for high sensitivity. In contrast, both plastic styluses and gloved fingers can work fine with resistive touch sensors.

Capacitive touch solutions are used in touchscreen applications, such as smartphones, as well as for replacing mechanical buttons in end equipment. The techniques for sensing a touch are similar, but the materials each design uses may differ. Capacitive touch surfaces rely on a layer of charge-storing material, such as ITO (indium tin oxide), copper, or printed ink, coated on or sandwiched between insulators, such as glass. Copper layered on a PCB works well for replacing mechanical buttons. ITO is a transparent conductor that allows a capacitive sensor to be up to 90% transparent in a single-layer implementation, which makes it ideal for touchscreen applications where the user needs to see through the sensing material.

In general, oscillator circuits apply a consistent voltage across the capacitive layer. When a conductive material or object, such as a finger, gets close enough to or touches the sensor surface, it draws current and causes the frequency of each of the oscillator circuits to fluctuate. The touch sensing controller can correlate the differences at each oscillator to detect and infer the point or points of contact.
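As a simplified model of that effect (assuming a relaxation oscillator whose period scales with RC; the resistor value and threshold constant below are assumptions, and real controllers vary), the finger's added capacitance lowers the oscillation frequency:

```python
def oscillator_freq_hz(c_sensor_farads, r_ohms=100e3, k=1.386):
    """Relaxation-oscillator frequency f = 1 / (k * R * C).
    k = 2 * ln(2) ~= 1.386 for charging between Vdd/3 and 2*Vdd/3."""
    return 1.0 / (k * r_ohms * c_sensor_farads)

def delta_freq_on_touch(c_base_farads, c_finger_farads, r_ohms=100e3):
    """Frequency drop caused by the finger's added capacitance."""
    return (oscillator_freq_hz(c_base_farads, r_ohms)
            - oscillator_freq_hz(c_base_farads + c_finger_farads, r_ohms))
```

The controller does not need the absolute frequency; it only needs to notice each oscillator's shift relative to its untouched value, then correlate the shifts across oscillators to locate the contact.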

Capacitive touch sensors can employ different approaches to detect and determine the location of a user’s finger on the touch surface. The trade-offs for each approach provide the differentiation that drives the competing capacitive touch offerings available today. Mechanical button replacement generally does not need to determine the exact position of the user’s finger, so those designs can use a surface capacitance implementation.

A cross sectional view of a surface capacitive touch sensor to replace a mechanical button. (courtesy Cypress)

Surface capacitance implementations rely on coating only one side of the insulator with a conductive layer. Applying a small voltage to the layer produces a uniform electrostatic field that forms a dynamic capacitor when the user’s finger touches the uncoated surface. In the figure (courtesy Cypress), a simple parallel plate capacitor with two conductors is separated by a dielectric layer. Most of the energy is concentrated between the plates, but some of it spills over into the area outside the plates. The electric field lines associated with this effect are called fringing fields, and placing a finger near these fields adds conductive surface area that the system can measure. Surface capacitance implementations are subject to parasitic capacitive coupling and need calibration during manufacturing.

The cross section figure is for a single button, but button replacement designs with multiple buttons placed near each other do not require a one-to-one mapping of sensing pads to buttons. For example, the 4×4 set of buttons could be implemented with as few as 9 pads by overlapping each touch pad, in a diamond shape, across up to four buttons. The touch controller can then correlate a touch across multiple pads to a specific button. Touching one of the four corner buttons (1, 4, 13, and 16) requires only one pad to register a touch. Detecting a touch on any of the other buttons requires the controller to detect a simultaneous touch on two pads. To support multiple simultaneous button presses, the pad configuration would need an additional pad at each corner so that the corner buttons could be uniquely identified.
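The single-touch decoding, and the ambiguity that motivates the extra corner pads, can be sketched like this (the pad indices and the partial table are hypothetical, chosen only to illustrate the scheme):

```python
# Corner buttons register on one pad; all other buttons need two pads at once.
BUTTON_FROM_PADS = {
    frozenset({0}): 1,        # corner button: a single pad suffices
    frozenset({0, 1}): 2,     # edge button: pads 0 and 1 together
    frozenset({1, 4}): 6,     # interior button: pads 1 and 4 together
}

def resolve_button(active_pads):
    """Single-touch decode: the active pad set must exactly match one entry.
    Two simultaneous presses merge into a larger pad set that matches nothing
    (or the wrong button), which is why multi-press support needs more pads."""
    return BUTTON_FROM_PADS.get(frozenset(active_pads))
```

Pressing buttons 1 and 2 together, for example, would activate pads {0, 1}, which this decoder would misread as the single button 2 – exactly the ambiguity the extra corner pads are meant to remove.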

The next post will discuss touch location detection for touchscreen implementations.

Haptic User Interfaces

Tuesday, October 12th, 2010 by Robert Cravotta

Applications that support touch displays overwhelmingly rely on visual feedback to let the user know what touch event occurred. Some applications support delivering an audio signal to the user, such as a click or beep, to acknowledge that a button or virtual key was pressed. However, in many touch interfaces, there is no physical feedback, such as a small vibration, to let the user know that the system detected a touch of the display.

Contrast this with the design of mechanical keyboards. It is an explicit design decision whether the keys are soft or firm to the touch. Likewise, the “noisiness” of the keys and whether there is an audible and physical click at the end of a key press are the result of explicit choices made by the keyboard designer.

As end devices undergo a few design generations of supporting touch interfaces, I expect that many of them will incorporate haptic technology, such as from Immersion, so as to deliver the sensation of the click at the end of a key press. However, I am currently not aware of how a digital touch interface can dynamically simulate different firmness or softness of the touch display, but something like the Impress squishy display may not be too far away.

Some other interesting possibilities for touch based information and feedback are presented in Fabian Hemmert’s video about shape shifting mobile devices. In the video he demonstrates how designers might implement three different types of shape shifting in a mobile phone form factor.

The first concept is a weight-shifting device that can shift its center of mass. Not only could the device provide a tactile feedback of where the user is touching the display, but it can be used to “point” the user in a direction by making it heavier in the direction it wishes to point. This has the potential to allow a device to guide the user through the city without requiring the user to look at the device.

The second concept is a shape-shifting device that can transform from a flat form to one that is raised on any combination of its four corners. This allows the device to extend an edge or taper a corner toward or away from the user to indicate that there is more information in the indicated direction (such as when looking at a map). A shape-shifting capability could also allow the device to be placed on a flat surface, say a nightstand, and take on a context-specific function – say an alarm clock.

The third concept is a “breathing” device where the designer uses the shifting capabilities of the device to indicate a health state of the device. However, to make the breathing concept more than just an energy drain, it will need to be able to decide whether there is someone around to observe it, so that it can save its energy when it is alone.

The mass- and shape-shifting concepts hold a lot of promise, especially when they are combined together in the same device. It might be sooner than we think when these types of features are available to include in touch interfaces.

Alternative touch interfaces – sensor fusion

Tuesday, September 21st, 2010 by Robert Cravotta

While trying to uncover and highlight different technologies that embedded developers can tap into to create innovative touch interfaces, Andrew commented on e-field technology and pointed to Freescale’s sensors. While exploring proximity sensing for touch applications, I realized that accelerometers represent yet another alternative sensing technology (versus capacitive touch) that can impact how a user can interact with a device. The most obvious examples of this are devices, such as a growing number of smart phones and tablets, which are able to detect their orientation to the ground and rotate the information they are displaying. This type of sensitivity enables interface developers to consider broader gestures that involve manipulating the end device, such as shaking it, to indicate some type of change in context.

Wacom’s Bamboo Touch graphic tablet for consumers presents another example of e-field proximity sensing combined with capacitive touch sensing. In this case, the user can use the sensing surface with an e-field optimized stylus or they can use their finger directly on the surface. The tablet controller detects which type of sensing it should use without requiring the user to explicitly switch between the two sensing technologies. This type of combined technology is finding its way into tablet computers.

I predict the market will see more examples of end devices that seamlessly combine different types of sensing technologies in the same interface space. The different sensing modules working together will enable the device to infer more about the user’s intention, which will in turn, enable the device to better learn and adapt to each user’s interface preferences. To accomplish this, devices will need even more “invisible” processing and database capabilities that allow these devices to be smarter than previous devices.

While not quite ready for production designs, the recent machine touch demonstrations from the Berkeley and Stanford research teams suggest that future devices might even be able to infer user intent by how the user is holding the device – including how firmly or lightly they are gripping or pressing on it. These demonstrations suggest that we will be able to make machines that are able to discern differences in pressure comparable to humans. What is not clear is whether each of these technologies will be able to detect surface textures.

By combining, or fusing, different sensing technologies together, along with in-device databases, devices may be able to start recognizing real world objects – similar to the Microsoft Surface. It is coming within our grasp for devices to recognize each other without requiring explicit electronic data streams flowing between them.

Do you know of other sensing technologies that developers can combine together to enable smarter devices that learn how their user communicates rather than requiring the user to learn how to communicate with the device?

Giving machines a fine sense of touch

Tuesday, September 14th, 2010 by Robert Cravotta

Two articles were published online on the same day (September 12, 2010) in Nature Materials that describe the efforts of two research teams at UC Berkeley and Stanford University that have each developed and demonstrated a different approach for building artificial skin that can sense very light touches. Both systems have reached a pressure sensitivity that is comparable to what a human relies on to perform everyday tasks. The sensitivity of these systems can detect pressure changes that are less than a kilopascal; this is an improvement over earlier approaches that could only detect pressures of tens of kilopascals.

The Berkeley approach, dubbed “e-skin”, uses germanium/silicon nanowire “hairs” that are grown on a cylindrical drum and then rolled onto a sticky polyimide film substrate; the substrate can also be made from plastics, paper, or glass. The nanowires are deposited onto the substrate in an orderly structure. The demonstrated e-skin is a 7x7cm surface consisting of an 18×19 pixel matrix; each pixel contains a transistor made of hundreds of the nanowires. A pressure sensitive rubber integrated on top of the matrix supports the sensing. The flexible matrix operates from less than a 5V power supply, and it has continued operating after being subjected to more than 2,000 bending cycles.

In contrast, the Stanford approach sandwiches a thin film of rubber, molded into a grid of tiny pyramids packed up to 25 million per cm², between two parallel electrodes. The pyramid grid makes the rubber behave like an ideal spring, with compression and rebound fast enough to distinguish between multiple touches that follow each other in quick succession. Pressure on the sensor compresses the rubber film and changes the amount of electrical charge it can store, which lets the controller detect the change in the sensor. According to the team, the sensor can detect the pressure exerted by a 20mg bluebottle fly carcass placed on it. The Stanford team has been able to manufacture a sheet as large as 7x7cm, similar to the Berkeley e-skin.

I am excited by these two recent developments in machine sensing. The uses for this type of touch sensing are endless, spanning industrial, medical, and commercial applications. A question comes to mind: these are both sheets (arrays) of multiple sensing points, so how similar will the detection and recognition algorithms be to the touch interface and vision algorithms being developed today? Or will interpreting this type of touch sensing require a completely different approach and thought process?

Alternative Touch Interfaces

Tuesday, September 7th, 2010 by Robert Cravotta

Exploring the different development kits for touch interfaces provides a good example of what makes something an embedded system. To be clear, the human-machine interface between the end device and the user is not an embedded system; however, the underlying hardware and software can be. Let me explain. The user does not care how a device implements the touch interface – what matters to the user is what functions, such as multi-touch, the device supports, and what types of contexts and touch commands the device and applications can recognize and respond to.

This programmable rocker switch includes a display that allows the system to dynamically change the context of the switch.

So, while using resistive and capacitive touch sensors are among the most common ways to implement a touch interface in consumer devices, they are not the only way. For example, NKK Switches offers programmable switches that integrate a push button or rocker switch with an LCD or OLED display. In addition to displaying icons and still images, some of these buttons can display a video stream. This allows the system to dynamically change the context of the button and communicate the context state to the user in an intuitive fashion. I am in the process of setting up some time with these programmable switches for a future write-up.

Another example of alternative sensing for touch interfaces is infrared sensors. The infrared proximity sensing offered by Silicon Labs and the infrared multi-touch sensing offered by Microsoft demonstrate the wide range of capabilities that infrared sensors can support at different price points.

Silicon Labs offers several kits that include infrared support. The FRONTPANEL2EK is a demo board that shows how to use capacitive and infrared proximity sensing in an application. The IRSLIDEREK is a demo board that shows how to use multiple infrared sensors together to detect not only the user’s presence, but also location and specific motion of the user’s hand. These kits are fairly simple and straightforward demonstrations. The Si1120EK is an evaluation platform that allows a developer to explore infrared sensing in more depth including advanced 3-axis touchless object proximity and motion sensing.

Working with these kits has given me a greater appreciation of the possible uses for proximity sensing. For example, an end device could place itself into a deep sleep or low power mode to minimize energy consumption. However, placing a system in the lowest power modes incurs a startup delay when reactivating the system. A smart proximity sensing system could give the system a few seconds warning that a user might want to turn it on, so it could speculatively activate the device and respond to the user more quickly. In this scenario, the proximity sensor would probably include some method to distinguish likely power-up requests from an environment where objects or people pass near the device without any intent of powering it up.

Finally, Microsoft’s Surface product demonstrates the other end of touch sensing, using an infrared camera system. In essence, the Surface is a true embedded vision system – an implementation detail that the end user does not need to know anything about. In the case of the Surface table, several infrared cameras view a diffusion surface. The diffusion surface has specific optical properties that allow the system software to identify when any object touches the surface of the display. This high end approach provides a mechanism for the end user to interact with the system using real world objects found in the environment rather than just special implements, such as a stylus with specific electrical characteristics.

The point here is to recognize that there are many ways to implement touch interfaces – including sonic mechanisms. They may not all support touch interfaces in the same way, nor support a common minimum command set, but taken together, they may enable smarter devices that can better predict the end user’s true expectations and prepare accordingly. What other examples of alternative touch sensing technologies are you aware of?

Man vs. Machine: What’s behind it?

Friday, September 3rd, 2010 by Binay Bajaj

The interaction between ‘man and the machine’ today is very different from what it was 20 – even 10 – years ago. Major changes include how a person interfaces with everyday consumer devices such as smart phones, notebooks, tablets, and navigation devices. In the past, a user might push several mechanical buttons to play a handheld game or control household appliances; now, the same user can apply various touch gestures on a screen to play a handheld game, look up directions on a map, read a book on a tablet, or even control a stereo from a touchscreen.

For many years there have been devices with enhanced functionality, but most of those features went unused because they were too complicated. Easy and intuitive interfaces open up a device for the user. Users can quickly discover the power of the device, finding it interactive, spending hours playing with it, and enhancing it by finding new applications.

So what is behind these devices that include intuitive interfaces? What is required to enable these devices to function with such rich user interfaces? The secret is good touch hardware and firmware, as well as the right algorithms and software drivers. These pieces are all part of a good touch solution that gives design engineers the tools to add touch functionality to various devices.

Many vendors are not ‘end device’ manufacturers; rather, they make the controller and touch solution for OEMs (original equipment manufacturer). These vendors provide a complete touch system so OEMs can implement a feature-rich, intuitive interface in the device for the users. These touch solutions include the touch controller, firmware, touch sensor pattern design, sensor test specification, manufacturing test specification, and software drivers.

However, the OEM needs to evaluate the touch solution at the time of engagement. This is where a sensor evaluation kit showcases the capability of the solution and how well the touch solution matches the customer's requirements. A software development kit can provide performance characterization, as well as a development platform environment that supports various operating systems. A good development kit is easy to understand, easy to install, and quick to get running.

The software development kit for touch functionality is a key part of the package because the design engineer has to install the package himself, so ease of use is the key. The vendor provides the hardware, and using it may require some collaboration, but the software development kit is typically the challenge for designers. To be easy to use, a touch development kit needs to provide instructions covering how to set up the board, how to demonstrate the board’s capabilities, and how to configure the software settings.

Vendors understand that the easier a development kit is to use, the more robust a design engineer can make a product, and the faster it can be brought to market. A good software development kit makes it apparent how the designer can control features such as software algorithms, gestures, lower power, faster response time, and higher accuracy to offer more touch functionality to consumers.

Though the interaction between ‘man and the machine’ is changing, each year brings unlimited possibilities to the marketplace. The human interface to devices will continue to become easier and support more intuitive interactions between the man and his machine.

What capability is the most important for touch sensing?

Wednesday, August 25th, 2010 by Robert Cravotta

I have been exploring user interfaces, most notably touch sensing interfaces, for a while. As part of this exploration effort, I am engaging in hands-on projects with touch development kits from each of over a dozen companies that offer some sort of touch solution. These kits range from simple button replacement to complex touch screen interfaces. I have noticed, as I work with each kit, that each company chose a different set of priorities to optimize and trade against in their touch solution. Different development kits offer various levels of maturity in how they simplify and abstract the complexity of making a touch interface act as more than just a glorified switch.

It appears there may be a different set of “must have” capabilities in a touch development kit depending on who is using it and what type of application they are adding it to. For button replacement kits, the relevant characteristics seem to center on cost and robustness, with ease of development becoming more important. A common theme among button replacement kits is support for aggregate buttons, such as a slider bar or wheel, that act as a single control even though they consist of multiple buttons.

From my perspective, an important capability of a button replacement solution is that it simplifies the initialization and setup of the buttons while still supporting a wide range of operating configurations. A development kit that offers prebuilt constructs that aggregate the buttons into sliders and wheels is a plus because those constructs greatly flatten the learning curve for these compound button structures. Another valuable capability is driver software that allows the touch system to detect calibration drift and assists with or automates recalibration. This week’s question asks if these are sufficient leading edge capabilities, or have I missed any important capabilities for button replacement systems?
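As an example of what such drift-handling driver software might do, the sketch below tracks slow baseline drift on a single capacitive button while refusing to "learn" an actual touch into the baseline. The threshold and step values are illustrative, not drawn from any vendor's implementation:

```python
def update_baseline(baseline, raw, touch_threshold=50, drift_step=1):
    """One sampling-period update for a capacitive button's baseline.

    baseline: current idle-level estimate (counts)
    raw:      latest raw reading (counts)
    Returns (new_baseline, touched).
    """
    delta = raw - baseline
    if delta > touch_threshold:
        # Finger present: freeze the baseline so the touch itself
        # is not absorbed into the idle-level estimate.
        return baseline, True
    # Idle: creep the baseline one step toward the raw reading, which
    # filters noise while tracking temperature and humidity drift.
    if raw > baseline:
        baseline += drift_step
    elif raw < baseline:
        baseline -= drift_step
    return baseline, False
```

Because the baseline only moves by one step per sample when idle, slow environmental drift is followed automatically while a fast change from a finger still registers as a touch, which is the essence of automated recalibration.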

In contrast, I have noticed that many touch screen solutions focus on multi-touch capabilities. However, I am not convinced that multi-touch is the next great thing for touch screens. Rather, I think higher-level abstraction and robust gesture recognition are the killer capabilities for touch screen solutions. Part of my reasoning is the relative importance of “pinching” to zoom and rotate an object versus flicking and tracing to navigate and issue complex commands to the system. The challenge of correctly recognizing a zooming or rotating command is somewhat constrained, whereas correctly recognizing the intended context of a flick or trace gesture is significantly more difficult because there is a wider set of conditions in which a user may apply a flick or trace gesture.
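To illustrate the contrast, the mechanics of a flick classifier (net displacement, elapsed time, dominant axis) are easy to sketch; the genuinely hard part is deciding whether that motion was intended as a command in the current context. The thresholds below are illustrative:

```python
import math

def classify_flick(points, min_dist=60, max_time=0.3):
    """Classify a stroke as a flick if the touch travelled far enough,
    fast enough, along a dominant axis.

    points: (t, x, y) samples in seconds and pixels
    Returns a direction string, or None if the stroke is not a flick.
    """
    (t0, x0, y0), (t1, x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if t1 - t0 > max_time or math.hypot(dx, dy) < min_dist:
        return None  # too slow or too short: likely a drag or a tap
    # Pick the dominant axis; screen y grows downward.
    if abs(dx) >= abs(dy):
        return "flick-right" if dx > 0 else "flick-left"
    return "flick-down" if dy > 0 else "flick-up"
```

Everything context-dependent, such as whether a fast rightward motion over a list means "scroll", "delete", or nothing at all, sits outside this function, which is exactly where the difficulty the paragraph describes lives.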

As a result, I feel that an important and differentiating capability of a touch screen solution is that it offers prebuilt drivers and filters that are able to consistently identify when a touch gesture is real and intended. It should also be able to accurately differentiate between subtle nuances in a gesture so as to enable the user to communicate a richer set of intended commands to the system. Again, this week’s question seeks to determine if this is the appropriate set of leading edge capabilities, or have I missed any important capabilities for touch screen systems?

Your answers will help direct my hands-on project, and they will help with the database and interface design for the upcoming interactive embedded processing directory.

Clarifying third generation touch sensing

Tuesday, August 24th, 2010 by Robert Cravotta

Eduardo’s response to first and second generation touch sensing provides a nice segue to clarifying third generation touch sensing capabilities. Eduardo said:

One other [classification] that I would say relates to “generations” is the single touch vs multitouch history; which I guess also relates [to] the evolution of algorithms and hardware to scan more electrodes and to interpolate the values between those electrodes. First generation: single touch and single touch matrixes; second generation: two touch, low resolution sliders; third generation: high resolution x-y sensing, multi touch detection.

While there is a “generational” shift between single- and multi-touch sensing, I am not sure the uses for multi-touch commands have reached a tipping point of adoption. My non-scientific survey of what types of multi-touch commands people know how to use yields only zoom and rotate commands. The MSDN library entry for touch provides an indication of the maturity of multi-touch interfaces; it identifies that Windows 7 supports new multi-touch gestures such as pan, zoom, rotate, two-finger tap, as well as press and tap. However, these multi-touch commands are more like manipulations where the “input corresponds directly to how the object would react naturally to the same action in the real world.”

I am excited about the possibilities of multi-touch interfaces, but I think standardizing gesture recognition for movements such as flicks, traces, and drags, which go beyond location and pen up/down data, is the relevant characteristic of third generation touch sensing. Nor is gesture recognition limited to touch interfaces. For example, there are initiatives and modules available that enable applications to recognize mouse gestures. The figures (from the referenced mouse gesture page) highlight a challenge of touch interfaces – how to provide feedback and a means for the user to visually see how to perform the touch command. The figure relies on an optional feature to display a “mouse trail” so that the reader can understand the motion of the gesture. The example figure illustrates a gesture command that combines tracing with a right-up-left gesture to signal to a browser application to open all hyperlinks that the trace crossed in separate tabs.

Open links in tabs (end with Right-Up-Left): Making any gesture ending with a straight Right-Up-Left movement opens all crossed links in tabs.
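A Right-Up-Left pattern like the one in the example is commonly recognized by reducing the pointer trail to a string of cardinal moves and then matching that string. A minimal sketch, with a hypothetical jitter threshold:

```python
def encode_gesture(points, min_step=20):
    """Reduce a pointer trail to a string of cardinal moves, so a
    Right-Up-Left stroke becomes "RUL".

    points:   (x, y) samples in pixels
    min_step: segments shorter than this are treated as jitter
    """
    moves = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if max(abs(dx), abs(dy)) < min_step:
            continue  # movement too small to count as a stroke segment
        if abs(dx) >= abs(dy):
            step = "R" if dx > 0 else "L"
        else:
            step = "D" if dy > 0 else "U"  # screen y grows downward
        if not moves or moves[-1] != step:
            moves.append(step)  # collapse consecutive duplicates
    return "".join(moves)

# A trail that moves right, then up, then left:
encode_gesture([(0, 100), (80, 100), (80, 30), (10, 30)])
```

With the trail collapsed to a short string, a gesture command table reduces to checking whether the encoded string ends with, say, "RUL".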

A common and useful mouse-based gesture that is not yet standard across touch sensing solutions is recognizing a hovering finger or pointer. Several capacitive touch solutions can technically sense a hovering finger, but the software to accomplish this type of detection is currently left to the device and application developer. An important component of detecting a hovering finger is detecting not just where the fingertip is but also what additional part of the display the rest of the finger or pointer is covering so that the application software can place the pop-up or context windows away from the user’s finger.
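One illustrative way an application could use that information: given the sensed fingertip location and an estimated direction from the tip back toward the rest of the hand, place the pop-up on the opposite side. The helper below is hypothetical; a real driver would have to estimate the hand direction from the shape of the capacitive signal:

```python
import math

def place_popup(tip, hand_dir=(0, 1), offset=80):
    """Choose a pop-up anchor point the hovering finger will not cover.

    tip:      (x, y) fingertip in screen coordinates (y grows downward)
    hand_dir: vector from the tip toward the rest of the finger/hand,
              e.g. (0, 1) when the hand approaches from below
    offset:   distance in pixels to move away from the occluded side
    """
    x, y = tip
    dx, dy = hand_dir
    norm = math.hypot(dx, dy) or 1.0  # guard against a zero vector
    # Step away from the hand, i.e. opposite the occluding finger.
    return (round(x - offset * dx / norm),
            round(y - offset * dy / norm))

# Hand approaching from below: anchor the pop-up above the fingertip.
place_popup((200, 300))
```

The same idea generalizes to left- versus right-handed use: flipping the hand-direction estimate flips which side of the fingertip the context window appears on.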

While some developers will invest the time and resources to add these types of capabilities to their designs today, gesture recognition will not reach a tipping-point until the software to recognize gestures, identify and filter out bad gestures, and abstract the gesture motion into a set of commands finds its way into IP libraries or operating system drivers.