Clarifying third generation touch sensing

Tuesday, August 24th, 2010 by Robert Cravotta

Eduardo’s response to first- and second-generation touch sensing provides a nice segue to clarifying third-generation touch sensing capabilities. Eduardo said:

One other [classification] that I would say relates to “generations” is the single touch vs multitouch history; which I guess also relates [to] the evolution of algorithms and hardware to scan more electrodes and to interpolate the values between those electrodes. First generation: single touch and single touch matrixes; second generation: two touch, low resolution sliders; third generation: high resolution x-y sensing, multi touch detection.

While there is a “generational” shift between single- and multi-touch sensing, I am not sure the uses for multi-touch commands have reached a tipping point of adoption. My non-scientific survey of which multi-touch commands people know how to use yields only the zoom and rotate commands. The MSDN library entry for touch provides an indication of the maturity of multi-touch interfaces; it notes that Windows 7 supports new multi-touch gestures such as pan, zoom, rotate, two-finger tap, and press and tap. However, these multi-touch commands are more like manipulations, where the “input corresponds directly to how the object would react naturally to the same action in the real world.”
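
For context, a Win32 application receives those Windows 7 gestures by handling the WM_GESTURE message and decoding it with GetGestureInfo. The fragment below is a minimal sketch of that dispatch path; the per-gesture handling is reduced to comments, and details such as decoding the zoom distance or rotation angle from the arguments are omitted.

```c
/* Minimal sketch of Windows 7 multi-touch gesture handling.
 * Assumes a Win32 build targeting Windows 7 (_WIN32_WINNT >= 0x0601). */
#define _WIN32_WINNT 0x0601
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_GESTURE) {
        GESTUREINFO gi = {0};
        gi.cbSize = sizeof(gi);
        if (GetGestureInfo((HGESTUREINFO)lParam, &gi)) {
            switch (gi.dwID) {
            case GID_ZOOM:          /* gi.ullArguments: distance between touch points */
            case GID_PAN:           /* gi.ptsLocation: current pan position           */
            case GID_ROTATE:        /* gi.ullArguments: rotation angle                */
            case GID_TWOFINGERTAP:
            case GID_PRESSANDTAP:
                /* ...update the application model here... */
                CloseGestureInfoHandle((HGESTUREINFO)lParam);
                return 0;
            default:
                break;  /* GID_BEGIN/GID_END fall through; DefWindowProc closes the handle */
            }
        }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```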

I am excited about the possibilities of multi-touch interfaces, but I think the defining characteristic of third-generation touch sensing is standardized gesture recognition for movements such as flicks, traces, and drags, which go beyond location and pen up/down data. Nor is gesture recognition limited to touch interfaces; for example, there are initiatives and modules available that enable applications to recognize mouse gestures. The figure (from the referenced mouse gesture page) highlights a challenge of touch interfaces: how to provide feedback and a means for the user to see how to perform the touch command. The figure relies on an optional feature that displays a “mouse trail” so that the reader can follow the motion of the gesture. It illustrates a gesture command that combines a trace with a right-up-left gesture to signal a browser application to open, in separate tabs, all of the hyperlinks that the trace crossed.

Open links in tabs (end with Right-Up-Left): Making any gesture ending with a straight Right-Up-Left movement opens all crossed links in tabs.
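
To make the recognition task concrete, here is a sketch of the approach many mouse-gesture recognizers take (I am not claiming this is how the referenced module works): quantize pointer movement into a string of stroke directions, then check whether the trace ends with the stroke sequence bound to a command. The threshold and the command binding are illustrative assumptions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MIN_STROKE 16   /* pixels; movements below this are treated as jitter */

/* Map a pointer delta to its dominant compass direction.
 * Screen coordinates grow downward, so negative dy means "up". */
static char dominant_direction(int dx, int dy)
{
    if (abs(dx) >= abs(dy))
        return dx >= 0 ? 'R' : 'L';
    return dy >= 0 ? 'D' : 'U';
}

/* Collapse a noisy trail of deltas into a compact stroke string such as
 * "RUL" by recording a direction only when it changes. */
static void add_stroke(char *strokes, size_t cap, int dx, int dy)
{
    if (abs(dx) < MIN_STROKE && abs(dy) < MIN_STROKE)
        return;
    char d = dominant_direction(dx, dy);
    size_t len = strlen(strokes);
    if ((len == 0 || strokes[len - 1] != d) && len + 1 < cap) {
        strokes[len] = d;
        strokes[len + 1] = '\0';
    }
}

int main(void)
{
    /* Simulated pointer deltas: right, right, up, left, left. */
    int deltas[][2] = { {40, 3}, {35, -2}, {2, -50}, {-45, 1}, {-38, -4} };
    char strokes[16] = "";
    size_t i, len;

    for (i = 0; i < sizeof(deltas) / sizeof(deltas[0]); i++)
        add_stroke(strokes, sizeof(strokes), deltas[i][0], deltas[i][1]);

    /* "Ends with Right-Up-Left" maps to the stroke suffix "RUL". */
    len = strlen(strokes);
    if (len >= 3 && strcmp(strokes + len - 3, "RUL") == 0)
        printf("trace %s -> open crossed links in tabs\n", strokes);
    return 0;
}
```

Quantizing to directions makes the match insensitive to the speed and size of the motion, which is one reason the same idea transfers naturally from mouse trails to touch traces.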

A common and useful mouse-based behavior that is not yet standard across touch-sensing solutions is recognizing a hovering finger or pointer. Several capacitive touch solutions can technically sense a hovering finger, but the software to accomplish this type of detection is currently left to the device and application developer. An important component of detecting a hovering finger is identifying not just where the fingertip is, but also which additional part of the display the rest of the finger or hand is covering, so that the application software can place pop-up or context windows away from the user’s finger.
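
As a rough illustration of that placement problem, the sketch below assumes a hypothetical controller report containing the fingertip coordinate plus a coarse bounding box of everything the finger and hand shadow on the sensor; the placement rule simply opens the window on the side of the tip away from the covered region. The structures and the heuristic are illustrative assumptions, not any vendor’s API.

```c
#include <stdio.h>

/* Hypothetical controller report: fingertip location plus a coarse
 * bounding box of everything the finger and hand shadow on the sensor. */
typedef struct { int x, y; } Point;
typedef struct { int left, top, right, bottom; } Rect;

/* Place a popup of the given size inside the screen so it avoids the
 * covered region: open it on the side of the fingertip opposite the hand. */
static Point place_popup(Point tip, Rect covered, int w, int h,
                         int screen_w, int screen_h)
{
    Point p;
    /* If the hand shadow extends below/right of the tip (typical for a
     * right-handed user), open the popup above/left of it, and vice versa. */
    p.x = (covered.right > tip.x) ? tip.x - w : tip.x;
    p.y = (covered.bottom > tip.y) ? tip.y - h : tip.y;

    /* Clamp to the screen. */
    if (p.x < 0) p.x = 0;
    if (p.y < 0) p.y = 0;
    if (p.x + w > screen_w) p.x = screen_w - w;
    if (p.y + h > screen_h) p.y = screen_h - h;
    return p;
}

int main(void)
{
    Point tip = { 400, 300 };
    Rect covered = { 390, 290, 520, 600 };    /* hand trails down and right */
    Point p = place_popup(tip, covered, 200, 120, 800, 480);
    printf("popup at (%d, %d)\n", p.x, p.y);  /* expected: above-left of tip */
    return 0;
}
```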

While some developers will invest the time and resources to add these types of capabilities to their designs today, gesture recognition will not reach a tipping point until the software to recognize gestures, identify and filter out bad gestures, and abstract the gesture motion into a set of commands finds its way into IP libraries or operating system drivers.
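
If that migration happens, the application-facing surface might resemble the hypothetical sketch below: the recognizer consumes raw pen/touch points, classifies the trace on pen-up, filters low-confidence matches, and delivers abstract commands through a registered callback. None of these names describe an existing API, and the stub classifier exists only to show the data flow.

```c
#include <stdio.h>

/* Hypothetical gesture-abstraction interface of the kind an IP library or
 * OS driver might expose; every name here is illustrative. Applications
 * register a callback and receive abstract commands, not raw coordinates. */
typedef enum {
    GESTURE_NONE,       /* trace rejected as noise or incomplete */
    GESTURE_ZOOM,
    GESTURE_ROTATE,
    GESTURE_FLICK,
    GESTURE_TRACE_RUL   /* e.g., the open-links-in-tabs command above */
} gesture_cmd_t;

typedef struct {
    gesture_cmd_t cmd;
    int x, y;           /* where the gesture occurred */
    float magnitude;    /* zoom factor, rotation angle, flick speed, ... */
    float confidence;   /* lets the application filter marginal matches */
} gesture_event_t;

typedef void (*gesture_cb_t)(const gesture_event_t *ev, void *user);

static gesture_cb_t g_cb;
static void *g_user;

void gesture_register(gesture_cb_t cb, void *user)
{
    g_cb = cb;
    g_user = user;
}

/* Stub classifier: a real recognizer would buffer the trace and classify
 * it on pen-up; this stub emits a canned event just to show the data flow. */
void gesture_feed_point(int x, int y, int pen_down)
{
    if (!pen_down && g_cb) {
        gesture_event_t ev = { GESTURE_TRACE_RUL, x, y, 0.0f, 0.9f };
        g_cb(&ev, g_user);
    }
}

static void on_gesture(const gesture_event_t *ev, void *user)
{
    (void)user;
    printf("command %d at (%d, %d), confidence %.2f\n",
           ev->cmd, ev->x, ev->y, ev->confidence);
}

int main(void)
{
    gesture_register(on_gesture, NULL);
    gesture_feed_point(100, 100, 1);   /* pen down, point streamed in */
    gesture_feed_point(120,  90, 0);   /* pen up triggers classification */
    return 0;
}
```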
