What capability is the most important for touch sensing?

Wednesday, August 25th, 2010 by Robert Cravotta

I have been exploring user interfaces, most notably touch sensing interfaces, for a while. As part of this exploration effort, I am engaging in hands-on projects with touch development kits from more than a dozen companies that offer some sort of touch solution. These kits range from simple button replacement to complex touch screen interfaces. As I work with each kit, I have noticed that each company chose a different set of priorities to optimize and trade against in its touch solution. Different development kits offer various levels of maturity in how they simplify and abstract the complexity of making a touch interface act as more than just a glorified switch.

It appears there may be a different set of “must have” capabilities in a touch development kit depending on who is using it and what type of application they are adding it to. For button replacement kits, the relevant characteristics seem to center on cost and robustness, with ease of development becoming more important. A common theme among button replacement kits is support for aggregate buttons, such as a slider bar or wheel, that can act as a single control even though they consist of multiple buttons.
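To make the aggregate-button idea concrete, here is a minimal sketch of how several capacitive pads can be combined into a single slider control. The function name, the pad count, and the threshold are illustrative assumptions, not any vendor's API; real kits do this interpolation in firmware or driver libraries.

```python
def slider_position(readings, threshold=10):
    """Estimate a normalized finger position in [0, 1] on a slider
    built from several capacitive pads, or return None if no touch.

    readings: baseline-subtracted counts, one per pad, ordered along
    the slider. Threshold and scaling are illustrative assumptions.
    """
    if max(readings) < threshold:
        return None  # no pad is active enough to count as a touch
    # Weighted centroid of the pad readings interpolates between pad
    # centers, giving finer resolution than one position per pad.
    total = sum(readings)
    centroid = sum(i * r for i, r in enumerate(readings)) / total
    return centroid / (len(readings) - 1)

# A finger roughly centered over a 4-pad slider.
print(slider_position([2, 40, 38, 3]))
```

The centroid calculation is why a slider made of four pads can report far more than four distinct positions, which is exactly the behavior that makes these compound structures feel like a single control.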

From my perspective, an important capability of a button replacement solution is that it simplifies the initialization and setup of the buttons while still supporting a wide range of operating configurations. A development kit that offers prebuilt constructs that aggregate the buttons into sliders and wheels is a plus, because such constructs flatten the learning curve for using these compound button structures. Another valuable capability is driver software that allows the touch system to detect calibration drift and assists with or automates recalibration. This week’s question asks whether these are the right leading-edge capabilities, or whether I have missed any important capabilities for button replacement systems.
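One common way touch firmware handles calibration drift is automatic baseline tracking: while no touch is present, the "no touch" reference level slowly follows the raw reading, so gradual environmental drift (temperature, humidity) is absorbed without masking a real, fast finger press. The sketch below illustrates the idea; the class name, thresholds, and step sizes are my own illustrative assumptions, not any particular vendor's algorithm.

```python
class DriftTracker:
    """Illustrative sketch of automatic baseline tracking for one
    capacitive sensor channel. All parameters are assumptions."""

    def __init__(self, initial, touch_threshold=30, drift_step=1.0):
        self.baseline = float(initial)        # calibrated no-touch level
        self.touch_threshold = touch_threshold  # delta that means "touch"
        self.drift_step = drift_step          # max baseline move per sample

    def update(self, raw):
        """Return the touch delta for this sample (0.0 if no touch).

        Below the touch threshold, the baseline creeps toward the raw
        reading in small steps, absorbing slow drift. At or above the
        threshold, the baseline is frozen so a sustained touch is not
        recalibrated away.
        """
        delta = raw - self.baseline
        if abs(delta) < self.touch_threshold:
            if delta > 0:
                self.baseline += min(delta, self.drift_step)
            elif delta < 0:
                self.baseline -= min(-delta, self.drift_step)
            return 0.0
        return delta

# Slow drift from 100 to 110 counts is tracked silently...
tracker = DriftTracker(100)
for raw in range(100, 111):
    tracker.update(raw)
# ...but a sudden 50-count jump still registers as a touch.
print(tracker.update(tracker.baseline + 50))
```

The key design choice is the asymmetry between slow tracking and fast detection: the drift step caps how quickly the baseline can move, so a finger press (fast) and thermal drift (slow) are distinguishable even though both change the raw reading.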

In contrast, I have noticed that many touch screen solutions focus on multi-touch capabilities. However, I am not convinced that multi-touch is the next great thing for touch screens. Rather, I think higher-level abstraction and robust gesture recognition are the killer capabilities for touch screen solutions. Part of my reasoning is the relative importance of “pinching” to zoom and rotate an object versus flicking and tracing to navigate and issue complex commands to the system. The challenge of correctly recognizing a zooming or rotating command is somewhat constrained, whereas correctly recognizing the intended context of a flick or trace gesture is significantly more difficult because there is a wider set of conditions under which a user may apply a flick or trace gesture.

As a result, I feel that an important and differentiating capability of a touch screen solution is that it offers prebuilt drivers and filters that can consistently identify when a touch gesture is real and intended. It should also be able to accurately differentiate between subtle nuances in a gesture so as to enable the user to communicate a richer set of intended commands to the system. Again, this week’s question seeks to determine whether this is the appropriate set of leading-edge capabilities, or whether I have missed any important capabilities for touch screen systems.
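To illustrate why flick and trace gestures are harder to disambiguate than a pinch, here is a rough sketch of classifying a completed stroke from its sample points. The feature set (speed and straightness) and every threshold are illustrative assumptions; production gesture recognizers use much richer models, filtering, and per-application context than this.

```python
import math

def classify_stroke(points):
    """Classify one finished stroke as 'tap', 'flick', or 'trace'.

    points: list of (x, y, t) samples in pixels and seconds.
    Thresholds below are illustrative assumptions only.
    """
    (x0, y0, t0), (x1, y1, t1) = points[0], points[-1]
    net = math.hypot(x1 - x0, y1 - y0)        # straight-line distance
    duration = t1 - t0
    # Total path length along the stroke, segment by segment.
    path = sum(math.hypot(b[0] - a[0], b[1] - a[1])
               for a, b in zip(points, points[1:]))
    if net < 5:
        return 'tap'                          # barely moved at all
    speed = net / duration if duration > 0 else float('inf')
    straightness = net / path if path > 0 else 1.0
    # A flick is fast and roughly straight; any slower or more
    # convoluted deliberate motion is treated as a trace.
    if speed > 200 and straightness > 0.8:    # px/s, dimensionless
        return 'flick'
    return 'trace'
```

Even this toy version shows the ambiguity problem: a slow, straight stroke and a fast, curved one both fall through to 'trace', and in a real system the correct interpretation may further depend on where on the screen the stroke began and what the application was displaying at the time.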

Your answers will help direct my hands-on project, and they will help with the database and interface design for the upcoming interactive embedded processing directory.
