The forms for interfacing between humans and machines are constantly evolving, and the rate at which new forms appear seems to be increasing. Long gone are the days of using punch cards and card readers to tell a computer what to do. Most contemporary users are unaware of what a command-line prompt or an optional argument is. Contemporary touch, gesture, stylus, and spoken-language interfaces threaten to make the traditional hand-shaped mouse a quaint and obsolete idea.
The road from idea, to experimental implementations, to production forms for human interfaces usually spans many attempts over years. For example, the first computer mouse prototype was made by Douglas Engelbart, with the assistance of Bill English, at the Stanford Research Institute in 1963. The computer mouse became a public term and concept around 1965, when it was associated with a pointing device in Bill English’s publication of “Computer-Aided Display Control.” Even though the mouse was available as a pointing device for decades, it only became ubiquitous with the release of Microsoft Windows 95. The sensing mechanisms for the mouse pointer evolved through mechanical methods that used wheels or balls to detect when and how the user moved the mouse. Mechanical methods have since been widely replaced with optical implementations based on LEDs and lasers.
3D pointing devices started to appear in the market in the early 1990s, and they have continued to evolve and grow in their usefulness. 3D pointing devices provide positional data along at least three axes, with contemporary devices often supporting six degrees of freedom (three positional and three angular axes). Newer nine-degrees-of-freedom sensors (the additional three axes are magnetic compass axes), such as those from Atmel, are approaching integration levels and price points that practically ensure they will find their way into future pointing devices. These devices may also include temperature and pressure sensors. 3D pointing devices like Nintendo’s Wii Remote combine spatial and inertial sensors with vision sensing in the infrared spectrum; the vision sensing relies on a sensor bar with two infrared light sources spaced at a known distance from each other.
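As a rough illustration of what a nine-degrees-of-freedom sensor reports, the sketch below combines the accelerometer and magnetometer axes into pitch, roll, and a tilt-compensated compass heading. The axis conventions, units, and function name are assumptions for illustration; a production driver would also calibrate the sensors and fuse in the gyroscope axes.

```python
import math

def orientation_from_9dof(ax, ay, az, mx, my, mz):
    """Estimate pitch and roll (from the accelerometer axes) and a
    tilt-compensated compass heading (from the magnetometer axes).
    Axis conventions and units (g, gauss) are illustrative assumptions;
    real sensors need calibration and filtering."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    # Rotate the magnetic field vector back into the horizontal plane
    # before computing the heading.
    mx2 = mx * math.cos(pitch) + mz * math.sin(pitch)
    my2 = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    heading = math.degrees(math.atan2(-my2, mx2)) % 360.0
    return math.degrees(pitch), math.degrees(roll), heading

# Device lying flat, magnetic north along +x:
print(orientation_from_9dof(0.0, 0.0, 1.0, 0.3, 0.0, -0.4))
```

A full fusion filter (complementary or Kalman) would blend the gyroscope axes in as well to smooth out the noisy accelerometer and magnetometer estimates.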
The release of Apple’s iPhone marked the tipping point for touch screen interfaces. However, the IBM Simon smartphone predates the iPhone by nearly 14 years, and it sported similar, if primitive, support for a touchscreen interface. Like many early versions of human-machine interfaces released before the tipping point of market acceptance, the Simon did not enjoy the same market-wide adoption as the iPhone.
Touchscreen interfaces span a variety of technologies, including capacitive, resistive, inductive, and visual sensing. Capacitive touch sensing technologies, along with the software necessary to support them, are offered by many semiconductor companies; the capacitive touch market has not yet undergone the culling that so many other technologies experience as they mature. Resistive touch sensing technology has been in production use for decades, and many semiconductor companies still offer resistive touch solutions; resistive technologies may stay competitive with capacitive touch by harnessing larger and more expensive processors to deliver better signal-to-noise performance. Vision-based touch sensing is still a relatively young technology that exists in higher-end implementations, such as the Microsoft Surface, but as the price of the sensors and the compute performance needed for vision-based sensing continues to drop, it may move into direct competition with the aforementioned touch sensing technologies.
Touch interfaces have evolved from the simple drop, lift, drag, and tap model of touch pads to supporting complex multi-touch gestures such as pinch, swipe, and rotate. The number and types of gestures that touch interface systems can support will likely explode in the near future as touch solutions continue to ride Moore’s law, pushing more compute processing and larger gesture databases into the system at negligible additional cost and energy consumption. In addition to gestures that touch a surface, capacitive touch commands are beginning to incorporate proximity, or hovering, processing.
Examples of these expanded gestures include using more than two touch points, such as placing multiple fingers from one or both hands on the touch surface and performing a personalized motion. A gesture can consist of nearly any repeatable motion, including time-sensitive swipes and pauses, and it can be tailored to each individual user. As the market moves closer to a cloud computing and storage model, this type of individual tailoring becomes even more valuable because the cloud will enable users to untether themselves from a specific device and access their personal gesture database on many different devices.
Feedback latency to the user is an important measurement and a strong limiter on the adoption rate of expanded human interface options that include more complex gestures and/or speech processing. A latency target of about 100 ms has been the consistent advice for user interface feedback responses for decades (Miller, 1968; Myers, 1985; Card et al., 1991); however, according to the Nokia Forum, tactile response latency should be kept under 20 ms, or the user will start to notice the delay between a user interface event and the feedback. Staying within these response time limits constrains how complicated a gesture a system can handle while still providing satisfactory response times to the user. Some touch sensing systems can handle single-touch events satisfactorily but can, under the right circumstances, cross the latency threshold and become inadequate for handling two-touch gestures.
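One way to make these budgets concrete is to time each event-to-feedback path against the thresholds cited above. The sketch below is a minimal illustration; the constant names and the `check_latency` helper are hypothetical, not part of any cited guideline.

```python
import time

# Advisory latency budgets taken from the figures cited in the text
# (the constant names are assumptions for this sketch):
VISUAL_BUDGET_S = 0.100   # ~100 ms target for general UI feedback
TACTILE_BUDGET_S = 0.020  # ~20 ms before tactile delay becomes noticeable

def check_latency(event_ts, feedback_ts, budget_s):
    """Return (latency_s, within_budget) for one event/feedback pair."""
    latency = feedback_ts - event_ts
    return latency, latency <= budget_s

# Timing a gesture-recognition step against the tactile budget:
t0 = time.monotonic()
# ... gesture recognition and haptic trigger would run here ...
t1 = time.monotonic()
latency, within_budget = check_latency(t0, t1, TACTILE_BUDGET_S)
```

A system might log or flag gesture handlers that repeatedly exceed their budget, since a two-touch recognizer that fits under 20 ms on an idle device can miss it under load.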
Haptic feedback provides a tactile sensation, such as a slight vibration, to give the user immediate acknowledgement that the system has registered an event. This type of feedback is useful in noisy environments where a sound or beep is insufficient, and it can allow the user to operate the device without relying on visual feedback. An example is when a user taps a button on the touch screen and the system signals the tap with a vibration. The Nokia Forum goes on to recommend that tactile feedback be short (less than 50 ms) and not exaggerated, so as to keep the sensations pleasant and meaningful to the user. Vibrating the system too strongly or too often makes the feedback meaningless to the user and risks draining any batteries in the system. Tactile feedback should also be coupled with visual feedback.
Emerging Interface Options
An emerging tactile feedback technique involves simulating texture on the user’s fingertip (Figure 1). Tesla Touch is currently demonstrating this technology, which does not rely on the mechanical actuators typically used in haptic feedback approaches. The technology simulates textures by applying and modulating a periodic electrostatic charge across the touch surface. By varying the sign (and possibly the magnitude) of the charge, the electrons in the user’s fingertip are drawn toward or away from the surface, effectively creating a change in friction on the touch surface. Current prototypes are able to use signals as low as 8V to generate tactile sensations. No electric charge passes through the user.
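The periodic, modulated drive signal described above can be sketched as a simple frequency-and-amplitude mapping. The waveform shape, the sample rate, and the direct mapping from texture frequency to drive frequency below are illustrative assumptions, not Tesla Touch’s actual drive scheme.

```python
import math

def electrovibration_samples(texture_hz, amplitude_v, duration_s, rate_hz=8000):
    """Generate a periodic drive-voltage waveform for a simulated texture.
    The sine shape and 8 kHz sample rate are assumptions for this sketch;
    the perceived friction tracks the modulation envelope, and no current
    flows through the user."""
    n = int(duration_s * rate_hz)
    return [amplitude_v * math.sin(2 * math.pi * texture_hz * i / rate_hz)
            for i in range(n)]

# A 100 Hz "texture" driven at 8 V peak for 10 ms:
wave = electrovibration_samples(100, 8.0, 0.010)
```

Sweeping `texture_hz` or `amplitude_v` over time is what would vary the sensed friction from smooth to rough as the finger moves.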
Pranav Mistry of the Fluid Interfaces Group at the MIT Media Lab has demonstrated a wearable gesture interface that combines digital information with the physical world through hand gestures and a camera sensor. The project is built with commercially available parts consisting of a pocket projector, a mirror, and a camera; the current prototype costs approximately $350 to build. The projector projects visual information onto surfaces, such as walls and physical objects, within the immediate environment. The camera tracks the user’s hand gestures and physical objects. The software processes the camera video stream and tracks the locations of the colored markers at the tips of the user’s fingers. Interpreted hand gestures act as commands for the projector and digital information interfaces.
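The marker-tracking step described above can be sketched as a color-threshold-and-centroid pass over each video frame. The function name, the per-channel tolerance, and the tiny synthetic frame below are assumptions for illustration; the prototype’s actual vision pipeline is not specified here.

```python
def track_markers(frame, palette, tol=40):
    """Find the centroid of each colored fingertip marker in an RGB frame.
    `frame` is a list of rows of (r, g, b) tuples; `palette` maps a marker
    name to its reference color. The fixed per-channel tolerance stands in
    for the calibration a real camera pipeline would need."""
    found = {}
    for name, (cr, cg, cb) in palette.items():
        xs, ys = [], []
        for y, row in enumerate(frame):
            for x, (r, g, b) in enumerate(row):
                if abs(r - cr) <= tol and abs(g - cg) <= tol and abs(b - cb) <= tol:
                    xs.append(x)
                    ys.append(y)
        if xs:  # centroid of all matching pixels
            found[name] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return found

# Tiny synthetic frame: a two-pixel red marker blob on a black background.
frame = [[(0, 0, 0)] * 4 for _ in range(4)]
frame[1][1] = frame[1][2] = (250, 30, 30)
print(track_markers(frame, {"index": (255, 0, 0)}))  # {'index': (1.5, 1.0)}
```

Tracking the centroids from frame to frame yields the motion paths that the gesture interpreter would then match against known gestures.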
Another researcher/designer is Fabian Hemmert, whose projects explore emerging haptic feedback techniques including shape-changing and weight-shifting devices. His latest public projects include adding friction to a touch screen stylus; the feedback works through the stylus rather than through the user’s fingers, as in the Tesla Touch approach. The idea is that this reflective tactile feedback can prioritize displayed information, provide inherent confirmation of a selection by making the movement of the stylus heavier or lighter, and take advantage of the user’s manual dexterity by providing friction similar to that of writing on a surface, with which the user is already familiar.
The Human Media Lab recently unveiled and is demonstrating a “paper bending” interface that takes advantage of E Ink’s flexible display technology (Figure 2). The research team suggests that bending a display, such as to page forward, shows promise as an interaction mechanism. The team identified six simple bend gestures, out of 87 possible, that users preferred; these are based around bending forward or backward at two corners or the outside edge of the display. The team identifies potential uses for bend gestures when the user is wearing gloves that inhibit interacting with a touch screen. Bend gestures may also prove useful to users with motor skill limitations that inhibit the use of other input mechanisms, and they may serve as a means to engage a device without requiring visual confirmation of an action.
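A recognizer for a small set of bend gestures could be as simple as a lookup keyed on bend site and direction. The specific site names, direction labels, and action bindings below are illustrative assumptions; the study only identifies which bend locations and directions users preferred, not what they should trigger.

```python
# Hypothetical bindings for six bend gestures (two corners plus the
# outside edge, each bent forward or backward), per the pattern the
# research team describes. The actions are assumptions for this sketch.
BEND_ACTIONS = {
    ("top-corner", "forward"): "page_forward",
    ("top-corner", "backward"): "page_back",
    ("bottom-corner", "forward"): "bookmark",
    ("bottom-corner", "backward"): "remove_bookmark",
    ("outside-edge", "forward"): "next_chapter",
    ("outside-edge", "backward"): "previous_chapter",
}

def handle_bend(site, direction):
    """Resolve a detected bend into an action, ignoring unknown bends."""
    return BEND_ACTIONS.get((site, direction), "ignored")

print(handle_bend("top-corner", "forward"))  # page_forward
```

Because the lookup needs no visual confirmation, a binding like this would work equally well through a glove or without looking at the display.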
In addition to supporting commands that are issued via bending the display, the approach allows a single display to operate in multiple modes. The Snaplet project is a paper computer that can act as a watch and media player when wrapped like a bracelet on the user’s arm. It can function as a PDA with notepad functionality when held flat, and it can operate as a phone when held in a concave shape. The demonstrated paper computer can accept, recognize, and process touch, stylus, and bend gestures.
If the experiences of the computer mouse and touch screens are any indication of what lies ahead for these emerging interface technologies, each will go through a number of iterations before it evolves into something else or happens upon the proper mix of technology, low-cost and low-power parts, sufficient command expression, and acceptable feedback latency to hit the tipping point of market adoption.