Entries Tagged ‘Haptics’

Travelling the Road of Natural Interfaces

Thursday, July 28th, 2011 by Robert Cravotta

The forms for interfacing between humans and machines are constantly evolving, and the rate at which new forms of human-machine interfacing appear seems to be increasing. Long gone are the days of using punch cards and card readers to tell a computer what to do. Most contemporary users are unaware of what a command line prompt and optional arguments are. Contemporary touch, gesture, stylus, and spoken language interfaces threaten to make the traditional hand-shaped mouse a quaint and obsolete idea.

The road from idea, to experimental implementations, to production forms for human interfaces usually spans many attempts over years. For example, the first computer mouse prototype was made by Douglas Engelbart, with the assistance of Bill English, at the Stanford Research Institute in 1963. The computer mouse became a public term and concept around 1965, when it was associated with a pointing device in Bill English's publication of "Computer-Aided Display Control." Even though the mouse had been available as a pointing device for decades, it only became ubiquitous with the release of Microsoft Windows 95. The sensing mechanisms for the mouse evolved through mechanical methods that used wheels or balls to detect when and how the user moved the mouse; those mechanical methods have since been widely replaced with optical implementations based around LEDs and lasers.

3D pointing devices started to appear in the market in the early 1990s, and they have continued to evolve and grow in their usefulness. 3D pointing devices provide positional data along at least three axes, with contemporary devices often supporting six degrees of freedom (three positional and three angular axes). Newer nine-degrees-of-freedom sensors (the additional three axes come from a magnetic compass), such as those from Atmel, are approaching integration levels and price points that practically ensure they will find their way into future pointing devices. Additional measures of sensitivity for these types of devices may include temperature and pressure sensors. 3D pointing devices like Nintendo's Wii Remote combine spatial and inertial sensors with vision sensing in the infrared spectrum, relying on a sensor bar with two infrared light sources spaced at a known distance from each other.
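To make the nine-degrees-of-freedom idea concrete, the sketch below shows one common way such readings are combined: gravity from the accelerometer gives roll and pitch, and the magnetometer then yields a tilt-compensated compass heading. The structure name, units, and axis conventions are assumptions for illustration; real parts differ and typically need calibration and filtering.

```c
/* Minimal sketch, assuming a generic 9-DOF sample; names, units, and
 * axis conventions are hypothetical and vary from part to part. */
#include <math.h>
#include <stdio.h>

typedef struct {
    float ax, ay, az;   /* accelerometer, g    */
    float gx, gy, gz;   /* gyroscope, deg/s    */
    float mx, my, mz;   /* magnetometer, gauss */
} nine_dof_sample_t;

/* Tilt angles estimated from gravity alone (device roughly at rest). */
static void tilt_from_accel(const nine_dof_sample_t *s,
                            float *roll, float *pitch)
{
    *roll  = atan2f(s->ay, s->az);
    *pitch = atan2f(-s->ax, sqrtf(s->ay * s->ay + s->az * s->az));
}

/* Tilt-compensated compass heading, in radians from magnetic north. */
static float heading_from_mag(const nine_dof_sample_t *s,
                              float roll, float pitch)
{
    float bx = s->mx * cosf(pitch) +
               s->my * sinf(pitch) * sinf(roll) +
               s->mz * sinf(pitch) * cosf(roll);
    float by = s->my * cosf(roll) - s->mz * sinf(roll);
    return atan2f(-by, bx);
}

int main(void)
{
    nine_dof_sample_t s = { 0.02f, -0.01f, 0.99f,   /* nearly level */
                            0.0f,  0.0f,   0.0f,
                            0.22f, 0.05f, -0.38f };
    float roll, pitch;
    tilt_from_accel(&s, &roll, &pitch);
    printf("roll %.1f deg, pitch %.1f deg, heading %.1f deg\n",
           roll * 57.2958f, pitch * 57.2958f,
           heading_from_mag(&s, roll, pitch) * 57.2958f);
    return 0;
}
```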

Touch Interfaces

The release of Apple's iPhone marked the tipping point for touch screen interfaces. However, the IBM Simon smartphone predates the iPhone by nearly 14 years, and it sported similar, if primitive, support for a touchscreen interface. Like many early versions of human-machine interfaces released before the tipping point of market acceptance, the Simon did not enjoy the same market-wide adoption as the iPhone.

Touchscreen interfaces span a variety of technologies, including capacitive, resistive, inductive, and visual sensing. Capacitive touch sensing technologies, along with the software necessary to support them, are offered by many semiconductor companies; the capacitive touch market has not yet undergone the culling that so many other technologies experience as they mature. Resistive touch sensing technology has been in production use for decades, and many semiconductor companies still offer resistive touch solutions; there remain opportunities for resistive technologies to stay competitive with capacitive touch by harnessing larger and more expensive processors to deliver better signal-to-noise performance. Vision-based touch sensing is still a relatively young technology that exists in higher-end implementations, such as the Microsoft Surface, but as the price of the sensors and the compute performance needed for vision-based sensing continues to drop, it may move into direct competition with the other touch sensing technologies.
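As one illustration of the capacitive principle, the sketch below measures how long a sense electrode takes to charge through a pull-up; a nearby finger adds capacitance and lengthens that time. The GPIO calls and thresholds are hypothetical placeholders, not any vendor's touch library, and real controllers use far more sophisticated charge-transfer and filtering schemes.

```c
/* Minimal RC charge-time sketch; gpio_drive_low(), gpio_set_input(),
 * and gpio_read() are hypothetical HAL hooks for the target MCU. */
#include <stdint.h>
#include <stdbool.h>

void gpio_drive_low(int pin);   /* discharge the sense electrode      */
void gpio_set_input(int pin);   /* release pin; external R pulls high */
bool gpio_read(int pin);        /* true once the pin crosses Vih      */

/* Count loop iterations until the electrode charges past the input
 * threshold; a finger adds capacitance and lengthens the count. */
uint32_t cap_charge_time(int pin)
{
    uint32_t ticks = 0;
    gpio_drive_low(pin);        /* start from a known 0 V state   */
    gpio_set_input(pin);        /* let the pull-up charge the pad */
    while (!gpio_read(pin) && ticks < 10000u)
        ticks++;
    return ticks;
}

/* Simple touch decision against a no-touch baseline, with hysteresis
 * so noise near the threshold does not chatter. Thresholds are made up. */
bool cap_is_touched(int pin, uint32_t baseline)
{
    static bool touched = false;
    uint32_t t = cap_charge_time(pin);
    if (t > baseline + 40u)  touched = true;   /* press threshold   */
    if (t < baseline + 20u)  touched = false;  /* release threshold */
    return touched;
}
```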

Touch interfaces have evolved from the simple drop, lift, drag, and tap model of touch pads to supporting complex multi-touch gestures such as pinch, swipe, and rotate. The number and types of gestures that touch interface systems can support will explode in the near future as touch solutions continue to ride Moore's law and push more compute processing and larger gesture databases into the system for negligible additional cost and energy consumption. In addition to gestures that touch a surface, capacitive touch commands are beginning to incorporate proximity, or hover, detection.
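The two-finger gestures mentioned above reduce to simple geometry: track the distance and angle between the two contacts and compare them with the values captured when the second finger landed. The sketch below is illustrative only; the type names and thresholds are assumptions, and a production gesture engine would also filter jitter and handle finger lift-off.

```c
/* Sketch of two-finger pinch/rotate classification from raw touch
 * coordinates; thresholds are illustrative, not tuned values. */
#include <math.h>
#include <stdio.h>

typedef struct { float x, y; } touch_point_t;

typedef enum { GESTURE_NONE, GESTURE_PINCH_IN,
               GESTURE_PINCH_OUT, GESTURE_ROTATE } gesture_t;

/* Compare the current pair of contacts with the pair sampled when the
 * second finger first touched down. */
gesture_t classify_two_touch(touch_point_t start[2], touch_point_t now[2])
{
    float d0 = hypotf(start[1].x - start[0].x, start[1].y - start[0].y);
    float d1 = hypotf(now[1].x   - now[0].x,   now[1].y   - now[0].y);
    float a0 = atan2f(start[1].y - start[0].y, start[1].x - start[0].x);
    float a1 = atan2f(now[1].y   - now[0].y,   now[1].x   - now[0].x);
    float spread = d1 - d0;                  /* change in separation, px */
    float twist  = (a1 - a0) * 57.2958f;     /* change in angle, degrees */

    if (fabsf(twist) > 15.0f)  return GESTURE_ROTATE;
    if (spread >  30.0f)       return GESTURE_PINCH_OUT; /* zoom in  */
    if (spread < -30.0f)       return GESTURE_PINCH_IN;  /* zoom out */
    return GESTURE_NONE;
}

int main(void)
{
    touch_point_t start[2] = { {100, 200}, {180, 200} };
    touch_point_t now[2]   = { { 60, 200}, {230, 200} };
    /* Fingers moved apart, so this reports GESTURE_PINCH_OUT. */
    printf("gesture = %d\n", classify_two_touch(start, now));
    return 0;
}
```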

Examples of these expanded gestures include using more than two touch points, such as placing multiple fingers from one or both hands on the touch surface and performing a personalized motion. A gesture can consist of nearly any repeatable motion, including time-sensitive swipes and pauses, and it can be tailored to each individual user. As the market moves closer to a cloud computing and storage model, this type of individual tailoring becomes even more valuable, because the cloud will enable users to untether themselves from a specific device and access their personal gesture database on many different devices.

Feedback latency to the user is an important measurement and a strong limiter on the adoption rate of expanded human interface options that include more complex gestures and/or speech processing. A latency target of about 100ms has been the basic advice for user interface feedback for decades (Miller, 1968; Myers, 1985; Card et al., 1991); however, according to the Nokia Forum, tactile response latency should be kept under 20ms or the user will start to notice the delay between a user interface event and the feedback. Staying within these response time limits constrains how complex a gesture a system can handle while still providing satisfactory response times to the user. Some touch sensing systems can handle single-touch events satisfactorily but can, under the right circumstances, cross the latency threshold and become inadequate for handling two-touch gestures.
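One practical way to keep these budgets honest is to instrument the touch-to-feedback path and log any event that blows past them. The sketch below assumes hypothetical hooks (millis(), decode_gesture(), start_haptic_pulse()) standing in for whatever the real system provides; the 20ms and 100ms figures are the ones cited above.

```c
/* Sketch of instrumenting the touch-to-feedback path against the
 * ~20 ms tactile and ~100 ms visual budgets; all three hooks below
 * are hypothetical placeholders. */
#include <stdint.h>
#include <stdio.h>

uint32_t millis(void);              /* monotonic millisecond tick      */
int      decode_gesture(void);      /* runs the gesture pipeline       */
void     start_haptic_pulse(void);  /* kicks the haptic actuator       */

#define TACTILE_BUDGET_MS 20u
#define VISUAL_BUDGET_MS  100u

void on_touch_event(void)
{
    uint32_t t_touch = millis();

    start_haptic_pulse();           /* fire the tactile ack first...   */
    uint32_t t_haptic = millis();

    int gesture = decode_gesture(); /* ...then run the slower decode   */
    uint32_t t_visual = millis();

    if (t_haptic - t_touch > TACTILE_BUDGET_MS)
        printf("tactile ack late: %lu ms\n",
               (unsigned long)(t_haptic - t_touch));
    if (t_visual - t_touch > VISUAL_BUDGET_MS)
        printf("gesture %d rendered late: %lu ms\n",
               gesture, (unsigned long)(t_visual - t_touch));
}
```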

Haptic feedback provides a tactile sensation, such as a slight vibration, to give the user immediate acknowledgement that the system has registered an event. This type of feedback is useful in noisy environments where a sound or beep is insufficient, and it can allow the user to operate the device without relying on visual feedback. An example is when a user taps a button on the touch screen and the system signals the tap with a vibration. The Nokia Forum goes on to recommend that tactile feedback be short (less than 50ms) and not exaggerated, so as to keep the sensations pleasant and meaningful to the user. Vibrating the system too much or too often makes the feedback meaningless to the user and risks draining the system's battery. Tactile feedback should also be coupled with visual feedback.
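In code, the guidance above amounts to firing a short, bounded vibration pulse together with the visual change. The sketch below assumes hypothetical motor, timer, and display hooks; the 30ms pulse length is simply an example that stays under the 50ms recommendation.

```c
/* Sketch of a short, bounded haptic acknowledgement; the motor,
 * timer, and screen calls are hypothetical HAL hooks. */
#include <stdint.h>
#include <stdbool.h>

void haptic_motor_enable(bool on);               /* drives the vibration motor */
void timer_oneshot_ms(uint32_t ms, void (*cb)(void));
void screen_highlight_key(int key);              /* the paired visual feedback */

#define HAPTIC_PULSE_MS 30u                      /* stays under the 50 ms cap  */

static void haptic_pulse_done(void)
{
    haptic_motor_enable(false);                  /* always bounded in time     */
}

void acknowledge_key_tap(int key)
{
    screen_highlight_key(key);                   /* visual + tactile together  */
    haptic_motor_enable(true);
    timer_oneshot_ms(HAPTIC_PULSE_MS, haptic_pulse_done);
}
```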

Emerging Interface Options

An emerging tactile feedback technique involves simulating texture on the user's fingertip (Figure 1). Tesla Touch is currently demonstrating this technology, which does not rely on the mechanical actuators typically used in haptic feedback approaches. The technology simulates textures by applying and modulating a periodic electrostatic charge across the touch surface. By varying the sign (and possibly magnitude) of the charge, the electrons in the user's fingertip are drawn toward or away from the surface, effectively creating a change in friction on the touch surface. Current prototypes are able to use signals as low as 8V to generate tactile sensations. No electric charge passes through the user.
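To give a feel for the modulation idea (and not Tesla Touch's actual implementation), the sketch below maps finger speed over a virtual grating to a drive frequency and amplitude for the periodic electrostatic signal. The function set_drive_waveform() and all the numbers, other than the 8V floor mentioned above, are assumptions.

```c
/* Illustrative sketch only; set_drive_waveform() is a hypothetical
 * hook into the high-voltage drive electronics. */

void set_drive_waveform(float freq_hz, float amplitude_v);

/* A spatial grating swept at a given finger speed produces a temporal
 * friction variation; amplitude scales the perceived texture strength. */
void update_texture(float finger_speed_mm_s, float grating_pitch_mm,
                    float strength /* 0..1 */)
{
    if (grating_pitch_mm <= 0.0f || finger_speed_mm_s <= 0.0f) {
        set_drive_waveform(0.0f, 0.0f);  /* finger at rest: no texture */
        return;
    }
    /* Spatial frequency times speed gives the temporal drive frequency. */
    float freq = finger_speed_mm_s / grating_pitch_mm;
    /* 8 V is the low end cited above; the upper bound here is made up. */
    float amp  = 8.0f + strength * 72.0f;
    set_drive_waveform(freq, amp);
}
```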

Pranav Mistry at the Fluid Interfaces Group at the MIT Media Lab has demonstrated a wearable gesture interface that combines digital information with the physical world through hand gestures and a camera sensor. The project is built with commercially available parts consisting of a pocket projector, a mirror, and a camera; the current prototype system costs approximately $350 to build. The projector projects visual information onto surfaces, such as walls and physical objects within the immediate environment, while the camera tracks the user's hand gestures and physical objects. The software processes the camera video stream and tracks the locations of colored markers at the tips of the user's fingers. Interpreted hand gestures act as commands for the projector and digital information interfaces.
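The marker-tracking step can be as simple as finding the centroid of pixels that match a marker's color in each frame. The sketch below is a bare-bones version of that idea, not the project's actual software; the frame layout, color test, and pixel-count threshold are all assumptions.

```c
/* Sketch of colored-marker tracking: centroid of pixels matching a
 * marker color in a packed RGB frame. Thresholds are illustrative. */
#include <stdint.h>
#include <stdbool.h>

typedef struct { int x, y; bool found; } marker_pos_t;

/* Crude color test for, e.g., a red fingertip cap. */
static bool is_marker_color(uint8_t r, uint8_t g, uint8_t b)
{
    return r > 180 && g < 90 && b < 90;
}

marker_pos_t track_marker(const uint8_t *rgb, int width, int height)
{
    long sx = 0, sy = 0, n = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            const uint8_t *px = rgb + 3 * (y * width + x);
            if (is_marker_color(px[0], px[1], px[2])) {
                sx += x; sy += y; n++;
            }
        }
    }
    marker_pos_t p = { 0, 0, false };
    if (n > 50) {                       /* ignore stray matching pixels */
        p.x = (int)(sx / n);
        p.y = (int)(sy / n);
        p.found = true;
    }
    return p;
}
```

Feeding the centroid from successive frames into a gesture classifier, much like the touch example earlier, is what turns raw marker positions into commands.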

Another researcher/designer is Fabian Hemmert, whose projects explore emerging haptic feedback techniques, including shape-changing and weight-shifting devices. His latest public projects include adding friction to a touch screen stylus; the feedback works through the stylus rather than through the user's fingers as in the Tesla Touch approach. The thought is that this reflective tactile feedback can prioritize displayed information, provide inherent confirmation of a selection by making the movement of the stylus heavier or lighter, and take advantage of the user's manual dexterity by providing friction similar to writing on a surface, a sensation the user is already familiar with.

The Human Media Lab recently unveiled and is demonstrating a "paper bending" interface that takes advantage of E Ink's flexible display technology (Figure 2). The research team suggests that bending a display, such as to page forward, shows promise as an interaction mechanism. The team identified six simple bend gestures, out of 87 possible, that users preferred; these are based around bending forward or backward at two corners or along the outside edge of the display. The research team identifies potential uses for bend gestures when the user is wearing gloves that inhibit interacting with a touch screen. Bend gestures may also prove useful to users with motor skill limitations that inhibit the use of other input mechanisms, and they may serve as a means to engage a device without requiring visual confirmation of an action.
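A rough way to picture how those six gestures could be decoded is to read bend sensors at the two corners and the outside edge and pick the location with the largest deflection, using its sign for forward versus backward. The sketch below is purely illustrative; the sensor placement, sign convention, and threshold are assumptions rather than the research team's implementation.

```c
/* Sketch of mapping three bend-sensor deltas (two corners plus the
 * outside edge) to six forward/backward bend gestures. */
#include <stdlib.h>

typedef enum {
    BEND_NONE,
    BEND_TOP_CORNER_FWD,    BEND_TOP_CORNER_BACK,
    BEND_BOTTOM_CORNER_FWD, BEND_BOTTOM_CORNER_BACK,
    BEND_EDGE_FWD,          BEND_EDGE_BACK
} bend_gesture_t;

#define BEND_THRESHOLD 120   /* ADC counts away from the flat baseline */

/* Positive deltas mean a bend toward the user, negative away. */
bend_gesture_t classify_bend(int top_corner, int bottom_corner, int edge)
{
    int a = abs(top_corner), b = abs(bottom_corner), e = abs(edge);

    if (e >= a && e >= b && e > BEND_THRESHOLD)
        return edge > 0 ? BEND_EDGE_FWD : BEND_EDGE_BACK;
    if (a >= b && a > BEND_THRESHOLD)
        return top_corner > 0 ? BEND_TOP_CORNER_FWD : BEND_TOP_CORNER_BACK;
    if (b > BEND_THRESHOLD)
        return bottom_corner > 0 ? BEND_BOTTOM_CORNER_FWD
                                 : BEND_BOTTOM_CORNER_BACK;
    return BEND_NONE;
}
```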

In addition to supporting commands that are issued via bending the display, the approach allows a single display to operate in multiple modes. The Snaplet project is a paper computer that can act as a watch and media player when wrapped like a bracelet on the user’s arm. It can function as a PDA with notepad functionality when held flat, and it can operate as a phone when held in a concave shape. The demonstrated paper computer can accept, recognize, and process touch, stylus, and bend gestures.

If the experiences of the computer mouse and touch screens are any indication of what these emerging interface technologies are in for, there will be a number of iterations for each of these approaches before they evolve into something else or happen upon the proper mix of technology, low cost and low power parts, sufficient command expression, and acceptable feedback latency to hit the tipping point of market adoption.

Haptic User Interfaces

Tuesday, October 12th, 2010 by Robert Cravotta

Applications that support touch displays overwhelmingly rely on visual feedback to let the user know what touch event occurred. Some applications support delivering an audio signal to the user, such as a click or beep, to acknowledge that a button or virtual key was pressed. However, in many touch interfaces, there is no physical feedback, such as a small vibration, to let the user know that the system detected a touch of the display.

Contrast this with the design of mechanical keyboards. It is an explicit design decision whether the keys are soft or firm to the touch. Likewise, the “noisiness” of the keys and whether there is an audible and physical click at the end of a key press are the result of explicit choices made by the keyboard designer.

As end devices undergo a few design generations of supporting touch interfaces, I expect that many of them will incorporate haptic technology, such as from Immersion, so as to deliver the sensation of the click at the end of a key press. However, I am currently not aware of how a digital touch interface can dynamically simulate different firmness or softness of the touch display, but something like the Impress squishy display may not be too far away.

Some other interesting possibilities for touch based information and feedback are presented in Fabian Hemmert’s video about shape shifting mobile devices. In the video he demonstrates how designers might implement three different types of shape shifting in a mobile phone form factor.

The first concept is a weight-shifting device that can shift its center of mass. Not only could the device provide tactile feedback of where the user is touching the display, but it could also "point" the user in a direction by making itself heavier toward the direction it wishes to indicate. This has the potential to allow a device to guide the user through a city without requiring the user to look at the device.

The second concept is a shape-shifting device that can transform from a flat form to one that is raised on any combination of its four corners. This allows the device to extend an edge or taper a corner toward or away from the user to indicate that there is more information in the indicated direction (such as when looking at a map). A shape-shifting capability could also allow the device to be placed on a flat surface, say a nightstand, and take on a context-specific function, such as an alarm clock.

The third concept is a “breathing” device where the designer uses the shifting capabilities of the device to indicate a health state of the device. However, to make the breathing concept more than just an energy drain, it will need to be able to decide whether there is someone around to observe it, so that it can save its energy when it is alone.

The mass- and shape-shifting concepts hold a lot of promise, especially when they are combined together in the same device. It might be sooner than we think when these types of features are available to include in touch interfaces.