Entries Tagged ‘Touch Interface’

Travelling the Road of Natural Interfaces

Thursday, July 28th, 2011 by Robert Cravotta

The forms for interfacing between humans and machines are constantly evolving, and the creation rate of new forms of human-machine interfacing seems to be increasing. Long gone are the days of using punch cards and card readers to tell a computer what to do. Most contemporary users are unaware of what a command-line prompt and its optional arguments are. Contemporary touch, gesture, stylus, and spoken language interfaces threaten to make the traditional hand-shaped mouse a quaint and obsolete idea.

The road from idea, to experimental implementations, to production forms for human interfaces usually spans many attempts over years. For example, the first computer mouse prototype was made by Douglas Engelbart, with the assistance of Bill English, at the Stanford Research Institute in 1963. The computer mouse became a public term and concept around 1965 when it was associated with a pointing device in Bill English’s publication of “Computer-Aided Display Control.” Even though the mouse was available as a pointing device for decades, it did not become ubiquitous until the release of Microsoft Windows 95. The sensing mechanisms for the mouse pointer evolved through mechanical methods using wheels or balls to detect when and how the user moved the mouse. The mechanical methods have been widely replaced with optical implementations based around LEDs and lasers.

3D pointing devices started to appear in the market in the early 1990s, and they have continued to evolve and grow in their usefulness. 3D pointing devices provide positional data along at least 3 axes, with contemporary devices often supporting 6 degrees of freedom (3 positional and 3 angular axes). Newer 9-degrees-of-freedom sensors (the additional 3 axes are magnetic compass axes), such as those from Atmel, are approaching integration levels and price points that practically ensure they will find their way into future pointing devices. Additional measures of sensitivity for these types of devices may include temperature and pressure sensors. 3D pointing devices like Nintendo’s Wii remote combine spatial and inertial sensors with vision sensing in the infrared spectrum that relies on a light bar with two infrared light sources spaced at a known distance from each other.
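As a rough illustration of how those extra magnetic axes get used, the following C sketch computes roll and pitch from a 3-axis accelerometer and then tilt-compensates a 3-axis magnetometer reading into a compass heading. This is a minimal example under assumed axis conventions and calibrated inputs; the function and variable names are mine for illustration and are not tied to any particular vendor's parts.

/* Minimal sketch: tilt-compensated compass heading from 9-DOF data.
 * Assumes a calibrated accelerometer and magnetometer in consistent
 * units; the axis conventions here are illustrative only.
 */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double heading_deg(double ax, double ay, double az,
                          double mx, double my, double mz)
{
    /* Roll and pitch estimated from the gravity vector. */
    double roll  = atan2(ay, az);
    double pitch = atan2(-ax, sqrt(ay * ay + az * az));

    /* Rotate the magnetic field vector back into the horizontal plane. */
    double bx = mx * cos(pitch) + my * sin(roll) * sin(pitch)
              + mz * cos(roll) * sin(pitch);
    double by = my * cos(roll) - mz * sin(roll);

    double heading = atan2(-by, bx) * 180.0 / M_PI;
    return (heading < 0.0) ? heading + 360.0 : heading;
}

int main(void)
{
    /* Device lying flat with the field along +X: heading is about 0 degrees. */
    printf("heading = %.1f degrees\n",
           heading_deg(0.0, 0.0, 1.0, 0.4, 0.0, -0.3));
    return 0;
}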

Touch Interfaces

The release of Apple’s iPhone marked the tipping point for touch screen interfaces. However, the IBM Simon smartphone predates the iPhone by nearly 14 years, and it sported similar, even if primitive, support for a touchscreen interface. Like many early versions of human-machine interfaces that are released before the tipping point of market acceptance, the Simon did not enjoy the same market wide adoption as the iPhone.

Touchscreen interfaces span a variety of technologies including capacitive, resistive, inductive, and visual sensing. Capacitive touch sensing technologies, along with the software necessary to support them, are offered by many semiconductor companies. The capacitive touch market has not yet undergone the culling that so many other technologies experience as they mature. Resistive touch sensing technology has been in production use for decades and many semiconductor companies still offer resistive touch solutions; there are opportunities for resistive technologies to remain competitive with capacitive touch by harnessing larger and more expensive processors to deliver better signal-to-noise performance. Vision-based touch sensing is still a relatively young technology that exists in higher-end implementations, such as the Microsoft Surface, but as the price of the sensors and the compute performance needed for vision-based sensing continues to drop, it may move into direct competition with the aforementioned touch sensing technologies.

Touch interfaces have evolved from the simple drop, lift, drag, and tap model of touch pads to supporting complex multi-touch gestures such as pinch, swipe, and rotate. However, the number and types of gestures that touch interface systems can support will explode in the near future as touch solutions continue to ride Moore’s law and push more compute processing and gesture databases into the system for negligible additional cost and energy consumption. In addition to gestures that touch a surface, capacitive touch commands are beginning to incorporate proximity, or hover, processing.
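To make the gesture discussion concrete, here is a hedged C sketch of one way a controller might separate a two-finger pinch from a rotate once it can track two touch points across frames: it compares the change in finger spread against the change in the pair's orientation. The thresholds, types, and pixel units are illustrative assumptions rather than any vendor's actual gesture engine.

/* Illustrative two-touch classifier: pinch versus rotate between two
 * consecutive frames. Thresholds and types are assumptions for the sketch.
 */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y; } touch_t;

typedef enum { GESTURE_NONE, GESTURE_PINCH_IN, GESTURE_PINCH_OUT,
               GESTURE_ROTATE } gesture_t;

static gesture_t classify(touch_t a0, touch_t b0, touch_t a1, touch_t b1)
{
    double d0 = hypot(b0.x - a0.x, b0.y - a0.y);    /* finger spread, last frame */
    double d1 = hypot(b1.x - a1.x, b1.y - a1.y);    /* finger spread, this frame */
    double ang0 = atan2(b0.y - a0.y, b0.x - a0.x);  /* orientation of the pair   */
    double ang1 = atan2(b1.y - a1.y, b1.x - a1.x);

    double dd = d1 - d0;
    double da = ang1 - ang0;
    if (da > M_PI)  da -= 2.0 * M_PI;   /* wrap the angle change into [-pi, pi] */
    if (da < -M_PI) da += 2.0 * M_PI;

    if (fabs(dd) > 5.0)                 /* spread changed by more than 5 px */
        return (dd < 0.0) ? GESTURE_PINCH_IN : GESTURE_PINCH_OUT;
    if (fabs(da) > 0.05)                /* pair rotated by roughly 3 degrees or more */
        return GESTURE_ROTATE;
    return GESTURE_NONE;
}

int main(void)
{
    touch_t a0 = { 100, 100 }, b0 = { 200, 100 };   /* previous frame          */
    touch_t a1 = { 110, 100 }, b1 = { 190, 100 };   /* fingers moved together  */
    printf("gesture = %d\n", classify(a0, b0, a1, b1)); /* 1 = GESTURE_PINCH_IN */
    return 0;
}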

Examples of these expanded gestures include using more than two touch points, such as placing multiple fingers from one or both hands on the touch surface and performing a personalized motion. A gesture can consist of nearly any repeatable motion, including time-sensitive swipes and pauses, and it can be tailored to each individual user. As the market moves closer to a cloud computing and storage model, this type of individual tailoring becomes even more valuable because the cloud will enable users to untether themselves from a specific device and access their personal gesture database on many different devices.

Feedback latency to the user is an important measurement and a strong limiter on the adoption rate of expanded human interface options that include more complex gestures and/or speech processing. A latency target of about 100ms has consistently been the basic advice for user-interface feedback for decades (Miller, 1968; Myers, 1985; Card et al., 1991); however, according to the Nokia Forum, for tactile responses the latency should be kept under 20ms or the user will start to notice the delay between a user interface event and the feedback. Staying within these response-time limits constrains how complex a gesture a system can handle while still providing satisfactory response times to the user. Some touch sensing systems can handle single-touch events satisfactorily but can, under the right circumstances, cross the latency threshold and become inadequate for handling two-touch gestures.
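A hedged sketch of how a design team might fold those numbers into a self-check is shown below: a helper compares a measured processing time against the roughly 100ms budget for general feedback and the sub-20ms budget for tactile feedback cited above. The function and enum names are illustrative assumptions.

/* Sketch: checking a touch pipeline's processing time against the
 * latency budgets cited above (about 100 ms for general UI feedback,
 * under 20 ms for tactile feedback). The budgets come from the text;
 * the function and event names are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { FEEDBACK_VISUAL, FEEDBACK_TACTILE } feedback_t;

static bool within_budget(feedback_t kind, double elapsed_ms)
{
    double budget_ms = (kind == FEEDBACK_TACTILE) ? 20.0 : 100.0;
    if (elapsed_ms > budget_ms) {
        fprintf(stderr, "latency %.1f ms exceeds %.0f ms budget\n",
                elapsed_ms, budget_ms);
        return false;
    }
    return true;
}

int main(void)
{
    /* e.g. a two-touch gesture whose shape processing took 27 ms */
    within_budget(FEEDBACK_VISUAL, 27.0);   /* fine for visual feedback    */
    within_budget(FEEDBACK_TACTILE, 27.0);  /* too slow for a haptic click */
    return 0;
}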

Haptic feedback provides a tactile sensation, such as a slight vibration, to give the user immediate acknowledgement that the system has registered an event. This type of feedback is useful in noisy environments where a sound or beep is insufficient, and it can allow the user to operate the device without relying on visual feedback. An example is when a user taps a button on the touch screen and the system signals the tap with a vibration. The Nokia Forum goes on to recommend that tactile feedback be short (less than 50ms) and not exaggerated, so as to keep the sensations pleasant and meaningful to the user. Vibrating the system too much or too often makes the feedback meaningless to the user and risks draining any batteries in the system. Tactile feedback should also be coupled with visual feedback.

Emerging Interface Options

An emerging tactile feedback technique involves simulating texture on the user’s fingertip (Figure 1). Tesla Touch is currently demonstrating this technology, which does not rely on the mechanical actuators typically used in haptic feedback approaches. The technology simulates textures by applying and modulating a periodic electrostatic charge across the touch surface. By varying the sign (and possibly magnitude) of the charge, the electrons in the user’s fingertip are drawn towards or away from the surface – effectively creating a change in friction on the touch surface. Current prototypes are able to use signals as low as 8V to generate tactile sensations. No electric charge passes through the user.

Pranav Mistry at the Fluid Interfaces Group | MIT Media Lab has demonstrated a wearable gesture interface setup that combines digital information with the physical world through hand gestures and a camera sensor. The project is built with commercially available parts consisting of a pocket projector, a mirror, and a camera. The current prototype system costs approximately $350 to build. The projector projects visual information on surfaces, such as walls and physical objects within the immediate environment. The camera tracks the user’s hand gestures and physical objects. The software processes the camera video stream and tracks the locations of the colored markers at the tips of the user’s fingers. Interpreted hand gestures act as commands for the projector and digital information interfaces.

Another researcher/designer is Fabian Hemmert, whose projects explore emerging haptic feedback techniques including shape-changing and weight-shifting devices. His latest public projects include adding friction to a touch screen stylus that works through the stylus rather than through the user’s fingers like the Tesla Touch approach. The thought is that this reflective tactile feedback can prioritize displayed information, provide inherent confirmation of a selection by making the movement of the stylus heavier or lighter, and take advantage of the user’s manual dexterity by providing friction that is similar to writing on a surface, with which the user is already familiar.

Human Media Lab recently unveiled and is demonstrating a “paper bending” interface that takes advantage of E-Ink’s flexible display technology (Figure 2). The research team suggests that bending a display, such as to page forward, shows promise as an interaction mechanism. The team identified six simple bend gestures, out of 87 possible, that users preferred, based around bending forward or backward at two corners or the outside edge of the display. The research team identifies potential uses for bend gestures when the user is wearing gloves that inhibit interacting with a touch screen. Bend gestures may also prove useful to users that have motor skill limitations that inhibit the use of other input mechanisms. Bend gestures may be useful as a means to engage a device without requiring visual confirmation of an action.

In addition to supporting commands that are issued via bending the display, the approach allows a single display to operate in multiple modes. The Snaplet project is a paper computer that can act as a watch and media player when wrapped like a bracelet on the user’s arm. It can function as a PDA with notepad functionality when held flat, and it can operate as a phone when held in a concave shape. The demonstrated paper computer can accept, recognize, and process touch, stylus, and bend gestures.

If the experiences of the computer mouse and touch screens are any indication of what these emerging interface technologies are in for, there will be a number of iterations for each of these approaches before they evolve into something else or happen upon the proper mix of technology, low cost and low power parts, sufficient command expression, and acceptable feedback latency to hit the tipping point of market adoption.

Touch me (too) tender

Tuesday, June 28th, 2011 by Robert Cravotta

A recent video of a fly activating commands on a touchscreen provides an excellent example of a touchscreen implementation that is too sensitive. In the video, you can see the computing system interpreting the fly’s movements as finger taps and drags. Several times the fly’s movement causes sections of text to be selected and another time you can see selected text that is targeted for a drag and drop command. Even when the fly just momentarily bounces off the touchscreen surface, the system detects and recognizes that brief contact as a touch command.

For obvious reasons, such over sensitivity in a touchscreen application is undesirable in most cases – that is unless the application is to detect and track the behavior of flies making contact with a surface. The idea that a fly could accidentally delete your important files or even send sensitive files to the wrong person (thanks to field auto-fill technology) is unpleasant at best.

Touchscreens have been available as an input device for decades, so why is the example of a fly issuing commands only surfacing now? First, the fly in the video is walking and landing on a capacitive touchscreen. Capacitive touch screens became more prevalent in consumer products after the launch of the Apple iPhone in 2007. Because capacitive touch screens rely on the conductive properties of the human finger, a touch command does not necessarily require a minimum amount of physical force to activate.

This contrasts with resistive touch screens which do require a minimum amount of physical force to cause two layers on the touch screen surface to make physical contact with each other. If the touch sensor in the video was a screen with a resistive touch sensor layered over it, the fly would most likely never be able to cause the two layers to make contact with each other by walking across the sensor surface; however, it might be able to make the surfaces contact each other if it forcefully collided into the screen area.

Touchscreens that are too sensitive are analogous to keyboards that do not implement an adequate debounce function for the keys. In other words, there are ways that capacitive touch sensors can mitigate spurious inputs such as flies landing on the sensor surface. There are two areas within the sensing system that a designer can work with to filter out unintended touches.

The first area to address in the system is to properly set the gain levels so that noise spikes and small conductive objects (like the feet and body of a fly) do not trigger a count threshold that would be interpreted as a touch. Another symptom of an oversensitive capacitive touch sensor is that it may classify a finger hovering over the touch surface as a touch before it makes contact with the surface. Many design specifications for touch systems explicitly state an acceptable distance above the touch surface that can be recognized as a touch (on the order of a fraction of a mm above the surface). I would share a template for specifying the sensitivity of a touch screen, but the sources I checked with consider that template proprietary information.
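The sketch below illustrates the count-threshold idea for a single capacitive channel, combining hysteresis (separate touch and release thresholds) with a simple debounce that requires several consecutive frames before changing state. The specific count values, frame counts, and structure names are assumptions for illustration, not a particular controller's register settings.

/* Sketch of threshold-with-hysteresis plus debounce for one capacitive
 * channel. The count values and frame requirements are illustrative;
 * a real controller's tuning registers will differ.
 */
#include <stdbool.h>
#include <stdio.h>

#define TOUCH_THRESHOLD    60  /* counts above baseline to report a touch     */
#define RELEASE_THRESHOLD  40  /* counts below which the touch is released    */
#define DEBOUNCE_FRAMES     3  /* consecutive frames required to change state */

typedef struct {
    bool touched;
    int  frames_in_new_state;
} channel_state_t;

static bool update_channel(channel_state_t *ch, int delta_counts)
{
    bool raw = ch->touched ? (delta_counts > RELEASE_THRESHOLD)
                           : (delta_counts > TOUCH_THRESHOLD);

    if (raw != ch->touched) {
        if (++ch->frames_in_new_state >= DEBOUNCE_FRAMES) {
            ch->touched = raw;
            ch->frames_in_new_state = 0;
        }
    } else {
        ch->frames_in_new_state = 0;
    }
    return ch->touched;
}

int main(void)
{
    channel_state_t ch = { false, 0 };
    /* A fly-sized blip of 45 counts for two frames never crosses the
     * 60-count touch threshold, so no touch is reported for it. */
    int samples[] = { 10, 45, 45, 10, 80, 85, 90, 50, 30, 10 };
    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("frame %u: delta=%d touched=%d\n",
               i, samples[i], update_channel(&ch, samples[i]));
    return 0;
}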

One reason why a touch system might be too sensitive is because the gain is set too high so as to allow the system to recognize when the user is using a stylus with a small conductive material within its tip. A stylus tip is sufficiently smaller than a human finger, and without the extra sensitivity in the touch sensor, a user will not be able to use a stylus because the sensor will fail to detect the stylus tip near the display surface. Another reason a touch system could be too sensitive is to accommodate a use-case that involves the user wearing gloves. In essence, the user’s finger never actually makes contact with the surface (the glove does), and the sensor system must be able to detect the finger through the glove even though it is hovering over the touch surface.

The other area of the system a designer should address to mitigate spurious and unintended touches is through shape processing. Capacitive touch sensing is similar to image or vision processing in that the raw data consists of a reading for each “pixel” in the touch area for each cycle or frame of input processing. In addition to looking for peaks or valleys in the pixel values, the shape processing can compare the shape of the pixels around the peak/valley to confirm that it is a shape and size that is consistent with what it expects. Shapes that are outside the expected set, such as six tiny spots that are close to each other in the shape of a fly’s body, can be flagged and ignored by the system.
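As a simplified example of that kind of shape screening, the C sketch below groups active pixels into blobs with a flood fill and accepts only blobs whose area falls within a finger-sized range, so a fly-sized speck or a palm-sized region is flagged and ignored. The grid size, count threshold, and area limits are illustrative assumptions.

/* Sketch of size-based blob filtering on a small frame of capacitive
 * "pixels". A blob's area must fall inside [MIN_AREA, MAX_AREA] to be
 * accepted as a finger; tiny contacts (fly feet) and oversized ones
 * are rejected. Grid size and thresholds are illustrative.
 */
#include <stdio.h>

#define W 8
#define H 8
#define TOUCH_LEVEL 50   /* counts above baseline to call a pixel active */
#define MIN_AREA     3   /* smaller than a fingertip: reject             */
#define MAX_AREA    20   /* larger than a fingertip: reject (palm etc.)  */

static int flood(int frame[H][W], int visited[H][W], int y, int x)
{
    if (y < 0 || y >= H || x < 0 || x >= W) return 0;
    if (visited[y][x] || frame[y][x] < TOUCH_LEVEL) return 0;
    visited[y][x] = 1;
    return 1 + flood(frame, visited, y + 1, x) + flood(frame, visited, y - 1, x)
             + flood(frame, visited, y, x + 1) + flood(frame, visited, y, x - 1);
}

int main(void)
{
    int frame[H][W] = {
        /* one finger-sized blob and two fly-sized specks */
        { 0,  0,  0,  0, 0,  0,  0, 0 },
        { 0, 70, 80,  0, 0,  0, 60, 0 },
        { 0, 75, 90, 85, 0,  0,  0, 0 },
        { 0,  0, 80, 70, 0,  0,  0, 0 },
        { 0,  0,  0,  0, 0,  0,  0, 0 },
        { 0,  0,  0,  0, 0, 55,  0, 0 },
        { 0,  0,  0,  0, 0,  0,  0, 0 },
        { 0,  0,  0,  0, 0,  0,  0, 0 },
    };
    int visited[H][W] = { { 0 } };

    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (frame[y][x] >= TOUCH_LEVEL && !visited[y][x]) {
                int area = flood(frame, visited, y, x);
                printf("blob at (%d,%d) area=%d -> %s\n", x, y, area,
                       (area >= MIN_AREA && area <= MAX_AREA) ? "finger" : "ignored");
            }
    return 0;
}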

This also suggests that the shape processing should be able to track context because it needs to be able to remember information between data frames and track the behavior of each blob of pixels to be able to recognize gestures such as pinch and swipe. This is the basis of cheek and palm rejection processing as well as ignoring a user’s fingers that are gripping the edge of the touch display for hand held devices.

One reason why a contemporary system, such as the one in the video, might not properly filter out touches from a fly is that the processor running the shape processing algorithm may not have the bandwidth to perform the more complex filtering in the time frame allotted. In addition to actually implementing additional code to handle more complex tracking and filtering, the system has to allocate enough processing resources to complete those tasks. As the number of touches that the controller can detect and track increases, the amount of processing required to resolve all of those touches goes up faster than linearly. Part of the additional complexity for complex shape processing comes from determining which blobs are associated with other blobs and which ones are independent from the others. This correlation function requires multi-frame tracking.
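A minimal sketch of that multi-frame correlation is shown below: each centroid in the current frame is associated with the nearest tracked touch from the previous frame if it is within a plausible per-frame jump, and otherwise it starts a new track. A production controller would also age out lost touches and resolve competing matches; the structures and limits here are illustrative assumptions.

/* Sketch of frame-to-frame touch association: each new centroid is
 * matched to the nearest tracked touch within MAX_JUMP pixels, or it
 * starts a new track. Names and limits are illustrative.
 */
#include <math.h>
#include <stdio.h>

#define MAX_TOUCHES 10
#define MAX_JUMP    30.0  /* largest plausible per-frame movement, in px */

typedef struct { int id; double x, y; } track_t;

static int associate(track_t prev[], int n_prev,
                     double cur[][2], int n_cur,
                     track_t out[], int *next_id)
{
    for (int i = 0; i < n_cur; i++) {
        int best = -1;
        double best_d = MAX_JUMP;
        for (int j = 0; j < n_prev; j++) {
            double d = hypot(cur[i][0] - prev[j].x, cur[i][1] - prev[j].y);
            if (d < best_d) { best_d = d; best = j; }
        }
        out[i].x  = cur[i][0];
        out[i].y  = cur[i][1];
        out[i].id = (best >= 0) ? prev[best].id : (*next_id)++;
    }
    return n_cur;
}

int main(void)
{
    int next_id = 2;
    track_t prev[] = { { 0, 100, 100 }, { 1, 200, 150 } };
    double cur[][2] = { { 104, 102 }, { 198, 153 }, { 60, 60 } };
    track_t out[MAX_TOUCHES];

    int n = associate(prev, 2, cur, 3, out, &next_id);
    for (int i = 0; i < n; i++)
        printf("touch id=%d at (%.0f, %.0f)\n", out[i].id, out[i].x, out[i].y);
    return 0;
}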

This video is a good reminder that what is good enough in the lab might be completely insufficient in the field.

An interface around the bend

Tuesday, May 24th, 2011 by Robert Cravotta

User interface options continue to grow in richness of expression as sensor and compute processing costs and energy requirements continue to drop. The “paper” computing device is one such example, and it hints that touch interfaces may only be the beginning of where user interfaces are headed. Flexible display technologies like E-Ink’s have supported visions of paper computers and hand held computing devices for over a decade. A paper recently released by Human Media Lab explores the opportunities and challenges of supporting user gestures that involve bending the device display similar to how you would bend a piece of paper. A video of the flexible prototype paper phone provides a quick overview of the project.

The paper phone prototype provides a demonstration platform for exploring gestures that involve the user bending the device to issue a command.

The demonstration device is based on a flexible display prototype called paper phone (see Figure). The 3.7” flexible electrophoretic display is coupled with a layer of five Flexpoint 2” bidirectional bend sensors that are sampled at 20Hz. The display is driven by an E-Ink Broadsheet AM 300 kit with a Gumstix processor that is capable of completing a display refresh in 780ms for a typical full-screen grey-scale image. The prototype is connected to a laptop computer that offloads the processing for the sensor data, bend recognition, and sending images to the display to support testing the flexibility and mobility characteristics of the display.

The paper outlines how the study extends prior work with bend gestures in two important ways: 1) the display provided direct visual feedback to the user’s bend commands, and 2) the flexible electronics of the bending layer provided feedback. The study involved two parts. The first part asked the participants to define eight custom bend gesture pairs. Gestures were classified according to two characteristics: the location of the force exerted on the display and the polarity of that force. The configuration of the bend sensors supported recognizing folds or bends at the corners and along the center edge of the display. The user’s folds could exert force forward or backward at each of these points. Gestures could consist of folding the display in a single location or at multiple locations. The paper acknowledges that there are other criteria they could have used, such as the amount of force in a fold, the number of repetitions of a fold, and the velocity of a fold; these were not investigated in this study.
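To illustrate the location-plus-polarity classification in code, the hedged C sketch below takes one signed reading per recognized bend location and reports any fold that exceeds a dead band, labeling it with its location and polarity. The sensor-to-location mapping, scaling, and threshold are my assumptions, not the instrumentation used in the study.

/* Sketch of classifying a bend gesture by location and polarity, the
 * two characteristics used in the study. Sensor placement, scaling,
 * and the dead-band threshold are illustrative assumptions.
 */
#include <stdio.h>
#include <stdlib.h>

enum location { TOP_CORNER, BOTTOM_CORNER, SIDE_EDGE, NUM_LOCATIONS };
enum polarity { BEND_NONE, BEND_FORWARD, BEND_BACKWARD };

static const char *loc_name[] = { "top corner", "bottom corner", "side edge" };
static const char *pol_name[] = { "none", "forward", "backward" };

#define DEAD_BAND 15   /* ignore readings smaller than this (arbitrary units) */

/* One signed reading per location: positive = forward fold, negative = backward. */
static void classify_bend(const int reading[NUM_LOCATIONS])
{
    for (int i = 0; i < NUM_LOCATIONS; i++) {
        enum polarity p = BEND_NONE;
        if (abs(reading[i]) >= DEAD_BAND)
            p = (reading[i] > 0) ? BEND_FORWARD : BEND_BACKWARD;
        if (p != BEND_NONE)
            printf("bend detected: %s, %s\n", loc_name[i], pol_name[p]);
    }
}

int main(void)
{
    /* e.g. page-forward mapped to a forward fold of the top corner */
    int frame[NUM_LOCATIONS] = { 42, -5, 3 };
    classify_bend(frame);
    return 0;
}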

The second part of the study asked participants to use and evaluate the bend gestures they developed in the context of complete tasks, such as operating a music player or completing a phone call. The study found strong agreement among participants for the folding locations as well as the polarity of the folds for actions with clear directionality, such as navigating left and right. The tasks that the participants were asked to complete were navigating between twelve application icons; navigating a contact list; playing, pausing, and selecting the previous or next song; navigating a book reader; and zooming in and out for map navigation. The paper presents analysis of the 87 total bend gestures that the ten participants created (seven additional bends were created in the second part of the study) in building a bend gesture/language, and it discusses shared preferences among the participants.

A second paper from Human Media Lab presents a demonstration “Snaplet” prototype for a bend sensitive device to change its function and context based on its shape. The video of the Snaplet demonstrates the different contexts that the prototype can recognize and adjust to. Snaplet is similar to the paper phone prototype in that it uses bend sensors to classify the shape of the device. Rather than driving specific application commands with bends, deforming the shape of the device drives which applications the device will present to the user and what types of inputs it will accept and use. The prototype includes pressure sensors to detect touches, and it incorporates a Wacom flexible tablet to enable interaction with a stylus. Deforming the shape of the device is less dynamic than bending the device (such as in the first paper); rather the static or unchanging nature of the deformations allows the device’s shape to define what context it will work in.

When the Snaplet is bent in a convex shape, such as a wristband on the user’s arm, the Snaplet acts like a watch or media player. The user can place the curved display on their arm and hold it in place with Velcro. The video of the Snaplet shows the user interacting with the device via a touch screen with icons and application data displayed appropriately for viewing on the wrist. By holding the device flat in the palm of their hand, the user signals to the device that it should act as a PDA. In this context, the user can use a Wacom stylus to take notes or sketch; this form factor is also good for reading a book. The user can signal the device to act as a cell phone by bending the edge of the display with their fingers and then placing the device to their ear.
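A hedged sketch of how such shape-driven context switching could be implemented follows: the average signed bend across the sensors is mapped to one of the three demonstrated contexts (wrapped, flat, or pinched concave). The averaging approach and thresholds are illustrative assumptions rather than the Snaplet's actual classifier.

/* Sketch of mapping a static overall shape, estimated from bend-sensor
 * readings, to a device context the way the Snaplet demonstration does
 * (wrapped = watch/media player, flat = PDA, pinched concave = phone).
 * The averaging and thresholds are illustrative assumptions.
 */
#include <stdio.h>

#define N_SENSORS 5
#define CURVE_THRESHOLD 20  /* average signed bend needed to leave "flat" */

typedef enum { MODE_WATCH, MODE_PDA, MODE_PHONE } device_mode_t;

static device_mode_t classify_shape(const int bend[N_SENSORS])
{
    int sum = 0;
    for (int i = 0; i < N_SENSORS; i++)
        sum += bend[i];                 /* positive = convex, negative = concave */
    int avg = sum / N_SENSORS;

    if (avg > CURVE_THRESHOLD)  return MODE_WATCH;  /* wrapped around the wrist */
    if (avg < -CURVE_THRESHOLD) return MODE_PHONE;  /* held in a concave pinch  */
    return MODE_PDA;                                /* held flat in the palm    */
}

int main(void)
{
    int wrapped[N_SENSORS] = {  35,  40,  38,  36,  41 };
    int flat[N_SENSORS]    = {   2,  -3,   1,   0,   2 };
    int pinched[N_SENSORS] = { -30, -28, -35, -31, -29 };

    printf("%d %d %d\n", classify_shape(wrapped),
                         classify_shape(flat),
                         classify_shape(pinched));  /* prints 0 1 2 */
    return 0;
}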

Using static bends provides visual and tangible cues to the user of the device’s current context and application mode. Holding the device in a concave shape requires a continuous pinch from the user and provides haptic feedback that signifies there is an ongoing call. When the user releases the pinch, the drop in haptic energy directly corresponds with dropping the call. This means that users can rely on the shape of the device to determine its operating context without visual feedback.

The paper phone and Snaplet prototypes are definitely not ready for production use, but they are good demonstration platforms for exploring how and when bend gestures and deforming the shape of a device may be practical. Note that these demonstration platforms do not suggest replacing existing forms of user interfaces, such as touch and stylus inputs; rather, they demonstrate how bend gestures can augment those input forms and provide a more natural and richer communication path with electronic devices.

The battle for multi-touch

Tuesday, April 12th, 2011 by Robert Cravotta

As with most technologies used in the consumer space, multi-touch took a number of years to gestate before it matured enough to gain visibility with end users. Capacitive-based multi-touch technology burst into the consumer consciousness with the introduction of the iPhone. Dozens of companies have since entered the market to provide capacitive touch technologies to support new designs and applications. The capabilities that capacitive touch technology can support, such as improved sensing of multiple touches, detecting and filtering unintended touches (such as palm and cheek detection), as well as supporting a stylus, continue to evolve and improve.

Capacitive touch enjoys a very strong position providing the technology for multi-touch applications; however, there are other technologies that are or will likely be vying for a larger piece of the multi-touch pie. A potential contender is the vision-based multi-touch technology found in the Microsoft Surface. However, as of this writing, Microsoft has indicated that it is not focusing its efforts for the pixel sense technology toward the embedded market, so it may be a few years before the technology is available to embedded developers.

The $7600 price tag for the newest Surface system may imply that the sensing technology is too expensive for embedded systems, but it is important to realize that this price point supports a usage scenario that vastly differs from a single user device. First, the Surface provides a 40 inch diagonal touch/display surface that four people can easily access and use simultaneously. Additionally, the software and processing resources contained within the system are sized to handle 50 simultaneous touch points. Shrink both of these capabilities down to a single user device and the pixel sense technology may become quite price competitive.

Vision-based multi-touch works with more than a user’s fingers; it can also detect, recognize, and interact with mundane, everyday objects, such as pens, cups, and paint brushes, as well as touch-interface-specific objects such as styli. Given enough compute capability, the technology can distinguish between touches and hovering of fingers and objects over the touch surface and handle them differently. I’m betting that as the manufacturing process for the pixel sense sensors matures, the lower price points will make a compelling case for focusing development support on the embedded community.

Resistive touch technology is another multi-touch contender. It has been the workhorse for touch applications for decades, but its inability (until recently) to support multi-touch designs has been one of its significant shortcomings. One advantage that resistive touch has enjoyed over capacitive touch for single-touch applications is a lower cost to incorporate it into a design. Over the last year or so, resistive touch has evolved to support multi-touch designs by using more compute processing in the sensor to resolve the ghosting issues of earlier resistive touch implementations.

Similar to vision-based multi-touch, resistive touch is able to detect contact with any normal object because resistive touch relies on a mechanical interface. Being able to detect contact with any object provides an advantage over capacitive touch because capacitive touch sensing can only detect objects, such as a human finger or a special conductive-tipped stylus, with conductive properties that can draw current from the capacitive field when placed on or over the touch surface. Capacitive touch technology also continues to evolve, and support for thin, passive styli (with an embedded conductive tip) is improving.

Each technology offers different strengths and capabilities; however, the underlying software and embedded processors in each approach must be able to perform analogous functions in order to effectively support multi-touch applications. A necessary capability is the ability to distinguish between explicit and unintended touches. This requires the sensor processor and software to continuously track many simultaneous touches and assign a context to each one of them. The ability to track multiple explicit touches relative to each other is necessary to recognize both single- and multi-touch gestures, such as swipes and pinches. Recognizing unintended touches involves properly ignoring when the user places their palm or cheek over the touch area, as well as when the fingers gripping the device touch the edges of the touch surface.
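The following C sketch illustrates two of the simplest unintended-touch filters implied above: contacts with an area too large to be a fingertip are treated as a palm or cheek, and contacts whose centroid hugs the border are treated as a gripping finger. The screen geometry, margin, and area limit are illustrative assumptions.

/* Sketch of rejecting unintended contacts: very large contacts are
 * treated as a palm or cheek, and contacts hugging the screen border
 * are treated as a gripping finger. Geometry and limits are illustrative.
 */
#include <stdbool.h>
#include <stdio.h>

#define SCREEN_W        800
#define SCREEN_H        480
#define EDGE_MARGIN      20   /* px: contacts this close to the border are grips */
#define MAX_FINGER_AREA 400   /* px^2: anything larger is a palm or cheek        */

typedef struct { double x, y; double area; } contact_t;

static bool is_intended(const contact_t *c)
{
    if (c->area > MAX_FINGER_AREA)
        return false;                              /* palm / cheek */
    if (c->x < EDGE_MARGIN || c->x > SCREEN_W - EDGE_MARGIN ||
        c->y < EDGE_MARGIN || c->y > SCREEN_H - EDGE_MARGIN)
        return false;                              /* finger gripping the bezel */
    return true;
}

int main(void)
{
    contact_t finger = { 400, 240,  120 };
    contact_t palm   = { 300, 300, 2500 };
    contact_t grip   = {   5, 200,  110 };

    printf("finger intended: %d\n", is_intended(&finger)); /* 1 */
    printf("palm intended:   %d\n", is_intended(&palm));   /* 0 */
    printf("grip intended:   %d\n", is_intended(&grip));   /* 0 */
    return 0;
}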

A differentiating capability for touch sensing is minimizing the latency between when the user makes an explicit touch and when the system responds or provides the appropriate feedback to that touch. Longer latencies can degrade the user’s experience in two ways: the collected data for the touch or gesture may be of poor quality, or the delay in feedback may confuse the user. One strategy to minimize latency is to sample or process fewer points when a touch (or touches) is moving; however, this risks losing track of small movements that can materially affect analyzing the user’s movement, such as when writing a signature. Another strategy to accommodate tracking the movement of a touch without losing data is to allow a delay in displaying the results of the tracking. If the delay is too long, though, the user may try to compensate and restart their movement – potentially confusing or further delaying the touch sensing algorithms. Providing more compute processing helps in both of these cases, but it also increases the cost and energy draw of the system.

While at CES, I experienced multi-touch with all three of these technologies. I was already familiar with the capabilities of capacitive touch. The overall capabilities and responsiveness of the more expensive vision-based system met my expectations; I expect the price point for vision-based sensing to continue its precipitous fall into the embedded space within the next few years. I had no experience with resistive-based multi-touch until the show. I was impressed by the demonstrations that I saw from SMK and Stantum. The Stantum demonstration was on a prototype module, and I did not even know the SMK module was a resistive based system until the rep told me. The pressure needed to activate a touch felt the same as using a capacitive touch system (however, I am not an everyday user of capacitive touch devices). As these technologies continue to increasingly overlap in their ability to detect and appropriately ignore multiple touches within a meaningful time period, their converging price points promise an interesting battle as each technology finds its place in the multi-touch market.

Touch with the Microsoft Surface 2.0

Tuesday, March 29th, 2011 by Robert Cravotta

The new Microsoft Surface 2.0 will become available to the public later this year. The technology has undergone significant changes from when the first version was introduced in 2007. The most obvious change is that the newer unit is much thinner – so much so that the 4-inch-thick display can be wall mounted, effectively enabling the display to act like a large-screen 1080p television with touch capability. Not only is the new display thinner, but the list price has nearly halved to $7600. While the current production versions of the Surface are impractical for embedded developers, the sensing technology is quite different from other touch technologies and may represent another approach to user touch interfaces that will compete with other forms of touch technology.

Namely, the touch sensing in the Surface is not really based on sensing touch directly – rather, it is based on using IR (infrared) sensors to visually sense what is happening around the touch surface. This enables the system to sense and interact with nearly any real-world object, not just conductive objects as with capacitive touch sensing or physical pressure as with resistive touch sensing. For example, there are sample applications of the Surface working with a real paint brush (without paint on it). The system is able to identify objects with identification markings, in the form of a pattern of small bumps, to track those objects and infer additional information about them in a way that other touch sensing technologies currently cannot.

This exploded view of the Microsoft Surface illustrates the various layers that make up the display and sensing housing of the end units. The PixelSense technology is embedded into the LCD layers of the display.

The vision sensing technology is called PixelSense Technology, and it is able to sense the outlines of objects that are near the touch surface and distinguish when they are touching the surface. Note: I would include a link to the PixelSense Technology at Microsoft, but it is not available at this time. The PixelSense Technology embedded in the Samsung SUR40 for Microsoft Surface replaces the five (5) infrared cameras that the earlier version relies on. The SUR40 for Microsoft Surface is the result of a collaborative development effort between Samsung and Microsoft. Combining the Samsung SUR40 with Microsoft’s Surface Software enables the entire display to act as a single aggregate of pixel-based sensors that are tightly integrated with the display circuitry. This shift to an integrated sensor enables the finished casing to be substantially thinner than previous versions of the Surface.

The figure highlights the different layers that make up the display and sensing technology. The layers are analogous to any LCD display except that the PixelSense sensors are embedded in the LCD layer and do not affect the original display quality. The optical sheets include material characteristics to increase the viewing angle and to enhance the IR light transmissivity. PixelSense relies on the IR light generated at the backlight layer to detect reflections from objects above and on the protection layer. The sensors are located below the protection layer.

The Surface software targets an embedded AMD Athlon II X2 Dual-Core Processor operating at 2.9GHz and paired with an AMD Radeon HD 6700M Series GPU using DirectX 11 acting as the vision processor. Applications use an API (application program interface) to access the algorithms contained within the embedded vision processor. In the demonstration that I saw of the Surface, the system fed the IR sensing to a display where I could see my hand and objects above the protection layer. The difference between an object hovering over and touching the protection layer is quite obvious. The sensor and embedded vision software are able to detect and track more than 50 simultaneous touches. Supporting the large number of touches is necessary because the use case for the Surface is to have multiple people issuing gestures to the touch system at the same time.
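A hedged sketch of the hover-versus-touch distinction described above follows: the peak IR reflectance of a sensed object is compared against two thresholds, one for noticing a hovering object and a higher one for declaring contact. The thresholds and scale are purely illustrative assumptions and are not Microsoft's actual PixelSense algorithm.

/* Hedged sketch: classifying a sensed object as touching or hovering
 * from its peak IR reflectance. The idea that contact reflects more IR
 * than a hovering object follows from the description above; the
 * thresholds and scale are purely illustrative.
 */
#include <stdio.h>

#define HOVER_LEVEL  80   /* reflectance (0-255) where an object is noticed  */
#define TOUCH_LEVEL 180   /* reflectance where the object is on the surface  */

typedef enum { OBJ_NONE, OBJ_HOVERING, OBJ_TOUCHING } object_state_t;

static object_state_t classify_ir(int peak_reflectance)
{
    if (peak_reflectance >= TOUCH_LEVEL) return OBJ_TOUCHING;
    if (peak_reflectance >= HOVER_LEVEL) return OBJ_HOVERING;
    return OBJ_NONE;
}

int main(void)
{
    int samples[] = { 30, 120, 210 };
    for (int i = 0; i < 3; i++)
        printf("peak=%d -> state=%d\n", samples[i], classify_ir(samples[i]));
    return 0;
}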

This technology offers exciting capabilities for end-user applications, but it is currently not appropriate, nor available, for general embedded designs. However, as the technology continues to evolve, the price should continue to drop and the vision algorithms should mature so that they can operate more efficiently with less compute performance required of the vision processors (most likely due to specialized hardware accelerators for vision processing). The ability to be able to recognize and work with real world objects is a compelling capability that the current touch technologies lack and may never acquire. While the Microsoft person I spoke with says the company is not looking at bringing this technology to applications outside the fully integrated Surface package, I believe the technology will become more compelling for embedded applications sooner rather than later. At that point, experience with vision processing (different from image processing) will become a valuable skillset.