Entries Tagged ‘Touch Screens’

Alternative Touch Interfaces

Tuesday, September 7th, 2010 by Robert Cravotta

Exploring the different development kits for touch interfaces provides a good example of what makes something an embedded system. To be clear, the human-machine interface between the end device and the user is not an embedded system; however, the underlying hardware and software can be. Let me explain. The user does not care how a device implements the touch interface – what matters to the user is what functions, such as multi-touch, the device supports, and what types of contexts and touch commands the device and applications can recognize and respond to.

This programmable rocker switch includes a display that allows the system to dynamically change the context of the switch.

So, while resistive and capacitive touch sensors are among the most common ways to implement a touch interface in consumer devices, they are not the only way. For example, NKK Switches offers programmable switches that integrate a push button or rocker switch with an LCD or OLED display. In addition to displaying icons and still images, some of these buttons can display a video stream. This allows the system to dynamically change the context of the button and communicate the context state to the user in an intuitive fashion. I am in the process of setting up some time with these programmable switches for a future write-up.

Another example of alternative sensing for touch interfaces is infrared sensors. The infrared proximity sensing offered by Silicon Labs and the infrared multi-touch sensing offered by Microsoft demonstrate the wide range of capabilities that infrared sensors can support at different price points.

Silicon Labs offers several kits that include infrared support. The FRONTPANEL2EK is a demo board that shows how to use capacitive and infrared proximity sensing in an application. The IRSLIDEREK is a demo board that shows how to use multiple infrared sensors together to detect not only the user’s presence, but also the location and specific motion of the user’s hand. These kits are fairly simple and straightforward demonstrations. The Si1120EK is an evaluation platform that allows a developer to explore infrared sensing in more depth, including advanced 3-axis touchless object proximity and motion sensing.

By working with these kits, I have gained a greater appreciation of the possible uses for proximity sensing. For example, an end device could place itself into a deep sleep or low power mode to minimize energy consumption. However, placing a system in the lowest power modes incurs a startup delay when reactivating the system. A smart proximity sensing system could give the system a few seconds' warning that a user might want to turn it on, speculatively activating the device so that it can respond to the user more quickly. In this scenario, the proximity sensor would probably include some method to distinguish likely power-up requests from an environment where objects or people pass near the device without any intent of powering it up.
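The speculative wake-up idea can be sketched in a few lines. This is a hypothetical Python illustration; the threshold, sample count, and the idea of requiring a monotonically closing approach are my own assumptions, not behavior taken from any particular sensor.

```python
# Sketch of speculative wake-up driven by a proximity sensor.
# Thresholds and sample counts are illustrative assumptions.

APPROACH_THRESHOLD = 40   # proximity counts suggesting a nearby hand
INTENT_SAMPLES = 3        # consecutive closing samples before pre-waking

def should_prewake(samples):
    """Return True when the last INTENT_SAMPLES readings are monotonically
    increasing (object closing in) and the latest is above the threshold,
    which is less likely for someone merely walking past the device."""
    if len(samples) < INTENT_SAMPLES:
        return False
    recent = samples[-INTENT_SAMPLES:]
    closing = all(b > a for a, b in zip(recent, recent[1:]))
    return closing and recent[-1] > APPROACH_THRESHOLD

print(should_prewake([10, 12, 11, 15]))   # passer-by: noisy, below threshold
print(should_prewake([10, 25, 38, 52]))   # steady approach: pre-wake
```

The monotonicity test is one crude way to filter out the pass-by traffic mentioned above; a production design would likely add timing and hysteresis.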

Finally, Microsoft’s Surface product demonstrates the other end of touch sensing using an infrared camera system. In essence, the Surface is a true embedded vision system – an implementation detail that the end user does not need to know anything about. In the case of the Surface table, several infrared cameras view a diffusion surface. The diffusion surface has specific optical properties that allow the system software to identify when any object touches the surface of the display. This high-end approach provides a mechanism for the end user to interact with the system using real-world objects found in the environment rather than just special implements, such as a stylus with specific electrical characteristics.

The point here is to recognize that there are many ways to implement touch interfaces – including sonic mechanisms. They may not all support touch interfaces in the same way, nor support a common minimum set of commands, but taken together, they may enable smarter devices that can better predict the end user’s true expectations and prepare accordingly. What other examples of alternative touch sensing technologies are you aware of?

What capability is the most important for touch sensing?

Wednesday, August 25th, 2010 by Robert Cravotta

I have been exploring user interfaces, most notably touch sensing interfaces, for a while. As part of this exploration effort, I am engaging in hands-on projects with touch development kits from each of over a dozen companies that offer some sort of touch solution. These kits range from simple button replacement to complex touch screen interfaces. I have noticed, as I work with each kit, that each company chose a different set of priorities to optimize and trade against in their touch solution. Different development kits offer various levels of maturity in how they simplify and abstract the complexity of making a touch interface act as more than just a glorified switch.

It appears there may be a different set of “must have” capabilities in a touch development kit depending on who is using it and what type of application they are adding it to. For button replacement kits, the relevant characteristics seem to focus on cost and robustness, with ease of development becoming more important. A common theme among button replacement kits is supporting aggregate buttons, such as a slider bar or wheel, that can act as a single control even though they consist of multiple buttons.

From my perspective, an important capability of a button replacement solution is that it simplifies the initialization and setup of the buttons while still supporting a wide range of operating configurations. A development kit that offers prebuilt constructs that aggregate the buttons into sliders and wheels is a plus, as it greatly flattens the learning curve for these compound button structures. Another valuable capability is driver software that allows the touch system to detect calibration drift and assists in, or automates, recalibration. This week’s question asks if these are sufficient leading edge capabilities, or have I missed any important capabilities for button replacement systems?
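The interpolation that such prebuilt slider constructs perform can be illustrated with a simple signal-weighted centroid. The kits hide this math behind their APIs; the Python sketch and the count values below are invented for illustration only.

```python
# Illustrative centroid interpolation for a slider built from a row of
# discrete capacitive button channels.

def slider_position(counts, span=100):
    """Map per-button signal counts to a 0..span position by taking the
    signal-weighted centroid of the button indices. Returns None when no
    button reports a signal (no touch)."""
    total = sum(counts)
    if total == 0:
        return None
    centroid = sum(i * c for i, c in enumerate(counts)) / total
    return round(centroid * span / (len(counts) - 1))

print(slider_position([0, 10, 30, 10, 0]))  # finger centered on button 2
```

Because the centroid blends neighboring channels, a five-button strip can report far more than five positions, which is how a handful of buttons becomes a usable slider.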

In contrast, I have noticed that many touch screen solutions focus on multi-touch capabilities. However, I am not convinced that multi-touch is the next greatest thing for touch screens. Rather, I think higher-level abstraction and robust gesture recognition are the killer capabilities for touch screen solutions. Part of my reasoning is the relative importance of “pinching” to zoom and rotate an object versus flicking and tracing to navigate and issue complex commands to the system. The challenges of correctly recognizing a zooming or rotating command are somewhat constrained, whereas the challenges of correctly recognizing the intended context of a flick or trace gesture are significantly more difficult because there is a wider set of conditions to which a user may apply a flick or trace gesture.

As a result, I feel that an important and differentiating capability of a touch screen solution is that it offers prebuilt drivers and filters that are able to consistently identify when a touch gesture is real and intended. It should also be able to accurately differentiate between subtle nuances in a gesture so as to enable the user to communicate a richer set of intended commands to the system. Again, this week’s question seeks to determine if this is the appropriate set of leading edge capabilities, or have I missed any important capabilities for touch screen systems?

Your answers will help direct my hands-on project, and they will help with the database and interface design for the upcoming interactive embedded processing directory.

Clarifying third generation touch sensing

Tuesday, August 24th, 2010 by Robert Cravotta

Eduardo’s response to first and second generation touch sensing provides a nice segue to clarifying third generation touch sensing capabilities. Eduardo said:

One other [classification] that I would say relates to “generations” is the single touch vs multitouch history; which I guess also relates [to] the evolution of algorithms and hardware to scan more electrodes and to interpolate the values between those electrodes. First generation: single touch and single touch matrixes; second generation: two touch, low resolution sliders; third generation: high resolution x-y sensing, multi touch detection.

While there is a “generational” shift between single- and multi-touch sensing, I am not sure the uses for multi-touch commands have reached a tipping point of adoption. My non-scientific survey of what types of multi-touch commands people know how to use yields only zoom and rotate commands. The MSDN library entry for touch provides an indication of the maturity of multi-touch interfaces; it identifies that Windows 7 supports new multi-touch gestures such as pan, zoom, rotate, two-finger tap, as well as press and tap. However, these multi-touch commands are more like manipulations where the “input corresponds directly to how the object would react naturally to the same action in the real world.”

I am excited about the possibilities of multi-touch interfaces, but I think standardizing gesture recognition for movements such as flicks, traces, and drags, which go beyond location and pen up/down data, is the relevant characteristic of third generation touch sensing. Nor is gesture recognition limited to touch interfaces. For example, there are initiatives and modules available that enable applications to recognize mouse gestures. The figure (from the referenced mouse gesture page) highlights a challenge of touch interfaces – how to provide feedback and a means for the user to visually see how to perform the touch command. The figure relies on an optional feature that displays a “mouse trail” so that the reader can understand the motion of the gesture. The example illustrates a gesture command that combines tracing with a right-up-left gesture to signal a browser application to open, in separate tabs, all the hyperlinks that the trace crossed.

Open links in tabs (end with Right-Up-Left): Making any gesture ending with a straight Right-Up-Left movement opens all crossed links in tabs.
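A recognizer of this kind can be approximated by quantizing pointer motion into cardinal strokes and matching the resulting sequence. The Python sketch below is a hypothetical illustration, not code from any actual mouse-gesture module; the jitter threshold is an arbitrary assumption.

```python
# Minimal direction-sequence matcher in the spirit of mouse-gesture
# recognizers: collapse pointer movement into R/U/L/D strokes, then
# check whether the gesture ends with Right-Up-Left.

def strokes(points, min_move=5):
    """Convert a list of (x, y) points into a run-length-collapsed string
    of cardinal strokes. Screen coordinates: y grows downward."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < min_move and abs(dy) < min_move:
            continue  # ignore jitter
        d = ("R" if dx > 0 else "L") if abs(dx) >= abs(dy) else \
            ("D" if dy > 0 else "U")
        if not out or out[-1] != d:  # collapse repeated strokes
            out.append(d)
    return "".join(out)

path = [(0, 100), (60, 100), (60, 40), (10, 40)]  # right, up, then left
print(strokes(path))                   # "RUL"
print(strokes(path).endswith("RUL"))   # ends with Right-Up-Left
```

A touch version of this logic faces exactly the feedback problem described above: without a visible trail, the user has no cue that the strokes registered.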

A common and useful mouse-based gesture that is not yet standard across touch sensing solutions is recognizing a hovering finger or pointer. Several capacitive touch solutions can technically sense a hovering finger, but the software to accomplish this type of detection is currently left to the device and application developer. An important component of detecting a hovering finger is detecting not just where the fingertip is but also what additional part of the display the rest of the finger or pointer is covering so that the application software can place the pop-up or context windows away from the user’s finger.
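A rough sketch of that placement logic might look like the following. The idea of the controller reporting an occlusion rectangle alongside the fingertip is an assumption made for illustration, since, as noted, this detection is currently left to the device and application developer.

```python
# Hedged sketch of placing a context pop-up away from a hovering finger.
# Assumes the touch solution reports the fingertip location plus a coarse
# rectangle covering the rest of the finger or hand (hypothetical).

def popup_origin(tip, occlusion, popup_w, popup_h, screen_w, screen_h):
    """Pick a pop-up origin on the side of the fingertip opposite the
    occluded region, clamped to the screen bounds."""
    ox, oy, ow, oh = occlusion          # occluded rect (x, y, w, h)
    tx, ty = tip
    # If the occlusion's center lies below/right of the tip (typical for a
    # right-handed touch), place the pop-up above/left, and vice versa.
    x = tx - popup_w if ox + ow / 2 >= tx else tx
    y = ty - popup_h if oy + oh / 2 >= ty else ty
    x = max(0, min(x, screen_w - popup_w))
    y = max(0, min(y, screen_h - popup_h))
    return (x, y)

# Fingertip at (100, 80); hand occludes the area below and right of it.
print(popup_origin((100, 80), (100, 80, 120, 160), 60, 40, 320, 240))
```

Even this toy version shows why the fingertip coordinate alone is not enough: without the occlusion estimate, the pop-up has a fair chance of opening under the user's own hand.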

While some developers will invest the time and resources to add these types of capabilities to their designs today, gesture recognition will not reach a tipping point until the software to recognize gestures, identify and filter out bad gestures, and abstract the gesture motion into a set of commands finds its way into IP libraries or operating system drivers.

Touchscreen User Interface checklist: criteria for selection

Thursday, August 19th, 2010 by Ville-Veikko Helppi

Touchscreens require more from the UI (user interface) design and development methodologies. To succeed in selecting the right technology, designers should always consider the following important topics.

1) All-inclusive designer toolkit. As the touchscreen changes the UI paradigm, one of the most important aspects of the UI design is how quickly the designer can see the behavior of the UI under development. Ideally, this is achieved when the UI technology contains a design tool that allows the designer to immediately observe the behavior of the newly created UI and modify it easily before target deployment.

2) Creation of the “wow factor.” It is essential that UI technology enables developers and even end-users to easily create clever little “wow factors” on the touchscreen UI. These technologies, which allow the rapid creation and radical customization of the UI, have a significant impact on the overall user experience.

3) Controlling the BoM (Bill of Materials). For UIs, everything is about the look and feel, ease of use, and how well the UI reveals the capabilities of the device. In some situations, adding a high-resolution screen with a low-end processor is all that’s required to deliver a compelling user experience. Equally important is how the selected UI technology reduces engineering costs related to UI work. Adopting a novel technology that enables the separation of software and UI creation enables greater user experiences without raising the BoM.

4) Code-free customization. Ideally, all visual and interactive aspects of a UI should be configurable without recompiling the software. This can be achieved by providing mechanisms to describe the UI’s characteristics in a declarative way. Such a capability affords rapid customization without any changes to the underlying embedded code base.

5) Open standard multimedia support. In order to enable the rapid integration of any type of multimedia content into a product’s UI (regardless of the target hardware) some form of API standardization must be in place. The OpenMAX standard addresses this need by providing a framework for integrating multimedia software components from different sources, making it easier to exploit silicon-specific features, such as video acceleration.

Just recently, Apple replaced Microsoft as the world’s largest technology company. This is a good example of how a company that produces innovative, user-friendly products with compelling user interfaces can fuel the growth of technology into new areas. Remember, the key isn’t necessarily the touchscreen itself – but the user interfaces running on the touchscreen. Let’s see what the vertical markets can do to take the user interface and touchscreen technology to the next level!

First and second generation touch sensing

Tuesday, August 10th, 2010 by Robert Cravotta

I recently proposed that technologies often reach a tipping point around the third generation of a technology or product, and I observed that touch technology seems to be following a similar pattern as more touch solutions integrate third generation capabilities. It is useful to understand the differences between the generations of touch sensing to better understand the impact of the emerging third generation capabilities for developers.

First generation touch sensing relies on the embedded host processor to drive the touch sensor and on the application software to know how to configure, control, and read it. The application software is aware of and manages the details of the touch sensor drivers and the analog-to-digital conversion of the sense circuits. A typical control flow to capture a touch event consists of the following steps:

1)  Activate the X driver(s)

2) Wait a predetermined amount of time for the X drivers to settle

3) Start the ADC measurement(s)

4) Wait for the ADC measurement to complete

5) Retrieve the ADC results

6) Activate the Y driver(s)

7) Wait a predetermined amount of time for the Y drivers to settle

8) Start the ADC measurement(s)

9) Wait for the ADC measurement to complete

10) Retrieve the ADC results

11) Decode and map the measured results to an X,Y coordinate

12) Apply any sensor specific filters

13) Apply calibration corrections

14) Use the result in the rest of the application code
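The steps above can be sketched as code. This is an illustrative Python sketch in which the hardware calls are placeholders (a fake sensor object stands in for the board) so the sequence can be followed end to end; no real part exposes exactly these names.

```python
# The first-generation control flow above, sketched as code. All hardware
# access functions are hypothetical placeholders.
import time

def read_touch(hw, settle_s=0.001):
    hw.activate_x_drivers()          # 1) drive the X electrodes
    time.sleep(settle_s)             # 2) wait for the drivers to settle
    raw_x = hw.adc_measure("x")      # 3-5) start, wait for, retrieve ADC
    hw.activate_y_drivers()          # 6) drive the Y electrodes
    time.sleep(settle_s)             # 7) settle again
    raw_y = hw.adc_measure("y")      # 8-10) measure the Y axis
    x, y = hw.map_to_coordinates(raw_x, raw_y)  # 11) decode to X,Y
    x, y = hw.filter_point(x, y)     # 12) sensor-specific filtering
    return hw.calibrate(x, y)        # 13) calibration; 14) caller uses it

class FakeSensor:
    """Stand-in hardware so the sequence can run without a board."""
    def activate_x_drivers(self): pass
    def activate_y_drivers(self): pass
    def adc_measure(self, axis): return 512 if axis == "x" else 300
    def map_to_coordinates(self, rx, ry):
        return rx * 320 // 1024, ry * 240 // 1024  # 10-bit ADC to pixels
    def filter_point(self, x, y): return x, y
    def calibrate(self, x, y): return x + 1, y - 1  # trivial offset

print(read_touch(FakeSensor(), settle_s=0))  # -> (161, 69)
```

Note how much of this burden sits on the application: every settle time, measurement, and correction is the host's problem, which is precisely what the later generations encapsulate.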

Second generation touch sensing usually encapsulates this sequence of steps (activating the drivers, measuring the sensing circuits, and applying the filters and calibration corrections) into a single touch event. Second generation solutions may also offload the sensor calibration function, although the application software may need to know when and how to initiate the calibration function. A third generation solution may provide automatic calibration so that the application software does not need to know when or how to recalibrate the sensor because of changes in the operating environment (more in a later article).

A challenge for providing touch sensing solutions is striking a balance between the needs of developers who want low and high levels of abstraction. For low-level design considerations, the developer needs intimate knowledge of the hardware resources and access to the raw data to build and use custom software functions that extend the capabilities of the touch sensor or even improve its signal-to-noise ratio. For developers using the touch sensor as a high-level device, it may be enough to work through an API (application programming interface) to configure the touch sensor, as well as turn it on and off.

The second and third generation touch API typically includes high-level commands to enable and disable, calibrate, and read and write the configuration registers for the touch sensor as well as low-level commands to access the calibration information for the touch sensor. The details to configure the sensor and the driver for event reporting differ from device to device. Another important capability that second and third generation solutions may include is the ability to support various touch sensors and display shapes without requiring the developer to rework the application code. This is important because for many contemporary touch and display solutions, the developer must be separately aware of the display, touch sensing, and controller components because there are not many options for fully integrated touch and display systems. In short, we are still in the Wild West era of embedded touch sensing and display solutions.
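To make the contrast concrete, a second or third generation API surface might look something like the following sketch. All class, method, and register names here are hypothetical, since, as the text notes, the details differ from device to device.

```python
# Hypothetical second/third-generation touch controller API surface,
# illustrating the high-level command set the text describes.

class TouchController:
    def __init__(self):
        self.enabled = False
        # Invented configuration registers for illustration only.
        self.registers = {"sensitivity": 50, "report_rate_hz": 100}

    def enable(self):
        self.enabled = True

    def disable(self):
        self.enabled = False

    def write_config(self, name, value):
        if name not in self.registers:
            raise KeyError(name)
        self.registers[name] = value

    def read_config(self, name):
        return self.registers[name]

    def calibrate(self):
        # Third-generation parts may run this automatically; second
        # generation typically leaves the "when" to application software.
        return {"offset_x": 0, "offset_y": 0}

ctl = TouchController()
ctl.enable()
ctl.write_config("sensitivity", 65)
print(ctl.read_config("sensitivity"))  # 65
```

The point of the sketch is the shape of the interface: enable/disable, register access, and calibration replace the fourteen-step scan loop of a first generation design.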

Impacts of touchscreens for embedded software

Thursday, August 5th, 2010 by Ville-Veikko Helppi

No question, all layers of the embedded software are impacted when a touchscreen is used on a device. A serious challenge is finding space to visually show a company’s unique brand identity, as it is the software running on the processor that places the pixels on the screen. From the software point of view, the touchscreen removes one abstraction level between the user and the software. For example, many devices have removed ‘OK’ buttons from dialogs because the user can tap the whole dialog instead of clicking the button.

Actually, software plays an even more critical role as we move into a world where the controls on a device are virtual rather than physical. At the lowest level of software, the touchscreen driver provides mouse emulation, which basically means the same as clicking a mouse cursor on certain pixels. However, the mouse driver reports its data as “relative” while the touchscreen driver reports its data as “absolute.” Writing the touchscreen driver is usually trivial, as this component only passes information from the physical screen to higher levels of software. The only inputs the driver needs are a Boolean indicating whether the screen is touched and the x- and y-coordinates where the touch took place.
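The driver's job can be captured in a few lines: a Boolean touch state plus absolute coordinates, versus a mouse's relative deltas. This is an illustrative Python sketch, not an actual driver interface.

```python
# Contrast between an absolute touch report and a relative mouse report.
from collections import namedtuple

TouchReport = namedtuple("TouchReport", "touched x y")    # absolute position
MouseReport = namedtuple("MouseReport", "buttons dx dy")  # relative motion

def to_cursor(report, cursor):
    """Touch reports set the cursor position; mouse reports move it."""
    if isinstance(report, TouchReport):
        return (report.x, report.y) if report.touched else cursor
    return (cursor[0] + report.dx, cursor[1] + report.dy)

pos = to_cursor(TouchReport(True, 120, 80), (0, 0))  # jump to the touch
pos = to_cursor(MouseReport(0, -5, 3), pos)          # nudge by a delta
print(pos)  # (115, 83)
```

The same three fields (touched, x, y) are all the higher layers receive, which is why everything richer, such as gestures, has to be synthesized above the driver.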

At the operating system level, a touchscreen user interface means more frequent operating system events than the typical icon or widget-based user interface. In addition to a touchscreen, there may also be a variety of different sensors (e.g., accelerometers) inputting stimuli to the operating system through their drivers. Generally, a standardized operating system can give confidence and consistency to device creation, but if it needs to be changed, the cost of doing so can be astronomical because of the need to retest the compatibility of the other components.

The next layer is where the middleware components of the operating system are found, or in this context, where the OpenGL/ES library performs. Various components within this library do different things, from processing raw data with mathematical algorithms and providing a set of APIs for drawing, to interfacing between software and hardware acceleration and providing services such as rendering and font engines. While this type of standardization is generally a good thing, in some cases it can lead to non-differentiation – in the worst case, it might even kill the inspiration for innovative user interface creation. Ideally, a standardized open library, together with rich and easily customizable user interface technology, yields superb results.

The application layer is the most visible part of the software and forms the user experience. It is here where developers must ask:

1) Should the application run in full-screen mode or enable widgets distributed around the screen?

2) What colors, themes, and templates best illustrate the behavior of the user interface?

3) How small or large should the user interface elements be?

4) In what ways will the user interface elements behave and interact?

5) How intuitive do I want to make this application?

Compelling UI design tools are essential for the rapid creation of user interfaces.

In the consumer space, there are increasingly more competitive brands with many of the same products and product attributes. Manufacturers are hard-pressed to find any key differentiator among this sea of “me too” offerings. One way to stand out is by delivering a rich UI experience via a touchscreen display.

We are starting to see this realization play out in all types of consumer goods, even in white goods as pedestrian as washing machines. There are now innovative display technologies replacing physical buttons and levers. Imagine a fairly standard washing machine with a state-of-the-art LCD panel. This would allow the user to easily browse and navigate all the functions on that washing machine – and perhaps learn a new feature or two. With an attractive touchscreen display, any customization work can be accomplished simply by changing the software running on the display. Therefore, things like changing the branding and adding compelling video clips and company logos all become much simpler because everything is driven by software. If the manufacturer uses the right technology, they may not even need to modify the software to change the user experience.

Driven by the mobile phone explosion, the price point of display technology has come down significantly. As a result, washing machine manufacturers can add more perceived value to their product without necessarily adding too much to the BoM (bill of materials). Thus, before the machine leaves the factory, a display technology may increase the BoM by $30, but this could increase the MSRP by at least $100. No doubt, this can have a huge impact on the company’s bottom line. This results in a “win-win” for the manufacturer and for the consumer. The manufacturer is able to differentiate the product more easily and in a more cost effective manner, while the product is easier to use with a more enhanced UI.

The final part in this four-part series presents a checklist for touchscreen projects.

The User Interface Paradigm Shift

Thursday, July 22nd, 2010 by Ville-Veikko Helppi

Touchscreens are quickly changing the world around us. Tapping an image on a touchscreen requires much less thinking and relies more on user intuition than clicking on it. Touchscreens are also said to be the fastest pointing method available, but that isn’t necessarily true – it all depends on how the user interface is structured. For example, most users accept a ten millisecond delay when scrolling with a cursor and mouse, but with touchscreens, this same period of time feels much longer, so the user experience is perceived as less smooth. Also, multi-touch capabilities are not possible with mouse emulation, at least not as intuitively as with a touchscreen. The industry has done a good job providing a screen pen or stylus to assist the user in selecting the right object on smaller screens, thus silencing the critics who say the touchscreen is far from ideal as a precise pointing method.

The touchscreen has changed the nature of UI (user interface) element transitions. When looking at the motions of different UI elements, these transitions can make a difference in device differentiation and, if implemented properly, tell a compelling story. Every UI element transition must have a purpose and context, as it usually reinforces the UI elements. Something as simple as buffers can be effective at giving a sense of weight to a UI element – and moving these types of elements without a touchscreen would be awkward. For UI creation, the best user experience is achieved when UI element transitions are natural and consistent with other UI components (e.g., widgets, icons, menus) and deliver a solid, tangible feel for that UI. Also, 3D effects during the motion provide a far better user experience.

3D layouts enable more touchscreen friendly user interfaces.

Recent studies in human behavior, along with documented consumer experiences, indicate that the gestures of modern touchscreens have expanded the ways users can control a device through its UI. As we have seen with the “iPhone phenomenon,” the multi-touch screen changes the reality behind the display, allowing new ways to control the device through hand-eye coordination (e.g., pinching, zooming, rotating). But it’s not just the iPhone that’s driving this change. We’re seeing other consumer products trending toward simplifying the user experience and enhancing personal interaction. In fact, e-Books are perfect examples. Many of these devices have a touchscreen UI where the user interacts with the device directly at an almost subconscious level. This shift in improved user experience has also introduced the idea that touchscreens reduce the number of user inputs required for the basic functioning of a device.

The third part in this four-part series explores the impact of touchscreens on embedded software.

Is the third generation the charm?

Tuesday, July 20th, 2010 by Robert Cravotta

In a recent conversation with Ken Maxwell, President of Blue Water Embedded, Ken mentioned several times how third-generation touch controllers are applying dedicated hardware resources to encapsulate and offload some of the processing necessary to deliver robust touch interfaces. We talked about his use of the term third-generation as he seemed not quite comfortable with using it. However, I believe it is the most appropriate term, is consistent with my observations about third generation technologies, and is the impetus for me doing this hands-on project with touch development kits in the first place.

While examining technology’s inflections, I have noticed that technological capability is only one part of an industry inflection based around that technology. The implementation must also: hide complexity from the developer and user; integrate subsystems to deliver lower costs, shrink schedules, and simplify learning curves; as well as pull together multi-domain components and knowledge into a single package. Two big examples of inflection points occurred around the third generation of the technology or product: Microsoft Windows and the Apple iPod.

Windows reached an inflection point at version 3.0 (five years after version 1.0 was released) when it simplified the management of the vast array of optional peripherals available for the desktop PC and hid much of the complexity of sharing data between programs. Users could already transfer data among applications, but they needed to use explicit translation programs and tolerate the loss of data from special features. Windows 3.0 hid the complexity of selecting those translation programs and provided a data-interchange format and mechanism that further improved users’ ability to share data among applications.

The third generation iPod reached an industry inflection point with the launch of the iTunes Music Store. The “world’s best and easiest to use ‘jukebox’ software” introduced a dramatically simpler user interface that needed little or no instruction to get started and introduced more people to digital music.

Touch interface controllers and development kits are at a similar third generation crossroads. First-generation software drivers for touch controllers required the target or host processor to drive the sensing circuits and perform the coordinate mapping. Second-generation touch controllers freed up some of the processing requirements of the target processor by including dedicated hardware resources to drive the sensors, and they abstracted the sensor data the target processor worked with to pen-up/down and coordinate location information. Second-generation controllers still require significant processing resources to manage debounce processing as well as to reject bad touches such as palm and face presses.

Third-generation touch controllers integrate even more dedicated hardware and software to offload more context processing from the target processor to handle debounce processing, reporting finger or pen flicking inputs, correctly resolving multi-touch inputs, and rejecting bad touches from palm, grip, and face presses. Depending on the sensor technology, third-generation controllers are also going beyond the simple pen-up/down model by supporting hover or mouse-over emulation. The new typing method supported by Swype pushes the pen-up/down model yet another step further by combining multiple touch points within a single pen-up/down event.
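Debounce processing, one of the jobs these controllers take over, can be illustrated with a simple stability counter: a change in the raw touch line is only reported after it has persisted for several consecutive samples. The Python sketch and sample counts below are arbitrary illustrations, not any vendor's algorithm.

```python
# Illustrative debounce of a raw touch line: a touch (or release) is only
# reported after the raw state has been stable for stable_count samples.

def debounce(raw_samples, stable_count=3):
    """Yield the debounced state for each raw sample."""
    state, candidate, run = False, False, 0
    for raw in raw_samples:
        if raw == candidate:
            run += 1
        else:
            candidate, run = raw, 1   # new candidate state, restart count
        if candidate != state and run >= stable_count:
            state = candidate         # candidate held long enough: commit
        yield state

raw = [0, 1, 0, 1, 1, 1, 1, 0, 0, 0]
print([int(s) for s in debounce(bool(r) for r in raw)])
# [0, 0, 0, 0, 0, 1, 1, 1, 1, 0]
```

Note how the early 1-0-1 chatter never reaches the output; offloading exactly this kind of per-sample bookkeeping is what distinguishes the later controller generations.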

Is it a coincidence that touch interfaces seem to be crossing an industry inflection point with the advent of third generation controllers, or is this a relationship that also manifests in other domains? Does your experience support this observation?

[Editor's Note: This was originally posted on Low-Power Design]

Get in Touch with Your Inner User Interface

Thursday, July 15th, 2010 by Ville-Veikko Helppi

Touchscreens have gone from fad to “must have” seemingly overnight. The rapid growth of touchscreen user interfaces in mobile phones, media players, navigation systems, point-of-sale, and various other devices has changed the landscape in a number of vertical markets. In fact, original device manufacturers (ODMs) see the touchscreen as a way to differentiate their devices and compete against one another in an ever-expanding marketplace. But ODMs take note – a touchscreen alone will not solve the problem of delivering a fantastic user experience. If the underlying user interface is not up to snuff, the most amazing whiz-bang touchscreen won’t save you.

Touchscreens have come a long way from the early ’90s, when they were used in primitive sales kiosks and public information displays. These devices were not cutting-edge masterpieces, but they did help jump-start the industry and expose large audiences (and potential future users) to the possibilities of what this type of technology might offer. It wasn’t until a decade later that consumers saw the major introduction of touchscreens – and the reason for this was pretty simple: the hardware was just too big and too expensive. Touchscreens became more usable and more pervasive only after the size of the hardware shrank significantly.

Today there are a host of options in touchscreen technology. These include resistive, projected-capacitive, surface-capacitive, surface acoustic wave, and infrared, to name a few. According to DisplaySearch, a display market research organization, resistive displays now occupy 50 percent of the market due to their cost-effectiveness, consistency, and durable performance, while projected-capacitive has 31 percent of the market. In total, more than 600 million touchscreens shipped in 2009. DisplaySearch also forecasts that projected-capacitive touchscreens will soon pass resistive screens as the number one touchscreen technology (measured by revenues) because the Apple iPad utilizes projected-capacitive touchscreen technology. And finally, according to Gartner, the projected-capacitive touchscreen segment is estimated to hit 1.3 billion units by 2012, which means a 44 percent compounded annual growth rate. These estimates indicate serious growth potential in the touchscreen technology sector.

However, growth ultimately hinges on customer demand. Some device categories, such as safety- and mission-critical systems, still do not utilize the capabilities found in touchscreens. In mission-critical systems, there is very little room for input mistakes made by the user, and touchscreens are often considered a more fault-sensitive input method when compared to old-fashioned button- and switch-based input mechanisms. For some companies, the concern is not faulty user inputs but cost; adding a $30 touchscreen is not an option when it won’t add any value to the product’s price point.

So what drives touchscreen adoption? Adoption is mainly driven by:

  1. Lowering the cost of the hardware
  2. Testing and validating new types of touchscreen technologies in the consumer space, and then pushing those technologies into other vertical markets
  3. The aesthetic and ease-of-use appeal of a touchscreen – a sexier device gains more attention than its not-so-sexy non-touchscreen cousin

This is true regardless of the type of device, whether it’s a juice blender, glucose monitor, or infotainment system in that snazzy new BMW.

The second part in this four-part series explores the paradigm shift in user interfaces that touchscreens are causing.

An example of innovation for embedded systems

Tuesday, July 6th, 2010 by Robert Cravotta

Embedded systems are, for the most part, invisible to the end user, yet the end application would not work properly without them. So what form does innovation take for something that is only indirectly visible to the end user? Swype’s touch screen-based text input method is such an example. The text entry method already ships on the Samsung Omnia II and arrives July 15 on the Motorola Droid X.

Swyping is a text entry method for touch screens in which the user places a finger on the first letter of the intended word and, without lifting it, traces through each letter of the word in turn, lifting only upon reaching the final letter. For example, the figure shows the trace for the word quick. This type of text entry requires the embedded engine to recognize a wider range of subtle motions of the user’s finger and couple that with a deeper understanding of words and language. The embedded engine needs to be able to infer and make educated guesses about the user’s intended word.
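One way to picture the matching problem is as a subsequence search: the trace passes over many keys, and the engine must find dictionary words whose letters appear, in order, along that path. The sketch below is purely illustrative – the function names, scoring rules, and dictionary structure are assumptions, not Swype’s actual algorithm.

```python
def matches_trace(word, trace):
    """A word is a candidate if the trace starts on its first letter,
    ends on its last, and the word's letters appear in order along the
    trace (intervening letters are keys the finger merely passed over)."""
    if not word or word[0] != trace[0] or word[-1] != trace[-1]:
        return False
    i = 0
    for ch in trace:
        if i < len(word) and ch == word[i]:
            i += 1  # matched the next letter of the word
    return i == len(word)

def candidates(trace, dictionary):
    """Return every dictionary word consistent with the traced path."""
    return [w for w in dictionary if matches_trace(w, trace)]
```

For a trace like "qwertyuick" (a finger sliding from q along the top row to u, then down through i and c to k), only "quick" survives from a small dictionary, while "quit" fails on the final letter and "wick" fails on the first. A production engine would add geometric and language-model scoring on top of this filter to rank the remaining candidates.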

Inferring or predicting what word a user wishes to enter into the system is not new. Both the iPhone and the IBM Simon (from 1993) use algorithms to predict which letter the user is likely to press next and to compensate for finger-tap inaccuracy. However, swyping takes the predictive algorithm a step further and widens the text entry system’s ability to accommodate even more imprecision in the user’s finger position, because each swyping motion, start to finish, is associated with a single word.
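Tap compensation of this kind can be sketched as a trade-off between physical proximity and linguistic likelihood: a key scores higher the closer the tap lands to it and the more probable its letter is given the text so far. The layout coordinates and scoring formula below are illustrative assumptions, not the algorithm used by any actual product.

```python
import math

# Partial QWERTY layout, keyed by letter, with hypothetical unit-grid
# key-center coordinates (x, y).
KEY_CENTERS = {"q": (0, 0), "w": (1, 0), "e": (2, 0),
               "a": (0.3, 1), "s": (1.3, 1)}

def likely_key(tap_xy, next_letter_prob):
    """Pick the key that best balances proximity to the tap point
    against the language model's probability for that letter."""
    best, best_score = None, float("-inf")
    for key, (kx, ky) in KEY_CENTERS.items():
        dist = math.hypot(tap_xy[0] - kx, tap_xy[1] - ky)
        # Closer keys and more probable letters score higher; the
        # floor of 0.01 keeps unlikely letters reachable.
        score = next_letter_prob.get(key, 0.01) / (1.0 + dist)
        if score > best_score:
            best, best_score = key, score
    return best
```

A tap landing between two keys is then resolved by the language model: if the text so far makes “e” far more likely than “w”, an ambiguous tap between them selects “e”.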

In essence, the embedded system is taking advantage of available processing cycles to extract more information from the input device (the touch screen, in this case) and correlate it with a larger and more complex database of knowledge (here, a dictionary and knowledge of grammar). This is analogous to innovations in other embedded control systems. For example, motor controllers deliver better energy efficiency because they collect more information about the system and environment they monitor and control. Motor controllers measure more inputs, both directly and inferred, which lets them understand not just the instantaneous condition of the motor but also environmental and operational trends, so the controller can adjust the motor’s behavior more effectively and extract greater efficiency than earlier controllers could. They correlate this additional environmental information with a growing database of knowledge about how the system operates under different conditions and how to adjust for variations.

The Swype engine, as well as other engines like it, supports one more important capability: it can learn the user’s preferences and unique usage patterns and adjust to accommodate them. As embedded systems embody more intelligence, they move away from being systems that must know everything at design time and move closer to being systems that can learn and adjust for the unique idiosyncrasies of their operating environment.
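This kind of adaptation can be as simple as a per-user frequency count layered over the base dictionary: every word the user accepts is recorded, and ambiguous traces are resolved in favor of that user’s history. The class below is a minimal sketch of the idea, assuming a hypothetical structure rather than Swype’s implementation.

```python
from collections import Counter

class AdaptiveDictionary:
    """Base dictionary plus a per-user usage count for ranking."""

    def __init__(self, words):
        # Seed every base word with a count of 1.
        self.counts = Counter({w: 1 for w in words})

    def record_choice(self, word):
        """Learn from each word the user accepts (adding it if new)."""
        self.counts[word] += 1

    def rank(self, candidates):
        """Order ambiguous candidates by this user's history,
        most frequently chosen first."""
        return sorted(candidates, key=lambda w: -self.counts[w])
```

After a user has picked “put” a few times, a trace that could equally mean “pit” or “put” resolves to “put” first, without any change to the design-time dictionary.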

[Editor's Note: This was originally posted on Low-Power Design]