A recent video of a fly activating commands on a touchscreen provides an excellent example of a touchscreen implementation that is too sensitive. In the video, the computing system interprets the fly's movements as finger taps and drags. Several times the fly's movement causes sections of text to be selected, and at another point you can see selected text targeted for a drag-and-drop command. Even when the fly merely bounces off the touchscreen surface for an instant, the system recognizes that brief contact as a touch command.
For obvious reasons, such oversensitivity in a touchscreen application is undesirable in most cases – unless, of course, the application is meant to detect and track the behavior of flies making contact with a surface. The idea that a fly could accidentally delete your important files, or even send sensitive files to the wrong person (thanks to field auto-fill technology), is unpleasant at best.
Touchscreens have been available as an input device for decades, so why is an example of a fly issuing commands only surfacing now? First, the fly in the video is walking and landing on a capacitive touchscreen. Capacitive touchscreens became far more prevalent in consumer products after the launch of the Apple iPhone in 2007. Because capacitive touchscreens rely on the conductive properties of the human finger, a touch command does not necessarily require a minimum amount of physical force to activate.
This contrasts with resistive touchscreens, which do require a minimum amount of physical force to press two layers on the touchscreen surface into contact with each other. If the sensor in the video were a screen with a resistive touch layer over it, the fly would most likely never cause the two layers to make contact by walking across the sensor surface; it might manage to do so only by forcefully colliding with the screen area.
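To make the contrast concrete, here is a minimal sketch in C of the kind of minimum-force gate a resistive controller might apply. It assumes a hypothetical 4-wire controller that derives a pressure-like "Z" value from the plate resistances; the names and threshold value are illustrative, not taken from any specific part.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical minimum-force gate for a 4-wire resistive touch controller.
 * Many resistive controllers derive a pressure-like "Z" value from the
 * X/Y plate resistances; a touch is only reported when Z exceeds a
 * threshold, which is why a light-footed fly never registers. */

#define Z_MIN_FORCE 40u  /* illustrative tuning value: minimum pressure */

bool is_valid_touch(uint16_t z_pressure)
{
    return z_pressure >= Z_MIN_FORCE;
}
```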
Touchscreens that are too sensitive are analogous to keyboards that do not implement an adequate debounce function for the keys. Just as keyboards can debounce key presses, capacitive touch sensors have ways to mitigate spurious inputs such as flies landing on the sensor surface. There are two areas within the sensing system that a designer can work with to filter out unintended touches.
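As an illustration of that analogy, here is a minimal debounce sketch in C applied to a touch channel rather than a key: a contact must persist for several consecutive scan frames before it is reported, so a momentary bounce never reaches the gesture layer. The frame count and type names are hypothetical tuning choices.

```c
#include <stdint.h>
#include <stdbool.h>

/* Simple debounce: a touch is reported only after it persists for
 * DEBOUNCE_FRAMES consecutive scan frames, so a momentary bounce off
 * the surface is never passed on as a touch command. */

#define DEBOUNCE_FRAMES 3u

typedef struct {
    uint8_t count;     /* consecutive frames the contact has been present */
    bool    reported;  /* whether the touch has been reported downstream  */
} debounce_t;

bool debounce_touch(debounce_t *d, bool raw_contact)
{
    if (raw_contact) {
        if (d->count < DEBOUNCE_FRAMES)
            d->count++;
    } else {
        d->count = 0;          /* contact lost: start over */
        d->reported = false;
    }

    if (d->count >= DEBOUNCE_FRAMES)
        d->reported = true;

    return d->reported;
}
```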
The first area to address is the gain level: set it so that noise spikes and small conductive objects (like the feet and body of a fly) do not cross the count threshold that the controller interprets as a touch. Another symptom of an oversensitive capacitive touch sensor is that it may classify a finger hovering over the touch surface as a touch before the finger makes contact. Many design specifications for touch systems explicitly state an acceptable distance above the touch surface that can be recognized as a touch (on the order of a fraction of a millimeter). I would share a template for specifying the sensitivity of a touchscreen, but the sources I checked with consider that template proprietary information.
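A minimal sketch in C of that kind of threshold check for a single sensing node might look like the following. The counts are assumed to be deltas from a no-touch baseline, and a hysteresis pair of thresholds keeps a signal hovering near the limit from chattering on and off; both values are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical threshold check for one sensing node. "counts" is the
 * measured delta from the node's no-touch baseline. Hysteresis keeps a
 * marginal signal (noise spike, fly's foot, hovering finger) from
 * toggling the touch state frame to frame. */

#define TOUCH_THRESHOLD    120  /* counts needed to declare a touch    */
#define RELEASE_THRESHOLD   80  /* counts below which a touch releases */

bool update_node_state(bool was_touched, int16_t counts)
{
    if (was_touched)
        return counts > RELEASE_THRESHOLD;  /* stay touched until clearly released */
    return counts > TOUCH_THRESHOLD;        /* require the full threshold to assert */
}
```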
One reason a touch system might be too sensitive is that the gain is set high enough for the system to recognize a stylus with a small conductive material within its tip. A stylus tip is much smaller than a human finger, and without that extra sensitivity the sensor will fail to detect the stylus tip near the display surface, making the stylus unusable. Another reason a touch system could be too sensitive is to accommodate a use case in which the user wears gloves. In that case, the user's finger never actually makes contact with the surface (the glove does), and the sensor system must detect the finger through the glove even though it is effectively hovering over the touch surface.
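One way to picture that trade-off is a per-mode threshold table: stylus and glove modes need lower thresholds (higher effective sensitivity), which is exactly what also lets smaller conductive objects register. The modes and values in this sketch are hypothetical.

```c
#include <stdint.h>

/* Hypothetical per-mode thresholds illustrating the trade-off: stylus
 * and glove modes lower the bar for what counts as a touch, so small
 * conductive objects become more likely to cross it too. */

typedef enum { MODE_FINGER, MODE_STYLUS, MODE_GLOVE } touch_mode_t;

static const int16_t touch_threshold[] = {
    [MODE_FINGER] = 120,  /* bare finger: strong signal, high threshold */
    [MODE_STYLUS] =  45,  /* small conductive tip: much weaker signal   */
    [MODE_GLOVE]  =  35,  /* finger hovers behind glove: weakest signal */
};
```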
The other area a designer should address to mitigate spurious and unintended touches is shape processing. Capacitive touch sensing is similar to image or vision processing in that the raw data consists of a reading for each "pixel" in the touch area for each cycle or frame of input processing. In addition to looking for peaks or valleys in the pixel values, the shape processing can examine the pixels around each peak or valley to confirm that their shape and size are consistent with what it expects. Shapes outside the expected set, such as six tiny spots clustered in the shape of a fly's body, can be flagged and ignored by the system.
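A sketch of such a filter, assuming a prior connected-component labeling step has already grouped active pixels into blobs, might check each blob's area and aspect ratio against a fingertip profile. All the limits below are illustrative tuning values for an example sensor pitch.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical blob filter: reject any blob whose area or aspect ratio
 * falls outside the range expected of a fingertip. A fly's footprint,
 * a cluster of tiny spots, fails the minimum-area test. */

typedef struct {
    uint16_t area;    /* number of active pixels in the blob */
    uint16_t width;   /* bounding-box width in pixels        */
    uint16_t height;  /* bounding-box height in pixels       */
} blob_t;

#define MIN_FINGER_AREA   6u  /* smaller: noise, fly feet        */
#define MAX_FINGER_AREA  40u  /* larger: palm or cheek           */
#define MAX_ASPECT_X10   25u  /* aspect-ratio limit, scaled by 10 */

bool blob_is_finger(const blob_t *b)
{
    uint16_t longer  = (b->width > b->height) ? b->width  : b->height;
    uint16_t shorter = (b->width > b->height) ? b->height : b->width;

    if (b->area < MIN_FINGER_AREA || b->area > MAX_FINGER_AREA)
        return false;  /* too small (fly) or too big (palm) */
    if (shorter == 0 || (longer * 10u) / shorter > MAX_ASPECT_X10)
        return false;  /* too elongated to be a fingertip   */
    return true;
}
```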
This also implies that the shape processing should track context: it needs to remember information between data frames and follow the behavior of each blob of pixels in order to recognize gestures such as pinch and swipe. This same tracking is the basis of cheek and palm rejection, as well as ignoring a user's fingers gripping the edge of the touch display on handheld devices.
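A minimal sketch of the per-touch state such tracking might keep between frames is shown below; the swipe test is a deliberately crude example of how gesture recognition falls out of position history. The structure and thresholds are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-touch track record persisted between frames. Gesture
 * recognition and palm/cheek rejection both rely on this history: a
 * swipe is a track whose position moves steadily in one direction, and
 * a palm is a track whose blob stays large and stationary at an edge. */

#define MAX_HISTORY 8u

typedef struct {
    uint8_t id;              /* stable identity across frames      */
    bool    active;
    bool    rejected;        /* flagged as palm/cheek/grip contact */
    uint8_t len;             /* number of valid history entries    */
    int16_t x[MAX_HISTORY];  /* recent positions, newest last      */
    int16_t y[MAX_HISTORY];
} track_t;

/* Crude swipe test: total displacement over the history exceeds a
 * minimum travel distance along at least one axis. */
bool track_is_swipe(const track_t *t, int16_t min_travel)
{
    if (!t->active || t->rejected || t->len < 2)
        return false;
    int16_t dx = t->x[t->len - 1] - t->x[0];
    int16_t dy = t->y[t->len - 1] - t->y[0];
    if (dx < 0) dx = -dx;
    if (dy < 0) dy = -dy;
    return (dx >= min_travel) || (dy >= min_travel);
}
```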
One reason a contemporary system, such as the one in the video, might not properly filter out touches from a fly is that the processor running the shape-processing algorithm lacks the bandwidth to perform the more complex filtering in the time frame allotted. In addition to implementing additional code to handle more complex tracking and filtering, the system has to allocate enough processing resources to complete those tasks. As the number of touches the controller can detect and track increases, the amount of processing required to resolve all of them grows faster than linearly. Part of the additional complexity comes from determining which blobs are associated with other blobs and which are independent, and this correlation function requires multi-frame tracking.
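To see where the faster-than-linear growth comes from, consider a sketch of the frame-to-frame correlation step: each new blob is compared against every carried-over track, so the nested loops cost on the order of n × m distance checks per frame. A real controller would also have to resolve conflicting matches, adding further work; this greedy version is only an illustration.

```c
#include <stdint.h>

/* Hypothetical nearest-neighbor association between the m blobs seen in
 * the current frame and the n tracks carried over from previous frames.
 * The nested loops make the cost grow with n * m, one reason the
 * processing load rises faster than linearly with the touch count. */

#define NO_MATCH 0xFFu

typedef struct { int16_t x, y; } point_t;

void associate_blobs(const point_t *tracks, uint8_t n,
                     const point_t *blobs,  uint8_t m,
                     uint8_t *match,        /* match[i] = track for blob i */
                     uint32_t max_dist_sq)  /* gating distance, squared    */
{
    for (uint8_t i = 0; i < m; i++) {
        uint32_t best = max_dist_sq;
        match[i] = NO_MATCH;
        for (uint8_t j = 0; j < n; j++) {
            int32_t dx = blobs[i].x - tracks[j].x;
            int32_t dy = blobs[i].y - tracks[j].y;
            uint32_t d = (uint32_t)(dx * dx + dy * dy);
            if (d < best) {  /* closest track within the gate wins */
                best = d;
                match[i] = j;
            }
        }
    }
}
```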
This video is a good reminder that what is good enough in the lab might be completely insufficient in the field.
Tags: Debounce, Sensitivity, Touch Interface