Articles by Robert Cravotta

As a former Technical Editor covering Embedded Processing at EDN, Robert has been following and commenting on the embedded processing space since 2001 (see article index). His expertise includes software development and system design using microprocessors, microcontrollers, digital signal processors (DSPs), multiprocessor architectures, processor fabrics, coprocessors, and accelerators, plus embedded cores in FPGAs, SOCs, and ASICs. Robert's embedded engineering background includes 16 years as a Member of the Technical Staff at Boeing and Rockwell International working on path-finding avionics, power and laser control systems, autonomous vehicles, and vision sensing systems.

How do you exercise your orthogonal thinking?

Wednesday, December 29th, 2010 by Robert Cravotta

How are Christmas and Halloween the same? The intended answer to this question requires you to look at the question from different angles to find the significant relationship between these seemingly unrelated events. In fact, to be a competent problem solver, you often need to be able to look at a problem from multiple angles and find a way to take advantage of a relationship between different parts of the problem that might not be immediately obvious. If the relationship was obvious, there might not be a problem to solve.

I have found over the years that doing different types of puzzles and thinking games often helps me juggle the conditions of a problem around and find that elusive relationship that makes the problem solvable. While I do not believe being able to solve Sudoku puzzles will make you smarter, I do believe that practicing Sudoku puzzles in different ways can help exercise your “cognitive muscles” so that you can more easily reorganize difficult and abstract concepts in your mind and find the critical relationship between the different parts.

There are several approaches to solving Sudoku puzzles and each requires a different set of cognitive wiring to perform competently. One approach, and one that I see most electronic versions of the puzzle support, involves penciling in all of the possible valid numbers in each square and using a set of rules to eliminate numbers from each square until there is one valid answer. Another approach finds the valid numbers without using the relationships between the “penciled” numbers. Each approach exercises my thought process in very different ways, and I find that switching between them provides a benefit when I am working on a tough problem.

I believe being able to switch gears and represent data in equivalent but different representations is a key skill for effective problem solving. In the case of Christmas and Halloween, rather than looking at the social context associated with each day, looking at the date of each day – October 31 and December 25 – can suggest a non-obvious relationship.

I find that many of the best types of puzzles or games for exercising orthogonal thinking engage a visual mode of looking at the problem. The ancient board game of Go is an excellent example. The more I play Go, the more abstract relationships I am able to recognize and – most surprisingly – apply to life and problem solving. If you have never played Go, I strongly recommend it.

Another game in which I find a lot of value for exercising orthogonal thinking is Contract Bridge – mostly because it is a game that involves incomplete information – much like real-life problems – and relies on the ability of the players to communicate information with each other within a highly constrained vocabulary. Oftentimes, the toughest design problems are tough precisely because it is difficult to verbalize or describe what the problem actually is.

As for the relationship between October 31 and December 25, it is interesting that the abbreviations for these two dates also correspond to notation of the same exact number in two different number bases – Oct(al) 31 is the same value as Dec(imal) 25.
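The pun checks out numerically; a quick sketch (Python is used here purely for illustration):

```python
# "Oct 31" read as octal 31 equals "Dec 25" read as decimal 25.
oct_31 = int("31", 8)    # 3*8 + 1 = 25
dec_25 = 25
print(oct_31 == dec_25)  # True
```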

These examples are some of the ways I exercise my orthogonal thinking. What are your favorite ways to stretch your mind and practice switching gears on the same problem?

How do you mitigate single-point failures in your team’s skillset?

Wednesday, December 22nd, 2010 by Robert Cravotta

One of the hardest design challenges facing developers is how to keep the system operating within acceptable bounds despite being used in non-optimal conditions. Given a large enough user base, someone will operate the equipment in ways that the developers never intended. For example, a friend recently shared that his young daughter has developed an obsession with turning the lights in the house on and off repeatedly. Complicating this scenario is that some of the lights she likes to flip on and off are fluorescent lights (the tubes, not CFLs (compact fluorescent light)). Unfortunately, repeatedly turning them on and off in this fashion significantly reduces their useful life. Those lights were not designed to be put under those types of operating conditions. I’m not sure designers can ever build a fluorescent bulb that will flourish under those types of operating conditions – but you never know.

Minimizing and eliminating single-point failures in a design is a valuable strategy for increasing the robustness of the design. Experienced developers exhibit a knack for avoiding and mitigating single-point failures – often as the result of experience with similar failures in previous projects. Successful methods for avoiding single-point failures usually involve implementing some level of overlap or redundancy between separate, and ideally independent, parts of the system.

A look at the literature addressing single-point failures reveals a focus on technical and tangible items like devices and components, but there is an intangible source of single-point failures that can be devastating to a project – when a given skillset or knowledge set is a single-point failure. I was first introduced to this idea when someone asked me “What will you do if Joe wins the Lottery?” We quickly established that winning the Lottery was a nice way to describe a myriad of unpleasant scenarios to consider – in each case the outcome is the same – Joe, with all of his skills, experience, and project specific knowledge, leaves the project.

As a junior member of the technical staff, I did not need to worry about this question, but once I started into the ranks of project lead – well, that question became immensely more important. If you have the luxury of a large team and budget, you might assign people to overlapping tasks. However, small teams may lack not just the budget but the cognitive bandwidth for team members to be aware of everything everyone else is doing.

One approach we used to mitigate the consequences of a key person “winning the Lottery” involved holding regular project status meetings. Done correctly, these meetings can provide a quick and cost effective mechanism for spreading the project knowledge among more people. The trick is to avoid involving too many people for too long or too frequently so that the meetings cost more than the possible benefit they provide. Maintaining written documentation is another approach for making sure the project can recover from the loss of a key member. Another approach we used for more tactical types of skills was to contract with an outside team that specialized in said skillset. By working with someone who understands the project’s tribal knowledge, this approach can help the team recover quickly and salvage the project.

What methods do your teams employ to protect from the consequences of a key person winning the Lottery?

Adding texture to touch interfaces

Friday, December 17th, 2010 by Robert Cravotta

I recently heard about another approach to providing feedback to touch interfaces (Thank you Eduardo). TeslaTouch is a technology developed at Disney Research that uses principles of electrovibration to simulate textures on a user’s finger tips. I will be meeting with TeslaTouch at CES and going through a technical demonstration, so I hope to be able to share good technical details after that meeting. In the meantime, there are videos at the site that provide a high level description of the technology.

The feedback controller uniformly applies a periodic electrostatic charge across the touch surface. By varying the sign (and possibly magnitude) of the charge, the electrons in the user’s fingertip are drawn toward or away from the surface – effectively creating a change in friction on the touch surface. Current prototypes are able to use signals as low as 8V to generate tactile sensations. No electric charge passes through the user.

By varying over time the electric charge across the electrode layer, this touch sensor and feedback surface can simulate textures on a user’s finger by attracting and repelling the electrons in the user’s finger to and from the touch surface (courtesy TeslaTouch).

The figure shows a cross section of the touch surface, which consists of a layer of glass overlaid with a layer of transparent electrode, which is covered by an insulator. Varying the voltage across the electrode layer changes the relative friction coefficients from pushing a finger (Fe) into the touch surface and dragging a finger across it (Fr). It is not clear how mature this technology currently is beyond the fact that the company is talking about prototype units.

One big feature of this approach to touch feedback is that it does not rely on the mechanical actuators typically used in haptic feedback approaches. The lack of moving parts should contribute to higher reliability when compared to the electromechanical alternatives. However, it is not clear that this technology would work through gloves or translate through a stylus – both of which the electromechanical approach can accommodate.

What are the questions you would like most answered about this technology? I am hopeful that I can dig deep into the technology at my CES meeting and pass on what I find in a follow-up here. Either email me or post the questions you would most like to see answered. The world of touch user interfaces is getting more interesting each day.

Does your embedded development team’s project budget metric support your estimation process?

Wednesday, December 15th, 2010 by Robert Cravotta

As an engineering project lead I had to develop and report on a set of performance metrics that we called the VSP (vision support plan). The idea behind these metrics was to show how each area of the company was directly supporting the company vision statement. For many of the metrics, the exercise was a waste of time because there was no clean way to measure how what we were doing as a team directly corresponded to every abstract idea in the vision statement.

However, there were a few metrics that we used that I thought were useful because we could use them to experiment with our processes and measure whether there was an improvement or not. For example, I refused to use a budget metric that only focused on whether we came in under budget or not. My budget metrics were “green” (good) if the expenditures to date were within 10% of the budget. If the project was more than 10% higher or lower than the budget, I reported the project as yellow. If the project was more than 20% higher or lower than the budget, I reported the project as red.

Here was my reasoning for the grading. If the project was within 10% of the budget, we were in control of the budget. I believe that any team can affect the cost of a project by up to 10% by choosing appropriate trade-offs without adversely sacrificing the quality of the project. Any design trade-offs that are made to affect a 10 to 20% change from the plan involve more risk and might adversely affect the quality of the project. Likewise, any time a team must accommodate changes that stray more than 20% from the plan involve significant risk and may require a reevaluation to determine whether the project is scoped realistically.
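That grading scheme is simple enough to express as a function. The sketch below is my own illustration of the thresholds described above, with hypothetical names – not code from any actual reporting tool:

```python
def budget_status(spent_to_date, budget_to_date):
    """Grade a project by how far expenditures stray from plan, over or under."""
    deviation = abs(spent_to_date - budget_to_date) / budget_to_date
    if deviation <= 0.10:
        return "green"   # within 10%: the team is in control of the budget
    elif deviation <= 0.20:
        return "yellow"  # 10-20% off plan: trade-offs carry more risk
    else:
        return "red"     # beyond 20%: reevaluate whether the scope is realistic

print(budget_status(105, 100))  # green
print(budget_status(85, 100))   # yellow - being 15% under also gets flagged
print(budget_status(125, 100))  # red
```

Note that the symmetric abs() is the whole point of the metric: an underrun trips the same flags as an overrun.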

Note that this metric specified a range that covered both overruns and underruns of expenditures against the budget. A major reason for this was to put a special focus on how well we were estimating projects. How many times have you seen someone try to explain why their project is over budget? In general, the reasons I saw included one or more of:

1) There was additional scope added to the project that you did not capture additional budget for (often at the direction of management).

2) The project involved solving some unexpected problems and there was not enough (or no) budget to handle such contingencies.

3) Management would not accept a realistic budget number for the project and you are doing the best you can with the budget they offered you.

The thing that is common in all of these reasons is that the estimation process did not adequately capture the project’s predictable and iterative costs. Too many times I would see management strip out our contingency budget which usually consisted of specifying 1 or 2 design iterations at those points of the design where we had the most risk. Capturing a budget metric and putting it into the context of how good the estimate was provides the potential for finding clues as to how to improve the estimating process in future projects – which directly supports just about any company’s vision statement that I have ever seen.

Likewise, if your project was substantially under budget, it seemed that most management was content to leave that alone; however, I see the following scenarios as reasons why you might be running under budget:

1) You overestimated the cost to perform the project

2) You were able to remove scope from the project that was left in the budget numbers

3) You made an innovative leap that increased your productivity beyond what you thought you could do during the budgeting process.

Each of these reasons had a profoundly different impact on how you refine your estimating process. The first reason suggests you need better estimators. The second reason suggests you need to improve your project and contract management process. The third reason is one that any manager should want to see more of and reward the team for making it happen.

I saw many project estimates that gamed the system so that the project lead had a potentially oversized surplus in their budget, and their management would fail to comment on how resources had been allocated to a project and then not used – never uncovering which of those three scenarios was the cause for the underrun.

Does your project budget process enable you to improve your estimating process, contract management process, and increase the chances that your team will gain recognition when a risk pays off when you discover a new and better way to solve a problem? What are other ways you use expense/budget metrics to improve your design team’s performance?

Considerations for 4-bit processing

Friday, December 10th, 2010 by Robert Cravotta

I recently posed a question of the week about who is using 4-bit processors and for what types of systems. At the same time, I contacted some of the companies that still offer 4-bit processors. In addition to the three companies that I identified as still offering 4-bit processors (Atmel, EM Microelectronics, and Epson), a few readers mentioned parts from NEC Electronics, Renesas, Samsung, and National. NEC Electronics and Renesas merged, and Renesas Electronics America now sells the combined set of those companies’ processor offerings.

These companies do not sell their 4-bit processors to the public developer community in the same way that 8-, 16-, and 32-bit processors are sold. Atmel and Epson told me their 4-bit lines support legacy systems – the Epson lines most notably support timepiece designs. I was able to speak with EM Microelectronics at length about their 4-bit processors and gained the following insights.

Programming 4-bit processors is performed in assembly language only. In fact, the development tools cost in the range of $10,000 and the company loans the tools to their developer clients rather than sell them. 4-bit processors are made for dedicated high volume products – such as the Gillette Fusion ProGlide. The 4-bit processors from EM Microelectronics are available only as ROM-based devices, and this somewhat limits the number of designs the company will support because the process to verify the mask sets is labor intensive. The company finds the designers that can make use of these processors – not the other way around. The company approaches a developer and works to demonstrate how the 4-bit device can provide differentiation to the developer’s design and end product.

The sweet spot for 4-bit processor designs is single-battery applications that have a 10-year lifespan and where the device is active perhaps 1% of that time and in standby the other 99%. An interesting differentiator for 4-bit processors is that they can operate at 0.6V. This is a substantial advantage over the lowest power 8-bit processors, which are still fighting over the 0.9 to 1.8V space. Also, 4-bit processors have been supporting energy harvesting designs since 1990, whereas 8- and 16-bit processor vendors only began offering energy harvesting development and demonstration kits within the last year or so. These last two revelations strengthen my claim in “How low can 32-bit processors go” that smaller processors will reach lower price and energy thresholds years before the larger processors can feasibly support those same thresholds – and that time advantage is huge.

I speculate that there may be other 4-bit designs out there, but the people using them do not want anyone else to know about them. Think about it, would you want your competitor to know you were able to simplify the problem set to fit on such a small device? Let them think you are using a larger, more expensive (cost and energy) device and wonder how you are doing it.

What feature would you most like embedded designs to enable in systems you use?

Wednesday, December 8th, 2010 by Robert Cravotta

I remember the first time I heard about keyless entry on an automobile. I thought it was the most frivolous idea ever – that is, until I started using a car with said feature. The thing is, keyless entry and operation is not a feature that enables you to perform some new maneuver or drive further than before – rather it is a harbinger of a new class of features that rely on systems to be smarter and able to “infer the user’s intent” without explicitly using keyholes or data entry devices.

The wireless communication that takes place between the automobile and the key carried by the user enables an invisible interface between the user and the automobile. The automobile can act as if it recognizes the user and respond accordingly. If a user does not carry a recognized key, they cannot unlock or operate the automobile. Additionally, if the car is in an operating state and all authorized keys move out of range, the car notices this, notifies the operator, and can shut the vehicle down if a key is not brought back in range of the automobile. I experienced this when my wife drove the car up to the driveway and left it running; I pulled it into the garage, but I did not have my key on me, and the car warned me about not being able to detect an authorized key.

When a user carrying a recognized key approaches the automobile, the car senses the key and can begin to perform functions that make using the car simpler. For example, when I approach my car, it turns on some lights on the door I am approaching; this makes it easier to see when it is dark and signals to me that the car recognizes me. There are other features in the car that turn on and off automagically that I like, such as automatic windshield wipers. Each of these features incorporates more smarts than manual systems and does a good enough job that it removes the cognitive load of managing those functions of the car from the operator.

One thing I like about these types of features is that they are embedded systems made visible. They often do not require the user to adjust to the system because one of their primary purposes is to adjust to the operator and infer the operator’s intent with significant accuracy. Designed correctly, the average user might never think about these types of features. They embody the essence of an embedded system – invisible but indispensable to the proper operation of the system.

Do you have any examples of features that either exist or that you would like to see added to devices or systems? One thing I now would like to see is a cost effective way to make my house as smart as my car and let me in without requiring me to take the house key out and manually unlock the door. What ideas do you have?

The importance of failing quickly and often

Friday, December 3rd, 2010 by Robert Cravotta

When do recent kindergarten graduates outperform recent business school graduates? Believe it or not, according to Tom Wujec, kindergarteners consistently perform better than business school graduates in the Marshmallow Challenge. This is not a challenge to see who can eat the most marshmallows; rather, it is an exercise in teamwork and rapid prototyping.

The challenge consists of a team of four members building the tallest structure they can using only a single marshmallow, strands of dry spaghetti, a roll of masking tape, and some string. The major constraint is that the marshmallow must go on the top of the structure. The mass of the marshmallow makes this project more challenging than you might first assume.

In Tom’s video, he explains that kindergarteners do better than the business school graduates because they approach the process of building the structure in a more iterative and prototyping sequence than do the business graduates. The kindergarteners start building and placing the marshmallow at the top of the structure right away and they receive immediate feedback from when the structure stands or falls that enables them to make improvements in the next attempt. In contrast, the business graduates discuss a plan of action, choose a direction, and typically do not place the marshmallow on the top of the structure until near the end of the challenge, and when the structure fails, there is not enough time to perform another iteration of rebuilding the structure.

I bring up the Marshmallow Challenge because it augments Casey Weltzin’s recent article “To Design Innovative Products, You Must Fail Quickly” about the importance of prototyping and the role of failures during the prototyping process. Engineers are intimately familiar with failure – in fact, I remember there was a unit on errors and failure as part of my engineering undergraduate studies. Not surprisingly, the people who consistently do the best in the challenge are engineers and architects.

The unrelenting and almost predictable pace of technological improvements that engineers deliver decade after decade belies the number of failures that engineers experience and iterate through behind each of those publicly visible successes. In a sense, our repeated success as an industry in delivering ever more functional systems at a low price point engenders a sense that it is easier than it truly is to perform these feats of innovation over and over again.

Another interesting observation in Tom’s presentation is that adding an executive admin to a team of CEOs and company executives significantly improves the team’s performance in the challenge compared to a team without an admin. One takeaway I see from this is that it is important to be able to expose and remind your management that design is an iterative process where we apply our assumptions to the real world, and the real world smacks us down by pointing out our hidden or unspoken assumptions that do not quite align with reality.

What is your favorite debugging anecdote?

Wednesday, December 1st, 2010 by Robert Cravotta

We all know stories about how something went wrong during a project and how we or someone else was able to make a leap of logic that enabled them to solve the problem. However, I think the stories that stick with us through the years are the ones that imparted a longer term insight that goes beyond the actual problem we were trying to solve at the time. For example, I have shared two such stories from my days as a junior member of the technical staff.

One story centers around solving an intermittent problem that ultimately would have been completely avoided if the development team had been using a more robust version control process. The other story involves an unexpected behavior in a vision sensor that was uncovered only because the two junior engineers who were working with the sensor were encouraged to think beyond the immediate task they were assigned to do.

More than twenty years later, these two stories still leave me with two key insights that I find valuable to pass on to other people. In the version control story, I learned that robustness is not just doing things correctly, but involves implementing processes and mechanisms to be able to automatically self-audit the system. Ronald Reagan’s saying “Trust but verify” is true on so many levels. In the valuing uncertainty story, I learned that providing an appropriate amount of wiggle room in work assignments is an essential ingredient to creating opportunities to grow your team member’s skills and experience while improving the performance of the team.

I suspect we all have analogous stories and that when we share them with each other, we scale the value of our lessons learned that much more quickly. Do you have a memorable debugging anecdote? What was the key insight(s) you got out of it? Is it a story that grows in value and is worth passing on to other members in the embedded community? I look forward to seeing your story.

Are you, or would you consider, using a 4-bit microcontroller?

Wednesday, November 24th, 2010 by Robert Cravotta

Jack Ganssle recently asked me about 4-bit microcontrollers. He noted that there are no obvious 4-bit microcontrollers listed in the Embedded Processing Directory – but that is partly because there are so few of them that I “upgraded” them to the 8-bit listing a few years back. In all the years I have been doing the directory, this is the first time someone has asked about the 4-bitters.

I suspect the timing of Jack’s inquiry is related to his recent article “8 bits is dead” where he points out that the predicted death of 8-bit microcontrollers continues to be false – in fact, he predicts “that the golden age of 8 bits has not yet arisen. As prices head to zero, volumes will soar putting today’s numbers to shame.” I agree with him, the small end of the processing spectrum is loaded with potential and excitement, so much so that I started a series on extreme processing thresholds a few months ago to help define where the current state of the art for processing options is so that it is easier to identify when and how it shifts.

The timing of this inquiry also coincides with Axel Streicher’s article asking “Who said 16-bit is dead?” Axel makes a similar observation about 16-bit processors. I would have liked to have seen him point out that 16-bit architectures are also a sweet spot for DSCs (digital signal controllers), especially because Freescale was one of the first companies to adopt the DSC naming. A DSC is a hybrid that combines architectural features of a microcontroller and a digital signal processor in a single execution engine.

A comment on Jack’s article suggested that this topic is the result of someone needing a topic for a deadline, but I beg to differ. There are changes in the processing market that constantly raise the question of whether 8- and 16-bitters will finally become extinct. The big change this year was the introduction of the Cortex-M0 – and this provided the impetus for me to revisit this same topic, albeit from a slightly different perspective, earlier this year when I asked “How low can 32-bit processors go?” I offer that a key advantage that smaller processors have over 32-bit processors is that they reach lower cost and energy thresholds several years before 32-bit processors can get there, so the exciting new stuff will be done on the smaller processors long before it is put on a 32-bit processor.

In contrast, the humble 4-bit gets even less attention than the 8- and 16-bitters – little to none, in fact – but the 4-bit microcontroller is not dead either. Epson just posted a new data sheet for a 4-bit microcontroller a few weeks ago (I am working to get them added to the Embedded Processing Directory now). The Epson 4-bitters are legacy devices that are used in timepieces. EM Microelectronics’ EM6607 is a 4-bit microcontroller; I currently have a call in to them to clarify its status and find out what types of applications it is used in. You can still find information about Atmel’s MARC4, which the company manages out of its German offices and is not currently investing any new money into.

So to answer Jack’s question – no, 4-bit processors are not dead yet, and they might not die anytime soon. Are any of you using 4-bit processors in any shape or form? Would you consider using them? What types of processing characteristics define a 4-bitter’s sweet spot? Do you know of any other companies offering 4-bit processors or IP?

Capacitive button sense algorithm

Tuesday, November 23rd, 2010 by Robert Cravotta

There are many ways to use capacitive touch for user interfaces; one of the most visible ways is via a touch screen. An emerging use for capacitive touch in prototype devices is to sense the user’s finger on the backside or side of the device. Replacing mechanical buttons is another “low hanging fruit” for capacitive touch sensors. Depending on how the touch sensor is implemented, the application code may be responsible for working with low level sensing algorithms, or it may be able to take advantage of higher levels of abstraction when the touch use cases are well understood.

The Freescale TSSEVB provides a platform for developers to work with capacitive buttons placed in slider, rotary, and multiplexed configurations. (courtesy Freescale)

Freescale’s Xtrinsic TSS (touch sensing software) library and evaluation board provides an example platform for building touch sensing into a design using low- and mid-level routines. The evaluation board (shown in the figure) provides electrodes in a variety of configuration including multiplexed buttons, LED backlit buttons, different sized buttons, and buttons grouped together to form slider, rotary, and keypad configurations. The Xtrinsic TSS supports 8- and 32-bit processors (the S08 and Coldfire V1 processor families), and the evaluation board uses an 8-bit MC9S08LG32 processor for the application programming. The board includes a separate MC9S08JM60 communication processor that acts as a bridge between the application and the developer’s workstation. The evaluation board also includes an on-board display.

The TSS library supports up to 64 electrodes. The image of the evaluation board highlights some of the ways to configure electrodes to maximize functionality while using fewer electrodes. For example, the 12 button keypad uses 10 electrodes (numbered around the edge of the keypad) to detect the 12 different possible button positions. Using 10 electrodes allows the system to detect multiple simultaneous button presses. If you could guarantee that only one button would be pressed at a time, you could reduce the number of electrodes to 8 by eliminating the two small corner electrodes numbered 2 and 10 in the image. Further in the background of the image are four buttons with LEDs in the middle as well as a rotary and slider bar.

The charge time of the sensing electrode is extended by the additional capacitance of a finger touching the sensor area.

Each electrode in the touch sensing system acts like a capacitor with a charging time defined as T = RC. An external pull-up resistor limits the current to charge the electrode which in turn affects the charging time. Additionally, the presence or lack of a user’s finger near the electrode affects the capacitance of the electrode which also affects the charging time.

In the figure, C1 is the charging curve and T1 is the time to charge the electrode to VDD when there is no extra capacitance at the electrode (no finger present). C2 is the charging curve and T2 is the time to charge the electrode when there is extra capacitance at the electrode (finger present). The basic sensing algorithm relies on the difference between T1 and T2 to determine whether a touch is present.
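As a rough illustration of the T = RC relationship, the extra capacitance of a finger stretches the charge time in direct proportion. The component values below are invented for the example, not taken from the TSSEVB documentation:

```c
#include <assert.h>
#include <math.h>

/* Illustrative values only: a 1 Mohm pull-up, a 10 pF electrode, and
 * roughly 5 pF of additional finger capacitance. */
static double charge_time(double r_ohms, double c_farads)
{
    return r_ohms * c_farads;   /* RC time constant, T = R * C */
}
```

With these assumed values, the no-finger charge time T1 works out to 10 µs and the finger-present time T2 to 15 µs, and the detection algorithm keys off that 5 µs difference.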

The TSSEVB supports three different ways to control and measure the electrode: GPIO, the KBI or pin interrupts, and timer input capture. In each case, the electrode defaults to an output-high state. To start a measurement, the system drives the electrode pin low to discharge the capacitor. Setting the electrode pin to a high-impedance state then allows the capacitor to start charging. The different measurement implementations set and measure the electrode state slightly differently, but the algorithm is functionally the same.

The algorithm to detect a touch consists of 1) starting a hardware timer; 2) starting the electrode charging; 3) waiting for the electrode to charge (or a timeout to occur); and 4) returning the value of the timer. One difference among the modes is whether the processor is looping (GPIO and timer input capture) or in a wait state (KBI or pin interrupt), which affects whether you can perform other tasks during the sensing.
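The four steps above can be sketched in C. This is a minimal simulation of the polling (GPIO-mode) loop, with the hardware modeled by a counter rather than real port and timer registers; the function names, the timeout value, and the simulation itself are invented for illustration and are not the TSS library's implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated electrode: the pin "reads high" once the timer count reaches
 * the charge time. On real hardware these would be GPIO and timer
 * register accesses. */
static uint16_t sim_charge_counts;   /* counts until the pin charges past Vih */
static uint16_t sim_timer;

static void     electrode_discharge(void)  { sim_timer = 0; }
static int      electrode_reads_high(void) { return sim_timer >= sim_charge_counts; }
static uint16_t timer_tick(void)           { return ++sim_timer; }

#define MEASURE_TIMEOUT 0xFFF0u

/* Steps from the text: start the timer, discharge the electrode, release
 * it to high impedance, poll until it charges (or times out), and return
 * the elapsed timer count. Returns 0xFFFF on timeout. */
uint16_t measure_electrode(uint16_t charge_counts)
{
    sim_charge_counts = charge_counts;
    electrode_discharge();               /* drive low, reset the timer */
    while (!electrode_reads_high()) {    /* wait for charge or timeout */
        if (timer_tick() > MEASURE_TIMEOUT)
            return 0xFFFFu;              /* no valid measurement */
    }
    return sim_timer;                    /* elapsed counts = charge time */
}
```

A longer measured count means more capacitance at the electrode, which is how a finger shows up in the data.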

Three parameters affect the performance of the TSS library: the timer frequency, the pull-up resistor value, and the system power voltage. The timer frequency sets the minimum measurable capacitance. The system power voltage and pull-up resistor affect the voltage trip point and how quickly the electrode charges. The library uses at least one hardware timer, so the system clock frequency affects the system’s ability to detect a touch because the frequency determines the minimum capacitance value detected per timer count.
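To see how the timer frequency sets the smallest detectable capacitance change, consider the relation below, which follows from T = RC: one timer count spans 1/f seconds, which corresponds to a capacitance change of 1/(f × R). The 8 MHz and 1 MΩ figures are illustrative assumptions, not TSSEVB specifications:

```c
#include <assert.h>
#include <math.h>

/* Capacitance resolution per timer count, from T = R * C:
 * delta_C = delta_T / R = 1 / (timer_hz * r_ohms).
 * With an assumed 8 MHz timer and 1 Mohm pull-up, one count is 125 ns,
 * or about 0.125 pF of capacitance change. */
static double capacitance_per_count(double timer_hz, double r_ohms)
{
    return 1.0 / (timer_hz * r_ohms);
}
```

Doubling the timer frequency halves the capacitance per count, which is why a faster clock can resolve smaller touches but also overflows sooner on long charge times.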

The higher the clock frequency, the smaller the capacitance change the system can detect. If the clock rate is too fast for the charging time, the timer can overflow. If the clock rate is too slow, the system will be more susceptible to noise and have a harder time reliably detecting a touch. When I was first working with the TSSEVB, we chose less-than-optimal values and the touch sensing did not work very well. After figuring out that there was a mismatch in the scaling value we chose, the performance of the touch sensing drastically improved.

The library supports what Freescale calls Turbo Sensing, an alternative technique that measures charge time by counting bus ticks instead of using a timer. This increases system integration flexibility, makes measurements faster and less noisy, and supports interrupt-driven conversions. We did not have time to try out the Turbo Sensing method.

The decoder functions, such as those for the keypad, slider, or rotary configurations, provide a higher level of abstraction to the application code. For example, the keypad configuration relies on each button mapping to two electrodes charging at the same time. In the figure, the button numbered 5 requires electrodes 5 and 8 to charge together because each of those electrodes covers half of the 5 button. The rotary decoder handles more information than the key-press decoder because it not only detects when electrode pads have been pressed, but also reports from which of two directions the pad was touched and how many pads experienced some displacement. This allows the application code to control the direction and speed of moving through a list. The slider decoder is similar to the rotary decoder except that the ends of the slider do not touch each other.
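The two-electrodes-per-button keypad scheme can be sketched as a simple table lookup. Only the pairing for button 5 (electrodes 5 and 8) comes from the article; the rest of the table, and the decoder itself, are invented for illustration and will not match the TSS library's actual mapping:

```c
#include <assert.h>
#include <stdint.h>

/* Each key is detected when both of its covering electrodes register a
 * touch at the same time. Pairings other than key 5's are hypothetical. */
struct key_map { uint8_t a, b; };

static const struct key_map keymap[12] = {
    {1, 6}, {1, 7}, {1, 8}, {4, 7},   /* keys 1-4  (hypothetical pairs) */
    {5, 8},                           /* key 5: electrodes 5 and 8      */
    {5, 9}, {6, 9}, {3, 6}, {3, 7},   /* keys 6-9  (hypothetical pairs) */
    {4, 9}, {2, 8}, {3, 10},          /* keys 10-12 (hypothetical pairs) */
};

/* touched_mask has bit n set when electrode n reads as touched.
 * Returns the key number (1-12), or 0 if no key's pair is complete. */
int decode_keypad(uint16_t touched_mask)
{
    for (int k = 0; k < 12; ++k) {
        uint16_t need = (uint16_t)((1u << keymap[k].a) | (1u << keymap[k].b));
        if ((touched_mask & need) == need)
            return k + 1;
    }
    return 0;
}
```

Because each key needs its own distinct electrode pair, 10 electrodes comfortably cover 12 keys while still distinguishing simultaneous presses.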

The size and shape of each electrode pad, as well as the parameters mentioned before, affect the charging time, so the delta between the T1 and T2 times will not necessarily be the same for each button. The charging time for each electrode pad might also change as environmental conditions change. However, because detecting a touch is based on a relative difference in the charging time for each electrode, the system provides some resilience to environmental changes.
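One common way to exploit that relative difference is to track a per-electrode baseline while the pad is idle and flag a touch only when the measured count exceeds the baseline by some margin. The sketch below shows the idea; it is not the TSS library's actual filter, and the threshold and filter weights are invented:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative threshold, in timer counts above the idle baseline. */
#define DELTA_THRESHOLD 20u

struct electrode_state { uint16_t baseline; };

/* Returns 1 when the measured charge time exceeds the baseline by the
 * threshold. While untouched, the baseline follows slow environmental
 * drift via a simple IIR filter; during a touch it is frozen so the
 * finger's capacitance does not get absorbed into the baseline. */
int detect_touch(struct electrode_state *e, uint16_t counts)
{
    if (counts > (uint16_t)(e->baseline + DELTA_THRESHOLD))
        return 1;                              /* finger added capacitance */
    /* No touch: let the baseline track the environment (weight 3/4 old). */
    e->baseline = (uint16_t)((3u * e->baseline + counts) / 4u);
    return 0;
}
```

Because each electrode keeps its own baseline, differences in pad size and shape wash out, and slow temperature or humidity drift is absorbed without retuning.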