Are you, or someone you know, using voting within your embedded system designs?

Wednesday, November 3rd, 2010 by Robert Cravotta

With the midterm elections in the United States winding down, I thought I’d try to tie embedded design concepts to the process of elections. On some of the aerospace projects I worked on, we used voting schemes as fault-tolerance techniques. In some cases, because we could not trust the sensors, we used multiple sensors and performed voting among the sensor controllers (along with separate and independent filtering) to improve the quality of the data that fed our control algorithms. We might use multiple sensors of the same type, and in some cases we would use sensors that differed from each other significantly so that they would not be susceptible to the same kinds of bad readings.

I did a variety of searches on fault tolerance and voting to see whether there was any recent material on the topic. There was not a lot of material available; what was available was scholarly, and I was generally not able to download the files. It is possible I did a poor job choosing my search terms. Even so, the lack of material made me wonder whether people are using the technique at all, or whether it has evolved into a different form: in this case, sensor fusion.

Sensor fusion is the practice of combining data derived from disparate sensor sources to deliver “better” data than would be possible if those sources were used individually. “Better” in this case can mean more accurate, more complete, or more reliable data. From this perspective, fusing the data is not strictly a voting scheme, but it shares some similarities with the original concept.

This leads me to this week’s question. Are you, or someone you know, using voting or sensor fusion within embedded system designs? As systems continue to increase in complexity, the need for robust design principles that enable systems to operate correctly with less-than-perfect components becomes more relevant. Are the voting schemes of yesterday still relevant, or have they evolved into something else?


2 Responses to “Are you, or someone you know, using voting within your embedded system designs?”

  1. Rick Matz says:

    I recently worked on a Steering Column Lock Module for a large automaker. Because of the threat that the steering column could lock while the car was in motion, it was rated as a SIL 3 (Safety Integrity Level 3) application.

    The way we implemented voting was this: There was a hardware path and a software path, each giving an output. These two outputs were ANDed together, and when the result was true, the system could lock.

    Each of these paths used three separate inputs, and each of these inputs was tested against other system parameters to ensure that it was true.

    For example, the car had to be in PARK and the engine had to be off before the steering column could lock. The hardware path looked at the Park switch and the state of the key cylinder, while the software path looked at the Park status message on the CAN bus. It also checked the ignition status, the engine status (OFF), and the vehicle speed (0).

    Even if the software was completely wrong, the hardware path would prevent the Steering Column Lock Module from locking the steering column while the car was in motion.
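    A minimal C sketch of that dual-path interlock might look like the following. All of the names (lock_inputs_t, hw_path_allows_lock, and so on) are hypothetical illustrations rather than the automaker’s actual code, and in the real module the hardware path is circuitry rather than software; the sketch only illustrates the AND of two independently derived permissions.

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical snapshot of the inputs described in the comment above. */
        typedef struct {
            bool     park_switch;       /* discrete Park switch                   */
            bool     key_cylinder_off;  /* key cylinder in the OFF position       */
            bool     can_park_status;   /* PARK status message on the CAN bus     */
            bool     can_ignition_off;  /* ignition status reported over CAN      */
            bool     can_engine_off;    /* engine status (OFF) reported over CAN  */
            uint16_t vehicle_speed;     /* reported vehicle speed, must be 0      */
        } lock_inputs_t;

        /* "Hardware" path: discrete inputs only (real circuitry in the module). */
        static bool hw_path_allows_lock(const lock_inputs_t *in)
        {
            return in->park_switch && in->key_cylinder_off;
        }

        /* Software path: cross-checks the vehicle state reported on the CAN bus. */
        static bool sw_path_allows_lock(const lock_inputs_t *in)
        {
            return in->can_park_status && in->can_ignition_off &&
                   in->can_engine_off && (in->vehicle_speed == 0u);
        }

        /* The column may lock only when both independent paths agree. */
        bool lock_permitted(const lock_inputs_t *in)
        {
            return hw_path_allows_lock(in) && sw_path_allows_lock(in);
        }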

  2. Lance ==)----------- says:

    I’ve been working with avionics control systems (DO-178B, Level A) for almost 20 years and have seen multiple voting schemes. These systems are the ultimate in paranoia, as they start out with the assumption that the sensors might not only have errors, they might be lying outright (i.e., wrong with very reasonable-looking values).

    Because they’re going on large commercial aircraft, these systems are scrutinized very closely by the FAA. This has both advantages and disadvantages: on the plus side, we’re using the very best practices we *know* work; on the negative side, we’re extremely slow to adopt practices that have not yet convinced the most skeptical safety engineers. From an economic standpoint, it is much easier and faster to defend a scheme by demonstrating similarity to an already approved one than to show that a new one is superior in every way.

    With triply-redundant identical sensors, after verifying that the sensors report themselves healthy and that the values are reasonable (both on their own and relative to earlier data from the same sensor), and after filtering, we compare the readings from the three and choose the middle one. This minimizes the magnitude of any error a single bad sensor can introduce.
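    That mid-value (median-of-three) select can be as small as the following C sketch. The function name and the use of float are my own illustration, not taken from any particular avionics code base, and it assumes the health, reasonableness, and filtering checks described above have already run.

        /* Return the middle of three readings so that a single erroneous
         * (or lying) sensor cannot pull the output beyond the two good ones. */
        static float mid_value_select(float a, float b, float c)
        {
            if (a > b) { float t = a; a = b; b = t; }  /* ensure a <= b              */
            if (b > c) { b = c; }                      /* b becomes min(b, c)        */
            return (a > b) ? a : b;                    /* median = max(a, min(b, c)) */
        }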

    With dual-redundant identical sensors, when some combination of other sensor readings lets us infer what this sensor ought to be reading, we choose the reading that is closest to that synthesized value, provided it is sufficiently close to it.
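    The dual-sensor case might look like the following C sketch. The names and the choice to return a failure flag when neither reading agrees with the synthesized value are my assumptions; the tolerance would come from the sensor’s error budget.

        #include <math.h>
        #include <stdbool.h>

        /* Choose between two redundant readings by comparing each to a value
         * synthesized from other sensors.  Returns false when neither reading
         * is within tolerance, so the caller can handle the no-agreement case. */
        static bool select_against_synthetic(float s1, float s2, float synthetic,
                                             float tolerance, float *selected)
        {
            const float d1 = fabsf(s1 - synthetic);
            const float d2 = fabsf(s2 - synthetic);
            const float best     = (d1 <= d2) ? s1 : s2;
            const float best_err = (d1 <= d2) ? d1 : d2;

            if (best_err <= tolerance) {
                *selected = best;
                return true;
            }
            return false;  /* neither reading agrees with the synthesized value */
        }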

    The question has always been: with only two sensors that disagree (one of the three has declared itself bad, or the synthetic reading is unavailable), what do you do when there is no fail-safe value?

    Lance ==)-----------
