Entries Tagged ‘Testing’

Is testing always essential?

Wednesday, August 24th, 2011 by Robert Cravotta

This month’s audit of the Army’s armor inserts by the Pentagon’s inspector general finds that testing of the body armor ballistic inserts was not conducted consistently on 5 million inserts across seven contracts. According to the audit, the PM SEQ (Army Program Manager Soldier Equipment) did not conduct all of the required tests on two contracts because it had no protection performance concerns about those inserts. Additionally, the PM SEQ did not always use a consistent methodology for measuring the proper velocity or enforcing the humidity, temperature, weathering, and altitude requirements for the tests.

The audit also reports that the sampling process used did not provide a statistically representative sample for the LAT (Lot Acceptance Test), so the results of the tests cannot be relied on to project identified deficiencies to the entire lot. At this point, no additional testing has been performed as part of the audit, so there is no conclusion on whether the ballistic performance of these inserts was adversely affected by the test and quality assurance methods that were applied.
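To see why the representativeness of the sample matters so much, consider the standard single-sampling model behind a lot acceptance test: inspect n randomly chosen items and accept the lot if at most c of them fail. A minimal Python sketch illustrates how easily a small sample can pass a lot that still contains defects (the sample sizes and defect rates below are illustrative assumptions, not figures from the audit):

```python
from math import comb

def lot_acceptance_probability(n: int, c: int, p: float) -> float:
    """Probability that a lot with true defect rate p is accepted by a
    plan that tests n randomly sampled items and accepts the lot if at
    most c of them fail (binomial single-sampling model)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

# Illustrative numbers only: test 20 inserts, accept the lot if none
# fail. A lot in which 2% of inserts are actually defective still
# passes roughly two times out of three.
small_sample = lot_acceptance_probability(20, 0, 0.02)

# Testing 200 inserts under the same rule makes acceptance of that
# defective lot far less likely.
larger_sample = lot_acceptance_probability(200, 0, 0.02)
```

The point is not the particular numbers but the shape of the curve: a sampling plan only supports conclusions about the whole lot when the sample is both random and large enough for the defect rates you care about, which is exactly the property the audit says was missing.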

Tests on two lots of recalled inserts so far have found that all of them met “the maximum level of protection specified for threats in combat” according to Matthew Hickman, an Army spokesman. Another spokesman released a statement that “The body armor in use today is performing as it was intended. We are continuing to research our data and as of now have not found a single instance where a soldier has been wounded due to faulty body armor.”

This audit highlights a situation that can affect any product that experiences a significant increase in demand coupled with time-sensitive availability requirements. High-profile examples in the consumer electronics space include game consoles and smartphones. Some of these products underwent recalls or aftermarket fixes. However, just as the recalled inserts are passing additional testing, a product that has not undergone complete testing can sometimes still meet all of its performance requirements.

Is every test you can perform essential every time? Is it ever appropriate to skip a test because “there are no performance concerns”? Do you use a process for modifying or eliminating tests that would otherwise disproportionately affect the product’s price or availability without a significant offsetting benefit? Is the testing phase of a project ripe for optimization, or is it an area where we can never do enough?

Can we improve traffic safety and efficiency by eliminating traffic lights?

Wednesday, August 18th, 2010 by Robert Cravotta

I love uncovering situations where there is a mismatch between the expected and actual results of an experiment, because it reinforces the importance of actually performing the experiment, no matter how well you think you “know” how it will turn out. System-level integration of software with hardware is a perfect example.

It seems, with a frequency that defies pure probability, that if the integration team fails to check out an operational scenario during integration and testing, the system will behave in an unexpected manner when that scenario occurs. Take for example Apple’s recent antenna experience:

“…The electronics giant kept such a shroud of secrecy over the iPhone 4’s development that the device didn’t get the kind of real-world testing that would have exposed such problems in phones by other manufacturers, said people familiar with the matter.

The iPhones Apple sends to its carrier partners for testing are “stealth” phones that disguise a new device’s shape and some of its functions, people familiar with the matter said. Those test phones are specifically designed so the phone can’t be touched, which made it hard to catch the iPhone 4’s antenna problem. …”

The prototype units did not operate under the same conditions as they would in production, and that allowed an undesirable behavior to slip through to the production version. The message here is simple: never assume your system will work the way you expect it to. Test it, because the results may surprise you.

Two recent video articles about removing traffic lights from intersections support this sentiment. One of the videos interviews a traffic specialist who suggests that turning off the traffic lights can actually improve the safety and efficiency of some intersections. The other video highlights what happened when a town turned off the traffic lights at a specific intersection; the results are counterintuitive. A third video of an intersection is fun to watch, especially when you realize that there is no traffic control and that all types of traffic, including pedestrians, bikes, small cars, large cars, and buses, are sharing the road. I am amazed watching the pedestrians and the near misses that do not appear to faze them.

I am not advocating that we turn off traffic lights, but I am advocating that we explore whether we are testing our assumptions sufficiently, whether in our own embedded designs or in other systems such as traffic control. What is causing better traffic flow and safety in these test cases? Is it because the traffic volume is low enough? Is it because the people using the intersection follow a better set of rules than “green means go”? Are there any parallel lessons that apply to integrating and testing embedded systems?