Is testing always essential?

Wednesday, August 24th, 2011 by Robert Cravotta

This month’s audit of the Army’s body armor inserts by the Pentagon’s inspector general finds that ballistic testing was not conducted consistently across the 5 million inserts covered by seven contracts. According to the audit, the PM SEQ (Army Program Manager Soldier Equipment) did not conduct all of the required tests on two contracts because it had no protection performance concerns about those inserts. The PM SEQ also did not always use a consistent methodology for measuring the proper velocity or for enforcing the humidity, temperature, weathering, and altitude requirements of the tests.

The audit also reports that the sampling process used did not provide a statistically representative sample for the LAT (Lot Acceptance Test), so the results of those tests cannot be relied upon to project identified deficiencies onto the entire lot. No additional testing was performed as part of the audit, so there is no conclusion on whether the ballistic performance of these inserts was actually compromised by the testing and quality assurance methods that were applied.
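For a concrete sense of why an undersized or unrepresentative sample undermines lot acceptance testing, consider a minimal sketch of a single-sampling plan. It is purely illustrative; the audit does not state the actual plan or its parameters, so the defect rate, sample sizes, and accept-on-zero-failures rule below are assumptions.

    from math import comb

    def accept_probability(defect_rate: float, sample_size: int, accept_number: int) -> float:
        """Probability that a single-sampling plan accepts a lot.

        Uses a binomial model, which approximates sampling without
        replacement well when the lot is much larger than the sample.
        The lot is accepted if at most accept_number sampled items fail.
        """
        p = defect_rate
        return sum(
            comb(sample_size, k) * p**k * (1 - p) ** (sample_size - k)
            for k in range(accept_number + 1)
        )

    # Hypothetical plans: accept the lot only if no sampled insert fails.
    for n in (5, 20, 80):
        pa = accept_probability(defect_rate=0.02, sample_size=n, accept_number=0)
        print(f"sample of {n:>2}: a lot with 2% defective inserts is accepted {pa:.0%} of the time")

Under these assumed numbers, a five-piece sample waves through a 2%-defective lot about 90% of the time, while an 80-piece sample accepts it only about 20% of the time, which is why results from too small a sample cannot be projected onto the whole lot.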

Tests on two lots of recalled inserts have so far found that all of them met “the maximum level of protection specified for threats in combat,” according to Matthew Hickman, an Army spokesman. Another spokesman released a statement saying, “The body armor in use today is performing as it was intended. We are continuing to research our data and as of now have not found a single instance where a soldier has been wounded due to faulty body armor.”

This audit highlights a situation that can affect any product that experiences a significant increase in demand coupled with time-sensitive availability. High-profile examples in the consumer electronics space include game consoles and smartphones, some of which underwent recalls or aftermarket fixes. However, just as the recalled inserts are passing additional testing, a product that has not undergone complete testing can sometimes still meet all of its performance requirements.

Is every test you can perform essential every time? Is it ever appropriate to skip a test because “there are no performance concerns”? Do you have a process for modifying or eliminating tests that would otherwise disproportionately affect a product’s price or availability without a significant offsetting benefit? Is the testing phase of a project ripe for optimization, or is it an area where we can never do enough?


5 Responses to “Is testing always essential?”

  1. Jon Titus says:

    No, testing is not always a necessity. In a new home, you don’t “test” the frame; by constructing the frame according to building codes, you ensure the home will withstand the elements. That said, you would always test the drain system for leaks! And of course, you would have the construction inspected at several points.

    The same is true for electronic systems. You will test a final system, but might not test every component or subassembly before putting the system together.

  2. R.F. @ LI says:

    Just imagine how people would do their jobs if they knew there would be no tests …

  3. A.T. @ LI says:

    “If it compiles, ship it. Marketing will bundle the bug fixes into the next release as an ‘upgrade’ (as long as the customer pays for maintenance…).” Sad, but in a lot of cases, true.

  4. J.V.S. @ LI says:

    Sadly, no. CEOs and presidents don’t understand the value of QA and Test, because it “doesn’t produce anything.” Unless, of course, the CEO or president could end up in jail for a bad product! That’s why the military and medical fields value QA and Test, while the idiot MBAs don’t.
    What is the price of poor testing and QA? Every release after 1.0. But these bird-brains don’t get it.

  5. R.A. @ LI says:

    “…sometimes a product that has not undergone complete testing can still meet all of the performance requirements.”

    Sometimes? I would think that the vast majority of products that have been previously validated (both the product against user requirements and the production method against the design specification) but haven’t been tested would still meet performance requirements.

    Quality cannot be tested into a product; it can only be designed and built into a product. Testing is only verification that the validated build process has not been inadvertently compromised.
