How much trial and error do you rely on in designs?

Wednesday, August 10th, 2011 by Robert Cravotta

My wife and I have been watching a number of old television series via DVD and video streaming services. We have both noticed (in a distressing way) a common theme among the shows that purport to have a major character who happens to be a scientist – the scientist(s) know more than any reasonable person would, they accomplish tasks more quickly than anyone (or a team of a thousand people) reasonably could, and they make the proper leaps of logic in one or two iterations. While these may be useful mechanisms for keeping a 20- to 40-minute story moving along, they in no way reflect our experience in the real engineering world.

Tim Harford’s recent TED talk addresses trial and error as a mechanism for building successful complex systems, and how it differs from systems designed under a God complex. The talk resonates with my experience and echoes a statement I have floated a few times over the years, albeit in a different form. The few times I have suggested that engineering is a discipline of best guesses, the suggestion has generated some vigorous dissent. Those offering the most dissent claim that, given a complete set of requirements, they can provide an optimum engineering design to meet those requirements. But my statement refers not just to the process of choosing how to satisfy a requirement specification, but also to making the specification in the first place. Most systems that must operate in the real world are simply too complex for a specification to completely describe the requirements in a single iteration – there is a need for some trial and error to discover what is more or less important for the specification.

In the talk, Tim provides an industrial example regarding the manufacturing of powdered detergent. The process of making the powder involves pumping a fluid, under high pressure, through a nozzle that distributes the fluid in such a way that, as the water evaporates from the spray, a powder with specific properties lands in a pile to be boxed up and shipped to stores for end users to purchase. The company in this example originally tried an explicit design approach that reflects a God-complex mode of design: it hired an expert to design the nozzle. Apparently the results were unsatisfactory; however, the company was eventually able to come up with a satisfactory nozzle by using a trial and error method. The designers created ten random nozzle designs and tested them all. They chose the nozzle that performed best and created ten new variations based on that “winning” nozzle. The company repeated this iterative process 45 times and ended up with a nozzle that performs its function well, yet the process that produced the nozzle did not require any understanding of why it works.
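
To make that loop concrete, here is a minimal sketch in Python of the same variation-and-select process. Everything in it is invented for illustration (the score() function is a stand-in for whatever measurement you would make on a real prototype), but it captures the generate-ten-variations, keep-the-winner, repeat shape of the nozzle story.

    import random

    # score() is a toy stand-in for measuring a real nozzle; the loop never
    # needs to know why one design scores better than another, only that it does.
    def score(design):
        target = [0.3, -1.2, 4.0, 0.8]            # hidden "ideal" parameters
        return -sum((d - t) ** 2 for d, t in zip(design, target))

    def mutate(design, spread=0.5):
        # Produce one random variation of the current best design.
        return [d + random.gauss(0, spread) for d in design]

    best = [random.uniform(-5, 5) for _ in range(4)]  # start from a random design
    for generation in range(45):                      # 45 rounds, as in the nozzle story
        candidates = [best] + [mutate(best) for _ in range(10)]
        best = max(candidates, key=score)             # keep the winner, discard the rest

    print("best design:", best, "score:", score(best))

Notice that the loop only ever needs a way to measure performance; at no point does it need a model of why the winning design works.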

Over the years, I have heard many stories about how a similar process yielded a superior solution to a problem compared with an explicit design approach. Do you use a trial and error approach in your designs? Do you introduce variations in a design, down-select the variations based on measured performance, and repeat this process until the level of improvement suggests you are close enough to an optimum configuration? I suspect many people do use a variation-and-select process of trial and error; however, I am not aware of many tools that facilitate this type of approach. What are your thoughts and experiences on this?


9 Responses to “How much trial and error do you rely on in designs?”

  1. Eduardo says:

    I think the God complex and the iterative versions actually apply in different situations. Innovations can (almost) never be the product of a set of well-defined requirements producing a successful design on the first try. Of course there is the chance factor of getting an innovation right the first time, but that is a very, very small percentage of cases. Most innovations take several iterations to happen.

    On the other hand, an incremental design, or simply a different design (say, a new refrigerator with WiFi) simply needs a good set of requirements to work well. Of course, this also assumes some things.

    A “good set of requirements” is also an ethereal thing. In my first job I designed a small remote control that transmitted simple commands through an IrDA interface. The control was to be used outside (at a gas station). When I finally finished and went to the field to test it, it failed miserably. I had never considered the sun radiating infrared energy into the tiny, defenseless IrDA transceivers. I had to go back and spend some extra-long days adding software filtering. Did the requirements fail? Should they have explicitly said “sun-proof”? What if they had only said “needs to work outdoors”? To an experienced engineer that might have been enough information to create an algorithm to filter out sunlight. To my 6 months of professional experience it meant “tell the mechanical designer to add some rubber to the case (to make it water-proof).”

    So application knowledge is always important. Yes, you can give a veteran refrigerator designer some requirements and he’ll create a great refrigerator the first time around (assuming his team is good, and he is well rested during all the reviews so that he doesn’t miss anything). For the rest of us, there is always some level of trial and error needed. Luckily, the project schedule will actually have enough time available for two or three prototypes to be made.

  2. A.T. @ LI says:

    Those leaps of logic you seem to disbelieve are not unusual at all. Trial and error is only necessary if you cannot draw on experience, or infer from a breadth of experiences. The more experience you have, the less “trial” you need and the fewer the errors. This is where CFO bargain hunting for staff with no more than 3-5 years of experience will lead to long-term FAIL. It is starting to happen now, which is why we are seeing a lot of covering of tracks with “fail is good” and “learn from failure”. Failure and error are to be avoided. The near-term cost of experience is far less than the cost of trial and error, or of a blind try-and-fail.

  3. L.R. @ LI says:

    Those trial-and-error design processes you describe I have only seen in movies (some fiction, some documentary) and not even once in real life.
    In real life, a truly optimal solution would seldom justify the enormous effort needed to iron it out, and a “good enough” solution is sought instead.
    In analogue electronics, a traditional design technique does somewhat resemble trial and error: a circuit topology is chosen based on general requirements, and then, depending on the development time available, the designer “tries out” several ways of arriving at the best component values, using, e.g., Spice simulation. Time allowing, more than one topology can be tried out on the simulator.
    That said, most of my experience suggests that our entire career is one huge trial-and-error process. We develop our vertical expertise from experiences, some positive and many negative, and we also try to learn from other people’s experience. In that way we develop a kind of “instinct” that (hopefully) drives us to the “right” solution fairly quickly. Perhaps most of us simply gravitate to the approach that is most likely to produce a satisfactory result, in a way sacrificing an ideal solution that may be possible for one that is least likely to fail miserably.

  4. A.M. @ LI says:

    @Robert: what you describe is called a “genetic algorithm” (GA) and is used in computer science. GAs typically have high computational complexity. Sometimes an inefficient algorithm is the only algorithm, though.

    I am aware of some successful designs produced using GAs. However, they should be used only if there is no direct solution to the problem. They don’t guarantee the optimal solution, as you can get stuck in a local minimum of your goal function, as the sketch below illustrates.
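
    A tiny illustration of that failure mode, with a made-up one-dimensional goal function (a shallow valley near x = 1.2 and the true minimum near x = -1.35):

        import random

        def goal(x):
            # Two valleys: a shallow local minimum near x = 1.2 and the
            # global minimum near x = -1.35.
            return x**4 - 3 * x**2 + x

        def local_search(x, steps=5000, spread=0.1):
            # Greedy variation-and-select: keep a random step only if it improves.
            for _ in range(steps):
                candidate = x + random.gauss(0, spread)
                if goal(candidate) < goal(x):
                    x = candidate
            return x

        print(local_search(2.0))    # starts in the shallow valley and stays stuck near 1.2
        print(local_search(-2.0))   # starts in the deep valley and settles near -1.35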

  5. J.L. @ LI says:

    I’m with A. here – hiring experience is obtaining a large pile of trial and error accumulated on someone else’s budget.

  6. G.H. @ LI says:

    What you just said, I think, is why the iterative development process became popular.

    Most design companies realize that the only way for them to produce a quality product is to put in the features slowly: test each one, learn from the mistakes, improve it, and then add the next feature.

    Even with a well-designed system there will always be issues or limitations that you cannot think of until you actually have a running system in place to observe and test.

  7. B.Z. @ LI says:

    “Trial and error” is what I reserve for areas of high risk in the design, that is, those pieces of the system where we just don’t know what the real solution should be. I can send a team on a “deep dive” into the unknown technology while encapsulating the expected solution from the rest of the system, allowing the rest of the team to continue developing in the known areas of the system. The method of the “deep dive” tends to be “hypothesize and test”, generating data for better hypotheses and zeroing in on the solution. Like the nozzle example, it is logically directed toward a goal.

  8. P.H. @ LI says:

    Trial and error is good when you don’t fully understand something. I’ll often cut a small PCB with some circuit on it and play with it; I consider this trial and error.

    Another “trial and error” technique I use is to abuse a system and see how it behaves. How low can the power supply go before things get unhappy, and when they get unhappy, are there bad side effects (relays kick in, etc.)? I’ve also done, and heard of others doing, feeding subroutines random data as arguments to test their bounds checking and make sure that bad parameters don’t spread down the line (a rough sketch of such a harness is at the end of this comment). This lets you find the problems you didn’t know you had.

    Seems like most projects have an 80-20 percent discipline vs. trial-and-error mix to them. Of course all this depends on the area. Trial and error is OK in consumer “board in a box” systems, harder in instrumentation and automation, and (I’m just guessing here) very unlikely or very expensive in aerospace work.
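
    To make the random-data idea concrete, a throwaway harness along these lines is usually enough; the routine and its limits here are made up for illustration:

        import random

        # Made-up routine under test: it is supposed to reject out-of-range
        # parameters rather than let them propagate downstream.
        def scale_reading(raw, gain):
            if not (0 <= raw <= 4095) or gain <= 0:
                raise ValueError("bad parameter")
            return raw * gain

        # Hammer it with random, mostly nonsensical arguments and check that
        # every input is either handled sanely or rejected cleanly.
        for _ in range(10000):
            raw = random.randint(-100000, 100000)
            gain = random.uniform(-10.0, 10.0)
            try:
                result = scale_reading(raw, gain)
                assert result >= 0, "accepted input produced a nonsense result"
            except ValueError:
                pass    # clean rejection is exactly what we want for bad parameters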

  9. Z.E. @ TI says:

    Hi Robert, I partly agree with you. Scientists are not illuminated people, but they are normally trained in certain areas to shorten the well-proven trial and error method. On the other hand, while a nozzle can easily be TUNED by a trial and error method, some devices can only be DESIGNED to prove a concept. After a scheme is designed, most engineers can tune a system. For example, a positron emission tomography scanner can only be designed; some accessories can be implemented by less-trained people.

    Regards. Nice topic.

    Z.
