Robust Design: Patch-It Principle – Teaching and Learning

Monday, May 3rd, 2010 by Robert Cravotta

[Editor's Note: This was originally posted on the Embedded Master blog.]

In the first post introducing the patch-it principle, I made the claim that developers use software patches to add new behaviors to their systems in response to new information and changes in the operating environment. In this way, patching allows developers to offload the complex task of learning from the device – at least until we figure out how to build machines that can learn. In this post, I will peer into my crystal ball and describe how I see the robust design patch-it principle evolving into a mix of teaching and learning principles. There is a lot of room for future discussion, so if you see something you hate or like, speak up – that will flag the topic for future posts.

First, I do not see the software patch going away, but I do see it taking on a teaching and learning component. The software patch is a mature method of disseminating new information to fixed-function machines. I think software patches will evolve from executable code blocks to meta-code blocks, and that this shift will be essential to support multiprocessing designs.

Without meta-code, the complexity of building robust patch blocks that can handle customized processor partitioning will become insurmountable, as the omniscient knowledge syndrome drowns developers by requiring them to handle ever more low-value complexity. Meta-code may provide a bridge to distributed or local knowledge processing (more explanation in a later post), where the different semi-autonomous processors in a system make decisions about a patch block based on their specific knowledge of the system.
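To make the idea concrete, here is a minimal sketch of what such a meta-code patch block might look like. It assumes a patch is described by declarative requirements rather than a fixed instruction stream, and that each semi-autonomous processor evaluates those requirements against its own local knowledge. All of the names and fields here are hypothetical, not an established format:

```c
/* Hypothetical sketch: a meta-code patch block carries declarative
 * requirements instead of raw machine code. Each semi-autonomous
 * processor decides for itself whether the patch applies, using only
 * its local knowledge of the system. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t required_capabilities;  /* bitmask, e.g. FPU, DSP, crypto */
    uint32_t min_ram_bytes;          /* resources the patch needs */
    uint32_t target_function_id;     /* abstract "what" the patch changes */
    const uint8_t *payload;          /* meta-code, not executable code */
    uint32_t payload_len;
} patch_block_t;

typedef struct {
    uint32_t capabilities;           /* what this processor can do */
    uint32_t free_ram_bytes;         /* what it currently has available */
    bool (*implements)(uint32_t function_id); /* local knowledge query */
} local_node_t;

/* Each node answers from local knowledge alone; no developer needs an
 * omniscient view of every possible processor partitioning. */
static bool patch_applies(const local_node_t *node, const patch_block_t *p)
{
    if ((node->capabilities & p->required_capabilities)
            != p->required_capabilities)
        return false;                /* node cannot honor this meta-code */
    if (node->free_ram_bytes < p->min_ram_bytes)
        return false;                /* not enough room on this node */
    return node->implements(p->target_function_id);
}
```

The point of the sketch is the division of labor: the patch author states requirements once, and the decision of where and whether to apply the patch moves onto the nodes themselves.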

The meta-code may take a form that is conducive to teaching rather than to an explicit sequence of instructions to perform. I see devices learning how to improve what they do by observing their user or operator, as well as by communicating with other similar devices. Building machines this way would let developers focus more on specifying the “what and why” of a process, while the development tools assist the system in genetically searching for and applying different coding implementations, and in robustly verifying equivalence between implementation and specification. This may permit systems to consist of less-than-perfect parts, because verifying the implementation will take the system's imperfections into account.
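A minimal sketch of that “specify the what, search for the how” idea follows, assuming the specification is an executable oracle, candidates are parameterized implementations, and verification means equivalence within a tolerance over test inputs (the tolerance is what lets less-than-perfect parts pass). The functions and constants are illustrative only:

```c
/* Hypothetical sketch: the developer supplies the "what" as an
 * executable specification; a crude genetic-style search mutates a
 * candidate implementation and verification checks equivalence over
 * test inputs, within a tolerance that absorbs imperfection. */
#include <stdlib.h>
#include <math.h>

/* The specification: what the function must compute. */
static double spec(double x) { return 2.0 * x + 1.0; }

/* A candidate implementation parameterized by searchable coefficients. */
static double candidate(double x, const double c[2])
{
    return c[0] * x + c[1];
}

/* Verification of equivalence: spec vs. candidate over test points. */
static int equivalent(const double c[2], double tol)
{
    for (double x = -10.0; x <= 10.0; x += 0.5)
        if (fabs(spec(x) - candidate(x, c)) > tol)
            return 0;
    return 1;
}

/* Genetic-style search: mutate coefficients, keep improvements,
 * stop once the candidate verifies as equivalent. */
static void search(double c[2], double tol, int generations)
{
    for (int g = 0; g < generations && !equivalent(c, tol); g++) {
        double trial[2] = {
            c[0] + ((double)rand() / RAND_MAX - 0.5) * 0.1,
            c[1] + ((double)rand() / RAND_MAX - 0.5) * 0.1,
        };
        double err_old = 0.0, err_new = 0.0;
        for (double x = -10.0; x <= 10.0; x += 0.5) {
            err_old += fabs(spec(x) - candidate(x, c));
            err_new += fabs(spec(x) - candidate(x, trial));
        }
        if (err_new < err_old) { c[0] = trial[0]; c[1] = trial[1]; }
    }
}
```

The developer's artifact here is spec() and the tolerance; the coefficients of candidate() are the tool's problem, which is the shift in responsibility the paragraph above describes.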

The possible downside of learning machines is that they will become finely tuned to a specific user and be less than desirable to another user – unless there is a means for users to carry their preferences with them to other machines. This is already manifesting in chat programs that learn your personal idioms and automagically adjust spell checking and link associations, because personal idioms do not always translate cleanly to other people, nor are they always used with the same connotation.

In order for the patch-it principle to evolve into the teach-and-learn principle, machines will need to develop a sense of self within their environment, remember random details, spot repetition among those details, recognize sequences of events, and anticipate an event based on the current situation. These are all tall orders for today's machines, but as we build wider multiprocessing systems, I think we will stumble upon approaches that perform these tasks for less energy than we ever thought possible.
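The last two items on that list, recognizing sequences and anticipating events, can start surprisingly small. Here is a hypothetical first-order predictor that simply counts which event tends to follow which; none of this is a real library, just an illustration of the minimum machinery involved:

```c
/* Hypothetical sketch: a first-order event predictor. Observing events
 * builds a table of transition counts (remembering and spotting
 * repetition); anticipation is a lookup of the most frequent successor
 * of the current event. */
#include <stdint.h>

#define NUM_EVENTS 16

static uint16_t follows[NUM_EVENTS][NUM_EVENTS]; /* transition counts */
static int last_event = -1;

/* Learning: observe each event as it happens and count transitions. */
void observe(int event)
{
    if (last_event >= 0)
        follows[last_event][event]++;
    last_event = event;
}

/* Anticipation: predict the most frequent successor of the current
 * event, or -1 if this situation has never been seen before. */
int anticipate(int current)
{
    int best = -1;
    uint16_t best_count = 0;
    for (int e = 0; e < NUM_EVENTS; e++) {
        if (follows[current][e] > best_count) {
            best_count = follows[current][e];
            best = e;
        }
    }
    return best;
}
```

A real system would need far richer context than a single preceding event, but even this toy shows why wide multiprocessing helps: the observe/anticipate loop is cheap, local, and easy to replicate across many semi-autonomous nodes.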
