<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Robust Design: Ambiguity and Uncertainty</title>
	<atom:link href="http://www.embeddedinsights.com/channels/2010/03/22/robust-design-ambiguity-and-uncertainty/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.embeddedinsights.com/channels/2010/03/22/robust-design-ambiguity-and-uncertainty/</link>
	<description>Shedding Light on the Hidden World of Embedded Systems</description>
	<lastBuildDate>Mon, 28 Jul 2014 16:18:37 -0400</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0</generator>
	<item>
		<title>By: D.W. @EM</title>
		<link>http://www.embeddedinsights.com/channels/2010/03/22/robust-design-ambiguity-and-uncertainty/#comment-617</link>
		<dc:creator>D.W. @EM</dc:creator>
		<pubDate>Mon, 22 Mar 2010 23:19:45 +0000</pubDate>
		<guid isPermaLink="false">http://robert.blogs.embeddedinsights.com/2010/03/22/robust-design-ambiguity-and-uncertainty/#comment-617</guid>
		<description>&lt;p&gt;Ambiguity is the possibility of misinterpretation, proportional to lack of direct experience. Once you have &quot;done it&quot; or &quot;used it,&quot; you assume that is the way to interpret anything that asks you to do it or use it. No ambiguity. For example, there are two types of maps. The first is a map for someone who has never been &quot;there&quot; before, a first-time map. The second is a reminder map for someone who has been &quot;there&quot; before. The second type of map is useless to the first time visitor. It assumes you recognize the landmarks. But the second kind is usually the one given. It is very tough to create the first map, because it is hard to remember what your thinking was like before you knew the landmarks.&lt;br /&gt;
.&lt;br /&gt;
And detailed explanation is not the absolute remedy. You explain new things in terms of things already known. And as McCarthy of AI fame said, &quot;There must be some knowledge. You can explain nothing to a stone.&quot; You need to identify what you know versus what the installer, user or maintainer knows and to use explanation to bridge the gap. But you cannot consciously identify all you know that is relevant to the design, and you do not know what the user knows - and does not know - without asking. Making assumptions in these cases leads to disasters.&lt;br /&gt;
.&lt;br /&gt;
Reality bites, and you have to ship something. Do the best manual you can, realizing it will have holes. Look for things that are not in the experience of the user, and expect problems with those things, realizing that you will not catch all of them. Then, listen to the field.&lt;br /&gt;
.&lt;br /&gt;
Software is one of these arcane things. Designs today put a lot of functionality and associated complexity in the software. And software is a black box in a system, when viewed from the outside. In mechanical systems, you can often take off the cover and figure out how it works. Not so with software. Sometimes, not even with the source code!&lt;br /&gt;
.&lt;br /&gt;
The software box becomes a major abstraction of how the thing works. Joel Spolsky&#039;s &quot;Law of Leaky Abstractions&quot; applies. When it works, all is fine. But when it breaks, you have to know how it works at the next level down to fix it. If your car engine does not start one morning, you need to know the next level down: you need a charged battery and gas in the tank to start the car, etc.&lt;br /&gt;
.&lt;br /&gt;
This brings us to measurements. If the car does not start, headlights and a gas gauge will tell us if the battery has charge and there is gas in the tank. In an embedded system, this means that there need to be some indicators - flashing LEDs, etc. - that tell you if things are OK at the next level down. In a software-driven system, this usually means that you have to add software to generate these readouts. And while you are at it, you should probably run a log file that records the previous few seconds before a crash, so that you can figure out what happened.&lt;/p&gt;</description>
		<content:encoded><![CDATA[<p>Ambiguity is the possibility of misinterpretation, proportional to lack of direct experience. Once you have &#8220;done it&#8221; or &#8220;used it,&#8221; you assume that is the way to interpret anything that asks you to do it or use it. No ambiguity. For example, there are two types of maps. The first is a map for someone who has never been &#8220;there&#8221; before, a first-time map. The second is a reminder map for someone who has been &#8220;there&#8221; before. The second type of map is useless to the first-time visitor. It assumes you recognize the landmarks. But the second kind is usually the one given. It is very tough to create the first map, because it is hard to remember what your thinking was like before you knew the landmarks.<br />
.<br />
And detailed explanation is not the absolute remedy. You explain new things in terms of things already known. And as McCarthy of AI fame said, &#8220;There must be some knowledge. You can explain nothing to a stone.&#8221; You need to identify what you know versus what the installer, user or maintainer knows and to use explanation to bridge the gap. But you cannot consciously identify all you know that is relevant to the design, and you do not know what the user knows &#8211; and does not know &#8211; without asking. Making assumptions in these cases leads to disasters.<br />
.<br />
Reality bites, and you have to ship something. Do the best manual you can, realizing it will have holes. Look for things that are not in the experience of the user, and expect problems with those things, realizing that you will not catch all of them. Then, listen to the field.<br />
.<br />
Software is one of these arcane things. Designs today put a lot of functionality and associated complexity in the software. And software is a black box in a system, when viewed from the outside. In mechanical systems, you can often take off the cover and figure out how it works. Not so with software. Sometimes, not even with the source code!<br />
.<br />
The software box becomes a major abstraction of how the thing works. Joel Spolsky&#8217;s &#8220;Law of Leaky Abstractions&#8221; applies. When it works, all is fine. But when it breaks, you have to know how it works at the next level down to fix it. If your car engine does not start one morning, you need to know the next level down: you need a charged battery and gas in the tank to start the car, etc.<br />
.<br />
This brings us to measurements. If the car does not start, headlights and a gas gauge will tell us if the battery has charge and there is gas in the tank. In an embedded system, this means that there need to be some indicators &#8211; flashing LEDs, etc. &#8211; that tell you if things are OK at the next level down. In a software-driven system, this usually means that you have to add software to generate these readouts. And while you are at it, you should probably run a log file that records the previous few seconds before a crash, so that you can figure out what happened.</p>
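<p>The "log that records the previous few seconds before a crash" could be sketched as a small ring buffer in C. This is only an illustration of the idea, not code from the comment; the names (crashlog_record, crashlog_dump, CRASHLOG_SLOTS) are invented for the sketch, and a real embedded build would write to reserved RAM or flash rather than stdout.</p>

```c
/* Minimal pre-crash event ring buffer sketch. Assumed names throughout;
   a real target would dump to a UART or persistent memory, not printf. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CRASHLOG_SLOTS 16u   /* keep only the most recent 16 events */
#define CRASHLOG_MSG   32u   /* fixed-size message slots */

static char     log_buf[CRASHLOG_SLOTS][CRASHLOG_MSG];
static uint32_t log_head;    /* total events recorded; next slot = head % SLOTS */

/* Record one event; the oldest entry is silently overwritten when full. */
void crashlog_record(const char *msg)
{
    char *slot = log_buf[log_head % CRASHLOG_SLOTS];
    strncpy(slot, msg, CRASHLOG_MSG - 1);
    slot[CRASHLOG_MSG - 1] = '\0';
    log_head++;
}

/* Called from a fault handler: replay the buffer oldest-first, so the
   field report shows the few events leading up to the crash. */
void crashlog_dump(void)
{
    uint32_t start = (log_head >= CRASHLOG_SLOTS)
                         ? log_head - CRASHLOG_SLOTS : 0u;
    for (uint32_t i = start; i < log_head; i++)
        printf("%lu: %s\n", (unsigned long)i, log_buf[i % CRASHLOG_SLOTS]);
}
```

<p>The same buffer doubles as the "indicator at the next level down": an LED blink code can report the last recorded event class without any debug connection.</p>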
]]></content:encoded>
	</item>
</channel>
</rss>
