<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Do you refactor embedded software?</title>
	<atom:link href="http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/</link>
	<description>Shedding Light on the Hidden World of Embedded Systems</description>
	<lastBuildDate>Mon, 28 Jul 2014 16:18:37 -0400</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0</generator>
	<item>
		<title>By: Farah</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-15524</link>
		<dc:creator>Farah</dc:creator>
		<pubDate>Thu, 19 Apr 2012 12:38:13 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-15524</guid>
		<description>I used the term &quot;Migration&quot; as my research revolves around the transition of architectures in embedded systems from event-triggered to time-triggered designs.  This is about changing the underlying architecture from the multiple interrupts enabled in the pre-existing system to a single-interrupt-based system.</description>
		<content:encoded><![CDATA[<p>I used the term &#8220;Migration&#8221; as my research revolves around the transition of architectures in embedded systems from event-triggered to time-triggered designs.  This is about changing the underlying architecture from the multiple interrupts enabled in the pre-existing system to a single-interrupt-based system.</p>
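<p>A minimal sketch of the kind of single-interrupt, time-triggered design meant here (a cooperative scheduler driven by one periodic timer tick; all names are illustrative, not from any particular project):</p>

```c
#include <stdint.h>

#define MAX_TASKS 4

/* One cooperative task: a function released every `period` ticks. */
typedef struct {
    void (*run)(void);
    uint16_t period;  /* ticks between releases */
    uint16_t delay;   /* ticks until next release */
} task_t;

static task_t tasks[MAX_TASKS];
static volatile uint8_t tick_flag;

/* The ONLY interrupt in the system: a periodic timer tick.
 * On a real target this is attached to the hardware timer vector. */
void timer_isr(void) { tick_flag = 1; }

void scheduler_add(uint8_t slot, void (*fn)(void),
                   uint16_t period, uint16_t offset) {
    tasks[slot].run = fn;
    tasks[slot].period = period;
    tasks[slot].delay = offset;
}

/* Dispatch every task that is due on this tick; tasks run to
 * completion, so no preemption and no shared-data interrupts. */
void scheduler_tick(void) {
    for (uint8_t i = 0; i < MAX_TASKS; i++) {
        if (!tasks[i].run) continue;
        if (tasks[i].delay == 0) {
            tasks[i].run();
            tasks[i].delay = tasks[i].period - 1;
        } else {
            tasks[i].delay--;
        }
    }
}

/* Super-loop: all application work happens here, never in an ISR. */
void scheduler_run(void) {
    for (;;) {
        while (!tick_flag) { /* idle or sleep until next tick */ }
        tick_flag = 0;
        scheduler_tick();
    }
}

/* Demo task (illustrative): counts its own releases. */
static int blink_count;
static void blink(void) { blink_count++; }
```

<p>In this scheme, migrating an existing multi-interrupt design means moving the work of the old interrupt handlers into periodic tasks dispatched from the loop above, leaving the timer tick as the only interrupt source.</p>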
]]></content:encoded>
	</item>
	<item>
		<title>By: J.D. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-14958</link>
		<dc:creator>J.D. @ LI</dc:creator>
		<pubDate>Sun, 01 Apr 2012 14:28:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-14958</guid>
		<description>@Don: 
As you said, it takes a keen mind to capture multidisciplinary contexts, model dependencies and effects, design and implement hardware and firmware to fit the visualized functions, and then analyze, find the design problems, and improve them on a continuous basis.
What we are talking about is dedicated, focused engineers: people who take a continuous interest in improving.
The process of improving and creating better designs is part of what constitutes excellent engineering. 
And the attitude of taking ownership of the design, in the sense that you are responsible, as an engineer, for the performance and safety of the system, counts when you are defending the seemingly more difficult decision to refactor &quot;perfectly good code&quot; or to complete &quot;expensive and time-consuming tests&quot;. That attitude, when coupled with common sense, is very convincing to the customer and to Management.

There are several methodologies and quality assurance tools that can be used to formalize this process, even ones as simple and effective as the color coding you described. 
But the truth is that, when you are fortunate enough to make the process work, everybody on the team feels proud of the work. That is one hell of a good indicator. Non-technical people who are distant from the implementation may be proud of bad products, but not the engineers who can see the hardware and firmware details.

It is not only about refactoring, either. If you have that attitude (continuous improvement), your current design will be at least as good as the last one, but probably better, in several aspects. You should be able to pick at least 5 aspects of your current work that you can improve. If not now, in the next one. But when you see those aspects, fix them in the current design. 

Don&#039;t let the response from the field drive this process; rather, make the field benefit from it.

- Jonny</description>
		<content:encoded><![CDATA[<p>@Don:<br />
As you said, it takes a keen mind to capture multidisciplinary contexts, model dependencies and effects, design and implement hardware and firmware to fit the visualized functions, and then analyze, find the design problems, and improve them on a continuous basis.<br />
What we are talking about is dedicated, focused engineers: people who take a continuous interest in improving.<br />
The process of improving and creating better designs is part of what constitutes excellent engineering.<br />
And the attitude of taking ownership of the design, in the sense that you are responsible, as an engineer, for the performance and safety of the system, counts when you are defending the seemingly more difficult decision to refactor &#8220;perfectly good code&#8221; or to complete &#8220;expensive and time-consuming tests&#8221;. That attitude, when coupled with common sense, is very convincing to the customer and to Management.</p>
<p>There are several methodologies and quality assurance tools that can be used to formalize this process, even ones as simple and effective as the color coding you described.<br />
But the truth is that, when you are fortunate enough to make the process work, everybody on the team feels proud of the work. That is one hell of a good indicator. Non-technical people who are distant from the implementation may be proud of bad products, but not the engineers who can see the hardware and firmware details.</p>
<p>It is not only about refactoring, either. If you have that attitude (continuous improvement), your current design will be at least as good as the last one, but probably better, in several aspects. You should be able to pick at least 5 aspects of your current work that you can improve. If not now, in the next one. But when you see those aspects, fix them in the current design.</p>
<p>Don&#8217;t let the response from the field drive this process; rather, make the field benefit from it.</p>
<p>- Jonny</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: D.P. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-14925</link>
		<dc:creator>D.P. @ LI</dc:creator>
		<pubDate>Sat, 31 Mar 2012 18:43:40 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-14925</guid>
		<description>@Jonny: Unfortunate that you also agree this is a common behavior. With defect fixes on a shipping product there is buy-in up front, as it is currently negatively impacting &quot;the company&quot;, and as you say, test plans, test equipment and so forth already exist. The critical-area testing is &quot;generally followed&quot; by the majority of my customers, likely all of them if I thought about it. I generally don&#039;t disclose too much about the RTOS, LLC processes, but since &quot;us contractors&quot; deal with all types of customers, from those &quot;skilled in the art&quot; to those with literally no product development expertise, we do the following, with &quot;limited success&quot;. In the primary detailed design document, called the Device Profile, we include requirements, followed by test plans, followed by test results and/or references to other documents containing these items when &quot;large&quot;. When we enter a requirement it is done in &quot;black&quot;. When we write a (detailed) test plan we change the requirement and test plan to red text. The test result, in the Device Profile, will minimally follow the test plan and minimally state PASS or FAIL. This text remains &quot;red&quot; until it &quot;passes&quot;, at which time we turn the related material to &quot;blue&quot;. When asked the test status we merely reply, &quot;is the entire document blue?&quot; There are some customers/industries that regularly turn them completely blue, which is the exception. For some customers the product ships and the document is black, or almost totally black, meaning &quot;informal testing may have occurred&quot; but there is no proof of testing. I don&#039;t know about others&#039; experience, but I find that when doing test plans/testing/capturing test results one&#039;s brain must use &quot;different neuron paths&quot; or something, as I ALWAYS find what I would call a defect. Sometimes critical and sometimes not, but I seem to ALWAYS find stuff. 
It&#039;s been my experience that you have a more &quot;global perspective&quot; of interrelated product functionality during testing and a more &quot;focused view&quot; when implementing, and during testing I generally find &quot;inter-dependency issues&quot; more than anything else, generally due to &quot;evolution of requirements or refinement of requirements&quot; during implementation, where you often &quot;miss crossing all the t&#039;s and dotting all the i&#039;s&quot;. Since I obviously have a &quot;black, red and blue&quot; overall view of project status, even without a &quot;single clue&quot; about the content of the Device Profile you get an immediate sense of the project status. Yep, I have lived in the world of &quot;defect databases&quot;, spreadsheets and all that &quot;tracking&quot; stuff, but it never seemed to give me a good feeling relative to the test coverage. Of course our definition of a requirement is &quot;if you can&#039;t test to it and verify it is met, it is NOT a requirement&quot;. Just wondering, how many &quot;really test to requirements&quot; and how many &quot;test for conformance to all requirements&quot;? Also, any suggestions on a &quot;better way&quot; to indicate test status, one that actually indicates the level of test coverage? (By the way, your word processor, which everyone has on their computer, is the only &quot;tool&quot; needed when you do it with colors, versus the defect reporting tools to which most of my customers have no access and thus won&#039;t use.)</description>
		<content:encoded><![CDATA[<p>@Jonny: Unfortunate that you also agree this is a common behavior. With defect fixes on a shipping product there is buy-in up front, as it is currently negatively impacting &#8220;the company&#8221;, and as you say, test plans, test equipment and so forth already exist. The critical-area testing is &#8220;generally followed&#8221; by the majority of my customers, likely all of them if I thought about it. I generally don&#8217;t disclose too much about the RTOS, LLC processes, but since &#8220;us contractors&#8221; deal with all types of customers, from those &#8220;skilled in the art&#8221; to those with literally no product development expertise, we do the following, with &#8220;limited success&#8221;. In the primary detailed design document, called the Device Profile, we include requirements, followed by test plans, followed by test results and/or references to other documents containing these items when &#8220;large&#8221;. When we enter a requirement it is done in &#8220;black&#8221;. When we write a (detailed) test plan we change the requirement and test plan to red text. The test result, in the Device Profile, will minimally follow the test plan and minimally state PASS or FAIL. This text remains &#8220;red&#8221; until it &#8220;passes&#8221;, at which time we turn the related material to &#8220;blue&#8221;. When asked the test status we merely reply, &#8220;is the entire document blue?&#8221; There are some customers/industries that regularly turn them completely blue, which is the exception. For some customers the product ships and the document is black, or almost totally black, meaning &#8220;informal testing may have occurred&#8221; but there is no proof of testing. I don&#8217;t know about others&#8217; experience, but I find that when doing test plans/testing/capturing test results one&#8217;s brain must use &#8220;different neuron paths&#8221; or something, as I ALWAYS find what I would call a defect. 
Sometimes critical and sometimes not, but I seem to ALWAYS find stuff. It&#8217;s been my experience that you have a more &#8220;global perspective&#8221; of interrelated product functionality during testing and a more &#8220;focused view&#8221; when implementing, and during testing I generally find &#8220;inter-dependency issues&#8221; more than anything else, generally due to &#8220;evolution of requirements or refinement of requirements&#8221; during implementation, where you often &#8220;miss crossing all the t&#8217;s and dotting all the i&#8217;s&#8221;. Since I obviously have a &#8220;black, red and blue&#8221; overall view of project status, even without a &#8220;single clue&#8221; about the content of the Device Profile you get an immediate sense of the project status. Yep, I have lived in the world of &#8220;defect databases&#8221;, spreadsheets and all that &#8220;tracking&#8221; stuff, but it never seemed to give me a good feeling relative to the test coverage. Of course our definition of a requirement is &#8220;if you can&#8217;t test to it and verify it is met, it is NOT a requirement&#8221;. Just wondering, how many &#8220;really test to requirements&#8221; and how many &#8220;test for conformance to all requirements&#8221;? Also, any suggestions on a &#8220;better way&#8221; to indicate test status, one that actually indicates the level of test coverage? (By the way, your word processor, which everyone has on their computer, is the only &#8220;tool&#8221; needed when you do it with colors, versus the defect reporting tools to which most of my customers have no access and thus won&#8217;t use.)</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: J.D. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-14924</link>
		<dc:creator>J.D. @ LI</dc:creator>
		<pubDate>Sat, 31 Mar 2012 18:43:17 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-14924</guid>
		<description>@Don: 
Your question (I presume it is not rhetorical) is central to the theme. 
I am directly impacted by this paradoxically possible scenario:
- we need to have bug-free products;
- the route to that is good engineering coupled with comprehensive testing;
- a bug is, by stipulation, a software defect that hits the field;
- if you never release a product, you have a bug-free product;

Of course, you *must* release on a regular basis, so the company makes money, etc.
The danger of Management forcing an early release of firmware that is not sufficiently tested can be very damaging, especially in industrial systems, where bugs can potentially generate high losses and liability. Nevertheless, the scenario you described is surprisingly common.

What I did a few years ago, in my current company, was to enforce a minimum test coverage for *any* firmware, and to declare that *all* firmware has the highest criticality, i.e., operation in mission-critical environments. The test scenarios vary, but are based on the aspects changed in the firmware for the current release, on top of the checklist and regression tests currently in place. We simply do not sign off a release candidate that has not undergone the &quot;minimum period&quot; of testing. 

The approval of the version involves an exposition of the aspects and associated risks of failure, so Management partakes in the decision to release, and Engineering applies pressure not to release early. 

This process resulted in a significant reduction in fielded bugs. We have two product lines based on the 8051 whose releases used to be driven by bugs detected in the field. After adopting this protocol, we started doing planned refactoring and preemptive bug fixing, and almost eliminated bug impacts in the field.

- Jonny</description>
		<content:encoded><![CDATA[<p>@Don:<br />
Your question (I presume it is not rhetorical) is central to the theme.<br />
I am directly impacted by this paradoxically possible scenario:<br />
- we need to have bug-free products;<br />
- the route to that is good engineering coupled with comprehensive testing;<br />
- a bug is, by stipulation, a software defect that hits the field;<br />
- if you never release a product, you have a bug-free product;</p>
<p>Of course, you *must* release on a regular basis, so the company makes money, etc.<br />
The danger of Management forcing an early release of firmware that is not sufficiently tested can be very damaging, especially in industrial systems, where bugs can potentially generate high losses and liability. Nevertheless, the scenario you described is surprisingly common.</p>
<p>What I did a few years ago, in my current company, was to enforce a minimum test coverage for *any* firmware, and to declare that *all* firmware has the highest criticality, i.e., operation in mission-critical environments. The test scenarios vary, but are based on the aspects changed in the firmware for the current release, on top of the checklist and regression tests currently in place. We simply do not sign off a release candidate that has not undergone the &#8220;minimum period&#8221; of testing.</p>
<p>The approval of the version involves an exposition of the aspects and associated risks of failure, so Management partakes in the decision to release, and Engineering applies pressure not to release early. </p>
<p>This process resulted in a significant reduction in fielded bugs. We have two product lines based on the 8051 whose releases used to be driven by bugs detected in the field. After adopting this protocol, we started doing planned refactoring and preemptive bug fixing, and almost eliminated bug impacts in the field.</p>
<p>- Jonny</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: D.P. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-14882</link>
		<dc:creator>D.P. @ LI</dc:creator>
		<pubDate>Fri, 30 Mar 2012 17:50:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-14882</guid>
		<description>Ya, perhaps I was a little too emotional in my statement, but the issue remains relative to the consequences of shipping a bad product. I actually thought about this some more last night and know of various &quot;ship it&quot; scenarios where an individual (and there were quite a few of them) would ask at times during a test phase: &quot;But does anyone know of any identified defect right now?&quot; If you ask the question often enough during testing there will be times when no identified defects exist, although testing is not complete. Then, when there is no response, the product is shipped! My guess is this is somewhat common, as I have seen it a few times. These are the &quot;cases&quot; where some consequences should exist, as it is a kind of knowing negligence. Stuff goes wrong in products when an unforeseen scenario occurs that no one was &quot;intuitive enough&quot; to foresee, and thus a defect is uncovered. This is what I refer to as the &quot;ship it&quot; syndrome, and how would your typical engineer (with a career) confront this behavior? I have also seen projects where there is poor engineering and you can test forever and it may never ship, but the root problem there is different. So what advice do you give those that ask, since CYA is not the issue; fixing the problem is? Anyone can identify the issue; how do you guys fix it? &quot;Not my job, man&quot; or &quot;above my pay grade&quot; is not the answer I am looking for here.</description>
		<content:encoded><![CDATA[<p>Ya, perhaps I was a little too emotional in my statement, but the issue remains relative to the consequences of shipping a bad product. I actually thought about this some more last night and know of various &#8220;ship it&#8221; scenarios where an individual (and there were quite a few of them) would ask at times during a test phase: &#8220;But does anyone know of any identified defect right now?&#8221; If you ask the question often enough during testing there will be times when no identified defects exist, although testing is not complete. Then, when there is no response, the product is shipped! My guess is this is somewhat common, as I have seen it a few times. These are the &#8220;cases&#8221; where some consequences should exist, as it is a kind of knowing negligence. Stuff goes wrong in products when an unforeseen scenario occurs that no one was &#8220;intuitive enough&#8221; to foresee, and thus a defect is uncovered. This is what I refer to as the &#8220;ship it&#8221; syndrome, and how would your typical engineer (with a career) confront this behavior? I have also seen projects where there is poor engineering and you can test forever and it may never ship, but the root problem there is different. So what advice do you give those that ask, since CYA is not the issue; fixing the problem is? Anyone can identify the issue; how do you guys fix it? &#8220;Not my job, man&#8221; or &#8220;above my pay grade&#8221; is not the answer I am looking for here.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: F.D. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-14881</link>
		<dc:creator>F.D. @ LI</dc:creator>
		<pubDate>Fri, 30 Mar 2012 17:50:38 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-14881</guid>
		<description>So what manager do you guys know who was fired after a bad product was shipped? If there are no consequences, this behavior does not change. However, you probably know of some engineers who paid the price because they &quot;did it wrong&quot;. Shipping a bad product, if it meant career disaster, might impact &quot;just ship it&quot; decisions. 
-------
Fred Brooks wasn&#039;t fired - but he was one of our country&#039;s greatest pioneers. Anyone who repeats his mistakes once acquainted with them should see a shrink for shock therapy.</description>
		<content:encoded><![CDATA[<p>So what manager do you guys know who was fired after a bad product was shipped? If there are no consequences, this behavior does not change. However, you probably know of some engineers who paid the price because they &#8220;did it wrong&#8221;. Shipping a bad product, if it meant career disaster, might impact &#8220;just ship it&#8221; decisions.<br />
&#8212;&#8212;-<br />
Fred Brooks wasn&#8217;t fired &#8211; but he was one of our country&#8217;s greatest pioneers. Anyone who repeats his mistakes once acquainted with them should see a shrink for shock therapy.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: F.D. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-14880</link>
		<dc:creator>F.D. @ LI</dc:creator>
		<pubDate>Fri, 30 Mar 2012 17:50:08 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-14880</guid>
		<description>Product quality is a &quot;risk assessment issue&quot; where I have NEVER seen any theories or calculations on &quot;this issue&quot; for an engineer to place in front of the &quot;ship it&quot; decision makers. 
------------
Well, I certainly have.

http://www.amazon.com/Principles-Of-Software-Engineering-Management/dp/0201192462</description>
		<content:encoded><![CDATA[<p>Product quality is a &#8220;risk assessment issue&#8221; where I have NEVER seen any theories or calculations on &#8220;this issue&#8221; for an engineer to place in front of the &#8220;ship it&#8221; decision makers.<br />
&#8212;&#8212;&#8212;&#8212;<br />
Well, I certainly have.</p>
<p><a href="http://www.amazon.com/Principles-Of-Software-Engineering-Management/dp/0201192462" rel="nofollow">http://www.amazon.com/Principles-Of-Software-Engineering-Management/dp/0201192462</a></p>
]]></content:encoded>
	</item>
	<item>
		<title>By: D.P. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-14845</link>
		<dc:creator>D.P. @ LI</dc:creator>
		<pubDate>Thu, 29 Mar 2012 15:24:05 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-14845</guid>
		<description>@Jonny, Frank &amp; Eric: I have been watching this issue, and commenting on it, for some time. You guys are &quot;saying it like it is&quot; and I just have some minor additions. First, I agree with all of you, with some tweaks. I provide licensed, high quality, reusable assets to my customers and then interconnect them to make products. I send a &quot;bill&quot; for the licensing and the &quot;work&quot;. Thus the &quot;cost of quality&quot; can be measured. I sometimes deal with customers with no electronic product development knowledge, and thus I see &quot;normal human behavior&quot;, since those who haven&#039;t designed complex hardware or software can&#039;t even relate to what you do every day. In effect, a product ships when the customer does not want to spend any more testing money. Companies with experience (generally large, with deep pockets) tend to &quot;do it right&quot;. I have worked for two MAJOR control companies, and when some division has a &quot;recall&quot; the &quot;cost of lack of quality&quot; can also be measured. Product quality is a &quot;risk assessment issue&quot; where I have NEVER seen any theories or calculations on &quot;this issue&quot; for an engineer to place in front of the &quot;ship it&quot; decision makers. It is like Congress and the deficit and the budget: most people will put their head in the sand, determine what their next day will look like if they ship it, and kick the can down the road. This is not true for those who understand the risks and open issues in a product. So what manager do you guys know who was fired after a bad product was shipped? If there are no consequences, this behavior does not change. However, you probably know of some engineers who paid the price because they &quot;did it wrong&quot;. Shipping a bad product, if it meant career disaster, might impact &quot;just ship it&quot; decisions. The old &quot;cause &amp; effect&quot; discussion.</description>
		<content:encoded><![CDATA[<p>@Jonny, Frank &amp; Eric: I have been watching this issue, and commenting on it, for some time. You guys are &#8220;saying it like it is&#8221; and I just have some minor additions. First, I agree with all of you, with some tweaks. I provide licensed, high quality, reusable assets to my customers and then interconnect them to make products. I send a &#8220;bill&#8221; for the licensing and the &#8220;work&#8221;. Thus the &#8220;cost of quality&#8221; can be measured. I sometimes deal with customers with no electronic product development knowledge, and thus I see &#8220;normal human behavior&#8221;, since those who haven&#8217;t designed complex hardware or software can&#8217;t even relate to what you do every day. In effect, a product ships when the customer does not want to spend any more testing money. Companies with experience (generally large, with deep pockets) tend to &#8220;do it right&#8221;. I have worked for two MAJOR control companies, and when some division has a &#8220;recall&#8221; the &#8220;cost of lack of quality&#8221; can also be measured. Product quality is a &#8220;risk assessment issue&#8221; where I have NEVER seen any theories or calculations on &#8220;this issue&#8221; for an engineer to place in front of the &#8220;ship it&#8221; decision makers. It is like Congress and the deficit and the budget: most people will put their head in the sand, determine what their next day will look like if they ship it, and kick the can down the road. This is not true for those who understand the risks and open issues in a product. So what manager do you guys know who was fired after a bad product was shipped? If there are no consequences, this behavior does not change. However, you probably know of some engineers who paid the price because they &#8220;did it wrong&#8221;. Shipping a bad product, if it meant career disaster, might impact &#8220;just ship it&#8221; decisions. The old &#8220;cause &amp; effect&#8221; discussion.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: E.J. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-14844</link>
		<dc:creator>E.J. @ LI</dc:creator>
		<pubDate>Thu, 29 Mar 2012 15:23:44 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-14844</guid>
		<description>I have to agree with Frank – refactoring imposes a risk and that has to be balanced so that the business risk is acceptable, i.e. it is not a universally good thing to do.

I tend to leave working legacy code alone, however much I hate it and however much I think it could be improved – why do more work and increase risk when there is no need; why expose your business to risk for no gain?

If I am changing code I may refactor some of the code involved if it makes my task easier, if it reduces the risk of errors, if it improves my ability to maintain and support the software in future.

This risk based approach means – I change only what is absolutely essential if a release is small and most of the code has been tested and is stable.

I do low-risk refactoring if the change to the software is small and the risk is small enough to justify the improvement.

I do larger refactoring for major releases, to reduce my risk and reduce my effort now or in the future, or pave the way for changes I expect to receive later.

Sometimes what a customer wants is a significant change that requires refactoring and also justifies a lot of testing, in which case I may do a considerable amount of refactoring, usually with my customer&#039;s full agreement. (That way they share some of the risk, but also understand some of the benefit to their product.)

Refactoring on a large scale for a minor enhancement release is foolish – and a risk I will not take. I am in business, not just a techy.

I guess I am saying: balance the effort and risk of refactoring against the benefit of the release it is being added to. Your customers do not want refactoring; they just want new features, so this is a commercial and technical decision that has to balance benefit and risk.</description>
		<content:encoded><![CDATA[<p>I have to agree with Frank – refactoring imposes a risk and that has to be balanced so that the business risk is acceptable, i.e. it is not a universally good thing to do.</p>
<p>I tend to leave working legacy code alone, however much I hate it and however much I think it could be improved – why do more work and increase risk when there is no need; why expose your business to risk for no gain?</p>
<p>If I am changing code I may refactor some of the code involved if it makes my task easier, if it reduces the risk of errors, if it improves my ability to maintain and support the software in future.</p>
<p>This risk based approach means – I change only what is absolutely essential if a release is small and most of the code has been tested and is stable.</p>
<p>I do low-risk refactoring if the change to the software is small and the risk is small enough to justify the improvement.</p>
<p>I do larger refactoring for major releases, to reduce my risk and reduce my effort now or in the future, or pave the way for changes I expect to receive later.</p>
<p>Sometimes what a customer wants is a significant change that requires refactoring and also justifies a lot of testing, in which case I may do a considerable amount of refactoring, usually with my customer&#8217;s full agreement. (That way they share some of the risk, but also understand some of the benefit to their product.)</p>
<p>Refactoring on a large scale for a minor enhancement release is foolish – and a risk I will not take. I am in business, not just a techy.</p>
<p>I guess I am saying: balance the effort and risk of refactoring against the benefit of the release it is being added to. Your customers do not want refactoring; they just want new features, so this is a commercial and technical decision that has to balance benefit and risk.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: F.D. @ LI</title>
		<link>http://www.embeddedinsights.com/channels/2012/02/29/do-you-refactor-embedded-software/#comment-14843</link>
		<dc:creator>F.D. @ LI</dc:creator>
		<pubDate>Thu, 29 Mar 2012 15:23:21 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=699#comment-14843</guid>
		<description>While I believe strongly that code &quot;that works&quot; should evolve carefully into more robust representations, I don&#039;t want to minimize the real-world risk that any change to a complex system will impact the short-term bottom-line. This risk is compounded by simultaneous feature creep.

Software Zen:

The inability of Management to correctly allocate time for the design of &quot;non-profit&quot; standard unit tests, system tests, and field tests, which themselves introduce a different class of failure, is legendary. I suspect that unexpected software refactoring horror stories lead non-practicing Software Management to conclude that software engineers are prima donnas whose pride of ownership and OCD regarding &quot;elegant&quot; code must be quietly discouraged.

The difference in attitude concerning the quality of a &quot;working&quot; product can be experienced first-hand by the non-civil-engineering traveller as roads transition between the United States and Mexico. Software, not so much.

The psychological aspect of Software Engineering puts a premium on all aspects of interpersonal relationships, communication skills and good general mental health.

Though our argot is subtle, and slightly subversive, it&#039;s universal. On the other hand, being profoundly wise doesn&#039;t meet deadlines by itself.</description>
		<content:encoded><![CDATA[<p>While I believe strongly that code &#8220;that works&#8221; should evolve carefully into more robust representations, I don&#8217;t want to minimize the real-world risk that any change to a complex system will impact the short-term bottom-line. This risk is compounded by simultaneous feature creep.</p>
<p>Software Zen:</p>
<p>The inability of Management to correctly allocate time for the design of &#8220;non-profit&#8221; standard unit tests, system tests, and field tests, which themselves introduce a different class of failure, is legendary. I suspect that unexpected software refactoring horror stories lead non-practicing Software Management to conclude that software engineers are prima donnas whose pride of ownership and OCD regarding &#8220;elegant&#8221; code must be quietly discouraged.</p>
<p>The difference in attitude concerning the quality of a &#8220;working&#8221; product can be experienced first-hand by the non-civil-engineering traveller as roads transition between the United States and Mexico. Software, not so much.</p>
<p>The psychological aspect of Software Engineering puts a premium on all aspects of interpersonal relationships, communication skills and good general mental health.</p>
<p>Though our argot is subtle, and slightly subversive, it&#8217;s universal. On the other hand, being profoundly wise doesn&#8217;t meet deadlines by itself.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
