<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Unit test tools and automatic test generation</title>
	<atom:link href="http://www.embeddedinsights.com/channels/2012/03/19/unit-test-tools-and-automatic-test-generation/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.embeddedinsights.com/channels/2012/03/19/unit-test-tools-and-automatic-test-generation/</link>
	<description>Shedding Light on the Hidden World of Embedded Systems</description>
	<lastBuildDate>Mon, 28 Jul 2014 16:18:37 -0400</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.0</generator>
	<item>
		<title>By: Massimo Manca</title>
		<link>http://www.embeddedinsights.com/channels/2012/03/19/unit-test-tools-and-automatic-test-generation/#comment-14513</link>
		<dc:creator>Massimo Manca</dc:creator>
		<pubDate>Wed, 21 Mar 2012 18:07:57 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=706#comment-14513</guid>
		<description>Hello Mark,
did you read what the NASA team said about the unintended acceleration of some Toyota cars? It said that with current testing tools there is no way to find the error, or even to tell whether there is an error at all. This is not good news, and I am very surprised the team said it.

Next week I will read the NASA report on the software inspection and will have more to say.

The real question is exactly this: how much does it cost to develop software/firmware the agile way versus the traditional way?

Agile development lowers development costs by between 30% and 70%, depending on the embedded sub-market, because it shortens development time by an order of magnitude.

I don&#039;t work in a single market: most of the time I work in industrial, small appliances, white goods and sometimes automotive, but also in other markets, and I also help companies solve the problems they already have in existing products.

With today&#039;s embedded software complexity, without a good testing practice you may end up with about 15 to 50 errors per 1000 C/C++ SLOC. Fixing just 20% of these errors would make about 80% of customers happy (Pareto&#039;s law). But how much does it cost to correct these errors? In embedded markets it seems that delivering a product with 20% fewer errors raises the entire development cost to about 3 times that of developing the product with agile techniques from the beginning, and that is without considering customer-service costs in either case.

With a traditional development cycle a C/C++ SLOC may cost between $15 and $40 in the USA; with agile techniques it is about $6 to $9. And with good agile development starting from executable requirements we see about 1 to 3 errors per 100,000 SLOC, which we normally correct during the continuous building/testing of the application (these are the only errors left behind by a TDD/BDD unit-test fixture), so we don&#039;t add many weeks to fix application errors.

Many times I have been tempted to take the original software and rewrite it from scratch because of its bad quality and poor testing. I learned that this pays off only if the product is not too big, maybe &lt; 100,000 SLOC (C or C++ code), and if you have 2 or 3 months to work on it.

If the project is bigger, it normally pays better to define executable acceptance tests from the requirements (ideally after a short review) to represent them, prioritize them, and execute them to find and fix one bug at a time.

Project quality increases day by day, and the customer stays confident that I will deliver a solution.

In the past I evaluated the LDRA tools, and I also used them on a customer site. They are attractive, but they didn&#039;t help me that much; only the MISRA-C conformance checks saved me from manually inspecting the code line by line.

To find difficult bugs, also given the limited budget of many customers, I prefer to instrument the source code down to the C statement level (yes, I mean if, while, for and so on) and save the debug output to an internal or external file. Code coverage also helps for C source code, but not so much for C++ if you use a lot of templates and class hierarchies/derivation.

So the problem is not in the tools but in this particular practice, automatic unit test generation: the good tests are not the ones that can be generated automatically.</description>
		<content:encoded><![CDATA[<p>Hello Mark,<br />
did you read what the NASA team said about the unintended acceleration of some Toyota cars? It said that with current testing tools there is no way to find the error, or even to tell whether there is an error at all. This is not good news, and I am very surprised the team said it.</p>
<p>Next week I will read the NASA report on the software inspection and will have more to say.</p>
<p>The real question is exactly this: how much does it cost to develop software/firmware the agile way versus the traditional way?</p>
<p>Agile development lowers development costs by between 30% and 70%, depending on the embedded sub-market, because it shortens development time by an order of magnitude.</p>
<p>I don&#8217;t work in a single market: most of the time I work in industrial, small appliances, white goods and sometimes automotive, but also in other markets, and I also help companies solve the problems they already have in existing products.</p>
<p>With today&#8217;s embedded software complexity, without a good testing practice you may end up with about 15 to 50 errors per 1000 C/C++ SLOC. Fixing just 20% of these errors would make about 80% of customers happy (Pareto&#8217;s law). But how much does it cost to correct these errors? In embedded markets it seems that delivering a product with 20% fewer errors raises the entire development cost to about 3 times that of developing the product with agile techniques from the beginning, and that is without considering customer-service costs in either case.</p>
<p>With a traditional development cycle a C/C++ SLOC may cost between $15 and $40 in the USA; with agile techniques it is about $6 to $9. And with good agile development starting from executable requirements we see about 1 to 3 errors per 100,000 SLOC, which we normally correct during the continuous building/testing of the application (these are the only errors left behind by a TDD/BDD unit-test fixture), so we don&#8217;t add many weeks to fix application errors.</p>
<p>Many times I have been tempted to take the original software and rewrite it from scratch because of its bad quality and poor testing. I learned that this pays off only if the product is not too big, maybe &lt; 100,000 SLOC (C or C++ code), and if you have 2 or 3 months to work on it.</p>
<p>If the project is bigger, it normally pays better to define executable acceptance tests from the requirements (ideally after a short review) to represent them, prioritize them, and execute them to find and fix one bug at a time.</p>
<p>Project quality increases day by day, and the customer stays confident that I will deliver a solution.</p>
<p>In the past I evaluated the LDRA tools, and I also used them on a customer site. They are attractive, but they didn&#8217;t help me that much; only the MISRA-C conformance checks saved me from manually inspecting the code line by line.</p>
<p>To find difficult bugs, also given the limited budget of many customers, I prefer to instrument the source code down to the C statement level (yes, I mean if, while, for and so on) and save the debug output to an internal or external file. Code coverage also helps for C source code, but not so much for C++ if you use a lot of templates and class hierarchies/derivation.</p>
<p>So the problem is not in the tools but in this particular practice, automatic unit test generation: the good tests are not the ones that can be generated automatically.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Mark Pitchford</title>
		<link>http://www.embeddedinsights.com/channels/2012/03/19/unit-test-tools-and-automatic-test-generation/#comment-14499</link>
		<dc:creator>Mark Pitchford</dc:creator>
		<pubDate>Wed, 21 Mar 2012 09:52:39 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=706#comment-14499</guid>
		<description>Massimo Manca: Thank you for your comments. 

You may be surprised that I agree entirely with your points for the market I suspect you work in. This article deals specifically with the automatic generation of unit tests and the fact that it makes the relevant tools accessible to people who cannot generally justify the investment you imply, which is perhaps the point you miss. 

From a commercial perspective, software (like anything else) needs to be of adequate quality. That doesn&#039;t always mean the best quality - there are all sorts of commercial judgements to make, such as the cost of failure and time to market. It is analogous to the fact that if a small Fiat (or Ford, or Skoda, or whatever) were made to the same quality standards as a Rolls-Royce, it would become far too expensive to compete.  

Likewise, not everyone is writing safety-critical code. Not everyone is working to the levels of excellence you are clearly aspiring to; for some people, reassurance by this means is a step forward from their present practice of functional testing only. That doesn&#039;t necessarily make them wrong. It may just mean that they have different criteria to work by. 

I&#039;d encourage you to look at other articles from me in particular and LDRA in general, and you will see that our sphere of influence extends through the requirements traceability you mentioned and on to object code verification, which goes beyond your observations in pursuit of excellence when that is required.</description>
		<content:encoded><![CDATA[<p>Massimo Manca: Thank you for your comments. </p>
<p>You may be surprised that I agree entirely with your points for the market I suspect you work in. This article deals specifically with the automatic generation of unit tests and the fact that it makes the relevant tools accessible to people who cannot generally justify the investment you imply, which is perhaps the point you miss. </p>
<p>From a commercial perspective, software (like anything else) needs to be of adequate quality. That doesn&#8217;t always mean the best quality &#8211; there are all sorts of commercial judgements to make, such as the cost of failure and time to market. It is analogous to the fact that if a small Fiat (or Ford, or Skoda, or whatever) were made to the same quality standards as a Rolls-Royce, it would become far too expensive to compete.  </p>
<p>Likewise, not everyone is writing safety-critical code. Not everyone is working to the levels of excellence you are clearly aspiring to; for some people, reassurance by this means is a step forward from their present practice of functional testing only. That doesn&#8217;t necessarily make them wrong. It may just mean that they have different criteria to work by. </p>
<p>I&#8217;d encourage you to look at other articles from me in particular and LDRA in general, and you will see that our sphere of influence extends through the requirements traceability you mentioned and on to object code verification, which goes beyond your observations in pursuit of excellence when that is required.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Massimo Manca</title>
		<link>http://www.embeddedinsights.com/channels/2012/03/19/unit-test-tools-and-automatic-test-generation/#comment-14480</link>
		<dc:creator>Massimo Manca</dc:creator>
		<pubDate>Tue, 20 Mar 2012 20:21:45 +0000</pubDate>
		<guid isPermaLink="false">http://www.embeddedinsights.com/channels/?p=706#comment-14480</guid>
		<description>Hello Mark,

1. I totally agree: the cost of fixing a defect increases exponentially as time goes on. It is the only point I can agree with. In the rest of the article there is a real misconception about the difference between a &quot;unit test&quot; and &quot;a test made by a developer&quot;. Also, the graphical view is typical of non-agile development strategies; the same steps can be performed in an agile way following the continuous-development principle, which also means continuous build, continuous test, continuous integration and continuous deployment.

2. Being an agile developer in the embedded market, I can&#039;t agree with the general meaning given to unit tests in the article. A unit test case written without adopting a test-driven development cycle, or better a behaviour-driven development cycle, is a false solution, simply because it cannot ensure that I write the source code I really need in order to meet the requirement I need to test.

3. The missing point of the article is the most important advantage of unit tests: the direct link between requirements and unit tests. Without that link, unit tests are just the same as developer tests (mostly done with a debugger, and so not really traceable), with the only benefit of showing which tests the developer performed. It means the developer missed the necessity of having executable requirements agreed with the customer (the customer may also be the internal product manager).
Writing the unit test code after the piece of software under test is just as wrong as writing the requirements after the application software. A unit test function has to be written before the code under test, simply because it is the &quot;executable translation&quot; of a requirement; so if we agree that requirements have to be written before the application, we have to agree that unit test functions have to be written before the code they test.

4. Automatic test generation for non-legacy code is only a defensive strategy (for management) to try to enforce better software quality without increasing development costs, because managers tend to think of unit tests as more software to write, increasing the SLOC or function points (or whatever other metric they use) needed to develop the product. I know managers who computed the ratio of application code to unit-test and acceptance-test code in order to show an increased development cost and so avoid embracing an agile development strategy and the unit-test strategy underlying it.

5. About legacy code: a company reusing its software is common good sense, because working software is part of the company&#039;s know-how; it was an investment and should be used as much as possible. Generally, company procedures require more documentation than tests before software can be reused, but the real questions are: how much of this know-how is well tested, and how capable are the developers of reusing it?

Developers should learn from the Ariane 5 and Therac-25 software errors to improve the way they reuse source code. Both failures would have been discovered in the early stages of the projects using BDD and TDD approaches.

So if I am not sure that a piece of legacy software can help me, I simply write my application using BDD, and when I have to make my tests pass I integrate the legacy module (I mean a class, a package or a function); if it passes the tests, fine; if not, I correct it and update it in the source version repository (together with the tests used to verify the code) for the benefit of all the developers. This is the most secure source-code reuse practice I have learned in the last 25 years.

6. Using an auto-generated unit test fixture to test legacy code has the very bad effect that the developer believes he has a safe piece of source code (assuming all tests pass), but this is not true, especially in the embedded world. Porting a well-made, working piece of code (well tested, following the ESA test procedures) to a different processor was the real cause of the Ariane 5 failure (the rocket self-destructed on its first flight with all its payload): it passed all the original tests, but they weren&#039;t updated to take into account the different data sizes of Ariane 5 versus Ariane 4 in the calling function.

7. About unit testing made &quot;commercially attractive&quot; by automatic test generation: this is just the latest attempt to sell what I call &quot;the last failing silver bullet&quot;. A company will do the best business if it develops and sells high-quality products, with zero or very few defects, at the right price/benefit ratio, within the time window of the market opportunity, which for high-tech products keeps shortening; and this kind of business culture goes well beyond unit test automation tools.</description>
		<content:encoded><![CDATA[<p>Hello Mark,</p>
<p>1. I totally agree: the cost of fixing a defect increases exponentially as time goes on. It is the only point I can agree with. In the rest of the article there is a real misconception about the difference between a &#8220;unit test&#8221; and &#8220;a test made by a developer&#8221;. Also, the graphical view is typical of non-agile development strategies; the same steps can be performed in an agile way following the continuous-development principle, which also means continuous build, continuous test, continuous integration and continuous deployment.</p>
<p>2. Being an agile developer in the embedded market, I can&#8217;t agree with the general meaning given to unit tests in the article. A unit test case written without adopting a test-driven development cycle, or better a behaviour-driven development cycle, is a false solution, simply because it cannot ensure that I write the source code I really need in order to meet the requirement I need to test.</p>
<p>3. The missing point of the article is the most important advantage of unit tests: the direct link between requirements and unit tests. Without that link, unit tests are just the same as developer tests (mostly done with a debugger, and so not really traceable), with the only benefit of showing which tests the developer performed. It means the developer missed the necessity of having executable requirements agreed with the customer (the customer may also be the internal product manager).<br />
Writing the unit test code after the piece of software under test is just as wrong as writing the requirements after the application software. A unit test function has to be written before the code under test, simply because it is the &#8220;executable translation&#8221; of a requirement; so if we agree that requirements have to be written before the application, we have to agree that unit test functions have to be written before the code they test.</p>
<p>4. Automatic test generation for non-legacy code is only a defensive strategy (for management) to try to enforce better software quality without increasing development costs, because managers tend to think of unit tests as more software to write, increasing the SLOC or function points (or whatever other metric they use) needed to develop the product. I know managers who computed the ratio of application code to unit-test and acceptance-test code in order to show an increased development cost and so avoid embracing an agile development strategy and the unit-test strategy underlying it.</p>
<p>5. About legacy code: a company reusing its software is common good sense, because working software is part of the company&#8217;s know-how; it was an investment and should be used as much as possible. Generally, company procedures require more documentation than tests before software can be reused, but the real questions are: how much of this know-how is well tested, and how capable are the developers of reusing it?</p>
<p>Developers should learn from the Ariane 5 and Therac-25 software errors to improve the way they reuse source code. Both failures would have been discovered in the early stages of the projects using BDD and TDD approaches.</p>
<p>So if I am not sure that a piece of legacy software can help me, I simply write my application using BDD, and when I have to make my tests pass I integrate the legacy module (I mean a class, a package or a function); if it passes the tests, fine; if not, I correct it and update it in the source version repository (together with the tests used to verify the code) for the benefit of all the developers. This is the most secure source-code reuse practice I have learned in the last 25 years.</p>
<p>6. Using an auto-generated unit test fixture to test legacy code has the very bad effect that the developer believes he has a safe piece of source code (assuming all tests pass), but this is not true, especially in the embedded world. Porting a well-made, working piece of code (well tested, following the ESA test procedures) to a different processor was the real cause of the Ariane 5 failure (the rocket self-destructed on its first flight with all its payload): it passed all the original tests, but they weren&#8217;t updated to take into account the different data sizes of Ariane 5 versus Ariane 4 in the calling function.</p>
<p>7. About unit testing made &#8220;commercially attractive&#8221; by automatic test generation: this is just the latest attempt to sell what I call &#8220;the last failing silver bullet&#8221;. A company will do the best business if it develops and sells high-quality products, with zero or very few defects, at the right price/benefit ratio, within the time window of the market opportunity, which for high-tech products keeps shortening; and this kind of business culture goes well beyond unit test automation tools.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
