Project Management Channel

How long should testing changes take?

Wednesday, December 21st, 2011 by Robert Cravotta

The two-month payroll tax cut at the center of a budget bill currently going through the US Congress has elicited a response from a trade organization representing the people who would have to implement the new law, and it is the inspiration for this week’s question. A payroll processing trade organization has claimed that even if the bill became law, it would be logistically impossible to make the changes in tax software before the two-month extension expires. The trade organization claims the changes required by the bill would take at least 90 days for software testing alone, in addition to the time needed for analysis, design, coding, and implementation. Somehow this scenario makes me think of past conversations where marketing/management would request changes to a system and engineering would push back because there was not enough time to properly perform the change and testing before the delivery date.

If you are part of the group requesting the “simple” change, you may think the developers are overstating the complexity of implementing the change. Often, in my experience, there is strong merit to the developers’ claims because the “simple” change involves some non-obvious complexity, especially when the change affects multiple parts of the system.

In my own experience, we worked on many R&D projects, most with extremely aggressive schedules and engineering budgets. Many of these were quick-and-dirty proofs of concept, and “simple” changes did not have to go through the rigorous production processes – or so the requesters felt. What saved the engineering team on many of these requests was the need to minimize the number of variations between iterations of the prototype so that we could perform useful analysis on the test data in the event of failures. Also, we locked down feature changes to the software during system integration so that all changes were in response to resolving system integration issues.

I suspect this perspective, that changes can be made quickly and at low risk, has been reinforced by the electronics industry’s success in delivering what appears to be the predictable and mundane annual advance of silicon products that cost 30% less and/or deliver twice as much performance as the previous year’s parts. Compounding this perception are all of the “science-based” television series that show complex engineering and scientific tasks being completed by one or two people in hours or days when in reality they would take dozens to hundreds of people months to complete.

How long should testing changes to a system take? Is it reasonable to expect any change to be ordered, analyzed, designed, implemented, and tested in less than two weeks? I realize that the length of time will depend on the complexity of the change request, but two weeks seems like an aggressive limit for implementing any change that might indirectly affect the operation of the system. That is especially true for embedded systems, where the types of changes requested are much more complex than changing the color of a button or moving a message to a different portion of the display. How does your team manage change requests and the time it takes to process and implement them?

What tips do you have for estimating/budgeting?

Wednesday, November 2nd, 2011 by Robert Cravotta

Many years ago, during a visit to my doctor, he pointed out that I had visited him around the same time each year for the past few years with roughly the same symptoms – all of them stress related. It was at that moment that it finally dawned on me how stressful year-end budgeting activities were for me. It was also the moment when I understood how to focus my energy to minimize the stress that this time of year placed on me by approaching the year-end budgeting activities from a different perspective.

I do not recall from whom I first heard the expression “water off a duck’s back,” but it has probably been a life saver for getting me successfully through many stressful events, including year-end budgeting. The expression brings to mind images of ducks popping up to the surface of the water after diving under to eat. Remarkably, the water all rolls off the duck’s back, and the ducks are dry immediately after surfacing. I had a friend who had hair like that, but the expression “water off Paul’s head” is not quite as visually effective.

The stress of needing to take an accurate assessment of my project or department’s current condition coupled with having to project and negotiate for those resources we would need to accomplish our goals for the next planning period was much easier to handle if I could imagine the stress falling off me. Equally important in handling the extra stress of this time of year was realizing which goals were real and which goals were what we called management BHAGs (big hairy-a** goals).

My management at the time thought it was a good idea to purposefully create goals that they knew probably could not be attained, in the hope that we might complete a significant portion of them with far fewer resources than we might otherwise expect to need. I’m not convinced that the BHAG approach works if you overuse it. If you have too many of them, or they are just too large of a leap, there is a huge temptation for the team to just write off the goal and internally adopt a more realistic goal anyway.

Going over earlier budgeting proposals and comparing them to what actually happened proved to be a helpful exercise. First, it provides a loose baseline for the new budget proposal. Second, it can provide a mechanism for improving your budgeting accuracy because you might notice a pattern in your budget versus actuals. For example, are the proposed budgets even close to the actuals? Are they too high or too low? Do the budgets/actuals trend in any direction? My experience showed that our trend line was fairly constant year over year, but that allocating a portion of the budget to acquiring and updating tools each year was an important part of keeping that cost line from trending upward as project complexities increased.
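
To make that budget-versus-actuals comparison concrete, here is a minimal sketch of the kind of check described above. The years, dollar figures, and variable names are invented for illustration only; substitute your own project history.

    # Hypothetical budget-versus-actuals comparison (all figures invented).
    years  = [2008, 2009, 2010, 2011]
    budget = [500_000, 520_000, 545_000, 560_000]   # proposed budget per year
    actual = [530_000, 515_000, 570_000, 590_000]   # what was actually spent

    for year, b, a in zip(years, budget, actual):
        variance_pct = 100.0 * (a - b) / b          # positive means over budget
        print(f"{year}: proposed {b:>9,}  actual {a:>9,}  variance {variance_pct:+.1f}%")

    # A crude trend check: compare the average variance of the earlier years
    # against the later years to see whether the estimates are drifting.
    variances = [100.0 * (a - b) / b for b, a in zip(budget, actual)]
    half = len(variances) // 2
    early = sum(variances[:half]) / half
    late = sum(variances[half:]) / (len(variances) - half)
    print(f"average variance: early years {early:+.1f}%, later years {late:+.1f}%")

Even something this simple makes it obvious whether the proposals are drifting away from the actuals over time, which is the pattern worth watching for when building the next proposal.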

Do you know any useful tips to share about how to be more effective at estimating projects and performing planning and budgeting activities? Does any type of tool, including spreadsheets, prove especially useful in tracking past, present, and future projects and actuals? What are the types of information you find most valuable in developing a budget and schedule? How important is having a specific person, skill set, or tool set available to making projections that you can meet?

Does only one person dominate your designs?

Thursday, October 13th, 2011 by Robert Cravotta

The recent death of Apple’s former CEO, Steve Jobs, has been marked by many articles about his life. The products that Apple has launched and supported over the years have greatly influenced the technology markets. Many people are asking whether Apple can maintain its technology position without Steve. If the design process at Apple was completely dominated by Steve, then this is a real question; however, if the design process at Apple proceeded in a fashion similar to that at all of the places I have worked, then Apple should be able to continue doing what it has been doing for the past decade with much success.

Leonard E. Read’s article “I, Pencil: My Family Tree as told to Leonard E. Read” points out why Apple should be able to continue to prosper. This excerpt highlights the article’s profound claim:

There isn’t a single person in all these millions, including the president of the pencil company, who contributes more than a tiny, infinitesimal bit of know-how. From the standpoint of know-how the only difference between the miner of graphite in Ceylon and the logger in Oregon is in the type of know-how. Neither the miner nor the logger can be dispensed with, any more than can the chemist at the factory or the worker in the oil field—paraffin being a by-product of petroleum.

The article applies this level of complexity to a pencil, which is simpler in composition and design than any of Apple’s products. If Steve acted correctly as a CEO, he did not allow himself to become a single point of failure for the company. Other people with similar talents (even if those talents are spread across several people instead of residing in just one person) should already be identified and integrated into the design process.

A key function for any manager is to be able to identify at-risk talents, skills, and experience within their groups and to create an environment where losing any single person will not kill the group’s ability to complete its tasks. The group’s productivity may suffer, but the tasks can be correctly completed.

Does the management of any large and successful company really allow its future to rest on the shoulders of a single individual? Does a single person within your group dominate the design process so thoroughly that if they were to “win the lottery” and suddenly disappear your group would be in trouble? What are some of the strategies your group uses to ensure that the loss of a single person does not become a large risk to the completion of a project? Do you have a formal or informal process for cross training your team members?

The Engineer: Marketing’s Secret Weapon

Monday, August 29th, 2011 by Rae Morrow and Bob Frostholm

For many engineers the most exciting part of developing a product is being able to say, “it’s done!” But this really isn’t the end of the cycle; getting the product to market is the developer’s next step. Projects that involve engineers in the marketing process reap added benefits. Technical teams exploring your product, without exception, are more open with their information when speaking with your engineers. By taking advantage of this “brotherhood of engineers” bond, designers can glean insights into future needs and requirements to gain an early leg up on their next-generation products.

Marketeers spend hours upon hours developing branding campaigns, datasheets, technical articles, ads, brochures, PowerPoint presentations, application notes, flashy web pages, and more to assist the sales force in winning a potential buyer’s trust and eventually their business. The quality of these tools is critical to the success of the front-line salesperson. When these tools are inadequate, creativity steps in, and all too often “winging it” results in some degree of lost credibility. We have all experienced the overexuberance of a salesperson trying to embellish their product beyond the point of believability.

Creating dynamite sales tools requires forethought and planning. It begins with a thorough understanding of the product’s value propositions. Whether the product concept originated within the marketing department or the engineering department, it is the engineers who have to embed those values into the new product. Marketeers then need to extract those values from engineering in the form of features and benefits that can be translated easily into ‘sales-speak’. The information flow has gone from engineer to marketeer to salesperson to potential buyer.

Think back to your first-grade class when, on a rainy day, the teacher gathered everyone in a circle, whispered something into the ear of the child to her right, and then asked that child to do the same to the child to their right. By the time the information came full circle back to the teacher, it barely resembled the original message. Today, there are dozens of different channels through which information is delivered to a potential buyer, and it requires discipline to keep the message consistent across all of them. The critical technical information, the kind that can make or break a sale, originates in Engineering (see figure).

It is obvious how confusing the message can become when the buyer is barraged with dozens of interpretations. Some channels truncate the message to fit their format (how much can be communicated in 140 characters?) while others rewrite it and add interpretations and comparative analysis. In the end, the buyer does not know whom to believe.

There are several ways to ensure your company is delivering strong and consistent messaging. For some companies this means retaining a dedicated person in the marcom department with strong technical writing and organizational skills. Another is to work with a PR (public relations) firm that specializes in the electronics industry and whose team manages the timeliness of the communications flow and keeps the messaging consistent within each channel.

When all the basics have been covered, it is time to deploy the secret weapon: the engineer. Placing engineers face to face with potential buyers is becoming more common. The buyer’s appetite for the product has been stimulated by marketing’s branding and product positioning. Properly executed, the positioning has resulted in several third-party endorsements that the buyer cannot refute.

Exposing the knowledge, skills, and expertise of the engineering team furthers the confidence of potential buyers in their decision-making process. Face to face does not necessarily mean flying the engineer halfway around the world for a one-hour meeting, although there are occasions where this may be necessary. Other equally effective techniques include:

  1. Publish “How To” articles authored by the engineer-expert. Many times these are ghostwritten on the basis of inputs supplied by the engineer. A creative marketing effort will find many ways to promote and repurpose this content, whether in multiple regional markets or in different formats such as application notes, presentations, and Q&As.
  2. Host webinars that allow many potential buyers to simultaneously participate in a technical lecture or series of lectures provided by the engineer-expert. There is significant effort required for planning, promoting, and executing to ensure a qualified audience is in attendance.
  3. Publish “opinion” or “white” papers that address a general industry concern and offer pragmatic approaches to solutions; these papers demonstrate the engineer’s level of expertise.

While we often stereotype engineers as the ‘Dilberts’ or ‘Wallys’ of the world, they are in fact one of a company’s best assets in closing a sale. They deal in a world of facts and figures, and equations and laws of nature that do not change. They abhor vagueness and embrace truth. To the buyer, their word is golden. In the buyer’s mind, ‘they are one of us’.

It is difficult to avoid the adversarial nature of a sale. We’ve been taught that there is a winner and a loser in the transaction. Involving the engineer in the process can greatly lessen the tension and extract clearly the real value of the product to the buyer, yielding a win-win scenario.

How do you ensure full coverage for your design/spec reviews?

Wednesday, April 27th, 2011 by Robert Cravotta

Last week I asked whether design-by-committee is ever a good idea. This week’s question derives from my experience on one of those design-by-committee projects. In this particular project, I worked on a development specification. The approval list for the specification was several dozen names long – presumably the length of the approving signatures list should provide confidence that the review and approval process was robust and good. As part of the review and approval process, I personally obtained each signature on the original document and gained some insight into some of the reasoning behind each signature.

For example, when I approached person B for their signature, I had the distinct feeling they did not have time to read the document and that they were looking at it for the first time in its current form. Now I like to think I am a competent specification author, but this was a large document, and to date, I was the only person who seemed to be aware of the entire set of requirements within the document. Well, person B looked at the document, perused the signature list, and said that person E’s signature would ensure that the appropriate requirements were good enough for approval.

When I approached person D, they took two minutes and looked at two requirements that were appropriate to their skill set and responsibility and nothing else within the specification before signing the document. When it was person E’s turn at the document, I once again felt they had not had time to look at the document before I arrived for their signature. Person E looked at the signature list and commented that it was good that person B and D had signed off, so the document should be good enough for their signature. In this example, these three signatures encompassed only two of the requirements in a thick specification.

Throughout the review and approval process, it felt like no one besides me knew all of the contents of the document. I did good work on that document, but my experience indicates that even the most skilled engineers are susceptible to small errors that can switch the meaning of a requirement, and that additional sets of eyes looking over the requirements will usually uncover them during a review. Additionally, the system-level implications of each requirement can only be assessed if a reviewer is aware of the other requirements it interacts with. The design-by-committee approach, in this case, did not provide system-level accountability for the review and approval process.
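
One way to make that kind of coverage gap visible is to track which reviewer actually examined which requirement and flag anything no one looked at. The sketch below is only an illustration; the requirement IDs are invented, and the reviewer names map loosely to the anecdote above.

    # Hypothetical requirements-to-reviewer coverage check.
    # Requirement IDs and reviewer assignments are invented for illustration.
    requirements = ["REQ-001", "REQ-002", "REQ-003", "REQ-004", "REQ-005"]

    # What each signatory actually reviewed (not merely signed for).
    reviewed_by = {
        "Person B": [],                      # signed on the strength of person E
        "Person D": ["REQ-002", "REQ-004"],  # reviewed only the two requirements in their area
        "Person E": [],                      # signed on the strength of persons B and D
    }

    covered = {req for reqs in reviewed_by.values() for req in reqs}
    uncovered = [req for req in requirements if req not in covered]

    print(f"covered: {len(covered)} of {len(requirements)} requirements")
    if uncovered:
        print("no reviewer has examined:", ", ".join(uncovered))

Even a tally this simple makes it clear when a long signature list still leaves most of a specification unread.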

Is this lack of full coverage during a review and approval cycle a problem unique to this project or does it happen on other projects that you are aware of? What process do you use to ensure that the review process provides appropriate and full coverage of the design and specification documents?

Is design-by-committee ever the best way to do a design?

Wednesday, April 20th, 2011 by Robert Cravotta

I have participated in a number of projects that were organized and executed as a design-by-committee project. This is in contrast to most of the design projects I worked on that were the result of a development team working together to build a system. I was reminded of my experiences in these types of projects during a recent conversation about the details for the Space Shuttle replacement. The sentiment during that conversation was that the specifications for that project would produce something that no one will want once it is completed.

A common expression to illustrate what design-by-committee means is “a camel is what you get when you design a horse by committee.” I was sharing my experience with these design-by-committee projects with a friend, and they asked me a good question – what is the difference between design-by-committee and a design performed by a development team? After a moment of thought, my answer was that each approach treats accountability among the members differently, and this materially affects how system trade-offs are performed and decided.

In essence, design-by-committee could be described as design-by-consensus. Too many people in the committee have the equivalent of veto power without the appropriate level of accountability that should go with that type of power. Compounding this is that just because you can veto something does not mean that you have to come up with an alternative. Designing to a consensus seems to rely on the implied assumption that design is a process of compromises and the laws of physics are negotiable.

In contrast, in the “healthy” development team projects I worked on, different approaches fought it out in the field of trade studies and detailed critique. To an outsider, the engineering group seemed like crazed individuals engaged in passionate holy wars. To the members of the team, we were putting each approach through a crucible to see which one survived the best. In those cases where there was no clear “winner”, the chief project engineer had the final say over which approach the team would use – but not until everyone, even the most junior members on the team, had the chance to bring their concerns up. Ultimately, the chief project engineer was responsible for the whole system, and their tie-breaker decisions were based on system-level trade-offs rather than just slapping together the best of each component into a single system.

None of the design-by-committee projects that I am aware of yielded results that matched, never mind rivaled, what I think a hierarchical development team with clearly defined accountabilities would produce. Do I have a skewed perspective on this, or do you know of cases when design-by-committee was the best way to pursue a project? Can you share any interesting or humorous results of design-by-committee projects that you know of? I am currently searching for an in-house cartoon from when I worked on rockets that demonstrated the varied results you could get if you allowed one of the specialty engineering groups to dominate the design process for a rocket engine. I will share it if/once I find it. I suspect there are analogous cartoons for any field; if you send me yours, I will share them as well.

Which is better: faster- or quality-to-market?

Wednesday, March 23rd, 2011 by Robert Cravotta

There are at least two major schools of thought about the best way to release new products – especially products that contain software that can be updated by the user. The faster-to-market approach pushes designs through the development cycle as quickly as possible to release the new product to market before anyone else. A plausible tactic for faster-to-market products that have user-updatable software is to ship the product even while there are still major bugs in the system with the expectation that the development team can create a corrective software patch before the product actually ends up in the hands of the customer. In this scenario, the expectation is that the user will perform a software update before they can even use the product the first time.

The quality-to-market school of thought believes that products should work out of the box without requiring extra effort from the user such as downloading and applying software patches. This philosophy does not preclude the later use of software updates to add or improve features – rather, the out of the box experience is considered an important part of the product’s value.

An argument for the faster-to-market approach is that the user will be able to start benefiting from the product sooner – possibly months sooner because the development team is able to take advantage of the production lead time to bring the product to the desired level of performance. This argument often presumes that the development team would still be working on fixing bugs after shipping a finished product even under the quality-to-market approach. For this approach, the shorter term tactical advantage of using a capability sooner outweighs the probability that some expected features may not work properly.

Likewise, an argument for the quality-to-market approach is that the user will know for certain at the time of purchase what the product will and will not be able to do. A presumption of this argument is that a faster-to-market product sometimes overpromises what the development team is able to deliver, and this leads to customer dissatisfaction because of unmet expectations. For this approach, the longer-term strategic advantage of features that always work as advertised outweighs the shorter-term benefit of reaching the market earlier, because shipping a crippled version of a feature risks costing you future sales.

There are many companies in each of these schools of thought. Which is better? Is one always better, or are there conditions under which one approach is better than the other? How does your development cycle accommodate one or both of these approaches to releasing a product?

Do you have enough budget for your embedded project?

Wednesday, March 16th, 2011 by Robert Cravotta

The word on the street is that budgets for development projects have been shrinking for years – perhaps even decades. If this sentiment is true, how do so many embedded designs make it to production status each year? Is the design quality for these projects more compromised each year? If the projects are over budget each year, how do the companies that fund these projects realize a high enough return on investment to keep justifying new designs? I question the validity of the claim that budgets have been shrinking year in and year out for increasingly complex designs without some offset occurring somewhere.

Perhaps a development team has a smaller budget for developers, but was that reduction achieved without an increase in spending on development tools? I have been watching the development tool industry for years to see where the big productivity gains are coming from. Most of what I have observed is incremental improvement in what tools can offload from developers. Also, I do believe companies have been slashing their training budgets, but somehow the projects still get done (maybe not optimally, but well enough to justify doing the project in the first place).

Another way that projects might get by with smaller development budgets is through heavy reuse of earlier designs and/or licensed IP (intellectual property). A trend that I have seen increasing over the years is that semiconductor companies are providing more software with their silicon products so that development teams can concentrate their efforts on the value-added features rather than the basic functions. In this case, the overall budget is not shrinking so much as it is being transferred to different development teams.

I do not expect that development teams have generous budgets. I do expect that the budgets routinely require creative thinking from the team members as to how to squeeze 10 to 20% more out of the budget to meet the project requirements – but that is the process by which continuous improvement occurs – I doubt it would occur any other way. Are your budgets sufficient or are they too tight to complete your projects? What would you change about your budgeting process if you had your way? Is it possible for budgets to shrink every year and still be able to produce ever more complex designs?