Entries Tagged ‘Project Management’

Which is better: faster- or quality-to-market?

Wednesday, March 23rd, 2011 by Robert Cravotta

There are at least two major schools of thought about the best way to release new products – especially products that contain software the user can update. The faster-to-market approach pushes designs through the development cycle as quickly as possible to release the new product before anyone else. A plausible tactic for faster-to-market products with user-updatable software is to ship the product even while major bugs remain in the system, with the expectation that the development team can create a corrective software patch before the product actually ends up in the hands of the customer. In this scenario, the expectation is that the user will perform a software update before they can even use the product for the first time.

The quality-to-market school of thought believes that products should work out of the box without requiring extra effort from the user, such as downloading and applying software patches. This philosophy does not preclude the later use of software updates to add or improve features – rather, the out-of-the-box experience is considered an important part of the product’s value.

An argument for the faster-to-market approach is that the user will be able to start benefiting from the product sooner – possibly months sooner, because the development team is able to take advantage of the production lead time to bring the product to the desired level of performance. This argument often presumes that, even under the quality-to-market approach, the development team would still be fixing bugs after shipping a finished product. For this approach, the shorter-term tactical advantage of using a capability sooner outweighs the probability that some expected features may not work properly.

Likewise, an argument for the quality-to-market approach is that the user will know for certain at the time of purchase what the product will and will not be able to do. A presumption of this argument is that a faster-to-market product sometimes overpromises what the development team is able to deliver, and this leads to customer dissatisfaction because of unmet expectations. For this approach, the longer-term strategic advantage of features that always work as advertised outweighs the probability that a crippled version of a feature will cause you to lose future sales.

Many companies line up behind each of these schools of thought. Which is better? Is one always better, or are there conditions under which one approach beats the other? How does your development cycle accommodate one or both of these approaches to releasing a product?

How do you mitigate single-point failures in your team’s skillset?

Wednesday, December 22nd, 2010 by Robert Cravotta

One of the hardest design challenges facing developers is how to keep the system operating within acceptable bounds despite being used in non-optimal conditions. Given a large enough user base, someone will operate the equipment in ways the developers never intended. For example, a friend recently shared that his young daughter has developed an obsession with turning the lights in the house on and off repeatedly. Complicating this scenario is that some of the lights she likes to flip on and off are fluorescent lights (the tubes, not CFLs – compact fluorescent lamps). Unfortunately, repeatedly turning them on and off in this fashion significantly reduces their useful life. Those lights were not designed for those types of operating conditions. I’m not sure designers can ever build a fluorescent bulb that will flourish under that kind of treatment – but you never know.

Minimizing and eliminating single-point failures in a design is a valuable strategy for increasing the robustness of the design. Experienced developers exhibit a knack for avoiding and mitigating single-point failures – often as the result of experience with similar failures in previous projects. Successful methods for avoiding single-point failures usually involve implementing some level of overlap or redundancy between separate, and ideally independent, parts of the system.

A look at the literature addressing single-point failures reveals a focus on technical and tangible items like devices and components, but there is an intangible source of single-point failures that can be devastating to a project – when a given skillset or knowledge set is a single-point failure. I was first introduced to this idea when someone asked me “What will you do if Joe wins the Lottery?” We quickly established that winning the Lottery was a nice way to describe a myriad of unpleasant scenarios to consider – in each case the outcome is the same – Joe, with all of his skills, experience, and project specific knowledge, leaves the project.

As a junior member of the technical staff, I did not need to worry about this question, but once I moved into the ranks of project lead, that question became immensely more important. If you have the luxury of a large team and budget, you might assign people to overlapping tasks. However, small teams may lack not just the budget but also the cognitive bandwidth for team members to stay aware of everything everyone else is doing.

One approach we used to mitigate the consequences of a key person “winning the Lottery” involved holding regular project status meetings. Done correctly, these meetings provide a quick and cost-effective mechanism for spreading project knowledge among more people. The trick is to avoid involving so many people, for so long, or so frequently that the meetings cost more than the benefit they provide. Maintaining written documentation is another approach for making sure the project can recover from the loss of a key member. For more tactical skills, another approach we used was to contract with an outside team that specialized in the needed skillset. Working with someone who understands the project’s tribal knowledge can help the team recover quickly and salvage the project.

What methods do your teams employ to protect from the consequences of a key person winning the Lottery?

Does your embedded development team’s project budget metric support your estimation process?

Wednesday, December 15th, 2010 by Robert Cravotta

As an engineering project lead I had to develop and report on a set of performance metrics that we called the VSP (vision support plan). The idea behind these metrics was to show how each area of the company directly supported the company vision statement. For many of the metrics, the exercise was a waste of time because there was no clean way to measure how the team’s work corresponded to every abstract idea in the vision statement.

However, a few of the metrics were genuinely useful because we could use them to experiment with our processes and measure whether there was an improvement or not. For example, I refused to use a budget metric that only focused on whether we came in under budget or not. My budget metric was “green” (good) if the expenditures to date were within 10% of the budget. If the project was between 10% and 20% higher or lower than the budget, I reported the project as yellow. If the project was more than 20% higher or lower than the budget, I reported the project as red.

Here was my reasoning for the grading. If the project was within 10% of the budget, we were in control of the budget. I believe that any team can shift the cost of a project by up to 10% by choosing appropriate trade-offs without adversely sacrificing the quality of the project. Design trade-offs made to effect a 10 to 20% change from the plan involve more risk and might adversely affect the quality of the project. Likewise, changes that stray more than 20% from the plan involve significant risk and may require a reevaluation to determine whether the project is scoped realistically.
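As a minimal sketch of how these bands translate into a report, the thresholds above reduce to a small classification routine. The function name and the example figures here are my own illustration, not part of the original VSP process:

# Classify budget health by how far actual spending strays from plan,
# using the bands described above: within 10% of budget is green,
# between 10% and 20% (over or under) is yellow, beyond 20% is red.
def budget_status(actual, budget):
    variance = abs(actual - budget) / budget  # fractional deviation, over or under
    if variance <= 0.10:
        return "green"
    if variance <= 0.20:
        return "yellow"
    return "red"

# Hypothetical example: a project budgeted at $100k that has spent $88k
# is 12% under plan, so it reports yellow even though it is "under budget".
print(budget_status(88_000, 100_000))  # yellow

The point of the absolute value is exactly the point of the next paragraph: being well under plan is treated as a warning sign about the estimate, not as automatic good news.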

Note that this metric specified a range that covered both overruns and underruns of expenditures relative to the budget. A major reason for this was to put a special focus on how well we were estimating projects. How many times have you seen someone try to explain why their project is over budget? In general, the reasons I saw included one or more of:

1) Additional scope was added to the project (often at the direction of management) without capturing additional budget for it.

2) The project involved solving some unexpected problems and there was not enough (or no) budget to handle such contingencies.

3) Management would not accept a realistic budget number for the project and you are doing the best you can with the budget they offered you.

The thing that is common in all of these reasons is that the estimation process did not adequately capture the project’s predictable and iterative costs. Too many times I saw management strip out our contingency budget, which usually consisted of one or two design iterations at the points of the design where we carried the most risk. Capturing a budget metric and putting it into the context of how good the estimate was provides clues for improving the estimating process on future projects – which directly supports just about any company vision statement I have ever seen.

Likewise, if your project was substantially under budget, most management seemed content to leave that alone; however, I see the following scenarios as reasons why you might be running under budget:

1) You overestimated the cost to perform the project.

2) You were able to remove scope from the project, but its cost was left in the budget numbers.

3) You made an innovative leap that increased your productivity beyond what you thought you could do during the budgeting process.

Each of these reasons has a profoundly different impact on how you refine your estimating process. The first reason suggests you need better estimators. The second reason suggests you need to improve your project and contract management process. The third reason is one that any manager should want to see more of and should reward the team for making happen.

I saw many project estimates that gamed the system so that the project lead had an oversized surplus in their budget, and management would fail to comment on how resources had been allocated to a project and then gone unused – without ever uncovering which of those three scenarios was the cause of the underrun.

Does your project budget process enable you to improve your estimating process and your contract management process, and increase the chances that your team will gain recognition when a risk pays off and you discover a new and better way to solve a problem? What other ways do you use expense/budget metrics to improve your design team’s performance?