I am a strong believer in applying multiple sets of eyes to tasks and projects. Design reviews provide a multi-stage mechanism for bringing independent eyes into a design to improve the probability of uncovering poor assumptions or missed details. Peer-performed code inspection is another mechanism for bringing multiple sets of eyes to the task of implementing software. However, given the evolution of automated code checking tools, is the manual task of inspecting a peer's code still worthwhile?
Even when tools were not readily available to check a developer's code, my personal experience included some worthwhile and some useless code inspection efforts. In particular, the time I engaged in a useless code inspection was not so much about the code, but rather about how the team leader approached the code inspection and micromanaged the process. That specific effort left a bad taste in my mouth for overly formal and generic procedures for a task that requires specific and deep knowledge to perform well.
A staggering challenge facing code inspectors is the sheer volume of software available for inspecting. Inspecting software is labor-intensive and requires a high level of specialized skill and knowledge to perform well. Tools that perform automated code inspections have proliferated, and they continue to improve over time, but are they a good enough alternative to peer code inspections? I like Jack Ganssle's “Guide to Code Inspections”, but even his final statement in the article (“Oddly, many of us believe this open-source mantra with religious fervor but reject it in our own work.”) suggests that developers' actions imply they do not necessarily consider code inspections a worthwhile redirection of the development team's time.
Is peer-based code inspection worthwhile? Are the automated code inspection tools good enough to replace peer inspection? When is peer inspection necessary, or in what ways is automated inspection insufficient?
Nah. If it compiles, ship it. 1) The user can then call India to fill out a problem ticket so you know where your bug is. 2) You can then fix it and include it in the next release that needs to be purchased as an “upgrade”. 3) Profit.
So those little problems are not “bugs”, but future revenue opportunities?
Robert, the automated tools should always be used – and after you fix all of the things that they find, THEN you go for the code inspection. You still need human beings, because the automated tools cannot understand the INTENT of the programmer – automated tools only find syntax errors (and omissions). Automated tools are available from “free” to those costing many thousands of dollars – and you should have the best tools that your team can afford.
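As a minimal, hypothetical C sketch of that point (the function names and values here are invented for illustration): both routines below compile cleanly and pass a typical syntax-level check, yet only a human reader comparing the code against its stated intent will notice that the second one implements the conversion backwards.

#include <stdio.h>

/* Intent (per the comment): return the average of two sensor readings. */
static int average_reading(int a, int b)
{
    return (a + b) / 2;            /* matches the stated intent */
}

/* Intent (per the name): convert Celsius to Fahrenheit. */
static int celsius_to_fahrenheit(int c)
{
    return (c - 32) * 5 / 9;       /* actually Fahrenheit-to-Celsius: a logic
                                      bug a syntax-only tool will not flag */
}

int main(void)
{
    printf("avg = %d\n", average_reading(10, 20));     /* prints 15 */
    printf("f   = %d\n", celsius_to_fahrenheit(100));  /* prints 37, not the expected 212 */
    return 0;
}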
The code inspection should be done in a room where there are many inspectors, a “code reader” (who is NOT the programmer), and the programmer. The programmer is only there to answer questions about his or her intentions for those code sections where the inspectors are having trouble understanding them (and also to possibly learn better coding techniques). If there are any questions about “why are you doing this?”, then the comments need to be rewritten to make the intent of the code more clear. If the code does not live up to the original intent of the programmer, then the inspectors have probably found a bug (either in the comments or in the code – and yes, bad comments or no comments are considered a bug). In addition, if the code inspectors are senior-level people, they might be able to make some helpful suggestions on how the programmer could perform certain tasks more efficiently in their next project – and this helps the programmer grow their skills in a non-threatening environment.
It is always a good idea to have someone else peer at the code. The problem is more one of design or implementation problems. If policy calls for one simple process to be fragmented across dozens of methods, objects, declarations, headers, and files, obscured by accessors and setters, with variables required to be stamped more than named, and comments everywhere but meaningless, it won't help. No one else would be able to figure it out either, whether in a group or individually.
Perhaps a better word would be “refactor”. In order to clean up the code I have to understand it. Can I eliminate extra variables? Create a function from repeated sections of code, or merge a unique one inline? Do the data structures correspond cleanly to the intent? Are there better algorithms? Should it be for(), while(), or do{}while()?
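For instance, here is a small, hypothetical before-and-after sketch of “create a function from repeated sections of code” (SCALE, OFFSET, scaled(), and the sensor variables are made up for the example): the repeated expression becomes one small function and the meaningless temporary disappears.

#include <stdio.h>

#define SCALE  3
#define OFFSET 7

/* The repeated section, pulled out so the intent reads directly. */
static int scaled(int raw)
{
    return raw * SCALE + OFFSET;
}

int main(void)
{
    int sensor_a = 10, sensor_b = 20;

    /* Before the refactor:
     *   int tmp;
     *   tmp = sensor_a * SCALE + OFFSET;  total += tmp;
     *   tmp = sensor_b * SCALE + OFFSET;  total += tmp;
     */
    int total = scaled(sensor_a) + scaled(sensor_b);

    printf("total = %d\n", total);
    return 0;
}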
If the code is already at what the Programmer’s Stone calls “The Quality Plateau”, review is easy. If not, the process of getting it there should reveal bugs.
A large number of open-source projects are already there – I can look at the code and tell what it is doing because everything is local and direct: it fits on the same page if you have enough vertical screen real estate, and any declarations or definitions of things referenced are direct and simple.
Other programs (no major open-source ones, to my knowledge) which appear simple lose me, because the declarations and definitions for just a few lines of code are nested and scattered across a dozen files or are separated by hundreds of lines.
So instead of a review as such, invite your technical peers to refactor your code. If they aren’t able to clean anything or make it more efficient, it is probably already as good as it can be. Or it is so obfuscated that it needs to be rewritten.
Peer review and testing are not only important for improving code quality. They force people to share their information within a project, improving the overall quality and efficiency of the team.
Code reviews are awesome, and peer coding is very helpful every so often. I almost always learn something working with someone, and vice versa. I don't think one needs to peer code always, but I've never worked at an XP house. I don't like code walk-throughs since they are really time consuming, so much so that I sometimes dread bringing up points. I think reviewers should read the code ahead of time and bring a set of points to discuss rather than doing a line-by-line reading.
The best part about code reviews is cross-team education and knowledge transfer, especially for junior team members.
Code reviews are most productive in the absence of design. When the design is solid the code simply conforms or it doesn’t. If code reviews are needed for communication purposes then architecture and design are both lacking. It is always a good idea to get someone to double check your work. Everyone makes mistakes in the details at least some of the time. Still, if important issues are being discovered in code reviews then look to improve design skills/tools/methodology.
The vast majority of projects grow in an evolutionary way: there is no clear split between the design activity and the programming of the code. Look at most open-source projects: the design of a new feature is a collection of patches that will be merged into the trunk when it is ready. Often the design and code are improved by a number of test and review cycles done by others.
It is SOP that nothing gets checked into source control until it has been peer reviewed using Code Collaborator. I would respectfully disagree with comments that a necessity for code reviews is an indication of a poor architecture or design. Another set of eyeballs is always a good idea. Besides the opportunity to improve the quality of the code being added or changed, it is an opportunity to catch the unintended consequences of a code change in a complex system. When production volumes are high, mistakes become very costly in the repair bay or, worse yet, in the customer's home.
Code reviews make the code you generate better. When you know it will be looked at, you do a better job. That's on top of all the downstream benefits.
Peer code review should be conducted very often, say every 15 days or every 500 lines of code. Ideally the developer should develop unit test cases first (test-driven development) and then the code. Getting both reviewed (with no bureaucracy of team leads, etc.) can really help improve code quality and ensure defects are caught very early.
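A minimal test-first sketch in C, assuming a simple clamp() helper is the unit under development (the function and its test values are hypothetical): the asserts are written first, and the implementation is grown until they pass, so both artifacts are ready for review together.

#include <assert.h>
#include <stdio.h>

/* Unit under development: keep a value within [lo, hi]. */
static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* Test cases written before the implementation. */
static void test_clamp(void)
{
    assert(clamp(5, 0, 10) == 5);     /* in range: unchanged  */
    assert(clamp(-3, 0, 10) == 0);    /* below range: clamped */
    assert(clamp(42, 0, 10) == 10);   /* above range: clamped */
}

int main(void)
{
    test_clamp();
    printf("all clamp tests passed\n");
    return 0;
}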
Especially with a so-called agile (read: little or no documentation) process, peer code review is essential. You can argue with people until you pass out about whether or not they should write a design spec first. I prefer to remain conscious. But if your process includes a required review before any code gets committed to the trunk, then you've at least got some guarantee that there's a line that's uncrossable without scrutiny of design. I earned my chops in the “quality” ’80s, when it was actually fashionable to document your design before writing code. Today it seems impossible to fight the trend toward document-free coding, at least in my company. Without required peer code review there would be chaos. Just wondering, am I really a dinosaur, or do others share my preference for up-front design reviews?
I just started a new discussion about whether design is worth the time.
I might have started to pollute this thread, which I think is about inspection techniques and lessons learned. One of my lessons was that finding defects during inspection exposes opportunities for design improvement.
It saves a lot of time in peer review if everyone strictly follows the coding standard and peer review is carried out once every week, before too much coding has taken place.
There are some rules that can be appropriate when using a distributed version control system:
* Each new feature has its own branch, even if it's just local to the developer.
* Each feature is a list of comprehensible patches.
* Each patch changes a single aspect of the work to be done, with a message explaining the intention.
* The branch must be fully tested and checked with automatic tools before being submitted for review.
* Patches are reviewed separately, one by one.
* If the reviewers sign off on all the patches, then the branch is merged into the main tree.
This approach forces us to make patches that are easy for others to understand. And the “others” are sometimes ourselves, days or months later…
This links up nicely with another topic I have covered in my blog, pair programming, as proposed by Extreme Programming and other methodologies. I believe it's all in the experience level of the people doing the review (or the coding) and the risk management related to the project.
For example: why would you review a printf implementation? If it's a new processor, a new RTOS, and a fresh-out-of-school programmer, it probably makes sense.
Another thing: experienced programmers (meaning: good and experienced) will know their own limits and will know when to request a peer review; inexperienced programmers will not, and will usually assume their implementation is ideal.
To me, it’s all in the team. A good development team will have enough experienced personnel to drive the team towards excellence. That is: they’ll be able to coach the newer members just enough so that they don’t micromanage, but create a good sense of team. Remember: good manufacturing NEEDS good processes, good development NEEDS good people, and processes are only a tool. Processes shouldn’t overpower innovation and programming skills in development teams.