Seven Deadly Sins of Software Reviews

1. Participants Don't Understand the Review Process

1.1. Symptoms: Software engineers don't instinctively know how to conduct and contribute to software reviews. Review participants may have different understandings of their roles and responsibilities, and of the activities conducted during a review. Team members may not know which of their software work products should be reviewed, when to review them, and what review approach is most appropriate in each situation.

1.2. Solutions: Training is the best way to ensure that your team members share a common understanding of the review process. For most teams, four to eight hours of training will suffice, though you may wish to obtain additional specialized training for those who will play the role of moderator in formal inspections. Training can also be an excellent team-building activity: all members of the group hear the same story on a technical topic and begin with a shared understanding and vocabulary.

2. Reviewers Critique the Producer, Not the Product

2.1. Symptoms: Initial attempts to hold reviews sometimes lead to personal assaults on the skills and style of the author. A confrontational style of raising issues exacerbates the problem. Not surprisingly, this makes the author feel beaten down, defensive, and resistant to legitimate suggestions that are raised or defects that are found. When authors feel personally attacked by other review participants, they will be reluctant to submit their future products for review. They may also look forward to reviewing the work of their antagonists as an opportunity for revenge.

2.2. Solutions: When helping your team begin reviews, emphasize that the correct battle lines pit the author and his peers against the defects in the work product. A review is not an opportunity for a reviewer to show how much smarter he is than the author, but a way to use the collective wisdom, insights, and experience of a group of peers to improve the quality of the group's products. Direct your comments and criticisms to the product itself, rather than pointing out places where the author made an error. Practice using the passive voice: "I don't see where these variables were initialized," not "You forgot to initialize these variables."
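
As a purely illustrative sketch (the snippet and the comments are mine, not the article's), here is the kind of uninitialized-variable defect that phrasing describes, along with both ways of raising it:

```python
def total_price(items):
    # Defect: "total" is used before it is ever assigned, so this
    # raises UnboundLocalError on the first loop iteration.
    for item in items:
        total += item.price
    return total

# Product-focused: "I don't see where total is initialized before the loop."
# Producer-focused (avoid): "You forgot to initialize total."
```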

3. Reviews Are Not Prepared

3.1. Symptoms: You come into work at 7:45 a.m. and find a stack of paper on your chair with a note attached: "We're reviewing this code at 8:00 a.m. in conference room B." There's no way you can properly examine the work product and associated materials in 15 minutes. If attendees at a review meeting are seeing the product for the first time, they may not understand the intent of the product or its assumptions, background, and context, let alone be able to spot subtle errors. Other symptoms of inadequate preparation are that the work product copies brought to the meeting aren't marked up with questions and comments, and some reviewers don't actively contribute to the discussion.

3.2. Solutions: Since about 75% of the defects found during inspections are located during individual preparation, the review's effectiveness is badly hampered by inadequate preparation prior to the meeting. This is why the moderator in an inspection begins the meeting by collecting the preparation times from all participants. If the moderator judges the preparation time to be inadequate (say, less than half the planned meeting time), she should reschedule the meeting. Make sure the reviewers receive the materials to be reviewed at least two or three days prior to the scheduled review meeting.
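
A minimal sketch of that rule of thumb, assuming the check is applied to the average preparation time per reviewer (the article leaves the exact calculation to the moderator's judgment):

```python
def adequately_prepared(prep_minutes, planned_meeting_minutes):
    """Return True if the inspection meeting should go ahead.

    prep_minutes: preparation time reported by each participant.
    Rule of thumb from the text: reschedule when preparation falls
    below half the planned meeting time.
    """
    average_prep = sum(prep_minutes) / len(prep_minutes)
    return average_prep >= planned_meeting_minutes / 2

# Example: three reviewers prepared 30, 45, and 10 minutes for a
# 60-minute meeting; the average (about 28) is under 30, so reschedule.
print(adequately_prepared([30, 45, 10], 60))  # False
```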

4. Review Meetings Drift Into Problem-Solving

4.1. Symptoms: Software developers are creative problem solvers by nature. We enjoy nothing more than sinking our cerebrums into sticky technical challenges, exploring elegant solutions to thorny problems. Unfortunately, this is not the behavior we want during a technical review. Reviews should focus on finding defects, but too often an interesting defect triggers a spirited discussion about how it ought to be fixed.

4.2. Solutions: The kinds of reviews I'm discussing in this article have one primary purpose: to find defects in a software work product. Solving problems is usually a distraction that siphons valuable time away from error detection. One reason inspections are more effective than less formal reviews is that they have a moderator who controls the meeting, detecting when problem-solving breaks out and bringing the discussion back on track. Certain types of reviews, such as walkthroughs, may be intended for brainstorming, exploring design alternatives, and solving problems. This is fine, but don't confuse a walkthrough with a defect-focused review such as an inspection.

5. Reviews Are Not Planned

5.1. Symptoms: On many projects, reviews do not appear in the project's work breakdown structure or schedule. If they do appear in the project plan, they may be shown as milestones rather than as tasks. Because milestones take zero time by definition, the non-zero time that reviews actually consume can make the project appear to slip its schedule. Another consequence of failing to plan reviews is that potential participants don't have time to take part when one of their peers asks them to join in.

5.2. Solutions: A major contributor to schedule overruns is inadequate planning of the tasks that must be performed. Not thinking of these tasks doesn't mean that you won't perform them; it simply means that when you do perform them, the project will wind up taking longer than you expected. The benefits of well-executed software technical reviews are so great that project plans should explicitly show that key work products will be reviewed at planned checkpoints.
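
To make the milestone-versus-task point concrete, here is a toy calculation with invented numbers (none of these figures come from the article):

```python
coding_days = 20                 # planned implementation work (assumed)
review_days = 0.5 + 0.25 + 1.25  # prep + meeting + rework (assumed)

as_milestone = coding_days            # review shown as a zero-time milestone
as_task = coding_days + review_days   # review planned as a real task

print(as_milestone)  # 20 -> the 2 review days will surface as a "slip"
print(as_task)       # 22.0 -> the schedule absorbs the review time
```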

6. The Wrong People Participate

6.1. Symptoms: If the participants in a review do not have the skills and knowledge needed to find defects, their contributions are minimal. Participants who are there only to learn may benefit, but they aren't likely to improve the quality of the product. Management participation in reviews can also lead to poor results, though it doesn't have to. If the team feels the manager is counting the bugs found to hold against the author at performance appraisal time, they may hesitate to raise issues during the discussion that might make their colleague look bad.

6.2. Solutions: Review teams of three to seven participants are most effective. The reviewers should include the work product's author, the author of any predecessor or specification document, and anyone who will be a victim of the product. For example, a design review should include the designer, the author of the requirements specification, the programmer, and whoever is responsible for integration testing. On small projects, one person may play all these roles, so ask some of your peers to represent the other perspectives. It's fine to include some participants who are there primarily to learn (an important side benefit of software reviews), but focus on people who will spot bugs.
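
A small sketch of those composition rules, using hypothetical perspective labels (the article names the roles but prescribes no particular encoding):

```python
REQUIRED_PERSPECTIVES = {"author", "predecessor_author", "consumer"}

def valid_review_team(participants):
    """participants: list of (name, perspective) pairs.

    Checks the guidance above: three to seven reviewers, covering the
    author, the author of the predecessor/specification document, and
    at least one consumer of the product. On a small project one
    person may legitimately carry several perspectives.
    """
    if not 3 <= len(participants) <= 7:
        return False
    covered = {perspective for _, perspective in participants}
    return REQUIRED_PERSPECTIVES <= covered

team = [("Ana", "author"), ("Ben", "predecessor_author"),
        ("Carla", "consumer"), ("Dev", "learner")]
print(valid_review_team(team))  # True
```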

7. Reviewers Focus on Style, Not Substance

7.1. Symptoms: Whenever I see a defect list containing mostly style issues, I'm nervous that substantive errors have been overlooked. When review meetings turn into debates on style and the participants get heated up over indentation, brace positioning, variable scoping, and commenting, they aren't spending energy on finding logic errors and missing functionality.
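
A contrived snippet (mine, not the article's) shows why this matters: the layout nits below are visible at a glance, while the defect that actually corrupts results is easy to talk past:

```python
def mean(vals):
    s=0                          # style nit: terse names, no spaces around "="
    for v in vals: s += v        # style nit: statement jammed onto the "for" line
    return s / (len(vals) - 1)   # substance: off-by-one divisor; this is the
                                 # logic error the review exists to catch

print(mean([2, 4, 6]))  # prints 6.0, but the mean is 4.0
```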

7.2. Solutions: Style can be a defect if excessive complexity, obscure variable names, and coding tricks make the code hard to understand and maintain. This is obviously a value judgment: an expert programmer can understand complex and terse programs more readily than someone with less experience. Control the style distraction by adopting standard templates for project documents and coding standards or guidelines; these make evaluating style conformance more objective. Use code reformatting tools to enforce the standards, so people can program the way they like and then convert the result to the established group conventions.
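
One way to automate that last suggestion, assuming a Python codebase and the black formatter (any equivalent reformatting tool would serve equally well):

```python
import subprocess
import sys

def enforce_conventions(paths):
    """Rewrite source files to the group's agreed conventions so that
    reviewers never need to argue about layout in the meeting."""
    result = subprocess.run(["black", *paths])
    return result.returncode == 0

if __name__ == "__main__":
    # Usage: python enforce_style.py file1.py file2.py ...
    sys.exit(0 if enforce_conventions(sys.argv[1:]) else 1)
```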