The factors of code review
We often go into autopilot with code reviews, without stopping and asking ourselves why we do them
For as long as there have been professional software developers, there’s been code review. Whether it’s a formal process or a cursory glance from a colleague before the code is pushed to production, for many teams code review is an essential part of the development process.
Review is a common practice in other fields too. I trained as a journalist in my early days, and a sub-editor would look over my work before my articles were published. In academia, papers go through a thorough peer review process before they’re published.
But, as with automated testing, we often go into autopilot with code reviews, without stopping and asking ourselves why we do them.
At dxw, we’re trying to standardise our approach to code review. With this in mind, I thought it might be useful to sketch out some of the reasons why teams review code and how some of those reasons can come into conflict with one another.
Reasons for doing code reviews
1. Safety and reliability
Is this code safe? Does it have a security hole, or is there a lesser-known exploit that the developer hasn’t thought about? Are there any situations where the code may break? Could we rewrite this feature to make the code more reliable?
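To make that concrete, here’s a minimal sketch of the kind of issue a reviewer might flag, assuming a Rails app with ActiveRecord (the User model and the params are hypothetical):

# Interpolating user input straight into a SQL fragment leaves the query open to SQL injection
User.where("email = '#{params[:email]}'")

# Letting ActiveRecord parameterise the query closes that hole
User.where(email: params[:email])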
2. Performance
Is this code likely to suffer from performance problems when it goes into production? Could a few tweaks here and there improve the performance of the code?
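For example, a reviewer might spot an N+1 query – a sketch assuming Rails and ActiveRecord, with hypothetical Post and Comment models:

# One query to load the posts, then a separate query for each post's comments
Post.all.each do |post|
  puts post.comments.map(&:author_name)
end

# Eager loading fetches the comments up front, avoiding the N+1
Post.includes(:comments).each do |post|
  puts post.comments.map(&:author_name)
end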
3. Teaching/Learning
Code review is a chance to teach and learn. This is particularly true when the author is more junior than the reviewer, or is less familiar with the language or framework being used. Is there a feature in the language or framework that makes what the author is trying to do easier or more expressive? The reverse is also true – code reviews can be a great opportunity for developers at any level to learn by seeing how a colleague has solved a problem.
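For instance, a reviewer might point out a standard library method that expresses the author’s intent more directly – a small Ruby sketch, with the orders collection and amount method invented for illustration:

# Works, but builds the total by hand
total = 0
orders.each { |order| total += order.amount }

# Enumerable#sum says the same thing in one line
total = orders.sum(&:amount)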
4. Readability
Did anything in this feature make the reviewer stop and think, “What’s going on here?” A quick “Can you explain what you did here?” can be a nice forcing function for the author to go over their work again and tweak the code so it better expresses its intent.
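As a sketch of what that can look like (the dormant? method and the surrounding names are invented, and 14.days.ago assumes ActiveSupport):

# The reviewer has to stop and work out what this condition actually means
if user.created_at < 14.days.ago && user.orders.none?
  send_nudge_email(user)
end

# Extracting an intention-revealing method answers the question in the code itself
if user.dormant?
  send_nudge_email(user)
end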
5. Style
We use Standard.rb to lint our code, which saves bikeshedding about code style. There can still be times when new code isn’t consistent with the rest of the codebase, or doesn’t match the idioms of the framework being used. This makes the code harder to maintain in future, especially when handing it over to a new team, and makes for a messier codebase in general.
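For example, a linter won’t object to hand-rolled routes in a Rails app, but a reviewer might nudge the author towards the conventional idiom – a sketch of what could sit in config/routes.rb, with hypothetical articles routes:

# Perfectly valid, but not the idiom Rails developers will be looking for
get "/articles", to: "articles#index"
get "/articles/:id", to: "articles#show"

# The conventional resources helper is what a future team will expect to find
resources :articles, only: [:index, :show]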
6. Showing working
Does the pull request clearly spell out how the change has been implemented? Does each commit explain what’s been done, together with a narrative of why it’s been done in a particular way, to help out future developers? Is there a need for documentation, either in code comments, in the Readme, or in the form of Architectural Decision Records?
The conflicts that can exist in code reviews
These factors fall into 2 distinct approaches to code review. The first (which covers things like style, safety, and performance) suggests a systematic approach: reviewing each line of code and looking for potential issues. The second takes a more holistic view, thinking not only about the code being reviewed, but also about its impact on other aspects of the project, both code-related and non-code-related.
However, depending on where you work, how you work and who’s in your team, these 2 approaches may come into conflict with each other.
Consider a team in a large corporation who are trying to move on from “big bang” releases to a more agile, incremental release process. They are risk-averse, so may see code review as a quality assurance process where other developers “rubber stamp” code before it gets released. They may see the “softer” factors, such as using code review as a learning tool, as unnecessary distractions from shipping the code.
Similarly, a team working at pace on an alpha project might not see reliability and long-term use as a concern, as they’re much more focussed on shipping a demonstrator that may well be thrown away at the end of the project.
The makeup of a team is also likely to cause factors to clash. For example, a team made up mainly of junior or mid-level developers is more likely to prioritise learning over some of the other factors, whereas this would be less of a concern for a team of more experienced developers.
Work out what’s important
Code review, like many parts of a developer’s life, is a complicated business. It’s not just a case of working out all the possible reasons why we review code and then optimising for them all – that’s impossible. Depending on things like the makeup of your team, the goals of the project and your values as a team, you may value some factors over others, and some may not even be a consideration.
When coming together as a team, it’s important to agree on which factors you value, to avoid spending time on things the team don’t value and to prevent friction between team members.
With this in mind, the tech team at dxw are starting to reappraise what’s important to us when it comes to code review. We’re going to use these factors to help us think about why we do code reviews, and what we want to get out of them. We’ll use that to agree some shared expectations, which will help us to get even more out of our code reviews.