It seems like a decent guide, though like others mentioned, you should download and run the code most of the time, except for trivial changes.
Also, I'd like to push back on this:
> Most developers have a lot of features to add and bugs to write, time spent reviewing other people’s work does not add a lot to your bottom line.
I disagree; it adds a lot to your bottom line. In fact, reading and reviewing code is one of the main tasks of a developer, perhaps more important than writing code.
Reviewing code gives you an opportunity to get familiar with other parts of the code base, understand why a change or fix is done, and collaborate with others to make the product better overall.
It gets frustrating when egos get involved, and people take criticism too personally or point out improvements just to demonstrate their knowledge. In the best working environments, code is judged without bias toward or regard for who produced it, as an inert output of the collective team. I would actually like to see a feature in code review tools to submit merge requests anonymously, just so that the author isn't a prevalent factor while reviewing. :)
I usually dedicate a few hours, maybe even half a day, to reviews. Definitely don't look at it as a chore to put off, but as a crucial part of your responsibilities as a developer.
I went ~25yrs into my coding career without code review and without tests. I wrote games, several AAA games; play testers found bugs, but otherwise there were no code-based tests and no code review. None, zero, zilch. From 1983 to around 2008.
Now I use tests and code review and I like them. I like that reviewers catch my bugs. I like that they suggest better solutions I didn't think of. I like that they tell me about functions I could use that make the code simpler. I like that they help explain the code.
But I can't entirely dismiss the fact that I, and the teams I worked on, shipped a ton of products without any code review, and that my velocity feels much slower than before. It's possible that slowness comes from the projects being way more complex, the teams being much larger, and that code review is actually a perf multiplier overall for these larger, more complex things I'm working on. But waiting for and/or doing reviews certainly "feels" slower than I used to be.
While I think code reviews are great, I think the ideal state is doing them asynchronously, post-merge. Enforced pre-merge reviews are a way to hold the change hostage to force a certain quality bar. If there is enough experience and trust on the team, you don't need to hold each other hostage. You know the author won't make too much of a mess, you know someone will come around to do the review soon, getting the benefits of code review, and you know the author will be responsive to feedback and not just dismiss it because they've moved on.
Even when you do have pre-merge reviews, sometimes the culture and tooling can make things slower than they need to be. I saw this at one company:
- The review system sent so many notifications that people ignored them
- It was unclear if a change was relevant for you to review
- When there is communal ownership, no one owns it. The code review system needs to assign a reviewer from among the team
- It was unclear when you owned reviewing something
- Slow CI and the lack of auto-merge make people lose track of things. As a last resort, consider a two-stage CI: a fast build of a subset on PR updates, and a full build on merge.
- People should treat CI as another reviewer and not block on it
Sure, the programming landscape was much different 30 years ago. We also didn't rely on version control, CI/CD, testing, and many other practices we consider standard today. The industry has since evolved to make development better and more reliable, at the expense of relative speed or simplicity. In return, we gained the ability to minimize the chances of shipping broken code that is only caught by manual testers or, worse, actual users.
Game development in particular went through radical changes. I don't have much experience here, but I can imagine the stress of needing to ship bug-free code on actual physical media, and the excruciatingly slow dev-QA-release cycle. We're spoiled these days to have automated tests, peer review, and OTA updates, where bugs can be found, fixed, and the fixes shipped much more quickly than they would be otherwise.
So I agree that all of these processes add a mental and time overhead, but I wouldn't go back to working without them for anything.
For a game that has a single release, code review seems like it would be less important. Where I've found it really comes into its own is keeping a codebase maintainable over several years.
> I like that they suggest better solutions I didn't think of.
Although I don't write code full-time, when I do this is the part I enjoy more. People reviewing my code and coming up with better solutions on that same problem amazes me.
> It seems like a decent guide, though like others mentioned, you should download and run the code most of the time, except for trivial changes.
Actually, that's not true at all. Code review is not a replacement for automated builds and testing. That's what Continuous Integration is for. Let humans concentrate on what they do best (understanding and commenting on code) and don't waste their time and patience on things that can be automated.
Projects often enforce this by running CI on code submitted for review: the change cannot be accepted while CI fails. This also reduces build breakages for the other devs working on the code base.
> Code review is not a replacement for automated builds and testing.
I agree, and never said it was. But the fact is most projects don't have complete test coverage, or even replicate the real-world environment of actually using the program. In the absolute best-case scenario where they do, I agree with you; automated tests and CI should find any functional issues. But if your software is meant to be used by a human, then at some point it should also be tested by one. Preferably before your actual users test it for you.
It depends on the project, of course, so if all you're working on is a library or API consumed by machines, then by all means, rely entirely on CI if you can. Same goes if your project is a small piece of a much larger machinery, and it's difficult to test locally, then it makes sense to rely more on CI in that case as well.
But you can't expect humans to actually interpret all the nuance and complexity of a non-trivial code change from just staring at patches. At least, I'm not that smart. So I typically pull the changes locally, go through the relevant files, try to manipulate inputs and see what the effects are (i.e. manual fuzzing), compare the behavior before and after the changes, and, finally, since I already have the code locally, build and run the program as a last quick check that things actually work. Sure, this is time-consuming sometimes, but it's been helpful for finding issues, or just understanding the code better than I would otherwise.
I expect the submitter to have done that. The most annoyed I get in code reviews is when it's clear that the tests are green but the code is totally broken, and the submitter never bothered to test it. At my old job there were things where a proper manual QA would have taken all day because it wasn't possible to test on a laptop (e.g. wrong O/S and no ability to easily spin up an AIX image to start with). I really feel like this needs to be the job of the submitter, not the reviewer, but a lot of junior devs and open source contributors tend to skip that step and treat green tests as the definition of done, rather than working software.
If you read that statement as "when judged on your output by those in control of your salary", does it make more sense which bottom line is in play here?
If your employer is a company that values good software development practices, then they wouldn't judge you based on the LOCs you produce, but on the overall value you provide to the team, product and company.
If they're not, then most of what we're discussing here is not very relevant, since they likely don't follow other good practices either.
If you're in that situation, then you have a few options: if you value the product and company, promote these best practices so that the company can attract and keep good developers, which ultimately benefits the product. If you don't, or they won't listen, or you don't have the authority to change things, then you can probably forget about following good practices, and just keep working based on the metrics they do value. Or, I'd strongly suggest, just find a better working environment. :)
> I disagree; it adds a lot to your bottom line. In fact, reading and reviewing code is one of the main tasks of a developer, perhaps more important than writing code.
It provides the possibility of preventing future reductions to your business's bottom line. That's why any 'rules' for code review are so context-specific and should be applied as such.
The reality is that most of your comments will be reminding people to follow the style guide or to not organize their code like monkeys with typewriters.
If you're leaving comments about style on PRs then your tooling isn't doing a good enough job. You should have a linter and a formatter configured, everyone should be sharing the same config, and they should always be run on a pre-push hook (or earlier... run them on file saves if possible). Badly formatted code should make the push fail so the dev has to fix the problem before they can open a PR. Force pushes and skipping hooks should be vigorously discouraged. That way badly formatted code shouldn't make it to the repo and your PRs can focus on more important things.
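For the curious, a minimal pre-push hook along these lines is only a few lines of shell. This is a sketch, not a complete setup: `FORMAT_CHECK_CMD` and `LINT_CMD` are placeholder variables I've made up so the skeleton stays tool-agnostic; in a real repo you'd hardcode your project's actual commands (e.g. `npx prettier --check .` and `npx eslint .`) and distribute the hook via something like `core.hooksPath` so the whole team shares it.

```shell
#!/bin/sh
# .git/hooks/pre-push -- reject pushes that fail formatting or lint checks.
# FORMAT_CHECK_CMD and LINT_CMD are placeholders; point them at your
# project's real tools. They default to the no-op `true` here so the
# skeleton runs anywhere.
FORMAT_CHECK_CMD="${FORMAT_CHECK_CMD:-true}"
LINT_CMD="${LINT_CMD:-true}"

if ! $FORMAT_CHECK_CMD; then
    echo "pre-push: formatting check failed; run the formatter and retry." >&2
    exit 1
fi

if ! $LINT_CMD; then
    echo "pre-push: lint failed; fix the errors before pushing." >&2
    exit 1
fi

echo "pre-push: checks passed."
```

Because the hook exits non-zero on failure, git aborts the push, which is exactly the "badly formatted code should make the push fail" behavior described above.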
Obviously this is really hard to implement on a project that's old and has thousands of linter errors. If you didn't set it up on a greenfield project right at the start you can't really complain too much though.
I think that often it's not just coding style (yes, a linter should catch indentation errors), but coding conventions/requirements that enforce an idiomatic code structure in the code base. Linters can't catch that: e.g. how to organize files, where to put business logic, how tests are structured, which libraries to use, how to handle non-blocking or concurrent code, errors, code documentation, etc.
I don't think I'd call any of those things "style". Those things are the code itself, not the style it's written in. They're definitely things that PR reviewers should be looking at.
I also blame the tooling a bit. After using Critique [1] internally at Google, all the other tools/implementations/UIs feel very confusing. And I'm not talking about the lack of integration with other tools like coverage, linters, spell checks, and error-prone-construct analysis, but simply the UI.
Additionally, within Google (or at least on the teams I was in) people really try to chunk changes into small pieces. For instance, adding a feature always entails something like: 1 CL to introduce the flag that will enable/disable the feature, at least 1 CL that actually adds the logic and tests, and 1 that enables the flag in the deployment. Not sure if that's due to the monorepo or more of a culture thing, but that MO is not that common in my personal experience.
I agree that the tooling is very underrated! It’s strange that programmers spend so much effort choosing their code writing environment (IDE, etc) but just take what they’re given when it comes to code reviewing (mostly GitHub PRs)
If you’re missing Critique, allow me to shamelessly plug CodeApprove: https://codeapprove.com
Just like Critique, it’s a code review tool for power users that really lets you focus on resolving every discussion.
Well, you can decide more or less by yourself what tools you use for writing code, but the code review tool is something everyone needs to agree upon (and then convince management to change), so of course it has more inertia. I'd love to use something else than Jira for issue tracking too, but if the whole company uses it...
I think I disagree with the emphasis on asking questions. Not because questions shouldn't be asked, but because they should be asked much earlier in the process than the PR stage. Every feature should at a minimum have a brief doc for the requirements and high level design. If you haven't read that and are reviewing the PR for anything other than style, you need to take a step back and go read that doc first.
In my opinion, the levels are:
1. Barely-existent review: The PR author is the SME, I'm only reviewing and LGTMing this because peer review is needed to submit the change. I might notice a typo or something along those lines, and will call out if they're being lazy and skipping tests.
2. Functionality-focused change: Does the code meet the objective and do the tests adequately cover the change?
3. In-depth review: Does the code address the problem in an efficient way that is not likely to cause the team future pain?
It's interesting that the author puts "downloading the code" in the exhaustive section.
Is it really that much work to run git checkout? I routinely check out the code in PRs if I'm not familiar with that part of the codebase. I can navigate the codebase much more quickly in my editor than in GitHub. Although GitHub is actually getting there with the semantic analysis of code and finding references and stuff, sometimes I just need to open 5 files in different splits to figure out what is going on.
I think another factor here is that if the reviewer is working on the same code base and the project has a complex environment to set up (e.g., a library compiled with custom flags), then `git stash` may not be enough to check out the branch and run the tests. (I am unfortunately guilty of creating some projects like that myself...) Docker can help with these cases, but not all projects are docker-oriented.
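To illustrate, the stash-then-checkout dance itself is only a few commands. This sketch builds a throwaway repo so it's self-contained; the branch name `pr-branch` is made up, and against a real remote you'd `git fetch` the PR's branch (e.g. a GitHub `pull/<n>/head` ref) instead of creating it locally.

```shell
#!/bin/sh
# Self-contained demo of parking local work, checking out a PR branch to
# run it, and restoring the work in progress afterwards.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "reviewer@example.com"
git config user.name "Reviewer"

echo "v1" > app.txt
git add app.txt
git commit -qm "mainline work"

git checkout -qb pr-branch              # stands in for the submitter's PR branch
echo "v2" > app.txt
git commit -qam "PR change"
git checkout -q -                       # back to your own branch

echo "wip" >> app.txt                   # your uncommitted work in progress

git stash push -q -m "wip before review"  # park the local edits
git checkout -q pr-branch                 # now build/run/poke at the PR code
git checkout -q -                         # done reviewing
git stash pop -q                          # restore your work in progress
```

The point of the stash is that your half-finished edits survive the round trip: after `stash pop`, `app.txt` contains both the mainline content and your "wip" line again.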
I personally like running the code myself when reviewing, it's just not always practical.
Yeah, reality: no one is going to download code and try to run it. It would take them all day to do code reviews. You are supposed to go live with your buggy shit soon, that's what the business values. Once it's live it gets tossed over the wall and the maintenance process begins where lower paid workers tinker with it until they have the skills to code-and-bail also.
On bad projects (which happens often now that node_modules directories are often gigabytes big), IntelliJ IDEA takes forever to index, which is required before being able to Cmd+Click on functions. And that's on an M1.
But this is an example where usability impacts the author’s judgement. For me, UX testing is part of the initial step of code reviews, no technical review is necessary if we end up deciding to implement the story another way.
One of my biggest pet peeves is when people get into the habit of leaving code reviews with only suggestions like variable name changes. It's not that I'm resistant to changing things like that but I have several colleagues that seem to only ever make suggestions like that and it seems very performative to me.
I had a colleague at a previous job who would, without fail, leave review comments only on the implementation of the tests.
Which normally I would appreciate, as testing is usually neglected, but they were almost always minor nitpick-style suggestions or comments (which often revealed they didn't understand the business logic / code changes being made).
But as showing you proactively contributed during reviews was part of the framework for promotion, they needed to show they had made suggestions rather than ask for clarification on features, etc.
> Most developers have a lot of features to add and bugs to write, time spent reviewing other people’s work does not add a lot to your bottom line.
Does not add a lot to your bottom line? This is the mindset of isolated individual contributors rather than members of a single team that work on a given feature of the same product. Isn't this precisely what various agile approaches (scrum or XP in particular) are fighting against?
> This is the mindset of isolated individual contributors rather than members of a single team that work on a given feature of the same product.
From experience, I think this is the more common scenario, unfortunately.
> Isn't this precisely what various agile approaches (scrum or XP in particular) are fighting against?
Maybe that was the ambition, but I don't see this borne out in practice. If anything, Agile as most commonly practiced exacerbates this problem. Of course highly paid Agile consultants will dig their heels in and say "You're not doing Agile!", but that doesn't really change the reality for the people on the ground.
And if you do do it (speaking from experience), people get offended that someone is looking at their code (as if existence in source control guaranteed absence of bugs...). It is all the more surprising since simple inspection of never-reviewed code usually turns up a lot of low-hanging fruit in terms of bugs.