I strongly disagree with this article. I wouldn't trust myself or my coworkers to produce perfect code every time. We require two reviewers for each PR, and using pull requests is a perfectly acceptable way to ensure quality: every line of code is vetted by at least three developers (and has to pass the CI/CD pipeline checks as well). If you produce high-end software with complex workflows and/or calculations, pull requests are essential.
But is it the most efficient way to ensure that level of quality?
PRs touch the same code, so reviewing them individually isn't enough anyway.
Maybe a weekly or monthly release would be a better unit for evaluation.
We have all kinds of ways of producing diffs: between branches, releases, tags, timestamps, etc.
Why would PR diffs be the most efficient unit to review?
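To illustrate (a sketch using a throwaway repo so the commands run anywhere; the tag and file names are made up), git already produces diffs at whatever granularity you pick as the review unit:

```shell
# Build a disposable repo with one release tag and some later work.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "v1" > app.txt
git add app.txt
git commit -qm "first release"
git tag v1.0.0
echo "v2" > app.txt
git commit -qam "a week of work"

# Review unit = everything since the last release tag, not a single PR:
git diff v1.0.0..HEAD            # full patch
git diff --stat v1.0.0..HEAD     # per-file summary
```

The same `git diff` works between any two branches, tags, or commits, so "one PR" is a convention, not a technical constraint.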
Often PRs are just a clumsy way for team members to communicate instead of simply talking to each other. I.e., you sit in the same office creating and rejecting PRs when you could just say to your colleague: "If you don't like the name of that variable, feel free to change it."
The main problem with the PR system is that changes are not immediately committed to the branches developers work on, which limits cooperation between developers. You can still review commits to dev branches, and if problems are found you can fix them, or in most cases undo them.
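A minimal sketch of that post-commit workflow (again a throwaway repo; branch and file names are illustrative): changes land on the shared branch immediately, a reviewer reads them after the fact, and a bad commit is undone with `git revert` instead of blocking the author up front:

```shell
# Disposable repo with a shared dev branch and two commits.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -qb dev
git config user.email demo@example.com
git config user.name demo
echo "good" > feature.txt
git add feature.txt
git commit -qm "good change"
echo "oops" >> feature.txt
git commit -qam "bad change"

git log --oneline --patch -1 dev    # reviewer inspects the latest commit after it landed
git revert --no-edit HEAD           # problem found: undo it without rewriting history
cat feature.txt                     # file is back to its good state
```

Nothing here required the author to wait for approval; the review happened on the shared branch, and the fix is itself an ordinary commit anyone can see.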
I don't disagree with everything in there; the trouble with pull requests, for instance, is very real. But without mentioning the trouble with continuous integration, or the benefits of pull requests, it is just a pretty weak and one-sided article. Developers especially should know better than to state "B is bad because xxx and A is good because yyy, so pick A".
Unless you work in a very complex technical domain with high profit margins. In our case, a bug in production can literally cost billions (apart from the potential reputation damage), and the system is highly complex, so having two reviewers isn't a luxury for us but a necessity.
You could make the same argument for automation rather than a manual process.
e.g. "A bug in production can literally cost us billions (apart from potential reputation damage), the system is highly complex, so having robust test suites, automated monitoring and rollback isn't a luxury for us but a necessity."
IMHO, automation wins every time. That is, while there is value in stop-the-pipeline code review, we value automation more.
That is a perfectly acceptable alternative, but to me it feels like it would put a lot of trust in automation and lower affinity with the code, as it's not continuously inspected. It depends on the size and duration of the project, I guess.
It depends on what is required from the reviewers.
If you have a pair, with one person with good knowledge of the code digging deep into what the PR/MR does and another dev just checking for anything surprising, or even just validating that what you're doing makes sense at the surface, that will probably cover most bases.