Improving code review time (fb.com)
216 points by pzrsa on Nov 17, 2022 | 219 comments


> Next reviewable diff

Holy Fuck No.

90% of my PR review time goes into "Okay, how will this change impact parts of the system that this mid-level dev doesn't understand yet?" and "Does this PR actually do what the ticket it claims to implement intended?"

Reviewing diffs in isolation completely removes one's ability to do that.

If you remove a person's ability to do that, what you've left them with is the easiest part of the PR, just checking that the logic seems logical. And honestly, most of that work can be automated by linting, style cops, and unit tests.

The fact that they got rid of the part of the PR review process that matters, and only saw a 1.5% improvement, speaks to all sorts of problems in the process overall, not to an improvement by this tool.


I don't understand this criticism. How does a "next reviewable diff" pop-up suggest that you're being forced to review a diff "in isolation"? As I understand it, nothing proposed in this article prevents you from reviewing the diff in context of the larger piece of software, as you would have always done.

This just seems like a feature to suggest another diff for you to review after you've finished accepting/rejecting the current diff. I'm already "in the zone" of code review, so to speak, so this minimizes context switching. I see this as a good thing.

> The fact that they got rid of the part of the PR review process that matters

I don't understand this either. What "part of the PR review process" did they remove? The article does not claim to have eliminated any part of the review process.


It encourages looking quickly at small bits of changes in isolation, as opposed to taking a holistic view of the entire change, and viewing them in series as little 5-minute tasks you can tick off like TikTok videos.

The review tools on github already go too far toward this by removing far too much context.


A diff at Meta is a pull request.


As far as I can see that - and many of the other responses to the criticism - does not counter said criticism.

If you look at only the PR, even with a few lines of context, you still don't see much.

I actually like the diff view window provided by JetBrains editors through the already bundled "Github plugin". I get to see the whole file, before and after (left/right), with highlights for changes, lines added, lines removed. That way I see the entire context.

If you only see the usual change+3 lines of context, you often don't even see what function is impacted, and it's also rare to have the context of the entire module being changed in one's head already.

For evaluating PRs, I use the PR review feature in IDEA editors and go through the list of files changed, opening a new window with the entirety of that file available to me, changes highlighted. F7 jumps to the next change, but mostly I just scroll.

You can add review comments right in the diff view.

https://youtu.be/MoXxF3aWW8k ("IntelliJ IDEA. GitHub Pull Requests") -- Diff viewer shown at 2:40; In that video the diff view is shown inside the editor, but I much prefer - and fortunately that is configurable - to open the Diff View in a new window, maximized.

https://www.jetbrains.com/help/idea/comparing-files-and-fold...

The example works only for PRs on GitHub and IDEA editors, but I wanted to present the concept and show what is IMO a very nice diff view, and how it makes it easy to review changes with the context of the entire file.


Yes, I was talking about a PR (i.e. a collection of small diffs usually presented without much context). The smaller the PR, the easier to review, but conversely the harder to see the big picture, and most tools don't give nearly enough context around the changes - I prefer to see the entire file.

Thanks for the links.


No. A diff is a commit.


> At Meta we call an individual set of changes made to the codebase a “diff.”

I don't work at Meta and I still don't have any idea which of you is right.

A commit would satisfy the description above.

As would a pull/merge request, which the image in the article perhaps fits better: https://engineering.fb.com/wp-content/uploads/2022/11/Code-R...

Of course, it's possible to review each individual commit as well, but who does that?

I know that some folks love to mess around as much as they need in their local branches, commit often, and then do an interactive rebase to maybe end up with one or just a few commits that can then be the basis for a pull/merge request; in that case the difference matters a little bit less.


> At Meta we call an individual set of changes made to the codebase a “diff.”

This statement is definitely misleading. A diff itself is always a commit in the history log, and a diff can't be a group of multiple smaller units unless the team submits a change set to git first and syncs the squashed commit back to fbcode. But even in that case, from fbcode's view the squashed commit is still a diff.

A set of changes is called a diff stack.


They might get squashed for review or before merge, making the distinction less important. When you are running a monorepo you generally don't get long series of small commits in a batch but rather a singular atomic commit.


At first glance, I thought you said “comment” instead of “commit”. I laughed at the facetiousness


Meta deploys small changes continuously. If you have a bigger change, it should get broken up into smaller easily reviewable atomic changes.


That seems to be the case, but I think gp’s point still stands.

Imagine a feature that would heavily degrade performance as a whole, but as it's being introduced in small bits and pieces, no one sees the big picture and everyone is left to wonder why performance is slowly degrading, boiling-frog style.


I think you are misinterpreting the term "diff". Near the beginning of the article they describe it: "At Meta we call an individual set of changes made to the codebase a 'diff.'" So diff == PR.


You seem to be thinking of "hunk" and not what Facebook calls "diff" -- what Facebook means by "diff" is closer to what many people call "commit" or even "branch", i.e. a set of interrelated changes that are sort of atomic.

Reviewing individual hunks would be crazy and even Facebook knows that.


You don't remove their ability to do that. If you need more context, then there are UI elements to show the other lines. At Google, there are also links to the file in question in code search if you want to look at history or any other related context.

Most changes don't need this. A prerequisite of fast code reviews is small changes. Rather than 3000-line features, make a series of changes with 10-100 lines of code plus tests. Reviewers can quickly understand the change in logic and confirm that test cases for the new codepaths are being added. Wham, bam, done in two minutes.

Sure, some reviews take more time, but of the 10-30 code reviews I do a week, it's perhaps 10% of them.


>The fact that they got rid of the part of the PR review process that matters, and only saw a 1.5% improvement speaks to all sorts of problems in the process overall, not an improvement by this tool

The 1.5% improvement was from better suggestions on who would be a good person to review the diff.

The next reviewable diff feature "resulted in a 17 percent overall increase in review actions per day (such as accepting a diff, commenting, etc.) and that engineers that use this flow perform 44 percent more review actions than the average reviewer!"


> "Does this PR actually do what the ticket it is claiming to implement actually intended?"

Let me ask about your unspoken assumption: is this the PR reviewer's job? Maybe the PR seems to implement what the ticket asked for, then after merging it becomes clear that it didn't fully implement it, or the business stakeholders are unsatisfied, etc.?


It's not the PR reviewer's job to go actively test it out (you can assume that your colleagues are somewhat competent at what they're doing), but if you review with the spec or the issue open on the side that says to add a blue button and you see it's red, it's your job to ensure it's not a mistake and point it out to the author.


Those architectural decisions should be reviewed during the design and planning phase so mid/low-level devs don't waste time building the wrong thing in the first place.


In theory, what you say is right.

But still, tasks should be small enough that it hopefully won’t hurt too much if you throw away all the code again.

Too often I experience that - even if you talk to low/mid-level devs about a feature beforehand, even if you make a task breakdown together with them and write all the software design decisions down, even if you tell them to commit often so you can check in once in a while - even then, what gets produced in the end is too often garbage. Still, companies want to keep these developers because it's hard to find new ones. And I guess it's our duty as seniors - even if there are many disappointments - to still assume the best and try to teach them to do better. Again and again and again.


Diff at Meta is short for "Differential revision," a Phabricator term (which was originally an internal FB project before Evan left).

The fact that this comment still remains at the top here despite being incredibly inaccurate (and many subcomments stating as such) shows how degraded the discussion has gotten here when Meta is mentioned.


Am I reading this right? If the total median time in review is “a few hours”, that means that people are dropping what they’re doing to review the code. That can’t really be a good code review? A context switch alone would be half of that time for me.


I work at a company with a similar code review culture.

I have about 2 blocks of meetings per day on average. I usually check my assigned code reviews in the morning and when I come back from the meetings. Once I’m done with those then I move on to my own work.

If I have too much code review to the point that I can’t get my own work done, I take my name off the reviewer list and it’s reassigned.

On most days I’m still able to get a good 2-3 hours of sustained focused work on my own project.


> I have about 2 blocks of meetings per day on average

Are you sure you are a developer?


> I take my name off the reviewer list and it’s reassigned.

How big is your team? We've got 4 developers on our team (and one lead), and require two signoffs for code changes. There's just not enough of us for something like that to work... right?


That doesn't sound like the best ratio to me, but I don't know what you are running or how good your deploy process is at catching issues. With a 4 person team, you are probably better off discussing risky/bigger changes as a whole team, and allowing 1 review for smaller, less risky changes with well defined rollback processes.


Also, with a team that small you can't afford to waste time on meetings.

You might need focus, i.e. non-meeting days, 3 days a week.


I work on similar-sized teams. I introduce my PRs as stacked, so they are extremely consumable as very small, bite-sized code changes that build on the last one reviewed.


Do you use any special tools for this? I've heard of people "stacking" diffs in this way, but it seems like it would be clunky to review on GitHub.


If you work directly in the source repository, that actually works really well on GitHub (if everybody works on their own fork instead, you're unfortunately out of luck). When opening a pull request, you just select the previous branch in your stack as the base branch. If one of those pull requests gets merged, GitHub will automatically rebase the stacked ones onto the base branch. This way every pull request contains exactly the changes you want it to contain, while you're still able to stack changes by opening multiple pull requests, each one based on the branch of the previous one.
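
For illustration, a minimal sketch of that flow (branch names are invented):

    # each branch in the stack builds on the previous one
    git checkout -b part-1 main
    # ...commit the first reviewable slice...
    git push -u origin part-1        # open PR: part-1 -> main

    git checkout -b part-2 part-1
    # ...commit the second slice...
    git push -u origin part-2        # open PR: part-2 -> part-1

When the part-1 PR merges, GitHub retargets the part-2 PR at the base branch, as described above.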


This is one of the features of (yes) Meta's Sapling VCS. https://sapling-scm.com/docs/introduction/getting-started/



So every change needs to be seen by 3 out of 5 people (assuming the lead also does code review)? That sounds a bit inefficient.


I think your main issue is having "two blocks of meetings" a day as a developer. That sounds absolutely nuts to me. Even two meetings a day is likely too much for a dev, let alone two blocks of meetings! I'd start by reducing that.


If the average change is small, it's not a particularly challenging thing to take 5 minutes to review something. (for context, my median change, both mailed and reviewed, is under 50 LoC, though this will depend on language, e.g. Java is notably more boilerplate-y than the languages I usually use, and I do a lot of configuration changes that are 1-3 lines by nature).

Picking up a 5 minute review after you've returned from some existing context switch (a meeting, coffee, conversation, lunch, etc.) is easy enough.

As a result, around 25% of my reviews are done within an hour, and 10% within 10 minutes. I'm a slightly fast reviewer, but approximately the same applies to reviews done of my code. And that includes reviews that require interaction with peers in Europe, which obviously require much longer because they can be mailed while I'm asleep or the reverse.


Yeah if it's a change to code you're deeply familiar with, that's not much of a context switch. In that situation, working on your own stuff or reviewing someone else's code feels basically the same, unless one of you is doing something very wrong.


10 minutes of work to review is fascinating to me. I’m somewhat of a baby programmer, and that’s over my head a bit I think.


Working there I tended to conduct code reviews each afternoon at the end of work. Sometimes after coming back from lunch as well. There isn't any "drop what they're doing" involved.


Sure, I do reviews first thing every morning, and sometimes right after lunch (mostly just rereviews), but that would give a mean/median response time of 4 hours assuming work completion time is uniformly distributed. And if changes are requested, that would add another 2-4 hours, which brings the total review-in-wait time to a full day, which the post was saying was unacceptable. To get the numbers they claim they must be reviewing more frequently than that.


You can use timezones to your advantage here. When California is finishing, Singapore is starting and when Singapore is finishing London is starting.


Why would you do that unless it's a hotfix or something really critical? Are the costs negligible?


So, it depends. When I was at FB, we had one person in Europe (me), one in SF and one in Asia.

Organising the diffs in this way made it much, much easier to collaborate.

Given how much FB has grown since then, I suspect this would be even easier, and this is how the numbers noted above are what they are.


Every team member doesn't do their review at the same time. You're likely to get a review within a few hours, which means total review time (they're only counting the time it's waiting for a review) is unlikely to be a whole day.


Which is what I expect. This doesn’t really agree with “a few hours” which is why I was confused.


It's perfectly consistent with a few hours, given that different team members tend to review diffs at different times during the day.


If you're working 8 hours and do at least 4 breaks per day - that's a break every 2 hours. If you do a break you might as well skim through emails and do some reviews. And remember that everything* is Pareto distributed, the review request sizes too. It's really not hard at all.

[*] it's an exaggeration, but close to reality.


> If you do a break you might as well skim through emails and do some reviews.

That's not exactly a break...


Well, after or before the break. The point is you're going to lose context completely anyhow.

And I'd like to stress the distribution of the "complexity of a review", which you can crudely approximate with something like number of lines changed. Most of them will be small.


When I'm on a break, I'm pretty much always mulling my task in the back of my mind, and doing things that don't require concentration, so when I get back from the break I'm still largely in the same frame of mind. Doing code reviews definitely forces a sharper shift of mind and takes longer to switch back from than a break does. Sometimes it is worthwhile, but it does have a cost.


Meta had the best code review quality I've seen in my career.


As someone who has only experienced terrible code reviews, could you give a little summary of what you liked?


Calling you out for lazy work. Teaching you how to use internal tools and company coding standards, suggesting better patterns, digging deep into the context of system design and maintainability, suggesting other domain experts to tag in the reviews. I learned all sorts of Hack things from code reviews that I obviously wouldn't have learned otherwise.


How are small things like "you left a commented-out print/log" or "use color var, not hex code" handled - or is that a "company coding standards" type of thing? Are those types of things potentially automated with a linter or similar?


[flagged]


I mean, I generally think comments with this kind of attitude aren't looking for serious responses, so I normally don't respond.

That said, there are three things to consider: 1) I honestly don't care whether you think it's garbage or not. 2) I saw some amazing software there, and I saw some that wasn't amazing. 3) It's an immense business, and there is a certain amount of inherent dysfunction that comes with that many people working on anything together much as you would see at any large company.


Boo, please consider a review of the HN comment guidelines https://news.ycombinator.com/newsguidelines.html

Could you potentially explain what makes it "some of the most garbage software imaginable"?

edit: wording


There are already many articles posted to Hacker News about the insane internals of the mobile app, the "metaverse", and many other things, most of them posted by ex-Facebook engineers. I don't need to rehash the same conversations, nor do I need to grant Facebook of all corporations any sort of courtesy.


The courtesy is towards us, people engaging in this discussion. Meta is not a party here.


Name one successful company that only produces beautiful, elegant, and readable code, with no nasty warts or hacks.

I'll wait.


> Name one successful company that only produces beautiful, elegant, and readable code, with no nasty warts or hacks.

Honestly, I'd genuinely like to hear some of the answers to this, or even just mentions of good codebases to explore.

That said, I don't think that there will be that many that satisfy all of the requirements listed here, because as the scope of a project grows, there will almost inevitably be some technical debt or accidental complexity, or even subjective things.


They probably had some pretty good cat meme image macros.


Yes good meme macros too. I left a few in my wake.


Most reviews can be done quite quickly. Small changes, test plan is convincing, low potential to break things. Besides, if engineers are good, code is easier to review.


Slow reviews mean that you need to context switch a lot to answer review suggestions and resubmit, which is much worse for productivity than taking a few minutes now and then to go and look through what a teammate is doing.

Alternatively your code review culture doesn't give a lot of suggestions for changes before submission, and that leads to more technical debt which will produce bad results long term.


Having thousands of engineers tends to help on that. If there's a 5% chance one engineer is available in the next two hours after you post a PR, with a team of 10 you will probably have none, with a team of 1000 you will easily get some.
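
Taking the 5% figure at face value and assuming engineers are independent, the arithmetic works out that way:

    P(at least 1 of 10 available)   = 1 - 0.95^10   ~= 40%
    P(at least 1 of 1000 available) = 1 - 0.95^1000 ~= 100%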


Pairing can sidestep both of these problems. No context interruption and reviewed continuously.


I have found that when you pair it's easy for both people to lose sight of some greater context. You become one big mind and become susceptible to as many blindspots as an individual.


If I read it right, it described there being a "code review team." Whether they're responsible for reviewing all code, or just for coming up with these sorts of practices was not clear. ETA: with that said, the "reviewer recommender" to me would imply it's people who work on the same codebase.


Nah, a dedicated "code review team" is not a thing.

Often the suggestion feature does suggest a whole team as a reviewer based on code ownership.




The code review team is about the code review process I think; as opposed to being reviewers. Such would be the nature of a peak industry company.


People don't like when I say this, but I think it's true:

If it typically takes you hours to "get inside" some code, that code is way too complicated.


Some problems are inherently difficult to understand, let alone solve. I'm inclined to believe that engineers at Meta solve a lot of these types of problems.


You can solve difficult problems with simple, well factored code.

Just like you can solve simple problems with spaghetti code.

Of course, writing simple, well factored code is often hard!


I'm a strong believer in fast reviews.

I get into deep working mode for 3 hours a day total, on a good day. The rest is meetings, daily sync, coffee, lunch, "hey can you look at something", hallway conversations, emails, my own inability to concentrate when I'm not feeling it.

I've been in this industry for coming up on ten years. None of this is going to change, unless I become an academic or a hermit.

If I don't get to do deep work, at least I'm going to unblock other people as fast as possible. That means out of an 8-hour work day, you're going to get a review from me within half an hour in most cases.

I've been on the team for years and I have write access. I WILL merge your change if you pass my review.

I find that this has immense benefits:

1) People just do things. They don't schedule design meetings, get approvals, get consensus. You know why? Because if someone has a good reason why the commit wasn't a good idea, we roll back. No harm, no foul. And guess what? It happens once in 100 commits. (If it's something truly complex, you do get a design doc approved first. But then the review is about making sure your code is correct, matches your design and our style/testing requirements, not whether it's the right thing to do.)

2) People write good commit messages. If your commit message isn't in the following format:

  Push foos in bar order instead of baz order.

  Following discussion with johnsmith, benchmark 
  (http://<shorturl>) shows 12% improvement in the hot 
  frobnication flow.

  Ticket: http://tickets/<ticket_number>

I'm sending it back to you. Since I merge most of my team's code most commits look like that.

3) People write small commits. Got a bigger change? I'll ask you to split it up (without breaking the build if we ship a version between commits). People don't push back on that, because they know it's not going to add a lot of overhead.

4) In the same spirit, people don't push back on changes I request - unless it's for a good reason. Discussions are on-point. When changes are made you get back approval half an hour later. No background psychological pressure of "I wanted to get this in today and I don't want to have to restore context tomorrow morning".

The velocity you reach is amazing. True serendipity. Unless you're consistently able to get full days of deep work in, I suggest you try it.

(edited for formatting)


Variation on 2 - We don't care about commit messages at all. All PRs must have good descriptions which use a company-wide template. But individual commits get squashed and their commit messages entirely replaced by the PR template contents.

So my commit history is

"fump" "GAH" "I think this works?"

etc., but all changes have a description of the issue (and a link back to the ticket) and the solution described in some detail.


Is there any tooling you use to squash the commits and use the PR message?


Github has a "squash and merge" option during merging. You can disable the other options (merge, rebase) in the repo settings.

I've worked with squash and merge almost everywhere and it's great. Keeps history clean (1 commit = 1 ticket, usually), and links back to the PR with discussions and context.
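
As a sketch, the same flow from the command line (the PR number and branch name are placeholders):

    # GitHub CLI: squash-merge PR #123 and clean up the branch
    gh pr merge 123 --squash --delete-branch

    # plain-git equivalent
    git checkout main
    git merge --squash feature-branch
    git commit    # one commit; paste the PR description as the message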


Unless your tickets are tiny, the single commit will have multiple functional changes and be harder to rollback.

I squash my commits into functionally independent units so they make sense and can be rolled back easily. Rebase FTW.
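
Roughly, assuming main as the base branch:

    # reorder and combine WIP commits into independent, revertable units
    git rebase -i main
    # in the todo list, keep one "pick" per functional unit and mark
    # the rest as "fixup" (or "squash" to keep their messages)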


A sibling comment mentioned GitHub, I'll also add GitLab here as well, which allows you to do the same thing in their merge requests (the equivalent of pull requests).

Just checked, it doesn't seem that my self-hosted Gitea instance allows for that, so neither does Gogs (or maybe I just can't find the option in the UI).


> I WILL merge your change if you pass my review.

Doing that, you take away the submitter's chance to do a final check before the merge. Especially if the merge happens on another day, rechecking with a clean mind helps to find missed things.

When someone does that to my changes, they get a kind message not to do it anymore.


Of course, this is only if the submitter explicitly agrees to that.


> 2) People write good commit messages. If your commit message isn't in the following format: [snip] I'm sending it back to you.

Try reviewing commits without reading any associated messaging, or having your team submit complex changes with no message. You have to engage your brain to understand what the code is doing and what is being changed, you'll have a better understanding of whether the comments are useful or insufficient, and most importantly you don't already have a bias that the code does what it says. You may find that when you really look at it, foos are getting pushed in qux order, or that the benchmark was calling a different function.


My takeaway from your comment is that one should read the code first, and then see if the comments match your understanding of it afterwards, NOT that one should not write any comments.


There's so much low-hanging fruit for improving the quality of diff viewing. The worst code reviews are often the ones where code gets refactored, leading to piles of deleted/created lines that are just code being moved or slightly renamed.

One very simple approach would be better git integration with the IDE, helping build commits that make sense, where a set of changes could easily be commented on by the author as they're performing the edits, then keep improving from there.


This is a concept I almost pursued for my PhD over a decade ago. I'm still surprised no one has done this.

The bigger picture: context is often missing for anything complicated, be it software or a new law. Yet many hands touch and retouch the underlying material over time. If you could capture _how_ something was built, and had enough insight into the larger process to sample some of the _why_, then you could both know what changed together and what potentially impacted the final decisions. This would result in (hypothetically) tremendous gains for anyone working on or joining a project that’s bigger than can fit in the mind of one person.


The PR history can answer the why, and if you can anticipate someone will ask why because you know it is edge-condition spaghetti then you can document it right in a comment.


This is a pretty myopic stance that says everything is fine. It's not. PR histories can be gigantic and interwoven; they don't tell you how, and only sometimes tell you why, usually at a very fine grain of detail. Saying that you can "document it right" is like saying "code it well in the first place." Much of the time there aren't great comments or PRs, and what there is ends up assuming a huge amount of knowledge about the why/how. If you're an outsider coming in, we should be able to do way better to help.


JetBrains has a code review platform that lets you do diffs in your IDE. The idea is incredibly appealing but for some reason I found it not so great in practice. Something about viewing the diffs in a different UI gets me in a different frame of mind. I found it hard to get into review mode in my IDE.

There are advantages of course, being able to jump around and get the context at will.


You can almost get there if you commit often and structure your commits such that all the trivial stuff is done in its own commits; for instance, if you introduce an if, you add the if and end in one commit and do all the re-indenting of existing lines in another.
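
A quick sketch of that split (the commit messages are invented):

    # stage only the hunks containing the real logic change
    git add -p
    git commit -m "Guard frobnicate() behind the new if"

    # then commit the mechanical re-indent on its own
    git add -u
    git commit -m "Re-indent existing lines (no functional change)"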


They measure the "quality" of review based on time spent looking at the code, which suggests to me they have absolutely no idea why they are making people do code review at all.

This is an especially bad metric because

1) We have good data that after about 60 minutes of reviewing we start to lose the ability to find more issues.

2) It incentivizes making bigger and harder-to-review changes so that people spend more time looking at the code.


You seem to suggest that the team is trying to increase the eyeball-time metric, but that isn't what the post said. The post said that they wanted to decrease the time-to-review metric without changing the eyeball-time metric.


When I was there, you could always just put

    Reviewed-By: self
in the commit, and not wait. Much faster ;)


In the future "Simon Elf" will be asked why he approved so much broken code ;)


It might be the reason why you aren't there anymore. (kidding)


Lol, I let myself out, but appreciate the thought ;)


> Next reviewable diff

As the commit author, it's in my (and everyone's) interest to size changes so that they're easier to review, and also to present them in the logical order of thinking. Personally, I prefer the bottom-up approach.

I bring the non-functional and incidental changes (like refactoring and tangential changes) ahead in the line-up, so that the actual changes are kept separate and concentrated at the tail end.

I make commit messages follow the pattern: present situation, the problem with that, what this patch does, and what effect it has / how it solves the problem or sets up a path forward.

The initial PR might be sliced too thinly, and so will have more commits than ideal. But, as the review progresses, and once both the reviewer(s)' and the author's mental models are in sync, commits can be collapsed at their logical boundaries.

Regardless of the tooling and presentation, it's imperative that the reviewers are intuitively aware of the ramifications of the change. Without that, the review ends up being nitpicking, spell checking, and whatever is obvious in the immediate vicinity, and the process degrades into a box-ticking exercise.

No AI needed. Be human.


Somewhere in the post, it's mentioned that FB uses code ownership logic in the next-review engine.

If folks are interested, there's a project called https://github.com/milin/gitown which does something similar on GitHub by leveraging code owners.
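
GitHub's built-in version of that mapping is a CODEOWNERS file, which auto-requests reviews from the listed owners; a minimal sketch (paths and team names invented):

    # .github/CODEOWNERS -- the last matching pattern takes precedence
    /src/payments/   @acme/payments-team
    *.sql            @acme/dba-team
    /docs/           @jdoe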


Thanks


Here's another factor at Meta that can reduce code review time: your performance review is based in part (maybe not a large part, but in part) on how many reviews you perform, and how many words you put into those reviews. edit: In short, people are incentivized to review.


Ex-Facebooker here: the number of reviews does factor into performance reviews, but mostly as a tie-breaker in calibrations when you're borderline between bands, not as a primary metric. People who do a disproportionately large number of reviews do get recognized and rewarded.


I quite agree with the proposition you are making here. It goes to the very heart of the matter, does it not. The better you are at giving detailed, explicit and concrete feedback about each and every particular aspect of a diff (NOTE: both good and bad), the better and more competent you clearly are, and the more of a true champion for the cause you are proving to be. Time and time again, I wish my reviewers would just lay it all out on the table. As opposed to the limited and perfunctory

LGTM


Meh.

I've often seen reviewers delivering essays to back up their arguments, which essentially boil down to "because I prefer it this way".

Focusing on pure word count doesn't mean that the feedback is valid, or even explains the reasoning well. If anything, it encourages nitpicky and long winded comments based on personal preference.

Often, less is more. If you can get a point across by a small code suggestion, do that instead. But definitely don't fall into the trap of suggesting huge chunks of code, or rewriting parts of it. Sometimes even asking a question to improve understanding is better than arguing a point.

And then other times, especially for trivial changes, "LGTM" or just a blank approval is perfectly fine as well. No need to waste time discussing trivial things if everyone is in the same boat.


I think you're replying to a joke comment about inflating word count via loquacious reviews. But I'm not sure either, so kudos to the author if it is.


I came to the conclusion long ago that mandatory code reviews are a waste of time. For critical stuff, absolutely.

But PRs and review cycles overburden dev teams and don't seem to move the quality needle higher one bit.

A better way is to ensure multiple hands touch a given area of the code, so that multiple eyes ultimately are seeing and manipulating those bits. If they are given a task to do in that area they will be motivated to understand it (and potentially improve it). By contrast, with code reviews often the reviewer does not have time to really deep dive into the code and will only have a superficial understanding of it.

Oh, also use code quality scanners to keep an eye on tactical code debt.


The most important reason for code review is not to fix the code but rather to transfer knowledge between developers. I've learned a lot of techniques for writing better code from suggestions from my code reviewers, and from reviewing other people's code.

Without code review, when will a junior developer ever learn anything from a senior developer?


It works both ways, too--it makes more developers aware of the others' work, what changes are being made and so forth. So it's not just expertise that gets transferred, but also the current state of the codebase.


In that case, it doesn't matter if you do the code review before or after they merge and deploy. You could read through the diffs in the commit log whenever it is convenient for you, and leave review comments for the other developer on their PR after the fact.


I have been on teams that never review code and teams that always review code, and I can confidently say I never want to work on the former again. That was with a junior team, but your team is going to have new people, if not junior, at some point. People who are familiar with each other's code and have agreed on standards can review code pretty quick. I would rather have it and not need it than need it and not have it.


I've found PRs are usually a waste of time because people focus on such trivial bullshit because they're easy to find.

98% of the review comments I receive hark back to some handwavy explanation about maintainability. But never in any way that could actually cause a bug.

When I run teams, my rule with PRs is: if it's not a bug and it's not against the code review guidelines, leave it. It's usually not worth the back and forth unless there is a clear mentor/mentee relationship.


This is exactly backwards.

One person writes the code. Many people read the code. If something is unclear or feels wrong to the reader, they get priority. Reviewers should leave all sorts of feedback; the response of the author should be to just implement the feedback unless they can clearly articulate why doing so is not a good idea. You're right that "it's not worth the back and forth", but completely wrong on who should be the one keeping their mouth shut.


Usually the feedback is not "I don't understand this" or "I find this unclear"; it's "I think it would be clearer if you did it this way instead of that way", or "if you made this change it would make it more maintainable".

And if everyone followed everyone else's advice you end up with everyone writing code in their reviewers style instead of their own which doesn't seem like a gain.

Not to mention code changes have a cost: very few devs spend as much time testing and thinking about their code when making review-driven changes. So many times you are trading more thought out, better tested code for less tested, less thought out code. Because of this I've seen countless bugs introduced from code changes related to trivial PR requests.


If bugs are being introduced, that means the team's testing culture is poor. If someone asks me to make a change in code review, I know that my modifications are fine because the unit tests still pass (and the integration tests, but it's rarer to have to touch those). If I forget to manually run them the build system runs them before my code is let in.

Writing in "the reviewer's style" is preferred because they are the reader. For most things that are "style" there should be explicit style guides; for the things that aren't, the reader's opinion is better every time.

If your team isn't getting useful code review feedback, that's also a team culture problem and your senior eng need to step up and lead by example.

"I don't understand this" is important feedback. At a minimum, more comments are needed, but it probably also signals that variable or method names need improvement.

My experience (15+ years, companies of all shapes and sizes from college shops to FAANG) is that the kind of engineer who feels that code review feedback isn't worth listening to, or is a waste of time, is detrimental to the team in both the short and long term.


Very few places have enough testing that code modifications are not correlated with bugs. Based on your comment, I think maybe you're working at places that aren't very representative of the industry as a whole.

> the reader's opinion is better every time.

I don't see why the reader's preference is always better. If you like designing for flexibility and I like KISS why is KISS better when you write the code, and flexibility better when I write it?

You act like our profession isn't plagued by flamewars. You see it on here every day with differences of opinion about microservices, design patterns, designing for today vs the future, KISS vs SOLID, how much testing is appropriate, should you focus more on automation vs integration vs unit tests.

When the organizations views on all these matters are aligned and documented such that the code reviews are structured that's great and you get good feedback. I've worked for 10 companies almost none of them were that aligned and organized.

Code review feedback can be very valuable; I was the champion for code reviews 8 years ago when most organizations were not doing them. But I think the vast majority of useful code review feedback falls into one of the following categories:

1. Spotted a bug, deviation from spec, or a deficiency in the spec
2. A detailed and concrete comment about maintainability (i.e. this method doesn't handle x case, or blows up on y case and that's not documented)
3. This violates a code review guideline
4. Expressing what is confusing, such as complex code that needs documentation or poorly named variables
5. Performance critique

Most of the code review feedback I've seen has not fallen into these categories and instead has to do with one persons preferences vs another.

- I'd break this out into a method vs. I'd inline this method
- Differences in code grouping by functional vs. logical cohesion
- You should make this more performant vs. this isn't going to be a hotspot, so build this for readability over performance


So you're advocating for pair programming all of the time? As someone who has to do both pair programming and code reviews quite a bit this past year (mentoring some quite junior developers), I find pair programming much more mentally exhausting. Two hours of that and my mind is pretty much spent for the day.


IME PR reviews are generally a half-assed substitute for pair programming. I don't really enjoy doing pair programming, but can't deny the effectiveness.


I don't read the parent as advocating for pair programming, just that any given part of the code should be understood by at least two people. This is possible to achieve even on a fully asynchronous team.


You'll have compliance requirements to answer for if you let code go to prod unreviewed.


In the spirit of tangentialism I randomly suggest: Architecture Review!

- Prevents juniors from being blown out of the ocean into startalloverland by seniors at tail end

- Focus on the most dangerous aspects of the change that can't be fixed later

- Sets the stage for more informed programming reviews later on (lower priority to me though)


I've seen mixed things from architecture reviews. I've seen it used by people with titles that exceeded their actual abilities, to stop people with junior titles from doing things the senior person simply didn't understand. And I've seen architecture reviews used just to satisfy the whims of senior people, to gratify that urge to nitpick or dictate what language they wanted to use. Those are the bad ways.

The good way I've seen architecture review used is nobody was going to tell you not to write or even deploy whatever the hell it was that you thought you wanted to write, but if you wanted to integrate with Grown Up Systems, there were ACLs that your system would not be added to unless and until your system had passed the review of the Grown Ups. I think this way is strictly better for two reasons: it can't strangle good ideas at birth, and it minimizes the amount of architecture reviewing that everyone needs to do, because half of the junk that gets brought to pre-implementation arch reviews never gets built anyway.


Architecture Review!

Looks a lot more fun than code review to boot:

https://www.youtube.com/watch?v=QfArEGCm7yM&t=57s


Have I missed the feedback from the users? There should be some quotes from team members who liked the change. Their mentioning that they are starting to be data-driven for internal tools suggests that they are starting to treat developers like cattle and not pets.

>Driving down Time In Review would not only make people more satisfied with their code review process, it would also increase the productivity of every engineer at Meta.

This hasn't been tested. "The average Time In Review for all diffs dropped 7 percent" - they have verified that they changed the left side of the equation, the review time, but they haven't checked the outcome, the productivity. Overall it doesn't seem like they have checked if their changes have negative side effects.

Likewise

>The choice of reviewers that an author selects for a diff is very important. Diff authors want reviewers who are going to review their code well, quickly, and who are experts for the code their diff touches.

doesn't match

>A 1.5 percent increase in diffs reviewed within 24 hours and an increase in top three recommendation accuracy (how often the actual reviewer is one of the top three suggested) from below 60 percent to nearly 75 percent.

They have shown that the people they nudge are more likely to do a code review. But are they the experts who do the review well?

The 1.5 percent in reviewed diffs could also be jitter.

*edit: Meta could extend the review process. There doesn't seem to be a review process for the review team. If they don't like to review their changes, or if they cannot find suitable reviewers, how are they qualified to roll out their changes to the software development team?


>They have shown that the people they nudge are more likely to do a code review. But are they the experts who do the review well?

I think there's an assumption that people won't just rubber-stamp significant diffs to code they don't own. When submitting a change to another team's project, if the suggested reviewers aren't actually the right people, they are more likely to know the right person who should review it, and they can manually add that person.

>The 1.5 percent in reviewed diffs could also be jitter.

Facebook / Meta has tools for measuring the effects of changes and seeing if they are statistically significant. Yes, it could still be jitter but without them giving more data about the experiment we can't tell what the chance of it being due to chance is.

>There doesn't seem to be a review process for the review team

There isn't a review team. Anyone can review a change.


> I think there's assumption that people won't just rubberstamp significant diffs to code they don't own

I wouldn't advise doing that but to play devil's advocate, why shouldn't they do that when number of reviews are probably a metric in their performance review? What's in place to discourage that?


If you're going to add machines to the process, why not add them with the purpose of eliminating the human from the process altogether? Reviews are necessary because compilers and linters can't catch everything. Runtime bugs that are not caught by the pipeline tend to be edge cases that don't happen until there is enough data to test (in the general sense) the feature. ML could be used for smart testing, and if it passes, the code diff merges automatically.

It always surprises me how much software companies want to rely on human verification. The whole point of programming is to automate and let the machine take care of it. Every few years the industry does add new tools to automate process like CI/CD pipelines, but at the ground level most companies seem to favor adding more humans whenever the technology is not good enough.


Catching bugs is only one of the reasons code review is important. It is also important to transfer knowledge between developers and review design, architecture, scalability, and performance concerns.


I don't know about knowledge transfer. I feel that can be done separately and more effectively. However, in the context of this article, if you're going to build an AI tool to aid the process, then why not make an AI that can be trained on a new feature and then tests code changes for bugs? The dev process has been increasingly automated over the years, and I think that won't stop. The current low-hanging fruit is things like rolling back the code when something breaks, and testing the code before publishing - things that companies usually have a lengthy manual process for.


Well, yes, the menial parts of review can and should be automated. Big tech cos are leading the industry, and ML does find its way in there (with mixed success).

But at its core, code review is putting two heads together to write some code instead of one. In that sense you can think of the code review problem as the "automating away all of programming" problem.


Fair point, and to that extent reviews can still be useful. However many code changes are small, bug fixes, or straight forward changes, so the main benefit would still be the same. There could be an option to wait for human reviews if needed.


How would the AI be aware of the business logic communicated via some word doc?


I'm guessing it would be trained via manual input (+visual), i.e. recording actions on the app. The AI would repeat the process and decide if it's a pass or fail. Different AIs could also be trained on other data, like network calls, and test those as well. I'm sure something like this must exist already, but I'm not seeing efforts to integrate it with current industry practices.


You can never automate how readable the code is or how good the design is.


Google already has linters and static analysis tools to find out common mistakes or suggest fixes. They complement but not substitute for human reviews though.


On a much smaller scale (with a team of 8), but I also noticed this problem and wrote a "nudge bot" for Slack and Gerrit. It takes the team-relevant changes and posts them to a Slack channel in a formatted message with the patch state (not reviewed, pending, needs change, etc.).

I gave a talk about it, unfortunately in Hungarian, but you can see screenshots of how it worked: https://youtu.be/7WiICWyP1sQ

Here is the code:

https://github.com/kissgyorgy/slack-review-bot
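
The Slack side of such a bot can be a one-liner; a sketch using an incoming webhook (the URL and message are placeholders):

    # post a formatted review reminder to a channel
    curl -X POST -H 'Content-type: application/json' \
      --data '{"text":"Needs review: change 4242 (pending) - https://gerrit.example.com/c/4242"}' \
      "$SLACK_WEBHOOK_URL"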


Is it just me, or in the last couple of weeks (since announcement of layoffs) there's been an increase in FB tooling/infra threads? Could be Baader–Meinhof effect, of course.


Stacked diffs are awful compared to PRs. Not every commit should be clean.


Why? I found them way easier to review than regular PR during my time at fb. Since every commit is cleaner, you get fewer, higher quality commits than in a PR.


If code is important enough, it will get reviewed and tested one way or another.

Everything else is a waste of time.


All these comments about how code review is a waste of time, or suggest code review is only for bugs, really shine light on why so much software is incredibly slow today.


None of them engage with the content of the article either - having a "play next" button for code review is awesome assuming it works reasonably well. I'm curious about the quality of nudgebot reviews. In my experience the PRs that sit around forever are the 3000 line epic find/replace refactors all done in one commit that are impossible to really review. I skimmed the paper and didn't see any accounting for "diff time to diff length", so I'm not sure the result there is anything meaningful. Maybe people are getting faster feedback to not submit such shitty PRs.


what stops you from reviewing the sed one-liner that created the diff, instead?


Reviewing the sed isn't going to help you spot the instance that shouldn't have been replaced or the missing substitution.


Maybe people'd have more time to optimise the performance of their software if they weren't spinning their wheels and context switching waiting hours for minor changes to be merged.


Too many think code reviews are an opportunity for endless debates over personal preference. A code review should be fast and cover blatant good practice violations and architectural mistakes. Everything else should be taken care of by linters and tools. If a reviewer wants code done in a different way they can write the code themselves.


One thing I've enjoyed where I am now is that PR comments come in two flavors. The first, actual feedback. The second, borderline pedantic issues that are prefaced with "nit: " in the comment. Nit comments are safely ignored but are there so that if the author wants to put in that change while changing some other issue, then OK.


I especially like Github's new option to make a _suggested diff_ of what you want changed. Typo fixes, comments re-worded, etc. It really reduces friction, both as the person making the suggestion, and as the person who authored the PR.
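
For anyone who hasn't seen it: a suggestion is just a fenced block in a review comment, which the author can apply with one click (the replacement line here is invented):

    ```suggestion
    // Retry up to MAX_RETRIES times before surfacing the error.
    ```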


I completely agree, suggesting the actual diff is the ultimate "put up or shut up". One of the most annoying review situations is when you get some vague comment that doesn't spell out what change is being asked for.

Although the best response to that sort of thing is just a quick DM - "hey, I'm not sure how to interpret this comment, wanna hop on a call for a minute and explain?"

Again - put up or shut up. Want a change? Cool, let's pair on it. I want the code to be good too. But I'm not going to just read your comment and based on the vibe I'm feeling shoot off some change, only to find out that it wasn't what you meant.


Yes, hopefully someday GitHub will have all the major features that Gerrit has had for years. By the time I'm ready to retire, GitHub PR review UI should be up to approximately 2010 standards.


The reviewer should make 'nit' changes directly rather than pass a mini-ticket back to the author for evaluation. Why are reviewers so scared to modify PRs at orgs?


That's an interesting thought. Personally, I feel messing with another person's personal branch is dangerous. They might already have some change locally, which is going to lead to surprises, and most of the people I work with unfortunately don't know Git well enough.

It's something that could be addressed on an organizational level perhaps.

The other point to consider is the whole teach a man to fish vs give them a fish thing. Teaching is better in the long run.


Teaching: you can still comment and edit. And your edit in the PR is reviewable by the author.

Org level: absolutely. Solvable thru tooling.


I agree that anything that _could_ be covered by an automated check ought to be. But I don't think I'm convinced that everything else is either an egregious mistake or isn't worth discussing.

IMO a big part of what you ought to be reviewing for is readability, which does sometimes overlap with personal preference. But there's a spectrum from "I'd indent these columns a little differently" to "it's hard for me to follow what's happening, I think it'd be clearer if we organized things like...".


"I think it'd be clearer if we organized things like..." any such guidelines should be agreed upon beforehand, otherwise what's the expectation? That people rewrite their code to accommodate someone's needs? Reading, other than what's in whatever coding style the team uses, is subjective. Best to agree upon what the whole team prefers beforehand.


The expectation is that you talk with the person who did the review and discuss the best place to put things. Those discussions are good for code health, and it is much easier to have them when you have new code in front of people so they can see what it looks like.

If you can't have those discussions in a code review then your code review process is too slow, and you ought to fix it so that it is easy to have these discussions there.


Reading is absolutely subjective, but that doesn't mean it can't or shouldn't be optimized for (just like with regular written language -- editors are a thing).

Being a good code reviewer (just like being a good editor) is absolutely its own skill though, and people can be bad at it.


Meta:

> At Meta we call an individual set of changes made to the codebase a “diff.”

GitHub:

> Pull request

Amazon:

> Change Request

GitLab:

> Merge Request

Google:

> Changelist

Nitpicking, but jesus christ, why can't we stick to a single term?


They aren’t all the same thing. A diff (or patch) is a change to the code that’s not tied to the history in any place. Linux kernel patches and Meta diffs are like this.

Pull Requests and Merge Requests are more complicated, and the unit of change is a branch. Many people reject this workflow because ultimately a branch of commits can be summed to one single diff and that’s the only thing that matters in the wider project, outside of your private branches on your personal machine.

GitLab doesn’t support anything other than merging hence the name. Most people who prefer linear history will never merge. Instead they’ll land their change on the tip of the branch, each developer taking it in turns to advance the linear history one step at a time.

I don’t know about changelists or change requests.


No, Gitlab does support other integration workflows, including one which requires MRs be rebased and fast-forwards them onto main without a merge commit.


A changelist is most generally a Perforce term (and as may the case for Google, a similar concept). In the Perforce Helix Core (what the "Perforce Server" is currently called), changelists are increasing integers that mark units of atomic change. Changelists can contain committed work, pending work, or shelved work (a shelf is un-committed work that resides server-side). Shelves are often how code review is done since the file content and metadata is available outside of the author's computer.


Whereas “merge request” and “pull request” have very literal meanings in terms of Git / distributed version control systems, Phabricator’s “diff” is VCS-neutral. Diffs don’t even have to come from a VCS at all; we have tools that spit out Phab diffs for review of changes they’re going to apply to MySQL DBs, for example.

When you use Phab with Git you normally set it to always squash merge. Each diff becomes exactly one commit on the main branch, with its commit message reformatted by Phab. The SHA that you committed before submitting your changes to Phab never appears in the authoritative repo (if only because Phab adds Reviewed-By and other metadata).


It's all the tooling: Phabricator has diffs, Perforce has changelists, and Git has both "merge" and "pull".

Tools don't agree on what to call stuff, and everyone uses different tools (and a lot of these tools are pretty old). Perforce is from '95, Phabricator is from '07. Both of those predate widespread usage of Git (released '05), so it's pretty reasonable that everything is different.


I think this basically doesn’t matter and it’s fine for institutional culture to have its own ingroup jargon.

Trying to standardize for some aesthetic reason wouldn't add anything.


I don't know that this particular term is a big deal, but it's a little alarming that you think standardized terminology has no more than "aesthetic" value.


“Some aesthetic reason” like cutting down on onboarding time? Making things a bit less complex for the juniors?

Standardization is more about consistency than aesthetics.


I think most people would be able to handle any one of those terms smoothly on day 1

You: "I'm going to send a pull request"

Other engineer: "OK. BTW, we call them change requests here"

You: "OK"


If only it were that simple. As we’ve seen in this thread, you’d instead likely have:

Somebody else: Where I used to work we called them pull requests. Let’s have a meeting to discuss changing the terms to match industry standards. We may also want to create a committee to agree on other terminology changes too, like main instead of master.


I'm not sure if you're saying this as an argument for standardisation (ha, let's debate the spelling) or not, but these people will always find something, from frameworks to linting rules; the amount of discussion stays the same no matter how much is agreed upon.


I'm not really arguing one way or the other, just pointing out that the reality of working with lots of people is lots of opinions on how to do things, so it's never that straightforward to get things done. It's just a fact of life.

And yeah, I've been in those long discussions too about frameworks and linting rules. It can often turn into just egos trying to win and less about doing what's right or expedient for the team.


The workflow is different enough that it does take some time to get used to.


> The workflow is different enough that it does take some time to get used to.

If the workflow is different, then it's _not_ just a question of terminology. In that case it's probably a good thing that there are different names for the various workflows.


These are all just terms for slightly different perspectives on the same thing. A diff is the summary of changes in a commit. The person who wants to effect the change submits a pull request. Another person reviews the pull request and decides whether or not to merge the code into the main codebase, etc. It's all the same thing from different perspectives.


The content is the diff (including description/test plan), which I'd argue should be the name.

It could be in a draft state, awaiting review, accepted for merge, or being merged.

'diff' is the shortest and most descriptive name.


There is no "correct" name. You only see things from your standpoint, and so does everyone else. There is nothing about your opinion that is any less arbitrary than all the opinions that disagree with you.


When I hear “PR” I think of a tree of git commits. When I hear “diff” I think of one set of squashed text changes, for a tool that doesn’t support git.


Is diff the best of them all though, because of the implication?


Mac: What implication?

Dennis: The implication that things might go wrong for her if she doesn't review my code in a timely manner. Now, not that things are gonna go wrong for her, but she’s thinking that they will.


The others have one thing in common: they describe the difference between what is and what should be. Whereas diff is commonly used for the difference between any two things. You could make a diff (as in using the diff program) between two feature branches with no intention of merging one into the other. To me it makes sense to have separate names for these two concepts.
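
e.g., with git or the classic diff program (branch and directory names here are made up):

    # Compare two feature branches with no intention of merging either
    git diff feature-a feature-b

    # Or between two checked-out copies of the tree
    diff -ru worktree-a/ worktree-b/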


Diff seems the best because the others describe internal details of the source code management implementation. Pull request, merge request, and changelist especially.


They are all diffs. Simple is better here.


All pull/change/merge requests are diffs, but not all diffs are pull/change/merge requests.


Right, but in the context of making code changes, you can use "diff", which aligns with the Unix tool diff and the application of said diff with patch. So diff and patch are the operations that happen as you mutate a code base.
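
The classic round trip, with made-up file names:

    # Produce a unified diff between two versions of a file
    diff -u old/app.c new/app.c > change.patch

    # Apply it elsewhere to mutate the code base
    # (-p1 strips the old/ and new/ path prefixes)
    patch -p1 < change.patch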


Of all these, the one that confused me the most when I encountered them was surely "pull request".


Same


> Nitpicking, but jesus christ, why can't we stick to a single term?

Because different groups of people develop their own languages. This is pretty standard for humanity across all time periods, disciplines, cultures, etc.


diff request makes the most sense to me...

At least that seems descriptive in the context of making a request in a system to get approval to change the code.


A diff request would be asking someone to create a diff for something.


Other folks have covered the basics (these aren't all the same thing), but one thing I feel like few people realize is that "Pull Requests" aren't actually a git concept at all.

"Pull Requests" are just implementations that many web-based git tools (but not all!) have adopted to facilitate the code review process taking place on their platforms.


I'd like to propose another one to consider: "delta".


Nitpicking, but "changelist" sounds like what Meta would call a diff-stack (aka multiple diffs).

don't forget Incident = SEV = DEFCON = 911 ...


Disclaimer: these are anecdotal reports. I've heard from a lot of my friends how abysmal the quality of code is at Meta. Obviously this may not be true for all teams/products, but that's the general sentiment. Why make reviews faster when you're already dealing with a mess? This is abundantly evident from the constant firefighting, duct tape, and a metric-driven culture that incentivizes the number of diffs landed.


I'd say local code quality is generally good, but overall it's a big hodgepodge of small features duct-taped together, so the whole app becomes a tangled mess. The metric-driven culture tends to emphasize impact, not diffs landed.


> I'd say local code quality is generally good

Have you ever used facebook.com? At least the frontend is incredibly slow (on a machine costing thousands of euros), so that's not synonymous with quality in my experience.


From reading the comment thread, I would say the front end as a whole is the macro level; "local" would be a specific piece of the front end, like the friends list, which by itself could have "good code".


Like when I have a red number indicating I have a notification, and then I click and it just loops forever on some grey animation because it doesn't manage to load the list of notifications on my fiber connection?


You're thinking of features when the other person is thinking in code. Maybe some overcomplicated bloat is involved with whatever goes into clicking that.

They mean "local" as in code scope, not necessarily client-side or the size of a feature.


The little red dot is one microservice, message previews are another, 'load profile picture of message sender id' is probably another, etc.


How would you measure impact?


A change you made moved some metric a measurable amount (usually aligned with a team or product goal).

You put these little nuggets of impact on your performance review.


At first guess, landing/launching a new feature, and then possibly feature adoption.


Are there companies with a reputation for code quality?


Google places a pretty high emphasis on code quality and readability. It's not universally great, but it's a big part of the culture. You can catch a glimpse in their [style guides][1], [Abseil TotWs][2], and [AIPs][3]. Almost every change is required to be reviewed for "readability" in addition to functionality. This can feel like a lot, but it leads to pretty consistent style across the codebase, which makes it a lot easier to switch projects or read and understand an unfamiliar section of the code.

[1]: https://google.github.io/styleguide/

[2]: https://abseil.io/tips/

[3]: https://google.aip.dev/


I find Google's take on code reviews to be particularly good:

https://cloud.google.com/architecture/devops/devops-tech-tru...


I enjoy how they state like 3 times that code reviews should be synchronous, yet "industry-standard" (aka: what people really do) is to toss it over the fence in a PR and go back and forth for several days with stylistic bullshit.


Code reviews are usually synchronous at Google, though; commenting and fixing things is like chatting with the reviewer, so they're usually done quickly. Not sure why this wouldn't be industry standard; is there any reason to make code reviews more painful than that?


I was kinda joking that places like Google might have good processes but most places just cargo-cult it and do it incorrectly. My current job does reviews that are so useless I don’t even participate anymore, and no one cares. I really want to work at a place that cares about code quality and has an effective process.


Oh, Abseil is new to me. Thanks!

Do you know where the missing tips are? For example:

https://abseil.io/tips/110

The root page (parent's [2]) mentions this one as famous by name:

> Often they are cited by number, and some have become known simply as “totw/110” or “totw/77”.

Is totw/110 some Google secret sauce that we are forbidden from knowing or did they skip some weeks?


A lot of them are related to Google-internal practices or libraries, or are too opinionated (Google has strong internal C++ opinions that aren't necessarily correct or even reasonable elsewhere) to be useful.

110 in particular probably could be public, but it looks like they stopped externalizing them in late 2020, which is kind of sad, so I assume they never got around to it.


Yeah, that sounds about right to me. I used to help edit Testing on the Toilet and there were a lot of internal-tool-specific ones that we never made public. When we were low on content, we would happily publish an issue about someone's internal service/project, for example. It's not that we were trying to hide something, it would just be completely useless to the outside world.


TIL TotT is externalized.

https://testing.googleblog.com/


I asked a Google friend to find totw/110 for me and apparently it's just some tips about construction and destruction of global variables.


These missing TotWs are usually about internal tools that aren't open source.


totw/110 was about safely initializing global and static objects. IIUC, there was something missing in the open-sourced version of Abseil, so it could not be published verbatim.


Yes. VMware's ESXi kernel was unbelievably rock-solid compared to other similar systems. Fantastic resilience against even hardware faults. I've heard that there was a culture of "doing things the right way" there.

Meanwhile in the same org, the group doing their GUI kept adding band-aids to a broken mess for years.


ESXi is a gem that few appreciate. It's been quietly chugging along as my home server hypervisor since 4.1 (2010). And it's free.


GitLab and Netflix come to mind.


If what you’re saying is true, I wonder how much of the quality problem is just a company getting really big over time. It's harder to standardize or keep tabs on such a large number of developers.


Because making it slower won't improve anything.


Situation: code review takes too much time.

Solution: announce unprecedented layoffs of 10000 programmers.

Resolution: no work to be done. Code review team on schedule.


11,000 employees, about half technical.



