If you're going to add machines to the process, why not do it with the goal of eliminating the human from the process altogether? Reviews are necessary because compilers and linters can't catch everything. Runtime bugs that slip past the pipeline tend to be edge cases that don't surface until there's enough data to test (in the general sense) the feature. ML could be used for smart testing, and if the tests pass, the code diff merges automatically.
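Something like this at the tail end of the pipeline (a minimal sketch; the model call, the merge call, and the 0.05 cutoff are all made up for illustration):

    import random

    RISK_THRESHOLD = 0.05  # arbitrary cutoff; a real system would tune this

    def run_ml_test_suite(diff_id):
        # Placeholder for a model scoring a diff's failure risk, 0.0-1.0.
        return random.random()  # stand-in for an actual prediction

    def merge_pull_request(diff_id):
        print(f"auto-merging {diff_id}")  # stand-in for a real VCS API call

    def request_human_review(diff_id):
        print(f"escalating {diff_id} to a human reviewer")

    def gate(diff_id):
        # Merge automatically only when the predicted risk is low enough;
        # otherwise fall back to the usual human review.
        if run_ml_test_suite(diff_id) < RISK_THRESHOLD:
            merge_pull_request(diff_id)
        else:
            request_human_review(diff_id)

    gate("diff-1234")

The interesting part is entirely inside run_ml_test_suite; the plumbing around it already exists in most CI systems.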
It always surprises me how much software companies want to rely on human verification. The whole point of programming is to automate and let the machine take care of it. Every few years the industry does add new tools to automate processes, like CI/CD pipelines, but at the ground level most companies seem to favor adding more humans whenever the technology is not good enough.
Catching bugs is only one reason code review is important. It also transfers knowledge between developers and covers design, architecture, scalability, and performance concerns.
I don't know about knowledge transfer; I feel that can be done separately and more effectively. However, in the context of this article, if you're going to build an AI tool to aid the process, why not make an AI that can be trained on a new feature and then test code changes for bugs? The dev process has become increasingly automated over the years, and I don't think that will stop. The current low-hanging fruit is things like rolling back the code when something breaks and testing the code before publishing: things companies usually have a lengthy manual process for.
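To be concrete about the rollback case, the lengthy manual runbook usually reduces to a loop like this (a sketch; the metrics query, the deploy call, and the thresholds are placeholders):

    import time

    ERROR_RATE_LIMIT = 0.01  # assumed SLO: tolerate 1% failing requests
    CHECK_INTERVAL_S = 60
    CHECKS = 10              # watch the first ten minutes after a deploy

    def current_error_rate():
        # Placeholder for a real metrics query (failed / total requests).
        return 0.0

    def roll_back_to(version):
        print(f"rolling back to {version}")  # stand-in for a deploy API call

    def watch_deploy(previous_version):
        for _ in range(CHECKS):
            time.sleep(CHECK_INTERVAL_S)
            if current_error_rate() > ERROR_RATE_LIMIT:
                roll_back_to(previous_version)
                return
        print("deploy looks healthy")

    watch_deploy("v1.41")

Everything here is mechanical; the only judgment call is the threshold, which is exactly where a model could replace a hardcoded number.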
Well, yes, the menial parts of review can and should be automated. Big tech cos are leading the industry, and ML does find its way in there (with mixed success).
But at its core, code review is putting two heads together to write some code instead of one. In that sense you can think of the code review problem as the "automating away all of programming" problem.
Fair point, and to that extent reviews can still be useful. However, many code changes are small, bug fixes, or otherwise straightforward, so the main benefit would still hold. There could be an option to wait for human review where needed.
I'm guessing it would be trained via manual input (+visual), i.e. recording actions on the app. The AI would repeat the process and decide if it's a pass or fail. Different AIs could also be trained on other data, like network calls, and test those as well. I'm sure something like this must exist already, but I'm not seeing efforts to integrate it with current industry practices.
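Concretely, I'm picturing the recorded session stored as a list of steps plus the state seen at each one, then replayed and compared (a toy sketch; the replay driver and the pass/fail comparison are exactly the parts the ML would have to do well):

    import json
    from dataclasses import dataclass

    @dataclass
    class Step:
        action: str    # e.g. "tap" or "type"
        target: str    # e.g. a UI element id
        expected: str  # app state captured when the session was recorded

    def replay(step):
        # Placeholder for a UI driver executing the step against the app.
        return step.expected  # stand-in: pretend the app behaved as recorded

    def session_passes(path):
        with open(path) as f:
            steps = [Step(**s) for s in json.load(f)]
        # A real system would use a learned similarity check here instead
        # of exact equality, so cosmetic UI changes don't fail the run.
        return all(replay(s) == s.expected for s in steps)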
Google already has linters and static analysis tools to catch common mistakes or suggest fixes. They complement human review but don't substitute for it, though.
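For anyone who hasn't seen these up close, a lint rule is essentially an automated check over the syntax tree. A toy example of the flavor of thing such tools flag (nothing like Google's actual tooling, just an illustration):

    import ast

    CODE = '''
    def add_item(item, bucket=[]):
        bucket.append(item)
        return bucket
    '''

    # Toy lint rule: flag mutable default arguments, a classic mistake
    # that static analysis catches before a human ever sees the diff.
    for node in ast.walk(ast.parse(CODE)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    print(f"{node.name}: mutable default argument")

Design, architecture, and scalability questions are the part no rule like this can express, which is why these tools complement rather than replace the human.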