
It's interesting to me that the industry still relies so heavily on diffs of code listings as a primary method for code reviews.

It is absolutely a good use of a human reviewer's time to build a mental model of the code's runtime behavior. But doing that manually, by reading each line and trying to predict what will happen when it runs, is massively inefficient, incomplete, and error-prone. It's also susceptible to the "LGTM, fine, just merge it" phenomenon when the PR is large.

Reviews of static code listings won't reveal how ORMs structure their DB queries at runtime, how dynamically injected/configured components will behave, or any number of other things that are only visible by watching the code execute.
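
To make that concrete, here's a minimal, self-contained sketch (the models and data are hypothetical; the comment above names no specific ORM, so I'm using SQLAlchemy) of an N+1 query pattern that a static diff won't reveal. The loop at the bottom reads as plain attribute access in review, but lazy loading issues one extra SELECT per post at runtime:

    # Hypothetical models; run with `echo=True` to watch the extra queries.
    from sqlalchemy import ForeignKey, create_engine, select
    from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                                mapped_column, relationship)

    class Base(DeclarativeBase):
        pass

    class Author(Base):
        __tablename__ = "authors"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]

    class Post(Base):
        __tablename__ = "posts"
        id: Mapped[int] = mapped_column(primary_key=True)
        title: Mapped[str]
        author_id: Mapped[int] = mapped_column(ForeignKey("authors.id"))
        author: Mapped[Author] = relationship()  # lazy-loaded by default

    engine = create_engine("sqlite://", echo=True)  # echo logs every SQL statement
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        for i in range(1, 4):
            session.add(Author(id=i, name=f"author {i}"))
            session.add(Post(id=i, title=f"post {i}", author_id=i))
        session.commit()

        # Looks fine in a diff; at runtime it issues 1 + N queries.
        for post in session.scalars(select(Post)):
            print(post.title, post.author.name)  # each .author: one more SELECT

Nothing in the diff of that loop hints at the query count; you only see it by executing the code and watching the SQL log.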

We have commoditized linting, checking for CVEs in dependencies, and static analysis for certain classes of bugs. We should now fold fast runtime analysis into the same flow, to relieve human reviewers of the burden of line-by-line "telepathy reviews" where they try to magically divine how something will run in production at scale. (Full disclosure, I work at a company doing exactly that - https://appmap.io - and one of our most popular features is our sequence diagram diff that shows runtime differences between a PR and the main branch.)
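
One cheap way to get that kind of runtime signal into CI yourself (a generic sketch, not AppMap's API) is to record the SQL statements a code path actually issues and assert a budget in a test, so the N+1 above fails the build instead of depending on reviewer telepathy:

    # Count statements executed on an engine while fn runs, using
    # SQLAlchemy's event hooks; `render_post_list` below is hypothetical.
    from sqlalchemy import event

    def count_queries(engine, fn):
        """Run fn() and return the number of SQL statements it executed."""
        statements = []

        @event.listens_for(engine, "before_cursor_execute")
        def record(conn, cursor, statement, parameters, context, executemany):
            statements.append(statement)

        try:
            fn()
        finally:
            event.remove(engine, "before_cursor_execute", record)
        return len(statements)

    # In a test:
    #   assert count_queries(engine, render_post_list) <= 2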


