Linters suffer from a false positive/negative tradeoff that AI can improve. If they falsely flag things, developers learn to reflexively ignore or silence the linter. If they don't flag a thing then ... well ... they didn't flag it, and that burden gets pushed onto some human reviewer. Both outcomes are less than ideal, and if you can decrease how often either happens then the tool is better in that dimension.
How does AI fit into that picture, then? The main benefits IMO are the abilities to (1) use contextual clues, (2) process "intricate" linting rules (implicitly, since it's all just text to the LLM -- which also means you can have many more rules, because rules too complicated for a linter author to encode without an unacceptably high false-positive rate are unlikely to ever make it into a traditional linter), and (3) give better feedback when rules are broken. Some examples to compare and contrast:
For that `except:` vs `except Exception:` thing I mentioned, all a linter can do is check whether the offending pattern exists, making the ~10% of legitimate use cases just a little harder to develop. A smarter linter (not that I've seen one with this particular rule yet) could allow a bare `except:` if the exception is always re-raised -- that's both the normal pattern in DB transaction handling and similar situations where you legitimately want to catch everything, and also a pattern where catching everything is unlikely to cause the bugs it normally does. An AI linter can handle those edge cases automatically, not giving you spurious warnings for properly written DB transaction handling. Moreover, it can suggest a contextually relevant fix: `except BaseException:` to signal to future readers that you considered the problems and definitely want this behavior, `except Exception:` to indicate that you do want to catch "everything" but without weird shutdown bugs, `except SomeSpecificException:` because the developer was just being lazy and would have accidentally written a new bug had they caught `Exception` instead, or perhaps a different API altogether if exceptions weren't a reasonable way to control the flow of execution at that point.
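For concreteness, here's a minimal sketch of the re-raise pattern a smarter linter could whitelist (the `transfer` function and the DB-API-style `conn` object are illustrative, not from any particular codebase):

```python
def transfer(conn, src, dst, amount):
    """Move money between accounts inside a single transaction."""
    conn.execute("BEGIN")
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
    except:  # bare except, but safe: everything is caught only to roll back
        conn.execute("ROLLBACK")
        raise  # always re-raised, so KeyboardInterrupt/SystemExit still propagate
    else:
        conn.execute("COMMIT")
```

A pattern-matching linter flags the bare `except:` here regardless; a linter that checks for the unconditional `raise` (or an AI reading the surrounding context) can see this is exactly the case where catching everything is fine.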
As another example, you might have a linting rule banning low-level atomics (fences, seq_cst loads, that sort of thing). Sometimes they're genuinely useful, though, and an AI linter could handle the majority of cases with advice along the lines of "the thing you're trying to do here is easily handled with a mutex; please remove the low-level atomics". Incorporating context like that is impossible for a normal linter.
My point wasn't that you'd replace a linter with an AI-powered linter; it's that the tool generates the same sort of local, mechanical feedback a linter does -- all the stuff that might bog down a human reviewer and keep them from handling the big-picture items. If the tool is tuned for a low false-positive rate, then almost any advice it gives is, by definition, an important improvement to your codebase. Human reviewers will still be important, both for catching anything that slips through and for the big-picture code review tasks.