This is a really arrogant statement. In C, `if (a = b)` could cause incredibly subtle bugs. It was prominent enough that it got addressed in all the best-practices books, like "Writing Solid Code", with a style later dubbed "Yoda conditions" - a style that stayed popular despite being acknowledged as less readable, because putting the constant on the left turned the typo into a compiler error. It's hard to think of many other classes of bug with that kind of significance, and you just dismissed it out of hand with "I don't make those kinds of mistakes".
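For anyone who hasn't been bitten by it, here's a minimal sketch of the bug and the Yoda-condition defense (the variable name is made up for illustration):

```c
#include <stdio.h>

int main(void) {
    int mode = 0;

    /* Intended: compare mode to 5. The missing '=' silently turns the
     * comparison into an assignment; the condition is always true and
     * mode gets clobbered. This compiles (at best with a warning). */
    if (mode = 5) {
        printf("always taken, mode is now %d\n", mode);
    }

    /* Yoda condition: constant on the left. The same typo would read
     * "5 = mode", which is a hard compiler error, not a runtime bug. */
    if (5 == mode) {
        printf("the typo can't compile in this form\n");
    }

    return 0;
}
```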
Well, for the purposes of this conversation, that's pretty much equivalent to the language preventing bugs. You can imagine similar cases that would be harder for a linter to pick up: type errors, or accessing the wrong side of a union - in C++ people used Hungarian Notation for a good while to try to make these kinds of errors more detectable; now some languages have tagged unions/sum types that make them nearly impossible. Where a linter differs from the language in catching syntactically evident errors is buy-in: even if I run a linter on my own code, that isn't the same as the whole team agreeing to block any code that fails the linter from shipping to production; language syntax, on the other hand, is implicitly agreed upon.
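A rough C sketch of that failure mode (the types and names here are invented for illustration): a plain union happily lets you read whichever member you like, and nothing checks it matches what was last written. The hand-rolled fix is to carry a tag alongside the union; tagged unions/sum types bake that discipline into the language, so the unchecked access doesn't compile in the first place.

```c
#include <stdio.h>

/* A plain C union: nothing stops you from reading the wrong member. */
union value {
    int    i;
    double d;
};

/* The manual convention: a tag that code is supposed to check.
 * Sum types make the check mandatory instead of a convention. */
enum value_kind { KIND_INT, KIND_DOUBLE };

struct tagged_value {
    enum value_kind kind;
    union value     as;
};

int main(void) {
    union value v;
    v.d = 3.14;
    /* Wrong-side access: compiles fine, prints garbage. */
    printf("misread as int: %d\n", v.i);

    struct tagged_value tv = { .kind = KIND_DOUBLE, .as = { .d = 3.14 } };
    switch (tv.kind) {
    case KIND_DOUBLE: printf("double: %f\n", tv.as.d); break;
    case KIND_INT:    printf("int: %d\n",    tv.as.i); break;
    }
    return 0;
}
```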
On this specific example: historically, running a linter all the time wasn't always practical, and `if (a = b)` was often used intentionally, e.g. to fold a null-pointer check into an assignment.
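That idiom looks like this (the extra parentheses are the conventional signal of intent, and they're what silences GCC/Clang's `-Wparentheses` "assignment used as truth value" warning):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Intentional assignment in the condition: assign and null-check
     * in one step. The doubled parentheses mark it as deliberate. */
    char *home;
    if ((home = getenv("HOME")) != NULL) {
        printf("HOME=%s\n", home);
    }

    /* The classic loop form of the same idiom: read until the call
     * returns NULL at end-of-input. */
    char buf[256];
    char *line;
    while ((line = fgets(buf, sizeof buf, stdin)) != NULL) {
        fputs(line, stdout);
    }
    return 0;
}
```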