It's not as much about load as it is about complexity; it starts to make sense once you hit some threshold number of internal services, regardless of how much traffic you're serving. You use a service mesh to factor out network policy and observability from your services into a common layer.
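To make "common layer" concrete: the usual mechanism is a sidecar proxy that sits next to each service and handles the cross-cutting stuff, so the service itself stays plain. Here's a toy sketch of the idea in Go (not any real mesh's implementation; the ports and upstream address are made up, and a real sidecar would also do mTLS, retries, metrics, etc.):

```go
// Toy sidecar: the actual service listens on 127.0.0.1:9000 and knows
// nothing about observability. The sidecar accepts traffic on :8080 and
// adds the cross-cutting behavior (here, just request logging/latency).
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	upstream, err := url.Parse("http://127.0.0.1:9000") // the real service
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		// Observability lives in the sidecar, not in the service's code.
		log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
	})

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```

The point isn't the proxy itself; it's that every service gets this behavior uniformly without a single line of it appearing in application code.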
This is more a religious question than a technical one. I tend to build monoliths. Some of our clients build microservices; some of them decompose into just a small number of services, and about half of them have monolithic API servers.
But if you're going to do the fine-grained microservice thing, the service mesh concept makes sense. You might choose not to use it, the same way I choose not to use gRPC, but like, it's clear why people like it.
* 'Codebase' should be defined as 'the platform', where one team will most likely never look at the code of another team's microservices.
* These communication problems and their overhead start the moment you go from 2 to 3 or more teams.
* The term 'team' in this context should be interpreted very broadly: one dev working alone on a microservice should be considered "a team".
Also, as the article mentions: you don't want to implement TLS, circuit breakers, retries, and so on in every single microservice. Keep the services themselves as simple as possible; adding that kind of plumbing to each one creates bloat very quickly.
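To illustrate the bloat: below is roughly the retry helper every service ends up carrying when there's no mesh (a sketch in Go; the attempt count, status check, and backoff numbers are arbitrary choices, not a recommendation). Multiply this by circuit breaking, TLS config, and metrics, then by the number of services and languages, and moving it into a shared layer starts to look attractive.

```go
package httpclient

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// getWithRetry is the kind of helper that gets copy-pasted into every
// service: bounded retries with jittered exponential backoff.
func getWithRetry(client *http.Client, url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success, or a 4xx that retrying won't fix
		}
		if err == nil {
			resp.Body.Close() // don't leak the connection on a retried 5xx
			err = fmt.Errorf("server error: %s", resp.Status)
		}
		lastErr = err
		// Back off before the next attempt: 100ms, 200ms, 400ms... plus jitter.
		time.Sleep(time.Duration(1<<i)*100*time.Millisecond +
			time.Duration(rand.Intn(100))*time.Millisecond)
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}
```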