It's not so much about load as about complexity; a service mesh starts to make sense when you hit some threshold number of internal services, regardless of how much traffic you're doing. You use the mesh to factor network policy and observability out of your services into a common layer.
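To make "factor out" concrete, here's a minimal Go sketch (hypothetical: the getWithRetry name, the 2-second per-try timeout, and the backoff numbers are illustrative assumptions, not from the thread) of the per-call policy each service ends up duplicating when there's no common layer:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // getWithRetry sketches the timeout/retry/backoff policy that every
    // service re-implements when there is no shared networking layer.
    func getWithRetry(url string, attempts int) (*http.Response, error) {
        client := &http.Client{Timeout: 2 * time.Second} // per-try timeout (illustrative)
        var lastErr error
        for i := 0; i < attempts; i++ {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode < 500 {
                return resp, nil // success, or a non-retriable 4xx
            }
            if resp != nil {
                resp.Body.Close()
            }
            lastErr = err // nil when the failure was a 5xx status
            time.Sleep(time.Duration(1<<i) * 100 * time.Millisecond) // exponential backoff
        }
        return nil, fmt.Errorf("all %d attempts failed (last error: %v)", attempts, lastErr)
    }

    func main() {
        // example.com is a placeholder endpoint
        if _, err := getWithRetry("https://example.com/", 3); err != nil {
            fmt.Println(err)
        }
    }

With a mesh, the application keeps a plain client.Get and the equivalent retry/timeout policy lives in the sidecar proxy's configuration instead.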


What's the threshold above which something needs to be a separate service at all, rather than a module in an existing codebase?


This is more a religious question than a technical one. I tend to build monoliths. Some of our clients build microservices; some of them decompose into just a small number of services, and about half of them have monolithic API servers.

But if you're going to do the fine-grained microservice thing, the service mesh concept makes sense. You might choose not to use it, the same way I choose not to use gRPC, but like, it's clear why people like it.


The point at which you have multiple teams working on the same codebase, and their velocity is suffering from communication overhead and missteps.


A few remarks:

* 'Codebase' should be read as 'the platform': one team will most likely never look at the code of another team's microservices.
* These communication problems and the overhead start the moment you go from 2 to 3 or more teams.
* The term 'team' should be interpreted very broadly in this context: one dev working alone on a microservice counts as "a team".

Also, as the article mentions: you don't want to implement TLS, circuit breakers, retries, ... in every single microservice. Keep services as simple as possible; adding stuff like that creates bloat very quickly.
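As a concrete illustration of that bloat, here's a minimal circuit-breaker sketch in Go (hypothetical: the Breaker type and the threshold/cooldown values are illustrative assumptions): roughly this much code, re-implemented per service, is what a mesh lets you push down into the platform layer.

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    // Breaker is a deliberately tiny circuit breaker: after `threshold`
    // consecutive failures it fails fast for `cooldown` before retrying.
    type Breaker struct {
        mu        sync.Mutex
        failures  int
        threshold int
        cooldown  time.Duration
        openUntil time.Time
    }

    var ErrOpen = errors.New("circuit open: failing fast")

    func (b *Breaker) Call(fn func() error) error {
        b.mu.Lock()
        if time.Now().Before(b.openUntil) {
            b.mu.Unlock()
            return ErrOpen // don't hammer a downstream that's already failing
        }
        b.mu.Unlock()

        err := fn()

        b.mu.Lock()
        defer b.mu.Unlock()
        if err != nil {
            b.failures++
            if b.failures >= b.threshold {
                b.openUntil = time.Now().Add(b.cooldown)
                b.failures = 0
            }
            return err
        }
        b.failures = 0 // any success closes the circuit again
        return nil
    }

    func main() {
        b := &Breaker{threshold: 3, cooldown: 5 * time.Second}
        for i := 0; i < 5; i++ {
            err := b.Call(func() error { return errors.New("downstream timeout") })
            fmt.Println(err) // two calls fail fast once the circuit opens
        }
    }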



