Hacker News

How would you split the total overhead between monolith and monorepo?

(sorry this ran away with me)

A dumb example is that if I start with my single codebase running on a single server, I am likely to have a single git repo (foo).

Then I have a genius idea and put all the email handling code into foo.mail and soon I have foo.web and foo.payments.

All is fine as long as I am just checking out HEAD each time. The code running in the runtime is still one big set of code.

If I get creative and put in load balancers it's still a monolith.

But if I split out the web servers from the email servers, then I kind of start to see microservices appear.

I am trying not to be pedantic, but I am truly interested in experienced views on where the pain really starts to appear.

At this point (server to server comms), I should look at mTLS, and centralised logging and all the good stuff to manage microservices.

But how much pain was there before?

What if I were a large company and so hired a dev or two per repo (you know, to get that nine-women-one-month effect)? Coordinating multiple devs across different repos, even with one monolithic runtime, is painful (experience tells me).

So I am interested in where the break points are, and whether there are easier paths up the mountain?



Splitting out the webserver (assuming it is the main entrypoint for users) seems more like an infrastructure choice than an application architecture one, and having an independent email-sending service looks more like replacing a third-party offering (like turboSMTP) with an in-house service.

I do not think that this is what people mean by microservices.


I get that. I was trying to describe a small path from monolith to microservice without inventing yet another student-and-teacher database. But even the step you mention matters: going to a third-party service is very similar to reaching out to your own internal service. Make it a "customer" service.

Is the pain in microservices (or APIs and mTLS), or is the pain in managing a team that now has to do customer stuff only? Or email stuff only?


> Make it a "customer" service.

Common path to pain right there — services based on nouns and entities rather than verbs and business processes.

Having a single "customer" service means anything involving customer data has to talk to that single service. If that service has a problem, every other service stops working. You basically have a tightly coupled distributed monolith, with network boundaries instead of function calls between modules.

Think instead of splitting that data into separate services built around business processes like billing, shipping, outbound marketing, whatever. Each service owns the slice of customer data required for its process and can run fully autonomously.
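To make that concrete, here's a minimal sketch (all names invented, not from any real system) of process-oriented services each owning only the customer-data slice they need, so no central "customer" service sits on the critical path:

```python
# Hypothetical sketch: each process-oriented service owns its own slice
# of customer data, so none depends on a central "customer" service.

from dataclasses import dataclass

@dataclass
class BillingCustomer:          # owned exclusively by the billing service
    customer_id: str
    payment_token: str
    outstanding_cents: int

@dataclass
class ShippingCustomer:         # owned exclusively by the shipping service
    customer_id: str
    address: str

class BillingService:
    def __init__(self):
        self._customers: dict[str, BillingCustomer] = {}

    def register(self, c: BillingCustomer):
        self._customers[c.customer_id] = c

    def charge(self, customer_id: str, cents: int) -> bool:
        # Works entirely from locally owned data: no call out to a
        # shared customer service, so billing keeps running even when
        # other services are down.
        c = self._customers.get(customer_id)
        if c is None:
            return False
        c.outstanding_cents += cents
        return True

billing = BillingService()
billing.register(BillingCustomer("c1", "tok_123", 0))
print(billing.charge("c1", 999))  # True
```

The trade-off is duplication: the same customer exists in several services' data stores, kept in sync by events rather than by a single authoritative table.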


At Netflix we had both noun and verb services. We had the "play movie" service, which was very much a verb, as well as the listing service, for example. But we also had the subscriber service, which was the source of truth for any account data, and the catalog service, which held all the info about the movies.

Yes, subscriber was a Tier 0 service and required a strong reliability and scaling story. They had more rigorous deployment setup and were usually over-scaled for traffic. But they also vended a library that services used to connect to them. That library had intelligent caching and fallbacks, so that even if the service was down, in a lot of cases the system could keep running. And the catalog was the same.
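A rough sketch of that vended-library pattern, with caching and a stale-cache fallback when the backing service is down (`SubscriberClient` and `ServiceDown` are invented names for illustration, not the actual Netflix library):

```python
# Hypothetical client library with caching and fallback: successful
# lookups are cached with a TTL, and if the backing service fails,
# the last cached value is served instead of propagating the outage.

import time

class ServiceDown(Exception):
    pass

class SubscriberClient:
    def __init__(self, fetch, ttl_seconds=60):
        self._fetch = fetch      # function that calls the real service
        self._cache = {}         # subscriber_id -> (value, expiry)
        self._ttl = ttl_seconds

    def get(self, subscriber_id):
        value, expiry = self._cache.get(subscriber_id, (None, 0.0))
        if time.monotonic() < expiry:
            return value         # fresh cache hit, no network call
        try:
            value = self._fetch(subscriber_id)
        except ServiceDown:
            if subscriber_id in self._cache:
                # Service is down: fall back to the stale cached value
                # so callers can keep working in degraded mode.
                return self._cache[subscriber_id][0]
            raise                # nothing cached, nothing to fall back to
        self._cache[subscriber_id] = (value, time.monotonic() + self._ttl)
        return value
```

The point is that the fallback logic ships inside the client library, so every consuming team gets the resilience behaviour without writing it themselves.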

Microservices is very intertwined with how your engineering teams are built. We had a team that worked on subscriber data, one that worked on catalog data, one that worked on making movies play, one that built listings. So we had services to match.


Thank you. I sometimes feel microservices are an evolution of Conway's law. How a company sees its internal organisation is how it will want to arrange its microservices, and vice versa.

I remember trying to build out a "barium bullet" through very complex intertwined systems - following known customer data and looking for their footprints in all linked systems. It gave some insights on how to redesign systems - like someone else said here, removing complexity is ridiculously hard.


That seems like a great approach for Netflix, and investing in a resilient, redundant Tier 0 service like that was likely worth it there.

Most orgs don't operate at that scale, though, and resources would likely be better spent elsewhere.

> Microservices is very intertwined with how your engineering teams are built.

I wish more people understood this. Microservices are much more about scaling your organization than about scaling technology. At a certain size, teams can't collaborate effectively enough, and splitting out a service for each team makes a lot of sense.


…until you need to market to people who didn't get a shipping last month, delay shipping until at least one recharge attempt, or ship before charging for customers you're reasonably sure will pay anyway.

Business changes, and putting rigid firewalls in place without seeing an established pattern of change, and without an established need for them, is a bad idea.

One global database buys you a lot (easy data access, easy transactional consistency, simple locks, even transactional notifications), and you can take it very far performance-wise. Throwing that away just in case you might need more performance later seems unwise.
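As a small illustration of what that transactional consistency buys you, here's a sketch using SQLite with an invented billing/shipping schema: one atomic transaction updates both tables, with no distributed saga or compensating actions needed.

```python
# One global database: a single transaction spans billing and shipping,
# so either both changes commit or neither does. Schema is invented
# purely for illustration.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE billing  (customer_id TEXT PRIMARY KEY, balance_cents INTEGER);
    CREATE TABLE shipping (customer_id TEXT PRIMARY KEY, status TEXT);
    INSERT INTO billing  VALUES ('c1', 0);
    INSERT INTO shipping VALUES ('c1', 'pending');
""")

def charge_and_ship(customer_id, cents):
    # "with db:" commits on success and rolls back on exception,
    # giving atomicity across both tables for free.
    with db:
        db.execute(
            "UPDATE billing SET balance_cents = balance_cents + ? WHERE customer_id = ?",
            (cents, customer_id))
        db.execute(
            "UPDATE shipping SET status = 'shipped' WHERE customer_id = ?",
            (customer_id,))

charge_and_ship("c1", 999)
print(db.execute(
    "SELECT balance_cents FROM billing WHERE customer_id = 'c1'").fetchone()[0])  # 999
```

Once that data is split across two services with their own stores, the same guarantee requires idempotent retries, compensating transactions, or an outbox pattern.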


> …until you need to market to people who didn't get a shipping last month, delay shipping until at least one recharge attempt, or ship before charging for customers you're reasonably sure will pay anyway.

These are all very simple things to handle when services encode processes instead of entities.

> Business changes

Yes, and then you throw away the code for that outdated process and encode the new one.

> One global database buys you a lot

Absolutely, this is the correct choice for probably 95+% of companies. I will always argue in favour of a monolith and single database.

If someone is dead set on microservices, though, they'll avoid a world of pain by actually designing them properly.



