
Software would be 50% better if every developer understood Tesler's Law:

"Complexity can neither be created nor destroyed, only moved somewhere else."

The drive to simplify is virtuous, but often people are blind to the complexity that it adds.

Okay, so your microservices are each very simple, but that made the interactions and resource provisioning very complex. What was the net gain?

The correct solution depends on the circumstances. There are excellent uses of microservices. There are excellent uses of monoliths. There are excellent uses of monorepos. There are excellent uses of ... (wait never mind monorepos are just better).

Understand what is ESSENTIAL complexity and what is INCIDENTAL complexity. Simplify your system to remove as much incidental complexity as possible until you are left with only the essential, and be satisfied with that.



Given your last sentence, incidental complexity can be created and destroyed (and is more difficult to destroy than to create).

The quote would probably be more accurate as:

> "ESSENTIAL Complexity can neither be created nor destroyed, only moved somewhere else."


Essential complexity can also be created and destroyed, though sometimes it happens earlier in the design process. Picking the problem you choose to solve is how you control essential complexity.


Essential complexity is inherent to the problem you have. The solution is layered between product design, technical design, and implementation. What is essential complexity for one layer can be accidental for the layer above.


That just makes it a tautology. It basically says “essential complexity exists”.


It's often a matter of framing. When you abstract, refactor, or move complexity, it should make the rest of the system/application easier to understand, or make those adding features to the application(s) more productive as a whole. If it doesn't do one of those things, you probably shouldn't do it.

It's one thing to shift complexity to a different group/team, such as one managing microservice infrastructure and orchestration, that can specialize in that aspect of many applications/systems. It's very different to create those abstractions and separations when the same people will be doing the work on both sides, as this adds cognitive overhead to the existing team(s). Especially if you aren't in a scenario where your application's performance is in imminent danger of suffering as a whole.


It's a frame of mind.

Often a developer will see something big or complex and see it as a problem.

But they should consider whether this matches the size/scope of the problem being solved


> But they should consider whether this matches the size/scope of the problem being solved

In professional software development projects, especially legacy projects, often times the complexity is not justified by the problem. There is always technical debt piling up, and eventually it starts getting in the way.


> often times the complexity is not justified by the problem.

Often times means not always -- what would you say some projects are doing right so that their complexity is justified by the problem?


Maybe, but it is still useful to know that essential complexity exists and to identify it in your project.


Yes :)

That is the more precise adaptation of Tesler's Law.

Obviously you can always also add superfluous complexity :)


The talk of essential and incidental complexity is, in practice, far less useful than it seems. It's very easy to agree on getting rid of incidental complexity, in the same way that it's easy for people to agree on getting rid of unnecessary, wasteful government programs, until the second you define which ones you mean.

Every time I've had that argument at work, ultimately there's no agreement on what is essential. In one of the worst cases, the big fan of the concept was a CEO who considered themselves very technical and decided that everything they didn't understand was obviously valueless and incidental, while everything they personally cared about was really essential complexity, and therefore impossible to simplify... even though the subsystem in question never had any practical applications until the company failed.

So ultimately, either the focus on avoiding incidental complexity is basically a platitude, or just a nice way to try to bully people into your favorite architecture. A loss either way.


The strict definition would be that essential complexity is only the complexity that would be required in an "ideal system" (e.g. no latency, no memory concerns, etc.). By that definition you cannot just abandon all incidental complexity (as our systems are nowhere near ideal). Instead, this way of thinking is helpful for keeping the essential complexity's implementation isolated and free from the incidental complexity.


The disagreement is on what (sub)systems are even necessary/important. Heck, we usually have trouble agreeing on what goals we are achieving. Once we settle on what truly matters, separating the essential is not that hard.


> Complexity can neither be created nor destroyed, only moved somewhere else.

I have to say that this short quote is not the whole story; for example it's ridiculously common for artificial complexity to be introduced into a system, like using microservices on a system that gets 1k users a day.

In which case, it is sometimes possible to remove complexity, because you are removing the artificial complexity that was added.


I think that quote makes sense if you assume that the complexity is required. Over-engineering is a whole different topic.

The problem is that the shops that move from monoliths to distributed systems are under the impression that all of their problems are now magically over "because microservices".


> (...) like using microservices on a system that gets 1k users a day.

This sort of specious reasoning just shows how pervasive the fundamental misunderstanding of the whole point of microservices is. Microservices solve organizational problems, and their advantages in scaling and reliability only show up as either nice-to-haves or as distant seconds.

Microservices can and do make sense even if you have 10 users making a handful of requests, if those services are owned and maintained by distinct groups.


> Microservices can and do make sense even if you have 10 users making a handful of requests, if those services are owned and maintained by distinct groups.

Maybe, but after the next CEO comes in, those groups would be reorganised anyway :-/

Few companies maintain their org chart for any great length of time. My last place had the microservices maintained by distinct groups when I joined. When I left, a third of the people were gone and half the groups were merged into the other half.

This is not an uncommon thing. Going microservices because you don't want to step on other people's toes is a good reason, but it's an even bet that the boundaries will shift anyway.


> Maybe, but after the next CEO comes in, those groups would be reorganised anyway :-/

That's perfectly fine, because microservices excel in that scenario: just hand over the keys to the repo and the pipelines, and you're done.


> Okay, so your microservices are each very simple, but that made the interactions and resource provisioning very complex. What was the net gain?

The net gain was composability of microservices, distribution of computing resources, and the ability to wall off implementation details. Just because those requirements were routinely ignored in the era of monoliths doesn't mean the complexity wasn't essential or didn't exist.


Before "microservices" there are services, which are also composable. And in the realm of monoliths there are also modules. Which are the key to composability.

What microservices give you is a hard boundary that you cannot cross (though you can weaken it, you cannot eliminate it) between modules. This means the internal state of a module now has to be explicitly and more deliberately exposed rather than merely bypassed by lack of access control, by someone swapping private out for public, or by capitalizing a name in Go. If there's any real benefit of microservices, this is it. The hard(er) boundary between modules. But it's not novel; we've had that concept since the days of COBOL. And hardware engineers have had the concept even longer.

The challenge in monoliths is that the boundary is so easily breached, and it will be over time because of expedient choices rather than deliberate, thoughtful choices.
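To make the "expedient breach" concrete, here's a minimal Java sketch (all class and field names are invented for illustration). Nothing in a single process physically prevents the shortcut taken in the last class; a process boundary between services would.

    import java.util.HashMap;
    import java.util.Map;

    // The intended boundary: other modules should only call this interface.
    interface OrderModule {
        String statusOf(int orderId);
    }

    class OrderModuleImpl implements OrderModule {
        // Internal state. It was private until another team needed "just one
        // field" and flipped the modifier instead of extending the interface.
        public Map<Integer, String> statusByOrder = new HashMap<>();

        @Override
        public String statusOf(int orderId) {
            return statusByOrder.getOrDefault(orderId, "UNKNOWN");
        }
    }

    class BillingCode {
        // The breach: reaching straight into another module's internals,
        // because nothing in-process stops it. A separate service (or an
        // enforced module system) would have made this much harder.
        String shortcut(OrderModuleImpl orders, int orderId) {
            return orders.statusByOrder.get(orderId);
        }
    }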


"The challenge in monoliths is that the boundary is so easily breached, and it will be over time because of expedient choices rather than deliberate, thoughtful choices."

I just doubt that people who don't have the discipline to write decent modular code will do any better with microservices. You will end up with a super complex, hard to change and hard to maintain system.


Over focusing on source leads to stand conclusions.

The true remedy in both cases is refactoring. So if the team doesn't have time for refactoring in a monolith, then a switch to microservices would need to free up enough time for the team to start doing it.

Can that even work at the level of a single team?


Exactly. 100% right.


There are tools for enforcing boundaries.

One name for this is "Modulith", where you use modules that have a clear, enforced boundary. You get the same composability as microservices without the complexity.

Here's how Spring solves it: https://www.baeldung.com/spring-modulith

It's basically a library that ensures strict boundaries. Communication has to go through an interface (similar to a service API), and you are not allowed to leak internal logic, such as database entities, to the outer layer.

If you later decide to convert the module into a separate service, you simply move the module to a new service and write a small API layer that uses the same interface. No other code changes are necessary.

This enables you to start with a single service (modulith) and split into microservices later, without any major refactoring, if you see the need for it.
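For a rough picture of what that layout looks like, here is a sketch (package and class names are my own invention; the verification test follows the pattern shown in the Spring Modulith docs linked above, so treat the exact API as an assumption rather than gospel):

    // src/main/java/com/example/orders/OrderApi.java
    // The module's published interface lives in the module's base package.
    package com.example.orders;

    public interface OrderApi {
        String statusOf(String orderId);
    }

    // src/main/java/com/example/orders/internal/...
    // Types in nested packages (entities, repositories, the OrderApi
    // implementation) are module-internal; other modules that import them
    // make the verification test below fail.

    // src/test/java/com/example/ModularityTests.java
    package com.example;

    import org.junit.jupiter.api.Test;
    import org.springframework.modulith.core.ApplicationModules;

    class ModularityTests {
        @Test
        void verifyModuleBoundaries() {
            // Fails the build if any module reaches into another module's
            // internals, keeping the boundaries enforced rather than advisory.
            ApplicationModules.of(Application.class).verify();
        }
    }

(Application here stands in for the Spring Boot main class.)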


> The challenge in monoliths is that the boundary is so easily breached

The biggest challenge with monoliths is the limits of a single process and machine.


Typically the application server is stateless and any persistent state is kept in a database, so you can just spawn another instance on another machine.


Sure, but there are still limits, such as binary size and working memory, etc.


Could you give a concrete example from your experience? I ask because in my experience, services have had a relatively small (say less than a few hundred GB) fixed working memory usage, and the rest scales with utilisation meaning it would help to spawn additional processes.

In other words, it sounds like you're speaking of a case where all services together consume terabytes of memory irrespective of utilisation, but if you can split it up into multiple heterogeneous services each will only use at most hundreds of GB. Is that correct, and if so, what sort of system would that be? I have trouble visualising it.


Let's imagine Facebook: we can partition the monolith by user, but you would need the entire code base (50+ million lines?) running in each process just in case a user wants to access that functionality. I'm not saying one can't build a billion-dollar business using a monolith, but at some point the limit of what a single process can host might become a problem.


Things like Facebook and Google are at a level of scale where they need to do things entirely differently from everyone else though. E.g. for most companies, you'll get better database performance with monotonic keys so that work is usually hitting the same pages. Once you reach a certain size (which very few companies/products do), you want the opposite, so that none of your nodes get too hot/become a bottleneck. Unless you're at one of the largest companies, many of the things they do are the opposite of what you should be doing.


I agree that most will be fine with a monolith, I never said anything to the contrary. But let's not pretend that the limits don't exist and don't matter. They matter to my company (and we're not Facebook or Google, but we're far older and probably have more code).


We've been here before with object-oriented programming, which was supposed to introduce a world of reusable, composable objects, even connected by CORBA if you wanted to run them remotely.


This is a very nice academic theory, but in real life you get half-assed, half-implemented hairballs that suck out your will to live if you have to untangle that.


What is the sequence of events in real life that takes us from nice theory to hairballs, and that academics fail to foresee?


Most companies severely underestimate the complexities of a distributed system and the work that goes into a truly resilient, scalable setup of this kind.

An infrastructure of this sort is meant to solve very hard problems, not to make regular problems much harder.


There's also the distribution of work... if one set of teams manages the deployments and communication issues between the microservices, while the microservice developers concentrate on the domain, it can be a better distribution of work. Whereas if the same teams/devs are going to do both sides of this, it may make more sense to have a more monolithic codebase/deployment.


Not true. You can make any system arbitrarily complex. And 95% of software developers IMHO are hell-bent on proving that true every single day. Micro-services is a GREAT example of this.


> You can make any system arbitrarily complex.

But isn't that introducing incidental complexity? Not sure if you actually disagree.


Both essential complexity and accidental complexity can be created or destroyed based on how functional scope or technical scope is defined and understood. In a live and evolving system, a lot of complexity also comes from not having a vision for how the product domain or market or ecosystem might evolve, and from not having a coherent shared understanding of it amongst product managers and platform builders. When product direction or technical choices flip-flop too widely, it creates lots of complexity lag/debt which can be very murky to clearly identify and attack.


Do you have a suggestion on how to avoid the flip-flopping problem? Or is there a way to turn with the wind without increasing complexity? Genuinely curious.


Well, flip-flopping is usually due to heavy investment too early, on a poorly formed thesis of a potential product-market fit or business model (i.e., vision). The early days have to be about optimizing aggressively for speed of learning/validation/formation of that thesis.

There has to be an overarching thesis that is well-formed (formed based on experience/tests from adjacent/related markets, key assumptions validated at reduced scale, strong backing by investors/founders etc).

During this vision formation/validation stage, keep things very lightweight and don't over-invest in prolonged tech platforms.

For example, if your product/service has an on-the-field operational part to it, then run that part with manual operations with pen/paper – pen/paper is extremely flexible and universally usable and survives all kinds of surprises in the field – don't wait to build a software solution to test your thesis in the field. Manual ops can scale quite well especially during learning phase (scale is not your goal, learning/validating is). Choose the use-case for your experiments carefully – optimizing for fast learning and applicability of that learning for next adjacent use-case you intend to expand to.

Once you get going, still build the tech systems without dogmas or baking strong opinions in too much. Keep your engineers generalists and first-principles problem solvers who are interested in solving both tech and functional-domain problems, and encourage them to be humble and curious – because both the functional and tech worlds are constantly changing. Don't hire a huge product management org – every engineer/manager should be thinking about customer/product first. Over time, parts of your product are more mature and parts of your product are very nascent and still volatile. If your entire team is still thinking customer/product first and building tech to solve problems from first principles, then they should have found the right coupling/cohesion balance between different parts of the system to avoid shared fate, high blast radius, or high contagion of that volatility/instability affecting more mature parts of the system.


This is an incredibly insightful response. Thank you!


> Okay, so your microservices are each very simple, but that made the interactions and resource provisioning very complex. What was the net gain?

The main misconception about microservices is that people miss why they exist and what problems they solve. If you don't understand the problem, you don't understand the solution. The main problems microservices solve are not technical, but organizational and operational. Sure, there are plenty of advantages in scaling and reliability, but where microservices are worth their weight in gold is the way they impose hard boundaries on all responsibilities, from managerial down to operational, and force all dependencies to be loosely coupled and go through specific interfaces with concrete SLAs with clearly defined ownerships.

Projects where a single group owns the whole thing will benefit from running everything in one single service. Once you feel the need to assign ownership of specific responsibilities or features or data to dedicated teams, you are quickly better off if you specify the interface and each team owns everything behind each interface.


> Once you feel the need to assign ownership of specific responsibilities or features or data to dedicated teams, you are quickly better off if you specify the interface and each team owns everything behind each interface.

If team A needs a new endpoint from team B, what would a typical dialogue look like under microservices and a modular monolith, respectively?


> If team A needs a new endpoint from team B, what would a typical dialogue look like under microservices and a modular monolith, respectively?


How teams interact is a function of the team/org, not the software architecture.

What microservices easily provide in this scenario, and what is far harder to pull off with a monolith, is that the service owners are able to roll out a dedicated service as a deliverable from that dialogue. Whether the new microservice implements a new version of the API or handles a single endpoint, the service owners can deploy the new service as an extension to their existing service instead of a modification, and thus can do whatever they wish with it without risking any impact on their service's stability.


Aren't microservices an example of complexity creation over the fundamental base case?


reminds me of that law:

"For something to get clean, something else must get dirty"

and the corollary:

"You can get something dirty without getting anything clean"


How does that apply here? (I’m not being facetious)


I was sort of thinking along the lines of...

You must remove complexity to make understandable code.

You can remove complexity without making anything understandable.


The last paragraph about essential and incidental complexity reminds me of Rich Hickey's "Simple Made Easy" talk.

Talk: https://youtube.com/watch?v=SxdOUGdseq4


I'd add that abstracting complexity should be done where it makes the rest of the system easier to understand over the abstraction. Too much abstraction can make systems harder to understand and work with instead of easier.


Assuming you are not perfect, you must have implemented overly abstracted systems at some point. What went through your head when you did that? What thoughts legitimized that extra layer of abstraction that turned out to be superfluous?


I think the best example I can think of is implementing the Microsoft Enterprise Library Data Access Application Blocks. EntLib came out of MS consulting as a means of normalizing larger scale application development with .Net. DAB in particular entailed creating abstractions and interfaces for data objects and data access components. In practice in smaller projects, this made things far more difficult, as in general the applications only targeted a single database and weren't really doing automated unit testing at the time.

It was kind of horrible, as VS would constantly take you to the IFoo interface definition instead of the DbFoo implementation when you were trying to debug. Not fun at all. It was much earlier in my career.

Probably the most complex system I designed was an inherited permission interface down to a component level through a byzantine mess of users/groups and ownership for a CRM tool in a specific industry. If I had to do it today, it would be very, very different and much simpler. Also much earlier in my career (about 20 years ago).

These days I've really enjoyed using scripting languages, mostly JS/Node and keeping things simple (I don't care for inversify for example). In general the simpler thing is usually easier to maintain and keep up with over time. Yeah, frameworks and patterns can make boilerplate easier but it's often at the expense of making everything else much harder. Vs just starting off slightly harder from the start, but everything is simpler overall over time.

Aside, been enjoying Rust as well.


Does Tesler’s law apply to lines of code or architecture?

I have absolutely seen complex code that was created (often for perceived “best practices” like DRY) which could be removed by simplifying the code.


I think it applies to problems, not solutions. The complexity of a given problem cannot change. If you try to ignore part of the inherent complexity of a problem (also called essential complexity) in your solution, it does not disappear but someone else must solve it somewhere else, or the problem is not really solved. If you build a solution that is more complex than the problem itself (in other words, if you add incidental complexity), this does not increase the complexity of the problem either, only the complexity of the solution.

I think a good solution to any problem needs to match it in complexity. I regularly use this comparison as a benchmark for solutions.

Of course, you can also see it this way: The complexity you remove from the code by making it cleaner is added to your team communication because you now have to defend your decision. (Only half joking.)


>>Understand what is ESSENTIAL complexity and what is INCIDENTAL complexity

Succinctly put!


Microservices reveal communication.

I believe we do not have the right tools yet.


there is no silver bullet


>Complexity can neither be created nor destroyed, only moved somewhere else.

Not true. I blow up a car. Complexity is destroyed. I rebuild the car from the blown-up parts. Complexity is created.

There's no underlying generic philosophical point about microservices and monoliths. What we can say is that microservices are not necessarily less complex than monoliths, but this relationship has no bearing on the nature of complexity itself.


> I blow up a car. Complexity is destroyed.

Wouldn't the blown-up car be more complex than the non-blown-up car?


In that case complexity is created, thereby proving my point anyway.

Typically I define complexity as a low-probability macrostate, meaning low entropy. So debris generated from the explosion has a high probability of randomly occurring, while a new car has a very low probability of randomly occurring.

Following this definition you arrive at a formal definition of complexity that is intuitive.

Imagine a tornado that randomly reconfigures everything in its wake. The more complex something is, the less likely the tornado is going to reconfigure everything into that thing. So it is very likely for a tornado to reconfigure things into debris and extremely extremely unlikely for the tornado to reconfigure everything into a brand new car. A tornado reconfiguring atoms into a car seems to be impossible but it is not; it is simply a low-probability, nearly impossible configuration.

Therefore the car is more complex than exploded debris.

Think about this definition because it also aligns intuitively with the effort required and technical complexity of any object. The more effort and technical complexity it has the lower chance it has of a tornado randomly reconfiguring atoms to form that object. Thus that object is more "complex".

Whatever your definition is, the quote saying complexity cannot be created or destroyed is, from just about any perspective, usually not true. If you want to define it simply in terms of microservices or monoliths, it still doesn't make sense. Who's to say that when converting a monolith to microservices the complexity remains conserved? Likely the complexity rises or lowers, a bit or a lot. Complexity doesn't remain the same even if you use informal and inexact fuzzy definitions of complexity.


> So it is very likely for a tornado to reconfigure things into debris and extremely extremely unlikely for the tornado to reconfigure everything into a brand new car.

Is this not a linguistic sleight of hand? There are billions of trillions of states we label with "debris" but only a few thousand we would call "car". So a specific state of debris, then, is equal in complexity to a car?


The technical term for this is macrostate. It is not a linguistic sleight of hand.

It is literally within the formal definition of entropy. Debris is a high-entropy macrostate, while a car occupies a lower-entropy macrostate. There are fewer possible atomic configurations for cars than there are for debris. Each of these individual configurations of atoms is called a microstate.

A macrostate is a collection of microstates that you define. Depending on the definition you choose, that definition has an associated probability. So if you choose to define the macrostate as a car, you are choosing a collection of microstates that has a low probability of occurring.

The second law of thermodynamics says that systems will, over time, gain entropy, meaning they naturally progress to high-probability macrostates. So in other words, complexity is destroyed over time by the laws of entropy.

https://en.m.wikipedia.org/wiki/Entropy_(statistical_thermod...

The reason why this occurs is straightforward. As a system evolves and randomly jiggles over time, it trends towards high-probability configurations like "debris", simply because that configuration has a high probability of occurring. Generally, the more billions of microstates a macrostate contains, the higher that macrostate's entropy is.
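For reference, this is exactly what Boltzmann's entropy formula makes precise (Ω, sometimes written W, is the number of microstates compatible with the macrostate, and k_B is Boltzmann's constant):

    S = k_B \ln \Omega

A macrostate's probability grows with Ω, so a low-Ω macrostate like "assembled car" is astronomically less likely to arise from random jiggling than a high-Ω macrostate like "debris".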

Through logic and probability and the second law of thermodynamics we have a formal definition of complexity and we see that complexity naturally destroys itself or degrades with time.

This is the thing that confuses people about entropy. Its definition is a generality based on microstates and macrostates you choose to define yourself. It's similar to calling a function with generics in programming, where you choose to define the generic at the time of the call.

But even within this generic world there are laws (traits in Rust or interfaces in C++) that tell us how that generic should behave. Just like how entropy tells us that the macrostates we define will always trend towards losing complexity.

The heat death of the universe is the predicted end of the universe where all complexity is inevitably lost forever. You can define that macrostate as the collection of all microstates that do not contain any form of organization.


> There are fewer possible atomic configurations for cars than there are for debris.

That doesn't really line up logically. :/


You're mistaken and your intuition is off. It lines up absolutely.

Debris is almost any configuration of atoms that are considered trash or unusable.

There are many more ways you can configure atoms to form useless trash than there are to make cars. Case in point: "you" can manufacture "debris" or "trash" by throwing something into a trash can. Simple.

When's the last time you manufactured a car? Never.


Wow. You're talking atomic scale stuff, but ignoring obvious things like oxidation occurring during an explosion. :/

Seems like a case of you including and excluding things to support your "logic". Ugh.


Thanks. I was vaguely familiar with physical entropy from earlier but this answered multiple questions I've had but not dared ask before!



