OO code for domain modeling might be, to date, the single greatest source of disillusionment in my career.
There are absolutely use cases where it works very well. GUI toolkits come to mind. But for general line-of-business domain modeling, I keep noticing two big mismatches between the OO paradigm and the problem at hand. First and foremost, allowing subtyping into your business domain model is a trap. The problem is that your business rules are subject to change, you likely have limited or even no control over how they change, and the people who do get to make those decisions don't know and don't care about the Liskov Substitution Principle. In short, using one of the headline features of OOP for business domain modeling exposes you to outsize risk of being forced to start doing it wrong, regardless of your intentions or skill level. (Incidentally, this phenomenon is just a specific example of premature abstraction being the root of all evil.)
And then, second, dynamic dispatch makes it harder for newcomers to figure out the business logic by reading the code. It creates a bit of a catch-22 situation where figuring out which methods will run when - an essential part of understanding how the code behaves - almost requires already knowing how the code works. Not literally, of course, but reading unfamiliar code that uses dynamic dispatch is an advanced skill, and nobody enjoys it. Admittedly, this problem can be mitigated with documentation, but that solution is unsatisfying. Just using procedural code and banging out whatever boilerplate you need to get things working with static dispatch creates less additional work than writing and maintaining satisfying documentation for an object-oriented codebase, and it comes with the added advantage that it cannot fall out of sync with what the code actually does.
Incidentally, Donald Knuth made a similar observation in his interview in the book Coders at Work. He expressed dissatisfaction with OOP on the grounds that, for the purposes of maintainability, he found code reuse to be less valuable than modifiability and readability.
This is a strong argument against inheritance, but inheritance isn't all of OOP. It's just one well-supported advanced abstraction (and one that I would argue should rarely be used).
I would argue that just having a strong type system and bundling methods with data gets you the vast majority of the usefulness of OOP. Liskov, Open/Closed, Message Passing, and other theoretical abstractions be damned.
EDIT - Where are the good places to use inheritance?
There are only a few I can think of.
One is when you are trying to create a system that inverts dependencies by allowing a plugin system, or one that follows some sort of nuanced workflow that others might want to "hook into". But inheritance isn't the only way to do that; other approaches, like passing in functors, might be better.
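The "passing in functors" alternative can be sketched in Go as a struct that exposes its hook points as plain function fields rather than abstract methods on a base class. All names here (Pipeline, BeforeSave, Save) are invented for illustration:

```go
package main

import "fmt"

// Pipeline exposes hook points as function fields instead of
// overridable methods; callers "hook in" by assigning functions.
type Pipeline struct {
	BeforeSave func(record string) string // optional; nil means "no-op"
	AfterSave  func(record string)        // optional; nil means "no-op"
}

// Save runs the workflow, invoking whichever hooks were supplied.
func (p *Pipeline) Save(record string) string {
	if p.BeforeSave != nil {
		record = p.BeforeSave(record)
	}
	// ... persist the record here ...
	if p.AfterSave != nil {
		p.AfterSave(record)
	}
	return record
}

func main() {
	p := &Pipeline{
		BeforeSave: func(r string) string { return r + " (validated)" },
	}
	fmt.Println(p.Save("order-42")) // prints "order-42 (validated)"
}
```

Compared with a base class full of overridable template methods, the hooks are visible at the call site and each one can be supplied independently.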
Another situation I have seen recently is modeling data or messages that differ only by type and maybe a few small pieces of behavior, where all the variants are known up front.
> I would argue that just having strong type system and bundling methods with data gets you the vast majority of the usefulness of OOP.
Yes, a module system brings almost all of the advantages of OOP. The one remaining is structure abstraction (things like interfaces on Java derived languages, or type classes on Haskell derived ones).
But well, none of those are even typically associated with OOP. The OOP languages just have those features, like they have variables too.
Yep. Rust has all of these features (modules, structs with associated methods, and type classes (traits)), but nobody thinks of it as an OO language. In fact, I’ve heard that many people struggle with Rust if they’ve come from a heavily OO language like Java. You have to structure your code a little differently if you don’t have classes.
Modula apparently had many of these features too - and that predated what we now think of as object oriented programming. The good parts of OOP aren’t OOP.
> One is when you are trying to create a system that inverts dependencies by allowing a plugin system or follows some sort of nuanced workflow that others might want to "hook into".
I’m fairly certain that’s the use case of inheritance - at least in the Simula tradition. Classes as a means of lifetime management, moving parts that have well defined steps of operation (methods), and interchangeable parts (subtypes) which you can more or less slot into the larger system (polymorphism).
It’s easier to think about classes not as nouns, but as verbs over time (or rather, bounded by time): at a specific moment in the assembly line, call this particular method, at another moment, call that other method…
Object-oriented programming in the Simula tradition, I would even go as far as to say, is just best practices in structured/procedural programming taken to their logical extremes.
Wrt plugin systems: at least at the class level, are classes really a means of lifetime management in practice?
IIRC, audio plugin APIs follow the shell command pattern of memory management for loading new classes-- the user dynamically loads a library into a running instance of an application, and there it stays until the application exits.
And even if plugin systems as implemented are actually unloading classes, the user is almost always just restarting the app to make sure it took. :)
Agree 100%: static typing (for code completion) + method/data bundling is the major win in OO, and it rarely gets talked about for whatever reason.
It's unfortunate that inheritance became such a major focus of practical OO languages. Would love to see a composition-first OO language. Might have its own problems, but would at least be interesting.
Go, Rust, Zig, etc all support static typing and method/data bundling without any explicit language support for implementation inheritance (interface inheritance in general and especially when structural rather than nominal is not nearly as much of an issue and doesn't create strict tree hierarchies).
Rust has support for variance and subtyping, so perhaps it's not as pure of an example, but it's pretty heavily restricted.
Zig's support for method/data bundling being used for "objects" isn't even first class so I wouldn't call it OO (object-oriented) so much as object-orientation-capable with less fuss than if one wanted to build their own objects system in C.
Even in C++ the last time I thought I might need inheritance I made a simple class/struct with a few members that were `std::function` instances. Instead of needing inheritance this worked and I managed to keep type safety checks on all function return and parameter types. Once upon a time this would have been weird function pointers and `void*` with dangerous casts. Last month when I did it, there were just lambdas passed to typesafe constructors.
Go's first-class support for typed return tuples and interfaces is a lovely replacement for inheritance (e.g. an interface of type blah supports this signature). They function as an API contract: if a given type implements the requirements of the interface, it can be used as that interface anywhere that accepts it.
It's not unfortunate happenstance, it's by definition.
Dynamic dispatch is the defining feature of object-oriented programming. In dynamically typed languages such as Smalltalk, you can get there with duck typing. But a statically typed language needs a statically typed mechanism for dynamic dispatch, and that requires some way of saying, "Y is a particular kind of X, so all members of X are also in Y." Which is - again by definition - inheritance.
You could remove - or refuse to use - the inheritance (or, equivalently for some purposes, duck typing). But that would also prevent the use of dynamic dispatch, so what you're doing would be procedural programming, not OOP, even if you're using an object-oriented language to do it.
> Dynamic dispatch is the defining feature of object-oriented programming.
Message passing is the defining feature of object-oriented programming. Dynamic dispatch can be achieved using message passing, but message passing is more than dynamic dispatch.
Ultimately, static typing is incongruent with object-oriented programming. Messages are able to be invented at runtime, so it is impossible to apply types statically. At best you can have an Objective-C-like situation where you can statically type the non-OO parts of the language, while still allowing the messages to evade the type system.
Whether you'd call Go "composition-first" is probably asking for a big argument about what "composition-first" really means, but Go is certainly a language that syntactically privileges a particular type of composition over inheritance. It doesn't even have syntax for inheritance, and frankly even manually implementing it is rather a pain (best I've ever done requires you to pass the "object" as a separate parameter to every method call... and, yes, I said that correctly, to every method call).
I'm not ready to try to stake a position on the top of some "composition first" hill because the syntactic composition it supports is not something I use all the time. It's an occasional convenience more than a fundamental primitive in the language, the way inheritance is in inheritance-based languages. Most of the composition is just done through methods that happen to use composed-in values, but it is generally not particularly supported by syntax.
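The syntactic composition being referred to is embedding: a struct can embed another type, and the embedded type's methods are promoted onto the outer struct. It looks a bit like inheritance but is really automatic delegation. A small sketch with invented names (Engine, Car):

```go
package main

import "fmt"

type Engine struct{ Horsepower int }

func (e Engine) Start() string {
	return fmt.Sprintf("%d hp engine started", e.Horsepower)
}

// Car embeds Engine: Engine's methods are promoted onto Car.
// This is delegation, not subtyping - a Car is not an Engine,
// and a Car cannot be passed where an Engine is expected.
type Car struct {
	Engine
	Model string
}

func main() {
	c := Car{Engine: Engine{Horsepower: 120}, Model: "hatchback"}
	fmt.Println(c.Start()) // promoted from the embedded Engine
}
```

Crucially there is no dynamic dispatch here: if Engine.Start called another Engine method, it would always get Engine's version, never anything Car defines. That's the main behavioral difference from implementation inheritance.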
Inheritance is just plain a great way to model a lot of relationships, in my experience, because a lot of things are most easily thought of as "x is a kind of y". I am perennially baffled that people shit on inheritance so much, because I think it's incredibly useful. I find myself often missing inheritance when working in Rust, for example.
Implementation inheritance often leads to code that is just awful to read. If class C extends class B and class B extends class A, then to find out what `new C().foo()` actually does, you need to read through the whole C-B-A hierarchy, bottom to top. If `A.foo()` calls `this.bar()`, you have to start again, from the bottom of the hierarchy. With an inheritance hierarchy of depth n, every method call could be going any of n different places. With an interface, there's a single level of indirection. With composition, the code simply tells you what happens next.
If class A and class B both implement interface X, and B wants to borrow code from A, it should just call A's methods—ideally, static methods, but B can keep an instance of A if it wants. Explicit is better than implicit.
Also, I dislike ontological statements like "x is a kind of y." What does that mean? Typically, it's a claim about behaviour: "x offers method w and satisfies invariant v". But the actual blueprint here is an interface, (w,v)—not another object y. The waters get even muddier when we start talking about "is-a" vs "has-a" relationships. It feels like OOP is trying to unhelpfully distance us from what's actually going on with our code. Under the hood, inheritance is no more than syntactic sugar for composition. I think that OOP's focus on the ontological philosophy of inheritance is the reason why it led to so much bad AbstractObserverStrategyFactory-style code.
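The "B borrows A's code by calling it explicitly" alternative can be sketched like this, with invented names (Formatter, PlainFormatter, LoudFormatter); both types satisfy the same interface, and the reuse is visible at the call site rather than hidden in a hierarchy:

```go
package main

import (
	"fmt"
	"strings"
)

// Formatter plays the role of interface X from the comment above.
type Formatter interface {
	Format(s string) string
}

type PlainFormatter struct{}

func (PlainFormatter) Format(s string) string { return strings.TrimSpace(s) }

// LoudFormatter reuses PlainFormatter's behavior by holding an
// instance and delegating explicitly - no inheritance involved.
type LoudFormatter struct {
	base PlainFormatter
}

func (l LoudFormatter) Format(s string) string {
	return strings.ToUpper(l.base.Format(s))
}

func main() {
	fmt.Println(LoudFormatter{}.Format("  hi ")) // prints "HI"
}
```

Reading LoudFormatter.Format, you know exactly what happens next: trim, then upcase. There is no hierarchy to climb and no `this.bar()` that might resolve somewhere unexpected.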
> dynamic dispatch makes it harder for newcomers to figure out the business logic by reading the code
This is definitely a potential problem, but I note that you can also get into this mess without OO in any language that lets you put a (reference to) function in a variable. Or, god help you, operator overloading.
If I overload shift-left `<<` for a completely different concept such as "piping", that's my mistake. That's like writing a normal function or method and calling it `foo()` when it has nothing to do with the concept of fooing.
That said, unless you are writing a math library or some container, there aren't many good uses for operator overloading.
I think the main difference between the two is that, as someone reading and debugging the code, I will probably eventually check even the methods I assume I roughly understand. In contrast, I may not even think to check an overloaded operator unless I _already know_ that it's overloaded.
Maybe a good analogous method would be an overloaded `.ToString()` in C# that has side effects or returns the full text of the Magna Carta or something.
Custom operators of any kind are definitely a problem for learners. I think people just fixate on overloading because that's the only kind of operator customization available in the most popular languages.
The particular problem is that search engines tend to have terrible support for searching for arbitrary sequences of non-alphabetic characters.
ConsoleLogger is a Logger because they share a method (log). PaidUser and User can have some common things, but I don't think the commonality is only in how they behave; it's also in how you contact/use them.
But, in a way, OO modeling and design was invented to solve the mess that "banging out" procedural code created in the first place.
You have to model your business domain in software one way or another anyway. Why should it be bad to try to be more methodical about it using OO methods? We do it with relational databases all the time where tables are pretty similar to objects.
I actually have nothing against objects and methods, but that’s a very limited subset of OO. I prefer to use algebraic data types for domain modeling, and giving them methods is totally fine too. But I do prefer them to be immutable in most cases, which is also quite counterintuitive from an OO perspective.
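The algebraic-data-type style of domain modeling can be approximated in Go with a "sealed" interface: a closed set of immutable variants, with behavior gathered in one visible switch instead of spread across a hierarchy. All names (Payment, Card, Invoice, Describe) are invented for illustration:

```go
package main

import "fmt"

// Payment is effectively a sum type: only types in this package can
// satisfy it, because the marker method is unexported.
type Payment interface{ isPayment() }

// The variants are plain immutable value types.
type Card struct{ Last4 string }
type Invoice struct{ DueDays int }

func (Card) isPayment()    {}
func (Invoice) isPayment() {}

// Describe handles every variant in one place; a type switch plays
// the role of pattern matching over the ADT.
func Describe(p Payment) string {
	switch v := p.(type) {
	case Card:
		return "card ending " + v.Last4
	case Invoice:
		return fmt.Sprintf("invoice due in %d days", v.DueDays)
	}
	return "unknown payment"
}

func main() {
	fmt.Println(Describe(Card{Last4: "1234"}))
	fmt.Println(Describe(Invoice{DueDays: 30}))
}
```

Unlike a real ADT, the compiler won't force the switch to be exhaustive when a new variant is added, but the closed set of types and the value semantics capture most of the modeling benefit.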
The oscillation between the two as what's in favour is also humorous.
Choosing the right tool for the right need in the present and near future - especially the newer the codebase, and the greater the need to learn - is often the approach worth pursuing.
Software exists in the real world, and is used to solve real world problems. In building software we inevitably invent or use abstractions to represent or effect real world things. Abstractions that make it easier to do this are good; abstractions that make it harder are just getting in the way.
> In building software we inevitably invent or use abstractions to represent or effect real world things.
Ehhh. Most abstractions I’ve written aren’t abstractions over the real world. They’re abstractions over low level machinery of a computer or program. (Eg there’s no HTTP request and response, DB connection or network socket outside of computer software).
The real world isn’t object oriented. It’s just a bunch of atoms moving around. You can describe physical reality as a bunch of objects that interact via ownership and method calls, but there’s nothing natural about that. OO is no better of a way to describe the real world than actors & message passing, or state & events.
Software that models “the real world” usually describes users, money, addresses and things like that. But none of those things are made out of atoms. There is no money molecule. Money is not on the periodic table. They’re all just another form of abstraction - one that happens to exist outside the software world, that we can capture in part in a database table.
Interesting abstractions are all invented ideas. Some are useful. Some are elegant to express and use in OO code, and some are not.
1 - (Application) Systems exist in the real world, not software. Software exists in machines.
2 - Computing is used to solve real world problems.
3 - "In building software we inevitably invent or use abstractions to represent or effect real world things." Here is the problem where we part company.
4 - Abstractions that inform computing systems are indeed useful.
[edit+ps]
self disclosure: I've reached 'architectural orbit' numerous times in my career. 30 years later, I am sharing a subtle point. Effective software models cut out attributes of real-world elements of the problem domain. All attempts to "model the world" end in tears.
For me, software and tech that is made for someone exists to work for people: the end users.
End users and customers don't exist to serve at the leisure and pleasure of software and its creators.
Making people work harder than they need to in order to operate software is selfish.
DevOps and DevEx are important, but however great those are, if no one uses the product, the customer and their experience are often lost and never gained.
Learning to model something flexibly enough to absorb and quickly implement the relevant early customer feedback is critical to boring things like retention.
Helping customers earn enough to eat every month, helps the tool makers earn enough to eat every month.