Worse Is Better (2001) (dreamsongs.com)
88 points by karimf on July 25, 2021 | 43 comments


>The concept known as "worse is better" holds that in software making (and perhaps in other arenas as well) it is better to start with a minimal creation and grow it as needed.

It's a horrible moniker for the concept and most people who use it don't really mean piecemeal growth at all. What they do talk about is usually design where some shortcut allows you to quickly accomplish a short-term goal while completely ignoring long-term consequences. This is the mentality that I see everywhere in the software industry today.

Go is mentioned in the comments here as an example of "worse is better". It fits nicely. The main "feature" of the language is that it doesn't have any constructs that would require an average college grad or a disgruntled Java shitcoder to apply any effort to understand. Thus, people grab the language and mindlessly run with it. By the time the codebase grows to the point where you need more advanced features, it's too late to switch.

My prediction is that in a few years, when an average Go codebase has switched ownership at least once, it will become widely hated, and the language developers will start cramming in features that they originally said weren't needed at all. You've already seen a preview of this when they replaced the cutesy (and obviously broken) dependency management with modules.


> It's a horrible moniker for the concept and most people who use it don't really mean piecemeal growth at all. What they do talk about is usually design where some shortcut allows you to quickly accomplish a short-term goal while completely ignoring long-term consequences. This is the mentality that I see everywhere in the software industry today.

I think the moniker is perfect because that is exactly what piecemeal growth is in nature. That's why code seems to have a biological feel. The rough bit, for us humans, is that code tends toward piecemeal growth because of the economics of change. It is easier to add to something that exists than to make something new, so entities (functions, classes, services) tend to grow in size. We have to step in, as gardeners in a way, to guide structure.

I don't know that it was intended by the original people behind micro-services (Fred George comes to mind), but micro-services are a good example of aligning with natural tendency rather than attempting to impose a design and continually falling short. Part of the intent was that you just rewrite a micro-service when it becomes unmaintainable. That's a direct structural correspondence to apoptosis in biology. Cells live for a while, then they are replaced.


I don’t think much will change until the incentives change. Right now there is basically no consequence for doing the wrong thing. Software is unique in that defects are generally tolerated. Minimizing time to market can be everything where often winners take all. Thus an incomplete solution today is worth more than a flawless solution in the future.

I’m not saying it’s good, but it is a predictable consequence of the market incentives.


What many Go adopters don't grok is that many of us have seen this "worse is better" movie at the theater so many times that the ending is relatively easy to guess, but hey, we are just old guys and gals who don't get progress.


I think it's possible to take the cult of simplicity too far. As is the case with over engineering.

There is always the risk of over-simplifying and dumbing down the inherent complexity of problems. Instead of representing the problem space with a good data model, programmers who love to brag about their love of "simplicity" sometimes end up with more "simple" layers than are necessary.

It's not a binary because there are a lot of factors that go into simplicity and some matter more than others in different contexts. One person's simplicity is another's tedious verbosity. And another's use of specialized programming techniques is someone else's scary learning curve.

Much easier to just label My Preferred Solution as The Most Simple and Parsimonious because clearly I am the smartest and most scientific thinker in the group. Other approaches are Too Complex, therefore I don't need to understand them. It's just projection of insecurity about knowledge gaps by hiding under the appearance of being an iconoclast.

That said, there are definitely some castle-in-the-sky and nonsense-on-stilts implementations out there :) Some would say ORMs, others would point to the actor model, and others would even say anything higher level than pointers and for-loops. For what purpose though?


The entire concept of "worse is better" can more accurately be described as "don't let the perfect be the enemy of the good." Simple, functional solutions are good ideas, and attempts to replace them with perfect ones are likely to fail. In software, that just means writing code using practical languages that are proven to work. It does not need to be written in the ideal language, especially if using the ideal language requires very specialized skills. Furthermore, you do not need software that solves every single corner case. Once you have solved the main problems and most of the minor ones, there's no reason to waste years solving the last few annoying issues.


> The concept known as "worse is better" holds that in software making (and perhaps in other arenas as well) it is better to start with a minimal creation and grow it as needed.

Or in other words, try to avoid Feature Creep[0] or Overengineering[1]

[0] https://en.wikipedia.org/wiki/Feature_creep

[1] https://en.wikipedia.org/wiki/Overengineering


I think it's more than avoiding feature creep.

I think the most interesting parts are

« It is more important for the implementation to be simple than the interface. » and « completeness must be sacrificed whenever implementation simplicity is jeopardized ».

I think of this every time I'm annoyed that I can't add an empty directory to a git repository.


Many newer programming languages have focused on reducing hidden control flow and ugly text macros in favor of more transparent metaprogramming, which may help to illustrate one of the bigger issues with Lisp. Lisp itself is simple and elegant, but that doesn't mean it's easy to write simple and elegant Lisp code. Imperative programming is maybe less elegant and beautiful than functional programming, but imperative programming is easy to understand by humans. Sure, C code may not literally execute line by line, but this is how C programmers mentally model execution, to the point where sometimes people are befuddled to learn it’s not the case. It's such an effective approach that CPUs also provide an as-if illusion of in-order execution. And sure, Lisp macros are powerful, but it feels like nobody ever stopped to consider if maybe there's an upper limit to how powerful you really want a programming language to be. Well, I have. I want languages more like Go and Zig. People will always hate at least Go, but it didn't hit TIOBE top 10 out of sheer baseless hype and corporate sponsorship. Simple, easy to understand, perhaps even "stupid" code works.

I could be wrong; perhaps the Lisp renaissance is just around the bend. But realistically, I'm not running into any problems where I wish I were dealing with more clever and elegant code. In fact, I'm running into problems where I wish I were dealing with dumber and more obvious code. That these ideals often conflict is probably a great source of debate and strife.


This reminds me of the "Principle of Least Power" and its variations [1, 2] in relation to the design of the various languages of the WWW, although that principle is supposed to describe languages that are not programming languages. E.g. [1]:

> Computer Science in the 1960s to 80s spent a lot of effort making languages which were as powerful as possible. Nowadays we have to appreciate the reasons for picking not the most powerful solution but the least powerful. The reason for this is that the less powerful the language, the more you can do with the data stored in that language. If you write it in a simple declarative form, anyone can write a program to analyze it in many ways. The Semantic Web is an attempt, largely, to map large quantities of existing data onto a common language so that the data can be analyzed in ways never dreamed of by its creators. If, for example, a web page with weather data has RDF describing that data, a user can retrieve it as a table, perhaps average it, plot it, deduce things from it in combination with other information. At the other end of the scale is the weather information portrayed by the cunning Java applet. While this might allow a very cool user interface, it cannot be analyzed at all. The search engine finding the page will have no idea of what the data is or what it is about. Thus the only way to find out what a Java applet means is to set it running in front of a person.

[1] https://www.w3.org/DesignIssues/Principles.html#PLP

[2] https://www.w3.org/2001/tag/doc/leastPower.html


> imperative programming is easy to understand by humans

It's interesting that you choose C as an example because the absolutely immense numbers of bugs and security vulnerabilities directly due to C should, alone, lead us to question this. People might feel they understand C, but they're empirically wrong!

We can argue about why this is the case or whether it applies to other imperative languages, but it's pretty depressing we're still arguing about whether it is the case...


I’m a bit sad that people in functional programming circles seem to think this way, because to me, it feels like a pretty flagrant disregard for the obvious. No offense intended at all, but please consider: is it possible you are mentally comparing C with a language that is generally memory safe? Like Lisp or Haskell?

Understanding code and programming languages doesn’t prevent you from making mistakes. Advanced type systems can help, but you can put those in imperative languages too, and they do have their own pitfalls and limitations. Obviously, nothing is a panacea. But I think programming paradigms are neither here nor there...


Yes, it's a function of memory safety. The abstractions C provides for working with memory and the semantics of the language itself are both design choices it makes. Memory safety is one of the few places where we have robust empirical evidence on how it directly prevents bugs and security issues in practice. Robust empirical evidence is fundamentally hard to come by in software engineering research, so when we do have it, it's a strong signal.

The conclusion is that C is not simple and that the C model is not somehow "more natural" or a good fit for how people think. This was the core argument of the comment I was responding to and it is simply wrong, regardless of whether we're comparing C to functional languages or to something else. We should collectively not be drawing any conclusions by starting with C as an exemplar of how things should be.

Personally, I am certain that other aspects of C also contribute to extra bugs: implicit type coercion, undefined behavior, the C preprocessor and anything to do with concurrency all come to mind. I did not bring those up because we do not have the same kind of strong evidence for their impact and because just one problem of that magnitude should be enough to show that a language is neither simple nor natural.

I don't see how any of this is disregarding the obvious. Instead, I am making an obvious point that the original comment somehow disregarded. "Nothing is a panacea" might be true, but that does not mean that some things can't be better and other things can't be worse. C is one of the latter.
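
To make one of those points concrete, here is a minimal sketch (not from any real codebase, names made up) of the implicit-conversion trap: comparing a signed int against an unsigned size_t quietly converts the signed value to a huge unsigned one, so the branch you expect is never taken.

    #include <stdio.h>

    int main(void) {
        int balance = -1;
        size_t limit = sizeof(int);   /* size_t is unsigned */

        /* Usual arithmetic conversions turn -1 into SIZE_MAX for the
         * comparison, so the condition is false even though -1 < 4. */
        if (balance < limit)
            printf("under the limit\n");
        else
            printf("over the limit?!\n");
        return 0;
    }

It compiles (a good compiler will warn with -Wsign-compare, but nothing forces you to enable that), which is the sense in which C does not behave the way a "simple and natural" mental model of the code would predict.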


C does plenty of things badly - for another language in the same realm, consider Zig. It has sane metaprogramming and it can actually abstract over things without hacks like “we will always have this struct in another one with a pointer preceding it so we can decrement the object pointer” and macro magic (text based macros are simply the worst thing ever)


> hacks like “we will always have this struct in another one with a pointer preceding

A struct embedded within a different struct simply gives you a bigger struct, which is something we have decided is a good idea (as opposed to e.g. attribute lookup in dynamic scripting languages).

The unfortunate thing is that there is no way to check statically that an inner struct that you hold in your hand is indeed embedded within a larger thing. But that is the kind of assertion that is incredibly hard to prove in a simple framework without complicating things, so the better idea seems to be not to do it. (Also, it's the kind of assertion that is seldom wrong once you've tested a tiny bit.)
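
For anyone who hasn't seen the pattern being discussed, here is a minimal sketch (roughly the container_of idiom from the Linux kernel; the struct names here are made up):

    #include <stddef.h>

    /* Recover the outer struct from a pointer to an embedded member,
     * using the compile-time offset of that member. Nothing verifies
     * that the pointer actually lives inside such an outer struct;
     * that is exactly the convention the compiler cannot check. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct list_node {
        struct list_node *next;
    };

    struct task {
        int id;
        struct list_node node;   /* embedded in the larger struct */
    };

    static struct task *task_from_node(struct list_node *n) {
        return container_of(n, struct task, node);
    }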

> (text based macros are simply the worst thing ever)

Are they?

Consider how many actual bugs were caused by text macros, and then consider how many useful things you can do with them that you simply cannot do with AST based macros.
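
For reference, the footgun people usually cite is double evaluation; a textbook sketch (not from real code):

    #include <stdio.h>

    /* Arguments are pasted as text, so a side effect in an argument
     * happens once per occurrence of that argument in the body. */
    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    int main(void) {
        int i = 10, j = 5;
        /* Expands to ((i++) > (j) ? (i++) : (j)): i is evaluated twice,
         * so it is incremented twice and the "maximum" comes out as 11,
         * not 10. Prints m=11 i=12. */
        int m = MAX(i++, j);
        printf("m=%d i=%d\n", m, i);
        return 0;
    }

I'd still weigh that against the useful things text macros let you do that AST-based macros can't.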


What about dynamically replacing said inner or outer struct? You get absolutely no insight on the intent of the developer. That’s why actually joining a C project is really hard. C simply sucks at abstraction, and can only manage it through convention, without enforcement.

Please show me any IDE that can reliably refactor huge C projects. Compare it to e.g. Java: the difference is night and day. What even is the point of text-based macros? You have to parse the AST either way, so why not manipulate that? You can say that memory constraints at the time forced it, or whatever, but it is simply a terrible idea all around. You are basically running sed through your code.


> immense numbers of bugs and security vulnerabilities directly due to C

If K&R were here they might argue that it's perfectly possible to write a memory-safe C compiler, runtime, or abstract machine, and indeed there are multiple examples of same. Certainly memory-safe compilers and runtimes were (and still are) available for Pascal and Object Pascal/Delphi.

Unfortunately, from the 1970s to the 2000s memory safety (along with software reliability/security) was often considered much less important than speed and efficiency. Along with this, much C code depends on unreliable assumptions.

Of course the speed and efficiency vs. reliability and security trade-off was made in hardware design as well, with predictable results.


Ironically, one reason that Unisys keeps selling ClearPath MCP (née Burroughs) is not that a couple of companies are still using a mainframe lineage created in 1961, which has had its own evolution path anyway; rather, there are security levels where a UNIX clone is just not desired at all.


> the absolutely immense numbers of bugs and security vulnerabilities directly due to C

Isn't this simply a result of the absolutely immense amount of C being used in critical systems? Are functional languages empirically more secure?


The immense number of C bugs is not merely due to C's popularity. Other languages lead to less buggy and more secure code.

This is largely due to memory safety. However, immutability is also known to reduce bugs and be more secure (as many security related bugs are due to mutability). Lastly, superior concurrency constructs also have this bug-reducing effect.

That said I haven't looked into empirical studies. But I feel like it is just factually the case, at least for memory safety, that in most other languages you literally are unable to create entire classes of bugs that are possible in C. So just logically thinking about it, I don't even see why you'd need to see empirical studies. It should be quite obvious!

Although it'd be cool to see the empirical research too! Maybe someone else can reply with links to studies.
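
To make "entire classes of bugs" concrete, here is a minimal sketch (purely illustrative, not from any real project) of two of them, an out-of-bounds write and a use-after-free, both of which a memory-safe language rejects or traps by construction:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv) {
        char *buf = malloc(8);
        if (!buf) return 1;

        /* No bounds are checked: any argument longer than 7 characters
         * silently writes past the end of the allocation. */
        if (argc > 1)
            strcpy(buf, argv[1]);

        free(buf);

        /* The pointer is still usable after free(); dereferencing it is
         * undefined behavior, but neither the compiler nor the runtime
         * has to stop us. */
        printf("%s\n", buf);
        return 0;
    }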


> But I feel like it is just factually the case,

Aka truthiness. [1]

Or when this is asserted for safety, safyness. [2]

[1] https://en.wikipedia.org/wiki/Truthiness

[2] https://blog.metaobject.com/2014/06/the-safyness-of-static-t...


Memory safety is literally the only relevant topic in this whole thread whose impact isn't "truthy" in that sense! We can measure how many bugs and security vulnerabilities in real projects are memory issues, and we know that a memory safe language would rule those bugs out by construction.

And we have! This blog post[1] is a good starting point, linking to a number of studies. Empirical evidence is never perfect, but that's much stronger evidence than we have for the impact of pretty much any other aspect of a programming language!

[1]: https://alexgaynor.net/2020/may/27/science-on-memory-unsafet...


Your mistake was taking @chillpenguin at their word when they said "I feel". That was just a hedge, it is factually the case that memory safety eliminates an entire class of bugs. More accurately, it pushes them down to the garbage collector and runtime, where they can be epsilon close to eliminated, instead of rolling fresh ones in each and every program.


Programming languages don't win because they are inherently better. Programming languages win because they fill a need better, or are more familiar.

There are better languages than others, languages that help people write better code, make bugs easier to find, and push better paradigms. But that's not what decides a programming language winner. A winner helps someone do X faster or with less effort. If better languages want to win they need to both be better and be first to tackle niche X before a worse / familiar language does. And they need to be sufficiently, significantly better to overcome the "familiarity benefit" that a worse language entering the same space will get.

A not-perfect example: TypeScript is almost the exact same language as JS, and it took almost a decade for people to realize that unless you're writing small solo projects, TS is better in just about every way. That is a best-case scenario for comparison. Identical syntax, fully backwards compatible, one language a complete subset of the other (so no wars about losing A to gain B and whether the trade-off is worth it), almost no new paradigm to learn, and it still took our industry almost a decade to agree that it's universally better. If it takes that long to evaluate a clear-cut case, it's strong evidence that language benefit is not what primarily drives language adoption.

I've been around since the arguments about how Python would never be adopted because "whitespace is significant and that's a terrible syntax choice," that it's distracting and takes mental focus. I've seen all the arguments. When a syntax is different it slows adoption drastically, not because it's worse but because we're humans, and people don't like change. Then Python solved a need better than everything else and it took off. And once you use Python regularly you see how absurd those whitespace arguments were.

JS and its approach took over because that's what the web was written in. If the language of the web had been chosen to be Lisp (as almost happened), everyone would be using Lisp right now without giving it a second thought.


> Well, I have. I want languages more like Go and Zig. People will always hate at least Go, but it didn't hit TIOBE top 10 out of sheer baseless hype and corporate sponsorship.

Given how successful Oberon-2 and Limbo have been in the market without someone like Google behind them, or without a biased Go advocate at the same company pushing for a project rewrite from Java into Go, yes it did.


I read about this notion often and the deeper truth behind it seems to be that abstractions are hard and they tend to be harmful if done wrong.

At the same time there are clear, obvious benefits of abstraction, which is why we don't typically write in machine code. This means that the goal is to create the right abstractions and not just throw our hands up in despair.

Using the popularity of Go as a metric is interesting, because it is a language that was designed in an environment with very large amounts of code and high fluctuation. It was targeted at programmers who are replaceable cogs in a very large system, so it has to accommodate the lowest common denominator. One of the great features of the language is how easy it is to learn. (But they went too far, leaving out parametric polymorphism, a feature they have now been working on for years.)

Something that is easy to use and understand typically has a broader appeal. However, other environments want different things from programming languages. The appeal of a Lisp, for example, is its expressive power; programming expertise scales more strongly with the language. At the other end of the spectrum we have languages like Rust and statically typed functional languages that have very strong guardrails: stronger runtime guarantees can be made in exchange for agility. Again, certain types of programmers who face certain problems are drawn to these.

So creating good abstractions is both hard and desirable, and programming languages are themselves abstractions that facilitate the creation of further abstractions. But what counts as a "good" abstraction depends entirely on perspective, which includes problems, domains and people. This is why languages like Lisp are not going away, why languages like Go are popular, and why there is plenty of diversity in language design.


But then again, C++ has been top 5 TIOBE since the index began. And C++ is the poster child for complexity.


So we're presented with one design philosophy and one design Gabriel admits is a strawman. This was written before steelmanning was common and he was open about it, so the strawman is forgivable.

Then again, maybe they're both stickmen of those design approaches. All of the important context is stripped from both. What is the goal of the designed thing? How will it be used? What are the costs?

The "better" "MIT/Stanford style" was designing things to be as perfect as possible when considered in a vacuum. The "worse" "NJ style" was intended for use in industry. I still see That jarring difference between the academic and industry approaches today. And I'd rather make a difference in the world with running software, but I can see value in academics, too.


> So we're presented with one design philosophy and one design Gabriel admits is a strawman. This was written before steelmanning was common and he was open about it, so the strawman is forgivable.

"Steelmanning" is just a new term for the principle of charity, which has been around for centuries. Engineers didn't invent informal logic, and they don't get a pass for ignoring it.


Yeah, the difference is about academia vs reality - real life is not the happy case; the happy case is a side case.


I was just referring to a paper I wrote, some years ago[0], about a [quite successful] system that I designed. It has become widely adopted, and has been taken over by a new, energetic, and highly-talented team.

It is something that many in today's industry would sneer at. It's a primitive, mostly-procedural PHP framework that fails to achieve Buzzword Bingo.

I deliberately designed it to be primitive. It is meant to be taken up by fairly unskilled users, and implemented in spartan, low-tech environments.

It's really quite successful. The important parts are that the code quality is extremely high, as is the robustness of the system. I don't know if I've ever gotten a question about the code. In fact, I'm really just teats on a boar hog, as far as the current team is concerned. I don't think there's one blessed technical thing that I can offer them. It only took them a month or so to grok the system. They did sneer at it a lot, but they have yet to rewrite it in node.js (but they have written some excellent adapters for it in Node).

I'm a big believer in functionality and utility over purity. It needs to work, and work well. Pretty is nice, but Quality and Applicability are the Principal goals.

[0] https://littlegreenviper.com/miscellany/bmlt/


> Pretty is nice, but Quality and Applicability are the Principal goals.

Speed of maintenance is of ultimate importance with software that changes frequently. You may call this pretty, but in my book it's the principal goal.


You haven't seen my software (which is easy to do. Just look at my HN handle). It's pretty easy to maintain.

In fact, the gist of what I just wrote, is that I deliberately avoid trendy and fancy, in favor of Quality and maintainability.

Additionally, high Quality code tends to need less maintenance.


Worse is better is definitely better. I have no doubts about it.

Natural organisms seem to follow that pattern - Pauling showed that cells produce sub-optimal concentrations of substances via their genetic machinery, probably because keeping them at the optimum would take up more resources to maintain the cellular machinery. Sounds totally like worse is better to me. Life is full of examples.


Why is there never much discussion about the actual technical issue presented in the text, which is how to design a syscall API with respect to interrupt handling?

I never thought of the EINTR errno as a hack. It strikes me as very pragmatic and actually the right choice. Am I missing something? Why would one want to save the state of the system call routine when an interrupt occurs and the interrupt handler is executed in user code?

The point of interrupts is precisely that they can interrupt a call that otherwise could potentially block forever. When an interrupt occurs, the user code has the chance to completely cancel any attempts to complete the original call. Otherwise, just re-try. The complexity added to the syscall API is small, the practical benefit is very real.
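
Concretely, the "just re-try" convention is a few lines of wrapper code (a sketch, assuming POSIX read(2) and errno; the function name is made up):

    #include <errno.h>
    #include <unistd.h>

    /* Restart the call whenever a signal interrupted it and the
     * handler did not decide to cancel the whole operation. */
    ssize_t read_retry(int fd, void *buf, size_t count) {
        ssize_t n;
        do {
            n = read(fd, buf, count);
        } while (n == -1 && errno == EINTR);
        return n;
    }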

So why would I want a general mechanism in the kernel that automatically continues the blocking call after running the interrupt handler?

To me the lesson from "Worse is Better" has always been "Don't design before you actually know what you need. You'll probably end up with something that does not do what you want and that you can't fix".


The notion that a minimal first version is the ideal starting point is mirrored in this quote from John Gall:

> A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system.


Worse is better and excellence are not at odds. Worse is better applies at one side of a shearing layer, and excellence on the other.

In software, a worse is better API imposes designing for modularity upon the applications that depend on you. An excellent implementation means your internals are clean and efficient.


Read the original [0], "Lisp: Good News, Bad News, How to Win Big". (Spoiler: they didn't.)

[0] https://www.dreamsongs.com/WIB.html


[flagged]


The website is great though. Loads fast, mostly text, not too contrasty. I'd take this over the Reddit or Twitter mobile interfaces any day.


New reddit is a blight on the already scarred name of software development that seems to exist as a parody of modern web development.


100%. new reddit UI is soooo slow.


One thing I really like about websites like that (and HN for example) is that zooming just works. If you zoom, you're left with what looks like a modern blog. I have HN on 125% zoom all the time and I often forget about it. The website also perfectly supports bookmarklets to change the CSS. This is empowering the user to make the website how they want. I'd prefer that over what a designer thought would be the best for me.




