Clever code considered harmful (joshwcomeau.com)
95 points by signa11 on May 29, 2023 | 134 comments


This is one of those pieces of advice that sound reasonable when explained in the abstract, and yet when you hear it invoked in actual code reviews by other people, you wish it had never existed. Especially when the following kinds of constructs get decried as "clever":

- writing "return condition" instead of "if condition then return true else return false end"

- using the conditional-value ("ternary") operator in any capacity

- early returns/goto cleanup instead of nested if conditions

- using basic higher-order functions like map or reduce in any capacity

- any kind of metaprogramming whatsoever

One person's "clever" is another's "elementary". If you want to point out that some code is hard to understand, that's fine. But don't call it "clever". Using "clever" as a criticism is implicitly an ad personam argument, passing value judgement on the person who wrote it ("Oh, they must have written it to show off") instead of an argument about the code itself.

Also, this:

> Hi friend! Hope I didn't startle you. Can I let you know about my newsletter?

is missing a "fuck off to hell, don't you dare disrespect my attention like that ever" button.


I don't think most people would describe almost anything you wrote as clever. I hope that's not insulting...

Metaprogramming though... If you want to make a magic black box that takes hours to debug and conceals real bugs, invest heavily in metaprogramming to save a hundred lines of code that could otherwise be maintained with three find-and-replaces/sed calls and a recompile a year... Most metaprogramming shouldn't exist in production code.

Programming is social. You have to weigh your decisions heavily by who you work with now and who you might work with in the future. If you work in a niche language or toolset you might think "this is my chance to do whatever I want!" But the reality is, it should be the complete opposite.

Most compilers make clever code silly. You can write a 15 line hard to read pipe or method chain, or a 25 line double for loop with the same runtime characteristics. The latter is always better to maintain, while the former is always more clever. One is good for the team, the other for someone's self-esteem.
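
For concreteness, the kind of equivalence being described might look like this in Python (an illustrative sketch of mine, not code from any real codebase):

  # Two ways to collect the squares of the even numbers in a nested list.
  # Same asymptotic cost, different shape on the page.
  rows = [[1, 2, 3], [4, 5], [6]]

  # pipeline / comprehension style
  squares = [x * x for row in rows for x in row if x % 2 == 0]

  # explicit double for loop
  squares_loop = []
  for row in rows:
      for x in row:
          if x % 2 == 0:
              squares_loop.append(x * x)

  assert squares == squares_loop == [4, 16, 36]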


> You can write a 15 line hard to read pipe or method chain, or a 25 line double for loop with the same runtime characteristics. The latter is always better to maintain, while the former is always more clever.

You’ve just told us which one you’re more familiar with, that’s all, which is exactly what the other commenter was pointing out. This one also belongs on the list in that comment.

Beyond familiarity and subjective preference, there are benefits to the pipe approach that come from its functional nature. Determining that a for loop is a pure pipeline can be tricky in general, which contradicts your idea about which one is harder to read. And the ability to get reliable parallelism for free - e.g. Java’s parallelStream - is not a feature of for loops.


I think if you took a survey about which code is easier to read, a pile of expressions acting as closures inside nested method calls or a loop, you would likely find most programmers have comfort with one over the other. That's objective, because it's how pseudocode, the most widely used language-agnostic abstraction in computer science beyond plain text, is written. But that's really not the point, and it never was, right?

You can grill me if you want and say I am the problem! Zing, ouch, got me so good! Those HN points will get racked up, so much winning...

Or you can read between the lines. I drew an unspecific example with microseconds of thought behind it. People who know what I am saying know what I conveyed is fine. People who know me know I regularly use pipes and chained method calls all day. The point is, the paradigm could go either way; it depends on culture (reading the room) and design decisions. Had I flipped my example and said "method calls can make code way cleaner than for loops", which in some contexts is equally valid, your antiparticle HN contributor would make the same argument you made, calling me the problem. Get it?

I would say people trying to shit on everyone around them for being casual are more of a problem than anything else, especially in collaborative development environments. That supersedes any code anyone could contribute. But that's my take, and I'm not going to suggest it belongs on some arbitrary list of ad hoc HN rules (of which none hold water)... I won't be writing a five-pager with examples where what I said was sound, valid, best praxis; I have nothing to prove, and that was never the point. Everyone who knows what I'm talking about gets it. Happy flag planting with imaginary enemies.


Hey man, this feels like an overly defensive response to me. I don’t think there was an attempt to attack you personally. Just an outside observer opinion. Have a nice day!


> And the ability to get reliable parallelism for free - e.g. Java’s parallelStream - is not a feature of for loops.

Why not? As in, it just seems like a language design issue, there's no reason why you technically couldn't use the "simpler" syntax. For example, what prevents us from having:

  parallel for (Type instance: collection) {
    // code for each iteration
  }
Of course, there are other reasons to consider using streams and such, though admittedly they can sometimes be cumbersome to work with: especially when you would like to step through all of the transformations that a particular object instance undergoes (or when you map stuff and create new ones, or reduce lists into fewer items), as opposed to needing to tinker with lots of conditional breakpoints in the debugger.


> Why not?

If the construct you propose were feasible, why do you think it doesn’t exist?

The reason is that it’s not feasible. Compositional pipelines involve constraints and properties that an imperative for loop doesn’t support.


> If the construct you propose were feasible, why do you think it doesn’t exist?

Because it already exists as a part of the Stream API in a way that doesn't change the language much: since you can mostly use .parallelStream() or .stream().parallel() there's basically no need for the "old" syntax to enable the same functionality.

That doesn't mean that it's somehow unfeasible, since the implementation of the example "parallel for" construct would just need to execute the code in the loop body with a ThreadPool. The Ada language has a nice example of parallel loops like that https://ada-lang.io/docs/arm/AA-5/AA-5.5#p26

Of course, there are other benefits to streams and lazy evaluation, but perhaps that's beside the point.


> And the ability to get reliable parallelism for free - e.g. Java’s parallelStream - is not a feature of for loops.

This is the big one. For large datasets or computationally intensive processing the speed difference will be noticeable.

One of the issues with "cleverness" is also that code that's trivial for John Carmack to understand might not be for $BODY_SHOP_RESSOURCE_200353.

I recall a self-taught dev (or maybe one from a bootcamp) coming up with a cascade of nested if-elses, nested 8 deep. Someone with a background in CS asked him what he was trying to do and basically concluded that what he was trying to do could be expressed as a state machine. To which the initial dev replied that it was "way too fancy" and that he didn't need the code to be fancy, just to work.


> Most compilers make clever code silly. You can write a 15 line hard to read pipe or method chain, or a 25 line double for loop with the same runtime characteristics. The latter is always better to maintain, while the former is always more clever.

Weirdly, in Rust there are lots of cases where the compiler will generate better assembly if you give it map/filter/fold than it will if you give it a for loop. The reasons are weird - like, it's easier for the compiler to know it doesn't need bounds checks for list.iter(), and to work around some integer overflow errors while iterating through ranges and things like that.

The difference usually doesn't matter in practice, but I still find myself thinking about it in performance critical sections.


Ignoring Rust or any language-specific thoughts... Sure, but there are many cases where it's still better to have slightly slower code for the sake of readability/extendability/maintenance. Even in HPC applications. I know, I know, some very smart people are about to throw their sharpened axe at me. But in my experience, very rarely does someone truly need to sacrifice a bounds check to deliver their product or save meaningful amounts of $ in prod. Not saying all loops must be bounds-checked, that's a dumb hill to die on. But I've seen devs hyperfocus on hypothetical minimal gains that get blown away two days after the code lands, or a minor requirement changes.

The challenge with the DO NOT TOUCH this code comments that intense performance tuning leads to (figuratively and literally) is that they make people who don't understand the code build bizarre (and often slow) monuments all around it that don't usually need to exist and detract from the minor gains won by a cute optimization. I've seen this happen A LOT.

Of course there are times where you should opinionate your code to better serve the machine. But my big thing is, it's rarer than the overly complex design decisions favoring it are, and compilers are getting better every day.


>I don't think most people would describe almost anything you wrote as clever. I hope that's not insulting...

I desperately wish I could say this, but I have seen it happen


Sorry to hear that. I'd pitch investing in the team's education a bit. A little time spent establishing some ground rules goes a long way. If it's your boss biting down on you for it, don't waste your time; seek a new environment if it bothers you or stunts your growth. Most code review cultures degrade into absurd rules over time because of human power structures and crappy feedback loops. I'd imagine if some of those things became rules, either there's an obscure reason for it, or the environment has rotted out.

I've seen both cases for some of these types of things. The rotten culture being the most common


> Most metaprogramming shouldn't exist in production code.

Go ahead and try. Many of the libraries you will use use metaprogramming of one flavour or another for ergonomic reasons.

Python decorators are metaprogramming. For example, putting @cached above a function declaration essentially wraps your function in another one that caches its values, without you ever having to see all of those details.

I am not sure if your production code would really gain from writing out that caching logic over and over again for each of your functions that need caching. I am sure however that it would become less readable and harder to reason about.
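
To make that concrete, a hand-rolled version of such a decorator might look roughly like this (a minimal sketch; in real code you would probably reach for functools.lru_cache or functools.cache instead):

  import functools

  def cached(func):
      # Wrap func so repeated calls with the same arguments reuse the stored result.
      store = {}

      @functools.wraps(func)
      def wrapper(*args):
          if args not in store:
              store[args] = func(*args)
          return store[args]

      return wrapper

  @cached
  def slow_square(n):
      print("computing...")
      return n * n

  slow_square(4)  # prints "computing..." and returns 16
  slow_square(4)  # served from the cache, nothing printed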

In Rust #[derive(Serialize, Deserialize)] can be used to automagically give your data-types serialization and deserialization. This is also metaprogramming. It also reduces the code you have to read and write and the mistakes you will make if you roll this on your own. It also makes your code easier to read to all people who understand what it does (so nearly everyone who programs Rust).

Metaprogramming is okay, if it is done in the right places for the right reasons and doesn't obscure the logic of the program.

I often use it for custom decorators in Flask, e.g. @admin_user or @authenticated_user, to quickly wrap the functions for some HTTP routes with the ever-same logic for authentication. Sure, you could also do this with a function call, but the ergonomics could be worse and you could accidentally place the function call in the wrong place and thus expose some parts of a route to unauthenticated users. I don't think this makes it harder to reason about my code.
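
A minimal sketch of what such a decorator can look like (the current_user lookup here is hypothetical, not the commenter's actual code):

  from functools import wraps
  from flask import Flask, abort, request

  app = Flask(__name__)

  def current_user():
      # Hypothetical lookup; a real app would check the session or a token.
      return request.headers.get("X-User")

  def authenticated_user(view):
      # Reject the request with a 401 before the view body ever runs.
      @wraps(view)
      def wrapped(*args, **kwargs):
          if current_user() is None:
              abort(401)
          return view(*args, **kwargs)
      return wrapped

  @app.route("/admin")
  @authenticated_user
  def admin_dashboard():
      return "hello, " + current_user()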

Like all "don't do X" idioms in programming the one about metaprogramming should be taken with a grain of salt. There are cases where metaprogramming is the best (most reliable, futureproof, usable, etc) way of solving a given problem. The warning is true in that you should avoid using it everywhere without reason. But there are places where it makes sense and there it would be a waste not to use it


It's a good thing I didn't say "don't do X"


Metaprogramming can do the opposite: make obscure, complicated and lengthy code much more readable and easier to maintain. It’s clear when it does this.

Agree that "clever code" is silly, but it's also important to remember that people need to entertain themselves and "grow professionally." People get paid more if they write the 15-line pipe, and it's job security. Maybe GPT will help, since it's a lot better to have GPT write easy-to-debug for loops, but then programming will be less fun. People need to entertain themselves.


I hear you, that's why I said "most". But there's also a flipside where someone tries to tweak a language feature or implement a missing one with an in-house macro. That macro goes on to be defunct, but a major chunk of mission-critical code is now glued to it. After all, we can't expect languages to not fill in missing gaps in an opinionated way. The only way out is to wall off a code base for a month for a treacherous refactor.

I am pro "have fun, screw up, and grow." But macros present a certain kind of danger a lot of other tools don't. If you are playing with them, be careful of their scope; that's my only real warning.


> writing "return condition" instead of "if condition then return true else return false end"

> using the conditional-value ("ternary") operator in any capacity

Looks like the author of some code I had to comb through recently maybe had that among their guidelines. Said code was replete with:

  if(function_that_returns_boolean()){
    return true;
  }else{
    return false;
  }
...and...

  if(foo()){
    return true;
  }else{
    if(bar()){
      return true;
    }else{
      return false;
    }
  }


This to me screams "I don't understand boolean variables".

Why stop there?

  switch (byte_value) {
    case 0: return 0;
    case 1: return 1;
    ...
    case 255: return 255;
  }



Visualizing control flow from an if ladder: minimal effort.

Doing the same for a non-tiny boolean expression: way more.

Just because you think it is cool cuz profs forced it on ya during college is not an argument.


Ironically, I have had profs tell me to use nested ifs instead of breaking loops, so…


It's personal taste. You can subdivide boolean expressions just as easily with if-then-elses as with boolean functions.

What most people object to is leaving the last level of if-then-else and returning true or false, when you could have returned simpleBooleanExpression and done away with one level of nesting.


My high school computing exam (by government direction - VCE, graduated 2019) was filled with many such

    If a And b Then
      Return True
    Else
      Return False
    End If
programs. Not sure why they were so scared of Return a And b. Apparently they got a new curriculum after I graduated; my teacher was very happy about it but I didn't get to see it myself.


Well, the 2019 final exam [0] lacks such questions; it must have been some other year. But the correct data type for an "AccountBalance" variable (section C, question 9) was a float, so I definitely lost marks on that question!

The 2021 final exam (which is the most recent I can find) [1] somehow creates XML records for a SQL database (section C, question 7), and hints at using "data validation" to avoid an SQL injection (section C, question 11.c), which is quite exciting. And the 3.5 gigabit wireless connection (section A, question 13) is definitely not using an ISP-supplied router.

[0] https://www.vcaa.vic.edu.au/Documents/exams/technology/2019/..., answers https://www.vcaa.vic.edu.au/Documents/exams/technology/2019/...

[1] https://www.vcaa.vic.edu.au/Documents/exams/appliedcomp/2021..., answers https://www.vcaa.vic.edu.au/Documents/exams/appliedcomp/2021...


In an intro class using Ada I’ve seen

  case Big_Expression_Here is
    when true   => Do_Something;
    when others => Do_Something_Else;
  end case;
and I found myself wondering whether they realize what this is.


If a code reviewer pulls that on you, meet with them and explain why that's asinine. If that fails bring in someone more senior as a tie breaker. If that fails, find somewhere else to work...


In C++ projects you see the verbose version a lot, because it's much faster to set a breakpoint on a statement than on a conditional expression. It's really annoying to repeatedly stop debugging, add the redundant return statements, and sit through a recompile cycle just because `return expr` looks nicer than `if (expr) return true else return false`.


Alternatively, you could use a debugger that supports conditional breakpoints.


Conditional breakpoints require the code to break each time and evaluate the condition, which makes it run like a snail.


> Using "clever" as a criticism is implicitly an ad personam argument, passing value judgement on the person who wrote it ("Oh, they must have written it to show off") instead of an argument about the code itself.

I don't know how you got this conclusion.

I can see judgement being passed to the code with respect to not just who wrote it, but also who's going to need to understand it later, who's going to need to make changes to it. In that sense, calling code "too clever" makes sense, right?

There are hyper-specific reasons why you'd need code that's going to be complex/too clever for that group, especially if it saves cost or is faster than the simpler alternatives, but in most cases there's nothing that justifies it.


> any kind of metaprogramming whatsoever

I am personally on the fence about this one. It's tremendously useful when writing code, and I've done quite a fair share of it myself; but on the other hand, when you are reading the code, trying to understand what the hell is actually being called and why, it really sucks when your F12 bottoms out at some meta-generic piece of machinery that invokes something when called from somewhere but what and where, exactly, is a mystery that requires a global search over the repo in the best case (in the worst case there is either nothing to search on or the search term will turn up thousands of hits).


To me, meta-programming should not creep into a function. It's ok to generate functions or initializers or whatever with it, but it should not be an inconspicuous part of an otherwise normal function body, especially not if it affects control flow. If it really, really helps readability, perhaps it could be used at the top level, but not inside a loop or conditional statement. I might make an exception for complex type declarations.

But, like all programming: it should help readability and it should help to avoid bugs, not be used for its own sake, aka cleverness.


> Using "clever" as a criticism is implicitly an ad personam argument, passing value judgement on the person who wrote it ("Oh, they must have written it to show off") instead of an argument about the code itself.

I don’t get it; why?

Anecdotally, my colleagues and I are fine giving each other feedback that some code is too clever, without malicious undertones.


Someone reading my code could probably point out profound numbers of code quality issues.

But I would want to know what they've done themselves to evaluate the weight of their expertise. If they have impressive projects, their criticism means something.

There was a koan about the progression of someone's Java program from beginner to intermediate, to expert to master to enlightened. The code for the enlightened looked the same as the beginner's.

These koans are useful too

https://prirai.github.io/books/unix-koans.html

Edit: In other words, say a developer that criticized wordpress, do you respect what they say more than the success and value of wordpress (of the wordpress developers)?


> there was a koan about the progression of someone's Java program from beginner to intermediate, to expert to master to enlightened. The code for the enlightened looked the same as the beginner.

Mmm I've been programming for 30 years across a lot of languages. My code doesn't look the same as it did when I was a beginner. Sure has flopped around a lot though. Take comments:

- When I started, I used no comments (because I was ignorant)

- Then I put comments everywhere because someone told me it was good practice

- Then someone said comments are a sign your functions are too long and poorly named. So I went back to no comments for a bit. But I sort of hated the resulting code - functions were too small and it still wasn't "obvious" no matter how cleverly I named my functions.

- So then I had a few comments here and there, wherever my code wasn't "obvious".

- Then I tried literate programming, and my code was in small islands amidst an essay of comment. That was fun, but it didn't last.

These days I use comments for 2 purposes: to document my APIs, and to write little letters to my future self. Eg:

    // This code looks wrong at first glance, but its actually correct.
    // Here's why: ...


// this code looks wrong but was mandated by the pm in the 3/3/22 memo. I do not endorse


Koans, proverbs and quotes are a lazy shortcut to sounding authoritative. If you had something of substance to say, you could argue for it directly, instead trying to make it sound like some specious ancient wisdom.

> Edit: In other words, say a developer that criticized wordpress, do you respect what they say more than the success and value of wordpress?

Depends on the content of the criticism. There are many legitimate criticisms to be had about WordPress, and they can't all be dismissed with "it's popular, so it must be good".


Don't Critics hide under the authoritativeness of "best practices"? Which is whatever they think and agree with. The critics think they're authoritatively right, as if they know what is best in all cases and scenarios. (But they're not the ones who did the work.)

An authoritative source of accomplishment or demonstration is more valuable than someone with no skin in the game, or who doesn't even contribute, even if what they say is true.

WordPress has had security issues; I didn't mean to say popular = good. What I meant is that I trust the WordPress developers' ability to write software that works more than someone who doesn't know how, or has never written successful software, but knows how to avoid security issues. Don't let perfect be the enemy of the good :-)

What would you prefer or listen to more? More developers capable of writing WordPress-level successful products, or more developers who can criticize but can't build anything?

You could listen to the WordPress developers' attitudes on software development but you wouldn't take their security advice. Maybe you would ignore the critic's criticism on how to build software.


> Don't Critics hide under the authoritativeness of "best practices"?

That's true! Just calling something a best practice does not make it so. That also has to be argued for, by demonstrating the consequences of one practice over another.

> I trust the WordPress developers' ability to write software that works more than someone who doesn't know how, or has never written successful software, but knows how to avoid security issues.

Criticism being easier to state than address does not invalidate the criticism. "Sure, this extension may be full of SQL injection bugs, but that's easier said than fixed, so it doesn't matter. I'm going to install it anyway."


You describe an appeal to accomplishment fallacy. Whether or not someone has done x or y themselves has no bearing on whether your code has quality issues.


The merit of the code quality issues is in question based on the authoritativeness of the person doing the criticising.

I don't like the pride, superiority and smugness that comes from criticizing code and the assumption that they're the ones in the right because of their criticism, rather than the expertise of the one who wrote the code and solved the problem and did the work.

I've noticed people pay more attention to the stones that people throw rather than those throwing them.

Edit: In other words, criticism is cheap if you have no skin in the game.

I don't think it's a fallacy. It's like hosting a wedding, inviting a guest, and then being criticised for the entertainment on offer. People pay attention to the criticism, not to the fact that it is coming from a guest, whose criticism is not as valuable as the tastes of the people paying for and hosting the wedding.

Whether or not the code quality issues are real, their importance is context-dependent, and the weight of someone's criticism is not as high as everyone thinks.


"Quality" is not an indisputable property of the code. Someone's judgement about code quality can only have as much weight as their competence.


I keep seeing "if (some_boolean == true)" and it kind of grates on me.


In many languages it's necessary if it's a nullable boolean...


In PHP, you might need "if (some_boolean === true)".

There is a fashion for inverting it: "if (true == some_boolean)", which seems arse-over-elbow to me (I do know why people do that).


The “literal/constant first” form can be used defensively to avoid mistakenly using the wrong operator for languages that have assignment operators that are similar to equality-test operators. For example, this will cause a syntax error:

    if (true = some_boolean)
While this will silently result in an assignment of true to the some_boolean variable, not the intended equality test:

    if (some_boolean = true)
Editor hints and linting can help catch this as well, but those aren’t always available (or weren’t available in the past). I think I first saw this in Code Complete 2.

Here’s some more explanation

https://softwareengineering.stackexchange.com/questions/7408...

Looks like the pattern is sometimes referred to as a Yoda expression.


Routine reminder to the world that uBlock has “annoyances” filters I recommend for everyone, and reporting new sites that aren’t yet blocked is pretty easy once you’ve done it.

This removes virtually any pop-up or cookie banner or pseudo ad on any popular website.


I do use that. It doesn't work here though, because the author of that website "cleverly" decided to obfuscate their CSS class names.


> fuck off to hell

Yeah, that's the button I was looking for.


> when it comes to day-to-day production code, here's the barometer I like to use: will a junior developer, someone at the very start of their career, struggle to understand this code?

this is one of the stupidest things ever said about any piece of code. how about you give that feedback to beethoven (and other great composers)? hey beeth, can you please make your music piece so simple that a junior pianist can play it? this thing over here requires too much study and virtuosity to play, and that's not something we want to aspire to.

or to euclid: hey can you remove this _pons asinorum_ here? i have a beginner who's struggling to understand the isosceles triangle. clever code is art to be studied and enjoyed. they are the vehicles of growth, of new and important ideas. holding them off for the sake of a proverbial junior is stupid.


> clever code is art to be studied and enjoyed.

Cool, great, save it for the Obfuscated C contest. The less time I spend “studying” someone else’s idea of cleverness while trying to fix ticket #51483 or implement some weird business logic corner case the happier I am.


Pretty sure 99% of the world agrees with you here. If someone wants to write twenty nested closures that compose some category theoretic construct in some ancient academic paper instead of two function calls using an already imported library they can go to their museum or whatever to be studied. The rest of us have shit to do...


> clever code is art to be studied and enjoyed. they are the vehicles of growth, of new and important ideas.

I don't think you quite got the idea this article was trying to convey. Here's a simplification:

Don't make simple things look hard.

Hard problems require convoluted solutions, however those solutions can still be broken down into simpler steps. Don't obfuscate the simplicity, allow all the simple parts to work together to create something complex.


hard problems don't require convoluted solutions, i think. the nature of simplicity is that, if aspired to, it could be realized in the small as well as the big since it really is a description of cohesion. i understand that starting a conversation about simplicity will lead us nowhere: there are as many opinions/definitions of 'simple' as there are people in the world. in this statement lies the fact that regardless of level of experience (related or unrelated), a priori intelligence, and general disposition towards new things, we have our expectations of simple. that can't be right! we need to learn new simpler forms that require higher intelligence to grasp. to the initiated, einstein's equation is simpler than newton's because it allows a lot more derivations that were impossible with newton's. i suggest that we deliberately move people towards einstein's as the new simpler forms. in the example code the author showed, they appeared to identify advantages that are on net better for any software application. they should adjust their expectation of what simple is, and make an effort to help so-called juniors move up a level or two so that they can develop more powerful simple forms.


I think you are proving the authors point quite well.

Composing music to be performed is very different from sharing the composition of it with many composers.

There is meter and scale and notation, but only so many ways you can express things, with the most complex being complicated meter. Now performing that - yes, more complexity, but not in the composition.

We don't need programmers to "perform" in every CRUD app on the internet. We aren't all writing game engines, GPU drivers, etc.

There are Beethovens out there doing things that require phenomenal skills (that can be learned), but it's not 1:1 with composing music with performers' limits in mind when the composer didn't HAVE to collaborate.


You say it’s stupid but don’t really elaborate on why. You imply there’s some excellence cliff that presumably cannot be reached if you cater to beginners, but that’s hardly the case. Simplicity and complexity, beginner and expert, are not mutually exclusive things.



Making code simple is not easy.

In your analogy I think the listener should be the junior programmer. Music by Beethoven can be appreciated by many more than just the people who can compose on that level.

In the same way you can write code so that also junior developers can appreciate it. The challenge lies in clearly expressing solutions to complex problems.


I wouldn’t use such strong words, but you’re right. Not everything needs to be made for beginners. I keep remembering what Rich Hickey said that seems relevant:

We should not sell humanity short by trying to solve the problem of beginners in our stuff. We need to make things for people to use, and we need to teach people and trust people to be able to learn how to do that.

https://youtu.be/QCwqnjxqfmY?t=1914, https://youtu.be/QCwqnjxqfmY?t=2063


Writing code in a way that it can be understood by coworkers is a prerequisite for collaboration.


Surely that's completely missing the reality that the reason code is hard to understand is virtually never because of whether you're using some particular syntactical feature like ternary expressions* - it's because the features you're working on are complex and there's a bunch of interactions between lots of different parts of the software, and it takes a while to get your head around all of them. On the other hand, poorly named functions and variables, poorly constructed interfaces, tight (but non-obvious) coupling between different units of code and a lack of a straightforward way to walk through various paths of the code (e.g. via unit tests) are the sorts of things that make code especially incomprehensible and hard to work with, and for that reason, the sorts of things I'll flag in a review. If I see syntax I think looks a little ugly or unnecessarily "clever" I'll usually add a comment and a suggestion but it still generally gets a tick.

(*) even the example given in the article of an obviously unnecessary use of "reduce" is pretty rarely representative of the sorts of reasons code is hard to understand, though I have come across similar things in 3rd party open-source libraries and the like - I would say though these days you could almost certainly get ChatGPT to explain particularly obscure uses of map/reduce etc. Whereas it's never going to be able to explain high-level behaviour that requires the entire codebase.


When did "clever" become synonymous with "overcomplicated"?

To me, clever code is maintainable code. If you can't understand code, either it is not maintainable, or you don't master the language at the required level.

The thing is that it depends. "Ternary operators are too hard" is not an absolute truth. I would argue that sometimes they are more readable, sometimes not. Same for recursion and everything. The art is to find the balance, and even there, there is not one universal truth.

If my interns need to learn things, that's not a problem for me. The goal is not for the interns or my grandmother to maintain my code. The idea is that the maintainers maintain it. If they find it maintainable, then it's hard to argue that it is not.


It's one of those tongue-in-cheek mis-statements that make it into common usage, lose their scare-quotes or italicisation and become a different definition of the same word. Other examples:

* "Literally" used in a way to refer to something that's clearly figurative

* "Non-trivial" used in a way to refer to something that is clearly extremely complicated

Much as I like to decry abuse of language¹, at some point one just has to accept that a headword has gained a new bullet-point in the dictionary and move on.

1: and speaking, or, rather, attempting to speak, the Queen's^W King's English, the modern American-centric world certainly delivers opportunities for that!


Sure, I'm fine with that. I guess my point is that those posts, to me, just try to convey that one should write good code, and that good code is not necessarily using all the tricks in the book (rather not).

But then they tend to go as far as to say "never use this concept, because it's too complex", or even go towards wishing that the language did not offer any "clever" features, which I disagree with.

Yes, write good code, as in "maintainable". But don't say that a concept is always bad (be it ternary operators, recursion or templates). It depends on the language, project and situation.


It's like when the electrician comments on some of my own electric handiwork. I believe he called it "nifty".


“Articulate”.


Smart code good; clever code bad.

Smart code is maintainable. Clever code is not.

Smart code takes complex problems and simplifies them in a way that makes them easily understandable by the whole team. Clever code, whether the original problem was complex or not, presents a complex solution that's difficult to follow.

Have you ever poured a lot of work into something only to have a coworker say "how simple - I thought that would be a lot harder?" That's smart code, but your coworker was expecting clever.

Those are the meanings people are using in the conversations about clever code.


Right, I'm completely okay with that. I guess I just don't understand why there are regularly blog posts trying to explain what should be common sense: "write code that is nice to read and maintain".

The thing is, getting to the point where you can write good code takes experience, not blog posts that say that recursion and ternary operators are always "clever".


> Right, I'm completely okay with that. I guess I just don't understand why there are regularly blog posts trying to explain what should be common sense: "write code that is nice to read and maintain".

Because “nice to read and easy to maintain” is a goal, not something trivial to evaluate while writing (especially the easy to maintain, which, among other things, is dependent on what the actual future needs will be), and achieving that involves balancing competing factors; opinions on how to do it conflict, experience of what works varies, and clear and applicable science about both the broad trends and what explains specific variations from the trends is sparse.


I have been the maintenance programmer for multiple legacy projects.

I don't have the time or motivation to "master the language at the required level". I want to get in, fix the code, test the fix and get out. That's what I get paid for, not spending weeks to become an expert in Language X's clever code tricks and hacks.

    "Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live."
You can be clever and terse in your own projects, but please write readable code with comments. Even if something is 150% clear to you while writing the code, it might not be for the next person - or for yourself 5 years later.


Again, you are assuming that I mean "overcomplicated code" when I say "clever". To me, clever code is meant to be readable. Sometimes a recursion is more readable, sometimes it is not.

I am just saying that the choice of what is "overcomplicated" comes down to the maintainer. My project, my rules. If you don't want me to use lambdas in the project you maintain, that's your choice (and it is my choice to contribute or not). I just don't agree when people say "don't use ternary operators ever, because I don't like them".

Interestingly, I go further than you: good code to me mostly doesn't need comments. Comments are there for the more complicated parts, but most code should be readable as-is. I guess we mostly disagree on what is too complicated :).


For me the benchmark is how much time it takes me to figure out my own code when I come back to it six months later. If I can glance at it and still know what I was thinking, I did an OK job. If you stick around long enough on a software system, you will come across code that you wrote that you struggle to understand. Failing this test can be an educational moment. It's your code after all. So, obviously you messed up. But how? How could you improve things? These are important questions, because failing to address the issue just sets you up for more problems later. The worst thing you can do is walk away and then have to figure it out all over again a few months later.

Cognitive load of understanding code is expensive. I tend to leave notes for myself. Document the not-so-obvious things with little one-liner comments. Rename variables to clarify what they are for. Extract code to get rid of the distracting clutter. Etc. I sometimes get comments from others that they find this helpful. That's nice. But I do it for myself as well.

Simple code is obvious in what it does. There's a certain elegance to simple code solving big problems. It takes effort to make things simple. Making things as complicated as needed but not more complicated. Being overly clever means you end up with a Rube Goldberg machine of unnecessary complexity and clever mechanics that is hard to understand and maintain. You get a certain satisfaction out of making that kind of stuff work but in the end it's a mistake.


I enjoy writing my own code more than I enjoy reading other people's code which is probably not right.

To understand other people's code, I have to put away my own thoughts of how things work and try to understand the thinking model and mental model of the person solving the problem.

And if they're smarter than me, then I have an uphill battle.

I think experts get too deep into values that aren't practically valuable to businesses or me and as a result go into the deep end creating a manifestation of something they think is beautiful or elegant but for me is a mess I am now forced to understand, with reluctance.

They say that one person's trash is another person's treasure and I think it applies to code too.

One person's elegance at runtime is another person's unextendable mess that leads to slow velocity and low developer morale.

They say code IS the documentation but I don't agree. I don't want to bounce between 50 tiny files to understand where the core part of your algorithm is and what I need to do to get the behaviour I need.

I think if you're reaching for advanced language features to solve the problem before better data modelling, that's a warning sign.


> I enjoy writing my own code more than I enjoy reading other people's code which is probably not right.

Right or wrong, I think this is true for most programmers. Would be interesting to understand why this is so. Probably related to the problem in the article: we're writing code that is too clever, so reading it takes too long or is too mentally taxing. If we could read code as fast as we read text, it would be less annoying.


Thank you for your reply.

I prefer reading documentation or technical deep dives - such as this one

https://mattwarren.org/2017/02/07/The-68-things-the-CLR-does...

Is code more dense than English? Why do I understand more overall from this blog post than I would if I studied the codebase of the CLR for hours and hours?

I think it's to do with mental models. If developers were all on the same page about the underlying mental models, we could all understand each other's code because we would all understand the same underlying things.


Yes, I would say code is usually denser than English. But sometimes it's not. This unevenness is I think part of the problem. You breeze through some parts, and stop and think through some others. It's unnatural.


>I think experts get too deep into values that aren't practically valuable

This describes the majority of practices today. Individuals pushing their own values, selling them and hoping they stick despite having zero evidence to back them up. Even most things 'proven' are fairly context-dependent.

The majority don't even bother making a simple cost-benefit analysis these days. As if any significant social change is just going to break even from moment one, and saying 'we should do X because it will give us Y' is a good enough argument. It is crazy how casually some of them advocate for practices that won't break even in the span of a few years at best. That's a lifetime in the field of software development.


Debugging is twice as hard as writing new code. So if you write code as cleverly as you can, you are by definition not smart enough to debug it.


It's an interesting experience reviewing a "smart" solution you wrote years ago, and happening upon it again.

It's like opening a time capsule to yourself and getting nothing but a punch in the face.


“Smart” can also mean less code, more modularity, efficient algorithms. Obfuscated code is also “smart”, but in a different way.


My smart code plans for me to be sort of stupid and slow in the future, lacking the context I had when I was writing it. And I'm delighted when some seemingly hard problem has a simple, straightforward solution.

I often wonder what happened to me. There's a preponderance of evidence that I was smart in the past but very little that I'm smart today.


True, but you can keep writing fresh clever code until you've written something that doesn't need debugging.


As of May 2023, we do not have the technology to produce bug free software. We have the technology to specify requirements, we have the technology to produce formal proofs, do verification of specifications ( ex. TLA+ ), and prove a software implements the specification. But what is this "something" that would not need debugging?

"Beware of bugs in the above code; I have only proved it correct, not tried it."

   - Donald Knuth


Kernighan's Law, exactly.


Completely agree with this. It took me 7-8 years of professional development before I fully realised that I needed to stop being clever without a really good reason.

A really good reason would be to significantly improve performance or reduce a maintenance burden (e.g. loads of duplication). A good reason would not be "elegance" or an abstraction which might be useful one day.

One of things I loathe about code review is fighting for simplicity against people who haven't figured it out yet.


If you look at Go... Its entire purpose for existing (in my opinion) was that Google realized it had too many people who couldn't be trusted to write maintainable code with all of the flexibility of most programming languages. And yes, Google hires many of the best and brightest, and yep, Go is still flourishing despite being an objectively rough language.

I'm with you though. I don't like reviewing a lot of code because it sucks calling out a simpler approach to someone who is clearly trying to showcase their greatness. Can put mid level engineers who know better in a crappy spot.


This rule is simply wrong.

Sure there is code that is needlessly labyrinthine or confusing, but otherwise the far more important rules are

- Be concise

- Do not repeat yourself

- Never be scared of the language. Every feature is a tool that has a right time to use.

Code should be simple in these terms, and it takes a fair degree of cleverness to make code simple in these terms


Arguably the pendulum has swung too far in the opposite direction.

There's a time and a place for clever code. Clever code can save you thousands of dollars in hardware spending and permits you to do more with less. That's great, even if it comes at the expense of costlier maintenance.

That said, most code probably shouldn't be clever. Clever isn't a benefit in itself.


100% with the author here: the number of times I've worked with clever, talented people who make things as complicated as possible is too damn high. The usual result is that they leave, somebody thinks "jesus I don't want to maintain this" and their code is either ignored forever or rewritten with the complexity removed. Maybe this is done in a more verbose or ugly way, but it is also more approachable.

As a profession, we really don't give one another enough help. Coding is not about reaching some platonic form of terse expressiveness, it is a social activity, and the sooner this is commonly understood, the better.


I think sometimes "clever" is just a term for bad code that you don't like. The clever-code React example doesn't seem like a good example of clever code to me: it just seems awful! If you were to do something clever for that function, it could look like this:

  function extractDataFromResponse ([Component, props]) {
    return _.pickBy({Component, props}, Boolean);
  }
I'm cheating by using lodash, but I think this code is a fairer comparison to the unclever example that's given in the article.


Nice simplification of the “clever” code. Lodash docs for anyone curious: https://lodash.com/docs/4.17.15#pickBy

Here’s my version of the “clever” example that doesn’t depend on Lodash. It’s longer than your version but still simpler than the article’s:

  const extractDataFromResponse = (response) => {
    const [Component, props] = response;
    const dataIncludingUndefinedValues = { Component, props };
  
    return Object.fromEntries(
      Object.entries(dataIncludingUndefinedValues).filter(([_key, val]) => val)
    );
  };


"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."


I once got in trouble for using a switch statement to build a finite state machine. It turned out that language had only recently supported the switch statement and using experimental features in production was a no no.

I still hate perl so much for crap like that.


> will a junior developer, someone at the very start of their career, struggle to understand this code

That guy’s entire job is to become the expert you need. It’s a bad tradeoff to permanently handicap your experts from communicating clearly and concisely to each other, just to accommodate the start of his learning curve. It’s also not doing his career any favors, having to work with worse code for the rest of his tenure.


I stopped reading after the Haskell example, which looked idiomatic and reasonable to me. Yes it's short; that's a reason to learn Haskell. I'd much rather code in the shower than handcuffed to a keyboard.


> The hard thing is solving complex problems with simple code.

If you agree that code is a way of communicating with people (not machines), then this is a corollary of my rule: that if you truly understand a complex matter, then you can explain it in simple language to a reasonably intelligent non-expert.


> Clearly, it takes a tremendous amount of skill to solve such a hard problem with such a small amount of code.

Wrong. It is extremely easy. The code isn't golfed. It is regular J code that could be written by anyone who's looked at J for more than 5 minutes. They've not even copied it right! I was confused for a second at what gt was, but it's just a mistake from copying >.

The other paragraph is also nonsense. It's just in a different language. Don't hand it off to python (or whatever mainstream lang) programmers, hand it to array programmers and they'll do just fine. If I were to be provocative I'd say that the J is vastly easier to read write and understand than the equivalent in any other (non array) language.


Think of it as writing in a natural language. You are writing a story that someone else has to read and follow. The compiler will know what you mean if you use the right words, but you are really not writing the code for the compiler - you are writing it for other people.

Clarity is more important than performance, and in many cases the performance gain is imaginary in the first place.

Every time I write code and tell myself "I will just keep in the back of my mind why I did that", I know I am writing bad code. There should be no open loops, you should not have hidden knowledge in your brain about how the code works.


If only there were a single standard of what is considered "clever" and what is considered "simple" to which we all could agree regardless of seniority level, experience, etc.


s/harmful/irrelevant/

By the line, most code and tests will be written by AI in the next few years, with humans specifying it, checking it and checking the overall results.

The next generation will look back and ask, "you wrote code BY HAND ?" the same way 99% of us look back and ask "you wrote assembly code BY HAND ?" (and by the numbers, 99% of us don't manage memory by hand, either)


> "you wrote assembly code BY HAND ?"

The first computer I was exposed to was an IBM Schools Computer. It had no assembler. It booted into what in retrospect I suppose was a primitive memory debugger; you toggled machine code into the box as hex bytes.


Those initial two examples look like kdb. Particularly the second one. That's production code in many places where I have worked. I would send it to hell if I could.

edit: oh, J lang is a later K. That explains it. Similarly it should be sent to hell. It calculates an average in the most obtuse manner. So clever. I hope I never have to work with that one.


It's not obtuse, it's an extremely obvious way to calculate a moving average. What would you do instead?


I would not do it all in one long line. I would not do it in an exotic language. I would not do it in a language for which it is impossible to find skilled experts at a reasonable price. KDB people cost 2X anyone else with the same skills. Any normal developer can't just pick it up based on some other conventional language in a few weeks.

This is the point of the OP and I agree.

So many reasons.


The original article wasn't about the economics of different languages. It was about not using "clever" code. That J code isn't "clever", it's a very straightforward implementation of the problem, just in a language that is unfamiliar to most people. If someone wants to write an article saying no one should ever use J, fine, but that's a pretty separate article.

You said the moving average in particular was "obtuse". How? It's more verbose than it could be, to make it more readable/obvious. Like I asked, what would you do? Whenever people say something is bad, if they can't provide either a good reason or an alternative, it makes me inclined to think that it is simply a kneejerk reaction to unfamiliarity.


If you code that same thing in JS, Java, C++, Python or Rust, they all look similar. If you can achieve the same performance in a language understandable by more people, you will be better off.

Another example: Russian is much more expressive than English due to its more advanced grammar. That said, if you want to express yourself to your American peers and they don't speak Russian, you don't write in Russian. If you are all Russian, sure.

We use kdb/q in our firm but I would rather not, and will be trying to migrate away from it in the future. It is good but, bigger picture, more trouble than it is worth. At least our code is formatted and not all one-liners like Morgan Stanley's.


Whether or not someone should use J vs another language is an interesting, but entirely separate question. The original post was about rewriting things in the same language to be more 'readable', not to suggest everyone who doesn't use one of the imperative languages you mentioned should just rewrite their entire codebase, which would obviously be an unrealistic suggestion. As I said in my other comment, I have an issue with this (or at least the examples they gave), because the J code is not unreadable to J programmers.

Like you say, if everyone speaks Russian, communicating in Russian can have benefits. The same thing is true with J, or k, or any of these more obscure languages. Saying that your code is "formatted" and not "one-liners" to me suggests that you haven't seen these benefits yet, and I'd suggest reading Iverson's Notation as a Tool of Thought[1] or some of the very informative HN comments [2] about how this style of code can be very useful and productive, and yes, readable.

You still haven't answered my original question, which was how the implementation of the moving average was "obtuse". The moving average code is (>:i.$n)%~+/\n. This seems pretty reasonable to me. As the cumulative sum is ascending, you can use /: instead of i.$, but that seems less obvious. I don't know what issue you could have with this code, other than it having slightly more punctuation than a mainstream language.
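
For readers who don't know J, going by the description above (a cumulative sum divided by an ascending count), that expression computes roughly the following, sketched here in Python (my translation, not from the thread):

  from itertools import accumulate

  def running_average(xs):
      # prefix sums divided by 1, 2, 3, ... gives the average of each prefix
      return [s / i for i, s in enumerate(accumulate(xs), start=1)]

  print(running_average([1, 2, 3, 4]))  # [1.0, 1.5, 2.0, 2.5]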

[1] https://www.jsoftware.com/papers/tot.htm

[2] https://news.ycombinator.com/item?id=27223086


The biggest issue it will solve is not even technical, it is management

https://blog.pwkf.org/2022/09/18/always-optimize-for-dummies...


The problem with the intern test is some problem spaces are just complex! But for most CRUD stuff, yes.


My view: good code is necessarily boring code.


I found the first solution easier to understand, but I have this infuriating (even to myself) obsession with ternary operators that could be introducing some bias.


I like them when they actually fit on one line. If I have to scroll horizontally I will roll my eyes. And I really hate nested ternaries.

One of my pet hates is when the C# style is for opening braces on a new line, making if/then/else statements much longer than they need to be, so people try to cram as much into a single ternary as possible.


yeah, they need to be tidy. You can shoot an unsuspecting programmer through the soul with some evil ternary abuse.


My biggest beef with the former solution is that resultsEntries is unnecessarily stored in a variable, and between its declaration and use there is an interspersed declaration of a general utility function. I might have inlined that one as well. This is what harms readability the most. Code should be arranged general-to-specific, and this example violates that.

Although given that there are only two properties to process, the gopher-style assignment code preferred by the author would suffice anyway.


I don't know. It's true that simpler code is more maintainable and all that. But what happens to the fun and joy of coding? You remove the fun part, and you're left with dry processes and mind-numbing toil that suck the joy out of a passionate endeavor. I get it that many see programming as a means to an end, and that's totally fine, but why take away from those that derive an intellectual high from it?


I leave the fun to my side projects. There I write overly clever code and play with the language.

Some of these ideas end up in production if I can find a way to make them less clever-looking and more approachable.

When I was younger I wrote fun code in production. Then I had to extend it or debug it some years later and struggled to figure out what was going on. Invariably ended up rewriting it to plain code, often many lines more but with clear intent and little ambiguity.


Because I don't want to debug your clever code when we have an outage and you're on vacation


Fortunately my code is perfect so it never needs debugging.


The fun, intellectually stimulating part definitely takes a back seat in priority over keeping the lights on for customers/colleagues most of the time, except in personal/weekend projects.


You can be fun and clever on your own time. When you're doing code in a team, prefer readability over clever terseness and premature optimisation.

Doubly so if (when) you're not the one who has to come look at the code in 5 years and spend days deciphering the clever bits.


I wish “considered harmful” was banned by the HN rules.


How about banning that boring meme too?

You know "considered harmful considered harmful"

and complaining about "considered harmful" whenever "considered harmful" post appears?


Okay, but please use `let output = {}` instead. Yeah, I know, const/let is about the mutability of the variable, not its content, but if you use const I'll assume you write in an immutable style, which you don't when you do `output.Component = Component`. Using `let` is a better warning like "Watch out, mutation ahead!".


> if you use const I'll assume you write in an immutable style

Why make assumptions that don’t make sense in the language you’re using?


this just isn't the usage I observe; const/let is typically checked by the linter and auto-fixed. It's just another idiosyncrasy of JavaScript.


I don't know if it intrinsically qualifies as clever, but point-free programming in Haskell can get really hard to understand really fast.


>tip: Euler is pronounced “Oiler”. Impress your math friends with this knowledge.

This is what I learned today!


Philosophical question. I learned the pronunciation of Euler long ago, but I still sometimes use the incorrect pronunciation because I want to make sure people who don't know the correct pronunciation understand what I'm about to say.

Is the correct pronunciation of Euler equivalent to clever code?


Now do Fermat and Erdos!


Honest question, how else would you pronounce it?


e-you-ler


"To maintain or not to maintain, that's the question" - Shakeshands.


I spend much of my time looking at dusty old code, arguing with my former self, undoing tangled knots of hitherto maligned bits of supposed genius.



