Hacker News

I don't buy this idea that the programming language you use determines the quality of your work. There is no evidence to support this. These extreme programming fads were invented to sell books.

If novelists followed the same approach as programmers and they kept trying to find the perfect language to write their books in, they wouldn't manage to publish a single interesting novel.

A language cannot prevent someone from making stupid mistakes. At best, it solves one set of familiar problems and replaces them with a new set of unfamiliar (but equally bad) problems; then you have to learn to cope with those new problems until the next hyped-up language fad comes along claiming to solve those new problems and then you repeat the cycle.



There's a concept known as linguistic relativity (https://en.wikipedia.org/wiki/Linguistic_relativity) that states that the language you use affects what thoughts you are capable of having.

We don't program in assembly language because it is very tedious and error prone. Higher level languages absolutely do eliminate classes of errors that are only possible at the assembly language level (accidentally altering the stack and affecting the jump return address for example). Another example would be garbage collection making it impossible to create a certain class of errors.

Another example would be doing long division with Roman numerals instead of Arabic numerals. The human brain can only process so much before it becomes overloaded. By "compressing" thoughts using new vocabulary and concepts you can reason about ideas that are beyond your normal cognitive limits.


I've found that being able to think in one programming language even affects the thoughts you have when working in other programming languages.

For example, after learning ML, I started writing much more functional code even when writing Python. I also started using abstract types more, even in C.


> A language cannot prevent someone from making stupid mistakes.

This can be refuted in one line:

    if(a = b)
Languages that use := for assignment, or some other operator, don't have the mental overload of equals being used both for assignment and for equality checks. Typos like the one above become impossible.

Other languages just don't allow assignment in 'if' statements.

As for floating point, a language could even overload the default equality check so that it is configurably fuzzy when comparing floats.

And of course, languages already prevent stupid mistakes. Even the humble C compiler properly arranges the stack for us when exiting a function, putting all return values in the appropriate place based on the platform's ABI and restoring registers that are supposed to be preserved.


>> Typos like the above become not possible.

In my 15 years of programming every single day, I have made this kind of typo maybe 2 or 3 times in total, and each time it took me less than a minute to identify and fix the problem.

On the other hand, the amount of time that it would have taken me to type out that extra ':' a few million times would have been much more costly.


This is a really arrogant statement. In C, `if(a = b)` could cause incredibly subtle bugs. The problem was prominent enough that it got addressed in all the best-practices books, like "Writing Solid Code", with a style later termed "yoda conditions" - a style that spread in spite of being acknowledged as less readable, because it forced a compile error more often. It's hard to think of many other types of bug with that kind of significance, and you just dismissed it out of hand with "I don't make those kinds of mistakes".


If the problem is so bad, why not simply use a linter? Actually, almost all instances of this typo can be caught by a simple regular expression.


Well, that's pretty much equivalent to the language preventing bugs, for the purpose of this conversation. You can imagine similar cases that would be harder for a linter to pick up. Things like type errors, or accessing the wrong side of a union - in C++ people used Hungarian notation for a good while to try to make these kinds of errors more detectable; now some languages have tagged unions/sum types to make them nearly impossible. Where a linter is distinct from the language in catching syntactically evident errors is that, even if I use a linter for my own code, that isn't the same as having buy-in from the whole team to block any code that doesn't pass the linter from shipping to production; language syntax, on the other hand, is implicitly agreed upon.

Addressing this specific example, historically, running a linter all the time wasn't always practical, and `if(a = b)` was used intentionally a lot, e.g. to inline a check for a null pointer with an assignment.


> I have made this kind of typo maybe 2 or 3 times

Pardon my French, but that is some straight-up horseshit.

Even if you were the best programmer in the world, I would be skeptical at you saying that you make that kind of typo less than two or three times a year or per project, much less over the course of fifteen whole years.


I can see it, in the beginning I may have made that mistake more often for a while, but I don't think I've made this mistake in years. The difference is absolutely ingrained, just like I wouldn't mistake plus for minus. Perhaps it's different for people who read a lot of math.

I would agree that having equals-assignments as valid expressions is bad, I just don't think it's such a big deal. If you are using such a language, you should be using a linter anyway.

Using := would be very annoying for me, even though I can see the argument and would favor it in theory.


How about JavaScript and '==' versus '==='?

Or going back and forth between languages that use '==' and JavaScript?

Let's say over all of history there have been a million C/C++ programmers, and each of them has made that mistake ~5 times, in total.

And let's be generous and say that 95% of the time it was found before it got anywhere near production.

That is still 250,000 bugs introduced.

Not to mention how much time is spent tracking down the bug once it has been in the code base for a while. If it goes unnoticed for a couple of weeks, debugging becomes non-linearly harder.


> How about JavaScript and '==' versus '==='?

> Or going back and forth between languages that use '==' and JavaScript?

That's a mistake I make often, but it's exactly because JavaScript is different here from all other languages that I use.

Let's say I had to use a language with ':=' as assignment; I would still accidentally type '=' all the time.

> Let's say over all of history there have been a million C/C++ programmers, and each of them has made that mistake ~5 times, in total.

> And let's be generous and say that 95% of the time it was found before it got anywhere near production.

> That is still 250,000 bugs introduced.

Well, so what? That's 250,000 out of maybe a hundred million bugs. I'm not saying it's good language design to have this behavior, quite the opposite. I'm saying it's not a big deal, especially considering all the other footguns in these languages. It's also not like these languages are going to change it.


So you admit that he refuted your contention that

> A language cannot prevent someone from making stupid mistakes.


Yes, but the total input requirement must be considered too.

A very expressive language may carry that ':=' overhead while also providing many robust and useful operators.

Yes, the result may look like line noise, but it was uber-efficient to create in basic cost terms.


Best of both worlds:

    (let [x 1]
      (if (= x 1)
        ...))


While the language can prevent certain errors on its own, errors can also be found with static analysis and testing, and prevented with paradigms and experience.


I am personally more likely to forget the ':' than a second '='. Happened all the time when I was writing PL/SQL.


> I don't buy this idea that the programming language you use determines the quality of your work. There is no evidence to support this.

Then I have a thought experiment. If the language cannot determine the quality of your work, I would request that you write Pong in Brainfuck, and I will write Pong in, let's say, JavaScript. Then we'll compare the quality of each implementation, and switch languages. Whoever wins on quality in the first round would be expected to be the better developer, and so should win the second round handily.

Now that we've established that the quality of your work does in fact depend on the tools used to implement it, can we please reframe your original point in less absolute terms? Perhaps the programming language is not as important as people claim, and that given two projects in two languages of similar quality, the output should also be of similar quality?

How do we define similar quality? Is incremental improvement a worthwhile goal? Do different tasks require different features? I would much rather write a shader in HLSL than in BASIC.


You're making too many extreme claims to be defensible. You might have a point for recent language fads, but choice of language absolutely does have an impact on your work and the nature of the bugs you're going to be dealing with. If you choose to use a language with a poor ecosystem, then you're going to have to build a lot more tools in house that are going to be subpar compared to more vigorously maintained libs and frameworks in another language. If you decide to write in JavaScript, then you're going to introduce bugs that TypeScript would have statically checked against. It all depends on the nature of your problem and what tradeoffs you're willing to make.


It is a complicated area, you might find this paper interesting:

https://web.cs.ucdavis.edu/~filkov/papers/lang_github.pdf

It tries to measure the defect rates between programming languages. To say it doesn't matter is too simplistic, in my opinion. I think matching the tool to the task is somewhat important, but often, as you suggest, you can get by with suboptimal choices and choose based on familiarity or aesthetics. People would probably do better by spending their time learning how to think critically about code rather than learning a new language every year.


It depends on what you're trying to do.

A language without strong types might make tooling more difficult, not catch certain errors at compile time, make refactoring harder, and make working in teams harder. Or it might have little impact on you.

A language without a GC will be more complicated for certain tasks, as now you have to manage the memory. It's harder for beginners, and doesn't make sense in many environments. Then again, a lack of GC may be necessary if you do embedded work.

A language might let you write something in one line of code, while another would take 20 lines for it. Is that more expressive? Maybe it's harder to maintain?

Do you like white space? Maybe you do, maybe you don't.

Many interpreted languages are slower than their native counterparts. Then again, if your application doesn't need much compute performance, maybe it's worth it? Or maybe not.

It's all pros and cons. You don't want to make a "hello web!" web application in assembly when Ruby/Python/etc. can do it in a few easy-to-read-and-understand lines.


Try writing something in machine code?


> A language cannot prevent someone from making stupid mistakes.

Completely prevent all possible mistakes? Of course not.

However, it certainly can make many mistakes either impossible or much rarer.


I think of languages at work as dislike and don't dislike. Basically if something gets in my way, I don't like it.

I do love Elixir, but for the kind of work I am doing currently, I'd have a hard time selling its benefits over just about any other language. I do dislike Java's verbosity, though, so unless Java brings something other options lack, I feel that too much code is a legitimate reason to push back on it.


History is littered with purpose built languages and notation for mathematics, logic, music and law etc. A language guides (or limits) one's thoughts, offering a particular set of abstractions. Functional programming is very close to logic and particularly amenable to mathematical reasoning and static analysis - exactly what one needs in order to safely compose large programs from small ones.


And yet there's a thousand vulnerabilities from C/C++ code being used out in the wild, stemming from bad memory management, whereas we don't see those with many other languages.

Yes, there are still other types of bugs that are language-agnostic, but a language definitely can reduce the chance of creating a certain set of bugs in the first place.


That's why I write all my programs in 16-bit real-mode 8086 assembly using MS-DOS's DEBUG.EXE. Aside from being fired from my job for this destroying my productivity and making my code unreviewable and full of security issues, everything is great, because there are no differences between programming languages at all!


Human languages had a few thousand more years of optimization going on. By now they're pretty decent for expressing things people want to say.


> I don't buy this idea that the programming language you use determines the quality of your work. There is no evidence to support this.

This is false, unless I misunderstand you.

I track ~10,000 programming languages. Are you willing to bet that I can pick one of these languages at random, give you a task, and your output won't be affected by the language I pick?


Mostly, no. Usually it all comes down to understanding and implementing business requirements correctly. Implementing those is a somewhat straightforward (also tedious) process in any language.

The main impediments, in my opinion, are:

- knowing the standard library

- knowing the available libraries

- syntax quirks

in that order.

The main advantages you gain from using a language are familiarity with it and the three points above.

I will be much more productive in JS or Python than in Haskell just because I'm already familiar with those two and I'm not familiar with Haskell.

The rest of the advantages come from the ability of a language to express a programmer's thoughts readably, concisely and in a maintainable manner (where "maintainable" means "understandable by a new colleague coming into the project six months later").


Interestingly, English is super expressive and allows many ad hoc expressions to be understood with reasonable fidelity.

Many other languages are gendered and or structured, both of which do impact the work, both good and bad.

Great English can be written right along with crazy expression, borrowing bits here and there, even words made up on the spot.

Do we need crazy expression in programming?

Technically, no. But many want it and practice it in creative ways not unlike writers do with form, structure and style.

With a story, the bar seems to be whether readers got value, even with imperfect understanding or a reread or two.

With a program, the bar is: does it do the job, and does it do so consistently, securely, etc.?

A look at things like COBOL, FORTRAN shows us design can impact quality. It is baked in.

Skill matters too.

An inexperienced programmer may produce higher quality earlier and more easily given such an environment. Adept ones can use most anything and do the same.

But they may also be size-coding, playing in the demoscene, as much as they are cranking out business logic and computation.

TL;DR: Language absolutely does impact quality, but so can experience and process. No easy outs, no one size fits all.


> I don't buy this idea that the programming language you use determines the quality of your work.

I don't buy into fads either, but this is not a fad. It is common sense. The quality of your own skill and the tools you use affect the overall quality of your work. To say that one has no effect is illogical.

Just look at the extremes of your statement. If it were true, I should be able to write a triple-A game in assembly language with no loss of quality, since the language supposedly doesn't affect my work. Obviously this is not true. If I tried to write one in assembly, the quality would be subpar.

> A language cannot prevent someone from making stupid mistakes.

In Elm it is not possible to have runtime errors. The program cannot crash, period. The only types of error you can have in Elm are compile errors and logic errors.

This is an example of a language preventing stupid mistakes.



