
I tend to disagree, only because I think that certain things are fundamentally better.

For example, a foreach eliminates a whole class of problems that a for loop introduces, by giving you a reference to each item instead of a counter variable (which risks out-of-bounds errors, etc.). And higher-order functions eliminate a whole class of problems that foreach introduces, by helping the user think declaratively and allowing for things like composability and parallelization. These are the "low-hanging fruit" of programming languages, and it astounds me that a lot of people haven't even made it to foreach yet.
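As a minimal sketch of that last step in Haskell (the names and the price example are mine, just for illustration):

    import Data.List (foldl')

    -- Double a list of prices and total them. No counter variable,
    -- no bounds to get wrong: the shape of the data drives the iteration.
    doubleAll :: [Double] -> [Double]
    doubleAll = map (* 2)

    total :: [Double] -> Double
    total = foldl' (+) 0

    -- And the pieces compose:
    doubledTotal :: [Double] -> Double
    doubledTotal = total . doubleAll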

What really gets me though is that compilers could trivially analyze side effects and transform code to use these better abstractions. We should be able to write a for loop and end up with higher-order functions in the compiled code if the outcome is equivalent. Then we could trivially parallelize code and be running orders of magnitude faster than we are now.
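To make that concrete, here is roughly what such a transformation could target, written by hand in Haskell with the parallel package (no compiler is doing this rewrite for you here; `expensive` is just a stand-in for some pure per-element work):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Stand-in for some pure, side-effect-free per-element work.
    expensive :: Int -> Int
    expensive n = sum [1 .. n]

    -- What a "for loop over an array" expresses, as a map.
    sequentialSum :: [Int] -> Int
    sequentialSum xs = sum (map expensive xs)

    -- Because `expensive` has no side effects, the same map can be
    -- evaluated in parallel without changing the result.
    parallelSum :: [Int] -> Int
    parallelSum xs = sum (parMap rdeepseq expensive xs)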

In fairness, this stuff is much easier in functional programming (FP) languages. So then, why don't imperative languages compile down to FP internally?

These simple examples show some of the fundamental prerequisites that software engineering somehow missed. And I think that saying all programming languages are roughly equivalent caused us to accidentally overlook some obvious truths.

My perfect language would have the embarrassingly parallel vector processing of something like MATLAB, the copy-on-write data handling of something like Clojure and Redux, complex concurrency handled by lock-free atomic functions and channels/streams like Go/Elixir/Erlang, all-immutable variables to encourage higher-order functions instead of object-oriented programming, Python's syntactic sugar for slices and the like, the homoiconicity of Lisp, and the speed of C. Basically it would look like immutable JavaScript transpiling to Lisp. It would do without things like monads (or handle them more transparently somehow), so that the user can always think in terms of synchronous, blocking, one-shot execution with no side effects, and the code can be viewed as a tree that can be dropped directly into a genetic algorithm. In my head, code looks like a mix between this and a spreadsheet. Right now, 90% of my time goes to converting that to whatever mediocre language and framework I'm stuck using.

Writing this all out has shown me that my ideal language would probably piss a lot of people off. So maybe you are right after all!



What you described actually exists already, and it's called... Haskell!

You need COW things like in Clojure? They're right there in unordered-containers. You need SIMD processing of vectors? That's the vector package. Parallel processing? Parallel strategies.
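A small sketch of the first two (the map-bumping and sum-of-squares examples are mine; persistent maps come from unordered-containers, unboxed vectors from vector):

    import qualified Data.HashMap.Strict as HM   -- unordered-containers
    import qualified Data.Vector.Unboxed as VU   -- vector

    -- "Updating" a map returns a new map; the old one is untouched,
    -- and unchanged structure is shared rather than copied.
    bumped :: HM.HashMap String Int -> HM.HashMap String Int
    bumped m = HM.insertWith (+) "hits" 1 m

    -- Unboxed vectors keep elements contiguous in memory, and fusion
    -- removes the intermediate vector built by `map`.
    sumOfSquares :: VU.Vector Double -> Double
    sumOfSquares = VU.sum . VU.map (\x -> x * x)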

Channels like in Go? You bet, they're on Hackage, in the stm package. Green threads are even greener in Haskell than in Erlang and Go (roughly half the per-thread overhead of Erlang's).
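A minimal sketch of the channel-plus-green-thread style (a made-up producer/consumer, not anything from the package docs):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.STM (atomically, newTChanIO, readTChan, writeTChan)
    import Control.Monad (forM_, replicateM)

    main :: IO ()
    main = do
      chan <- newTChanIO
      -- A lightweight (green) thread acting as a producer.
      _ <- forkIO $ forM_ [1 .. 10 :: Int] $ \i ->
             atomically (writeTChan chan i)
      -- The main thread consumes from the channel, Go-style.
      xs <- replicateM 10 (atomically (readTChan chan))
      print (sum xs)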

I keep repeating that what is usually a language feature in other languages is often just a library in Haskell.

Which, as you rightly noted, pisses many people off.


> What really gets me though is that compilers could trivially analyze side effects and transform code to use these better abstractions. We should be able to write a for loop and end up with higher-order functions in the compiled code if the outcome is equivalent. Then we could trivially parallelize code and be running orders of magnitude faster than we are now.

> In fairness, this stuff is much easier in functional programming (FP) languages. So then, why don't imperative languages compile down to FP internally?

I prefer FP languages because code written in them is more readable to me: recursion is often more comprehensible than looping while also being more general, immutability lowers the anxiety of tracking the values of variables/bindings, and in general "reasoning via equality" is easier. I can even do it with a piece of paper. Good luck programming with pen and paper in an imperative language. So that's my preference.
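For example, the kind of pen-and-paper equational step I have in mind (a standard list sum, nothing project-specific):

    -- Recursive definition of sum over a list.
    sum' :: [Int] -> Int
    sum' []       = 0
    sum' (x : xs) = x + sum' xs

    -- On paper, each call is rewritten by substituting equals for equals:
    --   sum' [1, 2, 3]
    -- = 1 + sum' [2, 3]
    -- = 1 + (2 + sum' [3])
    -- = 1 + (2 + (3 + sum' []))
    -- = 1 + (2 + (3 + 0))
    -- = 6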

But... the benchmarks everybody has seen suggest that things like garbage collection (pretty much a must with higher-order functions), disregard for modern CPU cache locality (a lot of pointer indirection), and all sorts of other things I haven't the slightest idea about are costly for performance. So much for "[If we just converted everything to FP, t]hen we could trivially parallelize code and be running orders of magnitude faster than we are now." Also: parallelizing is a lot more complicated, performance-wise, than "I'll run it on n threads to do it n times faster". Sometimes you will slow things down this way. It's weird, but when you start measuring things, you find that your intuition is wrong all the time. We could probably attribute that to all the complexity accumulated in the lower layers (CPU, OS) that we do not understand.
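One concrete illustration of that pitfall (a sketch with made-up names, not a measured benchmark): sparking parallel work per tiny element usually loses to chunking, and can even lose to the plain sequential map.

    import Control.Parallel.Strategies (parList, parListChunk, rdeepseq, using)

    cheap :: Int -> Int
    cheap x = x * x + 1

    -- Naive: one spark per element. The coordination overhead can easily
    -- exceed the cost of the work itself.
    naive :: [Int] -> [Int]
    naive xs = map cheap xs `using` parList rdeepseq

    -- Chunked: amortize the overhead over larger units of work.
    chunked :: [Int] -> [Int]
    chunked xs = map cheap xs `using` parListChunk 10000 rdeepseq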

That said, converting to FP is kinda what we do (but not really). After all, a lot of compilers use SSA[1] to make analysis of dependencies between variables easier.

[1]: <https://en.wikipedia.org/wiki/Static_single_assignment_form>
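A tiny hand-made illustration of why SSA helps here: once every reassignment is renamed so each variable is defined exactly once, the imperative code reads like a nest of immutable let-bindings.

    -- Imperative source (pseudocode):
    --   x = a + 1
    --   x = x * 2
    --   return x + b

    -- The same computation after SSA-style renaming, which is just
    -- ordinary immutable bindings:
    f :: Int -> Int -> Int
    f a b =
      let x1 = a + 1
          x2 = x1 * 2
      in  x2 + b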



