Why Learn Haskell? (2018) (stanford.edu)
207 points by allenleee on March 1, 2021 | 113 comments


Personally, I've written zero lines of production Haskell code, and expect that this will still be true when I retire. Despite that, learning Haskell was probably one of the best things that I've ever done. The learning process sent me down lots of interesting rabbit holes of language design, and certainly made me a more disciplined developer in other languages. The lambda calculus and predicate logic discussed in this article made for one of the enjoyable rabbit hole trips induced by Haskell, but it was only one of many.


I’ve been using Haskell in production for three years and it’s been great so far!

I think these concepts translate to other languages fairly well, and are easy to learn in Haskell (a small sketch follows the list):

- Newtypes (Languages like Kotlin/Swift have support for them, and you can kinda get close with lots of classes in Java)

- Purity (can write “pure” code in most languages)

- Algebraic Data Types (help with thinking about modeling data, works in eg TypeScript)

- Maybe / Optional types (Now very popular in other languages)

- Parser Combinators (work in eg JavaScript)
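
A tiny sketch of the first few of those in Haskell (illustrative only; UserId, Payment and describe are made-up names):

    -- Newtype: a zero-cost wrapper so user ids and plain Ints can't be mixed up.
    newtype UserId = UserId Int deriving (Show, Eq)

    -- Algebraic data type: a payment is exactly one of these shapes.
    data Payment = Cash | Card String | Invoice UserId deriving Show

    -- Maybe: the absence of a value is explicit in the type,
    -- and the compiler makes you handle it.
    describe :: Maybe Payment -> String
    describe Nothing            = "no payment on file"
    describe (Just Cash)        = "paid in cash"
    describe (Just (Card n))    = "paid by card " ++ n
    describe (Just (Invoice u)) = "invoiced to " ++ show u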


Undeniably, lots of ideas from pure, lazy, typed functional programming languages (some pioneered by Haskell) have influenced mainstream languages. Thus, having learned Haskell, one can adapt to mainstream languages fairly quickly.

However, I think there are still some extremely interesting ideas that can only be implemented in Haskell and some related languages (e.g. Joy) which may prove very useful.

One can see a glimpse of that in Compiling to Categories [1]. In general, programming in a point-free style may lead to lots of advantages we don't understand well yet [2-3]. I think, for example, that synthesizing point-free programs could be a cool path towards AI.
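
To make "point-free" concrete, here is a trivial illustration of my own (far simpler than what those papers do):

    -- Pointful: the argument xs is named explicitly.
    sumSquares :: [Int] -> Int
    sumSquares xs = sum (map (^ 2) xs)

    -- Point-free: the same function as a composition, no variables in sight.
    sumSquares' :: [Int] -> Int
    sumSquares' = sum . map (^ 2)

The point-free form is the one that is closer to the categorical style those papers exploit.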

[1] http://conal.net/papers/compiling-to-categories/

[2] https://www.cs.nott.ac.uk/~pszjlh/pcalc.html

[3] https://www4.di.uminho.pt/~jno/ps/pdbc.pdf


I second this statement. I find that my code across different paradigms (imperative/OO/FP) has become a lot cleaner as a result of learning a purely functional language that makes it impossible[0] to call an impure function from a pure one, among other things.
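
Concretely, the enforcement is just that IO shows up in the type; a tiny illustration of my own (the [0]-style caveat being escape hatches like unsafePerformIO):

    pureLen :: String -> Int
    pureLen = length                 -- fine: no IO anywhere in the type

    impureLen :: IO Int
    impureLen = length <$> getLine   -- fine: the IO is visible in the type

    -- There is no safe function of type IO a -> a, so a pure function
    -- simply cannot call getLine: the compiler rejects it.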

Also, since Haskell has a lot of advanced type system features, whatever feature gains traction in other languages (traits in Rust and Swift, optional chaining, async/await) I find much easier to learn, as they are usually specializations of more general concepts in Haskell.

And as someone who also studies mathematics, the connections between functional programming and areas like logic, order theory and category theory run deep: programs, too, can become objects of mathematical study, with their correctness proved and their refactorings shown sound.

[0] with exceptionally rare caveats


I completely agree with this sentiment.

Writing good Haskell always seemed harder to me than writing something like Python. I'd have to think a lot more before beginning to type, but it always seemed worth it, as there was always the potential for a beautiful solution to any problem.

As a result, before I write code in any production language I give a little more thought to what I'm about to write, which I think ends up with a better result.

Heck, even if Haskell just gave a developer an appreciation for basic functional programming, that alone would benefit them for years, in my view.


Having a background in Haskell (and C) made me able to jump in and be productive in Rust within hours. And that's a language that is considered to have a steep learning curve.

Never used it in production either, but I did use Haskell to ace some of my advanced computer science coursework with less time and fewer lines of code than my peers.


I concur; for me the journey was Java -> Clojure -> Haskell -> Rust. Each language introduced concepts that made it easier to learn the next. So far I've had jobs in all of those languages except Rust, but that time will come too. I still prefer Haskell for everything that doesn't involve GUIs.


Please do write about the other rabbit holes too. If not a long-form article, a short compilation of the holes (and links, if you can find the time to gather them) would be very useful. I've been interested in and flirting with Haskell for a while, but have become too time-poor over the last 4-5 years to do anything even at hobby level.


I don't know Haskell, but I'm pretty certain that something I recently discovered also applies there: Dependent Types

In short, it's a way to restrict the data types to such a degree that the type checker can do compile-time checks of array sizes and other interesting things, as those aspects are directly expressed as distinct types depending on each other. The principle requires a pretty powerful type system and is cumbersome to use, but I found it fascinating regardless.
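
A classic taste of this, which GHC Haskell can already approximate today with the DataKinds and GADTs extensions (my sketch, not from the comment):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat   -- type-level Peano naturals

    -- A vector whose length is part of its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    -- Safe head: calling this on an empty vector is a compile-time error.
    vhead :: Vec ('S n) a -> a
    vhead (VCons x _) = x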


Ergonomic dependent types are probably coming to Haskell in a couple years. It already has them in a way but it's currently harder than it will be in the future.

https://gitlab.haskell.org/ghc/ghc/-/wikis/dependent-haskell


Check out type-driven development (there are books on using Idris for this) -- it blew my mind! You do what you're talking about to encode business logic that's checked at compile time.


I had checked out F# for a while, some time earlier. IIRC, it has some features like that, for encoding business logic in code/types. Probably some other languages do too.

I had mainly used Scott Wlaschin's site F# for Fun and Profit, and MS sites, for F#.

Edit: Also had looked at OCaml, via Real World OCaml, and also much earlier, via INRIA / O'Reilly sites / books / software.

F# is based on OCaml.

Both resources are free to read online.


Hmm.. interesting.. I was and still am excited about the type inheritance. Alas, I code in Python for a living.. :-(


Oops, I meant to say type inference.. Ah well, fat fingers.. or is it misconnected synapses between Broca's area and the motor cortex?? shrug whatever.


As others have said: same!

In addition to expanding my thinking and making me a better developer in other languages, I think Haskell has made me expect more of programming languages. In a positive way. Things can be better than they are. That doesn't mean things should be Haskell. But they can be better.


Had the same experience while learning Scheme. Lazy eval, continuations, continuation-passing style, coroutines, OOP (yes!), lambda stuff, dynamic/lexical scope, closures, thunks, tail recursion (not all of that is specific to Scheme of course, but you get the idea)...

I'm still not OK with monads though :-)


The closest thing to Haskell in production I've worked on was pure-FP Scala projects.


Since many people are agreeing with you let me disagree for the sake of doing so:

I’ve used Haskell in production (not a very big service, but anyway) and while I enjoyed it, I think it was at my employer's expense. Spending my time studying lambda calculus and suffering through GHC's slow compile times isn’t a very rewarding experience (for me personally). IMO I’m still a better programmer for knowing the concepts, but not by much. The field is bigger than Haskell, and IMO writing production Haskell code only skims the surface of lambda calculus/theory.


> The lambda calculus and predicate logic discussed in this article made for one of the enjoyable rabbit hole trips induced by Haskell, but it was only one of many.

... aren't those very basic things that are taught in the first years of any comp. sci. degree?


Personally, I love Haskell and wish I could use it more. This article does not do the language justice when it comes to convincing people to invest time in learning it.

Is this like the curse of teaching monads? Once you know them you lose the ability to teach someone else about them?

> If we sincerely ask "why learn Haskell?", then we wind up learning Haskell!

No. We wind up doing a search for "Why learn Haskell" and unfortunately this article might pop up. And if we don't see a benefit or pain point being removed we aren't going to learn it.

They don't even make an argument for why. They just jump into a philosophy lesson on logic. That's not an argument for learning Haskell and is completely tangential.

The basic rules of copywriting require that you understand your target audience and make a case for how their life will be better.

> The gold standard is a logical reason.

LOL. No, it's not. The gold standard is an emotional reason. Perhaps this is why Haskell has not taken off. What pain point does it solve or what benefit does it give you?

For me personally, it has expanded my thinking and allowed me to write clearer and more concise code in other languages. It has allowed me to think in terms of new concepts that increase the reach of what I am able to do. Its type system provides safety as well as making the intent of programs clear. Pattern matching allows you to specify your intent much more easily and conveniently than the typical imperative approach. The compiler is your friend and reminds you if you fail to take into account all possible scenarios (total functions). The syntax may seem strange at first, but I have come to love it and wish other languages had adopted such a clean syntax devoid of visual clutter.
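
A small illustration of that compiler nudge (my example): with -Wall, which includes -Wincomplete-patterns, GHC flags the missing case here.

    {-# OPTIONS_GHC -Wall #-}

    data Shape = Circle Double | Square Double | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Square s) = s * s
    -- warning: pattern match(es) are non-exhaustive: Rect is not matched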

The gold standard reason for me is that I am able to tackle, think, and solve higher levels of complexity than are possible in other languages. I am able to express myself much more fluidly and I am able to reuse code to a much higher degree.


If you don't care about engineering, and are in the humanities, philosophy or theoretical areas of the social sciences where you use logic, this article actually makes a great case for why you'd care about learning any programming language at all. They help you prove theorems. Haskell is a gateway drug to Coq or Lean. I don't know that someone who already programs a lot is the target audience for something like this.


> LOL. No, it's not. The gold standard is an emotional reason.

Maybe the emotional reason is a warm fuzzy standard?


Reading Ben Lynn's Haskell website was one of the things that got me out of a rut with learning Haskell and showed me how expressive, yet concise, it is. There are a lot of advanced and beginner Haskell resources but not much intermediate. I highly recommend his Haskell series, Lambda Calculus series[1] and (my favorite) compiler for Haskell series[2].

It's as if SICP were redone in Haskell in the 21st century.

[1] https://crypto.stanford.edu/~blynn/lambda/

[2] https://crypto.stanford.edu/~blynn/compiler/


Filling in your [0]th link. Thank you so much for this pointer.

[0] https://crypto.stanford.edu/~blynn/haskell/


It's a great language to learn as a gateway for language design, but I think what turns people off are unrealistic expectations.

Haskell has warts, as do all languages. Something that's tedious in one language might be a breeze in another. In Haskell, most of the baggage I've seen is around

- Cabal: it seems the best advice is to just start with Stack

- Delayed exceptions: due to lazy evaluation (which is a really great feature once you get used to it!), the site where an exception surfaces may not be anywhere near the site that actually generated it. For example, if you call head on an empty list, then return that as a result only to have it be used far later, it will go "boom" at that later time. This is also true for I/O things: perhaps a file handle way deep down in some thunk was closed due to an error, so some code that is not equipped to handle errors gets an exception (tiny demo below).
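A tiny demonstration of that delayed "boom" (my example):

    bad :: Int
    bad = head ([] :: [Int])   -- no exception here: the thunk is not forced yet

    main :: IO ()
    main = do
      putStrLn "still fine..."
      print bad   -- "Prelude.head: empty list" fires only here, far from head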

Michael Snoyman has a great series of blog posts about this [0], i.e. "Haskell the Bad Parts 1-3". Most of this is managed by not using some of the defaults or with practice. Just be aware that like any language, there will be areas that may not have the greatest behavior.

That said, I think these confusing behaviors are very limited in Haskell, much more so than in C++ for instance.

[0] https://www.snoyman.com/blog/


The biggest wart for me is the complexity introduced by language extensions, many of which can significantly alter the behaviour and feature set.

There is not one Haskell, there are many flavors of Haskell, and every file can be a different subset because extensions can be toggled by file ( `{-# LANGUAGE <Extension> #-}` ).

My second biggest wart is the heterogeneous nature of the ecosystem due to lack of established paradigms, and the heavy use of complex type system abstractions in some libraries (plus the common lack of good documentation).

I love Haskell as a language for the learning experience, but I would not use it in a professional/production environment.


I agree with you that the many flavors of Haskell are problematic when it comes to uniformity. C++ and Scala have similar challenges. The old trope of every C++ shop using their own unique subset of the language is pretty relevant here. One thing in Haskell's favor though...with the language extension paradigm you have a very clear and tangible path to enforcing uniformity in your codebase that you don't have in C++ or Scala. You can use simple code analysis tools (I believe hlint already has this functionality, but just grep is sufficient actually) to enforce your team's choice of language extensions.


Many build tools also let you configure the common set of extensions. You shouldn't be forced to set them on a per-file basis.

There's finally an effort to update the Haskell spec, and many of the most common extensions are slated to be included, so soon we won't have to use so many for "common Haskell".


> There's finally an effort to update the Haskell spec

That's great to hear!

But in the future it would also be great if Haskell took a route similar to Rust.

In Rust, extensions (called features) only work on the nightly compiler. On stable and beta compilers, they produce an error. Only once they are officially accepted and finalized will they work on stable.

That gives both users and compiler/std lib developers the freedom to experiment, but prevents a proliferation of a complicated feature matrix and centralizes the ecosystem on a coherent language spec.

This has worked out really well. In the early days many users were stuck on nightly because of essential features, but by now libraries are expected to work on stable.


I'm glad that works for Rust but I don't see it being compatible with the way Haskell has been developed.

Haskell the language is defined by a specification called the Haskell Report. The last one was Haskell 2010 [0].

The news is that the next version of this report will incorporate advances from several GHC-specific language extensions.

How useful this is in modern Haskell remains to be seen. As far as I can tell, the de-facto Haskell compiler is GHC. Hugs is unmaintained, and UHC and LHC are mostly experimental or in development. However, without a spec, we can't get innovation from alternative implementations of the language.

[0] https://www.haskell.org/onlinereport/haskell2010/


If you are interested in learning Haskell, do not start with Snoyman's blogs. They are good, but not something a true beginner should be bothered with.

I can recommend https://haskellbook.com as an introductory resource.

If you want to dabble with something simpler/cleaner than Haskell, try Elm. It is just for browser apps, but extremely powerful and beginner friendly. You do not even have to install anything when you use this online tool: https://ellie-app.com/new


> I can recommend https://haskellbook.com as an introductory resource.

As a counterpoint, I found that book unbearably boring and abandoned it. I continue to be interested in Haskell so I may eventually pick it up again but I’ll skip (or at least skim) the first chapters.

I followed the book’s development (written by a programmer and a novice) and had high hopes for it, so it was a great disappointment when I got to read it.


Interesting, such a different experience. As it's hard to answer "what did you find boring", I want to ask you: which programming book did you find super juicy (the opposite of boring) to read?


I had a bad experience trying to get into Elm. It's a nice enough language, but I found the community to be less nice. This was about when version something.19 was released.


Interesting! Elm has by far the nicest programming community I've ever been a part of, specifically the Elm Slack[0]. All my stupid questions get answered immediately :)

[0]: https://elmlang.slack.com/


My "big turnoff" from Haskell is the highly disorganized standard library / Prelude. I'm aware that alternatives exist, but the terminological chaos of standard Haskell just reminds me of Perl and it makes me not want to use it.


I think Cabal is "fixed" these days, at least on Linux and FreeBSD; it behaves approximately like Cargo now.


Stack on Windows is a nightmare :( Cabal is good cross-platform, really.


What problems have you run into with Stack on Windows?


I had a period of a couple of years in uni when I was very into Haskell and using it whenever it made sense to me (which was a lot).

I haven't touched it in many years (save for the occasional amending of my xmonad config) and would need a lot of time to get back to reading it properly, let alone coding in it.

Still, I'm very happy I did that and look back at it fondly. It got me grokking the functional mindset and paradigm, which still heavily influence the design choices I make and the code I write.

Today, I'd push for learning Idris instead, or maybe using Haskell as a brief intro to Idris. While young, it's a lot more approachable and makes working with statefulness and IO a lot more intuitive, without sacrificing purity.


Can you explain why someone would strive for "purity"?


When a variable can be reassigned, it adds the complexity of time dependence: if you don't keep track of the order of values that variable was assigned, then you probably can't understand the program. The problem is compounded by the size of the scope the variable exists in. That's why we avoid global variables: globals force us to track the order of updates from potentially anywhere in the code. The problem is further compounded if there are multiple globals that depend on each other; then you need to track the relative order of updates across all of them. That's the worst-case scenario.

In OOP you try to limit the scope of state: private variables inside an object can be updated, but only accessed directly from inside the object. Still, those objects are inherently stateful. In order to understand an object (how you can use it, whether it is working correctly) you are still forced to understand where/when it is in the timeline of updates.

Purity means that if you understand a variable or object in one place in your code then you understand it from anywhere in your code. You don't need to know how it was used before because it can't be changed. If it were changed then you would know because it would now be a different variable or object.


Ease of testing is the most obvious benefit.

A "pure" function is one whose result is simply based on its inputs, no dependency on external state/IO etc. Same inputs, same outputs guaranteed.

These types of functions are really easy to reason about and unit test. State is often messy.
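
A minimal sketch of the contrast (my example):

    import Data.IORef

    -- Impure: the result depends on hidden, mutable state.
    impureNext :: IORef Int -> IO Int
    impureNext ref = modifyIORef ref (+ 1) >> readIORef ref

    -- Pure: same input, same output, trivially unit-testable.
    pureNext :: Int -> Int
    pureNext n = n + 1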


I echo the sibling comments to my comment here.

I would further add that in day-to-day code, I don't necessarily write "pure" functions. I use a strategy based on my experiences with Haskell, but spiced with a bit more pragmatism. For any significant bit of code past a couple of screens or so, I almost always have a clean separation between the "business logic" code, and the "IO" code, such that I can plug different "IO" codes into the business logic code trivially.

The business logic code is not technically "pure", in that it calls out to IO code freely. But it means I can swap out the IO code for testing, and drive the business logic with any input I desire in the process. It also lets me test the IO code directly if that is useful/necessary/desirable, without the business logic getting in the way, which it often does!
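
One minimal way to sketch that shape in Haskell terms (my invention, not the commenter's actual code; Deps, fetchUser and logMsg are made-up names):

    -- The "IO code" is passed in as a record of actions...
    data Deps m = Deps
      { fetchUser :: Int -> m String
      , logMsg    :: String -> m ()
      }

    -- ...so the "business logic" is generic in m and runs equally well
    -- in IO (production) or in a test monad with canned responses.
    greet :: Monad m => Deps m -> Int -> m String
    greet deps uid = do
      name <- fetchUser deps uid
      logMsg deps ("greeting " ++ name)
      pure ("hello, " ++ name)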

Theoretically, you could transform this to "pure" code, by gathering everything into one big data structure in the IO code, then feeding it to the business logic, but this often comes with a lot of inconvenience and even performance issues (like gathering expensive things you might need but not using them).

One of the things that Haskell can "put into your fingers" is a sense of what code does IO. Even if you nominally know, you probably don't instinctively know if you've never used a language that rigidly forced you to be correct. It is a common experience for quite a while in Haskell to write a pure function, that calls a pure function, that calls a pure function... that, err... needs to read just a little bit from a file according to the way you've structured things. Whoops. And you learn to restructure things to move the reading back "up" in the code to constrain the purity, even though it's just a "little bit", and over time you get better at not making the mistake in the first place. If you are not trained by Haskell, it is really easy to end up thinking "oh, it's just a little bit of impurity... it'll be fine"... and maybe, in fact, it will. But this still starts to add up. I still write "impure" code... but I do so much more carefully now.


The number of times students have come to me absolutely baffled as to why they can't tokenize a string twice using strtok() is the best reason for function purity I can think of. It's such an abomination of a function. The first time you call it, you give it a string and a delimiter. Subsequent times you call it, you give it a NULL and the delimiter. And then once you've tokenized the string once, you can never use it again. It's an example I hold up of how not to design functions.


https://en.wikipedia.org/wiki/Referential_transparency

> An expression is called referentially transparent if it can be replaced with its corresponding value (and vice-versa) without changing the program's behavior. This requires that the expression be pure, that is to say the expression value must be the same for the same inputs and its evaluation must have no side effects.

> In mathematics all function applications are referentially transparent, by the definition of what constitutes a mathematical function. However, this is not always the case in programming, where the terms procedure and method are used to avoid misleading connotations. In functional programming only referentially transparent functions are considered.

> The importance of referential transparency is that it allows the programmer and the compiler to reason about program behavior as a rewrite system. This can help in proving correctness, simplifying an algorithm, assisting in modifying code without breaking it, or optimizing code by means of memoization, common subexpression elimination, lazy evaluation, or parallelization.

> The concept seems to have originated in Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–13). It was adopted in analytical philosophy by Willard Van Orman Quine.


Not OP, but they may mean the term of art where a pure function has no internal state and no side effects, so it always has the same outputs given the same inputs. Although it's not technically part of the definition of functional purity, I'd add that it's nice to have a function whose behavior is defined for every input in the domain.


I love the things I learned from Haskell, but I find it so painful to actually use :( Similar case with Erlang...

I wonder if there are any functional languages that aren’t weird? Like they call the first element in a list / the other elements “list[0]” and “list[1..]” instead of “car” and “cdr”? Bonus points for a C-inspired syntax rather than an abstract-mathematics-inspired syntax

For now I just write Rust and Python using functional-style design (const inputs, no side effects, etc), but I feel like I’m missing out...


> Similar case with Erlang...

Have you tried Elixir?

I had been doing Ruby for a long time (also Java and Scala before and after that, now Python).

Gave Elixir and Phoenix framework a shot a month ago and it's a breath of fresh air: great language, outstanding tooling.

I've also done a bit of Haskell, OCaml and PureScript within my personal projects. I do miss having a state of the art typing in Elixir, but there is enough in Elixir to keep me happy:

* immutability everywhere

* ridiculously low latencies (not by HFT standards I guess)

* LiveView

* ...many other goodies that come with BEAM

* Tooling

edit: formatting


+1 for Elixir. If you like Rust syntax, you should also check out [Gleam], it compiles to Erlang and has full interop w/ Erlang/Elixir.

[Gleam]: https://gleam.run/


Hope that this one gets some more traction. ML on BEAM would be lovely.


I believe someone has made an OCaml BEAM compiler, but I don’t recall offhand if it’s just a hobbyist product or something that could be used seriously.


It sounds like you are talking about Caramel[1].

As far as I understand, it's serious enough, but quite young.

[1] https://caramel.run/manual/


For me it was a similar story: after many years of Python I got deep into the functional world with Elm, F#, and some Haskell, and then I found out how complete the Elixir ecosystem was. But even a week later I could not stand going back to a dynamic, untyped language; perhaps I didn't persist enough with spec, but I couldn't get much beyond the second week. Did you have extensive usage of functional, strongly typed languages before Elixir, or just basic knowledge? Did that feeling of missing the types really get better after a while? For me the difference feels as disorienting as falling from a full-featured IDE to an environment with Notepad and no source control and no compiler errors/stacktraces.

And I used to be the Python guru at my previous company and did a whole lot of language evangelizing, etc. I am afraid there is no way back for me.


I'm not sure where the line is between extensive and basic knowledge. Here is my more detailed exposure:

In commercial context:

* Of the strongly typed ones, only Scala (with [shapeless]). Can reluctantly throw in Kotlin as well, for its amazing structured concurrency.

In non-commercial context:

* Went through a few chapters of [Software Foundations] doing Coq proofs.

* Worked through most of the [Types and Programming Languages] (writing typecheckers in Ocaml)

* 3 services in Haskell (1 on Scotty, 2 on Servant). Loved persistent+esqueleto for the ORM layer, disliked Opaleye.

* 2 projects in PureScript (1 with Halogen, 1 with React bindings).

* 1 project in ReasonML (Ocaml).

-

> I am afraid there is no way back for me

I see where you are coming from. In my case I can alternate between "I want all invariants properly expressed and checked" and "I just want to ship that barely-working piece of junk and iterate on it". I learned to adjust depending on organization needs. IMO, for many orgs, especially startups/scaleups, the latter is often the more fitting way. With that in mind, I'm willing to trade the guiding hand of great type systems for other productivity aspects (amazing runtime and cohesive web framework in Elixir's case).

[shapeless]: https://github.com/milessabin/shapeless

[Software Foundations]: https://softwarefoundations.cis.upenn.edu/

[Types and Programming Languages]: https://www.cis.upenn.edu/~bcpierce/tapl/


Can’t speak for the GP, but the most difficult part of Erlang isn’t the syntax, but trying to clear my mind of how I would design something in C#/Java and embracing/understanding OTP.


> Bonus points for a C-inspired syntax rather than an abstract-mathematics-inspired syntax

Since functional languages are based on the mathematical idea of function application, this is a weird request. What would a C-style functional language even look like? Most functional languages have syntax for doing an imperative-style sequence of assignments before producing a result. Beyond that, the risk is that C-style code will make the programmer think that the language itself is like C, Java, or other imperative languages, which fundamentally it wouldn't be.
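
Haskell's let (and do) blocks are an example of that sequential-looking syntax; a small illustration of my own:

    cylinderVolume :: Double -> Double -> Double
    cylinderVolume r h =
      let base = pi * r * r   -- reads like a sequence of assignments...
          vol  = base * h
      in  vol                 -- ...but each name is bound exactly once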


> Since functional languages are based on the mathematical idea of function application, this is a weird request.

Well, imperative languages are based on the idea of a Turing machine, but they look nothing like one.


> imperative languages are based on the idea of a Turing machine

In what way? I wouldn't count executing sequential instructions as "the idea of a Turing machine", and no major imperative language has programs with only finitely many states along with an infinite memory space that is both code and data.


You might enjoy Scala. It has a range of usable styles, from "concise Java" to "somewhat Haskell-ish with OOP-ier syntax" to "type astronaut", and you can get by anywhere in between (as long as you don't mind hearing others bicker a bit about what people ought to be doing). It can target JVM, JS, and native/LLVM. And, Scala 3 just reached RC stage; it tries to revamp some of the language aspects that people found to be pitfalls or otherwise confusing, so it might be a great opportunity to try it for the first time.


As a Scala practitioner (well, in my previous job) I found Scala code harder to understand and read than Haskell.

Haskell's syntax is actually quite minimal by comparison. Scala type signatures, in particular of library functions, can be a beast to understand.

Also, because many Scala devs start using it as a sort of "better Java", spaghetti code and imperative-style messes are relatively common. This doesn't happen with Haskell, because nobody starting Haskell tries to write Java-like code with it.

That said, I do like Scala better than Java!


Second Scala. Great language and most mainstream of all the more advanced languages out there.


Common Lisp has functions first, second, last, etc. It has rest too.

    (first '(a b c d))  returns 'a
    (rest  '(a b c d))  returns '(b c d)
    (third '(a b c d))  returns 'c
    (nth 0 '(a b c d))  returns 'a
    (nth 1 '(a b c d))  returns 'b
    (last '(a b c d) 2) returns '(c d)
I find these quite intuitive, and after a bit of use one gets accustomed to Lisp's three ancient functions: car, cdr, and cons. car is the same as first and cdr is the same as rest.

In comparison to handling lists in more modern languages, Lisp seems overly verbose and a bit cumbersome. To be fair, there has been over half a century of evolution in programming languages since Lisp was originally designed.

Early Lisp was an alternative to programming in Fortran II, Fortran IV, or Algol 60. These are very primitive languages with significant hurdles for programmers trying to do non-numeric computations (e.g. AI programs). Lisp's underlying memory model of garbage collected cons cells gave it so much flexibility compared to Fortran's fixed length arrays containing only numbers. Fortran had no dynamic lists, no dynamic vectors or slices, no dictionary/hashmap types, no records or structs, no variable length strings, no sets, no tuples. Assembly language often appeared as a reasonable alternative.

Here are a few Fortran if statements appearing in a popular programming book of the 60's (I still have it on my bookshelf):

    IF (x - 2.1) 40, 40, 30
This IF will always jump to either the line labeled 40 or the line labeled 30 (Fortran's arithmetic IF branches on whether the expression is negative, zero, or positive).

Here's another:

    IF(I.GE.20.AND.I.LE.42.AND.J.GE.20.AND.J.LE.42) GO TO 702
Lisp's cond (the special form used like if) may seem odd now, but compared to the other languages of the time it was so much more expressive.

Because I'm an Emacs user, I use Lisp frequently just to keep my configuration tweaked the way I like it, but if I was creating an editor like Emacs today I think it would be better to use Python, Javascript, or Lua for the underlying programmable part of the editor.

Perhaps Haskell is heading for the same kind of niche that Common Lisp occupies today, remaining important for the ideas it explored and of historical interest but never becoming more popular than it is today.


Curious: why does last take the count as its second argument, unlike nth? That inconsistency would drive me mad.


The default behavior of last is literally returning the last cons. So:

  (last '(1 2 3)) ;; => (3)
The signature for last is:

  last list &optional n => tail
If n is not provided, it defaults to 1 so the behavior and order of parameters is sensible in this context.

http://www.lispworks.com/documentation/HyperSpec/Body/f_last...

It's also worth noting that along with nth which takes the index first, there's elt which takes a sequence as the first parameter and index as the second. aref is similar but restricted to arrays and permits multiple numbers for the subscripts since arrays can be multidimensional. char, which accesses characters in a string, takes the string first and index second as well. bit takes the bit array as the first parameter and the subscripts follow it.


> Like they call the first element in a list / the other elements “list[0]” and “list[1..]” instead of “car” and “cdr”?

car and cdr are remnants of an ancient instruction set (https://en.wikipedia.org/wiki/CAR_and_CDR)... first/rest or head/tail are better. Actually, doesn't Haskell use head/tail for lists? Except even there, what's in the Prelude is busted, since head/tail are partial.
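
Indeed, head [] throws at runtime; the usual total alternative is just a Maybe (my sketch):

    -- Total: emptiness is explicit in the type instead of an exception.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x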


Maybe I'm weird, but I even prefer car/cdr to, say, Clojure's first/rest. Even though first/rest is completely justified, car/cdr have a kind of abstract, esoteric place for a linear walk over a sequence; also, they are symmetric, three letters differing only by their middle.


In 21 or so years of Lisp coding, I've noticed that my use of car, cadr, cadar, caddr, and all those has been greatly increasing. I hardly ever type second instead of cadr. A cursory scan of my git history confirms it.

One reason is that second and third and such are oriented toward sequences. But sequences are often generic. In TXR Lisp, I have [x 2], so why would I ever write (third x)? It's verbose, like using Roman numerals instead of Arabic.

Now if I'm processing tree structure, I know that is made of conses. So (caddr x) makes sense.

Part of the reason it makes sense is that when we are processing tree structure, such as code syntax, we cannot just evaluate (caddr x) out of the blue. We can only do that if (cddr x) has been confirmed to be a cons cell: (consp (cddr x)). The syntax could be bad. It could contain the dotted notation in an unexpected place, or be missing required arguments.

And so this makes no stylistic sense at all:

  (when (and (consp x) (consp (cdr x)) (consp (cddr x)))
    (third x))
We want this:

  (when (and (consp x) (consp (cdr x)) (consp (cddr x)))
    (caddr x))
It is also easier to read and verify. We know caddr is right because it just adds an a to cddr, the last cell which was tested.

There is an impedance mismatch between validating (cddr x) and then extracting (third x), which isn't there when (caddr x) is used.

Anyway, a lot of that kind of code is avoided by pattern matching.

  (when-match (@nil @nil @elem . @nil) x
    elem)
That also avoids traversing the structure multiple times. Unless the compiler is clever about doing CSE between these functions, (cddr x) starts scanning at x.


F# is what you want!


> Bonus points for a C-inspired syntax rather than an abstract-mathematics-inspired syntax

Out of curiosity, could you give an example of what you find "abstract-mathematics-inspired" about Haskell's syntax? Is it just the one-letter identifiers (which are more standard practice than syntax), the symbols for function names (which remind me of C++'s "<<" and the like), or what?

My own personal opinion is that Haskell's syntax has some pitfalls with indentation, but other than that it's not particularly difficult. It's just not C-like, but that's a separate issue.

Similarly, Lisp-like languages have barely any syntax at all!


There are advantages to the syntax, and reasons for all of these things to be the way they are. I would recommend starting with a sweet intermediate step where you like the domain you are going to program in: if you are into web frontend, go with Elm; if you like the .NET ecosystem, server programming, and mobile, go with F#. You may also want to check out Clojure or Prolog in order to understand why a homoiconic syntax is important later. Make the effort; it pays off when your brain clicks.


Have you tried Elixir?

I've not learned its Erlang base, so I can't tell my Erlang from my Elixir, but... in Elixir you can Enum.at(list,n), which with:

at = &(Enum.at(list, &1))

Could simplify into:

at.(n)

If that doesn't suit I'm sure some equally simple such thing could create exactly what you're after. (As I understand it, various kinds of metaprogramming are an intended base element of the language itself.)


car/cdr have historic origins all the way back to the fifties. When the language was created they made sense for familiarity. In essence a cons is a pair of data and can be used in any which way you feel like. Understanding that makes for a powerful tool, which is why many functional experts so stubbornly hang on to them. Many functional languages have types as a first-class citizen, and you can define abstractions on top of what you already have. If you put a cons cell as the data, you have created a tree structure, for example one where car points to the children of a node and cdr points to the next sibling of a node.


Elm doesn't have quick list access like that, but it does have Arrays if you need them. Also accessing a list or array returns a Maybe of the contained type, rather than potentially throwing as in Haskell.


If you want a C-style syntax then ReasonML maybe?


Haskell is a great language! (FWIW the argument in TFA also applies to Prolog: https://sforman.srht.site/source/TermsInProlog.html )

The problem I have with it is the meta-language. I tried Haskell a few weeks ago and "stack" installed over 27000 files. I feel like it's a case of "physician, heal thyself": turn the power of Haskell back onto its own tooling. (Python suffers from a similar problem, where distributing and installing Python code has been paradoxically un-Pythonic.) Some languages like Elm and Gleam distribute single binaries that you just put in your PATH.


This is my number 1 concern when it comes to writing larger Haskell programs. Its tooling is so meager compared to what it could and should be were the promises of the language fulfilled.

Given how good Haskell is at writing compilers and code analysis tools it should have a best-in-class IDE, installation process, etc. when that is most definitely not the case.

Luckily, the past few years seem to have witnessed rapid improvement on this front, so hopefully my gripe will be irrelevant in another few years.


I'm actually interested in Haskell now after I learned that an interesting crypto startup called Cardano uses it and stakes (pun intended) its success on it. They tout safety and maintainability for their choice, claiming it's the language of choice for banking institutions (if my memory serves me correctly).

Found the video - https://www.youtube.com/watch?v=CffrvwIW0JY

I'm currently a web developer, but the blockchain scene is quite interesting. Especially these new smart contracts look quite easy to play around with.


> They toute the safety and maintainability for their choice, claiming it's the language of choice for banking institutions

So is COBOL. Doesn't mean anything, really. Bankers' choices in tech probably have nothing to do with ideological/purity issues in software development.


It's to do with the ability to formally reason about the code using formal methods/verification etc...


I believe Cardano will re-ignite the interest in Haskell.


IIUC there's another crypto/dapp thing, but in OCaml.

I guess safe and fast FP appeals to distributed finance.


It's Tezos


I fondly remember my time learning Haskell in uni. Some of the most brain expanding stuff I did during that period. Of course, when the time came to actually use it to build my thesis work (data mining collaboration patterns in open source software), I fumbled around with Haskell for months trying to get something done, and then "in anger" wrote the whole thing in Python.

I've since gone on to do most of my career in Python and haven't looked back. But something deep inside of me certainly misses Haskell. And try as I may to let old bygones be bygones, I keep trying to figure out a good use for it. I certainly wonder whether there's a niche for convex optimization problems where I'd otherwise use `cvxopt` in Python, or something in Julia, or even (if I was feeling particularly exotic), something in Prolog.

Basically, what I'm trying to get at is whether there's something that Haskell's enormously powerful compiler could do so much better than other languages that it would be reason enough to write service code in it. Anyone more experienced in HS have thoughts on this? I know Facebook uses it in production.


I'm trying to fiddle with Python lambdas to make a timeout monad.. or something like that. I haven't had that much fun in a year.

FP is confusing at first but really a strong source of fun.


I'm thinking of starting this journey with PureScript. Seems a lot more practical for my use cases and their introductory book is quite good (I'm reading it).


Yes. Or Elm. I'd advise that approach, as starting with Haskell is quite a big step. The https://haskellbook.com is a great start though, for anyone who wants to be guided through the basics.


I'm probably way OT, but this seems like a decent place to get some opinions and this article doesn't address my issue:

Here's my problem with Haskell: I want to model my domain as collections of heterogeneous maps. I work in the B2B world, where our databases go back years but every week either the sales dept or our customers throw totally new requirements at us, changing the way we think about our domain. We also do a ton of work with SQL, which is trivial with such a mindset, as relations are just sets of maps; but working with relations in Haskell, or any strongly typed language, requires mapping layers that become their own source of unnecessary complexity.

I love the idea of the compiler doing more work for me and really do miss the advantages of type systems for communicating intent to other developers. I've had positive experiences with C# in game dev and Scala in scientific computing, but in my current role shoving the real world into a typed box has been an exercise in futility. Am I missing something about Haskell that supports this world or is it just a bad fit for my org?


No matter how heterogeneous the map seems to be, there's always some underlying structure. If you have written code to work with it, then you've encoded that structure...by definition. Haskell gives you fantastic tools for managing this structure at whatever level of specificity you want. At the end of the day everything in a computer always boils down to bytes, so in the absolute worst case, you can always drop out to the escape hatch of:

    Map ByteString ByteString

This can represent any heterogeneous map that any other language can represent. Sum types can allow us to add another layer of structure to this if we want. For example, take the classic example of JSON.

    Map ByteString JValue

...where

   data JValue
     = JObject (Map Text JValue)
     | JArray Array
     | JString Text
     | JNumber Scientific
     | JBool Bool
     | JNull
This is just one example of a way you could structure things. Sum types are super powerful for this kind of thing. And no matter what structure you come up with, you can always escape hatch that structure by adding a catch-all constructor like this:

   data MyValue
     = MyThingA A
     | MyThingB B
     | ... more things
     | Unexpected ByteString
That's actual valid code btw, and it even reads really nicely!


Maybe your domain can't be modeled better than heterogeneous maps.

But Haskell’s type safety extends beyond your immediate domain. Writing a web server, interfacing to a database, etc. all have well formulated domains and Haskell’s type safety provides a lot of value there.

Finally, I’m not denying that some domains are too chaotic and fast-changing to be modeled statically. But there's likely some subset of your domain that is indeed static. Can you represent that subset statically and the rest dynamically?


True, and that's where it feels like the Haskell side would be powerful. I did a small demo app with Yesod and loved it, and we have a ton of state machine type logic that would hugely benefit from ADTs.

You're right, the hybrid approach seems better, but then it seems more like Python/Ruby with their new gradual typing are the way to go: adding strong typing on top of dynamic, rather than trying to poke a hole in strong typing to let you be dynamic.


You would probably love Clojure then. It is the language to deal with data maps and their manipulation.


I do love Clojure.

But it's always interesting to listen to the experience of the Haskeller's and I keep trying to find the motivation to dig deeper into it.

The responses I've received have me thinking I need to give up finding a way to directly map Haskell into my day to day work and more just approach it on its own terms and then take from it what I can.


Have you looked into using a heterogeneous map library in Haskell? I haven't used any myself, but a quick search shows that they do at least exist.


I looked a bit, but the variety of libraries itself is somewhat concerning. There's this article https://wiki.haskell.org/Heterogenous_collections that points to some good resources but the general feel I get from reading is that heterogeneous collections in Haskell are a bit of a rabbit hole with no clear answer yet.


I have only worked with Haskell on a personal basis, trying to understand if purity has any actual advantages. I have yet to be persuaded of the need to have code be pure. I never had any issue reasoning about impure code, and I do not find that the pure style has actually benefited me in any way. My logic bugs still continue to appear in pure code.

What I really find annoying is when I have to update state. I always have to write tons of functions in order to re-synthesize the data after the smallest change, and the deeper the data model tree is, the more functions are needed.
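
(As an aside: this nested-update pain is exactly what record-update syntax and lens libraries target. A sketch using the lens package; Person, Address and move are invented names:

    {-# LANGUAGE TemplateHaskell #-}
    import Control.Lens

    data Address = Address { _city :: String } deriving Show
    data Person  = Person  { _name :: String, _address :: Address } deriving Show
    makeLenses ''Address
    makeLenses ''Person

    -- One line updates a nested field; no hand-written re-synthesis.
    move :: Person -> Person
    move = address . city .~ "Berlin"

)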

Did I ever have any benefit from using the IO monad or the state monad? I don't know.. it seems the number of bugs in my non-IO code and my IO code is roughly the same.

Perhaps it shines on a collaborative level...I don't know, I never had the chance of writing Haskell code with other people.


Code purity isn't just for you; it's also for the compiler, so it can make assumptions about the code in order to apply optimizations that would not otherwise be possible. For example, if the compiler "knows" that a certain branch won't be used, it doesn't have to evaluate it. This is part of the reason why Haskell can reach near-C performance and why its recursion is particularly performant.


> Code purity isn't just for you, it's also for the compiler to be able to make assumptions about the code in order to apply optimizations

Yes, and purity is also for your collaborators (including your future self) who didn't write the code so don't understand it as well as you do.


'Why learn Haskell?' is not a very hard question to answer nowadays, since it's one of the ultimate 'Perlis languages' (a language that changes the way you think about programming).

I hope one day we can answer 'Why use Haskell?' and enjoy all the elegance and precision it has to offer. In reality, however, Haskell is the language that continuously and intentionally 'avoids success'.

Scala was the 'Haskell' of the engineering world a decade ago and still is. There was a trend of hosted languages like Scala and Clojure; being on the de-facto platform was a way to attract an audience. Nowadays this trend has started to die out, and new generations are more on bare metal, like Rust and Go.

It seems like there's a niche for something that looks like a new Scala on bare metal, targeting engineers but without all the historical syntax.


I love academics. They can post shitty code full of single letter variables etc and still get the applause for it, not to mention the paycheck :D


This is not shitty code, single letter names have their place, and are absolutely fine here.


I like Golang's convention for names - names should get longer and more expressive the wider their use is across the code. Single letter names for loop variables, two or three letter names to refer to the current struct in methods, but expressive struct and function names.


Academia is about computer science, not computer programming: two very different things. Their "shitty code" is often good computer science code, and just as it's fine to have single-letter variables when reading mathematics, it's the same when reading computer science code. The idea is to get knowledge across, which can then be expressed clearly by computer programmers.


Regarding this example, which variable could do with being longer? It seemed fine to me!


I wonder if I will ever use Haskell at all. Currently I'm trying to learn PureScript as the next closest thing I can use in production.


It's funny how none of these posts about learning Haskell ever take a working program and demonstrate the benefits of the language by examining useful code.


That wasn't the point of the article. I think you skipped to the punchline and missed the joke. More "useful" examples are linked at the bottom, so I'm not sure what you're griping about, unless you just don't like Haskell.


This is exactly how I imagine proficient Haskellers spend their time on top of the ivory tower


So then, why learn prolog?


Prolog is kinda the natural step after FP. Now variables are not one point in space but the whole space itself. Combinatorial bliss. The syntax is kinda shorter..

Whether it's prolog, Kanren, datalog or any other relational language I wish everybody to enjoy the mind expanding effect.

ps: for anyone Scheme-ready, 'The Reasoned Schemer' is a good book about the pieces of relational programming


For me the greatest benefit of learning Prolog was really understanding recursion – the only way to iterate through something is with a tail-recursive call. Also, it really makes you appreciate the For loop ;)

Another cool aspect of Prolog is how simple it is to write a meta-interpreter. For example, you can easily change Prolog's default depth-first search through the solution space to breadth-first, or iterative deepening: https://www.metalevel.at/acomip/


Well, in order to satisfactorily answer that, we'd have to agree on what constitutes a good reason...

(describes a notation for propositional logic)

Oh. I see what you did there.


Interestingly, Cardano (the platform behind the Ada cryptocurrency, in case you don't know) is fully coded in Haskell and has been gaining a lot of traction recently -- it's now the #3 cryptocurrency by market capitalization and might start competing with Ethereum for #2 during this year.



