Make the Leap from JavaScript to PureScript (works-hub.com)
105 points by lelf on May 22, 2019 | 86 comments


PureScript really hits the sweet spot for me, and it's become my favorite functional language over the last six months. It really feels like a purpose-built, stripped-down Haskell without some modern refinements (string handling, records, compiles to readable JavaScript). You can easily manage dependencies to incorporate the absolute minimum set of packages necessary to run your project, and I think its safety is unparalleled.

That said, the community is pretty small (but also really awesome), and it fits into what is starting to feel like a crowded space (ReasonML, Elm, ClojureScript [kind of], TypeScript, etc), but I'm still hopeful that PureScript gets some traction over the next few years and becomes more widely used in the industry, as it is a fantastic language and deserves to be more widely recognized.

Also, if anyone is looking to try PureScript with minimal effort, I threw together this guide a while back:

https://github.com/tmountain/purescript-reproducible


> without some modern refinements

Did you mean "with some modern refinements" ?

Otherwise, I agree. String handling, vectors, and record namespacing/access are quite annoying legacy burdens of Haskell, improved upon in Purescript.

The type system lacks some of the power, but is sufficient for most designs IMO.


Sorry, yes, that was a typo.


Don't forget GHCJS, which is really handy as you get to share (Haskell) code between client and server. Here's an example from my own project: https://github.com/srid/Taut


We use PureScript for almost all frontend happening at my company, Lumi (YC W15). We're definitely biased towards functional programming since we use Haskell on the backend, and the creator of PureScript, Phil Freeman, works here as Director of Engineering.

We're starting to publish more about the benefits of this approach and sharing open source libraries here: https://www.lumi.dev/

Our React bindings are interesting to look at if you are curious: https://github.com/lumihq/purescript-react-basic


A resource that covers more of these questions is here: https://github.com/JordanMartinez/purescript-jordans-referen... Not all of it is up to date or presents the best arguments one could muster. Rather, its goal is to be "good enough."

Also related (but unfortunately not yet finished): https://github.com/chexxor/purescript-documentation-discussi...

See these repos for examples of the language in action:

- https://github.com/jordanmartinez/purescript-jordans-referen...

- https://github.com/jordanmartinez/learn-halogen

- https://github.com/thomashoneyman/purescript-halogen-realwor...


I thought about using PureScript years ago, as I enjoyed the PureScript by Example book [0] and the language itself, which seems like a cleaned-up version of Haskell.

I didn't like that the compiler is not written in PureScript (although there was a proof of concept [1]). Additionally, even though it compiles to JavaScript, the generated code is sub-optimal performance-wise [2].

Ultimately I used TypeScript and have been happy with it, but I still long for a better FP language that generates lean code and can be used in a browser.

[0]: https://leanpub.com/purescript/read

[1]: https://github.com/purescript/purescript-in-purescript

[2]: https://github.com/purescript/purescript/issues/2577


I'm also concerned about PureScript's JS output, so I looked into ReasonML and have been quite impressed. The type system is a bit less powerful (no type classes yet), but the JS output is very readable and maps almost line-for-line to the ReasonML code.


I don't buy this idea that the programming language you use determines the quality of your work. There is no evidence to support this. These extreme programming fads were invented to sell books.

If novelists followed the same approach as programmers and they kept trying to find the perfect language to write their books in, they wouldn't manage to publish a single interesting novel.

A language cannot prevent someone from making stupid mistakes. At best, it solves one set of familiar problems and replaces them with a new set of unfamiliar (but equally bad) problems; then you have to learn to cope with those new problems until the next hyped-up language fad comes along claiming to solve those new problems and then you repeat the cycle.


There's a concept known as linguistic relativity (https://en.wikipedia.org/wiki/Linguistic_relativity) which holds that the language you speak affects what thoughts you are capable of having.

We don't program in assembly language because it is very tedious and error-prone. Higher-level languages absolutely do eliminate classes of errors that are only possible at the assembly level (accidentally altering the stack and clobbering the jump return address, for example). Another example is garbage collection making a whole class of memory errors impossible to create.

Another example would be doing long division with Roman numerals instead of Arabic ones. The human brain can only process so much before it becomes overloaded. By "compressing" thoughts using new vocabulary and concepts, you can reason about ideas that are beyond your normal cognitive limits.


I've found that being able to think in one programming language even affects the thoughts you have when working in other programming languages.

For example, after learning ML, I started writing much more functional code even when writing Python. I also started using abstract types more, even in C.


> A language cannot prevent someone from making stupid mistakes.

This can be refuted in one line:

    if(a = b)
Languages that use := for assignment, or some other operator, don't have the mental overhead of equals being used both for assignment and for equality checks. Typos like the one above become impossible.

Other languages just don't allow assignment in 'if' statements.
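To make the point concrete: JavaScript (like C) also treats assignment as an expression, so this one-character typo silently changes both the control flow and the data. A minimal sketch, with `isAdmin` and the field names invented for illustration:

```javascript
// Intended check: user.role === "admin"
function isAdmin(user) {
  if (user.role = "admin") { // typo: assignment, not comparison; always truthy
    return true;
  }
  return false;
}

const u = { role: "guest" };
console.log(isAdmin(u)); // true -- the guest passes the check
console.log(u.role);     // "admin" -- and the record was silently overwritten
```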

As for floating-point support, a language could overload equality checks by default so that they are configurably fuzzy when comparing floats.
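A sketch of what that could look like; `approxEqual` here is a hypothetical helper, not an existing API:

```javascript
// Hypothetical "configurably fuzzy" float comparison with a tunable epsilon.
function approxEqual(a, b, epsilon = 1e-9) {
  return Math.abs(a - b) <= epsilon;
}

console.log(0.1 + 0.2 === 0.3);           // false: binary floating point
console.log(approxEqual(0.1 + 0.2, 0.3)); // true
```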

And of course, languages already prevent stupid mistakes. Even the humble C compiler properly arranges the stack for us when exiting a function, putting all return values in the appropriate place based on the platform's ABI and restoring registers that are supposed to be preserved.


>> Typos like the above become not possible.

In my 15 years of programming every single day, I have made this kind of typo maybe 2 or 3 times in total, and it took me less than a minute to identify and fix the problem each time.

On the other hand, the amount of time that it would have taken me to type out that extra ':' a few million times would have been much more costly.


This is a really arrogant statement. In C, `if(a = b)` could cause incredibly subtle bugs. It was prominent enough to get addressed in all the best-practices books, like "Writing Solid Code", with a style later termed "yoda conditions" - a style that stayed prominent in spite of being acknowledged as less readable, because it forced a compiler error more often. It's hard to think of many other types of bug with that kind of significance, and you just dismissed it out of hand with "I don't make those kinds of mistakes".


If the problem is so bad, why not simply use a linter? Actually, almost all instances of this typo can be caught by a simple regular expression.
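A toy sketch of such a regex check; real linters (ESLint's `no-cond-assign` rule, for instance) inspect the parsed syntax tree instead, which avoids false positives inside strings and comments:

```javascript
// Flags an assignment (single =) directly inside an if-condition.
const condAssign = /if\s*\(\s*[\w.]+\s*=[^=]/;

console.log(condAssign.test("if (a = b) {"));   // true: flagged
console.log(condAssign.test("if (a == b) {"));  // false
console.log(condAssign.test("if (a === b) {")); // false
```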


Well, that's pretty much equivalent to the language preventing bugs, for the purposes of this conversation. You can imagine similar cases that would be harder for a linter to pick up: things like type errors, or accessing the wrong side of a union. In C++, people used Hungarian notation for a good while to try to make these kinds of errors more detectable; now some languages have tagged unions/sum types to make them nearly impossible. Where a linter differs from the language in catching syntactically evident errors is that, even if I use a linter for my own code, that isn't the same as having buy-in from the whole team to block any code that doesn't pass the linter from shipping to production; language syntax, on the other hand, is implicitly agreed upon.

Addressing this specific example, historically, running a linter all the time wasn't always practical, and `if(a = b)` was used intentionally a lot, e.g. to inline a check for a null pointer with an assignment.
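The tagged unions/sum types mentioned above can only be approximated at runtime in JavaScript. A minimal sketch (names invented for illustration); a statically typed language with sum types would reject a wrong tag or field access at compile time instead:

```javascript
// A "sum type" emulated with a tag field and a runtime check.
function area(shape) {
  switch (shape.tag) {
    case "circle": return Math.PI * shape.radius ** 2;
    case "square": return shape.side ** 2;
    default: throw new Error("unknown shape: " + shape.tag);
  }
}

console.log(area({ tag: "square", side: 3 })); // 9
```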


> I have made this kind of typo maybe 2 or 3 times

Pardon my french, but that is some straight-up horseshit.

Even if you were the best programmer in the world, I would be skeptical if you claimed to make that kind of typo only two or three times a year, or per project, let alone over the course of fifteen whole years.


I can see it, in the beginning I may have made that mistake more often for a while, but I don't think I've made this mistake in years. The difference is absolutely ingrained, just like I wouldn't mistake plus for minus. Perhaps it's different for people who read a lot of math.

I would agree that having equals-assignments as valid expressions is bad, I just don't think it's such a big deal. If you are using such a language, you should be using a linter anyway.

Using := would be very annoying for me, even though I can see the argument and would favor it in theory.


How about JavaScript and '==' versus '==='?

Or going back and forth between languages that use '==' and JavaScript?
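For anyone who hasn't been bitten by it, a few of the coercion surprises that '==' permits and '===' avoids:

```javascript
console.log(0 == "");    // true: "" coerces to 0
console.log(0 == "0");   // true: "0" coerces to 0
console.log("" == "0");  // false: == is not even transitive
console.log(0 === "");   // false: strict comparison, no coercion
```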

Let's say over all of history there have been a million C/C++ programmers, and each of them has made that mistake ~5 times, in total.

And let's be generous and say that 95% of the time it was found before it got anywhere near production.

That is still 250,000 bugs introduced.

Not to mention how much time is spent tracking down the bug once it has been in the code base for a while. If it goes unnoticed for a couple of weeks, debugging becomes non-linearly harder.


> How about JavaScript and '==' versus '==='?

> Or going back and forth between languages that use '==' and JavaScript?

That's a mistake I make often, but that's exactly because JavaScript differs here from all the other languages that I use.

Let's say I had to use a language with ':=' as assignment; I would still accidentally type '=' all the time.

> Let's say over all of history there have been a million C/C++ programmers, and each of them has made that mistake ~5 times, in total.

> And let's be generous and say that 95% of the time it was found before it got anywhere near production.

> That is still 250,000 bugs introduced.

Well, so what? That's 250,000 out of maybe a hundred million bugs. I'm not saying it's good language design to have this behavior - quite the opposite. I'm saying it's not a big deal, especially considering all the other footguns in these languages. It's also not like these languages are going to change it.


So you admit that he refuted your contention that

> A language cannot prevent someone from making stupid mistakes.


Yes, but the total input requirement must be considered too.

A very expressive language may have that ”:=” overhead while also providing many robust and useful operators too.

Yes, the product may look like line noise, but it was uber-efficient to create in basic cost terms.


Best of both worlds:

    (let [x 1]
      (if (= x 1)
        ...))


While the language can prevent certain errors on its own, errors can also be found with static analysis and testing, and prevented with paradigms and experience.


I am personally more likely to forget the : than a second =. It happened all the time when I was writing PL/SQL.


> I don't buy this idea that the programming language you use determines the quality of your work. There is no evidence to support this.

Then I have a thought experiment. If language cannot determine the quality of your work, I would request that you write pong in brainfuck, and I will write pong in, let's say, javascript. Then we'll compare the quality of each implementation, and switch languages. Whoever wins in terms of quality in the first round would be expected to be the better developer, and so would win the second round handily.

Now that we've established that the quality of your work does in fact depend on the tools used to implement it, can we please reframe your original point in less absolute terms? Perhaps the programming language is not as important as people claim, and that given two projects in two languages of similar quality, the output should also be of similar quality?

How do we define similar quality? Is incremental improvement a worthwhile goal? Do different tasks require different features? I would much rather write a shader in HLSL than in BASIC.


You're making too many extreme claims to be defensible. You might have a point for recent language fads, but choice of language absolutely does have an impact on your work and the nature of the bugs you're going to be dealing with. If you choose to use a language with a poor ecosystem, then you're going to have to build a lot more tools in house that are going to be subpar compared to more vigorously maintained libs and frameworks in another language. If you decide to write in JavaScript, then you're going to introduce bugs that TypeScript would have statically checked against. It all depends on the nature of your problem and what tradeoffs you're willing to make.


It is a complicated area, you might find this paper interesting:

https://web.cs.ucdavis.edu/~filkov/papers/lang_github.pdf

It tries to measure the defect rates between programming languages. To say it doesn't matter is too simplistic, in my opinion. I think matching the tool to the task is somewhat important, but often, as you suggest, you can get by with suboptimal choices and choose based on familiarity or aesthetics. People would probably do better by spending their time learning how to think critically about code rather than learning a new language every year.


It depends on what you're trying to do.

A language without strong types might make tooling more difficult, not catch certain errors at compile time, make refactoring harder, and make working in teams harder. Or it might have little impact on you.

A language without a GC will be more complicated for certain tasks, as now you have to manage the memory yourself. It's harder for beginners and doesn't make sense in many environments. Then again, a lack of GC may be necessary if you do embedded work.

A language might let you write something in one line of code where another would take 20 lines. Is that more expressive? Maybe it's harder to maintain?

Do you like white space? Maybe you do, maybe you don't.

Many interpreted languages are slower than their native counterparts. Then again, if your application doesn't need much compute performance, maybe it's worth it? Or maybe not.

It's all pros and cons. You don't want to write a "hello web!" web application in assembly when Ruby/Python/etc. can do it in a few easy-to-read-and-understand lines.


Try writing something in machine code?


> A language cannot prevent someone from making stupid mistakes.

Completely prevent all possible mistakes? Of course not.

However, it certainly can make many mistakes either impossible or much rarer.


I think of languages at work in terms of dislike and don't-dislike. Basically, if something gets in my way, I don't like it.

I do love Elixir, but for the kind of work I am doing currently, I'd have a hard time selling its benefits over just about any other language. But I dislike Java's verbosity, so unless Java brings something the other options lack, I feel like too much code is a legitimate reason to avoid it.


History is littered with purpose built languages and notation for mathematics, logic, music and law etc. A language guides (or limits) one's thoughts, offering a particular set of abstractions. Functional programming is very close to logic and particularly amenable to mathematical reasoning and static analysis - exactly what one needs in order to safely compose large programs from small ones.


And yet there are a thousand vulnerabilities from C/C++ code out in the wild stemming from bad memory management, whereas we don't see those with many other languages.

Yes, there are still other types of bugs that are language-agnostic, but a language definitely can reduce the chance of creating a certain set of bugs in the first place.


That's why I write all my programs in 16-bit real-mode 8086 assembly using MS-DOS's DEBUG.EXE. Aside from being fired from my job for this destroying my productivity and making my code unreviewable and full of security issues, everything is great, because there's no differences between programming languages at all!


Human languages had a few thousand more years of optimization going on. By now they're pretty decent for expressing things people want to say.


> I don't buy this idea that the programming language you use determines the quality of your work. There is no evidence to support this.

This is false, unless I misunderstand you.

I track ~10,000 programming languages. Are you willing to bet that I can pick one of these languages at random, give you a task, and your output won't be affected by the language I pick?


Mostly, no. Usually it all comes down to understanding and implementing business requirements correctly. Implementing those is a somewhat straightforward (also tedious) process in any language.

The main impediments, in my opinion, are:

- knowing the standard library

- knowing the available libraries

- syntax quirks

in that order.

The main advantage you gain from using a language is familiarity with it and the three points above.

I will be much more productive in JS or Python than in Haskell just because I'm already familiar with those two and I'm not familiar with Haskell.

The rest of the advantages come from the ability of a language to express a programmer's thoughts readably, concisely and in a maintainable manner (where "maintainable" means "understandable by a new colleague joining the project six months later").


Interestingly, English is super expressive and allows many ad hoc expressions to be understood with reasonable fidelity.

Many other languages are gendered and/or more rigidly structured, both of which impact the work, for good and bad.

Great English can be written right alongside crazy expressions, bits borrowed here and there, even words made up on the spot.

Do we need crazy expression in programming?

Technically, no. But many want it and practice it in creative ways not unlike writers do with form, structure and style.

With a story, the bar seems to be: did readers get value, even with poor understanding, or after a reread or two to make sense of it?

With a program: does it do the job, and does it do so in a consistent way, securely, etc.?

A look at things like COBOL and FORTRAN shows us that design can impact quality. It is baked in.

Skill matters too.

An inexperienced programmer may produce higher quality earlier and more easily given such an environment. Adept ones can use most anything and do the same.

But they may also be size coding or playing in the demoscene as much as they are cranking out business logic and computation.

TL;DR: Language absolutely does impact quality, but so do experience and process. No easy outs, no one size fits all.


> I don't buy this idea that the programming language you use determines the quality of your work.

I don't buy into fads either, but this is not a fad. It is common sense. The quality of your own skill and the tools you use affect the overall quality of your work. To say that one has no effect is illogical.

Just look at the extremes of your statement. If it were true, I should be able to write a triple-A game in assembly language, because the language wouldn't affect the quality of my work. Obviously this is not true. If I tried to do one in assembly, the quality would be subpar.

> A language cannot prevent someone from making stupid mistakes.

In Elm it is not possible to have runtime errors; the program cannot crash, period. The only kinds of errors you can have in Elm are compile errors and logic errors.

This is an example of a language preventing stupid mistakes.


Does anyone have experience with both PureScript and ReasonML and can compare the two?


During the 2016-2017 New Year I did an intense look into Purescript. I bought the LeanPub book and did all kinds of stuff with it. More recently I did a look into ReasonML. I bought the Pragprog book and worked through that.

For me, it came down to Haskell vs OCaml. I'll admit, I have a serious love for OCaml/F#, so I immediately was excited about ReasonML. Many concepts just transferred very easily.

Purescript felt very academic once I got into trying to build real things. Watching Phil Freeman's talks/Google Hangouts on building cool new projects really was eye-opening. That guy thinks on an entirely new level... who knows, maybe most Haskellers do? I have a sincere appreciation for Haskell, but I have yet to bend my mind around it, so Purescript left me in the same place - I felt like I wasn't really grokking it.

I should stress that this is ME, so you may have an entirely different experience. I'm not saying that Purescript was bad at all... it just wasn't clicking very well.


I had this exact same question and couldn't find much out there.

Based on what I have seen ReasonML is being pushed more within the React community and has more of a JS syntax so there is less of an abrupt change in terms of language.

PureScript follows the Haskell syntax that I personally like more and find cleaner and more useful for FP programming.

The ReasonML code I've seen doesn't have as much computer science rigor or abstractions like monads. This means the ReasonML community generally works at a lower level of abstraction compared to the PureScript crowd.

The interop with JS looks cleaner in PS.

The tooling looks better around ReasonML, especially around fast compilation and automatic reloading.

PureScript has a richer type system.

Personally, I would rather use PS but I've studied Haskell a bit already. For people without the Haskell background ReasonML provides an easier transition so it is an easier sell for a team.


Yeah I've used both quite a lot.

Both languages are pretty similar. ReasonML isn't actually a new language but more like a new syntax for OCaml (BuckleScript on the frontend), kind of like what CoffeeScript is to JS.

- Both have row polymorphism and higher-kinded types.

- PureScript has ad hoc polymorphism; OCaml doesn't, but it's planned for a future release. PureScript achieves it via typeclasses; future OCaml will achieve it via modular implicits, which is more sound from a type-system perspective.

- PureScript has higher-kinded polymorphism; OCaml only has a lightweight version of it via its module system.

- PureScript has better (though awkward) support for type-level programming, but OCaml has good interop with Coq - it's written by the same people.

- PureScript is purely functional; OCaml isn't, although it emphasizes a purely functional approach.

- PureScript puts category-theory abstractions first, like Haskell does, as a core aspect of the language. OCaml doesn't, although its standard library has all of the same kinds of functions, and there are category-theory libraries and everything.

- OCaml isn't just compile-to-JS; it's also a natively compiled language, and has had real industrial use as a systems language in the finance industry (Jane Street).

- OCaml's compiler is _insanely_ fast compared to any other compiler I've ever used. I absolutely appreciate it.

- Future OCaml will be multicore and have a built-in algebraic effects system, basically like Haskell's extensible effects but without the performance hit or having to muck about with monads.

- OCaml has GADTs, polymorphic variants, and a way to extend the language via PPX. Polymorphic variants are awesome.

- OCaml's module system is really powerful and amazing.

Overall both languages are great, but for almost every use case I'd go with BuckleScript/OCaml any day. Its type system is pretty simple and easy to learn. PureScript has the same issue as Haskell, which is the learning curve: higher-kinded polymorphism, typeclasses, enforced purity via "the IO monad", and category-theory concepts all add up and take quite a while to get used to. And the OCaml compiler is really fast. And you can write low-level imperative code if you need to for performance. And the type system is really expressive - more so, in my opinion, than PureScript's because of the poly variants. And you get like 90% of the benefit of using a language like Haskell or PureScript because of the `option` type and enforced safety via exhaustive pattern matching.

I can't wait until Ocaml gets modular implicits and algebraic effects into the core language.


PureScript is heavily influenced by Haskell with a focus on purity, monads and lots of abstraction, while Reason is literally OCaml (which has a more pragmatic take on mutability) only they switched the ML syntax for a JS-like syntax.


This just reminds me of "Make the Leap from Javascript to Coffeescript"

Remember where that got us?

This stuff is all really cool and fun to work on, but instead maybe we should focus on changing the JS standard for the better since we always end up working in it anyways!

https://js.foundation/


Why not both?

CoffeeScript was awesome at the time, and ahead of its time. JavaScript owes many of its new niceties to CoffeeScript.

Yes, please, let's advance the state of JS. And let's push the boundaries with other languages that don't have the backwards-compatibility constraints and corporate sluggishness that JavaScript does. Then take the best features and merge them into JS.


Yeah, it was awesome! But then, years later, most projects are wasting engineering time converting their whole code base, which is a bummer! That's all I'm saying.


The JS community is adopting more and more FP concepts, but a lot of it has to do with the syntax. If the syntax is too verbose, people just won't use it. FP is much easier in a language based on the lambda calculus (Haskell, Elm, PureScript, etc). Currying, partial application, and point-free composition are overly verbose in JS. JS has adopted the concepts, but they are not used very often because of the messy and noisy syntax.
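A small illustration of that verbosity: currying and composition are expressible in JS, but every piece needs explicit helper definitions that ML-family languages build in:

```javascript
// Currying via nested arrow functions.
const add = a => b => a + b;
const increment = add(1); // partial application
console.log(increment(41)); // 42

// Point-free composition requires a user-defined helper.
const compose = (f, g) => x => f(g(x));
const double = x => x * 2;
const incThenDouble = compose(double, increment);
console.log(incThenDouble(20)); // 42
```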


Yes, it got us ES6! Languages don't exist in a vacuum. They introduce new ideas, dispense with others, and all languages can learn from what works in the field and what doesn't.

If you're working on a personal project, it might be a good idea to try out PureScript and learn from what it has to teach!


Yay new ECMA standards! Classes/decorators/scope/arrow functions have made my life SO MUCH BETTER the last 5 years


Question is why? What's better about PureScript that would make FP programmers using other languages (e.g. Elm) choose it instead?


Elm is also inspired by Haskell, but has an (intentionally) very simplistic type system.

It lacks essential abstractions such as type classes, resulting in very boilerplate-heavy code.

Maybe this trade-off has helped it gain adoption and is the right choice for a simple language purpose-built for UI generation. For me it is way too limiting.


Elm requires the use of the Elm architecture. PS does not.

PureScript can work server-side. Elm is pretty much browser only.

PS has a richer type system. Elm's type system is extremely limited and makes it much harder to model domains because you can't do it generically and often need to write lots of boilerplate.

PS has a higher level of abstraction and expressiveness.

Elm's niche is frontend developers wanting to learn more about FP and get a gentle introduction to the Haskell like syntax.

Elm definitely has the better ecosystem and tooling. It's much easier to learn as well.

But as you learn more about FP and some of the higher level abstractions like monads you will find Elm very limiting.


One of the biggest reasons for me is that you can run it on the backend too (on NodeJS, or Erlang, or some other VM - yes, there are several backends available)


Anything that targets JS, including Elm, can in theory be run on Node.



Can I run PureScript on Node practically? If so, examples please.



This is my question every time I see a new language pop up here. If there's nothing obviously novel about its design, its platform, or its community, I don't see the point of further fragmenting what people are using (unless it's just an experimental/learning-experience/for-fun language that isn't intended for production).


More powerful abstractions that reduce the amount of boilerplate compared to Elm.


Can you provide an example?


Elm can't define one generic map function like:

    class Functor f where
      map :: forall a b. (a -> b) -> f a -> f b
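To sketch the resulting boilerplate in plain JavaScript: without a shared Functor abstraction, each container needs its own hand-written map, which is roughly Elm's `List.map` / `Maybe.map` situation:

```javascript
// One map per container type, with no way to abstract over them.
const mapArray = (f, xs) => xs.map(f);
const mapMaybe = (f, m) => (m == null ? null : f(m));

console.log(mapArray(x => x + 1, [1, 2, 3])); // [2, 3, 4]
console.log(mapMaybe(x => x + 1, 41));        // 42
console.log(mapMaybe(x => x + 1, null));      // null
```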


I'm put off (not because of the formatting):

http://try.purescript.org/?session=60f1839b-bd82-d9af-75ef-d...

It adds weird syntax where it doesn't matter, like "<>" for string concatenation, but it doesn't cut down on noise where it does matter, i.e. the "list of records" in the example.

I'm also wary of a language that puts "pure" right in the name. Purity is nice and often beneficial, but there's a point where it just becomes a crutch. That point is where the FP adherents and the mainstream diverge.


It’s not syntax; it’s just a function (<>). Also, how would you ”improve” the list of records?


> It’s not a syntax, it’s just a function (<>)

Doesn't matter, it's annoying to type and read. What's wrong with '+'? (Rhetorical question, nothing is wrong with '+')

> Also how would you ”improve” list of records?

Admittedly, I have not thought of a better solution, but it looks noisy, whereas supposedly such a language should look "clean".

Perhaps something like this:

    examples:{title:string, gist:string} = [
          { "Algebraic Data Types", "37c3c97f47a43f20c548" }
          { "Loops", "cfdabdcd085d4ac3dc46" }
          { "Operators", "3044550f29a7c5d3d0d0" }
          { "Records", "b80be527ada3eab47dc5" }
          { "Recursion", "ff49cc7dc85923a75613" }
          { "Do Notation", "47c2d9913c5dbda1e963" }
          { "Type Classes", "1a3b845e8c6defde659a" }
          { "Generic Programming", "3f735aa2a652af592101" }
          { "QuickCheck", "69f7f94fe4ff3bd47f4b" }
      ]


Putting addition and concatenation under the same operator massively backfires when you start using math libraries. All of the language's semantics are thrown out the window when you can realistically either sum or concatenate the same two elements, but the operator can only be used for one case (example: Python and NumPy).
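The same collision exists in plain JavaScript, where '+' means both addition and concatenation and switches between them by coercion:

```javascript
console.log(1 + 2);     // 3
console.log("1" + 2);   // "12": + silently switches to concatenation
console.log([1] + [2]); // "12": both arrays coerce to strings first
```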

For the list of examples, you can just use tuples if you want the "clean" look. There is no need to change the record semantics, which are the same in almost all languages.


> All of the language semantics is thrown out of the window when you can realistically sum or concatenate the same two elements, but the operator can only be used for one case (example: python and numpy).

Huh? You can't realistically "concatenate" two numbers and you can't "add" two strings. You can only concatenate two strings, or a string and an (implicitly converted) number. You can only add two numbers, or the elements of two number arrays (or matrices). Plus isn't the "sum operator" (Σ). There generally is no sum operator, there's a sum function (which works on iterables).

So what's the issue? It doesn't cause problems unless you confuse types, which shouldn't happen in a statically typed language.

> For the list of examples, you can just use tuples if you want the "clean" look.

Is that really true though? Can I initialize an array of records with tuples that are implicitly named?

> There is no need to change the record semantics which is the same in almost all languages.

This is basically C/C++ syntax, which somehow manages to be less verbose than many supposedly terser languages.


I do not do any web-related programming but was quite intrigued and impressed by a language called Opa. Could anyone watching this space comment on what is happening in the Opa world?


I scanned the home page for any reference to how you interface with the browser API in PureScript. Is it any different than in JS?


Most of it is already wrapped in a well-maintained package: https://pursuit.purescript.org/packages/purescript-web-dom/2...

But doing FFI and just using JS when needed is very easy: https://github.com/purescript/documentation/blob/master/lang...

So, to answer the question: no, it's exactly as in JS, because _it is_ JS.
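As a rough sketch of that FFI pattern (file and function names invented here; the exact module mechanics vary by compiler version), the JavaScript half is just an ordinary function:

```javascript
// greet.js: the JavaScript half of a hypothetical PureScript FFI module.
// In a real module this function would be exported, and the PureScript
// side would pair it with a declaration along the lines of:
//   foreign import greet :: String -> String
function greet(name) {
  return "Hello, " + name + "!";
}

console.log(greet("PureScript")); // "Hello, PureScript!"
```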


The biggest problem with PureScript right now is so many of the libraries have little traditional documentation.

They often just include generated module level type/function docs as in the `purescript-web-dom` one you linked to. Meanwhile the developer just wants a quick overview and some examples of querying the DOM in order to evaluate it.

I tried using PureScript last year but I was having dependency problems like crazy. Largely because everything was very alpha and dependency chains were unreliable.

It's good to see the community maturing; there's still some way to go, which is expected with any new language. I might try it out again this weekend as I'm a fan of Haskell.


Agreed. Documentation is hard in all languages though, and it usually gets fixed slowly over time as langs mature.

If you had dependency problems you were likely using bower (where dependencies are "resolved", and this leads to broken states in some cases). You might want to try a "package set" approach instead - i.e. there's a curated set of library versions that are known to work together, so "dependency problems" become impossible for users: https://github.com/spacchetti/spago


What's the deal with being tied to bower as the package manager?


It isn't necessarily tied to it, though. A lot of people are using Spago (https://github.com/spacchetti/spago). Bower is fine, though, for how they're using it. Oftentimes, in Elm, I'd kill for the ability to relink libraries locally like Bower lets you do, while working through a PR of an external library for my use case.



Misread as "Make the Leap from JavaScript to PostScript".

Now that I would have been on board with!!

:-)


What about HypeScript... anyone using that?


What distinguishes PureScript from Elm?


Type variables, type classes, parametric polymorphism. Elm is similar to Golang missing generics; it's a large gaping hole.


Typescript.


[flagged]


Please don't post unsubstantive comments here.


I didn't do that; this was a legitimate question. Apart from that, if you're not a mod you really can't tell me what to do.


I'm afraid I am a mod.

If you throw around a term like "real programming language" in that way, that's flamebait, which isn't ok here. Would you mind reviewing the site guidelines and taking the spirit of this site to heart when posting? We'd appreciate it.

https://news.ycombinator.com/newsguidelines.html


Prove that you're a mod.


What defines a “real” programming language?



