
Does anyone have any idea how much performance is being left on the table by writing these compile-to-js compilers in JS? Could we get another 10x improvement by writing the transpilers in something like Go/Rust/C++? Frontend compile performance is beginning to get painful with large codebases...


Depending on what you're doing, yes I think there can be a significant speedup.

We had a custom parser in our build toolchain, written in Perl, that needed ~40min for a normal build. I rewrote it in F# (similar performance to C#) and got it down to 3min. Because I thought it would be fun, I also ported it to Rust, and it now runs in 15 secs (mostly because of memory-safe, zero-copy string handling and generally better control over memory allocation).
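
A minimal sketch of what I mean by zero-copy (hypothetical code, not our real lexer): tokens borrow &str slices out of the input buffer instead of owning copies, so lexing a whole file allocates almost nothing beyond the token vector:

    // Hypothetical sketch: tokens borrow slices of the input rather than
    // allocating a String per token, so lexing is essentially allocation-free.
    #[derive(Debug)]
    enum Token<'a> {
        Ident(&'a str),
        Number(&'a str),
    }

    fn lex(input: &str) -> Vec<Token<'_>> {
        let bytes = input.as_bytes();
        let mut tokens = Vec::new();
        let mut i = 0;
        while i < bytes.len() {
            let start = i;
            match bytes[i] {
                b'a'..=b'z' | b'A'..=b'Z' | b'_' => {
                    while i < bytes.len()
                        && (bytes[i].is_ascii_alphanumeric() || bytes[i] == b'_')
                    {
                        i += 1;
                    }
                    // The token borrows a slice of `input`; nothing is copied.
                    tokens.push(Token::Ident(&input[start..i]));
                }
                b'0'..=b'9' => {
                    while i < bytes.len() && bytes[i].is_ascii_digit() {
                        i += 1;
                    }
                    tokens.push(Token::Number(&input[start..i]));
                }
                _ => i += 1, // skip whitespace/punctuation for brevity
            }
        }
        tokens
    }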

In C# it could probably have been a bit faster than in F# if you used some of its optimization capabilities, but I doubt you could get close to Rust. Even reading one of the files in .NET takes as long as Rust takes for everything.

In C++ it would definitely be possible to reach Rust performance, it's just way harder to keep your memory intact without the borrow-checking compiler.

Now I don't know how Perl and JS compare, but I'd guess they're in a similar ballpark for performance.


Modern JS engines can be significantly faster than Perl in many workloads. Perl’s closest performing neighbours are Ruby, PHP and Python.


I don't know, and I'll take your word for JS (on V8). For Perl vs Python, my experience is that they are in a similar ballpark, but Perl has an edge on text processing and Python on numerics (mostly because of Perl's better regex engine and numpy being pretty good in Python). So for parsing stuff, Perl is still usually the faster alternative.

I still doubt JS/V8 is faster than one of the VM languages (.NET or JVM), they don't need to be interpreted and have much less dynamic stuff so they can optimize better.


My general intuition is that you shouldn't expect JS to beat Java or .NET, but you should expect it to be by far the fastest among similarly-dynamic languages. Browsers have been under fierce competition for many years, and are backed by well-funded engineering teams, so JS performance in particular has repeatedly pushed the limits on what kind of optimizations are possible in a dynamic language. Other dynamic languages haven't had the same incentives.


For small code bases yes, or for simple transpilers. But for large codebases (like say, Google Gmail/Docs/etc) a global optimizing compile tends to be dominated by the O(f(x)) of the optimization passes, which tend to run in a fixed point loop.

If you rewrote something like Closure Compiler in C, I doubt you could get more than a 2x-4x speedup. The bigger fruit lies in minimizing the amount of work you do each time through the optimization loop.
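
Roughly the shape of the problem, as a hypothetical sketch (not Closure's actual architecture): every pass traverses the whole AST, and the driver re-runs the full set of passes until nothing changes, so the pass count multiplies into every iteration:

    // Hypothetical sketch of a fixed-point pass driver (not Closure's real code).
    // Each pass walks the whole AST and reports whether it changed anything;
    // the driver repeats all passes until nothing changes or a cap is hit.
    struct Ast; // stand-in for a real AST

    trait Pass {
        fn run(&self, ast: &mut Ast) -> bool; // true if the AST was changed
    }

    fn optimize(ast: &mut Ast, passes: &[Box<dyn Pass>], max_iters: usize) {
        for _ in 0..max_iters {
            let mut changed = false;
            for pass in passes {
                // Every pass costs at least one full traversal, so a
                // rarely-useful pass taxes every iteration of the loop.
                changed |= pass.run(ast);
            }
            if !changed {
                break; // fixed point reached
            }
        }
    }

With N passes and K iterations to converge you pay roughly N*K full traversals, which is why dropping or cheapening passes buys more than a faster host language.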

The problem with most hobby projects that write transpilers is, people take a simple app like TodoMVC and say "look how amazing the edit/refresh is with this transpiler", but design decisions made early can come back to bite you much later when you're trying to handle a codebase with hundreds of thousands of lines.

Transpiled higher level languages tend to encourage people to create many layers of abstraction and re-use a ton of existing libraries, which tends to bloat the inputs to the compiler.


> a global optimizing compile tends to be dominated by the O(f(x)) of the optimization passes

Really good point. Closure Compiler, at least historically, rejected many valid optimizations because they aren't common enough to justify growing the number of passes.

> If you rewrote something like Closure Compiler in C, I doubt you could get more than a 2x-4x speedup.

Closure Compiler is written in Java. I don't know exactly where the state of the art of JIT is right now, but I would bet C wouldn't be that much faster.


Why not allow the user to specify the level of optimization?


It's been a while since I looked at Closure's codebase but, as I recall, the individual optimizations are not loosely coupled; they are somewhat dependent on each other and specifically ordered, so "lower optimization levels" would involve manually deciding which optimizations could be removed from the pipeline without adversely affecting the others. But I could be wrong on that.

More importantly, this is a compiler targeted at one-time compilations that permanently reduce large JavaScript payloads served across millions of downloads, not a compiler that is required during development. As such, blunting its effect to save a few seconds is pretty meaningless, so I doubt the maintainers ever considered "less optimization".

That said, it does allow for "dangerous" but more aggressive optimizations that require assurances from the JavaScript or you'll break the code. In that way, Closure offers user-specified levels of optimization.

EDIT: A secondary and less-obvious effect is that using a smaller number of total optimizations produces more internally consistent code, as opposed to the unusual, internally unique constructions that rare optimizations produce. Internal consistency is great for the next step after compilation: gzip compression.


You might be interested in my work on sorting and clustering code to improve compression efficiency: http://timepedia.blogspot.com/2009/08/on-reducing-size-of-co...


> As such, blunting its effect to save a few seconds is pretty meaningless, so I doubt the maintainers ever considered "less optimization".

Yes, but still, some projects are orders of magnitude larger than other projects. Also, some users might be willing to wait an hour, others only a minute.


The point I was trying to make is that, in practice, everyone will run at full optimization, since that's the point of something like Closure. It's not gzip where "good enough" exists sometimes. JavaScript compilers are all about saving users time. Because of that, offering a product that breaks deployment cycles becomes a non-starter.

There are, essentially, an infinite number of optimizations you could make to Closure, though probably several thousand are reasonable. Every marginal optimization needs to run through the entire AST, and many of them require prior optimizations to be re-run. As 'cromwellian pointed out, the number of passes is the dominating factor in speed. At some point, it's no longer worth it.


Yes, there are diminishing returns, where you spend polynomially more time to get an extra 0.2% code size reduction. At some point, you need to exit the optimization loop early.

For Google production code, we typically let things run long, because if you shave off say, 30k from Gmail * 1 billion active users, you've just saved a lot of bandwidth.


The Closure compiler's optimization baseline is drastically ahead of any other JS optimizer. It's a diminishing returns sort of thing I would think.

I'd be interested in seeing what a large typescript project would look like run through https://github.com/angular/tsickle.


Some examples of "transpilers" targeting JavaScript but written in other languages:

https://github.com/google/closure-compiler

https://github.com/BuckleScript/bucklescript

https://github.com/fastpack/fastpack

Seems like, on average, it does offer a performance boost.

And there is some additional work providing JavaScript parsers for Rust (which you could build tools like Babel on top of): https://github.com/dherman/esprit


I think part of BuckleScript's speed may come from the source language, OCaml, being easier to parse. There may also be fewer passes necessary to transform OCaml to JavaScript.


BuckleScript using Reason (with a syntax similar to JS, and a slower parser than vanilla OCaml) is still one of the fastest around. Mostly a case of very careful and dedicated engineering here (I work on Reason with the author of BuckleScript). Additionally, we try to work smart and delegate most of the build process to Ninja, which itself is one of the fastest around.

But I believe the topic here was the runtime performance of a JS toolchain written in another language, not the build speed of working in that language itself. In which case you'd still get some wins writing a JS toolchain in BuckleScript (compiled to JS), just from the JIT-friendliness of the BuckleScript JS output.

But realistically, you’d be compiling to native OCaml through the same codebase. We did see a 10-25x perf jump from converting a part of a Babel pipeline to native OCaml. I mean, these languages are basically designed over decades with AST manipulation in mind, so that’s not surprising.


People like to claim that JavaScript is only about 2-4 times slower than C++, and maybe that's true for heavily optimized JavaScript that gets to run for a long time. Personally, I rewrote an ES5 parser in Rust and saw about a 16x speedup on real-world JS source code (whole directories, not just individual files), which I suspect is closer to what you'd see in reality.

People vastly overestimate how long most programs actually run (such that V8's optimizations would really kick in) and also the extent to which V8's optimizations can actually be applied. There is a link elsewhere in here suggesting that V8 can eventually get to about 2x faster than AOT JavaScript, but it doesn't show significant improvements until 100,000 iterations over the same 100 lines. Besides that being an unrealistic microbenchmark, almost no repositories come close to 10 million lines of code, so by the time V8 is done optimizing, a much faster program will have finished long ago.


See https://github.com/alangpierce/sucrase/issues/216 for some investigation I did into webassembly. V8 seems to be pretty good at optimizing your JS code if you give it long enough and write relatively C-like JavaScript in the first place.

I certainly thought about writing this in Rust or C++, and still plan on exploring that. Still, it's nice to stay in the JS world, e.g. easy integration with webpack and all of the other tools.


JS is pretty fast nowadays. I _guess_ you could make it 2x as fast by using C++ instead, but on the other hand the code might be harder to write and contain more bugs. These tools are used by hundreds of thousands of developers, and if the bug count goes up and development of the tools slows down, the tool would probably just lose out to the existing JS-based tools. Just my thoughts.



