I don't think comments like these are helpful, or get us anywhere. The latency is not simply a bug to fix - it is a consequence of the compilation model, which is also what makes Julia so great.
It's like wanting to use Rust, but demanding that the Rust devs fix the annoying problem that you have to compile your code before running it. After all, Python doesn't have this problem, so why don't the Rust devs just stop being lazy and fix it like Python did?
Dynamic scripting languages like Python and Ruby all have JIT compilers; that isn't particularly unique to Julia.
Compared to those, though, Julia's startup costs are quite painful.
> The latency is not simply a bug to fix - it is a consequence of the compilation model
This statement would require some actual formal proof.
And I don't need to recompile glibc or openssl every time I make a binary on Linux, and I don't feel like I should have to recompile Plots.jl and all its dependencies in Julia every time I want to make a change in my script that uses it.
Yes, Python and Ruby both have JIT compilers, but they do not operate in the same way that Julia's JIT operates.
Numba is a tracing JIT -- Julia does not do that. It precompiles a static CFG (as much of the CFG as inference can concretize) before runtime.
For the parts of the CFG where inference cannot explore call edges, special calls are inserted which allow the runtime to return back to inference once the types are known.
Julia does not have a fallback "tracing interpreter" at all, it's all compilation. When compilation occurs and how it occurs for any specific user program depends greatly on how abstract interpretation learns about the CFG.
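To make that concrete, here's a small sketch (the function names are my own illustration, not anything from a real package) of the kind of call edge inference can and cannot concretize ahead of time:

```julia
# A type-stable call graph: with an Int argument, the call to f is a
# static edge, so inference can concretize everything before the first run.
f(x::Int) = x + 1
g(x::Int) = f(x) * 2

# A type-unstable call: the element types of `xs` are only known at
# runtime, so a dynamic-dispatch site is inserted here, and the runtime
# re-enters inference/compilation when it sees each concrete type.
h(xs::Vector{Any}) = sum(f(x) for x in xs)

# @code_warntype shows which parts inference could (not) concretize:
# @code_warntype g(3)
# @code_warntype h(Any[1, 2, 3])
```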
As to your latter comments, they are all false as well. Julia does not recompile Plots.jl every time you make a change to your script -- Plots.jl precompiles once, and only recompiles if a method definition invalidates something which has already been precompiled. The specific mechanism/relationship which Julia uses to detect an invalidation is called a call backedge -- you can think of it as a relationship between callers and callees, but designed to handle multiple dispatch and the specialization that that entails.
The first time, precompilation is slow -- because Julia is literally running type inference and then caching all parts of the CFG which could be inferred. But unless you are doing things which would (in general) not be performant (like invalidating a ton of cached method instances) -- the full precompilation stage should never occur again.
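A minimal sketch of the backedge/invalidation mechanism described above (a toy REPL session, not Plots.jl itself):

```julia
f(x) = 1
g(x) = f(x) + 1   # compiling g records a backedge from f back to g

g(2)              # compiles g against the current definition of f; returns 2

# Redefining f invalidates the cached compiled code for g via that
# backedge, so the next call recompiles g -- but only g and its
# invalidated callers, not the whole world.
f(x) = 2
g(2)              # recompiled; returns 3
```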
Only quibble I have is that Numba is actually an AOT, method-at-a-time JIT, last I checked in 2018... just much more rudimentary and limited than Julia's.
Most compiled languages allow for incremental build. I can understand that a clean compile needs 15 minutes or whatever. But why does it take long when you compile a small change? (or maybe this has been fixed?)
It doesn't take long to compile a small change. But the way compilation works is different than other languages. Julia doesn't cache compiled code (well, it caches some of it), because that's fiendishly hard to do in a dynamic language that allows dynamically redefining what functions mean. So whenever you start a new Julia process, it has to compile tons of things from scratch.
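You can see this per-process cost directly (timings will vary; the function is just an illustration):

```julia
f(x) = x .^ 2 .+ 1      # user code: not in the precompiled sysimage

@time f(rand(1000));    # first call: dominated by compilation time
@time f(rand(1000));    # second call: just execution, much faster

# Start a fresh Julia process and the first-call cost is paid again,
# because the compiled native code is (mostly) not cached across sessions.
```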
You can use the Revise.jl package, however. It keeps track of the files you've imported and updates the code whenever a file is changed. This causes only minor latency, similar to an incremental build.
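A typical Revise workflow looks something like this (the file name is illustrative):

```julia
using Revise

# Track a script; edits to the file are picked up automatically.
# Note: includet (track), not plain include.
includet("myscript.jl")

main()   # runs the current definitions

# ...edit myscript.jl in your editor...

main()   # Revise recompiles only the changed methods
```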
Julia isn't that unique: most dynamic languages compile code into bytecode on startup and execute it on a JIT VM, and they all allow dynamically redefining the world at runtime as well.
Revise.jl inside the REPL is annoying as hell to use when you start messing with structs.
It isn't a big ask to just have `julia whatever.jl` not take minutes to compile the entire world. Stop defending it like every Apple fanboy on mac forums trying to convince me that the touchbar is great.
And Revise.jl kind of proves the fact that incremental compilation is entirely possible. If it wasn't possible at all due to $maximum_dynamic_insanity that wouldn't even work.
> Revise.jl inside the REPL is annoying as hell to use when you start messing with structs.
> And Revise.jl kind of proves the fact that incremental compilation is entirely possible. If it wasn't possible at all due to $maximum_dynamic_insanity that wouldn't even work.
I agree it's disingenuous to say that Revise is the answer to all problems when struct redefinition is still not possible. Yes, you can toss them in a module, but then why allow defining them in global scope at all? Thus far I haven't seen any theoretical limitations either, since Common Lisp variants seem to be able to do something similar.
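For reference, the module workaround mentioned above looks like this -- re-evaluating the whole module replaces the old module (and its type) wholesale, rather than redefining the struct in place:

```julia
# Redefining a struct at top level with a different layout errors
# (at least on the Julia versions under discussion here):
#   struct Point; x::Float64; y::Float64; end
#   struct Point; x::Float64; y::Float64; z::Float64; end  # ERROR

# Wrapping it in a module works, because re-evaluating the module
# creates a fresh module containing a fresh type:
module Geometry
    struct Point
        x::Float64
        y::Float64
    end
end

p = Geometry.Point(1.0, 2.0)
```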
That said, I would not look at most other dynamic languages for this. Having a mandatory VM is an express non-goal in Julia, and if you want one then you'll have to face off against all the other people asking for better static compilation. Again, compiled lisps show us that a VM isn't even necessary to get this level of dynamism or low latency incremental compilation.
The claim around having to recompile the entire world has already been addressed in a sibling thread, so I'll not rehash it here.