Probably more interesting is Rust vs. C++. Rust is actually beating C++ on several benchmarks, by a comfortable margin.
Go is a modern scripting language, stripped of the features that make scripting languages impossible to multithread and intrinsically slow, statically typed, and with useful things built on top of the multithreading. (The Perl/Python/PHP/etc. series of languages is very heavyweight: huge VMs, dynamic everything. If you had set out from day one to write one of them to be threaded you could probably have done it, but bodging it on after the fact was virtually impossible. IMHO "dynamic everything" turned out to be a great deal more dynamic than was really necessary. But it was a worthy experiment!) Rust is a C++ replacement. Go is very fast relative to its real competition, but it is certainly not a C++ replacement. It also seems to me that Go is likely to converge on being easier to develop in; I don't think Rust is going to get all of this quite for free. Both will sit on an interesting part of the bang-for-the-buck curve that is currently not well occupied. That is, it is occupied, but it is not well occupied.
I, too, would say that I find this a strange comparison. I use Go for web services. I would use Rust for low-level software: an OS, perhaps, or a reference implementation of some protocol I'd like to implement.
Please illuminate me as to how you have arrived at the conclusion that Go is a scripting language. It doesn't check any of the scripting language boxes I've ever heard of.
"It doesn't check any of the scripting language boxes I've ever heard of."
Well, as I sort of implied but will now spell out, the 1990s produced a certain stereotypical definition of a "scripting language" that I think is ultimately overspecified. A 1990s scripting language is extremely dynamic, including the type system, everything's boxed, everything's a hash table of one form or another, and in general everything converges to produce a language that is very fluid, but ultimately very slow. These were overreactions to the languages of the day, which were, it is easy to forget, quite hard to use in practice. Even when you use C or C++ today, you're still using it with radically better community best practices, libraries, and tool support. 1990s C++ was some hard stuff to work with even before you let threading in. Into that world, Perl and Python were a breath of fresh air!
It is easy to get deceived by the constant stream of benchmarks that are 50% faster than last month and the constant stream of incredibly-micro micro-benchmarks where JS or something is as fast as C at adding together small integers in an array or something. The reality is that after years and years and years of work, they're fundamentally slow. Yes, even Javascript; remember, for asm.js to produce such fantastic speedups, Javascript must be leaving that much performance on the table.
Go ends up in practice working very much like a scripting language; line counts are virtually identical in my experience, the logic is very similar, and the fluidity is there, partially by virtue of ignoring the very things that Rust brings out and makes explicit (ownership and such). I can't say with a straight face that the logic comes out the same, the way a Perl program can be virtually transliterated into Python, because the ability to use concurrency seriously means you don't have to bend into the contortions you sometimes do to simulate it in those older languages; but otherwise it works similarly, except that you use structs and interfaces instead.
Scripting 1.0, being made when this world was younger and 150MHz was a fast machine, made some mistakes in thinking that compilers would eventually be able to overcome their fundamental weaknesses. It was a neat experiment to make everything wildly dynamic, but in the end, you end up paying and paying and paying and paying and paying for that dynamism, yet you only use it rarely. You may be inclined to object, saying "I use it all the time! I use Ruby gems to assemble all sorts of complicated and interesting objects!", but that's because we humans are really bad at orders of magnitude inside of programs... you set up your objects essentially once at the "beginning" of some computation, then you pay for that dynamism billions of times. You pay for the awesome exception handling all the time, yet only use it rarely. You pay... etc.
Scripting 1.0 also accidentally interpreted the failures of the languages of the day, which were all statically typed, as being significantly caused by static typing. I'd suggest we've since learned that the static typing of the time was simply bad; the languages were not all that great, and the ways they were used were even worse (inheritance favored over composition, etc.). So they whiplashed wildly to the opposite extreme, which turns out to have been an overreaction similar to the one described above. You pay for all that typing dynamism all the time, yet in practice, legal inputs to a given function are characterized by a limited number of types, legal outputs likewise, and all this dynamism mostly goes to waste. (Go's interfaces capture 95% of the use cases here while still being perfectly statically typed. There are other possibilities as well for other languages to explore.)
Scripting 2.0 is fixing that problem by creating languages that judiciously use dynamism instead of indiscriminately slathering it everywhere; that are merely somewhat slower than C rather than the night-and-day difference of 1.0; that are still fairly easy to program in even if they are not as wildly dynamic; and that execute an order of magnitude faster than Python or Perl or JS while also obtaining the advantages of more static typing for software quality. Go is merely the most prominent of this line of languages; there's a whole bunch of them bubbling up right now. Julia is trying to fit into this for numeric computing, LuaJIT is sitting there sipping the punch wondering why all the other languages are so many years late to the party, there's another language that's popped up on HN every so often in the past few months that fits in here but escapes my mind, and I think we're going to see more of this. With processor speeds having stalled, it becomes a difficult proposition to go up to a programmer and promise them the moon, as long as they're willing to be 20-100x slower than the competition... and, uh, no, you won't just catch up in 5 years of processor advancement, because raw clock speeds are stalled.
So I find in practice Go is a Scripting 2.0 language. You write it at roughly Python speeds, it runs much faster and works much better in the cloud where that actually matters, and part of the way it accomplishes this is to do things like gloss over memory management details and exactly how interfaces are satisfied (dynamic lookup at runtime, compare with wycat's comments in the thread on how Rust does it), and generally focuses on being relatively easy and fast to write. Looking at it from another direction, compare Go to Rust and Go looks more like a scripting language than a competitor to Rust.
I could write a similar post on how Rust is also a harbinger of a new wave of languages that are trying to replace C++, but make it easier to write correct code than incorrect code. (C++, alas, makes both quite easy, and makes it hard to tell which is which. It is true that correct code is possible, but it takes more skill to get there than it should.) Nim, for instance, fits into this mold, though it is a great deal less popular. I expect more. The dependent-type community is also struggling mightily in the direction of putting this sort of thing out, and I wouldn't be surprised to see something moderately practical pop out in about 5 years or so.
I know the narrative right now is that Rust and Go are in some sort of fierce competition, but personally I see a world where they live in harmony, with a great deal less overlap than most people currently see, just as Python would still not displace C even if it were in fact 25 times faster.
It's really quite an exciting time and I can't wait to get my hands on these languages. I am ready to leave C and C++ behind. Of course I've already got Go in hand, but I look forward to Rust. (As I am personally a "cutting edge" but not a "bleeding edge" sort of guy in language choice, Rust and I are not yet a match. I consider this just a matter of time.)
All just my opinion, of course. I am not the keeper of the term "scripting language." YMMV.
On the benchmarks, sure. That wouldn't surprise me. In my real code, though, it writes like well-written Python, minus an occasional penalty for a list comprehension or something, which I don't really use that often anyhow.
If you want to play Perl golf, Perl will blow it away. Then again, my well-written Perl and Go come out pretty similarly.
You don't think that a scripting language is just a language for writing scripts -- environment-specific interpreted code?
I don't see how inventing terms like "scripting 2.0" makes anything clearer. It just reeks of unjustifiable rationalization and square-peg-round-hole thinking.
I agree with most of what you said, but Go is not a scripting language. The defining characteristic of scripts is that they are interpreted. Go compiles ahead of time to a statically linked binary.
"Scripts are interpreted" is a bit 1990s; as the Scripting 1.0 languages have tried to squeeze every last bit of performance out of the constraints they labor under this is less true than it used to be. And more to the point, rather than a binary distinction like you could have claimed in 1995, it's much, much more a continuum now, and with computers running a bazillion times faster, much less important. I'm not that worried about applying a useless criterion to languages that dates from an era whose desktop computers would aspire to keep up with a low-end 'feature' phone.
I think the former poster meant that scripts are JIT compiled rather than interpreted. But either way, with scripting languages, you redistribute the source code. With AOT-compiled languages such as Go and Rust, you redistribute the binary (though some will always prefer to compile on the target PC).
That's not to say that I think Go and Rust are the same classification of language, though. I think "scripting vs compiled" is an overly simplistic view. Go compares better against the languages that are AOT-compiled to byte code, such as C# and Java. These kinds of languages are also seen in some traditionally scripting-dominated markets (eg web development) while still being used as systems languages as well.
"Scripting language" doesn't imply "interpreted" at all. Otherwise you would just say that: interpreted. "Scripting language" denotes a strong tie between a system and the language used to script it. In other words, you will often have to use a given scripting language because it is the interface to the system you want to script. We should really always say: "scripting language of X".
"Fortran and Cobol were the scripting languages of early IBM mainframes. C was the scripting language of Unix, and so, later, was Perl. Tcl is the scripting language of Tk. Java and Javascript are intended to be the scripting languages of web browsers." (from Being popular, http://www.paulgraham.com/popular.html)
Depends what you define a 'scripting language' as, doesn't it?
Some people argue python isn't a scripting language, because it 'compiles' to byte code.
What is the defining characteristic of a scripting language for you?
That it runs in an interpreter? That you distribute source code files that are executed by a monolithic binary? That you can't compile it into a static binary?
Which of these doesn't tick at least one of those?
The point the parent post is making is that 'scripting language' is an arbitrary term that isn't well defined. It tends to traditionally mean "relatively easy to code in language that runs slowly and has a horrible syntax". ie. bash scripts, perl scripts, make files and the like.
...but technically speaking, what makes these 'scripting languages' as distinct from other languages, like say, java? Or, indeed, go?
The parent isn't arguing that go is a scripting language (I wouldn't say that either), but that it shares some common features, both good and bad (ease of writing, gc) with some other 'scripting languages'.
"but that it shares some common features, both good and bad (ease of writing, gc) with some other 'scripting languages'."
And that the useful criteria are changing, as in my reply to Retra. Who in 2014 is sitting here selecting a language based on whether or not it is "compiled" or "interpreted" as opposed to picking it based on libraries, or simple real speed (as opposed to worrying about where the speed comes from)? Who saw the wave of Javascript JIT interpreters and yelled "Crap, there goes Javascript, because now it's 'compiled' and that change now makes it suck"? The way we're going to be characterizing language families in 2020 will make such 1990 concerns look quaint and there's little reason not to start thinking about it now, since 2014 looks more like 2020 than 1990.
Scala hits all of those checkmarks (and checks them off more completely than go) and is explicitly designed such that you can use it as a scripting language.
Go is garbage collected. I don't think that's a necessary or sufficient criterion to call something a scripting language, but it makes sense why people might consider it such.
Yeah, usual benchmarks caveat, take with grain of salt. Still it's an interesting result. Many languages aren't even within spitting distance of C++, after all. And that's actually really good performance for a 1.0 of something like Rust; usually at this phase even a language putatively tuned for performance is still hanging around at 1/2 - 1/4 the speed of C, at least based on what I see. Rust is doing something right.
It's highly unlikely Go will be able to compete in terms of compiler optimizations because of its goal of fast compilation speeds. The majority of the time Rust spends compiling code is within LLVM, last time I checked, because LLVM does quite a lot of optimization.
Then there's the fact that Go's language features do not promote highly performant code (GC, heap allocation, dynamic dispatch), which a few people in this thread have touched on.
While better looking code is highly subjective, Rust has indeed traded off some aesthetics for safety, performance, etc... However, that's not to say Rust code is ugly, you can certainly write very nice looking code in Rust.
> It's highly unlikely Go will be able to compete in terms of compiler optimizations because of its goal of fast compilation speeds.
For a given piece of code as written today, its compilation speed will naturally improve over time as processors get faster. Optimizing it to a certain degree becomes faster for the same reason. Therefore, assuming a fixed threshold for what constitutes "fast [enough] compilation", potential for more optimizations becomes available over time.
> However, that's not to say Rust code is ugly, you can certainly write very nice looking code in Rust.
That's true for many languages, including PHP or Perl. The problem is: once you use these languages more, you discover that most people don't do it (for various reasons, one of them often being that they're trying to be clever).
If you judge your programming languages based on personal aesthetics of the 'beauty of the code', you're wandering off into la-la land.
Go has a better syntax than Rust, objectively, because it refuses to create new operators that are not orthogonal to existing ones. That makes it easier to parse because there are fewer 'code synonyms' (eg. the Rust Fn<(Arg1, Arg2), (Rtn)> is identical to Fn(Arg1, Arg2) -> Rtn; to be fair, Go has its share of these irritations as well, eg. x := 1 vs var x = 1, but fewer of them).
It's got nothing to do with how pretty or ugly the code is.
Most of the complexity in this code is due to parallelism, and in fact it would be prettier these days (Rust is moving fast). A plain, simple version (like the Go one) would likely be even simpler, thanks to the functional aspects of Rust.
I blame it on Rob Pike saying it was a systems language.
He later regretted it, but it's too late: everybody just benchmarks Go against every systems language out there.
Go is better off as a middleware language rivaling Node.js (JavaScript), imo. Everybody sees Node.js as some sort of web dev language in the same vein as Ruby, Python, and PHP. But the web frameworks I've seen for it are mostly incomplete or just very bare-bones compared to RoR, Django, and Laravel. Express is very, very bare-bones, and you wouldn't prototype with it or do anything quick with it. You can argue that you would use Node.js for long-term stuff, and my counterpoint would be that if your code has to survive for more than 6 months, use a typed language. I don't believe a dynamic, loosely typed language such as JavaScript is suitable. Go would be better in this regard.
> Express is very, very bare-bones, and you wouldn't prototype with it or do anything quick with it
The point of the Node ecosystem is to be library-based. It's quite rare to have full-fledged frameworks dictate an environment the way Rails does (an approach I personally hate). Go has the same principle afaik with its web libraries.
I'd personally rather use a dynamic language than a statically typed one with a weak type system. While the former can produce more errors at times, the latter is oftentimes just too inflexible and painful to use properly.
This lines up with my experience with the two languages as well. Usually I could get Rust to be virtually as fast as C or C++, while Go would be roughly twice as slow. However, I still prefer Go due to the quick compilation times and ease of development.
Had I been asked "which do you think is faster, on average?" I would have guessed Go. I'm not sure why but that was my gut feeling. Rust feels newer and less developed (though they are both similar in age), to me, and also more ambitious in some regards.
But, this turns that on its head. Microbenchmarks are, of course, to be taken with a grain of salt, but it's still a useful data point. More useful would be a discussion of why it is so.
A design goal of Rust is to target the performance of well-written, safe C++. It doesn't (today) even ship with a garbage collector, discourages heap allocation (which is your only option in normal Go), and avoids dynamic dispatch in the vast majority of cases. In contrast, idiomatic Go code is garbage collected, heap allocation heavy, and dynamic dispatch heavy.
That isn't always going to mean that Rust code is faster, but for benchmark games like this, when Rust is slower it almost always means that the benchmark is wrong.
I read an article recently about memory in Go (the "why is my go program 138GB!?" article), which was enlightening and also weird. I don't have any experience with either language, but have plans to learn a smattering of both in 2015.
As for dynamic dispatch...does Rust not support it, or is there some language construct that allows solving the problems dynamic dispatch solves without imposing annoying limitations or needless verbosity? And, aren't there fast implementations of dynamic dispatch?
> As for dynamic dispatch...does Rust not support it, or is there some language construct that allows solving the problems dynamic dispatch solves without imposing annoying limitations or needless verbosity?
Rust supports dynamic dispatch if you truly need it, but the normal approach is to use trait bounds, which the compiler expands when used. Here's an example:
    fn main() {
        let r = MemReader::new("hello world\n".as_bytes());
        get_line(&r);
    }

    fn get_line<R: Buffer>(r: &R) -> IoResult<String> {
        r.read_line()
    }
In this very simple example, the `get_line` function takes any value that implements the `Buffer` trait, which provides the `read_line` method. When `get_line` is called inside of `main`, the compiler creates a special version of `get_line` that takes a `MemReader`, and dispatches to its implementation of `read_line` statically. In practice, this feels a lot like dynamic dispatch, but using trait bounds always results in static dispatch. Also, in this example, the `MemReader` is stack-allocated, and lent to the `get_line` function.
In very rare cases, you may want dynamic dispatch. For example, imagine you have an array of Buffers, and want to read a line from each of them. In this case, you use a "trait object" (a `Box<Buffer>`), which moves the object to the heap and allows dynamic dispatch.
In practice, I have written many thousands of lines of production Rust code and have only encountered a need for trait objects a handful of times. This means that idiomatic, normal Rust code is stack-allocated and statically dispatched.
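To make the trait-object case concrete, here is a minimal, hypothetical sketch: the `Line` trait and its implementors are made up purely for illustration (and the trait-object type is spelled with the later `dyn` syntax), but the mechanism is the same as with a `Box<Buffer>`:

    // A made-up trait, standing in for something like Buffer above.
    trait Line {
        fn read_line(&self) -> String;
    }

    struct Greeting;
    struct Question;

    impl Line for Greeting {
        fn read_line(&self) -> String { "hello world".to_string() }
    }

    impl Line for Question {
        fn read_line(&self) -> String { "how are you?".to_string() }
    }

    fn main() {
        // Each element is a boxed trait object; the concrete type is erased,
        // so read_line() goes through a vtable (dynamic dispatch), and the
        // values themselves live on the heap.
        let lines: Vec<Box<dyn Line>> = vec![Box::new(Greeting), Box::new(Question)];
        for line in &lines {
            println!("{}", line.read_line());
        }
    }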
The verbosity of trait bounds has decreased more and more over time, and my favorite proposal (by aturon) for making it pretty close to maximally ergonomic is:
    fn main() {
        let r = MemReader::new("hello world\n".as_bytes());
        get_line(&r);
    }

    fn get_line(r: &impl Buffer) -> IoResult<String> {
        r.read_line()
    }
Technically it's not true that heap allocation is your only option in Go. In Go you have zero options: the compiler automatically does stack allocation in some cases.
Yes, escape analysis does tend to eliminate some unnecessary heap allocations, but those are specifically isolated to a single subroutine (still, it's better than not having it). Whereas with Rust, you can safely pass references to stack-allocated memory around to other subroutines, given proper lifetimes.
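As a rough, made-up illustration of that difference on the Rust side (nothing here is from the benchmarks):

    // `sum` only borrows the slice; it never takes ownership, and nothing
    // in this function forces a heap allocation.
    fn sum(xs: &[i64]) -> i64 {
        let mut total = 0;
        for &x in xs {
            total += x;
        }
        total
    }

    fn main() {
        let values = [1i64, 2, 3, 4]; // a plain stack-allocated array
        // The borrow checker guarantees this reference can't outlive `values`,
        // so handing stack memory to another subroutine is safe, with no
        // escape analysis required.
        println!("{}", sum(&values));
    }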
There's not even a chance at a competition, because of their respective design goals.
Go is explicitly meant to have _really_ fast compilation times, which precludes the Go compilers from any expensive optimisations. Rust, on the other hand, has an explicit goal of being as fast as idiomatic/safe C/C++ while removing the burden of writing the safety yourself, so it really _has_ to be fast.
An older version of the benchmarks discussed on the Go mailing list was done with GCCGO (which I assume is Go with a GCC backend). Given that GCC is historically ahead of LLVM on CPU optimizations (a gap that has shrunk in recent years), I'm guessing that's not a complete answer.
But, if it is the primary source, it would tell me that Go will close the gap as the compiler becomes more advanced and takes advantage of more CPU optimization features. Also, it's an interesting difference that Rust is an LLVM language while Go is compiled to native code directly; probably not from a performance perspective (at least not in the long run), but from a development perspective...is it easier or harder to develop new language features on LLVM or a native compiler?
Note: despite the "VM" in the name LLVM is a native compiler: the rust compiler constructs the appropriate internal representation to LLVM and passes it to LLVM for optimisation and native code generation. There's not really a fundamental difference between this and writing the code generation step in pure Rust other than the latter missing out on all the benefits (speed, architecture support etc) of LLVM.
That is to say: there's not much difference for difficulty of developing new language features, but avoiding LLVM makes it harder to develop new fast language features.
You can typically predict performance behaviors across a whole language based on memory allocation patterns and the type system.
If automatic memory management is built in, that adds some overhead to most allocation situations (and encourages an architecture that exploits easy allocation - e.g. Java libraries where everything you do involves a new()). There are differences between garbage collection (mark-and-sweep) and reference counting as well; an RC system is "pay for what you allocate" with predictable scaling, making it more appropriate to employ for hitting real-time deadlines; however, the state of the art in generational mark-and-sweep GC does better for average-case throughput. Or to put it another way, if you have a single batch process that everything goes through, you can usually count on a GC system doing it faster than an RC equivalent. But if you want to run several things concurrently and tally results, or you are running an interactive app and care about getting your latency numbers down, the pauses involved in GC may be more problematic to you. (The semantic downside of RC, which has also made a lot of languages avoid it, is its inability to deal with circular data references.)
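As an aside, the cycle problem is easy to demonstrate with Rust's library `Rc` type; this is purely an illustrative sketch, not anything from the benchmarks:

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        // Each node may point at another reference-counted node.
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
        // Close the cycle: a -> b and b -> a. Both counts are now 2, so when
        // `a` and `b` go out of scope the counts only fall to 1 and neither
        // node is ever freed -- the classic RC-and-cycles leak. A tracing GC
        // would collect both.
        *a.next.borrow_mut() = Some(b.clone());
    }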
Of course, manually managed memory lets you design any allocation strategy you want, and so it can let you blow away automatically managed systems on microbenchmarks like these - stuff that starts mattering in the real world only after you've scaled up near to the constraints of the target hardware and any overhead is a Really Big Problem. There's definitely some more experimental automatic technology out there that "breaks the rules," but in terms of what you see in today's production environments, you can start estimating performance characteristics just by knowing what type of allocation is being done.
The other big issue, of course, is the type system. The design of the type system determines the constraints and difficulty of automated optimizations; classically, Fortran was one of the best languages for low-level optimization because it made things very easy for the compiler, and offloaded many considerations onto the programmer. Today C has grown into a roughly equivalent space (pointer aliasing, the last big issue, was tackled in C99). Rust can do everything C does, but it also has a much smarter type system backing it up and enforcing memory safety. Dynamically typed languages like Python, Ruby, Lua etc. are essentially always slower and more memory-hungry because of their intentional flexibility - to do as much as they do at runtime, they end up retaining more stuff per piece of data and paying a bigger cost to manipulate it. LuaJIT is an example of best-in-class technology for dynamic languages, and it's still not as fast as Go on most of the benchmark game. [0]
Where Go stands in these two concerns is roughly comparable to Java or C#: it uses GC, and it has an intentionally simplistic static type system which points you towards built-in data structures, which makes it easy to work with for average-case situations, but hard or impossible to "tune up" as needs grow more complex. And if you check the benchmark game, all three are in the same window of performance, with some variance that could mostly come down to implementation detail.
> Rust can do everything C does, but it also has a much smarter type system backing it up and enforcing memory safety.
This is why I have hopes that Rust will make it easier to match the performance of micro-optimised C code in the future whilst still maintaining clarity and safety (the compiler knows more invariants about the code, so can be more aggressive in its optimisations).
The benchmarksgame site no longer features LuaJIT benchmarks. The link you provided compares PUC Lua (an interpreter, no optimizer) against Google Go (AOT native-code compiler, weak optimizer).
So, Go is garbage collected and seems to rely on compiler optimizations for turning heap allocations into stack allocations (where applicable). Meanwhile, Rust is centered around manual memory management, discourages heap allocation if you can use stack allocation, and even has (or plans to have) a library for choosing different allocators when those give better performance (which should give an indication of the level Rust is aiming at). And it leverages an existing compiler architecture for generating the native code.
It was based on lack of knowledge about Rust. I didn't know any of those things about Rust. I assumed it was similar to Go (garbage collected, at the least). As I mentioned, I don't know a lot about either language; less about Rust. Which is why I mentioned I'd like to see a discussion about why Rust was faster.
Only three Rust benchmarks use unsafe code; in one of them it's virtually mandatory, and in another it's just one line. On a side note, the Rust compiler enforces naming conventions, which is kind of cute.
The reverse complement program certainly doesn't need unsafe code, but the safe rust version[1] takes 3x the execution time of the unsafe rust version[2]. By comparison the fastest Go version[3] is fairly idiomatic Go without unsafe memory management and takes just 2x the time of the unsafe rust version.
Also, which benchmark did you have in mind when you said unsafe code is mandatory?
These are REALLY BAD examples: the safe Rust version isn't parallelized, but the other two are. The unsafe version is just translated C, as people have already stated. A safe version these days (heading towards 1.0) shouldn't be far off the unsafe versions.
That specific safe version was, IIRC, a basic translation of the fastest C version. I'd guess that there are other safe variations that are more idiomatic and much faster.
Maybe a faster safe version should be contributed. The current state is that unsafe Rust takes half as much time as a safe Go program in this benchmark. That is not very interesting, because the benefits offered by Rust are being given up to perform better, for a problem that doesn't really need unsafe memory access.
The same is the case with spectral-norm: it's unsafe Rust beating safe Go there too.
Small, widely-verified blocks of unsafe code should be the norm if unsafe code is necessary. This does not mean "if you're using unsafe you've failed!"