In my experience comparing Chrome Canary with Firefox Nightly right now, SpiderMonkey and V8 each win on particular use cases (sometimes quite dramatically).
V8 tends to win on workloads that stress garbage collection, since it has a generational collector while SpiderMonkey's is only incremental.
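As a rough illustration of the kind of workload where this matters (a hypothetical micro-example, not from either engine's test suite): code that churns through many short-lived objects is cheap for a generational collector, which reclaims them in the nursery, but forces an incremental collector to do full-heap work.

```javascript
// Hypothetical GC-heavy micro-workload: allocates many short-lived objects.
// A generational collector reclaims each `point` cheaply in the nursery;
// a purely incremental collector traces them in the main heap.
function churn(iterations) {
  var sum = 0;
  for (var i = 0; i < iterations; i++) {
    var point = { x: i, y: i * 2 }; // dead immediately after this iteration
    sum += point.x + point.y;
  }
  return sum;
}
```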
SpiderMonkey wins on some computation workloads because V8 still has a big limitation: it often has to store floats as individual allocations on the heap. (That's the best explanation I can come up with, at least; it's impossible to tell for sure from the Chrome profiler.)
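To sketch what that limitation would look like in practice (an illustrative example, not a measured reproduction): a numeric kernel whose intermediates are non-integer doubles. If each intermediate has to be boxed as a heap allocation, the loop effectively becomes an allocation benchmark rather than an arithmetic one.

```javascript
// Double-heavy kernel: every intermediate value is a non-integer double,
// so an engine that boxes doubles pays a heap allocation per operation,
// while one that keeps them unboxed stays in registers.
function sumInverses(n) {
  var total = 0.0;
  for (var i = 1; i <= n; i++) {
    total += 1 / i; // double-valued intermediate on every iteration
  }
  return total;
}
```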
V8 also wins on some use cases that exercise newer ES5 features like Object.defineProperty, because its optimizations handle them better.
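For concreteness, here is the sort of ES5 pattern in question (a generic example, not a specific benchmark either engine team cited): an accessor property defined with Object.defineProperty, where fast property reads depend on the engine inlining the getter.

```javascript
// ES5 accessor property via Object.defineProperty. Optimizing a read of
// `rect.area` means the engine must recognize and inline the getter call.
var rect = { width: 3, height: 4 };
Object.defineProperty(rect, 'area', {
  get: function () { return this.width * this.height; },
  enumerable: true
});
```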
IonMonkey is definitely a big improvement over the previous generation of SpiderMonkey, though. They're closing the gap.
P.S. It's a little hard to measure this stuff because V8 has some optimizations that happen to be perfect for benchmarks but provide a much smaller win for real applications (loop-invariant code motion, for one). Nothing naughty going on here; you just need to be aware that the numbers are lies.
IonMonkey does loop-invariant code motion as well, and of course it's not "naughty"; it's a standard, widely used compilation technique. It's also exactly what you want: there are many cases where keeping an invariant statement inside a loop makes the code clearer, and you shouldn't have to notice that and make your code more awkward and less maintainable when it's something the compiler can handle easily.
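To make the technique concrete (a hand-written sketch of what the compiler does automatically): the two functions below compute the same result, and a compiler performing loop-invariant code motion effectively rewrites the first into the second.

```javascript
// Loop-invariant code motion, shown by hand. `Math.sqrt(factor)` does not
// depend on `i`, so a LICM-capable compiler hoists it out of the loop.
function scaleNaive(values, factor) {
  var out = [];
  for (var i = 0; i < values.length; i++) {
    var scale = Math.sqrt(factor); // invariant: same value every iteration
    out.push(values[i] * scale);
  }
  return out;
}

function scaleHoisted(values, factor) {
  var scale = Math.sqrt(factor);   // computed once, outside the loop
  var out = [];
  for (var i = 0; i < values.length; i++) {
    out.push(values[i] * scale);
  }
  return out;
}
```

The naive version is often the clearer one to write, which is exactly the point: the compiler, not the programmer, should pay for hoisting it.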
What it sounds like you're describing is that people are still writing terrible benchmarks. The days of this sort of thing on jsperf being useful are numbered:
function benchmark() {
    for (var i = 0; i < 1000000; i++) {
        var result = computationWithNoSideEffects();
    }
}
Yes, a good compiler will turn that into a no-op. And no, that's not being naughty either. It's just a terrible benchmark.
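One common way to keep dead-code elimination from hollowing out a micro-benchmark is to make the results observable (a generic sketch; `computation` here is a hypothetical stand-in for whatever you're actually measuring):

```javascript
// Stand-in for the work under measurement.
function computation(i) { return (i * 31) % 7; }

function benchmarkHonest(iterations) {
  var sink = 0;                   // accumulator the optimizer can't discard
  for (var i = 0; i < iterations; i++) {
    sink += computation(i);       // each result feeds the accumulator
  }
  return sink;                    // returning it keeps the whole loop live
}
```

Because the loop's results flow into a value the caller can see, the compiler can no longer prove the body is dead and delete it wholesale.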
(Also: when using Crankshaft, doubles in V8 are unboxed and not allocated on the heap.)
I've been benchmarking various methods recently and have also found that these optimizations make it basically impossible (or very hard) to benchmark correctly. Are there any resources about them?