Wait, did I just read that correctly? Microsoft is going to make their OS less responsive to the web by disabling JIT for browsers other than IE? IE is horrid, it just makes no sense why they would do this to the detriment of their users. Microsoft... this is why you're a curse word to front end devs.
The no-JIT restriction applies only to the Metro interface and ARM-based machines. This covers only tablet-like devices, and it exists for the same reason Apple does the exact same thing: the apps are all sandboxed and can't write executable memory regions. Normal x86 machines will behave just as they do now.
The security argument for not allowing other JITs that Microsoft and Apple keep throwing out there is really just a lame excuse not to let other browsers be more competitive than their own, and I'm sick of it. Other browser makers are just as responsible, if not more responsible, than them when it comes to security, so it's a really bad argument, and I'm surprised some people actually buy into it.
It's just about not allowing 3rd party code to have writable code pages. In principle, this is good - most programs don't write code to memory, unless they're suffering a security problem. It's just unfortunate that the few legitimate uses of this are also culled.
It's not good in principle. There are many legitimate uses of writeable code pages, and as long as they're not the default they are unlikely to cause security problems. Don't forget that these apps all run in a sandbox already, so an app exploit won't get you system access anyway. All the excuses Apple/Microsoft give are just that: excuses. The real reason interpreters and self-modifying code are prohibited is to prevent competition with the first-party browsers and app frameworks.
Apple prohibits them from even being built with the iOS SDK if they interpret code downloaded from the Web, which is naturally the case with browsers and JavaScript.
But IE10, unlike competing browsers, isn't just a "browser" on Windows 8; it's one of the fundamental development platforms built into the OS, along with .NET, WinRT, and Win32. Of course it has special privileges.
If it were a default prohibition that could be overridden for a project that really needs it, I wouldn't complain. But as it is it's an unreasonable cost for the benefit for everyone but the platform owner. (Cost: obvious. Benefit: some hacks get harder, but with no essential change to what's possible in the way that memory protection or high-level programming languages change the game. E.g. instead of native code your hack uses https://en.wikipedia.org/wiki/Return-oriented_programming .)
V8's "lead" on SunSpider is as much a fantasy as reality, since SunSpider performance is (IIRC) largely derived from how fast you can run setTimeout callbacks and not your actual JS performance. The tightly clustered scores on that graph make this pretty clear.
On the other hand, it's pretty interesting to me that we're now in the state where SpiderMonkey wins at Kraken and v8 wins at v8bench. I wonder if there's some unconscious bias involved there, where each test suite contains cases the engine's developers care about the most, so they end up winning that test suite?
SunSpider hasn't (ever) relied on setTimeout performance (and AWFY runs everything through JS shells, and the standalone harness doesn't rely upon any callback-providing host objects). The closeness in performance is simply because it practically doesn't do anything that challenging to optimize — the biggest challenge is getting the compile time/execution time trade-off right.
Kraken was developed fairly independently of SpiderMonkey, so I wouldn't say there's any deliberate bias there. V8, on the other hand, was explicitly optimized for the V8 benchmark suite as a deliberate aim, with several design decisions made to optimize for it, so its advantage there is unsurprising.
In my experience at present, comparing Chrome Canary with Firefox Nightly, SpiderMonkey and V8 both win on particular use cases now (sometimes quite dramatically).
V8 tends to win on stuff that depends on garbage collection since they have a generational collector, and SpiderMonkey only has incremental.
SpiderMonkey wins on some computation workloads because V8 still has a big limitation where it often has to store floats as individual allocations on the heap (that's the best explanation I can come up with, at least; it's impossible to tell for sure from the Chrome profiler).
V8 also wins on some use cases that make use of new ES5 features like Object.defineProperty, because their optimizations do a better job with them.
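For anyone unfamiliar, this is the kind of ES5 usage being discussed (the object and property names here are just made up for illustration): Object.defineProperty lets you create non-writable data properties and accessor properties, and engines differ in how well their optimizers handle objects defined this way.

```javascript
var point = {};

// Data property: non-writable and non-configurable.
Object.defineProperty(point, "x", {
  value: 3,
  writable: false,
  enumerable: true,
  configurable: false
});

// Accessor property: recomputed on every read.
Object.defineProperty(point, "double", {
  get: function () { return this.x * 2; }
});

console.log(point.x);      // 3
console.log(point.double); // 6
```

Hot code that reads properties like "double" goes through getter dispatch, which is exactly the sort of path an optimizing JIT has to handle well for these workloads.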
IonMonkey is definitely a big improvement over the previous generation of SpiderMonkey, though. They're closing the gap.
P.S. It's a little hard to measure this stuff because V8 has some optimizations that happen to be perfect for benchmarks but provide a much smaller win for applications (Loop invariant code motion, for one). Nothing naughty going on here, you just need to be aware the numbers are lies.
IonMonkey does loop invariant code motion as well, and of course it's not "naughty", that's a widely used and standard compilation technique. It's also exactly what you want: there are many instances when keeping an invariant statement inside a loop results in clearer code, and there's no reason you should have to realize that and make your code more awkward and less maintainable when it's something the compiler can handle easily.
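To make that concrete, here's a hedged sketch (the function and values are made up): the multiplication below is loop-invariant, so a compiler doing loop-invariant code motion computes it once instead of on every iteration, and there's no reason to hoist it by hand.

```javascript
// `scale * offset` doesn't change across iterations, so an optimizing
// JIT can hoist it out of the loop while the source stays readable.
function sumWithBias(values, scale, offset) {
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    total += values[i] + scale * offset; // invariant subexpression
  }
  return total;
}

console.log(sumWithBias([1, 2, 3], 2, 5)); // each element gets +10, so 36
```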
What it sounds like you're describing is that people are still writing terrible benchmarks. The days of this sort of thing on jsperf being useful are numbered:
  function benchmark() {
    for (var i = 0; i < 1000000; i++) {
      var result = computationWithNoSideEffects();
    }
  }
Yes, a good compiler will turn that into a noop. And no, that's not being naughty either. It's just a terrible benchmark.
(also, when using crankshaft, doubles in V8 are unboxed and not allocated on the heap).
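For what it's worth, one way to keep such a microbenchmark meaningful is to accumulate and actually use the result, so dead-code elimination can't discard the loop body. A sketch (the computation here is a made-up stand-in):

```javascript
// Stand-in for some side-effect-free work being measured.
function computationWithNoSideEffects() {
  return Math.sqrt(12345.678);
}

function benchmark() {
  var sink = 0;
  for (var i = 0; i < 1000000; i++) {
    // Accumulating into `sink` creates a data dependency on each call,
    // so the compiler can't eliminate the loop body as dead code.
    sink += computationWithNoSideEffects();
  }
  return sink; // Returning (and using) the value keeps it live.
}

console.log(benchmark());
```

This doesn't defeat every optimization (constant folding can still apply to the stand-in function), but it removes the most common way a "benchmark" silently turns into a noop.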
I've been benchmarking various methods recently and have also found that these optimizations make it basically impossible (or at least very hard) to benchmark correctly. Are there any resources about these optimizations?
Firefox stable is version 15, Chrome is 21. Since they both use a 6-week release cycle, and Mozilla is introducing this in Firefox 18, it should be compared to Chrome 24 or Chrome 25, depending on which is released first.
TBH, these numbers pale in comparison to the architectural differences. I don't care if FF is a few instructions faster than Chrome; that's good competition. Chrome and IE both have multi-process architectures, which make the whole experience significantly better. FF has yet to catch up to that.
Multi-process architectures do not guarantee fluidity. In the case of more than 15 tabs, Firefox nightly is more responsive on my laptop than Chrome Canary. YMMV
What about IE being multi-process makes the experience with that browser better? Without specificity, this is like me saying X is better than Y because of some attribute of X that Y does not have.
This is my first Hacker News post using Chrome. JaegerMonkey and TraceMonkey both failed, and I'm tired of waiting. The three current builds of FF (nightly, beta, and release) are horrendous on Mac. Two windows and 15 tabs (total) will cost you ~1 GB to ~1.5 GB of RAM. After sticking with FF since before it was called Firefox, I have made the move to Chrome. FF simply became unusable in a work environment.
I'm tired of Mozilla coming out with their righteous BS about standards, etc. They get distracted by every shiny object (mobile OS, mobile browser) when their flagship product is in dire need of some love (UI/UX, memory leaks, JS speed, etc.). Part of their goal this year was to close the gap with Chrome; for me it has only widened, and I don't think Chrome has improved that much...
Chrome for Mac is a 32-bit application. Firefox for Mac is a 64-bit application. If you force Firefox to open in 32-bit mode (by editing Firefox.app's "Get Info" settings in the Finder), its memory footprint is smaller (but you lose some of the 64-bit benefits of large address space and ASLR).
Also, I assume you are adding up the memory footprint of all of Chrome's processes? ;)
Thank you, but your post is somewhat patronizing. Yes, I have accounted for each Chrome process. And you forgot to tick one of the 'we're in denial that FF sucks' boxes...what plugins are you using (aka blame the plugins).
I'm opposed to hacking the settings and installing 32-bit versions. It should give somewhat acceptable performance out of the box. I'm not adding band-aids; FF has wasted enough of my time.
I wasn't suggesting that forcing 32-bit mode was a solution. I was just pointing out that comparing the memory footprint of 32-bit and 64-bit applications is not apples-to-apples.
Also, Mac OS X 10.6 and earlier do not support ASLR for 32-bit applications and ASLR's effectiveness is reduced with a smaller address space.
See Chromium Issue 18323 ("Need more bits: 64-bit Mac version") from 2009: http://crbug.com/18323