Hmm, I haven't checked the code yet, but Firefox 37 on Arch with NoScript crashed for me on a reload. The first time, I just got the NoScript detection.
Actually, it didn't crash my Firefox 40.0a. It eventually asked me if I would like to stop running scripts on this page, which I did, and I am now typing this in the same Firefox instance. Only it's now using 4.6 GB of memory. :)
...as displayed with ^U, except the "AAAA...." wasn't visible until I pasted it in this text box. I'm not sure, but I suspect the length of the AAAAs was growing rapidly. There's a lot of memory and more than one core on this system, so I think I killed the tab, which appeared only as a blank page, before it became too destructive.
As a Firefox user I can confirm this does indeed crash Firefox. However, it also affects Chrome to an extent (at least on Windows): you get the "Aw, Snap!" page shortly after it loads.
Seems like it just overloads the browser using this technique in the URL: data:text/html,<script>location+=location+'A'.repeat(100000000);</script>. I haven't tested any other browser, but to me that technique would crash any browser, probably even worse on mobile.
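Rough back-of-the-envelope sketch of why this blows up so fast (scaled-down pad length, tracking only string lengths instead of actually navigating, so it's safe to run):

```javascript
// Sketch of the growth, not the exploit itself: each "reload" executes
// location += location + 'A'.repeat(pad), i.e. newLen = 2 * len + pad,
// so the URL length grows exponentially with every navigation.
const initial = "data:text/html,<script>...</script>".length;
const pad = 100; // scaled down from 100000000 for illustration

let len = initial;
for (let i = 1; i <= 10; i++) {
  len = 2 * len + pad; // length after one simulated reload
  console.log(`reload ${i}: URL length ${len}`);
}
// With pad = 1e8, the URL is past a gigabyte after only a handful of reloads,
// which matches the multi-GB memory use people report above.
```

So even killing the tab quickly still means the browser has already built several enormous strings.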
Seriously, no disclaimer that this will hang Firefox, and will likely lock up a system for 20+ seconds before it will let you kill Firefox (Linux)? (Thankfully I run Linux in a VMware image, so I can go play a game while waiting for Linux to figure out the process went rogue.)
I didn't realize HN was the new 4chan/8chan. Sigh.
I continue to be incredibly sad that any OS considers it OK to lock up the entire UX for a program running amok. Most people would consider such an OS defective and look at what else is out there rather than ssh into it to fix things. I can't really fault them; I do wonder what is wrong with the people making such systems who think this is an acceptable situation.
When a program aggressively allocates memory, buffers must be flushed back to disk, stuff may need to be moved into swap, etc. I/O-bound stuff gets higher priority because... I don't know why... probably throughput is greater that way, or perhaps it's more urgent. I'm not sure there's a solution other than having enough RAM that you're never I/O bound. Perhaps if Linux weren't so aggressive about caching it would happen less, but things would be slower overall.