In particular, it seems that "site" isn't precisely defined. It seems to be based on domains, but backed by a human-curated list of "sites": <https://github.com/publicsuffix/list>.
So it's different than Chrome's "every webpage gets a separate process".
Chrome's policy is pretty much the same; while it can end up creating a process per tab under most conditions, the guarantee it actually makes (in modern versions of Chrome) is that sites (including cross-site iframes) are isolated into different processes. It uses the PSL to determine which pages count as the same site, just like Firefox does.
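Roughly speaking, a "site" in this sense is the scheme plus the registrable domain (eTLD+1) derived from the PSL. A hand-wavy sketch of the idea, not any browser's actual code, with the suffix list cut down to a few hard-coded entries:

```js
// Hand-wavy illustration only; a real implementation consults the full
// Public Suffix List. A tiny hard-coded subset stands in for it here.
const PUBLIC_SUFFIXES = new Set(['com', 'io', 'github.io', 'herokuapp.com']);

// Registrable domain ("eTLD+1"): the longest matching public suffix plus one label.
function registrableDomain(hostname) {
  const labels = hostname.split('.');
  for (let i = 1; i < labels.length; i++) {
    if (PUBLIC_SUFFIXES.has(labels.slice(i).join('.'))) {
      return labels.slice(i - 1).join('.');
    }
  }
  return hostname;
}

// Two URLs belong to the same "site" when scheme and registrable domain match.
function sameSite(urlA, urlB) {
  const a = new URL(urlA);
  const b = new URL(urlB);
  return a.protocol === b.protocol &&
         registrableDomain(a.hostname) === registrableDomain(b.hostname);
}

console.log(sameSite('https://mail.example.com', 'https://docs.example.com')); // true
console.log(sameSite('https://alice.github.io', 'https://bob.github.io'));     // false: github.io is a public suffix
```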
I don't know if "most conditions" is even true. Even when it's only running a handful of processes and I have plenty of RAM free, I cannot convince it to use more than one process for Twitch tabs.
The bug might be fixed right now, but yes, I definitely wanted it, because opening a Twitch tab was consistently causing the video in the old tab to hang for a couple of seconds.
> There's a flag related to isolation in your chrome://flags that will do per-origin.
What flag is that?
I even tried setting --process-per-site-instance and it had no effect.
I think there are some restrictions on tab "navigation source". (Something about a fairly obscure JavaScript feature that links tabs opened via click navigation, if I recall correctly.)
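For context, the feature being alluded to is presumably window.opener: a tab opened by a plain target="_blank" click (or window.open) keeps a scriptable link back to the page that opened it, which ties both pages into one browsing context group. A rough sketch, assuming that is indeed the feature in question:

```js
// Open a new tab the way a plain target="_blank" click would.
const popup = window.open('https://example.com/live', '_blank');

// Inside that new tab, window.opener refers back to the page that opened it,
// which ties both pages into the same browsing context group, so the browser
// can't freely put them into separate processes:
//   console.log(window.opener); // -> Window of the opening page

// Opting out severs the link and lets the two tabs be isolated from each other:
window.open('https://example.com/live', '_blank', 'noopener');
// or, for links: <a href="..." target="_blank" rel="noopener">
```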
Does this also happen when you type the Twitch URL in a new tab?
Yes. Or even if I have another tab on YouTube or whatever and type in Twitch, it will close the YouTube process and switch to sharing the existing Twitch process.
The public suffix list is also used by other browsers, not just Firefox, to determine whether resources are cross-site (see below). So I think it's a pretty authoritative list; also consider that domains are added by formal request of the domain holder, not as a result of someone's curation.
That list is the reason why cross-site checks behave differently across two subdomains like [subdomain].herokuapp.com (treated as different sites, so they can't share cookies) compared with two subdomains of the form [subdomain].[myowndomain.ext] (treated as the same site)[1] - the reason for this difference is that herokuapp.com is part of that list.
[1] unless you added your own domain to the public suffix list.
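To make the difference concrete, here is a rough sketch of how the list changes what a page may do with cookies (myowndomain.ext is just a placeholder; each snippet only makes sense when run on the respective host):

```js
// On https://myapp.herokuapp.com: herokuapp.com is on the Public Suffix List,
// so it is treated like a top-level domain and this cookie is silently rejected.
document.cookie = 'session=abc; Domain=herokuapp.com; Secure';

// On https://app.myowndomain.ext: myowndomain.ext is NOT on the list, so this
// cookie is accepted and becomes visible to every subdomain of myowndomain.ext.
document.cookie = 'session=abc; Domain=myowndomain.ext; Secure';
```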
One of the maintainers of the PSL (Ryan Sleevi) has written on HN before that they'd sure like it if people leant on the PSL less rather than more.
It's a nasty hack, the successor to even worse proprietary hacks but still something we ought to strive to get rid of.
I can see exactly why it was the choice here, and I don't blame Mozilla for choosing it, but we're not going to make things better if nobody gets out and pushes.
That said, since we're stuck with the PSL for the foreseeable, I sure would like it if Mozilla shipped a way for extensions to just consult Firefox's built-in copy of the PSL, rather than needing to either build yet another awful hack or ship the entire PSL again in an extension.
How do you propose getting rid of the PSL? I don't see alternatives to having an authoritative publicly available list, unless we change the current standards somehow?
> I sure would like it if Mozilla shipped a way for extensions to just consult Firefox's built-in copy of the PSL
the PSL is available at https://publicsuffix.org/list/public_suffix_list.dat - as noted elsewhere in this thread, it is also used by other browsers. I guess the one built into Firefox is just downloaded from there and cached? If so, why would you want that over the other?
I do not have a concrete proposal. If I did I'd probably be too busy arguing about it with other people in that space to comment here.
> I guess the one built into Firefox is just downloaded from there and cached? If so why would you want that over the other?
If your extension is 10 kB of JavaScript and you typically update it once or twice a year to tweak things, it's crazy that the total extension size is now over twenty times bigger and you need updates at least every month or so, because otherwise things might not work for some users.
If your extension wraps, say, the New York transit map, or Wikipedia's list of English monarchs, then fine, there's no reason Firefox would know those; you need to ship or fetch the data. But the PSL is necessarily built into Firefox. They do have the data, you just can't access their copy.
I was thinking about downloading the data at runtime, not baking it into the extension source. Obviously I don't know what your extension does, and maybe there are reasons why this isn't possible (e.g. you need offline support).
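For what it's worth, a rough sketch of what downloading the list at runtime could look like (no caching or error handling, and the wildcard "*." and exception "!" rules are ignored; whether this fits a given extension is another question):

```js
// Fetch the canonical list and keep only the rule lines (drop comments and blanks).
async function loadPublicSuffixes() {
  const res = await fetch('https://publicsuffix.org/list/public_suffix_list.dat');
  const text = await res.text();
  return new Set(
    text.split('\n')
        .map(line => line.trim())
        .filter(line => line && !line.startsWith('//'))
  );
}

// Usage (in an async context): check whether a given domain is a public suffix.
const suffixes = await loadPublicSuffixes();
console.log(suffixes.has('herokuapp.com')); // true
console.log(suffixes.has('example.com'));   // false
```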
This is really interesting. Prior to this, Firefox's isolation model was much weaker than Chrome's due to only having a pool of 8 content processes. If I'm reading the technical blog correctly [1], this will move to a process-per-site model without also doing process-per-tab as Chrome does, i.e. if you have several tabs open on the same site, they'd be in the same process. This seems much less resource intensive than Chrome's model while still delivering similar security properties.
This is a common misconception. Chrome doesn't technically do process-per-tab.
Chrome's model can most succinctly be described as process-per-domain, although even then, there are rare instances where two tabs opened on different domains will actually share the same process.
But either it never actually was, or they abandoned it as impractical even before release (https://stackoverflow.com/q/42804 has answers agreeing it isn’t process-per-tab on the day after the first release). So either Google lied, or they released the comic including a glaring and rather significant factual error (even if it had been true when Scott first drew the pages).
It’s frustrating when parties pull these shenanigans, making big claims around things like security and performance predicated on points that are simply not true, but never retracting those points properly or repudiating them, so that the misconception persists.
Sadly, process-per-site also means memory usage will skyrocket, which the linked post doesn't mention.
It's ridiculous to think that a budget laptop with 4 GB of RAM suddenly isn't enough to browse the Web comfortably. All thanks to Meltdown and Spectre.
If browsers are careful not to use CPU and mem, web developers will just bloat their sites even more because there is room for it. Let browsers bloat, it will slow down website bloat.
Complementary to this, one can use the Temporary Containers addon to get isolation of e.g. cookies. I've set it up to run one container per domain, and it works really well. I hope they merge this into Firefox at some point.
I ended up having to disable my containers plugin due to syncing issues and, later, CPU usage. It wasn't terrible on a better processor (like 1-2 cores at 50-100% consistent usage), but on my old Core 2 Duo ThinkPad it was basically useless. And on any laptop that was unacceptable.
I like the idea of containers, and will probably revisit periodically to see if whatever was fubar on my account is resolved (there's little, if any, logging, so it's hard to really dig in).
When Chrome was new and shiny, I used it for a time. Then, the first time I found myself needing to kill Chrome because it was completely locked up, I found myself staring at a wall of Chrome processes in the task list, not knowing which one I needed to kill. At the time, I thought the idea of a separate process for each tab was silly. Though, with Firefox moving towards this model, I guess the engineers at Google were prescient about that tradeoff.
I do use a lot of tabs, so I fear I'm going to find myself facing the same problem I faced with Chrome: a site misbehaves and locks things up, crap, which process do I kill? A way of tracking which tab maps to which process would be nice, so the next time I trip over a badly-coded page, I don't have to kill everything just to get my browser to respond again. Lazyweb question to y'all: is there a feature in Chrome or Firefox that can do this (mapping tab/page -> process), or have I just stumbled upon a side-project idea?
When a tab freezes, I just pull up activity monitor/top and look for the Chrome process using the most CPU. It's almost always the culprit.
I also like to occasionally sort by memory usage and kill the biggest Chrome processes. Chrome is nice in that it will show you when a process crashed, so what I do is kill the biggest memory hog, and then see what tab crashed. Then I do it again a few times.
This at least tells me which processes use the most RAM over time and should be recycled. (Spoiler alert: it's always Gmail and then GCal.)
It works when one or more of your tabs are frozen but the UI is still responsive, e.g. you are still able to switch to other tabs. If your Chrome is completely frozen, i.e. you can't even open the Chrome task manager, then you usually have to restart the browser.
Chrome does not "lock up", at least on Windows without extensions. Individual tabs may crash or lock up, but the rest of the interface hasn't done that for many years.
Process Explorer (from sysinternals) lists processes as a tree so it's easy to find and kill the root Chrome process. At a glance it looks like all non-root processes have a "--type" parameter given to them. The root process has the simplest command line with only "--remote-debugging-port" being passed.
to offload idle tabs. The downside is that it doesn't sync with Firefox, so anything I may need to open in another Firefox instance, I just bookmark.
The grouping is nice because it gives me a reference to a time I was doing some particular research/reading. And what I find is that there really isn't too much that I need to cross over; work stuff stays on my work computer, personal to personal, etc.
Off-topic, but Mozilla blog articles, like the click-through-for-more-details one, always have the most awesome images. They almost tell the story without a need to read the text. Another one I remember is the one on WebAssembly [0]. Similar style of images.
They really allow you to scroll through the post quickly and see if it is interesting to read in detail.
Can anyone explain the relationship to the Firefox "Electrolysis" initiative better than this[0]? It looks like Electrolysis was just making the browser kernel <> IPC layer and now Fission is actually divvying up the processes by origin.
Hi, co-author of the blog post here. There is a more detailed blog post explaining how Site Isolation is better than the Electrolysis architecture here - https://hacks.mozilla.org/2021/05/introducing-firefox-new-si... (also linked to from the security blog post). Hope this is helpful!
Thanks for this link. Not sure how I missed it when it's the very last word, haha.
I'm not sure what gave me the impression, but in my mind "process-per-tab" and "Electrolysis" were linked; that was a misconception:
>In great detail, (as of April 2021) Firefox’s parent process launches a fixed number of processes: eight web content processes, up to two additional semi-privileged web content processes, and four utility processes for web extensions, GPU operations, networking, and media decoding.
>While separating content into currently eight web content processes already provides a solid foundation, it does not meet the security standards of Mozilla because it allows two completely different sites to end up in the same operating system process and, therefore, share process memory. To counter this, we are targeting a Site Isolation architecture that loads every single site into its own process.
> I'm not sure what gave me the impression, but in my mind "process-per-tab" and "Electrolysis" were linked; that was a misconception:
Your impression was mostly correct. Electrolysis is basically process-per-tab until you reach eight tabs, but after that, tabs start sharing those eight content processes.
Correction to my earlier statement: the initial version of Electrolysis had just one content process (that could be sandboxed apart from the browser parent process), but was soon followed up with "e10s-multi" with multiple content processes.
I enjoyed the illustrations, but you should try looking at your article in Firefox for Android: all pictures overflow to the right and it's not even possible to scroll horizontally to see the rest.
Any news about the memory usage overhead this brings? The original design goal when the work on site isolation started was 1 GB overhead for a browsing session with 100 separate origins (can't remember how many tabs that was supposed to correspond to, although due to iframes it was definitely less than 100 tabs).
Was this goal reached in the end, or perhaps even surpassed, or missed after all?
I guess this also makes adblockers even more valuable in terms of saving memory, since each blocked third party-iframe that doesn't load is potentially one additional process that doesn't have to be created…
I think the overhead is something more like 15MB per process, on Windows. It is higher on other OSes, due in part to the way they load executables. In practice, the total overhead is less bad than you might expect, because people usually don't have that many unique sites open. Telemetry shows that unique sites per tab decreases as the number of tabs increases.
It really depends on what web sites you have open. If you have a single tab with an ad-laden news site, the overhead will be high, but if you have a bunch of Google Docs tabs open, there's no overhead.
Okay, so it seems the original target wasn't quite reached unfortunately.
On the other hand, I guess you're right that "people usually don't have that many unique sites open", so the original design value of 100 separate origins was probably purposely chosen to be on the large side. Thinking about it, not having that many unique sites open usually fits my usage patterns, too. The unknown factor I can't really judge is how many iframes with potentially separate origins the pages I normally visit use, though.
Looking at it positively, one additional potential benefit could be that I have a few long-lived tabs that I always keep open – under the current model, this means that the content processes associated with those tabs never die and possibly slowly accumulate cruft and memory fragmentation from additional tabs that happen to be loaded in them (and later closed again).
Under the new model on the other hand, closing all tabs associated with a domain should be enough to get the associated content process to exit and free up really all memory used by those now closed tabs.
In case anyone is wondering about stability: I've been running this for a couple of months now and stability has gotten pretty darn good. I'm excited to see it go into stable builds soon.
Container Tabs are completely orthogonal. A site loaded in a tab which is contained in this way cannot access your global cookie jar, for example. If you visit a site with a Facebook Like button on it, then Facebook will not receive the same cookie from you that it would have received if you had loaded the site in a non–contained tab. This is true whether or not the site has been given its own process to live in. The converse is also still true; non–contained sites still have access to your global cookie jar even if they’re isolated in their own process.
Putting sites in their own process mitigates against Spectre–like attacks, but it doesn’t do anything for higher–level problems like third–party cookies.
I think they are complementary, since one is about browser site isolation, and one is about process isolation on the computer.
Using temporary containers, multi-account containers, and site isolation, along with a number of other privacy/security addons such as uMatrix, LocalCDN, and many others, I have not noticed any slowdown.
So far it seems to work fine for me too. Can you share your list of security/privacy addons? I've used uMatrix but never heard of LocalCDN. Was wondering what other gems you may have found.
Here is a complete list of security/privacy addons I am using, a small number of which I have disabled, as I regularly toggle them depending on what I am doing at the time:
Enable the Strict Enhanced Tracking Protection setting; this turns on Dynamic First Party Isolation, which is the native version of what Temporary Containers is aiming to do.
Based on the processes I'm now running, it seems that tabs for the same domain but in different containers do (as one might expect) count as separate origins for the purpose of creating one process per origin.
So they do different things, and interaction between the features appears to work without issues.
This is fantastic work that will greatly improve the security of Firefox; big thanks to those who have worked on it. Is there data on what effect it will have on memory use?
One of the primary reasons I use Firefox is that it uses significantly less memory than Chrome, and the entire OS seems to function better as a result (I've seen the most stark difference on macOS). I had been under the impression that most of the reason Chrome uses so much memory is its multiprocess model.
I understand that maybe we need to give that up for better security, but it would be nice to know if that's indeed the tradeoff being made here.
Yes, more processes come with the cost of more memory, but we have been reducing the process overhead in Firefox in order to minimize that cost as much as possible. We will continue doing memory reductions and will have numbers to report when we roll out to all our users. Thank you for your continued support and use of Firefox.
Do you need more data? If so, what's the best way for me to add to it? Would that be installing Nightly, setting fission.autostart to true, and enabling some telemetry?
Does anyone else get ANNOYED by the UI of Mozilla's blog on mobile?
Looking at the navbar, the horizontal and vertical alignment is all over the place, the search input has no placeholder or label, background colors are inconsistent, and paddings are just bizarre.
It's garbage. The new tab button doesn't follow the status bar to the top of the screen, so unless you like it at the bottom you can't browse one-handed, given the giant stretch between opening the tab manager and opening a new tab.
Not to mention the fact that the extension ecosystem is still crippled. There's still no user-agent switcher, meaning there are sites I literally cannot access from my phone without installing an old version of Firefox Mobile or going back to using Dolphin Browser. "View Desktop Version" perversely still tells sites you're on a mobile browser.
Could anyone here who has been using it report their experience with site isolation turned on? Do you find anything it breaks or makes more difficult? Has it altered your privacy/security practices (in terms of addons, other settings, etc.)?
How good is Firefox sandboxing these days? Last time I looked it was years behind Chrome's, but site isolation is definitely a step in the right direction.
It would be sad if one day Chromium removed Manifest v2 and there was no alternative.
Does anyone remember Firesomething? The extension that randomized the name of Firefox to OceanMonkey, WaterHorse, FlameTiger, etc? Powerful extensions and much better UI are the main reasons so many of us switched to Firefox back in the early 2000s.
Chrome didn't have this until 2018, as the parent link shows. This is not about multi-process architecture. Firefox is < 3 years behind, not 10, not 14.
Site Isolation launched in Chrome in 2018, but the work started in earnest in 2012 -- see the below check-in. The idea in Chrome dated to before the Chrome 1.0 launch; it was the subject of Charlie Reis's PhD dissertation and he interned on Chrome pre-public launch.
Site isolation proved to be the biggest refactor in Chrome's history, and was one of the motivating reasons for the WebKit/Blink fork. Making site isolation work touched a huge number of features, since handling iframes out of process has a way of making simple things incredibly complicated.
The example I always gave was: imagine how the "find text in page" browser feature would be implemented. With the entire document in-process, it was a simple for loop. With the document and its subframes sharded across multiple processes, it is now a distributed search problem that requires handling of out-of-order results and stitching them into a traversal order. What's more, to achieve Chrome's security goals, you want to avoid introducing functionality that would allow the [presumed-compromised] process of the outer document to query the contents of the inner document via the find in page feature. So you can't simply do this as a peer-to-peer query between the renderer processes; it needs to be coordinated by the main browser process.
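A toy sketch of that coordination pattern (nothing like Chrome's actual implementation; the frame objects and the simulated IPC call below are invented purely to illustrate why the browser process has to do the stitching):

```js
// Toy model: each "frame" stands in for a document living in its own renderer
// process, and this async call simulates an IPC round trip that can resolve in
// any order.
function queryRendererForMatches(frame, searchText) {
  return new Promise(resolve => {
    setTimeout(() => {
      const matches = [];
      let from = 0;
      while ((from = frame.text.indexOf(searchText, from)) !== -1) {
        matches.push({ frame: frame.name, offset: from });
        from += searchText.length;
      }
      resolve(matches);
    }, Math.random() * 100); // replies come back out of order
  });
}

// The privileged "browser process" fans the query out to every frame's renderer
// and reassembles the replies into frame-traversal order itself, so a (possibly
// compromised) outer document never gets to query an inner one directly.
async function findInPage(framesInTraversalOrder, searchText) {
  const perFrame = await Promise.all(
    framesInTraversalOrder.map(frame => queryRendererForMatches(frame, searchText))
  );
  return perFrame.flat(); // Promise.all preserves the order the queries were issued in
}

// Example: an outer document plus one cross-site iframe.
findInPage(
  [{ name: 'outer', text: 'spectre and meltdown' },
   { name: 'inner-ad-frame', text: 'meltdown again' }],
  'meltdown'
).then(results => console.log(results));
```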
I was wrong about the actual security policy, but multi-process is still a big security win.
And not so related to this, but from what I've heard about cracking competitions a while ago, Firefox was not even included; it was considered too easy. Maybe my sources were just bad.
And I say this as a Firefox user for the last decade or more.
Chrome was a new project and didn't have to deal with the legacy of being built on top of the same source code as Netscape Navigator. I do not understand why you are trying to use this to bash Firefox, as if they weren't as competent for taking ~10 years after Chrome to implement multi-process browsing. Iterating on legacy software and patterns is a truly painstaking process.
But yes Electrolysis is the initiative that you should have referred to in the original comment.
What's wrong with the web and browsers? It's honestly pretty incredible - we have a system where we can load and execute arbitrary code from any number of third parties near-instantly and it actually works and isn't a complete security disaster.
People object to the massive effort it takes to create or maintain a browser engine which can practically browse the modern web. We're down to 3 players now actually trying to do this (Mozilla, Google, Apple). It conflicts with the idea that you can fork software if you dislike what it's doing, because even starting from existing code, it would be a lot of work to keep up with changes so you don't get left behind.
So people imagine splitting off a simpler web, where the main focus is on reading documents, rather than interactive applications, and browsers could be much simpler. But it's pretty hard to see how this could actually work in practice.
There are many distinct PDF readers. Making a document browser shouldn't be more complicated than making a PDF reader. PDF readers show how this works in practice.
I guess the issue is that a modern web browser is a sandboxed application runtime which also happens to function as a document browser. It's been going in that direction for a long time (since webmail became common), and there are real advantages of the browser as a platform for applications - it's cross platform by default, and it has pretty good sandboxing.
So probably the most you can hope for is that we split the document part of the web from the application part, so that it's easier to make a viable document browser. But it's not clear what advantage this offers for anyone who's not trying to make their own browser. Security is probably much simpler for the document browser, but the logins and sensitive data you care about securing are probably in the application browser anyway. And we've spent the last 20 years blurring the lines between documents and applications (think of a Github issue page, for instance), so even if it was possible to access information as a pure document, there would be advantages to looking at it in an application browser.
Gopher is the opposite of new. Gemini is interesting for sure, but it's not an alternative to the web as they fully admit. It's an alternative to a subset of the web. Let's call it the document web. Blogs and articles and so on. But as entertaining as it is, it is a very very small subset.
Respectfully, you're failing to engage with the purpose of the project.
> it's not an alternative to the web
Right. You can't have a lightweight drop-in alternative to the web, pretty much by definition. Any platform capable of everything modern browsers are capable of, is by definition enormously complex.
> it is a very very small subset
That's not a flaw, it's a design goal. It isn't meant to be a half-baked portable GUI toolkit the way the modern web platform is; it's meant to be a simple and minimal format, stable and easy to implement. There are other formats somewhat like this in common usage, like man pages and, of course, Markdown.
That's the point. If you want the web, you need today's browsers. If you want a subset of the web, your "document web" for example, you can get away with something simpler.