Site Isolation in Firefox (blog.mozilla.org)
473 points by arthuredelstein on May 18, 2021 | hide | past | favorite | 115 comments


This provides more technical details: <https://hacks.mozilla.org/2021/05/introducing-firefox-new-si...>, which should be more interesting to HN than a marketing announcement.

In particular, it seems that "site" isn't precisely defined. It seems to be based on domains, but backed by a human-curated list of "sites": <https://github.com/publicsuffix/list>.

So it's different than Chrome's "every webpage gets a separate process".


The definition of site in this case is <https://html.spec.whatwg.org/multipage/origin.html#sites>, for both Firefox and Chrome. If you don't like reading specs, this blog post might be interesting to you <https://web.dev/same-site-same-origin/>.


Chrome's policy is pretty much the same; while it can generate a process-per-tab under most conditions, the guarantee it actually makes (in modern versions of Chrome) is that sites (including different-origin iframes) are isolated into different processes. They use the PSL to determine which origins belong to different sites, just like Firefox does.


I don't know if "most conditions" is even true. Even when it's only running a handful of processes and I have plenty of ram free I cannot convince it to use more than one process for twitch tabs.


> I cannot convince it to use more than one process for twitch tabs.

Do you actually want it to? Or are you just experimenting? FWIW, there's a flag related to isolation in your chrome://flags that will do per-origin.


The bug might be fixed right now but yes I definitely wanted it, because opening a twitch tab was consistently causing the video in the old tab to hang for a couple seconds.

> There's a flag related to isolation in your chrome://flags that will do per-origin.

What flag is that?

I even tried setting --process-per-site-instance and it had no effect.


There's "--process-per-tab" and Strict-Origin-Isolation; dunno if that'll work though.


I think there are some restrictions on tab "navigation source". (Something about a fairly obscure JavaScript feature that links tabs opened via click navigation, if I recall correctly.)

Does this also happen when you type the Twitch URL in a new tab?


Yes. Or even if I have another tab on youtube or whatever and type in twitch, it will close the youtube process and switch to sharing the existing twitch process.


They've been using the public suffix list for scoping cookies for ages. It's an important list.


“Site” is defined in the HTML Standard: https://html.spec.whatwg.org/multipage/origin.html#same-site


The public suffix list is also used by other browsers to determine whether resources are cross-site (see below), not just by Firefox. So, I think it's a pretty authoritative list, and also consider that domains are added by formal request of the domain holder, not as a result of someone's curation.

That list is the reason why same-site checks behave differently, e.g. across two subdomains like [subdomain].herokuapp.com (requests are considered cross-site) in comparison with two subdomains of the type [subdomain].[myowndomain.ext] (requests are considered same-site)[1] - the reason for this difference is that herokuapp.com is part of that list.

[1] unless you added your own domain to the public suffix list.
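The lookup behind this can be sketched in a few lines. This is a toy, not a browser API: a hardcoded two-entry suffix set stands in for the real list at https://publicsuffix.org/list/, and `registrable_domain` is an illustrative name.

```python
# Minimal sketch of how the Public Suffix List changes the "registrable
# domain" (eTLD+1). Two-entry suffix set for illustration only.
PUBLIC_SUFFIXES = {"com", "herokuapp.com"}

def registrable_domain(host: str) -> str:
    labels = host.split(".")
    # Find the longest matching public suffix, then keep one extra label.
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in PUBLIC_SUFFIXES:
            return ".".join(labels[max(i - 1, 0):])
    return host

# Because herokuapp.com is on the list, two customer subdomains get
# *different* registrable domains and are therefore cross-site:
print(registrable_domain("alice.herokuapp.com"))  # alice.herokuapp.com
print(registrable_domain("bob.herokuapp.com"))    # bob.herokuapp.com

# Ordinary subdomains of a private domain share one registrable domain:
print(registrable_domain("app.example.com"))      # example.com
print(registrable_domain("www.example.com"))      # example.com
```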


One of the maintainers of the PSL (Ryan Sleevi) has written on HN before that they'd sure like it if people leant on the PSL less rather than more.

It's a nasty hack, the successor to even worse proprietary hacks but still something we ought to strive to get rid of.

I can see exactly why it was the choice here, and I don't blame Mozilla for choosing it, but we're not going to make things better if nobody gets out and pushes.

That said, since we're stuck with the PSL for the foreseeable, I sure would like it if Mozilla shipped a way for extensions to just consult Firefox's built-in copy of the PSL, rather than needing to either build yet another awful hack or ship the entire PSL again in an extension.


How do you propose getting rid of the PSL? I don't see alternatives to having an authoritative publicly available list, unless we change the current standards somehow?

> I sure would like it if Mozilla shipped a way for extensions to just consult Firefox's built-in copy of the PSL

the PSL is available at https://publicsuffix.org/list/public_suffix_list.dat - as noted elsewhere in this thread, it is also used by other browsers. I guess the one built into Firefox is just downloaded from there and cached? If so, why would you want that over the other?


I do not have a concrete proposal. If I did I'd probably be too busy arguing about it with other people in that space to comment here.

> I guess the one built into Firefox is just downloaded from there and cached? If so why would you want that over the other?

If your extension is 10kB of Javascript and you typically update it once or twice a year to tweak things, it's crazy that now the total extension size is over twenty times bigger and you need updates every month or so at least because otherwise things might not work for some users.

If your extension wraps, say, the New York transit map, or Wikipedia's list of English monarchs then fine, there's no reason Firefox would know those, you need to ship or fetch the data. But the PSL is necessarily built-in to Firefox, they do have the data, you just can't access their copy.


I was thinking about downloading the data at runtime, not baking it into the extension source. Obviously I don't know what your extension does and maybe there are reasons why this is not possible (e.g. maybe that's not doable because you need offline support).


This is really interesting. Prior to this, Firefox's isolation model was much weaker than Chrome's due to only having a pool of 8 content processes. If I'm reading the technical blog correctly [1], this will move to a process-per-site model without also doing process-per-tab as Chrome does, i.e. if you have several tabs open on the same site, they'd be in the same process. This seems much less resource intensive than Chrome's model while still delivering similar security properties.

[1] https://hacks.mozilla.org/2021/05/introducing-firefox-new-si...
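Under that reading, the allocation policy can be sketched as a simple map from site to process. Everything here (`Browser`, `open_tab`, the pids) is made up for illustration:

```python
# Toy sketch of the process-per-site model described above: one process
# per site, shared by all tabs open on that site.
class Browser:
    def __init__(self):
        self.processes = {}   # site key -> process id
        self.next_pid = 1000

    def open_tab(self, site: str) -> int:
        # Same site -> reuse the existing process; new site -> spawn one.
        if site not in self.processes:
            self.processes[site] = self.next_pid
            self.next_pid += 1
        return self.processes[site]

b = Browser()
print(b.open_tab("https://example.com"))  # 1000
print(b.open_tab("https://example.com"))  # 1000 (same site, same process)
print(b.open_tab("https://mozilla.org"))  # 1001 (new site, new process)
```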


> process-per-tab as Chrome does

This is a common misconception. Chrome doesn't technically do process-per-tab.

Chrome's model can most succinctly be described as process-per-domain, although even then, there are rare instances where two tabs opened on different domains will actually share the same process.


It’s a misconception that Google fostered right from the start.

They did advertise Chrome as process-per-tab: https://www.google.com/googlebooks/chrome/big_04.html, pages 6 and 7 also definitely agree. (I haven’t read all through it again now, but I should also note that the process in the very centre of https://www.google.com/googlebooks/chrome/big_38.html shows what appears to be two tabs under it, which supports it not necessarily being process-per-tab.)

But either it never actually was, or they abandoned it as impractical even before release (https://stackoverflow.com/q/42804 has answers agreeing it isn’t process-per-tab on the day after the first release). So either Google lied, or they released the comic including a glaring and rather significant factual error (even if it had been true when Scott first drew the pages).

It’s frustrating when parties pull these shenanigans, making big claims around things like security and performance predicated on points that are simply not true, but never retracting those points properly or repudiating them, so that the misconception persists.


It's "scheme + eTLD + 1", with a flag to set it to per origin.
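A rough sketch of such a site key. Caveat: a naive last-two-labels split stands in for the real eTLD+1 computation here (so it is wrong for multi-label suffixes like co.uk, which need the PSL), and `site_key` is a made-up name:

```python
from urllib.parse import urlsplit

# Sketch of a "site" key as scheme + eTLD+1. Real browsers consult the
# Public Suffix List; the last-two-labels fallback below is a toy.
def site_key(url: str) -> str:
    parts = urlsplit(url)
    labels = parts.hostname.split(".")
    etld1 = ".".join(labels[-2:])
    return f"{parts.scheme}://{etld1}"

print(site_key("https://mail.example.com/inbox"))  # https://example.com
print(site_key("https://docs.example.com/"))       # https://example.com
print(site_key("http://example.com/"))             # http://example.com (scheme differs -> different site)
```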


Sadly, process-per-site also means memory usage will skyrocket, which the linked post doesn't mention.

It's ridiculous to think that a budget laptop with 4 GB of RAM suddenly isn't enough to browse the Web comfortably. All thanks to Meltdown and Spectre.


If browsers are careful not to use CPU and mem, web developers will just bloat their sites even more because there is room for it. Let browsers bloat, it will slow down website bloat.


Let's steal everything we can grab, it will slow others from stealing.

Let's buy all the toilet paper, it will slow others from buying toilet paper.


Complementary to this, one can use the Temporary Containers addon to get isolation of e.g. cookies. I've set it up to run one container per domain, and it works really well. I hope they merge this into Firefox at some point.


I ended up having to disable my containers plugin due to syncing issues and later... CPU usage. It wasn't terrible on a better processor (like 1-2 cores at 50-100% consistent usage), but on my old Core 2 Duo ThinkPad it was basically useless. And on any laptop that was unacceptable.

I like the idea of containers, and will probably revisit periodically to see if whatever was fubar on my account is resolved (there's little if any logging, so it's hard to really dig in).


How did you set it up to use one container per domain?

I'm using Temporary Containers, but if I visit `somedomain.com`, close it, and come back later, I get a new temporary container.


First Party Isolation is the native version of this (AKA Total Cookie Protection). Set Enhanced Tracking Protection to Strict to enable it.


When Chrome was new and shiny, I used it for a time. Then, the first time I found myself needing to kill Chrome because it was completely locked up, I found myself staring at a wall of chrome processes in the task list, not knowing which one I needed to kill. At the time, I thought the idea of a separate process for each tab was silly. Though, with Firefox moving towards this model, I guess the engineers at Google were prescient in the correctness of that tradeoff.

I do use a lot of tabs, so I fear I'm going to find myself facing the same problem I faced with Chrome: a site misbehaves and locks things up, crap, which process do I kill? A way of tracking which tab maps to which process would be nice, so the next time I trip over a badly-coded page, I don't have to kill everything just to get my browser to respond again. Lazyweb question to y'all: is there a feature in Chrome or Firefox that can do this (mapping tab/page -> process), or have I just stumbled upon a side-project idea?


I gave it a try. I opened a new tab to a random website, then went to about:memory

Scrolling down I found a section starting with

> web (pid 1036080)

> Explicit Allocations

> 108.27 MB (100.0%) -- explicit

> ├───45.04 MB (41.60%) -- window-objects/top(https://www.that-random.site/, id=175)

I try to kill that process now, but I post this message first in case I kill the whole browser.

Result: the tab crashed, the browser survived.

> Gah. Your tab just crashed.

> We can help!

> Choose Restore This Tab to reload the page.

Restore did work.


When a tab freezes, I just pull up activity monitor/top and look for the Chrome process using the most CPU. It's almost always the culprit.

I also like to occasionally sort by memory usage and kill the biggest Chrome processes. Chrome is nice in that it will show you when a process crashed, so what I do is kill the biggest memory hog, and then see what tab crashed. Then I do it again a few times.

This at least tells me which processes use the most RAM over time and should be recycled (Spoiler alert, it's always GMail and then GCal.)


Shift+Esc brings out Chrome task manager where you can kill individual tabs/pages by name.


Does this work when Chrome is locked up? Usually people go to Task Manager because it's unresponsive.


It works when 1..n of your tabs are frozen, but the UI is still responsive e.g. you are still able to switch to other tabs. If your chrome is completely frozen i.e. you can't even open the chrome task manager, then you usually have to restart the browser.


Chrome does not "lock up", at least on Windows without extensions. Individual tabs may crash or lock up, but the rest of the interface hasn't done that for many years.


Nice!

I suspected gmail was the heaviest thing I regularly had open, but it's good to see the stats.


On Firefox you can go to `about:processes`.

It lists tabs by process, and includes the PID (on Linux; no idea about other platforms). You can also directly kill tabs and processes from there.


That's super useful on a resource strapped system. Wish I knew this earlier.


For a resource-constrained machine, uBlock Origin + Auto Tab Discard save a lot of resources and keep a window with 100 tabs quite usable.


You should take a look at Firefox's about:about. There are all sorts of goodies in there.

For example, about:compat lists sites they added hard coded work arounds for.


Sounds like the "about:processes" page in Firefox would be super helpful in this case! You can use it to unload tabs and kill processes.


Process Explorer (from sysinternals) lists processes as a tree so it's easy to find and kill the root Chrome process. At a glance it looks like all non-root processes have a "--type" parameter given to them. The root process has the simplest command line with only "--remote-debugging-port" being passed.


Not a direct resolution, but I do use OneTab

https://addons.mozilla.org/en-US/firefox/addon/onetab/

to offload idle tabs. Downside is that it doesn't sync with Firefox, so anything I may need to open in another Firefox instance, I just bookmark.

The grouping is nice because it gives me a reference to a time I was doing X research/reading. And what I find is that there really isn't too much that I need to cross over; work stuff stays on my work computer, personal to personal, etc.


Run `top`, sort by CPU or RES, depending on what is overspent.

If Firefox is still somehow responsive, open about:performance and identify the CPU-hungry tab(s), then close them.


In chrome when a site hangs its process the browser chrome is still responsive. So you just close the tab.


> a site misbehaves and locks things up

Why/how can this happen? That sounds like a bad failure of the browser.


In htop I can see and kill the process tree, I think processhacker2 can achieve the same on Windows.


if you're on linux you can do killall -9 firefox


Off-topic: Mozilla blog articles, like the click-through more-details one, always have the most awesome images. They almost tell the story without a need to read the text. Another one I remember is the one on WebAssembly [0]. Similar style images.

They really allow you to scroll through the post quickly and see if it is interesting to read in detail.

[0] https://hacks.mozilla.org/2019/08/webassembly-interface-type...


Thank you for the positive feedback.


Can anyone explain the relationship to the Firefox "Electrolysis" initiative better than this[0]? It looks like Electrolysis was just making the browser kernel <> IPC layer and now Fission is actually divvying up the processes by origin.

[0]: https://wiki.mozilla.org/Electrolysis#Thanks


hi, co-author of the blog post here. There is a more detailed blog post explaining how Site Isolation is better than the Electrolysis architecture here - https://hacks.mozilla.org/2021/05/introducing-firefox-new-si... (also linked to from the security blog post). Hope this is helpful!


Hi, I'm a blind user and I'm just dropping to say a big thank you for the excellent alt text to the diagrams in the hacks post.

Thanks for the browser work as well!


Thank YOU for writing comments like these - it encourages us web devs to work harder on accessibility.


Thanks for this link. Not sure how I missed it when it's the very last word, haha.

I'm not sure what gave me the impression but, in my mind "process-per-tab" and "Electrolysis" were linked, but that was a misconception:

>In great detail, (as of April 2021) Firefox’s parent process launches a fixed number of processes: eight web content processes, up to two additional semi-privileged web content processes, and four utility processes for web extensions, GPU operations, networking, and media decoding.

>While separating content into currently eight web content processes already provides a solid foundation, it does not meet the security standards of Mozilla because it allows two completely different sites to end up in the same operating system process and, therefore, share process memory. To counter this, we are targeting a Site Isolation architecture that loads every single site into its own process.


> I'm not sure what gave me the impression but, in my mind "process-per-tab" and "Electrolysis" were linked, but that was a misconception:

Your impression was mostly correct. Electrolysis is basically process-per-tab until you reach eight tabs, but after that, tabs start sharing those eight content processes.
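The pre-Fission policy described here can be sketched as a fixed pool allocator. This is a toy under stated assumptions: the round-robin reuse below is illustrative, not necessarily Firefox's actual process-selection heuristic:

```python
from itertools import cycle

# Sketch of the Electrolysis model: tabs get their own content process
# until the pool of eight is full, then new tabs share existing ones.
POOL_SIZE = 8

class E10sBrowser:
    def __init__(self):
        self.pool = []    # content process ids
        self.rr = None    # round-robin iterator once the pool is full

    def open_tab(self) -> int:
        if len(self.pool) < POOL_SIZE:
            pid = 1000 + len(self.pool)
            self.pool.append(pid)
            return pid
        if self.rr is None:
            self.rr = cycle(self.pool)
        return next(self.rr)   # reuse an existing content process

b = E10sBrowser()
pids = [b.open_tab() for _ in range(10)]
print(pids[:8])   # eight distinct processes
print(pids[8:])   # ninth and tenth tabs land in already-used processes
```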


Correction to my earlier statement: the initial version of Electrolysis had just one content process (that could be sandboxed apart from the browser parent process), but was soon followed up with "e10s-multi" with multiple content processes.


> I'm not sure what gave me the impression but, in my mind "process-per-tab" and "Electrolysis" were linked

This always was a long term goal I think. It's per site not per tab though.


I enjoyed the illustrations, but you should try looking at your article in Firefox for Android: all pictures overflow to the right and it's not even possible to scroll horizontally to see the rest.


Any news about the memory usage overhead this brings? The original design goal when the work on site isolation started was 1 GB overhead for a browsing session with 100 separate origins (can't remember how many tabs that was supposed to correspond to, although due to iframes it was definitively less than 100 tabs).

Was this goal reached in the end, or perhaps even surpassed, or missed after all?

I guess this also makes adblockers even more valuable in terms of saving memory, since each blocked third party-iframe that doesn't load is potentially one additional process that doesn't have to be created…


I think the overhead is something more like 15MB per process, on Windows. It is higher on other OSes, due in part to the way they load executables. In practice, the total overhead is less bad than you might expect, because people usually don't have that many unique sites open. Telemetry shows that unique sites per tab decreases as the number of tabs increases.

It really depends on what web sites you have open. If you have a single tab with an ad-laden news site, the overhead will be high, but if you have a bunch of Google Docs tabs open, there's no overhead.
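A back-of-envelope check of that ~15 MB/process figure against the ~1 GB-for-100-origins goal mentioned elsewhere in the thread (assumption: overhead scales linearly with unique sites, which ignores shared memory):

```python
# Rough worst-case overhead estimate using the ~15 MB/process figure
# quoted above (Windows; other OSes are said to be higher).
PER_PROCESS_MB = 15

def overhead_mb(unique_sites: int) -> int:
    return unique_sites * PER_PROCESS_MB

print(overhead_mb(100))          # 1500
print(overhead_mb(100) <= 1024)  # False -> above the ~1 GB design target
```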


Okay, so it seems the original target wasn't quite reached unfortunately.

On the other hand I guess you're right about "people usually don't have that many unique sites open", so the original design value of 100 separate origins was probably purposely chosen to be on the large side, and thinking about it, I guess not having that many unique sites open usually fits my usage patterns, too. The unknown factor I can't really judge is how many iframes with potentially separate origins the pages I normally visit use, though.

Looking at it positively, one additional potential benefit could be that I have a few long-lived tabs that I always keep open – under the current model, this means that the content processes associated with those tabs never die and possibly slowly accumulate cruft and memory fragmentation from additional tabs that happen to be loaded in them (and later closed again).

Under the new model on the other hand, closing all tabs associated with a domain should be enough to get the associated content process to exit and free up really all memory used by those now closed tabs.


> It is higher on other OSes, due in part to the way they load executables.

Can you explain this in more detail?


In case anyone is wondering about the stability I've been running this for a couple of months now and stability has gotten pretty darn good. I'm excited to see it go into stable builds soon.


Thank you for this feedback. Firefox Fission team appreciates it. If you see any problems, please file using this template: https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&bug_... .


Has anyone tried this along with Container Tabs? Do they play nicely? Does it offer any advantage over Container Tabs?


Container Tabs are completely orthogonal. A site loaded in a tab which is contained in this way cannot access your global cookie jar, for example. If you visit a site with a Facebook Like button on it, then Facebook will not receive the same cookie from you that it would have received if you had loaded the site in a non-contained tab. This is true whether or not the site has been given its own process to live in. The converse is also still true; non-contained sites still have access to your global cookie jar even if they're isolated in their own process.

Putting sites in their own process mitigates Spectre-like attacks, but it doesn't do anything for higher-level problems like third-party cookies.
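The orthogonality can be modeled as two independent partitions: cookies keyed by container, processes keyed by site. A toy sketch (names invented; the real browser's storage partitioning is far more involved):

```python
# Toy model of container-based cookie partitioning: a contained tab reads
# a per-container cookie jar instead of the global one, independently of
# which OS process renders it.
GLOBAL = "default"
cookie_jars = {GLOBAL: {}}   # container name -> {site -> cookie}

def jar_for(container=None):
    key = container if container is not None else GLOBAL
    return cookie_jars.setdefault(key, {})

# facebook.com sets a cookie in an ordinary, non-contained tab:
jar_for(None)["facebook.com"] = "tracking-id-123"

# A Like button loaded inside a "shopping" container sees no cookie:
print(jar_for("shopping").get("facebook.com"))  # None
# ...while any non-contained tab still sends it:
print(jar_for(None)["facebook.com"])            # tracking-id-123
```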


Any idea?

I'd love to drop https://addons.mozilla.org/en-US/firefox/addon/temporary-con..., which automatically assigns a temporary container to any new tab, which plays nicely with the https://addons.mozilla.org/en-US/firefox/addon/multi-account... but uses too much memory (at least from my experience).


I think they are complementary, since one is about browser site isolation, and one is about process isolation on the computer.

Using temporary containers, multi-account containers, site isolation, along with a number of other privacy/security addons such as Umatrix, LocalCDN, and many others, I have not noticed any slowdown.

This on an older broadwell i7 with 32GB of ram.


So far it seems to work fine for me too. Can you share your list of security/privacy addons? I've used uMatrix but never heard of LocalCDN. Was wondering what other gems you may have found.


Here is a complete list of security/privacy addons I am using, a small number of which I have disabled, as I regularly toggle them depending on what I am doing at the time:

AdNauseam

Archive Page

ClearURLs

Cookie AutoDelete

Cookie Remover

Decentraleyes

DoH Roll-Out

Don't track me Google

DuckDuckGo

Facebook Container

Firefox Multi-Account Containers

Firefox Private Network

Firefox Relay

Firefox Screenshots

First Party Isolation

Google Container

Google search link fix

Greasemonkey

HistoryBlock

HTTPS Everywhere

I don't care about cookies

Laboratory

Link Cleaner

LocalCDN

PinPatrol

Privacy Badger

Privacy Pass

Redirect AMP to HTML

Skip Redirect

Tampermonkey

Temporary Containers

Trocker

Twitter Container

uBlock Origin

Ugly Email

uMatrix

Wappalyzer

Zoom Redirector


Enable Strict Enhanced Tracking Protection setting, this turns on Dynamic First Party Isolation which is the native version of what Temporary Containers is aiming to do.


I believe this is privacy.firstparty.isolate in about:config, if you want to do this manually
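If you manage prefs through a user.js file instead, the equivalent line would presumably be (untested sketch; pref name taken from the comment above):

```
// user.js fragment: enable (static) First Party Isolation manually.
user_pref("privacy.firstparty.isolate", true);
```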


That's not for the Dynamic version as far as I'm aware.


Is there a way to enable the dynamic version via about:config to your knowledge?



I use the same add-ons. I just enabled site isolation. I'll let you know how it goes.


Based on the processes I'm now running, it seems that tabs for the same domain but in different containers do (as one might expect) count as separate origins for the purpose of creating one process per origin.

So they do different things, and interaction between the features appears to work without issues.


This is fantastic work that will greatly improve the security of Firefox; big thanks to those who have worked on it. Is there data on what effect it will have on memory use?

One of the primary reasons I use Firefox is that it uses significantly less memory than Chrome, and the entire OS seems to function better as a result (I've seen the most stark difference on macOS). I had been under the impression that most of the reason Chrome uses so much memory is its multiprocess model.

I understand that maybe we need to give that up for better security, but it would be nice to know if that's indeed the tradeoff being made here.


Yes, more processes come with the cost of more memory, but we have been reducing the process overhead in Firefox in order to minimize that cost as much as possible. We will continue doing memory reductions and will have numbers to report when we roll out to all our users. Thank you for your continued support and use of Firefox.


Awesome! I'll look out for these numbers.

Do you need more data? If so, what's the best way for me to add to it? Would that be installing Nightly, setting fission.autostart to true, and enabling some telemetry?
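For reference, that opt-in as a user.js fragment would presumably look like this (sketch, assuming the pref name mentioned above):

```
// user.js fragment: opt in to Site Isolation (Fission) on Nightly.
user_pref("fission.autostart", true);
```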


Does anyone else get ANNOYED by the UI of Mozilla's blog on mobile?

Looking at the navbar, the horizontal and vertical alignment is all over the place, the search input has no placeholder or label, background colors are inconsistent, and paddings are just bizarre.


It's garbage. The new tab button doesn't follow the status bar to the top of the screen so unless you like it on the bottom you can't browse one-handed given the giant stretch between opening the tab manager and opening a new tab.

Not to mention the fact the extension ecosystem is still crippled. There's still no user-agent switcher, meaning there are sites I literally cannot access from my phone without installing an old version of Firefox Mobile or going back to using Dolphin Browser. "View Desktop Version" perversely still tells sites you're on a mobile browser.


Could anyone here who has been using it report their experience with site isolation turned on? Do you find anything it breaks or makes more difficult? Has it altered your privacy/security practices (in terms of addons, other settings, etc.)?


I've had site isolation on for more than a year. Never had any issues with it.


I've had it on for many months and haven't had any issues I could attribute to it. I don't really use Firefox extensions though.


How good is Firefox sandboxing these days? Last time I looked it was years behind Chrome's, but site isolation is definitely a step in the right direction.

It would be sad if one day Chromium removed Manifest v2 and there was no alternative.


Not as good as Chrome's, but constantly improving and good enough that Firefox sandbox escapes are Quite A Big Deal, not trivial.


Firefox appears to be accelerating its feature velocity after the Mozilla downsizing. Curious what changes they made internally to refocus development.


This has likely been in the works for years.


Does anyone remember Firesomething? The extension that randomized the name of Firefox to OceanMonkey, WaterHorse, FlameTiger, etc? Powerful extensions and much better UI are the main reasons so many of us switched to Firefox back in the early 2000s.


@dang please delete this comment


This is about isolation in OS processes, not browser containers.


Software is hard. Chrome had this in 2008. Firefox needed 14 years of rearchitecting for this.


This is factually untrue. Site isolation wasn't enabled by default in Chromium until v67 in 2018.

https://www.chromium.org/Home/chromium-security/site-isolati...


https://www.google.com/googlebooks/chrome/big_04.html

http://www.scottmccloud.com/googlechrome/

In early-mid 2008, I created a comic book for Google explaining the inner workings of their new open source browser Google Chrome.

If I'm mixing this up with https://wiki.mozilla.org/Electrolysis, that's still 10 years.


Chrome didn't have this until 2018, as the parent link shows. This is not about multi-process architecture. Firefox is < 3 years behind, not 10, not 14.


Site Isolation launched in Chrome in 2018, but the work started in earnest in 2012 -- see the below check-in. The idea in Chrome dated to before the Chrome 1.0 launch; it was the subject of Charlie Reis's PhD dissertation and he interned on Chrome pre-public launch.

https://chromium.googlesource.com/chromium/src/+/c6f2e67ab40...

Site isolation proved to be the biggest refactor in Chrome's history, and was one of the motivating reasons for the webkit/blink fork. Making site isolation work touched a huge host of features, since handling iframes out of process has a way of making simple things incredibly complicated.

The example I always gave was: imagine how the "find text in page" browser feature would be implemented. With the entire document in-process, it was a simple for loop. With the document and its subframes sharded across multiple processes, it is now a distributed search problem that requires handling of out-of-order results and stitching them into a traversal order. What's more, to achieve Chrome's security goals, you want to avoid introducing functionality that would allow the [presumed-compromised] process of the outer document to query the contents of the inner document via the find in page feature. So you can't simply do this as a peer-to-peer query between the renderer processes; it needs to be coordinated by the main browser process.
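The fan-out-and-stitch coordination described here can be sketched in a few lines. This is a toy, not Chrome's implementation: frame ids and contents are invented, and threads stand in for renderer processes.

```python
import concurrent.futures

# frame id -> (traversal order, frame text), as the browser process sees it
frames = {
    "top":     (0, "the quick brown fox"),
    "iframe1": (1, "jumps over the lazy dog"),
    "iframe2": (2, "the end"),
}

def find_in_frame(frame_id, needle):
    # Runs "inside" one renderer: it only sees its own frame's text.
    order, text = frames[frame_id]
    hits, start = [], 0
    while (i := text.find(needle, start)) != -1:
        hits.append(i)
        start = i + 1
    return order, hits

def find_in_page(needle):
    # Browser process: fan the query out, collect results in completion
    # (arbitrary) order, then sort back into document traversal order.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(find_in_frame, f, needle) for f in frames]
        results = [f.result() for f in concurrent.futures.as_completed(futures)]
    return [(order, off) for order, hits in sorted(results) for off in hits]

print(find_in_page("the"))  # [(0, 0), (1, 11), (2, 0)]
```

Note how the stitching happens only in the coordinating function, matching the security goal that renderer processes never query each other directly.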

Congrats to the Firefox team on this milestone.


I was wrong about the actual security policy, but multi-process is still a big security win.

And not so related to this, but from what I've heard about cracking competitions a while ago, Firefox was not even included; it was considered too easy. Maybe my sources were just bad.

And I say this as a Firefox user for the last decade or more.


That may have been true at some point, but I don't think it's true now. E.g. Project Zero finds Firefox sandbox escapes noteworthy.


That was before Firefox desktop had any multiprocess support at all.


Yeah, so until just 3 years ago.


Chrome was a new project and didn't have to deal with the legacy of being built on top of the same source code as Netscape Navigator. I do not understand why you are trying to bash Firefox as less competent for taking ~10 years after Chrome to implement multi-process browsing. Legacy software and patterns are truly painstaking to iterate on.

But yes Electrolysis is the initiative that you should have referred to in the original comment.


> Software is hard. Chrome had this in 2008. Firefox needed 14 years of rearchitecting for this.

How is this bashing? :-)

It literally starts with "Software is hard."

...


Browsers are too big and the web is too complex. Engineering failures all around.

As engineers, we should not accept this status quo; we should replace it. We need a new web and new software.


What's wrong with the web and browsers? It's honestly pretty incredible - we have a system where we can load and execute arbitrary code from any number of third parties near-instantly and it actually works and isn't a complete security disaster.

Seems pretty cool tbh


People object to the massive effort it takes to create or maintain a browser engine which can practically browse the modern web. We're down to 3 players now actually trying to do this (Mozilla, Google, Apple). It conflicts with the idea that you can fork software if you dislike what it's doing, because even starting from existing code, it would be a lot of work to keep up with changes so you don't get left behind.

So people imagine splitting off a simpler web, where the main focus is on reading documents, rather than interactive applications, and browsers could be much simpler. But it's pretty hard to see how this could actually work in practice.


There are many distinct PDF readers. Making a document browser shouldn't be more complicated than making a PDF reader. PDF readers show how this works in practice.


I guess the issue is that a modern web browser is a sandboxed application runtime which also happens to function as a document browser. It's been going in that direction for a long time (since webmail became common), and there are real advantages of the browser as a platform for applications - it's cross platform by default, and it has pretty good sandboxing.

So probably the most you can hope for is that we split the document part of the web from the application part, so that it's easier to make a viable document browser. But it's not clear what advantage this offers for anyone who's not trying to make their own browser. Security is probably much simpler for the document browser, but the logins and sensitive data you care about securing are probably in the application browser anyway. And we've spent the last 20 years blurring the lines between documents and applications (think of a Github issue page, for instance), so even if it was possible to access information as a pure document, there would be advantages to looking at it in an application browser.



Gopher is the opposite of new. Gemini is interesting for sure, but it's not an alternative to the web as they fully admit. It's an alternative to a subset of the web. Let's call it the document web. Blogs and articles and so on. But as entertaining as it is, it is a very very small subset.


Respectfully, you're failing to engage with the purpose of the project.

> it's not an alternative to the web

Right. You can't have a lightweight drop-in alternative to the web, pretty much by definition. Any platform capable of everything modern browsers are capable of, is by definition enormously complex.

> it is a very very small subset

That's not a flaw, it's a design goal. It isn't meant to be a half-baked portable GUI toolkit the way the modern web platform is, it's meant to be a simple and minimal format, stable and easy to implement. There are other formats somewhat like this in common usage, like man pages and, of course markdown.


I don't disagree with you at all. I like Gemini and think it is a very worthwhile pursuit.


That's the point. If you want the web, you need today's browsers. If you want a subset of the web, your "document web" for example, you can get away with something simpler.


Gopher is from 1991. I've been using it back then but HTTP won quite easily.


A recent post about that:

https://dustri.org/b/the-web-browser-im-dreaming-of.html

Gemini might be a good replacement for the web:

https://gemini.circumlunar.space/



