"Writing a web browser for today's web appears to be a technological challenge comparable to a level 5 self-driving car."
Only when the "constraints" are that the browser must support all "standards" produced by some committee where most if not all members are employees of the few browser vendors or other "tech" companies heavily invested in the web. Perhaps that is what is meant by "today's web".
I use netcat and similar TCP clients and a text-only browser on today's web and this works quite well for my own purposes. Non-commercial use. Basic tasks like sending GET and POST requests, downloading pages and other resources and reading them. Using the web as an information source. I would be willing to bet these programs are "more secure" than the "modern web browser". They are certainly less complex.
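A hand-built GET request is short enough to type into a pipe. A minimal sketch (plain HTTP against example.com, which is a stand-in here; HTTPS needs a TLS-capable client such as `openssl s_client` in place of nc):

```shell
# Build a minimal HTTP/1.1 GET request by hand and pipe it into nc.
# (Illustrative sketch only; example.com is a placeholder host.)
build_get() {
    # $1 = host, $2 = path. "Connection: close" tells the server to end
    # the TCP session after one response, so nc exits cleanly.
    printf 'GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$2" "$1"
}

# Usage: build_get example.com / | nc example.com 80
```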
For 99% of what I do I use Firefox + NoScript and I think I'm relatively secure. My solution probably provides more functionality than yours, but it's still a hassle to use some sites. Other sites simply don't exist w/o JavaScript. The situation isn't getting better.
Any chance you could disclose the ones that "simply don't exist w/o JavaScript"? (Assuming they are information sources, not online games or otherwise purely interactive.) I have had reasonably consistent success finding workarounds for sites that gratuitously use JavaScript to present information.
And of course Google really really wants JS to make it easier for them to spy on you. YouTube isn't useful w/o JS (but IIRC it used to at least somewhat function): https://www.youtube.com/ Fortunately search still works.
Twitter are twats and broke all functionality w/o JS, but nitter.net is a workaround! Just replace twitter.com with nitter.net to see the tweet.
Fortunately many sites that start out blank are quite functional w/o JS, just need to View -> Page Style -> No Styles. E.g. this one: https://www.politico.com/ ugly but readable.
However when I checked archive.org I noticed their archived page was different. It appeared to require Javascript.
YouTube
I use YouTube without any JS. I search from the command line using a tiny shell script. I download from the command line using a tiny shell script. With few exceptions I do not use youtube-dl.^1
1. The only exceptions are videos with a dynamic sig value in their googlevideo.com URL. Most videos I watch have static sig values in their googlevideo.com URL. Alternatively, I can use a "modern browser" to get the googlevideo.com URLs to download these videos, with all the tracking and other non-essential URLs blocked (no ads), instead of youtube-dl.
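The scripts mentioned above are not shown, so here is one hedged sketch of the search half. It assumes the no-JS results page still embeds "videoId":"..." pairs in its static HTML, which Google may rework at any time:

```shell
# Command-line YouTube search, sketched. Assumes the static results HTML
# still contains "videoId":"..." pairs; this is an assumption, not the
# parent commenter's actual script.
encode_query() {
    # Spaces become '+' in the search_query parameter.
    printf '%s' "$1" | sed 's/ /+/g'
}

extract_ids() {
    # Pull out unique video IDs and turn them into watch URLs.
    grep -o '"videoId":"[^"]*"' \
        | sed 's|.*"videoId":"\([^"]*\)".*|https://www.youtube.com/watch?v=\1|' \
        | sort -u
}

yt_search() {
    curl -s "https://www.youtube.com/results?search_query=$(encode_query "$1")" \
        | extract_ids
}

# Usage: yt_search 'plan 9 from bell labs'
```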
Twitter
I read Twitter without any JS. It currently works fine for me using links(1). I have a tiny shell script to download an entire feed if I need to find some historical tweet. Twitter does make changes from time to time. Previously I had to use the GraphQL API. Not anymore.
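The feed-archiving script itself is not shown; one way to get the same effect, sketched here as an assumption rather than the commenter's method, is to go through a nitter instance, which serves JS-free HTML that links(1) can render to plain text:

```shell
# One way to archive a feed as greppable text (a sketch, not the parent
# commenter's actual script; assumes a nitter instance is reachable).
feed_url() {
    printf 'https://nitter.net/%s\n' "$1"
}

dump_feed() {
    # links -dump renders the page to plain text on stdout; redirect it
    # to a file and grep it later to find historical tweets.
    links -dump "$(feed_url "$1")"
}

# Usage: dump_feed someuser > someuser.txt
```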
I agree 100% that the web of gratuitous JS sucks, but even with TCP clients and a text-only browser that do not run Javascript, I encounter few problems reading the textual information and/or downloading the binary files (resources) that websites provide.
For example, of the 43 open working groups at W3C, "Google LLC" has representatives participating in 36 out of the 43. Of the 89 closed former working groups at W3C, "Google LLC" had representatives formerly participating in 54 out of the 89. Source: https://www.w3.org/groups/wg/
This is such an unreasonable and paranoid set of trade-offs.
You could have gotten basically the same (I would even argue better) level of security with a virtual machine and an otherwise full browsing experience.
I do not use this method for security. I use it for speed, reliability and aesthetics. The complexity fetish of too many software developers has driven me to appreciate minimalism.
Also, the computers I use are often older, underpowered and/or have limited resources. Running VMs is usually not an option. You cannot assume that everyone has the free overhead available to run VMs.
The large, graphical browsers controlled by corporations are slow, unwieldy and overkill for making simple HTTP requests. The "full browsing experience" is highly unsatisfactory for me. I am not "trading off" anything. I like fast, text-only information retrieval.
That said, the statement I made still stands. Using TCP clients and a text-only HTML reader is probably "more secure" than using one of the corporate graphical browsers that allow users to do more dangerous things. A side benefit.