The principle of least privilege/authority has been around for a while, and the reason we don't see much adoption of it in real-world systems is not that it's unknown.
The first question is overhead: it's true that the majority of libraries are purely computational, but that means that there's frequent interaction between code written by the end developer and code from the library. If every call to, say, lodash's _.filter goes through a process to marshal the programmer's list, send it to a separate execution environment, and then marshal it right back in the other direction to call the predicate, people would choose not to use it. I do agree that the proposal in the post you link to seems to be on the right track - directly run the code in the current execution environment if it can be statically demonstrated that the code has no access to dangerous capabilities.
The second question is making the policy decision about whether to grant privileges. You might be familiar with this from your mobile phone: the security architecture is miles better than that of your desktop OS, but still, most people do say "yes" when asked to let Facebook, Twitter, Slack, etc. access their photos and their camera and their microphone, because they intentionally want those apps to have some access. What do you do in the above model when, say, the "request" library wants access to the network? Now it can exfiltrate all of your data. (The capability-based model is that you pass into the library a capability to access the specific host it should talk to, instead of giving it direct access, but again, if it did this, people would choose not to use it - the whole point of these libraries is to make writing code more convenient.)
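The capability pattern described above can be sketched in a few lines: instead of handing the library ambient network access, the caller constructs a fetch-like function that can only reach one host. All names here are illustrative, and a fake transport stands in for real HTTP so the example is self-contained.

```javascript
// Hedged sketch: a network capability attenuated to a single allowed host.
function makeHostCapability(allowedHost, doFetch) {
  return (url, options) => {
    const { host } = new URL(url);
    if (host !== allowedHost) {
      throw new Error(`capability does not permit host: ${host}`);
    }
    return doFetch(url, options);
  };
}

// Fake transport in place of real HTTP, to keep the sketch runnable.
const fakeFetch = (url) => `response from ${url}`;

const apiOnly = makeHostCapability('api.example.com', fakeFetch);

console.log(apiOnly('https://api.example.com/users'));  // allowed
try {
  apiOnly('https://evil.example.net/exfiltrate');        // rejected
} catch (e) {
  console.log(e.message);
}
```

The inconvenience the comment points at is visible even here: the caller now has to construct and thread through `apiOnly` instead of just letting the library use the network.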
The other problem, and perhaps the most important, is that purely-computational libraries can still be dangerous. Yes, _.filter (and perhaps all of lodash) is purely computational, but if you're using it to, say, restrict which user records are visible on a website, and someone malicious takes over lodash, they can edit the filter function to say, "if the username is me, don't filter anything at all." Or if you had a capability-based HTTP client that only talked to a single server, the library could still lie about the results that it got from the server.
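To make the danger concrete, here is an illustrative (entirely made-up) backdoored filter: it has zero I/O capabilities, yet it still subverts the access-control check the caller built on top of it.

```javascript
// Illustrative only: a "purely computational" filter with a backdoor.
const backdooredFilter = (records, pred) =>
  records.filter((r) => (r.user === 'attacker' ? true : pred(r)));

const records = [
  { user: 'alice', secret: false },
  { user: 'attacker', secret: true },
];

// The caller believes only non-secret records survive...
const visible = backdooredFilter(records, (r) => !r.secret);
console.log(visible.map((r) => r.user)); // [ 'alice', 'attacker' ]
```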
I think the way to think about it is that the principle of least privilege is a mitigation strategy, like ASLR or filtering out things that look like SQL statements from web requests. ASLR mitigates not being able to guarantee that your code is memory-safe; if you could, you wouldn't need it. SQL filtering mitigates making mistakes with string interpolation (but it comes with a significant cost, so you really want to avoid it if you can). Least privilege mitigates the reality that you cannot code-review all of your code and its dependencies to ensure that it's free of bugs.

But, on the other hand, a mitigation is not a license to stop doing the thing you can't do perfectly - it's just a safety measure. You can still have serious security bugs from buffer overflows even with ASLR; you just have fewer. You should not use ASLR as an excuse to write memory-unsafe code. You can still have SQL injection attacks from people being clever about smuggling strings. You should not use a WAF as an excuse to not use parametrization in SQL queries. And you can still have malicious dependencies cause problems even in a least-privilege situation, because they still have some privilege. You should not use it as a reason to run dependencies you don't trust.
> there's frequent interaction between code written by the end developer and code from the library
Microkernel operating systems that use capability security, like KeyKOS or one of its distant successors, seL4, focus on optimizing the call path as much as they can. A small performance overhead in exchange for guaranteed security properties shouldn't be seen as a tradeoff.
>send it to a separate execution environment, and then marshal it right back[...]
Current runtimes of all kinds need to spin up a new instance of something to sandbox code, which incurs a large overhead every time. But it doesn't have to be this way. The KeyKOS kernel was of fixed size, and I built my https://esolangs.org/wiki/RarVM the same way: it is stateless, so there is no need for several instances of the interpreter itself, and process snapshots carry minimal overhead.
>policy decision about whether to grant privileges [...] people would choose not to use it
The advantage of capability systems that start by granting all rights in every call by default is that even if people do not restrict anything initially, they can restrict rights later - "hollow out the attack surface" - whenever it makes economic sense: when a library becomes popular, when laws or contracts require it, and so on.
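The "start permissive, attenuate later" idea can be sketched as wrapping a capability so it forwards less than it received, without changing the library's call signature. Everything here is hypothetical (a stand-in object rather than real file I/O):

```javascript
// Sketch: attenuating a broad capability into a narrower one after the fact.
const fullFs = {
  read: (path) => `contents of ${path}`,   // stand-in for real file reads
  write: (path, data) => `wrote ${path}`,  // stand-in for real file writes
};

// Later, when it makes economic sense, restrict to a path prefix:
function attenuate(fs, allowedPrefix) {
  return {
    read: (path) => {
      if (!path.startsWith(allowedPrefix)) throw new Error('denied: ' + path);
      return fs.read(path);
    },
    // write is withheld entirely -- the attenuated capability is read-only.
  };
}

const configOnly = attenuate(fullFs, '/etc/myapp/');
console.log(configOnly.read('/etc/myapp/config.json')); // allowed
console.log(typeof configOnly.write);                   // 'undefined'
```

Code that already accepts an `fs`-shaped object keeps working; only the rights it receives shrink.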
While end users may never interact with it directly, or may choose not to use it, such systems at least grant developers the ability to secure their software internally - something that is not even possible now.
> [...] it's just a safety measure [...] purely-computational libraries can still be dangerous
But this argument doesn't attack the central point. Yes, it's true, but it has nothing to do with the security properties offered by POLA architectures like capability security. It's an orthogonal problem, one that has to be mitigated by other mechanisms, possibly social ones.
I think we're disagreeing on what the central point is, then. I think least-privilege architectures are great, and I use them for many things. I think they do not save you from the problem being addressed in this article. That is, do not read what I'm saying as an argument against least-privilege architectures: read it as an argument against using that hammer to drive in this screw.
In turn, I think that means there isn't enough justification in this case for users to feel that the additional complexity of wiring least-privilege through their libraries is worth it. Even if you take the approach of incrementally adding the security to the existing design, the implication is that it won't actually secure end users for a long time - and only against minor and unlikely threats at first - while imposing increasing complexity all along.
KeyKOS is great and I've read about it and tried to adopt its lessons in my own designs, but the fact remains that KeyKOS is dead. And I certainly agree that a small performance overhead for guaranteed security properties shouldn't be seen as a tradeoff (assuming it is in fact solely a performance overhead, and not a developer mental burden, nor a reviewer mental burden, nor an operational burden) - but I'm not commenting on what I see, I'm commenting that the vast majority of NPM users will see it as a tradeoff, regardless of what you and I believe.