Here's an idea: someone sends a dev at some company, or even a freelancer, some code. The code references a malicious npm package (say, one with a postinstall script). The dev opens it in Zed.
Now my untrusted code is running on your machine, probably without your knowledge.
Why the hell does npm support a postinstall script? There really shouldn't be a need to run arbitrary code provided by the package for something like this.
The package itself is arbitrary code. You're running arbitrary code either way, whether it's preinstall, install, postinstall, or when the package code itself gets run.
It's common to need to set up toolchains for code that gets compiled (e.g. a node module that adds language bindings to a C library).
Node.js isn't very sandboxed. Many "dev libraries" are native and will either download and link to binary blobs, or build e.g. C code, which AFAICT is what all the various install scripts are for.
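For what it's worth, npm can be told not to run lifecycle scripts at all. `ignore-scripts` is a real npm config option, though turning it on will break packages that genuinely need a build step; a minimal sketch:

```ini
# ~/.npmrc (or a project-level .npmrc)
# Refuse to run preinstall/install/postinstall scripts during npm install.
ignore-scripts=true
```

The same thing can be done per-invocation with `npm install --ignore-scripts`; native packages that compile C code will then need their build step run explicitly (e.g. via `npm rebuild`).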
It seems like a bad design choice: besides allowing untrusted code to run directly at download time, it also makes it difficult to properly mirror artifacts and, I'd assume, makes platform portability inconsistent at best.
How is that any different from the VS Code extensions that have one star and are just copies of other extensions, waiting to get high star counts and then pull a switcharoo? Same goes for browser extensions.
Unless you’re auditing everything while taking Trusting Trust into account, you’re drawing the line somewhere saying “ok I can’t be bothered past this point verifying”.
… everyone has a line somewhere on the trust-but-verify spectrum
Then you'd click the "yes and never ask me again" if a prompt about whether you want to download a random binary showed up. But a lot of people wouldn't want to click that, and would either click "no and never ask me again" or vet each case one by one.
> How are you going to "vet" the language server when it pops up?
You may not vet the source of the language server, but you might want to decide which ones you're willing to trust (or take the risk on) and which ones you aren't.
mason can install them, but there's no built-in way to "ensure installed". So that was a second package I needed. Then I needed a third package to configure things.
Maybe I'm missing something, but it was definitely more complicated than "just use mason".
That's a completely separate concern; it's not like a new language server is downloaded for each file you open. I don't know if Zed has a "safe mode" like some other editors; if it doesn't, you should ask for that instead. Unless of course you never open untrusted files in a language you're familiar with, which would make you extremely peculiar.
Articles and discussion threads like this one are how you find out, at least for some people.
Anyway, you asked who would care. Now the topic has moved to "what to do about it", which is hardly an issue. Of course people who think Zed has a problem will not use it. That does not make it a non-problem.
What if one language server adds a function to use your code for AI training? Are you okay with that as long as it came as a GitHub binary?
And these modern editors introduce another issue with their modularized design. For each supported language, VS Code installs tons of other crap besides the language server itself. And the language server alone has a quite long list of dependencies.
Ah, I think you might be pleasantly surprised that this is an area being focused on right now with attestations[1] for example, here are the attestations for the GitHub CLI[2].
Maybe all this cryptographic machinery has some use, but all that was needed was for GitHub to indicate when a file was uploaded manually and when by a workflow (and which workflow). This looks so complex that it might well be just smoke and mirrors.
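It's less smoke-and-mirrors than it looks from the docs: checking an attestation is a single GitHub CLI call. The sketch below assumes you have the `gh` CLI installed and have already downloaded a release artifact (the file name is illustrative):

```shell
# Verify that a downloaded release artifact was built and signed by the
# repository's own workflow, not uploaded by hand. Requires network access.
gh attestation verify gh_2.62.0_linux_amd64.tar.gz --repo cli/cli
```

If the artifact wasn't produced by an attested workflow run, the command fails, which is exactly the manual-upload-vs-workflow distinction being asked for above.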
The xz backdoor was an example of exploiting this disconnect. It was not present in the repository; it was inserted only into the release artifacts. Anyone getting xz by checking out the repository and building it themselves would not be affected by it.
Right, but it was injected from data in "corrupt" .xz test files in the repo under certain conditions:
> This injects an obfuscated script to be executed at the end of configure. This script is fairly obfuscated and uses data from "test" .xz files in the repository.
>
> The files containing the bulk of the exploit are in an obfuscated form in tests/files/bad-3-corrupt_lzma2.xz and tests/files/good-large_compressed.lzma, committed upstream.
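The mitigation for the tarball half of the attack is mechanical: diff the shipped release against a pristine tree generated from the git tag. The snippet below is a self-contained, offline demo of that idea with made-up file contents; in the real xz case, a check like this would have flagged the modified build-to-host.m4 that existed only in the tarball.

```shell
set -e

# Stand-in for the upstream repository with a tagged source tree.
mkdir project && cd project
git init -q
echo 'int main(void){return 0;}' > main.c
git add main.c
git -c user.name=demo -c user.email=demo@example.com commit -qm 'tagged source'

# Pristine tree straight from git (what `git archive` on the tag gives you).
git archive --prefix=from-git/ HEAD | tar -x -C ..
cd ..

# The "release tarball": same tree, plus a file injected at release time.
cp -r from-git release-tarball
echo 'attacker payload' > release-tarball/build-to-host.m4

# The actual check: any file present only in the release shows up here.
if ! diff -r from-git release-tarball; then
  echo "release artifact does not match the tagged source"
fi
```

`diff -r` reports the injected file as `Only in release-tarball: build-to-host.m4`, which is precisely the disconnect the comment above describes.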
IntelliJ now comes by default with a local-only AI auto-completer. I noticed that almost always, it "knows" the autocompletion better than the older intellisense.
However, quite often you need to explore the API, checking every available method and its docs to find which one is appropriate to use.
So even though I can see AI replacing a lot of auto-completion, it just can't replace it completely.
> Who wants to approve and configure all of their language servers?
I think you're asking the wrong question. The correct one would be: "who wants to be asked if they want to approve and configure all of their language servers?"
It's not what Zed does, it's that it does it behind your back!
Maybe it's okay for a browser to download and use anything from any site, given mature cross-origin policies and billions in security work, but the fact that it's done without saying anything is just a bug that can be fixed. Fixing the clarity is the real win.
What's really funny is that it was found because it was crashing for a user running another libc. If people were really concerned about the 14 MB download, they could add a firewall or something, but instead they noticed it crashing. Finally, all these versions of everything sitting around (Node.js, glibc, etc.) is very UNIX, and a recipe for small breakages. Though I guess that's just the problem we deal with.
If you open a file for that language, is there ever a time you would deny the download?
I just don't want a huge number of popups like VS Code has.
Also, the binaries are downloaded from their releases on GitHub. As long as that is secure, I don't see a problem.
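"As long as that is secure" is something the editor could actually check rather than assume: many GitHub releases publish a checksum file next to the binaries, and verifying it is one command. A minimal sketch, with made-up file names standing in for the real download:

```shell
# Stand-in for the binary the editor just downloaded.
printf 'pretend this is a language-server binary\n' > language-server.tar.gz

# In reality this file would be fetched from the release page, not generated.
sha256sum language-server.tar.gz > SHA256SUMS

# The verification step: fails loudly if the download was tampered with.
sha256sum -c SHA256SUMS
```

A tampered or truncated download makes `sha256sum -c` report `FAILED` and exit nonzero, so the editor could refuse to run the binary instead of silently trusting the transport.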