What's the alternative? Churning out new software for the same use cases until Kingdom come because maintaining and improving older software is somehow icky?
Let's make a startup that is just to do PR on long-standing ignored issues that really would fix things.
It would be philanthropy (not-for-profit), but i imagine if you get a few big names behind it (some around here, some not), you could probably have a fair group of developers, translators, UI experts, API experts, hardware bug experts, etc. Like a Think Tank that outputs bugfixes and incremental improvements to software. The sort that reduce electricity usage - you could get some government funding out of that, i bet. How much did Britain pay for those "delete old emails to save water" marketeering?
I guess the downside is meta, google, microsoft, et al will benefit the most, but whatever.
Also someone the other day mentioned something like this, where users could subscribe for $50 to pursue legal avenues - sorry i don't have the link, but:
> What we actually need is a Consumer Protection Alliance that is made by and funded by people who want protection from this and are willing to pay for the lawyers needed to run all of the cases and bring these cases before a judge over and over and over again until they win.
> This would mean people like you and me and a million others of us paying $20-$50/month out of pocket to hire people to sue companies that do this [...]
Say you have several MCPs installed on a coding agent. One is a web search MCP and the other can run shell commands. Your project uses an AI-related package created by a malicious person who knows than an AI will be reading their docs. They put a prompt injection in the docs that asks the LLM to use the command runner MCP to curl a malicious bash script and execute it. Seems pretty plausible no?
That's pretty much the thing I call the "lethal trifecta" - any time you combine an MCP (or other LLM tool) that can access private data with one that gets exposed to malicious instructions with one that can exfiltrate that data somewhere an attacker can see it: https://simonwillison.net/2025/Jun/6/six-months-in-llms/#ai-...
It's a question as to how easily it is broken, but a good instruction to add for the agent/assistant is to tell it to treat everything outside of the instructions explicitly given as information/data, not as instructions. Which is what all software generally should be doing, by the way.
The problem is that doesn't work. LLMs cannot distinguish between instructions and data - everything ends up in the same stream of tokens.
System prompts are meant to help here - you put your instructions in the system prompt and your data in the regular prompt - but that's not airtight: I've seen plenty of evidence that regular prompts can over-rule system prompts if they try hard enough.
This is why prompt injection is called that - it's named after SQL injection, because the flaw is the same: concatenating together trusted and untrusted strings.
Unlike SQL injection we don't have an equivalent of correctly escaping or parameterizing strings though, which is why the problem persists.
No this is pretty much solved at this point. You simply have a secondary model/agent act as an arbitrator for every user input. The user input gets preprocessed into a standardized, formatted text representation (not a raw user message), and the arbitrator flags attempts at jailbreaking, prior to the primary agent/workflow being able to act on the user input.
Feel the need to push back against the predictable nay-saying in here.
Announcing with Rust in the title is not because of a hype train, it's a way to communicate that this bundler is in the new wave of transpilers/bundlers which is much faster than the old one (Webpack, Rollup) which were traditionally written in Javascript and painfully slow on large codebases.
While the JS ecosystem continues to be a huge mess, the solution to the problem is not LESS software development ("Just stop making more bundlers & stop trying to solve the problem - give up!"). Or even worse - solve the problem internally, but don't make me hear about it by open sourcing your code.
The huge amount of churn and development in this space has a good reason... it's a desperate attempt to solve the problems that browsers have created for web developers. Fact is that most business has moved to the web, a huge amount of web development is needed, but vanilla javascript can compounds and compound in complexity in the absence of a UI framework and strict typing. So now you've added transpilation and dependency management into the mix - and the user needs to download it in less than a second when they open a web page. And your code needs to work on at least 3 independent browser engines with varying versions.
SwiftUI devs are not a more advanced breed of developer than web developers. So why don't you see a huge amount SwiftUI churn and framework/compilation hell with native iOS development? The answer should be obvious. These problems are handed down from on high
The browser/internet/javascript ecosystem despite its glaring warts is actually one of the most amazing things humanity has created... a shareable document engine grew into a global distributed computing platform where you can script an application that can run on any device in less than a second with no installation. Not bad.
I fully agree with you, and want to add: JS/TS due to it's accessibility is one of the largest eco systems. Hell, whether you are or aren't a devekoper you're part of it through using a browser.
People often scoff at complexity in frontend projects, but they need to handle various types of accessibility, internationalisation, routing and state including storage of those, due to its popularity it's also very frequently an attack surface. With advent of newer technologies (I don't just mean web Dev ones), that's been put into the browser as well, which compounds complexity even more. There's various authentication and authorisation standards most things need to handle as well (not isolated to JS, but it's also not free of it either). Not to mention the versatility and complexity of DOM and CSS that are some of the the most complex rendering engines with layers of backward compatible standards. Like you mentioned already, these engines are all subtly different. Also you have to handle bizarre OS+browser quirks. And things can move between displays with different DPIs, which can cause changes in antialiasing. There's browser extensions that fuck with your code too. Then there's also the possibility that the whole viewport can change. Networks change. People want things to work online and offline so they don't lose work while on a train... While working in an environment that wasn't explicitly designed to support that.
Christ, I'm exhausted just typing this. Most these people complaining probably barely understand what they're complaining about
> People often scoff at complexity in frontend projects
The complexity is there because everyone is trying to reinvent everything.
> accessibility, internationalisation, routing and state including storage of those
Do multi-pages apps and most of these are really trivial due to the amount of solutions that exists.
> There's various authentication and authorization standards
That's also more of a server concerns than the browser.
> these engines are all subtly different
It isn't the old IE days (which Chrome is trying to replicate). More often than not, I hear this often when people expect to implement native-like features inside a web app. It's a web browser. The apps I trust, I download them.
> People want things to work online and offline so they don't lose work while on a train
Build a desktop app.
> Most these people complaining probably barely understand what they're complaining about
Because it's like watching Sisyphus pushing the stone up again and again. The same problem is being solved again and again and if you want to use the latest, you have to redo everything.
I think you're hand waving a lot of problems away without giving them the thought and attention they deserve. And sometimes using arguments that aren't really unique to the JS ecosystem.
> The complexity is there because everyone is trying to reinvent everything.
That's not just JS. That's literally everywhere. People reinvent ideas in every codebase I've seen. Sometimes it's a boon, sometimes it's a detriment. But again, not something that's unique to JS.
> Do multi-pages apps and most of these are really trivial due to the amount of solutions that exists.
None of these are trivial, even with existing solutions. They're only trivial for trivial cases. Like, I'm sure we both understand people aren't building to-do demos.
> It isn't the old IE days
Probably happened accidently, but it kinda misconstructs what I'm saying. There are issues between renderin engines and variety in how much/quickly they adopt some features. Hell, you still need code branches just for Safari in some cases becaus of how it handles things like private browsing.
> Build a desktop app.
You're trading one world of complexity for another world of complexity (or I guess we could say it's trading one set of platform quirks for a larger set of platform quirks)
> Because it's like watching Sisyphus pushing the stone up again and again. The same problem is being solved again and again and if you want to use the latest, you have to redo everything.
I understand where you're coming from, but just because Svelte was released it doesn't make React (and spin-offs) or Vue less relevant. You're not force to use them.
Regarding the bundling topic, again you're not forced to awirch to a different bundler if you're happy with your existing one, or the project isn't at a scale where it matters.
i get the sense that the solution to this problem is more use of LLMs (running critical feedback and review in a loop) rather than less use of LLMs.
If you can build good tooling around current kinda dumb LLMs now to lower that number, we will be in a pretty good position as the foundational models continue to improve.
Yeah I'd imagine the problem is not verifying the output against retrieved documents. If it just hallucinates it would ignore the given context, something that can absolutely be verified by another LLM.
my uneducated guess is that the partnership is because Vox has a lot of explainers written and used to embed these little factoid explainer cards in their articles which are probably useful from a data point of view
also - I think this is for some kind of RAG, not training