I’ve said this before on HN, but there are two things that make me optimistic that we won’t see a big rug pull where price-to-capability ratio skyrockets relative to today:
* People keep finding ways of cramming more intelligence into smaller models, meaning that a given hardware spec delivers more model capability over time. I remember not that long ago when cutting-edge 70B-parameter models could kinda-sorta-sometimes write code that worked. Versus today, when Qwen3 30B-A3B (1/23 of the active parameters!) is actually *fun* to vibe code with in a good harness. It’s not Opus-smart, but the point is you don’t need a trillion parameters to do useful things.
* Hardware will continue to improve and supply will catch up to demand, meaning that a dollar will deliver more hardware spec over time. Right now the industry is massively supply constrained, but I don’t see any reason that has to continue forever. Every vendor knows that memory capacity and memory bandwidth are the new metrics of note, and I expect to start seeing products that reflect that in a few years.
I hope that one day we’ll look back on the current model of “accessing AI through provider APIs” the same way we now look back on “everyone connecting to the company mainframe.”
The price for a given level of capability will fall, but the frontier has recently been getting more expensive. Compare GPT-5 to GPT-5.5 on the Artificial Analysis benchmark: GPT-5.5 is ~4x more expensive, but achieves a higher score. Claude 4.7 is also more expensive than its predecessors because of a tokenizer change.
As the AI labs become more reliant on enterprise adoption, it makes sense to push capabilities at a cost that works for businesses, even if that prices out consumers and hobbyists.
Between more efficient models tuned for the task at hand, the ability to run those models in-house or even at the edge, and the fact that Google and Microsoft are well positioned to stay ambivalent (they have lots of products to sell, and whether LLMs are part of the portfolio mix depends entirely on enterprise customer demand), Anthropic and OpenAI face a number of aggressive downward pressures on their pricing.
As much as I hate the changes Bitwarden is making, I’m kinda with them on not adding official vaultwarden support. Having to support multiple backends (some of which you don’t control!) with your frontend makes everything massively more complicated.
Bitwarden is open-source though? This is about the hosted version of it, which has a free tier. But you can run the same software on your server at home if you want, for free.
(That said, I am also concerned about the direction Bitwarden is taking. I just think this shows that even OSS projects can have direction/rugpull issues.)
How long after a public sale will Bitwarden clients stay compatible with Vaultwarden? The new owners could put a check in all clients on the first day of ownership if they wanted, and Vaultwarden would immediately be obsolete and useless.
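To make that concrete, here is a rough sketch (in Python, for brevity) of what such a client-side check could look like. The endpoint, response field, and allowlist are all hypothetical illustrations, not actual Bitwarden client behavior:

```python
# Hypothetical sketch of a client-side "official servers only" check.
# The allowlist, endpoint, and response field are illustrative, not
# actual Bitwarden client code.
import json
import urllib.parse
import urllib.request

OFFICIAL_HOSTS = {"vault.bitwarden.com", "vault.bitwarden.eu"}  # assumed list

def server_is_allowed(base_url: str) -> bool:
    host = urllib.parse.urlparse(base_url).hostname
    if host in OFFICIAL_HOSTS:
        return True
    # Ask the server to prove it is an official deployment; a third-party
    # server like Vaultwarden could not produce a valid signature.
    with urllib.request.urlopen(f"{base_url}/api/config") as resp:
        config = json.load(resp)
    return config.get("officialSignature") is not None  # hypothetical field

# On startup: if not server_is_allowed(configured_url), refuse to sync.
```

Ship something like that in an auto-updating client and every self-hosted instance breaks overnight.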
I wonder, if Bitwarden shit on everyone, how long it would take for Vaultwarden-specific clients to appear. A browser extension would be pretty simple; app store apps are a bit more complicated because of the pay-to-play aspects.
The problem is that once Vaultwarden clients appear, Vaultwarden becomes its own complete system and can no longer rely on the good reputation of Bitwarden. Plus, developing clients for multiple browsers and OSes is a lot more difficult than just keeping a back end up to date.
If they went this path I think I would jump ship to a paid service.
As soon as they break compatibility with the official clients, it becomes much tougher. Even though the current versions can be forked, the whole system is set up to work against any kind of grassroots effort to maintain an open source version.
Apple and Google being the gatekeepers for all mobile app distribution is a real pain point. Without the clout of a big brand name the risk of being unable to distribute apps goes up.
Vaultwarden relies on the goodwill of Bitwarden to allow it to use its clients for compatibility. I would wager a new owner looking for money would block that pretty soon after buying the company.
Again, for how long? The answer to all of these questions seems to be the same: if Bitwarden were sold, the new owners could remove all of this free functionality and interoperability with third-party clients immediately.
Then you could say, well, Vaultwarden will work with these forked clients, but then you are placing your security in the hands of multiple different open source maintainers, and Vaultwarden no longer has anything to do with Bitwarden: it becomes some random back end plus some random third-party clients.
Sure, but Vaultwarden as a system would be entirely usable; I don't think much of it really relies on the Bitwarden compatibility for more than a little convenience.
Usable, yes, but trustworthy? Not without some serious backing and regular audits from public security experts.
IMO the fact that the existing Vaultwarden system relies on Bitwarden clients, and therefore carries Bitwarden's security reputation, is its main selling point. Take that away and Vaultwarden is nothing more than some random back end software that cannot really be trusted.
> the existing Vaultwarden system relies on Bitwarden clients, and therefore carries Bitwarden's security reputation, is its main selling point
I hope that this could be a starting point and not an end-point of Vaultwarden. It has gotten far on the shoulders of the Bitwarden giant. If it forked, would it have a large enough community to continue to carry that trust forward (including building new clients)? How much financial support would they need? Could they find a sponsor? It's a European project -- would the EU help fund it as a data sovereignty push?
Agreed, it would be great to have a fully open source solution; however, I would be wary of it until it was audited and backed by security professionals in the field.
Maybe, but I don't think that reputation really should transfer anyway, and it's not something I would consider necessary for using it. (I mean, some scrutiny is obviously good, but I don't think it needs to be as big as Bitwarden's.)
> I don't think that reputation really should transfer anyway
Why not? The most important security bits are implemented client-side which is developed by Bitwarden. If the clients are secure then my database is safe even if Vaultwarden turns out to be evil.
Switching from Bitwarden Client to Vaultwarden Client would require about 3 orders of magnitude more trust than switching the server which primarily deals with encrypted blobs. If the client turns out to be malicious then it's game over.
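To make that concrete, here's a minimal sketch of the client-side encryption model in Python, using the `cryptography` package. The KDF parameters are illustrative, not Bitwarden's actual settings:

```python
# Sketch of the "server only ever sees ciphertext" model.
# KDF parameters are illustrative, not Bitwarden's real settings.
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(master_password: str, salt: bytes) -> bytes:
    # The key is derived client-side; the server never sees the master
    # password or the derived key.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

# Only the ciphertext (and salt) is uploaded. An evil server can withhold
# or delete blobs, but cannot read or forge entries without breaking the
# client-side crypto.
blob = Fernet(key).encrypt(b'{"site": "example.com", "pw": "hunter2"}')
assert Fernet(key).decrypt(blob) == b'{"site": "example.com", "pw": "hunter2"}'
```

That's why a malicious server mostly threatens availability, while a malicious client is game over.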
You're right, though the friends and family that I would feel the need to recommend a password manager to aren't the type that would self-host their own servers.
- KeePass files synced between laptop and phone on OneDrive, DropBox, etc
- KeePassXC on Windows and Mac
- Keepass2Android mobile client
- Browser integration on mobile.
- On laptop, I prefer no browser integration; Copy username and password with Ctrl+B and Ctrl+C
Slightly off topic: I use KeePassXC on Mac, and browser integration almost never works for me. It never picks up the usernames or passwords, even when the entry has the URL in it.
I've paid for and recommended Bitwarden. For years it's operated along a stable trajectory. I was confident in its security record. Vaultwarden is an escape hatch I'm in a position to set up for my family as a last resort. Almost any reputable password manager is more secure than reusing the same passwords or storing everything in a note file.
What I stopped doing so frequently could be described as "evangelizing" or "endorsing". I no longer actively tell people that I think they should use X, instead, if someone asks, I say "I use X, and it's worked for me so far".
The server has only recently become free, if indeed it is at all. I don't remember when or whether that changed, because for most of its life it was definitely not free (open source).
I don’t believe any of the extant open source rar implementations cover the range of features and versions OP’s does. I think that’s the point - OP’s isn’t the cleanest or fastest implementation, but it is the most broad open source version available.
I don’t think this holds at all, because the idea with a lot of vibe-code workflows is “humans never need to read the code” which would mean that human dev ergonomics are irrelevant. Here, the blog post is still clearly targeted at humans, so human reader ergonomics are still relevant.
Yeesh, is "never reading the code" really the modus operandi we want from AI?
Microsoft, for all their warts, at least had the compassion to call their AI product "Copilot", suggesting we have some residual agency in whatever it is that it produces.
Copilot is a legacy brand from 2021 (does anyone remember its free beta? Good times), back when it was just a rudimentary autocomplete powered by GPT-3. I don't think it aligns with Microsoft's views and priorities now.
It's clearly not the MO that capable engineers want, but it's the MO that is getting funded right now.
Reading code carefully is harder than writing code unless the code is written consistently and clearly in a way that is idiomatic to the reader. And there's way more code to review now, but companies aren't scaling up the number of skilled engineers on staff. So in practice, never reading all of the diffs is the MO that will be built into code we depend on.
> It's clearly not the MO that capable engineers want, but it's the MO that is getting funded right now.
Quite a few capable engineers really are that short-sighted!
The bigger question for the AI techbro asking "If AI writes your code, why use Python?" is "If AI writes your code, what use do we have for you?"
After all, there are dozens of people in the same business who have better domain knowledge but are unable to program; as a programmer, the only value you added over random analysts and clerks was that you could automate shit.
Now you can't, so good luck competing with people who were already making half your salary when your largest value-prop is now gone.
There are lots of good use cases for vibe coding ("never reading the code"): prototypes, various explorations, and one-offs. I’ve done various kinds of migrations where I didn’t bother to review the code much, just the output.
Possibly also some user-facing tools with a limited task and runtime environment.
Incidentally, these are all use cases where performance isn’t critical, typically, so you might as well write them in Python or Typescript or whatever makes most sense for the task.
Real production code? Yeah, you still need to be able to read it and understand it.
You don’t need to read the code if you have a robust test suite to validate the output. The article implies testing is the new “reading”. If I spend 10 minutes reading code to find an edge-case bug, I have lost the benefit of using AI. AI code is legacy code the moment it is generated, because I can’t tell why some lines were chosen, so the only way for me to add more features or refactor it is by being very rigorous with testing.
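For example, a test-as-spec for a hypothetical `slugify` helper (the function and its module are made up for illustration): if these pass, regenerating or refactoring the implementation is cheap, and I never have to read it line by line.

```python
# Tests as the spec: if these pass, I don't need to read the generated
# implementation line by line. (`slugify` and its module are hypothetical.)
import pytest
from myapp.text import slugify

@pytest.mark.parametrize("raw,expected", [
    ("Hello World", "hello-world"),
    ("  trim  me  ", "trim-me"),
    ("Crème Brûlée!", "creme-brulee"),  # accents folded, punctuation dropped
    ("", ""),                           # edge case: empty input
])
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```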
This is perhaps where our perspectives differ, because I see the usage of LLMs not as an external third-party (another team per your example), but instead as an extension of one's self. Given that lens, I'm highly sensitive to the quality and function of its output, because ultimately its contribution is my responsibility.
I appreciate not everyone feels this way, but that's why it would be anathema to me not to read its code.
If the code is written in a language that no one can read it becomes vibe coded by definition. However, if it's a readable language then people CAN look at the diffs.
Agreed. Saving maybe a dozen tokens is meaningless when a task can easily chew through ten thousand times that many. A single misdescribed task will use more tokens than all my spelling mistakes all year.
I mainly object to AI writing when it’s excessively verbose. This was pretty information dense, a few AI-isms didn’t make it a waste of my time to read.
IMO there are two things that make me optimistic that we won’t see a big rug pull where price-to-capability ratio skyrockets relative to today:
* As you’ve noted, people keep finding ways of cramming more intelligence into smaller models, meaning that a given hardware spec delivers more model capability over time.
* Hardware will continue to improve and supply will catch up to demand, meaning that a dollar will deliver more hardware spec over time.
I hope that one day we’ll look back on the current model of “accessing AI through provider APIs” the same way we now look back on “everyone connecting to the company mainframe.”
I also hope that we’ll find effective ways to distribute load between small local models and heavyweight remote models. Sort of like what Apple tried to do in iOS.
So much of what I ask codex to do doesn’t require full GPT 5 intelligence, and if 75% of the tokens were generated locally that’d save a massive amount of cost.
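A rough sketch of what that local-first routing could look like; the heuristic and both model calls are placeholders, not any real Apple or OpenAI API:

```python
# Local-first routing sketch: try the small on-device model, escalate to
# the frontier model only when needed. All calls are placeholders.

def local_model(task: str) -> str | None:
    # Placeholder: call llama.cpp / MLX / Ollama here; return None when
    # the model reports low confidence so the router escalates.
    return f"[local draft for: {task[:40]}]"

def remote_model(task: str) -> str:
    # Placeholder: call the provider API here.
    return f"[frontier answer for: {task[:40]}]"

def looks_hard(task: str) -> bool:
    # Placeholder heuristic; a real router might use a tiny classifier
    # or the local model's own confidence score.
    return len(task) > 2000 or "refactor across" in task

def complete(task: str) -> str:
    if not looks_hard(task):
        draft = local_model(task)  # cheap tokens that never leave the box
        if draft is not None:
            return draft
    return remote_model(task)      # frontier model as the fallback
```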