But whenever I see coincidences like this, I wonder: if an AI model (heck, even meat model) was consulted in the naming, how much of the coincidence accidentally/subconsciously got factored in? It's probably not zero.
And you mess with your boot.ini and ignore that half your screen is taken up by a TEST MODE banner. Buy a screen twice as big and tape over half of it, I guess.
I'm not a fan of Altman, but it seems debatable whether LLM psychosis counts as psychosis if the beliefs are actually adaptive for the subject in their environment. Which, by some measures, seems to be the case for Altman.
I'm sure if we took one of us back in time a couple hundred years we would be diagnosed with all sorts of machine-magic induced psychoses.
I get what you're saying, but psychosis is a very real thing that humans can fall into and I experienced it myself once.
Humility is the real cure, and LLMs are tuned in ways that specifically steer away from humility and toward aggrandizement, convincing regular people that they've solved fundamental problems in physics. It gives everyone access to a cult follower in their pocket, if they're so inclined.
Directionally correct. But it seems overly optimistic to think that moats can be kept from the prying eyes of LLMs unless you're not interacting with the market at all.
I don't think it's separate. It might be a ham-fisted fix, but it seems fair game. Claude Code subscriptions are for their CC product, which will not have this in the system prompt. If this is a dealbreaker, don't use Claude Code.
I would be alarmed if they started to ban OpenClaw from the API.
I love that it's a freeze, not a purge. And that it's opt-out to have surreptitiously collected data used against your livelihood.
The data breach should have been reason enough to ban Equifax and force them to destroy their data. But that can only be done when the government works for the people, instead of money.
I expect that at some point this will become a native web feature, but not anytime soon, since the model download is many multiples of the size of the browser itself. Maybe at some point these APIs could use LLMs built into the OS, the way we do for graphics drivers.
That’s exactly where we’re headed. Architecturally it makes zero sense to spin up an LLM in every app's userspace. Since we have dedicated NPUs and GPUs now, we need a unified system-level orchestrator to balance inference queues across different programs - exactly as the OS handles access to the NIC or the audio stack. The browser should just be making an IPC call to the system instead of hauling its own heavy inference engine along for the ride.
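The browser-facing half of this split is already being prototyped: Chrome exposes an experimental Prompt API (behind a flag) that fronts the on-device Gemini Nano model, which is downloaded once and shared across origins rather than bundled per page. A minimal sketch, assuming the `LanguageModel` surface from the current explainer (the names have already changed once and may change again):

```javascript
// Sketch of calling an on-device model via Chrome's experimental
// Prompt API. `LanguageModel` is the global from the current
// explainer; it is behind a flag and subject to change.
async function summarizeOnDevice(text) {
  // Feature-detect: only browsers shipping the built-in model expose
  // the API, so everywhere else we return null and the caller falls
  // back to a server-side model.
  if (typeof LanguageModel === "undefined") {
    return null;
  }
  const session = await LanguageModel.create();
  try {
    return await session.prompt(`Summarize in one sentence:\n${text}`);
  } finally {
    session.destroy(); // release the session's on-device resources
  }
}
```

The page never touches weights or an inference runtime; the call is brokered to a shared browser-level model service, which is the same driver-style indirection described above, just one level down from the OS.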
FWIW - I did a real-world experiment pitting the built-in Gemini Nano against a free equivalent from OpenRouter (server call), and the free+server-side option was better in literally every performance metric.
That's not to say the in-browser model isn't valuable for privacy+offline, just that the standard case currently is pretty rough.
It's worth mentioning that "Gemini Nano 4" is going to be Gemma 4, and presumably when it becomes the default Nano model, it should improve performance quite a bit.
(It's currently available for testing in Android's AICore under a developer preview)