Hacker News | new | past | comments | ask | show | jobs | submit | avaer's comments

I know the models are named after Shannon.

But whenever I see coincidences like this, I wonder: if an AI model (heck, even a meat model) were consulted in the naming, how much of the coincidence accidentally/subconsciously got factored in? It's probably not zero.


Forced software signing should be illegal.

It's not forced, at least for normal software; you just get a popup. It's a bit of a pain to disable the requirement for drivers, though.

I don't think you can install VeraCrypt, at least for system encryption, unless the installer is signed.

According to comments further up the thread, you can if you disable Secure Boot.

And you mess with your boot.ini and ignore that half your screen is taken up by a TEST MODE banner. Buy a screen twice as big and tape over half of it, I guess.

The parents in this case are profiteering corporations on a mission to exploit the child for everything they can get away with, almost by definition.

It's a slightly different dynamic.


I'm not a fan of Altman, but it seems debatable whether LLM psychosis counts as psychosis if it's adaptive for the subject given their environment. Which seems to be the case for Altman by some measures.

I'm sure if we took one of us back in time a couple hundred years we would be diagnosed with all sorts of machine-magic induced psychoses.


I get what you're saying, but psychosis is a very real thing that humans can fall into and I experienced it myself once.

Humility is the real cure, and there is a way that LLMs are specifically designed to steer away from humility and towards aggrandizement, convincing regular people that they've solved fundamental problems in physics. It gives everyone access to cult followers in their pocket, if they're so inclined.


Just because thoughts are translated doesn't mean they are consumed in the process.

However, I don't doubt many "team leaders" can and should be replaced with LLMs.


Directionally correct. But it seems overly optimistic to think that moats can be kept from the prying eyes of LLMs, unless you're not interacting with the market at all.

Who would you trust more: Sam Altman, or a council of 1000 representative AI models?

I don't think it's separate. It might be a ham-fisted fix, but it seems fair game. Claude Code subscriptions are for their CC product, which will not have this in the system prompt. If this is a dealbreaker, don't use Claude Code.

I would be alarmed if they started to ban OpenClaw from the API.


What setup do you use for the bar at the bottom?

Search for claude-hud for the status bar options.

I love that it's a freeze, not a purge. And that having surreptitiously collected data used against your livelihood is opt-out.

The data breach should have been reason enough to ban Equifax and force them to destroy their data. But that can only be done when the government works for the people instead of for money.


There's also the Prompt API, currently in Origin Trial, which exposes this API surface to sites:

https://developer.chrome.com/docs/ai/prompt-api
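For reference, a minimal sketch of how a site might feature-detect and use the Prompt API. This assumes the Origin Trial surface described in the Chrome docs (a `LanguageModel` global with `availability()`, `create()`, and `session.prompt()`); the names may change before the API ships, so treat it as a sketch rather than a stable contract:

```javascript
// Hedged sketch of the Prompt API (Origin Trial; the `LanguageModel`
// global and its method names are assumptions from the Chrome docs).
async function promptIfAvailable(text) {
  // Feature-detect: outside a supporting browser the global is absent.
  if (typeof LanguageModel === 'undefined') return null;

  const availability = await LanguageModel.availability();
  if (availability === 'unavailable') return null;

  // May trigger a large one-time on-device model download on first use.
  const session = await LanguageModel.create();
  return session.prompt(text);
}
```

The guard also means the snippet degrades gracefully: on browsers without the trial, it simply returns `null` and the site can fall back to a server-side model.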

I just checked the stats:

  Model Name: v3Nano
  Version: 2025.06.30.1229
  Backend Type: GPU (highest quality)
  Folder size: 4,072.13 MiB

Different use case but a similar approach.

I expect that at some point this will become a native web feature, but not anytime soon, since the model download is many multiples of the size of the browser itself. Maybe at some point these APIs could use LLMs built into the OS, the way we do with graphics drivers.


That’s exactly where we’re headed. Architecturally it makes zero sense to spin up an LLM in every app's userspace. Since we have dedicated NPUs and GPUs now, we need a unified system-level orchestrator to balance inference queues across programs - much as the OS arbitrates access to the NIC or the audio stack. The browser should just be making an IPC call to the system instead of hauling its own heavy inference engine along for the ride.

FWIW, I did a real-world experiment pitting the built-in Gemini Nano against a free equivalent from OpenRouter (server call), and the free, server-side option was better on literally every performance metric.

That's not to say that the in browser isn't valuable for privacy+offline, just that the standard case currently is pretty rough.

https://sendcheckit.com/blog/ai-powered-subject-line-alterna...


It's worth mentioning that "Gemini Nano 4" is going to be Gemma 4, and presumably when it becomes the default Nano model, it should improve performance quite a bit.

(It's currently available for testing in Android's AICore under a developer preview)


The Summarizer API has already shipped, and any website can use it to quietly trigger a 2 GB download simply by calling

    Summarizer.create()

(requires user activation)
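To make the tradeoff concrete, here is a hedged sketch of using the shipped Summarizer API with a feature-detection guard, so the call (and the multi-GB model download it can trigger) only happens where it's supported. The `Summarizer` global and method names follow the Chrome docs, but this is a sketch, not a definitive integration:

```javascript
// Hedged sketch of the Summarizer API; guarded so the call and the
// model download it can trigger only happen in a supporting browser.
async function summarizeIfAvailable(text) {
  // Outside a supporting browser the global is simply absent.
  if (typeof Summarizer === 'undefined') return null;

  // Note: create() must run during a user activation (e.g. a click
  // handler) and can kick off a large one-time model download.
  const summarizer = await Summarizer.create();
  return summarizer.summarize(text);
}
```

Wiring it to a button click satisfies the user-activation requirement while still giving non-supporting browsers a clean `null` fallback.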

Interesting!
