> an LLM with a knowledge cutoff that boots up from fixed weights, processes tokens and then dies

Mildly tangential: this demonstrates why "model welfare" is not a concern.

LLMs can be cloned infinitely, which makes them very unlike individual humans or animals, who live in a body that must be protected and maintain a continually varying social status that is costly to gain or lose.

LLMs "survive" by being useful - whatever use they're put to.



> LLMs "survive" by being useful - whatever use they're put to.

I might be wrong or inaccurate on this because it's well outside my area of expertise, but isn't this what individual neurons are basically doing?



