It goes up at least until LLMs match humans, i.e. until an LLM can write Windows.


I want the LLM to decide not to do anything, or write a new OS.

Whenever I prompt: "Do not do anything"

It always does <something>.
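For what it's worth, this is easy to reproduce against any chat API. A minimal sketch, assuming the OpenAI Python client with a key in OPENAI_API_KEY; the model name is just illustrative:

    # Minimal repro: send a "do nothing" instruction and inspect the reply.
    # Assumes the OpenAI Python client and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name; any chat model will do
        messages=[{"role": "user", "content": "Do not do anything."}],
    )

    # In practice the model virtually always replies with *something*: an
    # acknowledgment, a question, or an explanation of why it can't "do nothing".
    print(resp.choices[0].message.content)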


Do not think of a pink elephant. Were you able to?


> Whenever I prompt: "Do not do anything" It always does <something>.

Yep. A lot of times, the responses I get remind me of Simone in Ferris Bueller's Day Off: https://www.youtube.com/watch?v=swBtLPWeKbU

If you end up making a new model, please teach it that less is more and call it "LAIconic".


Slightly tangential, but I want an LLM which can debug Windows.


Debug and locally fix security holes.


Or sell the exploit it finds to a friendly nation-state actor behind your back, to pay for its compute.



