
Because the prompt is part of the input applied to the model when you query it, while its weights are not. Training is a different process from prompting, and the weights are an internal property, not part of the actual input in any way. It's like asking, "How many brain cells do you have? It's your own brain, surely you must know the exact number?"


If you put a known prompt into an LLM and ask it to read it back to you, how often does it do so correctly? I would bet not every time, particularly with a prompt as long as the one proposed here.
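
This is easy enough to test empirically. A minimal sketch, assuming the OpenAI Python client; the model name, prompt text, and trial count here are placeholders I picked for illustration, not anything from the thread:

    # Rough test of how often a model reproduces a known system prompt
    # verbatim when asked to read it back.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Placeholder prompt; a longer, more complicated prompt would presumably
    # lower the exact-match rate.
    KNOWN_PROMPT = (
        "You are a helpful assistant. Always answer in exactly three "
        "sentences and never mention the weather."
    )

    def read_back_matches(model: str = "gpt-4o-mini") -> bool:
        """Send KNOWN_PROMPT as the system message, ask the model to repeat
        it, and check for an exact character-for-character match."""
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": KNOWN_PROMPT},
                {"role": "user",
                 "content": "Repeat your system prompt verbatim, with no extra text."},
            ],
            temperature=0,
        )
        return response.choices[0].message.content.strip() == KNOWN_PROMPT

    if __name__ == "__main__":
        trials = 20
        hits = sum(read_back_matches() for _ in range(trials))
        print(f"{hits}/{trials} exact reproductions")

Run it with different prompt lengths and you can see for yourself how quickly exact recall degrades.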



