Sure, in which case the real prompt is as useless as a hallucinated one, so what's the difference?


I guess verifying it isn't the easy part after all, as you boldly claimed in the comment before?


I don't think the purpose of getting the prompt leaked was to then use the prompt, but rather to expose the limitations of this approach to steering an LLM.