
I think it's because a human actually wants to figure out what you want, whereas you're just going to keep prompting the ML model until you get something similar to what you want — something that would annoy a human and probably waste their time or make the endeavor extremely expensive.

I don't think it's really fundamental to LLMs. It's just that you don't treat a human the same way you treat an unthinking, unfeeling computer system whose transactions are cheap and near instant compared to requesting work from a human.


