
Well, the LLMs were trained on data that required human effort to write; it's not just random noise. So what they produce is, indirectly and probabilistically, regurgitated human effort.
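
To make the "probabilistic regurgitation" point concrete, here's a toy bigram model. This is a sketch, not how a transformer actually works internally (real LLMs use neural networks over far larger corpora), but the principle is the same: every word it emits is sampled from statistics of human-written training text, so the output is a probabilistic recombination of human effort.

  import random
  from collections import defaultdict

  # A tiny "training corpus" standing in for human-written data.
  corpus = "the cat sat on the mat and the dog sat on the rug".split()

  # Record which words follow which in the human-written text.
  follows = defaultdict(list)
  for prev, nxt in zip(corpus, corpus[1:]):
      follows[prev].append(nxt)

  # Generate by sampling: each word is drawn from distributions
  # learned entirely from the human-written corpus.
  word = "the"
  output = [word]
  for _ in range(8):
      choices = follows.get(word)
      if not choices:
          break  # dead end: this word never had a successor in the corpus
      word = random.choice(choices)
      output.append(word)
  print(" ".join(output))  # e.g. "the cat sat on the mat and the dog"

Nothing in the output exists independently of the corpus; the model only reshuffles what humans wrote, weighted by how often they wrote it.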



