Hacker News

> if one would extend it with an external program, that gives it feedback

If you have an external program, then by definition it's not self-awareness ;). Also, it's not about correctness per se, but about the model's ability to assess its own knowledge (making a mistake because the model was exposed to mistakes in the training data is fine; hallucinating isn't).



Yes, but that's essentially my point: where do you draw the system boundary? The brain is also composed of multiple components and does IO with external components that are definitely not considered part of it.



