
The problem in this case is not that it was trained on bad data. The AI summaries are just that - summaries - and when the underlying results are bad, the model faithfully summarizes them.

This is hallucination reduction coming full circle. A simple summarization model was meant to lower hallucination risk, but it's not discerning enough to exclude untruthful results from the summary.
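To make the point concrete, here is a minimal sketch of that kind of pipeline, assuming a naive retrieve-then-summarize design. The `Result`, `search`, and `summarize` names are hypothetical stand-ins, not any real product's API; the point is that nothing in the flow asks whether a retrieved snippet is true before it gets condensed into the answer.

  from dataclasses import dataclass

  @dataclass
  class Result:
      url: str
      snippet: str

  def search(query: str) -> list[Result]:
      # Stand-in for web search: one snippet is accurate, one is not.
      return [
          Result("https://example.com/reliable", "An accurate claim from a reliable source."),
          Result("https://example.com/satire", "An obviously false claim from a satirical post."),
      ]

  def summarize(query: str, results: list[Result]) -> str:
      # Stand-in for the summarization model: it condenses whatever it is given.
      # There is no step here that filters snippets by truthfulness.
      points = "; ".join(r.snippet for r in results)
      return f"Summary for '{query}': {points}"

  print(summarize("example query", search("example query")))

A summarizer grounded this way hallucinates less in the sense of not inventing text, but it inherits whatever the retrieval layer hands it.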





