
Last month, I was listening to the Joe Rogan Experience episode with guest Avi Loeb, a theoretical physicist and professor at Harvard University. He complained about the disturbing rate at which his students submit academic papers citing non-existent scientific literature, references so clearly hallucinated by Large Language Models (LLMs). They never even bother to confirm those references and take the AI's output as gospel.

https://www.rxjourney.net/how-artificial-intelligence-ai-is-...



From a certain perspective, these students are optimizing their time, which is not an unwise strategy. If you've saved 3 hours by using an LLM, does it make sense to spend 1 of them checking those references by hand?

Of course, they're also cheating themselves out of an education, but few students see that big picture at their age.


You make good points.



