
I called out one hallucination: "The Personal MBA" by Josh Kaufman wasn't on my shelf.

I didn't bother fact-checking every other book; I thought highlighting one mistake would be enough to show that the results weren't accurate, which is pretty much expected for anything LLM-related at this point.



I don't think highlighting one mistake is enough when the output can sometimes contain more mistakes than correct answers. I've found uses for LLMs (in large part thanks to your teaching) in cases where I can fully and easily verify the results, like code and process documentation, but tasks where fact-checking everything would be too much work are very much in the danger zone for getting accidentally scammed by AI.


No result is better than misinformation.



