Every LLM web app I have used has a disclaimer along these lines prominently featured in the UI. Maybe the disclaimer isn't bright red with gifs of flashing alarms, but the warnings are there for the people who would pay attention to them in the first place.


Unfortunately, even after 2 years of ChatGPT and countless news stories about it, people still don't realize that LLMs can be wrong.

Maybe there should be a bright red flashing disclaimer at this point.



