Hacker News

I see your point, but I feel like within a year there will be an 'eating Tide Pods'-level societal meme mocking people who fall for AI hallucinations as "boomers", and then the omnipotent-AI myth will be shattered.

Essentially, I believe many people undercount the baseline level of misinformation, so the added delta in the interim, while people are learning how fallible AI is, is small enough that it won't cause significant issues.

Also, the 'inoculation' effect of getting the public to use LLMs could produce a net social benefit: the common man will become skeptical of authorities appealing to AI to justify their actions, which I think could be much more dangerous than Suzie copying hallucinated facts into her book report.



If the only negative effect is that some people look foolish, that's an acceptable risk. I worry it's closer to people believing Tesla has a full self-driving system because Tesla called it Autopilot and demonstrated videos of the car driving without a human occupant. In that case, yes, the experts understand that "Autopilot" still means driver-assisted, but we can't ignore the fact that most people don't know that, and that the marketing reinforced the wrong ideas.

I don't want to argue with people who won't understand that the AI model can be wrong. I'm far more concerned with public policy being driven by made-up facts, or someone responding poorly in an emergency because a search engine synthesized facts. Outside of small discussions here, I don't see any acknowledgment of the current limitations of this technology, only the sunny promises of greener pastures.



