
Why?

We already have to deal with Tesla FSD hallucinations and with regulating them in serious, safety-critical applications such as autonomous transportation.

I don't think we need an LLM behind the wheel, hallucinating directions that lead the driver off a cliff or through a stop sign it got confused over.

> Seems like LLMs are very good at generalizing to random tasks that they're not necessarily trained for.

Some tools do generalize usefully to other applications, but safety-critical applications are not among them. LLMs are *absolutely* not useful for this use case.


