Hacker News

There will come a time when complex systems can be predicted better with AI than with conventional mathematical models. One use case could be feeding body scans into them for cancer screening. AFAIK this is already being researched.

There may come a time when we grow so accustomed to this, and decisions become so heavily influenced by AI, that we trust it more than human judgment.

And then it can very well kill a human through misdiagnosis.

I think it is important not to just put this thought aside, but to evaluate all the risks.



> And then it can very well kill a human through misdiagnosis.

I would imagine outcomes would be scrutinized heavily for an application like this. There is a difference between a margin of error (which exists with human doctors as well) and a sentient AI that has decided to kill, which is what it sounds like you're describing.

If we didn't give it that goal, how does it obtain it otherwise?



