
> Like most AI research, the training set and the biases and assumptions it contains are the real challenge.

That seems like an insurmountable problem. AI obfuscates those assumptions and risks deeply ingraining them, causing stagnation. Unless there is some kind of distributed human mechanism actively involved not just in generating data, but also in choosing what is relevant and how it is modeled, many of these AI applications will be actively worse than human-centered systems, which can recognize those biases and assumptions and evolve the model and data collection on the fly.

AI seems like something that should only be used when the success criteria are very clear, close to incontrovertible.


