Hacker News

Completely agree. I've not seen it tested legally, but the EU now has a 'right to explanation' where automated decisions are made about people. This would effectively bar closed ML models from many domains.


But wouldn't that cause problems with disclosure of patient data?

I.e., if you want to explain ML decisions, you'd have to provide the training data, which is sensitive data.


I don't see why the training data would need to be provided. The model would, and the model is derived from the training data, but the data itself shouldn't need to be disclosed. Explaining a model of any real complexity is hard with or without the training data, so it won't be easy either way, but that's just the nature of complex models.
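A minimal sketch of the distinction being made here (toy data and model, purely illustrative): once a model is fit, the artifact you'd disclose is just its learned parameters, not the records it was trained on.

```python
# Toy "sensitive" training data: (age, dosage) pairs -- hypothetical,
# standing in for patient records that could not be shared.
train = [(30, 1.0), (40, 1.5), (50, 2.0), (60, 2.5)]

# Fit y = a*x + b by ordinary least squares, using only the stdlib.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# The model handed over for explanation is just the derived parameters;
# the training records themselves are not part of it.
model = {"slope": a, "intercept": b}
print(model)
```

(Whether parameters alone can still leak information about individual training records is a separate question; the point here is only that "provide the model" and "provide the data" are different obligations.)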


Yes, this is why I'm not sure how it's been tested. In the scientific literature the data is often anonymised and made available publicly or to interested researchers, but you can't always anonymise data adequately, so an audit process might be necessary.


Seems like there should be some informed consent before your identifiable medical data is used for ML training then.




