
Shouldn't the medical staff be the ones filling out the questionnaire then? They are the ones perceiving the race.

Seems like in matters of perception, the person to interrogate is the one perceiving, not the object being perceived.



You're welcome to propose that change, but I see it as being impractical. Not only would this require complex and expensive changes to health data infrastructure, but I doubt it would result in significantly better data. How many of the racially biased medical practitioners will correctly report the perceptions they're acting upon?

As Box said, "All models are wrong. Some models are useful." The current model is clearly wrong but clearly useful. You'd have to make the case that switching to your theoretically-less-wrong model would have a high ROI in terms of reducing unfair medical outcomes. But I think that's a very hard case to make.


I'm not proposing a new model; I'm merely proposing to reduce errors by using the one already in use in a slightly less nonsensical fashion.


The current model is 1:1 for patient:race, data entered by the patient. You are proposing a 1:n model with data entered by each practitioner. That is a different model.
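To make the distinction concrete, here is a minimal sketch of the two shapes being compared. The class and field names are hypothetical, not any real EHR schema; it only illustrates the 1:1 versus 1:n structure described above:

  from dataclasses import dataclass, field
  from typing import List

  # Current model: one self-reported race per patient (1:1),
  # entered once by the patient.
  @dataclass
  class Patient:
      patient_id: str
      self_reported_race: str

  # Proposed model: one perceived-race record per practitioner
  # encounter (1:n), entered by whoever is doing the perceiving.
  @dataclass
  class PerceivedRaceRecord:
      patient_id: str
      practitioner_id: str
      perceived_race: str

  @dataclass
  class PatientWithPerceptions:
      patient_id: str
      perceptions: List[PerceivedRaceRecord] = field(default_factory=list)

Every system that stores, exchanges, or analyzes the single self-reported field would have to be updated to handle the list of perception records, which is where the implementation cost comes from.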

And again, you haven't walked through the practical consequences of making this change and how it will improve fairness enough to be worth the cost of implementation. If you're serious about this, please do that. Otherwise it looks to me like you're just bikeshedding, and I don't have time for that today.



