"Opto Electric Phrenology" (OEP) would be a better name.
"Depression" and "Schizophrenia" as diagnostic categories are fraught, as they include many subtypes and, in the latter case, may lump together multiple distinct conditions.
At best, this "AI diagnosis" research can claim that the "diagnosis" the system produced matches the "diagnosis" a panel of human psychiatrists reached for the same subject. As with most AI research, the real challenge is the training set and the biases and assumptions it contains.
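To make that concrete, here is a minimal sketch (not from any actual study; the labels and data are hypothetical) of what such an evaluation typically measures. Whatever agreement metric you compute, the reference labels are the panel's judgments, so a "good" score only means the model reproduces the panel, biases and all, not that it found an independent ground truth.

```python
# Hypothetical sketch: evaluating an "AI diagnosis" against a psychiatrist panel.
# High agreement means agreement with the panel's labels, nothing more.
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical consensus labels from a panel of psychiatrists (1 = diagnosis, 0 = no diagnosis)
panel_labels = [1, 0, 0, 1, 1, 0, 1, 0]

# Hypothetical predictions from the AI system for the same subjects
model_labels = [1, 0, 1, 1, 1, 0, 0, 0]

# Both metrics are computed relative to the panel, so any bias or category
# assumption baked into the panel's labels is baked into the "performance" too.
print("Agreement (accuracy):", accuracy_score(panel_labels, model_labels))
print("Cohen's kappa:", cohen_kappa_score(panel_labels, model_labels))
```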
AI Ethics are not yet standard practice in most institutional settings, and it shows.
> As with most AI research, the real challenge is the training set and the biases and assumptions it contains.
That seems like an insurmountable problem. AI obfuscates those assumptions and risks deeply ingraining them and causing stagnation. Unless some kind of distributed human mechanism is actively involved, not just in generating the data but also in deciding what is relevant and how it is modeled, many of these AI applications will be actively worse than human-centered systems, which can recognize those biases and assumptions and evolve the model and the data collection on the fly.
AI seems like something that should only be used when the success criteria are super clear/close to incontrovertible.
"Depression" and "Schizophrenia" as diagnostic categories are fraught, as they include many subtypes and, in the latter case, may lump together multiple distinct conditions.
At best, this "AI diagnosis" research can claim that the "diagnosis" the system produced matches the "diagnosis" a panel of human psychiatrists reached for the same subject. Like most AI research, the training set and the biases and assumptions it contains are the real challenge.
AI Ethics are not yet standard practice in most institutional settings, and it shows.