I used to work in emergency medicine as a paramedic. A far cry from an MD for sure, but here's my two cents: unless they were already in decline, I never suspected sepsis based solely on vitals. It was usually more of a gut-feel thing. It can present with weird symptoms, like unexplained pain in multiple joints. I think two key tells are patient history and patient cognition relative to baseline. My favorite question to ask family is: "How are they normally?" It's hard to measure these things with vitals, so the AI doesn't have the right data going in, IMO. But take all this with a grain of salt, as I am deeply uninformed on the subject.
The title is misleading and wrong. "AI" from a specific company (Epic) can't detect sepsis. But technology exists from people who know what they are doing, such as the Johns Hopkins spin-off Bayesian [1], and they have been very successful in detecting sepsis.
That's fair - I've edited the title to make it more specific. I've not come across Bayesian before, but I'll add their research to the site to summarise down the line.
Epic would tell you that they have been very successful in predicting sepsis too. But it's hard to train a model that generalizes to data from other sources (the problem described in the article). How does BH's model work outside Johns Hopkins? That's the real test.
One of the problems with closed models is that any model can end up training on the 'wrong' data points. So, e.g., a chest X-ray reader learns that images taken with the machine in the ICU indicate sicker patients than images taken elsewhere - a correlation that isn't clinically useful. If you can't inspect the model to check for that, the vendor might claim superior performance, but then the model doesn't work as well as advertised when it's tried out. Other biases might occur as well - for instance, you can imagine a 'Greyball for healthcare' with the wrong incentives that recommends a certain drug/therapy more often than it should.
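The chest X-ray scenario can be sketched with toy data. Everything below (the function names, the 90% correlation, the accuracy figures) is invented for illustration - the point is only that a 'classifier' reading a spurious machine-ID flag can look great at the training hospital and collapse to chance at a hospital where that correlation doesn't hold:

```python
import random

random.seed(0)

def make_data(n, icu_correlation):
    """Synthetic patients: (true_signal, imaged_on_icu_machine, is_sick)."""
    rows = []
    for _ in range(n):
        sick = random.random() < 0.5
        # Noisy but genuine clinical signal in the image.
        signal = (1.0 if sick else 0.0) + random.gauss(0, 0.8)
        # Spurious feature: with probability `icu_correlation`, the
        # machine ID happens to track the label (sick -> ICU machine).
        icu = sick if random.random() < icu_correlation else (not sick)
        rows.append((signal, icu, sick))
    return rows

def accuracy_using_machine_id(rows):
    # Shortcut 'model': predict sick iff the image came from the ICU machine.
    return sum(icu == sick for _, icu, sick in rows) / len(rows)

train = make_data(10_000, icu_correlation=0.9)   # training hospital
deploy = make_data(10_000, icu_correlation=0.5)  # new hospital: no such link

print(f"shortcut accuracy, training hospital: {accuracy_using_machine_id(train):.2f}")
print(f"shortcut accuracy, new hospital:      {accuracy_using_machine_id(deploy):.2f}")
```

At the training hospital the shortcut scores around 90%; at the new hospital it drops to roughly coin-flip - exactly the "doesn't work as advertised" failure, and invisible without inspecting what the model actually learned.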
One of my more radical opinions in this area is the idea that it should be illegal to sell a closed and proprietary ML model for areas of public safety, specifically in hospitals and in courts/jails. The public’s interest in transparency in such matters trumps the company’s copyrights. Trained experts get a chance to inspect every drug and every medical device that’s used; why shouldn’t they get to see how a ML model used in a hospital was trained?
Completely agree. I've not seen it tested legally, but the EU now has a 'right to explanation' (under the GDPR) where automated decisions are made about people. This would bar closed ML models from most such arenas.
I don't see why the training data would need to be provided. The model would, and the model is derived from the training data, but the data itself shouldn't need to be shared. Explaining a model of any real complexity is hard with or without the training data, so it might not be easy - but that's just the nature of complex models.
Yes, this is why I'm not sure how it's been tested. In the scientific literature the data is often anonymised and made available publicly or to interested researchers, but you can't always anonymise the data adequately, so an audit process might be necessary.
Granted this is the review of a single model, but given that it's by one of the big players in the EMR space, it shows how challenging this problem is. Sepsis is easy to under-recognise, and difficult to identify reliably without lots (and lots) of false alarms.
I think this headline is slightly misleading since the writeup doesn't really go into the "why" or generalize past Epic's performance.
The TLDR is much more accurate: "A widely used sepsis prediction tool demonstrated poor performance in an independent validation, with worse results than clinician judgement and an increased burden of alert fatigue."