05.02.22
Machine learning has been shown to be accurate in many areas of healthcare, but clinical usage of ML is still very low.
A recent paper in Nature reviews the use of AI in healthcare. I've got an interest in this, having done quite a lot of work in the area, and I found the paper useful.
The main finding of the paper is the disconnect between how much research is being done in this area and how little of it is actually used in practice. Particularly for imaging, ML has been shown to be at least as good as humans (in trials) across a range of tasks, yet real-world usage of these systems is still low.
I think the situation is probably similar outside of healthcare, but in healthcare it is more pronounced. A common scenario: a proof of concept gets built, a model works, a paper gets published, a business gets funding, and then progress stalls. Performance in the wild is less reliable. Maintaining and deploying systems on crumbling hospital infrastructure is tricky. Clinicians lose faith and it never takes off.
Healthcare is hard. You have to prove your system works, and that means more than just accuracy: you have to actually improve patient outcomes overall. In a commercial setting you can easily run an A/B test, deploy a 'just working' model and iterate until it really works. Healthcare is much harder. You can't just try things out on unsuspecting patients. Stakes are high, and the expertise you are replacing is deep and nuanced. Data is messy, and patients have co-morbidities; systems that solve one narrow problem have no background knowledge of other illnesses. Pharmaceutical innovations can take decades and billions to get working; don't expect ML to be any different.
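To make the contrast concrete: in a commercial setting, "evaluation" can be as little as a significance test over conversion counts from two variants, something like the sketch below (the numbers and function are made up for illustration, not from the paper). In healthcare the equivalent evidence is a clinical trial against patient outcomes, which is why the iteration loop is so much slower.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between variant A and variant B of an A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical traffic split: variant B's model looks better, ship it and iterate.
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```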