ER staff frequently make inferences from a variety of cues: the weather, what the patient is wearing, what smells are present, and a whole lot of other intangibles. Frequently, patients are just outright lying to the doctor. An AI will not pick up on any of that.
It will if it trains on data like that. It's all about the training data.
Diagnostic standards in medicine (at least in emergency medicine, and I suspect in other specialties too) are largely a joke -- the ground truth is often either autopsy or "expert consensus."
We get to bill more for more serious diagnoses. The number of patients I see with a "stroke" or "heart attack" diagnosis who clearly had no such thing is truly wild.
We can be sued for tens of millions of dollars for missing a serious diagnosis, even if we know an alternative explanation is more likely.
If AI is able to beat an average doctor, it will be because it is free of these perverse incentives. But I can't imagine where we could get training data that would make it any less of a fountain of garbage than many doctors.
Without a large amount of good training data, how could AI possibly be good at doctoring IRL?
I don't understand how you think this doesn't win vs a human doctor.
What kind of embedding helps the AI learn to do a physical exam?
Not to mention patient privacy: I can't even take a still photo of a patient in my current system (even with a hospital-owned camera).
(Where AI is likely to actually excel in medicine is parsing datasets that are far more amenable to context-free number crunching than ER rooms are -- some of which physicians don't even have access to ...)
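To make that concrete, here's a minimal sketch of the kind of context-free number crunching I mean: unsupervised outlier flagging over tabular lab data with scikit-learn's IsolationForest. The panel values, reference layout, and thresholds are entirely made up for illustration; nothing here is clinical guidance.

    # Hypothetical sketch: batch-flagging anomalous lab panels.
    # All values and the panel layout are fabricated for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row is one patient's panel: [Na mmol/L, K mmol/L, Cr mg/dL, WBC 10^3/uL]
    panels = np.array([
        [140, 4.1, 0.9,  7.2],
        [138, 4.4, 1.0,  6.8],
        [141, 3.9, 0.8,  8.1],
        [118, 6.3, 4.2, 22.5],  # grossly abnormal row
        [139, 4.0, 1.1,  7.5],
    ])

    # Unsupervised outlier detection: no chart review, no context, just numbers.
    model = IsolationForest(contamination=0.2, random_state=0)
    flags = model.fit_predict(panels)  # -1 = outlier, 1 = inlier

    for i, flag in enumerate(flags):
        if flag == -1:
            print(f"panel {i} flagged for human review: {panels[i]}")

The point isn't the particular model; it's that this kind of task has clean numeric inputs and cheap verification, unlike a bedside encounter with a patient who may be lying.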
My sense is that doctors and AI would both be doing a lot better if they were just doing medicine, not serving as a contact surface for failures of housing, mental health and addiction services, and other social systems. Drug seeking and the rest should be non-issues, but drug seekers are informed and adaptive adversaries.