They have covered this in the article.

> But it is not curtains for emergency doctors yet, the researchers said. The study only tested humans against AIs looking at patient data that can be communicated via text. The AI’s reading of signals, such as the patient’s level of distress and their visual appearance, were not tested. That means the AI was performing more like a clinician producing a second opinion based on paperwork.

reply
> The study only tested humans against AIs looking at patient data that can be communicated via text.

This is like saying that LLMs can evaluate paintings better than art experts. But only when looking at data that can be communicated via text.

Of course they can, because it makes no sense to do such a thing.

reply
> That means the AI was performing more like a clinician producing a second opinion based on paperwork.

That actually seems like a good application – automatically get a quick AI second opinion for everything; if it's dissenting, the first/human medic can re-review, comment on why it's slop, or get a third/second-human opinion.

(I'm assuming most cases would be "You're absolutely right, that's an astute diagnosis.")
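As a minimal sketch of the gate described above – all function names and the naive string-equality agreement check are my own assumptions, not anything from the study:

```python
# Hypothetical "AI second opinion" gate: file concurring opinions,
# flag dissenting ones for human re-review. The agreement check here
# is a deliberately naive case-insensitive string comparison.

def second_opinion_gate(human_dx: str, ai_dx: str) -> str:
    """Route a case based on whether the AI second opinion dissents."""
    if human_dx.strip().lower() == ai_dx.strip().lower():
        return "concur"      # nothing to do; record both opinions
    return "re-review"       # dissent: human re-reviews or escalates

print(second_opinion_gate("influenza", "Influenza"))   # concur
print(second_opinion_gate("influenza", "meningitis"))  # re-review
```

In practice the comparison would need to handle synonymous diagnoses (ICD codes rather than free text), but the routing logic stays this simple.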

reply
On the other hand,

> there are few things as dangerous as an expert with access to open-ended data that can be interpreted wildly, like a clinical interview.

https://entropicthoughts.com/arithmetic-models-better-than-y...

reply
Agreed. I think the best use of this sort of tech is to use both to their strengths. Use AI to go over the record and suggest diagnoses which you have the doctor review after observing the patient.

The other thing is that common issues are common. I have to wonder how much that ultimately biases both the doctor and the LLM. If you diagnose someone that comes in with a runny nose and cough as having the flu you will likely be right most of the time.

reply
You could say the same about the AI. AI is incredibly well suited to extracting information through chat.

In this regard, a doctor also has just 15 minutes for an interview. An AI can be with the patient for days leading up to a consultation.

So if we remove this "handicap", the AI will likely really start to win.

reply
Chat seems like a really bad way to get patient information. You'll miss out on various cues doctors use to diagnose you. People can be ashamed of their symptoms and may try to hide them.
reply
It’s not good for a doctor to be your best friend. It doesn’t seem any LLM is capable of that emotional distance.
reply
It’s the ER. People aren’t always in a position to “chat” when they go there.
reply
You think current ER people work in complete silence? No words uttered?
reply
You think that they have “days leading up to consultation”? Please don’t be so disingenuous; I’m sure you know exactly what the person you’re replying to meant.
reply
> I’m sure you know exactly what the person you’re replying to meant.

No.

There are a lot of different modi operandi, and you can always find an outlier.

> Please don’t be so disingenuous;

Ditto

reply
Can't the same be said for the AI?
reply
No? Can an AI examine a patient in the physical world?
reply
Why not?
reply
If the answer is yes, let’s see that study.

This one compares AI to a human doctor practicing in a very unrealistic way.

reply
This feels like a deeply important observation. Now also, would be interesting to include e.g. a short video or photograph for the AI to use as well.
reply
My doctor makes me wait for weeks, then googles my symptoms in front of me, asks whether I checked the internet before coming in, and then gives me the first Google result as an answer, along with a suggestion to wait longer. He has done this several times.

When I got tired of this, I just lied to the emergency line and was admitted to hospital based on my lie, and they discovered a brain tumor that explained the other stuff.

I WISH I could just use AI.

reply
Bonus: health networks now push doctors to use AI transcription software for the EHR entries. Doctors and nurses like it because they don't have to type it up. But it's a complete shitshow as to whether the records are reviewed for transcription errors, which happen quite often.

Now feed a flawed transcript into an AI diagnosis system and bam-o. The AI will treat it as gospel, while the doctor may go "wait, what?"

reply