upvote
"The boy who cried wolf" is a story about false positives, so if that's what you want to avoid, then you want to get close to 100% specificity and accept that there are many things the tool will not catch. If, as you propose, the tool would mainly be used to create a low-confidence list of potential problems that a human will then review, then casting a wide net and calibrating for high sensitivity instead does make sense.
reply
The idea is to minimize false positives ("the boy who cried wolf") while at the same time mitigating, or better yet eliminating, false negatives. The main reason is that with a physician in the loop, the system can be optimized for sensitivity while specificity is relaxed. Of course, if we could get 100% on both sensitivity and specificity that would be great, but in life there's always a trade-off, c'est la vie.

In our novel ECG-based CVD detection system we get 100% sensitivity for both arrhythmia and ischemia, with inter-patient validation, not the biased intra-patient validation commonly reported in the literature, even in some reputable conferences/journals. Specificity is still around 90%, not yet 100% like sensitivity, but with the physician-in-the-loop approach, which is a diagnostic requirement in the current practice of medicine, this should not be an issue.
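For anyone unfamiliar with the terms being traded off here, a minimal sketch (with made-up counts, not numbers from the system described above) of how sensitivity and specificity come out of a confusion matrix, and why "100% sensitivity, ~90% specificity" means zero missed cases at the cost of false alarms the physician must review:

```python
# Illustrative only: hypothetical screening counts, not real clinical data.

def sensitivity(tp, fn):
    # True positive rate: fraction of actual positives the tool catches.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: fraction of actual negatives correctly cleared.
    return tn / (tn + fp)

# Calibrating for 100% sensitivity means driving fn to 0;
# the cost shows up as false positives (fp) handed to the human reviewer.
tp, fn, tn, fp = 50, 0, 900, 100

print(sensitivity(tp, fn))  # 1.0 -> no missed cases
print(specificity(tn, fp))  # 0.9 -> 100 false alarms for the physician to triage
```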

reply
I think this is mixing streams here.

Try narrowing the scope to remove the word 'AI' and just think 'Blood Test'.

We accept that machines can do these things faster and better than humans, and we don't lose sleep over it.

The AI will be faster and better than humans at so many things, obviously.

The "Hippocratic Oath" isn't hugely relevant to diagnosis etc.

These are systems we are measuring, that's it.

Obviously, for treatment and other things we'll need 'Hippocratic humans' ... but most of this is engineering.

I don't think doctors will even trust their own judgment for many things for very long, their role will evolve as it has for a long time.

reply
What do imperfect, biased and expensive human doctors add to the « liability and ethics » question exactly?
reply
You can't hide behind "computer says no".
reply
Human judgement and accountability
reply
Suppose you know for certain that AI has better sensitivity and specificity than your local physician for a particular diagnosis, which will likely be the case now or in a few years. Would you purposefully get inferior consultation just because of the Hippocratic Oath?
reply
Doctors will apply AI sooner than patients will, and they can check its results with confidence.
reply
This is almost the plot of "Minority Report."
reply
I agree. I think this is some sort of excuse not to use AI, based on some vague metaphysical reason like liability.
reply