It's not enough for them to be "better" than a human. When they fail, they also have to fail in a way that is legible to a human. I've seen ML systems fail in scenarios that were obvious to a human and succeed in scenarios where a human would have found the task impossible. The opposite needs to be the case for them to be generally accepted as equivalent; in particular, their failure modes need to be confined to cases where a human would also have failed. In the situations I've seen, customers have been upset about the performance of the ML model because the solution to the problem was patently obvious to them. They've probably been more upset about that than about situations where the ML model fails and the end customer would have failed too.
reply
That's not a citation.
reply
That’s because there’s no objective research on this. Similarly, there are no good citations to support your objection. They simply don’t exist yet.
reply
Maybe not worth discussing something that cannot be objectively assessed then.
reply
Then don't; all I did was offer my thoughts in a public comments section.
reply
It's a rough account of why I think this way, along with an acknowledgment that I don't have objective citations. So sure, it's not a citation; I said as much, right in the middle there.
reply