The use case here is police facial recognition, not hitting nails. The parent wasn't saying "AI is a liability" with no context.
The problem here is incidental to the tool; it was the cops who did this, and therefore nobody will be held accountable.
That would be the vendors, the system planners, and the institutions that greenlit this. It would also include the larger tech-finance circle that is trying to drive large-scale AI adoption, like Peter Thiel, who sees technology as an "alternative to politics", i.e., a way to circumvent democracy [1].
[1] https://stavroulapabst.substack.com/p/techxgeopolitics-18-te...
As much as I detest Clearview and Thiel, the fault for this incident falls squarely on the justice system.
Only one small problem --- there is no way to tell if you are using it "correctly".
The only way to be sure is to not use it.
Using it basically boils down to "Do you feel lucky?"
The Fargo police didn't get lucky in this case. And now the liability kicks in.
Look for something similar to play out elsewhere --- using unreliable tools for decision-making is not a good, responsible business plan. And lawyers are just waiting to press the point.
I’m very opposed to AI in general, but this one is clearly a human failure.
The noteworthy AI angle is the undeserved credence police gave to AI-generated information. But that is ultimately their failure; they should be investigating all information they receive.
Absolutely.
The failure starts with tool vendors who market these statistical/probabilistic pattern searchers as "intelligent". The Fargo police failed to fully evaluate these marketing claims before putting the tool to work.
So in the same way that the failure rolled downhill, liability needs to roll back up.
At some point, you have to decide if wasting good money on bad intel makes sense.
https://www.lawlegalhub.com/how-much-is-a-wrongful-arrest-la...
But...
> there is no way to tell if you are using it "correctly".
This simply isn't true, at least in cases like this.
I know common sense isn't really all that common, but why would you give more credence to an untested tool than to an untested, crack-addled human informant?
The entire point of the informant, or the AI in this instance, is to generate leads, which subsequently need to be checked.
But this approach negates much of the incentive to pay for questionable results.
As is true with results from people.
> But this approach negates much of the incentive to pay for questionable results.
I'm not sure that follows. Even the crack-addled human informant has always been paid for questionable results.
Now, if I misused a hammer and it hurt the thumb of everyone in my country, then maybe what you said would have some merit.
Otherwise, I'd say it's an extremely lazy argument.