The incentive is to prosecute and prove the charges.
Speaking from the experience of being falsely accused after calling 911 to stop a drunk woman from driving.
The narrative they "investigated" was so obviously false that bodycam evidence directly contradicted multiple key facts. Officials were interested only in proving the case. Thankfully the jury came to the right verdict.
The truth is much more complicated and involves politics. For example, Seattle (and possibly other cities?) enacted a law that requires paying damages for being wrong when bringing certain types of charges. But that has resulted in some widely publicized examples where the prosecutor erred by being overly cautious.
Minimum 1 year of jail time for grossly wrongful arrests that could have been avoided with standard procedure or investigative tactics that were not applied.
We could sit here all day arguing “you should always validate the results”, but even on HN there are people loudly advocating that you don’t need to.
You should always validate the results, but there is an inherent difference between an AI-generated tool for personal use and a tool which could be used to destroy someone's life.
To the extent people trust AI to be infallible, it's just laziness and rapport (AI is rarely if ever rude without prompting, nor does it criticize extensive question-asking as many humans would; it's the quintessential enabler[1]) that cause people to assume that because it's useful and helpful for so many things, it'll be right about everything.
The models all have disclaimers that state the inverse. People just gradually lose sight of that.
[1] This might be the nature of LLMs, or it might be by design, similar to social media slop driving engagement. It's in AI companies' interest to have people buying subscriptions to talk with AIs more. If AI goes meta and critiques the user (except in more serious cases like harm to self or others, or specific kinds of cultural wrongthink), that's bad for business.
Why it happens is secondary to the fact that it does.
> The models all have disclaimers that state the inverse. People just gradually lose sight of that.
Those disclaimers are barely effective (if at all), and everyone knows that. Including the ones putting them there.
I see all kinds of people being told that AI-based software used for detecting AI in writing is infallible!
You want to make sure people aren't using fallible AI? Use our AI to detect AI? What could possibly go wrong.
"The trauma, loss of liberty, and reputational damage cannot be easily fixed,” Lipps' lawyers told CNN in an email.
That sounds a LOT like a statement you make before suing for damages, not to mention they literally say "Her lawyers are exploring civil rights claims but have yet to file a lawsuit, they said."
This lady probably just wants to go back to normal life and get some money for the hell they put her through. She has never been on an airplane before; I doubt she is going to take on the entire system like you suggest. Easier said than done to "challenge the entire system" - what does that even mean, exactly?
...Unable to pay her bills from jail, she lost her home, her car and even her dog.
There is not a jury in the country that will side against the woman. I am not even sure who will make the best pop culture mashup - John Wick or a country songwriter? (Also, what happened to journalism - no Oxford comma?)
Where your home was lost to foreclosure because one JUDGE did not look at the paperwork.
There should be a way to personally sue somebody when they don't do their job of protecting the innocent. The JUDGE failed badly here.
Flimsy evidence would mean no warrant. Do your basic investigation, please... A rubber-stamping JUDGE caused this.
Why are they not named? It's as if they were a spectator. In fact, they are the cause.
Also rather unreasonable to arrest someone who is clearly neither violent nor a flight risk. You could literally hold the trial via video conference at that point and there would be no downside.
Effectively it just raises taxes to cover the cost of these failed prosecutions.
Every time one of these cases happens, a cop and a prosecutor should be out of a job permanently. Possibly even jailed. The false arrest should cost the cop their job and get them blacklisted, and the wrongful prosecution should cost the prosecutor their right to practice law.
And if the police union doesn't like that and decides to strike, every one of those cops should simply be fired. Much like we did to the ATC. We'd be better off hiring untrained civilians as cops than to keep propping up this system of warrior cops abusing the citizens.
It absolutely was. There's no question of this. Now we need to ask: how was the system marketed, what did the police pay for it, and how were they trained to use it?
> anybody bothered to ask her "where were you the morning of july 10th between 3 and 4pm."
Legally that amounts to "hearsay" and cannot have any value. Those statements probably won't even be admissible in court without other supporting facts entered into evidence first.
> we are all guilty until cleared.
This is not a phenomenon that started with AI. If you scratch the surface, even slightly, you'll find that this is a common strategy used against defendants who are perceived as not being financially or logistically capable of defending themselves.
We have a private prison industry. The line between these two outcomes is very short.
How is that hearsay if she's directly testifying to her own whereabouts?
Hearsay would be if someone else testified "she was in X location on july 10th between 3 and 4pm", without the accused being available for cross-examination.
"I was at the library" is firsthand testimony.
"I saw her at the library" is firsthand testimony.
"I saw her library card in her pocket" is firsthand testimony.
"She was at the library - Bob told me so" is hearsay. Just look at the word - "hear say". Hearsay is testifying about events where your knowledge does not come from your own firsthand observations of the event itself.
Better just to apply Musk or Altman software to the problem and avoid it entirely.