You should always validate the results, but there is an inherent difference between an AI-generated tool for personal use and a tool that could be used to destroy someone's life.
To the extent people trust AI to be infallible, it's just laziness and rapport that cause them to assume that because it's useful and helpful for so many things, it'll be right about everything. AI is rarely if ever rude without prompting, nor does it criticize extensive question-asking the way many humans would; it's the quintessential enabler.[1]
The models all have disclaimers that state the inverse. People just gradually lose sight of that.
[1] This might be inherent to LLMs, or it might be by design, similar to how social media slop drives engagement. It's in AI companies' interest to have people buying subscriptions to talk with AIs more. If the AI goes meta and critiques the user (except in more serious cases like harm to self or others, or specific kinds of cultural wrongthink), that's bad for business.
Why it happens is secondary to the fact that it does.
> The models all have disclaimers that state the inverse. People just gradually lose sight of that.
Those disclaimers are barely effective, if at all, and everyone knows it, including the people putting them there.
I see all kinds of people being told that AI-based software for detecting AI-generated writing is infallible!
You want to make sure people aren't relying on fallible AI? Use our AI to detect AI. What could possibly go wrong?