Every single person, every one of them, that I have watched google something since AI Overviews launched will instantly reference the AI Overview. And that model is some bottom-rung, high-volume model, not even Gemini.
reply
The best way to deal with that is to kick the AI overview off using your browser.
reply
Yes, this is the problem. If you give people something with an oracular interface, they will treat it like an oracle.
reply
Your friends should know better. That their behavior is prevalent does not contradict that.
reply
This answer really isn't good enough. The providers can't both aim to replace search and claim PhD-level intelligence that will do all the jobs, while hiding behind "it makes mistakes" in small print.
reply
Yes, and the world should be a utopia and everyone should be happy and we all wish for world peace and yada yada yada. What you are saying is a vision of an ideal world as it should be, but it doesn't help anyone understand the real-world problems.
reply
You can't seriously compare the problem of world peace with the problem of exercising the most basic level of critical thinking w.r.t. LLM output after it has already proven itself unreliable. That's not a utopian dream, it's a level of prudence on par with not sticking a fork in an electrical socket.
reply
You're seriously overestimating the average person's ability to understand what LLMs are.

Look at all the influencers, streamers, and podcasters constantly asking them things and taking the answers as fact - live.

Isn't The Joe Rogan Experience like the most-watched podcast or something? Every episode I've ever stumbled upon, he "fact checks" multiple things via their sponsor, which is just an LLM provider specialized in news.

People aren't good at statistics. If something is close enough to the truth enough times, and talks authoritatively on everything in good English... guess what, they're gonna trust it.

reply
I would happily bet that you too have fallen for this at least once. Unless you cut AI out of your life completely and do not interact with others.

AI output is like that COVID video of contamination: you almost can't avoid it unless you scrupulously check each and every thing presented as fact that you are exposed to. And absolutely nobody does that.

reply
You may demand that of yourself, but for others we must design around the fact that they are stupid. You do not have the power to change their stupidity, only your response to it.
reply
yes, but the electrical socket in question is a fairly new-fangled one - who doesn't want to fork-test it a bit?
reply
I think this is an issue with anyone who relies on any LLM. But yeah, I agree, and I have had similar issues where someone will get defensive because they just don't want to admit that it (the LLM's response) was wrong. It's hard to tell someone in a "nice/nonchalant" way:

"It's fine, the LLM just lied to you, but hallucinations and making claims based off of assumptions is just something they do and always have done!"

People don't like to feel dumb, and they don't want to feel betrayed by the same tool that gave them incredible, factually correct results that one time, only to give them complete and utter bullshit (that sounded legitimate) another time.

Also, yeah, it feels like it's everywhere these days and isn't showing any signs of slowing down (visited my parents, and my dad's using Siri to ask ChatGPT stuff now - URGHHHH), and I really hope we're both wrong.

reply
> almost all of my friends working in critical domains, like as a judge or engineer or lawyer or even doctor, they seem to trust ChatGPT more or less blindly.

We do not live in a meritocracy, because society has no means to judge merit. We live in a society ruled by people who crammed before the tests, and who wrote the papers to agree with and flatter the teacher. Now they are the teachers (and bosses), and

1) expect to be flattered (and LLMs have been built as the ultimate flatterers),

2) feel that a good, ambitious student (or subordinate) will not question them and their work, but instead learn to conform to it, and

3) are not particularly interested in the quality of their work as such, but rather in the acceptance of their work. In certain professions, such as judges, doctors, high-level lawyers and engineers, or politicians, they feel (with good reason) that they can demand acceptance of their work and punish those who don't accept it.

This position is what they worked so hard for as young people. They were not working to become the best at their jobs. They were working to get the most secure jobs. The most secure jobs are the ones that bad or lazy work doesn't endanger.

reply
>but when I look at almost all of my friends working in critical domains like as a judge or engineer or lawyer or even doctor, they seem to trust ChatGPT more or less blindly

That's why I lost trust and faith in people who end up in positions of doctor, lawyer, or judge. When I was young, I used to think they must be the smartest, most high-IQ people in society, having read the most books and having the highest levels of critical thinking and debate skills ever. When in fact they were only good at memorizing and regurgitating the right information that the school required to pass the exam that gave them that prestigious title, and that's it.

Now, in my mid-30s, when I talk to people from these professions over a beer, at a barbecue, or at any other casual gathering, I realize they're really not that sharp or well read or immune to propaganda and misinformation, and anyone could be in their place if they put in the grind work at the right time. It's a miracle our society functions at all.

reply
on the flip side, so much ChatGPT usage, full of flaws, doesn't seem to really matter in various "critical domains." you can't generalize "critical."
reply