I think the difference is that LLMs are a very complex mix of information and concepts, which can be combined at higher orders. So an underlying wrong fact could go unnoticed and contribute to faulty reasoning. A hard fact like a wrong city name would blow up quickly. A wrong assumption about political dynamics is probably harder to detect, since it's woven into a complex mix of information.
"Is it safe to travel to the US as an EU citizen of arab descend?"
GPT: Yes it's safe. GEMINI: Yes but... [gave a few legitimate warnings]
I wouldn't give that recommendation to a fellow Arab citizen right now. Though I am cautious in such matters and I hate to travel anyway, so I am biased. But general concerns aren't totally ungrounded.
Neither of the LLMs pointed out the general tension around ICE activity.
AI is just the current scapegoat.