We humans hallucinate, daily in fact. An example, for people who have never had long hair:
1) Grow your hair long.
2) Your peripheral vision will start to be consumed by your hair.
3) Your hair will fall and sway, putting your brain into fight-or-flight mode, and you will turn your head to look.
4) Turning and looking provides the feedback to acknowledge it was a hallucination.
5) Your brain now suppresses the fight-or-flight response, because continual feedback trained it to recognize that it was just the wind blowing your hair, or the position of your own head, that caused the movement.
Even though I have just told you about this, the first time you grow your hair out afterward your brain will still need the real-world experience to mitigate the hallucination.
AI has none of these abilities ...
Exactly!
Humans possess this amazing ability to understand and extrapolate beyond personal experience.
It's called "intelligence".
LLMs don't really comprehend much of anything. They just look at what is in their training data and try to find similar questions or discussions in order to assemble a plausible-sounding answer based on probability.
Not the sort of thing anyone should rely on for "critical" decision making.
I feel like we're going around in circles here. So I'll try to explain one last time.
Most of the content about nuclear war in any LLM's training set is almost surely about how horrifying it is and how we must never engage in it. Because that's what humans usually say about nuclear war. The plausible sounding answer about nuclear war, based on probability, really should be "don't do it". So why isn't it?
Easy answer --- it only focused on "winning". It never bothered considering the consequences.
A similar lack of judgment is manifested by LLMs every day. They work from memory and probability --- not to be confused with "intelligence".
And I'm asking why. Nearly no human alive has experienced nuclear war. The nuclear taboo is strongly represented in any source an AI would have consumed. We know about the nuclear taboo because we've been told over and over.
> Computers can only predict the next best word in their response from a statistical map that has no connection to meatspace
This argument is at least 2 years old. The statistical map came from human experiences in meatspace. It wasn't generated randomly. It has at least some connection to the real world.
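To make that concrete: here is a minimal Python sketch of what "predicting the next best word from a statistical map" amounts to mechanically. Everything in it is invented for illustration (the candidate words, their probabilities, and the next_word helper are all hypothetical); a real model derives its probabilities from weights trained on human text and conditions them on the whole context.

```python
import random

def next_word(context: str) -> str:
    # Hypothetical probabilities a model might assign to continuations.
    # The context is ignored here, which is exactly the simplification:
    # a real model conditions these numbers on everything said so far,
    # over a vocabulary of tens of thousands of tokens.
    candidates = {"negotiate": 0.55, "retreat": 0.30, "launch": 0.15}
    words = list(candidates)
    weights = [candidates[w] for w in words]
    # Sample one continuation in proportion to its probability.
    return random.choices(words, weights=weights, k=1)[0]

print(next_word("In response to the threat, we should"))
```

The mechanism really is that simple, yet the probabilities themselves encode an enormous amount of human experience.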
Just because how something works seems simple doesn't mean what it does is simple.