> I read 10 comments before I realized that this was referring to 10 years in the FUTURE and not in the PAST (as would be required for it to be a hallucination).

omg, the same for me, I was halfway through telling my colleague about the 100% rest kernel ...

reply
Ha ha! But yes, I was confused too, especially since the title says "10 years from now"... not specifying in which direction.
reply
You're right, this is how people are PRESENTLY using the term "hallucination," but to me this illustrates the deeper truth about that term and that concept:

As many have said, but it still bears repeating -- they're always hallucinating. I'm of the opinion that it's a huge mistake to use "hallucination" to mean "the opposite of getting it right." It's just not that. They're doing the same thing either way.

reply
You're correct, OP used the word "hallucination" wrong. A lot of these other comments are missing the point – some deliberately ('don't they ONLY hallucinate, har har'), some not.

For those who genuinely don't know – hallucination specifically means the false-positive identification of a fact or inference (accurate or not!) that isn't supported by the LLM's inputs. A rough sketch of the rule follows the examples below.

- ask for capital of France, get "London" => hallucination

- ask for current weather in London, get "It's cold and rainy!" and that happens to be correct, despite not having live weather data => hallucination

- ask for capital of DoesNotExistLand, get "DoesNotExistCity" => hallucination

- ask it to give its best GUESS for the current weather in London, it guesses "cold and rainy" => not a hallucination
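
To make the rule concrete, here's a rough Python sketch of the distinction these examples draw. The names and boolean flags are made up purely for illustration, not any real API:

    # Toy rule with made-up names -- not any real library.
    from dataclasses import dataclass

    @dataclass
    class Response:
        asserted_as_fact: bool      # stated as fact, or explicitly framed as a guess?
        supported_by_inputs: bool   # grounded in the prompt/context/data the model actually had?
        happens_to_be_true: bool    # irrelevant to the classification

    def is_hallucination(r: Response) -> bool:
        # A hallucination is an unsupported claim asserted as fact,
        # regardless of whether it happens to be accurate.
        return r.asserted_as_fact and not r.supported_by_inputs

    print(is_hallucination(Response(True, False, False)))  # "London" as capital of France -> True
    print(is_hallucination(Response(True, False, True)))   # correct weather with no live data -> True
    print(is_hallucination(Response(False, False, True)))  # explicitly framed as a guess -> False

Whether the answer happens to be true never enters into it; what matters is whether an unsupported claim was asserted as fact.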

reply
There is no technical difference.
reply
There is a semantic one.
reply
Don’t LLMs only ever hallucinate?
reply
It’s apt, because the only thing LLMs do is hallucinate: they have no grounding in reality. They take your input and hallucinate to do something “useful” with it.
reply
Extrapolation is a subset of hallucination.

The ubiquitous usage of "hallucination" that I see merely means "something the LLM made up".

reply