From that perspective, which I think is correct, it makes you wonder what other domains of knowledge would look like when pushed to the boundaries of our capabilities as a species.
reply
That is a genuinely thought-provoking idea.
reply
Do you know of any other statistical model that can "hallucinate"? LLMs clearly have emergent capabilities that come from scale and that are absent in any other statistical model we've ever dreamt up.

We know that LLMs build complex internal representations of language, logic, and concepts rather than just shallow word-counting.

If you deny that, then you probably have an elementary understanding of how they work. Not even Chomsky denies it. The real argument, imo, is whether those internal representations constitute an actual "understanding" of the world or flatten out to something much less interesting.

reply
> Do you know of any other statistical model that can "hallucinate"?

Actually, most statistical models can "hallucinate", specifically those that are capable of interpolation.

I have witnessed this myself, for example with Gaussian processes in my own scientific work.
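To make the point concrete, here is a minimal pure-Python sketch (my own illustrative example, not a Gaussian process and not from any particular paper) of the same mechanism: an interpolating model that is exact at every training point yet confidently reports values between them that the true function never takes.

```python
# Sketch: an interpolating fit "hallucinating" between its data points.
# We fit a Lagrange polynomial through samples of f(x) = |x|; it matches
# the data exactly, but between the nodes it invents plausible-looking
# values that differ from the true function.

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                # Standard Lagrange basis factor for node i.
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [-2, -1, 0, 1, 2]
ys = [abs(x) for x in xs]      # the model is exact at every training point

print(lagrange(xs, ys, 0.5))   # ~0.281, but the true value |0.5| is 0.5
print(lagrange(xs, ys, 1.5))   # ~1.781, but the true value |1.5| is 1.5
```

A GP posterior mean does the same kind of thing between observations, just with a smoothness prior instead of a fixed polynomial degree: zero error at the data, smoothly invented values everywhere else.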

reply