People also "synthesize from the data they were trained on". Intelligence is a result of that. So this dead-end argument ends up begging the question: LLMs don't have intelligence because LLMs can't have intelligence.
Couldn't you say that about 99% of humans too?
And of course, if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people making everyday contributions in their daily lives, you get even more examples. Sure, much of this knowledge may not be widespread (it may be locked up within private institutions), but its impact can still be felt throughout the economy.
How? By also "synthesizing the data they were trained on" (their experience, education, memories, etc.).
That might read like an insult to Lattner, but my real point is that we tend to hold AIs to a much higher standard than we hold humans, because the real goal of such commentary is to dismiss a perceived competitive threat.