>Despite people applying the label of AI to them, LLMs don't have a shred of intelligence. That is inherent to how they work. They don't understand, only synthesize from the data they were trained on

People also "synthesize from the data they were trained on". Intelligence is a result of that. So the argument dead-ends into begging the question: LLMs don't have intelligence because LLMs can't have intelligence.

reply
> don't have a shred of intelligence. ... They don't understand, only synthesize from the data they were trained on.

Couldn't you say that about 99% of humans too?

reply
99% of humans in a particular specialization, sure. It's the 1% who become experts in that specialization who are able to advance the state of the art. But it's a different 1% for every area of expertise! Add it all up and you get a lot more than 1% of humans contributing to the sum of knowledge.

And of course, if you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people to make everyday contributions in their daily lives, you get even more. Sure, much of this knowledge may not be widespread (it may be locked up within private institutions), but its impact can still be felt throughout the economy.

reply
>99% of humans in a particular specialization, sure. It's the 1% who become experts in that specialization who are able to advance the state of the art

How? By also "synthesizing from the data they were trained on" (their experience, education, memories, etc.).

reply
Yes, and the natural extension is that a lot of what people do day to day is not work driven by intelligence; it is just reusing a known solution to the problem at hand, applied in a bespoke manner. And that is exactly the sort of thing AI excels at.
reply
The LLM was trained on 100% of humans; the 99% you’re scoffing at are the ones feeding the LLM its answers.
reply
100% (or close to it) of the material AI trains on was human-generated, but that doesn't mean 100% of humans are generating useful material for AI training.
reply
Let's train one on just the expert-written code and books then, and not the entirety of GitHub or Stack Overflow and such, and see how it fares...
reply
Yes... maybe not 99%...
reply
You could say the same thing about Chris Lattner. How did he advance the state of the art with Swift? It’s essentially just a subjective rearranging of deck chairs: “I like this but not that.” Someone had to explain to Lattner why it was a good idea to support tail recursion in LLVM, for example (a short sketch of what that means follows below), something he would have already known if he had been trained differently. He regurgitates his training just like most of us do.

That might read like an insult to Lattner, but what I’m really pointing out is that we tend to hold AIs to a much higher standard than we hold humans, because the real goal of such commentary is to dismiss a perceived competitive threat.
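
For anyone unfamiliar with the term, here is a minimal sketch of what tail recursion means (an illustrative Swift example of my own, not code from Swift or LLVM): the recursive call is the last thing the function does, so a backend like LLVM can rewrite the recursion as a loop instead of growing the stack.

    // Illustrative sketch only: a tail-recursive sum.
    // The recursive call is in tail position (nothing runs after it),
    // so an optimizer such as LLVM may turn it into a loop and the
    // stack need not grow. Swift itself does not guarantee this.
    func sumTo(_ n: Int, _ acc: Int = 0) -> Int {
        if n == 0 { return acc }
        return sumTo(n - 1, acc + n)  // tail call
    }

    print(sumTo(10_000))  // prints 50005000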

reply