Is the above comment a genuine question? I’m concerned it’s a rhetorical one that doesn’t get to the heart of the matter; namely, what is the empirical performance? Our ability to explain that performance doesn’t always keep pace with the performance itself.

How about we pick a specific LLM evaluation and get concrete? These models have strengths and weaknesses, and on some benchmarks they do outperform humans.

I often see people latching onto some reason that “proves” to them “LLMs cannot do X”. Stop and think about how strong such a claim has to be: it’s masquerading as an impossibility proof, and impossibility proofs are hard to come by.

Cognitive dissonance is a powerful force. Hold your claims lightly.

There are frequent misunderstandings here on HN about the kinds of things transformer-based models can learn. Many people use the phrase “stochastic parrots” derisively, and most of the time I think they’re getting it badly wrong. A careful reading of the original paper is essential, not to mention the follow-up work.
