How about we pick a specific LLM evaluation and get concrete? LLMs have strengths and weaknesses, and some do outperform humans in certain areas.
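To make "get specific" concrete: an evaluation is just a dataset plus a scoring rule. Here's a minimal sketch of an exact-match eval harness; `ask_model` and the tiny dataset are hypothetical stand-ins for a real LLM call and a real benchmark, not anyone's actual setup.

```python
# Minimal exact-match evaluation sketch.
# `ask_model` is a hypothetical stand-in for a real LLM API call.

def ask_model(question: str) -> str:
    # Hypothetical model: canned answers, one deliberately wrong.
    canned = {
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
        "Largest planet?": "Saturn",  # wrong on purpose
    }
    return canned.get(question, "")

def exact_match_score(dataset: list[tuple[str, str]]) -> float:
    """Fraction of questions where the model's answer matches the
    gold answer exactly, after trivial normalization."""
    correct = sum(
        ask_model(q).strip().lower() == gold.strip().lower()
        for q, gold in dataset
    )
    return correct / len(dataset)

dataset = [
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
    ("Largest planet?", "Jupiter"),
]
print(exact_match_score(dataset))  # 2 of 3 correct
```

Even a toy harness like this forces the discussion onto specifics: which tasks, which scoring rule, which failure cases, rather than blanket claims about what LLMs can or cannot do.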
Often I see people latching onto some reason that "proves" to them that "LLMs cannot do X". Stop and think about how strong such a claim has to be: it is masquerading as an impossibility proof.
Cognitive dissonance is a powerful force. Hold your claims lightly.
There are often misunderstandings here on HN about the kinds of things transformer-based models can learn. Many people use the phrase "stochastic parrots" derisively; most of the time I think these folks are getting it badly wrong. A careful reading of the original paper is essential, not to mention the follow-up work.
There, I've shaved a lot of the spread off your argument. Possibly enough to moot the value of the AI, depending on the domain.
Much like with Wikipedia, using AI to start on this journey (rather than blindly using quick answers) makes a lot of sense.