Please be specific, because outside of anecdotal blog posts by people who don’t know what they’re talking about, it’s not true. Look at scaling laws and composite benchmarks like the Epoch capability index: nothing at all suggests “model progress is slowing down”.
reply
Which indications are those?
reply
The cost of the new models compared with the old ones.
reply
deleted
reply
Nobody is releasing NEW models
reply
…not only is this not true but it also doesn’t matter. Why would this indicate performance saturating?
reply
What constitutes a NEW model for the purposes of calculating progress?
reply
What? DeepSeekV3 just came out and is incredible for the price. Mythos is also half-released.
reply
The standard networking connection has been called “Ethernet” for more than thirty years, so networking has stagnated, right?
reply
If higher-bandwidth networking consisted primarily of running more and more Ethernet lines in parallel, you would most certainly agree that "networking has stagnated".

"Reasoning" and now "Agentic" AI systems are not some fundamental improvement on LLMs, they're just running roughly the same prior-gen LLMS, multiple times.

Hence the conclusion that LLM improvement has slowed down, if not stagnated entirely, and that we should not expect the improvements from switching to these “reasoning” systems to keep happening.
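
To make that concrete, here is a minimal, hypothetical sketch of what a “reasoning” pipeline in this sense amounts to. call_llm is a stand-in for whatever base-model API is actually used, and the majority vote is just one common aggregation scheme (self-consistency); the point is that it's the same weights, called repeatedly:

    # Hypothetical sketch: a "reasoning" system as repeated calls to one base LLM.
    from collections import Counter

    def call_llm(prompt: str) -> str:
        """Stub: one completion from the underlying base model."""
        raise NotImplementedError("wire this up to the actual model API")

    def reason(question: str, n_samples: int = 8) -> str:
        # Sample the same base model several times with a chain-of-thought prompt...
        answers = []
        for _ in range(n_samples):
            out = call_llm(f"Think step by step, then give a final answer:\n{question}")
            answers.append(out.strip().splitlines()[-1])  # treat the last line as the answer
        # ...then aggregate by majority vote (self-consistency).
        return Counter(answers).most_common(1)[0][0]

Any extra capability here comes from spending more inference-time compute on the same model, not from a new model.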

reply
Investment dollars.
reply
Source for that claim?
reply