1. lots of room for progress, i.e. the theoretical ceiling dwarfed the current capabilities
2. strong incentives to continue development, i.e. monetary or military success
3. no obviously better competitors/alternatives
4. social/cultural tolerance from the public
This literally hasn't happened. Even if you can find one or two examples, they are dwarfed by hundreds of counterexamples. More likely, you won't find any examples at all, or you'll just find something recent where progress is still ongoing.
Useful technology with room to improve almost always improves, as people find ways to make it better and cheaper. AI costs have already fallen dramatically since LLMs burst onto the scene a few years back, yet demand is higher than ever, as consumers and businesses are willing to pay top dollar for smarter and better models.
1. As I said before, we've long since reached diminishing returns on models. We simply don't have enough compute or training data left to make them dramatically better.
2. This is only true if it actually pans out, which is still an open question.
3. Just... not using it? The technology has to justify its existence. If its benefits don't outweigh its costs, why bother?
4. The public hates AI. The proliferation of "AI slop" makes people despise the technology wholesale.
2. Sure, depends on #1. But the incentive is undeniable.
3. It has. Do you think people are using Claude Code in incredible numbers for no reason?
4. The public and businesses are adopting AI en masse. It's incredibly useful, and demand is skyrocketing. I don't think you could show that negative public sentiment has been sufficient to stop this, any more than negative sentiment about TVs, headphones, bicycles, etc. (which was significant) stopped those.
With the exception of #1, I feel like you're arguing that these things won't happen, when the numbers show they already have happened and are accelerating.