I'm pretty sure there's a 3-year design goal starting this year that'll do that to any of the Qwen, DeepSeek, etc. models. There's a lot you could do with sped-up models of this quality.
It might even be bad enough that the real bubble is the giant data centers we don't actually need, when 80-90% of use cases could just be a silicon chip with a model on it rather than, as you say, bloated SOTA.
If there's a breakthrough in memristors, you could end up with another 20x reduction in circuit elements (get rid of memory bottlenecks, start doing multiplication ops as log-transform voltage addition).
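To make the log trick concrete, here's a toy numerical sketch in Python (not a circuit model; the function name is just for illustration): move the operands into the log domain and the multiply becomes an addition, which analog hardware could do by summing voltages.

```python
import math

def log_domain_multiply(a: float, b: float) -> float:
    """Multiply two positive numbers by adding their logarithms.

    In a hypothetical memristor/analog implementation, the log and exp
    transforms would come from device physics, and the "multiply" itself
    would just be two voltages being summed.
    """
    # log(a) + log(b) == log(a * b), so exponentiating recovers the product
    return math.exp(math.log(a) + math.log(b))

print(log_domain_multiply(3.0, 4.0))  # ~12.0, matching 3 * 4 up to float error
```

The rough idea is that summing voltages or currents is far cheaper in circuit elements than a full digital multiplier, which is where the claimed reduction would come from.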
The ceiling is ultra high for how far AI can go.
All the easily verifiable domains such as mathematics, coding, and things that can be run inside a reasonable simulation are falling very, very fast.
By next year, if not sooner, mathematicians will be wildly outpaced by LLMs at reasoning.
So it's not impossible for things that seem orthogonal, like generation speed or context length, to have an impact on the quality of results.