IMO we are either limited by data or reaching the limits of what's possible with the transformer architecture. Hardware will get us efficiency, but I'm not sure it will lead to smarter models.
reply
Moore's law is bypassed with volume: more datacenters.
reply
They already put a model into silicon and it's crazy fast: https://chatjimmy.ai/

I'm pretty sure there's a three-year design goal starting this year that'll do that for any of the Qwen, DeepSeek, etc. models. There's a lot you could do with sped-up models of this quality.

It might even be bad enough that the real bubble is how little we actually need giant data centers, when 80-90% of use cases could be served by a silicon chip with a baked-in model rather than, as you say, bloated SOTA.

reply
And this is an ASIC that still operates digitally. Imagine a chip with baked-in weights that does its math in analog, with a 20x reduction in the number of circuit elements needed for a multiplication op.

If there's a breakthrough in memristors, you could end up with another 20x reduction in circuit elements (get rid of memory bottlenecks, start doing multiplication ops as log-transformed voltage addition).
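
For intuition, here's a toy numerical sketch of that log-domain trick in plain Python (floating point, not a circuit model; the function name and the diode analogy are mine, purely illustrative): encode positive values as log-domain "voltages" so a multiply becomes a single addition.

    import math

    def analog_log_multiply(a: float, b: float) -> float:
        # Encode: a positive value x becomes a "voltage" ~ log(x), which
        # a diode/transistor's exponential I-V curve gives you almost for
        # free in analog hardware. (Positive values only in this toy.)
        v_a = math.log(a)
        v_b = math.log(b)
        # The only operation actually performed: adding two voltages.
        v_sum = v_a + v_b
        # Decode: exponentiate back to the linear domain.
        return math.exp(v_sum)

    print(analog_log_multiply(3.0, 4.0))  # ~12.0, up to float error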

The ceiling is ultra high for how far AI can go.

reply
It would be pretty cool to have interchangeable USB keys with models on them.
reply
Even at orders of magnitude greater speed, we've still hit diminishing returns in quality of output. We simply haven't found anything like superhuman reasoning ability, just (potentially) superhuman reasoning speed.
reply
I disagree with this. Reinforcement learning with verifiable rewards (RLVR) training is actually the secret sauce that is leading Claude and GPT to automate software engineering tasks.

All the easily verifiable domains such as mathematics, coding, and things that can be run inside a reasonable simulation are falling very very fast.

By next year if not sooner, mathematicians will be wildly outpaced by LLMs for reasoning.

reply
Coding is anything but “easily” verifiable.
reply
It's extremely verifiable. The reinforcement finetuning strategy I'm referring to involves an LLM creating coding tasks with an expected output, implementing the code, and then having a compiler (or an interpreter, in the case of languages like Python) succeed or fail to run it. The actual output is then compared against the expected output. The whole verification step (run interpreter + run test) takes seconds.

You can generate millions of training examples like this essentially for free, and there is extensive research showing that, with the right policy, an agent can learn to reason, first as well as a human and in many cases better.
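
As a rough sketch of that verification step (my own minimal version, not any lab's actual pipeline; verifiable_reward and the 5-second timeout are invented for illustration):

    import os
    import subprocess
    import sys
    import tempfile

    def verifiable_reward(candidate_code: str, expected_output: str,
                          timeout_s: float = 5.0) -> float:
        # Write the model-generated program to a temp file.
        with tempfile.NamedTemporaryFile("w", suffix=".py",
                                         delete=False) as f:
            f.write(candidate_code)
            path = f.name
        try:
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, text=True,
                                    timeout=timeout_s)
        except subprocess.TimeoutExpired:
            return 0.0  # a hung program counts as a failure
        finally:
            os.unlink(path)
        if result.returncode != 0:
            return 0.0  # crash or syntax error: verification fails
        # Binary reward: exact match against the expected output.
        return 1.0 if result.stdout.strip() == expected_output.strip() else 0.0

    # One generated task: candidate code plus its expected output.
    print(verifiable_reward("print(sum(range(10)))", "45"))  # -> 1.0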
reply
deleted
reply
It's not that easy to assess diminishing returns with saturated benchmarks, where asymptoting to 100% is mathematically baked in. I could point to the number of Erdős problems solved by AI going from zero to many very recently as evidence of acceleration.
reply
That is not evidence of acceleration, just of some measurable improvement over a previous model. After all, humans have made such breakthroughs since before recorded history, and that never by itself implied accelerating intelligence.
reply
Possibly, but we've also seen that spending more tokens on a task can improve the quality of the output (reasoning, CoT, etc.).

So it's not impossible for things that seem orthogonal, like generation speed or context length, to have an impact on the quality of the result.
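
Self-consistency-style sampling is a concrete version of this: spend more tokens by sampling several chains of thought and majority-voting the final answer. A toy simulation (the 60% per-sample accuracy and the set of wrong answers are made-up assumptions, not measurements):

    import random
    from collections import Counter

    def sample_answer(p_correct: float = 0.6) -> str:
        # Stand-in for one sampled chain of thought: right with
        # probability p_correct, otherwise one of several wrong answers.
        if random.random() < p_correct:
            return "correct"
        return random.choice(["wrong_a", "wrong_b", "wrong_c"])

    def self_consistency(n_samples: int) -> str:
        # Spend more tokens: sample n chains, majority-vote the answer.
        votes = Counter(sample_answer() for _ in range(n_samples))
        return votes.most_common(1)[0][0]

    for n in (1, 5, 25):
        trials = 2000
        hits = sum(self_consistency(n) == "correct" for _ in range(trials))
        print(f"{n:3d} samples/question -> accuracy {hits / trials:.3f}")

Because correct samples concentrate on one answer while errors scatter, accuracy climbs with the sample count even though the underlying model never changes.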

reply