Moore's law is long dead, leading-edge nodes are getting ever more expensive, and the most recent generation of tensor silicon is not significantly better in flops/watt than the previous generation.
Given that model performance has consistently trended log-linear with the compute thrown at the problem, there must be a point at which it is no longer economically viable to throw more flops at it.
The AI companies and their frontier models have already ingested the whole internet and reoriented economic growth around data center construction. Meanwhile, Google throttles my own Gemini Pro usage with increasingly tight constraints. The big firms are feeling the pain on the compute side.
Substantial improvements must now come from algorithmic efficiency, which is bottlenecked mostly by human ingenuity. AI-assisted coding will help somewhat, but only with the drudgery, not the hardest parts.
If we ask a frontier AI researcher how they do algorithmic innovation, I am quite sure the answer will not be "the AI does it for me."
I believe AGI is probably coming, but not on a predictable timeline or via blind scaling.
I don't think the sci-fi definition of AGI is happening soon, but something more boring is coming in the meanwhile that is perhaps nearly as destructive to life as knowledge workers know it today. That is, humans are still used, but increasingly fewer of them, at lower and lower skill levels, as the models output more and more complete solutions. And naturally, there are no geographic or governmental barriers to protect employment in this sector, nor physical realities that demand the work happen in a particular part of the world. Long term, this path is ripe for offshoring to the cheapest internet-connected labor available. Other knowledge-work professions like law and medicine set up legal moats to protect their fields and compensation decades ago, whereas nothing similar protects the domestic software engineer.
By all accounts they are on this trajectory already. You often see comments on here from developers saying that the models of a few years ago needed careful oversight, but now they can be trusted to do more of a project accurately with less oversight. Of course you will find anecdotes either way, but as the years go on I see more and more devs reporting useful output from these tools.
I wonder how those moats hold up when there's a big enough benefit to using AI over human work. How are politicians supposed to explain them to the masses when your AI doctor costs 10x less and, according to a multitude of studies, is much better at diagnosis?
Or in law? I've read that China is pushing AI judges because people weren't happy with the impartiality of the human ones. I think people generally overestimate how much these legal moats are worth in the long run.