I've been at this longer than most.

After three major generations of models, the "intuition" I've built isn't about what AI can do, but about what a specific model family can do.

No one cares what the gotchas in GPT-3 are, because it's a stupid model. In two years no one will care what they were for GPT-5 or Claude 4, for the same reason.

We currently have the choice of wasting months of our lives getting good at a specific model, or burning millions trying to get those models to do things by themselves.

Neither option is viable long term.

reply
My philosophy is to try to model the trajectories of these systems and build rigging around where the curve is flat (e.g. models have been producing big balls of mud since the beginning, and this hasn't improved meaningfully). Models also have a strong bias toward the mean that I don't expect to go away any time soon.
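
A minimal sketch of the kind of rigging I mean, with every name hypothetical: gate the model's output on the structural rules it chronically violates, and feed failures back instead of trusting it to self-organize.

    # Hypothetical sketch: rig around a stable weakness (structural drift)
    # instead of re-learning per-model prompt tricks each generation.
    def generate_with_gate(prompt, call_model, check_structure, max_tries=3):
        feedback = ""
        for _ in range(max_tries):
            output = call_model(prompt + feedback)
            problems = check_structure(output)  # e.g. module size, layering rules
            if not problems:
                return output
            # Reject and retry with the concrete violations appended.
            feedback = "\n\nFix these structural issues:\n" + "\n".join(problems)
        raise RuntimeError("model kept producing a ball of mud")

The point is that the gate survives model swaps: the checker encodes what stays flat across generations, not quirks of any one family.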

Trying to outsmart the models on core behaviors over time, though, is asking to re-learn the bitter lesson.

reply
The ridiculous resources being thrown at this, and the ability through RLVR (reinforcement learning with verifiable rewards) to throw gigatons of spaghetti at the wall to see what sticks, should make it very clear just how incredibly inefficient frontier AI reasoning is, however spectacular it may be that it can reason at this level at all.
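
To make the spaghetti image concrete, here is a toy best-of-n sketch of that dynamic, all names hypothetical: sample a pile of candidates, keep whatever a cheap verifier accepts, and note that the compute bill scales with n even when the yield is tiny.

    import random

    # Toy sketch of the RLVR dynamic (hypothetical names): throw n samples
    # at a verifier and reinforce whatever sticks. Cost grows with n even
    # when only a handful of candidates pass.
    def spaghetti_round(sample_candidate, verify, n=1024):
        candidates = [sample_candidate() for _ in range(n)]
        winners = [c for c in candidates if verify(c)]
        return winners  # training signal comes only from these

    # Usage with stand-in "model" and verifier:
    hits = spaghetti_round(lambda: random.randint(0, 9999), lambda c: c % 997 == 0)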
reply
Long term, though, AI will win out. The thing is that you can keep improving capability. You can make the context window bigger. You can throw more compute at it. You can improve the efficiency of the chips. You can throw more power at it. And indeed, that has worked so far, turning the GPTs of 2017 into the GPTs of 2026 that can actually do stuff.

Meanwhile, human thoughtpower cannot really be improved. Once the tipping point is reached where computers exceed humans, humans will never be able to catch up by definition.

Humans can also only maintain so much contextual information and scope. They can only learn so much in the time they have to get up to speed. They can only do so much within the span of their own mental peak before they fall off, go senile, or die. These limits are bound by evolution: they change on the order of thousands of generations, and require strong selection pressure for those changes at that.

The turtle has marched far already, but the hare, in a speeding car that keeps getting upgraded, is not far behind. Efficiency doesn't matter: what is inefficient now will be trivial to parallelize and scale in the future, as it always has been in the history of compute. We'd have to engage in something like the Bene Gesserit breeding program for human thoughtpower to stay competitive against compute.

reply
You're presupposing an answer to what is actually the most interesting question in AI right now: does scaling continue at a sufficiently favorable rate, and if so, how?

The AI companies and their frontier models have already ingested the whole internet and reoriented economic growth around data center construction. Meanwhile, Google throttles my own Gemini Pro usage with increasingly tight constraints. The big firms are feeling the pain on the compute side.

Substantial improvements must now come from algorithmic efficiency, which is bottlenecked mostly by human ingenuity. AI-assisted coding will help somewhat, but only with the drudgery, not the hardest parts.

If we ask a frontier AI researcher how they do algorithmic innovation, I am quite sure the answer will not be "the AI does it for me."

reply
Of course it continues. Look at the investment in hardware going on. Even with no algorithmic efficiency improvements, sheer hardware will brute-force it, just like a massive, inefficient V8 engine with paltry horsepower-per-liter figures still makes plenty of power through raw displacement.
reply
I believe it continues, but I don't know if the rate is that favorable. Today's gigawatt-hungry models, which can cost $10-100 or more per task to run, still can't beat Pokémon without a harness. And Pokémon is far from the only task needing such a harness.

I believe AGI is probably coming, but not on a predictable timeline or via blind scaling.

reply
The harness can be iterated upon (1).
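
For what it's worth, a harness in this sense is just scaffolding around the model, and a minimal sketch (every name here hypothetical) shows why iterating on it raises the floor without retraining anything:

    # Hypothetical harness loop: the scaffolding, not the model, enforces
    # legality and progress. Improving this loop improves results without
    # touching the model at all.
    def run_harness(get_state, summarize, call_model, legal_actions, apply_action, done):
        while not done():
            state = get_state()
            proposal = call_model(summarize(state))  # model suggests a move
            legal = legal_actions(state)
            # Fall back to any legal move if the model's suggestion is invalid.
            action = proposal if proposal in legal else legal[0]
            apply_action(action)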

I don't think sci-fi-definition AGI is happening soon, but something more boring is coming in the meanwhile, and it is perhaps nearly as destructive to life as knowledge workers know it today. That is, a human is still in the loop, but increasingly fewer humans, of lower and lower skill, as the models become able to output more and more complete solutions. And naturally, there are no geographic or governmental barriers protecting employment in this sector, nor physical realities that demand the work happen in a particular part of the world. Long term, this path is ripe for offshoring to the cheapest internet-connected labor available. Other knowledge-work professions like law and medicine set up legal moats to protect their fields and compensation decades ago, whereas nothing similar protects the domestic software engineer.

By all accounts, they are on this trajectory already. You often see comments on here from developers saying that years ago the models needed careful oversight, and now they trust them to do more of the project accurately with less oversight as a result. Of course you will find anecdotes either way, but as the years go on I see more and more devs reporting useful output from these tools.

1. https://news.ycombinator.com/item?id=46988596

reply
In my experience, AI enables smart people to do their best work while automating zero-quality work, like SEO spam, that no human should have been doing in the first place. I have yet to see anything I would remotely call tragic.
reply
> legal moats to protect their field

I wonder how they hold up when there's a big enough benefit to using AI over human work. How are politicians supposed to defend these moats to the masses when an AI doctor costs 10x less and, according to a multitude of studies, is much better at diagnosis?

Or in law: I've read that China is pushing AI judges because people weren't happy with the impartiality of the human ones. In general, I think people overestimate how much these legal moats will be worth in the long run.

reply
You are forgetting that the current approach to AI may plateau at an asymptote that still lies well below human capability.
reply
I credit them for acknowledging their limitations and not actively trying to be misleading. Unlike a certain other company in the space.
reply