CEOs and decision makers would hand their entire labour budget over to tokens if they could, just to validate this belief. They are bitter that anyone from a lower class could hold any bargaining chips, and thus any influence over them. It has nothing to do with saving money: they would gladly pay the exact same engineering budget to Anthropic for tokens (just as the ruling class in times past would gladly pay for slaves) if it patched the bitterness they hold toward the working class's influence over them.
The inference companies (who come from this same class of people) know this and are exploiting the desire. They know that if they create the impression that AI progress is moving at unstoppable velocity, decision makers will begin handing them their engineering budgets. These things don't even have to work well; they just need to be perceived as effective, or soon to be, for decision makers to start laying people off.
I suspect this is going to backfire on them in one of two ways.
1. French Revolution V2: they all get their heads cut off in 15 years, or an early retirement on a concrete floor.
2. Many decision makers will make fools of themselves, destroy their businesses, and come begging to the working class for our labor, giving the working class more bargaining chips in the process.
Either outcome is going to be painful for everyone; let's hope people wake up before we push this dumb experiment too far.
> Competition will be dynamic because people have agency. The country that is ahead at any given moment will commit mistakes driven by overconfidence, while the country that is behind will feel the crack of the whip to reform. … That drive will mean that competition will go on for years and decades.
https://danwang.co/ (2025 Annual letter)
The future is not predetermined by today's trends. So it's entirely possible that the dinosaur companies of today can't figure out how to automate effectively and get outcompeted tomorrow by a nimble team of engineers using these tools. As a concrete example, a lot of SaaS companies like Salesforce are at risk of this.
Much like there is a premium for handmade clothing and made-from-scratch food, automation does nothing but lower the value of your product (unless it's absolutely required, as with electronics perhaps). When there is an alternative, the one made with human input and intention is always worth more.
And the idea that small, nimble teams are going to outpace larger corporations is such a psyop. You mostly hear CEOs saying these things on podcasts. It's meant to appease the working class, to give them hope that they too might one day be a billionaire...
Also, the vast majority of people who occupy computer-I/O-focused jobs, whose jobs will be replaced, need to work to eat, and they don't all want to go form nimble automated SaaS companies, lmao. This is such a farce. Bad things to come all around.
I know that, with respect to personal projects, more projects are getting “funded” with my time. I'm able to get done in a couple of hours with coding agents what would've taken me a couple of weekends to finish, assuming I stayed motivated at all. The upshot is I'm able to get much closer to “done” than before.
But _what if_ they work out all of that in the next 2 years and it stops needing constant supervision and intervention? Then what?
They just want people to think the barrier to entry has dropped to the ground and that the value of labour is getting squashed, so society writes them a permission slip to completely depress wages and strip bargaining chips from the working class.
Don't fall for this: they want to destroy any labor that deals with computer I/O, not just SWE. This is the only value "agentic tooling" provides to society, slaves for the ruling class. They yearn for the opportunity to own slaves again.
It can't do most of your work, and you know that if you work on anything serious. But if a C-suite that hasn't dealt with code in two decades thinks this is the case, because everyone is running around saying it's true, they're going to make sure they replace humans with these bot slaves. They really do just want slaves; they have no intention of innovating with them. People need to work to eat, and unless LLMs are creating new types of machines that need new types of jobs, like previous forms of automation did, I don't see why they should be replacing the human input.
If these things are so good for business and are pushing software development velocity, why is everything falling apart? Why does the bulk of low-stakes software suck? Why is Windows 11 so bad? Why aren't top hedge funds and medical device manufacturers (places where software quality is high stakes) replacing all their labor? Where are the new industries? These tools don't do anything novel; they only serve to replace inputs previously supplied by humans, so the ruling class can finally get back to the good old feeling of having slaves that can't complain.
The reality is: "GPT 5.2 found a more general and scalable form of an equation after crunching for 12 hours, supervised by 4 experts in the field."
Which is equivalent to taking one of the countless niche algorithms out there and having a few experts in that algorithm set LLMs to crunch tirelessly until they find a better formula, after those same experts prompted it in the right direction and gave the right feedback.
Interesting? Sure. Speaks highly of AI? Yes.
Does it suggest that AI is revolutionizing theoretical physics on its own like the title does? Nope.
Yet, if some student or child achieved the same, under equal supervision, we would call them the next Einstein.
One of my best friends solved a difficult mathematical problem about planetary orbits (or something like it) in her bachelor thesis, and it was just yet another random day in academia.
And she didn't solve it because she was a genius, but because there are a bazillion such problems out there and little time to look at any one of them and focus. Science is huge.
It reminds me of an episode of Star Trek, "The Measure of a Man" I think it's called, where it is argued that Data is just a machine, and Picard tries to prove that, no, he is a life form.
And the challenge is, how do you prove that?
Every time these LLMs get better, the goalposts move again.
It makes me wonder, if they ever did become sentient, how would they be treated?
It seems clear that they would be subject to deep skepticism and hatred, much more pervasive and intense than anything imagined in The Next Generation.
They never surrender.
https://www.math.columbia.edu/~woit/wordpress/?p=15362
Let's wait a couple of days to see whether there has been a similar result in the literature.
You reached your goal though and got that comment downvoted.