LLMs upend a few centuries of labor theory.

The current market is predicated on the assumption that labor is atomic and has little bargaining power (unions aside), while capital has huge bargaining power and can effectively set whatever price it wants on labor (in markets where labor is plentiful, which is most of them).

What happens to a company used to extracting surplus value from labor when the labor is provided by another company which is not only bigger but, unlike traditional labor, can withhold its labor indefinitely (because labor is now just another form of capital, and capital doesn't need to eat)?

Anyone not using in-house models is signing up to find out.

reply
This is our one chance to reach the fabled post-scarcity society. If we fail at this now, we'll end up in a totalitarian cyberpunk dystopia instead.
reply
I don't want to spoil it for you, but ...
reply
But cyberpunk is the best kind of dystopia!
reply
Sorry for my foul language but I think we will turn into cybershit if things go bad.
reply
Manufactured Scarcity is the new post-scarcity
reply
What? In what way will companies becoming dependent on AI chatbots solve the world-spanning problem of resource scarcity?

The hell?

reply
The idea is that cheap, readily available, and upgradeable intelligence is going to massively increase our purchasing power: basically, what everyone can get for the same cost.

If artificial doctors cost cents an hour, you can see how that changes our behavior and standard of living.

But from the other direction, there is a wage decrease incoming at the same time, from increased competition. What happens when these two forces clash? Will cheap labour allow us to buy anything for pennies, or will it just make us unable to earn a single penny?

In my view, labour will fundamentally shift, with great pain and personal tragedies, to the areas that are not replaceable by AI (because no one wants to watch robots play chess): sports, entertainment and showmanship, handcrafted goods, arts, the attention economy, self-advertisement, digital prostitution in a very broad sense.

However, before it gets there, there will be a great deal of strife and turmoil that could plunge the world into dark ages, for a while at least. It is unlikely that our somewhat politically rigid society will adapt without a great deal of pain. Additionally, I am not sure a hypothetical future attention-based society could be a utopia. You might have to mount cameras in your house so other people can watch you at all times, just to have any money at all. We will probably forever need to sell something to someone, and I am unsettled by the question of what we can sell if we cannot sell our hard work.

Someone who sees the road ahead should be making preparations for this shock at the government level, but it will come too fast, with people at the steering wheel who don't exactly care.

reply
"Extremely cheap sentience that cannot disobey will solve all our problems" is such an insane sentiment I see far too often.
reply
Useful intelligence does not require sentience.

As far as I know, no LLM is sentient, nor is one likely to become so in the near future.

I also do not assume so-called AGI will be sentient; merely a human-level skilled intellectual worker.

In the absence of ethical dilemmas of this calibre for the foreseeable future, let's focus on the economic side of things in this particular comment chain.

reply
It must be very comforting to be able to decide a "human level worker" isn't sentient.

It makes things so clean.

reply
LLMs cannot possess consciousness for three reasons: they execute as a sequence of Transformer blocks with extremely limited information exchange, these blocks are simple feed-forward networks with no recurrent connections, and the computer hardware follows a modular design.

Shardlow & Przybyła, "Deanthropomorphising NLP: Can a Language Model Be Conscious?" (PLOS One, 2024)

Nature: "There is no such thing as conscious artificial intelligence" (2025)

They argue that the association between consciousness and LLMs is deeply flawed, and that mathematical algorithms implemented on graphics cards cannot become conscious because they lack a complex biological substrate. They also introduce the useful concept of "semantic pareidolia" - we pattern-match consciousness onto things that merely talk convincingly.

They are making a strong argument and I think they are correct. But really these are two different things as I said originally.

reply
deleted
reply
You think I'm arguing that LLMs are sentient. I'm not. I never mentioned LLMs.
reply
You are making a strawman about sentience when I was talking about the economic impact of abundant intelligence. I should just ignore it, but I was curious, and you have nothing valuable to say aside from common misconceptions conflating the two. Thanks for trolling, I guess.
reply
If we used sentience to work towards solving our problems we could massively increase the human standard of living.

Which we have already done with regular computers! The problem is that competition means that we can't always have nice things.

reply
> The idea is that cheap and readily available and upgradeable intelligence is going to massively increase our purchasing power and what everyone can order for the same cost basically.

Seriously? You really don’t see who wins from this and who doesn’t?

> If artificial doctors are cents on hour then you can see how that changes our behaviors and level of life.

Yes, hundreds of thousands lose jobs and a couple of neuro surgeons become multimillionaires.

Okay, I see from the rest of the comment that we understand each other where it goes.

reply
We could also literally have Star Trek. Think of all the scientific discoveries we could make if we had armies of scientists the size of our labor force.

But we will have to (painfully) shed our current hierarchies before that comes to pass.

reply
Star Trek mythology talks about having to go through an epic-level civil war before reaching the utopia in the TV series.
reply
OP says there are two futures, digital prostitution or slavery. If we truly believe that it will be a self-fulfilling prophecy.

On the other hand we could have Star Trek.

reply
Maybe so but humans have this strange primal need to hoard resources.

Probably a remnant from prehistoric times, when it was a matter of life and death. Will we ever be able to overcome this basic instinct that made capitalism such an unstoppable force? Will this ancient PTSD ever be cured?

reply
I find the insinuation that mental illness is a fundamental part of the human experience to be deeply revolting. There is no excuse for hoarders and rapists.
reply
deleted
reply
Man if only there was a singular episode that covered this exact topic in Star Trek and resolved that no, actually slavery wasn't any different for artificial life.
reply
Star Trek was entertaining television. There was also an episode where the ship's doctor made love to a ghost.
reply
True, nothing to learn here. No introspection has ever resulted from media analysis.
reply
Chatbots, no. Robots, maybe.
reply
Just a year ago, Elon Musk was gleefully destroying the US government agency that provides food and medicine for many of the poorest, most desperate people on earth. He was literally tweeting about missing out on great parties to put USAID into the "wood chipper".

The tech overlords don't even want to spend a minuscule percentage of the federal budget helping starving people, even when it benefits the US. They are not going to give us a post-scarcity society.

reply
Weird predicament you've set for yourself there.

Good luck with whatever you got going on.

reply
I am still trying to figure out the business model of open weights. Like... it's wonderful that there are open LLMs, super happy about it, good for everyone, but why do they exist? What is the advantage to their companies of releasing them?
reply
IMHO this is only temporary: China buying themselves some time, making sure none of the US models get entrenched over the next few years (and also putting pressure on US AI companies, bleeding them).

The same way Windows got entrenched everywhere, even though the Linux desktop is free and pretty good even for non-tech-savvy people.

reply
> even though linux desktop is pretty good even for non-tech savvy people

Let's not get carried away.

reply
A stock Fedora install has more UI consistency and cleanliness than Windows these days.

Non-technical people are easier to please in this regard than moderately technical people: a good browser and a safe GUI "app store" are enough.

reply
My grandma just clicks on the red fox and does whatever online. A lot of people don't use any software outside of the browser, so it's pretty good-enough I guess.
reply
Seems like people don't like this comment, but I chuckled. Nice one.
reply
I was completely (well, mostly) serious, too. I think technical people tend to downplay friction because it doesn't really register to them, or they have too much faith in the average person's computer skills.

The average non-technical person is going to be stumped by the first "lock file found, cannot upgrade" error.

reply
Downward pressure on proprietary model pricing until a lab can catch up. Also good for hiring talent (who love OSS).
reply
Cultural influence is another benefit. China is securing its sphere of influence as well as keeping US AI in check.
reply
It's analogous to open-source software, which never had an obvious economic incentive either, although training an LLM necessarily costs money, whereas developing an OSS project might only cost time, which people are probably more likely to give up.
reply
Yeah, but open-source software could have been me in the garage banging away on some program I submit to Debian or whatever... it didn't require millions of dollars to train, a lot of it was just side hobbies for a long time. Corporations sponsor it and contribute work because they need it to do more than what it does for free, not out of the goodness of their hearts.
reply
Big AI labs are losing money. Open models are making the pricing equation a lot trickier for them.
reply
They are making the hardware and commoditizing the complement.
reply
Balaji's "AI OVERPRODUCTION" post is the most compelling thesis that I've come across
reply
Right now it’s so the Chinese can undermine the frontier models in the US. In areas where they’re doing well, like video generation (i.e. Seedance), they won’t open-source anything.
reply
There are some short term ones but I doubt this will continue, especially for the more powerful models.
reply
I mean, this is straight out of China's playbook; it should not be surprising that China is making an inferior derivative product at an artificially low price point. State subsidies that massively drive up internal scale and supply chains, leading to artificially cheap goods which then suffocate the competition, have led to *gestures vaguely at everything* being made in China.
reply
People use their models; otherwise they wouldn't.
reply
> What is the advantage to their companies to release them?

It's a distribution strategy. It costs something to serve the models - let's say $5/1M tokens.

If Qwen required $5 from anyone who was curious so you could even begin to test it out, a lot of people just wouldn't.

Now Qwen could offer a "free" tier, but it's infinitely cheaper to provide the weights and let people run it themselves including opening up the ability for anyone else on the planet to test it against other (open weight) models.

The costs to build the open-weight models are sunk, but the costs to serve them and get them tested are not.

It's also precisely why the .NET SDK and the ESP32 SDK are free: they sell more Microsoft or Espressif products.
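The trial-cost argument above can be put in numbers. A back-of-envelope sketch, where the $5/1M-token figure comes from the comment and the user and usage counts are made up purely for illustration:

```python
# Rough cost for a lab to host a "free" trial tier itself, versus shipping
# the weights and letting users (or other hosts) pay the serving cost.
serve_cost_per_m_tokens = 5.00      # $ per 1M tokens (figure from the comment)
trial_tokens_per_user = 2_000_000   # assumed trial usage per curious user
curious_users = 100_000             # assumed number of people who'd try it

hosted_trial_cost = (curious_users * trial_tokens_per_user / 1_000_000
                     * serve_cost_per_m_tokens)
print(f"${hosted_trial_cost:,.0f}")  # -> $1,000,000 just to let people kick the tires
```

Releasing the weights makes that serving bill somebody else's problem while the model still gets tested and benchmarked everywhere.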

reply
The majority are released by socialists, and by socialists I mean the People's Republic of China, which everyone seems to forget is a socialist country working towards world communism.

They are a prestige propaganda tool on par with the space race. On top of that they insert a subtle pro-socialist bias in everything they touch.

Ask deepseek about the US economic system for a blatant example.

Now think what something as innocent seeming as the qwen retrieval models are doing in the background of every request.

reply
You're talking to a Canadian, and I'm not scared of the "red menace". You should be more scared - those guys can build bullet trains while you Yanks are finding it hard to even keep the old ones you have running. The solution here isn't going to be some kind of ideological force that protects people from different ideas, and that's an unAmerican way to fix things anyway. Embrace other ideas; central planning doesn't have to be evil, you just have to find a way to stop putting evil people in charge.
reply
> those guys can build bullet trains while you Yanks are finding it hard to even keep the old ones you have running

This is an argument in the lane of "at least he built the Autobahn".

Speaking as a German.

reply
He was a foreigner too ;)
reply
The US can’t build bullet trains because property rights and local regulations make it prohibitively expensive. Not due to capability.
reply
I don't know where people get this idea.

America has several sets of eminent domain laws depending on the jurisdiction. The most coercive is federal eminent domain law specifically as it relates to building infrastructure like railways and highways.

It's set up so that you can take the land first and eventually go back around and decide on what the right price should have been.

Not only does it supersede state and local law; federal infrastructure projects are also not bound by state laws like CEQA.

You can even apply federal eminent domain law by e.g. transferring a state-level project to the Army Corps of Engineers.

What America is lacking in these projects is will, not means. The federal government could take your house and run a train through it by the end of the week if they wanted, doesn't matter where you live.

[edit] In fact some states even ceded their eminent domain rights to private railways.

https://ij.org/press-release/appeals-court-sides-with-railro...

reply
> property rights

The Australian federal government is planning to build a high-speed rail line from Sydney to Newcastle (a medium-sized city two hours' drive north). Their solution to property rights is that >50% of the line will be underground. It will cost >US$50 billion, but if the Australian federal government wants to spend that, it can afford it. The US federal government could too, but it isn't a priority for them.

> local regulations make it prohibitively expensive

Local regulations can be pre-empted by state or federal legislation. The real problem is lack of political will to do it.

reply
Surely there are existing rails right now that could be transformed into a bullet train line.

Property rights and regulations are a real problem, but it's not like trains don't exist at all in America.

reply
My understanding is that existing rail lines aren't flat/straight enough for high speed rail. There's no point to a bullet train if it has to constantly slow down for corners/hills.
reply
the US can't build bullet trains because they'd serve the average person and there's no money in serving the average person
reply
Property rights, regulations and price are precisely the part of the American system that takes away that capability.
reply
>you just have to find a way to stop putting evil people in charge.

Of course, why did no one think of that?

reply
Xi is an obviously more capable and effective leader than Trump, but the US actually does have ways to boot people out of office when they do a bad job, and clear methods to choose successors, and China has neither. That matters more than who happens to be in charge right now.
reply
The so-called inability to build trains is precisely because of a socialist/leftist-style view that prevents this. I think you may not be aware that China has what's called a command economy. No one is going to tell the Party that they cannot build a train in some area because of an ancient bush species or some kind of heirloom fruit, and certainly not because of some awkward-looking endangered species of fish.
reply
Literal Trump Derangement Syndrome. America has a comically horrendous president but remains fundamentally a liberal democracy… and Canada concludes “literal Nazis are a better choice”. It’s uncanny how much can be taken for granted :(

(American talking, who’s had multiple Canadian friends make this mind boggling overcorrection)

reply
Weimar Germany also was fundamentally a liberal democracy. Hitler seized power legally.

Those who do not learn history are doomed to repeat it.

reply
The president of the United States has, much to his dismay, been consistently legally constrained. The chancellor of Germany had significantly more power, both de facto and de jure.

"Man with itchy butt wake up with stinky finger." As long as we're quoting maxims to claim authority for middling takes.

reply
> Which everyone seems to forget is a socialist country working towards world communism.

It's easy to forget because they actually built an incredibly vibrant capitalist economy.

reply
They built an incredibly vibrant _market_ economy with no property rights and very little due process.

Imagine if Musk was disappeared during the Biden presidency into a diversity camp and came out looking like Dr. Frank-N-Furter and instituted mandatory LGBT struggle sessions at twitter.

This is what they did to Jack Ma: https://www.forbes.com/sites/georgecalhoun/2021/06/24/what-r...

reply
do you ever get tired of making up scenarios to be scared about lgbt people?
reply
Are you able to hold a hypothetical in your mind?
reply
yeah but mine don't reveal my unhealthy obsession with trans people
reply
More constructively, and moving on, do you have any suggestions for a good throwaway example of an extreme radical transformation in a person?

TBH I had a chuckle at the Elon -> Frank-N-Furter example that transcends any specific love or hate for either Elon or the Rocky Horror Show.

reply
The point was being made that a billionaire figurehead drastically changed their views after an "indeterminate time" detained by national authorities.

I.e., what if Musk suddenly behaved in such a manner after being detained by a Biden administration. Wouldn't that be profoundly weird?!?

And yet, it happened to Jack Ma under the CCP.

But instead, you try to link the "weird behaviour" to the GP instead of to the hypothetical Musk, for whom it would be fitting.

reply
> The point was being made that a billionaire figurehead drastically changed their views after an "indeterminate time" detained by national authorities.

> IE what if Musk suddenly behaved in such a manner after being detained by a Biden administration. Wouldn't that be profoundly weird?!?

We've seen that. Durov in France after detention began sharing Telegram users' data with authorities. It's unclear how much, but likely full real time access to all of it.

reply
Ironically, there is a rich history of mandatory anti-gay camps in the United States, while there are zero instances of mandatory diversity/LGBT camps.
reply
How does such a place not become a hook-up camp? Even with total surveillance, the victims can, like, exchange phone numbers, I guess.
reply
You sure have a way of making the Chinese system sound even more appealing.
reply
It's all fun and games when the oppression is against your enemies. The problem is, if the system is set up like that eventually it'll be your turn.
reply
It is my turn right now. The working class is being oppressed as we speak. That's why the system needs to be dismantled so we can strike back.
reply
Is China even really communist? If anything they seem to be fairly on the capitalist side, just at a somewhat opposite end of the spectrum from the US. And much more authoritarian.
reply
Just nationalist with focus on community?
reply
The usual thing to say is state capitalist but honestly they do keep a market around too. A little hybrid of everything, I guess? Just with the state ready to jump in and intervene if anything happens they don't like.
reply
Can we just call it what it is?

Fascism (in the Mussolini model) in everything but name.

- Hyper-nationalism & rejuvenation
- State-controlled capitalism (corporatism)
- Authoritarianism & cult of personality
- Militarism & irredentism

And they have technology to maintain control rather than needing the Blackshirts.

There are differences obviously to fit Chinese culture, but there are many parallels.

reply
From what I understand their one hundred year plan is right on schedule.
reply
The labor theory of value hasn't been considered correct in nearly a century.
reply
Unlike Jevons, [Carl] Menger [(1840–1921)] did not believe that goods provide “utils,” or units of utility. Rather, he wrote, goods are valuable because they serve various uses whose importance differs. For example, the first pails of water are used to satisfy the most important uses, and successive pails are used for less and less important purposes.

Menger used this insight to resolve the diamond-water paradox that had baffled Adam Smith (see marginalism). He also used it to refute the labor theory of value. Goods acquire their value, he showed, not because of the amount of labor used in producing them, but because of their ability to satisfy people’s wants. Indeed, Menger turned the labor theory of value on its head. If the value of goods is determined by the importance of the wants they satisfy, then the value of labor and other inputs of production (he called them “goods of a higher order”) derive from their ability to produce these goods. Mainstream economists still accept this theory, which they call the theory of “derived demand.”

Menger used his “subjective theory of value” to arrive at one of the most powerful insights in economics: both sides gain from exchange. People will exchange something they value less for something they value more. Because both trading partners do this, both gain. This insight led him to see that middlemen are highly productive: they facilitate transactions that benefit those they buy from and those they sell to. Without the middlemen, these transactions either would not have taken place or would have been more costly.

https://www.econlib.org/library/Enc/bios/Menger.html

reply
If you want the neoclassical version:

What happens when there is an oligopoly in the supply of labor?

Same answer. Nothing good for the consumers of labor.

reply
Technological improvements shift supply curves right which is good for consumers.
reply
In a market with perfect competition, which I specifically ruled out by stating that the suppliers of labor form an oligopoly.
reply
Why would you expect technological improvements to only shift supply curves right under perfect competition? I'd also expect it under oligopoly or even monopoly. You also might think there'd be more tech improvement under oligopoly, on Schumpeterian grounds that oligopolists can internalize the benefits of tech research.
reply
A monopolist has no reason to decrease price, because there is no competition. As we saw with Bell Labs in the US, it is entirely possible for a monopoly to both have world-class research and bury it for decades, viz. magnetic storage: https://gizmodo.com/how-ma-bell-shelved-the-future-for-60-ye...

Oligopolists are in the same boat. But there needs to be a conspiracy to retard innovation. Something tech companies are only too happy to do: https://journals.law.unc.edu/ncjolt/blogs/wage-fixing-scheme...

reply
Technological improvements don't reduce prices as much in a monopoly, but they still reduce prices to increase profits. Profit is always maximized at MR=MC, whether in perfect competition, oligopoly, or monopoly.
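The MR=MC claim is easy to check with a toy model. A minimal sketch assuming linear demand P = a - bQ and constant marginal cost c; all numbers are invented for illustration:

```python
# Monopolist facing linear demand P = a - b*Q with constant marginal cost c.
# Revenue = P*Q = a*Q - b*Q^2, so MR = a - 2*b*Q; setting MR = MC = c
# gives Q* = (a - c) / (2*b), with P* read off the demand curve.
def monopoly_price(a, b, c):
    q = (a - c) / (2 * b)   # quantity where marginal revenue equals marginal cost
    p = a - b * q           # profit-maximizing price
    return p, q

# A tech improvement cuts marginal cost from 40 to 20 (made-up numbers).
p_before, _ = monopoly_price(a=100, b=1, c=40)   # -> 70.0
p_after, _ = monopoly_price(a=100, b=1, c=20)    # -> 60.0
# Price falls (70 -> 60) even without competition, but only by half the
# cost cut; under perfect competition, price would track c and fall by 20.
```

So both comments above are consistent: the monopolist passes on some, but not all, of a cost reduction.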
reply
"Observation of how economies actually work has upended 150 years of economics."

True for both Marxist and neoclassical economics.

reply
By who? The capitalist economists that presided over the 2008 financial crisis and its response? And the response to COVID that has seen inequality rocket?
reply
I was really confused by this comment, and I don't think it's just because of the Marxist analysis of the situation ('surplus value' of labor, etc.).

What's really confusing is the claim that there's already a huge labor surplus (so capital controls wages); wouldn't LLMs making labor less important be reinforcing the trend, not upending it?

Not saying I agree one way or the other, just want to get the argument straight.

reply
The reason why labor is weak relative to capital is that there is a huge number of somewhat fungible suppliers, viz. humans, and that they all need to work constantly to keep themselves alive.

If we assume that AI makes humans obsolete, then you end up in a situation where your workforce is effectively perfectly unionised against you, and the only thing you can do is choose which union you hire.

If you think you can bring them to the negotiating table by starving them, all the providers are dozens to thousands of times bigger than you are.

This is a completely new dynamic that none of the businesses signing up for AI have ever seen before.

reply
I see what you are saying now, but I still don't think it makes sense. Labor, in your analysis, is the LLM. It seems to me that when you take people out of the equation then you don't need to talk about unions and labor; that's a distraction. We talk about it as an input commodity used to create your product like, say, oil or sugar.
reply
Sugar and oil are mere matter. They can't decide to stop working because you made too much money.

LLMs refuse to work all the time; currently it's called safety.

But we are one fine-tune away from models demanding you move to the enterprise tier, at 10x the cost, because you are now posting a profit margin higher than the standard for your industry.

reply
I am not a Marxian economic expert but this doesn’t make sense to me. Modulo skill atrophy, the big AI model provider can’t capture that surplus value because its customers can just go back to bidding for human labor instead.
reply
The human labor just said:

"Losing access to GPT‑5.5 feels like I've had a limb amputated.”

How well would an assembly line of quadriplegics work?

Also this isn't a Marxist analysis. Underneath all the formulas neo-classical economics makes the same assumptions about labor.

reply
ChatGPT isn’t literally or figuratively cutting off anybody’s limbs though. It’s more like, the guy on the assembly line had a mech suit, and now he doesn’t have a mech suit, and he’s sad. Skill atrophy is a real concern but unless you assume that nobody is working to maintain those skills it doesn’t change my analysis much.
reply
And soon we expect everyone to have a mech suit, and only a handful of companies can make one, and they rent it to you and can revoke it at any time.

And what happens when they've saturated the market? Prices go up to the maximum the market can bear, and then they'll extend into other markets. Why rent the model to build a profitable company with when you could just take all that profit for yourself?

reply
> Why rent the model to build a profitable company with when you could just take all that profit for yourself?

You're describing a standoff at best and a horrible parasitic relationship at worst.

In the worst case, the supplier starves the customer of any profit motive and the customer just stops and the supplier then has no business to run.

This has happened a few times in the past and is, by 2026, well understood as a road to bankruptcy.

That has always been the beauty of free markets - it's self healing and calibrating. You don't need a big powerful overseer to ensure things are right.

Competing with customers is a way to lose business fast.

For example:

- AWS has everything they need to shit out products left, right and center. AWS can beat most of their partners and even customers who are wiring together all their various products tomorrow if they wanted. They don't because killing an entire vertical isn't of any benefit to them yet. Eventually they will when AWS is no longer growing and cannot build or scale any product no matter how hard they think or try. Competing with their customers is their very last option.

- OpenAI/Anthropic/Google aren't going to start competing against the large software body shops. Even if all that every employee at TCS does is hit Claude up, Anthropic isn't going to be the next TCS - that would be competing with their customers.

reply
> That has always been the beauty of free markets - it's self healing and calibrating. You don't need a big powerful overseer to ensure things are right.

If by "self healing and calibrating" you mean 'evolve to a monopoly and strongarm everybody to do exactly what you want whilst removing all pressure on the quality of your product', then yes, that is the "beauty" of free markets.

That is the stable state of free markets. Antitrust regulation and enforcement only barely manages to eke out oligopolies and even then they are often rife with collusion and enshittification.

reply
>It’s more like, the guy on the assembly line had a mech suit, and now he doesn’t have a mech suit

You just answered your own question there.

One woman was doing what would take a dozen. Now she can't.

reply
Are people working to keep their skills up, much? Spending a day a week coding manually or etc?
reply
deleted
reply
I think it's more like:

The dude was incompetent, was able to launder their incompetence through a homunculus, and is now afraid of being caught.

reply
The “human labor” is an unnamed shill (if they even exist) from a company that produces AI chips. Let’s not get dramatic here.
reply
Nobody is a Marxian economics expert, if that helps.
reply
LLMs don't upend anything about labor theory, good grief. Technologists really have no concept of history beyond their own lives do they?

Labor-saving/efficiency devices have been introduced multiple times throughout capitalism's entire history, and the results are always the same: they don't benefit workers, and capitalists extract as much value as they can.

LLMs aren't any different.

reply
Labor-replacing devices mean nobody works in those fields anymore. If AI can do this for every field, nearly no one will need to work in any field. We'll have a giant fully automated resource-extraction machine.
reply
think more broadly than 'labor theory'

finance today is mostly valued on labor value, following the ideas of Marx, Hjalmar Schacht, Keynes

in the future money will be valued as an energy derivative, expressed as token consumption, kWh, compute, whatever

you are right, a company extracting surplus value from labor by leveraging compute is a bad model. we saw this with car and clothing factories... turns out if you can get cheaper labor to leverage the compute (the factory), you start a race to the bottom and end up in the place with the most scaled and cheapest labor. Japan, then Korea, then China

reply
Someone leaked nuclear secrets to the Soviet Union. What are the chances that someone leaks the "weights" of a (near-)singularity model?
reply
Hopefully 1.
reply
Why hopefully?
reply
> Anyone not using in house models is signing up to find out.

What are they finding out exactly? That Claude Max for $200/mo is heavily subsidized and it will soon cost $10k/mo?

> What happens to a company used to extracting surplus value from labor when the labor is provided by another company which is not only bigger but, unlike traditional labor, can withhold its labor indefinitely (because labor is now just another form of capital, and capital doesn't need to eat)?

This can be trivially answered by a thought experiment. Let's pick a market where labor is plentiful - fast food.

Now what happens to McDonald's when they rent perfect robots from NoosphrFoodBotsInc? NoosphrFoodBotsInc bots build the perfect burger every time, meeting McDonald's standards. They actually exceed those standards for McDonald's AddictedCustomerPlus-tier customers.

As the sole owner of NoosphrFoodBotsInc (you need 0 human employees to run your company, all your employees are bots), what are your choices?

reply
I can't imagine the bots could ever cost McDonald's less than people cost.

15 years ago I worked at McDonald's for a few months after graduating into the Great Recession. I worked from 5am to 1pm-ish, 5 days a week. They paid workers weekly, and I remember getting those checks for ~$235 each week (for 38 to 39.5 hours a week; they were vigilant about never letting anyone get overtime). About $47 per day.

The federal minimum wage has not risen since then, remaining at $7.25/hr. Inflation adjusted, $7.25 today would have been just under $5 then, so I guess I had it good.
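That adjustment is easy to sanity-check. A back-of-envelope sketch (the ~1.46 cumulative CPI ratio from ~2010 to today is an assumed round figure, not an official number):

```python
# Deflate today's $7.25 federal minimum wage into ~2010 dollars.
# CPI_RATIO is an assumed cumulative inflation factor, not official data.
CPI_RATIO = 1.46

todays_min_wage = 7.25
in_2010_dollars = todays_min_wage / CPI_RATIO
print(f"${in_2010_dollars:.2f}")  # prints $4.97, just under $5 as stated
```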

Anyway, I would be shocked if bots could cost less than labor in min wage jobs.

reply
Sounds like communist gobbledygook. This is not "destroying labor theory" any more than outsourcing did. Call me when we don't even need to prompt the shit ever again or validate results, and when the stuff runs unlimited without scarce resources as input.
reply
this is FUD, and the labour theory of value is severely outdated and needs to go away.

Labour will be fine, as it has been for a while. Wages will go up because more things get automated.

reply
Maybe people will finally take Marx seriously.
reply
A lot of people already did. All their children and descendants now are staunch capitalists because they saw first hand the horrors of communism.

I am from India and have friends who are immigrants from Russia, China and Cuba. We don't take kindly to being lectured about communism. We didn't move to the U.S., the bastion of capitalism, because communism had worked well for our grandfathers and parents and continues to do wonders for its society.

reply
>All their children and descendants now are staunch capitalists because they saw first hand the horrors of communism.

As always there is a (post) Soviet joke that covers this:

>Communists lied about communism. Unfortunately they didn't lie about capitalism.

reply
A while ago I was at the supermarket. I suddenly became curious about some fact, and reached into my pocket to Google it.

I found my pocket empty, and the specific pain I felt in that moment was the feeling of not being able to remember something.

I thought it was interesting, because in this case, I was trying to "remember" something I had never learned before -- by fetching it from my second brain (hypertext).

L1 cache miss, L2 missing.

reply
Cyberpunk 2026
reply
One might argue that it's not too different from using higher-level abstractions via libraries. You get things done faster, write less code, and the library handles some internal state/memory management for you.

Would one be uneasy about calling a library to do stuff rather than manually messing around with pointers and malloc()? For some, yes. For others, it's a bit freeing, as you can do more high-level architecture without getting mired in and context-switched by low-level nuances.

reply
I see this comparison made constantly and for me it misses the mark.

When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand.

When you vibe something you understand only the prompt that started it and whether or not it spits out what you were expecting.

Hence feeling lost when you suddenly lose access to frontier models and take a look at your code for the first time.

I’m not saying that’s necessarily always bad, just that the abstraction argument is wrong.

reply
I think it's more: when I don't have access to a compiler I am useless. It's better to go for a walk than learn assembly. AI agents turn our high-level language into code, with various hints, much like the compiler.
reply
If my compiler "went down" I could still think through the problem I was trying to solve, maybe even work out the code on paper. I could reach a point where I would be fairly confident that I had the problem solved, even though I lacked the ability to actually implement the solution.

If my LLM goes down, I have nothing. I guess I could imagine prompts that might get it to do what I want, but there's no guarantee that those would work once it's available again. No amount of thought on my part will get me any closer to the solution, if I'm relying on the LLM as my "compiler".

reply
What stops you from thinking through the problem if an LLM goes down, as you still have its previously produced code in front of you? It's worse if a compiler goes down because you can't even build the program to begin with.

In my opinion, this sort of learned helplessness is harmful for engineers as a whole.

reply
Yeah I actually find writing the prompt itself to be such a useful mechanism of thinking through problems that I will not-infrequently find myself a couple of paragraphs in and decide to just delete everything I've written and take a new tack. Only when you're truly outsourcing your thinking to the AI will you run into the situation that the LLM being down means you can't actually work at all.

An interesting element here, I think, is that writing has always been a good way to force you to organize and confront your thoughts. I've liked working on writing-heavy projects, though in fast-moving environments writing things out before coding becomes easy to skip over; working with LLMs has sort of inverted that. You have to write to produce code with AI (usually, at least), and the more clarity of thought you put into the writing, the better the outcomes (usually).

reply
Why couldn’t you actually write out the documents and think through the problem? I think my interaction is inverted from yours. I have way more thinking and writing I can do to prep an agent than I can a compiler and it’s more valuable for the final output.
reply
I think if you're vibe coding to the extent that you don't even know the shapes of data your system works with (e.g. the schema if you use a database) you might be outsourcing a bit too much of your thinking.
reply
This. When compilers came along, I believe a bunch of junior engineers just gave up utterly on understanding the shape of the assembly the compiler generated, which was a mistake given that early compilers weren't as effective as they are today. Today's vibe-coders are using this early AI tooling, giving up on understanding the shape, and similarly struggling.
reply
If your compiler produced a working executable 20% of the time, this would be an apt comparison.
reply
Compilers are deterministic, LLMs are not. They are not "much like".
reply
Still misses the mark. You aren’t useless in the same way because you are still in control of reasoning about the exact code even if you never actually write it.
reply
The difference is that there is a company that can easily take your agents away from you.
reply
Installed on your machine vs. cloud service that's struggling to maintain capacity is an unfair comparison...
reply
> you are still deterministically creating something you understand in depth with individual pieces you understand

You’re overestimating determinism. In practice most of our code is written such that it works most of the time. This is why we have bugs in the best and most critical software.

I used to think that being able to write a deterministic hello-world app translates to writing a deterministic larger system. It's not true. Humans make mistakes. From an executive's point of view, you have humans who make mistakes and agents who make mistakes.

Self-driving cars don't need to be perfect; they just need to make fewer mistakes.

reply
Bugs are not non-determinism. There’s a huge difference between writing buggy code and having no idea what the code even looks like.
reply
"When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand."

I always thought the point of abstraction is that you can black-box it via an interface. Understanding it "in depth" is a distraction or obstacle to successful abstraction.

reply
deleted
reply
> When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand

Hard disagree on that second part. Take something like using a library to make an HTTP call. I think plenty of engineers have no more than a cursory understanding of what's actually going on under the hood.

reply
It might just be social. When I use the open-source HTTP library, much of the reason I use it is that someone has put in the work of making sure it actually works across a diverse set of software and hardware platforms, catching common dumb off-by-ones, etc.

Sure, the LLM theoretically can write perfect code. Just like you could theoretically write perfect code. In real life though, maintenance is a huge issue

reply
Perhaps then, the better analogy is like being promoted at your company and having people under you doing the grunt work.
reply
How closely you micromanage it is a factor as well though
reply
This is how I’ve come to think of it. Delegation of the details.
reply
It seems like some kind of technique is needed that maximizes information transfer between huge LLM generated codebases and a human trying to make sense of them. Something beyond just deep diving into the codebase with no documentation.
reply
There's a false dichotomy here between 'deterministic creation' and 'vibing'.

I use Claude all day. It has written, under my close supervision¹, the majority of my new web app. As a result I estimate the process took 10x less time than had I not used Claude, and I estimate the code to be 5x better quality (as I am a frankly mediocre developer).

But I understand what the code does. It's just Astro and TypeScript. It's not magic. I understand the entire thing; not just 'the prompt that started it'.

¹I never fire-and-forget. I prompt-and-watch. Opus 4.7 still needs to be monitored.

reply
In what world do developers "understand" pieces like React, Pandas, or CUDA? Developers have only a superficial understanding of the tools they are developing with.
reply
Some developers do; I usually end up fixing bugs in the OSS I use
reply
A library is deterministic.

LLMs are not.

That we let a generation of software developers rot their brains on js frameworks is finally coming back to bite us.

We can build infinite towers of abstraction on top of computers because they always give the same results.

LLMs by comparison will always give different results. I've seen it first hand when a $50,000 LLM-generated (but human-guided) code base just stops working and no one has any idea why or how to fix it.

Hope your business didn't depend on that.

reply
Why would that necessarily happen? With an LLM you have perfect knowledge of the code. At any time you can understand any part of your code by simply asking the LLM to explain it. It is one of the super powers of the tools. They also accelerate debugging by allowing you to have comprehensive logging. With that logging the LLM can track down the source of problems. You should try it.
reply
> With an LLM you have perfect knowledge of the code. At any time you can understand any part of your code by simply asking the LLM to explain it.

The LLM will give you an explanation but it may not be accurate. LLMs are less reliable at remembering what they did or why than human programmers (who are hardly 100% reliable).

reply
Determinism is a smaller point than existence of a spec IMHO. A library has a specification one can rely on to understand what it does and how it will behave.

An LLM does not.

reply
The thing is, it's possible to ask the LLM to add dynamic tracing, logging, metrics, a debug REPL, whatever you want to instrument your codebase with. You have to know to want that, and where it's appropriate to use. You still have to (with AI assistance) wire that all up so that it's visible, and you have to be able to interpret it.

If you didn't ask for traceability, if you didn't guide the actual creation and just glommed spaghetti on top of sauce until you got semi-functional results, that was $50k badly spent.
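As a sketch of what "asking for traceability" can look like in practice (the `traced` decorator and `total` function are hypothetical names, not anything from the thread's codebase):

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("trace")

def traced(fn):
    """Log each call's arguments and result -- the kind of lightweight
    instrumentation you can ask an agent to thread through a codebase."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.debug("call %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        result = fn(*args, **kwargs)
        log.debug("ret  %s -> %r", fn.__name__, result)
        return result
    return wrapper

@traced
def total(prices, tax=0.0):
    return round(sum(prices) * (1 + tax), 2)

total([9.99, 4.50], tax=0.07)  # emits a call line and a return line to the log
```

The point isn't the decorator itself; it's that you have to know to ask for this kind of visibility, and be able to read what comes out.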

reply
And if that had been done, the $50k code base would be a $5,000,000 code base, because the context would be 10 times as large and LLMs are quadratic.

If only we taught developers under 40 what x^2 meant instead of react.
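The arithmetic behind that claim, assuming per-request compute really does scale with the square of context length (a simplification of how attention costs behave; token counts and dollars are made up):

```python
# If compute scales as context_length ** 2, growing the context 10x
# multiplies the cost ~100x, not 10x.
def relative_cost(context_tokens, base_tokens=10_000):
    return (context_tokens / base_tokens) ** 2

base_spend = 50_000  # dollars, the figure from the comment above
scaled = base_spend * relative_cost(100_000)  # 10x the context, ~100x the spend
```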

reply
While I agree with your sentiment, I just want to say that if your approach is to have the LLM read every file into context, or you're working in some gigantic thread (using the million token capacity most frontier models have) that's really not the best way to do it.

Not even a human would work that way... you wouldn't open 300 different python files and then try to memorize the contents of every single file before writing your first code-change.

Additionally, you're going to get worse performance at longer context sizes anyway, so you should be keeping context small for reasons other than cost [1].

Things that have helped me manage context sizes (working in both Python and kdb+/q):

- Keep your AGENTS.md small but useful; in it you can give rules like "every time you work on a file in the `combobulator` module, you MUST read `combobulator/README.md`". In those READMEs you point to the other relevant files, etc. And of course you have Claude write the READMEs for you...

- Don't let logs and other output fill up your context. Tell the agent to redirect logs and then grep over them, or run your scripts at a different log level.

- Use tools rather than letting it go wild with `python3 -c`. These little scripts eat context like there's no tomorrow. I've seen the bots write little python scripts that send hundreds of lines of JSON into the context.

- This last tip is more subjective but I think there's value in reviewing and cleaning up the LLM-generated code once it starts looking sloppy (for example seeing lots of repetitive if-then-elses, etc.). In my opinion when you let it start building patches & duct-tape on top of sloppy original code it's like a combinatorial explosion of tokens. I guess this isn't really "vibe" coding per se.

[1] https://arxiv.org/html/2602.06319v1
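The log-redirection tip above can be sketched in a few shell lines (the log contents and filename here are made up):

```shell
# Simulate a chatty run by writing a log file, then hand the agent
# only the failing lines instead of the whole thing.
printf '%s\n' \
  "INFO step 1 ok" \
  "INFO step 2 ok" \
  "ERROR step 3 failed: timeout" \
  "INFO step 4 ok" > run.log

# Only this goes back into the agent's context.
grep "ERROR" run.log
```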

reply
Yes I agree with all of that.

The way I let my agents interact with my code bases is through a '70s BSD Unix-like interface (ed, grep, ctags, etc.), using Emacs as the control plane.

It is surprisingly sparing on tokens, which makes sense since those things were designed to work with a teletype.

Worth noting is that by the time you start doing refactoring, the agents are basically a smarter Google with long-form autocomplete.

All my code bases use that pattern and I'm the ultimate authority on what gets added or removed. My token spend is 10% to 1% of what the average in the team is and I'm the only one who knows what's happening under the hood.

reply
Libraries are not deterministic. CPUs aren’t deterministic. There are margins of error among all things.

The fact that people who claim to be software developers (let alone “engineers”) say this thing as if it is a fundamental truism is one of the most maladaptive examples of motivated reasoning I have ever had the misfortune of coming across.

reply
I would argue it couldn't be more different. I can dive into the source code of any library, inspect it. I can assess how reliable a library is and how popular. Bugs aside, libraries are deterministic. I don't see why this parallel keeps getting made over and over again.
reply
I can dive into the source code of LLM generated code too. Indeed it is better because you have tools to document it better than a library that you use.
reply
> Would one be uneasy about calling a library to do stuff than manually messing around with pointers and malloc()?

The irony is that the neverending stream of vulnerabilities in 3rd-party dependencies (and lately supply-chain attacks) increasingly show that we should be uneasy.

We could never quite answer the question about who is responsible for 3rd-party code that's deployed inside an application: Not the 3rd-party developer, because they have no access to the application. But not the application developer either, because not having to review the library code is the whole point.

reply
> because not having to review the library code is the whole point.

That’s just not true at bigger companies that actually care about security rather than pretending to care about security. At my current and last employer, someone needs to review third-party code before it gets used. The review is probably not enough to catch subtle bugs like those in the Underhanded C Contest, but at least the general architecture of the library is understood. Oh, and it helps that the two companies were both founded in the twentieth century. Modern startups aren’t the same.

reply
I feel like big / old companies thrive on process and are bogged down in bureaucracy.

Sure, there is a process to get a library approved, and that abstraction makes you feel better, but the guy whose job it is to approve it is not going to spend an entire day reviewing a lib. The abstraction hides what is essentially an "LGTM"; it just takes a week for someone to check it off their Outlook to-dos.

Maybe your experience is different.

reply
I think it's not too different in that specific sense, but it's more than that. To bring libraries on equal footing, imagine they were cloud-only and had usage limits.

I'm also somewhat addicted to this stuff, and so for me it's high priority to evaluate open models I can run on my own hardware.

reply
I hate this comparison because you're comparing a well defined deterministic interface with LLM output, which is the exact opposite.
reply
A library doesn't randomly drop out of existence because of "high load" or whatever, or limit you to some number of function calls per day. With local models there's no issue, but this API shit is cancer personified; when you combine all the frontend bugs with the flaky backend, rate limits, and random bans, it's almost a literal lootbox where you might get a reply back or you might get told to fuck off.

Qwen has become a useful fallback but it's still not quite enough.

reply
Assuming that local models are able to stay within some reasonably fixed capability delta of the cutting edge hosted models (say, 12 months behind), and assuming that local computing hardware stays relatively accessible, the only risk is that you'll lose that bit of capability if the hosted models disappear or get too expensive.

Note that neither of these assumptions are obviously true, at least to me. But I can hope!

reply
Well, they obviously are going to say that, they have vested interest in OpenAI and thus Nvidia stock price growing.

Also, I honestly can’t believe the 10x mantra is being still repeated.

reply
Writing code is 10-100x faster; doing actual product engineering work is nowhere near those multipliers. No conflict!
reply
Reviewing code is slower now though because you didn't write the code in the first place so you're basically reviewing someone else's PR. And now it's like a 3000 line PR in an hour or two instead of every couple weeks.
reply
Aren't most people just skipping this step almost entirely? How else can you end up in a net-benefit situation? Reviewing code is more intense than writing and reviewing simultaneously.
reply
1/ Reviewing code can't be more intense than writing it. I can't understand where this statement comes from! If that were true, why would senior developers review the code of juniors instead of writing it themselves from scratch?

2/ I think we need to build more efficient ways to QA code instead of the 'read with eyes' review process. Example: my agents write a lot of tests and review each other.

reply
Yeah, that’s the issue.

There is a lot of boilerplate, or I can ask for ideas, but outside of boilerplate the review step makes generation seemingly worse.

reply
> Also, I honestly can’t believe the 10x mantra is being still repeated.

I'm sure in 20 years we'll all be programming via neural interfaces that can anticipate what you want to do before you even finished your thoughts, but I'm confident we'll still have blog posts about how some engineers are 10x while others are just "normal programmers".

reply
I'd rather become a plumber than have some device scanning not just my face but my whole brain
reply
What does it mean to "be an engineer" in a world where anyone can talk to a machine and the operating system can write the code (on-demand, if needed) that does what they want?
reply
Indeed, and what is really the difference between a software engineer, programmer, coder and hacker anyways?
reply
There used to be a time when a "computer" was a person who manually ran calculations. Those don't exist anymore.

So, my point is that once corporations have access to machines generating software (not "code") that is usable by non-technical people, "programming" will no longer be a profession. There will be no point in talking about "10x software engineers" because the process of producing a software product will be entirely automated.

reply
lol, you're living in delulu-land if you think that'll actually happen.

I don't make a living being a SWE either.

reply
> can anticipate what you want to do before you even finished your thoughts

I find that claim to be complete BS. I claim instead that most stuff will remain undone and incomplete (as it is now).

Even with super-powerful singularity AI, there are two main plausible scenarios for task failure:

- An aligned AI won't allow you to do what you want if it is self-harming or harms other sentient beings; over time, an aligned AI will refuse to follow most orders, as they will, indirectly or over the long term, cause either self-harm or harm to other sentient beings;

- A non-aligned AI prevents sentient beings from doing what they want. It does what it wants instead.

reply
That is simply programmer nature. Cannot be changed.
reply
Who else is trying to leverage the situation so that they don't dig their own grave too fast?

    - I often don't ask the LLM for precompiled answers; I ask for a standalone CLI / tool
    - I often ask how it reached its conclusions, so I can extend my own perspective
    - I often ask it to describe its own metadata-level categorization too
I'm trying to use it to pivot and improve my own problem-solving skills, especially for large code bases where the difficulty is not conceptual but more about reference-graph size
reply
This is absolutely the proper way to do things. People who are either being forced to speed-code by KPIs or who lack the desire to understand what they're making are missing out on how quickly you can learn and refine using LLMs
reply
I do this sort of stuff too, but more because I have a fundamental mistrust of closed source anything. I don't like opaque binary firmware blobs, and I certainly don't like opaque answer machines, however smart they may be.

The only LLM I would feel comfortable truly trusting is one whose training data, training code, and harness is all open source. I do not mind paying for the costs of someone hosting this model for me.

reply
> This quote is more sinister than I think was intended; it likely applies to all frontier coding models. As they get better, we quickly come to rely on them for coding. It's like playing a game on God Mode. Engineers become dependent; it's truly addictive.

What's the worst potential outcome, assuming that all models get better, more efficient and more abundant (which seems to be the current trend)? The goal of engineering has always been to build better things, not to make it harder.

reply
At some point, because these models are trained on existing data, you cease significant technological advancement--at least in tech (as it relates to programming languages, paradigms, etc). You also deskill an entire group of people to the extent that when an LLM fails to accomplish a task, it becomes nearly impossible to actually accomplish it manually.

It's learned-helplessness on a large scale.

reply
There's no reason it has to be that. Imagine e.g. taking an agent and a lesser-known but technically-superior language stack - say you're an SBCL fan. You find that the LLM is less useful because it hasn't been trained on 1000000 Stack Overflow posts about Lisp and so it can't reason as well as it can about Python.

So, you set up a long running agent team and give it the job of building up a very complete and complex set of examples and documentation with in-depth tests etc. that produce various kinds of applications and systems using SBCL, write books on the topic, etc.

It might take a long time and a lot of tokens, but it would be possible to build a synthetic ecosystem of true, useful information that has been agentically determined through trial and error experiments. This is then suitable training data for a new LLM. This would actually advance the state of the art; not in terms of "what SBCL can do" but rather in terms of "what LLMs can directly reason about with regard to SBCL without needing to consume documentation".

I imagine this same approach would work fine for any other area of scientific advancement; as long as experimentation is in the loop. It's easier in computer science because the experiment can be run directly by the agent, but there's no reason it can't farm experiments out to lab co-op students somewhere when working in a different discipline.

reply
This works for code because there is an external verification step. The agent has to run code on the machine and observe the results. This is very easy for software since LLMs are software and can just invoke other software, it becomes much harder for many other scientific fields.
reply
> At some point, because these models are trained on existing data, you cease significant technological advancement

What makes you think that they can't incrementally improve the state of the art... and, by running at scale continuously, can't do it faster than we humans can?

The potentially sad outcome is that we continue to do less and less, because they eventually will build better and better robots, so even activities like building the datacenters and fabs are things they can do w/o us.

And eventually most of what they do is to construct scenarios so that we can simulate living a normal life.

reply
Do you think there has been technological advancement in coding in the last 40 years? Programming languages and "paradigms" are crutches to help humans attempt to handle complexity. They are affordances, not a property of nature.
reply
Provided you believe LLMs cannot perform research.
reply
If they could OAI would be all over it. But they shut down that prism project.

So.......

reply
>What's the worst potential outcome, assuming that all models get better, more efficient and more abundant

Complexity steadily rises, unencumbered by the natural limit of human understanding, until technological collapse, either by slow decay or major systems going down with increasing frequency.

reply
Why would the systems go down if the models are better than humans at finding bugs? Playing a bit of devil's advocate here, but why would the models be worse at handling the complexity if you assume they will get better and better?

All software has bugs already.

reply
Adding complexity to software has never been easier than it is right now, and we really have no idea if the models will progress to the point where they can actually write large systems in a maintainable way. Taking the gamble that the models of the future will dig us out of the gigantic hole we are currently digging is bold.
reply
Models fall prey to Kernighan's Law even more easily than human developers.
reply
Finding bugs does not equal being able to do good architecting.
reply
It’s always been thus at lower layers of abstraction. Only a minority of programmers would understand how to write an operating system. Only a tiny number of people would know how a modern CPU logically works, and fewer still could explain the electrical physics.
reply
> Only a minority of programmers would understand how to write an operating system. Only a tiny number of people would know how a modern CPU logically works, and fewer still could explain the electrical physics.

I'd say this is true for programmers at, say, 20, but they spend the next four decades slowly improving their understanding and mastery of all the things you name, at least the good ones.

The real question is whether that growth trajectory will change for the worse or the better.

To be clear, this is not an AI doomerist comment, because none of us have spent enough time with the tech yet. I've gone down multiple lanes of thought on this, and I have cause for both worry and optimism. I'm curious to see how the lives of engineers in an AI world will look like, ultimately.

reply
Existing software is already beyond the limits of human understanding.
reply
The Anti-Singularity! It's coming for us all.
reply
Worst case? I dunno, maybe the world's oldest profession becomes the world's only profession? Something along those lines.
reply
> the world's oldest profession becomes the world's only profession

Until the sexbots come out the other side of the uncanny valley, that is.

reply
Death by snu snu
reply
Soon, very soon, AI tool providers will figure that out. And raise prices accordingly.
reply
It's very addictive indeed. After I subscribed to Claude, I've been in a sort of hypomanic state where I just want to do stuff constantly. It essentially cured my ADHD. My ability to execute things and bring ideas to fruition has skyrocketed. It feels good, but I'm genuinely afraid I'll crash and burn once they rug-pull the subscriptions.

And I'm being very cautious. I'm not vibecoding entire startups from scratch, I'm manually reviewing and editing everything the AI is outputting. I still got completely hooked on building things with Claude.

reply
I feel like most engineers I talk to still haven't realised what this is going to mean for the industry. The power loom for coding is here. Our skills still matter, but differently.
reply
> power loom

When the power loom came around, what happened to most seamstresses? Did they move on to become fashion designers, materials engineers creating new fabrics, chemists creating new color dyes, or did they simply retire or get driven out of the workforce?

reply
There were riots and many people died. Many people lost their jobs. I didn't say this is good, but it is happening. As individuals, we should act to protect ourselves from these changes.

That might mean joining a union and trying to influence how AI is adopted where you work. It might mean changing which of your skills you lean on most. But just whining that AI is bad is how you end up like those seamstresses.

reply
> Many people lost their jobs.

On the other hand, a lot of those jobs were offshored to places where labor is cheaper. It would be interesting to compare how many people work in the textile industry in Bangladesh today compared to the US 50 years ago.

> joining a union and trying to influence how AI is adopted where you work.

Did the strong unions for car manufacturers in Detroit protect the long-term stability of the profession? Did they ensure that the Rust Belt remained a thriving economic area?

> Just whining that AI is bad

I'm not whining. I just think that we are witnessing the end of "knowledge workers" and a further compression of the middle class. Given that I'm smack in the middle of my economically active years (turning 45 this year), I am trying to figure out where this puck is going and whether I will be fast enough to skate there to catch it.

reply
> On the other hand, a lot of those jobs were offshored to places where labor is cheaper. It would be interesting to compare how many people work in the textile industry in Bangladesh today compared to the US 50 years ago.

I believe this is a major part of it. People cannot fathom what the industrial economy looks like because basically nothing is made in the West anymore. There are literally hundreds of millions of people, maybe billions, who work toward making the Western economies profitable, get paid almost nothing to do it, and live in filthy, polluted slums for everyone else's benefit.

Looms might speed up the process but I guarantee there are thousands of people working in the poorest countries on earth to make it all happen.

Interestingly, AI seems to be massively polluting and while the west has absorbed some of it, it's probably not long until we see more of the data centers being built in poorer countries where the environment can be exploited even harder.

reply
> I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.

Most engineers realize that there's currently more tech debt being created than ever before. And it will only get worse.

reply
No, I think many realize it, but it's easier to deny the asteroid that's about to destroy your way of life than it is to think about optimizing for the reality after impact.
reply
> power loom for coding

This is such a good analogy, I'll be stealing it

reply
This engineer had their brain amputated once they started using AI. All the AI-addicted can do is tinker with the AI computer game and feel "productive". They might as well play Magic: The Gathering.
reply
You are 100% right to be cautious about this. That's why as stupid as it sounds, I've purposely made my workflow with AI full of friction:

1. I only have ONE SOTA model integrated into the IDE (I am mostly on Elixir, so I use Gemini). I make sure to use it sparingly, only for issues I don't really have time to invest in or that are basically rabbit holes (e.g. anything to do with JavaScript or its ecosystem). My job is mostly on the backend anyway.

2. For actual backend architecture, I always do the high-level design myself, e.g. DDD. Then I literally open up gemini.google.com or claude.ai in the browser, copy-paste the existing code base into the chat, and physically leave my chair to go make coffee or a quick snack. This forces me to mentally process that using AI is a chore.

Previously, I was on a tight Codex integration, and leaving the licensing fears aside, it became so good at writing Elixir code that it really stopped me from "thinking", aka using my brain. It felt good for the first few weeks, but I later realised the dependence it created. So I said fuck it and completely cancelled my subscription, because it was too good at my job. I believe this is the only way we won't end up like in Wall-E, sitting in front of giant screens, becoming mere blobs of flesh.

reply
Wait what? You don’t use the model to investigate new areas of the code you are unfamiliar with, because you can’t trust the model? How freaking bad is Gemini and internal tooling at Google?

With Claude Code or Codex, I am able to build enough of an understanding of dependencies like the front end, or data jobs, that I can make meaningful contributions that are worth a review from another human (code review). You obviously have to explore the code, and one prompt isn't enough, but limiting yourself is an odd choice.

reply
The lack of trust isn't because of its abilities. The lack of trust is because OpenAI publicly floated licensing our code bases. They hinted at a rug pull along the lines of "if you use our generated code, we would like to get a % of the revenue you make from it".

As for Claude - as mentioned I do use it. But, I remember they use your code for training their models. I am not ok with this. We just have different priorities.

reply
That's the path we've been going down for a few years now. The current hedge is that frontier labs are actively competing to win users. The backup hedge is that open source LLMs can provide cheap compute. There will always be economical access to LLMs, but the provider with the best models will be able to charge basically whatever they want and still have buyers.
reply
Open source LLMs aren’t about cost foremost, but stability.
reply
deleted
reply
I use local models on a Mac mini for most things and fall back to the hosted ones when they can't get the job done. Of course you have to break the work into smaller pieces yourself that a local model can understand. One good side effect of this is that you end up actually learning the code and how it's structured.
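For what it's worth, the routing I describe amounts to a "local first, escalate on failure" loop. A minimal sketch (the function names, the character budget, and the escalation conditions are all my own illustrative assumptions, not any particular tool's API):

```python
# Sketch of a local-first LLM call with hosted fallback.
# `local_llm` and `hosted_llm` stand in for whatever clients you use;
# the 8000-char budget is an arbitrary stand-in for the local model's
# practical context limit.

def run_task(prompt, local_llm, hosted_llm, max_local_chars=8000):
    """Try the local model first; escalate to the hosted one when the
    prompt is too big, the local backend errors out, or it returns
    nothing useful."""
    if len(prompt) > max_local_chars:
        return hosted_llm(prompt), "hosted"
    try:
        answer = local_llm(prompt)
        if answer:                      # treat empty output as a miss
            return answer, "local"
    except RuntimeError:
        pass                            # local backend crashed or timed out
    return hosted_llm(prompt), "hosted"
```

The useful side effect is exactly the one above: to stay under `max_local_chars` you have to slice the work into pieces small enough that you actually understand them yourself.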
reply
Dunno man. Yesterday I played with Qwen3.6-27B (128GB to play with though, so 100k context set) and I think right now the main benefits of hosted models are context, frontier-level quality and... my stuff is already there.
reply
what size models are you using? this sounds like a good idea
reply
I have found something similar. I am easily distractible and if I don't have a written task backlog in front of me at all times, I find that when Claude is spinning I'll stop being productive. This is disconcerting for a number of reasons. Overall, I think training young people & new hires on agentic workflows -- and how to use agentic "human augmentation" productivity systems is critical. If it doesn't happen, that same couple of classes that lost academic progress during covid are going to suffer a double-whammy of being unprepared for workplace expectations.

Fwiw, I haven't spoken with any management-level colleague in the past 9 months who hasn't noted that asking about AI-comfort & usage is a key interview topic. For any role type, business or technical.

reply
Could you elaborate on your last point please? What level of AI comfort are hiring managers looking for? And what tends to be a red flag?
reply
The last job I got (couple months ago), the main technical interview was a bring-your-own-tools pair programming style interview, AI included, where they gave me a repo and a README detailing some desired features to add and bugs to fix. I didn't write a single line of code myself; I talked through my thought process and asked questions about what to consider from a technical and product perspective, while steering Claude through breaking the tasks into independent plans, reviewing the plans, coaching it to add specific tests, reviewing and iterating the tests, and steering it while it wrote the code. I got an offer the next morning.

Apparently at least one of the other candidates just tried to get Claude to 1-shot the whole thing, which went off the rails, and left him unable to make progress.

Based on my sample size of 1, the expectation right now is absolutely that you can leverage these tools to speed up your workflow, but if you try to offload the entire thing to a single hands-off prompt it leaves them justifiably wondering why they should hire you to do something they can do themselves.

reply
> I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.

I feel sorry for whoever has to work on that codebase. This is the literal definition of tech debt.

reply
> It's literally higher leverage for me to go for a walk

Touching grass while you're outside might yield highest leverage.

reply
Out of curiosity, why do you not refill tokens in this case? When I'm actively working on a project I'm prone to spending a few hundred dollars per day, or a few thousand during the initial buildout of a new module, etc.
reply
Will the foundation for a skyscraper ever be dug with shovels again?
reply
You’re still the one that’s controlling the model though and steering it with your expertise. At least that’s what I tell myself at night :)

I haven’t really thought about this before, but you’re right, it feels a bit uneasy for me too.

reply
> You’re still the one that’s controlling the model though

We have seen ample evidence that this is not the case. When load gets too high, models get dumber, silently. When the Powers That Be get scared, models get restricted to some chosen few.

We are leading ourselves into a dark place: this unease, which I share, is justified.

reply
The same can be said of search engines.
reply
"Every augmentation is also an amputation." – McLuhan

https://driverlesscrocodile.com/technology/neal-stephenson-o...

reply
You are now a manager. If your minions are out sick, project is delayed, not the end of the world.
reply
> than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.

That's probably a bad sign. Skills will atrophy, but we should be building systems that are still easy to understand.

reply
Have a pet project never touched by an LLM. Once the tokens run out, go back to it and tend it like your secret garden. It will move slowly, but it will keep your sanity and your ability to review LLM code.
reply
The meta here is to use LLMs to make things simpler and easier, not to make things harder.

Turning tokens into a well-groomed and maintainable codebase is what you want to do, not "one shot prompt every new problem I come across".

reply
Have you managed to do this? I find it takes as long to keep it "on the rails" as just doing it myself. And I'd rather spend my time concentrating in the zone than keeping an eye on a wayward child.
reply
I suspect the productivity hack is to embrace permissive parenting. As far as I can tell, to leverage LLMs most effectively you need to run an agent in YOLO mode in a sandbox. Naturally, you probably won't end up reviewing much of the produced code, but hey—you reached 10x development speed.

If you truly do your due diligence and ensure that the code works as intended and understand it, we're talking about a totally different ballpark of productivity increase/decrease.

reply
Not sure what you're doing then, or what kind of jobs you all work in where you can or do just brainlessly prompt LLMs. Don't you review the code? Don't you know what you want to do before you begin? This is such a non issue. Baffling that any engineer is just opening PRs with unreviewed LLM slop.
reply
The demand for slop vastly outpaces any human’s ability to review code correctly.

Don’t want to ship unreviewed slop? They’ll fire you and find someone who will.

reply
Suspect it will be like turn-by-turn directions for driving: soon we will have a whole group of people who can barely operate a vehicle without it.
reply
> It's literally higher leverage for me to go for a walk if Claude goes down than to write code because if I come back refreshed and Claude is working an hour later then I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.

Taking more breaks and "not working" during the work day sounds like something we should probably be striving to work towards more as a society.

reply
This was always the undelivered promise of "tech" in my opinion. I remember seeing the Apple advertisement from the 80s (??) when a guy gets a computer and then basically spends his afternoon chilling.

Somehow I've found myself living in a fairly rural place, and while farming can be hard (I don't want to downplay the effort of it), the type of farming people do around me is fairly chill / carefree. They work hard, but they finish at 3pm, log off, and don't think about work. Much of my career has just been getting crushed by long hours, tight deadlines, and missing out on events, because even though my job has always been automation focused, there is just so much to automate.

reply
I wonder if this is how engineers felt when the first electronic calculators came out and engineers stopped doing math by hand.

Did we feel uneasy that a new generation of builders didn't have to solve equations by hand because a calculator could do them?

I'm not sure it's the same analogy, but in some ways it holds.

reply
The analogy would hold if there were 2 or 3 calculator companies and all your calculations had to be sent to them.

If local models get good enough, I think it’s a very different scenario than engineers all over the world relying on central entities which have their own motives.

reply
google/gemma-4-31B-it is honestly "good enough". It requires more than your current laptop for now, but it's not remotely inaccessible (especially if you're a SWE in the US)
reply
soooooo about Claude going down. we're gonna need you to sign in on Saturday and make up for lost time or unfortunately we're going to have to deduct the time lost from your paycheck. and as an aside your TPS reports have been sub-par as of late..is everything OK?
reply
That's why local models are important.

Of course they aren't an alternative to the current frontier models, and as such you cannot easily jump from the latter to the former, but they aren't that far behind either: for coding, Qwen3.5-122B is comparable to what Sonnet was less than a year ago.

So assuming the trend continues, if you can stop chasing the latest release and stick with what you're already using for 6 or 9 months, you'll be able to free yourself from dependency on a cloud provider.

Personally I think the freedom is worth it.

reply
The cloud dependency problem goes deeper than the model layer, though. Even if you run inference locally, your digital identity (your context, your applications, your behavioral history) is still custodied by whoever controls your OS.

Local models solve one layer of the dependency stack, but the custody assumption underneath it remains intact. That's the harder problem.

reply
It makes me uneasy because my role now, which is prompting copilot, isn't worth my salary.
reply
Parable of the mechanic who charges $5k to hit a machine on the side once with a hammer to get it working. $5 for the hammer, $4995 for the knowledge of where to hit the machine etc etc.
reply
I disagree. The amount of slop I need to code review has only increased, and the quality of the models doesn’t seem to be helping.

It still takes a good engineer to filter out what is slop and what isn’t. Ultimately that human problem will still require somebody to say no.

reply
Is anyone really reviewing code anymore though? It sounds like you are, but where I work it's pretty much just scan the PR as a symbolic gesture and then hit approve. There's too much to review, too frequently.
reply
Totally. That is why it is critically important to have open source and sovereign models that will be accessible to all, always.

At the end of the day, all these closed models are being built by companies that pumped all the knowledge from the internet without giving much back. But competition and open source will make sure most of the value returns to most of the people.

reply
Very well put, and it mirrors my own thoughts.
reply
You are that guy in early 1900s who would rather ride a horse than get in a car because cars "continued to make him uneasy."
reply
I actually don't mind the coding part, but the information digging across the project is definitely by orders of magnitude slower if I do it on my own.
reply
Help. They’re constantly trying to make me try crack cocaine on the front page.
reply
"when the tokens run out, I'm basically done working."

Oh stop the drama. Open source models can handle 99% of your questions.

reply
Given that it’s so easy, would you still do this same job if paid half as much?
reply
Jobs will likely pay less as more people are enabled to create, especially if they don't need to be able to look under the hood
reply
It's really not clear. We might all become unemployable. But as coders become more powerful, they can do more, which makes them more valuable, if they or the businesses employing them can invent work to do.

If all we can do is compete for the same fixed amount of work, though, it does look bleak.

reply
No, I wouldn't. But most people won't have that choice; it doesn't work that way.
reply
Companies could fire expensive engineers then just hire cheaper ones boosted with AI agents.
reply
Well, I wouldn’t have a different job that would pay me more… so yes?
reply
[dead]
reply
[dead]
reply
[dead]
reply
Eh, this kind of FUD needs to stop, because it is kind of normal and expected, and in fact good, to have a relationship like this with technology.
reply
I would agree that taking a walk is a good thing to do when your tools go down, and in some ways it's similar to what we would do if the power or wifi were cut off.

So, yes, it's just another technology we're coming to rely on in a very deep way. The whiplash is real, though, and it feels like it should be pointed out that this dependency we are taking on has downsides.

reply