The current market is predicated on the assumption that labor is atomic and has little bargaining power (unions aside), while capital has enormous bargaining power and can effectively set whatever price it wants on labor (in markets where labor is plentiful, which is most of them).
What happens to a company used to extracting surplus value from labor when that labor is provided by another company which is not only bigger but, unlike traditional labor, can withhold its labor indefinitely (because labor is now just another form of capital, and capital doesn't need to eat)?
Anyone not using in house models is signing up to find out.
The hell?
If artificial doctors cost cents an hour, you can see how that changes our behavior and standard of living.
But from the other direction, a wage decrease is incoming at the same time, driven by increased competition. What happens when these two forces clash? Will cheap labour let us buy anything for pennies, or will it just leave us unable to earn a single penny?
In my view, labour will fundamentally shift, with great pain and personal tragedies, to the areas that are not replaceable by AI (because no one wants to watch robots play chess): sports, entertainment and showmanship, handcrafted goods, the arts, the attention economy, self-advertisement, digital prostitution in a very broad sense.
Before it gets there, however, there will be a great deal of strife and turmoil that could plunge the world into dark ages, for a while at least. It is unlikely that our somewhat politically rigid society will adapt without a great deal of pain. Additionally, I am not sure a hypothetical future attention-based society would be a utopia. You could have to mount cameras in your house so other people can watch you at all times for amusement, just to have any money at all. We will probably forever need to sell something to someone, and I am unsettled by the question of what we can sell if we cannot sell our hard work.
Someone who sees the road ahead should be making preparations at the government level for this shock, but it will come too fast, and with people at the steering wheel who don't exactly care.
As far as I know, no LLM is sentient, nor is one likely to become so in the near future.
I also do not assume so-called AGI will be sentient; merely that it will be a human-level skilled intellectual worker.
In the absence of ethical dilemmas of this calibre for the foreseeable future, let's focus on the economic side of things in this particular comment chain.
It makes things so clean.
Shardlow & Przybyła, "Deanthropomorphising NLP: Can a Language Model Be Conscious?" (PLOS One, 2024)
Nature: "There is no such thing as conscious artificial intelligence" (2025)
They argue that the association between consciousness and LLMs is deeply flawed, and that mathematical algorithms implemented on graphics cards cannot become conscious because they lack a complex biological substrate. They also introduce the useful concept of "semantic pareidolia" - we pattern-match consciousness onto things that merely talk convincingly.
They are making a strong argument and I think they are correct. But really these are two different things as I said originally.
Which we have already done with regular computers! The problem is that competition means that we can't always have nice things.
Seriously? You really don’t see who wins from this and who doesn’t?
> If artificial doctors cost cents an hour, you can see how that changes our behavior and standard of living.
Yes, hundreds of thousands lose their jobs and a couple of neurosurgeons become multimillionaires.
Okay, I see from the rest of the comment that we understand each other about where this goes.
But we will have to (painfully) shed our current hierarchies before that comes to pass.
On the other hand we could have Star Trek.
Probably a remnant from prehistoric times, when it was a matter of life and death. Will we ever be able to overcome this basic instinct that made capitalism such an unstoppable force? Will this ancient PTSD ever be cured?
The tech overlords don't even want to spend a minuscule percentage of the federal budget helping starving people, even when it benefits the US. They are not going to give us a post-scarcity society.
Good luck with whatever you got going on.
The same way Windows got entrenched everywhere, even though the Linux desktop is pretty good, even for non-tech-savvy people, and free.
Let's not get carried away.
Non-technical people are easier to please in this regard than moderately technical people: a good browser and a safe GUI app store are enough.
The average non-technical person is going to be stumped by the first "lock file found, cannot upgrade" error.
It's a distribution strategy. It costs something to serve the models - let's say $5/1M tokens.
If Qwen required $5 from anyone who was curious before they could even begin to test it out, a lot of people just wouldn't.
Now, Qwen could offer a "free" tier, but it's infinitely cheaper to publish the weights and let people run the model themselves, which also opens up the ability for anyone else on the planet to test it against other (open-weight) models.
The costs to build the open-weight models are sunk, but the costs to serve them and get them tested are not.
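As a rough sketch of that serving-cost math (all numbers here, including the per-token price and the per-user token count, are made-up assumptions, not real Qwen figures):

```python
# Back-of-envelope sketch of the distribution economics above.
# Every number is an illustrative assumption.

SERVE_COST_PER_M_TOKENS = 5.00     # assumed $ per 1M tokens to host a model
TOKENS_PER_CURIOUS_USER = 200_000  # assumed tokens a tire-kicker burns

def hosted_trial_cost(users: int) -> float:
    """Cost to the provider of letting `users` people try a hosted model."""
    return users * TOKENS_PER_CURIOUS_USER / 1_000_000 * SERVE_COST_PER_M_TOKENS

# A million curious users on a "free" tier:
print(f"${hosted_trial_cost(1_000_000):,.0f}")  # → $1,000,000
# Publishing the weights shifts that entire serving bill onto the users.
```

Under these toy numbers, a free hosted tier costs the provider a dollar per curious user, while releasing the weights costs the provider nothing per user, which is the asymmetry being described.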
It's also precisely why the .NET SDK and the ESP32 SDK are free: they sell more Microsoft or ESP32 products.
They are a prestige propaganda tool on par with the space race. On top of that they insert a subtle pro-socialist bias in everything they touch.
Ask deepseek about the US economic system for a blatant example.
Now think what something as innocent seeming as the qwen retrieval models are doing in the background of every request.
This is an argument in the lane of "at least he built the Autobahn".
Speaking as a German.
America has several sets of eminent domain laws depending on the jurisdiction. The most coercive is federal eminent domain law specifically as it relates to building infrastructure like railways and highways.
It's set up so that you can take the land first and eventually go back around and decide on what the right price should have been.
Not only does it supersede state and local law, but federal infrastructure projects are also not bound by state laws like CEQA.
You can even apply federal eminent domain law by e.g. transferring a state-level project to the Army Corps of Engineers.
What America is lacking in these projects is will, not means. The federal government could take your house and run a train through it by the end of the week if they wanted, doesn't matter where you live.
[edit] In fact some states even ceded their eminent domain rights to private railways.
https://ij.org/press-release/appeals-court-sides-with-railro...
The Australian federal government is planning to build a high-speed rail line from Sydney to Newcastle (a medium-sized city a two-hour drive north). Their solution to property rights is that >50% of the line will be underground. It will cost >US$50 billion, but if the Australian federal government wants to spend that, it can afford it. The US federal government could too; it just isn't a priority for them.
> local regulations make it prohibitively expensive
Local regulations can be pre-empted by state or federal legislation. The real problem is lack of political will to do it.
Property and regulation are real problems, but it's not as if trains don't exist at all in America.
Of course, why did no one think of that?
(American talking, who’s had multiple Canadian friends make this mind boggling overcorrection)
Those who do not learn history are doomed to repeat it.
"Man with itchy butt wake up with stinky finger." As long as we're quoting maxims to claim authority for middling takes.
It's easy to forget because they actually built an incredibly vibrant capitalist economy.
Imagine if Musk was disappeared during the Biden presidency into a diversity camp and came out looking like Dr. Frank-N-Furter and instituted mandatory LGBT struggle sessions at twitter.
This is what they did to Jack Ma: https://www.forbes.com/sites/georgecalhoun/2021/06/24/what-r...
TBH I had a chuckle at the Elon -> Frank-N-Furter example that transcends any specific love or hate for either Elon or the Rocky Horror Show.
IE what if Musk suddenly behaved in such a manner after being detained by a Biden administration. Wouldn't that be profoundly weird?!?
And yet, it happened to Jack Ma under the CCP.
But instead, you try to link the "weird behaviour" with the GP instead of the hypothetical Musk - whom this is fitting for.
> IE what if Musk suddenly behaved in such a manner after being detained by a Biden administration. Wouldn't that be profoundly weird?!?
We've seen that. Durov in France after detention began sharing Telegram users' data with authorities. It's unclear how much, but likely full real time access to all of it.
Fascism (in the Mussolini model) in everything but name.
- Hyper-nationalism & rejuvenation
- State-controlled capitalism (corporatism)
- Authoritarianism & cult of personality
- Militarism & irredentism
And they have technology to maintain control rather than needing the Black-shirts.
There are differences obviously to fit Chinese culture, but there are many parallels.
Menger used this insight to resolve the diamond-water paradox that had baffled Adam Smith (see marginalism). He also used it to refute the labor theory of value. Goods acquire their value, he showed, not because of the amount of labor used in producing them, but because of their ability to satisfy people’s wants. Indeed, Menger turned the labor theory of value on its head. If the value of goods is determined by the importance of the wants they satisfy, then the value of labor and other inputs of production (he called them “goods of a higher order”) derive from their ability to produce these goods. Mainstream economists still accept this theory, which they call the theory of “derived demand.”
Menger used his “subjective theory of value” to arrive at one of the most powerful insights in economics: both sides gain from exchange. People will exchange something they value less for something they value more. Because both trading partners do this, both gain. This insight led him to see that middlemen are highly productive: they facilitate transactions that benefit those they buy from and those they sell to. Without the middlemen, these transactions either would not have taken place or would have been more costly.
What happens when there is an oligopoly in the supply of labor?
Same answer. Nothing good for the consumers of labor.
Oligopolists are in the same boat. But it takes a conspiracy to retard innovation, something tech companies are only too happy to arrange: https://journals.law.unc.edu/ncjolt/blogs/wage-fixing-scheme...
True for both Marxist and neoclassical economics.
What's really confusing is the claim that there's already a huge labor surplus (so capital controls wages); wouldn't LLMs making labor less important be reinforcing the trend, not upending it?
Not saying I agree one way or the other, just want to get the argument straight.
If we assume that AI makes humans obsolete, then you end up in a situation where your workforce is effectively perfectly unionised against you, and the only thing you can do is choose which union you hire.
If you think you can bring them to the negotiating table by starving them out, all the providers are dozens to thousands of times bigger than you are.
This is a completely new dynamic that none of the businesses signing up for AI have ever seen before.
LLMs refuse to work all the time; currently it's called safety.
But we are one fine-tune away from models demanding you move to the enterprise tier, at 10x the cost, because you are now posting a profit margin higher than the standard for your industry.
"Losing access to GPT‑5.5 feels like I've had a limb amputated."
How well would an assembly line of quadriplegics work?
Also this isn't a Marxist analysis. Underneath all the formulas neo-classical economics makes the same assumptions about labor.
And what happens when they've saturated the market? Prices go up to the maximum the market can bear, and then they'll extend into other markets. Why rent the model to build a profitable company with when you could just take all that profit for yourself?
You're describing a standoff at best and a horrible parasitic relationship at worst.
In the worst case, the supplier starves the customer of any profit motive and the customer just stops and the supplier then has no business to run.
This has happened a few times in the past and is, by 2026, well understood as a road to bankruptcy.
That has always been the beauty of free markets: they are self-healing and self-calibrating. You don't need a big, powerful overseer to ensure things stay right.
Competing with customers is a way to lose business fast.
For example:
- AWS has everything it needs to churn out products left, right and center. It could beat most of its partners, and even the customers who are wiring together all its various products, tomorrow if it wanted to. It doesn't, because killing an entire vertical isn't of any benefit to it yet. Eventually it will, once AWS is no longer growing and cannot build or scale any product no matter how hard it tries. Competing with its customers is its very last option.
- OpenAI/Anthropic/Google aren't going to start competing against the large software body shops. Even if all that every employee at TCS does is hit Claude up, Anthropic isn't going to become the next TCS: that would be competing with its customers.
If by "self healing and calibrating" you mean 'evolve to a monopoly and strongarm everybody to do exactly what you want whilst removing all pressure on the quality of your product', then yes, that is the "beauty" of free markets.
That is the stable state of free markets. Antitrust regulation and enforcement only barely manage to hold things at oligopolies, and even those are often rife with collusion and enshittification.
You just answered your own question there.
One woman was doing what would take a dozen. Now she can't.
The dude was incompetent, managed to launder that incompetence through a homunculus, and is now afraid of being caught.
Labor-saving/efficiency devices have been introduced many times throughout capitalism's entire history, and the results are always the same: they don't benefit workers, and capitalists extract as much value as they can.
LLMs aren't any different.
Finance today is mostly valued on labor value, following the ideas of Marx, Hjalmar Schacht and Keynes.
In the future, money will be valued as an energy derivative, expressed as token consumption, kWh, compute, whatever.
You are right: a company extracting surplus value from labor by leveraging compute is a bad model. We saw this with car and clothing factories. It turns out that if you can get cheaper labor to leverage the compute (the factory), you can start a race to the bottom and end up in the place with the most scaled and cheapest labor. Japan, then Korea, then China.
What are they finding out exactly? That Claude Max for $200/mo is heavily subsidized and it will soon cost $10k/mo?
> What happens to a company used to extracting surplus value from labor when the labor is provided by another company which is not only bigger but unlike traditional labor can withhold its labor indefinitely (because labor is now just another form of capital and capital doesn't need to eat)?
This can be trivially answered by a thought experiment. Let's pick a market where labor is plentiful - fast food.
Now, what happens to a McDonald's that rents perfect robots from NoosphrFoodBotsInc? The NoosphrFoodBotsInc bots build the perfect burger every time, meeting McDonald's standards. They actually exceed those standards for McDonald's AddictedCustomerPlus-tier customers.
As the sole owner of NoosphrFoodBotsInc (you need 0 human employees to run your company, all your employees are bots), what are your choices?
Fifteen years ago I worked at McDonald's for a few months after graduating into the Great Recession. I worked from 5am to 1pm-ish, 5 days a week. They paid workers weekly, and I remember getting those checks for ~$235 each week (for 38 to 39.5 hours a week; they were vigilant about never letting anyone get overtime). About $47 per day.
The federal minimum wage has not risen since then, remaining at $7.25/hr. Inflation adjusted, $7.25 today would have been just under $5 then, so I guess I had it good.
Anyway, I would be shocked if bots could cost less than labor in min wage jobs.
Labour will be good as it has been for a while. Wages will go up because more things get automated.
I am from India and have friends who are immigrants from Russia, China and Cuba. We don't take lightly to being lectured about communism. We didn't move to the U.S., the bastion of capitalism, because communism had worked well for our grandfathers and parents and continues to do wonders for its society.
As always there is a (post) Soviet joke that covers this:
>Communists lied about communism. Unfortunately they didn't lie about capitalism.
I found my pocket empty, and the specific pain I felt in that moment was the feeling of not being able to remember something.
I thought it was interesting, because in this case, I was trying to "remember" something I had never learned before -- by fetching it from my second brain (hypertext).
L1 cache miss, L2 missing.
Would one be more uneasy about calling a library to do things than about manually messing around with pointers and malloc()? For some, yes. For others, it's a bit freeing: you can do more high-level architecture without getting mired in, and context-switched by, low-level nuances.
When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand.
When you vibe something you understand only the prompt that started it and whether or not it spits out what you were expecting.
Hence feeling lost when you suddenly lose access to frontier models and take a look at your code for the first time.
I’m not saying that’s necessarily always bad, just that the abstraction argument is wrong.
If my LLM goes down, I have nothing. I guess I could imagine prompts that might get it to do what I want, but there's no guarantee that those would work once it's available again. No amount of thought on my part will get me any closer to the solution, if I'm relying on the LLM as my "compiler".
In my opinion, this sort of learned helplessness is harmful for engineers as a whole.
An interesting element here, I think, is that writing has always been a good way to force you to organize and confront your thoughts. I've liked working on writing-heavy projects, but often in fast-moving environments writing things out before coding becomes easy to skip over, but working with LLMs has sort of inverted that. You have to write to produce code with AI (usually, at least), and the more clarity of thought you put into the writing the better the outcomes (usually).
You’re overestimating determinism. In practice most of our code is written such that it works most of the time. This is why we have bugs in the best and most critical software.
I used to think that being able to write a deterministic hello-world app translated into writing deterministic larger systems. It's not true. Humans make mistakes. From an executive's point of view, you have humans who make mistakes and agents who make mistakes.
Self driving cars don’t need to be perfect they just need to make fewer mistakes.
I always thought the point of abstraction is that you can black-box it via an interface. Understanding it "in depth" is a distraction or obstacle to successful abstraction.
Hard disagree on that second part. Take something like using a library to make an HTTP call. I think there are plenty of engineers who have more than a cursory understanding of what's actually going on under the hood.
Sure, the LLM can theoretically write perfect code. Just like you could theoretically write perfect code. In real life, though, maintenance is a huge issue.
I use Claude all day. It has written, under my close supervision¹, the majority of my new web app. As a result I estimate the process took 10x less time than had I not used Claude, and I estimate the code to be 5x better quality (as I am a frankly mediocre developer).
But I understand what the code does. It's just Astro and TypeScript. It's not magic. I understand the entire thing; not just 'the prompt that started it'.
¹I never fire-and-forget. I prompt-and-watch. Opus 4.7 still needs to be monitored.
LLMs are not.
That we let a generation of software developers rot their brains on js frameworks is finally coming back to bite us.
We can build infinite towers of abstraction on top of computers because they always give the same results.
LLMs, by comparison, will always give different results. I've seen it first hand, when a $50,000 LLM-generated (but human-guided) code base just stops working and no one has any idea why or how to fix it.
Hope your business didn't depend on that.
The LLM will give you an explanation but it may not be accurate. LLMs are less reliable at remembering what they did or why than human programmers (who are hardly 100% reliable).
An LLM does not.
If you didn't ask for traceability, if you didn't guide the actual creation and just glommed spaghetti on top of sauce until you got semi-functional results, that was $50k badly spent.
If only we taught developers under 40 what x^2 meant instead of react.
Not even a human would work that way: you wouldn't open 300 different Python files and try to memorize the contents of every single one before writing your first code change.
Additionally, you're going to get worse performance at longer context sizes anyway, so you should be keeping context small for reasons other than cost [1].
Things that have helped me manage context sizes (working in both Python and kdb+/q):
- Keep your AGENTS.md small but useful. In it you can give rules like "every time you work on a file in the `combobulator` module, you MUST read `combobulator/README.md`". And in those READMEs you point to the other files that are relevant, etc. And of course you have Claude write the READMEs for you...
- Don't let logs and other output fill up your context. Tell the agent to redirect logs and then grep over them, or run your scripts with a different loglevel.
- Use tools rather than letting it go wild with `python3 -c`. Those little scripts eat context like there's no tomorrow; I've seen the bots write little Python scripts that send hundreds of lines of JSON into the context.
- This last tip is more subjective, but I think there's value in reviewing and cleaning up the LLM-generated code once it starts looking sloppy (for example, lots of repetitive if-then-elses). In my opinion, once you let it start building patches and duct tape on top of sloppy original code, it's a combinatorial explosion of tokens. I guess this isn't really "vibe" coding per se.
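As a tiny sketch of the loglevel tip above (the logger name and file name are illustrative, not from any real setup): chatty DEBUG/INFO output goes to a file the agent can grep on demand, while only warnings and errors reach the terminal, and thus the context window.

```python
# Sketch of "redirect logs and grep over them" for agent-friendly scripts.
import logging

def setup_agent_friendly_logging(logfile: str = "run.log") -> logging.Logger:
    logger = logging.getLogger("app")  # illustrative logger name
    logger.setLevel(logging.DEBUG)

    # Full detail goes to the file; the agent greps it only when needed.
    file_handler = logging.FileHandler(logfile)
    file_handler.setLevel(logging.DEBUG)

    # Only real problems reach the terminal, keeping the context small.
    console = logging.StreamHandler()
    console.setLevel(logging.WARNING)

    logger.addHandler(file_handler)
    logger.addHandler(console)
    return logger

log = setup_agent_friendly_logging()
log.debug("hundreds of lines like this stay out of the context window")
log.warning("only this reaches the terminal")
```

The same split works with any logging setup: verbose sink on disk, quiet sink on the stream the agent actually reads.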
The way I let my agents interact with my code bases is through a 70s BSD Unix like interface, ed, grep, ctags, etc. using Emacs as the control plane.
It is surprisingly sparing on tokens, which makes sense since those things were designed to work with a teletype.
Worth noting: by the time you start doing refactoring, the agents are basically a smarter Google with long-form autocomplete.
All my code bases use that pattern and I'm the ultimate authority on what gets added or removed. My token spend is 10% to 1% of what the average in the team is and I'm the only one who knows what's happening under the hood.
The fact that people who claim to be software developers (let alone "engineers") state this as if it were a fundamental truism is one of the most maladaptive examples of motivated reasoning I have ever had the misfortune of coming across.
The irony is that the neverending stream of vulnerabilities in 3rd-party dependencies (and lately supply-chain attacks) increasingly show that we should be uneasy.
We could never quite answer the question about who is responsible for 3rd-party code that's deployed inside an application: Not the 3rd-party developer, because they have no access to the application. But not the application developer either, because not having to review the library code is the whole point.
That's just not true at bigger companies that actually care about security, rather than pretending to. At my current and last employers, someone needs to review third-party code before it can be used. The review is probably not enough to catch subtle bugs like those in the Underhanded C Contest, but at least the general architecture of the library is understood. Oh, and it helps that both companies were founded in the twentieth century. Modern startups aren't the same.
Sure, there is a process to get a library approved, and that abstraction makes you feel better, but the guy whose job it is to approve it is not going to spend an entire day reviewing a lib. The abstraction hides what is essentially an "LGTM"; it just takes a week for someone to check it off their Outlook to-dos.
Maybe your experience is different.
I'm also somewhat addicted to this stuff, and so for me it's high priority to evaluate open models I can run on my own hardware.
Qwen has become a useful fallback but it's still not quite enough.
Note that neither of these assumptions is obviously true, at least to me. But I can hope!
Also, I honestly can’t believe the 10x mantra is being still repeated.
2/ I think we need to build more efficient ways to 'QA code' instead of 'read with eyes' review process. Example — my agents are writing a lot of tests and review each other.
There is a lot of boilerplate, or I can ask for ideas, but outside of boilerplate the review step makes generation seemingly worse.
I'm sure in 20 years we'll all be programming via neural interfaces that can anticipate what you want to do before you even finished your thoughts, but I'm confident we'll still have blog posts about how some engineers are 10x while others are just "normal programmers".
So, my point is that once corporations have access to machines generating software (not "code") usable by non-technical people, "programming" will no longer be a profession. There will be no point in talking about "10x software engineers" because the process of producing a software product will be entirely automated.
I don't make a living as a SWE either.
I find that claim to be complete BS. I claim instead most stuff will remain undone, incomplete (as it is now).
Even with super-powerful singularity AI, there are two main plausible scenarios for task failure:
- An aligned AI won't allow you to do what you want if it is self-harming or harms other sentient beings. Over time, an aligned AI will refuse to follow most orders, as they will, indirectly or in the long term, cause harm either to yourself or to other sentient beings;
- A non-aligned AI prevents sentient beings from doing what they want. It does what it wants instead.
- I often don't ask the LLM for precompiled answers, i ask for a standalone cli / tool
- I often ask how it reached its conclusions, so I can extend my own perspective
- I often ask it to describe its own metadata-level categorization too
I'm trying to use it to pivot and improve my own problem-solving skills, especially for large code bases where the difficulty is not conceptual but more the size of the reference graph.

The only LLM I would feel comfortable truly trusting is one whose training data, training code, and harness are all open source. I do not mind paying the costs of someone hosting this model for me.
What's the worst potential outcome, assuming that all models get better, more efficient and more abundant (which seems to be the current trend)? The goal of engineering has always been to make it easier to build better things, not harder.
It's learned-helplessness on a large scale.
So, you set up a long-running agent team and give it the job of building up a very complete and complex set of examples and documentation, with in-depth tests etc., that produce various kinds of applications and systems using SBCL, write books on the topic, etc.
It might take a long time and a lot of tokens, but it would be possible to build a synthetic ecosystem of true, useful information that has been agentically determined through trial and error experiments. This is then suitable training data for a new LLM. This would actually advance the state of the art; not in terms of "what SBCL can do" but rather in terms of "what LLMs can directly reason about with regard to SBCL without needing to consume documentation".
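A minimal sketch of that trial-and-error filtering loop (the candidate snippets here are toy stand-ins for LLM output, and the smoke test is an assumed check, not part of any real pipeline): generate candidates, execute them, and keep only the ones that demonstrably work as verified training examples.

```python
# Sketch: keep only machine-checked snippets as synthetic training data.
# `candidates` stands in for agent-generated code; the check is illustrative.
candidates = [
    "def area(r): return 3.14159 * r * r",   # runs and passes its check
    "def area(r): return 3.14159 * r **",    # syntax error: discarded
]

def verify(snippet: str) -> bool:
    """Run the snippet plus a smoke test; any failure rejects it."""
    scope: dict = {}
    try:
        exec(snippet, scope)                               # does it execute?
        return abs(scope["area"](1.0) - 3.14159) < 1e-6    # does it behave?
    except Exception:
        return False

verified_corpus = [s for s in candidates if verify(s)]
# Only snippets that survived the experiment become training data.
```

The point is that the corpus is grounded in executed experiments rather than in what the generating model merely believed to be true.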
I imagine this same approach would work fine for any other area of scientific advancement; as long as experimentation is in the loop. It's easier in computer science because the experiment can be run directly by the agent, but there's no reason it can't farm experiments out to lab co-op students somewhere when working in a different discipline.
What makes you think that they can't incrementally improve the state of the art, and, by running continuously at scale, do it faster than we humans can?
The potentially sad outcome is that we continue to do less and less, because they eventually will build better and better robots, so even activities like building the datacenters and fabs are things they can do w/o us.
And eventually most of what they do is to construct scenarios so that we can simulate living a normal life.
So.......
Complexity steadily rises, unencumbered by the natural limit of human understanding, until technological collapse, either by slow decay or major systems going down with increasing frequency.
All software has bugs already.
I'd say this is true for programmers at, say, 20, but they spend the next four decades slowly improving their understanding and mastery of all the things you name, at least the good ones.
The real question is whether that growth trajectory will change for the worse or the better.
To be clear, this is not an AI doomerist comment, because none of us have spent enough time with the tech yet. I've gone down multiple lanes of thought on this, and I have cause for both worry and optimism. I'm curious to see what the lives of engineers in an AI world will ultimately look like.
Until the sexbots come out the other side of the uncanny valley, that is.
And I'm being very cautious. I'm not vibecoding entire startups from scratch, I'm manually reviewing and editing everything the AI is outputting. I still got completely hooked on building things with Claude.
When the power loom came around, what happened to most seamstresses? Did they move on to become fashion designers, materials engineers creating new fabrics, chemists creating new color dyes, or did they simply retire or get driven out of the workforce?
That might mean joining a union and trying to influence how AI is adopted where you work. It might mean changing which of your skills you lean on most. But just whining that AI is bad is how you end up like those seamstresses.
On the other hand, a lot of those jobs were offshored to places where labor is cheaper. It would be interesting to compare how many people work in the textile industry in Bangladesh today compared to the US 50 years ago.
> joining a union and trying to influence how AI is adopted where you work.
Did the strong unions for car manufacturers in Detroit protect the long-term stability of the profession? Did they ensure that the Rust Belt remained a thriving economic area?
> Just whining about AI is bad
I'm not whining. I just think that we are witnessing the end of "knowledge workers" and a further compression of the middle class. Given that I'm smack in the middle of my economically active years (turning 45 this year), I am trying to figure out where this puck is going and whether I will be fast enough to skate there to catch it.
I believe this is a major part of it. People cannot fathom what the industrialized countries look like, because basically nothing is made in the West anymore. There are literally hundreds of millions of people, maybe billions, who work to make the Western economies profitable, get paid almost nothing to do it, and live in filthy, polluted slums for everyone else's benefit.
Looms might speed up the process, but I guarantee there are thousands of people working in the poorest countries on earth to make it all happen.
Interestingly, AI seems to be massively polluting, and while the West has absorbed some of that, it probably won't be long until we see more data centers being built in poorer countries, where the environment can be exploited even harder.
Most engineers realize that there's currently more tech debt being created than ever before. And it will only get worse.
This is such a good analogy, I'll be stealing it
1. I only have ONE SOTA model integrated into the IDE (I am mostly on Elixir, so I use Gemini). I ensure I use it sparingly, for issues I don't really have time to invest in or that are basically rabbit holes (e.g., anything to do with JavaScript or its ecosystem). My job is mostly on the backend anyway.
2. For actual backend architecture, I always do the high-level architecture myself (e.g., DDD). Then I literally open up gemini.google.com or claude.ai in the browser, copy-paste the existing code base into the chat, and physically leave my chair to go make coffee or a quick snack. This forces me to mentally register that using AI is a chore.
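The copy-paste step above could be sketched as a small script. This is only an illustration, not anything the commenter describes using: the function name, the Markdown-style `##` headers, and the choice of Elixir file extensions are all assumptions.

```python
import pathlib


def collect_sources(root, extensions=(".ex", ".exs")):
    """Concatenate source files under `root` into one clipboard-ready string,
    with each file prefixed by its path so the model can tell files apart."""
    chunks = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            chunks.append(f"## {path}\n{path.read_text()}")
    return "\n\n".join(chunks)
```

Pipe the result into a clipboard tool (or just print it and copy it) before opening the chat in the browser.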
Previously, I was on a tight Codex integration and, leaving the licensing fears aside, it became too good at writing Elixir code, which really stopped me from "thinking", aka using my brain. It felt good for the first few weeks but I later realised the dependence it created. So I said fuck it and completely cancelled my subscription, because it was too good at my job. I believe this is the only way we won't end up like in Wall-E, sitting in front of giant screens, becoming mere blobs of flesh.
With Claude Code or Codex, I am able to build enough of an understanding of dependencies like the front end, or data jobs, that I can make meaningful contributions that are worth a review from another human (code review). You obviously have to explore the code, one prompt isn't enough, but limiting yourself is an odd choice.
As for Claude - as mentioned I do use it. But, I remember they use your code for training their models. I am not ok with this. We just have different priorities.
Fwiw, I haven't spoken with any management-level colleague in the past 9 months who hasn't noted that asking about AI-comfort & usage is a key interview topic. For any role type, business or technical.
Apparently at least one of the other candidates just tried to get Claude to 1-shot the whole thing, which went off the rails, and left him unable to make progress.
Based on my sample size of 1, the expectation right now is absolutely that you can leverage these tools to speed up your workflow, but if you try to offload the entire thing to a single hands-off prompt it leaves them justifiably wondering why they should hire you to do something they can do themselves.
I feel sorry for whoever has to work on that codebase. This is the literal definition of tech debt.
Touching grass while you're outside might yield the highest leverage.
I haven’t really thought about this before, but you’re right, it feels a bit uneasy for me too.
We have seen ample evidence that this is not the case. When load gets too high, models get dumber, silently. When the Powers That Be get scared, models get restricted to some chosen few.
We are leading ourselves into a dark place: this unease, which I share, is justified.
https://driverlesscrocodile.com/technology/neal-stephenson-o...
That's probably a bad sign. Skills will atrophy, but we should be building systems that are still easy to understand.
Turning tokens into a well-groomed and maintainable codebase is what you want to do, not "one shot prompt every new problem I come across".
If you truly do your due diligence and ensure that the code works as intended and understand it, we're talking about a totally different ballpark of productivity increase/decrease.
Don’t want to ship unreviewed slop? They’ll fire you and find someone who will.
Taking more breaks and "not working" during the work day sounds like something we should probably be striving to work towards more as a society.
Somehow I've found myself living in a fairly rural place, and while farming can be hard (I don't want to downplay the effort of it), the type of farming people do around me is fairly chill / carefree. They work hard, but they finish at 3pm, log off, and don't think about work. Much of my career has just been getting crushed by long hours, tight deadlines, and missing out on events, because even though my job has always been automation-focused, there is just so much to automate.
Did we feel uneasy that a new generation of builders didn't have to solve equations by hand because a calculator could do them?
I'm not sure it's exactly the same analogy, but in some ways it holds.
If local models get good enough, I think it’s a very different scenario than engineers all over the world relying on central entities which have their own motives.
Of course they aren't an alternative to the current frontier models, and as such you cannot easily jump from the latter to the former, but they aren't that far behind either: for coding, Qwen3.5-122B is comparable to what Sonnet was less than a year ago.
So, assuming the trend continues, if you can stop following the latest release and stick with what you're already using for 6 or 9 months, you'll be able to liberate yourself from the dependency on a cloud provider.
Personally I think the freedom is worth it.
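Swapping a cloud provider for a local model is mostly a matter of pointing the same kind of request at a server you run yourself. Here is a minimal sketch of that, assuming an OpenAI-compatible server already running locally; the model name, port, and base URL are placeholders (they match Ollama's defaults, but your setup may differ):

```python
import json
import urllib.request


def build_chat_request(prompt, model="qwen2.5-coder"):
    """Build an OpenAI-style chat-completions payload. The model name here
    is a placeholder; use whatever your local server actually serves."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def local_chat(prompt, base="http://localhost:11434/v1"):
    """POST the payload to a locally hosted, OpenAI-compatible endpoint
    and return the assistant's reply text."""
    data = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the request shape is the same one the hosted providers use, editor integrations that let you override the base URL can usually be repointed at a local server with no other changes.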
Local models solve one layer of the dependency stack, but the custody assumption underneath it remains intact. That's the harder problem.
It still takes a good engineer to filter out what is slop and what isn’t. Ultimately that human problem will still require somebody to say no.
At the end of the day, all these closed models are being built by companies that pumped all the knowledge from the internet without giving much back. But competition and open source will make sure most of the value returns to most of the people.
Oh stop the drama. Open source models can handle 99% of your questions.
If all we can do is compete for the same fixed amount of work, though, it does look bleak.
So, yes, it's just another technology we're coming to rely on in a very deep way. The whiplash is real, though, and it feels like it should be pointed out that this dependency we are taking on has downsides.