It's very sad, for me.
Like I told someone recently - letting the LLM write my code for me is like letting the LLM play my video games for me.
If all I wanted was the achievement on my steam profile, then sure, it makes sense, but that achievement is not why I play video games.
I'm looking at all these people proudly showing off their video game achievements, gained just by writing specs, and none of them seem to realise that writing specs is a lower-skill activity than writing programs.
It also pays far, far less - a BA earns about half what an average dev earns. They're cosplaying as BAs, not realising that they are now employed for a skill that pays less, and it's only a matter of time before the economics catch up with them.
I don't see a solution here.
Talking to sales to get an idea of what the customer wanted from the business side (first B2B at a product company and now consulting) -> talking to the customer and hashing out more detailed requirements -> designing the architecture and a proposed technical plan -> presenting it to the stakeholder (sometimes internal, sometimes external) -> doing the work, or delegating and leading the work -> presenting the work to the stakeholder and leading the UAT -> getting it to production.
The coding part has been a commodity for enterprise developers for well over a decade. I knew a decade ago that I wasn’t going to be 50 years old reversing binary trees on a whiteboard trying to prove my worth.
Doing the work is the only thing that the AI does.
While I don’t make the eye-popping BigTech comp (been there, done that, and would rather get a daily anal probe than go back), I am making more than I could if I were still selling myself as someone who “codez real gud” as an enterprise dev.
Many of these people made many of the countless things we take for granted every day (networking, operating systems, web search; hell, even the transformer architecture before they got productized!).
Seeing software development --- and software engineering by proxy --- get reduced to a jello that will be stepped on by "builders" in real-time is depressing as shit.
It's even more depressing to see folks on HACKER news boost the "programming never mattered" mentality that's taken hold these last few years.
Last comment I'll make before I step off my soapbox: the "codez real gud" folks who make the big bucks bring way more to the table than their ability to code... but their ability to code is a big contributor to why they bring more to the table!
It’s always been jello. At 51, I can wax poetic about the good old days, or I can keep doing what I need to do to keep money appearing in my account.
You are not the first person to say things like this.
Tell me, you ever wondered why a person with a programming background was filling that role?
On the enterprise dev side of the industry where most developers work, I saw a decade ago that if I were just a ticket taker who turned well-defined requirements into for loops and if statements, that was an undifferentiated commodity.
You’re seeing now that even on the BigTech side knowing how to reverse a binary tree on the whiteboard is not enough.
Also, if you look at the leveling guidelines of any major tech company, everything above mid level is based on scope, impact, and dealing with ambiguity - not “I codez real gud”.
One's ability to reverse a binary tree (which is a BS filter, but it is what it is) hasn't been an indicator of ability in some time. What _is_, though, is the wherewithal to understand _when_ that's important and the tradeoffs that come with doing that versus using other data structures or systems (in the macro).
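For anyone who's escaped the interview circuit: the whiteboard staple in question - usually phrased as "inverting" the tree - is only a few lines, which is exactly why it's a filter rather than an indicator. A minimal Python sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def invert(node: Optional[Node]) -> Optional[Node]:
    """Recursively swap every node's left and right children."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

tree = Node(1, Node(2), Node(3))
invert(tree)
print(tree.left.val, tree.right.val)  # -> 3 2
```

The interesting engineering judgment - when a tree beats a hash map, a sorted array, or an external index - doesn't fit on a whiteboard.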
My concern is that, assuming today's trajectory of AI services and tooling, the need to understand these fundamentals will become less important over time as the value of "code" as a concept decreases. In a world where prompting is cheap because AI is writing all the code and code no longer matters, then, realistically, tech will be treated even more aggressively as a line item to optimize.
This is a sad reality for people like me whose love for computers and programming got them into this career. Tech has been a great way to make a wonderful living for a long time, and it's unfortunate that we're robbing future generations of what we took for granted.
There are millions of people who can code as well as you or I, and a lot cheaper if you are in the US. Thousands of developers have been laid off over the last three years and tech companies keep going strong - what does that tell you?
I’m just as happy to get away from writing for loops in 2026 as I was to get away from LDA, LDX and BRA instructions once I could write performant code in C.
And how are we robbing future generations? Because some of us (not that I can take credit for any of it) moved the state of technology forward from the 1 MHz Apple //e I had in 1986?
Your entire comment is this specific strawman - no one, and I mean no one, is making this claim! You are the only one who is (ironically, considering the job you do) too tone-deaf and too self-unaware to avoid making this argument.
I'm merely pointing out that your value-prop is based on a solid technical foundation, which I feel you agree on:
> If not the technical person, then who? It’s a lot easier for a technical person to learn how to talk the language of the business than a business person to have a deep understanding of technology.
The argument is not "Oh boo hoo, I wish I could spend 8 hours a day coding for money like I used to", so stop pretending like it is.
Even the comment I replied to mentioned “being a BA” as if the most important quality of a software engineer is their ability to translate requirements into code.
Then what is it?
Be blunt and obvious in your reply or go home.
Business Analyst - the people who learn everything about what the customer's requirements, specs, etc. are. What they need, what they currently have, how to best advise them, and so on.
They know everything, except how to program.
In my experience, they know nothing, including how to program.
Most of the commercial code I've written, over a 30+ year career, has been shite. The mandate was always to write profitable code, not elegant code. I started (much like the OP) back in the 80's writing code as a hobby, and I enjoyed that. But implementing yet another shitty REST CRUD server for a shitty website... not so much.
I totally see a solution: get the LLM to write the shitty REST CRUD server, and focus on the hard bits of the job.
I do use these tools, clearly see their potential, and know full well where this is going: capital is devaluing labor. My skills will become worthless. Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there.
If I could destroy these things - as the Luddites tried - I would do so, but that's obviously impossible.
For now I'm forced to use them to stay relevant, and simply hope I can hold on to some kind of employment long enough to retire (or switch careers).
But now you too can access AI labor. You can use it for yourself directly.
I don't fault anyone for trying to find opportunities to provide for themselves and loved ones in this moment by using AI to make a thing. But don't fool yourself into thinking that the AI labor is yours. The capitalists own it, not us.
but also, if that were possible, then why wouldn't prices go down? why would the value of such labor stay so high if the same thing can be done by other individuals?
If these tools improve to the point of being able to write real code, the financial move for the agent runners is to charge far more than they are now but far less than the developers being replaced.
I don’t think it is actually obvious that you won’t need some expert experience/knowledge/skills to get the most out of these tools.
It already seemed like we were approaching the limit of what it makes sense to develop, with 15 frameworks for the same thing and a new one coming out next week, lots of services offering the same things, and even in games, the glut of games on offer was deafening and crushing game projects of all sizes all over the place.
Now it seems like we're sitting on a tree branch and sawing it off on both sides.
If you state “in 6 months AI will not require that much knowledge to be effective” every year, and it hasn’t happened yet, then every time it has been stated it has been false up to this point.
In 6 months we can come back to this thread and determine the truth value for the premise. I would guess it will be false as it has been historically so far.
I think that this has been true, though maybe not quite as strongly as your quote words it.
The original statement was "Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there."
"full effect" is a pretty squishy term.
My more concrete claim (and similar to "Ask again in 6 months. A year.") is the following.
With every new frontier model released [0]:
1. the level of technical expertise required to achieve a given task decreases, or
2. the difficulty/complexity/size of a task that an inexperienced user can accomplish increases.
I think either of these two versions is objectively true looking back and will continue being true going forward. And, the amount that it increases by is not trivial.
[0] or every X months to account for tweaks, new tooling (Claude Code is not even a year old yet!), and new approaches.
Three months ago, we didn't have Opus 4.5, which almost everyone is saying is leaps and bounds better than previous models. MCP and A2A are mostly antiquated. We also didn't have Claude Desktop, which is trying to automate work in general.
Three _weeks_ ago, we didn't have Clawdbot/Openclaw, which people are using to try and automate as much of their lives as possible...and succeeding.
Things are changing outrageously fast in this space.
Claude Code came out a year ago.
Would travel agents have been justified in destroying the Internet so that people couldn't use Expedia?
I guess the right word here is "disenfranchising".
Valuation is a relative thing based mostly on availability. Adding capital makes labor more valuable, not less. This is not the process happening here, and it's not clear which direction the valuation is going.
... even if we take for granted that any of this is really happening.
Certainly, you must realize how much worse life would be for all of us had the Luddites succeeded.
Putting it in today's terms, if the goal of AI is to significantly reduce the labor force so that shareholders can make more money and tech CEOs can become trillionaires, it's understandable why some developers would want to stop it. The idea that the wealth will just trickle down to all the laid-off workers is economically dubious.
Depends how you look at it.
Trickle down economics has never worked in the way it was advertised to the masses, but it worked fantastically well for the people who pushed (and continue to push) for it.
That would be "trickle up economics", though.
I'm not a hard socialist or anything, but the economics don't make sense. if there's cheap credit and the money supply perpetually expands without a sink, of course people with the most capital will just compound their wealth.
so much of the "economy" orbits around the capital markets and number going up. it's getting detached from reality. or maybe I'm just missing something.
Yes, because fighting for the rights of laborers is obviously what most people hate.
"Except the Luddites didn’t hate machines either—they were gifted artisans resisting a capitalist takeover of the production process that would irreparably harm their communities, weaken their collective bargaining power, and reduce skilled workers to replaceable drones as mechanized as the machines themselves."
https://www.currentaffairs.org/news/2021/06/the-luddites-wer...
What I'm finding is that it's possible to integrate AI tools into your workflow in a big way without giving up on doing that, and I think there's a lot to say for a hybrid approach. The result of a fully-engaged brain (which still requires being right in there with the problem) using AI tools is better than the fully-hands-off way touted by some. Stay confident in your abilities and find your mix/work loop.
It's also possible to get a certain version of the rewards of coding from instrumenting AI tools. E.g. slicing up and sizing tasks for background agents - tasks you can intuit from experience they'll actually hand in a decent result on - feels similar to the structuring/modularization exercises (e.g. with the goal of being readable or maintainable) you do when writing code.
I feel that for using AI effectively I need to be fully engaged with both the problem itself and an additional problem of communicating with the LLM - which is more taxing than pre-LLM coding. And if I'm not fully engaged those outcomes usually aren't that great and bring frustration.
In isolation, the shift might be acceptable, but in reality I'm still left with a lot of ineffective meetings - only now without coding sessions to clear my brain.
Making sense of new or significantly changed code is very taxing. Writing new code is less taxing as you're incrementally updating the model as you go, at a pretty modest pace.
LLMs can produce code at a much higher rate than humans can make sense of it, and assisted coding introduces something akin to cache thrashing, where you constantly need to build mental models of the system to keep up with the changes.
Your bandwidth for comprehending code is as limited as it always was, and taxing this ability to its limits is pretty unpleasant, and in my experience, comes at a cost of other mental capabilities.
This.
On my fun side project, I don't accept pull requests because writing the code is the fun part.
Only once did someone get mad at me for not accepting their pull request.
Reality: Promoted to management (of AI) without the raise or clout or the reward of mentoring.
I really feel this. Claude is going to forget whatever correction I give it, unless I take the time and effort to codify it in the prompt.
And LLMs are going to continue to get better (though the curve feels like it's flattening), regardless of whatever I do to "mentor" my own session. There's no feeling that I'm contributing to the growth of an individual, or the state-of-the-art of the industry.
I care about creating stuff. How it gets from the idea in my brain to running on the computer is immaterial to me.
I really like that I go from idea to reality in half the time.
THAT part doesn't mesh too well with AI, since it's still really bad at autonomous holistic-level planning. I'm still learning how to prompt in a way that results in a structure that is close to what I want/reasonable. I suspect going a more visual block-diagram route, to generate some intermediate .md or whatever, might have promise, especially for defining clear bounds/separation of concerns.
Related, AI seems to be the wrong tool for refactoring code (I recently spent $50 trying to move four files). So, if whatever structure isn't reasonable, I'm left with manually moving things around, which is definitely un-fun.
I've been exploring some computer vision recognition stuff. Being able to reason through my ideas with an LLM, and make visualizations like t-SNE to show how far apart a coke can and a bag of cheetos are in feature-space, has been mind blowing. ("How much of a difference does tint make for recognition? Implement a slider that can regenerate the 512-D feature array and replot the chart")
It's helping me get an intuitive understanding 10x faster than I could reading a textbook.
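The feature-space comparison being described can be sketched in a few lines of NumPy. To be clear, the 512-D vectors below are random stand-ins for real model embeddings, and cosine distance is just one common choice of metric:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance: 0 for identical directions, roughly 1 for unrelated ones."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
coke_can = rng.normal(size=512)                      # stand-in for a 512-D embedding
cheetos_bag = rng.normal(size=512)                   # an unrelated object's embedding
tinted_coke = coke_can + 0.1 * rng.normal(size=512)  # same object, small tint-like perturbation

print(f"coke vs cheetos:     {cosine_distance(coke_can, cheetos_bag):.3f}")  # far apart
print(f"coke vs tinted coke: {cosine_distance(coke_can, tinted_coke):.3f}")  # close
```

A t-SNE plot compresses the same pairwise-distance intuition down to 2-D so you can eyeball it; the slider experiment amounts to recomputing the feature vector per tint value and re-plotting.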
At the very least, it feels ergonomic and saves me keystrokes in the same way as stuff like snippets & aliases
thankfully I started down the FIRE route 20 years ago and now am more or less continuing to work because I want to
which will end for my employer if they insist on making me output generative excrement
I just rebuilt a fairly simple personal app that I've been maintaining for my family for nearly 30 years, and had a blast doing it with an AI agent - I mostly used Claude Sonnet 4.5. I've been dreading this rebuild mostly because it's so boring; this is an app I built originally when I was 17, and I'm 43 now. I treated Claude basically like I'd treat my 17-year-old self, and I've added a bunch of features that I could never be assed to do before.
Not saying it's right or wrong, but it's a useful Rorschach test: what do you feel defines 'making' a thing?
although i do think Steve Jobs didn't make the iPhone /alone/, and that a lot of other people contributed to that. i'd like to be able to name who helps me and not say "gemini". again, it's more of a personal thing lol
I honestly find coding with AI no easier than coding directly; it certainly does not feel like AI is doing my work for me. If it were, I wouldn't have anything to do. In reality I spend my time thinking about much higher-level abstractions, but of course this is a very personal thing too.
I myself have never thought of code as being my output, I've always enjoyed solving problems, and solutions have always been my output. It's just that before I had to write the code for the solutions. Now I solve the problems and the AI makes it into code.
I think that this is probably the dividing line: some people enjoy working with tools (code, unix commands, editors), some people enjoy just solving the problems. Both of course are perfectly valid, but they do create a divide when looking at AI.
Of course when AI starts solving all problems, I will have a very different feeling :-)