> If you were a smart dev before AI, chances are you will remain a smart dev with AI.

I don't think that's what people are upset about, or at least it's not for me. For me it's that writing code is really enjoyable, and delegating it to AI is hell on earth.

reply
> For me it's that writing code is really enjoyable, and delegating it to AI is hell on earth.

It's very sad, for me.

Like I told someone recently - letting the LLM write my code for me is like letting the LLM play my video games for me.

If all I wanted was the achievement on my steam profile, then sure, it makes sense, but that achievement is not why I play video games.

I'm looking at all these people proudly showing off their video game achievements, gained just by writing specs, and I realise that all of them fail to see that writing specs is a lower-skill activity than writing programs.

It also pays far, far less - a BA earns about half what an average dev earns. They're cosplaying as BAs, not realising that they are now employed for a skill that pays less, and it's only a matter of time before the economics catch up to them.

I don't see a solution here.

reply
My job for the last 8 years has involved

Talking to sales to get an idea of what the customer wanted from the business side (first B2B at a product company and now consulting) -> talking to the customer and hashing out more detailed requirements -> designing the architecture and a proposed technical plan -> presenting it to the stakeholder (sometimes internal, sometimes external) -> doing the work or delegating and leading the work -> presenting the work to the stakeholder and leading the UAT -> getting it to production.

The coding part has been a commodity for enterprise developers for well over a decade. I knew a decade ago that I wasn’t going to be 50 years old reversing binary trees on a whiteboard trying to prove my worth.

Doing the work is the only thing that the AI does.

While I don’t make the eye-popping BigTech comp (been there, done that, and would rather get a daily anal probe than go back), I am making more than I could make if I were still selling myself as someone who “codez real gud” as an enterprise dev.

reply
Look, there are at least dozens of us who like and enjoy programming for programming's sake and got into this crazy industry because of that.

Many of these people made many of the countless things we take for granted every day (networking, operating systems, web search; hell, even the transformer architecture before they got productized!).

Seeing software development --- and software engineering by proxy --- get reduced to a jello that will be stepped on by "builders" in real-time is depressing as shit.

It's even more depressing to see folks on HACKER news boost the "programming never mattered" mentality that's taken hold these last few years.

Last comment I'll make before I step off my soapbox: the "codez real gud" folks that make the big bucks bring way more to the table than their ability to code...but their ability to code is a big contributor to why they bring more to the table!

reply
> Talking to sales to get an idea of what the customer wanted from the business side (first B2B at a product company and now consulting) -> talking to the customer and hashing out more detailed requirements -> designing the architecture and a proposed technical plan -> presenting it to the stakeholder (sometimes internal, sometimes external) -> doing the work or delegating and leading the work -> presenting the work to the stakeholder and leading the UAT -> getting it to production.

You are not the first person to say things like this.

Tell me, have you ever wondered why a person with a programming background was filling that role?

reply
If not the technical person, then who? It’s a lot easier for a technical person to learn how to talk the language of the business than a business person to have a deep understanding of technology.

On the enterprise dev side of the industry where most developers work, I saw a decade ago that if I were just a ticket taker who turned well-defined requirements into for loops and if statements, that was an undifferentiated commodity.

You’re seeing now that even on the BigTech side knowing how to reverse a binary tree on the whiteboard is not enough.

Also, if you look at the leveling guidelines of any major tech company, the expectations above mid-level are based on scope, impact, and dealing with ambiguity - not “I codez real gud”.

reply
Those levels bake in the expectation of "codez real gud" at FAANG/MANGA/whatever style tech companies, since the technical complexity of their operations is high and a high skill bar must be cleared to contribute to most of those codebases and make impact at the scale they operate at.

One's ability to reverse a binary tree (which is a BS filter, but it is what it is) hasn't been an indicator of ability in some time. What _is_, though, is the wherewithal to understand _when_ that's important and the tradeoffs that come with doing that versus using other data structures or systems (in the macro).

My concern is that, assuming today's trajectory of AI services and tooling, the need to understand these fundamentals will become less important over time as the value of "code" as a concept decreases. In a world where prompting is cheap because AI is writing all the code and code no longer matters, then, realistically, tech will be treated even more aggressively as a line item to optimize.

This is a sad reality for people like me whose love for computers and programming got them into this career. Tech has been a great way to make a wonderful living for a long time, and it's unfortunate that we're robbing future generations of what we took for granted.
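For anyone who hasn't sat through that interview: the "reverse a binary tree" exercise is a few lines, which is part of why it works as a filter rather than as a measure of ability. A rough Python sketch (the names here are mine, not from any particular interview):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def invert(node: Optional[Node]) -> Optional[Node]:
    # Recursively swap every left/right pair of subtrees.
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

root = invert(Node(1, Node(2), Node(3)))
print(root.left.val, root.right.val)  # the children have swapped: 3 2
```

Knowing when a tree beats a hash map or a sorted array for your workload is the part that actually pays.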

reply
> Also if you look at the leveling guidelines of any major tech company, their leveling guidelines above mid level are based on scope, impact and dealing with ambiguity - not “I codez real gud”

Your entire comment is this specific strawman - no one, and I mean no one, is making this claim! You are the only one (ironically, considering the job you do) tone-deaf and self-unaware enough to keep making this argument.

I'm merely pointing out that your value-prop is based on a solid technical foundation, which I feel you agree on:

> If not the technical person, then who? It’s a lot easier for a technical person to learn how to talk the language of the business than a business person to have a deep understanding of technology.

The argument is not "Oh boo hoo, I wish I could spend 8 hours a day coding for money like I used to", so stop pretending like it is.

reply
There is an entire contingent of comments here who miss translating requirements into code.

Even the comment I replied to mentioned “being a BA” like the most important quality of a software engineer is their ability to translate requirements into code.

reply
> The argument is not

Then what is it.

be blunt and obvious in your reply or go home.

reply
I've been coping by reminding myself that I was absurdly lucky to have found a job that was also enjoyable and intellectually stimulating for so long, and if all AI does is bring software engineering down to the level of roughly every other job in the world in terms of fun, I don't really have grounds to complain.
reply
deleted
reply
I cannot figure out what you mean by "BA" in this context
reply
> I cannot figure out what you mean by "BA" in this context

Business Analyst - the people who learn everything about the customer's requirements, specs, etc.: what they need, what they currently have, how best to advise them, and so on.

They know everything, except how to program.

reply
> They know everything, except how to program

In my experience, they know nothing, including how to program.

reply
I was a BA forever ago during a summer job in college. That job wasn't for me at all! Looking back on the experience, putting together a FRD felt much like writing a CLAUDE.md with some prompts thrown in!
reply
Business Analyst
reply
This is a part of it, but I also feel like a Luddite (the historical meaning, not the derogatory slang).

I do use these tools, clearly see their potential, and know full well where this is going: capital is devaluing labor. My skills will become worthless. Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there.

If I could destroy these things - as the Luddites tried - I would do so, but that's obviously impossible.

For now I'm forced to use them to stay relevant, and simply hope I can hold on to some kind of employment long enough to retire (or switch careers).

reply
> know full well where this is going: capital is devaluing labor

But now you too can access AI labor. You can use it for yourself directly.

reply
Kind of. But the outcomes likely do not benefit the masses. People "accessing AI labor" is just a race to the bottom. Maybe some new tools get made or small businesses get off the ground, but ultimately this "AI labor" is a machine that is owned by capitalists. They dictate its use, and they will give or deny people access to the machine as it benefits them.

Maybe they get the masses dependent on AI tools that are currently either free or underpriced, as alternatives to AI wither away, unable to compete on cost; then the prices are raised or the product enshittified. Or maybe AI will be massively useful to the surveillance state and data brokers. Maybe AI will simply replace a large percentage of human labor in large corporations, leading to mass unemployment.

I don't fault anyone for trying to find opportunities to provide for themselves and loved ones in this moment by using AI to make a thing. But don't fool yourself into thinking that the AI labor is yours. The capitalists own it, not us.

reply
As someone who has leaned fully into AI tooling this resonates. The current environment is an oligopoly so I'm learning how to leverage someone else's tool. However, in this way, I don't think LLMs are a radical departure from any proprietary other tool (e.g. Photoshop).
reply
Indeed. Do you know how many small consultancies are out there which are "Microsoft shops"? An individual could become a millionaire by founding their own and delivering value for a few high-roller clients.
reply
Nobody says there's no money to make anymore. But the space for that is limited, no matter how many millions hustle, there's only 100 spots in the top 100.
reply
what makes you think that's actually possible? maybe if you really had the connections and sales experience etc...

but also, if that were possible, then why wouldn't prices go down? why would the value of such labor stay so high if the same thing can be done by other individuals?

reply
I saw it happen more back in the day compared to now. Point being, nobody batted an eyelash at being entirely dependent on some company's proprietary tech. It was how money was made in the business.
reply
> it’s obviously not going to stop there.

I don’t think it is obvious actually that you won’t have to have some expert experience/knowledge/skills to get the most out of these tools.

reply
I think the keyword here is "some".

It already seemed like we were approaching the limit of what it makes sense to develop, with 15 frameworks for the same thing and a new one coming out next week, lots of services offering the same things, and even in games, the glut of games on offer was deafening and crushing game projects of all sizes all over the place.

Now it seems like we're sitting on a tree branch and sawing it off on both sides.

reply
Today. Ask again in 6 months. A year.
reply
People have been saying this for multiple years in a row now.
reply
And it has been getting more true for years in a row.
reply
Disagree entirely.

If you state “in 6 months AI will not require that much knowledge to be effective” every year, and it hasn't happened yet, then every time it has been stated it has been false up to this point.

In 6 months we can come back to this thread and determine the truth value for the premise. I would guess it will be false as it has been historically so far.

reply
Six months ago, we _literally did not have Claude Code_. We had MCP, A2A and IDE integrations, but we didn't have an app where you could say "build me an ios app that does $thing" and have it build the damn thing start to finish.

Three months ago, we didn't have Opus 4.5, which almost everyone is saying is leaps and bounds better than previous models. MCP and A2A are mostly antiquated. We also didn't have Claude Desktop, which is trying to automate work in general.

Three _weeks_ ago, we didn't have Clawdbot/Openclaw, which people are using to try and automate as much of their lives as possible...and succeeding.

Things are changing outrageously fast in this space.

reply
> If you state “in 6 months AI will not require that much knowledge to be effective” every year and it hasn’t happened yet then every time it has been stated has been false up to this point

I think that this has been true, though maybe not quite as strongly as your quote words it.

The original statement was "Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there."

"full effect" is a pretty squishy term.

My more concrete claim (and similar to "Ask again in 6 months. A year.") is the following.

With every new frontier model released [0]:

1. the level of technical expertise required to achieve a given task decreases, or

2. the difficulty/complexity/size of a task that an inexperienced user can accomplish increases.

I think either of these two versions is objectively true looking back, and will continue being true going forward. And the amount it increases by is not trivial.

[0] or every X months to account for tweaks, new tooling (Claude Code is not even a year old yet!), and new approaches.

reply
Using an LLM to program is simply another abstraction level. Just as C was to assembly.
reply
I feel like the nondeterminism makes LLM-assisted programming a different sort of concept than using a compiler. Your prompt isn't your source code.
reply
> If I could destroy these things - as the Luddites tried - I would do so

Would travel agents have been justified in destroying the Internet so that people couldn't use Expedia?

reply
> capital is devaluing labor

I guess the right word here is "disenfranchising".

Valuation is a relative thing, based mostly on availability. Adding capital makes labor more valuable, not less. This is not the process happening here, and it's not clear which direction the valuation is going.

... even if we take for granted that any of this is really happening.

reply
> If I could destroy these things - as the Luddites tried - I would do so, but that's obviously impossible.

Certainly, you must realize how much worse life would be for all of us had the Luddites succeeded.

reply
If the human race is wiped out by global warming I'm not so sure I would agree with this statement. Technology rarely fails to have downsides that are only discovered in hindsight IMO.
reply
Sure, but would it have been better or worse for the Luddites?
reply
Or perhaps they would have advanced the cause of labor and prevented some of the exploitation from the ownership class. Depends on which side of the story you want to tell. The slur Luddite is a form of historical propaganda.

Putting it in today's terms, if the goal of AI is to significantly reduce the labor force so that shareholders can make more money and tech CEOs can become trillionaires, it's understandable why some developers would want to stop it. The idea that the wealth will just trickle down to all the laid-off workers is economically dubious.

reply
Reaganomics has never worked
reply
> Reaganomics has never worked

Depends how you look at it.

Trickle down economics has never worked in the way it was advertised to the masses, but it worked fantastically well for the people who pushed (and continue to push) for it.

reply
Sure, because it all trickles into their pockets.
reply
problem today is that there is no "sink" for money to go to when it flows upwards. we have resorted to raising interest rates to curb inflation, but that doesn't fix the problem, it just gives them an alternative income source (bonds/fixed income)

I'm not a hard socialist or anything, but the economics don't make sense. if there's cheap credit and the money supply perpetually expands without a sink, of course people with the most capital will just compound their wealth.

so much of the "economy" orbits around the capital markets and number going up. it's getting detached from reality. or maybe I'm just missing something.

reply
Yeah it's called wealth transfer and the vast majority is on the wrong end.
reply
The historical luddites are literally the human death drive externalized. Reject them and all of their garbage ideas with extreme prejudice.

Related, the word “meritocracy” was coined in a book which was extremely critical of the whole concept. AI thankfully destroys it. Good riddance, don’t let the door hit your ass on the way out.

https://en.wikipedia.org/wiki/The_Rise_of_the_Meritocracy

reply
You can reject the ideas in the aggregate. Regardless, for the individual, your skills are being devalued, and what used to be a reliable livelihood tied to a real craft is going to disappear within a decade or so. Best of luck
reply
> The historical luddites are literally the human death drive externalized. Reject them and all of their garbage ideas with extreme prejudice.

Yes, because fighting for the rights of laborers is obviously what most people hate.

reply
For a different perspective:

"Except the Luddites didn’t hate machines either—they were gifted artisans resisting a capitalist takeover of the production process that would irreparably harm their communities, weaken their collective bargaining power, and reduce skilled workers to replaceable drones as mechanized as the machines themselves."

https://www.currentaffairs.org/news/2021/06/the-luddites-wer...

reply
[flagged]
reply
Either you're thinking of the "room temperature semi-conductor" thing out of Korea, or you're some boomer who forgot that cold fusion was in the 80s.
reply
I resonate with that. I also find writing code super pleasurable. It's immediate stress relief for me, I love the focus and the flow. I end long hands-on coding sessions with a giddy high.

What I'm finding is that it's possible to integrate AI tools into your workflow in a big way without giving up on doing that, and I think there's a lot to say for a hybrid approach. The result of a fully-engaged brain (which still requires being right in there with the problem) using AI tools is better than the fully-hands-off way touted by some. Stay confident in your abilities and find your mix/work loop.

It's also possible to get a certain version of the rewards of coding from orchestrating AI tools. E.g. slicing up and sizing tasks for background agents - tasks you can intuit from experience they'll actually hand in a decent result on - feels similar to the structuring/modularization exercises (e.g. with readability or maintainability as the goal) you'd do when writing code yourself.

reply
I'm in the enjoy writing code camp and do see merits of the hybrid approach, but I also worry about the (mental) costs.

I feel that for using AI effectively I need to be fully engaged with both the problem itself and an additional problem of communicating with the LLM - which is more taxing than pre-LLM coding. And if I'm not fully engaged those outcomes usually aren't that great and bring frustration.

In isolation, the shift might be acceptable, but in reality I'm still left with a lot of ineffective meetings - only now without coding sessions to clear my brain.

reply
I think an additional big part of why LLM-aided coding is so draining is that it has you constantly refreshing your mental model of the code.

Making sense of new or significantly changed code is very taxing. Writing new code is less taxing as you're incrementally updating the model as you go, at a pretty modest pace.

LLMs can produce code at a much higher rate than humans can make sense of it, and assisted coding introduces something akin to cache thrashing, where you constantly need to build mental models of the system to keep up with the changes.

Your bandwidth for comprehending code is as limited as it always was, and taxing this ability to its limits is pretty unpleasant, and in my experience, comes at a cost of other mental capabilities.

reply
Hope: I want to become a stronger dev.

Reality: Promoted to management (of AI) without the raise or clout or the reward of mentoring.

reply
LLMs are similar in a lot of ways to the labor outsourcing that happened a generation or two ago. Except that instead of this development lifting a billion people out of poverty in the third world a handful of rich people will get even more rich and everyone else will have higher energy bills.
reply
> ...the reward of mentoring.

I really feel this. Claude is going to forget whatever correction I give it, unless I take the time and effort to codify it in the prompt.

And LLMs are going to continue to get better (though the curve feels like it's flattening), regardless of whatever I do to "mentor" my own session. There's no feeling that I'm contributing to the growth of an individual, or the state-of-the-art of the industry.

reply
AIs have made me realize that I don't actually care about writing code, even though it's all I've done for my entire career.

I care about creating stuff. How it gets from the idea in my brain to running on the computer, is immaterial to me.

I really like that I go from idea to reality in half the time.

reply
Same.

I've been exploring some computer vision recognition stuff. Being able to reason through my ideas with an LLM, and make visualizations like t-SNE plots to show how far apart a coke can and a bag of cheetos are in feature-space, has been mind blowing. ("How much of a difference does tint make for recognition? Implement a slider that can regenerate the 512-D feature array and replot the chart")

It's helping me get an intuitive understanding 10x faster than I could reading a textbook.
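To give a flavor of the kind of feature-space comparison I mean, here's a minimal sketch with random vectors standing in for real embeddings (the item names and the 0.1 "tint" noise scale are made up for illustration; real features would come from a vision model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 512-D feature vectors standing in for model embeddings.
coke_can = rng.normal(size=512)
cheetos = rng.normal(size=512)
coke_can_tinted = coke_can + 0.1 * rng.normal(size=512)  # mildly tinted variant

def cosine_distance(a, b):
    # 0 = same direction, ~1 = unrelated, 2 = opposite.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_distance(coke_can, cheetos))          # unrelated items: roughly 1
print(cosine_distance(coke_can, coke_can_tinted))  # tint barely moves it: near 0
```

A t-SNE plot is essentially a 2-D layout of these pairwise relationships across many items at once.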

reply
Same here, and I also really enjoy the high level design/structure part of it.

THAT part doesn't mesh too well with AI, since it's still really bad at autonomous, holistic planning. I'm still learning how to prompt in a way that results in a structure that is close to what I want/reasonable. I suspect going a more visual block-diagram route, to generate some intermediate .md or whatever, might have promise, especially for defining clear bounds/separation of concerns.

Related, AI seems to be the wrong tool for refactoring code (I recently spent $50 trying to move four files). So, if whatever structure isn't reasonable, I'm left with manually moving things around, which is definitely un-fun.

reply
Definitely go for that middle step. If it's something bigger I get them to draw out a multi-phase plan, then I go through and refine that .md and have them work from that.
reply
exactly

thankfully I started down the FIRE route 20 years ago and now am more or less continuing to work because I want to

which will end for my employer if they insist on making me output generative excrement

reply
There's room for both. Give AI the boilerplate, save the exciting stuff for you.
reply
but are employers going to be fine with that?
reply
That remains to be seen. As long as the work gets done... Don't ask, don't tell.
reply
It does NOT remain to be seen. https://www.cnbc.com/2025/09/26/accenture-plans-on-exiting-s... Big players are already moving in the direction of "join us or leave us". So if you can't keep up and you aren't developing or "reinventing" something faster with the help of AI, it was nice knowing you.
reply
I didn't say don't use AI at all, I said give it the boilerplate, rote work. Developers can still work on more interesting things. Maybe not all the interesting things.
reply
That may be fine ... if it remains your choice. I'm saying companies are outmoding people (programmers, designers, managers, et al) who don't leverage AI to do their job the fastest. If one programmer uses AI to do boilerplate and then codes the interesting bits personally and it takes a week and another does it all with AI (orchestrating agents, etc) and it takes 2 hours and produces the same output (not code but business value), the AI orchestrator/manager will be valued above the former.
reply
I get your point, but I think smart people will figure out a balance. That 2 hours of output could take a week to debug and test.
reply
Yes! I am not advocating for the 2 hours and the "vision" of managers and CEOs. Quite the contrary. But it is the world we live in for now. It's messy and chaotic and many people may (will?) be hurt. I don't like it. But I'm trying to be one of the "smart people". What does that look like? I hope I find out.
reply
I don't like it, either. I hear people ranting about doing "everything with AI" on one meeting, and what a productivity boost it is, then I get tagged on a dumpster fire PR full of slop and emoji filled log statements. Like did you even look at your code at all? "Oh sorry I don't know how that got in there!"
reply
These are the same employers that mandate return to office for distributed teams and micro-manage every aspect of our work. I think we know how it's going to play out.
reply
Woodworking is still a thing despite IKEA, big box furniture stores, etc.
reply
People will pay for quality craftsmanship they can touch and enjoy and can afford and cannot do on their own - woodworking. Less so for quality code and apps, because (as the Super Bowl ads showed us) anyone can create an app for their business and it's good enough. The days of high-paid coders are nearly gone. The seniors and principals will hang on a little longer. Those that can adapt to business analyst mode and project manager will as well (CEOs have already told us this: adapt or get gone), but eventually even they will be outmoded, because why buy an $8000 couch when I can buy one for $200 and build it myself?
reply
Then don't delegate it to AI.
reply
I like writing new, interesting code, but learning framework #400 with all its own idiosyncrasies has gotten really old.

I just rebuilt a fairly simple personal app that I've been maintaining for my family for nearly 30 years, and had a blast doing it with an AI agent - I mostly used Claude Sonnet 4.5. I've been dreading this rebuild mostly because it's so boring; this is an app I built originally when I was 17, and I'm 43 now. I treated Claude basically like I'd treat my 17-year-old self, and I've added a bunch of features that I could never be assed to do before.

reply
i agree. it seems like an expectation these days to use AI sometimes... for me i am happy not using it at all, i like to be able to say "I made this" :)
reply
I suppose the question is "Do you feel Steve Jobs made the iPhone?"

Not saying right/wrong but it's a useful Rorschach Test - about what you feel defines 'making this'?

reply
it's more just a personal want to be able to see what I can do on my own tbh; i don't generally judge other people on that measure

although i do think Steve Jobs didn't make the iPhone /alone/, and that a lot of other people contributed to that. i'd like to be able to name who helps me and not say "gemini". again, it's more of a personal thing lol

reply
So not disagreeing as you say, it is a personal thing!

I honestly find coding with AI no easier than coding directly, it certainly does not feel like AI is doing my work for me. If it was I wouldn't have anything to do, in reality I spend my time thinking about much higher level abstractions, but of course this is a very personal thing too.

I myself have never thought of code as being my output, I've always enjoyed solving problems, and solutions have always been my output. It's just that before I had to write the code for the solutions. Now I solve the problems and the AI makes it into code.

I think that this probably the dividing line, some people enjoy working with tools (code, unix commands, editors), some people enjoy just solving the problems. Both of course are perfectly valid, but they do create a divide when looking at AI.

Of course when AI starts solving all problems, I will have a very different feeling :-)

reply
deleted
reply
I’m not worried about whether I’m a good dev or not, but these AI things thoroughly take away from the thing I enjoy doing, to the point I’d consider leaving the industry entirely

I don’t want to wrangle LLMs into hallucinating correct things or whatever, I don’t find that enjoyable at all

reply
I've been through a few cycles of using LLMs and my current usage does scratch the itch. It doesn't feel like I've lost anything. The trick is I'm still programming. I name classes and functions. I define the directory structure. I define the algorithms. By the time I'm prompting an LLM I'm describing how the code will look and it becomes a supercharged autocomplete.

When I go overboard and just tell it "now I want a form that does X", it ends up frustrating, low-quality, and takes as long to fix as if I'd just done it myself.

YMMV, but from what I've seen all the "ai made my whole app" hype isn't trustworthy and is written by people who don't actually know what problems have been introduced until it's too late. Traditional coding practices still reign supreme. We just have a free pair of extra eyes.

reply
Serious question: so what then is the value of using an LLM? Just autocomplete? So you can use natural language? I'm seriously asking. My experience has been frustrating. Had the whole thing designed, the LLM gave me diagrams and code samples, had to tell it 3 times to go ahead and write the files, had to convince it that the files didn't exist so it would actually write them. Then when I went to run it, errors ... in the build file ... the one place there should not have been errors. And it couldn't fix those.
reply
I also use AI to give me small examples and snippets, this way it works okay for me

However this still takes away from me in the sense that working with people who are using AI to output garbage frustrates me and still negatively impacts the whole craft for me

reply
Hah. I don't work with (coding) people, so thankfully I don't have that problem
reply
> My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself. If you were a smart dev before AI, chances are you will remain a smart dev with AI.

We replaced the chess board in the park with an app that compares the Elo score of you and your opponent, and probabilistically declares a winner.

But don't worry, if you were a good chess player before we introduced the app, chances are you will remain a good one with the app. The app just makes things faster and cheaper.

My advice to the players is to quit mourning the loss of the tension, laughter and shared moments that got them into chess in the first place.

reply
>We replaced the chess board in the park with an app that compares the Elo score of you and your opponent, and probabilistically declares a winner.

The chess board is still there; not sure I see how LLM tools compel one to stop writing personal projects without AI assistance.

reply
I think there is more existential fear that is left unaddressed.

Most commenters in this thread seem to be under the impression that where the agents are right now is where they will be for a while, but will they? And for how long?

$660 billion is expected to be spent on AI infrastructure this year. If the AI agents are already pretty good, what will the models trained in these facilities be capable of?

reply
Yes, absolutely. I think the companies that don't understand software, don't value software, and think all tech is fundamentally equivalent - and that will therefore always choose the cheaper option and fire all their good people - will eventually fail.

And I think AI is in fact a great opportunity for good devs to produce good software much faster.

reply
deleted
reply
I think the issue is that, given the speed at which a bad dev can generate sub-par results that look good enough at face value, the output overwhelms any procedures in place.

Pair that with management telling us to go with AI to go as fast as possible means that there is very little time to do course correction.

reply
I think it represents a bigger threat than you realize. I can't use an AI for my day job to implement these multi-agent workflows I see. They are all controlled by another company with little or no privacy guarantees. I can run quantized (even more braindead) models locally but my work will be 3-5 years behind the SOTA, and when the SOTA is evolving faster than that timeline there's a problem. At some point there's going to be turnover - like a lake in winter - where AI companies effectively control the development lifecycle end-to-end.
reply
I agree with the quality comments. The problem with AI coding isn't so much the slop, it's the developers not realizing it's slop and trying to pass it off as a working product in code reviews. Some of the stuff I've reviewed in the past 6 months has been a real eye opener.
reply
I think no one is better positioned to use these tools than experienced developers.
reply
For me the problem is simple: we are in an active prisoner's dilemma with AI adoption. The collective outcome is worse because we aren't asking the right questions for optimal human results, yet we defect and use AI selfishly because we are rewarded for it. There's lots of potential for our use to be turned against us as we train these models for companies that have no commitment to the common good, and no commitment to return money to us or to common welfare if our jobs are disrupted and an AI replaces us fully.
reply
> My advice to everyone feeling existential vertigo over these tools is to remain confident and trust in yourself.

I do try to do that and have convinced myself that nothing has really changed in terms of what is important and that is systems thinking. But it's just one more barrier to convincing people that systems thinking is important, and it's all just exhausting.

Besides perhaps my paycheck, I have nothing but envy for people who get to work with their hands _and_ minds in their daily work. Modern engineering is just such a slog. No one understands how anything works, nor even really wants to. I liken my typical day in software to a woodworker who has to rebuild his workshop every day just to be able to do the actual woodworking. The amount of time I spend in software merely on being able to "open the door to my workshop" is astounding.

reply
One thing I'm hoping will come out of this is the retiring of coders that always turn what should be a basic CRUD app (just about everything) into some novelty project trying to pre-solve every possible concern that could ever come up, and/or a no-code solution that will never actually get used by a non-developer and frustrate every developer that is forced to use it.
reply
It's a combination of things... it's not just that AI feels like it is stripping the dignity of the human spirit in some ways, but it's also that the work we are doing is often detrimental to our fellow man. So learning to work with AI to do that faster (!!) (if it is actually faster on average), feels like doubling down.
reply