It is the last narrative that parts of Wall Street still believe, and there are enough mediocre or senile coders around to promote it.

That narrative will implode like Sora later this year.

reply
No, AI is truly useful in software engineering. I was a skeptic until I started using it. No, it isn’t going to solve every problem out there, but it’s a force multiplier.
reply
You trade understanding for speed. How acceptable that trade is depends on you and the task in front of you. I cannot recommend it as a general solution.
reply
This field doesn’t do well on long-term thinking. Even if all this turns out to be a net loss, it will be reinterpreted as a win and just an opportunity for even more of the same solution. There are numerous examples of this, e.g. the OOP craze. Tech is a stock market of ideas and HN is a trading floor. The “line goes up” logic applies - not merit.
reply
Describing OOP as a "craze" is incredibly out of touch. It's been a thing for, what, three decades?
reply
You may not recall the crazy era of OOP where people would go bonkers with massive object trees trying to objectify everything and using operator overloading to do (dumb) things like adding a control to a window with +=.
reply
OOP is great. "OOP is the one perfect paradigm for all coding" was the craze.
reply
I'm sure I'm not the first person you've seen hinting at OOP (and all that came with it) having been hyped up beyond its merits.
reply
There certainly was an OOP craze, that's not out of touch to talk about.
reply
AbstractBeanFactoryFactoryInterfaceBeanContextFactoryBeanBean.java
reply
That’s just false. I’ve spent a disproportionate amount of time “understanding” awful tooling like Gradle and npm. There’s no value in it if you’re not an infra engineer. It would take me a couple of days to manually restructure my hobby app; now I can just say “extract this into another workspace/subproject” and be done with it in minutes. And that’s just one example.
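To be concrete, "extract into a subproject" in Gradle boils down to a layout like this (a rough sketch only; the module names `app` and `core` are made up for illustration):

```groovy
// settings.gradle — hypothetical layout after the extraction
rootProject.name = 'hobby-app'
include 'app'    // the original application module
include 'core'   // the newly extracted subproject

// app/build.gradle — the app now depends on the extracted code
dependencies {
    implementation project(':core')
}
```

The tedious part is not these few lines but moving the sources, untangling the imports, and splitting the dependency declarations, which is exactly what the AI does in minutes.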
reply
I agree with this sentiment. I just also see AI-driven development in core business logic, where truly understanding what is going on is essential and yet gets completely disregarded.
reply
If I never have to debug a gradle file ever again, it's all worth it.
reply
You might say the same about garbage collected programming languages. It’s an acceptable tradeoff in a lot of scenarios. Same goes for AI.
reply
It is wild that people are still posting this kind of thing in 2026. Some folks really are living in a different world.
reply
I liken it to VR. That was the big hype before AI, and while I really love the tech (I have 5 headsets), I could have told anyone that the expectations were insane. Investors truly believed that in 2-3 years' time everyone would be doing everything with a big headset on. It was dragged into lots of situations where it didn't belong.

Then of course the hype collapsed, and now even the use cases where VR shines are deemed a flop. But no, it's exceptionally good at simulation (racing/flight) and at visualising complex designs during 3D design work.

I see the same with generative AI and LLMs. They're really good at programming. They're definitely good at making quick art drafts, or even final ones for those who don't care too much about the specifics of the output. I use them a lot for inspiration.

But it's not good for everything it's being sold as. Just like in the VR craze, they're dragging it by the hair into use cases where it has no business being. A lot of these products are begging to die.

For example, an automation tool driven by natural language. For that it's a disaster: it's inconsistent and constantly confuses itself. It's the reason openclaw is a foot bazooka. It's also not very good at meeting summaries, especially when many speakers are in a room on the same microphone.

I don't think AI will disappear, but I do hope a realignment toward the use cases where it actually adds value happens soon.

reply
> It's also not very good at meeting summaries, especially when many speakers are in a room on the same microphone.

It is astonishingly poor at this. My intuition was that it should be good at it (it's basically a translation problem, right? And LLMs are fundamentally translation systems), but the practical results are so poor. Not just mis-identifying speakers (frequently saying PersonX responded to PersonX), but reaching conclusions completely opposite to what was actually said.

I'm genuinely intrigued as to what approaches have been taken in this space and what the "hard problem" is that stops it being good.

reply
Ugh... a balanced take, this isn't appropriate for social media! /s
reply
deleted
reply
It's because programmers are willing to pay thousands of dollars a month for a product commensurate with the value it provides, aka AI coding.

Generating pointless AI videos for pocket change or ad revenue is a loser in comparison.

reply
Thousands? Maybe not, but hundreds? Yeah, for my freelancer/contracting gigs, it's easily worth $200/month to be able to say "How come X is like that and what change led to Y being Z?", wait 20 minutes, and then get an answer that jumpstarts understanding a completely new codebase. If AI/LLMs never evolved beyond their current skills and usefulness, I'd still be happy to pay $200/month for this.

However, I don't know a single developer who pays "thousands of dollars a month", not sure how you'd end up like that.

reply
I most definitely am not.
reply
From my vantage point, AI consumption is being led by tech leadership more than by actual in-the-weeds programmers. HN just happens to include more folks at the intersection of leadership and individual code contributor.

The top-down push for AI is in line with the age-old tradition of replacing highly skilled, highly compensated trade workers with automation. The writing is on the wall if folks care to look; many just don't want to. This has happened a thousand times before, and it'll keep happening in the name of "progress" in capitalist systems for as long as there are "inefficiencies" to "resolve." AI is meant as our replacement, not as an extension of our skill, even if it happens to align with that today.

It's increasingly obvious that the next phase in the evolution of the average programmer role will be as technical requirements writer and validator of machine-generated output, with the actual implementation outsourced to the machine. Even in that new role, there is no secret sauce protecting this "programmer" from further automation. Technical product managers eventually fall to automation too, given enough time and money poured into automating the translation of fuzzy, under-specified ideas into concrete bulleted requirements; they'll simply review the listed output, make minor tweaks, and hit "send" to generate the list of Jira-like units of work to farm out to a fleet of agents wearing various hats (architect, programmer, validator, etc.).

The above is very much in progress already. Today I'm spending the majority of my time reviewing the output of said AI "teams", and let me tell you: it gets closer and closer to "good enough" week by week. Last year's models are horse shit compared to what I'm using today with agentic teams of the latest frontier models (Opus 4.6 [1m] currently, with some Sonnet).

Maybe we're at a plateau, and the limitations inherent in GenAI tech will prove insurmountable before we get to 100% replacement. But it literally won't matter in the end: "good enough" always prevails over perfect, and human devs are far from perfect already.

I have been producing software (at FAANG scale) for several decades now, and I've been closely monitoring GenAI systems for coding specifically. Even just a few months ago I'd get a verbose, meandering sprawl of methods and logic, with the actual deliverables outlined in the prompt scattered through it. Sometimes there was outright disregard of the requirements laid out, or "cheating" on validation by disabling tests or writing ones that don't actually do anything useful.

Today I'm getting none of that. I don't know what changed, but I somehow get automated code with good separation of concerns, following best practices and proven architectural patterns. Sure, with a bunch of juniors let loose with AI you still get garbage, but that's simply a function of poor delegation of work units. Giving the individual developer and the AI too much leeway in the scope of changes is the bug there. Division of work into small enough units is the key, and always has been, for the de-skilling portion of automating skilled human labor away to machines.

We're just watching Marxist theory on capitalist systems play out in real time, in a field generally thought to be "safe." It certainly won't be the last.

reply
What's your setup for the agent team?
reply
To be fair, LLMs are exceptional at coding and they very well could displace some jobs. But you'll always need people at the helm who know what they're doing too.
reply
Also, developers make good early adopters for tech.
reply
This is very true and an underrated comment.
reply
Yeah, they're called PMs and they already exist. These people normally create the design documents, the flows, etc., and then have to wait for the dev team to implement them.

So a good PM running 1-3 teams will only need 1-3 agentic AI teams instead.

reply
[dead]
reply
> To be fair, LLMs are exceptional at coding

No they aren't. Any decently skilled human blows them out of the water. They can do better than an untrained human, but that's not much of an achievement.

reply
> Any decently skilled human blows them out of the water

No, far from it. I'm by all accounts a "decently skilled human", at least if we go by our org, and it blows anyone out of the water with some slight guidance.

And the most important part: it doesn't get tired, it doesn't have mood swings, and its performance isn't affected by poor sleep, last night's party, or an SO having a bad day.

reply
I have 20 years of experience and I don't handwrite any code anymore. Opus does everything, and it only needs a bit of steering occasionally. If you can give it guardrails (e.g. a pre-existing design system) and ways to verify its output (e.g. enforce TDD and use Chrome to visually verify), then it gets it right basically every time.
reply
The thing is, LLMs produce better-quality one-shots than any of the products that come back from overseas ultra-budget contractors in India or SEA. I don't know what that means for Western devs, but I can tell you that the Fortune 500 I work for is dialing back on contracting and outsourcing, because domestic teams can do higher-quality work faster.
reply
> The thing is, LLMs produce better-quality one-shots than any of the products that come back from overseas ultra-budget contractors in India or SEA.

Source?

reply
It turns out there are whole categories of software where "extremely fast and good enough" is what matters, even for skilled software developers.
reply
I’ve been a full stack developer for 10+ years now and I completely disagree.

Modern models like Opus and Gemini 3 are great coding companions; they are perfectly capable of producing clean code given the right context and prompt.

At the end of the day it's the same rule of garbage in -> garbage out: without the right context/skills/guidance, you can end up with bad code just as easily as with good code.

reply
Am I an untrained human if I believe that Claude Opus 4.6 produces generally better code than I do in most circumstances?

Even with years as a principal engineer at a company with high coding standards and engineering processes?

reply
Maybe not untrained, but you work on some easy, boring shit. That may be true for a lot of developers, I don't know.
reply
What do you reckon? Do you think that is true for me and thousands of others, or that your opinion on this is too narrow and rigid?
reply
How are they going to claw back the market from Anthropic though?
reply
Step 1: make a coding product that is better on cost/quality/speed. They probably need to choose two, so redirecting compute from dumb AI videos to coding makes sense.

Step 2: win back public trust by firing Sam Altman or dropping defense contracts or something else I can’t think of.

reply
Step 3: use politicians to jam Anthropic up in legal battles.
reply
This is actually step 1
reply
Imagine all the money they can save on Sora, which surely cost them way more than regular LLM usage, that they can now invest in suave Super Bowl ads trash-talking Claude.

I also wonder if they got the $1B from Disney. Was that even a paid-for deal, or just another "announced" deal? No article I found mentions anyone signing any paperwork, which seems typical of AI journalism these days. Every AI deal is supposedly inked, but if you dig deeper, all you find are adjectives like proclaimed, announced, agreed upon.

reply
I believe the $1B is apparently not coming anymore, because it was basically dependent on Sora being an actual product that actual people can use, which isn't the case anymore.
reply
"Clawing back" was what the Open Claw acquisition was for ;)
reply
Not enough money though. Not hundreds of billions of dollars.
reply
[flagged]
reply
Software engineers have spent the last 40 years automating away other people's jobs. The discomfort only seems to start when the automation points inward.
reply
I want to make people’s jobs easier and more interesting, I never want to make them redundant.

This did happen once: 3 people were laid off, I think directly because of things I said to drive the completion of some automation. That was the last time I ever measured something in man-hours to make a point. I'll never do it again. That was over 12 years ago.

reply
Have they? I keep seeing this little snippet of wisdom thrown around in these AI discussions as a gotcha, but to me it seems like moving jobs to dirt-cheap third-world countries with slave labor has been a bigger culprit for job loss than any kind of automation from software.

If anything, software engineers have spawned uncountable numbers of jobs that never would've existed otherwise, is what my intuition tells me.

reply
Haven't mechanical engineers done the same thing (steam engines, trains, ...)? All of applied science is about using knowledge to remove tedium (and now adding it back). A lot of jobs have been removed.
reply
Model T factory workers are anti-worker
reply
[flagged]
reply
deleted
reply
deleted
reply