I'm not sure what your circumstances are but even if it's not true for you, it's true for many other people.
People online with identical views all assure me that they're highly skilled, though.
Meanwhile I've been experimenting with using AI for shopping, and all of them so far are horrendous. They can't handle basic queries without tripping over themselves.
But you can understand why all the sub-1700 chess players who use it for eval say it's good and that it's making them better?
Don't worry, AI will replace you one day too; you're just smarter than most of us, so you don't see it yet.
This kind of thinking is actually a big reason why execs are being misinformed into overestimating LLM abilities.
LLM coding agents alone are not good enough to replace any single developer. They only make a developer x% faster. That dev who is now x% faster may then allow you to lay off another dev. That is a subtle yet critical difference.
To address your point, let's try another analogy. Imagine secretarial assistants in the '80s discussing their risk of being replaced by computers. They would think: someone still needs to type those letters, sit next to that phone, and make those appointments; I am safe. Computers won't replace me.
It is not that AI will do all of your tasks and replace you. It is that your role as a specialist in software development won't be necessary most of the time (someone will do that, and that person won't call themselves a programmer).
For me, the main difference now is that some people can explain what their code does, while others can only explain what it's meant to achieve.
This is an interesting choice for a first experiment. I wouldn't personally base AI's utility for all other things on its utility for shopping.
Most people don't really understand coding, but shopping is a far simpler task, so it's easier to see how and where it fails (i.e. with even mildly complex instructions).
On the tech side I see it saving some time with stuff like mock data creation, writing boilerplate, etc. You still have to review its output like it's a junior's. You still have to think about the requirements and design to provide a detailed understanding to them (AI or junior).
I don't think either of these will provide 90% productivity gains. Maybe 25-50% depending on the job.
Sure, it's not as fast to understand as code I wrote myself. But at least I mostly just need to confirm how it implemented what I asked, not figure out WHAT it even decided to implement in the first place.
And in my org, people move around projects quite a bit. It hasn't been uncommon for me to jump into projects with 50k+ lines of code a few times a year to help implement a tricky feature, or to help optimize things when they run too slow. That's a lot of code to understand. Depending on who wrote it, sometimes it's simple: one or two files to understand, clean code. Sometimes it's an interconnected mess, and imho often way less organized than AI-generated code.
And same thing for the review process: lots of having to understand new code. At least with AI you are fed the changes at a slower pace.
Because it does.
> I still don't see ANY proof that it doesn't generate a total unmaintainable unsecure mess, that since you didn't develop, you don't know how to fix.
I wouldn't know since it's been years since I've tried but I'd imagine that Claude Code would indeed generate a half-baked Next.js monstrosity if one-shot and left to its own devices. Being the learned software engineer I am, however, I provide it plenty of context about architecture and conventions in a bootstrapped codebase and it (mostly) obeys them. It still makes mistakes frequently but it's not an exaggeration to say that I can give it a list of fields with validation rules and query patterns and it'll build me CRUD pages in a fraction of the time it'd take me to do so.
I can also give it a list of sundry small improvements to make and it'll do the same, e.g. I can iterate on domain stuff while it fixes a bunch of tiny UX bugs. It's great.
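To make the "list of fields with validation rules" concrete, here is roughly the shape of spec I mean. Everything in this sketch (the entity, the field names, the limits) is made up for illustration; the point is that a spec this small is already enough structure for an agent to scaffold CRUD pages from.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FieldSpec:
    """One line of the field list you'd hand to the agent."""
    name: str
    required: bool = True
    max_length: Optional[int] = None

# Hypothetical "User" entity; names and limits are illustrative only.
USER_FIELDS = [
    FieldSpec("email", max_length=254),
    FieldSpec("display_name", required=False, max_length=80),
]

def validate(record: dict) -> list:
    """Return human-readable validation errors for a record."""
    errors = []
    for f in USER_FIELDS:
        value = record.get(f.name)
        if f.required and not value:
            errors.append(f"{f.name} is required")
        elif value is not None and f.max_length is not None and len(value) > f.max_length:
            errors.append(f"{f.name} exceeds {f.max_length} characters")
    return errors
```

The validation function is the kind of thing the agent writes for you once the field list exists; the field list itself is the part you still have to think about.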
not talking about toys or vibecoded crap no one uses.
Nobody is.
Perhaps nobody cares to “convince you” and “win you over”, because…why? Why do we all have to spoon feed this one to you while you kick and scream every step of the way?
If you don’t believe it, so be it.
Weirdly, people who have actually created functional one-man products don't seem to have the same problem, as they welcome the business.
We are very much in need of an actual way to measure real economic impact of AI-assisted coding, over both shorter and longer time horizons.
There's been an absolute rash of vibecoded startups. Are we seeing better success rates or sales across the industry?
That's the same false argument that the religious have offered for their beliefs and was debunked by Bertrand Russell's teapot argument: https://en.wikipedia.org/wiki/Russell%27s_teapot
If you use it correctly, you can get better-quality, more maintainable code than 75% of devs will turn in on a PR. The “one weird trick” seems to be to specify, specify, specify. First you use the LLM to help you write a spec (or to document the existing behavior, if the code is pre-existing). Make sure the spec is correct and matches the user story and edge cases; the LLM is good at helping here too. Then break down separation of concerns, APIs, and interfaces. Have it build a dependency graph. After each step, have it reevaluate the entire stack to make sure it is clear, clean, and self-consistent.
Every step of this is basically the AI doing the whole thing, just with guidance and feedback.
Once you’ve got the documentation needed to build an actual plan for implementation, have it do that. Each step, you go back as far as relevant to reevaluate. Compare the spec to the implementation plan, close the circle. Then have it write the bones, all the files and interfaces, without actual implementations. Then have it reevaluate the dependency graph and the plan and the file structure together. Then start implementing the plan, building testing jigs along the way.
You just build software the way you used to, but you use the LLM to do most of the work along the way. Every so often, you’ll run into something that doesn’t pass the smell test and you’ll give it a nudge in the right direction.
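The loop above can be sketched in a few lines. Here `llm` stands in for whatever agent or CLI you drive, and the phase wording is illustrative, not a real API; the structural point is the re-evaluation call after every step.

```python
# Phases of the specify -> evaluate -> implement workflow (illustrative).
PHASES = [
    "Refine the spec until it matches the user story and edge cases",
    "Derive interfaces, APIs, and a dependency graph from the spec",
    "Write the bones: files and interfaces, no implementations yet",
    "Implement the plan step by step, building test jigs along the way",
]

def build(spec: str, llm) -> str:
    """Drive `llm` (any prompt -> text callable) through each phase."""
    artifact = spec
    for phase in PHASES:
        artifact = llm(f"{phase}\n\nCurrent state:\n{artifact}")
        # Close the circle after each step: re-check the output against
        # the original spec so drift doesn't accumulate silently.
        artifact = llm(f"Re-evaluate against the spec:\n{spec}\n\n{artifact}")
    return artifact
```

The second `llm` call per phase is the whole trick: each artifact gets compared back to the spec before you descend another level of abstraction.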
Think of it as a junior dev that graduated top of every class ever, and types 1000wpm.
Even after all of that, I’m turning out better code, better documentation, and better products, and doing what used to take 2 devs a month, in 3 or 4 days on my own.
On the app development side of our business, the productivity gain is also strong. I can’t really speak to code quality there, but I can say we get updates in hours instead of days, and there are fewer bugs in the implementations. They say the code is better documented and easier to follow, because they’re not under pressure to ship hacky prototype code as if it were production.
On the current project, our team size is 1/2 the size it would have been last year, and we are moving about 4x as fast. What doesn’t seem to scale for us is size. If we doubled our team size I think the gains would be very small compared to the costs. Velocity seems to be throttled more by external factors.
I really don’t understand where people are coming from saying it doesn’t work. I’m not sure if it’s because they haven’t tried a real workflow, or maybe haven’t tried it at all, or if they’re just “holding it wrong.” It works. But you still need seasoned engineers to manage it and catch the occasional bad judgment or deviation from the intention.
If you just let it run, it will definitely go off the rails and you’ll end up with a twisted mess that no one can debug. But if you use a system of writing the code incrementally through a specification-evaluation loop as you descend the abstraction from idea to implementation, you’ll end up winning.
As a side note, and this is a little strange and I might be wrong because it’s hard to quantify and all vibes, but:
I have the AI keep a journal about its observations and general impressions, sort of the “meta” without the technical details. I frame this to it as a continuation of “awareness” for new sessions.
I have a short set of “onboarding” documents that describe the vision, ethos, and goals of the project. I have it read the journal and the onboarding docs at the beginning of each session.
I frame my work with the AI as working with a “collaborator” rather than a tool. At the end of the day, I remind it to update its journal with reflections about the day’s work. It’s total anthropomorphism, obviously, but it seems to inspire “trust” in the relationship, and it really seems to up-level the effort that the AI puts in. It kinda makes sense, LLMs being modelled on human activity.
FWIW, I’m not asserting anything here about the nature of machine intelligence, I’m targeting what seems to create the best result. Eventually we will have to grapple with this I imagine, but that’s not today.
When I have forgotten to warm-start the session, I find that I am rejecting much more of the work. I think this would be worth someone doing an actual study to see if it is real or some kind of irresistible cognitive bias.
I find that the work produced is much less prone to going off the rails or taking shortcuts when I have this in the context, and by reading the journal I get ideas on where and how to do a better job of steering and nudging to get better results. It’s like a review system for my prompting. The onboarding docs seem to help keep the model working towards the big picture? Idk.
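For anyone who wants to try this, the warm start is mechanically trivial: read the onboarding docs and the running journal into the context at the start of each session. The file names here are assumptions about layout, not any convention.

```python
from pathlib import Path

# Hypothetical file names; use whatever your project actually keeps.
ONBOARDING = ["VISION.md", "GOALS.md"]
JOURNAL = "JOURNAL.md"

def warm_start_context(root: str = ".") -> str:
    """Concatenate onboarding docs plus the journal into one preamble."""
    parts = []
    for name in ONBOARDING + [JOURNAL]:
        path = Path(root) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

You'd paste (or pipe) the returned string in as the first message of a session; the journal update at the end of the day is just the reverse write.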
This “system” with the journal and onboarding only seems to work with some models. GPT5 for example doesn’t seem to benefit from the journal and sometimes gets into a very creepy vibe. I think it might be optimized for creating some kind of “relationship” with the user.
I suspect you either already were or would’ve been great at leading real human developers, not just AI agents. Directing an AI towards good results is shockingly similar to directing people. I think that’s a big thing separating those getting great results with AI from those claiming it simply does not work. Not everyone is good at high-level planning, architecture, and directing others. But those who already had those skills basically just hit the ground running with AI.
There are many people working as software engineers who are just really great at writing code, but may be lacking in the other skills needed to effectively use AI. They’re the angry ones lamenting the loss of craft, and rightfully so, but their experience with AI doesn’t change the shift that’s happening.