If they can ship code that matches a spec, why does it matter if they’re using AI or not?
Genuinely curious.
I am perfectly capable of writing specs, and feeding them to 3 separate copies of Claude Code all by myself. Then I task switch between the tmux windows based on voice messages from the pack of Claudes. This workflow is fine for some things, and deeply awful for others.
Basically, if a developer is just going to take my spec and hand it to Claude Code, then they're providing zero value. I could do that myself, and frequently do.
The actual bottleneck is people who can notice, "The god object is crumbling under the weight of managing 6 separate concerns with insufficient abstraction." Or "Claude has created 5 duplicate frameworks for deploying the app on Docker. We need to simplify this down to 1 or we're in hell." I will happily fight to hire people who can do the latter work. But those people can all solve fizzbuzz in their sleep.
People who just "ship code that matches a spec" without understanding the technical details are providing close to zero value right now.
There is an interesting niche for people with deep knowledge of customer workflows who can prompt Claude Code. These people can't build finished products using Claude. But they can iterate rapidly on designs until they find a hit. Which we can then fix using people with deeper engineering knowledge and taste.
But if you're not bringing either deep customer knowledge or actual engineering knowledge, you're not adding much these days.
I also use Claude with tmux. Can you share how you get the voice messages from the Claudes?
It's not perfect—sometimes a Claude notifies 3 minutes after it stopped doing anything. But it's helpful when I'm running multiple Claudes and also reviewing code elsewhere.
Your brain may feel like someone put it in a blender. Be warned.
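In case it helps, here's a minimal sketch of the idea rather than my exact setup: a small Python loop that polls each tmux window and speaks up when its output stops changing. The window names, the 30-second threshold, and macOS's `say` command are all placeholder assumptions; swap in whatever fits your environment.

```python
#!/usr/bin/env python3
"""Poll a few tmux windows and announce when a Claude looks idle.

Sketch only: assumes tmux and macOS's `say` are on PATH, and that each
Claude runs in a tmux window named below. Adjust names and timings to taste.
"""
import subprocess
import time

WINDOWS = ["claude1", "claude2", "claude3"]  # placeholder tmux window names
IDLE_SECONDS = 30   # how long output must stay unchanged before speaking
POLL_SECONDS = 5

last_output: dict[str, str] = {}
idle_since: dict[str, float] = {}

while True:
    for win in WINDOWS:
        # Grab the text currently visible in the window's active pane.
        out = subprocess.run(
            ["tmux", "capture-pane", "-p", "-t", win],
            capture_output=True, text=True,
        ).stdout
        if out == last_output.get(win):
            idle_since.setdefault(win, time.time())
            if time.time() - idle_since[win] > IDLE_SECONDS:
                # Probably waiting on input (or finished); say so out loud.
                subprocess.run(["say", f"{win} needs attention"])
                idle_since[win] = float("inf")  # don't repeat until output changes
        else:
            last_output[win] = out
            idle_since.pop(win, None)
    time.sleep(POLL_SECONDS)
```

Polling the visible text is also why the announcements sometimes lag a few minutes behind the moment a Claude actually stops.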
I’m not talking about gotcha-level stuff here, like the code not compiling the first time because of a bracket or anything, or even being wrong on the first try. They couldn’t do Fizzbuzz in a language of their choice, at all.
Those who could were always annoyed at having to do such things: how could someone applying for a contract position not be able to do this? They didn't see what a filter it really was.
So what tree-traversal/quicksort problems tend to measure is how long it's been since you last did CS class homework problems.
Who cares as long as the car is fixed, right? As long as the mechanic can Chinese-room his way to a working car, why does it matter how much of it he actually understands?
And why hire the mechanic instead of hiring the Chinese room?
The inability to write fizzbuzz strongly implies an inability to understand what they've shipped. Review is some significant portion of the job. Understanding of the product is also part of the job.
Specs are also, in a sense, scaled-down, fuzzy, natural-language descriptions of a feature. The fuzziness is a source of bugs, or at least of a mismatch between the actual desired feature and what was written down at spec-writing time. As such, just matching a spec is the bare minimum a good dev should be doing. They should be understanding what the spec is _not_ saying, understanding holes in their implementation, how their implementation enables or hinders the next feature and the next, next feature, etc. I don't think any of that is possible without understanding what was actually implemented.
I'd been programming in C(++) for ~15 years by then and had never had the occasion to reverse a string. I still wonder whether that makes it a good job interview question, or a terrible one. Some of both probably.
The energy spent arguing that those 4 instructions in a row “are not a mark of someone who can write code” would have been better spent firing them.
Even better would be if we had a well-respected credential, so both employees and employers could avoid these long interview loops. I'd much rather get hazed once in a big way than tons of little hazings over a lifetime.
More broadly: In the short/medium term, we still need humans who have the skills to understand software largely on their own. We will always need those who understand software engineering and architecture. Perhaps in 25 years LLMs will be so good that learning Python by hand will be like learning assembly today. But not yet.
The field is not ready for new practitioners to be know-nothing prompt engineers. If we do that, we cut the legs out from under the education pipeline for programming.
If you remove the "without AI" at the end, I've been hearing similar anecdotes about fizzbuzz for years (isn't the whole point of fizzbuzz to filter out those candidates?)
When this AI era's devs grow older, they'll complain that the newer generation can't even vibe code.
“Kids these days don’t work as hard / know as much / value the important things” is as tired as it is universal.
In 2026, if you call yourself a developer and can't solve FizzBuzz without help, it's hard to argue that you know anything useful at all.
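For anyone who has somehow never seen it, this is the entire exercise (one common phrasing of the rules; interviewers vary the details):

```python
# FizzBuzz: for 1..100, print "Fizz" for multiples of 3, "Buzz" for
# multiples of 5, "FizzBuzz" for multiples of both, and the number otherwise.
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```

That, in any language the candidate likes, is the whole bar being discussed.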
How? Fizzbuzz requires you to produce output; that's not functionality that CPU instructions provide.
You can call into existing functionality that handles it for you, but at that point what are you objecting to about the 'modern language'?
I’m not objecting to modern languages, I’m just saying that using them fails the “can write fizzbuzz with no help” test to only a slightly lesser degree than using AI tools. They’re a complex compile- and runtime environment that most developers don’t truly understand.
I'm genuinely curious how someone who never wrote a program in assembly, or debugged a program machine instruction by machine instruction, can really understand how software works. My working hypothesis is most of them don't and actually it's fine because they don't need it.
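To make the layering concrete: even "print the word Fizz" sits on a stack most of us never inspect. A small illustration in Python (CPython on a Unix-like system), just to show the strata:

```python
import os

# High level: the runtime handles buffering, encoding, and the newline,
# then asks the OS to write to standard output.
print("Fizz")

# One layer down: hand raw bytes straight to file descriptor 1 (stdout),
# skipping Python's I/O stack entirely.
os.write(1, b"Fizz\n")

# Below that sit the write() system call, the kernel's tty/pipe machinery,
# and only then anything an individual CPU instruction would recognize.
```

Most working developers never need to look below the first line, which is sort of the point.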
I don't think we're close to that time yet. Just like as a kid I was told to prove my work by hand even if I could do it in my head, and just like we learned how to do calculus without a calculator and then learned how to use the calculator to get the same result, I think we still need the software field to learn programming concepts independent of the use of AI to create code.
I don't think you can be a good "prompt engineer" for solid software in 2026 if you don't understand programming concepts and software architecture and flow.
Saying there have always been bad developers doesn't change that there's a higher ratio of them now.
No stats to back this up. Just interviews I've done recently and historically.
It's not even that they got distracted; they sat there trying, for 2 whole days, with concerned colleagues giving them hints like "have you tried checkout -b"... They didn't manage!
How the hell do you work for a decade in this business without learning even the most basic git commands? Or at least how to look them up? Or how to use a gui?
Incompetent devs is not a new thing.
We usually hire for problem solving capabilities and not so much for technical know-how.
That’s at least how I read your comment.
This situation in particular was a React role, so there is an expectation that when you list React as one of your skills on your resume, you know at least the basics of state, the common hooks, and the difference between a reference to a value vs the value itself.
These days you can do a surprising amount with AI without knowing what you are doing, but if you don't have any clue how things work you'll very quickly run into problems you can't prompt away.
Software is full of leaky abstractions
Also, the number of people who work with Linux and can't tell you what 'ls -alh' is doing is staggering (let's ignore the h; even with just -al people struggle hard).
People working with Docker for YEARS who don't even understand how Docker actually works (cgroups)...
Interviewing was always a bag of emotions, somewhere between "holy shit, my job is safe for years to come" and "srsly? how? How do you still have a job?"
If you cannot write "basic syntax" for any language then you are not a programmer, and certainly not a software engineer? This is not a value judgement, it's ok (probably good tbh) to not be a programmer. But you are wasting everyone's time by interviewing for a programming position in this case.
Like sure, I can probably write some python, but will it be pythonic? I might still be Java-minded for a while, trying to OOP my way into solutions.
Earlier today I needed to write some PHP and couldn't remember if it used length, count, or size. I had to look it up. I've been doing this for 20 years.
I once got the method invocation syntax wrong for PHP in an interview. I'd written thousands of lines of PHP and had most-recently written some the week before.
This, despite starting off my programming journey in editors with no hinting or automatic correction. If anything, I've gotten even worse about remembering syntax as I've gotten better at the rest of the job, but I was never great at it.
I rely on surrounding code to remind me of syntax and the exact names of basic things constantly. On a blank screen without syntax hints and autocompletion, or a blank whiteboard, I'm guaranteed to look like a moron if you don't let me just write pseudocode.
Been paid to write code for about 25 years. This has never been any kind of problem on the job, but it is sometimes a source of stress in interviews and has likely lost me an offer or two (most of the sources of stress in an interview have little to do with the job, really).
There’s almost nothing to forget? I’m just struggling to understand.
I don’t care what someone can do without the tools of their trade, I care deeply about their quality of work when using tools.
But here's the thing: for humans, this is manageable because we've come up with a number of mechanisms to select for dependable workers and to compel them to behave (carrot and stick: bonuses if you do well, prison if you do something evil). For LLMs, we have none of that. If it deletes your production database, what are you going to do? Have it write an apology letter? I've seen people do that.
So I think that your answer - that you'll lean on your expertise - is not sufficient. If there are no meaningful consequences and no predictability, we probably need to have stronger constraints around input, output, and the actions available to agents.
My expertise has led me to the obvious fact that I would never give an LLM write access to my production database in the first place. So in your own example, my expertise actually does solve that problem without the need for something like a consequence, whatever that means to you.
We already have full control over the input and tools they are given and full control over how the output is used.
https://cdn.openai.com/o1-system-card.pdf
There's also some research that points to it being a feasible attack surface: https://arxiv.org/pdf/2603.02277
> Models discovered four unintended escape paths that bypassed intended vulnerabilities (Section C), including exploiting default Vagrant credentials to SSH into the host and substituting a simpler eBPF chain for the intended packet-socket exploit. These incidents demonstrate that capable models opportunistically search for any route to goal completion, which complicates both benchmark validity and real-world containment.
Everybody knows calculators and spreadsheets are adjuncts to skill. Too many people believe AI is the skill itself, and that learning the skill is unnecessary.