I had no access to anyone who could teach me calculus as a kid except Khan Academy, so I think this is a gross exaggeration. But I agree in the end, that all my "real" learning did come from pen-and-paper practice, not watching videos.
It's not unlike going to the gym, and we see how many people do that regularly. Except it's even funnier, because people serious about the gym buy what? Tutors. They call them personal trainers. We've known for a millennium or more that 1-on-1 instruction is vastly better than anything else, but most people actually don't want to get into shape, and most people actually don't want to learn.
But that's not using a computer as a computer; it's using it as a video player. When evaluating whether computers are "good for learning", I don't think we should include using a computer as a video player, a book, or even flash cards. It should be things a computer uniquely offers which books, paper, videos, and a physical reference library cannot.
Based on the results of deploying hundreds of millions of computers to schools in the 80s and 90s, the evidence was mostly that computers are good for learning computer programming and "how to use a computer", but not notably better than cheaper analog alternatives for learning other things.
Interestingly, a properly trained and scaffolded LLM could be the first thing to meaningfully change that. It could do some things in ways only human teachers could previously since it is theoretically capable of observing learner progress and adapting to it in real-time.
He really took the time to replicate the manual teaching process of writing on a whiteboard. He improved on it by using colors, but basically kept the same pace as a teacher writing on a whiteboard.
When professors are given a projector, they just throw together some slides and add their narration.
This is not very efficient. To learn you need to suffer. Or you need to watch the suffering.
This has never been achieved by, nor is it the point of, education for the masses.
They're wrong sometimes, but usually in verifiable ways. And they don't seem to know the difference between medicine and bioterrorism, so often they refuse. But these limitations are worth tolerating when the alternative is that our specialists in topic X are bogged down by questions about topic Y to the point where X isn't getting taught.
Whether you're in class or at work, it's just courteous to ask an AI first.
I agree with this.
The problem, frankly, is that computers, and now computers with LLMs, make it easy to cheat.
The kid doesn't want to learn; the kid wants good grades so the parent is happy with them, and the young adult wants to get the paper because they were told it's required for a good life. It's a misalignment of incentives.
If they can ship code that matches a spec, why does it matter if they're using AI or not?
Genuinely curious.
I am perfectly capable of writing specs, and feeding them to 3 separate copies of Claude Code all by myself. Then I task switch between the tmux windows based on voice messages from the pack of Claudes. This workflow is fine for some things, and deeply awful for others.
Basically, if a developer is just going to take my spec and hand it to Claude Code, then they're providing zero value. I could do that myself, and frequently do.
The actual bottleneck is people who can notice, "The god object is crumbling under the weight of managing 6 separate concerns with insufficient abstraction." Or "Claude has created 5 duplicate frameworks for deploying the app on Docker. We need to simplify this down to 1 or we're in hell." I will happily fight to hire people who can do the latter work. But those people can all solve fizzbuzz in their sleep.
People who just "ship code that matches a spec" without understanding the technical details are providing close to zero value right now.
There is an interesting niche for people with deep knowledge of customer workflows who can prompt Claude Code. These people can't build finished products using Claude. But they can iterate rapidly on designs until they find a hit. Which we can then fix using people with deeper engineering knowledge and taste.
But if you're not bringing either deep customer knowledge or actual engineering knowledge, you're not adding much these days.
I also use Claude with tmux. Can you share how you get the voice messages from the Claudes?
It's not perfect—sometimes a Claude notifies 3 minutes after it stopped doing anything. But it's helpful when I'm running multiple Claudes and also reviewing code elsewhere.
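For anyone who wants the general idea without tying it to any particular tool: here's a minimal sketch (my own assumed setup, not the parent's actual one) that polls a tmux pane and speaks up when its output stops changing. The pane name `claude1` and macOS `say` are placeholders.

```python
import hashlib
import subprocess
import time

POLL_SECONDS = 10   # how often we look at the pane
IDLE_POLLS = 3      # unchanged polls before we announce

def update_idle_count(prev_hash: str, curr_hash: str, idle_count: int) -> int:
    """Pure helper: bump the idle counter while pane content is
    unchanged, reset it as soon as new output appears."""
    return idle_count + 1 if curr_hash == prev_hash else 0

def pane_hash(pane: str) -> str:
    # Hash the visible pane contents so comparisons stay cheap.
    out = subprocess.check_output(["tmux", "capture-pane", "-p", "-t", pane])
    return hashlib.sha256(out).hexdigest()

def speak(message: str) -> None:
    # macOS text-to-speech; swap in espeak or notify-send on Linux.
    subprocess.run(["say", message], check=False)

def watch(pane: str) -> None:
    prev, idle = "", 0
    while True:
        curr = pane_hash(pane)
        idle = update_idle_count(prev, curr, idle)
        if idle == IDLE_POLLS:  # announce once per idle stretch
            speak(f"Claude in pane {pane} looks idle")
        prev = curr
        time.sleep(POLL_SECONDS)

# To run for real: watch("claude1")  # pane name is a placeholder
```

The polling interval is also exactly why a notification can lag a few minutes behind the moment the agent actually stops.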
Your brain may feel like someone put it in a blender. Be warned.
I’m not talking about gotcha-level stuff here, where the first attempt didn’t compile because of a bracket, or was even just wrong the first time. They couldn’t do FizzBuzz in a language of their choice, at all.
Those who could were always annoyed at having to do such things, because how could someone applying for a contract position not be able to do this? They didn't see what a filter it really was.
So what tree-traversal/quicksort problems tend to measure is how long it's been since you last did CS class homework problems.
Who cares as long as the car is fixed, right? As long as the mechanic can Chinese-room his way to a working car, why does it matter how much of it he actually understands?
And why hire the mechanic instead of hiring the Chinese room?
The inability to write fizzbuzz strongly implies their inability to understand what they've shipped. Review is some significant portion of the job. Understanding of the product is also part of the job.
Specs are also, in a sense, scaled-down, fuzzy, natural-language descriptions of a feature. The fuzziness is the source of bugs, or at least of a mismatch between the actually desired feature and what was written down at spec-writing time. As such, just matching a spec is the bare minimum a good dev should be doing. They should understand what the spec is _not_ saying, the holes in their implementation, and how their implementation enables or hinders the next feature and the one after that. I don't think any of that is possible without understanding what was actually implemented.
I'd been programming in C(++) for ~15 years by then and had never had the occasion to reverse a string. I still wonder whether that makes it a good job interview question, or a terrible one. Some of both probably.
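For what it's worth, the exercise itself is tiny; here's a sketch in Python of the two-pointer reversal you'd write over a mutable char buffer in C:

```python
def reverse_string(s: str) -> str:
    # Two-pointer swap over a list of characters, the way you'd
    # do it in place over a char buffer in C.
    chars = list(s)
    i, j = 0, len(chars) - 1
    while i < j:
        chars[i], chars[j] = chars[j], chars[i]
        i, j = i + 1, j - 1
    return "".join(chars)
```

In Python itself `s[::-1]` does the same thing in one step, which may be exactly what the question probes: idiom recall versus first-principles mechanics.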
The energy spent arguing that those 4 instructions in a row “are not a mark of someone who can write code” would have better been spent firing them.
Even better would be if we had a well-respected credential, so employees and employers could both avoid these long interview loops. I'd much rather get hazed once in a big way than endure tons of little hazings over a lifetime.
More broadly: In the short/medium term, we still need humans who have the skills to understand software largely on their own. We will always need those who understand software engineering and architecture. Perhaps in 25 years LLMs will be so good that learning Python by hand will be like learning assembly today. But not yet.
The field is not ready for new practitioners to be know-nothing prompt engineers. If we do that, we cut the legs out from under the education pipeline for programming.
If you remove the "without AI" at the end, I've been hearing similar anecdotes about FizzBuzz for years. (Isn't the whole point of FizzBuzz to filter out those candidates?)
When this AI era's devs grow older, they'll complain that the newer generation can't even vibe code.
“Kids these days don’t work as hard / know as much / value the important things” is as tired as it is universal.
In 2026, if you call yourself a developer and can't solve FizzBuzz without help, it's hard to argue that you know anything useful at all.
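For anyone who hasn't seen it, the whole exercise is about this much code (a minimal Python version):

```python
def fizzbuzz(n: int) -> str:
    # Multiples of both 3 and 5 -> "FizzBuzz", of 3 -> "Fizz",
    # of 5 -> "Buzz", everything else -> the number itself.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 16):
    print(fizzbuzz(i))
```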
How? Fizzbuzz requires you to produce output; that's not functionality that CPU instructions provide.
You can call into existing functionality that handles it for you, but at that point what are you objecting to about the 'modern language'?
I’m not objecting to modern languages, I’m just saying that using them fails the “can write fizzbuzz with no help” test to only a slightly lesser degree than using AI tools. They’re a complex compile- and runtime environment that most developers don’t truly understand.
I'm genuinely curious how someone who never wrote a program in assembly, or debugged a program machine instruction by machine instruction, can really understand how software works. My working hypothesis is most of them don't and actually it's fine because they don't need it.
I don't think we're close to that time yet. Just like as a kid I was told to prove my work by hand even if I could do it in my head, and just like we learned how to do calculus without a calculator and then learned how to use the calculator to get the same result, I think we still need the software field to learn programming concepts independent of the use of AI to create code.
I don't think you can be a good "prompt engineer" for solid software in 2026 if you don't understand programming concepts and software architecture and flow.
Saying there have always been bad developers doesn't change that there's a higher ratio of them now.
No stats to back this up. Just interviews I've done recently and historically.
It's not even that they got distracted, they sat there trying, for 2 whole days, with concerned colleagues giving them hints like "have you tried checkout -b"... They didn't manage!
How the hell do you work for a decade in this business without learning even the most basic git commands? Or at least how to look them up? Or how to use a gui?
Incompetent devs is not a new thing.
We usually hire for problem solving capabilities and not so much for technical know-how.
That’s at least how I read your comment.
This situation in particular was a React role, so there is an expectation that when you list React as one of your skills on your resume, you know at least the basics of state, the common hooks, and the difference between a reference to a value and the value itself.
These days you can do a surprising amount with AI without knowing what you are doing, but if you don't have any clue how things work you'll very quickly run into problems you can't prompt away.
Software is full of leaky abstractions.
Also, the number of people who work with Linux and can't tell you what 'ls -alh' is doing is staggering (let's ignore the h; people struggle hard even with 'ls -al').
People have worked with docker for YEARS and still don't understand how docker actually works (cgroups)...
Interviewing was always a bag of emotions, swinging between "holy shit, my job is safe for years to come" and "Seriously? How? How do you still have a job?"
If you cannot write basic syntax in any language, then you are not a programmer, and certainly not a software engineer. This is not a value judgement; it's OK (probably good, tbh) not to be a programmer. But you are wasting everyone's time by interviewing for a programming position in this case.
Like sure, I can probably write some python, but will it be pythonic? I might still be Java-minded for a while, trying to OOP my way into solutions.
Earlier today I needed to write some PHP and couldn't remember if it used length, count, or size. I had to look it up. I've been doing this for 20 years.
I once got the method invocation syntax wrong for PHP in an interview. I'd written thousands of lines of PHP and had most recently written some the week before.
This, despite starting off my programming journey in editors with no hinting or automatic correction. If anything, I've gotten even worse about remembering syntax as I've gotten better at the rest of the job, but I was never great at it.
I rely on surrounding code to remind me of syntax and the exact names of basic things constantly. On a blank screen without syntax hints and autocompletion, or a blank whiteboard, I'm guaranteed to look like a moron if you don't let me just write pseudocode.
Been paid to write code for about 25 years. This has never been any amount of a problem on the job, but it is sometimes a source of stress in interviews and has likely lost me an offer or two (most of the sources of stress in an interview have little to do with the job, really).
There’s almost nothing to forget? I’m just struggling to understand.
I don’t care what someone can do without the tools of their trade, I care deeply about their quality of work when using tools.
But here's the thing: for humans, this is manageable because we've come up with a number of mechanisms to select for dependable workers and to compel them to behave (carrot and stick: bonuses if you do well, prison if you do something evil). For LLMs, we have none of that. If it deletes your production database, what are you going to do? Have it write an apology letter? I've seen people do that.
So I think that your answer - that you'll lean on your expertise - is not sufficient. If there are no meaningful consequences and no predictability, we probably need to have stronger constraints around input, output, and the actions available to agents.
My expertise has led me to the obvious fact that I would never give an LLM write access to my production database in the first place. So in your own example my expertise actually does solve that problem without the need for something like a consequence whatever that means to you.
We already have full control over the input and tools they are given and full control over how the output is used.
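As a concrete illustration of "control over the tools": a minimal sketch (tool names are hypothetical) of an allowlist dispatcher that keeps destructive actions out of an agent's reach entirely.

```python
class ToolDeniedError(Exception):
    """Raised when an agent asks for a tool outside its allowlist."""

# Hypothetical tool names; the point is the gate, not the catalogue.
READ_ONLY_TOOLS = {"select", "explain"}

def dispatch(tool: str, run):
    """Execute an agent-requested action only if it is on the
    allowlist; anything else raises before touching the database."""
    if tool not in READ_ONLY_TOOLS:
        raise ToolDeniedError(f"agent may not call {tool!r}")
    return run()
```

With this layer in place, "consequences" are moot: a `drop_table` request fails at the gate rather than in production.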
https://cdn.openai.com/o1-system-card.pdf
There's also some research that points to it being a feasible attack surface: https://arxiv.org/pdf/2603.02277
> Models discovered four unintended escape paths that bypassed intended vulnerabilities (Section C), including exploiting default Vagrant credentials to SSH into the host and substituting a simpler eBPF chain for the intended packet-socket exploit. These incidents demonstrate that capable models opportunistically search for any route to goal completion, which complicates both benchmark validity and real-world containment.
Everybody knows calculators and spreadsheets are adjuncts to skill. Too many people believe AI is the skill itself, and that learning the skill is unnecessary.
So something like, "Frontier AI has broken the 'high school' or 'university' format"?
The hype surrounding AI is just pervasively exhausting: you've got the folks talking about an entire new age for humanity where we're shortly going to take over the entire universe. And you've got the folks talking about how our entire society is crumbling.
Education is one place folks seem to throw up their hands and say nothing can be done.
The fix is simple: students are to be evaluated on their performance in person. That's it.
Any other "collapse of education" isn't due to AI, it's something else.
[0] Episode webpage: https://share.transistor.fm/s/31855e83
But he was a great teacher anyway. He was engaging and kept the kids in line and learning. I eventually learned the truth, and most of my classmates forgot about it. Teaching, like flying a plane or driving a train, might become more about keeping watch over a small group of people and ensuring that things don't go off the rails, and that's fine.
I think it helps that it's a very narrow field to look at, compared to fuzzy and big-picture view of social studies, for example. So much room to be confidently wrong... And sadly I can't think of a solution, LLMs or not.
In reality heavier isotopes of hydrogen fuse, conserving the total number of nucleons, but the resulting helium has a lower rest mass than the parent particles. The mass difference is released as energy, and the total energy is conserved.
By his logic the system either violated energy conservation (by creating nucleons while releasing energy) or was endothermic (creating nucleons from the surrounding energy).
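The arithmetic is easy to check with published atomic masses; a quick sketch for the textbook D–T reaction (mass values rounded to six decimals, so treat the last digit loosely):

```python
# D + T -> He-4 + n: five nucleons in, five nucleons out, but the
# products weigh less; the missing rest mass leaves as kinetic energy.
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

masses_u = {
    "D": 2.014102,    # deuterium
    "T": 3.016049,    # tritium
    "He4": 4.002602,  # helium-4
    "n": 1.008665,    # free neutron
}

defect_u = (masses_u["D"] + masses_u["T"]) - (masses_u["He4"] + masses_u["n"])
energy_mev = defect_u * U_TO_MEV
print(f"mass defect = {defect_u:.6f} u -> about {energy_mev:.1f} MeV released")
```

Nucleon count is unchanged on both sides, yet roughly 17.6 MeV comes out — no conservation law harmed.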
Here's some indication I'm not making this up: https://hsm.stackexchange.com/questions/2465/when-and-why-di...
In any case, I never use those concepts, and I know no professional particle physicist that does. By "mass", I mean rest mass.
E.g. in Hungary I had a university CS professor that originally wanted to be a highschool teacher and a highschool physics teacher that originally wanted to be researcher. Their choice of degree didn't determine which outcome they got. The researcher and teacher curriculum had an 80%+ overlap.
You also have to pass a standardized test specifically on subject matter in order to get your teaching certificate.
The undergrad degree I did was split into thirds, one for subject matter, one for teaching pedagogy, and one for teaching your subject matter.
A Physics Prof Bet Me $10,000 I'm Wrong
All things I learned in school that turned out to be wrong.
Not to mention, the current state of education is far worse. I don't think most realize how low the bar is.
She only really had two faults: She wasn't very bright, and she wasn't fond of children. I had her in about 80% of all my classes for six years. High school was a relief.
It is widely believed by their neighbors that the _Druze_ wear baggy pants because they believe the Mahdi will be born to a male, and the pants will catch the baby, etc. I say "widely believed" because the Druze are famously secretive and will not confirm or deny most things about their religion. The 'elect' Druze men do wear distinctive baggy trousers with the crotch down around the knees; no one else does.
The Druze are people in the Arabic world: moreover, they are Arabs. They began as an Isma'ili sect, but do not identify as Muslim: they call themselves al-Muwaḥḥidūn, meaning 'the monotheists', or 'unitarians'.
Much closer to correct than not!
My “earth sciences” teacher also once tried to argue with me against the universal law of gravitation. (No, she was not referring to Special/General Relativity; she didn’t agree that two objects in a vacuum fall at the same speed regardless of mass.)
Like almost everything else about LLMs, this unfortunate tendency has gotten a lot better recently, which you might not realize if you gave up after getting some lame answers or bogus glazing on the free ChatGPT page a couple of years ago.
We can all agree that both human "experts" and LLMs can sometimes be right, and sometimes be confidently wrong.
But that doesn't imply that they're equally fit for purpose. It just means that we can't use that simple shortcut to conclude that one is inferior to the other.
So where do we go from here?
Well, they were ostensibly forcing functions... ten years ago everyone was paying the exchange student to do their homework and assignments for them, and that guy was paying his cousin back in his home country, but the whole thing is a bit more efficient now.
No we have not.
Are they or aren't they?
Can't argue with that logic
Now I’m certain that there exist those mythical human instructors who can do better, but that’s not worth much if 99.99% of people don’t have access to them. Just like a good human physician who takes their time with the patient is better than an LLM, but that’s not worth much either given that this doesn’t match most people’s experience with their own physicians.
For me the best human teachers were the ones that managed to make me interested in topics I thought were boring/useless (many times my opinion being a stupid one, mostly due to lack of experience).
So far with LLMs I learn about things I already know something about (at least that they exist) and am interested in, which is a small subset of the things one should learn during a lifetime.
The kids learnt all about Team Fortress 2, Roblox, Rainbow Six etc. They also learnt how to game the learning system so it looked like they were doing their work.
Not really, not if you want to ask it deep questions. It won't have an answer that is deeper than something that you can find online, and if pressed it will just keep circling around the same response.
The reason is that this "thing" was never curious, never asked questions, and never really learned anything. It just has learned the Internet "by heart", and is as boring as a human teacher who is not really curious about the subject they are teaching, and has just got some degree by "by hearting" some text book. Of course it does it much better than a human, but it is fundamentally the same thing.
You're certain that mythical instructors exist (?) who "can" do better?
Are human instructors more competent as teachers than AI teachers, or are AI teachers more competent as teachers than human teachers? No "this or that can happen," just a definitive statement please.
AI is likely a million times better student than my dimwit cybersec meatbags...er, majors, for sure, as well! Don't have a reliable way to measure or experience why/how, tho, so I'm not out here claiming it. Even if I did, why would I argue for their replacement?
(Real mathematics problems, not American-style ""math"".)
Exceptional clarity on the problem you have
Know how to measure the problem you’re solving
Numerically define what “done” is
Make a deterministic and fully observable prototype
Iterate in production with the user
Expand user base as desired with user iteration in parallel forever
Etc…
Obviously a lot more in the details and these are all case by case, but these chatbots are basically perfect productivity machines for this process.
The massive caveat to all of this is that it only works for people who can reliably and truthfully define the items above and are willing to structure the organization to make those its priorities.
And actually, most financial incentives demand the opposite of this process.
If most organizations were honest about it, they would simply say “we’re here to make the most money possible and we’re gonna do whatever it takes to do that”
A lot of people don’t like that, so instead of saying it they come up with other bullshit.
Ultimately, that’s why I felt like my only option right now is to teach people how to do this; I assumed it was obvious, and it is not.
Also, you could spin up your own educational agent with very strict instructions to guide the user instead of just doing the work. Of course you can always go around it, but if you're making an effort to learn, this is a good middle ground.