Once they have to solve a novel problem that was not already solved for all intents and purposes, Alice will be able to apply her skillset to it, whereas Bob will just run into a wall when the LLM starts producing garbage.
It seems to me that "high-skill human" > "LLM" > "low-skill human". The trap is that people with low skill levels will see a fast improvement in their output, at the hidden cost of the slow build-up of skills that has a much higher ceiling.
Why is it the LLM's problem if your test is unrelated to the performance you actually want?
I didn't get to be a senior engineer by immediately being able to solve novel problems. I can now solve novel problems because I spent untold hours solving trivial ones.
There is a vast difference between “never learned the skill” and “forgot the skill from lack of use.” I learned how to do long division in school, decades ago. I sat down and tried it last year, and found myself struggling, because I hadn’t needed to do it in such a long time.
This sentence contains the entire point, and the easiest way to get there, as with many, many things, is to ask “why?”
No one becomes a good mathematician without first learning to write simple proofs, and then later on more complex proofs. It's the very stuff of the field itself.
Edit: let's look at a paper like Some Linear Transformations on Symmetric Functions Arising From a Formula of Thiel and Williams https://ecajournal.haifa.ac.il/Volume2023/ECA2023_S2A24.pdf and try to guess how many of those trivial things were completely unneeded to write a paper like this.
If you send a kid to an elementary school, and they come back not having learned anything, do you blame the concept of elementary schools, or do you blame that particular school - perhaps a particular teacher _within_ that school?
While we have a lot of abstractions that solve some subproblems, we still need to connect those solutions to solve the main problem. And there’s a point where this combination becomes its own technical challenge. And the skill needed for it is the same one used to solve simpler problems with common algorithms.
At a certain point, higher-level languages stop working. Performance, low-level control of clocks and interrupts, etc.
I’m old enough that dropping into assembly to be clever with the 8259 interrupt controller really was required. Programmers today? The vast majority don’t really understand how any of that works.
And honestly I still believe that hardware-up understanding is valuable. But is it necessary? Is it the most important thing for most programmers today?
When I step back this just reads like the same old “kids these days have it so easy, I had to walk to school uphill through the snow” thing.
Writing assembly is probably completely irrelevant. You should still know how programming language concepts map to basic operations, though. Simple things like struct field offsets, calling conventions, function calls, dynamic linking, etc.
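As a minimal sketch of what that mapping looks like, assuming a typical C compiler and ABI (the struct and names here are made up purely for illustration): a field access is just "base address plus constant offset", which is what the generated code actually does.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical struct, purely for illustration. */
    struct person {
        char name[24];
        int  age;
    };

    int main(void) {
        struct person p = { "Ada", 36 };

        /* Conceptually, p.age is read from (address of p) + (constant offset of the age field),
           which the compiler emits as a single base+offset load. */
        int *via_offset = (int *)((char *)&p + offsetof(struct person, age));

        printf("offset of age: %zu bytes\n", offsetof(struct person, age));
        printf("p.age = %d, via offset = %d\n", p.age, *via_offset);
        return 0;
    }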
ffmpeg disagrees.
More broadly, though, it’s a logical step if you want to go from “here’s how PN junctions work” to “let’s run code on a microprocessor.” There was a game posted here yesterday about building a GPU, in the same vein as nand2tetris, Turing Complete, etc. I find those quite fun, and if you wanted to do something like Ben Eater’s 8-bit computer, it would probably make sense to continue with assembly before going into C, and then a higher-level language.
Because we largely want people who have committed to tens of thousands of dollars of debt to feel warm and fuzzy enough to promote the experience so that the business model doesn’t collapse.
It’s difficult to imagine anyone truly regretting a course in astrophysics, or any of the liberal arts and sciences, if they have a modicum of passion for it, but it’s very believable that a majority of them won’t go on to have a career directly in it, whatever it is.
They’re probably more likely to gain employment based on their data science skills, or whatever core competencies they honed, or just the fact that they’ve proven they can learn highly abstract concepts, or whatever their field generalises to.
Most of the jobs are not tied to a highly specific academic outcome.
We're minting an entire generation of people completely dependent on VC funding. What happens if/when the AI companies fail to find a path to profitability and the VC funding dries up?
As of March 2026, OpenAI generates annual revenue exceeding $12 billion. However, the costs of running ChatGPT are around $17 billion a year.
Source: https://searchlab.nl/en/statistics/chatgpt-statistics-2026
Once I realized that this white on black contrast was hurting my eyes, I decided to stop as I didn't want to see stripes for too long when looking away.
Some activities have outcomes that aren't strictly in the results.
This argument boils down to "don't use tools because you'll forget how to do things the hard way", which nobody would buy for any other tool, but with LLMs we seem to have forgotten that line of reasoning entirely.
But to even *know* what is more useful, it is crucial to have walked the walk. Otherwise we will all end up with a bunch of people trying to reinvent the wheel, over and over again, like JavaScript "developers" who keep reinventing frameworks every six months.
> which nobody would buy for any other tool
I don't know about you, but I wasn't allowed to use calculators in my calculus classes precisely to learn the concepts properly. "Calculators are for those who know how to do it by hand" was something I heard a lot from my professors.
I feel like people tend to forget that “using a search engine” is among the many things LLMs can do these days. In fact, they use them better than the majority of people do!
The conversation people think they’re having here and the conversation that actually needs to be had are two entirely different conversations.
> I don’t know about you, but I wasn’t allowed to use calculators in my calculus classes precisely to learn the concepts properly. “Calculators are for those who know how to do it by hand” was something I heard a lot from my professors.
Suppose I never learned how to take the derivative of a function. I don’t even know what a function is. I have no idea how to make one, write one, or what it even does. So I start gathering knowledge:
- A function is some math that allows you to draw a picture of how a number develops if you do that math on it.
- A derivative is a function that you feed a function and a number into, and then it tells you something about what that function is doing to that number at that number.
- “What it’s doing” specifically means not the result of the math for that particular number, but the results for the immediate other numbers behind and in front of it.
- This can tell us about how the function works.
Now I go tell ClaudeGPTimini “hey, can you derive f(x) at 5 so that we can figure out where it came from and where it goes from there?”, and it gives me a result.
I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?
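(For anyone who does want the formal version the bullets above are gesturing at, it is the usual limit definition; the f(x) = x^2 below is just a made-up example, not something from this thread:)

    f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}

    f(x) = x^2 \;\Rightarrow\; f'(5) = \lim_{h \to 0} \frac{(5+h)^2 - 25}{h} = \lim_{h \to 0} (10 + h) = 10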
What I’ll give you is this: if I knew exactly how the math worked, then it would be far easier for me to instantly spot any errors ClaudeGPTimini produced. And the understanding of functions and derivatives outlined above may be simplistic in some places (intentionally so), in ways that may break it in certain edge cases. But that only matters if I take its output at face value.

If I get a general understanding of something and run a test with it, I’ll generally have some sort of hypothesis of what kind of result I’m expecting, given that my understanding is correct. If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification. Science is what happens when you expect something, test something, and get a result - expected OR unexpected - and then systematically rule out that anything other than the thing you’re testing has had an effect on that result.
This is not a problem with LLMs. It’s a thing we should’ve started teaching in schools decades ago: how to understand that there are things you don’t understand. In my view, the vast majority of the problems plaguing us as a species lie in this fundamental thing that far too many people are simply never taught.
This is false. There absolutely are people that fall back on older tools when fancy tools fail. You will find such people in the military, in emergency services, in agriculture, generally in areas where getting the job done matters.
Perhaps you're unfamiliar.
The other week I finished putting holes in fence posts with a bit and brace, as there was no fuel for the generator to run corded electric drills and the rechargeable batteries were dead.
Ukrainians, and others, need to fall back on strategies that work without GPS, and have done so for a few years now.
etc.
It depends on the task, though. If you are in a scenario similar to the one with your fence posts but want to edit computer programs, you can't. (Not even with xkcd's magnetic needle and a steady hand.) ;-)
As technology marches on it seems inevitable that we will get increasingly large and frequent knowledge gaps. Otherwise progress would stop - we need the giant shoulders to stand on.
How many people in the world can recreate an ASML lithography machine vs how many people are surviving by doing something that requires that machine to exist?
LLMs, the way they typically get used, are solely about saving time by handing over nearly the entire process. In that sense acuity can't remain intact, let alone improve over time.
Do you agree the different treatment is justified? (Many do not.) Or are you asking: so what if acuity is diminished, so long as an LLM does the job equally well?
> Mathematica has been able to do many integrals for decades and yet we still make students learn all the tricks to integrate by hand
But maybe integrating by hand is still as big as ever in other parts of academia. Or were you thinking about high school? I'm fairly sure that symbolic solving of integrals is treated as less important in education these days than it was before digital computers, but I could be wrong. Mathematica's symbolic solver sure is very useful, but numeric solutions are what really make the art of finding integrals much less relevant.
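To make that concrete, here is a toy sketch (my own example, not from the thread) of why numeric solutions sideline the by-hand tricks: a plain trapezoidal rule gets you a definite integral without ever finding an antiderivative.

    #include <math.h>
    #include <stdio.h>

    /* Approximate the definite integral of f over [a, b] with n trapezoids. */
    static double trapezoid(double (*f)(double), double a, double b, int n) {
        double h = (b - a) / n;
        double sum = 0.5 * (f(a) + f(b));
        for (int i = 1; i < n; i++)
            sum += f(a + i * h);
        return sum * h;
    }

    int main(void) {
        const double pi = 3.14159265358979323846;
        /* The integral of sin(x) over [0, pi] is exactly 2; no clever substitution needed. */
        printf("%.6f\n", trapezoid(sin, 0.0, pi, 1000));
        return 0;
    }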
The industrialization of academia hasn't even produced more results; it has produced more meaningless papers. Just like LLMs produce the 10,000th note-taking app, which for the LLM-psychosis afflicted is apparently enough.
I will make an explicit, plausible counterpoint: academia wants to produce understanding. This is, more or less by definition, not possible with an AI directly (obviously AIs can be useful in the process).
Take GR as an example. The vast majority of the dynamical character of the theory is inaccessible to human beings. We study it because we want to understand it, and only secondarily because we have a concrete "result" we are trying to "achieve."
A person who cares only about results and not about understanding is barely a person, in my opinion.