This won’t affect everyone equally. Some Bobs will nerd out and spend their free time learning, but other Bobs won’t.
And strictly speaking, it’s not university we’re talking about. The way we understand work is going to fundamentally change. And we’re not going to value the people who use LLMs to get 1x done.
But yes, university was always about how much work you put into it, and LLMs are going to make that 10x more obvious.
The point is that the Bob and Alice comparison is already a straw man, but I do firmly believe it’s the people with the best mental models, not the people who “get AI,” who will win in the new world. If you’re curious and good at developing mental models, you can learn “AI” in a week. But if you’re curious and good at developing mental models, you’ve probably already lapped both Bob and Alice.
Your perspective is too narrow. In the real world, Bob is supposed to produce outcomes that work. If he moves on into the industry and keeps producing hallucinated, skewed, manipulated nonsense, he will fall flat instantly. If he manages to survive unnoticed, he will become CEO. The latter is rather unlikely.
But do you actually understand it? The article argues exactly against this point: that you cannot understand the problems in the same way when letting agents do the initial work as you would when doing it yourself.
from the article: "you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we've collectively decided that maybe this time it's different. That maybe nodding at Claude's output is a substitute for doing the calculation yourself. It isn't. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient."