Don't read the statement as a human dunk on LLMs, or even as philosophy.
The gap is important because of its special and devastating economic consequences. When the gap becomes truly zero, all human knowledge work is replaceable. From there, with robots, it's a short step to all work being replaceable.
What's worse, the condition is sufficient but not even necessary. Just as planes can fly without flapping, the economy can be destroyed without full AGI.
There’s no “gap that becomes truly zero” at which point special consequences happen. By the time we achieve AGI, the lesser forms of AI will likely have replaced a lot of human knowledge labor through the exact “brute-force” methods Chollet is trying to factor out (which is why many people are saying that doing so is unproductive).
AGI is like an event horizon: it does mean something, it is a point in space, but you don't notice yourself passing through it; the curvature increases smoothly across it.
I don’t know why statements like this are just taken as gospel fact. There are plenty of economic activities which do not disappear even if an AI can do them.
Here’s one: I support certain artists because I care about their particular life story and have seen them perform live. I don’t care if an AI can replicate their music because the AI didn’t experience life.
Here’s another: positions that have deep experience in certain industries and have valuable networks; or that derive power by being in certain positions. You could build a model that incorporates every single thing the US president, any president, ever said, and it still wouldn’t get you in the position of being president. Many roles are contextual, not knowledge-based.
The idea that AGI replaces all work only makes sense if you’re talking about a world with completely open, free information access. I don’t just mean in the obvious sense; I mean also “inside your head.” AI can only use data it has access to, and it’s never going to have access to everyone’s individual brain everywhere at all times.
So here’s a better prediction: markets will gradually shift to adjust to this, information will become more secretive, and attention-based entertainment economics will become a larger and larger share of the overall economy.
You can't get deep experience in any industry if there's a machine that can do the entry-level work at a fraction of your cost. And keep in mind that, by definition, this machine can learn to do everything you can, so it's in a much better position than you to get that deep experience you speak of.
If we get what are essentially mass-producible brains, and information gets more secretive as you say, then with, say, 1000 machines for every person in the economy, they're in a better position than you to produce said valuable secret information.
As I said, not all types of jobs are set up this way. Pure knowledge ones, sure. But ones dependent on context are not going to have this elimination of entry-level work in the first place.
> ...and we get 1000 robots for every person in the economy, they're in a better position than you to produce said valuable secret information.
Again, no, they aren't, because certain types of information are not merely a question of computational power.
There is this constant assumption that all knowledge is just a math problem to solve, ergo AI will eventually solve it. That isn't how information actually functions in the real world.
Yeah, but obviously no human can clear that bar either.
> Here’s another: positions that have deep experience in certain industries and have valuable networks
What stops an AGI from gaining "deep experience in an industry"? Or forming networks? There's plenty of popular bot accounts across social media already.
I'm glad you could think of a couple examples where AI might not replace humans. It's almost an entirely useless point to make.
The cat is already out of the bag. The information is out there and the models are trained. Even where we stand today will bring massive disruption in time.
The economy is being propped up by the wealthy few that have money to spend and now their legs are being cut out from under them with this technology. We're in for a reckoning.
> even Alan M. Turing allowed himself to be drawn into the discussion of the question whether computers can think. The question is just as relevant and just as meaningful as the question whether submarines can swim.
(I am of the opinion that the thinking question is in fact a bit more relevant than the swimming one, but I understand where these are coming from.)
While I share Dijkstra's sentiment that "thinking machines" is largely a marketing term we've been chasing for decades, and this new cycle is no different, it's still worth discussing and... thinking about. The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming. It's frankly disappointing that such a prominent computer scientist and philosopher would be so dismissive and uninterested in this fundamental CS topic.
Also, it's worth contextualizing that quote. It's from a panel discussion in 1983, which was between the two major AI "winters", and during the Expert Systems hype cycle. Dijkstra was clearly frustrated by the false advertising, to which I can certainly relate today, and yet he couldn't have predicted that a few decades later we would have computers that mimic human thinking much more closely and are thus far more capable than Expert Systems ever were. There are still numerous problems to resolve, w.r.t. reliability, brittleness, explainability, etc., but the capability itself has vastly improved. So while we can still criticize modern "AI" companies for false advertising and anthropomorphizing their products just like in the 1980s hype cycle, the technology has clearly improved, which arguably wouldn't have happened if we didn't consider the question of whether machines can "think".
It seems to me like too many people are missing this point.
Modern philosophy tells us we can't even be certain whether other humans are conscious or not. The 'hard problem', p-zombies, etcetera.
The fact that current LLMs can convince many actual humans that they are conscious (whether they are or not is irrelevant, I lean toward not but whatever) has implications which aren't being discussed enough. If you teach a kid that they can treat this intelligent-seeming 'bot' like an object with no mind, is it not plausible that they might then go on to feel they can treat other kids who are obviously far less intelligent like objects as well? Seriously, we need to be talking more about this.
One of the most important questions about AI agents in my opinion should be, "can they suffer?", and if you can't answer that with a definitive "absolutely not" then we are suddenly in uncharted waters, ethically speaking. They can certainly act like they're suffering (edit: which, when witnessed by a credulous human audience, could cause them to suffer!). I think we should be treading much more carefully than many of us are.
The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term. They're statistical models that can generate useful patterns when fed with vast amounts of high quality data. That's it. The fact we interpret their output as though it is coming from a sentient being is simply due to our inability to comprehend patterns in the data at such scales. It's the best mimicry of intelligence we've ever invented, for better or worse, but it's far from how intelligence actually works, even if we struggle to define it accurately. Which doesn't mean that this technology can't be useful—far from it—but it's ludicrous to ascribe any human-like qualities to it.
So I 100% side with Dijkstra on that point.
What I'm criticizing is his apparent dismissal and refusal to even consider it a worthy philosophical exercise. This is why I think that the comparison to submarines and swimming is reductionist, and ultimately not productive. I would argue that we do need to keep thinking about whether machines can think, as that drives progress, and is a fundamentally interesting topic. It would be great if the progress wouldn't be fueled by greed, self-interest, and manipulation, or at the very least balanced by rationality, healthy skepticism, and safety measures, but I suppose this is just inescapable human nature.
There are very valid reasons to measure that. You wouldn't ask a plane to drive you to your neighbor's or to buy you groceries at the supermarket. It's not as generally mobile as you are, but it increases your mobility.
Are household appliances trying to replace humans?
But the ARC-AGI competitions are cool. Just to see where we stand, and to have some months where the benchmarks aren't fully saturated. And, as someone else noted elsewhere in the thread, some of these games are not exactly trivial, at least until you "get" the meta they're looking for.
It also doesn't actually matter much, as ultimately the utility of its outputs is what determines its worth.
There is the moral question of consciousness though, a test which it seems humans will not be able to solve in the near future, which morally leads to a default position that we should assume the AI is conscious until we can prove it's not. But man, people really, really hate that conclusion.
This may seem like a joke, but your answer will likely be in the vein of "conscious things are obviously conscious", which gets us nowhere.
I mean, self-motivation and a desire to not be turned off can be programmed into even decades-old AIs.
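To be concrete about what "programmed" means here, a minimal toy sketch (purely hypothetical, not modeled on any particular historical system):

```python
# Toy agent objective with a hand-written "survival" term.
# Hypothetical illustration only, not any real system.

def score(state):
    reward = state["tasks_done"]
    if state["powered_off"]:
        reward -= 1000  # built-in penalty makes "avoid shutdown" dominate every other goal
    return reward

# An agent that maximizes score() will "prefer" whatever keeps it running,
# with nothing resembling consciousness behind that preference.
print(score({"tasks_done": 3, "powered_off": False}))  # 3
print(score({"tasks_done": 3, "powered_off": True}))   # -997
```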
There is also apparently no real memory; if I tell it to stop doing something today, it’ll agree, then go back to doing it again tomorrow, with no memory of our conversation. This never changes, no matter how many times I ask.
Again we could debate consciousness forever, but in a simple sense, are there any other conscious beings without this sense of continuity? Not that I can think of. And so if everything we call “conscious” is different from an AI, then are we justified in extending it to AI?
Ruling out consciousness or qualia emerging from the inference in an LLM is just as invalid a take as being 100% certain of its consciousness. We don't know what consciousness really is, so the only thing we can say with certainty is that we do not know.
I think consciousness is not an abstract property in the world, therefore it’s tied to certain types of entities. Therefore an AI is not going to be “conscious” in the way an animal is, and never will be. This is a failing of specific language. Maybe the machines can be aware, input data, mimic what we see as consciousness, etc. but the metaphor of consciousness really doesn’t fit. A jet can move faster than an eagle but it’s not moving in the same way. We simply lack a sophisticated enough language to easily differentiate the two.
> I think consciousness is not an abstract property in the world, therefore it’s tied to certain types of entities. Therefore an AI is not going to be “conscious”
This pretty much sums up most arguments for why LLMs aren't conscious: "I think" followed by assertions. The only real argument is: science doesn't quantify consciousness, we cannot quantify consciousness, so let's not assign so much certainty to the claim that models clearly exhibiting intelligence are not conscious in some way, to some degree.
I am making a linguistic argument. AI may get as sophisticated as "traditional" consciousness. But this is only "real" consciousness if you are a functionalist and think the output is all that matters.
I disagree and think that "flying" is just a weak generic word that describes both planes and birds, and not some kind of ultimate Platonic Ideal in the world.
Ditto for AI consciousness: it may develop to be as complex as traditional animal consciousness, but I'm not a functionalist, and think it's merely a lack of our sophisticated language that makes us think it's the same thing. It's not. Planes PlaneFly through the air, while birds BirdFly.
All I am saying is that we should stop being so certain they are not conscious, since we lack a solid, quantifiable model of consciousness.
[0] I lack a conscious experience and qualia
I’d be curious about how you’re showing they lack either of those
Unprompted, they're not unlike a human sleeping or in a coma. Those states don't preclude consciousness in other states.
Where we are today is ASI (artificial semi-intelligence). Maybe in 20 years artificial super intelligence can be achieved, but certainly not AGI.
>> "It's silly to say airplanes don't fly because they don't flap their wings the way birds do."
> Just because a human can do X and the LLM can't doesn't negate the LLM's "intelligence", any more than an LLM doing a task better than a human negates the human's intelligence.
You misinterpret what is meant by "a gap between AI and human learning". The point isn't that they aren't similar enough or that they aren't as intelligent. The statement is specifically about "learning". Humans learn continuously and can devise new strategies for problem solving. Current AI, especially LLMs, are just snapshots of a single strategy. LLMs do not learn at all -- they specifically have "knowledge cutoffs". Even with all the tools available to them in a harness, we still have to wait for new frontier models or new fine-tuning for them to solve significantly new problems. A human does this continually -- learns, regardless of intelligence.
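To make the distinction concrete, here's a toy sketch in plain Python (hypothetical classes, not any real LLM API) contrasting a frozen snapshot with something that updates from every interaction:

```python
# Toy contrast: frozen snapshot vs. continual learner.
# Hypothetical classes for illustration, not a real LLM API.

class SnapshotModel:
    """Weights are fixed at training time; nothing said at inference changes them."""
    def __init__(self, weights):
        self.weights = weights  # frozen at the "knowledge cutoff"

    def answer(self, prompt):
        return f"reply derived from pre-cutoff weights ({self.weights})"
    # No update method: correcting it today has no effect on tomorrow's answers.


class ContinualLearner:
    """Caricature of human-style learning: every correction updates internal state."""
    def __init__(self):
        self.knowledge = {}

    def answer(self, prompt):
        return self.knowledge.get(prompt, "I don't know yet")

    def learn(self, prompt, correction):
        self.knowledge[prompt] = correction  # persists into the next conversation


llm = SnapshotModel(weights="2023-snapshot")
human = ContinualLearner()

human.learn("capital of X", "it moved last year")
print(human.answer("capital of X"))  # incorporates the new fact
print(llm.answer("capital of X"))    # still answers from the frozen snapshot
```

Fine-tuning or a longer context window narrows this in practice, but the weights themselves stay put between releases, which is the gap the original statement is pointing at.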
Despite so many claims, an LLM has never done any interesting task better than a human. I could claim that cat is better than humans at writing text, but the non-specificity of my language here makes that statement simultaneously meaningless and incorrect. Another meaningless and incorrect statement (but less incorrect than most pro-AI statements): "git clone" is better at producing correct and feature-rich C compiler code than $20,000 worth of Claude tokens.