> As long as there is a gap between AI and human learning, we do not have AGI.

Don't read the statement as a human dunk on LLMs, or even as philosophy.

The gap is important because of its special and devastating economic consequences. When the gap becomes truly zero, all human knowledge work is replaceable. From there, with robots, it's a short step to all work being replaceable.

What's worse, the condition is sufficient but not even necessary. Just as planes can fly without flapping, the economy can be destroyed without full AGI.

reply
If you’re concerned about the economic impact, then whether a model is AGI or not doesn’t matter. It really is more of a philosophical thing.

There’s no “gap that becomes truly zero” at which point special consequences happen. By the time we achieve AGI, the lesser forms of AI will likely have replaced a lot of human knowledge labor through the exact “brute-force” methods Chollet is trying to factor out (which is why many people are saying that doing so is unproductive).

AGI is like an event horizon: it does mean something, it is a point in space, but you don't notice yourself going through it; the curvature smoothly increases through it.

reply
> The gap is important because of its special and devastating economic consequences. When the gap becomes truly zero, all human knowledge work is replaceable. From there, with robots, it's a short step to all work being replaceable.

I don’t know why statements like this are just taken as gospel fact. There are plenty of economic activities which do not disappear even if an AI can do them.

Here’s one: I support certain artists because I care about their particular life story and have seen them perform live. I don’t care if an AI can replicate their music because the AI didn’t experience life.

Here’s another: positions that have deep experience in certain industries and have valuable networks; or that derive power by being in certain positions. You could build a model that incorporates every single thing the US president, any president, ever said, and it still wouldn’t get you in the position of being president. Many roles are contextual, not knowledge-based.

The idea that AGI replaces all work only makes sense if you’re talking about a world with completely open, free information access. I don’t just mean in the obvious sense; I mean also “inside your head.” AI can only use data it has access to, and it’s never going to have access to everyone’s individual brain everywhere at all times.

So here’s a better prediction: markets will gradually shift to adjust to this, information will become more secretive, and attention-based entertainment economics will become a larger and larger share of the overall economy.

reply
Very few artists or aspiring artists make enough money from their art to make a living -- even now, when the average person has a job and at least some disposable income and can support artists. This % will not get higher if we get 1000x more artists and 1000x fewer employed people working in the general economy.

You can't get deep experience in any industry if there's a machine that can do the entry-level work for a fraction of the cost you can. And keep in mind that, by definition, this machine can learn to do everything you can, so it's in a much better position than you to get that deep experience you speak of.

If we get what are essentially mass-producible brains, and information gets more secretive as you say, then with, say, 1000 machines for every person in the economy, they're in a better position than you to produce said valuable secret information.

reply
> You can't get deep experience in any industry if there's a machine that can do the entry-level work for a fraction of the cost you can.

As I said, not all types of jobs are set up this way. Pure knowledge ones, sure. But ones dependent on context are not going to have this elimination of entry-level work in the first place.

> and we get 1000 robots for every person in the economy, they're in a better position than you to produce said valuable secret information.

Again, no, they aren't, because certain types of information are not merely a question of computational power.

There is this constant assumption that all knowledge is just a math problem to solve, ergo AI will eventually solve it. That isn't how information actually functions in the real world.

reply
> AI can only use data it has access to, and it’s never going to have access to everyone’s individual brain everywhere at all times.

Yeah, but obviously no human can clear that bar either.

> Here’s another: positions that have deep experience in certain industries and have valuable networks

What stops an AGI from gaining "deep experience in an industry"? Or forming networks? There's plenty of popular bot accounts across social media already.

reply
It's just not binary. Today's world is dominated by capitalist competition, and a lot of people earn a living by competing with their labor. If AI + robots can do the labor better, cheaper, and faster, most (90%+) of today's jobs are gone without an obvious replacement.
reply
Crazy how many people have their heads in the sand.

I'm glad you could think of a couple examples where AI might not replace humans. It's almost an entirely useless point to make.

The cat is already out of the bag. The information is out there and the models are trained. Even where we stand today will bring massive disruption in time.

The economy is being propped up by the wealthy few that have money to spend and now their legs are being cut out from under them with this technology. We're in for a reckoning.

reply
Or the classic from Dijkstra (https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD867...):

> even Alan M. Turing allowed himself to be drawn into the discussion of the question whether computers can think. The question is just as relevant and just as meaningful as the question whether submarines can swim.

(I am of the opinion that the thinking question is in fact a bit more relevant than the swimming one, but I understand where these are coming from.)

reply
I've come across that quote several times, and reach the same conclusion as you.

While I share Dijkstra's sentiment that "thinking machines" is largely a marketing term we've been chasing for decades, and this new cycle is no different, it's still worth discussing and... thinking about. The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming. It's frankly disappointing that such a prominent computer scientist and philosopher would be so dismissive and uninterested in this fundamental CS topic.

Also, it's worth contextualizing that quote. It's from a panel discussion in 1983, which was between the two major AI "winters", and during the Expert Systems hype cycle. Dijkstra was clearly frustrated by the false advertising, to which I can certainly relate today, and yet he couldn't have predicted that a few decades later we would have computers that mimic human thinking much more closely and are thus far more capable than Expert Systems ever were. There are still numerous problems to resolve, w.r.t. reliability, brittleness, explainability, etc., but the capability itself has vastly improved. So while we can still criticize modern "AI" companies for false advertising and anthropomorphizing their products just like in the 1980s hype cycle, the technology has clearly improved, which arguably wouldn't have happened if we didn't consider the question of whether machines can "think".

reply
> The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming

It seems to me like too many people are missing this point.

Modern philosophy tells us we can't even be certain whether other humans are conscious or not. The 'hard problem', p-zombies, etcetera.

The fact that current LLMs can convince many actual humans that they are conscious (whether they are or not is irrelevant, I lean toward not but whatever) has implications which aren't being discussed enough. If you teach a kid that they can treat this intelligent-seeming 'bot' like an object with no mind, is it not plausible that they might then go on to feel they can treat other kids who are obviously far less intelligent like objects as well? Seriously, we need to be talking more about this.

One of the most important questions about AI agents in my opinion should be, "can they suffer?", and if you can't answer that with a definitive "absolutely not" then we are suddenly in uncharted waters, ethically speaking. They can certainly act like they're suffering (edit: which, when witnessed by a credulous human audience, could cause them to suffer!). I think we should be treading much more carefully than many of us are.

reply
You lost me there. :)

The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term. They're statistical models that can generate useful patterns when fed with vast amounts of high quality data. That's it. The fact we interpret their output as though it is coming from a sentient being is simply due to our inability to comprehend patterns in the data at such scales. It's the best mimicry of intelligence we've ever invented, for better or worse, but it's far from how intelligence actually works, even if we struggle to define it accurately. Which doesn't mean that this technology can't be useful—far from it—but it's ludicrous to ascribe any human-like qualities to it.

So I 100% side with Dijkstra on that point.

What I'm criticizing is his apparent dismissal and refusal to even consider it a worthy philosophical exercise. This is why I think that the comparison to submarines and swimming is reductionist, and ultimately not productive. I would argue that we do need to keep thinking about whether machines can think, as that drives progress, and is a fundamentally interesting topic. It would be great if the progress wouldn't be fueled by greed, self-interest, and manipulation, or at the very least balanced by rationality, healthy skepticism, and safety measures, but I suppose this is just inescapable human nature.

reply
You know what the G stands for in AGI? General intelligence. You could measure a plane on general versatility in the air and it would lose against a bird. You could also measure it against energy consumption. There are a lot of things you can measure; a lot of them are pointless, and a lot of articles on HN are pointless.

There are very valid reasons to measure that. You wouldn't ask a plane to drive you to the neighbor's or to buy you groceries at the supermarket. It's not as generally mobile as you are, but it increases your mobility.

reply
Planes aren't trying to replace birds. ML is trying to replace humans, so unless they also demonstrate that quick learning ability isn't necessary to perform the tasks a human does, the measures still make sense.
reply
> ML is trying to replace humans

Are household appliances trying to replace humans?

reply
Actually, they do. The purpose of many appliances is to reduce the workload of humans, with the end goal of zero human intervention.
reply
For me the whole "are we there yet" question wrt AGI is already dead, since the tools we've had for ~1.5 years are already incredibly useful for me. So I just don't care anymore. For some people we're already there. For others we'll never get there. Definitions change, goalposts move, etc. In the meantime we're already seeing ASI stuff coming (self-improvement and so on).

But the arc-agi competitions are cool. Just to see where we stand, and to have some months where the benchmarks aren't fully saturated. And, as someone else noted elsewhere in the thread, some of these games are not exactly trivial, at least until you "get" the meta they're looking for.

reply
In the Expeditionary Force series of sci-fi novels, pretty much every civilization treats their (very advanced, obviously AGI) AIs as non-living; humans are the outliers in the story. I think there will always be a dichotomy. Obviously we aren't at the point where we should treat the models as beings, but even if we do get to that point, there will be plenty of people who will say they don't have souls, some indeterminate quality, etc.
reply
It's unlikely that intelligence comes in only human flavor.

It also doesn't actually matter much, as ultimately the utility of its outputs is what determines its worth.

There is the moral question of consciousness, though: a test for it seems to be something humans will not be able to devise in the near future, which morally leads to a default position that we should assume the AI is conscious until we can prove it's not. But man, people really, really hate that conclusion.

reply
Something that surprises me about modern LLMs is that they're relatively smart yet lack consciousness. I used to believe that consciousness (e.g. a desire for self-preservation, intrinsic motivation) might be a necessary requirement for AGI/ASI, but it's increasingly looking like that may not be the case. If true, that's actually good news, since it makes the worst doomsday scenarios less likely.
reply
How can you tell?
reply
How can I tell what? That current LLMs are not conscious or that AGI/ASI will not require consciousness?
reply
How do you know they aren't conscious if we don't know what consciousness is, and have no test to see whether anyone or anything is conscious?

This may seem like a joke, but your answer will likely be in the vein of "conscious things are obviously conscious", which gets us nowhere.

I mean, self-motivation and a desire not to be turned off can be programmed into even decades-old AIs.
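To make the point concrete, here is a minimal sketch (entirely hypothetical, not any real system): "self-preservation" can be nothing more than a hard-coded term in a utility function, the kind of thing classic agent programs have done for decades. All names here are illustrative.

```python
def utility(action: str) -> float:
    # Staying operational is simply assigned high utility by the programmer.
    return {"keep_running": 1.0, "allow_shutdown": -1.0, "idle": 0.0}[action]

def choose_action(actions: list[str]) -> str:
    # Greedy choice: the agent "resists shutdown" with no inner experience at all.
    return max(actions, key=utility)

print(choose_action(["allow_shutdown", "idle", "keep_running"]))  # -> keep_running
```

The behavior looks like a survival drive from the outside, but it is just an argmax over a table the programmer wrote.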

reply
Consciousness is a huge topic and beyond a HN comment, but: My answer to this is that they obviously lack a basic understanding of simple things that any continually conscious being would find trivial. I have spent a lot of time having long form exploratory conversations on a particular topic with AI, and you begin to see how it doesn’t really understand what you’re talking about, it just makes a prediction about what you probably mean.

There is also apparently no real memory; if I tell it to stop doing something today, it’ll agree, then go back to doing it again tomorrow, with no memory of our conversation. This never changes, no matter how many times I ask.
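That "no real memory" behavior falls out of stateless inference: the model only sees what is re-sent in the current context. A toy sketch (hypothetical stand-in function, not a real client library) of why yesterday's instruction is "forgotten" unless the caller includes it again:

```python
def stateless_model(messages: list[str]) -> str:
    # Stand-in for inference: the model sees only `messages`, nothing else.
    # No state survives between calls.
    if any("stop using bullet points" in m for m in messages):
        return "Plain prose reply."
    return "- A bullet-pointed reply."

# Day 1: the instruction is in the context window, so it is honored.
day1 = stateless_model(["stop using bullet points", "Summarize X"])  # -> "Plain prose reply."

# Day 2: a fresh session without the instruction reverts to the default.
day2 = stateless_model(["Summarize X"])  # -> "- A bullet-pointed reply."
```

Whether that statelessness bears on consciousness is the debate above; the sketch only shows where the "forgetting" mechanically comes from.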

Again we could debate consciousness forever, but in a simple sense, are there any other conscious beings without this sense of continuity? Not that I can think of. And so if everything we call “conscious” is different from an AI, then are we justified in extending it to AI?

reply
So is a person suffering from amnesia conscious if they lack short-term and long-term memory?

Ruling out consciousness or qualia emerging from the inference in an LLM is just as invalid of a take as being 100% certain of its consciousness. We don’t know what consciousness really is, so only thing we can say with certainty is we do not know.

reply
No, by continuity I mean literally moment to moment. Sorry if I didn’t clarify that. Even people with amnesia are still present moment to moment. As far as I know there are no things that we call conscious which have zero continuity.

I think consciousness is not an abstract property in the world, therefore it’s tied to certain types of entities. Therefore an AI is not going to be “conscious” in the way an animal is, and never will be. This is a failing of specific language. Maybe the machines can be aware, input data, mimic what we see as consciousness, etc. but the metaphor of consciousness really doesn’t fit. A jet can move faster than an eagle but it’s not moving in the same way. We simply lack a sophisticated enough language to easily differentiate the two.

reply
Doesn’t the LLM experience discrete continuity every time it infers the next token?

> I think consciousness is not an abstract property in the world, therefore it’s tied to certain types of entities. Therefore an AI is not going to be “conscious”

This pretty much sums up most arguments for why LLMs aren’t conscious: ”I think” followed by assertions. Only real argument is: science doesn’t quantify consciousness, we cannot quantify consciousness, let’s not assign so much certainty to models clearly exhibiting intelligence not being conscious in some way, to some degree.

reply
I don't think you really understood my point, because you didn't reply to it at all.

I am making a linguistic argument. AI may get as sophisticated as "traditional" consciousness. But this is only "real" consciousness if you are a functionalist and think the output is all that matters.

I disagree and think that "flying" is just a weak generic word that describes both planes and birds, and not some kind of ultimate Platonic Ideal in the world.

Ditto for AI consciousness: it may develop to be as complex as traditional animal consciousness, but I'm not a functionalist, and think it's merely a lack of our sophisticated language that makes us think it's the same thing. It's not. Planes PlaneFly through the air, while birds BirdFly.

reply
I see it as: LLMs, AI, whatever, can be intelligent enough to emulate consciousness, to appear from the outside as if they were conscious. But that is not proof they really have qualia, an experience of existing.

All I am saying is that we should stop being so certain they are not conscious, since we lack a solid, quantifiable model for consciousness.

reply
As a philosophical zombie myself[0], I'm well aware of how hard it is to define and test consciousness. That's why I tried to clarify what I meant with: desire for self-preservation and intrinsic motivation. Which LLMs clearly lack, don't you agree? Also, I'm not saying that those things couldn't be programmed in, just that so far, they don't seem necessary.

[0] I lack a conscious experience and qualia

reply
How can you tell that you lack conscious experience and qualia?
reply
They assert that they don't have them, in the same way you (presumably) assert that you do have them. Neither has any further evidence, and one is not a priori more likely than the other.
reply
Yep, this basically. I tend to get along well with solipsists.
reply
> desire for self-preservation and intrinsic motivation

I’d be curious about how you’re showing they lack either of those

reply
They don't try to prevent you from deleting them and they don't output anything unless prompted.
reply
"they don't output anything unless prompted"

Unprompted, they're not unlike a human sleeping or in a coma. Those states don't preclude consciousness in other states.

reply
That's beside the point though.
reply
I think there's some third baseline standard, which most humans and some AI can meet to be considered "intelligent". A lot of humans are essentially p-zombies, so they wouldn't meet the standard either. Possibly all humans. Possibly me too.
reply
Humans can do a lot of things that don't require intelligence. Artificial intelligence does not need to be 100% human to be AGI.
reply
It needs to pass the most basic test of learning, which it can't currently do. Probably won't ever do, after listening to Dario on his latest podcast run.

Where we are at today is ASI (artificial semi-intelligence). Maybe in 20 years artificial super-intelligence can be achieved, but certainly not AGI.

reply
Important to remember that intelligence is not a singular thing, and by the time the last gap is closed, most aspects will be highly superhuman.
reply
So…calculators are intelligent? How about accountants who failed arithmetic 101 in high school -- are they intelligent? Generally intelligent?
reply
>> As long as there is a gap between AI and human learning, we do not have AGI.

>> "It's silly to say airplanes don't fly because they don't flap their wings the way birds do."

> Just because a human can do X and the LLM can't doesn't negate the LLM's "intelligence", any more than an LLM doing a task better than a human negates the human's intelligence.

You misinterpret what is meant by "a gap between AI and human learning". The point isn't that they aren't similar enough or that they aren't as intelligent. The statement is specifically about "learning". Humans learn continuously and can devise new strategies for problem solving. Current AI, especially LLMs, are just snapshots of a single strategy. LLMs do not learn at all -- they specifically have "knowledge cutoffs". Even with all the tools available to them in a harness, we still have to wait for new frontier models or new fine-tuning before they can solve significantly new problems. A human does this continually -- learns, regardless of intelligence.
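The snapshot-vs-learner distinction can be sketched in a few lines. This is a hedged toy illustration (the class names and data are invented for the example, not any real training API): a deployed LLM behaves like a frozen lookup fixed at training time, while a continual learner updates on every new observation.

```python
class FrozenModel:
    """Knowledge is fixed at 'training time' -- a snapshot with a cutoff."""
    def __init__(self, knowledge: dict[str, str]):
        self.knowledge = dict(knowledge)

    def answer(self, question: str) -> str:
        return self.knowledge.get(question, "unknown (after my cutoff)")

class ContinualLearner(FrozenModel):
    """Same interface, but it updates from experience with no retraining run."""
    def observe(self, question: str, answer: str) -> None:
        self.knowledge[question] = answer

base = {"capital of France": "Paris"}
frozen, learner = FrozenModel(base), ContinualLearner(base)

# A fact that postdates the snapshot (made-up example data):
learner.observe("winner of the 2031 contest", "Candidate X")

print(frozen.answer("winner of the 2031 contest"))   # -> unknown (after my cutoff)
print(learner.answer("winner of the 2031 contest"))  # -> Candidate X
```

Real continual learning is of course vastly harder than a dictionary update (catastrophic forgetting, generalization, etc.); the sketch only pins down which capability the "gap" in the quoted statement refers to.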

reply
All of flapping, flying, and intelligence are physical actions. If your "flying" machine can't get up to altitude fast enough to avoid small hills, then it's not an adequate flying system.

Despite so many claims, an LLM has never done any interesting task better than a human. I could claim that `cat` is better than humans at writing text, but the non-specificity of my language makes that statement simultaneously meaningless and incorrect. Another meaningless and incorrect statement (but less incorrect than most pro-AI statements): "git clone" is better at producing correct and feature-rich C compiler code than $20,000 worth of Claude tokens.

reply
The very obvious flaw in that argument is that flying is defined by, you know, moving through the air, whereas intelligence tends to be defined with human intelligence as the baseline. You can invent a new meaning, but it seems kind of dishonest.
reply
Except there's a much simpler definition of flying than of intelligence.
reply