> I don't understand this view. How I see it the fundamental bottleneck to AGI is continual learning and backpropagation. Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.

Even with continuous backpropagation and "learning" that enriches the training data, so-called online learning, the limitations will not disappear. LLMs will not be able to conclude things about the world based on fact and deduction; they only consider what is likely given their training data. They will not foresee or anticipate events that are unlikely or non-existent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.

Whether humans always apply that much effort to conclude these things is another question. The point is that humans are fundamentally capable of doing it, while LLMs structurally are not.

The problems are structural/architectural. I think it will take another 2-3 major leaps in architecture before these AI models reach human-level general intelligence, if they ever do. So far they can "merely" often "fake it" when things are statistically common in their training data.

reply
Humans are notoriously bad at formal logic. The Wason selection task is the classic example: most people fail a simple conditional reasoning problem unless it’s dressed up in familiar social context, like catching cheaters. That looks a lot more like pattern matching than rule application.

Kahneman’s whole framework points the same direction. Most of what people call “reasoning” is fast, associative, pattern-based. The slow, deliberate, step-by-step stuff is effortful and error-prone, and people avoid it when they can. And even when they do engage it, they’re often confabulating a logical-sounding justification for a conclusion they already reached by other means.

So maybe the honest answer is: the gap between what LLMs do and what most humans do most of the time might be smaller than people assume. The story that humans have access to some pure deductive engine and LLMs are just faking it with statistics might be flattering to humans more than it’s accurate.

Where I’d still flag a possible difference is something like adaptability. A person can learn a totally new formal system and start applying its rules, even if clumsily. Whether LLMs can genuinely do that outside their training distribution or just interpolate convincingly is still an open question. But then again, how often do humans actually reason outside their own “training distribution”? Most human insight happens within well-practiced domains.

reply
> The story that humans have access to some pure deductive engine and LLMs are just faking it with statistics might be flattering to humans more than it’s accurate.

Your point rings true with most human reasoning most of the time. Still, at least some humans do have the capability to run that deductive engine, and it seems to be a key part (though not the only part) of scientific and mathematical reasoning. Even informal experimentation and iteration rest on deductive feedback loops.

reply
> The Wason selection task is the classic example: most people fail a simple conditional reasoning problem unless it’s dressed up in familiar social context, like catching cheaters.

I'd never heard of the Wason selection task, so I looked it up, and I could tell the right answer right away. But I can also tell you why: because I have some familiarity with formal logic and can, in your words, pattern-match the gotcha that "if x then y" is distinct from "if not x then not y".

In contrast to you, this doesn't make me believe that people are bad at logic or don't really think. It tells me that people are unfamiliar with "gotcha" formalities introduced by logicians that don't match the everyday use of language. If you added a simple clarification to the problem, such as "Note that in this context, 'if' only means that...", most people would almost certainly answer it correctly.

Mind you, I'm not arguing that human thinking is necessarily more profound than what LLMs could ever do. However, judging from the output, LLMs have a tenuous grasp on reality, so I don't think that reductionist arguments along the lines of "humans are just as dumb" are fair. There's a difference that we don't really know how to overcome.

reply
Agree with much of your comment.

Though note that, as GP said, people famously do much better on the Wason selection task when it's framed in a social context. That at least partially undermines your theory that it's a lack of familiarity with the terminology of formal logic.

reply
Your response contains a performative contradiction: you are asserting that humans are naturally logical while simultaneously committing several logical errors to defend that claim.
reply
This comment would be a lot more useful with an enumeration of those logical errors.
reply
The commenter's specific claim, that adding a note about the definition of "if" would solve the problem, is moving the goalposts and a tautology. The comment also suffers from hasty generalization (in their experience the test isn't hard) and special pleading (a double standard for LLMs and humans).
reply
When someone tells you "you can have this if you pay me", they don't mean "you can also have it if you don't pay". They are implicitly but clearly indicating you gotta pay.

It's as simple as that. In common use, "if x then y" frequently implies "if not x then not y". Pretending that it's some sort of cognitive defect to interpret it this way is silly.

reply
> Even with continuous backpropagation and "learning"

That's what I said. Backpropagation cannot be enough; that's not how neurons work in the slightest. When you put biological neurons in a Pong environment, they learn to play not through some kind of loss or reward function; they self-organize to avoid unpredictable stimulation. As far as I know, no artificial architecture learns in such an unsupervised way.

https://www.sciencedirect.com/science/article/pii/S089662732...

reply
Forgive me for being ignorant, but 'loss' in a supervised ML context encodes how unlikely (high loss) or likely (low loss) the network's prediction of the output was, given the input.

This sounds very similar to me as to what neurons do (avoid unpredictable stimulation)
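
To make that concrete, here's a toy sketch (purely illustrative, not any particular framework's API): cross-entropy loss is literally the negative log of the probability the network assigned to the actual outcome.

```python
import math

def loss(p_observed):
    # Cross-entropy / negative log-likelihood: the lower the probability
    # the network assigned to what actually happened, the higher the loss.
    return -math.log(p_observed)

# A confident correct prediction is "unsurprising" and gets a low loss:
assert loss(0.99) < loss(0.5) < loss(0.01)
```

So minimizing loss is the same as minimizing the network's "surprise" at its training data, which does sound close to what those neurons were doing.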

reply
So, I have been thinking about this for a little while. Imagine a model f that takes a world state x and makes a prediction y. At a high level, a traditional supervised model is trained like this:

f(x)=y' => loss(y',y) => how good was my prediction? Train f through backprop with that error.

A model trained with reinforcement learning is more like this, where m(y) is the resulting world state of taking an action y the model predicted:

f(x)=y' => m(y')=z => reward(z) => how good was the state I was in based on my actions? Train f with an algorithm like REINFORCE with the reward, as the world m is a non-differentiable black-box.

A group of neurons, meanwhile, is more like predicting the resulting world state of taking my action, g(x,y), and learning by tuning both g and the action policy f(x):

f(x)=y' => m(y')=z => g(x,y)=z' => loss(z,z') => how predictable was the results of my actions? Train g normally with backprop, and train f with an algorithm like REINFORCE with negative surprise as a reward.
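
Here's a toy, runnable sketch of that third scheme. Everything in it is my own invention for illustration: a scalar world, a linear world model g(y) = w_g * y, and a unit-variance Gaussian policy with mean w_f * x. g trains by plain gradient descent on its prediction error, while f gets REINFORCE updates with negative surprise as the reward.

```python
import random

random.seed(0)

def m(y):
    # Black-box, non-differentiable environment: next state after action y.
    return 2.0 * y + 0.5

w_g = 0.0   # world-model weight: g(y) = w_g * y
w_f = 0.0   # policy weight: action mean = w_f * x
lr = 0.01

for step in range(2000):
    x = random.uniform(-1, 1)            # current world state
    y = w_f * x + random.gauss(0, 1)     # sample a stochastic action
    z = m(y)                             # actual next state
    z_hat = w_g * y                      # world model's prediction
    surprise = (z - z_hat) ** 2
    # Train g by gradient descent on its squared prediction error.
    w_g += lr * 2 * (z - z_hat) * y
    # Train f with REINFORCE, using negative surprise as the reward;
    # the score of a unit-variance Gaussian policy is (y - mean) * x.
    w_f += lr * (-surprise) * (y - w_f * x) * x

# w_g should end up near the environment's true slope of 2.0
```

In this toy the world model ends up tracking the environment fairly well, while the policy mostly learns to avoid actions whose outcomes it can't predict, which is roughly the "avoid unpredictable stimulation" behavior.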

After talking with GPT5.2 for a little while, it seems like Curiosity-driven Exploration by Self-supervised Prediction[1] might be an architecture similar to the one I described for neurons? But with the twist that f is rewarded by making the prediction error bigger (not smaller!) as a proxy for "curiosity".

[1] https://arxiv.org/pdf/1705.05363

reply
I think people MOSTLY foresee and anticipate events in OUR training data, which mostly comprises information collected by our senses.

Our training data is a lot more diverse than an LLM's. We also leverage our senses as carriers for communicating abstract ideas through audio and visual channels that may or may not be grounded in reality. We have TV shows, video games, programming languages, and all sorts of rich and interesting things we can engage with that do not reflect our fundamental reality.

Like LLMs, we can hallucinate while we sleep or we can delude ourselves with untethered ideas, but UNLIKE LLMs, we can steer our own learning corpus. We can train ourselves with our own untethered “hallucinations” or we can render them in art and share them with others so they can include it in their training corpus.

Our hallucinations are often just erroneous models of the world. When we render it into something that has aesthetic appeal, we might call it art.

If the hallucination helps us understand some aspect of something, we call it a conjecture or hypothesis.

We live in a rich world filled with rich training data. We don’t magically anticipate events not in our training data, but we’re also not void of creativity (“hallucinations”) either.

Most of us are stochastic parrots most of the time. We’ve only gotten this far because there are so many of us and we’ve been on this earth for many generations.

Most of us are dazzled and instinctively driven to mimic the ideas that a small minority of people “hallucinate”.

There is no shame in mimicking or being a stochastic parrot. These are critical features that helped our ancestors survive.

reply
> We can steer our own learning corpus

This is critical. We have some degree of attentional autonomy. And we have a complex tapestry of algorithms running in thalamocortical circuits that generate “Nows”. Truncation commands produce sequences of acts (token-like products).

reply
> They will not foresee/anticipate events, that are unlikely or non-existent in their training data, but are bound to happen due to real world circumstances. They are not intelligent in that way.

Can you be a bit more specific about those bounds? Maybe via an example?

reply
I'm sure that if a car appeared from nowhere in the middle of your living room, you would not be prepared at all.

So my question is: when is there enough training data that you can handle 99.99% of the world ?

reply
The main difference is that humans are learning all the time, while models learn batch-wise and forget whatever happened in a previous session unless someone makes it part of the training data, so there is a massive lag.

Whoever cracks the continuous, customized (per-user, for instance) learning problem without just extending the context window is going to make a big splash. And I don't mean cheats and shortcuts; I mean actually tuning the model based on received feedback.

reply
> Models today are static, and human brains don't learn or adapt themselves with anything close to backpropagation.

While I suspect the latter is a real problem (because all mammal brains* are much more example-efficient than all ML), the former is more about productisation than anything fundamental: the models can already be continuously updated, but that makes it hard to deal with regressions. You kinda want an artefact with a version stamp that doesn't change itself before you release the update, especially as this isn't like normal software where specific features can be toggled on or off in isolation from everything else.

* I think. Also, I'm saying "mammal" because of an absence of evidence (to my *totally amateur* skill level) not evidence of absence.

reply
They can be continuously updated, assuming you re-run representative samples of the training set through them continuously. Unlike a mammal brain, which preserves the function of neurons unless they activate in a situation that causes a training signal, deep nets suffer catastrophic forgetting because update signals get scattered everywhere. You couldn't have a model continuously learning about you in your pocket without tons of cycles spent "remembering" old examples. In fact, this is a major stumbling block in standard training: sampling is a huge problem. If you just iterate through the training corpus, you'll have forgotten most of the English material by the time you finish with Chinese or Spanish. You have to constantly mix and balance training data due to this limitation.

The fundamental difference is that physical neurons have a discrete on/off activation, while digital "neurons" in a network are merely continuous differentiable operations. Digital networks also have no notion of "spike timing dependency" to avoid overwriting activations that weren't related to an outcome. There are things like reward decay over time, but this applies to the signal at a very coarse level; updates are still scattered across almost the entire system with every training example.
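
The mixing I mean looks roughly like this (a hypothetical sketch; the function and corpus names are made up):

```python
import random

def mixed_batches(corpora, batch_size, n_batches, seed=0):
    # Build each batch by sampling across all corpora, instead of
    # finishing one corpus before starting the next, which is what
    # invites catastrophic forgetting.
    rng = random.Random(seed)
    names = list(corpora)
    for _ in range(n_batches):
        # Every batch draws from every corpus with equal probability,
        # so no corpus goes unseen for long stretches of training.
        yield [rng.choice(corpora[rng.choice(names)])
               for _ in range(batch_size)]

corpora = {
    "english": ["the cat sat", "hello world"],
    "spanish": ["hola mundo", "el gato"],
    "chinese": ["ni hao shi jie", "xiao mao"],
}
batch = next(mixed_batches(corpora, batch_size=8, n_batches=1))
```

Real pipelines weight the sampling by corpus size and quality rather than uniformly, but the point stands: the interleaving exists only to work around forgetting.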

reply
IIRC LeCun talks about a self-organizing hierarchy of real-world objects, and IMO this is exactly how the human brain actually learns.
reply
The fact that models aren't continually updating seems more like a feature. I want to know the model is exactly the same as it was the last time I used it. Any new information it needs can be stored in its context window, or stored in a file to read the next time it needs to access it.
reply
> The fact that models aren't continually updating seems more like a feature.

I think this is true to some extent: we like our tools to be predictable. But we've already made one jump by going from deterministic programs to stochastic models. I am sure that the moment a self-evolving AI shows up that clears the "useful enough" threshold, we'll make that jump as well.

reply
Stochasticity and unpredictability aren't exactly the same thing. I would claim current LLMs are generally predictable, even if not as predictable as a deterministic program.
reply
It’s a feature of a good tool, but a sentient intelligence is more than just a tool
reply
Persistent memory through text in the context window is a hack/workaround.

And generally:

> I want to know the model is exactly the same as it was the last time I used it.

What exactly does that gain you, when the overall behavior is still stochastic?

But still, if it's important to you, you can get the same behavior by taking a model snapshot once we crack continuous learning.

reply
Unless you use your own local models, you don't even know when OpenAI or Anthropic tweaked the model, slightly or substantially. One week it's version x, the next week it's version y. Just like your operating system, which is continuously evolving, from small patches to specific apps up to a whole new kernel version and OS release.
reply
There is still a huge gap between a model continuously updating itself and weekly patches by a specialist team. The former would make things unpredictable.
reply
You could have continual learning on text and still be stuck in the same "remixing baseline human communications" trap. It's a nasty one, very hard to avoid, possibly even structurally unavoidable.

As for the "just put a vision LLM in a robot body" suggestion: People are trying this (e.g. Physical Intelligence) and it looks like it's extraordinarily hard! The results so far suggest that bolting perception and embodiment onto a language-model core doesn't produce any kind of causal understanding. The architecture behind the integration of sensory streams, persistent object representations, and modeling time and causality is critically important... and that's where world models come in.

reply
I don't understand why online learning is that necessary. If you took Einstein at 40 and surgically removed his hippocampus so he couldn't learn anything he didn't already know (meaning no online learning), that would still be a very useful AGI. A hippocampus is a nice upgrade, but not so obviously on the critical path.
reply
> If you took Einstein at 40 and surgically removed his hippocampus so he can't learn anything he didn't already know (meaning no online learning), that's still a very useful AGI.

I like how people are accepting this dubious assertion that Einstein would be "useful" if you surgically removed his hippocampus and engaging with this.

It also calls this Einstein an AGI rather than a disabled human???

reply
Hypotheticals fear him
reply
He basically said that himself:

"Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking".

-- Albert Einstein

reply
I guess the sheer amount and variety of information you would need to pre-encode to get an Einstein at 40 is huge: an everyday stream of high-resolution video, plus the actions, consequences, thoughts, and ideas he had until the age of 40, every single moment. That includes social interactions, like a conversation and the expressions of the other person, in combination with what was said and background knowledge about that person. Even a single conversation is a huge amount of data.

But one might say that the brain is not lossless... True, good point. But in what way is it lossy? Can that be simulated well enough to train an Einstein? What gives events significance is very subjective.

reply
Kinda a moot point in my eyes, because I very much doubt you can arrive at the same result without the same learning process.
reply
That's true. Though would that hippocampus-less Einstein be able to keep making novel, complex discoveries from that point forward? Seems difficult. He would rapidly reach the limits of his short-term memory (the same way current models rapidly reach the limits of their context windows).
reply
It could possibly be useful but I don't see why it would be AGI.
reply
Where does that training data come from?
reply
Who knows? Perhaps attention really is all you need. Maybe our context window is really large. Or our compression is really effective. Perhaps adding external factors might indirectly teach the models to act more in line with social expectations, such as being embarrassed to repeat the same mistake, unlocking the final piece of the puzzle. We are still stumbling in the dark for answers.
reply
Agents have the ability to learn continually.
reply
Putting stuff you have learned into a markdown file is a very "shallow" version of continual learning. It can remember facts, yes, but I doubt a model can master new out-of-distribution tasks this way. If anything, I think Google's Titans[1] and Hope[2] architectures are more aligned with true continual learning (while still not being actual continual learning, which is why they call it "test-time memorization").

[1] https://arxiv.org/pdf/2501.00663

[2] https://arxiv.org/pdf/2512.24695

reply
I have had it master tasks by doing this. The first time it tries to solve an issue it may take a long time, but it documents its findings and how it was able to solve it, and then it applies that knowledge the next time the task comes up.
reply