Why is learning an appropriate metaphor for changing weights but not for context? There are certainly major differences in what each is good or bad at, and especially in how much data you can effectively feed in each way. But both are ways to take an artifact that behaves as if it doesn't know something and produce an artifact that behaves as if it does.

I've learned how to solve a Rubik's cube before, and forgot almost immediately.

I'm not personally fond of metaphors to human intelligence now that we are getting a better understanding of the specific strengths and weaknesses these models have. But if we're gonna use metaphors I don't see how context isn't a type of learning.

reply
> We need models that keep on learning (updating their parameters) forever, online, all the time.

Do we need that? Today's models are already capable in lots of areas. Sure, they don't match up to what the uberhypers are talking up, but technology seldom does. Doesn't mean what's there already cannot be used in a better way, if they could stop jamming it into everything everywhere.

reply
How long will it take someone to poison such a model by teaching it wrong things?

Even humans fall for propaganda repeated over and over.

The current non-learning model is unintentionally right up there with the “immutable system” and “infrastructure as code” philosophy.

reply
> We need models that keep on learning (updating their parameters) forever, online, all the time.

Yeah, that's the guaranteed way to get MechaHitler in your latent space.

If the feedback loop is fast enough I think it would finally kill the internet (in the 'dead internet theory' sense). Perhaps it's better for everyone though.

reply
Many are working on this, as well as in-latent-space communication across models. Because we can’t understand that, by the time we notice MechaHitler it’ll be too late.
reply
I'm not sure if you want models perpetually updating weights. You might run into undesirable scenarios.
reply
Our brains, which are organic neural networks, are constantly updating themselves. We call this phenomenon "neuroplasticity."

If we want AI models that are always learning, we'll need the equivalent of neuroplasticity for artificial neural networks.

Not saying it will be easy or straightforward. There's still a lot we don't know!

reply
How would you keep controls (safety restrictions, IP restrictions, etc.) with that, though? The companies selling models right now probably want to keep those fairly tight.
reply
If done right, one step closer to actual AGI.

That is the end goal after all, but all the potential VCs seem to forget that almost every conceivable outcome of real AGI involves the current economic system falling to pieces.

Which is sorta weird. It's like if VCs in Old Regime France had started funding the revolution.

reply
Yes the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.

And for your comparison, they did fund the American revolution, which in turn was one of the sparks for the French revolution (or was that exactly the point you were making?)

reply
The funding of the American revolution is a fun topic but most people don't know about it so I don't bother dropping references to it. :D
reply
I wonder which side tried to forget that first (;->
reply
Tay the chatbot says hi from 2016.
reply
How about we just put them to bed once in a while?
reply
it is interesting
reply
Doesn't necessarily need to be online. As long as:

1. there's a way to take many transcripts of inference over a period and convert/distil them together into an incremental-update training dataset (for memory, not for RLHF), which a model can be fine-tuned on as an offline batch process every day/week, such that a new version of the model comes out daily/weekly that hard-remembers everything you told it (see the sketch after this list); and

2. in-context learning + external memory improves to the point that a model with the appropriate in-context "soft memories", behaves indistinguishably from a model that has had its weights updated to hard-remember the same info (at least when limited to the scope of the small amounts of memories that can be built up within a single day/week);

...then you get the same effect.
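
To make step 1 concrete, here's a minimal sketch of the nightly consolidation job (the transcript format, the "durable" tag, and the finetune step are all invented for illustration; this isn't any vendor's actual pipeline):

    # Hypothetical nightly "memory consolidation" batch job.
    # Distill the day's transcripts into a small fine-tuning set; a separate
    # (also hypothetical) finetune step then produces tomorrow's model version.
    import json, datetime

    def distill(transcripts):
        """Turn raw transcripts into prompt/completion pairs worth hard-remembering."""
        examples = []
        for convo in transcripts:
            for turn in convo["turns"]:
                if turn.get("durable"):  # tagged by some upstream relevance filter
                    examples.append({"prompt": turn["user"],
                                     "completion": turn["assistant"]})
        return examples

    if __name__ == "__main__":
        today = datetime.date.today().isoformat()
        with open(f"transcripts-{today}.json") as f:
            transcripts = json.load(f)
        with open(f"memory-update-{today}.jsonl", "w") as out:
            for ex in distill(transcripts):
                out.write(json.dumps(ex) + "\n")
        # then e.g.: finetune(base_model, f"memory-update-{today}.jsonl")
        #            -> publish model-{today}

The interesting open question is what decides which turns are "durable"; that filter is where most of the distillation work would live.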

Why is this an interesting model? Because, at least to my understanding, this is already how organic brains work!

There's nothing to suggest that animals — even humans — are neuroplastic on a continuous basis. Rather, our short-term memory is seemingly stored as electrochemical "state" in our neurons (much like an LLM's context is "state", but more RNN "a two-neuron cycle makes a flip-flop"-y); and our actual physical synaptic connectivity only changes during "memory reconsolidation", a process that mostly occurs during REM sleep.
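
If the flip-flop analogy sounds abstract: two units that excite each other will latch a brief input pulse as ongoing activity, with no weight change at all. A toy version (nothing to do with real neurons, just the idea of state held in activity rather than in weights):

    # Toy "two-neuron flip-flop": a single input pulse is latched as
    # persistent activity circulating between the two units.
    import numpy as np

    W = np.array([[0.0, 1.1],    # each unit is driven by the other
                  [1.1, 0.0]])
    state = np.zeros(2)

    for t in range(50):
        pulse = np.array([1.0, 0.0]) if t == 0 else np.zeros(2)
        state = np.tanh(W @ state + pulse)

    print(state)  # ~[0, 0.5]: the pulse is still circulating 50 steps later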

And indeed, we see the same exact problem in humans and other animals, where when we stay awake too long without REM sleep, our "soft memory" state buffer reaches capacity, and we become forgetful, both in the sense of not being able to immediately recall some of the things that happened to us since we last slept; and in the sense of later failing to persist some of the experiences we had since we last slept, when we do finally sleep. But this model also "works well enough" to be indistinguishable from remembering everything... in the limited scope of our being able to get a decent amount of REM sleep every night.

reply
It 100% needs to be online. Imagine you're trying to think about a new tabletop puzzle, and every time a puzzle piece leaves your direct field of view, you no longer know about that puzzle piece.

You can try to keep all of the puzzle pieces within your direct field of view, but that divides your focus. You can hack that and make your field of view incredibly large, but that can potentially distort your sense of the relationships between things, their physical and cognitive magnitude. Bigger context isn't the answer, there's a missing fundamental structure and function to the overall architecture.

What you need is memory that works as you process and consume information, at the moment of consumption. If you meet a new person, you immediately memorize their face. If you enter a room, it's instantly learned and mapped in your mind. Without that, every time you blinked after meeting someone new, it'd be a total surprise to see what they looked like. You might never learn to recognize and remember faces at all. Or puzzle pieces. Or whatever else the lack of online learning keeps you from integrating, instantly and persistently, into an existing world model.

You can identify problems like this for any modality, including text, audio, tactile feedback, and so on. You absolutely, 100% need online, continuous learning in order to effectively deal with information at a human level for all the domains of competence that extend to generalizing out of distribution.

It's probably not the last problem that needs solving before AGI, but it is definitely one of them, and there might only be a handful left.

Mammals instantly, upon perceiving a novel environment, map it, without even having to make a conscious effort. Our brains operate in a continuous, plastic mode, for certain things. Not only that, this machinery can be adapted to abstractions: many of the automatic, reflexive functions that evolved to handle navigation and the like also let us simulate the future and predict risk and reward over multiple arbitrary degrees of abstraction, sometimes in real time.

https://www.nobelprize.org/uploads/2018/06/may-britt-moser-l...

reply
That's not how training works - adjusting model weights to memorize a single data item is not going to fly.

Model weights store abilities, not facts - generally.

Unless the fact is very widely used and widely known, with a ton of context around it.

The model can learn the day JFK died because there are millions of sparse examples of how that information exists in the world, but when you're working on a problem, you might have 1 concern to 'memorize'.

That's going to be something different than adjusting model weights as we understand them today.

LLMs are not mammals either; it's a helpful analogy in terms of 'what a human might find useful', but not necessarily in the context of actual LLM architecture.

The fact is - we don't have memory sorted out architecturally - it's either 'context or weights' and that's that.

Also, critically: humans do not remember the details of a face. Not remotely. They're able to associate it with a person and a name 'if they see it again', but that's different from some kind of excellent recall. Ask someone to describe the features in detail and they often can't do it.

You can see, in this instance, that this may be related to a kind of 'soft lookup', i.e. associating an input with other bits of information which 'rise to the fore' as possibly useful.

But overall, yes, it's fair to take the position that we'll have to 'learn from context in some way'.

reply
Also, with regard to faces, that's kind of what I'm getting at: we don't have grid cells for faces, but there seem to be discrete, functional, evolutionary structures and capabilities that combine in ways we're not consciously aware of to provide abilities. We're reflexively able to memorize faces, but bringing that to consciousness isn't automatic. There have been amnesia, lesion, and other injury studies where people with face blindness get stress or anxiety, or relief, when recognizing a face, without being consciously aware of it. A doctor, or a person they didn't like, showing up caused stress spikes, but they couldn't tell you who the person was or their name; the same with family members: they get a physiological, hormonal response as if they recognized a friend or foe, but it never rises to the level of conscious recognition.

There do seem to be complex cells that allow association with a recognizable face, person, icon, object, or distinctive thing. Face cells apply equally to abstractions like logos or UI elements in an app as they do to people, famous animals, unique audio stings, etc. Split brain patients also demonstrate amazing strangeness with memory and subconscious responses.

There are all sorts of layers to human memory, beyond just short term, long term, REM, memory palaces, and so forth, and so there's no simple singular function of "memory" in biological brains, but a suite of different strategies and a pipeline that roughly slots into the fuzzy bucket words we use for them today.

reply
I suspect we're going to need hypernetworks of some sort - dynamically generated weights, with the hypernet weights getting the dream-like reconsolidation and mapping into the model at large, and layers or entire experts generated from the hypernets on the fly, a degree removed from the direct-from-weights inference being done now. I've been following some of the token-free latent reasoning and other discussions around CoT, other reasoning scaffolding, and so forth, and you just can't overcome the missing-puzzle-piece problem elegantly unless you have online memory. In the context of millions of concurrent users, that also becomes a nightmare. A pipeline with a sort of intermediate memory, constructive and dynamic enough to resolve problems that require integration with memorized concepts and functions, but held out for curation and stability, seems like one way through.
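
By "hypernetwork" I mean roughly this shape (a toy sketch; the dimensions and wiring are invented, and real versions would more likely generate LoRA-style deltas or whole expert layers rather than a raw weight matrix):

    # Toy hypernetwork: a small net generates the weights of a linear layer
    # from a "memory" embedding, so the effective weights can change per-context
    # without touching the base parameters.
    import torch
    import torch.nn as nn

    class HyperLinear(nn.Module):
        def __init__(self, d_in, d_out, d_mem):
            super().__init__()
            # the hypernet maps a memory vector to flattened weights + bias
            self.hyper = nn.Linear(d_mem, d_in * d_out + d_out)
            self.d_in, self.d_out = d_in, d_out

        def forward(self, x, mem):
            params = self.hyper(mem)                      # generated on the fly
            W = params[: self.d_in * self.d_out].view(self.d_out, self.d_in)
            b = params[self.d_in * self.d_out:]
            return x @ W.t() + b

    layer = HyperLinear(d_in=16, d_out=8, d_mem=32)
    x, mem = torch.randn(4, 16), torch.randn(32)
    print(layer(x, mem).shape)    # torch.Size([4, 8])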

It's an absolutely enormous problem, and I'm excited that it seems to be one of the primary research efforts kicking off this year. It could be a huge step change in capabilities.

reply
Yes, so I think that's a fine thought; I just don't think it fits into LLM architecture.

Also, weirdly, even LeCun etc. are barely talking about this; they're thinking about 'world models' and so on.

I think what you're talking about is maybe 'the most important thing' right now, and frankly, it's almost like an issue of 'Engineering'.

Like - it's when you work very intently with the models that this 'issue' becomes much more prominent.

Your 'instinct' for this problem is probably an expression of 'very nuanced use' I'm going to guess!

So in a way, it's as much Engineering as it is theoretical?

Anyhow - so yes - but - probably not LLM weights. Probably.

I'll add a small thing: the way that Claude Code keeps the LLM 'on track' is by reminding it! Literally, it injects little 'TODO reminders' with some prompts, which is kind of ... simple!
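
Roughly this shape, I'd guess (a sketch of the idea, not Anthropic's actual code; the message format and TODO fields are made up):

    # Minimal sketch of the "keep reminding it" trick: before each model call,
    # re-append the current TODO list to the messages.
    def with_reminder(history, todos):
        reminder = "Current TODOs (do not lose track of these):\n" + "\n".join(
            f"- [{'x' if t['done'] else ' '}] {t['text']}" for t in todos)
        return history + [{"role": "user", "content": reminder}]

    todos = [{"text": "fix the failing parser test", "done": False},
             {"text": "update CHANGELOG", "done": False}]
    messages = with_reminder([{"role": "user", "content": "refactor the parser"}], todos)
    # send `messages` to the model as usual; repeat the injection every turn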

I worked a bit with 'steering probes' ... and there's a related opportunity there - to 'inject' memory and control operations along those lines. Just as a starting point for at least one architectural motivation.

reply
Models like Claude have been trained to update and reference memory for Claude Code (agent loops) independently and as a part of compacting context. Current models have been trained to keep learning after being deployed.
reply
yes but that's a very unsatisfactory definition of memory.
reply
I think they can do in-context learning.
reply