In some ways these methods are similar to the model "learning", but they're also fundamentally different from how models are trained and how humans learn. If a human actually learns something, they retain it even when they no longer have access to the material they learned it from. An LLM won't (unless the labs train it in, which is out of scope here). If you stop giving it the instructions, it no longer knows how to do the thing you were "teaching" it to do.
It is a matter of fact that LLMs cannot learn. Whether it is dressed up in slightly different packaging to trick you into thinking it learns does not make any difference to that fact.
Sure, LLMs can't learn. I'm saying that systems built around LLMs can simulate aspects of what we might call "learning".
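To make the distinction concrete, here's a minimal sketch (all names hypothetical) of what such a system does: the "learning" lives entirely in an external store that gets re-injected into every prompt. The model itself stays stateless; delete the store and the "learned" behavior vanishes.

```python
import json
from pathlib import Path

class LessonStore:
    """Persists 'lessons' a user taught between sessions.

    The LLM never changes -- only the context we rebuild for it does.
    This is the sense in which a system around an LLM 'simulates' learning.
    """

    def __init__(self, path: str = "lessons.json"):
        self.path = Path(path)
        # Reload previously stored lessons, if any, at the start of a session.
        self.lessons = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def add(self, lesson: str) -> None:
        """Record a new instruction and persist it to disk."""
        self.lessons.append(lesson)
        self.path.write_text(json.dumps(self.lessons))

    def build_prompt(self, user_message: str) -> str:
        """Prepend every stored lesson to the outgoing prompt.

        Stop calling this (or delete the file) and the model reverts to
        its base behavior -- nothing was ever retained by the model itself.
        """
        context = "\n".join(f"- {lesson}" for lesson in self.lessons)
        return f"Previously taught:\n{context}\n\nUser: {user_message}"
```

A new session constructed over the same file "remembers" the lessons, but only because the prompt-building code re-supplies them; the weights are untouched.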