I think in the long run we may need something like a batch job that compresses context from the last N conversations (in LLMs) and applies it as an update to the model's weights: a looser, delayed form of automated reinforcement learning.

Or make something like LoRA mainstream for individual users (per-user adapters probably scale better when everyone shares one general-purpose base model).
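To make the LoRA idea concrete, here's a minimal NumPy sketch of a low-rank adapter on a single frozen weight matrix. All names (`W`, `A`, `B`, `rank`) are illustrative, not from any particular library; the point is just that the per-user "update" is the small factor pair, not the full matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 16, 2

W = rng.standard_normal((d_out, d_in))          # frozen base weights (shared)
A = rng.standard_normal((d_out, rank)) * 0.01   # trainable adapter factor
B = np.zeros((rank, d_in))                      # zero-init so adapter starts as a no-op

def forward(x):
    # Effective weight is W + A @ B; only A and B would be trained per user,
    # so storing or shipping an "update" means storing just (A, B).
    return (W + A @ B) @ x

x = rng.standard_normal(d_in)
# With B initialised to zero, the adapted model matches the base model exactly.
assert np.allclose(forward(x), W @ x)

# Adapter cost vs. full-matrix cost: (d_out + d_in) * rank vs. d_out * d_in.
print(A.size + B.size, "adapter params vs", W.size, "full params")
```

A nightly batch job in this scheme would fine-tune only `A` and `B` against the compressed conversation data, leaving the shared `W` untouched.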
