If I understand you correctly, this is essentially what vLLM does with its paged KV cache; if I've misunderstood, I apologize.
PagedAttention is more of a low-level building block, aimed initially at avoiding memory fragmentation and duplication of shared KV-cache prefixes in large-batch inference. But you're right that it's quite related. The llama.cpp folks are still discussing it, per a recent thread in that project: https://github.com/ggml-org/llama.cpp/discussions/21961
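To make the "shared prefix" part concrete, here's a toy sketch (not vLLM's actual implementation, which works on GPU tensors): sequences map logical positions to fixed-size blocks via a block table, sequences that share a prompt prefix point at the same reference-counted physical blocks, and a write to a shared block triggers copy-on-write. All names and the block size are illustrative.

```python
BLOCK_SIZE = 4  # tokens per block (illustrative; real systems use e.g. 16)

class PagedKVCache:
    def __init__(self):
        self.blocks = {}      # block_id -> list of tokens (stand-in for K/V tensors)
        self.refcount = {}    # block_id -> number of sequences referencing it
        self.next_id = 0

    def _alloc(self):
        bid = self.next_id
        self.next_id += 1
        self.blocks[bid] = []
        self.refcount[bid] = 1
        return bid

    def new_seq(self, prefix_table=None):
        """Start a sequence, optionally sharing an existing prefix's blocks."""
        table = list(prefix_table or [])
        for bid in table:
            self.refcount[bid] += 1  # prefix blocks are shared, not copied
        return table

    def append(self, table, token):
        if not table or len(self.blocks[table[-1]]) == BLOCK_SIZE:
            table.append(self._alloc())
        bid = table[-1]
        if self.refcount[bid] > 1:
            # Copy-on-write: another sequence still needs the shared block,
            # so this sequence gets a private copy before mutating it.
            self.refcount[bid] -= 1
            new = self._alloc()
            self.blocks[new] = list(self.blocks[bid])
            table[-1] = new
            bid = new
        self.blocks[bid].append(token)
        return table

cache = PagedKVCache()
a = cache.new_seq()
for t in range(6):                 # 6-token prompt -> one full block + one partial
    cache.append(a, t)
b = cache.new_seq(prefix_table=a)  # second sequence shares both prefix blocks
cache.append(a, 100)               # diverging append copies only the partial block
```

After the final append, both sequences still share the full first block; only the partially filled tail block was duplicated, which is the deduplication the comment above refers to.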