Could still be useful, maybe for overnight async workloads? Tell your agent to research xyz at night and wake up to a report.
reply
Assuming 1 token per second and "overnight" meaning 12 hours, that's 43,200 tokens. I'm not sure what you can meaningfully achieve with that.
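The arithmetic checks out; here is the back-of-envelope calculation, with the 1 token/second rate taken as the assumption from the parent comment:

```python
# Overnight token budget, assuming the parent comment's
# rate of 1 token/second over a 12-hour night.
TOKENS_PER_SECOND = 1
HOURS = 12

budget = TOKENS_PER_SECOND * HOURS * 3600  # seconds in the window
print(budget)  # 43200
```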
reply
Sure, but if long-term throughput is a real limitation, there are plenty of ways to address that while still keeping nowhere near all of the model weights in RAM (which is still the conventional approach with MoE). So the gain from a smaller memory footprint is quite real.
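A rough sketch of why the footprint gap is large: with MoE, only the router-selected experts touch a given token, so the resident set can be far smaller than the full parameter count. All figures below are made-up assumptions for illustration, not any specific model:

```python
# Illustrative resident-memory estimate for a hypothetical MoE model,
# assuming 2 bytes per weight (fp16/bf16). Every number here is an
# assumption chosen for the sketch, not a real model's configuration.
BYTES_PER_PARAM = 2
n_experts = 64        # total experts per MoE layer (assumed)
top_k = 2             # experts activated per token (assumed)
expert_params = 1e9   # parameters per expert (assumed)
shared_params = 4e9   # attention/embedding params used by every token (assumed)

total_params = shared_params + n_experts * expert_params
active_params = shared_params + top_k * expert_params

def gib(params: float) -> float:
    """Convert a parameter count to GiB at BYTES_PER_PARAM bytes each."""
    return params * BYTES_PER_PARAM / 2**30

print(f"all weights resident:    {gib(total_params):.0f} GiB")
print(f"active weights resident: {gib(active_params):.0f} GiB")
```

Under these assumed numbers, the active working set is an order of magnitude smaller than the full weight set, which is the gap the comment is pointing at.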
reply
Yes, and with virtually zero context, which makes an enormous difference for time to first token (TTFT) on the MoE models.
reply