For anyone else who hadn't seen the announcement yet: this Anthropic 1M context window is now the same price as the previous 256K context, unlike the beta, where Anthropic charged extra for the 1M window:

https://x.com/claudeai/status/2032509548297343196

As for retrieval, the post shows Opus 4.6 at 78.3% needle retrieval success in the 1M window (compared with 91.9% in the 256K window), and Sonnet 4.6 at 65.1% needle retrieval in the 1M window (compared with 90.6% in the 256K window).

reply
now that's major news
reply
In addition to context rot, cost matters; I think a lot of people use token compression tools for cost reasons, not because of context rot.
reply
From a determinism standpoint, it might be better for the rot to occur at ingest than arbitrarily, five questions later.
reply