Separately, I think Anthropic is probably the least likely of the big 3 to release a model that uses latent-space reasoning, because it's a clear step down in the ability to audit CoT. There has even been some discussion that they accidentally "exposed" the Mythos CoT to RL [0], and I don't see how you would apply a reward function to latent-space reasoning tokens at all.
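To make that concrete, here's a toy sketch of the asymmetry (every name and the reward heuristic are mine for illustration, not any lab's actual setup): with a visible token CoT you can score the trace itself, while continuous latent "thoughts" only expose opaque vectors, so the best available signal is the decoded final answer.

```python
# Toy illustration (hypothetical names): process rewards attach naturally
# to a visible token CoT, but latent reasoning only exposes opaque vectors.
import torch

def token_cot_reward(cot_text: str, answer: str, gold: str) -> float:
    # Visible trace: both the reasoning and the outcome can be scored.
    process = 1.0 if "let me check" in cot_text.lower() else 0.0  # toy signal
    outcome = 1.0 if answer.strip() == gold.strip() else 0.0
    return 0.5 * process + 0.5 * outcome

def latent_cot_reward(latents: torch.Tensor, answer: str, gold: str) -> float:
    # latents: (num_thought_steps, hidden_dim) continuous states. Nothing
    # here is human-legible, so only the decoded outcome can be rewarded.
    assert latents.dim() == 2
    return 1.0 if answer.strip() == gold.strip() else 0.0
```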
[0]: https://www.lesswrong.com/posts/K8FxfK9GmJfiAhgcT/anthropic-...
Literally just a citation of Meta's Coconut paper[1].
Notice that the 2027 folks' contribution to the prediction is that this will have been implemented by "thousands of Agent-2 automated researchers...making major algorithmic advances".
So: discussion of latent-space reasoning dates back to 2022[2], through CoT unfaithfulness, looped transformers, using diffusion to refine latent-space thoughts, and so on, all published before AI 2027. To be "following the timeline of ai-2027", then, we'd need to verify not just that this was happening, but that it was implemented via major algorithmic advances made by thousands of automated researchers; otherwise they don't seem to have made a contribution here.
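For anyone who hasn't read Hao et al., the core Coconut move is small: instead of sampling a token and re-embedding it, the model's last hidden state is fed straight back as the next input embedding for k "continuous thought" steps, and only then does normal decoding resume. A minimal sketch of just the inference loop (GPT-2 as a stand-in; the paper's training curriculum is omitted, so an off-the-shelf model won't actually reason this way):

```python
# Coconut-style latent reasoning loop, mechanics only (Hao et al.).
# GPT-2 is a stand-in; without the paper's training this is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = tok("Q: 2 + 2 = ?", return_tensors="pt")
embeds = model.transformer.wte(prompt.input_ids)  # (1, seq, hidden)

with torch.no_grad():
    for _ in range(4):  # k latent "thought" steps, never verbalized
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][:, -1:, :]  # (1, 1, hidden)
        # Feed the hidden state back in as the next input embedding,
        # skipping token sampling entirely -- this is the Coconut trick.
        embeds = torch.cat([embeds, last_hidden], dim=1)
    # Only after the latent steps do we decode visible tokens again.
    logits = model(inputs_embeds=embeds).logits[:, -1, :]
    next_token = logits.argmax(dim=-1)

print(tok.decode(next_token))
```

Note there's nothing in that inner loop for a reward model to read, which is the auditability concern upthread.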
[1] https://ai-2027.com/#:~:text=Figure%20from%20Hao%20et%20al.%...
What are you, Haiku?
But yeah, in many ways we're at least a year ahead on that timeline.
The first 500 or so tokens are raw thinking output; then, for longer thinking traces, the summarizer kicks in. Sometimes longer traces leak through raw, or the summarizer model (i.e., Claude Haiku) refuses to summarize them and instead includes a direct quote of the passage it won't summarize. The summarizer prompt can be viewed [here](https://xcancel.com/lilyofashwood/status/2027812323910353105...), among other places.
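For what it's worth, the behavior described is consistent with a pipeline roughly like the following (the token threshold, model alias, and prompt wording are my guesses, not Anthropic's actual values):

```python
# Rough sketch of the two-stage display pipeline described above.
# RAW_TOKEN_BUDGET, the model alias, and the prompt are assumptions.
import anthropic

RAW_TOKEN_BUDGET = 500  # assumed cutoff; the real value isn't public
client = anthropic.Anthropic()

def display_thinking(trace: str) -> str:
    tokens = trace.split()  # crude stand-in for real tokenization
    if len(tokens) <= RAW_TOKEN_BUDGET:
        return trace  # short traces shown verbatim
    # Longer traces get handed to a small summarizer model.
    resp = client.messages.create(
        model="claude-3-5-haiku-latest",  # assumed summarizer model
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Summarize this reasoning trace faithfully:\n{trace}",
        }],
    )
    return resp.content[0].text
```

A refusal at the summarize step would then surface exactly as described: the small model quoting the passage back instead of summarizing it.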