Anthropic was chirping about Chinese model companies distilling Claude via the thinking traces, and then the thinking traces started to disappear. Looks like the output product and our understanding have been negatively affected, but that pales in comparison with protecting the IP of the model, I guess.
reply
When Gemini Pro came out, I found the thinking traces to be extremely valuable. Ironically, I found them much more readable than the final output. They were a structured, logical breakdown of the problem. The final output was a big blob of prose. They removed the traces a few weeks later.
reply
That’s kind of funny, since a Chinese model is the reason thinking chains became visible in Claude and OpenAI in the first place.
reply
I don't think it ever has. For a very long time now, Claude's reasoning has been summarized by Haiku. You can tell because a lot of the time it fails, saying, "I don't see any thought needing to be summarised."
reply
Maybe there was no thinking.
reply
It also gets confused if the entire prompt is in a text file attachment.

And the summarizer shows the safety classifier's thinking for a second before the model's thinking, so every question starts off with "thinking about the ethics of this request".

reply
They are trying to optimize the circus trick that 'reasoning' is. The economics still do not favor a viable business at these valuations or these levels of cost subsidization. The amount of compute required to make 'reasoning' work, or to deliver these incremental improvements, is increasingly obfuscated in light of the IPO.
reply
Anthropic always summarizes the reasoning output to prevent some distillation attacks.
reply
Genuine question, why have you chosen to phrase this scraping and distillation as an attack? I'm imagining you're doing it because that's how Anthropic prefers to frame it, but isn't scraping and distillation, with some minor shuffling of semantics, exactly what Anthropic and co did to obtain their own position? And would it be valid to interpret that as an attack as well?
reply
> I'm imagining you're doing it because that's how Anthropic prefers to frame it

Correct.

> would it be valid to interpret that as an attack as well?

Yup.

reply
If you ask Claude in Chinese, it thinks it's DeepSeek.
reply
I don't think that learning from textbooks to take an exam and learning from the answers of another student taking the exam are the same thing.

Joking aside, I also don't believe that maximum access to raw Internet data, in sheer quantity, is why some models are doing better than Google. These SoTA models seem to gain more from synthetic data and from how they discard garbage.

reply
Firehosing Anthropic to exfiltrate their model seems materially different from Anthropic downloading the whole Internet to create the model in the first place. But maybe that's just me?
reply
I don't see the material difference between firehosing Anthropic and Anthropic firehosing random sites on the Internet. As someone who runs a few of those random sites, I've had to take actions that increase my costs (and burn my time) to mitigate a whole host of new scrapers constantly firing at every available endpoint, even ones specifically marked as off limits.
reply
Yeah, it's different. Anthropic profits when it delivers tokens. Hosting providers pay when Anthropic scrapes them.
reply
Yes, what the LLM providers did was worse. It hurt people financially far more, in lost compensation for their works as well as in operational costs that only reached the heights they did because of scrapers acting on behalf of model providers.
reply
Attacks? That's a choice of words.
reply
Definitely Anthropic playing the victim after distilling the whole internet.
reply
Proprietary pattern matcher proves there's no moat; promptly pre-covers others' perception.
reply
Very cool that these companies can scrape basically all extant human knowledge, utterly disregard IP/copyright/etc, and they cry foul when the tables turn.
reply
All extant human knowledge SO FAR. Remember, by the nature of the beast, the companies will always be operating in hindsight with outdated human knowledge.
reply
Yep, that is exactly what happens. It's a disgrace that their models aren't open, after training on everything humanity has preserved.

They should at least release the weights of their old/deprecated models, but no, that would be losing money.

reply
We should treat LLMs somewhat like patents or drugs. After 5 years or so, the models should become open source, or at the very least the weights should, to compensate for the distilling of human knowledge.
reply
and so does OpenAI
reply
Safety versus Distillation, guess we see what's more important.
reply
CoT is basically bullshit, entirely confabulated and not related to any "thought process"...
reply
But CoT distillation still WORKS. See the DeepSeek R1 paper.
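A minimal sketch of what that distillation looks like in practice (the function name and the `<think>` tag format are my own illustration, not taken from the R1 paper): the teacher model's visible reasoning trace gets folded into the student's supervised fine-tuning target, so the student learns to emit a chain of thought itself.

```python
def make_sft_example(question: str, trace: str, answer: str) -> dict:
    """Pack one teacher sample into a student SFT training pair.

    The student is trained to reproduce the teacher's reasoning
    trace first, then the final answer.
    """
    return {
        "prompt": question,
        "completion": f"<think>{trace}</think>\n{answer}",
    }

example = make_sft_example(
    "What is 7 * 8?",
    "7 * 8 = 7 * (10 - 2) = 70 - 14 = 56.",
    "56",
)
```

This is exactly why hiding or summarizing the raw traces blunts the attack: without the verbatim trace, there is nothing high-fidelity to put inside the completion.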
reply
Tokens relate to each other. More tokens, more compute.
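A back-of-the-envelope sketch of that claim, assuming vanilla self-attention (quadratic in sequence length; the numbers below are my own illustration, not from the thread):

```python
def attention_pair_count(n_tokens: int) -> int:
    """Token-pair interactions in one full self-attention pass."""
    return n_tokens * n_tokens

# 10x the tokens costs roughly 100x the attention work.
print(attention_pair_count(1_000))   # 1000000
print(attention_pair_count(10_000))  # 100000000
```

So long reasoning chains aren't just longer outputs; the attention cost per generation grows much faster than the token count.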
reply