upvote
They won't. These are not "issues", it's them trying to push the models to burn less compute. It will only get worse.
reply
> it's them trying to push the models to burn less compute

I'm curious, how does using more tokens save compute?

reply
productivity (tokens per second per hardware unit) increases at the cost of output quality, but the price remains the same.

both Anthropic and OpenAI quantize their models a few weeks after release. they'd never admit it out loud, but it's more or less common knowledge now. no one has enough compute.

reply
Pretty bold claim - you have a source for that?
reply
There is no evidence TMK that the accuracy the models change due to release cycles or capacity issues. Only latency. Both Anthropic and OpenAI have stated they don't do any inference compute shenanigans due to load or post model release optimization.

Tons of conspiracy theories and accusations.

I've never seen any compelling studies(or raw data even) to back any of it up.

reply
Do you have a source for that claim?
reply
my source is that people have been noticing this since GPT4 days.

https://arxiv.org/pdf/2307.09009

but of course, this isn't a written statement by a corporate spokespersyn. I don't think that breweries make such statements when they water their beer either.

reply
I think that the idea is each action uses more tokens, which means that users hit their limit sooner, and are consequently unable to burn more compute.
reply
I'm 99.9% sure Opus 4.7 is a smaller model than 4.6.

Too many signs between the sudden jump in TPS (biggest smoking gun for me), new tokenenizer, commentary about Project Mythos from Ant employees, etc.

It looks like their new Sonnet was good enough to be labeled Opus and their new Opus was good enough to be labeled Mythos.

They'll probably continue post-training and release a more polished version as Opus 5

reply
It could be the adaptive reasoning
reply
If you've not seen Common People Black Mirror episode I strongly recommend it.

The only misprediction it makes is that AI is creating the brain dead user base...

You have to hook your customers before you reel them in!

https://www.netflix.com/gb/title/70264888?s=a&trkid=13747225...

reply
I am having a shit experience lately. Opus 4.7, max effort.

> You're right, that was a shit explanation. Let me go look at what V1 MTBL actually is before I try again.

> Got it — I read the V1 code this time instead of guessing. Turns out my first take was wrong in an important way. Let me redo this in English.

:facepalm:

reply
> I read the V1 code this time instead of guessing

Does the LLM even keep a (self-accessible) record of previous internal actions to make this assertion believable, or is this yet another confabulation?

reply
Yes, the LLM is able to see the entire prior chat history including tool use. This type of interaction occurs when the LLM fails to read the file, but acts as though it had.
reply
This seems like the experience I've had with every model I've tried over the last several years. It seems like an inherent limitation of the technology, despite the hyperbolic claims of those financially invested in all of this paying off.
reply
Opus 4.6 pre-nerf was incredible, almost magical. It changed my understanding of how good models could be. But that's the only model that ever made me feel that way.
reply
That was better, but still not to the point that I just let it go on my repo.
reply
Yes! I genuinely got a LOT of shit done with Opus 4.6 "pre nerf" with regular old out-of-the-box config, no crazy skills or hacks or memory tweaks or anything. The downfall is palpable. Textbook rugpull.
reply
If it isn’t working for you why don’t you choose an older model? 4.6
reply
Matches what I am experiencing. Makes incredible stupid mistakes.

The weird stuff is yesterday I asked it to test and report back on a 30+ commit branch for a PR and it did that flawlessly.

reply
The docs suggest not using max effort in most cases to avoid overthinking :shrug:
reply
They've jumped the shark. I truly can't comprehend why all of these changes were necessary. They had a literal money printing machine that actually got real shit done, really well. Now it's a gamble every time and I am pulling back hard from Anthropic ecosystem.
reply
It seems clear that it was a money spending machine, not a money printing machine.
reply
> he’s making .. mistakes

Claude and other LLMs do not have a gender; they are not a “he”. Your LLM is a pile of weights, prompts, and a harness; anthropomorphising like this is getting in the way.

You’re experiencing what happens when you sample repeatedly from a distribution. Given enough samples the probability of an eventual bad session is 100%.

Just clear the context, roll back, and go again. This is part of the job.

reply
You are being downvoted but I actually agree with your statement.
reply
Why be so upset at someone using pronouns with a LLM?
reply