I've always been a believer in the "post-honeymoon new model phase" being a thing, but if you look at their analysis of how often the postEdit hooks fire + how Anthropic has started obfuscating thinking blocks, it seems fishy and not just vibes
reply
I was in this camp as well until recently. In the last 2-3 weeks I've been seeing problems I wasn't seeing before, largely in line with the issues highlighted in the ticket (ownership dodging, hacky fixes, not finishing a task).
reply
Nope, there is a categorical degradation in output quality, especially on medium-to-high-effort thinking tasks.
reply
What about the evidence from the analysis?
reply
You mean the Claude output? The same Claude that has "regressed to the point it cannot be trusted"?
reply
Are you saying the OP fabricated/hallucinated the evidence?
reply
I'm just saying it's epistemically unrigorous to the point of being equivalent to anecdata.
reply
How should one conduct such a rigorously reproducible experiment when LLMs aren't deterministic by nature, and when you don't have access to the model from months ago that you're comparing against?
reply
Something like this: https://marginlab.ai/trackers/claude-code/ (see methodology section)
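To spell out the general idea (this is just a sketch of how such a tracker could work, not their actual harness; `run_task` and the task names below are made up): run a fixed, versioned task suite against the model repeatedly, average the pass rates so nondeterminism washes out, and log one record per day so a regression shows up as a sustained drop against the historical baseline rather than a single noisy run.

```python
# Minimal sketch of a regression-tracking harness (hypothetical, not the
# linked site's actual methodology). `run_task` stands in for whatever
# sends the prompt to the model and verifies the output (e.g. unit tests).
import datetime
import json
import random
import statistics

TASKS = [
    {"name": "fix-off-by-one", "prompt": "..."},
    {"name": "add-unit-test", "prompt": "..."},
]

def run_task(task: dict) -> bool:
    # Hypothetical stand-in: a real harness would call the model with
    # task["prompt"] and check the result. Here it just simulates pass/fail.
    return random.random() < 0.8

def daily_pass_rates(tasks: list, samples: int = 20) -> dict:
    # Repeat each task `samples` times so per-run randomness averages out.
    return {
        t["name"]: sum(run_task(t) for _ in range(samples)) / samples
        for t in tasks
    }

if __name__ == "__main__":
    rates = daily_pass_rates(TASKS)
    record = {
        "date": datetime.date.today().isoformat(),
        "mean_pass_rate": statistics.mean(rates.values()),
        "per_task": rates,
    }
    # Append one record per run; compare against the historical distribution
    # to decide whether a drop is real or just noise.
    with open("history.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
```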
reply
Kudos for the methodology. The only question I can come up with is whether the benchmarks are representative of daily use.

Anecdotal or not, we see enough reports popping up to at least raise some suspicion of service degradation that isn't shown in the charts. My hypothesis is that the degradation users are experiencing, assuming there is merit in the anecdotes, isn't picked up by this kind of tracking strategy.

reply
To be clear, it's not my methodology, but they have picked up actual regressions in the past - e.g. https://news.ycombinator.com/item?id=46815013
reply
[deleted]
reply
I suspect you might be right but I don't really know. Wouldn't these proposed regressions be trivial to confirm with benchmarks?
reply