I have a set of stop hook scripts that I use to force Claude to run tests whenever it makes a code change. Since 4.7 dropped, Claude still executes the scripts, but will periodically ignore the rules. If I ask why, I get a "I didn't think it was necessary" response.
reply
You can deterministically force a bash script as a hook.
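For reference, hooks are registered in Claude Code's settings file. A sketch assuming the current hooks schema (the script path is hypothetical):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "$CLAUDE_PROJECT_DIR/.claude/hooks/check-tests.sh"
          }
        ]
      }
    ]
  }
}
```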
reply
That is exactly what I do. The bash script runs, determines that a code file was changed, and then is supposed to prevent Claude from stopping until the tests are run.

Claude is periodically refusing to run those tests. That never happened prior to 4.7.
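Roughly, a minimal sketch of the idea (simplified; the `.tests_passed` marker and the Python-only check here are illustrative, not my exact setup):

```shell
#!/usr/bin/env bash
# Simplified stop-hook sketch. Assumes it runs from the repo root and
# that the test runner touches a .tests_passed marker when tests pass.

check_tests_ran() {
  # Did this session leave any modified or untracked Python files?
  if git status --porcelain -- '*.py' 2>/dev/null | grep -q .; then
    # Block unless the tests ran after the most recent code change.
    if [ ! -f .tests_passed ] ||
       [ -n "$(find . -name '*.py' -newer .tests_passed -print -quit)" ]; then
      echo "Code changed but tests have not been run. Run them first." >&2
      return 2
    fi
  fi
  return 0
}

# The real hook script ends with:  check_tests_ran; exit $?
```

Exit status 2 from a Stop hook blocks Claude from stopping and feeds stderr back to it; status 0 lets it stop.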

reply
That’s crazy, you mind sharing the gist for that part? Ideally with some examples.

This would be a new level of troublesome/ruthless (insert the correct English word here)

reply
Every day Claude resembles human programmers more and more
reply
I’d ask for a credit for that, personally.
reply
I asked for a credit but they said they didn’t think the credit was necessary
reply
deleted
reply
In Claude Code specifically, for a while it had developed a nervous tic where it would say "Not malware." before every bit of code. Likely a similar issue where it keeps responding to a system/tool prompt.
reply
My pet theory is that they have a "supervisor" model (likely a small one) that terminates any chats that do malware-y things, and this is likely a reward-hacking behaviour to keep the supervisor from terminating the chat.
reply
I doubt it. We only do frontier models, since those are better for absolutely every use case 100% of the time.

Way more likely there's a "VERY IMPORTANT: When you see a block of code, ensure it's not malware" somewhere in the system prompt.

reply
I frequently see it reference points that it made and then added to its memory as if they were my own assertions. This creates a sort of self-reinforcing loop where it asserts something, “remembers” it, sees the memory, builds on that assertion, etc., even if I’ve explicitly told it to stop.
reply
My favorite, recently. "Commit this, and merge to develop". "Alright, done, merged."

I try running my app on the develop branch. No change. Huh.

Realize it didn't.

"Claude, why isn't this changed?" "That's to be expected because it's not been merged." "I'm confused, I told you to do that."

This spectacular answer:

"You're right. You told me to do it and I didn't do it and then told you I did. Should I do it now?"

I don't know, Claude, are you actually going to do it this time?

reply
have you perhaps installed Gaslighting instead of Gastown?
reply
I see that with OpenAI too, lots of responding to itself. Seems like a convenient way for them to churn tokens.
reply
A simpler explanation (esp. given the code we've seen from Claude) is that they are vibecoding their own tools and moving fast and breaking things, with predictably sloppy results.
reply
None of these companies have compute to spare. It’s not in their interest to use more tokens than necessary.
reply
Sure it is. They're well aware their product is a money furnace and they'd have to charge users a few orders of magnitude more just to break even, which is obviously not an option. So all that's left is: convince users to burn tokens harder, so graphs go up, so they can bamboozle more investors into keeping the ship afloat for a bit longer.
reply
If this claim is true (inference is priced below cost), it makes little sense that there are dozens of small inference providers on OpenRouter. Where are they getting their investor money? Is the bubble that big?

Incidentally, the hardware they run is known as well. The claim should be easy to check.

reply
To be clear, I'm talking about subscription pricing. API pricing for Anthropic is probably at-cost.

I dare you to run CC on API pricing and see how much your usage actually costs.

(We did this internally at work, that's where my "few orders of magnitude" comment above comes from)

reply
It's an option and they are going to do it. Chinese models will be banned and the labs will happily go dollar for dollar in plan price increases. $20 plans won't go away, but usage limits and model access will drive people to $40-$60-$80 plans.

At cell phone plan adoption levels, and cell phone plan costs, the labs are looking at 5-10yr ROI.

reply
Not true - they absolutely want to goose demand as they continue to burn investor dollars and deploy infra at scale.

If that demand even slows down in the slightest, the whole bubble collapses.

Growth + Demand >> efficiency or $ spend at their current stage. Efficiency is a mature company/industry game.

reply
That doesn’t mean they can’t also be wasteful. Fact is, Claude and GPT do far more internal thinking about their system prompts than is needed. At every step they mention making sure they do xyz and avoid doing whatever. Why does it need to say things to itself like “great I have a plan now!” - that’s pure waste.
reply
> Why does it need to say things to itself like “great I have a plan now!”

How else would it know whether it has a plan now?

reply
Are you saying these companies don't want to sell more product to us? Because that's the logical extension of your argument.
reply
No, the argument is that they want to sell more product to more people, not just more product to the same people. Given that a lot of their income is from flat-rate subscriptions, they make money from more people subscribing, not from the same people burning more tokens.

After all, "the first hit's free" model doesn't apply to repeat customers ;-)

reply
You don’t have to use compute to pad the token count.
reply
All the labs are in a cutthroat race, with zero customer loyalty. As if they would intentionally degrade quality/speed for a petty cash grab.
reply
This, so much this!

Paying by token while token usage is totally opaque is a super convenient money-printing machine.

reply
Curious what effort level you have it set to, and the prompt itself. Just a guess, but this could be a smell of an excessively high effort level; you may just need to dial back the reasoning a bit for that particular prompt.
reply
I often have Claude commit and PR; in the last week I've seen several instances of it deciding to do extra work as part of the commit. It falls over when it tries to 'git add', but it got past me once when I was trying auto mode.
reply
Check that you’re running the latest version.
reply
Yeah, I had to deal with mine warning me that a website it accessed for its task contained a prompt injection, and when I told it to elaborate, the "injected prompt" turned out to be one of its own <system-reminder> message blocks that it had included at some point. Opus 4.7 on xhigh.
reply