Would imagine it's the simplest answer: they're flying by the seat of their pants. There are a thousand things demanding attention every day and not enough of it to go around. They toss their LLM at the problem, give the output a cursory glance, and ship it. A quick look at the Claude Code source bears this out. The fundamental question is: if their model is so powerful, why do they keep fucking up such simple things? We're led to believe this is a serious company with a model so powerful they can't release it to the general public.
reply
Hermes is one of these OpenClaw clones, so this was certainly intentional, not a model hallucinating something.

I think the problem is clear. Anthropic saw their usage go up much more than their capacity could handle. There are a few tried and true solutions to this, like "increase the price" or "restrict signups so you can guarantee service to what you have already sold".

Then there is the "large scale fraud" option, where you materially change and degrade the service you have already sold. Just because you have obfuscated and misled in how you describe the product doesn't mean you get to capture the cash flow of one-year subscriptions and then not honor that contract for its full duration.

reply
> Hermes is one of these OpenClaw clones

So that's what it is. Reading its README I thought it was another harness like Pi [1], but with built-in memory so it remembers what it learns, and gets more capable the longer it runs.

Like Letta [2], Dirac [3][4] and the other "more experimental harnesses that look interesting but I haven't had time to try out".

1. https://pi.dev/

2. https://www.letta.com/

3. https://dirac.run/

4. https://news.ycombinator.com/item?id=47920787

reply
Mind pointing out where exactly in the contract you were allowed to use OpenClaw?
reply
Non-Claude client access is not permitted in the terms and conditions, except via API key.

The correct implementation of this condition by Anthropic on the server side would be to block usage by non-Claude apps via Claude's authentication mechanism, and allow it via the per-token API key billing.

Instead of returning a simple 403 error to block the usage, they silently redirect it to a different billing bucket, which is not ethical behaviour, especially since the detection is based on fuzzy heuristics.
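To make the distinction concrete, here is a minimal sketch of the two behaviours described above. All names (`AuthMethod`, `is_claude_client`, the user-agent check) are illustrative assumptions, not Anthropic's actual server code:

```python
# Hypothetical sketch: the commenter's proposed enforcement vs. the
# alleged behaviour. Nothing here reflects Anthropic's real implementation.
from dataclasses import dataclass
from enum import Enum, auto


class AuthMethod(Enum):
    SUBSCRIPTION = auto()  # Claude login / subscription auth
    API_KEY = auto()       # per-token billing


@dataclass
class Request:
    auth: AuthMethod
    user_agent: str


def is_claude_client(req: Request) -> bool:
    # A fuzzy heuristic; the thread's point is that any such
    # client detection is inherently unreliable.
    return req.user_agent.lower().startswith("claude")


def handle_proposed(req: Request) -> tuple[int, str]:
    """What the commenter says should happen server-side."""
    if req.auth is AuthMethod.API_KEY:
        return 200, "bill per token"  # always allowed via API key
    if is_claude_client(req):
        return 200, "bill against subscription"
    # Third-party clients on subscription auth get refused outright.
    return 403, "non-Claude clients must use an API key"


def handle_alleged(req: Request) -> tuple[int, str]:
    """The alleged behaviour: no error, just a quiet billing switch."""
    if req.auth is AuthMethod.API_KEY:
        return 200, "bill per token"
    if is_claude_client(req):
        return 200, "bill against subscription"
    # Silently charged to a different bucket instead of being blocked.
    return 200, "bill to different bucket"
```

Under the proposed scheme a third-party client like OpenClaw would see an explicit 403 and know to switch to an API key; under the alleged one it gets a 200 either way and never learns its billing changed.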

reply
I doubt an AI would be stupid enough to write code like that without being explicitly prompted to do so. It's so... specific.

That specificity means it would get caught by even the most cursory code review.

Even if I were just "scanning my eyeballs over the code" without properly reading it, this would jump out as very odd and make me pause.

reply
Vibes were strong, dude. Don't blame the dev, blame the bots, brah. They forgot to use mythos, obviously, otherwise this wouldn't happen. Simple mistake.
reply
Anthropic obviously vibe-codes everything, and it shows.
reply