upvote
Why is this a red flag? OpenClaw is basically automated abuse of their subscription plans. This is entirely reasonable.
reply
What I don't quite understand is why one of the most advanced AI labs would use rudimentary, broken text-match heuristics to track and detect abuse. Why not run simple inference on the actual turns out of band and, if abuse is detected, adjust the quotas semi-retroactively?
reply
> What I don't quite understand is why one of the most advanced AI labs would use rudimentary, broken text-match heuristics to track and detect abuse.

It's vibe-coded. What's hard about understanding that?

reply
> most advanced AI labs use rudimentary broken text match

> It's vibe-coded

I called this out a while back when I saw the Claude Code CLI source reach for a regex on a certain task, and got told it was very unlikely that nobody had reviewed the diff. Looks like the bar was lower than imagined.

reply
They’re idiots who hacked together a shockingly useful tool by leveraging the billions of dollars they received from shamelessly hyping up chatbots. The Claude Code leak makes this very clear.
reply
Pretty wild to say that the company with one of the best models (arguably the best) is a bunch of idiots.
reply
The people working on the models almost certainly aren't the same people writing the code for their harness.
reply
> Pretty wild to say that the company with one of the best models (arguably the best) is a bunch of idiots.

It would be pretty wild if they didn't have one of the best models, considering all the money thrown at them!

You're looking at one of the largest investments business (as a collective) has ever made. They had better be one of the forerunners in the space :-/

reply
And you think with all of this money they are employing idiots?
reply
They're completely vibe-coding one of their flagship products. It's not unreasonable to consider that the people who took that decision are, indeed, idiots.
reply
Even idiots can succeed if you uncritically funnel them hundreds of billions of dollars.
reply
You can't just burn money in a pit to get the best AI model out. Undoubtedly some of the smartest people in the world are working on frontier AI.
reply
deleted
reply
Maybe running additional inference on every session to detect OpenClaw usage would cost more money than the detection would save in the first place (which is the original goal). I also suspect the Claude Code team is just a regular software team without immediate access to ML pipelines (or the competence to run them) that would let them quickly build a proper abuse-detection system with extensive testing (to avoid false positives, which people would also complain about). If they're under pressure from management to do something right now, a regex is all they can do within those constraints.
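To make the cost tradeoff concrete, here's a back-of-envelope sketch. Every number is invented for illustration; nothing here reflects Anthropic's actual costs:

```python
# Back-of-envelope: is inference-based abuse detection worth it?
# All figures below are hypothetical.
sessions_per_day = 1_000_000
detection_cost_per_session = 0.002   # extra out-of-band inference pass
abusive_fraction = 0.01              # assumed share of abusive sessions
savings_per_abusive_session = 0.10   # compute saved by throttling abuse

detection_cost = sessions_per_day * detection_cost_per_session
savings = sessions_per_day * abusive_fraction * savings_per_abusive_session

print(f"daily detection cost: ${detection_cost:,.0f}")  # $2,000
print(f"daily savings:        ${savings:,.0f}")         # $1,000
# Under these assumptions, detection costs more than it saves --
# which is exactly the regime where a near-free regex wins.
```

If the assumed abuse rate or per-session savings were higher, the conclusion would flip, which is the whole argument in a nutshell.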
reply
> Why not run simple inference on actual turns out of band, and if abuse is detected, adjust the quotas semi-retroactively.

I suppose it's because running inference of any kind is a helluva lot more demanding than running a regex, and less deterministic.

reply
This is fascinating because it makes me think OpenClaw is something of a trojan horse aimed at draining Anthropic's resources. For them to go to this length to stop OpenClaw usage raises some interesting questions and sets a precedent for closed-model vendors.
reply
Why do they treat it as a trojan horse? More OpenClaw usage means more Claude usage. Isn't more Claude usage what Anthropic wants?
reply
Not when their customers are paying a flat rate subscription.
reply
Calling current AI subscription services (especially Claude) "flat rate" (implying unlimited access for a flat fee) is misleading. There are pretty strict hourly, daily, weekly, and monthly limits, so every one of these subscriptions has an easy-to-reach ceiling. They're hardly unlimited, and given how easy it is to hit the limits, per-customer costs likely have low enough variance that an accounting department can figure out the average cost per customer.
reply
Is "flat rate" the best way to describe it when there are actually a few different tiers, each with hard-coded rate limits?
reply
Within each tier, each marginal token is an expense with no marginal revenue to offset it. So yes. The platonic ideal for any subscription business model is zero usage.
reply
I run an AI subscription business, and our pricing is set so that we make an acceptable profit even if all users were to max out their given usage.
reply
Of course. My point is that your profit still decreases as you approach max usage, ceteris paribus. It may be acceptable, but it is less. Your costs are variable and your revenue is fixed (at least on a per-unit basis).
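To spell out the arithmetic with made-up numbers (a sketch of the fixed-revenue/variable-cost point, not anyone's actual economics):

```python
# Hypothetical flat-rate subscription economics. All numbers invented.
MONTHLY_PRICE = 100.00       # fixed revenue per subscriber
COST_PER_M_TOKENS = 5.00     # assumed serving cost per million tokens

def monthly_profit(tokens_used: int) -> float:
    # Revenue is fixed; cost scales with usage, so every marginal
    # token reduces profit with no offsetting marginal revenue.
    return MONTHLY_PRICE - (tokens_used / 1_000_000) * COST_PER_M_TOKENS

print(monthly_profit(0))           # 100.0 -- the "platonic ideal": zero usage
print(monthly_profit(10_000_000))  # 50.0
print(monthly_profit(19_000_000))  # 5.0  -- still acceptable, but less
```

Profit stays positive up to the tier's rate limit only if the price was set with the worst-case usage in mind, which is the parent's point about "acceptable profit."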
reply
No, not for us, because we have a lot of different tiers, so as users' usage increases they buy bigger and bigger plans.
reply