I just assume they realized they can split the offering and charge more for the top tier. (Yes, even more.)

If Claude Code can replace an engineer, it should cost just a bit less than an engineer, not half as much.

reply
But then you pay the less outrageously subsidized API rates instead of the incredibly generous subscription prices.
reply
It's not subsidized; in fact, they probably have very healthy margins on Claude Code.
reply
Yeah. If you ignore the negligible fact that some investor may want a return on all that money going into capex, I am pretty sure you can, Enron-style, get to the conclusion that any of these companies has "healthy" margins.
reply
Why do you think that?
reply
DeepSeek had a theoretical profit margin of 545% [1], with much inferior GPUs, at 1/60th the API price.

Anthropic's Opus 4.6 is a bit bigger, but they'd have to be insanely incompetent to not make a profit on inference.

[1] https://github.com/deepseek-ai/open-infra-index/blob/main/20...

reply
> they'd have to be insanely incompetent to not make a profit on inference.

Are you aware of how many years Amazon didn’t turn a profit?

Not agreeing with the tactic - just…are you aware of it?

reply
Because if you don't, then current valuations are a bubble propped up by burning a mountain of cash.
reply
That's not how valuations work. A company's valuation is typically based on an NPV (net present value) calculation: the sum of its time-discounted future cash flows. Depending on the company's strategy, it's often rational for it to not be profitable for quite a long while, as long as it can give investors the expectation of significant profitability down the line.
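
For the numerically inclined, a toy version of that calculation (the cash flows and the 10% discount rate below are made-up illustration numbers, not anyone's actual projections):

    # Toy NPV: discount each future cash flow back to today and sum them.
    # Early losses (the "burn") can still leave a positive NPV if the
    # later profits are big enough.
    def npv(rate, cash_flows):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    flows = [-5.0, -3.0, -1.0, 4.0, 8.0, 12.0]  # hypothetical, in $B, years 0..5
    print(round(npv(0.10, flows), 2))  # ~7.37: positive despite three loss years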

Having said that, I do think that there is an investment bubble in AI, but am just arguing that you're not looking at the right signal.

reply
And that's OpenAI's biz model? :)
reply
Remember, there are no moats in this industry; at best one company might have a two-month lead, sometimes. We've also seen that companies paying OpenAI will shift to paying Google or Anthropic in a heartbeat.

That means the pricing is going to be competitive. You may still get your wish, though: instead of the price of an engineer staying where it is, it will cut itself down by 95%.

reply
I don't know about you, but I benefit so much from using Claude at work that I would gladly pay $80,000-$120,000 per year to keep using it.
reply
Why would you gladly pay more than what it's worth? It's not an engineer you are hiring, it's AI. The whole point of it was to make intelligent workflows cheaper. If it's going to cost as much as an engineer, hire the engineer, at least you'd have an escape goat when things invariably go wrong.
reply
> an escape goat

Autocorrect hall of famer, there.

reply
Scapegoat, got it. Can't blame the autocorrect though... I honestly thought it was spelled like that, which is a shame since I've been studying English my entire life as a second language.
reply
There's a name for this sort of phenomenon...

https://eggcorns.lascribe.net/english/242/escape-goat/

reply
At least that misunderstanding didn’t cause a nuclear accident: https://practical.engineering/blog/2025/4/15/when-kitty-litt...
reply
Luckily these strayed goats weren't irradiated
reply
I agree with you, I was just joking.
reply
Oh now I see... Joke's on me then I guess :D
reply
It wasn't clear to me that this was a joke either. I assume the same for others since the post is grayed out.
reply
Oh come on. That pays for more than 10 FTEs in some countries.
reply
I made this joke with "$1,500-$2,000 per month" last night and everyone thought I was serious.
reply
I know people who burned several hundred dollars a day and still found it worth it.
reply
Were they actually making money though? A lot of the people on the forefront of this AI stuff seem like cult leaders and crackheads to me.
reply
I'd pay up to $1000 pretty easily, just based on the time it saves me on a lot of grindy work, which frees me up for higher-value stuff.

It's not 10x by any means, but at most dev salaries it doesn't need to be to pay for itself. A 1.5x improvement alone is probably enough for most above-junior developers for a company to justify $1000/month.

I suppose if your area of responsibility isn't very broad, the value would decrease pretty quickly, so maybe less value for people at very large companies?

reply
I can see $200 but $1,000 per month seems crazy to me.

Using Claude Code for one year is worth the same as a used sedan (i.e., ~$12,000) to you?

You could be investing that money!

reply
Yes, easily. Paying for Claude would be investing that money. Assuming a 10% return, which would be great, I'd make an extra $1,200 a year investing it instead. I'm pretty sure that over a year of not having to spend time on low-value or repetitive work, I can increase my productivity enough to more than cover the $13k difference. Developer work scales really well, so removing a bunch of the low-end work and freeing up time for the more difficult problems is going to return a lot of value.
reply
I would probably pay $2000 a month if I had to - it's a small fraction of my salary, and the productivity boost is worth it.
reply
It's *worth it* when you're salaried? Compared to investing the money? Do you plan to land a very-high-paying executive role years down the line? Are you already extremely highly paid? Did Claude legitimately 10x your productivity?

edit: Fuck I'm getting trolled

reply
I'm serious - the productivity boost I'm getting from using AI models is so significant that it's absolutely worth paying even $2k/month. It saves me a lot of time and enables me to deliver new features much faster (making me look better to my employer), both of which would justify spending a small fraction of my own money. I don't have to, because my employer pays for it, but as I said, if I had to, I would pay.
reply
I am not paying this myself, but the place I work at is definitely paying around $2k a month for my Claude Code usage. I pay 2 x $200 for my personal projects.

I think personal subs are subsidized while corporate ones definitely are not. I have CC for my personal projects running 16h a day with multiple instances, but work CC still racks up way higher bills with less usage. If I had to guess, my work CC uses a quarter of the tokens for 5x the cost, so at least a 20x difference in effective price.

I'm not going to say it has 10xed my productivity or whatever, but I would never have built all the things I now have in that timeframe.

reply
I don't know why you keep insisting that no one is making any money off of this. Claude Code has made me outrageously more productive. Time = Money right?
reply
What do you use it for? Do you have an example? For you to be OK with paying $80k to $120k, I'm guessing it's making you $375-450k a year?
reply
I'm joking, my point is that it's already quite expensive and I don't think it's making anyone money.
reply
That means customers will pay a minimum of 2x that much, I think.
reply
STFU right now because the more you bring this up the more likely it'll happen.

Similarly, STFU about the stuff that can give LLMs ideas for how to harm us (you know what I'm talking about, it's reptilian based)

The whole comment thread is likely to have been read by some folks at Anthropic. Already a disaster. Just keep on with the "we hate it unless it gets cheaper" discourse please!!!

reply
Patching's not long for this world; Claude Code has moved to binary releases. Soon, the NPM release will just be a thin wrapper around the binary.
reply
Where there's a will, there's a way
reply
> It's very clear that Anthropic doesn't really want to expose the secret sauce to end users

Meanwhile, I am observing precisely how VS+Copilot works in my OAI logs with zero friction. Plug in your own API key and you can MITM everything via the provider's logging features.
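
For clients that let you point the base URL somewhere else, a tiny local logging proxy gets you the same visibility even when the provider doesn't surface request logs. A rough sketch (assumes Flask and requests are installed, ignores streaming, and the port is arbitrary):

    # Minimal logging proxy for an OpenAI-compatible API: point the tool's
    # base URL at http://localhost:8080/v1 and every request body it sends
    # gets printed before being forwarded upstream.
    import json
    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)
    UPSTREAM = "https://api.openai.com"

    @app.route("/<path:path>", methods=["POST"])
    def proxy(path):
        body = request.get_json(force=True, silent=True) or {}
        print(json.dumps(body, indent=2))  # the prompts/tools the client really sends
        upstream = requests.post(
            f"{UPSTREAM}/{path}",
            json=body,
            headers={"Authorization": request.headers.get("Authorization", "")},
        )
        return Response(upstream.content, status=upstream.status_code,
                        content_type=upstream.headers.get("Content-Type"))

    if __name__ == "__main__":
        app.run(port=8080)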

reply
> to end users

To other actors who want to train a distilled version of Claude, more likely.

reply
If they cared about that, they wouldn't expose the thinking blocks to the end-user client in the first place; they'd have the user-side context store hashes to the blocks (stored server-side) instead.
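
A rough sketch of that scheme, purely hypothetical (none of these names correspond to anything in Anthropic's actual API):

    # The server keeps the raw thinking block and hands the client only an
    # opaque digest; on later turns the client sends the digest back and the
    # server rehydrates the block, so the text itself never leaves the server.
    import hashlib

    server_store: dict[str, str] = {}  # digest -> thinking block, server side

    def store_thinking(block: str) -> str:
        digest = hashlib.sha256(block.encode()).hexdigest()
        server_store[digest] = block
        return digest  # only this opaque token ever reaches the client

    def rehydrate(digest: str) -> str:
        return server_store[digest]  # looked up server-side on the next turn

    token = store_thinking("...chain of thought the vendor wants to hide...")
    assert rehydrate(token).startswith("...chain")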
reply
I don't suppose you could share a little on that patching process?
reply
More likely 99.9% of users never press ctrl+o to see the thinking, so they don't consider it important enough to make a setting out of.
reply
Honestly, just use OpenCode. It works with Claude Code Max, and the TUI is 100x better. The only thing that sucks is compaction.
reply
How much longer is Anthropic going to allow OpenCode to use Pro/Max subscriptions? Yes, it's technically possible, but it's against Anthropic's ToS. [1]

1: https://blog.devgenius.io/you-might-be-breaking-claudes-tos-...

reply
Consider switching to an OpenAI subscription, which allows OpenCode use.
reply
Yeah. OpenAI allows any client, but only a single fixed system prompt. All their control is on the backend, which is worse than Claude.
reply
Doesn't Claude Code have an agent SDK that officially allows you to use the good parts?
reply
Yes, but you can't use a subscription with that.
reply
There are also Azure versions of Opus
reply
I have been unable to use OpenCode with my Claude Max subscription. It worked for a while, but then it seems like Anthropic started blocking it.
reply
What’s 100x better about the TUI?
reply
Nope, OpenCode is nowhere near Claude Code.

It's amazing how much other agentic tools suck in comparison to Claude Code. I'd love to have a proper alternative. But they all suck. I keep trying them every few months and keep running back to Claude Code.

Just yesterday I installed Cursor and Codex, and removed both after a few hours.

Cursor disrespected my setting to ask before editing files. Codex renamed my tabs after I had named them. It also went ahead and edited a bunch of my files after a fresh install without asking me. The heck, the default behavior should have been to seek permission at least the first time.

OpenCode does not allow me to scroll back and edit a prior prompt for reuse. It also keeps throwing up all kinds of weird errors, especially when I'm trying to use free or lower-cost models.

Gemini CLI reads strange Python files when I'm working on a Node.js project, what the heck. It also never fixed the diff display issues in the terminal; it's always so difficult for me to see what edits it is actually trying to make before it makes them. It also frequently throws random internal errors.

At this point, I'm not sure we'll be seeing a proper competitor to Claude Code anytime soon.

reply
I use OpenCode as my main driver, and I haven't experienced what you describe.

For instance, OpenCode has an /undo command which allows you to scroll back and edit a prior prompt. It also supports forking conversations from any prior message.

I think it depends on the setup. I overrode OpenCode's default planning agent prompt to fit my own use cases and my own MCP servers. I've been using OpenAI's GPT codex models, they have been performing very well, and I am able to make them do exactly what I ask.

Claude Code may do stuff fast, but in terms of quality and the ability to edit only what I want, I don't think it's the best. Claude Code often takes shortcuts or does extra stuff that I didn't ask for.

reply
Hmmm, I used OpenCode for a while and didn't have this experience. I felt like OpenCode was the better experience.
reply
Same, I still use CC mainly due to it being so wildly better at compaction. The overall experience of using OpenCode was far superior - especially with the LSP configured.
reply
5.3 Codex on Cursor is better than Claude Code.
reply
Not in my (limited) experience. I gave CC and Codex detailed instructions for reworking a UI, and Codex did a much worse job and took 5x as long to finish.
reply
I thought the source code for the actual CLI was closed source. How are you patching it?
reply
Claude Code can reverse engineer it to a degree. Doing it for more than a single version is a PITA though. Easier to build your own client over their SDK.
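
If all you need is a bare-bones client, the plain Anthropic Python SDK (as opposed to reverse-engineering the CLI) gets you surprisingly far. A minimal sketch; the model name is a placeholder and this goes through the metered API key, not a Pro/Max subscription:

    # Bare-bones custom client over the Anthropic Python SDK (pip install anthropic).
    # Reads ANTHROPIC_API_KEY from the environment.
    import anthropic

    client = anthropic.Anthropic()

    def ask(prompt: str) -> str:
        msg = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder; use whatever model you're on
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

    if __name__ == "__main__":
        print(ask("Explain this stack trace: ..."))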
reply
To be fair, they have like 10,000 open issues / spam issues; it's probably insane for them to filter through all of it, haha.
reply
GitHub Issues as a customer support funnel is horrible. It's easy for them, but it hides all the important bugs and only surfaces "wanted features" that get thumbs-up'd a lot. So you see "Highlight text X" as the top requested feature; meanwhile, 10% of users hit a critical bug, but they don't all find the GitHub issue that one user poorly wrote about it, so it has like 7 upvotes.

GitHub Codespaces has a critical bug that makes the Copilot terminal integration unusable after one prompt, but the company has no idea, because there is no clear way to report it from the product, no customer support funnel, etc. There are 10 upvotes on a poorly written, sorta-related GH issue and no company response. People are paying for this feature and it's just broken.

reply
Maybe they can use AI to figure out which ones are actually useful and which ones are not.
reply
Humans don't look at these anymore, Claude itself does. They've even said so.
reply
I think it's more classic enshittification. Currently, as a percentage, still not many devs use it. In a few months or 1-2 years, all these products will start to cater to the median developer and get dumbed down.
reply