I am on Google's $20/month plan, and I usually get about three half-hour coding sessions a week with AntiGravity using the Claude models. The limit using Gemini Pro models is much higher. I am retired so Google's $20 plan is sufficient for me, but I understand that people who are still working would need higher limits.

I am also on a $10/month plan with Nous Research for supplying open models for their open source Hermes Agent. I run Hermes inside a container, on a dedicated VPS as a coding agent for complex tasks and so far I find the $10/month plan is enough for about five to ten major tasks a month. I think it is also a good deal.

reply
In my experience, Codex is better than Claude Code in every way, and GPT-5.4 is on par with or better than Opus 4.6 at every coding task I give it.

You're really not going to miss CC. And OpenAI actually had the foresight to invest massively in compute, so they don't constantly run into the usage and rate limits that Anthropic does. I couldn't use CC for more than a couple of complex tasks before I was out of extra usage or session usage. It was a maddening productivity killer, and I just switched to Codex full time.

reply
> the very community they are trying to court

After all, we may have been just a data source, and not their intended demographic, all along.

reply
The valuation is obviously based on the premise of their capturing the white-collar economy. OpenAI's charter says so openly. And Chinese robots will come for blue-collar workers next.
reply
The economy, not the workers :) It feels like pretty soon white-collar workers will be in a “You have nothing to lose but your chains” situation. Except we are not as fit as the proletariat of the past.
reply
If I could get the equivalent of GPT-4 running locally, that would cover like 95% of what I need an LLM for: tweak this Dockerfile, gimme a bash script. I guess the context window probably isn’t sufficient for the agent stuff, but I’m sure more context-efficient harnesses will be coming down the line.
reply
I have an old Mac Mini with 32 GB of unified memory, and the following works for me for small local code changes:

ollama launch claude --model qwen3.6:35b-a3b-nvfp4

In addition to not having an integrated web search tool, one drawback is that it runs more slowly than the cloud services. I find myself asking for a code or documentation change, then spending two minutes on my deck getting fresh air while I wait for the slower response. When using a fast cloud service I can be a coding slave, glued to my computer. Still, I like running locally when I can!
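For reference, the same locally served model can also be driven directly over ollama's HTTP API. This is a minimal sketch assuming ollama's default port (11434) and reusing the model tag from the command above; substitute whatever model you actually have pulled:

```shell
# Ask the local model for a small code change via ollama's /api/generate
# endpoint. "stream": false returns one JSON object instead of chunks.
curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen3.6:35b-a3b-nvfp4",
  "prompt": "Write a bash one-liner that prints the five largest files in the current directory.",
  "stream": false
}'
```

The reply is a single JSON object whose `response` field holds the model's answer, which is handy for piping into scripts with `jq -r .response`.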

reply
> I guess I will be trying the latest offering from OpenAI and Google tomorrow and if they are satisfactory I might just switch.

If Anthropic’s move is confirmed, my guess is other coding-agent providers might end up making similar moves.

reply
GPT xhigh isn't that bad.
reply
This is the definition of a cartel.
reply
Kimi K2.6 is supposedly good: https://www.kimi.com/blog/kimi-k2-6
reply
GPT-5.4 has been performing great in my harness.
reply
I have Codex and Gemini for spillover; they work well.
reply