First hit is free… got to get you hooked.
reply
How much better is it than Claude? I have both but Claude sucks up so many tokens.
reply
Compaction is basically seamless in Codex; it's a major weak point of Claude. At effort=low, Claude is better than Codex but still slower. If you don't mind trading upfront quality of work for more micromanaging at a faster speed, it's fine. I also think that, for that very reason, you absorb more of the code.
reply
5.5 is absolutely comparable to Opus 4.7 (both on highest effort), maybe even better. It generally seems less lazy and faster, and it writes code closer to what I'd write. The only downside is that for very, very long tasks it can kind of lose track of the goal. For tasks under ten minutes I'll go with Codex every time.
reply
The main difference is in the frontend skills. GPT produces terrible design. What I do these days is ask Opus to produce an HTML mockup, then feed it to Codex.
reply
I have not had problems with long goals. I let it chomp for 40 minutes on a proof in my custom theorem prover (xhigh fast), and it got there. Very happy with Codex; I ditched Claude for it.
reply
They've added a new goal mode that might help with that.
reply
I switched some time after Anthropic bricked their models with adaptive thinking. It's a legit mystery to me how people are still using CC professionally.

Codex is far less frustrating and manages context better. It's also costing me about 1/3rd as much as Opus 4.7 on CC.

reply
For me, the only way to keep using CC has been to stick to 4.6 1M.
reply
I stopped trying to use Claude for anything with 4.7 because it sucks up so many tokens so quickly. I still use the 4.6 model, and I've switched to Codex for larger tasks. Codex also works better than Claude at more complex coding tasks, like web apps with Python backends and TypeScript frontends.
reply
Less gibliterrating and more doing

Very fast

reply
Can’t you just turn off training on your data in the settings?
reply
I think it's free for about 2 useful requests and then you have to upgrade or wait?
reply
Switching to GPT-5.4-mini can increase the number of requests you can use for free.
reply
So basically a $20 Claude plan lmao
reply
I stopped using my Claude subscription because it became so prohibitive. Back to ChatGPT and Codex full time and been pretty happy. I miss the tone/writing style of Claude, but don't miss the frustration of being told I've reached my plan limits in a comically short amount of time.
reply
Using these prompts/steering[0], and setting Base style to Friendly, Warm to More, Enthusiastic to Default, and Headers, Lists, and Emoji to Less, I've found I can get gpt-5.5 about... 80% of the way to writing as non-annoyingly as Claude. And it's so much faster and has so much higher limits that it's worth it for me.

I also put together this ridiculous thing[1] because I missed the font and color scheme of Claude.

[0] https://gist.githubusercontent.com/dmd/91e9ca98b2c252a185e8e...

[1] https://github.com/dmd/aimpostor

reply
How do you fit that entire prompt into the custom instructions?
reply
Some of it is in my custom instructions; some of it I fed in a piece at a time, saying "remember this please:" so it goes into Memories.

I'm not entirely clear on the mechanism by which Memories make it into context, so it's possible some of it isn't there all the time, but it does seem to be working reasonably well.

Again, it's not as good as Claude when it comes to writing "not like an AI". But it's significantly better than it was.

reply
Thanks, I’ll give those a try!
reply
FYI I'm actively working on aimpostor, so check back in a couple days for some quality improvements. (I'm definitely not going to bother with a Sparkle updater or anything like that.)
reply
On Codex I ran into limits maybe two times in three months, after doing several "upgrade this experimental game to my latest shared framework" passes on 5.5 Extra High.
reply
On which plan?

I can go through a 5-hour limit with a $20/mo Plus subscription in a few minutes with 5.5 Extra High. This causes me to reserve the latest/best rev for the harder problems.

5.5 really does seem to be far superior to 5.4, but it's also very expensive to run: the gas gauge moves fast. It's not clear whether 5.5 will cost less to get a problem solved quickly, or whether a bunch of automatic iterations of 5.4 will solve it less expensively. Both are often frustrating to me on the $20 plan.

(Also: Are you sure you're seeing it right? 5.5 has been in the wild for less than a month, so far. https://openai.com/index/introducing-gpt-5-5/ )

reply
The standard $20 plan, on my existing Godot code: https://github.com/InvadingOctopus/comedot

Most of the commits from the last few months are thanks to Codex reviews (but the code is not AI generated): 5.5 since it came out, and 5.4 etc. before that, almost always on Extra High, because it's for a framework that underlies the other stuff I do, so I want to make sure everything's correct.

Sometimes I have to run multiple passes on the same task: I rarely continue any session beyond 4-5 prompts, to avoid "bloat" and accumulating "stale context", so sometimes Codex finds different stuff in subsequent reviews of the same file/subsystem.

The project is modular enough that each file can be considered standalone, with only 1-2 dependencies, and I already used to write a lot of comments everywhere (something some people laughed at), so maybe that helps the AI along?

reply
Thanks. That's good data.

I'm taking this, along with my own experience, to mean that the GPTs are cheaper to use for refactors of an existing body of work than they are for creating a new one.

(And perhaps part of that is in the name? These "LLM" contraptions are very good at translation, after all. And tokens seem to relate more to concepts than to specific phrases or words.)

reply
That's the current state of the $20 Claude plan, despite them twice this week claiming better usage: first "double 5-hour usage", then 50% more overall usage per week.

MAYBE the 50% overall is true, but the double usage during a 5-hour window? I just don't see it at all. I've maxed out three 5-hour windows since this happened, and there's no chance it was double the normal amount: I ate up about 4-5% of my weekly total each time (it was ~10% each time pre-announcements). I wish I could give token numbers, but they're obscured; I just know it was around 120k on 4.6 with some delegation to Sonnet subagents.

So SURE, it's almost certainly more allotted weekly, but if those totals are consistent for 5-hour blocks, you've got to split your daily usage into at least three sessions with 5 hours between them to even hit that weekly limit. It's unreal how much they have burned their good reputation in a two-month stretch, and I'm positive it's also being astroturfed by bots more than happy to advance the narrative.

The internet is annoying and these tools are overall cool; I just wish Anthropic would go back to being semi-predictable.

reply
I was really unimpressed by the free Codex (for Node.js/React dev). I think it must be using a less powerful model, or they're limiting it in some other way.
reply
Are you specifically pointing at a different experience between free + paid? Or just that the free version is unimpressive?

I'm using paid on TypeScript and it's genuinely terrific. Subjectively I think it has the edge over Opus.

I'd be surprised if OpenAI is hamstringing the free version. That would seem crazy from a GTM PoV. If anything the labs seem to throttle the heavy paid users.

reply
Yes, the free version doesn't have access to the same models that the paid one does.
reply
You have access to 5.5 xhigh on free. Which model is missing, except the 5.3 that runs on Cerebras?
reply
It's only missing the trash models. Likely a user skill issue.
reply
The free version of ChatGPT is definitely worse as well. My SO uses the free version, and I can see a significant downgrade.
reply
Post your chat session
reply
Can Codex chats be shared? (This is a genuine question; so far, I've only used Codex in the CLI on Linux.)
reply
Via a JSONL file.
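
A minimal sketch of pulling a readable transcript out of one, assuming the CLI keeps sessions as JSON-lines files under ~/.codex/sessions/ (that path and the record schema are assumptions on my part and vary by version):

    import json
    from pathlib import Path

    # NOTE: path and record layout are assumptions; Codex CLI versions
    # differ in where and how they store session logs.
    sessions = sorted(Path.home().glob(".codex/sessions/**/*.jsonl"))
    if not sessions:
        raise SystemExit("no session files found")

    # Walk the most recent session line by line; each line is one JSON record.
    for line in sessions[-1].read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        # Skip tool calls and metadata; only print records that look like
        # chat turns with a role and some content.
        if isinstance(record, dict) and record.get("role") and record.get("content"):
            print(f"--- {record['role']} ---")
            content = record["content"]
            print(content if isinstance(content, str) else json.dumps(content, indent=2))

The output is plain text you can paste anywhere, which is about as "shareable" as it gets right now.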
reply
I'm unimpressed by all LLMs, and especially unimpressed by the people claiming to be impressed by them.
reply