I'm currently "working" on a toy 3D Vulkan/PhysX thingy. It has a simple raycast vehicle, and I'm trying to replace it with the PhysX 5 built-in one (https://nvidia-omniverse.github.io/PhysX/physx/5.6.1/docs/Ve...)
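For context, a "simple raycast vehicle" like the one being replaced usually means one downward raycast per wheel feeding a spring/damper force into the chassis body. A minimal sketch of that per-wheel update, with all names and constants made up for illustration (this is not the PhysX 5 Vehicle SDK API):

```cpp
#include <cassert>
#include <cmath>

// Illustrative parameters for one wheel's suspension.
struct SuspensionParams {
    float restLength;   // ray length at which the spring is unloaded (m)
    float stiffness;    // spring constant k (N/m)
    float damping;      // damper constant c (N*s/m)
};

// hitDistance: raycast hit distance below the wheel mount (m).
// relVelocity: mount velocity along the ray (m/s, positive = moving down,
//              i.e. compressing the suspension).
// Returns the upward suspension force in newtons (0 if the wheel is airborne).
float suspensionForce(const SuspensionParams& p, float hitDistance, float relVelocity) {
    if (hitDistance >= p.restLength)
        return 0.0f;                                  // ray missed: wheel in the air
    float compression = p.restLength - hitDistance;   // how far past rest length
    float force = p.stiffness * compression + p.damping * relVelocity;
    return force > 0.0f ? force : 0.0f;               // suspension pushes, never pulls
}
```

Each frame you'd apply this force at the wheel mount position on the chassis rigid body; the PhysX 5 Vehicle SDK replaces this hand-rolled loop with its own suspension, tire, and drivetrain components.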

I point it at the example snippets and web documentation, but the code it generates won't work at all, not even close.

Opus 4.6 is a tiny bit less wrong than Codex 5.4 xhigh, but still pretty useless.

So, after reading all the success stories here and everywhere, I'm wondering if I'm holding it wrong or if it just can't solve everything yet.

reply
" or if it just can't solve everything yet."

Obviously it cannot. But if you give the AI enough hints, a clear spec, clear documentation, and remove all distracting information, it can solve most problems.

reply
It works somewhat well with trivial things. That's where most of these success stories are coming from.
reply
Codex has been good quality-wise, but I hit limits on the Codex team subscription so quickly it's almost more hassle than it is worth.
reply
I also switched from Claude to Codex a few weeks ago. After deciding to let agents do only focused work, I needed less context and the work was easier to review. Then I realized Codex can deliver the same quality, and it's paid through my subscription instead of per token.
reply
I made this switch months ago, ChatGPT 5.4 being the smarter model, but I've had subjective feelings of degradation even on 5.4 lately. There's a lot of growth in usage right now, so I'm not sure what kind of optimizations they're doing at both companies.
reply
I use Codex at home and Opus at work. They're both brilliant.
reply
I would switch to Codex, but Altman is such a naked sociopath and OpenAI so devoid of ethical business practices that I can't switch in good conscience. I'm not under any illusion that Anthropic is ethical, but it is so far a step up from OpenAI.
reply
I'm with you on the ethical part, but everything is a spectrum. All the AI leadership are some shade of evil; there's no way the product would be effective if they weren't. I don't like that Sam Altman is a lunatic, but frankly they all are. I also recognize that these are massive companies filled with non-shitty engineers who are actually responsible for a lot of the magic. Conflating one charlatan with the rest of it is a tragedy of nuance.
reply
Yeah, but there's a distinct difference between "risks their company because they refuse to help with killing little kids" and "happily helping with genocide".

One of these is better.

reply
Can't you use Codex (which is open source, unlike Claude Code) with Claude, even via Amazon Bedrock?
reply
Out of the loop here: what did Sam Altman do that makes him a sociopath, and what did OpenAI do that is so uniquely unethical that one should avoid it?

This keeps popping up in every thread, and I want to separate virtue signalling from genuine fear of OpenAI.

reply
There's not one thing that stands out, but he abandoned OpenAI's core principles entirely (a complete 180), constantly lies to people, and doesn't plan to stop.

https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...

reply
Calling out sociopaths is not virtue signaling. You need to look in the mirror if you think there's something wrong with that kind of virtue.

You do know you can just google his name yourself, don't you?

reply