upvote
They are just cancelling side projects because Anthropic is dominating in enterprise and side projects (probably) don't make profit. https://x.com/ShanuMathew93/status/2031074311629353299
reply
I would point out that Anthropic isn't profitable either (yet); it's just that enterprise is where the money is. Now that all the AI companies are homing in on that market, becoming profitable will be even more challenging.
reply
This data is pretty questionable. OpenAI employees have said on Twitter that it does not account for ChatGPT Enterprise, where most of their growth is, which is quote-only and not paid by credit card.
reply
Do you have more info about the inflated token use? I'm using codex cli a bunch now, but the reported token usage seems like an order of magnitude higher than, say, Claude Code with Opus.

Idk if it's because I set codex to xhigh reasoning, but even then it still seems way higher than Claude. The input/output ratio feels large too, e.g. I have a codex session which reports ~500M tokens in / ~2M out.
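For reference, the ratio in that session works out to roughly 250:1. A quick sketch (the numbers are the approximate ones above, so treat them as illustrative; one plausible explanation is that agentic CLIs resend the full conversation context on every turn, which inflates input counts far faster than output):

```python
# Rough input/output token ratio from the session described above.
# These are approximate, illustrative numbers, not exact billing data.
tokens_in = 500_000_000   # ~500M input tokens (context is resent each turn)
tokens_out = 2_000_000    # ~2M output tokens

ratio = tokens_in / tokens_out
print(f"input:output ratio ~ {ratio:.0f}:1")  # prints "input:output ratio ~ 250:1"
```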

reply
I wish I had hard evidence, but it is mostly an observation. I do use Codex a lot, and I've felt a drastic change between one or two months ago and today.

It used to give me precise answers; "surgical" is how I described it to my friends. Now it generates a lot of slop and plenty of "follow ups". It doesn't give me wrong answers, which is ok, but I've found that things that used to take 3-4 prompts now take 8-10. Obviously my prompting skills haven't changed much and, if anything, they've become better.

This is something that other colleagues have observed as well. Even the same GPT5.4 model feels different and more chatty recently. Btw, I think their version numbers mean nothing, no one can be certain about the model that is actually running on the backend and it is pretty evident that they're continuously "improving" it.

reply
Back in business school they used to tell the story of how makers of razor blades would put a good blade as the first and the last blade in the pack. I suspect the LLM services of doing something like that.
reply
I haven't had the time to fully hash this take out, but a big question in the back of my mind has been - is it possible that AI model improvements come partly from finding overhang in things that look hard and impressive to humans but are actually trivial consequences of the training data? If true, then the observable performance of any widely distributed model could get worse over time as it "mines out" the work that's easy for it to do.
reply
The Jony Ive project was cancelled? I cannot find anything on that.

Just that they took down some "io" mentions because of a trademark dispute with a third party, "iyo".

reply
Turns out that just lying about what your tech will do and how much people want it doesn't work forever as a way to raise unlimited money to throw into the fire, hoping you hit something that actually makes a profit.
reply