While no company has $1.5 trillion (4 × $380B) of net revenue, the difference is that Anthropic is cash-flow negative, has more than four people on staff (none of them hungry artists like PG), and its hardware spending is, I think, astronomical. It is cash-flow negative because of the hardware needed to train models.
There should be more than one company able to offer good purchase terms to Anthropic's owners.
I also think that Anthropic, just like OpenAI and most other LLM companies and company departments, rides on "test set leakage," hoping the general public and investors do not understand. Their models do not generalize well; at the very least, they are unable to generate working code in Haskell [1].
[1] https://haskellforall.com/2026/03/a-sufficiently-detailed-sp...
PG's Viaweb had awful code as a liability. Anthropic's Claude Code has an awful implementation and produces awful code, with more liability than code written by a human.
Isn't that pretty much why Anthropic and OpenAI are racing to IPO?