So over time, older models will become less valuable, while new models will be only slightly better. Frontier players are therefore in a losing business: they need to charge high margins to recoup their high training costs, but latecomers can simply train for a fraction of the cost.
Since performance gains are asymptotic, the first-mover advantage eventually becomes negligible and LLMs turn into a simple commodity.
The only moat I can see is data, but distillation shows that this is easy to subvert.
There will probably be a window, though, where insiders get very wealthy by offloading onto retail investors, who will be left holding the bag.
There hasn't been a real Moore's law for a good while, even before LLMs.
And memory isn't getting less expensive either...
Oh well
OpenAI was built that way, as you say. Google had a corporate motto of "Don't be evil," which they removed so they could, um, do evil stuff without cognitive dissonance, I guess.
This is the other kind of enshittification, where businesses turn into power accumulators.
You could call it a rug pull, but they may just be doing the math and realizing this is where pricing needs to shift before going public.