If power costs are significantly lower, the chips can pay for themselves before they become outdated. It also means you can run more instances of a model in one data centre, and that seems to be a big challenge these days: simply building enough data centres and getting power to them. (See the ridiculous plans for building data centres in space.)
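As a rough illustration of that break-even logic, here is a back-of-envelope sketch in Python. Every figure in it (the price premium, the power saving, the electricity rates) is a hypothetical placeholder, not a real number:

```python
# Back-of-envelope: does a cheaper-to-run chip pay for itself before it's obsolete?
# All numbers below are hypothetical placeholders, for illustration only.

HOURS_PER_YEAR = 24 * 365

def breakeven_years(extra_chip_cost: float, watts_saved: float, price_per_kwh: float) -> float:
    """Years until the electricity savings cover the extra up-front cost."""
    savings_per_year = (watts_saved / 1000) * HOURS_PER_YEAR * price_per_kwh
    return extra_chip_cost / savings_per_year

# Hypothetical chip: costs $1,000 more but draws 500 W less, compared at a
# cheap-power rate ($0.08/kWh) and an expensive one ($0.20/kWh).
for rate in (0.08, 0.20):
    years = breakeven_years(extra_chip_cost=1000, watts_saved=500, price_per_kwh=rate)
    print(f"At ${rate:.2f}/kWh, break-even after {years:.1f} years")
```

Whether that break-even lands inside or outside a typical three-to-five-year depreciation window is exactly the bet being described here.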
A huge part of the cost of making chips is the masks. The transistor-layer masks are the expensive ones; the metal masks much less so.
I figure they will eventually freeze the transistor layer and use metal masks to reconfigure the chips when new models come out. That should lower costs further.
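To put toy numbers on that (freezing the base layers and customising only the metal is essentially what structured-ASIC flows do), here is a minimal sketch. The dollar figures are invented placeholders; real mask-set costs are closely guarded, but the front-end transistor layers are known to dominate:

```python
# Toy comparison: re-spinning a full mask set for every design revision vs
# freezing the transistor layers and re-spinning only the metal masks.
# Both dollar figures are invented placeholders; only the ratio matters.

FULL_MASK_SET = 25_000_000   # hypothetical: all layers at a leading-edge node
METAL_ONLY = 3_000_000       # hypothetical: back-end metal layers only

revisions = 4  # hypothetical: one reconfiguration per major model generation

full_respins = FULL_MASK_SET * revisions
metal_respins = FULL_MASK_SET + METAL_ONLY * (revisions - 1)

print(f"Full mask set every time:   ${full_respins:,}")
print(f"Metal-only after the first: ${metal_respins:,}")
```

Even with made-up numbers the shape of the argument is clear: once the transistor layer is frozen, each new configuration costs a fraction of a full respin.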
I don’t really know if this makes sense. It depends on whether we get new breakthroughs in LLM architecture or not; it’s essentially a gamble. But honestly, so is buying Nvidia Blackwell chips for inference. I could see those getting uneconomical very quickly if any of the alternative inference-optimised hardware pans out.
I really don't like the hallucination rate of most models. It is improving, but that still puts this far in the future.
What I could see, though, is the whole unit they made being power-efficient enough to run on a robotics platform for human-computer interaction.
It makes sense that they would try to repurpose their tech as much as they can, since making changes is fraught with long time frames and risk.
But if we look long term and pretend that they get it to work, they just need to stay afloat until better, smaller models can be made with their technology. It then becomes a waiting game for investors and a risk assessment.
^^^ I think the opposite is true
Anthropic and OpenAI seem to be releasing new versions every 60-90 days now, and you could argue they’re going to start releasing even faster.
Per period of time, I’d say yes.