RAM hoarding is, AFAICT, the moat.
reply
lol... true that for now though
reply
Yeah, just because Cisco had a huge market lead in telecom in the late '90s doesn't mean they kept it.

(And people nowadays: "Who's Cisco?")

reply
You'd still need those giant data centers to train new frontier models. These Taalas chips, if they work, seem to handle inference well, but training will still require general-purpose GPU compute.
reply
Next up: wire up a specialized chip to run the training loop of a specific architecture.
reply
I think their hope is that they’ll have the “brand name” and expertise to get a good head start when real inference hardware comes out. It does seem very strange, though, to pour all this massive infrastructure investment into what is ultimately going to be useless prototyping hardware.
reply
Tools like openclaw start making the models a commodity.

I need some smarts to route my question to the correct model. I won't care which one that is. Selling commodities is notorious for slow and steady growth.
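
The routing "smarts" could be as dumb as keyword rules in front of a model pool. A minimal sketch (model names and rules here are hypothetical, not any real router's API):

```python
# Toy model router: pick a backend by crude keyword matching.
# Model names are made up for illustration only.
def route(question: str) -> str:
    """Return the name of the model that should answer this question."""
    q = question.lower()
    if any(k in q for k in ("prove", "integral", "equation")):
        return "math-tuned-model"
    if any(k in q for k in ("def ", "compile", "stack trace")):
        return "code-tuned-model"
    # Cheap general model as the default for everything else.
    return "general-model"

print(route("Why does this stack trace mention a segfault?"))
```

Real routers would use a small classifier or an LLM call instead of keywords, but the commodity point stands either way: the caller never sees which backend answered.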

reply
Nvidia bought all the fab capacity, so competitors' chips can't be manufactured at scale.
reply
If I'm not mistaken, this chip was built specifically for the Llama 8B model. Nvidia chips are general purpose.
reply