Nothing to do with each other. This is a general optimization. Taalas' chip is an ASIC that runs a tiny 8B model out of on-chip SRAM.

But I wonder how Taalas' product can scale. Making a custom chip for one single tiny model is very different from serving arbitrary models with trillions of parameters to a billion users.

Roughly, 53B transistors for every 8B params. For a 2T-param model you'd need about 13 trillion transistors, assuming linear scaling. One chip uses 2.5 kW of power? That's roughly 4x an H100 GPU. How does it draw so much power?
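Rough numbers, if anyone wants to check me (quick sketch; the linear-scaling assumption is mine, not anything Taalas has claimed):

```python
# Back-of-envelope transistor scaling. Linear scaling of transistors with
# parameter count is an assumption for illustration, not a Taalas claim.

hc1_transistors = 53e9   # transistors on one HC1 chip, per the article
hc1_params = 8e9         # parameters of the model etched into it

per_param = hc1_transistors / hc1_params   # ~6.6 transistors per parameter

for params in (8e9, 70e9, 400e9, 2e12):
    print(f"{params / 1e9:6.0f}B params -> ~{params * per_param / 1e12:5.2f}T transistors")
# 2000B params -> ~13.25T transistors
```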

If you assume the frontier model is 1.5 trillion parameters, you'd need an entire N5 wafer-scale chip to run it. And then if you need to change anything in the model, you can't, since it's physically printed on the chip. So this is something you do only if you know you'll use this exact model, unchanged, for years.

Very interesting tech for edge inference though. Robots and self-driving cars could make use of these in the distant future if the power draw comes down drastically. A 2.5 kW chip running inside a robot is not realistic; maybe a 150 W chip is.

reply
The 2.5 kW figure is for a server running ten HC1 cards:

> The first generation HC1 chip is implemented in the 6 nanometer N6 process from TSMC. ... Each HC1 chip has 53 billion transistors on the package, most of it very likely for ROM and SRAM memory. The HC1 card burns about 200 watts, says Bajic, and a two-socket X86 server with ten HC1 cards in it runs 2,500 watts.

https://www.nextplatform.com/2026/02/19/taalas-etches-ai-mod...

reply
I’m confused then. They need 10 of these to run an 8B model?
reply
Just tried this. Holy fuck.

I'd take an army of high-school graduate LLMs to build my agentic applications over a couple of genius LLMs any day.

This is a whole new paradigm of AI.

reply
A billion stupid LLMs don't make a smart one, they just make one stupid LLM that's really fast at stupidity.
reply
I think maybe there are subsets of problems where you can have either a human or a smart LLM write a verifier (e.g. a property-based test?) and a performance measurement, and then let the dumb models generate and iterate on candidates?
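Something like this, as a rough sketch (everything here is made up for illustration; `small_model_generate` is a stand-in for whatever fast, dumb model you'd actually call):

```python
import random

# Sketch of the "cheap verifier + swarm of dumb generators" loop.
# small_model_generate() is a stand-in for a call to one of the fast small
# LLMs; here it just emits random candidates so the script actually runs.

def small_model_generate(prompt: str, temperature: float) -> str:
    bodies = ["return a + b", "return a - b", "return a * b", "return b - a"]
    return f"def add(a, b): {random.choice(bodies)}"

def property_test(candidate_src: str, trials: int = 200) -> bool:
    """Verifier: a property-based test (commutativity and identity here)."""
    namespace = {}
    try:
        exec(candidate_src, namespace)
        add = namespace["add"]
        for _ in range(trials):
            a, b = random.randint(-1000, 1000), random.randint(-1000, 1000)
            if add(a, b) != add(b, a) or add(a, 0) != a:
                return False
        return True
    except Exception:
        return False

def search(prompt: str, n_candidates: int = 50):
    """Keep asking dumb models for candidates until one passes the verifier."""
    for _ in range(n_candidates):
        candidate = small_model_generate(prompt, temperature=1.0)
        if property_test(candidate):
            return candidate
    return None

print(search("write add(a, b)"))
```

The verifier is cheap to write and cheap to run, so it doesn't matter how many bad candidates the small models produce, only how fast they can produce them.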
reply
Yeah, maybe, but then it would make much more sense to run a big model than hope one of the small ones randomly stumbles upon the solution, just because the possibility space is so much larger than the number of dumb LLMs you can run.
reply
I don't work this way, so this is all a hypothetical to me, but the possibility space is larger than _any_ model can handle; models are effectively applying a really complex prior over a giant combinatorial space. I think the idea behind a swarm of small models (probably with higher temperature?) on a well-defined problem is akin to e.g. multi-chain MCMC.
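For what it's worth, the analogy I have in mind is something like this toy (nothing LLM-specific, all numbers invented): several independent chains making cheap noisy moves over a multimodal landscape, versus one chain with the same total budget.

```python
import math
import random

# Toy illustration of the multi-chain idea: independent high-temperature
# chains exploring a multimodal 1-D landscape, keeping the best point seen.
# The landscape and every parameter here are made up for the example.

def score(x: float) -> float:
    # Multimodal objective: many local optima, global maximum near x = 0.
    return math.cos(3 * x) - 0.1 * x * x

def run_chain(steps: int, temperature: float) -> float:
    x = random.uniform(-10, 10)
    best = x
    for _ in range(steps):
        proposal = x + random.gauss(0, 1.0)
        # Metropolis-style rule: always accept improvements, sometimes accept
        # downhill moves (more often when the temperature is higher).
        delta = score(proposal) - score(x)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = proposal
        if score(x) > score(best):
            best = x
    return best

# Many cheap chains vs. one long chain, with the same total step budget.
many = max((run_chain(steps=200, temperature=1.0) for _ in range(50)), key=score)
one = run_chain(steps=10_000, temperature=1.0)
print(f"best of 50 short chains: x={many:+.2f}, score={score(many):.3f}")
print(f"one long chain:          x={one:+.2f}, score={score(one):.3f}")
```

The chains don't coordinate; they just start from different places, so they're less likely to all be stuck in the same basin.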
reply
What did you try and how?
reply
I see; the chatbot demo on the Taalas page. If they could produce this cost-effectively it would definitely be valuable. The only challenge would be getting models to market before their next revision.
reply
Man, I'm in the exact opposite camp. 1 smart model beats 1000 chaos monkeys any day of the week.
reply
When that generates 10k of output slop with less latency than my web server doing some CRUD shit... amazing!
reply
This is exceptionally fast (almost instant). What's the catch? The answer was there before I lifted the return key!
reply
This is crazy!
reply