I have two A100s and have been playing with local models for years. There are definitely moments when they're quite impressive, but the small context sizes and unreliability become obvious fast.

> For those of us a bit crazy, we are running KimiK2.6, GLM5.1

Yes, those can compare to Opus, but you can't run those unquantized for less than $400k in hardware.

reply
Two Mac Studio M3 Ultras (512 GB each) and one USB cable can run all of those models, for maybe $30,000 in hardware. Based on my benchmarks, those Mac Studios were twice as fast as the A100s on Deepseek v4 Flash, which ships with a quantization that isn't really a lossy one.
reply
That setup cannot run KimiK2.6 or GLM5.1, i.e. the models actually in the ballpark of what the frontier companies offer.
reply
Yes it can, but the experience is not great.

A single maxed-out M3 can run Kimi 2.6 at Q2, though that comes with noticeably degraded perplexity.

2x M3s with RDMA can run a lossless Kimi 2.6 at Q4, but with CPU only you'd get okayish decode and horrible (1 minute+) TTFT, which wouldn't make for a great _interactive_ experience.
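To put rough numbers on the single-Mac vs. dual-Mac split, here's a back-of-envelope sketch of the memory arithmetic. The ~1T total parameter count, the 20% overhead pad, and the effective bits-per-weight per quant level are all my own assumptions for illustration, not published specs for Kimi 2.6:

```python
import math

def model_mem_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Estimated footprint in GB: quantized weights plus a 20% pad for KV cache etc.
    (overhead factor is an assumption, not a measured figure)."""
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

PARAMS_B = 1000    # assumption: ~1T total parameters
PER_MAC_GB = 512   # unified memory per M3 Ultra in the setup above

# Effective bits-per-weight are rough guesses (quant formats mix tensor precisions).
for label, bits in [("Q8", 8.0), ("Q4", 4.5), ("Q2", 2.5)]:
    need = model_mem_gb(PARAMS_B, bits)
    machines = math.ceil(need / PER_MAC_GB)
    print(f"{label}: ~{need:.0f} GB -> {machines}x Mac Studio 512GB")
```

Under these assumptions Q2 (~375 GB) squeezes onto one 512 GB machine while Q4 (~675 GB) needs two, which lines up with the single-M3 / 2x-M3 claims above.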

reply
They all still fall short of Opus 4.6, though. They're good, but they fail on extremely complex tasks, in contrast with a frontier model that keeps trying until it succeeds or exhausts the solution space.
reply
Not by much, and moving the goalposts makes for a bad comparison. Local open-weight models are already more powerful than the frontier models of only a year ago.

If you believe what you read here, the gap is closing fast.

reply