I have about 1 KLOC of harness code (written by Kimi) to work around quirks in Kimi that no other model I've tested needs, such as infinite tool-call loops and other weirdness.
You can do quite a bit with it and never run into those quirks, or you might hit them on every request.
It is very sensitive to "confusing" things about its environment in a way Sonnet and Opus are not.
Still great value, but they have some way to go.
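For a sense of what that harness code looks like, here's a minimal sketch of a guard against the infinite tool-call loops (the field names are just whatever your harness tracks, nothing Kimi-specific):

```python
from collections import deque

class ToolLoopGuard:
    """Abort when the same tool call repeats too often in a short window."""

    def __init__(self, window: int = 6, max_repeats: int = 3):
        self.recent = deque(maxlen=window)  # recent (name, args) signatures
        self.max_repeats = max_repeats

    def should_abort(self, name: str, arguments_json: str) -> bool:
        sig = (name, arguments_json)
        self.recent.append(sig)
        # Trip if an identical call shows up max_repeats times in the window.
        return self.recent.count(sig) >= self.max_repeats

# In the agent loop (pseudo-usage):
#   if guard.should_abort(call.name, json.dumps(call.args, sort_keys=True)):
#       inject a "stop repeating this tool call" message instead of executing
```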
How do you think the large providers do inference? No single GPU has 1TB+ of memory on board. It's a cluster of many GPUs.
GPU interconnect speeds are a big bottleneck today for GPUs in AI applications. Data can't move between them fast enough.
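Rough arithmetic for why that matters (every number below is an assumption, just to get the order of magnitude; it also ignores all-reduce constant factors and latency):

```python
# Activation traffic generated by Megatron-style tensor parallelism,
# per generated token. All figures are illustrative assumptions.
hidden = 8192              # model hidden size
layers = 80                # transformer layers
bytes_per_val = 2          # fp16/bf16 activations
allreduces_per_layer = 2   # one after attention, one after the MLP

traffic = hidden * bytes_per_val * allreduces_per_layer * layers
print(f"~{traffic / 1e6:.1f} MB over the interconnect per token")

nvlink_bps = 450e9         # ~NVLink-class per-GPU bandwidth, order of magnitude
print(f"lower-bound comm time: {traffic / nvlink_bps * 1e6:.1f} us/token")
```

A few MB per token sounds small until you multiply by thousands of concurrent requests, which is why interconnect bandwidth is the wall.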
The model is fine; I've switched to it entirely for a personal project, but it's not Opus.
And no, you're not running them locally unless you're a millionaire. You still need hundreds of GB (500+) of VRAM - that's nowhere near the level of consumer electronics.
Sure, you can run the quantized models, but then you're at Haiku performance.
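The arithmetic is straightforward. Assuming a model around 1T parameters (roughly K2's size), the weights alone need:

```python
# Weights-only memory footprint; KV cache and activations come on top.
params = 1e12  # assumed ~1T-parameter model
for name, bits in [("fp16", 16), ("int8", 8), ("4-bit", 4)]:
    gb = params * bits / 8 / 1e9
    print(f"{name}: ~{gb:,.0f} GB of VRAM just for weights")
# fp16: ~2,000 GB   int8: ~1,000 GB   4-bit: ~500 GB
```

Even the aggressive 4-bit quant lands right at that 500 GB floor.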
Claude becomes near-lobotomized beyond 500,000 tokens. I don't believe much quality code gets produced at such high token counts, not to mention the drastically increased cost.
270k isn't massive, but it's very usable with compaction. Not every task needs the full context history.
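A minimal sketch of what compaction can look like (the `count_tokens` and `summarize` helpers are hypothetical stand-ins for whatever your stack provides):

```python
def compact(history, count_tokens, summarize, budget=200_000, keep_tail=20):
    """Fold older turns into a summary once the history exceeds the budget."""
    if sum(count_tokens(m) for m in history) <= budget:
        return history
    head, tail = history[:-keep_tail], history[-keep_tail:]
    summary = summarize(head)  # one LLM call over the older turns
    return [{"role": "system",
             "content": f"Summary of earlier turns: {summary}"}] + tail
```

The recent tail stays verbatim, so tasks that only need local context never notice the difference.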
Quantized models do have a quality / accuracy impact, although it is not as drastic as you suggest. There is some good data on this [0].
"These findings confirm that quantization offers large benefits in terms of cost, energy, and performance without sacrificing the integrity of the models. "
One thing worth mentioning is that quantized models are not created equal; they don't all scale at the same rate [1]. For example, not all tensors contribute equally to model accuracy. In practice, the most sensitive parts (such as key attention projections) are often quantized less aggressively to preserve the quality of the inference.
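As a toy illustration of that mixed-precision idea (the tensor names, bit widths, and quantizer here are all made up for the example; real schemes like those in [1] are more involved):

```python
import numpy as np

def quantize(t, bits):
    # Toy symmetric uniform quantization with a single per-tensor scale.
    scale = np.abs(t).max() / (2 ** (bits - 1) - 1)
    return np.round(t / scale) * scale

# Illustrative policy: sensitive tensors keep more bits.
policy = {
    "attn.k_proj": 8,  # key projections quantized less aggressively
    "attn.q_proj": 8,
    "mlp.up":      4,  # bulk MLP weights take the aggressive treatment
    "mlp.down":    4,
}

rng = np.random.default_rng(0)
for name, bits in policy.items():
    w = rng.standard_normal((256, 256))
    err = np.abs(w - quantize(w, bits)).mean()
    print(f"{name}: {bits}-bit, mean abs error {err:.4f}")
```

Running it shows the 8-bit tensors reconstruct with far lower error, which is the whole point of spending the bit budget where it matters.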
[0] https://developers.redhat.com/articles/2024/10/17/we-ran-ove...
[1] https://medium.com/@paul.ilvez/demystifying-llm-quantization...
Check out tensor parallelism.
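The core trick, in a toy NumPy sketch (Megatron-style column/row split across two "GPUs", with the final sum standing in for the all-reduce over the interconnect):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))    # activations
W1 = rng.standard_normal((8, 16))  # first weight matrix
W2 = rng.standard_normal((16, 8))  # second weight matrix

# Split W1 by columns and W2 by matching rows across two devices.
W1a, W1b = np.hsplit(W1, 2)  # each "GPU" holds half the columns of W1
W2a, W2b = np.vsplit(W2, 2)  # and the corresponding half of the rows of W2

# Each GPU computes its shard independently...
partial_a = (x @ W1a) @ W2a
partial_b = (x @ W1b) @ W2b

# ...then the partial results cross the interconnect (the all-reduce).
y_parallel = partial_a + partial_b
assert np.allclose(y_parallel, (x @ W1) @ W2)
```

Each GPU only stores its shard of the weights, which is how a model far bigger than any single card's memory gets served; the price is that all-reduce on every layer, which is exactly the interconnect traffic discussed upthread.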