Local models embody the hacker spirit; constant Claude glazing is spiritually incompatible with tinkering. Don't upload your spirit to the cloud.
reply
That’s like saying cloud computing is spiritually incompatible with tinkering.
reply
It is. You can't tinker with someone else's machine.
reply
They may well do, but in practice, if you want to embody the hacker spirit, the best thing is to hack rather than to try to get some clearly inadequate local LLM to do it for you.
reply
But if I run the model locally I have to pay for it, whereas with Claude I can... oh wait, I just hit my 5-hour free limit after 2 messages.
reply
I experiment a lot with local models, and I agree.

I have a lot of fun with the local models and seeing what they can do.

I appreciate the SOTA models even more after my local experiments. The local models are really impressive these days, but the gap to SOTA is huge for complex tasks.

reply
Opus is probably somewhere in the 5T-parameter range and needs terabytes of GPU memory.

The economics of running SOTA locally just don't make sense: you're not using it 24/7 at 80%+ utilization, while cloud-based providers can.
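
Back-of-the-envelope, weights only (the 5T figure is speculation, so treat the numbers as purely illustrative):

    # rough weights-only memory footprint for serving a dense model
    # (ignores KV cache and activations, which add more on top)
    def weights_tb(params_billions, bits_per_weight):
        return params_billions * 1e9 * bits_per_weight / 8 / 1e12

    for bits in (16, 8, 4):
        print(f"5T params @ {bits}-bit: {weights_tb(5000, bits):.1f} TB")
    # -> 10.0 TB, 5.0 TB, 2.5 TB

Even at 4-bit, that's multiple terabytes of weights before you serve a single token.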

reply
Reasoning over a large codebase is only one use case for large models. The use cases in the article (summarizing, classifying, basic text rewrites) are ones most phones can handle just fine.
reply
The article is not about those use cases. There are plenty of use cases for which local models are already pretty good.
reply
DeepSeek V4 with a 1-million-token context window is pretty powerful, although still not there. There's hope that Opus 4.5-level performance locally is not that far away.
reply
Running DeepSeek V4 locally without extreme quantization requires a lot of hardware.

The IQ2 quants that fit into 128GB machines are very degraded.

reply
That is true; it is a 1.6T-parameter model, so it requires a great deal of memory. I also heard there's a 2-bit quantization that works well on Apple Metal.
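
For a rough sense of scale, taking the 1.6T figure at face value (the bits-per-weight values are approximate, since quant mixes vary by model):

    # approximate weights-only size of a 1.6T-param model
    # at common GGUF-style quant levels
    PARAMS = 1.6e12
    for name, bpw in [("Q8_0", 8.5), ("Q4_K_M", 4.8), ("IQ2_XXS", 2.06)]:
        print(f"{name}: ~{PARAMS * bpw / 8 / 1e9:.0f} GB")
    # Q8_0: ~1700 GB, Q4_K_M: ~960 GB, IQ2_XXS: ~412 GB

On these numbers, anything that fits a 128GB machine means going well below 2 bits per weight or offloading to disk, which is why quality falls off so sharply.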
reply
From what I read, DeepSeek V4 is very close to Opus 4.6 in performance.
reply
The full model is, not the quantized versions.
reply
Yeah, that goes without saying. How could an open-weight, quantized version beat SOTA? :)
reply
deleted
reply
Should be relatively quick: 1-2 years for local models to catch up to today's SOTA.

Of course then you'll be asking "uhh lemme know when Opus 6.8 level performance is available locally". People are never happy.

Gemma 4 and Qwen 3.6 are legit beast models that would steamroll every API offering from 2 years ago.

reply
Next year there will be Opus 4.5-level performance available in open-source models, so theoretically you may be able to run it locally, but in reality it will be too expensive for "normal" users (e.g., maybe 2 x Mac Studios with 512GB of RAM each, since a ~4-bit quant of a model that size approaches a terabyte of weights).
reply
Depending on the task, there are already models matching Opus 4.5. Just not in everything. But you can always swap in a local model for a particular task.
reply
The frontier Chinese open-source models are already at this level, GLM-5.1 and Kimi K2.6 specifically.
reply
But you can't run them locally at full quality. And the quantized versions you can run locally are a far cry from Opus 4.6.
reply
Anthropic serves quantized versions of their models, and you can run Q8 locally.
reply
I don't even use Sonnet anymore. The current one feels worse than Claude 3.5 did a couple of years ago. Have they quantized it that much? I switched to GPT 5.5; let's see how long it stays good.
reply