> It's only a matter of time before local models reach Opus level. We're one or at most two years behind, and Anthropic knows it.

Can confirm. Kimi K2.5 is pretty intelligent and most of the time there's no difference between Opus and Kimi.

Local models just don't make economic sense, since the GPU will sit idle 99% of the time.
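
For a rough sense of what that idle time does to the economics, a back-of-envelope sketch (every number here is an assumption picked for illustration, not a measured price):

```python
# Back-of-envelope: local GPU vs. hosted API.
# All figures below are illustrative assumptions.

GPU_COST_USD = 2000.0             # assumed one-time price of a capable GPU
LIFETIME_YEARS = 3                # assumed useful life before replacement
POWER_USD_PER_ACTIVE_HOUR = 0.05  # assumed electricity while inferencing
UTILIZATION = 0.01                # the "idles 99% of the time" claim

HOURS_PER_YEAR = 24 * 365
active_hours = HOURS_PER_YEAR * LIFETIME_YEARS * UTILIZATION
total_usd = GPU_COST_USD + active_hours * POWER_USD_PER_ACTIVE_HOUR

print(f"active hours over {LIFETIME_YEARS} years: {active_hours:.0f}")
print(f"effective cost per active hour: ${total_usd / active_hours:.2f}")
```

Under these assumptions, 1% utilization works out to roughly $7.66 per hour of actual inference time, since the sunk hardware cost dominates; the math shifts quickly as utilization rises.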

You already have a GPU as part of your computer (at least an iGPU, and an NPU on most newer platforms), so you might as well get some use out of it with local inference. Running inference on a larger model with an undersized GPU will also have it idling a lot less than 99%, and that still makes sense for most casual users, who only rarely need a genuine "Pro"-class answer from an AI. Doing it locally is far less hassle than paying for a subscription or messing with API spend.
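
To give a sense of how low-hassle the local route can be, here's a minimal sketch, assuming an Ollama server on its default port and a model you've already pulled (the model name is just an example, substitute whatever you have locally):

```python
# Minimal local-inference call against an Ollama server on its default
# port. Assumes the server is running and that you've already done
# e.g. `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # substitute whatever model you have pulled
        "prompt": "In one sentence, why might local inference make sense?",
        "stream": False,    # return a single JSON object, not a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```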

That doesn't hold for a distributed team.