FWIW I think Gemma 4 31b is more likely to be of use to me than Sonnet. Maybe it's a skill issue, but I love Opus 4.7 (undisputed king), while Sonnet seems borderline useless; I basically think of it as on the same level as Qwen 35b MoE.
But they diverge greatly on particular tasks whenever the ViT tower and a priori knowledge of the world are crucial. I wish Gemma were on par, but both Google and I know it isn't.
I'm going to switch to local LLMs for most stuff soon.
Thot_experiment is saying that his 2016 Toyota Prius is a great and reliable car for his daily commute and running errands.
Whereas everyone else is screeching about its capability gap with a Lockheed Martin F-35 Lightning II.
(Of course, if I'm being honest, 640 kB is fine. I'm sure tons of the world's commerce is handled by less; the delta between a system with 640 kB of RAM and a modern one is near nil for many people. The UX on a PoS terminal doesn't require more than that, for example, and the Hacker News UX could also be roughly the same.)
Doubtful. The increase in demand is greatly outpacing supply, and all signs point to a continued acceleration in demand.
> If I could drop $10,000 to have an effectively permanent opus 4.7 subscription today, I would.
lol well obviously, but realistically that price point is going to be closer to $100k, with a perpetual $1k a month in power costs.
I predict the B200 data centers we're building today will be obsolete in 3 years, and we'll be using models and hardware that aren't even on a roadmap today. Likely not NVIDIA, likely not OpenAI or Anthropic. Maybe Chinese?
In the meantime, we must continue building software with clumsy coding agents tied to cloud services, as this (for now) seems to be about the only area where AI makes economic sense.
If we think about the near future, something like Kimi2.6 is within the realm of Opus 4.6 today, but requires closer to $700k in hardware to run.
> For those of us a bit crazy, we are running KimiK2.6, GLM5.1
Yes, those can compare to Opus, but you can't run those unquantized for less than $400k in hardware.
A single maxed-out M3 can run Kimi 2.6 at Q2, though that's with a heavily degraded perplexity.
2x M3s with RDMA can run a lossless Kimi 2.6 at Q4, but CPU-only you would get okayish decode speed and horrible (1 minute+) TTFT; that wouldn't be a great _interactive_ experience.
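To see why the quantization level dominates the hardware bill, a rough back-of-the-envelope: weight memory is just parameter count times bits per weight. A sketch in Python (the ~1T parameter count is an illustrative assumption, and this ignores KV cache, activations, and runtime overhead):

```python
def weight_memory_gib(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory footprint in GiB."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# Hypothetical ~1T-parameter model at different quantization levels:
for bits in (16, 8, 4, 2):
    print(f"Q{bits}: ~{weight_memory_gib(1000, bits):,.0f} GiB")
```

At Q4 a trillion-parameter model still needs roughly 466 GiB just for weights, which is why you're looking at two high-memory Macs or six figures of GPUs; Q2 halves that, at the cost of perplexity.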
If you believe what you read here, the gap is closing fast.
For niche applications, sure. For general use, I think the tendency toward the best model being used for everything will continue, to the model publishers' delight. It's just much easier to get a feel for Opus and then do everything with it, versus switching back and forth and keeping track of the novel ways Haiku came up with to dumbfuck this Sunday evening.
Fixed that for you. Right now most models produced are based on floating-point math and probabilities, which is "expensive" to compute.
Microsoft has researched 1-bit LLMs which can run much more efficiently, and on much cheaper hardware[1].
If this research is reproducible and reusable outside their research models, the cost of running self-hosted LLMs could drop by an order of magnitude once it hits the mainstream.
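The trick behind the "1-bit" (really 1.58-bit, ternary) approach is that weights are restricted to {-1, 0, +1}, so matrix multiplies collapse into additions and subtractions. A toy sketch of BitNet-style absmean quantization in plain Python (heavily simplified; the actual training recipe is more involved):

```python
def absmean_ternarize(weights):
    """Quantize weights to {-1, 0, +1} using an absmean scale (BitNet b1.58 style)."""
    scale = sum(abs(w) for w in weights) / len(weights)
    quantized = [max(-1, min(1, round(w / scale))) for w in weights]
    return quantized, scale

def ternary_dot(quantized, scale, xs):
    """Dot product against ternary weights: no multiplications, only add/subtract."""
    acc = 0.0
    for q, x in zip(quantized, xs):
        if q == 1:
            acc += x
        elif q == -1:
            acc -= x
    return acc * scale

codes, scale = absmean_ternarize([0.9, -1.1, 0.05, 0.0])
print(codes)
print(ternary_dot(codes, scale, [1.0, 2.0, 3.0, 4.0]))
```

Multiplier-free accumulation like this is what lets the inference run on much cheaper silicon; the open question is how much model quality survives the quantization.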