Open weights undercut the absolute cutoff scenario. They don't fully solve the question of who gets the best model first, who gets enough tokens to use it heavily, and who gets to integrate it into sensitive workflows without waiting for permission.
reply
Affordability of hardware that can run local LLMs is a real factor, too. Not sure when RAM prices will come down, but with everything that's happening and could happen in the world right now, it doesn't look like they'll drop in the near or medium term.
reply
Open weight models don't mean you can run them on your laptop (except for the small ones). They mean that someone independent (a cloud provider, another company...) can build big computers capable of running those models and offer you metered usage.

At the end of the day, as a consumer, you still pay per token (or per something) to your provider, except you can choose from multiple providers using your own criteria. If you want to use DeepSeek v4 hosted in Europe, it's possible.
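
Concretely, most independent hosts expose an OpenAI-compatible API, so switching providers is mostly a base-URL change. A minimal sketch; the URLs, API keys, and model id below are placeholders, not real endpoints:

    from openai import OpenAI

    # Same open-weight model, different hosts; pick by price, latency, jurisdiction.
    # Base URLs and the model id are placeholders.
    eu_host = OpenAI(base_url="https://eu-provider.example/v1", api_key="...")
    us_host = OpenAI(base_url="https://us-provider.example/v1", api_key="...")

    resp = eu_host.chat.completions.create(
        model="deepseek-v4",  # placeholder id; whatever the host calls it
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)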

reply
In other words: commoditization, and no moat.

Which would also be an ideal outcome for people interested in avoiding a concentration of power and wealth due to access to generative AI.

reply
No one is going to run models that are comparable to frontier locally without spending enormous sums for use at scale or in large orgs. Even with cheap RAM, you will still need a very large budget for frontier-level capability.

Open models that are competitive with frontier will be used on shared hosts.
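
Rough back-of-envelope for the weights alone (a sketch; the parameter count and quantization level are assumptions, roughly DeepSeek-scale):

    # Memory just to hold the weights, before any KV cache or batching headroom.
    params = 700e9        # ~700B total parameters (assumed, DeepSeek-scale MoE)
    bytes_per_param = 1   # 8-bit quantization (assumed)
    weights_gb = params * bytes_per_param / 1e9
    print(f"~{weights_gb:.0f} GB of RAM/VRAM for weights alone")  # ~700 GB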

reply
Models capped out on training and (active) parameters a while ago; it's the tooling / harness that's making the big jumps in performance happen. And then you have things like DeepSeek with a pretty small KV cache.

And with the extreme chip shortages for the next two years, there's little appetite for even bigger models anyway.

Barring a breakthrough in scaling, the only direction the models can really go is smaller, which will inevitably mean better-performing local models for the same chip budget.

reply
> No one is going to run models that are comparable to frontier locally without spending enormous sums for use at scale

You can always run these models cheaper locally if you're willing to compromise on total throughput and inference speed. For most end-user or small-scale business needs, you don't really need a lot of either.
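
For instance, quantized weights on CPU via llama-cpp-python run far below datacenter speeds, but on commodity hardware. A sketch; the model file path is a placeholder:

    from llama_cpp import Llama  # pip install llama-cpp-python

    # 4-bit quantized GGUF weights trade some quality for fitting in cheap RAM;
    # CPU-only decode is slow but fine for low-volume use. Path is a placeholder.
    llm = Llama(model_path="some-open-model-q4_k_m.gguf", n_ctx=8192)
    out = llm("Summarize this support ticket: ...", max_tokens=256)
    print(out["choices"][0]["text"])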

reply
It would be awful if running models locally became the primary way of using LLMs. On dedicated servers sharing GPUs across requests, energy usage and environmental impact are way lower overall than if everyone and their mother suddenly needs a beefy GPU. It's the equivalent of everyone commuting alone in their own car instead of taking a train that picks up hundreds at once.
reply
You can batch requests when running locally too, if you're using a model with low enough KV-cache requirements; essentially targeting the same resource efficiencies that the big providers rely on. This is useful since it gives you more compute throughput "for free" during decode, even when running on very limited hardware.
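
Off-the-shelf servers already do this; e.g. vLLM's offline API pushes a list of prompts through continuous batching. A sketch; the model id is just an example:

    from vllm import LLM, SamplingParams

    # vLLM batches all prompts through the same forward passes,
    # amortizing weight reads across requests during decode.
    llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # example model id
    params = SamplingParams(temperature=0.7, max_tokens=256)
    prompts = ["Summarize: ...", "Translate to French: ...", "Refactor: ..."]

    for out in llm.generate(prompts, params):
        print(out.outputs[0].text)
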
reply
Maybe people would target their use more appropriately, then.
reply
It's even more awful if the compute capital is owned by only a handful of players.
reply
Open weights will remain open only if they’re significantly worse than the frontier weights.

Before you challenge that with benchmarks, consider that the labs which release open weight models have internal testing and unpublished results.

reply
There are two problems with that scenario:

1. Your European startup will be competing with others using a much better frontier model. In a scenario where you already have other major disadvantages (access to capital, labor), you might be outcompeted.

2. Open models have been keeping pace very nicely, but they rely on distillation of frontier models. If the race gets really tight, this could be affected so that the time gap grows larger (i.e., it's very unlikely anyone but Anthropic is distilling from Mythos at the moment).

reply
> 1. Your European startup will be competing with others using a much better frontier model.

If the small (and I'd even say sometimes imperceptible) difference between Opus & DeepSeek v4 Pro is such a disadvantage for your startup, then your startup has an issue, not the LLM.

At the end of the day, your startup is there to solve real problems, and even before LLMs, being fast at coding was never such a huge competitive advantage compared to marketing, sales, customer support, product vision...

reply
The direction we are going suggests AI will also be used for marketing, sales, customer support and product vision.

Besides, if the difference between Opus and DeepSeek 4 is so small and imperceptible, you are missing the opportunity to launch a startup on your own and compete with Claude Code.

reply
Someone recently made a graph showing that the gap between US frontier LLMs and Chinese open weight LLMs (including DeepSeek v4) is widening. Unfortunately, I can't find it anymore.

Update: GPT-5.5 found it.

Article: https://www.nist.gov/news-events/news/2026/05/caisi-evaluati...

Graph: https://www.nist.gov/sites/default/files/images/2026/05/01/1...

reply
This is propaganda, not data.

If the Chinese government published a graph that said the opposite, would you consider that a serious and objective source?

reply
If the methodology in the accompanying write-up looked credible, yes. Though I have significantly more trust in US agencies, like NIST in this case.
reply
Give it time. It's inevitably a logistic curve.
reply
I believe logistic curves make no sense when you have Elo scores. Elo is already a log-odds scale (expected win rate = 1 / (1 + 10^((R_b − R_a)/400))), so ratings are unbounded and there's no natural ceiling for an S-curve to flatten against.
reply
That's an official website of the United States government. I would prefer another source.
reply
I think no other source exists.
reply
Llama is not months behind GPT 5.5 Pro. I don't think Qwen or DeepSeek are either.

edit: I'm specifically referring to the "5.5 Pro" model, not regular 5.5 with a Pro-tier subscription. Claude has no model available that's comparable to 5.5 Pro either.

reply
I’ve used DeepSeek 4 Pro through Claude. It’s fine. Plans are similar to what sonnet/opus make. Same massage-the-plan -> massage-the-code loop. Maybe the code is a bit worse, but that’s the “months behind” thing.

The thing is, the vast majority of code tasks aren't a venture into the unknown. We as an industry mostly build CRUD interfaces and dashboards. That can be achieved quite well, with supervision, with frontier open-weight models.

reply
I think maybe you are both right. Perhaps AI coding assistants just don't need to be all that smart in many cases, so open weights models are fine. At the same time, frontier models are advancing in other domains, like mathematics, where raw intelligence is a more important factor.
reply
I can’t compare raw intelligence of these models, and I certainly can’t say anything about their advances in mathematics (without repeating press releases). But, erm, does it really matter? It’s not like some engineer somewhere will vibe-calculate how much weight a bridge can hold.

Well, yes, someone probably will do that. But I'm pretty sure there will be consequences for the engineer's errors in those vibe-calculations.

reply
There's no evidence there's any 5.5 Pro model distinct from 5.5 xhigh or whatever.

https://developers.openai.com/api/docs/models

reply
lol

(tap "view all" on your link, or ask GPT to search for you next time)

reply
Open models are pretty good at this point, but the problem is that they're limited by the tooling and infrastructure around them. For example, the last time I tried to set up web search with an open model, the experience was pretty bad.
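
The model side of it is usually just OpenAI-style tool calling against a local server; it's the search backend and the agent loop you end up wiring yourself. A sketch, assuming a local OpenAI-compatible server (e.g. vLLM or llama.cpp) and a web_search function you still have to implement:

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

    tools = [{
        "type": "function",
        "function": {
            "name": "web_search",  # you have to implement this yourself
            "description": "Search the web and return result snippets.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="local-model",  # placeholder id
        messages=[{"role": "user", "content": "Any news on open-weight releases?"}],
        tools=tools,
    )
    # Whether tool_calls comes back well-formed varies a lot by model and server,
    # which is exactly the tooling gap being described.
    print(resp.choices[0].message.tool_calls)
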
reply