The news isn't in how the models are compared; it's that Kimi K2.6 (and I'd add DeepSeek v4 Pro) is more or less equivalent to Opus, and that's already pretty big.

They are open source and cost waaaay less per token than American models.

I'm using them right now on the $20 Ollama cloud plan and I can actually work with them on my side projects without reaching the limits too much. With the $20 Claude Pro plan my usage can barely survive one or two prompts.

And I chose Ollama cloud just because their CLI is convenient to use, but there are a lot of other providers for those models, so you aren't even stuck with shitty conditions and usage rules.

To me that's a pretty bad thing for the American economy.

reply
Or maybe it is a pretty good thing for the American economy that you can get AI at cost rather than monopoly pricing.

You know, for the rest of the economy that is not big tech.

reply
It's not good for the current administration. American AI growth is the only thing keeping GDP from looking terrible.

And investors pumping money into the circular US AI money flow just make innovation everywhere else slower. If not for the GPU/memory drought, running stuff locally (or just in a competing cloud) would be far cheaper.

reply
> It's not good for the current administration

I don't know where to begin if you're leading with that. Anything approaching reality is not good for the current administration.

reply
deleted
reply
That is the very reason the open source models exist. Prestige and soft power to influence interest away from American models and hopefully slow down their progress.
reply
DeepSeek and other Chinese model makers are massively accelerating progress in AI not slowing it down. They're the only ones who still come up with real technical innovations while the proprietary model makers are stagnating.
reply
I'm as happy to see cheap open weight models as anyone is, and I'm in Europe and certainly not cheering the US on, but that's a bunch of unfounded hyperbole you just said.
reply
That is a pretty big assumption (aka bullshit) unless you have direct insight into the inner workings of the big US labs. Just because it isn't published doesn't mean that innovation is not happening.
reply
That's an unfalsifiable assertion with no evidence to support it, while all the visible evidence we have points to stagnation and merely incremental pushes among the big proprietary model makers. Even Claude Mythos, which was 'teased' to the public but not released, is reportedly mostly a scaled-up model that takes massive compute resources to run (and lengthy agentic loops to achieve its reported results in computer security). The polar opposite to what the Chinese labs are releasing now.
reply
Can you name some tangible AI idea that came out of Chinese labs?

I can name thousands that came out of Western universities.

I see a lot of rhetoric that only the Chinese labs are contributing to AI, while companies like Google and Microsoft are still publishing their research.

Unfortunately the domain of scientific papers is cluttered with AI slop, but the occasional serious papers that I find are still from Western labs, particularly Google Research or Microsoft Research.

reply
Any of DeepSeek's recent papers, which are mostly about efficiency; that's how their inference costs can be so low.
reply
I appreciate your reply but you are completely glossing over his point about how head to head model evals are useless lmao
reply
They are nowhere near as good as Opus yet. But Sonnet, yes. I'm using all of them in real life.
reply
> for the American economy.

There is more to the American economy than big tech.

And that's precisely why this has started: https://www.wired.com/story/super-pac-backed-by-openai-and-p...

reply
>There is more to the American economy than big tech.

Most of the stock market valuation is big tech, and most people's retirements are in the stock market, so... if the AI bubble bursts, a lot of the US will be affected.

reply
>Most of the stock market valuation is big tech

Which is why most of it is a bubble

reply
I do not know why this is downvoted. This is true.
reply
Agreed. I upvoted.
reply
deleted
reply
There are objective ways to compare models. They involve repeated sampling and statistical analysis to determine whether the results are likely to hold up in the future or whether they're just a fluke. If you fine-tune each model to achieve its full potential on the task you expect to be giving it, the rankings produced by different benchmarks even agree to a high degree: https://arxiv.org/abs/2507.05195

The author didn't do any of that. They ran each model once on each of 13 (so far) problems and then they chose to highlight the results for the 12th problem. That's not even p-hacking, because they didn't stop to think about p-values in the first place.

LLM quality is highly variable across runs, so running each model once tells you about as much about which one is better as flipping two coins once and having one come up heads and the other tails tells you about whether one of them is more biased than the other.
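
To make the coin-flip point concrete, here is a minimal sketch in Python with made-up pass rates (nothing below is measured from real models), showing why one run per model is close to noise while repeated runs are not:

    import random

    random.seed(0)
    P_A, P_B = 0.65, 0.60  # hypothetical true pass rates; A is genuinely better

    def passes(p, n):
        # Simulate n independent runs of a model with pass rate p.
        return sum(random.random() < p for _ in range(n))

    # Run each model once, 1000 times over, and count how often the
    # single-run shootout fails to crown the genuinely better model.
    misses = sum(passes(P_A, 1) <= passes(P_B, 1) for _ in range(1000))
    print(f"one run per model misses the better model {misses / 10:.0f}% of the time")

    # 200 runs per model: the observed rates settle near the true 5-point gap.
    a, b = passes(P_A, 200), passes(P_B, 200)
    print(f"A: {a / 200:.2f}  B: {b / 200:.2f}")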

reply
Those are objective metrics, not an objective way to compare; the comparing part is choosing which metrics to include.
reply
That's exactly why there's a ton of different benchmarking suites used for evaluating hardware performance.

I reckon we'll have similar suites comparing different aspects of models.

And, at some point, we'll be dealing with models skewing results whenever they detect they're being benchmarked, like it happened before with hardware. Some say that's already happening with the pelican test.

reply
> I reckon we'll have similar suites comparing different aspects of models.

The problem is that hardware benchmarks are harder to game. Yes, a hardware manufacturer can make driver tweaks so that, say, a particular game runs better, but the benchmark is still representative of the workload the user faces, and they can't change the most important part, the hardware; they can't gimmick their way through hardware design just to win benchmarks.

Meanwhile, in LLM land the game is to tune the model for the currently popular set of benchmarks, while the user experience is only vaguely related to those results.

reply
Fine-tuning for a specific task is even less realistic than the benchmarks shown in TFA.

Most people who have computers could run inference for even the biggest LLMs, albeit very slowly when they do not fit in fast memory.

On the other hand, training or even fine-tuning requires both more capable hardware and more competent users. Moreover, the effort may not be worthwhile when diverse tasks must be performed.

Instead of attempting fine-tuning, a much simpler and more feasible strategy is to keep multiple open-weights LLMs and run them all for a given task, then choose the best solution.

This can be done at little cost with open-weights models, but it can be prohibitively expensive with proprietary models.
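
As a rough sketch of that run-them-all strategy, assuming the Ollama Python client and a few locally pulled open-weight models (the model names below are placeholders, swap in whatever you actually have):

    import ollama

    # Placeholder model names; use whatever open-weight models you have pulled.
    MODELS = ["deepseek-r1:32b", "qwen3:32b", "gemma3:27b"]

    def candidates(prompt):
        # Ask every model the same question and collect the answers.
        out = {}
        for model in MODELS:
            resp = ollama.chat(model=model,
                               messages=[{"role": "user", "content": prompt}])
            out[model] = resp["message"]["content"]
        return out

    answers = candidates("Write a SQL query that finds duplicate emails in a users table.")
    # "Choose the best solution" is up to you: eyeball the candidates, run
    # their tests, or feed them back to one of the models as a judge.
    for model, answer in answers.items():
        print(f"--- {model} ---\n{answer}\n")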

reply
While I partially agree with you, there IS work being done to make the metrics comparable. Eg:

https://ghzhang233.github.io/blog/2026/03/05/train-before-te...

It just hasn't been widely adopted yet. And it might be in each lab's particular interest that it stays that way for a while. It's basically like p-hacking.

reply
This is a problem for OpenAI and Anthropic when they are bleeding money and desperately need to jack up prices by moving people to their very expensive API.

It's very difficult to justify spending on their models in a world where DeepSeek costs a fraction, Chinese open models exist, and they perform as well as what is considered the state of the art; it only depends on you adjusting how you use them.

A couple of days ago I canceled ChatGPT and started to try out DeepSeek. Let's see how it goes.

reply
My theory is we will end up in a similar spot to hiring people. You can look at a CV (benchmarks) but you won't know for sure until you've worked with them for six months.

We as an industry cannot determine if one software engineer is objectively better than another, on practically any dimension, so why do we think we can come to an objective ranking of models?

reply
Yes, the entire field of software engineering ran aground on not being able to test how well people can write software.

But I'm more optimistic about testing programming models. You can run repeated tests, and compare median performance. You can run long tests, like hundreds of hours, while getting more than a few humans to complete half-day tests is a huge project. And you can do ablation testing, where you remove some feature of the environment or tools and see how much it helps/hurts.
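
A minimal sketch of that kind of harness, with a stubbed-out agent call standing in for the real thing (every name below is hypothetical), comparing median scores over repeated runs with and without one tool:

    import random
    from statistics import median

    def run_agent(task, tools):
        # Placeholder for a real agent harness; a real version would run the
        # task with the given tools and return a score. Random noise here
        # just so the sketch executes end to end.
        return random.random()

    def ablation(task, tools, ablate, runs=20):
        # Median score over repeated runs, with and without one tool.
        with_tool = median(run_agent(task, tools) for _ in range(runs))
        without = median(run_agent(task, [t for t in tools if t != ablate])
                         for _ in range(runs))
        return with_tool, without

    with_tool, without = ablation("fix the flaky integration test",
                                  tools=["shell", "web_search"],
                                  ablate="web_search")
    print(f"median with web_search: {with_tool:.2f}, without: {without:.2f}")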

reply
The CV-to-six-months analogy is actually exactly right and it's also why benchmarks for hiring people stopped being useful. The signal that holds up is what you see when something breaks, which is hard to compress into a number.
reply
this smells like an ai-generated comment so much
reply
Not many things are broken in as many ways as hiring is these days. I hope we do not end up there.
reply
Terrible comparison. A CV is just a list that tells you barely anything about performance, and that's when the candidate isn't lying to get through the HR filter.

And we can judge developer performance; it just takes six months to a year of working with a team, so the metric is hard to get.

reply
You do not interview 1000 rounds on problems you're actually solving. If you did, hiring would be fine. Minus the social fit aspect, which isn't as relevant for a model.
reply
A pretty simple one would be to have every model try to one-shot every ticket your company has and then measure each model's acceptance rate.
reply
Except that if you tried one-shotting your ticket twenty times, at different hours of the day and on different days of the week, you would see enough variation to move the benchmark even if you used the same model every time. Much more so if you fiddled with the thinking or changed the prompt.

Because of non-determinism, because of constant updates and changes, and because the models are throttled according to the number of users, releases, and so on.
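
To put a rough number on that variation, here is a small sketch using a standard Wilson confidence interval with hypothetical acceptance counts; nothing below comes from a real eval:

    from math import sqrt

    def wilson_interval(successes, trials, z=1.96):
        # Wilson score interval for a binomial proportion (95% by default).
        p = successes / trials
        denom = 1 + z**2 / trials
        centre = (p + z**2 / (2 * trials)) / denom
        half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
        return centre - half, centre + half

    # Hypothetical counts: model A gets 26/40 tickets accepted, model B 22/40.
    for name, ok, n in [("A", 26, 40), ("B", 22, 40)]:
        lo, hi = wilson_interval(ok, n)
        print(f"model {name}: {ok}/{n} accepted, 95% CI ({lo:.2f}, {hi:.2f})")

With counts like those the intervals overlap almost entirely, so a 26-versus-22 split says nothing about which model is better; you need many more tickets, or repeated attempts per ticket, before the ranking means anything.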

reply
You never get "the same" Steph Curry, he might be tired, annoyed by a fan, getting older... but if he and I were to throw 100 3-pointers, we could all correctly guess who will perform better.
reply
Good point.

But I use Codex and Claude daily (work and hobby respectively). And there are days where one or the other just seems to have gotten up on the wrong side of the bed. Or is just being lazy. Or is suddenly super-powered, doing everything including what I asked it not to. (To be fair, the same thing happens with myself. :/)

I am convinced that if I was bench-marking, I would be convinced these are different models on different days.

[This conviction may say more about me than about the model.]

reply
Unfortunately, you're probably right, but the cock measuring contest is going to keep escalating because the billionaires and VC backers need to _win_. And the Psychosis is going to produce some horrible collateral damage.
reply
That was my thought too.

> The Word Gem Puzzle is a sliding-tile letter puzzle. The board is a rectangular grid (10×10, 15×15, 20×20, 25×25, or 30×30) filled with letter tiles and one blank space.

Just last week my superior asked me to implement that for a customer. /s

Maybe some real, real task would be good? Add some database, some REST, some random JS framework and let it figure out a full-stack task instead of creating some rectangles?

reply
Giving a real, relatable task like that is a memory exercise, not a reasoning exercise. The training dataset has tens of thousands of apps like that.
reply
[flagged]
reply
So like Open Router?
reply