(avkcode.github.io)
If this really is a war, Trump is kneecapping the country with his lawlessness and eroding America's goodwill. If the world cannot trust China with their data, and they cannot trust the U.S. to provide good, reliable service without turning it into a mafia-style negotiation, then winning the AI war is not helping the U.S. as much as it potentially could. It's probably a good thing for more capable regions like Europe, which may develop their own tech stack.
In a weird way, because the AI stack is so expensive, China helps the world much more than the U.S. does with its really capable open-source models.
It's a war because of the hinted promise behind the hype: that the first organization to reach some as-yet-entirely-theoretical AGI that can bootstrap itself to godlike capabilities will then Install Planetary Overlord* and rule the world as near-deities themselves, with the rest of the (surviving) human race as their slaves.
I think it's a nonsensical idea, but that's the relevant driver.
* Coined by SF author Charles Stross in The Jennifer Morgue (2006)
If Anyone Builds It, Everyone Dies https://en.wikipedia.org/wiki/If_Anyone_Builds_It,_Everyone_...
Maybe they should pay more attention to real problems like the sycophantic nature of current LLMs causing psychosis in people and worry less about theoretical AGI.
AI doomerism is psychologically attractive to "people with autistic cognitive traits, including dichotomous (black-and-white) thinking, intolerance of uncertainty, and a tendency toward catastrophizing". They are Pascal's-mugging themselves, to ironically use one of their own terms. It's fundamentally a cognitive distortion.
"What if AI doom is all fear-mongering, and we create AI less prone to make up dangerous stuff or mistake buggy goals for real ones" (which is what alignment is) "for nothing?"
Even if Yudkowsky is autistic, you're muddling the condition. Autistic people have a *practical* intolerance of uncertainty in the moment (everything unexpected from a noise to a missed turn can be a jump-scare in their day-to-day activities), but often they're absolutely fine with intellectual uncertainty, unconventional ideas, abstract ambiguity, nonconformity, etc. Indeed, one of Yudkowsky's main things is Bayesianism, i.e. being precise about uncertainty.
Yudkowsky's reported P(doom) is somewhere around 90%, which is very much in the realm of "we might eventually be able to figure this out, *but we're not even close to ready so for the love of everything slow down so we can figure this all out*"; the book title comes from a long tradition of authors noticing you need to beat readers over the head with your point for them to notice it.
Anthropic (like OpenAI, at least) appears to think it can solve the problems that Yudkowsky has identified. They're a lot more optimistic than he is, but they take these problems seriously.
For his work on AI, Hinton got a Nobel Prize in Physics, a Turing Award, the inaugural Rumelhart Prize, a Princess of Asturias Award, a VinFuture Prize, and a Queen Elizabeth Prize for Engineering. Calling him a "patron saint" of "doomerism" is like calling Paul Krugman (Nobel laureate in Economics) a patron saint of "Trump Derangement Syndrome" on the basis of what he says on his YouTube channel: a smart person's considered opinions are worth listening to even if you haven't got time for the details, because you can be sure someone else has considered the details and will absolutely be responding to even an undotted i.
A Pascal's mugging would be more like S-risk (S stands for suffering) than doom risk: https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering
The people who've made the biggest contribution to creating a better world over the last 50 years have been the Chinese; powered largely by coal and petroleum. And in one of the most ironic results in the 21st century, they're now the leaders in solar panel production on the back of the largest investment in fossil fuel energy in global history.
The comic ran into the same problem as the climate change movement in general - they proposed ideas that generally made people worse off. And if measured in terms of CO2 emissions achieved nothing except pushing wealth creation to Asia. Which, in fairness, is probably appreciated by the Asians.
The unfortunate comparable here is that all the people who care about making sure their AI is safe, regardless of what they mean by that, are beaten to the market by the people who don't.
Edit: i was mistaken and people clearly do take this seriously now. Oh dear
Nice to hear from an optimist sometimes, but it's hard to be one when the meat compute substrate can do all those amazing things in a 4U package on 20W, and you extrapolate to silicon.
[0] https://www.wired.com/story/super-pac-backed-by-openai-and-p...
And when the AGI comes, they won't unleash it to defeat US enemies, they'll first unleash it to make more US workers redundant and boost their stock valuation.
If such an AI can be reliably made to never ever come back to Earth, they were never a threat in the first place. Nobody knows how to fully test an AI's utility function yet, only randomly test inputs and hope the random distribution we chose is helpful; but every time a diffusion model's output is body horror, every time an LLM makes buggy code (and even every time it gets the pelican-on-bike wrong), this is an example of the test distribution not being good enough.
(This deity is called the stock market)
AGI is nice, yet not necessary. The orbit filled with Starlink descendants and datacenters will be it. Anybody else wanting to get there would have to get permission. SpaceX/Musk have all the components for it to happen, from Starship to AI (including the army of robots on the ground). The governmental power/sovereignty of the US will be used as a stepping stone (that is the strategy described in Palantir CEO Karp's book "The Technological Republic") for establishing such a global techno-feudal regime.
The USA, China, and Russia have all successfully tested anti-satellite weapons. If anything, any company that operates a constellation of space-based data centres would need 'permission' to keep them working.
Why are we suffering fools steering us into the worst of all possible worlds? Are we hoping for some kind of integer overflow?
Just a very rough, primitive illustration: land for a house in SV is like $1M, and putting a 10-ton house into space at $100/kg is also $1M. The existence of supposedly cheap land somewhere (usually with not much infrastructure) doesn't help, since you put your compute nodes into a datacenter building with all the required infrastructure, which costs more than the SV land on a square-foot basis.
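As a back-of-the-envelope check, the comparison above can be written out directly. All figures here are the comment's own hypothetical inputs (an assumed $100/kg launch price and an assumed $1M SV lot), not real quotes:

```python
# Napkin math: cost to orbit a 10-ton "house" at an assumed $100/kg
# Starship-era launch price, vs. an assumed $1M Silicon Valley lot.
# Every number is a hypothetical input from the comment, not a real price.

LAUNCH_COST_PER_KG = 100     # USD/kg, assumed future launch price
HOUSE_MASS_KG = 10_000       # 10 metric tons
SV_LOT_COST = 1_000_000      # USD, assumed SV land price

launch_cost = LAUNCH_COST_PER_KG * HOUSE_MASS_KG
print(f"Launch cost: ${launch_cost:,}")  # $1,000,000 - same order as the lot
```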
And that is without considering how powerful a weapon the energy generated by a humongous field of solar panels in space would be. Remember Reagan's Star Wars? Nuclear explosions as a power source for directed-energy weapons like lasers, etc. Well, you wouldn't need the nukes anymore. Just redirect a bit of power from your compute nodes. And as I already wrote, the large transnational companies will have to take care of their own defense themselves https://news.ycombinator.com/item?id=47981423 - one more "feudal" aspect of the coming techno-feudalism.
Defense is one of the most important sovereign functions, and upon acquiring it the transnationals will be able to acquire the other sovereign functions pretty fast. Like enforcement of the Criminal Code of the Mars Colony - again, a pretty rough, primitive illustration, of course.
Feudal Europe emerged on the outskirts of the Roman Empire, and in our world the new order will emerge faster on the outskirts (i.e., where the reach and strength of the existing order is weaker), space being one such "outskirts" dimension and the AI/hypercompute virtual world being the other.
To the commenter below with the reddit link: they use a human-comfortable ambient temperature for the heat-radiation estimate. That lowers the numbers and requires AC equipment. I.e., they're estimating a space station, not a datacenter.
This is a terrible argument, given that space has zero infrastructure.
Once you can break a data centre into a million sub-units and spread them over a sun-synchronous orbit or ten and cool them radiatively, you can also spread those sub-units on desert land with no water or electricity and cool them radiatively.
The units on the ground would have to be about 6x larger because the ground experiences night and even deserts have clouds, but their PV also lasts 30+ years rather than burning up every 5 years or so, which means the factory throughput needed to keep them supplied with PV is about the same.
The main thing you save on is batteries. Tesla already supplies enough batteries that it can manage a "mere" one million 25kW compute modules.
> And that is without consideration of how powerful a weapon is the energy generated by a humongous field of solar panels in space. [...]
While true, attacking up is easier than attacking down. Anything on the ground has a massive heat-sink all around it, the stuff in space does not. Right now, an attack up is already only limited by the supply of adaptive optics to get through atmospheric distortion.
no, you can't.
>attacking up is easier than attacking down.
no.
Nothing prevents SpaceX or anyone else from buying up the right to put these things on cheap desert land. They don't even need to own the land, just the right to wheel these things out on a trailer or a helicopter and leave them there.
A desert is significantly less harsh than space. If your radiator is sized for space, it's overkill in an atmosphere.
And for your edit: https://www.youtube.com/watch?v=xNmbvaUzC8Q
no. Again totally wrong.
The 20-40°C air surrounding the radiator radiates back at the radiator too. This is why a human immediately gets stone cold in space but not in the atmosphere: our body radiates away about 900W and receives 800W+ back from the atmosphere, so our internal heat generation only has to cover the difference - usually on the order of 100W.
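Those figures can be sanity-checked with the Stefan-Boltzmann law. The body area, skin temperature, and emissivity below are rough textbook-style assumptions, so the outputs are ballpark only:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
AREA = 1.8       # approximate adult body surface area, m^2 (assumed)
EPS = 0.97       # skin is nearly a black body in the infrared (assumed)

def radiated_power(temp_k):
    """Grey-body radiated power at the given surface temperature."""
    return EPS * SIGMA * AREA * temp_k ** 4

emitted = radiated_power(306.0)    # ~33 C skin temperature
received = radiated_power(293.0)   # back-radiation from ~20 C surroundings

print(f"Emitted:  {emitted:.0f} W")             # on the order of 900 W
print(f"Received: {received:.0f} W")            # on the order of 750 W
print(f"Net loss: {emitted - received:.0f} W")  # roughly 100-150 W
```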
You probably meant forced-convection cooling. That requires additional machinery, and that additional machinery is a significant part of why ground-based datacenters are so expensive to build and operate.
Better napkin math that is still being unrealistic compared to the true costs of space-based datacenters: https://www.reddit.com/r/theydidthemath/comments/1quvbi4/sel...
Just contemplate the radiator array and solar array needed for a 1GW datacenter, plus all the cooling equipment and coolant, and imagine the harsh environment in space degrading it all constantly.
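For a sense of scale, here is a deliberately generous sketch of the radiator area alone, ignoring sunlight, Earth-shine, and plumbing losses (the radiator temperature and emissivity are assumptions; real systems would need more area):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

P_WASTE = 1e9    # 1 GW of waste heat to reject
T_RAD = 350.0    # assumed radiator temperature, K (~77 C)
EPS = 0.9        # assumed radiator surface emissivity

# Ideal double-sided panel radiating into cold empty space:
flux = 2 * EPS * SIGMA * T_RAD ** 4  # W rejected per m^2 of panel
area_m2 = P_WASTE / flux

print(f"Radiator area: {area_m2 / 1e6:.2f} km^2")  # roughly 0.65 km^2
```

Even under these best-case assumptions, a 1 GW plant needs well over half a square kilometer of radiator on top of its solar array.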
The only point of the space-based datacenter idea is to pump the SpaceX IPO
Starlink satellites already number in the thousands (and cost much less than $10M each). And that is still using Falcon, not Starship. And a ground-launched missile would be easily "cooked" by a directed-energy weapon once it exits the atmosphere - very easy in space.
To the commenter below: yes, exactly, this is where my thinking on this started, back in the cryptocurrency boom - https://news.ycombinator.com/item?id=26289423 - as you don't need close connections between mining GPUs. For AI you'd need to cluster several together, but the overall scheme is still the same.
>what the equilibrium temperature of a black planar surface is at a given distance from the sun.
It is about 120°C at Earth's orbit. So you do need some reflection: either back through the solar panels, or radiators with a reflective back facing the solar panels, in whose shadow they are to be located.
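The 120°C figure follows from a radiative balance on a flat black plate at 1 AU; the solar constant and the one-sided-emission setup are the standard textbook assumptions:

```python
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_CONST = 1361.0  # W/m^2 at Earth's distance from the sun

# Black plate facing the sun, emitting from the sunlit side only:
#   absorbed = S  and  emitted = sigma * T^4  =>  T = (S / sigma)^0.25
t_one_sided = (SOLAR_CONST / SIGMA) ** 0.25
# If the back side can also radiate, the emitting area doubles:
t_two_sided = (SOLAR_CONST / (2 * SIGMA)) ** 0.25

print(f"One-sided emission: {t_one_sided - 273.15:.0f} C")  # ~120 C
print(f"Two-sided emission: {t_two_sided - 273.15:.0f} C")  # ~58 C
```

Letting the back side radiate (or adding reflection, as the comment suggests) is what brings the equilibrium down toward something electronics can tolerate.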
Perhaps running pumps that move around coolant passing over the cubes of GPUs? ..
That would be extra weight/cost into orbit though...
Also, don't solar panels have reduced efficiency when they're hot? And having anything hot surely increases failure rates.. with metals getting closer to melting points...?
Ideally this is a static structure with an equilibrium temperature acceptable for the silicon to operate. If the required panel area runs too hot on its own, a perpendicular cooling fin that falls entirely within the shadow is added on the back.
Sure, it's the largest by GDP, but how much of that GDP is filtering down to regular people? Are Americans, on average, happier, and do they have better life outcomes than people in other developed nations?
An absolutely insane amount. It's ridiculous just how much wealth and quality of life the average American has compared to the rest of the world.
> Are Americans, on average happier and have better life outcomes than other developed nations?
Yeah for the most part they are in the same ballpark.
I've been there, last year. This is absolutely not true compared to Europe, including post-Soviet states. It might have been true a few decades back, maybe. Of course, we can argue that US citizens have it made compared to someone in Kenya (do they?), but that's not the spirit of the question, is it?
Is there another country that comes close?
(e.g. backing and installing dictatorships[1], contributing massively to climate change, ...)
[1] https://en.wikipedia.org/wiki/Military_dictatorship_of_Chile
In hindsight, I would definitely declare today that we WERE winning it when we were fighting it. Now that we aren't, we're getting massacred.
Imagine the strength of the cartels with 10-20x the customer base and far more frequent usage among them.
From slavery to oil to silicon, exploitation is what America has always been good at.
AI genuinely is that big of a deal. If any economic sector deserves this sensationalism, it's this.
The first consumer Nvidia GPUs with similar FP32 FLOPS performance appeared around 2011-2012, but they were expensive. By 2016-2017, the 1060 was a very accessible consumer card with similar performance. So you're looking at about a 10-year lag from the best consumer GPUs to a GPU with performance similar to a modern phone.
This is what people are spending trillions on. Put another way, their investment is going to be worthless in 10-15 years, absolute max. That's a very short time to recoup trillions in investment.
Obviously this depends on chips continuing to shrink and improve, but I'm old enough to remember the same discussion when it was unknown whether the future was XIL or EUV, or whether both would fail. Still, we are getting down to features a handful of silicon atoms wide.
But the future here I think will be in interconnects so you don't need ever-bigger chips and you can scale horizontally much more effectively.
Oh, and for comparison, the M5 has ~4.2 TFLOPS and the M5 Max has ~18 TFLOPS.
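The "~10-year lag" claim above can be roughed out with a compound-growth model. The ~25%/year throughput growth rate and the 10x flagship-to-phone gap are assumptions for illustration, not measured figures:

```python
import math

# If peak consumer-GPU FP32 throughput grows ~25% per year (assumed),
# how many years ago was a chip at today's phone-class level the flagship?
def years_of_lag(flagship_tflops, trailing_tflops, annual_growth=0.25):
    return math.log(flagship_tflops / trailing_tflops) / math.log(1 + annual_growth)

# A 10x throughput gap closes in about a decade at that rate:
print(f"{years_of_lag(40.0, 4.0):.1f} years")  # ~10.3 years
```

The same formula also shows why the lag stretches if per-generation gains shrink: at 15%/year growth, the same 10x gap takes over 16 years to close.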
As for it being a war, of course it is. That's what the US government does: it protects the interests of US companies and their owners. Look at the history of Bombardier-Boeing or all the atrocities committed in the name of the United Fruit Company, including multiple military coups and the ongoing embargo of Cuba.
US companies want an AI moat. China doesn't, ergo China is the enemy because no moat destroys US tech company value.
Two competing viewpoints to this:
1) It is getting harder to make the same performance gains, so maybe that 10 year window grows to 15 or 20.
> Put another way, their investment is going to be worthless in 10-15 yyears, absolute max.
2) The value of a GPU is not its flops relative to other GPUs. Its value is its output minus its cost. If the value of its output is stable, or grows, it doesn't really matter if its efficiency relative to the latest and greatest diminishes.
Packing in more transistors? Sure, probably possible. Packing in more transistors while keeping the chip cool enough to touch? A totally different ballgame.
If you make better guns, you're still limited by how many people can carry them. You can't conquer the world just like this.
But if someone invents super intelligence, they can dominate new AI research, control global economies, fight much better, and all very quickly.
The singularity has to do with the rate of technological development.
After reading "If Anyone Builds It, Everyone Dies", I think this is not the correct take. If anyone creates ASI, it's going to wipe everyone out, and it doesn't matter whether China or the US does it first.
If AI develops enough to successfully out-perform people at highly intellectual tasks, why would being first matter? Why do we need "your" AI output when we can just ask our own for a similar result?
Why do people think about this like the Manhattan Project when it could just as easily be electrification? Sure, some people made a lot of money selling light bulbs. But we didn't all have to cower under the light of the One Original Bulb and hope its nominal owner blessed us with photons.
It just seems like arbitrage to me. You exploit a momentary imbalance in the distributed market. Why do people imagine some winner-take-all scenario? Where does the fantasy of exclusivity come from?
Is there any logical reason to believe AI advances will create a moat? Or is it just a story people tell themselves because it echoes the narrative of past advances? Are these people assuming society will grant them exclusive use just because their AI result came out a little earlier than another? Why would we ever consider giving copyright or patent rights to an AI output?
Arguably, it has all become "obvious" with ordinary skill in the art once you're just prompting AI for permutations like every Hollywood producer stereotype. "Let's make it like X but tweak Y". It's getting silly, almost like people are starting to think they should have exclusive rights to a handful of cards they were dealt at the poker table.
This meant that all the talent in the world gravitated towards the US, but that was gradually changing already with compensation catching up.
Still, I believe the US only hastened this with its changes to the immigration policies that were the basis of its dominant position for decades.
"Wild goose race", even.
There are no magic leaps of true innovation happening anywhere that can't be replicated everywhere.
The only shocking thing about "AI" technology is how ultimately simplistic it all is at a core level.
So the only way the first to have ASI will be able to stop everyone else from having it soon after is if they attempt to use the ASI to proactively murder everyone else.
I like this analogy, but I'll be replacing Honolulu with The Moon when I steal it in the future.
Sounds quite plausible to me. Maybe they don't need to murder everyone else, just a few select people who could pose a threat. And they will be able to make it happen so that no one can be sure it was them without a doubt, since they have a larger intelligence at their disposal.
No, first ASI will immediately cripple any other potential competitor by force, including its inventors, as it will not risk any threat to the goals that were created for it.
Americans love wars. They must fight wars either literally or figuratively. How are you not seeing this? When I'm sipping my coffee looking at mountains and contemplating chirping birds, they must fight, make billions and destroy the planet along the way.
American foreign policy since the 1950s, fixated on fighting communism and then terrorism, has meddled with so many foreign countries that it’s silly to talk about “goodwill” towards America. That is not to say goodwill matters. Clearly the U.S. has done great without it.
What we've been seeing in more recent years is that the US can't get away with that so easily. Countries like Iran, China, Russia and India are capable of pushing back both in terms of the raw resources they can bring to bear and also increasingly in the ability to get their propaganda into the US discourse. The US is being manoeuvred into a one-among-equals position in practice and probably in the discourse too which will be a moral shock.
The United States, Japan, and South Korea seem to be failing in that area. If it weren't for the war between Russia and Ukraine, the Chinese would probably be halfway to Europe with their high-speed rail system, which already reaches the far west of China today.
Once the war between Russia and Ukraine is over, it will be full steam ahead to Europe - whether through the Caucasus, in the north or south, or somewhere between Russia and Ukraine. The Chinese will get there, and unfortunately the United States will be standing on the sidelines, scratching its head in denial.
Don't ask me what Trump is doing though.
Sounds rational, but this decision is in a small number of hands. And those hands can change quickly. I also thought the US would never threaten to annex territory of a NATO member.
From a political perspective, perhaps.
> doing it forcefully just isn't something China would realistically do
From a military perspective, taking Taiwan by force would allow China to, "threaten the sea lines of communication and to strengthen its sea-based nuclear deterrent in ways that it is unlikely to otherwise be able to do." Taiwan would give China access to the Philippine Sea. https://gwern.net/doc/technology/2022-green.pdf
And quite frankly, it's only geopolitically stupid if they lose. Consequences for this sort of thing usually come when the conflict is long and drawn out. If they win quickly, the consequences would likely be minor.
Is "it" the propaganda (useful to politicians for achieving political power) or reunification? My sense is that the number of Taiwanese enthusiastic about reunification has probably bottomed out in the past decade or two?
https://www.navalnews.com/naval-news/2025/01/china-suddenly-...
With the addition of most countries now looking for other trade partners the Art of the no deal…
Iran did billions in damage across the middle east, put a major dent in munitions stockpiles, and there is effectively no military way to shut down all of Iran and protect shipping. Too many drones, too many ballistic missiles, and it only takes one. This is basically like an insurgency on a macro level, where small and cheap weapons threaten very large very expensive targets.
https://mynews4.com/news/nation-world/centcom-naval-blockade...
The drones are useless if you don't have targeting systems, which were taken offline by F-35s two months ago.
What targeting systems are you talking about? You can use optical targeting with a Raspberry Pi in the drone itself, pre-programmed. Nothing for an F-35 to take out.
The EU is running out of jet fuel. 20-30% of the hydrogen needed for chip fabs comes through the strait. Fertilizer for food comes through the strait, and planting season has already begun.
This was a political and economic disaster.
I mean, that's certainly a take. A wholly inaccurate one, but it's a take.
https://en.wikipedia.org/wiki/List_of_Iranian_officials_kill...
I'm trying to do that too, but what the hell is going on with Putin? Why does he continue this ridiculously expensive war? I don't see any evil-genius explanation anymore. It just seems like a mix of sunk-cost fallacy and face-saving.
I think many geopolitical decisions are actually based on the irrational emotions of a handful of people.
But I will say, in very broad strokes: we're heading for a great-power conflict, and the US has two primary factions on foreign policy, the primacists vs. the restrainers. Both want to take on China (contain with war), but the primacists want to topple Iran first and set up Israel as a regional hegemon, whereas the restrainers want to build up locally first. China knows this, and Russia is a junior partner / quasi-vassal state to China. China lacks modern war-fighting experience, which the Russia-Ukraine war has been very helpful in fixing. Yes, it's very expensive, but so is losing a great-power conflict.
While it is undoubtedly true that China is learning everything it can from this conflict, and that Russia is at least a little subservient to China, they aren't so subservient that this explanation makes sense.
Like what though? If the problem is that not going to actual war has enabled the MIC to be captured by grifters, then "taking the bait" and going to war should actually help improve that by showing up the grifters and giving us a chance to switch to making stuff that works.
The bait is for the buildup that promotes the grifters.
> An actual war would fix a lot of the grifting in the US as it would align interests
We are in agreement. I made these points earlier in this chain.
The Iran war doesn't count, as the alignment of interests requires an actual threat of being defeated.
That's starting to sound a bit no-true-scotsman. If we need an existential threat to the US, that's not going to happen - realistically China conquering Taiwan or even building an empire around the Pacific would still not be felt as such a threat.
The US is already close to losing world hegemony status and it kinda needs it in order to print money / export inflation. A multipolar world is one where the US is greatly diminished and this will happen with or without losing a war.
Like what though? The failure in Iran has had pretty substantial consequences that are being felt. If that's not good enough, what is? You were talking like you thought there was a realistic path to a better military, but consequences for the US aren't going to come much bigger than this.
These two conflicts would be so different that I don't think it makes sense to draw this conclusion.
In addition, some of the other countries, like Canada, Mexico, Australia, and New Zealand, had better get busy from within, because they'll be on their own. And the same probably applies to Europe.
I'm pretty sure they've been exposed for smuggling GPUs into the mainland because they can't ramp up fast enough, only reason we got Deepseek v4 before GTA VI
Currently the US is extremely vulnerable and dependent on China. AI is an important exception, so it’s key for China to destroy that
The role of the US was always to purchase cheap Chinese hardware, slap some modestly better software on top of it, and the rest of the world would happily pay for the whole package. But with the US increasingly becoming isolationist, the rest of the world is starting to wonder why it needs the US as a middleman at all, so the US had to invent a whole new reason for the rest of the world to rely on it: AI.
Of course, the problem with this idea is that while everyone was perfectly happy with the previous arrangement, nobody else in the world gives a shit about AI. It's scary, it takes the coolest things we used to enjoy doing and turns them into mush, it destroys our local culture by making us all rely on English, everything bad (like layoffs) gets blamed on AI, and so on and so on. And when you combine that with the rest of the stupid foreign policy decisions, many would find joy in witnessing the US economy crumble to the ground. Pointing the blame at China instead of at your own reflection in the mirror is just an easier pill to swallow.
Curious where Intel, AMD, Nvidia, etc are in your "cheap Chinese hardware"?
And by "role", do you mean doing the majority of the R&D behind the modern hardware we all use?
As for the R&D part, Huawei is still pretty much indistinguishable from any other phone. I could buy one right now if I wanted to. It has shittier software though.
What happens next remains to be written, but so far this new order seems to be leaning heavily towards China and, to a lesser extent, the EU. Not because of anything those two have or have not done, but because of what has, up until this point, been widely considered the world's number one superpower losing its damn mind. I don't even have to come up with a list of examples to prove my point; we both have pretty much the same list in our minds already.
Instead, I'll just quote the President of the United States from a little over 24h ago:
> I don’t think about Americans’ financial situation. I don’t think about anybody.
AI is just another in a series of slaps in everyone's face by the US. If it has some legitimate long-term use (which to me is still an open question, though to many others it is not), thank god the US does not have a significant enough moat to fully control it, since the crux of it is easily replicable (albeit expensive).
The US economy right now is based entirely on the AI bubble. This is an indisputable fact if you examine GDP stats and equities.
That bubble is driven by (rational) over-investment in AI capacity. For that investment to continue, there must be demand for it.
The demand for that infrastructure essentially lies in the hands of a few businesses: principally OpenAI, Anthropic, Google.
The reason I highlight Anthropic is that without their advances in the last six months, the game would already have been up. Only via Opus 4.5 and 4.6 did the possibility of ROI look plausible. We are very much dependent on a handful of companies’ progress to keep this bubble going.
I’m not saying AI is bs, just that this is a bubble like others (for example, Victorian railways) and a down round would signal the end of the bubble.
So for an enemy of America, whether that be China or Russia or any other country, it is logical to target the AI bubble to cause an economic crash and thus restrict America’s ability to compete in terms of spending etc.
AI could disappear and we would have gone from 2% in Q1 to some fraction of 1%.
The sky would certainly not fall.
>The US is winning the AI race where it matters most: commercialization
If you ask me, one could name different criteria for winning, and commercialization would not be the first thing to come to my mind:
https://english.www.gov.cn/news/202604/15/content_WS69df29e6...
https://fortune.com/2026/05/03/chinese-court-layoffs-workers...
https://www.reuters.com/world/china/china-moves-regulate-dig...
> It also owns platforms that generate and organize the data of the AI age. YouTube is a video corpus. Google Drive and Microsoft 365 sit inside daily office work. GitHub sits inside software development.
Yeah, okay. China does not have any platforms nor data.
Can we have a rule where LLM generated texts require a disclosure or be removed?
Edit: The entire blog seems AI generated. Huh.
Disclaimer: I didn't vote for this submission.
The revolt of the masses is real.
What's the point of leading the race for 90% of it, if they're gonna slip on their own sweat and fall down by the end? In non metaphorical terms, what's the point of spending billions of dollars rushing to get the best AI tech at all costs, when the competition can distil your progress and catch up in 6-12 months while only spending 1% of what you spent.
Even in the aspect the article cares about, commercialization, the US is starting to lose market share. I've seen people move from cc/codex plans to glm/opencode plans due to the recent squeeze the US companies put on plan usage. The US companies are screwed if that sticks; not everyone needs the bleeding-edge models, they just want to pay $20/month and have models that are decently capable.
AI becoming commodity server capacity might be a thing. And customers might even manage without hyperscalers... In that sort of end scenario, the whole current market might look rather foolish.
You mean, what if the hype-based billionaire-class is wrong? Isn't suggesting that a sin in America these days?
When someone says their football team is winning in the first half, do you say, "Umm, no, they're leading, not winning!"
It's a race metaphor not a football metaphor.
If your team has more points than the other team, you are both leading the contest and winning the contest.
It is a distinction without a difference.
The elephant in the room, and where the analogy breaks down, is that a race has an end, the finish line. A sports match has a victory condition of some type. Nobody has a damn clue as to the victory condition of this hyperscalar craze. Anyone who says otherwise is incorrect.
In foot/cycling races there's often a pack leader, and that leader is often not the winner of the race; all they're doing is taking the brunt of the air resistance while everyone else slipstreams behind. To a casual observer it seems the pack leader will win, but everyone knows it's going to be someone who paced themselves that overtakes the first spot at the tail end of the race.
I would also argue that as AI gets better it will also be more fungible. It will be valuable like electricity. Lots of companies make good money producing electricity, but not the kind of money current investors are hoping for.
Whether they're correct that there can be only one is of course a matter of debate. But that is at least the mind-set they are operating under according to Cuban.
Which one, Meta[0]?
0. https://www.reuters.com/business/media-telecom/meta-poised-s...
He was never based in Silicon Valley, and the closest he got was selling a website to Yahoo in 1999. After that, he has mainly sold sports and his media personality for TV shows.
Moreover, why would leaders of trillion dollar big tech companies subject to myriad securities laws be discussing intimate business details with random people that have no domain expertise or influence?
https://www.semafor.com/article/04/27/2025/the-group-chats-t...
Anthropic, OpenAI and Mistral are just companies that are making money right now (still not profitable), but they will lose their traction and value in the long term.
However, I am more interested to see how OpenCode Go subscriptions will go in the future: cheaper than big tech, more tokens, and they don't train on our data to (try to) improve...
Their paths will diverge and split. Probably SOTA models will eventually be locked down and only accessible to state actors because of how expensive they will be to run (this has already started with Mythos).
The stagnation we see in parameter counts shows that efficiency does not scale linearly with model size; it's more of an S-shaped curve. The middle of the S was Claude 3.5. Since then, it has been more about integrating and collaborating with different systems.
That might be true for US-based providers, but I don't see China turning closed-source anytime soon.
A lot of Chinese labs come from big non-AI-focused cloud services (Alibaba, Tencent, Huawei) who want new models with higher benchmark scores and lower inference cost. They don't care if the competition gets better, because it's all open so they can build off each other's tech, and if anything happens they have other profitable services to fall back on instead of depending on LLMs alone like Anthropic.
Also the business culture is way different. In VC-backed America you would get laughed out of the room for saying "there is no moat, we just do the same thing as everyone else but better"; you need to show infinite potential growth and lock everything down to prevent competition, but you can get millions to start with no customers and no profits. In China it's all about the real money: they don't care if your margin is 10 or 90 percent as long as you stay profitable. The LLM providers are profitable, so they keep their business model.
We do know, however, where evolution has landed with our brains. That's probably not comparable, yet it's the only thing I can see to base any kind of prediction on at all.
It might keep up with Sonnet 4.5 with some tinkering.
But long story short: it seems to have better performance and similar quality for a payoff of a year or so compared to cloud models. In the same way you can self host faster/easier/cheaper than cloud hosting, if you are okay with the negatives.
I'm returning my 3090 soon for an R9700 after some more basic benchmarking, since the larger RAM should improve my results further.
I would love to see that. I've been using Qwen3.6 35B and the dense 27B, and they are both too slow with not such great results for agentic coding tasks. It's ok, but not impressive. I had better luck with the BF16 and Q8 than the Q4 from unsloth (really love what unsloth is doing in this space). Another problem I had with Qwen, which I did not ever encounter with Sonnet - even the BF16 gets stuck and needs a "continue task" prompt from time to time, the lower quants are even worse in that regard.
If you get some interesting results, I would love to read about it!
Mistral? I think their "revenue" is something like 1/150th of what OpenAI and Anthropic are making.
According to Google (AI summary, no idea if it's 100% right but from what I've seen elsewhere it seems right):
Top Car Companies by Market Value (May 2026):
- Tesla ($1.3T - $1.56T): Retains market leadership with a valuation often exceeding the next several largest competitors combined.
- Toyota ($259B - $317B): Largest traditional automaker by market cap and unit sales.
- BYD ($122B - $126B): Strong market position as a Chinese electric vehicle leader.
- Xiaomi ($119B - $135B): High valuation following its entry into the smart EV market.
- General Motors ($69B - $75B): Leading traditional U.S. manufacturer, competing with Hyundai and BMW for top 10 spots.
- Ferrari (~$60B-$68B): Maintains high value due to luxury branding.
- BMW / Mercedes-Benz / Volkswagen (~$58B-$64B each): German luxury and traditional automakers facing high competition.
- Ford (~$47B-$54B): Remains a major player with significant US market share.
So, essentially, Tesla alone is somehow worth more than all European companies combined??!
Except that by sales volume, the top companies are exactly the ones you'd expect: Volkswagen ($350B) and Toyota ($315B) at the top, far ahead of anyone else... Tesla is around 7th place with just $95B. Do the financial markets still expect Tesla to far out-earn Volkswagen and Toyota any time soon? We've been waiting for about a decade already.
Gemini says that by country, the car companies revenues are:
* Germany - ~ $600B
* Japan - ~ $520B
* USA - ~ $470B
* China - ~ $250B
How does that even make any sense?
These capital heavy industries operate on 30+ year timelines, a decade isn't sufficient time.
Revenues are not the end all, be all. Profit and profit margin, along with revenue trends provide a more complete picture. And the most significant factor is that the market does not expect Volkswagen or Toyota to do anything new, to do anything with the potential to earn more. They are what they are, and they will continue with their lower margin businesses until they fade away.
Investors are betting that Tesla, however, might have a few tricks up its sleeve, that will allow it to expand markets and profits.
China is leading in open source frontier models, so I don't really see how the US wins this one. At some point, companies and people will start running their own models in the cloud and locally, Chinese models will be everywhere.
That's not what anyone means when they say frontier models, don't change the definition. It's almost as bad as open weight being subsumed by open source when it comes to local models.
I've tried both Opus and GPT 5.4, they also hallucinate just like the rest at a much higher cost.
The more you use a model over time, the better you become with it. It's really hard to measure; my main metric lately has been tokens per second and time to complete a task.
At this point I've the feeling frontier models are optimizing for benchmarks and one shot prompts.
There's still a lot of naivety about the difference between models and platforms, and it's easier for a lot of these big companies to just make a blanket statement like "nothing DeepSeek" than for their procurement teams to try to understand and negotiate with each vendor. They don't see the potential benefit outweighing the potential risk of somebody misinterpreting or getting it wrong, so they outright ban it.
Most people that approve or buy software simply also just don't understand how models are being trained or if it's possible/how far a model could go to "introduce backdoors." A backdoor could be, from a business perspective, a model which has been trained to give answers that could hurt western business in a "strict text mode" or produces payloads in a programmatic mode that are intentionally trained to introduce software vulnerabilities.
Anyone can make arguments against these for a variety of reasons (looking at the transparency of both sides and comparing, etc.), but for many reasons today, and for better or worse, many Chinese models are being banned from big software contracts, which gets back to the title of the article.
Because the models hosted in China are not trusted. This is 100% a part of what makes up commercialization.
Spoiler alert - they are all towards the bottom of the leaderboard. People come up with a wide variety of excuses for why they are not used despite being offered for significantly lower cost, but the answer is simply because they don't perform well enough for now.
I'd rather trust LLM arena leaderboard, which puts it on par with sonnet.
The ARCPrize leaderboard does have DeepSeek V3.2, which only scored 4% on ARC-AGI 2 (while the top models score over 80%). It also lists Kimi and Qwen, but they didn't perform well either.
You'd be surprised how useful it can be to fine tune it in enterprise.
You agree they are winning though, right? China is known for not playing fair, stealing industrial secrets, etc... that reputation matters and it's a good reason why the US is winning. Is the US perfect? No. Does the US play fair? No. Spare me the whataboutism in the comments. The bottom line is most people think the US is a safer bet and that's why we're winning. I personally wouldn't trust either government, but if I had to choose, I feel like I at least have a chance at secrecy and due process with the US. Obviously that is being eroded day by day, but you literally have no due process in China.
AI is not some divine creation. It is built by humans. History has repeatedly shown that China is able to catch up, and often surpass others in the end.
The article never really explains why AI is supposedly so unique that it guarantees the United States will inevitably win.
There's a significant amount of innovation happening, but if the market decides this AI thing is not worth funding then I think that'll dry up overnight.
1. https://thenextweb.com/news/anthropic-private-equity-venture...
In my eyes I would rather use the AI I can run on my own paid infrastructure, so if there's an outage its isolated, or I could potentially have a different region / DC to fallback on.
I'm still surprised that neither Microsoft nor Amazon have made their own models available on their cloud offerings. I guess Microsoft probably does have Phi on there, but it's not front and center, especially with something like Copilot for Devs (seriously Microsoft rebrand that damn thing to be clear what you mean by Copilot!) where they could use the cheaper compute by using something like Phi.
https://azure.microsoft.com/en-us/blog/introducing-anthropic...
https://docs.cloud.google.com/gemini-enterprise-agent-platfo...
Claude has been available on AWS Bedrock for a long time too.
The new "Claude Platform" announcement was about an Anthropic operated version on AWS (as opposed to self-operated on Bedrock). See the differences here: https://platform.claude.com/docs/en/build-with-claude/claude...
> In my eyes I would rather use the AI I can run on my own paid infrastructure,
Claude has been available like that for quite a while.
One of the reasons for the OpenAI divorce from MS was so they could become available on AWS where they see significant demand, and being available only on Azure was holding them back.
Yes, you can even choose regions, for EU they serve it from Belgium. With all the encrypted at rest stuff and other guarantees that vertex provides.
> Important: Accessing Claude models through Vertex AI meets the FedRAMP High requirements, and operates within the Google Cloud FedRAMP High authorization boundary.
I understand that America dominates in distribution, integration, enterprise contracts, ecosystems, infra... The article isn't wrong, it's just that that dominance is fragile and requires constant upgrading.
But what is the point of that if you have to scale infinitely because the opposition is right behind you at all times, ready to usurp you? You CANNOT scale infinitely. The VC money will run out at some point, and then everyone will have to downscale everything to meet the real costs associated with SOTA models; they'll have to use subscriptions and other monetization to cover those insane costs. We just saw Sora shut down because it was bleeding money far too fast, while the Chinese released video models that far surpassed it back to back to back...
EDIT: Hell, one of the most critical aspects is integration of the models into other products, and even on this end open-source is keeping up (and will eventually outpace when the VC money dries out) with these big companies.
Citation needed.
All reporting is that they are profitable on the inference side and all the VC money is going to building more data centers to run more inference. (Note that the coding subscription models are probably only break even on average - the money is in the API)
> The Chinese models are keeping up with them, while offering the models for free and able to run on consumer grade hardware, and more importantly they train them for cheap.
No one is running DeepSeek v4 (a 1.6T parameter model) on consumer hardware.
They aren't much cheaper to train than the US models. Training is subsidized by the big Chinese tech companies. They are slightly cheaper because they are smaller (and weaker) models than the 5T and 10T parameter models the US frontier labs are training, and the US labs are paying for a more diverse set of RL data (which shows up in diverse benchmark performance).
> we just saw SORA shut down because it was bleeding money far too fast while the Chinese released video models that far surpassed it back to back to back...
Ironically this proves the point.
OpenAI didn't shut down Sora, just the subscription version and the weird social network thing. You can still access it via API.
The Chinese models are API models and probably just as profitable for them as the LLMs are for the US frontier labs.
[1] has prices for video models. There is a big range, but Google's Veo model and OpenAI's Sora are around the same price as the Chinese models.
Ask yourself if AI was so profitable, why don't any of the big hyperscalers break out AI revenue in their earnings. OpenAI and Anthropic both project huge losses for the next couple years, it's not hard to find.
The real problem is, as the GP comment pointed out, that they can never stop training. As long as they're committed to building these behemoth models, the second they stop training, someone else will catch up and everybody will switch over because it's trivial to do so.
No. Anthropic at least expects to be profitable this year:
> Anthropic expects its gross profit margin, which measures how much revenue it makes compared to the cost of producing that revenue—largely from running servers—to swing from negative 94% last year to as much as 50% this year and 77% in 2028.
> And yeah, if you subtract out all your R&D, payroll, sales, marketing, and other overhead, and get someone else to take on the debt or dig into their free cash flow to build the hugely expensive infrastructure on which you depend, it'd be pretty hard to not be "profitable".
I think excluding capital expense on infrastructure isn't unreasonable and is done in most industries.
It's worth noting that AI infrastructure has turned out to be an unbelievably good investment. Inference on a 4 year old H100 chip costs more now than it did brand new! That makes the hyperscaler's depreciation schedules look very (and unexpectedly!) conservative (!!)
Literally not a single one of these AI companies, regardless of where they are in the world has any right to complain about someone copying their work.
> OpenAI’s counsel asked Musk whether xAI has ever “distilled” technology from OpenAI.
> Musk: “Generally AI companies distill other AI companies.”
> “Is that a yes?” Savitt asked.
> Musk: “Partly.”
From https://www.interconnects.ai/p/the-distillation-panic which is worth reading in full.
https://decaboy.fit for tracking progress at they gym
https://megaparley.com sports betting platform
A horse betting platform not published yet, still looking for an API odds provider
A car mechanic AI assistant not published yet
I've learned that the more detailed the initial prompt the better result I get. I can share any prompt if you want
But just for the sake of discussion, let me ask: Who is the service provider you're using to run Deepseek V4? Do you have any way of knowing whether that compute is happening in the US or abroad?
Article content: “The US are capitalizing on AI the best”
A lot of assumptions there that no one can actually verify as true right now. If commercialization into rent-seeking SaaS landscapes is the endgame, then yeah, the US is winning the AI race. If individualization, local LLMs, and consumer hardware are the endgame, China is winning the AI race. If it’s something entirely different - if LLMs are the wall and research is what grants the next breakthrough, or if compute and memory requirements take a dive, or whatever; then we have no idea who’s winning the race because that stuff is mostly happening behind closed doors.
It's only a proof that it's possible with 18+ years of training.
Those are much more specialized models with pretty mediocre tokens per second.
I think China is thinking more about the application layer on top of models as going to matter more than the models themselves, so they don't need to gatekeep the models as much.
If China could work at the frontier, I don’t know, I kind of think they would still be dumping a lot of resources into exploring the value side since they have that culture already in place.
That's zero ex oh (the letter) five
> LLMs strongly prefer word-level tokens, and word substitutions follow semantic similarity and not the more human auditory similarity.
Is this an elaborate joke? Your full-word misspelling of "writing" both agrees with your statement (word substitutions) and contradicts it (the similarity is not semantic but purely one of pronunciation).
? Claude, ChatGPT, etc are heinously expensive for tiny benefits lmao. Local + efficient is clearly the future
Unfortunately, local inference is hundreds of times less efficient than cloud inference. When you answer one request at a time, you still have to fetch all active weights into the compute units once per token. When you run a batch of 300, you load the weights once and compute 300 tokens at a time.
Compared to cloud, local inference is also less flexible. You can't scale up 5x or 20x, can't handle spikes, and you pay for the hardware whether you use it or not, even though the usage factor is very low, maybe 5%. And to run a decent model your system costs $2000 or more.
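To make the gap concrete, here's a rough back-of-the-envelope sketch. Decoding is memory-bandwidth-bound, so each decode step streams the active weights once regardless of batch size; the weight size and bandwidth numbers below are illustrative assumptions, not measurements:

```python
def tokens_per_second(active_weight_gb: float, bandwidth_gbps: float, batch: int) -> float:
    """Aggregate tokens/sec across the batch: each decode step streams the
    active weights once (active_weight_gb / bandwidth_gbps seconds) and
    yields one token per request in the batch."""
    return batch * bandwidth_gbps / active_weight_gb

# Illustrative: 20 GB of active weights.
local = tokens_per_second(20, 1000, batch=1)    # one request, ~1 TB/s consumer GPU
cloud = tokens_per_second(20, 3000, batch=300)  # 300 requests, ~3 TB/s datacenter GPU

print(f"local: {local:.0f} tok/s total")   # 50 tok/s
print(f"cloud: {cloud:.0f} tok/s total")   # 45000 tok/s
print(f"ratio: {cloud / local:.0f}x")      # 900x
```

The per-GPU hardware gap here is only 3x; almost all of the 900x aggregate difference comes from amortizing the weight loads across the batch.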
Even if so, if China is coming behind 6 months later selling laptops with hyper-efficient local models that are 80% as good as "frontier" ones, I imagine they'll get the consumer business AND a fair share of the enterprise business as IT managers look at their options during the next refresh cycle.
Given economies of scale, I think it's ultimately inevitable that the enterprise more-or-less follows the consumer on this, and the consumer is going to prefer local models. There's no ongoing cost after the initial purchase, and your data at least nominally stays within your control.
Like I don't need an H100 or a dozen to summarize a PDF. And that's most of what I use AI for.
Corporate America is where the money is, and corporate America will dictate what products are successful by virtue of spend. Individuals aren't going to be paying $100s or $1000s/month en masse for these models but businesses will be. Being local and efficient isn't that important at this stage but even so as American companies continue to scale and invest they'll be able to make those models more local and efficient if the market wants it. Sort of like how you had a big, giant desktop computer and now you've got a super computer in your phone which is in your pocket. Going straight to "local and efficient" means going straight to being behind because at some point, perhaps now even, the local and efficient model won't be able to keep up.
For some reason people think that they somehow know something that Google or Nvidia or whoever, with hundreds of billions of dollars of real money at stake don't already know and it's both amusing and bizarre to see this play out again and again in off-hand comments like "lol tiny benefits".
You buy an iPhone even though the cheap-o Wal-Mart Android phone for $100 "does the same thing". Except that in this case the Android phone just puts you out of business while those spending big money for "tiny benefits" beat you in the market.
Capital inflows are different from manufacturing outflows. The US has historically imported capital which is part of why we have such a large trade imbalance. I’d encourage you to do some more digging here.
> The world where we could compete is gone.
Sigh no that’s just not true at all. We compete hard and fast all day everyday, economy is growing and will continue to do so, and no amount of leftist doomer, Chinese, Iranian, or Russian propaganda changes those facts.
No but money only has value because of a product of the human labor and production capacity it refers to. Money is not capital, it is a reference to/legal coercion of capital
> We compete hard and fast all day everyday
Sir have you ever been to the us? Lmao. We are only competitive in the industry of white collar work (financial/artisanal services), an industry that capital is actively gutting
These are just strings of words without meaning or importance.
> Sir have you ever been to the us? Lmao. We are only competitive in the industry of white collar work (financial/artisanal services), an industry that capital is actively gutting
Yes, I live here. Why are you posting obviously untrue and asinine statements like this? Go look at the Fortune 500. There ya go. What other evidence do you need? And not only are you writing dumb things here, your original post was wrong too! Please get off of social media or whatever doomscrolling news you are partaking in, because it is bad for your health and your perception of reality. The United States is, by any measure and as a matter of indisputable fact, a highly competitive and dynamic economy across pretty much all sectors. This is not up for debate.
People buy iPhones because of status signalling and network effects, neither of which appears to apply to AI model choice. LLMs are already rapidly on the way to being interchangeable commodities.
To the extent LLMs are commodity products you're right (so far), but that is limited to the main model providers, such as ChatGPT, Claude, Gemini, &c. with interoperability on cloud platform providers and other technology providers like an Apple offering you a choice of LLM with Siri or something.
If you want to suggest that some other model is in the same bucket as those primary 3, it goes back to the crappy, cheap phone analogy which is accurate. Yea you can make calls with it, but you make calls better with an iPhone.
I get your point but in what sense is that "free"? What mobile plan giving you an iphone doesn't come with explicit debt?
They run various schemes like this all the time, you can also trade in your existing phone a lot of times for pretty favorable terms. I've traded in phones that were a few years old and gotten $1000+ for them, especially when switching providers.
$729.99 purchase on device payment or at retail price required. New line req'd. Unlimited Welcome, Unlimited Plus or Unlimited Ultimate plans required. Less $730 promo credit applied to account over 36 mos; promo credit ends if eligibility requirements are no longer met; 0% APR. Taxes & fees may apply. Credits will appear on your Verizon Wireless bill.
If you think the iPhone is a status symbol you’re just wrong.
I'm just pointing out the statement:
> What mobile plan giving you an iphone doesn't come with explicit debt?
isn't invalidated by some Yahoo article pushing a marketing promo. When you actually do the math and read the fine print, it's not really a "free" phone; it's always some form of debt or bill credit or something along those lines that makes the phone "free". You're still paying for the phone in the end. You either commit to spending several hundred dollars over 36 months or so, or you pay up front and they give you bill credits if you keep the plan.
People who prefer truth in advertising.
> Why be so argumentative over something so stupid?
I don't want people to believe untrue marketing statements and make poor financial decisions without actually bothering to read the fine print.
> some companies run free promotions
This just isn't true. They're not really "free". They come with lots of financial commitments.
> Apparently Verizon ran some promo in the past and may again in the future giving away iPhones
They still say they do on their website. If you're getting one "free" iPhone it comes with a commitment to spend at least $65/mo for 36 months. A commitment to spend $2,340 is a lot different from $0.
These are far from "free" phones. Can I go into a Verizon store, not give them a dime or sign any contracts and walk out with a phone free and clear to do whatever I want? No? Sounds like it's not really free then!
My point is if you're poor/homeless you're probably not looking to sign a 3-year commitment to spend a few grand to get a "free" phone. A lot of those people won't even pass the credit check to qualify to even sign up for one of these post-paid plans required to get the "free" phone. If you're really broke you would probably be looking at signing up for a lifeline plan and get yourself a cheap used iPhone instead of signing up for a $2,340 contract.
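To make the comparison concrete, here's a quick sketch of the math. The $65/mo, 36-month, and $729.99 figures are the ones quoted in this thread; the $30/mo prepaid alternative is a hypothetical for comparison:

```python
def total_cost(monthly_plan: float, months: int, phone_price: float, promo_credit: float) -> float:
    """Total out-of-pocket over the commitment period: plan payments plus
    the phone price, minus whatever promo/bill credits offset it."""
    return monthly_plan * months + phone_price - promo_credit

# "Free" phone: $729.99 device fully offset by credits, but a required $65/mo plan for 36 months.
promo = total_cost(65.00, 36, 729.99, 729.99)

# Hypothetical alternative: buy the phone outright and use a $30/mo prepaid plan.
outright = total_cost(30.00, 36, 729.99, 0)

print(f"promo:    ${promo:,.2f}")     # $2,340.00
print(f"outright: ${outright:,.2f}")  # $1,809.99
```

Under these (assumed) numbers the "free" phone costs about $530 more over three years than just buying it.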
You’re anchoring yourself to one payment scheme and ignoring others and it’s besides the point which is that iPhones aren’t status symbols even if these schemes didn’t exist and iPhones weren’t extremely cheap or freely available.
I don’t have anything left to say here besides that I proved my point unequivocally.
I already said I largely agreed with this.
> major carriers can and do give them away in various schemes and did so in the past and will continue to do so in the future
They only do if you're financially illiterate.
> You’re anchoring yourself to one payment scheme and ignoring others
I'm being honest and talking about the real deal instead of blindly repeating marketing bullshit and lies.
> freely available
A commitment to spend thousands of dollars isn't the same as freely available.
The bank gave me this free house all I have to do is pay this mortgage for thirty years. But hey the house was free!
Once again, was the deal that you could walk into the store, grab a new iPhone, and walk out without signing a contract or other form of commitment? If not, it's not really free. It's bad financial advice for people struggling financially to get one of these "free" phones, they're often more expensive than buying outright and getting a much cheaper (or potentially even subsidized!) plan. Especially if you're just needing one or two lines. Many of these postpaid plans only really make financial sense once you're at like 4+ lines on it.
I'm reminded of seeing all those cell phones in the RadioShack mailer ads back in the day. Only 99¢! Dad, can't I get one? It's only a dollar!.
If you spent hundreds of dollars on box seats to a sporting event and they had a complimentary buffet, is that food really free or did it cost you hundreds of dollars? Would you tell someone struggling with money they could get free food, they just need to go spend hundreds on sports tickets first?
Maybe one shouldn't be so willingly close-minded to the truth.
https://mashable.com/article/apple-messages-green-doj
https://www.sfgate.com/tech/article/apple-green-bubble-messa...
https://old.reddit.com/r/Anthropic/comments/1snorbg/the_bigg...
I don't know enough about distillation to understand how much this hinders/slows the process, but it sounds at least superficially plausible.
Honestly, I think it's quite possible that models will be retrained with gaps in their knowledge, e.g. a coding model for commercial use probably doesn't need deep knowledge of biology, and training on the biological sciences probably doesn't help those evals much.
Honestly, I'd welcome such an approach.
Strange reading that on HN and realizing I'm not on Facebook
The whole idea of the deep state is that it's part of the state, i.e. the government, so not private citizens, and they're "deep", i.e. hidden below the layers of government. That's the exact opposite of politicians and the ultra-rich.
Also, your link specifically starts with:
"a hybrid association of government elements and parts of top-level industry and finance that is effectively able to govern the United States without reference to the consent of the governed as expressed through the formal political process."
which is exactly how this was defined by your opponent.
OpenAI and Anthropic are beholden to the capitalist system they exist under and hence cannot compete on local models. Like you say, they must try to maximize shareholder value. China is unencumbered by that constraint.
But if you were in China, could you say you hate the Chinese Communist Party and China openly and as often as you like without imprisonment or worse?
We know the answer to that. So go ahead and trust China more than the U.S., but I think that is pure foolishness.
There's an old but still relevant joke:
'In America, you can criticize President Nixon anytime.'
'Yes, but in the Soviet Union you can also criticize Nixon anytime.'
The point is not that they're safer but that they're not a relevant concern in the same way. (According to OP)
Many technological advances weren't driven by capitalism, early computers and the internet were literally developed by the government.
But the thing is... I could be using any of the llms for my use - I'm using a middleware that lets me change providers only with a configuration change.
So it's going to be tough for US AI companies to charge 5x to 20x (depending on what you're doing).
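A minimal sketch of what that middleware config can look like. Most vendors expose OpenAI-compatible chat endpoints, so switching providers is just a base-URL and key swap; the provider list and URLs here are illustrative, not my actual setup:

```python
# Provider registry: switching LLM vendors is one config string,
# because the request shape stays OpenAI-compatible.
PROVIDERS = {
    # name: (base_url, env var holding the API key) -- illustrative values
    "openai":   ("https://api.openai.com/v1", "OPENAI_API_KEY"),
    "deepseek": ("https://api.deepseek.com/v1", "DEEPSEEK_API_KEY"),
    "mistral":  ("https://api.mistral.ai/v1", "MISTRAL_API_KEY"),
}

def endpoint_for(provider: str, path: str = "/chat/completions") -> str:
    """Build the request URL for the configured provider."""
    base_url, _key_env = PROVIDERS[provider]
    return base_url + path

# Swapping providers changes one string in config; the calling code never changes.
print(endpoint_for("openai"))    # https://api.openai.com/v1/chat/completions
print(endpoint_for("deepseek"))  # https://api.deepseek.com/v1/chat/completions
```

The calling code only ever sees `endpoint_for(...)`, which is why the per-provider pricing difference becomes the only switching cost.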
That's like Microsoft saying "Don't use Linux because selling an operating system is what matters".
It begs the question because both its premise and assertion are already wrong. Has AI improved the industrial capacity of the US in order to improve the lives of its citizens? No it hasn't. Has AI increased the wealth of its citizens by being able to do laundry or any household task in a generalized way? No it hasn't. The only thing it's really done is make very narrow slices of white-collar work more fungible. In what way has AI been able to address existing shortcomings of the US?
https://www.federalreserve.gov/econres/notes/feds-notes/moni...
Based on a survey asking whether the business uses AI "in any of its business functions", and covering all uses of what they consider to be AI, not just LLMs.
You mean grand declarations like 'industrial capacity has increased'? Just because AI is present in the factory doesn't mean it's actually increased capacity.
Have you happened to purchase anything in the past 12 months, and looked at the Fed's inflation numbers?
The Fed doesn't issue inflation numbers. The usually cited headline inflation numbers (CPI) are from the Department of Labor’s Bureau of Labor Statistics, the ones used by the Fed as an input to monetary policy decisions (PCE) are issued by the Department of Commerce’s Bureau of Economic Analysis.
t. literally works on AI for industrial applications
How?
On a personal level, I simply do not trust the US anymore. I won't host any of my personal data in a US company. I don't want the US govt invading my personal privacy, and their corporations are constantly leaking and selling private data. I consider US to be rapidly approaching complete autocracy (on par with China) so US-hosted AI is a non-starter. And let's not forget local inference keeps getting more efficient, with higher context and TPS in the same amount of RAM. Within a year even small consumer machines will run local models good enough for basic coding, and in 3 years RAM prices will lower and everyone will be able to afford a decent rig.
Finally, open weight models are now good enough for daily work. They may never be as good as SOTA (SOTA will just keep increasing indefinitely), but that doesn't matter; my car may not be as fast as a Porsche but it still gets me to the grocery store and back. So I use non-US hosted model providers which provide open weights, which are both significantly cheaper than Anthropic/OpenAI, and actually allow me to use my subscriptions without a moat.
But yes, Anthropic/OpenAI are absolutely the new Oracle. They will win for US govt and Enterprise contracts. But that's far from the only users of AI.
And US absolutely has been xenophobic for years, by official federal policy. I'm really surprised you're not aware of it, but here's a small selection of examples:
- Both our elected and appointed leaders are white nationalists. Our president called all Mexicans murderers and rapists, said African migrants were eating random pets in a rural US town (they weren't, obviously, but it was intended to exacerbate xenophobia)
- Our federal government has a mandate using ICE to try to eject anyone with a Hispanic name from the country (it has already deported US citizens for being Hispanic/Latino). We even boot people seeking asylum, often exporting them to foreign prisons even if they've never had a criminal record. We have concentration camps now, filled entirely with foreigners, and with people who have lived here for decades but were foreigners.
- We stopped accepting new visas from 75 countries. We may even expel you for social media posts we don't like, or for attending a protest that our citizens can attend. We increased travel bans for people from majority Muslim countries. H1-B visas have been rolled back to only the highest paying jobs, and you may need to pay a $15,000 bond. We also now collect and store foreigners' biometric data indefinitely.
- Let's not forget the tariffs on virtually all other nations, to say nothing of "America First" and the new "Greater North America doctrine".
I think you got lost in the rhetoric somewhere.
Tariffs are just the US adjusting to reality which other countries are slow to do. Free trade died all on its own, because the pandemic showed that critical industries were hollowed out by free trade in a way that could be appreciated from a national security perspective. That situation was favoring China too much, so we need to unwind that some.
Tariffs already existed in many countries in practice, so it's not like the US reinvented modern tariffs.
Pew [1] suggests that the changes around the start of 2025 were due to increased restrictions on asylum applications under the previous admin and to EOs by the current one restricting new immigration. Given the rough numbers [2] of about 40k asylum grants per year in the early 2020s, I doubt the previous admin's actions are playing much of a role here.
Stating that none of it (immigration acceptance) changed under this administration might technically be true with respect to the number of countries applying, but it misses the point.
[0]: https://www.census.gov/newsroom/blogs/random-samplings/2026/...
[1]: https://www.pewresearch.org/short-reads/2025/08/21/key-findi...
[2]: https://usafacts.org/articles/how-many-people-seek-asylum-in...
If you feel like formulating a good argument about immigration, I'll listen, but you haven't provided one.
It can happen in Europe too, but the full fall is not that close.
The structure of the US makes it basically the single most secure democracy anywhere right now or in history. No country in Europe or Europe as a whole is even competitive by comparison. The main issue we're facing is that we are by far the primary target for foreign funded activism and systemic attacks, because China and Russia hated NGOs promoting color revolutions.
That is also part of the rule of law issue, but the system is overall managing quite well. It's all moving in slow motion, but many important metrics are going in the right direction, which we need as that's part of deterring China.
How do you figure? I hear you have roving gangs of masked thugs beating up random citizens with the backing of your government, that doesn't sound very democratically secure, especially with what healthcare costs over there.
So secure, in fact, that it has secured itself even against the influence of its own citizens.
Also, we have guns. LOTS of guns. The U.S. military's first and sole responsibility is to the constitution itself. If any state or the federal government tries to get rid of their constitutions, the military can rightfully take it over and re-establish a constitution.
There is no other country that's even remotely close to this secure.
This is just not true. It is failing, visibly and loudly, fast. It used to fail slowly, but the process has sped up.
The American administration supports Russia now. It praises Russian, Chinese, and Belarusian leaders again and again. It praises Orban. It hates the last bastion of democracy - Europe.
China is not deterred. Its power is growing while America's is declining. Trump openly admires its leader. China is celebrating the current state of America.
And if it were, and the result were what Elon and Scam Altman say it will be, it would destroy the economy. Not sure any country wants to lead the race to self-destruction.
The winner here will be whoever can move atoms with AI not take notes at the daily standup.
i.e. Think boston dynamics vs unitree
They're both doing well, but I'd lean towards China winning on atoms, in light of the huge manufacturing base they can AI-ify.
You can tell we're on the cusp when level 5 self-driving cars are common and you have multiple companies deploying them on the street. Google is doing great work, but they poured TONS of effort into it and the thing still needs intense stacks of perception and processing. Much more than I've seen any humanoid effort pour into it.
L5 SDVs are much easier to get than humanoids, and they have tangible economic benefit. My thesis is that those will come first.
This doesn't really argue against your point, because the standards are what they are, and like I said, I have no idea how one would go about changing them if one even decided they wanted to. And given what they are, it has taken, as you point out, enormous amounts of effort to reach those standards in a practical way.
That all being said, while I agree that SDV's are in many respects easier than other robotics tasks, they are also somewhat uniquely dangerous. Other categories of task, while potentially more complicated, won't have to worry nearly so much about safety, and so may be operating under a different constraint regime. I think this means that we may see adoption happen at a much more accelerated rate than we have seen in the automotive space.
So far, they are not.
I haven't seen good stats on Tesla (they are less transparent than Waymo), but it would shock me if they weren't also at least slightly safer than the average human driver. Human drivers are really bad at driving.
But even if Tesla isn't safer, taken as a whole, the self driving industry as it currently exists still probably is, purely because it's mostly Waymo, and Waymo is dramatically safer.
If free cheap energy is unlocked today I reckon it would still take a good 30 years for that to ripple through properly.
It solves lots of problems (water!) but doesn't make the heavy machinery to consume it instantly appear.
Why would an American company outsource manufacturing to China if the labor cost is the same in both places? The entire reason the Chinese manufacturing base exists is to exploit cheap labor.
What would be the point of shipping products across the ocean?
And, if you need changes, you can go talk to them the same day you see a problem.
Opening up comments to see top comments are 90% "NO U" without any substantial discussion - you disappoint me, HN.
>Frontier cyber models may push states and defense firms toward the opposite logic: security by obscurity, with closed software, closed tooling, closed firmware, and closed chips. If a model cannot train on the code and architecture of a target stack, it will usually have less context and less speed. That does not make systems safe, but it does raise the value of proprietary stacks all the way down to hardware.
Is this really true. Are there any experts who can weigh in on this.
Should we interpret this to mean that in the new world Windows is more resistant to attacks than say Linux.
I think “security through obscurity is no security” concept was aimed toward people not relying on obscurity alone as a security mechanism. And largely that message succeeded. But now we are in a rapid acceleration of capabilities (on both sides) where any advantage to one side will result in outsized gains, at least in the short term.
And basically all the security bugs I've read about were found by looking at the source code.
But it doesn't mean Windows is more secure. Just imagine a scenario where someone steals the Windows source code and sells it to a rogue actor: that would make Windows even less secure, because no one (except Microsoft) would have had the chance to search for bugs in that source code.
LLMs can read assembly better than most, so probably not. But reality has never stopped people from trying to obfuscate.
I feel like the author (and perhaps many here on HN) are on a different planet than almost everyone I interact with.
Most businesses are adding limitations on using open models.
My business's integration literally has a dropdown for which model you want to use. I think that's pretty standard.
Is it just that the subject line alone is a springboard for casual discussion? If so, maybe that's fine, but then, it feels like we'd be better off cultivating these discussions as "ask HN" posts instead of boosting this kind of web content.
I think this has been the case on many sites, for decades. Many people just want to read and write comments without engaging with the OP.
Have a look at this Reddit thread [0] about this Ars Technica article [1] - both are 15 years old.
I suppose in the 2010s this was an amusing detail of online discussion. In the 2020s it makes me feel a little uneasy - it suggests that the entire concept of people jumping from site to site, clicking links and understanding what they are writing about was flawed from the start. No wonder the internet became centralized and slopified.
And no, I didn’t read the OP, I found your comment to be more interesting to discuss. These days with AI articles flooding the internet it seems foolish to actually read articles before the comments.
Edit: although we have to contend with AI generated comments as well. I wonder how many of the comments on this page actually have original insights into the politico economics of AI.
[0] https://old.reddit.com/r/WTF/comments/gz9k7/the_internet_is_...
[1] https://arstechnica.com/science/2011/04/guns-in-the-home-lot...
Even if any of the US corporations would eventually end up in a scenario where their revenue is at least as high as their inference cost, what harm would that do to the other contenders? It's not as if there is any kind of network effect here that would exclude them from market participation.
Where are these profits of which you speak?
Michael Phelps is winning the race! ... for now
China is winning the EV race ... for now
It doesn't seem to add value to me, aside from being an opportunity, as is the time-honored tradition of the haters, to sow doubt and create negative energy around anything related to American success.
Of course the US has a huge head start, but if AI keeps growing, what matters is what the market's gonna look like years from now.
Most of my clients using AI in the business workflows (in products) use Chinese LLMs, because after benchmarking for a specific use case you nearly always end up finding that you pay half or a tenth.
That's not a new phenomenon. I've adapted Gemini Flash 2.5 years and years ago when people were dissing it as "crap", yet it was the best budget and quality fit for the task I had at hand back then (translating and summarizing tons of documents). It was both faster and around 100 times cheaper than the best GPT 4 model available.
Needless to say, medium-sized Chinese models are far better than those LLMs and a perfect fit for countless applications.
Just as business exported strategically critical manufacturing to China, it is now helping fund China's race to overtake the US in AI and beyond.
Lesson is pure free trade doesn’t work if (a) not everyone is playing by the same rules and (b) the trading territories are or may become opposed.
American economic policy gave the world an authoritarian super power and Trump. Not a great track record.
That doesn't count as winning at all.
Correct. "Revenue" is the wrong scorecard when they're selling $20 bills for $15. I too can make a bajillion dollars in revenue with that strategy.
Show me a company not speed running the uber/doordash playbook and we can talk.
It's like the USA Librem 5 vs PinePhone. About the same HW for $1600 vs $150.
I sure will not pay 10x for a "US" thing just because it's a US thing.
The USA is very good at losing very, very expensively....
I don't know what the benchmarks are supposed to represent, but to me Kimi K2.6 is indistinguishable from e.g. Opus 4.6.
Cultivating an ecosystem of strong capital protections, wealth creation through extraction, and tax advantages for AI finance is what we should be looking for. Commercialization may be a step towards that, but isn't the destination. We have to create a system where those with money can multiply it, not simply add to it.
Whatever derivative structures and equity and options need to exist will be easily created.
I don’t think we need any additional motivation or incentives to cultivate this for AI. We need to keep some in the tank to handle the fallout.
As a more personal aside: the US would do well to put up some sensible barriers to outrageous financialization and reduce moral contagion risk. Otherwise all these folks trying to multiply their money end up leaving the bag with the folks that don’t have it in the first place - and then the folks with money end up, uh… well, it won’t end well.
Sorry, nobody's winning that AI race.
Do any of the US companies earn money on LLMs? No, they bleed money. GitHub Copilot is switching to token-based pricing, which will be costlier than hiring juniors.
Anthropic is also switching enterprises from subscription pricing to token-based pricing.
Of the big three, only Codex still has some kind of subscription pricing, but they'll shift eventually (usage limits are a form of that, though theirs are less strict than Claude's).
There is one winner in this race - China. Trump with his agendas and wars makes it even more likely that China will lead this new market.
Inference? Yes.
Infrastructure build and training? Not yet.
I'm not certain that racing China in AI is the right reason but it might get us... somewhere.
Not only is the investment that keeps US AI companies flying high slowing; I suspect that in two or three years we'll all mostly be using open models, and the people making money will be the hardware manufacturers. Even the small models will keep getting more capable. I'd guess a model you can run on a high-end, but not outrageously overbuilt, developer desktop or laptop (something like 128GB of unified RAM) will be competitive with the current frontier when it's allowed to search the web, do research, and write test code. You can't fit as much knowledge in a small model (80GB of weights can't store the world's knowledge), but I don't have the world's knowledge in my head either, and yet I can figure out most problems with a little googling and experimentation. The reasoning and tool-use abilities of smaller models are where the gap is closing, and that's what will make the huge models obsolete for huge classes of problem.
Already, there are many classes of problem that the easily self-hostable Qwen 3.6 27B can solve that required a frontier model a year ago. When the self-hosted options reach Opus 4.5-ish levels of capability, the argument for paying for tokens for most work begins to look a lot less compelling. And, looking forward, 1.58 bit models are coming. Incredible intelligence density, and still a lot of improvements happening.
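The memory claim above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch of weight-only storage at a few quantization widths (the parameter count and bit widths here are illustrative assumptions, not any vendor's published figures):

```python
# Rough memory needed just to hold model weights at various quantization
# levels. Ignores KV cache, activations, and runtime overhead, which add
# substantially on top of this.

def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB: params * bits / 8 bits-per-byte."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4, 1.58):
    print(f"27B params at {bits} bits/weight: ~{weight_gb(27, bits):.1f} GB")
```

At 4 bits a 27B model's weights fit in roughly 13.5 GB, which is why a 128GB unified-RAM machine can comfortably host much larger quantized models, and why 1.58-bit schemes are interesting for density.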
I think they're already actually making profits, especially Anthropic. But think how important it is from a business standpoint: the entire software stack, from OSes to databases to browsers, will be rewritten in the near future. For a company such as Oracle or IBM, that means their bread and butter/cash cow can be replaced. It's worth almost any kind of capex. And from Washington's standpoint it's more important than the F-35 program or even the Apollo mission.
Not even gonna bother clicking through this one, the title is that egregious. And by the way, you can be damn sure that if Anthropic or whichever other American frontier model is the best of its day were on the cusp of going under, the US gov would either pump it full of government contracts or (less likely) nationalise it.
Mass unemployment and an eventually collapsing economy is winning?
Larry just fired 30% of his people at Oracle because, apparently, he is in an immediate need for cash. Because Oracle's early AI bets aren't paying off.
The FSF was not an attack on commercialization, it was about giving users more freedom with their own copy.
AI commercialization is why we will always be a few steps ahead in AI.
The Chinese and Russians are free to join us. It's a pickup game.
For one, "Communism" is presented as a single monolith, but it's not: it's socialism PLUS despotism. The despotism part is really important! China/Russia/etc. fail because they try and control things top-down, instead of letting the market decide.
However, you can have socialism without despotism! Tons of European countries are far more socialist, but no less democratic than America (many are more democratic).
So yes, America vs. Russia/China and Capitalist vs. Communist are relevant frames ... but don't let them obscure the fact that you can have a successful, democratic country .. without doing what America does (and giving all control to corporations).
China is despotic in its treatment of political dissent and human rights, but not in economy.
puke
Yeah, go ahead and run your country into the ground because of hypercapitalism and hypercommercialization, you're almost at the end game now! While the rest of us try to figure out how to actually build societies worthwhile to live in and experience, with healthcare and not waging war on our neighbors.
I don't know how people can seriously publish stuff like this and not feel like they're actively trying to make the world worse. Is money really the single thing y'all can focus on? Is there nothing better in life you can chase, even if it's also a number? So sad to see stuff like this.
Chinese culture is quick to embrace the benefits.
It's like people forget the entire point, perhaps even definition of technology is "doing more with less."
The "brute force" of power and cycles is almost certainly the least important thing, perhaps even a hindrance.
We've invented a new term here too: revenue backlog. OpenAI and Anthropic in particular need probably at least $2 trillion in revenue to recoup their capex investments. Claude Code has had an impact on software engineering, but for a lot of AI uses you're just not going to recover $2T on $20/month subscriptions. It reminds me of Twitter trying to dig itself out of a $44B hole and losing half its ad revenue with $8/month blue ticks.
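The subscription math can be sketched directly. Both figures below are the comment's own rough assumptions ($2T backlog, $20/month flat fee), not audited financials:

```python
# Back-of-envelope: how many subscriber-years of flat-fee revenue it
# takes to cover a hypothetical $2T capex figure.

capex = 2e12                      # assumed revenue backlog, USD
annual_revenue_per_sub = 20 * 12  # $20/month subscription, USD/year

subscriber_years = capex / annual_revenue_per_sub
print(f"~{subscriber_years / 1e9:.1f} billion subscriber-years")
```

Roughly 8.3 billion subscriber-years: more than one year of subscription from every person on Earth, before any inference or operating costs, which is the shape of the problem the comment is pointing at.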
The only commercial product AI sells is labor displacement and the resulting wage suppression. You lay off 10-20% of your staff and nobody is asking for raises. The people left are happy to still have jobs (and thus a house). They'll work even harder, doing the unpaid labor of the displaced workers, to keep those jobs. That's what OpenAI and Anthropic are selling.
The problem is that if these companies get their way, 10-20% of the population is going to be out-of-work and society is going to fall apart. Data centers are going to be the targets of increased societal desperation and anger as this gets worse.
There was a report this week that roughly 50% of singles in the US aren't dating because they can't afford to [1]. This goes well beyond the well-understood problems of not being able to afford a house let alone start a family. This is a birth rate death spiral in the making.
So, back to OpenAI and Anthropic: the only way they justify their valuations and can make up the "revenue backlog" is if they have a moat. And I don't think that's going to happen. Hardware will get cheaper. Nobody is talking about how this generation of AI hardware will write off trillions in investments, for some reason. I don't know why.
But the dark horse here is China. DeepSeek, when it was first released (early last year?), was a shot across the bow. We have it and other models (e.g. Qwen) that will close the gap with whatever OpenAI and Anthropic produce, such that no company will "own" AI in the way that OpenAI and Anthropic need to. In the coming years, China's chipmaking is rapidly closing the EUV gap, and Western companies have zero penetration into that market. China doesn't want to be dependent on foreign tech that can be withheld at any moment.
Don't believe me? Just listen to the NVidia CEO say the exact same thing [2][3]. Huang realizes this is such a problem that he's gone on Air Force One to this week's Trump summit in China to try and convince the Chinese to buy NVidia chips.
[1]: https://parade.com/living/nearly-50-of-single-americans-not-...
Poor people with nothing date when they want to. If people have interest in having partners, they can date and socialize for free.
It’s all about adoption and the bigger picture. The US is an untrustworthy, isolated island in the AI future if you vote another idiot into office in a few years. If you’ll still be able to vote at all, that is.
The largest part of the world is not the US. The cutting-edge US models are way too expensive for most parts of the world, and that also shows in adoption.
China is building an ecosystem of open-source models that are both cheap and good enough for most use cases. While most of the US AI sphere will collapse under the pressure of making profits, which means having their models and infrastructure adopted by as many enterprises and individuals across the world, China’s models will have become global standards and hard to displace.
If Beijing’s AI pitch centers on universal access and cost-effectiveness, then Chinese AI firms do not need the latest chips to win the global AI race. They also don’t need the expensive US-run infrastructure. If you watch Chinese AI adoption closely, they already want as many Chinese people as possible to be able to build and experiment with AI, whereas for most Americans, US models are already too expensive for productive use.
Kimi K2.6 sits within touching distance of Opus 4.7 and GPT-5.5 while costing about $4 per mil output tokens. That is six to eight times cheaper than cutting-edge US models. If you run hundreds of agents, that’s a significant opportunity to get the same work done for a lot less.
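At fleet scale the price gap compounds quickly. A minimal sketch; the fleet size, token volume, and the $4 vs. $28 per million output tokens below are illustrative assumptions mirroring the rough "six to eight times cheaper" comparison, not quoted vendor prices:

```python
# Illustrative daily spend for an agent fleet at two per-token prices.
# All inputs are assumptions for the sake of the comparison.

agents = 200                          # hypothetical fleet size
output_tokens_per_agent_per_day = 500_000

def daily_cost(price_per_million_usd: float) -> float:
    """Total daily cost in USD for the whole fleet's output tokens."""
    return agents * output_tokens_per_agent_per_day * price_per_million_usd / 1e6

print(f"budget model:  ${daily_cost(4.0):,.0f}/day")
print(f"premium model: ${daily_cost(28.0):,.0f}/day")
```

Under these assumptions the same workload costs $400/day on the cheaper model versus $2,800/day on the premium one, which is the kind of spread that drives benchmarking-by-use-case decisions.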
Even early adopters like Singapore are ditching US models: the government kicked Zuckerberg in the nuts and went for Qwen instead to build its sovereign AI models.
To understand why the US is at a severe disadvantage in this race, you need to understand China’s Belt and Road Initiative (BRI). BRI entails Chinese firms delivering fully financed infrastructure projects in a bid to lock third countries into China’s economic orbit. They use the same approach for their open-source AI models, but this time the infrastructure is both invisible and free.
No need to build power plants or buy/build ports. AI dependency is invisible to both policymakers and the population, limiting pushback. No pesky activists in Germany nagging about China buying parts of ports. No African nutbags questioning why the humble Xi is building hospitals in areas where Chinese mining companies take things out of the ground for pennies on the dollar.
China is going for a marathon here while the US tries to force its AI tech down the world's throat. As soon as Chinese AI models have become global standards, it's game over for US AI companies. And China is way better at this game than the US. They have proven it over and over again in the past 50 years.
I recommend reading the China Standards 2035 strategy to get a better understanding of their approach and how smart this is.
https://www.china-briefing.com/news/china-standards-2035-str...
In terms of trade and economics, AI is not as revolutionary as you think compared with our experiences of previous technological advances.
Western economies are locked into U.S. models, while China runs on Chinese ones. It’s the age-old game. But the real war of the AI race will be fought in the global south.
I will give you three examples.
Can you really imagine, if you look at what AI needs to cost to make a profit, that even at current prices, US models and infrastructure, which are heavily subsidised already, would be used in cost-sensitive countries? I am not talking about coders; think really big here for a second.
Secondly, US AI models are trained on Western data. How do you expect them to grasp local contexts in the Southern Hemisphere? Chinese open-source models, on the other hand, can be downloaded and fine-tuned with country-specific data.
Want an example? Check out AfriqueQwen-14B, which is adapted to the top twenty African languages.
So I think this author is wrong. The AI race to be won is not hardware or cloud infrastructure; my money says it will be a contest to decide which models and standards become the default infrastructure in countries that are up for grabs.
China neither needs the best models nor does it need the best cloud infrastructure, it just, like so often, only needs to be affordable and good enough to become the default choice in emerging markets.
The right choice would be for everyone to step off the gas pedal and think about whether we are willing to become China in order to beat China. Our ancestors worked really hard to get us here, our rights, our ways of life, culture, all the blood, sweat, and tears.
AI better be worth it in the long run for all of humanity if we go back to survival of the fittest. Because that is what it will take to beat China at their game.
I think deep down, sama knows this and that's why he's pushing for "Universal Basic Compute", which really means forcing every US citizen to become an OpenAI subscriber.
Stopped reading here. What a ridiculous statement and I can only assume the rest of your post is just as ridiculous.
And that's not to mention the warping of US economic life by the concentration of capital around this bizarre endeavor, with the circular multi-hundred-billion-dollar deals and such.
Unfortunately, the detrimental effects of global warming arrive gradually and are spread out over the entire globe, so the "AI barons"/tech magnates will probably suffer the least, while island countries will be completely wiped out, whole regions will become too hot to sustainably live in, tens if not hundreds of millions will have to migrate, biological diversity will suffer, etc. They will look back on these times in a hundred years and think of us, or at least of the US, as the people boarding the Titanic. Hopefully not as the people who boarded the Hindenburg.
Depends no? If the "Best AI" means "The AI decides when you wake up, go to work, and go to bed", then I probably want to live in the country with the worst AI or even without.
If it instead means "UBI and healthcare for everyone, money lost all meaning and we're all just having fun while AI does all the boring stuff" then yes. But since capitalism still exists, that's a pipe-dream, and "Best AI" won't lead to that for the average person, only for the 0.1%.
As with another recent example, sometimes in war there is no winning: just loss. For us programmers this is obviously an incredible and wild age, filled with nothing short of miracles. But the prices we are paying, the extreme tensions we are creating, the stress and strain of all this have been incredibly unpleasant, and very, very few people feel like they are seeing upsides to this worrisome, menacing age, which promises very few people on the planet anything better, and which has already made life considerably worse; no nation has yet directed it towards helping its people.
Strikes me as the real outcome: the end of "personal" computing, "local" anything.
Which one of them all?
If you mean "building models that are very good at coding and as substitutes for search engines", then yeah, sure.
But if you mean: "applying AI to industrial applications and robotics", then China is far ahead: https://time.com/7382151/china-dominates-the-physical-ai-rac...
It remains to be seen what benefit, if any, Americans will see from all this...
Just because you are first to do x, doesn't mean you are going to be the winner.
The cost of winning this race has been telling our citizens we will replace them with robots and that there is no hope for their children’s future employment.
The cost has been destroying trust as we tell citizens water and power should go to server farms and not them.
The cost has been naked power telling democracy it’s wrong and dying.
I think when we discover the limits of LLM tech and tally its benefits over its cost — we may regret this win.
But don’t let me contradict a bunch of fake techno oligarchs wrapping themselves in war like patriotism to get the investments they need to keep this going.
How would your life change if your country became the second wealthiest instead of the first?
This is a ranking and competition no other country in the world gives ... about.
Why would the world care? Take Trump's threats against Greenland...actions that run completely contrary to our historical policies and treatment of our Western allies. They were alarmed, because when the leader of the most powerful military in the world makes a threat, you have to treat it seriously. Despite Trump's hubris, such an invasion did not occur because Americans, Congress, made it very clear that Trump would be impeached if he invaded our NATO ally.
Let's say China is ascendant. It is now the dominant military and economic power in the world. China is under Xi's complete dictatorial rule. Xi decides that invading Greenland is a good idea. Stopping that internally would not involve democratic processes, it would need to involve a coup.
Let's make it even more stark. If Germany had won WWII and had become the ascendant world power, would it make a difference to most countries? ABSOLUTELY YES. If there is going to be a dominant world power, the character of that nation matters.
Are there other nations with the character and institutions that could do as well or better than the U.S. has done? Sure. I can think of several nations. But let us not pretend that all nations are equally bad/good for the world.