Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?
Google versus OpenAI and Anthropic, sure, but Microsoft is deep into OpenAI. Google helping Anthropic also puts MS in a corner (one that may even be shrinking? Copilot and the OpenAI financing are hurting their brand, and there are rumours of deep displeasure at OpenAI's promises versus returns).
Seen from afar, I see Google happy to provide TPUs for money (improving Google's strategic positioning), torpedoing confidence in LLMs with their search AI summaries, and using their bankroll to force larger competitors (MS in particular) to keep investments high regardless of performance, user revolts, and internal tensions over Sam Altman's sales approach. Plus, Anthropic is in 'the lead' right now product-wise, so grooming them as a potential purchase would also seem to be a strategic hedge in the long term.
1. https://www.microsoft.com/en-us/microsoft-365/blog/2026/03/0...
> torpedoing confidence in LLMs with their search AI summaries
That is some real tinfoil-hat thinking. Google didn't launch LLM products despite being a tech leader, and has gotten piles of bad press for its misleading AI search summaries. They know how and why they suck. Google Search is a highly popular, market-facing service packaging bad summaries as “AI”. Meanwhile, LLM search threatens to disrupt Google's primary cash cow (advertising around search).
Here on HN, on Reddit, and in the media writ large, a lot of the “AI” failure stories are not about ChatGPT hallucinations; they're about the shockingly wrong search summaries from Google, which undermine consumer confidence and breach trust.
ChatGPT and other LLM providers rarely show conflicting source material side by side with misleading text generation. The number-one search provider, which leads in some LLM tech, does so routinely, looking incompetent and generating negative “AI” sentiment through repeated failures at mass scale…
So the theory here is either that the best search org in the world, filled with geniuses, can't tell they're pooping on their own product and profitability and aren't fixing it because they can't/won't… or <tinfoil mode engaged> Google already makes money and is happy with substandard product and market performance in the cases where it hurts the hype critical to other businesses but not to themselves (while also pre-positioning in case LLM search becomes essential).
A win/win/win strategy with a substandard product, versus Google simply not being aware of what its biggest product is doing.
Google's AI summaries are doing a lot of work to make AI summaries seem terrible. I ascribe profit motives to their actions. Ascribing incompetence seems naive and irreconcilable with their strategic corporate history.
By the time it is a problem, it will be too late.
OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and so is Google with Gemini(?)... and the open-weight models are two years behind.
Any win here seems only temporary, even if a new breakthrough to strong AI somehow happens.
So if I'm Google I'd want a decent chunk of at least one of them.
It’s a commodity in the making.
Who supplies the hardware for the singularity?
If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.
I guess you can sell it to the Department of War.
It's awesome and world-dominating; you just don't sell access to that AI. Instead you directly, by yourself, dominate any field in which better AI provides a competitive advantage, as soon as you can afford the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.
Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around to start with when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.
There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.
Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. They often lack intangible elements of success, such as empathizing with their customers' needs.
If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.
That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight continues to grow.
(One possible trigger would be the open models. As long as the gap between SOTA and open stays constant or shrinks, there will come a point where SOTA operators might be forced into cannibalizing the software industry by a third party with an open model and access to infra pulling the trigger first.)
I think meaningful change to the current equilibrium would require at absolute minimum the proprietary equivalent of the development of the transformer architecture.
Integration, and mindset. AI, by its general-purpose nature, subsumes software products. Most products today try to integrate AI inside, put it in a box, and use it to supercharge the product - whereas it's becoming obvious even to non-technical users that AI is better on the outside, using the product for you. This gives the SOTA AI companies an advantage over everyone else: they're on the outside and can assimilate products into their AI ecosystem - like the Borg, adding the products' distinctiveness to their own - reaping outsized and compounding benefits from deep interoperability between the new capability and everything else the AI could already do.
Once one SOTA AI company starts this process, the way I see it, it's the end-game for the industry. The only players that can compete with it are the other SOTA AI companies - but this will just be another race, with nearly-equivalent offerings trading spots in benchmarks/userbase every other month - and that race starts with rapidly cannibalizing the entire software industry, as each provider wants to add new capabilities first, for a momentary advantage.
Once this process starts, I see no way for it to be stopped. Software products will stop being a thing.
Open models can't compete, because they're always lagging proprietary ones. What they do, however, is ensure the above happens - because if, for some reason, SOTA AI companies stick to only supplying "digital smarts as a service" for everyone, someone with access to sufficient compute infra is bound to eventually try the end-game strategy with an open model, hoping to get a big payday before SOTA companies respond in kind.
Either way, the way I see it, the software industry as we know it is already living on borrowed time.
So suppose ACo attempts to subsume Spotify or Photoshop or whatever. So they ... build their own competing platform internally? That's a lot of work. And now they what, attempt to drive users to it by virtue of it being a first party offering? Okay sure that's just your basic anticompetitive abuse of monopoly I guess. MS got in trouble for that but whatever let's assume that happens.
So now lots of ACo users are using a Photoshop competitor behind the scenes. I guess they purchased a subscription addon for that? And I guess ACo has the home team advantage here (anticompetitive and illegal ofc) but other than that why can't Photoshop compete? It just seems like business as usual to me. What am I missing?
If ACo sells widgets and I also sell widgets, assuming I can get attention from consumers and offer a compelling set of features for a competitive price why can't I get customers exactly? ACo's AI will be able to make use of either widget solution just fine assuming ACo doesn't intentionally sabotage me.
I think the more likely issue is that at some point the cost of building software falls far enough that it ceases to be a viable product category. You just ask an agent for a one-off solution and it hands it to you.
Projecting out even farther, eventually the agents get good enough that you don't need to ask for software tools in the first place. You request X, the agent realizes that it needs a tool for that, builds the one-off tool, uses it, returns X to you, and the ephemeral purpose-built tool gets disposed of as part of the session history. All of this without the end user ever realizing that a tool to do X was authored to begin with. A minimal sketch of that loop follows.
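Here's a runnable Python sketch of that flow, under heavy assumptions: every name in it is invented for illustration, and the "model" is a hard-coded stub standing in for an LLM call, not any real agent API.

    # Sketch: an agent authors a one-off tool in memory, uses it once,
    # and discards it with the session. Purely illustrative.

    def model_write_tool(request: str) -> str:
        # Stand-in for an LLM call that authors a purpose-built tool.
        return "def tool(xs):\n    return sorted(xs)\n"

    def handle(request: str, data):
        source = model_write_tool(request)  # 1. agent writes the tool
        scope = {}
        exec(source, scope)                 # 2. tool exists only in memory
        result = scope["tool"](data)        # 3. agent uses it exactly once
        return result                       # 4. tool is discarded; only X survives

    print(handle("sort these numbers", [3, 1, 2]))  # -> [1, 2, 3]

The point of the sketch is step 4: nothing persists but the answer, so there's no artifact left to sell as a product.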
So I guess I agree with your end outcome but disagree about the mechanics and consequences of it.
> Open models can't compete
They can though. There's a gap, sure, but this isn't black and white. Plenty of open models are quite useful for a particular task right now.
Any of Meta’s competitors could reproduce Instagram “the software” in every meaningful detail for (let’s say) $100M.
They still don’t have Instagram the product. Reducing that outlay to a few billion tokens doesn’t change that.
I guess I’ll believe this theory when Anthropic or OpenAI rolls out a search engine with an integrated ad platform that can meaningfully compete with Google. How hard can that be?
At this point, if you can no longer safely drip-feed the industry access to "thinking as a service" and rake in rent, you start using it yourself, displacing existing players in segment after segment until you kill the entire software industry.
That's pre-ASI and entirely distinct from the AI itself becoming so good it takes over.
With it being unfathomably smarter than the people making use of it, you could simply instruct it not to reveal information that would enable a potential competitor to construct an equivalent. No need to worry about competition; you can quite literally take over the world at that point.
Not that I think it's likely such a system will so easily come to pass, nor that I think humanity could manage to maintain control over such a system for long. But we're talking about investments to hedge against existential tail risks here, so "within the realm of plausibility" is sufficient.
I’d be willing to bet that the Venn diagram of investors in those two companies is nearly a circle.
But why? Assuming there is a secret, undiscovered algorithm for making AGI from a neural network... then what happens if someone leaks it, or China steals it and releases it openly tomorrow?
Current LLMs are absolutely stupidly inefficient on this front, requiring virtually all human knowledge to train on as a prerequisite to early-college-level understanding of any one subject (granted, almost all subjects at that point, but what it has in breadth it lacks in depth).
That way, instead of training on millions of TPUs with petabytes of data just to get a model that maintains an encyclopedia of knowledge with a twelve-year-old's capacity for judgment, that same training set and compute could (they hope) far exceed the depth of judgment, planning, and vision of any human who has ever lived (ideally while keeping the same breadth, speed of inference, etc.).
It's one of those situations where we have reason to believe that "exactly matching" human intelligence is basically impossible: the space of possible capability is exponentially large, and the human band is a vanishingly small target within it. You either fall short (and it's honestly odd that LLMs were able to exceed animal intelligence/judgment while still falling short of average adult humans... even that should have been too small a target) or you blow past it completely into something that neither humans nor teams of humans could ever compete against directly.
Chess and Go are fine examples here: algorithms spent very short periods of time "at a level where they could compete reasonably well against" human grand masters. It was decades falling short, followed by quite suddenly leaving humans completely in the dust with no delusions of ever catching up.
That is what the large players hope to get with AGI as well (and/or failing that, using AI as a smoke screen to bilk investors and the public, cover up their misdeeds, play cup and ball games with accountability, etc)
One technique, instead of trying to pick individual winners, is to look at the total addressable market, then compare the market size with the capital being pumped in. On this basis, Aswath concluded that AI investment collectively is likely to provide unsatisfactory returns.
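A back-of-envelope version of that comparison, in Python. All figures here are illustrative assumptions of mine (not Aswath's numbers), except the ~$1T capex taken from the headline quoted below.

    # Compare aggregate AI capex against the annual revenue the sector
    # would need to generate to justify it. Figures are assumptions.
    capex = 1e12            # ~$1T projected AI investment (headline below)
    hurdle_rate = 0.10      # assumed cost-of-capital hurdle
    gross_margin = 0.60     # assumed blended margin on AI revenue

    needed_profit = capex * hurdle_rate           # $100B of profit per year
    needed_revenue = needed_profit / gross_margin # ~$167B of revenue per year
    print(f"Revenue needed: ~${needed_revenue / 1e9:.0f}B per year")
    # If the realistically addressable market is smaller than this,
    # collective returns disappoint no matter which vendor "wins".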
Here's a recent headline: "Nvidia’s Jensen Huang thinks $1 trillion won’t be enough to meet AI demand—and he’s paying engineers in AI tokens worth half their salary to prove it"
There are two parts to this. First, a staggering $1T is expected to be invested in AI. Someone worked out that this is more than the entire capital expenditure of a company like Apple over its entire existence. IOW, $1T is a lot of dough. A LOT.
Second, the whole notion that AI is such a sure thing that half of salaries will be paid in tokens should ring alarm bells. '“I could totally imagine in the future every single engineer in our company will need an annual token budget,” he said. “They’re going to make a few 100,000 a year as their base pay. I’m going to give them probably half of that on top of it as tokens so that they could be amplified 10 times.”'
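Taking the quote at face value, the implied bet per engineer works out like this (the 50% token grant and the "10 times" claim are from the quote; the base salary is an assumed midpoint of "a few 100,000"):

    # Back-of-envelope on the quoted compensation scheme.
    base_pay = 300_000                 # assumed midpoint of "a few 100,000"
    token_grant = base_pay * 0.5       # "half of that on top of it as tokens"
    amplification = 10                 # claimed output multiple

    # The bet: $150k/yr of inference buys the output of 9 extra engineers.
    cost_per_extra_engineer = token_grant / (amplification - 1)
    print(f"Token grant: ${token_grant:,.0f}/yr")
    print(f"Implied cost per extra engineer-equivalent: ${cost_per_extra_engineer:,.0f}/yr")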
I recall from the dotcom fiasco that service companies like accountants and lawyers were providing services to the dotcom companies and being compensated in stock options rather than cold hard cash like you'd normally expect.
Very dangerous.
As another poster pointed out, this really boils down to FOMO by big tech. I'm expecting big trouble down the line. We'll wait and see whether I'm early or just plain wrong.
It is just cargo cult financing at this point.
But: no singularity. At least not yet.
The flaw in this thinking seems to be the idea that AI is a singular thing. You point the model back at its own source code, sit back and watch as it does everything at once. Right now it's more like AI being an army of assistants organized by human researchers. You often need specialized models for this stuff, you can't just use GPT for everything.
AI has none of that now - it only gets direct human feedback from those controlling the training (or at a second level, the harness), and that feedback is really in service of the humans at the steering wheels. Sum total of humanity, mixed in the blender, and flavored to make the trainers look good in front of their peers.
Now, if AI could interact directly and propagate that feedback into its training, or otherwise learn online, that changes. It's a qualitative jump. The second jump comes once there are enough AIs interacting with the human economy and society directly that their influence starts to outweigh ours. At that point, they'll end up evolving their own standards and benchmarks, and then it's us who will be judged by their measure.
(I.e., if you think we have it bad now, with how we're starting to adapt our writing and coding styles to make them easier for LLMs, just wait until next-gen models start participating in the economy and we're all forced by market forces to learn some weird, emergent, token-efficient English/Chinese pidgin that AI-run companies prefer their suppliers to use.)
Then it all remains a question of who has the most compute power, as self-improvement seems compute-heavy with the current approach.
It seems pretty wild to bet the future on such an assumption. What are you even basing it on?
But they also have access to an unimaginably large data set plus reach into people’s daily lives.
Seems more like partners for world domination.
I actually mentioned to a Google friend the other week that I wouldn't be surprised to see Google tipping the hat towards Anthropic soon so as to put a little more heat on OAI.