The risk from this structure is mostly to do with how it affects market cap: companies using the value of their shares to fund demand for their services.
That's a risk.
PG had an essay about this during the dot-com era, when he worked at Yahoo. IIRC... Yahoo's share price and other big successes in the space attracted investment into startups. Startups used that money to advertise on Yahoo. Yahoo bought some of these startups.
So... a lot of the revenue used to analyze companies for investment was actually a second-order side effect of these investments.
Here the risk is that we have AI investments servicing AI investments for other AI investments.
Google buys Nvidia chips to sell Anthropic compute. Anthropic sells coding assistance to AI companies (including Google and Nvidia). Those companies buy Anthropic services with investor money that is flowing because of all this hype.
IMO the general risk factor is investment trying to get ahead of actual real-world use.
The AI optimists have a sense that AI produces valuable things (like software) at massive scale... that is, output.
But... even if that's true, it will take a lot of time, and a lot of software, for the economy to discover this, work through the path dependencies, and actually produce value.
The most valuable known software has already been written. The stuff you could build but haven't yet is stuff that hasn't made the cut. Value isn't linear.
I can't continue the current model. The dev that gets AI is done in five hours; the ones that don't are thrashing for the next two weeks. I have to unleash the good AI dev. I have the Product team handing us markdown files now with an overview of the project and all the details and stories built into them. I'm literally transforming how a billion-dollar company works right now because of this. I have Codex, Claude, and GitHub Copilot enterprise accounts on top of Office 365. Everyone is being trained right now, as most devs are behind the curve.
The (IMO) question isn't how you produce software, but what the value of that software is. Are you going to make more/better software such that customers pay more, or buy more? Are those customers getting that kind of value?
The answer may be yes. But... it's not an automatic yes.
Instead of programming, think of accounting. Say you experience what you are experiencing now, but as an accountant: a six-person team replaced by two or three hotshots.
So... maybe you can sell more/better accounting for a higher price. But... the potential is probably pretty limited. Over time, maybe business practices will adjust and find uses for this newly abundant capacity.
Maybe you lower prices. Maybe the two hotshots earn as much as the previous team did.
If you are reducing team size, and that's the primary benefit... the fired employees need to find useful employment elsewhere in the economy for surplus value to be realized.
Mediating all this is the law of diminishing returns. At any given moment, new marginal resources have less productive value than the current allocation.
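In textbook form (a standard statement of the concept, nothing specific to AI): for a production function f mapping input to output,

    f'(x) > 0  and  f''(x) < 0

so the marginal product f'(x) shrinks as x grows, and capital deployed at today's margin earns less than the average return on what's already deployed.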
That dev is productive with AI precisely _because_ they have a good mental model.
AI, like other tools, is a multiplier - it doesn't make bad devs good, but it makes good devs significantly more productive.
If you write a program in Python or JavaScript, you have a terrible mental model for how that code is actually executed at the machine level. It's irrelevant, though; you figure it out only when it's a problem.
Even if you don't have a great mental model, now you have AI to identify the problems and generate an explanation of the structure for you.
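As a minimal illustration of that gap, in Python (the function here is just a placeholder; `dis` is the standard-library bytecode inspector):

    import dis

    def add_discount(price, rate):
        return price * (1 - rate)

    # Print the CPython bytecode for the function above -- the level of
    # execution most Python programmers never inspect until a subtle bug
    # or performance problem makes it relevant.
    dis.dis(add_discount)

Most of us never run this until something breaks, which is the point: an LLM can now walk you through that output on the day it matters.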
Outsourcing that to an AI SaaS might be OK, I guess. Given past form, there's going to be a rug-pull/bait-and-switch moment once dividends need to start paying out.
For the past decade people have been clawing their eyes out over how sluggish their computers have become due to everything becoming a bloated Electron app. It's extremely relevant. Meanwhile, here you are seemingly suggesting that not only should everything be a bloated, inefficient mess, it should also be buggy and inscrutable, even more so than it already is. The entire experience of using a computer is about to descend into a heretofore unimaginable nightmare, but hey, at least Jensen Huang got his bag.
I personally make sure I really diversify, so when I buy funds, I buy those holding stocks of EU companies that pay dividends. AFAICT there are zero European AI companies that pay dividends.
You have to go pretty far down the list of holdings (under "Holding details") to find any big bets on AI:
https://www.vanguardinvestor.co.uk/investments/vanguard-ftse...
That's not what's happening here though. Google isn't using the value of its shares to fund demand. Google is using its own cash flow to fund this demand from Anthropic.
The question is whether Anthropic has demand from end users for the capacity they are buying from Google (that's a yes I guess) and whether that demand is profitable for Anthropic (that's a question mark).
Regardless, (a) its ability/desire to make such investments is still driven by stock-driven optimism, and (b) these transactions' "signal" can have a similar, warping effect.
In this case the transaction creates demand for Google's services and also funds Anthropic's growth... which represents demand for Google's services.
"Loop" is an approximation of an analogy. The risk is that enough of such transactions create a dynamic that distorts feedbacks.
I don't think it has much to do with the stock price at all. Current platform oligopolists fear the rise of new platforms. They want a foot in the door for strategic reasons.
What could happen is that frontier labs like Anthropic and OpenAI never become platforms and turn out to be providers of a largely commoditised, low margin service.
In that event, current valuations are too high. But Anthropic's valuation doesn't seem extreme to me. Their $30bn annual run rate is valued at $380bn.
Given this price and Anthropic's strategic value, Google's investment seems reasonable.
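For scale, the multiple implied by those two figures:

    $380bn / $30bn per year ≈ 12.7x forward revenue

Rich by normal standards; whether it's reasonable hinges almost entirely on the growth rate holding.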
So they're selling the transformation, or the model. Or the ability to make a model. And their brand and their harness.
And it seems like the model is definitely not worth 380 billion. Models depreciate incredibly fast. There are lots of models and the other models aren't that far behind.
And it seems like the harness is not worth much, as there are already open-source alternatives that people claim are better.
And all these companies are paying lots of money for these AI training experts.
But I suspect that any regular Hacker News reader with 10 years' dev experience could become a training expert in months if allowed to play with a load of compute and a lot of data for a bit.
Just like any of us could have become a data scientist; this stuff is not particularly hard. Random horny dudes on the internet are putting out LoRAs and quantized models within days against the open-source image models.
So what's worth 380 billion exactly? The brand?
These valuations just look really off. Not by one order of magnitude, but more like by three orders of magnitude: 380 million might be a reasonable valuation, but not 380 billion.
What I also don't get is that it's pretty obvious to me that the Europeans should all be spinning up their own, not necessarily massive, data centers and throwing a few billion at some guys in Cambridge or Stockholm or London or Berlin to make their own AI models.
Only the French have done it.
But instead the rest seem to be trying to court Anthropic or OpenAI to build data centers. Which is just stupid politics given what's happening in the world right now.
Coding Facebook isn't rocket surgery either. Neither is Visa, Salesforce, or many other tech-centric companies. Replicating their business model is.
Those are locked in by network effects. Path dependencies and suchlike can play a role. But... the upshot is that Anthropic, OpenAI, and whatnot have the models people are using for work.
A government-sponsored model isn't a bad thing to have, but I think it's unlikely (though possible) that it will also be the product people want to use or the business that succeeds.
Whatever it is that leads to a $30bn run rate, growing >200%. Right now it's having the better model and being able to show how to use it in specific verticals.
But I suspect in the long run only platforms have high margins (and they will need margins not just revenues to justify their valuation). Are they becoming platforms? Google seems to think (or fear) that they might.
But generally speaking, AI is currently pretty competitive and robust. Straightforward business models, where users pay money and select the best deal, are central. Market power is relatively dispersed.
So... idk. Nvidia doesn't have competition. But Intel didn't have much competition either, and they drove the Moore's-law bus for a long time.
Hardware has been less prone to enshittification. Maybe it's because the demand curve for compute doesn't have natural limits: drive down the price, and demand grows by enough that the total market grows.
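That's a price-elasticity claim, in sketch form: with revenue R = P * Q(P), total spend rises as price falls whenever demand is elastic,

    |ε| = |(dQ/Q) / (dP/P)| > 1  ⇒  dR/dP < 0

Whether compute demand stays that elastic at every price point is an assumption, not a given.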
There's competition now among the American companies (who have a head start in this space), as always happens when the professional oligopolists try to manufacture footholds in the new market.
Nor is it cynical to objectively appraise the interests and economics at play. People aren’t playing circular financing games out of the goodness of their hearts.
And the circularity makes the actual investment numbers fairly meaningless. They don't mind if they end up overpaying for future services, as long as they overpay each other equally.
Vendors may be positioned to know how a customer is doing, but they're also incentivized to overestimate how well a customer is going to perform.
GE Capital (edit: and GMAC) is a great example of how seemingly reasonable vendor financing can cause the lender serious problems.
> To be honest, I think "vendor financing" is still a very risky premise.
Are you aware that all heavy industry in all highly developed nations makes extensive use of vendor financing to sell its products? Siemens is a perfect example of a well-run, stable industrial giant; they offer vendor financing for large purchases. Same for the "heavies" (Mitsubishi, Kawasaki, IHI, Hyundai, Doosan, Hanjin) in Japan and Korea.

If anyone is interested to learn about the damage that the financialisation of General Electric (USA) brought upon the company, you can ask ChatGPT to tell you the story. It is too long to repeat here.
Here is a sample prompt that I used to remind myself:
> I am interested in the history of General Electric and the trouble that their financing units brought in the early to mid 2000s. Can you tell me more?

Edit: I am not asking whether ChatGPT is better than Google Search; I am heading off the standard request to cite one's sources.
EDIT ---- Also, the OP was so brief about GE Capital that I realised most readers under 30 (maybe 35) will have almost no knowledge or memory of that economic history. I wanted to offer an "intellectual carrot" (a ChatGPT prompt) for anyone wishing to learn more. ----
What bothered me most about the original post was the person was putting all vendor financing in the same "bad" bucket. I disagree. I would characterise GE Cap as an infamous example! They were the worst of the worst in a generation (25 years). Most vendor financing is very boring and is used to buy big heavy things with very long operational lives. If the buyer goes bankrupt, it is (relatively) easy to repossess the big heavy thing and sell it again (probably with vendor financing again!).
I just cannot justify the environmental impact and surveillance of using LLMs for everything. I prefer to summarize recent information myself. LLMs are not particularly good at it.
Funny thing about the cable analogy. Ever since all the streaming providers started cranking up prices while still forcing users to see hundreds of ads, my family has been buying second-hand DVDs. So we have regressed from streaming to right after cable. I know one family that went back to cable; they do still watch YouTube here and there, but they got sick of it.
The OP did mention GE Capital, the motherlode of all heavy industry vendor financing. And of massaging the accounting books in order to increase shareholder value in the short term, too.
> motherlode of all heavy industry vendor financing
I doubt they are bigger than other national "heavy industry" champions from East Asia and Western/Central Europe. Without checking, I would guess that the global leaders are Boeing and Airbus.

To the extent that Google and Anthropic are competing for AI business, Google is somewhat hedged against Anthropic winning market share. They still get data center revenue and they own equity, so that's a consolation prize.
On the other hand, it’s increasing Google’s investment in AI, in general.
The vendor financing stuff I saw (as a junior/intern at a supplier) in those days was a reflection of that culture. They'd lease capital equipment through GE Capital, and pack the deal with other stuff to the limit of their accountants' appetite for risk. (You can usually roll 20% of the value into services or peripheral stuff.) I remember one deal where we had to run around and buy office supplies and tools with a corporate card. I did four Honda Civics' worth of laser toner.
GE was reporting their own capital equipment and office supplies as revenue on the Capital side. :) But that is penny-ante stuff in terms of what they did.
The AI stuff is a shady variation of that, but likely far worse as we’ve fired all of the watchers.
The POTUS kids are players in Polymarket and Kalshi, and are running crypto grifts.
The SEC fired most of their investigators, hasn’t appointed members to key boards, and cancelled most of their contracts with FINRA. (Which has laid off a ton of people) Nobody is watching.
So it's open season for normal corporate bullshit, and if you're personally committing felonies attributable to you, you make sure you do it in Florida and pay a vig to the library fund for a pardon.
We’ll have a fun run, then everything starts exploding in mid 2027-2029.
So far both of these companies have shown they suck at support, so we know that's not it. It could be that it helps Anthropic to leverage Gemini in their competition with OpenAI, with Google taking compute commitments.
Anecdata: I'm finding a lot of my "type a random question into the URL/search bar" queries get decent top Gemini answers, where I don't scroll to the results unless I need to dive deeper.
However, they are still useful in these cases if you know the above and use their output as a starting point to think and ask questions.
Google crippling search to bolster AI is a dangerous game. But without people going to competitors, what's the recourse?
The plural of anecdote is not data, but this does not feel like a one-off thing: I was trying to find where I could have a reasonable holiday, and asked Gemini to list all the international airports in two named countries that had direct flights from my preferred departure airport. The response came back with a single proposed flight destination, with "book here" prominently displayed.
Only once I told it that the search was NOT impulse-purchase intent, and that I really wanted to know the possible destinations, did it actually come back with the list of airports that satisfied my search criteria.
Although if we are looking for the bright side, it did provide a valid and informative answer on the second try. I haven't had that kind of experience on SEO-infested Google search for quite a long time now.
Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?
Google versus OpenAI and Anthropic, sure, but Microsoft is deep into OpenAI. Google helping Anthropic also puts MS into a corner (one that may even be shrinking? Copilot and OpenAI financing hurting their brand, rumours of deep displeasure at OpenAI's promises vs. returns).
Seen from afar, I see Google happy to provide TPUs for money (improving Google's strategic positioning), torpedoing confidence in LLMs with their search AI summaries, and using their bankroll to force larger competitors (MS in particular) to keep investments high regardless of performance, user revolts, and internal tensions with Sam Altman's sales approach. Plus, Anthropic is in 'the lead' right now product-wise, so grooming them as a potential purchase would also seem to be a strategic hedge in the long term.
1. https://www.microsoft.com/en-us/microsoft-365/blog/2026/03/0...
> torpedoing confidence in LLMs with their search AI summaries
That is some real tin foil hat thinking.

Google didn't launch LLM products early despite being a tech leader, and has gotten piles of bad press for its misleading AI search summaries. They know how and why they suck. Google Search is a highly popular, market-facing service packaging bad summaries as "AI". Meanwhile, LLM search threatens to disrupt Google's primary cash cow (advertising around search).
Here on HN, on Reddit, and in media writ large, a lot of the "AI" failure stories are not about ChatGPT hallucinations; they're about the shockingly wrong search summaries from Google, undermining consumer confidence and breaching trust.
ChatGPT and other LLM providers rarely show conflicting source material side by side with misleading text gen. The number-one search provider, who leads in some LLM tech, does though, routinely, looking incompetent and generating negative "AI" sentiment through repeated failures at mass scale...
So the theory here is either that the best search org in the world, filled with geniuses, can't tell they're pooping on their own product and profitability and aren't fixing it because they can't/won't... ...or <tinfoil mode engaged>... Google already makes money and is happy with substandard product and market performance in cases where it hurts the hype critical to other businesses but not their own (while also pre-positioning in case LLM search becomes essential).
Win/win/win strategy with a substandard product, versus Google not being aware of what their biggest product is doing.
Google's AI summaries are doing a lot of work to make AI summaries seem terrible. I ascribe profit motives to their actions. Ascribing incompetence seems naive and irreconcilable with their strategic corporate history.
By the time it is a problem, it will be too late.
OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and so is Google with Gemini(?)... and the open-weight models are two years behind.
Any win here seems only temporary, even if a new breakthrough to a strong AI somehow happens.
So if I'm Google I'd want a decent chunk of at least one of them.
It’s a commodity in the making.
Who supplies the hardware for the singularity?
If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.
I guess you can sell it to the Department of War.
It's awesome and world-dominating; you just don't sell access to that AI. Instead you directly, by yourself, dominate any field in which better AI provides a competitive advantage, as soon as you can afford to invest the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.
Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.
There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.
Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. They often lack the intangible elements of success, such as empathizing with customers' needs.
If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.
That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators, for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight, continues to grow.
(One possible trigger would be the open models. As long as the gap between SOTA and open is constant or decreasing, there will be a point where SOTA operators might be forced to cannibalize the software industry by a third party with an open model and access to infra pulling the trigger first.)
I think meaningful change to the current equilibrium would require at absolute minimum the proprietary equivalent of the development of the transformer architecture.
Integration, and mindset. AI, by its general-purpose nature, subsumes software products. Most products today try to integrate AI inside: put it in a box and use it to supercharge the product. Whereas it's becoming obvious, even to non-technical users, that AI is better on the outside, using the product for you. This gives the SOTA AI companies an advantage over everyone else: they're on the outside, and can assimilate products into their AI ecosystem - like the Borg collective, adding each product's distinctiveness to their own - reaping outsized and compounding benefits from deep interoperability between the new capability and everything else the AI could already do.
Once one SOTA AI company starts this process, the way I see it, it's the end-game for the industry. The only players that can compete with it are the other SOTA AI companies - but this will just be another race, with nearly-equivalent offerings trading spots in benchmarks/userbase every other month - and that race starts with rapidly cannibalizing the entire software industry, as each provider wants to add new capabilities first, for a momentary advantage.
Once this process starts, I see no way for it to be stopped. Software products will stop being a thing.
Open models can't compete, because they're always lagging proprietary ones. What they do, however, is ensure the above happens: if, for some reason, SOTA AI companies stick to only supplying "digital smarts as a service" for everyone, someone with access to sufficient compute infra is bound to eventually try the end-game strategy with an open model, hoping to get a big payday before the SOTA companies respond in kind.
Either way, the way I see it, software industry as we know it is already living on borrowed time.
So suppose ACo attempts to subsume Spotify or Photoshop or whatever. So they ... build their own competing platform internally? That's a lot of work. And now they what, attempt to drive users to it by virtue of it being a first party offering? Okay sure that's just your basic anticompetitive abuse of monopoly I guess. MS got in trouble for that but whatever let's assume that happens.
So now lots of ACo users are using a Photoshop competitor behind the scenes. I guess they purchased a subscription addon for that? And I guess ACo has the home team advantage here (anticompetitive and illegal ofc) but other than that why can't Photoshop compete? It just seems like business as usual to me. What am I missing?
If ACo sells widgets and I also sell widgets, assuming I can get attention from consumers and offer a compelling set of features for a competitive price why can't I get customers exactly? ACo's AI will be able to make use of either widget solution just fine assuming ACo doesn't intentionally sabotage me.
I think the more likely issue is that at some point the cost of building software falls far enough that it ceases to be a viable product category. You just ask an agent for a one off solution and it hands it to you.
Projecting out even farther, eventually the agents get good enough that you don't need to ask for software tools in the first place. You request X, the agent realizes it needs a tool for that, builds the one-off tool, uses it, returns X to you, and the ephemeral purpose-built tool gets disposed of as part of the session history. All of this without the end user ever realizing that a tool to do X was authored to begin with.
So I guess I agree with your end outcome but disagree about the mechanics and consequences of it.
> Open models can't compete
They can though. There's a gap, sure, but this isn't black and white. Plenty of open models are quite useful for a particular task right now.
Any of Meta’s competitors could reproduce Instagram “the software” in every meaningful detail for (let’s say) $100M.
They still don’t have Instagram the product. Reducing that outlay to a few billion tokens doesn’t change that.
I guess I’ll believe this theory when Anthropic or OpenAI rolls out a search engine with an integrated ad platform that can meaningfully compete with Google. How hard can that be?
At this point, if you can no longer safely drip-feed industry the access to "thinking as a service" and rake in rent, you start using it, displacing existing players in segment after segment until you kill the entire software industry.
That's pre-ASI and entirely distinct from the AI itself becoming so good it takes over.
Since it would be unfathomably smarter than the people making use of it, you could simply instruct it not to reveal information that would enable a potential competitor to construct an equivalent. No need to worry about competition; you can quite literally take over the world at that point.
Not that I think it's likely such a system will so easily come to pass, nor that I think humanity could manage to maintain control over such a system for long. But we're talking about investments to hedge against existential tail risks here so "within the realm of plausibility" is sufficient.
I'd be willing to bet that the Venn diagram of investors in those two companies is nearly a circle.
But why? Assuming there is a secret undiscovered algorithm to make AGI from a neural network... then what happens if someone leaks it, or China steals it and releases it openly tomorrow?
Current LLMs are absolutely stupidly inefficient on this front, requiring virtually all human knowledge to train on as a prerequisite to early-college-level understanding of any one subject (granted, almost all subjects at that point, but what it has in breadth it lacks in depth).
That way, instead of training on millions of TPUs with petabytes of data just to get a model that maintains an encyclopedia of knowledge with a twelve-year-old's capacity for judgment, that same training set and compute could (they hope) far exceed the depth of judgment, planning, and vision of any human who has ever lived (ideally while keeping the same breadth, speed of inference, etc.).
It's one of those situations where we have reason to believe that "exactly matching" human intelligence is basically impossible: the target is a narrow band in an exponentially large range. You either fall short (and it's honestly odd that LLMs were able to exceed animal intelligence/judgment while still falling short of average adult humans... even that should have been too small a target) or you blow past it completely into something that neither humans nor teams of humans could ever compete directly against.
Chess and Go are fine examples here: algorithms spent very short periods of time "at a level where they could compete reasonably well against" human grandmasters. It was decades of falling short, followed by quite suddenly leaving humans completely in the dust, with no delusions of ever catching up.
That is what the large players hope to get with AGI as well (and/or failing that, using AI as a smoke screen to bilk investors and the public, cover up their misdeeds, play cup and ball games with accountability, etc)
One technique is, instead of trying to pick individual winners, to look at the total addressable market, then compare the market size with the capital being pumped in. On this basis, Aswath concluded that AI investment collectively is likely to provide unsatisfactory returns.
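A minimal sketch of that technique in Python, with deliberately made-up placeholder numbers (these are not Aswath's estimates):

    # Back-of-the-envelope: does aggregate AI capex clear a hurdle rate?
    capital_invested = 1_000e9   # cumulative AI capex (~$1T, per the headline below)
    mature_revenue   = 300e9     # assumed annual AI revenue at maturity
    operating_margin = 0.25      # assumed sector-wide margin
    hurdle_rate      = 0.10      # assumed cost of capital

    implied_return = (mature_revenue * operating_margin) / capital_invested
    print(f"implied return {implied_return:.1%} vs hurdle {hurdle_rate:.0%}")
    # 7.5% < 10% under these inputs: the sector collectively disappoints
    # even if individual winners do fine.

If the implied return on the whole pool of capital can't clear the cost of capital even under optimistic inputs, picking winners doesn't save the aggregate bet.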
Here's a recent headline: "Nvidia’s Jensen Huang thinks $1 trillion won’t be enough to meet AI demand—and he’s paying engineers in AI tokens worth half their salary to prove it"
There are two parts to this. First, a staggering $1T is expected to be invested in AI. Someone worked out that this is more than the entire capital expenditure of companies like Apple over their whole existence. IOW, $1T is a lot of dough. A LOT.
Secondly, this whole notion that AI is such a sure thing that half the salary will be in tokens should ring alarm bells. '“I could totally imagine in the future every single engineer in our company will need an annual token budget,” he said. “They’re going to make a few 100,000 a year as their base pay. I’m going to give them probably half of that on top of it as tokens so that they could be amplified 10 times.”'
I recall from the dotcom fiasco that service companies like accountants and lawyers were providing services to the dotcom companies and being compensated in stock options rather than cold hard cash like you'd normally expect.
Very dangerous.
As another poster pointed out, this really boils down to FOMO by big tech. I'm expecting big trouble down the line. We'll see whether I'm early or just plain wrong.
It is just cargo cult financing at this point.
But: no singularity. At least not yet.
The flaw in this thinking seems to be the idea that AI is a singular thing. You point the model back at its own source code, sit back and watch as it does everything at once. Right now it's more like AI being an army of assistants organized by human researchers. You often need specialized models for this stuff, you can't just use GPT for everything.
AI has none of that now - it only gets direct human feedback from those controlling the training (or at a second level, the harness), and that feedback is really in service of the humans at the steering wheels. Sum total of humanity, mixed in the blender, and flavored to make the trainers look good in front of their peers.
Now, if AI could interact directly and propagate that feedback into its training, or otherwise learn online, that changes. It's a qualitative jump. The second jump comes once there are enough AIs interacting with the human economy and society directly that their influence starts to outweigh ours. At that point, they'll end up evolving their own standards and benchmarks, and then it's we who will be judged by their measure.
(I.e., if you think we have it bad now, with how we're starting to adapt our writing and coding style to make things easier for LLMs, just wait until next-gen models start participating in the economy and we're all forced by market forces to learn some weird, emergent, token-efficient English/Chinese pidgin that AI-run companies prefer their suppliers to use.)
Then it all remains a question of who has the most compute power, as self-improvement seems compute-heavy with the current approach.
It seems pretty wild to bet the future on such an assumption. What are you even basing it on?
But they also have access to an unimaginably large data set plus reach into people’s daily lives.
Seems more like partners for world domination.
I actually mentioned to a Google friend the other week that I wouldn't be surprised to see Google tipping the hat towards Anthropic soon so as to put a little more heat on OAI.
Let's say Anthropic fails to pay its debt. Can Google take those TPUs back and make money from them?
What if AI is never good or cheap enough to reach significant profitability?
Maybe a little bit of both.
Obviously it's not a perfect comparison, but you have to wonder how much of NVIDIA's income (for instance) is ultimately funded by its own money.
~ TK