Some people call this sort of thing a "circular deal", but perhaps a better way to think of it is as a very large-scale version of vendor financing? The simple version of vendor financing is when a vendor gives a retailer time to pay for goods they purchased for resale. This is effectively a loan that's backed by the retailer's ability to resell the goods. There's a possibility that the retailer goes broke and doesn't pay, but the vendor has insight into how well the retailer is doing, so they know if they're a good risk.
Similarly, Google likely knows quite a lot about Anthropic because Anthropic buys computing services from Google for resale. They're making an equity investment rather than a loan, but the money will be coming back to Google, assuming Anthropic's sales continue to rise as fast as they have been.
Also, if you own Google stock, some small part of that is an investment in Anthropic?
[1] https://www.anthropic.com/news/google-broadcom-partnership-c...
The risk from this structure mostly has to do with how it affects market cap: companies using the value of their shares to fund demand for their services.
That's a risk.
PG had an essay about this during the dotcom era, when he worked at Yahoo. IIRC, Yahoo's share price and other big successes in the space attracted investment into startups. Startups used that money to advertise on Yahoo. Yahoo bought some of these startups.
So... a lot of the revenue used to analyze companies for investment was actually a 2nd order side effect of these investments.
Here the risk is that we have AI investments servicing AI investments for other AI investments.
Google buys Nvidia chips to sell Anthropic compute. Anthropic sells coding assistance to AI companies (including Google and Nvidia). They buy Anthropic's services with investor money that is flowing because of all this hype.
Imo the general risk factor is trying to get ahead of actual worldly use.
The AI optimists have a sense that AI produces things that are valuable (like software) at massive scale... that is, output.
But... even if true, it will take a lot of time, and a lot of software, for the economy to discover this, work through the path dependencies, and actually produce value.
The most valuable, known software has already been written. The stuff that you could do but haven't yet is stuff that hasn't made the cut. Value isn't linear.
I can't continue the current model. The dev that gets AI is done in five hours; the ones that don't are thrashing for the next two weeks. I have to unleash the good AI dev. I have the Product team handing us markdown files now with an overview of the project and all the details and stories built into them. I'm literally transforming how a billion dollar company works right now because of this. I have Codex, Claude and GitHub Copilot enterprise accounts on top of Office 365. Everyone is being trained right now, as most devs are behind.
The (imo) question isn't how you produce software, but what the value of this software is. Are you going to make more/better software such that customers pay more, or buy more? Are those customers getting value of this kind?
The answer may be yes. But... it's not an automatic yes.
Instead of programming, think of accounting. Say you experience what you are experiencing, but as an accountant: a 6-person team replaced by 2-3 hotshots.
So... Maybe you can sell more/better accounting for a higher price. But... potential is probably pretty limited. Over time, maybe business practices will adjust and find uses for this newly abundant capacity.
Maybe you lower prices. Maybe the two hotshots earn as much as the previous team.
If you are reducing team size, and that's the primary benefit... the fired employees need to find useful employment elsewhere in the economy for surplus value to be realized.
Mediating all this is the law of diminishing returns. At any given moment, new marginal resources have less productive value than the current allocation.
I personally make sure I really diversify, so that when I buy funds, I buy those with stocks of EU companies which pay dividends. AFAICT there are 0 European AI companies that pay dividends.
That's not what's happening here though. Google isn't using the value of its shares to fund demand. Google is using its own cash flow to fund this demand from Anthropic.
The question is whether Anthropic has demand from end users for the capacity they are buying from Google (that's a yes I guess) and whether that demand is profitable for Anthropic (that's a question mark).
Regardless, (a) its ability/desire to make such investments is still driven by stock-driven optimism and (b) these transactions' "signal" can have a similar, warping effect.
In this case the transaction creates demand for Google's services and also funds Anthropic's growth... which represents demand for Google's services.
"Loop" is an approximation of an analogy. The risk is that enough of such transactions create a dynamic that distorts feedbacks.
I don't think it has much to do with the stock price at all. Current platform oligopolists fear the rise of new platforms. They want a foot in the door for strategic reasons.
What could happen is that frontier labs like Anthropic and OpenAI never become platforms and turn out to be providers of a largely commoditised, low margin service.
In that event, current valuations are too high. But Anthropic's valuation doesn't seem extreme to me. Their $30bn annual run rate is valued at $380bn.
Given this price and Anthropic's strategic value, Google's investment seems reasonable.
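For scale, the multiple implied by those two figures is easy to check (a rough sketch; the numbers are the ones quoted above, not independently verified):

```python
# Quick sanity check of the implied revenue multiple, using the figures
# quoted above ($30bn run rate, $380bn valuation), as reported.
run_rate_bn = 30
valuation_bn = 380

multiple = valuation_bn / run_rate_bn
print(f"Implied revenue multiple: {multiple:.1f}x")  # ~12.7x
```

A ~13x multiple on revenue is rich by the standards of mature software companies, but high-growth names have traded at similar or higher multiples, which is presumably the argument being made here.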
So they're selling the transformation, or the model. Or the ability to make a model. And their brand and their harness.
And it seems like the model is definitely not worth 380 billion. Models depreciate incredibly fast. There are lots of models and the other models aren't that far behind.
And it seems like the harness is not worth much as there's already open source alternatives that people claim are better.
And all these companies are paying lots of money for these AI training experts.
But I suspect that any regular Hacker News reader of 10 years dev experience could become a training expert in months if allowed to play with a load of compute and a lot of data for a bit.
Just like any of us could have become a data scientist; this stuff is not particularly hard. Random horny dudes on the internet are putting out LoRAs and quantized models within days of the open-source image models.
So what's worth 380 billion exactly? The brand?
These valuations just look really off. Not by one order of magnitude, but more like by three orders of magnitude. Like 380 million might be a reasonable valuation, but not billion.
What I also don't get is that it's pretty obvious to me that the Europeans should all be spinning up their own, not necessarily massive, data centers and throwing a few billion at some guys in Cambridge or Stockholm or London or Berlin to make their own AI models.
Only the French have done it.
But instead the rest seem to be trying to court Anthropic or OpenAI to build data centers. Which is just stupid politics given what's happening in the world right now.
Coding facebook isn't rocket surgery either. Neither is Visa, Salesforce or many other tech-centric companies. Replicating their business model is.
Those are locked in by network effects. Path dependencies and suchlike can play a role. But... the upshot is that Anthropic, OpenAI, and whatnot have the model people are using for work.
A government sponsored model isn't a bad thing to have, but I think it's unlikely (but possible) that it will also be the product people want to use or the business that succeeds.
Whatever it is that leads to a $30bn run rate, growing >200%. Right now it's having the better model and being able to show how to use it in specific verticals.
But I suspect in the long run only platforms have high margins (and they will need margins not just revenues to justify their valuation). Are they becoming platforms? Google seems to think (or fear) that they might.
Vendors may be positioned to know how a customer is doing, but they're also incentivized to overestimate how well a customer is going to perform.
GE Capital (edit: and GMAC) is a great example of how seemingly reasonable vendor financing can cause the lender serious problems.
> To be honest, I think "vendor financing" is still a very risky premise.
Are you aware that all heavy industry in all highly developed nations makes extensive use of vendor financing to sell their products? Siemens is a perfect example of a well-run, stable industrial giant. They offer vendor financing for large purchases. Same for the "heavies" (Mitsubishi, Kawasaki, IHI, Hyundai, Doosan, Hanjin) in Japan and Korea.

If anyone is interested to learn about the damage that the financialisation of General Electric (USA) brought upon itself, you can ask ChatGPT to tell you the story. It is too long to repeat here.
Here is a sample prompt that I used to remind myself:
> I am interested in the history of General Electric and the trouble that their financing units brought in the early to mid 2000s. Can you tell me more?

Edit: I am not asking whether ChatGPT is better than Google Search, I am asking after the standard dodge of citing one's sources.
EDIT ---- Also, the OP was so brief about GE Cap, I realised that most readers under 30 (maybe 35) will have almost no knowledge or memory of that economic history. I wanted to offer an "intellectual carrot" (ChatGPT prompt) for anyone wishing to learn more. ----
What bothered me most about the original post was the person was putting all vendor financing in the same "bad" bucket. I disagree. I would characterise GE Cap as an infamous example! They were the worst of the worst in a generation (25 years). Most vendor financing is very boring and is used to buy big heavy things with very long operational lives. If the buyer goes bankrupt, it is (relatively) easy to repossess the big heavy thing and sell it again (probably with vendor financing again!).
I just cannot justify the environmental impact and surveillance of using LLMs for everything. I prefer to summarize recent information myself. LLMs are not particularly good at it.
Funny thing about the cable analogy. Ever since all streaming providers have started cranking up prices and still forcing users to see hundreds of ads my family has been buying second hand dvds. So we have regressed from streaming to right after cable. I know one family that went back to cable, they do still watch YouTubes here and there but they got sick of it.
The OP did mention GE Capital, the motherload of all heavy industry vendor financing. And of massaging the accounting books in order to increase shareholder value in the short term, also.
> motherload of all heavy industry vendor financing
I doubt they are bigger than other national "heavy industry" champions from East Asia and Western/Central Europe. Without checking, I would guess that the global leaders are Boeing and Airbus.

To the extent that Google and Anthropic are competing for AI business, Google is somewhat hedged against Anthropic winning market share. They still get data center revenue and they own equity, so that’s a consolation prize.
On the other hand, it’s increasing Google’s investment in AI, in general.
The vendor financing stuff I saw (as a junior / intern at a supplier) in those days was a reflection of that culture. They’d lease capital equipment through GE Capital, and pack it with other stuff to the limit of their accountants’ appetite for risk. (You can usually roll 20% of the value into services or peripheral stuff.) I remember one deal where we had to run around and buy office supplies and tools with a corporate card. I did 4 Honda Civics' worth of laser toner.
GE was reporting their own capital equipment and office supplies as revenue on the Capital side. :) But that is penny ante stuff in terms of what they did.
The AI stuff is a shady variation of that, but likely far worse as we’ve fired all of the watchers.
So far both of these companies have shown they suck at support so we know that's not it. It could be that it might help Anthropic to leverage Gemini in their competition with OpenAI and Google will take compute commitments.
Anecdata: I'm finding a lot of my "type random question in URL/search bar" has decent top Gemini answers where I don't scroll to results unless I need to dive deeper.
Google crippling search to bolster AI is a dangerous game. But without people going to competitors, what's the recourse?
The plural of anecdote is not data but this does not feel like a one-off thing: I was trying to find where it would be possible to get to have a reasonable holiday, and asked Gemini to list me all the international airports in two named countries that had direct flights from my preferred departure airport. The response came back with a single proposed flight destination with "book here" prominently available.
Only once I told it that the search was NOT an impulse purchase intent and I really wanted to know the possible destinations - then did it actually come back with the list of airports that satisfied my search criteria.
Although if we are looking for the bright side, it did provide a valid and informative answer on the second try. I haven't had that kind of experience on SEO-infested Google search for quite a long time now.
However, they are still useful in these cases if you know the above and use their output as a starting point to think and ask questions.
Arguably, too much of this kind of hedging is anti-competitive. But that doesn’t seem to be much of a problem yet?
Google versus OpenAI and Anthropic, sure, but Microsoft is deep into OpenAI. Google helping Anthropic also puts MS into a corner (one that may even be shrinking? Copilot and OpenAI financing hurting their brand, rumours of deep displeasure at OpenAI's promises vs. returns).
Seen from afar, I see Google happy to provide TPUs for money (improving Google's strategic positioning), torpedoing confidence in LLMs with their search AI summaries, and using their bankroll to force larger competitors (MS in particular) to keep investments high regardless of performance, user revolts, and internal tensions with Sam Altman's sales approach. Plus, Anthropic is in 'the lead' right now product-wise, so grooming them as a potential purchase would also seem to be a strategic hedge in the long term.
1. https://www.microsoft.com/en-us/microsoft-365/blog/2026/03/0...
> torpedoing confidence in LLMs with their search AI summaries
That is some real tin foil hat thinking.

Google didn't launch LLM products despite being a tech leader, and has gotten piles of bad press for its misleading AI search summaries. They know how and why they suck. Google search is a highly popular, market-facing service packaging bad summaries as "AI". Meanwhile LLM searches threaten to disrupt Google's primary cash cow (advertising around search).
Here on HN, on Reddit, and media writ large, a lot of the “AI” failure stories are not about ChatGPT hallucinations, it’s the shockingly wrong search summaries from Google, undermining consumer confidence and breaching trust.
ChatGPT and other LLM providers rarely show conflicting source material side by side with misleading text gen. The number one search provider who leads in some LLM tech does though, routinely, looking incompetent and generating negative “AI” sentiment through repeated failures at mass scale…
So the theory here is either that the best search org in the world, filled with geniuses, can't tell they're pooping on their own product and profitability and aren't fixing it because they can't/won't... or <tinfoil mode engaged>... Google already makes money and is happy with substandard product and market performance in the cases where it hurts the hype critical to other businesses but not themselves (while also pre-positioning in case LLM search becomes essential).
Win/win/win strategy with a substandard product, versus Google not being aware of what their biggest product is doing.
Google's AI summaries are doing lotsa work to make AI summaries seem terrible. I ascribe profit motives to their actions. Ascribing incompetence seems naive and irreconcilable with their strategic corporate history.
By the time it is a problem, it will be too late.
OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and also Google with Gemini(?)... and the open-weight models are 2 years behind.
Any win here seems only temporary. Even if a new breakthrough to strong AI happens somehow.
So if I'm Google I'd want a decent chunk of at least one of them.
It’s a commodity in the making.
If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.
I guess you can sell it to the Department of War.
It's awesome and world-dominating; you just don't sell access to that AI. Instead you directly, by yourself, dominate any field where better AI provides a competitive advantage, as soon as you can afford to invest the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.
Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.
There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.
Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. They often lack intangible elements to success, such as empathizing with your customers' needs.
If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.
That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators, for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight, continues to grow.
(One possible trigger would be the open models. As long as the gap between SOTA and open is constant or decreasing, there will be a point where SOTA operators might be forced to cannibalize the software industry by a third party with an open model and access to infra pulling the trigger first.)
I think meaningful change to the current equilibrium would require at absolute minimum the proprietary equivalent of the development of the transformer architecture.
Integration, and mindset. AI, by its general-purpose nature, subsumes software products. Most products today try to integrate AI inside, put it in a box and use to supercharge the product - whereas it's becoming obvious even for non-technical users, that AI is better on the outside, using the product for you. This gives the SOTA AI companies an advantage over everyone else - they're on the outside, and can assimilate products into their AI ecosystem - like the Borg collective, adding their distinctiveness to their own - and reaping outsized and compounding benefits from deep interoperability between the new capability and everything else the AI could already do.
Once one SOTA AI company starts this process, the way I see it, it's the end-game for the industry. The only players that can compete with it are the other SOTA AI companies - but this will just be another race, with nearly-equivalent offerings trading spots in benchmarks/userbase every other month - and that race starts with rapidly cannibalizing the entire software industry, as each provider wants to add new capabilities first, for a momentary advantage.
Once this process starts, I see no way for it to be stopped. Software products will stop being a thing.
Open models can't compete, because they're always lagging proprietary. What they do, however, is ensure the above happens - because if, for some reason SOTA AI companies stick to only supplying "digital smarts a service" for everyone, someone with access to sufficient compute infra is bound to eventually try the end-game strategy with an open model, hoping to get a big payday before SOTA companies respond in kind.
Either way, the way I see it, software industry as we know it is already living on borrowed time.
So suppose ACo attempts to subsume Spotify or Photoshop or whatever. So they ... build their own competing platform internally? That's a lot of work. And now they what, attempt to drive users to it by virtue of it being a first party offering? Okay sure that's just your basic anticompetitive abuse of monopoly I guess. MS got in trouble for that but whatever let's assume that happens.
So now lots of ACo users are using a Photoshop competitor behind the scenes. I guess they purchased a subscription addon for that? And I guess ACo has the home team advantage here (anticompetitive and illegal ofc) but other than that why can't Photoshop compete? It just seems like business as usual to me. What am I missing?
If ACo sells widgets and I also sell widgets, assuming I can get attention from consumers and offer a compelling set of features for a competitive price why can't I get customers exactly? ACo's AI will be able to make use of either widget solution just fine assuming ACo doesn't intentionally sabotage me.
I think the more likely issue is that at some point the cost of building software falls far enough that it ceases to be a viable product category. You just ask an agent for a one off solution and it hands it to you.
Projecting out even farther, eventually the agents get good enough that you don't need to ask for software tools in the first place. You request X, the agent realizes that it needs a tool for that, builds the one off tool, uses it, returns X to you, and the ephemeral purpose built tool gets disposed of as part of the the session history. All of this without the end user ever realizing that a tool to do X was authored to begin with.
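That disposable-tool flow can be sketched in a few lines. This is a toy illustration, not any real agent framework: the tool source is hardcoded here, where a real agent would generate it with an LLM on the fly.

```python
# Toy sketch of the "ephemeral tool" flow: the agent authors a one-off
# tool, uses it exactly once, and the tool vanishes with the session.
def handle_request(request_data):
    # In a real agent, this source string would be LLM-generated.
    tool_source = (
        "def tool(xs):\n"
        "    # one-off tool 'authored' for this request\n"
        "    return sorted(set(xs))\n"
    )
    scope = {}
    exec(tool_source, scope)              # build the ephemeral tool
    result = scope["tool"](request_data)  # use it once
    return result                         # tool is discarded with `scope`

print(handle_request([3, 1, 2, 3]))       # prints [1, 2, 3]
```

The point of the sketch is that nothing persists: the "software" exists only for the lifetime of the request, which is exactly why it stops being a product category.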
So I guess I agree with your end outcome but disagree about the mechanics and consequences of it.
> Open models can't compete
They can though. There's a gap, sure, but this isn't black and white. Plenty of open models are quite useful for a particular task right now.
Any of Meta’s competitors could reproduce Instagram “the software” in every meaningful detail for (let’s say) $100M.
They still don’t have Instagram the product. Reducing that outlay to a few billion tokens doesn’t change that.
I guess I’ll believe this theory when Anthropic or OpenAI rolls out a search engine with an integrated ad platform that can meaningfully compete with Google. How hard can that be?
At this point, if you can no longer safely drip-feed industry the access to "thinking as a service" and rake in rent, you start using it, displacing existing players in segment after segment until you kill the entire software industry.
That's pre-ASI and entirely distinct from the AI itself becoming so good it takes over.
Being unfathomably smarter than the people making use of it you could simply instruct it not to reveal information that would enable a potential competitor to construct an equivalent. No need to worry about competition; you can quite literally take over the world at that point.
Not that I think it's likely such a system will so easily come to pass, nor that I think humanity could manage to maintain control over such a system for long. But we're talking about investments to hedge against existential tail risks here so "within the realm of plausibility" is sufficient.
I’d be willing to bet that the Venn diagram of investors in those two companies is nearly a circle.
But why? Assuming there is a secret undiscovered algorithm to make AGI from a neuronal network ... then what happens if someone leaks it, or china steals it and releases it openly tomorrow?
Current LLMs are absolutely stupidly inefficient on this front, requiring virtually all human knowledge to train on as a prerequisite to early-college-level understanding of any one subject (granted, almost all subjects at that point, but what it has in breadth it lacks in depth).
That way, instead of training on millions of TPUs with petabytes of data just to get a model that maintains an encyclopedia of knowledge with a twelve-year-old's capacity for judgment, that same training set and compute could (they hope) instead far exceed the depth of judgement, planning, and vision of any human who has ever lived (ideally while keeping the same breadth, speed of inference, etc.).
It's one of those situations where we have reason to believe that "exactly matching" human intelligence is basically impossible: the target range is too exponentially large. You either fall short (and it's honestly odd that LLMs were able to exceed animal intelligence/judgment while still falling short of average adult humans... even that should have been too small a target) or you blow past it completely into something that neither humans nor teams of humans could ever compete directly against.
Chess and Go are fine examples here: algorithms spent very short periods of time at a level where they could compete reasonably well against human grandmasters. It was decades of falling short, followed quite suddenly by leaving humans completely in the dust with no delusions of ever catching up.
That is what the large players hope to get with AGI as well (and/or failing that, using AI as a smoke screen to bilk investors and the public, cover up their misdeeds, play cup and ball games with accountability, etc)
One technique is, instead of trying to pick individual winners, look at the total addressable market. Then compare the market size with the capital being pumped in. If you look on this basis, Aswath concluded that collectively AI investment is likely to provide unsatisfactory returns.
Here's a recent headline: "Nvidia’s Jensen Huang thinks $1 trillion won’t be enough to meet AI demand—and he’s paying engineers in AI tokens worth half their salary to prove it"
There are two parts to this. First, a staggering $1t is expected to be invested in AI. Someone worked out that this is more than the entire capital expenditure of companies like Apple, over their entire existence. IOW, $1t is a lot of dough. A LOT.
Second, this whole notion that AI is such a sure thing that half the salary will be paid in tokens should ring alarm bells. '“I could totally imagine in the future every single engineer in our company will need an annual token budget,” he said. “They’re going to make a few 100,000 a year as their base pay. I’m going to give them probably half of that on top of it as tokens so that they could be amplified 10 times.”'
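The arithmetic in that quote is worth running. Assuming an illustrative $300k base (the quote only says "a few 100,000") and a guessed blended token price, the implied annual token volume looks like this. Both numbers are assumptions, not disclosed figures:

```python
# Back-of-envelope on the quoted scheme: base pay plus half of it again
# as an annual token budget. The base salary and $/token price are
# illustrative assumptions, not anything Nvidia has disclosed.
base_pay_usd = 300_000
token_budget_usd = base_pay_usd / 2   # "half of that on top of it as tokens"
usd_per_million_tokens = 10.0         # assumed blended inference price

tokens_per_year = token_budget_usd / usd_per_million_tokens * 1_000_000
print(f"${token_budget_usd:,.0f} budget -> {tokens_per_year / 1e9:.0f}B tokens/yr")
```

At those assumed prices that's on the order of tens of billions of tokens per engineer per year, which is the scale of commitment being normalized as compensation.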
I recall from the dotcom fiasco that service companies like accountants and lawyers were providing services to the dotcom companies and being compensated in stock options rather than cold hard cash like you'd normally expect.
Very dangerous.
As another poster pointed out, this really boils down to FOMO by big tech. I'm expecting big trouble down the line. We await to see if I'm early or just plain wrong.
It is just cargo cult financing at this point.
AI has none of that now - it only gets direct human feedback from those controlling the training (or at a second level, the harness), and that feedback is really in service of the humans at the steering wheels. Sum total of humanity, mixed in the blender, and flavored to make the trainers look good in front of their peers.
Now, if AI could interact directly and propagate that feedback to their training, or otherwise learn on-line, that changes. It's a qualitative jump. The second one is, once there's enough AIs interacting with human economy and society directly, that their influence starts to outweigh ours. At that point, they'll end up evolving their own standards and benchmarks, and then it's us who will be judged by their measure.
(I.e. if you think we have it bad now, with how we're starting to adapt our writing and coding style to make it easier for LLMs, just wait when next-gen models start participating in the economy, and we'll all be forced by the market forces to learn some weird, emergent token-efficient English/Chinese pidgin that AI-run companies prefer their suppliers to use.)
Then it all remains a question of who has the most compute power, as self-improvement seems compute-heavy with the current approach.
It seems pretty wild to bet the future on such an assumption. What are you even basing it on?
But they also have access to an unimaginably large data set plus reach into people’s daily lives.
Seems more like partners for world domination.
I actually mentioned to a Google friend the other week that I wouldn't be surprised to see Google tipping the hat towards Anthropic soon so as to put a little more heat on OAI.
Let's say Anthropic fails to pay its debt. Can Google take those TPUs back and make money from them?
What if AI is never good or cheap enough to reach significant profitability?
Maybe a little bit of both.
Obviously it's not a perfect comparison, but you have to wonder how much of NVIDIA's income (for instance) is ultimately funded by its own money.
That kind of insane growth & demand is unprecedented at that scale.
https://www.anthropic.com/news/google-broadcom-partnership-c...
- Development velocity is very noticeably much higher across the board. Quality is not obviously worse, but it's LLM assisted, not vibe coding (except for experiments and internal tools).
- Things that would have been tactically built with TypeScript are now Rust apps.
- Things that would have been small Python scripts are full web apps and dashboards.
- Vibe coding (with Claude Desktop, nobody is using Replit or any of the others) is the new Excel for non tech people.
- Every time someone has any idea it's accompanied by a multi page "Clauded" memo explaining why it's a great idea and what exactly should be done (about 20% of which is useful).
- 80% of what were web searches now go to Claude instead (for at least a significant minority of people, could easily be over 50%).
- Nobody talks about ChatGPT any more. It's Claude or (sometimes) Gemini.
- My main job isn't writing code but I try to keep Claude Code (both my personal and corpo accounts) and OpenCode (also almost always Claude, via Copilot) busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities.
We (~20 people) are probably using 2 orders of magnitude more inference than we were at the start of the year and it's consolidated away from cursor, ChatGPT and Claude to just be almost all Claude (plus a little Gemini as that's part of our Google Whateverspace plan and some people like it, mostly for non-engineering tasks).
No idea if any of this will make things better, exactly, but I think we'd be at a severe competitive disadvantage if we dropped it all and went back how things were.
It's all romantic, but a bunch of devs are getting canned left and right, a slice of the population whose disposable income the economy depends on.
It's too late to be a contrarian pundit, but what's been done besides uncovering some 0-days? The correction will be brutal, worse than the Industrial Revolution. Just the recent news about Meta cuts, Salesforce, Snap, Block; the list is long.
Have you shipped anything commercially viable because of AI or are you/we just keeping up?
Has it occurred to you that there might not be a correction, and that the outcome would still be brutal, at least on par with the industrial revolution?
It's physically impossible to build out the datacenters required for the "AI is actually good and we have mass layoffs" scenario. This Anthropic investment is spurred on because they've already hit a brick wall with capacity.
$40B goes a long way, but not for datacenters where nearly every single component and service is now backordered. Even if you could build the DC, the power connection won't be there.
The current oil crisis just makes all of that even worse.
The next level of layoffs is probably still 25 years out.
But all the economic indicators suggest those are "bad economy" layoffs dressed up as "AI" layoffs to keep the shareholders happy.
And that's without accounting for the various wars (and resultant economic impacts) that are already in progress. A large part of what drove the meat grinder of WWI was (very approximately) the various actors repeatedly misjudging the overall situation and being overly enthusiastic to try out their shiny new weapons systems. If one or more superpowers decide to have a showdown the only thing that might minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems. Even in that case as we know from WWII the logical outcome is a decimated economy and manufacturing sector regardless of anything else that might happen.
I think that just means the relative civilian loss of life will increase once again.
Russia is really an empire of the dumb and subjugated serfs at this point (again, history repeats), but it is far from the only such place.
Don't expect more; most people are not that nice when SHTF.
Bubbles like the AI bubble are a game-theoretic outcome of a revolution. Many players invest heavily to avoid losing, but the market as a whole over-invests. This leads to a bubble.
But right now, the difference in developer experience between a dev on a team at a business which has corporate copilot or Claude licenses and bosses encouraging them to maximize token usage, vs a solo dev experimenting once every few months with a consumer grade chat model is vast.
Meta seemingly has a constant stream of product managers. If LLMs really augment the productivity of engineers, why isn't Meta launching lots more stuff? I mean, there's no harm in at least launching one new thing.
What are all those people doing with the so called productivity enhancements?
What I'm calling into question is how much generating more code matters if the bottleneck is creativity/imagination for projects.
The only thing I’ve seen is a really crummy meta AI thing implemented within WhatsApp.
Only solution I can think of is to drastically cut headcount so productivity is back to prior levels, and profitability is raised. Big Tech is mostly market constrained with not much room to grow beyond the market itself growing.
As for startups, seems like AI tools have drastically reduced their time to market and accelerated their growth curves.
Hobbyist solo dev, counting tokens, hitting quotas, trying things on little projects, giving up and not seeing what the fuss is about.
vs
Corporate developer, increasingly held accountable by their boss for hitting metrics for token usage; being handed every new model as soon as it comes out; working with the tools every day on code changes that impact other developers on other teams all of whom have access to those same tools.
I might be missing a lot of self-evident assumptions here but I feel like I'm still missing so much context and have no idea what this difference is actually describing.
I'm talking more about why threads like this seem to be full of people saying 'this has completely changed how corporate development works' and other people saying 'I tried it a few times and I don't get the hype'
My impression has always been that it's more important to build the correct thing (what the customer needs/wants) than to build more stuff faster.
The process of learning what the customer needs/wants is a heavily iterative one, often involving throwing prototypes at them or betting at a solution, then course-correcting based on their reaction. Similarly, the process of building the correct thing is almost always an iterative approximation - correctness is something you discover and arrive at after research and prototypes and trying and getting it wrong.
All of that benefits from any of its steps being done faster - but it's up to the org/team whether they translate this speedup into quality or velocity. For example, if AI lets you knock out prototypes and hypothesis-testing scripts much faster, you can choose whether to finish earlier (and start work on the next thing sooner), or do more thorough research, test more hypotheses, and finish as normal, but with a better result.
(Well, at least theoretically. If you're under competitive pressure, the usual market dynamics will take the choice away, but that's another topic.)
That's just one set of costs, but a good starting point.
It's an absolute tornado of PRs these days. Everyone making the most of these tools is effectively an engineering team lead.
I’m making a team version of my buildermark.dev open source project and trying to learn about how teams would like to use it.
Backends handling tens to hundreds of thousands of messages per second with extremely high correctness and resilience requirements are necessarily taking a different approach to less critical services that power various ancillary sites/pages or to front end web apps.
That said there's a lot of very open discussion around tooling, "skills", MCP, etc., harnesses, and approaches and plenty of sharing and cross-pollination of techniques.
It would be great to find ways to better quantify the actual value add from LLMs and from the various ways of using them, but our experience so far is that the landscape in terms of both model capability and tooling is shifting so fast that that's quite hard to do.
It hardly seems worth it to try to iterate on design when they can just build a completely functional prototype themselves in a few hours. We're building APIs for internal users in preference to UIs, because they can build the UIs themselves and get exactly what they need for their specific use cases and then share it with whoever wants it.
We replaced an expensive, proprietary vendor product in a couple of weeks.
I have no delusions about the scale or complexity limits of these projects. They can help with large, complex systems but mostly at the margins: help with impact analysis, production support, test cases, code review. We generate a lot of code too but we're not vibe coding a new system of record and review standards have actually increased because refactoring is so much cheaper.
The fact is that ordinary businesses have a LOT of unmet demand for low stakes custom software. The ones that lean into this will not develop superpowers but I do think they will out-compete slow adopters and those companies will be forced to catch up in the next few years.
I develop presentations now by dumping a bunch of context in a folder with a template and telling Claude Cowork what I want (it does much better than the web version because of its Python and shell tools, and it can iterate, render, review, repeat until it's excellent). The copy is quite good, I rewrite less than a third of it, and the style and graphics are so much better than I could do myself in many hours.
No one likes reading a bunch of vibe-coded slop, and cultural norms about this are still evolving; but on balance it's worth it by far.
Main blockers are still product, legal, management ... which Claude Code didn't help with.
He did a writeup: https://buduroiu.com/blog/ai-lent-end/
Don't leave the kicker out of the story
https://en.wikipedia.org/wiki/Jevons_paradox
In the end only profit matters
We are definitely reaching the point where you need an LLM to deal with the onslaught of LLM-generated content, even if the humans are being judicious about editing everything. We're all just cranking on an inhumanly massive amount of output and it's frankly scary.
I presume I'm not the only one.
Barely an hour goes by without a new four-page document about something that everyone is apparently meant to read, digest and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.
With good management you will get great work faster.
The distinguishing feature between organisations competing in the AI era is process. AI can automate a lot of the work but the human side owns process. If it’s no good everything collapses. Functional companies become hyper functional while dysfunctional companies will collapse.
Bad ideas used to be warded off by workers who in some shape or form of malicious compliance just would slow down and redirect the work while advocating for better solutions.
That can’t happen as much anymore as your manager or CEO can vibe code stuff and throw it down the pipeline for the workers to fix.
If you have bad processes your company will die, or shrivel or stagnate at best. Companies with good process will beat you.
I just went and deleted it because it's completely broken at every edge case and half of the happy paths too.
This was possible before but someone would maybe notice the insane spaghetti. Now it's just "we'll fix it with another layer of noodles".
edit: LOL called it, a bunch of useless garbage that no one really cares about but used to justify corporate jobs programs.
Still useless in the sense that if you died tomorrow and your app was forgotten in a week, the world would still carry on. As it should. Utterly useless in pushing humanity forward, but completely competent at creating busy work that does not matter (much like 99% of CRUD apps and dashboards).
But sure yeah, the dashboard for your SMB is amazing.
Your rant just shows you don't understand why people pay for software.
I'd been fighting to make this for two years and kept getting told no. I got claude to make a PoC in a day, then got management support to continue for a couple weeks. It's super beneficial, and targets so many of our pain points that really bog us down.
Or, Excel > Data > Sort > by the Date column. No dashboard needed, no app needed.
If you are using an LLM to create an application that grabs data from heterogeneous sources, combines it and presents it, that is much better, but it could also basically be the Excel spreadsheet they are describing.
And what’s worse is that when someone does build a decent tool, you can’t help but be skeptical because of all the absolute slop that has come out. And everyone thinks their slop doesn’t stink, so you can’t take them at their word when they say it doesn’t. Even in this thread, how are you to know who is talking about building something useful vs something they think is useful?
A lot of people that have always wanted to be developers but didn’t have the skills are now empowered to go and build… things. But AI hasn’t equipped them with the skill of understanding if it actually makes sense to build a thing, or how to maintain it, or how to evolve it, or how to integrate it with other tools. And then they get upset when you tell them their tool isn’t the best thing since sliced bread. It’s exhausting, and I think we’ve yet to see the true consequences of the slop firehose.
I run a team and am spending my time/tokens on serious pain points.
This is in a real-time stateful system, not a system where I'd necessarily expect the exact same thing to happen every time. I just wanted to understand why it behaved differently because there wasn't any obvious reason, to me, why it would.
The explanation it came back with was pretty wild. It essentially boiled down to a module not being adequately initialized before it was used the first time and then it maintained its state from then on out. The narrative touched a lot of code, and the source references it provided did an excellent job of walking me through the narrative. I independently validated the explanation using some telemetry data that the LLM didn't have access to. It was correct. This would have taken me a very long time to work out by hand.
Edit: I have done this multiple times and have been blown away each time.
> The explanation it came back with was pretty wild. It essentially boiled down to a module not being adequately initialized before it was used the first time and then it maintained its state from then on out.
Even without knowing any of the variable values, that explanation doesn't sound wild at all to me. It sounds in fact entirely plausible, and very much like what I'd expect the right answer to sound like.
This is the difference between intentional and incidental friction: if your CI/CD pipeline is bad, it should be improved, not sidestepped. The first step in large projects is paving over the lower layer so that all the incidental friction, the kind AI can help with, is removed. If you are constantly going outside that paved area, sure, AI will help, but not with the success of the project, which is more contingent on the fact that you've failed to lay the groundwork correctly.
it's crazy that the experiences are still so wildly varying that we get people that use this strategy as a 'valid' gotcha.
AI works for the vast majority of nowhere-near-the-edge CS work -- you know, all the stuff the majority of people have to do every day.
I don't touch any kind of SQL manually anymore. I don't touch iptables or UFW. I don't touch polkit, dbus, or any other human-hostile IPC anymore. I don't write cron jobs, or system unit files. I query for documentation rather than slogging through a stupid web wiki or equivalent. a decent LLM model does it all with fairly easy 5-10 word prompts.
ever do real work with a mic and speech-to-text? It's 50x'd by LLM support. Gone are the days of saying "H T T P COLON FORWARD SLASH FORWARD SLASH W W W".
this isn't some untested frontier land anymore. People that embrace it find it really empowering except on the edges, and even those state-of-the-art edge people are using it to do the crap work.
This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
I'm not trying to marginalize your or anyone else's usage of AI. The reason people are saying "such as" is to gauge where the value lies. The US GDP is around $30T. Right now there is something like ~$12T reasonably involved in the current AI economy. That's massive company valuations and data center and infrastructure build-out, a lot of it underpinning and heavily influencing traditional sectors of the economy that run a real risk of going down the wrong path.
So the question isn't what AI can do; it can do a lot, and even very cheap models can handle most of what you have listed. The real question is what the cutting-edge, state-of-the-art models can do so much better that the value added justifies such a massive economic presence.
It's the same model as Uber, and I can't afford Uber most of the time anymore. It's become cost prohibitive just to take a short ride, but it used to cost like $7.
It's all fun and games until someone has to pay the bill, and these companies are losing many billions of dollars with no end in sight for the losses.
I doubt the tech and costs for the tech will improve fast enough to stop the flood of money going out, and I doubt people are going to want to pay what it really costs. That $200/month plan might not look so good when it's $2000/month, or more.
You can use "API-style" pricing on these providers, which is more transparent about costs. It's very likely to end up more than $200 a month, but the question is: are you going to see more than that in value?
For me, the answer is yes.
The "costs" are subsidized, it's a loss-leader.
> This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
People will stop asking for the proof when the dust-eating commences.
Claude is a tool. It can be abused, or used in a sloppy way. But it can also be used rigorously.
I've been beating my team to be more papercut-free in the tooling they develop and it's been rough mostly because of the velocity.
But overall it's a huge net positive.
I personally noticed this. The speed at which development was happening at one gig I had was impossible to keep up with without agentic development, and serious review wasn't really possible because there wasn't even time to learn the codebase. We had a huge stack of rules and MCPs to leverage that kinda kept things on the rails, and apps were coming out, but like, for why? It was like we were all just abandoning the idea of good code and caring about the user, and just trying to close tickets and keep management/the client happy. I'm not sure anyone anywhere on the line was measuring real-world outcomes. Apparently the client was thrilled.
It felt like... You know that story where two economists pass each other fifty bucks back and forth and in doing so skyrocket the local GDP? Felt like that.
Well, isn't that what AI can be used effectively for: to generate an [auto]response to the AI-generated content?
I guess you gotta look busy. But the stick will come when the shareholders look at the income statement and ask: so I see an increase in operating expenses. Let me go calculate the ROIC. Hm, it's lower, what to do? Oh I know, let's fire the people who caused this (it won't be the C-suite or management who takes the fall) lmao.
You could argue that all the spending is wasted (doubtless some is), but insisting that the decision is being made in complete ignorance of financial concerns reeks of that “everyone’s dumb but me” energy.
The real thing to look at is whether or not the future outlook for company AI spend is heading up or down?
Are they peeking over the shoulder of each team and individual? Of course not.
It can be the case that the spend is absolutely wasteful. Numbers don’t lie.
Oh, they were involved all right. They ran their analyses and realized that the increase in Acme Corp's share price from becoming "AI-enabled" will pay for the tokens several times over. For today. They plan to be retired before tomorrow.
Most firms are not a Google or a Microsoft; a firm's cash balance can become a strategic weapon in the right environment. So wasting money is not a great idea. Lest we forget dividends.
Moreover, if you have a budget set for spend on tokens, you have rationing. Therefore the firm should be trying to get the most out of token spend. If you are wasting tokens on stuff that doesn't create a financial benefit for the firm, then indeed it is not in line with proper corporate financial theory.
People who work at VC-backed firms do not get to enjoy the same degree of liquidity, not even close. There can be some outliers but that is 0.1% of all.
Can't believe simple stuff like this has to be said.
Round-tripping used to be regulated. SPVs used to be regulated. If you needed a loan you used to have to go to something called a bank; now it comes from ???? who knows: drug cartels, child traffickers, Blackstone, Russian & Chinese oligarchs. Even assuming it doesn't collapse tomorrow, why should they make double-digit returns on AI datacenters built on the backs of Americans?
> “Im convinced none of these people have any training in corporate finance. For if they did they'd realise they were wasting money.”
This isn’t meaningful criticism. This is a vacuous “those guys are so dumb”.
[waits for chickens to come home to roost]
After all (Grug Chief reminds us), the only truly secure computing system is an inert rock.
"We are writing down X billions over 4 years, and have cancelled several ambitious programs related to our AI experiments. We were following standard practice in the industry, so [shareholders] can't blame us for these chickens coming home to roost. If everyone is guilty, is anyone really guilty?"
> Security is less or no concern, bugs are more acceptable, performance / scalability rarely a concern. Quickest way to get things done
This is literally how the rest of the world works already, and always has. We'd still be living in caves otherwise. Fortunately most people (at least outside software) seem to understand that security is a trade-off against usefulness, and not an end goal in itself.
Even right now the difference with working with 'AI native' developers or with regular developers is day and night.
I certainly wouldn't want a non-clause enabled developer on my team now.
You only want to work with people who are hip with the North Pole?
I wonder what I’m doing differently.
I did spend quite a bit of time, mostly manually, improving development processes such that the agent could effectively check its work. This made the difference between the agent mostly not working and mostly working. Maybe if I had instead spent gobs of money it would have worked without the tooling improvements?
Haven't found a process that beats this yet and I burn very few tokens this way.
I like writing code, I’m good at writing code. What I hate doing is dredging through logs, filtering out test scenarios and putting together disparate information from knowledge silos - so I have the AI doing that. It’s my research assistant.
Effectively I’m using it like an automated search engine that indexes anything I want and refines the results by using the statistical near neighbors of how other people explained their searches.
It's now trivial to fix these problems while still doing our day jobs -- shipping a product.
This will have previously been too ambitious to ever scope but we’ve been able to build essentially all of it in just two months. Since it sits on top of our other systems and acts as more of a window/pass through control pane, the fact that it’s vibe coded poses little risk since we still have all the existing infrastructure under it if something goes awry.
it's trivial to reimplement a better solution.
Also, I am not sure it is trivial to reimplement. The code is injected into many scenarios and workflows, so replacement will be painful and risky if the new solution breaks some edge case.
It's better than the "here's my code, it's a giant pile of spaghetti but only luddites care about code quality and maintainability anyway" method, at least.
I've been using it to write tools that drastically facilitate spinning up local k8s cluster with an entire suite of development services that used to take two days to set up in Docker.
Coding velocity doesn't matter if the net result is software that sucks massive schlong. The real world doesn't care if programmers can write code faster.
My hypothesis is that companies don't want to offer cheaper or better services. They only want to cut costs and keep the revenue for investors.
In other news, TQQQ is pretty high!
Where I work, the power dynamics have shifted wildly. There are a number of senior engineers who refuse to touch the stuff, and as a result, they can barely keep up with their peers. Some of our juniors are now running laps around them.
When a stranger to your craft can now teach themselves what you know, how to do your job, and even how to automate your tasks in the span of the same workday as you, all while reliably being able to gauge the inaccuracy of the output they're reading, how much longer do you really hold relevance?
Are the juniors increasing economic productivity or just pushing lines of code?
</retired from being measured against a random number generator>
And also because the Plan agent generates a huge plan, asks me a couple yes/no questions with an obvious answer, and then regenerates the entire plan again. Then the Build agent gets confused anyway and does something else, and I have to round-trip about 5 times with that full context each time.
I'm at least 5x faster, if not more. With tooling I might be able to get to 10-15x.
But yeah, it's not gonna make Facebook 20% better tomorrow; it's just that you need 5 people instead of 40 to build the next Facebook.
That "more expensive" is someone's revenue. Maybe AI is the kind of technology that lets you make more revenue by making things more expensive and worse than by making them better and cheaper.
And yet.. building shit is no longer the sole domain of the software engineer.
That's the sea change.
I've literally had finance and GTM stand things up for themselves in the last few weeks. A few tweaks (obviously around security and access), and they are good to go.
They've gone from wrangling spreadsheets to smooth automated workflows that allow them to work at a higher level in a matter of months.
That's what all this AI is doing. The shit we could never get the time to get around to doing.
The only thing that matters is the impact on the financials. The shareholders (the people who employ you) don't care about any of this if it does not enhance value.
Another project I'm seeing in the same realm is taking an approved protocol and some study results and checking that the records of what was done match what they said they could do in the approved protocol. It can also make sure that surgical records have all the things they should have. This can help meet one of the requirements from the national accreditation organization to do "post approval monitoring".
Another way I've used it is to have it collate and compare a particular kind of policy across many institutions who transparently put their policies online. Seeing the commonality between the policies and where some excel helped me rewrite our policy.
This is work that just wasn't happening before or, more accurately, it was being spread over lots of people, and any improvement in efficiency or consistency is hard to measure.
Given the fact that both Altman and Amodei are pathological liars, there's absolutely no reason to believe that Anthropic has $30B ARR.
Can you explain how that’d work? What would the $30B figure be based on if they only have $100 in revenue?
(Run Rate = Revenue in Period / # of Days in Period x 365)
It's a forecast.
(That said, their numbers are much realer than that.)
That said, most people would use a monthly or quarterly period to estimate ARR. I'm not sure what Anthropic used. Probably monthly.
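For concreteness, the run-rate formula above is easy to sketch; the $2.5B monthly figure below is a hypothetical illustration, not Anthropic's actual revenue.

```python
def annualized_run_rate(revenue_in_period: float, days_in_period: float) -> float:
    """Run Rate = Revenue in Period / # of Days in Period x 365."""
    return revenue_in_period / days_in_period * 365

# A hypothetical $2.5B of revenue in a 30-day month annualizes to about $30.4B.
monthly = annualized_run_rate(2.5e9, 30)
print(f"${monthly / 1e9:.1f}B")  # prints "$30.4B"
```

The point of contention upthread follows directly: the formula extrapolates one period's revenue forward a full year, so it is a forecast, and the choice of period (monthly vs quarterly) can swing the headline number.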
(I would then argue that he was re-hired specifically because others involved with OpenAI understood that it is literally his job to lie and that OpenAI would not be where it is today as a corporate behemoth rather than a research non-profit without a world-class liar marketing it, but that is merely conjecture.)
I agree about the core motivation behind these deals, however I'm skeptical as to how "suddenly" we'll see substantial improvements. Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.
They're already over-subscribed and waiting for new data centers (and power plants) to come online. I suspect Anthropic will get a modest amount of new capacity right away with more added over coming quarters. These two deals don't change the total amount of AI compute available on planet Earth over the next 18 months. Anthropic parting with high-value equity has now made them the new highest bidder for an already over-bid resource. I suspect the net impact will be Amazon & Google pushing prices even higher on everyone else as they reallocate compute to their new top whale.
I doubt it was idle capacity. But for a chunk of equity in Anthropic I imagine they are willing to deprioritize other, possibly internal, uses. Certainly anything that's not contractually obligated could be on the chopping block.
AI is in such desperate need to adopt software-hardware co-development practices, it's infuriating watching the industry drag its feet about it. We are wasting so much electricity and absolutely wrecking the "free" market just because these companies are incentivized to work at an unsustainable breakneck speed in getting shit to market.
Is that not down to this? https://www.anthropic.com/engineering/april-23-postmortem
But all progress points to a commodification of foundation models--Google first named it as "we have no moat, neither does anyone else." So there must be some secondary play driving this, right? Hardware sales? Hedging for search ad revenue?
Still feels mispriced. I think asset inflation leaves too much money desperate for the Next Big Thing.
No doubt as of currently Google has a better business. But the same argument could have been said about Instagram or Whatsapp before Facebook (now Meta) acquired them.
Although I doubt this will stop them if they think it’s advantageous…
US law here is nuanced. Good quick primer https://www.ftc.gov/advice-guidance/competition-guidance/gui...
As long as it furthers American interests globally, monopoly is fine. Other countries need to take notice and start picking winners nationally in order to compete with the large American big tech firms.
ed: @er2d, can't reply to your comment for some reason, so doing it here: I don't agree. In theory a monopoly decreases the necessity for R&D. Of course this becomes more complex if the R&D is funded or steered by the state. But look at the current state of LLMs. There is fierce competition between 3 US companies. But geopolitically it's the same as if there would be one monopoly. The US being the clear technological leader in an industry is not dependent on that industry being a domestic monopoly.
And for the Europe comment: Also don't agree. Look at Boeing & Airbus. Both are companies where the US & EU have decided that they need to ensure the existence of a domestic airplane manufacturer. So in these cases they support these companies (often in violation of international trade laws). But it has nothing to do with monopolies. If a state decides to support a company to ensure its existence, a monopoly is the logical consequence and not the aim. Because if that industry would be profitable it wouldn't need to be supported in the first place.
But all these tech companies are not in industries that would move off-shore or stop existing because they're not profitable enough, so it's an entirely different setting.
The US understands that and allows it to happen as the former yields a compounding effect of power.
European states certainly don't get this.
Airbus ?
lol
Now, that’s a name I haven’t heard in a long time.
Couldn't this just be framed/spun as just using search data for training? I don't see it being bundled enough to run afoul of antitrust.
Running at a loss long enough to kill the competition is basically the name of the game these days.
When Uber started, they were basically setting VC money on fire by selling rides at a loss to destroy the taxi market.
Buwahahahahahahahhahah
They drop a little cash on some shitcoin the president controls and those problems go away.
This is why SpaceX could be a dark horse in this race. Putting compute in space is expensive but so is building a data center in the US.
You know what's also really hard in a vacuum? Dissipating heat.
Correct. The economics of space-based DCs comes down to permitting delays versus radiator mass.
At ISS-class radiator specific mass (12 to 15 kg/kW), you need almost decade-long delays on the ground (or 10+ percent interest rates) to make lifting worthwhile. Get down to the current state of the art in the 5 to 10 kg/kW range, however, and you only need permitting delays of 2 to 3 years.
If there is a game-changing start-up waiting to be built, it's in someone commercialising a better vacuum-rated radiator.
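The trade-off described above can be sketched numerically. Only the kg/kW ranges and the 10% rate come from the comment; the launch cost, power budget, and ground capex below are hypothetical placeholders for illustration.

```python
def extra_launch_cost(power_kw: float, radiator_kg_per_kw: float,
                      launch_cost_per_kg: float) -> float:
    """Cost of lifting the radiators needed for a given compute power budget."""
    return power_kw * radiator_kg_per_kw * launch_cost_per_kg

def delay_cost(capex: float, annual_rate: float, delay_years: float) -> float:
    """Opportunity cost of capital tied up while waiting on ground permits."""
    return capex * ((1 + annual_rate) ** delay_years - 1)

POWER_KW = 100_000   # hypothetical 100 MW orbital data center
LAUNCH = 1_500       # $/kg, an assumed future launch price
CAPEX = 5e9          # hypothetical ground build cost
RATE = 0.10          # 10% cost of capital, per the comment

iss_class = extra_launch_cost(POWER_KW, 13.5, LAUNCH)  # midpoint of 12-15 kg/kW
modern = extra_launch_cost(POWER_KW, 7.5, LAUNCH)      # midpoint of 5-10 kg/kW
print(f"ISS-class radiators: ${iss_class / 1e9:.2f}B to lift")
print(f"Modern radiators:    ${modern / 1e9:.2f}B to lift")
print(f"3-year permit delay: ${delay_cost(CAPEX, RATE, 3) / 1e9:.2f}B forgone")
```

Under these made-up inputs, halving radiator specific mass cuts roughly a billion dollars off the lift bill, which is the same order as a couple of years of permitting delay — illustrating why the breakeven is so sensitive to radiator mass.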
Saudi will host the biggest data centers in the world
I really couldn't have been more obscure, could I? :P
In 1932, "the first oil field in the Persian Gulf outside of Iran" was discovered in Bahrain [1]. (The same year Saudi Arabia announced unification [2].)
In the end, Saudi Arabia had larger reserves and wound up geopolitically dominating its first-moving rival. In commodities, the game tends to be about scale, in part through land grabbing, less about who got where first.
To close the analogy, if AI does wind up commoditised, the layers at which that commodity is held are probably between power and compute [3]. So if AI commoditises (commodifies?), Google selling compute (and indirectly power) to Anthropic and OpenAI is the smarter play than trying to advantage Gemini. (If AI doesn't commoditise, the opposite may be true: Google is supercharging a competitor.)
[1] https://en.wikipedia.org/wiki/Bahrain_Petroleum_Company
[2] https://en.wikipedia.org/wiki/Proclamation_of_the_Kingdom_of...
[3] The alternate hypothesis is it's at distribution.
Source? That would be surprising!
As a user and a consumer, I don't want them to have a moat. Moat means pseudo-monopoly. That is the exact opposite of what we want.
Only the investors and owners want a moat, to keep others out.
So what they're doing? They're competing. Good.
Because they are investors, VCs, or startup founders who hope to establish their own moats.
Users and consumers can get a lot of useful information from HN, but it's important to keep the local demographics in mind.
> In September 2025, Google is in talks with several "neoclouds," including Crusoe and CoreWeave, about deploying TPU in their datacenter. In November 2025, Meta is in talks with Google to deploy TPUs in its AI datacenters.
The integration of LLMs with tools and data via agent harnesses has created the opportunity for a real moat. As these products start differentiating, the moats will develop to be significant.
Also, those personalities, quirks, and choices accumulate. A lot of people talk about using Claude Code and Codex for different things. This is 100% my experience. Some people make better models, but among the top 3 there are often differences that can only be worked around by switching between them. If I feel the need to switch between them, then the differences are significant enough, and those differences will accumulate.
Or, more controversially, take the EU Green Deal, which decimated the EU car industry and has lost or will lose us a few million jobs. Losses up to a trillion, and nothing to show for it.
This money could be invested in universal healthcare, or into AI research for medicine. But hey, I guess replacing developers and generating slop is more beneficial to our society.
The software will only improve for so long before it hits a wall. The best models were just a proxy for early mainstream market adoption, keeping your head above the water … plus some useful marketing hype about longshots for developing something bigger than LLMs (“AGI”).
People who work in tech are biased to obsess about the technical side and short term uptime/performance outrage. Despite that being mostly just standard immature market issues.
Anthropic (all ex-OpenAI) knew the negatives of the deal, so they made a slightly better deal with AWS, not a full lock-in. They also grounded it in hardware from the start, i.e. being the flagship customer for Trainium and the flagship customer for external usage of TPUs.
Domain knowledge and expertise are a big thing in tech, because code can be written fast if you know what to do, so with that expertise, building a frontier model is a matter of time and capex. Anthropic was founded by top ex-OpenAI people, so they are not lacking in expertise, and are not attached to Sam Altman. It's an easy choice of who to finance.
Anthropic will win long term because big tech knows how much of a loudmouth Sam is and how much of the pie he wants; he is more of a rival than some company they could use to grow. Anthropic (even though they aren't really good guys) seems more like a shared common good for big tech than OpenAI does, like the Linux of corporate business deals.
If anything you ought to expect them to be behind, since they took the position of making all the mistakes first so others (who already had the same or better tech) didn’t have to.
I think that’s underselling their contribution, which I believe is mainly: it’s possible, and this is what it looks like as a product. Until then, nobody had figured out how to shape it as a product, and ChatGPT showed how to do that. Don’t forget that for a year or two they kept making headlines all the time with DALL-E and whatnot.
For me it seems like what happened after that is where the lack of focus started to hurt them: they realized that models themselves will be a commodity and have no moat, and that they needed to somehow build a network or something to keep pulling people back in. Sora was one such attempt, and it failed hard.
To me, enterprise / B2B seems like a much easier, obvious market to approach, but I don’t know a lot about B2C. But it seems like B2C was what OpenAI was going after.
So from that point of view you can indeed look at it as the entire value of the economy should be invested into AI companies.
The question is when will we get there.
If the answer is tomorrow, money means nothing and none of these investments matter. If the answer is 30 years, well lots of money to be made up until the inflection point of machines being able to design, build, and repair themselves.
Meanwhile people are still begging car manufacturers to stop locking their glove box behind a touch screen. Or how about a TV that isn't loaded with crappy software that makes it unusable after 2 years. There's a reason we don't put tech in everything.
https://en.wikipedia.org/wiki/Panic_of_1873#Factors
"In the United States, the panic was known as the "Great Depression" until the events of 1929 and the early 1930s set a new standard.[2]"
What are you counting in this category?
My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.
How much of that 60K does Ford actually keep? And how much will it be once BYD is allowed in the US? The forecast for Ford is pretty much only downwards, the possible upside on AI is huge.
If every company in the F500 starts spending $2000+ on AI credits per employee, then every consumer product will indirectly be funding AI companies. I think it's already the case that companies small enough to avoid/skip getting O365 or Google Suite subscriptions will pay for AI first.
AI company revenues aren't driven by consumer subscriptions.
The people doing $20 or even $200 per month plans for their side projects aren't driving the demand. It's going to be business customers spending $1000/month or more per developer and all of the companies feeding their business processes through the API like call centers, document processing, and everything else.
If you're thinking of AI companies as consumer plays you're only seeing the tip of the iceberg. We get cheap access to Claude because they want us playing with it so when it comes time for our employers to choose something we can all lobby for Anthropic.
They should stop messing with us then. Stealth model changes, threatening to take code away on the $20 plan, the list goes on.
Now count the Amazon deliveries in a year on said same street. And next year, and the year after, and.. however long one keeps a Ford these days..
It's quite a scary thought exercise.
Amazon makes 800 dollars off of each person in revenue.
Ford makes $303 per person in revenue.
AWS makes the same.
AI spend for all platforms $450 per person
Their costs to produce aren't equal.
How many businesses are paying Ford $10 million per annum?
Compute costs keep collapsing. Image and audio generation turned out to be less compute-intensive than text (lol).
First company to launch 24/7 customized streaming AI slop wins!
$1k for a lot of developers here is totally worth it.
The amount of new revenue that I am personally able to create for my clients, using Claude models for dev, and Claude models inside the insanely agile products delivered, is astounding.
If I was not currently experiencing this myself, and someone told me that this was possible, I would be calling them names.
If we get to an end-state of monopoly/duopoly at this game, then we are truly screwed.
I was just stating my current use and revenue path. Anthropic has insane velocity, in April of 2026.
I think Deepseek is already there.
Energy will get fully solved eventually. To think otherwise is to bet against humanities ability to innovate, which I don't think is ever a wise bet.
I just ran a quick GPT check: EC2 prices have gone down by more than 80% after accounting for performance and inflation over the last 20 years.
The math is pretty simple, and it's easy to justify still paying the price even if it goes up 10-fold; compared to hiring more people, it's still cheap.
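As a sanity check on that figure (taking the GPT-sourced 80%-over-20-years claim at face value), the implied annual rate of decline works out like this:

```python
# Convert a cumulative price decline into an implied annual rate.
# The 80%/20-year figure comes from the comment above and is itself unverified.
decline_total = 0.80
years = 20
annual_rate = 1 - (1 - decline_total) ** (1 / years)
print(f"Implied annual price decline: {annual_rate:.1%}")  # about 7.7% per year
```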
So I guess having multiple players and competition in the market is the key?
Chinese models like Deepseek v4 are as good and 10 times cheaper. You can even run Deepseek locally. So no, cheap AI won't be over. Just the US investors won't be able to profit off of the artificial bubble that is there now but won't be in the future.
100% agree. I have been trying to tell everyone to build their ideas, and exploit this environment where 100B of VC money into OpenAI/Anthropic = some percentage of money invested into your idea. This is the golden era of building! The music is gonna stop soon. Build now ffs!
It is likely that 99% of the value created by Anthropic / OpenAI / friends will go to the end user. Which is great news.
It's like insane hype marketing speak. "insanely agile products delivered" like huh?
I believe that I am more of an AI realist. The agentic dev tools are really helping me out, but if I could wave a magic wand to make AI go away for a hundred years, I would do it.
I really hope that we can all laugh at how wrong I was.
However, I believe that the horrors will likely outweigh the benefits. Our global society/political systems are not ready for Stasi as a Service, mass unemployment, or any of this impending crap storm.
Who could call me a starry-eyed idealist? I have invested in bunkers.
I hate money.
You know what I hate even more? Being the supposed "smart one," and having to borrow money from my entire family to get through my health issues.
I will never do that again, hopefully.
Like ex-developer turned PM who is now vibe coding everything they can and thinks it's the greatest thing ever.
To the GP: I'd like some details of these "insanely agile products". Is this insane agility reflected by your customers saying that they have a better, faster, more reliable product? How are you measuring this?
I get that it's tedious to sit on tech forums listening to an endless stream of people insisting that suchandsuch technology is world-changing. Many people and probably most people who say that are wrong. But sometimes the world really does change.
It's tedious because the insistence doesn't seem to be matched by much observable change.
If software development speed has doubled, then we should be seeing not just an increase in apps being released, but an increase in product output from the big players too.
However, Anthropic can and will charge much more for enterprise customers.
If you’re using it for personal work, why is $100 worth it?
I pay for my own AI provider subscriptions because keeping work and personal strictly separated is important for me. I do know some people who secretly pay $200/month for Claude and use it at their job even though it's not approved. I do not recommend doing that, but it shows that some people value this for their work.
For developers earning more than $10K per month, spending less than 1% of salary on tooling to make the job easier is easy to justify.
It took about two weeks of really running it through its paces, and constantly slamming against the limit on it to convince me I had to upgrade to at least the 100/month sub, and at this point I wouldn’t blink to bump that to the 200/month if necessary.
I 100% believe we’re in a bubble, and that this level of compute isn’t sustainable at this price point, but for as long as I have it, I’m going to run it at the redline.
I’m a solo dev working on a project that I’ve just gone full-time on, after about 1.5 years of part time work. It’s a codebase that I laid the groundwork in, and has very well established systems, standards, and constraints.
The work I’m using Claude to do is the exact work I would be doing myself, but it does it at somewhere in the neighborhood of 5-10x the pace I could have. I don’t know that I could get the same rate of production if I managed a team of 2-3 programmers. Right now, it’s literally almost perfect at taking my iterative suggestions, and implementing them at that accelerated pace.
Honestly the hardest part is dealing with the fact that at the end of the day, I have to understand this codebase perfectly (solo dev and all that), so I have to take in changes to it that are also 5-10x the rate my normal intuition would. But, again, the plus side is that it’s implementing them essentially exactly as I would have, as it has ~20k lines of code that I wrote to use as an example.
If I were to hire even one other programmer, I’d be paying well north of 5k/month, and I’d not only be managing a super computer programmer tool, but an actual human being as well. $100/month might as well be free comparatively.
Doesn’t make any sense.
I'm not who you were replying to, but:
My work pays for $100/mo Claude, I pay another $100 to bring it up to $200/mo level because:
- Partly: I got in the habit back when work was only paying $20 and I was paying the $180.
- It is not worth it to me to spend braincells trying to optimize my use to fit into the $100 plan; I give everything "Opus, effort max", and with the $200/mo plan I never run out (on $100 I'd run out mid-morning).
- I run a *lot* of experiments, including work-related and personal, to try to understand and improve my AI use skills.
- I also use it for a lot of personal things, right now I'm using it to help me plan a backyard studio and ADU.
"ccusage" the past month says $1017.edit: Formatting, ccusage
You'll notice that all the really big deals have fallen through, because they're based on promises and meeting objectives that can't be met. So it's likely that there will be really big writeoffs but not a huge implosion like 2001/2008. The real losers will be the retail investors who put all their money in a handful of stocks at ridiculous valuations.
"Disney cancels $1B deal with OpenAI after video platform Sora is shut down: 'The future is human'" https://finance.yahoo.com/sectors/technology/articles/disney...
And if I recall correctly, the AI datacenter deal isn't doing Oracle stock any favours.
We need to run a SotA coding agent basically 24/7 uninterrupted and so far we didn’t find an easy solution for this (you can get provisioned TPUs for Gemini on GCP but it costs a fortune).
Surely that’s possible for under $5k a month? $10k?
Why should anyone feed the SV AI bubble if they can just use cheap Chinese models, even locally if they want to...
Anthropic is the anchor external customer for TPUs, and Nvidia is worth more than all of Google. If TPUs actually break out as a viable alternative for multiple clients over the next few years, the business could easily be worth as much as Search, maybe more.
Why haven't they broken out yet, I wonder, if they're more efficient for inference and LLM costs are now weighted towards inference over training?
But as far as I know it currently supports just that + TensorFlow (which nobody uses anymore, at least here). And last we tried, so many of our kernels needed rework that it wasn't worth the effort.
This may change since ironwood but we haven’t tried that generation.
Microsoft is in the same boat with Azure.
If only Apple could pass the favor forward. But no, they can't be bothered to invest even a single million in Asahi Linux to benefit their own hardware.
The tech is great but valuations are out of control. It's cheaper to keep valuations high through these circular financing deals, rather than to allow for any deflation.
Example: them running an A/B test where they removed Claude CLI from the $20 Pro plan... they rolled it back now. Or rate limits where they publicly double your quota at non-peak times but quietly lower it during peak. These are tacky and signs of panic.
One such issue is experimentation. But when you see back to back issues, it looks odd.
What's the explanation behind this? I am sure they use AI in their ad network (matching web sites with ad offerings, maybe generating ads automatically), but is there more to it?
Still rooting for AMD to catch up too, especially if they can continue improving their software stack. They seem to be moving in the right direction.. though, they could benefit from speeding up a bit more.
Google now has its fingers in all the pies: it is successfully fully vertically integrated and now expanding horizontally.
And it may very well be bad news for OpenAI.
including the option to acquire Anthropic.
Not possible anymore unless Anthropic collapses and goes into a multi-year decline. They're worth $1 trillion in the private market. If they IPO today, I'm willing to bet my house that the hype will drive them to a $2 trillion market cap, or 50% of Google's market cap.
OpenAI and Anthropic will be the biggest IPOs ever - bigger than SpaceX. That's my prediction.
I have feeling that Dario is not the type of man who would want to be acquired and then have Google's CEO telling him what to do.
OpenAI crashing would be good news and bad news for Anthropic investors.
The drama on HN alone would last for days. Twitter would implode in on itself.
It's more understanding for Amazon or Microsoft to make such an investment, because they're not as competitive in the model space.
Google buys Anthropic.
Microsoft buys Open AI (or vice versa depending on how things go).
SpaceGrok buys Cursor, limps along in 3rd place.
Meta is the last man standing, gets stuck with Oracle, dies.
And then hopefully some open source models save us from this nightmare before China commoditises everything. Edit: I forgot Amazon. Who knows what they will do. They're the wildcard anyway.
Anything to invigorate the desktop.
Microsoft buying OpenAI.. 10 minutes later it's rebranded Copilot.. and.. nothing much changes in the world. Oh, except all the AI improvements are around Enterprise governance.
This same sentiment is there within Deepmind, except they have more power it seems. Perhaps Google is hedging their bet?
Best non-X link I could find: https://benzatine.com/news-room/internal-strife-at-google-th...
Why the euphemism? What Anthropic did was an aggressive degradation of their model to save compute, and it's not just “perceived downtrend”, Anthropic themselves have acknowledged the quality of service degradation.
Great position to be in if you're Amazon and Google
I assume Anthropic said something like "We'll give you 3% of our company for $30B, since we're valued at $1T now! So cheap!", and Google immediately came back with "Hell no. We'll give you even more, $40B... but it's for 11% of the company. Take it or leave it." With all the issues they're having, what leverage does Anthropic have at that point?
Basically, Google made them an offer they couldn't refuse.
(If anthropic didn't exist, ØpenAI would suck up all the capital and talent in the room. Anthropic's existence has helped divide capital+talent that'd otherwise be gobbled up by the single fastest growing player.)
~ TK
For example, you can buy KLM Air france for less than $3B.
It is a profitable business that does $30B in sales and $1B in profit (and has been profitable for the past 4-5 years).
[PDF] https://www.airfranceklm.com/sites/default/files/2026-02/202...
First, become a billionaire. Then, start an airline.
This margin seems terrible.
That said, certain sectors like software (as in custom enterprise-grade software dev) pull margins that are much, much higher, sitting around 35%, but that's not common.
I don’t know what to make of it
"Attention Is All You Need" was a very very different thing and I also wonder if they are glad they published it. But I imagine if they hadn't, the motivation for researchers to leave Google would have been even larger.
Jeff Dean is asked this question by Geoffrey Hinton at 37:35 - might worth watching. Overall an interesting video.
Didn't Amazon AWS do the same recently?
And with cashback through gcp usage!
If it runs out of cash, then it's bad for the whole industry.
Same as OpenAI. So all players will provide cash and compute to keep them going.
Why? I don’t think we would suffer if anthropic disappeared tomorrow
How much of this goes back to Google as cloud spend?
Not sure if it’s going to be good enough to replace IDEs with neatly integrated superior models.
this is insane. on the secondary market the valuation is 2-3x that. what gives?
Google's deal from prior rounds likely lets them buy in at the same valuation other investors get every round, so they're just getting the February valuation.
Amazon did almost the same thing last week, at the same valuation.
If you gave Anthropic $10B cash, they couldn't get chips at scale in the 0-6 month timeframe. Anthropic is suffering reputational damage due to choices they have to make around capacity constraints.
Google, AWS, and Azure are the only people who can help them so they hold the cards, thus the good terms.
It is not uncommon to keep a round open after the formal announcement for a bit so that few investors who could not close for whatever reason are part of it. It can be hard to line up everyone at the same time, especially when they are public companies.
---
Specific to your point on why the valuation can be lower than the market at the same time: goods (and stocks), while they feel homogeneous, divisible, and fungible, are not. Size can have a value of its own.
A block of 10% of the shares may be worth more (or less) than the unit share price, because being available together is a property of its own, making the block either more desirable when someone wants to acquire, or harder to sell because there is not enough demand if all of them get dumped at the same time [1].
In this deal's terms, just because a few tens of millions are trading at an $850B valuation, or some investors can put in, say, $1-2B, doesn't mean you can raise $40B at the same valuation.
There isn't depth in the market to raise $65B (including the AMZN deal) at an $850B valuation. There is always some demand at any price point on the demand-supply curve; you will probably find a few people who will buy a few shares at $10T, or $100T, or some ridiculous number, but that doesn't mean you can raise a large round at that valuation.
Strictly speaking it is not even $350B per se, i.e. Google and AWS benefit from this as vendors. It is very much like vendor financing with convertible debt. Meaning it is worth that much to them, but not to you and me, because we are not getting some of the money back as sales that boost our own stock.
---
[1] In the same vein, price can also depend on what you are getting in return, hard immediate dollars is the highest value. However if you are getting shares in return, you can usually negotiate a premium depending on risk of the shares you are getting.
The recent SpaceX-Cursor deal is a good example: any founder would likely take, say, a $10B all-cash offer over the $60B from SpaceX, or the price would be closer to cash if it were GOOG, AMZN, or AAPL shares instead (a proven, deeply liquid market, etc.).
Correct. But I think $5 to 10bn are sitting ready at a $700 to 800B valuation, which strongly implies Google is getting a solid deal on this.
Google may reckon they can't (yet) reconcile their vision of Gemini with the raw coding performance of Claude and Codex.
There have been far too many "plans" and "commitments" and an awful lot of nothing actually happening.
I am still upset at these companies for driving up RAM prices. The "free market" has evident problems: companies are way too dominant here. The average Joe suffers from this price mafia, assuming he or she needs to purchase RAM now.
> My main job isn't writing code but I try to keep Claude Code and OpenCode busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities
I’ve seen many people say this in the past few weeks, i.e. that their daily job is no longer coding and has flipped to being a full-time Claude feeder, making sure it's always churning.
As someone who uses Claude Code daily, I still find myself reading code and thinking more vs just shoveling coal as fast as I can into the Claude steam train. Am I doing things wrong?
It's just amazing that people talk about Anthropic and have never used it.
Nah, see Meta
I don’t think that’s the ultimate cause of the turnaround in fortunes. But it strikes me, at least from the investor and potentially urban-consumer perspectives, as a pivotal moment in both companies’ fortunes.
Anthropic's recent rise has little to nothing to do with retail subscribers; it is Claude Code with Opus 4.5+, followed by their Mythos stunt.
I would say the flood of $20 Claude subscribers due to the news cycle backfired on them: now everyone is getting worse outputs, and it exposed their shortage of compute, which they can't fix anytime soon.
Pretty much everyone I know has both cc and codex now, just because how unreliable cc has become.
This is a good hypothesis. I suspect we are both correct.
The PR boost from Anthropic standing its ground drove signups. That, in turn, drove investors. But the users also drove utilization, which degraded quality across the board.
My hypothesis rests on Anthropic’s user mix having significantly shifted to consumers (versus enterprise) after the mix-up. Whenever we get public numbers it would be interesting to test that.
I think it was psychological to a degree. For many consumers OpenAI, or at least ChatGPT was AI. The controversy was enough for folks to be introduced to competitors in the AI space and suddenly OpenAI's success felt a lot less inevitable.
I agree with OP though that this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point.
This is true. OpenAI WAS the story of AI, now it is just 50% of it, at max. Losing the monopoly of imagination towards AGI is bad for them.
One thing I don't agree though, consumers aren't the important part of AI, they are a liability.
AI is too expensive, consumers can't pay for it. Instead they will compete with enterprise for the same tokens, with less money.
This is my suspicion. Consumers hadn’t previously heard of Anthropic and Claude. Now they had, particularly in cities.
> this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point
Also agree. Hence why I said “I don’t think” the fight is “the ultimate cause.”
Of course this is part of what has lead to such insane demand and outages they've experienced since then.
"Stunt", eh?
Sure. Neither OpenAI or Anthropic do. Amazon and Google have followed institutional investors bidding up Anthropic over OpenAI in private markets, all of which—I suspect—followed user-pattern shifts following the fiasco. (Well, fiascos. Altman is a host unto himself.)
Opposite of what you said. The "dig" was not retrenching to more use, but rather I evaluated what I saw them doing and have migrated our company to much better options.
Individually, yes. But Anthropic surging in private markets the weekend after the supply-chain risk designation, and raising from not only Google but also Amazon in such short order (following credible reports of it turning down $800+ billion valuation cheques from financial investors), all while OpenAI gets pilloried in the press and struggles to hold its $800bn valuation in private markets, collectively paints a bigger picture, to me.
I always wondered why Anthropic was not out there feverishly scrambling to procure compute like the other big players. While Altman was being laughed at as a "podcasting bro asking for trillions in investment" Dario was on Dwarkesh expounding on how tricky it is to predict the demand for capacity. Now Dario has to give equity to a competitor to get compute. (OpenAI does this too, of course, but I suspect the terms are much better.)
At this point, it's pretty clear that compute is the only moat in this business. Even as an outsider, the extreme demand curves and compute crunch were painfully obvious, so this seems like a serious strategic error on Dario's part.
lol, he's barely done anything, but sometimes that is all that's necessary when a bozo opponent is hell-bent on screwing things up. He didn't get fired the first time for no reason.
A former chess instructor told me most games are won not by brilliant maneuvers, but by not screwing up. Repeatedly making the boring play is a winning strategy far more often than any mastermind play.
Wat?
I guess to address the point, having a problem with Hegseth isn’t the same as having a problem with Trump. And given some of Trump’s administration is embracing e.g. Mythos, it seems unfair to characterize Dario v. Hegseth as anything broader.
There was a recent moment when OpenAI went from the uncontested darling of consumer and investing America, to being second place to Anthropic. It happened rapidly, and I saw it at least on the investor side in the weekend after the supply-chain risk designation. (Disclosure: that’s also the week I signed up for Claude, in part out of protest, but mostly to see what the fuss was about.) I think there is a lesson for anyone working with startups or in tech from this example—it may be one of the most violent strategic sea changes I’ve seen in a while.
I really like HN's system of flagging versus banning. Like, I genuinely mapped TDS to Trump Derangement Syndrome, something I wasn't doing before because I thought it was a joke versus something his supporters thought of seriously.
I wouldn't call it TDS, but it does suggest a severe political blind spot.
It’s concerning that the only thing that seems to be keeping the AI bubble inflated at this point is money from the folks selling things to AI companies. That’s very much not a good sign no matter how you spin it.
I’m a fan of AI and there’s clearly value to it… however that value seems completely out of whack with the money pumping into the ecosystem and at some point such irrational behaviors break.