If AI is here to stay, as a thing that permanently increases productivity, then AI buying up all the electricians and network engineers is a (correct) signal. People will take courses in those things and try to get a piece of the winnings. Same with the memory chips they are gobbling up: it just tells everyone where to make a living.
If it's a flash in the pan, and it turns out to be empty promises, then all those people are wasting their time.
What we really want to ask ourselves is whether our economy is set up to mostly get things right, or whether it is wastefully searching.
Plus, it makes a natural moat against masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, the means of production, communication and such things.
I'd put intelligence in quotes there, but it doesn't detract from the point.
It is astounding to me how willfully ignorant people are being about the massive aggregation of power that's going on here. In retrospect, I don't think they're ignorant, they just haven't had to think about it much in the past. But this is a real problem with very real consequences. Sovereignty must occasionally be asserted, or someone will infringe upon it.
That's exactly what's happening here.
The current generation of AI is an opportunity for quick gains that go beyond just a few months longer lifespan or a 2% higher average grade. It is an unrealised and maybe unrealistic opportunity, but it's not just greed and lust for power that pushes people to invest, it's hope that this time the next big thing will make a real difference. It's not the same as investing more in schools because it's far less certain but also has a far higher alleged upside.
"Marginal cost barrier" hit, then?
I don't think there's a way to solve this issue: one-shotted apps will look increasingly convincing, in the same way that image generation looks increasingly convincing. But when you peel back the curtain, the output isn't quite correct enough to deploy to production. You could try brute-force vibe iterating until it's exactly what you wanted, but that rarely works for anything that isn't a CRUD app.
Ask any of the image generators to build you a sprite sheet for a 2D character with multiple animation frames. I have never gotten one to do this successfully in one prompt. Sometimes the background will be the checkerboard PNG transparency layer, except the checkers aren't two flat colors (#000000, #ffffff); instead it's a million variations of off-white and off-black. The legs in walking frames are almost never correct, etc.
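That particular failure is easy to check for mechanically. A minimal sketch, assuming Pillow is installed and that "sprite_sheet.png" is a hypothetical one-shot output; the color thresholds are guesses, not anything the generators document:

    from collections import Counter
    from PIL import Image

    # Count every distinct color in the generated sheet. A genuine checkerboard
    # background would contribute exactly two flat colors; generated images tend
    # to produce thousands of near-black / near-white variants instead.
    img = Image.open("sprite_sheet.png").convert("RGB")
    colors = Counter(img.getdata())

    near_white = [c for c in colors if all(v >= 230 for v in c)]
    near_black = [c for c in colors if all(v <= 25 for v in c)]
    print(f"distinct colors total:      {len(colors)}")
    print(f"distinct near-white shades: {len(near_white)} (expected 1)")
    print(f"distinct near-black shades: {len(near_black)} (expected 1)")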
And even if they get close, as soon as you try to iterate on the first output you enter a game of whack-a-mole. Okay, we fixed the background, but now the legs don't look right; let's fix those. Great, the legs are fixed, but now the faces are different in every frame; let's fix those. Oh no, fixing the faces broke the legs again. Etc.
We are in a weird place where companies are shedding the engineers who know how to use these things. And some of those engineers will become solo devs. A solo dev's funds won't be infinite, so it doesn't seem likely that the vendors can jack up the prices on the consumer plans. But if companies keep firing developers, then who will actually steer the agents on the enterprise plans?
US$700 billion could build a lot of infrastructure, housing, or manufacturing capacity.
It's not due to a lack of money that housing in SF is extremely expensive.
Whereas $700 billion in AI might actually do that.
[0] https://www.cancer.org/research/acs-research-news/people-are...
Because we're not good at curing cancers, we're just good at making people survive better for longer until the cancer gets them. Five-year survival is a lousy metric, but it's the best we can manage and measure.
I'm perfectly happy investing roughly 98% of my savings into the thing that has a solid shot at curing cancers, autoimmune and neurodegenerative diseases. I don't understand why all billionaires aren't doing this.
But realistically: perhaps by noticing patterns we’ve failed to notice and by generating likely molecules or pathways to treatment that we hadn’t explored.
We don’t really know what causes most diseases anyway. Why does the Shingles vaccine seem to defend against dementia? Why does picking your nose a lot seem to increase risk of Alzheimer’s?
That’s the point of building something smarter than us: it can get to places we can’t get on our own, at least much faster than we could without it.
But people accept the status quo and are afraid to take a moment’s look into the face of their own impending injury, senescence and death: that’s how our brains are wired to survive and it used to make sense evolutionarily until about 5 minutes ago.
...Meanwhile, we are developing techniques to, yes, cure some kinds of cancer (as in, every time they check back it's completely gone) without harming healthy tissue.
We are developing "anti-vaccines" for autoimmune diseases, that can teach our bodies to stop attacking themselves.
We are learning where some of the origins of the neurodegenerative diseases are, in ways that make treating them much more feasible.
So you're 100% wrong about the things we can't do, and your confidence in what "AI" can do is ludicrously unfounded.
I’m not claiming we haven’t made a dent. I’m claiming I’m in roughly as much danger from these things right now as any human ever has been: middling results.
If we can speed up the cures by even 1%, that’s cumulatively billions of hours of human life saved by the time we’re done.
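A back-of-the-envelope sketch of that arithmetic, where every figure is an assumption picked only to show the order of magnitude (the death count is roughly the commonly cited global annual cancer toll; the rest are made up for illustration):

    # All inputs are illustrative assumptions, not sourced projections.
    cancer_deaths_per_year = 10_000_000   # roughly the global annual toll
    years_until_cures = 50                # pure assumption
    speedup = 0.01                        # "even 1%"
    extra_years_per_person = 10           # assumed life gained per averted death
    hours_per_year = 24 * 365

    years_sooner = years_until_cures * speedup              # cures arrive 0.5 years earlier
    deaths_averted = cancer_deaths_per_year * years_sooner  # ~5 million people
    hours_saved = deaths_averted * extra_years_per_person * hours_per_year
    print(f"~{hours_saved / 1e9:.0f} billion hours")        # ~440 billion hours

Shrink those assumptions by an order of magnitude and the total is still in the billions, which is the only point the number is meant to make.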
And that hypothetical "billions of hours of human life saved" has to be measured against the actual damage being done right now.
Real damage to economy, environment, politics, social cohesion, and people's lives now
vs
Maybe, someday, we improve the speed of finding cures for diseases? In an unknown way, at an unknown time, for an unknown cost, and by an unknown amount.
Who knows, maybe they'll give everyone a pony while they're at it! It seems just as likely as what you're proposing.
I can’t speak to the economy as a whole, but the tech economy has a long history of bubbles and scams. Some huge successes, too, but it gets it wrong more often than it gets it right.
Singapore is an IQ shredder. It is an economically productive metropolis that sucks in bright and productive minds with opportunities and amusements at the cost of having a demographically unsustainable family unit.
Basically, if you're a productive person, you want to maximize your return. So you go where the action is. So does every other smart person. Often that place is a tech hub, which is now overflowing with smart guys. Those smart guys build adware (or whatever) and fail to reproduce (combined, these forces "shred" the IQ). Meanwhile every small town is brain-drained. Your hometown's mayor is 105 IQ because he's the smartest guy in town. Things don't work that great, and there's a general stagnation to the place.

Right now, AI is a "capital shredder". In the past, there were barriers everywhere, and we've worked hard to tear those down. It used to be that the greater the distance (physically, but also in other senses, like currencies, language, culture, etc.), the greater the friction to capital flows. The local rich guy would start a business in his town. Now he sends his money to one of the latest global capital attractors, which have optimized for capital inflow. This mechanism works whether the attractor can efficiently use that capital or not. That resource inflow might be so lucrative that managing inflow is the main thing it does. Right now that's AI, but as long as the present structure continues, this is how the machine of the global economy will work.
Not every smart person (or even most) is an engineer; of the ones who are, not all move to tech hubs; and of the ones who do, not all of them fail to get laid.
And I'll give you a great reason why it's hogwash: the "brilliant" engineers who can't get laid in Singapore are the same "brilliant" engineers who can't get laid in their hometown.
Notepad now has Copilot built right into it, after all. That wouldn't have happened by now if we took the human psyche as a given and built around it.
More like French post-structuralism.
Or put more plainly, being a big fish in a small pond is not better than being a small fish in a big pond.
It's the process where social, political, or cultural meaning is rooted in some context. It's a state of stability and boundaries. For just the economic, the geographic would likely be the centroid of that, but the other vectors are not irrelevant.
One could argue that we suffer to the degree we are deterritorialized, because the effects thereof are alienating. So, we need structure that aligns both our economic and psychological needs. What we have is subordination to the machine, which will do what it's designed to: optimize for its own desire, which is machinic production.
Note that none of this is inherently good/bad. Like anything, a choice has trade-offs. We definitely get more production within the current structure. The cost is borne by the individual, aggregating into the social ills that are now endemic.
"The most hard-core capitalist response to this [IQ shredders] is to double-down on the antihumanist accelerationism. This genetic burn-rate is obviously unsustainable, so we need to convert the human species into auto-intelligenic robotized capital [a]s fast as possible, before the whole process goes down in flames." [0]
[0] Nick Land (2014). "IQ Shredders", Xenosystems (blog). Retrieved from github.com/cyborg-nomade/reignition
“Blood and soil” is such a taboo in today’s society because it’s the solution to the problems we have. I don’t say this to be edgy. Scaling trust is the most important thing in human societies, and race, cultural history, shared religion etc are the absolute best ways to do this. Think about it this way:
Every morning men like Sam Altman wake up and decide to keep pursuing billions of dollars with no thought of how to ensure the rest of his people (because they aren’t you and me) are going to get by. He could very easily make this a core tenet of OpenAI (in a way inline with its original incarnation) and be _loved_ by the people. We don’t have a single elite that behaves this way: Musk, Trump, Altman, etc are all parasites that do not care if we live or die.
Land and co. are more or less transhumanist Thiel cronies so you’re not gonna get great analysis from them.
Every week in 2026 Google will pay for the cost of a Burj Khalifa. Amazon for a Wembley Stadium.
Facebook will spend a France-England tunnel every month.
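For scale, converting those comparisons into implied annual figures. The landmark costs below are rough, rounded construction-cost estimates (the Channel Tunnel figure in particular varies widely by source), and the output is just the analogy annualized, not any company's actual guidance:

    # Implied annual spend under the landmark analogy. All costs are rough assumptions.
    burj_khalifa = 1.5e9      # ~US$1.5B reported construction cost
    wembley = 1.2e9           # ~US$1.2B
    channel_tunnel = 15e9     # on the order of US$15B in today's money

    print(f"Google:   ~${52 * burj_khalifa / 1e9:.0f}B per year")    # one per week
    print(f"Amazon:   ~${52 * wembley / 1e9:.0f}B per year")         # one per week
    print(f"Facebook: ~${12 * channel_tunnel / 1e9:.0f}B per year")  # one per month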
Think of the PC gamers, who first dealt with COVID supply shocks, followed by crypto making GPUs scarce and untenable, then GPU makers raising prices and narrowing inventory to only the highest-end SKUs, only to abandon them entirely for AI - which then also consumed their RAM and SSDs. A hobby that used to be enjoyed by folks on even a modest budget is now a theft risk, given the insane resale prices of parts on the second-hand market due to scarcity.
And that extends to others as well. The swaths of folks who made freelance or commission artistry work through Patreons and conventions and the like are suddenly struggling as customers and companies spew out AI slop built on their work, without compensation. Tech workers, previously the wealthy patrons of artisans and communities, are now being laid off en masse for AI CapEx buildouts and share pumps as investors get cold feet about what these systems are actually doing to the economy at large (definite bad, questionable good, uncertain futures).
Late-stage capitalism’s sole respite was consumerism, and we can’t even do that anymore thanks to AI gobbling up all the resources and capital. It’s no wonder people are pissed at AI boosters trying to say this is a miracle technology that’ll lift everyone up: it’s already kicking people down, and nobody actually wants to admit or address that, lest protecting humans disrupt their investments.
But when everyone has access to recordings of the world's best musicians at all times, why listen to uncle Harry's shoddy guitar play? Why sing Christmas songs together when you can put on the Sinatra Christmas jazz playlist on Spotify?
Regarding singing - I do not know a single person who can "somehow" sing at least a little bit.
Society is losing these capacities.
Like how most of the royalties Spotify pays out are for older catalogue stuff from “classic” artists, rather than new bands. Or how historical libraries of movies and films are constantly either up for grabs (for prestige) or being pushed off platforms due to their older/more costly royalty agreements.
With AI though, it’s the actual, tangible consumption of physical goods being threatened, which many companies involved in AI may argue is exactly the point: that everyone should be renters rather than consumers, and by making ownership expensive through cost and scarcity alike, you naturally drive more folks towards subscriptions for what used to be commodities (music, movies, games, compute, vehicles, creativity tools, TCGs, you name it).
It’s damn depressing.
It's a loop of captchas that never ends.