Has it occurred to you that there might not be a correction, and that the outcome would still be brutal, at least on par with the Industrial Revolution?
It's physically impossible to build out the datacenters required for the "AI is actually good and we have mass layoffs" scenario. This Anthropic investment is spurred by the fact that they've already hit a brick wall on capacity.
$40B goes a long way, but not for datacenters where nearly every single component and service is now backordered. Even if you could build the DC, the power connection won't be there.
The current oil crisis just makes all of that even worse.
The next level of layoffs is probably still 25 years out.
It hasn't even been 25 years since the previous layoffs before the current ones.
But all the economic indicators suggest those are "bad economy" layoffs dressed up as "AI" layoffs to keep the shareholders happy.
And that's without accounting for the various wars (and their resulting economic impacts) that are already in progress. A large part of what drove the meat grinder of WWI was (very approximately) the various actors repeatedly misjudging the overall situation and being overly enthusiastic to try out their shiny new weapons systems. If one or more superpowers decide to have a showdown, the only thing that might minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems. Even in that case, as we know from WWII, the logical outcome is a decimated economy and manufacturing sector regardless of anything else that might happen.
I think that just means the relative civilian loss of life will increase once again.
Russia is really an empire of dumb and subjugated serfs at this point (again, history repeats), but it is far from the only such place.
Don't expect more; most people are not that nice when SHTF.
Bubbles like the AI bubble are a game-theoretic outcome of a revolution. Many players invest heavily to avoid losing out, but as a whole the market over-invests. This leads to a bubble.
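The game-theoretic point can be sketched as a toy two-firm investment game with a prisoner's-dilemma payoff structure. The payoff numbers below are purely illustrative assumptions, not data; the point is only that heavy investment can be each firm's best response regardless of what the other does, even though the all-invest outcome is collectively worse:

```python
# Toy two-firm "invest heavily in AI" game with hypothetical payoffs.
# Payoffs are (firm_a, firm_b), indexed by (a_invests, b_invests).
PAYOFFS = {
    (False, False): (3, 3),  # both restrain: healthy, smaller market
    (True,  False): (5, 0),  # sole investor captures the market
    (False, True):  (0, 5),  # sole restrainer gets left behind
    (True,  True):  (1, 1),  # both over-invest: bubble, thin returns
}

def best_response(opponent_invests: bool) -> bool:
    """Return firm A's payoff-maximizing choice given the opponent's choice."""
    restrain = PAYOFFS[(False, opponent_invests)][0]
    invest = PAYOFFS[(True, opponent_invests)][0]
    return invest > restrain

# Investing is the best response to either opponent choice (a dominant
# strategy), so both firms invest...
assert best_response(False) and best_response(True)

# ...yet the resulting equilibrium yields less total value than mutual restraint.
total_equilibrium = sum(PAYOFFS[(True, True)])
total_restraint = sum(PAYOFFS[(False, False)])
print(total_equilibrium < total_restraint)
```

Under these assumed payoffs the script prints `True`: each player's individually rational move produces collective over-investment, which is the bubble dynamic described above.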
But right now, the difference in developer experience between a dev on a team at a business which has corporate copilot or Claude licenses and bosses encouraging them to maximize token usage, vs a solo dev experimenting once every few months with a consumer grade chat model is vast.
Meta seemingly has a constant stream of product managers. If LLMs really augment the productivity of engineers, why isn't Meta launching lots more stuff? I mean, there's no harm in at least launching one new thing.
What are all those people doing with the so called productivity enhancements?
What I'm calling into question is how much generating more code matters if the bottleneck is creativity/imagination for projects.
The only thing I've seen is a really crummy Meta AI thing implemented within WhatsApp.
The only solution I can think of is to drastically cut headcount so productivity returns to prior levels and profitability is raised. Big Tech is mostly market-constrained, with not much room to grow beyond the market itself growing.
As for startups, seems like AI tools have drastically reduced their time to market and accelerated their growth curves.
Hobbyist solo dev, counting tokens, hitting quotas, trying things on little projects, giving up and not seeing what the fuss is about.
vs
Corporate developer, increasingly held accountable to their boss for hitting metrics for token usage; being handed every new model as soon as it comes out; working with the tools every day on code changes that impact other developers on other teams, all of whom have access to those same tools.
I might be missing a lot of self-evident assumptions here but I feel like I'm still missing so much context and have no idea what this difference is actually describing.
I'm talking more about why threads like this seem to be full of people saying "this has completely changed how corporate development works" and other people saying "I tried it a few times and I don't get the hype."