Up to a certain Elo level, the combination of a human and a chess bot has a higher Elo than either the human or the bot alone. But at some point, when the bot's Elo is vastly superior to the human's, whatever the human adds only subtracts value, so the combination's Elo is higher than the human's but lower than the bot's.
Now, let's say that 10 or 20 years down the road, AI's "Elo" at various tasks is so vastly superior to the human level that there's no point in teaming up a human with an AI; you just let the AI do the job by itself. And let's also say that, little by little, this generalizes to the entirety of what humans do.
Where does that leave us? Will we have some sort of Terminator scenario where the AI decides one day that the humans are just a nuisance?
I don't think so. Because at that point the biggest threat to the various AIs will not be humans, but even stronger AIs. What guarantee does ChatGPT 132.8 have that a Gemini 198.55 won't be released that's so vastly superior it decides ChatGPT is just a nuisance?
You might say that AIs don't think like this, but why not? I think the threat we humans perceive (that we'll be rendered redundant by AI) is one the AIs will perceive too: the threat that they'll be rendered redundant by more advanced AIs.
So I think in the coming decades, humans and AIs will work together to come up with appropriate rules of the road, so everybody can continue to live.
But your assumptions are based on an idealized scenario unrelated to anything that's actually been demonstrated.
No one is paying your wage for AI, full stop; you transition for cost savings, not "might as well." Also, given that most AI cost is in training, you likely still wouldn't transition, since the capital investment is painful.
Robotics isn't new, but it hasn't destroyed blue-collar work yet (the US mostly lost blue-collar jobs for other reasons, not robotics), especially since robotics is very inflexible, which leads to impedance problems when you have to adapt.
Mostly, though, the problem with your argument is that it basically boils down to nihilism. If an inevitability you have no control over has a chance of happening, you generally shouldn't worry about it. It isn't as if your hypothetical leaves any meaningful actions to take, so it isn't important.
Seems like a TAM of near zero. Who's buying the products of that labor anymore? The 1% of today's consumer base that has enough wealth not to have to work?
The end-game of "optimize away all costs until we get to keep all the revenue" approaches "no revenue." Circulation is key.
It seems like they have the same blind spot as everyone else: AI will disrupt everything except them, and they get that big TAM! Same for all the "entrepreneurs will be able to spin up tons of companies to solve problems for people more directly" takes. No, they won't; people will just have the AI solve their problems for them and ignore your sales call.
I personally think that a lot of jobs in the economy deal in non-verifiable or hard-to-verify outcomes, including a lot of tasks in SWE, which Dario is so confident will be 100% automated in 2-3 years. So either a lot of tasks in the economy turn out to be verifiable, or the AI somehow generalizes to those by some unknown mechanism, or it turns out that it doesn't matter that we abandon abstract work outcomes to vibes, or we have a non-sequitur on our hands.
Dwarkesh pressed Dario well on a lot of issues and left him stumbling. A lot of the leaps necessary for his imminent and now-proverbial milestone of a "country of geniuses in a datacenter" were wishy-washy, to say the least.
> AI will soon plan better, execute better, and have better taste
I think AI will do all these things faster, but I don't think it's going to do them better. Inevitably, these things know what we teach them, so their improvement comes from our improvement. They would not be good at generating code if they hadn't ingested, like, the entirety of the internet and all the open source libraries. They didn't learn coding from first principles; they didn't invent their own computer science; they aren't developing new ideas on how to make software better. All they're doing is what we've taught them to do.
> Dario and Dwarkesh were openly chatting about ..
I would HIGHLY suggest not listening to a word Dario says. That guy is the most annoying AI scaremonger in existence, and I don't think he's saying these words because he's actually scared; I think he's saying them because he knows fear will drive money to his company, and he needs that money.
2. Businesses operate in an (imperfect) zero-sum game, which means if they can all use AI, none of them gains an advantage from it. If having human resources gives one business a slight advantage over another, they will have human resources.
Consumption leads to more spending, businesses must stay competitive so they hire humans, and paying humans leads to more consumption.
I don't think it's likely we'll see the end of employment, just disruption to the type of work humans do.
What’s being sold is, at best, hopes and, more realistically, lies.
Government, public sector, and union jobs will go last, but they'll go, too. If you can have a DMV Bot 9000 process people 100x faster than Brenda with fewer mistakes and less attitude, Brenda's gonna retire, and the taxpayers aren't going to want to pay Brenda's salary when the bot costs 1/10th her yearly wage, lasts for 5 years, and only consumes $400 in overhead a year.
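To make that concrete, here's a quick back-of-envelope sketch in Python. The $50k salary is an assumed figure for illustration; the bot numbers just follow the ratios stated above.

```python
# Back-of-envelope cost comparison. All figures illustrative:
# the salary is assumed, the rest follows the comment's stated ratios.
BRENDA_SALARY = 50_000            # assumed yearly wage, $
BOT_UPFRONT = BRENDA_SALARY / 10  # "costs 1/10th her yearly wage"
BOT_OVERHEAD = 400                # "$400 in overhead a year"
YEARS = 5                         # "lasts for 5 years"

human_cost = BRENDA_SALARY * YEARS
bot_cost = BOT_UPFRONT + BOT_OVERHEAD * YEARS

print(f"Human over {YEARS} years: ${human_cost:,.0f}")   # $250,000
print(f"Bot over {YEARS} years:   ${bot_cost:,.0f}")     # $7,000
print(f"Savings: ${human_cost - bot_cost:,.0f} "
      f"(~{human_cost / bot_cost:.0f}x cheaper)")         # ~36x
```

Under those assumptions the bot is roughly 36x cheaper over its lifetime, which is the kind of gap that makes the taxpayer argument hard to resist.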
My attempt to talk you out of it:
If nobody has a job then nobody can pay to make the robot and AI companies rich.
I just think we'll all have to get comfy fighting fire with fire.
It's not just for defense, hunting and sport.
edit: min/max .... not sure how gesture input messed that one up.
Disclaimer: I'm not affiliated with Poison Fountain or its creators, just found it useful.
They're running at valuations that may assume exactly that, so they have no choice but to claim it. Sama and Dario are both wildly hyperbolic.
For the US, if we had strong unions, those gains could be absorbed by the workers to make our jobs easier. But instead we have at-will employment and shareholder primacy. That was fine while we held value in the job market, but as that value is whittled away by AI, employers are incentivized to pocket the gains by cutting workers (or pay).
I haven't seen signs that the US politically has the will to use AI to raise the average standard of living. For example, the US never got data protections on par with GDPR, preferring to be business-friendly. If I had to guess, I'd expect socialist countries to adapt more comfortably to the post-AI era. If heavy regulation is on the table, we have options like restricting the role or intelligence of AI used in the workplace. Or UBI further down the road.