Let's pursue your idea a bit further.

Up to a certain Elo level, the combination of a human and a chess bot has a higher Elo rating than either the human or the bot alone. But past some point, when the bot's rating vastly exceeds the human's, whatever the human has to add only subtracts value, so the combination has an Elo rating higher than the human's but lower than the bot's.
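
The crossover can be illustrated with the standard Elo expected-score formula (a generic sketch; the ratings here are made up for illustration):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# Evenly matched players each expect half a point.
print(expected_score(1500, 1500))  # 0.5

# A 1800-rated human against a 3500-rated bot expects essentially nothing,
# which is why the human's "contribution" to a centaur team can go negative.
print(expected_score(1800, 3500))  # ~0.00006
```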

Now, let's say that 10 or 20 years down the road, AI's "Elo" at various tasks is so vastly superior to the human level that there's no point in teaming a human up with an AI; you just let the AI do the job by itself. And let's also say that, little by little, this generalizes to the entirety of human activity.

Where does that leave us? Will we have some sort of Terminator scenario where the AI decides one day that the humans are just a nuisance?

I don't think so. Because at that point the biggest threat to various AIs will not be the humans, but even stronger AIs. What is the guarantee for ChatGPT 132.8 that a Gemini 198.55 will not be released that will be so vastly superior that it will decide that ChatGPT is just a nuisance?

You might say that AIs do not think like this, but why not? I think that what we, humans, perceive as a threat (the threat that we'll be rendered redundant by AI), the AIs will also perceive as a threat, the threat that they'll be rendered redundant by more advanced AIs.

So, I think in the coming decades, the humans and the AIs will work together to come up with appropriate rules of the road, so everybody can continue to live.

reply
There are other scenarios: the AIs might decide that they are more alike than not, and team up against humans. Or the AI that first achieves runaway self-improvement pulls the plug on the others. I do not know how it will play out but there are serious risks.
reply
There’s no AI, wake up. It’s all the same tech bros trying to get rid of you. Except now they have the mother of all guns.
reply
If you assume AGI that is better than humans and effectively free, of course it seems better.

But your assumptions are based on an idealized thing unrelated to anything that is shown.

No one is paying your wage for AI, full stop; you transition for cost savings, not "might as well". Also, given that most AI cost is in training, you likely still wouldn't transition, since the capital investment is painful.

Robotics isn't new, but it hasn't destroyed blue-collar work yet (the US mostly lost blue-collar jobs for other reasons, not robotics). Especially since robotics is very inflexible, leading to impedance problems when you have to adapt.

Mostly, though, the problem I see with your argument is that it basically boils down to nihilism. If an inevitability you have no control over has a chance of happening, you should generally not worry about it. It isn't like there are meaningful actions to take in your hypothetical, so it isn't important.

reply
> 2 - Dario and Dwarkesh were openly chatting about how the total addressable market (TAM) for AI is the entirety of human labor market (i.e. your wage). First is the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and the ability to feed your family, is a minor nuisance. The value of your mental labor will continue to plummet in the coming years.

Seems like a TAM of near-0. Who's buying any of the product of that labor anymore? 1% of today's consumer base that has enough wealth to not have to work?

The end-game of "optimize away all costs until we get to keep all the revenue" approaches "no revenue." Circulation is key.

It seems like they have the same blind spot as anyone else: AI will disrupt everything except for them, and they get that big TAM! The same goes for all the "entrepreneurs will be able to spin up tons of companies to solve problems for people more directly" takes. No, they wouldn't; people would just have the AI solve the problems for them directly, and ignore your sales call.

reply
Dario admitted in the same interview that he's not sure whether current AI techniques will be able to perform well in non-verifiable domains, like "writing a novel or planning an expedition to Mars".

I personally think that a lot of jobs in the economy deal in non-verifiable or hard-to-verify outcomes, including many tasks in SWE, which Dario is so confident will be 100% automated in 2-3 years. So either a lot of tasks in the economy turn out to be verifiable, or the AI somehow generalizes to them by some unknown mechanism, or it turns out not to matter that we abandon abstract work outcomes to vibes, or we have a non sequitur on our hands.

Dwarkesh pressed Dario well on a lot of issues and left him stumbling. A lot of the leaps necessary for his immediate and now proverbial milestone of a "country of geniuses in a datacenter" were wishy-washy to say the least.

reply
He was not sure, but if I recall correctly, he put the probability at something like 90 percent that they will be able to do non-verifiable tasks.
reply
Ok, I'll try to talk you out of it!

> AI will soon plan better, execute better, and have better taste

I think AI will do all these things faster, but I don't think it will do them better. Inevitably these things know what we teach them, so their improvement comes from our improvement. They would not be good at generating code if they hadn't ingested practically the entire internet and all the open-source libraries. They didn't learn coding from first principles, they didn't invent their own computer science, and they aren't developing new ideas on how to make software better; all they're doing is what we've taught them to do.

> Dario and Dwarkesh were openly chatting about ..

I would HIGHLY suggest not listening to a word Dario says. That guy is the most annoying AI scaremonger in existence and I don't think he's saying these words because he's actually scared, I think he's saying these words because he knows fear will drive money to his company and he needs that money.

reply
Sometimes I am seriously flabbergasted at how many people just take what CEOs say at face value. Like, the thought that CEOs need to hype and sell what they’re selling never enters their minds.
reply
1. Consumption is endless. The more we can consume, the more we will. That's why automation hasn't led to more free time. We spend the money on better things and more things

2. Businesses operate in an (imperfect) zero-sum game, which means if they can all use AI, there's no advantage they have. If having human resources means one business has a slight advantage over another, they will have human resources

Consumption leads to more spending, businesses must stay competitive so they hire humans, and paying humans leads to more consumption.

I don't think it's likely we will see the end of employment, just disruption to the type of work humans do

reply
AGI is a sales pitch, not a realistic goal achievable by LLM-based technology. The exponential growth sold to investors is also a pitch, not reality.

What’s being sold is, at best, hopes, and more realistically, lies.

reply
Robotics is solved. Software is solved. There is no task on the planet that cannot be automated individually. The remaining challenge is exceeding the breadth of skills and the depth of problem solving available to human workers. Once the robots and AI can handle at least as many of the edge cases as humans can, they'll start being deployed alongside humans. Industries with a lot of capital will switch right away: mass layoffs, two weeks' notice, robots moving in with no training or transition period.

Government, public sector, and union jobs will go last, but they'll go, too. If you can have a DMV Bot 9000 process people 100x faster than Brenda with fewer mistakes and less attitude, Brenda's gonna retire, and the taxpayers aren't going to want to pay Brenda's salary when the bot costs 1/10th her yearly wage, lasts for 5 years, and only consumes $400 in overhead a year.
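
The back-of-the-envelope math works out to a huge gap (note: only the ratios come from the figures above; the salary itself is an assumed number):

```python
# Hypothetical numbers: the 1/10th-wage price, 5-year lifespan, and
# $400/yr overhead come from the comment; the salary is assumed.
brenda_salary = 60_000           # assumed annual wage
bot_price = brenda_salary / 10   # one-time cost: 6,000
bot_overhead = 400               # per year
years = 5                        # bot lifespan

human_cost = brenda_salary * years            # 300,000 over 5 years
bot_cost = bot_price + bot_overhead * years   # 8,000 over 5 years

print(human_cost, bot_cost, human_cost / bot_cost)  # 300000 8000.0 37.5
```

Under these assumptions the bot is roughly 37x cheaper over its lifespan, which is the kind of spread that makes the taxpayer argument hard to resist.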

reply
Dwarkesh is a podcaster who benefits from hype, not a neutral observer. The more absurd and outlandish the claims, the more traffic and money he gets.
reply
It's probably not even a conscious decision by Dwarkesh to be hyperbolic. Podcasters who are hyperbolic are simply watched more.
reply
I pay for Pro Max 20x usage, and for anything even a little open-ended it's not good; it doesn't understand the context or edge cases or anything. I will say it writes chunks of code, but it sometimes errors out, and I use Opus 4.6 only, not even Sonnet. But for simple tasks like writing a basic CRUD, i.e. the things that occur extremely frequently in codebases, it's perfect. So I think what will happen is that developers get very efficient, but problem solving remains with us, direction remains with us, and small implementations are outsourced in small atomic ways, which is good, because who likes writing boilerplate code anyway?
reply
And you forgot to mention that thing they have in Star Trek that generates stuff out of thin air: the replicator. We’re so cooked.
reply
>First is the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and the ability to feed your family, is a minor nuisance.

My attempt to talk you out of it:

If nobody has a job then nobody can pay to make the robot and AI companies rich.

reply
Don't take it to the limit, but consider a continuous relaxation: underemployed people doing whatever is not feasible or economically attractive for AI/robots, like prostitution, massage therapy, art, sales, social work, etc.
reply
Who needs the money when you have an autonomous system to produce all the energy and resources you need? These systems simply do not need the construct of money as we know it at a certain point.
reply
The Star Trek society is a remote possibility here. One can hope.
reply
I think we're going in that direction. The typical reader here, I think, can't see the forest for the trees. We're all in meat space. They call it real life. Most jobs aren't on the internet and ultimately deal with the physical. It doesn't matter what tech we have when there are boxes to move and shelves to stock. If AI empowers a small business owner to do things that were previously completely outside their budget, I can only imagine that will increase opportunity.
reply
What makes you think the people who control these post-scarcity machines are going to share their output with you?
reply
Being rich is ultimately about owning and being able to defend resources. If something like 99% of humans become irrelevant to the machine-run utopia of the elites, whatever currency the poors use to pay each other for services will be worthless to the top 1%, who simply won't need them or their services.
reply
So what? If you can generate all goods and services without anyone else's help, you'll just do that. You don't need other people buying what you produce. You don't need other people at all, except for a very small number of servants.
reply
Sure, but this is why free software/open source is so important (and why we dodged a bullet due to "AI" being invented in a mostly open source world.)

I just think we'll all have to get comfy fighting fire with fire.

reply
For me this is the outcome of the incentive structure. The question is if we can seize the everything machine to benefit everyone (great!) or everything becomes cyberpunk and we exist only as prostitutes and entertainers for Dario and Sam.
reply
This is why we need to maximize the Second Amendment... if worst comes to worst, rebellion needs to remain an option.

It's not just for defense, hunting and sport.

edit: min/max .... not sure how gesture input messed that one up.

reply
We should be fighting back. So far I have been using Poison Fountain[1] on many of my websites to feed LLM scrapers with gibberish. The effectiveness is backed by a study from Anthropic that showed that a small batch of bad samples can corrupt whole models[2].

Disclaimer: I'm not affiliated with Poison Fountain or its creators, just found it useful.

[1] https://news.ycombinator.com/item?id=46926485

[2] https://www.anthropic.com/research/small-samples-poison
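
For illustration, the general idea behind scraper poisoning (this is not Poison Fountain's actual implementation, just a sketch of the pattern) is to detect likely LLM crawlers and serve them plausible-looking gibberish instead of real content. The bot names and word list below are assumptions for the example:

```python
import random

# Assumed User-Agent substrings for LLM crawlers; a real deployment
# would use a maintained blocklist.
LLM_BOTS = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended")

# Tiny vocabulary for generating word soup.
WORDS = ("quantum", "ledger", "harvest", "doctrine", "lattice",
         "pivot", "sovereign", "entropy", "manifold", "cascade")

def is_llm_scraper(user_agent: str) -> bool:
    """Crude check: does the User-Agent mention a known LLM crawler?"""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in LLM_BOTS)

def gibberish(n_sentences: int = 5) -> str:
    """Sentence-shaped nonsense: cheap to generate, worthless to train on."""
    rng = random.Random()
    sentences = []
    for _ in range(n_sentences):
        words = rng.choices(WORDS, k=rng.randint(6, 12))
        sentences.append(" ".join(words).capitalize() + ".")
    return " ".join(sentences)

def serve(user_agent: str, real_page: str) -> str:
    """Return poison for suspected scrapers, the real page for everyone else."""
    return gibberish() if is_llm_scraper(user_agent) else real_page

print(serve("Mozilla/5.0 (compatible; GPTBot/1.1)", "real content"))
print(serve("Mozilla/5.0 Firefox/120.0", "real content"))  # real content
```

The Anthropic result cited above is what makes this interesting: if a few hundred poisoned documents can measurably corrupt a model, even small sites serving gibberish impose a real cost on indiscriminate scraping.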

reply
AI frontier CEOs are the least reliable sources for what jobs AI will be able to replace.

They are running at valuations that may assume exactly that, and they have no choice but to claim so. Sama and Dario are both wildly hyperbolic.

reply
I agree with you. This generation of LLMs is on track to automate knowledge work.

For the US, if we had strong unions, those gains could be absorbed by the workers to make our jobs easier. But instead we have at-will employment and shareholder primacy. That was fine while we held value in the job market, but as that value is whittled away by AI, employers are incentivized to pocket the gains by cutting workers (or pay).

I haven't seen signs that the US politically has the will to use AI to raise the average standard of living. For example, the US never got data protections on par with GDPR, preferring to be business friendly. If I had to guess, I would expect socialist countries to adapt more comfortably to the post-AI era. If heavy regulation is on the table, we have options like restricting the role or intelligence of AI used in the workplace. Or UBI further down the road.

reply