> My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.

Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable. The problem with services is that they're typically resistant to productivity growth, and that's finally changing.

If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.

reply
"Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable."

You've expressed very clearly what LLMs would have to do in order to be economically transformative.

"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."

It's not that process innovations are lacking, it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?

reply
> Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?

Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?

If the value is in the outcome, the means to achieving that aren't of much consequence.

reply
The supply/demand picture here is more complicated than it looks.

If AI displaces human educators, yes, their supply shrinks -- but we can't assume which direction demand will go.

We've seen this pattern before: as recorded music became free, live performance got more expensive, and therefore much less accessible than it used to be.

What's likely to happen is that "worse" (read: AI) education will become much cheaper, while "better" (read: in-person) education that involves human connection-driven benefits will become much less accessible compared to what it is today.

Most people may consider it a win. It's certainly not a world I'm looking forward to.

reply
Important follow-up to my comment: as fewer people do X -- live music, medicine, education, you name it -- fewer talented people do it as well.

Fields need a large base of participants to produce great ones. This is exactly why software has been so extraordinary over the past 30 years: an unusual concentration of gifted minds from across humankind committed themselves to it.

In my view, the Bach, Rachmaninoff, and Cole Porter equivalents of today probably aren't writing symphonies. They've decided to write code for a living. Which is why any Great American Songbook made today won't hold a candle to the one from the 1950s.

reply
More subtly, what is an education? What is care? As you point out, the LLMs are (or probably will become) perfectly good at the measurable parts of those services; but I think the residual edge of “good” education/care is more than just the other human’s co-opted attention.

How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,” where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunches—on totalities of perception-filtered-through-experience that they can’t fully put into words?

I’m reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another human’s attention.

In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that don’t fully overlap (fine, I’ll actually lay off the wine because the doctor put the fear of the lord in me).

I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human element…

[0] https://news.ycombinator.com/item?id=47287344

reply
The premise of your argument is that "the outcome" can be separated from the process. This is true enough for manufacturing bricks: I don't much care what process was used to create a brick if it has a certain compressive strength, mass, etc.

But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.

reply
> But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished

How is that Baumol's argument? How is 'outcome' vs 'process' relevant to his argument at all?

'Cost disease' is just the foundational truth that the cost of the output from industries with stagnant productivity will increase, because the workers in that industry can be more valuable in other industries, reducing the relative number of workers in the stagnant industry.

If you want to make the output from a stagnant industry available to a broader spectrum of the population then you have to improve the productivity of that industry.
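
The mechanism above can be sketched numerically (a toy model with assumed numbers, not figures from this thread): two sectors share one labor market, so wages everywhere track the sector whose productivity grows, and the stagnant sector's unit cost rises with the wage level.

```python
# Toy illustration of Baumol's cost disease. All numbers are hypothetical.
years = 50
prod_growth = 0.02          # manufacturing productivity grows 2%/yr (assumed)
wage = 1.0                  # common wage across sectors, normalized to 1
productivity = {"manufacturing": 1.0, "education": 1.0}

for _ in range(years):
    productivity["manufacturing"] *= 1 + prod_growth  # rising productivity...
    wage *= 1 + prod_growth                           # ...pulls wages up economy-wide

# Unit cost = wage / output per worker. Education's productivity never moved,
# so its cost rises in lockstep with the economy-wide wage.
unit_cost = {sector: wage / p for sector, p in productivity.items()}
# manufacturing stays ~1.0; education ends up roughly 2.7x more expensive
```

The point of the sketch: nothing about education got worse, yet its price rose ~2.7x in 50 years purely because wages elsewhere rose. Improving the stagnant sector's productivity is the only lever that changes this.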

reply
It's very true for healthcare (especially mental healthcare) and education today as well, because for most people, the choice isn't LLM vs. human attention - it's LLM vs. no access at all.
reply
Even if you have perfect medical information and advice through an LLM, can you perform surgery on yourself? Can you prescribe yourself whatever medication you think you need?

For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?

reply
Interesting perspective; medical regulation as a business moat
reply
> the value just in the co-opting of another human being's attention?

That's a weird way of describing it.

A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.

reply
That may well be true if you need to be persuaded to exercise and eat right.

OTOH, if you don't need to be persuaded and just want information on how best to go about doing it, then I think it makes little difference where the information comes from as long as it's of reasonable quality.

reply
It also seems like the value of quality tutoring that doesn't primarily function as social/class signaling goes down as tools capable of automating high quality intellectual work are more widely available.
reply
It depends on outcome again: is the value of tutoring the social class elevation, or is it in the outcome of becoming more skilled and knowledgeable?

There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.

reply
If I described my symptoms to an AI and it suggested a diagnosis, I would definitely get a second opinion.
reply
You're implying that insurance companies will allow prices to fall and lower their profits. That seems like a really unlikely event in the current economy. They fire a lot of doctors and nurses, but they won't lower prices.
reply
This is assuming no competition materializes from the lowered friction
reply
The ACA requires 80-85% of health insurance to go toward medical care (medical loss ratio). The way they work around that is to figure out how to charge more for medical care.
reply
Can a robot write a medicine prescription? A medical procedure prescription? If yes, that would be a game-changer. But the medical insurance providers would be very cautious about honoring these. Then, if things go wrong, what entity would be held accountable for malpractice?

You already can get good-quality medical advice "for nothing", unless it requires e.g. a blood test. The question is how actionable such advice is going to be, and how consistent its quality will be.

reply
By the time it replaces doctors, nobody but today's investors will be able to afford anything at all. The X-shaped economy would have owners in the V and manual laborers (assuming this doesn't translate to gains in automation) in the ^. This outcome is worth avoiding...
reply
I’m sick of this idea that “free” services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.

By selling those services at a cost of “free”, hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the “free” users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.

With the importance of training data and network effects, “free” services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.

The only way it works if the “free” service for tutoring or healthcare is through government subsidies or an actual non-profit. Otherwise it’s just going to concentrate market power with the megacorps.

reply
This 1000x. "Free" is only a viable business model if the govt funds it. Otherwise, the $$ has to come from somewhere else in the company - how long will it take for the company to lose interest in a loss-leader when they're making $$ from other parts?

Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?

Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.

reply
> I’m sick of this idea that “free” services are beneficial to society. There is no such thing as a free lunch; users are essentially bartering their time, attention, IP (contributed content) and personal/behavioral data in exchange for access to the service.

In aggregate, this is true, but there are many ways to game the system to one's advantage and get a true "free lunch." For example, people watching Youtube with an adblocker and logged out don't provide Google with any income or useful telemetry. Likewise you can get practically unlimited GPT/Claude/etc by using multiple accounts.

reply
No, you are misunderstanding the economic principle. There is still a cost associated with serving that user, and the user is still paying for the cost of their internet connection and the opportunity cost of spending time on the service, or of setting up new accounts to get past usage limits. I don't really agree with "no useful telemetry" in the YouTube example, as view counts are still vital for their recommendation algorithm.

TINSTAAFL has two main implications. First, that nothing is free: someone has to pay for it. Second, that money is not the only thing you pay with; every choice has an opportunity cost. Gaming the system costs someone something.

reply
Your argument is (mildly) a variant of the broken window fallacy.

AI will bring about a de-sequestering of talent and resources from some sectors of the economy. It's very difficult to predict where these people and resources will go after that, and what effect that will have upon the world.

reply
> because productivity gains must benefit the lower classes to see a multiplier in the economy

by this logic, the invention of mechanized farm equipment, which displaced farm labor, didn't increase productivity

reply
On the contrary: humanity spent nearly its entire existence calorically deficient, and not until mechanized farming did we see health outcomes improve, heights increase, IQs increase, and populations explode.

Productivity gains in the case of mechanized labor got everyone out of subsistence farming and into factories.

AI gets everyone out of every job and into nothing.

reply
It made food cheaper.
reply
The benefits largely accrued to the poorest people.
reply
> It is a unique failure outcome I have yet to see anyone talk about

It seems likely to me that we will reach a violent, bloody revolt before we ever reach this point. That may be why no one is talking about this failure mode.

reply
> We may reach a point where the only ones able to afford compute are AI companies

Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.

This is like computing once again. In 1990 only the upper class could afford computers, as of 2000 only the upper class owned mobile phones, as of now more or less everyone and their kid has these things.

reply
1990? We were solid lower-middle class, and I got a computer for Christmas in 1983. I bought my own, from $$ saved by working in 1987.
reply
We were solid middle-middle class and didn't have a computer until 1989, and it was a "free", 2- or 3-year-old computer from my dad's work that they were going to throw away. We absolutely could not have afforded a computer during the 80s.

Even in the 90s, we kept relying on cast-offs from my dad's employer, and when I was preparing to go to college in '99, my parents scrounged to buy me the parts for a computer to build and take to college. But even then, my dad bought the parts at a discount through a former co-worker's consulting company, and vetoed a couple of my more expensive component choices.

And now that I think about it, my first laptop in 2003 was my dad's old work laptop that had been decommissioned.

reply
Computers were roughly $1,000 in 1990. How did your lower-middle class family justify a $1,000 expenditure, inflation-adjusted to $2,565 today? The average minimum wage in the US is $11.30/hour, so that's about 29 eight-hour days working at minimum wage.
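
A quick sanity check of that arithmetic (the price and wage figures are the comment's own; the 8-hour workday is an assumption):

```python
import math

price_today = 2565       # the comment's inflation-adjusted price (USD)
min_wage = 11.30         # the comment's "average minimum wage" (USD/hour)
hours_per_day = 8        # assumed full workday

hours_needed = price_today / min_wage              # ~227 hours
days_needed = math.ceil(hours_needed / hours_per_day)
# days_needed comes out to 29, matching the comment's figure
```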

My family was on the border of upper-lower and lower-middle and we bought a computer once and used it for 10+ years. I dumpster dove later to scavenge parts for upgrading until the mid 2000s when cheap computers became available.

reply
Yes and also keep in mind that low-income in US is high income in most of the world!
reply
> How did your lower-middle class family justify a $1000 expenditure

What, like a yearly vacation? Maybe they stayed home for Christmas one year instead of flying to visit family

reply
I would argue we've even already seen this play out with productivity gains across the economy over the last 40 years. The American middle class has been gradually declining since the '80s. AI seems likely to accelerate that trend for the exact reasons you point out.

A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.

No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.

Personally, I think the solution here might be to artificially constrain the supply of productivity. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected from them in a given week.

The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.

Technological progress that hurts more people than it helps isn't progress, it's class warfare.

reply
> Technological progress that hurts more people than it helps isn't progress, it's class warfare.

We've never seen such a thing before, so I don't know how you can draw such sweeping conclusions about it.

reply
deleted
reply
The longer we ignore the collapse of the middle class, the angrier the bottom half of the economy will get and the more justified they will feel in enacting retribution. We absolutely have historical precedents for what happens here: The French Revolution, the Gilded Age, etc. People will only tolerate a declining standard of living for so long.
reply
Well, I see I've thoroughly angered the billionaire wannabes. Funny how they never offer any solutions to these problems and just make a stink about them being acknowledged in the first place.
reply
> Technological progress that hurts more people than it helps isn't progress, it's class warfare.

I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it, since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into AI slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eke out a living being the meat arms of the World Mind, or maybe we'll be turned into hamburger by robotic concentration camp guards.

reply
I like how you identified the pattern of defeat and still complied in advance.
reply
Right there with you. Sure, I have gained a lot as a software engineer in the valley (I guess I'm upper-middle class now), but I'd give it up and go right back to lower-middle class (1980s) status I was raised in if it meant my kids could also aspire to a similar lower-middle class life.

This suicide-pact of "either AI goes crazy and 100 people rule the world with 99% of the world's wealth" or "AI fails badly and everyone's standard of living drops 3 levels, except for the 100 people that rule the world with 99% of the world's wealth" is not what I signed up for. Nor is it in any way sustainable or wise.

Too much class distinction / wealth between lower/upper classes, and a surplus of unemployed lower-class men is how many revolts/revolutions/wars have started.

reply