Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable. The problem with services is that they're typically resistant to productivity growth, and that's finally changing.
If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."
It's not that process innovations are lacking, it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention from a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
If AI displaces human educators, yes, their supply shrinks -- but we can't assume which direction demand for them will go.
We've seen this pattern before: as recorded music became free, live performance got more expensive, and therefore much less accessible than it used to be.
What's likely to happen is that "worse" (read: AI) education will become much cheaper, while "better" (read: in-person) education that involves human connection-driven benefits will become much less accessible compared to what it is today.
Most people may consider it a win. It's certainly not a world I'm looking forward to.
Fields need a large base of participants to produce great ones. This is exactly why software has been so extraordinary over the past 30 years: an unusual concentration of gifted minds from across humankind committed themselves to it.
In my view, the Bach, Rachmaninoff, and Cole Porter equivalents of today probably aren't writing symphonies. They've decided to write code for a living. Which is why any Great American Songbook made today won't hold a candle to the one from the 1950s.
How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,” where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunches—on totalities of perception-filtered-through-experience that they can’t fully put into words?
I’m reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another human’s attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that don’t fully overlap (fine, I’ll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human element…
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
How is that Baumol's argument? How is 'outcome' vs 'process' relevant to his argument at all?
'Cost disease' is just the foundational observation that the cost of output from industries with stagnant productivity will increase, because the workers in that industry can be more valuable in other industries, reducing the relative number of workers in the stagnant industry.
If you want to make the output from a stagnant industry available to a broader spectrum of the population then you have to improve the productivity of that industry.
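As a toy numeric sketch of that mechanism (all numbers here are made up for illustration, not drawn from any real data): suppose one sector's productivity grows 2% a year while teaching productivity stays flat. If wages in both sectors track the growing sector, because workers can switch, the unit cost of the stagnant service climbs even though nothing about teaching itself changed.

```python
# Toy illustration of Baumol's cost disease (illustrative numbers only).
# The "progressive" sector gets 2%/year more productive; teaching does not.
# Wages in both sectors track the progressive sector, so the cost of an
# hour of teaching rises relative to everything else.

def unit_cost_of_teaching(years, growth=0.02):
    wage = 1.0                # normalized economy-wide wage
    output_per_teacher = 1.0  # stagnant sector: never improves
    for _ in range(years):
        wage *= 1 + growth    # wages rise with productivity elsewhere
    return wage / output_per_teacher

for y in (0, 10, 20, 30):
    print(f"after {y:2d} years, relative cost of teaching: "
          f"{unit_cost_of_teaching(y):.2f}")
# After 30 years the same hour of teaching costs ~1.81x as much,
# purely because other sectors got more productive.
```

The only way to bend that curve is the one named above: raise the stagnant sector's output per worker, which is exactly what LLM tutoring or triage would do if it works.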
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or when raising funds for a new business?
That's a weird way of describing it.
A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.
OTOH, if you don't need to be persuaded and just want information on how best to go about doing it, then I think it makes little difference where the information comes from as long as it's of reasonable quality.
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
You can already get good-quality medical advice "for nothing", unless it requires e.g. a blood test. The question is how actionable such advice is going to be, and how even the quality is going to be.
By selling those services at a cost of “free”, hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the “free” users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.
With the importance of training data and network effects, “free” services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.
The only way it works if the “free” service for tutoring or healthcare is through government subsidies or an actual non-profit. Otherwise it’s just going to concentrate market power with the megacorps.
Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?
Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.
In aggregate, this is true, but there are many ways to game the system to one's advantage and get a true "free lunch." For example, people watching YouTube with an adblocker and logged out don't provide Google with any income or useful telemetry. Likewise you can get practically unlimited GPT/Claude/etc by using multiple accounts.
TINSTAAFL has two main implications. First, that nothing is free; someone has to pay for it. Second, that money is not the only thing you pay with; every choice has an opportunity cost. Gaming the system costs someone something.
AI will bring about a de-sequestering of talent and resources from some sectors of the economy. It's very difficult to predict where these people and resources will go after that, and what effect that will have upon the world.
By this logic, the invention of mechanized farm equipment, which displaced farm labor, didn't increase productivity.
Productivity gains in the case of mechanized labor got everyone out of subsistence farming and into factories.
AI gets everyone out of every job and into nothing.
It seems likely to me that we will reach a violent, bloody revolt before we possibly reach this point. That may be why no one is talking about this failure mode.
Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.
This is like computing once again. In 1990 only the upper class could afford computers, as of 2000 only the upper class owned mobile phones, as of now more or less everyone and their kid has these things.
Even in the 90s, we kept relying on cast-offs from my dad's employer, and when I was preparing to go to college in '99, my parents scrounged to buy me the parts for a computer to build and take to college. But even then, my dad bought the parts at a discount through a former co-worker's consulting company, and vetoed a couple of my more expensive component choices.
And now that I think about it, my first laptop in 2003 was my dad's old work laptop that had been decommissioned.
My family was on the border of upper-lower and lower-middle and we bought a computer once and used it for 10+ years. I dumpster dove later to scavenge parts for upgrading until the mid 2000s when cheap computers became available.
What, like a yearly vacation? Maybe they stayed home for Christmas one year instead of flying to visit family.
A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.
No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.
Personally, I think the solution here might be to artificially constrain the supply of labor. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected of them in a given week.
The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
We've never seen such a thing before, so I don't know how you can draw such sweeping conclusions about it.
I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it, since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into AI slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eke out a living being the meat arms of the World Mind, or maybe we'll be turned into hamburger by robotic concentration camp guards.
This suicide-pact of "AI goes crazy and 100 people rule the world with 99% of the world's wealth" or "AI fails badly and everyone's standard of living drops 3 levels, except for the 100 people who rule the world with 99% of the world's wealth" is not what I signed up for. Nor is it in any way sustainable or wise.
Too much class distinction and wealth disparity between lower and upper classes, plus a surplus of unemployed lower-class men, is how many revolts, revolutions, and wars have started.