My prediction is no, because productivity gains must benefit the lower classes to see a multiplier in the economy.
For example, ATMs did cause a drop in teller jobs, but access to cash at any hour increases the velocity of money in the economy. It decreases the savings rate and encourages spending among the class of people whose money imparts the highest multiplier.
AI does not. All the spending on AI goes to a very small minority, who have a high savings rate. Junior employees who would have productively joined the labor force at good wages must now compete to join it at lower wages, depressing their purchasing power and reducing the flow of money.
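The multiplier argument above can be sketched with the standard textbook formula. A minimal illustration, assuming hypothetical savings rates (the 10% and 60% figures below are made up for the example, not measured data):

```python
# Illustrative sketch only: the textbook Keynesian spending multiplier,
# 1 / (1 - MPC), where MPC is the marginal propensity to consume
# (the share of each extra dollar that gets re-spent rather than saved).

def spending_multiplier(mpc: float) -> float:
    """Total spending generated per initial dollar (sum of the geometric series)."""
    if not 0 <= mpc < 1:
        raise ValueError("MPC must be in [0, 1)")
    return 1 / (1 - mpc)

# A household that re-spends 90 cents of each dollar (10% savings rate)
# versus one that re-spends only 40 cents (60% savings rate):
print(spending_multiplier(0.90))  # ~10x: each dollar generates ~$10 of total spending
print(spending_multiplier(0.40))  # ~1.67x
```

Same initial dollar, wildly different downstream spending, which is the whole point about who the money goes to.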
Look at the most common uses for AI: cutting out menial decisions such as customer service. There are no "productivity" gains for the economy here. Each person in the US hired to do that job would spend their entire paycheck. Now instead, that money goes to a mega-corp and the savings are passed on to execs. The price of the service provided is not dropping (yet). Thus, no technology savings are occurring, either.
In my mind, the outcomes are:
* Lower quality services
* Higher savings rate
* K-shaped economy catering to the high earners
* Sticky prices
* Concentration of compute in AI companies
* Increased price of compute prevents new entrants from utilizing AI without paying rent-seekers, the AI companies
* The cycle repeats from the top
We may reach a point where the only ones able to afford compute are AI companies and those that can pay AI companies. Where is the innovation then? It is a unique failure outcome I have yet to see anyone talk about, even though the supply and demand issues are present right now.
Baumol's cost disease hurts the lower classes by restricting their access to services like health care and education, and LLMs/agents make it possible to increase productivity in these areas in ways which were once unimaginable. The problem with services is that they're typically resistant to productivity growth, and that's finally changing.
If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam.
You've expressed very clearly what LLMs would have to do in order to be economically transformative.
"If you can get high quality medical advice for effectively nothing, if you can get high quality individualized tutoring for free, that's a pretty big game changer for a lot of people. Prices on these services have been rising to the stratosphere over the past few decades because it's so difficult to increase the productivity of individual medical practitioners and educators. We're entering an era that could finally break this logjam."
It's not that process innovations are lacking, it's that product innovations are perceived as an indignity by most people. Why should one child get an LLM teacher or doctor while others get individualized attention by a skilled human being?
Is the value in the outcome of receiving medical advice and care, and becoming educated, or is the value just in the co-opting of another human being's attention?
If the value is in the outcome, the means to achieving that aren't of much consequence.
If AI displaces human educators, yes, their supply shrinks -- but we can't assume which direction demand for them will go.
We've seen this pattern before: as recorded music became free, live performance got more expensive, and therefore much less accessible than it used to be.
What's likely to happen is that "worse" (read: AI) education will become much cheaper, while "better" (read: in-person) education that involves human connection-driven benefits will become much less accessible compared to what it is today.
Most people may consider it a win. It's certainly not a world I'm looking forward to.
Fields need a large base of participants to produce great ones. This is exactly why software has been so extraordinary over the past 30 years: an unusual concentration of gifted minds from across humankind committed themselves to it.
In my view, today's equivalents of Bach, Rachmaninoff, and Cole Porter probably aren't writing symphonies. They've decided to write code for a living. Which is why any Great American Songbook made today won't hold a candle to one from the 1950s.
How many of us have a reminiscence that starts “looking back, the most life-changing part of my primary or secondary education was ________,” where the blank is a person, not a curriculum module? How many doctors operate, at least in part, on hunches—on totalities of perception-filtered-through-experience that they can’t fully put into words?
I’m reminded of the recent account of homebound elderly Japanese people relying on the Yakult delivery lady partly for tiny yoghurt drinks, but mainly for a glimmer of human contact [0]. Although I guess that cuts to your point: the value in that example really is just co-opting another human’s attention.
In most of these caring professions, some of the value is in the measurable outcome (bacterial infection? Antibiotic!), but different means really do create different collections of value that don’t fully overlap (fine, I’ll actually lay off the wine because the doctor put the fear of the lord in me).
I guess the optimistic case is, with the rote mechanical aspects automated away, maybe humans have more time to give each other the residual human element…
But Baumol's argument, which you introduced to the conversation, is that outcome and process cannot actually be distinguished, even if a distinction in thought is possible among economic theorists.
How is that Baumol's argument? How is 'outcome' vs 'process' relevant to his argument at all?
'Cost disease' is just the foundational observation that the cost of output from industries with stagnant productivity will increase, because workers in that industry can be more valuable in other industries, shrinking the relative workforce in the stagnant one.
If you want to make the output from a stagnant industry available to a broader spectrum of the population then you have to improve the productivity of that industry.
For education, if you know as much as the average Harvard grad, can you give yourself a Harvard degree that will be as readily accepted in a job application or raising funds for a new business?
That's a weird way of describing it.
A machine telling me to exercise and eat right will be ignored, even if the advice is correct. A person I trust taking me aside, looking me in the eye and asking me the same would be taken far more seriously.
OTOH, if you don't need to be persuaded and just want information on how best to go about doing it, then I think it makes little difference where the information comes from as long as it's of reasonable quality.
There's also the deeper philosophical question of what is the meaning of life, and if there's inherent value in learning outside of what remunerative advantages you reap from it.
You already can get good-quality medical advice "for nothing", unless it requires e.g. a blood test. The question is how actionable such advice will be, and how even its quality will be.
By selling those services at a cost of “free”, hyperscalers eliminate competition by forcing market entrants to compete against a unit price of 0. They have to have a secondary business to subsidize the losses from servicing the “free” users, which of course is usually targeted advertising to capitalize on the resources paid by users for access. Or simply selling to data brokers.
With the importance of training data and network effects, “free” services even further concentrate market power. Everyone talks about how AI is going to take away jobs, but no one wants to confront how badly the anticompetitive practices in big tech are hurting the economy. Less competition means less opportunity for everyone else, regardless of consumer benefit.
The only way it works if the “free” service for tutoring or healthcare is through government subsidies or an actual non-profit. Otherwise it’s just going to concentrate market power with the megacorps.
Look at all the deprecated Google products. What happens when Gemini-SaaS makes billions from licensing to other companies, and Gemini-Charity-for-the-poors starts losing money?
Sadly, the bigger the $$ in the tech pie, the more we have attracted robber barons, etc.
In aggregate, this is true, but there are many ways to game the system to one's advantage and get a true "free lunch." For example, people watching YouTube with an adblocker while logged out provide Google with no income or useful telemetry. Likewise, you can get practically unlimited GPT/Claude/etc. by using multiple accounts.
TANSTAAFL has two main implications. First, nothing is free; someone has to pay for it. Second, money is not the only thing you pay with; every choice has an opportunity cost. Gaming the system costs someone something.
AI will bring about a de-sequestering of talent and resources from some sectors of the economy. It's very difficult to predict where these people and resources will go after that, and what effect that will have upon the world.
By this logic, the invention of mechanized farm equipment, which displaced farm labor, didn't increase productivity.
Productivity gains in the case of mechanized labor got everyone out of subsistence farming and into factories.
AI gets everyone out of every job and into nothing.
It seems likely to me that we will reach a violent, bloody revolt before we possibly reach this point. That may be why no one is talking about this failure mode.
Nah. I think "good enough AI for 95% of people" will be able to run locally within 3-5 years on consumer-accessible devices. There will be concentration of the best compute in AI companies for training, but inference will always become cheaper over time. Decommissioned training chips will also become inference chips, adding even more compute capacity to inference.
This is like computing once again. In 1990 only the upper class could afford computers, as of 2000 only the upper class owned mobile phones, as of now more or less everyone and their kid has these things.
My family was on the border of upper-lower and lower-middle and we bought a computer once and used it for 10+ years. I dumpster dove later to scavenge parts for upgrading until the mid 2000s when cheap computers became available.
What, like a yearly vacation? Maybe they stayed home for Christmas one year instead of flying to visit family
A lot of people recognize this pattern even if they can't articulate it, and that's why they hate AI so much. To them, it doesn't matter if AI lives up to the hype or not. Either it does and we're staring down a future of 20%+ unemployment, or it doesn't and the economy crashes because we put all our eggs in this basket.
No matter what happens, the middle class is likely fucked, and anyone pushing AI as "the future" will be despised for it whether or not they're right.
Personally, I think the solution here might be to artificially constrain the supply of productivity. If AI makes the average middle-class worker twice as productive, then maybe we should cut the number of work hours expected from them in a given week.
The complete unwillingness of people in power to even acknowledge this problem is disheartening, and is highly reminiscent of the rampant corruption and wealth inequality of the Gilded Age.
Technological progress that hurts more people than it helps isn't progress, it's class warfare.
We've never seen such a thing before, so I don't know how you can draw such sweeping conclusions about it.
I think this is right. The historical analogue I keep drifting toward is Enclosure. LLM tech is like Enclosure for knowledge work. A small class of capital-holding winners will benefit. Everyone else will mostly get more desperate and dependent on those few winners for the means of subsistence. Productivity may eventually rise, but almost nobody alive today will benefit from it, since either our livelihood will be decimated (knowledge workers, for now) or we will be forced into an AI-slop hell-world where our children are taught by right-wing robo-propagandists, we are surveilled to within an inch of our lives, and our doctor is replaced by an iPad (everyone who isn't fabulously wealthy). Maybe we can eke out a living being the meat arms of the World Mind, or maybe we'll be turned into hamburger by robotic concentration camp guards.
This suicide-pact of "either AI goes crazy and 100 people rule the world with 99% of the world's wealth" or "AI fails badly and everyone's standard of living drops 3 levels, except for the 100 people that rule the world with 99% of the world's wealth" is not what I signed up for. Nor is it in any way sustainable or wise.
Too much class distinction / wealth between lower/upper classes, and a surplus of unemployed lower-class men is how many revolts/revolutions/wars have started.
ATMs didn't just reduce teller headcount per branch. They changed what tellers do. Before ATMs, tellers were mostly cash handlers. After, the remaining tellers shifted toward relationship banking — account openings, loan discussions, financial advice. The job title survived but the job content was transformed.
The deeper question for AI is whether the same pattern holds when the technology affects cognitive tasks rather than physical ones. ATMs automated a narrow physical routine (dispensing cash), which freed up the human role to emphasize the parts machines couldn't do (relationship judgment, complex problem-solving). AI is different because it targets exactly those higher-order cognitive tasks that humans were "freed up" to do after previous automation waves.
So the real question isn't "will AI create new jobs?" — it probably will. The question is whether the new tasks humans get pushed into will be higher-value (as happened with ATMs making tellers into advisors) or lower-value (humans relegated to tasks AI can't yet do, which tend to be physical, uncomfortable, or poorly paid).
The ATM precedent is optimistic, but the mechanism that made it work — automating the simple task so humans could do the complex one — runs in the wrong direction when the technology specifically targets complex cognitive work.
This is not so helpful if AI is boosting productivity while a sector is slowing down, because companies will cut in an overabundant market where deflationary pressure exists.
Net result: ATMs likely cost ~30-40% of bank teller jobs.
Population is really important to adjust for in employment statistics. Compare farmers in the USA in 2025 vs 1800, and yes the absolute number is up but the percentage is way down.
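A quick illustration of that adjustment, using rough, order-of-magnitude numbers (hypothetical round figures chosen for the example, not exact census data):

```python
# Rough, illustrative numbers only: the point is that the absolute headcount
# can rise while the employment share collapses as population grows.
farm_1800, labor_1800 = 1.3e6, 1.8e6   # hypothetical: most workers farmed
farm_2025, labor_2025 = 2.6e6, 170e6   # hypothetical: tiny share of a huge workforce

share_1800 = farm_1800 / labor_1800    # ~72%
share_2025 = farm_2025 / labor_2025    # ~1.5%

print(f"1800: {share_1800:.0%} of the labor force")
print(f"2025: {share_2025:.1%} of the labor force, despite more absolute farmers")
```

Comparing raw headcounts across two centuries without the denominator gives the opposite of the real story.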
I don't think the race to shove an LLM into everything is going to grow the pie.
But I also don't think it is impossible that a use case will present itself that will create further jobs.
The issue is that it's largely unpredictable.
It's a bit like sitting around in the 1950s trying to predict how computers will affect the economy.
It is going to take more than 1 successful deductive leap to get us from 1950s computing -> miniaturisation -> computer in every home -> internet communications.
Every deductive leap we take is extremely prone to being wrong.
We simply cannot lie back and imagine every productive relationship in the economy and then extrapolate every centaur and anti centaur possible for it.
What we do know is that there's a bit of a gold rush to effectively brute-force every possible AI variant into every productive relationship in the economy. The fastest way to get the answer to your question is to do it. Possibly the only way to get the answer is to do it.
For instance, someone might imagine LLMs simply eating a whole bunch of service industry jobs. At the same time, there's a mid state where it eats some, but the remaining staff are employed to monitor the LLMs to prevent them handing out free shit to smart shoppers. It's also easy enough to imagine that LLMs never quite get there and the risk of foul play is too large, so they just don't gain that kind of traction. It's also possible to imagine an end state where LLMs can get to 0% risk if they are constantly trained on data from humans doing the same job, and those humans are gainfully employed in parallel with the LLMs. It's possible that LLMs are great at business as usual, but the risk emerges when company policies change, and the cost of retraining LLMs makes it impractical for move-fast-and-break-things companies to do anything but hire humans. My favourite scenario is one where humans are largely AI-assisted, models are trained on particular people, and a massive cybercrime industry is built around exfiltrating LLM weights trained on high-functioning humans and deploying them, without the humans, to the third world to get 80% of the quality of first-world businesses, making them heavily competitive.
We dont know what we dont know.
Did it? This sounds like describing a company opening a new campus as laying off a third of their employees, partly offset by most of them still having the same job in the same company but at a new desk.
If I'm reading this correctly, the interpretation should be that a third of them were transferred to new branches.
0.66 (two-thirds retention) * 1.4 (40% more branches) ≈ 0.92, so we only expect ~8% were made redundant.
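That arithmetic, spelled out (the retention and branch-growth figures come from the comments above and are taken as given, not verified):

```python
# Figures from the thread, taken as given: each branch keeps ~66% of its
# tellers after ATMs, but there are 40% more branches overall.
retention_per_branch = 0.66
branch_growth = 1.4

# Fraction of the original teller workforce still employed across all branches.
total_employment = retention_per_branch * branch_growth
net_reduction = 1 - total_employment

print(f"teller employment: {total_employment:.1%} of the pre-ATM level")
print(f"net reduction: {net_reduction:.1%}")
```

Two-thirds retention times 1.4x branches lands at roughly 92% of the original workforce, a far smaller cut than the per-branch number suggests.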
However, the number of software companies being started is booming, which should be net neutral or net positive for software developer employment.
Today: 100 software companies employ 1,000 developers each[0]
Tomorrow: 10,000 software companies employ 10 developers each[1]
The net is the same.
[0]https://x.com/jack/status/2027129697092731343
[1]https://www.linkedin.com/news/story/entrepreneurial-spirit-s...
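The headcount arithmetic behind that claim (both scenarios are hypothetical round numbers from the comment, not real market data):

```python
# Hypothetical round numbers from the comment above, not real market data.
today = 100 * 1_000       # 100 companies x 1,000 developers each
tomorrow = 10_000 * 10    # 10,000 companies x 10 developers each

print(today, tomorrow)    # 100000 100000
assert today == tomorrow  # total developer employment is unchanged
```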
Plenty of businesses need very custom software but couldn't realistically build it before.
A recent example: Mitchell Hashimoto was pointing out that he wasn't "first to market" with his product(s); he was (at least) SEVENTH.
If this were seven government-funded teams solving the same problem, people would lose their minds over the 'waste'. But when private companies do it, we call it efficient market competition. The duplication is the same; we just frame it differently.
Edit: fixed some typos caused by fat fingers on a phone keyboard
>If this were seven government funded teams solving the same problem
The problem here is "government funded" - the trials are not rationalized by free-market economics. That is, a 5% better product in the end would not be worth seven competing developments initially.
I'm sure the retort of the AI optimist will be that AI will make the things that person buys cheaper, and there may be truth to that when it comes to things that people buy with disposable income...
But how likely is AI to make actual essentials like housing and food cheaper?
I.e., if a top-tier dev makes $1M today, they'll make $5M in the future. If the average dev makes $100K today, they'll maybe make $60K.
AI likely enables the best of the best to be much more productive, while your average dev will see more productivity but earn less overall.
Previously, software devs were just way too expensive for small businesses to employ. You couldn't do much with just one dev anyway, so there was no point in hiring one. Better to go with an agency or use off-the-shelf software that probably doesn't fill all your needs.
How silly of me to rely on reality when it’s so obvious that AI is benefiting us all.
Anyways, this is the start. Companies are adjusting. You hear a lot about layoffs, but not about unemployment. We're in a high-interest-rate environment with disruptions left and right. Companies are trying to figure out what their strategy is going forward.
I don't expect to see a boom in software developer hiring. I think it'll just be flat or small growth.
We are in negative growth, and the current leadership class keeps talking about all the people they can get rid of.
Look at the Atlassian layoff notice yesterday for example where they lied to our faces by saying they were laying off people to invest more in AI but they totally aren’t replacing people with AI.
Long-term, they will need none. I believe that software will be made obsolete by AI.
Why use AI to build software for automating specific tasks, when you can just have the AI automate those tasks directly?
Why have AI build a Microsoft Excel clone, when you can just wave your receipts at the AI and say "manage my expenses"?
Enjoy your "AI-boosted productivity" while it lasts.
I think this is a bit hyperbolic. Someone still needs to review and test the code, and if the code is for embedded systems I find it unlikely.
For SaaS platforms you’ll see a dramatic reduction, maybe like 80% but it’ll still have a handful of devs.
Factories didn’t completely eliminate assembly line workers, you just need a far fewer number to make sure the cogs turn the way it should.
I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.
Today, for example, you can ask ChatGPT to play chess with you, and it will. You don't need a "chess program"; all the rules are built into the LLM.
Same goes for SaaS. You don't need HR software; you just need an LLM that remembers who is working for the company. Like what a "secretary" used to be.
I didn’t, and thanks for clarifying for me.
This doesn’t pass the sniff test for me though - someone needs to train the models, which requires code. If AI can do everything for you, then what’s the differentiator as a business? Everything can be in ChatGPT, but that’s not the only business in existence. If something goes wrong, who is gonna debug it? Instead of API requests you would debug prompt requests, maybe.
We already hate talking to a robot for waiting on calls, automated support agents, etc. I don’t think a paying customer would accept that - they want a direct line to a person.
I can buy the argument that the backend will be entirely AI and you won’t need to be managing instances of servers and databases but the front end will absolutely need to be coded. That will need some software engineering - we might get a role that is a weird blend of product + design + coding but that transformation is already happening.
Honestly the biggest change I see is that the chat interface will be on equal footing with the browser. You might have some app that can connect to a bunch of chat interfaces that is good at something, and specializations are going to matter even more.
It was a bit of a word vomit so thanks for coming to my TED Talk.
Speed, cost, security, job/task management
Next question
All of that will inevitably be solved.
50 years ago, using a personal computer was an extravagant luxury. Until it wasn't.
30 years ago, carrying a powerful computer in your pocket was unthinkable. Until it wasn't.
Right now, it's cheaper to run your accounting math on dedicated adder hardware. But LLMs will only get cheaper. When you can run massive LLMs locally on your phone, it's hard to justify not using them for everything.
If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?
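Taking those quoted figures at face value (they are the commenter's numbers, not benchmarks), the per-task cost gap works out to roughly:

```python
# Commenter's figures, taken as given: a deterministic system at $0.834/hr
# versus an LLM agent at $37/hr that also takes 40x as long per task.
det_rate = 0.834   # $/hr, deterministic pipeline
llm_rate = 37.0    # $/hr, LLM-based automation
slowdown = 40      # LLM takes 40x the wall-clock time per task

# Cost per task scales with (hourly rate x time taken), so the ratio is:
ratio = (llm_rate * slowdown) / det_rate
print(f"LLM route costs ~{ratio:,.0f}x more per task")  # ~1,775x
```

At three-plus orders of magnitude, the error-rate argument doesn't even need to come into it.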
Also, battery life of mobile devices.
But now, we not only have laptops, we run horribly inefficient GUIs in horribly inefficient VMs on them.
The dollar-per-compute trend goes ever downward.
Yes. That's precisely why my company runs dBase 7 on a fleet of old 286 machines from Compaq. /s
Running obsolete software will be cheaper, but the value provided by the newer technology will make the difference insignificant.
Why do 50,000 tasks with an LLM when, for the same cost, I can do 64,467,235 tasks without an LLM, using the program the LLM created, probably on far cheaper hardware?
Because you'll be outcompeted by people who make the best of the nondeterministic system.
I used the Perspective tool in an image editor to give a rough idea of what the first graph would look like adjusted for population change:
I can see AI making things more productive, but it requires humans to be very expert and do more work. That might mean fewer developers, but they are all more skilled. It will take a while for people to level up, so to speak. It's hard to predict, but I think there could be a rough transition period, because people haven't caught on that they can't rely on AI; either they will have to get a new career or, ironically, study harder.
My subjective assessment is that agents like Copilot got better because of better harnesses and fine tuning of models to use those harnesses. But they are not improving in the direction of labor substitution, but rather in the direction of significant, but not earth-shaking, complementarity. That complementarity is stronger for more experienced developers.
Of course, it could also be argued that some day we may decide that it's no longer necessary at all for code to be written for a human mind to understand. It's the optimistic scenario where you simply explain the misbehavior of the software and trust the AI to automatically fix everything, without breaking new stuff in the process. For some reason, I'm not that optimistic.
For as long as a human remains the customer.
Once humans become the proverbial horse supplanted by the automobile... I don't suppose glue really cares.
We have a massively distorted economy driven by debt financialization and legalized banking cartels. It leads to weird inversions. For example, as long as housing gets more expensive at a predictable rate, housing becomes more affordable instead of less, because banks are more willing to lend money. The inverse is also true: if housing were to drop at a predictable rate, fewer people would be able to get a mortgage, so fewer people could afford to buy. Housing won't drop below the cost of materials and labor (ignoring people dumping housing to escape tax debts, as I would include such obligations in the cost of acquisition). Long term it's not sustainable, but long term is multi-generational.
Many low-cost areas have bad crime problems. There is another little phenomenon where the wealthy, by doing a poor job of governance, can increase the price of their assets by making alternative assets (lower-cost housing) less desirable due to the increase in crime.
Only if every person born needs to have a brand new house constructed for them.
Not if - you know - people die and don't need a house to live in anymore.
But considering how it's been the past 20 years, I'm starting to expect that a lot of the current elder generation will opt to have their houses burnt down to the ground when they die. Or maybe the banker owned politicians will make that decision for them with a new policy to burn all property at death to "combat injustice". Who knows what great ideas they have?
The only solution here is to stop tying people's value to their productivity. That made a lot of sense in the 1900s, but it makes a lot less sense when the primary source of productivity is automation. If you insist on tying a person's fundamental right to a decent and secure life to their productivity, and then take away their ability to be productive, you're left with a permanent and growing underclass of undesirables and an increasingly slim pantheon of demigods at the top.
We have written, like, an ocean of sci-fi about this very subject, and somehow we still fail to properly consider this as a likely outcome.
This is extremely hand-wavy.
Can you be more concrete in what you think this looks like?
The way I see it, we're only 5-10 years away from having general-purpose robots and AI that can do basically anything. If the price of that automation is low enough, there will be massive layoffs as workers are replaced.
There's no way to "naturally" solve the problem of skyrocketing unemployment without government involvement.
Disconnecting value from productivity sounds good if you don't examine any of the consequences.
Can you build a society from scratch using that principle? If you can't then why would it work on an already built society?
Like, if we're in an airplane flying, what you're saying is the equivalent of getting rid of the wings because they're blocking your view. We're so high in the sky we'd have a lot of altitude to work with, right?
In this society there is literally nothing for anyone else to do. Do you think they deserve to be cut out of sharing the value generated by The Engineer and the machine, leaving them to starve? Do you think starving people tend to obey rules or are desperate people likely to smash the evil machine and kill The Engineer if The Engineer cuts them off? Or do you think in a society where work hours mean nothing for an average person a different economic system is required?
To derive an alternate system you need alternate axioms. The axioms of our liberal society are moral equality and peaceful coexistence. Among such equals, no one person, group, or majority has the right to dictate to another. What axioms do you propose that would constrain The Engineer? How would you prevent enslaving him?
Eeeeeerrrr, wrong! This is garbage hypercapitalist/libertarian ideology.
Did you earn your public school education? Did you earn your use of the sidewalk or the public parks and playgrounds? Did you earn your library card? Did you earn your citizenship or right to vote? Did you earn the state benefits you get when you are born disabled? Did you earn your mother’s love?
No, these are what we call public services, unalienable rights, and/or unconditional humanity. We don’t revolve the entire world and our entire selves solely around profit because it’s not practical and it’s empty at its core.
Arguably we still do too much profit-based society stuff in the US where things like healthcare and higher education should be guaranteed entitlements that have no need to be earned. Many other countries see these aspects of society as non-negotiable communal benefits that all should enjoy.
In this hypothetical society with The Engineer, it’s likely that The Engineer would want or need to win over the minds of their society in some way to prevent their own demise and ensure they weren’t overthrown, enslaved, or even just thought of as an evil person.
Many of my examples above, like public libraries, came about because Gilded Age titans didn’t want to die with the reputation of robber barons. Instead, they did something anti-profit and created institutions like libraries and museums to boost the reputation of their name.
It’s the same reason why your local university has family names on its buildings. The wealthiest people in society often want to leave a positive legacy where the alternative without philanthropy and, essentially, wealth redistribution, is that they are seen as horrible people or not remembered at all.
Go on then, how do you decide what people deserve? How do you negotiate with others who disagree with you?
> examples above like public libraries
I agree! The nice part about all these mechanisms is that they’re voluntary.
If you’re suggesting that The Engineer’s actions should be constrained entirely by his own conscience and social pressure, then we agree. No laws or compulsion required.
These examples aren’t generally voluntary once implemented. I can’t get a refund from my public library or parks department if I decide not to use it.
The social pressure placed on The Engineer is the manifestation of law. That’s all law is: a set of agreed-upon social contracts, enforced by various means.
Obviously, many dictators and governments get away with badly mistreating their subjects, and that’s unfortunate, shouldn’t happen, and shouldn’t be praised as a good system.
I think you may be splitting hairs a little bit here and trying really hard to manufacture…something.
What if you are in the minority? Do you just accept the hypercapitalist dictates of the majority? Why not?
Law is more than convention. What distinguishes legitimate from illegitimate law?
The only way for people who disagree axiomatically to get along is to impose on each other minimally.
You figure out your own economic security, I’ll manage mine.
We have a K-shaped economy. Top earners take the majority. The top 20% make up 63% of all spending, and the top 10% account for more than 49%, the highest on record. Businesses adapt to reality and target the best market, in this case the top 10 to 20%, and the rest just get ignored, as in many countries around the world.
All that unlocked money? In a K-shaped economy it mostly goes to those at the top, who look for new places to park/invest it, raising housing prices and pushing the squeeze of excess capital looking for gains into places like nursing homes and veterinary offices. That doesn't result in prices going down, but in them going up.
The benefit to the average American will be more capital in the top earners' hands, looking for more ways to do VC-style squeezes in markets that were previously not as ruthless but are worth moving into now, as there are fewer and fewer 'untapped' areas to squeeze (because the top 10-20% need more places to park more capital). The US now has more VC funds than McDonald's locations.
If goods aren't being sold, then the price will increase.
So newer bank branches look like car dealership offices. There are many little glass rooms where you sit down with a bank employee and discuss loans and other financial products. That's where the money is made.
There's a small area in back with traditional tellers. It's not where the money is made.
More like something closer to 100%. The ATM was notable for enabling a complete change in mission. The historical job of teller largely disappeared, but a brand new job never done before was created in its wake. That is why there was little change in the number of people employed.
> because of deregulation and a booming economy and whatever else.
The deregulation largely happened in the 1970s, while you're talking about 1988 onward. The reality is that ATM actually was the primary catalyst for the specific branch expansion you are talking about. Like above, the ATM made the job of teller redundant, but it introduced a brand new job. A job that was most effective when the workers were closer to the customer, hence why workers were relocated.
I think it would be a mistake to look at this solely through the lens of history. Yes, the historical record is unbroken, but if you compare the broad characteristics of the new jobs created to the old jobs displaced by technology, they are the same every time: they required higher-level (a) cognitive, (b) technical, or (c) social skills.
That's it. There is no other dimension to upskill along.
And LLMs are good at all three, probably better than most people already by many metrics. (Yes even social; their infinite patience is the ultimate advantage. Prompt injection is an unsolved hurdle though, so some relief there.)
Plus AI is improving extremely rapidly. Which means it is probably advancing faster than most people can upskill.
An increasingly accepted premise is that AI can displace junior employees but will need senior employees to steer it. Consider the ratio of junior to senior employees, and how long it takes for the former to grow into the latter. That is the volume of displacement and timeframe we're looking at.
Never in history have we had a technology that was so versatile and rapidly advancing that it could displace a large portion of existing jobs, as well as many new jobs that would be created.
However, what few people are talking about is the disintermediating effect of AI on the power of capital. If individuals can now do the work of entire teams, companies don't need many of them. But by the same token(s) (heheh) individuals don't need money, and hence companies, to start something and keep it going either! I think that gives the bottom side of the K-shaped economy a fighting chance to equalize.
No, because if you think about Star Trek, the endgame is replicators. Or rather, the concept that 100% of basic needs are met.
At some point work becomes unnecessary for a society to function.
The future is anyone's guess, but it is certain that the theoretical ability to meet 100% of your needs is not equivalent to actually having 100% of your needs met.
Greed/Change Avoidance:
If someone invented replicators right now, even if they gave them away completely to the world, what would happen? I can't imagine the finance and military grind just coming to an end to make sure everyone has a working replicator and enough power to run it so nobody has to work anymore. Who gives up their slice of society to make that change, and who risks losing their social status? This is like OpenAI pretending "your investment should be considered a gift because money will have no value soon". That mask came off really quickly.
Status/Hate:
There are huge swaths of the US population that would detest the idea that people they see as "below" them don't have to work. I can imagine political movements doing well on the back of "don't let the lazy outgroup ruin society by having replicators".
Fuck the Poor:
We don't do the easy things to eliminate or reduce suffering now, even when it has real world positive effects. Malaria, tuberculosis, even boring old hunger are rampant and causing horrible, unnecessary suffering all over the world.
Don't tread on me:
I shudder when I think of the damage someone could do with a chip on their shoulder and a replicator.
The road to hell is paved with good intentions:
What happens when everyone can try their own version of bio engineering or climate engineering or building a nuclear power plant or anything else. Invasive species are a problem now and I worry already when companies like Google decide to just release bioengineered mosquitos and see what happens. I -really- worry when the average person decides a big complicated problem is actually really simple and they can just replicate their particular idea and see what happens. Whoops, ivermectin in the water supply didn't cure autism!
Someone give me some hope for a more positive version here because I bummed myself out.
Even replicators need feedstock - people who own the rocks or sand or whatever feeds them will start charging an arm and a leg. Sure, I could feed it dirt and rocks from my own property, but only for so long before I'm undermining the foundation of my own house. To say nothing of people who live in apartments.
And then, if everyone has equal $$, how do you decide who gets to live in the better locations / nicer housing?
People, as they mature, have an innate desire to work. It is good for body and mind. If you're curious about the world, you'll have to do some work one way or another to achieve your goals and satisfy your curiosity.
If "society" is just a function of basic needs, then there's plenty of places in the world to visit where people live like that and use any excess energy in endless fighting against each other instead of work.
If you go in with the attitude that work is hell and humiliation, that's what life is going to give you.
And right now, due to having to work, maintenance on my house is a bit behind. I would also prefer to catch up on that - but again, no one is paying me to do that.
That doesn't mean it has to be wage labor though.
But it is usually only people who enjoy work who manage to do something different with their life than wage labour.
There's an important point here that you're glossing over. The increase in the total number of branches doesn't have to be unrelated to the decrease in the number of tellers each branch requires to operate. The sharp drop in the cost of operating one branch directly means that you can have more branches. This means it isn't true that "a third of bank tellers were made redundant" - some of them were reallocated from existing branches to new ones.
That's not quite my read - the original says per branch there was a 1/3 reduction, but your comment appears to say 1/3 total redundancy.
There was, according to the original, a 40% increase in number of branches, meaning a net increase in tellers (my math might be off though)
edit:
100 branches → 140 branches = +40%
100 tellers/branch → 67 tellers/branch = -33%
140 × 67 = 9,380
100 × 100 = 10,000
net difference -620 or just over 6% (loss)
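The arithmetic above can be sanity-checked with a quick sketch. The baseline of 100 branches with 100 tellers each is purely illustrative (as in the comment), not a real figure:

```python
# Illustrative baseline: 100 branches, 100 tellers per branch (hypothetical numbers).
branches_before = 100
tellers_per_branch_before = 100

branches_after = int(branches_before * 1.40)                 # +40% branches -> 140
tellers_per_branch_after = round(tellers_per_branch_before * (2 / 3))  # -1/3 per branch -> 67

total_before = branches_before * tellers_per_branch_before   # 100 * 100 = 10,000
total_after = branches_after * tellers_per_branch_after      # 140 * 67 = 9,380

net_change = total_after - total_before                      # -620
pct_change = net_change / total_before * 100                 # -6.2%
print(total_after, net_change, pct_change)
```

So under these assumed numbers, the 40% branch expansion nearly offsets the one-third per-branch reduction, leaving a net loss of just over 6%, matching the figure above.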