It can't. It can't even deal with emails without randomly deleting your email folder [1]. Saying that it can make decisions and replace humans is akin to saying that a random number generator can make decisions and replace people.
It's just an automation tool, and just like every automation tool before it, it will create more jobs than it destroys. All the CEO talk about labor replacement is a fuss, a pile of lies to justify layoffs and a worsening financial situation.
[1] https://www.pcmag.com/news/meta-security-researchers-opencla...
The combination of these two things could lead to a situation where there is a massive, startup-dominated market for engineers who can take projects from 0.5 to 1, as well as for consulting companies or services that help founders to do the same.
Another pair of hopes is that a) the LLM systems plateau at a level where any use on complex or important projects requires expert knowledge and prompting, and b) that because of this, the hype of using them to replace engineers dies down. This would hopefully lead to a situation where they are treated like any other tool in our toolbox. Then, just like no one forces me to use emacs or vim (despite the fact that they unambiguously help me to be at least 2x more productive), no one will force me to use LLMs just for the sake of it.
It doesn’t even have to be people with no idea what they’re doing. If you lay off enough smart people from big tech companies, those people might put together small companies that directly compete with larger ones at a fraction of the cost.
These small companies will only be able to sell through what is basically scam marketing.
I doubt that
Focusing on option 2 and software development: teams and companies will only downsize if the demand for software doesn't increase. Make the same amount of stuff you do now, but with fewer people.
What I think will happen is that enough companies will choose to do things they couldn't afford, or that weren't possible, without AI (and new companies will be created to do the same) to offset the ones that choose to cut costs, and the number of people making software will actually increase.
I am pretty sure these are well-known economic ideas, but I don't know the specific terminology for them.
We are already hitting the limits of demand in many areas of life. The fundamental currency that is not growing is human attention.
Sure, now you can be a musician and use AI to help you make an album in a weekend. Great. So can a million other people. Who's going to listen to them? Everyone is already inundated with more music than they could ever listen to in a lifetime.
Now someone who's never written a line of code can vibe code an app and upload it to an app store. Great. So can a million other people. Who's going to install those apps? When was the last time you found yourself thinking, "I wish I had more unmaintained apps on my phone!"?
Now someone who aspires to be a "writer" but lacks the willpower to craft sentences can throw a couple of bullet points at an AI and get a thousand word article out. Great, so can a million other people. Who wants to read more AI slop text on the web? There are already a million self-published authors whose books never get read. That's not going to get better when there are a billion of them.
All of us, every single one of us, is already drowning in information overload and is stressed out because of it. The last thing any of us want is more stuff to pay attention to. All of this AI generated stuff will just be thrown into the void and ignored by most.
You don't need to create the next Facebook, Shopify, X etc.... Because it already exists and controls the market.
Mass unemployment, consolidation of all AI-related benefits in the hands of a few, an increase in demand that doesn't outpace the loss of employment, increases in capability (not AGI) that mean a few chosen people can do most things without hiring anyone else, etc.
I know it is the classic sci-fi dystopia where somehow despite endless advances in tech and automation, the masses can't figure out how to make it work for themselves and end up living in shanty towns on top of each other waiting for gifts from the elite, or scraping in dirt outside the cities, but come on... I just don't see that as being credible.
> They want us spending lots of money on their products, so their wealth increases.
If we're considering scifi scenarios, imagine this: if full blown automation of everything is achieved, why would the "haves" need the "have-nots" buying anything at all? Why would they need them to exist, at all? Think about it. It's an extreme and we're not near it... yet.
> despite endless advances in tech and automation, the masses can't figure out how to make it work for themselves
If the tech (or the really helpful tech) is guarded behind a lock, and they don't hold a key, it's not a matter of figuring things out. Unless by figuring out you mean revolt?
So we reach this post scarcity society, where everyone could be living a life of luxury, but this whole group of "haves" as you call them (who would they be?), somehow form this uniform view that they just don't want 99.9% of other people around and let them all die off while they guard themselves in gated cities or something.
It just makes no sense at all to me. Like in a sci-fi novel or movie where it is a plot requirement, ok, but in reality, I just cannot see the path and all the things required to get to that particular reality. So many ways it would work out differently.
A full automation society, where the implied post scarcity is not necessarily for everyone. Maybe it needs most of the population not to exist in order for the few to enjoy the lack of scarcity. Resources aren't infinite, but greed is.
I mean, resources and wealth could be far better distributed right now, no need for AI, yet most times this is attempted the wealthy fight tooth and nail against it, even though the impact on them would be very small. What makes you think having AI will magically make them better people?
> [...] this whole group of "haves" as you call them (who would they be?) somehow form this uniform view that they just don't want 99.9% of other people around
A uniform view on this matter is easier to achieve by an extremely small subset of people.
And really, do you need to ask "who are they"? I mean, the billionaires and owners of concentrated capital of the world?
> I just cannot see the path and all the things required to get to that particular reality.
You cannot see a path from unchecked capitalism and extreme concentration of capital, via total automation, to this particular reality?
It sounds like a failure of imagination. I see the people at the top as lying sociopaths, and I have no trouble believing this.
In the old days change was slow enough that few people got displaced from jobs requiring any substantial skill (although there was local devastation: for example, court reporters.)
Now, however, we are seeing change happen faster than people's careers. You cannot realistically retrain into another high-skill job--you're going to be the last to be hired. (There's a good reason Social Security Disability has cutoffs at 50 and 60 for how much change can be required!) And, likewise, someone who has worked a desk for decades is not going to be hired for a physical job. (Assuming they even can do it. I can't think of any physical job that wouldn't have me in a lot of pain within weeks at the most.)
I read this take a lot but I don't buy it. This isn't guaranteed by any means. And even if it does happen, isn't it just as likely that AI is deployed into those companies too and they don't actually result in any job growth?
If you don’t care what individual people think then simply don’t talk to them.
Sorry, you made a claim, there's good reason to believe your claim may not pan out, and if it doesn't the consequences are dire.
> New companies will appear doing things that we can't even imagine yet
I have a really big imagination, so I will believe it when I see it. If you have any real idea what these new companies might be doing in the future then I'm all ears. But until then maybe stop trying to claim some kind of future knowledge based on some handwaved nonsense like "we can't even imagine what the future will look like"
And then claiming that's "the reality of the situation"? Please be serious.
Edit: Maybe if you think the future is so unimaginable, you should take a look around at the present. Can you identify anything in our lives today that was not imagined by anyone in the past? Think about how, for every piece of technology made nowadays, someone can say "it's like the Torment Nexus from Famous Piece of Literature!"
And early cars were expensive, dangerous, highly unreliable, uncomfortable, belched foul exhaust, and required knowledge of how to drive AND maintain them. We are far, far from that scenario these days.
Random number generators can't solve open math problems, but it looks like AI agents can? [1]
[1] https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...
It doesn’t have to be effective. It has to make CEOs believe it is effective.
I don't think the comment you're replying to is saying that an evil AI bot will kill people. They are saying something along the lines of: mass job loss doesn't bother the AI companies because in the AI-powered future they envision, population reduction is a positive side effect.
If AI is smart enough to replace the 99.999% it's also smart enough to replace the 0.001%.
Setting people against each other is a time honored way for a small elite to control a large population.
This scenario is not meant to be taken literally.
Energy. The key is controlling their access to energy.
The 99.999% needs to assert their controlling stake in the technology. I don't know what this looks like. Maybe ubiquitous unionizing, coupled with a fully public and openly-trained LLM.
https://www.bentoml.com/blog/navigating-the-world-of-open-so...
No chance of it happening in the US due to lobbying pressure, but maybe in a more civilized country... (unless a distributed SETI@home-type architecture becomes viable)
But that doesn't really matter when we talk about "replacement", because these people don't "do", they simply "own".
They're not concerned about being outpaced at some skill they perform in exchange for money; they just need the productive output of their capital invested in servers/models/etc. to go up.
What's important is that ultimately some small subset owns this, and it doesn't matter how smart they are, only that they own the thing and that it cannot be employed against them (because they hold the key).
Or things could turn out more than fine, and we progress as we've always progressed, towards more abundance; humans in 30 years will live massively better lives than we do today, just as we live massively better lives than people at just about any previous point in history.
This is blatantly unsustainable.
> just about any previous point in history
The late 1990s is an exception for most people.
It sounds like things are going well for you. Be mindful of psychological projection.
General strike and bank runs.
Not to collapse the economic system, but to present a credible threat of collapsing the economic system which AI development, as these elites and their platforms know it, relies on. When they're freaking out, we call for negotiations.

This only works if people with "secure" livelihoods not just participate, but drive the effort. Getting paid six figures or more in a layoff-proof position? Cool, you need to be the first person walking out the door on May 1st (or whenever this happens), and the first person at the bank counter requesting your max withdrawal.
As for bank runs, no one cares. The big banks no longer need retail customer deposits as a source of capital for fractional reserve lending. Modern bank funding mechanisms are more sophisticated than that.
In which the FDIC took unprecedented action, drawing down the DIF to backstop depositors beyond the insured $250k and offering a credit facility to other banks, in order to prevent "contagion" - a panic, a bank run - which was presumed to be likely after the 3rd largest bank collapse in US history. A bank almost no one outside of California had heard of before it died.
Bank runs are serious business, and far from being something "no one cares" about, even just talking about them makes banks nervous, because they can happen to even "healthy" banks. The big banks have been undercapitalized for more than a decade, and even a moderate run on a regional institution threatens the entire system. Which is why it should be done, or at least signaled as incoming; it's good leverage.
>You're free to take a vacation or quit working if you want to. Go ahead.
The implicit, "I'll stay here, where I'm nice and secure," is delusion. People care about your outcomes even if you don't care about ours. Take the invitation to organize with others to secure your own future, to show just how much you're needed before your employer decides that you're not (however erroneously).Anyway, corporate depositors have a duty to safeguard their capital. That means that if a bank run is underway by retail depositors, they're in line too, willing participants or not. This is why, again, even discussion of bank runs is discouraged, and their likelihood and effectiveness downplayed. They're built on turning the imperative of self-interest, which the financial industry is built on, on its head.
Collective humanity needs to think this matter through and take global action. That is the only way, I fear, short of a natural calamity (an act of God) that unplugs humanity from advanced tech for a few generations again.
What? I don’t know anybody who has a layoff-proof position.
>“There are people sitting in our office in King’s Cross, London, working, and collaborating with AI to design drugs for cancer. That’s happening right now.” https://www.htworld.co.uk/news/research-news/isomorphic-labs...
and
>...enables researchers to move seamlessly from AI-generated sequences to functional antibodies in just days https://the-decoder.com/googles-ai-drug-discovery-spinoff-is...
There may also be downsides, like skipping tests that would enhance our fundamental understanding of something because the AI was wrong. But that’s already a problem, and having a better gauge in the early stages could be really helpful.
Not making predictions that they will, just trying to give an example of a benefit that we may get out of this
It can help a little bit in the early stages of drug design, but even if it were perfect (which it's not), there's a massive gap between understanding a protein structure and understanding how a drug or a biological system will interact with it.
In a broader sense, understanding the structure of a protein is only a small part of drug development. Unfortunately biology is complicated, and we're an extremely far way away from solving it.
But LLMs' compute requirements are so high that they push the boundaries of compute, memory, and memory bandwidth, which are fundamental for curing diseases.
The math behind LLMs / neural networks can be, and is, used for medical research. Simulating a whole body with proteins, cells, etc. will bring us the breakthrough we need.
Nothing in modern medical research happens without compute.
AlphaFold definitely helps researchers around the globe.