> To prevent the violence towards them.

"This morning at 8:00 am Pacific, there were 5 simultaneously assassination attempts on tech executives across the Bay Area. The victims, who are all tech executives known to us have suffered serious injuries . It is reported that Securibot 5000s were involved. Securibot inc declined to comment. This is a developing story"

reply
There is no master plan; there's a hype cycle, an environment, and the market.

Humanoid robots became possible, so people are racing to be first to market, assuming it might be a giant one (it's potentially cheap labor, so of course it might be huge; the microcomputer was).

reply
That is exactly the motivation. The problem with being a billionaire is that you still have to associate with poor people. But imagine a world where your wealth completely insulates you from the resentful poor.
reply
That notion is based on the misconception that for there to be very rich people, other people would need to be poor, and that those poor people would resent you.

Economic science has pretty much proven that when the average income in a society is higher and fewer people are poor, the economy moves more money and the rich benefit more as well.

reply
'Misconception' is not really the right word here, and neither is 'need'.

It comes down to whether the people in power think they are playing a zero-sum game and are driven by greed. We see plenty of dictatorships that are very resource-wealthy and yet whose societies suffer in abject poverty. Such leaders have no interest in making their people's lives better and will gladly kill them wholesale if they become problematic.

reply
> other people would need to be poor

Just as billions are not about "being rich", this is about CONTROL: control of the economy, of how people live, and of one's own life.

Abstraction is a beast: reducing everything, regardless of what it actually is, to some dollar figure is terrible for understanding. The billionaires don't have Scrooge McDuck money at home where they swim in coins; they control huge parts of the economy.

And as long as they need workers, they will want those workers not to live too well, since better conditions would raise the price of labor; few people would want to work in places like Amazon warehouses to begin with if they had alternatives better than working for the billionaires.

Being "poor" in this context means having a lot less control over how you live, not that you live on the streets. Although, as soon as you lose your value, e.g. by getting too sick, that is always on the table too.

reply
Relative wealth disparity is what drives lower-class resentment, not absolute poverty.

Income inequality is very bad in its own right.

reply
Watch or read "Altered Carbon" for a taste of that future.
reply
Or the Epstein files, for that matter
reply
deleted
reply
How does a billionaire have to associate with poor people? They can live in a complete bubble: a house in the hills, a chauffeur, private jets, private islands for holidays, and so on.
reply
The people who cook for them, the people who clean for them, the ones who take care of their kids, the ones who sell them stuff or serve them in restaurants...
reply
They have separate kitchens for the prep, the cleaners work while they're out on the yacht, they have people to do the buying, and the restaurants they visit have very well-trained staff who stay out of the way.
reply
All that is irrelevant to the point: they still need to have those poor people around them, trust them, and even trust their security to them.
reply
And this is all very easy to control.

First, you get a particular group of people to work for you. You tell them they are better than all the other poor people out there, that is, you get them to be nationalistic, racist, etc. You also give them a little more than the abjectly poor so they have something they fear losing. And you let them know that if they upset the situation they are in, retribution will be swift and brutal and will affect anyone they know and love.

reply
And this is how every societal hierarchy is structured.
reply
So easy to control that most rich people throughout history were murdered by their guards and their close circle of helpers.
reply
"Most rich people that were murdered" is a different statement from "most rich people"
reply
Epstein had a staff of 70 on that island who kept mum.
reply
Also, they're not building the house or the jet, they're not growing the food... The people close enough can be chosen for their willingness to be sycophants and their happiness to be servants, but unless you're feeding yourself from your own farm or manufacturing your own electronics, there are limits to even a billionaire's ability to control personnel.
reply
Nah, if slave owners like Thomas Jefferson and George Washington could reorient their entire lives around not seeing the "ick" of chattel slavery, I think modern billionaires could do the same thing even more easily now if they wanted to.
reply
The poor maids and servants, the poor chauffeur, the poor chef, etc.
reply
Unless they’re living entirely by themselves, they will always be dependent on poor people.
reply
The fact that people can see that the singularity is basically happening, yet can't imagine humanoid robots getting good rapidly, is why most people here are bad futurists.
reply
The fact that people see "the singularity happening" based on LLM results is why most people are the kind of ignorant tech cheerleaders who, in 1950, predicted robot servants, flying cars, and space colonies by 2000.
reply
This feels different. In the 1950s, rapid technological progress had been driven by the pressures of the Second World War and produced amazing things that held a lot of promise, but few appreciated the depth of complexity of what lay before them. A lot of that complexity had to be solved with software, which expanded the problem set rather than solving it. If we have a general solution to the problem of software, we don't know that there are other barriers that would slow progress as much.
reply
Tomy made the Dustbot robot vacuum in 1985, Electrolux made the Trilobite robot vacuum in 1996, and then there are washing machines, dishwashers, tumble-dryers, microwaves, microwave meals, disposable diapers, fast fashion, plug-in vacuums, floor steamers, carpet washers, home automation for lights and curtains, central heating instead of coal/wood fires and ash buckets, fridge-freezers and supermarkets (removing the need for canning, pickling, jamming, and preserving), takeaways and food delivery, and people having 1-2 children instead of 6-12. The amount of human labour in housework has plummeted since 1900.

Plenty of flying cars existed through the 1900s, including commercial ones: https://en.wikipedia.org/wiki/Flying_car

The International Space Station was launched in 1998.

reply
> the singularity is happening

[Citation needed]

No LLM is yet being used effectively to improve LLM output in exponential ways. Personally, I'm skeptical that such a thing is possible.

LLMs aren't AGI, and aren't a path to AGI.

The Singularity is the Rapture for techbros.

reply
LLMs aren't AGI and maybe aren't a path to AGI, but step back and look at the way the world is changing. Hard disks were invented by IBM in 1953, and now, less than a hundred years later, there's an estimated million terabytes a year of hard disks made and sold, and a cumulative total sold, walking up the prefix ladder of mega, giga, tera, peta, exa, zetta, of around 1.36 zettabytes.
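
A minimal sketch of that prefix arithmetic, in Python, assuming decimal (base-1000) SI prefixes; the "million terabytes a year" and "1.36 zettabytes" figures are the commenter's rough estimates rather than verified data:

    # Walk the SI prefix ladder to make the quoted storage figures concrete.
    PREFIXES = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta"]

    def to_bytes(value, prefix):
        """Convert a value with an SI prefix into plain bytes (base-1000)."""
        return value * 1000 ** (PREFIXES.index(prefix) + 1)

    annual = to_bytes(1_000_000, "tera")  # "a million terabytes a year" = 1 exabyte
    total = to_bytes(1.36, "zetta")       # "1.36 zettabytes" ever sold

    print(f"annual shipments: {annual:.2e} bytes")  # 1.00e+18
    print(f"cumulative total: {total:.2e} bytes")   # 1.36e+21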

In 2000, webcams were barely a thing, audio was often recorded to dictaphone tapes, and now you can find a recorded photo or video of roughly anyone and anything on Earth. Maybe a tenth of all humans, almost any place, animal, insect, or natural event, almost any machine, mechanism, invention, painting, and a large sampling of "indoors" both public and private, almost any festival or event or tradition, and a very large sampling of "people doing things" and people teaching things for all kinds of skills. And tons of measurements of locations, temperatures, movements, weather, experiment results, and so on.

The ability of computers to process information jumped with punched card readers, with electronic computers in the 1950s, again with transistors in the 1970s, semiconductors in the 1980s, commodity computer clusters (Google) in the 1990s, maybe again with multi-core desktops for everyone in the 2000s, with general purpose GPUs in the 2010s, and with faster commodity networking from 10Mbit to 100Gbit and more, and with SATA, then SAS, then RAID, then SSDs.

It's now completely normal to check Google Maps to look at road traffic and how busy stores are (picked up in near realtime from the movement of smartphones around the planet), to do face and object recognition and search in photos, to do realtime face editing/enhancement while recording on a smartphone, to track increasing amounts of exercise and health data from increasing numbers of people, to call and speak to people across the planet and have your voice transcribed automatically to text, to realtime face-swap or face-enhance on a mobile chip, to download gigabytes of compressed Wikipedia onto a laptop and play with it in a weekend in Python just for fun.

"AI" stuff (LLMs, neural networks and other techniques, PyTorch, TensorFlow, cloud GPUs and TPUs), the increase in research money, in companies competing to hire the best researchers, the increase in tutorials and numbers of people around the world wanting to play with it and being able to do that ... do you predict that by 2030, 2035, 2040, 2045, 2050 ... 2100, we'll have manufactured more compute power and storage than has ever been made, several times over, and made it more and more accessible to more people, and nothing will change, nothing interesting or new will have been found deliberately or stumbled upon accidentally, nothing new will have been understood about human brains, biology, or cognition, no new insights or products or modelling or AI techniques developed or become normal, no once-in-a-lifetime geniuses having any flashes of insight?

reply
I mean, what you're describing is technological advancement. It's great! I'm fully in favor of it, and I fully believe in it.

It's not the singularity.

The singularity is a specific belief that we will achieve AGI, and that the AGI will then self-improve at an exponential rate, allowing it to become infinitely more advanced and powerful (much more so than we could ever have made it), and that it will then also invent loads of new technologies and usher in a golden age. (Either for itself or for us. That part's a bit under contention, from my understanding.)

reply
> "The singularity is a specific belief that we will achieve AGI

That is one version of it, but not the only one. "John von Neumann is the first person known to have discussed a "singularity" in technological progress. Stanislaw Ulam reported in 1958 that an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue""[1]. A time when the people before it would be unable to predict what came after it, because it was so different. (And which I argue in another comment[2] is not a specific cutoff time, but a trend over history of the future being increasingly hard to predict over shorter and shorter timeframes).

Apart from AGI, or von Neumann accelerationism, I also understand it as augmenting human intelligence: "once we become cyborgs and enhance our abilities, nobody can predict what comes next"; or artificial 'life': "if we make self-replicating nano-machines (that can undergo Darwinian natural selection?), all bets about the future are off"; or "once we can simulate human brains in a machine, even if we can't understand how they work, we can run tons of them at high speeds".

> and usher in a golden age. (Either for itself or us. That part's a bit under contention, from my understanding.)

Arguably, we have built weakly-superhuman entities, in the form of companies. Collectively they can solve problems that individual humans can't, live longer than humans, deploy and exploit more resources over larger areas and longer timelines than humans, and have shown a tendency to burn through workers and ruin the environment that keeps us alive even while supposedly guided by human intelligence. I don't have very much hope that a non-human AGI would be more aligned with our interests than companies made up of us are.

[1] https://en.wikipedia.org/wiki/Technological_singularity

[2] https://news.ycombinator.com/item?id=46935546

reply
If you look at the rapid acceleration of progress and conclude this way, well, de Nile ain't just a river in Egypt.

Also, yes, LLMs are indeed AGI: https://www.noemamag.com/artificial-general-intelligence-is-...

This was Peter Norvig's take. AGI is a low bar because most humans are really stupid.

reply
> If you look at the rapid acceleration of progress

I don’t understand this perspective. There are numerous examples of technical progress that then stalls out. Just look at batteries for example. Or ones where advancements are too expensive for widespread use (e.g. why no one flies Concorde any more)

Why is previous progress a guaranteed indicator of future progress?

reply
Just think of this as risk management.

If AGI doesn't happen, then good. You get to keep working and playing and generally screwing off in the way that humans have for generations.

On the other hand, if AGI happens, especially any time soon, you are exceptionally fucked along with me. The world changes very rapidly and there is no getting off Mr. Bones' Wild Ride.

>Why is previous progress a guaranteed indicator of future progress?

In this case, because nature already did it. We're not inventing and testing something out of whole cloth. And we know there are still massive efficiencies to be gained.

For me the Concorde is an example of how people look at things incorrectly. In the past we had to send people places very quickly to do things. This was very expensive and inefficient. I don't need to get on a plane to have an effect just about anywhere else in the world now. The internet and digital mediums give me a presence at other locations that is very close to being there. We didn't need planes that fly at the speed of sound; we needed strings that communicate at the speed of light.

reply
If you think AGI is at hand why are you trying to sway a bunch of internet randos who don’t get it? :) Use those god-like powers to make the life you want while it’s still under the radar.
reply
How do you take over the world if you only have access to 1000 normal people? That's what AGI is by the original definition (long forgotten by now): surpassing the MEDIAN human at almost all tasks. How the rebranding of ASI into AGI happened without anyone noticing is kind of insane.
reply
>If you look at the rapid acceleration of progress and conclude this way

There's no "rapid acceleration of progress". If anything there's a decline, and even an economic decline.

Take away the financial bubbles based on deregulation and a huge explosion of debt, and the last 40 years of "economic progress" are, in actual advancement terms, just a mirage filling a huge bubble with air, unlike the previous millennia.

reply
That's completely wrong. There was barely any progress in previous millennia. There was even an economics Nobel prize for showing why!
reply
The GDP per capita of the world has been slowly increasing for several millennia. Same for the advancements in core technology.

The Industrial Revolution increased the pace, but the trend was already there, not flat or randomly fluctuating (think ancient hominids versus early agriculture, versus the Bronze Age, versus the ancient Babylonian and Assyrian empires, versus later Greece and Persia, later Rome, the Renaissance, and so on).

Post-1970s, most of the further increase has been based on mirages due to financialization and doesn't reflect actual improvement.

reply
> Post-1970s, most of the further increase has been based on mirages due to financialization and doesn't reflect actual improvement.

Of course it does. It would be good if you would try to actually support such controversial claims with data.

reply
Lol, wut?

The world can produce more things cheaper and faster than ever and this is an economic decline? I think you may have missed the other 6 billion people on the planet getting massive improvements in their quality of life.

reply
>I think you may have missed the other 6 billion people on the planet getting massive improvements in their quality of life.

I think you have missed that it's easy to get "massive improvements in your quality of life" if you start from the baseline of post-revolution-era China, 1950s Africa, or colonial India.

Much less so if you have plateaued, as the US and Europe have, and have lived off increased debt ever since the 1970s.

reply
And yet in the US I can currently survive an illness by means of technology where I would have died in the 70s. It can be really hard to see the forest for the trees when everything around us is rapidly changing technology.

Increased debt is mostly spent on the goods that technology cannot, at least yet, reproduce. For example, they aren't making new land. Taste, NIMBYism, and current laws stop us from increasing housing density in a lot of places too. Healthcare is still quite limited by laws in the US and made expensive because of them.

reply
> rapid acceleration

Who was it who stated that every exponential was just a sigmoid in disguise?

> most humans are really stupid.

Statistically, don't we all sort of fit somewhere along a bell curve?

reply
The bell curve of IQ and being stupid probably don't have much to do with each other.

Think of stupidity as the consequence of interacting with one's environment and getting negative outcomes. If you have a simple environment with few negative outcomes, then even someone with an 80 IQ may not be considered stupid. But if your environment rapidly grows more complex and the amount of thinking you have to do to get positive outcomes increases, then even someone with a 110 IQ may quickly find themselves in trouble.

reply
Yes, and that's why surpassing it doesn't lead to a singularity except over an infinite timeframe. This whole thing was stupid in the first place.
reply
What rapid acceleration?

I look at the trajectory of LLMs, and the shape I see is one of diminishing returns.

The improvements in the first few generations came fast, and they were impressive. Then subsequent generations took longer, improved less over the previous generation, and required more and more (and more and more) resources to achieve.

I'm not interested in one guy's take that LLMs are AGI, regardless of his computer science bona fides. I can look at what they do myself, and see that they aren't, by most reasonable definitions of AGI.

If you really believe that the singularity is happening now...well, then, shouldn't it take a very short time for the effects of that to be painfully obvious? Like, massive improvements in all kinds of technology coming in a matter of months? Come back in a few months and tell me what amazing new technologies this supposed AGI has created...or maybe the one in denial isn't me.

reply
> I look at the trajectory of LLMs, and the shape I see is one of diminishing returns

It seems even more true if you look at OpenAI's funding from the initial public release in 2022 to now, and how exponentially spending has increased to deliver improvements since. We're now talking upwards of $600B/yr of spending on LLM-based AI infrastructure across the industry in 2026.

reply
In my opinion, LLMs provide one piece of AGI. The only intelligence I’ve directly experienced is my own. I don’t consciously plan what I’m saying (or writing right now).

Instead, a subconscious process assembles the words to support my stream of consciousness. I think that LLMs are very similar, if not identical.

Stream of thought is accomplishing something superficially similar to consciousness, but without the ability to be innovative.

At any rate, until there’s an artificial human level stream of consciousness in the mix for each AI, I doubt we’ll see a group of AIs collaborating to produce a significantly improved new generation of AI hardware and software minus human involvement.

Once that does happen, the Singularity is at hand.

reply
You’re delusional if you think singularity is happening.
reply
If you're any other animal on the planet besides humans, the singularity already happened.

Your species would have watched humans go from hairless mammals that basically had the same set of actions and needs that your species had, to an alien that might as well have landed from another planet (except that you don't even know other planets exist). Now forests disappear in an instant. Lakes appear and disappear. Weird objects cover the ground and fill the sky. The paradigms that worked for eons are suddenly broken.

But you, you're a human, you're smart. The same thing couldn't possibly happen to you, right?

reply
That's like saying "you're delusional if you think we're affected by The Sun's gravity when it's a hundred million miles away".

A hundred million years ago, every day on Earth was much like every other day and you could count on that. As you sweep forwards in time you cross things like language, cooperation, villages, control of fire, and the before/after effects are distinctly different. The nearer you get to the present, the more of those changes happen and the closer they happen, like ripples on a pond getting closer to the splash point, or like the whispers of gravity turning into a pull and then a crunch. "Singularity" as an area closer to the splash point where models from outside can't make good predictions keeps happening - a million years ago, who would have predicted nations and empires and currency stamped with a human face? Fifty thousand years ago, who could have predicted skyscrapers with human-made train tunnels underground beneath them, or even washing bleached white bedsheets made from cotton grown overseas? Ten thousand years ago, who could have predicted container shipping through the human-made Panama canal? A thousand years ago who could have predicted Bitcoin? Five hundred years ago, who could have predicted electric motors? Three hundred years ago who could have predicted satellite weather mapping of the entire planet or trans-Atlantic undersea dark fibre bundles? Two hundred years ago, who could have predicted genetic engineering? A hundred and fifty years ago, who could have predicted MRI scanners? A hundred years ago, who could have predicted a DoorDash rider following GPS from a satellite using a map downloaded over a cellular data link to a wirelessly charging smartphone the size of a large matchbox bringing a pizza to your house coordinated by an internet-wide app?

In 2000, with BlackBerry and Palm Treo and HP Jornada and PalmPilot and Windows Phone and TomTom navigation, who was expecting YouTube, Google Maps with satellite photos, Google StreetView, Twitch, Discord, Vine, TikTok, Electron, the Amazon Kindle with worldwide free internet book delivery, the dominance of Python, or the ubiquity of Bluetooth headphones?

Fifty years ago was 1975: batteries were heavy and weak, cameras were film-based, bulbs were incandescent, Betamax and VHS and semiconductors were barely a thing. Who was predicting micro-electromechanical timing devices, computer-controlled LED Christmas lights, greetings cards that play tunes, DJI camera drones affordable to the general population, the Network Time Protocol synchronising the planet, the normality of video calling from every laptop or smartphone, or online shopping with encrypted credit card transactions hollowing out the high streets and town centers?

The strange attractor at the end of history might be a long way away, but it's pulling us towards it nonetheless and its ripples go back millions of years in time. It's not like there's (all of history) and then at one point (the singularity where things get weird). Things have been getting weird for thousands and thousands of years in ways that the people before that wouldn't or couldn't have predicted.

reply
You still have to find the kids to rape.
reply
[dead]
reply
Gaza is kept as a testing ground for domestic spying and domestic military technology intended to be used on other groups. Otherwise they'd have destroyed it by now. Stuff like Palantir is always tested in Gaza first.
reply
The bombs dropped on Gaza are equivalent to six Hiroshimas - I think we can safely say that Gaza has been destroyed.

https://www.bradford.ac.uk/news/archive/2025/gaza-bombing-eq...

reply
There are still living people in it
reply
> Otherwise they'd have destroyed it by now.

About that…

reply
Isn't Gaza already in ruins? I'd count that as destroyed already.
reply
Not completely. I mean there would be zero living people left. They've been downscaling it.
reply
Sort of. The thing doing the building and being protected is capital, not humans. As Nick Land wrote:

"Robotic security. [...] The armed mass as a model for the revolutionary citizenry declines into senselessness, replaced by drones. Asabiyyah ceases entirely to matter, however much it remains a focus for romantic attachment. Industrialization closes the loop, and protects itself." [0]

The important part here is that "[i]ndustrialization [...] protects itself". This is not about protecting humans ultimately. Humans are not autonomous, but ultimately functions of (autonomous) capital. Mark Fisher put it like this (summarizing Land's philosophy):

"Capital will not be ultimately unmasked as exploited labour power; rather, humans are the meat puppet of Capital, their identities and self-understandings are simulations that can and will be ultimately be sloughed off." [1]

Land's philosophy is quite useful for providing a non-anthropocentric perspective on various processes.

[0] Nick Land (2016). The NRx Moment in Xenosystems Blog. Retrieved from github.com/cyborg-nomade/reignition

[1] Mark Fisher (2012). Terminator vs Avatar in #Accelerate: The Accelerationist Reader, Urbanomic, p. 342.

reply
This reads like absolute gibberish to me. The capitalist system does not function without the motivations of the people running it. Ultimately, every decision and action is in service of some human's interest, or that of his group.
reply
They're saying capital is power. Not analogous: the same thing. Until now, power has always had to be wielded by a human, but it's really the power that is wielding the human as an instrument to channel itself, like Majora's Mask. Once we have power that doesn't need a human, it won't need us, and we'll all be subservient.

I agree with it. Consider financial markets, for example. There are individual humans whose account balances are changing, but the system as a whole is not an instrument of any human, not the buyers, not the sellers, and not the exchange operators, and yet it dictates the large scale structure of society in ways unimaginable a century ago.

reply
Companies are already like this. A boss fires his friend and puts him out of work with a family to feed. Why? It was 'good for the company.' So many inhuman decisions hide behind that fig leaf. It is a way to mentally remove responsibility for these decisions. After all, it was for the good of the company. Stock buybacks while the mailroom is on food stamps? For the good of the company. Dumping the waste into the river instead of safely disposing of it? For the good of the company. Making money off vice and the mentally vulnerable? For the good of the company. Overfishing the ocean such that there won't even be a commercial fishing industry in our lifetimes? For the good of the company. Polluting the earth and ushering in a new age of extinction? For the good of the company.

We are already enslaved to capitalism, working against our own interest, in service to the company and the company alone: this meta-organism we value above all else on earth.

reply
Capitalism is the ultimate 'Norman lord in his castle on conquered land': removed from the consequences of his choices and rent extraction on serfs with whom he shares no culture and no understanding, interested only in what value he can extract from them.
reply
He is forced to do that, or he'll be replaced by someone who will. All hail Moloch[2+√7i]!

[2+√7i] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

reply
From the Landian perspective, initially, sure, the system needs humans. But once you have autonomous and sovereign capital, things could look very different.

In Land's own words:

"Since capitalism did not arise from abstract intelligence, but instead from a concrete human social organization, it necessarily disguises itself as better monkey business, until it can take off elsewhere. It has to be the case, therefore, that cynical evo-psych reduction of business activity remains highly plausible, so long as the escape threshold of capitalism has not been reached. No one gets a hormone rush from business-for-business while political history continues. To fixate upon this, however, is to miss everything important (and perhaps to enable the important thing to remain hidden). Our inherited purposes do not provide the decryption key." [0]

[0] Nick Land (2013). Monkey Business in Xenosystems Blog. Retrieved from github.com/cyborg-nomade/reignition

If you're open to explore Land's perspective more deeply, you can read the introduction here: https://retrochronic.com/

reply
The motivation of the people running the capitalist system is making more money. Remember the entire mantra of 'greed is good'? Group interest can be a super-human entity that you get caught in a loop of serving, even though serving it is not in your best interest. Humans have only been 'mostly' in control of this because there were no other entities capable of exercising that control themselves.

https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

reply