"This morning at 8:00 am Pacific, there were 5 simultaneous assassination attempts on tech executives across the Bay Area. The victims, all tech executives known to us, have suffered serious injuries. It is reported that Securibot 5000s were involved. Securibot Inc. declined to comment. This is a developing story."
Humanoid robots became possible, so people are racing to be first to market, assuming it might be a giant market (it's potentially cheap labor, so of course it might be huge - the microcomputer was).
Economic research has consistently found that when average income in a society is higher and fewer people are poor, more money circulates through the economy and the rich benefit more as well.
It comes down to whether the people in power think they are playing a zero-sum game and are driven by greed. We see plenty of dictatorships that are very resource-wealthy and yet whose societies suffer in abject poverty. Such leaders care nothing about making their people's lives better and will gladly kill them wholesale if they become problematic.
Just like billions are not about "being rich", this is about CONTROL. Control of the economy, and how people live, and control over one's own life.
Abstraction is a beast: reducing everything, regardless of what it actually is, to some dollar figure is terrible for understanding. Billionaires don't have Scrooge McDuck money at home where they swim in coins; they control huge parts of the economy.
And as long as they need workers, they will want those workers not to live too well - better alternatives would raise the price of labor. Who would want to work in an Amazon warehouse if they had options that didn't involve working for the billionaires?
Being "poor" in this context means having a lot less control over how you live, not that you live on the streets. Although, as soon as you lose your value, e.g. by getting too sick, that is always on the table too.
Income inequality is very bad in its own right.
First, you get a particular group of people to work for you. You tell them they are better than all the other poor people out there - that is, you get them to be nationalistic, racist, etc. You also give them a little bit more than the abjectly poor, so they have something they fear to lose. And you let them know that if they upset the situation, retribution will be swift and brutal and will affect anyone they know and love.
Plenty of flying cars existed through the 1900s, including commercial ones: https://en.wikipedia.org/wiki/Flying_car
The International Space Station was launched in 1998.
[Citation needed]
No LLM is yet being used effectively to improve LLM output in exponential ways. Personally, I'm skeptical that such a thing is possible.
LLMs aren't AGI, and aren't a path to AGI.
The Singularity is the Rapture for techbros.
In 2000, webcams were barely a thing and audio was often recorded to dictaphone tapes; now you can find a recorded photo or video of roughly anyone and anything on Earth: maybe a tenth of all humans; almost any place, animal, insect, or natural event; almost any machine, mechanism, invention, or painting; a large sampling of indoor spaces both public and private; almost any festival, event, or tradition; a very large sampling of people doing things and teaching all kinds of skills; and tons of measurements of locations, temperatures, movements, weather, experiment results, and so on.
The ability of computers to process information jumped with punched card readers, with electronic computers in the 1940s and 50s, again with transistors in the 1960s, integrated circuits and microprocessors in the 1970s, commodity computer clusters (Google) in the 1990s, maybe again with multi-core desktops for everyone in the 2000s, with general-purpose GPUs in the 2010s, with faster commodity networking from 10Mbit to 100Gbit and beyond, and with storage moving from SCSI and RAID to SATA, SAS, and SSDs.
It's now completely normal to check Google Maps for road traffic and how busy stores are (picked up in near realtime from the movement of smartphones around the planet), to do face and object recognition and search in photos, to do realtime face editing and enhancement on a smartphone's mobile chip while recording, to track increasing amounts of exercise and health data from increasing numbers of people, to call and speak to people across the planet and have your voice transcribed automatically to text, or to download gigabytes of compressed Wikipedia onto a laptop and play with it in Python over a weekend just for fun.
Consider the "AI" stuff (LLMs, neural networks and other techniques, PyTorch, TensorFlow, cloud GPUs and TPUs), the increase in research money, the companies competing to hire the best researchers, and the growing number of tutorials and of people around the world wanting to play with it and able to do so. Do you predict that by 2030, 2035, 2040, 2045, 2050 ... 2100, we'll have manufactured more compute power and storage than has ever been made, several times over, and made it more and more accessible to more people - and yet nothing will change? That nothing interesting or new will have been found, deliberately or stumbled upon accidentally; nothing new understood about human brains, biology, or cognition; no new insights, products, models, or AI techniques developed or become normal; no once-in-a-lifetime geniuses having any flashes of insight?
It's not the singularity.
The singularity is a specific belief that we will achieve AGI, and the AGI will then self-improve at an exponential rate, allowing it to become infinitely more advanced and powerful (much more so than we could ever have made it), and it will then also invent loads of new technologies and usher in a golden age. (Either for itself or us. That part's a bit under contention, from my understanding.)
That is one version of it, but not the only one. "John von Neumann is the first person known to have discussed a "singularity" in technological progress.[14][15] Stanislaw Ulam reported in 1958 that an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue""[1]. A time such that people before it would be unable to predict what came after, because it was so different. (And which I argue in another comment[2] is not a specific cutoff time, but a trend over history of the future being increasingly hard to predict over shorter and shorter timeframes.)
Apart from AGI, or von Neumann accelerationism, I also understand it as augmenting human intelligence: "once we become cyborgs and enhance our abilities, nobody can predict what comes next"; or artificial 'life' - "if we make self-replicating nano-machines (that can have Darwinian natural selection?), all bets about the future are off"; or "once we can simulate human brains in a machine, even if we can't understand how they work, we can run tons of them at high speeds".
> and usher in a golden age. (Either for itself or us. That part's a bit under contention, from my understanding.)
Arguably, we have built weakly-superhuman entities, in the form of companies. Collectively they can solve problems that individual humans can't, live longer than humans, deploy and exploit more resources over larger areas and longer timelines than humans, and have shown a tendency to burn through workers and ruin the environment that keeps us alive even while supposedly guided by human intelligence. I don't have very much hope that a non-human AGI would be more aligned with our interests than companies made up of us are.
Also yes LLMs are indeed AGI: https://www.noemamag.com/artificial-general-intelligence-is-...
This was Peter Norvig's take. AGI is a low bar because most humans are really stupid.
I don’t understand this perspective. There are numerous examples of technological progress that then stalls out. Just look at batteries, for example. Or cases where advancements are too expensive for widespread use (e.g. why no one flies the Concorde any more).
Why is previous progress a guaranteed indicator of future progress?
If AGI doesn't happen, then good. You get to keep working and playing and generally screwing off in the way that humans have for generations.
On the other hand, if AGI happens, especially any time soon, you are exceptionally fucked along with me. The world changes very rapidly and there is no getting off Mr. Bones' Wild Ride.
>Why is previous progress a guaranteed indicator of future progress?
In this case, because nature already did it. We're not just inventing and testing something out of whole cloth. And we know there are still massive efficiencies to be gained.
For me the Concorde is an example of how people look at stuff incorrectly. In the past we had to send people places very quickly to do things. This was very expensive and inefficient. I don't need to get on a plane to have an effect just about anywhere else in the world now. The internet and digital mediums give me a presence at other locations that is very close to being there. We didn't need planes that fly faster than the speed of sound; we needed strings that communicate at the speed of light.
There's no "rapid acceleration of progress". If anything there's a decline, and even an economic decline.
Take away the financial bubbles based on deregulation and a huge explosion of debt, and the last 40 years of "economic progress" are just a mirage filling a huge bubble with air in actual advancement terms - unlike the previous millennia.
The industrial revolution increased the pace, but it was already there, not flat or randomly fluctuating (think ancient hominids vs. early agriculture vs. the Bronze Age, vs. the Babylonian and Assyrian empires, then Greece and Persia, later Rome, later the Renaissance, and so on).
Post 1970s most of the further increase has been based on mirages due to financialization, and doesn't reflect actual improvement.
Of course it does. It would be good if you would try to actually support such controversial claims with data.
The world can produce more things cheaper and faster than ever and this is an economic decline? I think you may have missed the other 6 billion people on the planet getting massive improvements in their quality of life.
I think you have missed that it's easy to get "massive improvements in your quality of life" if you start from merely-post-revolution-era China or 1950s Africa or colonial India.
Much less so if you have plateaued, as the US and Europe have, and have lived off of increased debt ever since the 1970s.
Increased debt mostly goes to the goods that technology cannot, at least yet, reproduce. For example, they aren't making new land. Taste, NIMBYism, and current laws stop us from increasing housing density in a lot of places too. Healthcare in the US is still quite limited by laws and made expensive because of them.
Who was it who stated that every exponential was just a sigmoid in disguise?
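The point is easy to see numerically. A minimal sketch (my own illustration, not from the thread; the growth rate and carrying capacity are arbitrary): a logistic (sigmoid) curve is nearly indistinguishable from a pure exponential early on, and only later flattens out.

```python
import math

def exponential(t, r=1.0):
    """Pure exponential growth: doubles forever."""
    return math.exp(r * t)

def sigmoid(t, r=1.0, capacity=1e6):
    """Logistic growth: looks exponential early, then saturates at `capacity`."""
    return capacity / (1 + (capacity - 1) * math.exp(-r * t))

# Early on, the two curves are nearly indistinguishable...
for t in [0, 2, 4]:
    print(f"t={t}: exp={exponential(t):.3f} sigmoid={sigmoid(t):.3f}")

# ...but later the sigmoid flattens toward its carrying capacity,
# while the exponential keeps exploding (exp(40) is on the order of 1e17).
print(sigmoid(40))  # ≈ 1e6
```

So if you only observe the early part of the curve, you can't tell which one you're on - which is exactly the force of the quip.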
> most humans are really stupid.
Statistically, don't we all sort of fit somewhere along a bell curve?
Think of stupidity as the consequence of interacting with one's environment with negative outcomes. In a simple environment with few negative outcomes, even someone with an 80 IQ may not be considered stupid. But if your environment rapidly grows more complex and the amount of thinking required for positive outcomes increases, then even someone with a 110 IQ may quickly find themselves in trouble.
I look at the trajectory of LLMs, and the shape I see is one of diminishing returns.
The improvements in the first few generations came fast, and they were impressive. Then subsequent generations took longer, improved less over the previous generation, and required more and more (and more and more) resources to achieve.
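A toy model of that shape (my own illustration; the numbers are purely hypothetical, not real benchmark data): if capability is roughly logarithmic in compute, then equal-sized capability steps cost ten times the resources of the previous step.

```python
import math

def capability(compute):
    """Hypothetical: capability grows with the log of compute spent."""
    return math.log10(compute)

# Each "generation" spends 10x the compute of the last...
generations = [1e3, 1e4, 1e5, 1e6]
scores = [capability(c) for c in generations]

# ...yet the capability gain per generation stays flat:
for c, s in zip(generations, scores):
    print(f"compute={c:>9.0f}  capability={s:.1f}")
```

Under that assumed curve, exponentially growing spending buys only linear improvement - which is what "diminishing returns" looks like from the outside.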
I'm not interested in one guy's take that LLMs are AGI, regardless of his computer science bona fides. I can look at what they do myself, and see that they aren't, by most very reasonable definitions of AGI.
If you really believe that the singularity is happening now...well, then, shouldn't it take a very short time for the effects of that to be painfully obvious? Like, massive improvements in all kinds of technology coming in a matter of months? Come back in a few months and tell me what amazing new technologies this supposed AGI has created...or maybe the one in denial isn't me.
It seems even more true if you look at how exponentially OpenAI's spending has increased since the initial public release in 2022 to deliver improvements. We're now talking upwards of $600B/yr of spending on LLM-based AI infrastructure across the industry in 2026.
Instead, a subconscious process assembles the words to support my stream of consciousness. I think that LLMs are very similar, if not identical.
Stream of thought is accomplishing something superficially similar to consciousness, but without the ability to be innovative.
At any rate, until there’s an artificial human level stream of consciousness in the mix for each AI, I doubt we’ll see a group of AIs collaborating to produce a significantly improved new generation of AI hardware and software minus human involvement.
Once that does happen, the Singularity is at hand.
Your species would have watched humans go from hairless mammals that basically performed the same set of actions and had the same needs as your species, to aliens that might as well have landed from another planet (other than that you don't even know other planets exist). Now forests disappear in an instant. Lakes appear and disappear. Weird objects cover the ground and fill the sky. The paradigms that worked for eons are suddenly broken.
But you, you're a human, you're smart. The same thing couldn't possibly happen to you, right?
A hundred million years ago, every day on Earth was much like every other day and you could count on that. As you sweep forwards in time you cross things like language, cooperation, villages, control of fire, and the before/after effects are distinctly different. The nearer you get to the present, the more of those changes happen and the closer they happen, like ripples on a pond getting closer to the splash point, or like the whispers of gravity turning into a pull and then a crunch. "Singularity" as an area closer to the splash point where models from outside can't make good predictions keeps happening - a million years ago, who would have predicted nations and empires and currency stamped with a human face? Fifty thousand years ago, who could have predicted skyscrapers with human-made train tunnels underground beneath them, or even washing bleached white bedsheets made from cotton grown overseas? Ten thousand years ago, who could have predicted container shipping through the human-made Panama canal? A thousand years ago who could have predicted Bitcoin? Five hundred years ago, who could have predicted electric motors? Three hundred years ago who could have predicted satellite weather mapping of the entire planet or trans-Atlantic undersea dark fibre bundles? Two hundred years ago, who could have predicted genetic engineering? A hundred and fifty years ago, who could have predicted MRI scanners? A hundred years ago, who could have predicted a DoorDash rider following GPS from a satellite using a map downloaded over a cellular data link to a wirelessly charging smartphone the size of a large matchbox bringing a pizza to your house coordinated by an internet-wide app?
In 2000, with BlackBerry and Palm Treo and HP Jornada and PalmPilot and Windows Phone and TomTom navigation, who was expecting YouTube, Google Maps with satellite photos, Google StreetView, Twitch, Discord, Vine, TikTok, Electron, Amazon Kindle with worldwide free internet book delivery, or the dominance of Python or the ubiquity of bluetooth headphones?
Fifty years ago is 1975: batteries were heavy and weak, cameras were film-based, bulbs were incandescent, and Betamax, VHS, and semiconductors were barely a thing. Who was predicting micro-electromechanical timing devices, computer-controlled LED Christmas lights playing tunes in greetings cards, DJI camera drones affordable to the general population, Network Time Protocol synchronising the planet, the normality of video calling from every laptop or smartphone, or online shopping with encrypted credit card transactions hollowing out the high streets and town centers?
The strange attractor at the end of history might be a long way away, but it's pulling us towards it nonetheless and its ripples go back millions of years in time. It's not like there's (all of history) and then at one point (the singularity where things get weird). Things have been getting weird for thousands and thousands of years in ways that the people before that wouldn't or couldn't have predicted.
https://www.bradford.ac.uk/news/archive/2025/gaza-bombing-eq...
About that…
"Robotic security. [...] The armed mass as a model for the revolutionary citizenry declines into senselessness, replaced by drones. Asabiyyah ceases entirely to matter, however much it remains a focus for romantic attachment. Industrialization closes the loop, and protects itself." [0]
The important part here is that "[i]ndustrialization [...] protects itself". This is not about protecting humans ultimately. Humans are not autonomous, but ultimately functions of (autonomous) capital. Mark Fisher put it like this (summarizing Land's philosophy):
"Capital will not be ultimately unmasked as exploited labour power; rather, humans are the meat puppet of Capital, their identities and self-understandings are simulations that can and will ultimately be sloughed off." [1]
Land's philosophy is quite useful for providing a non-anthropocentric perspective on various processes.
[0] Nick Land (2016). The NRx Moment in Xenosystems Blog. Retrieved from github.com/cyborg-nomade/reignition
[1] Mark Fisher (2012). Terminator vs Avatar in #Accelerate: The Accelerationist Reader, Urbanomic, p. 342.
I agree with it. Consider financial markets, for example. There are individual humans whose account balances are changing, but the system as a whole is not an instrument of any human, not the buyers, not the sellers, and not the exchange operators, and yet it dictates the large scale structure of society in ways unimaginable a century ago.
We are already enslaved to capitalism, working against our own interests, in service to the company and the company alone - this meta-organism we value above all else on Earth.
[2+√7i] https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
In Land's own words:
"Since capitalism did not arise from abstract intelligence, but instead from a concrete human social organization, it necessarily disguises itself as better monkey business, until it can take off elsewhere. It has to be the case, therefore, that cynical evo-psych reduction of business activity remains highly plausible, so long as the escape threshold of capitalism has not been reached. No one gets a hormone rush from business-for-business while political history continues. To fixate upon this, however, is to miss everything important (and perhaps to enable the important thing to remain hidden). Our inherited purposes do not provide the decryption key." [0]
[0] Nick Land (2013). Monkey Business in Xenosystems Blog. Retrieved from github.com/cyborg-nomade/reignition
If you're open to exploring Land's perspective more deeply, you can read the introduction here: https://retrochronic.com/