I don't think you can use Lindy on trends as if trends are static objects, but that's another conversation.
I mean, that's called "having an opinion".
Edit: in particular I don’t agree with
But if someone claims that the trend toward increasing AI capabilities will never reach some particular scary level...
One has to agree that the benchmark results are getting “scarier”, which is not automatically implied by finding more goals to optimize for.

The important thing is that we can only show it in hindsight. We don't know which other tasks we are currently mistaken about requiring intelligence. Maybe none of them are?
We don't know. We don't know what intelligence is. If we look at decades and even centuries of attempts to define intelligence, it all looks like moving goalposts. When a definition of intelligence starts to include people or things we don't like to think of as intelligent, we change the definition.
If we don't understand the fundamental limits to any particular kind of trend, our default assumption should be that it will continue for about as long as it has gone on already.
We can, in fact, easily put a confidence interval on this. With 90% odds we're in neither the first 5% nor the last 5% of the trend. Therefore it will probably go on for somewhere between 1/19th as long again and 19 times longer, with a median of as long as it has gone on so far.
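A minimal back-of-the-envelope version of that calculation, assuming we observe the trend at a uniformly random point in its lifetime (the ten-year figure is just a placeholder):

```python
# Copernican/Lindy back-of-the-envelope. Key assumption: we are observing the
# trend at a uniformly random point within its total lifetime.
elapsed = 10.0  # years the trend has run so far (placeholder value)

# With 90% confidence we are past the first 5% and before the last 5% of the lifetime.
low = elapsed * (0.05 / 0.95)    # at the 95% mark only ~1/19th of the elapsed time remains
high = elapsed * (0.95 / 0.05)   # at the 5% mark, 19x the elapsed time remains
median = elapsed                 # at the midpoint, as long again as it has already run

print(f"90% interval for remaining duration: {low:.1f} to {high:.1f} years (median {median:.0f})")
```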
This is deeply counterintuitive. When we expect something to last a finite time, every year it goes on brings us a year closer to when it stops. But every year that it actually goes on brings the expectation that it will go on for a year longer still.
We're looking at a trend. We believe that it will be finite. Our intuition says that every year spent is a year closer to the end. But our expectation becomes that every year spent means it will last yet another year more!
How can we apply that? A simple way is stocks. How long should we expect a rapidly growing company to continue growing rapidly?
For example, take something like a fad or trend; they don't have a hard end date like human lifespan, so it should follow Lindy's law.
However, the likelihood, on average across the population, that you observe a trend is going to be higher at the end of a trend lifecycle than at the beginning. This is baked into the definition - more and more people hear about a trend over time, so the largest quantity of observers will be at the end of the lifecycle, when the popularity reaches its peak.
In other words, if you are a random person, finding out about a trend likely means it is near the end rather than the middle.
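A toy simulation of that observer-bias point; the linear popularity ramp and the 100-step lifecycle are pure assumptions, the point is only the qualitative result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume each fad lasts 100 time steps and its popularity ramps up linearly,
# so more people first hear about it in the later steps than in the earlier ones.
T = 100
popularity = np.arange(1, T + 1, dtype=float)
p_first_hear = popularity / popularity.sum()   # chance a random observer first hears at step t

arrival = rng.choice(T, size=100_000, p=p_first_hear)
print("median fraction of the lifecycle already elapsed when you hear of it:",
      np.median(arrival) / T)   # ~0.7 under these assumptions, i.e. well past the middle
```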
The law only applies for certain types of processes, and is completely wrong for other types (e.g. a human who has lived 50 years may live 50 more, but one who has lived 100 years will certainly not live 100 more). So the question becomes: what type of process are you looking at? And that turns out to be exactly the question you started with: is there a fundamental limit to this growth curve, or not.
"The Lindy effect applies to non-perishable items, like books, those that do not have an "unavoidable expiration date"."
And later in the article you can see the mathematical formulation which says the law holds for things with a Pareto distribution [2]. I'd want to see some sort of good analysis that "the life span of exponential growth curves" is drawn from some Pareto distribution. I don't think it's completely out of the question. But I'm also nowhere near confident enough that it is a true statement to casually apply Lindy's Law to it.
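For what it's worth, the difference is easy to see in a quick simulation; here is a minimal sketch contrasting Pareto-distributed lifetimes (where the Lindy property holds) with roughly human-like ones (where it fails). All parameters are arbitrary illustrations, not fitted to anything:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Heavy-tailed (classical Pareto, minimum 1, shape 1.5): Lindy-like behaviour.
pareto = 1.0 + rng.pareto(1.5, n)

# Roughly human-like lifetimes (soft upper bound around ~100 years): anti-Lindy.
human = np.clip(rng.normal(75, 15, n), 0, None)

def expected_remaining(lifetimes, age):
    survivors = lifetimes[lifetimes > age]
    return (survivors - age).mean()

for age in (10, 50, 100):
    print(age,
          round(expected_remaining(pareto, age)),   # grows with age
          round(expected_remaining(human, age)))    # shrinks with age
```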
The argument given is the same as the one that I first ran across, not by that name, in https://www.nature.com/articles/363315a0. https://en.wikipedia.org/wiki/Doomsday_argument claims that it was a rediscovery of something that had been hypothesized a decade earlier.
I hadn't tried to give it a name, or thought to apply it outside of that context.
As for the mathematical qualms, I'm a big believer in not letting formal mathematical technicalities get in the way of adopting an effective heuristic. And the heuristic reasoning here is compelling enough that I would like to adopt it.
So for example, the longer a time bomb ticks, the less likely it is to go off any time soon. (Assuming the timer isn't visible.) :)
But that's the entire idea of Bayesian reasoning. Which has proven to be surprisingly effective in a wide range of domains.
I'm all for quantifying my ignorance, and using it as an outside view to help guide my expectations. Read the book Superforecasting to understand how effective forecasters use an outside view to adjust their inside view, to allow them to forecast things more precisely.
We expect fresh processes to terminate quickly and long running processes to last for a while longer.
The naive expectation is that AI will slow down b/c Moore's law is coming to an end, but if you really think about the models and how they are currently implemented in silicon, they are still inefficient as hell.
At some point someone will build a tensor processing chip that replaces all the digital matmuls with analogue logamp matmuls, or some breakthrough in memristors will start breaking down the barrier between memory and compute.
With the right level of research funding in hardware, the ceiling for AI can be very high.
All the easily verifiable domains such as mathematics, coding, and things that can be run inside a reasonable simulation are falling very very fast.
By next year if not sooner, mathematicians will be wildly outpaced by LLMs for reasoning.
So it's not impossible to have things that seem orthogonal, like generation speed or context length, have an impact on quality of result.
I'm pretty sure there's a 3-year design goal starting this year that'll do that to any of the qwen, deepseek, etc. models. There's a lot you could do with sped-up models of this quality.
It might even be bad enough that the real bubble is how much we don't need giant data centers, when 80-90% of use cases could just be a silicon chip with a model rather than, as you say, bloated SOTA.
If there's a breakthrough in memristors, you could end up with another 20x reduction in circuit elements (get rid of memory bottlenecks, start doing multiplication ops as log-transform voltage addition).
The ceiling is ultra high for how far AI can go.
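To make the log-domain multiplication idea above concrete, here is a purely numerical toy (no claim about any real chip, and real analog designs would have to handle signs and noise separately): multiplying positive values is the same as adding their logarithms, which is what a log-amp stage would do with voltages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrices of positive values (the log trick only works directly for positives).
A = rng.random((4, 8)) + 0.1
B = rng.random((8, 3)) + 0.1

# Digital baseline: ordinary multiply-accumulate.
C_ref = A @ B

# "Analog" version: each product becomes an addition in log space, then the
# partial products are summed as usual (summing currents, in the hardware analogy).
products = np.exp(np.log(A)[:, :, None] + np.log(B)[None, :, :])  # shape (4, 8, 3)
C_log = products.sum(axis=1)

print(np.allclose(C_ref, C_log))  # True, up to floating-point error
```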
I don't know if they can get their numbers right this way, but this seems a way more useful metric than theoretical capabilities.
It is purely a test of capabilities (can it do a thing that takes a human $X hours), not efficiency (how fast will it do it).
At least I want AI to solve my problems, not score high on an academic leaderboard.
At first the models turned a 5 minute task into a 5 second task (by 5 seconds I mean a very short amount of time, not precisely 5 seconds). Then they turned a 15 minute task into a 5 second task.
Opus 4.6 completes 8 hour tasks all the time but (at least in my experience) it isn't spitting the answer out in 5 seconds anymore. It's using chain of thought and tools and the time to completion is measured in minutes or maybe hours.
In my experiments with local LLMs, a substantial part of the gap between frontier and local (for everyday use) is in tooling and infrastructure.
That is why I am sympathetic to the idea we are leveling off. But to bring in the air speed example from the article, I don't think we've reached the equivalent of the ramjet yet. I suspect in the coming years there will be new architectures, new hardware, and new ways to get even more capable models.
I trained an LLM to write the whole Harry Potter series, and that took JK Rowling like 17 years.
For my next point on the graph, I'll train the LLM to write the Bible, something that took humans >1500 years.
The tasks are obviously all of the form "Go do this, and if you get the following output you passed". Setting up a web server apparently takes 15 minutes for a human, which is news to me since I'm able to search for https://gist.github.com/willurd/5720255, find the python one-liner, and copy it within about ten seconds.
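(The one-liner from that gist is presumably the stdlib static file server; roughly:)

```python
# The usual Python one-liner for a quick static file server:
#   python3 -m http.server 8000
# Spelled out as a tiny script, it is just:
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```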
Anyway, this is cool but it does not mean Claude can perform any human tasks that take less than 8 hours and are within its physical capabilities.
I'm curious what people really mean when they say this. Intelligence is famously hard to define, let alone measure; it certainly doesn't scale linearly; it only loosely correlates to real-world qualities that are easy to measure; etc. Are you referring to coding ability or...?
🙄
Scott makes a Lindy effect argument which is plausible, but don't let that fool you, we still don't know what's going to happen.
All exponentials eventually become sigmoids? Don’t think this can be true without qualifiers.
The issue is that the exponential-looking part of the sigmoid might contain all of human history, sure, but most folks who espouse this theory probably agree that over time everything reaches a steady-enough state to be considered non-exponential, or become oscillatory.
All exponentials eventually become sigmoids because exponential growth always exposes limiting factors that weren't limiting at the beginning. Silicon manufacturing had lots of room for high-margin customers like Nvidia even a year ago (by the mere virtue of outbidding lower-margin customers), but now it is mostly gone, and no amount of money will make fabs build themselves overnight.
[1]: https://stockanalysis.com/stocks/nvda/metrics/revenue-by-seg...
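The textbook toy model for that hand-off from exponential to saturating growth is the logistic curve; a quick numerical sketch with arbitrary parameters:

```python
import numpy as np

r, K, x0 = 1.0, 1000.0, 1.0   # growth rate, carrying capacity, starting size (all arbitrary)

t = np.arange(0, 13)
exponential = x0 * np.exp(r * t)
logistic = K / (1 + (K - x0) / x0 * np.exp(-r * t))

for ti, e, l in zip(t, exponential, logistic):
    print(ti, round(e, 1), round(l, 1))
# Early on the two curves are indistinguishable; once the limiting factor (K)
# starts to bind, the logistic flattens out while the pure exponential keeps compounding.
```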
My mental model has been 3D computer graphics: doubling the polygon count had huge returns early on but delivered diminishing returns over time.
Ultimately, you can't make something look more realistic than real.
I don't know what the future holds, but the answer to the question "can LLMs be more realistic than real" will determine much about whether or not you think the curve will level off soon.
This is the crux of the article. To a large extent continued progress depends on a stable increase in compute, an increase in training data, and an increase in good ideas to squeeze more out of both of them.
One calculation you could do is a survival function: for each of the above, how long before it is disrupted? For example, China could crack down on AI or invade Taiwan. Or data centers become politically unpopular in the US. Or, we could run out of great ideas. Very hard to predict.
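A crude sketch of that survival calculation, where every hazard rate is invented purely to show the shape of the arithmetic:

```python
# Toy survival function for "the current AI scaling trend continues undisrupted".
# Each factor gets an assumed constant annual probability of disruption;
# all of these numbers are made up.
annual_hazards = {
    "compute supply shock (export controls, Taiwan, fab limits)": 0.05,
    "training data / good ideas run dry": 0.04,
    "political or economic backlash against data centers": 0.03,
}

p_survive_one_year = 1.0
for p in annual_hazards.values():
    p_survive_one_year *= (1 - p)

for years in (1, 2, 5, 10):
    print(years, round(p_survive_one_year ** years, 2))
# With these made-up rates the trend has only slightly better than even odds
# of making it five years without a major disruption.
```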
All positive growth eventually flattens out and becomes sigmoid, but a lot of phenomena experience negative growth and nose dive. No gentle curve, but a hard kink and perfect flat line at zero. Forever. I think it would be a stretch to categorize that pattern as sigmoid. Predicting a sigmoid pattern for negative growth implies some sort of a soft landing (depending on your definition of soft).
We can think of many populations that are no longer with us. So just a caution about over applying this reasoning in the negative case.
While we're at it, the "exponentials are actually sigmoids" meme is not necessarily true. While nothing stays exponential forever, a sigmoid is not guaranteed either. Overshoot-and-collapse also happens in tech, e.g. the dotcom bubble, or the successive AI winters.
For example, when a car starts, its speed and acceleration become greater than zero. But what about the rates of change at higher orders? It doesn't suddenly jump from zero acceleration to non-zero, which means the car has a non-zero derivative at every order. In other words, the movement is exponential. The same thing happens in reverse when the car reaches a constant speed.
https://xcancel.com/peterwildeford/status/202963666232244661...
Ofc "full labor automation" has a certain spread of meaning. A sliver of the population will always find ways to hold on to a job or run one or more businesses. But there will be "enough" labor automation for it to be a social ticking bomb. That, in fact, does not depend on better models or better AI than we have today. By 2045 there will be a couple of generations that have been outsourcing their thinking to AI for most of their adult lives. Some of them may still work as legal flesh of sorts, but many won't get to be middle men and will find no job.
Also, if you could replace your senator today by an untainted version of a frontier model (of today), would you do it? Would it be a better ruler? What are the odds of you not wanting to push that button in the next twenty years, after a few more batches of incompetent and self-serving politicians?
Going to need a big citation for that claim
Yeah well my prophet says he can beat up your prophet in a fight.
---
Here in reality, I'm not accustomed to taking random predictions without backing evidence as if they were truth.
Lol
In Scott's mind, dangers from AI are not a known fact, but are somewhere between highly probable and a near-certainty. In his mind, there are well-grounded justifications for believing that AI poses substantial future dangers to the public. Therefore he also believes he should inform people about this, and strives to convince skeptics, so that we might steer clear.
It's easy to understand why someone who believes what you believe about AI would of course not warn people about AI. It's also easy to understand why someone who believes what Scott believes about AI would want to warn people about AI. Your contention is with his confidence for being worried about AI, not his reason for wanting to warn people.
Neither can any specific discussion of what the dangers are and how we can steer clear. It all comes preplanted in your head. The only thing that Scott is playing on (as far as we can see) is your ingrained fear, by using an ominous headline, and a vague reference to something "scary" in the conclusion.
Of course there was no reason to "warn" you, you already believed in the scary future. Scott is just giving you fuel, which you seem to appreciate.
If only there were a way to see more of Scott's thoughts on the subject of AI..
1. If you're not treating my claim as a black box, explain explicitly what is your model of what the article was about? Are you aware, for example of the last paragraph of the article? I think that WAS what the article was about. Do you have specific opinions on e.g. how I went wrong and where my model differs?
2. If you are treating it as a black box, what's your default expectation based on the law of Nothing Ever Happens?
Just kidding, you don't need to explain anything. A"I" fearmongers should though.

This does *not* imply the inevitability of AGI. It does not imply AGI is necessarily bad.
It does mean that "the capabilities of AI will eventually plateau" offers no meaningful predictive power or relevance to the overall AI discussion.
This doesn't say much, and the author fights their own points a couple times, suggesting that they maybe didn't think through what they wanted to write until they were in the middle of writing it and started realizing their assumptions didn't match what they expected the data to say.
I really don't get the point of what I just read.
Model reasoning is on an s-curve, which is improving.
Model intelligence is not the same as reasoning. It's a different axis, and one I have not seen much movement on.
See, humans have a recursive form of intelligence which is capable of self-reflection and introspection. LLMs can only reason about tokens which have already been emitted. Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs. Therefore it is a mistake to assume that continual improvement on the reasoning scale will result in something that is equivalent enough to humans on the intelligence axis to replace all labor.
No, definitely not saying this, and I don't quite know what it means.
> Model reasoning is on an s-curve, which is improving.
Is this saying two different things? I think I might agree with this in principle as in maybe there is some sort of s curve or something like it but do we see evidence of this? Where?
> Model intelligence is not the same as reasoning. It's a different axis, and one I have not seen much movement on.
Can you clarify this? What is the distinction and what makes you say you have “not seen much progress?”
> See, humans have a recursive form of intelligence which is capable of self-reflection and introspection. LLMs can only reason about tokens which have already been emitted
LLMs do self reflection and introspection in context, and tweaks such as value functions (serving a similar purpose to intuition or emotion) may make this better? Why do you feel self reflection and introspection are a fundamental limitation here? Models reason over tokens they have emitted and also with their own sense and learned behavior already. Are you just talking about continual learning? Also I feel people just latch onto LLMs as if this is all of AI. Why? SSMs, memory networks, recurrent neural networks etc etc etc are all part of AI but aren’t as popular because they can’t yet compete with LLMs in terms of scaling laws and training efficiency due to e.g. hardware and software optimization and investment being focused on LLMs. If something else comes along that works better we’ll just start scaling that.
> Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs.
Very strong statement, any theoretical or experimental basis for this? I also don’t particularly care personally other than as a point of curiosity. Why does it matter if AI systems will develop equivalent reasoning mechanisms as humans? In fact it may be much better not to.
> Therefore it is a mistake to assume that continual improvement on the reasoning scale will result in something that is equivalent enough to humans to replace all labor.
Idk I didn’t say this explicitly but I also dont think it matters if we have a system “equivalent to humans” or one that “replaces all labor”.
I am making the argument that how we measure model intelligence is flawed, and that we are actually measuring something closer to "reasoning" than "intelligence". If you want evidence, we'll need a different kind of test, but how about I just gesture at the fact that GPT supposedly outscored PhDs on a broad range of subjects at least a year ago and to date is not replacing PhD jobs.
We see this pattern of high scores on tests but mediocre performance in the real world all over the place. From that, I draw the conclusion that it can reason like a PhD, but it can't think like a PhD.
So, we may see an s-curve on the measure of model reasoning but that doesn't imply they will overtake us or even match us on measures of intelligence.
As to your other questions:
> LLMs do self reflection and introspection in context,
> Why do you feel self reflection and introspection are a fundamental limitation here? Models reason over tokens they have emitted and also with their own sense and learned behavior already. Are you just talking about continual learning?
I disagree that models are reflecting and introspecting in a way equivalent to human intelligence here. They can reason over tokens which have been emitted, but by the same measure they cannot reason over tokens which have not been emitted. It's hard to make this point without drawing some diagrams, but I believe that human intelligence has internal loops, where many ideas may be turned over simultaneously before an action is taken. In comparison, an LLM might "feel uncertain" about a token before emitting it, but once it is emitted that uncertainty and the other near neighbor options are lost and the LLM is locked into the track that was set by the top-choice token. I think this is where hallucinations arise from, amongst other issues.
Context isn't sufficient for an internal reasoning loop because the tokens that compose context lose a lot of the information the network itself generated when picking those tokens. They occupy a much lower dimensional space than the "internal reasoning" processes of the transformer do.
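A toy illustration of that information-loss point at the sampling step; the sizes are shrunk way down and everything here is made up, it just shows how a rich internal state collapses to one integer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "internal state" for one position: a dense activation vector.
hidden = rng.normal(size=512)                 # hundreds of floats of internal structure

# Project to a toy vocabulary and form a distribution over the next token.
W = rng.normal(size=(512, 5000)) * 0.02
logits = hidden @ W
probs = np.exp(logits - logits.max())
probs /= probs.sum()

token_id = rng.choice(len(probs), p=probs)    # a single integer is all that gets emitted

# Everything below existed inside the model at this step and is discarded
# once the token is appended to the context:
runner_ups = np.argsort(probs)[-5:][::-1]     # the near-neighbor candidate tokens
entropy = -(probs * np.log(probs)).sum()      # how uncertain the choice actually was
print(token_id, runner_ups, round(entropy, 2))
```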
>> Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs.
> Very strong statement, any theoretical or experimental basis for this?
It's just my theory, but this is what I have been gesturing at. You already know about RNNs so I'll put it in those terms: the core of an intelligent network should be an RNN, not a transformer, but we fundamentally cannot train a network like that to work like an LLM because backprop doesn't work when there is infinite recursion, and without being able to bootstrap off the knowledge and reasoning baked into human text, there's no sufficient source of training material beyond being embodied.
---
EDIT:
I missed this, which I also want to reply to:
> Why does it matter if AI systems will develop equivalent reasoning mechanisms as humans? In fact it may be much better not to.
I actually agree that it may be better if they did not develop equivalent reasoning, but I don't see a world in which machines replace human labor without being intellectually equivalent.
As I think about it though, "dumb" machines which can follow reasoning but not think like humans are a rather scary proposition, honestly. Seems like a tool that would be wielded without restraint by those in power to control those who aren't.
> But those skeptics are initially responding to the constant AI hype claims that we are exponentially growing to AGI.
This is a meaningless statement or at best just strawmanning.
The entire plot of the Lord of the Rings could probably be compressed into less than 10 kB of text too.
Edit: this seems to be a controversial comment, but IMHO a blog of Scott Alexander's type is an art form, not just a communication channel.
A good example of this is the number of submissions to NeurIPS/ICML/ICLR. In 2017 that curve was exponential.
Except innovation. When one sigmoid tapers off we keep finding new ones to keep the climb going.
Lindy's Law is not actually a law and many exact minds will be provoked by the very name; it also fails spectacularly in certain contexts (e.g. lifetime of a single organism, though not necessarily existence of entire species).
But at the same time, I am willing to take its invocation in the context of AI somewhat seriously. There is an international arms race with China, which has less compute, but more engineers and scientists. This sort of intellectual arms race does not exhaust itself easily.
A similar space race in the 1950s and 1960s progressed from the first unmanned spaceflight to a moonwalk in a mere 12 years, which is probably less than what it takes to approve a bicycle lane in Chicago now.
I keep seeing this. Where did it come from? Has China said that they intend to attack other countries using AI? Have other countries declared that they intend to attack China with AI?
Also, why does anyone believe that AI could actually be that dangerous, given its inherently unpredictable and unreliable performance? I would be terrified to rely on AI in a life or death situation.
Inherently unpredictable and unreliable performance is quite the feature of human beings as well.
BTW your handle is an actual Czech word, minus a diacritic sign ("křupan"), and a bit amusing one. It basically means hillbilly. Not that it matters, just FYI.
Anyway: AI will be used in military context, and it probably already is. Both for target acquisition and maybe even driving the weapon itself. As of now, the Ukrainians are almost certainly operating some AI-enabled killer drones.
- Making connections to other subjects that an expert would miss. The hall of fame of sigmoid predictions is just excellent, I already know I'm going to be reminded of it some time in the future. Very entertaining way to get the point across.
- Writing about tricky concepts in a very accessible and elegant way, which experts are notoriously bad at doing themselves - they are often optimizing for other specialists.
- Being able to write with an air of speculation and experimentation with ideas that experts and institutions often can't afford. Experts have to maintain their track record; Scott Alexander can say "lol just double the timeline"
it doesn't help that sCotT aLexAndEr is also as close as you can come to the modern dressed up version of a eugenicist (again, not based on any actual expertise)
but I rest my case
Allowing slop articles like this literally prints them evaluation money.