Predicting the future is problematic, agreed.

Re: the Nate Silver nuclear weapons example, that's pretty weak. E.g., given that I've just seen three heads in a row (exactly once), does that alter anything about "the odds"?

Having seen nuclear weapons not used post WWII ... does that inform us about "the odds", or about the several times their use was all but certain (e.g. the Cuban missile crisis), averted only by out-of-band behaviour by individuals that prevented use and escalation?

reply
Historical base rates are the starting point unless you have an unusually good causal theory of the thing you're modelling. In the case of a coin flip you do. But the large majority of the time, when it's a complex system, you don't.

Most people's first instinct when faced with a complex system is to try to model it with words and use those words to predict. It's a beginner's error.

reply
> Having seen nuclear weapons not used post WWII ... does that inform us about "the odds"

This is what Bayesian prediction does.

> save for out of band behaviour by individuals that averted use and escalation?

This is kind of the point being made.

reply
> This is what Bayesian prediction does

Repeatedly, in a reproducible way, for events in the arrow of time? We can test this by going back to 1945 and running forward again?

> This is kind of the point being made.

Was it?

( assume I did a little math some decades past and have some poor grasp of Bayesian statistics )

reply
Edit: Here is a Claude artifact you can play with to try this yourself: https://claude.ai/public/artifacts/402f2670-5f48-4d76-96df-8...

You can play with how strong that ("10% per year") prior belief is and see how it affects what the odds are today.

I think the way you are wording this question ("We can test this by going back to 1945 and running forward again?") is an attempt to make it seem "obviously wrong".

Bayesian predictions deal exactly with this type of scenario: you start with a prior estimate ("Post-World War 2, some people put the odds at 10% per year") and then, as new information comes along ("It is now 1946. Did we use nuclear weapons again? ... It is now 1956. Did we use nuclear weapons again?"), you update the model to try to make the future prediction more accurate.

https://www.stat.berkeley.edu/~aldous/134/lecture4.pdf has an example of its use in exactly these kinds of "impossible to rewind" situations. Unfortunately it doesn't include the worked solutions.

https://math.mit.edu/~dav/05.dir/class11-prep.pdf is pretty good because it shows how updating the model with new data changes the odds.
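That updating loop is easy to sketch in a few lines of Python. This is purely illustrative (the "10% per year" prior is encoded here as a Beta distribution, which is my assumption, not necessarily how the linked artifact or Silver does it):

```python
# Bayesian update of an annual nuclear-use probability.
# Prior: Beta(a, b) with mean 0.10 -- the illustrative "10% per year" belief.
a, b = 1.0, 9.0  # Beta(1, 9) has mean 1 / (1 + 9) = 0.10

# Each year since 1945 with no use is one "no-use" observation,
# which simply increments b in the conjugate Beta-Binomial update.
for year in range(1946, 2025):
    b += 1  # observed: no use this year

posterior_mean = a / (a + b)
print(f"posterior mean annual probability: {posterior_mean:.4f}")
```

Each year of non-use drags the posterior mean down from the 10% prior toward zero, which is exactly the "did we use them again this year?" loop described above.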

reply
> Repeatedly, in a reproducible way, for events in the arrow of time? We can test this by going back to 1945 and running forward again?

This is a frequentist mental model - all well and good, but frequentism and Bayesianism are different schools of statistics. Where frequentism asks, "if I keep drawing samples from this distribution, what does the histogram converge to?", Bayesianism asks, "given my prior understanding and a new piece of evidence (a new sample), how should I adjust my hypothesis about which distribution I am sampling from?" (That is really boiled down, and the frequentist part is maybe even butchered.)
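A toy coin example (entirely illustrative, tying back to the "three heads in a row" question upthread) makes the contrast concrete: the frequentist summary is just the sample frequency, while the Bayesian answer is a posterior over the coin's bias that shifts with each flip.

```python
# Toy contrast: three heads in a row (1 = heads).
flips = [1, 1, 1]

# Frequentist: the estimate is just the observed frequency.
freq_estimate = sum(flips) / len(flips)  # 1.0 -- every flip so far was heads

# Bayesian: start from a uniform Beta(1, 1) prior over the coin's bias
# and apply the conjugate Beta-Bernoulli update for each flip.
a, b = 1.0, 1.0
for f in flips:
    a, b = a + f, b + (1 - f)

posterior_mean = a / (a + b)  # (1 + 3) / (2 + 3) = 0.8, pulled toward the prior
print(freq_estimate, posterior_mean)
```

Three heads do alter the Bayesian's odds, but only partially: the prior keeps the posterior from jumping all the way to "this coin always lands heads".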

Among other applications, this lets us estimate a distribution for which we have a tiny number of samples. A problem I'm interested in is called the Doomsday Argument, which estimates how long humanity will survive using your birth order (the number of humans born before you) and the anthropic principle (we assume you were not born unusually early or unusually late, but closer to the mode). Interestingly, everything you observe in the universe is already factored into this measurement, so you can't ever get a second sample. Obviously the opportunity for error with one measurement is huge, but you can come up with a number, and it isn't arbitrary; it is a real estimate.
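The single-sample version of this (Gott's delta-t form of the argument, applied to birth rank) fits in a few lines. The birth-count figure below is an assumption for illustration; published estimates of how many humans have ever been born vary widely:

```python
# Doomsday-style estimate from a single sample: your birth rank.
# Anthropic assumption: your rank N is uniformly distributed over all
# humans who will ever live ("not unusually early or unusually late").
N = 60e9          # assumed humans born before you; real estimates vary widely
confidence = 0.95

# If rank / total is uniform on (0, 1], then with probability `confidence`
# that ratio exceeds (1 - confidence), so total < N / (1 - confidence).
upper_bound_total = N / (1 - confidence)
print(f"{confidence:.0%} upper bound on total humans ever: {upper_bound_total:.2e}")
```

One measurement, one (very wide) bound: at 95% confidence the total is at most 20x the count so far. That is the sense in which the number "isn't arbitrary" despite the sample size of one.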

Similarly, we only have about 80 samples of years in which a nuclear exchange was possible, so a fairly small sample size, but we can still get a noisy estimate. But I haven't read On The Edge yet, so I don't know exactly what Silver does here.

>> This is kind of the point being made.

> Was it?

I think they meant that all of the solutions people invented to prevent nuclear war, and which commentators failed to anticipate, are reflected in the true probability distribution and in our dataset. So they are captured in our estimate, to the best of our abilities and given the limited data we have.

reply
Well, there was a (now public domain) movie which predicted the WW2 bombings.

https://publicdomainmovie.net/movie/things-to-come-1

On nukes, "The World Set Free" by H. G. Wells predicted nuclear weapons:

https://www.gutenberg.org/ebooks/1059

Also, from 1933:

https://gutenberg.net.au/ebooks03/0301391h.html

reply