Time to nuclear Armageddon




 * This article is the narrative basis for the accompanying video of a presentation at the Joint Statistical Meetings, 2019-08-01. It is on Wikiversity to invite further discussion, expansion, correction, and revision of the narrative presented here, subject to the standard Wikimedia rules of writing from a neutral point of view and citing credible sources.

This work was inspired by Daniel Ellsberg's 2017 book, The Doomsday Machine. In this book Ellsberg says that as long as the world maintains large nuclear arsenals, it is only a matter of time before there is a nuclear war, which he claims will almost certainly lead to a nuclear winter lasting over a decade, during which 98 percent of humanity will starve to death if they do not die of something else sooner.

Ellsberg's claims suggest statistical questions regarding the probability distribution of the time to a nuclear war and the severity of the consequences.

The following outlines a methodology for addressing these statistical questions. Previous estimates of the probability of a nuclear war in the next year range from 1 chance in a million to 7 percent, with 0.7 percent being offered by the Good Judgment Project, which arguably uses the best known methodology for making such estimates. If that rate is assumed to have been constant over the 70 years since the first test of a nuclear weapon by the Soviet Union in 1949, these estimates of the probability of a nuclear war in 70 years range from 70 chances in a million to 99 percent. The Good Judgment answer translates into a 40 percent chance of such a war in 70 years, past or future, or equivalently 20 chances in a million that the next 24 hours might see the initiation of a crisis that leads to a nuclear war.

Moreover, nuclear proliferation is continuing. This suggests that the probability of a nuclear war and winter is likely increasing and will continue to increase until something happens to make it effectively impossible for anyone to make more nuclear weapons for a very long time. Two possible scenarios might produce such a nuclear disarmament:
 * 1) A nuclear war and winter ending civilization.
 * 2) An unprecedented international movement that strengthens international law to the point that the poor and disfranchised have effective nonviolent means for pursuing a redress of grievances.

This article ends with an outline of possible future research in this area.

Methodology
We suggest here the following methodology:


 * 1. Select a list of incidents.
 * 2. Model the time between such incidents.
 * 3. Estimate subjective probabilities for (a) an essentially equivalent repetition of the same incident leading to a nuclear war, and (b) the distribution of the severity of the consequences of the war.
 * 4. Combine steps 2 and 3 into compelling communications.

Someone attacked item number “3” saying, “You, Spencer Graves, are willing to speculate. That's just a rank speculation. I am not willing to speculate.”

My response is that an unwillingness to speculate is essentially equivalent to claiming that the probability is zero, and that is itself an unrealistic speculation.

A prototype use of this methodology considers only two incidents:
 * (1) The 1962 Cuban Missile Crisis, and
 * (2) The 1983 Soviet nuclear false alarm incident.

President John F. Kennedy, the US President during the Cuban Missile Crisis, later said he thought the odds that the Soviets would go all the way to war were "somewhere between one in three and even." He died before learning that Soviet nuclear weapons were already in Cuba at the time. The crisis ended less than 48 hours before a planned US invasion, predicated on the belief that there were no such weapons in Cuba. At a 30th-anniversary conference in 1992, Fidel Castro (Cuban head of state in 1962) told Robert McNamara (US Secretary of Defense in 1962) that if the US had invaded, those nuclear weapons would have been used, even though Castro knew that not one person in Cuba would survive.

The 1983 Soviet nuclear false alarm incident occurred while US President Ronald Reagan was building up the US military and challenging the Soviets. Yuri Andropov, the Soviet General Secretary, and his inner circle believed that the US was preparing a nuclear first strike.

This gives us one observation of $$t_1$$ = 21 years for the time between the 1962 Cuban Missile Crisis and the 1983 Soviet nuclear false alarm incident. In addition, the time to the next incident of a similar magnitude is censored at $$t_2$$ = 36 years, the time between the 1983 incident and 2019, as this is being written. Standard statistical theory says that the likelihood for these two observations is the product of the probability density $$f$$ evaluated at $$t_1$$ and the survival function $$S$$ evaluated at $$t_2$$:


 * $$L_e = f(t_1) S(t_2)$$.

It seems reasonable to assume, at least for an initial demonstration of this methodology, an exponential distribution for the times between incidents. This means the likelihood is as follows:


 * $$L_e = \exp[-(21+36) / \tau] / \tau$$.

To the extent that this model is accurate, it says that the maximum likelihood estimate of the mean time to the next comparable nuclear crisis is the total exposure time divided by the number of observed incidents: (21 + 36) / 1 = 57 years.


 * $$\hat\tau$$ = 57.
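This maximum can be checked numerically. The following is a minimal sketch in Python (standing in for the R tools mentioned later), using only the two observations from the text and a crude grid search:

```python
import math

t_observed = 21   # years between the 1962 and 1983 incidents
t_censored = 36   # years from 1983 to this writing in 2019, with no new incident

def likelihood(tau):
    """Censored-exponential likelihood L_e = f(t1) * S(t2) = exp(-(t1 + t2)/tau) / tau."""
    return math.exp(-(t_observed + t_censored) / tau) / tau

# Coarse grid search over candidate mean times between incidents (years).
taus = [0.1 * k for k in range(10, 5000)]     # 1.0 to 499.9 years
tau_hat = max(taus, key=likelihood)
print(tau_hat)  # close to 57 years, matching the closed-form MLE
```

The grid search is deliberately simple; any standard optimizer would do the same job.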

The same estimate can be obtained by considering this history as consisting of one Poisson-distributed observation of the number of such incidents in each of the 57 years between 1962 and this writing in 2019: one such incident in 1983 and 0 in each of the other 56 years. The likelihood for this formulation is as follows:


 * $$L_p = \lambda \exp(-57 \lambda)$$.

This is maximized with $$\hat\lambda$$ = 1/57 = 0.018 such incidents per year.
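The two formulations agree: the Poisson rate estimate is the reciprocal of the exponential mean. A quick numerical check (again a Python sketch, not part of the original analysis):

```python
import math

years = 57       # 1962 through this writing in 2019
incidents = 1    # the 1983 false alarm

def poisson_likelihood(lam):
    """L_p = lam**incidents * exp(-years * lam), constant factors dropped."""
    return lam ** incidents * math.exp(-years * lam)

# Grid search over candidate incident rates (per year).
lams = [k / 100000 for k in range(1, 20000)]   # 0.00001 to 0.19999
lam_hat = max(lams, key=poisson_likelihood)
print(lam_hat, 1 / lam_hat)  # rate close to 1/57 per year, mean close to 57 years
```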

The Poisson formulation is useful because it makes it easier to consider a non-constant hazard. The glm function in the R programming language can easily model a linear relationship between $$\log(\lambda)$$ and the time since the very first test of a nuclear weapon, by the United States in 1945. Moreover, the bssm package for R can model a normal random walk in log(Poisson mean). These options will not be pursued here but might be useful in future work, either with a larger list of incidents or with nuclear proliferation, discussed below.
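Although not pursued in the analysis, the log-linear option can be sketched directly. The Python below is an illustrative stand-in for R's glm(count ~ year, family = poisson), maximizing the Poisson log-likelihood with $$\log(\lambda_t) = a + b(t - 1945)$$ by grid search; with only one observed incident, the fitted trend is of course very weak evidence of anything:

```python
import math

# Yearly incident counts, 1962 through 2019: one incident (1983), zeros otherwise.
years = list(range(1962, 2020))
counts = [1 if y == 1983 else 0 for y in years]

def log_lik(a, b):
    """Poisson log-likelihood with log-linear hazard: log(lam_t) = a + b*(t - 1945)."""
    ll = 0.0
    for y, n in zip(years, counts):
        lam = math.exp(a + b * (y - 1945))
        ll += n * math.log(lam) - lam   # log(n!) vanishes since n is 0 or 1
    return ll

# Crude grid search over intercepts and slopes.
grid_a = [-8 + 0.05 * i for i in range(160)]
grid_b = [-0.2 + 0.005 * j for j in range(80)]
a_hat, b_hat = max(((a, b) for a in grid_a for b in grid_b),
                   key=lambda ab: log_lik(*ab))
print(a_hat, b_hat)  # the single 1983 incident pulls the slope slightly negative
```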

Relevant literature
Simon Beard shared the following literature review of studies estimating something like the probability of a nuclear war in the next year, which he compiled jointly with Tom Rowe of Virginia Tech and James Fox of the University of Oxford. Beard's analysis is augmented here with the probability of a nuclear war in the 70 years between the first test of a nuclear weapon by the Soviet Union (now Russia) in 1949 and the time that this is being written in 2019. It is augmented also by columns translating the annual probabilities into the number of chances in a million (parts per million, ppm) that a crisis leading to a nuclear war will begin on any given day.

The 70-year numbers use the fact that if there is a constant probability $$p$$ of a nuclear war in a given year, the probability of at least one nuclear war in 70 years is $$[1-(1-p)^{70}]$$. The upper limit of 7% for the probability of a nuclear war in the next year (Barrett et al., 2013) is clearly not plausible as a constant probability during that period: otherwise, the probability that we would already have had a nuclear war would exceed 99%.
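The arithmetic behind these conversions is simple enough to check directly. A minimal Python sketch, where the three annual probabilities are the low, Good Judgment, and high estimates from the review:

```python
# Convert annual war probabilities to 70-year probabilities and to
# equivalent daily chances in parts per million (ppm).
for p_annual in (1e-6, 0.007, 0.07):
    p_70yr = 1 - (1 - p_annual) ** 70
    p_daily = 1 - (1 - p_annual) ** (1 / 365)   # solves (1 - p_daily)**365 == 1 - p_annual
    print(p_annual, round(p_70yr, 4), round(p_daily * 1e6, 1))
```

The middle line reproduces the roughly 40 percent 70-year probability and approximately 20 ppm daily chance quoted for the Good Judgment estimate.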

Similarly, the ppm numbers can be interpreted as suggesting that each new day the leaders of the nuclear-weapon states spin the cylinder and pull the trigger in a game of Russian roulette with the indicated chance of the result being a nuclear war.

It seems useful to highlight the Good Judgment Project (2018), because it uses a methodology developed by a 20-year project funded in part by the US Intelligence Advanced Research Projects Activity (IARPA) and documented in Tetlock and Gardner (2015). Their methodology reportedly produced forecasts 30 percent better than those of intelligence analysts with access to classified information. It is as follows:


 * 1) Recruit volunteers and ask them a series of forecasting questions, like estimating the probability of a certain event in a specific time period (typically 1, 2 or 3 years).
 * 2) Identify the volunteers with the best forecasts.
 * 3) Organize them into teams.
 * 4) Study what the best teams did.

The result is documented in Tetlock and Gardner (2015). This methodology might be crowdsourced on platforms like Wikidata, Wikiversity, and Wikipedia.

The 0.7 percent chance of a nuclear war starting in the coming year estimated by the Good Judgment Project is equivalent to a 40 percent chance in 70 years and 20 chances in a million that it will start in the next 24 hours.

Other leading figures supporting Ellsberg's claims
Ellsberg is not alone in this concern: others have similarly said that as long as the world has large nuclear arsenals, it is only a question of time before there is a nuclear war. Such concerns led former US Senator Sam Nunn and media executive Ted Turner to found the Nuclear Threat Initiative, also supported by former US Secretary of Defense William Perry and former US Secretaries of State George Shultz and Henry Kissinger. Perry wrote, "The threat of Russia intentionally launching a nuclear attack against the United States today is vanishingly small", but we are much more likely to stumble into a nuclear war because of a cybersecurity breach, similar to Stuxnet, which reportedly destroyed roughly a fifth of Iran's uranium-enrichment centrifuges in 2010. In 2020 the US federal government was the target of a similar attack (the SolarWinds breach). New York Times cybersecurity and digital-espionage reporter Nicole Perlroth titled her book on the subject This Is How They Tell Me the World Ends.

Atmospheric scientists Owen Toon, Alan Robock et al. (2017) have estimated that a relatively minor nuclear war between India and Pakistan could involve at least 100 nuclear weapons, leading to a nuclear autumn during which two billion people -- just over a quarter of humanity -- not involved in the nuclear exchange would starve to death. A hundred nuclear weapons is only about 2 percent of the US nuclear arsenal. A nuclear war involving the US would likely be closer to Ellsberg's doomsday scenario than the two billion dead mentioned by Toon, Robock et al. (2017).



Nuclear proliferation
The fact that nuclear proliferation is continuing suggests that any model that assumes that the risk of a nuclear war is constant or declining is probably wrong. When the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) took effect in 1970, there were 5 nuclear-weapon states. When US President George W. Bush announced an "axis of evil" consisting of North Korea, Iran, and Iraq on 2002-01-29, there were 8. As this is being written in 2019, there are 9. As long as nuclear-weapon states continue to threaten countries without them, the pressure for nuclear proliferation will continue, and the risks of a nuclear war will likely grow.

Future work
It is relatively easy to use the glm function in the R programming language to model a trend in the log(Poisson mean) of the number of first tests of new nuclear-weapon states each year, or the bssm package for R to model a random walk in that quantity.

Beyond this, it could be useful to try to expand the present study to consider larger lists of incidents threatening nuclear war. For this purpose, it might be useful to recruit volunteers on Wikimedia Foundation projects, especially Wikipedia, Wikiversity, and Wikidata, to produce estimates like these using the methodology of the Good Judgment Project (2018) described in Tetlock and Gardner (2015). Wikipedia already does something like this: Peter Binkley, in an invited 2006 article for a Canadian Library Association journal, said that on controversial topics "the two sides actually engaged each other and negotiated a version of the [Wikipedia] article that both can more or less live with. This is a rare sight indeed in today's polarized political atmosphere, where most online forums are echo chambers for one side or the other".

Another potentially useful project could be to write an R function to convert probability distributions generated by models like those discussed here into estimates of the probability that a person of any age, especially a child born today, would die prematurely from a nuclear war. Stanford Engineering Professor Emeritus Martin Hellman has estimated that the probability is at least 10 percent that a child born today would die prematurely from a nuclear war. These kinds of analyses might help a broader audience understand the seriousness of this issue.
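The core of such a function might look like the following minimal sketch (in Python rather than R, for illustration). The annual war probability, remaining life expectancy, and conditional fatality probability used below are hypothetical placeholder values, not estimates from this article:

```python
def p_premature_death(p_annual, years_remaining, p_die_given_war):
    """Rough probability of dying prematurely from a nuclear war, assuming a
    constant annual war probability and a single conditional fatality probability."""
    p_war_in_lifetime = 1 - (1 - p_annual) ** years_remaining
    return p_war_in_lifetime * p_die_given_war

# Hypothetical inputs: the Good Judgment annual rate, an 80-year life
# expectancy, and a placeholder 25 percent chance of dying given a war.
print(round(p_premature_death(0.007, 80, 0.25), 3))
```

Under these placeholder assumptions the result is on the order of 10 percent, comparable in magnitude to Hellman's estimate, though the inputs here are illustrative only.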