The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy

For more than 200 years, a controversy has persisted over the incontrovertible theorem developed by 18th-century British amateur mathematician Reverend Thomas Bayes. The story is captured by Sharon McGrayne in *The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy*. A well-informed science writer with a dramatic eye, McGrayne recounts the development of probabilistic reasoning, focusing on Bayes, the brilliant French mathematician and scientist Pierre Simon Laplace, and other well-known figures such as Alan Turing and Ronald Fisher. The book is a delightful change of pace from more technical historical accounts such as *The History of Statistics: The Measurement of Uncertainty Before 1900* by Stephen Stigler (Harvard University Press, 1986) and *A History of Mathematical Statistics from 1750 to 1930* by Anders Hald (Wiley, 1998). *The Theory That Would Not Die* will appeal to physicists, computer scientists, mathematicians, statisticians, and other empirical scientists whose work involves stochastic modeling.

Bayes’s theorem, or Bayes’s rule, is itself uncontroversial because it is a direct consequence of the definition of conditional probability. It says that for two random variables *N* and *O*,

Pr(*N* = *n* given *O* = *o*) ∝ Pr(*O* = *o* given *N* = *n*) × Pr(*N* = *n*),

where Pr(*N* = *n*) represents the probability that *N* assumes the particular value *n*.

The controversy arose when Bayes and his contemporaries used the expression to solve the “inverse problem” of calculating the probability of causes: They let *N* represent an unknown state of nature and *O* represent observations about that state. In that way, the theorem starts with a personal subjective belief about the state of nature, Pr(*N* = *n*), and then, based on the observations, calculates an updated belief, Pr(*N* = *n* given *O* = *o*). (For more on Bayesian analysis, see the Quick Study by Glen Cowan in PHYSICS TODAY, April 2007, page 82.) What was and, in some circles, remains controversial is the use of probability as a measure of subjective belief when working in science.
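The updating step described above can be made concrete with a small numerical sketch (my illustration, not an example from the book): let *N* be the unknown bias of a coin and *O* the outcome of ten flips. The hypothetical biases and prior weights below are chosen only for illustration.

```python
# Bayes's rule as an update from prior belief to posterior belief.
# Hypothetical setup: N is a coin's heads probability, O is "8 heads in 10 flips".
from fractions import Fraction

# Prior belief Pr(N = n) over two candidate biases (illustrative choice).
prior = {Fraction(1, 2): Fraction(1, 2),   # fair coin
         Fraction(3, 4): Fraction(1, 2)}   # heads-biased coin

def likelihood(n, heads, flips):
    """Pr(O = o given N = n): probability of a particular flip sequence."""
    return n**heads * (1 - n)**(flips - heads)

def posterior(prior, heads, flips):
    """Updated belief Pr(N = n given O = o), proportional to likelihood x prior."""
    unnormalized = {n: likelihood(n, heads, flips) * p
                    for n, p in prior.items()}
    total = sum(unnormalized.values())          # normalizing constant
    return {n: w / total for n, w in unnormalized.items()}

post = posterior(prior, heads=8, flips=10)
```

After seeing 8 heads in 10 flips, the posterior shifts most of its weight onto the heads-biased hypothesis, exactly the “updated belief” the theorem delivers.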

One key aspect of the controversy is the interpretation of the word “probability.” For “frequentists,” whose roots can be traced to the gambling parlors of Europe, probability represents long-run frequency. For “subjectivists,” who promoted Bayes’s theorem, probability measures belief and is a means for quantifying personal uncertainty. Frequentists assume that characteristics of nature are fixed and that the observations of them are stochastic. For example, the speed of light is fixed, but measurements of it are random because they are made with error. To frequentists, it makes no sense to talk about assigning a probability distribution to the fixed state of nature. But for subjectivists, knowledge of *N* is imperfect, and that uncertainty must be represented by probability. Despite the controversy, subjectivists and frequentists agree entirely that the likelihood function Pr(*O* = *o* given *N* = *n*) represents the evidence in the observations from which scientists learn about nature. Fortunately, given enough evidence, Bayesian and frequentist approaches tend to reach the same conclusions.
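The closing claim of that paragraph — that with enough evidence the two schools converge — can be sketched numerically (my illustration, not from the book). For coin flips, the frequentist estimate of the heads probability is the observed frequency, while a Bayesian with a uniform Beta(1, 1) prior reports the posterior mean; the gap between them shrinks as the data accumulate.

```python
# Illustrative convergence of frequentist and Bayesian point estimates
# for a binomial proportion (hypothetical example, not from the review).

def frequentist_estimate(heads, flips):
    """Long-run frequency estimate: observed proportion of heads."""
    return heads / flips

def bayesian_estimate(heads, flips, a=1, b=1):
    """Posterior mean under a Beta(a, b) prior, conjugate to the binomial."""
    return (heads + a) / (flips + a + b)

# With a true heads rate of 0.6, the estimates agree ever more closely.
for flips in (10, 100, 10_000):
    heads = int(0.6 * flips)
    gap = abs(frequentist_estimate(heads, flips) -
              bayesian_estimate(heads, flips))
    print(flips, round(gap, 6))
```

The prior's influence fades like 1/*flips*, which is one way to see why Bayesian and frequentist analyses of large data sets tend to agree.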

McGrayne describes how subjectivist probability stalled in mathematical circles because of strong opposition from the academic fathers of modern statistics, including Karl Pearson, Ronald Fisher, and Jerzy Neyman. She shows that Bayes’s rule, despite being dismissed in academic circles, continued to be applied to problems across a wide range of fields, from cryptography to public health to particle physics (see the article by Louis Lyons on page 45 of this issue). The subjectivist approach persisted because of its broad utility to practicing scientists. The ivory tower’s effect was stultifying, not stimulating.

McGrayne gives a superb synopsis of the fundamental development of probability and statistics by Laplace, who embraced Bayes’s perspective on probability and applied it, for example, to refining France’s population estimates for Napoleon Bonaparte and to making astronomical calculations of the positions of planets. The latter application led Laplace to discover the central limit theorem.

I most enjoyed the retelling of the role that John Tukey, the brilliant Princeton University statistics professor and Bell Labs scientist, played in the 1960 presidential election between Richard Nixon and John F. Kennedy. Although Nixon was ahead in more states, Tukey convinced the NBC television network not to call the election for him; indeed, Kennedy prevailed by a narrow margin in the electoral college. Tukey, of frequentist orientation, used a Bayesian-style analysis to “borrow strength” from data on similar prior elections and improve prediction of the 1960 election. McGrayne speculates that Tukey kept the details of his election methods secret until his death to avoid acknowledging his use of Bayesian methods.

Given Albert Einstein’s view of quantum mechanics, some physicists may sympathize with the frequentists who say that Bayes’s rule has no place in science. But the emergence of physical theories that allow for nature to simultaneously be in many states, as represented by a probability distribution, is another step forward for Bayes, Laplace, and their followers.