Physicists typically think they know more about experiments than they do. One main culprit is physics textbooks, which often present homework exercises as if they were straightforward descriptions of experiments rather than idealized situations designed to teach calculational techniques. As a result of such oversimplifications, the conceptual and theoretical analysis of experiments—that is, analysis of the evidence that is actually produced and the limitations of the conclusions that scientists can properly draw from that evidence—has not been given anywhere near the attention it deserves.

Fortunately, experts in the theory of experimental procedure and the history of experimentation have been filling that gap. Measuring Nothing, Repeatedly: Null Experiments in Physics by Allan Franklin and Ronald Laymon is a splendid example. Franklin is an emeritus high-energy experimentalist who has turned his attention to the theory of experiment, and Laymon is an emeritus philosopher and historian of science. As the title indicates, Franklin and Laymon focus on two issues: why we should be interested in null experiments and why experimentalists sometimes decide that it is worth their while to repeat experiments that have already been done. As it turns out, the term “null experiment” has two quite different meanings. In one sense, a null experiment compares two systems, or the same system in two configurations, and finds no difference in observable behavior. In another, it finds no statistically significant deviation from the null hypothesis—the hypothesis that a proposed causal factor does not exist or has no influence.


Franklin and Laymon start with a discussion of Galileo’s famous (and possibly apocryphal) experiment at the Tower of Pisa, in which he dropped from a high floor two bodies of the same substance but different weight. The relevant null result is the two objects hitting the ground at the same time. Galileo reports in Discourse on Two New Sciences that “you find, on making the experiment, that the larger anticipates the smaller by two inches.” That counts as hitting the ground together within the margin of error, and Galileo certainly takes the result as a refutation of Aristotle’s theory that heavier objects fall faster.

From there, Franklin and Laymon provide additional examples of null experiments: Galileo’s observations of pendula, Newton’s experiments with pendula, the famous Eötvös experiments on the proportionality of inertial and gravitational mass, and the searches for a “fifth force” and for a southern deviation of falling bodies. In each case the experimental protocols, theory, and analysis of data exhibit subtleties and problems, and the scientists made multiple attempts to repeat the experiments. “Repetitions” typically involved changes to the apparatus rather than attempts at pure replication. Only a careful consideration of possible systematic errors explains why the changes in procedure were considered improvements.

The book’s second section focuses on the Michelson–Morley experiment, a late-19th-century attempt to detect a substance called ether that was thought to be the medium through which light waves traveled. The experiment has become a favorite anecdote in physics textbooks, but the comparison between those idealized accounts and the actual issues confronting experimentalists is eye-opening. For example, differences in temperature of merely 1/500 of a degree between the arms of the experimental apparatus might produce misleading results, but attempts to shield the arms from such variations might theoretically also interfere with the ether wind that the experiment was meant to measure.

In the first two sections, the experiments discussed are null in the first sense: They show no difference between the behaviors of two systems or of the same system in different situations. The last section discusses searches for new particles or novel decays in high-energy experiments at the Large Hadron Collider. Those experiments are of critical importance for physicists investigating the possible limitations or omissions of the standard model. Their null results are outcomes that agree with the null hypothesis; they can be accounted for by the standard model and mean that the experiment does not point to novel physics.

Physicists and physics students will likely be familiar with many of those experiments, but not with the detailed history behind them or the technical and conceptual challenges that confronted the experimenters. Measuring Nothing, Repeatedly is written as a textbook and would be ideal for a course that offers a broad survey of the challenges and limits of experimentation and data analysis.

Apart from its potential as a textbook, Measuring Nothing, Repeatedly will be valuable to anyone interested in either the history of physics or the general problems of conducting experiments and evaluating their outcomes. There is a wide and variegated gap between idealized visions of scientific experiments that can be easily analyzed and the messy real-world experiments that scientists actually perform. Experimentalists must try to account for random errors, known sources of systematic error, and, most challenging of all, unknown sources of systematic error. Experimental designs that mitigate one sort of problem may amplify another. Even theorists who have neither the inclination nor the expertise to do experimental work can benefit from a finer appreciation of the problems that experimentalists confront and the sources of doubt that must accompany all empirical tests of physical theories.

Tim Maudlin is a professor of the philosophy of physics at New York University. He received his doctorate in the history and philosophy of science from the University of Pittsburgh.