In the late 1960s, John Clauser became fascinated with a paper he stumbled on in the Columbia University library. Published a few years earlier by John Bell, it proposed a scenario in which specific predictions of quantum mechanics could be distinguished from those of any local hidden-variable alternative.1 Clauser was eager to conduct the described entanglement experiment. But his graduate adviser and other professors discouraged him from pursuing the topic, which they deemed more philosophy than physics. If Clauser wanted a job in physics, he needed to stick with a mainstream topic, such as the ultimate subject of his thesis, radio astronomy.
Thirty years earlier, many leading physicists—Niels Bohr, Albert Einstein, Werner Heisenberg, and Erwin Schrödinger, among others—had devoted much of their time to grappling with the defining properties of quantum mechanics, particularly entanglement. But the pragmatic bent of physicists during World War II and the Cold War pushed the interpretation of quantum mechanics from the forefront of the field to the fringes.
The three experimentalists awarded this year’s Nobel Prize in Physics were pioneers who helped return the foundations of quantum mechanics to mainstream interest. In the face of discouragement and indifference from the research community, Alain Aspect, Clauser, and Anton Zeilinger pursued rigorous evidence that pinned down the properties of entanglement. And now those results and techniques lie at the foundation of quantum information science.
Hot topic, Cold War
In the 1920s, physicists were still identifying and understanding the implications of quantum mechanics. Those implications, such as wave–particle duality, were in stark contrast to classical physics, and physicists started formulating different conceptions of quantum mechanics’ math and measurements. Bohr and Heisenberg were among those promoting numerous ideas and attitudes that, by the 1950s, were collectively being referred to as the Copenhagen interpretation of quantum mechanics, which generally posits that rather than well-defined properties, quantum systems have only probability distributions—until the moment they’re measured.
To Einstein, the Copenhagen interpretation had unsettling consequences. For example, two particles can interact such that a single wavefunction describes them both—that is, they become entangled. No matter how far apart the particles are, quantum mechanics suggests that the moment one is measured, the other instantaneously adopts the corresponding partner state. But such behavior would seem to conflict with causality as understood in the theory of relativity.
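A minimal concrete example, written in standard modern notation rather than anything from the original papers, is the spin singlet of two particles A and B:

$$ |\psi^{-}\rangle \;=\; \frac{1}{\sqrt{2}}\bigl(\,|{\uparrow}\rangle_{A}|{\downarrow}\rangle_{B} \;-\; |{\downarrow}\rangle_{A}|{\uparrow}\rangle_{B}\,\bigr). $$

Measuring particle A's spin along any axis gives up or down with equal probability, but whichever result appears, a measurement of B along the same axis is then guaranteed to give the opposite result, no matter how far apart the two particles are.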
In their famous 1935 paper, Einstein, Boris Podolsky, and Nathan Rosen, known together as EPR, argued that because of such violations, quantum mechanics must not be a complete description of physical systems.2 They suggested that a full theory should be local, in that an object is directly influenced only by its immediate surroundings, and realistic, in that nature has defined properties whether or not they’re measured.
Those on the Copenhagen side argued that locality and realism might be what’s wanted, not what’s necessary, in a model. Einstein stuck to his convictions, and other physicists proposed the addition of hidden variables—so called because they aren’t measurable—that could explain away entanglement’s action at a distance. The variables would determine all a particle’s measurable properties, such as position and spin, before (and regardless of) measurement, and they would have a distribution of values across particles that accounts for the apparent probabilities seen in quantum experiments.
By the 1950s, however, the Copenhagen interpretation had become the standard. The decade saw some alternative quantum interpretations, notably from David Bohm and Hugh Everett. (See the Quick Study by Sean Carroll, Physics Today, July 2022, page 62.) But largely, physicists stopped thinking about quantum mechanics’ implications. World War II and the Cold War created a physics culture centered on pragmatism rather than philosophizing. “The Copenhagen interpretation was something that folks could appeal to and say, ‘Those interpretation questions seem like they were handled, and our business is elsewhere,’” says David Kaiser, a physicist and historian of physics at MIT. At universities, ballooning physics enrollment and class sizes—a reaction to massive defense projects—led professors to focus on topics, particularly calculation-based problems, amenable to a large lecture hall and rapid grading.3
When the physics funding bubble eventually burst in the late 1960s, job prospects dwindled, and by the end of the 1970s, physics enrollments were half of what they were at their peak near the start of the decade. With the return to smaller classes, essay and discussion questions again became part of exams and textbooks, and philosophically oriented seminars found their place on the calendars once again.3
EPR reevaluated
As early as his undergraduate days at Queen’s University Belfast in the 1940s, Bell disliked how he’d been taught quantum mechanics. The Copenhagen interpretation split the world into quantum and classical realms without specifying where the divide between the two lies. (See the article by Reinhold Bertlmann, Physics Today, July 2015, page 40.) While a grad student at the University of Birmingham, Bell became intrigued by Bohm’s 1952 reinterpretation of quantum mechanics as deterministic and realistic through the addition of hidden variables. Bohm presented the idea in a modified version of the situation proposed in the EPR paper.
In the EPR gedanken experiment, two entangled particles are emitted in opposite directions, and they travel until each one has either its position or momentum measured. Bohm replaced those continuous measurements with binary measurements of spin. Bohm’s hidden-variable model has a literal wave–particle duality: each particle rides on a wave, and the wavefunction deterministically guides the particle’s trajectory rather than merely supplying probabilities. But because each particle’s motion depends on the wavefunction of the whole system, Bohm’s model is still nonlocal.
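As a sketch of how that guidance works (the equations aren’t given in the article; this is the standard spinless formulation of Bohm’s theory), the wavefunction ψ of an N-particle system evolves under the usual Schrödinger equation, and each particle follows a definite trajectory with velocity

$$ \frac{d\mathbf{x}_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\bigg|_{\mathbf{x}_1(t),\,\dots,\,\mathbf{x}_N(t)}. $$

Because the right-hand side is evaluated at the instantaneous positions of all the particles, what happens to one particle can immediately affect the guidance of another, which is the nonlocality noted above.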
After graduation, Bell went to work at CERN, alongside his wife and fellow physicist Mary Bell. But in his spare moments, Bell pondered the possibility of hidden variables that could restore locality to quantum systems. In 1964 Bell published an article about Bohm’s variant of the EPR paradox.1 He identified an experimental scenario in which no local-realist theory of the kind Einstein wanted could replicate all the predictions of quantum mechanics.
In the scenario, each particle of an entangled pair has its spin measured along one of two randomly and independently chosen axes, as illustrated in figure 1. For certain axes—say, parallel ones—quantum and local-realist models predict the same correlation between the pair’s measured spins over many measurements. But for combined measurements at multiple relative angles between the axes, quantum mechanics predicts a higher upper bound on the correlation. Given the right parameters, an experiment could exclude a broad class of local hidden-variable theories if the measured correlations exceed the local-realist upper bound in what’s now known as Bell’s inequality.
John Bell, in his CERN office in 1982. The drawing on the blackboard depicts measurements of the correlations between the spins or polarizations of an entangled pair of particles. The equation at the top is the upper bound expected in local-realist models, which is lower than the bound for quantum mechanics (QM). (Courtesy of CERN, CC BY 4.0.)
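To make the gap concrete, here is a minimal numerical sketch (the local model is an illustrative choice of mine, not one taken from Bell’s paper or the experiments): each pair carries a shared hidden unit vector λ, and each detector’s outcome is simply the sign of λ projected onto its measurement axis. The script compares that model’s correlation with the quantum singlet prediction E(θ) = −cos θ.

```python
# A minimal sketch comparing one local hidden-variable (LHV) model with the
# quantum prediction for a spin-singlet pair. The LHV model here is an
# illustrative choice (shared hidden unit vector, sign-of-projection
# outcomes), not the general class Bell analyzed.

import numpy as np

rng = np.random.default_rng(0)


def lhv_correlation(theta, n_pairs=200_000):
    """Correlation for the sign-of-projection LHV model with the two
    measurement axes separated by angle theta."""
    lam = rng.normal(size=(n_pairs, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)    # uniform on the sphere
    a = np.array([0.0, 0.0, 1.0])                        # Alice's axis
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])    # Bob's axis
    out_a = np.sign(lam @ a)       # Alice's predetermined outcome, +1 or -1
    out_b = -np.sign(lam @ b)      # Bob's outcome; anticorrelated at theta = 0
    return np.mean(out_a * out_b)


for deg in (0, 30, 45, 60, 90):
    theta = np.radians(deg)
    print(f"{deg:3d} deg:  LHV {lhv_correlation(theta):+.3f}   "
          f"quantum {-np.cos(theta):+.3f}")

# The two agree at 0 and 90 degrees, but the quantum correlation is stronger
# in between. That gap is what Bell's inequality turns into a testable bound
# on combinations of measurements at different relative angles.
```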
California dreamin’
Despite his graduate adviser’s discouragement, Clauser refused to be swayed from his desire to test Bell’s inequality. He wrote to Bell to confirm that no such experiment had been done, and buoyed by Bell’s confirmation and encouragement, Clauser started planning how to transform the idealized situation in Bell’s paper to realistic equipment. He connected with some researchers who also were interested in Bell’s inequalities: Abner Shimony and his grad student Michael Horne at Boston University and Richard Holt at Harvard University. In 1969 they published their reformulation of Bell’s inequality for a realistic experimental setup.4
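That reformulation is now commonly called the CHSH inequality, after the four authors’ initials. In its standard modern form (the notation here is the conventional one, not necessarily the paper’s), it bounds a combination of correlations E measured with analyzer settings a or a′ on one side and b or b′ on the other:

$$ S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 $$

for any local-realist model, whereas quantum mechanics allows |S| to reach 2√2 ≈ 2.8 for suitably chosen settings and entangled states.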
That same year Clauser started a postdoc at Lawrence Berkeley National Laboratory under Charles Townes, one of the inventors of the laser. Townes agreed to let Clauser split his time between radio astronomy and an experimental test of Bell’s inequality. Over the course of two years, Clauser and Stuart Freedman, a graduate student under Eugene Commins, constructed their setup. In the experiment, calcium atoms produced entangled photons after excitation by a hydrogen arc lamp. Most excited electrons immediately returned to the ground state, but some cascaded down a series of energy levels, emitting photons along the way. The parity of the energy transitions determined the polarization state of the photons, and their shared origin entangled them.
The photons traveled in opposite directions toward a detector on each end of the setup. But first, each photon encountered a polarizer set at some angle. Some photons were blocked, and others passed through to ping the detectors. Clauser and Freedman built new, more efficient polarizers—so-called pile-of-plate polarizers—whose angles could be changed more quickly. Even so, collecting sufficient particle statistics for a range of relative polarizer angles from 0° to 90° took about 200 hours.
In their paper, published in 1972, Clauser and Freedman presented the first-ever experimental Bell test. Their observations violated Bell’s inequality.5 “The result, I didn’t expect,” Clauser said in a 2002 oral history interview with the American Institute of Physics (publisher of Physics Today). “I hoped we would overthrow quantum mechanics.” Similar experiments using mercury atoms followed from Holt, Edward Fry, Randall Thompson, and Clauser. All but Holt’s again matched the expectations of quantum mechanics, not local realism.
Switching it up
When Aspect started working on Bell’s theorem in 1974, he had just returned from Cameroon for his graduate studies at the University of Paris–Sud. During his three years teaching in the central African country, he had read and thought about quantum theory. So when Bernard d’Espagnat recommended that he test Bell’s inequality, Aspect quickly realized why the project was interesting—and experimentally tricky. But the topic was still viewed with skepticism at the time. In fact, in a discussion at CERN, Bell recommended pursuing it only if Aspect had a permanent job, which he did: a teaching job while he finished his degree. “It was possible for a young Aspect and a young Zeilinger to pursue their projects with cover from one or two influential senior colleagues,” says Kaiser, “and that’s what it took to get them going.”
Over the course of 1981 and 1982, Aspect, Philippe Grangier, Jean Dalibard, and Gérard Roger did three tests. (See the article by David Mermin, Physics Today, April 1985, page 38.) Their experiments were similar to Clauser’s—in fact, some of his equipment was shipped from California to Paris for them to use—but improved in a few crucial regards. In Aspect and his colleagues’ first test, they excited the calcium atoms more efficiently, which boosted the pair production. In their second experiment, they opted for polarizing cubes, which transmit one polarization and reflect the other, rather than block it as in Clauser’s pile of plates. Aspect could then measure photons with both polarizations.
The third experiment was the biggest advance. In Clauser’s setup, the polarizers stayed at a given fixed angle for long periods. That design introduced what’s known as the locality loophole: information about the orientation of one polarizer, traveling at or below the speed of light, would have plenty of time to reach and influence the source and the other polarizer before the entangled photons are even emitted. A local-realist model could then still explain the measured outcome, even though it seems to violate Bell’s inequality. In the parlance of relativity, to close the locality loophole, each measurement should be space-like separated both from the setting of the other polarizer and from the emission of the particles. That is, ideally, the polarizers should be set during the two particles’ time of flight.
Aspect made strides in closing the locality loophole. Rotating the polarizers took too long to be done during the photons’ 20 ns journey from the source to the detectors 6 m away. But with the help of acousto-optical switches that alternated between transmitting and reflecting light every 10 ns, the setup could, during the photons’ flight, direct each photon to one of two possible polarizers set at different fixed angles. All three of Aspect’s measurements violated Bell’s inequality. “Aspect’s experiments were received more warmly than Clauser and Freedman’s,” says Kaiser, but largely by the small community already interested in Bell tests. “The topic was still on the margins.”
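The timing argument is easy to check with the numbers above. A back-of-the-envelope sketch (it assumes, for simplicity, that the two 6 m arms point in opposite directions, a detail the article doesn’t spell out):

```python
# Rough timing check for the locality argument, using the distances quoted
# above. Assumes the two 6 m arms are collinear, so the analyzers sit
# roughly 12 m apart.

C = 3.0e8              # speed of light, m/s
ARM_LENGTH = 6.0       # source-to-analyzer distance, m
SWITCH_PERIOD = 10e-9  # acousto-optical switching interval, s

flight_time = ARM_LENGTH / C       # photon's trip down one arm
signal_time = 2 * ARM_LENGTH / C   # light-speed signal from one analyzer to the other

print(f"photon flight time per arm: {flight_time * 1e9:.0f} ns")        # ~20 ns
print(f"analyzer-to-analyzer signal time: {signal_time * 1e9:.0f} ns")  # ~40 ns
print(f"settings change every {SWITCH_PERIOD * 1e9:.0f} ns")

# The settings change several times during the ~40 ns a light-speed signal
# would need to cross the apparatus, so neither side's current setting can
# reach the other side (or the source) in time to influence the outcome.
```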
Loop the loop
Aspect didn’t fully close the locality loophole because the measurement settings weren’t truly random. The polarizer angles were fixed, and the switching was essentially periodic. That information is available far enough in advance that one detector could, in principle, still influence the other one or the particle-pair source before emission.
The locality loophole is one of three significant loopholes in Bell tests.6 Another is what’s known as the fair-sampling loophole. No measurement detects every particle, and if too few particles are detected, the measurement could be picking a nonrepresentative sample of the photons that artificially skews the correlations. Although nature is perhaps unlikely to play such a trick, in quantum communication technologies a hacker may well try to do so. (See Physics Today, December 2011, page 20.) Detecting more than about three-fourths of the photons takes care of that loophole.
The third, the freedom-of-choice loophole, arises when the measurement settings aren’t truly free or random but instead depend on the entangled pairs’ local hidden variables through the shared history of the detectors and the particle source. Taken to an extreme, the loophole can suggest that every event in all spacetime was determined by the initial conditions at the Big Bang, an idea called superdeterminism. Such a universe would obey local realism, but at the cost of free will, among other things. Even short of that extreme, “it actually takes very little statistical correlation for Einstein-like models to yield all the predictions of quantum mechanics,” explains Kaiser.
The loopholes at first were tackled one by one. Zeilinger’s group closed the locality loophole in the 1990s in a measurement done by detectors 400 m apart whose polarizer settings were determined by an electro-optical modulator hooked up to a random-number generator. David Wineland and his colleagues closed the fair-sampling loophole in 2001 for measurements on entangled trapped ions.7
A definitive test of Bell’s inequality, however, requires simultaneously closing all three loopholes. That is easier said than done, particularly because the locality and fair-sampling loopholes are at odds: the more distance between the source and the detectors, the more photons the experiment stands to lose. Nevertheless, in 2015, three groups managed loophole-free Bell tests, the first by Ronald Hanson’s group at Delft University of Technology. (See Physics Today, January 2016, page 14.)
Zeilinger and his then-grad student Marissa Giustina performed their measurements in the Hofburg, a former imperial palace in Vienna. Zeilinger’s group has a history of performing experiments in unusual locations, including the Canary Islands and a utility tunnel under the Danube. For the loophole-free Bell test, “it was a challenge to find a good location,” says Giustina. “We pulled up a Google Maps satellite view of Vienna and looked for a spot that would be willing to let us take over for an unknown period of time and shine lasers around, with a 60-meter line of sight, stable temperatures, three-phase power, and water-chiller support for our cryostats.” The dusty basement of the Hofburg was one of the few options.
Using random-number generators and high-efficiency detectors set far apart, Giustina and Zeilinger demonstrated once again that nature violates Bell’s inequality. Sae Woo Nam and Krister Shalm of NIST, who likewise used photons for their loophole-free test, got similar results. And Hanson and his collaborators also saw correlations above the bound set by Bell’s inequality, in their case in the spin states of two diamond nitrogen–vacancy centers entangled through a trick known as entanglement swapping, which is explained below.
The freedom-of-choice loophole, however, is never fully closed. One approach to narrowing it is to pick detector settings based on phenomena that shrink the shared history of the settings and the entangled pairs. Zeilinger and researchers from several institutions, including MIT, Harvey Mudd College, and NASA’s Jet Propulsion Laboratory, conducted a pair of experiments in 2016 and 2018 in which they set the polarizers based on the observed fluctuating properties of light from distant astronomical objects. (See “Cosmic experiment is closing another Bell test loophole,” Physics Today online, 1 December 2016.) Such so-called cosmic Bell tests have pushed the most recent possible shared history between the experiment and the light used to derive the random settings back to 8 billion years ago.
More particles
Zeilinger says that his interest was never specifically in tackling loopholes. Rather, he says, “I am interested in what conceptual features quantum physics must have.” In collaboration with Daniel Greenberger and Michael Horne, “Zeilinger made the step from two entangled particles to multiple entangled particles and all you can do with that,” says Hanson. Two notable examples, depicted in figure 2, are quantum teleportation and entanglement swapping, both proposed by Charles Bennett and his colleagues in 1993.
Entanglement enables useful tricks for quantum technologies. (a) Quantum teleportation replicates a quantum state that starts with Alice. She and Bob each receive one particle of an entangled pair. Alice then does a joint measurement on the initial state and her entangled particle, and she sends Bob classical information about the outcome. With that information, Bob can then apply a local transformation to his particle to replicate the exact state of Alice’s initial particle. (b) Entanglement swapping entangles particles that have never been in close proximity. Two separate entangled pairs are sent out: particle 1 to Alice, particle 4 to Bob, and particles 2 and 3 to a central location. A joint measurement on the central particles and communication of the result leaves distant particles 1 and 4 entangled. (Figure by Freddie Pagani.)
According to the no-cloning theorem, a quantum state can’t be copied while keeping the original. But a state can be transferred if the original is destroyed in the process. In 1997 two groups—one headed by Zeilinger, then at the University of Innsbruck, and the other by Francesco De Martini—achieved just that through quantum teleportation. (See Physics Today, February 1998, page 18.) In the scheme, a fictional character, Alice, teleports a quantum state to Bob with the help of an entangled pair shared between them. Alice performs a joint measurement on the particle to be teleported and her half of the entangled pair. She then sends Bob classical information about the outcome, which tells him which transformation to apply to put his particle into the initial particle’s state.
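The logic of the protocol is easiest to see in its textbook circuit form. Below is a minimal NumPy sketch of that version (the gates and qubit labels are the conventional ones and are not a model of the 1997 optical experiments): Alice’s joint measurement is implemented as a controlled-NOT and a Hadamard followed by ordinary measurements, and Bob’s correction is a Pauli operation chosen by her two classical bits.

```python
# Minimal sketch of quantum teleportation in its textbook circuit form.
# Qubit 0 holds Alice's unknown input state; qubits 1 and 2 hold the
# shared entangled pair (1 = Alice's half, 2 = Bob's half).

import numpy as np

rng = np.random.default_rng(1)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P = [np.diag([1.0 + 0j, 0]), np.diag([0, 1.0 + 0j])]   # projectors |0><0|, |1><1|


def kron3(a, b, c):
    """Tensor product of three single-qubit operators."""
    return np.kron(a, np.kron(b, c))


# Random normalized input state |psi> on qubit 0.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

# Entangled pair |Phi+> = (|00> + |11>)/sqrt(2) on qubits 1 and 2.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, phi_plus)          # full three-qubit state (8 amplitudes)

# Alice's Bell-basis measurement: CNOT (control 0, target 1), then H on
# qubit 0, then a computational-basis measurement of qubits 0 and 1.
cnot_01 = kron3(P[0], I2, I2) + kron3(P[1], X, I2)
state = kron3(H, I2, I2) @ (cnot_01 @ state)

probs = np.array([np.real(state.conj() @ kron3(P[m0], P[m1], I2) @ state)
                  for m0 in (0, 1) for m1 in (0, 1)])
m0, m1 = divmod(int(rng.choice(4, p=probs / probs.sum())), 2)
state = kron3(P[m0], P[m1], I2) @ state
state /= np.linalg.norm(state)

# Alice sends (m0, m1) classically; Bob applies X^m1 then Z^m0 to his qubit.
bob_fix = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
state = kron3(I2, I2, bob_fix) @ state

# Qubits 0 and 1 are now in the definite state |m0 m1>, so Bob's two
# amplitudes are the corresponding block of the state vector.
bob = state.reshape(2, 2, 2)[m0, m1, :]
print(f"outcome ({m0}, {m1}); overlap with input state: {abs(np.vdot(psi, bob))**2:.6f}")
```

Run repeatedly, the overlap comes out to 1 for every measurement outcome, which is the point: Bob ends up with the input state even though only two classical bits traveled from Alice to him.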
A similar trick, known as entanglement swapping, can entangle two particles that have never directly interacted. Take two entangled pairs: particles 1 and 2 and particles 3 and 4. While particles 1 and 4 head off to their final destinations—say, distant Alice and Bob—particles 2 and 3 are both sent to Charlie. He then performs a joint measurement on those two particles that, after classical information is shared, leaves 1 and 4 entangled. The phenomenon was demonstrated in 1998 by Zeilinger and his collaborators.8
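Why Charlie’s measurement does the job can be seen from a standard Bell-state identity (the notation is the conventional qubit one; the actual experiments used polarization-entangled photons). Writing each pair as the Bell state (|00⟩ + |11⟩)/√2 and regrouping the four particles,

$$ |\Phi^{+}\rangle_{12}\,|\Phi^{+}\rangle_{34} \;=\; \tfrac{1}{2}\Bigl(\,|\Phi^{+}\rangle_{14}|\Phi^{+}\rangle_{23} + |\Phi^{-}\rangle_{14}|\Phi^{-}\rangle_{23} + |\Psi^{+}\rangle_{14}|\Psi^{+}\rangle_{23} + |\Psi^{-}\rangle_{14}|\Psi^{-}\rangle_{23}\,\Bigr). $$

So when Charlie projects particles 2 and 3 onto one of the four Bell states and announces which one he got, particles 1 and 4 are left in the corresponding Bell state, even though they have never interacted.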
“Starting in the early 1990s, people began to realize that Bell’s inequality and quantum entanglement could become a real-world resource for things like quantum encryption,” says Kaiser. That realization is part of why the quantum information field boomed. And this time, unlike during the Cold War, the physics community was not dismissive of foundational work. Entanglement, as described and understood by quantum mechanics, is now at the core of numerous current and proposed future technologies, most notably quantum computers and quantum encryption. (See the article by Charles Bennett, Physics Today, October 1995, page 24.)
Giustina, who now works at Google, explains that quantum error correction, which is an essential component of quantum computing, “stands on the foundation of Bell inequality violations and the confidence that nature consistently tends to violate Bell’s inequality.” (See the article by Anne Matsuura, Sonika Johri, and Justin Hogaboam, Physics Today, March 2019, page 40.) Quantum key distribution also relies on Bell tests to securely send messages and check that they haven’t been hacked. (See the article by Marcos Curty, Koji Azuma, and Hoi-Kwong Lo, Physics Today, March 2021, page 36.)
But the future of quantum mechanics is more than applications. “The most important issue for future research lies in questions about the foundations of quantum mechanics,” says Zeilinger. Now that Bell tests have excluded local hidden-variable theories, “we can focus on questions that have not been answered by the experiments, such as, ‘Is there a deeper theory than quantum mechanics?’ ”