A much-noted side issue in director Christopher Nolan’s 2023 movie Oppenheimer is the possibility that detonating a bomb by nuclear fission would release so much energy that it could cause a fusion chain reaction in the atmosphere. That fear was raised by Edward Teller (shown in figure 1) and spread among the concerned scientists attending a 1942 meeting at the University of California, Berkeley.
Edward Teller. Although he may have worried about the potential danger of the sudden energy release from fission-triggered fusion reactions between abundant nitrogen-14 nuclei in the atmosphere, Teller argued for such reactions to enhance the power of nuclear weapons. (Illustration by David McMacken.)
The scientists thought that a fission bomb’s detonation could so rapidly heat the atmosphere that its temperature might reach a point at which the fusion of nitrogen-14 nuclei would occur. They also considered a second possibility—that of fusion between 14N and hydrogen-1 from water vapor in the atmosphere. Hans Bethe, head of the theory division at the Los Alamos Laboratory, estimated the probability for a chain reaction as very low.
J. Robert Oppenheimer, portrayed in figure 2, discussed that obvious danger with Albert Einstein in the movie but didn’t go into the physical details. An actual meeting between the two would have been unlikely, because Einstein was not well versed in nuclear reactions or the quantum mechanics of fusion. In reality, Oppenheimer traveled by train from Berkeley to Chicago in the summer of 1942 to talk about the issues with Arthur Compton.
J. Robert Oppenheimer in typical postures—at the blackboard and with a cigarette. His goal as scientific director of the Manhattan Project was to develop a nuclear device that exploded from the fission of uranium-235 and plutonium-239. (Illustration by David McMacken.)
At the time, Compton was head of the metallurgical laboratory at the University of Chicago, which was responsible for developing reactors to breed plutonium-239. He was also the leading expert in photon scattering and the cooling of highly heated atmospheres. When Oppenheimer arrived, Compton picked him up at the train station and they drove to Compton’s vacation home on Otsego Lake in Michigan, where they discussed the question. As Compton later recalled in his book Atomic Quest, they concluded that further experiments would be necessary to confirm that a thermal runaway would not happen at atmospheric conditions.
Bethe’s estimates indicated that neither the temperature nor pressure expected during the detonation of the first fission bomb, codenamed the Trinity test, would be high enough to ignite the atmosphere. But no experimental data existed on the relevant reaction probabilities, or fusion cross sections, so such an ignition could not be deemed impossible. The Trinity test took place in July 1945, and the atomic bombs were dropped on Hiroshima and Nagasaki shortly thereafter. Despite the bombs’ tremendous damage, they did not set the atmosphere on fire.
Theory mitigates fear
The year after the test, Teller, his collaborator Emil Konopinski, and Cloyd Marvin Jr wrote a classified Los Alamos Laboratory report in which they summarized theoretical considerations on the possible ignition of the atmosphere by an atomic explosion.1 The paper, declassified in 1979, argues that nuclear burning can propagate in the atmosphere only if the energy gained from nuclear reactions exceeds the energy lost through the emitted gamma and beta radiation.
Konopinski, Teller, and Marvin considered the fusion of two 14N nuclei as the most important energy-producing reaction, because 14N is the dominant component of Earth’s atmosphere. On the other hand, compared with the stable oxygen-16 isotope, 14N nuclei can easily be broken up. The fusion of two 14N nuclei should therefore lead mainly to a rearrangement of the nucleons by the nuclear force and produce a light fragment and a heavy fragment. Energetically, the most favorable result would be breakup into an alpha particle and a magnesium-24 nucleus.
Up to 17.7 MeV of kinetic energy from the reaction can be transferred to the emitted alpha particles. Teller and coworkers approximated the cross section from the geometrical size of the 14N nuclei and corrected for the energy dependence by multiplying it by the quantum mechanical probability for tunneling through the repulsive Coulomb potential between the two positively charged nitrogen nuclei.
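The resulting cross section falls off exponentially at low energies through the Gamow tunneling factor. The short Python sketch below illustrates that style of estimate in rough terms only; the function name, the radius parameter r0 = 1.2 fm, and the sample energies are illustrative assumptions, not numbers taken from the report.

```python
import math

# Physical constants (energies in MeV, lengths in fm)
ALPHA = 1.0 / 137.036   # fine-structure constant
AMU   = 931.494         # atomic mass unit in MeV/c^2

def n14_n14_cross_section(E_cm_MeV, r0=1.2):
    """Crude 14N + 14N fusion cross section (in barns):
    geometric size times the Gamow tunneling probability exp(-2*pi*eta)."""
    Z, A = 7, 14
    mu = A / 2 * AMU                                      # reduced mass of two 14N nuclei (MeV/c^2)
    eta = Z * Z * ALPHA * math.sqrt(mu / (2 * E_cm_MeV))  # Sommerfeld parameter
    radius = 2 * r0 * A ** (1 / 3)                        # sum of the two nuclear radii (fm)
    sigma_geo = math.pi * radius ** 2                     # geometric cross section (fm^2)
    return sigma_geo * math.exp(-2 * math.pi * eta) / 100 # 1 barn = 100 fm^2

# At a few hundred keV, typical of a fireball of a few million kelvin,
# the tunneling probability, and hence the cross section, is vanishingly small.
for E in (0.3, 1.0, 5.0):   # center-of-mass energies in MeV
    print(f"E = {E:4.1f} MeV : sigma ~ {n14_n14_cross_section(E):.2e} b")
```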
Distributing the initial alpha-particle energy is most efficient in collisions with similarly heavy particles and much less efficient for electrons. A thermal distribution of nuclear fragments, characterized by the nuclear temperature Tn, can therefore be quickly established. Although the electron-gas temperature is much lower, it can also be calculated as a function of Tn. The electron gas cools by inelastic scattering and by emitting bremsstrahlung in the form of a continuous x-ray spectrum. Because the atmosphere is transparent to that radiation, the energy escapes from the heated region. Konopinski, Teller, and Marvin found that the rate of energy loss is always greater than the rate of its production by nuclear fusion. So the critical condition needed to ignite the atmosphere cannot occur.
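Stated schematically (in modern notation rather than the report’s own symbols), propagation of nuclear burning in the heated air would require the production rate to exceed the loss rate:

```latex
% Schematic energy balance in the heated air: nuclear burning can propagate
% only if fusion energy production outpaces radiative losses.
\left(\frac{dE}{dt}\right)_{\mathrm{fusion}} \;>\; \left(\frac{dE}{dt}\right)_{\mathrm{loss}}
```

The report’s conclusion is that the reverse inequality holds at all attainable temperatures.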
To capture the result quantitatively, the three scientists calculated the energy generation and radiation cooling as a function of temperature, plotted in figure 3. They defined the ratio of the rate of energy loss to its production as the “safety factor.” The figure demonstrates that the safety factor decreases at high temperatures as the energy-production curves level off. The report’s abstract points out that in the case of more powerful fission bombs, however—or even the fusion bombs that Teller had envisioned for the future—the potential danger of an ignited atmosphere remains:
The energy losses to radiation always overcompensate the gains due to reactions…. It is impossible to reach such temperatures unless fission bombs or thermonuclear bombs are used which greatly exceed the bombs now under consideration.1
A critical plot of the rate of energy production as a function of temperature (in megaelectron volts), from the originally classified 1946 Los Alamos report Ignition of the Atmosphere with Nuclear Bombs.1 Three curves characterize the energy-transport conditions for different temperatures in the nuclear fireball. The (dE/dt)C curve shows the reaction rate for the fusion of two nitrogen-14 nuclei when a constant cross section is assumed. The (dE/dt)G curve shows the 14N + 14N fusion reaction rate when the cross section is assumed to rapidly decrease at low energies, as predicted by George Gamow. And the (dE/dt)B curve shows the radiative energy loss through x-ray emission, as predicted by Arthur Compton. (From ref. 1.)
That passage reflects Teller’s foresight: The weapons community, including Oppenheimer, expected the development of more powerful fission bombs, and Teller saw the need to develop thermonuclear fusion weapons orders of magnitude more powerful.
Despite Bethe’s reassurances, the fear of an atmospheric chain reaction remained a concern throughout the entire nuclear weapons test program. The 10-fold and greater increases in fission bombs’ energy release—from the 20-kiloton Trinity test (see figure 4) to the 200-kiloton Hutch underground test in the Nevada desert in 1969 and beyond—reduced the estimated safety margin. (Bomb yields are expressed in kilotons of TNT needed to produce a comparable explosion.)
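For orientation, the yield unit converts directly to energy. A minimal sketch, using the definition that 1 kiloton of TNT equals 4.184 × 10^12 J and the yields quoted above:

```python
KILOTON_TNT_J = 4.184e12   # energy of 1 kiloton of TNT, in joules (by definition)

yields_kt = {"Trinity (1945)": 20, "Hutch (1969)": 200}
for name, kt in yields_kt.items():
    print(f"{name}: {kt} kt  ~  {kt * KILOTON_TNT_J:.1e} J")
```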
The Trinity fireball, 16 ms after ignition. That’s the moment of maximum energy release and localized heating of the atmosphere during the first nuclear weapon test. The fireball’s opacity prevented radiative energy loss, so the released heat was contained within the fireball. The lightly tinted spots on the fireball’s surface are locations where radiation emission would occur first. (Courtesy of the US Department of Energy.)
The safety margin shrank further when scientists began underwater tests, which involved higher densities and a greater hydrogen content. Of even more concern were tests of 20-megaton thermonuclear weapons (so-called hydrogen bombs), whose explosions would increase the sudden energy release by up to three orders of magnitude; scientists even considered the possibility of the fusion of 16O nuclei in ocean water.2 The uncertainties in the initial crude energy-release and cooling calculations required experimental verification.
Experiment confirms theory
To experimentally clarify the troubling situation, a dedicated accelerator was built at Oak Ridge National Laboratory in the early 1950s, which made it possible to measure fusion cross sections for 14N + 14N, 16O + 16O, and other reactions of medium-heavy nuclei.3 Alexander Zucker, one of the young scientists who was to measure the effective cross sections and who would later be director of Oak Ridge, noted that for security reasons he and other experimentalists were not directly told why there was interest in those data.
After the detonation of the Soviet 50-megaton “Tsar Bomba” in 1961 above Novaya Zemlya—a group of islands in the Arctic Circle—it became experimentally clear that the conditions required for atmospheric (or even oceanic) ignition had not been reached. (And that remains the case today.) The measurements obtained by Zucker and others demonstrated that the fusion cross section is smaller than the geometric cross section assumed by Teller and his coworkers. Because the atmosphere is heated only to temperatures of a few million degrees, the energies of the fusing nuclei—a few hundred kiloelectron volts—are well below the Coulomb barrier, and the likelihood of fusion is low.
The Oak Ridge measurements were not confined to nitrogen and oxygen nuclei; they also included light isotopes such as deuterium and tritium and were meant to inform Teller’s plans and ideas for developing the “Super,” his label for a thermonuclear weapon based on fusion. The concept of a bomb fueled by the fusion of deuterium and tritium isotopes grew out of prewar ideas and papers on hydrogen burning in the Sun, but those same deliberations triggered the fear of nitrogen burning in the atmosphere.
In 1948 Konopinski and Teller published the first theoretical prediction for the fusion probability of two deuterium nuclei,4 envisioned as bomb fuel. Those calculations were based on existing prewar measurements of the reaction, which were soon supported by experimental studies at Los Alamos. Because of the much lower Coulomb barrier between two deuterium nuclei, lower temperatures were necessary to trigger their fusion than in the case of nitrogen nuclei. That realization motivated the development of the two-step design of the hydrogen bomb. The first step—the ignition of a plutonium bomb—generated the temperature and density conditions required to trigger the second step, the fusion of deuterium fuel.
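The advantage can be seen in the height of the Coulomb barrier itself. The following back-of-the-envelope comparison is a sketch, assuming two identical touching spheres with the conventional radius parameter r0 = 1.2 fm; it is an illustration, not a weapons calculation.

```python
E2 = 1.44  # e^2 / (4*pi*eps0) in MeV*fm

def coulomb_barrier(Z, A, r0=1.2):
    """Coulomb barrier (MeV) for two identical nuclei touching at R = 2 * r0 * A**(1/3)."""
    return Z * Z * E2 / (2 * r0 * A ** (1 / 3))

print(f"d   + d   : {coulomb_barrier(1, 2):5.2f} MeV")   # roughly 0.5 MeV
print(f"14N + 14N : {coulomb_barrier(7, 14):5.2f} MeV")  # roughly 12 MeV
```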
Those million-degree temperatures are similar to ones found in the late hydrostatic burning stages of massive stars. That area of nuclear astrophysics, involving the last stages of stellar burning through the fusion of light elements, received an important impetus from the work done on the Manhattan Project and vice versa.
Astrophysics influences bomb physics
When the Manhattan Project was striving to develop a fission bomb, it was no coincidence that some of its protagonists, including Teller, were interested in questions of fusion. He had been investigating a similar astrophysical question in the 1930s: How can stars generate the energy that allows them to shine and yet remain in a state of equilibrium for long periods of time?
The method for calculating the energy-dependent effective cross sections that Teller had used to estimate the likelihood of atmospheric ignition had been developed by George Gamow,5 Teller’s colleague at George Washington University (GW) between 1935 and 1941. Gamow had left the Soviet Union for political reasons in 1933, and Teller, who was Hungarian and Jewish, came to the US in 1935 after leaving Germany two years earlier to escape the Nazi movement. Both men were interested in questions of energy production in stars, a topic that connected them with Carl Friedrich von Weizsäcker, one of Teller’s fellow students in Germany.
Gamow organized annual meetings at GW on the new questions in theoretical physics. The topic of the fourth Washington Conference on Theoretical Physics, in 1938, was the importance of nuclear physics for astrophysics, and scientists discussed the possibilities of nuclear reactions and chain reactions in stars. Bethe, another Jewish refugee from Nazi Germany, presented his ideas on hydrogen burning in stars. Weizsäcker was pursuing those ideas as well. And Gamow, knowing both of them, acted as contact and mediator between the theorists and their ideas during the years before the war.6
To calculate the fusion rates in their 1946 report, Teller and coworkers used the overlap integral between the Maxwell–Boltzmann distribution of the velocities of nitrogen nuclei and their effective cross section. That methodology had first been used in 1938 by Gamow and Teller to calculate reaction rates for evaluating stellar burning.7 Independently Bethe also used the approach to calculate the important fusion reaction rates between light nuclei that drive the energy generation of the Sun. The methodology gave rise to the standard formalism for presenting nuclear reaction rates in any kind of high-temperature environment—from bomb to star.
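That overlap integral has since become the textbook definition of the thermonuclear reaction rate per particle pair. In modern notation (a standard form, not a quotation from the 1946 report), with μ the reduced mass, E the center-of-mass energy, and σ(E) the cross section:

```latex
% Thermonuclear reaction rate per particle pair:
% Maxwell-Boltzmann average of the product sigma(E) * v
\langle \sigma v \rangle =
  \sqrt{\frac{8}{\pi \mu}} \,
  \frac{1}{(k_{B}T)^{3/2}}
  \int_{0}^{\infty} \sigma(E)\, E\, \exp\!\left(-\frac{E}{k_{B}T}\right) dE
```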
The fifth Washington Conference on Theoretical Physics, in 1939, was overshadowed by news of Otto Hahn and Fritz Strassmann’s discovery of nuclear fission, as interpreted and explained by Lise Meitner and Otto Frisch. Even before joining the Manhattan Project in 1943, Teller saw the potential applications for his experience in fusion physics and reaction-rate calculations and began to tirelessly promote nuclear fusion as a possibility for the bomb. His calculations excluded the fusion of heavier ions; the fusion of light deuterium or tritium isotopes seemed to be much more promising. The problem was the inability to produce the appropriate amount of fuel material.
Although the calculation of nuclear reaction rates in hot environments was a necessity for astrophysics, the Manhattan Project provided the opportunity to develop the theoretical methodology with which to treat nuclear reactions and thus calculate previously unknown reaction rates.
Feedback for astrophysics
After the war, when most of the physicists had returned to their university chairs and research institutes, the experience they had gained by simulating nuclear processes in a bomb explosion served to develop new ideas in nuclear astrophysics. Bethe returned to Cornell University, where he worked with his student Edwin Salpeter, a Jewish emigrant from Austria. They demonstrated that proton–proton chains, the sequence of nuclear reactions by which stars convert hydrogen into helium, are the main energy source of our Sun. The reaction sequences containing light-element fusion processes, such as 2H + 2H, were of particular interest for the thermonuclear weapons community around Teller. Predicting new reaction sequences—especially fusion and neutron-induced reactions—was of great interest for understanding the processes that drive stars and bombs alike.
Caltech became the new gathering place for the next generation of young nuclear astrophysicists. On the experimental side, William Fowler, shown in figure 5, made Caltech the international center for astrophysics research. He had been a student, postdoc, and young assistant professor there before the war, and he knew Oppenheimer well from their prewar overlap at Caltech. Oppenheimer had impressed Fowler with his report from the 1938 GW conference and with Bethe’s ideas on the carbon cycle as a key process in astrophysics.
Caltech’s William Fowler, best known for his work on stellar nucleosynthesis. The processes he considered probably drew on nuclear reaction data measured at the institute’s Kellogg Radiation Laboratory and on data deduced from bomb debris during the nuclear weapons test program. (Courtesy of Caltech, Symposium on Nuclear Astrophysics: A Celebration of Willy Fowler, 14–16 December 1995.)
Fowler developed ignition systems for nuclear weapons, including the system that abruptly and symmetrically compressed the plutonium core of the Trinity bomb, causing it to detonate. He was also involved in the development of the bomb initiator—a mixture of polonium-210 and beryllium-9 that would produce a burst of neutrons on demand. Alpha particles emitted by the polonium would be rapidly absorbed by the beryllium, producing carbon-12 and the neutron flux necessary to initiate the chain reaction in 239Pu. As part of the missile program at China Lake, California, Fowler also considered long-range missile delivery systems for nuclear weapons.8
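The neutron yield of that beryllium reaction follows from its positive Q value. A minimal check, using approximate tabulated atomic mass excesses (values rounded for illustration):

```python
# Atomic mass excesses in MeV (approximate tabulated values)
MASS_EXCESS = {"4He": 2.425, "9Be": 11.348, "12C": 0.000, "n": 8.071}

# Q value of 9Be(alpha, n)12C: initial minus final total mass excess
q_value = (MASS_EXCESS["4He"] + MASS_EXCESS["9Be"]) - (MASS_EXCESS["12C"] + MASS_EXCESS["n"])
print(f"Q(9Be(a,n)12C) ~ {q_value:.2f} MeV")   # about 5.7 MeV, released as kinetic energy
```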
Through his training in experimental nuclear physics at Caltech, Fowler became quite familiar with the issues of low-energy nuclear reactions. In 1951 he was appointed the scientific director of Project Vista at Caltech.9 The project was established for the development and study of nuclear weapons to defend the US and other NATO countries against the presumed new enemy, the Soviet Union. After the project, Fowler returned to academia and again took up his research in nuclear astrophysics.
That move was motivated by Cambridge University’s Fred Hoyle.10 In collaboration with the astronomers Margaret Burbidge and Geoffrey Burbidge,11 Fowler and Hoyle predicted numerous important reaction sequences for the origin of elements in stars and stellar explosions. Those reactions, in turn, can be traced conceptually to many of the scientific questions posed by the Manhattan Project.
The science of stars
The Manhattan Project and the subsequent test program—with its associated studies of nuclear reactions at accelerators—also stimulated the progress and development of nuclear astrophysics. It became clear that massive stars, the main producers of elements in the universe, achieve their hydrostatic equilibrium by balancing gravitational attraction against the radiation pressure arising from fusion reactions in the stellar interior. Unlike the stellar atmosphere (and Earth’s), stellar matter in the interior is not transparent to radiation.
Energy transport by radiation is important for stars, but it proceeds rather slowly. In the Sun, for example, it’s on a time scale of millions of years. As a consequence of the slow energy loss, a star can establish an equilibrium of nuclei, electrons, and radiation—all of them having the same temperature.
Hydrogen- and helium-induced reactions were being studied at Caltech and Cornell. For Manhattan Project scientists, 14N was the isotope of most concern. It plays a key role in the Bethe–Weizsäcker carbon-nitrogen-oxygen cycle of hydrogen burning and becomes enriched in the process. Before the 1950s some suggested that the enrichment might make subsequent 14N + 14N fusion important in stars. But when researchers considered light fusion reactions as an alternative energy source in the Sun, it became clear that that could not be the case.12 The phases after helium burning in massive stars proceed via carbon (12C + 12C) and oxygen (16O + 16O) fusion.
Those fusion reactions dominate the final years in the life of a massive star. And through the release of protons, neutrons, alphas, and an intense flux of high-energy photons, a complicated network of different nuclear reactions emerges, producing heavy elements up to iron and nickel. In that mass region, nuclear fusion vanishes as a stellar energy source, and the star’s core collapses under its own weight. The collapse triggers a supernova explosion that releases many of the nuclei produced during the star’s life. The physics of the supernova explosion was Bethe’s focus during the last decades of his life. The final product of the explosion is either a neutron star consisting of extremely dense nuclear matter or a black hole. The maximum mass of neutron stars was first estimated in 1939 by Oppenheimer and his student George Volkoff.13
Theoretical and experimental studies associated with the Manhattan Project and subsequent developments thus have largely informed the nuclear astrophysics community’s effort to understand and interpret the development and life of stars—from their beginning as low-density interstellar dust to their end.
The test program’s impact
In addition to providing insights into the physics of fusion reactions between charged nuclei, the nuclear weapons test program opened a new path to understanding neutron-reaction physics through its observations, calculations, and analyses of test materials. With his knowledge of neutrons as the initiators of plutonium-bomb explosions, Fowler recognized the possibility of similar alpha-induced neutron sources in stars. In 1937 Weizsäcker was the first to predict neutron-capture processes as a way to produce the heavy elements observed in nature. And on the basis of the formalism developed by Gamow and Teller, Fowler and his coworkers later calculated the reaction rates for neutron-induced processes.14
Fowler’s effort was further motivated by the discovery of the heavy elements fermium (atomic number 100) and einsteinium (atomic number 99) deposited on the corals of the Enewetak Atoll in the Marshall Islands and in the filters of an observation aircraft after the first hydrogen-bomb test, Ivy Mike, in 1952. Those discoveries led to the first thoughts on rapid neutron capture, known as the r-process, occurring in type Ia supernovae.15
Those thoughts, however, turned out to be hasty. After long-lived transuranic elements were identified in the debris of the 1964 Par and Barbel tests, Hoyle and Fowler expanded their model of the r-process to core-collapse supernovae.16
Returning to the initial question of reactions between 14N nuclei in our atmosphere, one reaction was not considered in the original analysis by Konopinski, Teller, and Marvin: The enormous neutron flux released by fission does interact with atmospheric 14N, yielding 14C. That reaction is naturally triggered by the steady bombardment of the atmosphere by cosmic rays; the enormous release of neutrons by a nuclear bomb explosion only multiplies the effect.
The long-lived 14C, or radiocarbon, produced by nuclear tests is clearly seen in the so-called radiocarbon bomb peak—a doubling of the relative isotopic concentration of 14C in the atmosphere in the 1960s. Atmospheric radiocarbon nevertheless decreased rapidly, because through the biological carbon cycle the isotope is absorbed by plants and remains locked in biological materials for thousands of years. The bomb peak today enables a wide range of analytical studies using the radiocarbon method.17 Thus, the radiocarbon that remains in our bodies is a long-lasting sign of the nuclear weapons hubris that Oppenheimer tried to warn us against.18
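That decline cannot be attributed to radioactive decay. A minimal sketch, assuming the 14C half-life of about 5730 years and roughly 60 years since the bomb peak, shows that decay accounts for less than 1% of the change; the drawdown instead reflects uptake through the carbon cycle, as described above.

```python
import math

HALF_LIFE_YR = 5730.0                   # half-life of 14C in years
LAMBDA = math.log(2) / HALF_LIFE_YR     # decay constant per year

years_since_peak = 60                   # roughly from the 1960s bomb peak to today
surviving_fraction = math.exp(-LAMBDA * years_since_peak)
print(f"Fraction of bomb 14C surviving after {years_since_peak} yr: {surviving_fraction:.4f}")
# About 0.993, so less than 1% is lost to decay; the atmospheric drop reflects carbon-cycle uptake.
```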
References
Michael Wiescher ([email protected]) is the Frank M. Freimann Professor of Physics at the University of Notre Dame in Indiana. Karlheinz Langanke is the former research director at the GSI Helmholtz Center for Heavy Ion Research in Darmstadt, Germany.