This paper reviews the many twists and turns in the long journey that culminated in ignition in late 2022 using the laser-heated, indirect-drive approach to imploding DT-filled targets at the National Ignition Facility (NIF), located at the Lawrence Livermore National Laboratory (LLNL). We describe the early origins of the Laser Program at LLNL and key developments such as the paradigm-shifting birth of high energy density physics (HEDP) studies with lasers, changes in the choice of laser wavelength, and the development of key diagnostics and computer codes. Fulfilling the requirements of the multi-faceted Nova Technical Contract was a necessary condition for the approval of the NIF, but more importantly, the end of the Cold War and the cessation of nuclear testing were key catalysts in that approval, along with the ready-and-waiting field of HEDP. The inherent flexibility of the field of laser-driven inertial confinement fusion played a fundamental role in achieving success at the NIF. We describe how the ultimately successful ignition target design evolved from the original “point design” target through the lessons of experiment. All key aspects of that original design changed: the capsule's materials and size; the hohlraum's materials, size, laser entrance hole size, and gas fill; and the laser pulse shapes that accompany all those choices. The philosophy of globally optimizing performance for stability (by raising the adiabat and thus lowering the implosion convergence) was also key, as was progress in target fabrication and in increasing NIF's energy output. The persistence of the research staff and the steadfast backing of our supporters were also necessary elements in this success.
We gratefully acknowledge seven decades of researcher endeavors and four decades of the dedicated efforts of many hundreds of personnel across the globe who have participated in NIF construction, operation, target fabrication, diagnostic, and theoretical advances that have culminated in ignition.
I. INTRODUCTION
On December 5, 2022, researchers used 2.05 MJ of laser energy from the National Ignition Facility (NIF), located at the Lawrence Livermore National Laboratory (LLNL), and aimed it into a “hohlraum”: a cylinder of high-Z material whose walls were made from depleted uranium (DU) lined with a thin layer of gold. At the center of that hohlraum was the capsule: a 2-mm-diameter shell of high-density carbon (HDC) of thickness ∼170 μm. Inside that shell was a concentric shell: a 75-μm-thick layer of frozen DT. The laser pulse was directed at the walls of the hohlraum, not at that central capsule. In this “indirect drive” approach, the laser light was absorbed on the walls, and electron conduction deeper into the walls heated an over-critical plasma to a temperature sufficient for it to serve as the source of x rays, which were then absorbed and reemitted by the hohlraum walls, creating a bath of x rays that ablated the outer surface of the capsule. In a rocket-like reaction to that ablation, the x rays generated by a temporally shaped laser pulse launched a sequence of shocks into the imploding capsule that kept the DT fuel on an adiabat reasonably close to that of a Fermi-degenerate plasma. Figure 1 (whose main focus is the difference between the two shots) should give the reader a sense of the hohlraum, the capsule, and the temporal shapes of the laser and radiation drives that were involved in this achievement.
When this ablatively driven rocket implosion stagnated upon itself, a central hotspot was formed. The imploding shell did compressional heating (like a piston in a car engine), applying pressure to a decreasing volume (“PdV” work) of the hotspot and raising its temperature to ∼5 keV. The size and density of the hotspot (characterized by the product of density and radius, the “hotspot ρR”) were sufficiently large to trap/confine the alphas produced by the DT reaction so that they could further heat the hotspot. The dense shell stagnating on this hotspot was of sufficient inertial “quality” (characterized by the “total ρR”) to confine this already hot assembly for a sufficiently long time that the thermonuclear heating wave could propagate into the dense fuel. This process resulted in even more fusion, and more heating, raising the assembly to ∼10 keV and propagating the burn further into the dense surrounding shell of DT. This process, in which “Mother Nature” takes over from human efforts (namely, the PdV work of the implosion) and the temperature “runs away” due to this thermal instability (more fusion, via alpha deposition, heats the plasma, which results in even more fusion …), is, in essence, what ignition really is. At NIF scales, this process will inevitably lead to fusion yields in excess of 1 MJ. On the December 5th shot, while this high-pressure system exploded and disassembled, a total of 3.15 MJ of fusion energy was produced.1–5
This first-time-ever achievement of more fusion energy produced than the incident laser energy that entered the target “officially” met the definition of ignition, as put forward in a 1997 report by a committee formed by the National Academy of Sciences (NAS).6 This operational definition (Gain > 1) was chosen to avoid any controversies and lack of consensus over the definition as it existed at that time. The Department of Energy (DOE) and the specific agency within it that funds this research, the National Nuclear Security Administration (NNSA), adopted this criterion. The tale of achieving a goal that the inertial confinement fusion (ICF) community of researchers had sought for over 50 years is a fascinating journey of ups and downs and the amazing perseverance of its adherents and practitioners. It is only now, after ignition has been achieved, that the Inertial Fusion Energy (IFE) community has had the full impetus to pursue high gain target strategies, high efficiency and high rep-rate lasers, and all of the other components necessary to fill out the IFE portfolio.
This paper will review the key steps along the way that led to this historic achievement of ignition. It will be told in the first-person voice, as I have been an eyewitness and participant in this journey for nearly 50 years, since the mid-1970s. Throughout this exposition, I will supplement the technical material with personal anecdotes that relate to the events and personalities involved in this long story. This paper will not be a detailed description of every avenue that was pursued, but will focus on those steps that pushed the process along.
In Sec. II, I will explore the deep roots of the field of ICF from the early 1950s and show the commonality of its roots with the magnetic fusion energy (MFE) efforts. In Sec. III, I discuss the work of John Nuckolls and colleagues at LLNL in formulating the ICF problem/challenge in the late 1950s, pre-dating the invention of the laser. I will also discuss the early laser work at LLNL. In Sec. IV, I discuss important strategic changes in direction in response to hard earned lessons learned from these earlier laser systems. One outgrowth of these changes was the invention of the field of high energy density physics (HEDP). I also discuss the motivations that led to the proposal to build the NIF and the specifics therein. In Sec. V, I discuss the efforts to get the NIF approved, in the 90s, and the issues involved in its actual construction. In addition, I discuss important work done in the 2000s to learn new physics and to prepare crucial new experimental platforms in preparation for the NIF completion and the onset of the National Ignition Campaign (NIC), a formal program that was to expire in 2012.
In Sec. VI, I describe a series of surprises and challenges that were uncovered as the NIF started its ignition attempts. Some solutions and eventual explanations for these surprising results will also be discussed. In Sec. VII, I present how the program utilized, what I like to call, “ICF's superpower,” namely, its ability to change, adapt, and innovate due to its inherent flexibility. These changes in direction led to steady improvements in target performance. In Sec. VIII, I describe how these efforts culminated in ignition. Section IX briefly covers future directions, and Sec. X looks back at all of this and suggests “lessons learned.”
II. DEEP ROOTS: THE EARLY 1950s
I choose to begin this recounting of the history of ICF ignition by looking at events in the very early 1950s. (Coincidentally, that is when my personal history has its early start, as I was born in 1951). In reaction to the 1949 Soviet Union's demonstration of an atom bomb, U.S. President Truman gave the go-ahead to develop a more powerful hydrogen bomb. At Los Alamos, New Mexico, the single U.S. nuclear weapons design laboratory of that time (now called Los Alamos National Laboratory [LANL]), no one had a sure-fire way of designing one. The great physicist John Archibald Wheeler, who, with Niels Bohr had earlier made important contributions to understanding which isotopes of uranium fissioned, wished to make contributions to the H-bomb effort as well. He received permission from Los Alamos to set up a “second lab” at his home institution, Princeton University, in Princeton, New Jersey, in order to so contribute.
A recent paper by Chadwick et al.7 delves even deeper into the roots of DT fusion and traces it back to work in the late 30s and early 40s. In the late 30s, A. J. Ruhlig from the University of Michigan published work on DD reactions. Since tritium is produced in half of those reactions, a secondary DT reaction can ensue. The observation of high energy neutrons implied a very high cross section for the DT reaction. The paper argues that Emil Konopinski was aware of this result and brought it up in the initial discussions in the Manhattan Project that considered a hydrogen/fusion bomb. The paper then traces the creation of some tritium at Berkeley by Emilio Segre and colleagues at the specific request of J. Robert Oppenheimer and Hans Bethe. That tritium was sent to Purdue University where the very first cross sections of the DT reaction were measured in 1943. They showed the reactivity was ∼100× that of DD. This result gave great impetus to the H-bomb effort.
So, back to the 1950s. Wheeler sent Lyman Spitzer to Los Alamos to collect data upon which the nascent effort at Princeton would base its efforts. In those days, cross-country travel was by train, and Spitzer got off the train in Aspen, Colorado, to indulge in his favorite hobby, snow skiing. While on the chair lift, he noticed how the cable wires twisted and was seized with a “eureka” moment concerning how to confine particles in a toroidal geometry, where they drift upward due to a grad-B drift: twist the B field lines so that the “starting point” of the particles' drift would alternate between the top of the torus and the bottom. From the bottom of the torus, they would drift “up” into the center of the plasma and thus be confined. This was the idea behind the stellarator. So enamored with this idea was Lyman that he got right back on the train and returned to Princeton, having never made it to Los Alamos.
As a result, Wheeler's effort at Princeton bifurcated. Matterhorn-S became Spitzer's stellarator project, which evolved into the present-day Princeton Plasma Physics Laboratory (PPPL). When I was a graduate student at PPPL in the early 70s, preprints were still being numbered as “MATT-nnnn.” The other half of Wheeler's effort, Matterhorn-B, remained the original project to help with Los Alamos' H-bomb developments. These developments are described by an active participant in this latter effort, Ken Ford, a graduate student of Wheeler, who also helped Wheeler write his autobiography Geons, Black Holes, and Quantum Foams: A Life in Physics.8 Ford also wrote a much shorter version for Physics Today.9
The Matterhorn-B project was relatively short lived. During that short time, it was evaluating a more difficult route to a fusion device. The Teller–Ulam scheme was invented at Los Alamos in which a “primary,” namely, a fission driven atom bomb, emits a robust radiative output, which would be trapped in a large hohlraum, and drive a “secondary,” a separate hydrogen fusion bomb. This concept was successfully tested and thereby obviated the immediate need for a second lab.
There has been much said about the Teller–Ulam invention. Given that it has been declassified, the carefully worded (pre-declassification) published descriptions10 of it can clearly be understood now. Ulam had the idea of separate devices, and Teller adapted it and improved upon it, specifically with regard to radiation as the instrument of energy transfer. I recall, in the early 1990s, being in Edward Teller's office at LLNL. While I do not recall the specific reason for that particular visit, I used the opportunity to question Dr. Teller as to his opinion about another controversy: the question of whether Heisenberg had deliberately slowed down the Nazi effort to produce an atom (fission) bomb, or had simply “missed the (technical) boat” and not understood that a compact device could achieve what it eventually did in the Manhattan Project. In retrospect, it was no surprise that Teller vigorously defended Heisenberg and insisted that he had purposely sabotaged the Nazi efforts. It was no surprise because Heisenberg had been Teller's thesis advisor, and, as such, there was a special bond between the two of them.
While speaking to Teller on this subject, we were interrupted by a phone call. It was a reporter (from Time magazine, I believe) who wanted to hear, first hand, from Teller as to who should take priority on the Teller–Ulam invention. Teller replied as to how he, Teller, had really nurtured the idea from concept into reality. In that sense, some might say that Ulam was the “Father of the H-bomb” and Teller was the “Mother of the H bomb.” Since the iron curtain had recently fallen, Teller added the following remark to wrap-up the interview. Alluding to the fact that Stan Ulam was a Jewish refugee from the Nazis from Poland, and that Teller was a Jewish refugee from the Nazis from Hungary, and that both countries had, until recently, been behind the iron curtain with curtailed freedoms, Teller remarked that the two countries were now “free” to go to war with each other over who should get credit for the invention!
Two prominent designers of the first successful test of the Teller–Ulam invention were both graduate students at the University of Chicago, who worked at Los Alamos in the summers: Richard Garwin and Marshall Rosenbluth, both now recognized as giants in their fields. The second reason for the short duration of Matterhorn-B was that Edward Teller and E. O. Lawrence wished to pursue further ideas on the subject, so they started a rival weapon design lab at Livermore, California, which is now LLNL. For quite a long time, the Lawrence Berkeley Lab (where Lawrence built his cyclotrons) was known as site 100, while Livermore was called site 200. An explosive testing facility even further to the east, near Tracy, CA, was established, known to this day as site 300. Some of Wheeler's crew went West to join that Livermore effort, thereby diluting the Princeton effort and accelerating its demise.
In my career, I was privileged to interact with both Dick Garwin and Marshall Rosenbluth.
I met Marshall rather early in my career, as he was one of four (late) professors teaching the first year graduate course in plasma physics at Princeton, the others being Harold Furth, Paul Rutherford, and John Dawson. Marshall's thesis advisor at Chicago was Edward Teller, and his long list of contributions to the field throughout his illustrious career earned him the sobriquet of “the pope of plasma physics.” Once, during my graduate years, Marshall gave a seminar at PPPL's theory wing, and “temporarily” put his pipe into his jacket pocket while he spoke. Probably none of the attendees can remember the specific topic, since we were all mesmerized by the column of smoke rising from his pocket throughout his lecture.
In the 1990s, Marshall was part of the advisory group reviewing our ICF research progress in the context of giving a go-ahead for the NIF (all of which will be described later in this report). When I described the state of the laser-heated gold walls of the hohlraum (whose low density blowoff reached a temperature of several kiloelectron volts) as being stripped of 51 of its 79 electrons, leaving 28, thus creating a “nickel-like” ionic state, Marshall remarked: “Well, that's your problem right there, Mordy; you're trying to turn gold into nickel, when you should be trying, like the ancient alchemists, to turn nickel into gold!” In any event, Marshall was a great friend of ICF, and we owe him much gratitude.
I met Dick Garwin much later in life, though his reputation preceded him. His thesis advisor at Chicago was Enrico Fermi, who is reported to have said that Dick was the only true genius that he had ever met.11 This is extraordinarily high praise from Fermi, who certainly interacted with John Von Neumann throughout the Manhattan Project! Dick, operating from a base of being an IBM fellow, has had a distinguished career of advising the U.S. government on a variety of breakthrough technologies. Like Marshall, Dick was a constant fixture at the JASON group, which advises the U.S. government on technical issues. It was composed of academics who could spare the time in the summers, hence the name JASON: July, August, September, October, November. The group was founded by John Wheeler, who had wished to form a permanent group as an outgrowth from Matterhorn-B, but he settled for a summer study group. At the JASON meetings, Dick had a legendary modus operandi, in which he would quickly rifle through the presenters' view-graphs (back in the day when they were plastic foils, not powerpoint slides) before the talk was to be given, and that would suffice—it was rare that he would feel the need to sit through the presentation. Imagine my surprise then, in the mid-2000s, when I was briefing the JASON group on my contribution to understanding energy balance in nuclear events, that Dick sat through the entirety of my presentation. He asked probing questions, all of which I was lucky enough to have anticipated and thus had prepared answers for. Freeman Dyson, another JASON member, later remarked to me that it was the most interesting piece of physics in the nuclear weapons realm that he had heard in over 40 years. The next day Dick pulled me into a side room and on the white board sketched a completely independent way of thinking about the problem and rederived my results. Indeed, that is a good description of his genius.
In the early to mid-2000s, both LANL and LLNL were still jointly run by the University of California (UC). As such, oversight committees appointed by UC operated to evaluate the quality of science being done at each lab. I was appointed chair of such a committee evaluating the physics department of LANL. That department comprises an impressively diverse set of activities, including ICF (my connection to this enterprise), as well as high energy particle physics, nuclear physics, and bio-physics. The oversight committee was truly an all-star team from these many fields, and it included the late Stuart Freedman, the experimental nuclear physicist from the Lawrence Berkeley Lab, whose experiment with John Clauser on Bell's inequality led to the latter's Nobel Prize in 2022, and Andy Viterbi, the visionary technologist who co-founded the cellular giant, Qualcomm Inc. Also on the committee was Dick Garwin. Dick spent most of his time, it seemed, busy answering emails during all of the presentations. However, I will never forget the time a LANL researcher was presenting his work on brain wave imaging using a wired helmet, for review by our committee, and mentioned the fact that he was having difficulties getting a good signal. Dick looked up from his emails and asked the presenter to project his circuit diagram. Immediately, Dick pointed to a place on the diagram where he said the researcher needed to add a pre-amp, and then went back to his emails. Of course, Dick proved to be correct!
Despite its relatively short duration of operation, the Matterhorn-B effort did have a lasting impact. A construct that has lasted to this day, from that effort, is the so-called “Wheeler Diagram.” Its purpose is to delineate a path in parameter space on which the dT/dt of the system stays positive, allowing the thermal instability we call ignition to occur. Early examples of this, applied to ICF, appear in Kirkpatrick and Wheeler,12 where the y-axis was T and the x-axis was density. Later, M. Widner (Ref. 11 of our Ref. 12) changed the x-axis to the density-radius product ρR. In Fig. 2(a), we show such diagrams as they appear in John Lindl's review article13 some 45 years later, as well as, in Fig. 2(b), in a more recent article by Annie Kritcher and co-workers148 showing the system trajectories of non-igniting to nearly or fully igniting targets.
In that same Lindl review article, he presents my results for an analytic solution to a non-linear differential equation that traces a path through the Wheeler Diagram space of T vs ρR, in the regime early in the implosion when PdV heating dominates and electron conduction is the dominant cooling mechanism. This T ∼ (ρR v_imp)^(2/5) trajectory is a “stable attractor” in this phase space. Less than a week after I came up with this solution (and I did not particularly spread the news of this result around), I was quite surprised to get a phone call from Dr. Teller's office. He had to respond to an inquiry from DOE headquarters on the distinctions of ICF ignition from the workings of a nuclear weapon. I went to his office and showed him this result, which he subsequently used in his successful defense of keeping ICF an open avenue of inquiry and research. How Teller knew that I had just come up with this formula, I'll never know.
In Fig. 3, we show a “class photo” from Matterhorn-B, as shown in Ref. 9. One notes the “climber's rope” on the left in row 2, a homage to scaling the Matterhorn. Wheeler and Ford are on the right of row 2. Some other prominent alumni are in the top row. On the left is David Layzer, who was an astrophysicist at Harvard and who did work relevant to ICF regarding the Rayleigh–Taylor instability (RTI). Fourth from the left is Edward Frieman. When I was a graduate student at PPPL in the early to mid-70s, Ed was my second-year advisor and later a reader of my PhD thesis, and he served as PPPL's deputy director. He later went on to head the Office of Science at DOE, and after that was the Director of the Scripps Institution of Oceanography in San Diego. His contributions to national security include his work on numerous very high-level advisory committees on technical projects, arms control, and climate impact, including a leadership role with the JASON group.
In that same top row, second from the right, is an individual only labeled as “unidentified,” as his head is partially obscured by Wheeler's. I can identify that person as Carl Haussmann. Shortly after that photo was taken, Carl was one of those who left the project and joined up at the newly established Livermore lab.
Carl went on to have a distinguished career at LLNL, and he retired from there as a deputy director at large. One of Carl's most impactful contributions was to lead a group of designers who came up with a fundamental breakthrough in weapons design. This was one of two breakthroughs (the other by Johnny Foster) that allowed the lab to design a working system to fit on the Polaris submarine launched missile, and thereby ensuring deterrence for many decades. When Edward Teller first announced that LLNL could accomplish this (at the time) seemingly impossible goal, there was great skepticism that it could be done. Those two basic contributions have impacted all systems in the current U.S. stockpile. The success on the Polaris project firmly established the reputation of LLNL. To some degree, this history of achieving ignition on the NIF, despite its many doubters and skeptics, has analogies to this Polaris story. In later years, Carl paid attention to the environmental state of the LLNL's physical layout, and, to this day, at the center of LLNL is a lake, surrounded by a natural eco-system, named after him.
How can I be sure that Carl is a partially obscured individual? Well, in 1995, I gave a Physics Colloquium at the Princeton University Physics Department in Jadwin Hall, speaking about our plans for the not yet built NIF. Before the seminar, I met with John Wheeler, then in his mid-80s, who asked after the welfare of Carl. He remarked that I should give his regards to Carl, with a quote that “Carl was a good boy.” By chance, a week later at LLNL, I ran into Carl and gave him the regards. He, near his retirement, was rather amused at being called a “boy”! A few years later, Carl passed away, preceding Wheeler in death by a decade.
The “tradition” started by Carl, in the early 50s, of migrating from PPPL to LLNL was greatly reinforced in the early 70s when the laser fusion effort at LLNL was just getting off the ground.
First, when LLNL needed an associate director to put down firm roots for its nascent laser program, Director Michael May appointed one Carl Haussmann to lead it! A partial list of PPPL graduates to head west and join this effort is as follows. We list them by decade and, for later years, include those who entered ICF programs in labs other than LLNL, as well.
- 1960s: S. Bodner, D. Forslund, B. Langdon, W. Kruer
- 1970s: C. Max, J. Lindl, E. Williams, M. Rosen, M. True
- 1980s: R. Chrien, C. Barnes, D. Ho, D. Meyerhofer, T. Murphy, C. Keane, P. Beiersdorfer
- 1990s: D. Ward, D. Roberts, M. Herrmann, H. Herrmann, R. Heeter
- 2000s: M. Karasik, S. Hsu, Y. Ping, D. Clark, A. Sefkow, L. Berzak Hopkins
- 2010s: J. Kallman, L. Petersen, J. Baumgaertel, P. Schmit, J. Mitrani, S. Davidovits, Y. Shi
This list is not meant to exclude many other Princeton University graduates from other departments who have also made large contributions to the ICF program, nor, of course, is it meant to exclude the many other outstanding institutions of higher learning which have contributed talented and dedicated staff from which the ICF Program has benefited enormously.
III. THE EARLY DAYS OF THE LLNL ICF/LASER PROGRAM
A. The very early days
In the late 1950s, John Nuckolls of LLNL considered how to apply weapon research to civilian applications. Starting from the above-mentioned Teller–Ulam scheme, clearly two major changes needed to be made to make that happen. First, the fission primary needed to be eliminated altogether, and be replaced by a non-nuclear “driver” of some kind that would heat the hohlraum. Second, the hydrogen/fusion secondary would have to be considerably reduced in size to allow for containable, sustainable fusion outputs, suitable for a civilian power plant.
Nuckolls14 envisioned, for example, a megajoule-class particle beam entering a ∼1 cm size hohlraum and imploding a radiation-driven, several-millimeter-size DT capsule. These choices were remarkably prescient, given that these are precisely the scales of driver, hohlraum, and capsule, as discussed in the Introduction, that achieved ignition on the NIF some 65 years later. Moreover, that vision predated the invention of the laser by about half a decade.
As will be described below, I was hired by John Nuckolls and worked for him for many decades. In my life as a physicist, I have encountered many extraordinary and brilliant colleagues. However, I define “genius” as someone who thinks about things in completely divergent ways, and I have encountered two: Dick Garwin and John Nuckolls. John's actual achievements, as well as some of the mind-blowing ideas that I have heard from him (almost all classified), qualify John for that description. I recall one long week in which interacting and collaborating with John on a specific piece of ICF physics caused my head to ache. Every day, John would explain how he got to a certain result in a surprising and unique way, and I would spend the rest of the day trying to reconstruct that result using “normal” physics thinking. A week of such struggles was all my brain could take.
Once the laser was invented, it was clear that research should be pursued that considered it as the driver of choice for ICF. Nascent efforts at laser building at LLNL began, under the tutelage of such pioneers as Ray Kidder, Sterling Colgate, and Yu-li Pan. As the efforts became more serious, Director Mike May created a laser directorate and, as mentioned above, named Carl Haussmann as LLNL's first associate director for lasers.
One of Carl's lasting contributions to the development of the field was to go out and hire John Emmett from the Naval Research Lab (NRL). John was a powerhouse of a laser builder, and he created a world class team of laser expertise around him, including Bill Krupke, John Trenholme, John Murray, John Holzrichter, John Hunt, Jeff Paisner, Abe Szoke, Julius Goldhar, Paul Wegner, and Jack Campbell. The team of Nuckolls and Emmett led a two-decade effort of unprecedented progress and growth in building ICF strategies and target designs along with their concomitant lasers.
B. The Nature paper
In 1972, the field of ICF emerged from the shadows of the nuclear weapon design world with the publication15 of the Nature paper by Nuckolls, Wood, Thiessen, and Zimmerman. The key concept revealed in that paper was the necessity of pulse shaping the drive so that the DT fuel would stay as cold as possible, even as the pressure on it built up. This would help the fuel get as dense as possible, thus minimizing the required driver energy. To wit, it meant that the fuel would follow, as closely as possible, the Fermi-degenerate isentrope, P_FD = k ρ^(5/3), where ρ is the density of the fuel and P_FD is its minimum (“quantum”) pressure. We define α = P/P_FD, where P is the actual pressure of the fuel. Thus, α = 1 represents the best one can do, and a value greater than unity represents a surrender to practicalities that might sometimes be required. In this story of the road to ignition, we will see many important instances of such compromises along the way.
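As a concrete illustration of the adiabat α, consider a minimal numerical sketch. It uses the commonly quoted approximation P_FD ≈ 2.2 ρ^(5/3) Mbar for DT (ρ in g/cm^3); the coefficient is approximate, and the fuel conditions below are illustrative round numbers, not actual design values.

```python
# Hedged sketch: estimate the fuel adiabat alpha = P / P_FD for DT.
# Assumes the commonly quoted approximation P_FD ~ 2.2 * rho^(5/3) Mbar
# (rho in g/cm^3); coefficient and example conditions are illustrative.

def fermi_pressure_mbar(rho_g_cc: float) -> float:
    """Approximate Fermi-degenerate pressure of DT, in Mbar."""
    return 2.2 * rho_g_cc ** (5.0 / 3.0)

def adiabat(p_mbar: float, rho_g_cc: float) -> float:
    """alpha = P / P_FD; alpha = 1 is the cold Fermi-degenerate limit."""
    return p_mbar / fermi_pressure_mbar(rho_g_cc)

if __name__ == "__main__":
    # Illustrative compressed-fuel conditions: 400 g/cm^3 at 100 Gbar.
    rho, p = 400.0, 1.0e5  # g/cm^3, Mbar (1e5 Mbar = 100 Gbar)
    print(f"P_FD  = {fermi_pressure_mbar(rho):.3g} Mbar")
    print(f"alpha = {adiabat(p, rho):.2f}")
```

The point of the sketch is the steep ρ^(5/3) dependence: keeping the fuel cold (α near 1) lets a given drive pressure reach a much higher density than a preheated (high-α) fuel.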
The reason for the need for very high fuel densities is (at least) twofold:
First: To ignite the hotspot in the implosion, we need the fuel to have a requisite ion temperature, Ti, of order 5 keV (and for a dense enough hotspot, Te and Ti are nearly equal) to get the fusion rate going robustly, and a density-radius product (ρR) of at least 0.3 g/cm^2 in order to stop the alphas produced by those reactions, so that they may further heat the hotspot fuel to ignition. Thus, the energy of the hotspot E_HS ∼ M_HS T ∼ ρ R^3 T ∼ (ρR)^3 T/ρ^2. The numerator, as just explained, must be at least a fixed amount, (0.3)^3 (5). Thus, the leverage lies in making the ρ in the denominator as large as possible to minimize the energy needed to ignite the hotspot. Minimizing that energy minimizes the size/energy of the driver, and thus the cost.
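The scaling in this paragraph can be made concrete with a short numerical sketch. The density, ρR, and temperature below are illustrative round numbers of the right order for an ICF hotspot, not actual shot values:

```python
import math

# Hedged sketch of the hotspot-energy scaling E_HS ~ (rho*R)^3 * T / rho^2.
# All input numbers are illustrative round values, not actual NIF shot data.

M_P = 1.6726e-24        # proton mass [g]
KEV_TO_ERG = 1.6022e-9  # 1 keV in erg

def hotspot_energy_kj(rho=100.0, rho_r=0.3, t_kev=5.0):
    """Thermal energy of a DT hotspot with density rho [g/cm^3],
    areal density rho_r [g/cm^2], and temperature t_kev [keV]."""
    r = rho_r / rho                    # radius [cm] from rho*R
    volume = (4.0 / 3.0) * math.pi * r ** 3
    n_ion = rho / (2.5 * M_P)          # DT average ion mass = 2.5 m_p
    # (3/2) kT per ion plus (3/2) kT per electron (n_e = n_i for DT):
    energy_erg = 3.0 * n_ion * t_kev * KEV_TO_ERG * volume
    return energy_erg * 1.0e-10        # erg -> kJ (1 kJ = 1e10 erg)

if __name__ == "__main__":
    e1 = hotspot_energy_kj(rho=100.0)
    e2 = hotspot_energy_kj(rho=200.0)  # same rho*R and T, twice the density
    print(f"E_HS ~ {e1:.1f} kJ")       # a few kJ for these assumed numbers
    print(f"doubling rho cuts the required energy by {e1 / e2:.1f}x")
```

Doubling ρ at fixed ρR and T cuts the required hotspot energy by a factor of four, which is exactly the 1/ρ^2 leverage described above.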
Second: The basic confinement of ICF is inertial: the hot and burning imploded core will disassemble on a timescale of its final radius, R, divided by a sound speed.16 To maximize yield, the fusion rate (proportional to ρ) must be fast compared to that timescale. In short, we seek to maximize the total ρR of the system to achieve a good burn-up of the fusion fuel and a high efficiency output. At a fixed target mass, M, a compressed sphere has R ∼ (M/ρ)^(1/3), so ρR ∼ M^(1/3) ρ^(2/3); thus, again, a large ρ is needed for good ICF performance.
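To see why a large total ρR pays off in burn-up, a widely used rule of thumb estimates the burned fraction as φ ≈ ρR/(ρR + H_B), with H_B ≈ 6-7 g/cm^2 at burn temperatures of a few tens of keV. A minimal sketch, assuming the round value H_B = 6 g/cm^2 (H_B is temperature dependent, so this is an approximation, not a design formula):

```python
# Hedged sketch of the standard burn-up-fraction rule of thumb
# phi ~ rho_R / (rho_R + H_B). H_B ~ 6 g/cm^2 is an assumed round
# value appropriate near burn temperatures of a few tens of keV.

H_B = 6.0  # g/cm^2 (assumed)

def burn_fraction(rho_r: float) -> float:
    """Approximate fraction of DT fuel burned, given total areal
    density rho_r [g/cm^2]."""
    return rho_r / (rho_r + H_B)

if __name__ == "__main__":
    for rho_r in (0.3, 1.0, 3.0):
        print(f"rho*R = {rho_r:4.1f} g/cm^2 -> burn-up ~ {burn_fraction(rho_r):.0%}")
```

The sketch shows the incentive for compression: a total ρR of a few g/cm^2 burns a usefully large fraction of the fuel, while a low-ρR assembly disassembles having burned only a few percent.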
The Nature paper invoked the results of the two-dimensional (2D) hydrodynamics code Lasnex. My colleague (to this day), George Zimmerman, was the creator of this code. In preparation for that paper, there were many simulations run to optimize the target performance. The printout from each simulation run would also report on the amount of time the problem took to complete. One early designer mistook that “timing” printout as a report of the yield, so he optimized on that rather than yield, until George set him straight. I consider George a rare “National Treasure.” His ability to integrate diverse fields of physics and to understand each field so deeply and fundamentally is off the charts. I measure my growth as a physicist by the degree to which I increasingly come to appreciate George's greatness.
The Nature paper assumed a simple “bare drop” of DT illuminated directly by the laser. This would clearly have simplicity on its side, a trait one would like to see in a power reactor. With much more refined analysis, it became clear that Mother Nature would not be so kind, on at least two counts.
First, the very high intensity driving this design at its peak was of order 10¹⁷ W/cm² and was thus determined later to be subject to laser plasma instabilities (LPI).17,18 These would compromise the coupling to the target, as well as be the source of hot electrons that would penetrate deep into the target and preheat the fuel. Such preheat would negate the whole idea of the pulse shaping, which was to keep that DT fuel as cold as possible. Some of the early researchers involved in these insights were Bill Kruer, John DeGroot, and Jonathan Katz. Specific concerns about the non-linear nature of these LPI effects were published by the LLNL group,19 as well as by the exceptionally talented LPI group at LANL.20
Second, ablation driven rockets, also known as ICF target implosions, have the low density ablated material accelerate the dense shell. Thus, there is an effective “gravity,” g, pointing from the dense shell into the lower density gas. This is equivalent to an inverted glass of water. In principle, air pressure should keep the water in the glass. However, that pressure equilibrium is an unstable one, because the system can lower its energy in the gravity field by exchanging the dense fluid above with the low-density air below. It is a classic example of the Rayleigh–Taylor instability (RTI). This instability can be mitigated somewhat by the ablation process itself. The ablative stabilization formula used in the Nature paper, attributed to a “work-in-progress” talk by Chuck Leith, was determined later to be inaccurate.13,21 Given the technology at that time, direct drive laser intensity profiles were quite non-uniform, which would provide deadly initial perturbations that would grow due to the RTI and eventually completely break up the imploding shell, ruining the implosion.
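The mitigating effect of ablation can be sketched with the standard textbook (Takabe-type) dispersion relation, γ = α√(kg) − β k v_a. To be clear, this is not the early Leith formula mentioned above; the coefficients and plasma parameters below are typical illustrative values, not numbers from this paper.

```python
# Ablative stabilization of Rayleigh-Taylor growth, sketched with a
# Takabe-type dispersion relation: gamma = alpha*sqrt(k*g) - beta*k*v_a.
# Standard textbook estimate; alpha ~ 0.9 and beta ~ 3 are typical
# fitted values, and g, v_a below are merely illustrative.
import math

def rti_growth_rate(k, g, v_ablation, alpha=0.9, beta=3.0):
    """Linear RTI growth rate (1/s); negative => mode is stabilized."""
    return alpha * math.sqrt(k * g) - beta * k * v_ablation

g = 1e15    # cm/s^2, a typical implosion acceleration scale
v_a = 3e5   # cm/s, an illustrative ablation velocity
for wavelength_um in (100.0, 10.0, 1.0):
    k = 2.0 * math.pi / (wavelength_um * 1e-4)  # wavenumber in cm^-1
    print(wavelength_um, rti_growth_rate(k, g, v_a))
# Short wavelengths (large k) are ablatively stabilized: the ablation
# term wins and the growth rate goes negative.
```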
C. The switch to indirect drive
We will see throughout this paper, that this one-two punch of LPI and hydrodynamic instabilities will form the “Scylla and Charybdis” through which the good ship ICF must safely sail to attain success. These two constraints have persisted throughout this long journey to ignition. In the early days then, the decision was made to shift the program from direct drive to indirect drive, (again) for at least two reasons. First, the capsule inside the hohlraum is driven by the x rays. It did not matter (too much) that the laser was non-uniform as it hit the walls of the hohlraum. Two adjacent points on the capsule would look out at their “sky” and see a “mess,” but, crucially, if they are sufficiently close to one another, they would see the same mess, and thus they would be driven equally. Second, x rays reach deeper into the capsule, and thus ablatively stabilize the RTI much better than direct drive.16
The LLNL laser/ICF program was building lasers throughout the period when this reassessment of the target/drive choices was made. In 1975, the Cyclops laser was a one-beam prototype of what was to be the 20-beam Shiva laser, conceived as a direct drive implosion facility. This was the first example of many, on this journey, wherein a multi-beam “mega facility” would, quite sensibly, have its technology tested on a single prototype beam. Also in that year, the Janus facility (like its two-faced mythological namesake) had two beamlines and produced the first thermonuclear burn products using DT, and in an indirect drive geometry. Much of the detail about these early days of indirect drive at LLNL can be found in John Lindl's review article,13 while a comprehensive review of early direct drive efforts around the world can be found in the review article by Steve Craxton and colleagues from the University of Rochester Laboratory for Laser Energetics (URLLE).22
I arrived at LLNL in 1976. Prior to that, I was pursuing my graduate degree at PPPL. I had outstanding teachers such as Marshall Rosenbluth, John Dawson, Harold Furth, Tom Stix, Carl Oberman, Rip Perkins, Ed Frieman, and Miklos Porkolab. When Ed Frieman would lecture, he would nimbly and consistently interchange a long white cigarette in one hand and a long white piece of chalk in the other, and amazingly never confused the two. I had gracious and generous advisors in Ed Frieman and John Greene. Most of all, I had extremely impressive fellow graduate students. Rob Goldston went on to be PPPL director. Ned Sauthoff eventually became the head of the U.S. part of the international tokamak project, ITER. Steve Jardin has had a stunning career in computational plasma physics at PPPL. Earl Marmar and Adil Hassam went on to distinguished careers at MIT and the University of Maryland, respectively. My second-year research project found a “vertical” (namely, “n = 0,” or axisymmetric) “instability” in tokamaks. For any poloidal shape, there was a perturbation (some combination of “m” modes) that would grow. My classmates termed it “the Rosen wrinkle.”23
As satisfied as I was with my theory work, I felt that, as a theorist, I would have minimal impact on the promising future that new tokamaks would lead to. I feared that I might go an entire career and not change a single screw on a proposed tokamak. After graduating, I interviewed at the LLNL laser fusion program. Earlier, the LLNL ICF Program's Claire Max had come by PPPL to recruit. I was excited by the possibility that as a target designer, I could put “a new tokamak” in front of the high-tech and expensive element of the ICF Program, namely, the laser, every single day. This insight has served me well throughout my career and has kept my work fresh, diverse, and exciting for nearly 50 years. Throughout this description of the path to ignition, I will invoke this notion of “ICF's superpower.” By that I mean the possibility and flexibility of innovating, and the ability to react to experimental challenges and disappointments with a change of target and approach.
By the time I arrived at LLNL in 1976, the Argus facility was firing shots. It, too, had two beams, and it began using what were then modern and novel technologies, such as a pinhole component to smooth out laser beam intensity profile irregularities. To azimuthally symmetrize the indirect drive on targets illuminated by Argus, the target used a scattering cone. In principle, this would have worked just fine. In practice, however, high irradiance on that cone led to LPI. I recall losses from Brillouin scattering approaching 50%. Moreover, the scattering would depend on the laser polarization, and that, combined with LPI issues such as side scattering, broke the azimuthal symmetry. Our design code, Lasnex, was a 2D code, and thus not capable of accurately modeling this inherently 3D issue.
Another requirement, as we stepped through lasers and tried various hohlraum configurations, was an understanding of how hohlraum drive scaled. I applied a quantitative Marshak wave analysis to the problem of assessing how deep into the Au walls the non-linear radiatively driven heat wave would penetrate. This then would determine how much mass was heated in a given time, and thus the drive temperature of the hohlraum walls that would bathe the capsule and implode it. This analysis correctly predicted hohlraum drive from those early days through to NIF ignition.24,25 For this application, I found it convenient to invent a new system of units, “r.h.u.,” radiation hohlraum units, whose constituents included time in nanoseconds (ns), distance in mm, energy in hectojoules (hJ, meaning hundreds of joules), and temperature in hecto-electron-volts (heV). In these units, the basic black body radiates, per unit area, as σT⁴, with σ conveniently at a value very close to unity. Hohlraums to this day are 1–3 heV, so that too makes heV the unit of choice. My colleague at the time, Roger Bangerter, who is a pioneer in the concept of heavy ion driven ICF, insisted on calling “r.h.u.” “Rosen's Hebrew Units,” and heV, “Hebrew electron volts.”
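The claim that σ is close to unity in these units is easy to verify by converting the SI Stefan–Boltzmann constant into r.h.u. (time in ns, length in mm, energy in hJ, temperature in heV):

```python
# Check that the Stefan-Boltzmann constant is ~1 in "radiation hohlraum
# units": time in ns, length in mm, energy in hJ (= 100 J),
# temperature in heV (= 100 eV).
SIGMA_SI = 5.670374419e-8   # W m^-2 K^-4 (CODATA)
KELVIN_PER_EV = 11604.5     # temperature equivalent of 1 eV

# Blackbody flux at T = 1 heV = 100 eV, computed in SI:
T_K = 100.0 * KELVIN_PER_EV
flux_SI = SIGMA_SI * T_K ** 4        # W/m^2

# Convert W/m^2 -> hJ/(ns mm^2):
#   1 W/m^2 = 1e-2 hJ/s/m^2 = 1e-11 hJ/ns/m^2 = 1e-17 hJ/ns/mm^2
flux_rhu = flux_SI * 1e-17
print(flux_rhu)   # ~1.03: sigma is indeed close to unity in r.h.u.
```

So at 1 heV the blackbody flux is about 1 hJ/ns/mm², which is what makes these units so convenient for hohlraum estimates.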
At the time all of this was classified, as it would hint at the Teller–Ulam scheme. The work was published in classified annual reports of the LLNL Laser Program. Only over a decade later, when other research efforts throughout the world started publishing work along these lines, would this early work of mine see the light of day. In addition to Refs. 24 and 25 which came later, major sections of Ref. 13 describe this work, as do the works of Kaufman et al.26 and Suter et al.27
The term Marshak wave comes from the work of Robert E. Marshak, which he performed28 during the Manhattan Project. It is a non-linear heat wave since the heat conductivity (in our case, photon driven) is not simply a constant that depends on the material in question, but rather depends on the temperature. Marshak was a thesis student of Hans Bethe (on white dwarf stars) and served as his deputy at Los Alamos during the Manhattan Project. After the war, he did important research, at the University of Rochester, in formulating the weak interaction approach in high energy physics that culminated in Richard Feynman and Murray Gell-Mann getting the Nobel Prize. Two of his thesis students at Rochester were Al Simon and John Greene. Al later played an important role in laser plasma physics at the URLLE, and I always enjoyed interacting with him. John Greene went on to do MHD theory at PPPL and is the “G” of BGK (Bernstein–Greene–Kruskal) plasma waves. John was my thesis advisor. Thus, in the “Academic Family Tree” methodology, in which “thesis advisor = parent,” Marshak was my academic “grandfather,” and Bethe my “great grandfather.”
Marshak was president of the American Physical Society (APS), and died in a swimming accident in Cancun, Mexico. Sadly, I never met him. On the other hand, when I was Division Leader for ICF theory and design in the 1990s, I had the privilege one day of hosting Hans Bethe and briefing him on our plans for the NIF. Later that night Bethe gave a public lecture at UC Berkeley on the role of neutrinos in supernova explosions. Every seat in the two-story 525 seat Pimentel lecture hall was taken. I sat on the steps with my 3 young children. After the talk, I introduced them to him. I simply wanted them to meet a great man. (I don't think that at the time I was aware that he was my academic “great grandfather”). Ironically, just as I had extended the work of “grandfather” Marshak by developing it further throughout my career, later in life I extended29 the work of “great grandfather” Bethe, by expanding on the so-called “Bethe-Feynman” formula used during the Manhattan Project.
I adapted my earliest work on Marshak waves from research notes of our former LLNL Director Mike May. While this work was close enough to being correct to explain the data on hohlraum drive that we were collecting, there were some small, but annoying (to me), inconsistencies in it. Since dE/dt must equal the divergence of a diffusive flux, F, then, as a check on proposed solutions, the spatial integral over the energy profile must equal the temporal integral of the flux. The fact that the solutions I was using very nearly matched up in this way, but not exactly, bothered me for many years. Some 25 years later, in 2003, Jim Hammer suggested a formal expansion in a small-ish parameter, which together we carried out in full to second order, fixing the former inconsistencies.30 I am forever grateful to Jim for finally and fully putting my mind to rest.
In those early days, I recall having to appear and present my design results, as well as my analytic work on target performance and hohlraum theory, to a weekly review board co-chaired by the then lab director, Roger Batzel, a nuclear chemist, and by the former lab director, Mike May, a physicist and weapons designer. I was impressed by the high level of scrutiny the ICF program was being subjected to, even internally by the LLNL management. I was also impressed by the LLNL tradition of giving young, new staff, such as myself at that time, high responsibilities so early in their careers, and high exposure to upper management. I was particularly touched by these directors' eagerness to accommodate my constraints if those meetings conflicted with my need to take time off for Jewish holidays.
One of the lessons learned from this history is the importance of the engagement of upper lab management to the health and direction of its ICF program. I think this has been true throughout the history of the project and has served the program well. During the stressful times of the construction of the NIF, LLNL lab management went out of their way to accommodate the NIF team's needs in terms of support and needed personnel, to make their job somewhat easier. Certainly, in later years, I participated in regular briefings before Bill Goldstein, the previous director of LLNL before its present director, Kim Budil. Kim, as head of the weapons program, was also in attendance at those Goldstein briefings, and she has been an active, creative, clear thinking, and steadfast supporter of the ICF program during her current tenure as LLNL director.
IV. THE LESSONS LEARNED FROM THOSE EARLY LASERS
A. Hot electrons
Soon after 1976, the Shiva laser was completed. It was built for direct drive, with 20 beam lines.
One of my first target design projects (supervised by Jon Larsen) was the design of the so-called “exploding pusher” targets for Shiva. An exploding pusher is a thin shell that contains the DT gas. The driver, either through electron conduction or via hot electrons, completely heats through that thin shell. The shell is now dense and hot, namely, high pressure, and explodes inward and outward. The inward going shock heats the DT gas, which then is further heated by the compression of the incoming half of the shell. High temperatures can be achieved in this scheme, though densities are not nearly as high as an ablatively driven shell system.
One of my first papers in the field of ICF was a simple model31 for the physics processes that dominated the behavior of these exploding pushers. There was very good agreement between the model and the results of the full hydrodynamic simulations. Shortly after publication, I happened to meet Dr. Robert Dautray, who led the French ICF efforts. I was quite taken aback when he told me that my work “was precious” to their efforts. Be that as it may, I must say that doing that work settled my career path for me. To be able to formulate a simple model and then to test it immediately with “virtual experiments” using the complex simulation codes was a sheer delight. This delight has blissfully lasted for me continually, with changing applications, for the past 50 years.
There was great interest throughout the ICF community as to what yield would emerge from this first Shiva experimental campaign. As optimistic calculations suggested yields of order 10¹² neutrons, that is where many of the guesses in this “betting pool” were made. Don Slater of KMS Fusion, a private laser fusion company in Ann Arbor, Michigan (which closed down in 1990), guessed the speed of light in cgs units, namely, 3 × 10¹⁰. He was the winner of the competition, as that was quite close to the experimental result. I am not sure we ever understood the reason for that system's underperformance.
In January 1980, I and my LLNL colleagues experienced an earthshaking moment. A 5.8 magnitude earthquake caused a great deal of damage and a great deal of shaking. It turned out it was on a little-known fault, the Greenville fault that passes within 1 mile of the lab. Prior to this event, we were unaware of its existence. So when the building shook so violently, I imagined that this was actually an earthquake centered near San Francisco, some 40 miles away, and that it was truly catastrophic over there. At the time I had just bought a house in the Berkeley hills and had not yet sold my old one in the Berkeley flats. I imagined my new house rolling down the hill and crashing into my old one. Amidst the shaking, I crawled under my desk and called my wife, Rena, at home. To my relief, she did not know what I was talking about. The earthquake had not affected Berkeley (or San Francisco) at all! I went back to work at my desk when the shaking stopped, but I was advised to exit the building, along with everybody else. I was on the ground floor, but I had not known that my colleague, Judy Harte on the second floor had her entire ceiling come down on her (luckily vacant) desk.
In another part of the LLNL square mile site, Bill Kruer was just about to give a lecture on LPI in front of a full auditorium when it hit, and the auditorium was evacuated. Exactly 1 week later, when his lecture was rescheduled, an aftershock hit and the auditorium was cleared again, earning Bill quite a reputation as a “moving” speaker. In the main library, the bookshelves collapsed on each other like falling dominoes. Many lessons were learned from this event. Most involved safety features that are installed throughout the site. Some involved laser architecture and planning. The lasers needed to be built on their own floating “tables,” disconnected from the buildings they were in, and automated pointing software and hardware were needed to accurately align targets, lasers, and diagnostics. This is true to this day on the NIF.
Indirect drive experiments at Shiva, with x-ray ablated glass shells, were aimed at compressing DT to 100× its “standard” initial density of 0.25 g/cc. This was a continuation of an earlier, failed effort at Argus. The Shiva targets underperformed. It was assumed that LPI was creating hot electrons. These hot electrons would penetrate deep into the capsule, preheat the fuel, and thereby prevent the achievement of the sought-after high density.
B. The birth of high energy density physics (HEDP)
The roots of this difficulty on Shiva extend back to the failed attempts at achieving “100×” in the Cairn experiment on the Argus laser. My colleague Bill Mead (then at LLNL, and later at LANL) was the lead designer and the individual responsible for this campaign. An experiment was proposed to test this hot electron preheat hypothesis:32 Instead of a full hohlraum, with two laser entrance holes (LEHs) and a capsule in its center, it would be a half-hohlraum (“halfraum”), or, for this first try in the Cairn campaign, it was called a “half-Cairn.” It is pictured in Fig. 4. It would be a cylinder only 1/2 the length of a full one, with only one LEH. The opposite endplate would be at the position of the midplane of what would be a full hohlraum. That endplate had a hole cut into it, and a glass slab of thickness equal to the capsule shell would be attached on the inside. External diagnostics could thus view the “inside” of the capsule, by looking at the cold, undriven side of the glass slab, as it was that side that was adjacent to the hole of that endplate. Hot electron production within the hohlraum would be monitored by the “FFLEX” diagnostic that measured the hard x-ray bremsstrahlung produced when most of those hot electrons plowed into the gold wall of the halfraum. During this era, targets were expertly made by an LLNL in-house team led by Chuck Hendricks and Bill Hatcher.
Before we discuss the results of this experiment, let us remark on the paradigm shift represented by this halfraum. It was a departure from doing full hohlraums, each with a capsule imploded within it, with new implosion experiments done for each new generation of laser driver. In some sense, this modus operandi was no surprise, as the testing of nuclear devices at the Nevada Test Site (NTS) followed the same script. A new device would be placed down-hole, and an entire array of diagnostics placed above it, reporting out data at the speed of light just before they would be pulverized by the nuclear blast. Thus, the halfraum was a change that would concentrate not on yield performance, but rather on physics understanding. If successful, it would open a new era of halfraums as “physics factories” that could study a wide variety of physics issues that occur within the “high energy density physics” (HEDP) regime. The key word in the previous sentence is “if.” The preliminary data from this experiment cast doubt on the whole approach and thus threatened the birth of this new field of HEDP.
The FFLEX instrument worked quite well, and from its signal, we could infer 60 J of hot electrons at a temperature of 70 keV. The two instruments monitoring the cold side of the glass slab were a “Dante,” a time resolved, multi-channel broadband x-ray detector (though its field of view was extremely broad), and a streaked optical pyrometer, which was not only time resolved, but whose field of view was also highly localized. They both gave a very early signal that could be interpreted consistently as a ∼2 eV preheat signal, followed by a ∼10 eV shock breakout. Given the FFLEX results, and a measured 140 eV drive, these preheat and shock signals were in reasonable agreement with expectations. However, later in time, the two diagnostics reported some very large, unexpected signals that, moreover, diverged rather widely from each other both in maximum signal level and in temporal behavior. In short, these late-time signals were quite large and not understood. The signals are pictured in Fig. 5. This mystery cast a cloud over the entire enterprise of doing such “physics” experiments, as it seemed as if they raised more questions than they answered.
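How a hard x-ray detector like FFLEX yields a hot-electron temperature can be sketched simply: the bremsstrahlung spectrum falls roughly as exp(−E/T_hot), so the log-slope of filtered-channel signals gives T_hot. The real FFLEX unfold is more involved; the channel energies and signals below are made-up numbers chosen to be consistent with the 70 keV quoted in the text.

```python
# Sketch: infer a hot-electron temperature from a hard x-ray
# bremsstrahlung spectrum, assuming signal ~ exp(-E/T_hot).
# Channel data here are hypothetical, not actual FFLEX measurements.
import math

# (photon energy in keV, relative signal)
channels = [(50.0, 1.00), (100.0, 0.490), (150.0, 0.240), (200.0, 0.118)]

# Least-squares fit of ln(signal) vs E: slope = -1/T_hot
n = len(channels)
sx = sum(e for e, s in channels)
sy = sum(math.log(s) for e, s in channels)
sxx = sum(e * e for e, s in channels)
sxy = sum(e * math.log(s) for e, s in channels)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
T_hot = -1.0 / slope
print(f"T_hot ~ {T_hot:.0f} keV")  # ~70 keV for this synthetic data
```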
I was brought in to look at these mysteries with a “fresh set of eyes.” I saw immediately what the problem was. The mindset had been that these diagnostics continued to see the cold side of the glass slab even at late time. What was forgotten was that this glass slab, like the glass shell of an imploding capsule that it was meant to represent, is a radiation ablation driven rocket. The slab could move, and indeed did move. It popped right out of the halfraum through that diagnostic hole in the halfraum's endplate, in a cookie-cutter like manner. As such, the two diagnostics, each viewing the back of the halfraum from an angle, would eventually see the hot, driven side of the glass slab as it cleared an axial position that would allow that “hot drive” to be in the line of sight for each detector. This explained their high late time signals and the difference in timing of each diagnostic, given their different angles and lines of sight. In fact, from the time difference of their large “hot side” signals, one could deduce the velocity of that slab “cork popping” its way out of the halfraum (see Fig. 6).
With this successful explanation of all aspects of those signals, and with the additional “bonus data” of measuring the slab's x-ray ablative acceleration and motion out of the halfraum, the field of HEDP was rescued from being stillborn. We were soon measuring the motion directly by x-ray backlighting, a joint effort between LLNL and the Naval Research Laboratory (NRL).33 Then we were soon measuring x-ray driven shock breakouts through stepped samples to measure equations of state along the Hugoniot, x-ray burn-through of high Z samples to measure opacities, and eventually (after quite a few struggles) we were able to measure hydrodynamic instability growth rates. This RTI work34 was a design tour-de-force by Dave Munro, and an experimental achievement led by Bruce Remington. It won the APS “Excellence in Plasma Physics Award,” also known as “The John Dawson Award” in 1995, and was the first HEDP project to do so. A short summary of all of these developments stemming from this first HEDP experiment devoted to preheat measurement, as described above, can be found in my 2001 Teller Award lecture,35 which was given at a conference in Kyoto Japan a few short days after September 11. Being stuck in Japan, with no flights leaving for a week, was a surreal experience. I will always be grateful for the care and concern that the Japanese people expressed to all of us U.S. conference attendees during this very stressful time.
This robust new field of HEDP suggested a whole body of work that could inform the high energy density physics regime found in nuclear weapons. An LLNL committee, led by Carol Alonzo, a weapon designer, was formed to flesh out this proposed body of work. I partnered with Abraham Szoke to put forward about 100 pages of proposed ideas for laser driven experiments. Abe was literally “old enough to be my father,” as he had a son older than me. He was a distinguished laser physicist36 with expertise in other areas, such as crystallography and holography. He survived the Holocaust as a teenager in Budapest, and later escaped from the communist regime there after the war. Forty years later, well past his retirement from the lab, and well into his 80s, Abe would still come to my office nearly daily to work on projects of mutual interest. I miss him greatly.
The committee's recommendations were only partially embraced by LLNL management, who were too busy conducting full scale, underground nuclear tests at a rate of nearly one per month. Colleagues at the Atomic Weapons Establishment (AWE) at Aldermaston in the UK, with a far smaller frequency of full tests, were far more enthused about HEDP and embraced this field rather wholeheartedly.37 My chief counterpart from AWE, who was also suggesting quite similar HEDP laser driven experiments, was Brian Thomas. Brian shared my penchant for simple models. It was a great comfort to know that there was at least one other person on the planet who had identical interests and goals as me. Brian is a polymath who has an encyclopedic knowledge of early blues and rock and roll music (and a record collection to match it). In his retirement, he has written an extensive two-volume history of his beloved Wales, and he is an accomplished poet. Sir Brian was knighted for his HEDP work. At LLNL, the HEDP laser driven experimental activity persisted throughout the eighties and nineties, with great support from AWE, and, as we shall see, was quite critical in making the case for the NIF in the mid-90s after the cessation of nuclear testing.
C. A color change
With the difficulties encountered in the Shiva experiments to achieve high imploded densities, and with the proof from the halfraum experiment that those difficulties were due to hot electrons, it was time for a change. The laser wavelength, λ, used up to that time was 1.06 μm, the natural “color” of the Nd:glass (silica) laser slabs of the lasers built to date. It was clear that shorter wavelengths would do better at a given irradiance, I, since LPI thresholds usually trigger on the quantity Iλ². A very important paper in the history of the field of ICF was Ref. 38, a French work that showed how absorption increased with shorter wavelength. One of the early workers on LPI at LLNL, Claire Max, was on sabbatical in France and was a coauthor of this paper. Important for indirect drive was Ref. 39, the research that showed how the efficiency of conversion of laser light to x rays also increased with shorter wavelengths. Since the critical density scales as λ⁻², a shorter wavelength laser brought energy to higher densities, whereupon electron conduction could transfer that energy to even higher densities above critical, leading to a more efficient generation of x rays from that dense and hot region.
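The λ⁻² scaling of the critical density can be quantified with the standard formula n_c ≈ 1.1 × 10²¹/λ² cm⁻³ (λ in μm):

```python
# Critical density n_c, above which laser light cannot propagate,
# scales as lambda^-2: n_c [cm^-3] ~ 1.1e21 / lambda_um^2 (standard
# formula). Shorter wavelengths penetrate to higher plasma density.
def critical_density_cm3(wavelength_um):
    return 1.1e21 / wavelength_um ** 2

n_1w = critical_density_cm3(1.06)        # ~1e21 cm^-3 at 1-omega
n_3w = critical_density_cm3(1.06 / 3.0)  # tripled frequency
print(n_3w / n_1w)  # ~9: 3-omega light reaches 9x higher density
```

This factor of 9 in accessible density is a large part of why frequency-tripled light both absorbs better and converts to x rays more efficiently.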
The next question, then, was how to efficiently convert the inherent 1.06 μm light of the Nd:glass lasers to shorter wavelengths. Pioneering work40 by Steve Craxton and colleagues at the University of Rochester Laboratory for Laser Energetics (URLLE) showed how this could be done through non-linear frequency conversion, first to green light (“2ω”) and then to ultraviolet light (3ω and 4ω), as the light passes through potassium dihydrogen phosphate (KDP) crystals. This method is embraced and utilized to this day on the NIF.
This crucial decision to pursue a path to ignition using shorter wavelength laser light had major implications. The big laser planned as a follow-on to Shiva was Nova. It was conceived as a 200 kJ, 20 beam facility, configured for indirect drive. The 200 kJ was originally thought to be sufficient to have a chance at ignition. In retrospect, we now know that that guess was off by a factor of 10. More importantly, however, that 200 kJ was at 1.06 μm light, which was now recognized to be an unacceptable wavelength. This became one of those moments where the fate of ICF and the pursuit of ignition hung in the balance. The program did survive this “near death experience.” The project, now incapable of reaching ignition, was cut back to 10 beams: 120 kJ at 1.06 μm, and 30 kJ of 3ω, 1/3 μm light after passage through the KDP crystals.
Ironically, at this same time, the LANL laser fusion program was experimenting with a CO2 laser whose wavelength was quite long, at 10 μm! As the reader should not be surprised by now, their experiments were chock-full of hot electrons. As LANL was also pursuing indirect drive, it must have surely been a bitter disappointment to them that the measured41 conversion efficiency of laser light to x rays using this laser was a microscopic 2%. These very poor results quickly put the issue to rest: LANL would not be authorized to build a large laser. Instead, their technical staff was advised to help the LLNL Nova efforts in LPI theory, target design, and experiment, as well as in target fabrication techniques. My assessment is that their support in all four of these areas, as well as their independent ideas and contributions, were all indeed helpful in pushing the ICF Program along on its ultimate path to ignition.
D. A green light and x-ray lasers
The Novette laser was a two-beam facility that was a prototype of the ten-beam Nova. It could operate at 2ω, 3ω, or 4ω light. With its green light capability, on Friday, July 13, 1984, we created the world's first extreme ultraviolet “x-ray” laser42,43 (XRL) using exploding foils as the plasma medium, in which a Ne-like Se plasma lased on 3p–3s lines at 206 and 210 Å. An earlier target design that considered using thick (non-exploding) foils showed (in the simulations) regions of higher laser gain, but suffered from the resultant XRL beam refracting off of steep gradients. Because we had some experience with exploding foils for LPI experiments for ICF, Mike Campbell suggested we use them as lasing media. The regions of gain had lower peak gain values, since the density was lower, but did manage to let the XRL propagate down the lasing axis for a respectable distance. The idea worked, and we share the patent for this approach.44 It won the APS “Excellence in Plasma Physics Award,” also known as “The John Dawson Award,” in 1990, and was the first non-MFE project to do so.
This lesson of using the exploding foil, and thus retreating from a “higher gain” design to something with less potential performance, but more “stable” in some other sense, would be exactly repeated in ICF ignition target design, as we shall see below. Overall, another valuable lesson learned on Novette was that the 4ω option caused too much damage to the optics. As a result, Nova (and to this day, NIF) was chosen to operate at 3ω.
The aftermath of our x-ray laser work was an interesting and early example of the concept of “deterrence by capability,” which later formed one of the bases for embarking on the mission to achieve ignition. While the U.S. and the U.S.S.R. both pursued the “Star Wars” concept of a nuclear weapon as a pump for an x-ray laser that perhaps could shoot down incoming missiles, it is not clear how the Soviets assessed the U.S. progress in this field. Whatever was reported in the popular U.S. press would likely be judged by the Soviets as their own favorite device: disinformation. The only thing that was for real was scientific peer-reviewed publication, later bolstered by genuine repeats of the feat by other labs across the globe. The successful work on x-ray lasing at Novette was just such an achievement. Moreover, it used a lasing scheme suggested by the Soviet scientist Vinogradov.45 Even after the cold war was over, I was amazed at the constant interest and questions posed to me by Russian scientists on this subject. It is speculated that over-spending on this Russian version of “Star Wars” was the tipping point in the collapse of the U.S.S.R. If so, the Novette x-ray laser was indeed the first example of “deterrence by capability.”46
A second outgrowth from the x-ray laser work was the high-quality work force that it attracted to LLNL. Rich London, LLNL's first post-doc, went on to a distinguished career in target design of XRLs as well as a diverse set of HEDP experiments. Experimentalists such as Brian MacGowan and Bruce Hammel went on to become leaders of the LLNL ICF Program. Chris Keane would go on to become a leader at the DoE office in charge of the NIF Program, and he is currently Vice President for Research at Washington State University. Nino Landen is currently Deputy ICF Program leader for experiments. Jim Trebes would go on to lead the Physics Department at LLNL. The message here, relevant to the ignition quest, is that bold and challenging projects attract the best and the brightest, which in itself is a great benefit of such a high risk pursuit.
Meanwhile, the proponents of direct drive ICF knew that progress needed to be made on smoothing the inherent speckle structure of the driving lasers that could seed hydrodynamic instabilities. Many schemes were invented in the 1980s that did precisely that.47–50 This important body of work won the APS “Excellence in Plasma Physics Award,” also known as “The John Dawson Award,” in 1993, and was the first ICF-specific project to do so. The field of indirect drive was happy to adopt these techniques as well, since unsmoothed beams entering a hohlraum could trigger LPI in those intense speckles.51
E. The Nova laser
The period of the mid-1980s through the 1990s brought changes to the LLNL Laser and ICF Program. John Nuckolls went on to become head of the Physics Department, and then Laboratory Director. John Lindl replaced him as head of the ICF theory and design effort, and eventually became head of the ICF Program. Jim Davis replaced John Emmett on the laser side, and was eventually replaced in turn by Mike Campbell. Mike's contributions and leadership in the experimental program on Nova were manifold. Nova showed that pulse shaping indeed helped to increase implosion convergence, and that the Rayleigh–Taylor instability was reduced by x-ray ablation, as predicted. Low-mode implosion symmetry could be tuned by varying the laser pointing. The hohlraums could reach the predicted high temperatures (250–300 eV). These hohlraums were illuminated by relatively short pulses, and thus could be empty. Under those conditions, LPI was rather minimal.
Meanwhile, there was a robust program using the copious energy generated in underground nuclear explosive tests to explore the fundamentals of the ICF strategy. Both LLNL and Los Alamos (LANL) participated in this “Halite/Centurion” program. Experimental managers from LLNL's ICF program, such as Hal Ahlstrom and Erik Storm, were very much involved in this endeavor. Its details remain classified. What is true is that the results from this program “demonstrated excellent performance, putting to rest fundamental questions about the basic feasibility to achieve high gain.”
Thus, from an energy driver point of view, with Nova low and Halite/Centurion high, it seemed as though the ICF program had the problem of achieving ignition “surrounded.” Flush with these successes, LLNL proposed to build a facility, the “Laboratory Micro-fusion Facility” (LMF), that would produce yields in excess of 100 MJ, a quantity of interest to the weapons program, and, if successful, could provide useful information for Inertial Fusion Energy (IFE) civilian power production efforts. Lindl and co-workers estimated that the LMF would need to deliver of order 10 MJ of 3ω light, a factor of 300 greater than Nova, to ensure ignition and propagating burn. The target would have a Be ablator and operate in a large 250 eV hohlraum.
The National Research Council and other trusted outside experts advised against this large, 300×, extrapolation from Nova. Lindl and co-workers heeded this advice and devised a riskier target, operating in a 300 eV hohlraum, that could achieve ignition at 1 MJ. Thus, a 1.8 MJ, 500 TW facility was proposed that would have a “margin” of a factor of nearly two. This, of course, was the NIF. The operating space (in power–energy coordinates) is shown in Fig. 7. The constraints in power requirements on the sides of this “bird's peak” operating space were determined by LPI constraints from above, and by hydrodynamic instability growth from below. The factor of ∼2 margin is present along the diagonal, but there was no guarantee that the sides would not collapse into the operating space, squeezing the successful target space up and to the right until nearly no margin was left. In retrospect, this is essentially what happened on the NIF on the way to ignition, using every last ounce of energy (and then some) from the laser. Once again, the navigation between the Scylla and Charybdis of LPI and hydrodynamic instabilities proved treacherous indeed.
It is somewhat ironic that, now that ignition has been achieved, folks with no knowledge or memory of these late-1980s developments ask the question: “Well, why didn't you just ask for a 10 MJ driver in the first place?” I hope the above description answers this question. Had the ICF program been stubbornly insistent on the “sure bet” (from a target performance point of view) 10 MJ facility, we might never have gotten funding for it and might still be waiting for ignition in the laboratory. Though the 10 MJ, high gain and very high yield facility for both weapons physics studies and for IFE was not authorized, it was always conceived of as the next step, once NIF had demonstrated ignition and moderate gain.
V. GETTING NIF APPROVED AND PREPARING FOR ITS COMPLETION
A. Getting NIF approval
A key ingredient in getting the NIF approved was to fulfill the “Nova Technical Contract” (NTC). This constituted a dozen milestones that demonstrated good performance on Nova, of relevance to the proposed NIF ignition target. About a half dozen involved hohlraum drive, symmetry, and LPI issues. The other half dozen involved implosion convergence and growth rates of hydrodynamic instabilities, and mix occurring in the imploded core as a result. The detailed list can be found in the appendix of a later review paper by Lindl and co-workers.52
In 1990, I was named head of LLNL's X-Division, succeeding John Lindl, who had succeeded John Nuckolls. This division was made up of components such as LPI basic theory and code development, hydrodynamic design code development, and several groups of target designers. I have no delusions of grandeur about why I was chosen to lead the Division. The design group leaders in charge of hohlraum physics and capsule design, Larry Suter and Steve Haan, respectively, were far too valuable to the future efforts of achieving ignition on the NIF. They could not be spared for, and certainly should not be subjected to, the inevitable burdens and distractions from technical work that Division management entails. Another group, originally led by Roger Bangerter, and then by Max Tabak, dealt with other driver technologies such as pulsed power and heavy ion driver ICF research. My own design group, which had evolved from HEDP to x-ray lasing, to ultra-short pulse laser physics and extreme ultraviolet (EUV) x-ray lithography source design and optimization, could and would be ably led by my replacement, Rich London. Thus, I was available and the logical choice for X-Division leadership.
As head of the ICF target design, code development, and basic theory effort in the 1990s, I was responsible for all of the work from that end supporting the achievement of these 12 NTC milestones. Joe Kilkenny was my counterpart responsible for his team that was executing the experiments and innovating and fielding the diagnostics associated with them. Colleagues at LANL were very active participants in design, target fabrication, and experimental efforts in these campaigns. Colleagues from Sandia National Lab (SNL) contributed their technological expertise in diagnostic development.
The progress on fulfilling this NTC was monitored every 3 months, first by the ICF Advisory Committee (“ICFAC”), chaired by Venkatesh (“Venky”) Narayanamurti (who has served as Dean of Engineering, first at UC Santa Barbara, and then at Harvard), and later by a specially appointed National Academy of Sciences (NAS) NIF review committee co-chaired by Steve Koonin (then Provost of Caltech, and later Under Secretary for Science at DoE) and Hermann Grunder (former Jefferson National Accelerator Facility director, and then director of Argonne National Lab). This was an intensely stressful process for the staff, as it seemed like the best outcome one could hope for every 3 months was the chance to come back 3 months later and do it again. Failure at any point meant cancelation of the project.
During this intense period of reviews every 3 months, I ran into LLNL former director Mike May, who, as described above, was the force behind the weekly reviews of ICF by upper LLNL management in the mid-70s. Mike asked me how I was doing, and I complained about the heavy load of reviews. I was taken aback by the vehemence of Mike's reaction. He forcefully asserted that I needed to appreciate the review process. To paraphrase his remarks: “Smart people are devoting their most precious resource, their time, to listening to you about your program and its problems. This is a gift and a privilege, not a burden.” I took this rebuke to heart, and to this day I treat reviews as an opportunity, not a burden. What is certainly true, in my long experience, is that even if a review committee has little to contribute, the very act of the ICF program getting together to prepare the review material is highly valuable. It helps with communication across broad areas of the program and also helps formulate more clearly an overall and integrated strategy and approach to the program's research results and its plans moving forward.
A rather dramatic event happened mid-way through this process. Our colleagues at LANL, led by Melissa Cray, along with excellent designers such as Bill Krauser, Bernie Wilde, Doug Wilson, Nels Hoffman, Steve Coggeshall, Norm Delameter, Bill Varnum, and David Harris, using the same code, Lasnex, imported from LLNL, separately calculated the proposed ignition target. They did so in an integrated manner of the capsule within the hohlraum. With the laser pointing that Steve Haan and his team had specified, they found the implosion to have a “P4” asymmetry component. The very high convergence amplified this asymmetry, and the implosion fizzled, with an “X” shaped implosion image (rather than the desired “O” shaped!). This had a chilling effect on the review committee, and, frankly, made me appreciate how difficult this task would be. Our own Steve Pollaine worked tirelessly and recalculated the implosion with an adjusted laser beam pointing and achieved ignition (in the code)53 just barely in time for the next review in the December timeframe. Steve called it his “Channukah miracle of light.” If anything, this episode taught us some humility, and the need for an empirical tuning of symmetry once experiments would begin.
The staff worked extremely hard and, in the end, did accomplish the NTC. I recall once coming back into the lab past 9 pm, after I had to go out of the lab to a charity dinner event in Oakland. The staff was still there, exhausted but working. I recall specifically Peter Amendt, Chris Keane, and Linda Powers barely able to stand up straight as I talked to them in the hallway that night. I do not think I have ever had a prouder moment as Division leader than I had in that encounter.
From the theory and design point of view, noteworthy is Larry Suter, who was responsible for hohlraum work (featuring Steve Pollaine, Linda Powers, Chris Keane, Ron Thiessen, Tom Shepard, and Peter Amendt), and Steve Haan, who was responsible for capsule implosion and RTI work (featuring Steve Weber, Steve Hatchett, Dave Munro, Kirk Levedahl, and Tom Dittrich). Their experimental LLNL counterparts include Joe Kilkenny, Bruce Remington, Nino Landen, Don Phillion, Bruce Hammel, Brian MacGowan, Fred Ze, David Ress, John Porter, Harry Kornblum, Bob Turner, Siegfried Glenzer, Bob Kirkwood, Chris Darrow, David Montgomery, Ted Orzechowski, and John Moody, along with LANL experimentalists Alan Hauer, Warren Hsing, and Juan Fernandez. Bruce Langdon led the LPI efforts, featuring Ed Williams, Dick Berger, Bert Still, Barbara Lasinski, Kent Estabrook, Denise Hinkel, Chris Decker, Scott Wilks, and Bedros Afeyan, along with the Division's Chief Scientist, Bill Kruer. Other outstanding efforts on equations of state (EOS) in the HEDP regime, which we use to this day, involved Richard More,54 Yim Lee,55 David Liberman, and Jim Albritton (who sadly passed away during the writing of this manuscript). Further support for EOS and opacity efforts were provided by the LLNL Physics department, including the efforts of Brian Wilson, Carlos Iglesias, and Bill Goldstein.
Max Tabak led the advanced projects effort that involved Heavy Ion Fusion target design56 (featuring the late Dennis Hewitt, Alex Friedman, David Grote, Jim Mark, Darwin Ho, Grant Logan, Mike Glinsky, Charles Orth, and an up-and-coming star, Debbie Callahan), pulsed power applications57 (featuring Jim Hammer, whom I had hired into the Division from the MFE part of LLNL), and fast ignition.58 The first few sections of that classic paper on fast ignition (and its first two references) lean heavily on the (numerical) gain model of Meyer-ter-Vehn59 and on my extension of it,60 which made the model entirely analytic and allowed it to move smoothly from the isochoric to the isobaric ansatz for the assembled fuel configuration. Rich London continued, in my now vacated role, to lead efforts in x-ray lasing, ultra-short pulse work, and laser medicine modeling. His group featured Dave Eder, Steve Maxon, Charlie Cerjan, Rick Ratowski, and Steve Moon. I used to say that my goal as X-Division Leader was to be an “ex-division leader” and to be Rich London's post-doc. The great diversity of the division's design efforts was helped greatly by a highly flexible “helper and post-processor code,” Yorick, developed by Dave Munro.
George Zimmerman led the code efforts, featuring Judy Harte, Dave Bailey, David Kershaw, Ed Alley, Alexei Shestakov, Jose Milovich, Manoj Prasad, Nick Gentile, and Paul Dubois. George, Judy, and Dave are working at LLNL still to this day. During this time, we hired Marty Marinak, who would go on to develop the 3D code Hydra,61 the present-day workhorse for NIF design. During this period, there were two technical developments that greatly aided our computational efforts. In the 1990 time frame, my colleague (to this day) Eugene Brooks reported to a Supercomputing conference his concept, which he famously called “Attack of the Killer Micros.” This paradigm-shifting concept was that efficient and cheaper micro-processors (and enough of them computing in parallel) would surpass the performance of the supercomputers then in use (such as the products from the Cray company at that time). A second, synergistic development, led by Paul Dubois, was to wrap the entire Lasnex code within a shell run in Basis. When we began shifting to micro-processors, Lasnex was ready to utilize them. When machines changed (as they did, rather frequently), the code could be up and running within an afternoon, because of its Basis-based portability. These efforts were overseen by Steve Langer of X-Division. Other massive codes at LLNL took many weeks to adjust to such changes. These dual, synergistic developments allowed us to respond to the heavy usage need of keeping up with fulfilling the NTC. The lesson learned here was similar to the ICF lessons learned writ large, but applied specifically to computing: Be light on your feet, and be ready, willing, and able to jump ship to a better choice of platforms.
In addition to the target physics addressed by the NTC, the Nova facility embarked on an important undertaking: “Precision Nova.” Without the rigor and precision of the improved facility, the NTC would not have been accomplished. This, to my mind, foreshadows the analogous efforts at “precision NIF” in the 2020s that got us “over the hump” and resulted in ignition. A second important effort during this time was the construction of the Beamlet, a prototype of one of the eventual 192 beams of NIF. Much was learned regarding laser technology in the construction and operation of Beamlet. Bruno Van Wonterghem was hired by Mike Campbell to work on the Beamlet. Bruno has been the dedicated NIF facility manager to this day. The Beamlet is now at Sandia National Laboratory (SNL) and serves an important role as a pre-heat source for a cylinder of DT gas that is then imploded by pulsed power, in the MagLIF scheme.62
Having completed all 12 milestones, we all thought that we were done with the review process. It was true that the NTC only covered areas within Nova's capability. Thus, no capsules with cryogenically frozen DT shells, etc., were tested. Along those lines, the NIF point design63 involved a rather long laser pulse and thus called for a gas-filled hohlraum to partially hold back the Au walls from collapsing inward during that long pulse. Had such motion been allowed, it would have greatly challenged the ability to control time-dependent drive symmetry. At the very least, the low-Z gas would be an easier medium through which the beams could propagate, vs the high-Z plasma of an ingressing gold wall. All of the NTC experiments involved vacuum hohlraums. Thus, the NAS committee asked us to study “one more thing.” They wanted us to test the performance of targets with warm, gas-filled hohlraums on Nova.
We did so. We used thin-wall hohlraums to image where along the wall the beams propagated. For empty hohlraums, the beam imprints had been at the axial positions along the cylinder walls to which they were precisely aimed. In short, no surprise. However, with gas in the hohlraums, the beams bent and hit the walls at an axial position closer to the laser entrance holes (LEH) than originally aimed. This indeed was a surprise. While this beam bending behavior was reproducible, and thus correctable with an adjusted aim, much as one would correct for “windage” in archery (or golf), it raised the possibility of an inability to control symmetry if this behavior persisted or even worsened at the NIF scale. We needed to understand it.
The complete understanding came quickly. Harvey Rose of LANL (who sadly passed away during the writing of this manuscript) visited LLNL and reminded Ed Williams of LLNL that he (Ed) and Bob Short of URLLE had written a paper64 (with Bob Bingham) describing the beam bending effect. Lasers can filament. In a flowing plasma, there are places where the flow can resonantly stagnate and build up a stronger wall of density within the filament. The beam then refracts off that density wall and bends. The solution that could avoid this phenomenon was to avoid the formation of the filament in the first place. LLNL had developed an LPI simulation code, pf3D.65 I worked with Denise Hinkel, then a newly hired post-doc from UCLA, in exercising the code on this problem. Denise showed that the relatively unsmoothed Nova beam would indeed filament, and then showed quantitatively that beam bending would ensue.66 She then showed that, with the smoothing techniques planned to be implemented at the NIF, such behavior would be eliminated. This prediction was successfully demonstrated at Nova.67
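To get a feel for the magnitude of such refractive bending, a back-of-envelope geometric-optics estimate can be made. The sketch below is a toy model only: every number in it (density contrast, gradient scale, interaction length) is an assumption chosen for illustration, not a value taken from the Nova experiments or the pf3D simulations.

```python
# Toy geometric-optics estimate of a beam bending off a filament's density
# "wall." In an underdense plasma, a ray obeys approximately
#     d(theta)/dz ~ -(1/2) * d(n_e/n_c)/dx,
# bending away from higher density. All numbers below are assumed.

delta_n = 0.005     # assumed density contrast (in units of n_e/n_c) piled up by the flow
w = 20e-4           # assumed transverse gradient scale, cm (~20 um)
L = 0.02            # assumed interaction length along the filament, cm

grad = delta_n / w            # transverse gradient of n_e/n_c, per cm
theta = 0.5 * grad * L        # accumulated bend angle, radians

print(f"bend angle ~ {theta * 1e3:.0f} mrad")  # → bend angle ~ 25 mrad
```

Even such crude numbers suggest that sub-percent density perturbations on micrometer scales can deflect a ray by tens of milliradians, enough to move a beam spot noticeably along a hohlraum wall over millimeters of propagation; hence the importance of suppressing filamentation with beam smoothing.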
For reasons that are beyond me, I soon found myself chosen to summarize our case in front of the NAS review committee as they were about to ponder their final decision on whether to recommend that DOE approve the NIF project. It was obvious to me that ignition would be difficult and would probably not be achieved right away. Moreover, given the 60× leap in energy scale (and ∼4× in spatial scales) and the new challenges of cryogenic systems, there were bound to be plenty of surprises on the NIF. So that is exactly what I said in my “closing arguments.” I pointed out, however, that the committee, by its own choosing, had required the gas-fill campaign; that the beam bending seen therein was a shining example of an unanticipated surprise; and that the wide ICF community had collaborated in reaching an explanation for the surprise and in offering up a fix for it on the NIF. As such, we could proceed into the future with our eyes wide open for surprises, but with some confidence that ICF, with its flexibility and its talented workforce acting in a collaborative manner, could overcome surprises in the future.
This argument seemed to sway the committee, which, to its credit (in retrospect), proceeded to recommend building the NIF.
However, as I look back, fulfilling the NTC was a necessary, but not sufficient, accomplishment in bringing approval to the NIF project. External world events were crucial. During the late 1980s and the early 1990s, the Soviet Union collapsed. As part of the “peace dividend” demanded by the public in response to these events, it became clear that significant expenditures could be avoided by the cessation of nuclear testing. Yet, it would be irresponsible for the Nation to weaken our deterrent by letting nuclear designer skills atrophy. The answer to this was ready and waiting: the use of high-power lasers to drive HEDP experiments that, as described above, were born and then matured at the previous LLNL/ICF lasers. This body of proposed work formed the bedrock of what would become the science-based Stockpile Stewardship and Management Program (SSMP) that has lasted for over 30 years, to this day.
I recall the efforts put together by LLNL to brief high U.S. government officials on this new strategy, shepherded by Dr. Vic Reiss at the Department of Energy (DoE). LLNL director Bruce Tarter, weapons program associate director George Miller, laser associate director Mike Campbell, Physics Department associate director Dick Fortner, and others at that level were huddled in a small room crafting the presentation. I, a “mere” Division leader, was the lowest ranked person in the group. I was there as the person who had started the HEDP efforts at LLNL (as described above) and was aware of its latest developments. One tack was to also emphasize more basic science that could be done with HEDP, such as laboratory astrophysics.68 A title of one viewgraph mentioned studying the physics of aging stars. I warned George Miller (who would ultimately give the DoE briefing) that he should be careful how he said this, as he would be in a room “full of aging stars.”
It should also be emphasized that technical progress alone was insufficient here. Much effort on the political side would also be required. The LLNL management, and Mike Campbell in particular, was highly instrumental in getting the three national lab directors to sign a letter that included support of NIF, and in getting every member of the California Congressional Delegation to sign a letter in support of NIF construction. Getting New Mexico Senator Pete Domenici, the “patron saint” of Los Alamos, on board, was also quite critical.
This change in the global political environment, along with the ready, willing, and able field of HEDP, ultimately won the day for approval of the NIF. The SSMP would have at its cornerstones high performance computing and laser driven HEDP experiments. Achieving ignition on the NIF would have several-fold utility. First, as a “stretch goal,” it would challenge the workforce to achieve a very daunting task. Second, if achieved, it would open new parts of parameter space into which to extend the domain of achievable HEDP in the laboratory. Third, in the absence of nuclear testing, it would act as an example of “deterrence by capability” for any and all adversaries to see. Finally, it would be considered the “waystation” on the path to a larger, 10 MJ scale facility that could reach high gain and high yield, of use both to the Stewardship Mission and to the idea of commercial use of ICF, namely, the Inertial Fusion Energy (IFE) enterprise.
In the early 90s, the period discussed above, I feel that another important development came into being. There had been significant and scientifically credible ICF work from the Japanese69 and the German70,71 efforts that involved theory and experiments with laser driven hohlraums. The original idea behind classifying this research at U.S. national labs was to protect the Teller–Ulam scheme for the H-bomb. However, as a result of a November 1979 article by Howard Morland published in The Progressive magazine, and the legal issues that followed, the Teller–Ulam scheme was officially declassified. Given those facts, there seemed to be no reason left to classify the indirect drive approach being pursued at LLNL. Moreover, given the hoped-for push toward ignition with the NIF, there seemed to be plenty of reasons to engage a world-wide community to help in, and to participate in, this grand challenge quest. I gave the final briefing to advocate for the declassification in Washington, DC, before a broad inter-agency committee. When I flew out of Washington, DC, that night of January 17, 1991, I noticed a strange thing. As the plane flew over the Pentagon, every single light in every single office was on. Later during the flight, it was announced that Operation Desert Storm was under way and that bombs were falling on Baghdad.
Quite apparently, the briefing had its intended effect. It resulted in a declassification in 1994, and subsequent publication of several articles26,72–74 describing our research efforts in hohlraum drive and in indirectly driven implosions. To this day we benefit greatly from the participation of citizens from around the globe in our efforts, which have, as mentioned, indeed led to ignition. I am quite proud of my role in bringing about this broad participation.
B. NIF construction
With the NIF facility approved by the DoE, next came the long haul of actually constructing it. When first proposed as simply a “Nova upgrade” built within the existing Nova building, the price tag was a mere 400 × 10⁶ dollars. However, laser technology and architecture matured, a new building would be needed, and the task of erecting a structure while installing a high-tech laser would spell a great managerial and systems engineering challenge. Without steadfast support from many stakeholders, the NIF project would not have survived. Stakeholders included lab management, the other national labs, and the NNSA/DoE (particularly with the help there of Sheldon Kahalas, Marshall Sluyter, Dave Crandall, Allan Hauer, and Chris Keane). On even higher levels, stakeholders included Congress and influential supporters of science at very high levels of government, such as Will Happer (from Princeton University and former head of the Office of Science at DoE), the late Arthur Kerman (from MIT), and Neal Lane (former provost at Rice University, former Presidential Science Advisor, and former head of the NSF). The NIF budget grew approximately fourfold from its initial 1 × 10⁹ dollar estimate, but it did reach completion75 in the summer of 2009. Given the 1993 cancelation of the proposed particle accelerator, the Superconducting Super Collider in Texas (with 2 × 10⁹ dollars spent but perhaps 8 × 10⁹ to go), it is remarkable that NIF survived as a project.76 The decision showed a renewed national resolve to support a big science project.
By the 2000s, the NIF leadership had changed and was under, first, George Miller, and then Ed Moses when Miller became LLNL director. Much vision for the laser developments to follow was provided by Mary Spaeth and co-workers.77 Ralph Patterson served as NIF Project Director. A great many technologies were developed to be able to reach its specified 500 TW, 1.8 MJ goal. The French government also contributed to this effort, by way of a laser technology co-development agreement. The comparable-to-NIF LMJ laser in Bordeaux is still not fully complete, but with more beamlines added every few years, it is getting there.78
Some of the breakthroughs (“the seven wonders of NIF”) needed for NIF to achieve its performance goals include continuous processing in laser glass manufacturing; precision, programmable, and flexible pulse shaping using fiber optic oscillators, with transport to regenerative, stable, high gain pre-amplifiers; a four-pass, angularly multiplexed main amplifier, enabled by a large aperture optical switch, the large aperture plasma electrode Pockels cell; adaptive optics via the use of deformable mirrors; integrated computer control systems; and significant advances in target fabrication, especially in cryogenic systems, including the beta-layering technique for smooth, uniform frozen DT shells, pioneered by the late Larry Foreman of LANL. Much of the target fabrication was done at General Atomics in San Diego, under the able leadership of Abbas Nikroo.
A final need, as mentioned earlier, was for large, rapidly grown KDP crystals for the conversion of the 1.06 μm light to 2ω and to 3ω. Given the end of the Cold War, the NIF project was lucky enough to recruit Natalia Zaitseva from Moscow State University, who had the technological know-how to grow these crystals 10–100 times more rapidly than traditional methods allowed. Without this contribution, we might still be waiting for those giant crystals to be grown.
C. Preparing for NIF experiments
During the decade that NIF was being constructed, the target physics program did not stand still. The point design63 was highly instrumental in dictating the exact specifications for both the laser and the target fabrication. It called for a ∼20 ns long pulse, with a sequence of four shocks incident onto the CH ablator, which were meant to keep the frozen DT shell on a very low adiabat, allowing for very high compression. Its first shock was only about 1 Mbar in strength; this design would later come to be known as the “Low Foot” design.
Experiments continued at the Omega laser at the URLLE. Some of that work proved out the strategy and method of beam phasing to control time dependent symmetry.79 An important platform for NIF, the key-hole platform for measuring shock timing to minimize the adiabat of the implosion, was developed there as well.80 An important platform to measure RTI growth, to be discussed below, was also developed. LPI was studied in a gas bag geometry that tested our LPI codes, such as the aforementioned pf3D. The compendium of all these efforts can be found in the previously cited publication of Lindl et al.52
The validation of these plasma codes with Omega experiments that measured LPI thresholds and growth factors was quite important. It allowed the staff to plan which phase plates were to be ordered and built in time for NIF's first shots. The tension was as follows: A phase plate that made the laser beam spot sizes larger would lower the incident irradiance on target and stay below the thresholds for instabilities. However, large diameter spots would present difficulties in repointing those beams for possible required symmetry adjustments, as the repointed, larger spot laser beams might no longer fit properly inside the LEH of the hohlraum. As with so many issues in ICF, there was a trade-off that resulted in some middle-ground compromise. In this regard, it is possible that the initial results78 from hohlraums illuminated by a partially built LMJ in France do show some LPI signatures. Those beams have a smaller diameter than those on NIF, and it may be that this is the cause of the LPI seen there to date.
Two specific pieces of physics studied at Omega during the mid-2000s are also worthy of mention: cocktail hohlraums and the high flux model derived from gold sphere experiments.
A hohlraum can be made more efficient than a standard one with gold walls, by making the walls out of a combination of materials, a so-called “cocktail.” The idea is that any dip in the opacity of one material can be compensated by a peak in the opacity of another material at the same frequency. If that dip goes uncompensated, then photons can penetrate deep into the wall, never to emerge. If the opacity is compensated and is high, then the photon is absorbed near the surface of the wall. It can excite an ion and then have it de-excite and re-radiate into 4π. As such, the entire process can be viewed as a scattering event, and thus the cocktail wall has an effectively larger albedo (reflectivity) and will scatter energy back into the internal drive in the hohlraum, thus rendering the hohlraum more efficient.
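The opacity-compensation argument above can be illustrated with a toy numerical model. In the sketch below, the two material opacities are invented Gaussian-dip profiles (not real gold or cocktail opacities), and the mixture opacity is taken as a simple mass-fraction-weighted average, a common first approximation; the point is only that the mixture's worst-case (minimum) opacity is far higher than either pure material's.

```python
import numpy as np

# Toy photon-frequency grid (arbitrary units)
nu = np.linspace(1.0, 10.0, 500)

def opacity(nu, dip_center, base=100.0, depth=95.0, width=0.5):
    """Hypothetical wall opacity with a deep 'window' (dip) at dip_center."""
    return base - depth * np.exp(-((nu - dip_center) / width) ** 2)

kappa_a = opacity(nu, dip_center=3.0)  # material A: window near nu = 3
kappa_b = opacity(nu, dip_center=7.0)  # material B: window near nu = 7

# 50/50 cocktail: mass-fraction-weighted opacity (first approximation)
kappa_mix = 0.5 * kappa_a + 0.5 * kappa_b

# Each pure material is nearly transparent somewhere (min ~ 5), letting
# photons at that frequency bury themselves deep in the wall. The mixture
# has no such window (min ~ 52), so more energy is re-emitted back into
# the hohlraum, i.e., the effective albedo is higher.
print(kappa_a.min(), kappa_b.min(), kappa_mix.min())
```

Each material's opacity dip is "filled in" by the other material's high opacity at that frequency, which is exactly the mechanism that makes a cocktail wall a better scatterer of drive radiation.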
One way to test the principle is to measure the burn-through times of a cocktail sample on the side of a hohlraum vs that of an identical thickness sample of pure gold. The cocktail should have a delayed burn-through time because effectively it is scattering more of the drive back into the hohlraum, delaying the propagation of the Marshak wave through the sample. We did this in 1996.81
The acid test, of course, is to make a hohlraum entirely out of the cocktail material and then see if it gets hotter than a gold one illuminated by the same laser power. In the late 90s, I stepped down from being the ICF theory and design division leader. Nearly a full decade of management was about twice as long as I had originally anticipated, and thus was more than enough for me. Having seen NIF be justified for the SSMP mission, I wanted to understand better exactly which problems NIF could help solve in the realm of HEDP of relevance to the SSMP. During a period of about 10 years, I helped solve two major issues: the so-called “energy balance” problem and the basics of the so-called “boost” process. In my absence from the ICF endeavor, efforts were made to do the acid test for cocktails, to no avail. The cocktail hohlraums did not get any hotter.
When I returned to doing ICF work, my first assignment was to figure out what went wrong with these disappointing drive experiments with full hohlraums with cocktail walls. I polled several project managers as to where to begin a calculational study, and each gave me pointers in different directions. Had I followed any of them, we would all still be wondering what went wrong. Luckily, an incident at LANL in which a floppy disk went missing for a while (to later be found behind a copier machine) brought about a shutdown of both labs, LANL and LLNL, for several weeks. The idea was to be introspective on how to improve security procedures. The computing facilities were shut down, so all I had to think about problems with was pencil and paper. I was able to formulate a hypothesis: the cocktails that had been used had oxidized, increasing the walls' specific heat and thus lowering the temperature below what it would have reached without the oxygen. I was able to estimate the effect by hand, and later confirm it when operations returned to normal and full computing could resume.
The oxidation happened because of the way the cocktail hohlraum was made. A solid cylinder “mandrel” serves as a substrate upon which a hohlraum wall material is deposited. Later the substrate would be dissolved away, leaving an empty cylinder with the appropriate wall material. This etching process accelerated the oxidation into the cocktail, especially the material facing the inside of the hohlraum. I suggested an entirely different process to make the hohlraums. The substrate would be two solid pieces that each looked like a canoe (or a celery stick). The walls of the cylinder would be deposited on the inside of each canoe, and then the canoe mandrel would be dissolved from the outside, leaving the cocktail material that would face the inside of the hohlraum pristine. We tried this on Omega, and the now unoxidized cocktail behaved properly, and the hohlraum got hotter than a similarly driven gold walled one, precisely as predicted.82,83
A natural offshoot of this research was to ask which cocktails were optimal. It turned out that a pure depleted uranium (DU) wall was better than gold, with ∼15% less wall loss (or, equivalently, a better albedo). When cocktail walled hohlraums for NIF proved too difficult to manufacture consistently, NIF simply used DU,84 resulting in hotter, more efficient hohlraums. Since wall loss is about half the energy balance in laser driven hohlraums (the rest of the x-ray energy goes out the LEHs and into the capsule), the DU represented a 7% more efficient hohlraum. With 1.8 MJ incident, and say 1.5 MJ absorbed and converted to x rays, this means that DU “saved” about 100 kJ worth of incident laser energy. As we shall see later, to reach ignition we needed every bit of energy we could squeeze out of the NIF laser, so that 100 kJ saving turned out to be a major component in achieving ignition.
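The round-number arithmetic behind that 100 kJ estimate can be checked in a few lines. This is only a back-of-the-envelope sketch using the figures quoted above, not a hohlraum simulation:

```python
# Back-of-the-envelope check of the DU "savings" estimate quoted above.
# All inputs are the round numbers from the text.
incident_laser = 1.8e6      # J, NIF laser energy incident on the hohlraum
absorbed_xray = 1.5e6       # J, absorbed and converted to x rays
wall_loss_fraction = 0.5    # wall loss is roughly half the x-ray energy balance
du_wall_improvement = 0.15  # DU has ~15% less wall loss than gold

# A ~15% reduction of a ~50% loss channel is a ~7% overall efficiency gain...
overall_gain = wall_loss_fraction * du_wall_improvement  # ~0.075
# ...which, referred to the absorbed x-ray energy, is on the order of 100 kJ.
saved_energy = overall_gain * absorbed_xray
print(f"{saved_energy / 1e3:.0f} kJ")  # → 112 kJ, i.e., "about 100 kJ"
```

The estimate is deliberately crude; it simply shows that a modest wall-albedo improvement, applied to a loss channel that is half the energy budget, recovers an energetically significant amount of drive.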
A second area of physics tested on Omega was the non-LTE physics of laser-heated gold. In particular, x rays greater than 1.8 keV emitted from the gold could penetrate the capsule ablator and affect the density profile at the ablator–ice interface, leading to RTI growth there, without the benefit of ablative stabilization. Doping the capsule ablator with some higher Z material can control this density profile, but we needed to know ahead of time (to give the target fabricators time to figure out exactly how to do this doping) how much dopant was needed. Namely, we needed to know the relative size of the >1.8 keV photon emission to the thermal peak near 1 keV (for a 300 eV hohlraum, near the peak of its Planckian spectrum).
To do so, Larry Suter suggested we shoot gold spheres on Omega and assigned me the job of modeling the emission we would measure from those spheres. To our surprise, the gold spheres emitted85 thermal x rays at about twice the rate predicted by our then current non-LTE model. That model used XSN,86 an average ion model that had no “delta-n = 0” transitions. It also used a restrictive flux limit. The local Fick's law of conduction relies on a gradient of the energy density, but in ICF that gradient scale length can be so short as to lead to a nonphysically fast heat transport. As a result, the electron heat conduction in the computational model needed to be limited to a fraction, f, of the free streaming heat flux, namely, “fnvT,” with f as a variable, and n, v, and T as the electron density, velocity, and temperature, respectively. Based on decades of previous work87 (notably with mostly unsmoothed beams), the value of f that fit most data to that date was rather restrictive, about 0.03.
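The flux-limiter idea can be illustrated with a minimal sketch. A sharp-cutoff limiter is assumed here for simplicity (codes also use harmonic-mean forms), and all plasma numbers are illustrative, not from any NIF calculation:

```python
import math

# Minimal sketch of a sharp-cutoff flux limiter, as described above:
# the diffusive (Fick's-law) heat flux is capped at a fraction f of the
# free-streaming flux q_fs = n * v_th * T.

def limited_heat_flux(q_diffusive, n_e, T_e_joules, f=0.03, m_e=9.109e-31):
    """Cap a diffusive heat flux at f * n * v_th * T (sharp cutoff)."""
    v_th = math.sqrt(T_e_joules / m_e)       # electron thermal speed, m/s
    q_free_stream = n_e * v_th * T_e_joules  # free-streaming flux, W/m^2
    return min(q_diffusive, f * q_free_stream)

# When the temperature gradient scale length is very short, the Fick's-law
# flux blows up and the limiter takes over:
n_e = 1.0e27                  # m^-3, illustrative coronal density
T_e = 2000 * 1.602e-19        # 2 keV, in joules
huge_diffusive_flux = 1.0e20  # W/m^2, nonphysically large Fick's-law value

q_restrictive = limited_heat_flux(huge_diffusive_flux, n_e, T_e, f=0.03)
q_hfm = limited_heat_flux(huge_diffusive_flux, n_e, T_e, f=0.15)
print(q_hfm / q_restrictive)  # the HFM allows ~5x more conductive flux here
```

In the limiter-dominated regime the two models differ simply by the ratio of their f values, which is why raising f from 0.03 to 0.15 cools the emitting plasma so strongly.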
To match the gold sphere's high x-ray emission levels, we needed to make two changes. Instead of XSN, we used a DCA (“Detailed Configuration Accounting”) model.88 Even with that we also needed to increase f to a value of about 0.15. We called this the high flux model89 (HFM) because compared to the previous non-LTE model it predicted higher x-ray emissive flux and higher electron heat flux. As a result, a fluid element, subject to these dual higher loss channels, would be significantly cooler than a similar fluid element under the older more restrictive model, since the HFM applied to that fluid element would have it lose more of its heat. Many years later, we would find confirmatory evidence for this model by using Thomson scattering to directly measure the temperature of the plasma ablated from the gold sphere. That data showed90 that indeed it was matched by the cooler predictions of the HFM, and it ruled out completely the restricted heat flux, f = 0.03 model.
Now the reader may well ask: what does all of this HFM work, on an open geometry gold sphere, have to do with what would happen in a NIF hohlraum? That is a fair question, but as we shall learn shortly, the initial results of the full NIF experiments showed, to our surprise, that it was the HFM that could explain the data there. A harbinger of this result was also available to us from work on Omega where LPI in hohlraums was studied.91 The f = 0.03 model worked acceptably well in this experiment with a standard laser beam pointing. As reported by R. London at the APS/DPP 2008 meeting (Paper NO4.5), when the beams were separated and expanded to cover the hohlraum wall more uniformly, a model with f = 0.15 fit the data much better. Similarly, the gold sphere was rather uniformly illuminated, and f = 0.15 fit the data better. In a NIF hohlraum, the 192 beams cover much of the hohlraum walls, and perhaps that is why there, too, the f = 0.15 model seemed to fit the data better. Unfortunately, the point design used the old non-LTE model (XSN and f = 0.03), and that set the program up for the first of several surprises. Thus, the surprises that I had predicted would inevitably occur upon NIF startup indeed came to pass.
D. Red teaming
In addition to actual physics campaigns at Omega, there was another class of activity going on before the NIF was completed. This involved two flavors of “Red Teaming.” A Red Team is often employed against a “Blue Team” in war games. The Blue Team represents the strategies and underlying established assumptions, and the Red Team's job is to challenge those very assumptions.
The first exercise was, on one level, a Red Team challenging the assumptions of the Blue Team as to the physics models that underpinned the point design. On another level, the NIF strategy showed a refreshing lack of arrogance: the point was not to believe its physics assumptions per se, but to prepare an empirical tuning campaign to learn the correct path to ignition. As such, this first exercise was to see if this tunability approach92 would prove viable. Since this was pre-NIF, and the point design was literally a virtual success, it was easy enough for a Red Team to change the physics assumptions that went into that design, so that the point design would fail under the Red Team physics. Examples of the changes were electron conduction assumptions, opacity and equation of state assumptions, target imperfections, etc. The Red Team did redesign the capsule to reach ignition under its physics assumptions. I formulated some of the Red Team physics, and then moved on to be the “referee” of the exercise. Larry Suter headed the Red Team, and John Edwards headed the Blue Team.
A virtual campaign ensued.93 The Blue Team would propose experiments (e.g., shock timings controlled by the laser pulse shape, symmetry controlled by beam pointing, and beam balance) and the Red Team would carry them out, all virtually, using its physics model. The Red Team would return synthetic “data,” and the Blue Team would then propose follow-on experiments to tune their way to ignition. This exercise did result in “ignition by tuning.” I found it interesting that the Blue Team, true to their physicist nature, hypothesized what the Red Team physics model was. They were quite wrong, but it did not prevent them from reaching (virtual) ignition. Another very valuable outcome from this exercise was the preparation of a great many computational tools to directly compare simulation predictions to actual data signatures.
The second form of “Red Teaming” was to perform a “pre-mortem,” which took place in 2007, 2 years before NIF began shooting. I chaired the multi-lab, multi-expertise Red Team, called the Ignition Risk Reduction Committee (IRRC), that was assigned to this task. It was composed of LPI specialists Hector Baldis and Bill Kruer of LLNL, laser specialist John Murray of LLNL, experimentalists Guy Dimonte of LANL, and Dick Fortner and Mike Key of LLNL, nuclear physicists Ken Moody, Steve Libby, and Richard Boyd of LLNL, and designers of many fields of expertise including John Nuckolls, George Zimmerman, Charlie Verdon, John Lindl, Jim Hammer, Omar Hurricane, all from LLNL, and Mark Herrmann, then of SNL. A pre-mortem means that we pretended that NIF had failed to reach ignition, and then had to imagine or explain why. One issue raised was the kinetics aspect of LPI, and the fear that LPI would rear its ugly head. Another was our deep suspicion that the tent that holds the capsule in place at the center of the hohlraum would perturb the implosion. Our team looked at that issue but found that the computational power available at the time was insufficient to address it properly. Seven years later, with improved computational capabilities, the ICF Program would find out the hard truth about this tent issue, as will be reported below.
The revised Red Team continued to meet on occasion once NIF shooting began, in an advisory role to the Program, as an outlet for junior staff to present their non-mainline ideas, and to adjudicate disputes regarding diagnostic interpretations. I chaired this effort throughout most of that time. The revised LLNL team comprised John Nuckolls, John Lindl, Nino Landen, Bill Kruer, Erik Storm, Mary Spaeth, Bob Tipton, George Zimmerman, and Brian Pudliner. Participants from outside LLNL, such as Riccardo Betti from the URLLE, and Dov Shvartz from Ben Gurion University (BGU), attended on the occasions when they were visiting on-site.
Of course, a different form of “red teaming” is to have a robust independent effort from another lab, namely, LANL. This “collabo-tition,” a combination of collaboration and competition, had proved quite useful in the efforts to accomplish the Nova Technical Contract. For reasons unclear to me, the mid-2000s saw a decrease in funding and effort from LANL with regard to NIF and ignition. Nonetheless, some very important LANL work on diagnostics did continue. As we will see, as the NIF targets eventually produced substantial yields, the crucial LANL-supplied information on the shape of the neutron emitting regions was invaluable in understanding the behavior of those targets, and in reconstructing density maps of the imploded core.94
VI. SURPRISES UPON THE STARTUP OF THE FULL NIF LASER
In the summer of 2009, NIF was ready to fire all its 192 beams into a target, albeit, at reduced peak power, to slowly “break in” the new laser. The first targets were empty hohlraums. To everyone's surprise, the Dante broad band spectrometer, which looks through the LEH and reports out the soft x-ray emission vs time, showed nearly twice the emission predicted by the “standard” non-LTE model, using XSN and f = 0.03. The HFM matched the data perfectly.95,96 This fact seemed quite consistent with our experience with the URLLE gold spheres, which, as described above, also emitted nearly twice what the expectations were based on the “standard” model. It gave us a first inkling that the HFM could possibly be applied to NIF targets.
In December 2009, NIF fired its first hohlraums filled with gas and a capsule at the 1 MJ of laser input level. Earlier, such shots in September were at lower energy inputs. These first began as “warm” targets, with a neopentane gas fill. Later these evolved into a helium/hydrogen mixture that needed to be cooled down to allow the proper fill density to be achieved without blowing out the windows that stretched across the laser entrance hole (LEH). The cooling would lower the pressure of that fill gas. More discussions of the possible complications of such cooling will be presented shortly below.
Lo and behold, more surprises arose. The levels of LPI were rather high (about 15%), meaning that there was a decrement in drive due to those losses. Moreover, the spectrum of the stimulated Raman scattering (SRS) signal that went back into the lens was duly recorded. This spectrum can be interpreted to derive the temperature of the plasma from which it scattered, and it turned out to be quite different from the spectrum predicted by the standard non-LTE model. Once again, the HFM predictions matched that spectrum quite closely. Consistent with the levels of LPI and the observed spectrum was the notion that the plasma in the hohlraum (or at least the plasma in the part of the hohlraum whence the SRS came) was significantly cooler than predicted by the standard model. As described above with regard to the plasma in the blowoff of the URLLE gold sphere, this cooler plasma is due to the HFM's enhanced radiative and conductive cooling. With these results,97 it seemed clear that the HFM should be taken seriously in the design process.98
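Why the SRS spectrum encodes the plasma temperature can be sketched from the frequency-matching condition ω_s = ω_0 − ω_epw together with the Bohm-Gross dispersion relation for the electron plasma wave. The sketch below is a simplified direct-backscatter estimate (taking k_epw ≈ 2k_0) with illustrative densities and temperatures; it is not the analysis actually applied to the NIF data:

```python
import math

# Simplified SRS backscatter estimate: the scattered light frequency is
# omega_s = omega_0 - omega_epw, where the electron plasma wave obeys the
# Bohm-Gross relation omega_epw^2 = omega_pe^2 + 3 k^2 v_th^2.
c = 3.0e8               # m/s
lambda0 = 351e-9        # m, NIF 3-omega laser wavelength
omega0 = 2 * math.pi * c / lambda0

def srs_wavelength(n_over_ncrit, T_e_keV):
    """Scattered SRS wavelength (m) vs density and temperature (illustrative)."""
    omega_pe = omega0 * math.sqrt(n_over_ncrit)        # plasma frequency
    v_th = math.sqrt(T_e_keV * 1.602e-16 / 9.109e-31)  # thermal speed, m/s
    k_epw = 2 * omega0 / c                             # crude backscatter k
    omega_epw = math.sqrt(omega_pe**2 + 3 * (k_epw * v_th)**2)
    return 2 * math.pi * c / (omega0 - omega_epw)      # scattered wavelength

# At fixed density, a cooler plasma scatters at a shorter wavelength,
# which is how a measured SRS spectrum constrains the temperature:
print(srs_wavelength(0.10, 2.0) < srs_wavelength(0.10, 4.0))  # → True
```

With these illustrative inputs the scattered light lands in the visible (several hundred nanometers), in line with the general character of SRS backscatter spectra, though the real inference accounts for density and temperature profiles along the beam path.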
An LPI phenomenon that can happen when high power laser beams (of slightly different wavelengths) cross within a flowing plasma medium is the possibility of an ion acoustic wave growing and acting as a grating to transfer energy99 from one beam to another (see Fig. 8). This cross beam energy transfer (CBET) had been seen in experiments in the 2000s, and now, with 96 beams crossing within each of the two LEHs, came to life in earnest at the NIF. The original strategy was to pick a “delta-lambda” (Δλ) between the inner and outer beam lines to minimize this phenomenon, as the point design did not include this complication. This required the facility to include this option, which it did. However, the new results showed that the inner beams, which penetrated deep into the hohlraums, were the ones encountering the cool plasma and thus showed enhanced LPI (SRS) losses. Moreover, the cooler plasma there led to high beam absorption through the classical mechanism of inverse bremsstrahlung, directly impeding the propagation of the inner beams to their desired location deeper in the hohlraum. To restore symmetry under these adverse conditions, CBET was purposely used100 (via a choice of Δλ) to transfer energy from the relatively unaffected outer beams to the inner beams. This method succeeded in those initial experiments.
There were more surprises to come. In 2010, the facility installed more diagnostics and installed its full cryogenic ability. Recall, that at Nova, no cryogenic capsules in hohlraums with shaped pulses were tested, and so the situation was ripe for surprises. It is true that LPI experiments with gas bags, cooled to low temperature to raise the density of the fill gas, were carried out at Nova.101 That research did evaluate ice forming on the gas bag, but being illuminated by a simple 1.5 ns square pulse made that thin ice layer rather inconsequential.
Similarly, the first implosion results on NIF in 2009, as described above, employed “symmetry capsules” (“symcaps”) that did not have a frozen DT shell. As such, shock timing was less of an issue, so that if ice formed on the LEH window, it would not necessarily be noticed. Nonetheless, curious results started to appear as a function of how long the targets were in the chamber before being shot. Now, at NIF, in 2010, full target shots with frozen DT shells and their long shaped pulses, with the cryogenics in place, showed a curious drop in drive (again, as measured by Dante). Most importantly, the keyhole platform to measure shock timing now showed some curious results rather directly. It was discovered that the vacuum in the NIF target chamber was insufficient to isolate the target. Ice formed on the windows of the cold hohlraum. Detective work by Harry Robey and Cliff Thomas is noteworthy in this regard.
One early solution considered was to simply add extra laser energy onto the foot of the laser pulse to heat up and blow away the ice. Debbie Callahan asked Dan Clark to calculate the effects of this strategy. This was the first manifestation of what later would be called “the high foot” approach. Dan found that the higher foot would make the implosion far more hydrodynamically stable (see Fig. 9). We will return to this important point in Sec. VII. The actual solution selected for the ice problem was to install “storm windows” on the hohlraum, namely, double windows with an insulating vacuum gap. This allowed the point design research to proceed as originally planned.
In 2011 then, the NIF was ready to field the point design. A series of keyhole platform shock timing measurements showed that empirically the timing could be tuned to the low adiabat goal.102 The rho-R of the system was measured by the “DSR”—the down scattering ratio of the ∼10 MeV neutrons to the 14 MeV DT fusion “birth energy” neutrons. This is because the 14 MeV neutrons must traverse the dense DT shell (and, to a lesser degree, the remaining unablated CH ablator) on the way out to the detector. The denser that shell, the more rho-R they must traverse, and the more the neutrons are down-scattered in energy. The data did show an increase in DSR with improved shock timing.
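As a minimal illustration of the DSR metric just described, one can compute it from a binned neutron spectrum. The band edges below (a 10-12 MeV down-scattered band over a 13-15 MeV primary band) and the toy spectrum are illustrative choices, not the NIF analysis pipeline:

```python
# Toy down-scattering ratio (DSR): yield in a down-scattered energy band
# divided by yield in the primary 14 MeV band. Band edges are illustrative.

def dsr(energies_mev, counts, low_band=(10.0, 12.0), primary_band=(13.0, 15.0)):
    def band_sum(lo, hi):
        return sum(c for e, c in zip(energies_mev, counts) if lo <= e < hi)
    return band_sum(*low_band) / band_sum(*primary_band)

# A toy spectrum: a large primary peak near 14 MeV plus a small
# down-scattered shelf at lower energies.
energies = [10.5, 11.5, 13.5, 14.0, 14.5]  # MeV, bin centers
counts   = [2.0,  2.0,  20.0, 60.0, 20.0]  # relative neutron counts
print(dsr(energies, counts))  # → 0.04, i.e., a DSR of 4%
```

Since the down-scattered fraction grows with the areal density the 14 MeV neutrons traverse, the measured DSR is approximately proportional to the fuel rho-R, which is why improved shock timing (a denser shell) showed up as a higher DSR.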
However, the DT implosion performance was disappointingly low. With ignition in the 10^18 neutron yield neighborhood, the yields in 2011 were in the 10^14 range. I co-organized, with John Lindl and Mike Key, a summer study of external experts to advise us on how to make further progress. They came from URLLE (Riccardo Betti, Ryan Nora, Valeri Goncharov), Washington University (Jonathan Katz), UCLA (Chan Joshi), France (Catherine Cherfils, Guy Shurtz), Israel (Dov Shvartz, Yoni Elbaz), Italy (Stefano Atzeni), the UK (Steve Rose, Peter Roberts), and LLNL (John Nuckolls, George Zimmerman, Paul Springer, Jim Hammer, Bill Kruer) and spent 2 weeks reviewing the data. Their number one piece of advice was “Push Longer.” Based mostly on experience103 from direct drive experiments at the Omega laser at URLLE, the idea is to keep pushing on the implosion even after it is “committed” and its implosion trajectory would not really change much. The reasoning is as follows: Even though the trajectory of the center of mass of the shell would not change, the shell is a hot plasma and can decompress and expand on its way inward. Keeping the drive on longer keeps the shell dense. The “ram pressure” that the shell can deliver to the hotspot gas scales as ρv^2, so the system can reach higher compressed pressures if ρ stays high. Follow-on experiments at NIF the following year proved their advice correct.104
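The “push longer” logic reduces to one line of arithmetic: since the deliverable ram pressure scales as ρv^2, letting the shell decompress during a long coast costs pressure one-for-one with density, even at unchanged implosion velocity. A sketch with purely illustrative numbers:

```python
# Illustrative "push longer" arithmetic: stagnation (ram) pressure scales
# as rho * v^2. If coasting lets the shell decompress by 2x at essentially
# unchanged implosion velocity, the deliverable ram pressure halves.

def ram_pressure(rho_kg_m3, v_m_s):
    return rho_kg_m3 * v_m_s**2  # Pa

v = 3.7e5           # m/s, a typical implosion-velocity scale (illustrative)
rho_pushed = 1.0e5  # kg/m^3, shell kept dense by continued drive (illustrative)
rho_coast = 0.5e5   # kg/m^3, shell decompressed during a long coast (illustrative)

print(ram_pressure(rho_pushed, v) / ram_pressure(rho_coast, v))  # → 2.0
```

The center-of-mass trajectory is indeed “committed” either way; what the extra drive buys is density, and through ρv^2, the final compressed pressure.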
Nonetheless, the point design, low foot, CH target continued to disappoint. There was evidence of a serious mix of the ablator into the hotspot.105 Such mix radiatively cools the plasma and lowers yield. Based on the measured surface finish of the capsule, the calculations could not reproduce this mix. We were now facing the “flip side” of deterrence by capability. There were many who were quick to jump to the conclusion that the codes were inaccurate, which brought into question our ability to certify our deterrent capability in the absence of nuclear testing. To me, those critics seemed blind to the possibility that it was not the codes that were wrong, but rather the assumed initial conditions. This mix could be “post-dicted” only if the surface finish of the capsule was artificially enhanced (over the metrologized surface roughness) by a factor of 4.
Symmetry continued to be controlled by CBET, but was reaching the point of diminishing returns by using very high, and increasingly saturated, values of Δλ. The significant levels of measured LPI persisted, with no real confidence that still more, internal, LPI (such as side-scatter) was not occurring. In addition, amid all of this “noise” of curious and disappointing results, the lesson to “push longer” and minimize coast time was at times ignored or forgotten. With the program, in general, floundering, it was time to invoke ICF's superpower: its inherent flexibility to adjust, innovate, and set out in new directions. We will discuss that in Sec. VII.
Before we leave the low foot point design we should report, in hindsight, our views as to “what went wrong?” In truth, it took us into the 2014 timeframe to get some clarity on this issue. As mentioned earlier, the Red Team pointed out its fears that the tent holding the capsule in the center of the hohlraum could be a source of perturbation to the implosion. In 2007, computer power was simply not up to assessing that properly. Bruce Hammel kept pursuing the problem, and by 2014, aided by 7 years of “Moore's Law” improvements to computational capabilities, began to shed light on the effect. The initial estimates had treated the tent as simply an extra few dozen nanometers of material on the capsule. Bruce found that the issue was different. In his ab initio calculations, in which the 50 nm tent geometry was initialized with adequate resolution, the tent departed (or, perhaps better described, “lifted off”) from the surface of the capsule at some azimuthal position. When it was heated, it exploded inward and outward. The inward half impacted the ablator and, at the liftoff position, formed a shaped charge jet. This collision of tent and ablator seeded a pernicious perturbation that led to hydrodynamic instability growth. Bruce was able to follow this perturbation growth all the way to convergence 40, down to the 25 μm radius scale. At this fuel assembly scale, using the Hydra code, he showed the tent penetrating into the hotspot. This could explain a great deal of how and why the low foot implosions mixed heavily and failed.106
Another important part of the story of the tent problem was a new diagnostic that could image the capsule as it imploded, at least at convergence 5, at the 200 μm radius scale. As they have done so consistently throughout the history of ICF, new diagnostics opened our eyes to issues we could not see, or did not even imagine. The images, seen along the waist of the capsule, clearly showed the tent scar (at about plus and minus 45°) on the capsule.107,108 I cannot help but wonder if we were fooled during the low foot campaign by this tent perturbation. If the tent scar closed off compressional heating above the 45° line (and below the −45° line), then a prolate implosion of 3:1 aspect ratio could, upon assembly, look more like a 1:1 implosion. We might have been “tuning” symmetry in a completely wrong place in target performance space! Improvements to the tent problem involved having the tents be “polar,” namely, only holding the capsule tangentially at its north and south poles.109
Yet another issue arose and reached some clarity by 2014. There was evidence110,111 that the CH ablator could be photo-activated to uptake oxygen into its bulk. This uptake could be random, and the oxygen could serve as a perturbation since it is a source of opacity to the x-ray drive. A surface finish metrology would look smooth and not detect these hidden perturbations within the bulk of the ablator. Could we quantify this? Work at URLLE did show112 this perturbation.113 On NIF, a very important hydro growth radiography platform114 was stood up using the same keyhole geometry employed in shock timing, but now with x-ray backlighting.115 It measured the 3D modulations on a CH driven capsule and found something like a 4× larger growth perturbation than the one predicted assuming the measured surface finish. This eerily harkened back to our need for a 4× surface perturbation to explain mix due to hydrodynamic instability growth on the low foot shots.
Dan Clark performed some rather heroic 3D simulations, which I had termed “kitchen sink” calculations116—namely, apply any-and-all sources of perturbation to the low foot implosion calculation using 3D Hydra. The tent was certainly a major source of degradation. However, in truth, the low foot target was failing for a combination of “diseases.” Dan's 3D calculations explained the x-ray and neutron observations reasonably well. The point design may have been the most robust design in 1D with its high gain, but in the real world of 3D and in the real world of a variety of sources of degradation, it was not viable. It was time for a change of design that had 3D stability in mind. A general and extensive review of all the work at NIF to this point in time was published.117
VII. CHANGES TO THE POINT DESIGN FOLLOWED BY STEADY IMPROVEMENTS IN PERFORMANCE
A. Overview
Sections VII B and VII C will describe the many changes that were made in our approach to ignition that ultimately led to success. Let me first summarize the entirety of that path quite briefly here, before we delve into the details. This will portray “the big picture,” which shows that with each change (in some combination of hohlraum, capsule, and laser pulse) improvements ensued.
I ask the reader's indulgence if the thumbnail sketches here are too brief. All will be explained in detail as we proceed further into this paper. While Fig. 10 may also be considered a “summary” of all these changes, we defer it to later for the same reason: it is best comprehended when all the details are explained.
2009–2013: “Low Foot” four-shock pulse, CH ablator, high gas fill hohlraum: The “NIF Point Design.” Low adiabat, high gain potential, and “robust” in a 1D sense. Yields in the kilojoule range.
2013–2016: “High Foot” three-shock pulse, CH ablator, high gas fill hohlraum: Higher adiabat but less gain potential, and much more stable to hydrodynamic instabilities and thus more “robust” in a 3D sense. Yields in the 10 kJ range (eventually 28 kJ), but more importantly, much better “behaved.”
2014–2021: Low gas fill hohlraum with standard LEH size. Lowers the LPI seen in high gas fill hohlraums, but needs a shorter pulse (or other tricks) to control low mode asymmetry.
2016–2022: Shorter (still three shock) laser pulse that goes with a High Density Carbon (also known as diamond), HDC ablator capsule, allows for better symmetry control. Yields in the 50 kJ range.
2018–2022: Increase capsule scale of HDC capsules. Hohlraum is more efficient in coupling to larger capsule, but more challenging for symmetry. Use “I-raum” or CBET to control symmetry. Achieves “burning plasma” (alpha heating exceeds PdV heating). Yields near 200 kJ.
2020: More precision on laser balance, hohlraum diagnostic windows, fill tube size, and HDC capsule quality. All necessary ingredients for making further progress.
2021: Smaller LEH leads to more efficient hohlraum. This allows a longer laser pulse (at less peak power) to keep “pushing longer” on implosion. Yield over 1 MJ and doubling of hotspot temperature due to fusion, so it exceeds the Lawson criterion and is scientifically “ignition.”
2022–2023: A 7% increase in NIF energy (past its original specs) leads to thicker capsule and even longer pulse. Yields of 3–4 MJ and reaches NAS definition of ignition.
B. High Foot
So let us begin with the “multiple births” of the high foot. In 2012, there was a community wide brainstorming meeting organized by LLNL's Bill Goldstein (who later became LLNL Director) and Bob Rosner (of the University of Chicago, who is just finishing his term as APS President). This “San Ramon Workshop”118 was attended by over 150 people, and parsed its sessions into: Laser propagation and x-ray generation, co-chaired by Chan Joshi of UCLA and myself; x-ray Transport and ablation physics, co-chaired by David Meyerhofer, then at URLLE (now at LANL) and Jim Hammer, LLNL; Implosion hydrodynamics, co-chaired by Valeri Goncharov of URLLE and Omar Hurricane, LLNL; Stagnation properties and burn, co-chaired by Riccardo Betti of URLLE and Johan Frenje of MIT; HED materials crosscut, co-chaired by Justin Wark of Oxford University, and Gilbert Collins (then at LLNL, now at URLLE); and Integrated modeling, co-chaired by Don Lamb of the University of Chicago and Marty Marinak, LLNL.
In the ablation physics section of that report, there was mention of a curious result. The DCA model, used on the ablator, predicted a strange “double peaked” structure of pressure vs radius in the ablator. This was a point of concern to which we will return shortly. In the implosion hydrodynamics section were figures that showed that a higher laser power in the picket (the first shock launcher) of the pulse, not yet called “high foot” though that was precisely what it was, led to significant reduction of growth of hydrodynamic instabilities. This was the work of Dan Clark, mentioned earlier, that he had done in response to the “ice on the windows of the hohlraum” issue of 2010.
Shortly after this workshop came a more formal introduction to the high foot concept. The just mentioned problem of the curious double ablation structure early in the pulse in the CH ablator as predicted by the DCA model was investigated by Tom Dittrich and Jim Hammer. I feel partly responsible for this issue even arising. The HFM, which seemed to have proven itself on the NIF experiments to date (and had first arisen in analyzing high Z sphere emission in Omega experiments), called for the use of DCA. However, strictly speaking, its use was proven and recommended solely for high power illumination of high Z elements (and a T of several kiloelectron volts) and not for low power illumination of low Z elements (and a T of 60 eV!). Thus, the use of DCA for the CH ablator early in the pulse was a bit of overzealous “mission creep” by the program. In any event, the team did what any good designer does: Redesign to avoid the problematic and curious result. They proposed a “higher foot” in which even the DCA model had a single hump of pressure vs radius, not a double one. This is all presented in the first part of their publication.119 For the record, about a year after this work, the DCA model was upgraded in this low temperature problematic part of parameter space, and the double ablation structure disappeared. Luckily for the ignition program, the “design fix” of the high foot was already well on its way to implementation.
In the second part of that same PRL came a crucial result. The high foot led to 2D capsule implosion simulations that survived the hydrodynamic instability growth intact, even when seeded by the (thought at the time to be artificially enhanced) “4×” roughness. It was this same “4×” enhanced roughness, described earlier, that was leading, calculationally, to the failure of the low foot design. This three-shock, higher adiabat design was a compelling possibility for improving performance by optimizing on 3D, real-world stability, not 1D idealized performance. Previous attempts at predicting perturbation growth relied on adding perturbation amplitudes of various modes in quadrature. It seems that Mother Nature was being less kind, in that 3D perturbations coupled more perniciously.
Some general comments, and then some particular comments, on the improvement in stability for this design are in order. First, the general lessons. A three-shock system will not be as “true” as a four-shock system in keeping the system close to the Fermi-degenerate adiabat. As such, α will be higher in the three-shock system, rising from the low foot's presumed α of 1.5 to an α closer to 3. Given that P ∼ αρ^(5/3), at the same peak driving pressure, P, the higher adiabat system, namely, higher α, will lead to a lower shell density, ρ. The ablation velocity, VA, is given by (dm/dt)/ρ, where dm/dt is the mass ablation rate that depends on the drive temperature T. Thus, we expect the ablation velocity, VA, to scale as α^(3/5). The reader is referred to my ICF tutorial16 to follow through on this argumentation, which leads to a smaller in-flight aspect ratio, namely, a thicker shell upon implosion, with the higher adiabat, α. A thicker shell will be somewhat more impervious to the damage brought on by the RTI. Moreover, as ablative stabilization depends on VA, this again points to the stability advantage of higher α (at a cost in the higher gain to be had at lower α). These lessons were all independently learned at the Omega facility at the URLLE, with direct drive.22 As will be described shortly, experimental evidence at NIF supports these arguments.120
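To make the compounding of these scalings concrete, here is a minimal numerical sketch (my own illustrative arithmetic, not from the tutorial): at fixed peak pressure P and fixed mass ablation rate dm/dt, it compares the low foot's α ≈ 1.5 with the high foot's α ≈ 3.

```python
# Toy check of the adiabat scalings quoted in the text (illustrative only).
# At fixed peak pressure P ~ alpha * rho^(5/3), the shell density scales as
# rho ~ (P/alpha)^(3/5); with a fixed mass ablation rate dm/dt, the ablation
# velocity V_A = (dm/dt)/rho then scales as alpha^(3/5).

def density_ratio(alpha_lo, alpha_hi):
    """rho(alpha_hi)/rho(alpha_lo) at fixed peak pressure P."""
    return (alpha_lo / alpha_hi) ** (3.0 / 5.0)

def ablation_velocity_ratio(alpha_lo, alpha_hi):
    """V_A(alpha_hi)/V_A(alpha_lo) at fixed mass ablation rate dm/dt."""
    return (alpha_hi / alpha_lo) ** (3.0 / 5.0)

rho_ratio = density_ratio(1.5, 3.0)           # ~0.66: higher adiabat -> lower density
va_ratio = ablation_velocity_ratio(1.5, 3.0)  # ~1.52: higher adiabat -> faster ablation
```

The ~50% boost in ablation velocity is the origin of the extra ablative stabilization of the RTI claimed above, bought at the price of a less compressible (lower density) shell.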
Further detailed system studies of the stability of the high foot system uncovered some interesting and useful lessons. Before the shell smoothly accelerates inward, and thus becomes subject to the RTI, the shell is first driven by a shock. This shock is in itself hydrodynamically unstable, as it is subject to the Richtmyer–Meshkov instability (RMI), which can amplify any initial non-uniformities. The phase of the perturbation may be controlled in such a way as to minimize the size of the perturbation at the time when the RTI kicks in, and, in this way, minimize the growth of the initial non-uniformities.121 This same principle is at work in more recent work, the “SQn” approach,122 which has a smoother acceleration to minimize the RM seed for the RTI. A more complete discussion of these issues can be found in the review paper by Meezan et al.123 In the early days of the high foot design, Omar Hurricane has related to me that he advocated for dropping four-shock systems to either three or two shocks, specifically to minimize the RMI. Tom Dittrich still advocated for a four-shock system with the high foot as the first shock, but that four-shock system, in Denise Hinkel's hohlraum design, was so close to a three-shock system that three shocks were adopted.
Not only was this high-foot scheme attractive in reducing growth rates for the seeds of that “4×” roughness, which probably contributed to the low foot's underperformance, it also improved upon the issue of the pernicious effect that the tent had on the low foot design. The visible “tent scar” seen, in the low foot, in the back-lit image of the capsule implosion taken when it imploded from a radius of 1 mm to a radius of 200 μm, completely disappeared when the high foot implosion was performed.
From these promising indicators came gratifying results. Yields jumped tenfold, and a 10 kJ yield was comparable to the energy in the final fuel assembly.124 Far more important, to my mind, was the fact that the implosions were “behaving” far more “rationally” than the low foot. For example, when implosion velocities were increased, leading to higher hotspot temperatures, as measured by the neutron time of flight (NTOF) detectors, the yields rose accordingly. In fact, they rose as T^4.1, just as we would expect from the ⟨σv⟩ scaling of the DT fusion reaction rate in the 3–4 keV range of the measured T. This is in sharp contrast to the low foot yields rising as T^2.4 when we would have expected a T^6 scaling in the 1.5–3 keV range of the low foot implosion hot spots. As explained in Sec. VII A, mix from various sources was killing the yield of the low foot, so no “rational” scaling would ever emerge from those experiments.
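For concreteness, the exponent quoted for such a yield-vs-temperature trend can be extracted from any pair of shots via a log-log slope. The sketch below uses made-up illustrative (T, yield) values, not actual NIF shot data:

```python
import math

def power_law_exponent(t1, y1, t2, y2):
    """Infer n in Y ~ T^n from two (temperature, yield) data points."""
    return math.log(y2 / y1) / math.log(t2 / t1)

# Hypothetical pair of shots lying on the high-foot trend Y ~ T^4.1:
# at 3 keV the yield is 1 (arbitrary units); at 4 keV it is (4/3)^4.1 larger.
n_inferred = power_law_exponent(3.0, 1.0, 4.0, (4.0 / 3.0) ** 4.1)
```

Applied to real shot pairs, this same slope would return ~4.1 for the high foot data and ~2.4 for the low foot data discussed above.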
This “good behavior” of the high foot campaign also allowed systematic studies of other issues. The “push longer” advice of the 2011 Summer Study group could now be tested more systematically on this rational platform. Omar Hurricane and co-workers published a careful study of how important it was to “push longer,” or, in their terminology, lower the “coast time” of the implosion.125 This important lesson would continue to inform the program in its progress toward ignition. Another way to state the requirement of low coast time is to minimize the radius at which the imploding shell reaches peak velocity.4,126
C. Hohlraums with low gas fill
In the mid-2010s, the laser program leadership was transferred over to Dr. Jeff Wisoff, who holds that position to this day. The NIF facility was eventually under the guidance of Mark Herrmann and then Doug Larsen, and most recently led by Gordon Brunton. The day-to-day operation of the facility has been managed ever so faithfully by Bruno Van Wonterghem, to this day. On the program side, the ICF program leader became John Edwards, to be followed a half decade later by Mark Herrmann, and more recently by Richard Town. I am proud of the fact that in the 90s I hired both John Edwards from AWE and Mark Herrmann from PPPL, knowing full well their leadership potential. Richard Town was hired in the 2000s, after my tenure as X-Division leader was over, by my very worthy successor, Charles Verdon. Both Verdon and Town came over to LLNL from the URLLE.
In this timeframe, some serendipity paid the program a visit. The neutron diagnostics needed calibration from a source that would emit neutrons into 4π rather uniformly. A thin shell of CH surrounding a thin shell of DT was imploded with a short, 4 ns long pulse. Because the pulse was so short (vs the 22 ns long pulse of the low foot design and the 16 ns long pulse of the high foot design), it was decided to shoot it in a hohlraum that was near vacuum. The hohlraum and capsule performed flawlessly, in accord127 with the predictions of the HFM. While targets like these are often termed “indirect drive exploding pushers,” Ref. 115 makes it clear that they are not exploding pushers in the true sense of the word, as has been described earlier regarding the first campaign at Shiva. They are a radiation driven thin ablator system that implodes quite rapidly, sends a strong shock ahead of it into the fuel, and gets that fuel hot.
Perhaps most significantly, the LPI levels that had persisted even with the high foot experiments (which had a hohlraum He gas fill of 1.6 mg/cc) disappeared in this new near vacuum platform. For a change, Mother Nature was acting kindly toward the LLNL indirect-drive ICF Program. The coupling of the laser to the hohlraum was in excess of 99%. A lasting lesson from this would end up being: Be light on your feet, be prepared to take advantage of lucky breaks, and then be brave enough to change course and actually do so. When this hohlraum was later utilized to implode high convergence capsules (with a different ablator material, as will be described shortly), all its advantages persisted nicely.128
Another lesson eventually emerged from this experience with near vacuum hohlraums. The capsule symmetry exhibited a surprising behavior. The capsule emission was prolate, implying good propagation of the inner beams to the waist of the hohlraum. The simulations, however, were predicting an oblate implosion, implying that the inner beams were having difficulty propagating to the waist. The designers had to artificially change the wavelength of the laser to get the inner beams into the waist area.129 This, of course, did nothing for our code credibility. It was hypothesized that the near vacuum hohlraum allowed interpenetration of the plasma flowing from the ablator and from the gold bubble (caused by the outer beams on the walls of the hohlraum), but the code capability was not quite up to computing that reliably. Experiments were done130 at URLLE to test interpenetration in a cylindrical geometry.
It was not until years later that we ultimately understood the source of this discrepancy. George Zimmerman had put a better multi-fluid penetration package into Lasnex, and Drew Higginson of LLNL was assigned to test it out against these confounding near vacuum hohlraum symmetry results. That package alone did not explain the data. In the interim, Steve MacLaren of LLNL had zoned up a good portion of the double “storm window” hardware outside the LEH to be included in the code. In addition, there was the in-line CBET package. It turned out that it was, most crucially, CBET occurring in this outside-the-LEH plasma. The CBET enhanced the inner beam strength and resulted in the correct symmetry.131 The lesson here is that details really matter, and that putting in the correct amount of detail to properly simulate the reality of the experiment (in this case, the storm window and its ensuing plasma formation) is crucial in explaining data and thus in projecting more accurately the plasma conditions and behavior of future targets.
The serendipitous result of the near vacuum hohlraum's elimination of LPI immediately suggested a systematic study of LPI levels vs the amount of gas fill in the hohlraum.132 That study showed that the SRS backscatter came close to zero for He fills of 0.6 mg/cc and below. The result was explained by plasma physics post-processors, that “post-dicted” very low LPI given the low density and short gradient scale-lengths.
Shortly thereafter, I co-organized, with John Edwards, the next Summer Study session, in 2014. There were participants from AWE (Peter Graham), Ben Gurion University (Dov Shvarts), LANL (Don Haynes, Ray Leeper, Steve Batha), NNSA (Kirk Levedahl, Jeff Quintenz), NRL (Andrew Schmitt), SNL (Mark Herrmann, Mike Campbell), SLAC (Siegfried Glenzer), URLLE (Riccardo Betti, Valeri Goncharov, David Meyerhofer, Craig Sangster), and LLNL (Jim Hammer, George Zimmerman, Paul Springer, Steve MacLaren). The group's final report was unequivocal: Shift all hohlraum work to low gas fills; eliminating the hard-to-calculate and hard-to-diagnose LPI effects was an imperative.
A low density hohlraum gas fill would open up a challenge to achieving good low mode symmetry, as now the gold walls would ingress more than with a high density gas fill hohlraum, and challenge beam propagation and the places where lasers converted their energy to x rays. This would be very difficult for long pulse implosions that were needed for CH ablators. Denise Hinkel and co-workers did succeed in redesigning CH ablator high foot implosions with a somewhat shorter pulse that could, in principle, be symmetrized.133 However, ultimately, the tent's perturbative effect on that CH capsule would probably remain an issue.
D. High density carbon (HDC) ablators
So, along came another principal lesson from this long saga: Diversify. While the CH ablator, high-foot work was being highlighted by the ICF program, an alternative technology was slowly making progress, doing fundamental and foundational work that benefited from not being in the limelight. Years earlier, “seed money,” through the vehicle of Laboratory Directed Research and Development (LDRD), had been devoted to exploring an alternative to CH: ablators made from high-density carbon (HDC). By the way, the entire LDRD process and infrastructure was instituted at LLNL by the initiative of Claire Max, who no longer worked in the ICF program but had moved on to do many other things at LLNL. Claire's other activities included starting the LLNL branch of the Institute of Geophysics and Planetary Physics (IGPP) and initiating an effort called the laser guide star, in which a laser lights up sodium atoms in a layer of the upper atmosphere so that specially augmented telescopes can dynamically correct for atmospheric fluctuations and thus sharpen their eye on the universe. Claire is now at U.C. Santa Cruz.
The HDC ablator benefited from needing a much shorter pulse to drive it; hence, it was very well matched to the ICF Program's move to low fill hohlraums. Because HDC's density is about 3.5× that of CH, the first shock has 3.5× less thickness to traverse before it breaks out at the ablator/DT-ice interface. This makes the three-shock pulse for HDC much shorter than the equivalent one for the CH ablator approach. Being out of the limelight allowed the HDC team to perform careful experiments134,135 throughout the duration of the pulse, ensuring good low mode, P2, symmetry throughout, and avoiding any fuel “sloshing” and other symmetry swings in time that could compromise target performance.136
Another lesson, seen throughout this saga, is, again, the role of diagnostics. A wide array of diagnostics were available, each dedicated to a portion of the time history of the implosion, to diagnose the symmetry and allow us to retune to improve it. This too was a long time in development. I recall during the early 90s during the period of executing the NTC, that we presented many of these techniques (and not just in theory, but already tested on Nova) to measure and to ensure time dependent symmetry.137 Opponents of the indirect drive/NIF project claimed that time dependent symmetry would be too difficult to measure and achieve, but we had already considered the problem and had already prepared ways to address it. All these techniques came to the fore when demonstrating time dependent symmetry for the HDC campaign.
Another advantage of the shorter pulse for HDC was that the shorter it was, the easier it would be to lengthen it slightly in order to “push longer” on the capsule and reduce the coast time for improved performance. The HDC showed better stability to perturbations from the tent but did show sensitivity to perturbations from the fill tube. The fill tube's role is to inject the DT gas in the first place, into the center of the capsule, before the DT gas is frozen in place to form an ice shell.
The HDC first operated at “sub scale,” for instance, at a radius of 0.9 mm, not 1 mm. This allowed for more shots without too much worry of NIF laser damage, since it required less incident energy. Before too long, the HDC scale 0.9 capsules were yielding over 50 kJ,138 which was very promising indeed! HDC (also known as diamond) has a crystalline structure, which can be a seed for RTI. Therefore, the first shock must exceed 12 Mbar in order to melt those structures. This naturally limits the HDC approach by forcing a stronger first shock and thus a higher adiabat, α. The way to increase yields, then, was to increase the scale of the target. We will discuss that in Sec. VIII.
Before we begin to describe the excursion that finally brought ignition to fruition, namely, going to larger scale capsules, this would be a good point to look back at all the effort described up to this point and remark on the state of understanding of that progress vis-à-vis achieving ignition. Many of the target designs achieved temperatures of about 5 keV and hotspot ρR products of greater than 0.3 g/cm2. In short, by the conventional Lawson criterion, they were ripe for ignition, but had certainly fallen short in practice. So what was going wrong? I believe that the answer lies in the 3D world in which we live, and not in the 1D picture that underlies the Lawson criterion.
The published works of Springer et al.139 and Patel et al.140 and their co-workers emphasize that if there are 3D perturbations to the implosion, they will lead to 3D thin spots in the confining shell. While this was appreciated in general,141 their work actually calculated the failure due to 3D effects, as in the discussion that follows. When those thin spots expand and balloon, their PdV cooling is enhanced over any 1D average expansion. This extra cooling kills the ignition of the capsule. At minimum radius, the system has d²T/dt² < 0, leading to a fizzle. A thermal instability, namely, the thermal runaway which we call ignition, needs d²T/dt² > 0. This criterion for ignition was developed by myself and my late colleague, Abraham Szoke, for general systems, and has been successfully applied by Springer (see Ref. 13 of Springer) and Patel to the NIF targets. Their work showed that, indeed, these considerations could explain the failure to ignite, despite the 1D and 2D simulations that predicted success. They concluded that to achieve ignition, either better symmetry that cuts down on 3D thin spots or targets with larger ρR for better confinement, and thus more robustness, would be needed. A larger ρR could be achieved with a larger scale capsule.
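The sign test at the heart of this criterion can be illustrated with a toy numerical sketch. The temperature histories below are invented stand-ins (not simulation output); the point is only the finite-difference evaluation of d²T/dt² at the time of minimum radius:

```python
def second_derivative(f, t, dt=1e-3):
    """Centered finite-difference estimate of f''(t)."""
    return (f(t + dt) - 2.0 * f(t) + f(t - dt)) / dt**2

# Toy hotspot temperature histories, with minimum radius placed at t = 0
# (temperatures in keV, time in arbitrary units; purely illustrative shapes):
fizzle = lambda t: 5.0 - 40.0 * t * t    # concave down at t=0: d2T/dt2 < 0, a fizzle
runaway = lambda t: 5.0 + 40.0 * t * t   # concave up at t=0:   d2T/dt2 > 0, ignition

ignites = second_derivative(runaway, 0.0) > 0.0
fizzles = second_derivative(fizzle, 0.0) < 0.0
```

In the actual analyses cited above, the same sign test is applied to simulated (or inferred) hotspot temperature histories rather than to these toy parabolas.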
These same conclusions were reached by Dan Clark using 3D HYDRA “kitchen sink” calculations. These simulations were performed on significant capsule implosions that included the highest yield at the time, performance cliffs, and experiments that assessed repeatability and hydrodynamic scaling. They captured global trends in the NIF implosion data for the neutron yield, neutron down-scatter ratio (DSR), burn weighted ion temperature, and burn width. These gave better agreement than 2D HYDRA simulations and appear in Fig. 14 of the review paper142 by Marinak et al. The close level of agreement for this set of highly significant implosions gave us confidence that these simulations were capturing the important implosion physics, including the burn. These simulations showed that various asymmetry sources were acting in concert to degrade the capsule yields and prevent ignition. They indicated that even if we fixed all asymmetry sources to within the abilities of target fabrication and the laser, the capsule would still not ignite. These simulations made it clear it was imperative that we develop more robust designs, in particular larger scale capsules.
It was roughly in this time frame that colleagues at LANL released a 2019 report143 that predicted “with high confidence” that NIF would never achieve ignition and that a laser about 10× bigger was required. Despite what was, to my mind, no compelling physics reasoning behind this conclusion, this report seemed to carry weight and influence with various review committees. I think it is to the ICF Program's credit that it persevered despite these negative reports (much as it did in the dark days of the low-foot campaign) and calmly pushed forward. While target quality and the afore-mentioned 3D non-uniformities were getting in the way of ignition, and leading to the pessimism of the LANL report, the fact that the requisite T and ρR were being achieved really meant (at the very least, in hindsight) that the program was actually “tantalizingly close” to making rapid progress. (Culturally, it was a great taboo at that time to use that phrase in-house, as if it were a “jinx” to progress.) Another way of saying this is that the metric of ignition, known as “ITFX”117,144 [which stands for “Ignition Threshold Factor (measured) eXperimentally”], was getting quite close to unity.
VIII. CHANGING SCALE, CONQUERING SYMMETRY, PUSHING LONGER, AND ACHIEVING IGNITION
A. Larger scale
Increasing the scale of the capsule is a high leverage route to increasing yield. Yield should scale as a fusion rate per unit mass, ρ⟨σv⟩, multiplied by a confinement time, t, and then multiplied by the mass, ρR^3. Near the hotspot temperature of 4–5 keV, ⟨σv⟩ ∼ T^4, and since ρT is the pressure, P, we get a yield scaling as P^2 T^2 t R^3. The confinement time scales as R/v. A hydro-equivalent145 implosion preserves P and v, so yield scales as T^2 R^4. The arguments for how a larger scale will increase T because of reduced conduction losses result146 in a T ∼ R^(2/7) scaling. Thus, we end up with a yield scaling as R^4.6, or, with S representing scale, S^4.6. All this is a yield that is unenhanced by the resultant alpha heating, which will increase the yields even further.
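As a quick arithmetic check of how these scalings compound (a sketch of the algebra in this paragraph, nothing more):

```python
# Compound the scalings quoted in the text:
#   Y ~ P^2 T^2 t R^3,  with t ~ R/v and hydro-equivalence fixing P and v,
#   gives Y ~ T^2 R^4; then T ~ R^(2/7) from reduced conduction losses
#   gives Y ~ R^(4 + 2*(2/7)).
temperature_exponent = 2.0 / 7.0
yield_exponent = 4.0 + 2.0 * temperature_exponent  # 32/7 ~ 4.57, i.e., the S^4.6 scaling

# Example: growing the capsule radius from 0.9 mm to 1.1 mm (the HYBRID-era step)
scale_up = 1.1 / 0.9
yield_boost = scale_up ** yield_exponent  # ~2.5x, before any alpha-heating enhancement
```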
This strong scaling of yield with scale suggested that the program embark on a campaign of “High Yield Big Radius Implosion Design” (HYBRID).147 Of course, the capsule is sitting in a hohlraum and must be imploded symmetrically, so that there is minimal residual kinetic energy upon stagnation and all the kinetic energy of the imploding shell can be transferred into internal, thermal energy of the assembly. The challenge, then, is to put a larger capsule into roughly the same size hohlraum as before, since a larger hohlraum would simply soak more energy into its larger area walls and inefficiently transfer energy to the capsule. A larger capsule in nearly the same size hohlraum (a smaller “case to capsule ratio”) presents a challenge to providing the needed implosion symmetry (see Fig. 11).
B. Symmetry
The fact that the program was committed to a low density gas fill makes this symmetry problem even more acute. The larger capsule can get in the way of the inner side of the inner beams trying to propagate to the waist of the hohlraum. They already have that challenge, as the outer side of these inner beams (the side farther away from the hohlraum axis) tries to traverse the “gold bubble” coming from the inward expansion of the gold walls illuminated by the outer beams. Debbie Callahan and co-workers published a compendium148 of NIF data under these conditions, which supported the notion that a larger radius capsule and the longer laser pulse (that must come along with a larger capsule) both exacerbate the symmetry challenge and drive the capsule toward the undesirable oblate shape.
There needed to be a way to break out of these constraints, if the HYBRID campaign were to succeed. One method to do so involved a return to invoking CBET and choosing a Δλ to help bolster the inner beam strength by “borrowing” energy from the outer beams. This method was first proven, under these newer low density gas fill conditions, in the HYBRID C campaign that used CH ablator capsules.149 It was then adapted by the HYBRID B and the HYBRID E campaigns that used HDC ablators.150 Another method was to change the hohlraum shape (yes, again invoking ICF's superpower of adaptability). The “I-raum”151 looked somewhat like a capital letter “I” (in Times New Roman font: I). In the places where the outer beams hit, the hohlraum had a cylindrical radius larger than normal. This meant that the gold bubble had longer to ingress before it interfered with the inner beam, thus allowing the longer pulse to have the inner beams pass by the gold bubble's axial position somewhat less impeded. Of course, the I-raum could also use Δλ if it needed to.
C. Pushing longer
Callahan and co-workers followed up on their “symmetry rules” paper with another paper152 that considered some global rules for hohlraum drive. I was proud to help with the research on this aspect. Combining this work with the previous one on symmetry allowed the program to produce a global map of operating space. Plotting case-to-capsule ratio on the y-axis and hohlraum diameter on the x-axis mapped out a narrow band (due to symmetry constraints) showing where to optimize absorbed capsule energy (at fixed laser energy). Relieving the symmetry constraint by invoking CBET widened the acceptable operating space and thus increased the possible absorbed capsule energy. The choice of how to narrow down this available space even further was made by the notion, mentioned several times above, of the importance of “pushing longer” and minimizing coast time. As discussed above, this physics was somewhat equivalent to finding the minimum radius at which to achieve peak velocity. When this metric was applied and overlaid on the previous constraints, it became clear what capsule size, hohlraum size, and expected absorbed capsule energy to use.
In the 2019–2020 time frame, the initial attempts at scaling up did not go smoothly. The HDC capsules were scaled up from a radius of 0.9 mm to a radius of 1.1 mm. The principal impediment to progress was target quality. There were too many voids, inclusions, and pits (VIPs) that served as initial perturbations for the RTI, and target performance suffered. A new batch of capsules of radius 1.05 mm was tried next. Their quality was improved over the previous batch. Moreover, the somewhat smaller radius could allow for a similar laser pulse to provide a “push longer” environment, reducing coast time and improving performance. Yields of order 170 kJ were achieved, which was the same order as the amount of energy absorbed by the capsules. These milestones were achieved in both platforms, the HYBRID-E cylinder with CBET, and in the I-raum. Careful analysis suggested that we had reached a so-called “burning plasma” in which alpha heating was the dominant contributor to the yield performance.141,153,154 In retrospect, it should have been obvious that we were close to ignition. However, we had already been on this long path to ignition for over a decade on NIF, and psychologically most (not all!) of us were not prepared for exactly where, when, and how the next step in progress would be made.
D. Achieving Ignition
A campaign led by Joe Ralph was initiated to develop more efficient hohlraums. A principal tool to do so would be to return to a smaller LEH (used in much earlier NIF experiments). A hohlraum with a smaller LEH would need to have the laser pointing adjusted accordingly. A smaller LEH would be more efficient as there would be less energy lost out of the LEH, allowing for more to be absorbed by the capsule. This more efficient hohlraum would also, critically, allow for a longer pulse (at fixed energy, by having a lower peak power). The lower peak power, in the more efficient hohlraum, would still provide the necessary drive to accelerate the capsule to nearly the same implosion velocity. More importantly, it would allow us to “push longer” on the capsule and minimize coast time. Furthermore, the target fabrication effort provided us with a capsule with a mere 2-μm diameter fill tube, to minimize the effect of higher Z material jetting into the hotspot from that tube. In addition, we were provided with an excellent quality target with regard to VIPs.
This all came together in shot N210808 on August 8, 2021. An order of magnitude increase in yield, 1.35 MJ, was produced. The temperature doubled from the 5 keV achieved by the PdV heating of the implosion to 10 keV achieved by the fusion process itself. This was ignition in the scientific sense, and the Lawson criterion for ignition was exceeded.155 The simulations156 showed that d²T/dt² was positive at minimum radius, so finally the Rosen–Szoke criterion was achieved as well. A capsule gain of order 6 was achieved, and a target gain of 0.7. We were now certainly close to “achieving ignition” by the NAS metric of unity target gain or greater. There was some discussion within the program as to whether to declare that ignition had been achieved. Our director, Kim Budil (wisely, in my opinion), insisted that we stick to the NAS criterion. Since we, in the long run, would need to increase target performance anyway, it should (and would) be only a matter of time before we would achieve a gain greater than unity.
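The gain figures quoted here are simple energy ratios; the following sketch shows the bookkeeping they imply (the laser and absorbed energies below are back-computed from the stated gains, not separately quoted measurements):

```python
# Energies implied by the quoted gains for shot N210808.
yield_MJ = 1.35       # fusion yield of N210808
target_gain = 0.7     # yield / laser energy delivered to the target
capsule_gain = 6.0    # yield / energy absorbed by the capsule

laser_MJ = yield_MJ / target_gain              # ~1.9 MJ of laser light
capsule_absorbed_MJ = yield_MJ / capsule_gain  # ~0.2 MJ absorbed by the capsule
```

The ~0.2 MJ absorbed capsule energy implied here is consistent with the "of order 170 kJ absorbed" scale of the precursor shots discussed above.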
Attempts at repeating this result fell short.157 Those attempts were stymied by either mix stemming from target VIPs or mode-1 asymmetries158 in either capsule shell thickness159 or laser delivery.160 Near ignition, the consequence of every imperfection is amplified.161 On average, the yields for the repeats centered at about half the 1.35 MJ yield of N210808. In retrospect, this result justified the settings chosen for the diagnostics prior to the N210808 shot, which anticipated that lower level of yield. It was clear from these repeats that, just as at Nova, where fulfilling the NTC required the special efforts of a “precision Nova,” we now needed special efforts at a “precision NIF.”
So, the next element of the program, the NIF laser effort, stepped up to the plate. Not only did they supply more precision in the laser, they were also able to exceed the original NIF energy spec and delivered162 a 2.05 MJ, 440 TW pulse. The target was adjusted to have a thicker ablator,2,4 which was better matched to this somewhat longer pulse. This extra thickness would increase the confinement parameter, ρR, and may have also mitigated the degradations caused by whatever mix was happening at the ablator–ice interface, by further insulating the hotspot from that interface.
The first shot did not quite achieve the symmetry needed, but adjustments were made for the second shot. I had full confidence in our design team that they could make this symmetry adjustment successfully. As such, I planned to follow the progress of the shot in real time, but it was delayed into the early morning hours. I made the decision to go to sleep. That was a poor decision, as I was too excited to sleep! I woke up early the next morning to learn the great news: On December 5, 2022, at 1:30 am, shot N221204 (whose countdown sequence began the previous day, on December 4), a lovely round implosion produced 3.15 MJ, and ignition was achieved.1 See Fig. 1 for a comparison of the drives between N210808 and the ignition shot N221204. While there was some delay in reporting the yield by counting the neutrons, it was clear to me right away, by looking at the Dante signal of the x rays coming through the LEH, that a yield of order 3 MJ had been achieved. These data were available immediately. The ignited target reheats5 the hohlraum to a temperature higher than the 300 eV brought on by the original laser that drove the implosion. A 50-year-long effort had come to fruition.
A later repeat attempt at 2.05 MJ, in July 2023, shot N230729, produced 3.9 MJ due to even better target quality. Also of note was the achievement of ignition using only 1.9 MJ of light, by adjusting the shock timing to allow the target to achieve a higher ρR through more convergence. These two shots, on June 23 and October 7, 2023, yielded 1.9 and 2.4 MJ, respectively. Details and official yields will be published in the near future. In addition, there have been shots using even more NIF energy, 2.2 MJ. The first shot ever tried with that increased energy ignited and yielded 3.5 MJ. A second shot, with attempts to fix some asymmetries, yielded 5.2 MJ. These results and their official yields should be published shortly.
In short, ignition accomplished! A pictorial summary of all of the above description appears in Fig. 10.
IX. THE FUTURE
Applications of ignition are already being discussed. As mentioned above, the reheating of the hohlraum due to the ignited capsule already exceeds the heating of the hohlraum due to the original laser. This opens up greater regions of parameter space for HEDP studies.
There is much work yet to be done to deepen our understanding of our results, to date, to help us move into the future.
Targets at the same scale can, in principle, perform better if they can be driven at lower adiabat to higher convergence (and thus higher confinement parameter, ρR). Is convergence in present experiments limited by low mode asymmetry, by shock mistiming, by RTI and its concomitant fuel ablator mix,163 or by some combination of all of these? Much work remains to illuminate this issue.
Better predictive capability is needed with respect to LPI, drive, and symmetry. The difficulty in quantifying LPI affects the coupling efficiency of the hohlraum, and thus drive and symmetry. LPI can produce capsule preheat and can threaten laser damage if too much SBS scatters back into the lenses. CBET affects symmetry, and its sources of saturation must be better understood. Other very difficult-to-quantify issues, such as non-LTE physics, can affect drive and symmetry. Steady progress is under way in improving our NLTE models.
This paper has not gone into detail on the efforts to improve our hohlraum modeling. However, it behooves me to mention the pioneering work of Jim Hammer and Steve Maclaren in devising a very useful platform, the “view factor hohlraum,” which allows better experimental access to hohlraum dynamics. This platform has been highly useful in the endeavor to reach a deeper understanding of hohlraum plasmas, from its maiden voyage a decade ago164 until this day.165 Better predictive capability can shorten the iteration times for experimental campaigns and is also needed to bolster the credibility of plans for future facility upgrades. Continued innovation in diagnostic techniques120 and in code development142 will surely aid in all of this.
Pushing onwards toward higher gains166 can be done along two paths: with the same energy and with higher energy upgrades to the laser.
With the same driver energy, we are pursuing a variety of hohlraums. The frustraum167 has a lower surface-to-volume ratio than cylinders, can thus be more efficient, and can thus drive larger capsules. The I-raum, mentioned above, can be combined with Δλ and CBET to perhaps allow for bigger capsules. Mag-raums, which are hohlraums embedded in a B field, can lead to igniting targets at perhaps even lower energy, because heat conduction out of the hotspot is inhibited.168,169
Higher convergence, via lower adiabat along with more hydro stability, is the path being pursued by the SQ-n approach mentioned earlier. Alternative ablators, such as B4C, which are created by amorphous layering, may allow for a lower first shock (no need to melt the HDC crystal structure) and a lower adiabat. Perhaps a reevaluation of all the work that has been done on Be ablators (design-wise and experimental) would prove fruitful. Of course, we can also continue on the HYBRID-E path with thicker ablators and by improving low mode symmetry.170 As mentioned above, another slight up-tick in available NIF energy (to 2.2 MJ) has just been provided. The first try, despite an imperfectly shaped implosion, still provided the second highest yield to date. The second try improved the shape somewhat, and a 5.2 MJ yield ensued.
There are serious plans to upgrade NIF to the 2.6–3 MJ range. Yields are expected to increase into the many tens of megajoules.166 As just mentioned, a better predictive capability for LPI, drive, and symmetry will strengthen the credibility of planning for such an upgrade. Moreover, experiments are planned on the current size NIF to investigate what LPI to expect in the bigger hohlraums designed for the 2.6–3 MJ driver scale.
The lists above should not be considered exhaustive (exhausting, maybe, but not exhaustive). Much more innovation can and should happen in capsule and in hohlraum design. The world is not standing still and simply observing the progress at NIF. The LMJ effort at Bordeaux is pursuing rugby-shaped hohlraums.78 The SGIII laser in China is using hohlraums with an eightfold symmetry.171 There are world-wide efforts in direct drive,22 fast ignition,58 and shock ignition,172 not to mention several IFE startups, each with its own scheme.
Innovations in target fabrication, diagnostic techniques, and broad-band laser technologies are also under way. All of this can help define a minimum size driver that can lead to yields well in excess of 100 MJ, necessary for the stewardship mission as well as the IFE applications. This is simply a wonderful time to be involved in ICF research.
X. LESSONS LEARNED
As I hope I have made clear in this short history of the long path to ignition, ignition was achieved after many decades of advances in physics design, simulation codes, lasers, optics, targets, and diagnostics. It has been a bold journey into extreme physics, engineering, and technology that required the long-term persistence of an extremely talented workforce. It required that all participants take the “long view” of how all of this was to be accomplished. In particular, it needed the long-term support of DOE, Congress, and the National Labs. In addition, I think I have demonstrated that, every step of the way, the LLNL indirect drive effort was aided by national and international collaboration and teamwork.
So, what are some of the lessons learned along the way? I list a few as follows:
Seek out and heed wise external counsel. This is somewhat of a tautology, because there were probably a few instances of un-wise counsel as well. An outstanding example of wise counsel, to my mind, is the advice not to pursue a 10-MJ laser as a follow-on to Nova, but to take the riskier, but at least affordable, path of a 2-MJ NIF. Had we not heeded that, we would probably not have achieved ignition, because we would still be waiting for the 10 MJ facility to be authorized. Other examples include our two outside expert summer study groups, who were correct to suggest both “push longer” and the move to low density gas filled hohlraums to minimize LPI.
Optimize on 3D stability, not 1D robustness. The world is 3D and so are ICF implosions. The high foot approach and its successors helped bring target performance into the realm of the understandable, because of its stabilizing features. While these stabilizing, higher adiabat, systems have a reduced “upside” with respect to high gain, they at least give us a basis upon which to build future efforts at lowering the adiabat. As also described herein, this same philosophy of more robustness, but at the cost of “lower gain,” led us to the first successful demonstration of x-ray lasing in the laboratory.
Diversify approaches: Having an independent development path for HDC (vs the mainline CH ablator approach) showed wisdom, especially in hindsight when we needed shorter pulses in low density gas fill hohlraums, and HDC was better matched to that. Too much diversity dilutes the efforts, so a good balance between mainline and alternative approaches must be maintained.
Be prepared to take advantage of surprises: The near vacuum exploding pusher experiment showed a vanishingly small level of LPI. To its credit, the program pivoted to this approach, and had to overcome its own (meta) inertial confinement to do so. While it is one thing to have the ability to be “light on one's feet,” it is entirely another to actually choose to do so.
New diagnostics and new code packages are needed for progress: Looking back over the nearly 50 years I have been engaged in this research, I have seen innumerable times when a new capability (and especially a new diagnostic) came on-line that opened our eyes to physics that was happening and that we could not have imagined without it.
When analyzing data, include all the details: I have seen this many times: The only way to match data with simulations is to put all the relevant details into the simulations. Two examples come to mind: Nathan Meezan's zoning up of the solid holder at the waist of the gas bag experiments on Omega, and Steve Maclaren's zoning up of the storm window LEH, which enabled Drew Higginson's explanation of symmetry in near vacuum hohlraums involving CBET in that extra LEH plasma.
Much detailed attention and support from upper Lab management is needed: I have seen this going back nearly 50 years, from Roger Batzel and Mike May, extending through the special efforts (both political and institutional) of Bruce Tarter during NIF construction, to our latest two directors, Bill Goldstein and Kim Budil.
Utilize a world-wide world-class diverse workforce: The world is a big place and scientists with sharp minds and superb skills can come from anywhere. Getting a declassification of indirect drive ICF approved opened the door for our present extremely talented and dedicated workforce from all over the world.
Above all: Exploit ICF's “superpower”: Flexibility and the ability to innovate. As described in this discourse, the original NIF point design had to be changed in every single aspect for ignition to be achieved. This inherent flexibility of ICF allowed us to respond to whatever hard truths Mother Nature threw in our way. I believe that we must continue to utilize this superpower in order to make progress that will lead to much higher yields in the future.
ACKNOWLEDGMENTS
I must express my appreciation to the current ICF Program leadership. They, and the technical teams that they lead, are the true heroes of this story, as they are the ones that ultimately made ignition a reality: R. Town, N. Landen, J. Moody, B. Spears, D. Hinkel, W. Farmer, A. Kritcher, C. Weber, M. Marinak, A. Pak, T. Chapman, K. Humbird, S. Ross, O. Hurricane, K. Raman, V. Smalyuk, G. Brunton, B. Van Wonterghem, A. Nikroo, A. MacKinnon, B. Woodworth, and M. Stadermann.
This long path counted on the wise guidance and support of LLNL's lab directors throughout the years, and we are grateful for it: J. Foster, M. May, R. Batzel, J. Nuckolls, B. Tarter, M. Anastasio, P. Albright, B. Knapp, G. Miller, B. Goldstein, and K. Budil. Similarly, we thank the LLNL associate directors responsible for the overall program, including R. Woodruff, R. Fortner, B. Goodwin, C. Verdon, K. Budil, B. Wallin, M. Herrmann, and T. Arsenlis. On a more “local” level, we appreciate the dedicated efforts of the actual ICF Program leaders over the years: J. Emmett, H. Ahlstrom, L. Coleman, E. Storm, J. Davis, M. Campbell, J. Lindl, J. Kilkenny, B. Hammel, E. Moses, J. Wisoff, B. MacGowan, J. Atherton, J. Edwards, M. Herrmann, and R. Town, as well as our partners at DoE's NNSA throughout the years.
It is, sadly, the nature of things that if 50 years pass, so will some of the pioneers who toiled and excelled in this field. So, in memoriam, we mention R. Thiessen, Y. Pan, B. Still, O. Jones, A. Szoke, R. Kidder, S. Colgate, J. Murray, H. Powell, J. Trenholme, L. Foreman, C. Hendricks, N. Ceglio, J. Koch, J. Grun, H. Baldis, H. Rose, J. Albritton, D. Liberman, S. Maxon, R. Ratowsky, A. Simon, K. Estabrook, J. Denavit, T. Shepard, M. Feit, and E. Burke.
My many colleagues who have been instrumental in so much of the work described here are too numerous to mention, but I sincerely thank them for their expertise, and their friendship. As this paper is a compendium of history, and I could not have written it without expert contributions of memories and insights from my colleagues, I truly appreciate their input, and thus, I wish to acknowledge J. Lindl, D. Larson, M. Campbell, D. Hinkel, O. Hurricane, J. Edwards, N. Meezan, J. Kilkenny, D. Clark, J. Moody, J. Kline, and N. Landen. The suggestions and clarifications of all three anonymous referees are also hereby acknowledged with gratitude.
As mentioned earlier, LLNL is not alone in this achievement. I thank our colleagues at LANL, UR/LLE, SNL, NRL, GA, AWE, CEA, and MIT for their valuable contributions to this effort. The current LLNL ICF staff in target physics, code development, NIF facility, target fabrication, and diagnostic development all played a role in this historical achievement, and I salute them.
Finally, a debt that we cannot repay goes to our families and loved ones, who support us all, every day.
This work was performed under the auspices of U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of the author expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.
AUTHOR DECLARATIONS
Conflict of Interest
The author has no conflicts to disclose.
Author Contributions
Mordecai D. Rosen: Conceptualization (lead); Writing – original draft (lead).
DATA AVAILABILITY
Data sharing is not applicable to this article as no new data were created or analyzed in this study.