We present a new analysis methodology that allows for the self-consistent integration of multiple diagnostics including nuclear measurements, x-ray imaging, and x-ray power detectors to determine the primary stagnation parameters, such as temperature, pressure, stagnation volume, and mix fraction in magnetized liner inertial fusion (MagLIF) experiments. The analysis uses a simplified model of the stagnation plasma in conjunction with a Bayesian inference framework to determine the most probable configuration that describes the experimental observations while simultaneously revealing the principal uncertainties in the analysis. We validate the approach by using a range of tests including analytic and three-dimensional MHD models. An ensemble of MagLIF experiments is analyzed, and the generalized Lawson criterion χ is estimated for all experiments.
I. INTRODUCTION
Inertial confinement fusion (ICF) is the process by which fusion fuel is compressed to high density and temperature with the goal of producing an abundance of thermonuclear fusion reactions before the fuel disassembles.1,2 A variant of this approach, known as magneto-inertial fusion (MIF), uses an embedded magnetic field to ease some of the requirements imposed by traditional ICF.3 In either ICF or MIF, accurately characterizing the plasma conditions during the period of neutron production (i.e., stagnation) is of paramount importance for understanding target performance, assessing the impact of target modifications, evaluating the relative importance of sources of degradation, and ultimately for charting progress toward high yield. Fusion experiments are highly integrated with many interdependent processes occurring simultaneously, making cause and effect difficult to quantify. For this reason, multiphysics simulations are used to design and, in many cases, interpret the experimental results. Unfortunately, these simulations are extremely complex and expensive to perform. Furthermore, without a detailed understanding of the state of the plasma in the experiment, it is difficult to adequately constrain the simulations, making progress difficult.
The magnetized liner inertial fusion4–6 (MagLIF) concept is an approach to MIF that is being pursued on the Z machine at Sandia National Laboratories.7 In this approach, a pre-imposed magnetic field is used to insulate a hot laser-heated fuel volume from an imploding cylindrical metal liner. If the fuel is kept sufficiently pure and is able to retain enough of its energy during the implosion, it will reach conditions where significant thermonuclear burn can occur. In order to understand the performance of a given target and how it compares to our expectations, we must understand the state of the fuel during neutron production. In these experiments, many diagnostics are fielded to measure various physical quantities relating to the plasma conditions of the experiment. Unfortunately, most quantities of interest must be constrained using multiple diagnostic outputs. As an example, the fuel pressure cannot be directly measured, only inferred through measurements of other quantities like temperature and density. Furthermore, the inference of pressure will depend on the level of contaminants in the fuel, which contribute extra electrons; this contaminant level is, itself, a difficult parameter to estimate. The measurement of pressure, therefore, requires information from many diagnostics and is inherently a multi-objective inference.
For these reasons, we are pursuing an inversion technique based on Bayesian inference that uses multiple different diagnostic signatures to simultaneously constrain multiple quantities of interest. In this approach, sometimes referred to as data fusion or data assimilation,8 an appropriately parameterized model is used to generate synthetic diagnostic data, which are compared with all of the experimental data simultaneously, incorporating known constraints on parameters. This method allows us to estimate the most likely configuration in the N-dimensional parameter space, the associated uncertainties, and correlation between both data and inputs.
The remainder of this paper is organized as follows: In Sec. II, a brief introduction to Bayesian inference is given along with the details of the algorithm developed for this application. In Sec. III, the physics model is introduced along with the formalism used to form each of the synthetic diagnostics. In Sec. IV, we present a series of validation tests, where data are produced using analytic models and multiphysics simulations to demonstrate the utility of the technique. In Sec. V, we present the analysis of real experimental data, and finally, in Sec. VI, we discuss the outlook for using this methodology for experimental data analysis as well as planned extensions of the physics model to better capture experimental nuances.
II. PARAMETER ESTIMATION VIA BAYESIAN INFERENCE
A. Bayesian inference
In recent decades, Bayesian inference has become a popular and versatile method used for data analysis in a wide range of fields.9 It is used extensively in neuroscience,10,11 geophysics,12,13 particle physics,14,15 astrophysics,16,17 dynamic material properties,18 high energy density physics (HEDP) experiments,19 and to determine tokamak equilibria and current profiles in magnetic confinement fusion research,20,21 among other applications. The application of this approach to ICF is fairly new but is quickly gaining momentum.22 Recent efforts have used Bayesian methods to analyze implosion performance on the NIF23,24 to study energy flow in direct drive implosions on the OMEGA laser25 and to infer fuel magnetization in MagLIF implosions on the Z facility.26
The basic problem we wish to solve is the following: With a model describing the physics and diagnostic observables of interest, we wish to find the set of model parameters $\mathbf{m}$ that best matches a set of observables $\mathbf{x}$ with associated uncertainties $\boldsymbol{\sigma}$. This model could, in principle, be anything from a multiphysics simulation to a reduced model commonly used for data interpretation. Bayesian statistics provides us with a formalism to accomplish this task in a rigorous and quantitative way. In the Bayesian worldview, the model can be viewed as a hypothesis about the physics describing the system. We have some degree of certainty that this hypothesis accurately describes our system given some prior knowledge about the parameters. This quantity is called the prior and can be written mathematically as $P(\mathbf{m}|A)$ (read as the probability of $\mathbf{m}$ given A), where A encapsulates our background knowledge about the system including all physical constraints and assumptions. In the Bayesian interpretation, we seek to find the distribution of $\mathbf{m}$ that describes the data by evaluating the probability distribution of agreement with the observations over the entire space of $\mathbf{m}$. This function is written mathematically as $P(\mathbf{x}|\mathbf{m}, A)$ and is known as the likelihood. Formally, it is the probability of observing the data, $\mathbf{x}$, given a set of model parameters, $\mathbf{m}$, and our background assumptions A.
Typically, we construct a model for the probability of observing $\mathbf{x}$ given a particular $\mathbf{m}$ (the likelihood) and an expression encapsulating our prior assumptions from knowledge of the physical system and measurements. What we want is an expression giving us the probability of a particular $\mathbf{m}$, given our observations and assumptions. Bayes' theorem states

$$P(\mathbf{m}|\mathbf{x}, A) = \frac{P(\mathbf{x}|\mathbf{m}, A)\, P(\mathbf{m}|A)}{P(\mathbf{x}|A)}, \qquad (1)$$
where the term on the left hand side (known as the posterior) is precisely what we desire. The term in the denominator is known as the evidence and for our purposes is simply a normalization constant. (When performing model selection, this term is essential.) A fundamental tenet of Bayesian inference is that it is impossible to analyze data without making assumptions. While this is always true, it is often not acknowledged or stated explicitly in practice. The Bayesian formalism requires that one write down these assumptions and explicitly define their relationship with the data, embodied in the term A above and the model being used.
The likelihood is determined by the problem at hand. For example, if the problem is to estimate the number of counts measured on a radiation detector, then one would use the Poisson distribution as the likelihood function. If the problem is to estimate the probability of flipping N heads over the course of M flips, then one would use the binomial distribution. In our case, and indeed in most data analysis applications, we use the normal distribution as our likelihood. It can be expressed as

$$P(\mathbf{x}|\mathbf{m}, A) = \prod_i \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left[-\frac{\left(x_i - f_i(\mathbf{m})\right)^2}{2\sigma_i^2}\right]. \qquad (2)$$

This expression states that the likelihood is maximized where the function $f_i(\mathbf{m})$ equals $x_i$ and drops off with a standard deviation equal to the uncertainty in the data, $\sigma_i$. The functions $f_i(\mathbf{m})$ are the forward models that describe the mapping from model parameters to observations. In our case, the $x_i$ are the diagnostic measurements, and the $f_i(\mathbf{m})$ are the synthetic diagnostic models used to calculate the measurements from a given hotspot configuration. We note that the product form of the likelihood above amounts to an assumption that the errors are independent with zero mean.
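For concreteness, the log of Eq. (2) can be assembled from a list of forward-model callables. The following minimal Python sketch (the function names and signatures are illustrative, not the production implementation described in Sec. II B) shows the construction:

```python
import numpy as np

def log_likelihood(m, data, sigma, forward_models):
    """Log of the Gaussian likelihood in Eq. (2), assuming independent,
    zero-mean errors. `forward_models` is a list of callables f_i(m) that
    return the synthetic value of observable i for parameter vector m."""
    logL = 0.0
    for x_i, s_i, f_i in zip(data, sigma, forward_models):
        r = (x_i - f_i(m)) / s_i                      # normalized residual
        logL += -0.5 * r**2 - np.log(np.sqrt(2.0 * np.pi) * s_i)
    return logL
```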
B. Methodology
The task is to estimate the posterior distribution, $P(\mathbf{m}|\mathbf{x})$, where we will drop A for simplicity. One has simple analytic expressions for the prior probability, $P(\mathbf{m})$, and the probability of the observations given the model, $P(\mathbf{x}|\mathbf{m})$, both given by normal distributions. The numerical challenge is introduced by the forward model, $f(\mathbf{m})$, in Eq. (2). We adopt a two-step procedure, where the first step estimates a Gaussianized approximation of the posterior distribution, and the second step samples, using a classic Metropolis method,27 the nonlinear aspects of the posterior distribution starting from the maximum a posteriori (MAP) model and making proposals using the covariance matrix estimated by the first step.
Many of the Gaussians are truncated so that the model parameters are restricted from unreasonable or nonphysical values, equivalent to the Tobit model construction in applied statistics.28 If a value is proposed that is outside the bounds, the value is set to the bound before the objective and derivatives are calculated. This is equivalent to there being a finite probability of being on the boundary.
For the first step, a Levenberg–Marquardt29,30 (LM) least squares optimization is done using finite differences to estimate the derivatives. Multiple optimizations can be done starting from different starting points chosen from the prior distribution. This is to ensure that if there are multiple modes, they are enumerated.
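A minimal sketch of this multi-start step is shown below, using SciPy's least_squares in place of the custom LM implementation (SciPy's "lm" mode does not support bounds, so the bounded "trf" variant stands in for the truncated-Gaussian treatment described above; prior terms would be appended as extra residuals to obtain a true MAP estimate):

```python
import numpy as np
from scipy.optimize import least_squares

def weighted_residuals(m, data, sigma, forward_models):
    # Residuals whose sum of squares is the chi-squared part of -2 log L;
    # prior terms can be appended as additional residuals for a MAP estimate.
    return np.array([(x - f(m)) / s for x, s, f in zip(data, sigma, forward_models)])

def multistart_map(draw_from_prior, n_starts, bounds, data, sigma, forward_models):
    """Run bounded least squares from several prior draws to enumerate modes."""
    fits = []
    for _ in range(n_starts):
        m0 = draw_from_prior()                        # random start from the prior
        fits.append(least_squares(weighted_residuals, m0, bounds=bounds,
                                  method="trf", args=(data, sigma, forward_models)))
    return sorted(fits, key=lambda f: f.cost)         # lowest cost ~ MAP candidate
```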
The prior distributions are not based on strong knowledge and serve only as weak guidance to keep the inversion away from nonphysical and unreasonable extremes. Unfortunately, these priors can induce a bias on the results. We implemented an outer loop on the inversion, where the mean of the prior for the next iteration is set to the MAP of the previous posterior distribution, but the covariance is set to the covariance of the starting prior. It has been shown that this is nearly equivalent to having a uniform prior, effectively removing bias induced by the prior, and has rapid convergence, converging within two to three iterations.31 This outer loop is also used to optimize for the uncertainties in the first step. Because the sigmas appear both in the exponent (quadratic term) and in the logarithmic prefactor of the log-likelihood, the optimum value for the sigmas is determined by the balance between these two factors. For the purposes of this study, only the initial Gaussian LM step is used to estimate the posterior, as testing found little benefit to performing the Metropolis sampling. See the Appendix for examples comparing the posterior found with each method for justification.
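The outer loop can be expressed compactly; in the sketch below, run_lm is a hypothetical stand-in for the Gaussianized LM step, returning a MAP estimate and posterior covariance:

```python
def recentered_inference(run_lm, prior_mean, prior_cov, n_iter=3):
    """Re-center the prior on the previous MAP while keeping the original
    prior covariance, approximating a uniform (bias-free) prior."""
    map_est, post_cov = run_lm(prior_mean, prior_cov)
    for _ in range(n_iter - 1):          # typically converges in 2-3 iterations
        map_est, post_cov = run_lm(map_est, prior_cov)
    return map_est, post_cov
```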
A few more words need to be said about the practical application of Eq. (2) when there are multiple measurements and some of those measurements are one-dimensional (1D) or two-dimensional (2D) signals. For computational efficiency, the signals are sub-sampled so that they are not significantly oversampled, but we do not push the Nyquist sampling limit. An estimate is then made of the number of degrees of freedom, or independent data points, in the signal. The likelihood is then scaled so that it takes the correct value when the deviations of the synthetic forward model from the data are comparable in size to the expected uncertainty.
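One plausible implementation of this rescaling (the estimator for the number of independent points is left to the user and is an assumption of this sketch) is:

```python
import numpy as np

def scaled_signal_loglike(y_data, y_model, sigma, n_dof):
    """Gaussian log-likelihood of a sampled 1D signal, rescaled so that it
    carries n_dof effective degrees of freedom rather than len(y_data)."""
    n = y_data.size
    chi2 = np.sum(((y_data - y_model) / sigma) ** 2)
    return -0.5 * chi2 * (n_dof / n)   # deviations of ~sigma contribute ~n_dof/2
```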
The algorithm is written in Python and uses an extended version of pyMC2,32 where LM least squares optimization, message passing interface (MPI) parallelized multiple start optimization and Markov Chain Monte Carlo (MCMC) chains, and a MCMC sampler starting from the LM MAP using a step size given by the LM covariance have been added. The program has both a graphical user interface (GUI) and command line interface (CLI) enabled by the Traits33 and TraitsUI34 packages.
III. THE PHYSICS AND FORWARD MODELS
In this section, we will detail the models used to make inferences about plasma conditions in MagLIF experiments. In Sec. III A, we describe the physics that goes into our model of the stagnated plasma, or hotspot. In Secs. III B and III C, we detail the models that are used to compute x-ray and neutron emission from the hotspot, respectively. Finally, in Sec. III D, we discuss the forward diagnostic models. These models take the x-ray and neutron emission as inputs and return synthetic diagnostic signatures.
A. The hotspot model
In order to determine the plasma conditions at stagnation, we have constructed a simplified model of the hot, neutron-producing plasma, i.e., the hotspot. The hotspot is defined by a series of one-dimensional, isobaric cylinders (slices), illustrated in Fig. 1(a), each with height $\delta h$ and radius $R$. The number of slices used to approximate the experiment is a parameter that can be varied, but for the purposes of estimating bulk stagnation metrics it is set to N = 3. This allows gross axial variations to be captured in the averages reported but avoids issues with overfitting of the high frequency axial variations and instability in the optimization algorithm. As mentioned, each slice is isobaric, meaning the pressure is a constant function of radius, but each slice can have different pressures. Each slice is defined by six parameters: the height $\delta h$, the fuel radius $R$, the fuel pressure $P$, the central fuel temperature $T_c$, the mix fraction $f_{\mathrm{mix}}$, and the liner areal density $(\rho R)_\ell$.
One final parameter, the burn duration $\tau$, is taken as a constant for all slices and is directly measured using x-ray radiation power detectors. The temperature parameter is defined as the central, on-axis temperature in a radially varying profile. The ion and electron temperatures are assumed to be equal. The mix fraction is assumed to be radially uniform. Finally, the hotspot is surrounded by a liner. In this model, we assume the liner to be uniform and only characterized by its areal density, $(\rho R)_\ell$. These parameters are illustrated in Fig. 1(b).
In the isobaric approximation, we assume that the temperature varies as a function of radius according to a power-law,35,36

$$T(r) = \left(T_c - T_w\right)\left[1 - \left(\frac{r}{R}\right)^{\alpha}\right] + T_w,$$

where α is the power-law exponent, $T_c$ is the temperature at r = 0, $T_w$ is the temperature at the liner wall, and R is the hotspot radius. Following McBride and Slutz,35,36 we fix α and $T_w$ to sensible values taken from simulations, so that the temperature profile is characterized by a single free parameter, $T_c$. By enforcing the isobaric condition, the density is then determined as $n_i(r) \propto P/T(r)$. This allows us to replace the density with pressure and pull it out of any subsequent volume integrals, which simplifies the calculations and emphasizes the prominence of the fuel pressure as a fundamental stagnation parameter. To do this, we write

$$P = n_i\left(1 + \bar{Z}\right)T,$$

where $n_i = \sum_s n_s$ is the total ion density and $\bar{Z} = \sum_s f_s Z_s$ is the average charge of the plasma, with s denoting all plasma ion species, including the fuel. In the above expression, we have assumed that the ion and electron temperatures are equal, a good assumption in MagLIF experiments.4
The model is formulated such that multiple species can be included with different radially varying profiles with the constraint that the sum of all species concentrations, including that of the fuel, must equal to one. For this study, we adopt the simplest mix profile and assume a uniform concentration across the hotspot. In this approximation, the mix fraction is constant, so the absolute amount of a mix species varies with the density, increasing toward the outer radius of the hotspot.
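The slice thermodynamics can be summarized in a few lines; the sketch below uses the power-law profile and isobaric closure written above (the profile form is our reconstruction), with temperatures in energy units so that no Boltzmann constant appears:

```python
import numpy as np

def temperature(r, Tc, Tw, R, alpha):
    """Power-law radial profile: T(0) = Tc on axis, T(R) = Tw at the wall."""
    return (Tc - Tw) * (1.0 - (r / R) ** alpha) + Tw

def ion_density(r, P, Tc, Tw, R, alpha, zbar):
    """Isobaric closure n_i(r) = P / [(1 + Zbar) T(r)]; P is an energy density."""
    return P / ((1.0 + zbar) * temperature(r, Tc, Tw, R, alpha))
```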
B. X-ray emission
With the appropriate expression for the radial temperature profile determined, we can calculate the x-ray emission from each slice of the hotspot. Following Ma et al.37 and Epstein et al.,38 we write the time integrated x-ray emissivity, which for simplicity we will call the emissivity, as

$$j(h\nu, T) = \tau\, e^{-(\rho R)_\ell\,\kappa_\ell(h\nu)}\,\bar{g}_{ff}\sum_s f_s\, j_s(h\nu, T),$$

where in this expression, $\kappa_\ell$ is the liner opacity, $\bar{g}_{ff}$ is the free–free Gaunt factor, and $f_s$ and $j_s$ are the fraction and emissivity of each species, s. The Gaunt factor is a slowly varying function of temperature and photon energy; we adopt the approximate form of Ref. 38. Assuming line emission is negligible, the emissivity is the sum of the free–free and bound–free emissivities, $j_s = j_s^{ff} + j_s^{bf}$, with

$$j_s^{ff}(h\nu, T) = C_{ff}\, n_e\, n_s\, Z_s^2\, T^{-1/2}\, e^{-h\nu/T},$$
$$j_s^{bf}(h\nu, T) = C_{bf}\, n_e\, n_s\, Z_s^4\, \frac{E_{Ry}}{T}\, T^{-1/2}\, e^{-h\nu/T},$$

where $C_{ff}$ is the free–free emission coefficient, $C_{bf}$ is the bound–free emission coefficient, and $E_{Ry}$ is the Rydberg constant. To simplify the expression, we define

$$\epsilon_s(h\nu, T) \equiv \frac{j_s(h\nu, T)}{j_D(h\nu, T)},$$

where $j_D$ is the emissivity of deuterium (with the density dependence factored out when the isobaric condition is invoked below). Invoking the isobaric condition to replace the density, we write the total emission as

$$E(h\nu) = \tau\, e^{-(\rho R)_\ell\,\kappa_\ell(h\nu)} \int_V \left[\frac{P}{\left(1+\bar{Z}\right)T(r)}\right]^{2} j_D\big(h\nu, T(r)\big)\sum_s f_s\, \epsilon_s\big(h\nu, T(r)\big)\, dV, \qquad (10)$$
where again, the summation is over all plasma species (fuel and contaminants). The above expression is plotted in Fig. 2 for different temperatures and two different Be mix fractions along with a more accurate calculation using SCRAM39 for comparison, showing that the analytic expression used here is reasonably accurate.
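The radial quadrature behind Eq. (10) can be sketched numerically as follows; the free–free-like spectral shape and the Z² species weighting are simplified stand-ins for the full emissivity model:

```python
import numpy as np

def slice_emission(hv, P, Tc, Tw, R, alpha, zbar, tau, rhoR_liner, kappa, species, dh):
    """Spectrally resolved x-ray emission from one isobaric slice.

    hv      -- photon energy grid (keV); temperatures also in keV
    kappa   -- liner opacity tabulated on hv (cm^2/g)
    species -- list of (fraction, Z) pairs for fuel and contaminants
    """
    r = np.linspace(1e-6, R, 200)                        # radial grid
    T = (Tc - Tw) * (1.0 - (r / R) ** alpha) + Tw        # power-law profile
    n = P / ((1.0 + zbar) * T)                           # isobaric ion density
    zweight = sum(f * Z**2 for f, Z in species)          # simplified mix weighting
    shape = np.exp(-hv[None, :] / T[:, None]) / np.sqrt(T)[:, None]
    integrand = (n**2 * zweight * 2.0 * np.pi * r * dh)[:, None] * shape
    return tau * np.exp(-rhoR_liner * kappa) * np.trapz(integrand, r, axis=0)
```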
C. Neutron emission
The neutron output can be calculated in the same manner as the x-ray output. In this case, the expression for the number of fusion reactions per unit volume (neutron emissivity) is

$$\frac{dN}{dV} = \tau \sum_{j,k} \frac{f_j f_k}{1 + \delta_{jk}}\, n_i^2\, \langle\sigma v\rangle_{jk},$$

where $f_j$ and $f_k$ are the concentrations of species j and k, $\delta_{jk}$ is the Kronecker delta, and $\langle\sigma v\rangle_{jk}$ is the Maxwell-averaged fusion reactivity for species j and k, which is a function of temperature.40 Again, invoking the isobaric condition, we get

$$\frac{dN}{dV} = \tau \sum_{j,k} \frac{f_j f_k}{1 + \delta_{jk}} \left[\frac{P}{\left(1+\bar{Z}\right)T(r)}\right]^{2} \langle\sigma v\rangle_{jk}\big(T(r)\big).$$

We also wish to calculate the neutron spectrum, which is used to compute synthetic neutron time-of-flight signals, a measurement that is sensitive to ion temperature. We can define a neutron spectral function $Q(E_n, T)$ with a normalized integral, $\int Q(E_n, T)\, dE_n = 1$. Then the spectrum can be written as

$$\frac{dN}{dE_n} = \int_V Q\big(E_n, T(r)\big)\, \frac{dN}{dV}\, dV. \qquad (13)$$
The above expression simply weights the spectrum produced at each location by the emissivity. Integrating the above expression over neutron energy results in the total neutron yield. For this model, we use the exact, relativistic expression for the neutron spectrum derived by Ballabio et al.41 This model assumes a Maxwellian plasma. While z-pinch driven fusion schemes often exhibit non-Maxwellian behavior,42,43 the MagLIF concept has demonstrated that the neutron spectrum and yields are thermonuclear in nature.5,44
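A corresponding sketch for the neutron output of a single slice follows; sigv is a user-supplied Maxwell-averaged DD reactivity fit (e.g., a parameterization from Ref. 40), which is an assumption of this example:

```python
import numpy as np

def slice_dd_yield(P, Tc, Tw, R, alpha, zbar, f_d, tau, dh, sigv):
    """DD neutron yield from one slice via the isobaric radial quadrature.
    The factor 1/2 is the Kronecker-delta term for like-particle reactions."""
    r = np.linspace(1e-6, R, 200)
    T = (Tc - Tw) * (1.0 - (r / R) ** alpha) + Tw
    n_d = f_d * P / ((1.0 + zbar) * T)             # deuterium number density
    rate = 0.5 * n_d**2 * sigv(T)                  # reactions per volume per time
    return tau * np.trapz(rate * 2.0 * np.pi * r * dh, r)
```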
D. Synthetic diagnostics
The various synthetic diagnostic outputs used in the analysis are formed by taking different spatial and spectral integrals over the expressions for the x-ray and neutron emissivities in Eqs. (10) and (13), respectively. The diagnostics used in this analysis are the filtered x-ray yields, DD neutron yield,45 neutron time-of-flight spectrum (nTOF), multi-channel filtered time integrated pinhole camera (TIPC),46 and the spherical crystal imager.5
The neutron yield is obtained simply by integrating Eq. (13) over volume and neutron energy. The radiated x-ray energy is measured using filtered x-ray power detectors, typically photoconducting diamonds (PCDs),47 which are integrated in time to obtain the absorbed energy. These measurements are integrated over the whole volume and can be expressed as

$$S_{\mathrm{PCD}} = \frac{\Omega}{4\pi} \sum_n \int_0^{\infty}\!\!\int_{V_n} A(h\nu)\, F(h\nu)\, \frac{dE}{dV}\, dV\, d(h\nu),$$

where Ω is the solid angle subtended by the detector, A is the detector response including x-ray absorption, F is the filter transmission, $dE/dV$ is the emission per unit volume from Eq. (10), and $V_n$ is the volume of the nth slice. The summation is over all slices characterizing the stagnated fuel column. Typically, three PCDs with Kapton filters of three different thicknesses are fielded on a shot.
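Folding the slice emission through a given detector then reduces to a single spectral integral, as in the sketch below (the response and transmission arrays are assumed to be tabulated on the same photon energy grid as the emission):

```python
import numpy as np

def pcd_energy(hv, slice_emissions, response, filter_tx, omega):
    """Filtered, time-integrated PCD energy: sum the slice spectra, apply the
    filter transmission and detector response, and integrate over photon energy."""
    total = np.sum(slice_emissions, axis=0)          # sum over all slices
    return omega / (4.0 * np.pi) * np.trapz(response * filter_tx * total, hv)
```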
The TIPC contains five different channels, each with different filters and a fixed spatial resolution. This resolution is adequate to resolve axial variations in the ∼cm scale stagnation column, but not to resolve the radial gradients, since the average stagnation radius is much smaller than this. As such, we use this diagnostic as a 1D imager by integrating both the experimental data and the synthetic profiles over the radial dimension and use each axial slice as the basic unit for axial spatial resolution. This gives the total x-ray emission in the jth channel from the nth slice

$$S_{j,n} = \frac{\Omega}{4\pi}\, C_{\mathrm{IP}} \int_0^{\infty}\!\!\int_{V_n} A(h\nu)\, F_j(h\nu)\, \frac{dE}{dV}\, dV\, d(h\nu).$$
In this expression, Ω is the collection solid angle of the pinhole, A is the image plate response,48 $C_{\mathrm{IP}}$ is a calibration factor for the image plate scanner,49 $F_j$ is the filter transmission for the jth channel, and $V_n$ is the volume of the nth slice.
The spherical crystal imager is an x-ray imaging system that provides good two-dimensional spatial resolution (the exact value depending on the configuration). The optic is monochromatic, though it reflects efficiently in multiple reflection orders. Because the instrument images in both dimensions, the integral expressing the signal is more complicated. The spatially dependent signal from the nth slice can be written as

$$S_n(x) = \frac{\Omega}{4\pi}\left[\int \eta(h\nu)\, \frac{dE_n}{dx}(h\nu, x)\, d(h\nu)\right] * h(x),$$

where $dE_n/dx$ is the emission from the nth slice projected onto the detector coordinate. In this expression, x is the position at the detector, $\eta(h\nu)$ is the spectral sensitivity of the instrument including the spectrally resolved detector sensitivity (image plate), the total transmission of all the filters in the system, and the photon energy dependent reflectivity of the crystal, Ω is the collection solid angle of the crystal, and h(x) is the point spread function of the imaging system approximated as a Gaussian with which the spatial profile is convolved. The crystal reflectivity is approximated as a series of delta functions in photon energy at each of the energies that satisfy the Bragg condition for the imager. The photon energy of the first order reflection, the efficiency at different orders, and the spatial resolution are all dependent on the specific configuration of the instrument used on any given experiment.
Finally, the nTOF signal is calculated by integrating Eq. (13) over the hotspot volume, giving the total neutron spectrum. The energy spectrum is then corrected for the light output of the scintillator, which varies with incident neutron energy, and is converted into the time domain by transforming the spectrum according to the distance propagated to the detector.50 The time domain signal is then convolved with an instrument response function (IRF) that accounts for the composite temporal response of the scintillator and photomultiplier, giving the final nTOF signal. Due to the complicated scattering environment present in the Z facility, which can contaminate the low energy (late time) portion of the signal, only the leading portion of the signal (i.e., only the high energy portion) is used to perform the analysis.
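A schematic version of the spectrum-to-signal transform is shown below, using nonrelativistic kinematics for brevity (the actual analysis uses the relativistic Ballabio spectrum) and assuming the IRF is sampled on the same uniform time base as the resampled signal:

```python
import numpy as np

def ntof_signal(E_mev, dN_dE, dist_m, light_output, irf, n_t=2048):
    """Map a neutron energy spectrum to a time-of-flight signal at distance
    dist_m, then convolve with the instrument response function (IRF)."""
    m_n = 1.6749e-27                                 # neutron mass (kg)
    E_J = E_mev * 1.602e-13                          # MeV -> J
    t = dist_m / np.sqrt(2.0 * E_J / m_n)            # arrival time (s)
    s = dN_dE * light_output(E_mev) * (2.0 * E_mev / t)  # S(t) = Y(E)|dE/dt|
    order = np.argsort(t)                            # high E arrives first
    t_grid = np.linspace(t.min(), t.max(), n_t)      # uniform time base
    s_grid = np.interp(t_grid, t[order], s[order])
    return t_grid, np.convolve(s_grid, irf / irf.sum(), mode="same")
```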
IV. VALIDATION OF DATA ASSIMILATION FRAMEWORK
In this section, we will describe two forms of validation tests conducted to build confidence in both the algorithm and the model used to infer experimental plasma conditions. In Sec. IV A, we test the algorithm and data processing methods using analytic plasma models that do not contain any temporal evolution. This includes a case where the exact solution is recovered and an ensemble of cases where three-dimensional (3D) structure is incorporated. In Sec. IV B, we employ MHD simulations to create synthetic data from 1D and 3D simulations of MagLIF implosions. These tests incorporate time dependence in the synthetic data as well as additional physics in the simulation, allowing us to assess the impact of various model assumptions on our inferences.
A. Test cases
The first step we undertook to confirm that the parameter inference is working properly and that the problem is well posed was to use the physics model described in Sec. III to generate a hotspot configuration and synthetic diagnostic outputs. The inversion was then used to estimate the axial parameter profiles for the known configuration. For this test, we ran the inversion on a case with imposed axial variations in pressure, temperature, and fuel radius, and uniform liner ρR and mix fraction. The profiles of pressure, temperature, liner areal density, and fuel radius are shown in Figs. 3(a)–3(d) as the red lines along with the most likely values determined by the inversion (solid green lines) and the 25% and 75% quantiles (dashed green lines). The mean and 25% and 75% quantiles of the prior are shown as the solid and dashed blue lines, respectively. Some variations are visible in the prior as a function of position; these are due to the random nature of the sampling, not to imposed variations. In general, we see excellent agreement between the actual parameter values and the most likely inferred values. There are some areas where the inferred solution does not lie exactly on the input value, but the discrepancy is much smaller than the standard deviation of the posterior. This test confirms that we are able to recover the exact solution in the most ideal case.
Imaging data from MagLIF experiments reveal a rich variety of structures characterized by a quasi-helical morphology of the column, axial brightness variations with varying frequency, axial variations in the apparent width of the column along a given line of sight, and occasional bifurcations of the emission column that suggest the emergence of a double-helical structure.5,44,51–54 Taken together, these observations indicate a three-dimensional structure to the stagnation column that is not captured in our model. In order to validate the use of our cylindrical model to estimate bulk stagnation quantities, we have constructed a double-helix stagnation model that is used to generate an ensemble of hotspots. Each instance is then processed in the same manner as experimental data, and the inferred bulk parameters are compared to the true values from the ensemble. The model is constructed using two Gaussian plasmas centered at the locus of points corresponding to each helix, shown schematically in Fig. 4(a). The wavelength, amplitude, and phase of the helices are allowed to vary as a function of height. At each axial location, the plasmas are characterized by a single pressure, mix fraction, and liner areal density but have individual radii and temperatures. A smoothly varying axial profile is prescribed for each of the parameters with random fluctuations added on top. Four examples of stagnation columns generated using this model are shown in Fig. 4(b).
It was found that the complexity of the crystal images needs to be reduced for the purposes of extracting bulk quantities with our simplified model. The complexity reduction process we derived for this purpose is shown in Fig. 5 using data collected during shot z3040. Panel (a) in Fig. 5 shows the data as recorded by the spherical crystal imager. The recorded data exhibit significant variations in the horizontal direction, which we cannot capture with our model. To remove these variations, we first determine the maximum intensity location in each pixel row and center the pixel row on that position, producing the image shown in panel (b). Horizontal variations have been significantly reduced in the shifted image, but further processing is needed since several pixel rows still exhibit asymmetries in the x-direction. This is especially evident when examining the top half of the image in panel (b). We remove these remaining asymmetries by comparing the regular and horizontally flipped versions of each pixel row. If the pixel row under investigation is dominated by a single-peaked intensity profile, the difference between the regular and flipped pixel rows is small. The regular pixel row is adopted as final in this case. If, however, the pixel row contains an intensity profile with multiple peaks, we isolate the dominant peak, scale it to the total pixel row intensity, and adopt the result as final. We show an example of a single-peaked intensity profile in the top panel of Fig. 6 and a multi-peaked profile in the bottom panel of that figure. This process gives rise to panel (c), the "cleaned" image, in Fig. 5. The cleaned image is then cut into N = 3 slices, shown for illustrative purposes as red dashed boxes in Fig. 5 panel (c), each of which is then vertically integrated to produce the final profiles shown in panel (d).
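A schematic NumPy version of this row-by-row procedure is given below; the symmetry tolerance and the half-maximum peak isolation are illustrative choices, not the exact criteria used in the analysis:

```python
import numpy as np

def clean_image(img, sym_tol=0.1):
    """Center each pixel row on its maximum, keep single-peaked rows as-is,
    and replace multi-peaked rows by their rescaled dominant peak."""
    out = np.empty_like(img)
    mid = img.shape[1] // 2
    for i, row in enumerate(img):
        shifted = np.roll(row, mid - int(np.argmax(row)))   # center on the peak
        if np.abs(shifted - shifted[::-1]).sum() <= sym_tol * shifted.sum():
            out[i] = shifted                                # single dominant peak
        else:
            peak = np.where(shifted > 0.5 * shifted.max(), shifted, 0.0)
            out[i] = peak * (shifted.sum() / peak.sum())    # rescale to row total
    return out
```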
An ensemble of 50 instances of the double helix model was generated to validate the use of the cylindrical model in obtaining bulk stagnation information. Each instance was used to generate the full suite of data including the crystal image, processed as discussed above, TIPC, PCD yields, neutron yield, and the nTOF. All inferences were made using a three-slice model. The results are summarized in Fig. 7, where the true values from the model are plotted against the inferred values for pressure (a), temperature (b), fuel volume (c), liner areal density (d), and beryllium mix percentage (e). In each plot, the dashed line corresponds to perfect agreement between the truth and posterior values. The error bars along the abscissa represent the confidence interval of the posterior, while the error bars along the ordinate represent the interval of the distribution of true values in the model. All values from the double helix model and the posterior are x-ray emissivity weighted so that we can make an accurate comparison between the model and inference. Emissivity weighting is computed as

$$\langle Q \rangle = \frac{\int_V Q\,\varepsilon\, dV}{\int_V \varepsilon\, dV},$$

where Q is the quantity of interest and $\varepsilon$ is the x-ray emissivity from Eq. (10). The quantity $\langle Q \rangle$ is computed for each sample in the posterior, from which the mean and confidence intervals are estimated.
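For a single slice on a radial grid, this weighting reduces to a short quadrature, e.g.,

```python
import numpy as np

def emissivity_weighted(Q, emiss, r, dh):
    """Emissivity-weighted average of Q(r) over a slice of height dh."""
    w = emiss * 2.0 * np.pi * r * dh          # emissivity times volume element
    return np.trapz(Q * w, r) / np.trapz(w, r)
```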
We can readily see that the inferred temperature, pressure, and fuel volume agree quite well with the true values over the entire range tested. This result indicates that the procedure used to process the highly structured images is able to recover an unbiased estimate of the fuel volume. Though the trends are captured accurately, the liner areal density and mix fraction reveal a bias in the estimates of the parameters, which we attribute to the three-dimensional structure in the validation model. In particular, both parameter estimates are biased toward higher values than the true value from the model. By examining the posteriors and resulting uncertainties in the plots, it is apparent that these parameters both have very large uncertainties, indicating that the current incarnation of the inference model and the diagnostics used are not particularly sensitive to these quantities. Additionally, these two parameters are highly correlated, which could be partially responsible for the bias. These results give us a high degree of confidence that our cylindrical model can be used to extract bulk performance metrics, in particular pressure, temperature, and volume, from experimental data.
B. Comparison with simulation results
Due to the simplifications used to describe the state of the plasma, we need to check our model's ability to provide useful inferences against more complete multiphysics simulations. The most basic assumption that we need to validate is the use of a stationary hotspot model to infer parameters of a highly dynamic process. So a fundamental question can be asked: When we infer a set of stagnation parameters using our simple model, what exactly do they correspond to? Because neutron and x-ray production scale as $n^2$, and the density is itself a strong function of the convergence ($n \propto r^{-2}$ in cylindrical geometry, with the temperature following $T \propto n^{\gamma - 1}$, where $\gamma$ is the polytropic index), we expect that the measurements we make will be highly biased toward the time of minimum radius. Yet, we must evaluate how much the stationary assumption affects the solution and, consequently, our interpretation of the results of a given experiment.
An implementation of the Gorgon system of MHD equations55–57 with static mesh refinement was used to simulate a one-dimensional implosion and produce synthetic data, specifically to focus on the question of time dependence. The evolution of the important 1D parameters and the neutron and x-ray pulses are shown in Figs. 8(a) and 8(b), respectively. The temperature and pressure used for comparison in these plots are taken at r = 0, where the temperature is peaked, closely corresponding to the temperature parameter in our model. The radius (yellow line) is taken as the inner radius of the liner. From this simulation, synthetic TIPC, PCD, crystal imager, and neutron yield diagnostics were created. A small amount of Gaussian noise was added to the simulated diagnostic data. The burn duration was obtained by taking the FWHM of the simulated x-ray pulse, exactly as we do for the experiment.
In Fig. 9, we compare the posterior distribution (yellow) to the prior distribution (green) and the 1D simulated stagnation parameters extracted at peak burn (red), weighted by the neutron emission (orange), and weighted by the x-ray emission (light blue). Note that, in the case of temperature, where there is a radial dependence, the emission weighted temperature is the on-axis temperature weighted by the emission history. This is done so that the simulated quantity more closely maps to the model parameter, which defines the on-axis temperature. We can see that, in general, agreement with the simulated values is quite good, and agreement is closest for the x-ray weighted values. For pressure and temperature, both the neutron and x-ray weighted values lie within the 95% confidence interval. The radius is extremely tightly constrained and agrees best with the x-ray weighted value. The mix value does not dynamically evolve in this simulation, so the comparison is more straightforward.
The largest source of disagreement is with the liner areal density. In the simulation, this quantity was calculated by integrating the liner density radially outward to infinity, i.e., $(\rho R)_\ell = \int_{r_\ell}^{\infty} \rho\, dr$, where $r_\ell$ is the inner radius of the liner. In contrast, the only view the model has of the liner areal density is through the attenuation of x rays as they leave the hotspot on their way to the detector, for which we use cold opacity tables. We expect that as the liner is compressed and heated to a warm, dense state, the opacity will change. Additionally, the material at large radius that is carrying significant current could be quite hot and, therefore, have lower opacity than the bulk compressed liner material. It is, therefore, consistent with our intuition that the areal density inferred via x rays is different from the physical liner areal density.
Our final validation task was to process 3D simulation results and compare the results with quantities from the simulation. This test compounds both the three-dimensional structure effects as well as the time evolution effects, which are both neglected in our model. For this test, full height 3D implosions were simulated in Gorgon to capture the end effects. The laser energy was deposited directly into the gas, not by detailed simulation of the laser propagation in the gas. A uniform 2% Be mix fraction was artificially imposed in the gas at the start of the simulation. The simulation was driven using an experimentally derived circuit model and source voltage. Three runs were used in this validation, of which one representative example is shown in Fig. 10. The runs used different instability seeds and preheat energies to produce different stagnation conditions and structures.
Figure 10(a) shows the crystal image produced from one of these runs giving a sense of the structure. The box plots in Figs. 10(b)–10(e) show the x-ray emissivity weighted temperature, pressure, fuel radius, and fuel volume, respectively. In each plot, the posterior distribution is shown in yellow, the prior distribution is shown in green, and the true value is shown in purple. The boxes indicate the standard deviation of the distribution with the error bars indicating the 95% confidence interval. The line inside the boxes indicates the mean value. For the true values, the distribution indicates the variation of each of the quantities along the height of the column. The true volume is shown as a point with no distribution because it is a volume integrated quantity. We can see that the temperature, pressure, and radius agree extremely well with the true values extracted from simulation. The inferred volume is slightly above the true value but still lies well within the 95% confidence interval. This is partly due to the fact that the volume is calculated from the simulation data by setting an emission threshold above which fuel volume is included in the summation, while the model volume contains 100% of the emission by construction.
It is worth noting that the distribution of the inferred radius is narrower than the distribution of the value obtained from simulation. This is a statement that the effective radius of the entire column, within the confines of the model used, can be described by the inferred distribution. This should not be interpreted as a statement that we believe the actual radius to be known to greater precision than simulation warrants. This discrepancy originates from the fact that the model is not able to describe all of the variations observed in the stagnation column. It is for this reason that we use the plasma volume, not the radius, in the analysis that follows.
The inferred mix fraction compared favorably to the 2% Be mix fraction imposed in the simulation. The liner areal density again shows a bias toward being low, with the inferred value falling below the true value. As before, these values are inferred with relatively low confidence. The results of all of the validation tests confirm that we can infer fuel pressure, temperature, and volume with high confidence. Mix and liner areal density are inferred with lower confidence and have the potential for significant bias, though the trends appear to be recovered well. The addition of new diagnostic information, e.g., from x-ray spectroscopy, is the most promising avenue to resolve this concern.
V. APPLICATION TO EXPERIMENTAL DATA
Having validated our methodology for the extraction of critical performance metrics, we applied the analysis to an ensemble of 36 MagLIF experiments (see the Appendix for examples of the posterior distribution from this analysis). These experiments span a wide range of experimental conditions dating back to 2015 and were chosen based solely on the quality of the data return. This represents the most comprehensive analysis of a wide range of MagLIF experiments to date, providing much needed input to our understanding of the response of the MagLIF system to changes in experimental inputs. Previous analyses have typically focused on the change in neutron yield with respect to changes in input and stagnation conditions. For the first time, our posterior inference provides us with the information needed to estimate the generalized Lawson parameter χ directly,58 a critical parameter needed to quantify proximity to ignition and scaling prospects. We define χ in a form allowing the incorporation of the effects of mix as follows:

$$\chi = \frac{E_\alpha\,\tau_E}{24\,V}\int_V f_{\mathrm{DT}}^2\left(\frac{2}{1+\bar{Z}}\right)^{2}\frac{P\,\langle\sigma v\rangle_{\mathrm{DT}}}{T^{2}}\,dV, \qquad (18)$$
where $E_\alpha$ is the energy of a DT fusion alpha particle, $\langle\sigma v\rangle_{\mathrm{DT}}$ is the DT fusion reactivity, P is the hotspot pressure, T is the fuel ion temperature, and $\tau_E$ is the energy confinement time. For a pure, equimolar DT plasma ($f_{\mathrm{DT}} = 1$, $\bar{Z} = 1$), this reduces to the familiar form $\chi = E_\alpha \langle\sigma v\rangle_{\mathrm{DT}} P \tau_E / 24 T^2$. In our case, we assume $\tau_E = \tau$, the FWHM of x-ray emission measured by the PCDs and used in the inference. We did not use DT fuel in any of these experiments, but our model inference allows us to compute the DT reactivity of our plasma via the inferred pressure and temperature profile. We further assume that the experimentally inferred deuterium fraction, $f_{\mathrm{DT}}$, is equally split into deuterium and tritium. The integral is over the fuel volume V, allowing us to account for the radially varying fuel temperature in our model. The results of this analysis are shown in Fig. 11, where we have plotted each point in temperature–pressure space. The color of each data point corresponds to the value of χ determined using the posterior values for the pressure, temperature, and fuel volume in Eq. (18). The contours on the plot represent contours of equal χ, where χ = 1 is the threshold where self-heating becomes dominant.
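Equation (18), as written above, can be evaluated directly on the slice model; in the sketch below, sigv_dt is a user-supplied DT reactivity fit, and the units are chosen (T in keV, P as an energy density in keV/cm³, reactivity in cm³/s, and τ_E in s) so that χ is dimensionless:

```python
import numpy as np

def chi_lawson(P, Tc, Tw, R, alpha, zbar, f_dt, tau_e, dh, sigv_dt):
    """Generalized Lawson parameter from the volume integral in Eq. (18)."""
    E_alpha = 3.5e3                            # DT alpha energy (keV)
    r = np.linspace(1e-6, R, 200)
    T = (Tc - Tw) * (1.0 - (r / R) ** alpha) + Tw
    local = f_dt**2 * (2.0 / (1.0 + zbar))**2 * P * sigv_dt(T) / T**2
    V = np.pi * R**2 * dh                      # slice volume
    integral = np.trapz(local * 2.0 * np.pi * r * dh, r)
    return E_alpha * tau_e / (24.0 * V) * integral
```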
Three of the experiments analyzed in this study achieved the highest values of χ in the ensemble, each also producing the highest DD neutron yields obtained to date. The stagnation pressures achieved in these experiments span a fairly large range, 1.2–1.8 Gbar, with temperatures of 2.6–3.3 keV. These conditions were enabled by steady progress in improving experimental capabilities allowing larger drive currents, higher initial magnetic fields, fill densities, and coupled preheat energy while reducing mix.52,59,60 The results of the inferences as well as the initial conditions for these three experiments are summarized in Table I. We note that Z, as presently configured, is not capable of achieving χ = 1, as higher pressures, and correspondingly, higher drive currents are needed than what is accessible on Z. The topic of scaling existing experiments to larger driver energy is the subject of a number of recent studies.60,61
Table I. Initial conditions and inferred stagnation parameters for the three highest performing MagLIF experiments.

Z shot # | I_max (MA) | B_z0 (T) | ρ_0 (mg/cm³) | E_laser (kJ) | E_cpld (kJ) | τ (ns) | T (keV) | P (Gbar) | V (mm³) | χ (×10⁻²)
---|---|---|---|---|---|---|---|---|---|---
z3179 | 15 | 15.8 | 0.7 | 0.95 | 0.55 | 1.8 | 3.3 ± 0.1 | 1.2 ± 0.07 | 0.03 ± 0.002 | 8.4 ± 0.9
z3236 | 15 | 10 | 1.05 | 1.35 | 1.1 | 2.1 | 2.9 ± 0.1 | 1.3 ± 0.07 | 0.08 ± 0.004 | 6.2 ± 0.6
z3289 | 20 | 15.9 | 1.05 | 1.15 | 1.1 | 2.0 | 2.6 ± 0.3 | 1.8 ± 0.16 | 0.05 ± 0.007 | 7.1 ± 1.8
VI. DISCUSSION
We have detailed a multi-objective Bayesian parameter estimation algorithm using a reduced physics model to infer the plasma conditions during neutron production in MagLIF experiments. The main benefits of this approach compared to a more traditional one-instrument-at-a-time approach are (1) fully consistent determination of all important plasma parameters and (2) improved confidence in the inferred parameters with rigorously defined uncertainties. We described the physics and diagnostic models used, as well as the algorithm employed to perform the parameter estimation.
The model developed for this purpose employs a cylindrically symmetric, radially isobaric, static stagnation column where axial variations are allowed through the use of independent axial sections, referred to as “slices.” This model of the stagnation plasma is used to generate neutron and x-ray emission that are processed by synthetic diagnostic models to produce filtered x-ray yields, neutron yield, neutron time-of-flight, a multi-channel filtered x-ray pinhole camera, and a high resolution monochromatic x-ray image. These synthetic diagnostics are compared to experimental data to find the stagnation conditions that simultaneously best match the suite of observations.
A series of validation tests were presented to demonstrate that the technique is able to recover the proper solution when one is known. These tests included the recovery of the exact solution when the same model is used to generate data and infer parameters. A three-dimensional double-helix model was used to generate synthetic data that better match the character of experimental data. An ensemble of 50 instances from this model was used to validate the inference of emissivity weighted stagnation conditions in the presence of significant three-dimensional structure. It was shown that the temperature, pressure, and volume of the stagnation are inferred accurately. The mix fraction and liner areal density show a systematic bias in this test, though the trends are recovered accurately. Finally, stagnation conditions were inferred from the output of 1D and 3D Gorgon simulations. Parameter estimation performed on synthetic data produced from the 1D simulation shows that the method can be used to determine stagnation conditions with high fidelity, despite the model having no time dependence. Performing the same inference on the 3D simulated data shows that the emissivity weighted temperature, pressure, and fuel volume are recovered accurately.
This methodology was applied to an ensemble of 36 MagLIF experiments. The posterior distribution of temperature, pressure, and volume was used to compute the generalized Lawson criterion χ for all experiments. This analysis shows systematic improvement in performance corresponding to ongoing efforts to improve experimental capabilities such as preheat energy deposited in the fuel and initial magnetic field. Detailed results from the three highest performing experiments to date are shown.
Future work will focus on three parallel directions: (1) model improvement, (2) value of information, and (3) data-driven experiment design. First, our results suggest that three-dimensional structure is a significant contributor to bias in the inferences. A model that accurately captures the rich structures is required both to improve the fidelity of our inferences and to assess the impact of this structure on performance trends as well as efforts to control morphology. We will also explore both the addition of other physics, such as density- and temperature-dependent liner opacity to further reduce bias, and the simultaneous analysis of multiple experiments, since some of the model parameters are common between the experiments. Second, we can use the tools developed here to test the value of adding new diagnostic information to the inference. By modeling the outputs of new diagnostics (e.g., x-ray spectroscopic signatures of density, temperature, mix, and liner areal density), we can quantitatively test their impact on the inferences through our validation framework. This gives us a quantitative metric with which to prioritize the incorporation of additional extant data as well as new developments to improve our understanding. Finally, this work provides a database of detailed performance metrics from a wide range of experiments with a variety of input conditions. We will seek to use this information, and similar information obtained on subsequent experiments, to guide the design of new experiments to optimize performance and extract more physical insight.
ACKNOWLEDGMENTS
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
APPENDIX: POSTERIOR DISTRIBUTIONS
In this appendix, we show the posterior distribution for some of the inferred parameters from two experimental data sets using both the Gaussian approximation and the MCMC step. Figure 12 shows the posterior distribution of the five inferred physics parameters from the middle slice for the inference using the z3019 dataset. The corner plot shows the marginalized distribution of each parameter on the diagonal, while the off diagonal plots show the pairwise joint histograms for each parameter. The blue data show the posterior inferred using the Gaussianized LM step only, while the orange data show the result when adding the MCMC step. We see in this example that the MCMC step serves to reduce the width of some of the distributions and introduces a small bias relative to the Gaussian step, which is within the uncertainty. The Gaussianized posterior is, therefore, viewed as a conservative estimate of the posterior. In some cases, the Gaussian posterior overestimates the correlations between parameters.
Figure 13 shows the same as above, but for z3179, one of the highest performing experiments. We see in this case that the posterior inferred using the MCMC step almost exactly reproduces the Gaussianized posterior, but with reduced uncertainties. This again demonstrates that the Gaussian posterior represents a conservative estimate, with small to negligible bias introduced by the Gaussian approximation. As a result of this observation, all inferences made in this work, including in the validation and comparison with simulation, were done using the Gaussian approximation. We expect that for certain classes of problems this approximation will be violated, so it should continue to be checked in the future.