Accurate constitutive data, such as equations of state and plasma transport coefficients, are necessary for reliable hydrodynamic simulations of plasma systems such as fusion targets, planets, and stars. Here, we develop a framework for automatically generating transport-coefficient tables using a parameterized model that incorporates data from both high-fidelity sources (e.g., density functional theory calculations and reference experiments) and lower-fidelity sources (e.g., average-atom and analytic models). The framework incorporates uncertainties from these multi-fidelity sources, generating ensembles of optimally diverse tables that are suitable for uncertainty quantification of hydrodynamic simulations. We illustrate the utility of the framework with magnetohydrodynamic simulations of magnetically launched flyer plates, which are used to measure material properties in pulsed-power experiments. We explore how changes in the uncertainties assigned to the multi-fidelity data sources propagate to changes in simulation outputs and find that our simulations are most sensitive to uncertainties near the melting transition. The presented framework enables computationally efficient uncertainty quantification that readily incorporates new high-fidelity measurements or calculations and identifies plasma regimes where additional data will have high impact.

## I. INTRODUCTION

Over the past few decades, a great deal of effort has been devoted to assessing the uncertainties of, and improving the modeling of, constitutive properties such as equations of state (EOS)^{1} and charged-particle transport coefficients.^{2,3} Data for these properties are critical inputs for radiation- and magneto-hydrodynamic simulations of diverse plasma systems such as inertial-fusion targets^{4} and white dwarf stars.^{5} At the same time, experimental and diagnostic capabilities have advanced to enable high-precision measurements of the EOS^{6,7} and transport properties.^{8,9} These measurements provide benchmarks for models and constrain the data tables used by hydrodynamic codes.

Notably, however, incorporating new knowledge from high-fidelity calculations and precision experiments into integrated hydrodynamic simulations introduces a significant bottleneck that requires the dedicated efforts of experts. In practice, the EOS and transport coefficients are first tabulated based on calculations from one or more theoretical approaches; the tables must then be adjusted manually to fit known data such as ambient pressures and conductivities and supplied to a hydrodynamic code. Moreover, the EOS tables are usually generated independently from transport-coefficient tables, which can introduce inconsistencies between them, such as in the temperature and density of phase transitions (e.g., melt). Because hydrodynamic simulations access a wide range of application-dependent plasma conditions, tables of constitutive data must span an enormous range of conditions, commonly many orders of magnitude in both density and temperature. Within this large parameter space, very little experimental data exist for validation, and the models that generate data have different levels of uncertainty. All of these considerations add to the difficulty of developing a systematic approach to uncertainty quantification of hydrodynamic simulations.

Non-parametric analytic models are useful to generate transport-coefficient data rapidly across the necessary parameter range for hydrodynamic simulations.^{10–18} These models are often validated against experimental and simulation data when possible. However, it is rare for a non-parametric analytic model to be sufficiently flexible to capture trends in data in disparate physical regimes. Parametric analytic models offer a solution to this deficiency by providing additional flexibility through the fine-tuning of the model parameters.^{19}

Transport-coefficient tables are less ubiquitous than EOS tables. When a table for a particular application does not exist, hydrodynamic codes that require transport coefficients as closures are forced to rely on simplifying approximations that may have limited accuracy in the regions of interest.^{20,21} The quality of the results from hydrodynamic codes depends critically on the EOS and transport-coefficient tables,^{22,23} but the mapping from uncertainties in these tables to uncertainties in integrated quantities is often opaque because the equations that rely on these data are non-linear. Here, we present a framework for quantifying these uncertainties, using as an example the impact of the direct current (DC) electrical conductivity on observable predictions from a magnetohydrodynamic (MHD) code.

To generate tables of the DC electrical conductivity across a wide range of conditions, we use a modification of the parameterized Lee–More–Desjarlais (LMD) model.^{19} We constrain the parameters of the LMD model to fit a multi-fidelity dataset containing experimental data, high-fidelity density functional theory molecular dynamics (DFT-MD) calculations,^{24} and lower-fidelity calculations from a DFT-based average-atom (AA) code.

To determine the LMD parameters, we use Bayesian inference on the multi-fidelity datasets and employ a simple surrogate model to extend our framework across the large parameter space. Our approach provides a framework that automatically generates ensembles of tables, incorporates uncertainties from multi-fidelity datasets, and enforces consistent melt transitions between the EOS and transport-coefficient tables. We also devise a scheme to perform optimal table selection for propagating dataset uncertainties to quantify the uncertainties in observables from MHD simulations. While we apply the framework to electrical conductivities and outputs of MHD codes, the framework is general and can be applied to any constitutive property to which the data can be fit—that is, to a parametric form or to a non-parametric form if the data are sufficiently dense.^{25} We emphasize that the generalizability and automation of this approach to generate the tables means that additional data can be easily incorporated to refine the uncertainty quantification. We name the framework developed here ETHOS, which stands for “electronic transport for hydrodynamics with an optimized surrogate.”

The paper is organized as follows. Section II discusses the methods used to generate transport-coefficient datasets. These methods generate datasets across a range of plasma conditions but are not sufficient to produce wide-ranging tables. The process of constructing tables from these datasets is given in Sec. III, where we describe the automated framework, ETHOS, that incorporates the datasets into tables using the parameterized LMD model and Bayesian inference. The framework includes dataset uncertainty, can capture the correlations between data points, and selects optimal tables for computationally efficient uncertainty quantification. Then, we use an ensemble of tables generated by the framework to quantify the sensitivities of the outputs of MHD simulations to uncertainties in the DC electrical conductivity, showing simulation results in Sec. IV. We find that a 20% uncertainty in data of the DC electrical conductivity corresponds to uncertainties in MHD outputs that are outside the resolution of experimental diagnostics, suggesting that the experimental data will further constrain our tables. We conclude in Sec. V by providing a suggested path forward for extensions and improvements of the present framework.

## II. THE DC ELECTRICAL CONDUCTIVITY IN DENSE PLASMAS

Lee and More^{10} derived an expression for the DC electrical conductivity that was later improved upon by Desjarlais,^{19}

$$\sigma_{\rm DC} = \frac{n_e e^2 \tau_{ei}}{m_e}\, A^{\alpha}\!\left(\frac{\mu}{k_B T_e}\right)\gamma_{ee}(Z^*), \qquad (1)$$

where $n_e$ is the electron number density, $m_e$ is the electron mass, $e$ is the elementary charge, $\mu$ is the chemical potential, $T_e$ is the electron temperature, and $k_B$ is Boltzmann's constant. In this work, we assume that the electrons and ions share the same temperature so that $T_e = T_i = T$. Finally, the term $\gamma_{ee}(Z^*)$ is a correction for electron–electron collisions that depends on the average-ionization state $Z^*$,^{26} and the function $A^{\alpha}$ is represented by Fermi–Dirac integrals and is provided in the Appendix. The LMD model provides approximations for the electron-ion collision rate $\tau_{ei}$ in the solid, liquid, and plasma states; these approximations contain tunable parameters that can be adjusted to agree with available data generated from simulation and experiment. See the Appendix for more details of the LMD model used in this work.
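The Drude-like prefactor of Eq. (1) can be sketched numerically as follows. Here the Fermi–Dirac factor $A^{\alpha}$ and the electron–electron correction $\gamma_{ee}$ are set to unity as placeholders (their full forms are model-specific and live in the Appendix), and the example inputs are rough solid-density-metal values, not fitted quantities:

```python
# Minimal sketch of the Drude-like structure underlying Eq. (1):
# sigma_DC = (n_e e^2 tau_ei / m_e) * A_alpha * gamma_ee, in SI units.
# A_alpha and gamma_ee default to 1 here as placeholders.

E_CHARGE = 1.602176634e-19     # elementary charge (C)
M_ELECTRON = 9.1093837015e-31  # electron mass (kg)

def sigma_dc(n_e: float, tau_ei: float,
             A_alpha: float = 1.0, gamma_ee: float = 1.0) -> float:
    """DC conductivity (S/m) for electron density n_e (1/m^3) and
    electron-ion collision time tau_ei (s)."""
    return n_e * E_CHARGE**2 * tau_ei / M_ELECTRON * A_alpha * gamma_ee

# Illustrative numbers only: n_e ~ 2.5e29 m^-3, tau_ei ~ 10 fs.
print(sigma_dc(n_e=2.5e29, tau_ei=1e-14))  # on the order of 1e7-1e8 S/m
```

The conductivity scales linearly with both $n_e$ and $\tau_{ei}$, which is why the tunable collision-rate approximations in the LMD model translate directly into conductivity variations.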

The average electron-ion collision rate can also be computed from an AA model using an extended Ziman formalism^{27–29} that is dependent on different choices of $Z^*$ and the ion–ion static structure factor. Variations in these quantities correspond to variations in the DC electrical conductivity from an AA model.^{3,29} More sophisticated multi-center models based on DFT-MD provide high-fidelity conductivity data based on the Kubo–Greenwood formalism, which predicts the optical (frequency-dependent) conductivity using the Onsager relations.^{30–33} Other sources of data for the DC electrical conductivity include experimental measurements near ambient conditions^{34,35}—usually spanning a temperature range of only a few hundred kelvin—and a very limited number of measurements in the plasma regime.^{8,9,36,37}

The three computational approaches described above have complementary strengths and weaknesses. While the LMD model is wide-ranging and computationally rapid, it contains tunable parameters and assumes a fixed functional form. AA models take minutes to compute a single data point, are sensitive to modeler choices,^{27} and are best used for liquids and plasmas.^{3} DFT-MD is computationally expensive, taking days or weeks to obtain values for the DC electrical conductivity, but extends to non-radially symmetric systems such as crystalline solids and molecules. AA models and DFT-MD are sensitive to the choice of exchange-correlation functional, and DFT-MD is sensitive to finite-size effects^{21} and the choice of pseudopotential. Together, these models enable the generation of sparse sets of multi-fidelity DC electrical conductivity data.

In this work, we use the term *dataset* to mean a set of data generated by experiment, DFT-MD calculations, and an AA model at specific temperature and density locations. We use the term *table* to mean the interpolated dataset (using the LMD model) that spans a density and temperature range beyond the extent of the dataset. Here, we generated a dataset for the electrical conductivity of beryllium along 32 isochores with densities ranging from $\rho_i = 0.01$ to 6.0 g cm^{−3} in intervals of 0.2 g cm^{−3}. The dataset spans a temperature range of $T = 10^{-2}$–$10^{4}$ eV and includes experimental and simulation data of varying fidelity. Altogether, our dataset consisted of 1436 data points: 28 from an experiment,^{34} 48 from DFT-MD simulations, and 1360 from AA model calculations. Some of these data are shown in Fig. 1. While this is a fairly dense dataset, a parametric form is needed to mitigate interpolation and extrapolation errors in the regions absent of data. A region of particular importance is the low-temperature, solid-density region, which governs the growth rate of the electrothermal instability.^{38–41} With this dataset, we now detail the procedure to generate an ensemble of tables that can be supplied to MHD codes.

## III. BAYESIAN INFERENCE FOR TABLE GENERATION

Bayesian inference yields a *posterior* distribution of parameters that can be sampled to generate an ensemble of conductivity tables. It is the ensemble of tables that enables uncertainty quantification of MHD with this framework. The foundations of the framework developed here rely on Bayes' theorem,

$$P(\theta_{\rm LMD} \mid \sigma_{\rm DC}) = \frac{P(\sigma_{\rm DC} \mid \theta_{\rm LMD})\, P(\theta_{\rm LMD})}{P(\sigma_{\rm DC})}. \qquad (2)$$

The term $P(\sigma_{\rm DC} \mid \theta_{\rm LMD})$, called the *likelihood* distribution, describes the probability of obtaining a particular value of the DC electrical conductivity given a set of LMD model parameters: the likelihood distribution characterizes how we expect our observations to distribute about the model (e.g., normally distributed for continuous quantities). The uncertainty in the data and correlations between data points are quantified in a covariance matrix that enters that term. The term $P(\theta_{\rm LMD})$, called the *prior* distribution, characterizes any physics-based beliefs or previous analyses that suggest reasonable parameter values (e.g., only positive values of a parameter). It is common to set the prior distribution to be a wide uniform distribution to ensure that the posterior distribution will fall within that range (see Fig. 2). Finally, the normalization term $P(\sigma_{\rm DC})$ is called the *evidence*; that term does not need to be computed when using the Markov-chain Monte Carlo approach employed in this work.^{45,46}

Bayesian inference has been applied to many areas of plasma physics, including interpolating plasma transport-coefficient data,^{25} analyzing inertial confinement fusion experiments,^{47,48} producing EOS,^{49,50} and inferring the current delivered to loads on pulsed-power machines.^{51} Recent tutorials^{48,52} discuss some of the basics of Bayesian inference. For the results presented herein, we compute the posterior distribution by using the No-U-Turn sampler^{53} with an acceptance rate of 77.5%. We assume that the likelihood distribution is normally distributed, specify a burn-in period of 2000 steps, and approximate our posterior distribution with 8000 samples. Figure 2 displays the prior and posterior distributions for one parameter of the LMD model at a particular density value where a 10% uncertainty was assumed on the DC electrical conductivity dataset. We see that the posterior distribution is roughly normal, and by sampling this distribution, we generate a range of values for an LMD parameter.
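The inference loop can be illustrated with a toy one-parameter model. The paper uses the No-U-Turn sampler, but a simple random-walk Metropolis sampler conveys the same structure: a normal likelihood with a fractional data uncertainty, a wide uniform prior, a burn-in period, and a posterior summarized by its median. The model, data, and tuning constants below are illustrative, not the LMD fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "conductivity" model with one tunable parameter a: sigma(T) = a / T.
T = np.linspace(1.0, 10.0, 20)
a_true = 5.0
frac_unc = 0.10                       # assumed 10% fractional data uncertainty
data = a_true / T                     # noiseless synthetic data for clarity
err = frac_unc * data                 # standard deviation of each point

def log_post(a):
    if not (0.0 < a < 100.0):         # wide uniform prior
        return -np.inf
    resid = (data - a / T) / err      # normal likelihood
    return -0.5 * np.sum(resid**2)

# Random-walk Metropolis, mirroring the burn-in / sampling split above.
samples, a = [], 1.0
for step in range(10000):
    prop = a + 0.2 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(a):
        a = prop
    if step >= 2000:                  # discard 2000 burn-in steps
        samples.append(a)

print(np.median(samples))             # posterior median, close to a_true
```

Replacing the toy model with the LMD parameterization and the diagonal error vector with a full covariance matrix recovers the structure of the fit described in the text.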

In Fig. 1, LMD fits to data of the DC electrical conductivity at the isochore $\rho_i = 1.85$ g cm^{−3} are shown. For these data, we have assumed a 10% uncertainty. The variation in the fits is most notable near the melt transition. The LMD parameters responsible for this spread include the parameter shown in Fig. 2 and a parameter that tunes the Bloch–Grüneisen formula for the mean-free path—see Eq. (A2). The fit labeled “fit: posterior median” is generated by using the posterior median for each of the LMD parameters. We chose the median (as opposed to the mean, for example) because the median is less sensitive to outliers and heavily skewed distributions.

The procedure of obtaining optimized LMD parameters is carried out at each isochore within the range of our dataset described in Sec. II. Doing so results in a set of LMD parameters at each isochore, which are then interpolated; this step ensures that the LMD model used to produce a table is both density and temperature dependent. Here, a simple linear interpolation that acts as a surrogate model is used between the parameter values at each isochore. We note that not all parameters are constrained at each density; such parameters are excluded from the fitting procedure. In regions without electrical conductivity data (i.e., where $\rho_i < 0.01$ g cm^{−3} or $\rho_i > 6.0$ g cm^{−3}), we extrapolate each fit by a constant value determined by the final constraining point.
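The per-isochore interpolation with constant extrapolation beyond the last constrained density can be sketched with `numpy.interp`, which is linear between nodes and clamps to the endpoint values outside their range. The parameter values and densities below are made up for illustration:

```python
import numpy as np

# Hypothetical optimized values of one LMD parameter at a few isochores.
rho_fit = np.array([0.5, 1.0, 1.85, 3.0, 6.0])     # g/cm^3
theta_fit = np.array([1.2, 1.5, 2.1, 1.9, 1.7])    # fitted parameter values

# Dense density grid for the final table. np.interp is linear between the
# fitted points and constant beyond them, matching the constant
# extrapolation by the final constraining point described above.
rho_table = np.logspace(-3, 2, 300)
theta_table = np.interp(rho_table, rho_fit, theta_fit)

assert theta_table[0] == theta_fit[0]    # clamped below the lowest isochore
assert theta_table[-1] == theta_fit[-1]  # clamped above the highest isochore
```

The same one-dimensional surrogate is applied parameter by parameter, so the full table evaluation reduces to interpolating a handful of curves in density.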

In Fig. 1, each fit represents a valid set of LMD parameters, although some sets are more probable than others. These sets of LMD parameters are used to generate an ensemble of tables for use in an MHD code, providing a direct connection from uncertainties in the DC electrical conductivity to uncertainties in the results of MHD simulations; uncertainty quantification using the ensemble of tables is discussed in Sec. IV. The tables generated for the uncertainty quantification study contained a 300 × 300 log-spaced temperature-density grid spanning seven orders of magnitude in both temperature and density. To ensure that those tables were dense enough for our study, we carried out a number of simulations with increasingly dense tables. We found that results using the 300 × 300 table were within 0.1% of those using a log-spaced 1000 × 1000 table, which we had assumed to be sufficiently dense.

The MHD codes used to simulate inertial confinement fusion or astrophysical plasmas are typically computationally expensive—especially in three spatial dimensions. It is common for these calculations to take days or even weeks of computer time. Therefore, carrying out thousands of simulations to quantify the uncertainties of transport-coefficient tables is often computationally intractable. For these applications, selecting a reduced number of tables from the full posterior distribution is desirable. It would be ideal if this reduced set of tables could capture the possible spread of parameters sampled by the full posterior distribution since these tables would likely correspond to the largest uncertainty in the output of the MHD code. An approach to accomplish the task of efficiently sampling the high-dimensional posterior distribution is presented in Sec. III A.

### A. Optimal table selection for computationally efficient uncertainty quantification

We use the Morris–Mitchell criterion^{54} to accomplish this task for simplicity, but alternate space-filling design approaches exist.^{25,55,56} The Morris–Mitchell criterion is set by minimizing the following quantity:

$$\varphi = \left( \sum_{j} d_j^{-q} \right)^{1/q}, \qquad (3)$$

where the sum runs over the pairwise distances $d_j$ between samples in the $p$-dimensional parameter space, and *p* is the number of tunable LMD parameters. Additionally, *q* is a positive integer, and *M* is the number of posterior samples that will be used to generate *M* tables. To find a set of optimally diverse posterior samples, we compute $\varphi_i$ for *N* candidate subsets of *M* samples of the posterior distribution. Here, $i = 1, 2, \ldots, N$, where *N* is the total number of candidate subsets of size *M*. We then select the optimal subset using $\varphi_{\rm opt} = \min_i(\varphi_i)$. As an example, consider that we wish to select $M = 10$ representative samples from the posterior distribution. Then, we might draw $N = 100$ candidate subsets of size *M* from the posterior distribution and pick the $M = 10$ samples that result in the minimal value of $\varphi_i$. Restricting the search to *N* candidate subsets is not strictly required; however, it greatly reduces the computational cost relative to evaluating $\varphi_i$ for all combinations of size *M* from a posterior distribution with possibly many thousands of samples.
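The subset-selection scheme above can be sketched as follows. The stand-in Gaussian "posterior," the dimensionality $p = 4$, Euclidean distances, and $q = 2$ are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(subset: np.ndarray, q: int = 2) -> float:
    """Morris-Mitchell criterion, Eq. (3)-style, for an (M, p) array:
    phi = (sum over pairs d_j^{-q})^{1/q}; smaller phi = more spread out."""
    M = subset.shape[0]
    d = [np.linalg.norm(subset[i] - subset[j])
         for i in range(M) for j in range(i + 1, M)]
    return float(np.sum(np.asarray(d) ** (-q)) ** (1.0 / q))

# Stand-in "posterior": 8000 samples of p = 4 parameters.
posterior = rng.normal(size=(8000, 4))

M, N = 10, 100                      # subset size, number of candidate subsets
candidates = [posterior[rng.choice(len(posterior), M, replace=False)]
              for _ in range(N)]
best = min(candidates, key=phi)     # phi_opt = min_i(phi_i)

assert best.shape == (M, 4)
```

Because distant points contribute little to the inverse-distance sum, minimizing $\varphi$ pushes the selected samples apart, yielding the space-filling behavior seen in Fig. 3.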

In Fig. 3, we compare randomly selected posterior samples and optimally selected samples from the posterior using Eq. (3). We see that the optimally selected samples span a larger range in the joint-posterior distribution than the randomly selected samples. In this case, the random sampling approach results in samples that were more clustered in the upper left portion of the joint-posterior distribution.

We use this optimal sampling approach in Sec. IV A, where we simulate the dynamics of a magnetically launched flyer plate. As we will show, in contrast to random sampling, optimally sampling the posterior distribution with Eq. (3) results in a more representative sampling of the distribution of flyer-plate velocities.

## IV. MAGNETOHYDRODYNAMIC SIMULATIONS

We model the flyer-plate dynamics with a one-dimensional (radial) resistive-diffusion MHD code, which solves^{57}

$$\frac{\partial n}{\partial t} + \frac{1}{r}\frac{\partial}{\partial r}\left( r n u \right) = 0, \qquad (4)$$

$$m n \left( \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial r} \right) = -\frac{\partial p}{\partial r} - \frac{B}{\mu_0 r}\frac{\partial}{\partial r}\left( r B \right), \qquad (5)$$

$$\frac{\partial B}{\partial t} + \frac{\partial}{\partial r}\left( u B \right) = \frac{\partial}{\partial r}\left[ \frac{1}{\mu_0 \sigma_{\rm DC}} \frac{1}{r} \frac{\partial}{\partial r}\left( r B \right) \right], \qquad (6)$$

where each ion has mass *m*, $n \equiv n(r,t)$ is the number density, $u \equiv u(r,t)$ is the velocity, and $B \equiv B(r,t)$ is the azimuthal component of the magnetic field at radial distance *r* and time *t*. The right-hand side of Eq. (6) is obtained by assuming that current flows only in the *z*-direction. Additionally, $\mu_0$ is the permeability of free space and $p \equiv p(\rho, T)$ denotes the scalar pressure; in this work, *p* is obtained from the SESAME #92025 EOS, which contains Maxwell constructions. The resistive-diffusion MHD equations [Eqs. (4)–(6)] are numerically solved using a Lagrangian approach, and Eq. (6) is integrated implicitly in time.

To compute the temperature *T* that is used to query the EOS and conductivity tables, the internal energy is converted to an energy density and updated at each simulation time step from three contributions: work done by an artificial viscosity term (we employ the von Neumann–Richtmyer artificial viscosity), PdV work, and Ohmic heating. We neglect thermal conduction, as our primary goal is to assess the influence of variations of the electrical conductivity on outputs of the MHD simulation. By neglecting the thermal conductivity in this work, we may reduce the accuracy of the simulations, but we disentangle any potential concerns with coupling the electrical conductivity model to the thermal conductivity model, which may have its own uncertainty. Due to the impact that thermal conductivity may have on the fidelity of MHD simulations, we plan to carry out an in-depth analysis of its impact in future work. After updating the value of the internal energy density, we determine the value of *T* (and *p*) from the EOS table. We use this value of *T* to determine the value of $\sigma_{\rm DC}$ from the conductivity table.
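The three-contribution energy update can be sketched as a single zone-local step. The explicit form and variable names below are schematic illustrations, not the code's actual discretization:

```python
def update_internal_energy(eps, p, q_visc, div_u, J, sigma_dc, dt):
    """Advance the internal energy density eps (J/m^3) over one time step.

    Three contributions, as in the text:
      - artificial-viscosity work:  -q_visc * div_u
      - PdV work:                   -p * div_u
      - Ohmic heating:              J^2 / sigma_dc
    All inputs are zone-local scalars; div_u is the velocity divergence,
    J the current density (A/m^2), and sigma_dc the DC conductivity (S/m).
    """
    return eps + dt * (-(p + q_visc) * div_u + J**2 / sigma_dc)

# A compressing (div_u < 0), current-carrying zone heats up:
eps1 = update_internal_energy(eps=1.0e9, p=1.0e8, q_visc=5.0e6,
                              div_u=-1.0e6, J=1.0e11, sigma_dc=3.0e7, dt=1e-9)
assert eps1 > 1.0e9
```

Note how the Ohmic term scales inversely with $\sigma_{\rm DC}$: lowering the conductivity in a table directly increases the heating rate, which is the coupling probed in Sec. IV.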

### A. Magnetically launched flyer-plate simulations

To assess the significance of the table generation framework, we select a platform that is particularly sensitive to the values of the DC electrical conductivity: magnetically launched flyer plates.^{58,59} We simulate a beryllium flyer plate with the one-dimensional MHD code described in Sec. IV, which eliminates potential biasing of simulation results due to sensitivities from geometry or numerical choices that are important beyond a single spatial dimension.

A schematic of a magnetically launched flyer plate is shown in Fig. 4; additional schematics are provided in Refs. 58–60. The flyer plate is driven by a current density $J$ that moves from the anode to the cathode along a short circuit. The current density $J$ produces a magnetic field $B$, and the flyer plate accelerates due to the $J \times B$ force. The side of the flyer plate that the current density initially flows through sets the magnetic field boundary condition at that side of the flyer plate. The other side of the flyer plate expands freely.

Here, we consider two distinct current pulses: a short pulse and a long pulse. These pulses mimic two cases of the pulses used at the Z machine at Sandia National Laboratories. The short pulse has a rise time of ~100 ns, and the long pulse has a rise time of ~400 ns. Both pulses are displayed in the inset of Fig. 5. The short pulse resembles the drive current used in experimental platforms relevant to inertial confinement fusion. The long pulse is regularly employed in experiments that isentropically ramp a flyer plate into a material to measure its EOS properties. For the short pulse, the flyer plate is 400 $\mu$m wide; for the long pulse, the flyer plate is 200 $\mu$m wide. For the short pulse, the flyer is 7 mm away from $r = 0$ mm, which is roughly the same as for the return-current can diagnostics in inertial confinement fusion experiments.^{61} For the long pulse, the flyer is 500 mm away from $r = 0$ mm.
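For intuition about the magnetic drive, a back-of-the-envelope estimate of the field, magnetic pressure, and initial flyer acceleration can be made from the short-pulse geometry above. The 20 MA peak current is an assumed, illustrative value (the actual drive currents are not specified here):

```python
import math

MU_0 = 4e-7 * math.pi            # permeability of free space (H/m)

I_peak = 20e6                    # assumed peak current (A), illustrative only
r = 7e-3                         # flyer distance from the axis (m), short pulse
d = 400e-6                       # flyer thickness (m), short pulse
rho = 1850.0                     # beryllium mass density (kg/m^3)

B = MU_0 * I_peak / (2 * math.pi * r)   # azimuthal field at the flyer (T)
P_mag = B**2 / (2 * MU_0)               # magnetic drive pressure (Pa)
accel = P_mag / (rho * d)               # initial acceleration estimate (m/s^2)

print(f"B = {B:.0f} T, P = {P_mag / 1e9:.0f} GPa, a = {accel:.2e} m/s^2")
```

Even this crude estimate gives drive pressures of order 100 GPa, well above beryllium's strength, which is why the $J \times B$ force launches a strong stress wave through the plate.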

As discussed in Ref. 58, the $J\xd7B$ force produces a stress wave that compresses and accelerates the flyer plate. Behind the stress wave, the magnetic field diffuses into the flyer plate. Eventually, the stress wave, and later the magnetic field, reaches the free surface of the flyer plate. Depending on the form of the current pulse (e.g., the extent of the rise time and the peak current), the velocity of the free surface may gradually or rapidly increase. If the free surface accelerates rapidly, then the surface experiences a shock due to the stress wave. This phenomenon is shown in Fig. 5 for the short pulse case. The *shock time* is the time when the velocity of the flyer plate is roughly discontinuous. Another prominent feature of flyer-plate evolution is the time that the magnetic field reaches the free surface. We refer to this time as the magnetic-field burnthrough time or simply the *burnthrough time*. The burnthrough time in Fig. 5 appears as a bump in the velocity profile after the shock time. This bump is caused by the sudden increase in temperature and decrease in density of the flyer plate as the material is melted. We note that these features are seen in experiments^{58,59} and constrain conductivity tables through an iterative procedure.

The shock wave that precedes the magnetic field arrival at the free surface could melt the flyer plate. In this case, we will not observe a distinct bump in the velocity of the flyer plate, making the bump a poor indicator of magnetic burnthrough. Instead, we calculate the Ohmic heating term at the free surface: at some point in time, this term is maximal and then decreases, and the time at which it is maximal provides another estimate of the burnthrough time. As there is no distinct bump in the velocity profile for the short-pulse current profile, we compute the burnthrough time from the maximum of the Ohmic heating term. For the long-pulse current profile, the bump is apparent in the velocity profile, and it is sufficient to use the time at which the temporal gradient of the temperature at the free surface is maximal to indicate the burnthrough time.
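Both burnthrough estimators reduce to an argmax over a free-surface time series. The sketch below uses synthetic traces (not simulation output) that peak and steepen near a nominal 176 ns burnthrough time:

```python
import numpy as np

t = np.linspace(150.0, 250.0, 2001)                      # time (ns)

# Synthetic free-surface traces, illustrative shapes only.
ohmic = np.exp(-((t - 176.0) / 4.0) ** 2)                # Ohmic heating term
temperature = 1.0 / (1.0 + np.exp(-(t - 176.0) / 2.0))   # smooth rise

# Short pulse: time of maximal Ohmic heating at the free surface.
t_burn_short = t[np.argmax(ohmic)]

# Long pulse: time of maximal temporal gradient of the temperature.
t_burn_long = t[np.argmax(np.gradient(temperature, t))]

assert abs(t_burn_short - 176.0) < 0.1
assert abs(t_burn_long - 176.0) < 0.1
```

In practice the same two argmax reductions are applied to the simulated free-surface histories from each table in the ensemble, giving one burnthrough time per table.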

### B. Uncertainty quantification with uniform dataset uncertainty

We use the table generation framework detailed in Sec. III to produce the DC electrical conductivity tables for the MHD code discussed in Sec. IV; we output velocity profiles of the magnetically launched flyer plate as a function of time and compare the simulated shock and burnthrough times.

Our analysis begins with assigning an uncertainty to the DC conductivity dataset that is used to fit the LMD model with the Bayesian inference approach of Sec. III. We perform a number of calculations by varying the uncertainty assigned to the DC electrical conductivity dataset, selecting values of 5, 10, 20, 30, 40, and 50 percent for our study. We assume that all data points in a dataset have the same uncertainty but later relax this assumption in Sec. IV C. It is important to emphasize that, with the framework developed here, an uncertainty can be assigned to each data point individually when such information is available. The case of 10% dataset uncertainty is shown in Fig. 1.
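Assigning a uniform fractional uncertainty corresponds to a diagonal covariance matrix in the likelihood of Sec. III; per-point uncertainties are a one-line generalization. The data values and regime split below are illustrative:

```python
import numpy as np

sigma_data = np.array([2.5e7, 1.8e7, 9.0e6, 4.0e6])  # conductivity data (S/m)

# Uniform case: every point gets the same fractional uncertainty.
frac = 0.10
cov_uniform = np.diag((frac * sigma_data) ** 2)

# Non-uniform case: e.g., a tighter 10% on the first two (notionally
# solid-regime) points and a looser 50% on the rest. The regime split
# here is illustrative, not the dataset's actual partition.
frac_per_point = np.array([0.10, 0.10, 0.50, 0.50])
cov_custom = np.diag((frac_per_point * sigma_data) ** 2)

assert np.allclose(np.diag(cov_uniform), (0.10 * sigma_data) ** 2)
```

Off-diagonal entries could likewise encode correlations between data points from a common source, as noted in Sec. III.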

Once a table is generated, it is supplied to the MHD code. The MHD code used here is computationally rapid, allowing for an ensemble of calculations. Because of this, we propagate 1000 tables for uncertainty quantification based upon simulations using both the short and long pulses for all variations in dataset uncertainty. The results of the simulations are summarized in Figs. 6–8 and Tables I and II. The uncertainty in the shock time (for the short pulse) and the burnthrough time grows with the uncertainty in the DC electrical conductivity datasets used to produce the tables. We observe an increase of approximately 3 ns in the mean of the burnthrough time for the short pulse case, which is a significant amount relative to the sub-nanosecond precision of the experimental diagnostics.

| % dataset uncertainty | Shock time (ns), 1 table (median) | Shock time (ns), 10 tables (optimal) | Shock time (ns), 10 tables (random) | Shock time (ns), 1000 tables (random) | Burnthrough time (ns), 1 table (median) | Burnthrough time (ns), 10 tables (optimal) | Burnthrough time (ns), 10 tables (random) | Burnthrough time (ns), 1000 tables (random) |
|---|---|---|---|---|---|---|---|---|
| 5 | 153.79 | 154 ± 0.06 | 154 ± 0.05 | 154 ± 0.06 | 176.06 | 176 ± 0.56 | 176 ± 0.39 | 176 ± 0.38 |
| 10 | 153.80 | 154 ± 0.12 | 154 ± 0.07 | 154 ± 0.11 | 176.10 | 176 ± 0.61 | 176 ± 0.53 | 176 ± 0.76 |
| 20 | 153.80 | 154 ± 0.28 | 154 ± 0.11 | 154 ± 0.24 | 175.98 | 175 ± 3.31 | 176 ± 2.05 | 176 ± 1.55 |
| 30 | 153.89 | 154 ± 0.55 | 154 ± 0.31 | 154 ± 0.37 | 176.57 | 177 ± 3.16 | 176 ± 2.21 | 177 ± 2.42 |
| 40 | 154.01 | 154 ± 0.55 | 154 ± 0.45 | 154 ± 0.46 | 177.34 | 180 ± 5.57 | 176 ± 2.92 | 178 ± 3.33 |
| 50 | 154.08 | 154 ± 0.76 | 154 ± 0.50 | 154 ± 0.51 | 178.26 | 178 ± 5.42 | 176 ± 2.81 | 179 ± 3.60 |


| % dataset uncertainty | Burnthrough time (ns), 1 table (median) | Burnthrough time (ns), 10 tables (optimal) | Burnthrough time (ns), 10 tables (random) | Burnthrough time (ns), 1000 tables (random) |
|---|---|---|---|---|
| 5 | 192.37 | 192 ± 0.26 | 192 ± 0.27 | 192 ± 0.26 |
| 10 | 192.39 | 192 ± 0.63 | 193 ± 0.38 | 192 ± 0.53 |
| 20 | 192.39 | 193 ± 1.33 | 193 ± 0.83 | 193 ± 1.10 |
| 30 | 192.60 | 193 ± 2.10 | 193 ± 1.22 | 193 ± 1.78 |
| 40 | 192.84 | 193 ± 1.82 | 193 ± 2.97 | 193 ± 2.17 |
| 50 | 193.06 | 195 ± 4.07 | 192 ± 3.41 | 193 ± 2.77 |


For computationally expensive codes, thousands of simulations are intractable for uncertainty quantification studies. Therefore, we employ the optimal sampling approach discussed in Sec. III A to sample the posterior distribution and generate ten tables. For comparison, we also randomly sample the posterior to generate ten tables. We found that the optimal sampling approach produced MHD simulation outputs with a larger uncertainty than random sampling did (see Tables I and II and Fig. 9). This wider spread of outputs is a desirable trait, ensuring that the minimal set of tables is representative of the true uncertainty. Note that for the long-pulse case with 40% dataset uncertainty, the ten MHD simulations that used a random sampling approach had a larger uncertainty in the burnthrough time than those using the optimal sampling approach. In this case, the choice of random samples from the posterior distribution produced conductivity tables that were more distinct near the melting transition at solid density than those from the optimal sampling approach.

It is noteworthy that the burnthrough time is more sensitive to variations in the DC electrical conductivity than the shock time. We hypothesize that the shock time is more sensitive to variations in the EOS (i.e., pressure, temperature, and energy) than the variations in the DC electrical conductivity, which will be a topic of future work. In Sec. IV C, we assess the significance of a particular physical regime (e.g., solid, liquid, and plasma) within the table on the shock and burnthrough times.

### C. Uncertainty quantification with non-uniform dataset uncertainty

As previously mentioned, the uncertainty assigned to each data point used for table generation need not be the same. Here, we relax the assumption that all data are equally uncertain by considering two cases. In the first case, we assign a 50% uncertainty to conductivity data in the solid regime and a 10% uncertainty to data in the liquid and plasma regimes. In the second case, we reverse these assignments such that the conductivity data in the solid regime have a 10% uncertainty and the data in the liquid and plasma regimes have a 50% uncertainty. For both cases, we use the short current pulse as the drive-side boundary condition (see Fig. 5). Selectively assigning dataset uncertainties allows us to determine which states of matter (i.e., which regions of the transport-coefficient table) most significantly alter the results of the MHD simulation. Figure 10 displays the results of this analysis based on 1000 MHD simulations. The MHD simulation results are compared to a reference case of a table generated from the median parameters of a posterior distribution obtained by assigning a uniform 10% uncertainty to all data in the dataset. We found that the flyer-plate velocity profile is more sensitive to uncertainties in the DC conductivity in the liquid and plasma regimes. Our finding is consistent with a previous study,^{58} which found that the flyer-plate velocity was highly sensitive to the conductivity near the melt transition. The analysis in Ref. 58 arrived at that conclusion by multiplying discrete regions of the conductivity table by a scaling factor. Our framework improves upon that analysis by eliminating the unphysical discontinuities that such multipliers introduce into the table. Moreover, the number of MHD calculations needed to assess the sensitivity is drastically reduced from many thousands to hundreds (and to tens with the optimal sampling approach).
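To make the per-regime error assignment concrete, a Gaussian log-likelihood with per-point relative uncertainties might look like the sketch below. The Gaussian form and the variable names are our assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def log_likelihood(model_vals, data_vals, rel_sigma):
    """Gaussian log-likelihood with per-point relative uncertainties.

    rel_sigma: array of fractional uncertainties, e.g. 0.5 for points in
    the solid regime and 0.1 for points in the liquid/plasma regimes.
    Points with small rel_sigma constrain the fit tightly; points with
    large rel_sigma are allowed to deviate.
    """
    model_vals = np.asarray(model_vals, dtype=float)
    data_vals = np.asarray(data_vals, dtype=float)
    sigma = np.asarray(rel_sigma, dtype=float) * np.abs(data_vals)
    resid = (model_vals - data_vals) / sigma
    # standard Gaussian log-density, summed over all data points
    return float(-0.5 * np.sum(resid**2 + np.log(2.0 * np.pi * sigma**2)))
```

Swapping the two `rel_sigma` assignments between regimes, as in the two cases above, changes which regions of the table the posterior is free to vary.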

The results of this non-uniform error assignment highlight the regime where high-precision measurements should be focused to be maximally impactful: just after melt and in the liquid regime.

## V. CONCLUSIONS AND OUTLOOK

We have developed an automated framework to generate tables of transport coefficients for uncertainty quantification of magnetohydrodynamic codes. Our framework, named ETHOS, produces ensembles of tables that are consistent with the statistics of the multi-fidelity dataset used to generate them, avoiding the time-consuming and potentially inconsistent process of manually tuning model parameters. The automated framework produces any number of tables that optimally span the posterior distribution, enabling uncertainty quantification for computationally expensive codes, and highlights the regions of the dataset where additional data would reduce table uncertainties.

Using the exemplar problem of a magnetically launched flyer plate, we have carried out a systematic uncertainty quantification study using a one-dimensional Lagrangian resistive-diffusion magnetohydrodynamic code. We found that a modest uncertainty of 20% in the values of the transport-coefficient dataset used to produce the tables corresponds to uncertainties in the results from magnetohydrodynamic simulations that are larger than the sub-nanosecond resolution of experimental diagnostics. Further increasing the dataset uncertainty to 50% resulted in uncertainties that are two to three times larger than the experimental diagnostics; experimental data for the cases discussed here would further constrain the tables generated in this work.

Within the framework, we have addressed a common issue with computationally expensive codes: the ability to carry out uncertainty quantification with a small number of simulations (on the order of ten). Through an optimal selection process, we have produced tables that widely span the posterior distribution, resulting in an accurate representation of the uncertainty obtained from a large number of simulations (i.e., a Monte Carlo-style sensitivity analysis) at a fraction of the computational cost.

The framework developed here provides a set of steps that are readily generalized to other datasets and applications, for example, assessing the sensitivity of magnetohydrodynamic codes to uncertainties in the equations of state—which is a topic of future work. Future simulations will explore more integrated studies and the effect of magnetic fields on transport quantities (e.g., the thermal conductivity). Moreover, future work will also include the development of tools to perform inference of transport coefficients with magnetohydrodynamic simulations and experimental data using a surrogate model.

While we have ensured that the tables generated in this work were sufficiently dense, the ability to carry out inline calculations of transport coefficients within a magnetohydrodynamic code would eliminate the uncertainty incurred through table interpolation. Rapid analytic expressions for plasma transport coefficients exist, but many contain tunable parameters (like the LMD model) or are accurate only in limiting physical regimes (like the hot-dilute plasma regime). For the DC electrical conductivity, there remains a dearth of experimental and simulation data in the cold and warm dense matter regimes, namely, in the solid at densities rarefied and compressed relative to ambient conditions, and near the melt transition. Such data would further constrain the tables generated with the framework developed here, resulting in more predictive magnetohydrodynamic simulations of plasmas and diverting attention to other potential sources of uncertainty.

## ACKNOWLEDGMENTS

The authors would like to acknowledge John Carpenter, Alina Kononov, Brian Robinson, Nikita Chaturvedi, and Rebekah White for useful discussions. We would also like to thank Luke Shulenburger and Mary Ann Sweeney for careful proofreading of the manuscript and Jeremy Boerner for providing the long current pulse. L.J.S. was supported by the Laboratory Directed Research and Development program (Project No. 230332) at Sandia National Laboratories. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

**Lucas J. Stanek:** Conceptualization (lead); Data curation (lead); Formal analysis (lead); Funding acquisition (lead); Investigation (lead); Methodology (lead); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (lead). **William E. Lewis:** Conceptualization (supporting); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Supervision (equal); Writing – review & editing (supporting). **Kyle R. Cochrane:** Conceptualization (supporting); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Supervision (equal); Writing – review & editing (supporting). **Christopher A. Jennings:** Conceptualization (supporting); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Supervision (equal); Writing – review & editing (supporting). **Michael P. Desjarlais:** Data curation (equal); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Software (equal); Supervision (equal); Writing – review & editing (supporting). **Stephanie B. Hansen:** Conceptualization (equal); Data curation (equal); Formal analysis (supporting); Investigation (supporting); Methodology (supporting); Software (equal); Supervision (equal); Writing – review & editing (supporting).

## DATA AVAILABILITY

The data that support the findings of this study are available within the article.

### APPENDIX: PARAMETERIZED MODEL FOR THE DC ELECTRICAL CONDUCTIVITY

The expressions that define the parameterized conductivity model have been published previously;^{10,19,62} we reproduce them here for completeness. Following Lee and More,^{10} the collision rate in the solid and liquid is written in terms of Fermi–Dirac integrals of order *j*. The Fermi–Dirac integrals depend on the chemical potential $\mu$, which can be obtained through the *inverse* Fermi–Dirac integral of order 1/2. In these expressions, *Z* is the nuclear charge, $\rho_i$ is the mass density, and *A* is the atomic weight.
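The determination of the chemical potential through the inverse Fermi–Dirac integral of order 1/2 can be sketched numerically. The quadrature-plus-bisection implementation below is our illustration (function names are hypothetical), exploiting the fact that $F_{1/2}$ is monotonically increasing in its argument:

```python
import numpy as np

def _trapz(y, x):
    """Composite trapezoidal rule (self-contained, no np.trapz)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def fd_integral_half(eta, x_max=200.0, n=20001):
    """Fermi-Dirac integral of order 1/2:
    F_{1/2}(eta) = int_0^inf sqrt(x) / (1 + exp(x - eta)) dx."""
    x = np.linspace(0.0, x_max, n)
    # clip the exponent to avoid floating-point overflow for x >> eta
    integrand = np.sqrt(x) / (1.0 + np.exp(np.clip(x - eta, -700.0, 700.0)))
    return _trapz(integrand, x)

def inverse_fd_half(target, lo=-50.0, hi=200.0, tol=1e-10):
    """Invert F_{1/2} by bisection: find eta with F_{1/2}(eta) = target.
    Bisection is safe because F_{1/2} is monotone increasing in eta."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if fd_integral_half(mid) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Here `eta` plays the role of $\mu/k_B T_e$; given the electron density one would compute the appropriate `target` value and recover the reduced chemical potential from `inverse_fd_half`.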

Extracting the melting temperature from an equation-of-state table is the preferred method when building a wide-ranging table of the electrical conductivity because it ensures consistency between the two tables. In this work, we extracted and then interpolated the melt curve from an equation of state, which eliminated the need to rely on Eqs. (A5)–(A7).

The collision rate in the plasma regime is given by the expression of Ref. 19.

To determine which collision rate to use in Eq. (1), one can simply take the maximum value of all the expressions for the collision rate, or use an appropriate norm to ensure a smooth transition as one expression becomes larger than the others.
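One smooth alternative to the hard maximum is a p-norm blend of the candidate rates. The sketch below is our illustration; the paper does not specify which norm it uses:

```python
import numpy as np

def blended_collision_rate(rates, p=8.0):
    """Smoothly select the dominant collision rate via a p-norm.

    rates: candidate collision-rate values (all positive, same units).
    The p-norm is differentiable everywhere, always >= max(rates) for
    positive inputs, and approaches max(rates) as p grows, avoiding the
    kinks a hard max() would leave in the conductivity table where two
    branches cross.
    """
    r = np.asarray(rates, dtype=float)
    return float(np.sum(r**p) ** (1.0 / p))
```

Larger `p` hugs the maximum more tightly at the cost of a sharper (though still smooth) crossover between branches.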

In the expressions of Ref. 19, *I* is the ionization potential, $R_a = (4\pi n_a/3)^{-1/3}$ is the Wigner–Seitz radius, where $n_a$ is the total ion and neutral number density, and $Z_{\mathrm{TF}}$ is the average-ionization state generated by a Thomas–Fermi model, readily computed with a fit provided by More.^{63} Equation (A26) provides a smooth transition from a single-ionization Saha model to the Thomas–Fermi model.

The equations presented in this appendix form the basis for a wide-ranging model of the DC electrical conductivity. With the Bayesian inference framework presented in Sec. III, the tunable parameters are fit to available data. In some cases, the functional form of the LMD model will be insufficient to capture trends in the data, and modifications may be necessary.

## REFERENCES
