Equations of state (EOSs) are typically represented as physics-informed models with tunable parameters that are adjusted to replicate calibration data as closely as possible. Uncertainty quantification (UQ) allows for the development of an ensemble of EOS parameters that are consistent with the calibration data instead of a single EOS. In this work, we perform UQ for the reactant and product EOSs for a variety of high explosives (HEs). In doing so, we demonstrate a strategy for dealing with heterogeneous (both experimental and calculated) data. We also use a statistical distance metric to quantify the differences between the various HEs using the UQ results.

Equations of state (EOSs) relate the conditions of a material to its properties. In hydrodynamic simulations of multi-scale phenomena, the EOSs of the constituent materials prescribe the material behavior as the simulation evolves. Therefore, the accuracy of the underlying EOSs is critical to the fidelity of the hydrodynamic simulations. Though the functional form of the EOS models is generally informed by physics, such models typically contain adjustable parameters that must be calibrated to data so that the EOS accurately reproduces the known (either via experiment or calculation) material properties. Historically, this tuning procedure has been done by hand, but in recent years, several automated optimization strategies have been developed to facilitate this process.1–7 

The hydrodynamic simulation of high explosive (HE) detonation requires a theoretical description of the transition from the unreacted condensed-phase state composed of larger molecules at ambient conditions to a high-temperature, high-pressure gaseous state composed of smaller molecules like water and carbon dioxide. A core requirement of such simulations is the need for both a reactant and product EOS. At present, single deterministic calibrations for HEs are standard in the field; however, increasing computational power warrants the use of an ensemble of statistically relevant calibrations that convey the inherent limitations in both data quantity and data certainty. The strategy outlined in this work, while applied only to ideal HE models, lays the foundation for incorporating the burn models required by non-ideal theories of detonation into an uncertainty-aware calibration framework.

In this work, we calibrate reactant and product EOS models to a heterogeneous dataset that includes experimental data. However, such measurements have uncertainties. Additionally, EOS models often are quasi-degenerate, meaning that multiple parameter sets can produce practically equivalent predictions for a finite-sized calibration dataset. Uncertainty quantification (UQ) provides a mechanism to produce an ensemble of EOS parameter sets that capture both of these effects. Then, in principle, this ensemble of EOSs can be transmitted downstream to result in a range of predictions for hydrodynamic simulations. Prior UQ work on EOSs and other hydrodynamic models underscores the growing importance of UQ in the field.8–21 

In general, fitting HE EOSs to data presents a variety of challenges. First is the high dimensionality of the problem involving a reactant and product EOS (and in later work, we will also include burn models). A large number of degrees of freedom can be problematic for standard UQ strategies such as Markov Chain Monte Carlo (MCMC). We ameliorate this problem by untangling the reactant and product EOSs, allowing us to treat them nearly independently. Second is that data are often conflicting, old, or very difficult to obtain. Conflicting data sources are natural to include in a UQ framework, which we show in this work. To deal with other data challenges, we use a composite dataset comprised of both experimental and computed calibration data, and we explore creative ways to cast the calibration data. For the reasons that we describe below, it is expedient to use a Bayesian approach for the reactants and a bootstrap approach for the products. We note that the reactant data are often reported as fit coefficients instead of as individual measurements. We discuss strategies for the inclusion of this type of data in a UQ analysis, with an emphasis on their uncertainties. We then use a machine-learning classifier to unite the reactant and product EOS parameters.

We perform UQ for various HE formulations. Some of these HEs might be expected to behave relatively similarly, e.g., the plastic-bonded explosives (PBXs) 9501, 9404, and 9011 all have the same HE material (1,3,5,7-tetranitro-1,3,5,7-tetrazocane or HMX) with different percentages of HE and different binders.22,23 We will show in this work that the ensembles of parameter combinations that result from UQ can allow for a more in-depth understanding of relative similarities and differences between various HE formulations by way of statistical distances.

The remainder of this work is organized as follows. In Sec. II, we describe the essential details of the UQ methodology: the model, the calibration data, and the UQ sampling strategies employed. We also cover the statistical distances used to quantify the differences between the various HEs explored in this work. In Sec. III, we employ the above strategy for the reactant EOSs first, and we compare differences in HEs as well as differences between different data sources. Then, we fold in the product calibration data to provide a complete comparison of reactant and product EOSs. Finally, in Sec. IV, we conclude and comment on future directions.

For the EOSs, we use the reactant Davis model and the product Davis model.24–26 These models are of Mie–Grüneisen form, meaning that the pressure is linear in the specific energy and non-linear in the volume,27 and they are built on a reference isentrope as26
$$P(e, \rho) = \tilde{P}(\rho) + \rho\,\Gamma(\rho)\left[e - \tilde{e}(\rho)\right], \qquad (1)$$
where ρ is the density, P̃(ρ) and ẽ(ρ) are the pressure and specific energy along the reference isentrope, respectively, and Γ(ρ) is the Grüneisen parameter that extends the EOS to off-isentrope conditions.24–26 In this work, the product reference isentrope is taken to be the release isentrope from the Chapman–Jouguet (CJ) state. The reference state for the product EOS has a temperature of 298.15 K, and the reference volume is taken from the reactant EOS. The reactant EOS has six adjustable parameters [A, B, C, C_v^{r0}, α_{ST}, Γ^{r0}]; the product EOS has seven adjustable parameters [a, k, v_c, p_c, n, b, C_v^p].
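As a concrete illustration of Eq. (1), a minimal Python sketch of the Mie–Grüneisen pressure evaluation is given below; the reference curves and function names are placeholders rather than the actual Davis functional forms.

```python
def mie_gruneisen_pressure(rho, e, p_ref, e_ref, gamma):
    """Evaluate Eq. (1): P(e, rho) = P_ref(rho) + rho * Gamma(rho) * [e - e_ref(rho)].

    p_ref, e_ref, and gamma are callables returning the reference-isentrope
    pressure, the reference-isentrope specific energy, and the Grueneisen
    parameter at a given density (illustrative stand-ins for the Davis forms).
    """
    return p_ref(rho) + rho * gamma(rho) * (e - e_ref(rho))


# Toy usage with constant placeholder reference curves (not the Davis forms):
P = mie_gruneisen_pressure(
    rho=1.9, e=1.0e-3,
    p_ref=lambda r: 0.0,
    e_ref=lambda r: 0.0,
    gamma=lambda r: 0.7,
)
```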

The Davis EOS models are popular choices for modeling the behavior of HEs; however, they have known limitations. For example, the reactant EOS does not obey the Dulong–Petit limit for the isochoric heat capacity C v as a function of T. In the UQ parlance, such systematic defects in the physics model are referred to as model form uncertainty. We will not explicitly consider model form uncertainty in this work.

1. Reactants

For the HE reactants, we use two HE data compendiums, one from Los Alamos Scientific Laboratory [Gibbs & Popolato 1980, Ref. 22] and the other from Lawrence Livermore Laboratory [Dobratz 1972, Ref. 23]. These sources were selected so that the calibration data would be maximally internally consistent within a single data source. We want comparisons between HEs to reflect the intrinsic differences between the HEs and not discrepant experimental equipment or processing of the data. While both of these data sources sometimes cite external data, we can (perhaps naïvely) hope that the data curation choices were made in some sort of internally consistent fashion. For the purposes of this study, we are essentially outsourcing the data curation step to the two compendiums of HE data as these sources are fairly standard references in HE research. However, we note that data curation is, in general, a part of the UQ process, and the UQ field as a whole would stand to benefit from increased collaboration between physical scientists and statisticians in this area.

For the reactant calibration data, we use isobaric heat capacity (C_p vs T), isobaric linear thermal expansion (α vs T), and shock Hugoniot (shock speed U_s vs particle speed U_p) data. We selected the eight HEs where all of these data are available in both Gibbs & Popolato 1980 and Dobratz 1972: PBX 9501, PBX 9404, PBX 9011, 2,4,6-trinitrotoluene (TNT), pentaerythritol tetranitrate (PETN), Baratol, Composition B-3 (Comp B), and the extrudable explosive XTX 8003.

Both these data sources report the experimental data as polynomial fit coefficients instead of raw data. For example, the isobaric heat capacity of solid TNT is reported as C_p [cal/g/°C] = 0.254 + 7.5 × 10⁻⁴ T [°C] over the range T = [17, 67] °C in Gibbs & Popolato 1980. It is tempting to use the preceding equation to generate synthetic point-wise data over the relevant temperature range and input these data points into the UQ process. However, this approach is problematic when we do not have information about the magnitude of the uncertainty in the underlying data. For reasons discussed below, we will not infer the data uncertainties in this work, and so we are required to make reasonable, non-arbitrary choices for these values.

There are some reported uncertainties for the fit parameters in Ref. 22 that tend to be about 10% of the value of the coefficient. Therefore, we can use the fit coefficients themselves as data and make informed choices about their uncertainty. For a given EOS parameter set, we generate, e.g., C p vs T data points over the relevant temperature range, fit these data points to a polynomial of the appropriate order (a linear fit in the TNT example above), and then compare the fit coefficients to the experimental data. We assume that the magnitude of the uncertainty in the fit coefficients is constant across all HEs. Since our assumed uncertainty in the fit coefficients is approximate, we perform the UQ with two different sets of uncertainties, corresponding to an uncertainty of about 5% and 10%, respectively, of the average value for the given coefficient.
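To make this procedure concrete, the sketch below generates C_p vs T points from a trial parameter set, fits them to the reported polynomial order, and compares the resulting coefficients to the published TNT values; the `cp_model` callable and the uncertainty choice are illustrative stand-ins rather than the actual implementation.

```python
import numpy as np

def model_fit_coefficients(cp_model, theta, T_range=(17.0, 67.0), order=1, n_pts=50):
    """Generate C_p(T) predictions over the reported temperature range, fit them
    to a polynomial of the reported order, and return the coefficients
    (constant term first).  `cp_model(T, theta)` is an illustrative stand-in
    for the Davis reactant heat-capacity prediction."""
    T = np.linspace(*T_range, n_pts)
    cp = cp_model(T, theta)
    return np.polyfit(T, cp, order)[::-1]  # polyfit returns highest order first

# Reported TNT coefficients (Gibbs & Popolato 1980) and an illustrative ~5% uncertainty
# (here taken relative to the TNT values for simplicity):
y_exp = np.array([0.254, 7.5e-4])   # C_p = 0.254 + 7.5e-4 * T  [cal/g/degC]
sigma = 0.05 * np.abs(y_exp)
# standardized residuals entering the likelihood:
# resid = (model_fit_coefficients(cp_model, theta) - y_exp) / sigma
```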

Additionally, the above approach circumvents the need to arbitrarily select how many synthetic data points are generated from the polynomial fit coefficients for a given experiment. This is important because, when model form uncertainty is omitted, the posterior variance in the inferred physical parameters will continue to decrease with the number of points sampled.12 However, since the calibration data are a polynomial fit, altering the number of synthetic points does not increase the information content and, therefore, should not affect the variance of the physical parameters. Use of the fit coefficients as data avoids this problem.

We chose to use a constant uncertainty for each HE so that the differences between the resulting posterior distributions would reflect inherent differences between the HEs and not differences in measurement precision. In principle, we could have inferred the magnitude of the constant error in the Bayesian approach described below. However, this would require performing the UQ on all HEs simultaneously, vastly increasing the dimensionality of the parameter space. For fixed uncertainty, we can perform the UQ for each HE independently, limiting the number of variables for each UQ problem. We explore the effect of this approximation by varying the uncertainty in Sec. III.

2. Products

For the product calibration data, we chose overdriven shock Hugoniot data, release isentrope data, and the following properties of the CJ state: detonation velocity (D_CJ), pressure (P_CJ), and temperature (T_CJ). These data are not all available in the two compendiums used for the reactants, and data involving temperature are very difficult to evaluate experimentally. Therefore, we used thermochemical predictions of the above quantities from the Los Alamos National Laboratory thermochemical code magpie28 as calibration data. For a given list of products, magpie minimizes the free energy for a mixture of atoms whose stoichiometry is derived from the reactant HE. The product list used for this work includes H2O, H2, O2, N2, NO, CO, CO2, NH3, HCOOH, HNCO, CH4, C2H2, C2H6, C2N2, NO2, N2O, diamond, graphite, and liquid carbon. The free energies of the constituent products are described by the Ree-Ross liquid perturbation theory—a molecular interaction model that approximates molecules as isotropic and uses a pairwise potential to describe particle interactions.29 The molecular interactions are described by ϕ(r) = ϵ U(r/r*), where ϵ and r* are the characteristic energy and length scales, respectively, and U is a universal function. There are several sets of Ree-Ross parameters available in the literature, differing in the choice of universal function, U, and the particular calibration of the parameters ϵ and r*.

We use magpie to generate P–V and T–V data along both the shock Hugoniot and the release isentrope, in addition to the desired properties of the CJ state. In order to perform UQ, we need to include the uncertainty on the data in some way. It would, in principle, be possible to assign “error bars” to the calculated data; however, similar to the reactants above, it is unclear what the uncertainties should be. We can re-cast the problem more appropriately as follows. The magpie calculations can be seen as a (complicated and non-analytical) functional form to which the Ree-Ross parameters are input. Therefore, the core data for the product calibration are the Ree-Ross parameters. The uncertainty on the data is derived from the multiple sets of Ree-Ross parameters available in the literature30–34 as described in Sec. II C 2.

1. Reactants

UQ strategies are generally aimed at characterizing the posterior distribution of the model parameters (θ), given any prior beliefs about θ and the calibration dataset (D). Using Bayes’ theorem, we can express this probability distribution as
$$P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}, \qquad (2)$$
where P(θ) describes the prior beliefs (in the absence of the calibration data) about the model parameters, P(D|θ) is the likelihood that a set of parameters θ will generate the data D, and P(D) is the probability of the data, termed the evidence.35 Note that the evidence is θ-independent and can be discarded in the Markov Chain Monte Carlo (MCMC) sampling described below.

The prior beliefs are typically simple analytic expressions. For this work, we use relatively broad Gaussian functions for the priors on A and B. Expert knowledge tells us that reasonable values for the sound speed (A) and its derivative as U_p goes to zero (B) are in the ballpark of 2 or 3 for many HEs; therefore, we use a Gaussian with a mean of 2 and a standard deviation of 0.5 for the priors on those two parameters to lightly constrain θ to physically reasonable values. For the other parameters, we use flat priors. The priors for all parameters have a minimum value of 0.0 and a maximum value of 10.
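A minimal sketch of these prior choices follows; the parameter ordering is an assumption made for illustration only.

```python
import numpy as np

# Illustrative parameter ordering: theta = [A, B, C, Cv_r0, alpha_ST, Gamma_r0].
LOWER, UPPER = 0.0, 10.0

def log_prior(theta):
    """Gaussian priors (mean 2, standard deviation 0.5) on A and B; flat priors
    on the remaining parameters; all parameters bounded to [0, 10]."""
    theta = np.asarray(theta)
    if np.any(theta < LOWER) or np.any(theta > UPPER):
        return -np.inf
    A, B = theta[0], theta[1]
    return -0.5 * (((A - 2.0) / 0.5) ** 2 + ((B - 2.0) / 0.5) ** 2)
```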

Evaluation of the likelihood term, P ( D | θ ), requires a generative model to predict the calibration data for a given parameter combination θ. The generative model encompasses both the physics model prediction step and the application of the uncertainty associated with the experimental data. In this work, we assume that the experimental data, i.e., the polynomial fit coefficients, are corrupted by Gaussian-distributed noise only (no systematic biases), and that the physics model is not systematically deficient in its prediction of the fit coefficients.

Under these assumptions, the equation for the likelihood of a single fit coefficient y_i is
$$P(y_i \mid \theta) = \frac{1}{\sigma_i \sqrt{2\pi}} \exp\!\left[-\frac{\left(y_i - \eta_i(\theta)\right)^2}{2\sigma_i^2}\right], \qquad (3)$$
where η_i(θ) is the polynomial fit coefficient calculated from the physics model predictions for a given parameter combination θ, y_i is the experimentally reported fit coefficient, and σ_i is the uncertainty on y_i. We then assume that the fit coefficients are mutually independent, making the total likelihood the product of the individual likelihoods. Written as a weighted log-likelihood,
$$\ln P(D \mid \theta) = \sum_{i=1}^{n} w_i \ln P(y_i \mid \theta), \qquad (4)$$
where n is the number of fit coefficients over all experimental datasets and w_i is a weight that ensures that all experiment types are weighted equally, regardless of how many distinct measurements of, e.g., the Hugoniot, are present in the dataset. That is, w_i = 1/n_exp, where n_exp is the number of reported sets of fit coefficients for a given type of experiment.
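A short sketch of the weighted Gaussian log-likelihood of Eqs. (3) and (4) is given below; `predict_coeffs` is an assumed stand-in for the Davis-model prediction plus polynomial-fitting step.

```python
import numpy as np

def log_likelihood(theta, y, sigma, weights, predict_coeffs):
    """Weighted Gaussian log-likelihood of Eqs. (3) and (4).

    y, sigma, weights : arrays over all reported fit coefficients
    predict_coeffs    : callable returning the model-predicted coefficients
                        eta_i(theta); an illustrative stand-in for the physics
                        prediction and polynomial fitting step."""
    eta = predict_coeffs(theta)
    logp = -0.5 * ((y - eta) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    return np.sum(weights * logp)
```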

We characterize the above posterior distribution via MCMC sampling, using the emcee36 and ptemcee37 codes as described in Ref. 8. The MCMC sampling proposes trial θ parameter combinations and accepts or rejects these moves according to the Metropolis–Hastings criterion. All MCMC simulations were run with 48 walkers. In brief, we use ptemcee to perform parallel tempering MCMC (PT-MCMC) simulations for 1000 steps at 8 temperatures. Then, we fit the final 500 steps of the lowest-temperature walkers to a multi-variate Gaussian that we re-sample to get a good starting point for the single-temperature emcee calculations. The above intermediate fitting step ensures that all walkers are initialized in a distribution that is both smooth and close to the true posterior distribution, improving the equilibration of the walkers in the MCMC simulation. From the single-temperature MCMC run, the first 500 of 2500 total steps were discarded, which was more than sufficient to reach equilibrium. We used the LANL UQ code UNIT10 to organize the data and interface with emcee and ptemcee.
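A condensed sketch of this two-stage sampling workflow is shown below, assuming the `log_prior` and `log_likelihood` functions (and their data arrays) from the sketches above; the production calculations were orchestrated through UNIT, so the details here are illustrative only.

```python
import numpy as np
import emcee
import ptemcee
from scipy.stats import multivariate_normal

ndim, nwalkers, ntemps = 6, 48, 8
log_like = lambda theta: log_likelihood(theta, y, sigma, weights, predict_coeffs)

# Stage 1: parallel-tempering MCMC, 1000 steps at 8 temperatures.
pt = ptemcee.Sampler(nwalkers, ndim, logl=log_like, logp=log_prior, ntemps=ntemps)
p0 = np.random.uniform(0.0, 10.0, size=(ntemps, nwalkers, ndim))
pt.run_mcmc(p0, 1000)

# Stage 2: fit the final 500 steps of the lowest-temperature chain to a
# multivariate Gaussian and re-sample it to initialize the emcee walkers.
cold = pt.chain[0, :, -500:, :].reshape(-1, ndim)
start = multivariate_normal(cold.mean(axis=0), np.cov(cold, rowvar=False)).rvs(nwalkers)

# Stage 3: single-temperature run; discard the first 500 of 2500 steps as burn-in.
sampler = emcee.EnsembleSampler(nwalkers, ndim, lambda t: log_prior(t) + log_like(t))
sampler.run_mcmc(start, 2500)
posterior_samples = sampler.get_chain(discard=500, flat=True)
```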

2. Products

Physically, the product EOS would seem to be inextricably linked to the reactant EOS. However, for our choice of calibration data and judicious definition of the reference state, we can nearly untangle the reactant and product EOSs. If all of the product calculations in the calibration data set are initiated from a reference state where T = 298.15 K (the reference temperature chosen in the Davis reactant model) and the volume is the reference volume, then the reactant EOS has an internal energy and pressure of zero by construction, regardless of the six adjustable input parameters to the reactant EOS. We then only must ensure that the reactant and product Hugoniot are physically reasonable (i.e., that detonation causes expansion); see Sec. II D.

Since we define the product calibration data to be the Ree-Ross parameters, MCMC sampling would require generating physics predictions for a given trial θ and then fitting those predictions to an optimal set of Ree-Ross parameters that best describe those data. Unlike the reactants, this fitting step is incredibly challenging because the “functional form” encoded by magpie is a constrained, free energy minimization requiring optimization of approximately 50 degrees of freedom (there are either three or four adjustable parameters per molecule in the product library). Another complicating factor is that there are a relatively small number of Ree-Ross parameter sets to use as data, and the Ree-Ross parameters within a given set will have non-trivial correlations with each other. As a result, it is not clear how to express the uncertainty in a way compatible with Eq. (3).

Bootstrapping has been referred to as the “poor man’s” posterior and, thus, can provide an estimate for Eq. (2) in a totally different fashion than that outlined in Sec. II C 1.38 The “poor man’s” descriptor refers to the ease with which bootstrapping can be implemented as compared with parametric approaches such as Bayesian MCMC. In Bayesian approaches, we are required to make assumptions about the nature of the uncertainty present in the calibration data, i.e., we are assuming a statistical distribution for the underlying process that generated the calibration data. By contrast, bootstrapping does not require any such assumptions. We only assume that there is some underlying distribution (about which we are not required to have knowledge) that has been sampled to generate the calibration data (here, the five sets of Ree-Ross parameters). Bootstrapping then enables estimation of the posterior distribution directly from these samples. The methodology approximates a Bayesian analysis in the absence of prior beliefs about θ.38 

The first step in bootstrapping is sampling with replacement of the calibration data. In this case, we use five different sets of Ree-Ross parameters available in magpie (labeled as full, JCZS, RUS, REE, and CARTE in magpie). Having only five datasets to choose from can result in artifacts in the resulting bootstrapped distributions. Therefore, we use Bayesian bootstrapping to smooth out the resulting distribution. That is, instead of sampling with replacement (equivalent to weighting the sampling with multinomial coefficient weights), the samples are weighted by pulling samples from the uniform Dirichlet distribution (i.e., all weights sum to 1).39 
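The weight-drawing step of the Bayesian bootstrap is only a few lines; a minimal sketch (with an illustrative number of realizations) follows.

```python
import numpy as np

rng = np.random.default_rng(0)
ree_ross_sets = ["full", "JCZS", "RUS", "REE", "CARTE"]  # the five parameter sets
n_realizations = 5000                                     # illustrative count

# Each row sums to 1 and defines one re-weighting of the five sets of magpie
# predictions, which is then fit to a Davis product parameter set (see the sketch below).
dirichlet_weights = rng.dirichlet(np.ones(len(ree_ross_sets)), size=n_realizations)
```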

For each of the five sets of Ree-Ross parameters, we can generate physics predictions for the quantities that we wish to include in the calibration (see Sec. II B 2). If a product species is not present in any of the above Ree-Ross parameter sets, we default to the “full” Ree-Ross set in magpie, which contains all product species. magpie has built-in capabilities to fit the Davis product parameters to multiple datasets with weights. By repeatedly sampling different weights for the physics predictions associated with the Ree-Ross parameters, we obtain different realizations of the Davis product parameters that estimate the desired posterior distributions. The exact details of the fitting procedure used to transform the bootstrapped data into an optimal set of Davis parameters are somewhat arbitrary. We optimize the Davis parameters such that they minimize the root mean squared error between the magpie thermochemical predictions and the Davis model physics predictions.
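The fitting step for one bootstrap realization can be sketched generically as a weighted least-squares problem; the snippet below uses scipy.optimize rather than magpie's built-in fitter, and all names are placeholders rather than the actual interface.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_rmse(davis_params, weights, datasets, davis_predict):
    """Weighted RMSE between the thermochemical (magpie) predictions and the
    Davis product-model predictions.  `datasets` is a list of (inputs, targets)
    pairs, one per Ree-Ross parameter set; `davis_predict` evaluates the Davis
    product model at the same state points.  All names are illustrative."""
    sq = [w * np.mean((y - davis_predict(davis_params, x)) ** 2)
          for w, (x, y) in zip(weights, datasets)]
    return np.sqrt(np.sum(sq))

# One realization: optimize the seven Davis product parameters for one draw of
# Dirichlet weights w (x0 is an illustrative starting guess).
# result = minimize(weighted_rmse, x0, args=(w, datasets, davis_predict), method="Nelder-Mead")
```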

Subsection II C outlined the UQ strategies for treating the reactants and products independently. However, we need to filter out parameter combinations that yield the unphysical result of densification upon detonation. That is, the reactant Hugoniot should be denser than the product Hugoniot at any pressure that is accessible to the reactant state. In order to do this filtering efficiently (as we must combinatorially check the samples comprising the reactant and product posterior distributions), we train a machine-learning (ML) classifier to predict whether a given combination of reactant and product EOSs is valid.

For each HE, we fit both the reactant MCMC samples and the bootstrapped product samples to kernel density estimations (KDEs). We then pull 50 000 samples from the two distributions, compute the Davis model predictions for the Hugoniot curves, and label them as either valid or invalid. While not required, modeling the posterior samples with KDEs allows us to sample the posterior more densely and concentrate the training data on the portions of phase space where we know that we need to make accurate predictions. Of these data, 70% formed the training set, 15% formed the validation set, and the remaining 15% were held out to form the test set.
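A sketch of this training-data generation is shown below; the posterior sample arrays and the validity check are dummy placeholders standing in for the actual MCMC/bootstrap samples and the Davis-model Hugoniot comparison.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.model_selection import train_test_split

# Dummy stand-ins for the reactant MCMC samples and product bootstrap samples.
rng = np.random.default_rng(0)
reactant_samples = rng.normal(size=(5000, 6))    # placeholder, shape (n_samples, 6)
product_samples = rng.normal(size=(5000, 7))     # placeholder, shape (n_samples, 7)

kde_reactant = gaussian_kde(reactant_samples.T)  # gaussian_kde expects (n_params, n_samples)
kde_product = gaussian_kde(product_samples.T)

n = 50_000
theta_r = kde_reactant.resample(n).T
theta_p = kde_product.resample(n).T
X = np.hstack([theta_r, theta_p])

def hugoniots_are_valid(theta_r, theta_p):
    # Placeholder for the Davis-model check that the product Hugoniot lies at
    # larger volume than the reactant Hugoniot at every accessible pressure.
    return theta_p[:, 0] > theta_r[:, 0]          # dummy criterion, not the real physics

y = hugoniots_are_valid(theta_r, theta_p).astype(int)

# 70% training, 15% validation, 15% held-out test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)
```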

scikit-learn40 was used for implementation of the ML model. Three different classifiers were originally tested on the Hugoniot data: Gaussian Naïve Bayes, Random Forest, and Gradient-Boosted Trees. Preliminary work indicated that the Gradient-Boosted Trees performed the best. Then, the hyperparameters41 for the Gradient-Boosted Trees algorithm were tuned by fivefold cross-validation. During training using the optimal hyperparameters, the validation set was used to determine the stopping criteria. Over the six HEs, the resulting models (one per HE) were 96.5%–98.7% accurate on the test sets containing 7500 samples that were completely held out of the ML training.
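A condensed scikit-learn sketch of the classifier tuning, building on the split produced in the previous sketch, is shown below; the hyperparameter grid is illustrative, while the tuned values quoted in the comment are those reported in Ref. 41.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Fivefold cross-validated hyperparameter search (grid values are illustrative).
grid = GridSearchCV(
    GradientBoostingClassifier(),
    param_grid={"n_estimators": [100, 200, 400],
                "max_depth": [3, 6, 9],
                "learning_rate": [0.1, 0.5, 1.0]},
    cv=5,
)
grid.fit(X_train, y_train)

clf = grid.best_estimator_   # Ref. 41 reports 200 estimators, depth 6, learning rate 0.5
print("validation accuracy:", clf.score(X_val, y_val))
print("held-out test accuracy:", clf.score(X_test, y_test))
```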

The above UQ analysis allows us to make pairwise statistical comparisons of different HE formulations on the basis of the two posterior distributions, p(θ) and q(θ). Statistical distances are used to compare probability distributions. For this work, we employ the Bhattacharyya distance, which has the advantage of being equivalent regardless of which HE posterior distribution is defined to be p(θ) and which is defined as q(θ).

The Bhattacharyya distance quantifies the amount of overlap between two probability distributions or populations. It is given by
$$BC(p, q) = \int \sqrt{p(\theta)\, q(\theta)}\, d\theta = \left\langle \sqrt{\frac{q(\theta)}{p(\theta)}} \right\rangle_{p(\theta)}, \qquad D_{BC}(p, q) = -\ln\!\big(BC(p, q)\big), \qquad (5)$$
for the continuous distributions p(θ) and q(θ). The Bhattacharyya coefficient (BC) ranges from 0 to 1, and the Bhattacharyya distance (D_BC) ranges from 0 to ∞. While the Bhattacharyya distance is symmetric and order-preserving, it is not a proper metric; that is, it does not obey the triangle inequality.

Operationally, D_BC between two HEs is computed by modeling p(θ) and q(θ) as KDEs and pulling 10 000 samples from p(θ) at which the right-hand side of Eq. (5) can be evaluated. We can estimate the uncertainty in the D_BC values (largely derived from the KDE modeling step) by switching the definitions of p(θ) and q(θ) in Eq. (5); perfect estimation of the posteriors in the infinite sample limit would yield the exact same value for D_BC for either definition of p(θ) and q(θ). The average discrepancy between the D_BC values is less than 5%, with nearly all distances having an uncertainty of less than 10%. We report the average value of D_BC derived from the above two calculations in the paper.
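A minimal sketch of this Monte Carlo estimate, assuming posterior sample arrays of shape (n_samples, n_params), follows.

```python
import numpy as np
from scipy.stats import gaussian_kde

def bhattacharyya_distance(samples_p, samples_q, n_draws=10_000):
    """Estimate D_BC of Eq. (5): model p and q with KDEs, draw from p, and
    average sqrt(q/p) over the draws."""
    kde_p = gaussian_kde(samples_p.T)
    kde_q = gaussian_kde(samples_q.T)
    theta = kde_p.resample(n_draws)               # shape (n_params, n_draws)
    bc = np.mean(np.sqrt(kde_q(theta) / kde_p(theta)))
    return -np.log(bc)

# Swapping the roles of p and q and averaging the two results gives both the
# reported value and an estimate of the KDE-induced uncertainty.
```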

Before showing UQ results, we recall that we have defined the polynomial fit coefficients reported in Refs. 22 and 23 as the reactant calibration data, and we have assumed that the uncertainties on the fit coefficients are uncorrelated and Gaussian with fixed standard deviations. Data are often reported in such a fashion, where the underlying measurements are distilled into fit coefficients with (what is presumably) a standard deviation associated with each fit coefficient. (See, for example, the linear fits to the Hugoniot measurements in Ref. 22.)

On the one hand, the assumptions described in the above formulation are simple and easily compatible with the log-likelihood expressed in Eq. (4). On the other hand, this formulation is unlikely to be consistent with the underlying data that generated the fit. Consider a linear fit to a monotonically increasing dataset in the first quadrant (e.g., all of the reactant data in this work). If the stochastic errors are such that the computed slope is steeper than the “true” slope, then the intercept will be lower than the “true” intercept and vice versa. Therefore, the slope and intercept should be anti-correlated for the data in this work.

If one knew the exact details of the fitting procedure (including where the data points were measured and how the measurement uncertainty varied over the data points), it would be possible to infer the co-variance terms. However, if one has sufficient information to definitively infer the co-variances, one probably has sufficient information to simply use the data directly. In the absence of the co-variance information, the most conservative (maximum entropy) choice is to assume that all co-variances are zero, knowing that there is some loss of information content relative to the original (unknown) measured data. A related approach to neglecting the co-variance terms is inflating experimental uncertainties; both approaches increase the uncertainty of the data. The latter is a zeroth-order strategy to ameliorate the effects of model discrepancy12,42,43 and thus could be justifiable in a qualitative sense for this work since we are not otherwise accounting for model form uncertainty.

With the above caveat in mind, we performed UQ for the reactant EOS for the eight different HEs listed in Sec. II B 1. The marginalized single variable distributions for the Davis adjustable parameters in the case of TNT are shown in Fig. 1 for the Gibbs & Popolato 1980 data (blue dotted-dashed lines), Dobratz 1972 (red dashed lines), and a combined dataset using both sources (purple solid lines). Some variables (e.g., C_v^{r0}) are essentially insensitive to the data source used. However, other variables differ more noticeably between the two data sources. C, in particular, varies quite strongly; C reflects the high-pressure regime of the EOS where there is typically little constraining data.

FIG. 1. Single variable distributions for the six Davis reactant model parameters in the case of TNT. The parameter and the associated units are denoted by the title, where a dash indicates a unitless variable.

We can then use these distributions to produce ensembles of predictions for various physical quantities of interest. In Fig. 2, we show the predictions from 480 samples for the calibration data, where the median prediction along with 95% confidence intervals (CIs) is compared to the experimental calibration data for TNT. Despite the differences in the single variable distributions, the ranges of predictions over the calibration dataset are relatively similar, with only small differences in the medians and CIs. However, predictions that are not included in the calibration dataset (e.g., downstream hydrodynamic calculations) may vary more strongly.8 While the calibration data are in reasonable qualitative agreement with the EOS model predictions, we observe some discrepancy between the two datasets, indicating that the physics model is unable to simultaneously reproduce all of the calibration data. This could be due to deficiencies in the physics model, systematic errors in the calibration data, or a combination of the two effects.

FIG. 2. Comparison of the medians and 95% confidence intervals of 480 MCMC samples from the posterior distribution to the experimentally reported calibration data in the case of TNT reactants for (a) the isobaric heat capacity data, (b) the thermal expansion coefficient data, and (c) the Hugoniot data. Note that in panel (c) both datasets report multiple Hugoniot measurements that are plotted as separate curves.

While one of the purposes of performing UQ is to pass these ensembles of EOSs in some form to hydrodynamic simulations to generate a range of outcomes consistent with the calibration data, another possible use for these ensembles is to answer statistical questions about similarities and differences between EOSs. In this work, we compute the Bhattacharyya distance to qualitatively compare the eight EOSs studied in this work.

In Fig. 3, we show the Bhattacharyya distances between various ensembles of reactant EOSs. The diagonal blocks indicate the differences between the Gibbs & Popolato 1980 UQ and the Dobratz 1972 UQ for the same HE. The off diagonal elements are the statistical differences between different HEs within the same dataset (lower left is Dobratz 1972 and upper right is Gibbs & Popolato 1980). Note that many of the diagonal terms are larger in magnitude than some of the off diagonal terms, indicating that two different HEs can be more similar to each other within a single data source than the same HE is across the two compendiums. For instance, PBX 9501 and PBX 9404 are closer if a single data source is used (D_BC = 0.6 or 0.8) than the two PBX 9404 ensembles derived from the two datasets (D_BC = 1.1). There is also variability in terms of the disparities between the datasets: Baratol is the least affected by choice of data source (D_BC = 0.2), and PETN is the most affected (D_BC = 2.1). To underscore just how significant the differences between the two data sources can be, we note that the reactant ensembles for PETN and PBX 9501 (two very different HEs) are more similar to each other within a single data source (D_BC = 1.7 or 1.4) than PETN is to itself between the two data sources. As another example, XTX 8003 and PBX 9011 are hardly distinguishable (D_BC = 0.3) within Dobratz 1972; however, those same two HEs are much more distinct (D_BC = 3.2) within Gibbs & Popolato 1980. The differences shown in Figs. 1 and 2 for TNT correspond to an HE that is comparatively insensitive to data source (D_BC = 0.3 in Fig. 3).

FIG. 3. Bhattacharyya distances between pairs of HEs assuming 5% uncertainty on the fit coefficients and considering only the reactant EOS. The lower left off diagonal terms are the differences between pairs of different HEs using Dobratz 1972 as the data source; the upper right off diagonal terms are the same quantities using Gibbs & Popolato 1980 as the data source. The diagonal terms are the distances between the same HE using the two different data sources.

In the absence of a reason to trust one dataset over the other, we combine both datasets and report those Bhattacharyya distances in Fig. 4. Here, we also explore the effect of varying the uncertainty from 5% to 10%. The diagonal terms quantify the differences in the same HE between the two choices for uncertainty. The off diagonal terms compare different HEs with an uncertainty of 5% for the lower left triangle and 10% for the upper right triangle.

FIG. 4. Bhattacharyya distances between pairs of HEs including only the reactant EOS. The lower left off diagonal terms are the differences between pairs of different HEs assuming 5% uncertainty; the upper right off diagonal terms are the same quantities assuming 10% uncertainty. The diagonal terms are the distances between the same HE using the two different uncertainties.

While the qualitative trends are preserved between the two choices for uncertainty, the distances shrink for the larger uncertainty value because the larger uncertainty results in less sharply peaked distributions. Some of the trends between HEs in Fig. 4 are as expected. For instance, PBX 9501 and PBX 9404 are relatively similar, and both Baratol and XTX 8003 are not particularly similar to any of the other HEs (though XTX 8003, which contains PETN, is most similar to PETN). However, other trends are counter-intuitive. For instance, PBX 9501 and Comp B are extremely close together in a statistical sense. PBX 9404 and TNT are also highly similar. These latter two results are not at all expected based on the known behavior of the HEs. It is important to resolve these oddities in order to demonstrate that this approach to quantifying differences between parameter ensembles is robust. Therefore, we next perform UQ on the product EOSs, since these also inform the behavior of HEs.

As described in Sec. II C 2, we use the Bayesian bootstrap to repeatedly re-weight the five Ree-Ross parameter sets available in the literature in order to approximate the posterior distribution for the product EOSs. Figure 5 shows the spread in the single variable distributions for the adjustable parameters to the product Davis model after the bootstrapping in the case of TNT. This analysis is performed for only six HEs, with Baratol and XTX 8003 being omitted, because magpie cannot treat compounds with barium or silicon at present.

FIG. 5. Single variable distributions for the seven Davis product model parameters in the case of TNT. The parameter and the associated units are denoted by the title, where a dash indicates a unitless variable.

Figure 6 shows the ensemble of predictions compared to the calibration data for TNT. On the scale furnished by the range of specific volumes in the data plotted in panels (a) and (b), the spread is not visually obvious, but the Davis predictions are in reasonable agreement with the magpie data. The properties of the CJ state fall within the ranges of the magpie predictions as well. The Davis EOS model does seem to struggle to capture the temperature dependence as a function of specific volume along the overdriven Hugoniot and release isentrope. This is perhaps unsurprising since this type of detailed temperature data is generally not available from experiment, and the product Davis model was not developed with this type of data in mind.

FIG. 6. Comparison of the medians and 95% confidence intervals from the bootstrap sample distributions over the product Davis model parameters (green shaded areas or bar and whisker plots) to the calibration data (i.e., the five different sets of Ree-Ross parameters) in the case of TNT products (gray lines/symbols) for (a) temperature–volume and (b) pressure–volume data along the overdriven Hugoniot (V < V_CJ) and the release isentrope (V ≥ V_CJ), and (c) the detonation velocity, (d) pressure, and (e) temperature at the CJ state.

Because of the nature of the product calibration data, we have performed UQ on the reactant and product EOSs separately to this point. However, there is one final physical consideration that affects both the reactant EOS and the product EOS: in pressure–volume space, the product EOS should always be at higher volume at a fixed pressure, i.e., materials should expand upon detonation. We treat this constraint as a step function; parameter combinations that do not obey this constraint have a probability of zero. We have independent samples for the reactant and product EOS parameters (from the MCMC sampling and the bootstrap, respectively) that we can combine into a single distribution by checking to see if the above constraint is obeyed. If so, the reactant and product EOS parameters are included in the final joint distribution. We trained an ML model for each HE to predict whether a given joint parameter combination was valid or not. We sub-sampled 5000 samples from the MCMC, used all 5000 product parameter sets from the bootstrapping, and combinatorially checked each parameter combination for a total of 2.5 × 10⁷ evaluations.
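A sketch of this combinatorial filtering step is given below; the classifier `clf` is the gradient-boosted model from the earlier sketch, the sample arrays are assumed placeholders, and the batching strategy is illustrative.

```python
import numpy as np

# 5000 reactant draws (MCMC) x 5000 product draws (bootstrap) = 2.5e7 pairings.
# reactant_sub and product_boot are placeholder arrays of parameter samples.
valid_pairs = []
for i, theta_r in enumerate(reactant_sub):
    # Pair one reactant sample against every product sample and classify in a batch.
    X_pairs = np.hstack([np.tile(theta_r, (len(product_boot), 1)), product_boot])
    keep = clf.predict(X_pairs).astype(bool)
    valid_pairs.extend((i, j) for j in np.flatnonzero(keep))
# valid_pairs indexes the joint reactant + product samples retained in the final distribution.
```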

The changes to the TNT reactant distributions are shown in Fig. 7, where the distributions before and after the ML filtering are plotted. (The product distributions were fairly unaffected and are therefore not shown.) Most of the single variable distributions are similar in both cases, but the C parameter is shifted to lower values. The Hugoniot constraint provides additional information about the high-pressure behavior that influences the admissible values for C. In the reactant EOS distribution, B and C have a Pearson correlation coefficient (R) of −0.8, indicating a strong anti-correlation, and so B commensurately shifts to slightly higher values. By contrast, A and Γ^{r0} are positively correlated with C (R values of 0.25 and 0.37, respectively) and so shift to lower values along with C.

FIG. 7. Single variable distributions for the six Davis reactant model parameters in the case of TNT before (R) and after (R+P) removing samples that are not physically reasonable with the ML classifier. The parameter and the associated units are denoted by the title, where a dash indicates a unitless variable.

Given the final statistical descriptions of the six HEs, we can provide an updated matrix of Bhattacharyya distances in Fig. 8. We can see that the counter-intuitive results that occurred when only the reactant EOS was included have been alleviated. In particular, the pairs of HEs (PBX 9404, TNT), (PBX 9404, PETN), and (PBX 9501, Comp B) were all more similar than (PBX 9501, PBX 9404) in Fig. 4. When we include both reactant and product data, (PBX 9501, PBX 9404) remain similar to each other, but the other pairs become more disparate.

FIG. 8. Bhattacharyya distances between pairs of HEs assuming 5% uncertainty on the fit coefficients for the combined reactant and product EOSs. The diagonal terms are zero, and the matrix is symmetric by construction.

Interestingly, despite also having HMX as the energetic material, PBX 9011 is notably distinct from PBX 9501 and PBX 9404. In the original calibration data, the coefficient of thermal expansion is quite different for PBX 9011 relative to the other two PBXs, so the distances in Fig. 8 are consistent with the underlying data. The thermal expansion coefficient measurements in the two compendiums are rather discrepant from one another, so it is difficult to speculate on the underlying physical reason why PBX 9011 deviates from PBX 9404 and PBX 9501. Nonetheless, the statistical distances reflect these differences in calibration data in a distilled fashion, propagating the differences in calibration data through to their effect on the parameters.

In this study, we have demonstrated a composite UQ strategy for heterogeneous datasets, and we have used the resulting UQ to compute statistical distances to rank the similarity or differences among different HE formulations. The ability to fully quantify the uncertainty in the reactant and product EOSs for HEs is an essential step toward propagating small-scale experiment data up to multi-physics experiments.

In addition to the usual applications of the propagation of uncertainty, the ability to compare similarities and differences between EOS distributions could be relevant to answering many questions. In this work, we have ranked the HEs in terms of consistency between two separate data sources, but there are other possible applications. It would be possible to rank the degree to which new pieces of data influence the UQ; i.e., a larger D B C would indicate a particularly impactful piece of data. Similarly, if we develop a new material (e.g., a novel HE), we could rank where existing materials are the most similar or different to make further predictions about performance.

This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218NCA000001).

The authors have no conflicts to disclose.

Beth A. Lindquist: Conceptualization (lead); Data curation (equal); Formal analysis (equal); Methodology (lead); Supervision (equal); Writing – original draft (lead). Ryan B. Jadrich: Conceptualization (supporting); Methodology (supporting); Software (lead); Supervision (equal); Writing – original draft (supporting). Juampablo E. Heras Rivera: Data curation (equal); Formal analysis (equal); Writing – original draft (supporting). Lucia I. Rondini: Data curation (equal); Formal analysis (equal); Writing – original draft (supporting).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. E. Forte, J. Burger, K. Langenbach, H. Hasse, and M. Bortz, “Multi-criteria optimization for parameterization of SAFT-type equations of state for water,” AIChE J. 64, 226–237 (2018).
2. M. Bergh, R. Wedberg, and J. Lundgren, “Optimization of equation of state and burn model parameters for explosives,” AIP Conf. Proc. 1979, 100003 (2018).
3. G. Cox, “Generating a multiphase equation of state with swarm intelligence,” AIP Conf. Proc. 1979, 040002 (2018).
4. G. A. Cox and M. A. Christie, “Fitting of a multiphase equation of state with swarm intelligence,” J. Phys.: Condens. Matter 27, 405201 (2015).
5. J. Haynes, “A multiphase equation of state for gold,” AIP Conf. Proc. 2272, 070017 (2020).
6. J. A. Lazzús, “Hybrid particle swarm-ant colony algorithm to describe the phase equilibrium of systems containing supercritical fluids with ionic liquids,” Commun. Comput. Phys. 14, 107–125 (2014).
7. P. C. Myint, L. X. Benedict, C. J. Wu, and J. L. Belof, “Minimization of Gibbs energy in high-pressure multiphase, multicomponent mixtures through particle swarm optimization,” ACS Omega 6, 13341–13364 (2021).
8. B. A. Lindquist and R. B. Jadrich, “Uncertainty quantification for a multi-phase carbon equation of state model,” J. Appl. Phys. 131, 155104 (2022).
9. B. A. Lindquist and R. B. Jadrich, “Uncertainty quantification for a multi-phase magnesium equation of state,” AIP Conf. Proc. (to be published).
10. R. B. Jadrich, B. A. Lindquist, J. A. Leiding, and T. D. Aslam, “Uncertainty quantified reactant and product equation of state for composition B,” AIP Conf. Proc. (to be published).
11. S. J. Ali, D. C. Swift, C. J. Wu, and R. G. Kraus, “Development of uncertainty-aware equation-of-state models: Application to copper,” J. Appl. Phys. 128, 185902 (2020).
12. J. L. Brown and L. B. Hund, “Estimating material properties under extreme conditions by using Bayesian model calibration with functional outputs,” J. R. Stat. Soc. Ser. C Appl. Stat. 67, 1023–1045 (2018).
13. K. Rumsey, G. Huerta, J. Brown, and L. Hund, “Dealing with measurement uncertainties as nuisance parameters in Bayesian model calibration,” SIAM-ASA J. Uncertain. Quantif. 8, 1287–1309 (2020).
14. W. J. Schill, R. A. Austin, K. L. Schimdt, J. L. Brown, and N. R. Barton, “Simultaneous inference of the compressibility and inelastic response of tantalum under extreme loading,” J. Appl. Phys. 130, 055901 (2021).
15. N. H. Paulson, E. Jennings, and M. Stan, “Bayesian strategies for uncertainty quantification of the thermodynamic properties of materials,” Int. J. Eng. Sci. 142, 74–93 (2019).
16. S. A. Andrews and A. M. Fraser, “Estimating physics models and quantifying their uncertainty using optimization with a Bayesian objective function,” J. Verif. Valid. 4, 011002 (2019).
17. S. A. Andrews, A. M. Fraser, S. I. Jackson, and E. K. Anderson, “Exploring the uncertainty in the equation of state for a high explosive fit to heterogeneous data,” in ASME 2019 Verification and Validation Symposium (ASME, 2019).
18. D. J. Walters, A. Biswas, E. C. Lawrence, D. C. Francom, D. J. Luscher, D. A. Fredenburg, K. R. Moran, C. M. Sweeney, R. L. Sandberg, J. P. Ahrens, and C. A. Bolme, “Bayesian calibration of strength parameters using hydrocode simulations of symmetric impact shock experiments of Al-5083,” J. Appl. Phys. 124, 205105 (2018).
19. J. Bernstein, K. Schmidt, D. Rivera, N. Barton, J. Florando, and A. Kupresanin, “A comparison of material flow strength models using Bayesian cross-validation,” Comput. Mater. Sci. 169, 109098 (2019).
20. T. Nguyen, D. C. Francom, D. Luscher, and J. Wilkerson, “Bayesian calibration of a physics-based crystal plasticity and damage model,” J. Mech. Phys. Solids 149, 104284 (2021).
21. J. Frutiger, I. Bell, J. P. O’Connell, K. Kroenlein, J. Abildskov, and G. Sin, “Uncertainty assessment of equations of state with application to an organic Rankine cycle,” Mol. Phys. 115, 1225–1244 (2017).
22. LASL Explosive Property Data, Los Alamos Series on Dynamic Material Properties, edited by T. R. Gibbs and A. Popolato (University of California Press, Berkeley, CA, 1980).
23. Properties of Chemical Explosives and Explosive Simulants, edited by B. M. Dobratz (Lawrence Livermore Laboratory, Livermore, CA, 1972), Vol. 1, UCRL-51319 Rev. 1.
24. B. L. Wescott, D. S. Stewart, and W. C. Davis, “Equation of state and reaction rate for condensed-phase explosives,” J. Appl. Phys. 98, 053514 (2005).
25. T. D. Aslam, “The reactants equation of state for the tri-amino-tri-nitro-benzene (TATB) based explosive PBX 9502,” J. Appl. Phys. 122, 035902 (2017).
26. T. D. Aslam, “Shock temperature dependent rate law for plastic bonded explosives,” J. Appl. Phys. 123, 145901 (2018).
27. A. C. Robinson, “The Mie-Grüneisen power equation of state,” Report No. SAND2019-6025 (Sandia National Laboratory, 2019).
28. C. Ticknor, S. A. Andrews, and J. A. Leiding, “Magpie: A new thermochemical code,” AIP Conf. Proc. 2272, 030033 (2020).
29. M. Ross and F. Ree, “Repulsive forces of simple molecules and mixtures at high density and temperature,” J. Chem. Phys. 73, 6146 (1980).
30. V. Dubois, N. Desbiens, and E. Auroux, “New developments of the CARTE thermochemical code: Calculation of detonation properties of high explosives,” Chem. Phys. Lett. 494, 306–311 (2010).
31. M. Ross and F. Ree, “JCZS3—An improved database for EOS calculations,” Report No. SAND2018-6389C (Sandia National Laboratory, 2018).
32. L. Fried, W. M. Howard, and P. C. Souers, “Exp6: A new equation of state library for high pressure thermochemistry,” in 12th Symposium (International) on Detonation (U.S. Naval Research Office, San Diego, CA, 2002).
33. F. H. Ree, “Supercritical fluid phase separations: Implications for detonation properties of condensed explosives,” J. Chem. Phys. 84, 5845–5856 (1986).
34. Y. A. Bogdanova, S. A. Gubin, A. A. Anikeev, and S. B. Victorov, “Thermodynamic modelling of detonation H-N-O high explosives,” J. Phys. Conf. Ser. 751, 012018 (2016).
35. D. Barber, Bayesian Reasoning and Machine Learning, 4th ed. (Cambridge University Press, 2011).
36. D. Foreman-Mackey, D. W. Hogg, D. Lang, and J. Goodman, “emcee: The MCMC hammer,” Publ. Astron. Soc. Pac. 125, 306–312 (2013).
37. W. D. Vousden, W. M. Farr, and I. Mandel, “Dynamic temperature selection for parallel tempering in Markov chain Monte Carlo simulations,” Mon. Not. R. Astron. Soc. 455, 1919–1937 (2015).
38. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer Series in Statistics (Springer, New York, 2001).
39. D. B. Rubin, “The Bayesian bootstrap,” Ann. Statist. 9, 130–134 (1981).
40. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, “Scikit-learn: Machine learning in Python,” J. Mach. Learn. Res. 12, 2825–2830 (2011).
41. Those deviating from the scikit-learn v1.2.1 defaults are 200 estimators, a maximum depth of 6, a learning rate of 0.5, and the “deviance” loss function.
42. C. Jackson, M. Sen, and P. Stoffa, “An efficient stochastic Bayesian approach to optimal parameter and uncertainty estimation for climate model predictions,” J. Clim. 17, 2828–2841 (2004).
43. S. Mosbach, J. H. Hong, G. P. E. Brownbridge, M. Kraft, S. Gudiyella, and K. Brezinsky, “Bayesian error propagation for a kinetic model of n-propylbenzene oxidation in a shock tube,” Int. J. Chem. Kinet. 46, 389–404 (2014).