In this article, we trained a machine learning (ML) model to connect microstructural details of an energetic material formulation to its performance, with the goal of guiding the discovery of new explosive formulations. Our hypothesis was that the algorithm would robustly learn the training data and produce an accurate surrogate model. Specifically, the algorithm learned the relationship between details of the void size distribution (VSD), the initiating shock pressure, and the energetic material performance. We imposed realistic constraints on the VSD, and a range of cases was ingested by a physically informed reactive flow model working within a hydrodynamic solver running on high-performance computing resources. The ML algorithm produced a surrogate model that accurately predicted known test points across the parameter space. In addition to the utility of the model and the process used for its development, we noted interesting comparisons between what we, the authors (subject matter experts), would heuristically conclude from the training data and the surrogate model predictions. We detected nuanced details that were missed by the surrogate model; however, these details are not important to an energetic material formulator. We concluded that the algorithm did indeed robustly learn the training data and produce an accurate surrogate model. We further concluded that the surrogate model is a powerful tool to guide the formulator who lacks access to subject matter experts and restricted computing resources.

## INTRODUCTION

The future of energetic material discovery and development, especially of high-performance high explosives, demands new efficient and cost-effective methods to design formulations, guided *a priori* by the effect of starting material parameters on the ultimate performance of the materials. Here, we consider the class of high explosives that consist of a high loading fraction of an energetic organic crystalline powder in an inert polymer matrix. This is an important class of explosives for use in precision applications; well-studied examples include PBX 9501, PBX 9502, LX-10, and LX-17. In standard practice, the formulator chooses a binder with favorable thermo-mechanical properties and a density similar to or higher than that of the explosive crystal, and chooses the explosive particle characteristics (e.g., size distribution) to optimize density, which ultimately optimizes the detonation velocity (D) and pressure (P_{CJ}). The formulator must also consider solubility and wettability issues. This optimization of particle characteristics proceeds mostly by iteration, starting with the knowledge that a mixture of small and large particles (a bimodal distribution) yields the highest packing efficiency, leading to high density and good detonation performance.

However, other key performance parameters are important to the overall design space of an explosive system. These include initiation sensitivity, reaction spreading from an initiation locus, and the detonation wave curvature when propagating through complex geometries. While the formulator may choose a particle size distribution (PSD) to optimize density, the other performance parameters may not be optimized and can only be known through experimentation. Here, we present one part of an overall approach that uses machine learning (ML) to correlate the PSD with all of the key performance metrics. There are many facets to this problem, and we distill the correlation into two parts: (1) ML correlations of the PSD with the resulting void size distribution (VSD) and (2) ML correlations of the VSD with performance metrics. For reasons discussed later in the article, the VSD may be the dominant characteristic of a plastic-bonded explosive (PBX) that controls performance. In principle, these machine learning algorithms give the formulator a powerful tool to specify performance requirements and use the machine guide to find the required PSD, in effect, to discover new energetic materials with tailored performance. There are several robust techniques for using ML in the context of energetic materials, including ML for material genomes,^{1,2} multiscale modeling,^{3} surrogate modeling,^{4} and reaction kinetics modeling.^{5} This article explores the VSD-performance correlation by adapting the sparse-data machine-learning approach of Sen and Rai^{6,7} to “train” a machine-learning algorithm on a dataset of “synthetic microstructures” and the corresponding computed shock initiation response of the explosive PBX 9502 (95% 2,4,6-trinitrobenzene-1,3,5-triamine, 5% Kel-F binder).

In this study, we use the VSD to quantify key features of the microstructure and the hydrodynamic reaction rate across a range of shock pressures to measure the initiation performance. We use the πSURF reactive flow model^{8–10} working in the Pagosa hydrodynamic solver system^{11} to generate the training dataset. Specifically, we hypothesize that the method of Sen and Rai will robustly learn and correlate the relationship between VSD and initiation performance. The work presented here provides the basis for an iterative algorithm that enables the formulator to specify the performance requirement and then obtain the required VSD characteristics. Ultimately, developing an ML algorithm that correlates PSD with VSD, used with the algorithm developed here, will provide a powerful tool for the formulator to optimize a wide range of performance metrics by controlling PSD. This article proceeds by providing background and details of our methods; our methods of training data development and heuristic observations of trends in the training data; surrogate model results; a comparison of the heuristic observations and the surrogate model predictions; and our overall conclusions.

## BACKGROUND

There were two key background components of this study: physically aware reactive flow modeling and ensemble machine learning techniques. Both components are recently developed cutting-edge techniques that are described in detail elsewhere. This article proceeds by first outlining the physically aware πSURF reactive flow model,^{8–10} followed by an overview of the Ensemble Learning algorithm.

### Physically aware reactive flow modeling

The microstructure of heterogeneous energetic materials (HEs) contains voids, crack networks, and other defects. We know these heterogeneities play a key role in determining the explosive performance through the formation of hot spots, regions of localized heating that trigger the chemical reaction. Hot spots form when a shock passes through defects. While researchers have proposed several types of defects that localize shock energy to cause ignition, a significant amount of evidence supports the idea that the interaction of shock waves with microstructural voids is the dominant mechanism; one may follow references forward from Ref. 12 to Ref. 13 to see the aggregate evidence. If the shock-void interaction satisfies a chemical kinetics-dependent critical time-temperature-size relationship, the voids will ignite and deflagration waves will propagate from the hot spots. Qualitatively speaking, smaller voids require stronger shocks to ignite and have shorter reaction times. If the voids are sufficiently close to one another, the individual deflagration waves merge and strengthen the incident shock until it transitions into steady detonation. It is, therefore, apparent that both the presence of voids and the size of the voids are key factors controlling the initiation and detonation performance of a heterogeneous explosive. It naturally follows that the size distribution of the voids also influences the process, since stronger shocks access more of the void space.^{8–10} Therefore, in this article, we consider the statistical representation of the void structure in the form of the void volume distribution, more commonly referred to as the void or pore size distribution; for consistency with the common vernacular, we use the acronym VSD.
Experimentally, we determine the VSD using Ultra-Small Angle Neutron Scattering (USANS)^{14} or Ultra-Small Angle X-ray Scattering (USAX) or computed tomography.^{15} Figure 1 shows a typical VSD and void area distribution for PBX 9502.

The physically informed SURF (πSURF) reactive flow model connects physiochemical properties and the VSD of an explosive to its shock initiation and detonation performance. Full details can be found in Refs. 8–10. The overall reaction rate is

where $\alpha \tau $ accounts for the dependence on the pressure in the flow following the shock,

and $\Lambda =f(0<\lambda <1)$ is a topology function that modulates the rate as the reaction proceeds. The initial reaction rate depends on the specific surface area activated by the shock, $A$, and the initial combustion wave speed (aka deflagration speed) $D_0$ at the shock pressure,

The deflagration speed is intrinsic to the specific binder-energetic material pair considered (within some constraints such as particle size, morphology, etc.), and we find $A$ by

$$A = \int_{\delta_c}^{\infty} a(\delta)\,d\delta,$$
where $\delta_c = f(P_s)$ is the critical (smallest) void size that ignites under a shock of strength $P_s$. That parameter depends on the physical properties and chemical kinetics, and $a(\delta)$ is the void area distribution (VAD) function. Figure 1 shows a typical VSD and VAD.
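As a concrete illustration of the activation integral, the short sketch below integrates a tabulated void area distribution upward from the critical size. The lognormal VAD and the two $\delta_c$ values are hypothetical, chosen only to show the trend; this is a sketch of the idea, not the πSURF implementation.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule on tabulated data (kept explicit for clarity)."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

def activated_area(delta, a_delta, delta_c):
    """Integrate the void area distribution a(delta) upward from the
    critical void size delta_c, mimicking the shock-activated area A."""
    mask = delta >= delta_c
    return trapz(a_delta[mask], delta[mask])

# Hypothetical lognormal VAD tabulated on a log-spaced size grid (nm).
delta = np.logspace(0, 4, 2000)                 # 1 nm to 10 um
a_delta = np.exp(-(np.log(delta) - np.log(500.0)) ** 2 / (2 * 0.5 ** 2)) / delta

A_weak = activated_area(delta, a_delta, delta_c=200.0)   # weaker shock, larger delta_c
A_strong = activated_area(delta, a_delta, delta_c=50.0)  # stronger shock, smaller delta_c
```

Lowering $\delta_c$ (a stronger shock) captures more of the distribution, so `A_strong > A_weak`, consistent with the statement that stronger shocks access more of the void space.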

The algorithm “ingests” the VSD and produces a pressure-dependent reaction rate, as shown in Fig. 2 (left), for use within the hydrocode. While the key parameters can, in principle, be determined *a priori*, for this study we determined the nominal parameters by a calibration that correlates the known reaction rate to the nominal VSD of PBX 9502 via $\delta_c$. Figure 2 (right) shows this calibration curve. To consider off-nominal VSDs, we assume $\delta_c(P_s)$ remains the same, and the model then modifies the nominal rate curve and, therefore, the predicted initiation and detonation performance.

The defining characteristics of the VSD include the modality [the number of distinct void size populations, indexed by $i$] and, for each mode, the void fraction $(\varphi_i)$, the width $(\sigma_i)$, and the median size $(\mu_i)$. This study included two modes, realistic ranges of $\varphi_i$ and $\sigma_i$, and the ratio of the void fractions associated with the two modes; the modes are named such that mode 1 is larger than mode 2. Here, “realistic” means a choice of values that results in VSDs bounded by the lower and upper void sizes observed in real PBXs. Furthermore, the sum $\varphi_1+\varphi_2$ is held constant, representing a fixed density across the study. The defining characteristics of the VSD are used to develop training data for the machine learning algorithm, as discussed in the following section.
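A minimal sketch of how such a bimodal VSD can be parameterized, holding the total void fraction fixed while the ratio $VF = \varphi_1/\varphi_2$ splits it between the modes. The size grid and parameter values are illustrative, not the study's exact inputs; the 2.5% total void fraction follows the text.

```python
import numpy as np

def bimodal_vsd(delta, mu1, mu2, sigma1, sigma2, vf_ratio, phi_total=0.025):
    """Bimodal lognormal void size distribution with a fixed total void
    fraction phi_total; vf_ratio = phi1/phi2 splits it between the modes."""
    phi2 = phi_total / (1.0 + vf_ratio)
    phi1 = phi_total - phi2

    def lognormal(mu, sigma):
        return np.exp(-(np.log(delta) - mu) ** 2 / (2 * sigma ** 2)) / (
            delta * sigma * np.sqrt(2 * np.pi))

    return phi1 * lognormal(mu1, sigma1) + phi2 * lognormal(mu2, sigma2)

delta = np.logspace(0, 4, 4000)                 # void size grid, nm
vsd = bimodal_vsd(delta, mu1=np.log(1000.0), mu2=np.log(100.0),
                  sigma1=0.5, sigma2=0.5, vf_ratio=21.25)

# The distribution integrates back to the (fixed) total void fraction.
total = float(np.sum((vsd[1:] + vsd[:-1]) / 2.0 * np.diff(delta)))
```

Because $\varphi_1+\varphi_2$ is fixed, every synthetic microstructure generated this way represents the same overall density, as required by the study design.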

### Ensemble learning method

We quantify the shock initiation response of PBX 9502 as the Run-to-Detonation distance, RtD ($x$), for a given loading pressure, $P_s$, and a microstructure described by the VSD characteristics. In terms of surrogate modeling, $x$ is a function of $P_s$ and the VSD over a range of loading pressures and microstructures. Surrogate modeling estimates an unknown function $f(z)$ at discrete and distinct points, $z_j$, $j=1,2,\ldots,N$. These $N$ points, $z_j$, where the function is known, span the parameter space of the surrogate. In general, $z$ is a $d$-dimensional vector, i.e., $z \in \mathbb{R}^d$, and the $d$ components of $z = (z_1, z_2, \cdots, z_d)$ in the $d$-dimensional parameter space comprise the features/predictors of the function $f(z)$. The set of known values of the function and their locations, i.e., the set $(z_j, f(z_j))$, are the inputs/training points for the surrogate model and are used to derive an approximation of $f(z)$ over the entire parameter space. Often, it is of interest to obtain the value of the function at a selected location in the parameter space, say $z_0$. The point $z_0$ is the probe point of the surrogate model, while $f(z_0)$ is the output/response of the surrogate model.

To construct surrogate models for $f(z)$, others use classical machine learning techniques such as Gaussian processes (GPs),^{16} radial basis functions,^{17} support vector machines,^{18} and neural networks.^{19} Traditionally, GPs have been widely used by the authors’ groups and others to construct surrogate models. However, discontinuities and steep gradients cause problems for GPs, as the fundamental assumptions involved in approximation using GPs become questionable in their presence. Shock-to-detonation transition (SDT) parameter spaces commonly contain steep gradients, as shown in Rai and Udaykumar.^{13} The problems manifest as spurious oscillations and introduce unwarranted epistemic uncertainties in surrogates constructed using GPs.

To overcome this limitation of GPs, we have designed a novel Ensemble Learning (EL) algorithm, which provides robust and accurate surrogate models for challenging SDT parameter spaces.^{20} EL methods either combine different machine-learning methods (as in stacked ensemble methods) or use different subsets of the input data to train a single machine-learning method (as in bootstrap aggregated/bagged ensemble methods and boosted ensemble methods). In either case, the central philosophy is to create several sub-models, either by combining diverse input data sets or by combining diverse machine learning methods, and derive a cumulative response of $f(z)$ at $z_0$. The purpose behind combining a wide variety of sub-models is to obtain an aggregate of responses that strikes a balance between model bias (i.e., capturing the non-linearities in $f(z)$ across the parameter space with sufficient accuracy) and model variance (i.e., preventing over-fitting or memorization of the training data).

The central philosophy of the EL is to still exploit the high convergence rate of GPs; however, before training the GP, the training dataset is simplified using low-fidelity regression learning models (support vector regressions, decision trees, and extreme learning machines). In other words, the training dataset is first passed to a stack of base learners, and the output of these low-fidelity learners is passed to the second stack, comprising a GP. Because the GP trains on the outputs of the base learners, it always sees linearly separated training sets, even when steep gradients or discontinuities exist in the parameter space, and it only needs to fine-tune the final fit to improve upon the approximation error of the low-fidelity base learners. To increase generalization and reduce overfitting in this stacked learning approach, the stacks are trained on bootstrapped aggregate samples, thereby balancing the bias and variance of the fit. A schematic of the EL framework used in this study is shown in Fig. 3.
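A toy sketch of the stacked-and-bagged idea, assuming scikit-learn and substituting a simple two-member base stack (support vector regression and a decision tree) feeding a Matérn GP. This is not the authors' EL code, only an illustration on a one-dimensional target with a steep gradient of the kind that troubles a plain GP.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Toy 1-D target with a steep gradient, standing in for an SDT response.
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.tanh(40.0 * (X.ravel() - 0.5))

def fit_stacked_instance(Xb, yb):
    """One stacked instance: low-fidelity base learners first, then a GP
    trained on their outputs."""
    bases = [SVR(C=10.0), DecisionTreeRegressor(max_depth=6, random_state=0)]
    for b in bases:
        b.fit(Xb, yb)
    Zb = np.column_stack([b.predict(Xb) for b in bases])
    # alpha adds jitter so duplicated bootstrap rows keep the kernel invertible.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True)
    gp.fit(Zb, yb)
    return bases, gp

def predict(instance, Xq):
    bases, gp = instance
    Zq = np.column_stack([b.predict(Xq) for b in bases])
    return gp.predict(Zq)

# Bootstrap-aggregate several instances and average their responses.
instances = []
for _ in range(5):
    idx = rng.integers(0, len(X), len(X))        # in-bag sample
    instances.append(fit_stacked_instance(X[idx], y[idx]))
y_hat = np.mean([predict(m, X) for m in instances], axis=0)
rmse = float(np.sqrt(np.mean((y_hat - y) ** 2)))
```

The GP never sees the raw, steep-gradient inputs; it only refines the base learners' already smoothed outputs, which is the essence of the stacking step described above.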

## TRAINING DATA GENERATION

The goal of this work is to create a linkage between microstructural details and performance, via machine learning, to guide the formulator during the discovery process. A key step is generating data to train the machine learner. Here, we represent the microstructure in the form of VSDs. Previous sections detail how we generated a wide range of realistic VSDs and how we use the physically aware πSURF reactive flow model to generate performance data in the form of Pop plots. The VSD characteristics and the stimulus (shock pressure, $P_s$) form the “inputs,” which are fed to the ML model; the ultimate output is the RtD, or distance to detonation. The training data for the ML model are generated in the following manner. First, πSURF rate curves are produced from the VSD characteristics. These rate curves are then fed to the high-performance computing (HPC) continuum hydrocode Pagosa to compute the predicted shock initiation response resulting from variations in the VSDs. The shock initiation response is characterized in terms of $P_s$ and $x$. The machine learner then trains on this dataset to construct a surrogate model connecting the microstructural characteristics (VSD) to the explosive performance (RtD). The training data are qualitatively dissected in the “Qualitative (heuristic) observations” section.

### Void size distributions

Other researchers have coined the phrase “synthetic microstructures,”^{21} where, in general, they create computational domains (2D or 3D renderings) that contain statistically correct distributions of relevant features, i.e., voids and defects. Here, we realize an alternative form of this concept and generate synthetic microstructures by varying the statistical characteristics of the VSD $(\mu_1, \mu_2, \sigma_1, \sigma_2, \varphi_1/\varphi_2)$.

The training datasets were developed in two stages. The *first stage* provides bounding data to the ML model; these data points were chosen by the authors. The ML algorithm then identifies parameter space regions that require more data to improve the learned model accuracy and provides parameter values for the *second stage* of training data. For the first stage, a systematically and realistically bounded table of the VSD characteristics was developed. Realism is maintained by enforcing that less than 1% of the void volume occurs below 1 nm or above 5 *μ*m (consistent with USAX measurements of PBX 9502) and that the overall void volume is held constant at 2.5% (representing 97.5% of TMD). The initial run varies the five VSD parameters at two levels, varies the input shock pressure $(P_s)$ at five levels, and holds the modality constant, as shown in Table I; this set accounts for 160 cases. To create the second dataset, the ML model was tasked to generate 200 points, bringing the total to 360 cases. The supplementary material contains the full table representing both training datasets. Figure 4 shows the VSDs in the initial set and gives an example of mapping a VSD from a semi-log to a log–log plot. Semi-log plots are typically used to visualize VSDs, but the visualizations in this article are presented as log–log plots because they give a clearer picture of the relative size of the distribution modes. Figure 5 shows the VSDs spanned by the second stage of training data. The visualization shows how varying each parameter affects the VSD; the cases are colored incrementally from blue to red, with blue representing cases at lower values of the given parameter and red representing the high end.
The parameters $\mu_1$ and $\mu_2$ shift the median sizes of the upper and lower distributions, $\sigma_1$ and $\sigma_2$ change the widths of the distributions, and the VF varies the fraction of the total void volume occurring in each distribution.
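The first-stage design described above (five VSD parameters at two levels crossed with five pressure levels, 160 cases) can be enumerated as follows. The parameter levels are taken from the text; the five pressure values are placeholders, since the exact levels of Table I are not reproduced here.

```python
import numpy as np
from itertools import product

# Two levels per VSD parameter; mu values are natural logs of sizes in nm.
# Parameter levels follow the text; the five pressure levels are placeholders.
levels = {
    "mu1": [np.log(500.0), np.log(1000.0)],
    "mu2": [np.log(10.0), np.log(100.0)],
    "sigma1": [0.4, 0.6],
    "sigma2": [0.4, 0.6],
    "VF": [8.5, 34.0],                      # phi1/phi2
}
pressures_gpa = [6.0, 8.0, 10.0, 12.0, 14.0]

cases = [dict(zip(levels, combo), Ps=p)
         for combo in product(*levels.values())
         for p in pressures_gpa]
# 2**5 parameter combinations x 5 pressures = 160 first-stage cases.
```

Each entry in `cases` corresponds to one πSURF/Pagosa run in the first-stage training set.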

### Run to detonation

We characterize the initiation performance expected from the VSD parameters by computing the run to detonation (RtD). The RtD results from a one-dimensional, long-duration shock wave incident on a domain representing PBX 9502, with a shock pressure-dependent reaction rate computed by the πSURF model as described above. This is the canonical method to isolate the mechanical and chemical effects that occur during shock initiation, free from boundary wave interference (rarefaction waves). Numerical tracers located at 1 mm intervals from the shocked boundary reveal the growth of the initial shock as it evolves into steady detonation. The tracer data are interpreted as shock position vs time and differentiated to reveal shock velocity vs time using the method of Hill and Gustavsen.^{22} Ultimately, we interpret these results in terms of the RtD vs the input shock pressure for the training dataset ($x$ vs $P_s$). Figure 6 shows a typical dataset used to find these parameters.
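A simplified sketch of the tracer post-processing: differentiate position-time data to shock velocity and flag the run distance at which the wave approaches a steady detonation speed. The synthetic trajectory, the threshold criterion, and the detonation-speed value are all illustrative stand-ins, not the Hill-Gustavsen procedure itself.

```python
import numpy as np

def shock_velocity(t, x):
    """Differentiate tracer shock position vs time (central differences)."""
    return np.gradient(x, t)

def run_to_detonation(t, x, d_cj, tol=0.02):
    """Shock position where the wave speed first comes within tol of the
    steady detonation speed d_cj (a simplified RtD criterion)."""
    u = shock_velocity(t, x)
    idx = int(np.argmax(u >= (1.0 - tol) * d_cj))
    return float(x[idx])

# Synthetic accelerating-shock trajectory (illustrative, not Pagosa output).
t = np.linspace(0.0, 2.0, 400)                  # time, us
u_true = 4.0 + 3.6 * (1.0 - np.exp(-t / 0.5))   # mm/us, accelerating toward 7.6
x = np.concatenate(([0.0], np.cumsum((u_true[1:] + u_true[:-1]) / 2.0 * np.diff(t))))

rtd = run_to_detonation(t, x, d_cj=7.6)
```

Repeating this extraction over the range of input pressures yields the $x$ vs $P_s$ pairs that make up the training data.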

As discussed in previous sections, the πSURF methodology takes in the VSDs and, via Pagosa, ultimately calculates the RtD over a range of $P_s$. A plot of the RtD vs $P_s$ for a long-duration shock in a one-dimensional system yields a “Pop plot.” As such, the plot is a convenient measure of “shock sensitivity.” The Pop plots resulting from the initial dataset are discussed here. Pop plots were not produced for the second dataset because the curve fit requires multiple runs over a range of pressures for the same VSD; the second dataset varied the parameters in a manner that did not make this fitting possible.
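The Pop-plot construction amounts to a power-law fit of RtD against pressure. A minimal sketch follows; the sample points mix microstructures (values adapted from Table II) and are purely illustrative of the trend, not a Pop plot of a single formulation.

```python
import numpy as np

def pop_plot_fit(ps_gpa, x_mm):
    """Least-squares fit of the Pop-plot power law
    log10(x) = a + b * log10(Ps)."""
    b, a = np.polyfit(np.log10(ps_gpa), np.log10(x_mm), 1)
    return float(a), float(b)

# Illustrative (Ps, RtD) pairs showing the expected trend.
ps = np.array([9.57, 10.64, 12.16, 13.08, 14.11])   # GPa
x = np.array([10.26, 9.45, 7.23, 6.30, 6.05])       # mm

a, b = pop_plot_fit(ps, x)   # b < 0: higher pressure, shorter run distance
```

A more sensitive formulation shifts this line downward (shorter runs at a given pressure), which is how the colored curves in Fig. 7 are compared.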

### Qualitative (heuristic) observations

Figure 7 shows the Pop plots resulting from the initial dataset. For each case, the black line indicates the nominal Pop plot. Every subfigure shows the same dataset, with the curves colored to highlight a particular VSD characteristic. The top-left subfigure of Fig. 7 highlights the effect of $\mu_1$, the natural logarithm of 1000 or 500 nm. The effect of this parameter is significant. The top-right subfigure highlights $\mu_2$, the natural logarithm of 100 or 10 nm. This parameter has a relatively weak effect. The middle-left subfigure highlights $\sigma_1$, 0.4 or 0.6. The plot suggests that increasing $\sigma_1$ decreases the magnitude of the effect of changing $\mu_1$. The middle-right subfigure highlights $\sigma_2$, 0.4 or 0.6. This parameter has a weak effect. The bottom subfigure shows $VF$, 8.5 or 34. We see a small but clear increase in sensitivity. In all cases, the characteristics of the large mode have a much greater effect on the sensitivity than those of the small mode. According to the πSURF model, “sensitivity” will increase when the population of voids increases above the lower integration limit, i.e., the critical void size. As shown in Fig. 2 (right), for the maximum $P_s$ (14 GPa) investigated here (relevant to shock initiation), the minimum critical void size is ∼50 nm. Referring to the VSD of Fig. 1, this suggests that porosity on the high side of the large mode will have the most significant effect on the Pop plot. In the discussion section, we compare these observations with those of the machine-learned surrogate model.

## SURROGATE MODEL RESULTS

Figures 8–10 represent key aspects of the machine-learned surrogate model. The figures present 2D slices of the six-dimensional surrogate model of the RtD, $x$. Qualitatively, the figures show weak to no dependency on $\sigma_1$, $\sigma_2$, and $\varphi_1/\varphi_2$. Figure 8 emphasizes the strong effect of $P_s$ and $\mu_1$ for fixed values of $\sigma_1=0.5$, $\sigma_2=0.5$, and $\varphi_1/\varphi_2=21.25$ for three different values of $\mu_2$ corresponding to 10, 50, and 100 nm. Figures 9 and 10 show the relatively small variation of $x$ with respect to $\mu_2$ for three different values of $P_s$ and $\mu_1$ (with $\sigma_1$, $\sigma_2$, and $\varphi_1/\varphi_2$ fixed at the values used for Fig. 8). Overall, a qualitative visual analysis of the surrogate model graphical output finds the order of importance to be $P_s$, $\mu_1$, $\mu_2$, $\sigma_1$, $\sigma_2$, and $\varphi_1/\varphi_2$.

The ML tool also has the ability to quantitatively rank the importance of the factors. It does so by employing the Matérn kernel function,

where *y* is the predicted output, $\sigma$ is the variance, *r* is the Euclidean distance between the training point and the prediction point, and $d_l$ is the length of the support along the *l*th direction. Thus, if the *n*th feature is insignificant, i.e., it does not contribute to the prediction *y*, then $d_n$ will shrink to a small value. The rank/importance of the *n*th feature can thus be measured quantitatively as $\psi_n = d_n/\sum_{i=1}^{l} d_i$, and all *l* features can be sorted in decreasing order of $\psi_n$ to determine their respective rank/importance in predicting the output *y*. Applying this function, we find the normalized relative importance ranks as $P_s$: 0.5415, $\mu_1$: 0.4581, $\mu_2$: 3.9 × 10^{−4}, with the other factors essentially zero.
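The ranking rule $\psi_n = d_n/\sum_i d_i$ is straightforward to sketch. The support values below are hypothetical, chosen only to mimic the reported ordering ($P_s$ first, then $\mu_1$, then $\mu_2$); they are not the fitted kernel parameters from the study.

```python
import numpy as np

def feature_importance(supports):
    """psi_n = d_n / sum_i d_i from the per-direction kernel supports."""
    d = np.asarray(supports, dtype=float)
    return d / d.sum()

# Hypothetical supports for (Ps, mu1, mu2, sigma1, sigma2, phi1/phi2).
d = [5.4, 4.6, 0.004, 1e-6, 1e-6, 1e-6]
psi = feature_importance(d)
rank = np.argsort(psi)[::-1]   # indices sorted by decreasing importance
```

Sorting $\psi_n$ in decreasing order recovers the ranking reported in the text, with the three trailing factors contributing essentially nothing.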

### Accuracy of the surrogate model

First, we show the training and the generalization error of the EL. As noted earlier, the EL uses bootstrap aggregated samples, i.e., a random subset of the input data (the in-bag data) is used for training a single instance of the stacked ensemble, while the remaining data in the input (i.e., the out-of-bag data) are used for determining the generalization of the particular instance of the stacked ensemble. Several such stacked ensembles are trained with different in-bag data, and the final response of the ML is the arithmetic mean of the responses of the individual instances of the stacked ensemble. Consequently, every instance of the stacked ensemble has different in-bag and out-of-bag (normalized root mean square) errors, resulting in a distribution of in-bag and out-of-bag errors. These distributions are shown in Fig. 11 for the training data used to train the ML model, as discussed previously. The in-bag error provides an estimate of the training error of the ML model, while the out-of-bag error provides an estimate of the generalization error that is expected from the ML model. As the figures show, the median training and testing RMSE are 0.004 and 0.0075, respectively, the lower 95% confidence intervals are 0.0031 and 0.0058, while the upper 95% confidence intervals are 0.0062 and 0.0118.
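The in-bag/out-of-bag bookkeeping can be sketched with a toy learner. The data, the linear least-squares "learner," and the replicate count are stand-ins; the point is only how the two error distributions of Fig. 11 arise from bootstrapping.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data; the "learner" is a linear least-squares fit. The point is the
# in-bag / out-of-bag bookkeeping, not the stacked ensemble itself.
X = rng.uniform(0.0, 1.0, (100, 2))
y = X[:, 0] + 0.1 * rng.normal(size=100)

def fit(Xb, yb):
    A = np.column_stack([Xb, np.ones(len(Xb))])
    coef, *_ = np.linalg.lstsq(A, yb, rcond=None)
    return coef

def predict(coef, Xq):
    return np.column_stack([Xq, np.ones(len(Xq))]) @ coef

in_bag_rmse, oob_rmse = [], []
for _ in range(200):
    idx = rng.integers(0, len(X), len(X))        # in-bag (sampled with replacement)
    oob = np.setdiff1d(np.arange(len(X)), idx)   # out-of-bag remainder
    coef = fit(X[idx], y[idx])
    in_bag_rmse.append(np.sqrt(np.mean((predict(coef, X[idx]) - y[idx]) ** 2)))
    oob_rmse.append(np.sqrt(np.mean((predict(coef, X[oob]) - y[oob]) ** 2)))

median_train = float(np.median(in_bag_rmse))
median_test = float(np.median(oob_rmse))
```

The median of the out-of-bag distribution plays the role of the generalization error quoted in the text, and its spread gives the confidence intervals.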

To further determine the accuracy of the ML model, we compute errors for extra simulations that were never used to train any instance of the model. The ML model learned from a total of 360 training points, randomly distributed across the six-dimensional input parameter space $(\mu_1,\mu_2,\sigma_1,\sigma_2,\varphi_1/\varphi_2,P_s)$, bounded by the lower and upper limits given in Table II. To analyze the accuracy of the trained ML model, we select 10 random data points that were not used in the training set. The ML model estimates the RtD for these points. Table II presents the input parameters for the testing dataset, the ML predicted RtD values, and the πSURF calculated values. To ascertain the quality of prediction using feature selection (i.e., disregarding the unimportant features as determined by the ML algorithm), Table II shows the ML predictions both with and without feature selection. As can be seen from the table, including the unimportant features has only a modest effect on the ML predictions. Therefore, for the sake of computational robustness, only the ML results with feature selection are shown hereafter. Figure 12 shows a bar chart comparing the ML predicted run-to-detonation distance with the πSURF calculated values at the testing points. Generally, the ML model predictions agree well with the πSURF calculated values. For some cases, e.g., case 2, they agree very well, and for others, such as case 9, the agreement is not as good. To quantify the accuracy of the ML model based on the 10 data points, we calculate the normalized $L_2$ norm of the error, $\epsilon_{ML}$,

$$\epsilon_{ML} = \sqrt{\sum_{i=1}^{10}\left(x_{ML}^{i}-x_{\pi SURF}^{i}\right)^{2}} \Big/ \sqrt{\sum_{i=1}^{10}\left(x_{\pi SURF}^{i}\right)^{2}}.$$
| Case | exp(μ_{1}) (nm) | exp(μ_{2}) (nm) | σ_{1} | σ_{2} | VF | P (GPa) | Raw data, h (mm) | ML pred., h (mm) | ML pred., h (mm) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 967.64 | 16.17 | 0.56 | 0.597 | 10.59 | 9.57 | 10.2567 | 10.1328 | 10.1749 |
| 2 | 739.49 | 61.11 | 0.504 | 0.509 | 20.25 | 12.16 | 7.23047 | 7.2299 | 7.2775 |
| 3 | 994.41 | 23.72 | 0.41 | 0.584 | 32.92 | 14.11 | 6.40559 | 6.5382 | 6.1253 |
| 4 | 566.57 | 70.675 | 0.502 | 0.536 | 25.47 | 12.83 | 6.92909 | 7.0162 | 6.9964 |
| 5 | 638.97 | 37.85 | 0.467 | 0.476 | 16.18 | 13.08 | 6.30005 | 6.5411 | 6.2646 |
| 6 | 850.75 | 38.73 | 0.547 | 0.527 | 24.69 | 10.19 | 9.44948 | 9.306 | 9.5284 |
| 7 | 722.49 | 76.71 | 0.46 | 0.501 | 31.32 | 14.11 | 6.05461 | 5.77 | 6.0505 |
| 8 | 579.18 | 38.6 | 0.52 | 0.476 | 24.95 | 10.64 | 10.4287 | 10.3483 | 10.4245 |
| 9 | 943.39 | 10.63 | 0.41 | 0.587 | 30.53 | 9.93 | 8.59151 | 9.6267 | 8.8016 |
| 10 | 584.24 | 32.63 | 0.5 | 0.544 | 33.98 | 12.99 | 6.54423 | 6.7752 | 6.6242 |


Based on the $x_{ML}^{i}$ and $x_{\pi SURF}^{i}$ values provided in Table II, the $L_2$ error is approximately 4.56%, i.e., the accuracy of the ML model is 95.44%. In the $L_2$ error calculation for the ML model, we use the mean value of the bootstrap-aggregated ML predicted response, i.e., $x_{ML}^{i}$. Due to the use of bootstrapping, the ML methods used here seamlessly provide the model fitting uncertainties for any probed data point. To understand the nature of the fitting uncertainties from the ML model, Fig. 13 shows the model fitting uncertainty distribution for six of the ten cases from the testing dataset. Figures 13(a)–13(f) show, for each case, the model fitting uncertainty distribution (dashed black curve) along with the ML predicted mean value (solid black line), $x_{ML}^{i}$, and the πSURF calculated response, $x_{\pi SURF}^{i}$ (solid red line). We see for all six data points that the ML predictions agree well with the πSURF calculated response. Additionally, the πSURF calculated values fall within the uncertainty bounds obtained from the ML model. Therefore, the ML-derived surrogate model provides a high-fidelity quantitative representation of the relationship between the RtD, $P_s$, and microstructural features for PBX 9502.
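The normalized $L_2$ error can be checked directly from the Table II values; treating the first ML prediction column as $x_{ML}^{i}$ reproduces the reported ≈4.56% to within rounding.

```python
import numpy as np

# pi-SURF calculated RtD and mean ML predictions from Table II (mm).
x_pisurf = np.array([10.2567, 7.23047, 6.40559, 6.92909, 6.30005,
                     9.44948, 6.05461, 10.4287, 8.59151, 6.54423])
x_ml = np.array([10.1328, 7.2299, 6.5382, 7.0162, 6.5411,
                 9.306, 5.77, 10.3483, 9.6267, 6.7752])

# Normalized L2-norm error of the surrogate over the 10 test points.
eps_ml = float(np.linalg.norm(x_ml - x_pisurf) / np.linalg.norm(x_pisurf))
# eps_ml is approximately 0.0457, i.e., the reported ~4.56% within rounding.
```

Note that case 9, the worst single-point disagreement in Fig. 12, dominates this error sum.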

## COMPARISON OF HEURISTIC AND MACHINE-LEARNED GUIDANCE

The most interesting aspect of this study is the comparison of heuristic observations and surrogate model trends. One obvious heuristic conclusion, reached even without examining the training data, is that the input shock pressure has a strong effect; this is common knowledge in our field of study. After running simulations of the synthetic microstructures, most observers with some background in high-explosives science and engineering would conclude that $\mu_1$ has the most significant effect, with the implication that the formulator would use that parameter to tune the Pop plot of a formulation. A very experienced observer would tease out the subtleties of the other parameters, noting that they have minor but meaningful effects.

The machine algorithm also easily detected the strong effect of $P_s$ and accurately quantified it with no prior experience and only a short training time. It also easily recognized the importance of $\mu_1$ and accurately quantified it, again with very little experience. However, the machine algorithm did not detect the importance of the other parameters, whereas an experienced human operator certainly would. In practical applications, this may be of little consequence. The significant advantage of the machine algorithm is apparent in the context of designing explosive formulations for targeted performance. For humans, this activity would require a *very* experienced formulator (exceedingly rare worldwide) or a formulator working alongside a specialized theory and modeling expert with access to an exclusive tool set (HPC resources and limited-access hydrocodes). Such specialized persons or teams would easily recognize the factors having the greatest influence and the more nuanced effects, as well as how to make formulations meeting certain objectives, after reviewing the training data. Conversely, the machine algorithm quickly, efficiently, and dispassionately digests the training data and easily recognizes the most significant factors while missing more subtle behavior. These weaknesses notwithstanding, the machine tool integrates rare human knowledge and the capabilities of exclusive tools and reduces them to a tool that is useful to any formulator with a laptop computer. The tool presented here, of course, has limited applicability (PBX 9502 formulation variants), but it provides a clear framework for developing more generalized tools by expanding the training dataset.

## CONCLUSION

In this study, we investigated the robustness of the ML algorithm developed by Sen and Rai in learning a training dataset consisting of distinct microstructural features (characteristics of the VSD) and calculated initiation performance metrics. Our results show that the tool is indeed robust for predicting the initiation performance of the explosive PBX 9502 across a range of defining characteristics of the VSD. Furthermore, it correctly identified the most important factors. However, the algorithm did not recognize the more nuanced effects of VSD characteristics having a weaker influence, effects that might be detectable by an experienced human observer. We conclude generally that this is a very promising approach, as a small team of specialists with access to restricted resources can produce a widely available and usable tool for optimizing explosive formulations to meet specific performance specifications. Our ongoing work will further develop the algorithm so that the user can specify performance metrics and receive target values for specific microstructural details. Furthermore, we intend to generalize the tool for a broader range of explosives by expanding the training data to include chemical kinetic factors beyond the microstructural details considered here.

## SUPPLEMENTARY MATERIAL

See the supplementary material for the full training dataset. It also includes the void-volume and void-area distributions highlighting the effect of changing the input parameters. These distributions are shown on log–log and semi-log plots.

## ACKNOWLEDGMENTS

The research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under Project No. 20210339ER.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

## DATA AVAILABILITY

The data that support the findings of this study are available within the article and its supplementary material.

## REFERENCES

*PAGOSA Physics Manual*, Technical Report No. LA-14425-M (Los Alamos National Laboratory, Los Alamos, NM, 2010).