Machine learning techniques are seeing increased use in predicting new materials with targeted properties. However, widespread adoption of these techniques is hindered by the substantial experimental effort required to test the predictions. Furthermore, because failed synthesis pathways are rarely communicated, it is difficult to find prior datasets sufficient for modeling. This work presents a closed-loop, machine-learning-based strategy for the colloidal synthesis of nanoparticles, assuming no prior knowledge of the synthetic process, in order to show that synthetic discovery can be accelerated despite limited data availability.
I. INTRODUCTION
The discovery of novel materials has the potential to solve a myriad of grand challenges, ranging from the world’s rapidly changing climate to humanity’s need for water and food security. These challenges are significant not only because their solutions require technological advancements beyond the current state of the art but also because these solutions need to be developed and deployed much more rapidly than has occurred in recent history. The average timeline from discovery to commercial deployment for technological innovations has been 20 years.1 If we are to address the challenges facing humanity without significant disturbances to global society, this timeline needs to be accelerated by an order of magnitude.
Accelerating the discovery-to-deployment timeline with machine learning techniques is not a novel concept. Efforts such as AFLOW,2 the Materials Project,3 and the Open Quantum Materials Database4 have demonstrated success in accelerating the rate of discovery using computational design of materials on massive scales. These resources utilize high-performance computing to predict materials properties across hundreds of thousands of candidate materials in order to identify the materials that are most likely to meet the design targets for the intended application.5 While these techniques can significantly accelerate the identification of candidates, the burden of validation lies with the human scientists who must still synthesize the materials. The discovery of synthetic routes can take years and thousands of person-hours of effort and is the current bottleneck in accelerating the deployment of novel materials to address global challenges.
Organic small-molecule synthesis is one area that has been especially efficient in discovering new materials. Automated experimental processes have been demonstrated for the synthesis and analysis of small molecules for drug discovery.6,7 These systems typically employ computer control of a continuous flow system with in-line analytical instruments, coupled to high-throughput calculations that predict both the performance of the drug and the reaction coordinate of the synthesis with high fidelity. This combination of experimental and theoretical data is used in a feedback loop such that each experiment informs its successor, efficiently searching for candidates within the enormous parameter space. Navigating the parameter space can be done either by systematically probing all reaction conditions or by utilizing a learning algorithm to select regions with a high probability of success.8–11
Compared to small organic molecules, the synthesis of materials brings increased complexity because there are no robust theoretical frameworks that predict reaction coordinates from the atomic (or molecular reagent) scale to the meso- or macro-scale products. For example, in nanoparticle synthesis, there is no known unified framework for relating the physicochemical properties of the reagents to the structure of the product, even though these materials have been studied heavily for decades. Matters are further complicated by the many different synthesis methods, such as sol–gel, thermal decomposition, hot injection, and heated solutions in batch or flow configurations, each providing unique process-dependent results. Even within a single synthetic method, the synthetic parameter space can be large, including variation in reagent choice, reaction temperature, reagent concentration, and reaction time.12 Prior to computational techniques, these parameter spaces were navigated by design rules distilled from decades of research. The result is a design strategy that is limited by human intuition and is difficult or impossible to reproduce.
Autonomous exploratory synthesis has been reported for a variety of materials, ranging from nanoparticles and nanotubes to additively manufactured materials.13–16 This work presents a closed-loop autonomous framework (no human intervention required) for the data-driven discovery of nanoparticle synthesis recipes. This framework is applied toward the colloidal synthesis of palladium nanoparticles with specific size distributions. Palladium nanoparticles are used as catalysts for many chemical conversions, such as methane oxidation, and their selectivity depends heavily on their size and homogeneity.17 This system was chosen because it has been studied extensively in prior work, providing a benchmark for the efficiency of the autonomous framework as well as enabling us to validate the resulting design rules against the existing literature. Moreover, due to the complexity of the synthetic parameter space, it is likely that there are recipes that have not yet been discovered by intuition-limited methods.
The ability to synthesize specific size distributions of particles is important in the study of catalytic activity. It is currently a challenging task to model nanoparticle catalysis from first principles. In addition to the atomistic properties of the active sites, the interactions among the nanoparticle, supporting substrate, and environment must also be considered. Because of this, systematic studies require libraries of well-characterized nanoparticles to experimentally infer the relationships between structure and catalytic activity.
In contrast to previous autonomous nanoparticle synthesis systems,8,16 our system integrates a high-throughput continuous flow reactor with a synchrotron x-ray light source for in situ characterization of the product. Two computers were used to control the reactor, design the recipes, analyze the scattering patterns, and control the light source. The use of small-angle x-ray scattering (SAXS) allows for control of nanoparticle shape and size independent of the end-use application. This reactor is shown schematically in Fig. 1(a). The result is a completely automated platform that, given the design objectives, will either converge on a recipe that is expected to meet the target or return a near-zero acquisition value to indicate that the design objectives appear unrealistic in the context of the training data. This framework can discover unique recipes within the constraints of the reactor and determine which targets are feasible or infeasible. By post-processing the dataset collected during the closed-loop experiment, the data resulting from the exploration can be leveraged to extract qualitative design rules.
II. RESULTS AND DISCUSSIONS
A. Automated control
Experimental materials discovery typically requires four general procedures: control over the synthesis conditions, measurement, analysis of the outcome, and planning the next best experiment. This is exemplified in the laboratory where a person selects the starting chemicals and experimental parameters, synthesizes the products in a Schlenk line reactor, extracts the products and prepares them for measurement [usually transmission electron microscopy (TEM)], analyzes the outcome in relation to the synthesis conditions, and plans the next best experiment by intuition. This sequence is performed cyclically until the desired end-product is synthesized or the experimental design constraints are found to be infeasible. Our approach to automation of these procedures is described in this section.
This work draws upon previously established chemical methodologies to produce atomically precise palladium nanoparticles.17,18 A continuous flow reactor was developed to control the experimental conditions, as shown in Fig. 1(b). Multiple pumps are used to individually dispense precise amounts of solution (trioctylphosphine, 1-octadecene, oleylamine, and precursor salts) for the chemical reaction. The total mixed solution then flows into a heated furnace where the nanoparticles nucleate and grow. The finished nanoparticles are then probed in situ by small-angle x-ray scattering (SAXS) at the output, and the scattering pattern is analyzed to estimate the statistics of the size distribution.
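As a concrete illustration of this dispensing step, the sketch below converts a recipe (total flow rate plus reagent volume fractions) into individual pump set points. It is a minimal sketch under our own naming assumptions, not the actual control-software interface:

```python
# Minimal sketch: per-pump flow rates from a recipe's volume fractions.
# Names are illustrative; the real reactor control software differs.

def pump_rates(f_total_ul_min, x_ode, x_top, x_oley):
    """Return per-pump flow rates (ul/min); the balance of the stream
    is carried by the Pd-TOP precursor solution."""
    x_precursor = 1.0 - (x_ode + x_top + x_oley)
    assert 0.0 <= x_precursor <= 1.0, "volume fractions must sum to <= 1"
    return {
        "ODE": f_total_ul_min * x_ode,
        "TOP": f_total_ul_min * x_top,
        "oleylamine": f_total_ul_min * x_oley,
        "precursor": f_total_ul_min * x_precursor,
    }

# Example: 80 ul/min total with 20% ODE, 5% TOP, and 10% oleylamine;
# the precursor solution carries the remaining 65% of the flow.
print(pump_rates(80.0, 0.2, 0.05, 0.1))
```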
It is expected that the synthesized nanoparticles will show some differences when compared to previous studies. In contrast to batch chemistry done in flasks and beakers, where the residence time is constant for all particles, a continuous flow reactor produces a distribution of residence times and, hence, a distribution of nanoparticle growth durations. This is due to the mixing of the fluid within the reactor: In principle, it is possible for any given molecule to remain in the reaction zone throughout the entire experiment. In exchange for this drawback, the continuous flow reactor can explore the parameter space far more efficiently. Our system can perform over 50 unique syntheses in 24 h without interruption.
B. Automated SAXS analysis
In-line SAXS analysis was used so that synthesis results could be known immediately at the reactor output. The SAXS measurements can be done on the formed nanoparticles in solution without any post-synthesis treatment. The results of the automated SAXS fitting were validated manually as well as by transmission electron microscopy (TEM) characterization, as shown in the supplementary material.
In-line analysis was performed by xrsdkit (X-ray Scattering and Diffraction Toolkit) v0.2.4 (https://github.com/scattering-central/xrsdkit), a home-built open-source software package by SSRL. XRSDkit can parameterize and objectively fit SAXS patterns for dilute, condensed (flocculated/disordered), or crystalline (superlattice) arrangements, all of which may be products of colloidal synthesis. Given a sufficient training set of analysis results, XRSDkit can train machine learning models to identify the sample content and estimate its physical parameters before objective fitting. For this work, the fit results were closely monitored to guarantee accuracy because inaccurate results are likely to sabotage the experimental design process.
A successful analysis of a nanoparticle synthesis product should identify the morphologies of the scatterer populations and then provide estimates of the intensities and physical parameters of each population. For particles, the parameters of interest are the statistics of the size distribution and the dilute particle scattering intensity. A more detailed description of this analysis is given in Sec. IV D as well as the supplementary material.
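To make the fitting step concrete, the following is a minimal sketch of a least-squares fit of a dilute, polydisperse-sphere model to a 1D scattering profile. It illustrates the kind of objective fit described here, using the standard sphere form factor and a Gaussian size distribution; it is not the xrsdkit API, and all names are our own:

```python
import numpy as np
from scipy.optimize import least_squares

def sphere_form_factor(q, r):
    """Normalized sphere form-factor amplitude for each (q, r) pair."""
    qr = np.outer(q, r)
    return 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3

def dilute_sphere_intensity(q, i0, r_mean, r_sigma):
    """I(q) for dilute spheres with a Gaussian radius distribution."""
    radii = np.linspace(max(r_mean - 4 * r_sigma, 1e-3),
                        r_mean + 4 * r_sigma, 101)
    weights = np.exp(-0.5 * ((radii - r_mean) / r_sigma) ** 2)
    # Scattering power scales as particle volume squared (r^6).
    weights *= radii ** 6
    ff2 = sphere_form_factor(q, radii) ** 2
    return i0 * ff2 @ weights / weights.sum()

def fit_saxs(q, intensity, guess=(1.0, 20.0, 2.0)):
    """Least-squares fit of (I0, mean radius, sigma), residuals in log space."""
    resid = lambda p: (np.log(dilute_sphere_intensity(q, *p))
                       - np.log(intensity))
    return least_squares(resid, guess,
                         bounds=([1e-6, 1.0, 0.1], [np.inf, 100.0, 20.0]))

# Demonstration on synthetic data: recover a 22 A mean radius, sigma of 2 A.
q = np.linspace(0.02, 0.5, 200)  # scattering vector, 1/A
truth = dilute_sphere_intensity(q, 5.0, 22.0, 2.0)
noisy = truth * np.random.lognormal(0.0, 0.02, q.size)
print(fit_saxs(q, noisy).x)  # approximately [5.0, 22.0, 2.0]
```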
C. Intelligent closed loop synthesis
The synthesis performed in this work requires the simultaneous tuning of multiple parameters to produce a result with specific properties. Nanoparticles for catalytic applications require selective morphologies to promote activity and stability at operating conditions.17 A successful synthesis would produce high-yield batches of nanoparticles with sizes very near the target size, arranged in a dilute colloidal morphology to facilitate washing and recovery of the particles. In this section, we describe how to formulate a machine learning problem to explore the synthesis space in search of such desirable products. In this work, experimental design, reactor control, and data acquisition were performed by home-built software, the platform for automated workflows by SSRL (PAWS), v0.11.1 (https://github.com/slaclab/paws).
Here, we present the standard Bayesian optimization approach with Gaussian process (GP) regression used in this work. To build models that numerically predict the outcomes of a recipe, we begin by casting the recipe into a vector x of real-valued parameters. We consider a recipe of five independent parameters: reactor temperature T_reac, total flow rate F_tot, and the volume fractions x of three liquid reagents [1-octadecene (ODE), trioctylphosphine (TOP), and oleylamine (oley)],

$$\mathbf{x} = \left(T_{\mathrm{reac}},\; F_{\mathrm{tot}},\; x_{\mathrm{ODE}},\; x_{\mathrm{TOP}},\; x_{\mathrm{oley}}\right). \tag{1}$$
Boundary conditions are imposed on the recipe parameters in order to accommodate the limitations of the equipment: The temperature must be between room temperature and the maximum allowable reactor temperature, the flow rates must be non-negative and below the maximum pump outputs, and the volume fractions must be non-negative and less than or equal to 0.3. Note that the sum of volume fractions in the recipe is at most 0.9: The remaining volume fraction is the metal precursor solution (during synthesis) or ODE (during background acquisition).
All relevant outcomes for a given recipe, y, are obtained through an objective analysis of the scattering pattern (see Sec. IV D and the supplementary material). After an initial set of recipes and results is obtained, models are trained to predict the outcomes for any given candidate recipe, and these models’ predictions are used to optimize the recipe for any given target. This work employs models that predict distribution statistics rather than single values so that the prediction of y_j consists of a mean value μ_j and a variance σ_j² (the variance of the predicted distribution),

$$\hat{y}_j(\mathbf{x}) \sim \mathcal{N}\!\left(\mu_j(\mathbf{x}),\; \sigma_j^2(\mathbf{x})\right). \tag{2}$$
Let the desired outcome be denoted by y*. The experimental design task is to describe how a predicted outcome relates to the desirable outcome in the form of an objective function f(x). Any objective function can be used, but it is convenient to formulate the objective as a statistical likelihood if possible. Because our models can be assumed to produce Gaussian distribution statistics, the value of each dimension of the objective can be evaluated by the corresponding Gaussian cumulative distribution function Φ_j,

$$\Phi_j(y) = \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{y - \mu_j}{\sigma_j\sqrt{2}}\right)\right]. \tag{3}$$
For example, let the best-yet outcome for y_j be denoted by y_j^†. For a predicted distribution N(μ_j, σ_j²) and a target y_j^*, a convenient objective would be the probability of getting closer to y_j^* than y_j^†,

$$f_j(\mathbf{x}) = \Phi_j\!\left(y_j^{*} + \left|y_j^{\dagger} - y_j^{*}\right|\right) - \Phi_j\!\left(y_j^{*} - \left|y_j^{\dagger} - y_j^{*}\right|\right). \tag{4}$$
This approach is reformulated for each dimension of the output to target a value, a range, a minimization, or a maximization (as described further in Sec. IV E). The objective function for evaluating a recipe can then conveniently be described as a joint acquisition value over all dimensions y_j of the predicted outcome y,

$$f(\mathbf{x}) = \prod_j f_j(\mathbf{x}). \tag{5}$$
The optimal recipe is then found at the maximum of the joint acquisition function,

$$\mathbf{x}^{*} = \underset{\mathbf{x}}{\arg\max}\; f(\mathbf{x}). \tag{6}$$
Section IV and the supplementary material contain more details on the models used in this work and how their predictions are used to optimize recipes. For now, it is sufficient to acknowledge that an independent distribution is predicted for every outcome of interest, and all the predicted distributions are used jointly to assess the acquisition value of the recipe.
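The sketch below renders Eqs. (2)–(6) in code, using scikit-learn's GP regressor as a stand-in for the home-built models; the data layout, function names, and the small variance floor are our assumptions:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

# X rows are recipes [T_reac, F_tot, x_ODE, x_TOP, x_oley];
# Y columns are the measured outcomes y_j. One independent GP per outcome.
def train_models(X, Y):
    return [GaussianProcessRegressor(normalize_y=True).fit(X, Y[:, j])
            for j in range(Y.shape[1])]

def joint_acquisition(models, x, y_target, y_best):
    """Product over outcomes of P(|y_hat_j - y_star_j| < |y_best_j - y_star_j|),
    evaluated from each GP's predicted mean and standard deviation."""
    x = np.atleast_2d(x)
    value = 1.0
    for gp, y_star, y_dagger in zip(models, y_target, y_best):
        mu, sigma = gp.predict(x, return_std=True)
        scale = max(float(sigma[0]), 1e-9)  # guard against zero predicted std
        half_width = abs(y_dagger - y_star)
        value *= (norm.cdf(y_star + half_width, loc=mu[0], scale=scale)
                  - norm.cdf(y_star - half_width, loc=mu[0], scale=scale))
    return value
```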
By this approach, a set of trained models can be used to optimize any number of recipes for any number of targets. When one or more new recipes and results have been collected, the models may be retrained to account for the knowledge gained from the new data points, and then, if needed, the retrained models can be used to try again at the same objectives. Sequential design loops like this have been used to accelerate materials design in a variety of fields.19
In this work, the previously described design strategy was implemented in two phases. The initial experiment allowed for minimization or maximization of a single scalar outcome (in this case, the width of the nanoparticle size distribution) with any number of additional constraints on other outcomes (such as the mean nanoparticle size). This approach established the functionality of the reactor and confirmed the feasibility of closed-loop optimization (see Fig. 2 and Sec. II C 1).
The goal of the first experiment was to produce an achievable nanoparticle size within a known chemical system. However, there may be scenarios where a library of nanoparticle sizes is required in an unknown system. A more generalized experiment was used to explore the system’s ability to learn how to make any given particle size within the limits of the hardware without prior training data. For this problem, we developed a home-built framework for Bayesian optimization with Gaussian process regression models. For each model training cycle, a range of particle sizes was attempted, and the training set was augmented to include the full set of successfully analyzed samples. The acquisition value functions were all designed to behave as likelihoods of improvement and were optimized jointly in standardized space (see Sec. II C). After a small initial data collection and only two training and synthesis cycles, the designer was able to drive the reactor to produce particles with radii ranging from 18 to 26 Å. The results of this approach are summarized in Fig. 3 and Sec. II C 2.
1. Closed-loop nanoparticle design
Nanoparticle catalysts are often synthesized with a targeted size to study the properties of that specific size of particles. However, synthesis of a specific nanoparticle radius is challenging if the chemical system is not well characterized by prior studies or existing datasets. Optimization of nanoparticle production requires balancing this average nanoparticle size with the size uniformity and total yield. Without computational tools, synthetic design requires exploration of the synthetic parameter space followed by the development of either empirical or theoretical design rules. Because this process may be involved and time-consuming when performed by humans, automation may have much to offer.
We used the Citrination platform, developed by Citrine Informatics, to assess the viability of closed-loop automated synthesis for a single size target. This software makes use of the Forests with Uncertainty Estimates for Learning Sequentially (FUELS) framework for materials design.19 More specific details about the design methodology can be found in Sec. IV.
Here, we employ single-objective design under multiple constraints. The design objective was to minimize the width of the nanoparticle size distribution (sigma or coefficient of variation), and all other design requirements were formulated as constraints. Binary flags for the presence of colloidal, condensed, and superlattice products were encoded as numbers in [0, 1], and the outcomes were constrained to >0.9, <0.1, and <0.1, respectively. The mean size was constrained to be 30 ± 2 Å, and the total scattered intensity was constrained to be greater than 100 (arbitrary integrated intensity units). Constraints were also set on the input space: Temperatures were constrained within [180, 300] °C, total flow rates were constrained within [40, 120] μl/min, and volume fractions were constrained within [0, 0.3].
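Expressed as data, the design problem is compact. The following is a hypothetical encoding of the constraint set for illustration; the Citrination platform uses its own schema:

```python
# Hypothetical encoding of the single-objective design problem.
design_problem = {
    "objective": ("minimize", "sigma"),            # size-distribution width
    "output_constraints": {
        "flag_colloidal":    (">", 0.9),
        "flag_condensed":    ("<", 0.1),
        "flag_superlattice": ("<", 0.1),
        "mean_radius_A":     ("between", 28, 32),  # 30 +/- 2 A
        "intensity":         (">", 100),           # arbitrary units
    },
    "input_constraints": {                         # (low, high) bounds
        "T_reac_C":     (180, 300),
        "F_tot_ul_min": (40, 120),
        "x_ODE":        (0.0, 0.3),
        "x_TOP":        (0.0, 0.3),
        "x_oley":       (0.0, 0.3),
    },
}
```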
With the input and outcome constraints set, the design tool was launched at maximum design duration and the top two candidates for each of the two acquisition functions (described in Sec. IV; four recipes total) were applied to the reactor. After running the four recipes and processing their SAXS patterns (as outlined in Sec. IV), the analysis results were added to the modeling dataset and the design process was repeated. Before each design optimization, the full training set was reviewed and any samples that failed the objective fitting procedure on the first (automated) pass were corrected or removed so that the design optimization would not be misguided by incorrectly labeled outcomes.
The results of the experiment are shown in Fig. 2. The first 16 experiments were performed on a selected sparse grid of conditions, providing an initial training set. The values for the grid were chosen to reflect expected minimum and maximum synthesis conditions based on the previous experimental literature.17,18,20,21 The recipe models were trained, the design optimization was performed, and the following four experiments (trials 17–20) were taken from the first batch design results. These four experiments were analyzed and uploaded, the dataset was corrected by human intervention if necessary, and the process was repeated. A recipe meeting all our specified constraints was formulated within 37 total experiments (21 recipes after the initial training data). The system operated for an additional 14 experiments, until exhaustion of the precursor reservoirs, to ascertain whether a more optimal recipe could be found. During this period, the measured radii and intensity began to deviate from the optimum values; however, the size distribution (sigma) remained narrower than in the previous experiments. Given the experimental constraints, this proof of concept demonstrates the feasibility of fully automated, closed-loop synthetic development.
2. Mapping feasible synthesis space
The initial closed-loop experiment suggested that regions of the Pd nanoparticle synthesis space could be sequentially learned toward one desired outcome. The more general design problem is to explore the entire synthesis space of the material system by targeting a suite of different nanoparticle sizes. This ultimately produces a reactor that can discover recipes on demand for any particle size (within the limits of the reagents and the hardware) and, furthermore, can accurately indicate when a set of objectives is unlikely to be synthesizable. This experiment used home-built, free and open-source tools built around a customized seven-objective optimization problem, as described in Sec. IV.
The initial modeling dataset was formed from 32 recipes selected from a grid to reflect a sparse dataset commonly seen in relatively new and unexplored systems. Out of these 32 recipes, 29 were amenable to least-squares fitting (see Sec. IV E, Synthesis Recipe Optimization). The first round of Gaussian process models was trained on these 29 data points to predict the seven objectives identified in Sec. II C. Equation (6) was optimized by Metropolis Monte Carlo for 40 000 iterations for each objective, targeting mean particle radii from 10 to 42 Å in 4 Å steps. One set of trained models thereby produces a total of nine recipes, each designed for one of the nine targeted sizes. The nine recipes were then synthesized by the flow reactor, analyzed, and added to the training dataset. The Gaussian process models were then re-trained, and the process was repeated two more times. The results for all three trials are shown, from left to right, in Fig. 3.
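A minimal sketch of this Metropolis Monte Carlo search over recipes follows; the step size, inverse temperature, and bounds handling are our assumptions, and `acq` would be the joint acquisition of Eq. (6) with a fixed target (e.g., the `joint_acquisition` sketch shown earlier):

```python
import numpy as np

def metropolis_optimize(acq, x0, bounds, n_iter=40_000, step=0.05, beta=20.0):
    """Maximize a scalar acquisition function by Metropolis Monte Carlo.

    acq: callable mapping a recipe vector to an acquisition value.
    bounds: (low, high) arrays that clip each recipe dimension.
    """
    rng = np.random.default_rng()
    low, high = (np.asarray(b, float) for b in bounds)
    x, fx = np.asarray(x0, float), acq(x0)
    best_x, best_f = x.copy(), fx
    for _ in range(n_iter):
        # Propose a move scaled to each dimension's allowed range, then clip.
        x_new = np.clip(x + step * (high - low) * rng.normal(size=x.size),
                        low, high)
        f_new = acq(x_new)
        # Always accept uphill moves; accept downhill moves with
        # Boltzmann probability exp(beta * (f_new - fx)) < 1.
        if f_new >= fx or rng.random() < np.exp(beta * (f_new - fx)):
            x, fx = x_new, f_new
            if fx > best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f
```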
Panels (a)–(c) of Fig. 3 show the acquisition values for the optimized recipes, with panels (d)–(f) showing the targeted, predicted, and measured colloidal nanoparticle sizes, panels (g)–(i) showing the predicted and measured size distribution widths, panels (j)–(l) showing the predicted and measured scattering intensities, and panels (m)–(r) showing the recipe information.
In the first trial (leftmost column), the measured nanoparticle radii align with neither the predicted values nor the target values. The polydispersity (sigma/coefficient of variation, see the supplementary material) remains below 0.2 for all size targets except 30 Å, where it is greater than 0.7. For the 34 Å target, no results are reported because no particles were synthesized and the scattering pattern was not amenable to least-squares fitting; this is an example of a sample that should be removed for subsequent training cycles.
In the second trial (center column), the measured particle radii for some targets fall within 1 Å of the predicted and target values for sizes of 18, 22, and 26 Å. The model correctly predicts that it would be unable to synthesize larger particles colloidally: Larger particles are indeed produced for the larger targets, but the intensity panels show that these samples included undesirable condensed particle morphologies, so the third cycle should be expected not to attempt these recipes again.
The third trial (rightmost column) produces particles that almost exactly match the target and predicted values at 18, 22, and 26 Å. The 30 Å target shows that the model is no longer attempting the recipes that lead to the larger-size particles that failed to remain colloidal in the previous cycle. Targets above 30 Å were not attempted in the third cycle due to the limited availability of the synchrotron source. The smallest nanoparticle radius produced from this trial was 18 Å.
The near-zero acquisition values for radii outside the 18–26 Å range suggest that this is outside the feasible size range for our reagents and hardware. It is possible that the range is slightly wider, but to confirm this, one would have to run finer-grained tests from 14 to 18 Å and from 26 to 30 Å.
Comparing against the literature, we find that reported size ranges are consistent with our machine’s conclusions. Kim et al. were able to produce particles with 3.5–7 nm diameter (17.5–35 Å radii) using oleylamine and TOP as the surfactant and solvent.20 Wu et al. were able to synthesize particles ranging from 3.4 to 5.5 nm diameter (17–27.5 Å radii) using oleylamine and TOP as the surfactants and ODE as the main solvent.22 The chemistry from this work closely follows that from Wu et al., with both using ODE as the primary solvent as well as TOP and oleylamine as the surfactant ligands.
Larger sizes achieved by Kim et al. and Wu et al. require fundamental deviations from the base chemistry used in this work, for example, a different solvent than ODE or the inclusion of oleic acid. It should be noted that there were instances where the machine-synthesized nanoparticles fell outside these normal ranges; however, the measured yields and polydispersities were not within the accepted constraints.
D. Learning nanoparticle design rules
Sections II A–II C describe the merit of using machine learning and automated synthesis in the pursuit of targeted design. In this section, we investigate whether these techniques can also yield heuristics for how the target property behaves as a function of experimental parameters. The concept of a design rule can take on many forms and often aims at providing a notional trend in what happens when a chemist adds more or less of an ingredient, usually with regard to the value of a measured property. Based on this definition, we represent a design rule as the slope of the model’s response to a given input. For example, what chemical trends does the machine extract to successfully design recipes? Can the machine effectively discern a design rule?
The first question is addressed by reviewing the recipe data shown in panels (m)–(r) of Fig. 3. The variable input parameters for each recipe include oleylamine, TOP, ODE solvent dilution, temperature, and flow rate. The first cycle (first column) does not have an immediately discernible trend in temperature and flow rate, but by the third cycle (third column), it is apparent that higher temperatures and lower flow rates are correlated with larger particle sizes. This is no surprise, as higher temperatures should accelerate the growth kinetics of the nanoparticles, and higher flow rates should decrease growth durations.
We also seek to understand general design rules for particle morphology as a function of experimental inputs. ODE is the main non-coordinating solvent and does not normally react with the metal salt or coordinating ligands,23 but the amount of solvent directly dilutes or concentrates the reacting agents, controlling the overall activity of the system. TOP and oleylamine are ligands that bind to the surface of the nanoparticles, providing control over the nucleation and growth kinetics.
In the first cycle of Fig. 3, the particle size appears to be positively correlated with the volume fraction of ODE and negatively correlated with that of oleylamine. This trend is reversed in the second cycle and eliminated in the third cycle. In the first two cycles, TOP is introduced in small amounts with no evident correlation to target size, favoring slightly more TOP for intermediate sizes. By the third cycle, however, a strong correlation has emerged between the TOP concentration and particle size.
Previous experimental work confirms the correlation between the TOP content and particle size.21,22 These works find that TOP slows the nucleation kinetics by increasing the thermal stability of the initial Pd salt precursor. Oleylamine, on the other hand, has been shown to enhance the nucleation kinetics after initial thermal decomposition.22 There is experimental evidence that oleylamine has a stronger binding affinity to Pd atom sites and can be exchanged with the TOP ligand.21 It is also noted that the effect of oleylamine on the nanoparticle size is slight when compared to that of TOP.22
Exploring high-dimensional spaces to achieve a desired outcome is a process that is difficult to visualize in two dimensions and challenging to execute efficiently in the laboratory. In many situations, investigators intuitively sweep through a set of recipes in a linear fashion. This leads us to ask whether machine learning techniques can be used to determine the design rules more efficiently than intuitive methods. To investigate how effectively machine learning can discover a heuristic, we focused on the effect of TOP on the Pd particle radius. We used simulated sequential learning, beginning with ten initial data points, to examine the convergence of an estimate for the TOP coefficient in a linear regression of particle radius as a function of experimental inputs.
In each sequential learning iteration, we add in new experimental information to the training dataset, as selected by three different acquisition functions (described in more detail below), and compare the estimated TOP regression coefficient using partial information to the value we obtain when all experimental data are available. Figure 4 shows the results of this process.
We focus on two aspects of the results in Fig. 4: convergence of the TOP coefficient as a function of available data and the behavior of coefficient uncertainty as a function of available data. The first acquisition function, random search, leads to a median estimate for the TOP coefficient that diverges for nearly 40 experiments before decreasing toward the true value. While the uncertainty of random search, in the form of the interquartile range (IQR), smoothly decreases, it does so more slowly than the uncertainty for the max uncertainty-based strategy late in the simulations (training examples >∼100), as indicated by the differing slopes of the IQRs. The second acquisition function, which involves sweeping through TOP concentrations from high to low, obtains an initially negative median estimate for the TOP coefficient; the estimate also oscillates as it approaches the true value. The sweep method’s uncertainty tends to be smaller, likely because there is much greater rigidity in how this method selects candidates (indeed, any variability stems from the randomly selected ten initial training examples). Finally, the ML-driven max uncertainty strategy selects as its next experiment the candidate with the greatest uncertainty in particle radius; it provides the most accurate median estimates when less data are available (training examples <∼100), although (based on the widths of the IQRs in Fig. 4) the magnitude of its performance advantage over random search is not large. We also observe that, near the end of the simulations, the median TOP coefficient estimates from max uncertainty and random selection are very similar. Since all three acquisition functions reach convergence by the end of our simulation, selecting an ideal strategy for future training data will be dependent on the system and availability of data.
III. CONCLUSIONS
We created an experimental and computational framework that allows synthetic chemists to automate colloidal synthesis and discovery. It utilizes a continuous flow reactor with in-line SAXS analysis to update machine learning models and uses them to optimize recipes in a sequential learning loop. Our initial experiment utilized the Citrination platform to validate the feasibility of automated synthesis. Based on an initial sparse training grid, the Citrination platform identified a synthesis solution meeting all design constraints within 21 design iterations. A set of Gaussian process regression models was then used to map the entire feasible synthesis space of the reactor in just three model training cycles.
The reactor was able to discover synthesis trends and produce candidate recipes that were distinct from the training data in both the recipe and the predicted outcomes. Furthermore, the system was able to achieve these results without the explicit enumeration of the synthesis design space and without applying any prior chemical intuition.
The strength of this chemically agnostic approach is that it makes no initial assumptions about the system. This is ideal for new materials discovery, where the physical and chemical interactions are likely not well-understood. The generality of the approach lends itself well to a variety of problems in colloidal materials synthesis, and because it relies on open-source software, it can be implemented at any facility with SAXS instrumentation.
The post hoc analysis of the design rules extracted from the data indicates that even in the absence of physicochemical information, the algorithm can learn the relationship between the synthesis input and measured products. Future work will focus on training models that are chemically aware such that the input space is defined by the physicochemical descriptors of the reagents and the outputs are the product structure. This will enable the field to extend beyond commonly used chemical reagents and discover materials that leverage novel reaction pathways.
We may draw some general conclusions from the techniques applied here. First, materials informatics techniques, such as the sequential learning method used here, can help efficiently study the design rules for a system in addition to efficiently optimizing recipes. Second, analyzing the design rules in the manner described, alongside methods such as sensitivity analysis, provides a means to interpret or trust how the algorithm is learning. We envision future practitioners using this approach periodically during recipe optimization to infer if the algorithm is following conventional heuristics or moving beyond the domain of applicability for the synthetic conditions.
IV. METHODS
A. Experimental equipment
A custom continuous flow reactor was constructed for use at the beamline. Five gas pressure pumps (Mitos P-Pump) fitted with flow rate sensors (Dolomite Microfluidics) were individually controlled to dispense the desired concentrations of reagents. Each of the five pumps was loaded with one of the following: Pd–TOP complex, oleylamine, trioctylphosphine (TOP), 1-octadecene for dilution, and 1-octadecene for flushing between recipes. The system was connected using PTFE tubing and PEEK fittings (purchased from idex-hs). A custom tube furnace was constructed to hold a 2 mm outer diameter capillary with 15 cm of heated length. A Cryocon 24C temperature controller (Cryogenic Control Systems, Inc.) with a type K thermocouple was used to control the temperature.
B. Chemical preparation
The starting Pd–TOP complex was synthesized in a fashion similar to that described by Wu et al.18 by combining 1-octadecene, ODE (90%, Sigma-Aldrich), with palladium acetylacetonate, Pd(acac)2 (35% Pd, Sigma-Aldrich), at a concentration of 0.024M in a flask with magnetic stirring. The flask was then put under vacuum and nitrogen purge at 60 °C. Finally, trioctylphosphine (97%, Sigma-Aldrich) was added at a concentration of 0.09M to form a transparent yellow solution. This solution was then loaded into one of the computer-controlled chemical pumps.
C. In-line SAXS
Real-time nanoparticle size was determined by small-angle x-ray scattering (SAXS) performed at SSRL beamline 1-5. The beamline was configured to use 15.5 keV (0.7999 Å) x rays with a spot size of 500 × 500 μm2, probing the output of the reactor. A Rayonix 165 CCD area detector with a 79 μm pixel size and a 165 mm diameter active area was used to collect the SAXS patterns. It was set at a nominal distance of 900 mm downstream of the synthesis reactor, resulting in a maximum scattering vector Q of ∼0.7 Å−1.
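As a consistency check (our arithmetic, assuming the direct beam is centered on the 165 mm diameter detector), the quoted maximum scattering vector follows from the geometry and wavelength via $Q = (4\pi/\lambda)\sin\theta$:

$$2\theta_{\max} = \arctan\!\left(\frac{82.5\ \mathrm{mm}}{900\ \mathrm{mm}}\right) \approx 5.2^{\circ}, \qquad Q_{\max} = \frac{4\pi}{0.7999\ \text{Å}}\,\sin(2.6^{\circ}) \approx 0.7\ \text{Å}^{-1}.$$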
D. Scattering data analysis
All sample parameters reported in this work are the results of numerically converged solutions of the equations outlined in the supplementary material. All parts of this analysis were performed by the free and open-source xrsdkit v0.2.4 (https://github.com/slaclab/xrsdkit_modeling_public).
E. Synthesis recipe optimization
Recipes were chosen by joint Bayesian optimization over a set of Gaussian process (GP) regression models trained to estimate the outcomes of the experiments, following the process outlined in Sec. II C. See the supplementary material for full details.
1. Gaussian processes
The Gaussian processes were modeled in accordance with the work of Brochu et al.24 See the supplementary material for full details.
F. Citrination
1. Initial closed-loop experiment
Citrination is a cloud-based software platform for materials design and optimization, employing the Physical Information File (PIF) data standard25 and a suite of machine learning-based tools for materials development and design.26–31 The Python Citrination client was used to interact with the platform.
In our initial experiments for this work, the Citrination design tool was launched at maximum design duration and the top two candidates for two different acquisition functions were applied to the reactor. These acquisition functions attempt to identify recipes that have a high predicted probability of satisfying input constraints, while also using either (1) predicted values for polydispersity or (2) predicted values for polydispersity and their associated uncertainties (a more exploratory strategy) to surface promising recipes.
2. Learning nanoparticle design rules
The trend in particle radius was examined as a function of TOP content by fitting a scikit-learn32 linear regression model to a subset of available experimental data that were filtered for reaction yields greater than 95% and deduplicated (i.e., identical or nearly identical experiments were removed). The regression for particle radius includes terms for flow rate, temperature, TOP fraction, oleylamine fraction, and ODE fraction, and we focus specifically on the TOP coefficient. This coefficient should be positive, as Wu et al.18 determined that increased TOP tends to promote large Pd nanoparticles under certain parametric conditions. To initiate our simulated sequential learning, we choose ten records at random from the filtered experimental data and train a machine learning model using Citrine Informatics’ open-source random forest lolo package.33 We then iterate through all of the remaining experimental data, increasing the amount of available training data by one record with each loop; the new record is selected according to one of three acquisition functions: random choice, maximum uncertainty, or TOP sweep. For a given iteration, the available training data are used to fit a lolo model, which then predicts probability distributions for the radii of all held-out recipes in the experimental data. These predicted distributions are sampled to fit 500 different linear regression models for the purpose of determining the range of possible coefficients for TOP in the radius regression. To account for stochasticity, the simulated sequential learning process was repeated 100 times.
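A condensed sketch of this simulation loop is given below. It uses scikit-learn's RandomForestRegressor with the per-tree prediction spread as a stand-in for lolo's native uncertainty estimates, and it omits the 500-fold resampling of predicted distributions described above; the column ordering and all names are our assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

TOP_COL = 2  # assumed columns: [flow, temperature, TOP, oleylamine, ODE]

def top_coefficient(X, y):
    """Coefficient of the TOP column in a linear regression of radius."""
    return LinearRegression().fit(X, y).coef_[TOP_COL]

def simulated_sequential_learning(X, y, n_init=10,
                                  strategy="max_uncertainty", seed=0):
    """Grow the training set one record at a time and track the TOP
    coefficient estimate after each acquisition."""
    rng = np.random.default_rng(seed)
    train = list(rng.choice(len(X), n_init, replace=False))
    pool = [i for i in range(len(X)) if i not in train]
    coeffs = []
    while pool:
        forest = RandomForestRegressor(n_estimators=100, random_state=seed)
        forest.fit(X[train], y[train])
        # Spread of per-tree predictions as an uncertainty proxy.
        preds = np.stack([t.predict(X[pool]) for t in forest.estimators_])
        if strategy == "max_uncertainty":
            pick = int(np.argmax(preds.std(axis=0)))
        elif strategy == "top_sweep":    # sweep TOP content from high to low
            pick = int(np.argmax(X[pool, TOP_COL]))
        else:                            # random choice
            pick = int(rng.integers(len(pool)))
        train.append(pool.pop(pick))
        coeffs.append(top_coefficient(X[train], y[train]))
    return np.array(coeffs)  # convergence trace of the TOP coefficient
```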
SUPPLEMENTARY MATERIAL
See the supplementary material for additional information on the nanoparticle characterization via transmission electron microscopy (TEM) as well as additional details on the fitting of the small angle x-ray scattering (SAXS) equations and Bayesian optimization equations.
ACKNOWLEDGMENTS
The authors thank V. I. Hegde for reviewing the section of the manuscript dealing with machine learning heuristics and M. Hutchinson for helpful discussions.
Citrination is a commercial product of Citrine Informatics; M.D., C.C., K.P., and B.M. performed this research as employees of Citrine Informatics.
This work was supported by the U.S. Department of Energy (DOE), Office of Energy Efficiency and Renewable Energy, Advanced Manufacturing Office, under Grant No. FWP 100250. The use of the Stanford Synchrotron Radiation Lightsource, SLAC National Accelerator Laboratory, is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Contract No. DE-AC02-76SF00515.
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.