Hybrid quantum-mechanics/molecular-mechanics (QM/MM) simulations are popular tools for the simulation of extended atomistic systems, in which the atoms in a core region of interest are treated with a QM calculator and the surrounding atoms are treated with an empirical potential. Recently, a number of atomistic machine-learning (ML) tools have emerged that provide functional forms capable of reproducing the output of more expensive electronic-structure calculations; such ML tools are intriguing candidates for the MM calculator in QM/MM schemes. Here, we suggest that these ML potentials provide several natural advantages when employed in such a scheme. In particular, they may allow for newer, simpler QM/MM frameworks while also avoiding the need for extensive training sets to produce the ML potential. The drawbacks of employing ML potentials in QM/MM schemes are also outlined; these relate primarily to the added algorithmic complexity of training and re-training ML models. Finally, two simple illustrative examples are provided which show the power of adding a re-training step to such “QM/ML” algorithms.
I. INTRODUCTION
Quantum mechanical calculations have seen widespread application across scientific disciplines to describe the properties of molecules and materials at the atomistic scale. However, the scope of their applicability has been severely constrained by the associated computational cost, which tends to scale as N³ to N⁷, where N is the number of electrons (or their basis functions) in the system. This is especially prominent for large and complex systems where periodicity cannot be established at an affordable size, such as simulations of nanostructures,1,2 reactions in aqueous solutions,3,4 and grain boundaries.5–7 The quantum-mechanics (QM)/molecular-mechanics (MM) method, first introduced by Warshel and Levitt8 in 1976 to address this problem, has been widely used in computational molecular biology,8–10 drug design,10–15 nanotechnology,2,16,17 and quantum chemistry9,15,18–23 and was part of the basis for the 2013 Nobel Prize in Chemistry.24 In QM/MM, atoms and electrons in a smaller subsystem of interest, denoted as Region A in Fig. 1, are treated with a quantum-mechanics-based calculator, while the atoms in the remainder of the system (Region B) are treated with an empirical potential—that is, a molecular-mechanics-based calculator.25–28 Crucially, a scheme to couple the two regions is necessary; this will be discussed later in this work.
Any method that can approximate the system’s potential energy and the forces on each atom can serve as the MM calculator in such QM/MM schemes. In this work, we discuss the prospects of using potentials based on machine learning (ML) in QM/MM algorithms: these relatively new ML forms have emerged as flexible, user-adaptable interatomic potentials,29–35 and we suggest that their unique attributes confer certain natural advantages in QM/MM approaches, possibly leading to improved yet simpler versions of the QM/MM algorithm.
II. NATURAL ADVANTAGES OF MACHINE LEARNING IN QM/MM ALGORITHMS
We start with a very brief introduction to machine-learning potentials; the reader is referred to the literature32–35 for more detailed descriptions. By the Born-Oppenheimer approximation,36 the (ground-state) potential energy (E) of an atomic system can be considered to be a function only of the atomic positions (R), in the absence of external fields. Machine-learning methods have been developed to learn and predict this potential energy surface (PES) from a set of consistent training data produced with a QM calculator. In its most common implementation, the atomic positions are first transformed into a set of atom-centered feature vectors, G_i, which provide a “fingerprint” of each atom i’s local environment. Importantly, these feature vectors are designed to be rotationally, translationally, and permutationally invariant. Example feature-vector transformations include Gaussian,30 Zernike,32,37 and bispectrum31 descriptors. The potential energy can then be “learned” as the sum of atomic contributions, that is,

E = Σ_i E_atom(G_i).  (1)
A variety of machine-learning regression methods can be used to learn the form of E_atom (which is specific to each element type to assure permutation invariance), such as artificial neural networks,38 Gaussian-process regression,39,40 or support vector machines.41 Forces, the (negative) spatial derivatives of the energy, are also often included in the fitting loss function in order to construct high-accuracy models. We have produced an open-source software package known as Amp (Atomistic Machine-learning Package),32,42 which allows flexible coupling of a variety of descriptors and ML approaches, as well as integration with a large number of electronic-structure calculators. We will use Amp to construct the examples presented at the end of the current work.
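As a concrete illustration, the minimal sketch below fits such a potential with Amp, following the pattern in its documentation; exact module paths and defaults may differ between versions, and the training file train.traj of QM-labeled images is a hypothetical placeholder.

```python
# Minimal Amp fitting sketch (follows Amp's documented usage; module paths
# and defaults may vary by version; 'train.traj' is a hypothetical file).
from amp import Amp
from amp.descriptor.gaussian import Gaussian
from amp.model.neuralnetwork import NeuralNetwork

calc = Amp(descriptor=Gaussian(),              # atom-centered fingerprints G_i
           model=NeuralNetwork(hiddenlayers=(10, 10)))
calc.train(images='train.traj')                # fit E (and forces) to QM data

# The result behaves as an ASE-compatible calculator:
#   atoms.set_calculator(calc); atoms.get_potential_energy()
```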
We now discuss the prospective advantages of employing such potentials within QM/MM schemes:
A. Avoid need for extensive research into MM potentials
Most force fields are designed for a specific purpose, and well-functioning general-purpose all-atom force fields remain scarce. Molecular-mechanics potentials have a rich history, and there are many possible choices43–45 to be made for each element and element combination employed in the MM region of the simulation. Certain potentials are well suited for mechanical properties,46–50 others are well suited for describing reactive events,51,52 others are optimized for speed in large-scale simulations,53 and all depend on the nature of the bonds expected in the material or substance. Furthermore, elemental potentials are only reliable in certain combinations; e.g., a C potential may be well parameterized for neighbors consisting of H, N, and O, but if Cu is introduced in subsequent simulations, one may have to search for a new potential. We argue that the need to choose from the vast array of potentials available in the literature is a steep barrier to the adoption of QM/MM methods for generalized systems; that is, one needs to develop this particular expertise or rely on the expertise of others.
By contrast, the use of machine-learning potentials as the MM backend alleviates this decision. Of course, one still needs to learn how to properly use machine-learning potentials in order to use them as the MM backend. However, this procedure needs to occur only once and we expect that as these approaches mature this can be automated to a great extent, particularly as part of a QM/ML algorithm, as described below. Furthermore, if one wishes to extend the number of elements in the system (e.g., adding O to simulations previously involving only Pt, C, and H), this is relatively straightforward to do as machine-learning potentials can be grown dynamically in response to their experiences with the potential energy surface.
B. Avoid distortion caused by mis-match between QM and MM descriptions
All QM and MM calculators that describe reasonable system sizes are imperfect and make errors in their predictions of materials’ properties, bond strengths, and every system property derived from the electronic structure and the resulting potential energy surface. In general, this can lead to a disagreement54 between the QM and MM calculators in describing fundamental system properties, such as optimal lattice constant, hydrogen-bonding networks, and elastic responses. This can create mis-match issues in concurrent QM/MM calculations.
For example, consider the simulation of a simple single-metal crystal, as shown in Fig. 2, where QM forces are used inside Region A and MM forces are used in Region B. Typically, even the more-expensive QM calculator will deviate significantly from the experimental lattice constant; e.g., the popular density functional theory (DFT) typically errs from experiment on the order of 2%. Taking this as a baseline for the disparity between the predictions of the two calculators, this suggests that either Region A or Region B (or both) will be under an artificial strain with respect to its relaxed lattice spacing, and a strain of 2% is significant. In practice, the parameters in the MM calculator are typically adjusted to scale the lattice constants to match those of the QM calculator, yet it is still difficult to match all mechanical properties simultaneously. This phenomenon is not limited to crystal structures; for example, we would expect hydrogen-bonding networks of water described with QM and MM potentials to also differ.
As machine-learning potentials are directly trained on QM data, the two calculators share a common potential energy surface and should naturally predict identical mechanical properties, at least to the degree that the ML potential is well trained to the QM data. In the example above, if the machine-learning calculator describing the atoms in the exterior Region B has been properly trained, then it will predict the same lattice constant and elastic response as the QM calculator used in Region A, and the mis-match strain issue should not arise.
C. Machine learning may allow for simpler QM/MM schemes
QM/MM methods can be coarsely classified into two schemes, additive and subtractive. In the additive scheme, the potential energy can be expressed by the equation E = E_QM(A) + E_MM(B) + E_coupling(boundary), where a pure QM calculator (such as DFT) and a pure MM calculator (such as a force field) are used to calculate the energy of the atoms in Regions A and B, respectively, and the final term describes the coupling between the two regions, which is the crucial and complicated part of this algorithm. The QM calculation may be a relatively standard calculation, but it also often involves strategies such as passivating any dangling bonds and electrostatic embedding into the surrounding environment. The subtractive scheme has a much simpler implementation,55,56 with the potential energy expressed by E = E_QM(A) − E_MM(A) + E_MM(A + B). Here the MM calculator does the work of coupling Regions A and B, as this is equivalent to setting the coupling term of the additive scheme to E_MM(A + B) − E_MM(A) − E_MM(B). If the MM energy calculation is as accurate as the QM results, the subtractive scheme can give results with QM accuracy (in the absence of long-range electronic effects).
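To make the subtractive bookkeeping concrete, a minimal sketch against ASE's calculator interface follows; region_a is assumed to be a boolean mask (or index list) over the atoms, both calculators are assumed to follow ASE's conventions, and this is an illustration rather than the authors' implementation. (Older ASE versions use set_calculator() in place of the .calc attribute.)

```python
# Sketch of the subtractive-scheme energy, E = E_QM(A) - E_MM(A) + E_MM(A+B),
# using the standard ASE calculator interface (illustrative only).
def subtractive_energy(atoms, region_a, qm_calc, mm_calc):
    subsystem = atoms[region_a]          # isolate Region A (may need vacuum
                                         # padding; see the examples below)
    subsystem.calc = qm_calc
    e_qm_a = subsystem.get_potential_energy()

    subsystem.calc = mm_calc
    e_mm_a = subsystem.get_potential_energy()

    full = atoms.copy()
    full.calc = mm_calc                  # MM treatment of the full A+B system
    e_mm_ab = full.get_potential_energy()

    return e_qm_a - e_mm_a + e_mm_ab
```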
It is interesting that certain simplifications arise if we apply ML, with re-training, in such schemes. The basic idea is schematized in Fig. 3, which starts with a total energy calculation of Region A with the QM calculator, giving E_QM(A) and the forces F_QM(A). Before this result is used in the subtractive-scheme equation, these electronic-structure data are added to the training set to re-train the ML model. We can then use the subtractive scheme to predict the energy and forces of the system, as shown in Eqs. (2) and (3),

E = E_QM(A) − E_ML(A) + E_ML(A + B),  (2)

F_i = F_QM,i(A) − F_ML,i(A) + F_ML,i(A + B).  (3)

Intriguingly, since the re-trained ML model replicates (to user-specified accuracy) the QM calculation, the subtractive scheme reduces to

E ≈ E_ML(A + B),  F_i ≈ F_ML,i(A + B),  (4)
where the model has “learned” the new QM configuration and incorporated it into its representation. This leads to the approximate cancellation of the two Region A terms (which contained the boundary interactions problematic to QM/MM methods) and leaves only the final term, which describes the entire system and can be considered “perfectly” trained to the QM result within Region A (within the training tolerance).
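A minimal sketch of one pass of this loop is shown below; the helpers cut_region_a and retrain are hypothetical placeholders standing in for the region-isolation and model-fitting machinery, not a published API.

```python
# One pass of the re-training ("QM/ML") loop of Fig. 3 (illustrative sketch;
# cut_region_a and retrain are hypothetical placeholder helpers).
def qm_ml_step(atoms, region_a, qm_calc, ml_calc, train_images):
    subsystem = cut_region_a(atoms, region_a)   # isolated Region A (+ vacuum)
    subsystem.calc = qm_calc
    subsystem.get_potential_energy()            # QM energy and forces

    train_images.append(subsystem)              # grow the training set
    ml_calc = retrain(ml_calc, train_images)    # re-fit to the new QM data

    # After re-training, E_QM(A) - E_ML(A) cancels to within the fit
    # tolerance, so the subtractive expression reduces to Eq. (4):
    atoms.calc = ml_calc
    return atoms.get_potential_energy(), atoms.get_forces(), ml_calc
```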
It is worth further discussing the boundary between the two regions. In the traditional subtractive QM/MM scheme, one relies on a pre-existing MM calculator to couple the regions. Taking an example of an extended metal surface, the boundary region would cut through metal bonds in the surface, and the MM calculator must be equally capable of describing the bulk-like atoms within the surface and the low-coordination atoms at the boundary (as well as any interesting chemistry occurring within Region A). These surface energies at the boundary can be large, and thus any differences in how the QM and MM calculators represent these energetics will cause inaccuracies in the method. However, in an algorithm as described above where the ML calculator is constantly retrained to the new QM data, the ML calculator learns directly from the QM calculator how to properly describe this vacuum interaction. As noted above, these two terms (with the large surface energies) will essentially cancel, leading to a much improved methodology without the need for prior training data that described the vacuum region. Of course, in either method, if there are long-range electronic effects that cannot be captured in Region A, then a more sophisticated method may be necessary.
Accurate ML models generally require a substantial amount of training data that anticipate the atomic configurations to be encountered during the course of a simulation; these training data can be difficult to anticipate and expensive to calculate, since they must be produced with QM methods. While this is certainly still true of configurations that may be encountered in Region B, it is not the case for the ML representation of Region A, as these configurations are learned immediately before they are used. Since, by design, the unanticipated chemistry occurs in Region A, this greatly simplifies the pre-training task of creating the ML potentials: in theory, we do not need any images that anticipate configurations of Region A. Region B, on the other hand, contains fairly predictable or routine atomic configurations, so the preparation of these training data is not necessarily difficult, but will vary with the precise application of such methods.
What we present here is a simple scheme. Yet we anticipate that this retraining approach is readily adaptable to more elaborate schemes, such as those that treat vacuum regions more robustly.2,54,57–59
D. Allows adaptive QM/MM without an interpolative region
In adaptive22,60 QM/MM, atoms can freely move in and out of the QM region; for example, a water molecule may be described by MM in one image and by QM in the next. However, this movement across the boundary is equivalent to a discontinuity in the potential energy surface, since the QM and MM calculators in general give differing energy predictions. The non-physical nature of this discontinuity can lead to numerical issues in optimization (since the forces are undefined), as well as violation of energy-conservation laws. For example, Heyden, Lin, and Truhlar61 showed that such discontinuities can lead to violations in conservation of energy and/or momentum in molecular dynamics (MD) simulations. In practice, this is resolved by using an interpolative region between the two regions, such that molecules are described with both the QM and MM calculators and an interpolated mix of the two resulting forces is used depending upon the position. While this typically resolves the issue, it comes at a computational cost, as it effectively widens the necessary QM region.
In principle, a well-trained machine-learning calculator shares the same potential energy surface as the QM calculator from which it is trained, so this interpolative scheme is unnecessary. This is shown in Fig. 4. Of course, this is only true if the ML representation is well trained over all regions encountered in the simulation. In the scheme described above, the ML calculator is assumed to be well trained in Region B as a specification of the method; although it is not pre-trained to match the QM calculator in Region A, the re-training step guarantees matching forces. We should note that these matches will still only be within some specified error tolerance, so in practice, a small discontinuity may still arise, and it remains to be seen whether a buffering region is necessary. However, we expect that, if necessary, this region may be much smaller than is needed in other methodologies; detailed examination of this aspect is left for future studies.
E. Allows for consistent sampling of a single potential energy surface
Many advanced methods for estimating kinetic and thermodynamic parameters involve extensive sampling of the potential energy surface (PES), such as variational reaction-coordinate transition-state theory,62–72 transition-path sampling,73–79 umbrella sampling,80–82 or metadynamics.83,84 In standard QM/MM simulations, the QM and MM calculators give fundamentally differing approximations of the PES, making the implementation of such PES sampling schemes challenging. By contrast, when ML potentials learn directly from the QM calculator employed, then the total energies and individual forces in a QM/ML scheme (in particular, as described above) should in principle match, to within a tolerance, those of a pure QM calculation, which we expect should make the integration with such energy sampling schemes more straightforward.
III. DRAWBACKS OF MACHINE LEARNING IN QM/MM ALGORITHMS
While the use of ML potentials in QM/MM schemes has several apparent advantages, these come with costs compared to current methodologies. Many of these costs are rooted in the relative immaturity of the field of machine-learning potentials. Here, we describe the key drawbacks and how they may be addressed.
A. Need for expertise in machine-learning potentials
Machine-learning potentials are an emerging form of adaptable interatomic potentials. Although we argue that a large advantage of using one’s own retrainable ML potential is to obviate the need for extensive research into force fields, this is replaced by the need for a working knowledge, or better still expertise, in ML potentials. Reliable generic methods to train ML potentials to the potential energy surface are not yet standardized: that is, the methodology is not as “turn the crank” as mature computational approaches (like density functional theory). However, we can expect the reliability and accessibility of these ML approaches to improve with cumulative experience as more users adopt these methods; with the recent surge of publications in this field, there is reason to be hopeful.
Furthermore, the user should be well aware of the limitations of machine-learning potentials. For example, most current machine-learning potentials rely upon the assumption that all chemistry is local; in many implementations, only interactions within a 6.5-Å radius can affect the energy and forces on an atom. Additionally, knowing when the model has adequate training data can be difficult; although in the scheme we outline above there is no need for any training data that anticipate structures inside Region A (where the interesting chemistry should take place), the user still needs confidence that the training data are sufficient to cover the types of configurations that will be encountered in Region B. The generation of this training set is problem-specific, unless methods that account for the uncertainty of predictions85 are employed.
B. Computational costs for training and use
In the current work, we suggest that these machine-learning models are most powerful when they are directly trained to match the data set of interest (for example, in the re-training scheme presented in Sec. II C above). This comes with a computational cost not associated with standard QM/MM methodologies: the ML model is ideally re-trained each time a new QM image is produced. Additionally, ML potentials tend to be more computationally expensive per force call than most standard force fields.
This suggests that it is important to keep the re-training time and force-call time small compared to the time required to solve the electronic-structure problem associated with each QM force call. These computational costs will be discussed in more depth in the context of the examples below; as we will show, in practice, the first training exercise is costly, but the amount of training needed to add a single new image is typically very small and converges in a few iterations. This behavior will differ strongly depending upon the particular problem.
Additionally, we should note that QM/MM algorithms tend to already be rather complicated, and the addition of a ML step to these algorithms adds to the complexity, perhaps increasing the likelihood of errors in calculations and crashing scripts.
IV. ILLUSTRATIVE EXAMPLES
We devote the remainder of this work to exploring two simple examples that employ a QM/ML algorithm. We will show both examples in two different schemes: in the first scheme (referred to here as “QM/MM”), we pre-train a ML potential and use it as the MM calculator in a standard subtractive scheme. In the second, we again pre-train a ML potential but use it in a scheme that employs and tests the retraining algorithm we suggested in Fig. 3; we refer to this as “QM/ML.”
In the first example, we simply optimize the atomic positions in an extended Cu(100) surface, as shown in Fig. 5. Of course, a hybrid method is not needed for such a simple example, yet its simplicity allows us to readily demonstrate the QM/ML algorithm, compare it against QM/MM approaches, and validate both against the full ab initio solution. The initial structure is periodic in the xy-plane with one Cu atom pulled up and rattled from the top layer. We defined the atoms near the displaced atom as those in Region A, while Region B encompassed those atoms outside this boundary. In order to pre-train the ML model to be capable of capturing Region B structures, we created a small training set of 28 images, each consisting of 27 Cu atoms [in a 3 × 3 × 3 Cu(100) slab geometry]. These images were created as Cu surfaces with various biaxial strains in the surface plane, and the DFT potential energies and forces were calculated with the dacapo planewave/pseudopotential code.86,87 Since the fitting of a neural-network-based ML model to a set of data does not lead to a unique set of optimal parameters, we independently trained ten ML models on the same pre-training data set with random initial parameters. These models served as the pre-trained ML models described in Fig. 3. Full details on the parameters of the ML model and the DFT training calculations are contained in the supplementary material. As shown in Fig. 5, Region A was selected as the 5 × 3 × 3 atoms in the center dashed-line box. (Note that the set of atoms chosen to be in Region A was kept constant throughout the simulation; this was not an adaptive scheme.) For the Region A calculations in this example, we could, in principle, use tight periodic boundary conditions in the x direction so as not to introduce any artificial vacuum. However, for general systems, this is not possible, and a generic application requires the QM/ML algorithm to be capable of dealing with Region A surrounded by vacuum. Thus, we constructed the isolated Region A system by cutting it from the larger system and adding 5 Å of vacuum to each end along the x direction.
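A minimal sketch of this carving step with standard ASE operations is shown below; the file name and the x-window used to pick the 5 × 3 × 3 block are hypothetical placeholders, and a real selection would use the known atom indices.

```python
# Sketch of isolating Region A and padding vacuum along x (illustrative;
# 'cu100_slab.traj' and the selection half-width are placeholders).
import numpy as np
from ase.io import read

atoms = read('cu100_slab.traj')             # full periodic Cu(100) system

# Select the central block by x-coordinate (in practice, by known indices).
center_x = atoms.cell[0, 0] / 2.0
region_a_mask = np.abs(atoms.positions[:, 0] - center_x) < 6.5  # placeholder

subsystem = atoms[region_a_mask]
subsystem.center(vacuum=5.0, axis=0)        # 5 Å of vacuum on each end along x
subsystem.pbc = (False, True, False)        # keep periodicity only along y
```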
We then performed structure optimizations in the BFGS (Broyden-Fletcher-Goldfarb-Shanno) minimization scheme within the framework of the algorithm we presented in Fig. 3. That is, after each QM calculation, we re-trained the ML calculator until the root-mean-square error (RMSE) of the force prediction was within 0.005 eV/Å and each individual atomic force component was within 0.05 eV/Å of the DFT value. Note that in such a re-training scenario, it is important to include a maximum allowable residual on any individual force, as the addition of a new training image moves the RMSE by relatively small amounts. These newly re-trained forces were then used at each ionic step to update the atomic positions according to the BFGS algorithm. The potential energy (E) and maximum force on any unconstrained atom (Fmax) during the optimization process are shown in Figs. 6(a) and 6(b).
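The re-training stopping criterion can be summarized in a few lines; the sketch below assumes force arrays from the re-trained ML model and from DFT for the newest image, with the tolerances quoted above, and the function name is illustrative.

```python
# Sketch of the dual re-training criterion: force RMSE below 0.005 eV/Å and
# every individual force component within 0.05 eV/Å of DFT (illustrative).
import numpy as np

def forces_converged(f_ml, f_dft, rmse_tol=0.005, max_tol=0.05):
    residual = f_ml - f_dft                      # per-component errors, eV/Å
    rmse = np.sqrt(np.mean(residual ** 2))
    return rmse < rmse_tol and np.abs(residual).max() < max_tol
```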
The optimization starts with the perturbed Cu atom about 1 Å away from its relaxed position. Figure 6 shows that all ten QM/ML calculators follow nearly the same optimization path and give essentially the same final result as the pure QM optimization. For comparison, we also repeated the algorithm with an ML potential but in a pure QM/MM scheme; i.e., we started with the same set of ten ML calculators and followed the algorithm of Fig. 3 without the re-training step. This is indicated as QM/MM in Fig. 6. As can be seen, the optimization in the pure QM/MM scheme tends to give qualitatively reasonable predictions but lacks the quantitative match with the QM calculations that was achieved with the QM/ML method. This clearly shows the strong influence of re-training on reproducing the QM results and the power of this methodology.
We note an important subtlety of this method. Region A includes the reactive sites (or the atoms whose positions change most drastically). In general, the distance from the reactive atoms themselves to the A/B boundary should be equal to or larger than the cutoff radius of the ML descriptors. This matters for the atoms near the boundary in Region B: those atoms have not experienced a changing reactive site (within their cutoff radius) in their training set, and no new training information is added to the set on their behalf, so a non-canceling error can be introduced if this rule of thumb is violated. For this specific example, we found that a smaller, 3 × 3 × 3-atom Region A also allows the system to be optimized, but with a larger variance in the energy and force predictions. We show these results in the supplementary material.
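This rule of thumb amounts to a simple geometric check, sketched below with ASE's distance utilities; the function name and index arguments are illustrative, and 6.5 Å is used only as a typical descriptor cutoff.

```python
# Sketch of the Region A sizing rule: no Region B atom should lie within one
# descriptor cutoff radius of a reactive atom (illustrative helper).
import numpy as np

def region_a_large_enough(atoms, reactive_idx, region_b_idx, cutoff=6.5):
    """Return False if any Region B atom can 'see' a reactive atom."""
    for i in reactive_idx:
        d = atoms.get_distances(i, region_b_idx, mic=True)  # PBC-aware
        if np.min(d) < cutoff:
            return False
    return True
```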
Computationally, the inclusion of the re-training step within each loop of the QM/ML algorithm makes it more expensive than a pure QM/MM procedure. In practice, we found that the first re-training step is the most difficult, as shown in Fig. 7(a), which shows the number of iterations it took to train the machine-learning calculator at each ionic step. (This is approximately proportional to the computational demand of re-training.) We expect that this occurs because the first new QM image added to the training set is a relatively unique geometric configuration compared to the images already in the training set, and the model parameters must change strongly in response to its presence. When additional QM images are added, they are reasonably similar to the first image, and the model does not need to change as drastically. As the perturbed Cu atom’s position keeps changing, the re-training time may increase if the similarity between the new structure and previous ones is low. When the structure approaches equilibrium and changes are minimal, the model in some cases does not need to be re-trained—that is, it converges on the first parameter-optimization step. The average computational demand of each ionic step in the geometry optimization is compared between QM/ML and pure QM in Fig. 7(b). Obviously, these numbers are very system-dependent—for example, depending strongly on the sizes of Regions A and B—but they nonetheless provide an indication of the possible acceleration as well as the extra cost associated with the ML re-training. For this system with 81 Cu atoms, the DFT calculation requires more central processing units (CPUs), memory, and time; QM/ML is 3.5 times more efficient than the corresponding DFT method while providing the same accuracy in predicting the energy and forces. The ML component of the algorithm adds a small but non-negligible computational demand, at the presumed benefit of increased fidelity. This demand can vary for different systems and applications, and the re-training process for the current neural-network model can also become more demanding if more elements are involved or if the pre-trained model is poorly related to the system’s structure.
To further demonstrate our method, we use QM/ML to tackle a more challenging problem, the optimization of a metal surface with grain boundaries present. Grain boundaries are significant microstructures affecting many material properties89,90 and have even been proposed as reactive sites on catalyst surfaces.91 However, most grain-boundary calculations and molecular dynamics simulations are based on the embedded-atom method (EAM), which is specifically designed for the simulation of metal atoms and is not suitable for chemical reactions on a surface. In this example, we took the atoms in the vicinity of the grain boundary as Region A, while Region B contained the atoms in the abutting crystal plane. The system consisted of a three-layer Cu surface with symmetric Σ5(310) grain-boundary sites, formed by two fcc Cu grains tilted about the [100] axis, as shown in Fig. 8. A supercell containing 410 Cu atoms was simulated, with an equal number of atoms located at each of the two grain-boundary sites, such that the largest distance of any atom from a grain boundary was around 40 Å.
QM/ML was employed to optimize the structure, with 37 atoms selected as Region A, containing one grain-boundary unit with periodicity along the y-axis. In the initial configuration, some atoms are still far from their optimal positions, especially those around the grain boundaries. For instance, the center atom in the second layer is 3.39 Å from its (Cartesian-plane) Quadrant-II neighbor but only 2.64 Å from its Quadrant-I neighbor, as shown in Fig. 8. The ratio of these two distances is used as an indicator of the optimization progress—by symmetry, this ratio should be 1 in an optimal symmetric structure. Ten individual optimizations are plotted in Fig. 9. After geometric optimization (until the maximum force on any unconstrained atom is under 0.05 eV/Å), we obtain a very reasonable symmetric structure with the distance-ratio indicator around 1, similar to the structure calculated with EAM in the literature.92 The final states were re-calculated with the corresponding pre-trained ML model in the QM/MM subtractive scheme, and the re-calculated results are labeled as blue crosses—these represent the predictions of a corresponding QM/MM model without re-training. The QM/MM energies have a much larger variance, and the Fmax values are still quite large, around 0.4 eV/Å, implying that the symmetric structure would change if QM/MM were used as the calculator in the optimization. This example again demonstrates the importance of the learning step, that is, of adapting the ML model to the ultimate target structures.
V. CONCLUSIONS
We suggest that ML-based potentials have several natural advantages for use in QM/MM calculations; these advantages are largely due to the coincidence of the potential energy surfaces predicted by the ML-based calculator and the QM calculator. These advantages include avoiding artificial distortions, eliminating or reducing the need for a buffering zone in adaptive QM/MM methodologies, enabling simpler methods to account for the coupling term, and providing a single potential energy surface for thermodynamic and kinetic sampling approaches. Naturally, this shared PES, and thus these advantages, only arises if the ML potential is well trained to all situations that may be encountered, yet we show that this can be achieved dynamically during the course of the simulation if the ML potential is re-trained to the new QM data as they are produced. The actual ML algorithm in the proposed QM/ML scheme can take many forms, although in the current work we chose neural-network potentials. Neural networks can be prone to overfitting, so good practices such as regularization should be employed; alternatively, other ML algorithms such as Gaussian-process regression39,40 have built-in regularization against overfitting. We also expect that the relatively simple and universal nature of ML potentials reduces the difficulty of choosing an MM potential, but at the cost of needing to learn to train and validate ML potentials.
We introduced a simple iterative scheme by which we can couple the straightforward subtractive QM/MM scheme with a re-training step, in order to calculate large atomic systems and to provide predictions of energies and forces that approach QM accuracy. A well-recognized definition of ML is a computer algorithm that learns and improves with experience.93 In this spirit, our algorithm is designed to use the new QM information generated during each step of the algorithm to improve the ML representation, rather than requiring a huge amount of training data to be produced before the algorithm begins. This not only increases the computational efficiency but also takes the guesswork out of the pre-generation of such data.
We expect that improvements upon this framework are possible. While this method has the above advantages, it nonetheless operates under the assumption that all chemistry is local. If long-range electronic effects are expected, then the machine-learning potentials will need to be adapted to allow for such effects; approaches to dealing with long-range effects in the machine-learning-potential framework are present in the literature.94,95 The current QM/ML scheme still requires the manual selection of Region A to cover the reactive sites. It would be very attractive if Region A could be shifted dynamically to the atoms with higher uncertainty; these can in principle be identified by algorithms such as a Gaussian-process predictor-corrector algorithm96,97 or via analysis tools such as the bootstrap-ensemble technique, which can isolate the uncertainty in a machine-learning potential to individual atoms.85 The examples we employed in the current work were rather simple crystal structures; in order to apply QM/ML to more molecular simulation systems, such as proteins, the method must have a strategy to deal with cases where covalent bonds are cut by the boundaries of Region A. Such solutions are commonplace in the QM/MM literature and can likely be adapted into the current algorithm, for example, by adding link (passivating) atoms to the bond ends at boundaries.10 Furthermore, we expect that the QM/ML method can be used not only in structural optimization but also in processes like transition-state searches98,99 and ab initio molecular dynamics (MD). In the latter, special attention must be paid to energy conservation as the machine-learning potential changes over the course of the simulation. While we expect that the re-training scheme outlined above should result in conservative forces, this should be verified, for example, by using the final trained calculator to evaluate the completed trajectory and verify that the forces match within a tolerance. Interestingly, ML models based on force-explicit descriptors may be more suitable for constructing force fields useful for dynamics schemes that do not rely on potential-energy calculations.96,97,100 We hope that the current work can serve as the basis for such advanced algorithms.
SUPPLEMENTARY MATERIAL
See supplementary material for parameters used in the machine-learning and electronic-structure calculators, as well as complete descriptions of the atomic geometries employed.
ACKNOWLEDGMENTS
We gratefully acknowledge financial support from the Office of Naval Research through Award No. N00014-15-1-2223. Calculations were undertaken at the Center for Computation and Visualization (CCV), Brown University.