Precise prediction of phase diagrams in molecular dynamics simulations is challenging due to the simultaneous need for long time and large length scales and accurate interatomic potentials. We show that thermodynamic integration from low-cost force fields to neural network potentials trained using density-functional theory (DFT) enables rapid first-principles prediction of the solid–liquid phase boundary in the model salt NaCl. We use this technique to compare the accuracy of several DFT exchange–correlation functionals for predicting the NaCl phase boundary and find that the inclusion of dispersion interactions is critical to obtain good agreement with experiment. Importantly, our approach introduces a method to predict solid–liquid phase boundaries for any material at an ab initio level of accuracy, with the majority of the computational cost at the level of classical potentials.

Molten salts are a class of high-temperature ionic fluids that have recently attracted renewed interest due to their potential applications in modular nuclear reactors1 and thermal energy-storage systems.2 A molten alkali halide salt, such as LiF, or a mixture, such as FLiNaK, may be used as a coolant instead of highly pressurized water in nuclear reactors; these salts can also act as the medium in which fuel and fission products are dissolved.3 

Accurate knowledge of the salt’s phase diagram is critical for the design of such reactors. Experimental results serve as the ultimate benchmark of these properties, and methods such as CALPHAD can be used in the design process using parameters that are fit to experimental inputs.4 However, such methods often use empirical functional forms for the relevant thermodynamic quantities needed to predict phase coexistence. A more accurate method would obtain the relevant quantities—specifically free energies—from a direct description of the interactions between the constituent atoms.

Predictions of thermodynamic properties of condensed phases from atomistic simulations can employ either Monte Carlo (MC)5 or molecular dynamics (MD)6,7 approaches. Each method’s accuracy depends upon the treatment of the interatomic potential energy function. This potential energy function can be calculated from first principles using Kohn–Sham electronic density-functional theory (DFT)8,9 in ab initio molecular dynamics (AIMD) simulations, or approximated by classical force fields, such as additive pairwise potentials. Accurate MD predictions of bulk phase equilibria typically require system sizes of at least 500–1000 atoms, while AIMD simulations are typically limited by computational cost to 100–200 atoms and time scales of 10–100 ps. Consequently, MD predictions of phase equilibria have typically employed classical force fields.10–12 However, this requires explicit parameterization of empirical force fields for new materials, and even for single-component systems, such force fields are accurate only over narrower temperature and pressure ranges than a first-principles method.

Machine-learned interatomic potentials promise to bridge this gap between AIMD and classical MD by using highly flexible functional forms, such as neural-network potentials (NNPs), which can better reproduce the potential energy surface from ab initio results than simpler classical force-fields.13 Several families of NNPs are finding increasing use for MD simulations14–16 and essentially serve to extrapolate the DFT level predictions from smaller AIMD simulations to larger-scale MD simulations. For molten salts, NNPs have been used to predict structure, diffusivity,17–19 shear viscosity,20 equations of state, heat capacity, thermal conductivity, and phase coexistence at individual state points.21 However, to our knowledge, systematic mapping of the solid–liquid phase boundary of an alkali halide, such as NaCl, using either AIMD or machine-learned potentials in the pressure–temperature space has not been performed yet.

The most common way to estimate phase equilibria in MD is to carry out direct simulations of coexistence of the two phases in a large interface calculation. Such a method has been used to estimate the melting point of NaCl with machine-learned potentials previously.21 At a given state point, the interface will typically move to expand the thermodynamically favorable phase at the expense of the less favorable phase.22 This requires simulating large interfaces with at least 10^4 atoms over long time scales (typically nanoseconds) and must be repeated over several state points to pinpoint the coexistence point. However, at state points close to the true coexistence point, the velocity of the interface will typically be too low to reliably capture in simulations of tractable length. Such a method also depends upon the orientation of the interface setup between the solid and liquid phases.

A more accurate approach with better resolution involves calculating the free energy difference between the phases at various state points. One can use thermodynamic integration23 (TI) in order to obtain these relevant free energies in molecular simulations. A reversible “pseudosupercritical” pathway that directly transforms the liquid to the solid phase can be employed, where the interatomic potential U(λ) is varied continuously as a function of an introduced path parameter λ so as to establish a reversible transformation between the solid and liquid phases at a particular state point (P, T). The resulting free energy difference between phases is calculated by integrating ⟨∂U/∂λ⟩_λ dλ along the pathway. In particular, such a method avoids the problem of interfaces between two separate phases and requires much smaller system sizes (on the order of 500–1000 atoms) than the aforementioned interface coexistence technique. TI simulations have recently been used with NNPs to assess solid–liquid coexistence for uranium and solvation free energy predictions.24–26 However, to our knowledge, no similar study has yet been performed using NNPs to compare the effects of different DFT approximations on an alkali halide phase boundary, such as NaCl.

Here, we introduce an approach using TI to combine low-cost classical force fields and more complex NNPs trained to electronic structure data in order to, respectively, combine the computational cost and accuracy advantages of each kind of interatomic potential. Briefly, our method involves performing most of the complex transformations along the pseudosupercritical pathway using a cheap additive pairwise potential and an additional bulk transformation from the classical potential to the NNP in each phase to obtain the NNP melting point. From this initially determined melting point, we can extend the phase boundary in the (P, T) space using the Clausius–Clapeyron equation.

The remainder of this paper is organized as follows: In Sec. II, we specify the classical interatomic potential and NNP parameterization details. Afterward, in Sec. III, we detail the phase equilibrium approach, starting from the prediction of a single coexistence point using TI and then using the Clausius–Clapeyron equation to extend the phase boundary in the (P, T) space. Finally, we show results of our method in Sec. IV for the NaCl solid–liquid phase boundary and for NNPs trained to AIMD data with different choices of the exchange–correlation (XC) functional. We find that the predicted phase boundaries are highly sensitive to the choice of XC functional and that those functionals that explicitly use built-in treatment of dispersion interactions agree with experiment over a much wider range of temperatures and pressures than other functionals.

TABLE I.

Fumi–Tosi parameters used for classical MD simulations of NaCl.28 

Pair     A (eV)   ρ (Å)   σ (Å)   C (eV Å^6)   D (eV Å^8)
Na–Na    0.2637   0.317   2.340   1.0486       −0.4993
Na–Cl    0.2110   0.317   2.755   6.9906       −8.6758
Cl–Cl    0.1582   0.317   3.170   72.4022      −145.4285

We perform classical MD simulations in LAMMPS27 using the standard Fumi–Tosi (FT) parameters28 for NaCl. This model is also referred to as the rigid ion model (RIM) within the molten salt literature, with pair interactions of the Born–Mayer–Huggins form

U_ij(r) = q_i q_j / r + A_ij exp[(σ_ij − r)/ρ] − C_ij / r^6 + D_ij / r^8,    (1)

where q_i and q_j are the formal ionic charges and A_ij, ρ, σ_ij, C_ij, and D_ij are pair-specific parameters. Table I lists the FT parameters used to model NaCl in this study.

In the present work, the long-range Coulomb part of the FT potential is treated using the damped shifted force model,29 which allows for faster computation than Ewald and particle–particle–particle–mesh (PPPM) methods.
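As an illustration of how the tabulated parameters enter Eq. (1), the following minimal Python sketch evaluates the FT energy of a single ion pair. The bare 1/r Coulomb term is used here purely for illustration (the production simulations use the damped shifted force treatment described above), and the function and constant names are our own rather than part of any released script.

```python
import numpy as np

# Fumi-Tosi (Born-Mayer-Huggins) parameters for NaCl from Table I.
# Units: A in eV, rho and sigma in Angstrom, C in eV A^6, D in eV A^8.
FT_PARAMS = {
    ("Na", "Na"): dict(A=0.2637, rho=0.317, sigma=2.340, C=1.0486, D=-0.4993),
    ("Na", "Cl"): dict(A=0.2110, rho=0.317, sigma=2.755, C=6.9906, D=-8.6758),
    ("Cl", "Cl"): dict(A=0.1582, rho=0.317, sigma=3.170, C=72.4022, D=-145.4285),
}
CHARGES = {"Na": +1.0, "Cl": -1.0}   # formal ionic charges (e)
KE2 = 14.399645                      # e^2/(4 pi eps0) in eV Angstrom

def ft_pair_energy(si, sj, r):
    """Fumi-Tosi pair energy (eV) of Eq. (1) at separation r (Angstrom).
    The bare Coulomb term is for illustration only; the production runs
    use the damped shifted force treatment of electrostatics."""
    p = FT_PARAMS.get((si, sj)) or FT_PARAMS[(sj, si)]
    born = p["A"] * np.exp((p["sigma"] - r) / p["rho"])
    dispersion = -p["C"] / r**6 + p["D"] / r**8
    coulomb = KE2 * CHARGES[si] * CHARGES[sj] / r
    return born + dispersion + coulomb

# Example: energy of one Na-Cl pair near the nearest-neighbor crystal distance.
print(ft_pair_energy("Na", "Cl", 2.82))
```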

While this simple functional form allows for rapid computation of interatomic forces, it limits the accuracy of predicted properties to specific chemical environments and thermodynamic conditions used to parameterize the model.30 

Neural networks are universal function approximators31 that have been proven increasingly useful for describing the complex multidimensional potential energy surfaces of atomistic systems; in an NNP, the input to the neural network is a representation of the atomic coordinates and the output is an energy.

1. Featurization and neural network formulation

All approaches to NNPs require some means to map the local neighbor configuration of each atom onto the input features for the neural network. It is particularly important for these features to account for rotational, translational, and atomic permutation symmetries in order for the neural network to represent the potential energy landscape of the system with practical AIMD training datasets. Prevalent current approaches for generating these features (or descriptors) from atomic configurations include atom-centered symmetry functions,32 smooth overlap of atomic positions (SOAP),33 neighbor density bispectrum,34 Coulomb matrices,35 and atomic cluster expansions (ACE)36.

Here, we use the SimpleNN code15 for training and evaluating neural network potentials, which implements the atom-centered symmetry function approach. In this approach, the total energy is written as a sum of atomic energies, each expressed as a neural network of several symmetry functions evaluated on the local atomic configuration. These include radial functions that effectively measure the radial density of each atom type in a finite basis and angular functions that similarly measure the angular distribution of pairs of atom types in a finite basis surrounding each atom.32 We use the default set of radial G2 and angular G4 symmetry functions (70 in total) implemented in the SimpleNN package with a cutoff of 6 Å.
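To make the featurization concrete, the sketch below evaluates a single radial G2 symmetry function for one atom following Behler’s definition (Ref. 32); the η and R_s values used here are illustrative assumptions and not the SimpleNN defaults.

```python
import numpy as np

def cutoff_fn(r, r_cut=6.0):
    """Cosine cutoff function f_c(r) that smoothly vanishes at r_cut."""
    fc = 0.5 * (np.cos(np.pi * r / r_cut) + 1.0)
    return np.where(r < r_cut, fc, 0.0)

def g2_descriptor(r_ij, eta=0.5, r_s=0.0, r_cut=6.0):
    """Radial G2 symmetry function of one atom, given distances r_ij to
    neighbors of a single element type (Behler, Ref. 32):
        G2_i = sum_j exp(-eta * (r_ij - r_s)**2) * f_c(r_ij)
    eta and r_s here are illustrative, not the SimpleNN defaults."""
    r_ij = np.asarray(r_ij)
    return np.sum(np.exp(-eta * (r_ij - r_s) ** 2) * cutoff_fn(r_ij, r_cut))

# Example: distances (Angstrom) from one Na atom to nearby Cl neighbors.
print(g2_descriptor([2.8, 2.8, 2.9, 4.0, 5.5]))
```

In practice, a set of such functions with different (η, R_s) values, together with the angular G4 functions, forms the input vector to the per-atom neural network.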

To train neural networks using the SimpleNN code, we use the built-in principal component preprocessing to mitigate linear dependence of symmetry functions and thereby accelerate the training.15 We also use adaptive sampling of the local atomic configurations based on Gaussian density functions, which increases the weight for infrequently encountered configurations in the loss function for training and improves the transferability of the resulting potential.37 Finally, we find that a standard feed-forward neural network architecture with two hidden layers of 30 nodes each (30–30) proves sufficient, with a negligible reduction in training errors with deeper or wider networks.

2. AIMD training data

An appropriate dataset of AIMD calculations is critical to reliably fit the parameters in the neural networks. For the present use case, we need to ensure that the NNP is able to accurately model both solid and molten NaCl across a wide range of temperatures and pressures. Table II lists the training AIMD simulations we use. We include solid configurations in the stable rock salt and metastable cesium chloride and zinc blende structures near the melting temperature and liquid configurations at ambient and high pressures spanning a range of temperatures above the melting point. High pressure simulations are necessary in order to ensure stability of the NNPs, essentially by sampling more of the repulsive regime of the potential energy surface.21 

TABLE II.

Structure, thermodynamic state points, and number of configurations sampled (10 fs apart) for each NaCl AIMD simulation used to train the NNPs utilized in this study.

Structure     T (K)   P (bar)                No. of configs
Rock salt     1000    1                      201
CsCl          1000    1                      201
Zinc blende   1100    1                      201
Liquid        1300    1                      201
Liquid        1100    1                      201
Liquid        1500    1                      201
Liquid        1700    10^5                   201
Liquid        1500    5 × 10^4               201
Liquid        1500    2 × 10^4 to 3 × 10^6   108

In order to compare the effects of different functionals, we repeat the AIMD simulations, NNP training, and all subsequent calculations for four different XC functionals, starting with the most frequently used Perdew–Burke–Ernzerhof (PBE) generalized-gradient approximation (GGA).38 Since PBE often underbinds solids leading to larger lattice constants, we also use its version reparameterized for solids, PBEsol.39 To analyze the impact of long-range dispersion interactions, we consider two variants of dispersion corrections to PBE, namely PBE D240 and PBE D3,41 which have been shown to be important for accurate structure prediction in molten salts.42 For the remainder of this paper, we refer to these four NNPs trained to different XC functionals as NNP-PBE, NNP-PBEsol, NNP-PBE D2, and NNP-PBE D3, respectively.

Each AIMD simulation is started from an equilibrated configuration of 64 atoms using the Fumi–Tosi potential, with this size sufficient to get atomic environments extending to the symmetry function cutoff and to require no Brillouin zone sampling in the DFT. Note that this box length is larger than the ionic screening lengths, the crystal lattice size, and the cutoff length used for the NNP and should, therefore, be sufficient for training the NNP; the classical MD simulations used for property predictions with the NNPs below use much larger boxes. The AIMD simulations are performed using the open-source JDFTx software,43 using a Nosé–Hoover thermostat and barostat, a time step of 1 fs, with configurations extracted for the dataset every 10 fs. Configurations are extracted at this frequency in order for a balance between obtaining sufficient training data and tractable computation time for each individual simulation. Each AIMD simulation is run for 2 ps in the NPT ensemble, except for the high pressure sweep, which consists of four snapshots chosen along a classical MD compression simulation and then simulated in AIMD as an NVE ensemble for 0.2 ps each. With enough snapshots, the NNP learns the potential energy landscape sufficiently to handle atoms approaching one another in MD to remain stable. We use a plane wave basis with kinetic energy cutoffs of 20 and 100 hartrees for wavefunctions and charge densities, respectively, as recommended for use with the Garrity-Bennett-Rabe-Vanderbilt (GBRV) ultrasoft pseudopotential set,44 and converge the wavefunctions to an energy threshold of 10−7 hartrees at each time step. All subsequent simulations using the NNP are performed in LAMMPS.27 

3. Benchmarks

We validate the trained NNP in two ways—by comparing its forces and energies to those generated via AIMD and by comparing its radial distribution functions (RDFs) to those generated by the FT potential at three different state points. This allows us to check if the forces and energies learned by the neural network are sufficient to capture the relevant structures of each phase needed for the later phase boundary calculations.

Figure 1(a) shows the correlation and errors between the NNP and AIMD (DFT) energies and forces. The energy errors are all within 26 meV/atom, which is comparable to the thermal energy k_B T at ambient temperature.

FIG. 1.

(a) Correlation plots (above) and error distributions (below) between AIMD data and NNPs for energies (left panel) and forces (right panel). (b) Comparison of partial radial pair correlation functions between Fumi–Tosi and NNPs trained to AIMD with each of the four XC functionals considered here, for solid (top panel) and liquid at ambient pressure (middle panel) and at high pressure (bottom panel). The predicted RDFs are overall very similar, except for overstructuring in the solid RDF from Fumi–Tosi potential compared to the NNPs and slight shifts of the NNP-PBE RDF peaks to the right relative to the others due to their underbinding. High pressure predominantly affects the liquid RDFs from the second coordination shell onward.


The resulting structures predicted by the NNPs are checked by running larger calculations with each potential on 4096-atom simulation cells of the solid and liquid phases at different state points displayed in Fig. 1(b). In the rock salt phase, NNP-PBE predicts a less dense phase than the other potentials (RDF peaks shifted slightly to the right), which is expected due to that functional’s well-known underbinding in solids; however, the RDFs for the other NNPs largely overlap with FT. In the liquid phase, the RDFs for all four NNPs appear to overlap with FT at both ambient and high pressures, indicating that each potential has learned the appropriate structure.
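As a sketch of how such benchmark comparisons can be generated, the following snippet histograms a partial RDF from one snapshot of a cubic periodic box using the minimum-image convention; the function name, bin settings, and array layout are our own choices and not taken from the released analysis scripts.

```python
import numpy as np

def partial_rdf(pos_a, pos_b, box_length, r_max=8.0, nbins=160):
    """Partial radial distribution function g_ab(r) from one snapshot of a
    cubic periodic box (minimum-image convention). pos_a and pos_b are
    (N, 3) arrays in Angstrom; suitable for modest system sizes."""
    edges = np.linspace(0.0, r_max, nbins + 1)
    diff = pos_a[:, None, :] - pos_b[None, :, :]          # all a-b separations
    diff -= box_length * np.round(diff / box_length)      # minimum image
    dist = np.linalg.norm(diff, axis=-1).ravel()
    dist = dist[dist > 1e-8]                               # drop self pairs if a == b
    counts, _ = np.histogram(dist, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    rho_b = len(pos_b) / box_length**3                     # number density of b
    g_r = counts / (len(pos_a) * rho_b * shell_vol)
    r = 0.5 * (edges[:-1] + edges[1:])
    return r, g_r
```

Averaging such histograms over many snapshots yields the partial RDF curves compared in Fig. 1(b).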

4. Cross-validation strategy

We also implement a three-fold cross-validation strategy to assess the impact of the training errors of each NNP on the final phase boundary results presented later in this paper. For each exchange–correlation functional, we fit three additional NNPs, each to 2/3 of the AIMD training data (excluding a different 1/3 each time) and each starting from different randomly initialized network weights. The subsets are selected to evenly span the range of configurations in the training data rather than by random sampling, so that no subset is deprived of the configurations needed to maintain a stable potential.

All subsequent calculations for the phase boundaries use four NNPs for each functional—three NNPs using 2/3 of the training data and one NNP using all the training data. Subsequent estimates along the phase boundary are reported as the mean of the predictions made by each of these four NNPs, and error bars are reported as the standard deviation of these predictions.

The most commonly used technique for estimating the solid–liquid coexistence in MD simulations is to directly set up an interface between a crystalline configuration and a liquid configuration and allow the system to equilibrate at a specified state point.45 If the state point is far from the phase boundary, the thermodynamically favorable phase grows at the expense of the less favorable one, whereas close to the phase boundary, the interface does not move appreciably.

We test the direct interface coexistence method as an initial benchmark for the ambient-pressure melting temperature of the FT potential and the NNP trained to the PBE functional (hereafter referred to as NNP-PBE). For each direct interface simulation, we start with a large orthorhombic supercell of NaCl on the order of 10^4 atoms and convert the middle region to a liquid by running a high-temperature (3000 K) NVT simulation for 30 ps, while the remainder of the system is excluded from time-integration during this initial melt. We subsequently reset velocities of the entire system and run NPT simulations to equilibrate the system at each candidate temperature at ambient pressure. We monitor the fraction of atoms in the crystalline phase as a function of time with Steinhardt’s q6 order parameter46 and repeat this at several candidate melting temperatures. Figure 2(a) displays a representative snapshot of the section of the system with the interface and the fraction of atoms in the solid phase as a function of time at different temperatures for the FT potential.

FIG. 2.

(a) Left: a snapshot of a representative interface coexistence simulation, here with the interface setup along the (100) plane. Right: the fraction of atoms in the rock salt phase as a function of simulation time for a simulation setup along the 100 interface using the Fumi–Tosi potential. (b) The rate of change of the fraction of atoms in the rock salt phase as a function of temperature along the 100, 110, and 111 interfaces, respectively, for the Fumi–Tosi potential. (c) The rate of change of the fraction of atoms in the rock salt phase for the NNP-PBE potential. A coexistence temperature can be coarsely estimated from the zero-crossing of such curves; however, the resulting estimate is inherently a feature of the surface setup in the simulation.


An inherent drawback of this method is that any assessment of a coexisting state point depends upon the orientation of the interface setup between the solid phase and the liquid phase. We repeat these simulations with solid surfaces oriented along the (100), (110), and (111) facets and track the rate of change of solid fraction as a function of temperature for each, displayed in Figs. 2(b) and 2(c) for the Fumi–Tosi and NNP-PBE potentials, respectively.

While rough values of T_m can be estimated as 1070 ± 30 K and 875 ± 40 K for the FT and NNP-PBE potentials, respectively, we see in Fig. 2(b) that there is a qualitative difference in behavior for each facet and, hence, that such a method may have limitations for estimating coexistence of the bulk of a material with high accuracy. Moreover, the velocity of the interface is essentially a product of a mobility term and a driving force, and this driving force is proportional to |T − T_m|. Close to the melting temperature, the velocity of the interface may be too slow to observe in a tractable simulation on the order of nanoseconds.22 
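A minimal sketch of that analysis, assuming hypothetical growth-rate data for one facet: fit the rate of change of the solid fraction against temperature and take the zero crossing as the coexistence estimate, as in Figs. 2(b) and 2(c).

```python
import numpy as np

def interface_tm_estimate(temperatures, solid_fraction_rates):
    """Coexistence temperature estimated as the zero crossing of a linear fit
    to d(solid fraction)/dt versus temperature. Positive rates mean the
    crystal grows; negative rates mean it melts."""
    slope, intercept = np.polyfit(temperatures, solid_fraction_rates, 1)
    return -intercept / slope

# Hypothetical data for one facet (fraction per ps): growth below the
# melting point, melting above it.
T = np.array([1030.0, 1050.0, 1070.0, 1090.0, 1110.0])
rates = np.array([2.1, 1.0, 0.1, -1.2, -2.4]) * 1e-3
print(interface_tm_estimate(T, rates))
```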

Predicting coexistence from the free energy difference between phases as a function of thermodynamic variables is a more robust method and can be achieved with significantly smaller systems than the ones necessary in the direct interface coexistence simulations above.47 

Most commonly, the free energy difference between two systems (or states) with different interaction potentials U in molecular dynamics can be obtained via thermodynamic integration (TI),23 which involves the construction of a reversible pathway between the two states. In the canonical ensemble, we can obtain the Helmholtz free energy difference as

ΔA = ∫₀¹ ⟨∂U(λ)/∂λ⟩_λ dλ,    (2)

where λ is a parameter that continuously changes the interaction potential from U(λ = 0) at the starting point to U(λ = 1) at the endpoint of the pathway and the average ⟨∂U/∂λ⟩_λ is calculated in a canonical ensemble corresponding to each intermediate λ. The key requirement is that the path is reversible so that the ensemble averages are continuous with respect to λ. Reversibility of the pathway is maintained by ensuring that the system is in equilibrium at each λ point.

The resulting ensemble averages ⟨∂U/∂λ⟩_λ depend upon the way λ is introduced into the interatomic potential energy function. Since the free energy is a state variable, it does not matter whether the pathway or the intermediate states are physically realistic; only the reversibility of the path is important. Such simulations are often referred to as alchemical simulations in the community.
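In practice, Eq. (2) is evaluated by running an equilibrium simulation at each of a discrete set of λ values, averaging ∂U/∂λ in each, and integrating numerically; the sketch below uses the trapezoidal rule on hypothetical averages.

```python
import numpy as np

def thermodynamic_integration(lambdas, du_dlambda_means):
    """Helmholtz free energy difference of Eq. (2): trapezoidal integration of
    the ensemble-averaged dU/dlambda over the path parameter lambda."""
    return np.trapz(du_dlambda_means, lambdas)

# Hypothetical ensemble averages of dU/dlambda (eV) at 11 evenly spaced lambdas.
lam = np.linspace(0.0, 1.0, 11)
du_dlam = np.array([-1.20, -1.05, -0.91, -0.80, -0.70,
                    -0.62, -0.55, -0.50, -0.46, -0.43, -0.41])
print(thermodynamic_integration(lam, du_dlam))   # Delta A in eV
```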

We can employ a “pseudosupercritical” pathway10 (PSCP) to obtain the free energy difference between the solid and liquid phases at a single state point (P, T) for a given interatomic potential. This pathway involves a direct transformation of the liquid to the solid phase at an arbitrary temperature, allowing one to sidestep the long simulation trajectories that would typically be needed to sample a nucleation event. We can also use TI to transform from relatively expensive NNPs to cheaper additive pairwise potentials, such as the FT potential, at the start and end of this pathway. This allows us to obtain final results that depend only on the NNP, even though most of the computation over the pathway is performed with pair potentials. Once an initial coexistence point is found, the phase boundary can be extended in the (P, T) space by integration of the Clausius–Clapeyron equation48 using data from simulations, for which numerous techniques are available in the literature.

So, our overall scheme for predicting the first-principles solid–liquid phase boundary of NaCl using NNPs (trained to DFT) can be broken down into the following three major steps:

  1. Use a pseudosupercritical pathway at ambient pressure with the FT potential to calculate the solid–liquid free energy difference as a function of temperature, ΔG_sl(T), and thereby estimate that model’s melting temperature T_m^FT.

  2. Use a thermodynamic cycle involving TI from the NNP to the FT potential to obtain T_m^NNP.

  3. Extend the phase boundary T_m(P) by integrating the Clausius–Clapeyron equation.

We detail each of the steps in Secs. III B 1–III B 3 and note that the second and third steps are repeated for NNPs trained to different DFT functionals and with different training sets for the cross-validation and error estimation strategy discussed in Sec. II B 4.

1. Pseudosupercritical pathway (PSCP) for Fumi–Tosi T_m

The melting temperature of NaCl at ambient pressure has been predicted previously for the Fumi–Tosi potential using different TI approaches. The initial usage of the PSCP method, proposed by Eike et al., yielded T_m^FT = 1089 ± 8 K.10 More recent studies using this same pathway by Aragones et al.49,50 but run in both directions (solid to liquid and liquid to solid) yielded an estimate of T_m^FT = 1082 ± 13 K. An alternative approach proposed by Anwar et al., involving separate pathways transforming the solid to a harmonic crystal and the liquid to an ideal gas, yielded T_m^FT = 1064 ± 14 K.11 

Both usages of the PSCP computed the solid–liquid free energy difference ΔG_sl at a single arbitrary temperature and then used analytical corrections based on the solid and liquid equations of state to predict the melting point.10 We adapt the PSCP method and make it more robust by running it independently at multiple temperatures to directly obtain ΔG_sl(T) and extract T_m from its zero-crossing, which reduces the possibility of systematic errors from the analytic corrections.

We note that we only run this pathway in a single direction, from the liquid to the solid phase. At a single state point, this pathway consists of the following four steps, displayed in Fig. 3(a):

  1. Deform the liquid from its equilibrium volume to the equilibrium volume of the solid at the same (P, T), with free energy ΔA_deform = −∫_{V_L}^{V_S} P dV.

  2. Scale down the Fumi–Tosi interaction potential U_FT to ηU_FT, with free energy given by Eq. (2) applied to U(λ) = (1 − λ)U_FT + λ(ηU_FT). Using η = 0.1, this transforms the ionic liquid to a weakly interacting liquid, amenable to the next step of transformation to the relevant crystal structure.

  3. Switch on a tethering potential U_tether, consisting of attractive Gaussian wells of the form U(r) = −A exp(−Br^2), with A = 2.0 eV and B = 1.1 Å^−2, at each crystal site, which interacts with the corresponding species of atoms. This path has net potential U(λ) = (1 − λ)(ηU_FT) + λ(ηU_FT + U_tether), with free energy given by Eq. (2), and transforms the weakly interacting liquid to a “tethered crystal.”

  4. Restore the original Fumi–Tosi potential and simultaneously switch off the tethering potential. This path has net potential U(λ) = (1 − λ)(ηU_FT + U_tether) + λU_FT, with free energy given by Eq. (2), and transforms the “tethered crystal” to the ionic crystal.

FIG. 3.

(a) Thermodynamic integration pathway to compute the solid–liquid free energy difference at a single state point. In the first step, an equilibrium ionic liquid is compressed to the corresponding equilibrium volume occupied by the crystal; in the second step, the ionic interactions are scaled down; in the third step, a tethering potential is switched on at each of the crystal lattice sites; and in the fourth step, the tethering potential is switched off and the ionic interactions are scaled back to their full values. (b) Representative Helmholtz free-energy curves for each of the four steps along the pathway, for a single run at T = 1140 K. (c) Gibbs free energy difference ΔG_sl(T) obtained from repeating this pathway for several temperatures. To avoid assuming a specific polynomial form, we fit an ensemble of kernel ridge regression models to the data with resampling and find the zero-crossing to get the melting point with a 95% confidence interval, T_m^FT = 1060.8 ± 5.5 K.


Adding the free energies from these four steps yields the net Helmholtz free energy difference between the phases, ΔA_sl. We can then calculate the Gibbs free energy difference ΔG_sl = ΔA_sl + PΔV_sl, where ΔV_sl is the corresponding change in volume at this state point (P, T).
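A minimal sketch of this bookkeeping, with hypothetical placeholder inputs: step 1 integrates the measured pressure over the deformation volumes, steps 2–4 reuse the numerical integration of Eq. (2), and the sum is converted to a Gibbs free energy difference. The function and variable names are our own and the numbers are not actual results.

```python
import numpy as np

def pscp_delta_g(volumes, pressures, ti_steps, P_ext, delta_V_sl):
    """Assemble Delta G_sl at one (P, T) from the pseudosupercritical pathway:
    step 1 gives Delta A_deform = -integral of P dV (liquid to solid volume),
    steps 2-4 give TI contributions via Eq. (2), and finally
    Delta G_sl = Delta A_sl + P * Delta V_sl. All inputs in consistent units
    (here eV and eV/A^3 per simulation cell)."""
    dA_deform = -np.trapz(pressures, volumes)       # volumes ordered V_L -> V_S
    dA_ti = sum(np.trapz(du_dlam, lam) for lam, du_dlam in ti_steps)
    dA_sl = dA_deform + dA_ti
    return dA_sl + P_ext * delta_V_sl

# Hypothetical placeholder data for one temperature (not actual results):
V = np.linspace(10500.0, 9800.0, 8)                 # A^3, liquid -> solid volume
P_of_V = np.full_like(V, 6.3e-7)                    # ~1 bar expressed in eV/A^3
lam = np.linspace(0.0, 1.0, 6)
ti_steps = [
    (lam, np.array([-0.90, -0.70, -0.55, -0.45, -0.40, -0.38])),   # step 2
    (lam, np.array([-2.00, -1.60, -1.30, -1.10, -1.00, -0.95])),   # step 3
    (lam, np.array([ 0.30,  0.50,  0.60,  0.70,  0.75,  0.80])),   # step 4
]
print(pscp_delta_g(V, P_of_V, ti_steps, P_ext=6.3e-7, delta_V_sl=-700.0))
```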

We perform each of these steps in a cell with 256 Na–Cl ion pairs, in the NVT ensemble using the Nosé–Hoover thermostat in LAMMPS. The simulations use a time step of 1 fs and converge within 25 ps for each of 50 λ values in steps 2 and 4 and within 50 ps for each λ in step 3 above. The final configuration at each λ point is used as the initial configuration for the simulation at the next λ point in order to ensure a smooth transformation along the pathway.

We repeat this entire process to compute ΔG_sl(T) for several temperatures ranging from 1030 to 1140 K and fit an ensemble of kernel ridge models to different resamplings of the data to extract the zero-crossing with an error estimate, displayed in Fig. 3(c). We thereby estimate the Fumi–Tosi melting point, T_m^FT = 1060.8 ± 5.5 K, which is consistent with the previous estimate from Ref. 11 but is lower than the ones from Refs. 10, 49, and 50. As mentioned earlier, those studies only ran the PSCP at a single temperature and subsequently used analytical corrections from the equations of state to estimate T_m^FT. Our estimate may be closer to the method employing transformations of each phase to its reference state because we run the PSCP at multiple temperatures to directly obtain the Gibbs free energy difference between phases as a function of temperature. We use our estimate of the ambient-pressure T_m^FT as the starting point to determine the NNP melting temperatures.
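A minimal sketch of this fitting procedure, using scikit-learn's KernelRidge on hypothetical (T, ΔG_sl) points; the kernel hyperparameters, resampling count, and sign convention of the data are our own illustrative choices, not the settings used for the reported result.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def melting_point_ensemble(T, dG, n_models=200, rng_seed=0):
    """Fit an ensemble of RBF kernel ridge models to bootstrap resamples of
    Delta G_sl(T) and return the mean and standard deviation of the zero
    crossings, located on a fine temperature grid."""
    rng = np.random.default_rng(rng_seed)
    T_grid = np.linspace(T.min(), T.max(), 2000)
    crossings = []
    for _ in range(n_models):
        idx = rng.integers(0, len(T), len(T))               # bootstrap resample
        model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1e-3)
        model.fit(T[idx].reshape(-1, 1), dG[idx])
        pred = model.predict(T_grid.reshape(-1, 1))
        sign_change = np.where(np.diff(np.sign(pred)) != 0)[0]
        if len(sign_change):
            crossings.append(T_grid[sign_change[0]])
    return np.mean(crossings), np.std(crossings)

# Hypothetical Delta G_sl(T) data (kJ/mol) over the scanned temperatures.
T = np.array([1030.0, 1050.0, 1070.0, 1090.0, 1110.0, 1140.0])
dG = np.array([-0.55, -0.20, 0.10, 0.45, 0.80, 1.35])
print(melting_point_ensemble(T, dG))
```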

2. Thermodynamic cycle to obtain NNP T_m

Once we have an estimate for the Fumi–Tosi melting point, we can use TI to obtain an estimate for the NNP melting point. We could adapt the approach of Sec. III B 1 by simply adding a bulk transformation between the NNP and the Fumi–Tosi potential at the start and end of the pseudosupercritical pathway [Fig. 3(a)] to obtain ΔG_sl^NNP(T); however, this would involve running multiple equilibrium simulations with the NNP at intermediate λ points at every temperature. Since typical NNP simulations are on the order of 100 times slower than FT simulations, we instead propose an alternative approach that converges to T_m^NNP more rapidly.

Starting from an initial guessed value for T_m^NNP, we run the following thermodynamic cycle separately for each phase:

  • Convert the NNP to the Fumi–Tosi potential through TI, with ΔA computed using Eq. (2) applied to U(λ) = (1 − λ)U_NNP + λU_FT. At a fixed volume, this changes the pressure from P for NNP equilibrium to P′ for FT equilibrium. Consequently, this step yields ΔG = ΔA + V(P′ − P).

  • Change pressure (at fixed T) of Fumi–Tosi solid/liquid from P′ back to P.

  • Change temperature (at fixed P) of the Fumi–Tosi solid/liquid to T_m^FT, where ΔG_sl^FT = 0 by definition.

See Fig. 4(b) for an elucidation of this cycle. Adding these three steps together, we obtain ΔG_sl for the NNP at the guessed temperature. We can subsequently calculate a correction factor to update the guess for T_m^NNP using the following equation:

T_m^new = T [1 + ΔG_sl(T)/ΔH_sl],    (3)

where T is the current guess, because ∂G/∂T = −S and ΔS_sl = ΔH_sl/T_m at the true melting point, and the latter is approximately true close to the melting point. We find that for the NNPs trained in this study, this pathway converges to within a 4 K tolerance in at most five such iterations.
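The iteration can be summarized in a few lines; the sketch below assumes the first-order update of Eq. (3), and the mock_cycle function is a hypothetical stand-in for the full NNP-to-FT thermodynamic cycle rather than an actual simulation driver.

```python
def update_melting_point(T_guess, delta_g_sl, delta_h_sl):
    """One application of Eq. (3): first-order correction of the melting-point
    guess from the free energy difference computed at that temperature.
    delta_g_sl and delta_h_sl must use the same sign convention (per mole)."""
    return T_guess * (1.0 + delta_g_sl / delta_h_sl)

def converge_tm(compute_delta_g, delta_h_sl, T_guess, tol=4.0, max_iter=5):
    """Iterate the thermodynamic cycle of Fig. 4(b): evaluate Delta G_sl at the
    current guess and update with Eq. (3) until the change is below tol (K)."""
    for _ in range(max_iter):
        T_new = update_melting_point(T_guess, compute_delta_g(T_guess), delta_h_sl)
        if abs(T_new - T_guess) < tol:
            return T_new
        T_guess = T_new
    return T_guess

# Hypothetical stand-in for the NNP -> FT cycle of Fig. 4(b): a linear
# Delta G_sl(T) in kJ/mol with an assumed melting point of 1100 K.
dH_sl = 28.0   # kJ/mol
mock_cycle = lambda T: -(dH_sl / 1100.0) * (T - 1100.0)
print(converge_tm(mock_cycle, dH_sl, T_guess=1061.0))
```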
FIG. 4.

(a) The NNP melting point can be found by performing a bulk transformation to and from a cheaper classical potential at the endpoints of the pseudosupercritical pathway. (b) Thermodynamic cycle we use to iteratively converge upon T_m^NNP. We make a guess for T_m^NNP, use this cycle to compute ΔG_sl^NNP at that state point, and subsequently update our guess using Eq. (3). This is a faster way to obtain T_m^NNP than the method in (a), since it converges within a maximum of five iterations, whereas the pathway above to obtain ΔG_sl^NNP(T) directly would involve running multiple expensive NNP simulations at various λ points at every scanned temperature. (c) Representative dU/dλ variation for the TI step above involving a bulk transformation from the NNP to the Fumi–Tosi potential; the linear variation indicates that very few λ points are needed to obtain the relevant free energy difference.


Figure 4(c) shows a representative dU/dλ curve obtained for the NNP to FT connection with ten λ points. Note the near perfect linearity of dU/dλ with λ, indicating that this TI can be performed with very few λ points, possibly even with just three λ points at 0, 0.5, and 1. Once again, this indicates the feasibility of the present approach to keep most of the computation at the cheaper classical potential level, requiring very few calculations using NNPs. The main requirement is that the classical potential is just accurate enough to predict a liquid and a solid phase that remain stable through the TI paths shown above.

3. Extension of phase boundary using Clausius–Clapeyron equation

Once we have an initial point of solid–liquid coexistence, we can numerically integrate the Clausius–Clapeyron equation,48

dP/dT = ΔH_sl / (T ΔV_sl),    (4)

to find the entire coexistence line in the (P, T) space for any interatomic potential. The solid–liquid differences in enthalpy, ΔH_sl, and molar volume, ΔV_sl, can be obtained directly from NPT simulations of both phases at a known coexisting state point (P, T). This allows for an initial estimate of the melting temperature at pressure P′ = P + ΔP as T′ = T + ΔP/(dP/dT) (we use a ΔP of 1000 bars in the present work).

We then converge from this straight-line approximation to find the point where ΔG_sl = 0, using a method very similar to the calculation of the NNP melting point in Sec. III B 2. We run a compression step from P to P′ at constant T and then iteratively run heating steps from T to T′ at constant P′ until convergence, using Eq. (3) at each iteration to obtain the correction factors. Note that these are essentially the last two steps of the thermodynamic cycle in Fig. 4(b).

We find that this approach converges within three iterations for each step in pressure, with a convergence criterion of |ΔT| < 4 K, for all the interaction potentials used in this study. This approach is similar to the coexistence-line free-energy difference integration method12 but distinct from Gibbs–Duhem integration.51 
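A minimal sketch of this stepping scheme, with the corrector iterations of Sec. III B 2 omitted and a hypothetical stand-in for the NPT simulations of both phases; the function names and unit conventions are our own.

```python
def extend_phase_boundary(T0, P0, get_dh_dv, dP_bar=1000.0, n_steps=10):
    """Predictor step along the coexistence line: starting from a known
    melting point (T0 in K, P0 in bar), repeatedly apply the Clausius-
    Clapeyron slope of Eq. (4), dP/dT = dH_sl / (T * dV_sl), to estimate
    the melting temperature at P + dP. get_dh_dv(T, P) must return the
    solid-liquid enthalpy difference (J/mol) and molar volume difference
    (m^3/mol) obtained from NPT simulations of both phases."""
    boundary = [(P0, T0)]
    T, P = T0, P0
    for _ in range(n_steps):
        dH, dV = get_dh_dv(T, P)
        dPdT = dH / (T * dV)                 # Pa/K for SI inputs
        T = T + (dP_bar * 1.0e5) / dPdT      # convert dP from bar to Pa
        P = P + dP_bar
        boundary.append((P, T))              # corrector iterations omitted here
    return boundary

# Hypothetical stand-in for the NPT simulations of both phases.
mock_npt = lambda T, P: (2.8e4, 5.6e-6)      # J/mol, m^3/mol
print(extend_phase_boundary(1061.0, 1.0, mock_npt, dP_bar=1000.0, n_steps=3))
```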

The techniques developed in Sec. III allow for mapping of the solid–liquid phase boundary in the pressure–temperature space for any interatomic potential, including classical potentials, such as the FT potential for NaCl, and machine-learned potentials, such as NNPs. Figure 5 shows the comparison of the NaCl phase boundaries predicted by NNPs trained to four different DFT XC functionals against experimental measurements and the FT potential.

FIG. 5.

Predicted NaCl solid–liquid phase boundaries for NNPs trained to four different DFT functionals, compared to experimental results and Fumi–Tosi predictions. The error bars shown for the NNP predictions are from cross-validation using NNPs trained to different subsets of the DFT data for each case. The NNPs trained to PBE and PBEsol without explicit dispersion corrections strongly underestimate the melting temperature at all pressures, while the PBE D2 and PBE D3 dispersion-corrected results agree better with experiment than the empirical Fumi–Tosi potential.


First, note that while the classical potential is accurate for the melting point at ambient pressure compared to experiment, it deviates from experiment at higher pressures, consistent with previous results with this potential.10 Note that even the slope dP/dT of the coexistence line is incorrect near ambient pressure, indicating that the error stems from either the predicted enthalpy difference or molar volume difference between the phases, as indicated by the Clausius–Clapeyron equation. Table III shows that the FT potential is reasonably accurate for the enthalpy difference and solid volume but overestimates the liquid volume and thereby results in a smaller dP/dT than experiment.

TABLE III.

Comparison of lattice constant a and atomization energy Ea of NaCl crystals at 300 K, as well as equilibrium molar volumes and solid–liquid enthalpy difference at the respective melting points, predicted by different NNPs using DFT exchange–correlation functionals and classical potentials against the experimental values.52,53 The values for dP/dT are reported at ambient pressure. The final column reports the mean absolute percent error (MAPE) of all the ambient-pressure properties (excluding dP/dT).

             a (Å)   E_a (eV)   V_s (L/mol)   V_l (L/mol)   ΔH_sl (kJ/mol)   dP/dT (bar/K)   MAPE (%)
Experiment   5.60    6.68       32.0^54       37.6^54       28.0^55          4689            ⋯
Fumi–Tosi    5.62    6.53^a     31.8          41.2          28.5             3043            2.9
PBE          5.70    6.28       32.8          44.4          28.5             2438            6.0
PBEsol       5.61    6.47       30.2          41.2          28.2             2550            3.9
PBE D2       5.66    6.75       30.3          36.9          33.0             4944            5.5
PBE D3       5.66    6.66       31.7          37.9          28.8             4653            1.2
a

Predicted energy of splitting crystal to ions, combined with experimental Na ionization energy and Cl electron affinity, since this classical potential can only describe ions and not atoms.

The NNP-PBE potential leads to a consistently lower melting point than experiment for all pressures, as shown in Fig. 5(a). This is expected given the tendency of PBE to underestimate binding in solids generally. In particular, for NaCl, PBE predicts an almost 2% larger lattice constant for the crystal than experiment and underestimates the atomization energy by 6% (Table III), leading to the roughly 20% underestimation of the melting point.

The PBEsol functional is a reparameterization of PBE that restores the correct gradient expansion of the correlation energy, generally improving performance for solids.39 This fixes the lattice constant of the crystal (<0.2% error), but the atomization energy is still underestimated by 3%. Correspondingly, the NNP-PBEsol melting point predictions shown in Fig. 5(b) are slightly improved compared to the PBE case but are still substantially lower than experiment at all pressures.

Only NNPs trained to DFT that includes dispersion corrections predict melting points that agree reasonably with experiment, as shown in Figs. 5(c) and 5(d). This was also recently pointed out for the ambient-pressure melting point in a previous study.21 The dispersion-corrected PBE D2 variant40 has the lowest error for this ambient-pressure melting point, but the PBE D3 variant41 exhibits better accuracy overall for the entire range of pressures considered in this study.

Table III indicates that both PBE D2 and PBE D3 are less accurate than PBEsol for the lattice constant at lower temperatures but are more accurate for the atomization energy. Both these dispersion-corrected functionals are more accurate for the solid and liquid volumes near the melting point, so the relative underbinding of these functionals for the perfect crystal becomes less important for solids at higher temperatures and for the liquid phase. PBE D3 has the closest molar volumes and enthalpy difference to experiment among all the potentials considered here (including the FT potential), resulting in the most accurate ambient-pressure dP/dT and the most accurate phase boundary across the pressure–temperature space. As shown in Table IV, this leads to the best accuracy overall across the considered pressures for PBE D3, followed by the (empirical) Fumi–Tosi model and then the other DFT functionals investigated here.

TABLE IV.

Comparison of ambient pressure Tm and mean absolute percent error (MAPE) computed along the phase boundary for each force field.

             T_m (K)      MAPE (%)
Experiment   1074         ⋯
Fumi–Tosi    1061 ± 6     2.8
PBE          859 ± 10     10.1
PBEsol       920 ± 5      8.7
PBE D2       1059 ± 11    2.9
PBE D3       1103 ± 26    1.6

We introduced a computational approach to efficiently predict ab initio level solid–liquid phase boundaries in molten salts using a combination of machine-learned potentials and thermodynamic integration. We used NNPs trained to DFT with different exchange–correlation functionals in order to compare the accuracy of different DFT methods for the thermodynamics of molten salts, with error bars on all predictions using ensembles of NNPs trained to different AIMD data.

Most importantly, we tailored the thermodynamic integration approach to carry out most of the simulations using low-cost classical potentials, with NNPs used only in the final connection. Critically, once this approach is converged, the final result depends only on the NNP interaction potential, even though we used the lower-level classical potential in all intermediate steps of the path connecting the solid and liquid. Overall, this approach makes it much more tractable to explore molten salt equilibria with accuracy ultimately limited by the first-principles methods underlying the NNPs.

In particular, for the melting of NaCl, we show that treatment of long-range dispersion interactions in the DFT exchange–correlation functional is critical, with PBE D3 yielding the overall highest accuracy for solid–liquid coexistence across a wide range of pressures. We note that this finding is consistent with Ref. 42 that found PBE D3 to yield the highest accuracy for the local structure of molten chloride salts, among the XC functionals investigated in that study.

The overall approach described here was prototyped using NaCl as a model system but is applicable for any single-component system with an interatomic potential that is accurate enough to exhibit a stable solid and a liquid phase in the relevant temperature range. Importantly, the thermodynamic integration approach removes the dependence of the final results on this potential and allows prediction via NNPs using any underlying first-principles method. Future work can extend such free-energy methods combining NNPs and thermodynamic integration to predict phase boundaries in multicomponent systems and solubility limits.

The supplementary material includes the pseudocode of all relevant algorithms and a complete set of neural network potential files, LAMMPS, and Python scripts for reproducing the results and analysis presented here.

This work was supported by funding from the DOE Office of Nuclear Energy’s Nuclear Energy University Program (NEUP) under Award No. DE-NE0008946.

The authors have no conflicts to disclose.

T.S. and K.F. contributed equally to this work.

Tanooj Shah: Conceptualization (equal); Data curation (lead); Formal analysis (lead); Investigation (lead); Methodology (lead); Software (equal); Visualization (lead); Writing – original draft (lead); Writing – review & editing (lead). Kamron Fazel: Data curation (lead); Formal analysis (equal); Investigation (equal); Software (equal); Writing – original draft (equal); Writing – review & editing (equal). Jie Lian: Conceptualization (equal). Liping Huang: Conceptualization (equal); Resources (equal); Supervision (equal); Writing – review & editing (equal). Yunfeng Shi: Conceptualization (equal); Methodology (equal); Resources (equal); Validation (equal); Visualization (equal); Writing – review & editing (equal). Ravishankar Sundararaman: Conceptualization (lead); Data curation (equal); Formal analysis (equal); Funding acquisition (lead); Investigation (lead); Project administration (equal); Resources (equal); Software (lead); Supervision (equal); Validation (equal); Visualization (equal); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

2. H. Zhang, J. Baeyens, J. Degreve, and G. Caceres, Renewable Sustainable Energy Rev. 22, 466 (2013).
3. W. Grimes, Nucl. Appl. Technol. 8, 137 (1970).
5. N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953).
6. B. J. Alder and T. E. Wainwright, J. Chem. Phys. 27, 1208 (1957).
8. P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
9. W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
10. D. M. Eike, J. F. Brennecke, and E. J. Maginn, J. Chem. Phys. 122, 014115 (2005).
11. J. Anwar, D. Frenkel, and M. G. Noro, J. Chem. Phys. 118, 728 (2003).
12. E. J. Meijer and F. El Azhar, J. Chem. Phys. 106, 4678 (1997).
13. J. Behler, J. Chem. Phys. 145, 170901 (2016).
14. R. Lot, F. Pellegrini, Y. Shaidu, and E. Kucukbenli, Comput. Phys. Commun. 256, 107402 (2020); arXiv:1907.03055.
15. K. Lee, D. Yoo, W. Jeong, and S. Han, Comput. Phys. Commun. 242, 95 (2019).
16. H. Wang, L. Zhang, J. Han, and E. Weinan, Comput. Phys. Commun. 228, 178 (2018).
17. S.-C. Lee, Y. Zhai, Z. Li, N. P. Walter, M. Rose, B. J. Heuser, and Y. Z, J. Phys. Chem. B 125, 10562 (2021).
18. W. Liang, G. Lu, and J. Yu, ACS Appl. Mater. Interfaces 13, 4034 (2021).
19. W. Liang, G. Lu, and J. Yu, J. Mater. Sci. Technol. 75, 78 (2021).
20. T. Xu, X. Li, L. Guo, F. Wang, and Z. Tang, Sol. Energy 209, 568 (2020).
21. Q.-J. Li, E. Küçükbenli, S. Lam, B. Khaykovich, E. Kaxiras, and J. Li, Cell Rep. Phys. Sci. 2, 100359 (2021).
22. D. Frenkel and B. Smit, Understanding Molecular Simulation: From Algorithms to Applications, 2nd ed. (Academic Press, 2002).
23. J. G. Kirkwood, J. Chem. Phys. 3, 300 (1935).
24. I. A. Kruglov, A. Yanilkin, A. R. Oganov, and P. Korotaev, Phys. Rev. B 100, 174104 (2019).
25. R. Jinnouchi, F. Karsai, and G. Kresse, Phys. Rev. B 101, 060201 (2020).
26. S. Fukushima, E. Ushijima, H. Kumazoe, A. Koura, F. Shimojo, K. Shimamura, M. Misawa, R. K. Kalia, A. Nakano, and P. Vashishta, Phys. Rev. B 100, 214108 (2019).
27.
28. F. G. Fumi and M. P. Tosi, J. Phys. Chem. Solids 25, 31 (1964).
29. C. J. Fennell and J. D. Gezelter, J. Chem. Phys. 124, 234104 (2006).
30. J. Lu, S. Yang, G. Pan, J. Ding, S. Liu, and W. Wang, Energies 14, 746 (2021).
31. D. Rumelhart, G. Hinton, and R. Williams, Nature 323, 533 (1986).
32. J. Behler, J. Chem. Phys. 134, 074106 (2011).
33. A. Bartok, R. Kondor, and G. Csanyi, Phys. Rev. B 87, 184115 (2013).
34. A. Bartok, M. Payne, R. Kondor, and G. Csanyi, Phys. Rev. Lett. 104, 136403 (2010).
35. M. Rupp, A. Tkatchenko, K.-R. Muller, and O. A. von Lilienfeld, Phys. Rev. Lett. 108, 058301 (2012).
37. W. Jeong, K. Lee, D. Yoo, D. Lee, and S. Han, J. Phys. Chem. C 122, 22790 (2018).
38. J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
39. J. P. Perdew, A. Ruzsinszky, G. I. Csonka, O. A. Vydrov, G. E. Scuseria, L. A. Constantin, X. Zhou, and K. Burke, Phys. Rev. Lett. 100, 136406 (2008).
40.
41. S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, J. Chem. Phys. 132, 154104 (2010).
42. S. Roy, M. Brehm, S. Sharma, F. Wu, D. S. Maltsev, P. Halstenberg, L. C. Gallington, S. M. Mahurin, S. Dai, A. S. Ivanov, C. J. Margulis, and V. S. Bryantsev, J. Phys. Chem. B 125, 5971 (2021).
43. R. Sundararaman, K. Letchworth-Weaver, K. A. Schwarz, D. Gunceler, Y. Ozhabes, and T. A. Arias, SoftwareX 6, 278 (2017).
44. K. F. Garrity, J. W. Bennett, K. M. Rabe, and D. Vanderbilt, Comput. Mater. Sci. 81, 446 (2014).
45. M. P. Allen and D. J. Tildesley, Computer Simulations of Liquids, 2nd ed. (Oxford University Press, 2017).
46. P. J. Steinhardt, D. R. Nelson, and M. Ronchetti, Phys. Rev. B 28, 784 (1983).
47. C. Chipot and A. Pohorille, Free Energy Calculations: Theory and Applications in Chemistry and Biology, Springer Series in Chemical Physics Vol. 86 (Springer, 2007).
48. R. Clausius, Ann. Phys. Phys. Chem. 155, 500 (1850).
49. J. L. Aragones, E. Sanz, C. Valeriani, and C. Vega, J. Chem. Phys. 137, 104507 (2012).
50. R. S. DeFever, H. Wang, Y. Zhang, and E. J. Maginn, J. Chem. Phys. 153, 011101 (2020).
52. American Institute of Physics Handbook, 3rd ed., edited by D. E. Gray (McGraw-Hill, New York, 1972).
53. P. Brown, A Level Born Haber Cycle Calculations Sodium Chloride Magnesium Chloride Magnesium Oxide Sodium Oxide Enthalpy Level Diagrams KS5 GCE Chemistry Revision Notes (2000).
54. A. Kirshenbaum, J. Cahill, P. McGonigal, and A. Grosse, J. Inorg. Nucl. Chem. 24, 1287 (1962).
55. F. Ullmann, W. Gerhartz, Y. S. Yamamoto, F. T. Campbell, R. Pfefferkorn, and J. F. Rounsaville, Ullmann’s Encyclopedia of Industrial Chemistry, 5th ed. (VCH, Weinheim, Federal Republic of Germany, 1985).
