Machine learning is an important tool in the study of phase behavior from molecular simulations. In this work, we use unsupervised machine learning methods to study the phase behavior of two off-lattice models, a binary Lennard-Jones (LJ) mixture and the Widom–Rowlinson (WR) non-additive hard-sphere mixture. The majority of previous work has focused on lattice models, such as the 2D Ising model, where the values of the spins are used as the feature vector input into the machine learning algorithm, with considerable success. For these two off-lattice models, we find that the choice of the feature vector is crucial to the ability of the algorithm to predict a phase transition and depends on the particular model system being studied. We consider two feature vectors, one where the elements are the distances of the particles of a given species from a probe (distance-based feature) and one where the elements are +1 if there is an excess of particles of the same species within a cut-off distance and −1 otherwise (affinity-based feature). We use principal component analysis and t-distributed stochastic neighbor embedding to investigate the phase behavior at the critical composition. We find that the choice of the feature vector is the key to the success of the unsupervised machine learning algorithm in predicting the phase behavior, and the sophistication of the machine learning algorithm is of secondary importance. In the case of the LJ mixture, both feature vectors accurately predict the critical point; in the case of the WR mixture, the affinity-based feature vector provides accurate estimates of the critical point, whereas the distance-based feature vector does not provide a clear signature of the phase transition. The study suggests that physical insight into the choice of input features is an important aspect of implementing machine learning methods.

Machine learning (ML) has become an important tool in the study of phase transitions in molecular simulations.1 Supervised and unsupervised methods have been successfully applied to a variety of systems.2–12 While the majority of studies have focused on lattice models such as the 2D Ising model,8,13–18 there have also been supervised machine learning studies of complex fluids, such as polymer blends19 and polymers in ionic liquids.7 In this work, we use unsupervised machine learning methods to study the phase behavior of two continuous-space binary mixtures.

Machine learning methods can be broadly classified into two categories: supervised and unsupervised. In supervised methods, such as neural networks and support vector machines, the algorithm is trained on known data and used to predict the behavior of an unknown evaluation set. In unsupervised methods, the dimensionality of the input data is reduced, and correlations (or clustering) in the reduced dimensions are used to provide insight into the behavior; the objective is to discover patterns in the data without any prior training.2,13 The advantage of unsupervised methods for phase behavior is that prior knowledge of the existence of a phase transition is not required. Furthermore, they can be useful for complex fluids, where molecular simulation of the phase behavior using conventional methods (required to generate a training set for supervised methods) is computationally intensive. In this work, we use unsupervised machine learning with two dimensionality reduction techniques, namely, principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE), to study the phase behavior of off-lattice models.

There have been some studies of continuous-space systems using unsupervised ML methods. Jadrich et al. used PCA to study the phase behavior of 2D model (continuous-space) particle systems, such as the RandOrg model, hard ellipses, and the non-additive hard-sphere Widom–Rowlinson (WR) mixture, as well as 3D hard-sphere and patchy colloid mixtures.3,4 They emphasized the importance of "feature" construction in the application of PCA to particle-based systems; the feature vector is the data that is input to the ML method. In spin models, such as the 2D Ising model, the raw configuration, i.e., the values of all the spins, is usually an excellent choice for the input feature vector. For particle-based systems, the positions of the particles prove to be uninformative. Inter-particle distances are more useful, since these are invariant to rotation of the simulation cell.3,4,20

In this work, we study the phase behavior of two 3D off-lattice systems showing liquid–liquid phase separation, a binary Lennard-Jones (LJ) mixture and the Widom–Rowlinson (WR) mixture, using the PCA and t-SNE techniques. We find that the construction of the feature vector is crucial to the ability of the ML method to predict the phase behavior. A distance-based feature provides good estimates for the critical point of the LJ mixture, but not for the WR mixture. We propose an affinity-based feature, where the elements are +1 if a particle has predominantly neighbors of the same species and −1 otherwise. We find that this feature provides an excellent description of the phase behavior of both models.

The two models we investigate are an LJ mixture and the WR mixture.21 In the symmetric LJ mixture, the particles interact via a truncated and force-shifted 12-6 LJ potential, i.e., the potential of interaction between a particle of species A and a particle of species B, ΦAB, is given by

$$\Phi_{AB}(r) = \begin{cases} \phi_{AB}(r) - \phi_{AB}(r_c) - (r - r_c)\,\phi'_{AB}(r_c), & r \le r_c \\ 0, & r > r_c \end{cases} \tag{1}$$

where

$$\phi_{AB}(r) = 4\epsilon_{AB}\left[\left(\frac{\sigma_{AB}}{r}\right)^{12} - \left(\frac{\sigma_{AB}}{r}\right)^{6}\right] \tag{2}$$

σAA = σBB = σAB = σ, rc = 2.5σ, and ϵAA = ϵBB = 2ϵAB = ϵ. Since the attraction between like species (A–A and B–B) is stronger than that between unlike (A–B) species, this system phase separates upon cooling. We define a reduced temperature T* = kBT/ϵ, where kB is Boltzmann's constant and T is the temperature, and a reduced density ρ = Nσ³/V, where N = NA + NB is the total number of particles, NA and NB denote the number of A-type and B-type particles, respectively, and V is the volume of the simulation box.
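
For concreteness, the following is a minimal Python sketch of this truncated, force-shifted form; the function names are illustrative, and the shifting convention (both the potential and the force brought smoothly to zero at rc) is our reading of Eqs. (1) and (2), not a verbatim reproduction of the authors' code.

def phi_lj(r, eps, sigma):
    # Bare 12-6 Lennard-Jones potential.
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def dphi_lj(r, eps, sigma):
    # Radial derivative d(phi)/dr of the 12-6 LJ potential.
    sr6 = (sigma / r) ** 6
    return (24.0 * eps / r) * (sr6 - 2.0 * sr6 ** 2)

def phi_fs(r, eps, sigma, rc):
    # Truncated, force-shifted potential: both the potential and the
    # force vanish continuously at the cut-off rc.
    if r >= rc:
        return 0.0
    return (phi_lj(r, eps, sigma) - phi_lj(rc, eps, sigma)
            - (r - rc) * dphi_lj(rc, eps, sigma))

# Example: like-species (A-A) interaction with eps = sigma = 1, rc = 2.5*sigma
print(phi_fs(2.0 ** (1.0 / 6.0), 1.0, 1.0, 2.5))  # near the potential minimum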

In the WR mixture, there is no interaction between particles of the same species, and particles of unlike species (A–B) interact via a hard-sphere interaction with diameter σHS. This system phase separates when the density is increased. We define a reduced density ρ = NσHS³/V. All simulations are performed for a mole fraction x = 0.5, which, by symmetry, is the critical composition of both mixtures.

Configurations of the WR mixture are obtained from Monte Carlo simulations in the NVT (fixed number of particles, volume, and temperature) ensemble. Initial configurations are obtained by placing N particles in a cubic cell with linear dimension L. The system is evolved via single-particle moves, where a randomly chosen particle is displaced in a random direction by a distance uniformly distributed on (0, σHS). The system is equilibrated for 10⁶ steps per particle. The production run consists of 10⁷ attempted moves per particle, from which 1000 configurations are extracted. A total of 51 number densities are explored, ranging from ρσHS³ = 0.5 to 1. We study six system sizes with N = 1024, 2048, 3072, 4096, 8192, and 16 384.
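
As an illustration of this move, here is a minimal sketch of one single-particle WR Monte Carlo step under the stated protocol; the function name and array layout are our assumptions, not the authors' code.

import numpy as np

rng = np.random.default_rng()

def attempt_wr_move(pos, species, L, sigma_hs=1.0):
    # Pick a random particle and displace it in a random direction by a
    # distance uniform on (0, sigma_hs), with periodic boundaries.
    i = rng.integers(len(pos))
    step = rng.uniform(0.0, sigma_hs)
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)              # random direction on the unit sphere
    trial = (pos[i] + step * u) % L
    # The only WR interaction is a hard core between unlike species:
    # reject the move if the trial position overlaps any unlike particle.
    other = pos[species != species[i]]
    d = other - trial
    d -= L * np.round(d / L)            # minimum-image convention
    if np.all(np.sum(d * d, axis=1) >= sigma_hs ** 2):
        pos[i] = trial                  # accept
        return True
    return False                        # reject; configuration unchanged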

Configurations of the binary LJ mixture are obtained via molecular dynamics (MD) simulations in the canonical (NVT) ensemble using the LAMMPS package.22 Initial configurations are obtained by placing N particles in a cubic cell with linear dimension L = V^(1/3), such that ρ = 1. The system is propagated using a velocity Verlet integrator with a time step of Δτ* = 0.004, where τ* is the reduced time (τ* = t√(ϵ/mσ²), where m is the mass of a particle). A Nosé–Hoover thermostat is used to maintain the desired simulation temperature.23,24 The initial configuration is equilibrated for 2 × 10⁶ time steps (8000τ*), and properties are averaged over 10⁷ time steps (40 000τ*), from which 1000 configurations are extracted. Simulations are performed for temperatures ranging from 1.0 ≤ T* ≤ 1.8 for the distance feature matrix calculations and from 0.9 ≤ T* ≤ 2.375 for the affinity matrix calculations. For the affinity matrix calculations, we study system sizes with L/σ = 10, 12, 14, 16, 18, 20, 24, and 30.

The feature vector is a one-dimensional array containing the data from each configuration that is input into the ML algorithm. In spin systems, this is usually the list of the values of all the spins in a configuration. In continuous-space systems, the choice is not obvious. We test two different choices for the feature vector in this work. The first, which we refer to as the distance-based feature, is based on the work of Jadrich et al.4 The second is a choice that we propose, inspired by spin systems, which we refer to as the affinity-based feature.

The distance-based feature vector is constructed as follows [see Fig. 1(a)]: For each configuration, we choose a probe particle of one species (we select species A) and calculate the distances from this particle to its Nc nearest neighbors of the same species, which are used for the analysis described below. If vi is the distance from the probe to the ith nearest neighbor, the average distance is

$$\bar{v} = \frac{1}{N_c}\sum_{i=1}^{N_c} v_i \tag{3}$$

We define a new distance xi = vi − v̄ and re-order the elements such that x1 ≤ x2 ≤ ⋯ ≤ xNc. The feature vector for a given probe is then

$$\mathbf{x} = [x_1, x_2, \ldots, x_{N_c}] \tag{4}$$

For the WR mixture, the particle coordinates are first normalized by the box length.
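
A minimal sketch of this construction, assuming NumPy arrays of like-species coordinates (the function name and signature are illustrative):

import numpy as np

def distance_feature(pos_like, probe, Nc, L=None):
    # Distances from the probe to all particles of the same species.
    d = pos_like - pos_like[probe]
    if L is not None:
        d -= L * np.round(d / L)        # minimum-image convention
    v = np.sort(np.linalg.norm(d, axis=1))[1:Nc + 1]  # drop the self-distance
    x = v - v.mean()                    # subtract the mean, Eq. (3)
    return np.sort(x)                   # ordered so that x1 <= ... <= xNc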

FIG. 1.

Pictorial representation of the construction of (a) the distance-based feature vector and (b) the affinity-based feature vector. Here, Ni labels the probe particle, and vi is the distance from the probe to the ith nearest like-type particle; the Nc nearest neighbors are considered.


The feature matrix X is constructed by stacking all the feature vectors

$$X = \begin{bmatrix} \mathbf{x}^{(1)} \\ \mathbf{x}^{(2)} \\ \vdots \\ \mathbf{x}^{(N_T)} \end{bmatrix} \tag{5}$$

where NT is the number of configurations. Since there are Nc neighbors, X is a matrix of dimension NT × Nc. We average the results over ten probes for the LJ mixture and N/2 probes for the WR mixture: PCA is performed on the feature matrix constructed from each probe, and the resulting mean and standard deviation profiles are averaged to obtain the final results.

The affinity-based feature vector is constructed as follows [see Fig. 1(b)]: For each particle, we calculate the number of particles of each species within a cut-off distance rcut, which we set equal to half the box length (see below). If the number of particles of the same species around particle i is greater than the number of particles of the other species, then gi = +1; otherwise, gi = −1.

For each configuration, the feature vector is given by

$$\mathbf{g} = [g_1, g_2, \ldots, g_N] \tag{6}$$

where N is the total number of particles. The feature matrix is an NT × N matrix,

$$X = \begin{bmatrix} \mathbf{g}^{(1)} \\ \mathbf{g}^{(2)} \\ \vdots \\ \mathbf{g}^{(N_T)} \end{bmatrix} \tag{7}$$

We optimize the value of rcut in the affinity-based feature vector by selecting the value that best distinguishes between different structures. Figure 2 depicts the probability distribution function, P(ḡ), of the average feature vector component, ḡ = (1/N)Σᵢ gi, for two values of rcut. There is a greater distinction between the different values of ḡ for rcut = 0.5L. We choose the value of rcut for which the variance σḡ² is largest; for both the WR and LJ mixtures, this occurs at rcut = 0.5L.
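
A minimal sketch of the affinity-based construction for one configuration (a brute-force O(N²) pairwise computation; the function name and array layout are illustrative):

import numpy as np

def affinity_feature(pos, species, L, rcut=None):
    # g_i = +1 if particle i has more like than unlike neighbors within
    # rcut (here defaulting to L/2), and -1 otherwise.
    if rcut is None:
        rcut = 0.5 * L
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)            # minimum-image convention
    r2 = np.sum(d * d, axis=-1)
    np.fill_diagonal(r2, np.inf)        # a particle is not its own neighbor
    within = r2 < rcut ** 2
    same = species[:, None] == species[None, :]
    n_like = np.sum(within & same, axis=1)
    n_unlike = np.sum(within & ~same, axis=1)
    return np.where(n_like > n_unlike, 1, -1)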

FIG. 2.

Probability distribution of the mean feature vector component ḡ for the LJ mixture at (a) rcut/L = 0.1 and (b) rcut/L = 0.5.


We use Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE)25,26 to obtain a low-dimensional representation of our data. PCA uses an orthogonal linear mapping of the original high dimensional data into a lower dimensional space. The method we use is standard, but we describe it briefly here for completeness.

The first step is standardization of the feature matrix to give a matrix XS, where the mean of each column is zero. The covariance matrix

$$C = \frac{1}{N_T - 1} X_S^{T} X_S \tag{8}$$

is a symmetric, positive semi-definite matrix (of dimension Nc × Nc or N × N) with non-negative eigenvalues and orthogonal eigenvectors. The eigenvalues bn and eigenvectors wn are obtained by solving

$$C \mathbf{w}_n = b_n \mathbf{w}_n \tag{9}$$

The transformation matrix W has the eigenvectors as columns, i.e., W = [w1, w2, …, wM], where the eigenvectors are ordered by decreasing eigenvalue, b1 > b2 > b3, etc. The principal component matrix is given by

$$P = X_S W \tag{10}$$

The principal component (PC) axes are the columns of P. Since the eigenvalues are ordered in decreasing magnitude, the first PC will have the largest variance. A dimensionality reduction is achieved by choosing the first few principal components. If bn is the variance of PC n, called the explained variance, the relative explained variance λn is defined as

$$\lambda_n = \frac{b_n}{\sum_{m=1}^{M} b_m} \tag{11}$$

where M is the dimension of W.
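
In practice, this reduction is a few lines with scikit-learn; the following is a minimal sketch in which the random matrix is only a stand-in for the feature matrices described above:

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(1000, 20)            # stand-in NT x Nc feature matrix
X_s = X - X.mean(axis=0)                # center each column (feature)
pca = PCA(n_components=20)
P = pca.fit_transform(X_s)              # PC score matrix, Eq. (10)
lam = pca.explained_variance_ratio_     # relative explained variance, Eq. (11)

Note that scikit-learn's PCA also centers the columns internally, so the explicit centering above simply mirrors the standardization step described in the text.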

Figure 3 depicts the relative explained variances plotted against the PC index for the LJ and WR models. In both models, λn decreases slowly with n for the distance-based feature, but the first component is dominant for the affinity-based feature. The affinity-based feature, therefore, provides a better reduction in dimensionality.

FIG. 3.

Variation of the relative explained variance λi with the index of the principal component for the (a) LJ binary mixture and (b) WR model. Twenty components are used, so the abscissa runs from 1 to 20.


We quantify the phase transition using PCA-derived order parameters (OPs). These OPs are calculated using only the first two PC scores, i.e., only the first two columns of the final matrix P, which contain the first two principal components, P1 and P2, for each state point (temperature or density). One OP is the ensemble average of P1, denoted ⟨P1⟩, which we normalize so that it ranges between 0 and 1. The second OP is related to the variance of the principal components: if σ1² and σ2² are the ensemble averages of the variances of P1 and P2, respectively, this OP is defined as σ1²/[(σ1² + σ2²)/2].
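
A minimal sketch of these two order parameters, assuming the PC scores have been collected per state point; the min-max normalization of ⟨P1⟩ and the absolute value are our assumptions about how the 0-to-1 range is enforced:

import numpy as np

def pca_order_parameters(P_by_state):
    # P_by_state: list of (NT x 2) PC-score arrays, one per state point.
    m = np.array([abs(p[:, 0].mean()) for p in P_by_state])
    m = (m - m.min()) / (m.max() - m.min())   # normalize <P1> to [0, 1]
    op2 = []
    for p in P_by_state:
        s1, s2 = p[:, 0].var(), p[:, 1].var()
        op2.append(s1 / ((s1 + s2) / 2.0))    # variance-based OP
    return m, np.array(op2)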

The theoretical framework of t-SNE is based on minimizing the divergence between two probability distributions, P and Q, defined in the high- and low-dimensional spaces, respectively. The high-dimensional input dataset is XS = [x1, x2, …, xNT]; for the input, we use the PCA-transformed data. The first step is to compute the conditional probability of pairwise similarity between the input data points based on a Gaussian distribution,

$$P_{j|i} = \frac{f(\mathbf{x}_i, \mathbf{x}_j)}{\sum_{k \neq i} f(\mathbf{x}_i, \mathbf{x}_k)} \tag{12}$$

where

$$f(\mathbf{x}_i, \mathbf{x}_j) = \exp\left(-\frac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{2\sigma_i^2}\right) \tag{13}$$

and

$$P_{i|i} = 0 \tag{14}$$

Here, σi is the variance of the Gaussian centered on each high-dimensional data point xi. This parameter is determined automatically by the algorithm for a given value of the user-supplied perplexity, which is essentially the effective number of neighbors around xi, given as

$$\mathrm{Perp}(P_i) = 2^{H(P_i)} \tag{15}$$

where H(Pi) = −Σj Pj|i log2 Pj|i is the Shannon entropy. Since the conditional probabilities are not symmetric (Pj|i ≠ Pi|j), t-SNE uses a symmetrized joint probability Pij, defined as the mean of the two conditional probabilities,

$$P_{ij} = \frac{P_{j|i} + P_{i|j}}{2N} \tag{16}$$

where N is the size of the input dataset (here, N = NT).

The second probability distribution (Q) measures the similarity between the mapped data points in the low-dimensional space. The low-dimensional mapping is represented as Y = [y1, y2, …, yNT], and the embedding similarity Qij between two low-dimensional data points yi and yj is calculated using a Student-t kernel, given as

$$Q_{ij} = \frac{q(\mathbf{y}_i, \mathbf{y}_j)}{\sum_{k \neq l} q(\mathbf{y}_k, \mathbf{y}_l)} \tag{17}$$

where

$$q(\mathbf{y}_i, \mathbf{y}_j) = \left(1 + \|\mathbf{y}_i - \mathbf{y}_j\|^2\right)^{-1} \tag{18}$$

and

$$Q_{ii} = 0 \tag{19}$$

The t-SNE approach minimizes the Kullback–Leibler divergence between P and Q, defined through the cost function C, given as

$$C = \mathrm{KL}(P \,\|\, Q) = \sum_{i \neq j} P_{ij} \ln\frac{P_{ij}}{Q_{ij}} \tag{20}$$

The cost function is minimized by gradient descent, which brings Qij progressively closer to Pij,

$$\frac{\partial C}{\partial \mathbf{y}_i} = 4 \sum_{j} (P_{ij} - Q_{ij})(\mathbf{y}_i - \mathbf{y}_j)\left(1 + \|\mathbf{y}_i - \mathbf{y}_j\|^2\right)^{-1} \tag{21}$$

Iterating this gradient allows t-SNE to find a low-dimensional projection yi of the corresponding input xi. Since we project the data onto a two-dimensional space, the final low-dimensional matrix has dimension NT × 2, and its columns give the first and second components of the projections. These are labeled S1 and S2 in the scatter plots. For t-SNE, the order parameters are defined in the same way as for the PCA and are denoted ⟨S1⟩ and σ1².

All the PCA and t-SNE calculations were performed using the libraries available in scikit-learn.27
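
A minimal sketch of the two-step embedding with scikit-learn; the perplexity value is an assumption (it is not specified above), and the random matrix is only a stand-in for the feature matrix:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X = np.random.rand(1000, 20)            # stand-in feature matrix
X_pca = PCA(n_components=20).fit_transform(X - X.mean(axis=0))
S = TSNE(n_components=2, perplexity=30.0, init="pca").fit_transform(X_pca)
S1, S2 = S[:, 0], S[:, 1]               # low-dimensional projections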

Scatter plots of the PCA and t-SNE projections as a function of state point show that the affinity-based feature vector distinguishes between the phase-separated and mixed states more clearly than the distance-based feature for both the LJ mixture and the WR model. Interestingly, there is no significant difference in the ability of the PCA and t-SNE to distinguish the phases. These conclusions are evident in Figs. 4 and 5, which depict the scatter plots for the two models with the distance-based (Fig. 4) and affinity-based (Fig. 5) feature vectors, respectively. In the figures, the colors represent the state point, going from mixed (red; high temperature in the LJ mixture and low density in the WR model) to phase separated (blue; low temperature in the LJ mixture and high density in the WR model).

FIG. 4.

PCA and t-SNE scatter plots for the LJ binary mixture and WR model using the distance-based feature vector: (a) PCA for the LJ mixture, (b) t-SNE for the LJ mixture, (c) PCA for the WR model, and (d) t-SNE for the WR model.

FIG. 5.

PCA and t-SNE scatter plots for the LJ binary mixture and WR model using the affinity-based feature vector: (a) PCA for the LJ mixture, (b) t-SNE for the LJ mixture, (c) PCA for the WR model, and (d) t-SNE for the WR model.


A comparison of the PCA with the t-SNE (compare left to right panels in all figures) shows that there is no significant distinction between the ability of the PCA and t-SNE to distinguish mixed and phase separated states, i.e., the degree of clustering of data points is similar. With the distance-based feature, both methods show some clustering in the case of the LJ mixture, but are unable to display a meaningful separation in the case of the WR model. With the affinity-based feature, both methods show a clear clustering, with the mixed phase (red) distinct from the phase separated phase (blue). This suggests that the affinity-based feature vector is a more promising route to extracting the phase behavior of these models.

Often, higher-order components in the PCA are required to distinguish different structures. This is particularly relevant here because, with the distance-based feature, the explained variances show more than one dominant component (see Fig. 3). The higher components do not provide significant additional information in this case, however. Figure 6, which is a three-dimensional plot of the first three PCA components (P1, P2, and P3), does not show a clear distinction between the mixed and separated states.

FIG. 6.

Three-dimensional PCA scatter plots for the (a) LJ binary mixture and (b) WR model using the distance-based feature vector. P1, P2, and P3 are the first three components of the PCA.


The order parameters obtained with the affinity-based feature vectors provide an accurate estimate of the critical point of both models. The order parameters with the PCA are depicted in Fig. 7, and those with the t-SNE are in the supplementary material (Fig. S1). The order parameter ⟨P1⟩ is zero in the mixed phase and increases rapidly in the two-phase region, and σ1² shows a peak in the neighborhood of what could be the critical point. Interestingly, the behavior of σ1² is quite similar to that of the constant volume specific heat (Fig. S2).

FIG. 7.

Order parameters with the PCA method: (a) ⟨P1⟩ for the LJ mixture, (b) σ1² for the LJ mixture, (c) ⟨P1⟩ for the WR model, and (d) σ1² for the WR model. The colors have the same meaning as in Fig. 5.


For the smaller systems, the PCA is not able to distinguish between the mixed and phase-separated states for either model, but a clear transition is seen for larger systems. The order parameter ⟨P1⟩ often shows a gradual, rather than rapid, increase to non-zero values, and we therefore identify the peak in σ1² with the phase transition. For the systems shown in Figs. 7(b) and 7(d), which are the largest system sizes in each case, we estimate a critical temperature of 1.425 for the LJ mixture, which compares well with the literature value of 1.423 ± 0.0005,28 and a transition density of 0.75 for the WR model, which compares well with the literature value of 0.762.29 Results for other system sizes are shown in Figs. S3–S8 of the supplementary material.

We estimate the infinite-system transition point using finite size scaling. We locate the transition temperatures and densities from the position of the peak in σ1². Uncertainties in the temperature or density are measurement uncertainties, i.e., the simulation intervals of temperature (LJ) or density (WR). The machine learning transition temperatures do not show the usual scaling expected from finite size simulations in the Ising universality class.30 Usually, Tc(L) ∼ Tc,∞ + AL^(−1/ν), where Tc,∞ is the critical temperature of the infinite system, ν = 0.629 is the Ising finite size scaling exponent, and A is a positive constant. This implies that Tc(L) > Tc,∞ for finite values of L because, in a finite system, the critical point is reached when the length scale of fluctuations exceeds the box size, which happens at a higher temperature for smaller systems.

We find that, for the WR model, the transition density is essentially independent of L (within uncertainties), and for the LJ mixture, Tc increases with increasing L. The variation of the transition point with L^(−1/ν) for the LJ mixture and the WR model is shown in Figs. 8(a) and 8(b), respectively. From a fit to Tc(L) = Tc,∞ + AL^(−1/ν) (and the analogous form for the density), we extract Tc,∞ = 1.43 ± 0.01 for the LJ mixture and ρc,∞ = 0.76 ± 0.01 for the WR mixture. These estimates are in good agreement with the critical points obtained using conventional simulations.
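
A minimal sketch of this fit using SciPy; the Tc(L) values below are illustrative placeholders, not the simulation results:

import numpy as np
from scipy.optimize import curve_fit

NU = 0.629                              # 3D Ising correlation-length exponent

def tc_of_L(L, tc_inf, A):
    # Finite size scaling form Tc(L) = Tc_inf + A * L**(-1/nu).
    return tc_inf + A * L ** (-1.0 / NU)

L_vals = np.array([10.0, 12.0, 14.0, 16.0, 18.0, 20.0, 24.0, 30.0])
tc_vals = np.array([1.38, 1.39, 1.40, 1.41, 1.41, 1.42, 1.42, 1.43])  # placeholders

popt, pcov = curve_fit(tc_of_L, L_vals, tc_vals)
tc_inf, A = popt
tc_err = np.sqrt(pcov[0, 0])            # 1-sigma uncertainty on Tc_inf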

FIG. 8.

Finite size analysis for determination of the infinite system critical point for the (a) LJ model and (b) WR model, where L is the linear box dimension.


We use unsupervised machine learning methods to obtain the critical point of three-dimensional off-lattice systems. The most important result is that the construction of the input feature vector plays a crucial role in the ability of the ML method to predict the critical point. We use two feature vectors, one based on the distances between particles ("distance-based") and one based on the local concentration ("affinity-based"), and study two systems, a symmetric binary Lennard-Jones (LJ) mixture and the non-additive hard-sphere Widom–Rowlinson (WR) mixture. We find that the distance-based feature vector is not able to distinguish clearly between mixed and two-phase states for the WR mixture, although it does show a distinction for the LJ mixture. The affinity-based feature vector, however, shows a distinct separation of one-phase and two-phase configurations for both models. We use two ML methods, PCA and t-SNE, and find that the choice of the ML algorithm is not important; the key choice is the construction of the feature vector.

We estimate the critical point from the variation of the peak in the variance of the first two principal components with temperature and density, and we extrapolate to the infinite system using the critical scaling of the Ising universality class. The critical point obtained in this fashion is in good agreement with previous estimates from conventional simulations, and the method requires only standard simulations, with no particle insertions or deletions.

Supervised and unsupervised ML methods have been widely used for simple lattice systems and some off-lattice systems. In lattice systems, the choice of the feature vector is clear, namely, the values of the spins. We find that in off-lattice systems, a judicious choice of the input features can play a crucial role in the predictions, and it remains to be seen whether the best choice of feature is system dependent. We hope this paves the way for the study of other complex fluids using unsupervised methods.

An interesting direction is to connect the machine learning methods to statistical mechanics and liquid state theory. For the LJ mixture, we find empirically that the order parameter σ12 shows behavior quite similar to the constant volume heat capacity. Although this suggests that there might be a link, the former relies only on the locally averaged concentrations, whereas the latter is a measure of energy fluctuations. Note that ML relies on how configurations at a given state are different from those at other states, and it is, therefore, important to have a range of temperatures (in the LJ case) that span the critical point. In statistical mechanics, on the other hand, one only requires a single state to determine all the properties. In this regard, it is important to note that identifying the peak in σ12 with the critical point is merely an ansatz, and one that must eventually be verified to represent the critical point from a more fundamental standpoint.

See the supplementary material for analysis of the phase behavior using the distance-based feature vector and order parameters with the t-SNE method.

This work was supported by the National Science Foundation through Grant No. CHE-1856595. All simulations presented here were performed using computational resources provided by UW-Madison Department of Chemistry HPC Cluster under NSF Grant No. CHE-0840494.

The authors have no conflicts to disclose.

Inhyuk Jang: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Writing – original draft (equal). Supreet Kaur: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Writing – original draft (equal). Arun Yethiraj: Conceptualization (equal); Funding acquisition (lead); Investigation (equal); Project administration (lead); Resources (lead); Supervision (lead); Writing – review & editing (lead).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. E. Bedolla, L. C. Padierna, and R. Castañeda-Priego, "Machine learning for condensed matter physics," J. Phys.: Condens. Matter 33, 053001 (2020).
2. L. Wang, "Discovering phase transitions with unsupervised learning," Phys. Rev. B 94, 195105 (2016).
3. R. B. Jadrich, B. A. Lindquist, and T. M. Truskett, "Unsupervised machine learning for detection of phase transitions in off-lattice systems. I. Foundations," J. Chem. Phys. 149, 194109 (2018).
4. R. B. Jadrich, B. A. Lindquist, W. D. Piñeros, D. Banerjee, and T. M. Truskett, "Unsupervised machine learning for detection of phase transitions in off-lattice systems. II. Applications," J. Chem. Phys. 149, 194110 (2018).
5. C. Giannetti, B. Lucini, and D. Vadacchino, "Machine learning as a universal tool for quantitative investigations of phase transitions," Nucl. Phys. B 944, 114639 (2019).
6. S. Tetef, N. Govind, and G. T. Seidler, "Unsupervised machine learning for unbiased chemical classification in X-ray absorption spectroscopy and X-ray emission spectroscopy," Phys. Chem. Chem. Phys. 23, 23586–23601 (2021).
7. H. Jung and A. Yethiraj, "Phase behavior of continuous-space systems: A supervised machine learning approach," J. Chem. Phys. 153, 064904 (2020).
8. C. Alexandrou, A. Athenodorou, C. Chrysostomou, and S. Paul, "The critical temperature of the 2D-Ising model through deep learning autoencoders," Eur. Phys. J. B 93, 226 (2020).
9. D. Bhattacharya and T. K. Patra, "dPOLY: Deep learning of polymer phases and phase transition," Macromolecules 54, 3065–3074 (2021).
10. D. McDermott, C. J. O. Reichhardt, and C. Reichhardt, "Detecting depinning and nonequilibrium transitions with unsupervised machine learning," Phys. Rev. E 101, 042101 (2020).
11. C. A. López, V. V. Vesselinov, S. Gnanakaran, and B. S. Alexandrov, "Unsupervised machine learning for analysis of phase separation in ternary lipid mixture," J. Chem. Theory Comput. 15, 6343–6357 (2019).
12. G. Torlai and R. G. Melko, "Learning thermodynamics with Boltzmann machines," Phys. Rev. B 94, 165134 (2016).
13. W. Hu, R. R. P. Singh, and R. T. Scalettar, "Discovering phases, phase transitions, and crossovers through unsupervised machine learning: A critical examination," Phys. Rev. E 95, 062122 (2017).
14. A. Morningstar and R. G. Melko, "Deep learning the Ising model near criticality," J. Mach. Learn. Res. 18, 5975–5991 (2017).
15. N. Walker, K.-M. Tam, and M. Jarrell, "Deep learning on the 2-dimensional Ising model to extract the crossover region with a variational autoencoder," Sci. Rep. 10, 13047 (2020).
16. S. J. Wetzel, "Unsupervised learning of phase transitions: From principal component analysis to variational autoencoders," Phys. Rev. E 96, 022140 (2017).
17. S. Acevedo, M. Arlego, and C. A. Lamas, "Phase diagram study of a two-dimensional frustrated antiferromagnet via unsupervised machine learning," Phys. Rev. B 103, 134422 (2021).
18. I. Corte, S. Acevedo, M. Arlego, and C. A. Lamas, "Exploring neural network training strategies to determine phase transitions in frustrated magnetic models," Comput. Mater. Sci. 198, 110702 (2021).
19. H. Jung and A. Yethiraj, "Phase behavior of poly(ethylene oxide) in room temperature ionic liquids: A molecular simulation and deep neural network study," J. Phys. Chem. B 124, 9230–9238 (2020).
20. J. Behler, "Atom-centered symmetry functions for constructing high-dimensional neural network potentials," J. Chem. Phys. 134, 074106 (2011).
21. B. Widom and J. S. Rowlinson, "New model for the study of liquid–vapor phase transitions," J. Chem. Phys. 52, 1670–1684 (1970).
22. A. P. Thompson, H. M. Aktulga, R. Berger, D. S. Bolintineanu, W. M. Brown, P. S. Crozier, P. J. in 't Veld, A. Kohlmeyer, S. G. Moore, T. D. Nguyen, R. Shan, M. J. Stevens, J. Tranchida, C. Trott, and S. J. Plimpton, "LAMMPS - A flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales," Comput. Phys. Commun. 271, 108171 (2022).
23. S. Nosé, "A unified formulation of the constant temperature molecular dynamics methods," J. Chem. Phys. 81, 511–519 (1984).
24. W. G. Hoover, "Canonical dynamics: Equilibrium phase-space distributions," Phys. Rev. A 31, 1695–1697 (1985).
25. H. Abdi and L. J. Williams, "Principal component analysis," WIREs Comput. Stat. 2, 433–459 (2010).
26. L. van der Maaten and G. Hinton, "Visualizing data using t-SNE," J. Mach. Learn. Res. 9, 2579–2605 (2008).
27. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and É. Duchesnay, "Scikit-learn: Machine learning in Python," J. Mach. Learn. Res. 12, 2825–2830 (2011).
28. K. Binder, "Computer simulations of critical phenomena and phase behaviour of fluids," Mol. Phys. 108, 1797–1815 (2010).
29. C. Y. Shew and A. Yethiraj, "Phase behavior of the Widom–Rowlinson mixture," J. Chem. Phys. 104, 7665–7670 (1996).
30. K. Binder, "Finite size scaling analysis of Ising model block distribution functions," Z. Phys. B 43, 119–140 (1981).
