We present an extension of the locally enhanced sampling method. A restraint potential is introduced to drive the many-replica system to the canonical ensemble corresponding to the physical, single-replica system. Convergence properties are demonstrated using a model rugged two-dimensional potential, for which sampling by conventional equilibrium molecular dynamics is inefficient. Restrained locally enhanced sampling (RLES) is found to explore the space of configurations with an efficiency comparable to that of temperature replica exchange. To demonstrate the potential of RLES for realistic applications, the method is used to fold the 12-residue tryptophan zipper miniprotein in explicit solvent. The RLES algorithm can be incorporated into existing LES implementations with minor code modifications.

Complex systems of present-day interest, especially those involving biological macromolecules, are often described by a rugged potential energy surface, on which configurational transitions between local minima are rare events. Because such transitions underlie many processes of biological significance (e.g., protein conformational change in response to the binding of a ligand) and are relevant for finding the most stable configurations of a molecular system (e.g., structure refinement), algorithms to enhance the sampling of such rare events have been a focus of attention (see, e.g., Ref. 1 for an introduction). One particular method that is the subject of this Communication is Locally Enhanced Sampling (LES)2 (also known as multi-copy simultaneous search3), in which sampling is enhanced for a subset of the system. Several identical replicas of the subset are simulated concurrently, each interacting with the same (single) copy of the remaining atoms. LES has been used in computer-aided drug design,4,5 structure refinement,6,7 and simulation of ligand-release pathways.8 Despite its widespread use, LES has the drawback that the replicated system is not thermodynamically equivalent to the original (i.e., unreplicated) system. Although this does not affect the search for the global energy minimum,6 LES is not guaranteed to converge to the free energy minima of the original system. In addition, LES cannot by itself be used to estimate free energy differences. In the present Communication, we show that, using an additional restraint potential, (restrained) LES simulations can be used to locate the minima of the free energy and to estimate the statistical importance of different configurations in the original thermodynamic ensemble.

Restrained locally enhanced sampling (RLES). A classical system composed of N atoms is partitioned arbitrarily into NE ≥ 0 environment atoms and NL = N − NE ligand atoms (we use the term ligand atoms to denote any portion of the system for which enhanced sampling is desired). The Hamiltonian of the system is

(1)

in which XE and XL are the coordinates of the environment and the ligand atoms, respectively, X ≡ (XE; XL) and P ≡ (PE; PL) denote the full coordinate and conjugate momentum vectors, and ME and ML are the corresponding diagonal mass matrices. The decomposition in Eq. (1) assumes that the potential energy U is separable into interactions involving only the environment atoms (UE), those involving only the ligand atoms (UL), and an interaction energy (UI). It is valid for most molecular mechanics force fields, such as CHARMM.9 Given a positive integer n and a force constant k, we define a replicated restrained Hamiltonian according to

(2)

In Eq. (2), the coordinates XLi correspond to the ith ligand replica. The replicas have the mass matrices ML/n and interact with the environment via the scaled potential energies UI(XE,XLi)/n, but do not interact with each other. The last term in Eq. (2) enforces distance restraints between the n(n − 1)/2 distinct replica pairs. A form that is more efficient in a direct computation is

(3)

where X̄L ≡ (1/n)∑_{j=1}^{n} XLj is the average ligand structure. In the presence of a Langevin thermostat with the inverse temperature β and friction γ, the above Hamiltonian system samples configurations from the corresponding canonical ensemble according to the following equations:

(4)
(5)

where ΞE(t) ∈ ℝ^(3NE) and ΞLi(t) ∈ ℝ^(3NL) are identically distributed white-noise stochastic processes with unit variances and zero means. Equation (5) shows that the temperature of each replicated ligand is scaled by n relative to the temperature of the environment, as noted previously.10 The increase in the speed of sampling different ligand configurations arises from the higher ligand temperature, as well as from the n > 1 ligand replicas. The essential difference between the original LES method2,6 and the present formulation is the additional restraint force in Eq. (5), which allows a transformation of the replicated ensemble into the original system [Eq. (1)] via the force constant k, as follows.
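
As an aside on implementation, the pairwise restraint in Eq. (2) and the mean-based form in Eq. (3) are related by a standard identity (a sketch, assuming the restraint is harmonic in the inter-replica distances, as described above):

∑_{i<j} |XLi − XLj|² = n ∑_{i=1}^{n} |XLi − X̄L|²,  with X̄L = (1/n)∑_{j=1}^{n} XLj,

so that the restraint forces can be evaluated from the deviations of the replicas from their average structure in O(n) rather than O(n²) operations.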

Averaging Eq. (5) over the n replicas, we have

(6)

where Ξ̄L(t) ≡ (∑_{i=1}^{n} ΞLi(t))/n is a white-noise process distributed identically to ΞLi(t), except with variance 1/n. Normalizing Ξ̄L(t) to unit variance, i.e., letting H(t) ≡ √n Ξ̄L(t), Eq. (6) can be written as

(7)

In view of Eq. (3), letting k → ∞ implies that XLi → X̄L. Assuming that the force field is continuously differentiable, U(X, XLi) → U(X, X̄L) and ∇_{XLi}U(X, XLi) → ∇_{X̄L}U(X, X̄L) for all i. In this limit, Eqs. (6) and (7) describe the motion of a single ligand with coordinates X̄L (equal to every XLi in this limit) moving at the original temperature 1/β, as required.
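
To make the limit concrete, a sketch of the averaged equation of motion implied by Eqs. (6) and (7) as k → ∞ (using the scalings stated above, namely, replica masses ML/n and 1/n-scaled ligand potentials, and noting that the internal restraint forces cancel in the replica average) is

ML (d²X̄L/dt²) = −∇_{X̄L}[UL(X̄L) + UI(XE, X̄L)] − γML (dX̄L/dt) + (2γML/β)^{1/2} H(t),

which is the Langevin equation of a single ligand evolving at the physical temperature 1/β.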

It can be shown directly that Eqs. (4) and (5) generate the correct configurational averages in the limit of high restraint forces. The expectation value of a quantity a(X) ≡ a(XE, X̄L) computed using the Hamiltonian HR(k) is

(8)

where Zk is the corresponding partition function. Using the change of variables {XL1, XL2, …, XLn} → {X̄L, XL2, …, XLn} and absorbing the constant Jacobian factor (n) into Zk yield

(9)

Letting k → ∞ and using the fact that a sequence of normalized Gaussians with variance O(k^{-1}) converges to the δ-function in the sense of distributions,

(10)

as required.

Although Eq. (10) is formally valid only in the limit k → ∞, converged expectation values can be recovered to the desired accuracy using a sufficiently large force constant k. Using the error estimate associated with replacing the Dirac δ by a Gaussian with variance O(k^{-1}) in configurational averages11 gives ⟨a⟩_H = ⟨a⟩_{HR} + O(1/k). (A similar estimate was also used by Frenkel and Ladd.12) The convergence rate as a function of k is demonstrated below for a model problem.
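
A one-dimensional sketch of the origin of this estimate (our notation; the multidimensional case is analogous): expanding an observable a about the center x̄ of a normalized Gaussian of variance σ² gives

∫ a(x̄ + u) (2πσ²)^{-1/2} exp[−u²/(2σ²)] du = a(x̄) + (σ²/2) a″(x̄) + O(σ⁴),

so that with σ² = O(1/k) the leading correction to the δ-function (infinite-k) result is O(1/k).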

A model rugged potential. To illustrate RLES with a simple application, we compute the probabilities (i.e., normalized partition functions) associated with regions on a two-dimensional (2D) model potential. The computation of probabilities is a rigorous test for the accuracy and completeness of configurational sampling. The results are compared to the exact values computed using numerical integration and to those obtained with the standard temperature replica exchange (TREX),13 one of the most popular methods for enhanced sampling. In the present example, the 2D particle is considered a ligand that is replicated n times to achieve enhanced sampling, with no explicit environment particles. The 2D potential can be regarded as the interaction energy between the ligand particle and the environment, i.e., U = UI(XL). The energy function chosen for this example is U(x, y) = 10(x^10 + y^10) + 5 sin(2πx)cos(2πy). A contour plot of U(x, y) is shown in Fig. 1. The monomial terms in the potential confine the regions of high probability to the square S ≡ [−1, 1] × [−1, 1]. The trigonometric term results in multiple minima separated by barriers to mimic a rugged energy landscape commonly encountered in complex systems. The six highest-probability rectangular regions in S are shown in Fig. 1. The corresponding partition functions, Zi ≡ ∫_{Si} exp(−βU), i = 1, …, 6, can be computed exactly using numerical integration. We choose a large inverse temperature, β = 10 (dimensionless units), to ensure that transitions between the minima in Fig. 1 are rare events. Using the 2D trapezoidal rule with 100 points in each direction, the probabilities Pi ≡ Zi/∑j Zj associated with the regions S1, …, S6 computed to five decimal places of precision are, respectively, 0.002 74, 0.318 64, 0.354 20, 0.003 05, 0.002 74, and 0.318 64 (increasing the number of points did not change these values).
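
For readers who wish to reproduce the reference values, the following is a minimal Python sketch of the numerical integration described above; the rectangle boundaries are placeholders to be read off Fig. 1, since they are not listed in the text.

```python
import numpy as np

beta = 10.0  # inverse temperature (dimensionless units)

def U(x, y):
    """Model rugged 2D potential from the text."""
    return 10.0 * (x**10 + y**10) + 5.0 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

def partition_function(x_lo, x_hi, y_lo, y_hi, npts=100):
    """Z = integral of exp(-beta*U) over a rectangle, 2D trapezoidal rule."""
    x = np.linspace(x_lo, x_hi, npts)
    y = np.linspace(y_lo, y_hi, npts)
    X, Y = np.meshgrid(x, y, indexing="ij")
    boltz = np.exp(-beta * U(X, Y))
    return np.trapz(np.trapz(boltz, y, axis=1), x)

# Placeholder rectangles (x_lo, x_hi, y_lo, y_hi); the actual S1-S6 are defined in Fig. 1.
rectangles = [
    (-1.0, -0.5, -0.25, 0.25),   # placeholder for S1; replace with boundaries from Fig. 1
    # ... remaining rectangles S2-S6 from Fig. 1
]
Z = np.array([partition_function(*r) for r in rectangles])
P = Z / Z.sum()   # probabilities normalized over the listed regions
print(P)
```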

FIG. 1.

Two-dimensional model potential energy surface U(x, y) = 10(x^10 + y^10) + 5 sin(2πx)cos(2πy). Rectangles 1–6 delimited by vertical and horizontal lines correspond to the regions of highest probability, with the corresponding canonical probabilities Pi ≡ Zi/∑j Zj equal to 0.002 74, 0.318 64, 0.354 20, 0.003 05, 0.002 74, and 0.318 64 (computed using β = 10).


The probabilities Pi can also be computed by dynamic simulations in the canonical ensemble. (This is often the method of choice for systems with many degrees of freedom, such as proteins, for which uniform sampling with direct integration over the configurational space is not feasible.) For this purpose, we assign a mass of 1 (all units are taken to be dimensionless) to the 2D particle and use Langevin dynamics as the thermostat, with β = 10 and the friction constant γ = 1. The resulting dynamical system is simulated using the BBK integrator14 with the time step dt = 0.01. A combination of four linear congruential generators was used to generate random numbers, which produces a sequence that has a long period (2^121) and passes many tests for randomness.15 At the chosen inverse temperature β = 10, equilibrium simulations initiated from the coordinates (x, y) = (−0.25, 0) in S3 (shown in Fig. 1) and performed for 10^9 steps did not sample any region other than S3, indicating the need for enhanced sampling.
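
As an illustration of the sampling engine used for this example, the following is a minimal Python sketch of one common BBK-type discretization of Langevin dynamics for the 2D particle (our implementation, not the authors' code; the random-force variance 2mγkBT/Δt is the standard choice for this family of schemes).

```python
import numpy as np

rng = np.random.default_rng(0)  # any good generator; the paper uses a combined LCG (Ref. 15)
beta, gamma, mass, dt = 10.0, 1.0, 1.0, 0.01
kT = 1.0 / beta

def force(pos):
    """Negative gradient of U(x, y) = 10(x^10 + y^10) + 5 sin(2*pi*x) cos(2*pi*y)."""
    x, y = pos
    fx = -(100.0 * x**9 + 10.0 * np.pi * np.cos(2 * np.pi * x) * np.cos(2 * np.pi * y))
    fy = -(100.0 * y**9 - 10.0 * np.pi * np.sin(2 * np.pi * x) * np.sin(2 * np.pi * y))
    return np.array([fx, fy])

def bbk_step(pos, vel):
    """One BBK step: velocity half-kick, position drift, velocity half-kick."""
    sigma = np.sqrt(2.0 * mass * gamma * kT / dt)  # random-force standard deviation
    r1 = sigma * rng.standard_normal(2)
    vel = vel * (1.0 - 0.5 * gamma * dt) + 0.5 * dt * (force(pos) + r1) / mass
    pos = pos + dt * vel
    r2 = sigma * rng.standard_normal(2)
    vel = (vel + 0.5 * dt * (force(pos) + r2) / mass) / (1.0 + 0.5 * gamma * dt)
    return pos, vel

pos, vel = np.array([-0.25, 0.0]), np.zeros(2)
for step in range(10000):
    pos, vel = bbk_step(pos, vel)
```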

To enhance configurational sampling using RLES, we replicate the particle ten times (n = 10) and introduce an ensemble of eleven (replicated) systems. To each system in the ensemble, we assign an RLES force constant taken consecutively from the set {0, 0.5, 1, 2, 4, 8, 16, 32, 64, 128, 256}. The first system (k = 0) corresponds to standard LES,2 in which the temperature is effectively scaled by a factor of 10. The remaining systems tend to the original single-particle system as k increases, as demonstrated below. To enhance the sampling of the high-k systems, the ensemble replicas are simulated concurrently, and the particle coordinates are exchanged according to the Metropolis criterion of Hamiltonian replica exchange (HREM).16 Exchanges were attempted every 100 steps only between systems with neighboring k values. The average acceptance rate was about 34%, with a minimum of 18% for the two lowest-k systems.
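
The exchange step between systems with neighboring force constants can be sketched as follows (a minimal illustration under the assumption that the two systems share the same temperature, so that only the restraint energy enters the Metropolis criterion; the k/2 prefactor in restraint_energy is an assumed convention for Eq. (3)).

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 10.0

def restraint_energy(k, replicas):
    """RLES restraint energy of one system: k/2 times the sum of squared
    deviations of the replicas from their average (assumed Eq. (3) convention)."""
    mean = replicas.mean(axis=0)
    return 0.5 * k * np.sum((replicas - mean) ** 2)

def attempt_swap(k_a, X_a, k_b, X_b):
    """Metropolis swap of replica coordinates between two RLES systems;
    the physical potential energy cancels because it is independent of k."""
    dE = (restraint_energy(k_a, X_b) + restraint_energy(k_b, X_a)
          - restraint_energy(k_a, X_a) - restraint_energy(k_b, X_b))
    if rng.random() < np.exp(-beta * dE):
        return X_b.copy(), X_a.copy(), True   # accepted: exchange configurations
    return X_a, X_b, False                    # rejected: keep configurations
```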

Enhanced sampling was also performed using TREX13 as follows. An ensemble of eleven single-replica systems was defined with the temperatures ranging from T = 0.1 (β = 10) to T = 1.1 (β ≃ 0.91) in increments of 0.1 (we have set Boltzmann’s constant to unity). To facilitate a fair comparison with RLES, the remaining simulation parameters were the same as in the RLES case. The average replica exchange acceptance rate was 70%, with a minimum of 60% for the lowest-temperature replicas. Simulations were initiated from (x, y) = (−0.25, 0) for all particles and run for 10^9 steps. Coordinates were saved every 10^3 steps, and the first half of the sample was discarded as equilibration. The rates of convergence to the exact probabilities are shown in Fig. 2. Convergence is apparent for the high-k RLES systems (k ≥ 64). It is interesting that even moderately restrained systems (e.g., k = 4) produce an rms error of ≃1%, implying that the restraint strengths can be tuned to achieve a compromise between accuracy and efficiency (an ensemble of only five replicas is needed to reach k = 4). In Fig. 3, we show the convergence of the quantity ϵ ≡ ∑_{i=1}^{n} |XLi − X̄L|²/n, which measures the dispersion of the RLES replicas around the average position X̄L. The behavior of ϵ as a function of k is in agreement with the asymptotic estimate of O(1/k).

FIG. 2.

Convergence of enhanced sampling simulations to the exact probabilities Pi as a function of simulation time. Each curve represents an average over four identical simulations initialized with different random numbers. See the text for simulation details.

FIG. 3.

Convergence of RLES simulations as a function of the RLES force constant k. See the text for simulation details.


From Fig. 2, the performance of RLES with HREM is slightly lower than that of TREX. Although the errors for TREX and RLES (with k = 256) follow the same power law (∼t^{-1/2}), the error remains about twice as high for RLES as for TREX. This appears to be caused by the lower replica exchange acceptance rate for RLES (≃20% minimum) than for TREX (≃60% minimum). We note that for this test case, the computational cost of RLES is approximately n = 10 times that of TREX. This occurs because the term UE(XE) in Eq. (1) is zero and all forces arise from the replicated potential [U = UI(XL)]; the computational overhead of simulating the additional replicas is therefore significant. In many applications of LES,7,8 the contribution to the energy arising purely from the environment is large, so that the relative overhead of RLES is smaller.

Folding of a miniprotein. To illustrate an application of RLES to a biomolecular system, we folded the 12-residue tryptophan zipper (TrpZip) miniprotein with the sequence SWTWENGKWTWK17 using molecular dynamics (MD) in explicit solvent. To accomplish this efficiently on a single workstation, instead of HREM, we used the method of adaptive tempering18 to adjust the force constant k on the fly; we call this variant adaptively restrained LES (ARLES).

First, we choose a desired, as yet unspecified, probability density p(k) from which values of k will be sampled and define a generalized (replicated) ensemble with coordinates (X, k) and density

(11)

where ρ²(X) refers to the sum of squares in the RLES restraint [Eq. (3)]. Defining the energy

(12)

we model the evolution of k by the overdamped Langevin equation with reflecting boundary conditions and stationary density proportional to exp[−E(X, k)]:19 

(13)

where D is an arbitrary diffusion function, η is a standard Gaussian white-noise process, and the overdot and prime denote differentiation with respect to time and k, respectively. Inserting Eqs. (11) and (12) into Eq. (13) gives

(14)

An apparent problem with Eq. (14) is that Zk is unknown. However, only the derivative of its logarithm is required, which can be obtained from Eq. (8) with a = 1 as

(15)

The expectation ⟨ρ²⟩_{HR(k)} is updated on the fly over a long MD trajectory, as done in adaptive tempering,18,20,21 so that in the long-time limit (assuming ergodicity), Eq. (14) samples k values according to the distribution p(X, k). After the user chooses D and p(k), as discussed below for the case of TrpZip, the evolution of k is completely specified. The numerical integration of Eq. (14), together with the averaging procedure of Eq. (15), was implemented as a plugin for the MD program OpenMM;22 the technical details are provided in the supplementary material.
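
A minimal Python sketch of this on-the-fly update, as we read Eqs. (14) and (15) from the surrounding text, is given below; the β k ρ²/2 prefactor convention, the 300 K conditions, the exponential running average, and the C1, C2 values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0 / (0.0019872 * 300.0)   # 1/(k_B T) in (kcal/mol)^-1 at an assumed 300 K
k_min, k_max = 0.1, 10.0           # allowed range of k, (kcal/mol)/A^2 (from the text)
dt = 0.001                         # integration step for the k dynamics (arbitrary here)

def D(k):            # diffusion function D(k) = C1 (1 + C2 k); C1, C2 are placeholders
    return 0.01 * (1.0 + 1.0 * k)

def dD_dk(k):
    return 0.01 * 1.0

def dlnp_dk(k):      # p(k) proportional to 1/k  =>  d ln p / dk = -1/k
    return -1.0 / k

def update_k(k, rho2_inst, rho2_avg):
    """One Euler-Maruyama step of the overdamped dynamics for k [Eq. (14)],
    with the unknown d ln Z_k/dk replaced by the running average via Eq. (15)."""
    drift = D(k) * (0.5 * beta * (rho2_avg - rho2_inst) + dlnp_dk(k)) + dD_dk(k)
    k_new = k + drift * dt + np.sqrt(2.0 * D(k) * dt) * rng.standard_normal()
    # reflecting boundary conditions on [k_min, k_max]
    if k_new < k_min:
        k_new = 2.0 * k_min - k_new
    if k_new > k_max:
        k_new = 2.0 * k_max - k_new
    return k_new

def update_average(rho2_avg, rho2_inst, alpha=1.0e-4):
    """Exponential running average of rho^2 at the current k (one simple choice)."""
    return (1.0 - alpha) * rho2_avg + alpha * rho2_inst
```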

An unfolded, extended conformation of TrpZip was prepared with the program CHARMM,23 using default internal coordinates for all residues. Four identical overlapping copies (n = 4) of TrpZip were solvated in a cubic box of side length 45 Å with explicit TIP3P water; i.e., the ligand was TrpZip, and the environment was the solvent. TrpZip masses were scaled in the structure file, bonded interactions involving TrpZip atoms were scaled in the parameter file, and nonbonded interactions between TrpZip and the solvent were scaled using the alchemy module in OpenMM.22 Interactions between TrpZip replicas were turned off using an exclusion list in the structure file. Simulations were performed using CHARMM36 parameters,9 with a time step of 1 fs, a Langevin thermostat with friction constant 0.1 ps^{-1}, constraints on bond lengths involving hydrogens, and PME electrostatics for a periodic box; van der Waals interactions were attenuated to zero in the range 7.5 Å < r < 9 Å using the standard cutoff function in OpenMM.
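
The scalings above were applied through edited CHARMM structure and parameter files; purely as an illustration, an alternative route through the OpenMM Python API might look like the following sketch (ligand_atoms and replica_pairs are hypothetical index lists; the 1/n scaling of ligand-solvent interactions is omitted, and the cross-replica exclusions are shown schematically).

```python
import openmm
from openmm import unit

n = 4  # number of TrpZip replicas

def scale_ligand_masses(system, ligand_atoms):
    """Divide the mass of every replicated ligand atom by n (M_L/n per replica)."""
    for i in ligand_atoms:
        system.setParticleMass(i, system.getParticleMass(i) / n)

def exclude_replica_interactions(system, replica_pairs):
    """Zero the nonbonded interaction between atoms belonging to different replicas."""
    nb = [f for f in system.getForces() if isinstance(f, openmm.NonbondedForce)][0]
    for a, b in replica_pairs:  # all cross-replica atom pairs (can be a large list)
        nb.addException(a, b, 0.0, 0.1 * unit.nanometer, 0.0, replace=True)
```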

To optimize the parameters D(k) and p(k), we first performed simulations in which both were constant over the allowed range of k, which we arbitrarily set to [0.1, 10] (kcal/mol)/Å². As a qualitative check that the maximum k value [10 (kcal/mol)/Å²] was sufficiently high to approximate the single-replica physical system, we performed a 100 ns RLES MD simulation starting from the folded structure, with k fixed at this value. The Cα RMSD of each of the simulation replicas from the initial structure remained below 1 Å, suggesting that this value of k is adequate for this system. However, visual inspection of the ARLES trajectories with constant D(k) and p(k) revealed that the protein became trapped in metastable conformations at high k values and showed no clear progress toward folding after 100 ns. Examples of such conformations can be seen in Fig. 4 at 40 ns and 63 ns in panel (c), which will be discussed further below. To facilitate folding, we experimented with polynomial functional forms for D(k) and p(k) and settled on the functions D(k) = C1(1 + C2k) and p(k) ∝ k^{-1}, where Ci are constants added for dimensional consistency (see the supplementary material). These forms were chosen to ensure (1) that most of the time, TrpZip is in a partially unfolded state (low k) and (2) that TrpZip traverses metastable conformations (high k) rapidly, since the diffusion function D(k) is larger at higher k. With these parameters, we performed six ARLES simulations with a maximum length of 200 ns. Simulations were checked after each 10 ns and stopped if the protein had folded (see Fig. 4). In all cases, TrpZip folded into its native state,17 with subangstrom Cα RMSD from the average NMR conformation [Fig. 4(a)]. The longest folding time was ∼115 ns [cyan trace in Fig. 4(a)]. For one simulation [black trace in Fig. 4(a)], panel (b) shows the evolution of the force constant k and the distance ρ, and panel (c) illustrates the structural progression toward the folded state. To investigate the structural stability of the intermediate states associated with Fig. 4(c), we performed 100 ns-long equilibrium MD simulations starting from the four intermediate structures shown at 10 ns, 40 ns, 63 ns, and 75 ns and from an additional structure at 27 ns. The state at 18 ns was excluded because it is disordered [see Fig. 4(c)]. The MD simulations showed different degrees of metastability, with the structures at 10 ns and 27 ns unfolding within ∼20 ns, the structures at 40 ns and 63 ns remaining trapped (neither folding nor unfolding), and the structure at 75 ns folding after ∼30 ns. The RMSD traces are shown in Fig. S1. We note that apparent metastability in ARLES simulations could be caused in part by artifacts in the evolution equation (14) for k. For example, the first and third terms on the right-hand side can be combined with Eq. (15) to give (βD/2)[⟨ρ²⟩_{HR(k)} − ρ²(X)]. Thus, k will experience an upward force when ρ²(X) < ⟨ρ²⟩_{HR(k)}, i.e., when the instantaneous mean-square deviation between the replicas is below its running average. When the ARLES simulation is started (in the present case, from an unfolded structure), the running average ⟨ρ²⟩_{HR(k)} will be inaccurate due to low sampling. As the multi-replica system diffuses over a low-energy basin, ρ²(X) will decrease because the individual replicas are attracted toward the same minima on the potential energy landscape. This creates an upward driving force on k, which will persist until the running average decreases and could create the appearance of a deep metastable state.

FIG. 4.

Folding of TrpZip. (a) Cα RMSD from the average NMR conformation; colors indicate six different trajectories; (b) evolution of the force constant k and the distance ρ for one simulation [black trace in (a)]; (c) selected structures along the folding trajectory in (b); the backbones of the four replicas are drawn in different colors, and all sidechains are in black.


To obtain an estimate of the folding fraction, we performed an additional 1 µs ARLES simulation and computed the folding fraction using the ARLES trajectory frames in which the RLES force constant k was in the range 9–10 (kcal/mol)/Å². We obtained a value of 0.82, which is lower than the value of ∼0.92 obtained in the experiments of Ref. 17. However, our value should be considered approximate because the sampling of folding transitions is limited even over the 1 µs simulation.
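
A minimal sketch of this analysis (our illustration; the assignment of folded frames via a Cα RMSD cutoff of 2 Å is an assumed criterion, not stated in the text):

```python
import numpy as np

# k_traj[i]   : RLES force constant at frame i, in (kcal/mol)/A^2
# rmsd_traj[i]: C-alpha RMSD (Angstrom) of frame i from the average NMR structure
def folding_fraction(k_traj, rmsd_traj, k_lo=9.0, k_hi=10.0, rmsd_cut=2.0):
    """Fraction of near-physical frames (k in [k_lo, k_hi]) that are folded."""
    mask = (k_traj >= k_lo) & (k_traj <= k_hi)
    if not np.any(mask):
        return float("nan")
    return float(np.mean(rmsd_traj[mask] < rmsd_cut))
```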

Our results indicate that the RLES method can be useful in biomolecular applications, such as protein folding simulations and structure optimization. However, because ARLES is sensitive to the choice of the parameters D(k) and p(k), these may need to be optimized for each system. For example, a possible approach to finding a suitable upper bound for k is to compare some equilibrium properties of the replicated ensemble to those of the physical system, increasing k until satisfactory agreement is observed. Furthermore, the method requires the scaling of interatomic interactions and the specification of nonbonded exclusion lists, which could severely decrease the speed of MD simulations for large ligands.

We note that the computationally intensive approaches used in the present examples (RLES combined with HREM, and ARLES) were chosen to demonstrate the convergence properties of RLES. A simpler implementation may be more appropriate for equilibration simulations, such as finding low-energy configurations of protein side chains, loops, or docked ligands. For example, one can start a single simulation with many ligand replicas and a low initial value of the RLES force constant. The simulation should rapidly sample a large ensemble of different ligand configurations. A subsequent gradual increase in the force constant will guide the replicated subsystem toward the lowest free-energy conformation at the given temperature. The application of RLES to such problems is left to future work.

Details of the computer implementation of ARLES and supporting Fig. S1 are provided in the online supplementary material.

The data that support the findings of this study, including MD simulation trajectories and analysis routines, are available from the corresponding author upon reasonable request.

The research was supported by Grant Nos. GM 5R01-30804 and NIAID 5R03AI111416 from the National Institutes of Health and by Grant No. 17-ERD-043 from the Lawrence Livermore National Laboratory.

2. R. Elber and M. Karplus, J. Am. Chem. Soc. 112, 9161 (1990).
3. A. Miranker and M. Karplus, Proteins 11, 29 (1991).
4. C. R. Schubert and C. M. Stultz, J. Comput. Aided Mol. Des. 23, 475 (2009).
5.
6. A. Roitberg and R. Elber, J. Chem. Phys. 95, 9277 (1991).
7. X. Cheng, G. Cui, V. Hornak, and C. Simmerling, J. Phys. Chem. B 109, 8220 (2005).
8. M. Cecchini, Y. Alexeev, and M. Karplus, Structure 18, 458 (2010).
9. R. B. Best, X. Zhu, J. Shim, P. E. M. Lopes, J. Mittal, M. Feig, and A. D. MacKerell, Jr., J. Chem. Theor. Comput. 8, 3257 (2012).
10. J. E. Straub and M. Karplus, J. Chem. Phys. 94, 6737 (1991).
11. L. Maragliano, A. Fischer, E. Vanden-Eijnden, and G. Ciccotti, J. Chem. Phys. 125, 024106 (2006).
12. D. Frenkel and A. J. Ladd, J. Chem. Phys. 81, 3188 (1984).
13. Y. Sugita and Y. Okamoto, Chem. Phys. Lett. 314, 141 (1999).
14. A. Brünger, C. L. Brooks III, and M. Karplus, Chem. Phys. Lett. 105, 495 (1984).
15. P. L’Ecuyer and T. H. Andres, Math. Comput. Simul. 44, 99 (1997).
16. Y. Sugita, A. Kitao, and Y. Okamoto, J. Chem. Phys. 113, 6042 (2000).
17. A. G. Cochran, R. T. Tong, M. A. Starovasnik, E. J. Park, R. S. McDowell, J. E. Theaker, and N. J. Skelton, J. Am. Chem. Soc. 123, 625 (2001).
18. C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010).
19. C. Gardiner, Handbook of Stochastic Methods, 3rd ed. (Springer-Verlag, Berlin, 2003).
20. T. Zang, L. Yu, C. Zhang, and J. Ma, J. Chem. Phys. 141, 044113 (2014).
21. T. Ma, T. Zang, Q. Wang, and J. Ma, Protein Sci. 27, 1842 (2018).
22. M. S. Friedrichs, P. Eastman, V. Vaidyanathan, M. Houston, S. Legrand, A. L. Beberg, D. L. Ensign, C. M. Bruns, and V. S. Pande, J. Comput. Chem. 30, 864 (2009).
23. B. R. Brooks, C. L. Brooks III, A. D. Mackerell, Jr., L. Nilsson, R. J. Petrella, B. Roux, Y. Won, G. Archontis, C. Bartels, S. Boresch et al., J. Comput. Chem. 30, 1545 (2009).
