Free energy perturbation (FEP) was proposed by Zwanzig [J. Chem. Phys. **22**, 1420 (1954)] more than six decades ago as a method to estimate free energy differences and has since inspired a huge body of related methods that use it as an integral building block. Being an importance sampling based estimator, however, FEP suffers from a severe limitation: the requirement of sufficient overlap between distributions. One strategy to mitigate this problem, called Targeted FEP, uses a high-dimensional mapping in configuration space to increase the overlap of the underlying distributions. Despite its potential, this method has attracted only limited attention due to the formidable challenge of formulating a tractable mapping. Here, we cast Targeted FEP as a machine learning problem in which the mapping is parameterized as a neural network that is optimized so as to increase the overlap. We develop a new model architecture that respects permutational and periodic symmetries often encountered in atomistic simulations and test our method on a fully periodic solvation system. We demonstrate that our method leads to a substantial variance reduction in free energy estimates when compared against baselines, without requiring any additional data.

## I. INTRODUCTION

Free energy estimation is of central importance in the natural sciences. Accurate estimation of free energies, however, is challenging, as many systems are out of reach of experimental methods and analytic theory. Computer-based estimation has, thus, emerged as a valuable alternative. Successful application areas of *in silico* free energy estimation span industry and scientific research, including drug discovery,^{1} condensed matter physics,^{2} materials science,^{3} structural biology,^{4} and the effects of mutagenesis.^{5} Because of its importance and wide-ranging applications, computer-based free energy estimation has been an active field of research for decades.^{6}

At the core of many state-of-the-art estimators^{7} lies the free energy perturbation (FEP) identity introduced by Zwanzig^{8} in 1954,
$$e^{-\beta\,\Delta F} = \mathbb{E}_A\!\left[e^{-\beta\,\Delta U(\mathbf{x})}\right]. \tag{1}$$

Here, Δ*F* = *F*_{B} − *F*_{A} is the Helmholtz free energy difference between two thermodynamic states *A* and *B*, each connected to a thermal reservoir at the inverse temperature *β*. We denote **x** as a point in the system’s configuration space and define
$$\Delta U(\mathbf{x}) = U_B(\mathbf{x}) - U_A(\mathbf{x}), \tag{2}$$

with *U*_{A} = *U*(**x**; *λ*_{A}) and *U*_{B} = *U*(**x**; *λ*_{B}) being the energy functions associated with *A* and *B*, respectively. The parameters *λ*_{A} and *λ*_{B} could, for example, represent the coupling coefficients of a particle–particle interaction potential. By $\mathbb{E}_A[\cdots]$, we denote an expectation under the equilibrium distribution $\rho_A \propto e^{-\beta U_A}$ (and similarly for *B*). A procedure for computing Δ*F* via Eq. (1) is then as follows: a number of samples are first drawn from *ρ*_{A}, for example, via a Markov chain Monte Carlo or molecular dynamics simulation. The change in energy, Δ*U*, associated with instantaneously switching *λ*_{A} → *λ*_{B} is then computed, from which an exponential average is taken. From a statistical point of view, FEP is an application of Importance Sampling (IS), where *ρ*_{A} serves as the proposal distribution.^{9–11}
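As a concrete illustration, the procedure above can be sketched in a few lines of NumPy for a one-dimensional harmonic toy system in which Δ*F* is known in closed form. The system, the spring constants, and all names here are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
k_A, k_B = 1.0, 2.0            # spring constants of toy states A and B

def u_A(x): return 0.5 * k_A * x**2
def u_B(x): return 0.5 * k_B * x**2

# Draw equilibrium samples from rho_A ~ exp(-beta*U_A) (a Gaussian here,
# standing in for an MCMC or MD simulation).
x = rng.normal(0.0, np.sqrt(1.0 / (beta * k_A)), size=200_000)

# FEP / Zwanzig estimator, Eq. (1): exp(-beta*dF) = <exp(-beta*dU)>_A.
dU = u_B(x) - u_A(x)
dF_fep = -np.log(np.mean(np.exp(-beta * dU))) / beta

# Analytic reference for Gaussians: dF = ln(k_B/k_A) / (2*beta).
dF_exact = 0.5 * np.log(k_B / k_A) / beta
print(dF_fep, dF_exact)
```

For this pair of well-overlapping Gaussians the estimate converges quickly; as the text below explains, convergence degrades sharply once the overlap shrinks.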

While Eq. (1) is exact, the convergence of this estimator for a finite number of samples strongly depends on the degree to which *A* and *B* overlap in configuration space.^{12} Indeed, the dominant contributions to the above expectation will come from samples of *A* that are typical under *B*, and such contributions become increasingly rare with decreasing overlap.^{13}

There exist multiple strategies for mitigating the overlap requirement. Arguably, the most common strategy is a multi-staged approach, in which a sequence of intermediate thermodynamic states is defined between *A* and *B* [Fig. 1(a)]. Here, the increased convergence is facilitated by demanding that neighboring pairs of states be chosen to contain sufficient overlap. The quantity of interest, Δ*F*, is then recovered as a sum over the pairwise differences Δ*F*_{i,i+1}.^{6} The Multistate Bennett Acceptance Ratio^{7} (MBAR) estimator is a prominent example of an estimator that follows this strategy. However, multi-staged approaches require samples from multiple states as well as a suitable order parameter to define intermediate states. Furthermore, it is unclear *a priori* how best to discretize the order parameter or how many stages to use.

An alternative, elegant strategy to increasing the overlap is by incorporating configuration space maps. Jarzynski developed Targeted Free Energy Perturbation^{14} (TFEP), a generalization of FEP whereby an invertible mapping defined on configuration space transports points sampled from *A* to a new distribution, *A*′ [see Fig. 1(b)]. Jarzynski showed that a generalized FEP identity can be applied to this process, from which the free energy difference can be recovered. Importantly, if the mapping is chosen wisely, an effective overlap can be increased, leading to quicker convergence of the TFEP estimator. Hahn and Then extended TFEP to the bidirectional setting,^{15} whereby the mapping and its inverse are applied to samples from *A* and *B*, respectively. Lower-error free energy estimates can then be obtained via the statistically optimal BAR estimator.^{16}

Whether in the unidirectional or bidirectional case, the main challenge for targeted approaches is crafting a mapping that is capable of increasing the overlap. Unfortunately, for most real-world problems, the physical intuition needed to develop such a technique is simply lacking. Modern-day machine learning (ML) techniques, however, seem perfectly suited for this task.

Since the introduction of TFEP in 2002, research in ML has made remarkable progress in fields of image classification,^{17,18} playing video^{19} and board games,^{20,21} and generative modeling of images.^{22,23} ML has also enabled advances in the natural sciences, including state-of-the-art protein structure prediction,^{24} neural-network based molecular force fields,^{25,26} generative modeling of lattice field theories,^{27} new paradigms for sampling equilibrium distributions of molecules,^{28} and variational free energy estimates.^{29,30}

In this work, we turn targeted free energy estimation into a machine learning problem. *In lieu* of a hand-crafted mapping, we represent our mapping by a deep neural network whose parameters are optimized so as to maximize the overlap. Once trained, the free energy can then be computed by evaluating the targeted estimator with our learned mapping. Below we will consider both unidirectional and bidirectional settings and will refer to them as *Learned Free Energy Perturbation* (LFEP) and *Learned Bennett Acceptance Ratio* (LBAR), respectively.

A key contribution of this work is the development of a mapping that respects the underlying symmetries of our system of study. In particular, our neural network is equivariant to the permutation of identical particles and respects periodic boundary conditions by construction. These are particularly important considerations when modeling atomic systems, as they often obey such symmetries.

The rest of our manuscript is structured as follows. In Sec. II, we summarize the previously developed targeted free energy estimators that we will be making use of. This is followed by the development of suitable training objectives to maximize the overlap (Sec. III). We then demonstrate our method by applying it to a solvation system. In Sec. IV, we describe the experimental setup and discuss inherent symmetries that are exploited to devise a model with the correct inductive biases (Sec. V). Finally, we present the experimental results in Sec. VI and discuss our findings in Sec. VII.

## II. THEORETICAL BACKGROUND

In the following, we will refer to *A* and *B* as the thermodynamic states, defined by the equilibrium densities $\rho_A(\mathbf{x}) = e^{-\beta U_A(\mathbf{x})}/Z_A$ and $\rho_B(\mathbf{x}) = e^{-\beta U_B(\mathbf{x})}/Z_B$, where *Z*_{A} and *Z*_{B} are the normalization constants (partition functions). In the targeted scheme, configurations **x** are drawn from *A* and mapped to new configurations **y** = *M*(**x**) via an invertible, user-specified mapping *M*. The set of mapped configurations can be thought of as samples from a new state *A*′,
$$\mathbf{x} \sim \rho_A, \quad \mathbf{y} = M(\mathbf{x}) \quad\Longrightarrow\quad \mathbf{y} \sim \rho_{A'}. \tag{3}$$

Similarly, we also consider the reverse case where configurations are drawn from *B* and mapped to *B*′ via the inverse
$$\mathbf{x} \sim \rho_B, \quad \mathbf{y} = M^{-1}(\mathbf{x}) \quad\Longrightarrow\quad \mathbf{y} \sim \rho_{B'}. \tag{4}$$

We refer to this pair of prescriptions as the “forward” and “reverse” process, respectively. For each process, we denote generalized energy differences as
$$\Phi_F(\mathbf{x}) = U_B(M(\mathbf{x})) - U_A(\mathbf{x}) - \beta^{-1} \ln J_M(\mathbf{x}) \tag{5}$$

and

$$\Phi_R(\mathbf{x}) = U_A(M^{-1}(\mathbf{x})) - U_B(\mathbf{x}) - \beta^{-1} \ln J_{M^{-1}}(\mathbf{x}), \tag{6}$$

where $J_M$ and $J_{M^{-1}} = J_M^{-1}$ are the Jacobian determinants associated with the mappings. As originally shown by Jarzynski,^{14} an identity exists, which relates Δ*F* to an ensemble of realizations of Φ_{F},
$$e^{-\beta\,\Delta F} = \mathbb{E}_A\!\left[e^{-\beta\,\Phi_F(\mathbf{x})}\right]. \tag{7}$$

Equation (7) can be regarded as a generalization of FEP, as it holds for any invertible *M*, and reduces to Eq. (1) if *M* is the identity. An analogous equation holds for the reverse process. Derivations of Eq. (7) can be found in Refs. 14 and 15 or Appendix A.
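To make the generalized identity concrete, the harmonic toy example can be revisited with a perfect linear map: rescaling by $s = \sqrt{k_A/k_B}$ transports the Gaussian *ρ*_{A} exactly onto *ρ*_{B}, in which case every realization of Φ_{F} equals Δ*F* and Eq. (7) converges from a single sample. This is an illustrative sketch under those assumptions, not the paper's solvation setup:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0
k_A, k_B = 1.0, 2.0

def u_A(x): return 0.5 * k_A * x**2
def u_B(x): return 0.5 * k_B * x**2

x = rng.normal(0.0, np.sqrt(1.0 / (beta * k_A)), size=1000)

# A perfect map for this pair of Gaussians: rescale so rho_A' == rho_B.
s = np.sqrt(k_A / k_B)          # M(x) = s*x, with Jacobian J_M = s
phi_F = u_B(s * x) - u_A(x) - np.log(s) / beta   # generalized work, Eq. (5)

# TFEP estimator, Eq. (7). Because the map is perfect, every phi_F value
# equals dF exactly, so the exponential average has zero variance.
dF_tfep = -np.log(np.mean(np.exp(-beta * phi_F))) / beta
dF_exact = 0.5 * np.log(k_B / k_A) / beta
print(dF_tfep, dF_exact)
```

With the identity map instead of the rescaling, this reduces to plain FEP, illustrating how Eq. (7) generalizes Eq. (1).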

Hahn and Then extended the above result to the bidirectional case,^{15} showing that a fluctuation theorem (FT) exists between the forward and reverse processes
$$\frac{p_F(\varphi)}{p_R(-\varphi)} = e^{\beta(\varphi - \Delta F)}. \tag{8}$$

The functions
$$p_F(\varphi) = \mathbb{E}_A\!\left[\delta(\varphi - \Phi_F(\mathbf{x}))\right] \tag{9}$$

and
$$p_R(\varphi) = \mathbb{E}_B\!\left[\delta(\varphi - \Phi_R(\mathbf{x}))\right] \tag{10}$$

can be thought of as generalized work distributions associated with the mapping processes and *δ* is the Dirac delta function. With these bidirectional estimates, Bennett’s Acceptance Ratio (BAR) method^{16} can be employed as an alternative estimator of Δ*F*.^{15} BAR estimation of Δ*F* can be formulated as a self-consistent iteration of the equation
$$\mathbb{E}_A\!\left[f\!\left(\beta(\Phi_F - \Delta F)\right)\right] = \mathbb{E}_B\!\left[f\!\left(\beta(\Phi_R + \Delta F)\right)\right], \tag{11}$$

where *f*(*x*) = 1/(1 + *e*^{x}) is the Fermi function. For simplicity, in Eq. (11), we restrict ourselves to the case where the number of samples in the forward and reverse directions is equal, but more general formulations exist. BAR has a statistical advantage over FEP as it has been shown to be the minimum variance free energy estimator for any asymptotically unbiased method.^{31} Because of this property, BAR is generally the method of choice when samples from both *A* and *B* are available.
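The self-consistent BAR equation has a single root in Δ*F* because the left-hand side increases and the right-hand side decreases monotonically in Δ*F*, so it can be solved by bisection. Below is a minimal sketch of this solve, applied to synthetic work values from the harmonic toy system with the identity mapping; the function names and the toy data are illustrative assumptions:

```python
import numpy as np

def fermi(x):
    return 1.0 / (1.0 + np.exp(x))

def bar(phi_F, phi_R, beta=1.0, tol=1e-10):
    """Solve E_A[f(beta*(phi_F - dF))] = E_B[f(beta*(phi_R + dF))], as in
    Eq. (11), for dF by bisection (equal sample sizes assumed)."""
    def imbalance(dF):
        return (np.mean(fermi(beta * (phi_F - dF)))
                - np.mean(fermi(beta * (phi_R + dF))))
    lo = -np.max(np.abs(phi_F)) - 10.0
    hi = np.max(np.abs(phi_F)) + 10.0
    # imbalance(dF) is increasing in dF, so bisection brackets the root.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Synthetic example: harmonic states A and B, identity mapping, so the
# generalized work reduces to the plain energy difference U_B - U_A.
rng = np.random.default_rng(2)
beta, k_A, k_B = 1.0, 1.0, 2.0
x_A = rng.normal(0.0, np.sqrt(1.0 / (beta * k_A)), size=100_000)
x_B = rng.normal(0.0, np.sqrt(1.0 / (beta * k_B)), size=100_000)
phi_F = 0.5 * (k_B - k_A) * x_A**2          # U_B - U_A on samples from A
phi_R = 0.5 * (k_A - k_B) * x_B**2          # U_A - U_B on samples from B
dF_bar = bar(phi_F, phi_R, beta)
print(dF_bar)  # should be close to 0.5*ln(2) ~ 0.3466 for this system
```

Production work would typically use an established implementation (e.g., the pymbar package) rather than this sketch.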

In summary, our method proceeds in two stages by first computing optimized work values using Eqs. (5) and (6) and then estimating Δ*F*. For the latter, we can employ the generalized FEP estimator (7) in the unidirectional setting (LFEP) or solve the BAR equations (11) in the bidirectional setting (LBAR). This highlights an important difference between LBAR and other maximum likelihood free energy estimators that assume the work values to be fixed.^{7,16,32} Instead, LBAR learns to optimize the work values and subsequently combines them optimally to predict Δ*F*.

Crucially, the targeted estimators above hold for *every* invertible mapping. That is, given an infinite number of samples, any invertible choice of *M* will produce a consistent estimate of Δ*F*. Of course, the finite-sample convergence properties are of more practical importance and will strongly depend on the choice of *M*.

## III. TRAINING OBJECTIVE

In a distributional sense, the forward and reverse processes act to transform *ρ*_{A} and *ρ*_{B} into *ρ*_{A′} and *ρ*_{B′}, as depicted in Fig. 1. In what follows, we will refer to the distribution of mapped configurations as the “images” (i.e., *ρ*_{A′} and *ρ*_{B′}), and the distributions we want them mapped toward as the “targets” [i.e., *ρ*_{B} (*ρ*_{A}) for the forward (reverse) process]. Due to the deterministic mapping, the bases and images are related by the change of variable formula,
$$\rho_{A'}(\mathbf{y}) = \rho_A(M^{-1}(\mathbf{y}))\, J_{M^{-1}}(\mathbf{y}), \tag{12}$$

$$\rho_{B'}(\mathbf{y}) = \rho_B(M(\mathbf{y}))\, J_M(\mathbf{y}). \tag{13}$$

The crucial consideration for convergence of our estimators [Eqs. (7) and (11)] is the overlap between the image and target distributions.^{14} Indeed, in the limit that images and targets coincide, Jarzynski showed^{14} that $p_F(\varphi) \to \delta(\varphi - \Delta F)$. This implies that the convergence of Eq. (7) is immediate (i.e., only one sample is needed). In this limit, it is also the case that $p_R(\varphi) \to \delta(\varphi + \Delta F)$, implying that the expectation values on either side of Eq. (11) converge immediately. This overlap argument is further reinforced in Appendix A, where we show that the TFEP estimator can be interpreted as an FEP estimator between *A*′ and *B*.

We now turn our attention to the construction of a loss function that accurately judges the quality of *M*. Guided by the considerations of overlap, we consider the Kullback–Leibler (KL) divergence between the image and target. For the forward process, we have
$$
\begin{aligned}
D_{\mathrm{KL}}\!\left(\rho_{A'}\,\|\,\rho_B\right) &= \mathbb{E}_{A'}\!\left[\ln\frac{\rho_{A'}(\mathbf{y})}{\rho_B(\mathbf{y})}\right] \\
&= \mathbb{E}_{A}\!\left[\ln\frac{\rho_{A'}(M(\mathbf{x}))}{\rho_B(M(\mathbf{x}))}\right] \\
&= \mathbb{E}_{A}\!\left[\ln\rho_A(\mathbf{x}) - \ln J_M(\mathbf{x}) - \ln\rho_B(M(\mathbf{x}))\right] \\
&= \beta\,\mathbb{E}_A\!\left[\Phi_F(\mathbf{x})\right] - \beta\,\Delta F. 
\end{aligned} \tag{14}
$$

In the above derivation, we invoked a change of variable formula in going from the first to third lines, and used the identity −*β*Δ*F* = log *Z*_{B} − log *Z*_{A} to get to the last. An analogous equation can be derived for the reverse process yielding
$$D_{\mathrm{KL}}\!\left(\rho_{B'}\,\|\,\rho_A\right) = \beta\,\mathbb{E}_B\!\left[\Phi_R(\mathbf{x})\right] + \beta\,\Delta F. \tag{15}$$

While Eqs. (14) and (15) make clear that the KL divergences cannot be estimated accurately unless Δ*F* is known, for the purpose of optimization Δ*F* and *β* can be disregarded, as they are constants.

Below we consider two separate training regimes for our model. In the unidirectional case, the model was trained only on the forward process, with a loss function
$$\mathcal{L}_F(\theta) = \mathbb{E}_A\!\left[\Phi_F(\mathbf{x})\right]. \tag{16}$$

In the bidirectional case, the model was trained using both forward and reverse processes with the loss
$$\mathcal{L}_{FR}(\theta) = \mathbb{E}_A\!\left[\Phi_F(\mathbf{x})\right] + \mathbb{E}_B\!\left[\Phi_R(\mathbf{x})\right]. \tag{17}$$
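Both losses reduce to simple sample averages of the generalized work values of Eqs. (5) and (6). As an illustrative sketch (harmonic toy system, a one-parameter map *M*(*x*) = *s·x* standing in for the neural network, and a grid search standing in for gradient descent; all hypothetical choices), minimizing the bidirectional loss recovers the perfect map:

```python
import numpy as np

rng = np.random.default_rng(3)
beta, k_A, k_B = 1.0, 1.0, 4.0
x_A = rng.normal(0.0, np.sqrt(1.0 / (beta * k_A)), size=50_000)
x_B = rng.normal(0.0, np.sqrt(1.0 / (beta * k_B)), size=50_000)

def loss(s):
    # Bidirectional loss: E_A[Phi_F] + E_B[Phi_R] for the map M(x) = s*x,
    # whose Jacobian is s (forward) and 1/s (reverse).
    phi_F = 0.5 * k_B * (s * x_A)**2 - 0.5 * k_A * x_A**2 - np.log(s) / beta
    phi_R = 0.5 * k_A * (x_B / s)**2 - 0.5 * k_B * x_B**2 + np.log(s) / beta
    return np.mean(phi_F) + np.mean(phi_R)

# Minimize over the single "network parameter" s by a coarse grid search.
grid = np.linspace(0.2, 1.5, 1301)
s_opt = grid[np.argmin([loss(s) for s in grid])]
print(s_opt)  # expected near sqrt(k_A / k_B) = 0.5 for this toy system
```

For this family of maps, the optimum coincides with the rescaling that transports *ρ*_{A} exactly onto *ρ*_{B}; in the paper's setting, the same averages are minimized over the parameters of a normalizing flow by stochastic gradient descent.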

## IV. EXPERIMENTAL SETUP

To test our method, we consider a system similar to the one used by Jarzynski^{14} consisting of a repulsive solute immersed in a bath of *N* = 125 identical solvent particles. The task is to calculate the free energy change associated with growing the solute radius from *R*_{A} to *R*_{B} (see Fig. 2). In contrast to the hard-sphere solute used in Ref. 14, we modeled our solute as a soft sphere. This is because any finite particle overlap would lead to infinite forces, which our training method cannot handle.

Intuitively, an effective mapping should push solvent particles away from the center to avoid high-energy steric clashes with the expanding solute. Jarzynski followed this intuition, defining a mapping that uniformly compresses the solvent particles amidst an expanding repulsive solute. Although this mapping gave a significant convergence gain when applied to a hard solute,^{14} it is not directly applicable to soft solutes. This is because the phase space compression results in a transformed density whose support is not equal to that of the target density, violating the assumption of invertibility.

Below we demonstrate that we no longer need to rely on physical intuition to hand-craft a tractable mapping; this process can be fully automated using the general framework proposed in this work. In order to learn an effective mapping, however, it is crucial that the model be compatible with the inherent symmetries of the underlying physical system.

### A. Energy

Periodic boundary conditions (PBCs) confine the *N*-particle system to a 3*N*-dimensional torus, $\mathbf{x} = (\mathbf{r}_1, \ldots, \mathbf{r}_N) \in \mathbb{T}^{3N}$, where each coordinate of the position vector $\mathbf{r}_i \in \mathbb{T}^3$ of particle *i* lives in the interval [−*L*, *L*] (see Fig. 2). The total energy can then be decomposed into a sum of pairwise contributions according to
$$U_\alpha(\mathbf{x}) = \sum_{i<j} u\!\left(\lVert \mathbf{r}'_{ij} \rVert\right) + \sum_{i=1}^{N} v\!\left(\lVert \mathbf{r}'_{i} \rVert; R_\alpha\right), \tag{18}$$

where the subscript *α* ∈ {*A*, *B*} denotes the state and $\mathbf{r}_{ij} = \mathbf{r}_j - \mathbf{r}_i$. The quantity $\mathbf{r}'_{ij} = \mathbf{r}_{ij} - 2L\,\mathrm{round}\!\left(\mathbf{r}_{ij}/(2L)\right)$ is a difference vector whose components correspond to the shortest, signed distance in the respective dimension, and the function round is applied element-wise, giving the nearest integer. The radially symmetric functions *u*(*r*) and *v*(*r*) represent the Lennard-Jones^{35} (LJ) and Weeks–Chandler–Andersen^{36} (WCA) potentials. In practice, we truncate LJ interactions using a radial cutoff of *L* and shift the potential to be zero at the cutoff.^{37}
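The minimum-image convention and the truncated, shifted LJ potential can be sketched as follows; the box size and potential parameters here are illustrative placeholders, not the values used in the paper:

```python
import numpy as np

L = 1.0   # half box length; the box spans [-L, L] in each dimension

def min_image(r_ij, L=L):
    """Shortest signed displacement under periodic boundary conditions:
    r'_ij = r_ij - 2L * round(r_ij / (2L)), applied element-wise."""
    return r_ij - 2.0 * L * np.round(r_ij / (2.0 * L))

def lj_shifted(r, sigma=0.2, eps=1.0, r_cut=L):
    """Lennard-Jones potential, truncated at r_cut and shifted so that it
    is exactly zero at the cutoff (avoiding a discontinuity there)."""
    def lj(r):
        sr6 = (sigma / r)**6
        return 4.0 * eps * (sr6**2 - sr6)
    return np.where(r < r_cut, lj(r) - lj(r_cut), 0.0)

# Two particles near opposite faces of the box interact across the
# boundary: the minimum-image separation has magnitude 0.1, not 1.9.
r_i = np.array([-0.95, 0.0, 0.0])
r_j = np.array([0.95, 0.0, 0.0])
d = min_image(r_j - r_i)
e = lj_shifted(np.linalg.norm(d))
print(d, np.linalg.norm(d), e)
```

Summing such pairwise terms over all solvent pairs, plus the solute–solvent WCA terms, reproduces the energy decomposition above.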

### B. Training data

The data used for training and evaluation were generated via molecular dynamics (MD) simulations using the package LAMMPS.^{38} Simulation frames were generated via Langevin dynamics and were saved infrequently enough to ensure decorrelated samples (as judged by the potential energy decorrelation time), giving a total of 5 × 10^{4} frames per simulation. A total of 20 independent simulation trajectories were generated for each thermodynamic state, starting from randomly initialized configurations and momenta. Ten independent training and evaluation datasets were then constructed from these simulations by first concatenating and shuffling configurations, and then partitioning them into *N*_{train} = 9 × 10^{4} training and *N*_{test} = 1 × 10^{4} test samples for each dataset. The value of *N*_{train} was chosen such that the baseline BAR estimator, when evaluated on *N*_{train} data points, yields a reasonably accurate free energy difference. Below we train and evaluate our model on these datasets so as to compute statistical convergence properties of the estimators across independent runs. Further simulation details are summarized in Appendix B.

### C. Symmetries

The system under consideration exhibits properties that are widely encountered in the atomistic simulation community: periodic boundary conditions (PBCs) and permutation invariance. PBCs are usually employed to reduce finite-size effects. Permutation invariance arises as a consequence of the energy being invariant to particle permutations—a condition that is satisfied by the solvent particles as they are all identical.

PBCs are a choice of geometry for the space in which the particles live: a 3D torus. Without them, the system (including the solute) would admit rigid-translation, rotation, and reflection symmetries. With them, only the translation symmetries and a discrete set of rotations/reflections remain. The original rigid translations in $\mathbb{R}^3$ become translations by elements of the torus $\mathbb{T}^3$, leaving the energy invariant. Because of this symmetry, we can fix the solute at the origin of the simulation box without affecting the ratio *Z*_{B}/*Z*_{A} (as shown in Fig. 2). The remaining set of discrete rotations/reflections is the group of symmetries of a cube (the *octahedral group*), which contains only 48 elements.

We design the mapping *M* to respect PBCs and permutation invariance by construction. This means that the state *A*′ obtained by transforming *A* via *M* is guaranteed to have the 3D torus geometry stipulated by PBCs and to be symmetric with respect to any permutation of solvent particles. Section V discusses in detail how these symmetries are implemented in the model architecture.

Our model architecture does not obey the octahedral symmetries, in the sense that *A*′ is not guaranteed to be symmetric with respect to the 48 permutations and/or reflections of the three coordinate axes. However, since the size of this symmetry group is small, we account for the octahedral symmetries via *training-data augmentation* instead. That is, during the training, we transform every training data point (system configuration) by a random element of the octahedral group. As the training progresses, all 48 transformations of each data point are likely to be seen by the model; thus, the model is trained to learn these symmetries from data. In comparison, exhausting the *N*! = 125! permutation symmetries by training-data augmentation would be infeasible in any reasonable training time.
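The 48 octahedral transformations are exactly the signed permutation matrices of the three coordinate axes (3! axis permutations × 2³ reflections), so the augmentation step can be sketched directly; the function names are illustrative:

```python
import numpy as np
from itertools import permutations, product

def octahedral_group():
    """Build the 48 elements of the octahedral group as signed permutation
    matrices: 3! axis permutations times 2^3 axis reflections."""
    mats = []
    for perm in permutations(range(3)):
        for signs in product([1.0, -1.0], repeat=3):
            m = np.zeros((3, 3))
            for axis, (p, s) in enumerate(zip(perm, signs)):
                m[axis, p] = s
            mats.append(m)
    return np.stack(mats)

OCTA = octahedral_group()

def augment(x, rng):
    """Apply a random octahedral symmetry to a configuration of shape (N, 3).
    With the solute pinned at the origin of a cubic box, these transformations
    leave the energy invariant, so they can serve as data augmentation."""
    g = OCTA[rng.integers(len(OCTA))]
    return x @ g.T

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, size=(125, 3))   # one toy configuration
y = augment(x, rng)
print(len(OCTA))  # 48 group elements
```

Because the group is orthogonal, every augmented configuration preserves all interparticle distances, which is what makes the energy invariant.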

## V. MODEL

We implement the mapping *M* using a deep neural network, parameterized by a set of learnable parameters *θ*. In designing the architecture of the network, we take into account the following considerations.

1. The mapping *M* must be bijective, and the inverse mapping *M*^{−1} should be efficient to compute, for any setting of the parameters *θ*.
2. The Jacobian determinant *J*_{M} should be efficient to compute for any setting of *θ*.
3. The network should be flexible enough to represent complex mappings.
4. The transformed distributions *ρ*_{A′} and *ρ*_{B′} should respect the boundary conditions and symmetries of the physical system.

The first three requirements are satisfied by a class of deep neural networks known as *normalizing flows*,^{39} which are invertible networks with efficient Jacobian determinants. Since bijectivity is a closed property under function composition, multiple normalizing flows (or “layers”) can be composed into a deeper flow, yielding a model with increased flexibility. We implement *M* as a normalizing flow composed of *K* invertible layers, that is,
$$M = M_K \circ \cdots \circ M_1. \tag{19}$$

Each layer $M_k : \mathbb{T}^{3N} \to \mathbb{T}^{3N}$ has the same architecture but its own learnable parameters *θ*_{k}, and the learnable parameters of *M* are simply *θ* = (*θ*_{1}, …, *θ*_{K}).

Our implementation of *M*_{k} is based on the architecture proposed by Dinh *et al.*,^{40} often referred to as the *coupling layer*. Let $r_i^\nu$, *ν* ∈ {1, 2, 3}, be the spatial coordinates of the particle with position vector $\mathbf{r}_i \in \mathbb{T}^3$. To simplify the notation, we will also use $r_i^\nu$ to denote the inputs to layer *k* (that is, the particles transformed by $M_{k-1} \circ \cdots \circ M_1$) and drop the dependence on *k*. Let $\mathcal{I}_k$ be a subset of the indices {1, 2, 3} associated with layer *k*. Then, *M*_{k} is defined as follows:
$$
r_i^\nu \;\mapsto\;
\begin{cases}
r_i^\nu & \text{if } \nu \in \mathcal{I}_k,\\[2pt]
G\!\left(r_i^\nu;\, \psi_i^\nu\right) & \text{otherwise},
\end{cases} \tag{20}
$$

where
$$\psi_i^\nu = C_i^\nu\!\left(r_1^{\mathcal{I}_k}, \ldots, r_N^{\mathcal{I}_k}\right). \tag{21}$$

By $r_i^{\mathcal{I}_k}$, we refer to the set of coordinates of $\mathbf{r}_i$ that are indexed by $\mathcal{I}_k$, and $C_i^\nu$ is the output of *C* for coordinate *ν* of particle *i*. The functions *G* and *C* are implemented by neural networks. The parameters of *G* associated with coordinate *ν* of particle *i* are denoted by $\psi_i^\nu$ and computed by *C*; we have again dropped the dependence of $\psi_i^\nu$ on *k* to simplify the notation. The parameters of *C* are the learnable parameters of the layer, *θ*_{k}. An illustration of the coupling layer defined by Eqs. (20) and (21) is shown in Fig. 3.

In simple terms, *M*_{k} works as follows. We partition the coordinates into two sets; the coordinates indexed by $\mathcal{I}_k$ are left invariant, whereas the remaining coordinates are transformed element-wise. Each transformed coordinate undergoes a different transformation depending on the value of $\psi_i^\nu$, which itself is a function of all the coordinates that remain invariant. To ensure that each coordinate gets the opportunity to be transformed as a function of every other coordinate, each layer *M*_{k} uses a different partition $\mathcal{I}_k$, and we cycle through the partitions {1}, {2}, {3}, {1, 2}, {2, 3}, and {1, 3} across layers.
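The mechanics of a single coupling layer can be sketched as below. This is a deliberately simplified stand-in: a sine-based monotone circular map replaces the paper's circular splines, and a trivial per-particle function replaces the transformer conditioner; one scalar parameter is shared across a particle's transformed coordinates rather than one per coordinate. All names and constants are hypothetical:

```python
import numpy as np

L = 1.0
EPS_MAX = 0.9 * L / np.pi      # keeps G strictly increasing on [-L, L]

def G(r, eps):
    """Toy monotone circular map on [-L, L] standing in for circular
    splines: fixes both endpoints, has matching slopes there, and is
    strictly increasing whenever |eps| < L/pi."""
    return r + eps * np.sin(np.pi * r / L)

def dG(r, eps):
    return 1.0 + eps * np.pi / L * np.cos(np.pi * r / L)

def coupling_layer(x, invariant_axes, conditioner):
    """One coupling layer in the spirit of Eq. (20): coordinates in
    `invariant_axes` pass through unchanged and parameterize the
    element-wise transform of the remaining coordinates."""
    x = x.copy()
    inv = x[:, invariant_axes]                    # (N, |I_k|), untouched
    eps = EPS_MAX * np.tanh(conditioner(inv))     # one parameter per particle
    log_det = 0.0
    for nu in range(x.shape[1]):
        if nu in invariant_axes:
            continue
        log_det += np.sum(np.log(dG(x[:, nu], eps)))
        x[:, nu] = G(x[:, nu], eps)
    return x, log_det

# A stand-in "conditioner": permutation-equivariant because it acts
# identically and independently on each particle's invariant coordinates
# (a real conditioner would also model interactions between particles).
cond = lambda inv: np.mean(np.cos(np.pi * inv / L), axis=1)

rng = np.random.default_rng(5)
x = rng.uniform(-L, L, size=(125, 3))
y, log_det = coupling_layer(x, invariant_axes=[0], conditioner=cond)
print(y.shape, log_det)
```

Stacking such layers while cycling the invariant partition gives every coordinate the chance to be transformed as a function of the others, exactly as the text describes.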

For the mapping *M* to be bijective, a sufficient condition is that *G*(·; *ψ*) : [−*L*, *L*] → [−*L*, *L*] be strictly increasing for any setting of *ψ*. In that case, the inverse $M_k^{-1}$ is obtained by simply replacing *G* with *G*^{−1} in Eq. (20), and the Jacobian determinants can be computed efficiently as follows:
$$\ln J_{M_k} = \sum_{i=1}^{N} \sum_{\nu \notin \mathcal{I}_k} \ln \frac{\partial G\!\left(r_i^\nu;\, \psi_i^\nu\right)}{\partial r_i^\nu}. \tag{22}$$

Finally, the inverse and the Jacobian determinant of the composite mapping *M* can be computed by
$$M^{-1} = M_1^{-1} \circ \cdots \circ M_K^{-1} \tag{23}$$

and

$$\ln J_M = \sum_{k=1}^{K} \ln J_{M_k}. \tag{24}$$

To ensure that the transformed distributions *ρ*_{A′} and *ρ*_{B′} obey the required boundary conditions, the implementation of *G* must reflect the fact that $r_i^\nu = -L$ and $r_i^\nu = L$ are identified as the same point. For this to be the case, a sufficient set of conditions is as follows:
$$G(-L;\, \psi) = -L, \qquad G(L;\, \psi) = L, \qquad \left.\frac{\partial G(r;\, \psi)}{\partial r}\right|_{r=-L} = \left.\frac{\partial G(r;\, \psi)}{\partial r}\right|_{r=L}, \tag{25}$$

for any setting of the parameters *ψ*. To satisfy the above conditions, we implement *G* using *circular splines*, which were recently proposed by Jimenez Rezende *et al.*^{42} and are based on the rational-quadratic spline flows of Durkan *et al.*^{43} Our implementation of the circular splines is detailed in Appendix C.

In addition to the above, we also need to make sure that the parameters $\psi_i^\nu$ in Eq. (21) are periodic functions of the invariant coordinates $r_1^{\mathcal{I}_k}, \ldots, r_N^{\mathcal{I}_k}$. This can be easily achieved by the following feature transformation:
$$r_i^{\mathcal{I}_k} \;\mapsto\; \left(\cos\!\left(\pi\, r_i^{\mathcal{I}_k} / L\right),\; \sin\!\left(\pi\, r_i^{\mathcal{I}_k} / L\right)\right), \tag{26}$$

where cos and sin act element-wise. The above feature transformation is injective, so no information about $r_i^{\mathcal{I}_k}$ is lost. We apply this feature transformation to each particle *i* at the input layer of network *C*.
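This periodic embedding is a two-line function; the sketch below (illustrative names) checks that the two box faces *r* = −*L* and *r* = +*L* produce identical features:

```python
import numpy as np

L = 1.0

def circular_features(r):
    """Embed periodic coordinates so that r = -L and r = +L map to the
    same feature vector; the (cos, sin) pair is injective on the circle."""
    angle = np.pi * r / L
    return np.concatenate([np.cos(angle), np.sin(angle)], axis=-1)

r = np.array([[-L], [L], [0.25]])
f = circular_features(r)
print(f)
```

Feeding these features (rather than the raw coordinates) into the conditioner guarantees that its outputs, and hence the learned map, respect the torus geometry.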

Finally, to ensure that the transformed distributions *ρ*_{A′} and *ρ*_{B′} are invariant to particle permutations, it is necessary that the Jacobian determinant *J*_{M} also be invariant to particle permutations. In our architecture, this can be achieved by taking *C* to be *equivariant* to particle permutations. Specifically, let *σ* be a permutation of the set {1, …, *N*}. We say that *C* is equivariant with respect to *σ* if
$$C_i^\nu\!\left(r_{\sigma(1)}^{\mathcal{I}_k}, \ldots, r_{\sigma(N)}^{\mathcal{I}_k}\right) = \psi_{\sigma(i)}^\nu, \tag{27}$$

that is, if permuting the particles has the effect of permuting the parameter outputs $\psi_1^\nu, \ldots, \psi_N^\nu$ in exactly the same way. From Eq. (22), we can easily see that the above property implies that *σ* leaves $J_{M_k}$, and hence *J*_{M}, invariant, because we sum over all particles and the sum is permutation invariant. Previous studies have made similar observations.^{44,45} Our implementation of *C* is based on the architecture proposed by Vaswani *et al.*,^{41} often referred to as the *transformer*, which we use in a permutation-equivariant configuration. The implementation details of our transformer architecture are in Appendix C.
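The reason a transformer (without positional encodings) qualifies is that self-attention treats its inputs as a set. A minimal single-head sketch, with hypothetical weight shapes, makes the equivariance property directly checkable:

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single self-attention layer without positional encodings; acting
    on a set of N per-particle feature vectors, it is permutation-
    equivariant: permuting the rows of x permutes the output rows."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    att = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=-1)
    return att @ v

rng = np.random.default_rng(6)
d = 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
x = rng.normal(size=(125, d))          # toy per-particle feature vectors

perm = rng.permutation(125)
out = self_attention(x, Wq, Wk, Wv)
out_perm = self_attention(x[perm], Wq, Wk, Wv)
print(np.allclose(out_perm, out[perm]))  # True: the layer is equivariant
```

Any positional encoding would break this property, which is why the conditioner must consume particles as an unordered set.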

## VI. RESULTS

In this section, we evaluate the performance of our method for the solvation system illustrated in Fig. 2. We focus on the bidirectional BAR and LBAR estimators in the main text, due to their advantages over unidirectional approaches as discussed in Sec. II. We refer to Appendix D for a discussion of the unidirectional counterparts.

To capture statistical variation, our training and analysis procedure was performed ten times, each using independent training and evaluation datasets. Our loss profiles and free energy estimates below report averages as well as statistical variation across these runs.

We first report training results in Fig. 4, where the full-batch loss is plotted as a function of the number of training steps. We observe a pattern commonly encountered in ML: after an initial decrease of both training and test loss, the latter develops a minimum. At around the minimum, the model stops generalizing and starts to overfit to the training data. We, therefore, employ a technique called *early stopping*^{46} and use the model parameters corresponding to the minimum test loss for all further evaluations. It is worth emphasizing, however, that the precise location of the minimum does not have an appreciable effect on the quality of the free energy estimates reported for the bidirectional estimator (results not presented). We also note the small variation among the independent runs, suggesting that there is no significant dependence of the performance on a particular dataset.

Next, we probe the overlap resulting from our learned mapping. In Fig. 5(a), we plot the distributions of learned work values for the forward and reverse directions compared against their unmapped counterparts for which the mapping is the identity. These distributions are typically unimodal^{6} and satisfy the inequalities $-\mathbb{E}_B[\Phi_R] \le \Delta F \le \mathbb{E}_A[\Phi_F]$ for any invertible mapping. The free energy difference Δ*F* corresponds precisely to the point of intersection of the forward and reverse distributions. Both of these facts are a consequence of the fluctuation theorem [Eq. (8)]. Intuitively, it is clear then that an effective mapping should increase the overlap between the distributions to facilitate locating the intersection. This is precisely the enhancement we observe for the mapped work values. The two modes are shifted toward each other and share a significantly larger overlap than the unmapped distributions. We also see that the mapping strongly reduces the variance of the distributions. In fact, with a perfect mapping, we would expect the forward and reverse distributions to collapse onto a single delta distribution located at Δ*F*.

We now turn to the statistical convergence of the free energy estimates. In Fig. 5(b), we plot a running average of the estimate as a function of the number of evaluated samples per stage. The solid lines report averages of the estimate over the independent runs and shaded regions represent one standard deviation of the runs. We first validate the correctness of our method against a converged MBAR estimator. Here, MBAR employed 15 stages and, thus, 7.5 times more samples in total. From Fig. 5(b), we see that the variation of our estimate overlaps nicely with the MBAR error estimates. We next compare the efficiency of our method against the baseline BAR estimator, where we see in Fig. 5(b) a clear variance reduction of LBAR across a wide range of evaluated sample sizes. The full-batch LBAR standard deviation we observe is approximately 19% of that reported by BAR. Moreover, training and evaluation of the model occurred on the same dataset, demonstrating that an effective mapping can be learned in a *data-efficient* manner. This is an important practical consideration but not at all obvious *a priori*. We could, in principle, even combine samples from the training and test datasets for estimation of Δ*F* but have used the test set only to detect overfitting.

## VII. DISCUSSION

In this work, we turn TFEP into a machine learning problem by combining it with state-of-the-art ML techniques. TFEP previously required hand-crafting a tractable mapping on configuration-space, a significant challenge for many realistic systems. We proposed to represent the mapping by a suitable neural network and identified training objectives for unidirectional and bidirectional cases. We then tested their performance on a prototype solvation system – the growth of a soft sphere in a fluid of solvent particles in periodic boundary conditions. While this system is relatively simplistic from a physical standpoint, it poses a significant challenge for ML models due to the system’s underlying permutational symmetry and periodic boundary conditions. Our experimental results indicate that both LFEP and LBAR estimators can lead to a significant variance reduction compared to their respective baselines and, therefore, clearly highlight the potential of this approach. Interesting directions for future work include a systematic analysis of the error of the learned estimators and a detailed comparison with MBAR. We believe it is possible that optimal estimation strategies on complex systems will contain a combination of staging and mapping.

Improving TFEP via learned mappings relates to the general idea of improving importance sampling by learning the proposal distribution, which has been explored substantially in machine learning and statistics. For instance, recent works in machine learning have proposed training a flexible deep-learning model of the proposal distribution to improve importance sampling^{47} or more sophisticated variants such as bridge sampling^{48} and sequential Monte Carlo.^{49–51} In turn, these approaches can be traced back to methods for adaptive importance sampling^{52} and adaptive sequential Monte Carlo^{53} in statistics. One recent instance of these approaches that relates closely to our work is *Neural Importance Sampling*,^{47} which uses expressive normalizing flows to learn good proposal distributions for importance sampling. Many of the above works have noted that the choice of loss function is important, with the forward KL divergence and chi-squared divergence being standard choices. These observations are in line with our observations of the differences between the unidirectional and bidirectional training losses.

We note that our learned free energy estimators and equivariant, periodic model architecture can be combined with the work on *Boltzmann Generators*,^{28} which uses flow-based models in combination with statistical re-weighting to sample from desired Boltzmann distributions. In particular, it would be interesting to see how our targeted estimators compare to the ones considered in Ref. 28. Furthermore, we note that neural network based free energy estimation is an active field of research. For example, two recent studies have independently proposed targeted unidirectional^{54} and bidirectional^{55} estimators similar to the ones suggested here. Both studies employ autoregressive networks to compute free energy estimates of a lattice spin model with discrete states. In addition, Ref. 55 also estimates free energies of a small protein in the gas phase (no periodicity) using normalizing flows. As noted in that study, however, their model lacks permutation equivariance, which is likely a drawback when applying it to the type of solvation system we consider here. We also believe that permutation equivariant, periodicity-respecting networks, such as the one considered here, will be key to scaling up the approach to system sizes commonly used in atomistic simulations.

Finally, our results demonstrate that we can estimate free energy differences between two states directly and data efficiently, i.e., using fewer MD samples for training and evaluation than the base estimator would require to converge. Other studies, for example, Refs. 28 and 55, follow a different strategy and estimate free energy differences by learning two separate mappings that share a common reference state. We believe that our direct approach may be preferable in cases where one state is a small perturbation of the other. It will be interesting to see how these different approaches compare with respect to data efficiency and variance reduction, and under which circumstances one is preferable to the other.

## AUTHORS’ CONTRIBUTIONS

P.W. and A.J.B. contributed equally to this work.

## ACKNOWLEDGMENTS

The authors would like to thank their colleagues Shakir Mohamed, Michael Figurnov, Ulrich Paquet, James Kirkpatrick, Daan Wierstra, Craig Donner, Alex Vitvitskyi, John Jumper, Amber Nicklin-Clark, Steph Hughes-Fitt, Alex Goldin, and Guy Scully for their help and for stimulating discussions.

## DATA AVAILABILITY

The data that support the findings of this study are openly available at https://github.com/deepmind/deepmind-research/tree/master/learned_free_energy_estimation.

### APPENDIX A: ALTERNATE DERIVATION OF TFEP AND INTERPRETATION

In this section, we derive and interpret the unidirectional TFEP estimator as a multi-stage FEP estimator. This interpretation allows us to reason about TFEP using intuition from FEP.

Given explicit densities for *A*, *A*′, and *B*, we can formally decompose Δ*F* into a sum over two terms,

$$\Delta F = \Delta F_{AA'} + \Delta F_{A'B}, \tag{A1}$$

which can each be computed separately using the FEP estimator [Eq. (1)]. We next define the energy of *A*′ as

$$U_{A'}(M(x)) \equiv -\beta^{-1} \ln\left[\rho_{A'}(M(x))\, Z_A\right] = U_A(x) + \beta^{-1} \ln J_M(x), \tag{A2}$$

such that $\rho_{A'} \propto e^{-\beta U_{A'}}$, where we have used Eq. (12) in going from the first to the second expression. Conveniently, one of these stages comes for free, as Δ*F*_{AA′} = 0,

$$\Delta F_{AA'} = -\beta^{-1} \ln \frac{1}{Z_A} \int e^{-\beta U_{A'}(y)}\,\mathrm{d}y = -\beta^{-1} \ln \frac{1}{Z_A} \int e^{-\beta U_A(x)}\,\mathrm{d}x = 0. \tag{A3}$$

In going from the first to the second expression, we have used Eq. (A2) together with the change of variables *y* = *M*(*x*). Combining Eqs. (A1)–(A3), as well as the FEP estimator [Eq. (1)], we arrive at our final result

$$\Delta F = -\beta^{-1} \ln \left\langle e^{-\beta\,\Delta U'} \right\rangle_{A'}, \tag{A4}$$

where

$$\Delta U'(x) = U_B(x) - U_{A'}(x) \tag{A5}$$

is the energy difference between *A*′ and *B*. Although Eq. (A4) is an explicit estimate between *A*′ and *B*, it is just a reformulation of the TFEP estimator [Eq. (7)], as can be seen from the equivalence of Δ*U*′ and Φ_{F} [compare Eqs. (5) and (A5)]. This interpretation of TFEP allows us to apply the intuition on convergence that we have built for FEP. Specifically, we can accelerate convergence if *A*′ shares a large overlap with *B*.

### APPENDIX B: SYSTEM

To generate the training data, we performed MD simulations of the system illustrated in Fig. 2 using the simulation package LAMMPS.^{38} The system is similar to the one studied in Ref. 14, but we replaced the hard solute–solvent interactions by a Weeks–Chandler–Andersen^{36} (WCA) potential. Below, we express our energy, length, and mass units in terms of the LJ well depth *ε*, the LJ diameter *σ*, and the solvent particle mass *m*, respectively.^{37} From this, our unit of time is defined as $\tau = \sigma \sqrt{m/\epsilon}$. Quantities expressed in these reduced units are denoted by an asterisk. We used a cubic simulation box with an edge length of 2*L*^{*} = 6.29 and employed cutoff radii of *L*^{*} and $2^{1/6} R_\alpha^*$ for the LJ and WCA interactions, respectively, where *α* ∈ {*A*, *B*} labels the state. The solute radii were taken to be $R_A^* = 2.5974$ and $R_B^* = 2.8444$. Both LJ and WCA potentials shared the same value of *ε*, and we set *σ*_{WCA} = *R*_{α}.
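As a concrete illustration of the solute–solvent interaction, the following minimal Python sketch (our own illustrative code, not the simulation code used in this work) implements the WCA potential as a LJ potential truncated at its minimum $2^{1/6}\sigma$ and shifted upward by *ε*:

```python
import numpy as np

def lj(r, eps, sigma):
    """Standard Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def wca(r, eps, sigma):
    """Weeks-Chandler-Andersen potential: the purely repulsive part of the
    LJ potential, truncated at its minimum r_c = 2^(1/6)*sigma and shifted
    up by eps so that it vanishes continuously at the cutoff."""
    r = np.asarray(r, dtype=float)
    r_c = 2.0 ** (1.0 / 6.0) * sigma
    return np.where(r < r_c, lj(r, eps, sigma) + eps, 0.0)
```

For the solute–solvent interaction, *σ* is set to *R*_{α}, so the WCA cutoff $2^{1/6} R_\alpha$ quoted above is simply the location of the corresponding LJ minimum.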

To simulate a specific state, we first assigned random positions to all solvent particles. The solute was placed at the origin of the box and kept stationary throughout the simulation. After performing energy minimization, the system was equilibrated in the canonical ensemble for a period of 5 × 10^{4}*τ*. The equations of motion were integrated using the velocity Verlet algorithm^{56} with a time step of 0.002*τ*. We employed a Langevin thermostat with a relaxation time of 0.5*τ* to keep the system at a temperature of *T*^{*} = 3.2155. To prevent drift of the center of mass, we set the total random force to zero. During the 5 × 10^{5}*τ* long production run, configurations were sampled every ten reduced time units, yielding a total of 5 × 10^{4} samples. For each state, we ran 20 such simulations starting from random initializations and random number seeds. The resulting 1 × 10^{6} samples were then partitioned as described in the main text.

We followed the same protocol to generate samples for MBAR. In addition to the two states corresponding to *R*_{A} and *R*_{B}, we considered 13 intermediate states with a constant radial increment Δ*R*_{i,i+1}. Using two different random seeds, we obtained 1 × 10^{5} samples for each state and evaluated all 15 energy functions on each sample. MBAR estimates of Δ*F* were then computed from this combined energy matrix using the package pymbar.^{7}
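For readers unfamiliar with MBAR, the following self-contained numpy sketch (a toy stand-in for pymbar, with our own function names) implements the standard MBAR self-consistent iteration for the reduced free energies:

```python
import numpy as np

def logsumexp(a, axis):
    """Numerically stable log-sum-exp along the given axis."""
    m = np.max(a, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(a - m), axis=axis))

def mbar_free_energies(u_kn, N_k, tol=1e-8, max_iter=5000):
    """Self-consistent MBAR iteration.

    u_kn[k, n] is the reduced energy of pooled sample n evaluated in state k,
    and N_k[k] is the number of samples drawn from state k. Returns the
    reduced free energies f_k relative to state 0."""
    K, N = u_kn.shape
    f = np.zeros(K)
    for _ in range(max_iter):
        # log-denominator per sample: ln sum_k N_k exp(f_k - u_k(x_n))
        log_denom = logsumexp(np.log(N_k)[:, None] + f[:, None] - u_kn, axis=0)
        f_new = -logsumexp(-u_kn - log_denom[None, :], axis=1)
        f_new -= f_new[0]  # fix the arbitrary additive constant
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f
```

As a sanity check, for two harmonic states $u_k(x) = x^2/(2\sigma_k^2)$, the exact reduced free energy difference is $-\ln(\sigma_1/\sigma_0)$, which the iteration recovers from pooled samples of both states.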

### APPENDIX C: MODEL IMPLEMENTATION DETAILS

We implement *G* using *circular splines*^{42} so that *G* satisfies the boundary conditions in Eqs. (25) and (26). Briefly, *G* is a piecewise function that consists of *S* segments. The parameters *ψ* are a set of *S* + 1 triplets (*x*_{s}, *y*_{s}, *d*_{s}), where [*x*_{s−1}, *x*_{s}] defines the domain of segment *s*, [*y*_{s−1}, *y*_{s}] defines its image, and *d*_{s−1}, *d*_{s} define its slopes at the endpoints (all slopes are required to be positive). Each segment is a strictly increasing rational-quadratic function, constructed using the interpolation method of Gregory and Delbourgo.^{57} The conditions in Eqs. (25) and (26) are satisfied by setting *x*_{0} = *y*_{0} = −*L*, *x*_{S} = *y*_{S} = *L*, and *d*_{0} = *d*_{S} > 0. We can increase the flexibility of *G* by increasing the number of segments *S*. With a large enough *S*, circular splines can approximate arbitrarily well any strictly increasing function from [−*L*, *L*] to itself that satisfies Eqs. (25) and (26).
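To make the construction concrete, here is a minimal numpy sketch (our own illustrative code, not the authors' implementation) that evaluates such a monotone rational-quadratic spline given knots and slopes; setting *x*_{0} = *y*_{0} = −*L*, *x*_{S} = *y*_{S} = *L*, and *d*_{0} = *d*_{S} enforces the circular boundary conditions:

```python
import numpy as np

def circular_spline(x, xs, ys, ds, L=1.0):
    """Evaluate a monotone rational-quadratic spline G: [-L, L] -> [-L, L].

    xs, ys: knot positions (length S+1) with xs[0] = ys[0] = -L and
    xs[-1] = ys[-1] = L; ds: positive knot slopes (length S+1) with
    ds[0] == ds[-1]. Each segment is the Gregory-Delbourgo
    rational-quadratic interpolant, strictly increasing for positive slopes."""
    x = np.asarray(x, dtype=float)
    # index of the segment containing each x
    s = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, len(xs) - 2)
    x0, x1 = xs[s], xs[s + 1]
    y0, y1 = ys[s], ys[s + 1]
    d0, d1 = ds[s], ds[s + 1]
    delta = (y1 - y0) / (x1 - x0)   # average slope of the segment
    xi = (x - x0) / (x1 - x0)       # position within the segment, in [0, 1]
    num = (y1 - y0) * (delta * xi**2 + d0 * xi * (1 - xi))
    den = delta + (d0 + d1 - 2 * delta) * xi * (1 - xi)
    return y0 + num / den
```

By construction, each segment interpolates its endpoints and endpoint slopes, so stacking segments with shared knots yields a differentiable, strictly increasing map of [−*L*, *L*] onto itself.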

We implement *C* using the *transformer architecture* of Vaswani *et al.*^{41} so that *C* satisfies the permutation-equivariance condition in Eq. (28). Briefly, the network architecture is as follows. Each input $r_i^{I_k}$ undergoes the feature transformation in Eq. (27) and is then mapped to a fixed-size vector *h*_{i} via an affine transformation applied identically for each *i*. Then, (*h*_{1}, …, *h*_{N}) is processed by a sequence of *transformer blocks*, each of which is composed of two *residual layers*.^{18} The first residual layer is

$$h_i \leftarrow h_i + \mathrm{MHA}_i(h_1, \ldots, h_N),$$

where MHA is a *multi-headed self-attention layer* as described by Vaswani *et al.*,^{41} and MHA_{i} is its *i*th output. The second residual layer is

$$h_i \leftarrow h_i + \mathrm{MLP}(h_i),$$

where MLP is a standard *multi-layer perceptron*^{58} that is applied identically for each *i*. After repeating the transformer block a specified number of times (each time with different learnable parameters), each vector *h*_{i} is mapped to the output $\psi_i^{\nu}$, $\nu \notin I_k$, via an affine transformation applied identically for each *i*. To help with generalization and stability, we applied *layer normalization*^{59} to the inputs of MHA and MLP, as well as to the output. Because MHA is permutation-equivariant and every MLP and affine transformation is applied identically for each *i*, it follows that *C* is permutation-equivariant, and therefore, the transformed distribution is permutation-invariant as desired.
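The structure of one such block can be sketched in numpy (our own illustrative code with hypothetical parameter names, using a single attention head for brevity where the paper uses multi-headed attention, and omitting the final output normalization):

```python
import numpy as np

def layer_norm(h, eps=1e-5):
    """Normalize each vector h_i across its feature dimension."""
    mu = h.mean(axis=-1, keepdims=True)
    var = h.var(axis=-1, keepdims=True)
    return (h - mu) / np.sqrt(var + eps)

def self_attention(h, Wq, Wk, Wv, Wo):
    """Single-head self-attention: each output i attends over all inputs."""
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)  # row-wise softmax
    return (w @ v) @ Wo

def transformer_block(h, p):
    """Two pre-layer-norm residual layers: attention, then a per-i MLP."""
    h = h + self_attention(layer_norm(h), p["Wq"], p["Wk"], p["Wv"], p["Wo"])
    h = h + np.maximum(layer_norm(h) @ p["W1"] + p["b1"], 0.0) @ p["W2"] + p["b2"]
    return h
```

Because attention pools over all inputs symmetrically and the MLP acts on each *h*_{i} identically, permuting the input rows permutes the output rows in the same way, which is exactly the equivariance property the text relies on.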

All models were trained using the Adam optimizer.^{60} A summary of the hyperparameters is provided in Table I.

**TABLE I.** Summary of hyperparameters.

| Hyperparameter | Value |
|---|---|
| *Transformer* | |
| Number of blocks | 4 |
| Number of heads | 4 |
| Value dimension | 128 |
| Key dimension | 128 |
| Embedding dimension | 128 |
| *Circular spline flow* | |
| Number of segments (*S*) | 32 |
| *Adam optimizer* | |
| Batch size | 256 |
| Learning rate | 10^{−4} |
| *β*_{1} | 0.9 |
| *β*_{2} | 0.999 |


### APPENDIX D: RESULTS FOR LFEP

In this section, we discuss the free energy estimates obtained with LFEP, using unidirectional training with the $\mathcal{L}_{\mathrm{LFEP}}$ loss [Eq. (16)] as a proxy for *D*_{KL} [Eq. (14)]. To estimate the KL divergence, we replaced Δ*F* in Eq. (14) with the LFEP estimate of that quantity. Figure 6(a) shows the evolution of both quantities during training. The results feature an interesting behavior in the initial training regime. While the test and training losses are still decreasing, the KL divergence already exhibits a pronounced minimum due to a drift of the Δ*F* estimate toward lower values. This is further illustrated in Fig. 6(b), which compares the convergence of Δ*F* for three different mappings corresponding to specific training steps [colored symbols in Fig. 6(a)]. We see that the variance is reduced significantly in all three cases. However, only the first mapping (training step 2 × 10^{3}) agrees with the MBAR baseline; we can already observe a small bias for the second mapping (training step 1.1 × 10^{4}) that becomes even more pronounced as training progresses (training step 2 × 10^{4}). We note that this behavior is consistent with other experiments that we performed in the unidirectional setting (data not shown).

This is problematic because the minimum of the KL divergence would be a natural point at which to stop training, yet it does not yield the lowest bias. This is also in contrast to our findings for the bidirectional case, where the quality of the mapping was best in the vicinity of the minimum of the test loss. One possible explanation for these observations is the well-known zero-forcing property^{33} of $\mathcal{L}_{\mathrm{LFEP}}$: it does not encourage the transformed distribution to be mass-covering, which is known to negatively impact the performance of importance-sampling estimators.^{34}