Slow kinetic processes in molecular systems can be analyzed by computing the dominant eigenpairs of the Koopman operator or its generator. In this context, the Variational Approach to Markov Processes (VAMP) provides a rigorous way of discerning the quality of different approximate models. Kernel methods have been shown to provide accurate and robust estimates for slow kinetic processes, but they are sensitive to hyper-parameter selection and require the solution of large-scale generalized eigenvalue problems, which can easily become computationally demanding for large data sizes. In this contribution, we employ a stochastic approximation of the kernel based on random Fourier features (RFFs) to derive a small-scale dual eigenvalue problem that can be easily solved. We provide an interpretation of this procedure in terms of a finite, randomly generated basis set. By combining the RFF approach and model selection by means of the VAMP score, we show that kernel parameters can be efficiently tuned and accurate estimates of slow molecular kinetics can be obtained for several benchmarking systems, such as deca alanine and the NTL9 protein.

## I. INTRODUCTION

The automated extraction of essential information about the thermodynamic and kinetic properties of molecular systems from large-scale computer simulations remains one of the major open problems in computational physics and chemistry to this day. The challenge consists of processing very long time series of high-dimensional data in such a way that *relevant* features of the molecular system are determined automatically, with limited user input or intervention. The precise definition of what is regarded as relevant depends on the context, but in many applications, the determination of *slow* relaxation processes and their associated metastable sets is among the most important features to be detected. This applies, for example, to protein folding, ligand binding, or protein–protein association.^{1–3}

Let us highlight two milestones that have shaped the field in the last two decades: The first was the inception of Markov State Models (MSMs) in the late 1990s and early 2000s,^{4–6} which allowed for the construction of easily interpretable discrete kinetic models from which a wealth of information could be extracted. The second milestone was the variational approach to conformational dynamics (VAC) for reversible systems^{7,8} and its extension, the *variational approach to Markov processes* (VAMP),^{9} which provided a rigorously defined scalar objective function (called the VAMP score) to assess the quality of kinetic models. Combined, these developments have led to a standard workflow for the construction of MSMs, usually consisting of a linear dimensionality reduction using *time-lagged independent component analysis* (TICA),^{10} geometric clustering, as well as MSM estimation and validation.^{11,12} It was also shown that MSMs and the VAC are closely related to the Koopman operator framework^{13,14} and, in particular, the *extended dynamic mode decomposition* (EDMD) algorithm,^{15,16} which have primarily emerged in the analysis of fluid dynamics and related fields. The Koopman framework is therefore a unifying theme for the construction of kinetic models. Extensions of the EDMD algorithm can also be used to approximate the generator of dynamical systems [e.g., the Kolmogorov backward operator for stochastic differential equations (SDEs)], as shown in Ref. 17.

Despite the remarkable success of MSMs, the construction process remains somewhat unsatisfactory, at least from a theoretical perspective. The initial TICA step presupposes that there is a kinetically informative linear subspace of the initial set of descriptors used for MSM building. In order to move beyond this requirement, algorithms based on deep neural networks (DNNs) were introduced in recent years,^{18,19} allowing one to learn in one step the complete non-linear transformation from raw atomic coordinates to low-dimensional features that optimize the VAMP score. Both the MSM construction process (including state definition) and neural networks require a non-linear optimization step, which is typically solved by means of stochastic algorithms. A significant number of hyper-parameters need to be tuned in both cases, such as network architecture, depth, and width for DNNs, or number of cluster centers, number of iterations, convergence threshold, etc. for clustering-based MSMs.

A different avenue that has been considered is to employ reproducing kernels, which have long been known in machine learning^{20,21} as a powerful non-linear model class with rich theoretical properties. Kernel methods have also been demonstrated in many applications to be fairly robust if the available data are only small- to medium-sized. Kernel-based versions of EDMD were suggested in Refs. 22–24 and applied to high-dimensional molecular simulations in Ref. 25. Extracting slow timescales from kernel representations requires the solution of a large-scale generalized eigenvalue problem, where the size of the matrix depends on the number of snapshots. Even for moderate data sizes, the solution to this problem can be challenging. Moreover, although kernel methods usually require only a few hyper-parameters (such as the bandwidth of the hugely popular Gaussian kernel), these need to be tuned accurately, which requires solving the above-mentioned eigenvalue problem many times, which can be prohibitively time-consuming.

In this work, we show that kernel methods can be made tractable for the analysis of molecular kinetics by employing *Random Fourier Features* (RFFs),^{26} a well-known low-rank approximation technique for kernel matrices, to convert the original large-scale eigenvalue problem into a much smaller one that can be easily solved. We derive the associated small-scale matrix problem both for the Koopman operator and its generator and show that this approach can be interpreted as applying EDMD with a specific randomly chosen basis set. We demonstrate that the VAMP score can be effectively used for hyper-parameter tuning of the kernel method. The combination of these ideas leads to an efficient approximation algorithm that is robust with respect to the data size and the stochastic Fourier feature selection. Therefore, our method presents a promising new direction for the automated analysis of molecular kinetics.

The remainder of this paper is structured as follows: In Sec. II, we review elements of the Koopman formalism, the EDMD approximation algorithm, reproducing kernel Hilbert spaces (RKHSs), and Koopman modeling using RKHSs. Then, in Sec. III, we present our main contribution, an approximation algorithm for the Koopman operator or generator based on Random Fourier Features. Further computational details, especially regarding model selection, are discussed in Sec. IV, while applications to benchmarking problems in molecular dynamics (MD) are shown in Sec. V.

## II. THEORETICAL BASICS

We will briefly introduce the stochastic Koopman operator and its generator, as well as numerical methods to estimate these operators and their spectral properties from data.

### A. Stochastic dynamical systems

Throughout this work, we consider a time-homogeneous stochastic process, where *t* is the time and $X_t \in \mathbb{X} \subset \mathbb{R}^d$ is the state of the system at time *t*. In particular, a widely used model in this context is given by *stochastic differential equations* (SDEs) of the form
$$\mathrm{d}X_t = b(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}W_t,$$
where $b$ and $\sigma$ are called the *drift* and *diffusion* of the SDE, and $W_t$ is the $d$-dimensional Brownian motion. The covariance matrix of the diffusion is denoted by $a(x) = \sigma(x)\sigma(x)^\top$.

In molecular dynamics applications, the dynamics are expected to sample the *Boltzmann distribution,*
$$\mu(x) \propto \mathrm{e}^{-\beta V(x)},$$
where $V$ is the potential energy at position $x$ and $\beta^{-1} = k_B T$ is the inverse temperature. Most simulation protocols are built to achieve this goal, i.e., the Boltzmann distribution is invariant, and long time averages converge to spatial averages with respect to $\mu$ (ergodicity). A standard example is *Brownian* or *overdamped Langevin dynamics,*
$$\mathrm{d}X_t = -\gamma^{-1}\,\nabla V(X_t)\,\mathrm{d}t + \sqrt{2\gamma^{-1}\beta^{-1}}\,\mathrm{d}W_t,$$
with friction parameter $\gamma$.
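As a concrete illustration of the overdamped Langevin dynamics above, the following minimal sketch integrates the SDE with the Euler–Maruyama scheme. The double-well potential and all parameter values are illustrative choices, not taken from the text.

```python
import numpy as np

def euler_maruyama(grad_V, x0, gamma=1.0, beta=1.0, dt=1e-3, n_steps=10_000, seed=0):
    """Integrate dX_t = -gamma^{-1} grad V(X_t) dt + sqrt(2/(gamma*beta)) dW_t."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    traj = np.empty((n_steps + 1, d))
    traj[0] = x0
    noise_scale = np.sqrt(2.0 * dt / (gamma * beta))
    for n in range(n_steps):
        traj[n + 1] = (traj[n]
                       - dt / gamma * grad_V(traj[n])
                       + noise_scale * rng.standard_normal(d))
    return traj

# Hypothetical 1D double-well potential V(x) = (x^2 - 1)^2.
grad_V = lambda x: 4.0 * x * (x**2 - 1.0)
traj = euler_maruyama(grad_V, x0=np.array([1.0]))
```

For small time steps, the empirical histogram of such a trajectory approaches the Boltzmann distribution of the chosen potential.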

### B. Koopman operator and generator

Instead of studying individual realizations of the process *X*_{t}, significant progress has been made in recent years by considering an operator-based approach to the statistics of the stochastic process *X*_{t}. For a fixed lag time *t* ≥ 0, the *Koopman operator*^{13} $\mathcal{K}_t$ acts on a function $\varphi: \mathbb{X} \to \mathbb{R}$ by taking the conditional expectation,
$$\mathcal{K}_t\varphi(x) = \mathbb{E}\left[\varphi(X_t)\,\middle|\,X_0 = x\right],$$
i.e., evaluating $\mathcal{K}_t\varphi$ at *x* yields the expectation of *ϕ* after starting the dynamics at *x* and evolving until time *t*. The linear operators $\mathcal{K}_t$ satisfy the semigroup equation $\mathcal{K}_{s+t}=\mathcal{K}_s\mathcal{K}_t$, which implies the existence of another linear operator $\mathcal{L}$, called the *Koopman generator*, such that the following linear differential equation, the *Kolmogorov equation,* holds for the function $\varphi_t = \mathcal{K}_t\varphi$,
$$\frac{\mathrm{d}}{\mathrm{d}t}\varphi_t = \mathcal{L}\varphi_t.$$
For SDEs, the generator is a second-order differential operator,
$$\mathcal{L}\varphi = b \cdot \nabla\varphi + \frac{1}{2}\,a : \nabla^2\varphi,$$
where the colon denotes the Frobenius inner product of two matrices *A* and *B*, $A : B = \sum_{i,j} A(i,j)\,B(i,j)$. For overdamped Langevin dynamics, the generator is given by
$$\mathcal{L}\varphi = -\gamma^{-1}\,\nabla V \cdot \nabla\varphi + \gamma^{-1}\beta^{-1}\,\Delta\varphi.$$
We refer to $\mathcal{K}_t$ and $\mathcal{L}$ jointly as *dynamical operators* for the dynamical system *X*_{t}. If the dynamics are reversible with respect to the invariant distribution *μ*, then the generator $\mathcal{L}$ is symmetric with respect to the inner product
$$\langle f, g \rangle_\mu = \int_{\mathbb{X}} f(x)\,g(x)\,\mathrm{d}\mu(x),$$
and its eigenvalues are real and non-positive, *κ*_{i} ≤ 0. Objects of prime interest for molecular kinetics are the low-lying eigenvalues 0 ≤ −*κ*_{i} ≪ 1, as these encode slow motions corresponding to the metastable behavior encountered in many examples of molecular systems.^{6,27} The connection between close-to-zero eigenvalues and metastability is also apparent in the fact that eigenfunctions of the generator are also eigenfunctions of the Koopman operator at any *t*, with eigenvalue $\lambda_i(t)=\mathrm{e}^{\kappa_i t}$. If |*κ*_{i}| ≪ 1, then the corresponding eigenvalues *λ*_{i}(*t*) decay slowly, as expected for metastable dynamics.

### C. Finite-dimensional approximation

The standard algorithm to compute finite-dimensional data-driven approximations of the Koopman operator is *extended dynamic mode decomposition* (EDMD),^{15} and its adaptation to the generator is then called *gEDMD*.^{17} Both methods require choosing a finite *dictionary* of observable functions {*ϕ*_{1}, …, *ϕ*_{n}}. The projections of the dynamical operators onto the linear span of these functions can be represented by matrices,
$$\mathbf{G}(i,j) = \langle \varphi_i, \varphi_j \rangle_\mu, \qquad \mathbf{A}^t(i,j) = \langle \varphi_i, \mathcal{K}_t\varphi_j \rangle_\mu, \qquad \mathbf{A}^L(i,j) = \langle \varphi_i, \mathcal{L}\varphi_j \rangle_\mu.$$
Given a dataset {*x*_{1}, *x*_{2}, …, *x*_{m+1}} of the process *X*_{t}, the data-based estimators for these matrices are given by
$$\hat{\mathbf{G}}(i,j) = \frac{1}{m}\sum_{l=1}^{m} \varphi_i(x_l)\,\varphi_j(x_l), \qquad \hat{\mathbf{A}}^t(i,j) = \frac{1}{m}\sum_{l=1}^{m} \varphi_i(x_l)\,\varphi_j(x_{l+1}), \qquad \hat{\mathbf{A}}^L(i,j) = \frac{1}{m}\sum_{l=1}^{m} \varphi_i(x_l)\,(\mathcal{L}\varphi_j)(x_l).$$
Note that each data point $x_l$ must be paired with its counterpart $x_{l+1}$, separated by the lag time *t*, in the formula for $\hat{\mathbf{A}}^t$. This is not required, in practice, if a sliding-window estimator is used; see Refs. 12 and 28 for details.

If the dynamics are reversible, the matrix **A**^{L} and its estimator $\hat{\mathbf{A}}^L$ above can be replaced by the following energy-like expressions:^{17}
$$\mathbf{A}^L(i,j) = -\frac{1}{2}\,\langle \nabla\varphi_i,\, a\,\nabla\varphi_j \rangle_\mu, \qquad \hat{\mathbf{A}}^L(i,j) = -\frac{1}{2m}\sum_{l=1}^{m} \nabla\varphi_i(x_l)^\top a(x_l)\,\nabla\varphi_j(x_l), \qquad (14)$$
which guarantees symmetry of **A**^{L} even for finite data.
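The empirical matrices and the resulting generalized eigenvalue problem can be sketched in a few lines of NumPy. The monomial dictionary and the synthetic i.i.d. data below are purely illustrative (for i.i.d. data, the Koopman operator reduces to the projection onto constants, so the only surviving eigenvalue is the trivial one, $\hat\lambda_0 = 1$).

```python
import numpy as np

def edmd_eigenpairs(data, dictionary, lag=1):
    """Estimate G_hat, A_hat^t from a trajectory and solve A v = lam G v."""
    Phi = dictionary(data[:-lag])            # dictionary at current points, (m, n)
    Phi_lag = dictionary(data[lag:])         # dictionary at time-lagged points
    m = Phi.shape[0]
    G = Phi.T @ Phi / m                      # G_hat(i,j)   ~ <phi_i, phi_j>
    A = Phi.T @ Phi_lag / m                  # A_hat^t(i,j) ~ <phi_i, K_t phi_j>
    lam, V = np.linalg.eig(np.linalg.solve(G, A))   # generalized problem via G^{-1} A
    order = np.argsort(-lam.real)
    return lam[order], V[:, order]

# Hypothetical dictionary: monomials 1, x, x^2, x^3 on scalar data.
dictionary = lambda x: np.vander(np.ravel(x), N=4, increasing=True)
data = np.random.default_rng(1).standard_normal(500)
lam, V = edmd_eigenpairs(data, dictionary)
```

Because the constant function is contained in the dictionary, the pair $(\hat\lambda_0, \mathbf{v}_0) = (1, \mathbf{e}_1)$ is reproduced exactly even at finite data.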

### D. Collective variables

In molecular applications, the dynamics are usually described in a reduced set of *N* ≤ *d* coordinates, often called *collective variables* (CVs), such as intramolecular distances or angles. Formally, CVs are given by a smooth mapping $\xi: \mathbb{X} \to \mathbb{R}^N$. Choosing a dictionary $\{\tilde{\varphi}_i\}_{i=1}^{n}$ of functions depending only on CV space, that is, $\tilde{\varphi}_i(x) = \varphi_i(\xi(x))$, does not lead to any conceptual changes to the EDMD approach. When approximating the generator, the gEDMD method is straightforwardly adapted by computing all *x*-derivatives in the definition of the generator using the chain rule. This leads to a particularly elegant reformulation of the symmetric estimator (14) in terms of an effective diffusion matrix,
$$a^\xi(x) = \nabla_x\xi(x)^\top\, a(x)\, \nabla_x\xi(x),$$
where ∇_{x}*ξ* is the *d* × *N*-dimensional Jacobian of the mapping *ξ*. The symmetric estimator (14) then becomes
$$\hat{\mathbf{A}}^L(i,j) = -\frac{1}{2m}\sum_{l=1}^{m} \nabla\varphi_i(\xi(x_l))^\top\, a^\xi(x_l)\, \nabla\varphi_j(\xi(x_l)),$$
where the gradients of the dictionary functions are now taken with respect to the CVs.
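As a minimal NumPy sketch, the effective diffusion matrix can be assembled from the Jacobian; the CV map and the unit diffusion below are hypothetical choices for illustration only.

```python
import numpy as np

def effective_diffusion(jac_xi, a):
    """a_xi = (grad xi)^T a (grad xi), with jac_xi of shape (d, N)."""
    return jac_xi.T @ a @ jac_xi

# Hypothetical CV map xi(x) = (x1 + x2, x3), whose Jacobian is constant.
jac_xi = np.array([[1.0, 0.0],
                   [1.0, 0.0],
                   [0.0, 1.0]])      # d x N with d = 3, N = 2
a = np.eye(3)                        # e.g., unit diffusion in position space
a_xi = effective_diffusion(jac_xi, a)
```

For a state-dependent diffusion $a(x)$, the same contraction is simply evaluated at every data point.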

### E. Reproducing kernel Hilbert space

A critical modeling decision for the (g)EDMD approach is the choice of the dictionary {*ϕ*_{1}, …, *ϕ*_{n}}. Recent works^{22–25} have shown that accurate results, even for large systems, can be obtained using methods based on *reproducing kernel Hilbert spaces* (RKHS) generated by a kernel function *k*. These approaches lead to powerful approximation spaces, yet they only depend on a few model parameters.

An RKHS^{21,29} is a Hilbert space $\mathbb{H}$ of continuous functions on $\mathbb{X}$, with an inner product $\langle\cdot,\cdot\rangle_{\mathbb{H}}$, defined by a symmetric, positive-definite two-argument function *k*(*x*, *y*), called the *kernel*. Each such kernel corresponds to a unique RKHS.^{29} Given *k*, one can define a map from $\mathbb{X}$ into $\mathbb{H}$, called the *feature map*, by
$$\Phi(x) = k(x, \cdot).$$
All functions *f* in the RKHS then satisfy the reproducing property,
$$f(x) = \langle f, \Phi(x) \rangle_{\mathbb{H}}.$$

*Arguably the most widely used kernel is the Gaussian radial basis function (RBF) kernel,*
$$k(x, y) = \exp\left(-\frac{\|x - y\|^2}{2\sigma^2}\right),$$
*with bandwidth parameter* *σ* > 0*. On a periodic domain* [−*L*, *L*]^{d}*, the closely related periodic Gaussian kernel,*
$$k(x, y) = \prod_{i=1}^{d} \exp\left(-\frac{2}{\sigma^2}\,\sin^2\left(\frac{\pi\,(x_i - y_i)}{2L}\right)\right),$$
*analogously generates an RKHS of periodic functions.*

For many popular kernels, including the Gaussian RBF kernel, the associated RKHS is infinite-dimensional and densely embedded into the most relevant function spaces.^{30} Kernels, therefore, provide powerful approximation spaces, reducing the problem of dictionary selection to the choice of a single function, the kernel, and its hyper-parameters, such as the bandwidth for the Gaussian RBF kernel.

### F. RKHS representation of dynamical operators

Given a kernel *k* with RKHS $\mathbb{H}$, the analogs of the matrices **G**, **A**^{t}, and **A**^{L} in Sec. II C are given by linear operators on $\mathbb{H}$; see Refs. 23 and 24 for a derivation. Using a finite dataset, these operators can be estimated after restriction to the *m*-dimensional space $\mathbb{H}_m$ spanned by the feature maps of the data points, which leads to the matrix representations,
$$\mathbf{K}^t\,\mathbf{v}_i = \hat{\lambda}_i(t)\,\mathbf{K}\,\mathbf{v}_i, \qquad (26)$$
$$\mathbf{K}^L\,\mathbf{v}_i = \hat{\kappa}_i\,\mathbf{K}\,\mathbf{v}_i, \qquad (27)$$
where $\mathbf{K}$ is the *m* × *m* Gram matrix with entries $\mathbf{K}(l, l') = k(x_l, x_{l'})$, while $\mathbf{K}^t$ and $\mathbf{K}^L$ are its time-lagged and generator-weighted counterparts.

The linear problems (26) and (27) represent large-scale generalized eigenvalue problems. Solving these problems efficiently for large datasets is one of the central challenges common to most applications of kernel methods. In Sec. III, we present an approach based on a randomized low-rank representation obtained from the Fourier transformation of the kernel function.

## III. LOW-RANK REPRESENTATIONS OF KERNEL EIGENVALUE PROBLEMS

### A. Random Fourier features

*Random Fourier Features* (RFFs) have been introduced in Ref. 26 as a low-rank approximation framework for large kernel matrices. The basis for this approach is Bochner’s theorem,^{31} stating that a translation-invariant kernel *k*, which is normalized to *k*(*x*, *x*) = 1 for all *x*, possesses the following stochastic representation:
$$k(x, y) = \int \mathrm{e}^{\mathrm{i}\omega^\top(x - y)}\,\mathrm{d}\rho(\omega) = \mathbb{E}_{\omega\sim\rho}\left[\mathrm{e}^{\mathrm{i}\omega^\top(x - y)}\right],$$
where *ρ* is a probability measure in frequency space, called the spectral measure. This representation extends to the derivatives of the kernel function, namely
$$\nabla_x k(x, y) = \mathbb{E}_{\omega\sim\rho}\left[\mathrm{i}\omega\,\mathrm{e}^{\mathrm{i}\omega^\top(x - y)}\right]. \qquad (29)$$
For the Gaussian RBF kernel with bandwidth *σ*, the spectral measure is also Gaussian with bandwidth *σ*^{−1}. For the periodic Gaussian kernel, the spectral measure is discrete, supported on all wave vectors $\frac{\pi}{L}k$ for $k\in\mathbb{Z}^d$, with probabilities
$$\rho(k) = \prod_{i=1}^{d} \mathrm{e}^{-\sigma^{-2}}\, I_{k_i}(\sigma^{-2}),$$
where *I*_{k} is the modified Bessel function of the first kind; see Ref. 32. Thus, drawing independent samples from the spectral measure is easily accomplished for many popular kernel functions.
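Bochner’s representation can be checked numerically. The following sketch estimates the Gaussian RBF kernel by Monte Carlo sampling from its Gaussian spectral measure; all point values and the bandwidth are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, p = 2, 1.5, 20_000

# Spectral measure of the Gaussian RBF kernel: omega ~ N(0, sigma^{-2} I).
omega = rng.standard_normal((p, d)) / sigma

x = np.array([0.3, -0.7])
y = np.array([-0.5, 0.2])

# Monte Carlo estimate of k(x, y) = E_rho[ exp(i omega^T (x - y)) ];
# the imaginary part averages to zero, so the cosine suffices.
k_rff = np.mean(np.cos(omega @ (x - y)))
k_exact = np.exp(-np.sum((x - y)**2) / (2 * sigma**2))
```

The Monte Carlo error decays like $p^{-1/2}$, so a few thousand frequencies already reproduce the kernel value to two or three digits.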

### B. Representation of kernel matrices

Now, consider *p* samples $\{\omega_u\}_{u=1}^{p}$ in frequency space, sampled from the spectral measure *ρ*. Replacing expectation values with finite-sample averages, we obtain a low-rank representation of the Gram matrix,
$$\mathbf{K} \approx \mathbf{M}\mathbf{M}^H, \qquad \mathbf{M}(l, u) = \frac{1}{\sqrt{p}}\,\mathrm{e}^{\mathrm{i}\omega_u^\top x_l}, \qquad (32)$$
where $\mathbf{M}\in\mathbb{C}^{m\times p}$ and $\mathbf{M}^H$ denotes the conjugate transpose. The rows of **M** correspond to the data points, and the columns of **M** correspond to the randomly sampled features; the time-lagged counterpart is given by $\mathbf{M}^t(l, u) = p^{-1/2}\,\mathrm{e}^{\mathrm{i}\omega_u^\top x_{l+1}}$. When approximating the generator, we use the representation of derivatives given in (29) to find the analogous matrix **M**^{L}, which can be written compactly using the Frobenius inner product as
$$\mathbf{M}^L(l, u) = \frac{1}{\sqrt{p}}\left[\mathrm{i}\,b(x_l)^\top\omega_u - \frac{1}{2}\,a(x_l) : \omega_u\omega_u^\top\right]\mathrm{e}^{\mathrm{i}\omega_u^\top x_l}. \qquad (33)$$
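The low-rank Gram approximation can be sketched as follows for the Gaussian RBF kernel; data, sizes, and the bandwidth are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
m, d, p, sigma = 200, 2, 5000, 1.0

X = rng.standard_normal((m, d))
omega = rng.standard_normal((p, d)) / sigma   # Gaussian spectral measure

# RFF feature matrix M(l, u) = p^{-1/2} exp(i omega_u^T x_l).
M = np.exp(1j * X @ omega.T) / np.sqrt(p)     # shape (m, p)

# Low-rank approximation of the Gram matrix versus the exact kernel.
K_rff = (M @ M.conj().T).real
sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
K_exact = np.exp(-sq / (2 * sigma**2))
err = np.abs(K_rff - K_exact).max()
```

The diagonal of the approximation is exact by construction (each feature has unit modulus), while the off-diagonal entries carry the usual $p^{-1/2}$ Monte Carlo error.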

### C. Solution of the generalized eigenvalue problems

Inserting the low-rank representation (32) into the eigenvalue problem (26) yields
$$\mathbf{M}^t\mathbf{M}^H\,\mathbf{w}_i = \hat{\lambda}_i(t)\,\mathbf{M}\mathbf{M}^H\,\mathbf{w}_i.$$
Since the number of features typically satisfies *p* ≤ *m*, we can employ a standard trick to express this equivalently as a lower-dimensional eigenvalue problem. By defining **v**_{i} = **M**^{H}**w**_{i}, we can directly verify that **v**_{i} solves the *dual* problem,
$$\mathbf{M}^H\mathbf{M}^t\,\mathbf{v}_i = \hat{\lambda}_i(t)\,\mathbf{M}^H\mathbf{M}\,\mathbf{v}_i, \qquad \mathbf{M}^H\mathbf{M}^L\,\mathbf{v}_i = \hat{\kappa}_i\,\mathbf{M}^H\mathbf{M}\,\mathbf{v}_i. \qquad (37)$$
This *p*-dimensional generalized eigenvalue equation is the central problem to be solved within the context of RFF-based kernel approximation. If *p* is much smaller than *m*, (37) already leads to computational savings compared to (26).

Additional savings can be obtained by exploiting the low-rank structure of the matrices **M**, **M**^{t}, and **M**^{L}. To this end, we apply a whitening transformation, which is widely used within the context of (g)EDMD. Using a truncated singular value decomposition (SVD) of **M**,
$$\mathbf{M} \approx \mathbf{U}\boldsymbol{\Sigma}\mathbf{W}^H,$$
retaining only the leading *r* ≤ *p* components of the SVD, we obtain a linear transformation **T** = **W**Σ^{−1}, which eliminates the matrix on the right-hand sides of (37), leading to the reduced matrices
$$\mathbf{R}^t = \mathbf{T}^H\,\mathbf{M}^H\mathbf{M}^t\,\mathbf{T}, \qquad (40)$$
$$\mathbf{R}^L = \mathbf{T}^H\,\mathbf{M}^H\mathbf{M}^L\,\mathbf{T}. \qquad (44)$$

It is then sufficient to diagonalize these reduced matrices in order to solve (37). If the truncation rank of **M** is significantly smaller than *p*, additional computational savings can be harnessed this way. Algorithm 1 summarizes our solution method in compact form.

**ALGORITHM 1.** RFF-based approximation of the dynamical operators.

Input: data matrix $X=[x_1,\dots,x_{m+1}]\in\mathbb{R}^{d\times(m+1)}$, kernel function $k$ with spectral measure $\rho$, number of features $p$, truncation rule for singular values.
Output: approximate eigenpairs $(\hat{\lambda}_i(t),\hat{\psi}_i)$ or $(\hat{\kappa}_i,\hat{\psi}_i)$ of the dynamical operator.

1: Draw $p$ samples $\{\omega_u\}_{u=1}^{p}$ from the spectral measure $\rho$.
2: Form the matrix $\mathbf{M}$ given by (32).
3: Non-reversible case: form the matrix $\mathbf{M}^t$ or $\mathbf{M}^L$ as in (32) or (33).
4: Compute the SVD of $\mathbf{M}$ and choose the rank $r$ according to the truncation rule: $\mathbf{M}\approx\mathbf{U}\boldsymbol{\Sigma}\mathbf{W}^H$.
5: Form the reduced matrix $\mathbf{R}^t$ or $\mathbf{R}^L$ according to (40) or (44).
6: Compute eigenpairs of the reduced problem $\mathbf{R}^t\mathbf{u}_i=\hat{\lambda}_i(t)\,\mathbf{u}_i$ or $\mathbf{R}^L\mathbf{u}_i=\hat{\kappa}_i\,\mathbf{u}_i$.
7: Transform to the original RFF basis: $\mathbf{v}_i=\mathbf{T}\mathbf{u}_i$, $\hat{\psi}_i(x)=\mathbf{v}_i^H\,\varphi_{\mathrm{RFF}}(x)$.

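The steps of Algorithm 1 (Koopman branch only) can be sketched as follows. The function name and the data are illustrative: a discrete-time Ornstein–Uhlenbeck-like trajectory stands in for MD data, whose Koopman eigenvalues are known to be powers of the per-step decay factor (here 0.9), which provides a sanity check.

```python
import numpy as np

def rff_koopman_eigs(X, lag, sigma, p, svd_tol=1e-4, seed=0):
    """RFF-based estimate of leading Koopman eigenvalues (sketch of Algorithm 1)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Step 1: draw p frequencies from the Gaussian spectral measure.
    omega = rng.standard_normal((p, d)) / sigma
    # Steps 2-3: feature matrices for the data and its time-lagged counterpart.
    M = np.exp(1j * X[:-lag] @ omega.T) / np.sqrt(p)
    Mt = np.exp(1j * X[lag:] @ omega.T) / np.sqrt(p)
    # Step 4: whitening transformation from a truncated SVD of M.
    U, s, Wh = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > svd_tol * s[0]))
    T = Wh.conj().T[:, :r] / s[:r]                     # T = W Sigma^{-1}
    # Steps 5-6: reduced matrix R^t and its eigenpairs.
    Rt = T.conj().T @ (M.conj().T @ Mt) @ T
    lam, u = np.linalg.eig(Rt)
    order = np.argsort(-lam.real)
    # Step 7: transform back to the RFF basis.
    v = T @ u[:, order]
    return lam[order], v

# Illustrative data: x_{n+1} = 0.9 x_n + noise (OU-like, eigenvalues 0.9^k).
rng = np.random.default_rng(1)
x = np.zeros((2000, 1))
for n in range(1999):
    x[n + 1] = 0.9 * x[n] + np.sqrt(0.2) * rng.standard_normal(1)
lam, v = rff_koopman_eigs(x, lag=1, sigma=1.0, p=100)
```

The leading eigenvalue estimate is close to one (the constant mode), and the second eigenvalue approximates the per-step decay factor 0.9.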

### D. Interpretation as (g)EDMD problem

The matrices $\mathbf{M}^H\mathbf{M}$, $\mathbf{M}^H\mathbf{M}^t$, and $\mathbf{M}^H\mathbf{M}^L$ appearing in the dual problem (37) are reminiscent of the empirical (g)EDMD matrices from Sec. II C.^{28} This is not a coincidence. Consider a finite dictionary defined by the randomly sampled plane waves,
$$\varphi_u(x) = \frac{1}{\sqrt{p}}\,\mathrm{e}^{\mathrm{i}\omega_u^\top x}, \qquad u = 1, \dots, p.$$
Up to a constant factor, the matrices $\mathbf{M}^H\mathbf{M}$, $\mathbf{M}^H\mathbf{M}^t$, and $\mathbf{M}^H\mathbf{M}^L$ are precisely the empirical (g)EDMD estimators $\hat{\mathbf{G}}$, $\hat{\mathbf{A}}^t$, and $\hat{\mathbf{A}}^L$ for this dictionary. The RFF approach can therefore be interpreted as (g)EDMD with a randomly generated basis set, where the reduced matrices (40) and (44) are obtained by applying the whitening transformation **T** from Sec. III C from the left and from the right, i.e., $\mathbf{R}^t = \mathbf{T}^H\hat{\mathbf{A}}^t\mathbf{T}$, again up to a constant factor.

### E. Computational effort

Let us break down the computational effort required for Algorithm 1: The SVD of **M** requires $O(mp2)$ operations. Assembling the reduced matrix costs $O(rpm)$ operations for the Koopman operator. In the case of generator approximation, we obtain an additional dependence on the dimension due to the required contraction of derivatives of the kernel function. We can upper-bound the resulting effort as $O(mp(d2+r))$ in the non-reversible case and by $O(m(rpd+r2d2))$ in the reversible case. Note that this bound assumes a dense and state-dependent diffusion field *a*; it reduces to linear dependence on *d* for diagonal diffusions, such as overdamped Langevin dynamics. Finally, the diagonalization of the reduced matrix amounts to $O(r3)$ floating point operations. Comparing this to the full kernel method, we see that the most drastic computational savings will result from being able to choose *p* small compared to *m*, as it reduces a factor $O(m3)$ to $O(mp2)$.

## IV. METHODS

In this section, we will discuss aspects of the practical application of our spectral RFF method and also introduce the systems studied in the results section below. For the numerical examples, we will use the Gaussian RBF kernel or its periodic counterpart for periodic domains.

### A. Model selection

Before we can apply Algorithm 1, we have to choose the number of features *p*, the truncation threshold for singular values, and as many hyper-parameters as required by the kernel function (e.g., the bandwidth for Gaussian RFF kernels). We select these parameters by cross-validating the VAMP score metric on randomly selected subsets of the data, as explained below. The only exception is the truncation threshold for singular values of **M**. Here, we adopt the rule that all singular values smaller than 10^{−4} times the largest singular value are discarded. This choice is based on prior experience and works well in all considered examples.

Our principal model selection tool is the *Rayleigh variational principle*, which states that for reversible systems, the exact leading *K* eigenfunctions are optimizers of the Rayleigh trace,
$$\mathrm{VS} = \max_{\psi_0,\dots,\psi_{K-1}} \sum_{i=0}^{K-1} \langle \psi_i, \mathcal{K}_t\psi_i \rangle_\mu = \sum_{i=0}^{K-1} \lambda_i(t),$$
where the maximum is taken over families of functions that are orthonormal with respect to $\langle\cdot,\cdot\rangle_\mu$. Since this objective is a special case of the *VAMP score* introduced for Koopman operators in Refs. 7 and 9 and for the Koopman generator in Ref. 33, we label it VS in what follows. For the generator, the same variational principle holds, with *λ*_{i}(*t*) replaced by *κ*_{i} and $\mathcal{K}_t$ replaced by $\mathcal{L}$. The variational principle also generalizes to non-reversible systems by considering singular functions instead of eigenfunctions.^{9} However, we only analyze reversible systems in what follows. The VAMP score can be easily estimated upon replacing inner products by their empirical estimators, as shown in Sec. II C. The maximal VAMP score within a finite-dimensional model class can also be straightforwardly computed; see again Ref. 7. In this way, we can simply compare the maximal VAMP score obtained for each choice of hyper-parameters, e.g., the kernel bandwidth *σ* and the number of Fourier features *p*, and choose the best.

To account for the estimation error stemming from the use of finite data and also to avoid over-fitting, we employ a standard cross-validation approach and first compute the optimal functions *ϕ*_{0}, …, *ϕ*_{K−1} for each model class on a random subsample of the data (training data). We then recompute the VAMP score for the *K*-dimensional space spanned by *ϕ*_{0}, …, *ϕ*_{K−1} on the remaining data (test data). This process is repeated *n*_{test} times, where *n*_{test} is either 10 or 20 below, and we choose the model class that optimizes the test score on average. The ratio of training data to test data is always chosen as a 75%/25% split.

### B. Spectral analysis and clustering

The solution of the eigenvalue problem for the reduced matrix **R** provides estimates of the leading eigenvalues $\lambda \u0302i(t)$ or $\kappa \u0302i$, respectively, and their associated eigenfunctions $\psi \u0302i$. The first point to note is that (37) is a complex eigenvalue problem, so the eigenvectors can be scaled by arbitrary complex numbers. Before analyzing the eigenvectors, we simply apply a grid search over complex phase factors to determine the one that minimizes the imaginary part of all eigenvector entries and use the rescaled eigenvectors for further analysis.

From the estimated eigenvalues, we obtain the *implied timescales* defined as
$$t_i = -\frac{t}{\ln \lambda_i(t)},$$
which can be interpreted as the relaxation timescale of the *i*-th eigenmode of the Koopman operator. Since the timescales are independent of the lag time in the absence of projection and estimation errors,^{34} we monitor the convergence of implied timescales with increasing lag time *t* in order to select an optimal lag time.^{11} We also compute the timescales when approximating the generator; in this case, they are simply given by $t_i = -\kappa_i^{-1}$.
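In code, the two timescale formulas and their mutual consistency (via $\kappa_i = \ln\lambda_i(t)/t$) read as follows; the eigenvalues used are illustrative numbers.

```python
import numpy as np

def implied_timescales(eigvals, lag):
    """t_i = -lag / ln(lambda_i(lag)) for Koopman eigenvalues 0 < lambda_i < 1."""
    return -lag / np.log(np.asarray(eigvals, dtype=float))

def generator_timescales(kappas):
    """t_i = -1 / kappa_i for generator eigenvalues kappa_i < 0."""
    return -1.0 / np.asarray(kappas, dtype=float)

# Consistency check: with kappa_i = ln(lambda_i(t)) / t, both formulas agree.
lag = 2.0
lam = np.array([0.9, 0.5])
kap = np.log(lam) / lag
ts_koopman = implied_timescales(lam, lag)
ts_generator = generator_timescales(kap)
```

Slow processes (eigenvalues close to one) map to long timescales, which is why timescale convergence plots are read on a logarithmic scale in practice.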

^{11}We also compute the timescales when approximating the generator; in this case, they are simply given by

^{35}to assign each data point

*x*

_{l}to each metastable set

*q*with a degree of membership

*χ*

_{q}(

*x*

_{l}), which is a number between zero and one. We then identify data point

*x*

_{l}as part of metastable set

*q*if

*χ*

_{q}(

*x*

_{l}) ≥ 0.6, and the remaining data points are treated as transition states.
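The hard-assignment step can be sketched as follows. The membership matrix χ would come from a PCCA+ implementation (e.g., in an MSM software package); here it is filled with illustrative values.

```python
import numpy as np

def assign_states(chi, threshold=0.6):
    """Assign each point to argmax_q chi_q if the maximal membership reaches
    the threshold; otherwise label it -1 (transition state)."""
    chi = np.asarray(chi)
    best = chi.argmax(axis=1)
    return np.where(chi.max(axis=1) >= threshold, best, -1)

# Illustrative memberships for 4 data points and 2 metastable sets.
chi = np.array([[0.95, 0.05],
                [0.55, 0.45],
                [0.30, 0.70],
                [0.50, 0.50]])
labels = assign_states(chi)   # -> [0, -1, 1, -1]
```

Points whose maximal membership falls below the 0.6 threshold are excluded from all metastable sets, matching the treatment of transition states in the text.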

### C. Systems and simulation setups

The first system is a two-dimensional model problem, diffusion in the *Lemon-Slice potential*,^{36} given in polar coordinates as
$$V(r, \varphi) = \cos(4\varphi) + 10\,(r - 1)^2, \qquad (48)$$
which features four metastable wells along the polar angle. We generate trajectory data by numerically integrating the overdamped Langevin dynamics with time step Δ*t* = 10^{−3}. Friction *γ* and inverse temperature *β* are both set to one. Reference implied timescales are obtained from a Markov state model based on a *k*-means discretization of the two-dimensional state space, using 50 discrete states and *m* = 10^{5} data points.

The second system is molecular dynamics simulation data of the alanine dipeptide in explicit water at a temperature of *T* = 300 K, employing the Langevin thermostat. We generated a total of 1 × 10^{6} data points at 1 ps time spacing. Reference implied timescales were computed by a Markov state model in the well-known two-dimensional space of backbone dihedral angles *ϕ* and *ψ*, using a 30 × 30 box discretization and all 1 × 10^{6} data points.

The third example is molecular dynamics simulation data of the deca alanine peptide in explicit water at a temperature of *T* = 300 K; see Ref. 37 for a description of the simulation setup. The data comprises a total of 4 × 10^{6} time steps at 1 ps time spacing. As reference values, we refer to the MSM analysis presented in Ref. 38, which is built on a 500-state *k*-means discretization after applying time-lagged independent component analysis (TICA)^{10} in the space of the peptide’s 16 backbone dihedral angles.

Finally, we re-analyze the 39-residue protein NTL9, which was simulated on the Anton supercomputer by D. E. Shaw Research; see Ref. 39 for details. The data comprise around 1.2 × 10^{6} frames, corresponding to a time spacing of Δ*t* = 2 ns. As a reference, we apply linear TICA to the set of 666 minimal heavy atom inter-residue distances. We also rank these distances according to the fraction of simulation time during which a contact is formed, i.e., the distance does not exceed 0.35 nm (see Ref. 38), and calculate TICA models using only the first 20, 100, 200, 300, 400, 500, or all 666 of these distances.

## V. RESULTS

### A. Lemon-Slice potential

We start by analyzing the Lemon-Slice potential (48), shown in Fig. 1(a), which features four metastable states corresponding to the wells of the potential energy. We use this system as a first test case for approximating the generator $L$ due to the availability of all relevant quantities in analytical form.

Figure 1(b) shows the VAMP score as a function of the bandwidth for a selection of data sizes *m* and feature numbers *p*. We see that while *m* = 1000 data points are not sufficient, for *m* = 5000 data points, the score stabilizes for a range of bandwidths *σ* ∈ [0.3, 0.8] and already attains optimal values for very small *p*. We confirm this finding by plotting the first three non-trivial eigenvalues of the generator, for *m* = 5000 and *p* = 50, as a function of the bandwidth in Fig. 1(c). Indeed, accurate estimates for eigenvalues can be obtained in the suggested regime of bandwidths. After calculating the eigenvectors of the reduced matrix for *σ* = 0.4 and applying the PCCA analysis, we see that the metastable structure is perfectly recovered; see Fig. 1(d). Thus, the VAMP score reliably guides the hyper-parameter search toward a very efficient approximation of the leading spectral components of this system.

### B. Alanine dipeptide

For alanine dipeptide, we approximate the Koopman operator with lag time *t*, using the periodic Gaussian kernel on the well-known two-dimensional reaction coordinate space given by the dihedral angles *ϕ* and *ψ*. Figure 2(a) shows the familiar free energy landscape of the system in the reaction coordinate space. For all RFF-based approximations, we downsample the dataset to *m* = 20 000 points, corresponding to a time spacing of 50 ps. We apply the same model selection protocol as before and find that a range of kernel bandwidths *σ* ∈ [0.4, 1.0] can be stably identified as optimal in terms of the VAMP score. We observe in Fig. 2(b) that again, a small number of Fourier features, *p* = 50, is sufficient to arrive at an optimal VAMP score. We also confirm that the optimal parameter regime is stable across different lag times. Note that the VAMP score necessarily decreases with increasing lag time, as all eigenvalues but *λ*_{0}(*t*) decay exponentially with *t*. As shown in Fig. 2(c), the optimal range of bandwidths indeed allows for accurate estimation of the three leading implied timescales of the system. Further, we also verify that the metastable decomposition of the dihedral space into four states is correctly recovered using the RFF model; see Fig. 2(d). The dramatic rank reduction for the kernel matrix achieved by using a small number of Fourier features allows us to easily compute eigenvalues and test scores for a broad range of parameters using *m* = 20 000 data points in real space. Solving the same problem by means of standard methods would be significantly more costly if the same data size was used.

### C. Deca alanine

We use the deca alanine example to demonstrate that, by means of low-rank kernel methods, generator models can be efficiently learned on a reaction coordinate space that is more than just one- or two-dimensional and that dynamical hypotheses can be tested this way. We project the system onto the space of its 16 interior backbone torsion angles. The generator of full molecular dynamics is not readily available in closed form. It is well known, however, that if observed only in a subset of position space, many systems driven by thermostatically controlled molecular dynamics behave like reversible overdamped Langevin dynamics at long timescales.^{41} This can be rigorously shown for underdamped Langevin dynamics by using the inverse friction constant as a re-scaling of time, leading to overdamped dynamics in position space in the large friction limit.

Under this hypothesis, the projected generator of Sec. II D applies with an effective diffusion matrix proportional to $\nabla\xi^\top\nabla\xi$, where ∇*ξ* is the Jacobian of the mapping from Euclidean coordinates to the peptide’s backbone dihedral angles, which can be evaluated analytically. Note that this approach can be seen as a dynamical hypothesis: we choose to model the position space dynamics as an overdamped Langevin process and test if its projection onto the dihedral angle space can recover all relevant features of the full MD simulation.

We find in Fig. 3(a) that a kernel bandwidth of *σ* ∈ [2.0, 8.0] leads to an optimal VAMP score for the projected generator. Using about *m* = 1000 data points in real space and *p* = 300 Fourier features is sufficient to obtain stable results, rendering the calculation highly efficient. The optimality of this regime of bandwidths is confirmed by plotting the first three non-trivial eigenvalues of the generator, as shown in Fig. 3(b). We note that these eigenvalues differ from those of the reference MSM by a constant factor. This is due to the re-scaling of time inherent in assuming overdamped Langevin dynamics in position space, rendering the effective dynamics too fast. However, we also see in Fig. 3 that all slow eigenvalues are indeed re-scaled by the same factor. Moreover, the PCCA analysis of the slowest eigenvectors obtained for the optimal RFF model also recovers the metastable states of the deca alanine system, corresponding to the folded and unfolded states as well as two intermediates. The effective overdamped Langevin model learned by the kernel method thus retains all major features of the original molecular dynamics, up to the re-scaling of time.

### D. NTL9

We use the NTL9 dataset to demonstrate once again the ability of the proposed method to efficiently and robustly analyze large datasets. We downsample the available trajectory data to around *m* = 15 000 data points at 200 ns time spacing. As mentioned earlier, *m* = 15 000 is still a significant data size for kernel methods. We project the MD simulations onto the 666-dimensional space of minimal heavy atom inter-residue distances. It is known from previous publications that these coordinates capture the folding process of NTL9 very well.^{38,42} Indeed, just using linear TICA on the top-ranked 300 distances allows for an accurate estimation of the slowest implied timescale, *t*_{1}, associated with the folding process; see the top green line in Fig. 4(b).

Figure 4(a) shows that using Gaussian bandwidth parameters *σ* ∈ [10, 50] can be identified as optimal in terms of the VAMP score, again across different lag times. In Fig. 4(b), we first compare the slowest timescale *t*_{1} obtained for *σ* = 15 as a function of the lag time for different feature numbers *p* (blue lines). We see that already for *p* ≥ 300, satisfactory convergence of *t*_{1} with *t* can be obtained, reducing the computational cost by about a factor of two thousand compared to the full kernel eigenvalue problem. The folded and unfolded states of the protein can also be identified correctly using these parameters, see Fig. 5. Interestingly, timescale convergence improves with *p* in about the same way that the performance of TICA improves as more and more of the ranked distances are added. However, we show that the kernel method is more robust with respect to the selection of input coordinates by randomly selecting *d* = 50 out of the 666 distances and recomputing both the VAMP score and implied timescales for different bandwidths, keeping the number of Fourier features fixed at *p* = 300. We see in Figs. 4(a) and 4(b) that both the optimal value of the VAMP score as well as the timescale estimates remain stable across random selections of the input distances [magenta lines in panel (a), red lines in panel (b)]. The convergence of timescales is, therefore, mainly dependent on the number of Fourier features *p*, which can be automatically tuned using the VAMP score, and not on the selection of input coordinates. Moreover, we also find that under random distance selection, the optimal regime of kernel bandwidths changes as the dimension of the input coordinate space becomes significantly smaller. This is reliably reflected by the behavior of the VAMP score; see again Fig. 4(a).

## VI. CONCLUSIONS

We have introduced a new approach to extracting the dominant eigenvalues of dynamical operators associated with stochastic processes, corresponding to the slow-timescale dynamics in molecular kinetics. Building on kernel representations of the Koopman operator and generator, our main novelty is a dual low-rank representation of the kernel matrices using random Fourier features. We have derived the corresponding reduced eigenvalue problem and provided an interpretation of the method in terms of a random dictionary of plane waves. Using four examples of increasing complexity, we have shown that the hyper-parameters of the method can be tuned very effectively using the VAMP score. Moreover, we have shown that only a few hundred Fourier features were sufficient to obtain accurate and robust results for all systems considered. This finding allowed us to rapidly scan kernel hyper-parameters for data sizes in the tens of thousands on coordinate spaces with dimensions up to several hundred. The computational savings compared to the full eigenvalue problem, measured in floating point operations, amounted to three to four orders of magnitude.
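The low-rank structure underlying these savings can be illustrated directly: the inner products of the random feature map approximate the full kernel matrix, K ≈ ΦΦᵀ, with the approximation improving as the number of features *p* grows. A small self-contained check on toy data (all names and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data and the exact Gaussian kernel matrix (m x m).
m, d, sigma = 300, 5, 2.0
X = rng.normal(size=(m, d))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists / (2.0 * sigma**2))

def rff_map(X, sigma, p):
    """Feature map whose Gram matrix approximates the Gaussian kernel."""
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], p))
    b = rng.uniform(0.0, 2.0 * np.pi, size=p)
    return np.sqrt(2.0 / p) * np.cos(X @ W + b)

# The rank-p surrogate K ~ Phi Phi^T improves as p grows:
errors = {}
for p in [10, 100, 1000]:
    Phi = rff_map(X, sigma, p)
    errors[p] = np.linalg.norm(K - Phi @ Phi.T) / np.linalg.norm(K)
    print(p, round(errors[p], 3))
```

Downstream computations then only ever touch *p* × *p* matrices instead of the full *m* × *m* kernel matrix, which is the source of the reported reduction in floating point operations.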

The main promise of our method is the ability to analyze large datasets using a very generic and, in fact, randomly generated basis set. The success of the method depends on the number of required Fourier features. Future research will need to test whether this number indeed remains small for larger systems, which would make the method scalable beyond the benchmarking examples studied here. Combining our approach with re-weighting techniques would allow the method to be applied to enhanced sampling simulation data. It will also be interesting to see whether kernel functions with more specific properties can be used, for instance, kernels incorporating more detailed symmetries than translational invariance alone, kernels acting on mixed types of domains (periodic and non-periodic), or kernels associated with Sobolev spaces, which have rich analytical properties. Concerning the theoretical analysis, it is known that the timescale separation between slow and fast dynamics expected for metastable systems induces an approximate low-rank structure of the Koopman operator. Our results suggest that this structure can be captured by a small number of random features; providing a more detailed analysis of this heuristic argument presents another future research direction.

## ACKNOWLEDGMENTS

The authors thank D. E. Shaw Research for providing the molecular dynamics simulation data for NTL9.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

**Feliks Nüske**: Conceptualization (lead); Data curation (lead); Formal analysis (equal); Investigation (equal); Methodology (lead); Software (lead); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (lead). **Stefan Klus**: Conceptualization (supporting); Formal analysis (equal); Investigation (equal); Methodology (supporting); Validation (supporting); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (supporting).

## DATA AVAILABILITY

The methods presented in this paper are available as part of the KoopmanLib library at https://github.com/fnueske/KoopmanLib. The complete data and code to reproduce the examples shown above are available from a public repository at https://doi.org/10.5281/zenodo.8036525, with the exception of the raw molecular dynamics simulation data, which can be obtained directly from the authors or, in the case of NTL9, from the copyright owners. PCCA decompositions and all Markov state model results were computed using the deeptime library.^{43}

## APPENDIX: DERIVATION OF KERNEL EIGENVALUE PROBLEMS

Following Refs. 23 and 24, we compute the matrix elements of the operators $\hat{G}, \hat{A}_t, \hat{A}_L$ on the finite-dimensional space $\mathbb{H}_m$ with respect to its canonical basis $\{\Phi(x_l)\}_{l=1}^{m}$. We write $\nabla_1$ for the nabla operator to indicate differentiation with respect to the first variable. To get from the second to the third line, we used the derivative reproducing property, which holds for all multi-indices $\alpha$ and all RKHS functions $f$ as soon as the kernel is at least $2|\alpha|$-times continuously differentiable in both arguments; see Ref. 20. One factor of the kernel matrix $\mathbf{K}_X$ can then be canceled out, which leaves us with (26). In the reversible case, we replace the operator $\hat{A}_L$ with a symmetric bilinear form; again a factor of $\mathbf{K}_X$ cancels, which leads to (27).
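For completeness, the derivative reproducing property invoked here is the standard RKHS result (see, e.g., Ref. 20), which in the notation of this appendix reads:

```latex
% Derivative reproducing property: if the kernel k is at least
% 2|alpha|-times continuously differentiable in both arguments, then
% for every RKHS function f and multi-index alpha:
\partial^{\alpha} f(x) \;=\; \left\langle f,\; \partial_{1}^{\alpha} k(x, \cdot) \right\rangle_{\mathbb{H}}
```

Setting $\alpha = 0$ recovers the ordinary reproducing property $f(x) = \langle f, k(x, \cdot) \rangle_{\mathbb{H}}$.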

## REFERENCES

*An Introduction to Markov State Models and Their Application to Long Timescale Molecular Simulation*, Advances in Experimental Medicine and Biology Vol. 797

*Scattered Data Approximation*

*Support Vector Machines*

*Advances in Neural Information Processing Systems 20*

*Introduction to Gaussian Processes*, NATO ASI Series F: Computer and Systems Sciences Vol. 168

*Free Energy Computations: A Mathematical Perspective*