Methods derived from the generalized quantum master equation (GQME) framework have provided the basis for elucidating energy and charge transfer in systems ranging from molecular solids to photosynthetic complexes. Recently, the nonperturbative combination of the GQME with quantum-classical methods has resulted in approaches whose accuracy and efficiency exceed those of the original quantum-classical schemes while offering significant accuracy improvements over perturbative expansions of the GQME. Here, we show that, while the size of the non-Markovian memory kernel required to propagate the GQME scales quartically with the number of subsystem states, the number of trajectories required scales at most quadratically when using quantum-classical methods to construct the kernel. We then present an algorithm that allows further acceleration of the quantum-classical GQME by providing a way to selectively sample the kernel matrix elements that are most important to the process of interest. We demonstrate the utility of these advances by applying the combination of Ehrenfest mean field theory with the GQME (MF-GQME) to models of the Fenna-Matthews-Olson (FMO) complex and the light harvesting complex II (LHCII), with 7 and 14 states, respectively. This allows us to show that the MF-GQME is able to accurately capture all the relevant dynamical time scales in LHCII: the initial nonequilibrium population transfer on the femtosecond time scale, the steady state-type trapping on the picosecond time scale, and the long time population relaxation. Remarkably, all of these physical effects spanning tens of picoseconds can be encoded in a memory kernel that decays only after ∼65 fs.

Theoretical modeling of processes ranging from energy transfer in photosynthetic light harvesting complexes and organic electronics to proton coupled electron transfer requires the development of methods that allow for the accurate and efficient calculation of the dynamics of open quantum systems, i.e., a subsystem of interest coupled to an external environment. Generalized quantum master equations (GQMEs) provide a formally exact way of rewriting the equation of motion of a set of degrees of freedom of particular interest via the use of the appropriate projection operator.1–3 For example, in open quantum systems, the projection operator is frequently chosen to encompass all the states of the electronic subsystem, although this choice need not align with simple partitions at the level of the Hamiltonian, i.e., the projector could contain only a subset of electronic states or even include some nuclear degrees of freedom. The effect of the remaining degrees of freedom on the dynamics of those of interest is encoded through the memory kernel.1,2,4 However, evaluating the memory kernel requires overcoming the complications that arise from the presence of the projected propagator.

A common approach to avoid the projected propagator is to treat the memory kernel using second order perturbation theory. This leads to methods including Markovian and non-Markovian Redfield theory and its variants,5–8 the noninteracting blip approximation (NIBA),9,10 Förster theory,11 and the polaronic quantum master equation,12–15 which have been used to treat a wide variety of multistate electronic energy transfer systems. In these approaches, the memory kernel depends only on correlation functions of the isolated system and bath and hence can be straightforwardly generated even for systems with many subsystem states.16,17 However, these approaches are limited in their applicability to systems where the perturbative parameter remains small.

More recently, it has been shown that the projected propagator can be exactly removed by employing the Dyson identity, leading to a representation of the full (nonperturbative and non-Markovian) memory kernel in terms of unprojected correlation functions of the system and bath.18 This allows one to gain accurate results over a much wider range of regimes than perturbative master equations, albeit at the expense of requiring cross-correlation functions of system and bath operators to generate the memory kernel. Evaluating these cross-correlation functions using trajectory-based quantum-classical approaches with the GQME formalism has been shown to give considerable increases in both accuracy and efficiency compared to using the same quantum-classical method alone.19–25 

However, while these studies showed the efficacy of such a combination in systems containing two subsystem states, for a system with Ns subsystem states, the reduced density matrix (RDM) has Ns² elements and hence the size of the memory kernel needed to propagate it grows as Ns⁴. One might therefore imagine that combining quantum-classical methods with the GQME would be limited to problems with small numbers of subsystem states. Here, we demonstrate that for quantum-classical methods that treat the subsystem using wavefunctions, e.g., Ehrenfest mean field theory (MFT)26–28 and fewest switches surface hopping (FSSH),29–31 the full memory kernel can be obtained using only O(Ns²) trajectories.

We then show how, in cases where one is interested in particular (or a small set of) subsystem initial conditions, one can further improve the efficiency by focusing resources on accurately capturing the matrix elements of the memory kernel that most strongly determine the evolution of a given initial condition. We achieve this by noting that, while the fully converged memory kernel contains the information necessary to propagate the GQME from any subsystem initial condition contained in the projector, often many of these may not be of interest. Here, we introduce an algorithm that exploits this observation to identify and selectively converge the kernel elements that are most important to capture the relaxation from a given subsystem initial condition. This allows one to minimize the computational effort expended on the elements that only exert a minor influence on the process of interest.

We then illustrate the scalability of the quantum-classical GQME to multistate systems by applying the mean field GQME (MF-GQME) method21 to treat the 7-state Fenna-Matthews-Olson (FMO) complex32,33 and the 14-state light harvesting complex II (LHCII).34–36 We show that the MF-GQME gives quantitative accuracy in regimes where direct MFT and modified Redfield theory fail, while remaining less expensive than a direct application of MFT even in these larger systems. This development thus opens the door to the application of the quantum-classical GQME approaches to multistate systems coupled to atomistic environments that lie beyond linear system-bath coupling and harmonic environments.22 

To show how the GQME formalism can be efficiently used with trajectory-based quantum dynamics methods in multistate systems, here, we give the general expression for the memory kernel for a system with arbitrary system-bath coupling. By analyzing the form of this general expression, we demonstrate that only O(Ns²) trajectories are required to calculate the memory kernel in the MF-GQME method in a system with Ns subsystem states.

To begin, we consider a Hamiltonian of the general form,

Ĥ = Ĥs + Ĥb + Ĥsb,   (1)

where Ĥs is the isolated subsystem Hamiltonian, Ĥb is the isolated bath Hamiltonian, and Ĥsb contains all the terms that couple the system and the bath. For any Ĥsb, one can write it such that it is in the form of a sum of direct products of system and bath operators,

Ĥsb = ∑j=1^Nsb Ŝj ⊗ Γ̂j,   (2)

where Ŝj is a pure subsystem operator, Γ^j is a pure bath operator, and Nsb is the number of terms in the sum.

Here, we consider initial conditions for the full system density operator ρ̂(0) that are of the spectroscopic form, ρ̂(0) = ρ̂b^eq ρ̂s(0), where ρ̂s(0) encodes the initial state of the subsystem and ρ̂b^eq is the equilibrium bath density operator, ρ̂b^eq = e^{−βĤb}/Trb{e^{−βĤb}}. While this choice of initial condition simplifies the expressions for the memory kernel, we emphasize that the scaling arguments presented here do not depend on this choice and are straightforward to generalize to any factorizable initial condition.

The quantity of interest is the RDM, ρ^s(t)=Trb{ρ^(t)}, from which any subsystem observable, such as the population relaxation or electronic spectra, can be generated. In a given basis, one can write the RDM as

(3)

where the Ân ≡ |α⟩⟨α′| form a set of Ns² subsystem operators that span the subsystem Liouville space and [ρs]n denotes a particular matrix element of the RDM. These matrix elements can be expressed as quantum mechanical expectation values,

(4)
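To make the flattened Liouville-space indexing used throughout concrete, a minimal helper is sketched below; the row-major convention is an illustrative choice (any fixed ordering of the pairs works equally well) and is not taken from the text.

```python
def liouville_index(alpha, alpha_p, n_s):
    """Map the subsystem operator |alpha><alpha'| to a flattened Liouville
    index n in {0, ..., Ns**2 - 1} (row-major convention, for illustration)."""
    return alpha * n_s + alpha_p

def liouville_pair(n, n_s):
    """Inverse map: recover (alpha, alpha') from the flattened index n."""
    return divmod(n, n_s)
```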

In the Mori-Nakajima-Zwanzig formalism, the dynamics of the RDM can be obtained by integrating the GQME,1–4,16

(5)

where ρs(t) is a row vector with Ns² time-dependent elements, ρ̇s(t) = (d/dt)ρs(t), and X and K(t) are matrices of size Ns² × Ns². The static matrix, with elements Xmn = i Trs{Âm Ls Ân}, ensures that the GQME exactly recovers the free subsystem evolution when the memory kernel, K(t), is zero, i.e., when the subsystem and bath do not interact. The memory kernel, which encodes the effect of the bath on the subsystem dynamics, can be constructed using correlation functions in the full system and bath Hilbert space. Using the Dyson decomposition of the propagator,37 one obtains18,23

(6)

Here, K3 and K1 are auxiliary kernels, which can be obtained using one’s choice of dynamical method, and the subscripts 1 and 3 are used to be consistent with earlier work.18–23 For any system-bath coupling, K3 and K1 can be written as linear combinations of correlation functions of system and bath operators of the form (see  Appendix A),

(7)
(8)

Here, like the memory kernels, c(j,±), q3(j,±)(t), and q1(jk,±)(t) are matrices of size Ns² × Ns². The transformation matrices c(j,±) are time-independent and simple to evaluate analytically, with their elements given by

(9)

where [·, ·] denotes the commutator and [·, ·]+ the anticommutator. The matrix elements of the correlation functions q3(j,±)(t) and q1(jk,±)(t) are (see  Appendix A)

(10)
(11)

Equations (7) and (8) provide a formal basis for the calculation of the memory kernel: one evaluates c(j,±) exactly, calculates q3(j,±)(t) and q1(jk,±)(t) using one's choice of exact or approximate dynamical method, combines these quantities into the auxiliary kernels K1(t) and K3(t) via Eqs. (7) and (8), and then obtains the full memory kernel using Eq. (6) (see Appendix B). In Sec. II A, we use these expressions to analyze the scaling with the number of subsystem states.

In the MF-GQME method, one approximates the correlation functions defined in Eqs. (10) and (11) using MFT (see  Appendix C). The memory kernel can then be used to solve the GQME for the dynamics of the RDM, given by Eq. (5). In practice, the computational cost of generating the full memory kernel using Eqs. (6)–(8) and integrating the GQME is negligible compared to the cost of generating the MFT trajectories necessary to calculate K3 and K1. The computational cost is therefore determined by the number of MFT trajectories required to evaluate q3(j,±)(t) and q1(jk,±)(t), and how this cost scales with the number of subsystem states.

The number of correlation functions that need to be calculated to evaluate q3(j,±)(t) and q1(jk,±)(t) using Eqs. (10) and (11) can at first appear formidable, as the indices m and n in these equations each run over Ns² elements. This might lead one to erroneously conclude that the MF-GQME method scales as O(Ns⁴), which would severely limit its applicability to problems with many subsystem states. However, closer inspection of these expressions in the context of MFT dynamics reveals that one only needs to sample the distinct electronic initial conditions, labeled by the index m, to calculate all the necessary correlation functions. This is because the MFT trajectories generated from a given electronic initial condition, Âm, provide the time-dependent observables necessary to calculate all of the correlation functions corresponding to that initial condition. Consequently, since only Ns² electronic initial conditions need to be sampled, only O(Ns²) quantum-classical trajectories are required. In practice, for MFT, the number of initial conditions can be shown to be Ns(Ns + 1)/2 by following the procedure outlined in Appendix D, which exploits the fact that the correlation functions corresponding to a given off-diagonal initial condition and its Hermitian conjugate can be obtained from the same set of trajectories.
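As a concrete illustration of this counting, the sketch below enumerates one representative per unordered pair (α, α′), which suffices because an off-diagonal operator and its Hermitian conjugate can share trajectories (see Appendix D); the function name is illustrative.

```python
from itertools import combinations_with_replacement

def kernel_level_initial_conditions(n_s):
    """Distinct electronic initial conditions |alpha><alpha'| that must be
    sampled with MFT: one per unordered pair (alpha, alpha'), i.e.,
    Ns*(Ns+1)/2 in total."""
    return list(combinations_with_replacement(range(n_s), 2))

print(len(kernel_level_initial_conditions(7)))   # 28 for the FMO model
print(len(kernel_level_initial_conditions(14)))  # 105 for the LHCII model
```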

In general, evaluating the forces dominates the overall cost, especially in atomistic systems or systems with a large number of bath degrees of freedom. The calculation of the required correlation functions scales as O(Ns⁴) and the integration of the GQME as O(Ns⁶), which could dominate the cost for large Ns; however, even for the systems studied here, which use very simple forms of the bath, the cost of integrating the GQME is negligible.

An additional concern regarding the scalability of the MF-GQME approach comes from the amount of memory required to store all elements of q3,mn(j,±)(t) and q1,mn(jk,±)(t). For instance, the number of time-dependent elements in q1,mn(jk,±)(t) can be calculated as the product of the Ns² initial electronic conditions Âm and Nsb bath operators Γ̂j measured at t = 0 with the Ns² electronic operators Ân(t) and Nsb bath operators Γ̂k(t) measured at finite times. This corresponds to O(Ns⁴ × Nsb²) time-dependent elements that need to be stored in memory, and hence one might expect the amount of available memory to be a significant limitation. In practice, however, this problem can be easily circumvented because q3(j,±)(t) and q1(jk,±)(t) do not themselves need to be stored in memory. Instead, one can evaluate their contributions to K3(t) and K1(t) on the fly using Eqs. (7) and (8), as sketched below, so that only K3(t) and K1(t) need to be stored. In addition, because not all of the matrix elements are independent, there are only Ns²(Ns² + 1)/2 unique elements in each, with the remainder being complex conjugates of these elements.18 The amount of memory required to store the kernels is therefore proportional to Ns²(Ns² + 1) × Nsteps, where Nsteps is the number of time steps before the kernel decays. In practice, when the memory kernel decays quickly to zero, Nsteps is a small number, meaning that negligible storage is required.
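The accumulate-and-discard strategy can be sketched as follows. Since Eqs. (7) and (8) are not reproduced here, the contraction below (a single left-multiplication by a constant c matrix) is only a schematic stand-in for the actual linear combinations.

```python
import numpy as np

def accumulate_auxiliary_kernels(q_batches):
    """Schematic on-the-fly evaluation of Eqs. (7) and (8): each batch yields
    the q3 and q1 blocks for one (j, k, +/-) combination together with the
    corresponding constant c matrix; the contribution is added to K3 and K1,
    and the q blocks are then discarded. q blocks have shape (n_t, d, d)."""
    K1, K3 = None, None
    for c, q3_block, q1_block in q_batches:
        contrib3 = np.einsum('ab,tbc->tac', c, q3_block)   # stand-in contraction
        contrib1 = np.einsum('ab,tbc->tac', c, q1_block)
        K3 = contrib3 if K3 is None else K3 + contrib3
        K1 = contrib1 if K1 is None else K1 + contrib1
    return K1, K3   # only K1(t) and K3(t) persist in memory
```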

Once the memory kernel has been calculated, solving the GQME, Eq. (5), provides access to the subsystem dynamics for any factorizable initial condition of the subsystem RDM. However, in many cases, one is only interested in a particular initial condition, or a set of them. For example, one may be interested in the relaxation dynamics resulting from the initial excitation of a particular chromophore in a molecular system consisting of multiple chromophores. Here it is important to make a distinction between the initial condition of interest at the RDM level and the electronic initial conditions that need to be sampled in the MFT calculation of the memory kernel. At the memory kernel level, all separable initial conditions (e.g., initial excitation of each chromophore and coherence between chromophores) need to be sampled to construct the full memory kernel. It is this complete sampling of the initial conditions at the memory kernel level that ensures that one can use Eq. (5) to propagate the RDM dynamics from any separable initial condition.

However, as in the case mentioned above, suppose that one is only interested in propagating the RDM starting from one particular initial condition (or a small subset of initial conditions) in a system with a large number of subsystem states. In this case, there may be many pathways that do not play a role in the relaxation dynamics from the RDM initial condition of interest. Hence, it is clearly unnecessary to have accurate knowledge of the memory kernel elements that describe these unimportant relaxation pathways.

One can exploit this realization by focusing resources on converging the elements of the memory kernel that are most important for describing the relaxation process of interest. When the memory kernel elements are generated using MFT, this corresponds to identifying and selectively converging the most important correlation functions from the sets {q3(j,±)(t)} and {q1(jk,±)(t)} [defined in Eqs. (10) and (11)], which give rise to the memory kernel. As such, here we present an algorithm that allows one to use fewer trajectories to generate the less important correlation functions while minimizing the resulting loss in accuracy for the RDM.

Any algorithm that allows for the selective convergence of the memory kernel elements should satisfy certain requirements. In particular, after specifying the RDM-level initial condition that one wishes to propagate, the algorithm should provide a prescription for apportioning a fixed total number of quantum-classical trajectories, Ntot, among the different initial conditions that need to be sampled at the memory kernel level. Specifically, the algorithm should provide a set of normalized weights, w, that determine the portion of trajectories wmNtot assigned to a given memory kernel level initial condition, Am. These weights should be chosen such that, when the time-evolved RDM is propagated from the specified initial condition, the error that arises from the statistical error in the memory kernel is minimized. In addition, the calculation of these weights should take into account two important observations. First, correlation functions with certain memory kernel-level initial conditions may contribute more to the relevant relaxation pathways than others. Second, correlation functions arising from different memory kernel-level initial conditions may require different numbers of trajectories to achieve reasonable levels of convergence.

To calculate the optimal weights, some knowledge of the relaxation pathway and current level of convergence of the memory kernel is crucial. Here, we suggest an iterative approach, where increasingly improved estimates for the memory kernel and the RDM dynamics are used to recompute the weights at each iteration. An overview of the algorithm is shown in Fig. 1. We note that while here we present one particular form for the weights, the approach used here is general and does not depend on this particular choice.

FIG. 1.

Overview of an iterative approach to identifying and selectively converging the memory kernel given an initial state or states of interest.

To provide a form for the weights, wm, we consider how a given realization of the RDM dynamics, which we call the trial RDM dynamics, deviates from its converged form, δρs(t) = ρs^conv(t) − ρs(t). Although in practice one does not know the converged dynamics ρs^conv(t), considering how an unconverged dynamics may deviate from its converged form allows us to elucidate how the convergence of the dynamics can be accelerated via selective sampling. Our goal is to reduce the absolute value of this deviation, for a given total number of trajectories, Ntot, where the trial dynamics, ρs(t), arise from integrating Eq. (5) with an unconverged memory kernel. Because this error metric is time dependent and different for each element of the density, it is convenient to consider the mean absolute deviation, which is the absolute value of δρs(t), averaged over time and over the elements of the density matrix,

(12)

where tmax is the maximum time that is of interest of the dynamics.

Because there is no error in the initial condition [i.e., δρs(0) = 0], one can minimize the error at subsequent times by minimizing the error in the time derivative of the RDM, δρ̇s(t) = ρ̇s^conv(t) − ρ̇s(t) = (d/dt)δρs(t). Again, it is convenient to consider the mean absolute deviation of this quantity, which is

(13)

An approximate upper bound of this quantity is (see  Appendix E)

(14)

where the index m runs over all initial conditions. The selective sampling error metric (see also  Appendix E) is approximately proportional to the error in the dynamics at time t that arises from finite sampling of correlation functions with initial condition m. For the models considered here, this quantity is

(15)

where Nm is the number of trajectories that have been run in all previous iterations of the algorithm with memory kernel level initial condition m. The quantity [σK1(t)]mβ is given by Eq. (E10) in  Appendix E.

Motivated by the above considerations, the computed weights in our algorithm depend on ρs(t), as well as on the statistical error of the correlation functions that contribute to specific elements of the memory kernel. By distributing Ntot trajectories over nstep steps, in increments of ΔN = Ntot/nstep, among the different memory kernel-level initial conditions, we refine our estimate for these weights iteratively at each step, as described below. We repeat this procedure until either the RDM converges to a desired threshold or Ntot trajectories have been run.

Using the definitions provided above, we define the un-normalized weight, am, as

(16)

The normalized weight wm is then

(17)

Using these weights, we then apportion trajectories to each memory kernel level initial condition Âm in the subsequent cycle so that

(18)

where j is the index of the current iteration of the algorithm (for example, in the first iteration of the algorithm j = 1). Nm(j) is therefore the number of trajectories that have been assigned to initial condition m at step j. The algorithm can be summarized as follows:

  1. Initialize the algorithm by setting K(t)=0 and [σK1]mβ=1.

  2. Use K(t) to integrate the GQME [Eq. (5)] for some time range tmax, yielding ρs(t).

  3. Use the current guess for K(t) and ρs(t) to compute the weights, wm, corresponding to a given memory kernel-level initial condition Âm using Eqs. (15)–(17).

  4. Distribute a number of MFT trajectories, ΔN, among all memory kernel-level initial conditions according to Eq. (18).

  5. Using the correlation functions q3(j,±)(t) and q1(jk,±)(t) obtained from all MFT trajectories that have been run so far, rebuild K(t) using Eqs. (6)–(11).

  6. Calculate σK1(t) using the standard deviations in the correlation functions q1(jk,±)(t) (see  Appendix E).

  7. Calculate the mean absolute deviation between the previous and the current iteration of ρs(t). If the maximum number of trajectories, Ntot, has been exceeded, or if the mean absolute deviation falls below the predetermined threshold, stop. Otherwise, return to step 2.

A plot of the number of trajectories assigned to each initial condition for the LHCII model discussed in Sec. IV is available in the supplementary material.
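A compact sketch of the loop over steps 1–7 is given below; run_trajectories, build_kernel, and integrate_gqme are hypothetical callbacks standing in for the MFT sampling, the kernel construction and error estimate of Eqs. (6)–(11) and (15), and the integration of Eq. (5), respectively, and are not part of the paper.

```python
import numpy as np

def selective_sampling(run_trajectories, build_kernel, integrate_gqme,
                       n_init, delta_n, n_tot, tol):
    # run_trajectories(m, n): run n MFT trajectories from kernel-level initial
    #   condition m and accumulate the correlation functions (hypothetical).
    # build_kernel(): return (K, eps_m), where eps_m[m] is the time-averaged
    #   error estimate for initial condition m built from sigma_K1 and N_m
    #   (hypothetical stand-in for Eqs. (6)-(11) and (15)).
    # integrate_gqme(K): integrate Eq. (5) up to t_max and return rho_s(t);
    #   K=None is taken to mean K(t) = 0 (hypothetical).
    n_m = np.zeros(n_init, dtype=int)   # trajectories assigned to each initial condition
    K = None                            # step 1: start from K(t) = 0
    eps_m = np.ones(n_init)             # step 1: uniform error estimate => uniform first pass
    rho_prev = integrate_gqme(K)        # step 2
    while n_m.sum() < n_tot:
        w = eps_m / eps_m.sum()                      # step 3: normalized weights
        alloc = np.round(w * delta_n).astype(int)    # step 4: apportion Delta N trajectories
        for m in np.nonzero(alloc)[0]:
            run_trajectories(m, alloc[m])
            n_m[m] += alloc[m]
        K, eps_m = build_kernel()                    # steps 5-6: rebuild K(t) and error estimates
        rho = integrate_gqme(K)                      # step 2 for the next iteration
        if np.mean(np.abs(rho - rho_prev)) < tol:    # step 7: mean absolute deviation
            break
        rho_prev = rho
    return K, n_m
```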

Here, we apply the MF-GQME to the Frenkel exciton Hamiltonian.32,33 This model Hamiltonian forms the basis of many studies of electronic energy transfer in photosynthetic light harvesting systems.38,39 In addition to providing physical insight into these systems, the Frenkel exciton model’s simplicity, i.e., its bilinear coupling to harmonic baths, allows for exact results to be calculated using, for example, the hierarchical equations of motion (HEOM) approach.40 Although GQMEs, and in particular MF-GQME, are not limited to any specific form of the Hamiltonian, the availability of exact results for this model system allows us to benchmark the accuracy and efficiency of the MF-GQME for multistate problems. The Frenkel exciton Hamiltonian is of the general form given in Eq. (1), with the subsystem Hamiltonian taking the form,

Ĥs = ∑k E(k) |k⟩⟨k| + ∑k≠k′ Δkk′ |k⟩⟨k′|,   (19)

where E(k) is the energy of diabatic state |k⟩ and Δkk is the electronic coupling between states |k⟩ and |k′⟩. Each diabatic state is bilinearly coupled to a bath of Nosc independent harmonic oscillators,

Ĥsb = ∑k |k⟩⟨k| ∑j cj,k R̂j,k ≡ ∑k |k⟩⟨k| γ̂k,   (20)

where j indexes the bath degrees of freedom, cj,k is the coupling constant for a given bath oscillator, R̂j,k is the position operator for that oscillator, and in the final equality we define γ̂k to be the bath part of the system-bath coupling to state |k⟩.

Ĥb = (1/2) ∑k ∑j (P̂j,k² + ωj,k² R̂j,k²),   (21)

where P̂j,k is the momentum operator of the bath mode, ωj,k is its frequency, and the Hamiltonian is given in mass-weighted coordinates and momenta. The coupling of each diabatic state to its local bath is characterized by its spectral density,

(22)

For the FMO model, it is common to assume that all spectral densities are equivalent and take the Debye form,

(23)

where λ is the reorganization energy that defines the strength of the system-bath coupling and ωc is the inverse of the characteristic time scale of the bath. For the LHCII model, we adopt a sum of shifted Drude-Lorentz peaks,41 

(24)

where we use the parameters of Ref. 41: Npeaks = 7, ωc⁻¹ = 30, 1400, 1000, 1400, 1000, 600, 1000 fs, λ = 130, 6, 18, 6, 16, 48, 17 cm⁻¹, and Ω = 240, 297, 342, 388, 518, 745, 915 cm⁻¹.

For the FMO model, the spectral density was discretized with frequencies,

(25)

For the LHCII model, the spectral density was discretized by numerically solving for ωj that satisfies

(26)

where λtot = ∑kλk. For both Eqs. (25) and (26), ωj is obtained by taking j = 1, 2, …, Nosc. For both models, the couplings of the discrete oscillators are given by

(27)
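For reference, a minimal discretization routine for a Debye spectral density is sketched below. It assumes the standard Debye form J(ω) = 2λωcω/(ω² + ωc²) and the common equal-reorganization-energy prescription, in which each oscillator carries the same share of ∫dω J(ω)/ω; since Eqs. (25) and (27) are not shown here, the specific expressions used in this work may differ.

```python
import numpy as np

def discretize_debye(lam, omega_c, n_osc):
    """Discretize J(w) = 2*lam*omega_c*w / (w**2 + omega_c**2) into n_osc modes.
    Frequencies are placed so that each mode carries an equal share of
    int dw J(w)/w, which for the Debye form gives an arctangent grid; the
    couplings are fixed by requiring sum_j c_j**2 / (2*w_j**2) = lam."""
    j = np.arange(1, n_osc + 1)
    omega = omega_c * np.tan(np.pi * (j - 0.5) / (2.0 * n_osc))
    c = omega * np.sqrt(2.0 * lam / n_osc)
    return omega, c

# sanity check: the discrete bath reproduces the reorganization energy
w, c = discretize_debye(lam=35.0, omega_c=1.0, n_osc=300)
print(np.sum(c**2 / (2.0 * w**2)))  # ~35.0
```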

For the Ehrenfest trajectories, the equations of motion were solved using a split evolution method: at each dynamical time step, the bath is first evolved through a half time step of Δt/2 using mean field theory (see Appendix C) with the velocity Verlet algorithm.42 The system is then evolved for a full time step Δt by numerically diagonalizing Hs + Hsb(R, P), holding the bath positions and momenta constant. The bath is then evolved for another half time step of Δt/2 using velocity Verlet, keeping the state of the electronic subsystem constant. For the FMO model, the time step used was Δt = 0.25 fs, while for LHCII the time step was Δt = 0.05 fs.
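A sketch of one such split step for the Frenkel exciton model is shown below, assuming mass-weighted bath coordinates, local bilinear coupling, and the array layouts indicated in the comments; it is an illustration of the scheme just described, not the production implementation.

```python
import numpy as np

def ehrenfest_step(psi, R, P, omega, g, E, Delta, dt, hbar=1.0):
    """One split Ehrenfest step: bath half-step / subsystem full step / bath half-step.
    psi   : (Ns,) complex subsystem amplitudes
    R, P  : (Ns, Nosc) mass-weighted bath coordinates and momenta
    omega : (Ns, Nosc) bath frequencies; g: (Ns, Nosc) coupling constants c_{j,k}
    E     : (Ns,) site energies; Delta: (Ns, Ns) electronic couplings (zero diagonal)
    All array layouts are illustrative choices, not taken from the paper."""

    def bath_half_step(R, P, pops):
        # velocity Verlet over dt/2 with the electronic populations frozen
        F = -omega**2 * R - pops[:, None] * g
        P = P + 0.25 * dt * F
        R = R + 0.5 * dt * P
        F = -omega**2 * R - pops[:, None] * g
        P = P + 0.25 * dt * F
        return R, P

    R, P = bath_half_step(R, P, np.abs(psi) ** 2)

    # subsystem step: diagonalize H_s + H_sb(R) with the bath frozen and propagate psi
    H = np.diag(E + np.sum(g * R, axis=1)) + Delta
    evals, evecs = np.linalg.eigh(H)
    psi = evecs @ (np.exp(-1j * evals * dt / hbar) * (evecs.conj().T @ psi))

    R, P = bath_half_step(R, P, np.abs(psi) ** 2)
    return psi, R, P
```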

The partial memory kernels were constructed using Eqs. (7)–(11), with the necessary correlation functions in Eqs. (10) and (11) approximated using mean field theory as described in Appendix C. The memory kernel was calculated at a lower time resolution (0.5 fs) than the direct MFT dynamics used to generate the underlying correlation functions, Eqs. (10) and (11), which was observed not to change the resulting GQME dynamics. Additional details on the specific form of these equations in the context of the Frenkel exciton model are given in Appendix A. Plots of selected memory kernel matrix elements that correspond to transitions between the states with significant populations in FMO and LHCII are available in the supplementary material.

Typically, the memory kernel decays to zero after a short amount of time and hence can be truncated (set equal to zero) at all later times. For the FMO model, this kernel cutoff time, τc, was chosen by increasing the cutoff until the dynamics ceased to change, with τc between 125 and 300 fs giving very similar results. In practice, larger values of τc require more trajectories to converge. For the dynamics of the FMO model shown in this work, we used τc = 150 fs. For the LHCII model, the dynamics remain unchanged for τc between 50 and 65 fs. Stably integrating the MF-GQME dynamics with cutoff times greater than 65 fs requires a very large number of trajectories, because the LHCII complex is in a more strongly coupled regime, which gives rise to highly oscillatory memory kernels that are difficult to converge. Such oscillatory memory kernels have been observed for the dynamics of the spin-boson model with the Debye spectral density in intermediate to strong coupling regimes.23 For all LHCII dynamics shown, we used τc = 65 fs. The instabilities observed for LHCII might be alleviated by employing alternative closures23,43,44 or by using higher level quantum-classical methods to generate the memory kernel.20,24

After calculating the partial memory kernels, the GQME, Eq. (5), was integrated using Heun’s method,45 with the integral in Eq. (5) approximated using the trapezoid rule.
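A minimal implementation of this propagation is sketched below; the sign and operator ordering of the GQME are written in a generic row-vector form and should be taken from Eq. (5), which is not reproduced in the text, so the code is schematic on that point.

```python
import numpy as np

def integrate_gqme(rho0, X, K, dt, n_steps):
    """Heun (predictor-corrector) integration of a GQME of the assumed form
        d/dt rho(t) = i rho(t) X - int_0^t dtau rho(t - tau) K(tau),
    with the memory integral evaluated by the trapezoid rule and truncated at
    the kernel cutoff. rho0 is a length Ns^2 row vector, X is Ns^2 x Ns^2, and
    K has shape (n_kernel, Ns^2, Ns^2) with K[j] = K(j*dt)."""
    n_kernel = K.shape[0]
    rho = np.zeros((n_steps + 1, rho0.size), dtype=complex)
    rho[0] = rho0

    def rhs(n, rho_n):
        deriv = 1j * (rho_n @ X)
        jmax = min(n, n_kernel - 1)
        if jmax == 0:
            return deriv                          # memory integral vanishes at t = 0
        mem = np.zeros_like(rho_n)
        for j in range(jmax + 1):                 # trapezoid rule over the memory integral
            w = 0.5 if j in (0, jmax) else 1.0
            r = rho_n if j == 0 else rho[n - j]
            mem += w * (r @ K[j])
        return deriv - dt * mem

    for n in range(n_steps):
        f0 = rhs(n, rho[n])
        pred = rho[n] + dt * f0                   # Euler predictor
        f1 = rhs(n + 1, pred)
        rho[n + 1] = rho[n] + 0.5 * dt * (f0 + f1)  # Heun corrector
    return rho
```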

Because the sampling of bath initial conditions from which the correlation functions are generated is stochastic, there is variability in how many trajectories are required to converge the memory kernel for a given realization of uniform or selective sampling. To evaluate our selective sampling algorithm on the LHCII model, we performed 50 separate realizations of the selective sampling algorithm, as well as 50 separate realizations of uniform sampling. All GQME dynamics shown are from the realization with the median error in the dynamics, where error in the dynamics is quantified by the mean absolute deviation in the subsystem density matrix from its converged form, Eq. (12). The converged form for LHCII corresponds to 65 × 10⁶ total trajectories, apportioned equally across all initial conditions. A plot that shows the distribution of errors of different realizations of the selective sampling procedure as a function of number of trajectories added is shown in the supplementary material. The selective sampling procedure was performed by adding a total of ΔN = 100 trajectories at each iteration.

Here, we apply the MF-GQME method to electronic energy transfer in Frenkel exciton models of the FMO and LHCII light harvesting complexes. The FMO model consists of Ns = 7 subsystem states, whereas Ns = 14 for LHCII. Therefore, for FMO, there are 7⁴ = 2401 time-dependent matrix elements in the memory kernel, while, for LHCII, there are 14⁴ = 38 416. However, as described in Sec. II A, all of the matrix elements can be obtained using very short simulations starting from 28 distinct initial conditions for FMO or 105 distinct initial conditions for LHCII.

Figure 2 compares the direct MFT and MF-GQME dynamics to exact HEOM results41 for the population relaxation following an initial excitation for the FMO model at low temperatures in a regime of high nonadiabaticity (ωc ≳ Δkk′ for most pairs of states). As observed in previous work for the FMO model,46 direct MFT gives qualitatively correct coherent oscillations at short times but fails to recover the correct long-time populations. This can be explained by the high nonadiabaticity and the low temperature bath, both of which are problematic for MFT. In contrast, the MF-GQME produces highly accurate results, consistent with previous observations that this scheme offers significant improvements over direct MFT in nonadiabatic regimes.21–23 

FIG. 2.

Populations for direct MFT and MF-GQME in the FMO model at T = 77 K with 10 000 trajectories per initial condition. The model parameters are λ = 35 cm−1 and ωc⁻¹ = 50 fs. Filled points are the exact HEOM results from Ref. 33. (a) and (b) correspond to the initial excitation of state 6, while (c) and (d) correspond to the initial excitation of state 1.

For this nonadiabatic problem, the memory kernel decays by a cutoff time of τc = 0.15 ps, which is considerably shorter than the 1–2 ps lifetime of the population dynamics. However, because the MF-GQME requires sampling 28 distinct initial conditions to generate the memory kernel, uniform sampling of all initial conditions in this regime, i.e., when trajectories are apportioned equally to each initial condition, requires a total of ∼10⁵ trajectories (∼3500 per initial condition) to converge the memory kernel. This is in contrast to the ∼10⁴ trajectories required to obtain a similar level of convergence with direct MFT. Hence, despite the shorter lifetime of the kernel, the larger number of trajectories required to generate the kernel means that the total cost of generating the MF-GQME dynamics shown in Fig. 2 is ∼1.5 times greater than that of MFT. Moreover, because the memory kernel provides the dynamics for all factorizable initial conditions of the subsystem RDM, the dynamics resulting from the two distinct initial subsystem excitations shown in Figs. 2(b) and 2(d) are generated using the same memory kernel. In contrast, Figs. 2(a) and 2(c), which correspond to direct MFT dynamics, require separate sets of MFT trajectories initialized from the two different subsystem initial conditions.

An additional advantage of the MF-GQME scheme is that, once the memory kernel is converged, one can obtain dynamics for arbitrarily long times by integrating Eq. (5). In practice, the GQME propagation can be done at negligible computational cost compared to the costs of generating MFT trajectories. Figure 3 shows the long time population dynamics for direct MFT and MF-GQME following the initial excitation of state 6 in the same parameter regime as Fig. 2. Because of the prohibitive cost of converging the exact dynamics for such long times, the arrows in Fig. 3 correspond to the Boltzmann equilibrium populations that would be expected based on the subsystem Hamiltonian, e^{−βĤs}. As expected, due to the well-known detailed balance problems in methods that rely on a mean field description of the interaction between the electronic subsystem and the nuclei,47–49 MFT incorrectly predicts that all states have nearly equal population at long times, in qualitative disagreement with the expected result. In contrast, the MF-GQME accurately captures the expected long-time populations. Furthermore, to converge the MFT dynamics out to the 20 ps shown in Fig. 3 requires significant additional computational effort (at least 20 times more than for 1 ps), while in the GQME approach, Eq. (5) can be integrated for arbitrarily long times using the same memory kernel as Fig. 2. Because integrating the GQME for longer times incurs negligible additional cost, the dynamics shown in Fig. 3 are ∼13 times cheaper when generated with the MF-GQME as compared with direct MFT.

FIG. 3.

Long time populations in the FMO model for the parameter regime in Fig. 2 with the initial excitation in state 6 using direct MFT (a) and MF-GQME (b). The MF-GQME dynamics are generated using the same memory kernel as Fig. 2. The arrows show the long-time populations that would be expected from e^{−βĤs}.

A more stringent test of the MF-GQME method is the LHCII model, which has significantly more subsystem states each of which is also more strongly coupled to the bath degrees of freedom. The combination of a large number of states, stronger system-bath coupling, and the complex form of the spectral density of LHCII presents a significant challenge to exact methods. Hence, very limited benchmarks are available using the HEOM method and these required substantial computational effort and some simplification of the spectral density.41 Here, we use the form and parameterization of the LHCII Hamiltonian of Ref. 41 with the J7 spectral density from that reference, and compare against the HEOM results presented therein. These HEOM results relied on the high temperature approximation,41 and hence, while they are expected to be highly accurate, it remains unclear whether they are numerically exact.

Figure 4(a) shows the dynamics for the sum of the populations of all Chl b chromophores in the LHCII model following the initial excitation of the highest energy eigenstate of the subsystem Hamiltonian. For this model, the exact HEOM dynamics exhibit an initial fast (subpicosecond) decay in the population of the Chl b chromophores followed by a longer time component that decays over tens of picoseconds. Direct MFT fails to capture the magnitude of the short time drop in the population of the Chl b chromophores but qualitatively reproduces the longer time scale of population transfer out of this set of chromophores. As previously observed, modified Redfield theory, which is a major workhorse for modeling the excitation dynamics in photosynthetic systems, shows an erroneously rapid decay within the first 1–2 ps.41 The MF-GQME, however, yields accurate short and long time behavior, in good agreement with the HEOM result. This behavior is mirrored in the populations of the Chl 604a and Chl 605b states in Figs. 4(b) and 4(c), which contribute to the Chl b decay shown in (a). Direct MFT underestimates the transfer into Chl 604a while overestimating that into Chl 605b, and modified Redfield theory leads to spuriously fast population decay of these states. In contrast, the MF-GQME accurately captures the HEOM result.

FIG. 4.

Population dynamics for the 14-state LHCII model of Ref. 41: (a) the sum of the populations of all Chl b chromophores, (b) the population of the Chl 604a state, and (c) the population of the Chl 605b state. The direct MFT and MF-GQME results are from this work; the direct MFT results used 100 000 trajectories, and the MF-GQME results used 625 000 trajectories per initial condition (65 × 10⁶ total trajectories). For comparison, the HEOM and modified Redfield results for the same model are from Ref. 41.

Figure 5 shows the long-time dynamics obtained from the MF-GQME using the same initial condition. In addition to capturing the exact HEOM dynamics at short times, the MF-GQME also obtains the intermediate time scale (picosecond) trapping in Chl 604a and Chl 605b bottleneck states, and the long time scale (tens of picoseconds) relaxation to the Chl 610a-611a-612a trimer, the Chl 613a-614a dimer, and the Chl 602a-603a dimer.34 

FIG. 5.

Long time populations in the LHCII model of Ref. 41 with the initial excitation in the highest energy eigenstate of the subsystem Hamiltonian. The dynamics are generated using the same memory kernel as in Fig. 4. The dashed lines show the long-time populations that would be expected from e^{−βĤs}.

Because of the stronger system-bath coupling of the LHCII model in comparison to the FMO complex, converging the memory kernel using uniform sampling of all 105 distinct initial conditions would require approximately 65 × 10⁶ total trajectories. Although this number of trajectories is feasible for simple models, it is intractable when more sophisticated atomistic descriptions of the environment are used. The fact that LHCII consists of a large number of subsystem states makes it a prime candidate to benefit from using selective sampling to accelerate convergence. As such, we now consider the computational speedups that can be obtained by applying the selective sampling algorithm introduced in Sec. II B to this model. To demonstrate the utility of the selective sampling scheme, we thus compare the convergence of the GQME dynamics for a fixed total number of trajectories when using the selective sampling and the uniform sampling approaches to compute the memory kernel. See the supplementary material for a plot showing the average number of trajectories assigned to each memory-kernel level initial condition at each step of the algorithm.

Figure 6 shows the populations for the median realizations of selective and uniform sampling at 100 000 and 200 000 trajectories. Already at 100 000 trajectories, selective sampling yields qualitatively correct dynamics, while the uniformly sampled memory kernel is not numerically stable: the populations of several chromophores become negative by t = 1.0 ps, and the unphysically high frequency, large amplitude oscillations seen in (a) are caused by this numerical instability. By 200 000 trajectories, selective sampling is already in semiquantitative agreement with the exact result, while uniform sampling produces negative populations only after ∼0.5 ps. In fact, the uniform sampling approach fully converges only after ∼65 000 000 trajectories. Hence, in this regime, the selective sampling algorithm is approximately 300 times more efficient than uniform sampling.

FIG. 6.

MF-GQME population dynamics for the LHCII model using uniform and selective sampling. Filled circles are the fully converged uniform sampling result with 65 × 10⁶ total trajectories. (a) and (b) correspond to 100 000 trajectories, and (c) and (d) correspond to 200 000 trajectories. The high frequency and large amplitude oscillations seen in (a) are caused by numerical instability due to noise in the memory kernel.

In this paper, we have shown how the MF-GQME approach can be efficiently extended to treat systems where the electronic subspace consists of many states. By analyzing the symmetries of the memory kernel, we have demonstrated that, for a system consisting of Ns subsystem states, the memory kernel, despite having Ns4 matrix elements, can be generated using only O(Ns2) quantum-classical trajectories. This scaling analysis is directly applicable to quantum-classical methods using wavefunction-based treatments of the electronic subsystem dynamics, such as MFT and FSSH. The same scaling arguments also apply to methods of generating the kernel using only the original dynamics23,24,50 and to other methods that involve a time convolution master equation such as the transfer tensor method.51,52 This is in contrast to methods53–60 based on the Meyer-Miller-Stock-Thoss (MMST) mapping,61,62 where the initial conditions necessary to generate all memory kernel elements could be recovered by an appropriate weighting of the variables drawn from a single Gaussian distribution of MMST variables.19,24

To further reduce the cost of generating the memory kernel necessary to propagate an RDM subject to specific initial conditions, we have outlined a general procedure for selecting an optimal set of quantum-classical trajectories to converge the kernel. Employing a specific realization of a selective sampling procedure on the LHCII model, which focuses resources on the initial conditions that contribute most to the fast convergence of the memory kernel, we have demonstrated that one can obtain a further two orders of magnitude speed-up. While the specific procedure used here is one possible choice, alternative ways of apportioning trajectories to specific initial conditions could lead to even more efficient selective sampling schemes and represent a promising direction for future work.

By applying the MF-GQME to models of excitation energy transfer in the FMO and LHCII complexes, which consist of 7 and 14 electronic states, respectively, we have shown that this method can provide accurate results for multistate systems for both short-time dynamics and long-time populations at the computational cost of low-level quantum classical trajectory-based methods. Furthermore, when memory kernels are short-lived in comparison to the electronic dynamics of interest, the MF-GQME approach usually costs less than a direct application of MFT. Moreover, once the memory kernel is generated up to the required cutoff time, τc, the GQME can be propagated with linear scaling in time, making long-time dynamics easy to access.

To date, the MF-GQME has proven successful in treating the dynamics of systems coupled to harmonic21,23,25 or fully atomistic22 environments, both in25 and out21–23 of equilibrium. In the present work, we have demonstrated that this approach is also able to accurately and efficiently address the dynamics of systems where the electronic subspace consists of many states. Indeed, we have shown that the MF-GQME can provide quantitatively accurate results in LHCII where commonly invoked methods, such as Redfield theory5,6 and its modified variant,7,8 are known to fail.41 The MF-GQME thus provides a powerful tool to investigate charge and energy transfer in the condensed phase.

See supplementary material for plots of selected memory kernel matrix elements for FMO and LHCII, a plot of the distribution of errors of different realizations of the selective sampling procedure as a function of the number of trajectories added, and a plot of the number of trajectories assigned to each initial condition of the LHCII model using the selective sampling approach.

This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award No. DE-SC0014437. T.E.M. also acknowledges support from the Camille Dreyfus Teacher-Scholar Awards Program. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility, supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. W.C.P. acknowledges support from the Melvin and Joan Lane Stanford Graduate Fellowship. We would also like to thank Stanford University and the Stanford Research Computing Center for providing computational resources and support that have contributed to these research results.

In this section, we provide more details for the derivation of the expressions for the auxiliary kernels K1(t) and K3(t) for the GQME that describes the dynamics of the subsystem RDM subject to a spectroscopic initial condition. These expressions are applicable to a general Ns state system coupled to an arbitrary bath of the type described in Sec. II. We subsequently specialize our discussion to the Frenkel exciton model presented in Sec. III A given by Eqs. (19)–(21).

We start from the general form of the Hamiltonian, Eq. (1), for an Ns state electronic system coupled to a bath, which can take a harmonic [e.g., Eq. (20)] or atomistic form, with a general form of the system-bath coupling, given by Eq. (2).

As stated in Sec. II, we are interested in the nonequilibrium RDM dynamics arising from a spectroscopic or Franck-Condon initial condition, ρ̂(0) = ρ̂s(0) ρ̂b^eq, where the initial condition of the electronic subsystem can be expanded in terms of the Liouville basis, ρ̂s(0) = ∑n an Ân, and the bath is in local equilibrium, ρ̂b^eq = e^{−βĤb}/Trb[e^{−βĤb}]. To derive a particular GQME, it is also necessary to choose a projection operator. Here, we choose the Ns state system generalization of the Redfield-type projection operator, shown in Ref. 23 in the context of a two-level system. In the more conventional Nakajima-Zwanzig form of the GQME for nonequilibrium RDM dynamics,1,2 this projector corresponds to the commonly used Argyres-Kelley projector,63 P, which performs a partial trace over the bath and multiplies by the canonical density of the bath, i.e., P(Ôb Ôs) = ρ̂b^eq Trb[Ôb Ôs] = ρ̂b^eq Ôs Trb[Ôb], where Ôb and Ôs are bath and system operators, respectively.

With this form of the projection operator, and assuming that Trb[ρ̂b^eq Γ̂j] = 0 (which can always be enforced with no loss of generality by redefining the subsystem and bath parts of the Hamiltonian: Ĥs → Ĥs + Trb[ρ̂b^eq Ĥsb] and Ĥsb → Ĥsb − Trb[ρ̂b^eq Ĥsb]), it is possible to obtain the general forms for the auxiliary kernels. Focusing first on K1(t),

(A1)

Using similar manipulations, one can show that

(A2)

where the elements of the dynamical matrices, q1(jk,±) and q3(j,±), are given by Eqs. (11) and (10), the elements of the static transformation matrix c(j,±) take the form,

(A3)

and we have used the fact that

(A4a)
(A4b)

Equations (A1) and (A2) are general, applicable to an Ns state system coupled to an arbitrary bath with system-bath coupling given by Eq. (2), and subject to a spectroscopic initial condition. We emphasize that such systems include electronic states coupled to harmonic or atomistic baths and system-bath interactions beyond linear coupling.

We now specialize our discussion to the Frenkel exciton Hamiltonian, Eqs. (19)–(21), where the bath is harmonic and can be decomposed into contributions that are local, in addition to the coupling being bilinear and local. Given the Frenkel exciton form of the Hamiltonian, the matrix elements of the static transformation matrix are given by

(A5)

where δ is the Kronecker delta.

Here, we describe a procedure for obtaining K from K1 and K3 by discretizing the self-consistent expansion and solving a system of linear equations, hence avoiding an iterative solution. The self-consistency relation for the memory kernel, Eq. (6), can be written in discrete time as

(B1)

where we have discretized the integral using the trapezoid rule. Here, t0, t1, …, tn are equally spaced discrete times, so that tj+1 − tj ≡ Δt and tm − tj = tm−j (m > j). Equation (B1) can be solved to obtain an explicit expression for K(tn) in terms of K1, K3, and K(tj) (0 ≤ j ≤ n − 1),

(B2)

where n > 0 and 1 is the identity matrix. Equation (B2) allows K(tn) (n > 0) to be obtained from the K(tj) (j < n) in a single, noniterative step in which one solves for the unknown matrix K(tn) in Eq. (B2). In practice, the system of linear equations implied by Eq. (B2) was solved by computing and storing the Cholesky factorization of [1 + i(Δt/2)K3(t0)], which can then be used to solve for each K(tn).
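A sketch of this noniterative procedure is given below. The sign of the convolution term is inferred from the quoted factor [1 + i(Δt/2)K3(t0)] and should be checked against Eq. (6); an LU factorization is used here in place of the Cholesky factorization mentioned above, and all other conventions are assumptions of this sketch.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_memory_kernel(K1, K3, dt):
    """Noniterative solution of the discretized self-consistency relation,
    assuming the schematic form
        K(t) = K1(t) - i int_0^t dtau K3(tau) K(t - tau)
    discretized with the trapezoid rule (cf. Eqs. (B1) and (B2)).
    K1, K3: arrays of shape (n_t, d, d) on an evenly spaced time grid."""
    n_t, d, _ = K1.shape
    K = np.empty_like(K1)
    K[0] = K1[0]                                   # the convolution vanishes at t = 0
    # prefactorize the matrix multiplying the unknown K(t_n)
    lhs = lu_factor(np.eye(d) + 0.5j * dt * K3[0])
    for n in range(1, n_t):
        conv = 0.5 * K3[n] @ K[0]                  # trapezoid endpoint at tau = t_n
        for j in range(1, n):
            conv += K3[n - j] @ K[j]               # interior trapezoid points
        rhs = K1[n] - 1j * dt * conv
        K[n] = lu_solve(lhs, rhs)                  # solve [1 + i dt/2 K3(0)] K(t_n) = rhs
    return K
```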

Here, we outline how one can calculate the dynamical matrices, q1(jk,±)(t) and q3(j,±)(t) in Eqs. (11) and (10) using the Ehrenfest MFT method. To do this, we first review the partial transformation to Wigner phase space with respect to bath variables of a general correlation function. We then show the general MFT approximation to the dynamical matrices corresponding to a general Ns-state system and specialize our discussion to the Frenkel exciton model. Finally, we provide the equations of motion for the subsystem and bath variables.

To treat a general correlation function of a system which can be separated into a subsystem coupled to a bath,

(C1)

using the Ehrenfest MFT method, we first perform a partial Wigner transformation64,65 with respect to the bath variables. Here, ρ^ encodes the initial condition of the full system, and A and B are arbitrary operators. Upon performing the partial Wigner transformation, we can rewrite the correlation function as follows:

(C2)

where X = (R, P) are the classical coordinates, R, and momenta, P, of the bath, and the Wigner transform of an operator, Ô, is defined as65,66

(C3)

The Wigner transform of the product of two operators is65,66

(C4)

where

(C5)

is the Poisson bracket operator and the arrows indicate the directions in which the differential operators act.

The MFT approximation to this correlation function amounts to replacing the time-dependent operator (B̂(t))W = (e^{iĤt/ℏ} B̂ e^{−iĤt/ℏ})W with BW(Xt, t), where the classical bath degrees of freedom evolve according to the Poisson bracket subject to an additional time-dependent mean force arising from the subsystem dynamics and the quantum variables evolve according to the time-dependent Schrödinger equation subject to the time-dependent contribution of the bath. This allows one to rewrite the dynamical matrices, q1(jk,±)(t) and q3(j,±)(t), corresponding to a general Ns state system in the MFT approximation as

(C6a)
(C6b)
(C6c)
(C6d)

where

(C7)

Here, we have explicitly included the dependence of operator Ân(t) on the time-dependent bath variables, Xt, to emphasize that the value of Cmn(s)(t,Xt) is dependent on the realization of the bath.

For the Frenkel exciton model, where the bath is harmonic and the bath part of the system-bath coupling is linear in the bath coordinates and given by Γ̂k ≡ γ̂k = ∑j cj,k R̂j,k, the Wigner transforms of the anticommutator and commutator of the bath canonical distribution and γ̂k take simple forms,

(C8a)
(C8b)

where, using Eq. (C4), one obtains

(C9a)
(C9b)

and

(C10)

Application of the Ehrenfest MFT approximation to the dynamics of the Frenkel exciton model leads to the evolution of the subsystem wavefunction subject to the Hamiltonian of the subsystem modulated by the time-dependent system-bath coupling,

(C11)

where the subsystem initial condition, ρs(0) = |ψ⟩⟨ψ|, is of pure form.

The bath, in turn, follows Hamilton’s equations,

(C12a)
(C12b)

where the Ehrenfest version of the bath Hamiltonian takes the form,

(C13)

where S̄j(t) = Trs[ρs(t, Xt)Ŝj] is the mean value of Ŝj, which gives rise to the mean field potential acting on the bath due to the subsystem dynamics.

The correlation functions given in Eqs. (C6a)–(C6d) are generated using MFT, and the memory kernel is then generated according to Eqs. (6)–(8).
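In such calculations, the bath phase-space initial conditions are typically drawn from the Wigner transform of the thermal harmonic-bath density. The standard Gaussian sampling below is stated as an assumption of this sketch, since Eqs. (C8)–(C10) are not reproduced here; the additional γ̂-dependent factors in those equations would enter as multiplicative weights on each trajectory.

```python
import numpy as np

def sample_wigner_bath(omega, beta, n_traj, rng=None, hbar=1.0):
    """Sample mass-weighted bath coordinates and momenta (R, P) from the
    Wigner transform of the thermal harmonic-bath density. omega is an array
    of bath frequencies; returns arrays of shape (n_traj,) + omega.shape."""
    rng = np.random.default_rng() if rng is None else rng
    coth = 1.0 / np.tanh(0.5 * beta * hbar * omega)
    sigma_R = np.sqrt(hbar * coth / (2.0 * omega))   # width of the coordinate Gaussian
    sigma_P = np.sqrt(hbar * omega * coth / 2.0)     # width of the momentum Gaussian
    R = rng.normal(0.0, 1.0, (n_traj,) + omega.shape) * sigma_R
    P = rng.normal(0.0, 1.0, (n_traj,) + omega.shape) * sigma_P
    return R, P
```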

In this section, we describe how the correlation functions in Eqs. (10) and (11) are evaluated using MFT for initial conditions that do not correspond to a single wavefunction.

If the subsystem initial condition is diagonal, i.e., Âm = |α⟩⟨α′| with α = α′, then MFT is straightforward to apply because one can use the subsystem initial condition corresponding to the pure state |α⟩, i.e., |ψ(0)⟩ = |α⟩. On the other hand, if Âm is off-diagonal (α ≠ α′), the procedure is less straightforward because Âm does not correspond to an initial condition that arises from a single wavefunction. Several possible approaches and subtle issues related to this are discussed in detail in the Appendices of Ref. 23. A simple and computationally efficient approach for calculating such correlation functions involves an auxiliary wavefunction,

(D1)

where ϕ is a random number uniformly sampled from the interval [0, 2π). Then, consider the expression

(D2)

where q(t) is a general memory kernel correlation function that is partially Wigner transformed over the bath degrees of freedom, such as those defined in Eqs. (C6a)–(C6d), and we have explicitly used that α ≠ α′. Equation (D2) thus provides a way to access two q correlation functions using the same initial wavefunction |ψ̄⟩, in particular those for which Âm = |α⟩⟨α′| and Âm = |α′⟩⟨α|. The explicit procedure for calculating such correlation functions is

  1. Initialize the wavefunction |ψ(0)=|ψ¯, sampling a new random number ϕ from a uniform distribution on the interval [0, 2π). Sample the bath initial conditions from the appropriate bath distribution, ρb,W, as in the diagonal case.

  2. At t = 0, evaluate and store e^{iϕ} and e^{−iϕ}.

  3. Run each trajectory using MFT to calculate the desired observables at time t in the same way as for a diagonal initial condition. Multiply the correlation function by e^{iϕ} to obtain q̄+(t) and by e^{−iϕ} to obtain q̄−(t).

  4. Repeat steps 1–3, averaging q̄±(t) over an ensemble of trajectories with different bath initial conditions and different initial values of ϕ. As shown in Eq. (D2), the trajectory-averaged q̄+ corresponds to q(t) with Âm = |α⟩⟨α′| and q̄− corresponds to q(t) with Âm = |α′⟩⟨α|.

This approach of phase averaging over a superposition state in order to obtain the correlation function of interest is similar to that employed in the recently proposed CFBT method,67 which extends Ehrenfest mean field theory.
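A sketch of this phase-averaging procedure is given below; run_mft is a hypothetical callback that propagates a single MFT trajectory (including its own bath sampling) and returns the time-dependent observable entering q(t), and any overall numerical factor from the superposition normalization is assumed to be contained in Eq. (D2).

```python
import numpy as np

def off_diagonal_correlation(run_mft, n_s, alpha, alpha_p, n_traj, rng=None):
    """Phase-averaged correlation functions for A_m = |alpha><alpha'| and its
    Hermitian conjugate, following the procedure of Appendix D."""
    rng = np.random.default_rng() if rng is None else rng
    q_plus, q_minus = 0.0, 0.0
    for _ in range(n_traj):
        phi = rng.uniform(0.0, 2.0 * np.pi)
        psi0 = np.zeros(n_s, dtype=complex)          # auxiliary wavefunction, Eq. (D1)
        psi0[alpha] = 1.0 / np.sqrt(2.0)
        psi0[alpha_p] = np.exp(1j * phi) / np.sqrt(2.0)
        obs_t = run_mft(psi0)                        # time series from one MFT trajectory
        q_plus += np.exp(1j * phi) * obs_t           # -> q(t) with A_m = |alpha><alpha'|
        q_minus += np.exp(-1j * phi) * obs_t         # -> q(t) with A_m = |alpha'><alpha|
    return q_plus / n_traj, q_minus / n_traj
```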

Here, we describe how to motivate our selective sampling algorithm from the formal analysis of how the error in the memory kernel arising from the lack of convergence translates into error in the GQME dynamics.

Our analysis starts from the GQME corresponding to the fully converged RDM, ρ_s^conv(t), and the GQME corresponding to a (possibly unconverged) RDM, ρ_s(t). The fully converged dynamics arise from a fully converged memory kernel, K^conv(t), while the unconverged dynamics arise from an unconverged memory kernel, K(t). A simple metric for the error in ρ_s(t) is its deviation from the converged RDM,

(E1)

Since this quantity can be positive or negative, it is convenient to consider its absolute value, |δρ_s(t)|. This deviation is time dependent and differs for each element of the density matrix. A scalar metric for the total error is the mean absolute deviation of the RDM from its converged value, with the mean taken over time and over the elements of the density matrix,

(E2)

where tmax is the total amount of time over which one wants to know the RDM dynamics. Because there is no error in the initial condition [i.e., δρs(0) = 0], one can minimize the error at subsequent times by minimizing the error in the time-derivative of the RDM,

(E3)
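
In practice, the scalar metric of Eq. (E2) amounts to a single average over the stored dynamics. A minimal Python sketch, assuming the two RDMs are available on a common time grid up to tmax, is

import numpy as np

def mean_abs_deviation(rho_s, rho_s_conv):
    """Scalar error metric of Eq. (E2): |rho_s(t) - rho_s^conv(t)| averaged
    over the time grid and over the elements of the density matrix.

    rho_s, rho_s_conv : complex arrays of shape (n_times, n, n).
    """
    return float(np.mean(np.abs(rho_s - rho_s_conv)))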

Equation (5) can be combined with Eq. (E3) in order to express the error in the time derivative, δρ̇_s(t), in terms of δρ_s(t) and δK(t) = K^conv(t) − K(t),

(E4)

where

(E5)
(E6)
(E7)

Of the three terms, only E2 can be directly controlled by running more trajectories, since more trajectories lead to a smaller δK(t) as the correlation functions that give rise to K(t) become more converged. In contrast, E1 and E3 cannot be directly controlled because they depend on δρ_s(t), a quantity that arises from δK(t) and previous values of δρ_s(t) according to Eqs. (E3)–(E7). For our selective sampling algorithm, we make the ansatz that the mean absolute deviation in the RDM dynamics is proportional to the mean absolute value of E2, i.e., ⟨δρ_s⟩ ∝ ⟨|E2|⟩. Figure 7(a), which is generated using data from LHCII, shows a strong correlation between the error in the GQME dynamics and ⟨|E2|⟩, suggesting that this ansatz is valid.

FIG. 7. Correlation between the mean absolute deviation of the dynamics from its converged result, ⟨δρ_s⟩, and the mean absolute value of E2 (a) and of ϵ (b) for the LHCII model. Each point corresponds to a memory kernel with a different level of convergence.

In order to obtain an expression for the selective sampling weights, wm, we consider how ⟨|E2|⟩ is related to the error in specific elements of K(t),

(E8)

Using this expression to determine the error in the dynamics still requires prior knowledge of the converged kernel. To obtain the selective sampling weights from this expression, we make the additional approximation that δK(t) ≈ δK_1(t) ≈ σ_K1(t)/√N, so that

(E9)

where σK1 is the standard deviation in K1,

(E10)
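
The estimate δK_1(t) ≈ σ_K1(t)/√N can be evaluated directly from independent batches of trajectories. The batch layout below is an assumption made for illustration; the standard error of the batch mean plays the role of δK_1(t) in Eq. (E9).

import numpy as np

def delta_K1_estimate(K1_batches):
    """Statistical error estimate of K_1(t) from independent trajectory batches.

    K1_batches : array of shape (n_batch, n_times, d, d) holding K_1(t)
                 computed from each batch.
    """
    n_batch = K1_batches.shape[0]
    sigma_K1 = np.std(K1_batches, axis=0, ddof=1)  # elementwise sigma_K1(t), cf. Eq. (E10)
    return sigma_K1 / np.sqrt(n_batch)             # standard error, used as delta K_1(t)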

Here, it is useful to rewrite Eq. (E9) as a sum of terms arising from different initial conditions at the memory kernel level,

(E11)

where m indexes the initial conditions at the memory kernel level and [ϵ(t)]m is the sum of all the terms in Eq. (E9) arising from initial condition m,

(E12)

Equations (E11) and (E12) provide the basis for our choice of weights in our selective sampling algorithm. In particular, the weight wm associated with initial condition m is proportional to the time average of [ϵ(t)]m,

(E13)

This metric is an upper bound on the part of the ⟨|E2|⟩ error term that arises from initial condition m [cf. Eq. (E9)]. As shown in Fig. 7(b), |E2(t)| and |ϵ(t)| both track the error in the GQME dynamics, ⟨δρ_s⟩. Hence, a selective sampling algorithm that minimizes this term should also minimize the error in the GQME dynamics. For the models considered here, we note that [ϵ(t)]_m takes a somewhat simpler form because [c^(j,±)]_αm = 0 for α ≠ m and hence

(E14)
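
A minimal sketch of how the weights of Eq. (E13) can be evaluated and used is given below. Distributing the trajectory budget in direct proportion to w_m is one plausible allocation rule rather than a prescription taken from the text.

import numpy as np

def sampling_weights(eps_m):
    """Selective sampling weights of Eq. (E13): w_m proportional to the time
    average of [eps(t)]_m.

    eps_m : real array of shape (n_init, n_times) on the memory-kernel time grid.
    """
    w = eps_m.mean(axis=1)
    return w / w.sum()

def allocate_trajectories(weights, n_total):
    """Distribute a total trajectory budget across the memory-kernel initial
    conditions in proportion to the weights."""
    alloc = np.floor(weights * n_total).astype(int)
    alloc[np.argmax(weights)] += n_total - alloc.sum()  # assign the remainder
    return alloc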
1. S. Nakajima, Prog. Theor. Phys. 20, 948 (1958).
2. R. Zwanzig, J. Chem. Phys. 33, 1338 (1960).
3. B. Berne and R. Pecora, Dynamic Light Scattering: With Applications to Chemistry, Biology, and Physics (John Wiley & Sons, New York, 1976).
4. H. Mori, Prog. Theor. Phys. 33, 423 (1965).
6. A. G. Redfield, Adv. Magn. Opt. Reson. 1, 1 (1965).
7. T.-M. Chang and J. L. Skinner, Physica A 193, 483 (1993).
8. W. M. Zhang, T. Meier, V. Chernyak, and S. Mukamel, J. Chem. Phys. 108, 7763 (1998).
9. A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and W. Zwerger, Rev. Mod. Phys. 59, 1 (1987).
10.
11. T. Förster, Radiat. Res., Suppl. 2, 326 (1960).
12. S. Jang, Y. C. Cheng, D. R. Reichman, and J. D. Eaves, J. Chem. Phys. 129, 101104 (2008).
13. S. Jang, J. Chem. Phys. 135, 034105 (2011).
15. D. P. McCutcheon and A. Nazir, Phys. Rev. B 83, 165101 (2011).
16. H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, Oxford, 2007).
17. A. Nitzan, Chemical Dynamics in Condensed Phases: Relaxation, Transfer, and Reactions in Condensed Molecular Systems (Oxford University Press, New York, 2006).
18. Q. Shi and E. Geva, J. Chem. Phys. 119, 12063 (2003).
19. Q. Shi and E. Geva, J. Chem. Phys. 120, 10647 (2004).
20. A. Kelly and T. E. Markland, J. Chem. Phys. 139, 014104 (2013).
21. A. Kelly, N. Brackbill, and T. E. Markland, J. Chem. Phys. 142, 094110 (2015).
22. W. C. Pfalzgraff, A. Kelly, and T. E. Markland, J. Phys. Chem. Lett. 6, 4743 (2015).
23. A. Montoya-Castillo and D. R. Reichman, J. Chem. Phys. 144, 184104 (2016).
24. A. Kelly, A. Montoya-Castillo, L. Wang, and T. E. Markland, J. Chem. Phys. 144, 184105 (2016).
25. A. Montoya-Castillo and D. R. Reichman, J. Chem. Phys. 146, 084110 (2017).
27. J. Tully, in Classical and Quantum Dynamics in Condensed Phase Simulations, edited by B. J. Berne, G. Ciccotti, and D. F. Coker (World Scientific Publishing, Hackensack, NJ, 1998), pp. 489–514.
28. G. Stock, J. Chem. Phys. 103, 1561 (1995).
29. J. Tully and R. Preston, J. Chem. Phys. 55, 562 (1971).
30. J. Tully, J. Chem. Phys. 93, 1061 (1990).
31. S. Hammes-Schiffer, J. Chem. Phys. 105, 2236 (1996).
32. J. Adolphs and T. Renger, Biophys. J. 91, 2778 (2006).
33. A. Ishizaki and G. Fleming, Proc. Natl. Acad. Sci. U. S. A. 106, 17255 (2009).
34. V. I. Novoderezhkin, M. A. Palacios, H. van Amerongen, and R. van Grondelle, J. Phys. Chem. B 109, 10493 (2005).
35. R. van Grondelle and V. I. Novoderezhkin, Phys. Chem. Chem. Phys. 8, 793 (2006).
36. V. Novoderezhkin, A. Marin, and R. van Grondelle, Phys. Chem. Chem. Phys. 13, 17093 (2011).
37. D. Evans and G. Morriss, Statistical Mechanics of Nonequilibrium Liquids (Cambridge University Press, Cambridge, United Kingdom, 2007).
38. V. May and O. Kühn, Charge and Energy Transfer Dynamics in Molecular Systems (Wiley-VCH, 2011).
39. S. J. Jang and B. Mennucci, Rev. Mod. Phys. 90, 035003 (2018).
40. A. Ishizaki and G. R. Fleming, J. Chem. Phys. 130, 234111 (2009).
41. C. Kreisbeck, T. Kramer, and A. Aspuru-Guzik, J. Chem. Theory Comput. 10, 4045 (2014).
42. W. C. Swope, H. C. Andersen, P. H. Berens, and K. R. Wilson, J. Chem. Phys. 76, 637 (1982).
43. M.-L. Zhang, B. J. Ka, and E. Geva, J. Chem. Phys. 125, 044106 (2006).
44. E. Mulvihill, A. Schubert, X. Sun, B. D. Dunietz, and E. Geva, J. Chem. Phys. 150, 034101 (2019).
45. E. Suli and D. Mayers, An Introduction to Numerical Analysis (Cambridge University Press, Cambridge, UK, 2003).
46. T. C. Berkelbach, T. E. Markland, and D. R. Reichman, J. Chem. Phys. 136, 084104 (2012).
47. P. Parandekar and J. Tully, J. Chem. Theory Comput. 2, 229 (2006).
48. G. Tao and W. H. Miller, J. Phys. Chem. Lett. 1, 891 (2010).
49. A. Kelly and Y. M. Rhee, J. Phys. Chem. Lett. 2, 808 (2011).
50. L. Kidon, E. Y. Wilner, and E. Rabani, J. Chem. Phys. 143, 234110 (2015).
51. J. Cerrillo and J. Cao, Phys. Rev. Lett. 112, 110401 (2014).
52. A. Gelzinis, E. Rybakovas, and L. Valkunas, J. Chem. Phys. 147, 234108 (2017).
53. X. Sun, H. Wang, and W. H. Miller, J. Chem. Phys. 109, 7064 (1998).
54. M. Thoss and G. Stock, Phys. Rev. A 59, 64 (1999).
55. J.-L. Liao and G. A. Voth, J. Phys. Chem. B 106, 8449 (2002).
56. N. Ananth and T. F. Miller, J. Chem. Phys. 133, 234103 (2010).
57. P. Huo and D. F. Coker, J. Chem. Phys. 135, 201101 (2011).
58. C. Y. Hsieh and R. Kapral, J. Chem. Phys. 137, 22A507 (2012).
59. N. Ananth, J. Chem. Phys. 139, 124102 (2013).
60. J. O. Richardson and M. Thoss, J. Chem. Phys. 139, 031102 (2013).
61. H.-D. Meyer and W. H. Miller, J. Chem. Phys. 70, 3214 (1979).
62. G. Stock and M. Thoss, Phys. Rev. Lett. 78, 578 (1997).
63. P. N. Argyres and P. L. Kelley, Phys. Rev. 134, A98 (1964).
65. M. Hillery, R. F. O’Connell, M. O. Scully, and E. P. Wigner, Phys. Rep. 106, 121 (1984).
66. K. Imre, J. Math. Phys. 8, 1097 (1967).
67. S. A. Sato, A. Kelly, and A. Rubio, Phys. Rev. B 97, 134308 (2018).

Supplementary Material