Modeling the dynamics of photophysical and (photo)chemical reactions in extended molecular systems is a new frontier for quantum chemistry. Many dynamical phenomena, such as intersystem crossing, non-radiative relaxation, and charge and energy transfer, require a non-adiabatic description that incorporates transitions between electronic states. Additionally, these dynamics are often highly sensitive to quantum coherences and interference effects. Several methods exist to simulate non-adiabatic dynamics; however, they are typically either too expensive to be applied to large molecular systems (tens to hundreds of atoms), or they are based on ad hoc schemes that may include severe approximations due to inconsistencies between the classical and quantum mechanical descriptions. We present, in detail, an algorithm based on Monte Carlo sampling of the semiclassical time-dependent wavefunction that involves running simple surface hopping dynamics, followed by a post-processing step that adds little cost. The method requires only a few quantities from quantum chemistry calculations, can be systematically improved, and provides excellent agreement with exact quantum mechanical results. Here we show excellent agreement with exact solutions for the scattering probabilities of standard test problems. Additionally, we find that convergence of the wavefunction is controlled by complex-valued phase factors, the size of the non-adiabatic coupling region, and the choice of sampling function. These results help determine the range of applicability of the method and provide a starting point for further improvement.

Advances in theoretical methods as well as in computational power have positioned quantum chemistry as a powerful tool for studying various properties of molecules and molecular complexes.1,2 Accurate, but numerically expensive, wavefunction-based approaches enable a detailed description of electronic properties in relatively small molecules. In contrast, efficient density functional theory (DFT) methods in combination with molecular dynamics (MD) allow one to deduce valuable information on ground-state properties for molecules hundreds of atoms in size.3 Further advances in time-dependent density functional theory (TDDFT) have made it possible to carry out efficient calculations for excited states in these systems and thus predict their susceptibilities, absorption and emission spectra, etc.4

At present, quantum chemistry faces a new frontier: It aims not only at computing equilibrium (static) molecular properties, but also at modeling the dynamics of photophysical processes and (photo)chemical reactions in molecular systems.5,6 The latter are characterized by different reaction pathways that lead to different reaction products. For example, upon absorbing a photon, a molecule may undergo radiative relaxation processes,7 such as fluorescence8 and phosphorescence; non-radiative relaxation processes, such as photodissociation9–11 or photoisomerization;12–15 charge16–19 and energy transfer;20–22 or intersystem crossing.23–25 Such dynamics are necessarily accompanied by electronic transitions, involving complex electron-phonon interactions26,27 and possibly many electronic states, and therefore cannot be described within the standard adiabatic or Born-Oppenheimer approximation. That is, while in the traditional adiabatic approximation the nuclei move along a given potential energy surface (PES) of a molecule, the description of photophysical or photochemical processes requires accounting for transitions between different PESs. Such transitions typically occur in relatively small regions of phase space where the PESs closely approach or cross each other, so that the energy separation between the relevant PESs becomes comparable to the inverse time scales of the nuclear motion (in units of ℏ), i.e., the phonon frequencies. Thus the assumption of separation between electronic and nuclear timescales breaks down in the vicinity of electronic PES crossings, and the dynamics becomes non-adiabatic.

Several approaches exist in the literature to treat non-adiabatic dynamics within the framework of quantum chemistry.28–36 The best known of these are the mixed quantum-classical treatments, the Ehrenfest and surface hopping methods. The Ehrenfest method is easy to implement: it requires one to run a single trajectory of the nuclei, on an average PES, with electronic populations being evaluated on the fly.37 The simplicity of the method is offset by its poor accuracy, which stems from its mean-field nature. Usually different electronic configurations are associated (“entangled”) with different paths of the nuclei, and so the mean-field approach becomes inadequate when these paths are very dissimilar. In order to fix this problem, Tully introduced a surface hopping approach, which accounts for the trajectory branching by running multiple trajectories that can hop between PESs.38 Fewest switches surface hopping (FSSH) has become the most popular approach for problems at various scales, from molecules to nanoclusters.39–45 However, the method is built on ad hoc assumptions and, as a result, frequently fails to properly describe correlations between the trajectories of nuclei and electronic states. In particular, it does not take into account the decoherence arising from spatial separation between the components of the nuclear wavefunctions corresponding to different electronic states (PESs) and, as a result, does not properly account for interference effects.46–48 Many approaches exist in the literature for overcoming this decoherence problem. Some attempt to add decoherence into Tully's surface hopping procedure,47,49–61 while others involve rigorous treatments at the density matrix level, e.g., the quantum classical Liouville equation (QCLE)62–68 and the Meyer-Miller-Stock-Thoss36,69,70 formalism. While these approaches all improve upon the FSSH algorithm, they are often too costly, lead to complicated algorithms, or may not be accurate in certain scenarios.

In this paper, we describe a method based on Monte Carlo sampling of the semiclassical time-dependent wavefunction of a molecule. Unlike common surface hopping approaches, the semiclassical Monte Carlo (SCMC) method is not ad hoc by construction and is able to account for quantum coherence effects between different photoinduced pathways. The main idea is based on writing the full molecular wavefunction as a series of terms that can be viewed as contributions of “partial” wavefunctions of the system, each corresponding to a certain number of transitions (hops) between electronic PESs. These partial wavefunctions can be sampled by a stochastic process similar to that used in the surface hopping method: “on-the-fly” propagation of nuclei according to the classical equations of motion without prior knowledge of the PESs involved. The accumulated phase information characterizing each classical trajectory (e.g., the action along the trajectory) is then used to perform a “post-processing” evaluation of the multidimensional integrals that correspond to each partial wavefunction using the Monte Carlo technique.

The method was first reported in Ref. 71. It was shown that the results obtained with our method are in very good agreement with numerically exact results for three standard sample problems (the so-called Tully problems).38 Additionally, comparison of the results for 1-D and 2-D test problems71 demonstrated that the convergence of the method does not deteriorate with an increase in the number of nuclear degrees of freedom. This is a consequence of the fact that the stochasticity of the Monte Carlo procedure is related purely to the non-determinism in the subspace of the electronic states (between the hops the nuclei propagate according to the classical equations of motion, i.e., deterministically). In this paper, we present a detailed algorithmic description of the SCMC procedure, which should make its implementation straightforward. Additionally, we show that the Monte Carlo procedure used in our method converges reasonably well for problems involving relatively few level crossings. Finally, we demonstrate and discuss how the convergence of the wavefunction is controlled by complex-valued phase factors, the size of the non-adiabatic coupling region, and the choice of sampling function.

The QCLE procedure developed by Kapral et al.62–65 is similar in spirit to our method. The largest difference in the approach presented here is that, while Kapral's approach involves mixed classical-quantum dynamics of the density matrix using the Wigner-Liouville equation, our approach involves direct calculation of the dynamics of the wavefunctions. We believe our approach provides a more “straightforward” path to calculating accurate non-adiabatic molecular dynamics, with minimal deviation from the commonly used FSSH procedure. For example, as in FSSH, the dynamics in our approach always follow a single PES, whereas the dynamics used to sample the QCLE involve following both single and averaged PESs.62 Additionally, QCLE dynamics may require branched trajectories.63

The paper is organized as follows: In Sec. II, we formulate the problem and describe the theory behind the numerical method. In Sec. III, we describe, in detail, how to carry out the SCMC simulation, and provide an algorithm for a system with two PESs. In Sec. IV, we analyze the results of the SCMC method when applied to Tully's 1D test problems and to an additional 1D test problem with three electronic levels.

Consider a Hamiltonian with nuclear and electronic degrees of freedom

\begin{eqnarray} H &=& {{\bf \hat{P}}^2\over 2M} + \hat{H}_{el}({\bf R}), \end{eqnarray}
(1)

where ${\bf \hat{P}}$ is the N-component nuclear momentum operator (N is the number of independent nuclear coordinates, ${\bf R} \equiv (R_1, \ldots, R_N)^T$), M is the nuclear mass (for simplicity of notation we assume that all nuclear degrees of freedom have the same mass; the extension to different masses is straightforward), and $\hat{H}_{el}$ is the electronic Hamiltonian (which includes the electronic kinetic energy and all interactions and depends parametrically on ${\bf R}$). The dynamics of the nuclear degrees of freedom can be conveniently represented in the adiabatic basis in terms of the effective Hamiltonian (see Refs. 72 and 73 for details)

\begin{eqnarray} {\hat{H}}_{eff} &=& {({\bf \hat{P}}-{\hat{\bf d}}({\bf R}))^2\over 2M} + {\hat{E}}({\bf R}) . \end{eqnarray}
(2)

Here ${\hat{E}}({\bf R})$ is a diagonal matrix whose elements are the eigenvalues of the Hamiltonian $\hat{H}_{el}$, and ${\hat{\bf d}}({\bf R})$ is the non-adiabatic (derivative) coupling matrix, ${\bf d}_{ij}({\bf R}) = \hbar \langle \Psi_i({\bf R})|\nabla_{\bf R}\Psi_j({\bf R})\rangle$.

In the absence of the non-adiabatic coupling, the PESs are decoupled and the nuclei propagate on a potential landscape given by a particular $E_i({\bf R})$. Formally, in order to describe the quantum dynamics for a system of N particles corresponding to the Hamiltonian $H_0= {{\bf \hat{P}}^2\over 2M} + \hat{E}({\bf R})$, one needs to solve an N + 1 dimensional Schrödinger equation (for a given PES, i.e., $E_i({\bf R})$), which is an impossible task for N ≫ 1. However, in the classical limit, the problem reduces to solving N coupled Newton's equations, which is not too difficult, even for a reasonably large number of degrees of freedom. Furthermore, in this classical or semiclassical limit, one can easily construct a wavefunction of the nuclei by utilizing a gaussian approximation

\begin{eqnarray} \Psi _i({\bf R},t) \sim \prod _{\alpha =1}^{N} e^{-{(R^\alpha -R^{\alpha }_ c(t))^2\over 2\sigma _\alpha ^2(t)}+iP^\alpha _c(t)(R^\alpha -R^{\alpha }_c(t))+iS_i(t)} . \end{eqnarray}
(3)

That is, the gaussian wavepacket with average momentum ${\bf P}_c(t)$ is centered at the position ${\bf R}_c(t)$, which evolves according to the classical equations of motion, with $S_i(t)$ being the classical action accumulated along the trajectory up to time t. For simplicity, in this paper we assume that the dispersion of the gaussian wavepacket corresponds to that of a free particle, σ2(t) = σ2(0) + it/M. While this approximation is clearly violated for PESs with sufficient curvature, for dynamics in realistic molecules this is practically never the case. Indeed, a typical spread of the nuclear wavepacket is of the order of the nuclear zero point motion associated with the vibrational ground state of a molecule, which, due to the large mass, M, of the nuclei, is at least an order of magnitude smaller than the size of a typical electronic bond (which sets the length scale of $E_i({\bf R})$). Furthermore, this approximation did not lead to significant deviations from the exact solution in the scattering results considered below. Thus, instead of solving an N + 1 dimensional Schrödinger equation, the problem reduces to solving N ordinary second-order differential equations.
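As a concrete illustration of Eq. (3), the following is a minimal numerical sketch (Python, one nuclear coordinate, atomic units with ℏ = 1) of a free gaussian wavepacket with the dispersion σ2(t) = σ2(0) + it/M used here; the classical center, momentum, and action are supplied as inputs, and the prefactor is chosen to match the magnitude of the normalization in Eq. (23) below. The function name and the parameter values in the check are illustrative.

    import numpy as np

    def free_gaussian(R, t, Rc, Pc, S, sigma0, M):
        """Free gaussian wavepacket of Eq. (3): complex width sigma^2(t) = sigma0^2 + i t/M,
        centered on the classical phase-space point (Rc, Pc) and carrying the action phase S."""
        w = sigma0**2 + 1j * t / M                                   # free-particle dispersion
        norm = (M**2 * sigma0**2 / (np.pi * (M**2 * sigma0**4 + t**2))) ** 0.25
        return norm * np.exp(-(R - Rc)**2 / (2.0 * w)                # spreading envelope
                             + 1j * Pc * (R - Rc)                    # plane-wave factor
                             + 1j * S)                               # classical-action phase

    # quick check: the packet stays normalized as it spreads (M and sigma0 are illustrative values)
    x = np.linspace(-40.0, 80.0, 4001)
    for t in (0.0, 1000.0, 4000.0):
        psi = free_gaussian(x, t, Rc=10.0, Pc=10.0, S=0.0, sigma0=1.0, M=2000.0)
        print(t, np.trapz(np.abs(psi)**2, x))                        # ~1.0 at each time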

Unfortunately, for ${\hat{\bf d}}\ne 0$ this is no longer the case. One way to proceed is to use the mean-field approach and neglect the spatial dependence of the wavefunction. Such an approach is easy to implement, but it is well known that it leads to significant errors when branching of trajectories is important. Another approach, proposed by Pechukas in the late 1960s,74,75 attempts to apply the semiclassical approximation to the ${\hat{\bf d}}\ne 0$ situation. Unfortunately, in that case one can only formulate an iterative procedure (which is not necessarily well convergent) that leads to the saddle point (i.e., semiclassical) solution of the Schrödinger equation for the molecule. Instead, we propose a different approach: We write down the solution of the Schrödinger equation as a perturbative expansion in powers of ${\hat{\bf d}}$,

\begin{eqnarray} |\Psi (t)\rangle \!&=&\! e^{-i{\hat{H}}_0 t/\hbar }|\Psi (0)\rangle +\int _0^t dt_1 e^{-i{\hat{H}}_0 (t-t_1)/\hbar } \nonumber\\ &&\times\, {(-i)\over 2M\hbar }({\hat{\bf d}}{\bf P} \!+\!{\bf P} {\hat{\bf d}}) e^{-i{\hat{H}}_0 t_1/\hbar } |\Psi (0)\rangle \!+\!\! \int _0^t\! dt_1\!\int _0^{t_1}\! dt_2 \ldots .\nonumber\\ \end{eqnarray}
(4)

Equation (4) is an infinite series whose second- and higher-order terms are time-ordered integrals containing the corresponding powers of the perturbation. Here we have neglected the ${\hat{\bf d}}^2/2M$ term, which is small (compared to the system's kinetic energy) in the semiclassical approximation. Moreover, this term is diagonal and therefore may be included in the definition of ${\hat{H}}_0$ if needed.

Suppose that the state |Ψ(0)⟩ is a gaussian wavepacket localized on PES 1. Furthermore, let us assume that the electronic subspace is two dimensional, i.e., there are only two close PESs; the remaining PESs are separated by large gaps, so that the transition probabilities to them are infinitesimally small due to rapid oscillations of the integrands in Eq. (4). In fact, this is usually the case: typically only two PESs cross in the relevant region of phase space. Then ${\hat{\bf d}} = d_{12}({\bf R}){\hat{\sigma }}_y$, where ${\hat{\sigma }}_y$ is a Pauli matrix and $d_{12}$ is a scalar function of ${\bf R}$. According to the above discussion of the ${\hat{\bf d}}=0$ situation, the first term on the right hand side (RHS) of Eq. (4) is just a gaussian wavepacket that has propagated along the first PES according to the classical equations of motion.

Let us consider the second term in the RHS of Eq. (4) which corresponds to a “piece” of the wavefunction that has hopped once at time t1 from PES 1 to PES 2. It is clear that the wavepacket right before the hopping event at t1 can be approximated as a gaussian. It is reasonable to assume that after the scattering event at t1 the wavepacket retains its gaussian shape, but acquires an amplitude c12 (proportional to d12 at Rc(t1)). Similar assumptions apply to the higher order integrals. That is, we approximate the wavefunction in Eq. (4) as

\begin{eqnarray} &&\hspace*{-5pt} |\Psi ({\bf R},t)\rangle\nonumber\\[-1pt] &&\hspace*{-5pt}\quad= {\cal N} \Bigg ( e^{-{({\bf R}-{\bf R}^{(0)}_c(t))^2\over 2\sigma ^2(t)}+i{\bf P}^{(0)}_c(t)({\bf R}-{\bf R}^{(0)}_c(t))+iS^{(0)}(t)} |1\rangle\nonumber\\[-1pt] &&\hspace*{-5pt}\qquad +\! \int _0^t\! dt_1 c_{12}\big[{\bf R}^{(0)}_c(t_1)\big] e^{-{({\bf R}-{\bf R}^{(1)}_c(t))^2\over 2\sigma ^2(t)}+i{\bf P}^{(1)}_c(t)({\bf R}-{\bf R}^{(1)}_c(t))+iS^{(1)}(t)} |2\rangle \nonumber\\[-1pt] &&\hspace*{-5pt}\qquad+ \int _0^t dt_1\int _0^{t_1} dt_2\, c_{21} \big[{\bf R}^{(0)}_c(t_1) \big] \, c_{12}\big[{\bf R}^{(1)}_c(t_2)\big]\nonumber\\ &&\hspace*{-5pt}\qquad\times\, e^{-{({\bf R}-{\bf R}^{(2)}_c(t))^2\over 2\sigma ^2(t)}+i{\bf P}^{(2)}_c(t)({\bf R}-{\bf R}^{(2)}_c(t))+iS^{(2)}(t)} |1\rangle + \ldots \Bigg ) , \end{eqnarray}
(5)

where the normalization constant is ${\cal N} = \pi ^{-N/4}[\sigma (t)]^{-N/2}$, and from now on we set ℏ = 1 unless stated otherwise. In Eq. (5), the superscript indices on the “classical” variables ${\bf R}_c$, ${\bf P}_c$, and $S$ indicate the number of hops. For example, ${\bf R}^{(2)}_c$ is the classical position for a trajectory of the nuclei with two hops: at time $t_1$ from PES 1 to PES 2 and back at time $t_2$, so that ${\bf R}^{(2)}_c$ is a function not only of t, but also of $t_1$ and $t_2$.

Equation (5) is incomplete by itself: One needs to specify the $c_{12/21}[{\bf R}_c(t)]$ as well as the value of the momentum right after the hop. (Here we assume that neither the position nor the dispersion of the wavepacket changes immediately after the hop, which is easy to check by a simple calculation similar to the one discussed below.) To this end, let us divide the time integrals in Eqs. (4) and (5) into integrals over smaller time intervals Δt, where Δt is smaller than the timescale over which the particle moves a distance at which the potential $E_{1,2}(\bf R)$ varies significantly. Let us consider the dynamics of the wavepacket over such an interval Δt. Assume that at the beginning of the interval the shape of the (nuclear) wavepacket is $\phi_1({\bf R}, 0)$ (not necessarily gaussian) with the electrons in state 1. Furthermore, we assume that the wavepacket is in the semiclassical regime: On the one hand, it is sufficiently localized so that its width, σ, is smaller than the distance over which the potential $E_{1,2}(\bf R)$ varies significantly; on the other hand, its momentum is well defined, i.e., much greater in magnitude than its uncertainty (σ−1). Then the parameters $c_{12/21}[{\bf R}_c(t)]$ and ${\bf P}_2$ (the momentum after the hop, i.e., on PES 2) must be such that the descriptions of the wavefunction in Eqs. (4) and (5) match. That is, Eq. (4) on the time interval Δt describes the dynamics of a wavepacket according to the effective Hamiltonian in Eq. (2), where, within a small Δt, we may determine the wavefunction as $|\Psi _{ex} (t)\rangle = e^{-it{\hat{H}}_{eff}}\phi _1({\bf R},0) |1\rangle$ by expanding in powers of t and setting d and E to be independent of ${\bf R}$. Similarly, we may evaluate the wavefunction for the gaussian approximation in Eq. (5). Up to second order in t, which is sufficient for our purposes, we may write

\begin{eqnarray} |\Psi _g(t)\rangle = (1+c_{21}c_{12}t^2/2)\phi _1({\bf R},t)|1\rangle -ic_{12}t \phi _2({\bf R},t)|2\rangle ,\nonumber\\ \end{eqnarray}
(6)

where $\phi_2({\bf R}, t)$ is a normalized wavepacket on the second PES. The $t^2$ term in Eq. (6) arises from the two-hop contribution in Eq. (5). Note that the wavefunction in Eq. (6) is normalized up to O(t2). Since $c_{12/21}$ and ${\bf P}_2=\int d{\bf R} \phi _2^\ast ({\bf R},t){\hat{\bf P}}\phi _2({\bf R},t)$ define the occupation probabilities and the system's momentum, we require that

\begin{eqnarray} \langle \Psi _g(t)|\sigma _z|\Psi _g(t)\rangle = \langle \Psi _{ex}(t)|\sigma _z|\Psi _{ex}(t)\rangle , \end{eqnarray}
(7)
\begin{eqnarray} \langle \Psi _g(t)|{\hat{\bf P}}|\Psi _g(t)\rangle = \langle \Psi _{ex}(t)|{\hat{\bf P}}-{\bf d}_{12}\sigma _y|\Psi _{ex}(t)\rangle . \end{eqnarray}
(8)

Equations (7) and (8) provide the matching (boundary) conditions that determine the values of $c_{12/21}$ and ${\bf P}_2$. Note that in the RHS of Eq. (8) we have used the “true” momentum of the system, which is given by its velocity, $M\dot{{\bf R}} =iM [{\hat{H}}_{eff}, {\bf R}]=\hat{\bf P} - {\bf d}_{12}\hat{\sigma }_y$. The left hand side (LHS) of Eq. (7) gives $1 + 2c_{21}c_{12}t^2$, while the RHS results in $1 - 2({\bf d}_{12}\cdot{\bf P}_1)^2t^2/M^2$, where ${\bf P}_1=\int d{\bf R} \phi _1^\ast ({\bf R},t){\hat{\bf P}}\phi _1({\bf R},t)$. Then we obtain that

\begin{eqnarray*} c_{12}({\bf R}) &=& ({\bf d}_{12}({\bf R})\cdot {\bf P}_1)/M = d_{12}(t)\nonumber \end{eqnarray*}

and

\begin{eqnarray} c_{21}({\bf R}) &=& ({\bf d}_{21}({\bf R})\cdot {\bf P}_2)/M = d_{21}(t) , \end{eqnarray}
(9)

with P1 being the momentum right before the hop.

Similarly, the LHS of Eq. (8) gives ${\bf P}_1 + c_{21}c_{12}t^2({\bf P}_2-{\bf P}_1)$, where we have assumed that t is small enough that the momenta ${\bf P}_1$ and ${\bf P}_2$ do not change during the evolution over the time interval t. The RHS yields ${\bf P}_1 - {\bf d}_{12}({\bf R})(E_2-E_1)({\bf d}_{12}({\bf R}) \cdot {\bf P}_1)t^2/M$, so that

\begin{eqnarray} {\bf P}_2-{\bf P}_1 = {M(E_2-E_1){\bf d}_{12}({\bf R}) \over {\bf d}_{12}({\bf R})\cdot {\bf P}_1} \, . \end{eqnarray}
(10)

Equation (10) gives a condition for the re-scaling of the momentum of the wavepacket which, at first glance, may seem to be in contradiction with that proposed by Herman76 and used by Tully38 in his surface hopping algorithm. Indeed, while in Eq. (10) the change in the momentum at the hop is along ${\bf d}_{12}$, Eq. (10) does not conserve the total energy at the hop. However, careful inspection reveals that Eq. (10) is simply the zeroth- and first-order terms in an expansion, in powers of $\frac{E_2-E_1}{{\bf P}^2_1/2M}$, of the full energy conservation condition. This is, presumably, an artifact of the short-time approximation that we used in the derivation of Eq. (10). Indeed, it is easy to check that Eq. (10) conserves energy approximately, up to the leading order in ΔE = E2 − E1. Furthermore, we have run the test problems considered below using the condition of Eq. (10) as well as using the exact energy conservation condition at the hop, and found very good agreement between the two over a very broad range of initial momenta. This is a consequence of the fact that the main contribution to the integrals in Eq. (5) comes from the region where ΔE is small, so that energy conservation at the hop is satisfied. Therefore, we conclude that the two conditions are essentially identical, and in the following we will be using the Herman/Tully criterion to re-scale the wavepacket's momentum at the hop. In Sec. III, we describe an efficient numerical algorithm that evaluates the wavefunction in Eq. (5) using the Monte Carlo technique.
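For reference, the following is a minimal sketch (Python, atomic units) of the momentum adjustment at a hop with exact energy conservation along ${\bf d}_{12}$ (the Herman/Tully criterion adopted above). The function name, the choice of root, and the treatment of frustrated hops are our illustrative conventions, not prescriptions from the text.

    import numpy as np

    def rescale_momentum(P1, d12, E1, E2, M):
        """Rescale the momentum along the non-adiabatic coupling vector d12 so that
        P2^2/2M + E2 = P1^2/2M + E1 (exact energy conservation at the hop).
        Returns (P2, True) on success or (P1, False) for a frustrated hop."""
        n = d12 / np.linalg.norm(d12)          # unit vector along d12
        p_par = np.dot(P1, n)                  # momentum component along d12
        disc = p_par**2 - 2.0 * M * (E2 - E1)  # kinetic energy available along d12
        if disc < 0.0:
            return P1, False                   # not enough energy: frustrated hop
        new_par = np.sign(p_par) * np.sqrt(disc)   # root closest to the original momentum
        return P1 + (new_par - p_par) * n, True

    # example: a 1D hop upward by 0.01 hartree with M = 2000 a.u.
    P2, ok = rescale_momentum(np.array([12.0]), np.array([1.0]), 0.0, 0.01, 2000.0)
    print(ok, P2)   # True, |P2| = sqrt(12**2 - 2*2000*0.01) = sqrt(104)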

The SCMC provides a first principles method for calculating non-adiabatic molecular dynamics, with similar cost to surface hopping methods. The calculation occurs in two main steps: (A) the non-adiabatic propagation of classical trajectories, and (B) the use of trajectory data to sample the integrals in Eq. (5), with cij[Rc(t)] = dij(t). The classical propagation is similar to standard surface hopping algorithms.29,38,76 For simplicity, we assume that we have only two electronic states to consider. Generalization to higher numbers of states is straightforward.71 In Subsections III A and III B, we describe, in algorithmic fashion, the surface-hopping dynamics, A, and the Monte Carlo calculation of the time-dependent semiclassical wavefunction, B. Figure 1 shows a diagrammatic map of the algorithm.

FIG. 1. Example of the semiclassical Monte Carlo surface hopping algorithm for a two electronic state system. Green boxes – Part A (dynamics). Purple boxes – Part B (wavefunction calculation). Definitions are the same as in the text.
  • All trajectories are initialized with the same position and momentum. Various tracked values are initialized (see Fig. 1). We begin propagation of the nuclear wavepackets.

  • Begin time step. Supplied with a molecular geometry, ${\bf R}_c$, quantum chemistry methods (e.g., DFT and TDDFT) are used to calculate the electronic state energies, $E_i(t)$, the Hellmann-Feynman forces (electronic gradients), ${\bf F}_c$, the first order non-adiabatic coupling vectors (NACV), ${\bf d}_{ij} ({\bf R}_c) = \langle \Psi _i ({\bf R}_c) \vert \frac{\partial }{\partial {\bf R}_c} \Psi _j ({\bf R}_c) \rangle$, and the scalars $d_{ij}(t) = \langle \Psi _i ({\bf R}_c(t)) \vert \frac{\partial }{\partial t} \Psi _j ({\bf R}_c(t)) \rangle = {\bf d}_{ij}({\bf R}_c) \cdot \dot{{\bf R}}_c(t)$.

  • The classical particles are propagated for one time step, δt, on the current adiabatic surface, i, using the Newton/Lagrange equations of motion,
    \begin{align} \frac{\partial }{\partial t} {\bf P}_c(t)={\bf F}_c(t)= -\frac{\partial }{\partial {\bf R}_c} E_i( {\bf R}_c(t)) , \end{align}
    (11)
    by some numerical method, e.g., Verlet or Runge-Kutta. The classical action along the trajectory,
    \begin{align} S(t)&= \int ^{t}_0 dt^{\prime } \, \sum _{\alpha }^{N} \frac{{P}_\alpha ^2(t^{\prime })}{2M_\alpha } - E(t^{\prime }) \nonumber\\ &= \sum _{\alpha }^{N}\int ^{{R}_c^\alpha (t)}_{{R}^\alpha _{c\,0}} d\bar{{R}}_c^\alpha \,{P}_\alpha (\bar{{R}}_c^\alpha ) , \end{align}
    (12)
    is calculated for subsequent use in part B. We assume our propagated gaussian nuclear wavepackets are "free," i.e., they broaden with time, independent of the PES.
  • At the end of the time step, we determine whether or not to hop to a new state, j. First, we determine whether the hop is allowed or frustrated by the assumed conservation of energy, i.e.,
    \begin{align} \frac{{\bf P}_c^2(t^{\prime })}{2M} + E_i({\bf R}_c) \ge E_j({\bf R}_c) . \end{align}
    (13)
    If the hop is allowed, the hopping probability, γij(t), is calculated; if not, γij(t) = 0. Unlike other surface hopping methods,29,38,76 the final result is formally independent of the choice of hopping rate, provided that the hopping rate is nonzero wherever dij(t) is nonzero. However, the choice of hopping rate is crucial for achieving rapid convergence of the result.
  • At each time step, the hopping rate to the new state is added to the integral
    \begin{align} \Omega ({\bf t})=\sum _{l=0}^{l=m} \int _{t_l}^{t_{l+1}} dt^{\prime } \, \gamma _{s_l\rightarrow s_{l+1}}[{\bf R}_c(t^{\prime })] . \end{align}
    (14)
    Here, m is the number of hops over the course of the trajectory, and $t_l$ is the time at which the lth hop occurs ($t_{m+1}\equiv t$, $s_{m+1}\equiv j$). If RN < γij, where RN is a generated random number between 0 and 1, then the trajectory hops to state j.
  • If a hop occurs, the products
    \begin{align} &D(t_{m-1}\ldots t_0)=\prod _{l=0}^{m-1} d_{s_{l}\, s_{l+1}}(t_l), \end{align}
    (15)
    and
    \begin{align} &\Gamma (t_{m-1}\ldots t_0) =\prod _{l=0}^{m-1} \gamma _{s_l\rightarrow s_{l+1}}(t_l) \end{align}
    (16)
    are multiplied by dij(t) and γij(t), respectively. The number of hops, m, is increased by one.

The procedure is repeated for the next time step on either the original or the new state. These dynamics continue for the desired time. Once completed, the final position, ${\bf R}_c(t)$, and momentum, ${\bf P}_c(t)$, of the trajectory are stored. The process is repeated for the specified number of trajectories. In calculations of realistic molecular systems, step ${\bf A2}$, i.e., the quantum chemistry calculation of state energies, forces, and non-adiabatic couplings, will be the most computationally demanding step. However, due to the independence of the dynamical trajectories, the loop beginning at A1 is trivially parallelizable. This will greatly aid in the application of the method to larger systems.
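To make part A concrete, below is a compact sketch (Python, ℏ = 1) of a single 1D trajectory with two electronic states. The callables energies(x), force(x, i), and nacv(x) are placeholders for the quantum chemistry quantities of step A2; the per-step hop probability is taken as γ δt with γ(t) = |d12(t)|, and the momentum is rescaled with exact energy conservation at each hop. These specific choices are illustrative rather than prescriptive.

    import numpy as np

    def run_trajectory(x, p, state, M, dt, nsteps, energies, force, nacv, rng):
        """One SCMC trajectory (part A): velocity-Verlet propagation on the current
        adiabatic surface, stochastic hops with rate |d12(t)|, and accumulation of the
        action S (Eq. (12)), the integrated rate Omega (Eq. (14)), and the products
        D (Eq. (15)) and Gamma (Eq. (16))."""
        S, Omega, D, Gamma, m = 0.0, 0.0, 1.0, 1.0, 0
        f = force(x, state)
        for _ in range(nsteps):
            # velocity Verlet step on surface `state` (Eq. (11))
            p_half = p + 0.5 * dt * f
            x = x + dt * p_half / M
            f = force(x, state)
            p = p_half + 0.5 * dt * f
            e = energies(x)                                  # (E_1(x), E_2(x))
            e_cur, e_new = e[state], e[1 - state]
            S += dt * (p**2 / (2.0 * M) - e_cur)             # accumulate the classical action
            d12_t = nacv(x) * p / M                          # scalar coupling d12(t) = d12(x) * xdot
            allowed = p**2 / (2.0 * M) + e_cur >= e_new      # energy criterion, Eq. (13)
            gamma = abs(d12_t) if allowed else 0.0
            Omega += gamma * dt                              # running integral of the rate
            if rng.random() < gamma * dt:                    # attempt a hop
                D *= d12_t
                Gamma *= gamma
                p = np.sign(p) * np.sqrt(p**2 - 2.0 * M * (e_new - e_cur))   # rescale momentum
                state, m = 1 - state, m + 1
        return dict(x=x, p=p, state=state, m=m, S=S, Omega=Omega, D=D, Gamma=Gamma)

Each trajectory returns only the scalars needed in part B and is independent of all the others, so, as noted above, the loop over trajectories can be distributed trivially.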

Information collected by the surface hopping dynamics is further analyzed in the second step at negligible computational expense when compared to part A. Namely, we calculate the nuclear wavefunctions,

\begin{align} {\bf \Psi }_i({\bf R},t)=\sum _m \int d^m {\bf t} \,\, {\bf I}_i^m ({\bf R},{\bf t}) , \end{align}
(17)

or the probabilities, $P_i = \int d{\bf R}\, |\Psi_i({\bf R}, t)|^2$. Here ${\bf t} \equiv \{t_1, \ldots, t_m, t\}$ is the set of time integration variables and $\int d^m{\bf t} \equiv \int ^t_0 dt_m \int ^{t_m}_0 dt_{m-1} \ldots \int ^{t_2}_0 dt_1$. We independently sample the integrals $\int d^m {\bf t} \,\, {\bf I}_i^m ({\bf R},{\bf t})$, which differ in the number of hops, m, and the final state, i. Additionally, if there is significant separation between wavepackets, e.g., wavepackets which are reflected or transmitted in 1D, then the integrals contributing to those wavepackets are also sampled separately. While not required formally, this consideration of spatial overlap significantly improves convergence rates with minimal loss of information. Thus, trajectories are sorted by the number of hops and the final electronic state, and grouped into spatially separated contributors to the partial wavefunctions.

Using the standard Monte Carlo approach, the integral $\int d^m {\bf t} \,\, {\bf I}_i^m ({\bf R},{\bf t})$ can be expressed in terms of an expectation value

\begin{eqnarray} \int d^m {\bf t}\, {\bf I}_i^m ({\bf R},{\bf t}) &=& \int d^m {\bf t} \frac{{\bf I}_i^m ({\bf R},{\bf t})}{\rho ^m_i({\bf t}) } \times \rho ^m_i({\bf t}) \nonumber\\ &\approx & \frac{1}{N^m_i} \times \sum _{k=0}^{N^m_i} \frac{{\bf I}_{i}^m ({\bf R}_k,{\bf t}_k)}{ \rho _i^m({\bf t}_k)} . \end{eqnarray}
(18)

Here, k is the index of a trajectory from part A, $N_i^m$ is the total number of trajectories with m hops ending in state i, and $\rho _i^m({\bf t}_k)$ is the value of the probability distribution, $\rho _i^m({\bf t})$, for the trajectory k.

The kth value of the probability distribution can be expressed as $\rho ^m_i({\bf t}_k) = \frac{\Gamma _{i}^m({\bf t}_k)}{A^m_{i}}$. Here $\Gamma _{i}^m({\bf t}_k)$ is given by Eq. (16), and $A^m_{i}=\int d^m{\bf t} \, \Gamma _{i}({\bf t})$ is the normalization coefficient for trajectories with m hops finishing in state i. $A^m_{i}$ can be determined using statistical considerations of the stochastic process defined by the rate γ. We consider the probability of a single trajectory finishing in state i with m hops for all values of t (see the supplementary material of Ref. 71),

\begin{align} \text{p}_i=\int d^m {\bf t} \, \Gamma _{i}({\bf t})e^{-\Omega _{i}({\bf t})} . \end{align}
(19)

This probability can also be expressed as an expectation value, using the same probability distribution, $\rho _i^m({\bf t})=\frac{\Gamma _{i}({\bf t})}{A_i^m}$, from Eq. (18),

\begin{align} \text{p}_i= \frac{A_i^m}{N_i^m} \sum _{k = 0}^{N_i^m} e^{-\Omega _{i}({\bf t}_k)} . \end{align}
(20)

$\Omega_{i}({\bf t})$ is given by Eq. (14). We seek the value of $A_i^m$. Thus, we assume the number of trajectories, $N^m_i$, is sufficiently large that we can replace $\text{p}_i$ with $N_i^m / N$, and rearrange

\begin{eqnarray*} A_i^m & \approx & \big(N_i^m\big)^2 \bigg [ N \sum _{k=0}^{N_i^m} e^{-\Omega _{i}^m ({\bf t}_k)}\bigg ]^{-1} , \end{eqnarray*}

and

\begin{eqnarray} \rho _i^m({\bf t}_k) &=& \Gamma _i^m({\bf t}_k) \frac{N}{\big(N_i^m\big)^2} \sum _{k=0}^{N_i^m} e^{-\Omega _i^m ({\bf t}_k)} . \end{eqnarray}
(21)
  • We first determine $N_i^m$ and calculate the sum in Eq. (21) for every i/m pair.

  • We calculate each $A_i^m$.

  • For a given trajectory, ${\bf I}_{i}^m ({\bf R}_k,{\bf t}_k)$ is easily calculated, see Eq. (5), from the surface hopping data, Eqs. (11), (12), and (15),
    \begin{eqnarray} {\bf I}_{i}^m ({\bf R}_k,{\bf t}_k) &=& D_{k}({\bf t}_k) \times e^{i [ S^i ({\bf t}_k) -{\bf P}^k (t) {\bf R}_c^k (t) ] } \nonumber\\ & & \times\, \bm{\phi } \big({\bf R},{\bf R}_c^k,{\bf P}_c^k,t\big) , \end{eqnarray}
    (22)
    where ϕ is a “free” gaussian wavepacket,
    \begin{align} \bm{\phi } \big({\bf R},{\bf R}_c^k,&{\bf P}_c^k,t\big)= \root 4 \of {\frac{M^2 \sigma ^2 }{\pi [ M^2\sigma ^4 + t^2]}}\, e^{i{\bf P}^k(t){\bf R}}\, e^{-\frac{[{\bf R}-{\bf R}_c^k (t)]^2}{2(\sigma ^2 + \frac{it}{M})}} , \end{align}
    (23)
    with the final position and momentum of the trajectory k, and an initial width, σ. At this point the nuclear wave functions can be calculated by summing over all the trajectories, Eq. (18).
  • Once Ψi(R, t) is known, expectation values (e.g., x, P, density matrix) can be calculated. When the number of trajectories is low, the random error in Ψi(R, t) can be reduced by normalization. However, the effect is minimal in a well converged calculation.
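To make the bookkeeping of part B explicit, the following is a short sketch (Python) of the estimator of Eqs. (18) and (21)–(23) for a single channel (fixed number of hops m, final state i, and spatially separated outcome), assuming each trajectory record carries the quantities accumulated in part A. This is an illustrative reconstruction of the post-processing step, not production code.

    import numpy as np

    def free_gaussian(R, t, Rc, Pc, S, sigma0, M):
        # wavepacket of Eqs. (3)/(23), as in the earlier sketch; the exp(iP(R-Rc)) factor
        # already carries the -P*Rc phase written explicitly in Eq. (22)
        w = sigma0**2 + 1j * t / M
        norm = (M**2 * sigma0**2 / (np.pi * (M**2 * sigma0**4 + t**2))) ** 0.25
        return norm * np.exp(-(R - Rc)**2 / (2.0 * w) + 1j * Pc * (R - Rc) + 1j * S)

    def partial_wavefunction(R, t, channel, N_total, sigma0, M):
        """Monte Carlo estimate of one partial wavefunction, Eq. (18).
        channel : list of trajectory records (dicts with x, p, S, D, Gamma, Omega)
                  sharing the same number of hops, final state, and spatial outcome.
        N_total : total number of trajectories run over all channels."""
        n = len(channel)                                          # N_i^m for this channel
        boltz = sum(np.exp(-tr['Omega']) for tr in channel)       # sum_k exp(-Omega_k), Eq. (21)
        psi = np.zeros_like(R, dtype=complex)
        for tr in channel:
            rho = tr['Gamma'] * N_total * boltz / n**2            # sampling density, Eq. (21)
            # integrand of Eq. (22): product of d's, action phase, free gaussian wavepacket
            I_k = tr['D'] * free_gaussian(R, t, tr['x'], tr['p'], tr['S'], sigma0, M)
            psi += I_k / rho
        return psi / n

    # channel probability: P_i^m = integral |psi|^2 dR, e.g.
    # P = np.trapz(np.abs(partial_wavefunction(x, t, channel, N, 1.0, 2000.0))**2, x)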

Some notes on the case of more than two electronic states: (1) The rate in Eq. (14) would be replaced with a sum over the rates to all possible new states. (2) Eqs. (15) and (16) remain unchanged. (3) The state to hop to can be determined as in Ref. 38. (4) For every possible path through intermediate electronic excited states, there is an integral which must be sampled independently; thus each distinct path has a different probability distribution, Eq. (21).

In order to test the SCMC procedure, we consider three two-level models proposed by Tully.38 Figure 2 shows the energy eigenstates and NACVs of these three one-dimensional problems. For the exact form of the Hamiltonians see Ref. 38. For these three problems, scattering probabilities calculated using the SCMC, FSSH, and Ehrenfest methods are compared to the exact time-dependent Schrödinger equation in Gorshkov et al.71 In Figure 3, we reproduce those scattering results. For Problems 1 and 3, 25 000 and 10 000 trajectories were used for the SCMC and FSSH approaches, respectively. For Problem 2, 75 000 trajectories were used in the SCMC calculation, while only 10 000 were used in the FSSH calculation. Here we seek to provide an in-depth analysis of these results and insight into the method. All parameters and results in this section are in atomic units.

FIG. 2. Test problems. (Top left) Tully Problem 1 – Single avoided crossing. (Top right) Tully Problem 2 – Double avoided crossing. (Bottom left) Tully Problem 3 – Extended coupling with reflection. (Bottom right) Three Level Model X. Green dotted (purple dashed/blue dashed-dotted) line is the lower (upper/middle) electronic eigenstate. Black (red) solid line is the non-adiabatic coupling vector, d12(23)(x). All values are in atomic units.
FIG. 3. Scattering probabilities for Tully's problem set, using the time-dependent Schrödinger equation (black solid line), Ehrenfest method (blue dotted line), fewest switches surface hopping (purple circles), and SCMC method (green squares). (a) Problem 1 – Transmission on the lower (inset: upper) surface. (b) Problem 2 – Transmission on the lower (inset: upper) surface. (c) Problem 3 – Reflection on the lower (inset: upper) surface. (d) Problem 3 – Transmission on the upper (inset: lower) surface. Results previously reported in Ref. 71.

Problem 3 provides a simple model problem where the standard implementation of the FSSH quantitatively and qualitatively fails to reproduce the exact TD-Schrödinger result, see Figs. 3(c) and 3(d). It is well known that this failure is due to the lack of decoherence.38,77 The equation of motion for the electronic density matrix, or complex expansion coefficients, does not include the effect of bifurcation of nuclear wavepackets. Corrections can be introduced by hand into the FSSH algorithm to correct for this problem.47,49–61 In the SCMC approach, we do not need to propagate the electronic density matrix in order to determine hopping rates. We find the simple hopping rate γ1 → 2(t) = |d12(t)| leads to accurate results for all of the test problems, see Figs. 3 and 4. Thus, since we do not utilize the electronic density matrix, we do not encounter the “over coherence” problem. The left (right) column of Figure 4 shows the results of the SCMC (time-dependent Schrödinger) calculation for Problem 3 with an initial wavefunction

\begin{eqnarray*} \Psi (t=0,x)= \left(\begin{array}{c}(\pi \sigma ^2)^{-\frac{1}{4}}\times \text{exp}\lbrace {ikx - \frac{(x-x_0)^2}{2\sigma ^2}}\rbrace\\[6pt] 0 \end{array}\right), \end{eqnarray*}

with initial position x0 = −12, momentum k = 10, and width $\sigma =\sqrt{200}/k$, starting on the lower eigenstate, |ψ1⟩. The snapshot is taken after the wavepacket has gone through the interaction region, with a portion being transmitted on the lower adiabatic surface (bottom), or reflected on either the lower (middle) or upper (top) surface. The SCMC wavepackets have the same momenta, p, and positions, x, as the exact solution, indicating that our semiclassical dynamics, i.e., the assumptions of energy conservation and Newtonian equations of motion, are reasonable. The transmitted wavepacket (bottom) calculated by SCMC shows significantly less broadening than the exact result. This is due to our “free” gaussian approximation, which ignores the increase in broadening due to the negative second derivative of the energy of the lower surface from x = −5 → 0. In principle, this is simple to account for;78 however, calculation of the second derivative of the PES would be prohibitively expensive in a many-dimensional problem.

FIG. 4. Scattered wavepackets for Problem 3 at t = 4059. (Left column) SCMC method using 25 000 trajectories. (Right column) TD Schrödinger equation. (Top (center)) Reflected wavepacket on the upper (lower) electronic eigenstate. (Bottom) Transmitted wavepacket on the lower electronic eigenstate. (Colored solid) Wavefunction. (Black dashed) Probability density.

To test the limits of our method, we consider the three level crossing “Model X” problem from Ref. 48 (see Ref. 48 for the Hamiltonian). The PESs can be seen in Fig. 2(d). We consider a particle entering from negative x on the middle surface. The particle has a well-defined momentum (a plane wave for the exact solution, $\sigma =\sqrt{200\,000}/k$ for SCMC). The large value of σ, as well as the multiple crossings and the possibility of reflection, makes “Model X” a very difficult problem for semiclassical and surface hopping methods. However, with enough trajectories, 5 × 105, the SCMC can approach the exact solution. Figure 5 shows the unnormalized scattering probabilities for this problem, with SCMC compared to the exact solution. The most difficult part of the exact dynamics to recover is the period and phase of the oscillations in the transmission on the lower surface.48 The SCMC method correctly reproduces this phase and period. Within the Fewest Switches framework, careful corrections for decoherence, e.g., augmented-FSSH,79 as well as a phase correction, e.g., the Shenvi phase correction,59 are required to correctly achieve the period and phase of the oscillation.48

FIG. 5. Scattering probabilities for the three level crossing problem (Fig. 2(d)), using the time-dependent Schrödinger equation (purple dotted line) and the SCMC method (green solid line). (Top left) Reflection on the lower surface. (Top right) Reflection on the middle surface. (Bottom left) Transmission on the lower surface. (Bottom right) Transmission on the upper surface. Schrödinger equation results were previously reported in Ref. 48.

While the SCMC method does a good job of reproducing the exact result, it does so at the cost of a high number of trajectories. For a problem like “Model X” significant improvement in the convergence could be achieved by using the SCMC only in the vicinity of the strong NAC, then adiabatically propagating wavepackets with the normalization, position, momentum, and phase given by SCMC. This is similar, in essence, to the branching scheme recently proposed by Herman, which significantly improved convergence in a two crossing problem.80 

We have shown how the details of the problem, i.e., the number and nature of the level crossings, can affect the number of trajectories required: 2.5 × 104 for a simple avoided crossing (∼1%−2% absolute error), with 3× as many required for the double avoided crossing. In Subsection IV A, we discuss the factors which lead to this increased cost in trajectories.

While the SCMC method does a good job of reproducing the exact result, including when the FSSH and Ehrenfest methods fail, it does so at an increased cost in trajectories. In order to determine the areas where SCMC is most applicable, we seek to understand how the convergence rate is affected by different models and parameters. First, we look at the simple single level crossing, Problem 1. Figure 6 (inset) shows the relative error, δrel, in the calculated probabilities of transmission on the upper (lower) surface, TU (TL), as a function of the total number of trajectories, N. For all momenta and both probabilities, the convergence obeys an approximate power law of the form $\delta _{\text{rel}}=\frac{a}{\sqrt{N}}+b$ (see line fits). This $O(\frac{1}{\sqrt{N}})$ dependence of the error is standard for Monte Carlo methods. For TU, we see faster convergence with increasing initial momentum.
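Fits of this form can be generated with a few lines of Python, as sketched below; the (N, δrel) values are hypothetical placeholders, not the data of Fig. 6.

    import numpy as np
    from scipy.optimize import curve_fit

    def mc_error(N, a, b):
        # standard Monte Carlo convergence model: delta_rel = a / sqrt(N) + b
        return a / np.sqrt(N) + b

    # hypothetical (number of trajectories, relative error in %) pairs
    N = np.array([1e3, 5e3, 1e4, 5e4, 1e5])
    err = np.array([9.5, 4.6, 3.4, 1.8, 1.4])

    (a, b), _ = curve_fit(mc_error, N, err)
    print(f"a = {a:.1f}, b = {b:.2f}")   # b estimates the residual (systematic) error floor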

FIG. 6. Average relative error percentage (δrel) of the SCMC calculation, without normalization, for Problem 1 as a function of the number of trajectories (N) for k = 10, 20, and 30. Twenty calculations per data point. Filled (empty) markers – transmission probability on the upper (lower) surface. Solid (dashed) lines are the respective fits to the data of the form $\delta _{\text{rel}}=\frac{a}{\sqrt{N}}+b$.

This k dependence can be understood by looking at the phase in Eq. (22). Assuming a one-dimensional system with two electronic states, we can calculate the phase for a trajectory which begins on the lower surface and hops once,

\begin{eqnarray} &&\hspace*{-4pt} F_1(t_f,t_1)\nonumber\\ &&\hspace*{-4pt}\quad= P_2(t_f) \Bigg [\int _{0}^{t_1} dt \frac{P_1(t)}{M} + \int _{t_1}^{t_f} dt \frac{P_2(t)}{M} +x_0 \Bigg ] \nonumber\\ &&\hspace*{-4pt}\qquad+\int _{0}^{t_1} dt \Bigg[ \frac{P_1(t)^2}{2M} -E_1(t)\Bigg] + \int _{t_1}^{t_f} dt \Bigg[ \frac{P_2(t)^2}{2M} -E_2(t) \Bigg] .\nonumber\\ \end{eqnarray}
(24)

Since all trajectories have the same final and initial times, we can subtract off any term which does not depend on the time or position of the hop. Additionally, if we assume that the initial momentum, k, is large enough that we do not have any reflection, and tf is large enough that P2(tf) is a constant, we can rewrite Eq. (24) as a relative phase that depends on the position of the hop,

\begin{align} \tilde{F}_1(x_1) =&\, P_{2f} \int _{x_0}^{x_1} dx \Bigg [1-\frac{P_2(x)}{P_1(x)}\Bigg ] \nonumber\\ &+\int _{x_0}^{x_1} dx \Bigg [P_1(x)-\frac{P_2(x)^2}{P_1(x)}\Bigg ] . \end{align}
(25)

With the same considerations we can write the relative phase for a trajectory with two hops,

\begin{align} \tilde{F}_2(x_1,x_2) =&\, P_{1f} \int _{x_1}^{x_2} dx \Bigg [1-\frac{P_1(x)}{P_2(x)}\Bigg ] \nonumber\\ &+\int _{x_1}^{x_2} dx \Bigg [P_2(x)-\frac{P_1(x)^2}{P_2(x)}\Bigg ] . \end{align}
(26)
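For illustration, Eq. (25) can be evaluated numerically as sketched below (Python). The model potentials are placeholders (flat surfaces with a constant gap, not Tully's potentials), and the local momenta are obtained from energy conservation, $P_i(x) = \sqrt{2M[E_{tot} - E_i(x)]}$, assuming no reflection.

    import numpy as np

    def relative_phase_one_hop(x1, x0, k, M, E1, E2, npts=2000):
        """Relative phase F~1(x1) of Eq. (25) for a single-hop trajectory."""
        Etot = k**2 / (2.0 * M) + E1(np.array([x0]))[0]    # total energy of the incoming packet
        x = np.linspace(x0, x1, npts)
        P1 = np.sqrt(2.0 * M * (Etot - E1(x)))             # momentum on the lower surface
        P2 = np.sqrt(2.0 * M * (Etot - E2(x)))             # momentum on the upper surface
        P2f = P2[-1]                                       # final momentum (asymptotic for this flat model)
        term1 = P2f * np.trapz(1.0 - P2 / P1, x)           # first integral in Eq. (25)
        term2 = np.trapz(P1 - P2**2 / P1, x)               # second integral in Eq. (25)
        return term1 + term2

    # illustrative flat surfaces separated by a constant gap of 0.01 hartree
    E1 = lambda x: 0.0 * x
    E2 = lambda x: 0.01 + 0.0 * x
    print(relative_phase_one_hop(x1=5.0, x0=-10.0, k=10.0, M=2000.0, E1=E1, E2=E2))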

In Figure 7, we can see that the number of oscillations which must be sampled is reduced in both the one-hop ((a) and (b)) and two-hop ((c)–(f)) integrals when we go from the low momentum, k = 10, to the high momentum, k = 30. Indeed, for integrals with any number of hops, the number of oscillations decreases with increasing momentum. The “smoother” integrands associated with higher momentum are easier to sample via Monte Carlo methods. For TL in Figure 6, this effect competes with constant error sources, due to, e.g., the free gaussian approximation, which appear as an increasing relative error with decreasing average probability (the probability of TL decreases with increasing k). This leads to the non-monotonic k dependence.

FIG. 7. Relative phases for Problem 1, for single ((a) and (b)), Eq. (25), and double ((c)–(f)), Eq. (26), hop trajectories for k = 10 (left) and k = 30 (right). (a) and (b) Black solid – |d12(x)|, purple dotted – $d_{12}(x)\times \text{Re}[\tilde{F}_1(x)]$, green dashed – $d_{12}(x)\times \text{Im}[\tilde{F}_1(x)]$. (c) and (d) Color map of $\vert d_{12}(x_1)d_{21}(x_2) \times \text{Re}[\tilde{F}_2(x_1,x_2)] \vert$. (e) and (f) Color map of $\vert d_{12}(x_1)d_{21}(x_2) \times \text{Im}[\tilde{F}_2(x_1,x_2)] \vert$. ∫dx |d12(x)| ≃ 1.57.

When going from the single-crossing (Problem 1) to the double-crossing (Problem 2) system, the integrated area of |d12(x)| increases from 1.57 to 2.60. The number of trajectories required to sample an integral with m hops scales approximately with ∫dx |d12(x)| as

\begin{align} \frac{1}{m!}\Bigg [\int dx \, \vert d_{12}(x)\vert \Bigg ]^m . \end{align}
(27)

However, integrals with more hops, generally, will contribute less to the total wavefunction. Additionally, oscillations due to the phases, Eqs. (25) and (26), increase for Problem 2 (see Fig. 8). These two, not independent, effects result in a much slower convergence rate for Problem 2 as compared to Problem 1 or 3. It was shown in Ref. 71 that ∼3 times the number of trajectories is required to obtain similar precision for Problem 2 as for Problems 1 and 3.
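As a rough consistency check of this scaling, the two-hop factor in Eq. (27), $\frac{1}{2}[\int dx\,|d_{12}(x)|]^2$, grows from $1.57^2/2 \approx 1.2$ for Problem 1 to $2.60^2/2 \approx 3.4$ for Problem 2, i.e., roughly 2.7 times more two-hop trajectories must be sampled, which is broadly consistent with the ∼3-fold increase quoted above once the stronger phase oscillations of Fig. 8 are also taken into account.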

FIG. 8. Problem 2 – Relative phases, for single (a), Eq. (25), and double ((b) and (c)), Eq. (26), hop trajectories for k = 15. (a) Black solid – |d12(x)|, green dotted – $d_{12}(x)\times \text{Re}[\tilde{F}_1(x)]$, purple dashed – $d_{12}(x)\times \text{Im}[\tilde{F}_1(x)]$. (b) Color map of $\vert d_{12}(x_1)d_{21}(x_2) \times \text{Re}[\tilde{F}_2(x_1,x_2)] \vert$. (c) Color map of $\vert d_{12}(x_1)d_{21}(x_2) \times \text{Im}[\tilde{F}_2(x_1,x_2)] \vert$. ∫dx |d12(x)| ≃ 2.60.

In principle, the hopping rate, γij, is arbitrary. It is simply the choice of a probability distribution for sampling the integrals in Eq. (5). A better probability distribution means that fewer samples are required to accurately calculate the correct result: the closer the probability distribution is to the absolute value of the normalized integrand, the better the sampling. If we knew the normalization factor for each integrand $I_i^m$, then we would not need to run simulations. Even if we could make a good estimate of these normalization factors, we would have to run separate sets of trajectories, with different hopping rates, to sample different outcomes. Additionally, it is not clear, in general, how to include the complex phases in a probability distribution, and it is difficult to know the phases in Eq. (22) a priori. However, by choosing γij(t) = |dij(t)| we can at least include the factor D(t) in the probability distribution. D(t) provides bounds and amplitude for the integrands in Eq. (5). While there are methods to improve the sampling of complex-valued integrands, e.g., the stationary phase approximation,81,82 they emphasize sampling in the same region as D(t), i.e., regions with small ΔE between the PESs, and therefore lead to minimal or no improvement. Investigating other hopping rate choices, which would effectively include the effects of the phases, is an ongoing project.

One choice which may come to the reader's mind is the fewest switches hopping rate from Tully,38

\begin{eqnarray*} \gamma ^{\text{FS}}_{i \rightarrow j} (t) &=& 2\text{Im}[a_{ji}(t)V_{ji}] - 2\text{Re}\Bigg [\frac{a_{ji}(t) {\bf d}_{ji}}{a_{ii}(t)}\Bigg ] \nonumber\\ &=& 0 \,\, \text{if} \,\, < 0 , \end{eqnarray*}

with

\begin{eqnarray} \frac{\partial }{\partial t} a_{ji}(t) &=& \sum _{k} \Bigg [ i(a_{jk}(t)V_{ki}-V_{jk}a_{ki}(t)) \nonumber\\ && +\,a_{jk}(t){\bf d}_{ki}-{\bf d}_{jk}a_{ki}(t) \Bigg ] . \end{eqnarray}
(28)

Here a is the electronic density matrix. Figure 9 shows the relative error of the SCMC calculation for the different hopping rates, $\gamma ^{\text{FS}}_{i \rightarrow j}$ and $|d_{ji}|$. Both in the case where the FSSH method works (Problem 1 – Fig. 9(a)) and in the case where it fails (Problem 2 – Fig. 9(b)), the error with $\gamma ^{\text{FS}}_{i \rightarrow j}$ is extreme (see Fig. 9(b) inset) if the results are unnormalized, and still high even when normalized (Fig. 9). There is a problem with using $\gamma ^{\text{FS}}_{i \rightarrow j}$: after one surface hop, the returning hop is frustrated, for a time, due to the resulting negative sign in $\gamma ^{\text{FS}}_{i \rightarrow j}$. This leads to regions of vanishing hop probability in trajectories with more than one hop. Even without the phases this would, in principle, lead to erroneous results.

FIG. 9. Averaged relative error percentage (δrel) of the SCMC calculation as a function of the number of trajectories (N). (a) Problem 1, transmission probability on the upper surface, with normalization, for k = 10, 20, and 30. (b) Problem 2, transmission probability on the lower surface, with normalization, for k = 15, 30, and 50. (b) (inset) Averaged relative error percentage of the unnormalized results. Filled markers are for γ1 → 2 = |d21|. Empty markers are for $\gamma _{1\rightarrow 2}= \gamma ^{FS}_{1\rightarrow 2}$. Solid (dashed) lines are the respective fits to the data of the form $\delta _{\text{rel}}=\frac{a}{\sqrt{N}}+b$.

Additionally, when the integrals are unevenly sampled without consideration of the phases, as they are when using $\gamma ^{\text{FS}}_{i \rightarrow j}$, the sensitive constructive/destructive behavior of the phases leads to large errors. We believe there will always be situations where an ad hoc hopping rate will lead to poor convergence due to the imperfect accounting of the phases. Thus, while we have repeatedly referred to the SCMC procedure as surface hopping with a post-processing step, the post-processing should not be blindly applied to an arbitrary surface hopping procedure, such as FSSH. However, it remains to be seen whether other surface hopping probabilities provide a good probability distribution for Monte Carlo sampling. The difference in the result before (Fig. 6 and the inset of Fig. 9(b)) and after (Figs. 9(a) and 9(b)) normalization tells us how well the method is working: if the change is minimal (compare Figs. 6 and 9(a)), the SCMC procedure is working well for the system; if the change is large (compare the body and inset of Fig. 9(b)), the SCMC procedure is failing.

The semiclassical Monte Carlo method provides an approach, which is not ad hoc, for calculating non-adiabatic dynamics in extended molecular systems. It is based on an expansion of the full molecular wavefunction into partial wavefunctions labeled by the number of transitions between electronic states. The procedure involves standard surface hopping dynamics with an additional post-processing step (calculation of the wavefunctions from the collected data). While in the surface hopping method the hopping probability controls all of the dynamics, in SCMC it is primarily the interference between trajectories with different (or similar) phases that determines the final result. For this reason, the SCMC method does not suffer from the same “over coherence” problem as the FSSH approach.

Some notes on the applicability of this method: (1) While the phases make sampling difficult at small momenta, sampling becomes easier at higher momenta. (2) Additionally, we believe that the convergence rate does not increase with an increased number of system coordinates.71 (3) Increasing the number of crossing regions increases the number of trajectories required for sampling, see Eq. (27), due to an increase in the area of non-adiabatic coupling and an increase in the number of oscillations caused by the phases. (4) While it has been shown that, in some systems, the FSSH approach requires complete coupling, i.e., the quantum mechanical electronic dynamics requires calculation of the NACV for all pairs of states ($\frac{1}{2}N_s[N_s-1]$ NACV calculations),83 the SCMC method with a hopping rate of |d_{i, j}(t)| requires only partial coupling (N_s − 1 NACV calculations); see the count comparison sketched after this list. When N_s becomes large, the calculation of NACVs can become the bottleneck of the calculation, so the improved scaling in N_s may make up for the additional trajectories needed to sample the wavefunction. (5) Alternatively, one could utilize this method only in the vicinity of non-negligible NAC, in essence attacking the multi-crossing problem one crossing at a time. We leave the development of such a procedure (with comparison to full SCMC and TD-Schrödinger calculations), continued efforts to improve convergence rates, and the application of the SCMC approach to realistic systems for future work.
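The NACV-count comparison referenced in note (4) can be made explicit; the helper below is a sketch (the function name is ours) that simply evaluates the two expressions quoted in the text:

```python
# NACV evaluations per time step: complete coupling (all state pairs)
# vs. partial coupling (current state to all others).
def nacv_counts(n_states: int) -> tuple[int, int]:
    complete = n_states * (n_states - 1) // 2   # (1/2) Ns (Ns - 1)
    partial = n_states - 1                      # Ns - 1
    return complete, partial

for ns in (5, 20, 100):
    print(ns, *nacv_counts(ns))   # e.g., 100 states: 4950 vs. 99
```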

We acknowledge the support of the U.S. Department of Energy through the Los Alamos National Laboratory (LANL) LDRD Program. LANL is operated by Los Alamos National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. DE-AC52-06NA25396. We acknowledge the support of the Center for Nonlinear Studies (CNLS) and the Center for Integrated Nanotechnologies (CINT) at LANL. We also thank J. E. Subotnik for sharing his numerical results reported in Ref. 48.

1. C. J. Cramer, Essentials of Computational Chemistry: Theories and Models (John Wiley and Sons, 2004).
2. A. Szabo and N. S. Ostlund, Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory (McGraw-Hill, 1989).
3. “Ab initio molecular dynamics,” in Computational Methods in Catalysis and Materials Science (Wiley-VCH, 2009), pp. 93–120.
4. C. Ullrich, Time-Dependent Density-Functional Theory: Concepts and Applications (Oxford University Press, 2012).
5. A. Nitzan, Chemical Dynamics in Condensed Phases: Relaxation, Transfer and Reactions in Condensed Molecular Systems (Oxford University Press, 2006).
6. G. C. Schatz and M. A. Ratner, Quantum Mechanics in Chemistry (Dover, 2002).
7. C. J. Smallwood, W. B. Bosma, R. E. Larsen, and B. J. Schwartz, J. Chem. Phys. 119, 11263 (2003).
8. G. Cui and W. Fang, J. Phys. Chem. A 115, 1547 (2011).
9. J. M. Hostettler, A. Bach, and P. Chen, J. Chem. Phys. 130, 034303 (2009).
10. A. Matsugi, J. Phys. Chem. Lett. 4, 4237 (2013).
11. M. H. Kim, L. Shen, H. Tao, T. J. Martinez, and A. G. Suits, Science 315, 1561 (2007).
12. J. Clark, T. Nelson, S. Tretiak, G. Cirmi, and G. Lanzani, Nat. Phys. 8, 225 (2012).
13. A. J. Neukirch, L. C. Shamberger, E. Abad, B. J. Haycock, H. Wang, J. Ortega, O. V. Prezhdo, and J. P. Lewis, J. Chem. Theor. Comput. 10, 14 (2014).
14. A. Kazaryan, Z. Lan, L. V. Schäfer, W. Thiel, and M. Filatov, J. Chem. Theor. Comput. 7, 2189 (2011).
15. R. Mathies, C. Brito Cruz, W. Pollard, and C. Shank, Science 240, 777 (1988).
16. R. A. Marcus, J. Phys. Chem. 67, 853 (1963).
17. P. F. Barbara, T. J. Meyer, and M. A. Ratner, J. Phys. Chem. 100, 13148 (1996).
18. P. Peumans, S. Uchida, and S. R. Forrest, Nature (London) 425, 158 (2003).
19. S. M. Falke, C. A. Rozzi, D. Brida, M. Maiuri, M. Amato, E. Sommer, A. De Sio, A. Rubio, G. Cerullo, E. Molinari, and C. Lienau, Science 344, 1001 (2014).
20. T. G. Goodson, Acc. Chem. Res. 38, 99 (2005).
21. E. Collini and G. D. Scholes, Science 323, 369 (2009).
22. H. Lee, Y.-C. Cheng, and G. R. Fleming, Science 316, 1462 (2007).
23. M. Reufer, M. J. Walter, P. G. Lagoudakis, A. B. Hummel, J. S. Kolb, H. G. Roskos, U. Scherf, and J. M. Lupton, Nat. Mater. 4, 340 (2005).
24. S. Mai, P. Marquetand, and L. González, J. Chem. Phys. 140, 204302 (2014).
25. A. Cannizzo, A. M. Blanco-Rodríguez, A. El Nahhas, J. Šebera, S. Záliš, A. Vlček, Jr., and M. Chergui, J. Am. Chem. Soc. 130, 8967 (2008).
26. R. S. Becker, A. P. Pelliccioli, A. Romani, and G. Favaro, J. Am. Chem. Soc. 121, 2104 (1999).
27. S. V. Kilina, D. S. Kilin, and O. V. Prezhdo, ACS Nano 3, 93 (2009).
28. L. Wang, D. Trivedi, and O. V. Prezhdo, J. Chem. Theor. Comput. 10, 3598 (2014).
29. J. C. Tully, J. Chem. Phys. 137, 22A301 (2012).
30. W. H. Miller, Classical-Limit Quantum Mechanics and the Theory of Molecular Collisions (John Wiley and Sons, Inc., 2007), pp. 69–177.
31. T. J. Martínez, Acc. Chem. Res. 39, 119 (2006).
32.
33. K. Drukker, J. Comput. Phys. 153, 225 (1999).
35. R. Car and M. Parrinello, Phys. Rev. Lett. 55, 2471 (1985).
36. H. Meyer and W. H. Miller, J. Chem. Phys. 70, 3214 (1979).
37. S.-I. Sawada, A. Nitzan, and H. Metiu, Phys. Rev. B 32, 851 (1985).
38. J. C. Tully, J. Chem. Phys. 93, 1061 (1990).
39. M. Barbatti, M. Ruckenbauer, F. Plasser, J. Pittner, G. Granucci, M. Persico, and H. Lischka, Wiley Interdiscip. Rev.: Comput. Mol. Sci. 4, 26 (2014).
40. A. V. Akimov and O. V. Prezhdo, J. Chem. Theor. Comput. 9, 4959 (2013).
41. A. V. Akimov and O. V. Prezhdo, J. Am. Chem. Soc. 136, 1599 (2014).
42. T. Nelson, S. Fernandez-Alberti, V. Chernyak, A. E. Roitberg, and S. Tretiak, J. Phys. Chem. B 115, 5402 (2011).
43. I. Schapiro, M. N. Ryazantsev, L. M. Frutos, N. Ferré, R. Lindh, and M. Olivucci, J. Am. Chem. Soc. 133, 3354 (2011).
44. E. Tapavicza, I. Tavernelli, and U. Rothlisberger, Phys. Rev. Lett. 98, 023001 (2007).
45. C. F. Craig, W. R. Duncan, and O. V. Prezhdo, Phys. Rev. Lett. 95, 163001 (2005).
46. E. R. Bittner and P. J. Rossky, J. Chem. Phys. 103, 8130 (1995).
47. J. E. Subotnik and N. Shenvi, J. Chem. Phys. 134, 244114 (2011).
48. J. E. Subotnik, J. Phys. Chem. A 115, 12083 (2011).
49. H. M. Jaeger, S. Fischer, and O. V. Prezhdo, J. Chem. Phys. 137, 22A545 (2012).
50. J.-Y. Fang and S. Hammes-Schiffer, J. Phys. Chem. A 103, 9399 (1999).
51. O. V. Prezhdo and P. J. Rossky, Phys. Rev. Lett. 81, 5294 (1998).
52. B. J. Schwartz, E. R. Bittner, O. V. Prezhdo, and P. J. Rossky, J. Chem. Phys. 104, 5942 (1996).
53. F. Webster, E. T. Wang, P. J. Rossky, and R. A. Friesner, J. Chem. Phys. 100, 4835 (1994).
54. M. J. Bedard-Hearn, R. E. Larsen, and B. J. Schwartz, J. Chem. Phys. 123, 234106 (2005).
55. Y. L. Volobuev, M. D. Hack, M. S. Topaler, and D. G. Truhlar, J. Chem. Phys. 112, 9716 (2000).
56. M. D. Hack and D. G. Truhlar, J. Chem. Phys. 114, 2894 (2001).
57. C. Zhu, A. W. Jasper, and D. G. Truhlar, J. Chem. Phys. 120, 5543 (2004).
58. C. Zhu, S. Nangia, A. W. Jasper, and D. G. Truhlar, J. Chem. Phys. 121, 7658 (2004).
59. N. Shenvi, J. E. Subotnik, and W. Yang, J. Chem. Phys. 134, 144102 (2011).
60. J. E. Subotnik, J. Chem. Phys. 132, 134112 (2010).
61. N. Shenvi and W. Yang, J. Chem. Phys. 137, 22A528 (2012).
62. R. Kapral and G. Ciccotti, J. Chem. Phys. 110, 8919 (1999).
63. S. Nielsen, R. Kapral, and G. Ciccotti, J. Chem. Phys. 112, 6543 (2000).
64. C.-Y. Hsieh and R. Kapral, J. Chem. Phys. 137, 22A507 (2012).
65. D. Mac Kernan, G. Ciccotti, and R. Kapral, J. Phys. Chem. B 112, 424 (2008).
66. W. Boucher and J. Traschen, Phys. Rev. D 37, 3522 (1988).
67. O. V. Prezhdo and V. V. Kisil, Phys. Rev. A 56, 162 (1997).
68. C. C. Martens and J.-Y. Fang, J. Chem. Phys. 106, 4918 (1997).
69. H. Meyer and W. H. Miller, J. Chem. Phys. 72, 2272 (1980).
70. G. Stock and M. Thoss, Phys. Rev. Lett. 78, 578 (1997).
71. V. N. Gorshkov, S. Tretiak, and D. Mozyrsky, Nat. Commun. 4, 2144 (2013).
72. M. Baer, Beyond Born-Oppenheimer: Electronic Nonadiabatic Coupling Terms and Conical Intersections (Wiley-Interscience, 2006).
74. P. Pechukas, Phys. Rev. 181, 166 (1969).
75. P. Pechukas, Phys. Rev. 181, 174 (1969).
76. M. F. Herman, J. Chem. Phys. 81, 754 (1984).
77. B. R. Landry and J. E. Subotnik, J. Chem. Phys. 135, 191101 (2011).
78. E. J. Heller, J. Chem. Phys. 62, 1544 (1975).
79. J. E. Subotnik and N. Shenvi, J. Chem. Phys. 134, 024105 (2011).
80. M. F. Herman, J. Phys. Chem. B 118, 8026 (2014).
81. N. Makri and W. H. Miller, Chem. Phys. Lett. 139, 10 (1987).
82. J. Doll, D. Freeman, and M. Gillan, Chem. Phys. Lett. 143, 277 (1988).
83. T. Nelson, S. Fernandez-Alberti, V. Chernyak, A. E. Roitberg, and S. Tretiak, J. Chem. Phys. 136, 054108 (2012).