Several strategies for simulating the ultrafast dynamics of molecules induced by interactions with electromagnetic fields are presented. After a brief overview of the theory of molecule-field interaction, we present several representative examples of quantum, semiclassical, and classical approaches to describe the ultrafast molecular dynamics, including the multiconfiguration time-dependent Hartree method, Bohmian dynamics, local control theory, semiclassical thawed Gaussian approximation, phase averaging, dephasing representation, molecular mechanics with proton transfer, and multipolar force fields. In addition to the general overview, some focus is given to the description of nuclear quantum effects and to direct dynamics, in which the ab initio energies and forces acting on the nuclei are evaluated on the fly. Several practical applications, performed within the framework of the Swiss National Center of Competence in Research “Molecular Ultrafast Science and Technology,” are presented: These include a Bohmian dynamics description of the collision of H with H2, local control theory applied to the photoinduced ultrafast intramolecular proton transfer, semiclassical evaluation of vibrationally resolved electronic absorption, emission, photoelectron, and time-resolved stimulated emission spectra, infrared spectroscopy of H-bonding systems, and multipolar force field applications in the condensed phase.

Interaction of molecules with light is the starting point of many remarkable physical and chemical processes occurring on ultrashort time scales.1–6 Due to the microscopic nature of molecules, one might expect that many such processes depend on the quantum nature of both electrons and nuclei, and indeed this is true in many situations; fortunately, however, semiclassical and even classical dynamics of nuclei can provide an accurate description of various ultrafast phenomena. Semiclassical and classical dynamics are extremely important for yet another, more pragmatic reason: Exact quantum description requires the solution of the time-dependent Schrödinger equation, which scales exponentially with the number of degrees of freedom, so even on today's supercomputers it is out of reach except for systems with a few atoms. As a result, owing to its computational efficiency, classical nuclear dynamics is extremely handy if the only quantum effects are electronic, and semiclassical dynamics is useful if the nuclear quantum effects can be described approximately. In this article, we will therefore devote some attention to all three approaches for treating nuclear dynamics.

We start with a brief summary of the theory of the interaction of molecules with light (see Sec. II A),7–9 including the various approximations made even at the quantum-mechanical level: In particular, we shall always assume the validity of the electric dipole approximation for the interaction potential but allow the fields to be nonperturbative. Then we will discuss, in turn, the time-dependent perturbation theory, valid for weak fields; the Condon approximation for the transition dipole; the rotating-wave (or quasiresonant) and zero-temperature approximations for electronic transitions; and the ultrashort-pulse and separated-pulse approximations for ultrafast laser fields.

The next three sections are devoted to the quantum, semiclassical, and classical treatment of the nuclear dynamics. In all three cases, we give a very brief general overview of the field and then provide a more detailed description of several representative methodologies, which have been either developed or used for applications in the framework of the Swiss National Center of Competence in Research “Molecular Ultrafast Science and Technology” (NCCR MUST). Within quantum dynamics (Sec. II B), these include the multiconfiguration time-dependent Hartree (MCTDH) method,10,11 a benchmark for exact quantum dynamics in systems with tens of degrees of freedom, the Bohmian dynamics,12,13 a “quantum trajectory”-based method providing an intuitive picture of quantum dynamics, but difficult to implement numerically, and the local control theory (LCT),14,15 a very efficient method to reach a control objective, such as increasing a population of a desired electronic state. In Sec. II C on semiclassical dynamics, after a general overview, we present two very simple and complementary methods: the thawed Gaussian approximation (TGA),16 based on a single classical trajectory, but requiring the Hessian of the potential energy surface along the trajectory, and the phase averaging/dephasing representation (DR),8,17,18 which relies on multiple trajectories, but does not need the Hessian. Finally, in Sec. II D, it is explained how classical force fields can be extended to treat reactive dynamics, involving bond breaking and bond formation, using the molecular mechanics with proton transfer (MMPT),19 and how polarizability can be accounted for with multipolar (MTP)20,21 force fields. It is also discussed how force fields can be parametrized using a combination of ab initio calculations and spectroscopic data. 
As one of the goals of this paper is to provide a view of the theory of ultrafast dynamics starting from a fully quantum, via semiclassical, all the way to a purely classical description of the dynamics, we were forced to select only a few, but hopefully representative, methods from each area.

Section III of this review is devoted to several applications of the methods described in Sec. II. Examples of the quantum methods include the nonadiabatic Bohmian dynamics (NABDY) treatment of the collision of the H atom with the H2 molecule and the local control theory applied to the photoinduced ultrafast intramolecular proton transfer in 4-hydroxyacridine (4-HA). Although in principle more general, the local control theory is implemented within the trajectory surface hopping (TSH) method, meaning that the electrons are treated quantum mechanically while the nuclei are treated classically. Applications of semiclassical methods include the on-the-fly ab initio semiclassical evaluation of the absorption and photoelectron spectra of ammonia, of the emission spectra of oligothiophenes with up to 105 vibrational degrees of freedom, and the calculations of the time-resolved stimulated emission spectrum of pyrazine with various extensions of phase averaging and dephasing representation. Last but not least, examples of classical molecular dynamics (MD) include the computational infrared spectroscopy of H-bonding systems and the applications of multipolar force fields in the condensed phase, including CO in myoglobin, 1D- and 2D-infrared spectroscopy of CN, protein-ligand binding, and vibrational relaxation of solvated CN.

In addition to reviewing examples of quantum, classical, and semiclassical methodologies for simulating ultrafast dynamics, our goal is to point out several emerging notions. On one hand, the term “quantum chemistry” has until rather recently typically evoked the quantum treatment of only the electronic structure. “Molecular dynamics,” on the other hand, typically meant classical nuclear dynamics in empirically parametrized force fields. These restrictions have changed over the years:

First, nuclear quantum effects are important in many fields of chemical physics ranging from spectroscopy to kinetics, and thanks to the improved efficiency of computers these effects are being included in an increasing number of simulations. That is why we devote two sections of this review to quantum and semiclassical dynamics, where the nuclear quantum effects are treated either exactly or approximately.

In many situations, however, even classical molecular dynamics (MD) simulations provide considerable insight. The classical MD is an especially useful tool when the system size becomes large and/or the dynamics beyond the first few picoseconds is important, i.e., situations, in which quantum and semiclassical methods are prohibitively expensive.

With increasing computer power, it has become possible to combine classical MD simulation with electronic structure calculations, which opened the field of ab initio MD simulations.22–24 Here the forces acting on the nuclei are computed using the ab initio electronic structure “on the fly,” i.e., only where they are required during the dynamics. This circumvents the tedious parametrization of force fields and is extremely useful if the problem at hand only requires a single or few trajectories. However, if many trajectories are needed for a statistically significant exploration of phase space,25,26 in simulations of very long-time (such as microsecond) dynamics, or in very large systems, where ab initio electronic structure remains too costly, it still pays off to construct, once and for all, a parametrized force field, which is much cheaper to evaluate repeatedly later. Yet, as the ab initio electronic structure becomes gradually more accurate and easier to evaluate, e.g., with the use of graphical processing units,27–31 it appears that ab initio dynamics will become increasingly practical and popular in the future. That is the reason why we also put an emphasis on the trajectory-based quantum and semiclassical methods, which are naturally suited for the on-the-fly ab initio implementation: among the quantum methods, it is the Bohmian dynamics, among the mixed quantum-classical methods, the trajectory surface hopping implementation of the local control theory, and among the semiclassical methods, the thawed Gaussian approximation.

Nonetheless, the reader should be warned about terminology. In this review, we are somewhat casual about what we call “ab initio”—in particular, we include density functional theory as a fair game for the electronic structure. To avoid confusion, some authors use the terms “direct dynamics” or “first-principles dynamics” for the same concept. In addition, we include both methods where the ab initio or density functional electronic structure is evaluated on the fly, i.e., during the dynamics, and methods where the electronic structure calculations are used to design analytical potential energy surfaces before the actual dynamics.

The applications chapter is followed by a summary, which further reflects on the themes of ultrafast dynamics, nuclear quantum effects, and on-the-fly ab initio dynamics, and concludes the paper. For ease of reference, the acronyms used in this article are included in Nomenclature.

In this article, we discuss molecular dynamics following either electronic or vibrational excitations induced by the interaction with electromagnetic field. While an electronic excitation can induce nonadiabatic dynamics between different electronic states, and some of the applications below do involve nonadiabatic dynamics, in this section we will focus on adiabatic dynamics, i.e., dynamics on a single electronic potential energy surface, for three reasons: first, nonadiabatic dynamics is discussed much more thoroughly in another article of this special issue,32 second, the subject of nonadiabatic dynamics, including spectroscopic detection, conical intersections, and associated geometric phase effects, has been reviewed extensively in the literature,33–39 and third, as we show below, many interesting phenomena, even in electronic spectroscopy, depend on dynamics on a single Born-Oppenheimer potential energy surface (or, in time-resolved spectroscopy with well-separated ultrashort pulses, depend on a sequence of such elementary steps, each of which takes place on a single surface).

1. Exact dynamics, electric dipole approximation, and quasiresonant condition

To justify our focus on electronically adiabatic dynamics, we start the discussion with the full molecular wave function that involves both electronic and nuclear degrees of freedom. This will be useful particularly since several applications come from electronic spectroscopy, where the electromagnetic field induces the transition of the molecule to a different electronic state, which is then followed by nuclear adiabatic dynamics on the corresponding, new potential energy surface.

Assuming, for simplicity, only two electronic states, the time-dependent molecular wavepacket can be written as

\[ |\boldsymbol{\psi}(t)\rangle = \sum_{n=1}^{2} |\psi_n(t)\rangle\,|n\rangle = \begin{pmatrix} \psi_1(t) \\ \psi_2(t) \end{pmatrix}, \]
(1)

where $|\psi_n(t)\rangle$ is a time-dependent nuclear wavepacket on the electronic surface $n$ and $|n\rangle$ the corresponding $n$th electronic state. Evolution of $|\boldsymbol{\psi}(t)\rangle$ is given by the time-dependent molecular Schrödinger equation

\[ i\hbar\frac{d}{dt}|\boldsymbol{\psi}(t)\rangle = \hat{\mathbf{H}}(t)\,|\boldsymbol{\psi}(t)\rangle, \]
(2)

driven by the Hamiltonian

\[ \hat{\mathbf{H}}(t) = \hat{\mathbf{H}}_{\text{mol}} + \hat{\mathbf{V}}_{\text{int}}(t), \]
(3)

where $\hat{\mathbf{H}}_{\text{mol}}$ is the molecular Hamiltonian and $\hat{\mathbf{V}}_{\text{int}}(t) = -\hat{\boldsymbol{\mu}}\cdot\vec{E}(t)$ the interaction potential of the molecule with the electromagnetic field $\vec{E}(t)$ via the electric transition dipole moment $\hat{\boldsymbol{\mu}}$. Above, we have introduced a compact notation in which boldface denotes $S$-component vectors in the electronic Hilbert space, such as $\boldsymbol{\psi}(t)$ ($S$, here equal to 2, is the number of relevant electronic states), or $S \times S$ matrices representing electronic operators, the hat $\hat{\ }$ denotes nuclear operators, and the arrow $\vec{\ }$ stands for vectors in the ambient, three-dimensional space. The form of the interaction potential $\hat{\mathbf{V}}_{\text{int}}(t)$ assumes the validity of the electric dipole approximation40 but allows rather strong, nonperturbative fields.

In addition, we assume that there are no nonadiabatic or spin-orbit couplings between the two electronic states and that the only electronic transitions are induced by the interaction with the electromagnetic field. To rigorously justify neglecting nonadiabatic or spin-orbit couplings, several criteria32,35,37,38 can be used, ranging from static criteria, such as the strength of nonadiabatic couplings or the size of the energy gap between electronic states, to more dynamical criteria, such as the population dynamics. Among the most rigorous dynamical criteria, “adiabaticity”41–44 is defined as the fidelity, i.e., overlap of the adiabatically and nonadiabatically evolved molecular wave functions: If adiabaticity is 1, the nonadiabatic effects can be neglected, whereas if adiabaticity is 0, they must be included in the simulation. In addition, there exist approximate methods to evaluate this adiabaticity efficiently without solving the full Schrödinger equation.41,43,44

An infrared laser field will mostly induce vibrational (or rovibrational) transitions and, as a result, one may consider only the diagonal matrix element $\hat{\vec{\mu}}_{11}$ of the transition dipole operator, setting the others to zero, and evolve only the wavepacket $|\psi_1(t)\rangle$ on the initial surface. This is true for both weak and strong fields.

In contrast, a visible or ultraviolet laser field will excite electronic transitions, and if it is approximately in resonance with the transition from state 1 to state 2, we are allowed to retain only the off-diagonal elements of the transition dipole moment

\[ \hat{\boldsymbol{\mu}} \approx \begin{pmatrix} 0 & \hat{\vec{\mu}}_{12} \\ \hat{\vec{\mu}}_{21} & 0 \end{pmatrix}. \]
(4)

This is a special case of the quasiresonant condition.45–47 

If the fields are so strong that perturbation theory breaks down, one must treat the electric field explicitly and worry about the coupled dynamics on the two surfaces—in other words, evolve the two-component state |ψ(t). In Sec. II A 2, we show that if perturbation theory is valid, one can think of the electronic transition as instantaneous and evolve the nuclear wavepacket adiabatically on the second surface.

2. Perturbation theory, zero-temperature, and Condon approximations

For sufficiently weak fields or for short interaction times, one may employ the time-dependent perturbation theory. Whereas the first-order perturbation theory is often sufficient for linear spectroscopy, the second order is required, e.g., for Raman spectroscopy and the third order (or higher) for more sophisticated nonlinear and time-resolved spectroscopic techniques.8 For brevity, in this subsection, we only consider the first-order perturbation theory, within which the molecular state evolves as

\[ |\boldsymbol{\psi}(t)\rangle = \hat{\mathbf{U}}_{\text{mol}}(t)\,|\boldsymbol{\psi}(0)\rangle - \frac{i}{\hbar}\int_{-\infty}^{t} dt'\, \hat{\mathbf{U}}_{\text{mol}}(t-t')\,\hat{\mathbf{V}}_{\text{int}}(t')\,\hat{\mathbf{U}}_{\text{mol}}(t')\,|\boldsymbol{\psi}(0)\rangle, \]
(5)

where $\hat{\mathbf{U}}_{\text{mol}}(t) = \exp(-i\hat{\mathbf{H}}_{\text{mol}}t/\hbar)$ denotes the molecular evolution operator in the absence of the electric field.

For vibrational transitions, Eq. (5) simplifies as

\[ |\psi_1(t)\rangle = \hat{U}_1(t)\,|\psi_1(0)\rangle + \frac{i}{\hbar}\int_{-\infty}^{t} dt'\, \hat{U}_1(t-t')\,\hat{\vec{\mu}}_{11}\cdot\vec{E}(t')\,\hat{U}_1(t')\,|\psi_1(0)\rangle, \]
(6)

where $\hat{U}_n(t) = \exp(-i\hat{H}_n t/\hbar)$ stands for the nuclear evolution operator in the electronic state $n$. Note that the second electronic state plays no role whatsoever. The molecule first evolves independently of the field up to time $t'$, when it feels the field instantaneously, resulting in rovibrational transitions, and then evolves up to time $t$, again with the molecular Hamiltonian only. This is integrated over all possible interaction times $t'$. Equation (6) provides a simple yet rigorous criterion for the validity of the time-dependent perturbation theory: the second term in Eq. (6), which is due to the interaction with the field, must be smaller than the first term, due to the molecular evolution in the absence of the field. Since the first term describes a unitary evolution and, therefore, has a unit norm at all times, the criterion for the validity of the perturbation theory is that the norm of the second term be much smaller than 1. In contrast to time-independent perturbation theory, where time plays no role, time-dependent perturbation theory breaks down not only with increasing magnitude of the electric field $\vec{E}(t)$ but also with increasing interaction times (i.e., longer pulses of the same strength).9

For electronic transitions, expression (5) also simplifies, but in a different way. The only interesting part is the first-order term describing the wavepacket generated by the field on the second electronic surface

\[ |\psi_2(t)\rangle = \frac{i}{\hbar}\int_{-\infty}^{t} dt'\, \hat{U}_2(t-t')\,\hat{\vec{\mu}}_{21}\cdot\vec{E}(t')\,\hat{U}_1(t')\,|\psi_1(0)\rangle. \]
(7)

This equation implies that the initial state first evolves freely on the first surface; then, at time $t'$, it interacts with the field, which instantaneously induces an electronic transition to the second electronic state; and, finally, it evolves for the remaining time on the second electronic surface. Again, this is integrated over all possible interaction times $t'$. [Here we assumed no initial occupation of the second electronic state; hence, the perturbation theory is valid if $\| |\psi_2(t)\rangle \| \ll 1$, which can be expressed approximately by requiring that $\frac{1}{\hbar}\int_{-\infty}^{t} \vec{\mu}_{21,\text{av}}\cdot\vec{E}_{\text{env}}(t')\,dt' \ll 1$, where $\vec{\mu}_{21,\text{av}}$ is the average of the transition dipole over the molecular wavepacket (or its constant value within the Condon approximation) and $\vec{E}_{\text{env}}(t)$ the slowly varying envelope of the electric field (equivalently, within a factor of 2, the electric field in the so-called rotating frame).]
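As a rough numerical illustration of this validity criterion, the following sketch evaluates the dimensionless pulse area $\frac{1}{\hbar}\int \vec{\mu}_{21,\text{av}}\cdot\vec{E}_{\text{env}}(t')\,dt'$ for a Gaussian envelope; all parameter values are hypothetical and given in atomic units ($\hbar = 1$), chosen purely for illustration:

```python
import numpy as np

# Perturbation-theory validity check: the pulse area
# (1/hbar) * integral of mu_21,av . E_env(t) dt must be much smaller than 1.
# All quantities in atomic units (hbar = 1); the numbers below are
# illustrative, not taken from any specific molecule or experiment.

HBAR = 1.0
mu_21 = 1.0          # Condon transition dipole (a.u.), hypothetical
E0 = 5e-5            # peak field strength (a.u.), hypothetical
tau = 400.0          # Gaussian pulse duration (a.u. of time), hypothetical

t = np.linspace(-5 * tau, 5 * tau, 4001)
E_env = E0 * np.exp(-t**2 / (2 * tau**2))   # slowly varying envelope

pulse_area = mu_21 * np.trapz(E_env, t) / HBAR
print(f"pulse area = {pulse_area:.3f}")
print("perturbation theory valid" if pulse_area < 0.1 else "field too strong/long")
```

For a stronger or longer pulse, the area approaches unity and the first-order treatment of Eq. (7) is no longer reliable, in line with the breakdown of time-dependent perturbation theory discussed above.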

At room temperature, most of the molecules are typically in the vibrational ground state of the initial electronic state, which is, in particular, an eigenstate of $\hat{H}_1$; hence, the first evolution operator $\hat{U}_1(t')$ yields only a phase factor $\exp(-iE_{1,0}t'/\hbar)$, which results in an overall shift of an electronic spectrum by the zero-point vibrational energy $E_{1,0}$ of the initial electronic state. As a result, in the case of electronic transitions, the only interesting dynamics occurs after time $t'$, in the second electronic state, and hence, as promised, the problem reduces to adiabatic dynamics on the second surface. The assumption that the initial state is a vibrational ground state of $\hat{H}_1$ is usually referred to as the zero-temperature approximation and avoids the necessity of Boltzmann averaging over different initial states. It is a good approximation for vibrationally resolved electronic spectroscopy.

If electronic transitions are of interest, one also frequently makes the Condon approximation,48–50 which amounts to ignoring the dependence of the transition dipole on the nuclear coordinates: $\hat{\vec{\mu}}_{12} \approx \text{const} = \vec{\mu}_{12}$. Note that removing the hat from $\hat{\vec{\mu}}_{12}$ permits taking the transition dipole in front of the integral sign in Eq. (7). In contrast, an analogous approximation is not useful for vibrational transitions. If the dipole moment $\hat{\vec{\mu}}_{11}$ were independent of nuclear coordinates, Eq. (6) would yield the trivial result $|\psi_1(t)\rangle = \big[1 + (i/\hbar)\int_{-\infty}^{t} dt'\,\vec{\mu}_{11}\cdot\vec{E}(t')\big]\hat{U}_1(t)\,|\psi_1(0)\rangle$, i.e., the field would only add an overall phase to the field-free evolution of the initial state. In particular, no vibrational transitions would occur.

3. Adiabatic quantum, semiclassical, and classical dynamics

To summarize, we have considered either weak or strong electromagnetic fields inducing either vibrational or electronic transitions and showed that only in the case of strong fields resonant with electronic transitions does one have to perform explicitly electronically nonadiabatic dynamics (we will show an example of this when discussing local control theory in Secs. II B 3 and III A 2). In the three other cases, the dynamics is adiabatic and depends only on the molecular Hamiltonian if the fields are weak (both infrared and visible to ultraviolet light, see Secs. II C 2, III B 2, III B 3, and III C 2) or on the full time-dependent Hamiltonian if the fields are infrared and strong.

It is worth mentioning that there exist several powerful, nonperturbative approaches to molecular quantum dynamics that can sometimes be even more efficient than perturbation theory, especially when higher-order perturbation theory would be required. Among these are the work of Seidner, Stock, and Domcke,33,51–53 who also provided an elegant way to analyze nonperturbative calculations of ultrafast spectra, allowing individual spectroscopic signals, resolved in time, frequency, and direction of emission, to be extracted from the total polarization, and the more recent work of Gelin, Egorova, and Domcke,54–57 who proposed a time-domain spectroscopic technique based on strong pump and probe pulses to access a temporal resolution that is not limited by the pulse duration and that cannot be obtained with weak pump and probe pulses.

When perturbation theory is sufficient, simplifications similar to those discussed above in detail for the first-order perturbation theory occur also in nonlinear, pump-probe spectroscopy if the pulses are, in addition, ultrashort (i.e., long on the electronic dephasing time scale but short on the nuclear time scale) and well separated.8,34,58–63 Although higher-order perturbation theory is required, for weak and well-separated ultrashort pulses one can compute, e.g., transient absorption or time-resolved stimulated emission spectra within the “doorway/window” picture,58 in which the interaction with the probe pulse can be treated simply as the first-order spectroscopy of a nonstationary state prepared by the pump pulse.59,60 The calculation is done by performing a sequence of “elementary” adiabatic dynamics steps on different surfaces with instantaneous switches in between (see Secs. II C 1 and III B 1). In the following, we will describe various approaches to perform this elementary, adiabatic quantum dynamics on a single electronic surface, which can be stated as the problem of solving one of the following time-dependent Schrödinger equations for the nuclei:

\[ i\hbar\frac{d}{dt}|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle \quad\text{or}\quad i\hbar\frac{d}{dt}|\psi(t)\rangle = \big[\hat{H} - \hat{\vec{\mu}}\cdot\vec{E}(t)\big]\,|\psi(t)\rangle. \]
(8)

For simplicity, we dropped the electronic-state indices on the Hamiltonian, the state, and the transition dipole, since they are no longer needed. The former equation applies to perturbative fields, while the latter is needed for strong infrared fields.

In particular, we will discuss various approaches with different degrees of accuracy in the treatment of the electronic structure [i.e., the accuracy of $\hat{H}$ and $\hat{\vec{\mu}}$ in Eq. (8)] and of the nuclear quantum dynamics [i.e., the accuracy of $|\psi(t)\rangle$, given $\hat{H}$ and $\hat{\vec{\mu}}$]. We start with the in-principle exact quantum dynamics methods, whose only approximation consists in the numerical implementation. Then we consider ab initio semiclassical dynamics, an extension of ab initio classical dynamics that takes into account quantum interference by carrying semiclassical phase information along the classical trajectories and provides an intuitive understanding of the dynamics. Finally, we discuss classical molecular dynamics using reactive, multipolar, and ab initio-based force fields, an approach which, by replacing the wave-function evolution with classical trajectories, permits treating the largest systems and can be remarkably accurate when nuclear quantum effects are unimportant.

The motion of the nuclei in a molecule, which is inherently quantum mechanical in nature, can be described most accurately by the solution of the time-dependent Schrödinger equation.9 For a time-independent molecular Hamiltonian, the knowledge of a nuclear wavepacket at all times carries essentially the same information as that provided by solving the time-independent Schrödinger equation and knowing all the eigenstates of the Hamiltonian. It is the particular problem at hand that makes one approach preferable over the other. For example, low-resolution electronic absorption or photoelectron spectra typically depend on a rather short-time behavior of the system, making the time-dependent perspective the obvious choice. For time-dependent Hamiltonians, which do not have well-defined eigenstates, the time-dependent approach is even more important since the time-independent approach cannot be used at all.

The numerical solution of the time-dependent Schrödinger equation relies on a suitable discretization of the wave function as well as the Hamiltonian, typically in terms of a set of basis functions or grid points, and on a numerical algorithm chosen to propagate the initial wave function in time. Over the years, many numerical propagation schemes have been developed and the detailed description of various approaches can be found in specialized reviews.64–71 The most popular propagation methods include, e.g., the Chebyshev72 and iterative Lanczos propagators,73–75 both of which employ an expansion of the action of the time-evolution operator into a convergent series. Other widely used methods, based on the explicit integration of the differential equation, include the finite-difference76–78 and Runge-Kutta79,80 schemes.

Among numerical approaches that take into account the geometric structure of the time-dependent Schrödinger equation, i.e., which preserve the time-reversal symmetry, unitarity, and symplectic structure of the quantum dynamics exactly, the one most commonly used is the split-operator method,81,82 which takes advantage of treating the kinetic and potential energy operators in their natural representations (i.e., momentum and coordinate representations, respectively), in which the relevant operators are represented by diagonal matrices. Originally formulated for the second-order splitting, the algorithm has been extended to an arbitrary order of accuracy in the time step.83–87 Recently, the split-operator method was combined with the Magnus expansion88–93 to construct geometric integrators of arbitrary order of accuracy in the time step not only for the exact treatment of the interaction of the molecule with electromagnetic field but also for an arbitrary combination of the Condon, rotating wave, and ultrashort pulse approximations.94,95
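To make the second-order split-operator scheme concrete, here is a minimal one-dimensional sketch (harmonic potential, Gaussian initial wavepacket, atomic units with $\hbar = m = 1$); the potential and all parameters are illustrative choices, not tied to any application in this review:

```python
import numpy as np

# Minimal sketch of the second-order (Strang) split-operator method for the
# TDSE i d|psi>/dt = [p^2/2m + V(q)] |psi| on a 1D grid (hbar = m = 1).
# The harmonic potential below is chosen only for illustration.

N, L = 256, 20.0                       # grid points and box size
q = np.linspace(-L / 2, L / 2, N, endpoint=False)
dq = q[1] - q[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dq)  # momentum grid conjugate to q

V = 0.5 * q**2                          # potential (coordinate representation)
T = 0.5 * p**2                          # kinetic energy (momentum representation)

dt, nsteps = 0.01, 1000
expV = np.exp(-0.5j * V * dt)           # half-step potential propagator
expT = np.exp(-1j * T * dt)             # full-step kinetic propagator

psi = np.exp(-(q - 1.0)**2 / 2) / np.pi**0.25   # displaced Gaussian wavepacket
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dq)

for _ in range(nsteps):
    psi = expV * psi                    # V/2 step in coordinate space
    psi = np.fft.ifft(expT * np.fft.fft(psi))  # T step in momentum space
    psi = expV * psi                    # V/2 step in coordinate space

norm = np.sum(np.abs(psi)**2) * dq      # unitarity: norm is conserved
print(f"norm after propagation: {norm:.6f}")
```

Because each factor of the propagator is unitary (and the FFT merely changes representation), the norm of the wavepacket is conserved to machine precision, reflecting the geometric properties of the method discussed above.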

All of the above-mentioned numerical propagation methods have their merits, and their performance depends on the particular problem under consideration. However, the number of basis functions or grid points needed to represent a wave function typically increases exponentially with the number of degrees of freedom considered, which makes numerically exact quantum dynamical calculations practically impossible for systems with large dimensionality. This is the main reason behind the long-standing search for approximate but numerically efficient methods to solve the time-dependent Schrödinger equation.

1. Multi-configuration time-dependent Hartree (MCTDH) method

The time-dependent Hartree method circumvents the exponential-scaling problem by representing the molecular wave function as a Hartree product of one-dimensional time-dependent basis functions, known as single-particle functions (SPFs). In spite of its appealing simplicity and computational efficiency, however, this method suffers from a lack of accuracy: being a single-reference ansatz, it neglects a large part of the correlation between different degrees of freedom.96 The multi-configuration time-dependent Hartree (MCTDH) approach emerged as a natural extension of the time-dependent Hartree method, in which the molecular wave function is expanded in terms of several Hartree products/configurations.10,97,98 The MCTDH method can be viewed as a trade-off between the efficiency of the time-dependent Hartree method and the accuracy of a numerically exact treatment (analogous to full configuration interaction in electronic-structure theory).10,11,99 The high flexibility in choosing the number of SPFs opens the way to the full range of approximations between time-dependent Hartree and the numerically exact solution. As the time-dependent SPFs closely follow the time evolution of the nuclear wavepacket, convergence can be achieved relatively easily.

In MCTDH, the wave function is defined by the following ansatz:10,11,99

\[ \psi(Q,t) = \sum_{\gamma_1=1}^{n_1}\cdots\sum_{\gamma_D=1}^{n_D} A_{\gamma_1\cdots\gamma_D}(t)\,\prod_{\alpha=1}^{D}\varphi^{(\alpha)}_{\gamma_\alpha}(Q_\alpha,t) = \sum_{\Gamma} A_\Gamma\,\Phi_\Gamma, \]
(9)

where $D$ denotes the number of degrees of freedom, $Q$ is the vector containing the set of nuclear coordinates, $A_{\gamma_1\cdots\gamma_D}$ denote the MCTDH expansion coefficients, and $\varphi^{(\alpha)}_{\gamma_\alpha}$ are the $n_\alpha$ time-dependent expansion functions (SPFs) for each degree of freedom $\alpha$. $\Phi_\Gamma$ is the $D$-dimensional Hartree product of the SPFs represented by the composite index $\Gamma = (\gamma_1,\dots,\gamma_D)$. For practical purposes, the SPFs have to be represented in terms of an underlying time-independent primitive basis set

\[ \varphi^{(\alpha)}_{\gamma_\alpha}(Q_\alpha,t) = \sum_{\lambda=1}^{N_\alpha} c^{(\alpha)}_{\lambda\gamma_\alpha}(t)\,\chi^{(\alpha)}_{\lambda}(Q_\alpha). \]
(10)

The primitive basis functions are often replaced by a discrete variable representation grid.

The SPFs and the time-dependent expansion coefficients in Eq. (9) are determined by variationally solving the time-dependent Schrödinger equation using the Dirac-Frenkel variational principle100,101

\[ \langle \delta\psi |\, \hat{H} - i\hbar\frac{\partial}{\partial t} \,| \psi \rangle = 0. \]
(11)

After some algebra, one obtains two coupled differential equations for the SPFs and the expansion coefficients

\[ i\hbar\,\dot{A}_\Gamma = \sum_{\Lambda} \langle \Phi_\Gamma |\,\hat{H}\,| \Phi_\Lambda \rangle\, A_\Lambda, \]
(12)
\[ i\hbar\, |\dot{\varphi}^{(\alpha)}_{\gamma}\rangle = \sum_{\lambda,\xi} \big(1 - \hat{P}^{(\alpha)}\big)\, \big(\boldsymbol{\rho}^{(\alpha)\,-1}\big)_{\gamma\lambda}\, \hat{\mathcal{H}}^{(\alpha)}_{\lambda\xi}\, |\varphi^{(\alpha)}_{\xi}\rangle, \]
(13)

where $\hat{P}^{(\alpha)}$ denotes the projection operator onto the space spanned by the SPFs of the $\alpha$th degree of freedom, $\rho^{(\alpha)}_{\gamma\lambda}$ denotes a density matrix, and $\hat{\mathcal{H}}^{(\alpha)}_{\lambda\xi}$ is a matrix of mean fields.

While in a standard wavepacket propagation $N^D$ numbers are needed to represent a wave function, the memory required to represent an MCTDH wave function scales as

\[ \text{memory} \sim n^D + DnN, \]
(14)

where $n$ is a typical number of SPFs and $N$ a typical number of primitive basis functions per degree of freedom; this represents a huge memory saving, especially for high-dimensional systems.102
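The scaling of Eq. (14) is easy to appreciate numerically; the following sketch compares the two memory estimates for illustrative values of $n$, $N$, and $D$ (the numbers are typical orders of magnitude, not taken from any particular application):

```python
# Illustration of the MCTDH memory estimate of Eq. (14): n^D + D*n*N numbers
# for the MCTDH wave function vs N^D for the standard direct-product grid.
# The values of n, N, D below are typical orders of magnitude, chosen for
# illustration only.

def mctdh_memory(n, N, D):
    """Number of complex numbers: A-coefficients plus SPF expansion coefficients."""
    return n**D + D * n * N

def standard_memory(N, D):
    """Full direct-product grid representation."""
    return N**D

n, N, D = 10, 64, 6          # SPFs per mode, primitive basis size, modes
print(f"MCTDH:    {mctdh_memory(n, N, D):.3e} numbers")
print(f"standard: {standard_memory(N, D):.3e} numbers")
```

For these illustrative values, the MCTDH representation requires roughly five orders of magnitude fewer numbers than the direct-product grid, and the gap widens rapidly with the number of modes $D$.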

The MCTDH ansatz needs to be extended to describe nonadiabatic dynamics. A particularly convenient way is to use the so-called multi-set formulation, which employs different sets of SPFs for different electronic states. In this ansatz, the wave function is expanded in the set $\{|j\rangle\}$ of diabatic electronic states103

\[ \psi(Q,t) = \sum_{j=1}^{S} \psi_j(Q,t)\,|j\rangle, \]
(15)

where the component $\psi_j(Q,t)$ is the nuclear wavepacket evolving on the electronic state $|j\rangle$ and is represented in the usual MCTDH form of Eq. (9).

In recent years, MCTDH has been used to study different aspects of the nuclear dynamics of molecules and clusters within the NCCR MUST.104–106

2. Bohmian dynamics

Despite the overwhelming success of MCTDH in performing accurate quantum dynamical calculations for relatively large systems, it still suffers from exponential scaling. Most existing trajectory-based methods, on the other hand, can deal with a large number of degrees of freedom, but at the expense of accuracy. For example, the nuclear trajectory obtained in Ehrenfest dynamics, which typically lies on a mean-field potential, does not have a clear physical meaning. The widely used trajectory surface hopping (TSH) schemes have been successful in describing some nuclear quantum effects, such as the branching of nuclear wavepackets, but, being classical by construction, they are unable to describe other quantum phenomena, such as decoherence and tunneling.

One possible solution to this problem, that is, one route to an accurate and efficient quantum propagation scheme for the nuclei, is to employ the so-called quantum trajectory based methods developed by Wyatt et al.12,13 With the Bohmian (or hydrodynamical) interpretation of quantum mechanics107,108 as its backbone, this approach provides formally exact equations of motion for quantum trajectories (also known as fluid elements), which in principle reproduce the full nuclear wavepacket dynamics. In this class of methods, the complex nuclear wave function is represented in its polar form. The Madelung ansatz $\psi(Q,t)=A(Q,t)\exp[iS(Q,t)/\hbar]$ is inserted into the time-dependent Schrödinger equation; separating the real and imaginary parts yields equations of motion for the amplitude and phase of the nuclear wavepacket. The equation for the amplitude is equivalent to a continuity equation for the probability density $\rho=A^2$, while the equation for the phase can be interpreted as an extended Hamilton-Jacobi equation

$-\frac{\partial S}{\partial t}=\frac{1}{2m}(\nabla S)^2+V-\frac{\hbar^2}{2m}\frac{\nabla^2 A}{A},$
(16)

in which

$Q=-\frac{\hbar^2}{2m}\frac{\nabla^2 A}{A}$
(17)

is the so-called quantum potential. Together, the continuity and extended Hamilton-Jacobi equations provide a link between this formulation of quantum mechanics and continuum hydrodynamics. This link, in turn, makes it possible to derive a Newton-like equation for the propagation of the fluid elements, giving rise to a swarm of correlated trajectories by virtue of the presence of the quantum potential.
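The quantum potential of Eq. (17) can be evaluated directly from a discretized amplitude. A minimal sketch, assuming a Gaussian amplitude and $\hbar=m=\sigma=1$ (values chosen purely for illustration), compares a finite-difference evaluation with the analytic result:

```python
import numpy as np

# Quantum potential Q = -(hbar^2/2m) A''/A of Eq. (17), evaluated by finite
# differences for the Gaussian amplitude A(q) = exp(-q^2/(4 sigma^2)).
# For this amplitude, the analytic result is
#   Q(q) = hbar^2/(4 m sigma^2) - hbar^2 q^2/(8 m sigma^4),
# an inverted parabola: the associated quantum force pushes the fluid
# elements outward, which is what makes a free Gaussian wavepacket spread.
hbar = m = sigma = 1.0
q = np.linspace(-2.0, 2.0, 2001)
dq = q[1] - q[0]
A = np.exp(-q**2 / (4 * sigma**2))

# central-difference second derivative (interior points only)
d2A = (A[2:] - 2 * A[1:-1] + A[:-2]) / dq**2
Q_num = -(hbar**2 / (2 * m)) * d2A / A[1:-1]
Q_exact = hbar**2 / (4 * m * sigma**2) - hbar**2 * q[1:-1]**2 / (8 * m * sigma**4)
print(np.max(np.abs(Q_num - Q_exact)))  # small discretization error
```

In realistic multidimensional calculations, the division by $A$ in regions of vanishing density is precisely what causes the numerical instabilities mentioned below.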

In a recent version of Bohmian mechanics, $\rho$ was obtained from kernel density estimation, a non-parametric procedure borrowed from statistics for estimating a probability density function from a finite number of samples. Using such a formulation, it was shown that tunneling probabilities can be readily and accurately computed for 1- and 2-dimensional problems, whereas interference effects are oversmoothed.109 

During the last decade, there has been a constant endeavour to extend the original ideas of quantum trajectory based methods to nuclear dynamics involving more than one electronic state.110–112 The specific implementations differ mainly in the way the electronic wave function is represented, namely, in the adiabatic or the diabatic representation.113,114 It is worth mentioning that, in spite of being quite promising, these nonadiabatic dynamics schemes often suffer from severe computational difficulties. One of the major problems is the instability associated with the numerical computation of the quantum potential.

Recently, a scheme aimed at solving the non-relativistic high-dimensional quantum dynamics of nuclei and electrons has been developed within the framework of Bohmian dynamics, employing an adiabatic representation of the electronic states. This method, NABDY (NonAdiabatic Bohmian DYnamics), an on-the-fly trajectory-based nonadiabatic molecular dynamics algorithm, is able to capture the nuclear quantum effects that are missing in TSH due to the independent trajectory approximation.115–117 The method has been implemented within the CPMD package,118 where the electronic energies, classical forces, and nonadiabatic coupling elements are calculated on the fly for each configuration at the DFT/TDDFT level of theory.

For the formal derivation, one can start from the time-dependent Schrödinger equation for the molecular system

$\hat{H}|\psi(Q,t)\rangle=i\hbar\frac{\partial}{\partial t}|\psi(Q,t)\rangle,$
(18)

where the molecular wave function, $|\psi(Q,t)\rangle$, can be expressed in the Born-Huang ansatz as

$|\psi(Q,t)\rangle=\sum_{j=1}^{\infty}\psi_j(Q,t)\,|j\rangle.$
(19)

Here, the expansion coefficients, $\psi_j(Q,t)$, can be interpreted as the nuclear wave functions associated with the electronic states $|j\rangle$. If expressed in polar form, the complex nuclear wave function reads

$\psi_j(Q,t)=A_j(Q,t)\exp[iS_j(Q,t)/\hbar].$
(20)

Inserting Eq. (19) in Eq. (18) and using Eq. (20), we obtain the following equations of motion for the amplitude Aj(Q,t) and for the phase Sj(Q,t):

$\frac{\partial A_j(Q,t)}{\partial t}=-\sum_\gamma\frac{1}{M_\gamma}\nabla_\gamma A_j(Q,t)\cdot\nabla_\gamma S_j(Q,t)-\sum_\gamma\frac{1}{2M_\gamma}A_j(Q,t)\,\nabla_\gamma^2 S_j(Q,t)+\sum_{\gamma,i}\frac{\hbar}{2M_\gamma}D_{ji}^\gamma(Q)\,A_i(Q,t)\,\mathrm{Im}\big[e^{i\phi(Q,t)}\big]+\sum_{\gamma,i\neq j}\frac{\hbar}{M_\gamma}d_{ji}^\gamma(Q)\cdot\nabla_\gamma A_i(Q,t)\,\mathrm{Im}\big[e^{i\phi(Q,t)}\big]-\sum_{\gamma,i\neq j}\frac{1}{M_\gamma}d_{ji}^\gamma(Q)\,A_i(Q,t)\cdot\nabla_\gamma S_i(Q,t)\,\mathrm{Re}\big[e^{i\phi(Q,t)}\big]$

and

$-\frac{\partial S_j(Q,t)}{\partial t}=\sum_\gamma\frac{1}{2M_\gamma}\big(\nabla_\gamma S_j(Q,t)\big)^2+E_j^{el}(Q)-\sum_\gamma\frac{\hbar^2}{2M_\gamma}\frac{\nabla_\gamma^2 A_j(Q,t)}{A_j(Q,t)}+\sum_{\gamma,i}\frac{\hbar^2}{2M_\gamma}D_{ji}^\gamma(Q)\,\frac{A_i(Q,t)}{A_j(Q,t)}\,\mathrm{Re}\big[e^{i\phi(Q,t)}\big]+\sum_{\gamma,i\neq j}\frac{\hbar^2}{M_\gamma}d_{ji}^\gamma(Q)\cdot\frac{\nabla_\gamma A_i(Q,t)}{A_j(Q,t)}\,\mathrm{Re}\big[e^{i\phi(Q,t)}\big]+\sum_{\gamma,i\neq j}\frac{\hbar}{M_\gamma}d_{ji}^\gamma(Q)\,\frac{A_i(Q,t)}{A_j(Q,t)}\,\nabla_\gamma S_i(Q,t)\,\mathrm{Im}\big[e^{i\phi(Q,t)}\big],$

where

$\phi(Q,t)=\frac{1}{\hbar}\big[S_i(Q,t)-S_j(Q,t)\big],$
(21)
$H_{ji}(Q)=\langle j|\hat{H}_{el}|i\rangle.$
(22)

The first-order (djiγ) and the second-order (Djiγ) nonadiabatic couplings are, respectively,

$d_{ji}^\gamma=\langle j|\nabla_\gamma|i\rangle,$
(23)
$D_{ji}^\gamma=\langle j|\nabla_\gamma^2|i\rangle.$
(24)

In the Hamilton-Jacobi formulation of mechanics, the phase of the nuclear wavepacket can be related to its momentum as

$\nabla_\beta S_j(Q,t)=P_\beta^j,$
(25)

which gives rise to a Newton-like equation for the motion of the nuclei

$M_\beta\frac{d^2Q_\beta}{dt_j^2}=-\nabla_\beta\big[E_j^{el}(Q)+Q_j(Q,t)+D_j(Q,t)\big],$
(26)

where the definition of the time-derivative in the Lagrangian frame has been employed

$\frac{d}{dt_j}=\frac{\partial}{\partial t}+\sum_\gamma\frac{\nabla_\gamma S_j(Q,t)}{M_\gamma}\cdot\nabla_\gamma.$
(27)

It is clear from Eq. (26) that in addition to the usual classical potential, Ejel(Q), the nuclei will feel a quantum potential Qj(Q,t) as well, which induces adiabatic nuclear quantum effects. Dj(Q,t) is the nonadiabatic quantum potential and is responsible for electronic interstate couplings.

Numerically, a conventional quantum trajectory propagation scheme is used to start the adiabatic dynamics of an initial wavepacket represented as an ensemble of fluid elements on a single electronic state. The nonadiabatic couplings are constantly monitored during the adiabatic propagation of the quantum trajectories. When their strengths exceed a pre-defined threshold, the algorithm starts the dynamics on the coupled electronic states.

3. Local control theory with ab initio molecular dynamics: A computationally efficient scheme to achieve control

Over the past few decades, ultrafast laser pulses have increasingly been employed to induce specific dynamical events in molecules, leading to the emergence of fields such as femtochemistry and femtobiology.119–121 The shaping of laser pulses to control chemical reactions has been a long-standing topic of interest for both theorists and experimentalists.122–126 The term coherent control of chemical reactions broadly encompasses all schemes that optimize an external radiation field so that it induces a transition from an initial state to a final state (also called the target) after a certain time. The best-known examples are the pioneering Tannor-Rice-Kosloff pump-dump scheme14,127 and the Brumer-Shapiro scheme.128 

One of the most commonly employed coherent control techniques is optimal control theory.129 This is, in general, a global control scheme, in which the control field is constructed variationally through an iterative process of forward-backward propagations that takes into account the entire dynamics of the system. The scheme shares many similarities with the experimental learning algorithm approach.130 Despite its apparent success, it has a few significant disadvantages. One major problem is the computational expense incurred by the multiple forward-backward propagations. Another practical drawback is that, although it yields the optimized pulse producing the desired target, optimal control theory does not provide direct insight into the underlying mechanism, which often requires further analysis.131 

Unlike optimal control theory, local control theory (LCT) departs from the global picture and calculates the field on the fly, taking into account the instantaneous response of the system dynamics. In LCT, one typically calculates the field at each time step so as to ensure the increase (or decrease) of the expectation value of an operator of interest, such as an electronic state population, a vibrational state population, or the nuclear momentum.15,124–126,132,133 Being computationally much faster than optimal control theory, as well as more flexible, LCT is widely considered the method of choice for achieving coherent control of larger systems. A connection between optimal control theory and LCT can be established by noting that, at least in some cases, the LCT equations can be derived by solving the optimal control theory equations employing Krotov's scheme.15 

The Hamiltonian of a molecular system, upon interaction with a radiation field, can be written as

$\hat{H}=\hat{H}_{mol}+\hat{V}_{int},$
(28)

where Ĥmol is the field-free Hamiltonian of the system and V̂int describes the interaction of the system with the electromagnetic field. In the dipole approximation, the interaction part of the Hamiltonian can conveniently be expressed as

$\hat{V}_{int,ji}=-\hat{\mu}_{ji}\cdot E(t).$
(29)

The main objective of LCT is to calculate an electric field on-the-fly, at each time step, as a response of the instantaneous dynamics of the system to ensure the increase (or decrease) of the expectation value of some predefined operator. If we consider the time evolution of an arbitrary operator Ô, one finds

$\frac{d\langle\hat{O}\rangle}{dt}=\frac{i}{\hbar}\langle\psi(t)|[\hat{H}_{mol},\hat{O}]|\psi(t)\rangle+\frac{i}{\hbar}\langle\psi(t)|[\hat{V}_{int},\hat{O}]|\psi(t)\rangle,$
(30)

where |ψ(t) is the molecular state vector at time t. This equation shows that if Ô and V̂int do not commute, it is possible to shape the external field to influence a desired change in the expectation value of Ô. Assuming that Ĥmol commutes with Ô, which is the case only in the absence of nonadiabatic couplings, Eq. (30) can be written as

$\frac{d\langle\hat{O}\rangle}{dt}=-\frac{i}{\hbar}E(t)\cdot\langle\psi(t)|[\hat{\mu},\hat{O}]|\psi(t)\rangle,$
(31)

and therefore, the desired control may be achieved by shaping the temporal evolution of $E$. If we consider the transfer of electronic population to the state $|i\rangle$, the corresponding operator to be employed is the projector $\hat{P}_i=|i\rangle\langle i|$. In the absence of nonadiabatic couplings, the time evolution of the projector can then be written as

$\frac{d\langle\hat{P}_i\rangle}{dt}=-\frac{i}{\hbar}E(t)\cdot\langle\psi(t)|[\hat{\mu},\hat{P}_i]|\psi(t)\rangle.$
(32)

Equation (32) is common to most of the LCT implementations irrespective of the underlying dynamical method. However, in the method developed in the framework of the NCCR MUST, LCT has been implemented within a trajectory surface hopping (TSH) ab initio molecular dynamics scheme.134 All the required quantities, such as electronic energies, nuclear forces, nonadiabatic couplings, and transition dipole elements, have been calculated on-the-fly with LR-TDDFT as implemented in the software package CPMD. Within the TSH ansatz, the total wave function for trajectory α is approximated as

$|\psi^\alpha(t)\rangle=\sum_{j}C_j^\alpha(t)\,|j\rangle,$
(33)

where the complex-valued time-dependent amplitude Cjα(t) substitutes the nuclear wavepacket in the corresponding quantum-dynamical ansatz and apportions trajectories among electronic states. Applying this ansatz to Eq. (32) and expanding the projector operator for trajectory α, it is straightforward to get the following equation:

$\frac{dP_i^\alpha}{dt}=\frac{2}{\hbar}E^\alpha(t)\cdot\sum_j\mathrm{Im}\big[C_j^{\alpha*}(t)\,\hat{\mu}_{ji}\,C_i^\alpha(t)\big].$
(34)

From this equation, it is clear that choosing the electric field to be

$E^\alpha(t)=\pm\lambda\sum_j\mathrm{Im}\big[C_j^{\alpha*}(t)\,C_i^\alpha(t)\,\hat{\mu}_{ji}\big]$
(35)

ensures, depending on the sign, that $P_i(t)$ increases or decreases at all times. LCT has been applied to a photoinduced intramolecular proton transfer reaction, which is described in more detail in Sec. III.
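The mechanism behind Eqs. (34) and (35) can be illustrated with a bare two-level model driven by an LCT field (all parameters here, energies, dipole, and field strength $\lambda$, are hypothetical; the actual implementation134 works with TSH trajectories and on-the-fly LR-TDDFT data):

```python
import numpy as np

# Minimal LCT sketch for a two-level system (hbar = 1); model parameters are
# illustrative, not taken from the text.  Choosing the field as
#   E(t) = lambda * Im[C1* C2 mu]
# gives dP2/dt = 2*lambda*mu^2*(Im[C1* C2])^2 >= 0, so the target population
# P2 can only grow, which is the essence of Eqs. (34)-(35).
E1, E2, mu, lam = 0.0, 1.0, 1.0, 0.5
dt, nsteps = 0.001, 40000

def deriv(C, field):
    # i dC/dt = (H0 + Vint) C with H0 = diag(E1, E2) and Vint = -mu*E(t)
    H = np.array([[E1, -mu * field], [-mu * field, E2]], dtype=complex)
    return -1j * (H @ C)

C = np.array([np.sqrt(0.99), np.sqrt(0.01)], dtype=complex)  # small seed in |2>
pops = [abs(C[1])**2]
for _ in range(nsteps):
    field = lam * np.imag(np.conj(C[0]) * C[1] * mu)
    # RK4 step with the field frozen during the step
    k1 = deriv(C, field)
    k2 = deriv(C + 0.5 * dt * k1, field)
    k3 = deriv(C + 0.5 * dt * k2, field)
    k4 = deriv(C + dt * k3, field)
    C = C + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    pops.append(abs(C[1])**2)
print(pops[0], pops[-1])  # P2 grows monotonically toward 1
```

Note that a small initial amplitude in the target state is needed to seed the field; with $C_2(0)=0$, the LCT field of Eq. (35) vanishes identically.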

As mentioned above, a direct solution of the time-dependent Schrödinger equation for large systems is unfeasible due to the exponential scaling of the computational cost with the number of dimensions. Moreover, the exact quantum dynamics requires construction of global potential energy surfaces, which is a daunting task by itself.

In this respect, semiclassical trajectory-based methods provide a powerful tool for molecular dynamics simulations. On one hand, as in the Bohmian or ab initio classical dynamics, the propagation of classical trajectories requires only a local knowledge of the potential energy surface, allowing on-the-fly evaluation of necessary ab initio data. On the other hand and in contrast to classical dynamics simulations, semiclassical methods can approximately describe nuclear quantum effects, such as the zero-point energy and quantum coherences, by virtue of the phase carried along each trajectory.

In particular, the Herman–Kluk initial value representation,135–137 which stems from the stationary-phase approximation to the Feynman path integral propagator, has been successfully merged with on-the-fly dynamics to calculate vibrationally resolved spectra138–141 and internal conversion rates.142 Within the Herman–Kluk approximation, the quantum evolution operator can be written as

$e^{-i\hat{H}t/\hbar}\approx h^{-D}\int dq_0\,dp_0\,R_t(q_0,p_0)\,e^{iS_t(q_0,p_0)/\hbar}\,|q_t\,p_t\rangle\langle q_0\,p_0|,$
(36)

where D is the number of degrees of freedom in the system, (qt,pt) denote the phase-space coordinates at time t of a point along the classical trajectory, and St(q0,p0) is the corresponding classical action. In the position representation, the wave functions of the coherent states from Eq. (36) are given by

$\langle r|q\,p\rangle=\left(\frac{\det g}{\pi^D}\right)^{1/4}\exp\left[-\frac{1}{2}(r-q)^T\cdot g\cdot(r-q)+\frac{i}{\hbar}\,p^T\cdot(r-q)\right],$
(37)

where g is the coherent state width matrix and the Herman–Kluk prefactor is given by

$R_t(q_0,p_0)=\sqrt{\det\left[\frac{1}{2}\left(M_{qq}+g^{-1}\cdot M_{pp}\cdot g-i\hbar\,M_{qp}\cdot g+\frac{i}{\hbar}\,g^{-1}\cdot M_{pq}\right)\right]}$
(38)

with $M_{\alpha\beta}=\partial\alpha_t/\partial\beta_0$ being the components of the stability (monodromy) matrix. The phase-space integral in Eq. (36) is usually evaluated by sampling the initial conditions of classical trajectories using Monte Carlo techniques; the subsequent propagation requires computing the potential energy to evolve the action $S$, forces to evolve positions and momenta, and the Hessian to evolve the stability matrix $M$.

Despite some progress, the straightforward application of the Herman–Kluk initial value representation to systems with many degrees of freedom is limited. The oscillatory nature of the integrand in Eq. (36) requires a very large number of trajectories to converge the results, which, together with the expensive Hessian calculations, keeps the overall computational cost high. The computational burden can be partially alleviated by invoking additional approximations such as a prefactor-free propagator,143 time averaging,144 and Filinov filtering (cellularization).145–147 The latter technique has been widely used to improve Monte Carlo statistics147–149 and to derive new approximate semiclassical methods.149–151 The time-averaging has proved to be particularly useful in the context of on-the-fly simulations as a central ingredient of the multiple-coherent-states time-averaged semiclassical initial value representation.139,140,152 This method is especially well suited for the determination of vibrational frequencies and prediction of vibrational spectra.152–155 
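As a minimal self-contained illustration of Eqs. (36)–(38), the following sketch evaluates the Herman–Kluk autocorrelation function $\langle\psi|e^{-i\hat{H}t/\hbar}|\psi\rangle$ for the ground state of a one-dimensional harmonic oscillator (with $\hbar=m=\omega=g=1$, chosen for convenience), a case where the approximation is exact and every ingredient, trajectories, action, and monodromy elements, is analytic:

```python
import numpy as np

# Herman-Kluk autocorrelation for the 1D harmonic oscillator ground state
# (hbar = m = omega = 1), exact answer exp(-i t / 2).  The phase-space
# integral of Eq. (36) is done by Monte Carlo, importance-sampling the
# Husimi density |<q0 p0|psi>|^2 of the initial state.
rng = np.random.default_rng(1)
g = 1.0                                   # coherent-state width of Eq. (37)
n = 20000
q0 = rng.normal(0.0, np.sqrt(1.0 / g), n)
p0 = rng.normal(0.0, np.sqrt(g), n)       # var(p0) = g * hbar^2

def overlap(qa, pa, qb, pb):
    """Coherent-state overlap <qa pa | qb pb> for width g, hbar = 1."""
    return np.exp(-g * (qb - qa)**2 / 4 - (pb - pa)**2 / (4 * g)
                  - 0.5j * (pa + pb) * (qb - qa))

results = {}
for t in [0.5, 1.0, 2.0, 3.0]:
    qt = q0 * np.cos(t) + p0 * np.sin(t)       # analytic HO trajectory
    pt = p0 * np.cos(t) - q0 * np.sin(t)
    S = 0.5 * (qt * pt - q0 * p0)              # classical action for the HO
    Mqq = Mpp = np.cos(t)                      # monodromy-matrix elements
    Mqp, Mpq = np.sin(t), -np.sin(t)
    R = np.sqrt(0.5 * (Mqq + Mpp - 1j * Mqp * g + 1j * Mpq / g))  # Eq. (38)
    w = overlap(q0, p0, 0.0, 0.0)              # <q0 p0|psi>
    results[t] = np.mean(R * np.exp(1j * S) * overlap(0.0, 0.0, qt, pt)
                         * w / np.abs(w)**2)
    print(t, results[t], np.exp(-0.5j * t))    # Monte Carlo vs. exact
```

Even in this exactly solvable case, the statistical error decays only as $N^{-1/2}$; for anharmonic, high-dimensional systems, the oscillatory phase makes the convergence dramatically slower, which motivates the filtering and time-averaging techniques discussed below.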

1. Phase averaging, dephasing representation, and extensions

Within the domain of validity of perturbation theory, all dynamical phenomena in complex systems can be described in terms of time correlation functions. For example, in the time-dependent approach, pioneered by Heller,156 many linear and nonlinear spectra of a molecule can be obtained via the Fourier transform of an appropriate wavepacket correlation function. Thus, many semiclassical dynamics methods are specifically designed to approximate directly the correlation function rather than the solution of the time-dependent Schrödinger equation itself.

Methods employing correlation functions invoke time-dependent perturbation theory, in which the dynamics involves only the molecular Hamiltonian, which is time-independent. Let us, therefore, consider a general wavepacket correlation function (sometimes called the fidelity amplitude157,158) given by

$f(t)=\langle\psi_1(t)|\psi_2(t)\rangle=\langle\psi|e^{i\hat{H}_1t/\hbar}e^{-i\hat{H}_2t/\hbar}|\psi\rangle,$
(39)

where $\hat{H}_1$ and $\hat{H}_2$ are the Hamiltonian operators corresponding to different electronic states of the system and $|\psi\rangle$ is the initial state, which is typically an eigenstate of $\hat{H}_1$ or $\hat{H}_2$. A remarkably simple approximation for Eq. (39) is given by the so-called phase averaging, dephasing representation (DR), or Wigner averaged classical limit8,17,18,62,159–164

$f_{DR}(t)=h^{-D}\int dx_0\,\rho_W(x_0)\,e^{i\Delta S(x_0,t)/\hbar},$
(40)

where $x_0=(q_0,p_0)$ denotes the initial phase-space coordinates of a classical trajectory, $\rho_W(x_0)$ is the Wigner phase-space representation of the initial state $|\psi\rangle$, and

$\Delta S(x_0,t)=-\int_0^t\Delta V(x_\tau)\,d\tau$
(41)

is the action due to the difference $\Delta V(x_t):=V_2(x_t)-V_1(x_t)$ between the two potential energy surfaces along the classical trajectory guided by the average Hamiltonian $\bar{H}\equiv(H_1+H_2)/2$. Using the Wigner function $\rho_W(x_0)$ as a sampling weight for the initial conditions $x_0$, one can rewrite Eq. (40) as a statistical average

$f_{DR}(t)=\big\langle e^{i\Delta S(x_0,t)/\hbar}\big\rangle_{\rho_W(x_0)}.$
(42)

The most attractive feature of the dephasing representation is that it does not require the calculation of a Hessian along the classical trajectory. Moreover, the number of trajectories required for convergence is independent of the system's dimensionality,165 is much lower than the number required in the Herman–Kluk initial value representation, and typically ranges from a hundred to a few thousand.
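The statistical average of Eq. (42) is easy to realize in a few lines. The sketch below, which uses $\hbar=m=\omega=1$ and an illustrative displacement $d$, propagates a Wigner-sampled swarm of trajectories on the average surface of two displaced harmonic oscillators and compares $|f_{DR}(t)|$ with the known quantum result $\exp[-(d^2/2)(1-\cos t)]$ for this model:

```python
import numpy as np

# Dephasing representation, Eqs. (40)-(42), for two displaced harmonic
# oscillators V1 = q^2/2 and V2 = (q - d)^2/2 (m = omega = hbar = 1),
# a case in which the DR is exact.  Trajectories run on the average
# surface Vbar (minimum at q = d/2); Delta S = -int DeltaV dt' is
# accumulated along each trajectory, Eq. (41).
rng = np.random.default_rng(0)
d = 1.0                                  # displacement (illustrative)
n = 20000
dt, nsteps = 0.001, 3142                 # integrate up to t ~ pi
q = rng.normal(0.0, np.sqrt(0.5), n)     # Wigner sample of the ground state
p = rng.normal(0.0, np.sqrt(0.5), n)     # of V1: var(q) = var(p) = 1/2
dS = np.zeros(n)

ts, f_dr, abs_exact = [], [], []
for step in range(1, nsteps + 1):
    # velocity-Verlet step on the average surface, force = -(q - d/2)
    p -= 0.5 * dt * (q - d / 2)
    q += dt * p
    p -= 0.5 * dt * (q - d / 2)
    dS -= dt * (0.5 * d**2 - d * q)      # DeltaV = V2 - V1 = d^2/2 - d*q
    if step % 500 == 0:
        t = step * dt
        ts.append(t)
        f_dr.append(np.mean(np.exp(1j * dS)))               # Eq. (42)
        abs_exact.append(np.exp(-d**2 / 2 * (1 - np.cos(t))))
print([round(abs(f), 3) for f in f_dr])
print([round(x, 3) for x in abs_exact])
```

No Hessians and no prefactors are needed, only the potential energy difference along each trajectory, which is what makes the DR so inexpensive.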

While the dephasing representation (42) is exact for displaced harmonic oscillators8 and often accurate in chaotic systems,18 it breaks down when the Hamiltonians $H_1$ and $H_2$ represent harmonic oscillators with significantly different force constants. To correct this drawback, Zambrano and Ozorio de Almeida166 proposed the dephasing representation with a prefactor (DRP)

$f_{DRP}(t)=\big\langle A_{DRP}(x_0,t)\,e^{i\Delta S(x_0,t)/\hbar}\big\rangle_{\rho_W(x_0)},$
(43)

where the prefactor $A_{DRP}(x_0,t)$ depends on the Hessian of the DR phase $\Delta S(x_0,t)$ with respect to the initial conditions. Consequently, the cost of propagating a single trajectory is higher for the DR with a prefactor than for the original formulation of the DR, but the prefactor correction extends the validity of the approximation.167 

For systems with many degrees of freedom, even propagating only a thousand trajectories can be computationally unfeasible. The number of trajectories required to achieve convergence can be reduced by employing smoothing techniques, such as the Filinov filtering (cellularization)145–147 used for the Herman–Kluk initial value representation. Šulc and Vaníček168 and Zambrano et al.167 proposed a related but somewhat more rigorous approach to cellularization, which, unlike standard Filinov techniques (with two free parameters for the position and momentum widths of the cells), has no free parameter and is guaranteed to converge to the original dephasing representation in the limit of an infinite number of trajectories. In this method, as in Heller's cellular dynamics, the neighboring trajectories are grouped into cells, and all contributions from a cell are evaluated approximately analytically using the information collected along the central trajectory. This approach yields the cellular dephasing representation (CDR)167,168

$f_{CDR}(t)=\big\langle A_{CDR}(x_0,t)\,e^{i\Delta S(x_0,t)/\hbar}\big\rangle_{\rho_{IWT}(x_0)},$
(44)

where the prefactor ACDR(x0,t)167 accounts for the contributions from each cell and the sampling weight for the initial conditions ρIWT(x0) is given by the inverse Weierstrass transform of the Wigner function ρW(x0).167 

As the most expensive part of both DR with a prefactor and cellular DR is the calculation of the Hessian of ΔS(x0,t) with respect to the initial conditions, the two methods can be easily combined without increasing the cost per trajectory. The resulting cellular dephasing representation with a prefactor (CDRP), which evaluates the correlation function as

$f_{CDRP}(t)=\big\langle A_{DRP}(x_0,t)\,A_{CDR}(x_0,t)\,e^{i\Delta S(x_0,t)/\hbar}\big\rangle_{\rho_{IWT}(x_0)},$
(45)

has a potential to be more accurate and more efficient than the original DR formulation (see Sec. III B 1).

An alternative route for improving the accuracy of the dephasing representation replaces the independent semiclassical trajectories with coupled Gaussian basis functions. This is closely related to the basic idea employed in multiple spawning,23 variational Gaussian wavepackets,169 coupled coherent states, and the multiconfiguration Ehrenfest method.170 The time-evolved states $|\psi_j(t)\rangle$, $j=1,2$, are expanded in the Gaussian basis as

$|\psi_j(t)\rangle=\sum_{\alpha=1}^{N}c_{j,\alpha}(t)\,|g_\alpha(t)\rangle,$
(46)

where $|g_\alpha(t)\rangle$ is a Gaussian wavepacket whose center moves according to the average Hamiltonian $\bar{H}$ and the expansion coefficients $c_{j,\alpha}(t)$ satisfy the time-dependent Schrödinger equation

$i\hbar\,S\,\dot{c}_j=(H_j-i\hbar\,D)\,c_j.$
(47)

Here Hj is the Hamiltonian matrix, S is the overlap matrix, and D is the nonadiabatic coupling matrix; their matrix elements in the Gaussian basis are

$H_{j,\alpha\beta}(t)=\langle g_\alpha(t)|\hat{H}_j|g_\beta(t)\rangle,$
(48)
$S_{\alpha\beta}(t)=\langle g_\alpha(t)|g_\beta(t)\rangle,$
(49)
$D_{\alpha\beta}(t)=\langle g_\alpha(t)|\dot{g}_\beta(t)\rangle.$
(50)

In the Gaussian dephasing representation (GDR),171 the information obtained along the propagated classical trajectories is used to construct the matrices in Eqs. (48)–(50); the time dependence of the expansion coefficients cj,α(t) is then obtained by solving Eq. (47). Finally, the wavepacket correlation function (39) is calculated as171 

$f_{GDR}(t)=c_2(t)^\dagger\cdot S(t)\cdot c_1(t).$
(51)

As the size of the Gaussian basis increases and the basis approaches completeness, the result of the Gaussian DR approximation should converge to the exact quantum answer.

Overall, the phase averaging, dephasing representation, and their variants described in this section provide an efficient semiclassical approach for computing wavepacket correlation functions and have found a wide range of applications in molecular spectroscopy.62,159,161,167,168,171 Several examples demonstrating both merits and limitations of different extensions of the phase averaging will be provided in Sec. III B 1.

2. Ab initio thawed Gaussian approximation

One of the simplest, yet efficient, semiclassical approaches for molecular dynamics is provided by the thawed Gaussian approximation (TGA) developed by Heller and co-workers.16,172 The method is based on the fact that the time evolution of a Gaussian wavepacket in constant, linear, and harmonic potentials does not perturb its functional form. In other words, while it can spread, compress, and rotate in the phase space, a Gaussian remains a Gaussian (Fig. 1).

FIG. 1.

Evolution of a Gaussian wavepacket in phase space within the thawed Gaussian approximation.


Thus, within the TGA, the center of a Gaussian wavepacket is guided by a classical trajectory, which accounts for the full anharmonicity of the potential, while the width is propagated using a time-dependent effective potential obtained from a local harmonic approximation of the full potential

$V_{eff}(q,t)=V|_{q_t}+\big(\mathrm{grad}_q V|_{q_t}\big)^T\cdot(q-q_t)+\frac{1}{2}(q-q_t)^T\cdot\mathrm{Hess}_q V|_{q_t}\cdot(q-q_t).$
(52)

Here $V|_{q_t}$, $\mathrm{grad}_qV|_{q_t}$, and $\mathrm{Hess}_qV|_{q_t}$ are the potential, its gradient, and its Hessian evaluated at the center of the Gaussian wavepacket. The evolving wavepacket is assumed in the form16,172

$\psi(q,t)=N_0\exp\Big\{-(q-q_t)^T\cdot A_t\cdot(q-q_t)+\frac{i}{\hbar}\big[(p_t)^T\cdot(q-q_t)+\gamma_t\big]\Big\},$
(53)

where $N_0$ is the initial normalization factor, $(q_t,p_t)$ are the phase-space coordinates of the Gaussian wavepacket's center, $A_t$ is a complex symmetric width matrix, and $\gamma_t$ is a complex number whose real part gives the phase of the Gaussian wavepacket, while the imaginary part ensures the normalization of $\psi(q,t)$ for $t>0$. Inserting the ansatz (53) together with the effective potential (52) into the time-dependent Schrödinger equation gives the following equations of motion for the wavepacket's parameters:16,172

$\dot{q}_t=\partial H/\partial p\,|_{p_t},$
(54)
$\dot{p}_t=-\partial H/\partial q\,|_{q_t},$
(55)
$\dot{A}_t=-2i\hbar\,A_t\cdot m^{-1}\cdot A_t+\frac{i}{2\hbar}\,\mathrm{Hess}_qV|_{q_t},$
(56)
$\dot{\gamma}_t=L-\hbar^2\,\mathrm{Tr}\big(m^{-1}\cdot A_t\big),$
(57)

where $m$ is the diagonal mass matrix, $H=\frac{1}{2}p^T\cdot m^{-1}\cdot p+V(q)$ is the Hamiltonian, and $L=(p_t)^T\cdot\dot{q}_t-H$ is the Lagrangian. The numerical integration of the classical equations of motion (54) and (55) is straightforward. The solution of Eqs. (56) and (57) can be simplified172 by introducing two auxiliary matrices $Q_t$ and $P_t$ such that

$A_t=-\frac{i}{2\hbar}\,P_t\cdot Q_t^{-1},$
(58)
$\dot{Q}_t=m^{-1}\cdot P_t.$
(59)

In matrix notation, the unique solutions of Eqs. (58) and (59) are given by

$\begin{pmatrix}Q_t\\P_t\end{pmatrix}=\begin{pmatrix}M_{qq}&M_{qp}\\M_{pq}&M_{pp}\end{pmatrix}\cdot\begin{pmatrix}Q_0\\P_0\end{pmatrix}$
(60)

with initial conditions $Q_0=\mathrm{Id}_D$ and $P_0=2i\hbar A_0$. Inserting Eqs. (58) and (60) into Eq. (57) and performing the integration yields the explicit solution for $\gamma_t$ in the form

$\gamma_t=\int_0^tL\,d\tau+\frac{i\hbar}{2}\ln\big[\det\big(Q_t\cdot Q_0^{-1}\big)\big].$
(61)

Since the matrix $Q_t$ is complex, a proper branch of the logarithm has to be taken to make $\gamma_t$ continuous in time.

Performing calculations with the TGA requires propagating only a single classical trajectory, which makes the method very convenient to combine with on-the-fly dynamics; the moderate computational cost allows molecular dynamics simulations of large systems inaccessible to other methods (see Sec. III B 3). Although the accuracy of a single-Gaussian description is limited, it supplies the most important information beyond that available from static calculations employing the global harmonic approximation for the potential173–176 or from purely classical molecular dynamics simulations.
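A minimal sketch of Eqs. (54)–(56) for a one-dimensional harmonic oscillator ($\hbar=m=\omega=1$, initial conditions chosen for illustration) shows the characteristic width dynamics: starting from a squeezed Gaussian, $A_t$ oscillates and returns to its initial value after half a vibrational period, while for the coherent width $A_0=1/2$ it would remain constant:

```python
import numpy as np

# Thawed Gaussian propagation, Eqs. (54)-(56), for a 1D harmonic oscillator
# (m = omega = hbar = 1), integrated with RK4.  The local harmonic
# approximation is exact here, so the single Gaussian is the exact solution.
m = w = hbar = 1.0

def rhs(y):
    q, p, A = y
    # Eqs. (54)-(56); Hess V = m*w^2 for the harmonic potential
    return np.array([p / m, -m * w**2 * q,
                     -2j * hbar * A**2 / m + 1j * m * w**2 / (2 * hbar)],
                    dtype=complex)

y = np.array([1.0, 0.0, 1.0], dtype=complex)  # q0 = 1, p0 = 0, A0 = 1 (squeezed)
dt, nsteps = 0.001, 3142                      # propagate to t ~ pi (half period)
for _ in range(nsteps):
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(y)  # q ~ -1, p ~ 0, and the width A has returned to A0 = 1
```

In an on-the-fly application, the analytic Hessian in `rhs` would be replaced by the ab initio Hessian evaluated at the current center $q_t$.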

Using classical molecular dynamics simulations to investigate dynamics in the gas and condensed phases goes back to the late 1950s.177 Solving Newton's equations of motion for given initial conditions178 and a parametrized energy function yields coordinates q(t) and momenta p(t), from which a multitude of experimentally accessible observables can be determined using statistical mechanics. Compared with a quantum mechanical treatment, nuclear dynamics followed along classical trajectories neglects three essential effects: zero-point energy, tunneling, and coherence. In this context, it is of interest to note that the results of one of the earliest simulations of a reactive process (a quasi-classical simulation of the reactive collision of H + H2)179 were almost quantitatively confirmed at room temperature by a full quantum treatment some 10 years later.180 Hence, even for a system where one would expect quantum effects to be particularly important, quasi-classical trajectory simulations are capable of providing useful insight.
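The classical propagation underlying such simulations can be sketched with the standard velocity-Verlet integrator; the Morse parameters below are purely illustrative, not those of a fitted surface:

```python
import numpy as np

# Velocity-Verlet integration of Newton's equations for a Morse oscillator
# V(r) = De*(1 - exp(-a*(r - re)))^2; the parameters are hypothetical,
# atomic-unit-like values.  The trajectory (q(t), p(t)) conserves the total
# energy to high accuracy, a basic requirement for extracting
# statistical-mechanical observables from long simulations.
De, a, re, mass = 0.1, 1.0, 1.0, 1836.0
dt, nsteps = 1.0, 20000

def force(r):
    e = np.exp(-a * (r - re))
    return -2.0 * De * a * (1.0 - e) * e   # -dV/dr

def energy(r, p):
    return p**2 / (2 * mass) + De * (1.0 - np.exp(-a * (r - re)))**2

r, p = 1.3, 0.0                            # stretched bond, zero momentum
E0 = energy(r, p)
for _ in range(nsteps):
    p += 0.5 * dt * force(r)
    r += dt * p / mass
    p += 0.5 * dt * force(r)
print(E0, energy(r, p))  # total energy conserved along the trajectory
```

The symplectic nature of velocity Verlet is what keeps the energy drift bounded over the millions of steps typical of condensed-phase simulations.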

Given this, interest in classical molecular dynamics simulations has shifted towards realistically describing the intermolecular interactions, which has become possible through considerable advances in electronic structure theory. With current computational equipment, it is possible to compute fully dimensional potential energy surfaces for systems such as malonaldehyde (21 degrees of freedom) at the CCSD(T) level with large basis sets and to represent the energies by fitting to a parametrized expression.181 Alternatively, for smaller systems, fully dimensional potential energy surfaces can be constructed using reproducing kernels, which exactly represent the data from electronic structure calculations.182–185 

On the other hand, such high-accuracy representations are not yet feasible for systems such as proteins for which empirical force fields are being developed. Based on established parametrized forms186,187 recent advances include, among others, multipolar20,21,188–192 and polarization interactions.193,194 Such extensions now allow predictive atomistic simulations for condensed-phase systems.195 

1. Explicit proton transfer: The MMPT force field

Proton transfer reactions are fundamental in biophysical and biochemical processes. To characterize the properties of a proton shared between donor and acceptor moieties, various experimental methods have been used in the past. One of the most successful approaches is based on optical spectroscopy.196–200 

Following bond breaking and bond formation in simulations based on parametrized, empirical force fields started with the empirical valence bond (EVB) technique, which was particularly relevant to (proton transfer) reactions in solution.201 The generalization of EVB to multi-state EVB has played an important role in investigating proton transfer in solution.202 The EVB Hamiltonian usually consists of two or more diagonal terms, which are force field expressions for all states of interest. The off-diagonal terms are coupling matrix elements that depend parametrically on one (or several) internal coordinates of the system.203 This introduces a dependence on the choice of coordinates which is not always desirable, e.g., if multiple bond rearrangements can occur. Alternatively, a chemical reaction can be followed in time as the progression coordinate, which is the situation encountered in experiments.204,205 This is the purpose of adiabatic reactive molecular dynamics (ARMD), which was originally developed for reactions in the condensed phase.204,206–208 More recently, ARMD has also been applied to gas-phase systems such as the vibrationally induced photodissociation of sulfuric acid (H2SO4). Here, the excitation of a higher overtone of the local OH stretch vibration ν9 can lead to photodissociation into water and sulfur trioxide (H2O + SO3) on the pico- to nanosecond time scale.209,210 However, the ARMD trajectories were not suitable for a final-state analysis of the reaction products because they were based on an explicitly time-dependent Hamiltonian, which does not conserve the total energy during the crossing.

Molecular Mechanics with Proton Transfer (MMPT) is a force field-based method which allows bond formation and bond breaking between the transferring hydrogen atom H* and the acceptor or donor atom, respectively.19 In this approach, multi-dimensional potential energy surfaces are parametrized from ab initio calculations and fit to efficient representations based on Morse potentials. The additional MMPT-energy is written as

$V_{MMPT}=V_0(R,\rho)+k\cdot\theta^2,$
(62)

where $R$ is the donor–acceptor distance and $r$ is the donor–H* separation. These two variables are combined into a coordinate $\rho$ defined as $\rho=(r-r_0)/(R-R_0)\in[0,1]$ with $r_0=0.8$ Å and $R_0=1.6$ Å. In Eq. (62), the (isotropic) 2D potential $V_0(R,\rho)$ is a superposition of Morse functions. For linear proton transfer, the third coordinate $\theta$ involves the donor–H*–acceptor angle and is approximated by a harmonic function.19,211 Subsequently, MMPT was extended to nonlinear hydrogen-bonded motifs as they occur, e.g., in malonaldehyde.212 The nonlinear path is described by a displacement $d=r\cdot\sin\theta$ orthogonal to the proton transfer path, and the 3-dimensional potential $V_d(R,\rho,d)$ replaces the term $k\cdot\theta^2$ in Eq. (62). Adaptation of the MMPT potential to the chemical environment can be achieved through morphing transformations.213 
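The reduced coordinate $\rho$ interpolates between the donor-bound ($\rho\approx0$) and acceptor-bound ($\rho\approx1$) proton. A small sketch with an illustrative geometry (not taken from a parametrized MMPT surface) makes the mapping explicit:

```python
# MMPT proton-transfer coordinate rho = (r - r0)/(R - R0), cf. Eq. (62):
# R is the donor-acceptor distance, r the donor-H* distance, with the
# constants r0 = 0.8 A and R0 = 1.6 A given in the text.  The donor-acceptor
# distance below is an illustrative value.
r0, R0 = 0.8, 1.6

def mmpt_rho(R, r):
    return (r - r0) / (R - R0)

R = 2.6                       # donor-acceptor distance in Angstrom
print(mmpt_rho(R, 1.0))       # proton near the donor    -> 0.2
print(mmpt_rho(R, R / 2))     # symmetrically shared     -> 0.5
print(mmpt_rho(R, R - 1.0))   # proton near the acceptor -> 0.8
```

Because $\rho$ scales with $R$, a single 2D surface $V_0(R,\rho)$ covers both short, strong and long, weak hydrogen bonds.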

2. Atomistic simulations with multipole electrostatics: MTP force fields

Empirical force fields traditionally employ point charge (PC) electrostatics, which describe the charge distribution of a molecule using atom-centered partial charges interacting with one another according to Coulomb's law. To efficiently handle the slow long-range decay of a 1/r interaction, methods such as Ewald summation have been devised to compute long-range electrostatics in periodic systems.214,215 Much of the success of atomistic force fields is due to the effectiveness of PC electrostatics in approximating reasonably well the charge distribution around a particular chemical group. However, limitations become apparent for specific systems: e.g., halogens are notoriously challenging for PC force fields, which fail to model the σ hole in front of the atom.216–218 In general, the lack of anisotropy limits the ability to model specific chemical interactions, as illustrated by the need for dummy atoms in certain water models to better reproduce hydrogen-bond interactions.219,220 To this end, multipolar (MTP) electrostatics provide a natural and systematic extension of Coulomb interactions, in which anisotropy is included as a series expansion with distinct symmetries.

A quantum-mechanical description would be the most rigorous representation of intermolecular interactions. However, practical computational limitations restrict the amount of sampling one is able to carry out. As an example, for an isolated chromophore in solution, at least several nanoseconds of molecular dynamics simulations are required to converge typical properties such as the radial distribution function g(r) or the 1d- or even 2d-infrared spectrum. This corresponds to $10^6$ energy evaluations with a time step of $\Delta t=1$ fs. This is why one resorts to empirical force fields, which allow extensive sampling of configuration space. The validity of the underlying computational model is verified by comparison with reference data from experiments.221 Since the relevant dynamics is governed by electrostatic and van der Waals interactions, multipolar and polarizable force fields193,194 are necessary for the interpretation of time scales and structural changes at an atomistic level. However, PC-based force fields are not necessarily inferior to an MTP parametrization, depending on the molecule considered and the property studied.20,21,222

3. Force field parametrization

Instead of decomposing the electron density into distributed multipoles,223 it is also possible to fit MTP coefficients with respect to the electrostatic potential itself.224–226 Expanding the electrostatic potential in terms of the Cartesian coordinates227 gives rise to

4\pi\varepsilon_0\,\Phi(\mathbf{r}) = \frac{q}{R} + \frac{\mu_\alpha R_\alpha}{R^3} + \frac{1}{3}\,\Theta_{\alpha\beta}\,\frac{3 R_\alpha R_\beta - R^2 \delta_{\alpha\beta}}{R^5} + \cdots,
(63)

where 1/R ≡ 1/|r − r′|, r′ is the location of the MTP moments, r is an observation point, the Einstein summation convention is applied, and the Kronecker delta δαβ is 1 if α = β and 0 otherwise. Equation (63) shows that the electrostatic potential depends linearly on the MTP coefficients q, μα, etc. Optimizing MTP coefficients to best reproduce the ab initio electrostatic potential can thus be done by a linear least-squares fit over a number of discrete points r(p) around the molecule. In the target function

\chi^2 = \min \sum_p \left[ \Phi^{\mathrm{ai}}(\mathbf{r}^{(p)}) - \Phi^{\mathrm{MTP}}(\mathbf{r}^{(p)}) \right]^2,
(64)

the sum runs over a list of discrete points, and Φai and ΦMTP represent the values of the electrostatic potential generated by the ab initio calculation and the MTP coefficients, respectively. The linearity of the problem allows us to cast χ2 into the form Xb=y, where the matrix X represents all geometrical terms (i.e., the T tensors227) sampled on every grid point, the vector b contains all MTP coefficients, and the vector y is the collection of ab initio electrostatic potential values at every grid point. A comparison between the electrostatic potential and its PC and MTP representations for iodophenol is shown in Fig. 2. From both the difference density map and the root-mean-square error, it is evident that a PC representation is not capable of correctly describing the electrostatic potential around the molecule. An MTP model is superior by a factor of 5 compared to the PC model and is expected to perform much better in atomistic simulations. This was explicitly shown for an iodinated tyrosine in insulin complexed to a model for the insulin receptor, for which a PC model leads to only one favorable interaction between hormone and receptor, whereas an MTP model establishes two additional contacts because the σ hole is correctly represented.195 A comparison of electrostatic potentials for halogenated benzenes is shown in Fig. 3. In all cases, an MTP representation of the electrostatics, together with van der Waals parameters fitted to experimental data, yields hydration free energies within 0.1 kcal/mol of the reference values, whereas for PC models the difference can be five times larger depending on the halogen modification.20,218 The choice of the electrostatic model (PC vs MTP) leads to a different water ordering around the solute (see Fig. 4), which directly impacts the quality of the quantity computed from the simulation, in this case the hydration free energy.
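The linear least-squares problem Xb = y can be sketched in its simplest form, in which only the lowest-order (point charge) terms are fitted; the three sites, charges, and grid below are hypothetical stand-ins for an actual molecule and ab initio reference:

```python
import numpy as np

# Hypothetical 3-site model: fit atomic point charges (the lowest-order
# MTP terms) to a reference electrostatic potential sampled on grid points
# around the molecule, by solving the linear system X b = y.

rng = np.random.default_rng(0)
sites = np.array([[0.0, 0.0, 0.0],
                  [1.4, 0.0, 0.0],
                  [2.1, 1.1, 0.0]])          # atom positions (a.u.), assumed
q_true = np.array([-0.4, 0.7, -0.3])         # "ab initio" charges, assumed

# Sample the potential on random points outside the molecular region.
grid = rng.uniform(-4, 6, size=(500, 3))
R = np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2)  # |r - r'|
grid = grid[R.min(axis=1) > 2.0]             # keep points beyond ~2 a.u.
X = 1.0 / np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2)

y = X @ q_true                                # reference ESP values (Phi_ai)
b, *_ = np.linalg.lstsq(X, y, rcond=None)     # fitted charges minimize chi^2

print(b)   # recovers q_true for this noise-free synthetic reference
```

In a realistic fit, the columns of X would also contain the dipole and quadrupole geometrical factors of Eq. (63) (the T tensors), and y would come from an ab initio calculation rather than a synthetic charge model.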

FIG. 2.

PC vs MTP: Isosurfaces of the difference between ab initio and PC and MTP electrostatic potentials for iodophenol (Iodophen-2-ol). Blue and red regions denote the positive and negative errors, respectively. The plots only show points within the first interaction belt.

FIG. 3.

Electrostatic surface potential maps of (from left to right) benzene, fluorobenzene, chlorobenzene, bromobenzene, and iodobenzene at the 10⁻³ e/a₀³ isodensity surface. The color scale of the surface potential ranges from −2.12×10⁻² au (red) to +2.12×10⁻² au (blue). The upper black arrow indicates the increase in the σ-hole strength of the halogens. The red arrow indicates the decrease of the electron-rich region δ⁻ on the sides of the C-halogen bond, and the blue arrow indicates the increase of the electron-deficient region δ⁺ along the C-halogen bond.

FIG. 4.

Halobenzene-water dynamics. The figure illustrates the two types of positioning of water molecules around the halogen of halobenzene.


The final parametrization of the nonbonded interactions of a force field involves accurate electrostatic parameters (see above) and optimized van der Waals parameters for condensed-phase simulations. This second step requires explicit molecular dynamics simulations to be run with fixed PC or MTP parameters while adjusting the van der Waals well depths ϵ and ranges σ to best reproduce experimentally determined thermodynamic reference data. Often, the pure liquid density, the heat of vaporization, and the hydration free energy of the target molecule are used as the reference to fit to. Starting from, e.g., the CGenFF228 force field, observables are computed from explicit molecular dynamics simulations. The van der Waals parameters are then modified by scalar factors for efficient optimization, and the simulations are repeated until a predefined quality of the fit is obtained. Such a procedure has been cast into a versatile computing environment, with which an RMSD of 0.36 kcal/mol between experimental and computed thermodynamic properties was reached for a set of 20 diverse small molecules.21 
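The refinement loop described above can be sketched as follows; the simulate() function is a smooth toy surrogate standing in for an actual MD engine, and all parameter values and experimental targets are illustrative only:

```python
import numpy as np

# Schematic vdW refinement: starting LJ parameters (epsilon, sigma) are
# scaled by factors (s_eps, s_sig), a "simulation" returns condensed-phase
# observables, and the factors are tuned to match experimental targets.
# simulate() is a stand-in (a smooth toy model), NOT a real MD engine.

eps0, sig0 = 0.15, 3.4          # initial LJ parameters (kcal/mol, Angstrom)
target = np.array([0.95, 9.1])  # density (g/cm^3) and dHvap (kcal/mol) targets

def simulate(s_eps, s_sig):
    # Toy surrogate: observables respond smoothly to the scaled parameters.
    eps, sig = s_eps * eps0, s_sig * sig0
    density = 0.95 * (sig / 3.4) ** -1.5 * (eps / 0.15) ** 0.3
    dhvap = 9.1 * (eps / 0.15) * (sig / 3.4) ** -0.5
    return np.array([density, dhvap])

def chi2(s):
    return np.sum((simulate(*s) - target) ** 2)

# Crude grid refinement in place of a production optimizer.
best = min(((chi2((a, b)), (a, b))
            for a in np.linspace(0.8, 1.2, 41)
            for b in np.linspace(0.8, 1.2, 41)), key=lambda t: t[0])
print(best)   # scaling factors near (1, 1) reproduce the targets here
```

In practice, each chi2 evaluation costs a full set of equilibrium MD simulations, which is why the optimization is carried out over a small number of scalar scaling factors rather than all vdW parameters independently.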

As mentioned above, an MTP representation is not unique, and such a fit is usually overdetermined. Hence, the number of MTP components required to best reproduce a reference electrostatic potential can be reduced to obtain a predefined level of accuracy.229 Also, symmetries can be exploited to further reduce the number of MTP components.225 An alternative route to multipolar-quality force fields is to use the isomorphism between multipoles, atomic orbitals, and their point charge representation. This was exploited in the distributed charge model,230 which was recently further improved to a minimal distributed charge model231 based on off-centered point charges. The minimal distributed charge model is capable of approximating the reference ab initio electrostatic potential with an accuracy as good as or better than MTPs, without the need for computationally expensive higher-order multipoles. For three test cases (imidazole, imidazole cation, and phenylbromide), the best minimal distributed charge model outperforms a multipole expansion truncated after the quadrupole term and is very close to or even better in quality than a multipole expansion truncated after the octupole term. At the same time, a minimal distributed charge model usually uses fewer than two PCs per atom and is therefore computationally more efficient by about one order of magnitude than the corresponding distributed charge model,230 while having the same computational advantages over MTPs in molecular dynamics simulations. Remarkably, for imidazole and phenylbromide, it is even possible to find a minimal distributed charge model with fewer PCs than atoms (i.e., more efficient than a conventional PC representation) whose quality is comparable to a multipole expansion truncated after the quadrupole term.231 

1. An application of NABDY to the collision of H with H2

The theoretical formalism of the NABDY approach, which provides an accurate on-the-fly solution of the electronic and nuclear time-dependent Schrödinger equations, has already been described above in some detail. A small molecular system has been chosen to demonstrate the applicability of the method: the collision of H with H2 has been found to be a convenient test case for NABDY simulations.115 

The electronic energies and the nonadiabatic coupling vectors have been computed on the fly at the DFT/TDDFT level using the LDA functional.232–234 An energy cutoff of 70 Ry and a cubic box of 20 bohrs have been employed in all the electronic structure calculations, performed with the plane wave code CPMD. The smaller second-order nonadiabatic couplings D_ij^γ(R) were neglected [see Eq. (24)], which, due to the low dimensionality of the problem, does not lead to a considerable norm-conservation problem. Exact wavepacket propagation and TSH calculations have also been performed on the initial wavepacket, using the same potential energy surfaces and nonadiabatic couplings obtained with NABDY, to compare and validate the results. The exact wavepacket propagation has been performed on a fitted one-dimensional surface obtained from the unconstrained NABDY dynamics.

The system has been prepared with an H atom with an initial momentum k = 75 au moving towards the H2 molecule along the collision path shown in Fig. 5 (inset). During the course of the dynamics, as the colliding bodies approach each other, they eventually encounter a region of strong nonadiabatic coupling, and the electronic population undergoes a partial transfer from the ground electronic state to the first excited state, as shown in Fig. 5 (top panel). The amount of transfer has been seen to depend on the momentum of the incident H atom. For k = 75 au, NABDY estimates a 27.9% population transfer, whereas the exact propagation gives a 27.8% population in the first excited state. It is worth noting that despite the inherently ad hoc stochastic hops and the independent-trajectory approximation, the TSH scheme is able to reproduce the excited state populations quite accurately. However, TSH estimates a rate of population transfer slightly higher than the exact one. The agreement of the NABDY calculations with the exact result on the amount of population transfer is remarkable. This systematic agreement stems from the presence of the adiabatic quantum and the nonadiabatic quantum potentials in the NABDY equations of motion [see Eq. (26)], which are nonexistent in TSH. The bottom panel of Fig. 5 shows the quantum and the nonadiabatic quantum potentials calculated as a function of the H–H2 distance when the ground state wavepacket was centered around d = 1.75 au. Overall, it can be concluded that NABDY, being a correlated-trajectory method, can capture additional nuclear quantum effects that are inaccessible to Tully's surface hopping approach. The interested reader is referred to Ref. 115 for further details.

FIG. 5.

NABDY simulation results for the collision of H with H2. Top panel: the H atom approaches the H2 molecule with an initial momentum k = 75 au along the path which makes an angle χ=89° with the H–H bond axis (see inset). A Gaussian wave packet prepared at t1=0 au evolves with time. Shown is the probability density of the nuclear wave packet obtained with 352 trajectories at t1 and at t3=300 au after it crosses the region of strong nonadiabatic coupling. The wavepackets on the different states are indicated with blue (ground state) and orange (excited state) colors. The vertical displacement of the wavepackets at t3 is arbitrary. The black dotted line represents the nonadiabatic coupling strength. The inset illustrates the time-dependent population of the first excited state obtained with the different dynamics methods. The bottom panel shows the quantum potentials (Qj) and the non-adiabatic potentials (Dj) computed at time t2 (see the asterisk in the upper panel). At this time, the ground state wavepacket is centered at d = 1.75 au.


2. Photoinduced ultrafast intramolecular proton transfer of 4-hydroxyacridine: An application of local control theory

Mixed quantum/classical dynamical methods based on on-the-fly determination of the electronic structure,235,236 such as TSH,237 are best suited for the application of LCT to photochemical problems of larger systems (such as biomolecules), especially when a specific environment needs to be considered. LCT, when combined with ab initio molecular dynamics, carries additional appeal because it requires a single forward propagation in time, whereas conventional optimal control theory typically involves several forward and backward propagations. The TSH/LCT implementation developed in the framework of the NCCR MUST typically targets state-specific electronic transitions.134,238 Starting from a system, usually in its ground electronic state, it computes the instantaneous optimal pulse that induces electronic population transfer to the desired state, eventually leading to a trajectory hop from the initial state to the target state.

As an application of this on-the-fly TSH/LCT approach based on an LR-TDDFT framework,117,238–240 the photoinduced ultrafast intramolecular proton transfer of 4-hydroxyacridine (4-HA) has been investigated. 4-HA has previously been studied both experimentally and theoretically, with static calculations241,242 showing that the proton transfer in the ground state is hindered by a prohibitively high potential energy barrier, which is reduced to a large extent in the first excited (S1) state. Therefore, to assess the involvement of electronic excited states in the ultrafast dynamics of this system, an unconstrained nonadiabatic ab initio molecular dynamics study combined with LCT (i.e., TSH/LCT) has been performed.

To this end, an isolated 4-HA molecule was placed in a simulation box of dimensions 16 × 16 × 10 Å. Troullier-Martins-type pseudopotentials243 have been employed with a cutoff of 100 Ry for the plane wave basis. The ground and the first three excited electronic states (S1, S2, and S3) have been included in the calculations. To compute the excitation energies and the nuclear forces, the LR-TDDFT equations were solved using the Tamm-Dancoff approximation.244 The Perdew-Burke-Ernzerhof (PBE) xc functional has been used along with the adiabatic approximation for the corresponding xc kernel.245 The molecule was equilibrated at 300 K by a Born-Oppenheimer molecular dynamics run in the ground electronic state. Different initial configurations were chosen randomly from the Boltzmann distribution obtained from the ground-state equilibration run. All the calculations have been performed using the CPMD package.118 

The TSH/LCT calculations were initialized with a 2.4 fs seed pulse of field strength 0.005 au which provides an infinitesimal initial population in the target state. This is essential for an effective LCT dynamics, which otherwise would have a zero field as long as the population of the target state remains strictly zero [see Eq. (35)]. The rest of the LCT dynamics has been carried out with a field strength λ=0.1 au. The field has been calculated at every integration time step for the nuclear equations of motion, which was set to 1 au.
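The essence of the scheme, i.e., choosing the field at every time step such that the target-state population can only grow [cf. Eq. (35)], can be demonstrated on a two-level toy model; the Hamiltonian, coupling, and numerical values below are illustrative and unrelated to the actual 4-HA simulation:

```python
import numpy as np

# Minimal local-control demonstration on a two-level system: the field is
# chosen at every step as E(t) = -lambda * Im(c1* c0), which by construction
# makes the target-state population grow monotonically. A tiny seed
# population in the target state is needed to switch the field on, mirroring
# the role of the seed pulse described in the text. All numbers are toy values.

hbar, omega, mu, lam = 1.0, 1.0, 1.0, 0.1
dt, nsteps = 0.01, 30000
c = np.array([1.0 + 0j, 1e-3 + 0j])   # tiny seed in the target state
c /= np.linalg.norm(c)

def deriv(c, E):
    # i hbar dc/dt = H c, with H = diag(0, omega) - E * mu * sigma_x
    H = np.array([[0.0, -E * mu], [-E * mu, omega]])
    return -1j * (H @ c) / hbar

pops = []
for _ in range(nsteps):
    E = -lam * np.imag(np.conj(c[1]) * c[0])   # local-control field
    k1 = deriv(c, E); k2 = deriv(c + 0.5*dt*k1, E)
    k3 = deriv(c + 0.5*dt*k2, E); k4 = deriv(c + dt*k3, E)
    c += (dt / 6) * (k1 + 2*k2 + 2*k3 + k4)    # RK4 step, field frozen in step
    pops.append(abs(c[1])**2)

print(pops[-1])   # target population grows essentially monotonically toward 1
```

With this choice of field, dP1/dt = 2λμ[Im(c1*c0)]² ≥ 0, which is the defining property of local control: no backward population transfer can occur, in contrast to the π-pulse dynamics discussed below.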

To illustrate the efficiency of the LCT scheme, the results were compared to the case of applying a simple π pulse (see bottom panel of Fig. 6). To design the π pulse, we considered a vector potential of the form

A(t) = A_0\, \epsilon_\lambda\, \exp\!\left[ -\frac{(t - t_0)^2}{T^2} \right] \sin \omega t,
(65)

where the frequency ω has been chosen to match the energy gap (2.55 eV) between the ground and the first excited electronic states at the ground-state optimized geometry. The numerical values of the other relevant quantities of Eq. (65) were A0/c = 0.1067, t0 = 2000 au, and T = 800 au. The results from the propagation of a single trajectory, using the same initial conditions as the TSH/LCT propagation, are depicted in Fig. 6. They show a smooth transfer of population from the ground to the first excited state for the first 50 fs, accumulating a population of 42%. Beyond this point, however, the dynamics exhibits merely an oscillatory, incomplete back-and-forth transfer of population between the lowest two electronic states (middle panel of Fig. 6). It can also be seen (top panel of Fig. 6) that the trajectory undergoes an actual hop to the first excited state at t ≈ 77 fs but stays there only for a short period of time. Overall, it can be concluded that with this rather naive and weak π pulse, it is not possible to promote the population of the ground state to the first excited state selectively and efficiently.
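For reference, Eq. (65) with the quoted parameters can be evaluated numerically; the polarization direction is dropped (scalar field), the amplitude is taken as the quoted A0/c so the overall scale is only illustrative, and the electric field is obtained by numerical differentiation of the vector potential:

```python
import numpy as np

# Numerical evaluation of the Gaussian-envelope vector potential of
# Eq. (65), with the parameters quoted in the text (atomic units).

A0 = 0.1067                          # the quoted A0/c; overall scale illustrative
t0, T = 2000.0, 800.0
omega = 2.55 / 27.211386             # 2.55 eV converted to hartree
t = np.linspace(0.0, 4000.0, 8001)   # time grid (a.u.)

A = A0 * np.exp(-((t - t0) / T) ** 2) * np.sin(omega * t)
E = -np.gradient(A, t)               # field from E = -(1/c) dA/dt, same scale

print(np.abs(A).max(), np.abs(E).max())
```

The carrier period (about 67 au) is much shorter than the envelope width T = 800 au, so the field amplitude is dominated by the carrier term, |E|max ≈ (A0/c)·ω ≈ 0.01 au.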

FIG. 6.

The results of propagating a single trajectory of the TSH/π-pulse dynamics of 4-HA. Top panel: Potential energies of the ground (black), S1 (blue), S2 (orange), and S3 (red) states as a function of time obtained at the DFT/LR-TDDFT level. The green line indicates the driving state, which determines the forces on the nuclei during the dynamics. Middle panel: The time-dependent populations of all four electronic states for the same trajectory. The inset shows the Fourier transform of the entire LCT pulse (—). Bottom panel: The LCT pulse in the time domain (black line) and the vector potential (red line).


In contrast, as can clearly be seen from Fig. 7, the LCT pulse starts gaining amplitude from the very early stages of the simulation and attains its maximum amplitude at 50 fs, while the corresponding trajectory undergoes a hop to the S1 state at 60 fs. Consequently, a smooth and almost complete electronic population transfer is achieved within the first 75 fs. The frequency spectrum of the LCT pulse (obtained by Fourier transform) is centered around the vertical energy gap between the ground and the S1 state (2.6–2.65 eV). Some additional low-intensity peaks appear below 2 eV, which mainly stem from the vibrational relaxation within the S1 state. The bottommost panel of Fig. 7 depicts four representative structures of 4-HA, which correspond to four important instants of time (shown in the top panel of Fig. 7) during the course of the dynamics. The system starts evolving in time in the ground electronic state with the proton attached to the oxygen atom. At about 60 fs, it undergoes a trajectory hop to the S1 state of ππ* character [Fig. 7(ii)], which induces a transfer of electron density from the donor (oxygen) atom to the acceptor (nitrogen) atom. Consequently, the N–H distance shortens and the O–H distance increases with time [Figs. 7(iii) and 8], finally leading to a complete proton transfer occurring shortly after 200 fs [Figs. 7(iv) and 8]. It is worth noting that only a small amount of population accumulates in the other two excited states during the dynamics. Moreover, the occasional hops to these states are always followed by a quick deactivation to the ground state. Further details about this study can be found in Ref. 246.

FIG. 7.

A representative trajectory of the TSH/LCT dynamics of 4-HA. Top panel: Potential energies of the ground (black), S1 (blue), S2 (orange), and S3 (red) states as a function of time obtained at DFT/LR-TDDFT level. The green line indicates the driving state, which determines the forces on nuclei during the dynamics. Middle panel: The time-dependent populations of all 4 electronic states for the same trajectory. The inset shows the Fourier transforms of the entire LCT pulse (—) and the same for the first part of the pulse until the first trajectory hop occurs (light grey area). Bottom panel: The LCT pulse in time domain. Panels (i)–(iv) report 4 representative 4-HA structures sampled along the trajectory (labels correspond to times indicated in the top panel).

FIG. 8.

Time evolution of the O–H and the N–H distances of 4-HA along the TSH/LCT dynamics. The cyan area represents the duration in which the molecule is in its electronic ground state.


1. Time-resolved stimulated emission spectra of pyrazine

The present section illustrates the performance of several variants of the dephasing representation (Sec. II C 1) in calculating the time-resolved stimulated emission spectra of pyrazine. The employed model247 considers the transitions between the S0 and S1 electronic states and takes into account four normal modes of pyrazine. The nonadiabatic couplings between the S1 and S2 states are neglected since they do not play a significant role for an S0 → S1 excitation.

Assuming the validity of the zero-temperature, electric dipole, and Condon approximations (see Sec. II A 1), assuming the two pulses to be ultrashort and well separated (see Sec. II A 3), and using the time-dependent perturbation theory (see Sec. II A 2), the time-resolved stimulated emission spectrum at the frequency ω can be calculated as the Fourier transform

\sigma(\omega, \tau) \propto \mathrm{Re} \int_0^\infty f(t, \tau)\, e^{i\omega t}\, dt
(66)

of the wavepacket correlation function59,86,168

f(t, \tau) = \langle \psi_1(t, \tau) | \psi_0(t, \tau) \rangle,
(67)

where τ is the time delay between pump and probe pulses, t is the time elapsed after the probe pulse, and

|\psi_j(t, \tau)\rangle = e^{-i\hat{H}_j t/\hbar}\, e^{-i\hat{H}_1 \tau/\hbar}\, |\psi\rangle
(68)

represents the state |ψ (initially the vibrational ground state of the ground state Hamiltonian Ĥ0) evolved first for the time delay τ with the excited state Hamiltonian Ĥ1 and subsequently for time t with either the ground or excited state Hamiltonian (j = 0, 1; note that we now number the electronic states and corresponding Hamiltonians starting from 0 instead of 1, to agree with the convention of numbering electronic singlet states S0 and S1).

The dephasing representation and its variants described in Sec. II C 1 can be easily applied to Eq. (67). The only minor modification consists in replacing the action difference in Eq. (42) with its generalized version167 

\Delta S(z_0, \tau, t) = \int_\tau^{\tau + t} \Delta V(z_{t'})\, dt',
(69)

where the classical trajectory z_{t′} follows the excited state Hamiltonian H1 for t′ ∈ [0, τ] and the average Hamiltonian H̄ for t′ > τ.

Figure 9 compares the time-correlation functions and spectra calculated using the DR and cellular DR with a prefactor with the corresponding exact quantum-mechanical results. The cellular DR with a prefactor agrees remarkably well with the quantum calculation [see panels (a) and (b)] and requires fewer trajectories for convergence than the original DR [see the convergence plot in panel (c)]. However, the latter property is not universal—in strongly chaotic systems, such as the quartic oscillator, a few chaotic trajectories with very large prefactors may require an enormous number of well-behaved trajectories to compensate for this, whereas the original DR approach avoids this issue since it contains no potentially problematic prefactors.167 
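The basic DR prescription, an average of phase factors exp(−iΔS/ħ) over classical trajectories of the average Hamiltonian, can be sketched for a pair of displaced harmonic surfaces of equal frequency (ħ = m = ω = 1), a case for which the DR is exact; the displacement and trajectory count below are arbitrary demo values:

```python
import numpy as np

# Dephasing-representation (phase-averaging) sketch for a 1D model with two
# displaced harmonic surfaces of equal frequency: f(t) = < exp(-i DeltaS) >
# over classical trajectories of the *average* Hamiltonian, with initial
# conditions sampled from the Wigner distribution of the ground state of V0.

rng = np.random.default_rng(1)
d = 1.5                                   # displacement of the excited surface
N, dt, nsteps = 4000, 0.01, 1000
x = rng.normal(0.0, np.sqrt(0.5), N)      # Wigner sampling of |0> of V0
p = rng.normal(0.0, np.sqrt(0.5), N)

dS = np.zeros(N)
f = np.empty(nsteps, dtype=complex)
for k in range(nsteps):
    dV = -d * x + d**2 / 2                # DeltaV = V1 - V0 for this model
    dS += dV * dt                         # accumulate the action difference
    f[k] = np.mean(np.exp(-1j * dS))
    # one velocity-Verlet step on the average surface, centered at d/2
    force = -(x - d / 2)
    p += 0.5 * dt * force
    x += dt * p
    p += 0.5 * dt * (-(x - d / 2))

t = dt * np.arange(1, nsteps + 1)
exact = np.exp(-(d**2 / 2) * (1 - np.cos(t)))   # exact |f(t)| for this model
print(np.max(np.abs(np.abs(f) - exact)))        # small Monte Carlo error
```

Because ΔV is linear in the coordinate here, the Gaussian phase average reproduces the exact fidelity amplitude up to Monte Carlo and time-step errors; for general anharmonic surfaces the DR is only a semiclassical approximation, which motivates the cellular and Gaussian refinements discussed in the text.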

FIG. 9.

Time-resolved stimulated emission (TRSE) in the pyrazine S0/S1 model247 for the time delay τ = 2×10³ a.u. ≈ 48 fs. Comparison of the results of the dephasing representation (DR) and cellular dephasing representation with a prefactor (CDRP) with the exact quantum results. (a) Time correlation function. (b) TRSE spectrum. (c) Convergence (measured by the relative L2 norm error) of the damped correlation function as a function of the number of trajectories N. Reprinted with permission from J. Vaníček, Chimia 71, 283 (2017). Copyright 2017 Swiss Chemical Society.


The accuracy of the Gaussian dephasing representation is demonstrated on the same pump-probe system in Fig. 10. The correlation function and spectrum computed with the Gaussian dephasing representation and using 576 basis functions are virtually indistinguishable from the exact quantum results, unlike the fully converged DR calculation, which contains a residual semiclassical error. We also note that, while the original DR does not capture the absolute magnitudes of all peaks in the spectrum correctly, the positions and relative intensities are described rather well with the DR [see Figs. 9(b) and 10(b)]. Thus, even the original DR (or phase averaging) provides a computationally efficient tool for a qualitative prediction of molecular spectra since no Hessians are required.

FIG. 10.

Time-resolved stimulated emission (TRSE) in the pyrazine S0/S1 model247 for the time delay τ = 2×10³ a.u. ≈ 48 fs. Comparison of the results of the dephasing representation (DR) and Gaussian DR (GDR) with the exact quantum results. (a) Time correlation function. (b) TRSE spectrum obtained as a Fourier transform of the correlation function multiplied by a damping function, displayed in panel (a) by a gray dashed-double-dotted line. Reprinted with permission from J. Vaníček, Chimia 71, 283 (2017). Copyright 2017 Swiss Chemical Society.


2. Absorption and photoelectron spectra of ammonia

In this section, we illustrate the performance of the on-the-fly ab initio TGA method (Sec. II C 2) in describing the spectra of floppy molecules, i.e., molecules in which one would expect a local harmonic approximation to break down due to large-amplitude anharmonic motions. Ammonia (NH3) is used as a representative example of a floppy molecule; its small size allows for a comparison of different, rather accurate levels of ab initio theory, permitting one to separate the errors due to the electronic structure evaluation from those due to the dynamical approximation.

Within the Born-Oppenheimer, zero-temperature, electric dipole, and Condon approximations, and using the time-dependent perturbation theory (Sec. II A), the absorption spectrum as a function of the incident light frequency ω is obtained as the Fourier transform

\sigma(\omega) = \frac{2\pi\omega}{3\hbar c}\, \mu_{01}^2 \int_{-\infty}^{\infty} dt\, e^{i(\omega + E_{0,0}/\hbar)t}\, C(t).
(70)

Here, μ01 is the transition dipole moment between the ground and excited electronic states and

C(t) = \langle \psi_{0,0} | e^{-i\hat{H}_1 t/\hbar} | \psi_{0,0} \rangle
(71)

is the autocorrelation function of the initial ground vibrational state |ψ0,0 of the ground electronic state with energy E0,0, which evolves with excited-state Hamiltonian Ĥ1 after the excitation.
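The relation of Eq. (70) between C(t) and the spectrum can be illustrated with the analytic autocorrelation function of a one-dimensional displaced harmonic oscillator (Huang-Rhys factor S); all parameters below are arbitrary toy values, and a Gaussian damping mimics finite spectral resolution:

```python
import numpy as np

# Toy illustration of the spectrum as a Fourier transform of the wavepacket
# autocorrelation function. C(t) is the analytic result for a displaced
# harmonic oscillator, not an ab initio correlation function.

S, omega_v, omega_eg = 0.5, 1.0, 10.0      # HR factor, vib. freq., 0-0 gap
dt, n = 0.05, 2 ** 14
t = dt * np.arange(n)
C = np.exp(-S * (1 - np.exp(-1j * omega_v * t)) - 1j * omega_eg * t)
C *= np.exp(-(t / 150.0) ** 2)             # damping / line broadening

w = 2 * np.pi * np.fft.fftfreq(n, d=dt)
spec = np.real(n * dt * np.fft.ifft(C))    # ~ Re int_0^inf e^{i w t} C(t) dt

peak = w[np.argmax(spec)]
print(peak)   # close to the 0-0 transition at omega_eg = 10
```

The resulting spectrum is a Franck-Condon progression of lines at ω = ωeg + kωv with Poisson intensities e^(−S)S^k/k!, which is the simplest analog of the umbrella-mode progression in the NH3 spectrum discussed next.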

The experimental Ã¹A₂ ← X̃¹A₁ (S1 ← S0) absorption spectrum of ammonia contains a single long progression due to the activation of the umbrella motion of NH3 (Fig. 11). The electronic transition under consideration is accompanied by a substantial change of the nuclear configuration from nonplanar (X̃¹A₁ state) to planar (Ã¹A₂ state), which induces a large-amplitude nuclear motion exploring anharmonic regions of the excited potential energy surface.

FIG. 11.

Absorption spectrum of NH3: Comparison of the experimental spectrum recorded at the temperature of 175 K with the spectra computed with the on-the-fly ab initio thawed Gaussian approximation (OTF-AI TGA), vertical harmonic (VH), and adiabatic harmonic (AH) models within the B3LYP and CASPT2 ab initio methods. All spectra are rescaled so that the highest spectral peak in each spectrum is of unit intensity. Reprinted with permission from Wehrle et al., J. Phys. Chem. A 119, 5685 (2015). Copyright 2015 American Chemical Society.


Figure 11 compares the experimental spectrum with the spectra calculated with the on-the-fly ab initio TGA approach using the CASPT2 and B3LYP levels of theory.176 The local harmonic approximation employed in the on-the-fly ab initio TGA partially captures the anharmonicity of the potential energy surface of the excited electronic state, resulting in excellent peak spacings and relative intensity distributions in the corresponding spectrum. We also note that the employed level of ab initio theory mainly affects the intensities of the peaks without modifying their spacing. In addition, Fig. 11 illustrates the performance of the common approach based on a global harmonic approximation for the excited state potential, which is obtained using the ab initio data (potential, forces, and Hessian) calculated either at the ground-state (“vertical harmonic” model) or excited-state (“adiabatic harmonic” model) equilibrium geometry.248 In the adiabatic harmonic model, the stretching modes are overly excited due to their coupling to the bending mode, which results in unphysical progressions. Furthermore, small changes in the equilibrium geometries caused by employing different levels of ab initio theory have a drastic impact on the spectrum. The vertical harmonic model suffers much less from these two problems and, in addition, provides a better description of the Franck–Condon region important for spectra calculations. Still, it is clear that neither of the two harmonic models can reproduce the anharmonic peak spacing, while the on-the-fly ab initio TGA approach provides a good quantitative description of the absorption spectrum of NH3.

A stricter test of the robustness of the on-the-fly ab initio TGA was provided by the simulation of the photoelectron spectrum of NH3.176 This better-resolved spectrum depends on much longer dynamics than does the absorption spectrum and, as a result, is much more affected by anharmonicity. Indeed, as shown in Ref. 176, the global harmonic approaches break down even more than in the case of the absorption spectrum, the vertical harmonic model again yielding too large a level spacing and the adiabatic harmonic model exhibiting unphysical progressions. Surprisingly, the on-the-fly ab initio TGA result agrees with the experimental spectrum reasonably well: The peak positions are almost indistinguishable, whereas the discrepancies in the intensities reflect the deteriorating quality of the local harmonic approximation. Overall, the on-the-fly ab initio TGA approach provides a powerful tool for the simulation of electronic spectra even for floppy systems, as long as the contributing dynamics is rather short.
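The time-dependent picture underlying the TGA relates a vibronic spectrum to the Fourier transform of the wavepacket autocorrelation function. The sketch below illustrates this connection for a one-dimensional displaced harmonic oscillator, for which the autocorrelation is known analytically; all parameters (frequency, displacement, 0-0 energy, damping) are hypothetical, and this is a toy model, not the on-the-fly ab initio procedure itself.

```python
import numpy as np

# Toy model: absorption spectrum as the (half-)Fourier transform of the
# wavepacket autocorrelation C(t) = <psi(0)|psi(t)>, here for a displaced
# harmonic oscillator where C(t) is known in closed form.
omega = 0.01   # vibrational frequency (a.u., hypothetical)
d = 1.5        # dimensionless displacement between the two surfaces
E00 = 0.2      # 0-0 transition energy (a.u., hypothetical)
S = d**2 / 2   # Huang-Rhys factor -> Poisson-distributed line intensities

t = np.linspace(0.0, 20000.0, 2**14)
dt = t[1] - t[0]
# Analytic autocorrelation for a coherent state on a displaced surface
C = np.exp(-S * (1 - np.exp(-1j * omega * t)) - 1j * E00 * t)
damp = np.exp(-t / 1500.0)          # phenomenological line broadening

# sigma(E) is proportional to Re of the half-Fourier transform of C(t)
E = np.linspace(E00 - 2 * omega, E00 + 8 * omega, 400)
sigma = np.array([np.sum((C * damp * np.exp(1j * e * t)).real) * dt
                  for e in E])

peak = E[np.argmax(sigma)]          # strongest vibronic line
```

For S slightly above one, the Poisson intensity distribution makes the first vibronic line (at E00 + omega) the strongest, which the numerical spectrum reproduces.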

3. Emission spectra of oligothiophenes

Polythiophenes and their functional derivatives combine remarkable conductivity with excellent thermal and chemical stability, making them very promising for applications in organic electronics. Understanding the structural and dynamical properties of such systems is therefore important for the design of new materials. Due to the large size of oligothiophenes, molecular dynamics simulations using quantum mechanical methods are unfeasible, and one is forced to find a compromise between accuracy and computational efficiency.

The utility of the on-the-fly ab initio TGA approach (coupled with DFT and time-dependent DFT electronic structure methods) for predicting the electronic emission spectra of oligothiophenes (Tn, where n = 2, 3, 4, 5 is the number of thiophene units) was validated by Wehrle et al.175 Figure 12 compares the experimental and calculated emission spectra of pentathiophene; both the overall shape of the spectrum and the peak intensities are in excellent agreement. The calculated spectrum is slightly shifted compared to the experimentally measured one, most likely due to the limited accuracy of the electronic structure methods. Nevertheless, the observed agreement is remarkable considering that pentathiophene has 105 degrees of freedom, which makes it currently the largest chemical system treated with on-the-fly ab initio semiclassical dynamics.

FIG. 12.

Comparison of the experimental emission spectrum of pentathiophene (dashed green line) with the full-dimensional on-the-fly ab initio thawed Gaussian approximation calculation using all 105 normal modes (solid black line). Adapted with permission from J. Chem. Phys. 140, 244114 (2014). Copyright 2014 AIP Publishing LLC.


To better understand the underlying dynamics, Wehrle et al.175 proposed a systematic way to analyze the influence of different normal modes on the vibrational structure of the emission spectrum. The method uses components of the stability matrix calculated along the trajectory to partition all normal modes into approximately uncoupled groups and then selects the most important modes by considering the maximum displacements relative to the associated Gaussian width parameters. As a result, this method allows an automatic and natural construction of reduced dimensionality models of complex polyatomic systems.
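The selection criterion described above can be sketched in a few lines: rank the normal modes by their maximum displacement along the trajectory relative to the Gaussian width associated with each mode, and keep only the modes that move significantly. The toy trajectory, frequencies, and threshold below are illustrative assumptions, not the published algorithm or its parameters.

```python
import numpy as np

# Sketch of mode selection: a mode is "important" if its maximum
# displacement along the trajectory exceeds the associated Gaussian
# width. All numbers here are synthetic.
rng = np.random.default_rng(0)

n_modes, n_steps = 8, 200
omega = np.linspace(0.002, 0.02, n_modes)   # mode frequencies (a.u.)
width = 1.0 / np.sqrt(omega)                # ground-state Gaussian widths

# Toy trajectory: only modes 0 and 3 are strongly displaced,
# the rest just jitter with small amplitude
q = 0.05 * rng.standard_normal((n_modes, n_steps))
q[0] += 3.0 * width[0] * np.sin(np.linspace(0, 6, n_steps))
q[3] += 2.5 * width[3] * np.sin(np.linspace(0, 4, n_steps))

ratio = np.abs(q).max(axis=1) / width       # max displacement / width
active = np.where(ratio > 1.0)[0]           # modes kept in the reduced model
```

Such a ranking immediately yields reduced-dimensionality models: only the `active` modes are propagated, the rest are frozen.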

Figure 13 illustrates the usefulness of this approach for the emission spectrum of pentathiophene by comparing the full, 105-dimensional result with the results of two automatically generated models of reduced dimensionality. It is clear from the figure that performing the dynamical simulation with only four active modes, corresponding to an inter-ring stretch and a ring squeeze, yields a good qualitative description of the positions and intensities of all peaks in the 105-dimensional calculation. Moreover, including only four additional modes, attributed to chain and C-H bond deformations, captures most of the peak broadening and brings the calculated spectrum into almost perfect agreement with the result of the full-dimensional simulation, which, as we have seen in Fig. 12, fully describes the experimental emission spectrum. Thus, the on-the-fly ab initio TGA, combined with the proposed scheme for estimating the importance of normal modes in the dynamics, provides a powerful tool for the calculation and analysis of electronic spectra in large molecular systems. Moreover, a single TGA trajectory can be used to factorize the original system into several independent, lower-dimensional systems, which can then be treated by more accurate or even exact quantum dynamics methods.

FIG. 13.

Emission spectrum of pentathiophene: comparison of the spectrum obtained from the full-dimensional on-the-fly ab initio thawed Gaussian approximation with reduced dimensionality models generated automatically from a single thawed Gaussian trajectory. Adapted with permission from J. Chem. Phys. 140, 244114 (2014). Copyright 2014 AIP Publishing LLC.


1. Computational infrared spectroscopy for H-bonded systems

Infrared spectroscopy is a powerful method for characterizing the dynamics of molecules in the gas and condensed phases. For H-bonded systems, the hydrogen stretch is a particularly sensitive degree of freedom. The energetics and dynamics of proton and hydrogen transfer are of fundamental importance in biology and chemistry.249–251 Such processes are primarily governed by the height of the barrier for proton/hydrogen transfer, which is, however, difficult to determine reliably through direct experimentation. Possibilities include high-resolution spectroscopy, where the splitting of spectral lines can provide information about the barrier height,252 or nuclear magnetic resonance (NMR) experiments.253,254 On the other hand, kinetic isotope effects or shifts of vibrational bands in the infrared alone cannot be used directly to determine the energetics of proton transfer.

Large-amplitude motion (including proton or hydrogen transfer) along the X–H* stretching coordinate in systems containing XH*Y motifs, where X and Y are the donor and acceptor atoms, respectively, and H* is the transferring hydrogen, can lead to characteristically broadened features in vibrational spectra.255–257 This broadening reflects both strong coupling between the X–H* stretch and other framework modes of the environment and structural heterogeneity.199 The broadening persists even at low temperatures, and cooling the species does not lead to sharper bands.258

As an example, the infrared and near-infrared spectra of acetylacetone200 were investigated both computationally and experimentally. The fundamental OH-stretching bands were red-shifted relative to those of usual OH-stretching transitions. Using a suitably morphed MMPT force field, the spectra computed from atomistic simulations of acetylacetone in the gas phase can be matched with the experimentally determined spectra. The OH-stretching (or proton transfer) band was found to be broad and weak. Furthermore, the wavenumber of this band depends sensitively on the barrier height for proton transfer (see Fig. 14). From a comparison of computed and experimentally measured infrared spectra, a barrier height of around 2.5 kcal/mol was inferred, which compares favorably with the 3.2 kcal/mol obtained from CCSD(T)/cc-pVTZ calculations.200
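The idea of "morphing" can be sketched on a one-dimensional model: rescale a reference proton-transfer potential so that its barrier matches a target height, then compare the resulting spectrum with experiment. The quartic double well and the specific numbers below are illustrative assumptions; the actual MMPT morphing acts on a full multidimensional surface.

```python
import numpy as np

# Sketch of potential morphing: scale a model double-well potential so
# that its barrier matches a target value (here the spectroscopically
# inferred 2.5 kcal/mol, starting from a 3.2 kcal/mol reference).
def double_well(x, barrier):
    # V(x) = barrier * ((x/x0)^2 - 1)^2 has minima at +/- x0
    # and a barrier of exactly `barrier` at x = 0
    x0 = 0.5
    return barrier * ((x / x0) ** 2 - 1.0) ** 2

x = np.linspace(-1.0, 1.0, 2001)
V_ref = double_well(x, barrier=3.2)   # "ab initio" reference (kcal/mol)

scale = 2.5 / 3.2                      # morphing factor
V_morphed = scale * V_ref              # simplest (uniform) morph

barrier_height = V_morphed[np.argmin(np.abs(x))] - V_morphed.min()
```

A uniform scaling is the crudest morph; in practice, the transformation can also shift and stretch the coordinate to preserve well positions and curvatures.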

FIG. 14.

The empirical correlation between morphed potential energy surface and bond stretching frequencies.


MMPT with suitably morphed potential energy surfaces was also employed to analyze the gas-phase infrared spectra of formic acid dimer (FAD)198 and of protonated oxalate.259 For FAD, a combination of a symmetric double minimum (SDM) and a symmetric single minimum (SSM) surface yields a realistic description of the double proton transfer potential energy surface (see Fig. 15).198 Conversely, for protonated oxalate, the two resonance forms of the molecule can be parametrized such that the change in bonding character of the CO subunit (from single- to double-bonded) upon proton transfer is incorporated into the energy function.259 For both systems (FAD and oxalate), comparison with experimentally determined infrared spectra in the region of the proton transfer band yields accurate barrier heights of 57 kcal/mol and 4.2 kcal/mol, respectively. Hence, estimation of the proton transfer barrier height from a combined computational/infrared spectroscopy approach is likely to be a generic way forward for better characterizing this important quantity for a range of donor-acceptor pairs.
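Mixing a double-minimum and a single-minimum surface, as done for FAD, can be illustrated on one-dimensional cuts: a weighted combination of the two forms tunes the effective barrier continuously between the two limits. The functional forms, barrier values, and mixing weight below are illustrative, not the published FAD parametrization.

```python
import numpy as np

# Sketch: mix a symmetric double-minimum (SDM) and a symmetric
# single-minimum (SSM) 1D cut to tune the effective transfer barrier.
x = np.linspace(-1.0, 1.0, 2001)

V_sdm = 5.0 * (x**2 - 0.25) ** 2 / 0.0625   # double well, 5 kcal/mol barrier
V_ssm = 10.0 * x**2                          # harmonic single well

w = 0.7                                      # mixing weight (hypothetical)
V_mix = w * V_sdm + (1.0 - w) * V_ssm        # mixed surface

# Effective barrier: value at the top (x = 0) minus the well minimum
barrier = V_mix[np.argmin(np.abs(x))] - V_mix.min()
```

Because the SSM admixture raises the wells more than the top, the mixed barrier lies strictly between zero and the pure SDM barrier, so the weight acts as a single tuning knob against experiment.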

FIG. 15.

Mixed two-dimensional potential energy surfaces for double proton transfer (DPT) in formic acid dimer. The reference data from MP2 calculations are in red and the empirical potential in black. The right hand panel illustrates that the empirical surface is of very high quality.


2. Multipolar force fields applications in the condensed phase

In the following, a number of applications of multipolar force fields to spectroscopic and dynamical properties in the condensed phase are described.

a. CO in Myoglobin

The use of MTP electrostatics has been of particular relevance in spectroscopic applications, specifically when quantitative comparisons with experiments and their interpretation were of interest. One notable example is the infrared spectrum of photodissociated carbon monoxide (CO) in myoglobin. The strong [43 MV/cm (Ref. 260)] inhomogeneous electric field in the heme pocket leads to characteristic shifting and splitting of the spectral lines due to the Stark effect. Several attempts were made261–263 to correctly interpret the experimentally known infrared spectrum264 using computational methods. Although some of them were capable of correctly modelling the width of the experimentally determined spectrum, they were usually unable to reproduce the characteristic splitting of the CO spectrum (about 10 cm⁻¹). The first successful attempt used a fluctuating point-charge model based on an earlier three-point model for CO.188,265 This was later generalized to a rigorous fluctuating MTP model, which reproduced most features of the experimentally known spectrum.189 In particular, the splitting, width, and relative intensities of the computed spectrum agreed favorably with the experimentally known properties. Based on this agreement, it was then also possible to assign the two spectroscopic signatures to distinct conformational substates. These assignments agreed with previous, more heuristic, attempts based on mutations in the active site and on mixed quantum mechanics/molecular mechanics simulations of a few representative snapshots from molecular dynamics simulations.266,267

b. 1D- and 2D-infrared spectroscopy of CN⁻

The solution-phase spectroscopy of the cyanide anion CN⁻ (see Fig. 16) is another benchmark for atomistic simulations, as its dynamics has been studied extensively by experiments.268–270 The solution dynamics of small solute molecules provides detailed information on the coupling between intra- and intermolecular degrees of freedom. 2D-infrared spectroscopy has been shown to be sensitive to the solvent dynamics on the picosecond time scale, which provides a benchmark for validating atomistic simulations against detailed experimental data.271

FIG. 16.

(a) Cartoon representation of cyanide in water. (b) Time evolution of the 2D-infrared tilt angle, α. The red, magenta, and blue curves correspond to PC, MTP, and experimental results, respectively. See Ref. 221 for more details.


The 1D- and 2D-infrared spectrum of a hydrated probe can be determined from the frequency trajectory ω(t), from which the frequency-fluctuation correlation function C(t) = ⟨δω(0)δω(t)⟩ can be computed. Here, δω(t) = ω(t) − ⟨ω⟩, where ⟨ω⟩ is the average frequency of the oscillator along the trajectory. The correlation function contains time scales which are representative of the surrounding solvent motion and can be related to the experimentally measured spectral features. Those, in turn, are characterized by a tilt angle. Hence, following the frequency-fluctuation correlation function is directly related to following the spectral changes as a function of time in a 2D-infrared experiment.271 Within a range of justifiable (and commonly used) force fields, one of the major experimental observables, the tilt angle α as a function of the waiting time, can be realistically modelled.221 Most importantly, an MTP model for water and cyanide, combined with anharmonic stretching and bending potentials272 and slightly modified van der Waals ranges for CN⁻, yields very favorable agreement with 2D-infrared experiments.270 Conversely, a PC model misses almost all of the time dependence of the signal (see Fig. 16). Hence, MTP models provide a robust and realistic parametrization for dynamical problems, including vibrational relaxation and 2D-infrared spectroscopy.
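The definitions above translate directly into a short numerical procedure: subtract the mean from the frequency trajectory and average lagged products. The sketch below applies it to a synthetic, exponentially correlated frequency trajectory (an assumption standing in for a simulation output) and recovers the built-in correlation time.

```python
import numpy as np

# Frequency-fluctuation correlation function C(t) = <dw(0) dw(t)>
# from a frequency trajectory w(t). The trajectory here is a synthetic
# Ornstein-Uhlenbeck-like process with a known 1 ps correlation time.
rng = np.random.default_rng(1)

dt, n = 0.01, 200000          # time step (ps) and number of samples
tau = 1.0                     # built-in correlation time (ps)
f = np.exp(-dt / tau)
noise = rng.standard_normal(n)
dw = np.empty(n)
dw[0] = noise[0]
for i in range(1, n):         # exponentially correlated fluctuations
    dw[i] = f * dw[i - 1] + np.sqrt(1.0 - f**2) * noise[i]

def ffcf(dw, max_lag):
    """C(k*dt) = <dw(0) dw(k*dt)> estimated by time averaging."""
    dw = dw - dw.mean()
    return np.array([np.mean(dw[:len(dw) - k] * dw[k:])
                     for k in range(max_lag)])

C = ffcf(dw, max_lag=300)
# Crude estimate of the decay time: lag where C drops to 1/e of C(0)
tau_est = dt * np.argmin(np.abs(C / C[0] - np.exp(-1.0)))
```

The same `ffcf` routine can be applied unchanged to a frequency trajectory extracted from an actual molecular dynamics run.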

It is also worth mentioning that an efficient and spectroscopically accurate force field for sampling the conformations obviates the need for specifically designed frequency maps in the computation of 2D-infrared spectra. Such frequency maps are a convenient means of determining 2D-infrared spectra from conventional molecular dynamics simulations.273,274 However, their transferability from one system to a chemically related one is not guaranteed, and they do not allow a consistent analysis of a physico-chemical process, because conformational sampling and analysis ("scoring") of the simulations then employ different energy functions. In other words, only the use of a single force field for both conformational sampling and post-processing makes it possible to unambiguously trace back potential shortcomings of the energy function (e.g., CN⁻ in aqueous solution221,272).

c. Protein-ligand binding

The advantage of MTP over PC electrostatics coupled to a non-polarizable force field becomes evident when calculating the free energy of binding of a tetrabromobenzotriazole ligand to the target protein casein kinase 2:275 PC-only electrostatics have been shown to destabilize the complex,276 while the relative binding free energy between the PC and MTP descriptions yielded a 3.8 kcal/mol increased stability, although no absolute free energy calculation was reported.20

A recent application of refined electrostatic interactions in atomistic simulations concerned the ab initio determination of protein-ligand binding poses from computation combined with linear infrared experiments. Stark shifts can be used to study the structure, electrostatics, and dynamics of ligands and spectroscopic probes in protein active sites.277–282 The dynamics and spectroscopic response of chemical bonds to changes in the local electric fields can be accurately measured through 1-dimensional spectroscopy. However, relating spectroscopic information to changes in the structure of the environment surrounding the spectroscopic probe is not straightforward because simultaneous observation of spectroscopy and structure is still difficult.283 Atomistic simulations using validated force fields189 provide a valuable complement. The preferred use of physics-based empirical force fields21,193,194,222,284 over ab initio molecular dynamics simulations derives from the fact that comprehensive conformational sampling for a protein-ligand complex in solution is currently not possible due to the computational expense of ab initio molecular dynamics.

The nitrile group (-CN) is a meaningful spectroscopic label for probing the local structure, electric field, and solvent dynamics involving proteins and biological molecules.278,279,285–288 Previously used nitrile probes for proteins include CN-labelled phenylalanine289–291 and the nitrile-containing IDD-type inhibitor of human aldose reductase.278,292 Benzonitrile (PhCN) is another potentially useful probe for determining the local electrostatic environment, as it fulfills three important criteria: the -CN stretching mode at about 2200 cm⁻¹ (a) absorbs in a frequency range (between 1800 cm⁻¹ and 2800 cm⁻¹) in which proteins containing only naturally occurring amino acids have no vibrational spectral response (except for the -SH group in cysteine), (b) is to a good approximation a local mode (i.e., uncoupled from other framework modes), and (c) the dipole moment of PhCN is to a large extent that of the nitrile group itself. On the other hand, the nitrile group may pose additional challenges in concrete experiments due to its low extinction coefficient.293 Previous work on PhCN in water294 provides an ideal benchmark for validating the computational methods used here.

Using a fluctuating point-charge model for PhCN fitted to the molecular electrostatic potential, the dynamics of PhCN in the benzene-binding site of lysozyme was investigated.295 The model was validated against 1D- and 2D-infrared experiments of PhCN in water. Using the wild-type protein and two mutants (L99A and L99G), which provide different electrostatic environments in the active site, the simulations find that the peak frequency of the -CN stretch in the linear absorption spectrum shifts. The shift approximately correlates with the relative binding free energy: the stronger the binding, the larger the red shift. This is a useful basis for the proposed strategy of locating ligand-binding sites through a combination of experiment and computation.278 The long-time-scale decay constant of the frequency-fluctuation correlation function is largest (2.0 ps) for the L99A mutant, to which PhCN binds most strongly. Given that in state-of-the-art experiments a relaxation time can be determined to within 40%, the wild type and L99G show a similar τ2, and the binding of PhCN to these two protein variants is weaker. Hence, strong protein-ligand binding correlates with long decay times in the frequency-fluctuation correlation function. Finally, a pronounced static inhomogeneous component (Δs² = 0.2 ps⁻¹) is found in the correlation function, which is absent for PhCN in water. However, the magnitude of Δs² does not appear to be related to the binding strength.

d. Vibrational Relaxation of Solvated CN⁻

The exchange of energy between different degrees of freedom in a condensed-phase system is of fundamental importance. Energy flow is required for processes ranging from chemical reactivity to signalling in biological systems. Direct determination of energy migration pathways in molecular systems from experiments alone is very challenging. Hence, atomistic simulations with dedicated force fields are a powerful complement.

Atomistic simulations have been shown to give energy relaxation times in good agreement with experiments.272,296 It has been found that vibrational energy relaxation is particularly sensitive to the level at which the intermolecular interactions are described and that models beyond conventional point charges are required for quantitative computational work. This provides the basis for more detailed investigations of the spectroscopy of CN⁻ in D2O, specifically whether a single parametrization of the intermolecular interactions is capable of quantitatively describing a number of distinct experimental observables.

Infrared experiments were used to determine the T1 relaxation times of the v = 1 state of CN⁻ in H2O and D2O.269,297 In contrast to polyatomic molecules such as N3⁻, energy relaxation in diatomics is governed by intermolecular interactions, and the coupling between solvent and solute can be investigated directly. It has been suggested297 and later confirmed269,296 that Coulomb interactions are responsible for the vibrational relaxation of polar molecules in coordinating solvents, such as water. Therefore, atomistic simulations with accurate MTP electrostatics are expected to provide detailed insight into energy migration pathways. Many previous simulations were carried out with idealized interaction potentials; for example, rigid water models are unable to reproduce energy flow into the water's internal degrees of freedom.296

Simulations with fully flexible force fields and accurate representations of the nonbonded interactions of CN⁻ and H2O provide quantitative agreement with experimentally determined relaxation times.272 With a rigid water model, energy relaxation from the vibrationally excited chromophore (CN⁻) into the surrounding solvent is slower by more than an order of magnitude. Hence, under the given circumstances (existence of mechanical resonances between chromophore vibrations and internal solvent degrees of freedom) and for this type of study, it is mandatory that atomistic simulations be carried out with fully flexible monomers. The simulations also show that the calculated T1 times depend sensitively on the force field parametrization, in particular on the Lennard-Jones ranges. Increasing the Lennard-Jones ranges by up to 7.5% leads to relaxation times longer by a factor of 4 to 5. This can be understood qualitatively by noting that for larger Lennard-Jones ranges the average distance between the solvent water molecules and CN⁻ is larger, which, in turn, leads to reduced electrostatic interactions and hence less efficient vibrational energy transfer.
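The geometric argument in the last sentence can be made explicit with the Lennard-Jones pair potential itself: the minimum of V(r) sits at r_min = 2^(1/6) σ, so scaling the range σ by 7.5% pushes the most probable solvent-solute separation outward by the same fraction. The parameters below are illustrative, not the CN⁻/water force field values.

```python
import numpy as np

# Sketch: how scaling the Lennard-Jones range sigma shifts the
# position of the potential minimum (and hence the average
# solvent-solute distance).
def lj(r, eps, sigma):
    """Standard 12-6 Lennard-Jones pair potential."""
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

eps, sigma = 0.2, 3.2            # kcal/mol, Angstrom (hypothetical)
r = np.linspace(2.8, 6.0, 20000)

r_min = r[np.argmin(lj(r, eps, sigma))]                  # original minimum
r_min_scaled = r[np.argmin(lj(r, eps, 1.075 * sigma))]   # sigma + 7.5%

shift = r_min_scaled / r_min - 1.0   # fractional increase in separation
```

Since the Coulomb coupling that drives the relaxation falls off with distance, this outward shift directly rationalizes the slower energy transfer observed with the larger ranges.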

In summary, the work on hydrated CN⁻ highlights that with one and the same force field parametrization based on MTP electrostatics it is possible to accurately describe sub-ps (2D-infrared), ps (2D-infrared), 10-ps (vibrational relaxation), infrared, and thermodynamic observables.221,272,298 Therefore, physics-based force fields provide the improvement and level of accuracy required for molecular-level insight into condensed-phase energetics and dynamics.

We have presented several approaches for describing ultrafast dynamics induced by the interaction of molecules with light. Rather than providing a comprehensive review of one area, we have chosen several representative examples of methodologies and applications from the fields of quantum, semiclassical, and classical dynamics. Ultimately, one would like to treat both electrons and nuclei quantum mechanically, yet, as we have seen, many interesting phenomena can be described accurately with mixed quantum-classical (as in the trajectory surface hopping implementation of the local control theory in Sec. III A 2), semiclassical (as in the thawed Gaussian approximation evaluation of various types of electronic spectra in Secs. III B 2 and III B 3), or classical dynamics (as in the 1D- and 2D-infrared spectroscopy of CN⁻ in Sec. III C 2). Where nuclear quantum effects are important, one should of course use quantum or semiclassical approaches, both of which are capable of including nuclear quantum coherence, zero-point energy, and sometimes also tunneling effects. Regarding the treatment of electronic structure, we have presented both on-the-fly ab initio dynamics (quantum Bohmian dynamics in Sec. III A 1, mixed quantum-classical trajectory surface hopping in Sec. III A 2, and the semiclassical thawed Gaussian approximation in Secs. III B 2 and III B 3) and classical dynamics based on high-quality parametrized reactive and multipolar force fields (Secs. III C 1 and III C 2). On the one hand, the latter, highly efficient analytical force fields will clearly always be in demand for applications to systems with the largest numbers of atoms. On the other hand, it appears that on-the-fly ab initio dynamics will become increasingly practical not only for classical but also for semiclassical and trajectory-based quantum molecular dynamics.

The authors acknowledge financial support from the Swiss National Science Foundation through the Swiss National Center of Competence in Research Molecular Ultrafast Science and Technology (NCCR MUST) Network. M.M. would like to thank the University of Basel, while J.V. and U.R. are thankful to EPFL for support. J.V. also acknowledges financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 683069–MOLEQULE).

ARMD: adiabatic reactive molecular dynamics
CASPT2: complete active space second-order perturbation theory
CCSD(T): coupled cluster with single, double, and perturbative triple excitations
CDR: cellular dephasing representation
CDRP: cellular dephasing representation with a prefactor
DFT: density functional theory
DPT: double proton transfer
DR: dephasing representation
DRP: dephasing representation with a prefactor
EVB: empirical valence bond
FAD: formic acid dimer
GDR: Gaussian dephasing representation
LCT: local control theory
LR-TDDFT: linear-response time-dependent density functional theory
MD: molecular dynamics
MTP: multipole
MP2: second-order Møller–Plesset perturbation theory
NABDY: nonadiabatic Bohmian dynamics
OTF-AI-TGA: on-the-fly ab initio thawed Gaussian approximation
PC: point charges
PhCN: benzonitrile
SDM: symmetric double minimum
SPF: single-particle function
SSM: symmetric single minimum
TDDFT: time-dependent density functional theory
TGA: thawed Gaussian approximation
TSH: trajectory surface hopping
4-HA: 4-hydroxyacridine

1.
A. H.
Zewail
,
J. Phys. Chem. A
104
,
5660
(
2000
).
2.
P.
Kukura
,
D. W.
McCamant
, and
R. A.
Mathies
,
Annu. Rev. Phys. Chem.
58
,
461
(
2007
).
3.
F.
Krausz
and
M.
Ivanov
,
Rev. Mod. Phys.
81
,
163
(
2009
).
4.
L. M.
Malard
,
M. A.
Pimenta
,
G.
Dresselhaus
, and
M. S.
Dresselhaus
,
Phys. Rep.
473
,
51
(
2009
).
5.
H. J.
Bakker
and
J. L.
Skinner
,
Chem. Rev.
110
,
1498
(
2010
).
6.
F.
Calegari
,
G.
Sansone
,
S.
Stagira
,
C.
Vozzi
, and
M.
Nisoli
,
J. Phys. B
49
,
062001
(
2016
).
7.
G.
Stock
and
W.
Domcke
,
Phys. Rev. A
45
,
3032
(
1992
).
8.
S.
Mukamel
,
Principles of Nonlinear Optical Spectroscopy
(
University Press
,
Oxford, UK
,
1995
).
9.
D. J.
Tannor
,
Introduction to Quantum Mechanics a Time-Dependent Perspective
(
University Science Books
,
Sausalito, CA
,
2007
).
10.
H.-D.
Meyer
,
U.
Manthe
, and
L. S.
Cederbaum
,
Chem. Phys. Lett.
165
,
73
(
1990
).
11.
M. H.
Beck
,
A.
Jäckle
,
G.
Worth
, and
H.-D.
Meyer
,
Phys. Rep.
324
,
1
(
2000
).
12.
C. L.
Lopreore
and
R. E.
Wyatt
,
Phys. Rev. Lett.
82
,
5190
(
1999
).
13.
R. E.
Wyatt
,
Quantum Dynamics with Trajectories: Introduction to Quantum Hydrodynamics
, Vol. 28 (
Springer, Interdisciplinary Applied Mathematics
,
2005
).
14.
R.
Kosloff
,
A. D.
Hammerich
, and
D.
Tannor
,
Phys. Rev. Lett.
69
,
2172
(
1992
).
15.
V.
Engel
,
C.
Meier
, and
D. J.
Tannor
,
Adv. Chem. Phys.
141
,
29
(
2009
).
16.
E. J.
Heller
,
J. Chem. Phys.
62
,
1544
(
1975
).
17.
S.
Mukamel
,
J. Chem. Phys.
77
,
173
(
1982
).
18.
J.
Vaníček
,
Phys. Rev. E
70
,
055201
(
2004
).
19.
S.
Lammers
,
S.
Lutz
, and
M.
Meuwly
,
J. Comput. Chem.
29
,
1048
(
2008
).
20.
T.
Bereau
,
C.
Kramer
, and
M.
Meuwly
,
J. Chem. Theory Comput.
9
,
5450
(
2013
).
21.
F.
Hédin
,
K.
El Hage
, and
M.
Meuwly
,
J. Chem. Inf. Mod.
56
,
1479
(
2016
).
22.
R.
Car
and
M.
Parrinello
,
Phys. Rev. Lett.
55
,
2471
(
1985
).
23.
M.
Ben-Nun
and
T. J.
Martínez
,
Adv. Chem. Phys.
121
,
439
(
2002
).
24.
D.
Marx
and
J.
Hutter
,
Ab Initio Molecular Dynamics: Basic Theory and Advanced Methods
, 1st ed. (
Cambridge University Press
,
Cambridge
,
2009
).
25.
T.
Nagy
,
J.
Yosa Reyes
, and
M.
Meuwly
,
J. Chem. Theory Comput.
10
,
1366
(
2014
).
26.
J.
Yosa Reyes
,
T.
Nagy
, and
M.
Meuwly
,
Phys. Chem. Chem. Phys.
16
,
18533
(
2014
).
27.
I. S.
Ufimtsev
and
T. J.
Martinez
,
J. Chem. Theory Comput.
4
,
222
(
2008
).
28.
C. M.
Isborn
,
N.
Luehr
,
I. S.
Ufimtsev
, and
T. J.
Martinez
,
J. Chem. Theory Comput.
7
,
1814
(
2011
).
29.
N.
Luehr
,
I. S.
Ufimtsev
, and
T. J.
Martinez
,
J. Chem. Theory Comput.
7
,
949
(
2011
).
30.
N.
Luehr
,
A. G. B.
Jin
, and
T. J.
Martinez
,
J. Chem. Theory Comput.
11
,
4536
(
2015
).
31.
N.
Snyder
,
B. F. E.
Curchod
, and
T. J.
Martinez
,
J. Phys. Chem. Lett.
7
,
2444
(
2016
).
32.
M. P.
Bircher
,
E.
Liberatore
,
N.
Browning
,
S.
Brickel
,
C.
Hofmann
,
A.
Patoz
,
O. T.
Unke
,
T.
Zimmermann
,
M.
Chergui
,
P.
Hamm
,
U.
Keller
,
M.
Meuwly
,
H.-J.
Woerner
,
J.
Vaníček
, and
U.
Rothlisberger
,
Struct. Dyn.
4
,
061510
(
2017
).
33.
W.
Domcke
and
G.
Stock
,
Adv. Chem. Phys.
100
,
1
(
1997
).
34.
S.
Mukamel
,
Annu. Rev. Phys. Chem.
51
,
691
(
2000
).
35.
M.
Baer
,
Beyond Born-Oppenheimer: Electronic Nonadiabatic Coupling Terms and Conical Intersections
, 1st ed. (
Wiley
,
2006
).
36.
B. G.
Levine
and
T. J.
Martinez
,
Annu. Rev. Phys. Chem.
58
,
613
(
2007
).
37.
W.
Domcke
and
D. R.
Yarkony
,
Annu. Rev. Phys. Chem.
63
,
325
(
2012
).
38.
H.
Nakamura
,
Nonadiabatic Transition: Concepts, Basic Theories and Applications
, 2nd ed. (
World Scientific Publishing Company
,
2012
).
39. I. G. Ryabinkin, L. Joubert-Doriol, and A. F. Izmaylov, Acc. Chem. Res. 50, 1785 (2017).
40. G. C. Schatz and M. A. Ratner, Quantum Mechanics in Chemistry (Dover Publications, 2002).
41. T. Zimmermann and J. Vaníček, J. Chem. Phys. 132, 241101 (2010).
42. R. MacKenzie, M. Pineault, and L. Renaud-Desjardins, Can. J. Phys. 90, 187 (2012).
43. T. Zimmermann and J. Vaníček, J. Chem. Phys. 136, 094106 (2012).
44. T. Zimmermann and J. Vaníček, J. Chem. Phys. 137, 22A516 (2012).
45. M. Quack, J. Chem. Phys. 69, 1282 (1978).
46. M. Quack, Chem. Phys. Lett. 65, 140 (1979).
47. M. Quack and E. Sutcliffe, J. Chem. Phys. 83, 3805 (1985).
48. J. Franck and E. G. Dymond, Trans. Faraday Soc. 21, 536 (1926).
49. E. U. Condon, Proc. Natl. Acad. Sci. U.S.A. 13, 462 (1927).
50. E. U. Condon, Phys. Rev. 32, 858 (1928).
51. L. Seidner, G. Stock, and W. Domcke, Chem. Phys. Lett. 228, 665 (1994).
52. L. Seidner, G. Stock, and W. Domcke, J. Chem. Phys. 103, 3998 (1995).
53. B. Wolfseder, L. Seidner, G. Stock, and W. Domcke, Chem. Phys. 217, 275 (1997).
54. M. F. Gelin, D. Egorova, and W. Domcke, J. Chem. Phys. 131, 124505 (2009).
55. M. F. Gelin, D. Egorova, and W. Domcke, J. Phys. Chem. Lett. 2, 114 (2011).
56. M. F. Gelin, D. Egorova, and W. Domcke, J. Phys. Chem. B 115, 5648 (2011).
57. M. F. Gelin, D. Egorova, and W. Domcke, Phys. Chem. Chem. Phys. 15, 8119 (2013).
58. S. Mukamel, Annu. Rev. Phys. Chem. 41, 647 (1990).
59. W. T. Pollard, S.-Y. Lee, and R. A. Mathies, J. Chem. Phys. 92, 4012 (1990).
60. W. T. Pollard and R. A. Mathies, Annu. Rev. Phys. Chem. 43, 497 (1992).
61. N. E. Shemetulskis and R. F. Loring, J. Chem. Phys. 95, 4756 (1991).
62. N. E. Shemetulskis and R. F. Loring, J. Chem. Phys. 97, 1217 (1992).
63. N. Henriksen and V. Engel, Int. Rev. Phys. Chem. 20, 93 (2001).
64. R. Kosloff, J. Phys. Chem. 92, 2087 (1988).
65. C. Leforestier, R. H. Bisseling, C. Cerjan, M. D. Feit, R. Friesner, A. Guldberg, A. Hammerich, G. Jolicard, W. Karrlein, H.-D. Meyer, N. Lipkin, O. Roncero, and R. Kosloff, J. Comput. Phys. 94, 59 (1991).
66. C. J. Williams, J. Qian, and D. J. Tannor, J. Chem. Phys. 95, 1721 (1991).
67. R. Kosloff, Annu. Rev. Phys. Chem. 45, 145 (1994).
68. N. Balakrishnan, C. Kalyanaraman, and N. Sathyamurthy, Phys. Rep. 280, 79 (1997).
69. N. Makri, Annu. Rev. Phys. Chem. 50, 167 (1999).
70. G. Nyman and H.-G. Yu, Rep. Prog. Phys. 63, 1001 (2000).
71. K. Kormann, S. Holmgren, and H. O. Karlsson, J. Chem. Phys. 128, 184101 (2008).
72. H. Tal-Ezer and R. Kosloff, J. Chem. Phys. 81, 3967 (1984).
73. T. J. Park and J. C. Light, J. Chem. Phys. 85, 5870 (1986).
74. U. Manthe, H. Köppel, and L. S. Cederbaum, J. Chem. Phys. 95, 1708 (1991).
75. A. D. Hammerich, R. Kosloff, and M. A. Ratner, J. Chem. Phys. 97, 6410 (1992).
76. A. Askar and A. S. Cakmak, J. Chem. Phys. 68, 2794 (1978).
77. R. Kosloff and D. Kosloff, J. Comput. Phys. 52, 35 (1983).
78. R. Kosloff and D. Kosloff, J. Chem. Phys. 79, 1823 (1983).
79. G. Stock, C. Woywod, and W. Domcke, Chem. Phys. Lett. 200, 163 (1992).
80. L. Seidner and W. Domcke, Chem. Phys. 186, 27 (1994).
81. M. D. Feit, J. A. Fleck, Jr., and A. Steiger, J. Comput. Phys. 47, 412 (1982).
82. M. D. Feit and J. A. Fleck, Jr., J. Chem. Phys. 78, 301 (1983).
83. H. De Raedt, Comput. Phys. Rep. 7, 1 (1987).
84. H. Yoshida, Phys. Lett. A 150, 262 (1990).
85. A. D. Bandrauk and H. Shen, Chem. Phys. Lett. 176, 428 (1991).
86. M. Wehrle, M. Šulc, and J. Vaníček, Chimia 65, 334 (2011).
87. S. A. Chin and C. R. Chen, J. Chem. Phys. 117, 1409 (2002).
88. W. Magnus, Commun. Pure Appl. Math. 7, 649 (1954).
89. P. Pechukas and J. C. Light, J. Chem. Phys. 44, 3897 (1966).
90. W. R. Salzman, J. Chem. Phys. 85, 4605 (1986).
91. M. Hochbruck and C. Lubich, SIAM J. Numer. Anal. 41, 945 (2003).
92. S. Blanes and P. C. Moan, Appl. Numer. Math. 56, 1519 (2006).
93. S. Blanes, F. Casas, J. A. Oteo, and J. Ros, Phys. Rep. 470, 151 (2009).
94. A. Patoz and J. Vaníček, "Geometric integrators for the nonadiabatic dynamics induced by the interaction with the electromagnetic field: 2. Split-operator and Magnus integrators of arbitrary order" (unpublished).
95. A. Patoz, "Geometric integrators for nonadiabatic molecular quantum dynamics induced by the interaction with the electromagnetic field," Ph.D. thesis (École Polytechnique Fédérale de Lausanne, 2017).
96. J. Kucar, H.-D. Meyer, and L. Cederbaum, Chem. Phys. Lett. 140, 525 (1987).
97. H.-D. Meyer, U. Manthe, and L. S. Cederbaum, in Numerical Grid Methods and their Application to Schrödinger's Equation, edited by C. Cerjan (Kluwer Academic Publishers, Dordrecht, 1993), pp. 141–152.
98. U. Manthe, H.-D. Meyer, and L. S. Cederbaum, J. Chem. Phys. 97, 3199 (1992).
99. U. Manthe, H.-D. Meyer, and L. Cederbaum, J. Chem. Phys. 97, 9062 (1992).
100. P. A. M. Dirac, Math. Proc. Cambridge Philos. Soc. 26, 376 (1930).
101. J. Frenkel, Wave Mechanics: Advanced General Principles (Clarendon Press, Oxford, 1934).
102. H.-D. Meyer, F. Gatti, and G. A. Worth, Multidimensional Quantum Dynamics (Wiley-VCH, Weinheim, 2009).
103. G. Worth, H.-D. Meyer, H. Köppel, L. Cederbaum, and I. Burghardt, Int. Rev. Phys. Chem. 27, 569 (2008).
104. G. Capano, T. J. Penfold, U. Roethlisberger, and I. Tavernelli, Chimia 68, 227 (2014).
105. G. Capano, M. Chergui, U. Rothlisberger, I. Tavernelli, and T. J. Penfold, J. Phys. Chem. A 118, 9861 (2014).
106. G. Capano, C. Milne, M. Chergui, U. Rothlisberger, I. Tavernelli, and T. Penfold, J. Phys. B 48, 214001 (2015).
107. D. Bohm, Phys. Rev. 85, 166 (1952).
108. D. Bohm, Phys. Rev. 85, 180 (1952).
109. O. T. Unke and M. Meuwly, Chem. Phys. Lett. 639, 52 (2015).
110. R. E. Wyatt, C. L. Lopreore, and G. Parlant, J. Chem. Phys. 114, 5113 (2001).
111. C. L. Lopreore and R. E. Wyatt, J. Chem. Phys. 116, 1228 (2002).
112. B. Poirier and G. Parlant, J. Phys. Chem. A 111, 10400 (2007).
113. N. Zamstein and D. J. Tannor, J. Chem. Phys. 137, 22A517 (2012).
114. N. Zamstein and D. J. Tannor, J. Chem. Phys. 137, 22A518 (2012).
115. B. F. Curchod, I. Tavernelli, and U. Rothlisberger, Phys. Chem. Chem. Phys. 13, 3231 (2011).
116. B. F. Curchod, U. Rothlisberger, and I. Tavernelli, Chimia 66, 174 (2012).
117. B. F. Curchod, U. Rothlisberger, and I. Tavernelli, ChemPhysChem 14, 1314 (2013).
118. CPMD, see http://www.cpmd.org/ for Copyright IBM Corp 1990–2015, Copyright MPI für Festkörperforschung Stuttgart, 1997–2001.
119. A. H. Zewail, Femtochemistry: Ultrafast Dynamics of the Chemical Bond (Volumes I & II) (World Scientific, Singapore, 1994), Vol. 3.
120. M. Chergui, Femtochemistry: Ultrafast Chemical and Physical Processes in Molecular Systems. Proceedings of Femtochemistry: The Lausanne Conference (World Scientific, Singapore, 1996).
121. V. Sundström, Femtochemistry and Femtobiology: Ultrafast Reaction Dynamics at Atomic-Scale Resolution (World Scientific, River Edge, NJ, 1996).
122. A. M. Weiner, Rev. Sci. Instrum. 71, 1929 (2000).
123. M. Wollenhaupt, A. Assion, and T. Baumert, in Springer Handbook of Lasers and Optics, edited by F. Träger (Springer, New York, 2007), pp. 937–983.
124. P. Marquetand and V. Engel, J. Chem. Phys. 127, 084115 (2007).
125. P. von den Hoff, S. Thallmair, M. Kowalewski, R. Siemering, and R. de Vivie-Riedle, Phys. Chem. Chem. Phys. 14, 14460 (2012).
126. D. Geißler, P. Marquetand, J. González-Vázquez, L. González, T. Rozgonyi, and T. Weinacht, J. Phys. Chem. A 116, 11434 (2012).
127. D. J. Tannor and S. A. Rice, J. Chem. Phys. 83, 5013 (1985).
128. P. Brumer and M. Shapiro, Chem. Phys. Lett. 126, 541 (1986).
129. A. P. Peirce, M. A. Dahleh, and H. Rabitz, Phys. Rev. A 37, 4950 (1988).
130. R. S. Judson and H. Rabitz, Phys. Rev. Lett. 68, 1500 (1992).
131. G. A. Worth and C. S. Sanz, Phys. Chem. Chem. Phys. 12, 15570 (2010).
132. D. J. Tannor, R. Kosloff, and A. Bartana, Faraday Discuss. 113, 365 (1999).
133. D. Keefer, S. Thallmair, S. Matsika, and R. de Vivie-Riedle, J. Am. Chem. Soc. 139, 5061 (2017).
134. B. F. Curchod, T. J. Penfold, U. Rothlisberger, and I. Tavernelli, Phys. Rev. A 84, 042507 (2011).
135. M. F. Herman and E. Kluk, Chem. Phys. 91, 27 (1984).
136. W. H. Miller, J. Phys. Chem. A 105, 2942 (2001).
137. K. G. Kay, Annu. Rev. Phys. Chem. 56, 255 (2005).
138. J. Tatchen and E. Pollak, J. Chem. Phys. 130, 041103 (2009).
139. M. Ceotto, S. Atahan, S. Shim, G. F. Tantardini, and A. Aspuru-Guzik, Phys. Chem. Chem. Phys. 11, 3861 (2009).
140. M. Ceotto, S. Atahan, G. F. Tantardini, and A. Aspuru-Guzik, J. Chem. Phys. 130, 234113 (2009).
141. S. Y. Y. Wong, D. M. Benoit, M. Lewerenz, A. Brown, and P.-N. Roy, J. Chem. Phys. 134, 094110 (2011).
142. R. Ianconescu, J. Tatchen, and E. Pollak, J. Chem. Phys. 139, 154311 (2013).
143. S. Zhang and E. Pollak, J. Chem. Phys. 121, 3384 (2004).
144. A. L. Kaledin and W. H. Miller, J. Chem. Phys. 118, 7174 (2003).
145. N. Makri and W. H. Miller, J. Chem. Phys. 89, 2170 (1988).
146. E. J. Heller, J. Chem. Phys. 94, 2723 (1991).
147. A. R. Walton and D. E. Manolopoulos, Mol. Phys. 87, 961 (1996).
148. H. Wang, D. E. Manolopoulos, and W. H. Miller, J. Chem. Phys. 115, 6317 (2001).
149. M. S. Church, S. V. Antipov, and N. Ananth, J. Chem. Phys. 146, 234104 (2017).
150. M. Thoss, H. Wang, and W. H. Miller, J. Chem. Phys. 114, 9220 (2001).
151. S. V. Antipov, Z. Ye, and N. Ananth, J. Chem. Phys. 142, 184102 (2015).
152. M. Ceotto, G. F. Tantardini, and A. Aspuru-Guzik, J. Chem. Phys. 135, 214108 (2011).
153. M. Ceotto, Y. Zhuang, and W. L. Hase, J. Chem. Phys. 138, 054116 (2013).
154. M. Buchholz, F. Grossmann, and M. Ceotto, J. Chem. Phys. 144, 094102 (2016).
155. F. Gabas, R. Conte, and M. Ceotto, J. Chem. Theory Comput. 13, 2378 (2017).
156. E. J. Heller, Acc. Chem. Res. 14, 368 (1981).
157. T. Gorin, T. Prosen, T. H. Seligman, and M. Žnidarič, Phys. Rep. 435, 33 (2006).
158. J. Vaníček, Chimia 71, 283 (2017).
159. J. M. Rost, J. Phys. B 28, L601 (1995).
160. Z. Li, J.-Y. Fang, and C. C. Martens, J. Chem. Phys. 104, 6919 (1996).
161. S. A. Egorov, E. Rabani, and B. J. Berne, J. Chem. Phys. 108, 1407 (1998).
162. S. A. Egorov, E. Rabani, and B. J. Berne, J. Chem. Phys. 110, 5238 (1999).
163. Q. Shi and E. Geva, J. Chem. Phys. 122, 064506 (2005).
164. J. Vaníček, Phys. Rev. E 73, 046204 (2006).
165. C. Mollica and J. Vaníček, Phys. Rev. Lett. 107, 214101 (2011).
166. E. Zambrano and A. M. Ozorio de Almeida, Phys. Rev. E 84, 045201(R) (2011).
167. E. Zambrano, M. Šulc, and J. Vaníček, J. Chem. Phys. 139, 054109 (2013).
168. M. Šulc and J. Vaníček, Mol. Phys. 110, 945 (2012).
169. G. A. Worth, M. A. Robb, and I. Burghardt, Faraday Discuss. 127, 307 (2004).
170. D. V. Shalashilin, J. Chem. Phys. 132, 244111 (2010).
171. M. Šulc, H. Hernandez, T. J. Martínez, and J. Vaníček, J. Chem. Phys. 139, 034112 (2013).
172. S.-Y. Lee and E. J. Heller, J. Chem. Phys. 76, 3035 (1982).
173. M. Baranger, M. A. M. de Aguiar, F. Keck, H. J. Korsch, and B. Schellhaaß, J. Phys. A 34, 7227 (2001).
174. F. Grossmann, J. Chem. Phys. 125, 014111 (2006).
175. M. Wehrle, M. Šulc, and J. Vaníček, J. Chem. Phys. 140, 244114 (2014).
176. M. Wehrle, S. Oberli, and J. Vaníček, J. Phys. Chem. A 119, 5685 (2015).
177. B. J. Alder and T. E. Wainwright, J. Chem. Phys. 31, 459 (1959).
178. L. Verlet, Phys. Rev. 159, 98 (1967).
179. M. Karplus, R. D. Sharma, and R. N. Porter, J. Chem. Phys. 40, 2033 (1964).
180. D. G. Truhlar, A. Kuppermann, and J. T. Adams, J. Chem. Phys. 59, 395 (1973).
181. Y. Wang, B. J. Braams, J. M. Bowman, S. Carter, and D. P. Tew, J. Chem. Phys. 128, 224314 (2008).
182. T.-S. Ho and H. Rabitz, J. Chem. Phys. 104, 2584 (1996).
183. T.-S. Ho, T. Hollebeek, H. Rabitz, L. B. Harding, and G. C. Schatz, J. Chem. Phys. 105, 10472 (1996).
184. J. C. Castro-Palacio, T. Nagy, R. J. Bemish, and M. Meuwly, J. Chem. Phys. 141, 164319 (2014).
185. O. T. Unke and M. Meuwly, J. Chem. Inf. Model. 57, 1923 (2017).
186. S. Lifson and A. Warshel, J. Chem. Phys. 49, 5116 (1968).
187. M. Levitt and S. Lifson, J. Mol. Biol. 46, 269 (1969).
188. D. R. Nutt and M. Meuwly, Biophys. J. 85, 3612 (2003).
189. N. Plattner and M. Meuwly, Biophys. J. 94, 2505 (2008).
190. N. Gresh, G. A. Cisneros, T. A. Darden, and J.-P. Piquemal, J. Chem. Theory Comput. 3, 1960 (2007).
191. G. Cisneros, S. Tholander, O. Parisel, T. Darden, D. Elking, L. Perera, and J.-P. Piquemal, Int. J. Quantum Chem. 108, 1905 (2008).
192. L. V. Slipchenko and M. S. Gordon, Mol. Phys. 107, 999 (2009).
193. J. W. Ponder, C. Wu, P. Ren, V. S. Pande, J. D. Chodera, M. J. Schnieders, I. Haque, D. L. Mobley, D. S. Lambrecht, R. A. DiStasio, Jr., M. Head-Gordon, G. N. I. Clark, M. E. Johnson, and T. Head-Gordon, J. Phys. Chem. B 114, 2549 (2010).
194. P. E. M. Lopes, J. Huang, J. Shim, Y. Luo, H. Li, B. Roux, and A. D. MacKerell, Jr., J. Chem. Theory Comput. 9, 5430 (2013).
195. K. El Hage, V. Pandyarajan, N. B. Phillips, B. J. Smith, J. G. Menting, J. Whittaker, M. C. Lawrence, M. Meuwly, and M. A. Weiss, J. Biol. Chem. 291, 27023 (2016).
196. C. T. Wolke, J. A. Fournier, L. C. Dzugan, M. R. Fagiani, T. T. Odbadrakh, H. Knorke, K. D. Jordan, A. B. McCoy, K. R. Asmis, and M. A. Johnson, Science 354, 1131 (2016).
197. J. A. Fournier, C. T. Wolke, C. J. Johnson, M. A. Johnson, N. Heine, S. Gewinner, W. Schöllkopf, T. K. Esser, M. R. Fagiani, H. Knorke, and K. R. Asmis, Proc. Natl. Acad. Sci. U.S.A. 111, 18132 (2014).
198. K. Mackeprang, Z.-H. Xu, Z. Maroun, M. Meuwly, and H. G. Kjaergaard, Phys. Chem. Chem. Phys. 18, 24654 (2016).
199. C. T. Wolke, A. F. DeBlase, C. M. Leavitt, A. B. McCoy, and M. A. Johnson, J. Phys. Chem. A 119, 13018 (2015).
200. D. L. Howard, H. G. Kjaergaard, J. Huang, and M. Meuwly, J. Phys. Chem. A 119, 7980 (2015).
201. A. Warshel and R. M. Weiss, J. Am. Chem. Soc. 102, 6218 (1980).
202. U. Schmitt and G. Voth, J. Phys. Chem. B 102, 5547 (1998).
203. G. Hong, E. Rosta, and A. Warshel, J. Phys. Chem. B 110, 19570 (2006).
204. D. R. Nutt and M. Meuwly, Biophys. J. 90, 1191 (2006).
205. P. Dayal, S. A. Weyand, J. McNeish, and N. J. Mosey, Chem. Phys. Lett. 516, 263 (2011).
206. J. Danielsson and M. Meuwly, J. Chem. Theory Comput. 4, 1083 (2008).
207. S. Mishra and M. Meuwly, J. Am. Chem. Soc. 132, 2968 (2010).
208. K. Nienhaus, S. Lutz, M. Meuwly, and G. U. Nienhaus, Chem. Eur. J. 19, 3558 (2013).
209. J. Yosa and M. Meuwly, J. Phys. Chem. A 115, 14350 (2011).
210. Y. Miller and R. B. Gerber, J. Am. Chem. Soc. 128, 9594 (2006).
211. S. Lammers and M. Meuwly, J. Phys. Chem. A 111, 1638 (2007).
212. Y. Yang and M. Meuwly, J. Chem. Phys. 133, 064503 (2010).
213. M. Meuwly and J. M. Hutson, J. Chem. Phys. 110, 8338 (1999).
214. M. Deserno and C. Holm, J. Chem. Phys. 109, 7678 (1998).
215. C. Sagui and T. A. Darden, Annu. Rev. Biophys. Biomol. Struct. 28, 155 (1999).
216. A. C. Legon, Phys. Chem. Chem. Phys. 12, 7736 (2010).
217. W. L. Jorgensen and P. Schyman, J. Chem. Theory Comput. 8, 3895 (2012).
218. K. El Hage, T. Bereau, S. Jakobsen, and M. Meuwly, J. Chem. Theory Comput. 12, 3008 (2016).
219. W. L. Jorgensen, J. Chandrasekhar, J. D. Madura, R. W. Impey, and M. L. Klein, J. Chem. Phys. 79, 926 (1983).
220. M. W. Mahoney and W. L. Jorgensen, J. Chem. Phys. 112, 8910 (2000).
221. M. W. Lee, J. K. Carr, M. Göllner, P. Hamm, and M. Meuwly, J. Chem. Phys. 139, 054506 (2013).
222. F. Hedin, K. El Hage, and M. Meuwly, J. Chem. Inf. Model. 57, 102 (2017).
223. A. J. Stone, J. Chem. Theory Comput. 1, 1128 (2005).
224. D. M. Elking, L. Perera, R. Duke, T. Darden, and L. G. Pedersen, J. Comput. Chem. 31, 2702 (2010).
225. C. Kramer, P. Gedeck, and M. Meuwly, J. Comput. Chem. 33, 1673 (2012).
226. C. Kramer, T. Bereau, A. Spinn, K. R. Liedl, P. Gedeck, and M. Meuwly, J. Chem. Inf. Model. 53, 3410 (2013).
227. A. J. Stone, The Theory of Intermolecular Forces (Clarendon Press, Oxford, 1996), Vol. 32.
228. K. Vanommeslaeghe and A. D. MacKerell, J. Chem. Inf. Model. 52, 3144 (2012).
229. S. Jakobsen and F. Jensen, J. Chem. Theory Comput. 12, 1824 (2016).
230. M. Devereux, S. Raghunathan, D. G. Fedorov, and M. Meuwly, J. Chem. Theory Comput. 10, 4229 (2014).
231. O. T. Unke, M. Devereux, and M. Meuwly, J. Chem. Phys. 147, 161712 (2017).
232. I. Tavernelli, E. Tapavicza, and U. Rothlisberger, J. Chem. Phys. 130, 124107 (2009).
233. I. Tavernelli, B. F. Curchod, and U. Rothlisberger, J. Chem. Phys. 131, 196101 (2009).
234. I. Tavernelli, B. F. Curchod, A. Laktionov, and U. Rothlisberger, J. Chem. Phys. 133, 194104 (2010).
235. M. Thachuk, M. Y. Ivanov, and D. M. Wardlaw, J. Chem. Phys. 105, 4094 (1996).
236. K. Yagi and K. Takatsuka, J. Chem. Phys. 123, 224103 (2005).
237. J. C. Tully, J. Chem. Phys. 93, 1061 (1990).
238. I. Tavernelli, B. F. Curchod, and U. Rothlisberger, Phys. Rev. A 81, 052508 (2010).
239. E. Tapavicza, I. Tavernelli, and U. Rothlisberger, Phys. Rev. Lett. 98, 023001 (2007).
240. E. Tapavicza, G. D. Bellchambers, J. C. Vincent, and F. Furche, Phys. Chem. Chem. Phys. 15, 18336 (2013).
241. R. B. Singh, S. Mahanta, and N. Guchhait, J. Photochem. Photobiol. A 200, 325 (2008).
242. B. K. Paul, S. Mahanta, R. B. Singh, and N. Guchhait, J. Phys. Chem. A 114, 2618 (2010).
243. N. Troullier and J. L. Martins, Phys. Rev. B 43, 1993 (1991).
244. J. Hutter, J. Chem. Phys. 118, 3928 (2003).
245. J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
246. B. F. Curchod, T. J. Penfold, U. Rothlisberger, and I. Tavernelli, ChemPhysChem 16, 2127 (2015).
247. G. Stock, C. Woywod, W. Domcke, T. Swinney, and B. S. Hudson, J. Chem. Phys. 103, 6851 (1995).
248. W. Domcke, L. S. Cederbaum, H. Köppel, and W. von Niessen, Mol. Phys. 34, 1759 (1977).
249. W. Cleland and M. Kreevoy, Science 264, 1887 (1994).
250. A. Warshel, A. Papazyan, and P. Kollman, Science 269, 102 (1995).
251. S. Shan, S. Loh, and D. Herschlag, Science 272, 97 (1996).
252. D. Borst, J. Roscioli, D. Pratt, G. Florio, T. Zwier, A. Muller, and S. Leutwyler, Chem. Phys. 283, 341 (2002).
253. P. Frey, S. Whitt, and J. Tobin, Science 264, 1927 (1994).
254. J. Braun, M. Schlabach, B. Wehrle, M. Kocher, E. Vogel, and H. Limbach, J. Am. Chem. Soc. 116, 6593 (1994).
255. L. Bondesson, K. V. Mikkelsen, Y. Luo, P. Garberg, and H. Ågren, Spectrochim. Acta Mol. Biomol. Spectrosc. 66, 213 (2007).
256. M. Thaemer, L. De Marco, K. Ramasesha, A. Mandal, and A. Tokmakoff, Science 350, 78 (2015).
257. F. Dahms, B. P. Fingerhut, E. T. J. Nibbering, E. Pines, and T. Elsaesser, Science 357, 491 (2017).
258. C. M. Leavitt, A. F. DeBlase, C. J. Johnson, M. van Stipdonk, A. B. McCoy, and M. A. Johnson, J. Phys. Chem. Lett. 4, 3450 (2013).
259. Z.-H. Xu and M. Meuwly, J. Phys. Chem. A 121, 5389 (2017).
260. E. S. Park, S. S. Andrews, R. B. Hu, and S. G. Boxer, J. Phys. Chem. B 103, 9813 (1999).
261. J. Ma, S. Huo, and J. Straub, J. Am. Chem. Soc. 119, 2541 (1997).
262. J. Meller and R. Elber, Biophys. J. 74, 789 (1998).
263. M. Anselmi, M. Aschi, A. Di Nola, and A. Amadei, Biophys. J. 92, 3442 (2007).
264. M. Lim, T. A. Jackson, and P. A. Anfinrud, J. Chem. Phys. 102, 4355 (1995).
265. J. E. Straub and M. Karplus, Chem. Phys. 158, 221 (1991).
266. K. Nienhaus, J. S. Olson, S. Franzen, and G. U. Nienhaus, J. Am. Chem. Soc. 127, 40 (2005).
267. M. Meuwly, ChemPhysChem 7, 2061 (2006).
268. J. Lascombe and M. Perrot, Faraday Discuss. 66, 216 (1978).
269. P. Hamm, M. Lim, and R. M. Hochstrasser, J. Chem. Phys. 107, 10523 (1997).
270. M. Koziński, S. Garrett-Roe, and P. Hamm, Chem. Phys. 341, 5 (2007).
271. P. Hamm and M. Zanni, Concepts and Methods of 2D Infrared Spectroscopy (Cambridge University Press, 2011).
272. M. W. Lee and M. Meuwly, J. Phys. Chem. A 115, 5053 (2011).
273. T. Hayashi, T. Jansen, W. Zhuang, and S. Mukamel, J. Phys. Chem. A 109, 64 (2005).
274. L. Wang, C. T. Middleton, M. T. Zanni, and J. L. Skinner, J. Phys. Chem. B 115, 3713 (2011).
275. R. Battistutta, E. De Moliner, S. Sarno, G. Zanotti, and L. A. Pinna, Protein Sci. 10, 2200 (2001).
276. M. Kolář and P. Hobza, J. Chem. Theory Comput. 8, 1325 (2012).
277. I. T. Suydam and S. G. Boxer, Biochemistry 42, 12050 (2003).
278. I. T. Suydam, C. D. Snow, V. S. Pande, and S. G. Boxer, Science 313, 200 (2006).
279. B. A. Lindquist, K. E. Furse, and S. A. Corcelli, Phys. Chem. Chem. Phys. 11, 8119 (2009).
280. M. Devereux, N. Plattner, and M. Meuwly, J. Phys. Chem. B 113, 13199 (2009).
281. J. P. Layfield and S. Hammes-Schiffer, J. Am. Chem. Soc. 135, 717 (2013).
282. S. D. Fried and S. G. Boxer, Acc. Chem. Res. 48, 998 (2015).
283. F. Schotte, M. Lim, T. A. Jackson, A. V. Smirnov, J. Soman, J. S. Olson, G. N. Phillips, Jr., M. Wulff, and P. A. Anfinrud, Science 300, 1944 (2003).
284. F. Paesani, Acc. Chem. Res. 49, 1844 (2016).
285. M. J. Tucker, Z. Getahun, V. Nanda, W. F. DeGrado, and F. Gai, J. Am. Chem. Soc. 126, 5078 (2004).
286. A. T. Krummel and M. T. Zanni, J. Phys. Chem. B 112, 1336 (2008).
287. S. S. Andrews and S. G. Boxer, J. Phys. Chem. A 104, 11853 (2000).
288. Y. S. Kim and R. M. Hochstrasser, J. Phys. Chem. B 113, 8231 (2009).
289. S. Bagchi, S. D. Fried, and S. G. Boxer, J. Am. Chem. Soc. 134, 10373 (2012).
290. S. Bagchi, S. G. Boxer, and M. D. Fayer, J. Phys. Chem. B 116, 4034 (2012).
291. K. C. Schultz, L. Supekova, Y. Ryu, J. Xie, R. Perera, and P. G. Schultz, J. Am. Chem. Soc. 128, 13984 (2006).
292. L. Xu, A. E. Cohen, and S. G. Boxer, Biochemistry 50, 8311 (2011).
293. K. L. Koziol, P. J. M. Johnson, B. Stucki-Buchli, S. A. Waldauer, and P. Hamm, Curr. Opin. Struct. Biol. 34, 1 (2015).
294. A. Ghosh, A. Remorino, M. J. Tucker, and R. M. Hochstrasser, Chem. Phys. Lett. 469, 325 (2009).
295. P. Mondal and M. Meuwly, Phys. Chem. Chem. Phys. 19, 16131 (2017).
296. R. Rey and J. T. Hynes, J. Chem. Phys. 108, 142 (1998).
297. E. J. Heilweil, F. E. Doany, R. Moore, and R. M. Hochstrasser, J. Chem. Phys. 76, 5632 (1982).
298. M. W. Lee and M. Meuwly, Phys. Chem. Chem. Phys. 15, 20303 (2013).