This article summarizes technical advances contained in the fifth major release of the Q-Chem quantum chemistry program package, covering developments since 2015. A comprehensive library of exchange–correlation functionals, along with a suite of correlated many-body methods, continues to be a hallmark of the Q-Chem software. The many-body methods include novel variants of both coupled-cluster and configuration-interaction approaches along with methods based on the algebraic diagrammatic construction and variational reduced density-matrix methods. Methods highlighted in Q-Chem 5 include a suite of tools for modeling core-level spectroscopy, methods for describing metastable resonances, methods for computing vibronic spectra, the nuclear–electronic orbital method, and several different energy decomposition analysis techniques. High-performance capabilities including multithreaded parallelism and support for calculations on graphics processing units are described. Q-Chem boasts a community of well over 100 active academic developers, and the continuing evolution of the software is supported by an “open teamware” model and an increasingly modular design.
I. INTRODUCTION
The era of electronic computing began with the “ENIAC” machine,1 developed at the University of Pennsylvania beginning in 1943, and the first commercial machines began to be produced around 1950. Although these machines were originally developed for military applications, molecular physics was not far behind.2 The availability of such machines at universities led to the development of the first quantum chemistry software starting in the mid-1950s.3 Prognosticating on the future of electronic structure theory in his 1966 Nobel Lecture, Mulliken stated that4
… the era of computing chemists, when hundreds if not thousands of chemists will go to the computing machine instead of the laboratory for increasingly many facets of chemical information, is already at hand.
However, he did caution that
… at the present time the rapid progress which could be made even with existing machine programs is not being made, simply because available funds to pay for machine time are far too limited.
In the ensuing half-century, the problem of inadequate funds was resolved by the revolution in inexpensive computer hardware that traces its origin to the invention of the integrated circuit in the late 1950s and the microprocessor in the mid-1970s. Perhaps ironically, a desire for realistic simulation in computer games has led to such a massive market for high-performance hardware that today’s laptop computers have the power of the world’s most powerful supercomputer from the mid-1990s, as shown in Fig. 1. It is also worth noting that the roughly 100 W power consumption of today’s eight-core laptop is an impressive 5000× smaller than that of the corresponding supercomputer (e.g., the Fujitsu Numerical Wind Tunnel Computer, which was No. 1 in 1996, consumed 500 kW). At the other extreme, computing resources well into the terascale are routinely available on computer clusters, and leadership supercomputing is in the midst of a transition from petascale toward exascale computing.
Development of leading edge computer capabilities, as documented through the performance of the world’s top 500 supercomputers, as measured on dense linear algebra in units of double precision floating-point operations per second (Flops/s). The data are adapted from Top500.org and compared against the performance of an eight-core laptop, which evidently has performance comparable to the world’s fastest supercomputer of the mid-1990s to late-1990s.
This revolution in computer hardware is only meaningful to practicing chemists if corresponding software is available to enable straightforward and realistic simulation of molecules, molecular properties, and chemical reaction pathways. The first electronic structure codes were already working at the time of Mulliken’s Nobel address, and indeed, Charles Coulson had warned in 1959 of a growing split between theoretical chemists who were numerical simulators (primarily early code developers) and those who developed chemical concepts.5 Today one would rather say that quantum chemistry calculations are simulations whose results represent numerical experiments. Just like real experiments, results from these in silico experiments (even if reliable) must still be understood in conceptual terms, to the extent possible. The aspirations of early electronic structure codes are reflected in program names such as Polyatom,6 and such efforts rarely achieved useful accuracy or else did so via fortuitous cancellation of errors.7 However, today there are many useful program packages including ≈20 that are actively developed and supported.8
One of those is the Q-Chem project, which began in late 1992.9 Since its inception, Q-Chem has operated as a large collaboration that defines its genre as open teamware scientific software.9,10 The Q-Chem source code is open to a large group of developers that currently includes more than 100 individuals in at least 9 countries. Developers can submit their contributions for inclusion in the official releases as long as the changes do not violate the integrity of the overall package and are scientifically sound. In addition, several Q-Chem modules are distributed as open source software.11–17 Figure 2 illustrates some statistics regarding developer activity derived from the Q-Chem source code repository logs. These data provide clear evidence of the sustained growth of the developer community and the code itself over the past decade.
Statistics showing Q-Chem developer activity since 2006. Top: total number of code commits, organized chronologically by month. The color of each monthly entry indicates the number of individual developers who made commits. (Light blue is single-digit numbers, and the January 2021 peak represents about 50 developers committing code that month.) Bottom: growth of the developer base broken down into existing developers vs those who committed code for the first time. The inset depicts the total number of commits by the 50 most prolific developers.
The Q-Chem collaboration has delivered useful and reliable quantum chemistry software over the course of five major releases (as documented in earlier review articles)18–20 and ≈15 minor releases. The present paper addresses progress made since 2015 by the relatively large team of academic developers and the relatively small team of professional programmers who contribute to the package. The authors of this paper represent contributors to Q-Chem v. 4 and v. 5, while contributors to earlier versions are recognized in overview articles describing v. 2,18 v. 3,19 and v. 4.20
The remainder of this paper is organized as follows: Sec. II provides an overview of density functional theory (DFT) capabilities in Q-Chem, including a survey of the 200+ exchange–correlation (XC) functionals that are presently available (Sec. II A).21 A variety of excited-state DFT capabilities are described in Sec. II C, including time-dependent (TD-)DFT in both its linear-response and its explicitly time-dependent (“real-time”) versions. Next, Sec. III describes single-reference correlated wave function methods and other many-body capabilities, while Sec. IV describes multireference methods. Section V highlights some specialty features, including methods for computing core-level (x-ray) excitation spectra, methods for describing metastable resonance states, methods for computing vibronic lineshapes, and finally the nuclear–electronic orbital (NEO) method for describing proton quantum effects. Section VI surveys methods for describing a molecule’s extended environment [e.g., quantum mechanics/molecular mechanics (QM/MM), dielectric continuum, and embedding methods]. Energy decomposition analysis methods are described in Sec. VII. Section VIII describes the Q-Chem software development environment, and Sec. IX provides an overview of high-performance capabilities, including multithreaded parallelism and algorithms that exploit graphics processing units (GPUs). Section X describes graphical user interfaces (GUIs). Finally, Sec. XI provides a wrap-up and a glimpse toward the future.
II. DENSITY FUNCTIONAL THEORY
Standard quantum mechanics, including wave function-based quantum chemistry, employs an approximate N-electron wave function |Ψ⟩ to evaluate the energy, E = ⟨Ψ|Ĥ|Ψ⟩. By contrast, DFT is based on the Hohenberg–Kohn theorems,22–25 which assert that the ground state energy E can be expressed as a functional of the electron density, E = E[ρ(r)]. While the exact functional is unknown and is almost certainly unknowable in explicit form, tremendous progress has been made toward achieving useful approximations. After some minimal background, this section summarizes recent aspects of that progress that are available in Q-Chem.
A. Exchange–correlation functionals
Nearly all modern density functionals are of the Kohn–Sham type,23–26 in which the density is constructed from an auxiliary Slater determinant |Φs⟩ composed of Kohn–Sham molecular orbitals (MOs), {ϕk}. The determinant |Φs⟩ describes a system of noninteracting electrons (or partially interacting electrons,27 for rungs 4 and 5 on the hierarchy in Fig. 3), which has the same density as the physical system of interest. This ensures so-called N-representability24,25 and is also used to exactly evaluate the noninteracting kinetic energy, Ts = −(1/2)∑k ⟨ϕk|∇²|ϕk⟩. The Kohn–Sham DFT energy is expressed as

E[ρ] = Ts + Vext + EJ + EXC, (1)

where the electron–nuclear attraction term (or “external potential,” Vext) and the classical Coulomb mean-field energy (EJ) are known functionals of ρ(r). This leaves only the non-classical exchange–correlation (XC) energy (EXC) as unknown, and density functional approximations (DFAs) represent models for EXC.
Illustration of the ladder-based classification of density functionals. Also shown at each rung are the top-performing functionals (out of 200 DFAs from rungs 1–4), as assessed using the MGCD84 database containing nearly 5000 data points.21 Adapted with permission from N. Mardirossian and M. Head-Gordon, Mol. Phys. 115, 2315 (2017). Copyright 2017 Taylor and Francis.
Given a DFA, the ground-state energy is obtained by minimizing the energy expression of Eq. (1) with respect to the density ρ(r). This minimization is equivalent to solving the Kohn–Sham eigenvalue equation

F̂ ϕk = εk ϕk. (2)
This is a one-electron analog of the time-independent Schrödinger equation. By analogy to the single-determinant Hartree–Fock approach in wave function theory (WFT),28 the effective one-electron Hamiltonian F̂ is known as the Fock operator, and it depends on its own eigenfunctions (as in Hartree–Fock theory). The power of Kohn–Sham DFT is that the solution of the self-consistent field (SCF) problem in Eq. (2) would be equivalent to solving the full N-electron Schrödinger equation, if the exact functional EXC were available.
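To make the structure of the SCF problem in Eq. (2) concrete, the following schematic sketch (written with NumPy/SciPy and not representative of Q-Chem’s actual implementation) illustrates the fixed-point iteration in an atomic-orbital matrix representation. The callback build_fock is a hypothetical placeholder for the functional-dependent Fock build, and the assembly of the total energy (including EXC) is omitted.

```python
import numpy as np
from scipy.linalg import eigh

def scf_loop(overlap, h_core, n_occ, build_fock, tol=1e-7, max_iter=100):
    """Schematic self-consistent-field iteration for Eq. (2) in an AO matrix basis.

    `build_fock(P)` is a hypothetical callback that assembles the Fock matrix
    (core Hamiltonian + Coulomb + exchange-correlation contributions) from the
    one-particle density matrix P; its details depend on the chosen functional.
    """
    n_basis = h_core.shape[0]
    density = np.zeros((n_basis, n_basis))   # crude initial guess
    for iteration in range(max_iter):
        fock = build_fock(density)
        # Generalized eigenvalue problem  F C = S C eps   [Eq. (2)]
        eps, coeffs = eigh(fock, overlap)
        occ = coeffs[:, :n_occ]               # occupied Kohn-Sham MOs
        new_density = 2.0 * occ @ occ.T       # closed-shell density matrix
        if np.linalg.norm(new_density - density) < tol:
            return eps, coeffs                # self-consistency reached
        density = new_density
    raise RuntimeError("SCF did not converge")
```

With the exact EXC folded into build_fock, this iteration would in principle deliver the exact ground-state density and energy.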
While that is sadly not the case, the lack of an exact XC functional happily keeps electronic structure theorists gainfully employed, and there are many useful DFAs that far exceed the accuracy of the cost-equivalent Hartree–Fock method. The manner in which different DFAs depend on various descriptors of the density ρ(r) leads to five broadly recognized categories of density functionals that are commonly visualized as rungs of the metaphorical “Jacob’s ladder.”29,30 The rungs are illustrated in Fig. 3. From lowest to highest, the rungs correspond to the following:
Local Spin Density Approximation (LSDA). The LSDA functional EXC[ρ(r)] depends strictly on the density and solves the model problem of a uniform electron gas. Common fits to the uniform electron gas data are known as VWN31 and PW92,32 which are quite similar.33 Most higher rungs of Jacob’s ladder introduce corrections based on LSDA as a starting point.
Generalized Gradient Approximations (GGAs). GGAs add a dependence on to EXC, making the ansatz potentially exact for slowly varying electron densities, not just uniform ones. Many useful GGAs have been developed, including PBE,34 BLYP,35,36 and B97-D.37 Q-Chem 5 also includes the nonseparable gradient approximation, GAM.38 It is nowadays standard to add empirical dispersion corrections (of the D, D3, or D4 form, for example) to these functionals,39 in order to improve their performance for non-bonded interactions.
Meta-GGAs. These functionals incorporate an additional dependence on the kinetic energy density, τ(r). Functionals on this rung are still under active development and noteworthy recent meta-GGAs include SCAN,40 B97M-V,41 and revM06-L.42 The “-V” suffix in B97M-V indicates that the functional also includes a nonlocal correlation functional (VV10),43 which can (at least in principle) account for dispersion interactions for the right physical reasons,44 whereas “semilocal” functionals that depend only on ρ(r), , and/or τ(r) lack the nonlocality to describe correlated density fluctuations between nonoverlapping densities.
Hybrid functionals. Hybrid DFAs include some portion of the “exact” (or Hartree–Fock) exchange energy associated with the Kohn–Sham determinant. The traditional approach has used a fixed fraction of exact exchange, and such functionals are known as “global” hybrid functionals. Popular examples include B3LYP,35,36 PBE0,45 and M06-2X,46 while some more recent and noteworthy examples of global hybrids include SCAN0,47 MN15,48 and revM06.49 A popular alternative to global hybrids uses a variable fraction of exact exchange that typically increases with the inter-electron distance, r12. These are known as range-separated hybrid (RSH) functionals, and notable older examples include ωB97X50 and ωB97X-D,51 while newer examples include ωB97X-V52 and ωB97M-V.53 More specialized RSH functionals are also widely used for time-dependent DFT calculations of excited states; see Sec. II C.
Double-Hybrid (DH) functionals. Hybrid DFAs depend only on the occupied Kohn–Sham orbitals, but DH-DFAs add an additional dependence on the virtual (unoccupied) Kohn–Sham MOs, which facilitates description of nonlocal electron correlation, as in second-order Møller–Plesset perturbation theory (MP2). DH-DFAs have undergone rapid recent development,54,55 and established models such as B2PLYP-D3,56 XYG3,57 and ωB97X-258 have been joined by promising new DH-DFAs, including ωB97M(2),59 and a slew of functionals that involve empirical scaling of the MP2 spin components.60–62 Relative to the lower rungs of the ladder, the prospect of higher accuracy from DH-DFAs also comes with the cost of significantly higher computational demands, and significantly slower convergence of the results toward the complete basis set limit.
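As a compact illustration of the rung-5 construction just described, a generic double-hybrid XC energy can be written in the following schematic form, where the mixing coefficients cx and cc (and any spin-component scaling of the MP2 term) are specific to each functional; this is a sketch of the common pattern rather than the definition of any particular DFA.

```latex
E_{\mathrm{XC}}^{\mathrm{DH}}
  = (1 - c_x)\,E_x^{\mathrm{DFA}} + c_x\,E_x^{\mathrm{HF}}
  + (1 - c_c)\,E_c^{\mathrm{DFA}} + c_c\,E_c^{\mathrm{MP2}}
```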
With respect to DFT, the most important feature of Q-Chem is that an exceptionally rich set of density functionals is supported: well over 200 functionals are available for a user to choose between.21 A closely related feature is that Q-Chem contains a very complete set of methods for accurate treatment of dispersion interactions. These include Grimme’s D,37 D3,63,64 and D4 corrections,65 as well as a variety of nonlocal correlation and van der Waals functionals,43,66–68 the exchange dipole model (XDM),69,70 the Tkatchenko–Scheffler (TS) model,71 and the many-body dispersion (MBD) model.72–74 In addition, for calculations on large molecules using the small def2-SVPD basis set,75,76 a built-in geometric counterpoise correction method (the so-called DFT-C approach77) is available. Q-Chem also has analytic nuclear gradients and Hessians for most of this long list of functionals through rung 4. Some modern DFAs are more challenging to integrate than older ones, and a set of modern quadrature grids is available,78 with sensible defaults.
This broad selection of available functionals is a perhaps unfortunate necessity due to the fact that the “best” functional often depends on the problem at hand. According to Pople’s concept of a theoretical model chemistry,79,80 one should validate candidate approximations using known results that are related (as closely as possible) to the desired area of chemical application and then proceed to make predictions for related but unknown systems. The best functional(s) for modeling hydrogen storage in a host material,81 for example, may differ significantly from the best functional(s) to describe elementary steps in a CO2 reduction catalyst,82 or the best functional may even differ from one catalyst to another,83 as dictated by the need to get reduction potentials in reasonable agreement with experiment. (Excited-state calculations bring in a host of other considerations,84–89 as discussed in Sec. II C.) Problem-specific validation of the choice of DFA for a given application is therefore a good idea, particularly if there are good available data to benchmark several candidate DFAs.
To bring some order to this situation, it is important to recognize that there are general classes of energy differences that are common to most applications in chemistry. Such classes include non-covalent interactions, thermochemical energy differences, isomerization energies, and reaction barrier heights. The large main-group chemistry database (MGCDB84) developed by Mardirossian and Head-Gordon is categorized along these lines and contains 84 distinct subsets and almost 5000 data points.21 The top-ranked functional at each rung of Jacob’s ladder, according to this dataset, is shown in Fig. 3.
The GMTKN55 dataset is another large diverse set of benchmarks for main-group chemistry,90 and Fig. 4 summarizes the performance of a large range of functionals for this dataset. Consistent with the Jacob’s ladder taxonomy, the performance of the best functional improves at each rung of the ladder, showing that the inclusion of additional physical content does indeed improve accuracy. While it is often (correctly) stated that DFT results on a given molecule are not systematically improvable by switching from one functional to another, these results illustrate that in a statistical sense, DFT does systematically improve when represented by the best functional at each rung of the ladder. The same need not be true if one considers worse-performing functionals at each level, as the additional flexibility associated with higher rungs on Jacob’s ladder makes it quite possible to overfit complicated functional forms using limited data, especially where meta-GGA functionals are concerned.
Weighted errors (in kcal/mol) for a range of functionals, assessed using the GMTKN55 dataset and arranged according to the rungs of Jacob’s ladder in Fig. 3. The figure is adapted from Ref. 90 but includes additional data from Refs. 91 and 62. Adapted with permission from Goerigk et al., Phys. Chem. Chem. Phys. 19, 32184 (2017). Copyright 2017 Published by the PCCP Owner Societies.
Diving a bit deeper into the data shown in Fig. 4 reveals a variety of other interesting observations.
LSDA (rung 1) is essentially useless for chemical applications. A good GGA such as B97-D3 is the simplest and lowest-cost DFT method that is useful for chemistry.
A good meta-GGA, as exemplified by B97M-V, offers striking improvements over the best GGA across all categories. It is clear that meta-GGAs can deliver significantly higher accuracy than GGAs.
Significant further improvement is delivered by the best hybrid functionals, exemplified by ωB97X-V as a RSH-GGA and ωB97M-V as a corresponding RSH-meta-GGA. This improvement arises primarily from better accuracy for barrier heights, thermochemistry, and isomerization energies. There is good reason for hybrids to be a default choice for chemical modeling.
The best DH-DFAs offer further improvements in the same categories where hybrids improve over meta-GGAs: barrier heights, thermochemistry, and isomerization energies. However, the significantly higher cost of DH-DFAs means that they are often used only for single-point energy calculations at stationary points optimized at lower levels of theory. Q-Chem includes the efficient occ-RI-K algorithm92 to significantly reduce the additional compute cost of DH-DFAs. Some parallel timings are given in Sec. IX.
The gap in accuracy between DFT and the best wave function theories remains quite substantial. For both bonded and non-bonded interactions, errors associated with coupled-cluster (CC) methods that include triple excitations [CCSD(T) or better] are on the order of 5× smaller than those for the best rung-5 density functionals.59 Therefore, despite the much higher computational costs, there remains strong incentive to perform CC calculations when possible. Some of Q-Chem’s CC capabilities are described in Sec. III.
Further details regarding the combinatorial design strategy used to obtain the best functionals at rungs 3, 4, and 5 can be found in the work of Mardirossian and Head-Gordon.41,52,53,59 It should be noted that statistical assessments of DFAs are only as transferable as the data they are built upon. The transferability of the conclusions discussed above to similar systems is supported by the fact that broadly similar conclusions can be drawn from other large-scale data assessments, e.g., comparing MGCDB84 vs GMTKN55 for main-group compounds. It is a separate issue to investigate the performance of density functionals for very different classes of molecules, such as transition metal compounds. (These have been the target of several other recent benchmark studies.93,94) Similarly, interest in the quality of densities derived from DFT must be separately assessed, either directly95 or via properties such as electrical moments.96–99 Similar considerations apply to other molecular properties, such as polarizabilities100 and nuclear magnetic resonance (NMR) chemical shifts.101
B. Thermally assisted-occupation DFT
Systems with strong static correlation remain very challenging for conventional Kohn–Sham DFT. Q-Chem 5 contains thermally assisted-occupation (TAO-)DFT,102–104 an efficient means to explore ground-state properties of large electronic systems with strong static correlation. Unlike Fermi smearing105 (also supported by Q-Chem), which is a convergence aid for small-gap systems, TAO-DFT aims to access densities beyond those obtainable from a single Kohn–Sham determinant. TAO-DFT is similar to Kohn–Sham DFT in computational complexity but represents the ground-state electron density in terms of orbitals with fractional occupation numbers governed by a Fermi–Dirac distribution at a fictitious temperature that is related to the strength of static correlation. In TAO-DFT, static correlation can be approximately described by the entropy contribution,102 even when semilocal102,103 or hybrid104 density functionals are employed. A self-consistent scheme defining the fictitious temperature has been recently developed for diverse applications.106 By combining computational efficiency with reasonable accuracy, TAO-DFT is well positioned to investigate the ground-state properties of electronic systems at the nanoscale, especially those possessing strong static correlation effects.107–111 TAO-DFT has recently been combined with ab initio molecular dynamics.112
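For orientation, in the spin-restricted case the fractional occupations, density, and entropy contribution of TAO-DFT take roughly the following standard forms, where θ is the fictitious temperature and μ is a chemical potential fixed by the electron count; see Ref. 102 for the precise definitions.

```latex
f_i = \Bigl[1 + e^{(\varepsilon_i - \mu)/\theta}\Bigr]^{-1},
\qquad
\rho(\mathbf{r}) = \sum_i f_i\,\lvert\phi_i(\mathbf{r})\rvert^2,
\qquad
E_S^{\theta} = \theta \sum_i \bigl[\, f_i \ln f_i + (1 - f_i)\ln(1 - f_i) \,\bigr]
```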
C. Excited-state DFT methods
The TDDFT approach113,114 extends ground-state DFT to electronically excited states via the linear response (LR) formalism,115,116 incorporating electron correlation at a computational cost equivalent to its uncorrelated Hartree–Fock analog, the configuration-interaction singles (CIS) method.114 This relatively low cost makes LR-TDDFT (Sec. II C 1) the most widely used method for computing vertical excitation spectra and for exploring excited-state potential energy surfaces (computational photochemistry, Sec. II C 2). An alternative to the LR formalism is “real-time” TDDFT,117,118 also known as time-dependent Kohn–Sham (TDKS) theory,119–121 which is discussed in Sec. II C 3 and which can be used to compute broadband excitation spectra. Finally, an altogether different category of DFT-based excited-state methods is the ΔSCF formalism, which is a state-specific approach that fully accounts for orbital relaxation in the excited state and can be used to describe challenging problems such as excited-state charge separation and states with double-excitation character, thereby sidestepping known systemic problems with LR-TDDFT while retaining SCF cost. The ΔSCF approach is discussed in Sec. II C 4.
1. LR-TDDFT
Despite its popularity, LR-TDDFT does have systemic problems for certain classes of excited states, the most infamous of which is its dramatic underestimation of excitation energies having charge-transfer (CT) character.85–87,122–127 Nevertheless, this method often achieves an impressive statistical accuracy of 0.2–0.3 eV for low-lying valence excitation energies,128 giving it a wide domain of applicability despite recognized shortcomings.
The CT problem, in particular, can be largely ameliorated through the use of long-range corrected (LRC) functionals,84–89 which are RSH functionals in which the fraction of Hartree–Fock exchange is required to go to unity as r12 → ∞. The most popular such functional is LRC-ωPBE,87,129 along with its short-range hybrid cousin, LRC-ωPBEh,126 although other variants are available, including LRC-μBLYP and LRC-μBOP.86,88,130 In addition to these LRC-GGAs, Q-Chem 5 also includes the relatively new revM11 functional,131 a LRC-meta-GGA functional specifically optimized for long-range CT excitations.
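The range separation underlying all of these LRC functionals is conventionally expressed through an error-function partition of the Coulomb operator, with the short-range component treated by a semilocal DFA and the long-range component by exact exchange:

```latex
\frac{1}{r_{12}}
  = \underbrace{\frac{\operatorname{erfc}(\omega\, r_{12})}{r_{12}}}_{\text{short range (DFA exchange)}}
  + \underbrace{\frac{\operatorname{erf}(\omega\, r_{12})}{r_{12}}}_{\text{long range (exact exchange)}}
```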
For best results, the range-separation parameter (ω or μ) is often “tuned” in order to set the frontier energies based on the molecule’s own (ΔSCF) ionization energy (IE),89,132–134

−εHOMO(ω) = IE(ω). (3)
In Q-Chem 5, an alternative “global density-dependent” (GDD) tuning procedure is available.135–137 Following a standard SCF calculation with a functional such as LRC-ωPBE, the GDD procedure automatically determines a new tuned value (ωGDD) based on the size of the exchange hole. This approach appears to avoid system-size-dependent problems with the value of ω tuned according to Eq. (3).137
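The conventional tuning condition of Eq. (3) is straightforward to automate with an external driver script. The sketch below is illustrative only: run_scf(omega) is a hypothetical helper that returns εHOMO and the ΔSCF ionization energy (in hartree) for a given ω, e.g., by launching neutral and cation calculations at that value of the range-separation parameter.

```python
def tuning_error(omega, run_scf):
    """Deviation from the tuning condition -eps_HOMO(omega) = IE(omega)  [Eq. (3)].

    `run_scf` is a hypothetical callback returning (eps_homo, ionization_energy),
    both in hartree, for the given range-separation parameter omega.
    """
    eps_homo, ie = run_scf(omega)
    return eps_homo + ie        # zero when Eq. (3) is satisfied

def tune_omega(run_scf, lo=0.1, hi=1.0, tol=1e-4):
    """Bisection on the tuning condition for omega (bohr^-1) in the interval [lo, hi]."""
    f_lo, f_hi = tuning_error(lo, run_scf), tuning_error(hi, run_scf)
    if f_lo * f_hi > 0:
        raise ValueError("Tuning condition is not bracketed on [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = tuning_error(mid, run_scf)
        if f_lo * f_mid <= 0:
            hi, f_hi = mid, f_mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)
```

By contrast, the GDD procedure described above determines ωGDD following a single standard SCF calculation, without such a scan.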
2. Exploring excited-state potential surfaces
Q-Chem 5 contains new tools that enable the exploration of excited-state potential energy surfaces with LR-TDDFT, including algorithms for locating minimum-energy crossing points (MECPs) along conical seams. For a molecule with nvib = 3natoms − 6 vibrational degrees of freedom, the conical seam (or “conical intersection”) is a (nvib − 2)-dimensional subspace within which two electronic states are exactly degenerate. Conical intersections serve as photochemical funnels for nonadiabatic dynamics,138,139 so locating the MECP (i.e., the lowest-energy point within the degenerate subspace) can help to rationalize excited-state dynamics by providing a single chemical structure to represent the whole seam space.140
Orthogonal to the conical seam is the two-dimensional branching space, within which any infinitesimal displacement lifts the degeneracy between electronic states |ΨJ⟩ and |ΨK⟩.138,141 The branching space is spanned by two (nonorthogonal) vectors,

gJK = ∂(EJ − EK)/∂R (4)

and

hJK = ⟨ΨJ|∂Ĥ/∂R|ΨK⟩, (5)
where R indicates the nuclear coordinates. Operationally, the gradient difference (“g-vector”) is easily computed using any excited-state method for which analytic gradients are available, but the nonadiabatic coupling (“h-vector”) is less routinely available. Analytic h-vectors are available in Q-Chem 5 for both CIS and LR-TDDFT,141–145 which greatly facilitates efficient optimization of MECPs by means of a projected-gradient algorithm that optimizes directly in the seam space.146 Alternatively, for excited-state methods where analytic gradients (and therefore gJK) are available but analytic derivative couplings (hJK) are not, Q-Chem provides a branching-plane updating algorithm to optimize MECPs.140,147 This is significantly more efficient140 than alternative penalty-function methods,148 which can also be used in the absence of hJK. The projected-gradient algorithm is the most efficient approach of all, however, converging in fewer steps while the computation of hJK adds a modest 10%–20% overhead to the cost of computing the gradients for states J and K.142,149,150 For molecules with intersystem crossing, analytic gradients and derivative couplings at the CIS and LR-TDDFT levels are available within both the spin-diabatic and spin-adiabatic representations.151,152
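In its standard form, the projected-gradient approach minimizes a composite gradient of roughly the following structure, in which the first term drives the energy gap to zero along gJK while the projector P̂ removes the branching-space components (tildes denote an orthonormalized basis for that space) from the upper-state gradient. The precise formulation implemented in Q-Chem follows Ref. 146 and may differ in details such as prefactors.

```latex
\mathbf{G}_{\mathrm{MECP}} \;=\;
  2\,(E_J - E_K)\,\frac{\mathbf{g}_{JK}}{\lVert\mathbf{g}_{JK}\rVert}
  \;+\; \hat{P}\,\frac{\partial E_J}{\partial \mathbf{R}},
\qquad
\hat{P} \;=\; \mathbf{1}
  \;-\; \tilde{\mathbf{g}}_{JK}\tilde{\mathbf{g}}_{JK}^{\mathsf{T}}
  \;-\; \tilde{\mathbf{h}}_{JK}\tilde{\mathbf{h}}_{JK}^{\mathsf{T}}
```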
Nonadiabatic trajectory simulations at the LR-TDDFT level are available in Q-Chem and take advantage of these analytic derivative couplings. These simulations can be performed using the Tully’s “fewest switches” surface hopping (FSSH) algorithm153,154 or using an “augmented” FSSH algorithm that includes decoherence effects on the electronic amplitudes.155,156 These corrections are necessary in order to maintain detailed balance and to describe both short- and long-time relaxation dynamics, including Marcus theory.157–159 A Python framework for performing FSSH simulations using Q-Chem is also available.160
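For reference, in Tully’s original fewest-switches prescription the probability of hopping out of the active state k into state j during a time step Δt is computed from the electronic amplitudes cj and the first-order derivative coupling djk = ⟨Ψj|∂Ψk/∂R⟩ [related to hJK of Eq. (5) by dJK = hJK/(EK − EJ)], with negative values set to zero:

```latex
g_{k \to j} \;=\; \max\!\left[\,0,\;
  \frac{-2\,\Delta t\,\operatorname{Re}\!\left(c_j^{*} c_k\right)\,
        \mathbf{d}_{jk}\cdot\dot{\mathbf{R}}}
       {\left|c_k\right|^{2}}\,\right]
```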
A systematic shortcoming of LR-TDDFT that is relevant here is an incorrect description of the topology around any conical intersection that involves the ground state; in such cases, the branching space predicted by LR-TDDFT is one-dimensional rather than two-dimensional.141,161 This problem has its roots in the fact that any excited-state method based on response theory treats the “reference state” (usually the ground state) in a fundamentally different manner as compared to the “response” (excited) states. This can cause difficulties when the reference state becomes quasi-degenerate with the lowest excited state, and in the context of nonadiabatic trajectory simulations, this imbalance can manifest as SCF convergence failure in the vicinity of a conical intersection.162 The “spin–flip” approach to LR-TDDFT163–165 resolves this problem141,142 by using a reference state with a different spin multiplicity as compared to the target states of interest. An example is shown in Fig. 5, which depicts the excitation space for a case where a high-spin triplet reference state is used to generate determinants for singlet states, including the closed-shell S0 ground state. The spin–flip single-excitation manifold contains a subset of the possible determinants that are doubly excited with respect to S0, including the one (in the “o–o” subspace in Fig. 5) that is necessary to provide proper topology at the S0/S1 conical intersection.142,161 In Q-Chem 5, nonadiabatic coupling vectors hJK are available for both conventional and spin–flip variants of LR-TDDFT.142
Illustration of the spin–flip TDDFT excitation space for a (4e, 4o) model, starting from a high-spin triplet reference. Proper spin eigenfunctions can be formed from the four determinants in the o–o subspace, but the remaining determinants are missing one or more complementary spin functions. Adapted from X. Zhang and J. M. Herbert, J. Chem. Phys. 143, 234107 (2015) with the permission of AIP Publishing.
While the spin–flip approach rigorously cures the topology problem at conical intersections,141,142 it unfortunately exacerbates problems with spin contamination. This is especially true as one moves away from the Franck–Condon region and starts to break bonds, where singlet and triplet states often become comparable in energy; this may necessitate the use of state-tracking algorithms to ensure that a geometry optimization or dynamics trajectory remains on a potential surface of consistent spin multiplicity.166–169 At the heart of this problem is the fact that each of the determinants in the c–o, o–v, and c–v subspaces in Fig. 5 is missing one or more of the complementary determinants170–172 needed to form an eigenstate. The missing determinants are absent because they cannot be generated from the reference state via a single excitation combined with a single α → β spin flip. However, these determinants can be generated, in an automated manner that does not increase the formal computational scaling of LR-TDDFT, by means of a tensor equation-of-motion (EOM) formalism.169,173–175 This formalism has been used to develop a “spin-adapted spin–flip” (SA-SF) TDDFT method,169 which preserves proper topology at conical intersections but also restores spin multiplicity as a good quantum number. Figure 6 shows that SA-SF-TDDFT results are close to multireference benchmarks for the challenging problem of twisting ethylene by 90° about its C–C axis. Analytic gradients for SA-SF-TDDFT are not yet available, but this method can be used to check the veracity of any heavily spin-contaminated results that are obtained with other flavors of LR-TDDFT.
Potential energy curves for the singlet N, V, and Z states of C2H4, twisting along the C–C axis, computed using various spin–flip methods in comparison to multireference benchmarks. Both SA-SF-TDDFT and SA-SF-CIS correctly describe the topology around a conical intersection, but the latter lacks dynamical correlation and therefore its excitation energies are not accurate. Adapted from X. Zhang and J. M. Herbert, J. Chem. Phys. 143, 234107 (2015) with the permission of AIP Publishing.
SF-TDDFT methods are also suitable for treating other types of electronic structure that are not accessible with standard Kohn–Sham DFT, such as polyradicals and single-molecule magnets.163,164,176,177
3. “Real-time” TDDFT
The term “TDDFT” is used almost universally to refer specifically to LR-TDDFT, which despite its name is a strictly frequency-domain theory with no explicit time dependence, at least not within the ubiquitous adiabatic approximation that is used in all practical implementations.114,115 However, just as the ground-state Kohn–Sham problem is based on a one-electron analog of the time-independent Schrödinger equation [Eq. (2)], at the foundation of TDDFT is a one-electron analog of the time-dependent Schrödinger equation, which governs the time evolution of |Φs⟩ and thus the Kohn–Sham MOs. The latter evolve in time according to

iℏ ∂ϕk(r, t)/∂t = F̂(t) ϕk(r, t). (6)
Using this TDKS equation, the MOs can be propagated in time following a perturbation of the ground state density at t = 0 that generates a (non-stationary) superposition of excited states. Information about electronic excitation energies is encoded into the time evolution of this superposition state, and an entire broadband excitation spectrum can be obtained via Fourier transform of the time-dependent dipole moment function, with a spectral resolution that improves upon further time propagation.117,178 This approach has been given the unwieldy moniker of “real-time” TDDFT,117,118 although calling it TDKS theory avoids confusion with the more widespread LR-TDDFT approach.119–121
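The post-processing step described above, i.e., extracting a broadband spectrum from the time-dependent dipole moment, can be sketched as follows. This is a generic illustration rather than Q-Chem’s own analysis code; the input arrays, the δ-kick field strength, and the exponential damping window are assumptions of the sketch.

```python
import numpy as np

def absorption_spectrum(times, dipole, kick_strength, damping=0.005):
    """Broadband spectrum from a real-time dipole signal mu(t) after a weak delta kick.

    times         : 1D array of time points (atomic units), uniformly spaced
    dipole        : dipole moment along the kick direction, mu(t)
    kick_strength : field strength of the delta-function perturbation (a.u.)
    damping       : exponential window (a.u.) that broadens the lines and
                    suppresses artifacts from the finite propagation time

    Returns (omega, S), with S(omega) proportional to the dipole strength function.
    """
    dt = times[1] - times[0]
    signal = (dipole - dipole[0]) * np.exp(-damping * times)   # induced dipole, windowed
    n_pad = 8 * len(times)                    # zero padding for a smoother spectrum
    # Imaginary part of the frequency-dependent polarizability alpha(omega)
    alpha_imag = np.fft.rfft(signal, n=n_pad).imag * dt / kick_strength
    omega = np.fft.rfftfreq(n_pad, d=dt) * 2.0 * np.pi
    return omega, -omega * alpha_imag         # S(omega) ~ omega * Im[alpha(omega)]
```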
In the limit of a weak perturbation at t = 0, propagated to t → ∞ to obtain narrow spectral lines, TDKS spectra are equivalent to those obtained using LR-TDDFT,178 but the TDKS approach need not be limited to the weak-field LR regime and can be used to explore strong-field dynamics,179 strong-field ionization,180–183 and high-harmonic spectra,120,184–187 for example. [Ionization requires the use of complex absorbing potentials (CAPs), which are discussed in Sec. V B. These are available for use in TDKS simulations,120,121 along the lines of the atom-centered potentials described in Refs. 180–183.] In this way, TDKS simulations can describe time-dependent electron dynamics beyond the Born–Oppenheimer approximation, where the electrons are out of equilibrium with the nuclei. At present, Q-Chem’s implementation of the TDKS method120,121 is limited to clamped-nuclei simulations, meaning electron dynamics only.
Time propagation according to Eq. (6) is complicated by the fact that F̂ depends on the MOs and thus the effective Hamiltonian is time-dependent. The most widely used propagation algorithm is the modified-midpoint method,188 for which the cost of one time step is the same as the cost of one SCF cycle of a ground-state calculation. (It should be noted that for electron dynamics, the fundamental timescale is a