Efficient computer implementations of the GW approximation must approximate a numerically challenging frequency integral; the integral can be performed analytically, but doing so leads to an expensive implementation whose computational cost scales as *O*(*N*^{6}), where *N* is the size of the system. Here, we introduce a new formulation of the full-frequency GW approximation by exactly recasting it as an eigenvalue problem in an expanded space. This new formulation (1) avoids the use of time or frequency grids, (2) naturally obviates the need for the common “diagonal” approximation, (3) enables common iterative eigensolvers that reduce the canonical scaling to *O*(*N*^{5}), and (4) enables a density-fitted implementation that reduces the scaling to *O*(*N*^{4}). We numerically verify these scaling behaviors and test a variety of approximations that are motivated by this new formulation. The new formulation is found to be competitive with conventional *O*(*N*^{4}) methods based on analytic continuation or contour deformation. In this new formulation, the relation of the GW approximation to configuration interaction, coupled-cluster theory, and the algebraic diagrammatic construction is made especially apparent, providing a new direction for improvements to the GW approximation.

Green’s function approaches based on time-dependent many-body perturbation theory provide an economical description of excitation energies and spectral intensities. For the one-particle Green’s function, which describes electron addition and removal processes, the GW approximation to the self-energy^{1} performs well for weakly correlated insulators and metals,^{2} which has partially motivated its application to molecules.^{3–6} Here and throughout, we are considering the common non-self-consistent G_{0}W_{0} approximation, which we call the GW approximation for simplicity. The size of systems that can be studied with the GW approximation is determined by the implementation, which can be characterized by its asymptotic scaling with the system size *N*, ranging from *O*(*N*^{3}) to *O*(*N*^{6}) with widely varying prefactors.^{7,8}

GW implementations can be distinguished based on their handling of a numerically challenging frequency integral, which is relatively uncommon in quantum chemical methods. The earliest works used a generalized plasmon pole model to approximate the dielectric function and, thus, integrate analytically.^{2,9,10} More sophisticated approaches treat the full-frequency dependence using numerical integration techniques such as analytic continuation^{11–16} and contour deformation.^{15–19} These latter methods introduce numerical errors, but ones that, in principle, can be eliminated with an increase in the cost (e.g., the frequency integration grid or the fitting of the self-energy on the imaginary frequency axis). The final class of methods is numerically exact within a given single-particle basis set and requires the explicit enumeration of all neutral excitation energies,^{3–5,20,21} typically calculated within the random-phase approximation (RPA). This explicit enumeration, i.e., a sum over states, dominates the cost of such a GW calculation due to its *O*(*N*^{6}) scaling. This exact handling of the full-frequency dependence is the type that we address in the present work. We note that this class of methods still constructs a frequency-dependent self-energy, which is used to solve the quasiparticle equation for each excitation. In this work, we present a new formulation of the GW approximation by recasting it as an eigenvalue problem in an expanded space, and *a frequency variable never appears*.

Within Green’s function theories, charged excitation energies *E*_{n}, i.e., ionization potentials (IPs) and electron affinities (EAs), are found as the poles of the one-particle Green’s function matrix **G**(*ω*) via the eigenvalue problem,

where **f** = **h** + **J** + **K** is the Fock matrix, **h** is the kinetic and external potential energy matrix, **J** is the Hartree matrix, **K** is the exchange matrix, and **Σ**_{c}(*ω*) is the correlation part of the self-energy matrix. In practice, we typically work in a basis of orbitals *ϕ*_{p}(**x**) that diagonalize a mean-field Green's function, which serves as the reference and defines the orbital energies *ε*_{p}. As usual, the *O* occupied orbitals will be indexed by *i*, *j*, *k*, *l*; the *V* unoccupied orbitals will be indexed by *a*, *b*, *c*, *d*; and generic orbitals will be indexed by *p*, *q*, *r*, *s*. For simplicity, we will assume real orbitals. In this basis, we have **f** = *ε* + **K** − **V**_{xc}, where **V**_{xc} is the exchange–correlation potential matrix. Note that for a HF reference, **K** − **V**_{xc} = 0.

In the GW approximation, the self-energy is calculated to lowest-order in the screened Coulomb interaction *W*, which gives rise to the aforementioned frequency integral, Σ_{c}(*ω*) = (*i*/2*π*)∫*dω*′*e*^{iηω′}*G*(*ω* + *ω*′)*W*_{p}(*ω*′), where *W*_{p} = *W* − *v* is the polarized part of the screened Coulomb interaction. When the polarizability that enters *W*_{p} is expressible by a spectral representation, the frequency integration can be performed analytically to yield^{3–5,20,21}

where $M^{\nu}_{pq} = \int d\mathbf{x}_1\, d\mathbf{x}_2\, \phi_p(\mathbf{x}_1)\phi_q(\mathbf{x}_1)\, r_{12}^{-1}\, \rho_\nu(\mathbf{x}_2)$, Ω_{ν} are neutral excitation energies, and *ρ*_{ν}(**x**) are transition densities.
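As an illustration of the sum-over-states expression that results from the analytic frequency integration, the following NumPy sketch evaluates the correlation self-energy from precomputed Ω_{ν} and M^{ν}. The array conventions (`M[v, p, q]` for M^{ν}_{pq}, `eps` for orbital energies) and the broadening `eta` are assumptions for demonstration, not the paper's production code:

```python
import numpy as np

def gw_sigma_c(omega, eps, M, Omega, nocc, eta=1e-3):
    """Sum-over-states GW correlation self-energy, schematically:
      Sigma_pq(w) = sum_{i,v} M[v,p,i] M[v,q,i] / (w - eps_i + Omega_v - i*eta)
                  + sum_{a,v} M[v,p,a] M[v,q,a] / (w - eps_a - Omega_v + i*eta)
    The +/- i*eta follows the usual time-ordered convention."""
    Mo, Mv = M[:, :, :nocc], M[:, :, nocc:]
    d_occ = omega - eps[:nocc] + Omega[:, None] - 1j*eta   # (n_exc, n_occ)
    d_vir = omega - eps[nocc:] - Omega[:, None] + 1j*eta   # (n_exc, n_vir)
    sig = np.einsum('vpi,vqi,vi->pq', Mo, Mo, 1.0/d_occ)
    sig += np.einsum('vpa,vqa,va->pq', Mv, Mv, 1.0/d_vir)
    return sig
```

Because every excitation ν contributes to every matrix element, this brute-force evaluation inherits the *O*(*N*^{6}) cost of enumerating all eigenpairs of **A**.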

Although any theory of neutral excitations can be used to calculate the polarizability,^{22} here, we consider the Tamm–Dancoff approximation (TDA) to the (direct) RPA. Within the TDA, the neutral excitation energies and transition density moments are defined by **AX**^{ν} = Ω_{ν}**X**^{ν}, where

and $M^{\nu}_{pq} = \sum_{ia} X^{\nu}_{ia}\, (pi|qa)$. Diagrammatically, such a self-energy has screening due to infinite-order, *forward time-ordered* ring (or bubble) diagrams. The algebraic form [Eq. (2)] assumes that *all* eigenvalues and eigenvectors of the **A** matrix have been calculated, which implies a canonical *O*(*N*^{6}) scaling, as discussed in the Introduction.
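For concreteness, the TDA screening quantities can be built by brute-force dense diagonalization; within the direct TDA, **A** is symmetric for real orbitals because only Coulomb integrals (*ia*|*jb*) appear. The array names (`eps`, `eri` in chemists' notation, `nocc`) are illustrative assumptions:

```python
import numpy as np

def tda_screening(eps, eri, nocc):
    """Direct-TDA excitations A X^v = Omega_v X^v, with
      A_{ia,jb} = (eps_a - eps_i) delta_ij delta_ab + (ia|jb),
    and transition moments M^v_pq = sum_ia X^v_ia (pi|qa).
    eri[p,q,r,s] holds chemists'-notation integrals (pq|rs)."""
    norb = len(eps)
    no, nv = nocc, norb - nocc
    o, v = slice(0, no), slice(no, norb)
    A = eri[o, v, o, v].reshape(no*nv, no*nv).copy()
    A += np.diag((eps[v][None, :] - eps[o][:, None]).ravel())
    Omega, X = np.linalg.eigh(A)  # symmetric for real orbitals (direct TDA)
    M = np.einsum('piqa,iav->vpq', eri[:, o, :, v], X.reshape(no, nv, -1))
    return Omega, M
```

The *O*(*O*^{3}*V*^{3}) diagonalization in this sketch is exactly the step that the frequency-free reformulation below avoids.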

In the GW community, RPA screening is much more commonly implemented without the TDA. Although the frequency-free implementation of the GW approximation that we present here is far simpler to formulate within the TDA, many of the same ideas can be applied for the case of RPA screening, which we discuss in the supplementary material. In particular, we show that a frequency-free formulation with RPA screening exists; however, it is less conducive to cost reductions. In Fig. 1, we show that when a density functional theory (DFT) reference is used, the IPs and EAs obtained with TDA screening exhibit errors that are roughly twice as large as those obtained with RPA screening. However, for molecules, a HF reference significantly outperforms DFT references, as shown previously.^{23} Remarkably, we find that the most accurate results of all methods considered are those obtained when a HF reference is combined with TDA screening, achieving an accuracy of about 0.1 eV and empirically justifying our focus on TDA screening. These conclusions are likely to change when the GW approximation is applied to solids, especially metals and small gap semiconductors that are highly polarizable.

In order to make progress on a frequency-free implementation that avoids the explicit sum over states in Eq. (2), we define a vector space of excitations corresponding to one hole (1h), one particle (1p), two holes and one particle (2h1p), and two particles and one hole (2p1h). A vector in this space has elements **R** = (*r*_{i}, *r*_{a}, *r*_{i[jb]}, *r*_{[jb]a}). The notation of the 2h1p and 2p1h amplitudes indicates that the *j* → *b* excitation is independent of the other particle or hole index, i.e., the amplitudes do not obey any antisymmetry as they do in determinantal approaches. We define a frequency-independent “super-matrix” **H**,

where **C**^{2h1p} = *ε*^{1h} ⊕ (−**A**) and **C**^{2p1h} = *ε*^{1p} ⊕ **A**, with matrix elements

This super-matrix can be downfolded into the 1h + 1p space, leading to a frequency-dependent eigenvalue problem of the form Eq. (1), with

It is straightforward to check that this frequency-dependent matrix, arising from the downfolding of the 2h1p and 2p1h spaces, is precisely the correlation part of the GW self-energy; a short proof of the downfolding equivalence is given in the supplementary material. This presentation closely follows the algebraic diagrammatic construction (ADC) literature.^{29,30} In particular, the above-mentioned theory, i.e., the GW approximation with TDA screening, is a strict but severe approximation to the ADC(3) and 2p1h-TDA methods.^{31} Diagrammatically, the latter two theories include many vertex corrections beyond the GW approximation, including ladder and exchange diagrams that appear as additional terms in Eq. (6). An analogous approach was also used recently to formulate an efficient renormalized second-order Green’s function theory,^{32} and similar conceptual ideas were discussed in the context of double excitations in time-dependent density functional theory.^{33}
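The downfolding equivalence can be verified numerically on a toy problem. The sketch below works in the eigenbasis of **A**, so that **C**^{2h1p} = diag(*ε*_{i} − Ω_{ν}) and **C**^{2p1h} = diag(*ε*_{a} + Ω_{ν}) are diagonal and the coupling blocks reduce to the moments M^{ν}; the random *ε*_{p}, Ω_{ν}, and M^{ν} are assumptions for illustration. It confirms that every eigenvalue of **H** is a pole of the Green's function, i.e., satisfies det[*E***1** − **f** − **Σ**_{c}(*E*)] = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
no, nv, nexc = 2, 3, 5
norb = no + nv

eps = np.sort(rng.normal(size=norb))
f = np.diag(eps)                            # HF-like diagonal reference
Omega = rng.uniform(0.5, 2.0, size=nexc)    # toy neutral excitation energies
M = rng.normal(size=(nexc, norb, norb))
M = M + M.transpose(0, 2, 1)                # M^v_pq is symmetric for real orbitals

# Coupling blocks and diagonal 2h1p/2p1h energies, in the eigenbasis of A
V_h = M[:, :, :no].transpose(1, 0, 2).reshape(norb, nexc*no)  # composite [v,i]
V_p = M[:, :, no:].transpose(1, 0, 2).reshape(norb, nexc*nv)  # composite [v,a]
C_h = (eps[:no][None, :] - Omega[:, None]).ravel()            # eps_i - Omega_v
C_p = (eps[no:][None, :] + Omega[:, None]).ravel()            # eps_a + Omega_v

H = np.block([
    [f, V_h, V_p],
    [V_h.T, np.diag(C_h), np.zeros((nexc*no, nexc*nv))],
    [V_p.T, np.zeros((nexc*nv, nexc*no)), np.diag(C_p)],
])
E = np.linalg.eigvalsh(H)

def sigma_c(w):
    """Downfolded correlation self-energy, matching Eq. (2)."""
    return (V_h / (w - C_h)) @ V_h.T + (V_p / (w - C_p)) @ V_p.T

# Each eigenvalue of H satisfies det[E*1 - f - Sigma_c(E)] = 0
for En in E:
    s = np.linalg.svd(En*np.eye(norb) - f - sigma_c(En), compute_uv=False)
    assert s.min() < 1e-6 * s.max()
```

This small check is the essence of the reformulation: the frequency dependence of the self-energy is traded for the extra 2h1p and 2p1h rows and columns of **H**.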

Importantly, the frequency-independent super-matrix form of the GW approximation enables the use of iterative eigensolvers that lower the computational scaling. Matrix-vector multiplication is given by **HR** = **σ**, with

where all indices correspond to spin-orbitals. For a restricted, closed-shell reference, spin-free equations are straightforward to derive and are given in the supplementary material. Clearly, these equations exhibit no worse than *O*(*N*^{5}) scaling (specifically, *O*(*O*^{2}*V*^{3}) for moderately sized basis sets), which is a significant improvement over the *O*(*N*^{6}) scaling exhibited by the sum-over-states implementation. Furthermore, because only Coulomb-type electron repulsion integrals are used in the direct TDA (or RPA), the scaling of the most expensive contractions can be easily reduced by density fitting. For example, if the ERIs are approximated as $(pq|rs) \approx \sum_Q B^{Q}_{pq} B^{Q}_{rs}$, then the worst-scaling *O*(*N*^{5}) term can be calculated by

which has two steps that scale as *O*(*N*_{aux}*OV*^{2}), i.e., *O*(*N*^{4}).
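The two-step density-fitted factorization can be illustrated with a schematic contraction. The shapes, the factor array `B`, and the generic trial amplitude `r` are assumptions for demonstration; the check at the end confirms agreement with the explicit four-index contraction:

```python
import numpy as np

rng = np.random.default_rng(0)
naux, no, nv = 12, 3, 5
norb = no + nv
B = rng.normal(size=(naux, norb, norb))
B = B + B.transpose(0, 2, 1)             # B^Q_pq = B^Q_qp for real orbitals
r = rng.normal(size=(no, nv))            # generic 1h1p-type trial amplitudes

# Two O(N^4)-type steps replacing one O(N^5) contraction y_pi = sum_jb (pi|jb) r_jb
t = np.einsum('Qjb,jb->Q', B[:, :no, no:], r)   # scales as O(N_aux * O * V)
y = np.einsum('Qpi,Q->pi', B[:, :, :no], t)     # scales as O(N_aux * N * O)

# Reference: assemble (pq|rs) ~ sum_Q B^Q_pq B^Q_rs and contract directly
eri = np.einsum('Qpq,Qrs->pqrs', B, B)
y_ref = np.einsum('pijb,jb->pi', eri[:, :no, :no, no:], r)
assert np.allclose(y, y_ref)
```

The intermediate `t` is what breaks the five-index contraction into two four-index ones; the four-index `eri` is built here only for verification and would never be formed in practice.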

Although we will not show results here, we briefly describe how spectral quantities can also be obtained iteratively with identical scalings. Using a spectral resolution of **H**, the full Green's function is given by $G_{pq}(\omega) = \sum_n r^{n}_{p} r^{n}_{q} / (\omega - E_n)$, i.e., the quasiparticle weight is given simply in terms of the 1p + 1h elements of the solution vector **R**; this formulation naturally obviates the need for the common diagonal approximation Σ_{pq}(*ω*) ≈ *δ*_{pq}Σ_{pp}(*ω*). The matrix **H** can also be used iteratively (without diagonalization) to calculate the frequency-dependent self-energy **Σ**(*ω*) = **V**^{2h1p}**Z**^{2h1p}(*ω*) + **V**^{2p1h}**Z**^{2p1h}(*ω*), where **Z**(*ω*) is the matrix that solves the linear systems of equations, e.g., $[\omega \mathbf{1}^{2\mathrm{h}1\mathrm{p}} - \mathbf{C}^{2\mathrm{h}1\mathrm{p}}]\, \mathbf{Z}^{2\mathrm{h}1\mathrm{p}}(\omega) = [\mathbf{V}^{2\mathrm{h}1\mathrm{p}}]^{\dagger}$, which can be solved with iterative methods such as conjugate gradient or the generalized minimum residual method. Similarly, the Green's function can be calculated as **G**(*ω*) = **PZ**(*ω*), where **Z**(*ω*) solves $[\omega \mathbf{1} - \mathbf{H}]\, \mathbf{Z}(\omega) = \mathbf{P}^{\dagger}$ and **P** is the matrix that projects onto the 1p + 1h space.
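As an illustration of this iterative linear-solver route to **G**(*ω*), the following sketch obtains one column of **Z**(*ω*) with SciPy's GMRES through a matrix-free `LinearOperator`, so that only the matrix-vector product of **H** is ever required. The dense random stand-in for **H**, the broadening, and the tolerances are assumptions for demonstration:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 40
H = rng.normal(size=(n, n))
H = 0.5 * (H + H.T)                     # symmetric stand-in for the super-matrix
w = 0.3 + 0.5j                          # frequency with a finite broadening

# Matrix-free application of (w*1 - H); only the matvec of H is needed
A = LinearOperator((n, n), matvec=lambda x: w*x - H @ x, dtype=complex)
rhs = np.zeros(n, dtype=complex)
rhs[0] = 1.0                            # one column of P^dagger (a 1h/1p index)
z, info = gmres(A, rhs, atol=1e-10)
assert info == 0                        # converged

G00 = rhs @ z                           # a matrix element of G(w) = P Z(w)
G00_ref = np.linalg.inv(w*np.eye(n) - H)[0, 0]
assert abs(G00 - G00_ref) < 1e-3
```

For real calculations, the matvec would be the same *O*(*N*^{5}) or density-fitted *O*(*N*^{4}) routine used for Davidson diagonalization, and the dense inverse used here for verification would never be formed.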

We have implemented the GW techniques described above in the PySCF software package.^{24,25} To compare their costs and verify their asymptotic scaling, we have calculated the first IP of a series of linear alkanes in the def2-SVP basis^{26} up to C_{37}H_{76}, which has 898 basis functions. The execution timings of the *O*(*N*^{6}) sum-over-states, *O*(*N*^{5}) frequency-free, and *O*(*N*^{4}) density-fitted frequency-free implementations are shown in Fig. 2; all calculations were performed on a single core of an Intel Xeon Gold 6126 2.6 GHz (Skylake) central processing unit (CPU), and density-fitted calculations used the def2-SVP-JKFIT auxiliary basis set.^{34} As can be seen, all methods exhibit the expected asymptotic scaling. Comparing the absolute execution times of the sum-over-states and density-fitted frequency-free implementations, we obtained a speed-up of four orders of magnitude for the C_{10}H_{22} calculation. For our largest system with almost one thousand basis functions, the density-fitted implementation required only two hours on a single core, demonstrating the immense savings available with the advances described here. The use of density fitting was found to introduce a negligible error of around 0.01 eV.

For a rough comparison with the conventional GW approaches that achieve *O*(*N*^{4}) scaling, we have calculated the first IP of C_{25}H_{52} (610 basis functions), using analytic continuation and contour deformation, as implemented in PySCF.^{16} These two methods required 31 min and 64 min, respectively (with 100 grid points and 50 Padé approximants); the total execution time of our density-fitted matrix-vector implementation was 13 min. A precise comparison among the three methods is complicated because execution times depend on the number of grid points, the number of Padé approximants, and the number of poles that are calculated. In any case, our new implementation that exactly treats the frequency integration is clearly competitive with existing approaches that approximate the frequency integration.

One challenge in using iterative eigensolvers on **H** is that the eigenvalues of typical interest (valence ionization potentials and electron affinities) are interior eigenvalues. Therefore, they must be found using energy-targeting methods such as shift-and-invert or methods that maximize the eigenvector overlap with a given guess vector. We have found the latter to work well, in conjunction with Davidson diagonalization,^{35,36} for valence IPs and EAs. However, two simple alternatives exist that introduce additional approximations.

In a first approach, one can make the diagonal approximation to the Green’s function and the self-energy, seeking the self-consistent solution *E*_{n} of the algebraic equation *f*_{pp} + Σ_{pp}(*ω* = *E*_{n}) = *E*_{n}. This can be solved by iterative diagonalization of a modified matrix **H**^{(p)}, which has deleted all 1p + 1h rows and columns except that of orbital *p*. Although the principal eigenvalue of interest is still an interior eigenvalue, this approach eliminates all other quasiparticle energies, which can facilitate energy- or overlap-targeting procedures. In a second approach, one can perturbatively decouple the IP and EA parts of **H**, which will make the valence IPs and EAs into extremal eigenvalues. For example, for the calculation of IPs, we perturbatively eliminate the 1p and 2p1h subspaces based on their lowest-order influence on the 1h subspace and likewise for EAs. This leads to modified Fock matrix elements,

which are used in place of *f*_{ii} and *f*_{aa} during the matrix-vector product. This approach has the added benefit of reducing the size of the vector spaces for the IP and EA problems, *R*^{IP} = (*r*_{i}, *r*_{i[jb]}) and *R*^{EA} = (*r*_{a}, *r*_{[jb]a}), and of reducing the scaling of the IP matrix-vector product to *O*(*O*^{3}*V*^{2}). Finally, we point out that the valence IPs can be made into the *lowest* eigenvalues by negating the matrix, **H**^{IP} → −**H**^{IP}. With all these changes, the calculation of IPs and EAs within the GW approximation looks quite similar to that within the IP/EA-EOM-CCSD approximation.^{37,38} The decoupling of the IP and EA spaces is also common in the ADC literature, where it is referred to as a non-Dyson approach.^{39} We will use the same language and evaluate the performance of the Dyson (coupled IP and EA) and non-Dyson (perturbatively decoupled IP and EA) GW approximations.
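For concreteness, the diagonal quasiparticle condition of the first approach above, *f*_{pp} + Σ_{pp}(*E*_{n}) = *E*_{n}, can be sketched as a simple fixed-point iteration. The single-pole model self-energy and its parameters below are hypothetical, chosen only so the iteration is a contraction; real calculations would evaluate Σ_{pp} through the (downfolded) super-matrix **H**^{(p)}:

```python
# Fixed-point solution of the diagonal quasiparticle equation
#   E = f_pp + Sigma_pp(E)
# with a toy one-pole self-energy (hypothetical parameters).
f_pp = -0.5

def sigma_pp(w, v2=0.04, pole=-1.2):
    # single-pole model: weight v2, pole position 'pole'
    return v2 / (w - pole)

E = f_pp                        # start from the mean-field value
for _ in range(200):
    E_new = f_pp + sigma_pp(E)
    if abs(E_new - E) < 1e-12:  # self-consistency reached
        break
    E = E_new

assert abs(E - f_pp - sigma_pp(E)) < 1e-10
```

When |dΣ_{pp}/dω| ≥ 1 near the solution, such plain fixed-point iteration can fail, which is one practical motivation for solving the eigenvalue problem of **H**^{(p)} instead.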

To assess the effect of the diagonal approximation and perturbative decoupling, we used our frequency-free *O*(*N*^{5}) GW implementation to calculate the first IP of all molecules in the GW100 test set.^{6} In Fig. 3, we compare the non-diagonal, diagonal, and non-diagonal perturbatively decoupled (“non-Dyson”) GW results among themselves and to ΔCCSD(T) results,^{27} using a HF reference. As shown in Fig. 1, the GW results have a mean absolute error of 0.24 eV with respect to CCSD(T). On average, the diagonal approximation has a negligible effect (less than 0.1 eV), although a maximum deviation of 0.65 eV is observed, indicating the potential importance of off-diagonal elements of the self-energy for some molecules; these results are in agreement with previous studies of off-diagonal contributions to the GW self-energy.^{40} In contrast, perturbative decoupling (the non-Dyson GW approximation) changes the results by 0.41 eV on average (and by as much as 2.44 eV) and increases the mean absolute error from 0.24 eV to 0.51 eV, suggesting that it is a relatively severe approximation. We have performed the same analysis (not shown) for the EA, as well as for a PBE reference; all results are qualitatively similar.

To summarize, we have shown that the typical Dyson equation formulation of the GW approximation can be exactly reformulated as a frequency-independent eigenvalue problem in an expanded space. In addition to providing a new conceptual framework for the GW approximation and related Green’s function theories, the new formulation was used to reduce the computational scaling from *O*(*N*^{6}) to *O*(*N*^{4}). Based on our preliminary results, we expect that this frequency-free formulation of the GW approximation will be readily applicable to systems with hundreds or thousands of atoms, likely limited by the memory needed to store three-index quantities.

We anticipate that this eigenvalue formulation of the GW approximation will lead to new methodological developments inspired by quantum chemical methods with a similar structure. For example, we are exploring the use of partitioning schemes to mitigate the cost of large basis sets,^{41–43} the introduction of vertex corrections through the algebraic diagrammatic construction,^{29,30} the use of renormalization and compression to perform self-consistent GW,^{32} and the extension toward strongly correlated systems with multi-reference techniques.^{44,45} The above-mentioned directions can be combined with the Galitski–Migdal formula to obtain ground-state energies, which overlaps with previous works that eliminate a frequency integration in the RPA correlation energy.^{46} Finally, the ideas presented in this work and these latter extensions can be applied to treat the frequency dependence of the Bethe–Salpeter equation for neutral excitations.^{47,48}

## SUPPLEMENTARY MATERIAL

See the supplementary material for a short proof of the equivalence between the self-energy form and the super-matrix form of the GW approximation, for spin-free matrix-vector product equations and for the derivation of the frequency-free GW approximation with RPA screening.

## ACKNOWLEDGMENTS

This work was supported, in part, by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1644869 (S.J.B.) and by the National Science Foundation under Grant No. CHE-1848369 (T.C.B.). We acknowledge computing resources from Columbia University’s Shared Research Computing Facility project, which was supported by the NIH Research Facility Improvement Program (Grant No. 1G20RR030893-01), and associated funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR), Contract No. C090171, both awarded on April 15, 2010. Flatiron Institute is a division of the Simons Foundation.

## DATA AVAILABILITY

The data that support the findings of this study are available from the corresponding author upon reasonable request.
