CP2K is an open source electronic structure and molecular dynamics software package to perform atomistic simulations of solid-state, liquid, molecular, and biological systems. It is especially aimed at massively parallel and linear-scaling electronic structure methods and state-of-the-art ab initio molecular dynamics simulations. Excellent performance for electronic structure calculations is achieved using novel algorithms implemented for modern high-performance computing systems. This review revisits the main capabilities of CP2K to perform efficient and accurate electronic structure simulations. The emphasis is put on density functional theory and multiple post–Hartree–Fock methods using the Gaussian and plane wave approach and its augmented all-electron extension.

The geometric increase in the performance of computers over the last few decades, together with advances in theoretical methods and applied mathematics, has established computational science as an indispensable tool in chemistry, physics, and the life and materials sciences. In fact, computer simulations have been very successful in explaining a large variety of new scientific phenomena, interpreting experimental measurements, predicting materials properties, and even rationally designing new systems. Conducting experiments in silico thus permits the atomistic investigation of systems that would otherwise be too difficult, expensive, or simply impossible to study. The by far most rewarding outcome of such simulations, however, is the invaluable insight they provide into atomistic behavior and dynamics. In this sense, electronic structure theory based ab initio molecular dynamics (AIMD) can be thought of as a computational microscope.1–3 

The open source electronic structure and molecular dynamics (MD) software package CP2K aims to provide a broad range of computational methods and simulation approaches suitable for extended condensed-phase systems. This is made possible by combining efficient algorithms with excellent parallel scalability to exploit modern high-performance computing architectures. Beyond enabling efficient large-scale AIMD simulations, however, CP2K provides a much broader range of capabilities, which includes the possibility of choosing the most adequate approach for a given problem and the flexibility of combining computational methods.

The remainder of this paper is organized as follows: The Gaussian and plane wave (GPW) approach to density functional theory (DFT) is reviewed in Sec. II, before Hartree–Fock and beyond-Hartree–Fock methods are covered in Secs. III and IV, respectively. Thereafter, constrained DFT (CDFT), density functional perturbation theory (DFPT), and time-dependent DFT (TD-DFT) are described in Secs. V, VI, and VII, respectively. Sections VIII and XIII are devoted to low-scaling eigenvalue solvers based on sparse matrix linear algebra using the distributed block compressed sparse row (DBCSR) library. Conventional orthogonal localized orbitals, non-orthogonal localized molecular orbitals (NLMOs), absolutely localized molecular orbitals (ALMOs), and compact localized molecular orbitals (CLMOs) are discussed in Sec. IX to facilitate linear-scaling AIMD, whose key concepts are detailed in Sec. X. Energy decomposition and spectroscopic analysis methods are presented in Sec. XI, followed by various embedding techniques, which are summarized in Sec. XII. Interfaces to other programs and technical aspects of CP2K are specified in Secs. XIV and XV, respectively.

The electronic structure module Quickstep4,5 in CP2K can handle a wide spectrum of methods and approaches. Semi-empirical (SE) and tight-binding (TB) methods, orbital-free and Kohn–Sham DFT (KS-DFT), and wavefunction-based correlation methods [e.g., second-order Møller–Plesset perturbation theory (MP2), the direct random phase approximation (dRPA), and GW] all make use of the same infrastructure of integral routines and optimization algorithms. In this section, we give a brief overview of the methodology that sets CP2K apart from most other electronic structure programs, namely, its use of a plane wave (PW) auxiliary basis set within a Gaussian orbital scheme. Like many other programs, CP2K uses contracted Gaussian basis sets to expand the orbital functions

\varphi(\mathbf{r}) = \sum_u d_u\, g_u(\mathbf{r}),
(1)

where the contraction coefficients $d_u$ are fixed and the primitive Gaussians

g(\mathbf{r}) = |\mathbf{r}-\mathbf{A}|^{l}\, \exp\left[-\alpha\,(\mathbf{r}-\mathbf{A})^2\right] Y_{lm}(\mathbf{r}-\mathbf{A})
(2)

are centered at the atomic positions. These functions are defined by the exponent $\alpha$, the spherical harmonic $Y_{lm}$ with angular momentum quantum numbers (l, m), and the coordinates of their center $\mathbf{A}$. The unique properties of Gaussians, e.g., analytic integration or the product theorem, are exploited in many programs. In CP2K, we make use of an additional property of Gaussians, namely, that their Fourier transform is again a Gaussian function, i.e.,

\int \exp[-\alpha r^2]\, \exp[-i\,\mathbf{G}\cdot\mathbf{r}]\, d\mathbf{r} = \left(\frac{\pi}{\alpha}\right)^{3/2} \exp\left[-\frac{G^2}{4\alpha}\right].
(3)

This property is directly connected with the fact that integration of Gaussian functions on equidistant grids shows exponential convergence with grid spacing. In order to take advantage of this property, we define, within a computational box or periodic unit cell, a set of equidistant grid points

\mathbf{R} = h N \mathbf{q}.
(4)

The three vectors $\mathbf{a}_1$, $\mathbf{a}_2$, and $\mathbf{a}_3$ define a computational box with the 3 × 3 matrix $h = [\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3]$ and the volume $\Omega = \det h$. Furthermore, $N$ is a diagonal matrix with entries $1/N_s$, where $N_s$ is the number of grid points along lattice vector $s = 1, 2, 3$, whereas $\mathbf{q}$ is a vector of integers ranging from 0 to $N_s - 1$. Reciprocal lattice vectors $\mathbf{b}_i$, defined by $\mathbf{b}_i \cdot \mathbf{a}_j = 2\pi\delta_{ij}$, can also be arranged into a 3 × 3 matrix $[\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3] = 2\pi(h^{t})^{-1}$, which allows us to define the reciprocal space vectors

\mathbf{G} = 2\pi\,(h^{t})^{-1}\,\mathbf{g},
(5)

where g is a vector of integer values. Any function with periodicity given by the lattice vectors and defined on the real-space points R can be transformed into a reciprocal space representation by the Fourier transform

f(\mathbf{G}) = \sum_{\mathbf{R}} f(\mathbf{R}) \exp[-i\,\mathbf{G}\cdot\mathbf{R}].
(6)

The accuracy of this expansion is determined by the grid spacings or, equivalently, by the PW cutoff defining the largest vector $\mathbf{G}$ included in the sum.
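
The claimed exponential convergence of Gaussian integration on equidistant grids is easy to verify numerically; the following sketch (with an arbitrary exponent and box length) compares the grid sum against the analytic value $\sqrt{\pi/\alpha}$:

```python
import numpy as np

# Integrate exp(-alpha*x^2) on equidistant 1D grids of decreasing spacing.
# For a Gaussian, the simple grid sum converges exponentially with the
# grid spacing, far faster than any power law.
alpha, L = 1.0, 20.0                   # arbitrary exponent and box length
exact = np.sqrt(np.pi / alpha)         # analytic integral over all space

errors = []
for N in (8, 16, 32):
    x = -L / 2 + L * np.arange(N) / N  # equidistant grid points in the box
    approx = (L / N) * np.exp(-alpha * x**2).sum()
    errors.append(abs(approx - exact))

print(errors)  # each refinement gains many orders of magnitude
```

Doubling the number of grid points squares (roughly) the accuracy, which is the behavior exploited when collocating Gaussians on the real-space grid.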

In the GPW method,6 the equidistant grid, or equivalently the PW expansion within the computational box, is used for an alternative representation of the electron density. In the KS method, the electron density is defined by

n(\mathbf{r}) = \sum_{\mu\nu} P_{\mu\nu}\, \varphi_\mu(\mathbf{r})\, \varphi_\nu(\mathbf{r}),
(7)

where the density matrix P, with elements $P_{\mu\nu} = \sum_i f_i\, c_{\mu i}\, c_{\nu i}$, is calculated from the orbital occupations $f_i$ and the orbital expansion coefficients $c_{\mu i}$ of the common linear combination of atomic orbitals $\Phi_i(\mathbf{r}) = \sum_\mu c_{\mu i}\, \varphi_\mu(\mathbf{r})$. Therein, $\Phi_i(\mathbf{r})$ are the so-called molecular orbitals (MOs) and $\varphi_\mu(\mathbf{r})$ are the atomic orbitals (AOs). In the PW expansion, however, the density is given by

n(\mathbf{r}) = \sum_{\mathbf{G}} n(\mathbf{G}) \exp[i\,\mathbf{G}\cdot\mathbf{r}].
(8)

The definitions given above allow us to calculate the expansion coefficients $n(\mathbf{G})$ from the density matrix $P_{\mu\nu}$ and the basis functions $\varphi_\mu(\mathbf{r})$. The dual representation of the density is used in the definition of the GPW-based KS energy expression (see Sec. II A) to facilitate efficient and accurate algorithms for the electrostatic as well as the exchange and correlation energies. The efficient mapping $P_{\mu\nu} \rightarrow n(\mathbf{G})$ is achieved by using multigrid methods, optimal screening in real space, and the separability of Cartesian Gaussian functions in orthogonal coordinates. Details of these algorithms, which result in a linear-scaling mapping with a small prefactor, are described elsewhere.4–6 
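
The dual representation can be illustrated in a few lines: collocate the density from a (made-up) density matrix and basis on an equidistant grid, then obtain the coefficients n(G) by FFT. A minimal 1D NumPy sketch:

```python
import numpy as np

# Hypothetical 1D setup: two Gaussian basis functions in a periodic box.
N, L = 64, 10.0
x = L * np.arange(N) / N
phi = np.array([np.exp(-1.0 * (x - 4.0)**2),
                np.exp(-0.8 * (x - 6.0)**2)])   # basis functions on the grid
P = np.array([[1.0, 0.2],
              [0.2, 0.5]])                      # symmetric (PSD) density matrix

# Collocation P_munu -> n(R): n(R) = sum_munu P_munu phi_mu(R) phi_nu(R)
nR = np.einsum('mn,mx,nx->x', P, phi, phi)

# Fourier transform yields the plane wave coefficients n(G)
nG = np.fft.fft(nR) / N

# The back-transform recovers the real-space density on the grid
assert np.allclose(np.fft.ifft(nG) * N, nR)
```

The production code replaces the naive einsum with screened, multigrid-based collocation, but the two grid representations are related exactly as above.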

The electronic KS energy functional7 for a molecular or crystalline system within the supercell or Γ-point approximation in the GPW framework4–6 is defined as

E[P] = E_{kin}[P] + E_{ext}[P] + E_{ES}[n(\mathbf{G})] + E_{XC}[n(\mathbf{G})],
(9)

where $E_{kin}$ is the kinetic energy, $E_{ext}$ is the electronic interaction with the ionic cores (see Sec. II B), $E_{ES}$ is the total electrostatic (Coulomb) energy, and $E_{XC}$ is the exchange-correlation (XC) energy. An extension of Eq. (9) to include k-point sampling within the first Brillouin zone is also available in CP2K. This implementation follows the methods of Pisani,8 Kudin,9 and co-workers. The electrostatic energy is calculated using an Ewald method.10 A total charge density is defined by adding Gaussian charge distributions of the form

n_A(\mathbf{r}) = -\frac{Z_A}{(R_A^c)^3\, \pi^{3/2}} \exp\left[-\left(\frac{|\mathbf{r} - \mathbf{R}_A|}{R_A^c}\right)^2\right]
(10)

to the electronic charge distribution $n(\mathbf{r})$. Therewith, Eq. (9) can be rewritten as

E[P] = \sum_{\mu\nu} P_{\mu\nu} \left\langle \varphi_\mu(\mathbf{r}) \left| -\frac{1}{2}\nabla^2 + V^{ext}(\mathbf{r}) \right| \varphi_\nu(\mathbf{r}) \right\rangle + 2\pi\Omega \sum_{\mathbf{G}} \frac{|n^{tot}(\mathbf{G})|^2}{G^2} + E_{ovrl} - E_{self} + \frac{\Omega}{N_1 N_2 N_3} \sum_{\mathbf{R}} F^{XC}[n(\mathbf{G})](\mathbf{R}),
(11)

where the self and overlap terms

E_{self} = \sum_A \frac{1}{\sqrt{2\pi}}\, \frac{Z_A^2}{R_A^c},
(12a)
E_{ovrl} = \sum_{A,B} \frac{Z_A Z_B}{|\mathbf{R}_A - \mathbf{R}_B|}\, \mathrm{erfc}\left[\frac{|\mathbf{R}_A - \mathbf{R}_B|}{\sqrt{(R_A^c)^2 + (R_B^c)^2}}\right],
(12b)

as generated by these Gaussian distributions, have to be compensated. The double sum for Eovrl runs over unique atom pairs and has to be extended, if necessary, beyond the minimum image convention. The electrostatic term also includes an interaction of the compensation charge with the electronic charge density that has to be subtracted from the external potential energy. The correction potential is of the form

V_{core}^A(\mathbf{r}) = \int \frac{n_A(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d\mathbf{r}' = -\frac{Z_A}{|\mathbf{r}-\mathbf{R}_A|}\, \mathrm{erf}\left[\frac{|\mathbf{r}-\mathbf{R}_A|}{R_A^c}\right].
(13)
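
To make the roles of the two correction terms concrete, the following sketch evaluates Eqs. (12a) and (12b) for a hypothetical two-atom system (all charges and Gaussian widths are made up); note how the overlap correction vanishes for well-separated pairs:

```python
import numpy as np
from scipy.special import erfc

def e_self(Z, Rc):
    # self-interaction of a single Gaussian compensation charge, Eq. (12a)
    return Z**2 / (np.sqrt(2.0 * np.pi) * Rc)

def e_ovrl(ZA, ZB, d, RcA, RcB):
    # pair correction for overlapping Gaussian charges, Eq. (12b)
    return ZA * ZB / d * erfc(d / np.sqrt(RcA**2 + RcB**2))

# hypothetical two-atom system (charges in a.u., distances in bohr)
ZA, ZB, RcA, RcB = 4.0, 6.0, 0.8, 0.7
print(e_self(ZA, RcA) + e_self(ZB, RcB))   # always positive, subtracted once
print(e_ovrl(ZA, ZB, 1.5, RcA, RcB))       # significant at short range
print(e_ovrl(ZA, ZB, 15.0, RcA, RcB))      # negligible for distant pairs
```

The complementary error function decays faster than exponentially, which is why the double sum over pairs can be truncated safely beyond a modest distance.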

The treatment of the electrostatic energy terms and the XC energy is the same as in PW codes.10 This means that the same methods that are used in those codes to adapt Eq. (11) for cluster boundary conditions can be used here. This includes analytic Green’s function methods,10 the method of Eastwood and Brownrigg,11 the methods of Martyna and Tuckerman,12,13 and wavelet approaches.14,15 Starting from n(R), using Fourier transformations, we can calculate the combined potential for the electrostatic and XC energies. This includes the calculation of the gradient of the charge density needed for generalized gradient approximation (GGA) functionals. For non-local van der Waals functionals,16 we use the Fourier transform based algorithm of Román-Pérez and Soler.17 The combined potential calculated on the grid points VXC(R) is then the starting point to calculate the KS matrix elements

H^{XC}_{\mu\nu} = \frac{\Omega}{N_1 N_2 N_3} \sum_{\mathbf{R}} V^{XC}(\mathbf{R})\, \varphi_\mu(\mathbf{R})\, \varphi_\nu(\mathbf{R}).
(14)

This inverse mapping $V^{XC}(\mathbf{R}) \rightarrow H^{XC}_{\mu\nu}$ uses the same methods as the mapping of the charge density, and again linear scaling with a small prefactor is achieved. Furthermore, a large variety of different techniques to solve the eigenvalue problem for the now complete KS Hamiltonian matrix, some of which are linear scaling too, are described in detail in Sec. VIII.

A consistent calculation of nuclear forces within the GPW method can be done easily. Pulay terms, i.e., terms arising from the dependence of the basis functions on the nuclear positions, require the integration of potentials given on the real-space grid with derivatives of the basis functions.18 The basic routines work with Cartesian Gaussian functions, whose derivatives are again functions of the same type, so the same routines can be used. Similarly, for XC functionals including the kinetic energy density, we can calculate $\tau(\mathbf{r})$ and the corresponding potential and matrix representation using the same mapping functions.

For the internal stress tensor

\Pi_{uv} = -\frac{1}{\Omega} \sum_{s} \frac{\partial E[P]}{\partial h_{us}}\, h^{t}_{sv},
(15)

we again use the Fourier transform framework for the $E_{XC}$ terms and the simple virial for all pair forces. This applies to all Pulay terms; only for the XC energy contributions from GGA functionals is special care needed, due to the cell-dependent grid integration.19–21 

The accuracy of the PW expansion of the electron density in the GPW method is controlled by the cutoff value $E_{cut}$ restricting the maximal allowed value of $|\mathbf{G}|^2$. In the case of Gaussian basis sets, the cutoff needed to reach a given accuracy is proportional to the largest exponent. As can easily be seen by inspecting common Gaussian basis sets for elements of different rows of the periodic table, the value of the largest exponent rapidly increases with atomic number, and the prefactor of GPW calculations increases accordingly. In order to avoid this, we can either resort to a pseudopotential description of the inner shells or use a dual representation as described in Blöchl’s projector augmented-wave (PAW) method.22 The pseudopotentials used together with PW basis sets are constructed to generate nodeless atomic valence functions. Fully non-local forms are computationally more efficient and easier to implement. Dual-space pseudopotentials are of this form and, because of their analytic form consisting of Gaussian functions, are easily applied together with Gaussian basis sets.23–25 

The pseudopotentials are given in real space by a local part

V^{pp}_{loc}(r) = -\frac{Z_{ion}}{r}\, \mathrm{erf}\left(\alpha^{pp} r\right) + \sum_{i=1}^{4} C_i^{pp} \left(\sqrt{2}\, \alpha^{pp} r\right)^{2i-2} \exp\left[-\left(\alpha^{pp} r\right)^2\right], \quad \text{with} \quad \alpha^{pp} = \frac{1}{\sqrt{2}\, r^{pp}_{loc}},
(16)

and a non-local part

V^{pp}_{nl}(\mathbf{r}, \mathbf{r}') = \sum_{lm} \sum_{ij} \langle \mathbf{r} | p_i^{lm} \rangle\, h_{ij}^{l}\, \langle p_j^{lm} | \mathbf{r}' \rangle,
(17)

where

\langle \mathbf{r} | p_i^{lm} \rangle = N_i^{l}\, Y^{lm}(\hat{\mathbf{r}})\, r^{l+2i-2} \exp\left[-\frac{1}{2}\left(\frac{r}{r_l}\right)^2\right]
(18)

are Gaussian-type projectors resulting in a fully analytical formulation that requires only the definition of a small parameter set for each element. Moreover, the pseudopotentials are transferable and norm-conserving. The pseudopotential parameters are optimized with respect to the atomic all-electron wavefunction obtained from relativistic density functional calculations using a numerical atom code, which is also part of CP2K. A database with many pseudopotential parameter sets optimized for different XC potentials is available together with the distribution of the CP2K program.
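
The analytic form above is straightforward to evaluate; the sketch below (with hypothetical parameters, not an actual set from the CP2K pseudopotential database) checks that the local part of Eq. (16) reduces to the bare Coulomb tail $-Z_{ion}/r$ at large distance:

```python
import numpy as np
from scipy.special import erf

def v_loc_pp(r, z_ion, r_loc, c):
    """Local pseudopotential of Eq. (16); c holds up to four C_i coefficients."""
    alpha = 1.0 / (np.sqrt(2.0) * r_loc)
    v = -z_ion / r * erf(alpha * r)
    for i, ci in enumerate(c, start=1):
        v += ci * (np.sqrt(2.0) * alpha * r)**(2 * i - 2) * np.exp(-(alpha * r)**2)
    return v

# hypothetical parameter set (illustration only)
z_ion, r_loc, c = 4.0, 0.33, [-7.0, 1.0]

# at r = 10 bohr the Gaussian short-range terms have died out and only
# the -Z_ion/r tail survives
r = 10.0
assert abs(v_loc_pp(r, z_ion, r_loc, c) + z_ion / r) < 1e-12
```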

The use of pseudopotentials in the GPW method also requires correspondingly adapted basis sets. In principle, the same strategies that have been used to generate molecular or solid-state Gaussian basis sets could be applied. It is always possible to generate specific basis sets for a type of application, e.g., for metals or molecular crystals, but for ease of use, a general basis set is desirable. Such a general basis set should fulfill the following requirements: high accuracy already for smaller basis sets and a route for systematic improvement. One and the same basis set should perform well in various environments, from isolated molecules to condensed phase systems. Ideally, the basis sets should lead to well-conditioned overlap matrices for all systems and therefore be well suited for linear-scaling algorithms. To fulfill all of the above requirements, generally contracted basis sets with shared exponents for all angular momentum states were proposed.26 In particular, a full contraction over all primitive functions is used for both valence and polarization functions. The set of primitive functions includes diffuse functions with small exponents that are mandatory for the description of weak interactions. However, in contrast to the practice used in augmented basis sets, these primitive functions are always part of a contraction with tighter functions. Basis sets of this type were generated according to a recipe that includes global optimization of all parameters with respect to the total energy of a small set of reference molecules.26 The target function was augmented with a penalty function that includes the condition number of the overlap matrix. These basis sets of type molopt have been created for the first two rows of the periodic table. They show good performance for molecular systems, liquids, and dense solids. The results for the delta test are shown in Fig. 18. The basis sets have seven primitive functions with a smallest exponent of ≈0.047 for oxygen. 
This is very similar to the smallest exponent found in augmented basis sets of the correlation consistent type, e.g., ≈0.060 for aug-cc-pVTZ. The performance of the grid mapping routines depends strongly on this most diffuse function in the basis sets. We have therefore optimized a second set of basis sets of the molopt type, where the number of primitives has been reduced (e.g., 5 for first row elements), which also leads to less diffuse functions. The smallest exponent for oxygen in this basis set is now ≈0.162. These basis sets still show good performance in many different environments and are especially suited for all types of condensed matter systems (e.g., see the delta test results in Sec. XV D). The reduction of Gaussian primitives and the removal of very diffuse functions lead to a 10-fold time reduction for the mapping routines for liquid water calculations using default settings.
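
The connection between diffuse primitives and overlap conditioning can be illustrated for normalized same-center s-type Gaussians, whose mutual overlap is $S_{ij} = (2\sqrt{a_i a_j}/(a_i + a_j))^{3/2}$; the exponent sets below are illustrative stand-ins, not the actual molopt exponents:

```python
import numpy as np

def overlap_condition(exponents):
    """Condition number of the overlap of normalized same-center s-Gaussians."""
    a = np.asarray(exponents, dtype=float)
    S = (2.0 * np.sqrt(np.outer(a, a)) / np.add.outer(a, a)) ** 1.5
    return np.linalg.cond(S)

# illustrative sets: a smaller set, and the same set extended by a very
# diffuse (0.047) and a very tight (60.0) primitive
compact = [0.162, 0.5, 1.7, 5.6, 18.0]
extended = [0.047] + compact + [60.0]
print(overlap_condition(extended), overlap_condition(compact))
```

Adding primitives, in particular diffuse ones, can only worsen the conditioning (eigenvalue interlacing), which is one motivation for the reduced molopt variants.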

Within the GPW approach, the mapping of the electronic density onto the grid is often the time-determining step. With appropriate screening, this is a linear-scaling step with a prefactor determined by the number of overlapping basis functions. Especially in condensed phase systems, the number of atom pairs that have to be included can be very large. For such cases, it can be beneficial to add an intermediate step to the density mapping, in which the charge density is approximated by an auxiliary Gaussian basis. The expansion coefficients are determined using the local density fitting approach of Baerends et al.,27 who introduced a local metric where the electron density is decomposed into pair-atomic densities, which are in turn approximated as linear combinations of auxiliary functions localized at atoms A and B. The expansion coefficients are obtained by employing an overlap metric. This local resolution-of-the-identity (LRI) method combined with the GPW method is available in CP2K as the LRIGPW approach.28 For details on how to set up such a calculation, see Ref. 29.

The atomic pair densities $n_{AB}$ are approximated by an expansion in a set of Gaussian fit functions $f(\mathbf{r})$ centered at atoms A and B, respectively. The expansion coefficients are obtained for each pair AB by fitting the exact density while keeping the number of electrons fixed. This leads to a set of linear equations that can be solved easily. The total density is then represented by the sum of the coefficients of all pair expansions on the individual atoms. The total density is thus represented as a sum over the number of atoms, whereas in the GPW method, we have a sum over pairs. In the case of 64 water molecules in a periodic box, this means that the fitted density is mapped onto the grid by 192 atom terms rather than ≈200 000 atom pairs.
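
The charge-conserving fit described above amounts to a small constrained least-squares (KKT) system; a 1D sketch with made-up Gaussians for the pair density and the fit functions:

```python
import numpy as np

# Dense 1D grid standing in for the integration measure
x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]

# Hypothetical pair density: product of Gaussians on atoms A and B
n_ab = np.exp(-1.3 * (x + 0.5)**2) * np.exp(-0.9 * (x - 0.5)**2)

# Fit functions: primitive Gaussians centered on A and B (made-up exponents)
f = np.array([np.exp(-a * (x - c)**2)
              for c in (-0.5, 0.5) for a in (0.5, 1.5, 4.0)])

S = f @ f.T * dx            # overlap metric between fit functions
b = f @ n_ab * dx           # projections of the exact pair density
q = f.sum(axis=1) * dx      # charge carried by each fit function
Q = n_ab.sum() * dx         # total charge to be conserved

# KKT system: least-squares fit with a Lagrange multiplier for the charge
K = np.block([[S, q[:, None]], [q[None, :], np.zeros((1, 1))]])
coeff = np.linalg.solve(K, np.append(b, Q))[:-1]

fitted = coeff @ f
assert abs(fitted.sum() * dx - Q) < 1e-10   # electron number is conserved
```

One such small system is solved per atom pair, which is why the fit itself is cheap compared to mapping every pair density onto the grid.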

LRIGPW requires the calculation of two- and three-index overlap integrals, which is computationally demanding for large auxiliary basis sets. To increase the efficiency of the LRIGPW implementation, we developed an integral scheme based on solid harmonic Gaussian functions,30 which is superior to the widely used Cartesian Gaussian-based methods.

An additional increase in efficiency can be achieved by recognizing that most of the electron density is covered by a very small number of atom pair densities. The large number of more distant pairs can be approximated by an expansion on a single atom. By using a distance criterion and a switching function, a method with a smooth potential energy surface is obtained. The single-atom expansion reduces memory requirements and computational time considerably. In the above water box example, about 99% of the pairs can be treated as distant pairs.

An alternative to pseudopotentials, or a method to allow for smaller cutoffs in pseudopotential calculations, is provided by the Gaussian-augmented plane wave (GAPW) approach.31,32 The GAPW method uses a dual representation of the electronic density, where the usual expansion of the density in terms of the density matrix $P_{\alpha\beta}$ is replaced, in the calculation of the Coulomb and XC energies, by

n(\mathbf{r}) = \tilde{n}(\mathbf{r}) + \sum_A n_A(\mathbf{r}) - \sum_A \tilde{n}_A(\mathbf{r}).
(19)

The densities ñ(r), nA(r), and ñA(r) are expanded in plane waves and products of primitive Gaussians centered on atom A, respectively, i.e.,

\tilde{n}(\mathbf{r}) = \frac{1}{\Omega} \sum_{\mathbf{G}} \tilde{n}(\mathbf{G})\, e^{i\mathbf{G}\cdot\mathbf{r}},
(20a)
n_A(\mathbf{r}) = \sum_{mn \in A} P^{A}_{mn}\, g_m(\mathbf{r})\, g_n(\mathbf{r}),
(20b)
\tilde{n}_A(\mathbf{r}) = \sum_{mn \in A} \tilde{P}^{A}_{mn}\, g_m(\mathbf{r})\, g_n(\mathbf{r}).
(20c)

In Eq. (20a), $\tilde{n}(\mathbf{G})$ are the Fourier coefficients of the soft density, as obtained in the GPW method by keeping in the expansion of the contracted Gaussians only those primitives with exponents smaller than a given threshold. The expansion coefficients $P^{A}_{mn}$ and $\tilde{P}^{A}_{mn}$ are also functions of the density matrix $P_{\alpha\beta}$ and can be calculated efficiently. The separation of the density in Eq. (19) is borrowed from the PAW approach.22 Its special form allows the separation of the smooth parts, characteristic of the interatomic regions, from the quickly varying parts close to the atoms, while still expanding integrals over all space. The sum of the contributions in Eq. (19) gives the correct full density if the following conditions are fulfilled:

n(\mathbf{r}) = n_A(\mathbf{r}), \quad \tilde{n}(\mathbf{r}) = \tilde{n}_A(\mathbf{r}) \qquad \text{close to atom } A,
(21a)
n(\mathbf{r}) = \tilde{n}(\mathbf{r}), \quad n_A(\mathbf{r}) = \tilde{n}_A(\mathbf{r}) \qquad \text{far from atom } A.
(21b)

The first conditions are exactly satisfied only in the limit of a complete basis set. However, the approximation introduced in the construction of the local densities can be systematically improved by choosing larger basis sets.

For semi-local XC functionals such as the local density approximation (LDA), GGA, or meta functionals using the kinetic energy density, the XC energy can be simply written as

E^{XC}_{GAPW}[n] = E^{XC}[\tilde{n}] + \sum_A E^{XC}[n_A] - \sum_A E^{XC}[\tilde{n}_A].
(22)

The first term is calculated on the real-space grid defined by the PW expansion, and the other two are efficiently and accurately calculated using atom centered meshes.

Due to the non-local character of the Coulomb operator, the decomposition for the electrostatic energy is more complex. In order to distinguish between local and global terms, we need to introduce atom-dependent screening densities $n_A^0$ that generate the same multipole expansion $Q_A^{lm}$ as the local density $n_A - \tilde{n}_A + n_A^Z$, where $n_A^Z$ is the nuclear charge distribution of atom A, i.e.,

n_A^0(\mathbf{r}) = \sum_{lm} Q_A^{lm}\, g_A^{lm}(\mathbf{r}).
(23)

The normalized primitive Gaussians $g_A^{lm}(\mathbf{r})$ are defined with an exponent such that they are localized within an atomic region. Since the sum of local densities $n_A - \tilde{n}_A + n_A^Z - n_A^0$ has vanishing multipole moments, it does not interact with charges outside the localization region, and the corresponding energy contribution can be calculated by one-center integrals. The final form of the Coulomb energy in the GAPW method then reads as

E^{C}_{GAPW}[n + n^Z] = E_H[\tilde{n} + n^0] + \sum_A E_H[n_A + n_A^Z] - \sum_A E_H[\tilde{n}_A + n_A^0],
(24)

where $n^0$ is the sum of all atomic contributions $n_A^0$ and $E_H[n]$ denotes the Coulomb energy of a charge distribution n. The first term in Eq. (24) can be efficiently calculated using fast Fourier transform (FFT) methods within the GPW framework. The one-center terms are calculated on radial atomic grids.

The special form of the GAPW energy functional involves several approximations in addition to those of a GPW calculation. The accuracy of the local expansion of the density is controlled by the flexibility of the product basis of primitive Gaussians. As we fix this basis to be the primitive Gaussians present in the original basis set, we cannot independently vary the accuracy of the expansion. Therefore, this approximation has to be considered inherent to the primary basis used.

With the GAPW method, it is possible to calculate materials properties that depend on the core electrons. This has been used for the simulation of the x-ray scattering in liquid water.33 X-ray absorption (XAS) spectra are calculated using the transition potential method.34–40 Several nuclear and electronic magnetic properties such as nuclear magnetic resonance (NMR) chemical shift and electric field gradient, electron paramagnetic resonance (EPR) hyperfine coupling, and g-tensors are also available.41–44 

Even though semi-local DFT is a cornerstone of much of condensed phase electronic structure modeling,45 it is also recognized that going beyond GGA-based DFT is necessary to improve the accuracy and reliability of electronic structure methods.46 One path forward is to augment DFT with elements of wavefunction theory47 or to adopt wavefunction theory itself.48 This is the approach taken in successful hybrid XC functionals such as B3LYP or Heyd–Scuseria–Ernzerhof (HSE),49,50 where part of the exchange functional is replaced by exact Hartree–Fock exchange (HFX). The capability to compute HFX was introduced into CP2K by Guidon et al.51–53 The aim at that time was to enable the use of hybrid XC functionals for condensed phase calculations of relatively large, disordered systems in the context of AIMD simulations. This objective motivated a number of implementation choices and developments that will be described in the following. The capability was particularly important to make progress in the field of first principles electrochemistry54,55 but is also the foundation for the correlated wavefunction (CW) methods such as MP2, RPA, and GW that are available in CP2K and will be described in Sec. IV.

In the periodic case, HFX can be computed as

E_X^{PBC} = -\frac{1}{2 N_k} \sum_{i,j} \sum_{\mathbf{k},\mathbf{k}'} \int\!\!\int \psi^{*}_{i\mathbf{k}}(\mathbf{r}_1)\, \psi_{j\mathbf{k}'}(\mathbf{r}_1)\; g(|\mathbf{r}_1 - \mathbf{r}_2|)\; \psi^{*}_{j\mathbf{k}'}(\mathbf{r}_2)\, \psi_{i\mathbf{k}}(\mathbf{r}_2)\; d^3\mathbf{r}_1\, d^3\mathbf{r}_2,
(25)

where an explicit sum over the k-points is retained and a generalized Coulomb operator $g(|\mathbf{r}_1 - \mathbf{r}_2|)$ is employed. The k-point sum is important since, at least for the standard Coulomb operator 1/r, the term for $\mathbf{k} = \mathbf{k}' = 0$ (the so-called Γ-point) is singular, although the full sum approximates an integrable expression. If the operator is short-ranged or screened, the Γ-point term is well behaved. CP2K computes HFX at the Γ-point only and employs a localized atomic basis set, using an expression where the sum is explicit over the indices of the localized basis (λσμν) as well as over the image cells (abc), thus

E_X^{\Gamma} = -\frac{1}{2} \sum_{\lambda\sigma\mu\nu} P^{\mu\sigma} P^{\nu\lambda} \sum_{abc} \left(\mu\, \nu_a\, \middle|\, \lambda_b\, \sigma_{b+c}\right)_g.
(26)

For an operator with a finite range, the sum over the image cells will terminate. This expression was employed to perform hybrid DFT-based AIMD simulations of liquid water with CP2K.51 Several techniques have been introduced to reduce the computational cost. First, screening based on the overlap of basis functions is employed to reduce the scaling of the calculation from $O(N^4)$ to $O(N^2)$. This does not require any assumptions on the sparsity of the density matrix, nor on the range of the operator, and makes HFX feasible for fairly large systems. Second, the HFX implementation in CP2K is optimized for “in-core” calculations, where the four-center integrals are computed (analytically) only once at the beginning of the self-consistent field (SCF) procedure, stored in main memory, and reused afterward. This is particularly useful in the condensed phase as the sum over all image cells multiplies the cost of evaluating the integrals, relative to gas phase calculations. To be able to store all computed integrals, the code has been very effectively parallelized using the Message Passing Interface (MPI) and Open Multi-Processing (OpenMP), yielding super-linear speed-ups as long as added hardware resources provide additional memory to store all integrals. Furthermore, a specialized compression algorithm is used to store each integral with just as many bits as needed to retain the target accuracy. Third, a multiple-time-step (MTS) algorithm (see Sec. X D) is employed to evaluate HFX energies only every few time steps during an AIMD simulation, assuming that the difference between the potential energy surfaces of a GGA and a hybrid XC functional varies slowly with time. This technique has found reuse in the correlated wavefunction simulations described in Sec. IV.
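
The overlap-based screening can be sketched as follows: pair quantities decay as a Gaussian of the interatomic distance, so only a near-linear number of pairs survives a threshold (positions, decay parameter, and threshold below are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, box = 500, 30.0
pos = rng.uniform(0.0, box, size=(n_atoms, 3))   # random "atoms" in a box

mu, eps = 0.5, 1e-10   # decay of the pair overlap and screening threshold

# pairwise |R_A - R_B|^2; the Gaussian product overlap decays as exp(-mu R^2)
R2 = ((pos[:, None, :] - pos[None, :, :])**2).sum(axis=-1)
significant = np.exp(-mu * R2) > eps

print(significant.sum(), "retained of", n_atoms**2, "pairs")
```

Because the retained pairs all lie within a fixed radius, their number grows linearly with system size at constant density, which is what turns the formal $O(N^4)$ quartet loop into an effectively $O(N^2)$ one.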

In Ref. 52, the initial implementation was revisited, in particular, to be able to robustly compute HFX at the Γ-point for the case where the operator in the exchange term is 1/r and not a screened operator as, e.g., in HSE.56,57 The solution is to truncate the operator (not any of the image cell sums) with a truncation radius $R_C$ that grows with the cell size. The advantage of this approach is that the screening of all terms in Eq. (26) can be performed rigorously and that the approach is stable for a proper choice of screening threshold, also in the condensed phase with good quality (but non-singular) basis sets. The value of $R_C$ that yields convergence is system dependent, and large values of $R_C$ might require the user to explicitly consider multiple unit cells for the simulation cell. Note that the HFX energy converges exponentially with $R_C$ for typical insulating systems and that the same approach was previously used to accelerate k-point convergence.58 In Ref. 59, it was demonstrated that two different codes (CP2K and Gaussian), with very different implementations of HFX, could reach micro-Hartree agreement for the value of the HF total energy of the LiH crystal. In Ref. 52, a suitable GGA-type exchange functional was derived that can be used as a long-range correction (LRC) together with the truncated operator. This correction functional, in the spirit of the exchange functional employed in HSE, effectively allows for model chemistries that employ only very short-ranged exchange (e.g., ≈2 Å).
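
For the spherically truncated 1/r operator, the Fourier-space kernel has the closed form $4\pi(1 - \cos(G R_C))/G^2$, which remains finite as $G \to 0$ (limit $2\pi R_C^2$) and thus removes the Γ-point singularity; a quick numerical cross-check by radial quadrature:

```python
import numpy as np
from scipy.integrate import quad

def g_trunc_analytic(G, Rc):
    # Fourier transform of 1/r truncated at r = Rc
    return 4.0 * np.pi * (1.0 - np.cos(G * Rc)) / G**2

def g_trunc_numeric(G, Rc):
    # angular integration of exp(-iG.r)/r over the sphere r < Rc leaves
    # (4*pi/G) * integral of sin(G r) dr from 0 to Rc
    val, _ = quad(lambda r: np.sin(G * r), 0.0, Rc)
    return 4.0 * np.pi * val / G

for G in (0.3, 1.0, 4.0):
    assert abs(g_trunc_analytic(G, 6.0) - g_trunc_numeric(G, 6.0)) < 1e-8
```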

Important for the many applications of HFX with CP2K is the auxiliary density matrix method (ADMM), introduced in Ref. 53. This method reduces the cost of HFX significantly, often bringing it to within a few times the cost of conventional GGA-based DFT, by addressing the unfavorable scaling of the computational cost of Eq. (26) with respect to the basis set size. The key ingredient of the ADMM method is the use of an auxiliary density matrix $\hat{P}$, which approximates the original P and for which the HFX energy is more cost-effective to compute,

E_X^{HFX}[P] = E_X^{HFX}[\hat{P}] + \left(E_X^{HFX}[P] - E_X^{HFX}[\hat{P}]\right) \approx E_X^{HFX}[\hat{P}] + \left(E_X^{DFT}[P] - E_X^{DFT}[\hat{P}]\right).
(27)

Effectively, $E_X^{HFX}[P]$ is replaced with the computationally more efficient $E_X^{HFX}[\hat{P}]$, and the difference between the two is corrected approximately with a GGA-style exchange functional. Commonly, the auxiliary $\hat{P}$ is obtained by projecting the density matrix P onto a smaller, auxiliary basis. This approximation, including the projection, can be implemented fully self-consistently. In Ref. 60, the efficiency of the ADMM method was demonstrated by computing over 300 ps of AIMD trajectory for systems containing 160 water molecules and by computing spin densities for a solvated metalloprotein system containing ∼3000 atoms.53 
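
The projection step can be sketched with plain linear algebra: occupied MO coefficients are mapped onto a smaller basis via the auxiliary and mixed overlap matrices. All matrices below are random stand-ins, not an actual basis:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, nocc = 8, 4, 3            # primary basis, auxiliary basis, occupied MOs

X = rng.normal(size=(n, n))
S = X @ X.T + n * np.eye(n)     # primary overlap matrix (SPD stand-in)
C = rng.normal(size=(n, nocc))  # hypothetical occupied MO coefficients
P = C @ C.T                     # density matrix

# auxiliary functions expressed as rows in the primary basis (stand-in)
Y = rng.normal(size=(m, n))
S_aux = Y @ S @ Y.T             # auxiliary-auxiliary overlap
Q = Y @ S                       # auxiliary-primary mixed overlap

A = np.linalg.solve(S_aux, Q)   # least-squares projector onto the aux basis
P_hat = A @ P @ A.T             # auxiliary density matrix, smaller than P

# sanity check: if the "auxiliary" basis is the primary one, nothing changes
A_id = np.linalg.solve(S, S)
assert np.allclose(A_id @ P @ A_id.T, P)
```

All expensive HFX contractions then run over the m-dimensional $\hat{P}$ instead of the n-dimensional P.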

In addition, so-called post–Hartree–Fock methods, which are even more accurate than the just described hybrid DFT approach, are also available within CP2K.

Second-order Møller–Plesset perturbation theory (MP2) is the simplest ab initio correlated wavefunction method61 that is applied to the Hartree–Fock reference and is able to capture most of the dynamic electron correlation.62 In the DFT framework, the MP2 formalism gives rise to double-hybrid XC functionals.63 In the spin-orbital basis, the MP2 correlation energy is given by

E^{(2)} = -\left\{\frac{1}{2}\sum_{ij,ab}\frac{(ia|jb)\left[(ia|jb)-(ib|ja)\right]}{\Delta_{ij}^{ab}} + \frac{1}{2}\sum_{\bar{i}\bar{j},\bar{a}\bar{b}}\frac{(\bar{i}\bar{a}|\bar{j}\bar{b})\left[(\bar{i}\bar{a}|\bar{j}\bar{b})-(\bar{i}\bar{b}|\bar{j}\bar{a})\right]}{\Delta_{\bar{i}\bar{j}}^{\bar{a}\bar{b}}}\right\} - \sum_{i\bar{j},a\bar{b}}\frac{(ia|\bar{j}\bar{b})^2}{\Delta_{i\bar{j}}^{a\bar{b}}},
(28)

where i, j, … run over occupied spin-orbitals and a, b, … run over virtual spin-orbitals (indices without a bar denote α-spin-orbitals and indices with a bar denote β-spin-orbitals), $\Delta_{ij}^{ab} = \epsilon_a + \epsilon_b - \epsilon_i - \epsilon_j$ ($\epsilon_a$ and $\epsilon_i$ are orbital energies), and (ia|jb) are electron repulsion integrals in the Mulliken notation.
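
Equation (28) is cheap to evaluate once the MO integrals are in hand; the sketch below uses the closed-shell spin-adapted form E(2) = −Σ (ia|jb)[2(ia|jb) − (ib|ja)]/Δ (a standard rearrangement, not spelled out in the text), with random symmetric tensors standing in for actual integrals:

```python
import numpy as np

rng = np.random.default_rng(3)
no, nv = 3, 5

# hypothetical orbital energies: occupied below, virtual above the gap
e_occ = np.sort(rng.uniform(-2.0, -0.5, no))
e_vir = np.sort(rng.uniform(0.5, 3.0, nv))

# random stand-in for (ia|jb) with the required (ia|jb) = (jb|ia) symmetry
g = rng.normal(scale=0.05, size=(no, nv, no, nv))
g = 0.5 * (g + g.transpose(2, 3, 0, 1))

# energy denominators Delta_ij^ab = e_a + e_b - e_i - e_j (always positive)
D = (e_vir[None, :, None, None] + e_vir[None, None, None, :]
     - e_occ[:, None, None, None] - e_occ[None, None, :, None])

# spin-adapted closed-shell MP2: E2 = -sum (ia|jb)[2(ia|jb)-(ib|ja)]/Delta
e2 = -np.einsum('iajb,iajb->', g * (2.0 * g - g.transpose(0, 3, 2, 1)), 1.0 / D)
assert e2 < 0.0   # the correlation energy is negative for any such tensor
```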

In a canonical MP2 energy algorithm, the time-limiting step is the computation of the (ia|jb) integrals, which are obtained from the electron repulsion integrals over AOs, (μν|λσ), via four consecutive index transformations. The resolution-of-the-identity (RI) approximation to MP2,64 which consists of replacing the integrals (ia|jb) by their approximate counterparts $(ia|jb)_{RI}$, is given by

(ia|jb)_{RI} = \sum_P B_P^{ia}\, B_P^{jb},
(29)

with

B_P^{ia} = \sum_R (ia|R)\, \left[L^{-1}\right]_{PR},
(30)

where P, R, … ($N_a$ in total) are auxiliary basis functions and L are two-center integrals over them. The RI-MP2 method still scales as $O(N^5)$, but with a lower prefactor: the main reason for the speed-up of RI-MP2 lies in the strongly reduced integral computation cost.
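
The factorization of Eqs. (29) and (30) can be checked with random stand-ins for the two- and three-center quantities, using a Cholesky factor L of a symmetric positive definite two-center matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
nov, naux = 12, 20              # combined "ia" index size, auxiliary basis size

tc = rng.normal(size=(nov, naux))        # stand-in three-center integrals (ia|R)
M = rng.normal(size=(naux, naux))
V = M @ M.T + naux * np.eye(naux)        # SPD two-center matrix (R|S)

L = np.linalg.cholesky(V)                # V = L L^T
# B_P^{ia} = sum_R (ia|R) [L^{-1}]_{PR}, cf. Eq. (30)
B = np.linalg.solve(L, tc.T).T

# Eq. (29): sum_P B_P^{ia} B_P^{jb} equals (ia|R) V^{-1} (R|jb)
assert np.allclose(B @ B.T, tc @ np.linalg.solve(V, tc.T))
```

Forming B once costs $O(N_{ia} N_a^2)$, after which every approximate four-index integral is a short dot product over the auxiliary index.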

As MP2 is non-variational with respect to the wavefunction parameters, the analytical expressions for geometric energy derivatives of RI-MP2 energies are complicated, since their calculation requires the solution of the Z-vector equations.65 

1. Scaled opposite-spin MP2

The scaled opposite-spin MP2 (SOS-MP2) method is a simplified variant of MP2.66 Starting from Eq. (28), we neglect the same-spin term in curly brackets and scale the remaining opposite-spin term to account for the error thereby introduced. The energy term can then be rewritten with the RI approximation and the Laplace transform

\frac{1}{x} = \int_0^{\infty} e^{-xt}\, dt.
(31)

Exploiting numerical integration, we obtain the working equation for SOS-MP2, which reads as

E^{SOS\text{-}MP2} = -\sum_{q} \sum_{PQ} w_q\, Q^{\alpha}_{PQ}(t_q)\, Q^{\beta}_{QP}(t_q)
(32)

with

Q^{\alpha}_{PQ}(t_q) = \sum_{ia} B_P^{ia}\, B_Q^{ia}\, e^{-(\epsilon_a - \epsilon_i)\, t_q}
(33)

and similarly for the β spin. The integration weights $w_q$ and abscissas $t_q$ are determined by a minimax procedure.67 In practice, approximately seven quadrature points are needed for microhartree accuracy.66 
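
In CP2K, the weights and abscissas come from a minimax fit; a simpler (though less efficient) way to see that a short sum of exponentials reproduces 1/x is to apply Gauss–Laguerre quadrature to Eq. (31), here for denominators near x ≈ 1–2:

```python
import numpy as np

# 1/x = int_0^inf exp(-x t) dt; factoring out the weight exp(-t) turns this
# into int_0^inf exp(-t) * exp(-(x-1) t) dt, a Gauss-Laguerre integral.
t, w = np.polynomial.laguerre.laggauss(10)

def inv_x(x):
    return float(w @ np.exp(-(x - 1.0) * t))

for x in (1.2, 1.5, 2.0):
    assert abs(inv_x(x) - 1.0 / x) < 1e-4
```

The minimax abscissas used in practice are optimized over the whole range of occupied-virtual gaps at once, which is why they reach microhartree accuracy with so few points.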

The most expensive step is the calculation of the matrix elements Q^α_PQ, which scales as O(N^4). Due to similarities with the random phase approximation (RPA), we will discuss the implementation of this method in Sec. IV B.
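
The structure of Eqs. (32) and (33) can be sketched as below. The minimax grid is replaced by scaled Gauss-Laguerre points and the B factors are random stand-ins, so this illustrates only the algorithmic motif, not the CP2K implementation; the quadrature result is cross-checked against the explicit opposite-spin sum over the same RI integrals:

```python
import numpy as np

rng = np.random.default_rng(1)
n_occ, n_virt, n_aux, n_quad = 4, 6, 20, 20

eps_occ = np.sort(rng.uniform(-1.5, -0.5, n_occ))
eps_virt = np.sort(rng.uniform(0.5, 1.5, n_virt))
B = rng.normal(size=(n_aux, n_occ, n_virt)) * 0.05   # toy B^P_ia, same for both spins

# Quadrature for 1/x = int_0^inf e^{-x t} dt: scaled Gauss-Laguerre nodes
# as a stand-in for the minimax points used in CP2K
u, w = np.polynomial.laguerre.laggauss(n_quad)
x_min = 2.0 * (eps_virt[0] - eps_occ[-1])            # smallest denominator
tq = u / x_min
wq = w * np.exp(u) / x_min                           # undo the e^{-u} weight, rescale

# Eq. (33): Q^alpha_PQ(t) = sum_ia B^P_ia B^Q_ia e^{-(eps_a - eps_i) t}
def q_matrix(t):
    damp = np.exp(-(eps_virt[None, :] - eps_occ[:, None]) * t)   # (i, a)
    return np.einsum("pia,ia,qia->pq", B, damp, B)

# Eq. (32) for a closed shell (alpha = beta): sum_PQ Q_PQ Q_QP = Tr[Q Q]
e_sos = -sum(wq_ * np.trace(q_matrix(t_) @ q_matrix(t_)) for t_, wq_ in zip(tq, wq))

# Reference: explicit opposite-spin sum over the same RI integrals
iajb = np.einsum("pia,pjb->iajb", B, B)
denom = (eps_virt[None, :, None, None] - eps_occ[:, None, None, None]
         + eps_virt[None, None, None, :] - eps_occ[None, None, :, None])
e_ref = -np.sum(iajb**2 / denom)
print(e_sos, e_ref)
```

The tall-and-skinny matrix multiplication hidden in `q_matrix` is the O(N^4) step referred to in the text; each quadrature point is independent, which is what the pool-based parallelization of Sec. IV B exploits.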

2. Implementation

CP2K features Γ-point implementations of canonical MP2, RI-MP2, Laplace-transformed MP2, and SOS-MP2 energies.68,69 For RI-MP2, analytical gradients and stress tensors are available,70 for both closed and open electronic shells.71 Two- and three-center integrals can be calculated by the GPW method or analytically.

The implementation is massively parallel, directly profits from the availability of sparse matrix algebra, and allows for graphics processing unit (GPU) acceleration of large matrix multiplications. The evaluation of the gradients of the RI-MP2 energy can be performed within a few minutes for systems containing hundreds of atoms and thousands of basis functions on thousands of central processing unit (CPU) cores, allowing for MP2-based structure relaxation and even AIMD simulations on HPC facilities. The cost of a gradient calculation is 4–5 times larger than that of the energy evaluation, and an open-shell MP2 calculation is typically 3–4 times more expensive than a closed-shell one.

3. Applications

The RI-MP2 implementation, which is the computationally most efficient MP2 variant available in CP2K, has been successfully applied to study a number of aqueous systems. In fact, ab initio Monte Carlo (MC) and AIMD simulations of bulk liquid water (with simulation cells containing 64 water molecules) predicted the correct density, structure, and IR spectrum.72–74 Other applications include structure refinements of ice XV,75 AIMD simulations of the bulk hydrated electron (with simulation cells containing 47 water molecules),76 as well as the first AIMD simulation of a radical in the condensed phase using wavefunction theory.

Total energy methods based on the RPA correlation energy have emerged in recent years as promising approaches to include non-local dynamical electron correlation effects at the fifth rung of Jacob's ladder of density functional approximations.77 In this context, there are numerous ways to express the RPA correlation energy, depending on the theoretical framework and the approximations employed to derive the working equations.78–96 Our implementation uses the approach introduced by Eshuis et al.,97 which is based on the dielectric matrix formulation and involves the numerical integration over frequency of a logarithmic expression containing the dynamical dielectric function, expressed in a Gaussian RI auxiliary basis. Within this approach, the direct-RPA (sometimes referred to as dRPA) correlation energy, i.e., RPA excluding exchange contributions,97 is formulated as a frequency integral

E_c^{RPA} = \frac{1}{2} \int_{-\infty}^{+\infty} \frac{\mathrm{d}\omega}{2\pi} \, \mathrm{Tr}\big(\ln(1 + \mathbf{Q}(\omega)) - \mathbf{Q}(\omega)\big),
(34)

with the frequency dependent matrix Q(ω), expressed in the RI basis, determined by

Q_{PQ}(\omega) = 2 \sum_i^{occ} \sum_a^{virt} B^P_{ia} \, \frac{\varepsilon_a - \varepsilon_i}{\omega^2 + (\varepsilon_a - \varepsilon_i)^2} \, B^Q_{ia}.
(35)

For a given ω, the computation of the integrand of Eq. (34) via Eq. (35) requires O(N^4) operations. The integral of Eq. (34) can be efficiently calculated by a minimax quadrature requiring only ≃10 integration points to achieve μhartree accuracy.98,99 Thus, the introduction of the RI approximation and the frequency integration techniques for computing E_c^RPA leads to a computational cost of O(N^4 N_q) and O(N^3) storage, where N_q is the number of quadrature points.69 
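
The following sketch evaluates the integrand of Eq. (34) from the Q matrix of Eq. (35) for a toy system. The B factors are random stand-ins, and a mapped Gauss-Legendre grid replaces the minimax grid; since Q(ω) is symmetric positive semidefinite and ln(1+x) ≤ x, the integrand is non-positive and the correlation energy comes out negative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_occ, n_virt, n_aux = 4, 6, 20
eps_occ = np.sort(rng.uniform(-1.5, -0.5, n_occ))
eps_virt = np.sort(rng.uniform(0.5, 1.5, n_virt))
B = rng.normal(size=(n_aux, n_occ, n_virt)) * 0.1    # toy RI factors B^P_ia

# Eq. (35): Q_PQ(w) = 2 sum_ia B^P_ia (e_a - e_i) / (w^2 + (e_a - e_i)^2) B^Q_ia
def q_of_omega(w):
    gap = eps_virt[None, :] - eps_occ[:, None]        # (i, a)
    return np.einsum("pia,ia,qia->pq", B, 2.0 * gap / (w**2 + gap**2), B)

# Integrand of Eq. (34): Tr[ln(1 + Q) - Q], evaluated via the eigenvalues
# of the symmetric positive semidefinite matrix Q
def integrand(w):
    ev = np.linalg.eigvalsh(q_of_omega(w))
    return np.sum(np.log1p(ev) - ev)

# The integrand is even in omega, so Eq. (34) reduces to (1/2pi) int_0^inf;
# Gauss-Legendre mapped to (0, inf) stands in for the minimax grid
x, wgt = np.polynomial.legendre.leggauss(32)
omega = (1 + x) / (1 - x)                             # (-1, 1) -> (0, inf)
jac = 2.0 / (1 - x)**2
e_c_rpa = np.sum(wgt * jac * np.array([integrand(o) for o in omega])) / (2 * np.pi)
print(e_c_rpa)
```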

We note here that the conventional RPA correlation energy methods in CP2K are based on the exact exchange (EXX) and RPA correlation energy formalism (EXX/RPA), which has extensively been applied to a large variety of systems including molecules, systems with reduced dimensionality, and solids.75,78–81,83,84,86–89,100–111 Within the framework of the EXX/RPA formalism, the total energy is given as

E_{tot}^{EXX/RPA} = E_{tot}^{HF} + E_{C}^{RPA} = E_{tot}^{DFT} - E_{XC}^{DFT} + E_{X}^{EXX} + E_{C}^{RPA},
(36)

where the right-hand side (RHS) terms of the last equation are the DFT total energy, the DFT xc energy, the EXX energy, and the (direct) RPA correlation energy, respectively. The sum of the first three terms is identical to the Hartree–Fock energy as calculated employing DFT orbitals, which is usually denoted as HF@DFT. The last term corresponds to the RPA correlation energy as computed using DFT orbitals and orbital energies and is often referred to as RPA@DFT. The calculation of EtotEXX/RPA thus requires a ground state calculation with a given DFT functional, followed by an EXX energy evaluation and a RPA correlation energy evaluation employing DFT ground state wavefunctions and orbital energies.

1. Implementation of the quartic scaling RPA and SOS-MP2 methods

We summarize here the implementation in CP2K of the quartic-scaling computation of the RPA and SOS-MP2 correlation energies. The two implementations are described together because they share several components. In fact, it can be shown that the direct MP2 energy can be obtained by truncating the Taylor expansion of the logarithm in Eq. (34) at the first non-vanishing term and integrating over frequencies.97 

After the three-center RI integral matrix B^P_ia is made available (e.g., via RI-GPW69), the key component of both methods is the evaluation of the frequency dependent Q_PQ(ω) for the RPA and the τ dependent Q_PQ(τ) for the Laplace-transformed SOS-MP2 method (see Sec. IV A 1). The matrices Q_PQ(ω) and Q_PQ(τ) are given by the contractions in Eqs. (35) and (33), respectively. Their computation entails, as a basic algorithmic motif, a large distributed matrix multiplication between tall and skinny matrices for each quadrature point. Fortunately, the required operations at each quadrature point are independent of each other. The parallel implementation in CP2K exploits this fact by distributing the workload for the evaluation of Q_PQ(ω) and Q_PQ(τ) over pools of processes, where each pool works independently on a subset of quadrature points. Furthermore, the operations necessary for each quadrature point are performed in parallel within all members of the pool. In this way, the O(N^4) bottleneck of the computation displays an embarrassingly parallel distribution of the workload and, in fact, it shows excellent parallel scalability to several thousand nodes.69,98 Additionally, since at the pool level the distributed matrix multiplication employs the widely adopted data layout of the parallel BLAS library, minimal modifications are required to exploit accelerators [such as graphics processing units (GPUs) and field-programmable gate arrays (FPGAs); see Sec. XV A for details], as interfaces to the corresponding accelerated libraries are made available.98 

Finally, the main difference between RPA and SOS-MP2 is the postprocessing. After the contraction step to obtain the QPQ matrix, the sum in Eq. (32) is performed with computational costs of O(Na2Nq) for SOS-MP2 instead of O(Na3Nq), which is associated with the evaluation of the matrix logarithm of Eq. (34) for the RPA (in CP2K, this operation is performed using the identity Tr[ln A] = ln(det[A]), where a Cholesky decomposition is used to efficiently calculate the matrix determinant). Therefore, the computational costs of the quartic-scaling RPA and SOS-MP2 are the same for large systems, assuming that the same number of quadrature points is used for the numerical integration.
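
The determinant-based evaluation of the matrix logarithm mentioned above can be sketched in a few lines. For a symmetric positive definite A = 1 + Q with Cholesky factorization A = LL^T, one has Tr[ln A] = ln det A = 2 Σ_i ln L_ii, so no eigendecomposition is needed:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
m = rng.normal(size=(n, n))
q = m @ m.T / n                         # symmetric positive semidefinite "Q"

a = np.eye(n) + q                       # 1 + Q is symmetric positive definite
# Tr[ln A] = ln det A; with A = L L^T, det A = prod(diag(L))^2,
# hence ln det A = 2 * sum(ln diag(L))
L = np.linalg.cholesky(a)
tr_ln_a = 2.0 * np.sum(np.log(np.diag(L)))

# Cross-check against the eigenvalue route
ref = np.sum(np.log(np.linalg.eigvalsh(a)))
print(tr_ln_a, ref)
```

The Cholesky route costs one O(N_a^3) factorization per quadrature point, which is the source of the O(N_a^3 N_q) postprocessing cost quoted for the RPA.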

2. Cubic scaling RPA and SOS-MP2 method

The scaling of RPA and SOS-MP2 can be reduced from O(N4) to O(N3) or even better by alternative analytical formulations of the methods.112,113 Here, we describe the CP2K-specific cubic scaling RPA/SOS-MP2 implementation and demonstrate the applicability to systems containing thousands of atoms.

For cubic scaling RPA calculations, the matrix Q_PQ(ω) from Eq. (35) is transformed to its imaginary-time counterpart Q_PQ(τ),99 which is already present in SOS-MP2. The tensor B^P_ia is transformed from occupied-virtual MO pairs ia to pairs μν of AO basis set functions. This decouples the sum over occupied and virtual orbitals and thereby reduces the formal scaling from quartic to cubic. Further requirements for a cubic scaling behavior are the use of localized atomic Gaussian basis functions and the localized overlap RI metric such that the occurring 3-center integrals are sparse. A sparse representation of the density matrix is not a requirement for our cubic scaling implementation, but it reduces the effective scaling of the usually dominant O(N^2) sparse tensor contraction steps to O(N).112 

The operations performed for the evaluation of Q_PQ(τ) are, generally speaking, contractions of sparse tensors of ranks 2 and 3, starting from the 3-center overlap integrals and the density matrix. Consequently, sparse linear algebra is the key to good performance, as opposed to the quartic scaling implementation that relies mostly on parallel BLAS for dense matrix operations.

The cubic scaling RPA/SOS-MP2 implementation is based on the distributed block compressed sparse row (DBCSR) library,114 which is described in detail in Sec. XIII and was originally co-developed with CP2K, to enable linear scaling DFT.115 The library was extended with a tensor API in a recent effort to make it more easily applicable to algorithms involving contractions of large sparse multi-dimensional tensors. Block-sparse tensors are internally represented as DBCSR matrices, whereas tensor contractions are mapped to sparse matrix–matrix multiplications. An in-between tall-and-skinny matrix layer reduces memory requirements for storage and reduces communication costs for multiplications by splitting the largest matrix dimension and running a simplified variant of the CARMA algorithm.116 The tensor extension of the DBCSR library leads to significant improvements in terms of performance and usability compared to the initial implementation of the cubic scaling RPA.113 

In Fig. 1, we compare the computational costs of the quartic and the cubic scaling RPA energy evaluation for periodic water systems of different sizes. All calculations use the RI together with the overlap metric and a high-quality cc-TZVP primary basis with a matching RI basis. The neglect of small tensor elements in the O(N3) implementation is controlled by a filtering threshold parameter. This parameter has been chosen such that the relative error introduced in the RPA energy is below 0.01%. The favorable effective scaling of O(N1.8) in the cubic scaling implementation leads to better absolute timings for systems of 100 or more water molecules. At 256 water molecules, the cubic scaling RPA outperforms the quartic scaling method by one order of magnitude.

FIG. 1.

Comparison of the timings for the calculation of the RPA correlation energy using the quartic- and the cubic-scaling implementation on a CRAY XC50 machine with 12 cores per node (flat MPI). The system sizes are n × n × n supercells (with n = 1, 2, 3) of a unit cell with 32 water molecules and a density of 1 g/cm3. The intersection point where the cubic scaling methods become favorable is at ∼90 water molecules. The largest system tested with the cubic scaling RPA contains 864 water molecules (6912 electrons, 49 248 primary basis functions, and 117 504 RI basis functions) and was calculated on 256 compute nodes (3072 cores). The largest tensor of size 117 504 × 49 248 × 49 248 has an occupancy below 0.2%.


The observed scaling is better than cubic in this example because the O(N^3) scaling steps have a small prefactor and would start to dominate only in systems larger than those presented here; they make up around 20% of the execution time for the largest system. The dominant sparse tensor contractions are quadratic scaling, closely matching the observed scaling of O(N^1.8). It is important to mention that the density matrices are not yet becoming sparse for these system sizes. Lower dimensional systems with a large extent in one or two dimensions reach an even more favorable scaling regime of O(N) since the onset of sparse density matrices occurs at smaller system sizes.

All aspects of the comparison discussed here also apply to SOS-MP2 because it shares the algorithm and implementation of the dominant computational steps with the cubic scaling RPA method. In general, the exact gain of the cubic scaling RPA/SOS-MP2 scheme depends on the specifics of the applied basis sets (locality and size). The effective scaling, however, is O(N3) or better for all systems, irrespective of whether the density matrix has a sparse representation, thus extending the applicability of the RPA to large systems containing thousands of atoms.

The GW approach, which allows us to approximately calculate the self-energy of a many-body system of electrons, has become a popular tool in theoretical spectroscopy to predict electron removal and addition energies as measured by direct and inverse photoelectron spectroscopy, respectively [see Figs. 2(a) and 2(b)].117–120 In direct photoemission, the electron is ejected from orbital ψn to the vacuum level by irradiating the sample with visible light, ultraviolet radiation, or X-rays, whereas in the inverse photoemission process, an injected electron undergoes a radiative transition into an unoccupied state.

FIG. 2.

Properties predicted by the GW method: Ionization potentials and electron affinities as measured by (a) photoemission and (b) inverse photoemission spectroscopy, respectively. (c) Level alignment of the HOMO and LUMO upon adsorption of a molecule at metallic surfaces, which are accounted for in CP2K by an IC model avoiding the explicit inclusion of the metal in the GW calculation.


The GW approximation has been applied to a wide range of materials including two-dimensional systems, surfaces, and molecules. A summary of applications and an introduction to the many-body theory behind GW and practical aspects can be found in a recent review article.120 The electron removal energies are referred to as ionization potentials (IPs) and the negative of the electron addition energies are referred to as electron affinities (EAs); see Ref. 120 for details on the sign convention. The GW method yields a set of energies {εn} for all occupied and unoccupied orbitals {ψn}. εn can be directly related to the IP for occupied states and to the EA for the lowest unoccupied state (LUMO). Hence,

IP_n = -\varepsilon_n \;\; (n \in occ) \quad \text{and} \quad EA = -\varepsilon_{LUMO}.
(37)

The difference between the IP of the highest occupied state (HOMO) and the EA is called the fundamental gap, a critical parameter for charge transport in, e.g., organic semiconductors.121 It should be noted that the fundamental HOMO–LUMO gap does not correspond to the first optical excitation energy (also called optical gap) that is typically smaller than the fundamental gap due to the electron–hole binding energy.122 For solids, the Bloch functions ψnk carry an additional quantum number k in the first Brillouin zone and give rise to a band structure εnk.118,123 From the band structure, we can determine the bandgap, which is the solid-state equivalent to the HOMO–LUMO gap. Unlike for molecules, an angle-resolved photoemission experiment is required to resolve the k-dependence.

For GW, mean absolute deviations of less than 0.2 eV from higher-level coupled-cluster singles and doubles plus perturbative triples [CCSD(T)] reference values have been reported for IP_HOMO and EA.124,125 The deviation from the experimental reference can even be reduced to <0.1 eV when vibrational effects are also included.126 In contrast, DFT eigenvalues ε_n^DFT fail to reproduce spectroscopic properties. Relating ε_n^DFT to the IPs and the EA as in Eq. (37) is conceptually valid only for the HOMO level.127 Besides this conceptual issue, IPs from DFT eigenvalues are underestimated by several eV compared to experimental IPs due to the self-interaction error in GGA and LDA functionals, yielding far too small HOMO–LUMO gaps.128,129 Hybrid XC functionals can improve the agreement with experiment, but the amount of exact exchange can strongly influence ε_n^DFT in an arbitrary way.

The most common GW scheme is the G0W0 approximation, where the GW calculation is performed non-self-consistently on top of an underlying DFT calculation. In G0W0, the MOs ψ_n^DFT from DFT are employed and the DFT eigenvalues are corrected by replacing the incorrect XC contribution ⟨ψ_n^DFT|v_xc|ψ_n^DFT⟩ by the more accurate energy-dependent XC self-energy ⟨ψ_n^DFT|Σ(ε)|ψ_n^DFT⟩, i.e.,

\varepsilon_n = \varepsilon_n^{DFT} + \langle \psi_n^{DFT} | \Sigma(\varepsilon_n) - v_{XC} | \psi_n^{DFT} \rangle.
(38)

The self-energy is computed from the Green’s function G and the screened Coulomb interaction W, Σ = GW, hence the name of the GW approximation.119,120

The G0W0 implementation in CP2K works routinely for isolated molecules,130 although first attempts have been made to extend it to periodic systems.76,131,132 The standard G0W0 implementation is optimized for computing valence orbital energies ε_n (e.g., up to 10 eV below the HOMO and 10 eV above the LUMO).130 The frequency integration of the self-energy is based on analytic continuation, using either the 2-pole model133 or a Padé approximant as the analytic form.134 For core levels, more sophisticated implementations are necessary.135,136 The standard implementation scales as O(N^4) with the system size N and enables the calculation of molecules with up to a few hundred atoms on parallel supercomputers. Large molecules exceeding a thousand atoms can be treated by the low-scaling G0W0 implementation within CP2K,137 which effectively scales between O(N^2) and O(N^3).

The GW equations should in principle be solved self-consistently. However, a fully self-consistent treatment is computationally very expensive.138,139 In CP2K, an approximate self-consistent scheme is available, where the wavefunctions ψ_n^DFT from DFT are employed and only the eigenvalues in G and W are iterated.128,130 That is, after completion of the G0W0 loop, the quasiparticle energies obtained from Eq. (38) are reinserted into the non-interacting Green's function G0 and the screened Coulomb interaction W0 in place of the starting DFT eigenvalues. Via G0 and W0, the change in the eigenvalues propagates to the self-energy and eventually to the energies ε_n.120 This eigenvalue-self-consistent scheme (evGW) includes more physics than G0W0 but is still computationally tractable. Depending on the underlying DFT functional, evGW improves the agreement of the HOMO–LUMO gaps with experiment by 0.1–0.3 eV compared to G0W0.128,130 For lower lying states, the improvement over G0W0 might not be as consistent.140 
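
The fixed-point character of Eq. (38) under eigenvalue iteration can be illustrated with a toy model. The one-pole self-energy, the eigenvalue, and the v_xc matrix element below are all made up for illustration and have nothing to do with a real GW self-energy:

```python
import numpy as np

# Toy quasiparticle equation, Eq. (38): eps = eps_dft + Sigma(eps) - vxc
eps_dft = -5.0       # model DFT eigenvalue (eV)
vxc = -2.0           # model <psi|v_xc|psi> (eV)

def sigma(e):
    # made-up one-pole model self-energy (eV), NOT a real GW Sigma
    return -3.0 + 0.5 / (e - 10.0)

# Eigenvalue self-consistency: reinsert the quasiparticle energy until converged
eps = eps_dft
for it in range(100):
    eps_new = eps_dft + sigma(eps) - vxc
    if abs(eps_new - eps) < 1e-10:
        break
    eps = eps_new
print(f"quasiparticle energy after {it} iterations: {eps_new:.6f} eV")
```

In evGW the same idea is applied to all eigenvalues at once, entering both G and W, rather than to a single scalar equation as here.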

Applications of the GW implementation in CP2K have focused on graphene-based nanomaterials on gold surfaces, complementing scanning tunneling spectroscopy with GW calculations to validate the molecular geometry and to obtain information about the spin configuration.141 The excitation process generates an additional charge on the molecule. As a response, an image charge (IC) is formed inside the metallic surface, which causes occupied states to move up in energy and unoccupied states to move down. The HOMO–LUMO gap of the molecule is thus significantly reduced on the surface compared to the gas phase [see Fig. 2(c)]. This effect is accounted for by an IC model,142 which is implemented in CP2K. By adding the IC correction on top of a gas-phase evGW calculation of the isolated molecule, CP2K can efficiently compute HOMO–LUMO gaps of physisorbed molecules on metal surfaces.141 

However, if not explicitly stated otherwise, we will confine ourselves to conventional KS-DFT from now on.

The constrained DFT (CDFT) method facilitates the construction of charge- and/or spin-localized states, which are needed in a number of applications,143 such as

  • studying charge transfer (CT) phenomena and calculating electronic couplings (e.g., using the Marcus theory approach),

  • correcting spurious charge delocalization due to self-interaction error, and

  • parameterizing model Hamiltonians (e.g., the Heisenberg spin Hamiltonian).

The basic theory underlying CDFT has been derived by Wu and Van Voorhis,144,145 and the present implementation within CP2K is described in detail elsewhere,146,147 so both are only briefly summarized here.

The charge and spin localized states are created by enforcing electron and spin density localization within atom centered regions of space. To this effect, the KS energy functional EKS[ρ] is augmented by additional constraint potentials, i.e.,

E^{CDFT}[\rho, \boldsymbol{\lambda}] = \max_{\boldsymbol{\lambda}} \min_{\rho} \Big\{ E^{KS}[\rho] + \sum_c \lambda_c \Big[ \sum_{i=\uparrow,\downarrow} \int \mathrm{d}\mathbf{r} \, w_c^i(\mathbf{r}) \, \rho^i(\mathbf{r}) - N_c \Big] \Big\},
(39)

where λ = [λ_1, λ_2, …]^T are the constraint Lagrange multipliers, which can be thought of as the strengths of the constraint potentials, N_c is the target value of the corresponding constraint, and w_c^i(r) is an atom centered weight function. The latter is constructed as a normalized sum over a set of selected constraint atoms C; hence,

w^i(\mathbf{r}) = \frac{\sum_{j \in C} c_j P_j(\mathbf{r})}{\sum_{j \in N} P_j(\mathbf{r})},
(40)

where c_j are atomic coefficients that determine how each atom is included in the constraint, P_j is the so-called cell function that controls the volume occupied by atom j according to some population analysis method, and N is the set of all atoms in the system. Different types of constraints can be constructed by modifying w_c^i(r) according to the following conventions:

  • charge constraint (ρ↑ + ρ↓): w↑ = w↓ = w,

  • magnetization constraint (ρ↑ − ρ↓): w↑ = w, w↓ = −w, and

  • spin-specific constraint (ρ↑/↓): w↑/↓ = w, w↓/↑ = 0.
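
The weight-function construction of Eq. (40) and the constraint integral of Eq. (39) can be sketched in one dimension. The Gaussian cell functions below are a stand-in for the Becke or Hirshfeld cell functions actually used in CP2K, and the density is an arbitrary toy function:

```python
import numpy as np

# 1D toy: three "atoms" with Gaussian cell functions P_j(r),
# constraint region C = {atom 0} with coefficient c_0 = 1
r = np.linspace(-6.0, 6.0, 2001)
centers = np.array([-2.0, 0.0, 2.5])
P = np.exp(-(r[None, :] - centers[:, None])**2)      # (atoms, grid)

# Eq. (40): w(r) = sum_{j in C} c_j P_j / sum_{j in N} P_j
C = [0]
c = {0: 1.0}
w = sum(c[j] * P[j] for j in C) / P.sum(axis=0)

# A toy density and the constraint value  int dr w(r) rho(r)  of Eq. (39)
rho = np.exp(-0.5 * (r + 1.0)**2)
dr = r[1] - r[0]
constraint_value = np.sum(w * rho) * dr
print(constraint_value)
```

With all c_j = 1, the weight is bounded between 0 and 1, so the constraint value is a fraction of the total integrated density assigned to the constraint region.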

In CP2K, the Becke and Hirshfeld space partitioning schemes are implemented as constraint weight functions. When CDFT is used within AIMD or for geometry optimization, however, the following additional force term arises due to the employed constraints:

\mathbf{F}_{I,c} = -\lambda_c \int \mathrm{d}\mathbf{r} \, \frac{\partial w_c(\mathbf{r})}{\partial \mathbf{R}_I} \, \rho(\mathbf{r}).
(41)

As illustrated in Fig. 3, ECDFT[ρ, λ] is optimized self-consistently using a two-tiered approach: one external optimization loop for the constraints and an inner loop to converge the electronic structure, as described in detail in Sec. VIII. By definition, all constraints are satisfied when

\mathbf{c}(\boldsymbol{\lambda}) = \Big[ \sum_{i=\uparrow,\downarrow} \int \mathrm{d}\mathbf{r} \, w_1^i(\mathbf{r}) \, \rho^i(\mathbf{r}) - N_1, \; \ldots \Big]^T = \mathbf{0}.
(42)

The constraint Lagrange multipliers λ can therefore be optimized by minimizing the maximum absolute constraint error max_i |c_i(λ)| using root-finding algorithms. For Newton and quasi-Newton optimizers, a new guess for λ at step n is generated according to the following iteration formula:

\boldsymbol{\lambda}_n = \boldsymbol{\lambda}_{n-1} - \alpha \, \mathbf{J}_{n-1}^{-1} \, \mathbf{c}(\boldsymbol{\lambda}_{n-1}),
(43)

where α ∈ (0, 1] is the step size and J^{-1} is the inverse Jacobian matrix, which is approximated by means of finite differences.
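
The outer CDFT loop of Eq. (43) can be sketched as follows. The inner SCF that would normally produce the density for a given λ is replaced here by a made-up smooth model returning the constraint errors, so only the Newton logic with a finite-difference Jacobian is illustrated:

```python
import numpy as np

# Stand-in for the inner SCF: a smooth model of the constraint errors
# c_i(lambda) = int w_i rho(lambda) - N_i for two constraints (made up)
def constraint_error(lam):
    return np.array([np.tanh(lam[0]) + 0.1 * lam[1] - 0.3,
                     0.2 * lam[0] + np.tanh(lam[1]) + 0.4])

lam = np.zeros(2)
alpha, h = 1.0, 1e-6          # step size and finite-difference step
for it in range(50):
    c0 = constraint_error(lam)
    if np.max(np.abs(c0)) < 1e-10:
        break
    # Finite-difference Jacobian J_kl = d c_k / d lambda_l
    J = np.empty((2, 2))
    for l in range(2):
        d = np.zeros(2)
        d[l] = h
        J[:, l] = (constraint_error(lam + d) - c0) / h
    lam = lam - alpha * np.linalg.solve(J, c0)   # Newton step, Eq. (43)
print(lam, it)
```

In the real implementation, every evaluation of `constraint_error` costs a full inner SCF cycle, which is why the number of outer iterations matters.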

FIG. 3.

Schematic of the CDFT SCF procedure. The constraint Lagrangians λ are first optimized in the outer CDFT loop, their values are subsequently fixed, and the electron density corresponding to these fixed values is solved like in traditional CP2K DFT simulations. The control is then returned to the outer CDFT loop, where the convergence of the constraints is checked. This iteration process is repeated until convergence is achieved.


Additional properties can be calculated from the interactions between CDFT states. Since in CP2K these types of simulations, where multiple CDFT states are treated in parallel, leverage the mixed force methods described in Sec. XII, we will refer to them as mixed CDFT simulations. They are useful for a variety of purposes, including

  • calculating charge transfer kinetics parameters and

  • performing configuration interaction (CI) calculations within the basis of CDFT states.

The theoretical concepts related to mixed CDFT calculations are best introduced through an example. Consider the one-electron transfer process X⁻ + Y → X + Y⁻ and denote the initial and final states of this reaction as A and B, respectively. According to the Marcus theory of electron transfer, the charge transfer rate of this reaction is given by the rate equation

k_{ab} = \frac{2\pi}{\hbar} |H_{ab}|^2 \frac{1}{\sqrt{4\pi k_B T \xi}} \exp\left[ -\frac{(\xi + \Delta A)^2}{4 \xi k_B T} \right],
(44)

where ΔA is the reaction free energy, ξ is the solvent reorganization energy, and |Hab| is the electronic coupling. The first two quantities can be obtained from free energy simulations, whereas the last is rigorously defined as the interaction energy between wavefunctions Ψ representing the two reaction states, i.e.,

H_{ab} = \langle \Psi_a | \mathcal{H} | \Psi_b \rangle,
(45)

where H is the many-electron Hamiltonian. The electronic coupling is not only central to the Marcus rate equation, but also to other charge transfer theories, as well as to CDFT-based CI.148 
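
The Marcus rate of Eq. (44) is straightforward to evaluate numerically. The coupling, free energy, and reorganization energy below are hypothetical example numbers, chosen only to contrast the activationless case (−ΔA = ξ, where the exponent vanishes) with the normal region:

```python
import numpy as np

hbar = 6.582119569e-16   # eV s
kB = 8.617333262e-5      # eV / K

def marcus_rate(H_ab, delta_A, xi, T=300.0):
    """Eq. (44): H_ab, delta_A, xi in eV; returns a rate in 1/s."""
    prefac = 2.0 * np.pi / hbar * H_ab**2 / np.sqrt(4.0 * np.pi * kB * T * xi)
    return prefac * np.exp(-(xi + delta_A)**2 / (4.0 * xi * kB * T))

# Hypothetical numbers: 10 meV coupling, 0.2 eV reorganization energy
k_act = marcus_rate(0.01, delta_A=-0.2, xi=0.2)   # -dA = xi: barrierless maximum
k_norm = marcus_rate(0.01, delta_A=0.0, xi=0.2)   # normal region
print(k_act, k_norm)
```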

Since neither the true interacting many-electron wavefunctions nor the Hamiltonian are available within CDFT, the electronic coupling is instead approximated using the CDFT surrogates

H_{AB} \approx \langle \Phi_A | \mathcal{H}^{KS} | \Phi_B \rangle = E_B S_{AB} - \sum_c \lambda_c^B W_c^{AB},
(46a)
H_{BA} \approx \langle \Phi_B | \mathcal{H}^{KS} | \Phi_A \rangle = E_A S_{BA} - \sum_c \lambda_c^A W_c^{BA},
(46b)

where Φ are the CDFT KS determinants, E_I is the CDFT energy of state I, S_AB = ⟨Φ_A|Φ_B⟩, and W_c^AB are the weight function matrix elements defined by

W_c^{AB} = \langle \Phi_A | w_c^B(\mathbf{r}) | \Phi_B \rangle.
(47)

In the above expressions, capital subscripts have been used to emphasize that the CDFT determinants are in general non-orthogonal. The electronic couplings and overlaps are collected into matrices H and S, respectively. The off-diagonal elements of H are not symmetric; the matrix is brought to symmetric form by setting

H'_{AB} = H'_{BA} = \frac{H_{AB} + H_{BA}}{2} = \frac{E_A + E_B}{2} S_{AB} - \sum_c \Big\langle \Phi_A \Big| \frac{\lambda_c^A w_c^A(\mathbf{r}) + \lambda_c^B w_c^B(\mathbf{r})}{2} \Big| \Phi_B \Big\rangle.
(48)

The resulting matrix H′ is then orthogonalized to yield the final electronic coupling.
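
The assembly of Eqs. (46)-(48) for a two-state case can be sketched as below. All numbers (energies, overlap, multipliers, W matrix elements) are hypothetical stand-ins, and Löwdin symmetric orthogonalization is used here as one possible choice for the final orthogonalization step, which the text leaves unspecified:

```python
import numpy as np

# Hypothetical two-state CDFT data (hartree), one constraint
E = np.array([-10.00, -9.95])            # E_A, E_B
S_AB = 0.12                              # <Phi_A|Phi_B>
lam = np.array([0.05, -0.03])            # lambda^A, lambda^B
W_AB, W_BA = 0.08, 0.07                  # weight-function matrix elements

# Eq. (46): non-symmetric off-diagonal elements
H_AB = E[1] * S_AB - lam[1] * W_AB
H_BA = E[0] * S_AB - lam[0] * W_BA

# Eq. (48): symmetrization
H_sym = 0.5 * (H_AB + H_BA)

S = np.array([[1.0, S_AB], [S_AB, 1.0]])
H = np.array([[E[0], H_sym], [H_sym, E[1]]])

# Loewdin orthogonalization H'' = S^{-1/2} H' S^{-1/2} (assumed choice)
s_ev, s_vec = np.linalg.eigh(S)
S_inv_sqrt = s_vec @ np.diag(s_ev**-0.5) @ s_vec.T
H_ortho = S_inv_sqrt @ H @ S_inv_sqrt
coupling = abs(H_ortho[0, 1])
print(f"electronic coupling |H_ab| = {coupling:.6f} hartree")
```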

A pathological example of a system where the self-interaction error of common DFT XC functionals leads to unphysical results is the simple dissociation reaction H₂⁺ → H⁺ + H. Even though this system contains only one electron, the dissociation profile obtained with the Perdew-Burke-Ernzerhof (PBE) XC functional notably deviates from the exact HF curve, as shown in Fig. 4. However, using CDFT states as the basis of a CI calculation, it is possible to correct for the self-interaction error of the H₂⁺ ion. In fact, using the PBE XC functional, CDFT-CI is able to reproduce the exact dissociation profile. Very briefly, CDFT-CI simulations involve representing the system's wavefunction as a linear combination of multiple CDFT states, where the charge/spin density is constrained differently in different states. For that purpose, so-called fragment-based constraints are employed, where the constraint target value is calculated from the superposition of isolated fragment densities according to the scheme shown in Fig. 5. The CI expansion coefficients and energies are then obtained by solving a generalized eigenvalue equation, where the effective Hamilton matrix describes how the CDFT states interact with each other.

FIG. 4.

Illustration of the DFT self-interaction error using the PBE XC functional for the reaction H₂⁺ → H⁺ + H. Employing the fragment constraint states |H⁺H⟩ and |HH⁺⟩ as the basis, the correct profile can be recovered with CDFT-CI.

FIG. 5.

Using a fragment-based CDFT constraint, the system is first divided into two fragments with atomic positions fixed in the same configuration as in the full system. The electron and spin densities of the fragment systems are then saved to cube files and subsequently used as input files for the CDFT calculation, where the constraint target value is calculated from the superimposed fragment densities.


Several experimental observables are measured by perturbing the system and observing its response; hence, they can be obtained as derivatives of the energy or density with respect to a specific external perturbation. In the common perturbative approach, the perturbation is included in the Hamiltonian as an external potential, the electronic structure is obtained by applying the variational principle, and the changes in the expectation values are evaluated. The perturbation Hamiltonian defines the specific problem. The perturbative approach applied within the framework of DFT performs better than straightforward numerical methods, where the total energy is computed after actually perturbing the system.149,150 For all kinds of properties related to derivatives of the total energy, DFPT is derived from the following extended energy functional:

E[\{\phi_i\}] = E^{KS}[\{\phi_i\}] + \lambda E^{pert}[\{\phi_i\}],
(49)

where the external perturbation is added in the form of a functional and λ is a small perturbative parameter representing the strength of the interaction with the static external field.151,152 The minimum of the functional is expanded perturbatively in powers of λ as

E = E^{(0)} + \lambda E^{(1)} + \lambda^2 E^{(2)} + \cdots,
(50)

whereas the corresponding minimizing orbitals are

\phi_i = \phi_i^{(0)} + \lambda \phi_i^{(1)} + \lambda^2 \phi_i^{(2)} + \cdots.
(51)

If the expansion of the wavefunction up to order (n − 1) is known, the variational principle for the 2n-th-order derivative of the energy is given by

E^{(2n)} = \min_{\{\phi_i^{(n)}\}} E\Big[ \Big\{ \sum_{k=0}^{n} \lambda^k \phi_i^{(k)} \Big\} \Big]
(52)

under the constraint

\sum_{k=0}^{n} \langle \phi_i^{(n-k)} | \phi_j^{(k)} \rangle = 0.
(53)

At zeroth order, the solution is obtained from the ground state KS equations. The second-order energy is variational in the first-order wavefunctions and is obtained as

E^{(2)}[\{\phi_i^{(0)}\}, \{\phi_i^{(1)}\}] = \sum_{ij} \langle \phi_i^{(1)} | H^{(0)} \delta_{ij} - \Lambda_{ij} | \phi_j^{(1)} \rangle + \sum_i \Big[ \Big\langle \phi_i^{(1)} \Big| \frac{\delta E^{pert}}{\delta \langle \phi_i^{(0)} |} + \frac{\delta E^{pert}}{\delta | \phi_i^{(0)} \rangle} \Big| \phi_i^{(1)} \Big\rangle \Big] + \frac{1}{2} \iint \mathrm{d}\mathbf{r} \, \mathrm{d}\mathbf{r}' \, K(\mathbf{r}, \mathbf{r}') \, n^{(1)}(\mathbf{r}) \, n^{(1)}(\mathbf{r}').
(54)

The electron density is also expanded in powers of λ, and the first-order term reads as

n^{(1)}(\mathbf{r}) = \sum_i f_i \big[ \phi_i^{(0)*}(\mathbf{r}) \, \phi_i^{(1)}(\mathbf{r}) + \phi_i^{(1)*}(\mathbf{r}) \, \phi_i^{(0)}(\mathbf{r}) \big].
(55)

The Lagrange multipliers Λij are the matrix elements of the zeroth-order Hamiltonian, which is the KS Hamiltonian. Hence,

\Lambda_{ij} = \langle \phi_i^{(0)} | H^{(0)} | \phi_j^{(0)} \rangle,
(56)

and the second-order energy kernel is

K(\mathbf{r}, \mathbf{r}') = \frac{\delta^2 E^{Hxc}[n]}{\delta n(\mathbf{r}) \, \delta n(\mathbf{r}')} \bigg|_{n^{(0)}},
(57)

where EHxc represents the sum of the Hartree and the XC energy functionals. Thus, the evaluation of the kernel requires the second-order functional derivative of the XC functionals.

The second-order energy is variational with respect to {φ_i^(1)}, and the orthonormality condition of the total wavefunction gives at first order

\langle \phi_i^{(0)} | \phi_j^{(1)} \rangle + \langle \phi_i^{(1)} | \phi_j^{(0)} \rangle = 0, \quad \forall \, i, j.
(58)

This also implies the conservation of the total charge.

The perturbation functional can often be written as the expectation value of a perturbation Hamiltonian

$H^{(1)} = \frac{\delta E^{\mathrm{pert}}}{\delta\langle\phi_i^{(0)}|} + \int d\mathbf{r}'\, K(\mathbf{r},\mathbf{r}')\, n^{(1)}(\mathbf{r}').$
(59)

However, the formulation through an arbitrary functional also allows orbital specific perturbations. The stationary condition then yields the inhomogeneous, non-linear system for {ϕi(1)}, i.e.,

$\sum_j \left( H^{(0)}\delta_{ij} - \Lambda_{ij} \right) |\phi_j^{(1)}\rangle = -\hat{P}\left[ \int d\mathbf{r}'\, K(\mathbf{r},\mathbf{r}')\, n^{(1)}(\mathbf{r}')\, |\phi_i^{(0)}\rangle + \frac{\delta E^{\mathrm{pert}}}{\delta\langle\phi_i^{(0)}|} \right],$
(60)

where $\hat{P} = 1 - \sum_j |\phi_j^{(0)}\rangle\langle\phi_j^{(0)}|$ is the projector onto the unoccupied states. Note that the right-hand side still depends on the {ϕi(1)} via the perturbation density n(1). In our implementation, the problem is solved directly using a preconditioned conjugate-gradient (CG) minimization algorithm.
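The structure of this linear response problem can be sketched with a toy model: a random Hermitian matrix stands in for H(0), the right-hand side is arbitrary, and plain (unpreconditioned) CG is restricted to the unoccupied subspace by the projector. All quantities below are invented stand-ins, not CP2K data structures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 50x50 Hermitian "Hamiltonian" standing in for H^(0)
n, nocc = 50, 5
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
eps, C = np.linalg.eigh(H)
C_occ = C[:, :nocc]

# Projector onto the unoccupied states: P = 1 - sum_j |phi_j^(0)><phi_j^(0)|
P = np.eye(n) - C_occ @ C_occ.T

def response_cg(i, rhs, tol=1e-10, maxiter=500):
    """Solve (H - eps_i)|phi_i^(1)> = -P|rhs> by conjugate gradients,
    keeping every iterate inside the unoccupied subspace."""
    b = -P @ rhs
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = P @ ((H - eps[i] * np.eye(n)) @ p)   # operator restricted by P
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rhs = rng.standard_normal(n)
phi1 = response_cg(0, rhs)
# Per Eq. (53), the response orbital has no component on the occupied manifold
print(np.max(np.abs(C_occ.T @ phi1)))
```

Because the shifted operator is positive definite on the unoccupied subspace for the lowest occupied orbital, CG is guaranteed to converge there; a real implementation would add a preconditioner as described in the text.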

One case where the perturbation cannot be expressed in a Hamiltonian form is the presence of an external electric field, which couples to the electric polarization $\mathbf{P}^{\mathrm{el}} = e\langle\mathbf{r}\rangle$, where ⟨r⟩ is the expectation value of the position operator for the system of N electrons. In the case of periodic systems, the position operator is ill-defined, and we use the modern theory of polarization in the Γ-point-only formulation to write the perturbation in terms of the Berry phase,

$\gamma_\alpha = \mathrm{Im}\,\ln\det\mathbf{Q}^{(\alpha)},$
(61)

where the matrix is defined as

$Q^{(\alpha)}_{ij} = \langle \phi_i |\, \exp\!\left[\, i\, 2\pi\, (\mathbf{h}^{-1}\mathbf{r})_\alpha \,\right] \,| \phi_j \rangle$
(62)

and h = [a, b, c] is the 3 × 3 matrix defining the simulation cell and hα = (aα, bα, cα).153–157 The electric dipole is then given by

$P^{\mathrm{el}}_\alpha = \frac{e}{2\pi}\, h_\alpha\, \gamma_\alpha.$
(63)

Through the coupling to an external electric field Eext, this induces a perturbation of the type

$\lambda E^{\mathrm{pert}}[\{\phi_i\}] = -\sum_\alpha E^{\mathrm{ext}}_\alpha\, P^{\mathrm{el}}_\alpha,$
(64)

where the perturbative parameter is the field component $E^{\mathrm{ext}}_\alpha$. The functional derivative $\delta E^{\mathrm{pert}}/\delta\langle\phi_i^{(0)}|$ can be evaluated using the formula for the derivative of the determinant of a matrix with respect to a generic variable,152 which gives the perturbative term as

$\frac{\delta \ln\det\mathbf{Q}^{(\alpha)}}{\delta\langle\phi_i^{(0)}|} = \sum_j \left[\mathbf{Q}^{(\alpha)}\right]^{-1}_{ij}\, \exp\!\left[\, i\, 2\pi\, (\mathbf{h}^{-1}\mathbf{r})_\alpha \,\right] |\phi_j\rangle.$
(65)

Once the first-order correction to the set of the KS orbitals has been calculated, the induced polarization is

$\delta P^{\mathrm{el}}_\alpha = \sum_\beta \frac{e}{2\pi}\, h_\beta\, \mathrm{Im} \sum_{ij} \left[ \langle \phi_{i\beta}^{(1)} |\, e^{\, i 2\pi (\mathbf{h}^{-1}\mathbf{r})_\alpha} \,| \phi_j^{(0)} \rangle + \langle \phi_i^{(0)} |\, e^{\, i 2\pi (\mathbf{h}^{-1}\mathbf{r})_\alpha} \,| \phi_{j\beta}^{(1)} \rangle \right] \left[\mathbf{Q}^{(\alpha)}\right]^{-1}_{ij}\, E_\beta,$
(66)

while the elements of the polarizability tensor are obtained as $\alpha_{\alpha\beta} = \partial P^{\mathrm{el}}_\alpha / \partial E_\beta$.

The polarizability can be viewed as the deformability of the electron cloud of the molecule by the electric field. For a molecular vibration to be Raman active, the vibration must be accompanied by a change in the polarizability. In the usual Placzek theory, ordinary Raman scattering intensities can be expressed in terms of the isotropic transition polarizability $\alpha_i = \frac{1}{3}\mathrm{Tr}[\alpha]$ and the anisotropic transition polarizability $\alpha_a^2 = \sum_{\alpha\beta} \frac{1}{2}\left(3\,\alpha_{\alpha\beta}\alpha_{\alpha\beta} - \alpha_{\alpha\alpha}\alpha_{\beta\beta}\right)$.158 The Raman scattering cross section can be related to the dynamical autocorrelation function of the polarizability tensor. Along finite-temperature AIMD simulations, the polarizability can be calculated as a function of time.159,160 Just as vibrational spectra are obtained from the temporal Fourier transform of the velocity autocorrelation function, and IR spectra from that of the dipole autocorrelation function,161 the depolarized Raman intensity can be calculated from the autocorrelation of the polarizability components.162
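The last step can be sketched numerically: given a polarizability time series, the autocorrelation function and its temporal Fourier transform recover the underlying vibrational frequency. The time series below is synthetic (a damped oscillation with an invented frequency and damping time), not AIMD output:

```python
import numpy as np

# Synthetic polarizability component alpha_xy(t): a damped 30 THz oscillation
dt = 0.5e-15                      # time step in seconds (hypothetical)
nsteps = 4096
t = np.arange(nsteps) * dt
nu0 = 3.0e13                      # "vibrational" frequency in Hz (hypothetical)
alpha_t = np.cos(2 * np.pi * nu0 * t) * np.exp(-t / 2.0e-13)

# Autocorrelation via the Wiener-Khinchin theorem (zero-padded FFT)
f = np.fft.rfft(alpha_t, n=2 * nsteps)
acf = np.fft.irfft(f * np.conj(f))[:nsteps] / nsteps

# Spectrum: temporal Fourier transform of the autocorrelation function
spectrum = np.abs(np.fft.rfft(acf))
freqs = np.fft.rfftfreq(nsteps, d=dt)
peak = freqs[np.argmax(spectrum)]
print(f"peak at {peak:.3e} Hz")
```

The peak of the spectrum lands at the input frequency to within the grid resolution 1/(nsteps·dt); a production workflow would additionally apply the appropriate quantum correction factors and prefactors for absolute intensities.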

The development of DFPT within the GAPW formalism allows for an all-electron description, which is important when the induced current density generated by an external static magnetic perturbation is calculated. The induced current density determines at any nucleus A the nuclear magnetic resonance (NMR) chemical shift

$\sigma^A_{\alpha\beta} = \frac{1}{c} \int \left[ \frac{\mathbf{r}-\mathbf{R}_A}{|\mathbf{r}-\mathbf{R}_A|^3} \times \mathbf{j}_\alpha(\mathbf{r}) \right]_\beta d\mathbf{r}$
(67)

and, for systems with net electronic spin 1/2, the electron paramagnetic resonance (EPR) g-tensor

$g_{\alpha\beta} = g_e\,\delta_{\alpha\beta} + \Delta g^{\mathrm{ZKE}}_{\alpha\beta} + \Delta g^{\mathrm{SO}}_{\alpha\beta} + \Delta g^{\mathrm{SOO}}_{\alpha\beta}.$
(68)

In the above expressions, RA is the position of the nucleus, jα is the current density induced by a constant external magnetic field applied along the α axis, and ge is the free electron g-value. Among the different contributions to the g-tensor, the current density dependent ones are the spin–orbit (SO) interaction,

$\Delta g^{\mathrm{SO}}_{\alpha\beta} = \frac{g_e - 1}{c} \int \left[ \mathbf{j}^{\uparrow}_\alpha(\mathbf{r}) \times \nabla V^{\uparrow}_{\mathrm{eff}}(\mathbf{r}) - \mathbf{j}^{\downarrow}_\alpha(\mathbf{r}) \times \nabla V^{\downarrow}_{\mathrm{eff}}(\mathbf{r}) \right]_\beta d\mathbf{r}$
(69)

and the spin–other–orbit (SOO) interaction

$\Delta g^{\mathrm{SOO}}_{\alpha\beta} = 2 \int B^{\mathrm{corr}}_{\alpha\beta}(\mathbf{r})\, n^{\mathrm{spin}}(\mathbf{r})\, d\mathbf{r},$
(70)

where

$B^{\mathrm{corr}}_{\alpha\beta}(\mathbf{r}) = \frac{1}{c} \int \left[ \frac{\mathbf{r}-\mathbf{r}'}{|\mathbf{r}-\mathbf{r}'|^3} \times \left( \mathbf{j}_\alpha(\mathbf{r}') - \mathbf{j}^{\mathrm{spin}}_\alpha(\mathbf{r}') \right) \right]_\beta d\mathbf{r}',$
(71)

which also depends on the spin density $n^{\mathrm{spin}}$ and the spin-current density $\mathbf{j}^{\mathrm{spin}}$. Therein, $V^{\uparrow}_{\mathrm{eff}}$ is the effective potential in which the spin-up electrons are thought to move (and similarly $V^{\downarrow}_{\mathrm{eff}}$ for the spin-down electrons), whereas $B^{\mathrm{corr}}_{\alpha\beta}$ is the β component of the magnetic field originating from the current density induced along α. The SO coupling is the leading correction term in the computation of the g-tensor. It is relativistic in origin and therefore becomes much more important for heavy elements. In the current CP2K implementation, the SO term is obtained by integrating the induced spin-dependent current densities and the gradient of the effective potential over the simulation cell. The effective one-electron operator replaces the computationally demanding two-electron integrals.163 A detailed discussion of the impact of the various relativistic and SO approximations implemented in different codes is provided by Deyne et al.164

In the GAPW representation, the induced current density is decomposed with the same scheme applied for the electron density, distinguishing among the soft contribution to the total current density and the local hard and local soft contributions, i.e.,

$\mathbf{j}(\mathbf{r}) = \tilde{\mathbf{j}}(\mathbf{r}) + \sum_A \left[ \mathbf{j}_A(\mathbf{r}) - \tilde{\mathbf{j}}_A(\mathbf{r}) \right].$
(72)

In the linear response approach, the perturbation Hamiltonian at the first-order in the field strength is

$H^{(1)} = \frac{e}{m}\, \mathbf{p}\cdot\mathbf{A}(\mathbf{r}),$
(73)

where p is the momentum operator and A is the vector potential representing the field B. Thus,

$\mathbf{A}(\mathbf{r}) = \frac{1}{2}\left(\mathbf{r} - \mathbf{d}(\mathbf{r})\right) \times \mathbf{B},$
(74)

with the cyclic variable d(r) being the gauge origin. The induced current density is calculated as the sum of orbital contributions $\mathbf{j}_i$ and can be separated into a diamagnetic term $\mathbf{j}^{\mathrm{d}}_i(\mathbf{r}) = \frac{e}{2m}\,\mathbf{A}(\mathbf{r})\,|\phi_i^{(0)}(\mathbf{r})|^2$ and a paramagnetic term $\mathbf{j}^{\mathrm{p}}_i(\mathbf{r}) = \frac{e}{2m}\,\langle\phi_i^{(0)}|\left[\,\mathbf{p}\,|\mathbf{r}\rangle\langle\mathbf{r}| + |\mathbf{r}\rangle\langle\mathbf{r}|\,\mathbf{p}\,\right]|\phi_i^{(1)}\rangle$. Both contributions are individually gauge dependent, whereas the total current is gauge independent. The position operator appears in the definition of the perturbation operators and of the current density. In order to deal with periodic systems, where the multiplicative position operator is not a valid operator, we first perform a unitary transformation of the ground state orbitals to obtain their maximally localized Wannier function (MLWF) counterparts.165–167 We then use an alternative definition of the position operator, which is unique for each localized orbital and shows a sawtooth-shaped profile centered at the orbital's Wannier center.168–170

Since we work with real ground state MOs, the current density vanishes in the unperturbed state. Moreover, the first-order perturbation wavefunction is purely imaginary, which results in a vanishing first-order change in the electronic density n(1). This significantly simplifies the perturbation energy functional, since the second-order energy kernel term drops out. The system of linear equations determining the matrix C(1) of expansion coefficients of the linear response orbitals reads as

$\sum_i \sum_\mu \left[ H^{(0)}_{\nu\mu}\,\delta_{ij} - S_{\nu\mu}\, \langle\phi_i^{(0)}|H^{(0)}|\phi_j^{(0)}\rangle \right] C^{(1)}_{\mu i} = -\sum_\mu H^{(1)}_{(j)\,\nu\mu}\, C^{(0)}_{\mu j},$
(75)

where i and j are the orbital indexes, ν and μ are the basis set function indexes, and $S_{\nu\mu}$ is an element of the overlap matrix. The optional subindex (j), labeling the matrix element of the perturbation operator, indicates that the perturbation might be orbital dependent. In our CP2K implementation,43 the formalism proposed by Sebastiani et al. is employed,168,169,171 i.e., we split the perturbation into three types of operators: $\mathbf{L} = (\mathbf{r} - \mathbf{d}_j) \times \mathbf{p}$, the orbital angular momentum operator; $\mathbf{p}$, the momentum operator; and $\Delta_i = (\mathbf{d}_i - \mathbf{d}_j) \times \mathbf{p}$, the full correction operator. The vector $\mathbf{d}_j$ is the Wannier center associated with the unperturbed jth orbital, thus making L and $\Delta_i$ dependent on the unperturbed orbital to which they are applied. By using the Wannier center as a relative origin in the definition of L, we introduce an individual reference system, which is then corrected by $\Delta_i$. As a consequence, the response orbitals are given by nine sets of expansion coefficients, since for each operator all three Cartesian components need to be considered individually. The evaluation of the orbital angular momentum and momentum contributions can be done at the computational cost of just one total energy calculation. The full correction term, instead, requires one such calculation for each electronic state. This term does not vanish unless all $\mathbf{d}_i$ are equal. However, in most circumstances, this correction is expected to be small, since it describes the reaction of state i to the perturbation of state j, which becomes negligible when the two states are far apart. Once all contributions have been calculated, the x component of the current density induced by the magnetic field along y is

$j_{xy}(\mathbf{r}) = \frac{1}{2c} \Bigg( \sum_i \sum_{\nu\mu} C^{(0)}_{\nu i} \Big[ C^{L_y}_{\mu i} + \big(\mathbf{d}(\mathbf{r}) - \mathbf{d}_i\big)_x C^{p_z}_{\mu i} - \big(\mathbf{d}(\mathbf{r}) - \mathbf{d}_i\big)_z C^{p_x}_{\mu i} - C^{\Delta_i y}_{\mu i} \Big] \times \Big[ \varphi_\nu(\mathbf{r})\, \nabla_x \varphi_\mu(\mathbf{r}) - \nabla_x \varphi_\nu(\mathbf{r})\, \varphi_\mu(\mathbf{r}) \Big] + \big(\mathbf{r} - \mathbf{d}(\mathbf{r})\big)_z\, n^{(0)}(\mathbf{r}) \Bigg),$
(76)

where the first term is the paramagnetic contribution and the second term is the diamagnetic one.

The convergence of the magnetic properties with respect to the Gaussian basis set size depends strongly on the choice of the gauge. The available options in CP2K are the individual gauge for atoms in molecules (IGAIM) approach introduced by Keith and Bader172 and the continuous set of gauge transformations (CSGT) approach.173 The diamagnetic part of the current density vanishes identically when the CSGT approach is used, i.e., d(r) = r. Yet, this advantage is weakened by the large basis set required to obtain an accurate description of the current density close to the nuclei, which typically limits the accuracy of the NMR chemical shift. In the IGAIM approach, instead, the gauge is taken at the closest nuclear center. Condensed-phase applications involve the calculation of NMR spectra of energy-storage materials,174,175 biomolecular176–178 and hydrogen-bonded systems,179–182 as well as the Overhauser dynamic nuclear polarization in solids.183 Large-scale computations of NMR chemical shifts for extended paramagnetic solids were reported by Mondal et al.44 They showed that the contact, pseudocontact, and orbital-shift contributions to paramagnetic NMR can be calculated by combining hyperfine couplings obtained with hybrid functionals with g-tensors and orbital shieldings computed using gradient-corrected functionals.184

The dynamics and properties of many-body systems in the presence of time-dependent potentials, such as electric or magnetic fields, can be investigated via TD-DFT.

Linear-response TD-DFT (LR-TDDFT)185 is an inexpensive correlated method to compute vertical transition energies and oscillator strengths between the ground and singly excited electronic states. Optical properties are computed as the linear response of the system to a perturbation caused by an applied weak electromagnetic field.

The current implementation relies on Tamm-Dancoff and adiabatic approximations.186 The Tamm-Dancoff approximation ignores electron de-excitation channels,187,188 thus reducing the LR-TDDFT equations to a standard Hermitian eigenproblem,189 i.e.,

AX=ωX,
(77)

where ω is the transition energy, X is a response eigenvector, and A is a response operator. In addition, the adiabatic approximation postulates the independence of the employed XC functional of time and leads to the following form for the matrix elements of the response operator:190 

$A_{ia\sigma,jb\tau} = \delta_{ij}\delta_{ab}\delta_{\sigma\tau}\left(\epsilon_{a\sigma} - \epsilon_{i\sigma}\right) + (i_\sigma a_\sigma | j_\tau b_\tau) - c_{\mathrm{HFX}}\,\delta_{\sigma\tau}\,(i_\sigma j_\sigma | a_\tau b_\tau) + (i_\sigma a_\sigma | f_{\mathrm{xc};\sigma\tau} | j_\tau b_\tau).$
(78)

In the above equation, the indices i and j stand for occupied spin-orbitals, a and b indicate virtual spin-orbitals, and σ and τ refer to specific spin components. The terms $(i_\sigma a_\sigma | j_\tau b_\tau)$ and $(i_\sigma a_\sigma | f_{\mathrm{xc};\sigma\tau} | j_\tau b_\tau)$ are standard electron repulsion and XC integrals over the KS orbitals {φ(r)} with corresponding KS orbital energies ϵ; hence,

$(i_\sigma a_\sigma | j_\tau b_\tau) = \int\!\!\int \varphi^*_{i\sigma}(\mathbf{r})\, \varphi_{a\sigma}(\mathbf{r})\, \frac{1}{|\mathbf{r}-\mathbf{r}'|}\, \varphi^*_{j\tau}(\mathbf{r}')\, \varphi_{b\tau}(\mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}'$
(79a)

and

$(i_\sigma a_\sigma | f_{\mathrm{xc};\sigma\tau} | j_\tau b_\tau) = \int\!\!\int \varphi^*_{i\sigma}(\mathbf{r})\, \varphi_{a\sigma}(\mathbf{r})\, f_{\mathrm{xc};\sigma\tau}(\mathbf{r},\mathbf{r}')\, \varphi^*_{j\tau}(\mathbf{r}')\, \varphi_{b\tau}(\mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}'.$
(79b)

Here, the XC kernel fxc;στ(r, r′) is simply the second functional derivative of the XC functional Exc, evaluated at the ground state electron density n(0)(r);191 hence,

$f_{\mathrm{xc};\sigma\tau}(\mathbf{r},\mathbf{r}') = \left.\frac{\delta^2 E_{\mathrm{xc}}[n]}{\delta n_\sigma(\mathbf{r})\,\delta n_\tau(\mathbf{r}')}\right|_{n = n^{(0)}}.$
(80)

To solve Eq. (77), the current implementation uses a block Davidson iterative method.192 This scheme is flexible enough and allows us to tune the performance of the algorithm. In particular, it supports hybrid exchange functionals along with many acceleration techniques (see Sec. III), such as integral screening,51 truncated Coulomb operator,52 and ADMM.53 Whereas in most cases the same XC functional is used to compute both the ground state electron density and the XC kernel, separate functionals are also supported. This can be used, for instance, to apply a long-range correction to the truncated Coulomb operator during the LR-TDDFT stage,52 when such a correction has been omitted during the reference ground state calculation. The action of the response operator on the trial vector X for a number of excited states may also be computed simultaneously. This improves load-balancing and reduces communication costs, allowing a larger number of CPU cores to be effectively utilized.
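A minimal dense sketch of a Davidson-type iterative solver for the Hermitian eigenproblem of Eq. (77) is given below; the diagonally dominant test matrix, subspace sizes, and the simple diagonal preconditioner are illustrative choices only, not the CP2K implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy diagonally dominant symmetric "response matrix" A standing in for Eq. (77)
n = 400
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
A = (A + A.T) / 2

def davidson(A, k=3, tol=1e-8, max_space=60):
    """Davidson iteration for the k lowest eigenpairs, with a diagonal
    preconditioner and subspace collapse when the space grows too large."""
    n = A.shape[0]
    V = np.eye(n, k)                       # unit-vector starting guesses
    d = np.diag(A)
    for _ in range(200):
        V, _ = np.linalg.qr(V)             # orthonormalize the search space
        AV = A @ V
        Hs = V.T @ AV                      # subspace (Rayleigh) matrix
        w, y = np.linalg.eigh(Hs)
        X = V @ y[:, :k]                   # Ritz vectors
        R = A @ X - X * w[:k]              # residuals, one column per root
        if np.max(np.linalg.norm(R, axis=0)) < tol:
            return w[:k], X
        T = R / (w[:k] - d[:, None] + 1e-12)   # diagonal preconditioning
        V = np.hstack([V, T])
        if V.shape[1] > max_space:
            V = X                          # collapse onto the Ritz vectors
    return w[:k], X

w, X = davidson(A)
print(w)
```

For TDA matrices, the diagonal is dominated by the orbital energy differences of Eq. (78), which is why this simple preconditioner works well in practice.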

1. Applications

The favorable scaling and performance of the LR-TDDFPT code have been exploited to calculate the excitation energies of various systems with 1D, 2D, and 3D periodicities, such as cationic defects in aluminosilicate imogolite nanotubes,193 as well as surface and bulk canonical vacancy defects in MgO and HfO2.186 Throughout, the dependence of results on the fraction of Hartree–Fock exchange was explored and the accuracy of the ADMM approximation was verified. The performance was found to be comparable to ground state calculations for systems of ≈1000 atoms, which were shown to be sufficient to converge localized transitions from isolated defects, within these medium to wide bandgap materials.

As an alternative to perturbation-based methods, real-time propagation TDDFT is also available in CP2K. The real-time TDDFT formalism allows us to investigate non-linear effects and can be used to gain direct insights into processes driven by electron dynamics. For systems in which the coupling between electronic and nuclear motion is important, CP2K provides the option to propagate nuclei and electrons simultaneously using the Ehrenfest scheme. Both methods are implemented in a cubic- and a linear-scaling form. The cubic-scaling implementation is based on MO coefficients (MO-RTP), whereas the linear-scaling version acts on the density matrix (P-RTP).

While the derivation of the required equations for origin-independent basis functions is rather straightforward, additional terms arise for atom-centered basis sets.194 The time evolution of the MO coefficients in a non-orthonormal Gaussian basis reads as

$\dot{a}_{\alpha j} = -\sum_{\beta\gamma} S^{-1}_{\alpha\beta} \left( i H_{\beta\gamma} + B_{\beta\gamma} \right) a_{\gamma j},$
(81)

whereas the corresponding nuclear equation of motion is given by

$M_I \ddot{\mathbf{R}}_I = -\frac{\partial U(\mathbf{R},t)}{\partial \mathbf{R}_I} - \sum_{j=1}^{N_e} \sum_{\alpha,\beta} a^*_{\alpha j} \left( D^I_{\alpha\beta} + \frac{\partial H_{\alpha\beta}}{\partial \mathbf{R}_I} \right) a_{\beta j}.$
(82)

Therein, S and H are the overlap and KS matrices, whereas MI and RI are the position and mass of ion I, and U(R,t) is the potential energy of the ion–ion interaction. The terms involving these variables represent the basis set independent part of the equations of motion. The additional terms containing matrices B and D are arising as a consequence of the origin dependence and are defined as follows:

$\mathbf{D}^I = \mathbf{B}^{I+}\left(\mathbf{S}^{-1}\mathbf{H} - i\,\mathbf{S}^{-1}\mathbf{B}\right) + i\,\mathbf{C}^{I+} + \left(\mathbf{H}\mathbf{S}^{-1} + i\,\mathbf{B}^{+}\mathbf{S}^{-1}\right)\mathbf{B}^I + i\,\mathbf{C}^I,$
(83)

with

$B_{\alpha\beta} = \langle \phi_\alpha | \tfrac{d}{dt}\phi_\beta \rangle, \quad B^{+}_{\alpha\beta} = \langle \tfrac{d}{dt}\phi_\alpha | \phi_\beta \rangle, \quad B^{I}_{\alpha\beta} = \langle \phi_\alpha | \tfrac{d}{d\mathbf{R}_I}\phi_\beta \rangle, \quad B^{I+}_{\alpha\beta} = \langle \tfrac{d}{d\mathbf{R}_I}\phi_\alpha | \phi_\beta \rangle, \quad C^{I}_{\alpha\beta} = \langle \tfrac{d}{dt}\phi_\alpha | \tfrac{d}{d\mathbf{R}_I}\phi_\beta \rangle, \quad C^{I+}_{\alpha\beta} = \langle \tfrac{d}{d\mathbf{R}_I}\phi_\alpha | \tfrac{d}{dt}\phi_\beta \rangle.$
(84)

For simulations with fixed ionic positions, i.e., when only the electronic wavefunction is propagated, the B and D terms vanish and Eq. (81) simplifies to just the KS term. The two most important propagators for the electronic wavefunction in CP2K are the exponential midpoint and the enforced time reversal symmetry (ETRS) propagators. Both propagators are based on matrix exponentials. Their explicit computation, however, can easily be avoided for both the MO-RTP and P-RTP techniques. Within the MO-RTP scheme, the construction of a Krylov subspace Kn(X, MO) together with Arnoldi's method allows for the direct computation of the action of the propagator on the MOs at a computational complexity of O(N2M). For the P-RTP approach, the exponential is applied from both sides, which allows an expansion into a series similar to the Baker–Campbell–Hausdorff expansion. This expansion converges rapidly and in theory only requires sparse matrix–matrix multiplications. Unfortunately, pure linear scaling Ehrenfest dynamics seems to be impossible due to the non-exponential decay of the density matrix during such simulations.195 This leads to a densification of the involved matrices and eventually cubic scaling with system size. However, the linear-scaling version can be coupled with subsystem DFT to achieve true linear scaling behavior.
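The action of the exponential midpoint propagator can be illustrated on a toy Hermitian matrix (orthonormal basis, S = 1, time-independent H, all values random); the sketch verifies that the unitary step preserves the orthonormality of the propagated MOs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Hermitian KS matrix in an orthonormal basis (S = 1 for simplicity)
n = 30
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2

def expm_antihermitian(M):
    """exp(M) for anti-Hermitian M = -i*dt*H via eigendecomposition of dt*H."""
    w, U = np.linalg.eigh(1j * M)          # 1j*M = dt*H is Hermitian
    return U @ np.diag(np.exp(-1j * w)) @ U.conj().T

dt = 0.01
nocc = 4
C = np.linalg.qr(rng.standard_normal((n, nocc)) + 0j)[0]   # initial MOs

# Exponential midpoint step: C(t+dt) = exp(-i*dt*H(t+dt/2)) C(t); here H is
# time independent, so the midpoint Hamiltonian is simply H
U_prop = expm_antihermitian(-1j * dt * H)
C_new = U_prop @ C

# Unitary propagation preserves MO orthonormality
print(np.max(np.abs(C_new.conj().T @ C_new - np.eye(nocc))))
```

In CP2K the exponential is never formed explicitly; as described above, its action is evaluated via a Krylov subspace (MO-RTP) or a BCH-like series (P-RTP), but the dense eigendecomposition used here makes the unitarity of the step easy to check.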

Subsystem DFT, as implemented in CP2K, follows the approach of Gordon and Kim.196,197 In this approach, the different subsystems are minimized independently with the XC functional of choice. The coupling between the different subsystems is added via a correction term using a kinetic energy functional,

$E^{\mathrm{corr}} = E_s[\rho] - \sum_i^{n_{\mathrm{sub}}} E_s[\rho_i],$
(85)

where ρ is the electron density of the full system and ρi is the electron density of subsystem i. Using an orbital-free density functional to compute the interaction energy does not affect the structure of the overlap matrix, which is block diagonal in this approach. If P-RTP is applied to the Kim–Gordon density matrix, the block-diagonal structure is preserved and linear scaling with the number of subsystems is achieved.

After the initial Hamiltonian matrix for the selected method has been built by CP2K, such as the KS matrix K in the case of a DFT-based method using the Quickstep module, the calculation of the total (ground state) energy for the given atomic configuration is the next task. This requires an iterative self-consistent field (SCF) procedure, as the Hamiltonian usually depends on the electronic density. In each SCF step, the eigenvalues and eigenvectors of at least the occupied MOs have to be calculated. Various eigensolver schemes are provided by CP2K for that task:

  • Traditional Diagonalization (TD),

  • Pseudodiagonalization (PD),198 

  • Orbital Transformation (OT) method,199 and

  • purification methods.200 

Among these, the OT method is the method of choice concerning computational efficiency and scalability for growing system sizes. However, OT requires fixed integer MO occupations. Therefore, it is not applicable to systems with a very small or no bandgap, such as metallic systems, which need fractional ("smeared") MO occupations. For very large systems in which even the scaling of the OT method becomes unfavorable, one has to resort to linear-scaling methods (see Sec. VIII E). By contrast, TD can also be applied with fractional MO occupations, but its cubic scaling O(N³) limits the accessible system sizes. In that respect, PD may provide significant speed-ups (a factor of 2 or more) once a pre-converged solution has been obtained with TD.

In the following, we will restrict the description of the implemented eigensolver methods to spin-restricted systems, since the generalization to spin-unrestricted, i.e., spin-polarized systems is straightforward and CP2K can deal with both types of systems using each of these methods.

The TD scheme in CP2K employs an eigensolver either from the parallel program library ScaLAPACK (Scalable Linear Algebra PACKage)201 or from the ELPA (Eigenvalue soLvers for Petascale Applications)202,203 library to solve the general eigenvalue problem

KC=SCϵ,
(86)

where K is the KS and S is the overlap matrix. The eigenvectors C represent the MO coefficients, and ϵ are the corresponding MO eigenvalues. Unlike in PW codes, the overlap matrix S is not the unit matrix, since CP2K/Quickstep employs atom-centered basis sets of non-orthogonal Gaussian-type functions (see Sec. II C). Thus, the eigenvalue problem is transformed to its special form

$\mathbf{K}\mathbf{C} = \mathbf{U}^T\mathbf{U}\,\mathbf{C}\,\boldsymbol{\epsilon},$
(87a)
$(\mathbf{U}^T)^{-1}\,\mathbf{K}\,\mathbf{U}^{-1}\,(\mathbf{U}\mathbf{C}) = (\mathbf{U}\mathbf{C})\,\boldsymbol{\epsilon},$
(87b)
$\mathbf{K}'\,\mathbf{C}' = \mathbf{C}'\,\boldsymbol{\epsilon}, \quad \text{with } \mathbf{K}' = (\mathbf{U}^T)^{-1}\mathbf{K}\mathbf{U}^{-1} \text{ and } \mathbf{C}' = \mathbf{U}\mathbf{C},$
(87c)

using a Cholesky decomposition of the overlap matrix

S=UTU
(88)

as the default method for that purpose. Now, (87c) can simply be solved by a diagonalization of K′. The MO coefficient matrix C in the non-orthogonal basis is finally obtained by the back-transformation,

$\mathbf{C} = \mathbf{U}^{-1}\,\mathbf{C}'.$
(89)

Alternatively, a symmetric Löwdin orthogonalization instead of a Cholesky decomposition can be applied,204 i.e.,

U=S1/2.
(90)

The calculation of S1/2 as implemented in CP2K involves a diagonalization of S, which is computationally more expensive than a Cholesky decomposition. On the other hand, linear dependencies in the basis set introduced by small Gaussian function exponents can be detected and eliminated when S is diagonalized. Eigenvalues of S smaller than 10−5 usually indicate significant linear dependencies in the basis set, and filtering of the corresponding eigenvectors can help ameliorate numerical difficulties during the SCF iteration procedure. For small systems, the choice of the orthogonalization has no crucial impact on the performance, since it has to be performed only once for each configuration during the initialization of the SCF run.
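The Cholesky route of Eqs. (86)–(89) can be sketched in a few lines; the matrices below are random stand-ins for K and S, not Quickstep matrices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy symmetric "KS" matrix K and SPD "overlap" matrix S
n = 40
A = rng.standard_normal((n, n)); K = (A + A.T) / 2
B = rng.standard_normal((n, n)); S = B @ B.T + n * np.eye(n)

# Eqs. (86)-(89): S = U^T U, K' = U^-T K U^-1, solve K'C' = C' eps, C = U^-1 C'
U = np.linalg.cholesky(S).T                 # upper-triangular Cholesky factor
Uinv = np.linalg.inv(U)
Kp = Uinv.T @ K @ Uinv                      # transformed (special) problem
eps, Cp = np.linalg.eigh(Kp)                # standard symmetric eigenproblem
C = Uinv @ Cp                               # back-transformation

# C solves the generalized problem K C = S C eps and is S-orthonormal
print(np.max(np.abs(K @ C - S @ C @ np.diag(eps))))
```

A production code would use triangular solves instead of forming the explicit inverse of U, but the algebra is the same; the S-orthonormality of the back-transformed eigenvectors follows from the orthonormality of C′.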

Only the occupied MOs are required for the build-up of the density matrix P in the AO basis

P=2CoccCoccT.
(91)

This saves not only memory but also computational time since the orthonormalization of the eigenvectors is a time-consuming step. Usually, only 10%–20% of the orbitals are occupied when standard Gaussian basis sets are employed with CP2K.

The TD scheme is combined with methods to improve the convergence of the SCF iteration procedure. The most efficient SCF convergence acceleration is achieved by the direct inversion in the iterative sub-space (DIIS),205,206 exploiting the commutator relation

$\mathbf{e} = \mathbf{K}\mathbf{P}\mathbf{S} - \mathbf{S}\mathbf{P}\mathbf{K}$
(92)

between the KS and the density matrix, where the error matrix e is zero for the converged density. The DIIS method can be very efficient in the number of iterations required to reach convergence starting from a sufficiently pre-converged density, which is significant if the cost of constructing the Hamiltonian matrix is larger than the cost of diagonalization.
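The DIIS coefficient problem itself reduces to a small bordered linear system (minimize the norm of the combined error under the constraint that the coefficients sum to one). A sketch with hypothetical error vectors, flattened error matrices in a real SCF code:

```python
import numpy as np

def diis_coefficients(errors):
    """Solve min ||sum_i c_i e_i|| s.t. sum_i c_i = 1 via the standard
    Lagrange-multiplier bordered linear system of Pulay DIIS."""
    m = len(errors)
    B = np.zeros((m + 1, m + 1))
    for i in range(m):
        for j in range(m):
            B[i, j] = errors[i] @ errors[j]   # error-overlap (Gram) matrix
    B[-1, :m] = B[:m, -1] = -1.0              # constraint border
    rhs = np.zeros(m + 1)
    rhs[-1] = -1.0
    return np.linalg.solve(B, rhs)[:m]        # drop the Lagrange multiplier

# Hypothetical error vectors from three successive SCF iterations
rng = np.random.default_rng(4)
errs = [rng.standard_normal(10) * s for s in (1.0, 0.5, 0.2)]
c = diis_coefficients(errs)
print(c.sum())            # sums to one by construction
```

The extrapolated error never exceeds that of the best single iterate, since using that iterate alone is itself a feasible choice of coefficients.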

Alternative to TD, a pseudodiagonalization can be applied as soon as a sufficiently pre-converged wavefunction is obtained.198 The KS matrix KAO in the AO basis is transformed into the MO basis in each SCF step via

KMO=CTKAOC
(93)

using the MO coefficients C from the preceding SCF step. The converged KMO matrix using TD is a diagonal matrix, and the eigenvalues are its diagonal elements. Already after a few SCF iteration steps, the KMO matrix becomes diagonally dominant. Moreover, the KMO matrix shows the following natural blocking

$\begin{pmatrix} \mathbf{K}^{\mathrm{MO}}_{oo} & \mathbf{K}^{\mathrm{MO}}_{ou} \\ \mathbf{K}^{\mathrm{MO}}_{uo} & \mathbf{K}^{\mathrm{MO}}_{uu} \end{pmatrix}$
(94)

due to the two MO sub-sets of C, namely, the occupied (o) and the unoccupied (u) MOs. The eigenvectors are used during the SCF iteration to calculate the new density matrix [see Eq. (91)], whereas the eigenvalues are not needed. The total energy only depends on the occupied MOs, and thus, a block diagonalization, which decouples the occupied and unoccupied MOs, allows us to converge the wavefunctions. As a consequence, only all elements of the block ou or uo have to become zero, since KMO is a symmetric matrix. Hence, the transformation into the MO basis

$\mathbf{K}^{\mathrm{MO}}_{ou} = \mathbf{C}_o^T\, \mathbf{K}^{\mathrm{AO}}\, \mathbf{C}_u$
(95)

only has to be performed for that matrix block. Then, the decoupling can be achieved iteratively by consecutive 2 × 2 Jacobi rotations, i.e.,

$\theta = \frac{\epsilon_q - \epsilon_p}{2\, K^{\mathrm{MO}}_{pq}},$
(96a)
$t = \frac{\mathrm{sgn}(\theta)}{|\theta| + \sqrt{1 + \theta^2}},$
(96b)
$c = \frac{1}{\sqrt{t^2 + 1}},$
(96c)
$s = t\,c,$
(96d)
$\tilde{C}_p = c\,C_p - s\,C_q,$
(96e)
$\tilde{C}_q = s\,C_p + c\,C_q,$
(96f)

where the angle of rotation θ is determined by the difference of the eigenvalues of the MOs p and q divided by twice the corresponding matrix element KpqMO in the ou or uo block. The Jacobi rotations are computationally cheap as they can be performed with BLAS level 1 routines (DSCAL and DAXPY). The oo block is significantly smaller than the uu block, since only 10%–20% of the MOs are occupied using a standard basis set. Consequently, the ou or uo block also includes only 10%–20% of the KMO matrix. Furthermore, an expensive re-orthonormalization of the MO eigenvectors is not needed, since the Jacobi rotations preserve the orthonormality of the MO eigenvectors. Moreover, the number of non-zero blocks decreases rapidly when convergence is approached, which results in a decrease in the compute time for the PD part.
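One sweep of the 2 × 2 Jacobi rotations of Eqs. (96a)–(96f) can be sketched on a toy near-converged KMO matrix (the diagonal and the small ou coupling below are invented values); the occupied–unoccupied coupling shrinks well below its initial magnitude, while the rotations keep the coefficient matrix orthonormal:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy near-converged K^MO: diagonal plus a small occupied-unoccupied coupling
nocc, nvirt = 3, 7
n = nocc + nvirt
K = np.diag(np.linspace(-2.0, 2.0, n))
K[:nocc, nocc:] = 1e-2 * rng.standard_normal((nocc, nvirt))
K[nocc:, :nocc] = K[:nocc, nocc:].T
C = np.eye(n)                              # MO coefficients, updated alongside

# One sweep over the ou block, Eqs. (96a)-(96f)
for p in range(nocc):
    for q in range(nocc, n):
        if abs(K[p, q]) < 1e-14:
            continue
        theta = (K[q, q] - K[p, p]) / (2.0 * K[p, q])
        t = np.sign(theta) / (abs(theta) + np.sqrt(1.0 + theta**2))
        c = 1.0 / np.sqrt(t**2 + 1.0)
        s = t * c
        # rotate columns p,q and rows p,q of K (a congruence transform) ...
        kp, kq = K[:, p].copy(), K[:, q].copy()
        K[:, p], K[:, q] = c * kp - s * kq, s * kp + c * kq
        kp, kq = K[p, :].copy(), K[q, :].copy()
        K[p, :], K[q, :] = c * kp - s * kq, s * kp + c * kq
        # ... and the corresponding columns of the coefficient matrix C
        cp, cq = C[:, p].copy(), C[:, q].copy()
        C[:, p], C[:, q] = c * cp - s * cq, s * cp + c * cq

print(np.max(np.abs(K[:nocc, nocc:])))     # ou coupling after one sweep
```

Each rotation zeroes one element of the ou block exactly; later rotations reintroduce couplings only at second order, which is why a few sweeps suffice near convergence.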

An alternative to the just described diagonalization-based algorithms are techniques that rely on a direct minimization of the electronic energy functional.199,207–212 Convergence of this approach can in principle be guaranteed if the energy is reduced in each step, which makes direct minimization more robust. It also replaces the diagonalization step with a small number of matrix–matrix multiplications, significantly reducing the time-to-solution. This is of great importance for many practical problems, in particular, large systems that are difficult or sometimes even impossible to tackle with DIIS-like methods. However, preconditioners are often used in conjunction with direct energy minimization algorithms to facilitate faster and smoother convergence.

The calculation of the total energy within electronic structure theory can be formulated variationally in terms of an energy functional of the occupied single-particle orbitals that are constrained with respect to an orthogonality condition. With M occupied orbitals, the coefficient matrix $\mathbf{C} \in \mathbb{R}^{N\times M}$ is given in a non-orthogonal basis of N functions $\{\phi_i\}_{i=1}^{N}$ with associated N × N overlap matrix S, whose elements are $S_{ij} = \langle\phi_i|\phi_j\rangle$. The corresponding constrained minimization problem is expressed as

$\mathbf{C}^* = \arg\min_{\mathbf{C}}\left\{ E[\mathbf{C}] \;\middle|\; \mathbf{C}^T\mathbf{S}\mathbf{C} = \mathbf{1} \right\},$
(97)

where E[C] is an energy functional, C* is the minimizer of E[C] that fulfills the condition of orthogonality CTSC = 1, whereas argmin stands for the argument of the minimum. The ground state energy is finally obtained as E[C*]. The form of the energy functional E[C] is determined by the particular electronic structure theory used; in the case of hybrid Hartree–Fock/DFT, the equation reads as

$E^{\mathrm{HF/DFT}}[\mathbf{C}] = \mathrm{Tr}[\mathbf{P}\mathbf{h}] + \frac{1}{2}\,\mathrm{Tr}\big[\mathbf{P}\,(\mathbf{J}[\mathbf{P}] + \alpha_{\mathrm{HF}}\,\mathbf{K}[\mathbf{P}])\big] + E_{\mathrm{XC}}[\mathbf{P}],$
(98)

where P = CCT is the density matrix, whereas h, J, and K are the core Hamiltonian, the Coulomb, and Hartree–Fock exact exchange matrices, respectively, and EXC[P] is the XC energy.

Enforcing the orthogonality constraints within an efficient scheme poses a major hurdle in the direct minimization of E[C]. Hence, in the following, we describe two different approaches implemented in CP2K.199,212

1. Orthogonality constraints

a. Orbital transformation functions: OT/diag and OT/Taylor.

To impose the orthogonality constraints on the orbitals, VandeVondele and Hutter reformulated the non-linear constraint on C [see Eq. (97)] by a linear constraint on the auxiliary variable X via

$\mathbf{X}^* = \arg\min_{\mathbf{X}}\left\{ E[\mathbf{C}(\mathbf{X})] \;\middle|\; \mathbf{X}^T\mathbf{S}\mathbf{C}_0 = \mathbf{0} \right\}$
(99)

and

C*=C(X*),
(100)

where $\mathbf{X} \in \mathbb{R}^{N\times M}$ and $\mathbf{C}_0$ is a set of initial orbitals that fulfill the orthogonality constraints $\mathbf{C}_0^T\mathbf{S}\mathbf{C}_0 = \mathbf{1}$.199 The OT is parametrized as follows:

$\mathbf{C}(\mathbf{X}) = \mathbf{C}_0\,\cos\mathbf{U} + \mathbf{X}\,\mathbf{U}^{-1}\sin\mathbf{U},$
(101)

where $\mathbf{U} = (\mathbf{X}^T\mathbf{S}\mathbf{X})^{1/2}$. This parametrization ensures that $\mathbf{C}(\mathbf{X})^T\,\mathbf{S}\,\mathbf{C}(\mathbf{X}) = \mathbf{1}$ for all X satisfying the constraint $\mathbf{X}^T\mathbf{S}\mathbf{C}_0 = \mathbf{0}$. The matrix functions $\cos\mathbf{U}$ and $\mathbf{U}^{-1}\sin\mathbf{U}$ are evaluated either directly by diagonalization or by a truncated Taylor expansion in $\mathbf{X}^T\mathbf{S}\mathbf{X}$.213
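That the parametrization of Eq. (101) satisfies the orthonormality condition by construction can be checked numerically; the overlap matrix, reference orbitals, and X below are random stand-ins, and the matrix functions are evaluated by diagonalization of XᵀSX:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy SPD overlap S and S-orthonormal reference orbitals C0
n, m = 20, 4
B = rng.standard_normal((n, n)); S = B @ B.T + n * np.eye(n)
C0 = rng.standard_normal((n, m))
C0 = C0 @ np.linalg.cholesky(np.linalg.inv(C0.T @ S @ C0))   # C0^T S C0 = 1

# X must satisfy the linear constraint X^T S C0 = 0: project it out
X = rng.standard_normal((n, m))
X = X - C0 @ (C0.T @ S @ X)

# C(X) = C0 cos(U) + X U^-1 sin(U) with U = (X^T S X)^(1/2), Eq. (101)
w, V = np.linalg.eigh(X.T @ S @ X)
u = np.sqrt(np.maximum(w, 0.0))
cosU = V @ np.diag(np.cos(u)) @ V.T
sincU = V @ np.diag(np.sinc(u / np.pi)) @ V.T   # sin(u)/u, well defined at 0
C = C0 @ cosU + X @ sincU

print(np.max(np.abs(C.T @ S @ C - np.eye(m))))  # orthonormal by construction
```

Because XᵀSC0 = 0, the cross terms in CᵀSC vanish and what remains is cos²U + sin²U = 1, which is exactly what the numerical check confirms.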

b. Orbital transformation based on a refinement expansion: OT/IR.

In this method, Weber et al. replaced the constrained functional by an equivalent unconstrained functional E[f(Z)].212 The transformed minimization problem in Eq. (97) is then given by

$\mathbf{Z}^* = \arg\min_{\mathbf{Z}}\, E[f(\mathbf{Z})]$
(102)

and

C*=f(Z*),
(103)

where $\mathbf{Z} \in \mathbb{R}^{N\times M}$. The constraints have been mapped onto the matrix function f(Z), which fulfills the orthogonality constraint $f^T(\mathbf{Z})\,\mathbf{S}\,f(\mathbf{Z}) = \mathbf{1}$ for all matrices Z. The main idea of this approach is to approximate the OT in Eq. (102) by fn(Z) ≈ f(Z), where fn(Z) is an approximate constraint function, which is correct up to some order n + 1 in $\delta\mathbf{Z} = \mathbf{Z} - \mathbf{Z}_0$, where $\mathbf{Z}_0^T\mathbf{S}\mathbf{Z}_0 = \mathbf{1}$. The functions derived by Niklasson for the iterative refinement of an approximate inverse matrix factorization are used to approximate f(Z).214 The first few refinement functions are given by

$f_1(\mathbf{Z}) = \tfrac{1}{2}\,\mathbf{Z}\,(3 - \mathbf{Y}),$
(104a)
$f_2(\mathbf{Z}) = \tfrac{1}{8}\,\mathbf{Z}\,(15 - 10\,\mathbf{Y} + 3\,\mathbf{Y}^2),$
(104b)
$f_3(\mathbf{Z}) = \tfrac{1}{16}\,\mathbf{Z}\,(35 - 35\,\mathbf{Y} + 21\,\mathbf{Y}^2 - 5\,\mathbf{Y}^3),$
(104c)
$\vdots$
(104d)

where $\mathbf{Y} = \mathbf{Z}^T\mathbf{S}\mathbf{Z}$ and $\mathbf{Z} = \mathbf{Z}_0 + \delta\mathbf{Z}$. It can be shown that

$f_n^T(\mathbf{Z})\,\mathbf{S}\,f_n(\mathbf{Z}) - \mathbf{1} = \mathcal{O}\big(\delta\mathbf{Z}^{\,n+1}\big).$
(105)

Using this general ansatz for fn(Z), it is possible to extend the accuracy to any finite order recursively by an iterative refinement expansion fn(⋯fn(Z)⋯).
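The order conditions of Eq. (105) are easy to verify numerically; with a random SPD overlap and a small normalized displacement δZ, the orthogonality error of fn shrinks rapidly with increasing n (all matrices below are invented test data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy SPD overlap S and a reference Z0 satisfying Z0^T S Z0 = 1
n, m = 15, 3
B = rng.standard_normal((n, n)); S = B @ B.T + n * np.eye(n)
Z0 = rng.standard_normal((n, m))
Z0 = Z0 @ np.linalg.cholesky(np.linalg.inv(Z0.T @ S @ Z0))

def f(Z, order):
    """Refinement functions f_1..f_3 of Eqs. (104a)-(104c), Y = Z^T S Z."""
    Y = Z.T @ S @ Z
    I = np.eye(m)
    poly = {1: (3 * I - Y) / 2,
            2: (15 * I - 10 * Y + 3 * Y @ Y) / 8,
            3: (35 * I - 35 * Y + 21 * Y @ Y - 5 * Y @ Y @ Y) / 16}[order]
    return Z @ poly

# Orthogonality error ||f_n^T S f_n - 1|| for a small displacement, Eq. (105)
dZ = rng.standard_normal((n, m))
dZ /= np.linalg.norm(dZ.T @ S @ dZ)        # keep the perturbation small
Z = Z0 + 1e-2 * dZ
errs = [np.linalg.norm(f(Z, k).T @ S @ f(Z, k) - np.eye(m)) for k in (1, 2, 3)]
print(errs)
```

Each additional refinement order multiplies the residual orthogonality error by roughly another power of the displacement, in line with the O(δZⁿ⁺¹) estimate.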

2. Minimizer

a. Direct inversion of the iterative subspace.

The DIIS method introduced by Pulay is an extrapolation technique based on minimizing the norm of a linear combination of gradient vectors.206 The problem is given by

$\mathbf{c}^* = \arg\min_{\mathbf{c}} \left\{ \left\| \sum_{i=1}^{m} c_i\, \mathbf{g}_i \right\| \;\middle|\; \sum_{i=1}^{m} c_i = 1 \right\},$
(106)

where gi is an error vector and c* are the optimal coefficients.

The next orbital guess xm+1 is obtained by the linear combination of the previous points using the optimal coefficients c*, i.e.,

$\mathbf{x}_{m+1} = \sum_{i=1}^{m} c_i^*\,(\mathbf{x}_i - \tau\, \mathbf{f}_i),$
(107)

where τ is an arbitrary step size chosen for the DIIS method. The method simplifies to steepest descent (SD) for the initial step m = 1. While the DIIS method converges very fast in most cases, it is not particularly robust. In CP2K, the DIIS method is modified to switch to SD when a DIIS step would bring the solution toward an ascent direction. This safeguard makes DIIS more robust and is possible because the gradient of the energy functional is available.

b. Non-linear conjugate gradient minimization.

Non-linear CG leads to a robust, efficient, and numerically stable energy minimization scheme. In non-linear CG, the residual is set to the negation of the gradient ri = −gi and the search direction is computed by Gram–Schmidt conjugation of the residuals, i.e.,

di+1=ri+1+βi+1di.
(108)

Several choices for updating βi+1 are available, and the Polak–Ribière variant with restart is used in CP2K,215,216

$\beta_{i+1} = \max\left\{ \frac{\mathbf{r}_{i+1}^{T}\,(\mathbf{r}_{i+1} - \mathbf{r}_i)}{\mathbf{r}_i^{T}\,\mathbf{r}_i},\; 0 \right\}.$
(109)

Similar to linear CG, a step length αi is found that minimizes the energy function f(xi + αidi) using an approximate line search. The updated position becomes xi+1 = xi + αidi. In Quickstep, a few different line searches are implemented. The most robust is the golden section line search,217 but the default quadratic interpolation along the search direction suffices in most cases. Regarding time-to-solution, the minimization performed with the latter quadratic interpolation is in general significantly faster than the golden section line search.
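A compact sketch of the Polak–Ribière scheme with restart, Eq. (109), applied to a toy strictly convex quadratic (chosen because the exact line search is then available in closed form; the matrix and vector are random stand-ins for the energy functional):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy strictly convex quadratic f(x) = x^T A x / 2 - b^T x
n = 30
A = rng.standard_normal((n, n)); A = A @ A.T + np.eye(n)
b = rng.standard_normal(n)
grad = lambda x: A @ x - b

# Polak-Ribiere non-linear CG with restart (beta clipped at zero), Eq. (109)
x = np.zeros(n)
r = -grad(x)                      # residual = negative gradient
d = r.copy()                      # initial search direction
for _ in range(200):
    if np.linalg.norm(r) < 1e-10:
        break
    alpha = (r @ d) / (d @ A @ d)          # exact line search for a quadratic
    x = x + alpha * d
    r_new = -grad(x)
    beta = max(r_new @ (r_new - r) / (r @ r), 0.0)   # PR+ restart rule
    d = r_new + beta * d                   # Gram-Schmidt conjugation, Eq. (108)
    r = r_new

print(np.linalg.norm(A @ x - b))
```

For a genuine energy functional the closed-form α is replaced by the golden section or quadratic-interpolation line searches described above; the β clipping is what triggers an automatic restart along the steepest descent direction.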

Non-linear CGs are generally preconditioned by choosing an appropriate preconditioner M that approximates f′′ (see Sec. VIII C 3).

c. Quasi-Newton method.

Newton’s method can also be used to minimize the energy functional. The method is scale invariant, and the zigzag behavior that can be seen in the SD method is not present. The iteration for Newton’s method is given by

$\mathbf{x}_{k+1} = \mathbf{x}_k - \beta\, \mathbf{H}(\mathbf{x}_k)^{-1}\, \nabla f(\mathbf{x}_k),$
(110)

where H is the Hessian matrix. On the one hand, the method exhibits super-linear convergence when the initial guess is close to the solution and β = 1; on the other hand, when the initial guess is farther from the solution, Newton's method may diverge. This divergent behavior can be suppressed by introducing a line search or backtracking. As it requires the inverse Hessian of the energy functional, the full Newton's method is in general too time-consuming or difficult to use. Quasi-Newton methods,218 however, are advantageous alternatives to Newton's method when the Hessian is unavailable or too expensive to compute. In those methods, an approximate Hessian is updated by analyzing successive gradient vectors. A quasi-Newton step is determined by

$\mathbf{x}_{k+1} = \mathbf{x}_k - \beta\, \mathbf{G}_k\, \nabla f(\mathbf{x}_k),$
(111)

where Gk is the approximate inverse Hessian at step k. Different update formulas exist to compute Gk.219 In Quickstep, the Broyden’s type 2 update is implemented to construct the approximate inverse Hessian with an adaptive scheme for estimating the curvature of the energy functional to increase the performance.220 

3. Preconditioners

a. Preconditioning the non-linear minimization.

Gradient based OT methods are guaranteed to converge but will exhibit slow convergence behavior if not appropriately preconditioned. A good preconditioner for the optimization can be constructed from the generalized eigenvalue problem under the orthogonality constraint of Eq. (97) and its approximate second derivative

\frac{\partial^2 E}{\partial X_{\alpha i}\,\partial X_{\beta j}} \approx 2H_{\alpha\beta}\delta_{ij} - 2S_{\alpha\beta}\delta_{ij}\epsilon_i^0.
(112)

Therefore, close to convergence, the best preconditioners would be of the form

\left(H - S\epsilon_i^0\right)^{-1}_{\alpha\beta}\,\frac{\delta E}{\delta X_{\beta i}}.
(113)

As this form is impractical, requiring a different preconditioner for each orbital, a single positive definite preconditioning matrix P is constructed approximating

P\left(H - S\epsilon\right)x - x \approx 0.
(114)

In CP2K, the closest approximation to this form is the FULL_ALL preconditioner, which performs an orbital-dependent eigenvalue shift of H. In this way, positive definiteness is ensured with minimal modifications. The downside is that the eigenvalues of H have to be computed at least once by diagonalization, so the construction scales as O(N³).

To overcome this bottleneck, several more approximate though lower-scaling preconditioners have been implemented within CP2K. The simplest assume ϵ = 1 and H = T, with T being the kinetic energy matrix (FULL_KINETIC), or even H = 0 (FULL_S_INVERSE), as viable approximations. These preconditioners are less sophisticated, but their construction is linear-scaling, since S and T are sparse, and they still lead to accelerated convergence. Hence, these preconditioners are suitable for large-scale simulations.

For many systems, the best trade-off between the quality and the cost of the preconditioner is obtained with the FULL_SINGLE_INVERSE preconditioner. Instead of shifting all orbitals separately, only the occupied eigenvalues are inverted, thus making the orbitals closest to the bandgap the most active in the optimization. The inversion of the spectrum can be done without the need for diagonalization via

P = H - 2\,S C_0\left(C_0^T H C_0 + \delta\right)C_0^T S - \epsilon S,
(115)

where δ represents an additional shift depending on the HOMO energy, which ensures positive definiteness of the preconditioner matrix. It is important to note that the construction of this preconditioner matrix can be done with a complexity of O(NM2) in the dense case and is therefore of the same complexity as the rest of the OT algorithm.

b. Reduced scaling and approximate preconditioning.

All of the abovementioned preconditioners still require the inversion of the preconditioning matrix P. In dense matrix algebra, this leads to an O(N³) scaling behavior independent of the chosen preconditioner. For large systems, the inversion of P will become the dominant part of the calculation when low-scaling preconditioners are used. As Schiffmann and VandeVondele showed,221 sparse matrix techniques are applicable for the inversion of the low-scaling preconditioners and can be activated using the INVERSE_UPDATE option as the preconditioner solver. By construction, the preconditioner matrix is symmetric and positive definite. This allows for the use of Hotelling’s iteration to compute the inverse of P as

P^{-1}_{i+1} = \alpha_i P^{-1}_i \left(2I - \alpha_i\, P\, P^{-1}_i\right).
(116)

Generally, the resulting approximants to the inverse become denser and denser.222 In CP2K, this is dealt with by aggressive filtering on P^{-1}_{i+1}. Unfortunately, there are limits to the filtering threshold: while the efficiency of the preconditioner is not significantly affected by the loss of accuracy (see Sec. XV B), the Hotelling iterations may eventually become unstable.223 

Using a special way to determine the initial α based on the extremal eigenvalues of P P_0^{-1}, it can be shown that any positive definite matrix can be used as the initial guess for the Hotelling iterations.224 For AIMD simulations or geometry optimizations, this means that the previous inverse can be used as the initial guess, since the changes in P are expected to be small.225 Therefore, only very few iterations are required until convergence once the initial approximation for the inverse is obtained.
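The plain (unaccelerated) Hotelling iteration is easy to reproduce in dense NumPy; the sketch below omits the adaptive α and the sparsity filtering discussed above, and uses the safe positive definite initial guess X₀ = I/‖P‖₂. The function name is illustrative.

```python
import numpy as np

def hotelling_inverse(P, X0=None, tol=1e-12, max_iter=100):
    """Hotelling iteration X <- X(2I - P X) for an SPD matrix P.
    Converges quadratically whenever ||I - P X0|| < 1; the SPD guess
    X0 = I/||P||_2 satisfies this. (Dense sketch: no filtering, no
    adaptive alpha.)"""
    n = P.shape[0]
    I = np.eye(n)
    X = I / np.linalg.norm(P, 2) if X0 is None else X0.copy()
    for _ in range(max_iter):
        X = X @ (2.0 * I - P @ X)
        if np.linalg.norm(I - P @ X, 2) < tol:
            break
    return X
```

The residual norm squares at every step, which is why restarting from the previous inverse along an AIMD trajectory needs only a handful of iterations.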

Linear-scaling DFT calculations can also be achieved by purifying the KS matrix K directly into the density matrix P without using the orbitals C explicitly.200 These density matrix-based methods exploit the fact that the KS matrix K and the density matrix P have by definition the same eigenvectors C and that a purification maps eigenvalues ϵi of K to eigenvalues fi of P via the Fermi–Dirac function

f_i = \frac{1}{\exp\left[\left(\epsilon_i - \mu\right)/\left(k_B T\right)\right] + 1},
(117)

with the chemical potential μ, the Boltzmann constant kB, and electron temperature T. In practice, the purification is computed by an iterative procedure that is constructed to yield a linear-scaling method for sparse KS matrices. By construction, such purifications are usually grand-canonical purifications so that additional measures such as modifications to the algorithms or additional iterations to find the proper value of the chemical potential are necessary to allow for canonical ensembles. In CP2K, the trace-resetting fourth-order method (TRS4226), the trace-conserving second order method (TC2227), and the purification via the sign function (SIGN115) are available. Additionally, CP2K implements an interface to the PEXSI (Pole EXpansion and Selected Inversion) library,228–231 which allows us to evaluate selected elements of the density matrix as the Fermi–Dirac function of the KS matrix via a pole expansion.
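A cubic-scaling reference for this mapping can be written via diagonalization, with μ fixed by bisection so that tr(P) matches the electron count (the "additional iterations to find the proper value of the chemical potential" mentioned above). The function name is hypothetical, K is assumed to be expressed in an orthonormal basis, and the tanh form of the Fermi function is used purely for numerical stability.

```python
import numpy as np

def fermi_dirac_density_matrix(K, n_electrons, kT=1e-3):
    """Map the eigenvalues of K through the Fermi-Dirac function,
    Eq. (117), and find mu by bisection so that tr(P) = n_electrons.
    The identity 1/(exp(x)+1) = (1 - tanh(x/2))/2 avoids overflow
    for |eps - mu| >> kT."""
    eps, C = np.linalg.eigh(K)
    lo, hi = eps.min() - 1.0, eps.max() + 1.0
    for _ in range(200):                      # bisection on the chemical potential
        mu = 0.5 * (lo + hi)
        f = 0.5 * (1.0 - np.tanh((eps - mu) / (2.0 * kT)))
        if f.sum() > n_electrons:
            hi = mu
        else:
            lo = mu
    return C @ np.diag(f) @ C.T, mu
```

At low electron temperature, P approaches the idempotent projector onto the occupied subspace, which is exactly what the iterative purifications reproduce without diagonalization.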

The sign function of a matrix

\mathrm{sign}(A) = A\left(A^2\right)^{-1/2}
(118)

can be used as a starting point for the construction of various linear-scaling algorithms.232 The relation

\mathrm{sign}\begin{pmatrix} 0 & A \\ I & 0 \end{pmatrix} = \begin{pmatrix} 0 & A^{1/2} \\ A^{-1/2} & 0 \end{pmatrix}
(119)

together with iterative methods for the sign function, such as the Newton–Schulz iteration, is used by default in CP2K for linear-scaling matrix inversions and (inverse) square roots of matrices.233,234 Several orders of iterations are available: the second-order Newton–Schulz iteration as well as third-order and fifth-order iterations based on higher-order Padé-approximants.224,232,235
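Equation (119) can be demonstrated with the second-order Newton–Schulz iteration in dense NumPy; CP2K runs the same recursion on sparse DBCSR matrices instead. The function name and the rescaling choice are illustrative. Since sign(cB) = sign(B) for any c > 0, rescaling the block matrix for convergence leaves the converged off-diagonal blocks equal to A^{1/2} and A^{-1/2} directly.

```python
import numpy as np

def sqrt_and_invsqrt(A, tol=1e-12, max_iter=100):
    """Matrix square root and inverse square root of an SPD matrix A
    from the Newton-Schulz sign iteration X <- X(3I - X^2)/2 applied
    to the 2x2 block matrix of Eq. (119)."""
    n = A.shape[0]
    B = np.zeros((2 * n, 2 * n))
    B[:n, n:] = A                           # top-right block: A
    B[n:, :n] = np.eye(n)                   # bottom-left block: I
    I2 = np.eye(2 * n)
    X = B / np.sqrt(np.linalg.norm(A, 2))   # pull the spectrum into (-1, 1)
    for _ in range(max_iter):
        X_new = 0.5 * X @ (3.0 * I2 - X @ X)
        delta = np.linalg.norm(X_new - X, ord='fro')
        X = X_new
        if delta < tol:
            break
    return X[:n, n:], X[n:, :n]             # A^{1/2}, A^{-1/2}
```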

The sign function can also be used for the purification of the Kohn–Sham matrix K into the density matrix P, i.e., via the relation

P = \frac{1}{2}\left[I - \mathrm{sign}\left(S^{-1}K - \mu I\right)\right] S^{-1}.
(120)

Within CP2K, the linear-scaling calculation of the sign-function is implemented up to seventh-order based on Padé-approximants. For example, the fifth-order iteration has the form

X_0 = S^{-1}K - \mu I,
(121a)
X_{k+1} = \frac{1}{128} X_k\left(35 X_k^8 - 180 X_k^6 + 378 X_k^4 - 420 X_k^2 + 315 I\right),
(121b)
\lim_{k\to\infty} X_k = \mathrm{sign}\left(S^{-1}K - \mu I\right)
(121c)

and is implemented in CP2K with just four matrix multiplications per iteration. After each matrix multiplication, all matrix elements smaller than a threshold ϵfilter are flushed to zero to retain sparsity. The scaling of the wall-clock time for the computation of the sign- and sqrt-functions to simulate the STMV virus in water solution using the Geometry, Frequency, Noncovalent, eXtended Tight-Binding (GFN-xTB) Hamiltonian is shown in Fig. 6.195 The drastic speed-up of the calculation, when increasing the threshold ϵfilter, immediately suggests the combination of sign-matrix iteration based linear-scaling DFT algorithms with the ideas of approximate computing (AC), as discussed in Sec. XV B.
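The four-multiplication arrangement can be seen in a dense NumPy sketch of the fifth-order recursion (the factored grouping below reuses the powers of X); in CP2K, the same iteration runs on sparse DBCSR matrices with on-the-fly filtering. The function name and the initial spectral rescaling are illustrative.

```python
import numpy as np

def matrix_sign_5th(A, tol=1e-10, max_iter=100):
    """Fifth-order sign iteration of Eq. (121), arranged so that each
    step costs exactly four matrix multiplications."""
    I = np.eye(A.shape[0])
    X = A / np.linalg.norm(A, 2)            # positive scaling preserves sign(A)
    for _ in range(max_iter):
        X2 = X @ X                          # multiplication 1
        X4 = X2 @ X2                        # multiplication 2
        # 35 X^8 - 180 X^6 grouped to reuse X4: multiplication 3
        poly = X4 @ (35.0 * X4 - 180.0 * X2) + 378.0 * X4 - 420.0 * X2 + 315.0 * I
        X_new = (X @ poly) / 128.0          # multiplication 4
        delta = np.linalg.norm(X_new - X, ord='fro')
        X = X_new
        if delta < tol:
            break
    return X
```

In the sparse setting, each of these four products is followed by flushing elements below ϵfilter to zero, which is where the speed-up shown in Fig. 6 comes from.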

FIG. 6.

Wall time for the calculation of the computationally dominating matrix-sqrt (blue, boxes) and matrix-sign (red, circles) functions as a function of the matrix truncation threshold ϵfilter for the STMV virus. The latter contains more than one million atoms and was simulated using the periodic implementation in CP2K of the GFN-xTB model.236 The fifth-order sign-function iteration of Eq. (121b) and the third-order sqrt-function iteration have been used. The calculations have been performed with 256 nodes (10 240 CPU-cores) of the Noctua system at the Paderborn Center for Parallel Computing (PC2).


Recently, a new iterative scheme for the inverse p-th root of a matrix has been developed, which also allows us to directly evaluate the density matrix via the sign function in Eq. (118).224 An arbitrary-order implementation of this scheme is also available in CP2K.

In addition to the sign method, the submatrix method presented in Ref. 222 has been implemented in CP2K as an alternative approach to calculate the density matrix P from the KS matrix K. Instead of operating on the entire matrix, calculations are performed on principal submatrices thereof. Each of these submatrices covers a set of atoms that originates from the same block of the KS matrix in the DBCSR-format, as well as those atoms in their direct neighborhood whose basis functions are overlapping. As a result, the submatrices are much smaller than the KS matrix but relatively dense. For large systems, the size of the submatrices is independent of the overall system size so that a linear-scaling method immediately results from this construction. Purification of the submatrices can be performed either using iterative schemes to compute the sign function (see Sec. VIII E) or via a direct eigendecomposition. The submatrix method provides an approximation of the density matrix P, whose quality and computational cost depend on the truncation threshold ϵfilter used during the SCF iterations. Figure 7 compares the accuracy provided and wall time required by the submatrix method with the accuracy and wall time of a Newton–Schulz sign iteration.
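A toy dense version conveys the idea with one orbital per "block" (CP2K operates on DBCSR atom blocks): for every index, build the principal submatrix over its sparsity neighborhood, purify it independently, and keep only the corresponding column of the result. All names are illustrative, and the purification here is a direct eigendecomposition with a step function at a chemical potential μ.

```python
import numpy as np

def submatrix_density(K, mu, eps_filter=1e-8):
    """Toy sketch of the submatrix method: the density-matrix column
    for index i is taken from the purification of the principal
    submatrix induced by i's sparsity neighborhood in K."""
    n = K.shape[0]
    P = np.zeros_like(K)
    for i in range(n):
        nbrs = np.union1d([i], np.where(np.abs(K[i]) > eps_filter)[0])
        sub = K[np.ix_(nbrs, nbrs)]                   # principal submatrix
        e, v = np.linalg.eigh(sub)
        p_sub = v @ np.diag((e < mu).astype(float)) @ v.T
        local_i = int(np.where(nbrs == i)[0][0])
        P[nbrs, i] = p_sub[:, local_i]                # keep column i only
    return P
```

For a strictly block-diagonal K the neighborhoods decouple exactly and the method reproduces the full purification; for overlapping neighborhoods the result is an approximation controlled by the truncation threshold, as discussed above.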

FIG. 7.

Comparison of the wall time required for the purification using the submatrix method (red circles, with a direct eigendecomposition for the density matrix of the submatrices) and the second order Newton–Schulz sign iteration (blue boxes) for different relative errors in the total energy after one SCF iteration. The corresponding reference energy has been computed by setting ϵfilter = 10−16. All calculations have been performed on a system consisting of 6192 H2O molecules, using KS-DFT together with a SZV basis set, utilizing two compute nodes (80 CPU-cores) of the “Noctua” system at PC2.


Spatially localized molecular orbitals (LMOs), also known as MLWFs in solid state physics and materials science,165,167 are widely used to visualize chemical bonding between atoms, help classify bonds, and thus understand electronic structure origins of observed properties of atomistic systems (see Sec. XI A). Furthermore, localized orbitals are a key ingredient in multiple local electronic structure methods that dramatically reduce the computational cost of modeling electronic properties of large atomistic systems. LMOs are also important to many other electronic structure methods that require local states such as XAS spectra modeling or dispersion-corrected XC functionals based on atomic polarizabilities.

CP2K offers a variety of localization methods, in which LMOs |j⟩ are constructed by finding a unitary transformation A of canonical MOs |i0⟩, either occupied or virtual, i.e.,

|j\rangle = \sum_i |i^0\rangle A_{ij},
(122)

which minimizes the spread of individual orbitals. CP2K uses the localization functional proposed by Resta,156 which was generalized by Berghold et al. to periodic cells of any shape and symmetry,

\Omega_L(A) = \sum_K \sum_i \omega_K \left(1 - |z_i^K|^2\right),
(123a)
z_i^K = \sum_{mn} A_{mi} B^K_{mn} A_{ni},
(123b)
B^K_{mn} = \langle m^0 | e^{i\mathbf{G}_K \cdot \hat{\mathbf{r}}} | n^0 \rangle,
(123c)

where \hat{\mathbf{r}} is the position operator in three dimensions, and ω_K and \mathbf{G}_K are suitable sets of weights and reciprocal lattice vectors, respectively.166 The functional in Eq. (123a) can be used for both gas-phase and periodic systems. In the former case, the functional is equivalent to the Boys–Foster localization.237 In the latter case, its applicability is limited to the electronic states described within the Γ-point approximation.
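A minimal Γ-point illustration, with hypothetical names and a brute-force angle scan in place of the actual iterative minimization: for two real orbitals in a 1D periodic cell and a single G-vector, minimizing the spread is equivalent to maximizing Σᵢ |zᵢ|² with zᵢ = ⟨uᵢ| e^{i2πx/L} |uᵢ⟩, integrated numerically on a grid.

```python
import numpy as np

def localize_pair(psi1, psi2, x, L):
    """Scan the 2x2 rotation angle that maximizes sum_i |z_i|^2, a
    two-orbital, single-G-vector special case of the Berghold
    localization functional."""
    phase = np.exp(2j * np.pi * x / L)
    dx = x[1] - x[0]
    best_th, best = 0.0, -1.0
    for th in np.linspace(0.0, np.pi / 2, 2001):
        c, s = np.cos(th), np.sin(th)
        u1 = c * psi1 + s * psi2
        u2 = -s * psi1 + c * psi2
        obj = sum(abs(np.sum(np.conj(u) * phase * u) * dx) ** 2 for u in (u1, u2))
        if obj > best:
            best, best_th = obj, th
    return best_th, best
```

Starting from delocalized combinations (g₁ ± g₂)/√2 of two well-separated Gaussians, the optimal rotation of π/4 recovers the individual localized functions.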

CP2K also implements the Pipek–Mezey localization functional,238 which can be written in the same form as the Berghold functional above, with K referring to atoms, z_i^K measuring the contribution of orbital i to the Mulliken charge of atom K, and B being defined as

B^K_{mn} = \frac{1}{2} \sum_{\mu \in K} \langle m^0 | \left( |\chi^\mu\rangle\langle\chi_\mu| + |\chi_\mu\rangle\langle\chi^\mu| \right) | n^0 \rangle,
(124)

where |\chi_\mu\rangle and |\chi^\mu\rangle are atom-centered covariant and contravariant basis set functions.166 The Pipek–Mezey functional has the advantage of preserving the separation of σ and π bonds and is commonly employed for molecular systems.

In addition to the traditional localization techniques, CP2K offers localization methods that produce non-orthogonal LMOs (NLMOs).239 In these methods, matrix A is not restricted to be unitary and the minimized objective function contains two terms: a localization functional ΩL given by Eq. (123a) and a term that penalizes unphysical states with linearly dependent localized orbitals

\Omega(A) = \Omega_L(A) - c_P \log \det\left[\sigma^{-1}_{\mathrm{diag}}(A)\, \sigma(A)\right],
(125)

where cP > 0 is the penalty strength and σ is the NLMO overlap matrix,

\sigma_{kl} = \langle k | l \rangle = \sum_{ji} A_{jk} \langle j^0 | i^0 \rangle A_{il}.
(126)

The penalty term varies from 0 for orthogonal LMOs to +∞ for linearly dependent NLMOs, making the latter inaccessible in the localization procedure for any finite penalty strength c_P. The inclusion of the penalty term converts the localization procedure into a straightforward unconstrained optimization problem and produces NLMOs that are noticeably more localized than their conventional orthogonal counterparts (Fig. 8).
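The penalty term of Eq. (125) in isolation is a one-liner; the sketch below uses hypothetical names, with S0 the canonical-MO overlap (the identity for orthonormal MOs). Hadamard's inequality for Gram matrices guarantees that the penalty is non-negative.

```python
import numpy as np

def nlmo_penalty(A, S0, cP=1.0):
    """Penalty -cP * log det(sigma_diag^{-1} sigma) of Eq. (125), with
    sigma = A^T S0 A the NLMO overlap of Eq. (126). It vanishes for
    orthogonal LMOs and diverges as the NLMOs become linearly
    dependent."""
    sigma = A.T @ S0 @ A
    _, logdet = np.linalg.slogdet(sigma)       # log det of the SPD Gram matrix
    return -cP * (logdet - np.sum(np.log(np.diag(sigma))))
```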

FIG. 8.

Orthogonal (bottom) and non-orthogonal (top) LMOs on the covalent bond of the adjacent carbon atoms in a carborane molecule C2B10H12 (isosurface value is 0.04 a.u.).


Linear-scaling, or so-called O(N), methods described in Sec. VIII E exploit the natural locality of the one-electron density matrix. Unfortunately, the variational optimization of the density matrix is inefficient if calculations require many basis functions per atom. From the computational point of view, the variation of localized one-electron states is preferable to the density matrix optimization because the former requires only the occupied states, reducing the number of variational degrees of freedom significantly, especially in calculations with large basis sets. CP2K contains a variety of orbital-based O(N) DFT methods briefly reviewed here and in Sec. X C.

Unlike density matrices, one-electron states tend to delocalize in the process of unconstrained optimization and their locality must be explicitly enforced to achieve linear-scaling. To this end, each occupied orbital is assigned a localization center—an atom (or a molecule)—and a localization radius Rc. Then, each orbital is expanded strictly in terms of subsets of localized basis functions centered on the atoms (or molecules) lying within Rc from the orbital’s center. In CP2K, contracted Gaussian functions are used as the localized basis set.

Since their introduction,240–242 orbitals with this strict a priori localization have become known under different names, including absolutely localized molecular orbitals,243 localized wavefunctions,244 non-orthogonal generalized Wannier functions,245 multi-site support functions,246 and non-orthogonal localized molecular orbitals.247 Here, they are referred to as compact localized molecular orbitals (CLMOs) to emphasize that their expansion coefficients are set to zero for all basis functions centered outside the orbitals’ localization subsets. Unlike in previous works,248–250 the ALMO acronym243 is avoided here, since it commonly refers to the special case of compact orbitals with Rc = 0.180,251–256

While the localization constraints are necessary to design orbital-based O(N) methods, the reduced number of electronic degrees of freedom results in the variationally optimal CLMO energy always being higher than the reference energy of fully delocalized orbitals. From the physical point of view, enforcing orbital locality prohibits the flow of electron density between distant centers and thus switches off the stabilizing donor–acceptor (i.e., covalent) component of interactions between them. It is important to note that the other interactions such as long-range electrostatics, exchange, polarization, and dispersion are retained in the CLMO approximation. Consequently, the accuracy of CLMO-based calculations depends critically on the chosen localization radii, which should be tuned for each system to obtain the best accuracy–performance compromise. In systems with non-vanishing bandgaps, the neglected donor–acceptor interactions are typically short-ranged, and CLMOs can accurately represent the electronic structure if Rc encompasses the nearest and perhaps next-nearest neighbors. On the other hand, the CLMO approach is not expected to be practical for metals and semi-metals because of their intrinsically delocalized electrons.

The methods and algorithms in CP2K are designed to circumvent the known problem of slow variational optimization of CLMOs,244,257–261 the severity of which rendered early orbital-based O(N) methods impractical. Two solutions to the convergence problem are described here. The first approach is designed for systems without strong covalent interactions between localization centers such as ionic materials or ensembles of small weakly interacting molecules.248,249,260,262 The second approach is proposed to deal with more challenging cases of strongly bonded atoms such as covalent crystals.

The key idea of the first approach is to optimize CLMOs in a two-stage SCF procedure. In the first stage, Rc is set to zero and the CLMOs—they can be called ALMOs in this case—are optimized on their centers. In the second stage, the CLMOs are relaxed to allow delocalization onto the neighbor molecules within their localization radius Rc. To achieve a robust optimization in the problematic second stage, the delocalization component of the trial CLMOs must be kept orthogonal to the fixed ALMOs obtained in the first stage. If the delocalization is particularly weak, the CLMOs in the second stage can be obtained using the simplified Harris functional263—orbital optimization without updating the Hamiltonian matrix—or non-iterative perturbation theory. The mathematical details of the two-stage approach can be found in Ref. 249. A detailed description of the algorithms is presented in the supplementary material of Ref. 250.

The two-stage SCF approach resolves the convergence problem only if the auxiliary ALMOs resemble the final variationally optimal CLMOs and, therefore, is not practical for systems with noticeable electron delocalization—in other words, covalent bonds—between atoms. The second approach, designed for systems of covalently bonded atoms, utilizes an approximate electronic Hessian that is inexpensive to compute and yet sufficiently accurate to identify and remove the problematic optimization modes.264 The accuracy and practical utility of this approach rely on the fact that the removed low-curvature modes are associated with the nearly invariant occupied–occupied orbital mixing, which produce only an insignificant lowering in the total energy.

The robust variational optimization of CLMOs combined with CP2K’s fast O(N) algorithms for the construction of the KS Hamiltonian enabled the implementation of a series of O(N) orbital-based DFT methods with a low computational overhead. Figure 9 shows that Rc can be tuned to achieve substantial computational savings without compromising the accuracy of the calculations. It also demonstrates that CLMO-based DFT exhibits early-offset linear-scaling behavior even for challenging condensed-phase systems and works extremely well with large basis sets.

FIG. 9.

Accuracy and efficiency of O(N) DFT for liquid water based on the two-stage CLMO optimization. Calculations were performed using the BLYP functional and a TZV2P basis set for 100 snapshots representing liquid water at constant temperature (300 K) and density (0.9966 g/cm3). (a) Dependence of the average number of neighbors on the localization radius, expressed in units of the elements’ van der Waals radii (vdWR). (b) Energy error per molecule relative to the energies of fully delocalized orbitals. For simulations, in which the coordination number of molecules does not change drastically, the mean error represents a constant shift of the potential energy surface and does not affect the quality of simulations. In such cases, the standard deviation (error bars) is better suited to measure the accuracy of the CLMO methods. (c) Wall-time required for the variational minimization of the energy on 256 compute cores. The localization radius is set to 1.6 vdWR for the CLMOs methods. The CLMO methods are compared to the cubically scaling optimization of delocalized orbitals (OT SCF),199,212 as well as to the O(N) optimization of density matrix.115 Perfect linear- and cubic-scaling lines are shown in gray and cyan, respectively. See Ref. 249 for details.


The current implementation of the CLMO-based methods is limited to localization centers with closed-shell electronic configurations. The nuclear gradients are available only for the methods that converge the CLMO variational optimization. This excludes CLMO methods based on the Harris functional and perturbation theory.

Section X C describes how the CLMO methods can be used in AIMD simulations by means of the second-generation Car–Parrinello MD (CPMD) method of Kühne and co-workers.3,265,266 Energy decomposition analysis (EDA) methods that exploit the locality of compact orbitals to understand the nature of intermolecular interactions in terms of physically meaningful components are described in Sec. XI A.

The computational cost of a DFT calculation depends critically on the size and condition number of the employed basis set. Traditional basis sets are atom centered, static, and isotropic. Since molecular systems are never isotropic, it is apparent that isotropic basis sets are sub-optimal. The polarized atomic orbitals from machine learning (PAO-ML) scheme provides small adaptive basis sets, which adjust themselves to the local chemical environment.267 The scheme uses polarized atomic orbitals (PAOs), which are constructed from a larger primary basis set, as introduced by Berghold et al.268 A PAO basis function φ̃_μ can be written as a weighted sum of primary basis functions φ_ν, where μ and ν belong to the same atom,

\tilde{\varphi}_\mu = \sum_\nu B_{\mu\nu} \varphi_\nu.
(127)

The aim of the PAO-ML method is to predict the transformation matrix B for a given chemical environment using machine learning (ML). The analytic nature of the ML models allows for the calculation of exact analytic forces, as required for AIMD simulations. In order to train such an ML model, a set of relevant atomic motifs with their corresponding optimal PAO bases is needed. This poses an intricate non-linear optimization problem, as the total energy must be minimized with respect to both the transformation matrix B and the electronic density, while still obeying the Pauli principle. To this end, the PAO-ML scheme introduced an improved optimization algorithm, based on the formulation of Li, Nunes, and Vanderbilt, for the generation of training data.269 

When constructing the ML model, great care must be taken to ensure the invariance of the predicted PAO basis sets with respect to rotations of the simulation cell to prevent artificial torque forces. The PAO-ML scheme achieves rotational invariance by employing potentials anchored on neighboring atoms. The strength of individual potential terms is predicted by the ML model. Collectively, the potential terms form an auxiliary atomic Hamiltonian, whose eigenvectors are then used to construct the transformation matrix B.

The PAO-ML method has been demonstrated by means of AIMD simulations of liquid water. A minimal basis set yielded structural properties in fair agreement with basis set converged results. In the best case, the computational cost was reduced by a factor of 200 and the required flops were reduced by 4 orders of magnitude. Already, a very small training set gave satisfactory results as the variational nature of the method provides robustness.

The mathematical task of AIMD is to evaluate the expectation value \langle O \rangle of an arbitrary operator O(R,P) with respect to the Boltzmann distribution

\langle O \rangle = \frac{\int dR\, dP\; O(R,P)\, e^{-\beta E(R,P)}}{\int dR\, dP\; e^{-\beta E(R,P)}},
(128)

where R and P are the nuclear positions and momenta, while β = 1/k_B T is the inverse temperature. The total energy function is given by

E(R,P) = \sum_{I=1}^{N} \frac{P_I^2}{2M_I} + E\left[\{\psi_i\}; R\right],
(129)

where the first term denotes the nuclear kinetic energy, E[\{\psi_i\}; R] the potential energy function, N the number of nuclei, and M_I the corresponding masses.

However, assuming the ergodicity hypothesis, the thermal average \langle O \rangle can be determined not only as an ensemble average but also as a temporal average,

\langle O \rangle = \lim_{T\to\infty} \frac{1}{T} \int_0^T dt\; O\big(R(t), P(t)\big),
(130)

by means of AIMD.
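The equivalence of the two averages can be demonstrated on a toy system where the ensemble average is known analytically: for a 1D harmonic oscillator at temperature T, the Boltzmann average of O = x² is kT/k. The sketch below, with illustrative names and parameters, accumulates the time average along a thermostatted (Langevin, BAOAB-splitting) trajectory rather than an ab initio one.

```python
import numpy as np

def time_average_x2(k=1.0, m=1.0, kT=1.0, gamma=1.0,
                    dt=0.01, n_steps=400000, seed=0):
    """Time average of O = x^2 along a Langevin trajectory of a 1D
    harmonic oscillator; by ergodicity it approaches the Boltzmann
    ensemble average kT/k."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_steps)
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt((1.0 - c1 * c1) * m * kT)
    x, p, acc = 0.0, 0.0, 0.0
    for i in range(n_steps):
        p -= 0.5 * dt * k * x          # B: half kick
        x += 0.5 * dt * p / m          # A: half drift
        p = c1 * p + c2 * noise[i]     # O: Ornstein-Uhlenbeck thermostat
        x += 0.5 * dt * p / m          # A: half drift
        p -= 0.5 * dt * k * x          # B: half kick
        acc += x * x
    return acc / n_steps
```

The residual deviation from kT/k shrinks with longer trajectories (statistical error) and smaller time steps (discretization bias), mirroring the convergence considerations of real AIMD sampling.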

In the following, we will assume that the potential energy function is calculated on the fly using KS-DFT so that E[\{\psi_i\}; R] = E_{\mathrm{KS}}\left[\{\psi_i[\rho(\mathbf{r})]\}; R\right] + E_{II}(R). In CP2K, AIMD comes in two distinct flavors, which are both outlined in this section.

In Born–Oppenheimer MD (BOMD), the potential energy E[\{\psi_i\}; R] is minimized at every AIMD step with respect to \{\psi_i(\mathbf{r})\} under the holonomic orthonormality constraint \langle\psi_i(\mathbf{r})|\psi_j(\mathbf{r})\rangle = \delta_{ij}. This leads to the following Lagrangian:

\mathcal{L}_{\mathrm{BO}}\left(\{\psi_i\}; R, \dot{R}\right) = \frac{1}{2}\sum_{I=1}^{N} M_I \dot{R}_I^2 - \min_{\{\psi_i\}} E\left[\{\psi_i\}; R\right] + \sum_{i,j} \Lambda_{ij}\left(\langle\psi_i|\psi_j\rangle - \delta_{ij}\right),
(131)

where Λ is a Hermitian Lagrangian multiplier matrix. By solving the corresponding Euler–Lagrange equations

\frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{R}_I} = \frac{\partial \mathcal{L}}{\partial R_I},
(132a)
\frac{d}{dt} \frac{\delta \mathcal{L}}{\delta \langle\dot{\psi}_i|} = \frac{\delta \mathcal{L}}{\delta \langle\psi_i|},
(132b)

one obtains the associated equations of motion

M_I \ddot{R}_I = -\frac{\partial}{\partial R_I} \min_{\{\psi_i\}} E\left[\{\psi_i\}; R\right]\Big|_{\{\langle\psi_i|\psi_j\rangle = \delta_{ij}\}} = -\frac{\partial E}{\partial R_I} + \sum_{i,j} \Lambda_{ij} \frac{\partial}{\partial R_I} \langle\psi_i|\psi_j\rangle - 2\sum_i \frac{\partial \langle\psi_i|}{\partial R_I} \left( \frac{\delta E}{\delta \langle\psi_i|} - \sum_j \Lambda_{ij} |\psi_j\rangle \right),
(133a)
0 \approx -\frac{\delta E}{\delta \langle\psi_i|} + \sum_j \Lambda_{ij} |\psi_j\rangle = -\hat{H}_e |\psi_i\rangle + \sum_j \Lambda_{ij} |\psi_j\rangle.
(133b)

The first term on the right-hand side (RHS) of Eq. (133a) is the so-called Hellmann–Feynman force.270,271 The second term, denoted as the Pulay18 or wavefunction force F^WF, is a constraint force due to the holonomic orthonormality constraint and is nonvanishing if, and only if, the basis functions ϕ_j explicitly depend on R. The final term stems from the fact that, independently of the particular basis set used, there is always an implicit dependence on the atomic positions. The factor 2 in Eq. (133a) stems from the assumption that the KS orbitals are real, which is an inessential simplification. Nevertheless, the whole term vanishes whenever ψ_i(R) is an eigenfunction of the Hamiltonian within the subspace spanned by the not necessarily complete basis set.272,273 Note that this is a much weaker condition than the original Hellmann–Feynman theorem, of which we hence have not availed ourselves throughout the derivation, except as an eponym for the first RHS term of Eq. (133a). However, as the KS functional is nonlinear, eigenfunctions of its Hamiltonian Ĥ_e are obtained only at exact self-consistency, which is why the last term of Eq. (133a) is also referred to as the non-self-consistent force F^NSC.274 Unfortunately, exact self-consistency cannot be assumed in any numerical calculation, which results in inherently inconsistent forces as well as the inequality of Eq. (133b). Neglecting either F^WF or F^NSC, i.e., applying the Hellmann–Feynman theorem to a non-eigenfunction, leads merely to a perturbative estimate of the generalized forces

F = F^{\mathrm{HF}} + F^{\mathrm{WF}} + F^{\mathrm{NSC}},
(134)

which, contrary to the energies, depend just linearly on the error in the electronic charge density. That is why it is much more demanding to calculate accurate forces than accurate total energies.

Until recently, AIMD has mostly relied on two general methods: the original CPMD and the direct BOMD approach, each with its advantages and shortcomings. In BOMD, the total energy of a system, as determined by an arbitrary electronic structure method, is fully minimized in each MD time step, which renders this scheme computationally very demanding.275 By contrast, the original CPMD technique obviates the rather time-consuming iterative energy minimization by considering the electronic degrees of freedom as classical time-dependent fields with a fictitious mass μ that are propagated by the use of Newtonian dynamics.276 In order to keep the electronic and nuclear subsystems adiabatically separated, which causes the electrons to follow the nuclei very close to their instantaneous electronic ground state, μ has to be chosen sufficiently small.277 However, in CPMD, the maximum permissible integration time step scales like \sqrt{\mu} and therefore has to be significantly smaller than that of BOMD, hence limiting the attainable simulation timescales.278 

The so-called second-generation CPMD method combines the best of both approaches by retaining the large integration time steps of BOMD while, at the same time, preserving the efficiency of CPMD.3,265,266 In this Car–Parrinello-like approach to BOMD, the original fictitious Newtonian dynamics of CPMD for the electronic degrees of freedom is substituted by an improved coupled electron–ion dynamics that keeps the electrons very close to the instantaneous ground-state without the necessity of an additional fictitious mass parameter. The superior efficiency of this method originates from the fact that only one preconditioned gradient computation is necessary per AIMD step. In fact, a gain in efficiency between one and two orders of magnitude has been observed for a large variety of different systems ranging from molecules and solids,279–287 including phase-change materials,288–294 over aqueous systems,295–304 to complex interfaces.305–311 

Within mean-field electronic structure methods, such as Hartree–Fock and KS-DFT, E[\{\psi_i\}; R] is either a functional of the electronic wavefunction, described by a set of occupied MOs |ψ_i⟩, or, equivalently, of the corresponding one-particle density operator ρ = \sum_i |\psi_i\rangle\langle\psi_i|. The improved coupled electron–ion dynamics of second-generation CPMD obeys the following equations of motion for the nuclear and electronic degrees of freedom:

M_I \ddot{\mathbf{R}}_I = -\nabla_{\mathbf{R}_I}\left[\min_{\{\psi_i\}} E[\{\psi_i\};\mathbf{R}_I]\,\Big|_{\{\langle\psi_i|\psi_j\rangle=\delta_{ij}\}}\right] = -\frac{\partial E}{\partial \mathbf{R}_I} + \sum_{i,j}\Lambda_{ij}\frac{\partial}{\partial \mathbf{R}_I}\langle\psi_i|\psi_j\rangle - 2\sum_i\left\langle\frac{\partial\psi_i}{\partial\mathbf{R}_I}\,\middle|\,\frac{\partial E[\{\psi_i\};\mathbf{R}_I]}{\partial\langle\psi_i|} - \sum_j\Lambda_{ij}|\psi_j\rangle\right\rangle,
(135a)
\frac{d^2}{d\tau^2}|\psi_i(\mathbf{r},\tau)\rangle = -\frac{\partial E[\{\psi_i\};\mathbf{R}_I]}{\partial\langle\psi_i(\mathbf{r},\tau)|} - \gamma\omega\,\frac{d}{d\tau}|\psi_i(\mathbf{r},\tau)\rangle + \sum_j\Lambda_{ij}|\psi_j(\mathbf{r},\tau)\rangle.
(135b)

The former is the conventional nuclear equation of motion of BOMD consisting of Hellmann–Feynman, Pulay, and non-self-consistent force contributions,18,270,271,274 whereas the latter constitutes a universal oscillator equation as obtained by nondimensionalization. The first term on the RHS of Eq. (135b) can be thought of as an “electronic force” to propagate |ψi⟩ in dimensionless time τ. The second term is an additional damping term to relax more quickly to the instantaneous electronic ground state, where γ is an appropriate damping coefficient and ω is the undamped angular frequency of the universal oscillator. The final term derives from the constraint to fulfill the holonomic orthonormality condition ⟨ψi|ψj⟩ = δij, by using the Hermitian Lagrangian multiplier matrix Λ. As shown in Eq. (135b), not even a single diagonalization step but just one “electronic force” calculation is required. In other words, the second-generation CPMD method not only entirely bypasses the necessity of a SCF cycle but also the alternative iterative wavefunction optimization.

However, contrary to the evolution of the nuclei, for the short-term integration of the electronic degrees of freedom, accuracy is crucial, which is why a highly accurate yet efficient propagation scheme is essential. As a consequence, the evolution of the MOs is conducted by extending the always-stable predictor–corrector integrator of Kolafa to the electronic structure problem.312 However, since this scheme was originally devised to deal with classical polarization, special attention must be paid to the fact that the holonomic orthonormality constraint, which is due to the fermionic nature of electrons that forces the wavefunction to be antisymmetric in order to comply with the Pauli exclusion principle, is always satisfied during the dynamics. For that purpose, first the predicted MOs at time tn are computed based on the electronic degrees of freedom from the K previous AIMD steps,

|\psi_i^p(t_n)\rangle = \sum_{m=1}^{K}(-1)^{m+1}\,m\,\frac{\binom{2K}{K-m}}{\binom{2K-2}{K-1}}\,\rho(t_{n-m})\,|\psi_i(t_{n-1})\rangle.
(136)

This is to say that the predicted one-electron density operator ρ^p(t_n) is used as a projector onto the occupied subspace |ψi(t_{n−1})⟩ of the previous AIMD step. In this way, we take advantage of the fact that ρ^p(t_n) evolves much more smoothly than |ψ_i^p(t_n)⟩ and is therefore easier to predict. This is particularly true for metallic systems, where many states crowd the Fermi level. Yet, to minimize the error and to further reduce the deviation from the instantaneous electronic ground state, |ψ_i^p(t_n)⟩ is corrected by performing a single minimization step |δψ_i^p(t_n)⟩ along the preconditioned electronic gradient direction, as computed by the orthonormality-conserving orbital transformation (OT) method described in Sec. VIII C.199 Therefore, the modified corrector can be written as

|\psi_i(t_n)\rangle = |\psi_i^p(t_n)\rangle + \omega\left(|\delta\psi_i^p(t_n)\rangle - |\psi_i^p(t_n)\rangle\right)

with

\omega = \frac{K}{2K-1}\quad\text{for } K \geq 2.
(137)

The eventual predictor–corrector scheme leads to an electron dynamics that is rather accurate and time-reversible up to O(Δt^{2K−2}), where Δt is the discretized integration time step, while ω is chosen so as to guarantee a stable relaxation toward the minimum. In other words, the efficiency of the present second-generation CPMD method stems from the fact that the instantaneous electronic ground state is very closely approached within just one electronic gradient calculation. We thus totally avoid the SCF cycle and any expensive diagonalization steps while remaining very close to the BO surface and, at the same time, Δt can be chosen to be as large as in BOMD.
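The predictor coefficients of Eq. (136) and the corrector mixing factor of Eq. (137) are simple combinatorial expressions. As a sanity check, the coefficients must sum to unity so that a time-independent density operator is predicted exactly. A minimal Python sketch (illustrative only, not part of CP2K):

```python
from math import comb

def aspc_predictor_coeffs(K):
    """Kolafa ASPC predictor coefficients of Eq. (136):
    B_m = (-1)^(m+1) * m * C(2K, K-m) / C(2K-2, K-1), for m = 1..K."""
    norm = comb(2 * K - 2, K - 1)
    return [(-1) ** (m + 1) * m * comb(2 * K, K - m) / norm
            for m in range(1, K + 1)]

def aspc_corrector_omega(K):
    """Corrector mixing factor of Eq. (137): omega = K / (2K - 1)."""
    return K / (2 * K - 1)

# The coefficients sum to 1, so a constant density operator is
# reproduced exactly by the predictor.
```

For K = 2 the coefficients are [2.0, −1.0], i.e., a linear extrapolation from the two previous densities.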

However, despite the close proximity of the propagated MOs to the instantaneous electronic ground state, the nuclear dynamics is slightly dissipative, most likely because the employed predictor–corrector scheme is not symplectic. The validity of this assumption has been independently verified by several numerical studies.2,313–315 Nevertheless, presuming that the energy is exponentially decaying, which has been shown to be an excellent assumption,3,265,266 it is possible to rigorously correct for the dissipation by modeling the nuclear forces of second-generation CPMD, F_I^{CP} = −∇_{R_I} E[{ψ_i}; R_I], by

\mathbf{F}_I^{\mathrm{CP}} = \mathbf{F}_I^{\mathrm{BO}} - \gamma_D M_I\dot{\mathbf{R}}_I,
(138)

where FIBO are the exact but inherently unknown BO forces and γD is an intrinsic yet to be determined friction coefficient to mimic the dissipation. The presence of damping immediately suggests a canonical sampling of the Boltzmann distribution by means of the following modified Langevin-type equation:

M_I\ddot{\mathbf{R}}_I = \mathbf{F}_I^{\mathrm{BO}} - \gamma_D M_I\dot{\mathbf{R}}_I + \boldsymbol{\Xi}_I^D
(139a)
= \mathbf{F}_I^{\mathrm{CP}} + \boldsymbol{\Xi}_I^D,
(139b)

where Ξ_I^D is an additive white-noise term, which must obey the fluctuation–dissipation theorem ⟨Ξ_I^D(0) Ξ_I^D(t)⟩ = 2γ_D M_I k_B T δ(t) in order to guarantee an accurate sampling of the Boltzmann distribution.

This is to say that if one knew the unknown value of γD, it would nevertheless be possible to ensure an exact canonical sampling of the Boltzmann distribution despite the dissipation. Fortunately, the inherent value of γD does not need to be known a priori but can be bootstrapped so as to generate the correct average temperature,3,265,266 as measured by the equipartition theorem

\left\langle\frac{1}{2}M_I\dot{\mathbf{R}}_I^2\right\rangle = \frac{3}{2}k_B T.
(140)

More precisely, in order to determine the hitherto unknown value of γD, we perform a preliminary simulation in which we vary γD on the fly using a Berendsen-like algorithm until Eq. (140) is eventually satisfied.295 Alternatively, γD can also be continuously adjusted automatically using the adaptive Langevin technique of Leimkuhler and co-workers.316–318 In this method, the friction coefficient γD of Eq. (139) is reinterpreted as a dynamical variable, defined by a negative feedback loop control law as in the Nosé–Hoover scheme.319,320 The corresponding dynamical equation for γD reads as

\dot{\gamma}_D = \left(2K - n k_B T\right)/Q,
(141)

where K is the kinetic energy, n is the number of degrees of freedom, and Q = k_B T τ_{NH}² is the Nosé–Hoover fictitious mass with time constant τ_NH.
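The negative-feedback character of Eq. (141) can be made explicit in a few lines. The function below (an illustrative sketch, not CP2K code) increases the friction when the instantaneous kinetic energy exceeds its equipartition value (n/2) k_B T and decreases it otherwise:

```python
def gamma_dot(kinetic_energy, n_dof, kT, Q):
    """Adaptive-Langevin feedback law of Eq. (141):
    d(gamma_D)/dt = (2K - n*kB*T) / Q."""
    return (2.0 * kinetic_energy - n_dof * kT) / Q

# Too hot (K > n*kT/2): the friction grows and damps the nuclei;
# too cold: the friction shrinks (or turns negative) and heats them up.
```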

The computational complexity of CLMO DFT described in Sec. IX B grows linearly with the number of molecules, while its overhead cost remains very low because of the small number of electronic descriptors and efficient optimization algorithms (see Fig. 9). These features make CLMO DFT a promising method for accurate AIMD simulations of large molecular systems.

The difficulty in adopting CLMO DFT for dynamical simulations arises from the nonvariational character of compact orbitals. While CLMOs can be reliably optimized using the two-stage SCF procedure described in Sec. IX B, the occupied subspace defined in the first stage must remain fixed during the second stage to achieve convergence. In addition, electron delocalization effects can suddenly become inactive in the course of a dynamical simulation when a neighboring molecule crosses the localization threshold Rc. Furthermore, the variational optimization of orbitals is in practice inherently not perfect and terminated once the norm of the electronic gradient drops below a small but nevertheless finite convergence threshold ϵSCF. These errors accumulate in long AIMD trajectories, leading to non-physical sampling and/or numerical problems.

Traditional strategies to cope with these problems are computationally expensive and include increasing Rc, decreasing ϵSCF, and computing the nonvariational contribution to the forces via a coupled-perturbed procedure.321,322 CP2K implements another approach that uses the CLMO state obtained after the two-stage CLMO SCF procedure to compute only the inexpensive Hellmann–Feynman and Pulay components and neglects the computationally intensive nonvariational component of the nuclear forces. To compensate for the missing component, CP2K uses a modified Langevin equation of motion that is fine-tuned to retain stable dynamics even with imperfect forces. This approach is known as the second-generation CPMD methodology of Kühne and co-workers,3,265,266 which is discussed in detail in Sec. X B. Its combination with CLMO DFT is described in Ref. 250.

An application of CLMO AIMD to liquid water demonstrates that the compensating terms in the modified Langevin equation can be tuned to maintain stable dynamics and to accurately reproduce multiple structural and dynamical properties of water with tight orbital localization (Rc = 1.6 vdWR) and ϵSCF as high as 10−2 a.u.250 The corresponding results are shown in Fig. 10. The low computational overhead of CLMO AIMD, afforded by these settings, makes it ideal for routine medium-scale AIMD simulations, while its linear-scaling complexity allows us to extend first-principles studies of molecular systems to previously inaccessible length scales.

FIG. 10.

Accuracy and efficiency of O(N) AIMD based on the two-stage CLMO SCF for liquid water. All simulations were performed using the PBE XC functional and a TZV2P basis set at constant temperature (300 K) and density (0.9966 g/cm3). For the CLMO methods, Rc = 1.6 vdWR. (a) Radial distribution function, (b) kinetic energy distribution (the gray curve shows the theoretical Maxwell–Boltzmann distribution), and (c) IR spectrum were calculated using the fully converged OT method for delocalized orbitals (black line) and CLMO AIMD with ϵSCF = 10−2 a.u. (d) Timing benchmarks for liquid water on 256 compute cores. Cyan lines represent perfect cubic-scaling, whereas gray lines represent perfect linear-scaling. (e) Weak scalability benchmarks are performed with ϵSCF = 10−2 a.u. Gray dashed lines connect systems simulated on the same number of cores to confirm O(N) behavior. See Ref. 250 for details.


It is important to note that AIMD in CP2K cannot be combined with the CLMO methods based on the Harris functional and perturbative theory (see Sec. IX B). A generalization of the CLMO AIMD methodology to systems of strongly interacting atoms (e.g., covalent crystals) is underway.

The AIMD-based MTS integrator presented here is based on the reversible reference system propagator algorithm (r-RESPA), which was developed for classical MD simulations.323 Using a carefully constructed integration scheme, the time evolution is reversible, and the MD simulation remains accurate and energy conserving. In AIMD-MTS, the difference in computational cost between a hybrid and a local XC functional is exploited by performing a hybrid calculation only after several conventional DFT integration time steps. r-RESPA is derived from the Liouville operator representation of Hamilton mechanics,

iL = \sum_{j=1}^{f}\left(\frac{\partial H}{\partial p_j}\frac{\partial}{\partial x_j} - \frac{\partial H}{\partial x_j}\frac{\partial}{\partial p_j}\right),
(142)

where L is the Liouville operator for the system containing f degrees of freedom. The corresponding positions and momenta are denoted as xj and pj, respectively. This operator is then used to create the classical propagator U(t) for the system, which reads as

U(t) = e^{iLt}.
(143)

Decomposing the Liouville operator into two parts,

iL = iL_1 + iL_2,
(144)

and applying a second-order Trotter-decomposition to the corresponding propagator yields

e^{i(L_1+L_2)\Delta t} = \left[e^{i(L_1+L_2)\Delta t/n}\right]^n = \left[e^{iL_1(\delta t/2)}\,e^{iL_2\delta t}\,e^{iL_1(\delta t/2)}\right]^n + \mathcal{O}(\delta t^3),
(145)

with δt = Δt/n. For this propagator, several integrator schemes can be derived.324 The extension for AIMD-MTS is obtained from a decomposition of the force in the Liouville operator into two or more separate forces, i.e.,

iL = \sum_{j=1}^{f}\left(\dot{x}_j\frac{\partial}{\partial x_j} + F_j^{(1)}\frac{\partial}{\partial p_j} + F_j^{(2)}\frac{\partial}{\partial p_j}\right).
(146)

For that specific case, the propagator can be written as

e^{iL\Delta t} = e^{(\Delta t/2)F_2\frac{\partial}{\partial p}}\left[e^{(\delta t/2)F_1\frac{\partial}{\partial p}}\,e^{\delta t\,\dot{x}\frac{\partial}{\partial x}}\,e^{(\delta t/2)F_1\frac{\partial}{\partial p}}\right]^n e^{(\Delta t/2)F_2\frac{\partial}{\partial p}}.
(147)

This allows us to treat F1 and F2 with different time steps, while the whole propagator still remains time-reversible. The procedure for F1 and F2 will be referred to as the inner and the outer loop, respectively. In the AIMD-MTS approach, the forces are split in the following way:

F_1 = F_{\mathrm{approx}},
(148a)
F_2 = F_{\mathrm{accur}} - F_{\mathrm{approx}},
(148b)

where Faccur are the forces computed by the more accurate method, e.g., hybrid DFT, whereas Fapprox are forces obtained from a less accurate method, e.g., GGA XC functionals. Obviously, the corresponding Liouville operator equals the purely accurate one. The advantage of this splitting is that the magnitude of F2 is usually much smaller than that of F1. To appreciate this, consider how closely geometries and frequencies obtained with hybrid DFT typically match those obtained with a local XC functional, in particular, for stiff degrees of freedom. The difference of the corresponding Hessians is therefore small and low-frequency. However, high-frequency parts are not removed analytically; thus, the theoretical upper limit for the time step of the outer loop remains half the period of the fastest vibration.325 The gain originates from an increased accuracy and stability for larger time steps in the outer loop integration. Even using an outer loop time step close to the theoretical limit, a stable and accurate AIMD is obtained. Additionally, there is no shift to higher frequencies as the (outer loop) time step is increased, contrary to the single time step case.
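The factorized propagator of Eq. (147) translates directly into nested loops of velocity-Verlet kicks. The following sketch (a toy 1D model with a hypothetical force splitting, not CP2K's implementation) propagates a fast harmonic force with the inner time step and a weak slow correction, standing in for Faccur − Fapprox, with the outer one:

```python
def respa_step(x, v, m, f_fast, f_slow, dt, n_inner):
    """One r-RESPA step after Eq. (147): an outer half-kick with the slow
    force F2, n_inner velocity-Verlet steps with the fast force F1 at
    delta_t = dt/n_inner, and a closing outer half-kick with F2."""
    delta = dt / n_inner
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick, e^{(dt/2) F2 d/dp}
    for _ in range(n_inner):               # inner velocity Verlet with F1
        v += 0.5 * delta * f_fast(x) / m
        x += delta * v
        v += 0.5 * delta * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m          # closing outer half-kick
    return x, v
```

With a fast spring k1 = 1 and a slow correction k2 = 0.05, the total energy stays bounded even though the slow force is evaluated only once per outer step.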

Within CP2K, the nature of intermolecular bonding can be rationalized by EDA, while vibrational spectra can be analyzed by normal mode analysis (NMA) and mode-selective vibrational analysis (MSVA) methods.

Intermolecular bonding is a result of the interplay of electrostatic interactions between permanent charges and multipole moments on molecules, polarization effects, Pauli repulsion, donor–acceptor orbital interactions (also known as covalent, charge-transfer, or delocalization interactions), and weak dispersive forces. The goal of EDA is to measure the contribution of these components within the total binding energy, thus gaining deeper insight into physical origins of intermolecular bonds.

To that end, CP2K contains an EDA method based on ALMOs—molecular orbitals localized entirely on single molecules or ions within a larger system.248,252 The ALMO EDA separates the total interaction energy (ΔETOT) into frozen density (FRZ), polarization (POL), and charge-transfer (CT) terms, i.e.,

\Delta E_{\mathrm{TOT}} = \Delta E_{\mathrm{FRZ}} + \Delta E_{\mathrm{POL}} + \Delta E_{\mathrm{CT}},
(149)

which is conceptually similar to the Kitaura–Morokuma EDA326—one of the first EDA methods. The frozen interaction term is defined as the energy required to bring isolated molecules into the system without any relaxation of their MOs, apart from modifications associated with satisfying the Pauli exclusion principle,

\Delta E_{\mathrm{FRZ}} \equiv E(\mathbf{R}_{\mathrm{FRZ}}) - \sum_x E(\mathbf{R}_x),
(150)

where E(Rx) is the energy of isolated molecule x and Rx is the corresponding density matrix, whereas RFRZ is the density matrix of the system constructed from the unrelaxed MOs of the isolated molecules. The ALMO EDA is also closely related to the block-localized wavefunction EDA327 because both approaches use the same variational definition of the polarization term as the energy lowering due to the relaxation of each molecule’s ALMOs in the field of all other molecules in the system,

\Delta E_{\mathrm{POL}} \equiv E(\mathbf{R}_{\mathrm{ALMO}}) - E(\mathbf{R}_{\mathrm{FRZ}}).
(151)

The strict locality of ALMOs is utilized to ensure that the relaxation is constrained to include only intramolecular variations. This approach, whose mathematical and algorithmic details have been described by many authors,243,248,251,328,329 gives an upper limit to the true polarization energy.330 The remaining portion of the total interaction energy, the CT term, is calculated as the difference in the energy of the relaxed ALMO state and the state of fully delocalized optimized orbitals (RSCF),

\Delta E_{\mathrm{CT}} \equiv E(\mathbf{R}_{\mathrm{SCF}}) - E(\mathbf{R}_{\mathrm{ALMO}}).
(152)

A distinctive feature of the ALMO EDA is that the charge-transfer contribution can be separated into contributions associated with forward and back-donation for each pair of molecules, as well as a many-body higher-order (induction) contribution (HO), which is very small for typical intermolecular interactions. Both the amount of the electron density transferred between a pair of molecules and the corresponding energy lowering can be computed,

\Delta Q_{\mathrm{CT}} = \sum_x\sum_{y>x}\left(\Delta Q_{x\to y} + \Delta Q_{y\to x}\right) + \Delta Q_{\mathrm{HO}},
(153a)
\Delta E_{\mathrm{CT}} = \sum_x\sum_{y>x}\left(\Delta E_{x\to y} + \Delta E_{y\to x}\right) + \Delta E_{\mathrm{HO}}.
(153b)
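Because the terms of Eqs. (150)–(152) telescope, their sum reduces exactly to the total interaction energy of Eq. (149). A short bookkeeping sketch with purely hypothetical energies (illustrative values, not CP2K output) makes this explicit:

```python
# Hypothetical total energies (a.u.) for a two-molecule system.
E_isolated = {"x": -76.40, "y": -76.38}   # E(R_x) of the isolated molecules
E_frz  = -152.79   # E(R_FRZ): frozen, antisymmetrized densities
E_almo = -152.81   # E(R_ALMO): intramolecularly relaxed ALMOs
E_scf  = -152.82   # E(R_SCF): fully delocalized optimized orbitals

dE_frz = E_frz - sum(E_isolated.values())   # Eq. (150)
dE_pol = E_almo - E_frz                     # Eq. (151): variational, <= 0
dE_ct  = E_scf - E_almo                     # Eq. (152): variational, <= 0
dE_tot = dE_frz + dE_pol + dE_ct            # Eq. (149)
# The intermediate energies cancel, so dE_tot equals
# E(R_SCF) minus the sum of the isolated-molecule energies.
```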

The ALMO EDA implementation in CP2K is currently restricted to closed-shell fragments. The efficient O(N) optimization of ALMOs serves as its underlying computational engine.249 The ALMO EDA in CP2K can be applied to both gas-phase and condensed matter systems. It has been recently extended to fractionally occupied ALMOs,331 thus enabling the investigation of interactions between metal surfaces and molecular adsorbates. Another unique feature of the implementation in CP2K is the ability to control the spatial range of charge-transfer between molecules using the cutoff radius Rc of CLMOs (see Sec. IX B). Additionally, the ALMO EDA in combination with CP2K’s efficient AIMD engine allows us to switch off the CT term in AIMD simulations, thus measuring the CT contribution to dynamical properties of molecular systems.332 

The ALMO EDA has been applied to study intermolecular interactions in a variety of gas- and condensed-phase molecular systems.331,333 The CP2K implementation of ALMO EDA has been particularly advantageous to understand the nature of hydrogen bonding in bulk liquid water,180,248,254,297,300,301,303,332 ice,334 and water phases confined to low dimensions.335 Multiple studies have pointed out a deep connection between the donor–acceptor interactions and features of the x-ray absorption,254,336 infrared (IR)297,298,300,303,332,337,338 (Fig. 11), and nuclear magnetic resonance spectra of liquid water.180 Extension of ALMO EDA to AIMD simulations has shown that the insignificant covalent component of HB determines many unique properties of liquid water including its structure (Fig. 11), dielectric constant, hydrogen bond lifetime, rate of self-diffusion, and viscosity.250,332

FIG. 11.

Comparison of the calculated radial distribution functions and IR spectra computed for liquid water based on AIMD simulations at the BLYP + D3/TZV2P level of theory with and without the CT terms. See Ref. 332 for details.


IR vibrational spectra can be obtained with CP2K by carrying out an NMA within the Born–Oppenheimer approximation. The expansion of the potential energy in a Taylor series around the equilibrium geometry gives

E_{\mathrm{pot}}(\{R_A\}) = E_{\mathrm{pot}}(0) + \sum_A\frac{\partial E_{\mathrm{pot}}}{\partial R_A}\,\delta R_A + \frac{1}{2}\sum_{A,B}\frac{\partial^2 E_{\mathrm{pot}}}{\partial R_A\partial R_B}\,\delta R_A\,\delta R_B + \cdots.
(154)

At the equilibrium geometry, the first derivative is zero by definition; hence, in the harmonic approximation, only the second derivative matrix (Hessian) has to be calculated. In our implementation, the Hessian is obtained with the three-point central difference approximation, which for a system of M particles corresponds to 6M force evaluations. The Hessian matrix is diagonalized to determine the 3M eigenvectors Q_k representing a set of decoupled coordinates, the normal modes. In mass-weighted coordinates, the angular frequencies of the corresponding vibrations are then obtained from the eigenvalues of the second derivative matrix.
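The finite-difference construction of the mass-weighted Hessian can be illustrated on a toy system. The sketch below (illustrative Python with an arbitrary two-particle spring model, not a CP2K force evaluation) differentiates analytic forces by three-point central differences and recovers the analytic frequency sqrt(k(1/m1 + 1/m2)) together with the zero-frequency translational mode:

```python
import numpy as np

def num_hessian(force, x0, h=1e-4):
    """Three-point central-difference Hessian from analytic forces:
    H_ij = -dF_i/dx_j, i.e., two force evaluations per coordinate
    (6M evaluations in three dimensions)."""
    n = len(x0)
    H = np.zeros((n, n))
    for j in range(n):
        xp, xm = x0.copy(), x0.copy()
        xp[j] += h
        xm[j] -= h
        H[:, j] = -(force(xp) - force(xm)) / (2 * h)
    return 0.5 * (H + H.T)                 # symmetrize

# Toy system: two particles in 1D joined by a harmonic spring.
k, m1, m2 = 4.0, 1.0, 2.0
def force(x):
    f = -k * (x[0] - x[1])
    return np.array([f, -f])

H = num_hessian(force, np.array([0.0, 0.0]))
Minv = np.diag(1.0 / np.sqrt([m1, m2]))
Hm = Minv @ H @ Minv                       # mass-weighted Hessian
freqs = np.sqrt(np.abs(np.linalg.eigvalsh(Hm)))
```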

For applications in which only a few vibrations are of interest, the MSVA can reduce the computational cost significantly.339,340 Instead of computing the full second derivative matrix, only a subspace is evaluated iteratively using the Davidson algorithm. As in NMA, the starting point of this method is the eigenvalue equation

H^{(m)} Q_k = \lambda_k Q_k,
(155)

where H(m) is the mass weighted Hessian in Cartesian coordinates. Instead of numerically computing the Hessian elementwise, the action of the Hessian on a predefined or arbitrary collective displacement vector

d = \sum_i d_i\, e_i^{(m)}
(156)

is chosen, where e_i^{(m)} are the 3M nuclear basis vectors. The action of the Hessian on d is given in the first approximation as

\sigma_k = \left(H^{(m)} d\right)_k = \sum_l\frac{\partial^2 E_{\mathrm{pot}}}{\partial R_l^{(m)}\partial R_k^{(m)}}\,d_l = \frac{\partial^2 E_{\mathrm{pot}}}{\partial R_k^{(m)}\partial d}.
(157)

The first derivatives with respect to the nuclear positions are simply the forces, which can be computed analytically. The derivative of the force components with respect to d can then be evaluated as a numerical derivative using the three-point central difference approximation to yield the vector σ. The subspace approximation to the Hessian at the ith iteration of the procedure is an i × i matrix,

H_{\mathrm{approx}}^{(m),i} = D^T H^{(m)} D = D^T \Sigma,
(158)

where D and Σ are the collections of the d and σ vectors up to the actual iteration step. Solving the eigenvalue problem for the small Davidson matrix

H_{\mathrm{approx}}^{(m),i}\, u^i = \tilde{\lambda}^i u^i,
(159)

we obtain the approximate eigenvalues λ̃_k^i. From the resulting eigenvectors, the residuum vector is constructed as

r_m^i = \sum_{k=1}^{i} u_{mk}^i\left[\sigma_k - \tilde{\lambda}_m^i d_k\right],
(160)

where the sum runs over all the basis vectors d_k, the number of which increases at each new iteration i. The residuum vector is used as a displacement vector for the next iteration. The approximation to the exact eigenvector Q_m at the ith iteration is

Q_m \approx \sum_{k=1}^{i} u_{mk}^i\, d_k.
(161)

This method avoids the evaluation of the complete Hessian and therefore requires fewer force evaluations. Yet, in the limit of 3M iterations, the exact Hessian is recovered and thus the exact frequencies and normal modes, within the accuracy of the finite-difference approximation.
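The iteration of Eqs. (155)–(161) can be condensed into a small matrix-free sketch. The Python code below is illustrative: it uses the plain residuum as the next displacement and omits the preconditioning and mode-tracking logic of the CP2K implementation, but it finds the lowest eigenpair of a symmetric matrix from matrix–vector products alone:

```python
import numpy as np

def davidson_lowest(matvec, n, n_iter=30, tol=1e-8):
    """Matrix-free Davidson-type iteration after Eqs. (155)-(161):
    only the action sigma = H d on trial vectors d is ever required."""
    rng = np.random.default_rng(0)
    D = np.empty((n, 0))                      # trial vectors d_k
    S = np.empty((n, 0))                      # sigma_k = H d_k
    d = rng.standard_normal(n)
    for _ in range(n_iter):
        for _ in range(2):                    # orthogonalize twice for stability
            d = d - D @ (D.T @ d)
        d = d / np.linalg.norm(d)
        D = np.column_stack([D, d])
        S = np.column_stack([S, matvec(d)])
        Happrox = D.T @ S                     # subspace Hessian, Eq. (158)
        lam, U = np.linalg.eigh(0.5 * (Happrox + Happrox.T))  # Eq. (159)
        u = U[:, 0]
        r = S @ u - lam[0] * (D @ u)          # residuum vector, Eq. (160)
        if np.linalg.norm(r) < tol:
            break
        d = r                                 # displacement for next iteration
    return lam[0], D @ u                      # approximate eigenpair, Eq. (161)

# 20x20 symmetric test matrix: diagonal 1..20 with weak 0.1 couplings.
A = np.diag(np.arange(1.0, 21.0)) + 0.1 * (np.eye(20, k=1) + np.eye(20, k=-1))
lam0, q0 = davidson_lowest(lambda v: A @ v, 20)
```

For this test matrix, the converged eigenvalue agrees with a dense diagonalization.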

As this is an iterative procedure (Davidson subspace algorithm), the initial guess is important for convergence. Moreover, there is no guarantee that the created subspace will contain the desired modes in the case of a bad initial guess. In CP2K, the choice of the target mode can be a single mode, a range of frequencies, or modes localized on preselected atoms. If little is known about the modes of interest (e.g., a frequency range and contributing atoms), an initial guess can be built by a random displacement of atoms. In cases where a single mode is tracked, one can use normal modes obtained from lower-quality methods. The residuum vector is calculated with respect to the mode whose eigenvalue is closest to the input frequency. The method will only converge to the mode of interest if the initial guess is suitable, since the implemented algorithm always improves the mode closest to the input frequency. With an arbitrary initial guess, the mode of interest might not be present in the subspace at the beginning, and the Davidson algorithm might converge before the desired mode becomes part of the subspace. Therefore, there is no guarantee that the algorithm always converges to the desired mode. When a range of frequencies is given as the initial guess, it might happen that either none or more than one mode is already present in the spectrum. In the first case, the mode closest to the desired range is tracked; in the second case, the least converged mode is improved. If the mode is selected by a list of contributing atoms, at each step, the approximations to the eigenvectors of the full Hessian are calculated, and the vector with the largest contributions of the selected atoms is tracked for the next iteration step.

The MSVA scheme can be efficiently parallelized by either distributing the force calculations or using the block Davidson algorithm. In the latter approach, the parallel environment is split into n sub-environments, each consisting of m processors performing the force calculations. The initial vectors are constructed for all n environments such that n orthonormal vectors are generated. After each iteration step, the new d and σ vectors are combined into a single D and Σ matrix. These matrices are then used to construct the approximate Hessian, and from this, the n modes closest to the selected vectors are again distributed.

The IR intensities are defined as the derivatives of the dipole vector with respect to the normal mode. Therefore, it is necessary to activate the computation of the dipoles together with the NMA. By applying the mode selective approach, large condensed matter systems can be addressed. An illustrative example is the study by Schiffmann et al.,341 where the interaction of the N3, N719, and N712 dyes with anatase(101) has been modeled. The vibrational spectra for all low-energy conformations have been computed and used to assist the assignment of the experimental IR spectra, revealing a protonation dependent binding mode and the role of self-assembly in reaching high coverage.

CP2K aims to provide a wide range of potential energy methods, ranging from empirical approaches such as classical force fields, through DFT-based techniques, to quantum chemical methods. In addition, multiple descriptions can be arbitrarily combined at the input level so that many combinations of methods are directly available. Examples are schemes that combine two or more potential energy surfaces via

E[\mathbf{R}] = E_{\mathrm{MM}}[\mathbf{R}_{\mathrm{I+II}}] - E_{\mathrm{MM}}[\mathbf{R}_{\mathrm{I}}] + E_{\mathrm{QM}}[\mathbf{R}_{\mathrm{I}}],
(162)

linear combinations of potentials as necessary in alchemical free-energy calculations,

E_\lambda[\mathbf{R}] = \lambda E_{\mathrm{I}}[\mathbf{R}] + (1-\lambda)E_{\mathrm{II}}[\mathbf{R}],
(163)

or propagation of the lowest potential energy surfaces,

E[\mathbf{R}] = \min\left(E_{\mathrm{I}}[\mathbf{R}],\, E_{\mathrm{II}}[\mathbf{R}],\, \ldots\right).
(164)
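The three combination rules of Eqs. (162)–(164) amount to a few lines each; a schematic Python sketch (with placeholder energy values, not tied to any actual CP2K input):

```python
def subtractive_qmmm(E_mm_total, E_mm_qm_region, E_qm_region):
    """Subtractive combination of Eq. (162):
    E = E_MM[I+II] - E_MM[I] + E_QM[I]."""
    return E_mm_total - E_mm_qm_region + E_qm_region

def alchemical_mix(E_I, E_II, lam):
    """Linear combination of Eq. (163), as used in alchemical
    free-energy calculations."""
    return lam * E_I + (1.0 - lam) * E_II

def lowest_surface(*energies):
    """Propagation on the lowest of several potential energy
    surfaces, Eq. (164)."""
    return min(energies)
```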

However, besides such rather simple techniques,342 more sophisticated embedding methods, described in the following, are also available within CP2K.

The QM/molecular-mechanics (MM) multi-grid implementation in CP2K is based on the use of an additive mixed quantum-classical mechanics (QM/MM) scheme.343–345 The total energy of the molecular system can be partitioned into three disjoint terms,

E_{\mathrm{TOT}}(\mathbf{R}_\alpha,\mathbf{R}_a) = E_{\mathrm{QM}}(\mathbf{R}_\alpha) + E_{\mathrm{MM}}(\mathbf{R}_a) + E_{\mathrm{QM/MM}}(\mathbf{R}_\alpha,\mathbf{R}_a).
(165)

These energy terms parametrically depend on the coordinates of the nuclei in the quantum region (Rα) and on classical atoms (Ra). Hence, EQM is the pure quantum energy, computed using the Quickstep code,5 whereas EMM is the classical energy, described through the use of the internal classical molecular-mechanics (MM) driver called FIST. The latter allows the use of the most common force-fields employed in MM simulations.346,347 The interaction energy term EQM/MM contains all non-bonded contributions between the QM and the MM subsystems, and in a DFT framework, we express it as

E_{\mathrm{QM/MM}}(\mathbf{R}_\alpha,\mathbf{R}_a) = \sum_{a\in\mathrm{MM}} q_a\int\rho(\mathbf{r},\mathbf{R}_\alpha)\,v_a(|\mathbf{r}-\mathbf{R}_a|)\,d\mathbf{r} + \sum_{a\in\mathrm{MM},\,\alpha\in\mathrm{QM}} v_{\mathrm{NB}}(\mathbf{R}_\alpha,\mathbf{R}_a),
(166)

where Ra is the position of the MM atom a with charge qa, ρ(r, Rα) is the total (electronic plus nuclear) charge density of the quantum system, and vNB(Rα, Ra) is the non-bonded interaction between classical atom a and quantum atom α. The electrostatic potential (ESP) of the MM atoms va(|r − Ra|) is described by representing each atomic charge as a Gaussian charge distribution,

\rho(|\mathbf{r}-\mathbf{R}_a|) = \frac{1}{\pi^{3/2}\, r_{c,a}^3}\exp\left(-\left(|\mathbf{r}-\mathbf{R}_a|/r_{c,a}\right)^2\right),
(167)

with width rc,a, eventually resulting in

v_a(|\mathbf{r}-\mathbf{R}_a|) = \frac{\mathrm{erf}\left(|\mathbf{r}-\mathbf{R}_a|/r_{c,a}\right)}{|\mathbf{r}-\mathbf{R}_a|}.
(168)

This renormalized potential has the desired property of tending to 1/r at large distances and going smoothly to a constant for small r (see Ref. 348 for renormalization details). Due to the Coulomb long-range behavior, the computational cost of the integral in Eq. (166) can be very large. In CP2K, we designed a decomposition of the electrostatic potential in terms of Gaussian functions with different cutoffs, combined with a real-space multi-grid framework, to accelerate the calculation of the electrostatic interaction. We named this method Gaussian Expansion of the Electrostatic Potential or GEEP.348 The advantage of this methodology is that grids of different spacing can be used to represent the different contributions of v_a(|r − R_a|), instead of using only the same grid employed for the mapping of the electronic wavefunction. In fact, by writing a function as a sum of terms with compact support and with different cutoffs, the mapping of the function can be efficiently achieved using different grid levels, in principle as many levels as contributing terms, each optimal to describe the corresponding term.
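The limiting behavior of Eq. (168) is easy to verify numerically; the sketch below (illustrative Python, not CP2K code) evaluates erf(r/rc)/r, which tends to 1/r at long range and to the finite value 2/(√π rc) at the origin:

```python
from math import erf, pi, sqrt

def v_smeared(r, rc):
    """Potential of a normalized Gaussian charge of width rc, Eq. (168):
    erf(r/rc)/r, finite at the origin and Coulombic at long range."""
    if r < 1e-12:
        return 2.0 / (sqrt(pi) * rc)   # analytic r -> 0 limit
    return erf(r / rc) / r
```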

1. QM/MM for isolated systems

For isolated systems, each MM atom is represented as a continuous Gaussian charge distribution, and each GEEP term is mapped on one of the available grid levels, chosen as the first grid whose cutoff is equal to or larger than the cutoff of that particular GEEP contribution. However, all atoms contribute to the coarsest grid level through the long-range part Rlow(|r − Ra|),348 which is the smooth GEEP component as defined in Eq. (169). The result of this collocation procedure is a multi-grid representation of the QM/MM electrostatic potential V_i^{QM/MM}(r, Ra), where i labels the grid level, represented by a sum of single atomic contributions, V_i^{QM/MM}(r, Ra) = Σ_{a∈MM} v_a^i(r, Ra), on that particular grid level. In a realistic system, the collocation represents most of the computational time spent in the evaluation of the QM/MM electrostatic potential, around 60%–80%. Afterward, the multi-grid expansion V_i^{QM/MM}(r, Ra) is sequentially interpolated starting from the coarsest grid level up to the finest level, using real-space interpolation and restriction operators.

Using the real-space multi-grid operators together with the GEEP expansion, the prefactor in the evaluation of the QM/MM electrostatic potential has been lowered from N_f^3 to N_c^3, where N_f is the number of grid points on the finest grid and N_c is the number of grid points on the coarsest grid. The computational cost of the other operations for evaluating the electrostatic potential, such as the mapping of the Gaussians and the interpolations, becomes negligible in the limit of a large MM system, usually more than 600–800 MM atoms.

Using the fact that the grids are commensurate (N_f/N_c = 2^{3(N_grid − 1)}) and employing four grid levels for every calculation, the speed-up factor is around 512 (2^9). This means that the present implementation is two orders of magnitude faster than the direct analytical evaluation of the potential on the grid.

2. QM/MM for periodic systems

The effect of the periodic replicas of the MM subsystem is only in the long-range term and comes entirely from the residual function Rlow(r, Ra),

V_{\mathrm{recip}}^{\mathrm{QM/MM}}(\mathbf{r},\mathbf{R}_a) = \sum_{\mathbf{L}}\sum_a q_a v_a^{\mathrm{recip}} = \sum_{\mathbf{L}}\sum_a q_a R_{\mathrm{low}}(|\mathbf{r}-\mathbf{R}_a+\mathbf{L}|),
(169)

where L labels the infinite sums over the period replicas. Performing the same manipulation used in the Ewald summation,349 the previous equation can be computed more efficiently in reciprocal space, i.e.,

V_{\mathrm{recip}}^{\mathrm{QM/MM}}(\mathbf{r}_i,\mathbf{R}_a) = \frac{1}{L^3}\sum_{\mathbf{k}}^{|\mathbf{k}|\leq k_{\mathrm{cut}}}\sum_a q_a\,\tilde{R}_{\mathrm{low}}(\mathbf{k})\cos\left[2\pi\,\mathbf{k}\cdot(\mathbf{r}_i-\mathbf{R}_a)\right].
(170)

The term R̃low(k), representing the Fourier transform of the smooth electrostatic potential, can be analytically evaluated via

\tilde{R}_{\mathrm{low}}(\mathbf{k}) = \frac{4\pi}{|\mathbf{k}|^2}\exp\left(-\frac{|\mathbf{k}|^2 r_{c,a}^2}{4}\right) - \sum_g^{N_g} A_g\left(\sqrt{\pi}\right)^3 G_g^3\exp\left(-\frac{G_g^2|\mathbf{k}|^2}{4}\right).
(171)

The potential in Eq. (170) can be mapped on the coarsest available grid. Once the electrostatic potential of a single MM charge within periodic boundary conditions is derived, the evaluation of the electrostatic potential due to the MM subsystem is easily computed employing the same multi-grid operators (interpolation and restriction) used for isolated systems.

The description of the long-range QM/MM interaction with periodic boundary conditions requires the description of the QM/QM periodic interactions, which play a significant role if the QM subsystem has a net charge different from zero or a significant dipole moment. Here, we exploit a technique proposed by Blöchl350 for decoupling the periodic images and restoring the correct periodicity for the QM part as well. A full and comprehensive description of the methods is reported in Ref. 351.

3. Image charge augmented QM/MM

The IC augmented QM/MM model in CP2K has been developed for the simulation of molecules adsorbed at metallic interfaces.352 In the IC-QM/MM scheme, the adsorbates are described by KS-DFT and the metallic substrate is treated at the MM level of theory. The interactions between the QM and MM subsystems are modeled by empirical potentials of, e.g., the Lennard-Jones-type to reproduce dispersion and Pauli repulsion. Polarization effects due to electrostatic screening in metallic conductors are explicitly accounted for by applying the IC formulation.

The charge distribution of the adsorbed molecules generates an electrostatic potential $V_e(\mathbf{r})$, which extends into the substrate. If the electrostatic response of the metal is not taken into account in our QM/MM setup, the electrostatic potential assumes different values at different points $\mathbf{r}$ inside the metal slab, as illustrated in Fig. 12. The correct physical behavior, however, is that $V_e(\mathbf{r})$ induces an electrostatic response such that the potential in the metal is zero, or at least constant. Following Ref. 353, we describe the electrostatic response of the metallic conductor by introducing an image charge distribution $\rho_m(\mathbf{r})$, which is modeled by a set of Gaussian charges $\{g_a\}$ centered at the metal atoms so that

$$\rho_m(\mathbf{r}) = \sum_a c_a\, g_a(\mathbf{r}, \mathbf{R}_a),$$
(172)

where $\mathbf{R}_a$ are the positions of the metal atoms. The unknown expansion coefficients $c_a$ are determined self-consistently by imposing the constant-potential condition, i.e., the potential $V_m(\mathbf{r})$ generated by $\rho_m(\mathbf{r})$ screens $V_e(\mathbf{r})$ within the metal so that $V_e(\mathbf{r}) + V_m(\mathbf{r}) = V_0$, where $V_0$ is a constant potential that can differ from zero if an external potential is applied. The modification of the electrostatic potential upon application of the IC correction is shown in Fig. 12. Details on the underlying theory and implementation can be found in Ref. 352.
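
The constant-potential condition can be illustrated with a toy collocation model. In the sketch below (not the CP2K implementation), point charges with an assumed finite self-interaction value stand in for the Gaussian functions $g_a$, and all positions and numerical values are hypothetical:

```python
import math

# Toy constant-potential condition: solve for coefficients c_a such that
# V_e(R_a) + V_m(R_a) = V0 at every metal-atom position R_a.

def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (tiny systems only)."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

metal = [(float(x), 0.0, 0.0) for x in range(4)]  # metal-atom positions R_a
source = [(1.5, 0.0, 3.0, 0.5)]                   # one external charge (x,y,z,q)
V0 = 0.0                                          # no applied external potential
SELF = 2.0  # assumed finite self-interaction (width-dependent for Gaussians)

# external potential V_e at the metal atoms, and the interaction matrix A
v_e = [sum(q / math.dist((x, y, z), R) for x, y, z, q in source) for R in metal]
A = [[SELF if a == b else 1.0 / math.dist(metal[a], metal[b])
      for b in range(len(metal))] for a in range(len(metal))]
c = solve(A, [V0 - v for v in v_e])

# check: V_e + V_m is now constant (= V0) at all metal atoms
v_m = [sum(A[a][b] * c[b] for b in range(len(metal))) for a in range(len(metal))]
print(max(abs(v_e[a] + v_m[a] - V0) for a in range(len(metal))))  # ~0
```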

FIG. 12.

Simulation of molecules at metallic surfaces using the IC-QM/MM approach. The isosurface of the electrostatic potential at 0.0065 a.u. (red) and −0.0065 a.u. (blue) for a single thymine molecule at Au(111). Left: standard QM/MM approach, where the electrostatic potential of the molecule extends beyond the surface into the metal. Right: IC-QM/MM approach reproducing the correct electrostatics expected for a metallic conductor.


The IC-QM/MM scheme provides a computationally efficient way to describe the MM-based electrostatic interactions between the adsorbate and the metal in a fully self-consistent fashion, allowing the charge densities of the QM and MM parts to mutually modify each other. Except for the positions of the metal atoms, no input parameters are required. The computational overhead compared to a conventional QM/MM scheme is in the current implementation negligible. The IC augmentation adds an attractive interaction because ρm(r) mirrors the charge distribution of the molecule but has the opposite sign (see Ref. 352 for the energy expression). Therefore, the IC scheme strengthens the interactions between molecules and metallic substrates, in particular, if adsorbates are polar or even charged. It also partially accounts for the rearrangement of the electronic structure of the molecules when they approach the surface.352 

The IC-QM/MM approach is a good model for systems, where the accurate description of the molecule–molecule interactions is of primary importance and the surface acts merely as a template. For such systems, it is sufficient when the augmented QM/MM model predicts the structure of the first adsorption layer correctly. The IC-QM/MM approach has been applied, for example, to liquid water/Pt(111) interfaces,352,354 organic thin-film growth on Au(100),355 as well as aqueous solutions of DNA molecules at gold interfaces.356,357 Instructions on how to set up an IC-QM/MM calculation with CP2K can be found in Ref. 358.

4. Partial atomic charges from electrostatic potential fitting

Electrostatic potential (ESP) fitting is a popular approach to determine a set of partial atomic point charges $\{q_a\}$, which can then be used, e.g., in classical MD simulations to model the electrostatic contributions to the force field. The fitting procedure is applied such that the charges $\{q_a\}$ optimally reproduce a given potential $V^{\text{QM}}$ obtained from a quantum chemical calculation. To avoid unphysical values of the fitted charges, restraints (and constraints) are often imposed on $q_a$; the resulting charges are then called RESP charges.359

The QM potential is typically obtained from a preceding DFT or Hartree–Fock calculation. The difference between $V^{\text{QM}}(\mathbf{r})$ and the potential $V^{\text{RESP}}(\mathbf{r})$ generated by $\{q_a\}$ is minimized in a least squares fitting procedure at defined grid points $\mathbf{r}_k$. The residual $R_{\text{esp}}$ that is minimized is

$$R_{\text{esp}} = \frac{1}{N} \sum_k^N \left(V^{\text{QM}}(\mathbf{r}_k) - V^{\text{RESP}}(\mathbf{r}_k)\right)^2,$$
(173)

where $N$ is the total number of grid points included in the fit. The choice of $\{\mathbf{r}_k\}$ is system dependent, and choosing $\{\mathbf{r}_k\}$ carefully is important to obtain meaningful charges. For isolated molecules or porous periodic structures (e.g., metal–organic frameworks), $\{\mathbf{r}_k\}$ are sampled within a spherical shell around the atoms defined by a minimal and a maximal radius. The minimal radius is usually set to values larger than the van der Waals radius to avoid fitting in spatial regions where the potential varies rapidly, which would destabilize the fit. As a general guideline, the charges should be fitted in the spatial regions relevant for interatomic interactions. CP2K also offers the possibility to fit RESP charges for slab-like systems. For these systems, it is important to reproduce the potential accurately above the surface, where adsorption processes take place. The sampling technique is flexible enough to follow the corrugation of the surface (see Ref. 360).
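
The least-squares procedure of Eq. (173) can be sketched in a few lines. The example below is illustrative (not CP2K's RESP code): a reference potential generated by two known point charges is refitted at hypothetical grid points, and the fit recovers the generating charges:

```python
import math

# Minimal ESP fit: choose charges q_a at fixed sites so that their Coulomb
# potential reproduces a reference potential V_QM at a set of grid points.

def coulomb(q, center, r):
    return q / math.dist(center, r)

sites = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)]   # "molecule": two fixed sites
true_q = [0.4, -0.4]                          # charges generating V_QM

# hypothetical grid points on a shell well outside the van der Waals region
grid = [(3.0, 0.0, 0.0), (-2.0, 1.0, 0.0), (0.5, 2.5, 0.0),
        (1.0, -2.2, 1.0), (2.5, 1.5, -1.0), (-1.5, -1.5, 1.5)]
v_qm = [sum(coulomb(q, s, r) for q, s in zip(true_q, sites)) for r in grid]

# design matrix A[k][a] = 1/|r_k - R_a|; solve the 2x2 normal equations
A = [[1.0 / math.dist(r, s) for s in sites] for r in grid]
ata = [[sum(A[k][i] * A[k][j] for k in range(len(grid)))
        for j in range(2)] for i in range(2)]
atv = [sum(A[k][i] * v_qm[k] for k in range(len(grid))) for i in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
q_fit = [(ata[1][1] * atv[0] - ata[0][1] * atv[1]) / det,
         (ata[0][0] * atv[1] - ata[0][1] * atv[0]) / det]
print(q_fit)  # recovers approximately [0.4, -0.4]
```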

When calculating RESP charges for periodic systems, periodic boundary conditions have to be employed for the calculation of $V^{\text{RESP}}$. In CP2K, the RESP point charges $q_a$ are represented as Gaussian functions. The resulting charge distribution is mapped onto a regular real-space grid, and the GPW formalism is employed to obtain a periodic RESP potential. Details of the implementation can be found in Ref. 360.

CP2K also features a GPW implementation of the REPEAT method,361 which is a modification of RESP fitting for periodic systems. The residual in Eq. (173) is modified such that the variance of the potentials, instead of their absolute difference, is fitted,

$$R_{\text{repeat}} = \frac{1}{N} \sum_k^N \left(V^{\text{QM}}(\mathbf{r}_k) - V^{\text{RESP}}(\mathbf{r}_k) - \delta\right)^2,$$
(174)

where

$$\delta = \frac{1}{N} \sum_k^N \left(V^{\text{QM}}(\mathbf{r}_k) - V^{\text{RESP}}(\mathbf{r}_k)\right).$$
(175)

The REPEAT method was originally introduced to account for the arbitrary offset of the electrostatic potential in infinite systems, which depends on the numerical method used. In CP2K, $V^{\text{QM}}$ and $V^{\text{RESP}}$ are both evaluated with the same method, the GPW approach, and thus have the same offset. However, fitting the variance is an easier task than fitting the absolute difference of the potentials and significantly stabilizes the periodic fit. Using the REPEAT method is thus recommended for the periodic case, in particular for systems that are not slab-like. For slab-like systems, the potential above the surface changes very smoothly and we find that δ ≈ 0.360
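
The effect of the REPEAT modification can be illustrated numerically. In the sketch below (illustrative numbers, not CP2K output), the fitted potential differs from the reference only by a constant offset, which the δ of Eq. (175) absorbs completely:

```python
# Compare the plain RESP residual of Eq. (173) with the REPEAT residual of
# Eqs. (174) and (175) for two potentials that differ by a constant offset.

def resp_residual(v_qm, v_fit):
    n = len(v_qm)
    return sum((a - b) ** 2 for a, b in zip(v_qm, v_fit)) / n

def repeat_residual(v_qm, v_fit):
    n = len(v_qm)
    delta = sum(a - b for a, b in zip(v_qm, v_fit)) / n  # Eq. (175)
    return sum((a - b - delta) ** 2 for a, b in zip(v_qm, v_fit)) / n

v_qm = [0.10, 0.25, -0.05, 0.40]          # reference potential at grid points
v_fit = [v + 0.3 for v in v_qm]            # same shape, shifted by an offset

print(resp_residual(v_qm, v_fit))          # ~0.09: penalizes the offset
print(repeat_residual(v_qm, v_fit))        # ~0: offset absorbed into delta
```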

The periodic RESP and REPEAT implementations in CP2K have been used to obtain atomic charges for surface systems, such as corrugated hexagonal boron nitride (hBN) monolayers on Rh(111).360 These charges were then used in QM/MM MD simulations of liquid water films (QM) on the hBN@Rh(111) substrate (MM). Other applications comprise two-dimensional periodic supramolecular structures,362 metal–organic frameworks,363–365 as well as graphene.366 Detailed instructions on how to set up a RESP or REPEAT calculation with CP2K can be found in Ref. 367.

Quantum embedding theories are multi-level approaches that apply different electronic structure methods to subsystems interacting with each other quantum-mechanically.368 Density functional embedding theory (DFET) introduces high-order correlation for a chemically relevant subsystem (cluster), whereas the environment and the interaction between the cluster and the environment are described with DFT via the unique local embedding potential $v_{\text{emb}}(\mathbf{r})$.369 The simplest way to calculate the total energy follows first-order perturbation theory,

$$E_{\text{total}}^{\text{DFET}} = E_{\text{total}}^{\text{DFT}} + \left(E_{\text{cluster,emb}}^{\text{CW}} - E_{\text{cluster,emb}}^{\text{DFT}}\right),$$
(176)

where $E_{\text{cluster,emb}}^{\text{DFT}}$ and $E_{\text{env,emb}}^{\text{DFT}}$ are the DFT energies of the embedded subsystems, whereas $E_{\text{cluster,emb}}^{\text{CW}}$ is the energy of the embedded cluster at the correlated wavefunction (CW) level of theory. All these quantities are computed with an additional one-electron embedding term in the Hamiltonian.

The embedding potential is obtained from the condition that the sum of the embedded subsystem densities should reconstruct the DFT density of the total system. This can be achieved by maximizing the Wu–Yang functional with respect to $v_{\text{emb}}$,370

$$W[v_{\text{emb}}] = E_{\text{cluster}}[\rho_{\text{cluster}}] + E_{\text{env}}[\rho_{\text{env}}] + \int v_{\text{emb}} \left(\rho_{\text{total}} - \rho_{\text{cluster}} - \rho_{\text{env}}\right) d\mathbf{r},$$
(177)

with the functional derivative identical to the density difference, $\delta W / \delta v_{\text{emb}} = \rho_{\text{total}} - \rho_{\text{cluster}} - \rho_{\text{env}}$.

The DFET workflow consists of a DFT calculation of the total system, followed by repeated DFT calculations on the subsystems with an updated embedding potential until the total DFT density is reconstructed. Once this condition is fulfilled, the higher-level embedded calculation is performed on the cluster. The main computational cost comes from the DFT calculations; it is reduced by several measures, such as extrapolating the wavefunction from previous iterations via the time-reversible always stable predictor–corrector integrator.312
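
The workflow can be caricatured as a gradient-ascent loop on the Wu–Yang functional. In this toy sketch, each DFT calculation is replaced by a stub in which the subsystem density responds linearly to the embedding potential; all densities, the grid, and the response coefficient are made up for illustration:

```python
# Schematic DFET loop: update v_emb along dW/dv = rho_total - rho_c - rho_e
# until the sum of the embedded subsystem densities reconstructs rho_total.

GRID = 8
rho_total = [1.0, 1.2, 1.5, 1.1, 0.9, 0.8, 1.0, 1.3]   # target total density
rho0 = {"cluster": [0.6, 0.7, 0.9, 0.6, 0.3, 0.2, 0.3, 0.5],
        "env":     [0.3, 0.4, 0.5, 0.4, 0.5, 0.5, 0.6, 0.7]}
CHI = 0.5  # toy linear-response coefficient (stands in for a full SCF solve)

def embedded_density(sub, v_emb):
    """Stub for a DFT calculation of one subsystem under v_emb: the density
    grows where the potential is raised, which keeps W concave in this toy."""
    return [rho0[sub][i] + CHI * v_emb[i] for i in range(GRID)]

v_emb = [0.0] * GRID
for _ in range(200):
    rho_c = embedded_density("cluster", v_emb)
    rho_e = embedded_density("env", v_emb)
    grad = [rho_total[i] - rho_c[i] - rho_e[i] for i in range(GRID)]  # dW/dv
    v_emb = [v_emb[i] + 0.5 * grad[i] for i in range(GRID)]           # ascent

print(max(abs(g) for g in grad))  # near zero: total density reconstructed
```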

The DFET implementation is available for closed and open electronic shells, with unrestricted and restricted open-shell variants for the latter. It is limited to GPW calculations with pseudopotentials describing the core electrons (i.e., all-electron methods are currently not available). Any method implemented within Quickstep is available as a higher-level method, including hybrid DFT, MP2, and RPA. It is possible to perform property calculations on the embedded clusters using an externally provided $v_{\text{emb}}$. The subsystems can employ different basis sets, although they must share the same PW grid.

In our implementation, $v_{\text{emb}}$ can be represented either on the real-space grid or in a finite Gaussian basis set; the first option is preferable, as it allows a much more accurate total density reconstruction and is computationally cheaper.

AIMD simulations of solutions, biological systems, or surfaces in the presence of a solvent are often computationally dominated by the explicit solvent, which may easily account for 70% of the total number of atoms in a model system. Although the first and second solvation shells around a solute, or the first solvent layers above a surface, might have a direct impact via chemical bonding, for instance through a hydrogen-bonding network, the bulk solvent mostly interacts electrostatically as a continuous dielectric medium. This triggered the development of methods treating the bulk solvent implicitly. Such implicit solvent methods have the further advantage of providing a statistically averaged effect of the bulk solvent, which is beneficial for AIMD simulations that consider only a relatively small amount of bulk solvent and rather short sampling times. The self-consistent reaction field (SCRF) in a sphere, which goes back to the early work of Onsager,371 is possibly the simplest implicit solvent method implemented in CP2K.372,373

More recent continuum solvation models, such as the conductor-like screening model (COSMO)374 and the polarizable continuum model (PCM),375,376 also take into account the explicit shape of the solute. The solute–solvent interface is defined as the surface of a cavity around the solute, constructed from interlocking spheres centered on the atoms or atomic groups composing the solute.377 This introduces a discontinuity of the dielectric function at the solute–solvent interface and thus causes non-continuous atomic forces, which may impair the convergence of structural relaxations as well as the energy conservation in AIMD runs. To overcome these problems, Fattebert and Gygi proposed a smoothed, self-consistent dielectric model function of the electronic density $\rho_{\text{elec}}(\mathbf{r})$,378,379

$$\epsilon[\rho_{\text{elec}}(\mathbf{r})] = 1 + \frac{\epsilon_0 - 1}{2}\left[1 + \frac{1 - (\rho_{\text{elec}}(\mathbf{r})/\rho_0)^{2\beta}}{1 + (\rho_{\text{elec}}(\mathbf{r})/\rho_0)^{2\beta}}\right],$$
(178)

with the model parameters β and ρ0, which fulfills the asymptotic behavior

$$\epsilon(\mathbf{r}) = \epsilon[\rho_{\text{elec}}(\mathbf{r})] \rightarrow \begin{cases} 1, & \text{large } \rho_{\text{elec}}(\mathbf{r}) \\ \epsilon_0, & \rho_{\text{elec}}(\mathbf{r}) \rightarrow 0. \end{cases}$$
(179)

Since the dielectric function depends on the actual electronic density $\rho_{\text{elec}}(\mathbf{r})$, the method requires a self-consistent iteration of the polarization charge density, which spreads across the solute–solvent interface, in each SCF step. This may cause a non-negligible computational overhead, depending on the convergence behavior.
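
The model function of Eq. (178) and its limits in Eq. (179) are easy to verify numerically; the parameter values below are illustrative, water-like choices, not CP2K defaults:

```python
# Smoothed dielectric model function of Eq. (178): epsilon interpolates
# smoothly between 1 (inside the solute, large density) and eps0 (bulk
# solvent, vanishing density), as stated by Eq. (179).

def epsilon(rho, eps0=78.36, rho0=4e-4, beta=1.3):
    x = (rho / rho0) ** (2.0 * beta)
    return 1.0 + 0.5 * (eps0 - 1.0) * (1.0 + (1.0 - x) / (1.0 + x))

print(epsilon(1.0))    # ~1: high-density (solute) region
print(epsilon(1e-12))  # ~78.36: vanishing density (bulk solvent)
```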

More recently, Andreussi et al. proposed, in the framework of a revised self-consistent continuum solvation (SCCS) model, an improved piecewise-defined dielectric model function,380

$$\epsilon[\rho_{\text{elec}}(\mathbf{r})] = \begin{cases} 1, & \rho_{\text{elec}}(\mathbf{r}) > \rho_{\max} \\ \exp(t), & \rho_{\min} \le \rho_{\text{elec}}(\mathbf{r}) \le \rho_{\max} \\ \epsilon_0, & \rho_{\text{elec}}(\mathbf{r}) < \rho_{\min}, \end{cases}$$
(180)

with $t = t(\ln \rho_{\text{elec}}(\mathbf{r}))$, which employs a more elaborate switching function for the transition from the solute to the solvent region using the smooth function

t(x)=ln