Over the last few years, extraordinary advances in experimental and theoretical tools have allowed us to monitor and control matter at short time and atomic scales with a high degree of precision. An appealing and challenging route toward engineering materials with tailored properties is to find ways to design or selectively manipulate materials, especially at the quantum level. To this end, having a state-of-the-art ab initio computer simulation tool that enables a reliable and accurate simulation of light-induced changes in the physical and chemical properties of complex systems is of utmost importance. The first-principles, real-space-based Octopus project was born with that idea in mind, i.e., to provide a unique framework that allows us to describe non-equilibrium phenomena in molecular complexes, low dimensional materials, and extended systems by accounting for electronic, ionic, and photon quantum mechanical effects within a generalized time-dependent density functional theory. This article aims to present the new features that have been implemented over the last few years, including technical developments related to performance and massive parallelism. We also describe the major theoretical developments to address ultrafast light-driven processes, such as the new theoretical framework of quantum-electrodynamical density-functional theory for the description of novel light–matter hybrid states. Those advances, and others being released soon as part of the Octopus package, will allow the scientific community to simulate and characterize spatial and time-resolved spectroscopies, ultrafast phenomena in molecules and materials, and new emergent states of matter (quantum-electrodynamical materials).

It is a general challenge in the electronic structure community to develop accurate and efficient methods for modeling materials of ever increasing complexity in order to predict their properties. In this respect, time-dependent density functional theory (TDDFT) and related methods have become a natural choice for modeling systems both in and out of equilibrium.

During the last few years, novel directions of research have emerged in chemistry, physics, and materials science that require new simulation tools to face the challenges posed by recent experimental advances and emerging phenomena. Among these new fields of research, one can cite the strong coupling between light and matter (including materials embedded in cavities), new states of matter (hidden phases and topological solids and molecules), and strong-field dynamics in periodic systems. Whereas the strong-field dynamics of atoms and molecules is now well understood, strong-field dynamics in solids is an active field of research. Real-time (rt) TDDFT1,2 represents a natural tool to study highly non-linear phenomena in solids and low-dimensional materials, and the development of efficient numerical methods to perform real-time TDDFT in such periodic or semi-periodic systems is crucial to explore these phenomena (including the quantum nature of light and phonons). Indeed, it allows us to describe highly nonlinear processes without having to resort to perturbation theory on top of equilibrium DFT calculations.

Recent years have seen tremendous experimental progress in the field of strong light–matter interactions,3–5 where the strong coupling of light to chemical systems, quantum materials, or nanoplasmonic systems, among others, has been demonstrated. In this regime, light and matter meet on the same footing and the electron–photon interaction has to be explicitly considered.4–9 A novel theoretical approach accelerating this field is quantum-electrodynamical density-functional theory (QEDFT),4,5,10–14 which complements TDDFT with the photonic degrees of freedom and provides reliable and predictive simulations in this emerging field of research.

In this paper, we explore the recent advances in the Octopus project.15–19 We focus our attention on the recently added features and, particularly, on the ones that have not been described in the previous papers.17–19 These new features include the implementation of new levels of self-consistent and microscopic couplings of light and matter, the treatment of solvent effects through the polarizable continuum model (PCM), the implementation of various methods to treat van der Waals (vdW) interactions, new methods to calculate magnons, conductivities, and photoelectron spectra from real-time TDDFT, and the calculation of orbital magneto-optical responses. Advances in numerical algorithms and methods, such as new propagators and the use of iterative eigensolvers in the context of reduced density matrix (RDM) functional theory, are also discussed. Finally, recent improvements in the treatment of periodic systems, as well as more technical code improvements, are also presented. For a more detailed explanation of how to use these newly introduced features in practice, we refer the readers to the Octopus webpage,20 as new tutorials and examples are regularly being added to it.

This paper is organized as follows: First, we present in Secs. II–XII the new implementations of physical theories and algorithms that allow us to deal with non-equilibrium phenomena in materials and nanostructures. This is followed by Secs. XIII–XVI dealing with technical developments that are fundamental in improving the code performance and the stability of the algorithms. Finally, we draw our conclusions in Sec. XVII. Unless otherwise stated, atomic units are used throughout the paper.

In most cases when light–matter interactions are considered, a decoupling of light and matter is performed at the outset. Either the electromagnetic fields are prescribed and then the properties of the matter subsystem are determined, as frequently found in, e.g., quantum chemistry or solid-state physics, or the properties of matter are prescribed and then the properties of the photon subsystem are determined, as done in, e.g., quantum optics or photonics.12,21

The microscopic interaction of light and matter in Octopus has followed this decoupling strategy and has so far been treated only in the forward-coupling direction. In this approximation, an external classical laser pulse or kick is prescribed and the response of the system is computed by evolving the Kohn–Sham orbitals.22 The back-action of the matter subsystem on the laser pulse, the subsequent effect of this modified pulse on the matter, and so on are ignored. This forward-coupling approximation is highly accurate when the total current generated in the system is comparatively small, such as in atoms or small molecules. Small refers here to currents that induce transverse electric fields that are about an order of magnitude smaller than the longitudinal electric field that originates from the Hartree potential of the system. This has been exploited in Octopus and other codes, and many different types of spectroscopies have been computed successfully in the past using this approach.

In contrast, in classical electromagnetic modeling, the opposite view is taken. Here, the material properties are prescribed and the resulting electromagnetic fields are computed. In practice, the material properties are routinely approximated by a local continuum model or by the dielectric function of the system (such as the Debye, Lorentz, or Drude models) and then Maxwell’s equations are solved for linear dielectric media arranged in appropriate geometries.23 

It is clear that both perspectives, a focus on matter dynamics alone or a focus on electromagnetic field dynamics alone, break down when the total induced currents become large and when electromagnetic near-field effects on the scale of the material system are no longer negligible. Prime examples of such cases are nanoplasmonic systems, surface plasmon-polaritons, or tip-enhanced spectroscopies. In these cases, the back-action of the material response on the system itself has to be taken into account, leading to screening and retardation effects. The proper theoretical framework, which encompasses all these effects, is quantum electrodynamics. Starting with a generalized Pauli–Fierz field theory for the combined system of electrons, nuclei, and photons, we have recently derived different levels of self-consistent and microscopic couplings of light and matter. In the classical limit, this results in a coupled set of Ehrenfest–Maxwell–Pauli–Kohn–Sham equations.21 To implement these equations, we added a Maxwell solver to the Octopus code, which we couple self-consistently to the dynamics of the electrons and nuclei. In the following, we briefly summarize the basic ingredients for this implementation and show an example of self-consistent light–matter interactions for a nanoplasmonic system. Further details of the implementation and nano-optical applications can be found in Ref. 21.

Since Octopus has, over the years, been heavily optimized to solve time-dependent Schrödinger and Kohn–Sham equations, we have exploited the fact that Maxwell's equations can be formulated in Schrödinger form24 to benefit from the efficient time evolution in the code. This reformulation is based on the Riemann–Silberstein vector,25 which is a combination of the electric field E(r, t) and the magnetic field B(r, t),

F±(r, t) = √(ϵ0/2) [E(r, t) ± i c B(r, t)].   (1)

The two signs of the imaginary part of the Riemann–Silberstein vector correspond to the two helicities. The reformulation of Maxwell's equations in Schrödinger form is purely algebraic and starts out with the microscopic Maxwell's equations

∇ · E(r, t) = ρ(r, t)/ϵ0,   ∇ · B(r, t) = 0,   (2)
∇ × E(r, t) = −∂B(r, t)/∂t,   ∇ × B(r, t) = μ0 J(r, t) + (1/c²) ∂E(r, t)/∂t,   (3)

where E and B are the classical electric and magnetic fields, respectively, ρ and J are the charge and current densities, ϵ0 and μ0 are the vacuum permittivity and permeability, respectively, and c = (ϵ0μ0)^−1/2 is the speed of light. Using the Riemann–Silberstein vector, the electric and magnetic Gauss laws may now be combined,

∇ · F±(r, t) = ρ(r, t)/√(2ϵ0),   (4)

and likewise, the Faraday and Ampere laws can be combined into one evolution equation for the Riemann–Silberstein vector,

i ∂F±(r, t)/∂t = ±c [S · (−i∇)] F±(r, t) − (i/√(2ϵ0)) J(r, t).   (5)

Here, S = (Sx, Sy, Sz) denotes the vector of spin-1 matrices

Sx = [ 0 0 0 ; 0 0 −i ; 0 i 0 ],   Sy = [ 0 0 i ; 0 0 0 ; −i 0 0 ],   Sz = [ 0 −i 0 ; i 0 0 ; 0 0 0 ],   (6)

which are analogous to the Pauli matrices and reflect the spin-1 character of the photon. Having cast Maxwell's equations as an inhomogeneous Schrödinger equation, it is now straightforward to use the time-evolution algorithms in Octopus to time-evolve the Riemann–Silberstein vector. The only difference from the matter propagation is that we are now dealing with the “Maxwell Hamiltonian”

(7)

which acts upon the six orbitals of the Riemann–Silberstein vector corresponding to the three components each of the electric and magnetic field vectors. As in the matter case in Octopus, the discretization of the gradient in the Maxwell Hamiltonian is performed with finite-difference stencils, and the domain parallelization of Octopus can be used seamlessly for the Maxwell case as well. The difference from finite-difference time-domain (FDTD) codes based on the Yee algorithm is that we do not employ two shifted grids for the electric and magnetic fields,23 but rather a single grid for the Riemann–Silberstein vector. This simplifies the coupling to matter and allows us to use higher-order finite-difference discretizations for the gradient. Since the spatial discretization is connected to the temporal discretization through the Courant condition, this, in turn, allows us to use larger time steps. Furthermore, from our experience, a unified grid also improves the stability compared to FDTD.
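To make the single-grid, finite-difference discretization and the Courant-limited time step concrete, the following Python sketch (not the Octopus implementation) applies a fourth-order centered stencil to compute the curl entering the propagation of the Riemann–Silberstein vector and derives an explicit stability bound for the time step. Grid size, spacing, boundary treatment, and safety factor are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: 4th-order centered finite-difference curl on a single
# uniform grid for the source-free Riemann-Silberstein propagation, plus a
# Courant-limited time step.

c = 137.036   # speed of light in atomic units
h = 0.5       # grid spacing (bohr), illustrative
n = 32        # grid points per direction, illustrative

# 4th-order centered first-derivative stencil
coeff = np.array([1.0, -8.0, 0.0, 8.0, -1.0]) / (12.0 * h)
offsets = [-2, -1, 0, 1, 2]

def ddx(f, axis):
    """First derivative along `axis` (periodic wrap for simplicity)."""
    return sum(w * np.roll(f, -o, axis=axis) for w, o in zip(coeff, offsets))

def curl(F):
    """Curl of a complex vector field F of shape (3, n, n, n)."""
    return np.stack([ddx(F[2], 1) - ddx(F[1], 2),
                     ddx(F[0], 2) - ddx(F[2], 0),
                     ddx(F[1], 0) - ddx(F[0], 1)])

# Courant-type bound for an explicit scheme in 3D: dt <= h / (sqrt(3) c).
# A higher-order stencil allows a coarser spacing h and hence a larger dt,
# which is the point made in the text; the 0.5 safety factor is arbitrary.
dt = 0.5 * h / (np.sqrt(3.0) * c)

# One explicit update of the positive-helicity component in vacuum,
# dF/dt = -i c curl(F); real simulations use higher-order propagators.
F = np.zeros((3, n, n, n), dtype=complex)  # ...initialize with an incoming wave...
F = F + dt * (-1j * c * curl(F))
```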

Instead of using the constitutive relations, we couple Maxwell's equations directly to the microscopic current density of the matter subsystem, consisting of the usual paramagnetic current term, the diamagnetic current term, and the magnetization current term. For the coupling of the electromagnetic fields to the matter subsystem, we rely on the Power–Zienau–Woolley transformation,26,27 which leads to a multipole expansion. We have implemented the first two orders of this expansion: the dipole approximation in lowest order and electric-quadrupole and magnetic-dipole coupling in the next order. In addition, we are currently working on implementing the full minimal coupling with a fully position-dependent vector potential.

The time evolution of the Kohn–Sham orbitals and the Maxwell fields is performed side-by-side, and the two subsystems are coupled self-consistently in each time step, as illustrated in Fig. 1. To propagate different subsystems with different Hamiltonians and different sets of orbitals simultaneously, we implemented a multi-system framework in Octopus (for more details, refer to Subsection XVI C).
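As a schematic illustration of the self-consistent forward–backward coupling of Fig. 1, the Python sketch below shows one predictor–corrector time step for two generically coupled subsystems. The propagators, the current evaluation, the tolerance, and the iteration cap are placeholders, not the actual Octopus routines.

```python
import numpy as np

def coupled_step(psi, F, dt, propagate_matter, propagate_maxwell,
                 current_from, tol=1e-8, max_iter=20):
    """One self-consistent forward-backward coupled time step (sketch).

    psi: matter wavefunctions; F: electromagnetic field (Riemann-Silberstein).
    propagate_matter(psi, F, dt) and propagate_maxwell(F, J, dt) are
    placeholder propagators; current_from(psi) returns the matter current.
    """
    J_old = current_from(psi)
    F_new = propagate_maxwell(F, J_old, dt)          # predictor for the fields
    for _ in range(max_iter):
        psi_new = propagate_matter(psi, F_new, dt)   # forward coupling
        J_new = current_from(psi_new)
        F_trial = propagate_maxwell(F, J_new, dt)    # backward coupling
        if np.linalg.norm(F_trial - F_new) < tol:    # self-consistency reached
            return psi_new, F_trial
        F_new = F_trial                              # corrector iteration
    return psi_new, F_new
```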

FIG. 1.

(Left) The standard forward-coupling approximation: the electromagnetic fields (in blue) propagate freely and only influence the propagation of the matter (in red). The back reaction of the matter currents on the electromagnetic fields is neglected. (Right) Illustration of a fully self-consistent predictor-corrector scheme for a coupled Maxwell–Pauli–Kohn–Sham time step. As before, the electromagnetic fields influence the propagation of the matter (forward coupling). However, in addition, here the currents from the matter propagation also influence the propagation of the electromagnetic fields (backward coupling). A given time step for the matter wavefunctions and the electromagnetic fields is repeated until self-consistency is found (self-consistent forward–backward coupling).


As for the matter propagation, outgoing electromagnetic waves that reach the boundary of the simulation box have to be absorbed to avoid artificial reflections and backscattering. Our first attempt was to use the mask functions that are also used for the matter propagation in Octopus. However, in the electromagnetic case, the mask absorption of outgoing waves turned out not to be efficient enough, so we implemented a perfectly matched layer (PML)23 for the Maxwell propagation.

When considering incoming electromagnetic fields with optical wavelengths, the coupling to atomistic or nano-scale systems leads to a multi-scale problem. The optical wavelength of the radiation is in this case much larger than the de Broglie wavelength of the matter. Likewise, the electromagnetic waves travel at the speed of light, which requires sub-attosecond time steps. We have, therefore, implemented different multi-scale couplings in space and time. For example, the Maxwell simulation box can be on the same scale as the matter box. In this case, the electromagnetic waves are represented as incoming analytical time-dependent boundary conditions and are propagated numerically inside the simulation box. Alternatively, the Maxwell simulation box can be much larger than the matter box to fully encompass laser pulses with optical wavelengths. In this case, prolongations and interpolations have to be used, similar to those employed in multi-grid methods. Since the electronic and nuclear motion is much slower than the time evolution of the electromagnetic waves, we also implemented a multi-scale approach for the real-time propagation. The Riemann–Silberstein vector is propagated with the electronic current frozen at the last point of interaction for many intermediate time steps before a coupling to the matter subsystem takes place. The number of intermediate steps is a convergence parameter and depends on the physical situation at hand.
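A minimal sketch of this multi-scale time stepping, assuming generic placeholder propagators and treating the number of intermediate Maxwell sub-steps as an explicit convergence parameter, could look as follows.

```python
def multiscale_propagation(psi, F, t_final, dt_matter, n_inner,
                           propagate_matter, propagate_maxwell, current_from):
    """Sketch of multi-scale coupling: the Maxwell field is advanced with
    n_inner sub-steps (frozen matter current) per matter time step.
    All propagators and the current evaluation are placeholders."""
    dt_maxwell = dt_matter / n_inner
    t = 0.0
    while t < t_final:
        J = current_from(psi)                      # frozen current for this window
        for _ in range(n_inner):                   # fast electromagnetic sub-steps
            F = propagate_maxwell(F, J, dt_maxwell)
        psi = propagate_matter(psi, F, dt_matter)  # slower matter step
        t += dt_matter
    return psi, F
```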

Since we now include the description of classical electromagnetic fields explicitly in our real-time simulations, we have direct access to the outgoing electromagnetic radiation. This allows us to define electromagnetic detectors at the box boundaries, which accumulate the outgoing electromagnetic waves. We implemented such electromagnetic detectors in Octopus, which makes it possible to run simulations in close analogy with experiments and to directly observe the outgoing radiation. For example, it is no longer necessary to Fourier transform the time signal of the matter dipole to obtain optical spectra; instead, we have access to the spectrum directly on the Maxwell grid.
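For illustration, the post-processing of such a detector signal amounts to Fourier transforming the field recorded near the box boundary; a hedged Python sketch (with an illustrative window function and normalization) is given below.

```python
import numpy as np

def detector_spectrum(field_at_detector, dt):
    """Spectrum from a recorded time series of the outgoing field (sketch).

    field_at_detector: 1D array of the field sampled at a detector point
    every dt (atomic units assumed). Returns angular frequencies and the
    spectral intensity |E(omega)|^2.
    """
    signal = np.asarray(field_at_detector, dtype=float)
    signal = signal * np.hanning(len(signal))           # simple window, illustrative
    spectrum = np.fft.rfft(signal) * dt
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(signal), d=dt)
    return omega, np.abs(spectrum) ** 2
```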

As an example of a coupled Ehrenfest–Maxwell–Pauli–Kohn–Sham propagation with Octopus, we selected a nano-optical application. We consider in this example two almost spherical sodium nanoparticles with 297 sodium atoms each, which are arranged in a dimer configuration. This system interacts with an incoming laser pulse that excites either the internal dipole or quadrupole plasmon motion of the dimer. In Fig. 2, we show the resulting electromagnetic field enhancements for different levels of light–matter coupling. In panels (a) and (b), we show the temporal profile of the incoming laser pulse and the resulting current density at the center point between the two nanoparticles. The field enhancement can be seen in panels (c)–(e). Including a self-consistent back reaction in the light–matter coupling, the field enhancement is reduced at the center point between the two nanoparticles, as shown in panel (c), while far away from the dimer, as shown in panels (d) and (e), the field enhancement is larger than in the forward-coupled case. Furthermore, frequency shifts can also be observed, which are more pronounced in the near-field. We found that the field enhancement is also sensitive to which coupling terms of the multipole expansion are included and that the quantitative effect of switching the exchange–correlation functional from the local-density approximation (LDA) to the Perdew–Burke–Ernzerhof (PBE) functional is, in this case, smaller than that of including the back reaction in the light–matter coupling (cf. Ref. 21).

FIG. 2.

Electric field and current density in the z-direction for a dimer of sodium nanoparticles. The centers of the two nanoparticles are located along the z-axis, and the distance between the two effective spheres of the nanoparticles is 0.5 nm. The electric field values are calculated at two different points: the center point r_cp between the two nanoparticles, located at the origin, and an off-center point r_ocp,x located along the x-axis at a distance of 1.957 nm from r_cp. Panel (a) illustrates the incident cosinusoidal laser pulse with frequency ω_1 = 3.05 eV (0.112 a.u.), wavelength λ_1 = 406.5 nm (7681.84 a.u.), and amplitude E_z^0 = 5.142 × 10^7 V/m (10^−4 a.u.), which drives the system. Panel (b) displays the current density at r_cp. In panels (c)–(e), we show the electric field enhancement at r_cp, the electric field enhancement at r_ocp,x, and the average of the electric field over the detector surface close to the box boundary, respectively. The curve in bright gray in panel (c) was added to simplify the comparison and is identical to the laser pulse in panel (a). The period T_1 = 1.36 fs corresponding to the laser frequency ω_1 is indicated with gray vertical lines.


To conclude, with our new efficient implementation of the coupled Ehrenfest–Maxwell–Pauli–Kohn–Sham equations in Octopus, we now have a very versatile tool that allows us to compute fully self-consistent forward–backward light–matter coupling in real time and real space for a vast set of applications and in close analogy to experiments. As the coupling to Maxwell's equations of motion (EOM) represents the classical limit of the light–matter interaction, this development constitutes the classical limit of QEDFT.

The nascent field of strong light–matter interaction has expanded over the past decades from small atomic systems to chemistry3 and materials science.28 This development necessitates predictive first-principles methods capable of describing light and matter on the same footing. We introduced, for the first time, a time-dependent density-functional theory for quantum electrodynamics (QEDFT)4,10 to treat weak and strong light–matter interactions ab initio, with applications to chemistry and materials; it provides a unique framework to explore, predict, and control new states of matter out of equilibrium. This generalization of the time-dependent density-functional method allows us to explore the effects of dressing electronic states with photons while retaining the electronic properties of real materials.

A general non-relativistic Hamiltonian for light–matter systems, treating N interacting electrons coupled to N_p photonic modes in the so-called length gauge and within the long-wavelength (dipole) approximation,13 reads as follows:

(8)

Here, the first two terms on the right-hand side correspond to the usual electronic many-body Hamiltonian, while the last term describes the photon modes; each photon mode α is characterized by its elongation q_α, its frequency ω_α, and the electron–photon coupling strength vector λ_α, which includes the polarization of the photon mode and introduces the coupling to the total dipole Σ_{k=1}^{N} r_k of the electronic system. For a more detailed discussion of the different regimes of weak/strong/ultra-strong light–matter coupling, we refer to Ref. 6. The external variable for the photon system is the time-dependent current j_ext^(α)(t).

QEDFT4,10 is structurally similar to time-dependent density-functional theory in that it is based on a one-to-one correspondence between internal and external variables. If photons are also considered, the set of internal variables has to be expanded. In the framework of Eq. (8), the internal variables become the density n(r, t) and the mode-resolved contributions to the electric displacement field q_α(t). By introducing and exploiting the bijective mapping between these internal variables and the external variables [v_ext(r, t) and j_ext^(α)(t)], the auxiliary Kohn–Sham system is described by the electronic Kohn–Sham equations as well as Maxwell's equations,5 with no exchange–correlation contribution in the photon subsystem [j_xc^(α)(t) = 0]. This reformulation subsumes the “quantumness” of the light–matter interaction solely into the local exchange–correlation potential, which now features a component due to the electron–photon interactions, in addition to the part due to the electron–electron interaction, i.e., v_xc^σ(r, t) = v_xc^(ee),σ(r, t) + v_xc^(ep),σ(r, t).4,10,12 Extending the coupled Maxwell–Kohn–Sham equations to consider quantum photons is, thus, condensed into the calculation of the local exchange–correlation potential.

In practice, QEDFT requires the construction of an additional exchange–correlation potential that describes the electron–photon interaction. First attempts to construct this potential were based on many-body perturbation theory via the Sham–Schlüter equation29 within the exact exchange (EXX) approximation13,30 by utilizing the optimized-effective potential (OEP) method.31 

The framework of the optimized effective potential (OEP) equation for electron–photon systems was introduced in Ref. 30 for the static and time-dependent regimes. Solving the time-dependent OEP equation is unfortunately computationally challenging. Applications to electronic systems32–34 have so far been realized only for small and low-dimensional systems, while their utilization for interacting electron–photon systems has remained restricted to models.30,35,36 This section provides a brief introduction to the implementation that allows us to solve the computationally more tractable static OEP equation.

As introduced in Ref. 30, the OEP photon energy depends on both the occupied and unoccupied electronic orbitals. Alternatively, the OEP photon energy can also be formulated using occupied orbitals and orbital shifts only.13 The full energy expression is given by

E_xc = E_xc^(ee) + Σ_α E_x^(α),   (9)

where E_xc^(ee) describes the electronic exchange–correlation energy and E_x^(α) describes the exchange energy due to the interaction of the electrons with the photon mode α. Avoiding unoccupied orbitals is computationally much more favorable for larger systems, and the photon-induced exchange energy can correspondingly be expressed as a sum over the N_σ occupied orbitals of spin channel σ,

(10)

where ω_α describes the αth mode of the electromagnetic field and d̂_α = λ_α · r describes the dipole operator together with the electron–photon coupling strength. We can now reformulate the problem in terms of two electron–photon orbital shifts. The Kohn–Sham orbitals φ contribute to both electron–photon orbital shifts Φ^(1)_{iσ,α} and Φ^(2)_{iσ,α}, which can be calculated using Sternheimer equations.13 The first electron–photon orbital shift can be obtained explicitly by the solution of a linear Sternheimer equation

(11)

with the matrix element d^(α)_{ijσ} = ⟨φ_{iσ}|d̂_α|φ_{jσ}⟩. In addition, the second electron–photon orbital shift Φ^(2)_{iσ,α}(r) can be defined explicitly as follows:

(12)

From Eq. (10), we can now deduce the potential using

(13)

After these reformulations, we find the final OEP equation, including electron–electron as well as electron–photon effects,

(14)

where the inhomogeneity Λ(r) is given by

In Eq. (14), we defined a third orbital shift, the exchange–correlation orbital shift, which will be used to obtain the corresponding exchange–correlation potential and is defined along the lines of the orbital shift usually used in OEP calculations.31 We can also obtain ψ*_{iσ}(r) using a Sternheimer equation

(15)

where M*_{iσ}(r) now consists of the electron–photon orbital shifts and the Kohn–Sham orbitals, as described in Ref. 13. Accordingly, we define this quantity as

(16)

where the effects of the electron–electron interaction are included in the quantity u_x(r). For instance, in exchange-only calculations, this quantity is defined as u_{x,iσ}(r) = [1/φ*_{iσ}(r)] δE_x^(ee)[{φ_{jτ}}]/δφ_{iσ}(r), where E_x^(ee) is the usual Fock exchange energy.

Equation (15) has to be solved self-consistently with Eq. (11). By this reformulation, we have replaced the problem of solving the OEP equation using all unoccupied states by the problem of solving N_p + 1 Sternheimer equations that involve only the occupied orbitals. In this way, the formulation of the problem becomes similar to the one of Ref. 37 for electrons only.

For the practical implementation, we reformulate the OEP equation in the following form, as is commonly done to construct the electronic OEP:31

(17)

and update the potential with

(18)

The quantity S^σ(r) serves as a measure of convergence, since it vanishes at the solution point [compare Eqs. (17) and (14)]. For the function c(r), there are different possibilities, such as using a constant or the inverse of the electron density, as done in Ref. 37. Other methods are also possible, such as the Barzilai–Borwein (BB) method.38 We found stable algorithms when using either a constant or the Barzilai–Borwein method.
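As a rough illustration of this update scheme, the following Python sketch performs the step v → v + c S with either a fixed constant or a Barzilai–Borwein step size computed from the previous iterate. The routine that evaluates S(r) is a placeholder, and the sign and step conventions are illustrative assumptions rather than the Octopus defaults.

```python
import numpy as np

def oep_update(v, S, v_prev=None, S_prev=None, c0=20.0):
    """One update v -> v + c*S of the exchange-correlation potential (sketch).

    Uses a constant step c0 on the first iteration and a Barzilai-Borwein
    step size afterwards. All arrays hold values on the real-space grid.
    Conventions are illustrative, not the Octopus defaults.
    """
    if v_prev is None or S_prev is None:
        c = c0
    else:
        dv = v - v_prev
        dS = S - S_prev
        # BB1 step length; the minus sign treats -S as a gradient direction
        denom = np.vdot(dv, -dS).real
        c = np.vdot(dv, dv).real / denom if abs(denom) > 1e-30 else c0
    return v + c * S

# Illustrative self-consistency skeleton (compute_S is a placeholder that
# evaluates the left-hand side of the reformulated OEP equation):
#   S = compute_S(v)
#   v_new = oep_update(v, S, v_prev, S_prev)
#   v_prev, S_prev, v = v, S, v_new
```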

While we will show the computational feasibility of this approach in the following, a simplified solution is often beneficial as a starting point for the self-consistency procedure. Such a simplified approximation can be deduced by reformulating Eq. (15) into the following equivalent form:

(19)

and subsequently assuming (ĥ_sσ(r) − ε_{iσ}) ψ*_{iσ}(r) = 0 to start the iterative process. In situations where Λ(r) = 0, such as pure electronic exact exchange, this approximation is exact for a single electron and is referred to as the Krieger–Li–Iafrate (KLI) approximation.13,37,39–41 By multiplying Eq. (19) by |φ_{iσ}(r)|² and integrating over the spatial coordinates, we arrive at a linear equation that, in turn, can be solved for the approximate potential. This approximation scheme has proven to often deliver sufficiently accurate results for electronic-structure calculations with significantly lower computational effort and reliable stability. As this approximation can be seen as a diagonal approximation to the response function, it unavoidably fails to accurately describe polarization features. In the context of light–matter correlated ground states, this leads to a slight imbalance when approximating components involving photonic excitations [introduced by Φ^(1)] and the self-polarization interaction [introduced by Φ^(2)].13,30,42,43 This results in a violation of translational invariance and introduces an artificial dependence on the permanent dipole. When performing KLI calculations including light–matter interaction, we thus suggest moving the set of coordinates into the electronic center-of-charge frame instead of the center-of-mass frame. To reduce the effect of this dependence during a self-consistent calculation, the optional input parameter KLIpt_coc has been introduced in the code. When activated, this option defines the dipole operator d̂_α with respect to the electronic center of charge, thus improving the stability of the algorithm.
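To illustrate the structure of such a KLI-type construction, the sketch below implements the standard exchange-only electronic KLI equations (Slater potential plus orbital constants obtained from a small linear system). The electron–photon inhomogeneity Λ(r) discussed above is omitted, the grid volume element is taken as unity, and the orbital potentials u_i(r) are assumed to be precomputed inputs.

```python
import numpy as np

def kli_potential(orbitals, u):
    """Standard exchange-only KLI construction (electronic case, sketch).

    orbitals: array (N_occ, n_grid) of occupied real orbitals on a grid
              (normalized with a unit grid volume element for brevity).
    u:        array (N_occ, n_grid) of orbital-dependent potentials u_i(r).
    Returns the KLI potential on the grid. The electron-photon inhomogeneity
    discussed in the text is not included here.
    """
    dens_i = orbitals ** 2                      # |phi_i(r)|^2
    n = np.maximum(dens_i.sum(axis=0), 1e-14)   # total density, regularized

    slater = (dens_i * u).sum(axis=0) / n       # Slater (orbital-averaged) part

    ubar = (dens_i * u).sum(axis=1)             # <phi_i| u_i |phi_i>
    sbar = (dens_i * slater).sum(axis=1)        # <phi_i| v_Slater |phi_i>
    M = dens_i @ (dens_i / n).T                 # M_ji = int |phi_j|^2 |phi_i|^2 / n

    # Solve (1 - M) x = sbar - ubar for the orbital constants x_i; the highest
    # occupied orbital is conventionally fixed to x = 0 to remove the freedom
    # of an overall constant shift.
    N_occ = orbitals.shape[0]
    x = np.zeros(N_occ)
    if N_occ > 1:
        A = np.eye(N_occ) - M
        x[:-1] = np.linalg.solve(A[:-1, :-1], (sbar - ubar)[:-1])

    return slater + (dens_i * x[:, None]).sum(axis=0) / n
```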

Finally, in Fig. 3, we show the capabilities of the new implementation, where we calculate two sodium dimers coupled to the vacuum field of a cavity in the weak-coupling regime. Figure 3(a) shows the electron density, and Fig. 3(b) shows the comparison of the OEP and KLI results. As shown in Ref. 13, in the weak-coupling regime, the KLI is close to the OEP result. In Fig. 3(c), we show the convergence behavior when using a constant in Eq. (18) and when using the Barzilai–Borwein method.38

FIG. 3.

Ground-state density of two sodium dimers affected by the vacuum field of a cavity: In (a), we show the electron density; in (b), we show the difference of the electron density inside and outside the cavity for the OEP and the KLI approximations (see main text for details). We find for the exchange energy E_x^(α) = 6.52 meV and the number of photons in the correlated light–matter ground state n_pt = ⟨â†_α â_α⟩ = 2.03 × 10^−3 for the OEP case, and E_x^(α) = 6.67 meV and n_pt = 2.22 × 10^−3 for KLI. Panel (c) shows the convergence of |S(r)|² as defined by Eq. (17) using a constant with c = 20 and using the Barzilai–Borwein (BB) scheme. The simulation is set up as described in Refs. 12 and 13, with ℏω_α = 2.19 eV, λ_α = 2.95 eV^1/2 nm^−1, and λ_α = λ_α e_y, and the real-space grid spans 31.75 × 31.75 × 31.75 Å^3 with a grid spacing of 0.265 Å.


For a more detailed analysis of how the OEP equation also performs in the strong and ultra-strong coupling limits, we refer to Refs. 13 and 30, where we find accurate results for few-level systems and asymmetric external potentials up to high electron–photon coupling strengths.

Extensions of QEDFT to the regime of vibrational strong coupling,44 the linear-response regime,45 and multitrajectory methods that capture quantum fluctuations46 are currently work in progress and will further strengthen the capabilities of the Octopus code for the real-space description of strongly coupled light–matter systems. To describe the effects in the ultra-strong coupling regime, one can use an alternative method that is presented in Sec. IV.

The accurate description of the (ultra-)strong coupling regime of light–matter systems is a formidable task. In many cases, the known functionals for QEDFT (see Sec. III) are inaccurate, and for complex electronic systems, typical few-level approximations become unreliable.42,47 In this section, we present the Octopus implementation of an alternative real-space ab initio method for coupled light–matter systems. Dressed Reduced Density Matrix Functional Theory (dressed RDMFT)48 extends standard electronic RDMFT to coupled light–matter systems in much the same way that QEDFT extends DFT. First tests on simple model systems suggest that the dressed RDMFT remains accurate from the weak to the ultra-strong coupling regime. A thorough introduction to the theory, together with examples and convergence studies, can be found in Ref. 49.

This approach allows for the description of an interacting N-electron system coupled to one photonic mode within the dipole approximation. The respective Hamiltonian is given in Eq. (8) of Sec. III. Note that we set jext = 0 throughout this section and that the ground state Ψ of Eq. (8) depends on 4N + 1 coordinates, i.e., Ψ = Ψ(r1, σ1, …, rN, σN, p), where {σi} denotes the spin degrees of freedom and p is the elongation of the photon mode.

Within the dressed RDMFT, the original Hamiltonian (8) of N electrons in d dimensions and one mode is replaced by an extended auxiliary Hamiltonian of N dressed fermions in d + 1 dimensions with coordinates z = (r, q) ∈ R^(d+1). This auxiliary Hamiltonian reads

(20)

and gives access to the same physics (see Ref. 49, Sec. 4). For a d = 1 matter subsystem, the operators introduced in Eq. (20) read as follows: the dressed Laplacian Δ′ = ∂²/∂x² + ∂²/∂q², the dressed local potential

(21)

and the dressed interaction kernel

(22)

The ground state Ψ′(z_1, σ_1, …, z_N, σ_N) = Ψ(x_1, σ_1, …, x_N, σ_N) ⊗ χ(p_2, …, p_N) of Ĥ′ is a product of the original physical ground state Ψ and an auxiliary state χ, which, in turn, is the product of N − 1 harmonic-oscillator ground states. The auxiliary Hamiltonian (20) contains only one-body and two-body terms in terms of the dressed coordinates, which allows us, in principle, to use any standard electronic-structure method to solve it (see also Ref. 49, Secs. 4 and 5). We use this construction to develop the dressed RDMFT and dressed Hartree–Fock (HF). For that, we define the dressed (spin-summed) first-order reduced density matrix (1RDM)

(23)

To apply RDMFT to the auxiliary system, we have to replace the total energy functional of electronic RDMFT, given in Ref. 19, by one expressed in terms of the newly introduced quantities of the dressed system, i.e., the auxiliary Hamiltonian of Eq. (20) (approximately) evaluated with the dressed 1RDM γ′ of Eq. (23). In this way, common approximations for the two-body energy expression in terms of the 1RDM can be directly transferred from electronic theory to the dressed system.50 The minimization is performed as in the electronic case and is based on the RDMFT implementation of Octopus, though the convergence of the dressed system requires a more involved protocol that can be found in the supplementary material of Ref. 49. The current implementation in Octopus approximates the conditions under which the dressed 1RDM corresponds to a wave function by ensuring the fermionic ensemble N-representability conditions.51 However, the auxiliary wave function also exhibits another exchange symmetry with respect to the auxiliary coordinates, which is currently neglected. For the practical validity of this approach, the reader is referred to Sec. 5 and the supplementary material of Ref. 49.

As an example, we consider the one-dimensional (1D) H2 molecule in a soft-Coulomb potential with a slightly stretched bond-length of b = 2.0 bohrs52 that is modeled by the local potential

(24)

and the soft Coulomb interaction

(25)
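Since the explicit forms of Eqs. (24) and (25) are not reproduced here, the short Python sketch below uses the common 1D soft-Coulomb parameterization with a softening parameter of 1 bohr; this softening is an assumption, and the parameters actually used in the calculation may differ.

```python
import numpy as np

# Illustrative 1D soft-Coulomb model of the stretched H2 molecule, assuming
# a softening parameter of 1 bohr (a standard choice for 1D model systems).

b = 2.0  # bond length in bohr, as in the text

def v_ext(x):
    """Local potential of the two protons at +/- b/2 (soft-Coulomb)."""
    return (-1.0 / np.sqrt((x - b / 2) ** 2 + 1.0)
            - 1.0 / np.sqrt((x + b / 2) ** 2 + 1.0))

def w_ee(x1, x2):
    """Soft-Coulomb electron-electron repulsion."""
    return 1.0 / np.sqrt((x1 - x2) ** 2 + 1.0)

x = np.linspace(-10.0, 10.0, 401)   # example grid in bohr
print(v_ext(0.0), w_ee(0.0, b))
```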

In Fig. 4, we show the total energy and the total photon number of the dressed RDMFT, the dressed HF, and the exact many-body calculations53 for different coupling strengths. We see that for small couplings, both observables are captured well by the dressed RDMFT and dressed HF. With increasing coupling strength, both approximations fail to capture the strongly increasing photon number. For the total energy, instead, the dressed RDMFT remains very close to the exact result, whereas the deviations of the dressed HF increase with increasing coupling strength. This shows the potential of the dressed RDMFT to describe correlated electron systems that are strongly coupled to a cavity mode. In the future, we plan to investigate better approximations to the polaritonic N-representability conditions that also account for the symmetry of the many-body wave function with respect to the exchange of photon coordinates.

FIG. 4.

Differences of the dressed HF (dHF) and the dressed RDMFT (dRDMFT) from the exact ground state energies (in hartree, left) and from the exact photon number (right) as a function of the coupling g/ω for the stretched H2 molecule in the dressed orbital description. The dressed RDMFT improves considerably upon dressed HF for the energy, especially due to the better description of the electronic correlation. The photon number, an example of a photonic observable, is captured similarly with both methods.


It is well known that the standard local and semilocal functionals of DFT tend to over-delocalize the electrons, as usual approximations are based on the homogeneous electron gas. This leads to several failures of DFT for materials in which the localization of electrons plays a critical role in dictating the system’s properties. This is, for instance, the case for transition metal oxides. The DFT+U method was originally proposed to compensate for some of the failures of the LDA for such materials.54–57 In essence, the DFT+U method aims at a better description of the local electron–electron interaction, which is achieved by adding the mean-field Hubbard model on a chosen localized subspace to the DFT total energy. The double counting of electron interaction in this localized subspace is then removed. The DFT+U total energy functional reads

E_DFT+U = E_DFT + E_ee − E_dc,   (26)

where E_ee is the usual electron–electron interaction energy and E_dc accounts for the double counting of the electron–electron interaction already present in E_DFT. The analytical expression of this double-counting term is not known in the general case, a problem common to all +U methods, and several approximate forms have been proposed over the years.58,59 In the Octopus code, we implemented the most commonly used double-counting terms: the fully localized limit (FLL)60 and the around-mean-field (AMF)61 terms. They read as59

(27)

and

(28)

respectively, where N_σ = Σ_m n^σ_mm and N = N_↑ + N_↓. The E_ee and E_dc energies depend on the density matrix of a localized orbital basis set {φ_{I,m}}, where the φ_{I,m} are the localized orbitals attached to atom I. In the following, we refer to the elements of the density matrix in the localized basis as occupation matrices, and we denote them as {n^{I,σ}_{mm′}}. In the rotationally invariant form of DFT+U proposed by Dudarev et al.,60 and for the FLL double-counting term, we obtain the energy E_U to be added to the DFT total energy, which depends only on an effective Hubbard parameter U_eff = U − J,

E_U = Σ_{I,n,l,σ} (U_eff^{I,n,l}/2) Σ_m [ n^{I,n,l,σ}_{mm} − Σ_{m′} n^{I,n,l,σ}_{mm′} n^{I,n,l,σ}_{m′m} ],   (29)

where I is an atom index, σ is the spin index, and n, l, and m refer to the principal, azimuthal, and magnetic quantum numbers, respectively. In the case of a periodic system, the occupation matrices n^{I,n,l,σ}_{mm′} are given by

(30)

where w_k is the k-point weight and f_{nkσ} is the occupation of the Bloch state |ψ^σ_{n,k}⟩. Here, |φ_{I,n,l,m}⟩ are the localized orbitals that form the basis used to describe the electron localization. Details of the implementation can be found in Ref. 62. We recently extended our original implementation to be able to construct a localized subspace from localized states, such as Wannier states,63 and to treat the intersite interaction.64
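For illustration, the construction of these occupation matrices and of the rotationally invariant +U energy in the Dudarev form can be sketched as follows; the projections, occupations, and k-point weights are assumed to be precomputed inputs, and the variable names are illustrative.

```python
import numpy as np

def occupation_matrix(projections, occupations, kweights):
    """Occupation matrix n_{mm'} for one atom, (n, l) channel, and spin (sketch).

    projections: complex array (n_bands, n_kpoints, n_m) of overlaps
                 <phi_m | psi_{n,k}> with the localized orbitals (placeholder input).
    occupations: array (n_bands, n_kpoints) of band occupations f_{n,k}.
    kweights:    array (n_kpoints,) of k-point weights w_k.
    """
    w = occupations * kweights[None, :]                        # f_{nk} w_k
    return np.einsum('nk,nkm,nkp->mp', w, projections.conj(), projections)

def dudarev_plus_u_energy(n_up, n_dn, u_eff):
    """Rotationally invariant +U energy, U_eff/2 * sum_sigma Tr[n - n n]."""
    return 0.5 * u_eff * sum(np.trace(n - n @ n).real for n in (n_up, n_dn))
```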

In its usual formulation, the DFT+U method is an empirical method in which the effective U is a parameter of the calculation. However, it recently became possible to evaluate U and J fully ab initio and self-consistently using the ACBN0 functional.65 We also implemented this method in Octopus and extended it to the time-dependent case62 to investigate strongly correlated materials out of equilibrium. We showed that the absorption spectra of transition metal oxides, such as NiO or MnO, are well reproduced by our TDDFT+U simulations.62

Figure 5 shows the calculated time profile of the effective Hubbard U_eff = U − J for the 3d orbitals of Ni in light-driven NiO. The top panel shows the time profile of the driving vector potential. This demonstrates that strongly driven correlated materials cannot be described by simply assuming that the effective electronic parameters (here, the effective Hubbard U) remain constant out of equilibrium. Moreover, the possibility to tune these effective electronic parameters offers opportunities for light-driven phase transitions, such as light-induced magnetic Weyl semimetals.66

FIG. 5.

Self-consistent dynamics of Hubbard U for the Ni 3d orbitals (bottom panel) for pump intensities as indicated. The top panel represents the time-dependent vector potential, and the vertical dashed lines indicate the extrema of the vector potential, i.e., the minima of the driving electric field. Results are from Ref. 67.


We finally note that TDDFT+U is not the only method that can be used to describe correlated materials such as transition metal oxides. An alternative approach to this problem would be to use hybrid functionals. However, such calculations for complex systems, potentially involving strong spin–orbit coupling, are usually numerically very expensive in the context of real-time, real-space TDDFT.

The van der Waals (vdW) interactions arise from correlations between electrons and are, in principle, described by the exact energy functional through the correlation energy functional Ec[n(r)]. However, the vdW interactions are inherently non-local and long-range and, by construction, cannot be described by usual local and semi-local functionals.68 Therefore, much work has been devoted to finding consistent ways to enhance available functionals to correctly describe the necessary correlations yielding vdW forces.

Within a pure DFT approach, the inclusion of vdW interactions should be done through the correlation functional. To this end, a family of non-local density functionals has been proposed.68 The so-called vdW density functionals (vdW-DF) are derived ab initio from the screened response of the homogeneous electron gas.68 Another route to include vdW interactions in DFT calculations is to add explicit corrections to the energy and forces based on atomic parameterizations. The most well-known method of this type is probably the vdW-D3 scheme of dispersion corrections from Grimme.69 Although this type of approach is computationally more favorable than the vdW-DF method, its major disadvantage, particularly from the perspective of time-dependent simulations, is that it fails to correctly describe effects that depend on the electron dynamics. A trivial example would be a lone electron traveling through space. Another scheme based on explicit corrections to the energy and forces was proposed by Tkatchenko and Scheffler (vdW-TS).70 This scheme has the advantage of retaining much of the low computational cost of the vdW-D3 scheme while making the atomic parameterization dependent on the electronic density. Recently, these three schemes (vdW-DF, vdW-TS, and vdW-D3) were implemented in the Octopus code to deal with vdW interactions in isolated and periodic systems.

In the context of real-time TDDFT, it is important to note that these various levels of description of the vdW interactions have different implications. Indeed, in the vdW-D3 method, the energy and forces do not depend on the electronic density and will not change in time unless the atoms move, as already mentioned above. This should be a good approximation only when the time-dependent density remains close enough to the ground-state density. In contrast, the expressions for the vdW-TS and vdW-DF methods depend directly on the electronic density and should thus yield a better description of vdW energies and forces out of equilibrium. However, we note that the use of the vdW-TS and vdW-DF methods within real-time TDDFT must rely on the adiabatic approximation and will, therefore, behave poorly when this approximation breaks down.

Octopus supports vdW-DF functionals68 through the libvdwxc library.71 The vdW-DF functionals are expressed as a sum,

E_c[n] = E_c^LDA[n] + E_c^nl[n],   (31)

of the LDA correlation energy72 and a fully non-local correlation term. The latter is the integral over a kernel function ϕ(q0, q0′, r),

E_c^nl[n] = (1/2) ∫ dr ∫ dr′ n(r) ϕ(q_0(r), q_0(r′), |r − r′|) n(r′),   (32)

where q_0(r) depends on the local density and its gradient. An explicit evaluation of this six-dimensional integral is very expensive and scales as the volume squared, O(N²). Roman-Pérez and Soler proposed an efficient method that approximates it as a sum of three-dimensional integrals.73 The method works by expressing the integrand as a convolution using a limited set of helper functions and then applying the Fourier convolution theorem, yielding a scaling of O(N log N) with a much lower prefactor. This has since become the standard method for evaluating vdW-DF functionals.
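The core of the Roman-Pérez–Soler factorization is to expand the kernel over a small set of interpolation functions, ϕ(q_0, q_0′, |r − r′|) ≈ Σ_ab p_a(q_0) ϕ_ab(|r − r′|) p_b(q_0′), so that the double integral reduces to a handful of FFT-based convolutions. The Python sketch below illustrates only this convolution step; the helper functions, tabulated kernels, and normalization conventions are placeholders and assumptions, not those of libvdwxc.

```python
import numpy as np

def nonlocal_energy_fft(theta, kernel_ab_q, volume_element):
    """Sketch of the Roman-Perez--Soler convolution step.

    theta:       array (n_alpha, nx, ny, nz) with theta_a(r) = n(r) p_a(q0(r)),
                 i.e., the density weighted by interpolation functions
                 (placeholders for the actual helper functions).
    kernel_ab_q: array (n_alpha, n_alpha, nx, ny, nz//2 + 1) holding the
                 Fourier transform of the tabulated kernels phi_ab(|r - r'|).
    Returns 0.5 * sum_ab int dr int dr' theta_a(r) phi_ab(|r - r'|) theta_b(r'),
    evaluated as FFT-based (periodic) convolutions, i.e., O(N log N).
    """
    theta_q = np.fft.rfftn(theta, axes=(1, 2, 3))
    # u_a(r) = sum_b (phi_ab * theta_b)(r): convolution theorem in reciprocal space
    u_q = np.einsum('abxyz,bxyz->axyz', kernel_ab_q, theta_q)
    u = np.fft.irfftn(u_q, s=theta.shape[1:], axes=(1, 2, 3)) * volume_element
    return 0.5 * np.sum(theta * u) * volume_element
```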

As implemented in Octopus, the density is redistributed from its normal uniform grid of arbitrary shape onto a uniform 3D grid forming a cube or parallelepiped, which is suitable for 3D Fourier transforms. The library libvdwxc then evaluates the non-local energy and contributions to the potential, relying on the FFTW library74 for efficient parallel Fourier transforms. After calculating the energy and potential, the potential is redistributed back to the original form.

Octopus supports the standard functionals vdW-DF1,75,76 vdW-DF2,77 and vdW-DF-cx,78 as well as other common forms that differ by combining the correlation with a different exchange functional. Some common supported variations are vdW-DF-optPBE, vdW-DF-optB88,79 and vdW-DF-C09.80 

Since the vdW-TS approach depends on the density, the effect of the van der Waals interaction can be observed in properties other than the forces. In particular, we expect to observe an effect in the excited states of systems that interact through vdW forces. We use the hydrogen fluoride dimer as a simple model system for a proof-of-concept application of the modular implementation of the vdW-TS functional correction in TDDFT calculations. The dimer geometry is set up as shown in Fig. 6. The hydrogen fluoride monomers are placed in an anti-parallel fashion, each one with its main symmetry axis oriented along the y-axis. The hydrogen fluoride bond in each monomer is 0.92 Å long, and the molecules are separated by 2.8 Å along the z-axis. At this distance, the van der Waals interaction between the monomers is the strongest according to the vdW-TS model. We choose this dimer model because it is a conveniently small system in which the effects of including the dispersion correction can be shown at an affordable computational cost.

FIG. 6.

Absorption cross section spectrum of the hydrogen fluoride dimer with and without van der Waals effects. A zoomed-in view of the absorption feature at around 9.3 eV illustrates a small red shift when van der Waals effects are considered. Both hydrogen fluoride molecules are placed on the same plane at a separation of 2.8 Å.


To calculate the absorption, we excite the system with an infinitesimal electric-field pulse and then propagate the time-dependent Kohn–Sham (TDKS) equations for 30.38535 ℏ/eV. The singlet dipole spectrum is evaluated from the time-dependent dipole moment. The strength of the perturbation is set to 0.01 Å^−1, and the polarization is along the z-axis. The time evolution is carried out using the enforced time-reversal symmetry propagator with (default) time steps of 0.03352 ℏ/eV.
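For reference, the post-processing from the time-dependent dipole to an absorption cross section after such a delta-kick excitation can be sketched as below (a simplified stand-in for the spectrum utilities shipped with Octopus); the damping, frequency grid, and normalization are illustrative choices in atomic units.

```python
import numpy as np

def absorption_cross_section(dipole, dt, kick_strength, damping=0.005):
    """Absorption cross section from the dipole response to a delta kick (sketch).

    dipole:        d(t) along the kick direction, sampled every dt (a.u.).
    kick_strength: strength of the initial delta-kick perturbation (a.u.).
    damping:       artificial exponential damping to smooth the spectrum (a.u.).
    Uses sigma(omega) = (4 pi omega / c) Im[alpha(omega)], with
    alpha(omega) = (1/kick) int dt exp(i omega t) [d(t) - d(0)] exp(-damping t).
    """
    c = 137.036                           # speed of light in atomic units
    d = np.asarray(dipole) - dipole[0]    # induced dipole
    t = np.arange(len(d)) * dt
    d = d * np.exp(-damping * t)
    omega = np.linspace(0.0, 1.0, 2000)   # frequency grid (a.u.), illustrative
    alpha = np.array([np.trapz(d * np.exp(1j * w * t), t) for w in omega])
    alpha = alpha / kick_strength
    return omega, 4.0 * np.pi * omega * alpha.imag / c
```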

The results (Fig. 6) show a small van der Waals-induced bathochromic-like (red) shift in the optical spectrum of the hydrogen fluoride dimer calculated with the LDA. This example opens the door for a new series of applications in supra-molecular chemistry, structural biology, and polymer science, which incorporate van der Waals effects on real-time electron dynamics.

Octopus also supports the DFT-D3 van der Waals correction.69 This correction depends only on the atomic positions and not on the electronic density. It is implemented in Octopus by linking to the DFT-D3 library provided by the authors. We have validated the Octopus results against the reference data provided with the library, ensuring that they are consistent with those of other codes that implement this correction.

The only modification we made to the library was to move the very large set of coefficients from a source file to a standalone text file that is parsed only when necessary. This speeds up compilation and reduces the size of the binaries. The modified library is distributed with Octopus, so users need not compile it separately.

The Polarizable Continuum Model (PCM)81 comprises a family of implicit-solvent approaches to tackle quantum-mechanical calculations of molecules in solution. The PCM assumes that (i) the solvent is a continuous and infinite dielectric medium characterized by a frequency-dependent dielectric function and (ii) a void cavity of appropriate shape and size encapsulates the solute molecule, separating it from the solvent by a sharp interface. The numerical implementation of the PCM relies on the Apparent Surface Charge (ASC) approach and the Boundary Element Method (BEM).82 In this framework, the solvent polarization response induced by the molecule's charge density is modeled by a reaction potential generated by a set of point charges q = {q_1, q_2, …, q_T} spread over a tessellated cavity surface consisting of T finite surface elements or tesserae.83 The reaction potential is the object through which the complex dielectric environment is accounted for.84 The implementation of the PCM in Octopus rests on the Integral Equation Formalism (IEF).85 The key IEF-PCM equation for computing the polarization charges is

q = QV,   (33)

where q and V are column vectors of size T storing the induced polarization charges and the molecule’s electrostatic potential at the tesserae representative points, respectively. Q is the T × T PCM response matrix, which depends on the geometry of the cavity and the dielectric function of the solvent.81 

A realistic description of the molecular cavity is key to capturing accurate electronic and optical properties of molecules in solution. In principle, the cavity should (i) exclude the solvent, (ii) comprise most of the solute electronic density, and (iii) conform to the molecular shape. Octopus uses the GEPOL algorithm,86 which builds up the van der Waals cavity from the union of interlocking spheres with element-specific radii centered at each atom position (by default, no spheres are built around hydrogen atoms). Within GEPOL, the BEM tessellation of the solute–solvent interface is done starting from a 60- or 240-face circumscribed polyhedron per sphere, selecting only exposed tesserae and properly reshaping those that are partially exposed. The BEM tessellation is a surface grid constructed independently of the real-space three-dimensional grid used in Octopus to represent both the Kohn–Sham electronic Hamiltonian and the molecular orbitals. The mismatch between the two discrete representations might cause numerical problems arising from Coulomb singularities whenever tesserae and grid points are close to each other. The implementation of the PCM in Octopus regularizes such singularities by using normalized spherical Gaussian functions to smooth the discretized polarization charges q.84

Having briefly described the implementation, we now look at specific cases relevant for chemical reactions, namely, solvation energies and ground-state stabilization. The most evident effect of the presence of a solvent is to change the total energy of a system compared to its value in vacuum. In the framework of DFT, the Kohn–Sham Hamiltonian of the solvated molecule contains the solvent reaction potential, which is a functional of the electronic and nuclear densities of the solute molecule.84 The latter implies that the Kohn–Sham and IEF-PCM equations become coupled and the polarization charges q_e = Q V_Hartree[ρ_e], induced by the molecule's electronic density, have to be computed at each Kohn–Sham iteration until convergence is reached. The resulting electronic density can be used to compute the electrostatic contribution to the solvation free energy ΔG_el = G[ρ^e_solv] − E[ρ^e_vac], where G and E denote the free and total energy functionals of the molecule in solvent and in vacuum, respectively. The current PCM implementation in Octopus84 has been thoroughly tested on a benchmark set of organic molecules and compared with analogous calculations performed with the quantum chemistry package GAMESS.87 As an example of the solvent stabilization effect in the ground state, we studied the nitrobenzene molecule in water (static dielectric constant set to ϵ_s = 80).
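The coupling just described, in which the charges q_e = Q V_Hartree[ρ_e] are refreshed at every Kohn–Sham iteration, can be sketched as the loop below; the Kohn–Sham solver, the evaluation of the potential at the tesserae, and the mapping of the smoothed charges back onto the grid are placeholders, and the convergence thresholds are illustrative.

```python
import numpy as np

def scf_with_pcm(rho0, Q, solve_ks, potential_at_tesserae,
                 reaction_potential_on_grid, tol=1e-6, max_iter=100):
    """Sketch of a coupled Kohn-Sham / IEF-PCM self-consistency loop.

    Q: (T, T) PCM response matrix.
    solve_ks(v_reaction) -> updated electronic density (placeholder).
    potential_at_tesserae(rho) -> length-T vector of the solute Hartree
        potential at the tesserae representative points (placeholder).
    reaction_potential_on_grid(q) -> reaction potential of the (smoothed)
        charges q on the real-space grid (placeholder).
    """
    rho = rho0
    q = np.zeros(Q.shape[0])
    for _ in range(max_iter):
        V = potential_at_tesserae(rho)        # solute potential at the cavity
        q_new = Q @ V                         # IEF-PCM charges: q = Q V
        rho_new = solve_ks(reaction_potential_on_grid(q_new))
        if (np.max(np.abs(q_new - q)) < tol
                and np.max(np.abs(rho_new - rho)) < tol):
            return rho_new, q_new
        rho, q = rho_new, q_new
    return rho, q
```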

In Fig. 7, we show the convergence of the solvation free energy as a function of the Kohn–Sham iteration. The solvation free energy is stabilized already after 10 Kohn–Sham iterations out of the 19 required to optimize the molecular orbitals of the solvated molecule. We have also verified that the numerical error inherent to the discretization of the solute cavity surface is very small. For example, the total polarization charge Q_PCM = Σ_{i=1}^{T} q_i^e at each Kohn–Sham iteration deviates by only 0.03% from the actual number of valence electrons in the nitrobenzene molecule (the relation between q and the solute charge is determined by Gauss's theorem81). However, the magnitude of this error will depend, in general, on the size and the geometry of the molecule.

FIG. 7.

(Left panel) Nitrobenzene molecule in water (ϵ_s = 80) surrounded by the ground-state PCM charges distributed on the solvation cavity. The actual distribution of the charges in 3D is given in the supplementary material, SM1. (Right panel) Electrostatic contribution to the solvation free energy computed at each Kohn–Sham iteration. The Kohn–Sham equations were solved in real space as implemented in Octopus using the GGA-PBE approximation88 to the exchange–correlation energy. The simulation box was built using spheres of radius 5 Å and a uniform spacing of 0.19 Å between grid points. The radii of the spheres used to build the cavity surface of the solute molecule are 2.4 Å for carbon, 1.8 Å for oxygen, and 1.9 Å for nitrogen.


Now we move forward and describe the extension of the previous PCM method to the time domain using real-time PCM and non-equilibrium solvation. The ground state PCM is unable to capture the complex dynamical interactions between the solvent and the solute when the latter is in an excited state. Fortunately, the PCM admits a generalization to account for solvation dynamics.89 The time-dependent PCM (TD-PCM) implementation in Octopus comes in three flavors of increasing complexity, all coupled with real-time TDDFT calculations of the solute molecules. The first one, called equilibrium TD-PCM,84 assumes that the solvent is fast enough to instantaneously equilibrate the solute charge density fluctuations and to polarize accordingly. This approach is physically sound for weakly polar solvents, having similar values for the static and dynamic dielectric constants. The second is a nonequilibrium approach called inertial TD-PCM and amounts to partitioning the solvent response in a fast (dynamic) and a slow (inertial) part.90 Faster degrees of freedom respond instantaneously to the changes of the applied potential (either of molecular or external in origin), whereas slower degrees of freedom remain “frozen” and in equilibrium with the initial value of the field. This approximation works well when the solvent relaxation times are large enough with respect to the electronic excitations in the solute molecule (e.g., within the picosecond scale). The third TD-PCM approach, called equation of motion (EOM) TD-PCM, considers the full history-dependent evolution of the solvent polarization through a set of equations of motion for the polarization charges.91 Nonequilibrium polarization effects of this sort originate from the frequency-dependent dielectric response of the solvent, encoding the fact that it takes a different non-negligible time to adjust to fast or slow electrostatic perturbations. Solvation dynamics affect strongly and non-trivially the absorption spectrum of molecules, especially for fast and polar solvents, by inducing solvatochromic shifts of the peaks in the UV–visible absorption spectrum and modifying their relative amplitudes. The details about the implementation of all of these schemes and a detailed discussion of their effects can be found in Refs. 84 and 92.
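To make the distinction among the three flavors concrete, the toy sketch below integrates a schematic Debye-type equation of motion for a single scalar polarization charge (an illustrative stand-in, not the IEF-PCM matrix equations of Refs. 84 and 91): the fast part of the response follows the driving potential instantaneously through the dynamic constant ϵd, while the slow part relaxes toward the static response with a relaxation time τ, so that τ → 0 recovers the equilibrium scheme and τ → ∞ the inertial one.

```python
# Schematic single-charge Debye-type TD-PCM dynamics (a toy stand-in for the
# EOM TD-PCM of Ref. 91, not the Octopus implementation): k_d gives the
# instantaneous (dynamic) response, and the slow part relaxes toward the
# remaining static response with relaxation time tau.
import numpy as np

eps_s, eps_d, tau = 80.0, 1.786, 50.0        # static/dynamic constants, relaxation time (arbitrary units)
k = lambda eps: -(eps - 1.0) / eps           # toy scalar "response" replacing the PCM matrix
k_s, k_d = k(eps_s), k(eps_d)

dt, nsteps = 0.05, 4000
t = np.arange(nsteps) * dt
V = np.sin(0.3 * t)                          # some time-dependent solute potential

q = np.empty(nsteps)
q_slow = 0.0
for i in range(nsteps):
    q_slow += dt / tau * ((k_s - k_d) * V[i] - q_slow)   # inertial (slow) part relaxes
    q[i] = k_d * V[i] + q_slow                           # fast part + slow part
print("dynamic, total, and equilibrium charge at final time:",
      k_d * V[-1], q[-1], k_s * V[-1])
```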

Here, we show the differences among the TD-PCM flavors described above for the case of the nitrobenzene molecule in aqueous solution, excited with an electrical dipole perturbation in the x direction. We take the dynamic dielectric constant of water, entering in the inertial and EOM TD-PCMs, as ϵd = 1.786. Figure 8 shows the photo-absorption spectrum of the gas-phase and solvated molecule. We can see that, with respect to the absorption in vacuo, all of the TD-PCM methods produce shifts of the features (in this case, toward lower excitation energies). The shifts are not rigid overall but depend on the excited state and on the specific solvation scheme (equilibrium, inertial, and EOM). Within Debye’s model, the equilibrium and inertial TD-PCMs are limiting cases for the dynamics when the relaxation time is zero and infinite, respectively. The EOM TD-PCM, with a finite relaxation time, interpolates between these limits. Water has a large relaxation time of 3.37 ps, and therefore, the absorption spectrum is almost coincident for the EOM and the inertial TD-PCMs, as shown in Fig. 8. Whenever the solvent is as slow as water, there is not much gain in selecting the EOM over the inertial TD-PCM, and the latter is the method of choice in terms of computational performance. Instead, for faster solvents, the EOM results depart from those of the inertial algorithm, approaching those of the equilibrium TD-PCM. In Fig. 8, we considered an effective solvent having the same static and dynamic dielectric constants as water, but with a 1 fs relaxation time. The EOM TD-PCM for such a model solvent produces almost the same excitation-energy shift as the equilibrium TD-PCM and peak intensities in between the inertial and the equilibrium TD-PCMs, as expected.

FIG. 8.

Absorption spectrum of nitrobenzene in vacuo and in water (ϵs = 80, ϵd = 1.786) for all the TD-PCM schemes. The inset shows the normalized cavity field factor (CFF) vs the excitation energy for the two EOM TD-PCMs with solvent relaxation times of 3.37 ps (slow) and 1 fs (fast) corresponding to CFFmax = 1.27 and 2.55, respectively. Real-time TDDFT simulations of 20 fs with a time step of 1.7 × 10−3 fs were performed to obtain the absorption spectra. An electric dipole perturbation within the linear regime and oriented along x was applied as an initial perturbation. The rest of the computational details are shared with the ground-state DFT calculations leading to Fig. 7.


A real-time representation of the molecular dipole coupled with the PCM surface charges is found in the supplementary material, SM2, which also highlights how the different TD-PCM approaches affect the time-evolution of the dipole.

When studying the evolution of a solvated molecule under the effect of an external time-dependent electromagnetic field, a separate treatment is required to take into account that there are two solvent polarization contributions interacting with the solute molecule, namely, the reaction and the cavity fields.93 The reaction field comes from the polarization induced by the solute molecule itself, while the cavity field arises as a polarization induced by the external electric field. Both reaction and cavity field effects can be accounted for in static and real-time quantum-mechanical calculations of molecules within the PCM and TD-PCM simulations in Octopus.92 

All TD-PCM calculations shown here were performed using both reaction- and cavity-field effects, although the redshift of the peaks in each TD-PCM scheme is mainly a feature of the reaction field, as ascertained in test calculations (not shown) and as expected.94 Cavity field effects impact the peak intensities directly by making the absorption more favorable depending on how large the effective local field acting on the molecule is allowed to be by the solvent dielectric properties and the geometry of the cavity (normally resembling the molecular shape). Still, cavity field effects can have a non-trivial influence on the absorption spectrum shape when considering non-equilibrium solvation dynamics (EOM TD-PCM) by changing the relative peak intensities. This effect can be seen in the inset of Fig. 8, where the normalized cavity field factor (CFF)—the ratio between the absorption cross section with and without cavity field effects—is plotted against the excitation energy for the EOM TD-PCM simulations with water and the aforementioned faster water-like solvent. The large difference in absorption peak intensities between the equilibrium and the rest of the TD-PCMs in Fig. 8 is also related to the modification of the probing electromagnetic field inside the dielectric. The photo-absorption cross section is, by definition, the ratio between the absorbed and the incoming power of light. In our case, both increase with the dielectric constant: the larger the dielectric constant of a medium, the larger the CFF, but a larger dielectric constant also implies a smaller phase velocity of light, therefore increasing the power of the traveling electromagnetic wave. The impact of the latter effect [refractive index n(ω) = √((|ϵ(ω)| + Re ϵ(ω))/2)] is stronger than that of the former [∝ CFF, e.g., for a spherical cavity93 CFF = |3ϵ(ω)/(2ϵ(ω) + 1)|²] for a large enough dielectric constant such as the one corresponding to the equilibrium TD-PCM for water (ϵs = 80).
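The competition between these two factors can be checked numerically; the short sketch below evaluates the spherical-cavity CFF and the refractive-index factor for a Debye dielectric (the spherical cavity and the Debye form are simplifying assumptions used only for illustration).

```python
# Numerical illustration of the two competing factors discussed above for a
# Debye dielectric (a sketch; the spherical-cavity CFF is only a stand-in for
# the molecular-shaped cavity used in the actual calculations).
import numpy as np

def eps_debye(omega, eps_s=80.0, eps_d=1.786, tau=100.0):
    # Debye model; frequency and relaxation time in arbitrary, consistent units
    return eps_d + (eps_s - eps_d) / (1.0 - 1j * omega * tau)

omega = np.linspace(0.01, 10.0, 5)                        # arbitrary frequency grid
eps = eps_debye(omega)
cff = np.abs(3.0 * eps / (2.0 * eps + 1.0)) ** 2          # spherical-cavity field factor
n = np.sqrt((np.abs(eps) + eps.real) / 2.0)               # refractive index
for w, c, nn in zip(omega, cff, n):
    print(f"omega={w:5.2f}  CFF={c:6.3f}  n={nn:6.3f}")
```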

In conclusion, Octopus is now capable of including an implicit dielectric continuum model as an environment for quantum mechanical calculations of excited states in the time-domain. The different versions of the TD-PCM scheme implemented allow us to select the most suitable to capture the relevant physics, accounting for a full range of different relaxation and response times.

In the last couple of years, the first studies investigating magnetization dynamics from first principles in real time have emerged.95–100 Here, we are interested in transverse magnetic excitations, specifically magnons, which are long wavelength collective excitations with a typical energy of tens to hundreds of millielectron volts. We recently developed an alternative to the linear-response TDDFT formulation98,99,101 to compute the spin susceptibilities of magnetic systems based on real-time TDDFT.102 Our approach follows the work of Bertsch et al. for optical excitations. In the original work of Bertsch et al., the system is perturbed by a sudden change in the vector potential, which induces a charge- and current-density response. To investigate magnons, we employ a “transverse magnetic kick,” which induces magnetic fluctuations in the system. To be more precise, our perturbation corresponds to an infinitely short application of a Zeeman term,

(34)

where q is the momentum of the spin wave we are exciting, B is the strength of the perturbation, and σ± = σx ± iσy are the usual combinations of the Pauli matrices. In this expression, the z axis is taken to be along the direction of the magnetization of the system before excitation. In case the system has a preferred magnetization direction due to the presence of spin–orbit coupling, usually referred to as an “easy axis,” we perform a kick in the transverse direction with respect to this “easy axis” of the system.

The subsequent time evolution of the spin magnetization m(r, t), governed by the time-dependent Schrödinger equation, is then computed and analyzed in Fourier space. If we perturb our system from its ground state and we assume linear response, we have directly that

(35)

where χ+−(q; ω) is the spin susceptibility we want to extract. A similar expression is obtained for χ−+(q; ω).
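In practice, this amounts to recording m+(q, t) during the real-time propagation and Fourier transforming it; a minimal post-processing sketch (with a synthetic signal in place of the computed magnetization and with prefactors left schematic) reads as follows.

```python
# Sketch of the post-processing implied by Eq. (35): after the transverse
# magnetic kick, m_+(q, t) is recorded during the real-time propagation and
# Fourier transformed; up to prefactors (an assumption here), the result is
# proportional to chi_{+-}(q, omega).
import numpy as np

dt, nsteps, B = 0.1, 8192, 0.02                    # time step, number of steps, kick strength
t = np.arange(nsteps) * dt
omega_magnon, gamma = 0.35, 0.01                   # synthetic magnon mode standing in for m_+(q, t)
m_plus = B * np.sin(omega_magnon * t) * np.exp(-gamma * t)

window = np.exp(-3.0 * t / t[-1])                  # mild damping to regularize the finite-time transform
chi = np.fft.rfft(m_plus * window) * dt / B
omega = 2.0 * np.pi * np.fft.rfftfreq(nsteps, d=dt)
print("magnon peak at omega ≈", omega[np.argmax(np.abs(chi.imag))])
```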

In order to access finite momenta, one typically has to employ large supercells to perform the dynamics, which can be computationally very expensive when an energy resolution of a few meV is needed. Fortunately, there is a way to circumvent the construction of supercells by using the so-called generalized Bloch theorem (GBT). The GBT was introduced by Sandratskii103 for the calculation of ground-state spin waves and requires the implementation of specific boundary conditions. We implemented the GBT for the real-time calculation of magnons, taking advantage of the fact that Octopus is a real-space finite-difference code, for which we can easily specify any type of boundary conditions. However, it is important to note that this applies only in the absence of spin–orbit coupling. The boundary condition, described below, depends on the momentum q of the perturbation and acts differently depending on whether a state was originally “up” or “down” with respect to the unperturbed magnetization. This is determined by the sign of ⟨Φnk|Sz|Φnk⟩ just before the perturbation for each spinor state |Φnk⟩. If we label these states α and β, the boundary condition reads

(36)

We checked that performing the simulation using the GBT or the supercell approach leads to the same results up to numerical precision. We tested our approach for cubic Ni, Fe, and Co, which have been widely studied using linear response TDDFT, and we found that our results are in very good agreement with previous work,96 thus validating our implementation. Figure 9 shows the results obtained for bulk nickel using the adiabatic local-density approximation. We used here a real-space grid spacing of 0.27 bohr, norm-conserving pseudopotentials, and we employed a 16 × 16 × 16 k-point grid shifted 4 times in order to resolve momenta q that are multiples of 2π/(16a), where a is the lattice parameter of Ni, which we took to be 3.436 Å. In order to obtain the linear response, we took B=0.02 and we propagated for 435.4 fs using a time step of 1.81 as. To reduce the numerical burden, symmetries were employed.

FIG. 9.

Calculated magnon dispersion: spin susceptibility of bulk Ni along the Γ–X direction, displayed in logarithmic scale. Numerical details are given in the main text. The experimental data are taken from Ref. 104.


Let us now comment on the merits of the proposed approach. So far, we have only used this new method to investigate weak magnetic kicks, for which we recover the results from linear-response theory. However, our approach does not rely on the assumption of small perturbations and can be used to investigate nonlinear phenomena induced by a strong magnetic field, as well as out-of-equilibrium situations where the system is kicked from an excited state. This is the strength of a real-time method.105–108 Moreover, we can investigate directly the coupling with other degrees of freedom, such as phonons or photons, without any new theory or code development. Using the real-space method, we benefit from the favorable scaling of the time propagation, which is linear in the number of states, whereas sum-over-states approaches usually scale quadratically with the number of states. Finally, our approach offers the great advantage of not requiring the use of an exchange–correlation kernel, as only the exchange–correlation potential is needed to perform a time propagation. This is very interesting for testing new functionals and levels of theory for which deriving the expression of the exchange–correlation kernel can become complicated. These different aspects will be investigated in future works.

In this section, we review the implemented routines for magneto-optical phenomena, which arise from the loss of symmetry between left and right circularly polarized light in the presence of a magnetic field.109,110 While magneto-optical spectra can be straightforwardly computed for molecules,111–115 the theory for solids has been developed only recently.116 The reason is that external electromagnetic fields break the translational symmetry of periodic systems, which is formally expressed through the unboundedness of the position operator.117–119 The orbital response to magnetic fields is especially complicated to describe, as such fields lead to non-perturbative changes in the wavefunctions and introduce a vector coupling to the electron dynamics.120–122 Here, we will focus on calculations of changes in the linear optical response in the presence of the magnetic field within the Sternheimer approach.19,123–125

To treat uniform magnetic fields in periodic systems, we use the approach based on perturbation theory for the one-particle density matrix.116,120,122 Such an approach allows us to work under purely periodic boundary conditions and to take gauge invariance into account automatically. A periodic and gauge-invariant counterpart Õ is defined for any operator O = O(r1, r2) depending on two points r1 and r2 in real space

(37)

where A is the vector potential associated with the external electric (E = −c⁻¹∂A/∂t, where c is the speed of light) and magnetic (B = ∇ × A) fields and the integral is taken along the straight line between points r2 and r1.

The time-dependent Liouville equation for the gauge-invariant counterpart ρ̃ of the one-particle density matrix to the first order in E and B takes the form116 

(38)

Here, the commutator and anticommutator of operators O(1) and O(2) are denoted by [O(1), O(2)] and {O(1), O(2)}, respectively, the velocity operator V = i[r, H̃] is computed taking into account all non-local contributions to the Hamiltonian (e.g., non-local pseudopotentials), and the Hamiltonian is represented as H̃ = H0 + δH̃, where the difference δH̃ between the gauge-invariant counterpart H̃ of the Hamiltonian and the unperturbed Hamiltonian H0 is related to the local-field effects. Unlike the singular position operator r, the commutator [r, ρ̃] of the position operator with the periodic function ρ̃ is well defined in Eq. (38) and corresponds to the derivative with respect to the wave vector, i∂ρ̃k/∂k, in reciprocal space. Differentiating Eq. (38), one finds the derivatives of the density matrix ρ̃(P) = ∂ρ̃/∂P with respect to external perturbations P (E, B, etc.).

For TDDFT calculations in Octopus, ρ is the Kohn–Sham density matrix. The nth order derivative ρ̃(P) describing the joint response to the perturbations P = P1P2⋯Pn is divided into four blocks within and between the occupied (V) and unoccupied (C) subspaces: ρ̃VV(P) = Pv ρ̃(P) Pv, ρ̃CC(P) = Pc ρ̃(P) Pc, ρ̃VC(P) = Pv ρ̃(P) Pc, and ρ̃CV(P) = Pc ρ̃(P) Pv, where Pv = ρ(0) and Pc = 1 − Pv are the projectors onto the occupied and unoccupied bands. In accordance with the density matrix perturbation theory,126 to get the elements ρ̃CV(P), Eq. (38) is projected onto the unperturbed Kohn–Sham wavefunctions |ψvk(0)⟩ of the occupied bands v to give an equation for the response function |ηvk(P)(Ω)⟩ = ρ̃CV(P)(Ω)|ψvk(0)⟩,

(39)

Here, the operator on the left-hand side is given by Lvk(Ω) = Ω + H0 − ϵvk, where Ω is the frequency considered and ϵvk is the energy of the unperturbed state |ψvk(0)⟩. The operator R comes from the right-hand side of Eq. (38) and is determined by the derivatives of the density matrix of the previous orders. If the local-field effects are taken into account, the right-hand side R also depends on the derivative of the electron density n(P)(r1) = ρ(P)(r1, r2)δ(r1 − r2). In this case, Eq. (39) needs to be solved self-consistently. The calculations, in practice, work with the periodic parts of the wavefunctions, |uvk(0)⟩. The commutator [r, ρ̃], corresponding to i∂ρ̃k/∂k in reciprocal space, is computed within the k · p theory.19,124,125 Equation (39) is solved using the efficient Sternheimer approach,19,123–125 where the function |ηvk(P)(Ω)⟩ that enters Eq. (39) is found iteratively at each frequency Ω. To avoid divergences at resonances, a small but finite imaginary part is added to the frequency Ω0 of the external perturbation so that Ω = Ω0 + iη.
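The structure of such a solve can be illustrated with a small dense toy model (the matrices, the sign convention of the right-hand side, and the direct solver used here are illustrative assumptions; Octopus solves the projected equations iteratively on the real-space grid).

```python
# Minimal sketch of a Sternheimer-type solve in the spirit of Eq. (39) using
# toy matrices (sign conventions on the right-hand side are an assumption):
# the response is obtained by solving a linear system in the unoccupied
# subspace, with a small imaginary broadening added to avoid resonances.
import numpy as np

rng = np.random.default_rng(1)
n, nocc = 12, 3
A = rng.standard_normal((n, n))
H0 = (A + A.T) / 2.0                                   # toy unperturbed Hamiltonian
eps, psi = np.linalg.eigh(H0)
Pc = np.eye(n) - psi[:, :nocc] @ psi[:, :nocc].T       # projector onto the unoccupied subspace

R = rng.standard_normal((n, n)); R = (R + R.T) / 2.0   # toy perturbation operator
omega, eta_broad = 0.4, 1e-2                           # frequency and imaginary broadening

responses = []
for v in range(nocc):
    rhs = -Pc @ (R @ psi[:, v])                        # project the RHS onto the unoccupied space
    L = (omega + 1j * eta_broad) * np.eye(n) + H0 - eps[v] * np.eye(n)
    # in Octopus this is solved iteratively; a direct solve suffices for the sketch
    eta_v = np.linalg.solve(L, rhs.astype(complex))
    responses.append(Pc @ eta_v)                       # keep only the CV-block component
print("norm of the first response vector:", np.linalg.norm(responses[0]))
```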

Once the solution of Eq. (39) is known, the elements ρ̃CV(P) are obtained as

(40)

The elements of ρ̃(P) between the occupied and unoccupied subspaces are found using the relation ρ̃VC(P)(Ω)=(ρ̃CV(P)(Ω*))*, and for that, Eq. (39) is also solved for the frequency −Ω*.

To find the elements within the occupied ρ̃VV(P) and unoccupied ρ̃CC(P) subspaces, the idempotency condition ρ = ρρ is used. In terms of the periodic counterpart ρ̃ of the density matrix and to the first order in the magnetic field, it is written as120,122

(41)

The contribution ανμ,γ to the polarizability in the presence of the magnetic field (ανμ = α0νμ + ανμ,γBγ) is finally obtained from the current response as

(42)

where indices ν, μ, and γ are used to denote components of the vectors V, E, and B. The dielectric tensor in the presence of the magnetic field is computed as ϵνμ = δνμ + 4πανμ/V, where V is the volume of the unit cell. Note that according to the “2n + 1” theorem,127,128 there is no need to calculate explicitly the second-order derivative ρ(EμBγ). Instead, ανμ,γ is expressed through the first-order derivatives to the perturbations P = Eμ, Bγ and a supplementary perturbation corresponding to a vector potential P = Aν at frequency −Ω.116 

To test the formalism for solids, we applied it to bulk silicon and the corresponding results are shown in Fig. 10. The full details of the calculations can be found in Ref. 116. It is seen in Fig. 10(b) that even without accounting for excitonic effects, the calculated spectra Re/Im ϵxy for the transverse component of the dielectric tensor are already qualitatively similar to the experimental curves110 at the direct absorption edge. To model excitonic effects, we used the model from Ref. 129. The spectra computed by accounting for excitonic effects show a better agreement with the experimental results [Figs. 10(a) and 10(b)]. Although the magnitudes of the peaks in the magneto-optical spectra are about a factor of two smaller than in the magneto-optical measurements,110 they can be corrected by reducing the linewidth δ assumed in the calculations.

FIG. 10.

Calculated components (a) ϵxx and (b) ϵxy of the dielectric tensor of silicon for different frequencies of light Ω0 (in eV) and a magnetic field of 1 T along the z axis (with and without accounting for excitonic effects). A linewidth of δ = 0.1 eV is used. A blue shift of 0.7 eV is applied to take into account the GW correction to the bandgap.130,131 The experimental data for ϵxx132 and ϵxy110 (scaled by 1/2) are indicated by symbols.


In the limit of a large supercell, this formalism becomes equivalent to the simpler standard formulation for finite systems, which we have also implemented for reference. In this case, the Liouville equation for the density matrix is written in the Coulomb gauge as

(43)

where d = −r is the electric dipole moment, m = −r × V/2c is the orbital magnetic dipole moment, and δH describes local-field effects. The change in the polarizability ανμ,γBγ in the presence of the magnetic field is calculated from the dipole response

(44)

As in the periodic case, due to the “2n + 1” theorem,127,128 the explicit calculation of the second-order derivative ρ(EμBγ) is avoided. Instead, a supplementary electric field at frequency −Ω, Eν, is introduced.116 In the case of real wavefunctions, |ηv(Eν)(Ω) = (|ηv(Eν)(Ω*))*, and there is no need to solve additionally the Liouville equation for this supplementary electric field.

The efficiency of the present scheme to compute magneto-optics is comparable to standard linear-response calculations of simple optical polarizability in the absence of the magnetic field.116 When local-field effects are included self-consistently, the calculations of magneto-optical spectra for solids take only twice as long as those of polarizability. For finite systems, the computational effort is the same as for the simple optical polarizability.

The real-space representation of the dynamics of the electronic structure allows for a seamless and straightforward description of dynamical processes outside the material, i.e., processes where electrons are excited into the vacuum. Beyond simply describing the ionization process, Octopus has routines implemented that compute photoelectron spectroscopy in different flavors. Photoelectron spectroscopy is particularly widespread for the characterization of the electronic structure of solids because it provides a direct observable of the energy and momentum distribution of electronic states, known as the bandstructure. In contrast to other electronic structure methods that compute quantities linked to the photoelectron spectrum, most commonly the single-quasiparticle spectral function obtained through the GW approximation to the many-body self-energy, the approach described here does not consider unit cells of bulk materials but instead computes directly the energy- and momentum-resolved ionization probability.107,133 Besides the study of the electronic structure of solids, photoelectron spectra of atoms and molecules are of equal interest, and the implementation in Octopus is capable of describing all such systems accurately on an equal footing.

The most detailed quantity available in the experiments is the momentum-resolved photoelectron probability P(p), i.e., the probability to detect an electron with a given momentum p. In some cases, the experimental setup offers the possibility to measure the spin polarization along a given axis. The formalism we are going to outline can easily accommodate a non-collinear spin structure and, therefore, calculate spin-resolved quantities, but for the sake of simplicity, in the following, we are going to restrict ourselves to closed-shell systems with collinear spins, i.e., where all the orbitals are doubly occupied with electrons having opposite spins. The reader interested in the most general case can find a detailed description in Ref. 134. From P(p), one can obtain other derived quantities by simple manipulation. For instance, the energy-resolved spectrum P(E), used to identify the occupied energy levels, can be obtained with a change of variable using the free-electron energy dispersion relation E = p²/2. The angle-resolved photoelectron spectrum (ARPES), normally employed to measure the quasiparticle bandstructure, is simply obtained by taking the energy-resolved spectrum as a function of the electron momentum parallel to the surface, P(p∥, E).
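As an illustration of this change of variables, the following sketch bins a synthetic momentum-resolved probability into energy shells using E = p²/2 (the Gaussian ring used as input is purely illustrative).

```python
# Sketch of the change of variables P(p) -> P(E): a synthetic momentum-resolved
# probability on a Cartesian grid (a Gaussian ring at |p| = 1, atomic units) is
# binned into energy shells using E = p^2 / 2.
import numpy as np

p = np.linspace(-2.0, 2.0, 101)
PX, PY, PZ = np.meshgrid(p, p, p, indexing="ij")
pnorm = np.sqrt(PX**2 + PY**2 + PZ**2)
P_p = np.exp(-((pnorm - 1.0) ** 2) / 0.02)             # synthetic P(p): ring at |p| = 1

E = pnorm**2 / 2.0                                     # free-electron dispersion
dE = 0.05
edges = np.arange(0.0, E.max() + dE, dE)
P_E, _ = np.histogram(E.ravel(), bins=edges, weights=P_p.ravel())
print("peak of P(E) at E ≈", edges[np.argmax(P_E)], "(expected near 0.5)")
```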

The t-SURFF method was first proposed by Tao and Scrinzi135 for one-electron systems and later extended to many electrons with TDDFT for periodic134 and non-periodic systems.136 It is based on the assumption that the Kohn–Sham Hamiltonian describing the full experimental process, i.e., including the ionization and detection, can be decomposed into the sum of two Hamiltonians acting on complementary spatial regions, inner and outer, which can be approximated in different ways. In the inner region surrounding the system, the electron dynamics is governed by the interacting Kohn–Sham Hamiltonian ĤKS(r, t), while in the outer region, electrons are free from the Coulomb tails of the parent system and behave as independent particles driven by an external field and, therefore, are described by the Volkov Hamiltonian ĤV(r, t) = (1/2)[−i∇ + A(t)/c]². We express the field with a time-dependent vector potential A(t) in the dipole approximation, i.e., by discarding the spatial dependence of the field, A(r, t) ≈ A(t). The advantage of this approach is provided by the fact that the time-dependent Schrödinger equation associated with the Volkov Hamiltonian can be solved analytically and that the solutions, the Volkov waves,

(45)

with

(46)

are eigenstates of the momentum operator. We can, therefore, expand the Kohn–Sham orbitals φi as a superposition of detector states in the form of Volkov waves

(47)

and obtain the photoelectron probability in terms of the expansion coefficients by summing up the contributions of all the orbitals: P(p) = lim_{t→∞} 2 Σ_{i=1}^{N/2} |bi(p, t)|².

Using the continuity equation, t-SURFF allows us to express the coefficients bi, and thus P(p), as a time integral of the photo-current flux through the surface S separating the domains of the two Hamiltonians. More specifically,

(48)

with the single-particle current density operator matrix element given by

(49)

The flexibility offered by the definition of the boundary surface S allows us to easily adapt t-SURFF to periodic and non-periodic systems. For periodic systems, we choose S as a plane parallel to the material surface, while for the non-periodic case, the most natural choice is a sphere, as shown in Figs. 11(a) and 11(b).
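The essence of the flux accumulation in Eq. (48) can be illustrated in a field-free one-dimensional toy model, where the Volkov waves reduce to plane waves and a free Gaussian wave packet plays the role of an emitted electron (prefactors and normalizations are schematic, and the example is not the Octopus implementation).

```python
# 1D field-free toy version of the t-SURFF flux accumulation of Eq. (48): a
# free Gaussian wave packet crosses the analyzing point x_s and the
# coefficients b(k, t) are accumulated from its flux through that point,
# recovering the packet's momentum distribution (prefactors are schematic).
import numpy as np

nx, L = 2048, 400.0
x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
dx = x[1] - x[0]
kgrid = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)

k0, sigma, x0, xs = 1.0, 5.0, -50.0, 20.0            # packet momentum/width, start, surface position
psi0 = np.exp(-((x - x0) ** 2) / (2 * sigma**2) + 1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)
psik0 = np.fft.fft(psi0)

i_s = np.argmin(np.abs(x - xs))                      # surface grid index
kdet = np.linspace(0.2, 2.0, 200)                    # detector momenta
b = np.zeros_like(kdet, dtype=complex)

dt, nsteps = 0.25, 800
for n in range(nsteps):
    t = n * dt
    psik = psik0 * np.exp(-1j * kgrid**2 * t / 2.0)  # exact free propagation in momentum space
    psi_t = np.fft.ifft(psik)
    dpsi_t = np.fft.ifft(1j * kgrid * psik)          # spatial derivative via FFT
    volkov = np.exp(1j * (kdet * x[i_s] - kdet**2 * t / 2.0))   # field-free "Volkov" (plane) waves
    flux = 0.5 * np.conj(volkov) * (kdet * psi_t[i_s] - 1j * dpsi_t[i_s])
    b += flux * dt                                    # accumulate the surface flux over time

# once the packet has fully crossed x_s, |b(k)|^2 peaks near k0 = 1.0
print("momentum distribution peaks at k ≈", kdet[np.argmax(np.abs(b) ** 2)])
```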

FIG. 11.

Calculated photoelectron spectrum with t-SURFF, as implemented in Octopus.136 The geometries employed to calculate the flux of the photoelectron current are depicted in (a) for periodic and (b) for non-periodic systems. (c) Pump–probe ARPES spectrum of monolayer h-BN driven by a laser field resonant with the gap at K, adapted from Ref. 134. (d) Photoelectron angular distribution obtained by strong-field ionization of C60 with an IR field polarized along x, adapted from Ref. 136. All the electrons rescattering at the same time in one period of the field end up with final momenta forming a ring (dashed) centered at the value of A(tr) at the rescattering time, tr. The two arrows represent the graphical decomposition of the final momentum of the photoelectron in the vector potential at the moment of rescattering (horizontal arrow) plus the rescattering momentum.


For periodic systems, one can apply the Bloch theorem to the Volkov Hamiltonian and further reduce Eq. (48) to an expression where only the periodic part of the Bloch waves is employed.134 

The present computation of the photoelectron spectrum relies on the condition that the Volkov Hamiltonian can be used to describe electrons in a region of space and entails the following approximations: (i) the ionized electrons become non-interacting at some distance from the system and (ii) they are free with respect to the tail of the Coulomb potential of the system, i.e., it has to be possible to neglect it at some distance. Both conditions are controlled by the placement of the analyzing surface S for the evaluation of the flux, and in practice, this has to be converged by placing it at successively larger distances from the system. This method needs an explicit description of the vacuum around the probed system, which for solids requires an explicit construction of the electronic structure of the surface layer, and if bulk properties are probed, one needs to converge a slab of material. Compared to unit cell calculations of solids, this puts the present method at a disadvantage. However, when aiming at simulating specific experiments, it natively includes signals coming from the surface layers and can thus capture more of the experimental reality of the spectroscopic process compared to standard perturbative many-body approaches that are directly performed in the bulk unit cell.

In practical calculations, in order to avoid spurious reflections of electrons from the boundaries of the simulation box, one has to employ absorbing boundaries, and depending on how transparent they need to be, this can add to the size of the simulation box.137 Other than the increased size of the computational box, this method is not computationally costly, as it requires the evaluation of the gradient operator on the surface points S only and straightforwardly supports the existing k-point, state, and mesh parallelizations.

Pump–probe configurations can be simulated naturally since there is no restriction on the functional form of the vector potential A(t), and therefore, we can accommodate any linear combination of pulses. As an example, in Fig. 11(c), we show the result of a simulated pump–probe time-resolved ARPES measurement of monolayer hexagonal boron nitride (hBN) driven by a field resonant with the gap at the K-point in the Brillouin zone. As is apparent from the simulation, the excited-state population transferred to the conduction band is clearly observed in the resulting ARPES spectrum.

The method is not limited to simple photoelectron spectroscopy but can be used to simulate complex experimental techniques, such as the reconstruction of attosecond beating by interference of two-photon transitions (RABBITT).138 Furthermore, owing to the versatility of the real-time description in Octopus, the underlying excitation does not need to originate from an optical pulse (i.e., an external vector potential), and since the ions can move according to TDDFT-Ehrenfest dynamics,139 one can also simulate features coming from the ionic or lattice motion, specifically the signature of electron–phonon coupling in ARPES.140 Pump–probe simulations are not limited to the dynamical process of excitations but can also be employed to study steady-state modifications of driven electronic states.141 This opens the possibility of studying Floquet physics from an ab initio perspective and of simulating directly the effect that a periodic force has on the dressed electronic structure—an important aspect to underline, given the growing interest in the field of Floquet engineering142 and Floquet analysis.143

Finally, t-SURFF is particularly suited to simulating strong-field ionization processes such as in laser-induced photoelectron diffraction (LIED) experiments, where the ionization takes place by direct tunneling into the continuum and the field is so strong as to drive electrons along trajectories recolliding with the parent system. In fact, since in Eq. (48) we accumulate the flux through S over the time of the propagation, with t-SURFF, electrons can seamlessly flow back and forth through the surface without producing any artifact in the final spectrum. As an example, in Fig. 11(d), we present the photoelectron angular distribution of C60 ionized by a strong IR pulse capable of inducing rescattering dynamics, which then gets imprinted in the photoelectron spectrum as characteristic rings centered at the value of the vector potential at the instant of rescattering.144 In this regime, the results of simulations obtained with Octopus are in excellent agreement with the experiments.145,146

Electrical conductivity in real materials can be described to a first approximation by Ohm’s law, which is a linear relationship between the applied electric field and the current density generated in response. The standard method to study electrical and thermal conductivity is to carry out a DFT calculation at equilibrium and apply Kubo–Greenwood linear-response theory to evaluate the conductivities. Although this method has been used successfully in the past to calculate the transport properties,147,148 the application is limited to systems in the linear response regime. It is known that in many cases, especially in the presence of strong external fields, Ohm’s law is no longer valid and the system exhibits non-linear behavior. In order to capture this complex behavior in materials, one must go beyond simple, linear approximations and describe electron interactions in a non-trivial manner.

One promising route to go beyond linear response is to use density functional methods to study directly thermoelectric transport. This topic has received less attention but has recently been implemented in Octopus and applied to liquid aluminum.149 For detailed descriptions and derivations of non-equilibrium thermoelectric phenomena using density based methods, we refer the reader to Ref. 150.

In Octopus, we are able to calculate the current density and the heat current density at each step during a time-dependent simulation. In this study, a current is induced by applying an electric field of the form E(t) = E0δ(t), where E0 describes the magnitude and direction of the field. The electric field is introduced through a time-dependent vector potential [E(t) = −c−1∂tA(t), where c is the speed of light] in the Hamiltonian. This vector potential satisfies the periodic boundary conditions of an extended system. It should be noted that there is no limitation on the form of the applied electric field in general. We are able to evaluate the macroscopic current density as

(50)

where Ω is the volume of the unit cell. The energy current density, J^h(r,t), is expressed as

(51)

where J^t(r) is the kinetic energy contribution given as

(52)

and φ = φ(r, t) are understood to be the time-dependent states. The output of the time-dependent simulation can be further analyzed to yield the frequency-dependent conductivity. The electrical (or thermal) conductivity σ can be found by Fourier transforming the corresponding current as

(53)

In general, it is possible to study the conductivity of an extended system in the presence of an applied electric field. Here, we illustrate the use of this method on hydrogen at 1400 K and 400 GPa. At this temperature and pressure, hydrogen is in its liquid metallic phase. A molecular dynamics simulation was carried out with VASP151 to produce several ionic snapshots of the system. For each ionic snapshot, we performed a TDDFT simulation with a time step of 0.05 a.u. (0.00121 fs) for a total time of 3 fs. The initial electric field E0 was 0.1 a.u. The calculation was carried out on a 3 × 3 × 3 k-point grid to ensure that the current density decays to zero.

In Fig. 12, we plot the time-dependent current density and heat current density as a function of time after an initial electric field is applied at time zero. Both currents are shown to decay to zero by the end of the simulation. It should be noted that the heat current corresponds to the kinetic energy contribution given in Eq. (52). In general, this heat current is related to the Peltier coefficient, since it describes the heat generated from the response to an electric current. The remaining contributions in Eq. (51) are not yet implemented in Octopus. Both Ju and Jf arise from electron–electron interactions and Jv is the potential energy. In future work, it would be interesting to evaluate these remaining contributions to determine if they are large or can reasonably be ignored for some systems. The electrical conductivity and the heat conductivity can be evaluated using Eq. (53). These frequency-dependent conductivities are shown in Fig. 12 (right). One may also calculate the DC conductivity by taking ω = 0.
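For a delta-kick field, the post-processing of Eq. (53) is particularly simple, since the Fourier transform of E(t) = E0δ(t) is the constant E0; a minimal sketch with a synthetic current trace (standing in for the computed J(t)) reads as follows.

```python
# Sketch of the post-processing in Eq. (53) for a delta-kick field
# E(t) = E0*delta(t): the Fourier transform of the kick is the constant E0, so
# sigma(omega) = J(omega) / E0. A damping window regularizes the finite-time
# transform. The current trace below is synthetic, standing in for Eq. (50).
import numpy as np

dt, nsteps, E0 = 0.05, 4096, 0.1
t = np.arange(nsteps) * dt
J = E0 * (np.exp(-t / 5.0) + 0.2 * np.cos(0.8 * t) * np.exp(-t / 8.0))  # Drude-like decay + oscillation

window = np.exp(-t / 20.0)                       # damping to enforce decay at the end of the trace
sigma = np.fft.rfft(J * window) * dt / E0
omega = 2.0 * np.pi * np.fft.rfftfreq(nsteps, d=dt)
print("DC conductivity sigma(omega = 0) ≈", sigma[0].real)
```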

FIG. 12.

(Left) Time-dependent current density for one ionic snapshot of a supercell of 128 atoms of hydrogen in the liquid phase at 1400 K and 400 GPa with an initial electric field E0 of 0.1 a.u. The blue line represents the current density as given in Eq. (50), and the red line is the heat current evaluated from Eq. (52). Both currents are decaying to zero. (Right) The Fourier transform of the respective currents gives the frequency-dependent conductivities.


This suite of tools implemented in Octopus will allow for the study of time-dependent thermoelectric phenomena using TDDFT. This approach to study current and conductivity is more general and widely applicable than the standard Kubo–Greenwood approach. While Kubo–Greenwood is a linear response theory, this method can be applied to study materials where non-linear conduction effects can appear. In a recent study, this TDDFT method was used to illustrate non-linear conductivity effects in liquid aluminum,149 which would not be possible by applying standard linear response theory. In the future, we plan to extend these tools by implementing a thermal vector potential that can induce a heat current in an extended system during a time-dependent simulation.

Usually, we are interested in studying systems consisting of different atoms, molecules, regions, or domains. In such cases, we might want to understand the contributions of the different parts of the system to a specific observable. Based on the Hohenberg–Kohn theorems of DFT152 and their extension to TDDFT, the Runge–Gross theorem,153 we can state that any physical observable of the system is a functional of the electronic density, either static (ground state) or time dependent. In other words, the expectation value of an operator Ô can be expressed as a functional of the electronic density

(54)

Let us now consider the case where we write the total electronic density as a sum of N densities

(55)

and no further constraints are imposed on n or ni. Such partitioning of the density is at the basis of subsystem DFT, which allows us to divide a system into several Kohn–Sham subsystems that interact with each other in a theoretically well justified manner (see Ref. 154 and references therein). However, in our case, we are only interested in this kind of partition for post-processing purposes where we assign a density ni to a specific part of the system in order to identify how it contributes to a given observable. We are, thus, interested in operators for which the following condition is true:

(56)

Such operators are said to be additive. One example of an additive operator that is particularly relevant in the context of TDDFT is the time-dependent dipole

(57)

as this is the relevant observable for obtaining the optical absorption cross section of finite systems.

The range of operators that fulfill Eq. (56) can be further expanded if we now consider non-overlapping densities, that is, densities ni that are non-zero only within a given domain Vi, with the different domains not overlapping. We refer to this type of partitioning as a local domain partitioning. In such cases, any (semi-)local operator, i.e., one that depends (semi-)locally on the density, should fulfill Eq. (56). In this context, an example of a relevant observable is the exchange–correlation energy within the LDA (or GGA) approximation

(58)

where excLDA is the exchange–correlation energy per particle.

Currently, several different options can be used in Octopus to define specific regions of the simulation box. These options include simple geometric shapes, such as spheres centered on particular points in space, atom-dependent domains, such as a union of spheres centered on the atoms, or the definition of Bader volumes. The latter option follows the Quantum Theory of Atoms in Molecules (QTAIM)155 and associates a density region with a given atom through a density gradient path such that the boundaries of each volume are defined as the surfaces through which the charge density gradient has zero flux. Note that some of these options allow the user to specify overlapping regions. In this case, the overlap must be taken into account when analyzing the results, and extra care is required when comparing results from different domains.

Let us exemplify the applicability of the local domain partitioning by decomposing the optical response for a coupled chromophore system. Similar to Frozen Density Embedding real-time TDDFT (FDE-rt-TDDFT),156 we decompose the total optical spectrum as a sum of the local response within each domain

(59)

where α is the dynamic polarizability. In addition, from each local dynamic polarizability tensor, we can compute for each fragment the corresponding cross section. This is justified by the fact that the relevant operator to calculate α is the dipole operator, and as shown above, this is an additive operator that fulfills Eq. (56).

As an example, we performed a series of optical spectrum simulations using real-time TDDFT for a benzene–fulvene dimer with π-stacking separations ranging from 4 Å to 8 Å. We use the standard PBE exchange–correlation functional157,158 and the optimized norm-conserving Vanderbilt PBE pseudopotential (sg15) set.159 The real-space grid is defined as a parallelepiped box with lengths of 16 Å, 17 Å, and 20 Å along the three Cartesian axes. The spacing between grid points is set to 0.13 Å.

For each intermolecular separation, we carry out a single ground-state calculation and three time-propagations. For each time-propagation, a dipolar electric perturbation is applied along one of the Cartesian axes.125 We let the perturbed Kohn–Sham states evolve for a propagation time of T = 24 ℏ/eV (15.8 fs), which yields an energy resolution of 0.26 eV (2πℏ/T). The ground-state electron density is fragmented following the Bader atomic decomposition, and each molecular domain is defined as the sum of its atomic volumes. Then, the corresponding dipole operator is applied over each defined domain. Finally, by Fourier transforming the local time-dependent dipole moment, the local polarizability tensor is recovered.
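The essential steps of this analysis can be sketched on a one-dimensional toy grid with two non-overlapping masks standing in for the Bader volumes (the densities, masks, and prefactors below are invented for illustration only).

```python
# Sketch of the local-domain dipole analysis: two non-overlapping masks stand
# in for the Bader volumes of the two molecules; the total time-dependent
# dipole is recovered exactly as the sum of the domain dipoles, and each
# domain dipole is Fourier transformed separately (synthetic 1D data).
import numpy as np

x = np.linspace(-10.0, 10.0, 400)
dx = x[1] - x[0]
dt = 0.05
t = np.arange(2000) * dt

# synthetic time-dependent density: two "molecules" oscillating at different frequencies
n_A = np.exp(-((x + 4.0) ** 2))[None, :] * (1.0 + 0.01 * np.sin(0.5 * t)[:, None] * x[None, :])
n_B = np.exp(-((x - 4.0) ** 2))[None, :] * (1.0 + 0.01 * np.sin(0.8 * t)[:, None] * x[None, :])
n_tot = n_A + n_B

mask_A, mask_B = x < 0.0, x >= 0.0                   # non-overlapping domains V_A and V_B
dip = lambda n, m: -np.sum(n[:, m] * x[m] * dx, axis=1)   # electronic dipole restricted to a domain

d_A, d_B = dip(n_tot, mask_A), dip(n_tot, mask_B)
d_tot = dip(n_tot, np.ones_like(x, bool))
print("additivity check (Eq. (56)):", np.allclose(d_A + d_B, d_tot))

alpha_A = np.fft.rfft(d_A - d_A[0])                  # local dynamical response (up to prefactors)
omega = 2.0 * np.pi * np.fft.rfftfreq(len(t), d=dt)
print("domain A responds at omega ≈", omega[np.argmax(np.abs(alpha_A[1:])) + 1])
```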

Figure 13 shows the spectral decomposition of the benzene–fulvene system as a function of the intermolecular separation. The full system at large intermolecular separation (8 Å) shows two major peaks located at the same frequencies as for the isolated molecules. As the intermolecular distance becomes smaller, the global spectrum changes due to electrostatic effects induced by the electronic cloud of the neighboring molecule. We can see that the peak located at around 5 eV undergoes a red-shift, while the stronger peak between 6.5 eV and 7 eV loses intensity, giving rise to an oscillator-strength transfer to excited states at higher energy. The local domain analysis reveals that the major change is caused by the reduction of the oscillator strength of the fulvene peak located at around 6.5 eV.

FIG. 13.

(Top) Schematic representation of the benzene–fulvene dimer with π-stacking along the z axis and an intermolecular distance of 4 Å. From left to right, the electronic density of the global system for an isosurface of 0.006 Å−3 and the QTAIM local densities of the fulvene and benzene molecules, respectively. Visualized with the UCSF Chimera software.160 (Bottom) Photo-absorption cross section obtained from, from left to right, the full system density, the fulvene local density, and the benzene local density.


These results are in agreement with the FDE-rt-TDDFT calculations of Ref. 156, further validating our strategy. In addition, the local domain methodology does not require any previous fragmentation, selection, or localization of the basis set, allowing us also to treat very large systems, such as biological molecules like the major light harvesting complex LHC-II.161 Recently, we have shown that this technique also allows for exciton coupling calculations when combined with a new formulation of the transition densities from real-time TDDFT.162 

At the core of most Octopus calculations lies the need to integrate the time-dependent Kohn–Sham (TDKS) equations. These are a set of non-linear equations, given the dependence of the Kohn–Sham Hamiltonian on the electronic density. Upon the discretization of the electronic Hilbert space, they take the generic form of a system of first-order ordinary differential equations (ODEs)

(60)
(61)

where t0 is the initial time, φ is an array containing all the Kohn–Sham orbitals, {φm}m=1,…,N (φ0 is its initial value), and f is a vector function, f = (f1, …, fN), given by the action of the Kohn–Sham Hamiltonian Ĥ[n(t), t],

(62)

Note that (i) if the nuclei are also to be propagated, the state should be supplemented with their position and momenta, and the equations with the corresponding nuclear equations of motion. (ii) Likewise, the formalism for solids includes a polarization vector field22 that must also be included in the system definition and propagation. (iii) Strictly speaking, the TDKS equations are not ordinary but “delay differential equations” (i.e., equations for which the derivative of the unknown function at a certain time depends on the function values at previous times), due to the dependence of the exact exchange and correlation potential functional on the past densities. This is ignored in the adiabatic approximation, which is almost always assumed.

A myriad of numerical propagators for ODEs are available, all of them theoretically applicable to the TDKS equations. Ideally, however, one should choose a propagator that respects all the mathematical properties of the equations that describe the problem at hand, for example, the preservation of the norm of the orbitals. In the case of the TDKS equations in the adiabatic approximation, another such property is symplecticity (a fact that is demonstrated in Ref. 163). This property is usually stated in terms of the conservation of the volume of the flow of the system of ODEs in the phase space, although the precise definition is as follows:

Any system of ODEs can be viewed as a “flow,” a differentiable map g: ℝp → ℝp that transforms the state y(t0) at some point in time t0 into the state at time t, y(t) = g(y(t0)). For systems described with complex variables such as ours, since each variable can be split into a real and an imaginary part, p is even: g: ℝ2N → ℝ2N. A map with an even number of variables such as this one is defined to be symplectic if and only if

(63)

where I is the unit matrix of dimension N and x ∈ ℝ2N (conventionally, the first N variables of x are called the “coordinates” and the second half are the “momenta,” but no physical meaning should be assumed for them in this purely mathematical definition). A numerical propagator is also a differentiable map, which relates the solution y(t) to y(t + Δt). As such, it may or may not respect the symplecticity—and other properties—of the original flow.
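As a concrete illustration of the symplecticity condition of Eq. (63), the following sketch checks M^T S M = S for two one-step maps applied to a one-dimensional harmonic oscillator (a textbook example, unrelated to the TDKS equations themselves).

```python
# Check of the symplecticity condition of Eq. (63) for two one-step maps
# applied to a harmonic oscillator (q, p): the semi-implicit (symplectic)
# Euler update satisfies M^T S M = S exactly, the explicit Euler update does not.
import numpy as np

dt = 0.1
S = np.array([[0.0, 1.0], [-1.0, 0.0]])               # canonical symplectic form for N = 1

# explicit Euler: q' = q + dt*p, p' = p - dt*q
M_euler = np.array([[1.0, dt], [-dt, 1.0]])
# symplectic (semi-implicit) Euler: p' = p - dt*q, then q' = q + dt*p'
M_sympl = np.array([[1.0 - dt**2, dt], [-dt, 1.0]])

def is_symplectic(M):
    return np.allclose(M.T @ S @ M, S)

print("explicit Euler symplectic:  ", is_symplectic(M_euler))   # False
print("symplectic Euler symplectic:", is_symplectic(M_sympl))   # True
```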

The symplecticity also means that the system has to be “Hamiltonian”: one may find a set of coordinates q ∈ ℝN, p ∈ ℝN (in this case, the real and imaginary parts of the orbital coefficients) and some scalar function H(q, p) (the Hamiltonian expectation value), which permit us to rewrite the system in the well-known form of Hamilton’s equations

(64a)
(64b)

The relevance of symplectic numerical propagators stems from the fact that they present some favorable features, such as better conservation of the energy at long times, where the value of the energy ends up oscillating around the true value instead of diverging. For a more detailed discussion of symplecticity in numerical propagators, see Ref. 164.

The Octopus code has several propagator options, many of them already described in Ref. 165, such as the Crank–Nicolson scheme, standard Runge–Kutta schemes, the exponential midpoint rule (EMR) and variations thereof (e.g., the “enforced time-reversal symmetry” scheme), and split-operator techniques. In a recent paper,163 some of the present authors studied propagators from families that had been scarcely (or not at all) tested for the TDKS equations: multistep, exponential Runge–Kutta, and commutator-free Magnus (CFM) expansions. After considering the accuracy, stability, and performance of the propagators, the CFM techniques were identified as suitable schemes for TDDFT problems and implemented in Octopus. In this section, we give a brief description of them.

Developed in Ref. 166 for linear non-autonomous systems, the CFM expansion offers an alternative to the “standard” Magnus expansion, which requires the expensive application of nested commutators of the Hamiltonian with itself at different times. In essence, a CFM expansion Γ(t + Δt, t) consists of substituting the propagator Û(t + Δt, t) with a product of exponentials

(65)

The m linear operators D^i are either the Hamiltonian at different times within the interval [t, t + Δt] or parts of it.

We implemented in Octopus an order four (q = 4) version, given, for example, in Eq. (43) of Ref. 166, which we call hereafter CFM4. This propagator requires two exponentials (m = 2),

(66)

where

(67)
(68)

Ĥ[t1] and Ĥ[t2] are the Hamiltonians at times t1 and t2, which are, in fact, unknown because they depend on the Kohn–Sham states through the density: we are dealing with a non-linear problem, whereas the CFM expansions were developed for linear systems. We have various options to extend them to our non-linear problem: for example, one could define Ĥ[t1] and Ĥ[t2] as Hamiltonians interpolated from Ĥ[t] and Ĥ[t + Δt], in which case we would end up with an implicit equation for φ(t + Δt) that we would have to solve at a substantial cost.

The alternative that we implemented, however, is to approximate Ĥ[ti] via an extrapolation from the Hamiltonian at various previous time steps (in practice, it is the Hartree, exchange, and correlation parts that must be extrapolated). The resulting method is then explicit, i.e., no linear or non-linear algebraic equations need to be solved. The fourth order accuracy is preserved as long as the extrapolation is also done at order four.
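A compact sketch of a single CFM4 step for a toy time-dependent Hamiltonian is given below; the Gauss–Legendre nodes and exponential coefficients quoted in the comments are the ones commonly associated with this scheme and should be taken as assumptions, the precise expressions used in Octopus being those of Eqs. (66)–(68).

```python
# Sketch of one fourth-order commutator-free Magnus (CFM4) step for a toy
# time-dependent Hamiltonian H(t) = H0 + sin(t) V (the nodes and coefficients
# below are the ones commonly quoted for this scheme and are an assumption,
# not a transcription of the Octopus implementation).
import numpy as np

def expm_herm(M):
    """exp(-i M) for a Hermitian matrix M via diagonalization."""
    w, U = np.linalg.eigh(M)
    return (U * np.exp(-1j * w)) @ U.conj().T

rng = np.random.default_rng(2)
dim = 6
H0 = rng.standard_normal((dim, dim)); H0 = (H0 + H0.T) / 2
V = rng.standard_normal((dim, dim)); V = (V + V.T) / 2
H = lambda t: H0 + np.sin(t) * V

def cfm4_step(psi, t, dt):
    c1, c2 = 0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6     # Gauss-Legendre nodes
    a1, a2 = 0.25 - np.sqrt(3) / 6, 0.25 + np.sqrt(3) / 6   # exponential coefficients
    H1, H2 = H(t + c1 * dt), H(t + c2 * dt)
    psi = expm_herm(dt * (a2 * H1 + a1 * H2)) @ psi         # first exponential (weights earlier node)
    return expm_herm(dt * (a1 * H1 + a2 * H2)) @ psi        # second exponential (weights later node)

def propagate(dt, T=2.0):
    psi = np.zeros(dim, complex); psi[0] = 1.0
    for n in range(int(round(T / dt))):
        psi = cfm4_step(psi, n * dt, dt)
    return psi

ref = propagate(1e-4)                                        # fine-step reference solution
for dt in (0.1, 0.05):
    print(f"dt={dt}: error = {np.linalg.norm(propagate(dt) - ref):.2e}")
# halving dt should reduce the error by roughly 2^4 = 16 for a fourth-order scheme
```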

As an example, Fig. 14 shows the performance of this CFM4 method against the well-known exponential midpoint rule (EMR). The benchmark system consists of a benzene molecule, which is subject to an instantaneous perturbation at the beginning of the propagation and then evolves freely for some fixed interval of time. The propagations are performed at varying values of Δt. The plot displays the cost of the propagation vs the accuracy achieved in each case. As we can see, the CFM4 outperforms the EMR for all values of the error examined, although the differences become more obvious when higher accuracy is demanded. The key difference between the two methods is the fourth-order accuracy of the CFM4 scheme (at the cost of requiring two exponentials instead of just one), while the EMR is only second-order accurate. This is reflected in the different slopes of the curves as the error becomes smaller, i.e., as Δt → 0.

FIG. 14.

Wall-time computational cost of the method (in seconds) against the error in the wave function [defined as Error(T, Δt) = Σm ||φm(T) − φm^exact(T)||², where the φm^exact are the values of the Kohn–Sham orbitals computed using the explicit fourth-order Runge–Kutta integrator with a very small time step] for the EMR and CFM4 propagators.


The RDMFT implementation in Octopus has been described in detail in a previous paper.19 In this section, we review briefly the existing implementation, providing some new insights, and introduce a recently implemented method to solve the RDMFT equations, which is better suited to the real-space grids used in the code.

The optimization of the natural orbitals in RDMFT is subject to an orthonormalization constraint for the orbitals. One minimizes the functional

(69)

where Etotal and M denote the total energy and the number of natural orbitals ψj included in the calculation, respectively. The parameters λjk are the Lagrange multipliers, which ensure the orthonormality of the natural orbitals at the solution point. We note that, for an RDMFT calculation, the number of natural orbitals has to be larger than the number of electrons in the system (core electrons that are included in the pseudopotential do not count). The exact number M is system dependent and should be treated as an additional parameter with respect to which a convergence study needs to be carried out, just as is done for the basis set or the grid parameters. Typically, the optimization is performed using the so-called Piris method,167 which was the method previously implemented in Octopus. Within this approach, one uses the orthonormality constraint of the natural orbitals, which implies that a certain matrix constructed from the Lagrange multipliers λjk is diagonal at the solution point. As an immediate consequence of the Piris method, the natural orbitals at the solution point are linear combinations of the orbitals used as a starting point for the minimization. In other words, the initial orbitals serve as a basis. Consequently, the necessary matrix elements for the different energy contributions can be calculated for the basis functions (initial orbitals) before the iterative optimization of the natural orbitals and their occupation numbers is started. In addition, the optimization of the occupation numbers can be turned off completely, resulting in a Hartree–Fock calculation in the basis of the initial orbitals.

For the existing implementation in Octopus of the Piris method, the initial orbitals are taken to be the solutions obtained with a different level of theory, such as independent particles or DFT. In order to better understand the effect of the choice of basis, we tested the following choices: (i) independent particles, density functional theory within (ii) the local density approximation (LDA) or (iii) the exact exchange (EXX) approximation, and (iv) the Hartree–Fock approximation. In all cases, we have to ensure that the number of unoccupied states in the calculation is sufficient to cover all the natural orbitals that will obtain significant occupation in the following RDMFT calculation. The results for the convergence of the total energy of a one-dimensional (1D) hydrogen molecule using the Müller functional168 are given in Fig. 15. The calculations were performed on a 1D grid extending from −12.0 bohr to 12.0 bohrs, with a grid spacing of 0.03 bohr. The nuclear potential for the 1D hydrogen molecule reads

(70)

with d = 1.628 bohr, which corresponds to the equilibrium geometry. The electron–electron interaction in one dimension is described by the soft-Coulomb interaction

(71)
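For reference, the model system described above can be set up in a few lines; the sketch below is an illustration only (not an Octopus input), and the soft-Coulomb softening length a = 1 bohr is an assumption of this sketch rather than a value taken from the text:

```python
import numpy as np

# 1D model of H2 on a grid from -12 to 12 bohr with 0.03 bohr spacing,
# two protons separated by d = 1.628 bohr, soft-Coulomb interactions.
a = 1.0                                   # assumed softening length (bohr)
d = 1.628                                 # internuclear distance (bohr)
x = np.arange(-12.0, 12.0 + 1e-12, 0.03)  # real-space grid (bohr)

def soft_coulomb(r, a=a):
    return 1.0 / np.sqrt(r**2 + a**2)

# External (nuclear) potential: two attractive soft-Coulomb centers at +/- d/2.
v_ext = -soft_coulomb(x - d / 2) - soft_coulomb(x + d / 2)

# Electron-electron interaction on the grid, w(x, x') = 1/sqrt((x - x')^2 + a^2).
w_ee = soft_coulomb(x[:, None] - x[None, :])
print(v_ext.shape, w_ee.shape)
```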
FIG. 15.

Total energy for the RDMFT calculation for one-dimensional H2169 using the Piris method with different basis sets and using the conjugate gradient implementation. The inset shows a zoomed-in view of the area where convergence is reached. We employ the Müller functional168 for all calculations.

Since Octopus performs all calculations on a finite grid, the calculation that provides the basis set typically yields a finite number of bound states plus additional orbitals that extend over the whole grid, i.e., orbitals that are unbound and, therefore, delocalized. However, all natural orbitals with non-zero occupation are localized on the system, because their asymptotic decay is governed by the ionization potential of the system.170 Hence, the extended basis states contribute only with very small coefficients, if at all, and their inclusion in the basis set does not lead to a significant improvement of the results. In the past, this problem was addressed by performing an additional step to localize the initial states before starting the RDMFT calculation. However, further investigations showed that this only improves the results when a small number of natural orbitals is included in the calculation. Testing the convergence with respect to the number of natural orbitals then showed that the additional localization step actually slows down the convergence with respect to the number of basis functions. The fastest convergence and lowest total energies are obtained by using the results from an independent-particle calculation, as those yield the largest number of localized orbitals of all the basis sets that were tested, as shown in Fig. 15. As the additional localization step proved to be unnecessary and even hinders convergence, it has been removed from the new version of the code.

Since the natural way of representing quantities in Octopus is directly on the real-space grid, and to circumvent the limitations in the quality of the basis sets available for the Piris method, we decided to implement a conjugate gradient optimization of the natural orbitals. This implementation follows the procedure for DFT explained in Ref. 171, which we adapted for RDMFT. This procedure allows us to take advantage of the full flexibility of a real-space grid and provides a systematic way of improving the results by enlarging the grid and reducing the spacing between grid points. The conjugate gradient algorithm requires a set of initial orbitals to start the self-consistent calculation; however, at convergence, the results are independent of that starting point. Therefore, while a calculation using the Piris method requires a set of initial states that serve as the basis, the conjugate gradient algorithm can be started from an initial set of random states. In our tests of the conjugate gradient implementation, the quality of the initial states only had an influence on the number of iterations necessary for convergence, but not on the final result. We suggest using the orbitals obtained from an independent-particle calculation as initial states, since they can be obtained at a small numerical cost and can simultaneously serve as a basis set in the Piris implementation.

Since we are not solving an eigenvalue equation to obtain the natural orbitals, they are not automatically orthogonal. As mentioned above, the orthogonality is taken into account via a constraint. Compared to Ref. 171, the non-diagonal Lagrange multipliers λjk lead to two modifications in the conjugate gradient procedure. First, the steepest-descent direction [cf. Eq. (5.10) of Ref. 171] reads here

(72)

where

(73)

Second, the one parameter of the line-minimization [cf. Eq. (5.26) of Ref. 171] is changed to

(74)

where Θ parameterizes the descent in the direction of the gradient and the single-particle Hamiltonian acting on a state |ϕ⟩ is defined as

(75)

For details of the notation, we refer the reader to Ref. 171.

The convergence study with respect to the number of natural orbitals for the conjugate gradient algorithm is also included in Fig. 15. As one can see, a smaller number of natural orbitals needs to be included in the calculation than for the basis set implementations with the Piris method. As the number of natural orbitals equals the number of basis functions in these calculations, this is mostly due to the fact that the available basis sets are of rather poor quality. In addition, the converged total energy for the conjugate gradient algorithm is slightly lower than for all the basis sets using the Piris method (see the inset of Fig. 15), which shows that the conjugate gradient algorithm exploits the full flexibility of the grid implementation, allowing for contributions to the natural orbitals that are not covered by any of the basis sets. We have also verified that the converged result of the conjugate gradient method is indeed independent of the choice of initial state for the Müller functional168 employed in our calculations. The Müller functional is known to be convex when all infinitely many natural orbitals are included.172 This property is most likely not shared by all available RDMFT functionals. In addition, the number of natural orbitals in any practical calculation is always finite. Consequently, in practice, one needs to test the convergence for the different starting points, as the appearance of local minima cannot be excluded.

The electronic structure of periodic systems is usually described using plane waves, but real-space grids have been shown to be a viable alternative when performing DFT and TDDFT calculations.22,173 Unfortunately, the discretization introduced by the real-space grid often breaks the direct connection between the physical system and the basis set, as symmetries and translations are, in most cases, incompatible with the discretized grid. However, real-space grids offer many advantages, such as natural mixed periodic-boundary conditions for semi-periodic systems, and the exchange and correlation term of DFT is evaluated straightforwardly on the grid.

One of the main challenges for treating periodic systems in real space is the generation of the real-space grid and the corresponding weights for the finite differences. This becomes relevant when the grid is generated along the primitive axes of the Wigner–Seitz cell (primitive cell) of a solid, where the generating axes are usually non-orthogonal.

In Octopus, the grid points are generated along the primitive axes, and the calculation of the finite-difference weights follows the implementation described in Ref. 174. Once the grid is generated and the weights for the finite differences (gradient and Laplacian) are obtained, it is necessary to deal with the discretization of reciprocal space. Indeed, for periodic systems, the full crystal is replaced by the primitive cell of the crystal in real space, thanks to the Bloch theorem, but it is then complemented by the Brillouin zone, which also needs to be sampled. This grid, usually called the k-point grid, is common to any type of basis set, as long as one decides to reduce the crystal to its primitive cell.
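The generation of such a grid can be illustrated with a few lines of NumPy (a sketch of the idea only; the lattice, spacings, and point counts below are made up, and the finite-difference weights of Ref. 174 are not computed here):

```python
import numpy as np

# Grid points along (generally non-orthogonal) primitive axes: regular mesh of
# reduced coordinates mapped to Cartesian space through the cell matrix.
lattice = np.array([[1.0, 0.0,            0.0],   # primitive vectors as rows (bohr)
                    [0.5, np.sqrt(3) / 2, 0.0],
                    [0.0, 0.0,            1.6]])
n = np.array([10, 10, 16])                        # grid points along each axis

frac = np.stack(np.meshgrid(*[np.arange(ni) / ni for ni in n],
                            indexing="ij"), axis=-1).reshape(-1, 3)
points = frac @ lattice                           # Cartesian grid points
print(points.shape)                               # (10*10*16, 3)
```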

In order to reduce the numerical effort associated with the description of periodic systems, we make use of the symmetries to reduce the Brillouin zone to its irreducible zone, which can drastically reduce the number of k-points. The space group of the crystal and its symmetry operations are obtained using the spglib library.175 In the present implementation, we restrict the symmetries to the symmorphic ones (inversion, rotations, and mirror planes), leaving for later the so-called non-symmorphic symmetries, i.e., symmetries involving a fractional translation, as they are not compatible with arbitrary real-space grids.
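As an illustration of this reduction (independent of the interface used inside Octopus), the Python interface of spglib can be used to map a regular k-point mesh onto its irreducible points; the one-atom cubic cell below is a made-up example:

```python
import numpy as np
import spglib

# Made-up structure: one atom in a simple cubic cell of 4 bohr.
lattice = np.eye(3) * 4.0
positions = [[0.0, 0.0, 0.0]]
numbers = [1]
cell = (lattice, positions, numbers)

# Map an 8x8x8 Monkhorst-Pack-like mesh onto its irreducible wedge.
mesh = [8, 8, 8]
mapping, grid = spglib.get_ir_reciprocal_mesh(mesh, cell, is_shift=[0, 0, 0])
n_irreducible = len(np.unique(mapping))
print(f"{np.prod(mesh)} k-points reduced to {n_irreducible} irreducible ones")
```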

In order to assess the validity of our implementation, we used the so-called delta-factor test.176 Using the Schlipf-Gygi ONCVPSP 2015 pseudopotential set, we obtained a value of 1.50 meV/atom, which is very close to the value obtained by other codes using the same pseudopotential set.

When investigating the electron dynamics driven by a laser field or by any other symmetry-breaking perturbation (vector potentials, kicks with a finite momentum, strain, etc.), some of the original symmetries are lost. To deal with this, Octopus finds the small group of the original space group of the solid, retaining only those symmetries that leave the perturbation direction invariant. This defines the symmetries that are used in a time-dependent calculation.

One important aspect when using symmetries is the real-space symmetrization of the charge and current densities, as well as of other observables, such as the kinetic-energy density. Complementing the reduction of the Brillouin zone to its irreducible zone, we therefore implemented a real-space symmetrization. We found that performing this symmetrization is very important to achieve accurate and stable numerical results when symmetries are taken into account.
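A minimal sketch of the idea of real-space symmetrization is given below: the density is averaged over a set of operations that map the grid onto itself, which makes the result exactly invariant under those operations. The 2D array and the index-flip "operations" are purely illustrative and are not how Octopus represents symmetry operations internally.

```python
import numpy as np

rng = np.random.default_rng(0)
density = rng.random((32, 48))           # stand-in for a density on a 2D grid

operations = [
    lambda n: n,                         # identity
    lambda n: n[::-1, ::-1],             # inversion
    lambda n: n[::-1, :],                # mirror along the first axis
    lambda n: n[:, ::-1],                # mirror along the second axis
]

# Average over the group: the result is invariant under each operation.
density_sym = sum(op(density) for op in operations) / len(operations)
assert np.allclose(density_sym, density_sym[::-1, ::-1])
```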

For a comprehensive description of periodic systems, the ion dynamics has to be considered in addition to the electron dynamics. For isolated systems, it is often described with the TDDFT-Ehrenfest dynamics method,17,177 where the ions obey the Newton equation with force fields computed by TDDFT. In contrast, for periodic systems, the ion dynamics has to be described with two sets of equations: one is the Newton equation for ions on the reduced coordinates in the primitive cell and the other is the equation of motion for the primitive cell itself, as the ionic coordinates of periodic systems are described by the combination of the reduced coordinates with the lattice vectors of the primitive cell. The lattice dynamics is often treated with the Parrinello–Rahman method,178,179 and the key ingredient of the equation of motion is the stress tensor. Therefore, to realize the ab initio simulations for electron-ion-lattice dynamics based on TDDFT, the calculation of the stress-tensor has been implemented based on Ref. 180, and the implementation of the lattice dynamics is under way.

In order to make all the new developments and applications described in the above sections possible, the code needs to be accurate, efficient, and reliable. In addition, when a code reaches the size and complexity of Octopus (currently more than 200 000 lines of source code), the amount of time required for maintenance and for adding new features becomes considerable. Therefore, continuous efforts at optimizing, validating, and improving the source code quality are needed. These efforts are essential, but often do not get the attention they deserve. In this section, we present some noteworthy developments to improve the code reliability and efficiency, to make the project easier to maintain, and to tackle new computer architecture challenges.

In a complex code such as Octopus, every new development may have unintended side effects, possibly introducing errors into already existing features. To avoid such regressions, every change to the code is required to pass a suite of tests before being accepted. This suite is executed automatically using Buildbot181 and is integrated with the continuous integration framework of the GitLab platform, where Octopus has been hosted for several years. The testsuite is executed with 30 different toolchains spanning a variety of architectures, compilers, and Message-Passing Interface (MPI) libraries. It currently contains about 180 tests that execute Octopus roughly 740 times and that include about 12 000 comparisons to reference values. The overall coverage of the tests is determined using the codecov.io service and lies at about 71% at the moment.

Although the testsuite is efficient in avoiding regressions, analyzing why a test failed has been difficult so far due to the large amount of data generated. Thus, we implemented an interactive analysis and visualization of the testsuite results as a web application available at https://octopus-code.org/testsuite/. This application makes it easier to understand why a test failed: Is it a problem with just one toolchain, e.g., a particular architecture, or is it simply a larger numerical variation of the results? Moreover, it facilitates updating the tests and improving the testsuite itself.

The application was implemented using the web framework Django coupled to a Postgres database. The testsuite results are automatically uploaded by the Buildbot service as soon as they are available. The application allows us to analyze single toolchain runs and single comparison matches and to compare all toolchain runs for a single commit in git. It provides histograms to judge if there are outliers or if it is a broad distribution; moreover, this allows us to determine, e.g., if there is a difference between MPI and non-MPI toolchains. This application has already helped us to identify bugs causing regressions and will continue to be useful to understand failed tests, to update them, and to improve the testsuite.

To improve the reliability of ground-state calculations with Octopus, the default eigensolver used inside the self-consistent field (SCF) cycle was improved and different real-space preconditioners for the eigensolvers were evaluated.

The default eigensolver, a conjugate-gradient algorithm, has been improved over the previous implementation by now closely following Ref. 171, which greatly improves the reliability of the solver. The updated implementation differs from Ref. 171 in only one point: by default, the current band is not orthogonalized against all bands; instead, it is orthogonalized only against previously computed bands with lower energies. According to our tests, in Octopus, this leads to faster convergence in most cases.
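The orthogonalization strategy can be illustrated with a short sketch (plain NumPy, not the Octopus solver): while working on band m, the trial vector is projected out of the previously computed, lower-energy bands only.

```python
import numpy as np

def orthogonalize_to_lower(trial, bands, m):
    """Project out of `trial` the components along bands[0:m] (assumed orthonormal)."""
    for j in range(m):
        trial = trial - np.vdot(bands[j], trial) * bands[j]
    return trial / np.linalg.norm(trial)

# Tiny usage example with random vectors standing in for Kohn-Sham orbitals.
rng = np.random.default_rng(1)
bands = np.linalg.qr(rng.standard_normal((5, 5)))[0].T   # 5 orthonormal "bands"
new_trial = orthogonalize_to_lower(rng.standard_normal(5), bands, m=3)
print(np.abs(bands[:3] @ new_trial).max())   # ~0: orthogonal to the lower bands only
```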

Preconditioners for eigensolvers are an integral part of SCF calculations because they greatly accelerate convergence. They achieve this by applying, in each iteration, an approximate inverse of the Hamiltonian, which leaves the solution unchanged while bringing the iterate closer to it. As Octopus uses a real-space grid (as opposed to the plane waves used in Ref. 171), different preconditioners are needed. A range of preconditioners has been compared for a wide variety of systems, comprising molecules, semiconductors, metallic systems, and surfaces, to test convergence for very different cases.

The default preconditioner in Octopus is a low-pass filter obtained by adding the neighboring values in each dimension to the current value at a grid point, weighted by a certain factor α,

(76)

This preconditioner was first described by Saad et al.182 with α = 0.5. It proved to be the most effective preconditioner in our comparison because it is quite cheap to apply and, nevertheless, noticeably decreases the number of SCF iterations. It can also be understood as two weighted Jacobi iterations (up to a prefactor) to solve for the inverse of the kinetic term of the Hamiltonian (i.e., −½Δψ′ = ψ). From the convergence radius of the Jacobi iterations, we can conclude that the allowed values for α are between 0.5 and 1. From theoretical considerations of the damping of different spatial wavelengths during Jacobi iterations (see Secs. 4.1.3 and 4.2 of Ref. 183), the ideal value should be 0.75 to most effectively damp high spatial frequencies. In practice, we find that this depends on the system. Moreover, increasing the number of Jacobi iterations does not make the preconditioner more effective. In addition, using a single Jacobi iteration (i.e., dividing by the diagonal of the Laplacian) does not speed up the convergence significantly.
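A one-dimensional sketch of such a filter is shown below; the stencil weights are a simple illustrative choice and do not reproduce the exact expression of Eq. (76) used in Octopus.

```python
import numpy as np

def filter_preconditioner(residual, alpha=0.75):
    """Mix each grid value with its two neighbors (1D low-pass filter sketch)."""
    smoothed = alpha * residual.copy()
    smoothed[1:-1] += 0.5 * (1.0 - alpha) * (residual[:-2] + residual[2:])
    # Simplest treatment of the grid edges: boundary points are only rescaled.
    return smoothed

rng = np.random.default_rng(2)
r = np.sin(np.linspace(0, 20 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
print(np.linalg.norm(r), np.linalg.norm(filter_preconditioner(r)))  # high frequencies damped
```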

A multigrid method similar to the one implemented in GPAW184 also uses Jacobi iterations to solve for the kinetic Hamiltonian but employs different grids to reach a faster decrease in the error. Although this method reduces the number of SCF iterations, it is computationally more expensive because the Laplacian is applied several times for each iteration and is, thus, less effective than the filter preconditioner.

A preconditioner specifically targeting real-space methods was proposed by Seitsonen et al.185 It uses the ratio of the difference between the energy and the potential to the kinetic energy [their Eq. (3)], together with a preconditioning function [their Eq. (4)] originally from Ref. 171. However, this preconditioner did not speed up convergence significantly when used in Octopus.

Another way to get an approximate inverse of the Hamiltonian is to solve the Poisson equation associated with the kinetic part of the Hamiltonian. This reduces the number of iterations needed to converge the SCF calculation, but it increases the total calculation time, as each iteration is much more costly.

In summary, we find that for our real-space code, the filter preconditioner is most effective in reducing the total time needed to compute ground states because it reduces the total number of iterations without being computationally too expensive.

Nonetheless, certain states sometimes cannot be fully converged when using a preconditioner, especially in calculations with many unoccupied states. In this case, restarting without a preconditioner is needed to obtain complete SCF convergence. The reasons for this are still under investigation.

In the way Octopus was originally designed, the entire code was structured around the idea that there was only one system (albeit of arbitrary size and complexity) with an associated Hamiltonian. This was typically a combined system of electrons and nuclei, where the former were described using DFT and the latter were treated classically. All possible sorts of interactions of this system with some external source (e.g., external electromagnetic fields, solvents described with the PCM, etc.) were then added in an ad hoc fashion. Although this approach worked well for many features and applications, its limitations became evident with several recent developments, in particular, the coupled Maxwell–Kohn–Sham equations described in Sec. II, where the Kohn–Sham orbitals need to be time-propagated alongside the Maxwell fields.

This has prompted us to start a major refactoring of the code, where the full physical system is treated as several subsystems interacting with each other. Such subsystems can be electrons, nuclei, and Maxwell fields. While this framework was mainly motivated by the Maxwell–TDDFT coupling, it has been implemented in a very general fashion so that all existing features can be converted to it and that, in the future, many other developments that require the coupling of several subsystems can be based on it.

A new memory layout for storing the orbitals was introduced in Ref. 186, where all states are stored in a number of smaller batches with the innermost index being the state index instead of the real-space grid index. Thus, for all states in a batch, exactly the same operations can be executed when looping over the grid while accessing memory contiguously. This allows efficient parallelization on graphical processing units (GPUs), where all threads in a warp need to execute the same instructions, as well as vectorization on central processing units (CPUs), where one instruction can operate on several data points. To fully utilize these instructions, the kernel for computing finite differences (e.g., the Laplacian) has been specialized to explicitly use SSE, AVX, or AVX512 instructions. Now, more parts of the code have been ported to use this new layout to increase their performance, and whenever possible, the code will use this layout by default.
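The idea of the packed layout can be pictured with a small NumPy analogy (an illustration of the memory arrangement only, not the actual Octopus implementation):

```python
import numpy as np

# A batch stores several states contiguously with the state index innermost,
# so one pass over the grid applies the same operation to all states of the
# batch with contiguous memory access (good for SIMD units and GPU warps).
n_points, batch_size = 100_000, 8
rng = np.random.default_rng(3)

batch = rng.standard_normal((n_points, batch_size))   # innermost index: state
potential = rng.standard_normal(n_points)

batch *= potential[:, None]   # one grid loop updates all states in the batch
print(batch.shape)
```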

The first GPU implementation was described in Ref. 186. This was based on OpenCL and was limited to a small selection of numerical algorithms and calculation modes. These included some of the most commonly used features of the code or algorithms that were particularly suited for GPU porting, such as the residual minimization method-direct inversion in the iterative subspace (RMM-DIIS) eigensolver for ground-state calculations or the enforced time-reversal-symmetry propagator for time-dependent calculations. Since then, the GPU implementation has been expanded in several different ways. Most notably, it now supports CUDA through an additional compatibility layer. The number of supported features and algorithms has also been increasing steadily such that most time-dependent calculations can now be run efficiently on GPUs. The implementation was also expanded to support multiple devices per host. Using the packed storage format described above, Octopus is able to store the states fully in the GPU memory, provided the memory is large enough, thus reducing memory transfers to a minimum. Recently, this was improved further by removing frequent allocations and deallocations of temporary variables on the GPU, now using a custom memory management for those. This proved very effective in improving the scaling to several GPUs and also to several nodes with GPUs.

We show two examples of time-dependent runs in Fig. 16. For the first example (left panel), β-cyclodextrin was used as an input system and the simulation was run on the GPU island of the COBRA supercomputer at the Max Planck Computing and Data Facility. Each node in this island consists of two 20-core Intel Xeon 6148 Gold sockets (Skylake architecture) with two Nvidia Volta 100 GPUs (PCIe); the nodes are connected with an Omnipath fabric (100 Gbit/s). When executed on a full node with GPUs, the average time step for the example is a factor of 4.8 faster than on one full CPU node, i.e., the time to solution is reduced by a factor of 4.8. This also means that the GPU version consumes less energy: although one GPU node draws more power than a CPU node (∼950 W vs 450 W), the faster execution time reduces the overall energy to solution by a factor of 2.3 when running on a GPU node. When scaling to 2, 4, and 8 nodes, the speed-up is 1.26, 2.14, and 3.37, respectively, and the corresponding parallel efficiency is 63%, 54%, and 42%. Although the scaling is not yet perfect, it has improved considerably and also hints at the need for improving inter-node communication. For the second example (Fig. 16, right panel), a time-dependent simulation of a part of a chlorophyll complex was executed on a Supermicro server with two 8-core Intel Xeon 6134 Gold CPUs (Skylake architecture) and eight Nvidia Volta 100 GPUs interconnected with NVLink. Here, the speed-up of the average time step of the simulation when using 2, 4, and 8 GPUs is 1.95, 3.70, and 5.97, which corresponds to parallel efficiencies of 97%, 93%, and 75%, respectively. These efficiencies are much better than for the multi-node example, probably because data are only communicated within one node. For both examples, all states were stored in the GPU memory, and the parallelization was achieved by distributing the states only. Distributing the grid (i.e., parallelization over domains) is not as effective, because it leads to more frequent communication and, thus, has a larger overhead.
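The quoted parallel efficiencies follow directly from the measured speed-ups; the short check below recomputes them (small deviations from the percentages given above stem from the rounding of the quoted speed-ups):

```python
# Parallel efficiency = speed-up / number of resources (nodes or GPUs).
multi_node = {2: 1.26, 4: 2.14, 8: 3.37}      # GPU nodes on COBRA
single_node = {2: 1.95, 4: 3.70, 8: 5.97}     # GPUs within one Supermicro server

for label, data in (("nodes", multi_node), ("GPUs", single_node)):
    for n, speedup in data.items():
        print(f"{n} {label}: speed-up {speedup:.2f} -> efficiency {speedup / n:.1%}")
```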

FIG. 16.

Scaling plots for the GPU implementation of Octopus. The left panel shows speed-up for a machine with two 20-core Intel Xeon 6148 Gold sockets (Skylake architecture) and two V100 GPUs (PCIe) per node. The right panel shows speed-up for a Supermicro server with 8 V100 GPUs (NVLink).

Our current and future efforts regarding the GPU implementation are focused on two aspects. First, we are planning to enhance the existing implementation by improving the communication, possibly by overlapping computation and communication, and by optimizing the existing kernels. Second, we want to port more algorithms that are currently only implemented for the CPU version, especially for spin-dependent calculations and ground-state runs. This is a long-term effort, and the ultimate goal is to have as many features and algorithms as possible running efficiently on GPUs.

It has been almost 20 years since the development of Octopus started. During this period, the code has matured and expanded to cover an ever-growing range of methods, theories, applications, and systems. It has also adapted continuously to the available computer architectures and computing paradigms, allowing researchers to tackle increasingly challenging problems in electronic structure theory.

Although many of the code capabilities have been routinely used by many groups around the world, mainly to study electronic excited state properties and dynamics, which still remain, to a large extent, the core of Octopus, we believe the main reason for the code's success lies elsewhere. From the beginning, the code has been designed to take full advantage of the flexibility and versatility offered by the use of real-space grids and provides developers with a framework to easily implement and test new ideas and methods that can later be adopted by other codes (as has already been the case with quite a few features previously developed within Octopus). This is demonstrated by the number of new theoretical methodologies and frameworks presented in this paper to deal with non-equilibrium phenomena of complex systems and to address the combined dynamics of electrons, phonons, and photons that go beyond what can be found in other electronic structure codes. This includes several novel approaches to treat the coupling of the electronic systems to the photons, such as the coupled Maxwell–Kohn–Sham equations, the OEP approach to the electron–photon coupling, and the dressed RDMFT. Other examples include the description of magnons in real time and the orbital magneto-optical response in solids and molecules using the Sternheimer approach. In the case of magnons, supercells need to be employed, and the scalability of real-space grid methods makes this approach very promising for future applications in more complex correlated magnetic systems. We expect that many of these methods and approaches will be integrated into other electronic-structure codes and will become standard tools in the near future.

The flexibility of the code is also demonstrated by the variety of systems that it can efficiently treat, as the applications described in this paper include molecules, nanoparticles, model systems, solvents, solids, and monolayers. Particularly noteworthy is the efficient treatment of periodic systems, which traditionally have been described using plane-wave basis-sets, and this has been achieved without loss of accuracy, as shown by our results for the delta-factor test. In the end, it is our hope that combining this flexibility with a growing range of methodologies will provide researchers with the necessary tools to study new challenging phenomena, such as novel correlated materials, out of equilibrium physics, or coupled electron–boson systems.

We also discussed recent improvements in performance and scalability, with particular emphasis on GPU support. These developments are crucial in view of the new challenges that electronic-structure applications are facing with the upcoming exaflop supercomputers.

Finally, several other implementations are in the pipeline, such as Floquet and cavity QED materials engineering, multitrajectory methods to deal with nonadiabatic electron-ion dynamics, the treatment of open quantum dissipative systems, and spectroscopies with entangled photons. All of these should become available to users in the coming years, so we invite you to stay tuned to the Octopus webpage at https://octopus-code.org/.

The supplementary material contains the following: (i) Movie SM1—The nitrobenzene molecule is surrounded by the solvation cavity on which the PCM charges rest. The PCM charges are modified in an iterative scheme coupled to the KS problem to obtain the molecular ground state in equilibrium with the solvent (here, water). The size and color of the spheres reflect the PCM charge value at each point of the surface. The animation is the 3D version of Fig. 7 in the main text. (ii) Movie SM2—Time-dependent evolution of the molecular dipole of the nitrobenzene molecule in water subject to an electric dipole perturbation in the x direction applied at the initial time. (Left) The PCM surface charges (represented as in Movie SM1) and the molecular dipole (red arrow) are animated according to the equilibrium method (see the main text for details). The white arrow is the molecular dipole in vacuo. (Right) The molecular dipole magnitude (top panel) and xy in-plane angle (bottom panel) calculated with the different PCM methods implemented in Octopus. Observe how the presence of the solvent enhances the intrinsic molecular dipole magnitude and produces a noticeable dephasing with respect to the in vacuo case already after a few fs. This dephasing is reflected in the solvatochromic shift reported in Fig. 8 of the main text.

The authors would like to thank all the people who have contributed to the development of Octopus over the last two decades. They would also like to thank Lin Lin for useful and interesting discussions and acknowledge the open discussions about real space methods with the group of Professor Chelikowsky. This work was supported by the European Research Council (Grant No. ERC-2015-AdG694097), the Cluster of Excellence “Advanced Imaging of Matter” (AIM), Grupos Consolidados (IT1249-19), and SFB925. The Flatiron Institute is a division of the Simons Foundation. X.A., A.W., and A.C. acknowledge that part of this work was performed under the auspices of the U.S. Department of Energy at Lawrence Livermore National Laboratory under Contract No. DE-AC52-07A27344. J.J.-S. gratefully acknowledges the funding from the European Union Horizon 2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement No. 795246-StrongLights. J.F. acknowledges financial support from the Deutsche Forschungsgemeinschaft (DFG Forschungsstipendium FL 997/1-1). D.A.S. acknowledges University of California, Merced start-up funding.

1.
M. A. L.
Marques
,
A.
Rubio
,
E. K.
Gross
,
K.
Burke
,
F.
Nogueira
, and
C. A.
Ullrich
,
Time-Dependent Density Functional Theory
(
Springer Science & Business Media
,
2006
), Vol. 706.
2.
M. A. L.
Marques
,
N. T.
Maitra
,
F. M.
Nogueira
,
E. K.
Gross
, and
A.
Rubio
,
Fundamentals of Time-Dependent Density Functional Theory
(
Springer Science & Business Media
,
2012
), Vol. 837.
3.
T. W.
Ebbesen
, “
Hybrid light–matter states in a molecular and material science perspective
,”
Acc. Chem. Res.
49
,
2403
2412
(
2016
).
4.
M.
Ruggenthaler
,
J.
Flick
,
C.
Pellegrini
,
H.
Appel
,
I. V.
Tokatly
, and
A.
Rubio
, “
Quantum-electrodynamical density-functional theory: Bridging quantum optics and electronic-structure theory
,”
Phys. Rev. A
90
,
012508
(
2014
).
5.
J.
Flick
,
M.
Ruggenthaler
,
H.
Appel
, and
A.
Rubio
, “
Kohn-Sham approach to quantum electrodynamical density-functional theory: Exact time-dependent effective potentials in real space
,”
Proc. Natl. Acad. Sci. U. S. A.
112
,
15285
15290
(
2015
).
6.
J.
Flick
,
N.
Rivera
, and
P.
Narang
, “
Strong light-matter coupling in quantum chemistry and quantum photonics
,”
Nanophotonics
7
,
1479
1501
(
2018
).
7.
J.
Feist
,
J.
Galego
, and
F. J.
Garcia-Vidal
, “
Polaritonic chemistry with organic molecules
,”
ACS Photonics
5
,
205
216
(
2017
).
8.
R. F.
Ribeiro
,
L. A.
Martínez-Martínez
,
M.
Du
,
J.
Campos-Gonzalez-Angulo
, and
J.
Yuen-Zhou
, “
Polariton chemistry: Controlling molecular dynamics with optical cavities
,”
Chem. Sci.
9
,
6325
6339
(
2018
).
9.
A. F.
Kockum
,
A.
Miranowicz
,
S.
De Liberato
,
S.
Savasta
, and
F.
Nori
, “
Ultrastrong coupling between light and matter
,”
Nat. Rev. Phys.
1
,
19
40
(
2019
).
10.
I. V.
Tokatly
, “
Time-dependent density functional theory for many-electron systems interacting with cavity photons
,”
Phys. Rev. Lett.
110
,
233001
(
2013
).
11.
J.
Flick
, “
Exact nonadiabatic many-body dynamics: Electron-phonon coupling in photoelectron spectroscopy and light-matter interactions in quantum electrodynamical density-functional theory
,” Ph.D. thesis,
Humboldt-Universität zu Berlin, Berlin
,
2016
.
12.
M.
Ruggenthaler
,
N.
Tancogne-Dejean
,
J.
Flick
,
H.
Appel
, and
A.
Rubio
, “
From a quantum-electrodynamical light–matter description to novel spectroscopies
,”
Nat. Rev. Chem.
2
,
0118
(
2018
).
13.
J.
Flick
,
C.
Schäfer
,
M.
Ruggenthaler
,
H.
Appel
, and
A.
Rubio
, “
Ab initio optimized effective potentials for real molecules in optical cavities: Photon contributions to the molecular ground state
,”
ACS Photonics
5
,
992
1005
(
2018
).
14.
C.
Schäfer
,
M.
Ruggenthaler
,
H.
Appel
, and
A.
Rubio
, “
Modification of excitation and charge transfer in cavity quantum-electrodynamical chemistry
,”
Proc. Natl. Acad. Sci. U. S. A.
116
,
4883
4892
(
2019
).
15.
A.
Castro
, “
A first-principles time-dependent density functional theory scheme for the computation of the electromagnetic response of nanostructures
,” Ph.D. thesis,
University of Valladolid
,
2004
.
16.
X.
Andrade
, “
Linear and non-linear response phenomena of molecular systems within time-dependent density functional theory
,” Ph.D. thesis,
Universidad del País Vasco
,
2010
.
17.
M. A.
Marques
,
A.
Castro
,
G. F.
Bertsch
, and
A.
Rubio
, “
Octopus: A first-principles tool for excited electron–ion dynamics
,”
Comput. Phys. Commun.
151
,
60
78
(
2003
).
18.
A.
Castro
,
H.
Appel
,
M.
Oliveira
,
C. A.
Rozzi
,
X.
Andrade
,
F.
Lorenzen
,
M. A. L.
Marques
,
E. K. U.
Gross
, and
A.
Rubio
, “
Octopus: A tool for the application of time-dependent density functional theory
,”
Phys. Status Solidi B
243
,
2465
2488
(
2006
).
19.
X.
Andrade
,
D.
Strubbe
,
U.
De Giovannini
,
A. H.
Larsen
,
M. J. T.
Oliveira
,
J.
Alberdi-Rodriguez
,
A.
Varas
,
I.
Theophilou
,
N.
Helbig
,
M. J.
Verstraete
,
L.
Stella
,
F.
Nogueira
,
A.
Aspuru-Guzik
,
A.
Castro
,
M. A. L.
Marques
, and
A.
Rubio
, “
Real-space grids and the Octopus code as tools for the development of new simulation approaches for electronic systems
,”
Phys. Chem. Chem. Phys.
17
,
31371
31396
(
2015
).
20.
See https://octopus-code.org/ for the Octopus code, tutorials, and examples can be found on its website.
21.
R.
Jestädt
,
M.
Ruggenthaler
,
M. J. T.
Oliveira
,
A.
Rubio
, and
H.
Appel
, “
Light-matter interactions within the Ehrenfest-Maxwell-Pauli-Kohn-Sham framework: Fundamentals, implementation, and nano-optical applications
,”
Adv. Phys.
68
,
225
333
(
2019
); arXiv:1812.05049.
22.
G. F.
Bertsch
,
J.-I.
Iwata
,
A.
Rubio
, and
K.
Yabana
, “
Real-space, real-time method for the dielectric function
,”
Phys. Rev. B
62
,
7998
8002
(
2000
).
23.
A.
Taflove
and
S.
Hagness
,
Computational Electrodynamics: The Finite-Difference Time-Domain Method
, Artech House Antennas and Propagation Library (
Artech House
,
2005
).
24.
I.
Bialynicki-Birula
, “
On the wave function of the photon
,”
Acta Phys. Pol., Ser. A
86
,
97
116
(
1994
).
25.
S.
Ludwig
, “
Elektromagnetische grundgleichungen in bivektorieller behandlung
,”
Ann. Phys.
327
,
579
586
(
1907
).
26.
R.
Loudon
,
The Quantum Theory of Light
(
Oxford Science Publications
,
1988
).
27.
D. P.
Craig
and
T.
Thirunamachandran
,
Molecular Quantum Electrodynamics: An Introduction to Radiation-Molecule Interactions
(
Courier Corporation
,
1998
).
28.
J. J.
Baumberg
,
J.
Aizpurua
,
M. H.
Mikkelsen
, and
D. R.
Smith
, “
Extreme nanophotonics from ultrathin metallic gaps
,”
Nat. Mater.
18
,
668
678
(
2019
).
29.
R.
van Leeuwen
, “
The Sham-Schlüter equation in time-dependent density-functional theory
,”
Phys. Rev. Lett.
76
,
3610
3613
(
1996
).
30.
C.
Pellegrini
,
J.
Flick
,
I. V.
Tokatly
,
H.
Appel
, and
A.
Rubio
, “
Optimized effective potential for quantum electrodynamical time-dependent density functional theory
,”
Phys. Rev. Lett.
115
,
093001
(
2015
).
31.
S.
Kümmel
and
L.
Kronik
, “
Orbital-dependent density functionals: Theory and applications
,”
Rev. Mod. Phys.
80
,
3
60
(
2008
).
32.
M.
Mundt
and
S.
Kümmel
, “
Optimized effective potential in real time: Problems and prospects in time-dependent density-functional theory
,”
Phys. Rev. A
74
,
022511
(
2006
).
33.
H. O.
Wijewardane
and
C. A.
Ullrich
, “
Real-time electron dynamics with exact-exchange time-dependent density-functional theory
,”
Phys. Rev. Lett.
100
,
056404
(
2008
).
34.
S.-L.
Liao
,
T.-S.
Ho
,
H.
Rabitz
, and
S.-I.
Chu
, “
Time-local equation for the exact optimized effective potential in time-dependent density functional theory
,”
Phys. Rev. Lett.
118
,
243001
(
2017
).
35.
J.
Flick
,
M.
Ruggenthaler
,
H.
Appel
, and
A.
Rubio
, “
Atoms and molecules in cavities, from weak to strong coupling in quantum-electrodynamics (QED) chemistry
,”
Proc. Natl. Acad. Sci. U. S. A.
114
,
3026
3034
(
2017
).
36.
S. E. B.
Nielsen
,
C.
Schäfer
,
M.
Ruggenthaler
, and
A.
Rubio
, “
Dressed-orbital approach to cavity quantum electrodynamics and beyond
,” arXiv:1812.00388 (
2018
).
37.
S.
Kümmel
and
J. P.
Perdew
, “
Simple iterative construction of the optimized effective potential for orbital functionals, including exact exchange
,”
Phys. Rev. Lett.
90
,
043004
(
2003
).
38.
T. W.
Hollins
,
S. J.
Clark
,
K.
Refson
, and
N. I.
Gidopoulos
, “
Optimized effective potential using the hylleraas variational method
,”
Phys. Rev. B
85
,
235126
(
2012
).
39.
J.
Krieger
,
Y.
Li
, and
G.
Iafrate
, “
Derivation and application of an accurate Kohn-Sham potential with integer discontinuity
,”
Phys. Lett. A
146
,
256
260
(
1990
).
40.
J. B.
Krieger
,
Y.
Li
, and
G. J.
Iafrate
, “
Construction and application of an accurate local spin-polarized Kohn-Sham potential with integer discontinuity: Exchange-only theory
,”
Phys. Rev. A
45
,
101
126
(
1992
).
41.
J. B.
Krieger
,
Y.
Li
, and
G. J.
Iafrate
, “
Systematic approximations to the optimized effective potential: Application to orbital-density-functional theory
,”
Phys. Rev. A
46
,
5453
5458
(
1992
).
42.
C.
Schäfer
,
M.
Ruggenthaler
, and
A.
Rubio
, “
Ab initio nonrelativistic quantum electrodynamics: Bridging quantum chemistry and quantum optics from weak to strong coupling
,”
Phys. Rev. A
98
,
043801
(
2018
).
43.
V.
Rokaj
,
D. M.
Welakuh
,
M.
Ruggenthaler
, and
A.
Rubio
, “
Light–matter interaction in the long-wavelength limit: No ground-state without dipole self-energy
,”
J. Phys. B: At., Mol. Opt. Phys.
51
,
034005
(
2018
).
44.
J.
Flick
and
P.
Narang
, “
Cavity-correlated electron-nuclear dynamics from first principles
,”
Phys. Rev. Lett.
121
,
113002
(
2018
).
45.
J.
Flick
,
D. M.
Welakuh
,
M.
Ruggenthaler
,
H.
Appel
, and
A.
Rubio
, “
Light–matter response in nonrelativistic quantum electrodynamics
,”
ACS Photonics
6
,
2757
2778
(
2019
).
46.
N. M.
Hoffmann
,
C.
Schäfer
,
A.
Rubio
,
A.
Kelly
, and
H.
Appel
, “
Capturing vacuum fluctuations and photon correlations in cavity quantum electrodynamics with multitrajectory ehrenfest dynamics
,”
Phys. Rev. A
99
,
063819
(
2019
).
47.
J.
Galego
,
C.
Climent
,
F. J.
Garcia-Vidal
, and
J.
Feist
, “
Cavity Casimir-Polder forces and their effects in ground state chemical reactivity
,”
Phys. Rev. X
9
,
021057
(
2019
).
48.

Note that Hartree-Fock theory is also included within RDMFT by fixing the orbital occupations to 0 and 1.

49.
F.
Buchholz
,
I.
Theophilou
,
S. E. B.
Nielsen
,
M.
Ruggenthaler
, and
A.
Rubio
, “
Reduced density-matrix approach to strong matter-photon interaction
,”
ACS Photonics
6
,
2694
2711
(
2019
).
50.

At the current state of the code, we have only implemented the so-called Müller functional,171 but one could potentially use any RDMFT functional to approximate two-body expression.

51.
A. J.
Coleman
, “
Structure of fermion density matrices
,”
Rev. Mod. Phys.
35
,
668
686
(
1963
).
52.

The respective results for the equilibrium distance b = 1.61 bohr are shown in Ref. 50, Sec. 7.

53.

Performed with the many-body routine of Octopus, see Ref. 19, Sec. 13 for details.

54.
V. I.
Anisimov
,
J.
Zaanen
, and
O. K.
Andersen
, “
Band theory and Mott insulators: Hubbard U instead of Stoner I
,”
Phys. Rev. B
44
,
943
(
1991
).
55.
V. I.
Anisimov
,
I. V.
Solovyev
,
M. A.
Korotin
,
M. T.
Czyżyk
, and
G. A.
Sawatzky
, “
Density-functional theory and NiO photoemission spectra
,”
Phys. Rev. B
48
,
16929
(
1993
).
56.
A. I.
Liechtenstein
,
V. I.
Anisimov
, and
J.
Zaanen
, “
Density-functional theory and strong interactions: Orbital ordering in Mott-Hubbard insulators
,”
Phys. Rev. B
52
,
R5467
(
1995
).
57.
V. I.
Anisimov
,
F.
Aryasetiawan
, and
A.
Lichtenstein
, “
First-principles calculations of the electronic structure and spectra of strongly correlated systems: The LDA+U method
,”
J. Phys.: Condens. Matter
9
,
767
(
1997
).
58.
K.
Haule
, “
Exact double counting in combining the dynamical mean field theory and the density functional theory
,”
Phys. Rev. Lett.
115
,
196403
(
2015
).
59.
A. G.
Petukhov
,
I. I.
Mazin
,
L.
Chioncel
, and
A. I.
Lichtenstein
, “
Correlated metals and the LDA+U method
,”
Phys. Rev. B
67
,
153106
(
2003
).
60.
S. L.
Dudarev
,
G. A.
Botton
,
S. Y.
Savrasov
,
C. J.
Humphreys
, and
A. P.
Sutton
, “
Electron-energy-loss spectra and the structural stability of nickel oxide: An LSDA+U study
,”
Phys. Rev. B
57
,
1505
1509
(
1998
).
61.
M. T.
Czyżyk
and
G. A.
Sawatzky
, “
Local-density functional and on-site correlations: The electronic structure of La2CuO4 and LaCuO3
,”
Phys. Rev. B
49
,
14211
14228
(
1994
).
62.
N.
Tancogne-Dejean
,
M. J. T.
Oliveira
, and
A.
Rubio
, “
Self-consistent DFT+U method for real-space time-dependent density functional theory calculations
,”
Phys. Rev. B
96
,
245133
(
2017
).
63.
L.
Xian
,
D. M.
Kennes
,
N.
Tancogne-Dejean
,
M.
Altarelli
, and
A.
Rubio
, “
Multiflat bands and strong correlations in twisted bilayer boron nitride: Doping-induced correlated insulator and superconductor
,”
Nano Lett.
19
,
4934
4940
(
2019
).
64.
N.
Tancogne-Dejean
and
A.
Rubio
, “
Parameter-free hybrid functional based on an extended Hubbard model: DFT+U+V
,” arXiv:1911.10813 (
2019
).
65.
L. A.
Agapito
,
S.
Curtarolo
, and
M.
Buongiorno Nardelli
, “
Reformulation of DFT+U as a pseudohybrid hubbard density functional for accelerated materials discovery
,”
Phys. Rev. X
5
,
011006
(
2015
).
66.
G. E.
Topp
,
N.
Tancogne-Dejean
,
A. F.
Kemper
,
A.
Rubio
, and
M. A.
Sentef
, “
All-optical nonequilibrium pathway to stabilising magnetic Weyl semimetals in pyrochlore iridates
,”
Nature Commun.
9
,
4452
(
2018
).
67.
N.
Tancogne-Dejean
,
M. A.
Sentef
, and
A.
Rubio
, “
Ultrafast modification of hubbard U in a strongly correlated material: Ab initio high-harmonic generation in NiO
,”
Phys. Rev. Lett.
121
,
097402
(
2018
).
68.
K.
Berland
,
V. R.
Cooper
,
K.
Lee
,
E.
Schröder
,
T.
Thonhauser
,
P.
Hyldgaard
, and
B. I.
Lundqvist
, “
van der Waals forces in density functional theory: A review of the vdW-DF method
,”
Rep. Prog. Phys.
78
,
066501
(
2015
).
69.
S.
Grimme
,
J.
Antony
,
S.
Ehrlich
, and
H.
Krieg
, “
A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu
,”
J. Chem. Phys.
132
,
154104
(
2010
).
70.
A.
Tkatchenko
and
M.
Scheffler
, “
Accurate molecular van der Waals interactions from ground-state electron density and free-atom reference data
,”
Phys. Rev. Lett.
102
,
073005
(
2009
).
71.
A. H.
Larsen
,
M.
Kuisma
,
J.
Löfgren
,
Y.
Pouillon
,
P.
Erhart
, and
P.
Hyldgaard
, “
libvdwxc: A library for exchange–correlation functionals in the vdW-DF family
,”
Modell. Simul. Mater. Sci. Eng.
25
,
065004
(
2017
).
72.
J. P.
Perdew
and
Y.
Wang
, “
Accurate and simple analytic representation of the electron-gas correlation energy
,”
Phys. Rev. B
45
,
13244
13249
(
1992
).
73.
G.
Román-Pérez
and
J. M.
Soler
, “
Efficient implementation of a van der Waals density functional: Application to double-wall carbon nanotubes
,”
Phys. Rev. Lett.
103
,
096102
(
2009
).
74.
M.
Frigo
and
S. G.
Johnson
, “
The design and implementation of FFTW3
,”
Proc. IEEE
93
,
216
231
(
2005
), part of special issue: Program generation, optimization, and platform adaptation.
75.
M.
Dion
,
H.
Rydberg
,
E.
Schröder
,
D. C.
Langreth
, and
B. I.
Lundqvist
, “
van der Waals density functional for general geometries
,”
Phys. Rev. Lett.
92
,
246401
(
2004
).
76.
M.
Dion
,
H.
Rydberg
,
E.
Schröder
,
D. C.
Langreth
, and
B. I.
Lundqvist
, “
Erratum: “van der Waals density functional for general geometries” [Phys. Rev. Lett. 92, 246401 (2004)]
,”
Phys. Rev. Lett.
95
,
109902
(
2005
).
77.
K.
Lee
,
E. D.
Murray
,
L.
Kong
,
B. I.
Lundqvist
, and
D. C.
Langreth
, “
Higher-accuracy van der Waals density functional
,”
Phys. Rev. B
82
,
081101
(
2010
).
78.
K.
Berland
and
P.
Hyldgaard
, “
Exchange functional that tests the robustness of the plasmon description of the van der Waals density functional
,”
Phys. Rev. B
89
,
035412
(
2014
).
79.
J.
Klimeš
,
D. R.
Bowler
, and
A.
Michaelides
, “
Chemical accuracy for the van der Waals density functional
,”
J. Phys.: Condens. Matter
22
,
022201
(
2010
).
80.
V. R.
Cooper
, “
van der Waals density functional: An appropriate exchange functional
,”
Phys. Rev. B
81
,
161104
(
2010
).
81.
J.
Tomasi
,
B.
Mennucci
, and
R.
Cammi
, “
Quantum mechanical continuum solvation models
,”
Chem. Rev.
105
,
2999
3094
(
2005
).
82.
J.
Tomasi
and
M.
Persico
, “
Molecular interactions in solution: An overview of methods based on continuous distributions of the solvent
,”
Chem. Rev.
94
,
2027
2094
(
1994
).
83.
J.
Tomasi
, “
The physical model
,” in
Continuum Solvation Models in Chemical Physics: From Theory to Applications
, edited by
B.
Mennucci
and
R.
Cammi
(
Wiley & Sons
,
Chichester, UK
,
2007
), Chap. 1.1, p.
16
.
84.
A.
Delgado
,
S.
Corni
,
S.
Pittalis
, and
C. A.
Rozzi
, “
Modeling solvation effects in real-space and real-time within density functional approaches
,”
J. Chem. Phys.
143
,
144111
(
2015
).
85.
E.
Cancés
,
B.
Mennucci
, and
J.
Tomasi
, “
A new integral equation formalism for the polarizable continuum model: Theoretical background and applications to isotropic and anisotropic dielectrics
,”
J. Chem. Phys.
107
,
3032
(
1997
).
86.
J.-L.
Pascual-ahuir
,
E.
Silla
, and
I.
Tunon
, “
GEPOL: An improved description of molecular surfaces. III. A new algorithm for the computation of a solvent-excluding surface
,”
J. Comput. Chem.
15
,
1127
1138
(
1994
).
87.
M. W.
Schmidt
,
K. K.
Baldridge
,
J. A.
Boatz
,
S. T.
Elbert
,
M. S.
Gordon
,
J. H.
Jensen
,
S.
Koseki
,
N.
Matsunaga
,
K. A.
Nguyen
,
S.
Su
 et al, “
General atomic and molecular electronic structure system
,”
J. Comput. Chem.
14
,
1347
1363
(
1993
).
88.
J.
Perdew
,
K.
Burke
, and
M.
Ernzerhof
, “
Perdew, Burke, and Ernzerhof reply
,”
Phys. Rev. Lett.
80
,
891
(
1998
).
89.
M.
Caricato
,
F.
Ingrosso
,
B.
Mennucci
, and
J.
Tomasi
, “
A time-dependent polarizable continuum model: Theory and application
,”
J. Chem. Phys.
122
,
154501
(
2005
).
90.
R.
Cammi
and
J.
Tomasi
, “
Nonequilibrium solvation theory for the polarizable continuum model: A new formulation at the SCF level with application to the case of the frequency-dependent linear electric response function
,”
Int. J. Quantum Chem.
56
,
465
474
(
1995
).
91.
S.
Corni
,
S.
Pipolo
, and
R.
Cammi
, “
Equation of motion for the solvent polarization apparent charges in the polarizable continuum model: Application to real-time TDDFT
,”
J. Phys. Chem. A
119
,
5405
5416
(
2014
).
92.
G.
Gil
,
S.
Pipolo
,
A.
Delgado
,
C. A.
Rozzi
, and
S.
Corni
, “
Nonequilibrium solvent polarization effects in real-time electronic dynamics of solute molecules subject to time-dependent electric fields: A new feature of the polarizable continuum model
,”
J. Chem. Theory Comput.
15
,
2306
2319
(
2019
).
93.
L.
Onsager
, “
Electric moments of molecules in liquids
,”
J. Am. Chem. Soc.
58
,
1486
1493
(
1936
).
94.
S.
Corni
,
R.
Cammi
,
B.
Mennucci
, and
J.
Tomasi
, “
Electronic excitation energies of molecules in solution within continuum solvation models: Investigating the discrepancy between state-specific and linear-response methods
,”
J. Chem. Phys.
123
,
134512
(
2005
).
95.
P.
Buczek
,
A.
Ernst
, and
L. M.
Sandratskii
, “
Different dimensionality trends in the Landau damping of magnons in iron, cobalt, and nickel: Time-dependent density functional study
,”
Phys. Rev. B
84
,
174418
(
2011
).
96.
M.
Niesert
, “
Ab initio calculations of spin-wave excitation spectra from time-dependent density-functional theory
,” Ph.D. dissertation (
RWTH University
,
Aachen
,
2011
), record converted from VDB: 12.11.2012.
97.
B.
Rousseau
,
A.
Eiguren
, and
A.
Bergara
, “
Efficient computation of magnon dispersions within time-dependent density functional theory using maximally localized wannier functions
,”
Phys. Rev. B
85
,
054305
(
2012
).
98.
T.
Gorni
,
I.
Timrov
, and
S.
Baroni
, “
Spin dynamics from time-dependent density functional perturbation theory
,”
Eur. Phys. J. B
91
,
249
(
2018
).
99.
K.
Cao
,
H.
Lambert
,
P. G.
Radaelli
, and
F.
Giustino
, “
Ab initio calculation of spin fluctuation spectra using time-dependent density functional perturbation theory, plane waves, and pseudopotentials
,”
Phys. Rev. B
97
,
024420
(
2018
).
100.
N.
Singh
,
P.
Elliott
,
T.
Nautiyal
,
J. K.
Dewhurst
, and
S.
Sharma
, “
Adiabatic generalized gradient approximation kernel in time-dependent density functional theory
,”
Phys. Rev. B
99
,
035151
(
2019
).
101.
S. Y.
Savrasov
, “
Linear response calculations of spin fluctuations
,”
Phys. Rev. Lett.
81
,
2570
2573
(
1998
).
102.
N.
Tancogne-Dejean
,
F. G.
Eich
, and
A.
Rubio
, “
Time-dependent magnons from first principles
,”
J. Chem. Theory Comput.
16
,
1007
1017
(
2020
).
103.
L. M.
Sandratskii
, “
Energy band structure calculations for crystals with spiral magnetic structure
,”
Phys. Status Solidi B
136
,
167
180
(
1986
).
104.
H. A.
Mook
and
D. M.
Paul
, “
Neutron-scattering measurement of the spin-wave spectra for nickel
,”
Phys. Rev. Lett.
54
,
227
229
(
1985
).
105.
U.
De Giovannini
,
G.
Brunetto
,
A.
Castro
,
J.
Walkenhorst
, and
A.
Rubio
, “
Simulating pump–probe photoelectron and absorption spectroscopy on the attosecond timescale with time-dependent density functional theory
,”
ChemPhysChem
14
,
1363
1376
(
2013
).
106.
J.
Walkenhorst
,
U.
De Giovannini
,
A.
Castro
, and
A.
Rubio
, “
Tailored pump-probe transient spectroscopy with time-dependent density-functional theory: Controlling absorption spectra
,”
Eur. Phys. J. B
89
,
128
(
2016
).
107.
U.
De Giovannini
, “
Pump-probe photoelectron spectra
,” in
Handbook of Materials Modeling Methods: Theory Modeling
(
Springer
,
2018
), pp.
1
19
.
108.
S.
Sato
,
H.
Hübener
,
U.
De Giovannini
, and
A.
Rubio
, “
Ab initio simulation of attosecond transient absorption spectroscopy in two-dimensional materials
,”
Appl. Sci.
8
,
1777
(
2018
).
109.
L. D.
Barron
,
Molecular Light Scattering and Optical Activity
, 2nd ed. (
Cambridge University Press
,
Cambridge
,
2004
).
110.
F. R.
Keßler
and
J.
Metzdorf
, “
Landau level spectroscopy: Interband effects and Faraday rotation
,” in
Modern Problems in Condensed Matter Sciences
, edited by
V. M.
Agranovich
and
A. A.
Maradudin
(
Elsevier Science Publishers
,
Amsterdam
,
1991
), Vol. 27.1, Chap. 11.
111.
H.
Solheim
,
K.
Ruud
,
S.
Coriani
, and
P.
Norman
, “
Complex polarization propagator calculations of magnetic circular dichroism spectra
,”
J. Chem. Phys.
128
,
094103
(
2008
).
112.
H.
Solheim
,
K.
Ruud
,
S.
Coriani
, and
P.
Norman
, “
The A and B terms of magnetic circular dichroism revisited
,”
J. Phys. Chem. A
112
,
9615
9618
(
2008
).
113.
K.-M.
Lee
,
K.
Yabana
, and
G. F.
Bertsch
, “
Magnetic circular dichroism in real-time time-dependent density functional theory
,”
J. Chem. Phys.
134
,
144106
(
2011
).
114.
M.
Seth
,
M.
Krykunov
,
T.
Ziegler
,
J.
Autschbach
, and
A.
Banerjee
, “
Application of magnetically perturbed time-dependent density functional theory to magnetic circular dichroism: Calculation of B terms
,”
J. Chem. Phys.
128
,
144105
(
2008
).
115.
M.
Seth
,
M.
Krykunov
,
T.
Ziegler
, and
J.
Autschbach
, “
Application of magnetically perturbed time-dependent density functional theory to magnetic circular dichroism. II. Calculation of A terms
,”
J. Chem. Phys.
128
,
234102
(
2008
).
116.
I. V.
Lebedeva
,
D. A.
Strubbe
,
I. V.
Tokatly
, and
A.
Rubio
, “
Orbital magneto-optical response of periodic insulators from first principles
,”
npj Comput. Mater.
5
,
32
(
2019
).
117.
R. D.
King-Smith
and
D.
Vanderbilt
, “
Theory of polarization of crystalline solids
,”
Phys. Rev. B
47
,
1651
1654
(
1993
).
118.
D.
Vanderbilt
and
R. D.
King-Smith
, “
Electric polarization as a bulk quantity and its relation to surface charge
,”
Phys. Rev. B
48
,
4442
4455
(
1993
).
119.
R.
Resta
, “
Macroscopic polarization in crystalline dielectrics: The geometric phase approach
,”
Rev. Mod. Phys.
66
,
899
915
(
1994
).
120.
A. M.
Essin
,
A. M.
Turner
,
J. E.
Moore
, and
D.
Vanderbilt
, “
Orbital magnetoelectric coupling in band insulators
,”
Phys. Rev. B
81
,
205104
(
2010
).
121.
K.-T.
Chen
and
P. A.
Lee
, “
Unified formalism for calculating polarization, magnetization, and more in a periodic insulator
,”
Phys. Rev. B
84
,
205137
(
2011
).
122.
X.
Gonze
and
J. W.
Zwanziger
, “
Density-operator theory of orbital magnetic susceptibility in periodic insulators
,”
Phys. Rev. B
84
,
064445
(
2011
).
123.
X.
Andrade
,
S.
Botti
,
M. A. L.
Marques
, and
A.
Rubio
, “
Time-dependent density functional theory scheme for efficient calculations of dynamic (hyper)polarizabilities
,”
J. Chem. Phys.
126
,
184106
(
2007
).
124.
D. A.
Strubbe
, “
Optical and transport properties of organic molecules: Methods and applications
,” Ph.D. thesis,
University of California
,
Berkeley, USA
,
2012
.
125.
D. A.
Strubbe
,
L.
Lehtovaara
,
A.
Rubio
,
M. A. L.
Marques
, and
S. G.
Louie
, “
Response functions in TDDFT: Concepts and implementation
,” in
Fundamentals of Time-Dependent Density Functional Theory
(
Springer Berlin Heidelberg
,
Berlin, Heidelberg
,
2012
), pp.
139
166
.
126.
M.
Lazzeri
and
F.
Mauri
, “
High-order density-matrix perturbation theory
,”
Phys. Rev. B
68
,
161101
(
2003
).
127.
X.
Gonze
and
J.-P.
Vigneron
, “
Density-functional approach to nonlinear-response coefficients of solids
,”
Phys. Rev. B
39
,
13120
13128
(
1989
).
128.
A. D.
Corso
,
F.
Mauri
, and
A.
Rubio
, “
Density-functional theory of the nonlinear optical susceptibility: Application to cubic semiconductors
,”
Phys. Rev. B
53
,
15638
15642
(
1996
).
129. J. A. Berger, “Fully parameter-free calculation of optical spectra for insulators, semiconductors, and metals from a simple polarization functional,” Phys. Rev. Lett. 115, 137402 (2015).
130. S. Albrecht, L. Reining, R. Del Sole, and G. Onida, “Ab initio calculation of excitonic effects in the optical spectra of semiconductors,” Phys. Rev. Lett. 80, 4510–4513 (1998).
131. S. Botti, F. Sottile, N. Vast, V. Olevano, L. Reining, H.-C. Weissker, A. Rubio, G. Onida, R. Del Sole, and R. W. Godby, “Long-range contribution to the exchange-correlation kernel of time-dependent density functional theory,” Phys. Rev. B 69, 155112 (2004).
132. P. Lautenschlager, M. Garriga, L. Viña, and M. Cardona, “Temperature dependence of the dielectric function and interband critical points in silicon,” Phys. Rev. B 36, 4821–4830 (1987).
133. U. De Giovannini and A. Castro, “Real-time and real-space time-dependent density-functional theory approach to attosecond dynamics,” in Attosecond Molecular Dynamics (Royal Society of Chemistry, Cambridge, 2018), pp. 424–461.
134. U. De Giovannini, H. Hübener, and A. Rubio, “A first-principles time-dependent density functional theory framework for spin and time-resolved angular-resolved photoelectron spectroscopy in periodic systems,” J. Chem. Theory Comput. 13, 265–273 (2017).
135. L. Tao and A. Scrinzi, “Photo-electron momentum spectra from minimal volumes: The time-dependent surface flux method,” New J. Phys. 14, 013021 (2012).
136. P. Wopperer, U. De Giovannini, and A. Rubio, “Efficient and accurate modeling of electron photoemission in nanostructures with TDDFT,” Eur. Phys. J. B 90, 1307 (2017).
137. U. De Giovannini, A. H. Larsen, and A. Rubio, “Modeling electron dynamics coupled to continuum states in finite volumes with absorbing boundaries,” Eur. Phys. J. B 88, 56 (2015).
138. S. A. Sato, H. Hübener, A. Rubio, and U. De Giovannini, “First-principles simulations for attosecond photoelectron spectroscopy based on time-dependent density functional theory,” Eur. Phys. J. B 91, 126 (2018).
139. X. Andrade, A. Castro, D. Zueco, J. L. Alonso, P. Echenique, F. Falceto, and A. Rubio, “Modified Ehrenfest formalism for efficient large-scale ab initio molecular dynamics,” J. Chem. Theory Comput. 5, 728–742 (2009).
140. H. Hübener, U. De Giovannini, and A. Rubio, “Phonon driven Floquet matter,” Nano Lett. 18, 1535–1542 (2018).
141. U. De Giovannini, H. Hübener, and A. Rubio, “Monitoring electron-photon dressing in WSe2,” Nano Lett. 16, 7993–7998 (2016).
142. T. Oka and S. Kitamura, “Floquet engineering of quantum materials,” Annu. Rev. Condens. Matter Phys. 10, 387–408 (2019).
143. U. De Giovannini and H. Hübener, “Floquet analysis of excitations in materials,” J. Phys.: Mater. 3, 012001 (2019).
144. C. I. Blaga, J. Xu, A. D. DiChiara, E. Sistrunk, K. Zhang, P. Agostini, T. A. Miller, L. F. DiMauro, and C. D. Lin, “Imaging ultrafast molecular dynamics with laser-induced electron diffraction,” Nature 483, 194–197 (2012).
145. F. Krečinić, P. Wopperer, B. Frusteri, F. Brauße, J.-G. Brisset, U. De Giovannini, A. Rubio, A. Rouzée, and M. J. J. Vrakking, “Multiple-orbital effects in laser-induced electron diffraction of aligned molecules,” Phys. Rev. A 98, 041401 (2018).
146. A. Trabattoni, S. Trippel, U. De Giovannini, J. F. Olivieri, J. Wiese, T. Mullins, J. Onvlee, S.-K. Son, B. Frusteri, A. Rubio, and J. Küpper, “Setting the clock of photoelectron emission through molecular alignment,” arXiv:1802.06622 (2018).
147. V. Recoules, F. Lambert, A. Decoster, B. Canaud, and J. Clérouin, “Ab initio determination of thermal conductivity of dense hydrogen plasmas,” Phys. Rev. Lett. 102, 075002 (2009).
148. F. Lambert, V. Recoules, A. Decoster, J. Clérouin, and M. Desjarlais, “On the transport coefficients of hydrogen in the inertial confinement fusion regime,” Phys. Plasmas 18, 056306 (2011).
149. X. Andrade, S. Hamel, and A. A. Correa, “Negative differential conductivity in liquid aluminum from real-time quantum simulations,” Eur. Phys. J. B 91, 229 (2018).
150. F. Eich, M. Di Ventra, and G. Vignale, “Functional theories of thermoelectric phenomena,” J. Phys.: Condens. Matter 29, 063001 (2016).
151. G. Kresse and J. Furthmüller, “Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set,” Phys. Rev. B 54, 11169–11186 (1996).
152. P. Hohenberg and W. Kohn, “Inhomogeneous electron gas,” Phys. Rev. 136, B864–B871 (1964).
153. E. Runge and E. K. U. Gross, “Density-functional theory for time-dependent systems,” Phys. Rev. Lett. 52, 997–1000 (1984).
154. J. Neugebauer, “Chromophore-specific theoretical spectroscopy: From subsystem density functional theory to mode-specific vibrational spectroscopy,” Phys. Rep. 489, 1–87 (2010).
155. R. F. W. Bader, Atoms in Molecules: A Quantum Theory (Clarendon Press, 1994).
156. A. Krishtal, D. Ceresoli, and M. Pavanello, “Subsystem real-time time dependent density functional theory,” J. Chem. Phys. 142, 154116 (2015).
157. J. P. Perdew, K.