We present a Graphics Processing Unit (GPU)-accelerated version of the real-space SPARC electronic structure code for performing Kohn–Sham density functional theory calculations within the local density and generalized gradient approximations. In particular, we develop a modular math-kernel based implementation for NVIDIA architectures wherein the computationally expensive operations are carried out on the GPUs, with the remainder of the workload retained on the central processing units (CPUs). Using representative bulk and slab examples, we show that relative to CPU-only execution, GPUs enable speedups of up to 6× and 60× in node and core hours, respectively, bringing time to solution down to less than 30 s for a metallic system with over 14 000 electrons and enabling significant reductions in computational resources required for a given wall time.

Over the past few decades, Kohn–Sham density functional theory (DFT)1,2 has established itself as a cornerstone of materials and chemical sciences research. In particular, due to its high accuracy-to-cost ratio relative to other ab initio methods, it has seen widespread use for understanding as well as predicting material properties and chemical phenomena from the first principles of quantum mechanics.3,4 Despite significant advances in numerical/computational algorithms as well as high-performance computing architectures, bringing down the time to solution of the Kohn–Sham problem remains a challenging task. In particular, the computational cost and memory requirements scale cubically and quadratically with system size, respectively, restricting the range and types of systems that can be investigated, particularly in ab initio molecular dynamics (AIMD) simulations, where reaching time scales of interest may require the solution of the Kohn–Sham equations tens or hundreds of thousands of times.3 

The planewave pseudopotential method,5 which employs the complete, orthogonal, Laplacian-diagonalizing, periodic, and atom-position independent Fourier basis for discretization, is among the most widely used techniques for the solution of the Kohn–Sham equations.6–13 In particular, the planewave method is accurate, relies on a single parameter for convergence with basis, and is highly efficient on small to moderate computational resources through the use of efficient preconditioning schemes and well-optimized Fast Fourier Transforms (FFTs). However, the planewave method is restricted to periodic boundary conditions, wherein artificial periodicity has to be introduced through large vacuum regions for systems that are finite in one or more directions. Moreover, the global nature of the Fourier basis makes the development of linear-scaling methods14–16 difficult and limits the parallel scalability of the planewave method on large-scale computational resources, which severely limits the system sizes and time scales accessible to a rigorous first-principles Kohn–Sham DFT investigation.

Motivated by the limitations of the planewave method, a number of alternate solution strategies based on systematically improvable, localized representations have been developed,17–37 among which the real-space finite-difference method38,39 is perhaps the most mature and widely used to date. In this method, computational locality is maximized by discretizing all spatial quantities on a uniform, atom-position independent real-space grid, wherein convergence is controlled by a single parameter, i.e., the grid spacing. The method naturally accommodates both periodic and Dirichlet boundary conditions, allowing for the accurate and efficient treatment of systems with different dimensionalities, i.e., finite, semi-infinite, and bulk, as well as those with nontraditional symmetries.40,41 Moreover, the localized real-space representation allows for the development of linear-scaling methods; and, being free of communication-intensive transforms such as FFTs, the method can efficiently leverage large-scale parallel computational resources.22,33,42–45

SPARC34,46,47 is a recently developed open source electronic structure code that incorporates a number of the developments in real-space DFT made over the past decade, allowing for efficient utilization of modest as well as large-scale computational resources. Its accuracy and performance have been extensively verified and benchmarked against established planewave codes, during which it has been found to be an order of magnitude faster for local, semilocal, and hybrid exchange–correlation functionals, with increasing advantages as the number of processors is increased.45,46 However, it has heretofore been unable to exploit the acceleration provided by Graphics Processing Units (GPUs), which have been shown to provide substantial speedups in the context of electronic structure calculations,29,48–60 providing the motivation for the current work. In particular, we develop a modular math-kernel based GPU-accelerated version of SPARC for local and semilocal Kohn–Sham DFT calculations, wherein the computationally expensive operations are carried out on the GPUs, with the remainder of the workload retained on the central processing units (CPUs). Using representative bulk and slab examples, we show that GPU acceleration provides speedups of up to 6× and 60× in node and core hours, respectively, bringing time to solution down to less than 30 s for a metallic system with over 14 000 electrons and enabling significant reductions in computational resources required for a given wall time.

The remainder of this paper is organized as follows: In Sec. II, we describe the GPU acceleration of local/semilocal DFT calculations in SPARC. Next, we verify the performance of the GPU-accelerated SPARC code in Sec. III. Finally, we provide concluding remarks in Sec. IV.

The electronic ground state in SPARC34,46,47 is determined using the self-consistent field (SCF) method,5 which represents a fixed-point iteration with respect to either the density or potential. In each SCF iteration, a Schrödinger-type linear eigenproblem is solved for the eigenvectors/orbitals and the Poisson equation is solved for the electrostatic potential. Given the large prefactor and O(N³) scaling with system size, the overall computational cost of Kohn–Sham DFT calculations is primarily determined by the solution of the eigenproblem, especially when the exchange–correlation functional is approximated using either the local density approximation (LDA) or generalized gradient approximation (GGA),5 as is the focus of the current work.

SPARC employs the Chebyshev-filtered subspace iteration (CheFSI)61,62 to perform partial diagonalization of the Hamiltonian during each SCF iteration, as summarized in Algorithm 1. The CheFSI algorithm consists of two main steps: Chebyshev filtering and Rayleigh–Ritz. In Chebyshev filtering, the rapid growth of Chebyshev polynomials outside the interval [−1, 1] is used to filter out the unwanted part of the Hamiltonian’s spectrum, i.e., the unoccupied subspace. In Rayleigh–Ritz—which consists of projection of the Hamiltonian onto the filtered subspace, diagonalization of the resulting subspace Hamiltonian, and rotation of the filtered basis—approximations to the eigenvectors and eigenvalues of the Hamiltonian are then calculated. As the SCF iteration proceeds toward self-consistency, these eigenvectors converge to the Kohn–Sham orbitals.

ALGORITHM 1.

CheFSI-based partial diagonalization.

H : Hamiltonian for a given spin, Bloch wavevector, and SCF iteration, a matrix of dimension Nd × Nd applied as an operator
X : guess for the eigenvectors/orbitals, a matrix of dimension Nd × Ns
Nd : number of finite-difference nodes, Ns : number of orbitals
Filtering
  • Enhance desired part of the spectrum through Chebyshev polynomial filtering:
      X̃ = pm((H − cI)/e) X,
    pm : Chebyshev polynomial of degree m, c = (λNd + λc)/2, e = (λNd − λc)/2,
    λNd : largest eigenvalue of H, λc : filter cutoff.
  • Key computational kernel and its scaling: matrix-free product HX, O(NdNs) per application of H, applied m times.
Projection
  • Project Hamiltonian onto the Chebyshev-filtered basis:
      H̃ = X̃ᵀỸ, M̃ = X̃ᵀX̃, with Ỹ = HX̃,
    Ỹ : matrix of dimension Nd × Ns.
  • Key computational kernels and their scaling: dense products X̃ᵀỸ and X̃ᵀX̃, O(NdNs²).
Diagonalization
  • Solve subspace eigenproblem:
      H̃Z̃ = M̃Z̃D̃,
    Z̃ : matrix of eigenvectors, dimension Ns × Ns,
    D̃ : diagonal matrix of eigenvalues, dimension Ns × Ns.
  • Key computational kernel and its scaling: generalized eigendecomposition of (H̃, M̃), O(Ns³).
Rotation
  • Subspace rotation step to obtain approximate eigenvectors of H, used as the guess for the next SCF step:
      X = X̃Z̃.
  • Key computational kernel and its scaling: dense product X̃Z̃, O(NdNs²).

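To make the filtering step concrete, the following is a minimal host-side sketch (in C with cuBLAS) of the three-term Chebyshev recurrence underlying X̃ = pm((H − cI)/e)X, assuming a hypothetical matrix-free routine apply_H that applies the Hamiltonian to a block of orbitals resident on the GPU; buffer management, the complex-valued path, and all names are illustrative rather than SPARC's actual implementation.

#include <cublas_v2.h>

/* Hypothetical matrix-free application of the Hamiltonian (stencil + effective
   potential + nonlocal projectors) to Ns_loc orbitals of length Nd on the GPU;
   not part of this sketch. */
void apply_H(const double *d_in, double *d_out, int Nd, int Ns_loc);

/* Chebyshev filtering via the three-term recurrence
   T_0 = X, T_1 = (H - cI)X/e, T_j = (2/e)(H - cI)T_{j-1} - T_{j-2}.
   d_X, d_Y, d_Ynew are preallocated device buffers of size Nd*Ns_loc; the
   returned pointer holds the filtered block p_m((H - cI)/e) X. */
double *chebyshev_filter(double *d_X, double *d_Y, double *d_Ynew,
                         int Nd, int Ns_loc, int m, double c, double e,
                         cublasHandle_t handle)
{
    const int n = Nd * Ns_loc;
    double a;

    /* T_1: Y = (H - cI) X / e */
    apply_H(d_X, d_Y, Nd, Ns_loc);
    a = -c;      cublasDaxpy(handle, n, &a, d_X, 1, d_Y, 1);
    a = 1.0 / e; cublasDscal(handle, n, &a, d_Y, 1);

    for (int j = 2; j <= m; j++) {
        /* T_j: Ynew = (2/e)(H - cI) Y - X */
        apply_H(d_Y, d_Ynew, Nd, Ns_loc);
        a = -c;      cublasDaxpy(handle, n, &a, d_Y, 1, d_Ynew, 1);
        a = 2.0 / e; cublasDscal(handle, n, &a, d_Ynew, 1);
        a = -1.0;    cublasDaxpy(handle, n, &a, d_X, 1, d_Ynew, 1);

        /* rotate buffers: X <- Y, Y <- Ynew (pointer swap, no data movement) */
        double *t = d_X; d_X = d_Y; d_Y = d_Ynew; d_Ynew = t;
    }
    return d_Y;
}
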
In the CPU implementation of SPARC, parallelization is achieved using the Message Passing Interface (MPI) standard. In particular, an eigensolver topology is implemented for CheFSI in which the MPI_COMM_WORLD communicator is split into two spin groups, each spin group is split into multiple Bloch wavevector groups, each wavevector group is split into multiple orbital groups, and finally, each orbital group is embedded with a Cartesian topology.46 In the current GPU-accelerated implementation, we neglect spin and employ only wavevector and orbital parallelization, i.e., no domain decomposition, which translates to each orbital group no longer being embedded with a Cartesian topology. Note that it is relatively straightforward to include spin polarization, given that the eigenproblems for different spins are essentially independent. Also note that in default SPARC operation, parallelization over all the orbitals occurs first, and only then is domain decomposition activated; i.e., domain decomposition is important in the strong scaling limit, but not in regular operation, where a moderate number of processors is used, motivating the current choice.
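
To make the communicator hierarchy concrete, the following is a minimal sketch (in C with MPI) of how the wavevector and orbital groups could be created with MPI_Comm_split; the block-wise color assignment, function name, and variables are illustrative and do not reproduce SPARC's actual bookkeeping.

#include <mpi.h>

/* Illustrative construction of the wavevector -> orbital communicator
   hierarchy used in the GPU-accelerated build (no spin groups, no domain
   decomposition). Group counts are assumed to divide the respective
   communicator sizes evenly. */
void build_comms(int n_kpt_groups, int n_orb_groups,
                 MPI_Comm *kpt_comm, MPI_Comm *orb_comm)
{
    int world_rank, world_size, kpt_rank, kpt_size;

    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* split MPI_COMM_WORLD into n_kpt_groups blocks of consecutive ranks */
    int kpt_color = world_rank / (world_size / n_kpt_groups);
    MPI_Comm_split(MPI_COMM_WORLD, kpt_color, world_rank, kpt_comm);

    /* split each wavevector group into n_orb_groups orbital groups */
    MPI_Comm_rank(*kpt_comm, &kpt_rank);
    MPI_Comm_size(*kpt_comm, &kpt_size);
    int orb_color = kpt_rank / (kpt_size / n_orb_groups);
    MPI_Comm_split(*kpt_comm, orb_color, kpt_rank, orb_comm);
}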

In this work, we propose a strategy that ensures maximum portability across diverse and ever-evolving GPU architectures and their corresponding programming interfaces, code separation of the CPU and GPU CheFSI modules that allows their independent development and optimization, minimum data transfer between host and device, and minimum peak memory requirement on a GPU. In what follows, we describe how the key computational kernels in each of the aforementioned CheFSI steps are accelerated on NVIDIA GPUs using the cuBLAS and cuSOLVER libraries via the CUDA parallel programming platform. The vectors/matrices are transferred from CPU to GPU and from GPU to CPU using the cublasSetVector and cublasGetVector routines, respectively. Note that for isolated systems/Γ-point calculations, real-valued computations are performed, whereas for other choices of Brillouin zone integration, complex-valued computations are performed, with all operations performed in double-precision arithmetic.
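
As an illustration of the host–device traffic, the sketch below moves a local orbital block to the GPU and back with the cublasSetVector/cublasGetVector routines mentioned above; the column-by-column transfer and all names are our own simplification, not SPARC's actual implementation.

#include <cuda_runtime.h>
#include <cublas_v2.h>

/* Illustrative transfer of a local orbital block (Nd x Ns_loc, stored
   contiguously in column-major order) between host and device. The block is
   moved one column at a time to keep the vector lengths within int range. */
void transfer_block(const double *h_X, double *h_X_out, int Nd, int Ns_loc)
{
    double *d_X;
    cudaMalloc((void **)&d_X, (size_t)Nd * Ns_loc * sizeof(double));

    /* host -> device */
    for (int j = 0; j < Ns_loc; j++)
        cublasSetVector(Nd, sizeof(double), h_X + (size_t)j * Nd, 1,
                        d_X + (size_t)j * Nd, 1);

    /* ... GPU work on d_X (filtering, projection, rotation) goes here ... */

    /* device -> host */
    for (int j = 0; j < Ns_loc; j++)
        cublasGetVector(Nd, sizeof(double), d_X + (size_t)j * Nd, 1,
                        h_X_out + (size_t)j * Nd, 1);

    cudaFree(d_X);
}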

While the current implementation works for any integer CPU-thread-to-GPU ratio greater than or equal to 1, for simplicity of discussion, we assume a CPU-thread-to-GPU ratio of 1, which is the default and most efficient setting in our implementation, providing an efficient load distribution with minimum PCI bus transactions between the host CPU and mapped device GPU. In addition, we consider a single wavevector in the Brillouin zone, since parallelization over different wavevectors follows naturally, given that the eigenproblems associated with different wavevectors are essentially independent. The corresponding Hamiltonian at a given SCF iteration, which is a sparse matrix of dimension Nd × Nd, will be denoted by H, and the guess for its eigenvectors/orbitals, which is a dense matrix of dimension Nd × Ns, will be denoted by X, where Nd denotes the number of finite-difference nodes and Ns denotes the number of orbitals. We will consider two partitions for X and related quantities:
X = [X^(1)  X^(2)  ⋯  X^(p)],   (1)
X = [X_(1); X_(2); ⋯; X_(p)],   (2)
where Eq. (1) is a 1D column-block partition and Eq. (2) a 1D row-block partition (semicolons denoting vertical stacking), X^(k) and X_(k) are matrices of dimension Nd × Ns/p and Nd/p × Ns, respectively, associated with CPUk/GPUk, and p is the number of CPUs/GPUs. If Nd and Ns are not integer multiples of the number of processors, the number of rows and columns for the p-th processor are reduced, respectively, such that the sizes of the matrices are the same on the remaining processors. Henceforth, the superscript (k) and subscript (k) denote the column and row blocks, respectively, that reside on CPUk/GPUk, and therefore where the associated computations (if any) are performed. Note that the computer memory requirement in GPU-accelerated SPARC, which is the same as in the CPU-only implementation, is essentially determined by the storage associated with the orbitals, three copies of which are needed as part of the CheFSI method.
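For concreteness, the block-ownership arithmetic implied by Eqs. (1) and (2) and the remark above can be sketched as follows; the function name and 0-based process index are ours.

/* Ownership range for the 1D column-block partition of Eq. (1): the first
   p-1 processes (k = 0, ..., p-2) hold equal blocks of ceil(Ns/p) columns and
   the last process holds the remainder. The row-block partition of Eq. (2) is
   obtained by replacing Ns with Nd. Assumes Ns >> p, as in practice. */
void column_block_range(int Ns, int p, int k, int *start, int *count)
{
    int blk = (Ns + p - 1) / p;                        /* ceil(Ns / p) */
    *start = k * blk;
    *count = (k == p - 1) ? Ns - (p - 1) * blk : blk;
}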

It is worth noting that some of the routines developed for the implementation of Chebyshev filtering (Sec. II A), i.e., stencil operations and nonlocal projector multiplications, can be used to accelerate the computation of the nonlocal component of the Hellmann–Feynman atomic forces and stresses, where the key computational kernels are the application of the gradient operator on the Kohn–Sham orbitals and application of the nonlocal pseudopotential operator on the resultant quantity.34,47,63 For the stresses, the gradient of the orbitals so computed can be used for the calculation of the electronic kinetic energy component of the stress. Indeed, these nonlocal components of the forces and stresses are explicitly dependent on the orbitals and therefore significantly more expensive than the local components, motivating acceleration through GPU computations.

The guess for the eigenvectors X is initially distributed on the CPU threads in the column-block form of Eq. (1), i.e.,
X = [X^(1)  X^(2)  ⋯  X^(p)],  with X^(k) residing on CPUk.   (3)
The matrix X, effective potential, and nonlocal projectors are then transferred from the host CPU to its mapped GPU device. The key computational kernel within the Chebyshev filtering is computed as
HX = [HX^(1)  HX^(2)  ⋯  HX^(p)],  with HX^(k) computed by GPUk,   (4)
where the Hamiltonian H, which consists of the Laplacian, effective potential (sum of the electrostatic and exchange–correlation potentials), and outer product of the nonlocal projectors, is never explicitly created, but rather its application on vectors/matrices is computed in matrix-free fashion as follows:
  • The application of the finite-difference stencil for the Laplacian on each column of X^(k) by GPUk, k ∈ {1, 2, …, p}, proceeds as follows:64 (i) group threads into 2D threadblocks of size (p, q) to match the data tiling in (x, y) and assign one thread per output element; (ii) allocate shared memory for a (p + n0) × (q + n0) array, n0 being the finite-difference order; (iii) load the corresponding tile of the column of X^(k) into the shared memory; (iv) compute the 2D stencil in each threadblock by fetching data from the shared memory; and (v) compute the 1D stencil in the z-direction in each threadblock and add it to the 2D stencil result (see the kernel sketch following this list). This algorithm ensures minimum read redundancy by collecting the data corresponding to the extended region of the (x, y) tile in the shared memory of a threadblock. In addition, all GPU threads work in parallel, each performing only 3n0 + 1 computations, thus enabling very fast and accurate stencil computations.

  • The effective potential is multiplied pointwise to each column of X(k) by GPUk, k ∈ {1, 2, …, p}.

  • The nonlocal projectors for each atom, which are stored as a dense matrix in both CPU-only and GPU-accelerated implementations, are applied on the appropriate components of X(k) by GPUk, k ∈ {1, 2, …, p}, by performing a dense matrix–matrix multiplication using the cublasZgemm/cublasDgemm routine. Note that the dense matrix associated with each atom is of size: number of grid points in the compact support of the largest projector times the number of projectors, whereby the associated computer memory, even when considering all the atoms, is negligible compared to the overall requirements.

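The following is a simplified CUDA sketch of steps (i)–(v) above: the in-plane (x, y) stencil is computed from a shared-memory tile with an n0/2-wide halo, while the z-direction contribution is read directly from global memory rather than through the register pipeline of Ref. 64; the tile sizes, zero-padded boundary handling, and kernel name are illustrative, and the grid dimensions are assumed to be multiples of the tile sizes.

#include <cuda_runtime.h>

#define TX 16        /* threadblock tile size in x */
#define TY 16        /* threadblock tile size in y */
#define RADIUS 6     /* half the finite-difference order n0 (here n0 = 12) */

/* Laplacian of one grid function (one column of the orbital block) on an
   nx x ny x nz grid: one thread per output element in an (x, y) tile, shared
   memory for the tile plus halo, z-contribution from global memory.
   w[0..RADIUS] are the 1D second-derivative weights over h^2; nx and ny are
   assumed to be multiples of TX and TY. */
__global__ void laplacian_stencil(const double *in, double *out,
                                  int nx, int ny, int nz, const double *w)
{
    __shared__ double tile[TY + 2 * RADIUS][TX + 2 * RADIUS];

    const int ix = blockIdx.x * TX + threadIdx.x;   /* global x index */
    const int iy = blockIdx.y * TY + threadIdx.y;   /* global y index */
    const int tx = threadIdx.x + RADIUS;            /* index inside shared tile */
    const int ty = threadIdx.y + RADIUS;

    for (int iz = 0; iz < nz; iz++) {
        const size_t idx = ((size_t)iz * ny + iy) * nx + ix;

        /* (iii) load tile interior and x/y halos (zero padding at boundaries) */
        tile[ty][tx] = in[idx];
        if (threadIdx.x < RADIUS) {
            tile[ty][tx - RADIUS] = (ix >= RADIUS) ? in[idx - RADIUS] : 0.0;
            tile[ty][tx + TX]     = (ix + TX < nx) ? in[idx + TX]     : 0.0;
        }
        if (threadIdx.y < RADIUS) {
            tile[ty - RADIUS][tx] = (iy >= RADIUS) ? in[idx - (size_t)RADIUS * nx] : 0.0;
            tile[ty + TY][tx]     = (iy + TY < ny) ? in[idx + (size_t)TY * nx]     : 0.0;
        }
        __syncthreads();

        /* (iv) in-plane (x, y) stencil from shared memory */
        double val = 2.0 * w[0] * tile[ty][tx];
        for (int r = 1; r <= RADIUS; r++)
            val += w[r] * (tile[ty][tx - r] + tile[ty][tx + r] +
                           tile[ty - r][tx] + tile[ty + r][tx]);

        /* (v) out-of-plane (z) stencil directly from global memory */
        val += w[0] * in[idx];
        for (int r = 1; r <= RADIUS; r++) {
            const double lo = (iz - r >= 0) ? in[idx - (size_t)r * nx * ny] : 0.0;
            const double hi = (iz + r < nz) ? in[idx + (size_t)r * nx * ny] : 0.0;
            val += w[r] * (lo + hi);
        }

        out[idx] = val;
        __syncthreads();   /* tile is overwritten in the next z iteration */
    }
}
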
Once the filtering is complete, the filtered basis X̃ is transferred from GPUs to CPUs. Note that since Ỹ=HX̃ is needed as part of the projection step, it is calculated as described above and also transferred from the GPUs to CPUs.

The matrices X̃ and Ỹ are first redistributed from a 1D column block distribution to a 1D row block distribution on the CPUs as follows:
X̃ = [X̃_(1); X̃_(2); ⋯; X̃_(p)],   (5)
Ỹ = [Ỹ_(1); Ỹ_(2); ⋯; Ỹ_(p)].   (6)
Next, X̃ and Ỹ are transferred from the CPUs to the GPUs, after which the subspace Hamiltonian H̃ and overlap M̃ matrices are computed as follows:
H̃ = X̃_(1)ᵀỸ_(1) + X̃_(2)ᵀỸ_(2) + ⋯ + X̃_(p)ᵀỸ_(p),   (7)
M̃ = X̃_(1)ᵀX̃_(1) + X̃_(2)ᵀX̃_(2) + ⋯ + X̃_(p)ᵀX̃_(p),   (8)
where the matrix–matrix multiplications H̃^(k) = X̃_(k)ᵀỸ_(k) and M̃^(k) = X̃_(k)ᵀX̃_(k) are performed by GPUk, k ∈ {1, 2, …, p}, using the cublasZgemm/cublasDgemm routine, and the resultant matrices are then transferred to CPUk. The additions are performed on the CPUs using the MPI_Ireduce routine, reducing to CPU1.
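
A minimal sketch of this local multiply-plus-reduce pattern for H̃ is given below (real-valued path only; M̃ follows identically); handle creation, error checking, and all names are illustrative rather than SPARC's actual implementation.

#include <mpi.h>
#include <cublas_v2.h>

/* Projection step, Eqs. (7)-(8): each GPU forms its local contribution
   Xt_(k)^T Yt_(k) with cublasDgemm, the result is copied back to its host
   rank, and the sum over k is accumulated on rank 0 (CPU1) with MPI_Ireduce.
   Ht_loc/Ht are column-major Ns x Ns arrays. */
void project_H(cublasHandle_t handle, const double *d_Xt, const double *d_Yt,
               double *d_Ht_loc, double *h_Ht_loc, double *h_Ht,
               int Nd_loc, int Ns, MPI_Comm comm)
{
    const double one = 1.0, zero = 0.0;

    /* Ht_loc = Xt_(k)^T * Yt_(k) on the GPU */
    cublasDgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                Ns, Ns, Nd_loc,
                &one, d_Xt, Nd_loc, d_Yt, Nd_loc,
                &zero, d_Ht_loc, Ns);

    /* device -> host */
    cublasGetVector(Ns * Ns, sizeof(double), d_Ht_loc, 1, h_Ht_loc, 1);

    /* nonblocking reduction of the local contributions onto rank 0 */
    MPI_Request req;
    MPI_Ireduce(h_Ht_loc, h_Ht, Ns * Ns, MPI_DOUBLE, MPI_SUM, 0, comm, &req);
    /* ... the same pattern would be repeated for Mt before waiting ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}
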
The matrices H̃ and M̃ are first transferred from CPU1 to GPU1. Next, the subspace generalized eigenproblem
H̃Z̃ = M̃Z̃D̃,   (9)
where Z̃ is the matrix of eigenvectors and D̃ is a diagonal matrix of the eigenvalues, is solved on GPU1 using the cusolverDnZhegvd/cusolverDnDsygvd routine. Thereafter, the matrices Z̃ and D̃ are transferred from GPU1 to CPU1, and then from CPU1 to all CPU threads using the MPI_Bcast routine. Note that cusolverDnZhegvd/cusolverDnDsygvd are single GPU routines and their multi-GPU versions are currently not available, which limits the size of the eigenproblem that can be solved to ∼15 000 orbitals, due to memory constraints. However, this does not pose a problem in the majority of practical applications, which typically target systems of 1000 atoms or less, in AIMD calculations in particular.
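
A minimal sketch of the single-GPU generalized eigensolve (real-valued path) via cusolverDnDsygvd is given below; error checking is omitted and all names are illustrative.

#include <cuda_runtime.h>
#include <cusolverDn.h>

/* Subspace diagonalization, Eq. (9): solve Ht Zt = Mt Zt Dt on a single GPU.
   d_Ht and d_Mt are column-major Ns x Ns device arrays; on exit d_Ht holds
   the eigenvectors Zt and d_W the eigenvalues (diagonal of Dt). */
void subspace_eig(cusolverDnHandle_t handle, double *d_Ht, double *d_Mt,
                  double *d_W, int Ns)
{
    int lwork = 0, *d_info;
    double *d_work;

    cusolverDnDsygvd_bufferSize(handle, CUSOLVER_EIG_TYPE_1,
                                CUSOLVER_EIG_MODE_VECTOR, CUBLAS_FILL_MODE_LOWER,
                                Ns, d_Ht, Ns, d_Mt, Ns, d_W, &lwork);
    cudaMalloc((void **)&d_work, sizeof(double) * lwork);
    cudaMalloc((void **)&d_info, sizeof(int));

    /* generalized symmetric-definite eigenproblem Ht z = lambda Mt z */
    cusolverDnDsygvd(handle, CUSOLVER_EIG_TYPE_1, CUSOLVER_EIG_MODE_VECTOR,
                     CUBLAS_FILL_MODE_LOWER, Ns, d_Ht, Ns, d_Mt, Ns, d_W,
                     d_work, lwork, d_info);

    cudaFree(d_work);
    cudaFree(d_info);
}
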
The matrices X̃ and Z̃ are first transferred from the CPUs to the GPUs, with the entire Z̃ transferred from each CPUk to its GPUk. Next, the approximate eigenvectors of the Hamiltonian H are calculated as follows:
X = [X_(1); X_(2); ⋯; X_(p)],  X_(k) = X̃_(k)Z̃,   (10)
where the matrix–matrix multiplication X_(k) = X̃_(k)Z̃ is performed by GPUk, k ∈ {1, 2, …, p}, using the cublasZgemm/cublasDgemm routine. Thereafter, the matrix X is transferred from the GPUs to the CPUs. Finally, the matrix X is redistributed as follows:
X = [X^(1)  X^(2)  ⋯  X^(p)].   (11)
The matrix of approximate eigenvectors X so generated is used as initial guess for the subsequent SCF iteration.
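
A minimal sketch of the per-GPU rotation (real-valued path) is given below; each GPU holds its row block X̃_(k) and the full Z̃, and a single cublasDgemm call produces the corresponding row block of X. Names are illustrative.

#include <cublas_v2.h>

/* Rotation of a local row block, Eq. (10): X_(k) = Xt_(k) * Zt, with Xt_(k)
   of dimension Nd_loc x Ns and Zt of dimension Ns x Ns, both column-major on
   the GPU. The result is subsequently copied back to the host (e.g., with
   cublasGetVector, as in the transfers above) for the redistribution of
   Eq. (11). */
void rotate_block(cublasHandle_t handle, const double *d_Xt, const double *d_Zt,
                  double *d_Xnew, int Nd_loc, int Ns)
{
    const double one = 1.0, zero = 0.0;
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                Nd_loc, Ns, Ns,
                &one, d_Xt, Nd_loc, d_Zt, Ns,
                &zero, d_Xnew, Nd_loc);
}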

We now study the performance of the GPU-accelerated SPARC implementation through representative examples, namely, bulk molybdenum (Mo) and 12-layer (001) slab of titanium dioxide (TiO2).65 Specifically, we consider 250-, 686-, and 1024-atom unit cells of Mo, with LDA2,66 exchange–correlation functional and Γ-point Brillouin zone integration; and 144-, 324-, and 576-atom unit cells of TiO2 with Perdew–Burke–Ernzerhof (PBE)67 exchange–correlation functional and 4 × 4, 3 × 3, and 2 × 2 Monkhorst–Pack68 grids for Brillouin zone integration, respectively. We perform isokinetic ensemble (NVK) ab initio molecular dynamics (AIMD) with Gaussian thermostat69 at temperatures of 3000 and 300 K, with electronic and ionic temperatures the same, and time steps of 1 and 2 fs for the Mo and TiO2 systems, respectively. In particular, we perform 10 steps of the AIMD simulation before collecting timings, i.e., after the computational timings per MD step have stabilized.

In all calculations, we employ optimized norm-conserving Vanderbilt (ONCV) pseudopotentials70 with nonlinear core corrections (NLCCs) from the SPMS set,71 which have 14, 12, and 6 electrons in valence for Mo, Ti, and O, respectively. In addition, we employ the restarted Periodic Pulay mixing scheme,72,73 real-space Kerker preconditioning,74,75 and the Alternating Anderson–Richardson (AAR)76,77 linear solver for the Poisson equation. The Poisson equation is solved entirely on the CPUs since it takes a very small fraction of the total time. Indeed, it could be immediately ported to the GPUs using the Laplacian-vector product routine described in Sec. II A, but this is not done, in order to maximize code simplicity. The numbers of orbitals chosen for the Mo systems, Mo250, Mo686, and Mo1024, are Ns = 2105, 5767, and 8606, respectively; and for the TiO2 systems, (TiO2)48, (TiO2)108, and (TiO2)192, Ns = 696, 1560, and 2769, respectively, as automatically determined by the SPARC code. The grid spacings used for the Mo and TiO2 systems are 0.372 and 0.3 bohr, respectively, which translates to Nd = 80 × 80 × 80, 112 × 112 × 112, and 144 × 144 × 144 finite-difference nodes for the Mo250, Mo686, and Mo1024 systems, respectively; and Nd = 58 × 37 × 225, 88 × 56 × 225, and 117 × 75 × 225 for the (TiO2)48, (TiO2)108, and (TiO2)192 systems, respectively. The Chebyshev filtering polynomial degrees for the Mo and TiO2 systems are 21 and 25, respectively, which are the SPARC defaults for the chosen grid spacings. All numerical parameters, including grid spacing and SCF tolerances, are chosen to provide chemical accuracy (10⁻³ Ha/atom), as typical in AIMD simulations and as targeted in recent TiO2 applications in particular.65 All simulations are carried out on the Lassen supercomputer at the Lawrence Livermore National Laboratory (LLNL),78 wherein each computational node has 4 NVIDIA Volta V100 GPUs with 16 GB of memory each and 40 IBM POWER9 CPU cores with a total of 256 GB of memory. We use all 40 CPU cores per computational node in CPU-only runs, with one CPU thread (MPI rank) per CPU core, and 4 CPU cores and 4 GPUs per computational node in GPU-accelerated runs, with one CPU thread (MPI rank) per CPU core—the configuration that was found to be most efficient. This translates to speedups in terms of core hours being a factor of 10 larger than those in terms of node hours, which are the focus here.

In Fig. 1, we present the strong scaling results obtained for the chosen Mo and TiO2 systems. In particular, we report the variation in the total wall time per MD step—which includes three SCF iterations and calculation of the Hellmann–Feynman atomic forces—with the number of computational nodes. It is clear that the GPU implementation demonstrates good parallel strong scaling, with a continuous decrease in the time to solution as the number of nodes is increased. The parallel scaling for the TiO2 systems is especially good by virtue of parallelization over wavevectors in the Brillouin zone in addition to parallelization over orbitals. In particular, GPU-accelerated execution provides significant speedup compared to CPU-only execution—with the same number of SCF iterations in both instances, as generally found for a given SCF tolerance—with a maximum speedup of 3.3×, 6.0×, and 6.2× for Mo250, Mo686, and Mo1024, respectively; and 2.7×, 4.4×, and 6.4× for (TiO2)48, (TiO2)108, and (TiO2)192, respectively. Furthermore, the minimum MD step times are well within half a minute: 2.7, 12.9, and 28.3 s for Mo250, Mo686, and Mo1024; and 3.8, 8.9, and 13.0 s for (TiO2)48, (TiO2)108, and (TiO2)192, respectively, demonstrating the attractiveness of GPU-accelerated SPARC for AIMD. The figure also indicates that the rate of decrease in the time to solution reduces as the number of computational nodes is increased, with the speedup having an inverse correlation with the number of computational nodes and a direct correlation with the problem size. This is due to the fact that, within memory constraints, the GPU is able to simultaneously process much larger amounts of data than a CPU; therefore, a reduction in the computational workload on a GPU does not translate into a commensurate reduction in time. We have verified this behavior by performing the Mo1024 simulation with a grid spacing of 0.22 bohr, as would correspond to a harder pseudopotential or higher accuracy, and found a speedup and timing of 5.6× and 86 s on 64 computational nodes, respectively. The corresponding numbers for the 0.372 bohr mesh (Fig. 1) are 3.9× and 33.3 s, respectively. Indeed, though the number of finite-difference nodes increased by a factor of 4.8×, the wall time increased by only a factor of 2.6×, even with the Chebyshev polynomial degree increasing from 23 to 33. It is worth noting that, since we have considered three system sizes each for both Mo and TiO2, the weak scaling of O(N²)–O(N³) can also be inferred from the curves, consistent with scaling estimates for the chosen system sizes. It is also worth noting that since the largest speedups occur on the smallest computational resources, the reduction in wall time is especially useful in real-world production runs where resources are generally limited.

FIG. 1.

Strong scaling of MD step time in GPU-accelerated SPARC on the Lassen supercomputer,78 where each computational node has 4 GPUs and 40 CPU cores. The timings correspond to using 4 GPUs and 4 CPU threads on each computational node. The number displayed next to each marker represents the speedup in time to solution relative to CPU-only execution, wherein all 40 CPU cores on each computational node are utilized, i.e., the speedup in node hours. The speedup in core hours is a factor of 10 larger. The timings include the computation of atomic forces.


To get further insight into the performance of the GPU-accelerated SPARC code, we determine the timings for each of the main CheFSI steps: Chebyshev filtering, projection, subspace diagonalization, and rotation, the details of which are available in Sec. II. Note that with the GPU implementation of the nonlocal forces, the time taken in the calculation of the atomic forces is less than 1% of the total time in GPU-accelerated execution, similar to CPU-only execution, which motivates its exclusion from the analysis here. In Figs. 2 and 3, we present the breakdown of the timings for GPU-accelerated and CPU-only executions on the minimum and maximum number of computational nodes used in the strong scaling study for each system (Fig. 1). It is clear that, other than for subspace diagonalization, the speedups for each of the steps on the smallest number of nodes are significantly larger than on the largest number of nodes, for the reasons discussed above with regard to the processing capability of the GPU. There is no noticeable change in the timing of the subspace diagonalization since it is restricted to a single GPU in GPU-accelerated execution, while the number of CPU threads on which it is performed is large enough in all cases that the timing remains relatively unchanged in the strong scaling study. As is to be expected, for both GPU-accelerated and CPU-only executions, the O(N³) steps, i.e., projection, subspace diagonalization, and rotation, become more dominant as the system size increases. The strong scaling efficiency of the different CheFSI steps in GPU-accelerated execution is in the following order: Chebyshev filtering > rotation > projection > subspace diagonalization. The efficiency of Chebyshev filtering is the highest, given that it employs orbital parallelization, whereby all the computations happen independently on the GPUs, without the need for communication between the GPUs or CPUs during the whole step. The relatively large amount of global communication required for forming the subspace Hamiltonian and overlap matrices during the projection step makes its scaling worse than that of the rotation step, which would otherwise be similar. The subspace diagonalization timings remain unchanged, by virtue of being run on the same number of processors throughout the strong scaling study, as discussed above. Note that the ordering of the strong scaling efficiency of the different steps in CPU-only execution mirrors that in GPU-accelerated execution, for reasons similar to those discussed above.

FIG. 2.

Breakdown of the timings for GPU-accelerated and CPU-only SPARC execution on the minimum number of computational nodes used in the strong scaling study (Fig. 1). In terms of node hours, the speedups in (filtering, projection, diagonalization, rotation) for Mo250, Mo686, and Mo1024 are (3.5×, 4.0×, 3.6×, 3.8×), (4.9×, 7.7×, 2.81×, 7.9×), and (5.0×, 7.9×, 2.0×, 9.2×), respectively. The corresponding numbers for (TiO2)48, (TiO2)108, and (TiO2)192 are (2.6×, 3.2×, 11.9×, 3.4×), (4.3×, 5.2×, 5.0×, 5.4×), and (5.1×, 8.3×, 4.1×, 8.8×), respectively. The speedups are a factor of 10 larger when considering core hours.

FIG. 3.

Breakdown of the timings for GPU-accelerated and CPU-only SPARC execution on the maximum number of computational nodes used in the strong scaling study (Fig. 1). In terms of node hours, the speedups in (filtering, projection, diagonalization, rotation) for Mo250, Mo686, and Mo1024 are (1.0×, 2.8×, 3.6×, 1.3×), (2.0×, 3.9×, 2.81×, 3.7×), and (2.8×, 6.2×, 2.2×, 4.8×), respectively. The corresponding numbers for (TiO2)48, (TiO2)108, and (TiO2)192 are (1.9×, 3.9×, 11.9×, 2.9×), (3.5×, 4.8×, 5.0×, 4.9×), and (4.2×, 5.1×, 4.1×, 6.7×), respectively. The speedups are a factor of 10 larger when considering core hours.


We have presented a GPU-accelerated implementation of the real-space SPARC electronic structure code for performing Kohn–Sham DFT calculations with LDA/GGA exchange–correlation functionals. In particular, we have developed a modular math-kernel based implementation for NVIDIA architectures in which the computationally intensive operations are carried out on the GPUs, while the remainder of the workload is retained on the CPUs. Through representative bulk and slab examples, we have shown that relative to CPU-only execution, GPUs enable speedups of up to 6× and 60× in node and core hours, respectively, bringing time to solution down to less than 30 s for a metallic system with over 14 000 electrons and enabling significant reductions in computational resources required for a given wall time. Opportunities for further reductions in wall time include (i) performing all-reduce operations in the projection step on GPUs through NVLink; (ii) having the CPUs, currently idle during GPU operation, perform some of the computations; and (iii) using more advanced NVIDIA H100 GPUs, possibly with mixed-precision computations (e.g., Ref. 29) that can take advantage of their tensor cores.

The modular yet general nature of the developed implementation allows for its relatively simple extension to other GPU architectures, e.g., AMD and Intel, which is currently being pursued by the authors. A GPU-accelerated Parallel Computation Engine (libPCE) is also in development, which targets problem sizes that do not fit on a single GPU and reduces the number of CPU-GPU and GPU-CPU transfers. It employs a distinct orbital + domain data distribution and uses CA3DMM79 to provide optimal or near-optimal communication for matrix–matrix products (used in the CheFSI projection and rotation steps). Other worthy subjects of research include extending the implementation to enable GPU acceleration for advanced semilocal/hybrid exchange–correlation functionals, which are significantly more computationally expensive than LDA/GGA; and GPU acceleration of the O(N) Spectral Quadrature (SQ) method80,81 in SPARC,44,82 which will enable the study of systems of a million atoms45 and more as larger-scale parallel computing platforms become available.

J.E.P., A.S., and P.S. gratefully acknowledge support from U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA): Advanced Simulation and Computing (ASC) Program at LLNL, and computational resources provided under the Multiprogrammatic and Institutional Computing programs at LLNL. P.S., L.E., and E.C. gratefully acknowledge support from the U.S. Department of Energy, Office of Science under Grant No. DE-SC0019410. This work was performed in part under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of Energy, or the U.S. Government.

The authors have no conflicts to disclose.

Abhiraj Sharma: Conceptualization (equal); Data curation (lead); Investigation (lead); Methodology (equal); Software (equal); Validation (lead); Writing – original draft (lead); Writing – review & editing (supporting). Alfredo Metere: Conceptualization (equal); Data curation (supporting); Investigation (supporting); Methodology (equal); Software (equal); Validation (supporting); Writing – review & editing (supporting). Phanish Suryanarayana: Conceptualization (supporting); Data curation (equal); Funding acquisition (lead); Investigation (equal); Methodology (equal); Project administration (equal); Supervision (supporting); Validation (supporting); Writing – original draft (supporting); Writing – review & editing (lead). Lucas Erlandson: Conceptualization (supporting); Investigation (equal); Methodology (equal); Software (equal); Validation (supporting); Writing – review & editing (supporting). Edmond Chow: Conceptualization (supporting); Funding acquisition (equal); Investigation (supporting); Methodology (supporting); Project administration (supporting); Software (supporting); Supervision (supporting); Writing – review & editing (supporting). John E. Pask: Conceptualization (lead); Data curation (supporting); Funding acquisition (lead); Investigation (equal); Methodology (lead); Project administration (lead); Software (supporting); Supervision (lead); Validation (supporting); Writing – original draft (supporting); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. P. Hohenberg and W. Kohn, Phys. Rev. 136, B864 (1964).
2. W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
3.
4. A. D. Becke, J. Chem. Phys. 140, 18A301 (2014).
5. R. Martin, Electronic Structure: Basic Theory and Practical Methods (Cambridge University Press, 2004).
6. G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
7. S. J. Clark, M. D. Segall, C. J. Pickard, P. J. Hasnip, M. I. J. Probert, K. Refson, and M. C. Payne, Z. Kristallogr. Cryst. Mater. 220, 567 (2005).
8. X. Gonze, J.-M. Beuken, R. Caracas, F. Detraux, M. Fuchs, G.-M. Rignanese, L. Sindic, M. Verstraete, G. Zerah, F. Jollet et al., Comput. Mater. Sci. 25, 478 (2002).
9. P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo et al., J. Phys.: Condens. Matter 21, 395502 (2009).
10. D. Marx and J. Hutter, Mod. Methods Algorithms Quantum Chem. 1, 301 (2000).
11. S. Ismail-Beigi and T. A. Arias, Comput. Phys. Commun. 128, 1 (2000).
12.
13. M. Valiev, E. J. Bylaska, N. Govind, K. Kowalski, T. P. Straatsma, H. J. J. Van Dam, D. Wang, J. Nieplocha, E. Apra, T. L. Windus et al., Comput. Phys. Commun. 181, 1477 (2010).
14.
15. D. R. Bowler and T. Miyazaki, Rep. Prog. Phys. 75, 036503 (2012).
16. J. Aarons, M. Sarwar, D. Thompsett, and C.-K. Skylaris, J. Chem. Phys. 145, 220901 (2016).
17. A. D. Becke, Int. J. Quantum Chem. 36, 599 (1989).
18. J. R. Chelikowsky, N. Troullier, and Y. Saad, Phys. Rev. Lett. 72, 1240 (1994).
19. L. Genovese, A. Neelov, S. Goedecker, T. Deutsch, S. A. Ghasemi, A. Willand, D. Caliste, O. Zilberberg, M. Rayson, A. Bergman et al., J. Chem. Phys. 129, 014109 (2008).
20. A. P. Seitsonen, M. J. Puska, and R. M. Nieminen, Phys. Rev. B 51, 14057 (1995).
21. S. R. White, J. W. Wilkins, and M. P. Teter, Phys. Rev. B 39, 5819 (1989).
22. J.-I. Iwata, D. Takahashi, A. Oshiyama, T. Boku, K. Shiraishi, S. Okada, and K. Yabana, J. Comput. Phys. 229, 2339 (2010).
23. E. Tsuchida and M. Tsukada, Phys. Rev. B 52, 5573 (1995).
24. Q. Xu, P. Suryanarayana, and J. E. Pask, J. Chem. Phys. 149, 094104 (2018).
25. P. Suryanarayana, K. Bhattacharya, and M. Ortiz, J. Comput. Phys. 230, 5226 (2011).
26. P. Suryanarayana, V. Gavini, T. Blesgen, K. Bhattacharya, and M. Ortiz, J. Mech. Phys. Solids 58, 256 (2010).
27. C.-K. Skylaris, P. D. Haynes, A. A. Mostofi, and M. C. Payne, J. Chem. Phys. 122, 084119 (2005).
28. D. R. Bowler, R. Choudhury, M. J. Gillan, and T. Miyazaki, Phys. Status Solidi B 243, 989 (2006).
29. S. Das, P. Motamarri, V. Subramanian, D. M. Rogers, and V. Gavini, Comput. Phys. Commun. 280, 108473 (2022).
30. A. Castro, H. Appel, M. Oliveira, C. A. Rozzi, X. Andrade, F. Lorenzen, M. A. L. Marques, E. K. U. Gross, and A. Rubio, Phys. Status Solidi B 243, 2465 (2006).
31. E. L. Briggs, D. J. Sullivan, and J. Bernholc, Phys. Rev. B 54, 14362 (1996).
32. J.-L. Fattebert, J. Comput. Phys. 149, 75 (1999).
33. F. Shimojo, R. K. Kalia, A. Nakano, and P. Vashishta, Comput. Phys. Commun. 140, 303 (2001).
34. S. Ghosh and P. Suryanarayana, Comput. Phys. Commun. 216, 109 (2017).
35.
36. J. E. Pask and P. A. Sterne, Model. Simul. Mater. Sci. Eng. 13, R71 (2005).
37. L. Lin, J. Lu, L. Ying, and W. E, J. Comput. Phys. 231, 2140 (2012).
38.
39. Y. Saad, J. R. Chelikowsky, and S. M. Shontz, SIAM Rev. 52, 3 (2010).
40. A. Sharma and P. Suryanarayana, Phys. Rev. B 103, 035101 (2021).
41. S. Ghosh, A. S. Banerjee, and P. Suryanarayana, Phys. Rev. B 100, 125143 (2019).
42. Y. Hasegawa, J.-I. Iwata, M. Tsuji, D. Takahashi, A. Oshiyama, K. Minami, T. Boku, F. Shoji, A. Uno, M. Kurokawa et al., in Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis (ACM, 2011), p. 1.
43. D. Osei-Kuffuor and J.-L. Fattebert, Phys. Rev. Lett. 112, 046401 (2014).
44. P. Suryanarayana, P. P. Pratapa, A. Sharma, and J. E. Pask, Comput. Phys. Commun. 224, 288 (2018).
45. V. Gavini, S. Baroni, V. Blum, D. R. Bowler, A. Buccheri, J. R. Chelikowsky, S. Das, W. Dawson, P. Delugas, M. Dogan et al., arXiv:2209.12747 (2022).
46. Q. Xu, A. Sharma, B. Comer, H. Huang, E. Chow, A. J. Medford, J. E. Pask, and P. Suryanarayana, SoftwareX 15, 100709 (2021).
47. S. Ghosh and P. Suryanarayana, Comput. Phys. Commun. 212, 189 (2017).
48. R. C. Walker and A. W. Goetz, Electronic Structure Calculations on Graphics Processing Units: From Quantum Chemistry to Condensed Matter Physics (John Wiley and Sons, 2016).
49. X. Gonze, F. Jollet, F. A. Araujo, D. Adams, B. Amadon, T. Applencourt, C. Audouze, J.-M. Beuken, J. Bieder, A. Bokhanchuk et al., Comput. Phys. Commun. 205, 106 (2016).
50. L. Genovese, M. Ospici, T. Deutsch, J.-F. Méhaut, A. Neelov, and S. Goedecker, J. Chem. Phys. 131, 034103 (2009).
51. L. Genovese, B. Videau, D. Caliste, J.-F. Méhaut, S. Goedecker, and T. Deutsch, in Electronic Structure Calculations on Graphics Processing Units: From Quantum Chemistry to Condensed Matter Physics (2016), pp. 115–134.
52. P. Manninen and P. Öster, Applied Parallel and Scientific Computing: 11th International Conference, PARA 2012 (Springer, Helsinki, Finland, 2013), Vol. 7782.
53. S. Maintz, B. Eck, and R. Dronskowski, Comput. Phys. Commun. 182, 1421 (2011).
54. M. Hacene, A. Anciaux-Sedrakian, X. Rozanska, D. Klahr, T. Guignon, and P. Fleurat-Lessard, J. Comput. Chem. 33, 2581 (2012).
55. W. Jia, J. Wang, X. Chi, and L.-W. Wang, Comput. Phys. Commun. 211, 8 (2017).
56. X. Andrade, J. Alberdi-Rodriguez, D. A. Strubbe, M. J. T. Oliveira, F. Nogueira, A. Castro, J. Muguerza, A. Arruabarrena, S. G. Louie, A. Aspuru-Guzik et al., J. Phys.: Condens. Matter 24, 233202 (2012).
57. K. Wilkinson and C.-K. Skylaris, J. Comput. Chem. 34, 2446 (2013).
58. W. Jia, J. Fu, Z. Cao, L. Wang, X. Chi, W. Gao, and L.-W. Wang, J. Comput. Phys. 251, 102 (2013).
59. J. Romero, E. Phillips, G. Ruetsch, M. Fatica, F. Spiga, and P. Giannozzi, in International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (Springer, 2018), pp. 67–87.
60. W. P. Huhn, B. Lange, V. W.-z. Yu, M. Yoon, and V. Blum, Comput. Phys. Commun. 254, 107314 (2020).
61. Y. Zhou, Y. Saad, M. L. Tiago, and J. R. Chelikowsky, J. Comput. Phys. 219, 172 (2006).
62. Y. Zhou, Y. Saad, M. L. Tiago, and J. R. Chelikowsky, Phys. Rev. E 74, 066704 (2006).
63. A. Sharma and P. Suryanarayana, J. Chem. Phys. 149, 194104 (2018).
64. P. Micikevicius, in Proceedings of 2nd Workshop on General Purpose Processing on Graphics Processing Units (Association for Computing Machinery, 2009), pp. 79–84.
65. S. J. Sahoo, X. Jing, P. Suryanarayana, and A. J. Medford, J. Phys. Chem. C 126, 2121 (2022).
66. J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981).
67. J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
68. H. J. Monkhorst and J. D. Pack, Phys. Rev. B 13, 5188 (1976).
69. Y. Zhang and W. Yang, Phys. Rev. Lett. 80, 890 (1998).
70.
71. M. F. Shojaei, J. E. Pask, A. J. Medford, and P. Suryanarayana, Comput. Phys. Commun. 283, 108594 (2023).
72. P. P. Pratapa and P. Suryanarayana, Chem. Phys. Lett. 635, 69 (2015).
73. A. S. Banerjee, P. Suryanarayana, and J. E. Pask, Chem. Phys. Lett. 647, 31 (2016).
74.
75. S. Kumar, Q. Xu, and P. Suryanarayana, Chem. Phys. Lett. 739, 136983 (2020).
76. P. Suryanarayana, P. P. Pratapa, and J. E. Pask, Comput. Phys. Commun. 234, 278 (2019).
77. P. P. Pratapa, P. Suryanarayana, and J. E. Pask, J. Comput. Phys. 306, 43 (2016).
78. Lawrence Livermore National Laboratory (LLNL) high performance computing systems: https://hpc.llnl.gov/hardware/compute-platforms, accessed 06 January 2023.
79. H. Huang and E. Chow, in Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis (2022).
80. P. Suryanarayana, Chem. Phys. Lett. 584, 182 (2013).
81. P. P. Pratapa, P. Suryanarayana, and J. E. Pask, "Spectral quadrature method for accurate O(N) electronic structure calculations of metals and insulators," Comput. Phys. Commun. 200, 96–107 (2016).
82. K. Bhattacharya, V. Gavini, M. Ortiz, M. Ponga, and P. Suryanarayana, arXiv:2112.06016 (2021).