Two existing particle-in-cell gyrokinetic codes, GEM for the core region and XGC for the edge region, have been successfully coupled with a spatial coupling scheme at the interface in a toroidal geometry. A mapping technique is developed for transferring data between GEM's structured and XGC's unstructured meshes. Two examples of coupled simulations are presented to demonstrate the coupling scheme. The optimization of GEM for graphics processing units (GPUs) is also presented.

The study of burning plasmas will be the next frontier of fusion research with the upcoming ITER in the next decade. Based on several fusion experiments as well as theory and simulation, ITER is expected to reach plasma regimes beyond those of present and previous experiments. It is thus important that theory and simulation be able to reliably predict the plasma behavior of ITER, so as to help ITER fulfill its potential through well-confined, disruption-free, burning operation scenarios. Since the behavior of a fusion plasma in tokamaks involves multiple spatial and time scales, a Whole Device Model (WDM) is critical for understanding and predicting the operation of existing fusion devices and future facilities such as ITER.

Recently, an ambitious simulation project on high-fidelity whole device modeling of magnetically confined fusion plasmas was launched by the Department of Energy (DOE) Exascale Computing Project (ECP). It aims to deliver a first-principles-based computational tool that simulates both the neoclassical transport and the anomalous turbulent transport from the core to the edge of a tokamak. To enable such simulations, it has been proposed that existing gyrokinetic codes1 be coupled together in order to improve computational efficiency. This strategy takes advantage of the complementary nature of different applications to build a more advanced and efficient whole device kinetic transport kernel. A core gyrokinetic code, such as GEM2,3 or GENE,4,5 is well optimized to model small-amplitude, weak-gradient-driven turbulence for plasmas within the closed flux surfaces, while an edge gyrokinetic code such as XGC6–9 is designed to model large-amplitude fluctuations, strong-gradient-driven turbulence, and neoclassical physics for plasmas crossing the magnetic separatrix and in the open magnetic field region up to the divertor plates. Prior to core-edge coupling with different codes, some prerequisite steps have already been taken, such as the cross-verification10 of the global gyrokinetic codes GENE, ORB5,11 and XGC, as well as the development of the spatial coupling scheme1 through XGC(core)–XGC(edge) coupling. Cross-verification of GEM and XGC is also a prerequisite for the target coupling physics problem.

In this paper, we present a numerical scheme for coupling GEM and XGC, where GEM is used for the core and XGC for the edge. GEM is optimized for the simulation of core micro-turbulence and transport in two main aspects. First, the δf particle-in-cell (PIC) algorithm is used to take advantage of the low fluctuation amplitude in the core. Second, particle search is simpler on the structured grids, which are constructed using the field-line-following property of the closed flux surfaces in a tokamak. On the other hand, XGC was originally designed for the simulation of edge turbulence and transport. Due to the complexity of the magnetic field configuration near the X-point, the limiter, and the divertor, cylindrical coordinates (R,Z,ζ) and an unstructured triangular mesh are used to describe the particle motion. Due to the high fluctuation amplitude in the edge, the required number of particles is also much larger than in a core simulation.

In principle, XGC alone can be used to simulate the whole volume. In such a simulation, the unstructured grids in the core region need to obey rules similar to those required in the edge region, and the core simulation needs to adopt the same time-consuming algorithms as the edge to deal with the complexity of the grids. Furthermore, it is difficult to reduce the particle number in the core region, as the δf-PIC algorithm requires that the core and edge regions in an XGC-alone simulation have an equivalent number of marker particles for the same spatial volume. Because a large number of particles are required to resolve the large fluctuation amplitude in the edge, the total number of particles in the core region would far exceed what is necessary for resolving the low-amplitude fluctuations there. This would greatly increase the required computing resources. A scheme based on coupling two different codes should therefore be pursued. Such a coupling scheme would fully utilize code features that are specialized to the core and edge plasma conditions, and this is the motivation of the present work.

Here, we present an implementation of the spatial coupling scheme,1 which has been previously demonstrated with the XGC–XGC coupling. In the XGC–XGC coupling, two XGC simulations are coupled as if they were independent codes. The simulations are coupled only through an overlap region, where charge density and potential are exchanged. The same mesh is used in both simulations for the purpose of developing the coupling scheme, so no data mapping is needed. In the GEM–XGC coupling, the data mapping becomes essential. The poloidal cross section is divided into three regions: the core region of GEM, the edge region of XGC, and an overlapping region included in both codes. Time-stepping of the distribution functions is achieved by pushing the composite distribution function independently in each code, while the electric potential is solved solely in XGC and then passed to GEM. Specifically, in the gyrokinetic electrostatic GEM–XGC coupled simulations, the complete process is split into four steps: (1) GEM and XGC push particles independently; (2) GEM's charge density is sent to XGC; (3) XGC's field solver solves the gyrokinetic Poisson equation for the potential using the composite charge density; and (4) GEM receives the potential field from XGC, so that both GEM and XGC can push particles with the updated field.
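The four-step cycle can be summarized with a short control-flow sketch. This is a minimal illustration of the logic only, assuming hypothetical wrapper objects; the method names (push_particles, deposit_charge, solve_poisson, etc.) are placeholders and not the actual GEM, XGC, or ADIOS interfaces.

```python
# Minimal sketch of one coupled GEM-XGC electrostatic time step.
# All object and method names are hypothetical placeholders for the
# actual GEM/XGC routines and the ADIOS-based data exchange.

def coupled_time_step(gem, xgc, coupler):
    # (1) Both codes advance their own marker particles with the current field.
    gem.push_particles()
    xgc.push_particles()

    # (2) GEM deposits its charge density and sends it to XGC, mapped from
    #     GEM's flux coordinates to XGC's unstructured mesh.
    coupler.send_to_xgc(gem.deposit_charge())

    # (3) XGC blends the densities with the connection function and solves
    #     the gyrokinetic Poisson equation over the whole domain.
    rho = xgc.combine_density(coupler.receive_from_gem())
    phi = xgc.solve_poisson(rho)

    # (4) The potential is mapped back to GEM's grid; both codes now hold
    #     the updated field for the next push.
    gem.receive_potential(coupler.map_to_gem(phi))
```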

Since GEM uses field-line-following magnetic-flux coordinates and XGC adopts field-line-following cylindrical coordinates (R,Z,ζ), a parallelized mapping technique between GEM and XGC is developed to convert between these different coordinates. Within the coupling framework,12 the software ADIOS13 is used to exchange data through files or memory, which keeps the cost of data exchange to a very small fraction of the total simulation cost. As a part of the ECP project, GPU optimization of GEM on Summit is also achieved, which shows good scaling for the particle evolution.

We present two examples to demonstrate the coupling scheme. In the first example, based on the Cyclone Base Case (CBC),14 the simulation domains of both codes are within the separatrix. A reference simulation is first carried out with XGC alone, to be used as a gauge for the coupled simulation. Good agreement is obtained between the coupled and reference cases for both linear and nonlinear simulations. In the second example, the edge region outside the separatrix is also included in a DIII-D-like plasma, with the edge region simulated by XGC and the core region by GEM. Good agreement between the reference simulation and the coupled simulation is again achieved.

This paper is organized as follows. A brief description of the established coupling scheme1 is given in Sec. II. Data mapping between GEM and XGC is presented in Sec. III. Two examples of coupled simulations are presented in Sec. IV. The GPU optimization of GEM on Summit is presented in Sec. V, and conclusions are given in Sec. VI.

In this section, we briefly describe the coupling scheme; we refer readers to Ref. 1 for more details. The radial domain is divided into three regions: the core, the edge, and an overlapping region. Let δfC denote the perturbed particle distribution function in the core code (GEM), which is also defined in the overlapping region. The radial size of the overlap region should be larger than the radial turbulence correlation length. Once initial values are assigned to δfC, it is evolved in GEM with the δf-PIC method. Similarly, δfE denotes the distribution in the edge region and is evolved in XGC. The two codes share the same set of gyrokinetic equations. In this study, XGC is operated in the δf mode. The two distribution functions are combined into a composite one,

δfˇ = ϖ δfC + (1 − ϖ) δfE, (1)

where ϖ=ϖ(ψgc) is a continuous connection function varying between 0 and 1 with respect to the gyrocenter's radial position ψgc, with ϖ=1 in the core, 0<ϖ<1 in the overlapping region, and ϖ=0 in the edge. The electrostatic potential ϕˇ is obtained by solving the global Poisson equation over the whole simulation domain, with the ion charge density obtained from the composite distribution δfˇ,

£ ϕˇ = n̄i, (2)

where the ion polarization density and the adiabatic electron response are denoted by the operator £. The ion charge density is given by

n̄i(x) = ∫ δfˇ(X, v∥, μ) δ[X + ρ − x] (B/m) dX dv∥ dμ dα, (3)

where X is the gyrocenter position, ρ is the Larmor vector, x is the particle position vector in configuration space, v∥ is the particle velocity parallel to the magnetic field, μ is the magnetic moment, α is the gyroangle, B is the magnetic field, m is the particle mass, and δ[·] is the Dirac delta function. For consistency, it is crucial to have ϖ defined in gyrocenter space, ϖ=ϖ(X), before applying the guiding-center pull-back operation, δ[X+ρ−x]. Equation (3) has important consequences (see more details in Ref. 1). In addition, GEM uses the usual Dirichlet boundary condition for ϕ. However, in the coupled simulations, the potential is provided by XGC, and no boundary condition is applied at the boundary of the overlap region. At the boundary of the buffer region,1 calculation of the electric field from the provided potential is limited to grid points inside the buffer. For grid points on the buffer boundary, the fields are set to zero, and particles crossing this boundary are put back inside the buffer. Our first step is to ensure that the real-time data exchange between the two codes is accurate, thereby verifying the particle pushing routines in both codes, and sufficiently fast so that the coupling scheme is practical. Even though the two codes are based on the same set of gyrokinetic equations, they use different discretization methods. Not only should the simulations from the individual codes be verified, but the coupled simulation should also be cross-verified against the reference simulation.

We use the δf method to evolve the perturbed distribution, δf = f − f0, with f0 being the Maxwellian distribution with radius-dependent density and temperature. It is to be noted that, in the present work, sources and collisions are not included. The marker particle distribution g is usually chosen to be different from f0. In this paper, two methods are used to evolve the weight distribution: the conventional δf scheme15 and the direct δf scheme.7 In the conventional δf scheme, the weight is obtained by advancing the ordinary differential equation for the weight, which is derived from the gyrokinetic equation for δf. In the direct δf method, use is made of the fact that the total distribution is constant along the particle trajectory between collisions; hence, δf is simply the difference between the total distribution at the initial particle phase-space location and the equilibrium distribution evaluated at the current location.

We define the weights wg ≡ δf/g and w ≡ δf/f at the particle phase-space coordinate. Defining w0 ≡ f/g, the weight for the conventional δf scheme evolves according to

(4)

Here, w0 ≡ f0(Xt=0, t=0)/g(Xt=0, t=0), where Xt=0 is the initial position. In Eq. (4), the invariance of f and g has been used. This is a consequence of the collisionless assumption: in a collisionless plasma, the total distribution is constant along the particle trajectory.

In the direct weight evolution method for the δf scheme, the weight is given by

(5)

where Δ represents the finite difference along the particle trajectory. This scheme is used for both GEM and XGC when the coupling simulations include the open field line region.
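To make the direct scheme concrete, the sketch below evaluates δf from the difference between the total distribution at the particle's initial phase-space location and the local equilibrium at the current location, as described above. The Maxwellian form and all function names are illustrative assumptions, not the GEM or XGC implementation.

```python
import numpy as np

def maxwellian(n, T, m, v_par, mu, B):
    """Local Maxwellian f0 for density n and temperature T (energy units)."""
    energy = 0.5 * m * v_par**2 + mu * B
    return n * (m / (2.0 * np.pi * T))**1.5 * np.exp(-energy / T)

def direct_delta_f(f_total_at_start, n, T, m, v_par, mu, B):
    """Direct-scheme perturbed distribution at the current phase-space point.

    f_total_at_start is the total distribution at the particle's initial
    location; since the total distribution is constant along the
    collisionless trajectory, delta-f is that value minus the local
    equilibrium evaluated at the current location.
    """
    return f_total_at_start - maxwellian(n, T, m, v_par, mu, B)
```

The corresponding particle weight then follows by dividing by the marker distribution (or the total distribution) carried by each code.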

In this section, we describe the numerical procedure for data exchange. A general equilibrium magnetic field in a tokamak is given by

B = ∇ψ × ∇ζ + f(ψ) ζ̂/R, (6)

where ψ is the poloidal magnetic flux, ζ is the toroidal angle, ζ̂ is the unit vector along the ζ direction, R is the major radius, and f(ψ) = RBζ, with Bζ the toroidal component of the magnetic field. GEM uses the field-line-following magnetic-flux coordinates (x, y, z),

(7)

where r = (Rmax − Rmin)/2 is a flux surface label, Rmax and Rmin are the major radii of the two points where the flux surface intersects the horizontal plane containing the magnetic axis, r0 is a reference radius, R0 is the major radius of the magnetic axis, q is the safety factor and q0 = q(r0), θ is the usual poloidal angle, with tan θ = Z/(R − R0) such that the cylindrical coordinates (R,Z,ζ) are right-handed, and θf is the straight-field-line poloidal angle,

θf = (1/q) ∫₀^θ q̂(r, θ′) dθ′, (8)

q̂(r,θ) = B·∇ζ / B·∇θ, and q(r) = B·∇ζ / B·∇θf. The simulation domain in GEM is r ∈ [rmin, rmax], θ ∈ [−π, π). The outer boundary rmax lies inside the separatrix. The inner boundary rmin cannot be the magnetic axis, as the Jacobian of the field-line-following coordinates becomes singular at r = 0. The simulation domain in the toroidal direction is often chosen to be a toroidal wedge, ζ ∈ [0, 2π/nw), and correspondingly, the exact y-coordinate becomes

(9)

Then, the simulation domain is given by (0, Lx) × (0, Ly) × (0, Lz) with Lx = rmax − rmin, Ly = (r0/q0)(2π/nw), and Lz = 2π q0 R0, which is divided into nx × ny × nz equally sized cells in (x, y, z) coordinates.

Given a numerical equilibrium, such as the files from the tokamak equilibrium reconstruction code EFIT (Equilibrium Fitting),16 GEM first constructs two numerical arrays to represent the arbitrary flux-surface shapes. These are the major radius R(r,θ) and the cylindrical coordinate Z(r,θ), on a set of structured (r,θ) grids. A typical resolution for this purpose is a few hundred grid points in each dimension. Quantities such as the equilibrium magnetic field magnitude B(r,θ), the straight-field-line poloidal angle θf, and geometric quantities such as ζ̂·∇r×∇θ are then computed and represented on the same (r,θ) grids. This approach of representing an axisymmetric tokamak equilibrium within the separatrix is completely general and is used even if the equilibrium allows for an analytic representation, as in the case of a model equilibrium with circular concentric flux surfaces. Whenever the value of a quantity at an arbitrary point is needed (e.g., the value of θf corresponding to an XGC grid point), it is obtained with a 2D linear interpolation in the (r,θ) plane.
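The 2D linear interpolation on the structured (r,θ) arrays can be sketched as a standard bilinear lookup; a monotonic tensor-product grid is assumed here, and periodicity in θ is ignored for brevity.

```python
import numpy as np

def interp2_linear(F, r_grid, th_grid, r, th):
    """Bilinear interpolation of a tabulated quantity F[i, j] defined on the
    structured (r, theta) grid, evaluated at an arbitrary point (r, th)."""
    i = int(np.clip(np.searchsorted(r_grid, r) - 1, 0, len(r_grid) - 2))
    j = int(np.clip(np.searchsorted(th_grid, th) - 1, 0, len(th_grid) - 2))
    tr = (r - r_grid[i]) / (r_grid[i + 1] - r_grid[i])
    tt = (th - th_grid[j]) / (th_grid[j + 1] - th_grid[j])
    return ((1 - tr) * (1 - tt) * F[i, j] + tr * (1 - tt) * F[i + 1, j]
            + (1 - tr) * tt * F[i, j + 1] + tr * tt * F[i + 1, j + 1])
```

The same routine can serve for R(r,θ), Z(r,θ), B(r,θ), or θf(r,θ), since they are all stored on the same grid.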

XGC uses the field-line-following cylindrical coordinates (R,Z,ζ) with regular grids in ζ and an unstructured triangular mesh in the (R, Z) plane that is twisted along the magnetic field lines so as to be identical at each ζ grid. The node points of the poloidal mesh reside on a set of magnetic flux surfaces and follow the magnetic field lines. Within one closed flux surface, each node point of the (R, Z) grid corresponds to a specific θ or θf. The mapping from (R, Z) to GEM's (r,θ) coordinates is obtained by first finding θ from tan θ = Z/(R − R0) and then finding r with a searching algorithm. Hence, (R,Z,ζ) can be written as (x,θ,ζ) or (x,θf,ζ). The same toroidal wedge is used in both XGC and GEM.
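A sketch of this node mapping is given below, assuming access to the interpolated flux-surface shape R(r,θ), Z(r,θ) described above and using plain bisection for the radial search; the search algorithm actually used in the mapping code may differ.

```python
import numpy as np

def rz_to_r_theta(R, Z, R0, surface_shape, r_min, r_max, tol=1e-10):
    """Map an XGC node (R, Z) to GEM's (r, theta).

    theta follows from tan(theta) = Z / (R - R0); r is then found by
    bisection, comparing the node's distance from the magnetic axis with
    that of the tabulated surface shape at the same theta.
    surface_shape(r, theta) -> (R_s, Z_s) is a hypothetical helper that
    interpolates the stored R(r, theta) and Z(r, theta) arrays.
    """
    theta = np.arctan2(Z, R - R0)
    d_target = np.hypot(R - R0, Z)

    def surface_distance(r):
        R_s, Z_s = surface_shape(r, theta)
        return np.hypot(R_s - R0, Z_s)

    lo, hi = r_min, r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if surface_distance(mid) < d_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), theta
```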

To simplify data exchange, the radial grids in GEM are chosen to coincide with the magnetic surfaces used in XGC's mesh that reside within GEM's radial domain (the radial overlap domain). When, for instance, the potential ϕ of XGC is sent to GEM and redefined on the GEM grids, interpolation is needed only within the flux surface, not in the radial direction. Data exchange then becomes a two-dimensional interpolation problem on each flux surface. We can use either the (θf,ζ) or the (y,θf) coordinates for the 2D interpolation. Although GEM's z-grids are more directly related to the angle θ, we find θf to be more convenient for mapping purposes, as it simplifies the relation between y and ζ.

In the remainder of this section, the subscript “GEM” is used to indicate the value of a coordinate at a GEM grid, and the subscript “XGC” is used to indicate an XGC grid. Thus, a grid in GEM is written as (x,yGEM,θf_GEM) or, equivalently, (x,θf_GEM,ζGEM). A grid in XGC is written as (x,θf_XGC,ζXGC) or, equivalently, (x,yXGC,θf_XGC). The subscript for x is suppressed, as the x-grids of the two codes coincide in the GEM domain.

The 2D interpolation is accomplished as two consecutive 1D interpolations. In the case of the ion charge density, which is transferred from GEM to XGC, one starts with n̄(x,yGEM,θf_GEM), first performs a 1D cubic interpolation in the θf-dimension to obtain n̄(x,yGEM,θf_XGC), and then performs a linear interpolation in the y-dimension to obtain n̄(x,yXGC,θf_XGC). We have verified that the cubic interpolation in the first step can be replaced with linear interpolation without changing the result. The order of the two interpolations is mandated by the requirement that no information be lost in constructing the intermediate array. The grids in both GEM and XGC take advantage of the field-aligned feature of micro-turbulence but differ in the choice of the field-aligned coordinate. In GEM, θ (or θf) is used as the field-line-following direction, such that the number of y-grids in GEM is far greater than the number of θ-grids, Ny_GEM ≫ Nθ_GEM. In XGC, the toroidal angle is chosen to be the field-line-following direction, such that Nθ_XGC ≫ Nζ_XGC. By performing the first interpolation in the θ-direction, i.e., from GEM's θ-grids to the denser θ-grids of XGC while holding the y-coordinate fixed, one simply assigns values to more points along the field line. The size of the intermediate array is much larger than the original array, by a factor of Nθ_XGC/Nθ_GEM ≫ 1, and no information is lost. If we instead performed the first interpolation in the y-direction while holding θf fixed, the intermediate array would be n̄(x,yXGC,θf_GEM), where yXGC are the values of y corresponding to XGC's ζ-grids, which are much fewer than the y-grids in GEM. Information in the y-direction would be lost, and the second interpolation in the θ-direction would be meaningless.
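The two-step interpolation on one flux surface can be sketched as follows; linear interpolation is used in both steps (which, as noted above, gives the same result as cubic in the first step), the grids are assumed monotonic, and periodicity is ignored for brevity.

```python
import numpy as np

def gem_density_to_xgc(n_gem, thf_gem, thf_xgc, y_gem, y_xgc):
    """Map n(y_GEM, thf_GEM) -> n(y_XGC, thf_XGC) on a single flux surface.

    Step 1: interpolate in theta_f, from GEM's coarse theta grid to XGC's
            dense theta grid, at every fixed GEM y value (no y information
            is lost in the intermediate array).
    Step 2: interpolate in y, down to the y values of XGC's zeta grid.
    """
    n_mid = np.empty((len(y_gem), len(thf_xgc)))
    for iy in range(len(y_gem)):
        n_mid[iy, :] = np.interp(thf_xgc, thf_gem, n_gem[iy, :])

    n_xgc = np.empty((len(y_xgc), len(thf_xgc)))
    for jth in range(len(thf_xgc)):
        n_xgc[:, jth] = np.interp(y_xgc, y_gem, n_mid[:, jth])
    return n_xgc
```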

The interpolation procedure is slightly more complicated in the case of ϕ, which is transferred from XGC to GEM. Starting from ϕ(x,θf_XGC,ζXGC), the first step is a 1D interpolation in the field-line-following direction (to be explained below), by which we obtain the potential defined on the same θf_XGC but a set of new ζ-grids, ϕ(x,θf_XGC,ζ(yGEM,θf_XGC)), with the new ζ grids given by

(10)

Now, we regard ϕ(x,θf_XGC,ζ(yGEM,θf_XGC)) as ϕ(x,yGEM,θf_XGC), as both refer to ϕ at the same spatial location. The second step is a cubic interpolation in the θf-direction to obtain ϕ(x,yGEM,θf_GEM), the desired representation for GEM.

The first step of interpolation along the field line can be understood as follows. Starting from ϕ(x,θf_XGC,ζXGC), a naive 1D interpolation in the ζ-direction while holding θf fixed cannot be used to obtain ϕ(x,θf_XGC,ζ(yGEM,θf_XGC)), as the number of the ζ-grids in XGC is not sufficient for resolving the mode structure along a line of constant θf. The mode structure at a fixed poloidal angle can vary strongly between two neighboring toroidal grids, especially for high-n modes. The mode is adequately resolved by the ζ-grids of XGC only if viewed along the field line. Consequently, interpolation along the field line must be used to obtain ϕ at the arbitrary toroidal angle. The procedure is schematically shown in Fig. 1. For brevity, we write the θf grids in XGC as θ0, θ1, and θ2 in Fig. 1. The field line passing through the point (x,yGEM,θ1) crosses the two nearby poloidal planes in the XGC mesh, at θleft,θright, respectively, with θleft between θ0 and θ1 and θright between θ1 and θ2. The potential at (x,yGEM,θ1) will be obtained with linear interpolation, using the potential values at (θleft,ζ0) and (θright,ζ1), which are in turn obtained with linear interpolation using values on the XGC grids. For instance, ϕ(θleft,ζ0) is obtained from ϕ(θ0,ζ0) and ϕ(θ1,ζ0). The combined linear field-line-following interpolation is given by

(11)

where wζ0 = (ζ1 − ζy)/(ζ1 − ζ0), wζ1 = 1 − wζ0, w00 = (θleft − θ1)/(θ1 − θ0), w01 = 1 − w00, and w10 = (θright − θ2)/(θ2 − θ1), respectively.
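A sketch of this step for a single GEM point is given below. The four surrounding XGC values are first combined poloidally on each neighboring plane and then blended along the field line. The weights are written here in the standard linear-interpolation form, which may differ in sign convention from the notation of Eq. (11), and all names are illustrative.

```python
def phi_on_gem_point(phi, th0, th1, th2, th_left, th_right, z0, z1, z_y):
    """Field-line-following interpolation of the potential to one GEM point.

    phi is a mapping {(theta, zeta): value} holding the four surrounding XGC
    grid values.  The field line through the GEM point crosses the two
    neighboring poloidal planes at (th_left, z0) and (th_right, z1); each
    crossing value is a 1D linear interpolation in theta_f, and the two are
    then blended linearly along the field line.
    """
    # poloidal interpolation on the two neighboring poloidal planes
    a = (th1 - th_left) / (th1 - th0)
    phi_left = a * phi[(th0, z0)] + (1.0 - a) * phi[(th1, z0)]
    b = (th2 - th_right) / (th2 - th1)
    phi_right = b * phi[(th1, z1)] + (1.0 - b) * phi[(th2, z1)]
    # blend along the field line between the two planes
    w = (z1 - z_y) / (z1 - z0)
    return w * phi_left + (1.0 - w) * phi_right
```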

FIG. 1. Linear field-line-following interpolation between the (x,yGEM,θf_XGC) and (x,θf_XGC,ζXGC) grids. The dots indicate the XGC grids, and the solid lines show the poloidal planes. The triangle indicates (x,yGEM,θ1); the short dashed line indicates its location in XGC coordinates, and the long dashed line shows the magnetic field line.

In the mapping operation, the meshes of both GEM and XGC are generated from the EFIT files,16 where the Grad–Shafranov equilibrium is solved for both the Cyclone Base Case and the more realistic DIII-D equilibrium. In addition, the coefficients for the linear field-line-following interpolation (x,θf,ζ) → (x,y,θf) and for the linear interpolation (x,y,θf) → (x,θf,ζ) are calculated once at the beginning of the coupled simulation and saved for subsequent time steps. The calculation for all the flux surfaces proceeds independently and is fully parallelized.

It is worth noting that conventional PIC simulations require the same grid interpolation scheme for both the particle deposition and the field calculation to obtain good conservation properties, e.g., no particle self-force. Some caution is therefore in order in choosing and verifying the interpolation schemes in the two codes and in the mapping algorithm.

In this section, we present two examples of coupled GEM–XGC simulation that demonstrate the feasibility of the coupling scheme. In each example, a reference XGC-alone simulation is performed for comparison. Collisions and particle/energy source terms are not included in these simulations. Electrons are assumed to be adiabatic. Since the time histories of the heat flux and the growth rate are compared, the same initial condition must be implemented in both codes. Expressed in the GEM coordinates, the initial condition for the ion weights is chosen to be

(12)

where A is the amplitude for each toroidal mode and nky is the number of toroidal modes for initialization. The initial condition for the perturbed distribution is δf=wf0(Xt=0,t=0). In XGC, the weight of a marker is initialized accordingly, taking into account XGC's marker distribution.

For simplicity, we first adopt a simple equilibrium without the open field line region. This equilibrium, defined as case V in Ref. 17, is based on the experimental DIII-D discharge underlying the Cyclone Base Case (CBC).14 In this case, the major radius R0 = 1.68 m, the minor radius a = 0.59 m, and the magnetic field on axis B0 = 2.09 T. A deuterium plasma is considered with the same temperature and density profiles (Fig. 2) for both ions and electrons.

FIG. 2. Normalized radial profiles (blue lines) and corresponding gradient scale lengths (red lines) of temperature (left) and density (right) for the CBC.

We first perform a benchmark of GEM and XGC with a series of single-mode linear simulations. We use the plasma equilibrium previously used to benchmark GENE and XGC.10 The results for mode frequency and growth rate from all three codes are plotted in Fig. 3, with data from GEM, GENE, and XGC shown in blue, red, and green, respectively. Good agreement is obtained among the three codes. Data from GENE and XGC in Fig. 3 are obtained from the benchmark of Merlo et al.10 For the linear growth rate, the three codes agree to within 8% for all modes considered. The frequencies agree to within 3.5%. The remaining difference is likely due to different ways of discretizing the quasi-neutrality equation. GEM starts with a spectral form of the ion polarization density in the xy plane, which is Fourier transformed in y and then discretized with a pseudospectral method,3 resulting in a 1D equation in x for each (ky,z). XGC, on the other hand, uses a Padé approximation for the ion polarization density, which is discretized directly in the 2D poloidal cross section for each ζ-grid. As long as the gyrokinetic ordering is valid, both methods are good approximations, but they are not identical. Better agreement can be expected if the same Poisson solver is used in both GEM and XGC. We have tested this hypothesis for the n = 24 mode, for which the difference in the growth rates between GEM and XGC is 1.1%. If the XGC Poisson solver is used in the GEM simulation, the difference in the growth rate decreases to 0.2%. In order to eliminate the difference due to the Poisson solver, in the following coupled simulations, only the XGC Poisson solver is used.

FIG. 3. Growth rates γ (left) and real frequencies ω (right) of the most unstable mode as a function of the toroidal mode number n. GEM results are shown in blue, GENE results in red, and XGC results in green.

In the coupled simulations, the “core” region occupied by GEM is ψ/ψx = [0.0191, 0.897] or r/a = [0.1, 0.9]; the “edge” region occupied by XGC is ψ/ψx = [0.0191, 1] or r/a = [0.1, 1]; the overlap region is ψ/ψx = [0.333, 0.462] or r/a = [0.45, 0.55], where the density and temperature gradients are steepest and the ion-temperature-gradient (ITG) modes are most unstable; the buffer regions are r/a = [0.55, 0.9] in GEM and r/a = [0.1, 0.45] in XGC. The purpose of the buffer region18 is to reduce the effect of the artificial boundary condition for the particles at the edge of the overlap region. For convenience, the buffer regions in this paper are chosen to be large. In a realistic coupled simulation, they should be smaller, e.g., on the order of the particle orbit width. We choose the instability growth region to reside in the overlap region to demonstrate the robustness of our coupling scheme. The continuous connection function in the coupling region is chosen as the linear function ϖ(r/a) = (0.55 − r/a)/0.1.
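Expressed as a function of r/a over the whole domain, this connection function is simply a clamped linear ramp, as in the short sketch below; the boundaries 0.45 and 0.55 are those of the CBC overlap region quoted above.

```python
import numpy as np

def connection_function(r_over_a, r_in=0.45, r_out=0.55):
    """Connection function for the CBC case: 1 in the GEM core (r/a < 0.45),
    0 in the XGC edge (r/a > 0.55), and the linear ramp (0.55 - r/a)/0.1
    across the overlap region."""
    w = (r_out - np.asarray(r_over_a)) / (r_out - r_in)
    return np.clip(w, 0.0, 1.0)
```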

The GEM simulations described in this section use a grid resolution of nx×ny×nz=240×256×64 and 80 marker particles per cell; the four-point nearest-grid-point averaging technique19 is employed.
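For reference, the basic idea behind such few-point gyroaveraging is to approximate the gyro-ring average by sampling the field (or depositing the charge) at a small number of points on the ring. A minimal four-point version is sketched below for a 2D field; it illustrates the concept only and is not the nearest-grid-point implementation of Ref. 19.

```python
import numpy as np

def four_point_gyroaverage(field_at, X, Y, rho):
    """Approximate gyroaverage of a field at gyrocenter (X, Y) with Larmor
    radius rho by sampling the gyro-ring at four points 90 degrees apart.
    field_at(x, y) is any callable returning the field value at a point,
    e.g. via grid interpolation."""
    angles = np.arange(4) * 0.5 * np.pi
    samples = [field_at(X + rho * np.cos(a), Y + rho * np.sin(a)) for a in angles]
    return sum(samples) / 4.0
```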

For the XGC simulation presented here, the spatial grid is composed of nRZ × nζ = 355 890 × 32, where nRZ is the number of grid points in a poloidal plane and nζ is the number of poloidal planes in the toroidal direction. There are 50 marker particles per mesh vertex, and four-point adaptive gyroaveraging matrices18 in the μ grid are used.

We select the toroidal mode number n = 24 for the coupling exercises of the linear simulations. For the initial conditions, A = 10⁻⁶ and nky = 1. The frequency is identical between the XGC reference and the GEM–XGC coupled case, and the growth rate differs by only 1% (Fig. 4). The mode structure of the electrostatic potential also shows good agreement (Fig. 5).

FIG. 4. Time evolution of the linear growth rate for the XGC reference (blue line) and the GEM-XGC coupled (red line) simulations.
FIG. 5. Comparison of the electrostatic potential ϕ mode structure between the XGC reference (left) and GEM-XGC coupled (right) results at t = 0.078 ms for the CBC linear simulations. The dashed lines indicate the overlap region.

In the nonlinear simulations, a toroidal wedge with nw = 3 is adopted, so the corresponding toroidal mode numbers n = 3, 6, 9, and 12 are selected. For the initial conditions, A = 10⁻⁴ and nky = 16. The poloidal structures agree well in the linear stage (Fig. 6). Differences become apparent in the nonlinear stage (Fig. 7); these are expected to arise from the accumulation of small grid-scale differences in the numerical methods, such as the mesh size and shape and the interpolation methods of the two codes. Despite the differences in the details of the mode structures, statistical macroscopic quantities, such as the heat flux, should show better agreement and can be used for verification. This is shown in Fig. 8. The peak magnitude of the radially averaged heat flux (Fig. 9) differs by only 4% between the reference simulation and the coupled simulation.

FIG. 6. Comparison of the electrostatic potential ϕ mode structure between the XGC reference (left) and GEM-XGC coupled (right) results at t = 0.078 ms for the CBC nonlinear simulations. The dashed circles indicate the overlap region.
FIG. 7. Comparison of the electrostatic potential ϕ mode structure between the XGC reference (left) and GEM-XGC coupled (right) results at t = 0.269 ms for the CBC nonlinear simulations. The dashed circles indicate the overlap region.
FIG. 8. Time evolution of the heat flux Qi over the radial range for the CBC: XGC reference (left) and GEM-XGC coupled case (right). Dashed lines indicate the overlap region.
FIG. 9. Time evolution of the radially averaged heat flux Qi for the CBC: XGC reference (blue line) and GEM-XGC coupled case (red line).

The second example is a deuterium plasma in a DIII-D-like geometry. Artificial temperature and density profiles with a steep edge gradient are shown in Fig. 10. The direct δf scheme7 is used in both GEM and XGC. In the direct δf scheme, the deviation of the distribution from the local Maxwellian due to ion banana excursions is included; this effect is important in the edge. In principle, neoclassical transport can be simulated, provided that collisions are also included. Here, for demonstrating the coupling scheme, collisions are again omitted.

FIG. 10. Radial profiles of temperature (red line) and density (blue line) for the DIII-D case.

In this case, the core region of GEM is ψ/ψx = [0.0104, 0.945], and the edge region of XGC is ψ/ψx = [0, 1.11]. The overlap regions are ψ/ψx = [0.652, 0.803] (r/a = [0.76, 0.86]), around the pedestal top, and ψ/ψx = [0.042, 0.096] (r/a = [0.2, 0.3]), near the magnetic axis; the buffer regions are ψ/ψx = [0.803, 0.945] and ψ/ψx = [0.0104, 0.042] in GEM and ψ/ψx = [0.096, 0.652] in XGC. The connection function in the coupling region is ϖ(r/a) = (0.86 − r/a)/0.1 in the pedestal region and ϖ(r/a) = (r/a − 0.2)/0.1 in the axis region. GEM uses a grid of nx × ny × nz = 258 × 512 × 256 and 512 marker particles per cell, with an eight-point regular gyroaveraging technique.15 XGC uses a grid of nRZ × nζ = 503 016 × 32, 1024 marker particles per mesh vertex, and eight-point adaptive gyroaveraging matrices in the μ grid. Here, more points are used for gyroaveraging in both codes, as the flux surfaces are strongly shaped.

Toroidal modes with n = 2, 4, 6, and 8 are selected for nonlinear simulations with the toroidal wedge number nw = 2. For the initial conditions, A = 10⁻¹² and nky = 1. The radially averaged heat flux (Fig. 11, left) of the XGC reference and the GEM–XGC coupled simulation shows a similar evolution, with a decay toward the end due to the lack of a heat source. The time-accumulated turbulent heat flux, ∫Qi dt, is also shown (Fig. 11, right). At the end of the simulation, there is less than 4% difference in the time-accumulated heat flux. We consider this agreement satisfactory between two gyrokinetic simulations with different discretization methods. The time evolution of the heat flux over the radial range (Fig. 12) for the XGC reference simulation and the coupled simulation indicates that the heat flux starts from the steep-gradient region, where the ITG-driven turbulence originates, and propagates across the overlap region.

FIG. 11. Time evolution of the radially averaged heat flux Qi (left) and the time-accumulated Qi (right) for the DIII-D case: XGC reference (blue line) and GEM-XGC coupled case (red line).
FIG. 12. Time evolution of the heat flux Qi over the radial range for the DIII-D case. Dashed horizontal lines indicate the overlap region near the pedestal.

As a PIC code, GEM mainly consists of three parts: particle push and shift, deposition, and field solving. The primary domain decomposition in GEM is along the field line, i.e., the z-direction in the field-line-following coordinates. The box length in z is divided into nz equally spaced domains, and each nx × ny grid plane is assigned one or more message passing interface (MPI) tasks. An MPI task holds only those particles whose z-coordinate lies within its domain. After each particle pushing step, particles are sorted into the corresponding domains; this is the shift operation. Deposition is the operation whereby the density and current are obtained by depositing the corresponding particle quantities onto the grids. Particle push and deposition consist of loops over all the particles. At present, GEM optimization focuses on these operations, starting from the original pure-MPI version.
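The bookkeeping behind the shift can be sketched as follows, assuming equally sized z-domains; the routine only decides the destination domain of each particle, while the actual MPI exchange is not shown.

```python
import numpy as np

def shift_destinations(z, z_min, z_max, n_domains):
    """Return the z-domain index owning each particle after a push.

    The z-range is split into n_domains equal slabs; particles whose new z
    falls outside the local slab must be sent to the rank owning that slab."""
    dz = (z_max - z_min) / n_domains
    dest = np.floor((np.asarray(z) - z_min) / dz).astype(int)
    return np.clip(dest, 0, n_domains - 1)
```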

Optimization on the GPU hardware architecture is achieved using the OpenACC directive-based heterogeneous parallel programming model, with few modifications to the original source code. In PIC codes, a significant amount of time is spent on indexing operations (integer arithmetic) and data movement (shift). Consequently, the main strategy of the GPU optimization using OpenACC is to offload the computation from the central processing unit (CPU) to the GPU and to minimize data transfer. Particle arrays (for spatial coordinates, parallel velocity, weight, etc.) are moved from CPU memory to GPU memory, and all particle loops are performed on GPUs. Particle pushing and deposition then proceed entirely on the GPU without data movement, which requires that the field arrays also reside in GPU memory.

After the push step, some particles will have moved across the domain boundaries, and the shift operation transfers such particles to their new domains. At present, this must be accomplished on CPUs and entails particle data movement between the GPU and CPU, which must be minimized. The shift comprises two steps: an initialization step and the actual data movement. Initialization constructs sorted pointers to particle holes, as well as buffers for the sending and receiving processes; it is called only once per shift operation. The actual movement performs nonblocking MPI communication, removes holes, and restructures the particle arrays; particle data need to be updated between the host and device during this procedure, which is called once for each particle array. In optimizing the particle shift, it is found unnecessary to construct the pointers to particle holes in a strictly increasing order; only the search for non-hole indices from the end of the array needs to be sequential. Consequently, the initialization step has been modified to allow more loops to run on the GPU, and only the data of the actually transferred particles need to be updated between the CPU and GPU.
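The hole-filling part of this restructuring (filling the gaps left by departed particles with particles taken from the end of the array) can be sketched as below; only the entries that actually move would then need a host-device update. This illustrates the idea only and is not the GEM implementation.

```python
def fill_holes(particles, hole_indices):
    """Compact a particle array after the shift by moving particles from the
    end of the array into the holes left by departed particles.

    Returns the compacted view and the (source, destination) pairs actually
    copied; only those entries need to be synchronized between CPU and GPU.
    """
    holes = sorted(hole_indices)
    hole_set = set(hole_indices)
    tail = len(particles) - 1
    moved = []
    for h in holes:
        # walk back from the end of the array to the last non-hole particle
        while tail in hole_set and tail > h:
            tail -= 1
        if tail <= h:
            break                       # remaining holes are all in the tail
        particles[h] = particles[tail]
        moved.append((tail, h))
        tail -= 1
    new_size = len(particles) - len(hole_set)
    return particles[:new_size], moved
```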

The performance of the optimized code has been tested on Summit's multi-GPU nodes.20 Each Summit compute node has six GPUs and 42 physical CPU cores, and GEM uses 42 MPI ranks per node with the Multi-Process Service (MPS) enabled. In reporting the performance data, we exclude the execution time of code initialization (memory allocation, particle loading, etc.) and focus on the time-stepping loop that evolves the particle coordinates and the electrostatic fields.

Since the main focus of the present GEM code optimization is the particle push and the density and current deposition, our starting point for the performance tests is a case with a fixed grid size (128 × 64 × 42) and a particle number that increases proportionally to the number of MPI tasks (in the case of the CPU version). The execution time is given in Table I. The speedup of the MPI + OpenACC version over the original MPI-only version is about 40 times. The scalability of the different computational modules in both the MPI + OpenACC and MPI-only codes is shown in Fig. 13. One can infer from Fig. 13(a) that the MPI + OpenACC code scales satisfactorily in these cases.

TABLE I. Execution time (s) comparison.

Number of nodes | Number of particles | Original MPI-only | OpenACC + MPI | Speedup
128             | 34 406 400 × 128    | 70.228            | 1.731         | 40.571
256             | 34 406 400 × 256    | 77.423            | 1.764         | 43.891
512             | 34 406 400 × 512    | 75.418            | 1.752         | 43.047
1024            | 34 406 400 × 1024   | 78.194            | 1.842         | 42.451
FIG. 13. Performance comparison of MPI + OpenACC (left) and MPI-only (right) GEM simulations on Summit with a fixed grid (128×64×42). The particle components of the MPI + OpenACC version scale well; the other, small components are hardly noticeable in the original MPI-only version.

On the other hand, when both the number of grid points and the total particle number are varied, we find that the scalability of the MPI + OpenACC code degrades in the large-node-number regime (see Table II). A detailed analysis of the scalability of the different modules is shown in Fig. 14. The results indicate that the poor scalability comes from the field solver, while the particle push and deposition continue to scale excellently. The poor scaling of the field solver is easily understood. The gyrokinetic Poisson equation is first Fourier transformed in the toroidal direction and then discretized in radius using a pseudospectral method,3 leading to O(ny nz) dense linear systems. With the domain decomposition in the z-direction, each MPI task solves O(ny) linear systems, each with a matrix of size about nx × nx. Thus, the execution time of the field solver increases rapidly with the number of grid points. To remove this bottleneck, the cuFFT21 and cuBLAS22 library routines will be used in the future. Note that in the present GEM–XGC coupling scheme, the field is solved only in the XGC code.

TABLE II. Execution time (s) with varying grid size.

Number of nodes | Number of particles | Grid size      | OpenACC + MPI
1               | 17 203 200 × 1      | 4 × 16 × 42    | 0.781
128             | 17 203 200 × 128    | 256 × 32 × 42  | 1.156
256             | 17 203 200 × 256    | 256 × 64 × 42  | 1.433
512             | 17 203 200 × 512    | 256 × 128 × 42 | 1.911
1024            | 17 203 200 × 1024   | 256 × 256 × 42 | 3.319
FIG. 14. Performance of MPI + OpenACC GEM simulations on Summit with varying grid size.

We have presented a numerical scheme to couple the two existing particle-in-cell codes GEM and XGC through a configuration-space interface region. In the coupling scheme, time stepping of the distribution functions is achieved by pushing the composite distribution function independently in each code, while the electrostatic potential is solved over the whole volume using the gyrokinetic Poisson solver in XGC. The two codes share the 3D density and field data across a spatial interface, and a grid mapping technique is developed for the data exchange. The GEM–XGC coupling scheme is demonstrated with two examples, which verify the accuracy of the mapping and the applicability of the spatial coupling scheme. In all the simulations presented here, the heat flux tends to zero because no heat source is included; future work will involve including sources and sinks to maintain a steady state. Optimization of GEM for architectures with GPUs is also presented. The spatial coupling of GENE and XGC with the same spatial coupling scheme will be reported in a separate publication.23

In the present work, the coupling scheme requires only the exchange of the ion density in the overlapping region, and no kinetic particle distribution function (PDF) information is exchanged. The scheme works well for ITG turbulence, in which there is no radial particle flux across the coupling interface. A more complete coupling scheme with kinetic electrons may require the exchange of kinetic PDF information between the coupled codes. Although the particle distribution function contains all the kinetic information in a simulation, it is usually not explicitly constructed in PIC simulations. Rather, moments of the distribution function, such as the density and current, are needed for solving the field equations; they are obtained from the discrete particles directly via deposition onto the mesh nodes. Exchange of kinetic information between XGC and GEM can take the form of an exchange of marker particles. Depending on the marker density in each code, re-sampling24 of the exchanged particles might be needed. Based on such re-sampling, a new coupling scheme,25 in which the marker particle information is exchanged on 5D grids, has been developed and will be reported in a separate publication. Meanwhile, XGC simulates the whole domain in the current coupled implementation, which is done for convenience. In future implementations, the XGC simulation can still include a core buffer region but carry no particles in the core, as demonstrated in the new coupling scheme.25 The GEM–XGC coupling will then gain further in computational efficiency.

This research was supported by the Exascale Computing Project (No. 17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration.

This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory (ORNL) and resources of the National Energy Research Scientific Computing Center (NERSC), which are supported by the Office of Science of the U.S. Department of Energy under Contract Nos. DE-AC05–00OR22725 and DE-AC02–05CH11231, respectively.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. J. Dominski, S. Ku, C. S. Chang, J. Y. Choi, E. Suchyta, S. E. Parker, S. A. Klasky, and A. Bhattacharjee, Phys. Plasmas 25, 072308 (2018).
2. S. E. Parker, C. Kim, and Y. Chen, Phys. Plasmas 6, 1709 (1999).
3. Y. Chen and S. E. Parker, J. Comput. Phys. 220, 839 (2007).
4. F. Jenko, W. Dorland, M. Kotschenreuther, and B. Rogers, Phys. Plasmas 7, 1904 (2000).
5. T. Goerler, X. Lapillonne, S. Brunner, T. Dannert, F. Jenko, F. Merz, and D. Told, J. Comput. Phys. 230, 7053 (2011).
6. C. S. Chang, S. Ku, P. Diamond, Z. Lin, S. E. Parker, T. Hahm, and N. Samatova, Phys. Plasmas 16, 056108 (2009).
7. S. Ku, R. Hager, C. S. Chang, J. Kwon, and S. E. Parker, J. Comput. Phys. 315, 467 (2016).
8. R. Hager, E. Yoon, S. Ku, E. F. D'Azevedo, P. H. Worley, and C. S. Chang, J. Comput. Phys. 315, 644 (2016).
9. S. Ku, C. S. Chang, R. Hager, R. Churchill, G. Tynan, I. Cziegler, M. Greenwald, J. Hughes, S. E. Parker, M. Adams, E. F. D'Azevedo, and P. Worley, Phys. Plasmas 25, 056107 (2018).
10. G. Merlo, J. Dominski, A. Bhattacharjee, C. S. Chang, F. Jenko, S. Ku, E. Lanti, and S. Parker, Phys. Plasmas 25, 062308 (2018).
11. E. Lanti, N. Ohana, N. Tronko, T. Hayward-Schneider, A. Bottino, B. McMillan, A. Mishchenko, A. Scheinberg, A. Biancalani, P. Angelino, S. Brunner, J. Dominski, P. Donnel, C. Gheller, R. Hatzky, A. Jocksch, S. Jolliet, Z. Lu, J. M. Collar, I. Novikau, E. Sonnendrücker, T. Vernay, and L. Villard, Comput. Phys. Commun. 251, 107072 (2020).
12. J. Y. Choi, C. S. Chang, J. Dominski, S. A. Klasky, G. Merlo, E. Suchyta, M. Ainsworth, B. Allen, F. Cappello, M. Churchill, P. Davis, S. Di, G. Eisenhauer, S. Ethier, I. Foster, B. Geveci, H. Guo, K. Huck, F. Jenko, M. Kim, J. Kress, S. Ku, Q. Liu, J. Logan, A. Malony, K. Mehta, K. Moreland, T. Munson, M. Parashar, T. Peterka, N. Podhorszki, D. Pugmire, O. Tugluk, R. Wang, B. Whitney, M. Wolf, and C. Wood, in 2018 IEEE 14th International Conference on e-Science (e-Science) (IEEE, 2018), pp. 442–452.
13. Q. Liu, J. Logan, Y. Tian, H. Abbasi, N. Podhorszki, J. Y. Choi, S. A. Klasky, R. Tchoua, J. Lofstead, R. Oldfield et al., Concurrency Comput.: Pract. Exp. 26, 1453 (2014).
14. A. M. Dimits, G. Bateman, M. Beer, B. Cohen, W. Dorland, G. Hammett, C. Kim, J. Kinsey, M. Kotschenreuther, A. Kritz et al., Phys. Plasmas 7, 969 (2000).
15.
16. L. Lao, H. S. John, R. Stambaugh, A. Kellman, and W. Pfeiffer, Nucl. Fusion 25, 1611 (1985).
17. A. Burckel, O. Sauter, C. Angioni, J. Candy, E. Fable, and X. Lapillonne, J. Phys. 260, 012006 (2010).
18. J. Dominski, S.-H. Ku, and C.-S. Chang, Phys. Plasmas 25, 052304 (2018).
19. S. E. Parker, J. Comput. Phys. 178, 520 (2002).
20. See https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit for information about the supercomputer Summit.
21. See https://developer.nvidia.com/cufft for information about the CUDA (Compute Unified Device Architecture) FFT (Fast Fourier Transform) library cuFFT.
22. See https://developer.nvidia.com/cublas for information about the CUDA (Compute Unified Device Architecture) BLAS (Basic Linear Algebra Subprograms) library cuBLAS.
23. G. Merlo, S. Janhunen, F. Jenko, A. Bhattacharjee, C. S. Chang, J. Cheng, P. Davis, J. Dominski, K. Germaschewski, R. Hager, S. Klasky, S. Parker, and E. Suchyta, "First coupled GENE-XGC microturbulence simulations," Phys. Plasmas (to be published).
24. D. Faghihi, V. Carey, C. Michoski, R. Hager, S. Janhunen, C.-S. Chang, and R. Moser, J. Comput. Phys. 409, 109317 (2020).
25. J. Dominski, J. Cheng, G. Merlo, V. Carey, R. Hager, L. Ricketson, J. Choi, S. Ethier, K. Germaschewski, S. Ku, A. Mollen, N. Podhorszki, D. Pugmire, E. Suchyta, P. Trivedi, R. Wang, C. S. Chang, J. Hittinger, F. Jenko, S. Klasky, S. E. Parker, and A. Bhattacharjee, "Spatial coupling of gyrokinetic simulations, a generalized scheme based on first-principles" (to be published).