We report on the design, verification and performance of MuMax3, an open-source GPU-accelerated micromagnetic simulation program. This software solves the time- and space-dependent magnetization evolution in nano- to microscale magnets using a finite-difference discretization. Its high performance and low memory requirements allow large-scale simulations to be performed in limited time and on inexpensive hardware. We verified each part of the software by comparing results to analytical values where available and to micromagnetic standard problems. MuMax3 also offers specific extensions like MFM image generation, a moving simulation window, edge charge removal and material grains.
I. INTRODUCTION
MuMax3 is a GPU-accelerated micromagnetic simulation program. It calculates the space- and time-dependent magnetization dynamics in nano- to micro-sized ferromagnets using a finite-difference discretization. A similar technique is used by the open-source programs OOMMF1 (CPU) and MicroMagnum2 (GPU), and the commercial GpMagnet3 (GPU). MuMax3 was re-designed independently of its predecessors MuMax1 and MuMax2.4,5
MuMax3 is open-source software written in Go6 and CUDA7, and is freely available under the GPLv3 license on http://mumax.github.io. In addition to the terms of the GPL, we kindly request that any work using MuMax3 refer to the latter website and this paper. An nVIDIA GPU and a Linux, Windows or Mac platform are required to run the software. Apart from nVIDIA's GPU driver, no other dependencies are needed to run MuMax3.
Finite-element micromagnetic software exists as well, e.g., NMag,8 TetraMag,9 MagPar10 and FastMag.11 These programs offer more geometrical flexibility than finite-difference methods, at the expense of performance.
In this paper we first describe each of MuMax3's components and assess their individual correctness and accuracy. Then we address the micromagnetic standard problems,12 where all software components have to work together correctly to solve real-world simulations. We typically compare against OOMMF,1 which has been widely used and extensively tested for more than a decade. Finally, we report on the performance in terms of speed and memory consumption.
The complete input files used to generate the graphs in this paper are available on http://mumax.github.io/pub, allowing for each of the presented results to be reproduced independently. The scripts were executed with MuMax version 3.6.
II. DESIGN
A. Material Regions
MuMax3 employs a finite difference (FD) discretization of space using a 2D or 3D grid of orthorhombic cells. Volumetric quantities, like the magnetization and the effective field, are treated at the center of each cell. Coupling quantities, like the exchange strength, on the other hand are considered on the faces between the cells (Fig. 1).
In order to conserve memory, space-dependent material parameters are not explicitly stored per cell. Instead, each cell is attributed a region index between 0 and 256, where different region indices represent different materials. The actual material parameters, like the saturation magnetization Msat, the anisotropy constants, etc., are stored in 256-element look-up tables indexed by the cell's region index.
Coupling parameters like the exchange strength are stored in a lower triangular matrix, indexed by the region indices of the two interacting cells (i.e., the two neighbors of a cell face). This allows arbitrary exchange coupling between all pairs of materials (Section III C).
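For illustration, a minimal script fragment using this region mechanism might look as follows (the region indices, shapes and parameter values are arbitrary examples):

    // Two materials in one simulation, selected by region index.
    SetGridSize(256, 128, 1)
    SetCellSize(4e-9, 4e-9, 10e-9)

    DefRegion(1, xrange(-512e-9, 0))   // left half of the grid -> region 1
    DefRegion(2, xrange(0, 512e-9))    // right half            -> region 2

    // Per-region look-up table entries:
    Msat.SetRegion(1, 800e3)
    Msat.SetRegion(2, 600e3)
    Aex.SetRegion(1, 13e-12)
    Aex.SetRegion(2, 10e-12)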
Time-dependent parameters In addition to region-wise space-dependence, material parameters in each region can be time-dependent, given by one arbitrary function of time per region.
Excitations like the externally applied field or the electrical current density can be set region- and time-wise in the same way as material parameters. Additionally, they can have an arbitrary number of extra terms of the form f(t) × g(x, y, z), where f(t) is any function of time multiplied by a continuously varying spatial profile g(x, y, z). This allows smooth time- and space-dependent excitations to be modeled, like, e.g., an antenna's RF field or an AC electrical current, as sketched below.
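A sketch of both possibilities, with placeholder field amplitudes, frequency and an assumed antenna-profile file name:

    // Region- and time-wise excitation: RF field applied in region 1 only.
    B_ext.SetRegion(1, vector(0, 1e-3*sin(2*pi*1e9*t), 0))

    // Extra term of the form f(t) x g(x,y,z): a spatial profile loaded from
    // an OVF file (hypothetical name), multiplied by a function of time.
    B_ext.Add(LoadFile("antenna_profile.ovf"), sin(2*pi*1e9*t))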
B. Geometry
MuMax3 uses Constructive Solid Geometry to define the shape of the magnet and the material regions inside it. Any shape is represented by a function f(x, y, z) that returns true when (x, y, z) lies inside the shape and false otherwise. For example, a sphere is represented by the function x² + y² + z² ≤ r². Shapes can be rotated, translated, scaled and combined together with boolean operations like AND, OR and XOR. This allows complex, parametrized geometries to be defined programmatically. The example in Fig. 2 shows the magnetization in the logical OR of an ellipsoid and a cuboid.
Geometry obtained by logically combining an ellipsoid and rotated cuboid. The arrows depict the magnetization direction in this complex shape.
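A geometry along the lines of Fig. 2 can be scripted as follows (dimensions, angle and grid are illustrative):

    SetGridSize(128, 64, 32)
    SetCellSize(4e-9, 4e-9, 4e-9)

    // Logical OR (Add) of an ellipsoid and a rotated, translated cuboid.
    ell := Ellipsoid(400e-9, 200e-9, 100e-9)
    box := Cuboid(300e-9, 100e-9, 100e-9).RotZ(pi/6).Transl(100e-9, 0, 0)
    SetGeom(ell.Add(box))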
C. Interface
Input scripts MuMax3 provides a dedicated scripting language that resembles a subset of the Go programming language. The script provides a simple means to define fairly complex simulations. This is illustrated by the code snippet below where we excite a Permalloy ellipse with a 1 GHz RF field:
    setgridsize(128, 32, 1)
    setcellsize(5e-9, 5e-9, 8e-9)
    setGeom(ellipse(500e-9, 160e-9))
    Msat  = 860e3
    Aex   = 13e-12
    alpha = 0.05
    m = uniform(1, 0, 0)
    relax()
    f := 1e9   // 1 GHz
    A := 0.01  // 10 mT
    B_ext = vector(0.1, A*sin(2*pi*f*t), 0)
    run(10e-9)
Programming The MuMax3 libraries can also be called from native Go. In this way, the full Go language and libraries can be leveraged for more powerful input generation and output processing than the built-in scripting.
Web interface MuMax3 provides a web-based HTML5 user interface. It allows simulations to be inspected and controlled from within a web browser, whether they are running locally or remotely. Simulations may also be entirely constructed and run from within the web GUI. In any case, an input file corresponding to the user's clicks is generated, which may later be used to repeat the simulation in an automated fashion.
Data format MuMax3 uses OOMMF's "OVF" data format for input and output of all space-dependent quantities, so that existing tools can be leveraged. Additionally, a tool is provided to convert the output to several other data formats like ParaView's VTK,13 gnuplot,14 comma-separated values (CSV), Python-compatible JSON, …, and to image formats like PNG, JPG and GIF. Finally, the output is compatible with the 3D rendering software MuView, contributed by Graham Rowlands.15
III. DYNAMICAL TERMS
MuMax3 calculates the evolution of the reduced magnetization m(r, t), which has unit length. In what follows, the dependence on time and space is not explicitly written down. We refer to the time derivative of m as the torque τ (units 1/s):

$$\frac{\partial \vec{m}}{\partial t} = \vec{\tau}$$

τ has three contributions: the Landau-Lifshitz torque (Section III A), the Zhang-Li spin-transfer torque (Section III G) and the Slonczewski spin-transfer torque (Section III H).
A. Landau-Lifshitz torque
MuMax3 uses the following explicit form for the Landau-Lifshitz torque:16,17

$$\vec{\tau}_{\mathrm{LL}} = \gamma_{\mathrm{LL}} \, \frac{1}{1+\alpha^2} \left( \vec{m}\times\vec{B}_{\mathrm{eff}} + \alpha\, \vec{m}\times(\vec{m}\times\vec{B}_{\mathrm{eff}}) \right) \qquad (2)$$

with γLL the gyromagnetic ratio (rad/Ts), α the dimensionless damping parameter and B_eff the effective field (T). The default value of γLL can be overridden by the user. B_eff has the following contributions:

- the externally applied field B_ext,
- the magnetostatic field B_demag (Section III B),
- the Heisenberg exchange field B_exch (Section III C),
- the Dzyaloshinskii-Moriya exchange field B_dm (Section III D),
- the magneto-crystalline anisotropy field B_anis (Section III E),
- the thermal field B_therm (Section III F).
Fig. 3 shows a validation of the Landau-Lifshitz torque for a single spin precessing without damping in a constant external field.
Validation of Eq. (2) for a single spin precessing without damping in a 0.1 T field along z, perpendicular to the initial magnetization. Analytical solution: mx = cos(γLL · 0.1 T · t).
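The test of Fig. 3 boils down to a single-cell simulation without damping; a sketch (cell size and output interval chosen arbitrarily):

    SetGridSize(1, 1, 1)
    SetCellSize(1e-9, 1e-9, 1e-9)
    EnableDemag = false           // single spin: no magnetostatic field needed
    Msat  = 800e3
    alpha = 0                     // no damping: pure precession
    m     = uniform(1, 0, 0)      // m initially along x
    B_ext = vector(0, 0, 0.1)     // 0.1 T along z, perpendicular to m
    TableAutoSave(1e-12)          // record <m>(t) to compare with the analytical mx(t)
    run(1e-9)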
B. Magnetostatic field
Magnetostatic convolution A finite-difference discretization allows the magnetostatic field to be evaluated as a (discrete) convolution of the magnetization with a demagnetizing kernel K̂:

$$\vec{B}_{\mathrm{demag}} = \hat{K} * \vec{M}$$

where M = Msat m is the unnormalized magnetization, with Msat the saturation magnetization (A/m). This calculation is FFT-accelerated based on the well-known convolution theorem. The corresponding energy density is provided as:

$$\varepsilon_{\mathrm{demag}} = -\tfrac{1}{2}\, \vec{M}\cdot\vec{B}_{\mathrm{demag}}$$
Magnetostatic kernel We construct the demagnetizing kernel assuming constant magnetization18 in each finite-difference cell, and we average the resulting field over the cell volumes. The integration is done numerically, with the number of integration points automatically chosen based on the distance between the source and destination cells and their aspect ratios. The kernel is initialized on the CPU in double precision and only truncated to single precision before being transferred to the GPU.
The kernel’s mirror symmetries and zero elements are exploited to reduce storage and initialization time. This results in a 9 × or 12 × decrease in kernel memory usage for 2D and 3D simulations respectively, and is part of the reason for MuMax3’s relatively low memory requirements (Section VII).
Accuracy The short-range accuracy of the kernel is tested by calculating the demagnetizing factors of a uniformly magnetized cube, analytically known to be -1/3 in each direction. The cube was discretized in cells with varying aspect ratios to stress the numerical integration scheme. The smallest possible number of cells was used to ensure that the short-range part of the field has an important contribution. The results presented in Table I are accurate to 3 or 4 digits. Standard Problem #2 (Section V B) is another test sensitive to the short-range kernel accuracy.19
Demagnetizing factors (Nij = Hi/Mj) calculated for a cube discretized in the smallest possible number of tetragonal cells with given aspect ratio sizez/sizex (= sizez/sizey). The results lie close to the analytical value of -1/3, even for very elongated (aspect > 1) or flat (aspect < 1) cells. The off-diagonal elements (not shown) are consistent with zero within the single-precision limit.
aspect | Nxx | Nyy | Nzz
---|---|---|---
8/1 | -0.333207 | -0.333207 | -0.333176
4/1 | -0.333149 | -0.333149 | -0.333144
2/1 | -0.333118 | -0.333118 | -0.333118
1/1 | -0.333372 | -0.333372 | -0.333372
1/4 | -0.333146 | -0.333146 | -0.333145
1/16 | -0.333176 | -0.333176 | -0.333280
1/64 | -0.333052 | -0.333052 | -0.333639
The long-range accuracy of the magnetostatic convolution is assessed by comparing the kernel and the field of a single magnetized cell to the corresponding point dipole. The fields presented in Fig. 4 show perfect long-range accuracy for the kernel, indicating accurate numerical integration in that range. The resulting field, obtained by convolution of a single magnetized cell (Bsat = 1 T) with the kernel, is accurate down to about 0.01 μT, the single-precision noise floor introduced by the FFTs.
Kernel element (top) and the field of a single magnetized cell (1 nm³, Bsat = 1 T) (bottom), evaluated along the x axis (1 nm cells) and compared to the field of a corresponding point dipole. The long-range field remains accurate down to the single-precision numerical limit (∼10⁻⁷ T).
Periodic boundary conditions MuMax3 provides optional periodic boundary conditions (PBCs) in each direction. PBCs imply magnetization wrap-around in the periodic directions, felt by stencil operations like the exchange interaction. A less trivial consequence is that the magnetostatic field of the repeated magnetization images (copies) has to be added to the demagnetizing field.
In contrast to OOMMF’s PBC implementation,20 MuMax3 employs a so-called macro geometry approach8,21 where a finite (though usually large) number of periodic images (copies) is taken into account, and that number can be freely chosen in each direction. MuMax3’s setPBC(Px, Py, Pz) command enables Px, Py, Pz additional copies on each side of the simulation box, given that Pi is sufficiently large.
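For example, a thin film repeated many times in-plane can be set up as follows (grid size and number of copies chosen arbitrarily):

    // 16 repetitions of the magnetization on each side in x and y are
    // included in the demagnetizing kernel (macro geometry approach).
    SetPBC(16, 16, 0)
    SetGridSize(256, 256, 1)
    SetCellSize(5e-9, 5e-9, 10e-9)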
To test the magnetostatic field with PBC’s, we calculate the demagnetizing tensors of a wide film and long rod in two different ways: either with a large grid without PBC’s, or with a small grid but with PBC’s equivalent to the larger grid. In our implementation, a gridsize (Nx, Ny, Nz) with PBC’s (Px, Py, Pz) should approximately correspond to a gridsize (2PxNx, 2PyNy, 2PzNz) without PBC’s. This is verified in tables II and III where we extend in plane for the film and along z for the rod. Additionally, for very large sizes both results converge to the well-known analytical values for infinite geometries.
Out-of-plane demagnetizing factors for a thin film with grid size 2 × 2 × 1 and 2D PBC’s with Px(=Py) copies in the x and y direction (column 1) or without PBC’s but with a correspondingly enlarged grid size extended ×2Px(=2Py) in both x and y directions (column 2). Both give comparable results for sufficiently large Px, Py, verifying the PBC implementation.
Px (=Py) | Nzz | grid | Nzz
---|---|---|---
× 1 | -0.71257 | ×2 | -0.76368
× 4 | -0.93879 | ×8 | -0.94224
× 16 | -0.98438 | ×32 | -0.98460
× 64 | -0.99514 | ×128 | -0.99515
× 256 | -0.99713 | ×512 | -0.99779
∞ | -1 | ×∞ | -1
Transverse demagnetizing factors for a long rod with grid size 2 × 2 × 1 and 1D PBC’s with Pz copies along z (column 1) or without PBC’s but with a correspondingly enlarged grid size ×2Pz (column 2). Both give comparable results for sufficiently large Pz, verifying the PBC implementation.
Pz | Nxx | grid | Nxx
---|---|---|---
× 1 | -0.251960 | ×2 | -0.3331182
× 4 | -0.476766 | ×8 | -0.4809398
× 16 | -0.498280 | ×32 | -0.4983577
× 64 | -0.499517 | ×128 | -0.4995183
× 256 | -0.499590 | ×512 | -0.4995911
∞ | -0.5 | ×∞ | -0.5
C. Heisenberg exchange interaction
The effective field due to the Heisenberg exchange interaction:22

$$\vec{B}_{\mathrm{exch}} = 2\, \frac{A_{\mathrm{ex}}}{M_{\mathrm{sat}}}\, \Delta\vec{m}$$

is evaluated using a 6-neighbor small-angle approximation:23,24

$$\vec{B}_{\mathrm{exch}} = 2\, \frac{A_{\mathrm{ex}}}{M_{\mathrm{sat}}} \sum_{i} \frac{\vec{m}_i - \vec{m}}{\Delta_i^2}$$

where i ranges over the six nearest neighbors of the central cell with magnetization m, and Δi is the cell size in the direction of neighbor i.
At the boundary of the magnet some neighboring magnetizations are missing. In that case we use the cell's own value m instead of the missing m_i, which is equivalent to employing Neumann boundary conditions.23,24
The corresponding energy density is provided as:

$$\varepsilon_{\mathrm{exch}} = -\tfrac{1}{2}\, \vec{M}\cdot\vec{B}_{\mathrm{exch}}$$
MuMax3 calculates the energy from the effective field using Eqns. (6), (8). The implementation is verified by calculating the exchange energy of a 1D magnetization spiral, for which the exact form (Eq. (7)) is easily evaluated. Fig. 5 shows that the linearized approximation is suited as long as the angle between neighboring magnetizations is not too large. This can be achieved by choosing a sufficiently small cell size compared to the exchange length.
Numerical (Eqs. (6), (8)) and analytical (Eq. (7)) exchange energy density for spiral magnetizations as a function of the angle between neighboring spins (independent of material parameters). To ensure an accurate energy, spin-spin angles should be kept below ∼20–30° by choosing a sufficiently small cell size.
Inter-region exchange The exchange interaction between different materials deserves special attention. Aex and Msat are defined in the cell volumes, while Eq. (6) requires a value of Aex/Msat properly averaged between the neighboring cells. For neighboring cells with different material parameters Aex1, Aex2 and Msat1, Msat2, MuMax3 uses a harmonic mean:

$$\frac{A_{\mathrm{ex}}}{M_{\mathrm{sat}}} = S\, \frac{2}{\dfrac{M_{\mathrm{sat1}}}{A_{\mathrm{ex1}}} + \dfrac{M_{\mathrm{sat2}}}{A_{\mathrm{ex2}}}} \qquad (9)$$
which can easily be derived, and where we set S = 1 by default. S is an arbitrary scaling factor which may be used to alter the exchange coupling between regions, e.g., to lower the coupling between grains or antiferromagnetically couple two layers.
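In a script, the scaling factor S between two regions can be set through the ext_ScaleExchange extension (the factor 0.9 and the region indices are arbitrary examples; a negative factor would correspond to antiferromagnetic coupling):

    // Scale the exchange coupling between region 1 and region 2 (S in Eq. (9)).
    ext_ScaleExchange(1, 2, 0.9)

    // Antiferromagnetic coupling between two layers defined as regions 3 and 4:
    ext_ScaleExchange(3, 4, -1)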
D. Dzyaloshinskii-Moriya interaction
MuMax3 provides the interfacially induced Dzyaloshinskii-Moriya interaction for thin films with out-of-plane symmetry breaking according to Ref. 25, yielding an effective field term:

$$\vec{B}_{\mathrm{dm}} = \frac{2D}{M_{\mathrm{sat}}} \left( \frac{\partial m_z}{\partial x},\ \frac{\partial m_z}{\partial y},\ -\frac{\partial m_x}{\partial x} - \frac{\partial m_y}{\partial y} \right) \qquad (10)$$

where D is the DMI strength (J/m²),
where we apply boundary conditions:26
Numerically, all derivatives are implemented as central derivatives, i.e., the difference between the two neighboring magnetizations divided by the distance between them. When a neighbor is missing at the boundary (∂V), its magnetization is replaced by a value extrapolated from the central cell's magnetization and the surface normal, with the relevant partial derivative selected from Eqs. (11)–(15).
In case of nonzero D, these boundary conditions are simultaneously applied to the Heisenberg exchange field.
The effective field in Eq. (10) gives rise to an energy density:

$$\varepsilon_{\mathrm{dm}} = -\tfrac{1}{2}\, \vec{M}\cdot\vec{B}_{\mathrm{dm}}$$
Similar to the anisotropic exchange case, MuMax3 calculates the energy density from Eqns. (17), (10). Eq. (16) is the exact form, well approximated for sufficiently small cell sizes.
In Fig. 6, the DMI implementation is compared to the work of Thiaville et al.,27 where the transformation of a Bloch wall into a Néel wall by varying Dex is studied.
Simulated domain wall magnetization in a 250 nm wide, 0.6 nm thick Co/Pt film (Msat=1100×103 A/m, Aex=16×10−12 J/m, Ku1=1.27×106 J/m3) as a function of the Dzyaloshinskii-Moriya strength Dex. The left-hand and right-hand sides correspond to a Bloch and Néel wall, respectively. Results correspond well to Ref. 27.
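A parameter set along the lines of Fig. 6 could be scripted as follows (grid and the initial two-domain state are illustrative; Dind denotes the interfacially induced DMI strength):

    SetGridSize(256, 128, 1)
    SetCellSize(2e-9, 2e-9, 0.6e-9)
    Msat  = 1100e3
    Aex   = 16e-12
    Ku1   = 1.27e6
    AnisU = vector(0, 0, 1)
    Dind  = 1e-3                            // DMI strength (J/m^2), to be swept
    m = TwoDomain(0,0,1,  1,0,0,  0,0,-1)   // up and down domains with a wall in between
    relax()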
E. Magneto-crystalline anisotropy
Uniaxial MuMax3 provides uniaxial magneto-crystalline anisotropy in the form of an effective field term:

$$\vec{B}_{\mathrm{anis}} = \frac{2 K_{u1}}{M_{\mathrm{sat}}} (\vec{u}\cdot\vec{m})\, \vec{u} + \frac{4 K_{u2}}{M_{\mathrm{sat}}} (\vec{u}\cdot\vec{m})^3\, \vec{u} \qquad (18)$$

where Ku1 and Ku2 are the first- and second-order uniaxial anisotropy constants and u a unit vector indicating the anisotropy direction. This corresponds to an energy density:

$$\varepsilon_{\mathrm{anis}} = -K_{u1} (\vec{u}\cdot\vec{m})^2 - K_{u2} (\vec{u}\cdot\vec{m})^4 \qquad (19)$$
MuMax3 calculates the energy density from the effective field using Eq. (20):

$$\varepsilon_{\mathrm{anis}} = -\tfrac{1}{2}\, \vec{M}\cdot\vec{B}_{u1} - \tfrac{1}{4}\, \vec{M}\cdot\vec{B}_{u2} \qquad (20)$$

where B_ui denotes the effective field term in which only Kui is taken into account. The resulting energy is verified in Fig. 7. Since the energy is derived directly from the effective field, this serves as a test for the field as well.
Uniaxial (top) and cubic (bottom) anisotropy energy density of a single spin as a function of its orientation in the xy-plane. The uniaxial axis is along x, the cubic axes along x, y and z. The dots are computed with MuMax3 (Eqs. (20) and (23)), lines are analytical expressions (Eq. (19), (22)). Positive and negative K values denote hard and easy anisotropy, respectively. In this case the 2nd order cubic energies are zero and not shown, but have been verified separately for angles out of the xy-plane.
Cubic MuMax3 provides cubic magneto-crystalline anisotropy in the form of an effective field term:
where Kcn is the nth-order cubic anisotropy constant and c1, c2, c3 a set of mutually perpendicular unit vectors indicating the anisotropy directions. (The user only specifies c1 and c2; c3 is computed automatically as c1 × c2.) This corresponds to an energy density:
which, just like in the uniaxial case, MuMax3 computes using the effective field:
which is verified in Fig. 7.
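The corresponding input parameters are set as follows (a sketch with illustrative values):

    // Uniaxial anisotropy with easy axis along x:
    Ku1   = 5e4                 // first-order constant (J/m^3)
    Ku2   = 1e3                 // second-order constant (J/m^3)
    AnisU = vector(1, 0, 0)     // anisotropy direction u

    // Cubic anisotropy with axes along x and y (c3 follows as c1 x c2):
    Kc1    = 4.5e4              // J/m^3
    Kc2    = 1e3                // J/m^3
    AnisC1 = vector(1, 0, 0)
    AnisC2 = vector(0, 1, 0)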
F. Thermal fluctuations
MuMax3 provides finite temperature by means of a fluctuating thermal field B_therm according to Brown:28

$$\vec{B}_{\mathrm{therm}} = \vec{\eta}\, \sqrt{\frac{2\, \mu_0\, \alpha\, k_B\, T}{B_{\mathrm{sat}}\, \gamma_{\mathrm{LL}}\, \Delta V\, \Delta t}}$$

where α is the damping parameter, kB the Boltzmann constant, T the temperature, Bsat the saturation magnetization expressed in Tesla, γLL the gyromagnetic ratio (1/Ts), ΔV the cell volume, Δt the time step and η a random vector drawn from a standard normal distribution whose value is changed after every time step.
Solver constraints B_therm changes randomly between time steps. Therefore, only MuMax3's Euler and Heun solvers (Section IV) can be used, as they do not require torque continuity between steps. Additionally, with thermal fluctuations enabled we enforce a fixed time step Δt. This avoids feedback issues with adaptive time-step algorithms.
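A minimal finite-temperature setup consistent with these constraints, assuming solver index 2 selects Heun and with illustrative values:

    SetSolver(2)     // Heun (RK12), suitable for thermal fluctuations
    FixDt = 1e-13    // fixed time step; adaptive stepping is disabled
    Temp  = 300      // temperature (K)
    ThermSeed(1)     // reproducible random number sequence
    run(1e-9)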
Verification We test our implementation by calculating the thermal switching rate of a single (macro-)spin particle with easy uniaxial anisotropy. In the limit of a high energy barrier compared to the thermal energy, the switching rate f is known analytically to be:29
Fig. 8 shows Arrhenius plots for the temperature-dependent switching rate of a particle with volume V=(10 nm)3 and Ku1=1×104 or 2×104 J/m3. The MuMax3 simulations correspond well to Eq. (25).
Arrhenius plot of the thermal switching rate of a 10 nm cubic particle (macrospin), with Msat = 1 MA/m, α = 0.1, Δt = ×10⁻¹² s, Ku1 = 1×10⁴ or 2×10⁴ J/m³. Simulations were performed on an ensemble of 512² uncoupled particles for 0.1 μs (high temperatures) or 1 μs (low temperatures). Solid lines are the analytically expected switching rates (Eq. (25)).
G. Zhang-Li Spin-transfer torque
MuMax3 includes a spin-transfer torque term according to Zhang and Li,30 applicable when electrical current flows through more than one layer of cells:
where j is the current density vector, ξ the degree of non-adiabaticity, μB the Bohr magneton and Bsat the saturation magnetization expressed in Tesla.
The validity of our implementation is tested by Standard Problem #5 (Section V E).
H. Slonczewski Spin-transfer torque
MuMax3 provides a spin momentum torque term according to Slonczewski,31,32 transformed to the Landau-Lifshitz formalism:
where jz is the current density along the z axis, d the free-layer thickness, mP the fixed-layer magnetization, P the spin polarization, Λ the Slonczewski parameter characterizing the spacer layer, and ϵ′ the secondary spin-torque parameter.
MuMax3 only explicitly models the free layer magnetization. The fixed layer is handled in the same way as material parameters and is always considered to be on top of the free layer. The fixed layer’s stray field is not automatically taken into account, but can be pre-computed by the user and added as a space-dependent external field term.
As a verification we consider switching an MRAM bit in a 160 nm × 80 nm × 5 nm Permalloy element (Msat = 800×10³ A/m, Aex = 13×10⁻¹² J/m, α = 0.01, P = 0.5669) by a total current of -6 mA along the z axis, using Λ = 2 and ϵ′ = 1. These parameters were chosen so that none of the terms in Eq. (28) are zero. The fixed layer is polarized at 20° from the x axis to avoid symmetry problems, and the initial magnetization was chosen uniform along x. The MuMax3 and OOMMF results shown in Fig. 9 correspond well.
Verification of the Slonczewski torque: average magnetization in a 160 nm × 80 nm × 5 nm rectangle with Msat = 800×10³ A/m, Aex = 13×10⁻¹² J/m, α = 0.01, P = 0.5669, jz = 4.6875×10¹¹ A/m², Λ = 2, ϵ′ = 1, fixed layer along (cos 20°, sin 20°, 0), initial m = (1, 0, 0). Solid line calculated with OOMMF, points by MuMax3.
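The corresponding spin-transfer-torque settings enter the input script roughly as follows (a sketch based on the values quoted above; grid and run time are illustrative):

    SetGridSize(64, 32, 1)
    SetCellSize(2.5e-9, 2.5e-9, 5e-9)         // 160 nm x 80 nm x 5 nm
    Msat  = 800e3
    Aex   = 13e-12
    alpha = 0.01
    m = uniform(1, 0, 0)

    // Slonczewski spin-transfer torque parameters:
    J            = vector(0, 0, -4.6875e11)   // current density along z (A/m^2)
    Pol          = 0.5669                     // spin polarization P
    Lambda       = 2                          // Slonczewski Lambda parameter
    EpsilonPrime = 1                          // secondary spin-torque term
    FixedLayer   = vector(cos(20*pi/180), sin(20*pi/180), 0)

    TableAutoSave(10e-12)
    run(1e-9)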
IV. TIME INTEGRATION
A. Dynamics
MuMax3 provides a number of explicit Runge-Kutta methods for advancing the Landau-Lifshitz equation (Eq. (2)):
- RK45, the Dormand-Prince method, offers 5th-order convergence and a 4th-order error estimate used for adaptive time-step control. This is the default for dynamical simulations.
- RK23, the Bogacki-Shampine method, offers 3rd-order convergence and a 2nd-order error estimate used for adaptive time-step control. This method is used when relaxing the magnetization to its ground state, in which case it performs better than RK45.
- RK12, Heun's method, offers 2nd-order convergence and a 1st-order error estimate. This method is used for finite-temperature simulations as it does not require torque continuity between time steps.
- RK1, Euler's method, is provided for academic purposes.
These solvers’ convergence rates are verified in Fig. 10, which serves as a test for their implementation and performance.
Absolute error on a single spin after precessing without damping for one period in a 0.1 T field, as a function of the different solvers' time steps. The errors follow 1st, 2nd, 3rd or 5th order convergence (solid lines) for the respective solvers, down to a limit set by the single-precision arithmetic.
Adaptive time step RK45, RK23 and RK12 provide adaptive time-step control, i.e., the time step is automatically chosen to keep the error per step ϵ close to a preset value ϵ0. As the error per step we use ϵ = ‖τhigh − τlow‖ Δt, with τhigh and τlow the high-order and low-order torque estimates provided by the particular Runge-Kutta method, and Δt the time step. The time step is adjusted using a default headroom of 0.8.
In MuMax3, ϵ0 is accessible as the variable MaxErr. Its default value of 10−5 was adequate for the presented standard problems. The relation between ϵ0 and the overall error at the end of the simulation is in general hard to determine. Nevertheless, we illustrate this in Fig. 11 for a single period of spin precession under the same conditions as Fig. 10. It can be seen that the absolute error per precession scales linearly with ϵ0, although the absolute value of the error depends on the solver type and exact simulation conditions.
Absolute error on a single spin after precessing without damping for one period in a 0.1 T field, as a function of the different solvers' MaxErr settings. Solid lines represent 1st-order fits. The same lower bound as in Fig. 10 is visible.
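In a script, the solver and its tolerance are selected as follows (the solver index for RK45 is an assumption):

    SetSolver(5)      // RK45 (Dormand-Prince), the default for dynamics
    MaxErr = 1e-6     // tighten the per-step error tolerance (default 1e-5)
    run(10e-9)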
B. Energy minimization
MuMax3 provides a relax() function that attempts to find the system's energy minimum. This function disables the precession term in Eq. (2), so that the effective field points towards decreasing energy. Relax first advances in time until the total energy cuts into the numerical noise floor. At that point the state will already be close to equilibrium. We then begin monitoring the magnitude of the torque instead of the energy, since close to equilibrium the torque decreases monotonically and is less noisy than the energy. So we advance further until the torque cuts into the noise floor as well. Each time that happens, we decrease MaxErr and continue until MaxErr = 10⁻⁹. At this point it does not make sense to increase the accuracy any further (see Fig. 11) and we stop advancing.
This Relax procedure was used in the presented standard problems, where it proved adequate. Typical residual torques after Relax are of the order of 10−4–10−7 γLL T, indicating that the system is indeed very close to equilibrium. Nevertheless, as with any energy minimization technique, there is always a possibility that the system ends up in a saddle point or very flat part of the energy landscape.
Relax internally uses the RK23 solver, which we noticed performs better than RK45 in most relaxation scenarios. Near equilibrium both solvers tend to take similarly large time steps, but RK23 needs only half as many torque evaluations per step as RK45.
V. STANDARD PROBLEMS
A. Standard Problem #1
The first μMag standard problem involves the hysteresis loops of a 1 μm × 2 μm × 20 nm Permalloy rectangle (Aex = 1.3×10−11 J/m, Msat = 8×105 A/m, Ku1 = 5×102 J/m3 uniaxial, with easy axis nominally parallel to the long edges of the rectangle) for the field approximately parallel to the long and short axis, respectively. Our solution is presented in Fig. 12. Unfortunately the submitted solutions12 do not agree with each other, making it impossible to assess the correctness in a quantitative way.
MuMax3 solution for standard problem #1, using a 2D grid of 3.90625 nm wide cells. Open symbols represent the virgin curve starting from a vortex state. After each field step we applied thermal fluctuations with α = 0.05, T = 300K for 500ps to allow the magnetization to jump over small energy barriers. There are no consistent standard solutions to compare with.
B. Standard Problem #2
The second μMag standard problem considers a thin film of width d, length 5d and thickness 0.1d, all expressed in terms of the exchange length lex = √(Aex/Km), with Km = ½ μ0 Msat². The remanence and coercive field, expressed in units of Msat, are to be calculated as a function of d/lex.
The remanence is compared to OOMMF in Fig. 13. The coercivity, shown in Fig. 14, behaves interestingly in the small-particle limit where an analytical solution exists.19 In that case the magnetization is uniform and the magnetostatic field dominates the behaviour. Of the solutions submitted to the μMag group,19,34–36 the Streibl,34 Donahue19 (OOMMF 1.1) and MuMax3 results best approach the small-particle limit. It was shown by Donahue et al.19 that proper averaging of the magnetostatic field over each cell volume is needed to accurately reproduce the analytical limit. Hence this standard problem serves as a test for the numerical integration of our demagnetizing kernel.
Remanence for standard problem #2 as a function of the magnet size d expressed in exchange lengths lex. The MuMax3 calculations (points) use automatically chosen cell sizes between 0.25 and 0.5 lex. OOMMF results (line) were taken from Ref. 12.
Coercivity for standard problem #2 as a function of the magnet size d expressed in exchange lengths lex. MuMax3 calculations (points) use automatically chosen cell sizes between 0.25 and 0.5lex. OOMMF results (line) taken from Ref. 12. The slight discrepancy at high d is attributed to OOMMF’s solution using larger cells there. The analytical limit for very small size is by Donahue et al.19
C. Standard Problem #3
Standard problem #3 considers a cube with edge length L expressed in exchange lengths lex = √(Aex/Km). The magnet has uniaxial anisotropy with Ku1 = 0.1 Km, where Km = ½ μ0 Msat² is the magnetostatic energy density, and the easy axis is parallel to the z-axis. The critical edge length L at which the ground state transitions between a quasi-uniform and a vortex-like state needs to be found; it is expected around L = 8.
This problem was solved using a 16 × 16 × 16 grid. The cube was initialized with ∼3,000 different random initial magnetization states for random edge lengths L between 7.5 and 9, and relaxed to equilibrium. Four stable states were found, shown in Fig. 15: a quasi-uniform flower state (a), twisted flower state (b), vortex state (c) and a canted vortex (d). Then cubes of different sizes were initialized to these states and relaxed to equilibrium. The resulting energy for each state, shown in Fig. 15, reveals the absolute ground states in the considered range: flower state for L < 8.16, twisted flower for 8.16 < L < 8.47 and vortex for L > 8.47.
Standard problem #3: energy densities of the flower (a), twisted flower (b), vortex (c) and canted vortex (d) states as a function of the cube edge length L. Transitions of the ground state are marked with vertical lines at L = 8.16 and L = 8.47.
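One size point of this problem can be set up roughly as follows (the Km and lex bookkeeping is done by hand; Mu0 is assumed to be a predefined constant, and Msat and Aex are chosen arbitrarily since only the reduced quantities matter):

    // Standard problem #3: cube of edge L = 8.2 lex on a 16x16x16 grid.
    Ms  := 1000e3
    Km  := 0.5 * Mu0 * Ms * Ms        // magnetostatic energy density
    lex := sqrt(10e-12 / Km)          // exchange length for Aex = 10e-12 J/m
    L   := 8.2 * lex

    SetGridSize(16, 16, 16)
    SetCellSize(L/16, L/16, L/16)
    Msat  = Ms
    Aex   = 10e-12
    Ku1   = 0.1 * Km
    AnisU = vector(0, 0, 1)

    m = RandomMag()                   // one of many random initial states
    relax()
    TableAdd(E_total)
    TableSave()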
D. Standard Problem #4
Standard problem #4 considers dynamic magnetization reversal in a 500 nm × 125 nm × 3 nm Permalloy magnet (Aex=1.3×10−11 J/m, Msat=8×105 A/m). The initial state is an S-state obtained after saturating along the (1,1,1) direction. Then the magnet is reversed by either field (a): (-24.6, 4.3, 0) mT or field (b): (-35.5, -6.3, 0) mT. Time-dependent average magnetizations should be given, as well as the space-dependent magnetization when <mx > first crosses zero.
Our solution, shown in Fig. 16, agrees with OOMMF.
MuMax3 (dots) and OOMMF (lines) solution to standard problem #4a (top graph) and #4b (bottom graph), as well as space-dependent magnetization snapshots when <mx > crosses zero, for fields (a) and (b). All use a 200 × 50 × 1 grid.
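For reference, a compact script for problem #4a (the S-state is prepared here by relaxing from a state roughly along (1,1,1), a common simplification of the saturation procedure):

    // Standard problem #4a on a 200 x 50 x 1 grid.
    SetGridSize(200, 50, 1)
    SetCellSize(2.5e-9, 2.5e-9, 3e-9)
    Msat  = 800e3
    Aex   = 13e-12
    alpha = 0.02

    m = uniform(1, 1, 1)                     // roughly saturated along (1,1,1)
    relax()                                  // relaxes into an S-state

    B_ext = vector(-24.6e-3, 4.3e-3, 0)      // reversal field (a)
    TableAutoSave(10e-12)
    run(1e-9)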
E. Standard Problem #5
Standard problem #5, proposed by Najafi et al.,33 considers a 100 nm × 100 nm × 10 nm Permalloy square (Aex = 13×10⁻¹² J/m, Msat = 8×10⁵ A/m, α = 0.1, ξ = 0.05) with an initial vortex magnetization. A homogeneous current j = 10¹² A/m² along x, applied at t = 0, drives the vortex towards a new equilibrium state. The obtained time-dependent average magnetization, shown in Fig. 17, agrees well with the OOMMF solution.
MuMax3 (dots) and OOMMF (lines) solution to standard problem #5, both using a 50 × 50 × 5 grid.
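The Zhang-Li parameters of this problem enter a script roughly as follows (Pol = 1 is an assumption for the polarization prefactor):

    // Standard problem #5: vortex driven by an in-plane current.
    SetGridSize(50, 50, 5)
    SetCellSize(2e-9, 2e-9, 2e-9)            // 100 nm x 100 nm x 10 nm
    Msat  = 800e3
    Aex   = 13e-12
    alpha = 0.1
    m = vortex(1, 1)                         // vortex with circulation 1, polarization 1
    relax()

    xi  = 0.05                               // degree of non-adiabaticity
    Pol = 1                                  // spin polarization entering the prefactor
    J   = vector(1e12, 0, 0)                 // current density along x (A/m^2)
    TableAutoSave(10e-12)
    run(8e-9)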
VI. EXTENSIONS
MuMax3 is designed to be modular and extensible. Some of our extensions, described below, have been merged into the mainline code because they may be of general interest. Nevertheless, extensions are considered specific to certain needs and are less generally usable than the aforementioned main features. For instance, MFM images and Voronoi tessellation are only implemented in 2D and only qualitatively tested.
A. Moving frame
MuMax3 provides an extension to translate the magnetization with respect to the finite-difference grid (along the x-axis), inserting new values from the side. This allows the simulation window to seemingly "follow" a region of interest, like a domain wall moving in a long nanowire, without having to simulate the entire wire. MuMax3 can automatically translate the magnetization to keep a chosen average magnetization component as close to zero as possible, or the user may translate arbitrarily from the input script.
When following a domain wall in a long transverse magnetized wire, we also provide the possibility to remove the magnetic charges on the ends of the wire. This simulates an effectively infinitely long wire without closure domains, as illustrated in Fig. 18.
Top frame: magnetization in a 1 μm wide, 20 nm thick Permalloy wire of finite length. The remaining frames apply edge charge removal to simulate an infinitely long wire. The domain wall is driven by a 3×10¹² A/m² current while being followed by the simulation window, so that it appears steady although it moves at high speed (visible from the wall breakdown). While moving, new notches enter the simulation from the right.
Finally, when shifting the magnetization there is an option to also shift the material regions and geometry along with it. The geometry and material parameters for the "new" cells that enter the simulation from the side are automatically re-calculated, so that new grains and geometrical features may seamlessly enter the simulation. This can be useful for, e.g., simulating a long racetrack with notches as illustrated in Fig. 18, or a moving domain wall in a grainy material as published in Ref. 38.
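A sketch of how the moving window is typically enabled; ext_centerWall and the Shift-related flags are extension names assumed here, so the API documentation should be consulted for the exact spelling:

    // Automatically shift the magnetization so that the average x-component
    // stays close to zero, i.e. the window follows the domain wall.
    ext_centerWall(0)          // 0 selects the x-component of m

    // Also shift the geometry and the material regions along with the
    // magnetization, so new notches/grains enter from the side.
    ShiftGeom    = true
    ShiftRegions = true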
B. Voronoi Tessellation
MuMax3 provides 2D Voronoi tessellation as a way to simulate grains in thin films, similar to OOMMF.39 It is possible to have MuMax3 set up the regions map with grain-shaped islands, randomly colored with up to 256 region numbers (Fig. 19(a)). The material parameters in each of those regions can then be varied to simulate, e.g., grains with randomly distributed anisotropy axes, or even to change the exchange coupling between them (Fig. 19(b)).
Example of a Voronoi tessellation with average 100 nm grains in a 2048 μm wide disk. Left: cells colored by their region index (0–256). Right: boundaries between the grains visualized by reducing the exchange coupling between them (Eq. (9)), and outputting MuMax3 ’s ExchCoupling quantity, the average Msat/Aex around each cell.
Our implementation is compatible with the possibility to move the simulation window. For example, when the simulation window is following a moving domain wall, new grains will automatically enter the simulation from the sides. The new grains are generated using hashes of the cell coordinates, so that there is no need to store a (potentially very large) map of all the grains beyond the current simulation grid. More details can be found in Ref. 38.
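A grain setup in this spirit could look as follows (the ext_makegrains signature and the pairwise scaling loop are assumptions based on the description above):

    // Tessellate the film into ~100 nm grains, using up to 256 region indices.
    grainSize := 100e-9
    maxRegion := 255
    ext_makegrains(grainSize, maxRegion, 123456)   // last argument: random seed

    // Weaken the exchange coupling between every pair of distinct grains.
    for i := 0; i < maxRegion; i++ {
        for j := i + 1; j < maxRegion; j++ {
            ext_ScaleExchange(i, j, 0.9)
        }
    }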
C. Magnetic force microscopy
MuMax3 has a built-in capability to generate magnetic force microscopy (MFM) images in Dynamic (AC) mode from a 2D magnetization. We calculate the derivative of the force between tip and sample from the convolution:
where the tip's stray field is evaluated in the sample plane. MuMax3 provides the field of an idealized dipole or monopole tip with arbitrary elevation. No attempt is made to reproduce tip fields in absolute terms, as our only goal is to produce output proportional to the actual MFM contrast, as shown in Fig. 20.
(a) vortex magnetization in a 750 nm × 750 nm × 10 nm Permalloy square. (b), (c) are MuMax3-generated MFM images at 50 nm and 100 nm lift height respectively, both using AC mode and a monopole tip model.
Eq. (31) is implemented using FFT-acceleration similar to the magnetostatic field, and is evaluated on the GPU. Hence MFM image generation is very fast and generally takes only a few milliseconds. This makes it possible to watch “real-time” MFM images in the web interface while the magnetization is evolving in time.
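From a script, an MFM image is generated by setting the lift height and saving the built-in MFM quantity (MFMLift is the assumed name of the lift-height parameter):

    MFMLift = 50e-9            // tip lift height
    SaveAs(MFM, "mfm_50nm")    // simulated MFM contrast at 50 nm lift
    MFMLift = 100e-9
    SaveAs(MFM, "mfm_100nm")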
VII. PERFORMANCE
A. Simulation size
Nowadays, GPUs offer massive computational performance of several TFlop/s per device. However, that compute power is only fully utilized in the case of sufficient parallelization, i.e., for sufficiently large simulations. This is clearly illustrated by considering how many cells can be processed per second, i.e., Ncells/tupdate, with tupdate the time needed to calculate the torque for Ncells cells. We refer to this quantity as the throughput. Given the overall O(N log N) complexity, one would expect a nearly constant throughput that slowly degrades at high N. For all presented throughputs, magnetostatic and exchange interactions were enabled and no output was saved.
The throughput presented in Fig. 21 for a square 2D simulation on a GTX TITAN GPU only exhibits the theoretical, nearly constant, behaviour starting from about 256 000 cells. Below, the GPU is not fully utilized and performance drops. Fortunately, large simulations are exactly where GPU speed-up is needed most and where performance is optimal.
MuMax3 throughput on GTX TITAN GPU, for all N × N grid sizes up to 1024 × 1024. Numbers with only factors 2,3,5,7 are marked with an open box, pure powers of two (corresponding to Fig. 22) with a full box. Proper grid sizes should be chosen to ensure optimal performance.
MuMax3's performance is dominated by FFT calculations using the cuFFT library, which performs best for power-of-two sizes and acceptably for 7-smooth numbers (having only factors 2, 3, 5 and 7). Other numbers, especially primes, should be avoided. This is clearly illustrated in Fig. 21, where sizes other than the recommended ones show a performance penalty of up to about an order of magnitude. Somewhat oversizing the grid up to the next smooth number may therefore be beneficial for performance.
Note that the data in Fig. 22 is for a 2D simulation. Typically a 3D simulation with the same total number of cells costs an additional factor ∼1.5 × in compute time and memory due to additional FFTs along the z-axis.
MuMax3 throughput, measured in how many cells can have their torque evaluated per second (higher is better), for a 4×10⁶ cell simulation (indicative for all sufficiently large simulations). For comparison, OOMMF performance on a quad-core 2.1 GHz CPU lies around 4 M cells/s.
On the other hand, simulations with periodic boundary conditions run considerably faster than their non-periodic counterparts. This is due to the absence of zero-padding, which reduces the FFT sizes by a factor of 2 in each periodic direction. Memory consumption is considerably lower as well. Only the one-time kernel initialization takes longer in this case, as a number of repeated images need to be taken into account.
B. Hardware
Apart from the simulation size, MuMax3's performance is strongly affected by the particular GPU hardware. We highlight the differences between several GPU models by comparing their throughput in Fig. 22. This was done for a 4 M cell simulation, for which all tested GPUs were fully utilized, so the numbers are indicative for all sufficiently large simulation sizes.
We also included OOMMF's throughput on a quad-core 2.1 GHz Core i7 CPU to give a rough impression of the GPU speed-up. The measured OOMMF performance (not clearly distinguishable in Fig. 22) was around 4×10⁶ cells/s. So with a proper GPU and sufficiently large grid sizes, a speed-up of 20–45× with respect to a quad-core CPU can be reached or, equivalently, an 80–180× speed-up compared to a single CPU core. This is in line with earlier MuMax1 and MicroMagnum benchmarks.2,4 It must be noted, however, that OOMMF operates in double precision in contrast to MuMax3's single-precision arithmetic, and also does not suffer reduced throughput for small simulations.
Finally, MicroMagnum’s throughput (not presented) was found to be nearly indistinguishable from MuMax3. This is unsurprising since both MuMax3’s and MicroMagnum’s performance are dominated by CUDA’s FFT routines. In our benchmarks on a GTX650M, differences between both packages were comparable to the noise on the timings.
C. Memory use
In contrast to their massive computational power, GPUs are typically limited to rather small amounts of memory (currently 1–6 GB). Therefore, MuMax3 was heavily optimized to use as little memory as possible. We exploit, for example, the magnetostatic kernel's symmetries and zero elements and make heavy use of memory pooling and recycling.
Also, MuMax3 employs minimal zero-padding in the common situation of 3D simulations with only a small number of layers. For up to 10 layers there is no need to use a power of two, and memory usage will be somewhat reduced as well.
In this way, MuMax3 on a GPU with only 2 GB of memory is able to simulate about 9 million cells in 2D and 6 million in 3D, or about 2× more than MicroMagnum v0.22 (see Fig. 23). When using a lower-order solver this number can be further increased to 12×10⁶ cells with RK23 (2D) or 16×10⁶ cells with RK12 (2D), all in 2 GB. Cards like the GTX TITAN and K20XM, with 6 GB of RAM, can store proportionally more, e.g., 31 M cells in 2D with the RK45 solver.
Indication of the number of cells that can be addressed with 2 GB of GPU memory for simulations in 2D, 3D and thin 3D (here 3 layers) and using different solvers. RK45 is MuMax3’s default solver for dynamics, RK23 for relaxation. Only magnetostatic, exchange and Zeeman terms were enabled.
VIII. CONCLUSION
We have presented in detail the micromagnetic model employed by MuMax3, as well as a verification of each of its components. GPU acceleration provides a speed-up of 1–2 orders of magnitude compared to CPU-based micromagnetic simulations. In addition, MuMax3's low memory requirements open up the possibility of very large-scale micromagnetic simulations, a regime where the GPU's potential is fully utilized and where the speed-up is also needed most. Depending on the solver type, MuMax3 can fit 10–16 million cells in 2 GB of GPU RAM, about 2× more than MuMax2 or MicroMagnum.
MuMax3 is open-source and designed to be easily extensible, so anybody can in principle add functionality. Some extensions like a moving simulation window, edge charge removal, Voronoi tessellation and MFM images have been permanently merged into MuMax3 and more extensions are expected in the future.
ACKNOWLEDGMENTS
This work was supported by the Flanders Research Foundation (FWO).
The authors would like to cordially thank (in alphabetical order): Ahmad Syukri bin Abdollah, Alex Mellnik, Aurelio Hierro, Ben Van de Wiele, Colin Jermain, Damien Louis, Ezio Iacocca, Gabriel Chaves, Graham Rowlands, Henning Ulrichs, Joo-Von Kim, Lasse Laurson, Raffaele Pellicelli, Rémy Lassalle-Balier, Robert Stamps and Xuanyao (Kelvin) Fong for the fruitful discussions, contributions or feedback, as well as all others who tested early MuMax versions.
MuMax3 uses svgo (http://github.com/ajstarks/svgo), copyright Anthony Starks, and freetype-go (http://code.google.com/p/freetype-go), copyright Google Inc., Jeff R. Allen, Rémy Oudompheng, Roger Peppe.