Fully kinetic three-dimensional (3D) particle-in-cell (PIC) simulations are necessary to reveal the underlying physics of Hall thruster azimuthal instability. To study this problem more effectively, a code named PMSL-PIC-HET-3D has recently been developed specifically for simulating Hall thrusters, aiming at high computational efficiency and speed with reduced development effort. In this paper, the PMSL code is validated by comparison with the open-source PIC code WarpX, and its computation speed reaches the same level as WarpX. In addition, PMSL can combine cubic simulation domains, which allows the effect of the plume region arrangement to be investigated in terms of the axial plume size and the location of the thruster exit relative to the peak magnetic field, i.e., the aft-loading magnetic field design. The simulation results indicate that the plume region arrangement has non-negligible effects on the global profiles, such as the plasma density and potential, but only a minor influence on the azimuthal instability properties. Moreover, larger plume angles are obtained for the aft-loading cases, consistent with experimental observations.
I. INTRODUCTION
The Hall thruster is one of the most promising types of electric propulsion for space applications. Despite its various advantages, its efficiency and performance still leave room for improvement. One of the key factors limiting them is the electron anomalous transport, and the most widely accepted explanation for this transport is the azimuthal instability driven by the electron drift, such as the electron cyclotron drift instability (ECDI) and the modified two-stream instability (MTSI).1–5 Currently, the modeling and understanding of the azimuthal instability are far from complete. Insights into this mechanism are expected to guide further improvement of the Hall thruster's efficiency and performance.
Particle-in-cell (PIC) simulation is arguably the most effective and suitable way to study the electron azimuthal drift instability, as it can capture the nonlinear behavior of plasma-wave interactions and resolve non-equilibrium particle distributions. The PIC method has therefore gained wide adoption in simulating Hall thruster azimuthal instability. Its main and direct disadvantage, however, is its highly demanding computational requirements, in both hardware and software, so the computational cost can easily become very expensive. Hence, most PIC studies of Hall thruster azimuthal instability so far use two-dimensional (2D) geometry, such as the two well-known benchmark works using 2D azimuthal–axial and azimuthal–radial domains.6,7
However, the electron azimuthal instability is inherently three-dimensional (3D): the interaction between the charged particles and the electromagnetic field exhibits strong 3D characteristics, so 2D simulations cannot incorporate some important physical effects or capture all the details. For example, 2D azimuthal–axial simulations omit the radial direction, which mutes the MTSI, so the coupling between the ECDI and the MTSI cannot be captured. On the other hand, 2D azimuthal–radial simulations cannot account for the global variation and the local fluctuation of the fields along the axial direction, which is the important accelerating direction. Fully 3D PIC simulation should therefore be adopted, but so far only a few 3D works have been published because of the intensive computation required, as mentioned earlier.
The first published 3D PIC study of Hall thruster azimuthal instability without artificial speed-up tricks (which have been shown to cause inaccuracy) was conducted by Villafana et al. using their in-house code AVIP-PIC in 2023.8 Their simulation domain has a T-shaped geometry representing the discharge channel and a plume region of expanded radius. One simulation case is presented in their work, covering 10 μs of evolution at a cost of 30 days (1920 cores and 140 000 core-hours). In a more recent 3D PIC work9 by our research group, a preliminary study was carried out using the open-source code WarpX, with a focus on initializing the particles closer to the steady state to speed up the simulation, and the effects of different plasma initialization profiles were studied. In that work, the plume region is also taken into consideration with respect to the axial magnetic field profile, but it is imitated by an elongated discharge channel. A typical run takes less than 15 days to simulate 15 μs.
Although WarpX is a powerful PIC code, it was originally designed for plasma accelerators, and its main users and developers are still in the beam–plasma community. We therefore believe that a code specialized for Hall thruster simulations would be advantageous, as optimizations can be targeted more precisely at the characteristics of Hall thrusters, and better performance could be achieved. Thus, we have recently been developing and optimizing a subroutine library that we call the "Plasma Modeling Subroutine Library (PMSL)" and, based on it, constructing a program named "PMSL-PIC-HET-3D" that aims at high-efficiency computation, particularly for studying Hall thruster azimuthal instability. Part of the scope of this paper is therefore to introduce this newly developed 3D PIC code and to verify its validity by comparison with WarpX.
In addition, since 3D PIC simulations of Hall thruster azimuthal instability are still at a very early stage, many physical and numerical questions remain open. With the help of PMSL-PIC-HET-3D, we are ready to answer some of them. The first question investigated in this paper is the influence of the plume region arrangement on the azimuthal instability. Because only a relatively rough plume region was considered in our previous work9 using WarpX, here the plume region is expanded in the radial direction, and the effect of its axial size is studied. Furthermore, the thruster exit location relative to the peak magnetic field is investigated by arranging the plume region differently, which corresponds to the so-called aft-loading magnetic field design.10–12 Thus, the effects of aft-loading on the azimuthal instability, as well as on the resulting plume angle, are studied.
The paper is organized as follows. In Sec. II, the PMSL-PIC-HET-3D code is introduced with respect to its algorithms and parallelization, the WarpX code is introduced, and the simulation setup and parameters are described. In Sec. III, a comparison between PMSL and WarpX is given in terms of validity and efficiency; the influence of the plume region axial size is then studied; next, the dispersion relations of all the simulation cases are discussed; and finally, the plume angle variation due to the extent of the aft-loading magnetic field is investigated. The conclusions are given in Sec. IV.
II. THE SIMULATION METHODS
A. The PMSL-PIC-HET-3D code
In this work, a recently self-developed 3D particle-in-cell code is applied. The code, named PMSL-PIC-HET-3D, is a Cartesian three-dimensional electrostatic particle-in-cell code for simulating Hall (effect) thrusters, written on top of the PMSL library. PMSL stands for Plasma Modeling Subroutine Library, a Git repository under development that contains various plasma modeling algorithms encapsulated as subroutines or functions. Regarding the code design principles, instead of developing a general and comprehensive program for solving many kinds of problems, we focus on developing a small, specific program for the particular physical problem at hand, aiming at higher computational efficiency and speed with less development effort.
As a standard electrostatic PIC code,13 PMSL-PIC-HET-3D contains the four major modules: the particle pusher, the Poisson solver, the density deposition, and the field interpolation. The classic non-relativistic Boris pusher14 is applied to numerically integrate the particle equations of motion. Poisson's equation is solved using the preconditioned conjugate gradient (PCG) solver provided in the Hypre library.15 The basic linear density deposition and field interpolation algorithms are applied.13
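For readers unfamiliar with the scheme, the sketch below illustrates the standard non-relativistic Boris rotation in Python; it is a textbook implementation, not an excerpt from PMSL, and the array shapes and names are our own.

```python
import numpy as np

def boris_push(v, E, B, q, m, dt):
    """Advance particle velocities by one time step with the standard
    non-relativistic Boris scheme. v, E, B are (N, 3) arrays in SI units."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                      # first half electric kick
    t = qmdt2 * B                                # rotation vector
    s = 2.0 * t / (1.0 + np.sum(t * t, axis=1, keepdims=True))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)      # magnetic rotation
    return v_plus + qmdt2 * E                    # second half electric kick
```

The half-kick, rotate, half-kick structure keeps the scheme second-order accurate and conserves energy in a pure magnetic field, which is why it remains the default pusher in electrostatic PIC codes.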
The code is fully parallelized by domain decomposition of both fields and particles along all three directions, using hybrid MPI and OpenMP. An example of the domain decomposition is shown in Fig. 1. First, the simulation domain is formed by four individual cubic physical regions: one represents the Hall thruster discharge channel (colored green), and the other three represent the plume, as illustrated by the different colors in Fig. 1. The plume is split into three physical regions because, in terms of coding, MPI communication is easier to handle between neighbors whose contacting surfaces have the same size. Each physical region is then split evenly into smaller MPI computing regions, illustrated by the blocks separated by thin dotted lines in Fig. 1. In addition, time-consuming subroutines are parallelized using OpenMP; thus, within each MPI process, OpenMP can further speed up the simulation.
An illustrative example of the physical domain decomposition (distinguished by colors) and MPI domain decomposition (indicated by the thin dotted lines).
B. The WarpX code
In addition to PMSL-PIC-HET-3D, WarpX is applied to simulate one case as a benchmark, and the computational speeds of the two codes are compared. The open-source PIC code WarpX16 is a highly parallel and highly optimized code that can run on GPUs and multi-core CPUs and includes load-balancing capabilities. WarpX scales to the world's largest supercomputers and was awarded the 2022 ACM Gordon Bell Prize. It supports many features, such as 1D–3D Cartesian and 2D-RZ geometries; FDTD, CKC, and PSATD electromagnetic field solvers; a Multi-Level Multi-Grid (MLMG) electrostatic field solver; Monte Carlo collision modules; adaptive mesh refinement; perfectly matched layers (PML) and other absorbing boundaries; Lorentz-boosted-frame acceleration techniques; embedded boundaries; and high-performance domain-decomposition CPU/GPU parallelization. The features applied in this paper are the 3D Cartesian geometry, the MLMG electrostatic field solver, the embedded boundary to separate the channel and plume regions, and the hybrid MPI and OpenMP parallelization.
C. The domain setup and parameters
The simulation domain contains a channel region and a plume region as two connected cubic regions, as shown in Fig. 2. The x, y, and z directions represent the radial (r), azimuthal, and axial directions of a Hall thruster, respectively. The surface at z = 0 is set to be the anode with a fixed potential boundary condition, and the surface at z = zmax is set to be the cathode with a fixed zero potential; the other surfaces in x and z are treated as either wall boundaries or vacuum boundaries with a fixed zero potential; the two surfaces in y are periodic. Note that we follow the simple way of implementing the open (vacuum) boundary using a Dirichlet condition with fixed zero potential, as done in many references,6–9 but more advanced open boundary conditions, such as the one used in simulating a magnetic nozzle,17 could be considered to improve the simulation in the future. Particles are sampled uniformly within the whole simulation domain at the beginning; particles that reach the anode, cathode, wall, or vacuum boundaries are removed; electron–ion pairs due to ionization are generated in the channel region; and free electrons are injected through the cathode boundary to maintain overall neutrality. The electric field is primarily along the axial direction, resulting from the superposition of the applied voltage difference between the anode and cathode at the z = 0 and z = zmax planes and the field formed by the charged particles. The magnetic field is along the radial direction. Under the combined effect of the radial magnetic field and the axial electric field, the electron drift forms, and the current-driven azimuthal instabilities, such as the ECDI and MTSI, are triggered and evolve in the simulation.
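The particle boundary handling described above can be summarized with the following sketch, assuming for brevity a single rectangular box (in the actual T-shaped domain, the x limits depend on the axial position); all names are hypothetical and do not reflect PMSL's API.

```python
import numpy as np

def apply_particle_boundaries(x, y, z, x_min, x_max, y_max, z_max):
    """Illustrative boundary handling: periodic wrap in the azimuthal y
    direction; particles crossing the x (wall/vacuum) or z (anode at z=0,
    cathode at z=z_max) boundaries are removed from the arrays."""
    y = np.mod(y, y_max)                                  # periodic in y
    keep = (x >= x_min) & (x <= x_max) & (z >= 0.0) & (z <= z_max)
    return x[keep], y[keep], z[keep]
```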
The simulation parameters are chosen based on a recent simulation work using WarpX with a purely cubic 3D simulation domain.9 Uniform grid cells are applied, and the curvature in the azimuthal direction is ignored, as in many previous numerical studies.6–8 The anode potential is set to be . The initial plasma density is set to be , and the electron and ion temperatures are and , respectively. The resulting Debye length is . The ion mass is chosen to be that of xenon. The cell sizes are chosen as . The time step is set to be , which results in . If time steps are run in total, i.e., 15 μs, ion oscillation periods can be simulated. If and cells are chosen for the channel, the corresponding physical domain sizes are and . If and cells are chosen for the plume, the corresponding physical domain sizes are and . The specific numbers of cells vary between simulation cases and are introduced in Sec. II D. The channel region is located in the middle of the plume region along x. Both the channel and the plume have the same azimuthal size for all cases, with and . Note that in the previous study9 we showed that using a relatively small size ( ) in the azimuthal direction does not affect the dominant modes or the evolution of the instability but greatly saves computational cost. macro-particles per cell are used, the same value as in Ref. 9. The macro-particle weight can be computed by . Electrons and ions share the same and .
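Since the parameter values and the weight formula above are given only by reference, the following sketch shows one common convention for computing the macro-particle weight and checking the usual PIC constraints; the numerical values are placeholders for illustration, not the paper's actual parameters.

```python
import numpy as np

EPS0, QE, ME = 8.854e-12, 1.602e-19, 9.109e-31

# Illustrative values only; the actual n0, Te, and cell size follow Ref. 9.
n0    = 5e16        # initial plasma density, m^-3 (assumed)
Te_eV = 10.0        # electron temperature, eV (assumed)
dx    = 2e-4        # cell size, m (assumed)
ppc   = 100         # macro-particles per cell (assumed)

# Debye length and electron plasma frequency for the standard PIC constraints
lam_D = np.sqrt(EPS0 * Te_eV / (n0 * QE))      # Te in eV, so kB*Te = Te_eV*QE
w_pe  = np.sqrt(n0 * QE**2 / (EPS0 * ME))

# A common macro-particle weighting: real particles per macro-particle
w = n0 * dx**3 / ppc

print(f"lambda_D = {lam_D:.3e} m   (resolution: want dx <~ lambda_D)")
print(f"dt       <~ {0.2 / w_pe:.3e} s   (stability: w_pe * dt <~ 0.2)")
print(f"weight w = {w:.3e}")
```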
The axial magnetic field and the ionization rate profiles. The two dotted vertical lines indicate the positions along the axial (z) direction where the magnetic field and ionization rate reach maximum value, respectively.
D. List of simulation cases
Several simulation cases are carried out, as listed in Table I, in which and are varied so that the effects of the plume arrangement can be studied. Case has the largest total number of cells along z, i.e., , which can be considered as enlarging the radial plume region compared with the simulation cases studied previously in Ref. 9, where , , and , with the wall boundary extending throughout the channel and the plume. Note that the Hall thruster exit is usually close to the location of the peak magnetic field. For the other cases, the total number of cells in z is reduced from 256 to 200 to save computational cost. In case , the simulated thruster exit is close to the peak magnetic field, and is a corresponding case run with WarpX for comparison. Next, can be compared with to study the influence of the plume axial size. Furthermore, can be compared with and to study the influence of the thruster exit location on the azimuthal instability.
List of simulation cases; PMSL runs are labeled P, and WarpX runs are labeled W. Computing nodes with two AMD EPYC 9754 CPUs (128 physical cores each) are used for these cases, and no OpenMP is used.
| Label | Varied parameters | Parallelization | Time (days) |
|---|---|---|---|
|  |  | 128 MPI | 12.71 |
|  |  | 128 MPI | 14.22 |
|  |  | 128 MPI | 10.37 |
|  |  | 128 MPI | 12.85 |
|  |  | 256 MPI | 19.58 |
All the PMSL cases have the same domain decomposition, given in Table II and illustrated in Fig. 4. Because physical regions 1 and 3 have the same size in x, which is twice the size of regions 2 and 4 in x, regions 1 and 3 are split into two sub-regions in x. Because region 1 holds the largest number of particles during the simulation, it is further split into two sub-regions along z. All four physical regions are then split into 16 sub-regions along the azimuthal y direction. The decomposition thus yields 64, 16, 32, and 16 sub-regions for regions 1, 2, 3, and 4, respectively, for a total of 128 decomposed sub-regions for MPI parallelization. For WarpX, MPI processes are used in the x, y, and z directions, respectively. Other parallelization combinations for WarpX were also tested, but no significant difference was observed. The computing resources used and the computing time to finish time steps are also listed in Table I for all cases. As we can see, the time consumed by PMSL with half the number of MPI processes is less than that of WarpX. On the one hand, to incorporate the plume region, the embedded boundary feature of WarpX is used: instead of starting from a T-shaped simulation region, WarpX must allocate a rectangular box encompassing the whole simulation region and then apply the embedded boundary to exclude computation in the unwanted region. This could increase the practical amount of computation for WarpX on this T-shaped channel-plus-plume region. On the other hand, the T-shaped computational region is natural in PMSL, and in the vicinity of the channel exit, where the plasma density is concentrated, extra MPI processes are assigned to handle the additional particles. The lower computation time reflects the advantage of a specialized program for a specific problem, since targeted optimization can be applied. At this stage, we may conclude that the efficiencies of WarpX and PMSL are comparable; since PMSL is still at a preliminary development stage, there is plenty of room for optimization.
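The extra computation implied by WarpX's bounding-box-plus-embedded-boundary approach can be illustrated by a simple cell count; the sizes below are hypothetical, not the paper's actual grid.

```python
# Hypothetical cell counts (illustrative only, not the paper's exact setup):
nx_ch, nz_ch = 64, 128     # channel width in x and length in z
nx_pl, nz_pl = 128, 72     # plume is wider in x and extends further in z
ny = 64                    # azimuthal cells, shared by all regions

t_shape = (nx_ch * nz_ch + nx_pl * nz_pl) * ny     # native T-shaped domain
bounding_box = nx_pl * (nz_ch + nz_pl) * ny        # box enclosing the T

print(f"T-shaped cells : {t_shape:,}")
print(f"Bounding box   : {bounding_box:,}")
print(f"overhead       : {bounding_box / t_shape:.2f}x")
```

With these assumed sizes, the enclosing box holds roughly 1.5 times as many cells as the T-shaped region, which suggests why the native multi-region layout can save work.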
PMSL domain decomposition and the number of MPI splits in each physical region. Region 1 is the channel, region 2 is the lower plume, region 3 is the middle plume, and region 4 is the upper plume.
| Region | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| Splits in x | 2 | 1 | 1 | 1 |
| Splits in y | 16 | 16 | 16 | 16 |
| Splits in z | 2 | 1 | 2 | 1 |
| Total MPI | 64 | 16 | 32 | 16 |
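One possible way to map a global MPI rank onto the (region, sub-block) layout of Table II is sketched below; the ordering is illustrative and is not necessarily PMSL's actual rank layout.

```python
# Splits per region from Table II: (x, y, z) for regions 1..4
splits = {1: (2, 16, 2), 2: (1, 16, 1), 3: (1, 16, 2), 4: (1, 16, 1)}

def rank_to_block(rank):
    """Map a global MPI rank to (region, ix, iy, iz); an illustrative
    ordering only, walking the regions in order."""
    for region in (1, 2, 3, 4):
        sx, sy, sz = splits[region]
        n = sx * sy * sz
        if rank < n:
            ix, rem = divmod(rank, sy * sz)
            iy, iz = divmod(rem, sz)
            return region, ix, iy, iz
        rank -= n
    raise ValueError("rank exceeds total MPI count")

total = sum(sx * sy * sz for sx, sy, sz in splits.values())
print(total)              # 64 + 16 + 32 + 16 = 128 ranks in total
print(rank_to_block(70))  # a block inside region 2
```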
III. THE SIMULATION RESULTS
The results of the five simulation cases are presented and discussed in this section. First, Fig. 5 shows the time evolution of the ion number density averaged over the whole domain. Overall, the steady-state value reached is less than , i.e., 2.6 times , which is lower than the steady-state value of 3 times shown in Fig. 3 of Xie et al.,9 in which the plume region is confined radially by channel wall boundaries extended along z. Therefore, expanding the radial size to form a more realistic plume region lowers the steady-state ion density, presumably because the plasma has more room to expand.
The time costs of each major module are given in Table III. The most time-consuming part is the gather and push, taking more than 50% of the total time, followed by the density deposition at 20%–30% and the Poisson solve at about 10%–20%. In addition, the relative computation time of the Poisson solve for cases and is noticeably larger, about twice that of the other two cases, and . Converted to absolute time, the Poisson solve takes 2.49, 1.28, 0.99, and 2.13 days for the four cases, respectively. The Poisson solve time is directly and positively correlated with the number of grid points and is not related to the particle numbers. The grid point numbers of the four cases can be compared by an axially weighted count, in which each axial grid plane in the plume counts twice the one in the channel, because the plume region holds twice the grid points of the channel region per axial plane. Doing the arithmetic gives weighted grid point counts of 414, 302, 274, and 330 for the four cases, which are positively correlated with their Poisson solve times. A detailed explanation or model of this time is complex: it involves influences ranging from the grid setup of each case and the solution of the large sparse system of equations down to the underlying HPC (high-performance computing) hardware, with each level introducing different degrees of nonlinearity. One example of the hardware influence is that our servers have a non-uniform memory access architecture with two asymmetric memory-fetching speeds. When the system assigns MPI processes, currently without explicit pinning, there is no guarantee that they will be placed across the physical CPUs in a way that ensures optimal and uniform memory-fetching and MPI-exchange speeds. This directly influences the efficiency of our program and is an optimization goal for the future.
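The axially weighted grid-point counts quoted above can be reproduced from per-case channel/plume splits. The splits below are back-solved from the quoted weighted counts and z totals (weighted = channel + 2 × plume, with channel + plume = total), and the case labels are hypothetical, since they are elided in the text.

```python
# Back-solved axial splits (nz_channel, nz_plume); labels are hypothetical.
cases = {"P1": (98, 158), "P2": (98, 102), "P3": (126, 74), "P4": (70, 130)}

for label, (nz_channel, nz_plume) in cases.items():
    total = nz_channel + nz_plume          # 256 for P1, 200 otherwise
    weighted = nz_channel + 2 * nz_plume   # plume planes count double
    print(label, total, weighted)          # weighted: 414, 302, 274, 330
```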
The percentage of time spent in different PIC modules for the four PMSL simulation cases.
| Time percentage (%) |  |  |  |  |
|---|---|---|---|---|
| Gather and push | 55.7 | 64.7 | 56.9 | 56.7 |
| Deposition | 23.1 | 24.9 | 31.5 | 25.2 |
| Poisson equation solving | 19.6 | 9.0 | 9.6 | 16.6 |
| Ionization | 1.5 | 1.3 | 2.0 | 1.5 |
| Output | 1.3 × 10−2 | 1.2 × 10−2 | 2.4 × 10−2 | 8.0 × 10−3 |
| Cathode | 3.4 × 10−6 | 2.9 × 10−6 | 3.9 × 10−6 | 3.2 × 10−6 |
Furthermore, because the instability is dynamic in nature, it is best presented in animations in addition to the snapshots at different time steps shown in the paper. A video giving the temporal evolution of six field quantities can be accessed through the supplementary link; readers are referred to the supplementary material for the description of the video and the link.
In Secs. III A–III D, the simulation results of the different cases are compared in detail.
A. Comparison between PMSL and WarpX
The computation times of PMSL (case ) and WarpX (case ) are compared in Fig. 6. The left vertical axis shows the execution time per step (labeled t); both PMSL and WarpX become slower at the beginning as the number of simulated particles increases due to continuous ionization, after which t reaches a constant value of about 1.3 s per time step for PMSL and about 1.7 s for WarpX. The high spikes of PMSL every steps are due to outputting the large particle data; WarpX's large particle data are not output, to save storage. The cumulative times (labeled T) are also plotted in Fig. 6, as indicated by the right vertical axis: PMSL takes 14.22 days to finish the 1 000 000 time steps, while WarpX takes 19.58 days. In addition, note that 128 CPU cores are used for PMSL, while 256 are used for WarpX. This comparison therefore indicates that PMSL can at least reach the same computation speed level as WarpX and is even faster under this particular simulation setup. However, WarpX offers a number of optimization features that would help speed up the simulation, such as tuning the field-solve tolerance and the particle resampling thresholds as done by Marks and Gorodetsky,18 but these optimizations are beyond the scope of this paper; all of WarpX's default parameters are used in this work. Similarly, PMSL is still under active development and has plenty of room for optimization and speed-up. In the following, we compare the simulation results of PMSL and WarpX; since WarpX has been benchmarked,9,18 if PMSL's results agree with WarpX under the same simulation setup, PMSL can be considered a valid code for simulating the Hall thruster azimuthal instability.
Computation time comparison between PMSL (case ) and WarpX (case ).
The azimuthal electric fields at different time steps of cases and are plotted in Fig. 7 for the z–x plane at the middle y position and in Fig. 8 for the z–y plane at the middle x position. At time step 50 000, the instability is just starting to form; by 100 000, it has expanded from a local region to the whole channel and plume; the plots at 500 000 show the instability at steady state, consistent with Fig. 5, in which the steady state is already reached at 7.5 μs. Although the fields of PMSL and WarpX do not match each other exactly, because of different random sampling and different phases, their strengths and patterns are very similar, indicating the validity of PMSL. In addition, the magnitude of the oscillating field gradually decreases in the extended radial plume regions.
The azimuthal electric fields at different time steps for case (top row) and (bottom row) on the z–x plane at the middle y position.
The azimuthal electric fields at different time steps for case (top row) and (bottom row) on the z–y plane at the middle x position.
The ion densities at different time steps of cases and are plotted in Fig. 9 for the z–x plane at the middle y position and in Fig. 10 for the z–y plane at the middle x position. At time step 50 000, the initially sampled ions begin to travel into the plume, and wave patterns start to form in the high-density ionization region inside the channel. At 100 000, the initially sampled ions begin to exit through the cathode boundary, and wave patterns are also established in the plume region near the thruster exit. At 500 000, the characteristic steady-state oscillation is present.
The ion density at different time steps for case (top row) and (bottom row) on the z–x plane at the middle y position.
The ion density at different time steps for case (top row) and (bottom row) on the z–y plane at the middle x position.
Finally for this comparison, the potential distributions at different time steps of cases and are plotted in Fig. 11 for the z–x plane at the middle y position. The largest potential gradient is established around to , i.e., at the thruster exit. Again, the similarity between PMSL and WarpX indicates the correctness of the newly developed PMSL code. Moreover, looking back at Fig. 5, cases and reach more similar steady-state values of than the other cases, which further supports the correctness of PMSL.
The potential at different time steps for case (top row) and (bottom row) on the z–x plane at the middle y position.
B. Influence of the plume region axial size
Cases (with 256 cells in z) and (with 200 cells in z) can be compared to assess the influence of the plume region axial size. First, the oscillations on the x–z plane are plotted in Fig. 12 for (left) and (middle and right). Note that two different time steps of , 948 000 and 782 000, are presented to approximately match the phase of at time step 1 000 000. Looking only at the plot from z = 0 to 200 and comparing it with the other two plots, no obvious difference can be seen in the wave pattern or intensity, either in the channel or in the plume.
The azimuthal electric fields on the z–x plane at the middle y position, for case (left) and case (middle and right) at different time steps.
Next, the ion density and potential profiles along z are plotted and compared between cases and . In Fig. 13, the time evolution of at the middle position of x and y along z is plotted for (left) and (middle). In Fig. 13 (right), time averages computed from the data between and time steps, sampled every 1000 steps, are plotted. The oscillation patterns are similar between and , but the time-averaged profiles show a discrepancy: has a greater peak value , while has a lower peak value . However, the curves agree better in the plume region starting from .
The ion density profiles along z at the middle position of x and y. Left: the time evolution of . Middle: the time evolution of . Right: averaged from to time steps.
Similarly, the potential profiles are given in Fig. 14. The curves of and agree better in the channel region but gradually diverge in the plume region, mostly because the fixed zero-potential boundary conditions are applied at different axial locations.
The potential profiles along z at the middle position of x and y. Left: the time evolution of . Middle: the time evolution of . Right: averaged from to time steps.
In addition, the averaged and distributions along z of cases and are also plotted in Figs. 13 (right) and 14 (right). Small differences exist among , , and due to the shifted thruster exit locations, but these differences are less significant than those between and .
C. Dispersion relation
In this section, the dispersion relation is diagnosed and compared among the different cases. Taking case as an example, the time evolution of the azimuthal electric field along y at the middle position of x is plotted in Fig. 15 for different z positions, of which only three are shown as examples. From these data, the fast Fourier transform (FFT) can be applied to obtain the dispersion relation in k–ω space, where k is the wave number in the y direction and ω is the angular frequency, as shown in Fig. 16, where five z positions are considered and labeled in the figure.
Time evolution of along y at the middle of x, for (left), (middle), and (right).
The dispersion relation of computed based on the data shown in Fig. 15. The and k in each plot are normalized by unique and values, calculated based on the labeled and values.
For each plot, the electron number density at the corresponding z and the middle position of x is averaged over 64 cells in y and over the time steps from to . Similarly, the electron macro-particles at the corresponding and the middle position of are used to compute the electron temperature , averaged over 64 cells in y and the same range of time steps. The averaged values of and are labeled in Fig. 16 and are used to calculate the ion oscillation frequency and the Debye length for normalizing ω and k.
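The diagnostic described here is a standard 2D FFT; a minimal numpy sketch is given below, with all names our own and the xenon ion mass hard-coded. The ion acoustic curve used for comparison is included for reference.

```python
import numpy as np

EPS0, QE = 8.854e-12, 1.602e-19
MI = 2.18e-25                       # xenon ion mass, kg

def dispersion(Ey_yt, dy, dt):
    """2D FFT of the azimuthal field Ey(y, t), shaped (Nt, Ny), into
    (omega, k) space. Returns k, omega, and the spectrum magnitude,
    where spec[i, j] corresponds to (omega[i], k[j])."""
    Nt, Ny = Ey_yt.shape
    spec = np.fft.fftshift(np.abs(np.fft.fft2(Ey_yt)))
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(Ny, d=dy))
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(Nt, d=dt))
    return k, w, spec

def norms(ne, Te_eV):
    """Local Debye length and ion plasma frequency used to normalize
    k and omega, from the averaged ne and Te as in Fig. 16."""
    lam_De = np.sqrt(EPS0 * Te_eV / (ne * QE))    # Te in eV
    w_pi = np.sqrt(ne * QE**2 / (EPS0 * MI))
    return lam_De, w_pi

def ion_acoustic(k, Te_eV, lam_De):
    """Reference ion acoustic dispersion omega(k) for comparison."""
    cs = np.sqrt(QE * Te_eV / MI)
    return k * cs / np.sqrt(1 + (k * lam_De) ** 2)
```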
The discrepancy over z for all P cases (top). The averaged and values over z for all P cases (bottom).
Finally, looking at the animation (see the supplementary video), longitudinal wave patterns can also be detected, i.e., electrostatic waves propagating along z toward the anode. We therefore plot the time evolution of along z at the middle x and y in Fig. 18 (left). Because has a zeroth-order (time-averaged) profile, minus its time average is also plotted in Fig. 18 (middle), and the resulting dispersion relation is shown in Fig. 18 (right). The corresponding ion acoustic dispersion is plotted as well, and the measured dispersion deviates from it toward a higher phase velocity. The Ez wave therefore does not appear to exhibit characteristics consistent with ion acoustic waves. Since this type of wave or instability has rarely been reported and falls outside the scope of this paper, we defer further discussion to future work.
Time evolution of along z at the middle of x and y (left), minus its time average (middle), and the resulting dispersion relation (right).
D. Plume angle
Finally, cases , , and are compared to study the effect of the thruster exit position. Since the dispersion analyses and the macroscopic distributions of and have been presented in the preceding sections and show no significant difference, here we focus on the plume angle. The concept of the aft-loading magnetic field design has been proposed in recent years to prolong the lifetime of Hall thrusters; it corresponds qualitatively to a transition from case to to , where has the longest discharge channel with the peak magnetic field inside the channel, while has the shortest channel with the peak magnetic field moved out of the channel. The plume angles produced by these simulations are therefore of interest.
To diagnose the plume angle, the ion velocity vectors are plotted in Fig. 19, where only the ions located inside the middle cells of at time step are considered. It is further found that the plots of can help determine the plume angle with even higher accuracy: as shown in Fig. 19, the plume angle can be delimited by the negative fields in the plume region. Therefore, both the ion velocity vectors and the field are presented in Fig. 19 for cases , , and .
Plume angle determination from the ion velocity vector and the field for the case (top), (middle), and (bottom).
As seen in Fig. 19, has the smallest plume angle of about 45°, has a plume angle of about 74°, and has the largest plume angle of about 92°. Thus, moving the peak magnetic field out of the channel enlarges the plume angle, which is consistent with experimental observations that an aft-loading magnetic field design leads to an enlarged plume angle. Note that the approach applied here to determine the plume angle, namely from the ion vector plot and the plot, may not be consistent with the commonly used experimental approach, and since only the radial component of the magnetic field is considered, as given in Eq. (1), the angles obtained here may not be directly comparable with those obtained in experiments.
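For reproducibility, one possible quantitative estimator of the plume half-angle from the ion velocities is sketched below; this is a hypothetical diagnostic of our own, whereas the angles quoted above are read off the vector and field plots of Fig. 19.

```python
import numpy as np

def plume_half_angle_deg(vx, vz, quantile=0.95):
    """Estimate a plume divergence half-angle from ion velocities in the
    plume: the angle of each downstream-moving ion relative to the axial
    (z) direction, summarized by a high quantile of the distribution."""
    mask = vz > 0.0                                   # downstream ions only
    ang = np.degrees(np.arctan2(np.abs(vx[mask]), vz[mask]))
    return np.quantile(ang, quantile)
```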
IV. CONCLUSION AND DISCUSSION
Several major conclusions can be drawn from this work. (1) The newly self-developed code PMSL-PIC-HET-3D has been validated by comparison with WarpX. PMSL's computing speed reaches the same level as WarpX and can even exceed it under some circumstances. (2) Compared with our previous work using a 3D cubic domain9 with the channel wall boundary extended into the plume region, the radial size of the domain is partially enlarged in this work to better simulate the plume, by combining different-sized cubic domain regions in PMSL. Enlarging the radial plume size leads to overall lower steady-state values of the averaged ion number density, most probably because the plasma has more room to expand and dissipate. (3) The effect of the plume size in z is investigated by comparing simulation cases and . The macroscopic profiles, such as the number density and the potential, are partially affected by the plume axial size in the channel region and the plume region, respectively, but the wave and instability properties, in terms of the dispersion relation, are not affected much. Thus, if one only focuses on the wave and instability properties, a relatively small plume axial size could be used to save computational cost. (4) For all cases, the dispersion relations of the oscillating azimuthal electric field are obtained at different axial locations, and the discrepancy from the ion acoustic wave dispersion is evaluated by defining a quantity . Although small differences exist among the cases, the tendency of along z remains the same: the discrepancy is relatively large toward the anode and the cathode and becomes zero twice, once on each side of the maximum magnetic field location. (5) Finally, cases , , and are compared to study the effect of the thruster exit location relative to the maximum magnetic field. No significant difference is observed in the dispersion relation, but the plume angle for the so-called aft-loading magnetic field design becomes much larger, which agrees with experimental observation. It is also found that the field distribution in the x–z plane can be used to determine the plume angle, consistent with the ion velocity vector distribution.
In the future, the PMSL code will be further developed in terms of computing speed and parallel efficiency, and more features will be added to enable more realistic simulations of Hall thruster azimuthal instability. In addition, there are many other PIC codes from which we can draw inspiration to enhance our code for Hall thruster simulations, such as the UNIPIC code,19,20 the conformal symplectic PIC code,21 and the finite element code.22
SUPPLEMENTARY MATERIAL
See the supplementary material for a video showing the temporal evolution of six field quantities of interest in the middle x–y (radial–azimuthal) and x–z (radial–axial) cross-sectional planes of simulation case . The six field quantities are the normalized electron density , the normalized ion density , the azimuthal electric field , the radial electric field , the axial electric field , and the electric potential ; they are grouped into two groups in the video, with the former three in the first group and the latter three in the second. The temporal resolution is one frame per 1000 simulation steps, so 1000 frames (1 000 000 total simulated steps divided by 1000) are presented for each group.
ACKNOWLEDGMENTS
The authors acknowledge the support from the National Natural Science Foundation of China (Grant No. 5247120164). This research used the open-source particle-in-cell code WarpX https://github.com/ECP-WarpX/WarpX, primarily funded by the U.S. DOE Exascale Computing Project. Primary WarpX contributors are with LBNL, LLNL, CEA-LIDYL, SLAC, DESY, CERN, and TAE Technologies. We acknowledge all WarpX contributors.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Xi Chen: Data curation (equal); Formal analysis (equal); Investigation (equal); Validation (equal); Visualization (equal); Writing – original draft (equal). Lihuan Xie: Data curation (supporting); Methodology (equal); Writing – review & editing (equal). Kunpeng Zhong: Methodology (equal); Software (equal); Visualization (equal). Xin Luo: Software (supporting); Visualization (equal); Writing – review & editing (supporting). Zhijun Zhou: Software (supporting); Visualization (supporting); Writing – review & editing (supporting). Baisheng Wang: Software (supporting); Writing – review & editing (supporting). Zhe Liu: Software (supporting). Yinjian Zhao: Conceptualization (lead); Funding acquisition (lead); Methodology (equal); Resources (lead); Software (lead); Supervision (lead); Writing – review & editing (supporting).
DATA AVAILABILITY
The data that support the findings of this study are available within the article and its supplementary material.