We present a higher-order convergent numerical solver for active polar hydrodynamics in three-dimensional domains of arbitrary shape, along with a scalable open-source software implementation for shared- and distributed-memory parallel computers. This enables the computational study of the nonlinear dynamics of out-of-equilibrium materials from first principles. We numerically solve the nonlinear active Ericksen–Leslie hydrodynamic equations of three-dimensional (3D) active nematics using both a meshfree and a hybrid particle-mesh method in either the Eulerian or Lagrangian frame of reference. The solver is validated against a newly derived analytical solution in 3D and implemented using the OpenFPM software library for scalable scientific computing. We then apply the presented method to studying the transition of 3D active polar fluids to spatiotemporal chaos, the emergence of coherent angular motion in a 3D annulus, and chiral vortices in symmetric and asymmetric 3D shapes resembling dividing cells. Overall, this provides a robust and efficient open-source simulation framework for 3D active matter with verified numerical convergence and scalability on parallel computers.
I. INTRODUCTION
Active matter refers to material systems whose constituents convert chemical energy into mechanical work, resulting in self-organized emergent motion. The constant input of energy at the microscopic scale maintains the system out of thermodynamic equilibrium. The type of emerging motion depends on many factors, including the rate of energy injection,1 material properties,2,3 and constituent concentrations.4 This gives rise to rich nonlinear dynamics, including topological active hydrodynamics.5–7 In addition, active matter often comprises anisotropic or orientable constituents, endowing it with nematic (liquid crystal) or polar order, respectively. Examples of nematic or polar active matter include cytoskeletal structures in living cells,8–10 bacterial suspensions,11,12 multicellular tissues,12 and flocks of birds.13 The governing partial differential equations (PDEs) for the mean-field description of polar or nematic active matter have been derived from first principles (symmetries and conservation laws),8–10,14 as comprehensively reviewed by Marchetti et al.12 Studying the fascinating emergent properties of this class of materials, however, relies on numerical simulations, as the equations are analytically intractable in all but the simplest one-dimensional cases.
Owing to this importance of computer simulations in the study of active matter, various numerical methods have been proposed. On the microscopic level, active matter has been simulated “bottom-up,” resolving the individual molecular components and their stochastic dynamics, as oriented active particle systems,15 by Brownian dynamics,16 or by stochastic simulation algorithms.17 Furthermore, molecular dynamics simulations using HOOMD-Blue18,19 have been used for active matter studies at the molecular scale.20,21 On the mesoscopic level, hybrid Lattice-Boltzmann methods (hLBM) have established themselves as a standard for simulating active matter in both two dimensions (2D) and three dimensions (3D), due to their versatility in modeling complex fluids.22 hLBM for active matter derives from a classic algorithm for passive liquid crystals,23–25 which was extended to active nematics. This has been successfully applied to a range of problems in active hydrodynamics, including chaotic flows (active turbulence)2,26–32 and topological defect dynamics in activity gradients.28,29,31,33,34 Despite their popularity, though, hLBM methods are limited to periodic or no-slip boundary conditions, and convergence to the correct hydrodynamic mean-field limit is only guaranteed for certain lattice symmetries35 under specific moment constraints on the equilibrium distribution.22,33 If they converge, the order of convergence is at most 1 in domains with curved boundaries,36 but a systematic convergence validation for active hydrodynamics is missing in the literature. Most importantly, an open-source implementation of hLBM for active matter does not seem to be available, limiting reproducibility and utility for the scientific community.
On the macroscopic, continuum level, spectral or pseudospectral methods have been used to numerically solve the governing PDEs.37 This has, e.g., enabled studying active suspensions,38–41 bacterial suspensions,42 and chaotic behavior in 2D active nematics.43,44 Spectral methods have superior accuracy (exponential convergence to machine precision) in periodic domains, but are limited to simple domain geometries and lose their spectral convergence for non-periodic boundary conditions.45–47 Ramaswamy et al.48 therefore presented a hybrid Lagrangian particle-mesh method for 2D active polar fluids, which allows for arbitrary boundary conditions in rectangular domains and was the first method to solve the most general Ericksen–Leslie PDEs of active polar fluids with Lagrange multipliers for unit polarity magnitude.8–10,14 This method has been used to characterize spontaneous transitions of a 2D contractile active fluid from a resting initial state to spontaneous laminar flow, further to traveling vortices and waves, and eventually to spatiotemporal chaos.49 Due to the use of staggered Cartesian grids, this method is also limited to simple (rectangular or isomorphic to a rectangle) domain geometries. A simplified unsteady Stokes-flow model of active hydrodynamics has been solved using finite differences in order to study topological defect dynamics and spatiotemporal chaos in 2D active nematics.50,51 In 3D, a box-shooting algorithm on a staggered Cartesian grid was proposed for solving a minimal phenomenological model of active hydrodynamics that did not include flows induced by the rotation of molecules along their axis in the nematic field.52 Jointly, these works emphasize the progress, but also the challenges and limitations of the different numerical methods applied to active polar hydrodynamics. Importantly, a convergence-validated method that solves the macroscopic symmetry-preserving PDE model in 3D with arbitrary boundary conditions and in arbitrary 3D geometries has, to our knowledge, so far been missing.
This work addresses the computational challenges of numerically solving the full nonlinear theory of active polar hydrodynamics as derived from symmetry relations. The model considered here contains a 3D vectorial orientation field, which is nonlinearly coupled to the flow velocity field. More specifically, we present a numerical method to solve the 3D active Ericksen–Leslie equations that model polar or nematic active matter in the mean field. The proposed hybrid particle-mesh approach can incorporate arbitrary boundary conditions on complex-shaped 3D domain geometries, and it is implemented as scalable open-source software for shared-memory and distributed-memory parallel computers. We use Discretization-Corrected Particle Strength Exchange (DC-PSE)53 to consistently discretize differential operators on geometry-adapted distributions of collocation points. This is coupled with a pressure-correction scheme to enforce incompressibility of the fluid, avoiding the use of staggered grids, which are challenging to construct in complex domains. We validate convergence of the presented solver for the incompressible force balance on regular and irregular collocation-point distributions. The computational cost associated with 3D problems is addressed by using the open-source framework OpenFPM for scalable and efficient scientific computing.54 The complexity of the governing equations is tackled using a C++ template expression system for PDEs, greatly reducing code complexity and improving readability and extensibility.55 We present the derivation of the governing equations in 3D, a detailed description of the algorithm, as well as its performance and convergence. The algorithm is then applied to study the spontaneous flow transition in an extensile active polar fluid with friction boundary conditions, and the transition of the same system to spatiotemporal chaos (a.k.a. active turbulence) as well as to traveling waves in a 3D annulus. Finally, the application of the presented solver to asymmetric 3D geometries is demonstrated, simulating the onset of vortical flows in geometries resembling dividing biological cells. Together, these results fully validate the proposed numerical method and illustrate its application to cases where the symmetry-preserving PDE model of active matter was hitherto impossible to solve.
II. HYDRODYNAMIC DESCRIPTION OF 3D ACTIVE LIQUID CRYSTALS
The hydrodynamic theory of active matter describes the dynamics of out-of-equilibrium systems that are driven by energy conversion at the microscopic scale.8–10,14 The average orientation of the microscopic constituents in a control volume defines the macroscopic orientation field and is referred to as the polarity field p with components p_α, where α stands for any coordinate direction [{x, y, z} in Cartesian coordinates]. A typical example of such a material is the cytoskeleton in biological cells. There, the polarity field describes the average orientation of the cytoskeletal filaments.56 Motor proteins bind to the filaments and convert chemical energy (in the form of the “fuel” molecule ATP) to mechanical work (in the form of forces/stresses). This deforms the filaments and sets the surrounding fluid in motion. At the same time, passive and elastic stresses relax the polarity field toward uniform nematic order, and the viscosity of the surrounding fluid dissipates energy. The resulting force balance leads to self-organized emergent flows with velocity field v with components v_α. Based on first principles (conservation laws, symmetry, entropy production, and Onsager reciprocal relations), the constitutive equations can be derived, linking all stresses to the fluid strain rate.
The pressure Π acts as another Lagrange multiplier to enforce the incompressibility condition on the velocity field, ∂_α v_α = 0. Because the Lagrange multiplier enters the stress tensor, additional flow-field derivatives appear in the force-balance equation when computing the divergence of the total stress, ∂_β σ_{αβ}. This leads to the final governing equations as given in Appendix B. Since those equations would be challenging to hand-code in a computer programming language, we leverage a C++ template expression system55 to directly translate the mathematical expressions into executable code.
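To illustrate the idea, the sketch below shows how a derivative-containing term can be written almost verbatim in code using such an expression system. The operator and field names (Derivative_x, getV, the property slots, and the active-stress term) are illustrative assumptions modeled on the cited expression system,55 not a verbatim reproduction of its API, and the setup of the particle set is omitted.

```cpp
// Schematic sketch (assumed names, not compilable standalone):
// DC-PSE differential operators are created once and then used inside
// mathematical expressions over particle properties.
constexpr int POLARITY = 0, FORCE = 1;
constexpr int x = 0, y = 1;

Derivative_x Dx(particles, 2 /* order */, rCut);  // DC-PSE d/dx operator
Derivative_y Dy(particles, 2 /* order */, rCut);  // DC-PSE d/dy operator

auto p = getV<POLARITY>(particles);  // polarity field stored on particles
auto f = getV<FORCE>(particles);     // force-balance right-hand side

// One component of an active-stress divergence, e.g., zeta * d(p_x p_y)/dy,
// written as a single expression that is evaluated lazily on all particles:
f[x] = zeta * Dy(p[x] * p[y]);
```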
III. NUMERICAL ALGORITHM
The numerical algorithm for the equations in Appendix B starts by solving the force balance, because knowing the velocity field is a prerequisite for evolving the polarity in time. It then computes the polarity time evolution in either an Eulerian or a Lagrangian frame of reference. This results in a hybrid particle-mesh method with normalization correction for the polarity field, as outlined in the flow chart in Fig. 1. All computer codes are implemented in C++ using the open-source scalable scientific computing framework OpenFPM.54
The present approach is fundamentally different from the popular hybrid Lattice-Boltzmann method, as it directly solves the mean-field PDE model of the system in the long time limit. Unlike Lattice-Boltzmann, the present method does not evolve the velocity field in time, but instead solves the force balance for the velocity at each time step. This allows flow velocities to relax on a different timescale than the polarity. Previous simulation methods also confined predictions to 2D or to solving a transient Stokes equation in 3D accompanied by a finite speed of sound. The present approach relaxes these limitations and also affords more freedom than Lattice-Boltzmann methods in choosing the space and time resolution of the simulations. One should always carefully check the relationship between viscosity, relaxation time, and mesh resolution in order to avoid unphysical oscillations in the computed flow fields. This is easier to do in the present framework than in Lattice-Boltzmann methods, where the viscosity inherently depends on the mesh resolution. This becomes particularly important when simulating active turbulence, where numerical compressibility can influence the results. Unlike Lattice-Boltzmann methods, the solver presented here converges toward the incompressible limit as resolution increases, offering precise tolerance-based control over compressibility artifacts without needing to monitor the speed of sound.
A. Force-balance solver
Given the polarity field p at any point in time t, the induced velocity field v is determined by discretizing Eqs. (B1)–(B3) using DC-PSE differential operators.53 Incompressibility is imposed using a pressure-correction scheme as previously described in 2D58,59 and successfully used with DC-PSE operators.60 Here, we extend this algorithm to 3D. The solver can be used on irregular distributions of collocation points, which are therefore referred to as particles. This renders the algorithm suitable for simulations in both Eulerian and Lagrangian frames of reference, where particles are stationary or move with the velocity field of the flow, respectively. The resulting linear system of equations is assembled using the OpenFPM template expression system55 and is solved with the KSPGMRES solver from the PETSc library.61 The pressure-correction algorithm iteratively improves the estimate of the flow velocity to a desired numerical tolerance ϵv. The tolerance ϵv is bounded from below by the approximation error of the spatial derivatives, which decreases with increasing resolution. We therefore set ϵv depending on the resolution, using a larger tolerance for h = 0.2 than for h = 0.1. The full pressure-correction solver is described in Algorithm 1.
Algorithm 1. Pressure-correction solver for the incompressible force balance.
Input:
1. ϵv: Numerical relative tolerance for the incompressible flow solver.
2. nmax: Maximum number of pressure-correction iterations.
3. v^(0), Π^(0): Initial guess for velocity and pressure.
Output: v: Incompressible flow velocity satisfying Eq. (1b) and the boundary conditions.
n = 0
while ‖∇·v^(n)‖ > ϵv and n < nmax do
  Solve the Stokes equations for v^(n+1) using the current pressure Π^(n).
  Correct the pressure: update Π^(n+1) from Π^(n) using the residual divergence ∇·v^(n+1).
  n = n + 1
  if n = nmax then
    Stop with error: “Requested tolerance is unachievable for the chosen resolution”
  end
end
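In compact form, the correction step can be written as an artificial-compressibility-type update; the relaxation coefficient ξ below is an assumed placeholder for the scheme-specific factor:

$$
\Pi^{(n+1)} = \Pi^{(n)} - \xi\,\partial_\alpha v_\alpha^{(n+1)}, \qquad \xi > 0,
$$

which drives the pressure toward the value at which the velocity divergence vanishes to within ϵv.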
B. Time evolution of the polarity field
Once the velocity field has been computed, a time-integration step for the polarity field is performed. Equation (1a) can be integrated in time in either an Eulerian or a Lagrangian frame of reference until a desired final time tf. In an Eulerian frame of reference, the advection operator in Eq. (2) is computed using DC-PSE spatial operators. To achieve numerical stability, however, predictor–corrector time integration is necessary, as the operators are centered in space and not upwind. Hence, for the Eulerian frame of reference, we use the second-order Adams–Bashforth–Moulton predictor–corrector scheme for time integration, as implemented in OpenFPM.62 For Eulerian simulations, the time step δt is limited for numerical stability by the CFL condition δt ≲ h/max‖v‖, where h is the average inter-particle spacing.63 This severely limits the time step size. The issue is exacerbated at higher activity, where stronger active stresses cause faster flows. However, when computing flow steady states, and for imposing boundary conditions in complex domain shapes, the Eulerian frame of reference still provides better stability than the Lagrangian approach. Steady states are detected with a user-defined tolerance, checked against the maximal rate of change of polarity in the entire domain.
The Lagrangian frame of reference allows for the less stringent Lagrangian CFL condition δt ‖∇v‖∞ ≲ 1, imposing that particle trajectories never cross flow streamlines.64 Moving the particles changes their positions, which requires the DC-PSE kernels to be recomputed at each simulation time step. Time integration in the Lagrangian case is also done using the second-order Adams–Bashforth–Moulton predictor–corrector scheme. Alternatively, a fourth-order Runge–Kutta scheme can be used in this case. Recomputing the kernels adds computational cost, but this is amortized in most cases by the increased stability permitting larger time steps.53 Detecting non-equilibrium steady states with nonzero flow, however, is not easily done in the Lagrangian frame of reference. It is only possible if the rate of change of the polarity can be equated to an approximation of the advection operator. Computing the advection operator, however, introduces additional numerical error and lowers the tolerance with which the steady state can be detected. Nevertheless, the superior time-stepping stability of the Lagrangian method makes it the preferred choice for simulating unsteady flows in simple Cartesian geometries.
The present method supports all types of polarity boundary conditions, including homogeneous and inhomogeneous Dirichlet, Neumann, and Robin (mixed) boundary conditions. Dirichlet boundary conditions are straightforward, as the polarity simply remains constant at boundary particles. Neumann boundary conditions are imposed using the method of images by creating a “ghost layer” of particles in a thin tubular neighborhood outside the domain boundary. The width of this ghost layer is given by the radius of the DC-PSE operators used to discretize the derivatives. The ghost layer is created by mirroring particles at the boundary along the normal n. If the particles do not move, this is done only once at the beginning of a simulation, otherwise at each time step. In a distributed-memory simulation, it is ensured that ghost and source particles are stored on the same process to minimize communication. The polarity values of the “ghost particles” are reset at each time step, ensuring the correct value of the prescribed normal derivative at the boundary.
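A minimal sketch of the mirroring step for a planar boundary is given below, assuming a plain array-of-structs particle layout; the actual implementation operates on the OpenFPM particle data structures and general boundary shapes.

```cpp
#include <array>
#include <vector>

// Illustrative sketch (assumed data layout): create mirror ("ghost") particles
// outside a planar boundary for the method of images. A particle at distance
// d < rCut inside the boundary is reflected across it along the outward normal n.
struct Particle { std::array<double,3> x; std::array<double,3> p; };

std::vector<Particle> mirrorLayer(const std::vector<Particle>& bulk,
                                  const std::array<double,3>& n,  // unit outward normal
                                  double planeOffset,             // plane: n . x = planeOffset
                                  double rCut)                    // DC-PSE operator radius
{
    std::vector<Particle> ghosts;
    for (const auto& q : bulk) {
        const double d = planeOffset - (q.x[0]*n[0] + q.x[1]*n[1] + q.x[2]*n[2]);
        if (d > 0.0 && d < rCut) {                 // inside the tubular neighborhood
            Particle g = q;                        // copy source values
            for (int k = 0; k < 3; ++k)
                g.x[k] = q.x[k] + 2.0 * d * n[k];  // reflect position across the plane
            // The ghost's field values are reset from the source particle at
            // every time step to impose the prescribed normal derivative.
            ghosts.push_back(g);
        }
    }
    return ghosts;
}
```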
C. Remeshing
Algorithm 2. Remeshing with polarity renormalization.
Input:
1. S0: Particle set with polarity p and velocity v stored on the particles located on a mesh.
2. ϵp: Numerical tolerance for deviation in polarity magnitude.
Output: Polarity field p on S0 after advection with velocity v.
Create a particle distribution S1 by advecting particles from S0.
Create mirror particles outside the domain boundaries as required for the method of images to impose boundary conditions.48
Interpolate polarization p from S1 to S0.
for each particle in S0 do
  if |‖p‖ − 1| > ϵp then
    p ← p/‖p‖
  end
end
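The renormalization at the end of the loop amounts to the following per-particle check (a minimal sketch, assuming a plain array layout), since interpolation does not preserve the unit-magnitude constraint:

```cpp
#include <cmath>

// Renormalize the interpolated polarity if its magnitude deviates
// from 1 by more than eps_p.
inline void renormalize(double p[3], double eps_p)
{
    const double norm = std::sqrt(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]);
    if (norm > 0.0 && std::fabs(norm - 1.0) > eps_p) {
        for (int k = 0; k < 3; ++k) p[k] /= norm;
    }
}
```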
IV. RESULTS
The theoretically expected order of convergence of the method presented above depends on the spatial discretization operators and the time-integration scheme used. In the presented solver, all spatial differential operators are discretized using DC-PSE with a consistency of O(h²), where h ∝ N^(−1/3) is the inter-particle spacing and N is the total number of particles used in the simulation. The remeshing scheme converges with third order and therefore does not limit the convergence rate. Time is integrated using the two-step Adams–Bashforth–Moulton predictor–corrector scheme, which has a global convergence order of O(δt²) = O(N_t^(−2)), where Nt is the total number of time steps in a simulation. We therefore expect the method to be second-order accurate overall in both space and time.
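Schematically, and assuming the individual error contributions add, the total discretization error is then expected to behave as

$$
E(h,\delta t) = C_1 h^{2} + C_2\,\delta t^{2} + O(h^{3}),
$$

where the O(h³) remeshing contribution is asymptotically dominated by the second-order terms.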
A. Validation: Three-dimensional active incompressible channel flow
We numerically solve this problem for η = 1, γ = 1, λ = 1, and L = 10 in the Lagrangian frame of reference with remeshing. For this set of parameters, the analytical solution is known in closed form. We use the proposed hybrid particle-mesh method to perform the simulation in a box-shaped domain discretized by a uniform Cartesian mesh of resolution h. The domain has periodic boundary conditions in x and z to model the infinite extent of the fluid in these directions. In the y direction, the boundary conditions are as specified by the problem.
The polarity field is invariant along the x and z axes [Figs. 2(a) and 2(b), left]. Its one-dimensional gradient in y creates coherent active stresses that generate a unidirectional flow of material along the x axis. This flow has a velocity gradient that respects the no-slip boundary condition at the bottom wall and the stress-free boundary condition at the top wall [Figs. 2(a) and 2(b), right]. Reversing the sign of the active stress in the simulation reverses the flow direction, respecting the symmetry of the problem.
Refining h, we observe second-order convergence to the analytical solution at t = 40, as theoretically expected. In order to satisfy the CFL condition, the time step size is refined along with the grid resolution according to the relation Nt ∝ Nx, where Nx and Nt are the number of particles in the x direction and the number of time steps to reach t = 40, respectively. The initial polarity and velocity fields at t = 0 are visualized in Fig. 2(a). The steady-state numerical solution at t = 40 is visualized in Fig. 2(b). We plot the L2 and L∞ errors in both the polarity field p and the nonzero component of the strain rate uxy for increasing Nx in Fig. 2(c). Figure 2(d) further verifies that the error in the polarity magnitude plateaus over time and converges for decreasing h, confirming numerical convergence of the Lagrange multiplier h∥. The jaggedness in Fig. 2(d) reflects numerical “ringing” of the Lagrange multiplier. This ringing is caused by the remeshing error causing the predictor–corrector time-integration scheme with constant time step size to overshoot the flow steady state. The amplitude of this ringing, however, does converge with the correct order for increasing resolution. Together with earlier convergence results of the DC-PSE pressure-correction scheme for steady-state Stokes flow in a 3D ball with irregular particle distributions,55 this validates the correctness of the present method also in the active polar case.
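For reference, the observed convergence order in such a refinement study is computed from the errors at two successive resolutions in the standard way:

$$
p_{\mathrm{obs}} = \log_2 \frac{\lVert e_h \rVert}{\lVert e_{h/2} \rVert},
$$

which should approach 2 as h decreases, in line with the expected second-order accuracy.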
B. Benchmarks: Computational cost and scalability
Next, we benchmark the computational performance of the solver. The KSPGMRES algorithm68 implemented in the PETSc library uses Arnoldi iterations to find an orthonormal basis of the Krylov subspace. In each Arnoldi iteration, a new basis vector is found and orthonormalized against all previous ones using the Gram–Schmidt process. This requires storing the vectors from all k previous iterations. The number of multiplications over k Arnoldi iterations thus is O(Nk²), where N is the total number of particles and k = N in the worst case. The storage requirement is O(Nk). We prevent the explosion of the cost with k² by restarting the solver every m = 5000 iterations. This restarted version requires O(mZ + m²N) multiplications per restart cycle, where Z is the number of nonzero elements in the sparse force-balance system matrix of size 3N × 3N for the three spatial components.69 The memory requirement reduces to O(Nm). For each iteration of the pressure-correction algorithm, the KSPGMRES solver is run to convergence. This is repeated at most nmax times. The cost for the two-step Adams–Bashforth–Moulton time-integration scheme is O(N). Likewise, remeshing has a computational cost of O(N), as each particle assigns to a constant number of neighboring grid nodes. In the present Eulerian simulation, the particle positions do not change over time. Therefore, the DC-PSE kernels only need to be calculated once at the beginning of the simulation. Evaluating the kernels is an O(N) operation. In summary, the overall computational cost of the simulation in an Eulerian frame of reference is dominated by KSPGMRES, as it needs to solve a linear system of equations for each pressure-correction iteration, the cost of which depends on the condition number of the system. In a Lagrangian frame of reference, the DC-PSE kernels need to be recomputed at every time step, incurring an additional computational cost of O(N), albeit with a large pre-factor. This cost then dominates the one from the linear system solver. Overall, we thus expect the computational cost of the whole simulation to scale linearly with the number of particles N. This is empirically confirmed in our benchmarks with N ranging from 10³ to 10⁵.
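As an illustration, configuring a restarted GMRES solver in PETSc follows the standard pattern below; the matrix A and the vectors b and x are assumed to have been assembled beforehand, and the restart length shown corresponds to the value quoted above. Error-code checking is omitted for brevity.

```cpp
#include <petscksp.h>

/* Sketch: solve A x = b with restarted GMRES, assuming A, b, x already set up. */
void solve_force_balance(Mat A, Vec b, Vec x, PetscReal rtol, PetscInt restart)
{
    KSP ksp;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);        /* system matrix (also used to build the preconditioner) */
    KSPSetType(ksp, KSPGMRES);         /* restarted GMRES */
    KSPGMRESSetRestart(ksp, restart);  /* e.g., m = 5000 as in the text */
    KSPSetTolerances(ksp, rtol, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);
    KSPSolve(ksp, b, x);
    KSPDestroy(&ksp);
}
```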
The software implementation of the presented method relies on OpenFPM to leverage shared- and distributed-memory parallel computers to address the high computational cost of 3D simulations. We therefore check how the runtime of the simulation from Sec. IV A scales when distributing it across an increasing number of CPU cores. For each configuration, ten time steps are performed on Intel Xeon E5–2680v3 CPUs at 2.50 GHz with 24 cores on each compute node. Individual compute nodes are connected by a 4-lane FDR InfiniBand network (at 14 Gb/s per lane) with a latency of 0.7 μs for message passing using the OpenMPI library. Each experiment is repeated 10 times, and the results are averaged over the 10 independent repetitions. We use the ParMETIS graph domain decomposition70 of OpenFPM to parallelize the computation over spatial particle sets.54
For a fixed number of particles (Nx = 64, Ny = 65, and Nz = 5), the number of CPU cores increases from 1 to 192, hence measuring the strong scaling of the algorithm [Fig. 3(a)]. The time measurements show an almost linear scaling within one compute node. Using more cores per node, the memory bandwidth becomes the bottleneck, such that the speedup on 24 cores (1 full node) is about 14-fold. Scaling beyond one node, communicating data via the interconnect network of the machine, the scaling slows down only slightly up until 128 cores. Using more than 128 cores, runtimes start to increase, as this problem is too small to be distributed over more than 128 cores. We next measure the weak scaling of the simulation, where the number of particles increases proportionally to the number of CPU cores. For simplicity, this test is performed in a cubic computational domain, increasing the number of particles proportionally with the number of cores from 1 core up to 1012 cores (one additional particle is required in y to impose the boundary conditions) [Fig. 3(b)]. We observe perfect scaling within one node up to 4 cores, while the parallel efficiency when using all 24 cores of a node is 58%. Scaling to more nodes, the communication overhead of the linear system solver eventually becomes limiting, as it requires global all-to-all communication with a volume that grows with the total problem size. The communication time depends on the node interconnect topology, the condition number of the system matrix, the number of GMRES iterations until convergence, the shape of the domain decomposition, and the memory bandwidth of the compute nodes. As the number of processor cores increases, the communication overhead on average increases. The nonmonotonic behavior of the measured times in Fig. 3(b) is not caused by measurement noise, as the error bars over the 10 independent repetitions of each run are smaller than the symbol size. Instead, these variations are deterministic. They are caused by the condition number of the system matrix changing with increasing resolution (weak scaling), the node interconnect network topology changing as more nodes are used, and the shape of the domain decomposition changing as data distributes across more cores within a node.
Taken together, these results demonstrate that the parallel OpenFPM implementation of the present code can reduce the wall-clock time of the simulations by more than one order of magnitude (strong scaling) and allows performing large simulations (weak scaling). This enables using the code for real-world applications.
V. APPLICATIONS
We illustrate the use of the present simulation method to active polar flow problems in increasingly complex 3D spatial domains and with different boundary conditions. While analytical solutions are not available in these cases, we compare with published results where possible in order to corroborate the correctness of our simulations. Together, these example simulations showcase the broad applicability of the present method.
A. Active Fréedericksz transition in extensile polar fluids
B. Transition to active turbulence
Section V A has shown that the present numerical solver accurately recapitulates the space–time dynamics of the active Fréedericksz transition in extensile polar fluids when the activity exceeds the critical value. Increasing the activity further first leads to the emergence of traveling waves and traveling vortices, followed by a transition to spatiotemporal chaos. To identify these flow transitions, we numerically compute the maximum finite-time Lyapunov exponent λ1, which measures the exponential growth rate of a small perturbation in the system. A negative λ1 indicates that the flow relaxes back to the unperturbed state. A positive λ1 is indicative of a transition to chaos. If the perturbation remains in the system forever, λ1 is zero. We numerically compute λ1 using the classic method of Benettin et al.73 and Skokos,74 as previously implemented.49 To this end, two Lagrangian simulations are run with the same model and parameters, except that the second simulation carries an additional initial perturbation of magnitude d0 to both the azimuthal angle θ and the polar angle φ of the polarity vector of the central particle. Every TL = 50 time steps, it is checked whether the difference between the two simulations (measured by the L2-norm over all particles) increased or decreased. In order to detect the small differences in the polarity field, we decrease the numerical tolerance of the solver. This provides one estimate of λ1. Then, the perturbation is reset to d0. With a time step size of 0.01, a total of 800 λ1 estimates are thus collected by the final time tf = 400. We report the mean and standard deviation over the last 300 λ1 estimates, when the flow has transitioned into its final state.
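A minimal self-contained sketch of this Benettin-type estimator is given below; the `step` callback and the flat state layout are hypothetical stand-ins for one time step of the actual solver and its particle data.

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Benettin-type estimate of the maximum finite-time Lyapunov exponent:
// evolve a reference and a perturbed trajectory side by side, measure the
// growth of their separation every TL steps, record log(d/d0)/(TL*dt),
// then rescale the perturbation back to magnitude d0.
std::vector<double> lyapunovEstimates(
    std::vector<double> ref, std::vector<double> pert,
    const std::function<void(std::vector<double>&)>& step, // one solver time step (stand-in)
    double d0, double dt, int TL, int numIntervals)
{
    std::vector<double> lambdas;
    for (int j = 0; j < numIntervals; ++j) {
        for (int s = 0; s < TL; ++s) { step(ref); step(pert); }
        double d2 = 0.0;                                 // squared L2 distance
        for (std::size_t i = 0; i < ref.size(); ++i) {
            const double diff = pert[i] - ref[i];
            d2 += diff * diff;
        }
        const double d = std::sqrt(d2);
        lambdas.push_back(std::log(d / d0) / (TL * dt)); // one lambda_1 estimate
        for (std::size_t i = 0; i < ref.size(); ++i)     // reset perturbation to d0
            pert[i] = ref[i] + (d0 / d) * (pert[i] - ref[i]);
    }
    return lambdas;
}
```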
The measured λ1 (mean ± standard deviation) is plotted as a function of activity in Fig. 4(c), with error bars indicating the standard deviation. The “No Flow” and “Flow” regimes are characterized by a homogeneous and an inhomogeneous steady state, respectively. This transition, where λ1 changes from negative to zero, is therefore the active Fréedericksz transition in extensile polar fluids as studied in Subsection V A. At a higher activity level, the system starts to exhibit self-organized traveling waves of polarity and traveling vortices of velocity. Both fields travel in the x direction at constant group velocity. The transition can be empirically localized by computing the maximum rate of change or the slope of the power spectrum of the polarity field. Such empirically estimated values, however, are always approximate and may vary slightly for different spatial and temporal resolutions of the simulation. Within the “Waves” regime, there is a sub-transition at yet higher activity. There, the polarity field starts to exhibit oscillations leading to oscillating values of λ1, as evidenced by the growing standard deviation of λ1 beyond this point. At still higher activity, the previously stable oscillation frequency starts to wobble, which is a sign of an imminent transition to spatiotemporal chaos. The mean maximum Lyapunov exponent becomes positive beyond this point, but the oscillatory behavior still dominates at this activity. In the intermediate region, the system exhibits a mixture of signatures of oscillatory vortices and chaos. This soft transition is indicated by a color gradient in Fig. 4(c). In the “Chaos” regime, the flow fields are turbulent, as quantified by a growing positive Lyapunov exponent. In this regime, we observe disclination lines in the polarity field as one-dimensional line defects preferentially forming at the walls and growing into the bulk. The defect loops move chaotically and fuse with other disclination lines upon encounter. They always form closed loops, either with themselves or by starting and ending at a wall. A visualization of the polarity and velocity fields in the chaotic regime is shown in Fig. 4(d) for the same resolution Nx = 32. To our knowledge, this constitutes the first numerical validation of the transition to turbulence for the full three-dimensional Ericksen–Leslie hydrodynamic model with extensile stress.
C. Coherent motion in a 3D annulus
Moving to a non-Cartesian geometry, we test our algorithm and implementation in a 3D annulus. The behavior of active matter in 2D annular geometries has been extensively studied both experimentally75–78 and numerically.79–81 The main focus of these studies was on confinement-induced transitions to active turbulence. In 3D, an experimental study reported coherent motion of microtubule assays confined to an annulus,82 but we are not aware of any numerical simulations of this system in 3D.
We here solve Eq. (1) of active polar hydrodynamics in a 3D annulus of inner radius 1 (dimensionless units), outer radius 5, and thickness 2.75 in the periodic z direction. The annulus is centered at the origin. At the inner and outer ring surfaces, the polarity field is anchored tangentially and parallel to the xy plane. For the velocity field, physically realistic no-slip boundary conditions are imposed. We use the same model and simulation parameters as in Subsection V B with one adjusted parameter value. The simulation is performed in the Eulerian frame of reference, since imposing the boundary conditions in this geometry would be harder in the Lagrangian frame. At t = 0, polarity is homogeneously aligned tangentially to the annulus and parallel to the xy plane, except for a small perturbation of the azimuthal angle θ and the polar angle φ of the polarity in a localized region of particles [Fig. 5(a)]. We observe that the polarity develops a circular variant of the bend instability by the time shown in Fig. 5(b). This bent state develops a sixfold angular symmetry and exhibits coherent flows that rotate it clockwise [Figs. 5(c) and 5(d)]. This confirms the experimentally observed82 emergence of coherent angular motion in 3D annular domains and suggests that extensile active materials can use the bend instability to sustain a bent mode for generating coherent motion.
D. Chiral vortices in asymmetric 3D geometries
Finally, we demonstrate the robustness and versatility of the present simulation algorithm in more challenging 3D geometries that have not been considered in the literature so far. Two different geometries resembling the shapes of dividing biological cells are created using Blender.83 The first geometry is a rotationally and left-right symmetric “peanut” shape, discretized with 1952 boundary particles given by the vertices of Blender's triangular surface mesh and 10 627 bulk particles placed on a regular Cartesian grid with spacing h = 0.1. The second geometry is an asymmetric shape created by manually indenting the peanut mesh at arbitrary locations, different for the left and right sides. It has the same number of boundary particles and 10 898 bulk particles with the same h = 0.1. At the boundary, the polarity is anchored perpendicular to the boundary, and the velocity is zero. The initial condition consists of a random polarity field, with the polarity at each bulk particle sampled uniformly at random on the unit sphere. The initial velocity field is zero everywhere. We use the same model and simulation parameters as in Sec. V A, except for ν = 0 and a nonzero activity. All simulations run until the final time. Due to the difficulty of applying the method of images in complex geometries, we use the Eulerian frame of reference for this simulation.
Without activity, the polarity field aligns with the boundary condition, and the velocity decays to zero everywhere in the domain (not shown). This is as expected for a purely dissipative system and confirms the physical plausibility of the simulation. At nonzero activity, a steady state with nonzero flow develops in both the symmetric and the asymmetric geometries (Fig. 6). The polarity and velocity fields reflect the symmetry properties of the domain. In the symmetric shape [Figs. 6(a)–6(c)], the fields are approximately left-right symmetric, but not exactly, since the simulation starts from a random initial polarity field. Over time (panels from left to right), the polarity field develops two stable asters, one in each lobe of the peanut, and the velocity field becomes chiral with multiple stable vortices stemming from the antisymmetric part of the stress tensor. In the asymmetric shape [Figs. 6(d)–6(f)], a similar pattern initially forms, but later one aster “eats” the other, and a strong steady-state streaming velocity is induced through the neck between the two sides of the peanut. Taken together, these simulations not only showcase the application of the present simulation framework to arbitrary 3D geometries, but also predict the emergence of chiral flows in axially symmetric shapes and geometry-induced left-right symmetry breaking in asymmetric domain shapes.
VI. CONCLUSION
We have presented a numerical method for solving the full, symmetry-preserving 3D active Ericksen–Leslie equations with unit polarity constraint. These equations model the physics of active polar fluids from first principles. The presented method numerically solves these complex PDEs using a hybrid particle-mesh method with Discretization-Corrected Particle Strength Exchange (DC-PSE) to consistently discretize differential operators with second-order accuracy in space over potentially irregular distributions of discretization points (a.k.a. particles). This simplifies the implementation of arbitrary boundary conditions and of simulations in irregularly shaped domains. We presented an iterative pressure-correction solver for imposing incompressibility of the flow using DC-PSE in 3D and validated its second-order accurate convergence. Thanks to the flexibility afforded by DC-PSE, remeshing, and iterative pressure-correction, the method can be used in both the Eulerian and Lagrangian frames of reference. In the Eulerian frame of reference, particles do not move. Remeshing is then not needed, but the CFL condition imposes a smaller time step limit on the simulations. Detecting flow steady states and imposing boundary conditions in complex-shaped domains, however, is easier in the Eulerian frame. In the Lagrangian frame of reference, the particles move with the velocity field of the flow. This allows for larger time steps, but also incurs a higher computational cost per time step, as DC-PSE operator kernels need to be recomputed at each time step and remeshing is required. The Lagrangian frame is therefore preferable for unsteady simulations in simple domain shapes, where the method of images can be applied to impose the boundary conditions.
The software implementation of the present algorithm leverages the open-source framework OpenFPM for scalable scientific computing54 and its template-expression system for PDEs.55 This allows the simulations to scale on shared- and distributed-memory multi-processor parallel computers, addressing the computational challenges associated with 3D active matter problems. The use of the PDE template expression system further facilitates modifying the model equations and domain geometry without needing to change the numerical solver code. We have benchmarked the parallel scalability in both the strong and the weak sense. The scalability of the simulation was predominantly contingent on the scalability of the pressure-correction solver, which currently uses the Portable Extensible Toolkit for Scientific Computation (PETSc)61 to solve the global linear system of equations.
The main results of this work were as follows: a comprehensive derivation of the governing hydrodynamic equations of 3D active polar fluids, a detailed description of a fully consistent numerical solver algorithm, a validation of its second-order accurate numerical convergence, and an analysis of its computational performance. We further validated the proposed simulation algorithm by recapitulating known phenomena in active polar fluids, including the active Fréedericksz transition in extensile polar fluids, the transition to active turbulence, and the emergence of coherent rotational motion in a 3D annulus, which was numerically simulated for the first time here. All of these simulations compared the numerical results with experimental observations or theoretical predictions from the literature. Finally, the presented solver was applied to more complex symmetric and asymmetric 3D geometries that have never been considered in the literature so far. This predicted the emergence of chiral vortices in axially symmetric domains, as well as geometry-induced left-right symmetry breaking. Together, these simulations highlighted the versatility of our approach over a range of 3D active polar hydrodynamics problems in different geometries and with different boundary conditions.
Despite these advantages, the present approach also has a number of limitations. The most important one is its high computational cost, with a single simulation time step requiring tens to hundreds of seconds of wall-clock time even on a parallel computer and at low spatial resolution. This high computational cost is partly intrinsic to the use of remeshed DC-PSE and partly limited by the parallel scalability of the third-party PETSc solver used. However, the modular architecture of OpenFPM could in the future be exploited to use alternative solvers after assembling the system matrix in a distributed fashion. This could potentially reduce the computational cost of the simulations and enhance their parallel scalability. Another measure that could reduce the runtime of the simulations would be to perform them on Graphics Processing Units (GPUs). This will depend on the GPU support in OpenFPM being extended to the PDE template expression system and to particle simulations. Furthermore, the presented algorithm was not tested in moving or deforming domains, although we did show it to work in different complex geometries. Finally, the present algorithm is limited to solving bulk active flow problems in 3D. While it can relatively easily be adapted to solving 2D bulk problems by changing the governing equations in the code, extending it to flows on curved surfaces would involve a meshfree Surface DC-PSE method,84 but would first require a number of mathematical questions to be resolved.
Taken together, the present work enables the numerical solution of 3D active polar fluid models by providing a convergence-validated, scalable, and open-source method capable of handling complex geometries and boundary conditions. This opens up new avenues for exploring the rich and at times puzzling physics of active fluids and their use in modeling living systems, including tissue morphogenesis and animal flocking behavior.
ACKNOWLEDGMENTS
This work was funded in parts by the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) under Funding Code 031L0160 (project “SPlaT-DM—computer simulation platform for topology-driven morphogenesis”) and in the framework of the Federal Center for Scalable Data Analytics and Artificial Intelligence, ScaDS.AI, Dresden/Leipzig.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Abhinav Singh and Philipp H. Suhrcke contributed equally to this work.
Abhinav Singh: Conceptualization (equal); Data curation (lead); Formal analysis (equal); Investigation (equal); Methodology (lead); Resources (equal); Software (lead); Supervision (supporting); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (supporting). Philipp Suhrcke: Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Resources (equal); Software (equal); Validation (equal); Visualization (equal); Writing – original draft (lead); Writing – review & editing (supporting). Pietro Incardona: Methodology (supporting); Resources (supporting); Software (equal); Validation (supporting); Writing – review & editing (supporting). Ivo Fabian Sbalzarini: Conceptualization (lead); Formal analysis (supporting); Funding acquisition (lead); Investigation (equal); Methodology (equal); Project administration (lead); Resources (equal); Software (supporting); Supervision (lead); Validation (equal); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (lead).
DATA AVAILABILITY
The computer codes that support the findings of this study are openly available in the 3Dactive-hydrodynamics github repository located at https://github.com/mosaic-group/3Dactive-hydrodynamics, Ref. 85.
APPENDIX A: MOLECULAR FIELD
The parallel component h∥ is defined as the Lagrange multiplier in Eq. (4) enforcing the unit-magnitude constraint ‖p‖ = 1. The perpendicular components h⊥ are then as given above.