This article deals with approximating the steady-state particle-resolved fluid flow around a fixed particle of interest under the influence of randomly distributed stationary particles in a dispersed multiphase setup using a convolutional neural network (CNN). The considered problem involves rotational symmetry about the mean velocity (streamwise) direction. Thus, this work enforces this symmetry using an SE(3)-equivariant (special Euclidean group in three dimensions) CNN architecture, which is equivariant to translations and three-dimensional rotations. This study mainly explores the generalization capabilities and benefits of an SE(3)-equivariant network. Accurate synthetic flow fields for Reynolds number and particle volume fraction combinations spanning the ranges [86.22, 172.96] and [0.11, 0.45], respectively, are produced with careful application of a symmetry-aware data-driven approach.

The availability of big data and high computing power through ever-advancing technology has drawn immense attention in recent times toward the application of data-driven techniques in the field of fluid mechanics.1–3 The highly successful applicability of these methods to various fluid mechanics problems is the main reason behind their evident popularity. Brunton et al.4 discussed many current and emerging developments of machine learning (ML) in fluid mechanics. A significant number of the current ML applications are related to turbulence modeling. Different aspects of data-driven turbulence modeling have been presented in a review by Duraisamy et al.5 The challenges associated with ML-augmented sub-grid models are discussed in Ref. 6. The challenges and opportunities of integrating data science and ML into the aerospace industry are discussed in Ref. 7. Many interesting ML applications are also being pursued in the field of multiphase flows. Computing the curvature of the interface in the volume of fluid (VOF) method using machine learning was carried out in Refs. 8–10. Yi Wan et al.11 presented a machine learning algorithm to predict the motion of bubbles in turbulent flows. Lin et al.12 used the so-called deep operator network (DeepONet13) to predict bubble growth dynamics. Abstract representation of flow physics, when sufficient data are available, and ease of implementation have made neural networks (NNs) the first choice among many ML techniques for fluid applications. The abstract nature of neural networks also means that prior known information, such as the governing equations of the flow and the symmetries present in the flow, is not generally enforced.

Raissi et al.14 introduced physics-informed neural networks (PINNs), which explicitly enforce the governing equations in the loss function. This ensures that the neural network solutions satisfactorily obey the governing equations, and the approach has shown improved generalization capabilities of the trained networks. Application of the PINN approach to high-speed flows has been studied in Ref. 15. A similar approach of including residuals of the governing equations in the loss function was implemented by Subramaniam et al.16 to perform super-resolution of an incompressible forced isotropic homogeneous turbulence problem, and by Jiang et al.17 to super-resolve turbulent flows in the Rayleigh–Bénard convection problem. Of relevance to the present work is the recent work of Siddani et al.18 who used a generative adversarial network (GAN) to predict the flow around a particle in a particle-laden flow given information on the local mean flow and the relative location of its neighbors.

As seen in the above listed applications, the primary use of neural networks in fluid mechanics has been to serve as models, which, when trained well, are designed to accurately predict the desired output quantities based on an input feature vector. For example, in a turbulence closure model, given the local strain-rate tensor of the resolved scale flow, the objective of the NN is to evaluate the sub-grid stress due to the unresolved scales. In a super-resolution problem, given the coarse resolution flow on a plane or a volume as input, the NN is trained to predict a highly resolved flow within the same plane or volume.19,20 In the work of Ref. 18, the convolutional neural network (CNN) was trained to predict the entire velocity and pressure fields within an array of randomly distributed particles only from information on their position and particle Reynolds number.

It is well known that such models must obey certain underlying symmetry and conservation principles, some of which are listed below. (i) Translational invariance: also known as spatial homogeneity, implies that if the input features were to be specified at two different points, planes, or volumes that are shifted by a spatial translation, then the predicted flow quantities in both these situations must be identical. (ii) Rotational equivariance: implies that if the input features were to be specified at two different points, planes, or volumes that differ by a rotation, then the predicted flow quantities in the two situations must be related through the same rotation. Note that under rotation, scalar quantities will remain identically the same (invariance), whereas vector and tensor quantities are only related and will not be identically the same (equivariance). (iii) Reflectional equivariance: implies that if the input features were to be specified at two different points, planes, or volumes that differ by a reflection, then the predicted flow quantities must be related through the same reflection. (iv) Galilean invariance: implies that if input features were to be specified in two different frames of reference that differ by a uniform velocity, then the predicted flow quantities in two frames of reference must be identical. (v) Kinematic conditions: such as boundary conditions at interfaces or the incompressibility condition must be satisfied by the predicted flow quantities. (vi) Conservation laws: of mass, momentum, and energy must also be satisfied by the predicted flow quantities to the extent that they are applicable.

Let us consider the above list of NN requirements in the context of predicting the flow around a reference particle surrounded by a random distribution of neighbors. In this context, translational invariance says that the predicted flow should not depend on where the reference particle is located within a particle-laden flow as long as its neighborhood remains the same. Three-dimensional (3D) CNNs, such as the ones used in Siddani et al.,18 by design satisfy translational invariance along all three directions. Similarly, rotational and reflectional equivariances imply that if the neighborhood around the reference particle were to be arbitrarily rotated or reflected from an initial configuration, the corresponding predicted flows must be accordingly rotated or reflected versions of the solution predicted in the initial configuration. Galilean invariance can be ensured by requiring that the input and output features of the network include only relative velocity information between the reference particle and the ambient flow and between the different particles. The general approach to enforcing kinematic conditions and conservation equations has been to calculate them for the predicted solution; any error in their satisfaction is then applied as an additional loss term in the training of the NN.

Thus, translational invariance, Galilean invariance, kinematic conditions, and conservation equations are satisfied in a CNN by design, as part of the loss function, or through a careful choice of input and output feature vectors. In comparison, enforcement of rotational and reflectional symmetries has generally been in an approximate sense through data augmentation, which tends to increase the cost of the training process. As a result, machine-learned models have been sensitive to the arbitrary coordinate system used in their training process.21 This suggests the importance of building the desired rotational and reflectional symmetries into ML models by carefully designing their architecture. For example, Wang et al.22 incorporated rotational symmetry into data-driven models for two-dimensional (2D) fluid flow systems and their results showed improved accuracy. Development of equivariant neural networks, which strictly enforce different forms of symmetry, is an active area of research.

The goal of this article is to present a 3D rotationally (SO(3), special orthogonal group of dimension 3) equivariant CNN and demonstrate its ability to predict fluid quantities at greater accuracy, especially under conditions of paucity of training data. Furthermore, it will be shown that the resulting symmetry-constrained network is more compact and involves optimization of far fewer network weights during the training process. In fluid mechanical applications, the requirement of rotational equivariance arises in the context of a mixture of scalar (e.g., pressure), vector (e.g., velocity), pseudo-vector (e.g., vorticity), and tensor (e.g., strain-rate, stress) quantities that can be part of the input features or the predicted output. Enforcement of rotational equivariance for a mixture of scalar, vector, and tensor quantities requires planning and will be discussed. These features of the equivariant CNN will be discussed for the prediction of fluid flow around a particle of interest (reference particle) under the influence of its neighboring particles with a strict enforcement of rotational symmetry about the mean-flow direction. The importance of and need for such symmetry-preserving networks in the limit of scarce data will be reasoned and demonstrated through quantitative results of the considered multiphase flow problem. The discussed rotationally equivariant approach is an efficient alternative to the data augmentation process of considering arbitrary rotations, which is a necessity in applications with data paucity.

The remainder of this paper is arranged as follows: Sec. II describes the problem in detail. Section III presents details of the dataset used in the demonstration of the SE(3)-equivariant CNN. Section IV provides an overview of the SE(3)-CNN. The machine learning approach is detailed in Sec. V. The performance and properties of a trained network are evaluated in Sec. VI. Finally, the study is summarized in Sec. VII, which also briefly mentions some future possibilities of the presented methodology.

Consider a large random distribution of particles in which one particle is arbitrarily chosen as the reference particle, shaded red as shown in Fig. 1. The neighbors of the reference particle within a sphere of five diameters in size are shown shaded in gray in Fig. 1. Given the average fluid velocity within this sphere in a frame attached to the reference particle (indicated by the velocity vector), here we are interested in predicting the flow around the reference particle, which is influenced by the presence of neighbors. As shown in Siddani et al.,18 the velocity and pressure fields within a considered volume are first predicted, and the prediction can then be improved in the region close to the reference particle.

FIG. 1. Visual depiction of equivariance of flow systems.

An example of such a velocity field plotted on the central yz plane of this domain is shown in the bottom left corner. The contour represents u (the velocity component along the x direction) and the arrows represent the in-plane 2D vector created using v and w, the velocity components along y and z, respectively. The white circular patches of different sizes correspond to the cross sections of neighboring particles that intersect with this plane. This domain will be referred to as the "original configuration" in the discussion that follows.

Now consider a new particle configuration referred to as the rotated configuration, shown in the upper right hand side of the figure, which corresponds to the “original configuration” rotated about the x-axis by 90° in the clockwise direction. Note that the average flow velocity within the sphere, which is also part of the input feature along with the location of the neighboring particles, is also correspondingly rotated. The velocity field of the “rotated configuration” in the central yz plane is shown in the bottom right image of the figure. Rotational equivariance implies that the resulting velocity field in the rotated configuration must be related to that of the original configuration by a rotational transformation, which is evident in Fig. 1. As the axis of rotation in this example is x-axis, the velocity component u contours are simply rotated about the center, while the in-plane velocity vectors undergo vectorial transformation.

The present work makes use of the 3D Steerable CNN developed by Weiler et al.23 3D Steerable CNNs, which operate on regular 3D grids, are equivariant to 3D rotations and translations, i.e., these neural networks are SE(3)-equivariant. Since the translational invariance property of a regular CNN is well established,23 here we will discuss mainly the structure of the network that enforces rotational equivariance (i.e., SO(3)-equivariance). We note that in Fig. 1 rotation about the x axis is presented only as an example, and the equivariance equally applies to rotations about any other axis. In the remainder of this paper, such a CNN will be referred to as SE(3)-CNN.

The generalization capability of a fairly complex neural network increases with the amount of training data. Since the amount of raw 3D data available for training from fluid mechanical simulations and experiments is not usually very large, data augmentation opportunities are often exploited to increase the training dataset size. Performing discrete rotations and reflections of the training data is a common method to increase the total sample size.18,24 The number of discrete rotations that can be performed as part of data augmentation can be arbitrarily high. However, the total training time of the network increases with increasing data augmentation. The inherent SE(3)-equivariance property of an SE(3)-CNN ensures that it achieves the desired generalization capabilities without actually performing data augmentation through discrete rotations. In fact, an SE(3)-CNN enforces continuous 3D rotations, and hence, enjoys a superior generalization capability than that obtained through a discrete rotational data augmentation procedure.

The present focus on predicting the flow around the reference particle is inspired by the need to predict the force on the particle more accurately, taking into account the fluid-mediated influence of its neighbors. There have been other ML efforts that have explored different networks to obtain the force on a particle. He and Tafti25 used an artificial neural network (ANN) to predict the drag force on a particle in a system of randomly distributed, monodispersed, stationary, spherical particles. The inputs to the ANN were the Reynolds number (Re) of the flow, the particle volume fraction (ϕ), and the relative positions of the 15 nearest neighbors, with the drag force being the required output. Muralidhar et al.26 improved this approach through the incorporation of pressure and velocities at discrete points around the particle's surface, and shear and pressure forces acting on the particle, as outputs of the network's intermediate layers. These physically meaningful intermediate variables were optimized through the inclusion of mean squared error (MSE) terms with respect to corresponding PR-DNS values in the loss function. Moore et al.27 formulated a method that combined the pairwise-interaction extended point-particle (PIEP28) framework with a data-driven approach, based on non-linear regression, to obtain particle forces.

The superposition of the physics-based PIEP model and the data-driven approach enabled this method to achieve accurate predictions for both low and high particle volume fraction cases. Furthermore, Moore and Balachandar29 introduced the superposable wake method which, like the present SE(3)-CNN, also attempts to predict the flow around a reference particle under the perturbing influence of neighbors. While the SE(3)-CNN simultaneously takes into account the influence of all the neighbors within the surrounding neighborhood, the superposable wake method accounts for the effect of neighbors taken one at a time using pairwise superposition. Nevertheless, the physics-based superposable approach by construction automatically satisfies the SE(3) symmetry of the problem. In the case of a CNN, by contrast, SE(3)-equivariance is not automatically guaranteed and must be enforced by special construction of the network. An important takeaway from all of the above-mentioned works is that it is vital, or even mandatory, to incorporate flow physics into ML models with the inclusion of all symmetries, invariances, and conservation laws in order to get high quality results.

The present objective is to recreate the particle-resolved flow around a particle of interest under the influence of its neighbors using the SE(3)-CNN when the geometric location of all particles, the volume-averaged Reynolds number (Re), and the mean-flow direction are given as input information (Fig. 2). These input quantities are readily available in a standard Euler–Lagrange simulation.30 The neighboring particles are randomly distributed across the entire domain with uniform probability and the reference particle is considered at the center of the domain. Furthermore, all particles are spherical in shape with the same diameter (monodispersed) and they are also considered to be stationary. It is worth mentioning that the particle volume fraction (ϕ) for the domain can be calculated since the locations and total number of particles are given.

FIG. 2. Objective: Predict flow fields around a particle of interest under the influence of its neighbors when the volume-averaged Reynolds number, mean-flow direction, and location of the particles are given.

Particle-resolved direct numerical simulation (PR-DNS) data are used to train and test the network. The current work utilizes the results presented in Refs. 27 and 31. These simulations are performed in a cubic domain. The diameter of the particles d* is chosen as the length scale for non-dimensionalization, thereby making the non-dimensional diameter equal to unity. The non-dimensional length of each side of the cubic domain is 3π. The domain is subjected to periodic boundary conditions along the x and y directions. Symmetric, no-stress boundary conditions are applied along the z direction. A pressure gradient is applied such that the mean flow (streamwise direction) of the entire computational domain is along the x direction, although the mean flow averaged over smaller boxes around each particle could slightly deviate from this entire-domain average. The non-dimensional form of the incompressible Navier–Stokes equations is solved to obtain the PR-DNS data. The different simulated cases can be classified using two macroscale parameters, namely, the particle volume fraction (ϕ) and the particle Reynolds number (Re). Table I shows all the cases considered in this study.

TABLE I.

The Reynolds number and volume fraction of the different cases considered, the corresponding RMS values of u,v,w,p and the size of training (Ntr), validation (Nval), and testing (Nte) datasets.

Case  Re      ϕ     Realizations  σu      σv      σw      σp      Ntr  Nval  Nte
1     172.96  0.11  10            0.4968  0.1947  0.1960  0.1973  372   46    46
2      86.22  0.21  …             0.5960  0.2580  0.2609  0.3807  450   90    90
3     114.60  0.45  …             0.8006  0.4340  0.4421  1.0467  579  193   193

The analysis of perturbation force maps obtained using the superposable wakes (for the Table I cases) by Balachandar et al.32 indicates that the influence of neighbors that are three or four diameters away from the reference particle is practically zero. This supports the concept of considering a local box around the reference particle, which encompasses the region of highly influential neighbors, to obtain the flow field around it. Here, we consider a local box of non-dimensional size 5×5×5 with the reference particle at its center, and the flow within it is resolved with a grid of 64×64×64 points. This consideration of local volume is the same as that used in Siddani et al.,18 and the process of creating these data samples from PR-DNS27,31 is thoroughly detailed therein. The total number of data samples is partitioned into training, validation, and testing samples. For the three cases listed in Table I, the number of training, validation, and testing data samples are presented. Although the data set contains only O(1000) reference particles, the number of grid points available for training the prediction of the flow inside is far greater.

Normalized perturbations of the velocity and pressure fields (u′, p′), defined below, are the expected outputs from the SE(3)-CNN,

\mathbf{u}' = \frac{\mathbf{u} - \langle \mathbf{u} \rangle}{\left| \langle \mathbf{u} \rangle \right|}, \qquad p' = \frac{p - \langle p \rangle}{3\,\sigma_p}.
(1)

Here, ⟨·⟩ denotes the volume average over the sample and σp is an estimate of the rms pressure obtained using the formula presented in Ref. 18.
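For concreteness, a minimal sketch of this output normalization is given below, assuming the fields are stored on the 64³ sample grid and that ⟨·⟩ is the volume average over the fluid region of the sample; the array and function names are illustrative and not those used in the original implementation.

```python
import numpy as np

def normalize_outputs(u, p, indicator, sigma_p):
    """Normalized perturbation fields of Eq. (1).

    u: (3, N, N, N) velocity field, p: (N, N, N) pressure field,
    indicator: (N, N, N) fluid indicator I_f (1 in fluid, 0 inside particles),
    sigma_p: rms-pressure estimate.
    """
    fluid = indicator > 0.5
    u_mean = np.array([u[i][fluid].mean() for i in range(3)])   # volume-averaged velocity <u>
    p_mean = p[fluid].mean()                                    # volume-averaged pressure <p>
    u_prime = (u - u_mean[:, None, None, None]) / np.linalg.norm(u_mean)
    p_prime = (p - p_mean) / (3.0 * sigma_p)
    return u_prime, p_prime
```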

This work makes use of the framework e3nn33,34 that is built on top of PyTorch35 to implement the SE(3)-CNN architecture used to produce the results presented in this article. The reader is referred to Weiler et al.,23 Thomas et al.,36 and references therein for a comprehensive understanding of SE(3)-CNNs. Here we will provide only the essential information needed for the present purposes.

Consider a feature vector field q(x), where the feature vector may consist of several scalar, vector, and tensor quantities. For example, if the feature is made up of the Reynolds number, pressure, velocity, and stress tensor, the length of the feature vector is 14, corresponding to 2 scalars, 1 vector, and 1 second-rank tensor. Each element of the feature vector is a three-dimensional field (the Reynolds number is also defined as a constant field). We now define the set of all possible rotations about the origin to be the group G. Let us consider the action of a rotation g ∈ G on the feature vector field. A post-rotation feature vector at point x corresponds to a pre-rotation feature vector at the point y = R⁻¹(g)x, where R(g) is the 3 × 3 rotation matrix corresponding to the rotational group element g. Furthermore, after rotation, (i) a scalar element of the feature vector, p, remains invariant and becomes ρ₀(g)p, where ρ₀(g) = 1, (ii) a vector u transforms as ρ₁(g)u, where ρ₁(g) = R(g), and (iii) a second rank tensor σ transforms as ρ₂(g)σ, where ρ₂(g) = R(g) ⊗ R(g) is a 9 × 9 matrix formed by the Kronecker product, ⊗, of the rotation matrix with itself (note that the second rank tensor has been unrolled into a 9-element long vector). Since the rotational transformation of each scalar, vector, and tensor component is independent, the rotational transformation of the feature vector field can be expressed as

\left[\pi(g)\, q\right](\mathbf{x}) = \rho(g)\, q\!\left(R^{-1}(g)\,\mathbf{x}\right),
(2)

where ρ corresponds to the rotational representation of G for the entire feature vector, which can be expressed as

\rho(g) = \bigoplus_{i=0}^{I} \bigoplus_{j=1}^{n_i} \rho_i(g),
(3)

where ⊕ corresponds to the construction of a block diagonal matrix with n_i copies of the matrix operators ρ_i for i = 0, 1, …, I. In the above notation, the feature vector q contains n_0 scalars, n_1 vectors, n_2 second rank tensors, and so on up to n_I Ith-rank Cartesian tensors, and therefore is of length N = Σ_{i=0}^{I} n_i 3^i. In (2), π(g) is the map that appropriately transforms the feature vector q at the point y to ρ(g)q at point x.
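As a concrete illustration of Eqs. (2) and (3), the short NumPy sketch below assembles the block-diagonal representation ρ(g) for the 14-element example feature vector mentioned above (2 scalars, 1 vector, 1 second-rank tensor) and verifies that the Kronecker-product block reproduces the usual tensor transformation rule. This is only an illustration and not part of the network implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.linalg import block_diag

R = Rotation.random().as_matrix()                 # a random 3x3 rotation matrix R(g)

rho0 = np.eye(1)                                  # scalar block, rho_0(g) = 1
rho1 = R                                          # vector block, rho_1(g) = R(g)
rho2 = np.kron(R, R)                              # second-rank tensor block, 9x9

rho = block_diag(rho0, rho0, rho1, rho2)          # 2 scalars + 1 vector + 1 tensor -> 14x14

# Example feature vector: [Re, p, u (3 components), sigma flattened row-wise (9 components)]
Re, p = 100.0, 0.3
u = np.random.randn(3)
sigma = np.random.randn(3, 3)
q = np.concatenate([[Re, p], u, sigma.ravel()])

q_rot = rho @ q                                   # transform the whole feature vector at once

# The scalar, vector, and tensor blocks agree with the familiar transformation rules
assert np.allclose(q_rot[:2], [Re, p])
assert np.allclose(q_rot[2:5], R @ u)
assert np.allclose(q_rot[5:].reshape(3, 3), R @ sigma @ R.T)
```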

Now that rotational representations of arbitrary feature vectors have been presented, we can mathematically define the rotational equivariance of a CNN that takes in an input feature vector q to produce the desired output feature vector s. A rotationally equivariant CNN can be considered as a functional relation F between the input and the desired output feature vectors that changes deterministically under the operation of 3D rotational group elements. In the present problem, the input feature vector q consists of a particular configuration of the relative locations of P neighbors within the domain (P vectors), the particle Reynolds number (a scalar), and the unit vector along the average relative fluid velocity (a vector). There are infinitely many possible neighbor configurations, particle Reynolds numbers, and relative velocity orientations, which together define the set Q, and q ∈ Q. Let the flow field corresponding to the input feature q be s(x), and the set of flow fields of all input feature vectors be S. From this definition, s ∈ S. Thus, the quest to predict the flow given an input feature vector can be expressed as a functional relation F: Q → S. The functional relation F is said to be equivariant to an element g ∈ G if

F\!\left(\pi_Q(g)\, q\right) = \pi_S(g)\, F(q),
(4)

where π_Q: Q → Q is the set of rotational representations of G on Q and π_S: S → S is the set of rotational representations of G on S.37
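In practice, condition (4) can be spot-checked numerically. The sketch below, a hedged illustration rather than the authors' test, checks equivariance of a model that maps a scalar field and a vector field on a 64³ grid to a vector field and a scalar field, using a 90° rotation about the x axis, which is exact on the grid; `model` is assumed to be any callable with that input/output signature.

```python
import torch

def rot90_scalar(f):
    """Rotate a scalar field f[X, Y, Z] by +90 deg about the x axis."""
    return torch.rot90(f, k=1, dims=(1, 2))

def rot90_vector(v):
    """Rotate a vector field v[3, X, Y, Z] with components (u, v, w) by +90 deg about x."""
    u, vy, vz = (rot90_scalar(v[i]) for i in range(3))
    return torch.stack([u, -vz, vy])            # (u, v, w) -> (u, -w, v)

def check_equivariance(model, c1, u_hat, tol=1e-4):
    """model(c1, u_hat) -> (vector field [3, N, N, N], scalar field [N, N, N])."""
    vec, scal = model(c1, u_hat)
    vec_r, scal_r = model(rot90_scalar(c1), rot90_vector(u_hat))
    err_vec = (vec_r - rot90_vector(vec)).abs().max()
    err_scal = (scal_r - rot90_scalar(scal)).abs().max()
    return bool(err_vec < tol and err_scal < tol)
```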

The above definition given in (4) relates to the rotational equivariance of the relationship between the overall input and output features of the CNN. However, in a deep neural network, the input and output features are separated by many intermediate layers. It can be mathematically shown that for the overall CNN to be rotationally equivariant, each layer of the CNN must be equivariant. Several linear and non-linear layers put together form a constitutive block of a CNN. The following three steps are essential in order to guarantee the rotational equivariance of each block of the CNN:

  • The input and output feature vectors of each block must be irreducible representations of the SO(3) group. Therefore, in the input feature vector of the CNN (which is the input to the very first layer) Cartesian fields such as pressure, velocity, vorticity and stress/strain tensors must be transformed from raw quantities to irreducible representations. Similarly, the output feature vector of the CNN must be synthesized from the irreducible representation output of the last block.

  • The convolution operation performed in each block must be a rotationally equivariant linear map. This will guarantee that the functional relationship between the input and output of the block satisfies rotational equivariance similar to that given in (4).

  • Non-linearities included in a block must also be rotationally equivariant, which implies that the non-linearities must be carefully applied to each irreducible representation rather than to individual elements of the feature vector.

Each of these aspects will be elaborated below.

The SO(3) group contains all possible 3D rotations about the origin as group elements. Tensor field physical properties, such as pressure, velocity, and strain can be reduced further into representations known as irreducible representations of SO(3). These irreducible representations cannot be decomposed further into a direct sum of representations.

A scalar quantity like pressure is an order-0 irreducible representation as it is invariant under rotational transformation. Vectors and pseudo-vectors, such as velocity and vorticity, respectively, are order-1 irreducible representations as there is intertwining of the three Cartesian components when subjected to an arbitrary rotation, and therefore, cannot be broken down further. Cartesian tensors of rank greater than 1 can be decomposed into a summation of irreducible representations. Decomposition of a second rank Cartesian tensor, σ, has been shown below as an example,

\boldsymbol{\sigma} = \underbrace{\frac{\operatorname{tr}(\boldsymbol{\sigma})}{3}\,\mathbf{I}}_{\text{Order-0}} + \underbrace{\frac{\boldsymbol{\sigma} - \boldsymbol{\sigma}^{T}}{2}}_{\text{Order-1}} + \underbrace{\left(\frac{\boldsymbol{\sigma} + \boldsymbol{\sigma}^{T}}{2} - \frac{\operatorname{tr}(\boldsymbol{\sigma})}{3}\,\mathbf{I}\right)}_{\text{Order-2}},
(5)

where tr(σ) corresponds to the trace of σ. In (5), σ has been reduced into the three different terms shown on the right hand side of the equation. The first term, which depends only on the trace of σ, is a scalar (order-0 representation). The second term, the anti-symmetric part of σ, is an order-1 irreducible representation. The traceless symmetric part of σ (third term on the right hand side) has five independent elements and is an order-2 irreducible representation. Therefore, a second rank Cartesian tensor has been decomposed into a summation of independent irreducible representations of order-0, 1, and 2, which have 1, 3, and 5 elements, respectively. In general, a rank-l Cartesian tensor can be reduced into a summation of irreducible representations of order-l and lower. An irreducible representation of order-l, where l = 0, 1, 2, …, has a dimension of 2l + 1, i.e., 2l + 1 elements. Upon applying this transformation to the irreducible form, the feature vector q̂ can be rearranged as a concatenation of J irreducible components as

\hat{q} = \bigoplus_{j=0}^{J} \hat{q}_j,
(6)

where the jth component q̂_j is of order-l(j) and thus its length is (2l(j) + 1). Therefore, the total length of q̂ is Σ_{j=0}^{J} (2l(j) + 1).
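The decomposition in Eq. (5) is easy to verify numerically. The sketch below splits a random second-rank tensor into its order-0, order-1, and order-2 parts and checks that each part transforms within itself under σ → R σ Rᵀ; it is purely illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def decompose(sigma):
    """Split a 3x3 tensor into isotropic, antisymmetric, and traceless-symmetric parts (Eq. 5)."""
    iso = np.trace(sigma) / 3.0 * np.eye(3)                  # order-0 (1 independent element)
    anti = 0.5 * (sigma - sigma.T)                           # order-1 (3 independent elements)
    dev = 0.5 * (sigma + sigma.T) - iso                      # order-2 (5 independent elements)
    return iso, anti, dev

sigma = np.random.randn(3, 3)
iso, anti, dev = decompose(sigma)
assert np.allclose(iso + anti + dev, sigma)                  # the three parts sum back to sigma

R = Rotation.random().as_matrix()
iso_r, anti_r, dev_r = decompose(R @ sigma @ R.T)
# Each irreducible part of the rotated tensor is the rotated version of the original part,
# i.e., the parts do not mix under rotation.
assert np.allclose(iso_r, R @ iso @ R.T)
assert np.allclose(anti_r, R @ anti @ R.T)
assert np.allclose(dev_r, R @ dev @ R.T)
```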

Correspondingly, the rotational representation ρ can also be written in terms of irreducible representations. Such irreducible rotational representations of order-0, 1, and 2 are

\rho_0(g) = D_0(g), \qquad \rho_1(g) = D_1(g), \qquad \rho_2(g) = T_2^{-1}\left[\bigoplus_{m=0}^{2} D_m(g)\right] T_2,
(7)

where Dm(g) is the irreducible representation of G at order-m. It is also known as the Wigner-D matrix of order-m and it is of size (2m+1)×(2m+1). Thus, the 9 × 9 block representation matrix ρ2 can be further decomposed to block representations of size 1 × 1, 3 × 3 and 5 × 5. Each of these irreducible rotational representations does not mix and therefore can be considered independently. Here T2 is the change of basis matrix that decomposes a second rank tensor into its order-0, order-1, and order-2 irreducible representations. Also, T21 is the inverse that will assemble the order-0, order-1, and order-2 irreducible representations back into a second rank tensor.

We note that the decomposition operator Ti needs to be applied only once for the input feature vector of the entire CNN, and the resulting irreducible feature vector is the input to the first layer. After that, all layer input and output features are in terms of irreducible feature vectors. Finally, the output of the last layer is assembled back with the change of basis Ti1 to obtain the desired output feature vector. Hence, all further discussion will assume the input and output feature vectors of each layer to be in irreducible form.

It must be emphasized that the rotationally equivariant convolution operation of SE(3)-CNN, explained in Subsection IV D, relies on the tensor product of irreducible representations36 involving the convolution kernel at each block of the CNN. This explains the reason behind the input and output features vectors of each block being irreducible representations, since only with such an irreducible input feature vector the convolution operation can guarantee rotational equivariance.

Two irreducible representations of different orders can be combined together using the "tensor product of irreducible representations," ⊗_Irreps, to produce new irreducible representations of different orders. Let us consider two irreducible representations, namely, q̂_1 and q̂_2 of orders l_1 and l_2, respectively. Then the "tensor product of irreducible representations" between q̂_1 and q̂_2 gives rise to an irreducible representation q̂_3 of order-l_3 for |l_1 − l_2| ≤ l_3 ≤ (l_1 + l_2). The tensor product producing each output irreducible representation will be denoted by q̂_1 ⊗_Irreps(l_1, l_2, l_3) q̂_2 = q̂_3 and is given by

\hat{q}_{3,k}(\mathbf{x}) = \sum_{i=1}^{2l_1+1} \sum_{j=1}^{2l_2+1} C^{(l_1, l_2, l_3)}_{i,j,k}\, \hat{q}_{1,i}(\mathbf{x})\, \hat{q}_{2,j}(\mathbf{x}), \qquad \text{for } k = 1, 2, \ldots, 2l_3 + 1,
(8)

where C^(l_1,l_2,l_3)_{i,j,k} are the (real) Clebsch–Gordan coefficients. The indices i and j run along the dimensions of q̂_1 and q̂_2, respectively, and similarly, k runs along the length of q̂_3. Furthermore, the "tensor product of irreducible representations" is equivariant.36

It can be seen from (8) that the "tensor product of irreducible representations" can be broken down into two steps. The first step is the outer product between q̂_1 and q̂_2, (q̂_1 ⊗_outer q̂_2)_{i,j} = q̂_{1,i} q̂_{2,j}. The resulting (2l_1 + 1) × (2l_2 + 1) matrix is reshaped into a column matrix of size (2l_1 + 1)(2l_2 + 1) × 1. Second, the Clebsch–Gordan coefficients can also be appropriately arranged into a matrix of size (2l_3 + 1) × (2l_1 + 1)(2l_2 + 1). Matrix multiplication between these two matrices generates q̂_3 of dimension 2l_3 + 1. Essentially, the Clebsch–Gordan coefficients form a change of basis matrix that is utilized in creating the irreducible representation q̂_3 from the outer product of q̂_1 and q̂_2. Note that this two-step operation can be carried out only for |l_1 − l_2| ≤ l_3 ≤ (l_1 + l_2). This two-step perspective is useful in appreciating the similarity between the conventional-convolution and equivariant-convolution operations discussed in Subsections IV C–IV D.
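As a concrete, low-order example of the tensor product of irreducible representations, the sketch below combines two order-1 inputs (l1 = l2 = 1) into an order-0 and an order-1 output using Clebsch–Gordan-like coefficients; up to normalization and sign conventions these outputs reduce to the familiar dot and cross products, and the final assertions check the equivariance property of Eq. (8). This is an illustration, not the e3nn implementation used in the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Clebsch-Gordan-style change-of-basis coefficients (real basis, up to normalization):
# (1 x 1 -> 0): C[i, j, 0] ~ delta_ij   -> dot product
# (1 x 1 -> 1): C[i, j, k] ~ eps_ijk    -> cross product
C_110 = np.eye(3).reshape(3, 3, 1) / np.sqrt(3.0)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0
C_111 = eps / np.sqrt(2.0)

def tensor_product(q1, q2, C):
    """Eq. (8): contract the outer product q1_i q2_j with the coefficients C_ijk."""
    return np.einsum("i,j,ijk->k", q1, q2, C)

a, b = np.random.randn(3), np.random.randn(3)
out0 = tensor_product(a, b, C_110)            # order-0 output, proportional to a . b
out1 = tensor_product(a, b, C_111)            # order-1 output, proportional to a x b

R = Rotation.random().as_matrix()
# Equivariance: rotating the inputs rotates the outputs with their own representations
assert np.allclose(tensor_product(R @ a, R @ b, C_110), out0)          # scalar is invariant
assert np.allclose(tensor_product(R @ a, R @ b, C_111), R @ out1)      # vector rotates with R
```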

The convolution operation can be described as the most important mathematical operation in a CNN. The layer performing the convolution operation is known as a convolutional layer. The convolution operations performed in a conventional CNN and SE(3)-CNN will be described to highlight their differences. Again, in both cases, the convolution operation needs to be described for only one block of the neural network, since this process is repeated for all blocks of the CNN.

In the conventional CNN, the input feature vector, denoted as q, is a vector of length N_LI consisting of a stacked list of n_0^LI zeroth, n_1^LI first, and n_2^LI second rank tensors. Similarly, let the output feature vector of the layer, s, be made up of n_0^LO zeroth, n_1^LO first, and n_2^LO second rank tensors with a total length of N_LO (here subscripts LI and LO stand for layer-input and layer-output). Let us also assume that a convolution operation is applied to map the input features to the output features. Then, the convolution operation performed in a conventional CNN is as follows:

s_i(\mathbf{x}_1) = \int_{\mathbb{R}^3} \sum_{j=1}^{N_{LI}} \kappa_{ij}(\mathbf{x}_2 - \mathbf{x}_1)\, q_j(\mathbf{x}_2)\, d\mathbf{x}_2, \qquad \text{for } i = 1, 2, \ldots, N_{LO}.
(9)

Here, x_1 is the center of the convolution operation, κ_ij(x_2 − x_1) is the continuous learnable kernel, and x_2 is the dummy variable spanning the 3D space (ℝ³). The index i runs along the dimension of the output feature vector s and j moves along the dimension of the input feature vector q. The learnable kernel takes the relative position about x_1 as input and contains parameters that can be optimized so that q can be linearly mapped to s. The use of relative location ensures that the CNN preserves translational invariance. The kernel is a matrix operator of size N_LO × N_LI and it is a component of the matrix multiplication (Σ_j κ_{i,j} q_j) of the convolution operation. The kernel can be recognized to have a block structure, where each block relates one of the zeroth, first, or second rank features of the input vector with one of the zeroth, first, or second rank features of the output vector. Thus, each block is of size 3^k × 3^m, where k and m are the ranks of the output and input feature, respectively.

Though the convolution operation is shown in continuous space, during implementation the space is discretized and the information is only available at discrete grid points. Similarly, the size of the kernel along each direction is chosen so that only the points that fall within the kernel size about the center of convolution are considered and not the entire space.

The conventional-convolution operation given in (9), when considered as a functional relation between q and s, does not satisfy the equivariance condition stated in (4). Since the input and output feature vectors transform according to their representations, the convolution operation that maps the input features to the output features can also be expressed in terms of equivariant sub-kernel operations, when expressed in irreducible form. The equivariant sub-kernels used in the convolution operation are constrained to be built from (real) spherical harmonics (see Ref. 23 for proof). Spherical harmonics are irreducible representations of the SO(3) group. Moreover, it is known that the “tensor product of irreducible representations” is an irreducible representation itself and it is equivariant as well.36 Thus, the decomposition of the input and output Cartesian fields into irreducible representations will now ensure that the convolution operation between them can be built using equivariant sub-kernels and tensor product of irreducible representations.

In the SE(3)-CNN, the input feature vector of a layer will be denoted as q̂, and it is in irreducible form. It is also of length N_LI but is made up of a stacked list of J_LI irreducible components. Similarly, the output feature vector ŝ is made up of J_LO irreducible components, with a total length of N_LO. The SE(3)-equivariant convolution operation performed on q̂ in a single layer of the network to obtain the output feature vector ŝ can be expressed as

\hat{s}_k(\mathbf{x}_1) = \int_{\mathbb{R}^3} \sum_{j=1}^{J_{LI}} \eta_{k,j}(\mathbf{x}_2 - \mathbf{x}_1) \otimes_{\mathrm{Irreps}\left(L,\, l_{LI}(j),\, l_{LO}(k)\right)} \hat{q}_j(\mathbf{x}_2)\, d\mathbf{x}_2, \qquad \text{for } k = 1, 2, \ldots, J_{LO},
(10)

where q̂_j denotes the jth irreducible component of the input feature, of order-l_LI(j), and ŝ_k corresponds to the kth irreducible component of the output feature, of order-l_LO(k). Here, η_{k,j} represents the list of all irreducible representations that preserve the equivariance between the input feature, q̂_j, and the output feature, ŝ_k. Finally, the variable L corresponds to the order of each individual representation of η_{k,j}. The symbol ⊗_Irreps(L, l_LI(j), l_LO(k)) denotes that each individual irreducible representation of η_{k,j} is tensor-producted with q̂_j to produce an irreducible representation of order-l_LO(k).

The equivariance preserving list of irreducible representations, ηk,j, is given by (11).

\eta_{k,j}(\mathbf{x}_2 - \mathbf{x}_1) = \bigoplus_{L = |l_{LO}(k) - l_{LI}(j)|}^{\left(l_{LO}(k) + l_{LI}(j)\right)} \eta_{L,k,j}\!\left(\left|\mathbf{x}_2 - \mathbf{x}_1\right|\right)\, Y_L\!\left(\frac{\mathbf{x}_2 - \mathbf{x}_1}{\left|\mathbf{x}_2 - \mathbf{x}_1\right|}\right),
(11)

where ⊕ denotes concatenation of the different irreducible representations and Y_L = (Y_L^{−L}, …, Y_L^{L}) ∈ ℝ^{2L+1} corresponds to the (real) spherical harmonics of order L. Unlike the traditional convolution operation, the learnable part of η_{k,j}, given by η_{L,k,j}, is limited to take only the radial distance (|x_2 − x_1|) between the points x_1 and x_2 as input. The polar (θ) and azimuthal (Φ) (spherical coordinate system) dependence, i.e., the dependence on (x_2 − x_1)/|x_2 − x_1|, is restricted to the (real) spherical harmonics Y_L(θ, Φ) ∈ ℝ^{2L+1}. The limits |l_LO(k) − l_LI(j)| ≤ L ≤ (l_LO(k) + l_LI(j)) denote the range of spherical harmonics that can lead to an irreducible representation of order-l_LO(k).

The following steps show how the different irreducible representations of η_{k,j} can be combined together. The total length of η_{k,j} is given by Σ_{L=|l_LO(k)−l_LI(j)|}^{(l_LO(k)+l_LI(j))} (2L + 1) = (2l_LO(k) + 1)(2l_LI(j) + 1). It has been explained earlier that the Clebsch–Gordan coefficients involved in ⊗_Irreps can be assembled into a change of basis matrix. Similarly, the coefficients involved in all the different ⊗_Irreps between η_{k,j} and q̂_j can also be arranged into a change of basis matrix T_CG, which has a size of (2l_LO(k) + 1)(2l_LI(j) + 1) × (2l_LO(k) + 1)(2l_LI(j) + 1). Matrix multiplication between T_CG and η_{k,j} produces a column vector. The column vector can be rearranged into a matrix of size (2l_LO(k) + 1) × (2l_LI(j) + 1), which can be denoted as κ̂_{k,j}. Similar to the block form that exists in a conventional-convolution operation, this sub-kernel block κ̂_{k,j} acts between the jth input irreducible feature and the kth output irreducible feature. This also explains the size of the block being (2l_LO(k) + 1) × (2l_LI(j) + 1), as the output feature is of size (2l_LO(k) + 1) and the input feature is of size (2l_LI(j) + 1).
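The structure of Eq. (11) can be illustrated with a short sketch that builds one sub-kernel component on a discrete 5×5×5 kernel grid: a learnable radial profile evaluated at the grid-point radii, multiplied by the real spherical harmonics of the allowed order L. Only L = 1 is shown, for which the real spherical harmonics are, up to normalization and component-ordering conventions, the Cartesian components of the unit vector; this is a simplified illustration rather than the e3nn kernel used in the paper.

```python
import numpy as np

def kernel_component_L1(radial_profile, size=5):
    """One eta_{L,k,j} Y_L term of Eq. (11) for L = 1 on a size^3 kernel grid.

    radial_profile: callable mapping radius -> scalar weight (standing in for the learnable part).
    Returns an array of shape (size, size, size, 3): the three L = 1 harmonic components
    scaled by the radial profile at each kernel offset.
    """
    offsets = np.arange(size) - (size - 1) / 2.0
    X, Y, Z = np.meshgrid(offsets, offsets, offsets, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    r_safe = np.where(r > 0, r, 1.0)                    # avoid division by zero at the center
    # Real order-1 spherical harmonics ~ unit-direction components (convention-dependent order)
    Y1 = np.stack([X, Y, Z], axis=-1) / r_safe[..., None] * np.sqrt(3.0 / (4.0 * np.pi))
    Y1[r == 0] = 0.0                                    # the L = 1 harmonic is undefined at r = 0
    return radial_profile(r)[..., None] * Y1

# Example usage with a simple Gaussian radial profile standing in for the learnable part
kernel = kernel_component_L1(lambda r: np.exp(-r**2 / 2.0))
print(kernel.shape)        # (5, 5, 5, 3)
```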

In traditional neural networks, the non-linearity using an activation function (F) is applied elementwise to all of its input features. However, such a non-linearity cannot be applied in SE(3)-CNN as it compromises the equivariance property. An activation function can be applied elementwise for a scalar feature but cannot be applied elementwise for a non-scalar irreducible feature of SO(3) as it ignores the intertwining of the elements of an irreducible component. Thus, non-linearity is applied with the following two equivariant methodologies for non-scalar irreducible features.

1. Norm-based activation

The norm of a non-scalar irreducible feature, ŝk, of order-lLO(k) is given by

\left\| \hat{s}_k \right\| = \sqrt{\sum_{i=1}^{2 l_{LO}(k) + 1} \left( \hat{s}_{k,i} \right)^2},
(12)

which does not change when ŝk is subjected to a rotation or translation. As the norm is a scalar, the non-linearity is applied to the norm and then this value is multiplied with the irreducible feature, ŝk. Multiplying any irreducible feature with a scalar is equivariant. This activation36 is summarized as (13).

\text{Norm-based activation} = \begin{cases} F(\hat{s}_k) & \text{if } \hat{s}_k \text{ is a scalar irreducible feature}, \\ F\!\left(\left\|\hat{s}_k\right\|\right) \hat{s}_k & \text{if } \hat{s}_k \text{ is a non-scalar irreducible feature}. \end{cases}
(13)
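A minimal PyTorch sketch of the norm-based activation of Eqs. (12) and (13) is given below; `feat` is assumed to carry a single irreducible feature with its (2l + 1) components in the last dimension, and F is the chosen activation function.

```python
import torch

def norm_based_activation(feat, l, F=torch.sigmoid):
    """Apply Eq. (13) to one irreducible feature with components in the last dimension."""
    if l == 0:
        return F(feat)                                   # scalars: ordinary elementwise activation
    norm = feat.norm(dim=-1, keepdim=True)               # Eq. (12): rotation-invariant norm
    return F(norm) * feat                                # scale the feature by F(||s_k||)
```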

2. Gated non-linearity

The gated activation23 is also built on the same principle that multiplying any irreducible feature with a scalar is equivariant. Generally, non-linearity in CNNs is applied after a convolution operation. This method takes advantage of the previous convolution operation to produce additional scalar fields termed gates. As these gates are produced using SE(3)-convolution they are equivariant. The non-linearity is applied to these scalar gates and then multiplied with non-scalar irreducible features. Thus, the number of gates produced is equal to the number of non-scalar irreducible features.

Let us consider that J_s^LO scalar features, {ŝ_k, k = 1, 2, …, J_s^LO}, J_LO − J_s^LO non-scalar irreducible features, {ŝ_k, k = J_s^LO + 1, …, J_LO}, and J_LO − J_s^LO scalar gates, {γ_k, k = J_s^LO + 1, …, J_LO}, are produced as output features of an SE(3)-convolution process. Then the gated activation is applied as follows:

\text{Gated activation} = \begin{cases} F(\hat{s}_k) & k = 1, 2, \ldots, J_s^{LO}, \\ F(\gamma_k)\, \hat{s}_k & k = J_s^{LO} + 1, \ldots, J_{LO}. \end{cases}
(14)

Hence, after the application of the gated non-linearity, the total number of features is reduced back to JLO.
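A corresponding sketch of the gated non-linearity of Eq. (14) is shown below, assuming the convolution output has already been split into scalar features, scalar gates, and a list of non-scalar irreducible features (each with its 2l + 1 components in the last dimension).

```python
import torch

def gated_activation(scalars, gates, nonscalars, F=torch.sigmoid):
    """Eq. (14).

    scalars: (..., Js) scalar features, gates: (..., Jn) scalar gates,
    nonscalars: list of Jn tensors, each of shape (..., 2l+1).
    Returns the activated scalars and the gated non-scalar features.
    """
    activated_scalars = F(scalars)
    gated = [F(gates[..., k:k + 1]) * feat for k, feat in enumerate(nonscalars)]
    return activated_scalars, gated
```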

One block of the SE(3)-CNN is symbolically shown in Fig. 3. The irreducible input feature vector to the layer is denoted by q̂_in (q̂_in is the same as Rs_in used in Fig. 5 and the remainder of this paper). In this particular block, the (12, 0) in q̂_in corresponds to 12 different order-0 features and the (12, 1) corresponds to 12 different irreducible order-1 features. The final output features of the block are given by ŝ_out (ŝ_out is the same as Rs_out used in Fig. 5 and the remainder of this article). The example shown in the figure produces 12 order-0 and 12 order-1 irreducible features as outputs. The mathematical operations performed in this block are the convolution operation (Conv), batch normalization (BatchNorm), and gated activation (Gated Act). The sequence of these operations is the same as the order in which they appear from top to bottom.

FIG. 3. A non-linear SE(3)-CNN block.

As ŝ_out is the final output produced after the gated non-linearity, the irreducible representations produced by the preceding convolution operation are 12 scalars, 12 scalar gates, and 12 order-1 irreducible features. The convolution operation has a kernel of size 5 (KS) along each of the three directions. The zero padding (PD) used along each boundary of the domain is two in the example shown in the figure. The batch normalization used in the SE(3)-CNN is an equivariant version of the regular batch normalization.38 The reader is directed to Ref. 33 for the implementation of the equivariant batch normalization. Finally, the pre-defined function used in the gated non-linearity is the Sigmoid function.

Pictorial representation of the overall ML approach implemented in this work is shown in Fig. 4. As seen in the figure, artificial flow fields generated by the network are evaluated against the corresponding PR-DNS results using a loss function to improve the network's ability to recreate the PR-DNS flow fields.

FIG. 4. ML Approach.

Sixty-four uniformly spaced grid points are chosen along each direction in a sample. In this study, we concentrate only on achieving accurate flow fields on a moderately coarse grid, which takes all neighbors that fall in the 5×5×5 local volume into consideration. The input information contains the locations of all particles (including the reference particle), the volume-averaged Reynolds number, and the mean-flow direction. A discrete representation of the locations of all particles in a sample can be achieved using an indicator function (I_f), which is defined as follows:

I_f(x, y, z) = \begin{cases} 0, & \text{if } (x, y, z) \text{ is inside a particle}, \\ 1, & \text{if } (x, y, z) \text{ is outside a particle}. \end{cases}
(15)

In addition to the indicator function, another scalar input is the Reynolds number (Re). These two scalars can be combined together to create the following redefined indicator function (c_1),

c_1(x, y, z) = \begin{cases} 0, & \text{if } (x, y, z) \text{ is inside a particle}, \\ Re, & \text{if } (x, y, z) \text{ is outside a particle}. \end{cases}
(16)

Finally, the information on the mean-flow direction is fed to the network using a unit vector û along that direction. The streamwise velocity direction is required to fully define the problem because the problem is rotationally symmetric only about the mean-flow direction. In essence, the input feature (c) to the network is a combination of a scalar and a vector, as shown below,

c={c1,û}.
(17)
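A minimal sketch of assembling this input feature on the 64³ grid is shown below, assuming the particle centers are given in the local-box coordinates of the sample (box side 5, particle diameter 1) and ignoring details such as periodic images of neighbors; the function and variable names are illustrative.

```python
import numpy as np

def build_input(centers, Re, u_hat, n=64, box=5.0):
    """Assemble c = {c1, u_hat} of Eqs. (15)-(17) as a (4, n, n, n) array."""
    xs = (np.arange(n) + 0.5) / n * box - box / 2.0      # cell-centered grid coordinates
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    indicator = np.ones((n, n, n))                        # I_f of Eq. (15)
    for cx, cy, cz in centers:                            # centers: (P, 3), diameter = 1
        inside = (X - cx)**2 + (Y - cy)**2 + (Z - cz)**2 <= 0.25
        indicator[inside] = 0.0
    c1 = Re * indicator                                   # redefined indicator of Eq. (16)
    u_field = np.broadcast_to(np.reshape(u_hat, (3, 1, 1, 1)), (3, n, n, n))
    return np.concatenate([c1[None], u_field], axis=0)

# Example usage: reference particle at the origin, one neighbor, mean flow along +x
c = build_input(centers=[(0.0, 0.0, 0.0), (1.2, 0.5, -0.8)], Re=86.22,
                u_hat=np.array([1.0, 0.0, 0.0]))
print(c.shape)    # (4, 64, 64, 64)
```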

The metric used in this study to evaluate the performance of a network is the coefficient of determination (R²). For example, the performance of the network for the streamwise velocity component, u, will be defined as follows:

R_u^2 = 1 - \frac{\sum_{i=1}^{N_{te}} \sum_{\text{volume}} \left( u_{DNS} - u_{network} \right)^2 I_f}{\sum_{i=1}^{N_{te}} \sum_{\text{volume}} \left( u_{DNS} \right)^2 I_f},
(18)

where Nte is the number of test samples. Similarly, we can also evaluate the performance of the network for pressure (p) and transverse velocities (v,w).

An R² value of unity denotes a perfect model that can exactly capture the PR-DNS fluctuations. A value of zero for a network is equivalent to not predicting any fluctuations at all. Finally, a value less than zero is an indication that the network is severely under-predicting or over-predicting the perturbations.
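The R² metric of Eq. (18) can be computed with a few lines of NumPy, as sketched below for one field component; the arrays are assumed to be stacked over all test samples, and `indicator` is the fluid indicator I_f so that grid points inside particles are excluded.

```python
import numpy as np

def r2_score(pred, target, indicator):
    """Coefficient of determination of Eq. (18) over all test samples and fluid grid points.

    pred, target, indicator: arrays of shape (N_te, 64, 64, 64) for one field component,
    where target is the normalized perturbation field from PR-DNS.
    """
    residual = np.sum((target - pred)**2 * indicator)
    reference = np.sum(target**2 * indicator)
    return 1.0 - residual / reference
```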

The deep learning results presented in this work have been obtained using a single Quadro RTX 6000 GPU device. The largest equivariant U-Net39 that could be considered for a nominal mini-batch of size 3 on this GPU device is a 13-block U-Net. The architecture of the network is shown in Fig. 5. In the first block, the input information c is the Rs_in, i.e., one scalar and one vector [(1, 0), (1, 1)]. A filter/kernel of size (KS = 5) along each dimension has been used in every equivariant convolution operation (Conv) performed in this network. A zero-padding of size (PD = 2) for each boundary has been used to make sure that a 64×64×64 output is obtained after applying a convolution operation. An equivariant batch normalization (BatchNorm) (see Ref. 33) is applied after the convolution process in the first 12 blocks. The final operation in a block after BatchNorm is the application of non-linearity. Here, the gated non-linearity (see (14) and Ref. 23) has been utilized. The inputs to the network and its outputs are only scalars and vectors. Hence, the outputs of intermediate blocks (Rs_out) have also been restricted to only representations of order-0 and 1. Each intermediate block outputs 12 scalars and 12 order-1 irreducible features after the application of the gated non-linearity, Rs_out = [(12, 0), (12, 1)]. As a U-Net involves skip connections, the outputs of the first 6 blocks are also concatenated along the feature dimension as shown in the figure. This is responsible for the Rs_in = [(24, 0), (24, 1)] in the last 6 blocks in spite of their previous blocks only outputting 12 each of order-0 and order-1 representations.

FIG. 5. Network architecture.

A layer-wise locally adaptive activation function (L-LAAF)40 of type Sigmoid with a slope recovery term has been used in the gated non-linearity with the intention of accelerating the training process. The pre-defined scaling factor, n, of L-LAAF was chosen to be unity, which is its lower bound (n ≥ 1). Thus, the scalable parameter, a, of each layer is initialized with a = 1, as n × a = 1 was the initialization condition used in Ref. 40.

The loss function (L) used in network training is given below,

L = DL + \frac{\lambda}{64^3} \times PL + 10 \times S(a),
(19)

where data-based loss (DL) is

DL = \left\| \left[ \{\mathbf{u}, p\}_{DNS} - \{\mathbf{u}, p\}_{network} \right] \odot I_f \right\|_1.
(20)

The operation ⊙ corresponds to elementwise multiplication between every field variable and the indicator function. It ensures that the loss function only takes into consideration grid points that fall inside the fluid volume. Here ‖·‖₁ denotes the L-1 norm averaged by the number of elements.

Physics-based loss (PL) is defined as

PL = \left\| \sum_{\text{volume}} \{\mathbf{u}, p\}_{network} \odot I_f \right\|_1,
(21)

based on the principle that the average of the perturbation must be zero. Although the governing equations can also be included in the PL term, they have been avoided here due to the additional computational cost associated with imposing them. However, it has to be stated that inclusion of the governing equations has resulted in more generalizable models in other fluid flow problems.17

The slope recovery term (S(a)) is

S(a) = \left[ \frac{1}{12} \sum_{k=1}^{12} \exp(a_k) \right]^{-1}.
(22)

As mentioned earlier, the slope recovery term, which is associated with L-LAAF, is added for accelerated training convergence. In the 13-block network, 12 instances of this activation are applied in the first 12 blocks and hence ak in (22) corresponds to the scalable parameter of each layer.
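A sketch of the complete loss of Eqs. (19)–(22) in PyTorch is given below; the tensor shapes and the exact treatment of the L-1 averaging are assumptions made for illustration, with λ = 0.01 as used for most of the training runs reported here.

```python
import torch

def training_loss(pred, target, indicator, a, lam=0.01):
    """Loss of Eq. (19): data loss + (lam / 64^3) * physics loss + 10 * slope recovery.

    pred, target: (B, 4, 64, 64, 64) holding the normalized (u, v, w, p) perturbation fields,
    indicator: (B, 1, 64, 64, 64) fluid indicator I_f,
    a: tensor of the 12 layer-wise L-LAAF slope parameters a_k.
    """
    dl = ((pred - target) * indicator).abs().mean()            # Eq. (20): element-averaged L-1 norm
    pl = ((pred * indicator).sum(dim=(2, 3, 4))).abs().mean()  # Eq. (21): zero-mean perturbation
    s = 1.0 / torch.exp(a).mean()                              # Eq. (22): slope recovery term
    return dl + lam / 64**3 * pl + 10.0 * s
```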

An analysis optimizing the value of λ has been carried out using the case 1 dataset, as the Reynolds number associated with this set is the highest among the considered cases. The test performance achieved with different λ values is presented in Table II. It can be seen from the results that the inclusion of this particular physics-based loss term in the cost/loss function does not lead to a significant and consistent improvement in performance. This is primarily due to the weak nature of this condition. Unlike imposing a governing equation that has to be satisfied at every grid point, this more flexible global condition deals with the overall volumetric output produced by the network. The training for the remaining datasets presented in Table I is performed using λ = 0.01, as it produces the best performance for pressure and the second best performance for the velocity components among the considered values. The optimizer used in this study is the ADAM optimizer,41 and a learning rate of 0.001 was used for the equivariant network.

TABLE II.

Test performance (R2) for different λ of the network trained using case Re = 172.96, ϕ=0.11.

λ      Pressure  Streamwise velocity  Transverse velocities
0      0.7487    0.6844               0.4960
0.001  0.7433    0.6851               0.4906
0.01   0.7533    0.6865               0.4945
0.1    0.7481    0.6889               0.4896
1      0.7479    0.6802               0.4863

The deep learning analysis with the network was individually performed for each of the considered cases. The following procedure was adopted to divide the samples available in each case into training, validation, and testing datasets. As every case has multiple realizations, two realizations are selected randomly, with one of them assigned as the validation set and the other as the testing set. The remaining realizations put together form the training set. The test performance variability study performed in Ref. 18 for the same cases indicates that the variation in test performance with respect to which realization is taken as the testing data is very low. Hence, the results presented here will be a good estimate of the overall performance of the network.

Here, the performance of the SE(3)-CNN will be evaluated and compared with that of an equivalent regular CNN with and without data augmentation. The regular CNN is obtained by replacing all the equivariant components of the network shown in Fig. 5 with conventional non-equivariant components directly from PyTorch.35 Essentially, the gated non-linearity is replaced by a conventional elementwise activation function of the same type (L-LAAF Sigmoid). Every SE(3)-convolutional layer in the intermediate blocks of the SE(3)-CNN produces 12 order-0 features, 12 scalar gates, and 12 order-1 features as its outputs. Therefore, each conventional-convolutional layer of the regular CNN outputs 60 channels to maintain the same overall dimension. As elementwise non-linearity is applied in the regular CNN, the output of each block also has 60 channels. This regular CNN has approximately 7.23 times as many independent parameters as the equivariant version.

Data augmentation has been utilized for both the equivariant and regular CNNs. As the SE(3)-CNN only imposes 3D rotational and translational symmetries, reflection about either the y or z axis has been performed for all datasets. The data augmentations performed for the regular CNN are discrete 90° rotations about the streamwise direction (along x) and reflections about the y and z axes. This regular CNN with data-augmented samples will be referred to as the Data-augmented CNN, and the traditional CNN without any data augmentation will be called the Simple CNN. Therefore, the total number of samples for the Data-augmented CNN is 8 times that of the Simple CNN, and the total number of samples for the SE(3)-CNN is twice the number of samples of the Simple CNN. The same loss function, given in (19) with λ = 0.01, and the same optimizer are used for all the networks. A learning rate of 0.0002 was used for the Data-augmented CNN and the Simple CNN due to the significantly higher number of independent parameters in them.

It can be seen from Table III that the SE(3)-CNN outperforms both the Data-augmented CNN and the Simple CNN for all cases. This improvement can be attributed to the fact that continuous rotational symmetry about the given input streamwise direction is imposed when an equivariant CNN is used to generate the flow fields. This process can be interpreted as providing continuous data augmentation through infinitesimal-angle rotations about the given streamwise direction, and it is known that a neural network becomes more generalizable when it is provided with a larger number of training samples. Hence, the higher test performance for pressure (p) and the streamwise velocity component (u) of the SE(3)-CNN can be attributed to the fact that the abstract information learned in its intermediate layers is equivariant, and therefore, more generalizable. The significant improvement for the transverse velocity components (v, w) in the Data-augmented CNN when compared with the Simple CNN indicates that discrete rotations and reflections are very helpful in teaching the CNN that v and w are statistically the same. The much higher performance of the SE(3)-CNN for the transverse velocity components suggests that the SE(3)-CNN does a better job of learning that these two components are truly the same in a statistical sense.

TABLE III.

Test performance (R2) of SE(3)-CNN, Data-augmented CNN, and Simple CNN.

        SE(3)-CNN                 Data-augmented CNN        Simple CNN
Case    p       u       v,w       p       u       v,w       p       u       v,w
1       0.7533  0.6865  0.4945    0.7128  0.6607  0.4628    0.7195  0.5933  0.2972
2       0.7803  0.8076  0.7166    0.7797  0.7729  0.6838    0.7058  0.7375  0.4933
3       0.5941  0.8297  0.6988    0.4932  0.8081  0.6549    0.4275  0.7691  0.5712

The obvious advantage of the SE(3)-CNN, apart from its higher test performance, is that its equivariance property allows it to be readily used on test samples whose streamwise direction substantially deviates from the +x axis. On the contrary, the regular CNN must either be trained through data augmentation along these directions, which results in an increase in training time, or a coordinate transformation has to be performed on the testing samples to reorient the streamwise direction along the +x axis, followed later by another coordinate transformation to shift back to the original coordinate system of the testing data.

The amount of training data that is typically available in fluid flow applications is low. This increases the importance of using rotational and reflectional symmetries to increase the amount of data available for training. In this section, we will illustrate that even with a limited amount of training data, the SE(3)-CNN is able to perform quite well as a result of its in-built symmetry enforcement, which greatly enhances the effective amount of available data. The impact of training dataset size is evaluated using the case 2 dataset. The networks are trained with limited sub-samples of the entire training data and the results are shown in Table IV. These sub-samples are chosen randomly from the entire training dataset. The test performance (R²) for the three networks trained using this reduced training data is presented in Table IV, along with the test performance for the total training samples for the purpose of comparison.

TABLE IV.

Test performance (R2) of SE(3)-CNN, Data-augmented CNN, and Simple CNN when trained on limited samples of case 2.

Unique               SE(3)-CNN                   Data-augmented CNN           Simple CNN
training samples     p       u       v,w         p       u       v,w          p       u       v,w
                     0.5354  0.6355  0.4906      0.3267  0.4921  −0.1493      0.2516  0.3596  −0.2843
                     0.6610  0.7255  0.5779      0.6465  0.6616  0.3202       0.2531  0.2185  −0.0666
20                   0.7380  0.7754  0.6546      0.6630  0.7396  0.5366       0.5892  0.6616  0.1756
75                   0.7649  0.8000  0.7018      0.7402  0.7858  0.6360       0.6554  0.7112  0.4102
450                  0.7803  0.8076  0.7166      0.7797  0.7729  0.6838       0.7058  0.7375  0.4933

It can be seen from the results that both the Data-augmented CNN and the Simple CNN perform inconsistently when the number of unique samples is small. This poor performance can be attributed to their large number of trainable parameters, which generally leads to overfitting and hence poor generalization. The SE(3)-CNN, however, performs moderately well even with very few samples, which is an indication of its high generalizability. When the number of unique training samples is of O(10), both the SE(3)-CNN and the Data-augmented CNN outperform the Simple CNN by a significant margin, especially for the transverse velocities. This is to be expected, since the SE(3)-CNN implicitly enforces continuous 3D rotational symmetry and the Data-augmented CNN has eight times more samples from the augmentation process. Furthermore, the gap in transverse-velocity performance between the SE(3)-CNN and the Data-augmented CNN remains considerable even when the training samples are of O(10); this advantage is due to the continuous rotational symmetry of the SE(3)-CNN architecture.

From the above discussion, it can be stated that the SE(3)-CNN is an efficient and effective alternative to the discrete-rotation data augmentation procedure, which becomes a necessity in the limit of scarce training data. Hence, the SE(3)-CNN is the ideal option for obtaining a generalizable model when the training data are limited.

The central yz plane of a random test sample from case 2 is used here to illustrate the discussed equivariance property of the SE(3)-CNN. Pressure and velocity plots on this yz plane, and on the corresponding yz planes obtained through discrete 90° rotations about the x axis, are shown in Figs. 6 and 7, respectively. The white circular patches of different cross sections correspond to the particles that intersect these planes. For the velocity plots, the streamwise component (u) is shown as contours and the other two velocity components (v, w) are shown as in-plane velocity vectors. It can be seen from both figures, especially the contours of pressure and streamwise velocity, that the SE(3)-CNN preserves the equivariance property precisely. The Data-augmented CNN yields reasonable results that appear nearly equivariant, but some differences can clearly be noticed. As expected, the least equivariant output is produced by the Simple CNN.
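A simple numerical check of this discrete equivariance can be performed by comparing "rotate then predict" against "predict then rotate" for a trained model; in the sketch below, `model`, `rotate_input`, and `rotate_field` (e.g., the `rotate_90_about_x` helper sketched earlier) are hypothetical stand-ins for the actual implementation.

```python
import numpy as np

def equivariance_error(model, sample, rotate_input, rotate_field):
    """Relative discrepancy between predict-then-rotate and rotate-then-predict."""
    pred = model(sample)                          # predict on the original sample
    pred_then_rot = rotate_field(pred)            # rotate the prediction by 90 deg about x
    rot_then_pred = model(rotate_input(sample))   # predict on the rotated sample
    num = np.linalg.norm(pred_then_rot - rot_then_pred)
    den = np.linalg.norm(pred_then_rot)
    return num / den                              # ~0 for an exactly equivariant network
```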

FIG. 6.

Pressure contours for a central yz plane of a test sample and its discrete rotations generated using (a) SE(3)-CNN, (b) Data-augmented CNN and (c) Simple CNN from case Re = 86.22, ϕ=0.21.

FIG. 7.

Velocity plots for a central yz plane of a test sample and its discrete rotations generated using (a) SE(3)-CNN, (b) Data-augmented CNN and (c) Simple CNN from case Re = 86.22, ϕ=0.21.


For a given sample, the model is not aware of neighboring particles that lie outside the 5×5×5 volume, and therefore the accuracy of the solution is expected to degrade close to the outer boundaries of the volume. Conversely, the accuracy of the predicted flow fields should be higher close to the reference particle, since the model has deterministic knowledge of all of its immediate neighbors. To confirm this reasoning, the normalized mean squared error (NMSE) is evaluated for the fluid grid points in the 5×5×5 and 2×2×2 regions around the reference particle. The mathematical definitions of these two errors for u are given below as

\mathrm{NMSE}_{u,(5\times 5\times 5)} = \frac{\sum_{i=1}^{N_{te}} \sum_{(5\times 5\times 5)} \left(u_{\mathrm{DNS}} - u_{\mathrm{network}}\right)^{2} I_f}{\sum_{i=1}^{N_{te}} \sum_{(5\times 5\times 5)} \left(u_{\mathrm{DNS}}\right)^{2} I_f},
(23)
\mathrm{NMSE}_{u,(2\times 2\times 2)} = \frac{\sum_{i=1}^{N_{te}} \sum_{(2\times 2\times 2)} \left(u_{\mathrm{DNS}} - u_{\mathrm{network}}\right)^{2} I_f}{\sum_{i=1}^{N_{te}} \sum_{(2\times 2\times 2)} \left(u_{\mathrm{DNS}}\right)^{2} I_f}.
(24)
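Given the predicted and PR-DNS fields and the fluid indicator I_f, Eqs. (23) and (24) amount to a masked sum of squared errors normalized by the masked energy of the PR-DNS field; a small sketch, with an assumed array layout, is given below.

```python
import numpy as np

def nmse(u_dns, u_net, fluid_mask):
    """Normalized mean squared error over fluid grid points, in the style of Eqs. (23)-(24).

    u_dns, u_net: arrays of shape (N_te, Nx, Ny, Nz);
    fluid_mask: I_f, equal to 1 at fluid points and 0 inside particles.
    """
    num = np.sum(fluid_mask * (u_dns - u_net) ** 2)
    den = np.sum(fluid_mask * u_dns ** 2)
    return num / den

# For the 2x2x2 sub-volume, the same function can be applied after slicing the central
# region of the arrays, e.g. u_dns[:, i0:i1, j0:j1, k0:k1] (index bounds are assumptions).
```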

These NMSE values are presented in Table V. It can be seen that the error in the 2×2×2 volume is lower than that in the 5×5×5 volume for all three cases. Thus, these results are in agreement with the reasoning that the flow fields are more accurate closer to the reference particle. Siddani et al.18 used this property of accurate flow fields around a reference particle, along with a Voronoi tessellation algorithm, to reconstruct flow fields around a given random distribution of stationary particles.

TABLE V.

Normalized mean squared error (NMSE) for (5×5×5) and (2×2×2) volumes around reference particle using SE(3)-CNN.

         (5×5×5)                     (2×2×2)
Case     p       u       v,w         p       u       v,w
1        0.2467  0.3135  0.5055      0.1444  0.2070  0.3793
2        0.2197  0.1924  0.2834      0.1178  0.1135  0.1734
3        0.4059  0.1703  0.3012      0.1751  0.0759  0.1358

Loss curves for the SE(3)-CNN, Data-augmented CNN, and Simple CNN trained on case 3 are presented in Fig. 8. Since the data-based loss is the term that is essentially indicative of the network's performance, the decision to stop training early, in order to alleviate the impact of overfitting, is based on this loss. The minimum value of the data-based loss for the validation dataset, DLval,min, is monitored during the entire training process. Training is stopped if DLval,min is not updated, i.e., the validation data-based loss does not reduce further, for 12 consecutive epochs.
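The early-stopping rule described above can be summarized by the following sketch, where `train_epoch` and `validate` are hypothetical callables (not the authors' code) that run one training epoch and return the data-based loss on the validation set.

```python
def train_with_early_stopping(model, train_epoch, validate, patience=12, max_epochs=1000):
    """Stop training once the validation data-based loss has not improved for `patience` epochs."""
    best_val, epochs_since_best = float("inf"), 0
    for epoch in range(max_epochs):
        train_epoch(model)
        val_loss = validate(model)
        if val_loss < best_val:              # DL_val,min updated
            best_val, epochs_since_best = val_loss, 0
        else:
            epochs_since_best += 1
        if epochs_since_best >= patience:    # no improvement in 12 consecutive epochs
            break
    return model
```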

FIG. 8.

Evolution of Data-based Loss for (a) SE(3)-CNN, (b) Data-augmented CNN and (c) Simple CNN trained on case Re = 114.60, ϕ=0.45.


The computational cost of the considered SE(3)-CNN and the regular CNN for a mini-batch of size 3 is presented in Table VI. It can be seen that the training time for the SE(3)-CNN is slightly lower than that of the regular CNN. This difference can be associated with the larger number of parameter updates required by the regular CNN to optimize its larger set of independent parameters. However, the testing time of the SE(3)-CNN is marginally higher than that of the regular CNN. The higher testing time can be attributed to the change-of-basis operation (using TCG) involved in the SE(3)-equivariant convolutional layers. This overhead can be reduced by pre-storing the kernel/filter in the form of κ̂k,j rather than in the form of ηk,j once the optimized parameters are obtained at the end of training. Pre-storing the kernel in this way eliminates the change-of-basis operation in the SE(3)-convolutional layers during testing.
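The kernel pre-storing idea can be sketched generically as follows; `change_of_basis` is a hypothetical stand-in for the TCG-based transformation from ηk,j to κ̂k,j, and this is not the e3nn API, but it conveys how the transformation can be applied once after training and cached for inference.

```python
import torch
import torch.nn.functional as F

class CachedKernelConv3d(torch.nn.Module):
    """Sketch of test-time kernel caching for an equivariant convolutional layer."""

    def __init__(self, eta, change_of_basis):
        super().__init__()
        # Apply the change of basis once and store the result as a buffer of shape
        # (out_channels, in_channels, k, k, k), so inference is a plain 3D convolution.
        self.register_buffer("kappa_hat", change_of_basis(eta).detach())

    def forward(self, x):
        pad = self.kappa_hat.shape[-1] // 2
        return F.conv3d(x, self.kappa_hat, padding=pad)
```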

TABLE VI.

Computation time in seconds for a mini-batch of size 3 using SE(3)-CNN and regular CNN.

Computation type    SE(3)-CNN    Regular CNN
Training            5.27 s       5.59 s
Testing             1.75 s       1.08 s

One of the advantages of rotational and reflectional equivariance is the ability to preserve the proper symmetries of tensorial quantities. Although the prediction here has been restricted to pressure (a scalar) and velocity (a vector), tensor quantities constructed from them, such as the gradient of the velocity field, can also be considered. In this section, we examine how well the SE(3)-CNN predicts these derived tensor quantities. The velocity gradients of the SE(3)-CNN, Data-augmented CNN, and Simple CNN predictions are computed using a second-order accurate central difference scheme at non-boundary fluid grid points. A fluid grid point that satisfies the following criterion is called a non-boundary fluid grid point in this work:

I_f(\mathbf{x}+\Delta x\,\hat{i}) + I_f(\mathbf{x}-\Delta x\,\hat{i}) + I_f(\mathbf{x}+\Delta y\,\hat{j}) + I_f(\mathbf{x}-\Delta y\,\hat{j}) + I_f(\mathbf{x}+\Delta z\,\hat{k}) + I_f(\mathbf{x}-\Delta z\,\hat{k}) = 6.

Here, Δx, Δy, Δz are the grid spacings along the x, y, z directions, respectively. The derivatives are evaluated only at the non-boundary fluid grid points because the second-order accurate central difference scheme can be applied only at these points. Interestingly, the process of obtaining central-difference derivatives of each variable can itself be implemented as a conventional convolutional layer with the central-difference stencil as its kernel.
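A minimal sketch of this idea, assuming fields stored as (N, 1, Nx, Ny, Nz) tensors, is given below; it also shows how the non-boundary criterion above can be evaluated with a neighbor-summing convolution. The helper names and array layout are illustrative and not the authors' code.

```python
import torch
import torch.nn.functional as F

def ddx_central(field, dx):
    """d/dx via second-order central differences, implemented as a 3D convolution.

    field: tensor of shape (N, 1, Nx, Ny, Nz); values are meaningful only at interior points.
    """
    # Stencil (f[i+1] - f[i-1]) / (2*dx) along the x (first spatial) axis of conv3d.
    kernel = torch.tensor([-1.0, 0.0, 1.0]).reshape(1, 1, 3, 1, 1) / (2.0 * dx)
    return F.conv3d(field, kernel, padding=(1, 0, 0))

def non_boundary_fluid_mask(I_f):
    """Fluid points whose six face neighbors are all fluid (the criterion above)."""
    kernel = torch.zeros(1, 1, 3, 3, 3)
    kernel[0, 0, 0, 1, 1] = kernel[0, 0, 2, 1, 1] = 1.0   # ±x neighbors
    kernel[0, 0, 1, 0, 1] = kernel[0, 0, 1, 2, 1] = 1.0   # ±y neighbors
    kernel[0, 0, 1, 1, 0] = kernel[0, 0, 1, 1, 2] = 1.0   # ±z neighbors
    neighbor_sum = F.conv3d(I_f, kernel, padding=1)
    return (I_f > 0) & (neighbor_sum == 6)
```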

The NMSE was calculated for each of the nine components of the velocity gradient tensor; its definition for ∂u/∂x is shown below as an example,

\mathrm{NMSE}_{\partial u_{\mathrm{network}}/\partial x} = \frac{\sum_{i=1}^{N_{te}} \sum_{\mathrm{sample}} \left(\dfrac{\partial u_{\mathrm{DNS}}}{\partial x} - \dfrac{\partial u_{\mathrm{network}}}{\partial x}\right)^{2} I_{nbf}}{\sum_{i=1}^{N_{te}} \sum_{\mathrm{sample}} \left(\dfrac{\partial u_{\mathrm{DNS}}}{\partial x}\right)^{2} I_{nbf}},
(25)

where

I_{nbf}(x,y,z) = \begin{cases} 1 & \text{if } (x,y,z) \text{ is a non-boundary fluid point} \\ 0 & \text{otherwise}. \end{cases}

The NMSE was calculated for each of the three networks by comparing their predictions against the corresponding values obtained in PR-DNS, where the gradient was also evaluated using the same central difference scheme. The NMSE values of the spatial derivatives for the three networks are presented for case 1 in Table VII. Consistent with the performance pattern of the flow variables, the spatial derivatives of the SE(3)-CNN solution have the highest accuracy, with the Data-augmented CNN achieving the next best accuracy.

TABLE VII.

Normalized mean squared errors (NMSE) of spatial derivatives for the three networks' predictions are presented for case 1.

            SE(3)-CNN                   Data-augmented CNN          Simple CNN
Variable    ∂/∂x    ∂/∂y    ∂/∂z        ∂/∂x    ∂/∂y    ∂/∂z        ∂/∂x    ∂/∂y    ∂/∂z
p           0.2272  0.3277  0.3276      0.2549  0.3678  0.3633      0.2593  0.3755  0.3767
u           0.3898  0.3418  0.3418      0.4219  0.3515  0.3516      0.4826  0.4175  0.4122
v           0.4098  0.5186  0.7575      0.4573  0.5529  0.7816      0.5657  0.6599  0.9058
w           0.4098  0.7576  0.5186      0.4470  0.7741  0.5401      0.5714  0.8888  0.6537

It was mentioned earlier that the SE(3)-CNN ensures that the flow information corresponding to the transverse directions (y and z) is statistically the same. This point can be illustrated using the results in Table VII. The NMSE values of ∂p/∂y and ∂p/∂z are much closer to each other for the SE(3)-CNN solution than for the Data-augmented CNN or the Simple CNN. Even the small difference between these errors for the SE(3)-CNN can be attributed to the small deviation of the streamwise direction from the x axis in each individual sample, which causes the actual transverse directions to also deviate slightly from the y and z axes.

Other pairs that can be compared in the same manner are (∂u/∂y & ∂u/∂z), (∂v/∂y & ∂w/∂z), (∂v/∂x & ∂w/∂x), and (∂v/∂z & ∂w/∂y). For all of these pairs, the differences in error are very small when the SE(3)-CNN is used. The corresponding differences for the Data-augmented CNN and the Simple CNN are larger, indicating their difficulty in replicating this statistical property. Preserving the required rotational statistical symmetries of fluid flow around a random distribution of particles is a built-in property of the SE(3)-CNN, irrespective of the amount of training data available to it. On the other hand, this statistical behavior can only be learned by a regular CNN when sufficient data are available for training. The results presented in Table VII correspond to training with all available samples (more than 350) for all three approaches; in this case, the symmetry preservation of the Data-augmented and Simple CNNs was acceptable. However, when the same analysis was performed with fewer than about 20 training samples, the symmetry errors were much larger for both the Simple CNN and the Data-augmented CNN, illustrating the importance of the SE(3)-CNN in the case of limited data.

This paper presents a data-driven methodology based on CNNs to recreate the steady-state particle-resolved flow field around a particle of interest, given the volume-averaged Reynolds number, mean-flow direction, and exact locations of neighboring stationary particles in a dispersed multiphase setup. Instead of using a conventional CNN, we have implemented a SE(3)-CNN,23 which is translation and 3D rotation equivariant. This architecture enables the incorporation of rotational symmetry. The model takes only a local volume containing the most influential neighbors around the particle of interest to predict the flow field. The results for the considered cases indicate that the equivariant CNN performs better than a conventional CNN with or without data augmentation. This advantage is particularly important in the limit of low availability of training data. While the SE(3)-CNN produces very good results even with limited training data, mainly because the enforced symmetries implicitly enhance the data, the corresponding results for a conventional CNN with or without discrete data augmentation are rather poor. This indicates that incorporating symmetries into data-driven models improves their accuracy, especially under conditions of limited training data. From our limited experience with the SE(3)-CNN approach, we observed that finding an appropriate network architecture capable of representing the complex flow physics requires more effort than with a conventional CNN, owing to the architectural restrictions imposed to preserve symmetry. However, we do not view this as a strict limitation, because with increasing usage and popularity of SE(3)-CNNs there will be many equivariant network architectures that can serve as an initial trial architecture.

Since the current work implements rotational symmetry about an arbitrary direction using the SE(3)-CNN, this equivariant network can be used for any 3D physical rotation of the problem. The property that the intermediate layers of the SE(3)-CNN are irreducible representations of the SO(3) group is an important step toward interpretable data-driven models. Although the presented methodology has been applied to a steady-state problem with stationary particles, it can readily be extended to an unsteady case in which the mean-flow and particle velocities at the current and previous time steps are also taken as inputs to predict the flow fields at the current time step. The presented data-driven method of flow field generation is applicable to any combination of Reynolds number and particle volume fraction; however, the network architecture and the amount of training data are critical to achieve a desired level of performance for a given case. The considered local volume is discretized using a uniform grid along each direction because a CNN architecture is used. Hence, the approach can be seamlessly extended to poly-dispersed (particles with different diameters) and non-spherical particle systems, provided the grid resolution is sufficient to adequately capture the curvature of the involved particles.

This work was sponsored by the Office of Naval Research (ONR) as part of the Multidisciplinary University Research Initiatives (MURI) Program, under Grant No. N00014-16-1-2617. This work was also partly supported and benefited from the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement to the University of Florida under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378. This work was also partly supported by the National Science Foundation under Grant No. 1908299. B.S. would like to thank Mario Geiger and Tess E. Smidt for their helpful suggestions and discussions on the usage of the e3nn library.33

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. G. Novati, H. L. de Laroussilhe, and P. Koumoutsakos, "Automating turbulence modelling by multi-agent reinforcement learning," Nat. Mach. Intell. 3, 87–96 (2021).
2. S. Pawar and O. San, "Data assimilation empowered neural network parametrizations for subgrid processes in geophysical flows," Phys. Rev. Fluids 6, 050501 (2021).
3. X.-H. Zhou, J. Han, and H. Xiao, "Learning nonlocal constitutive models with neural networks," Comput. Methods Appl. Mech. Eng. 384, 113927 (2021).
4. S. L. Brunton, B. R. Noack, and P. Koumoutsakos, "Machine learning for fluid mechanics," Annu. Rev. Fluid Mech. 52, 477–508 (2020).
5. K. Duraisamy, G. Iaccarino, and H. Xiao, "Turbulence modeling in the age of data," Annu. Rev. Fluid Mech. 51, 357–377 (2019).
6. K. Duraisamy, "Perspectives on machine learning-augmented Reynolds-averaged and large eddy simulation models of turbulence," arXiv:2009.10675 [physics.flu-dyn] (2021).
7. S. L. Brunton, J. Nathan Kutz, K. Manohar, A. Y. Aravkin, K. Morgansen, J. Klemisch, N. Goebel, J. Buttrick, J. Poskin, A. W. Blom-Schieber, T. Hogan, and D. McDonald, "Data-driven aerospace engineering: Reframing the industry with machine learning," AIAA J. 59, 1–2847 (2021).
8. Y. Qi, J. Lu, R. Scardovelli, S. Zaleski, and G. Tryggvason, "Computing curvature for volume of fluid methods using machine learning," J. Comput. Phys. 377, 155–161 (2019).
9. H. Patel, A. Panda, J. Kuipers, and E. Peters, "Computing interface curvature from volume fractions: A machine learning approach," Comput. Fluids 193, 104263 (2019).
10. M. Ataei, M. Bussmann, V. Shaayegan, F. Costa, S. Han, and C. B. Park, "NPLIC: A machine learning approach to piecewise linear interface construction," Comput. Fluids 223, 104950 (2021).
11. Z. Yi Wan, P. Karnakov, P. Koumoutsakos, and T. P. Sapsis, "Bubbles in turbulent flows: Data-driven, kinematic models with history terms," Int. J. Multiphase Flow 129, 103286 (2020).
12. C. Lin, Z. Li, L. Lu, S. Cai, M. Maxey, and G. E. Karniadakis, "Operator learning for predicting multiscale bubble growth dynamics," J. Chem. Phys. 154, 104118 (2021).
13. L. Lu, P. Jin, G. Pang, Z. Zhang, and G. E. Karniadakis, "Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators," Nat. Mach. Intell. 3, 218–229 (2021).
14. M. Raissi, P. Perdikaris, and G. Karniadakis, "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations," J. Comput. Phys. 378, 686–707 (2019).
15. Z. Mao, A. D. Jagtap, and G. E. Karniadakis, "Physics-informed neural networks for high-speed flows," Comput. Methods Appl. Mech. Eng. 360, 112789 (2020).
16. A. Subramaniam, M. L. Wong, R. D. Borker, S. Nimmagadda, and S. K. Lele, "Turbulence enrichment using physics-informed generative adversarial networks," arXiv:2003.01907 [physics.comp-ph] (2020).
17. C. M. Jiang, S. Esmaeilzadeh, K. Azizzadenesheli, K. Kashinath, M. Mustafa, H. A. Tchelepi, P. Marcus, Prabhat, and A. Anandkumar, "MeshfreeFlowNet: A physics-constrained deep continuous space-time super-resolution framework," arXiv:2005.01463 [cs.LG] (2020).
18. B. Siddani, S. Balachandar, W. C. Moore, Y. Yang, and R. Fang, "Machine learning for physics-informed generation of dispersed multiphase flow using generative adversarial networks," arXiv:2005.05363 [physics.flu-dyn] (2020).
19. K. Fukami, K. Fukagata, and K. Taira, "Machine-learning-based spatio-temporal super resolution reconstruction of turbulent flows," J. Fluid Mech. 909, A9 (2021).
20. A. Güemes, S. Discetti, A. Ianiro, B. Sirmacek, H. Azizpour, and R. Vinuesa, "From coarse wall measurements to turbulent velocity fields through deep learning," Phys. Fluids 33, 075121 (2021).
21. T. E. Smidt, "Euclidean symmetry and equivariance in machine learning," Trends Chem. 3, 82–85 (2021).
22. R. Wang, R. Walters, and R. Yu, "Incorporating symmetry into deep dynamics models for improved generalization," arXiv:2002.03061 [cs.LG] (2020).
23. M. Weiler, M. Geiger, M. Welling, W. Boomsma, and T. Cohen, "3D steerable CNNs: Learning rotationally equivariant features in volumetric data," arXiv:1807.02547 [cs.LG] (2018).
24. M. Chu and N. Thuerey, "Data-driven synthesis of smoke flows with CNN-based feature descriptors," ACM Trans. Graph. 36, 1–14 (2017).
25. L. He and D. K. Tafti, "A supervised machine learning approach for predicting variable drag forces on spherical particles in suspension," Powder Technol. 345, 379–389 (2019).
26. N. Muralidhar, J. Bu, Z. Cao, L. He, N. Ramakrishnan, D. Tafti, and A. Karpatne, "PhyNet: Physics guided neural networks for particle drag force prediction in assembly," in Proceedings of the 2020 SIAM International Conference on Data Mining (SIAM, 2020), pp. 559–567.
27. W. Moore, S. Balachandar, and G. Akiki, "A hybrid point-particle force model that combines physical and data-driven approaches," J. Comput. Phys. 385, 187–208 (2019).
28. G. Akiki, T. L. Jackson, and S. Balachandar, "Pairwise interaction extended point-particle model for a random array of monodisperse spheres," J. Fluid Mech. 813, 882–928 (2017).
29. W. C. Moore and S. Balachandar, "Lagrangian investigation of pseudo-turbulence in multiphase flow using superposable wakes," Phys. Rev. Fluids 4, 114301 (2019).
30. S. Balachandar and J. K. Eaton, "Turbulent dispersed multiphase flow," Annu. Rev. Fluid Mech. 42, 111–133 (2010).
31. G. Akiki and S. Balachandar, "Immersed boundary method with non-uniform distribution of Lagrangian markers for a non-uniform Eulerian mesh," J. Comput. Phys. 307, 34–59 (2016).
32. S. Balachandar, W. C. Moore, G. Akiki, and K. Liu, "Toward particle-resolved accuracy in Euler–Lagrange simulations of multiphase flow using machine learning and pairwise interaction extended point-particle (PIEP) approximation," Theor. Comput. Fluid Dyn. 34, 401–428 (2020).
33. M. Geiger, T. Smidt, B. K. Miller, W. Boomsma, K. Lapchevskyi, M. Weiler, M. Tyszkiewicz, and J. Frellsen, "Euclidean neural networks: e3nn," Zenodo (2020).
34. T. Smidt, "blondegeek/e3nn_tutorial: Tutorials now support 'e3nn'. Plus, new SphericalTensor methods and notebooks," Zenodo (2020).
35. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An imperative style, high-performance deep learning library," in Advances in Neural Information Processing Systems 32, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Curran Associates, Inc., 2019), pp. 8024–8035.
36. N. Thomas, T. Smidt, S. Kearnes, L. Yang, L. Li, K. Kohlhoff, and P. Riley, "Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds," arXiv:1802.08219 [cs.LG] (2018).
37. V. G. Satorras, E. Hoogeboom, and M. Welling, "E(n) equivariant graph neural networks," arXiv:2102.09844 [cs.LG] (2021).
38. S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv:1502.03167 [cs.LG] (2015).
39. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), pp. 234–241.
40. A. D. Jagtap, K. Kawaguchi, and G. E. Karniadakis, "Locally adaptive activation functions with slope recovery term for deep and physics-informed neural networks," arXiv:1909.12228 (2019).
41. D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv:1412.6980 [cs.LG] (2017).