The modeling of multiphase flow in a pipe presents a significant challenge for high-resolution computational fluid dynamics (CFD) models due to the high aspect ratio (length over diameter) of the domain. In subsea applications, the pipe length can be several hundred meters versus a pipe diameter of just a few inches. By approximating CFD models in a low-dimensional space, reduced-order models have been shown to produce accurate results with a speed-up of orders of magnitude. In this paper, we present a new AI-based non-intrusive reduced-order model within a domain decomposition framework (AI-DDNIROM), which is capable of making predictions for domains significantly larger than the domain used in training. This is achieved by (i) using a domain decomposition approach; (ii) using dimensionality reduction to obtain a low-dimensional space in which to approximate the CFD model; (iii) training a neural network to make predictions for a single subdomain; and (iv) using an iteration-by-subdomain technique to converge the solution over the whole domain. To find the low-dimensional space, we compare Proper Orthogonal Decomposition with several types of autoencoder networks, known for their ability to compress information accurately and compactly. The comparison is assessed with two advection-dominated problems: flow past a cylinder and slug flow in a pipe. To make predictions in time, we exploit an adversarial network, which aims to learn the distribution of the training data, in addition to learning the mapping between particular inputs and outputs. This type of network has shown the potential to produce visually realistic outputs. The whole framework is applied to multiphase slug flow in a horizontal pipe, for which an AI-DDNIROM is trained on high-fidelity CFD simulations of a pipe of length 10 m with an aspect ratio of 13:1 and tested by simulating the flow for a pipe of length 98 m with an aspect ratio of almost 130:1. Inspection of the predicted liquid volume fractions shows a good match with the high-fidelity model. Statistics of the flows obtained from the CFD simulations are compared to those of the AI-DDNIROM predictions to demonstrate the accuracy of our approach.

Non-intrusive reduced-order modeling (NIROM) has been the subject of intense research activity over the last five years, largely due to the advances made in machine learning and the re-application of these techniques to reduced-order modeling. This paper takes a step toward demonstrating how non-intrusive reduced-order models can generalize by training the model on one domain and deploying it on a much larger domain. The method outlined here could be extremely useful for the energy industry, for example, in which pipelines of the order of kilometers in length and inches in diameter are used for the subsea transportation of fluids. With such high aspect ratios, these pipes are too long to be modeled by high-resolution computational fluid dynamics (CFD) models alone. In this paper, we propose a non-intrusive reduced-order model based on autoencoders (for dimensionality reduction), an adversarial network (for prediction), and a domain decomposition approach. For dimensionality reduction, we investigate the performance of several autoencoders and Proper Orthogonal Decomposition (POD) using two test cases (flow past a cylinder and multiphase slug flow in a horizontal pipe). With the method that performs best (the convolutional autoencoder), we go on to demonstrate the NIROM approach on multiphase slug flow in a horizontal pipe, training the networks on CFD data from a 10 m pipe with an aspect ratio of 13:1 and making predictions for the flow within a 98 m pipe with an aspect ratio of almost 130:1. In the following paragraphs, we give some background on reduced-order modeling (ROM) and NIROM; on dimensionality reduction methods and the use of autoencoders; on prediction; on domain decomposition methods and ROM; and finally on multiphase flow. The final two paragraphs summarize the main contributions of the paper and describe the layout of the rest of the paper.

The aim of reduced-order modeling1 (ROM) is to obtain a low-dimensional approximation of a computationally expensive high-dimensional system of discretized equations, henceforth referred to as the high-fidelity model (HFM). To be of benefit, the low-dimensional model should be accurate enough for its intended purpose and orders of magnitude faster to solve than the HFM. One common strategy for constructing reduced-order models, known as projection-based ROM,2 is to use a Galerkin projection of the HFM onto a low-dimensional subspace. However, in this article we focus on an alternative method, NIROM, which, unlike projection-based ROM, does not require access to or modification of the source code of the HFM. It requires only the results of the HFM, with which it constructs a low-dimensional approximation to the HFM in two stages: the offline stage and the online stage. During the offline stage, solutions from the HFM are generated (known as snapshots); a set of basis functions that span the low-dimensional or reduced space are obtained by a dimensionality reduction method; and finally, the evolution of the HFM in the reduced space is approximated in some manner. The latter step can be done in several ways, but here, as we focus on AI-based NIROM, we use a neural network. A profusion of terms exists for this type of non-intrusive modeling, including POD with interpolation;3 NIROM;4,5 POD surrogate modeling;5,6 system or model identification;7,8 Galerkin-free;9 data-driven reduced-order modeling;10–12 Deep Learning ROM;13 and digital twins.14–17 In addition to making predictions, digital twins assimilate data from observations to improve the accuracy of the prediction.

To find the low-dimensional subspace in which to approximate the HFM, many of these non-intrusive approaches rely on Proper Orthogonal Decomposition (POD),18 which is based on Singular Value Decomposition. Also known as Principal Component Analysis, POD finds the optimal linear subspace (with a given dimension) that can represent the space spanned by the snapshots and prioritizes the modes according to those that exhibit the most variance. Whilst POD works well in many situations, for advection-dominated flows with their slow decay of singular values or large Kolmogorov N–width,19 approximations based on POD can be poor20–22 and researchers are turning increasingly to autoencoders.23 Although adding to the offline cost, these networks seek a low-dimensional nonlinear subspace, which can be more accurate and efficient than a linear subspace for approximating the HFM.

Convolutional networks are particularly good at analyzing and classifying images (on structured grids)24,25 with the ability to pick out features and patterns wherever they are located (translational invariance), and these methods are applicable directly to the dimensionality reduction of CFD solutions on structured grids through the use of convolutional autoencoders (CAEs). Methods that apply convolutional networks to data on unstructured meshes do exist (based on space-filling curves;26 graph convolutional networks;27,28 and a method that introduces spatially varying kernels29) but are in their infancy, so most researchers either solve the high-resolution problem on structured grids directly or interpolate from the high-fidelity model snapshots to a structured grid before applying the convolutional layers. The latter approach is adopted here.

Perhaps the first use of an autoencoder for dimensionality reduction within a ROM framework was applied to reconstruct flow fields in the near-wall region of channel flow based on information at the wall,30 whilst the first use of a convolutional autoencoder came 16 years later and was applied to Burgers Equation, advecting vortices and lid-driven cavity flow.31 In the few years since 2018, many papers have appeared, in which convolutional autoencoders have been applied to sloshing waves, colliding bodies of fluid and smoke convection;32 flow past a cylinder;33–35 the Sod shock test and transient wake of a ship;36 air pollution in an urban environment;37–39 parametrized time-dependent problems;40 natural convection problems in porous media;41 the inviscid shallow water equations;42 supercritical flow around an airfoil;43 cardiac electrophysiology;44 multiphase flow examples;45 the Kuramoto–Sivashinsky equation;46 the parametrized 2D heat equation;47 and a collapsing water column.48 Of these papers, those which compare autoencoder networks with POD generally conclude that autoencoders can outperform POD,31,33 especially when small numbers of reduced variables are used.41–44 However, when large enough numbers of POD basis functions are retained, POD can yield good results, sometimes outperforming the autoencoders.

A recent dimensionality reduction method that combines POD/SVD and an autoencoder (SVD-AE) has been introduced independently by a number of researchers and demonstrated on: vortex-induced vibrations of a flexible offshore riser at a high Reynolds number49 (described as hybrid ROM); the generalized eigenvalue problems associated with neutron diffusion50 (described as an SVD autoencoder); Marsigli flow51 (described as nonlinear POD); and cardiac electrophysiology52 (described as POD-enhanced deep learning ROM). This method has at least three advantages: (i) by training the autoencoder with POD coefficients, it is of no consequence whether the snapshots are associated with a structured or unstructured mesh; (ii) an initial reduction of the number of variables by applying POD means that the autoencoder will have fewer trainable parameters and therefore be easier to train; and (iii) autoencoders in general can find the minimum number of latent variables needed in the reduced representation. For example, the solution of flow past a cylinder evolves on a one-dimensional manifold parametrized by time; therefore, only one latent variable is needed to capture the physics of this solution.26,42,44

The Adversarial Autoencoder53 (AAE) is a generative autoencoder sharing similarities with the variational autoencoder (VAE) and the generative adversarial network (GAN). In addition to an encoder and decoder, the AAE has a discriminator network linked to its bottleneck layer. The purpose of the discriminator and associated adversarial training is to make the posterior distribution of the latent representation close to an arbitrary prior distribution thereby reducing the likelihood that the latent space will have “gaps.” Therefore, any set of latent variables should be associated, through the decoder, with a visually realistic output. Not many examples exist of using an AAE for dimensionality reduction in fluid dynamics problems; however, it has been applied to model air pollution in an urban environment.38,39 In this work, we compare POD, CAE, AAE, and the SVD-AE on flow past a cylinder and multiphase flow in a pipe, to assess their suitability as dimension reduction methods.

Once the low-dimensional space has been found, the snapshots are projected onto this space, and the resulting reduced variables (either POD coefficients or latent variables of an autoencoder) can be used to train a neural network, which attempts to learn the evolution of the reduced variables in time (and/or their dependence on a set of parameters). From the references in this paper alone, many examples exist of feed-forward and recurrent neural networks having been used for the purpose of learning the evolution of time series data, for example, by Multi-layer perceptrons,12,13,40,41,43,54–60 Gaussian Process Regression,11,45,61–63 and Long Short-Term Memory networks.31,32,34,35,38,51,64 When using these types of neural networks to predict in time, if the reduced variables stray outside of the range of values encountered during training, the neural network can produce unphysical, divergent results.39,51,52,64,65 To combat this, a number of methods have been proposed. Physics-informed neural networks55 aim to constrain the predictions of the neural network to satisfy physical laws, such as conservation of mass or momentum.59,60 A method introduced by Refs. 56 and 57 aims to learn the mapping from the reduced variables at a particular time level to their time derivative, rather than the reduced values themselves at a future time level. This enables the use of variable time steps when needed, to control the accuracy of the solution in time. A third way of tackling this issue, which is explored in this paper, is to use adversarial networks, renowned for their ability to give realistic predictions.

Adversarial networks, such as the GAN and the AAE, aim to learn a distribution to which the training data could belong, in addition to a mapping between solutions at successive time levels. GANs and AAEs are similar in that they both use a discriminator network and deploy adversarial training, and both require some modification so that they can make predictions in time. The aim of these networks is to generate images (or in this case, reduced variables associated with fluid flows) that are as realistic as possible. To date, there are not many examples of the use of GANs or AAEs for prediction in CFD modeling. Two exceptions are Ref. 66, which combines a VAE and GAN to model flow past a cylinder and the collapse of a water dam, and Ref. 67, which uses a GAN to predict the reduced variables of an epidemiological model that modeled the spread of a virus through a small, idealized town. This particular model performed well when compared with an LSTM.68 Conditional GANs (CGAN) have similar properties to the GAN and AAE, and they have been used successfully to model forward and inverse problems for coupled hydro-mechanical processes in heterogeneous porous media;69 a flooding event in Hokkaido, Japan, after the 1993 earthquake;70 and a flooding event in Denmark.71 However, the closeness of the CGAN's distribution to that of the training data is compromised by the "condition" or constraint. GANs are known to be difficult to train, so, in this paper, we use an Adversarial Autoencoder, albeit modified, so that it can predict the evolution of the reduced variables in time.

Combining domain decomposition techniques with ROM has been done by a number of researchers. An early example72 presents a method for projection-based ROMs in which the POD basis functions are restricted to the nodes of each subdomain of the partitioned domain. A similar approach has also been developed for non-intrusive ROMs,61 which was later extended to partition the domain by minimizing communication between subdomains,62 effectively isolating, as much as possible, the physical complexities between subdomains. As the domain of our main test case (multiphase flow in a pipe) is long and thin, with a similar amount of resolution and complexity of behavior occurring in partitions of equal length in the axial direction, here, we simply split the domain into subdomains of equal length in the axial direction (see Fig. 1). The neural network learns how to predict the solution for a given subdomain, and the solution throughout the entire pipe is built up by using the iteration-by-subdomain approach.73 The domain decomposition approach we use has some similarities to the method employed in Ref. 74, which decomposes a domain into patches to make training a neural network more tractable. However, our motivation for using domain decomposition is to make predictions for domains that are significantly larger than those used in the training process. When modeling a pipe that is longer than the pipe used to generate the training data, it is likely that the simulation will need to be run for longer than the original model, as the fluid will take longer to reach the end of the pipe. This means that boundary conditions for the longer pipe must be generated somehow, rather than relying on boundary conditions from the original model. Generating suitable boundary conditions for turbulent CFD problems is, in general, an open area of research. Synthetic-eddy methods75 are often used at the inflow; these attempt to match specified mean flows and Reynolds stresses at the inlet. Recently, researchers have explored using GANs to generate boundary conditions with success.76,77 We present three methods of generating boundary conditions for our particular application and also discuss alternative methods in Conclusions and Further Work.

FIG. 1. A schematic diagram of a pipe split into eight subdomains of equal length in the axial direction.

The test case of multiphase flow in a pipe is particularly challenging for the HFM due to difficulties such as the space-time evolution of multiphase flow patterns (stratified, bubbly, slug, annular); the turbulent phase-to-phase interactions; and the drag, inertia, and wake effects, all compounded by the high aspect ratio (length to diameter) of the domain of a typical pipe. Many researchers address this by developing one-dimensional (flow-regime-dependent or -independent) models for long pipes.78–80 Nevertheless, such models contain some uncertainties, as they rely on several closure or empirical expressions81 calibrated against limited experimental data82 to describe, for example, the 3D space-time variations of interfacial frictional forces with phase distributions (the bubble/drop entrainment, the bubble-induced turbulence, and the phase interfacial interactions), which depend on the flow pattern, flow direction, and pipe physical properties (inclination, diameter, and length). Significant progress has been made in 3D modeling83 by using direct numerical simulations84 (DNS) and front-tracking methods.85 To generate the solutions of the HFM, we employ a method based on Large Eddy Simulation, which advects a volume fraction field86 and uses mesh adaptivity to provide high resolution where it is most needed. Although compromising on resolving features on the smaller temporal and spatial scales, this approach is computationally more feasible than DNS and has the advantage of being conservative, unlike front-tracking methods.

In this paper, we propose a non-intrusive reduced-order model (AI-DDNIROM) capable of making predictions for a domain to which it has not been exposed during training. Several autoencoders are explored for the dimensionality reduction stage, as there is evidence that they are more efficient than POD for advection-dominated problems such as those tackled here. The dimensionality reduction methods are applied to 2D flow past a cylinder and 3D multiphase slug flow in a horizontal pipe. For the prediction stage, an adversarial network is chosen (based on a modified adversarial autoencoder), as these types of networks are believed to generate latent spaces with no gaps53 and are thus likely to produce more realistic results than feed-forward or recurrent neural networks without adversarial layers. A domain decomposition approach is applied, which, with an iteration-by-subdomain technique, enables predictions to be made for multiphase slug flow in a significantly longer pipe than was used when training the networks. The predictions of the adversarial network are taken from a probability distribution learned during training. Any point within the Gaussian distribution of the latent variables should therefore result in a realistic solution. Statistics from the HFM solutions and predictions of the non-intrusive reduced-order models for the original length pipe and the longer pipe are compared. The contributions of this work are: (i) a method that can make predictions for a domain significantly larger than that used to train the reduced-order models; (ii) the exploitation of an adversarial network to make realistic predictions, together with a comparison of statistics of the reduced-order models against those of the original CFD model; and (iii) the investigation of a number of methods to generate boundary conditions for the larger domain.

The outline of the remainder of the paper is as follows. Section II describes the methods used in constructing the reduced-order models and the domain decomposition approach, which is exploited in order to be able to make predictions for a longer domain than that used in training. Section III presents the results for the dimensionality reduction methods applied to flow past a cylinder and multiphase flow in a pipe and then shows the predictions of the reduced-order model of multiphase flow in a pipe, for both the original domain and the extended domain. Conclusions are drawn, and future work is described, in the final section. Details of the hyperparameter optimization process and the network architectures are given in the Appendix.

The offline stage of a non-intrusive reduced-order model can be split into three steps: (i) generating the snapshots by solving a set of discretized governing equations (the high-resolution or high-fidelity model); (ii) reducing the dimensionality of the discretized system; and (iii) teaching a neural network to predict the evolution of the snapshots in reduced space. The online stage consists of two steps: (i) predicting values of the reduced variables with the neural network for an unseen state and (ii) mapping back to the physical space of the high-resolution model. In this section, the methods used in this investigation for dimensionality reduction (Sec. II B) and prediction (Sec. II C) are described. The final section (Sec. II D) outlines an approach for making predictions for a larger domain having used a smaller domain to generate the training data.

Described here are four techniques for dimensionality reduction, which are used in this investigation, namely Proper Orthogonal Decomposition, a convolutional autoencoder, an adversarial autoencoder, and a hybrid SVD autoencoder.

1. Proper orthogonal decomposition

Proper Orthogonal Decomposition is a commonly used technique for dimensionality reduction when constructing reduced-order models. POD minimizes the error incurred when reconstructing a set of solutions (snapshots) from their projection onto a number of basis functions, which define a low-dimensional space. To minimize this reconstruction error, the basis functions must be chosen as the left singular vectors of the singular value decomposition (SVD) of the matrix of snapshots. Suppose the snapshot matrix is represented by S, whose columns are solutions at different instances in time (i.e., the snapshots) and whose rows correspond to nodal values of solution variables; then S can be decomposed as

S = U \Sigma V^T,    (1)

where the matrix U contains the left singular vectors, V the right singular vectors, and Σ contains the singular values on its diagonal, zeros elsewhere. If POD is well suited to the problem, many of the singular values will be close to zero, and the corresponding columns of U can be discarded. The POD basis functions to be retained are stored in a matrix denoted by R. The POD coefficients of a snapshot can be found by pre-multiplying the snapshot by R^T, and the reconstruction of a snapshot can be found by pre-multiplying the POD coefficients of the snapshot by R:

(\mathbf{u}_{\text{recon}})^k = R R^T \mathbf{u}^k,    (2)

where u^k is the kth snapshot and (u_recon)^k is its reconstruction. Hence, the reconstruction error over a set of N snapshots {u^1, u^2, …, u^N} can be written as

\frac{1}{N} \sum_{k=1}^{N} \left( \mathbf{u}^k - (\mathbf{u}_{\text{recon}})^k \right) \cdot \left( \mathbf{u}^k - (\mathbf{u}_{\text{recon}})^k \right).    (3)

Often the mean is subtracted from the snapshots before applying singular value decomposition; however, in this study, doing so was found to have little effect. In the first test case, 2D flow past a cylinder, two velocity components are included in the snapshot matrix,

\mathbf{u}^k = \left( u_1^k, u_2^k, \ldots, u_M^k, \; v_1^k, v_2^k, \ldots, v_M^k \right)^T,    (4)

where u_i and v_i represent the x and y components of velocity, respectively, at the ith node; k denotes a particular snapshot; and M is the number of nodes. For the 3D multiphase flow test case, the snapshots comprise velocities and volume fractions, so a single snapshot has the form

\mathbf{u}^k = \left( u_1^k, \ldots, u_M^k, \; v_1^k, \ldots, v_M^k, \; w_1^k, \ldots, w_M^k, \; \alpha_1^k, \ldots, \alpha_M^k \right)^T,    (5)

where w_i^k and α_i^k represent the z component of velocity and the volume fraction, respectively, at the ith node of the kth snapshot. In this case, the velocity components are scaled to be in the range [−1, 1] so that their magnitudes are similar to those of the volume fractions.
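As a concrete illustration of Eqs. (1)–(3), the POD compression and reconstruction can be sketched in a few lines of NumPy (the array sizes below are illustrative, not those of the test cases):

import numpy as np

def pod_basis(S, n_pod):
    """Return the first n_pod left singular vectors of the snapshot matrix S,
    whose columns are snapshots and whose rows are nodal values (Eq. (1))."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return U[:, :n_pod]                          # the matrix R of retained basis functions

def reconstruction_error(S, R):
    """Mean square reconstruction error of Eq. (3) over the columns of S."""
    S_recon = R @ (R.T @ S)                      # Eq. (2) applied to every snapshot
    return np.mean(np.sum((S - S_recon) ** 2, axis=0))

# example with random data standing in for snapshots (M = 800 nodal values, N = 200 snapshots)
S = np.random.rand(800, 200)
R = pod_basis(S, n_pod=10)                       # retain 10 POD basis functions
print(reconstruction_error(S, R))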

2. Convolutional autoencoder

An autoencoder is a particular type of feed-forward network that attempts to learn the identity map.87 When used for compression, these networks have a central or bottleneck layer that has fewer neurons than the input and output layers, thereby forcing the autoencoder to learn a compressed representation of the training data. An autoencoder consists of an encoder, which compresses the data to the latent variables of the bottleneck layer, and a decoder, which decompresses or reconstructs the latent variables to an output layer of the same dimension as the input layer. The latent variables span what is referred to as the latent space. The convolutional autoencoder typically uses two types of layers, applied in sequence, to compress the input data in the encoder: convolutional layers and pooling layers. These layers both apply operations to an input grid, resulting in an output grid (or feature map) of reduced size. The inverse operations are then used in succession in a decoder, resulting in a reconstructed grid of the same shape as the input. The encoder-decoder pair can be trained as any other neural network: by passing training data through the network and updating the weights associated with the layers according to a loss function such as the mean square error. If u^k represents the kth sample in the dataset of N samples and (u_recon)^k represents the corresponding output of the autoencoder, which can be written as

(\mathbf{u}_{\text{recon}})^k = f_{\text{ae}}(\mathbf{u}^k),    (6)

then the mean square error can be expressed as in Eq. (3).
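A minimal convolutional autoencoder for the 20 × 20, two-channel velocity grids of the first test case could be written in PyTorch as follows (the layer sizes here are illustrative; the architectures actually used are given in the Appendix):

import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, n_latent=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 8, 3, stride=2, padding=1),    # 2 x 20 x 20 -> 8 x 10 x 10
            nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),   # -> 16 x 5 x 5
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, n_latent),            # bottleneck layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 16 * 5 * 5),
            nn.ReLU(),
            nn.Unflatten(1, (16, 5, 5)),
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 2, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, u):                               # Eq. (6): u_recon = f_ae(u)
        return self.decoder(self.encoder(u))

model = ConvAutoencoder()
u = torch.randn(4, 2, 20, 20)                           # a mini-batch of velocity grids
loss = nn.functional.mse_loss(model(u), u)              # reconstruction loss, as in Eq. (3)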

3. Adversarial autoencoder

The adversarial autoencoder53 is a recently developed neural network that uses an adversarial strategy to force the latent space to follow a (given) prior distribution (P_prior). Its encoder-decoder network is the same as that of a standard autoencoder; however, in addition, the adversarial autoencoder includes a discriminator network, which is trained to distinguish between true samples (from the prior) and fake samples (from the latent space). There are therefore three separate training steps per mini-batch. In the first step, the reconstruction error of the inputs is minimized (as is done in a standard autoencoder). In the second and third steps, the adversarial training takes place. In the second step, the discriminator network is trained on latent variables sampled from the prior distribution with label 1 and latent variables generated by the encoder with label 0. In the third step, the encoder is trained to fool the discriminator, that is, it tries to make the discriminator produce an output of 1 from its generated latent vectors. Note that this is the role of the generator in a GAN and, as such, the encoder (G) and discriminator (D) play the minimax game described by Eq. (7). This equation is the implicit loss function for the adversarial training:

\min_G \max_D V(D,G) = \mathbb{E}_{\mathbf{z} \sim P_{\text{prior}}}[\log D(\mathbf{z})] + \mathbb{E}_{\mathbf{u} \sim P_{\text{data}}}[\log(1 - D(G(\mathbf{u})))],    (7)

where V is the value function that G and D play the minimax game over, z ∼ P_prior is a sample from the desired distribution, and u ∼ P_data is a sample input grid. There are strong similarities between the adversarial autoencoder, GANs, and Variational Autoencoders (VAEs). All three types of networks set out to obtain better generalization than non-adversarial networks by attempting to obtain a smooth latent space with no gaps. Results in Ref. 53 show that the AAE performs better at this task than the VAE on the MNIST digits. Imposing a prior distribution upon the variables of the latent space ensures that any set of latent variables, when passed through the decoder, should have a realistic output.53
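The three training steps per mini-batch can be sketched as below, assuming encoder, decoder, and discriminator are suitable torch.nn modules (the discriminator ending in a sigmoid) with optimizers opt_ae, opt_d, and opt_g over the appropriate parameters; this is a schematic of Eq. (7), not the exact implementation used here:

import torch
import torch.nn.functional as F

def aae_train_step(encoder, decoder, discriminator, opt_ae, opt_d, opt_g, u):
    # step 1: minimize the reconstruction error (standard autoencoder update)
    opt_ae.zero_grad()
    recon_loss = F.mse_loss(decoder(encoder(u)), u)
    recon_loss.backward()
    opt_ae.step()

    # step 2: train the discriminator: prior samples labeled 1, encoded samples labeled 0
    opt_d.zero_grad()
    z_fake = encoder(u).detach()
    z_prior = torch.randn_like(z_fake)                  # sample from a Gaussian prior
    d_loss = -(torch.log(discriminator(z_prior)).mean()
               + torch.log(1.0 - discriminator(z_fake)).mean())
    d_loss.backward()
    opt_d.step()

    # step 3: train the encoder (the generator G) to fool the discriminator;
    # the non-saturating form -log D(G(u)) stands in for minimizing log(1 - D(G(u)))
    opt_g.zero_grad()
    g_loss = -torch.log(discriminator(encoder(u))).mean()
    g_loss.backward()
    opt_g.step()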

4. SVD autoencoder

As the name suggests, the SVD autoencoder makes use of two strategies. Initially, an SVD is applied to the data, resulting in POD coefficients that are subsequently used to train an autoencoder, which applies a second level of compression. Once trained, the latent variables of the SVD autoencoder can be written as

\mathbf{z}^k = f_{\text{enc}}(R^T \mathbf{u}^k),    (8)

where fenc is the encoder, R represents the POD basis functions, uk is the kth snapshot, and zk are the latent variables. For reconstruction, the inverse of this process is then employed, whereby a trained decoder first decompresses the latent space variables to POD coefficients, after which these POD coefficients are reconstructed to the original space of the high-fidelity model. The reconstruction can be written as

(\mathbf{u}_{\text{recon}})^k = R \, f_{\text{dec}}(f_{\text{enc}}(R^T \mathbf{u}^k)) \equiv R \, f_{\text{ae}}(R^T \mathbf{u}^k),    (9)

where f_dec is the decoder, f_ae is the autoencoder, and (u_recon)^k is the reconstruction of the kth snapshot. This network could be approximated by adding a linear layer to the autoencoder after the input and another before the output, and dispensing with the SVD; however, such a network has been found to be harder to train. Here, we take advantage of the efficiency of the SVD and use it in conjunction with an autoencoder.
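With these definitions, compression and reconstruction with the SVD autoencoder amount to composing the POD projection with the autoencoder; a minimal sketch, reusing the basis matrix R from Sec. II B 1 and any trained encoder/decoder pair (NumPy arrays and Python callables assumed):

def svd_ae_compress(u, R, f_enc):
    """Eq. (8): project the snapshot onto the POD basis, then encode to the latent variables."""
    return f_enc(R.T @ u)

def svd_ae_reconstruct(z, R, f_dec):
    """Eq. (9): decode to POD coefficients, then map back to the space of the high-fidelity model."""
    return R @ f_dec(z)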

In this study, when predicting, we wish to approximate a set of reduced variables (either POD coefficients or latent variables of an autoencoder) at a future time step. The adversarial autoencoder is re-purposed for this task in an attempt to capitalize on the fact that this network should produce realistic results (providing that the training dataset is representative of the behavior that will be modeled). So that it can predict time series data, three modifications are made to the original adversarial autoencoder network:53 (i) the bottleneck layer no longer has fewer variables than the input (to prevent further compression); (ii) the output is the network's approximation of the reduced variables at a future time level; and (iii) the input is the reduced variables at the preceding time level as well as the reduced variables of the neighboring subdomains at the future time level (as we adopt a domain decomposition approach, which is described in the next paragraph). The modified adversarial autoencoder is trained by minimizing the error between its output and the reduced variables at the future time level, as well as by incorporating the adversarial training strategy described in Sec. II B 3. To avoid confusion, we refer to this network as a predictive adversarial network because, with different inputs and outputs, it is no longer an autoencoder.

In this study, we adopt a domain decomposition approach to facilitate predicting the solution for larger domains than that used in training (see Sec. II D). Given the aspect ratio of the pipe, we split the domain into subdomains of equal length in the axial direction, see Fig. 1. To train the predictive adversarial network, reduced variables are obtained by interpolating the high-fidelity solutions or snapshots onto a structured grid in each subdomain in turn and compressing the interpolated snapshots from all the subdomains using POD or an autoencoder. The interpolation is linear and achieved by using the finite element basis functions. The predictive adversarial network is taught to predict the reduced variables in a particular subdomain at a future time level given the reduced variables in the neighboring subdomains at the future time level and the reduced variables in the subdomain at the preceding time level. Using training data for all the subdomains and those time levels that are in the training dataset, the predictive adversarial network learns the mapping f, written as

\mathbf{z}_i^k = f(\mathbf{z}_{i-1}^k, \mathbf{z}_i^{k-1}, \mathbf{z}_{i+1}^k) \quad \forall i,    (10)

where z_i^k represents the reduced variables in subdomain i at the future time level k; z_i^{k−1} represents the same but at the preceding time level; and z_{i−1}^k and z_{i+1}^k denote the reduced variables at the future time level for the subdomains to the left and right of subdomain i. When predicting for one time level, all subdomains are iterated over (the iteration-by-subdomain method) until convergence is reached over the whole domain. This is done by sweeping from left to right (increasing i) and then sweeping from right to left (decreasing i). During the iteration process, z_i^k of Eq. (10) is continually updated. As we consider incompressible flows in this study, the solution method has to be implicit in order to allow information to travel throughout the domain within one time level. This sweeping from left to right and back again allows information to pass from the leftmost to the rightmost subdomains and vice versa.
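A sketch of the generator part of the predictive adversarial network, which realizes the mapping f of Eq. (10); a plain multilayer perceptron stands in for the actual architecture, which is given in the Appendix:

import torch
import torch.nn as nn

class PredictiveNet(nn.Module):
    """Maps (z_{i-1}^k, z_i^{k-1}, z_{i+1}^k) to z_i^k for any subdomain i (Eq. (10))."""

    def __init__(self, n_latent=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * n_latent, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_latent),
        )

    def forward(self, z_left, z_prev, z_right):
        # concatenate the three sets of reduced variables along the feature dimension
        return self.net(torch.cat([z_left, z_prev, z_right], dim=-1))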

For each new time level, an initial solution is required to start the iteration process [for z_{i−1}^k and z_{i+1}^k in Eq. (10)]. The solution at the previous, converged time level could be used (z_{i±1}^k = z_{i±1}^{k−1}); however, using linear extrapolation based on two previous time levels showed better convergence:

\mathbf{z}_i^k = \mathbf{z}_i^{k-1} + (\mathbf{z}_i^{k-1} - \mathbf{z}_i^{k-2}).    (11)

The procedure for sweeping over the subdomains is given in Algorithm 1, in which f represents the predictive adversarial network, N_time is the number of time levels, N_sweep is the number of sweeps carried out over the whole domain, and N_sub is the total number of subdomains. Two of these subdomains are treated as boundary conditions and are fully imposed throughout the duration of the prediction, so at line 18 of Algorithm 1, only the subdomains where a solution is sought are iterated over. In this study, a fixed number of sweep iterations was used, as this gave good results; however, a convergence criterion could easily be implemented if desired.

Algorithm 1

An algorithm for finding the solution for the reduced variables in a subdomain and sweeping over all the subdomains to obtain a converged solution over the whole domain.

1:  !! set initial conditions for each subdomain i
2:  z_i^0  ∀ i
3:  for time level k = 1, 2, …, N_time do
4:      !! set boundary conditions
5:      z_1^k, z_{N_sub}^k
6:      !! estimate the solution at the future time level k for all the subdomains
7:      if k > 1 then
8:          for subdomain i = 2, 3, …, N_sub − 1 do
9:              z_i^k = z_i^{k−1} + (z_i^{k−1} − z_i^{k−2})
10:         end for
11:     else
12:         for subdomain i = 2, 3, …, N_sub − 1 do
13:             z_i^k = z_i^{k−1}
14:         end for
15:     end if
16:     !! sweep over subdomains
17:     for sweep iteration j = 1, 2, …, N_sweep do
18:         for subdomain i = 2, 3, …, N_sub − 2, N_sub − 1, N_sub − 2, …, 4, 3 do
19:             !! calculate the latent variables of subdomain i at time level k
20:             z_i^k = f(z_{i−1}^k, z_i^{k−1}, z_{i+1}^k)
21:         end for
22:     end for
23: end for
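A direct transcription of Algorithm 1 into Python might read as follows (a sketch: z is a preallocated NumPy array of reduced variables indexed as z[i][k], with the two boundary subdomains already filled for all time levels, and f is the trained predictive adversarial network; indices are zero-based):

def sweep_solve(f, z, n_time, n_sweep, n_sub):
    """Iteration-by-subdomain time stepping over subdomains 1..n_sub-2 (zero-based);
    subdomains 0 and n_sub-1 act as boundary conditions."""
    for k in range(1, n_time + 1):
        # estimate the solution at the future time level for the interior subdomains
        for i in range(1, n_sub - 1):
            if k > 1:
                z[i][k] = z[i][k - 1] + (z[i][k - 1] - z[i][k - 2])   # Eq. (11)
            else:
                z[i][k] = z[i][k - 1]
        # sweep left to right, then right to left (line 18 of Algorithm 1)
        order = list(range(1, n_sub - 1)) + list(range(n_sub - 3, 1, -1))
        for _ in range(n_sweep):
            for i in order:
                z[i][k] = f(z[i - 1][k], z[i][k - 1], z[i + 1][k])    # Eq. (10)
    return z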

In this study, we investigate the ability of a non-intrusive reduced-order model, in combination with a domain decomposition approach, to make predictions for domains larger than that used in the training process. We test this approach on the dataset generated from multiphase flow in a pipe. With sufficient initial conditions and boundary conditions, exactly the same procedure can be used to make predictions for the extended domain as is used to make predictions for the domain used in training. That is, the solution is obtained for a single subdomain, whilst sweeping over all subdomains until convergence is reached (outlined in Sec. II C).

As the length of the pipe of interest (“extended pipe”) is longer than the pipe used in training, initial conditions must be generated throughout the extended pipe. The method used here is to specify initial conditions throughout the extended pipe by repeating initial conditions from the shorter pipe. An alternative would be to find the reduced variables for a steady state (for example, water in the bottom half of the pipe and air in the top half) and use these values in every subdomain in the extended pipe. We choose the former method to reduce the time taken for instabilities and slugs to develop.

For the extended pipe, boundary conditions (effectively the reduced variables in an entire subdomain) can be imposed using the data already available from the HFM. However, as the length of the pipe is longer than the pipe used in training, we wish to make predictions over a longer period than that over which snapshots were collected from the HFM. In order to obtain boundary conditions for the extended pipe, several methods are explored for the inlet or upstream boundary. Of those investigated, the three methods listed below performed better than the others.

  • (i)

    Cycling through slug formation: a slug is found in the shorter pipe, and the velocity and volume fraction fields associated with the advection of the slug through a subdomain are looped over in the upstream boundary subdomain.

  • (ii)

    Perturbed instability: the volume fraction field associated with an instability from the shorter pipe is perturbed with Gaussian noise. This is then imposed on the boundary subdomain. The associated velocity field is used unperturbed.

  • (iii)

    Original boundaries repeated: solutions from the shorter pipe are cycled through in the boundary subdomain.

At the downstream boundary, reduced variables corresponding to a steady state solution (water in the bottom half of the pipe and air in the top half) were imposed. Specific details of the boundary conditions are given in the results section. These three approaches are somewhat heuristic. As we are using information that the model will not have seen and that does not accurately satisfy the governing equations, we exploit the ability of the predictive adversarial network to produce realistic results, as it should have learnt appropriate spatial and temporal covariance information during training. An alternative method for generating boundary conditions is discussed in the section on conclusions and future work.

Two test cases are used to demonstrate the dimensionality reduction methods proposed in this paper. The first is flow past a cylinder in 2D; the second is 3D multiphase flow in a pipe. The second test case is also used to demonstrate the prediction capabilities of the predictive adversarial network for both the domain that was used in training and a domain that is significantly longer than the one used in training. The code used to produce the high-fidelity model results, IC-FERST, has been validated with experimental results for a 2D solitary wave and 3D collapsing water column,88 and with the Rayleigh–Taylor benchmark problem in 2D and 3D.86 The test cases used in this paper are now described.

1. Flow past a cylinder

The following partial differential equations describe the motion of an incompressible fluid:

\nabla \cdot \mathbf{u} = 0,    (12)
\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) - \nabla \cdot \boldsymbol{\tau} = -\nabla p,    (13)

where ρ is the density (assumed constant), u is the velocity vector, τ contains the viscous terms associated with an isotropic Newtonian fluid, p represents the non-hydrostatic pressure, t is time, and the gradient operator is defined as

\nabla = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y} \right)^T.    (14)

When solving these equations, a linear triangular element is used with a discontinuous Galerkin discretization for the velocities and a continuous Galerkin representation of the pressure (the P1DG-P1CV element). As well as satisfying the Ladyzhenskaya–Babuška–Brezzi condition, this element type is stable and accurate even on highly distorted elements, such as those which may occur along the interface of the two fluids.88 To discretize in time, the Crank–Nicolson scheme is used. The resulting velocity solutions will satisfy the discretized continuity equation. As the velocity field fully describes incompressible flow, only the velocity variables are required by the reduced-order models. For more details on how this system of equations is discretized and solved, the reader is referred to Ref. 89. For the flow past a cylinder test case, the domain measures 2.2 m (horizontal axis) by 0.41 m (vertical axis), and the center of the cylinder is located at 0.2 m from the leftmost boundary on the horizontal centerline of the domain. Free slip and no normal flow boundary conditions are applied on the upper and lower walls; no slip is applied on the surface of the cylinder. Zero shear and zero normal stress are applied at the outlet (the right-hand boundary of the domain). In the following results, speeds and velocities are given in meters per second and time is in seconds. A Reynolds number of 3900 was used:

Re = \frac{\rho U L}{\mu} = 3900,    (15)

where U is the constant inlet velocity, U = 0.039 m s⁻¹; the density has value ρ = 1000 kg m⁻³; and the diameter of the cylinder is L = 0.1 m. Thus, the dynamic viscosity is μ = 10⁻³ kg m⁻¹ s⁻¹. Formed from solutions of this problem, the dataset consists of 2000 snapshots with a time interval of 0.25 s. (An adaptive time step was used to solve the equations; however, the solutions were saved every 0.25 s to generate the snapshots.)

2. Multiphase flow in a pipe

Multiphase slug flow in a horizontal pipe is used as the second test case. We use an interface capturing method, in which we track the interface by solving an advection equation for the volume fraction of the liquid phase. Let α be the volume fraction of the liquid (water in this case), which means that the volume fraction of the gas (air) is (1 − α). The conservation of mass for incompressible fluids can therefore be written as

\frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha \mathbf{u}) = 0,    (16)
\nabla \cdot \mathbf{u} = 0,    (17)

where t represents time and u represents velocity. Assuming incompressible viscous fluids, conservation of momentum yields the following:

\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \nabla \cdot \left( \mu \left( \nabla \mathbf{u} + \nabla^T \mathbf{u} \right) \right) + \rho \mathbf{g} + \mathbf{F}_\sigma,    (18)

where p represents pressure, g is the gravitational acceleration vector (0, 0, −9.8)^T, and F_σ is the force representing surface tension. The bulk density and bulk viscosity are defined as

\rho = \alpha \rho_{\text{water}} + (1 - \alpha) \rho_{\text{air}},    (19)
\mu = \alpha \mu_{\text{water}} + (1 - \alpha) \mu_{\text{air}},    (20)

respectively, where ρ_water and ρ_air are the densities of water and air, respectively, and μ_water and μ_air are the dynamic viscosities of water and air, respectively. Again, the resulting velocity solutions will satisfy the discretized continuity equation. For more details of how the governing equations are discretized and solved, see Ref. 86, including information on the unstructured adaptive meshing process, the adaptive time stepping, and the compressive advection technique used to keep the interface between the fluids sharp. The densities of air and water are taken as 1.125 and 1000 kg m⁻³, respectively, and the viscosities are 1.81×10⁻⁵ and 9.892×10⁻⁴ kg m⁻¹ s⁻¹, respectively. The modeled pipe has dimensions of 10 m in length and a radius of 0.039 m. Boundary conditions of no normal flow and no slip are weakly enforced on the pipe wall, and any incoming momentum is given a value of zero. The outlet of the pipe has a non-hydrostatic pressure of zero, and again, any incoming velocities are set to zero and incoming volume fraction is taken to be water. Initially, the pipe is filled entirely with water, which flows along the axial direction at a velocity of 4.162 m s⁻¹ in the top half of the pipe and 2.082 m s⁻¹ in the bottom half. After the first time step, air starts flowing in through the top half of the inlet at a velocity of 4.162 m s⁻¹, a scenario which can lead to the formation of slugs. These values of velocity correspond to superficial velocities of air and water of 2.081 and 1.041 m s⁻¹, respectively. The dataset used for training the reduced-order models consists of solutions at 800 time levels with a fixed time interval of 0.01 s. The reduced-order models use the velocity fields in three directions and the volume fraction field.

Four methods for dimensionality reduction (or compression) are compared, namely POD, CAE, AAE, and SVD-AE. An extensive hyperparameter optimization was performed to find the optimal set of values for the hyperparameters of each autoencoder. Details of the hyperparameters that were varied, the ranges over which they were varied, and the optimal values and architectures that were obtained as a result can be found in Tables IV–VI. Ten POD basis functions were retained for the compression based on POD, and ten latent variables were used in the bottleneck layers of the autoencoders. For the SVD-AE, one hundred POD coefficients were retained, which were then compressed to ten latent variables by an autoencoder. The top part (shaded blue) of Fig. 2 shows a schematic diagram of how the networks used for dimensionality reduction are trained for the flow past a cylinder test case.

FIG. 2. Upper part (shaded blue): the training of the autoencoders for the dimensionality reduction of flow past a cylinder. Lower part (shaded orange): how the predictive adversarial network was trained.

1. Flow past a cylinder

The CFD solutions were saved every 0.25 s for 500 s resulting in 2000 snapshots. The domain was split into four subdomains, each spanning the entire height of the domain and a quarter of its length. These were discretized with 20 × 20 structured grids. The velocity solutions from the unstructured mesh were linearly interpolated onto the four grids using the finite element basis functions, resulting in a dataset of 8000 samples. For POD, the columns of the snapshots matrix consisted of values of both velocity components, and for the autoencoders, the two velocity components were fed into two separate channels. The training data (which also include the validation data) were formed by randomly selecting 7200 samples from the full dataset. The remaining 800 samples were used as the test dataset (i.e., unseen data).
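For concreteness, the assembly and random split of these samples can be sketched as follows (the interpolation with the finite element basis functions is not shown, and the seed is illustrative):

import numpy as np

rng = np.random.default_rng(seed=0)

# 2000 snapshots x 4 subdomains, each a 20 x 20 grid with 2 velocity channels,
# filled by interpolating the unstructured-mesh solutions onto the grids
samples = np.zeros((8000, 2, 20, 20), dtype=np.float32)

idx = rng.permutation(len(samples))
train = samples[idx[:7200]]        # training (and validation) data
test = samples[idx[7200:]]         # 800 unseen samples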

To test the methods, the solutions are compressed and reconstructed using Eq. (2) for POD, Eq. (6) for the convolutional and adversarial autoencoders, and Eq. (9) for the SVD autoencoder. The error in the reconstruction, Eq. (3), is calculated using the test dataset. Figure 3 shows the effect of the four compression methods (POD, CAE, AAE, and SVD-AE) on a snapshot taken at the 200th time level compared against the original snapshot. The pointwise errors in the velocity magnitude are shown on the right. It can be seen that all four methods (including POD) perform well in their reconstruction of flow past a cylinder. The pointwise errors indicate that, for this snapshot, the convolutional autoencoder gives the best results, followed by the adversarial autoencoder, the SVD-autoencoder, and finally POD. Table I shows the mean of the square reconstruction errors calculated over the test dataset for the flow past a cylinder test case. As seen for the single snapshot in Fig. 3, every compression method that involves an autoencoder outperforms POD.

FIG. 3. Velocity magnitude at a time of 50 s for the flow past a cylinder test case. From top to bottom (left): the original data; the reconstruction by POD; by the convolutional autoencoder; by the adversarial autoencoder; and by the SVD-Autoencoder. The corresponding pointwise errors of the reconstructions are also shown (right). Each of the reconstructions was made from 10 POD coefficients or latent variables.
TABLE I. Reconstruction error averaged over the test dataset for flow past a cylinder using POD and several autoencoders. Each of the reconstructions was made from 10 POD coefficients or latent variables.

POD         Convolutional AE   Adversarial AE   SVD-AE
111×10⁻⁴    14.4×10⁻⁴          62.9×10⁻⁴        25.1×10⁻⁴

2. Multiphase flow in a pipe

The domain is split into ten subdomains [each spanning one tenth of the length (x) of the domain, but spanning the entire width (y) and height (z)], which are discretized with 60 × 20 × 20 structured grids. The velocity and volume fraction solutions from the unstructured mesh are interpolated onto these grids over 800 time levels each corresponding to 0.01 s. As before, the finite element basis functions are used to perform the interpolation. The dataset for this test case therefore has a total of 8000 samples (ten subdomains and 800 time levels).

The four compression methods (POD, CAE, AAE, and SVD-AE) are applied to the multiphase flow dataset. For POD, one column of the snapshot matrix consists of nodal values of the three velocity components (each scaled between −1 and 1) and the volume fractions (within the interval [0,1]). For the autoencoders, four channels are used, and scaling is applied to the fields as usual. Initially, ten subdomains were used; however, as the autoencoders were found to have a relatively high error, a further ten subdomains were created that were randomly located within the domain, making a total of 20 subdomains (and 16 000 samples in the dataset). Having more subdomains provided more training data, which probably led to the observed improvement in the results. The autoencoders were trained with 90% of the data chosen at random from the dataset. For details of the hyperparameter optimization and the networks used, see Tables IV–VI in the Appendix.

Figure 4 shows how the autoencoders performed in reconstructing the pipe flow dataset. It is not surprising that they seem to perform less well than for the flow past a cylinder case, given that the compression ratio was 80 for flow past a cylinder, whereas, for pipe flow, it was 9600. (For the former, a 20 × 20 grid with two fields was compressed to ten variables, whereas for the latter, a 60 × 20 × 20 grid with four fields was compressed to ten variables.) Even at this compression ratio, all dimensionality reduction methods seemed able to reconstruct the slug in Fig. 4 to some degree, with the convolutional AE doing this particularly well. For easier visualization, Fig. 4 shows just part of the domain, which includes a slug and also two boundaries between subdomains. The boundary at 2 m can be identified by a slight kink that can be observed particularly well in the reconstructions of the AAE and the SVD-AE. This kink appears to the left of the slug and highlights that, for some models, these boundaries induced additional inaccuracies. This issue could be addressed in future research by allowing the compressive methods to see the solutions of the neighboring subdomains during compression, so that they can explicitly take this boundary into account.

FIG. 4. The volume fractions taken at a time of 1.73 s, spanning the domain between 1.59 and 3.67 m. A cross section along the length through the center of the pipe is shown. From top to bottom (left), the snapshots are from the original pipe flow data; the data reconstructed by POD; by the convolutional autoencoder; by the adversarial autoencoder; and by the SVD-Autoencoder. The pointwise errors of the reconstructions are also shown (right). Each of the reconstructions was made from 10 POD coefficients or latent variables.

Table II shows the reconstruction error over the test data for the dimensionality reduction methods. Here, Eq. (3) was used, where the vectors u^k and (u_recon)^k consist of the scaled velocities and volume fractions. Once again, the convolutional autoencoder has the lowest errors.

TABLE II. Reconstruction error for POD and the autoencoders over the test dataset of multiphase slug flow. Each of the reconstructions was made from 10 POD coefficients or latent variables.

POD         Convolutional AE   Adversarial AE   SVD-AE
21.7×10⁻⁴   4.70×10⁻⁴          20.2×10⁻⁴        30.8×10⁻⁴

As the convolutional autoencoder performed better than the other networks for dimensionality reduction, we go on to combine this with a predictive adversarial network within a domain decomposition framework to form a reduced-order model (AI-DDNIROM). A schematic diagram of how the networks are combined can be seen in Fig. 2.

1. Training and predicting with the original domain

For the prediction, the HFM produced 1400 solutions over 14 s of real time. The training and validation data were taken from time levels 1 to 799, and the test data from time levels 800 to 1400. Hyperparameter optimization was performed, and the results of this can be found in Tables VII and VIII of the Appendix. As part of this process, it was found that the best time step for the NIROM was 0.06 s, i.e., six times as large as the time interval between the HFM solutions. The MSE achieved on the validation data was 15.6×10⁻⁴ and on the test data was 103×10⁻⁴. Figure 5 compares the predictions of volume fraction with those of the HFM and shows the pointwise errors for a snapshot in the test data (unseen by the model). The agreement between the predictive adversarial network and the HFM is very good.

FIG. 5. A snapshot of the volume fractions (left) and the velocity fields (right) at t = 8.96 s, spanning the domain between 2.86 and 4.51 m and sliced exactly through the middle. The top plots show the original CFD, the middle plots show the predictions from the AI-DDNIROM, and the bottom plots show the pointwise error between the CFD and the AI-DDNIROM.

2. Extending the domain and associated boundary conditions

Having trained an AI-DDNIROM in Sec. III C 1 with snapshots from the 10 m long pipe and made predictions for that pipe, in this section we use the method described in Sec. II D to predict the flow evolution and volume fractions along a pipe of length 98 m based on training data from the 10 m pipe. The extended pipe is split into 98 subdomains, for which the initial conditions come from the simulation of the 10 m pipe taken at 7.2 s (time level 720). This is in order to start simulating from a state that is well developed. The first subdomain of the 98 m pipe takes initial conditions from the third subdomain of the 10 m pipe; the second to seventh subdomains of the 98 m pipe take the values from the fourth to the ninth subdomains of the 10 m pipe. This is repeated 15 more times, and the final subdomain of the 98 m pipe takes its initial conditions from the tenth and final subdomain of the 10 m pipe, see Fig. 6. The first, second, and tenth subdomains of the 10 m pipe were not used, to avoid introducing any spurious effects from the boundaries.
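One reading of this tiling of initial conditions for the 96 interior subdomains of the extended pipe is sketched below (1-based subdomain numbering as in Fig. 6; the two end subdomains are treated as described above and in Fig. 6):

# source subdomain (in the 10 m pipe) for interior subdomains 2..97 of the 98 m pipe:
# cycle through subdomains 4..9 of the shorter pipe, 16 times over
interior_source = [4 + (i % 6) for i in range(96)]     # [4, 5, 6, 7, 8, 9, 4, 5, ...]
assert len(interior_source) == 96
assert interior_source[:6] == [4, 5, 6, 7, 8, 9]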

FIG. 6. Above: the shorter, original pipe used in generating the snapshots, with subdomain numbering. Below: the extended pipe with initial conditions taken from the indicated subdomains of the shorter pipe. The gray subdomains at either end take their initial conditions from the boundary conditions.

Velocity and volume fractions are specified throughout time in the first and last (98th) subdomains, which act in a manner similar to boundary conditions. There is no high-fidelity model for the 98 m pipe from which to take boundary conditions, and, as the time over which predictions are made exceeds the time over which snapshots were collected from the high-fidelity model of the 10 m pipe, boundary conditions must be generated somehow. Three methods of producing boundary conditions are reported (as described in Sec. II D):

  • (i)
    Cycling through slug formation: a slug is found in the shorter pipe, and the velocity and volume fraction fields associated with the advection of this slug through the subdomain are repeated as required. The particular subdomain of the shorter pipe was the third (between 2 and 3 m), between time levels 750 and 804. The boundary condition for the left-most end of the extended pipe can therefore be written as

    $\alpha_1^{\mathrm{ext}}(t_k)=\alpha_3(\tilde{t}_k)\quad\forall\, t_k\ge 0,$  (21)
    $\mathbf{u}_1^{\mathrm{ext}}(t_k)=\mathbf{u}_3(\tilde{t}_k)\quad\forall\, t_k\ge 0,$  (22)

    where $t_k=k\,\Delta t$ for time level $k$ $(k=0,1,\ldots)$ and a time step of $\Delta t$, and

    $\tilde{t}_k=\big((t_k/\Delta t)\ \mathrm{mod}\ 54+750\big)\,\Delta t,$  (23)

    where $a\ \mathrm{mod}\ n$ gives the non-negative remainder when $n$ has been subtracted from $a$ as many times as possible. For this example, the time step of the reduced-order model is 0.06 s. The slug appears in this subdomain shortly after the selected time window as a relatively thin instability, on the order of 10 cm in length, and widens as it advects through the domain.

  • (ii)
    Perturbed instability: at the 798th time level, an instability occurs in the third subdomain of the shorter pipe. The volume fraction field associated with this is perturbed spatially by Gaussian noise, the velocity field is left unperturbed, and both are used as boundary conditions in the first subdomain of the extended pipe. As before, $\alpha_1^{\mathrm{ext}}$ is the volume fraction in the first subdomain of the extended pipe and $\alpha_3$ is the volume fraction in the third subdomain of the shorter pipe:

    $\alpha_1^{\mathrm{ext}}(t_k)=\alpha_3(\tilde{t}_k)+r\quad\forall\, t_k\ge 0,$  (24)
    $\mathbf{u}_1^{\mathrm{ext}}(t_k)=\mathbf{u}_3(\tilde{t}_k)\quad\forall\, t_k\ge 0,$  (25)

    where $\tilde{t}_k=7.98$ s and $r$ is a random spatial perturbation.

  • (iii)
    Original boundaries repeated: velocity and volume fraction solutions from the third subdomain of the shorter pipe are used as the boundary conditions for the first subdomain of the extended pipe and repeated for as long as required. The solution fields from 1 to 8 s are used, as this corresponds to times by which the air had passed through the entire length of the shorter pipe. Therefore, for times in [0, 7) s of the extended pipe, times in [1, 8) s of the shorter pipe are used; for times in [7, 14) s of the extended pipe, times in [1, 8) s of the shorter pipe are used again; and so on. The boundary condition for the left-most end of the extended pipe can therefore be written as

    $\alpha_1^{\mathrm{ext}}(t_k)=\alpha_3(\tilde{t}_k)\quad\forall\, t_k\ge 0,$  (26)
    $\mathbf{u}_1^{\mathrm{ext}}(t_k)=\mathbf{u}_3(\tilde{t}_k)\quad\forall\, t_k\ge 0,$  (27)

    where $t_k=k\,\Delta t$ for time level $k$ $(k=0,1,\ldots)$ and a time step of $\Delta t$, and

    $\tilde{t}_k=\big((t_k/\Delta t)\ \mathrm{mod}\ 700+100\big)\,\Delta t.$  (28)

In all cases, the boundary condition for the final subdomain is based on a snapshot from subdomain 2 of the 10 m pipe at 7.5 s, when the flow was almost steady, with the lower half of the pipe occupied by water and the upper half by air.
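As noted before the list, the cyclic mappings of Eqs. (21)–(23) and (26)–(28) reduce to a modulo operation on the time-level index. Below is a minimal Python sketch, in which the snapshot arrays alpha_3 and u_3 (indexed by the time levels of the shorter pipe) are hypothetical placeholders.

```python
def cyclic_boundary(k, alpha_3, u_3, period, offset):
    """Boundary values for the first subdomain of the extended pipe at
    time level k, read cyclically from the third subdomain of the
    shorter pipe: level = (k mod period) + offset."""
    level = (k % period) + offset
    return alpha_3[level], u_3[level]

# Method (i), cycling through slug formation: period=54,  offset=750.
# Method (iii), original boundaries repeated:  period=700, offset=100.
```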

Various statistics are presented in this section with the aim of assessing whether the AI-DDNIROM approach produces realistic results. If the adversarial training strategy delivers its expected advantage of producing a model that does not extrapolate beyond the training data it has seen, then the predictive model should not diverge significantly from the original simulation. Figures 7(a)–7(c) show how the liquid volume fraction field varies over time in the original simulation and for the reduced-order models using two of the tested boundary conditions (cycling through slug formation and perturbed instability). The results obtained when repeating the original boundary conditions were similar to the original simulation and are not shown here. The time interval for the two reduced-order models corresponds to the instabilities having passed through two thirds of the pipe. Time series data were collected at x = 6.5 m (for the original simulation) and x = 64.5 m (for the two reduced-order models), at a height of 0.0039 m (a tenth of the pipe radius) above the centerline of the pipe. To analyze the frequency spectra, a discrete Fourier transform was then applied to the data. Figures 7(d)–7(f) show that the characteristic slug frequency spectra of the predictions are similar to that of the original simulation. In particular, the main peak has a similar value in all three simulations (original simulation: 0.76 Hz; reduced-order model cycling through slug formation: 0.7 Hz; reduced-order model with the perturbed instability: 0.88 Hz). This suggests that AI-DDNIROM simulations based on either of these boundary conditions behave in a realistic way. In fact, the frequency of the main peak could be interpreted as the pseudo-slug frequency: technically, slugs are only defined as such when they span the full vertical extent of the pipe, whereas pseudo-slugs90 or proto-slugs91 are precursors to slugs that do not necessarily reach the full height of the pipe.
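The frequency analysis just described can be reproduced in a few lines of NumPy. The sketch below assumes a uniformly sampled volume-fraction time series alpha_t with sampling interval dt; it is an illustration, not code from the released repository.

```python
import numpy as np

def dominant_frequency(alpha_t, dt):
    """Frequency (Hz) of the largest peak in the one-sided amplitude
    spectrum of a volume-fraction time series, ignoring the mean."""
    signal = alpha_t - np.mean(alpha_t)         # remove the DC component
    amplitude = np.abs(np.fft.rfft(signal))     # one-sided DFT amplitudes
    freqs = np.fft.rfftfreq(len(signal), d=dt)  # corresponding frequencies
    return freqs[np.argmax(amplitude)]

# For the reduced-order models, dt = 0.06 s and alpha_t would be sampled
# at x = 64.5 m, 0.0039 m above the pipe centerline.
```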

FIG. 7.

(a)–(c) Volume fractions plotted against time at (a) 6.5 m for the original results and at (b) and (c) 64.5 m for the reduced models. In each case, the data were collected at a height of 0.1r =0.0039 m above the centerline of the pipe. (d)–(f) Discrete Fourier Transform (DFT) applied to the data presented in subfigures (a)–(c). (a) Original simulation; (b) cycling through slug formation; (c) perturbed instability; (d) original simulation; (e) cycling through slug formation; and (f) perturbed instability.


Figure 8 follows a pseudo-slug for five time levels, as viewed through the volume fraction fields, for the original simulation and the reduced-order models. It shows, first, that the instabilities presented in Figs. 8(d) and 8(e) are similar to an instability that also occurred within the original simulation, presented in Fig. 8(c). Furthermore, the instabilities travel similar distances between time levels, so they must travel at similar velocities within the timespan shown. While this presents the dynamics of only a single instability at a few points in time, the similarity of these situations suggests that the predictive adversarial model was producing a situation very similar to one it had seen before, which is what the adversarial training strategy was hypothesized to encourage.

FIG. 8.

(a) and (b) Volume fractions from AI-DDNIROMs with boundary conditions as indicated at a single point in time spanning the domain between 20 and 80 m and at a height of 0.0039 m. (c)–(e) Zoomed in sections from subfigures (a) and (b) and a section of the original simulation, plotted for a few consecutive timesteps. (a) Cycling through slug formation; (b) perturbed instability; (c) original simulation; (d) cycling through slug formation; and (e) perturbed instability.


Figures 9(a)–9(c) show the volume fractions averaged over the full 98 m domain for a short time period. Note that the start of this time period was chosen so that the influence of the boundaries had already propagated throughout the domain. Figures 9(d)–9(f) display the volume fractions averaged over the time period of the previous three subfigures [Figs. 9(a)–9(c)] and over the width and length of the domain; these latter three plots thus show how the volume fractions change with height. It is clear from these plots that most of the water collects at the bottom of the pipe. If we accept that repeating the original boundary conditions in their entirety produces results similar to the original simulation, then the fact that the plotted dynamics of the two simulations with artificially generated boundaries are similar to this case strongly suggests that the model makes realistic predictions for each of the boundary conditions.

FIG. 9.

Mean volume fractions throughout the pipe spanning the domain between 20 and 80 m for the different boundary conditions. (a)–(c) Volume fractions averaged over all points in space. (d)–(f) Volume fractions averaged over time, as well as width and length. Note that σ refers to the standard deviation. (a) Cycling through slug formation; (b) perturbed instability; (c) original boundaries repeated; (d) cycling through slug formation; (e) perturbed instability; and (f) original boundaries repeated.


Figure 10 displays the volume fractions predicted by the AI-DDNIROMs along the centerline of the pipe (xz plane), averaged over the height of the pipe, for 12 s in time. The red bands that stretch diagonally across the plotted domain represent slugs propagating downstream as time progresses, and their slopes represent the corresponding velocities. Black lines have been drawn on these plots to indicate the liquid slug velocities, and also the velocities of secondary waves (light blue). The velocity magnitudes are given in Table III. From these plots and the table, one can see that both the slug velocities and the velocities of the secondary waves produced by the different boundary conditions are very similar; the slight variations may be caused by interactions between slugs in each other's vicinity. A pattern generally observed in each of the three graphs in Fig. 10 is that two slugs which are close together tend slowly to approach one another: the leading slug seems to slow down and disappear as the trailing slug catches up. These graphs also clearly display the influence of the inlet boundary on the simulation. Indeed, the first couple of meters is where the simulations differ the most; after that, the simulations all settle into a very similar pattern. In an experimental setup similar to the computational domain modeled here, slug frequencies were observed to be largest near the inlet but, after about 5 m, to settle to a value independent of the distance from the inlet.92 Correlations for slug frequency are often sensitive to the superficial liquid velocity, which, in turn, depends on the mass of liquid in the pipe,93 so the fact that the slug frequencies do not appear to change significantly after about 10 m (see the corresponding slug lengths in Fig. 10) suggests that the mass of liquid is conserved along the pipe.
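The slug velocities in Table III correspond to the slopes of the red bands in Fig. 10. As an indication of how such velocities can be estimated automatically, the sketch below cross-correlates the volume-fraction time series at two probe positions and divides the probe separation by the lag of the correlation peak; this is a minimal sketch assuming uniformly sampled, equal-length series, not necessarily the procedure used to draw the lines in the figure.

```python
import numpy as np

def front_velocity(alpha_x1, alpha_x2, dx, dt):
    """Estimate the propagation velocity of structures between two probes
    a distance dx apart, from the lag that maximizes the cross-correlation
    of their (mean-removed) volume-fraction time series."""
    a = alpha_x1 - np.mean(alpha_x1)
    b = alpha_x2 - np.mean(alpha_x2)
    corr = np.correlate(b, a, mode="full")   # correlation over all lags
    lag = np.argmax(corr) - (len(a) - 1)     # lag of x2 relative to x1, in samples
    return dx / (lag * dt) if lag != 0 else np.inf
```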

FIG. 10.

Volume fractions as a result of different boundary conditions [(a) cycling through slug formation, (b) perturbed instability, and (c) original boundaries repeated] throughout 12 s in time for the entire 98 m pipe. Note that the volume fractions were averaged over the entire height of the pipe.

TABLE III.

The slug velocities and secondary wave velocities for the three methods of generating boundary conditions.

                                   Slug velocity    Secondary wave velocity
Cycling through slug formation     3.5 m s⁻¹        1.5 m s⁻¹
Perturbed instability              3.4 m s⁻¹        1. m s⁻¹
Original boundaries repeated       3.3 m s⁻¹        1.5 m s⁻¹

Figure 11 shows the volume fractions at a few instants in time to give an impression of these fields across the full width of the domain, offering a different perspective on the information seen in Fig. 10.

FIG. 11.

Volume fractions at 36 s (upper) and 48 s (lower), at a height of 0.0039 m above the centerline of the pipe, for the predictions with different boundary conditions. The first and last subdomains are excluded because they showed different values from those throughout the rest of the domain, which would skew the vertical axes. Plots (a)–(c) are for t = 36 s, and plots (d)–(f) are for t = 48 s. (a) Cycling through slug formation; (b) perturbed instability; (c) original boundaries repeated; (d) cycling through slug formation; (e) perturbed instability; and (f) original boundaries repeated.


3. Computational times

The AI-DDNIROM shows a significant computational speed-up over the high-fidelity model, as expected. The high-fidelity simulation of the 10 m pipe took approximately two weeks to complete (run on an Intel® Xeon® E5-2640 processor, 2.4 GHz), whereas the AI-DDNIROM prediction for the 98 m pipe took approximately 20 minutes to generate (run on GPUs within Google's Colab platform94,95).

We present an AI-based non-intrusive reduced-order model combined with domain decomposition (AI-DDNIROM), which is capable of making predictions for significantly larger domains than the domain used in training the model. For dimensionality reduction we use a convolutional autoencoder and for prediction we use a predictive adversarial network. During training, the predictive adversarial network learns the underlying probability distribution (of the latent variables) associated with the fluids' behavior. The main findings of this study are listed below.

  • (i)

    For dimensionality reduction, a number of autoencoders are compared with proper orthogonal decomposition, and the convolutional autoencoder is seen to perform the best for both test cases (2D flow past a cylinder and 3D multiphase flow in a pipe).

  • (ii)

    When training neural networks, it has been observed that computational physics applications typically have access to less training data than image-based applications,12,23 which can lead to poor generalization. To combat this, for the dimensionality reduction of multiphase flow in a pipe, we use “overlapping” snapshots, that is, in addition to ten subdomains being equally spaced along the pipe, ten supplementary subdomains are located at random within the pipe. This doubles the amount of training data and results in improved performance.

  • (iii)

For prediction, we use a predictive adversarial network based on the adversarial autoencoder53 but modified to predict in time. This model performs well, gives realistic results, and, unlike feedforward or recurrent networks without such an adversarial layer, does not diverge for the multiphase test case shown here.

  • (iv)

    Finally, we make predictions for a 98 m pipe (the “extended pipe”) with the AI-DDNIROM that was trained on results from a 10 m pipe. Statistics of results from the extended pipe are similar to those of the original pipe, so we conclude that the predictive adversarial network has made realistic predictions for this extended pipe.

A number of improvements could be made to the approach presented here. First, a physics-informed term could be included in the loss function of either the convolutional autoencoder or the predictive adversarial network; this would ensure that conservation of mass and momentum are more closely satisfied by the predictions of the neural networks. Second, although the initial conditions have little effect on the predictions, the boundary conditions have a significant effect. Rather than the heuristic approach adopted here, a generative adversarial network (GAN) could be used to predict boundary conditions for the inlet and outlet subdomains: the GAN could be trained to predict the reduced variables at several time levels, latent variables consistent with all but one of the time levels (the future time level) could then be found by an optimization approach,67 and from these latent variables the boundary condition for the future time level could be obtained. Finally, a hierarchy of reduced-order models could be used to make the approach faster: the lowest-order model could represent the simplest physical features of the flow, and the higher-order models could represent more complicated flow features. To decide whether the model being used in a particular subdomain was sufficient, the discriminator of the predictive adversarial network could be used.

The authors would like to acknowledge the following EPSRC grants: MUFFINS, MUltiphase Flow-induced Fluid-flexible structure InteractioN in Subsea applications (Nos. EP/P033180/1 and EP/P033148/1); RELIANT, Risk EvaLuatIon fAst iNtelligent Tool for COVID19 (No. EP/V036777/1); the PREMIERE programme grant (No. EP/T000414/1); MAGIC, Managing Air for Green Inner Cities (No. EP/N010221/1); and INHALE, Health assessment across biological length scales (No. EP/T003189/1). We would also like to acknowledge the Applied Computational Science and Engineering MSc course at Imperial, during which Zef Wolffs obtained the results shown in this paper. Finally, we would like to thank the reviewers for their comments and suggestions, which have improved the paper.

The authors have no conflicts to disclose.

C.E.H.: conceptualization, methodology, software, writing—original draft, writing—review and editing, supervision; Z.W.: methodology, software, writing—original draft, writing—review and editing; J.A.T.: software, writing—review and editing; L.K.: software, writing—review and editing; P.S.: software, writing—review and editing; A.N.: conceptualization, writing—review and editing, supervision; I.M.N.: conceptualization, writing—review and editing; O.K.M.: conceptualization, writing—review and editing, supervision, funding acquisition; N.S.: conceptualization, writing—original draft, writing—review and editing, supervision, funding acquisition; C.C.P.: conceptualization, methodology, software, writing—original draft, writing—review and editing, supervision, funding acquisition.

The data that support the findings of this study are openly available on GitHub at https://github.com/acse-zrw20/DD-GAN-AE, Ref. 96.

Extensive hyperparameter optimization was carried out for the artificial neural networks used in this investigation. This was done on the Weights & Biases platform, which allows for efficient searching of high-dimensional parameter spaces using methods such as random searches and Bayesian searches. For example, performing a grid search of the predictive adversarial network for one architecture would involve searching an 18-dimensional parameter space and, with the combinations given in Table IV, would amount to over 2 billion ($2\times10^9$) model evaluations (for one architecture). Instead of a grid search, we performed an initial random search of parameter space, followed by Bayesian optimization. For the predictive adversarial network, this resulted in 1530 model evaluations (for all architectures). The full report for this network is available on Weights & Biases.
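As an indication of how such a two-stage search can be set up on Weights & Biases, the sketch below shows a hypothetical sweep configuration; the parameter names and values are illustrative placeholders drawn from Table IV, the project name is assumed, and the train function is a stub rather than the actual training code.

```python
import wandb

# Hypothetical sweep configuration: the same dictionary can be launched
# with method="random" for the initial exploration and method="bayes"
# for the subsequent Bayesian optimization.
sweep_config = {
    "method": "bayes",  # "random" for the first stage, "bayes" thereafter
    "metric": {"name": "val_mse", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"values": [5e-5, 5e-4, 5e-3]},
        "batch_size": {"values": [32, 64, 128]},
        "optimizer": {"values": ["adam", "nadam", "sgd"]},
        "latent_vars": {"values": [30, 50, 100]},
    },
}

def train():
    # Each agent call starts a run whose config holds one parameter sample.
    with wandb.init() as run:
        config = run.config
        # ... build and train the network with `config`, then report, e.g.:
        # run.log({"val_mse": validation_mse})

sweep_id = wandb.sweep(sweep_config, project="DD-GAN-AE")
wandb.agent(sweep_id, function=train, count=50)  # number of trials (arbitrary here)
```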

TABLE IV.

Variation and ranges of values studied during the hyperparameter optimization.

All networks
  Activation functions        tanh, sigmoid, relu, elu
  Final activation function   tanh, sigmoid, linear
  Architectureᵃ               Number of layers: 6, …, 20
                              Number of channels: 2, …, 128
                              Dense layer sizes (non-latent): 32, …, 2000
                              Kernel sizes: 3, 5
                              Layer types: {1D, 2D, 3D}-Conv., {1D, 2D, 3D}-MaxPool, {1D, 2D, 3D}-UpSample, Dense
  Batch size                  32, 64, 128
  Optimizer                   Adam, Nadam, SGD
  β1                          0.8, 0.9, 0.98
  β2                          0.9, 0.999, 1
  Batch normalization         True, False
  Dropout                     0.3, 0.55, 0.8
  Epochs                      100, 200, 500, 1000, 2000
  Interval                    1, 2, 4, 5, 6, 10
  Learning rate               0.000 05, 0.0005, 0.005
Adversarial networks only
  Discrim architectureᵃ       Number of layers: 3
                              Dense layer sizes (non-latent): 100, 500, 1000
                              Layer types: Dense
  n discrim                   1, 2, 5
  n gradient                  0, 3, 8, 15, 30 (0 means that no steps of gradient ascent were taken)
  std noise                   0, 0.000 01, 0.001, 0.01, 0.05, 0.1
  Regularization              0, 0.000 001, 0.000 01, 0.001
Predictive adversarial networks only
  Latent vars                 30, 50, 100
ᵃ Here a global picture of the architectures is presented; for the source code containing all of the architectures used, please see the GitHub repository: https://github.com/acse-zrw20/DD-GAN-AE/tree/main/ddganAE/architectures.

Table IV shows the range of hyperparameters investigated during optimization for all the networks (the three autoencoder-based networks used for dimensionality reduction in the two test cases, and the predictive adversarial network used for multiphase flow in a pipe). These include the exponential decay rate for the first moment estimates (β1); the exponential decay rate for the exponentially weighted infinity norm (β2); the interval between snapshots (interval), such that an interval of n corresponds to every nth snapshot being put in the datasets; the number of discriminator iterations (n discrim); the number of gradient ascent steps (n gradient); and the standard deviation of the noise randomly added to the input of the discriminator within the adversarial autoencoder (std noise).

Table V shows the optimal values found in the hyperparameter optimization for the dimensionality reduction methods based on autoencoders for flow past a cylinder and for multiphase flow in a pipe.

TABLE V.

The optimal values for the hyperparameters of the autoencoders used in the dimensionality reduction stage of flow past a cylinder and multiphase flow in a pipe.

                          Flow past a cylinder            Multiphase pipe flow
                          CAE        AAE        SVD-AE    CAE        AAE        SVD-AE
Activation functions:
  Convolutional layers    elu        elu        ⋯         elu        elu        sigmoid
  Dense layers            relu       relu       relu      relu       linear     sigmoid
  Output layer            ⋯          ⋯          elu       sigmoid    sigmoid    linear
Optimizer:
  Method                  Adam       Nadam      Nadam     Adam       Adam       Nadam
  β1                      0.98       0.9        0.98      0.8        0.9        0.8
  β2                      0.9        0.999 99   0.999 99  0.9        0.9        0.999 99
Batch size                128        128        64        64         32         64
Epochs                    200        200        200       100        1000       100
Batch normalization       ⋯          ⋯          False     ⋯          ⋯          False
Train method              ⋯          Default    ⋯         ⋯          Default    ⋯
Dropout                   ⋯          ⋯          0.55      ⋯          ⋯          0.55
Learning rate             0.000 05   0.000 005  0.0005    0.0005     0.000 005  0.000 05
Regularization            0

Table VI gives the optimal architectures found by hyperparameter optimization for the six autoencoder-based networks used in the dimensionality reduction of flow past a cylinder and multiphase flow in a pipe.

TABLE VI.

The optimal architectures of the autoencoder-based networks used for dimensionality reduction. The figures in the table are the dimensions of the outputs of each layer; for tuples, the final value represents the number of channels or feature maps. The layer type can be convolutional (Conv), max-pooling (MaxPool), or upsampling (UpSample). Flatten layers take an n-dimensional array as input and return a 1D array as output. A Reshape layer converts a 1D input to the indicated output dimensions.

                       Flow past a cylinder                          Multiphase flow in a pipe
Layers                 CAE            AAE            SVD-AE          CAE                AAE                SVD-AE
Input                  (55, 42, 2)    (55, 42, 2)    100             (60, 20, 20, 4)    (60, 20, 20, 4)    100
Conv                   (55, 42, 32)   (55, 42, 32)   ⋯               (60, 20, 20, 32)   (60, 20, 20, 32)   ⋯
MaxPool                (28, 21, 32)   (28, 21, 32)   ⋯               (30, 10, 10, 32)   (30, 10, 10, 32)   ⋯
Conv                   (28, 21, 64)   (28, 21, 64)   ⋯               (30, 10, 10, 64)   (30, 10, 10, 64)   ⋯
MaxPool                (14, 11, 64)   (14, 11, 64)   ⋯               (15, 5, 5, 64)     (15, 5, 5, 64)     ⋯
Conv                   (14, 11, 128)  ⋯              ⋯               (15, 5, 5, 128)    (15, 5, 5, 128)    ⋯
MaxPool                (7, 6, 128)    ⋯              ⋯               (8, 3, 3, 128)     (8, 3, 3, 128)     ⋯
Flatten                5376           9856           ⋯               9216               9216               ⋯
Dense 1                2688           9856           500ᵃ            10                 4608               1500
Dense 2                10             4926           500ᵃ            9216               10                 2000
Dense 3                2688           10             10              ⋯                  4608               10
Dense 4                5376           4926           500ᵃ            ⋯                  9218               1500
Dense 5                ⋯              9856           500ᵃ            ⋯                  ⋯                  2000
Dense 6                ⋯              9856           ⋯               ⋯                  ⋯                  ⋯
Reshape                (7, 6, 128)    (14, 11, 64)   ⋯               (8, 3, 3, 128)     (8, 3, 3, 128)     ⋯
Conv                   (7, 6, 128)    (14, 11, 64)   ⋯               (8, 3, 3, 128)     (8, 3, 3, 128)     ⋯
UpSample               (14, 12, 128)  (28, 22, 64)   ⋯               (16, 6, 6, 128)    (16, 6, 6, 128)    ⋯
Conv                   (14, 12, 64)   (28, 22, 32)   ⋯               (16, 6, 6, 64)     (16, 6, 6, 64)     ⋯
UpSample               (28, 24, 64)   (56, 44, 32)   ⋯               (32, 12, 12, 64)   (32, 12, 12, 64)   ⋯
Conv                   (28, 24, 32)   (56, 44, 2)    ⋯               (30, 10, 10, 32)ᵇ  (30, 10, 10, 32)ᵇ  ⋯
UpSample               (56, 48, 32)   ⋯              ⋯               (60, 20, 20, 32)   (60, 20, 20, 32)   ⋯
Conv                   (56, 48, 2)    ⋯              ⋯               (60, 20, 20, 4)    (60, 20, 20, 4)    ⋯
Crop                   (55, 42, 2)    ⋯              ⋯               ⋯                  ⋯                  ⋯
Output                 ⋯              ⋯              100             ⋯                  ⋯                  100
Trainable parameters   29 300 300     291 587 010    612 110         1 196 238          86 001 860         6 392 110
ᵃ Denotes a layer which is followed by a dropout layer during training.

ᵇ Denotes convolutional layers which have no padding. In all other cases, padding is set so that the output has the same dimensions as the input array, although the number of channels may vary.

Table VII shows the optimal values found in the hyperparameter optimization for the predictive adversarial network used for the non-intrusive reduced-order model of multiphase flow in a pipe, and Table VIII gives the optimal architecture.

TABLE VII.

The optimal values of the hyperparameters for the predictive adversarial network found by optimization for multiphase flow in a pipe.

Activation functions:                          Dropout                0.3
  Convolutional layers    relu                 Interval               6
  Dense layers            relu                 Learning rate          0.000 05
  Final layer             tanh                 Latent vars            100
Optimizer:                                     n discrim              ⋯
  Method                  Nadam                n gradient             15
β1                        0.98                 std noise              0.01
β2                        0.9                  Regularization         0.001
Batch size                32                   Batch normalization    True
Epochs                    2000                 Training method        Weighted loss
TABLE VIII.

The optimal architecture of the predictive adversarial network found by hyperparameter optimization for multiphase flow in a pipe.

Layers                 Predictive adversarial network    Discriminator
Input                  30                                100
Dense 1                500                               100
Dense 2                500                               500
Dense 3                100                               ⋯
Dense 4                500                               ⋯
Dense 5                500                               ⋯
Output                 10                                1
Trainable parameters   622 110                           61 101
1. Model Order Reduction: Theory, Research Aspects and Applications, The European Consortium for Mathematics in Industry Vol. 13, edited by W. Schilders, H. van der Vorst, and J. Rommes (Springer, 2008).
2. P. Benner, S. Gugercin, and K. Willcox, "A survey of projection-based model reduction methods for parametric dynamical systems," SIAM Rev. 57, 483–531 (2015).
3. T. Bui-Thanh, M. Damodaran, and K. Willcox, "Proper orthogonal decomposition extensions for parametric applications in compressible aerodynamics," AIAA Paper No. 2003-421 (2003).
4. C. Audouze, F. De Vuyst, and P. B. Nair, "Nonintrusive reduced-order modeling of parametrized time-dependent partial differential equations," Numer. Methods Partial Differ. Equations 29, 1587–1628 (2013).
5. M. Guénot, I. Lepot, C. Sainvitu, J. Goblet, and R. Filomeno Coelho, "Adaptive sampling strategies for non-intrusive POD-based surrogates," Eng. Comput. 30, 521–547 (2013).
6. M. Hamdaoui, G. L. Quilliec, P. Breitkopf, and P. Villon, "POD surrogates for real-time multi-parametric sheet metal forming problems," Int. J. Mater. Forming 7, 337–358 (2014).
7. W. Polifke, "Black-box system identification for reduced order model construction," Ann. Nucl. Energy 67, 109–128 (2014).
8. Z. Wang, D. Xiao, F. Fang, R. Govindan, C. C. Pain, and Y.-K. Guo, "Model identification of reduced order fluid dynamics systems using deep learning," Int. J. Numer. Methods Fluids 86, 255–268 (2018).
9. V. Shinde, E. Longatte, F. Baj, Y. Hoarau, and M. Braza, "A Galerkin-free model reduction approach for the Navier-Stokes equations," J. Comput. Phys. 309, 148–163 (2016).
10. E. Kaiser, B. Noack, L. Cordier, A. Spohn, M. Segond, M. Abel, G. Daviller, J. Östh, S. Krajnović, and R. Niven, "Cluster-based reduced-order modelling of a mixing layer," J. Fluid Mech. 754, 365–414 (2014).
11. M. Guo and J. S. Hesthaven, "Data-driven reduced order modeling for time-dependent problems," Comput. Methods Appl. Mech. Eng. 345, 75–99 (2019).
12. R. Swischuk, L. Mainini, B. Peherstorfer, and K. Willcox, "Projection-based model reduction: Formulations for physics-based machine learning," Comput. Fluids 179, 704–717 (2019).
13. S. Fresca, A. Manzoni, L. Dedè, and A. Quarteroni, "Deep learning-based reduced order models in cardiac electrophysiology," PLoS One 15, e0239416 (2020).
14. A. Rasheed, O. San, and T. Kvamsdal, "Digital twin: Values, challenges and enablers from a modeling perspective," IEEE Access 8, 21980–22012 (2020).
15. M. Kapteyn, D. Knezevic, D. Huynh, M. Tran, and K. Willcox, "Data-driven physics-based digital twins via a library of component-based reduced-order models," Int. J. Numer. Methods Eng. 121, 1–18 (2020).
16. AIAA Digital Engineering Integration Committee, Digital Twin: Definition & Value (AIAA Position Paper) (AIAA, 2020).
17. S. Niederer, M. Sacks, M. Girolami, and K. Willcox, "Scaling digital twins from the artisanal to the industrial," Nat. Comput. Sci. 1, 313–320 (2021).
18. P. Holmes, J. Lumley, G. Berkooz, and C. Rowley, Turbulence, Coherent Structures, Dynamical Systems and Symmetry (Cambridge University Press, 2012).
19. C. Greif and K. Urban, "Decay of the Kolmogorov N-width for wave problems," Appl. Math. Lett. 96, 216–222 (2019).
20. A. Iollo and D. Lombardi, "Advection modes by optimal mass transfer," Phys. Rev. E 89, 022923 (2014).
21. S. E. Ahmed, S. M. Rahman, S. Omer, A. Rasheed, and I. M. Navon, "Memory embedded non-intrusive reduced order modeling of non-ergodic flows," Phys. Fluids 31, 126602 (2019).
22. H. Lu and D. M. Tartakovsky, "Lagrangian dynamic mode decomposition for construction of reduced-order models of advection-dominated phenomena," J. Comput. Phys. 407, 109229 (2020).
23. S. L. Brunton, B. R. Noack, and P. Koumoutsakos, "Machine learning for fluid mechanics," Annu. Rev. Fluid Mech. 52, 477–508 (2020).
24. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25 (NIPS) (2012).
25. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv:1512.03385 (2015).
26. C. E. Heaney, Y. Li, O. K. Matar, and C. C. Pain, "Applying convolutional neural networks to data on unstructured meshes with space-filling curves," arXiv:2011.14820 (2020).
27. R. Hanocka, A. Hertz, N. Fish, R. Giryes, S. Fleishman, and D. Cohen-Or, "MeshCNN: A network with an edge," ACM Trans. Graph. 38, 1–12 (2019).
28. J. Tencer and K. Potter, "A tailored convolutional neural network for nonlinear manifold learning of computational physics data using unstructured spatial discretizations," SIAM J. Sci. Comput. 43, A2581–A2613 (2021).
29. Y. Zhou, C. Wu, Z. Li, C. Cao, Y. Ye, J. Saragih, H. Li, and Y. Sheikh, "Fully convolutional mesh autoencoder using efficient spatially varying kernels," in Advances in Neural Information Processing Systems, edited by H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Curran Associates, Inc., 2020), Vol. 33, pp. 9251–9262.
30. M. Milano and P. Koumoutsakos, "Neural network modeling for near wall turbulent flow," J. Comput. Phys. 182, 1–26 (2002).
31. F. J. Gonzalez and M. Balajewicz, "Deep convolutional recurrent autoencoders for learning low-dimensional feature dynamics of fluid systems," arXiv:1808.01346 (2018).
32. S. Wiewel, M. Becher, and N. Thuerey, "Latent space physics: Towards learning the temporal evolution of fluid flow," Comput. Graph. Forum 38, 71–82 (2019).
33. K. Fukami, T. Nakamura, and K. Fukagata, "Convolutional neural network based hierarchical autoencoder for nonlinear mode decomposition of fluid field data," Phys. Fluids 32, 095110 (2020).
34. H. Eivazi, H. Veisi, M. H. Naderi, and V. Esfahanian, "Deep neural networks for nonlinear model order reduction of unsteady flows," Phys. Fluids 32, 105104 (2020).
35. P. Wu, S. Gong, K. Pan, F. Qiu, W. Feng, and C. C. Pain, "Reduced order model using convolutional auto-encoder with self-attention," Phys. Fluids 33, 077107 (2021).
36. J. Xu and K. Duraisamy, "Multi-level convolutional autoencoder networks for parametric prediction of spatio-temporal dynamics," Comput. Methods Appl. Mech. Eng. 372, 113379 (2020).
37. J. Mack, R. Arcucci, M. Molina-Solana, and Y.-K. Guo, "Attention-based convolutional autoencoders for 3D-variational data assimilation," Comput. Methods Appl. Mech. Eng. 372, 113291 (2020).
38. C. Quilodrán Casas, R. Arcucci, and Y. Guo, "Urban air pollution forecasts generated from latent space representation," in ICLR 2020 Workshop on Integration of Deep Neural Models and Differential Equations (2020).
39. C. Quilodrán-Casas, R. Arcucci, L. Mottet, Y.-K. Guo, and C. C. Pain, "Adversarial autoencoders and adversarial LSTM for improved forecasts of urban air pollution simulations," arXiv:2104.06297 (2021).
40. S. Nikolopoulos, I. Kalogeris, and V. Papadopoulos, "Non-intrusive surrogate modeling for parametrized time-dependent PDEs using convolutional autoencoders," arXiv:2101.05555 [math.NA] (2021).
41. T. Kadeethum, F. Ballarin, Y. Choi, D. O'Malley, H. Yoon, and N. Bouklas, "Non-intrusive reduced order modeling of natural convection in porous media using convolutional autoencoders: Comparison with linear subspace techniques," Adv. Water Resour. 160, 104098 (2022).
42. R. Maulik, B. Lusch, and P. Balaprakash, "Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders," Phys. Fluids 33, 037106 (2021).
43. J. Wang, C. He, R. Li, H. Chen, C. Zhai, and M. Zhang, "Flow field prediction of supercritical airfoils via variational autoencoder based deep learning framework," Phys. Fluids 33, 086108 (2021).
44. S. Fresca, L. Dedè, and A. Manzoni, "A comprehensive deep learning-based approach to reduced order modeling of nonlinear time-dependent parametrized PDEs," J. Sci. Comput. 87, 61 (2021).
45. T. Botsas, I. Pan, L. R. Mason, and O. K. Matar, "Multiphase flow applications of non-intrusive reduced-order models with Gaussian process emulation," arXiv:2111.08037 [physics.comp-ph] (2021).
46. C. Gin, B. Lusch, S. Brunton, and J. Kutz, "Deep learning models for global coordinate transformations that linearise PDEs," Eur. J. Appl. Math. 32, 515–539 (2021).
47. A. Gruber, M. Gunzburger, L. Ju, and Z. Wang, "A comparison of neural network architectures for data-driven reduced-order modeling," arXiv:2110.03442 [cs.LG] (2021).
48. R. Fu, D. Xiao, I. M. Navon, and C. Wang, "A data driven reduced order model of fluid flow by auto-encoder and self-attention deep learning methods," arXiv:2109.02126v1 [physics.comp-ph] (2021).
49. S. B. Reddy, A. R. Magee, R. K. Jaiman, J. Liu, W. Xu, A. Choudhary, and A. A. Hussain, "Reduced order model for unsteady fluid flows via recurrent neural networks," in ASME 2019 38th International Conference on Ocean, Offshore and Arctic Engineering, Glasgow, Scotland, 9–14 June 2019 (ASME, 2019), Vol. 2.
50. T. R. F. Phillips, C. E. Heaney, P. N. Smith, and C. C. Pain, "An autoencoder-based reduced-order model for eigenvalue problems with application to neutron diffusion," Int. J. Numer. Methods Eng. 122, 3780–3811 (2021).
51. S. E. Ahmed, O. San, A. Rasheed, and T. Iliescu, "Nonlinear proper orthogonal decomposition for convection-dominated flows," Phys. Fluids 33, 121702 (2021).
52. S. Fresca and A. Manzoni, "POD-DL-ROM: Enhancing deep learning-based reduced order models for nonlinear parametrized PDEs by proper orthogonal decomposition," Comput. Methods Appl. Mech. Eng. 388, 114181 (2022).
53. A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, "Adversarial autoencoders," arXiv:1511.05644 [cs.LG] (2015).
54. J. S. Hesthaven and S. Ubbiali, "Non-intrusive reduced order modeling of nonlinear problems using neural networks," J. Comput. Phys. 363, 55–78 (2018).
55. M. Raissi, P. Perdikaris, and G. Karniadakis, "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations," J. Comput. Phys. 378, 686–707 (2019).
56. F. Regazzoni, L. Dedè, and A. Quarteroni, "Machine learning for fast and reliable solution of time-dependent differential equations," J. Comput. Phys. 397, 108852 (2019).
57. S. Pawar, S. M. Rahman, H. Vaddireddy, O. San, A. Rasheed, and P. Vedula, "A deep learning enabler for nonintrusive reduced order modeling of fluid flows," Phys. Fluids 31, 085101 (2019).
58. S. E. Ahmed, O. San, D. A. Bistrian, and I. M. Navon, "Sampling and resolution characteristics in reduced order models of shallow water equations: Intrusive vs nonintrusive," Int. J. Numer. Methods Fluids 92, 992–1036 (2020).
59. W. Chen, Q. Wang, J. S. Hesthaven, and C. Zhang, "Physics-informed machine learning for reduced-order modeling of nonlinear problems," J. Comput. Phys. 446, 110666 (2021).
60. C. J. Arthurs and A. P. King, "Active training of physics-informed neural networks to aggregate and interpolate parametric solutions to the Navier-Stokes equations," J. Comput. Phys. 438, 110364 (2021).
61. D. Xiao, F. Fang, C. E. Heaney, I. M. Navon, and C. C. Pain, "A domain decomposition method for the non-intrusive reduced order modelling of fluid flow," Comput. Methods Appl. Mech. Eng. 354, 307–330 (2019).
62. D. Xiao, C. E. Heaney, F. Fang, L. Mottet, R. Hu, D. A. Bistrian, E. Aristodemou, I. M. Navon, and C. C. Pain, "A domain decomposition non-intrusive reduced order model for turbulent flows," Comput. Fluids 182, 15–27 (2019).
63. R. Maulik, T. Botsas, N. Ramachandra, L. R. Mason, and I. Pan, "Latent-space time evolution of non-intrusive reduced-order models using Gaussian process emulation," Physica D 416, 132797 (2021).
64. R. Maulik, B. Lusch, and P. Balaprakash, "Non-autoregressive time-series methods for stable parametric reduced-order models," Phys. Fluids 32, 087115 (2020).
65. C. Quilodrán-Casas, R. Arcucci, C. C. Pain, and Y.-K. Guo, "Adversarially trained LSTMs on reduced order models of urban air pollution simulations," arXiv:2101.01568 [cs.LG] (2021).
66. M. Cheng, F. Fang, C. C. Pain, and I. M. Navon, "An advanced hybrid deep adversarial autoencoder for parameterized nonlinear fluid flow modelling," Comput. Methods Appl. Mech. Eng. 372, 113375 (2020).
67. V. L. S. Silva, C. E. Heaney, and C. C. Pain, "Data assimilation predictive GAN (DA-PredGAN): Applied to determine the spread of COVID-19," arXiv:2105.07729 [cs.LG] (2021).
68. C. Quilodrán-Casas, V. S. Silva, R. Arcucci, C. E. Heaney, Y.-K. Guo, and C. C. Pain, "Digital twins based on bidirectional LSTM and GAN for modelling the COVID-19 pandemic," Neurocomputing 470, 11–28 (2022).
69. T. Kadeethum, D. O'Malley, J. N. Fuhg, Y. Choi, J. Lee, H. S. Viswanathan, and N. Bouklas, "A framework for data-driven solution and parameter estimation of PDEs using conditional generative adversarial networks," arXiv:2105.13136 [cs.LG] (2021).
70. M. Cheng, F. Fang, C. C. Pain, and I. M. Navon, "Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network," Comput. Methods Appl. Mech. Eng. 365, 113000 (2020).
71. M. Cheng, F. Fang, C. C. Pain, and I. M. Navon, "A real-time flow forecasting with deep convolutional generative adversarial network: Application to flooding event in Denmark," Phys. Fluids 33, 056602 (2021).
72. J. Baiges, R. Codina, and S. Idelsohn, "A domain decomposition strategy for reduced order models. Application to the incompressible Navier-Stokes equations," Comput. Methods Appl. Mech. Eng. 267, 23–42 (2013).
73. L. Gastaldi, "A domain decomposition method associated with the streamline diffusion FEM for linear hyperbolic systems," Appl. Numer. Math. 10, 357–380 (1992).
74. L. M. Yang and I. Grooms, "Machine learning techniques to construct patched analog ensembles for data assimilation," J. Comput. Phys. 443, 110532 (2021).
75. A. Skillen, A. Revell, and T. Craft, "Accuracy and efficiency improvements in synthetic eddy methods," Int. J. Heat Fluid Flow 62, 386–394 (2016).
76. K. Fukami, Y. Nabae, K. Kawai, and K. Fukagata, "Synthetic turbulent inflow generator using machine learning," Phys. Rev. Fluids 4, 064603 (2019).
77. J. Kim and C. Lee, "Deep unsupervised learning of turbulence for inflow generation at various Reynolds numbers," J. Comput. Phys. 406, 109216 (2020).
78. J. Kjølaas, A. De Leebeeck, and S. Johansen, "Simulation of hydrodynamic slug flow using the LedaFlow slug capturing model," in 16th International Conference on Multiphase Production Technology (OnePetro, 2013).
79. A. Bonzanini, D. Picchi, and P. Poesio, "Simplified 1D incompressible two-fluid model with artificial diffusion for slug flow capturing in horizontal and nearly horizontal pipes," Energies 10, 1372 (2017).
80. B. I. Krasnopolsky and A. A. Lukyanov, "A conservative fully implicit algorithm for predicting slug flows," J. Comput. Phys. 355, 597–619 (2018).
81. B. Ma and N. Srinil, "Planar dynamics of inclined curved flexible riser carrying slug liquid-gas flows," J. Fluids Struct. 94, 102911 (2020).
82. T.-W. Kim, S. Kim, and J.-T. Lim, "Modeling and prediction of slug characteristics utilizing data-driven machine-learning methodology," J. Pet. Sci. Eng. 195, 107712 (2020).
83. G. Tryggvason and J. Lu, "Direct numerical simulations of multiphase flows: Opportunities and challenges," AIP Conf. Proc. 2293, 030002 (2020).
84. F. Xie, X. Zheng, M. S. Triantafyllou, Y. Constantinides, Y. Zheng, and G. E. Karniadakis, "Direct numerical simulations of two-phase flow in an inclined pipe," J. Fluid Mech. 825, 189–207 (2017).
85. G. Tryggvason, R. Scardovelli, and S. Zaleski, Direct Numerical Simulations of Gas-Liquid Multiphase Flows (Cambridge University Press, 2011).
86. A. Obeysekara, P. Salinas, C. E. Heaney, L. Kahouadji, L. Via-Estrem, J. Xiang, N. Srinil, A. Nicolle, O. K. Matar, and C. C. Pain, "Prediction of multiphase flows with sharp interfaces using anisotropic mesh optimisation," Adv. Eng. Software 160, 103044 (2021).
87. P. Baldi and K. Hornik, "Neural networks and principal component analysis: Learning from examples without local minima," Neural Networks 2, 53–58 (1989).
88. L. Via-Estrem, P. Salinas, Z. Xie, J. Xiang, J.-P. Latham, S. Douglas, I. Nistor, and C. Pain, "Robust control volume finite element methods for numerical wave tanks using extreme adaptive anisotropic meshes," Int. J. Numer. Methods Fluids 92, 1707–1722 (2020).
89. Z. Xie, D. Pavlidis, P. Salinas, J. R. Percival, C. C. Pain, and O. K. Matar, "A balanced-force control volume finite element method for interfacial flows with surface tension using adaptive anisotropic unstructured meshes," Comput. Fluids 138, 38–50 (2016).
90. Y. Fan, E. Pereyra, and C. Sarica, "Experimental study of pseudo-slug flow in upward inclined pipes," J. Nat. Gas Sci. Eng. 75, 103147 (2020).
91. C. Friedemann, M. Mortensen, and J. Nossen, "Gas-liquid slug flow in a horizontal concentric annulus, a comparison of numerical simulations and experimental data," Int. J. Heat Fluid Flow 78, 108437 (2019).
92. P. Ujang, C. Lawrence, C. Hale, and G. Hewitt, "Slug initiation and evolution in two-phase horizontal flow," Int. J. Multiphase Flow 32, 527–552 (2006).
93. A. H. Zitouni, A. Arabi, Y. Salhi, Y. Zenati, E. K. Si-Ahmed, and J. Legrand, "Slug length and frequency upstream a sudden expansion in gas-liquid intermittent flow," Exp. Comput. Multiphase Flow 3, 124–130 (2021).
94. Google Research, see https://colab.research.google.com for "Google Colab" (last accessed November 16, 2021).
95. E. Bisong, Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide (Apress, Berkeley, CA, 2019), pp. 59–64.
96. See https://github.com/acse-zrw20/DD-GAN-AE for some codes and information about the various neural networks used in this paper.