Physics-informed neural networks (PINNs) are successful machine-learning methods for the solution and identification of partial differential equations. We employ PINNs to solve the Reynolds-averaged Navier–Stokes equations for incompressible turbulent flows without any specific model or assumption for turbulence, using only the data on the domain boundaries. We first show the applicability of PINNs to the Navier–Stokes equations for laminar flows by solving the Falkner–Skan boundary layer. We then apply PINNs to four turbulent-flow cases, i.e., the zero-pressure-gradient boundary layer, the adverse-pressure-gradient boundary layer, and the turbulent flows over a NACA4412 airfoil and the periodic hill. Our results show the excellent applicability of PINNs for laminar flows with strong pressure gradients, where predictions with less than 1% error can be obtained. For turbulent flows, we also obtain very good accuracy, even for the Reynolds-stress components.

## I. INTRODUCTION

In recent years, machine-learning (ML) methods have started to play a revolutionary role in many scientific disciplines. The emergence of various architectures, e.g., convolutional neural networks (CNNs)^{1} and long short-term memory (LSTM),^{2} has led to the development of a variety of deep-learning (DL)-based frameworks for modeling complex physical systems by accounting for spatiotemporal dependencies in the predictions. Deep-learning applications have been demonstrated in many scientific and engineering disciplines, e.g., astronomy,^{3} climate modeling,^{4} solid mechanics,^{5} chemistry,^{6} and sustainability.^{7,8} Fluid mechanics has been one of the active research topics for the development of innovative ML-based approaches.^{9–11} The contribution of ML in turbulent-flow problems is mainly in the contexts of data-driven turbulence closure modeling,^{12,13} prediction of temporal dynamics in turbulent flows,^{14,15} nonlinear modal decomposition,^{16,17} extraction of turbulence theory from data,^{18} non-intrusive sensing in turbulent flows,^{19,20} and flow control.^{21,22} More recently, exploiting the universal-approximation property of neural networks for solving complex partial differential equation (PDE) systems has attracted attention, with the aim of providing efficient solvers that approximate the solution.

Deep learning provides powerful modeling approaches for data-rich domains such as vision, language, and speech. However, learning interpretable and generalizable models is still challenging, especially in domains with limited available data, such as complex physical systems.^{8,23} Purely data-driven approaches based on deep learning demand large datasets for training that may not be available for many scientific problems. Moreover, these models generally do not take into account the physical constraints and may fit the observational data very well but fail to comply with the underlying physical principles. Therefore, integrating governing physical laws and domain knowledge into the model training may lead to more accurate and robust models. The domain knowledge can act as an informative prior for teaching the model about the physical or mathematical behavior of the system besides the observational data. Physics-informed neural networks (PINNs), introduced by Raissi *et al.*,^{24} are deep-learning-based frameworks able to integrate data and physical laws in the form of governing partial differential equations (PDEs) for learning. PINNs have been demonstrated to be well suited for the solution of forward and inverse problems related to several different types of PDEs. PINNs have been used to simulate vortex-induced vibrations^{25} and tackle ill-posed inverse fluid-mechanics problems.^{26} Moreover, PINNs have been employed for super-resolution and denoising of 4D-flow magnetic resonance imaging (MRI)^{27} and prediction of near-wall blood flow from sparse data,^{28} among other applications.^{29–33} Jin *et al.*^{34} showed the applicability of PINNs for the simulation of turbulence directly, where good agreement was obtained between the direct numerical simulation (DNS) results and the PINN simulation results.
Cai *et al.*^{35} proposed a method based on PINNs to infer the full continuous three-dimensional velocity and pressure fields from snapshots of three-dimensional temperature fields obtained by tomographic background-oriented Schlieren (Tomo-BOS) imaging. Recently, Eivazi and Vinuesa^{36} applied PINNs for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. A detailed discussion on the prevailing trends in embedding physics into machine-learning algorithms and diverse applications of PINNs can be found in the work by Karniadakis *et al.*^{37}

In this study, we employ PINNs for solving the Reynolds-averaged Navier–Stokes (RANS) equations for incompressible turbulent flows without any specific model or assumption for turbulence. In the RANS equations, the loss of information in the averaging process leads to an underdetermined system of equations. Traditional solvers require the introduction of modeling assumptions to close the system. We introduce an alternative approach to tackle this problem, using the information from a few data examples together with the underdetermined system of equations to train a neural network that solves the system. In particular, we use the data on the domain boundaries (including Reynolds-stress components) along with the RANS equations that guide the learning process toward the solution. The spatial coordinates are the inputs of the model, and the mean-flow quantities, i.e., velocity components, pressure, and Reynolds-stress components, are the outputs. Automatic differentiation (AD)^{38} is applied to differentiate the outputs with respect to the inputs to construct the RANS equations. Only the data on the domain boundaries are used as the training dataset for the supervised-learning part, while a set of points inside the domain together with the points on the boundaries are used to calculate the residual of the governing equations, which acts as the unsupervised loss. The Reynolds number is set through the governing equations. We take mean-flow quantities obtained from DNS or well-resolved large-eddy simulation (LES) of canonical turbulent flow cases as the reference.

This article is organized as follows: in Sec. II, we provide an overview of the physics-informed neural networks and discuss the theoretical background; in Sec. III, we discuss the application of PINNs for solving RANS equations; and finally, in Sec. IV, we provide a summary and the conclusions of the study.

## II. METHODOLOGY

In this section, we provide the methodological background of PINNs and their application for solving partial differential equations. We also discuss the implementation of PINNs for solving the RANS equations. The goal of PINNs is to integrate the information from the governing physical laws in the form of partial differential equations into the training process of a deep neural network so that the solution can be approximated by only having a limited set of training samples, e.g., initial and boundary conditions. A PINN model comprises a multilayer perceptron (MLP) that maps the coordinates to the solution and a so-called residual network, which calculates the residual of the governing equations.

### A. Multilayer perceptrons (MLPs)

Deep neural networks are universal approximators of continuous functions composed of two or more levels (the so-called layers) of simple but non-linear operations at each level. Each layer contains a number of nodes or neurons. By applying a sufficient number of such non-linear transformations and having a large dataset for training, a deep neural network can learn very complex functions using the backpropagation algorithm.^{39} Multilayer perceptrons (MLPs)^{40} (also known as fully connected neural networks) are the most basic type of artificial neural network, comprising two or more layers of nodes, where each node is connected to all nodes in the preceding and succeeding layers. We consider $X$ and $\Psi$ as the input and output spaces and each pair of vectors $(x, y)$ as a training example or sample. During the training, we optimize the parameters of the neural network, i.e., the weights $w$ and biases $b$, to approximate the mapping $f: X \rightarrow \Psi$ such that a loss function $L(f(x); y)$ is minimized. The output of the MLP is obtained using a set of simple operations, i.e., linear matrix operations followed by an element-wise non-linear function, e.g., a sigmoid or hyperbolic tangent, known as the activation function. For layer $i$, the weight matrix $w_i$ and the bias $b_i$ perform the linear transformation from layer $i-1$ to the next; then, the activation function $g_i$ yields the output of the layer as $z_i = g_i(w_i z_{i-1} + b_i)$.
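The layer recursion $z_i = g_i(w_i z_{i-1} + b_i)$ can be sketched in plain Python (an illustrative helper, not the authors' implementation; the function name and list-based weight layout are ours):

```python
import math

def mlp_forward(x, weights, biases):
    """Forward pass of a fully connected network: for each hidden layer i,
    z_i = tanh(w_i z_{i-1} + b_i); the last layer is kept linear."""
    z = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        # linear matrix operation: a = W z + b
        a = [sum(w * zk for w, zk in zip(row, z)) + bj
             for row, bj in zip(W, b)]
        # element-wise activation on hidden layers only
        z = [math.tanh(v) for v in a] if i < len(weights) - 1 else a
    return z
```

A deep-learning framework performs the same composition with dense matrix kernels; the structure of the computation is identical.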

### B. Physics-informed neural networks (PINNs)

In PINNs, the temporal and spatial coordinates $(t, x)$ are the inputs of the MLP, and the solution vector of the PDE system $u(t, x)$ is the output. Let us consider a general spatiotemporal partial differential equation as

$u_t + \mathcal{N}[u] = 0, \quad x \in \Omega, \; t \in [0, T], \quad (1)$

where $x$ is the vector of spatial coordinates defined over the domain $\Omega$, $u(t, x)$ is the solution vector of the PDE, and $u_t$ is its derivative with respect to time $t$ in the period $[0, T]$. $\mathcal{N}[\cdot]$ denotes a nonlinear differential operator. Let us consider the residual of the partial differential equations $e(t, x)$ as the left-hand side of Eq. (1):

$e(t, x) = u_t + \mathcal{N}[u], \quad (2)$

and the approximation of the solution by the MLP as $\tilde{u} = f(t, x)$. The function $f$, which is parameterized by the MLP, is a composition of simple differentiable operations. By applying the chain rule on $f$, the derivatives with respect to space and time can be obtained as $\partial u / \partial x = \mathcal{G}(u, x)$ and $\partial u / \partial t = \mathcal{G}(u, t)$, where $\mathcal{G}$ indicates the chain-rule operation. Automatic differentiation (AD)^{38} is utilized to apply the chain rule and formulate the governing equations. AD can be implemented directly in the deep-learning framework since it is already used to compute the gradients and update the network parameters, i.e., the weights $w$ and biases $b$, during the training. Therefore, $u_t$ and $\mathcal{N}[u]$ in Eq. (1) can be computed using AD, thus yielding the residual of the governing equations. The aforementioned process for computation of the residuals through the implementation of AD is known as the residual network. We use the open-source deep-learning framework TensorFlow^{41} to develop our PINN models. TensorFlow provides the "tf.GradientTape" application programming interface (API) for AD by computing the gradient of a computation with respect to its inputs. TensorFlow "records" the computational operations executed inside the context of a tf.GradientTape onto a so-called tape and then uses that tape to compute the gradients of the recorded computation using reverse-mode differentiation.
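TensorFlow's tf.GradientTape implements reverse-mode AD; the chain-rule bookkeeping it automates can be illustrated with a minimal forward-mode sketch (purely illustrative, not the TensorFlow mechanism; the `Dual` class is ours):

```python
import math

class Dual:
    """Minimal forward-mode AD: a value and its derivative move together,
    so the chain rule is applied mechanically at every operation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def tanh(z):
    t = math.tanh(z.val)
    return Dual(t, (1.0 - t * t) * z.der)  # d/dz tanh = 1 - tanh^2

# derivative of u(x) = tanh(2x) at x = 0 is 2
x = Dual(0.0, 1.0)       # seed: dx/dx = 1
u = tanh(Dual(2.0) * x)  # u.der now holds du/dx = 2
```

Reverse mode, as used by tf.GradientTape, traverses the recorded tape backward instead, which is cheaper when one output depends on many parameters.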

The training process of a PINN consists of both supervised and unsupervised parts. We refer to supervised learning for those points in the solution domain for which the target solution is available, e.g., boundary conditions, so that the loss can be obtained in a supervised manner. Unsupervised learning refers to the calculation of the residual of the PDE for the samples for which the solution is unknown. Following the work by Raissi *et al.*,^{24} we implement a full-batch training procedure to learn the function *f*. We initiate the training process with the Adam optimizer^{42} for 1000 epochs using a learning rate of $1 \times 10^{-3}$. Then, we continue the training using the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm^{43} to obtain the solution. The optimization process of the L-BFGS algorithm is stopped automatically based on the increment tolerance. In the following, we discuss the implementation of PINNs for solving the RANS equations.
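The increment-tolerance stopping rule for the second training stage can be sketched as follows (plain gradient descent stands in for L-BFGS, whose curvature updates are omitted; the function name and arguments are ours):

```python
def minimize(loss, grad, w, lr=0.1, tol=1e-9, max_iter=10_000):
    """Iterate until the loss increment falls below tol, mimicking the
    increment-tolerance stopping used for the quasi-Newton stage."""
    prev = loss(w)
    for _ in range(max_iter):
        # descent step on every parameter
        w = [wi - lr * gi for wi, gi in zip(w, grad(w))]
        cur = loss(w)
        if abs(prev - cur) < tol:  # increment tolerance reached
            break
        prev = cur
    return w
```

In practice, the same criterion is exposed by standard L-BFGS implementations as a relative tolerance on successive function values.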

### C. PINNs for RANS equations

A schematic of PINNs for the RANS equations is depicted in Fig. 1. In a general two-dimensional set-up, the spatial coordinates ($x$ and $y$) are the inputs of an MLP, and the outputs are the mean streamwise and wall-normal components of velocity ($U$ and $V$, respectively), the pressure ($P$), and the Reynolds-stress components ($\overline{u^2}$, $\overline{uv}$, and $\overline{v^2}$). Automatic differentiation^{38} is applied to differentiate the outputs with respect to the inputs and formulate the RANS equations (continuity and momentum equations). In our setup, only the data on the domain boundaries are used as the training dataset for supervised learning. The total loss is the summation of the supervised loss and the residual of the governing equations as follows:

$L = L_e + L_b = \dfrac{1}{N_e} \sum_{n=1}^{N_e} \sum_{i} \left| \epsilon_i^n \right|^2 + \dfrac{1}{N_b} \sum_{n=1}^{N_b} \left| U_b^n - \tilde{U}_b^n \right|^2, \quad (3)$

where $L_e$ and $L_b$ are the loss-function components corresponding to the residual of the RANS equations and the boundary conditions, respectively. Here, $N_e$ represents the number of points for which the residual of the RANS equations is calculated (the so-called collocation points), and $N_b$ is the number of training samples on the domain boundaries. We consider a set of points inside the domain and compute the residuals on these points together with the points on the domain boundaries. Here, $U_b^n = [U_b^n, V_b^n, P_b^n, \overline{u^2}_b^n, \overline{uv}_b^n, \overline{v^2}_b^n]^T$ represents the given data for point $n$ on the boundaries, $\tilde{U}_b^n$ is the vector of PINN predictions at the corresponding point, and $\epsilon_i^n$ is the residual of the $i$th governing equation at point $n$. It is also possible to consider weighting coefficients to balance the different terms of the loss function and accelerate convergence in the training process.^{34} Figure 2 shows the total loss $L$, the physical loss $L_e$, and the boundary loss $L_b$ for different tests.
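The composition of the total loss described above can be sketched as a small helper (illustrative only; scalar boundary data are used for brevity, and the function name is ours):

```python
def total_loss(residuals, preds_b, targets_b):
    """L = L_e + L_b: mean squared PDE residual over the collocation
    points plus mean squared mismatch with the boundary data."""
    L_e = sum(e * e for e in residuals) / len(residuals)
    L_b = sum((p - t) ** 2 for p, t in zip(preds_b, targets_b)) / len(targets_b)
    return L_e + L_b
```

Weighting coefficients would simply scale the two terms before the sum.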

## III. RESULTS AND DISCUSSION

We employ PINNs for solving the RANS equations for four turbulent flow cases, i.e., the zero-pressure-gradient (ZPG) boundary layer,^{44} the adverse-pressure-gradient (APG) boundary layer,^{45} and the turbulent flows over a NACA4412 airfoil^{46} and the periodic hill. We also show the applicability of PINNs for the simulation of laminar boundary-layer flows. To evaluate the accuracy of the predictions, we consider the relative $\ell_2$-norm of the error $E_i$ over all the computational points for the $i$th variable as

$E_i = \dfrac{|| U_i - \tilde{U}_i ||_2}{|| U_i ||_2} \times 100, \quad (4)$

where $|| \cdot ||_2$ denotes the $\ell_2$-norm, and $U$ and $\tilde{U}$ indicate the vectors of reference data and PINN predictions, respectively. Results for $E_i$ are reported in Table I for all the test cases. The dash symbol in this table denotes that the corresponding variable is not calculated in that test case. The relative computational time (RCT) with respect to the training time of the Falkner–Skan boundary layer (FSBL) test case is also reported in Table I. The computational time stands for the time required for the training of the PINN model; for the FSBL, it is about 2.5 minutes on a personal workstation. We discuss each test case in detail in the following.
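The error metric can be written as a small helper (an illustrative sketch of the definition above; the function name is ours):

```python
def rel_l2_error(ref, pred):
    """Relative l2-norm error, in percent, between reference and prediction."""
    num = sum((r - p) ** 2 for r, p in zip(ref, pred)) ** 0.5
    den = sum(r * r for r in ref) ** 0.5
    return 100.0 * num / den
```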

| Test | $E_U$ | $E_V$ | $E_P$ | $E_{\overline{u^2}}$ | $E_{\overline{uv}}$ | $E_{\overline{v^2}}$ | RCT |
|---|---|---|---|---|---|---|---|
| FSBL | 0.07% | 0.12% | 0.001% | ⋯ | ⋯ | ⋯ | 1 |
| ZPG | 1.02% | 4.25% | ⋯ | ⋯ | 6.46% | ⋯ | 2.49 |
| APG | 0.28% | 1.57% | 4.60% | ⋯ | 7.96% | ⋯ | 3.05 |
| NACA4412 | 1.56% | 2.17% | 7.30% | 9.43% | 11.36% | 4.69% | 4.26 |
| Periodic hill | 2.77% | 19.70% | 8.61% | 28.18% | 16.70% | 20.24% | 5.38 |

### A. Falkner–Skan boundary layer (FSBL)

As the first test case, we solve the two-dimensional Navier–Stokes equations for the Falkner–Skan boundary layer at $Re = 100$ using PINNs to show their applicability to laminar boundary-layer flows. We consider a boundary layer with an adverse pressure gradient with $m = -0.08$, leading to $\beta_{FS} = 2m/(m+1) = -0.1739$, where $\beta_{FS} = -0.1988$ corresponds to separation. The inputs of the PINN model are $x$ and $y$, and the outputs are the velocity components and the pressure. We only use the data on the domain boundaries for the velocity components for supervised learning. The reference data are obtained from the analytical solution on a grid with a resolution of $(N_x, N_y) = (501, 201)$. The MLP comprises 8 hidden layers, each containing 20 neurons, with the hyperbolic tangent (tanh) as the activation function. We first use the Adam optimizer^{42} for the training of the model and then apply the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm to obtain a more accurate solution. The optimization process of the BFGS algorithm is stopped automatically based on the increment tolerance. A similar model architecture and training procedure are implemented for the simulation of the other test cases using PINNs.
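Selecting only the boundary points of a structured grid as the supervised set can be sketched as follows (an illustrative helper with our own naming; interior points serve only as collocation points):

```python
def boundary_indices(nx, ny):
    """Row-major indices of the points on the boundary of an nx-by-ny grid;
    interior points are kept only as unsupervised collocation points."""
    return [i * ny + j
            for i in range(nx)
            for j in range(ny)
            if i in (0, nx - 1) or j in (0, ny - 1)]
```

For the FSBL grid of $501 \times 201$ points, this mask keeps the four edges and discards the interior from the supervised loss.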

Results are illustrated in Fig. 3 as the contours of *U*, *V*, and *P* obtained from PINN predictions (left) and reference data (right). The relative $\u21132$-norm of errors is reported in Table I. Our results suggest that excellent predictions can be obtained using PINNs for laminar boundary layer flows. It can be seen in Fig. 3 that although we only use velocity components on the domain boundaries as the training data, the PINN model provides accurate predictions for the pressure.

### B. ZPG turbulent boundary layer

For the ZPG boundary layer, we employ the simulation data of Ref. 44 for a domain range of $1000 < Re_\theta < 7000$, where $Re_\theta = \theta U_\infty / \nu$ represents the Reynolds number based on the momentum thickness $\theta$, the free-stream velocity $U_\infty$, and the kinematic viscosity $\nu$. The reference data have a resolution of $(N_x, N_y) = (5985, 200)$. For this test case, we consider the continuity and streamwise momentum equations as the governing equations and the mean streamwise $U$ and wall-normal $V$ velocity components and the Reynolds shear stress $\overline{uv}$ as the outputs of the model.

Results of the PINN simulation are depicted in Figs. 4(a)–4(c) in comparison with the reference data. Figure 4(a) shows the contours of $U$, $V$, and $\overline{uv}$. The relative $\ell_2$-norm of errors for $U$, $V$, and $\overline{uv}$ is 1.02%, 4.25%, and 6.46%, respectively, as reported in Table I. The characteristics of the boundary layer are also quantified in terms of the shape factor, defined as the ratio of displacement and momentum thickness, $H_{12} = \delta^* / \theta$, and the skin-friction coefficient $c_f = 2 (u_\tau / U_\infty)^2$, where $u_\tau = \sqrt{\tau_w / \rho}$ represents the friction velocity ($\tau_w$ is the mean wall-shear stress and $\rho$ is the fluid density). The relevant velocity and length scales close to the wall are given by $u_\tau$ and $\ell^* = \nu / u_\tau$. The inner-scaled quantities are, thus, written as, e.g., $U^+ = U / u_\tau$ and $y^+ = y / \ell^*$. Figure 4(b) shows the shape factor $H_{12}$ and the skin-friction coefficient $c_f$ obtained from the PINN predictions against the reference data. Moreover, Fig. 4(c) depicts the inner-scaled mean streamwise velocity $U^+$ and Reynolds-stress $\overline{uv}^+$ profiles at three streamwise locations ($Re_\theta = 2500$, 4000, and 5500) indicated by white dashed lines in Fig. 4(a) (top left). Our results show the excellent accuracy of the PINN simulation.
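The inner scaling used above can be expressed compactly (a sketch with scalar inputs for simplicity; the function name is ours):

```python
def inner_scale(U, y, tau_w, rho, nu):
    """Friction velocity u_tau = sqrt(tau_w/rho), viscous length l* = nu/u_tau.
    Returns the inner-scaled pair (U+, y+) = (U/u_tau, y/l*)."""
    u_tau = (tau_w / rho) ** 0.5
    l_star = nu / u_tau
    return U / u_tau, y / l_star
```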

### C. APG turbulent boundary layer

As the next test case, we simulate the APG turbulent boundary layer for a Reynolds-number range of $910 < Re_\theta < 3360$ at a constant Clauser pressure-gradient parameter $\beta = (\delta^* / \tau_w)\, \mathrm{d}P_\infty / \mathrm{d}x \simeq 1$, where $P_\infty$ is the free-stream pressure.^{45} The reference data have a resolution of $(N_x, N_y) = (1009, 301)$. For this test case, we consider a PINN model with eight hidden layers, each containing 20 neurons, and the continuity and the streamwise and wall-normal momentum equations as the governing equations. We observe that, even in the presence of an adverse pressure gradient, excellent predictions can be obtained using PINNs, as shown in Fig. 5, where we compare the predictions with the reference data. It can be seen in Fig. 5(a) that, although we do not use any turbulence model, the predictions for the Reynolds shear stress $\overline{uv}$ coincide with the reference data. The relative $\ell_2$-norm of errors is reported in Table I, where the lowest and the highest errors correspond to $U$ and $\overline{uv}$ and are equal to 0.28% and 7.96%, respectively. Figure 5(b) shows the accuracy of the PINN predictions against the reference data in terms of the boundary-layer characteristics, i.e., $H_{12}$ (top) and $c_f$ (bottom). Moreover, Fig. 5(c) depicts the inner-scaled streamwise velocity (top) and Reynolds stress (bottom) obtained from the PINN predictions and the reference data at three different $Re_\theta$ values of 1623, 2138, and 2588. Our results show that the PINN model can provide accurate predictions for the APG turbulent boundary layer.

### D. NACA4412 airfoil

Next, we use PINNs for the simulation of the turbulent boundary layer developing on the suction side of a NACA4412 airfoil at a Reynolds number, based on $U_\infty$ and the chord length $c$, of $Re_c = U_\infty c / \nu = 200\,000$. The data for the training and testing of the model are taken from Ref. 46, where the results were obtained from well-resolved large-eddy simulations (LESs) using a spectral-element method. We consider a domain range of $0.5 < x/c < 1$ and a PINN model with eight hidden layers, each with 20 neurons. The reference data have a resolution of $(N_x, N_y) = (7014, 170)$. The two-dimensional RANS equations are considered as the governing equations. For this test case, we use the wall-normal-based spatial coordinates $x_n$ and $y_n$ as the inputs of the MLP and $U$, $V$, $P$, $\overline{u^2}$, $\overline{uv}$, and $\overline{v^2}$ as the outputs. Results are reported as the profiles of the inner-scaled streamwise velocity and Reynolds-stress components at $x/c \simeq 0.625$, 0.75, and 0.875 in Fig. 6. Our results show an excellent agreement between the PINN predictions and the reference data, both for the velocity and the Reynolds-stress components. Table I shows the relative $\ell_2$-norm of errors in the PINN predictions. It can be seen that, for the mean-velocity components $U$ and $V$, the error is less than 3%. The highest error corresponds to $\overline{uv}$ and is equal to 11.36%.

### E. Periodic hill

Finally, we evaluate the performance of our proposed framework for solving the RANS equations using PINNs in the simulation of the turbulent flow over a periodic hill at the Reynolds number $Re_b = 2800$ based on the crest height $H$ and the bulk velocity $U_b$ at the crest. The training and testing data are obtained from a DNS using a spectral-element method. For this test case, we consider a domain range of $1 < x/H < 5$, as depicted in Fig. 7 by the gray dashed lines. The reference data have a resolution of $(N_x, N_y) = (89\,540)$. Similar to the previous test cases, the data on the domain boundaries are used for the training of a PINN model with eight hidden layers, each containing 20 neurons, where the two-dimensional RANS equations are considered as the governing equations. Figure 7(a) shows the streamwise velocity $U$ contour and the flow streamlines for the PINN predictions (left) and the reference data (right). It can be observed that the PINN model is able to simulate the separated flow without a turbulence model, only by using the data on the domain boundaries and the RANS equations as the governing equations. It should be noted that the velocity and Reynolds-stress components on the top and bottom boundaries are equal to zero due to the no-slip condition. Therefore, we only need the data at the inflow and outflow boundaries, and the pressure on all the boundaries, for the supervised learning. Figure 7(b) illustrates the streamwise velocity profiles at five different streamwise locations for the PINN predictions and the reference data. Our results show an excellent agreement between the PINN predictions and the reference data. It can be seen in Fig. 7 that the extent of the separated region and the reattachment point are accurately predicted by the PINN model. The relative $\ell_2$-norm of errors is reported in Table I. The lowest error corresponds to $U$ and is equal to 2.77%. The highest error is equal to 28.18% and is associated with $\overline{u^2}$.

## IV. SUMMARY AND CONCLUSIONS

We introduced an alternative approach based on PINNs for solving the RANS equations. In contrast to traditional methods, we solve the RANS equations for incompressible turbulent flows without any specific model or assumption for turbulence and through the use of the data on the domain boundaries (including Reynolds-stress components) along with the governing equations that guide the learning process toward the solution. We simulate the Falkner–Skan boundary layer with adverse pressure gradient using PINNs to show the applicability of the model for laminar boundary layer flows. In this case, we only used the data on the boundaries for the velocity components as the training data. Our results suggest the applicability of PINNs for laminar boundary layer flows where excellent predictions can be obtained, even for the pressure. Moreover, we applied PINNs for the simulation of ZPG and APG turbulent boundary layers, and turbulent flows over the NACA4412 airfoil and the periodic hill. We only used the data on the domain boundaries, including Reynolds-stress components, for supervised learning while we considered the residual of the RANS equations as the loss for unsupervised learning. A set of points inside the domain and the points on the domain boundaries are employed to evaluate the residual of the governing equations. From these points, we predict the mean-flow quantities. For the points on the boundaries, we calculate the supervised loss by comparing the predictions with the training data, while for all the points, we compute the residuals by constructing the RANS equations using AD. For ZPG and APG boundary layers, we obtained excellent predictions using PINNs, where the average relative $\u21132$-norm of errors is equal to 3.91% and 3.60%, respectively. 
Our results for the NACA4412 airfoil and the periodic hill show that PINNs can provide accurate predictions for the streamwise velocity with less than 3% error while leading to good accuracy even in the simulation of Reynolds-stress components.

## ACKNOWLEDGMENTS

H.E. acknowledges the support of the University of Tehran. R.V. acknowledges the Göran Gustafsson foundation for supporting this research.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

### Author Contributions

**Hamidreza Eivazi:** Conceptualization (equal); Investigation (equal); Methodology (equal); Software (equal); Writing - original draft (equal); Writing - review and editing (equal). **Mojtaba Tahani:** Supervision (equal). **Philipp Schlatter:** Supervision (equal). **Ricardo Vinuesa:** Conceptualization (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Supervision (equal); Writing - review and editing (equal).

## DATA AVAILABILITY

The data that support the findings of this study are available from the corresponding author upon reasonable request.

### APPENDIX A: HYPERPARAMETERS

We performed a hyperparameter analysis to select the architecture of the PINN model. Here, the results are reported for the periodic-hill test case; similar behavior has been observed in the other test cases. Table II summarizes the results for different sizes of the neural network, where the NN size represents the number of hidden layers × the number of neurons per hidden layer. $L_b^n$ and $L_e^n$ are the boundary loss and the physical loss, respectively, at the last training epoch. We used the hyperbolic tangent as the activation function and tested different numbers of hidden layers and neurons per hidden layer. We observed that very good results can be obtained using an NN size of 8 × 20, leading to an $E_U$ of 2.77%. The neural network with a size of 4 × 20 also leads to very good predictions, with an $E_U$ of 2.87%. Our results show that, although increasing the NN size leads to a lower $L_b^n$ and $L_e^n$, it may not improve the accuracy of the predictions: a PINN model with an NN size of 10 × 100 leads to an $E_U$ of 3.78%.

| NN size | $E_U$ | $E_V$ | $E_P$ | $E_{\overline{u^2}}$ | $E_{\overline{uv}}$ | $E_{\overline{v^2}}$ | $L_b^n$ | $L_e^n$ |
|---|---|---|---|---|---|---|---|---|
| 2 × 10 | 4.79% | 28.41% | 15.79% | 34.98% | 34.40% | 23.62% | 3.29 × $10^{-4}$ | 1.79 × $10^{-4}$ |
| 4 × 10 | 4.51% | 25.39% | 8.82% | 32.23% | 22.82% | 19.32% | 6.99 × $10^{-5}$ | 4.95 × $10^{-5}$ |
| 4 × 20 | 2.87% | 19.72% | 9.33% | 27.33% | 29.44% | 28.85% | 2.90 × $10^{-5}$ | 2.26 × $10^{-5}$ |
| 8 × 20 | 2.77% | 19.70% | 8.61% | 28.18% | 16.70% | 20.24% | 1.93 × $10^{-5}$ | 8.31 × $10^{-6}$ |
| 8 × 50 | 3.36% | 21.94% | 14.4% | 28.55% | 24.51% | 23.28% | 1.02 × $10^{-5}$ | 3.26 × $10^{-6}$ |
| 10 × 50 | 4.27% | 17.66% | 10.75% | 34.88% | 29.48% | 28.95% | 1.26 × $10^{-5}$ | 6.03 × $10^{-6}$ |
| 10 × 100 | 3.78% | 18.83% | 13.74% | 31.68% | 21.48% | 32.70% | 8.17 × $10^{-6}$ | 2.84 × $10^{-6}$ |

We also compare the performance of the PINNs with three different activation functions, i.e., the hyperbolic tangent (tanh), swish, and sigmoid. For these tests, we select a network size of 8 × 20. Figure 8 depicts the boundary loss (solid lines) and the physical loss (dashed lines) during the training. We observed that the tanh activation function leads to a faster and better convergence compared with swish and sigmoid. Moreover, more accurate predictions can be obtained using the tanh activation function, where $E_U$ for tanh, swish, and sigmoid is 2.77%, 3.56%, and 4.33%, respectively. All the hyperparameters for each test case are reported in Table III.
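For reference, the three activation functions compared are (standard definitions; the sketch below is ours):

```python
import math

def tanh(x):
    return math.tanh(x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def swish(x):
    # swish is defined as x * sigmoid(x)
    return x * sigmoid(x)
```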

| Test | $N_b$ | $N_e$ | $N_{\mathrm{Adam}}$ | $\phi$ | $N_l$ | $N_n$ | lr |
|---|---|---|---|---|---|---|---|
| FSBL | 500 | 1035 | 1000 | tanh | 8 | 20 | 1 × $10^{-3}$ |
| ZPG | 998 | 2400 | 1000 | tanh | 8 | 20 | 1 × $10^{-3}$ |
| APG | 802 | 3311 | 1000 | tanh | 8 | 20 | 1 × $10^{-3}$ |
| NACA4412 | 620 | 2550 | 1000 | tanh | 8 | 20 | 1 × $10^{-3}$ |
| Periodic hill | 1254 | 2430 | 1000 | tanh | 8 | 20 | 1 × $10^{-3}$ |

## References
