Generalizability of machine-learning (ML)-based turbulence closures, i.e., their ability to accurately predict unseen practical flows, remains an important challenge. It is well recognized that the neural network (NN) architecture and training protocol profoundly influence generalizability. At the Reynolds-averaged Navier–Stokes level, NN-based turbulence closure modeling is difficult for two important reasons: the inherent complexity of the constitutive relation, which arises from flow-dependent nonlinearity and bifurcations; and the inordinate difficulty of obtaining high-fidelity data covering the entire parameter space of interest. A predictive turbulence model must therefore be robust enough to perform reasonably outside its domain of training. In this context, the objective of this work is to investigate the approximation capabilities of standard moderate-sized fully connected NNs. We systematically investigate the effects of (i) the intrinsic complexity of the solution manifold; (ii) the sampling procedure (interpolation vs extrapolation); and (iii) the optimization procedure. To overcome the data-acquisition challenge, three proxy-physics turbulence surrogates of different degrees of complexity (yet significantly simpler than turbulence physics) are employed to generate the parameter-to-solution maps. Lacking a strong theoretical basis for finding the globally optimal NN architecture and hyperparameters in the presence of nonlinearity and bifurcations, a "brute-force" parameter-space sweep is performed to determine a locally optimal solution. Even for these simple proxy-physics systems, it is demonstrated that feed-forward NNs require more degrees of freedom than the original proxy-physics model to accurately approximate the true model, even when trained with data spanning the entire parameter space (interpolation).
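The interpolation setting described above can be illustrated with a minimal, self-contained sketch, not the paper's actual setup: a small fully connected tanh network is trained by full-batch gradient descent on a hypothetical one-dimensional proxy map with a bifurcation-like kink, using training data that covers the whole parameter range. The map `proxy_map`, the hidden width, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proxy-physics map: zero below a critical parameter value,
# square-root growth above it (a pitchfork-like kink at x = 0).
def proxy_map(x):
    return np.where(x < 0.0, 0.0, np.sqrt(np.maximum(x, 0.0)))

def train_mlp(x, y, width=32, lr=0.1, steps=10000):
    """One-hidden-layer tanh network trained by full-batch gradient descent."""
    W1 = rng.normal(0.0, 1.0, (1, width)); b1 = np.zeros(width)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(width), (width, 1)); b2 = np.zeros(1)
    x = x.reshape(-1, 1); y = y.reshape(-1, 1)
    n = len(x)
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)          # hidden activations
        err = (h @ W2 + b2) - y           # prediction error
        # backpropagate through the two layers
        gW2 = h.T @ err / n; gb2 = err.mean(0)
        dh = (err @ W2.T) * (1.0 - h**2)
        gW1 = x.T @ dh / n; gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    return lambda xq: (np.tanh(xq.reshape(-1, 1) @ W1 + b1) @ W2 + b2).ravel()

# Interpolation: training samples span the entire parameter range.
x_train = np.linspace(-1.0, 1.0, 200)
model = train_mlp(x_train, proxy_map(x_train))
mse = np.mean((model(x_train) - proxy_map(x_train)) ** 2)
print(f"in-sample MSE: {mse:.2e}")
```

Even in this toy setting, a network with far more parameters than the two-branch map it approximates is needed before the kink is resolved accurately, which mirrors the degrees-of-freedom observation above.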
Additionally, when deep fully connected NNs are trained with data from only part of the parameter space (extrapolation), their approximation capability degrades considerably, and finding an optimal architecture is no longer straightforward. Overall, the findings provide a realistic perspective on the utility of ML turbulence closures for practical applications and identify areas for improvement.
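The brute-force sweep and the interpolation-versus-extrapolation contrast can likewise be sketched. The snippet below uses a fixed random tanh feature layer with a least-squares readout as a cheap stand-in for full network training, sweeps the hidden width, and compares training on the full parameter range against training on a truncated range. The proxy map, the candidate widths, and the split point at x = 0.5 are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same hypothetical proxy-physics map as before (kink at x = 0).
def proxy_map(x):
    return np.where(x < 0.0, 0.0, np.sqrt(np.maximum(x, 0.0)))

def fit_random_feature_net(x_tr, y_tr, width, reg=1e-4):
    """Fixed random tanh hidden layer; linear readout fitted by ridge least squares."""
    W = rng.normal(0.0, 2.0, (1, width)); b = rng.normal(0.0, 1.0, width)
    H = np.tanh(x_tr.reshape(-1, 1) @ W + b)
    c = np.linalg.solve(H.T @ H + reg * np.eye(width), H.T @ y_tr)
    return lambda xq: np.tanh(xq.reshape(-1, 1) @ W + b) @ c

x_all = np.linspace(-1.0, 1.0, 400)
y_all = proxy_map(x_all)

# Interpolation: train on the full range.  Extrapolation: train on x <= 0.5 only,
# but always evaluate the error over the full parameter range.
regimes = {
    "interpolation": np.ones_like(x_all, dtype=bool),
    "extrapolation": x_all <= 0.5,
}
results = {}
for regime, mask in regimes.items():
    errs = {}
    for width in (8, 16, 32, 64, 128):   # brute-force sweep over hidden width
        model = fit_random_feature_net(x_all[mask], y_all[mask], width)
        errs[width] = np.mean((model(x_all) - y_all) ** 2)
    results[regime] = errs
    best = min(errs, key=errs.get)
    print(f"{regime}: best width {best}, full-range MSE {errs[best]:.2e}")
```

In the extrapolation regime the error on the unseen portion of the range is not controlled by the fit, so the width that looks best on the training region need not be best overall, which is one way the difficulty of architecture selection under extrapolation shows up.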
