Two data-driven machine learning algorithms, the random forest and the artificial neural network (ANN), are used to establish a subgrid-scale (SGS) model for large-eddy simulation. A total of 30 flow variables are examined as potential input features. A priori tests indicate that the ANN algorithm provides the better solution to this regression problem. The relative importance of the input variables is evaluated with both algorithms, revealing that the gradient of the filtered velocity and the second derivative of the filtered velocity account for the vast majority of the importance. In addition, a pattern is found in the dependence of each component of the SGS stress tensor on the input features. Accordingly, a new uniform ANN model is proposed to provide closure for all components of the SGS stress, and a correlation coefficient above 0.7 is achieved. The proposed model is then tested in large-eddy simulations of isotropic turbulence. In terms of the energy budget and the dissipative properties, the ANN model agrees well with direct numerical simulation and yields better predictions than the Smagorinsky model and the dynamic Smagorinsky model. This work suggests that data-driven algorithms are effective tools for discovering knowledge from large amounts of data.
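The workflow summarized above can be illustrated with a minimal a priori sketch: a fully connected network maps filtered-flow input features at each grid point to the independent components of the SGS stress tensor, and the fit is assessed with the correlation coefficient. The feature count, network size, optimizer settings, and the random placeholder arrays below are illustrative assumptions, not the configuration reported in the paper.

```python
"""Minimal a priori sketch of an ANN regression for SGS stresses.

Assumptions: feature set, network architecture, training settings, and data
shapes are placeholders; training data would normally come from filtered DNS.
"""
import numpy as np
import tensorflow as tf

n_features = 30   # candidate input flow variables per grid point (placeholder)
n_outputs = 6     # independent components of the symmetric SGS stress tensor

# Random stand-in data; in practice these are filtered-DNS features and stresses.
x_train = np.random.randn(10000, n_features).astype("float32")
y_train = np.random.randn(10000, n_outputs).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_outputs),  # linear output layer for regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=10, batch_size=256, verbose=0)

# A priori assessment: correlation coefficient between predicted and true stresses.
y_pred = model.predict(x_train, verbose=0)
for k in range(n_outputs):
    r = np.corrcoef(y_pred[:, k], y_train[:, k])[0, 1]
    print(f"stress component {k}: correlation coefficient = {r:.3f}")
```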
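The relative-importance ranking of the candidate inputs can likewise be sketched with a random-forest regressor, which exposes impurity-based importances directly in scikit-learn. The data, target, number of trees, and feature indices here are again placeholders rather than the authors' setup.

```python
# Minimal sketch of feature-importance ranking with a random forest
# (placeholder data and target; not the authors' configuration).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 30
x = rng.standard_normal((n_samples, n_features))
# Stand-in target for one SGS stress component; real data would come from filtered DNS.
y = x[:, 0] * x[:, 1] + 0.1 * rng.standard_normal(n_samples)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(x, y)

# Rank the candidate input variables by impurity-based importance.
ranking = np.argsort(forest.feature_importances_)[::-1]
for idx in ranking[:5]:
    print(f"feature {idx}: importance = {forest.feature_importances_[idx]:.3f}")
```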
