Acoustic beamforming and mode detection by means of machine learning have potential advantages over conventional strategies; for example, first-principles-based forward acoustic models may be replaced by neural networks. In this work, a machine-learning-based strategy is presented for aeroengine duct acoustic mode detection, with the focus on the associated machine learning implementation. The proposed neural network implementation is then incorporated into compressive sensing by taking into account the specific requirements of acoustic mode detection. The proposed method should direct the attention of acoustic measurement research to machine learning and should particularly benefit mode detection for next-generation aircraft engines.

Since the previous work,1 compressive sensing has found ever-increasing applications in aerospace engineering2 in general and in aeroengine acoustic mode detection experiments3–6 in particular. In its simplest form, compressive sensing is achieved by solving the following linear programming problem:7

$$\hat{S} = \arg\min_{S} \| S \|_1 \quad \text{subject to} \quad C y = C G S, \tag{1}$$

where Ŝ is the reconstructed estimate of the acoustic modes, ǁ⋅ǁ1 represents the L1 norm, y denotes the complete samples that satisfy the sampling theorem, C is the compressive sensing matrix, and G is the transfer function between the acoustic mode inputs and the measurements. Once an experimental setup is established, y and C are known, whereas G must be obtained from an idealized model setup, mostly by first-principles-based methods, which then enables the L1 minimization in Eq. (1). In practice, experiments are more often than not non-ideal. Practical issues such as model geometries, experimental setups, and facility background noise can influence sound propagation from the fan to the sensing points, which would result in a different forward model. In this Letter, the author demonstrates how to infer G by machine learning and how to complete acoustic mode detection by incorporating the neural network technique into an available convex optimization tool.8 Overall, the proposed machine-learning-based measurement method and the associated implementation details should benefit the acoustic research community.
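To make Eq. (1) concrete, the following minimal Python sketch sets up the L1 minimization with the cvxpy package; the forward model G, sensing matrix C, and sparse mode vector here are random placeholders for illustration only, not the experimental quantities of this Letter (the actual reconstruction in this work relies on a Matlab convex optimization tool8).

```python
# Minimal sketch of the L1 minimization in Eq. (1), with placeholder matrices.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_modes, n_sensors, n_samples = 64, 24, 64

G = rng.standard_normal((n_samples, n_modes))     # placeholder forward model (assumption)
C = rng.standard_normal((n_sensors, n_samples))   # placeholder compressive sensing matrix
S_true = np.zeros(n_modes)
S_true[[5, 17, 40]] = [1.0, 2.0, 0.5]             # sparse mode amplitudes (assumption)
y = G @ S_true                                    # complete (Nyquist-rate) samples

S = cp.Variable(n_modes)
problem = cp.Problem(cp.Minimize(cp.norm1(S)), [C @ y == C @ G @ S])
problem.solve()
print(np.round(S.value, 3))                       # should approximately recover S_true
```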

Nowadays, jet noise,9 combustion noise,10 and fan noise are still the dominant noise sources of aircraft engines. Nevertheless, the development of high-bypass-ratio turbofan engines and of acoustic liner and chevron technologies has substantially reduced jet noise and made the fan noise radiating from rotor-stator assemblies relatively more prominent. Moreover, jet noise and combustion noise will be completely absent from future all-electric engines (such as the E-Fan recently proposed by Airbus). Hence, this research focuses on fan noise.

Figure 1(a) shows a classical experimental setup for aeroengine fan noise, where multiple rings of microphones are flush mounted on the inner surface of a test section. If the engine duct is of idealized cylindrical geometry, the sound pressure measurements can be theoretically described by

$$p(z, r, \theta, t) = \sum_{m} \sum_{n} A_{mn}(\omega)\, J_m(k_{r_{mn}} r)\, e^{\mathrm{i}(\omega t - k_{z_{mn}} z - m\theta)} + N, \tag{2}$$

where (z, r, θ) represents the cylindrical coordinates [see Fig. 1(b), where the sensor arrays collect the sound pressure signals p], $J_m$ is the mth-order Bessel function of the first kind, ω is the angular frequency, $k_{r_{mn}}$ and $k_{z_{mn}}$ are the associated wavenumbers in the radial and axial directions, respectively, and N represents background noise and interference. Given p, the reconstruction of $A_{mn}(\omega, m, n)$ is the so-called duct acoustic mode detection, which is one of the most important acoustic problems for the aircraft engine industry.
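As an illustration only, the following Python sketch evaluates a hard-walled version of Eq. (2) for a handful of assumed modes sampled by a single ring of wall-mounted microphones; the mode set, frequency, and sensor layout are illustrative assumptions and not the configurations used in this Letter.

```python
# Hedged numerical sketch of the forward model in Eq. (2) for an idealized cylindrical duct.
import numpy as np
from scipy.special import jv, jnp_zeros

def duct_pressure(A, modes, omega, z, r, theta, t=0.0):
    """Superpose duct modes A_mn * J_m(k_r r) * exp(i(omega t - k_z z - m theta))."""
    p = np.zeros_like(theta, dtype=complex)
    for A_mn, (m, n) in zip(A, modes):
        k_r = jnp_zeros(m, n)[-1]               # nth zero of J_m' (hard wall, unit radius)
        k_z = np.sqrt(omega**2 - k_r**2 + 0j)   # axial wavenumber (real if the mode is cut on)
        p += A_mn * jv(m, k_r * r) * np.exp(1j * (omega * t - k_z * z - m * theta))
    return p

# Example: three assumed cut-on modes observed by one ring of 24 flush-mounted microphones
theta = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
p = duct_pressure([1.0, 2.0, 0.5], [(1, 1), (2, 1), (3, 1)],
                  omega=15.0, z=0.0, r=1.0, theta=theta)
```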

It is worthwhile to mention that Eq. (2) comes from the first-principles-based analytical solution of the wave equation inside an idealized cylindrical duct. Practical mode detection is much more complicated owing to background flow, sound reflection and refraction at the end of an aeroengine duct, and realistic geometrical effects. The first issue has been considered in the previous wind tunnel experiments4 [see Fig. 1(c)], and the second issue can be resolved by adopting the Wiener–Hopf method.11 To address the last issue, the machine-learning-based approach proposed in this work can be considered a promising solution. It is worth mentioning that machine learning has recently been tried for duct acoustics,12 where the upstream and downstream modes are decomposed by an artificial neural network. In contrast, this Letter focuses on azimuthal mode detection.

As shown in Fig. 1(c), certain sources S (actively controlled by a spinning-mode synthesizer) can be injected into an aeroengine duct system (of realistic geometry and in the presence of background flow), and the corresponding sound pressure measurements p can be acquired by microphone arrays. Then, the power of neural networks, deep13 or not, is utilized to establish the mapping between the inputs and the outputs [see Fig. 1(d)]. The largest difficulty, however, is how to incorporate machine learning techniques into the duct acoustic problem of interest because, to the author's best knowledge, a similar problem has not yet been reported (one exception can be found in the recent paper,12 which is focused on upstream/downstream decomposition rather than azimuthal and radial mode detection).

More specifically, Fig. 2(a) shows one of the most classical (and possibly easiest) machine learning problems, where handwritten digits are classified. One may suggest that duct acoustic mode detection is similar, that is, given some measurements smeared by background noise, one should be able to infer the (azimuthal and/or radial) mode numbers. Nevertheless, if the digits in Fig. 2(a) were fan noise modes, they would be tangled together in the duct acoustic measurements, as shown in Fig. 2(b). In addition, the problem of interest is largely different from the sound localization works recently published in this journal14–16 in that the task of mode detection is not only to identify the specific mode numbers but also to determine the associated amplitudes. To establish such a capability, training inputs and the corresponding measurements are first prepared for machine learning.

Figure 3 shows the flowchart of the whole machine-learning-based mode detection. As an initial test, the data are rapidly generated with the first-principles method, i.e., Eq. (2); experimental data can easily be included later (by so-called transfer learning). The neural network is implemented and trained in Keras, which is written in Python and provides a simplified high-level abstraction of the powerful (but complicated) TensorFlow. In this work, one of the most important reasons to choose Keras, rather than other popular machine learning tools such as PyTorch, is its cross-platform capability [through PlaidML, a software framework that enables deep learning on almost every computing device and works well with graphics processing units (GPUs) from Intel, AMD, and Nvidia], which will greatly simplify future deployments of trained neural networks on various industrial computing systems, such as the onboard computers of aircraft engines.
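The sketch below illustrates the two software choices just described, under clearly stated assumptions: the PlaidML backend is selected before Keras is imported so that training can run on Intel, AMD, or Nvidia GPUs, and first-principles training pairs are generated from Eq. (2) by reusing the hypothetical duct_pressure helper sketched earlier; none of the numbers below reproduce the actual training set of this Letter.

```python
# Sketch of backend selection and training-data preparation (assumptions, not the author's exact code).
import numpy as np
import plaidml.keras
plaidml.keras.install_backend()          # select the PlaidML backend before importing keras

rng = np.random.default_rng(1)
modes = [(1, 1), (2, 1), (3, 1)]         # assumed (m, n) mode set
n_train = 10000
theta = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)

# Inputs: random mode amplitudes; outputs: the resulting wall-pressure samples from Eq. (2)
A_train = rng.uniform(0.0, 2.0, size=(n_train, len(modes)))
p_train = np.array([duct_pressure(A, modes, omega=15.0, z=0.0, r=1.0, theta=theta)
                    for A in A_train])
p_train = np.concatenate([p_train.real, p_train.imag], axis=1)   # real-valued targets for the network
```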

For the present problems with representative setups (e.g., a dozen modes at a nondimensional frequency of ω = 15), the training shows that a deep neural network of 11 dense layers with linear activation functions and 986 112 trainable parameters can successfully map Eq. (2). The optimizer is Adam, an extension of stochastic gradient descent with adaptive learning rates; the loss function is the classical mean squared error; and the regularization technique is dropout (empirically set to 0.2), which introduces artificial perturbations and thus helps to avoid possible overfitting. In addition, Keras can automatically set aside a portion (usually 20% to 30%) of the training data as a validation dataset (by setting the validation_split parameter of the fit function) and evaluate the performance of the deep learning model on the validation dataset at each epoch. Figure 4(a) shows the training and validation losses of one example with a representative duct acoustic setup (the corresponding setup is the same as that in Ref. 1). It can be seen that the machine learning rapidly reduces the errors, and the trained model should be able to approximate the mapping between the inputs and outputs of the duct acoustic system. Here, the training is rapidly completed on a desktop computer with a 3.2 GHz Xeon processor, 64 GB of DDR4 memory, and an AMD Radeon Pro Vega 64 GPU.
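A minimal Keras sketch of the kind of network described above is given below; the layer widths and epoch count are placeholders and will not reproduce the 986 112-parameter model of this Letter, but the ingredients (stacked dense layers with linear activations, dropout of 0.2, the Adam optimizer, a mean squared error loss, and validation_split in the fit function) follow the description in the text.

```python
# Hedged sketch of the training step, assuming A_train and p_train from the previous sketch.
from keras.models import Sequential
from keras.layers import Dense, Dropout

n_inputs, n_outputs = A_train.shape[1], p_train.shape[1]

model = Sequential()
model.add(Dense(256, activation='linear', input_shape=(n_inputs,)))
for _ in range(9):                                   # 1 + 9 + 1 = 11 dense layers in total
    model.add(Dense(256, activation='linear'))
    model.add(Dropout(0.2))                          # empirical dropout rate from the text
model.add(Dense(n_outputs, activation='linear'))

model.compile(optimizer='adam', loss='mean_squared_error')
history = model.fit(A_train, p_train, epochs=100, batch_size=64,
                    validation_split=0.2)            # hold out 20% of the data for validation
```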

In practical acoustic measurements, deviations of around 1 to 2 dB are normal. Hence, in this work, the neural network predictions are regarded as acceptable (and the associated forward model G is good) if the corresponding differences from the training data are less than 10%. When the forward model predictions are not acceptable, one can consider modifying the neural network architecture, the associated hyperparameters, and/or the training inputs. Only the last option is shown in the flowchart in Fig. 3 because the former two are already sufficient for the adopted neural network with almost 10⁶ trainable parameters.
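As a simple illustration of this acceptance test, the snippet below (assuming the model and data from the earlier sketches) flags the trained forward model as acceptable only when the relative deviation of its predictions stays below 10%.

```python
# Hedged acceptance check on the trained forward model (10% relative-deviation threshold).
import numpy as np

p_pred = model.predict(A_train)
rel_err = np.linalg.norm(p_pred - p_train, axis=1) / np.linalg.norm(p_train, axis=1)
print("max relative deviation:", rel_err.max(), "acceptable:", bool(rel_err.max() < 0.10))
```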

Finally, Figs. 4(b) and 4(c) demonstrate a mode detection process that incorporates machine learning and compressive sensing together. The latter was previously implemented in Matlab.1 To reuse that part in this work, the trained Keras model and the neural network parameters are loaded into Matlab (by importKerasNetwork), and the rest remains the same. The simple test case in Ref. 1 is considered, where the measurements are governed by p(θ) = cos(18θ) + 2cos(25θ) + 0.5cos(35θ), that is, three azimuthal modes (m = 18, 25, 35) with arbitrarily chosen amplitudes of 1, 2, and 0.5, respectively. If the Nyquist–Shannon sampling theorem were to be satisfied, at least 2 × max(m) = 70 microphones would be required. Hence, if the number of sensors is 24, the reconstructed mode amplitudes are aliased below mode number 12 [see Fig. 4(b)]. In contrast, when p(θ) is mapped into the azimuthal wavenumber domain, it is easy to see that the measurements are sparse, which enables compressive sensing. Figure 4(c) shows that compressive sensing successfully reconstructs the amplitude of each mode. The results in Figs. 4(b) and 4(c), and the associated Matlab code, are essentially the same as those in Ref. 1; for brevity, further details and discussions are omitted in this Letter. However, it must be emphasized that the machine learning model is used here to conduct the required mapping from the θ direction to the azimuthal wavenumber domain. Theoretically, the subsequent extension to realistic azimuthal and radial mode detection is straightforward, and the next challenge in this research direction should come from actual experimental demonstrations. The associated work is ongoing and will be reported in a follow-up paper.
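The azimuthal test case above can be reproduced in spirit with the short Python sketch below (the original demonstration was implemented in Matlab1); the 24 sensor angles are drawn at random from a 70-point Nyquist grid and the highest candidate mode is set to m = 35, both of which are assumptions made for this sketch rather than details taken from Ref. 1.

```python
# Hedged sketch of the sparse azimuthal mode reconstruction for the simple test case.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
m_max = 35                                                        # highest candidate mode (assumption)
theta_full = np.linspace(0.0, 2.0 * np.pi, 70, endpoint=False)    # Nyquist grid, 2*max(m) points
theta = np.sort(rng.choice(theta_full, size=24, replace=False))   # 24 randomly placed sensors (assumption)

p = np.cos(18 * theta) + 2.0 * np.cos(25 * theta) + 0.5 * np.cos(35 * theta)
G = np.cos(np.outer(theta, np.arange(m_max + 1)))                 # cosine-mode amplitudes -> p(theta)

A = cp.Variable(m_max + 1)
cp.Problem(cp.Minimize(cp.norm1(A)), [G @ A == p]).solve()
print(np.flatnonzero(np.abs(A.value) > 0.1))                      # expected support: m = 18, 25, 35
```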

In summary, a machine-learning-based mode detection method has been introduced in this Letter. Up to now, most mode detection methods still adopt a first-principles-based forward model. To further address realistic geometrical and practical experimental issues, machine learning can be used to directly map the forward model from the acoustic source inputs to the measurements, and the associated concepts and software implementation details have been discussed in this tutorial Letter. The proposed neural network implementation has been incorporated into compressive sensing by taking into account the specific requirements of acoustic mode detection. Overall, the proposed method and the pedagogical introduction should direct the attention of acoustic measurement research to machine learning and benefit mode detection for next-generation aircraft engine problems. Moreover, the author believes that this tutorial Letter will enable interested readers to learn and apply machine learning techniques in their particular acoustic applications.

This work was supported by the National Natural Science Foundation of China (Grant Nos. 11772005 and 91852201) and the Beijing Municipal Science and Technology Commission (Grant No. Z181100001018030). The author wishes to acknowledge J. Y. Yangzhou and H. X. Bu for providing some of the sketch materials used in this paper. The draft was prepared during the author's visit to the Isaac Newton Institute under the support of EPSRC (Grant No. EP/R014604/1), and the machine learning task was supported by the High-Performance Computing Platform of Peking University.

1. X. Huang, "Compressive sensing and reconstruction in measurements with an aerospace application," AIAA J. 51(4), 1011–1016 (2013).
2. S. Y. Zhong, Q. K. Wei, and X. Huang, "Compressive sensing beamforming based on covariance for acoustic imaging with noisy measurements," J. Acoust. Soc. Am. 134(5), EL445–EL451 (2013).
3. P. Limacher, C. Spinder, M. C. Banica, and H. J. Feld, "A robust industrial procedure for measuring modal sound fields in the development of radial compressor stages," J. Eng. Gas Turbines Power 139, 062604-1–062604-10 (2017).
4. H. X. Bu, W. J. Yu, P. W. Kwan, and X. Huang, "Wind-tunnel investigation on the compressive-sensing technique for aeroengine fan noise detection," AIAA J. 56(9), 3536–3546 (2018).
5. M. Behn and U. Tapken, "Investigation of sound generation and transmission effects through the ACAT1 fan stage using compressed sensing-based mode analysis," AIAA Paper 2019-2502 (2019).
6. H. X. Bu, X. Huang, and X. Zhang, "Compressive sensing method with enhanced sparsity for aeroengine duct mode detection," J. Acoust. Soc. Am. 146(1), EL39–EL44 (2019).
7. E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
8. S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University Press, New York, 2004).
9. L. K. Ran, C. C. Ye, Z. H. Wan, H. H. Yang, and D. J. Sun, "Instability waves and low-frequency noise radiation in the subsonic chevron jet," Acta Mech. Sin. 34, 421–430 (2018).
10. X. Chen, G. Dong, and B. M. Li, "Numerical study of three-dimensional developments of premixed flame induced by multiple shock waves," Acta Mech. Sin. 34, 1035–1047 (2018).
11. G. Gabard and R. J. Astley, "Theoretical model for sound radiation from annular jet pipes: Far- and near-field solutions," J. Fluid Mech. 549, 315–341 (2006).
12. S. Sack and M. Åbom, "Trained algorithms for mode decomposition in ducts," in Proceedings of the 23rd International Congress on Acoustics (2019), pp. 5314–5320.
13. Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature 521, 436–444 (2015).
14. H. Q. Niu, E. Reeves, and P. Gerstoft, "Source localization in an ocean waveguide using supervised machine learning," J. Acoust. Soc. Am. 142(3), 1176–1188 (2017).
15. H. Q. Niu, E. Ozanich, and P. Gerstoft, "Ship localization in Santa Barbara Channel using machine learning classifiers," J. Acoust. Soc. Am. 142(5), EL455–EL460 (2017).
16. Y. Wang and H. Peng, "Underwater acoustic source localization using generalized regression neural network," J. Acoust. Soc. Am. 143(4), 2321–2331 (2018).