Realistic sound is essential in virtual environments, such as computer games and mixed reality. Efficient and accurate numerical methods for pre-calculating acoustics have been developed over the last decade; however, pre-calculated acoustics make handling dynamic scenes with moving sources challenging, requiring intractable memory storage. A physics-informed neural network (PINN) method in one dimension is presented, which learns a compact and efficient surrogate model with parameterized moving Gaussian sources and impedance boundaries and satisfies a system of coupled equations. The model achieves relative mean errors below 2%/0.2 dB and constitutes a first step toward developing PINNs for realistic three-dimensional scenes.

In computer games and mixed reality, realistic sound is essential for an immersive user experience. Impulse responses (IRs) can be obtained accurately and efficiently by numerically solving the wave equation using traditional numerical methods, such as finite element methods,1 spectral element methods (SEM),2 discontinuous Galerkin finite element methods,3 and finite-difference time-domain methods.4,5 For real-time applications spanning a broad frequency range, the IRs are calculated offline due to the computational requirements. However, for dynamic, interactive scenes with numerous moving sources and receivers, the computation time and storage requirement for a lookup database become intractable (in the range of gigabytes), since an IR must be calculated for each source-receiver pair. When covering the whole audible frequency range, these challenges become even more extensive. Previous attempts to overcome the storage requirements of the IRs include work on lossy compression,6 and lately, a novel portal search method has been proposed as a drop-in solution to pre-computed IRs to adapt to flexible scenes, e.g., when doors and windows are opened and closed.7 A recent technique for handling parameterization and model order reduction for acceleration of numerical models is the reduced basis method (RBM).8,9 Although very efficient, RBM cannot meet the runtime requirements regarding computation time for virtual acoustics.

In this paper, we consider a new approach using physics-informed neural networks (PINNs)10–12 that includes knowledge of the underlying physics (in contrast to traditional "black box" neural networks13) to learn a surrogate model for a one-dimensional (1D) domain, which can be executed very efficiently at runtime (in the range of milliseconds) and takes up little storage due to its intrinsic interpolation properties in grid-less domains. The applications of PINNs in virtual acoustics are very limited,14,15 and the main contribution of this work is the development of frequency-dependent and frequency-independent impedance boundary conditions with parameterized moving Gaussian sources, making it possible to model sound propagation while taking boundary materials properly into account. This work investigates PINNs for virtual acoustics in a 1D domain—still taking the necessary physics into account—making it a possible stepping stone toward modeling realistic and complex 3D scenes for applications, such as games and mixed reality, where the computation and storage requirements are very strict.

We take a data-free approach where only the underlying physics is included in the training, and its residual is minimized through the loss function, allowing insights into how well PINNs perform for predicting sound fields in acoustic conditions. A Gaussian impulse is used as the initial condition, tested with frequency-independent and frequency-dependent impedance boundaries. To assess the quality of the developed PINN models, we have used our in-house open-source SEM simulators.2

We consider in the following the use of PINNs for the construction of a surrogate model predicting the solution to the linear wave equation in 1D,

$$\frac{\partial^2 p}{\partial t^2} = c^2 \frac{\partial^2 p}{\partial x^2}, \tag{1}$$

where p is the pressure (Pa), t is the time (s), and c is the speed of sound in air (m/s). The initial conditions (ICs) are satisfied by using a Gaussian source for the pressure part and setting the velocity equal to zero,

$$p(x, t=0) = \exp\left(-\left(\frac{x - x_0}{\sigma_0}\right)^2\right), \qquad \left.\frac{\partial p}{\partial t}\right|_{t=0} = 0, \tag{2}$$

with σ0 being the width of the pulse determining the frequencies to span.
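For concreteness, the initial condition can be sketched in a few lines of Python; the squared-exponential form below is an assumption consistent with the description of Eq. (2), with σ0 = 0.2 as used later in the experiments.

```python
import math

def gaussian_ic(x, x0, sigma0=0.2):
    # Assumed Gaussian pressure pulse centered at the source position x0;
    # sigma0 controls the pulse width and hence the frequency content spanned.
    return math.exp(-((x - x0) / sigma0) ** 2)

def velocity_ic(x, x0):
    # The initial particle velocity is set to zero everywhere.
    return 0.0
```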

We consider impedance boundaries and denote the boundary domain as Γ (in 1D, the left and right endpoints). We will omit the source position x0 in the following.

2.2.1 Frequency-independent impedance boundaries

The acoustic properties of a wall can be described by its surface impedance16 $Z_s = p/v_n$, where $v_n$ is the normal component of the velocity at the same location on the wall surface. Combining the surface impedance with the pressure term $\partial p/\partial n = -\rho_0(\partial v_n/\partial t)$ of the linear coupled wave equation yields

$$\frac{\partial p}{\partial t} = -c\,\xi\,\frac{\partial p}{\partial n}, \tag{3}$$

where $\xi = Z_s/(\rho_0 c)$ is the normalized surface impedance and $\rho_0$ denotes the air density (kg/m3). Note that perfectly reflecting boundaries can be obtained by letting $\xi \to \infty$, recovering the Neumann boundary formulation $\partial p/\partial n = 0$.
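As a sanity check of the boundary formulation, the residual of the frequency-independent condition can be evaluated with finite differences. The sketch below is illustrative and assumes the normalized setup c = 1 m/s, with the outward normal pointing in the +x direction at the right boundary; for ξ = 1 the boundary is fully absorbing at normal incidence, so an outgoing plane wave should yield a vanishing residual.

```python
import math

def bc_residual(p, x, t, c=1.0, xi=1.0, h=1e-5):
    # Central finite differences for dp/dt and dp/dn (outward normal = +x at
    # the right boundary); the residual should vanish when the BC holds.
    dpdt = (p(x, t + h) - p(x, t - h)) / (2 * h)
    dpdn = (p(x + h, t) - p(x - h, t)) / (2 * h)
    return dpdt + c * xi * dpdn

# A right-travelling plane wave leaving the domain through the right boundary:
outgoing = lambda x, t: math.sin(x - t)  # c = 1
```

With ξ = 1 the residual is numerically zero, while a stiffer boundary (e.g., ξ = 5.83 as used in the experiments) yields a non-zero residual for the same outgoing wave, as expected.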

2.2.2 Frequency-dependent impedance boundaries

The wall impedance can be written as a rational function in terms of the admittance $Y = 1/Z_s$ and rewritten by using partial fraction decomposition as17

$$Y(\omega) = \frac{\sum_{n} a_n (i\omega)^n}{\sum_{m} b_m (i\omega)^m} = Y_\infty + \sum_{k=0}^{Q-1} \frac{A_k}{\lambda_k + i\omega} + \sum_{k=0}^{S-1} \left[ \frac{B_k + iC_k}{\alpha_k + i\beta_k + i\omega} + \frac{B_k - iC_k}{\alpha_k - i\beta_k + i\omega} \right], \tag{4}$$

where $a_n$, $b_m$ are real coefficients; $i = \sqrt{-1}$ is the imaginary unit; $Q$ is the number of real poles $\lambda_k$; $S$ is the number of complex conjugate pole pairs $\alpha_k \pm i\beta_k$; and $Y_\infty$, $A_k$, $B_k$, and $C_k$ are numerical coefficients. Since we are concerned with the (time-domain) wave equation, the inverse Fourier transform is applied to the admittance and to the partial fraction decomposition terms in Eq. (4). Combining these gives17

$$v_n(t) = Y_\infty\, p(t) + \sum_{k=0}^{Q-1} A_k\, \phi_k(t) + \sum_{k=0}^{S-1} 2\left[ B_k\, \psi_k^{(0)}(t) + C_k\, \psi_k^{(1)}(t) \right]. \tag{5}$$

The functions ϕk,ψk(0), and ψk(1) are the so-called accumulators determined by the following set of ordinary differential equations (ODEs) referred to as auxiliary differential equations (ADEs):

$$\frac{d\phi_k}{dt} + \lambda_k \phi_k = p, \qquad \frac{d\psi_k^{(0)}}{dt} + \alpha_k \psi_k^{(0)} + \beta_k \psi_k^{(1)} = p, \qquad \frac{d\psi_k^{(1)}}{dt} + \alpha_k \psi_k^{(1)} - \beta_k \psi_k^{(0)} = 0. \tag{6}$$

The boundary conditions can then be formulated by inserting the velocity $v_n$ calculated in Eq. (5) into the pressure term of the linear coupled wave equation, $\partial p/\partial n = -\rho_0(\partial v_n/\partial t)$.

Two multi-layer feed-forward neural networks are set up, $\mathcal{N}_f(x, t, x_0; \mathbf{W}, \mathbf{b})$ and $\mathcal{N}_{ADE}(x_b, t, x_0; \mathbf{W}, \mathbf{b})$,

where $\mathbf{W}$ and $\mathbf{b}$ are the network weights and biases, respectively; $\mathcal{N}_{ADE}$ is only applied in the case of frequency-dependent boundaries. The networks take three inputs $(x, t, x_0)$ corresponding to the spatial, temporal, and source position dimensions. The network $\mathcal{N}_f$ has one output $\hat{p}(x, t, x_0)$ approximating $p(x, t, x_0)$, predicting the pressure; the network $\mathcal{N}_{ADE}$ is a multi-output network with the number of outputs corresponding to the number of accumulators to approximate, as explained in the following.

Including only information about the underlying physics, the governing partial differential equation (PDE) from Eq. (1) and initial conditions (ICs) from Eq. (2) can be learned by minimizing the mean squared error loss denoted as

$$\mathcal{L} = \mathcal{L}_f + \lambda_{IC}\,\mathcal{L}_{IC} + \lambda_{BC}\,\mathcal{L}_{BC} + \lambda_{ADE}\,\mathcal{L}_{ADE}, \tag{7}$$

where

$$\mathcal{L}_f = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| f\left(x_f^i, t_f^i, x_{0,f}^i\right) \right|^2, \tag{8a}$$
$$\mathcal{L}_{IC} = \frac{1}{N_{IC}} \sum_{i=1}^{N_{IC}} \left| \hat{p}\left(x_{IC}^i, 0, x_{0,IC}^i\right) - p_{IC}\left(x_{IC}^i, x_{0,IC}^i\right) \right|^2. \tag{8b}$$

Here, $\{x_{IC}^i, x_{0,IC}^i\}_{i=1}^{N_{IC}}$ denotes the $N_{IC}$ initial data points, $\{x_f^i, t_f^i, x_{0,f}^i\}_{i=1}^{N_f}$ denotes the $N_f$ collocation points for the PDE residual $f$, and the penalty weights $\lambda_{IC}$ and $\lambda_{BC}$ are used for balancing the impact of the individual terms. The loss function $\mathcal{L}_{BC}$ will be treated separately for the impedance boundary conditions in the following, where $\{x_{BC}^i, t_{BC}^i, x_{0,BC}^i\}_{i=1}^{N_{BC}}$ will, correspondingly, denote the $N_{BC}$ collocation points on the boundaries. For frequency-dependent boundaries, an auxiliary neural network is coupled, resulting in the additional loss term $\mathcal{L}_{ADE}$ in Eq. (7), explained in detail in the following.

2.3.1 Frequency-independent impedance boundary loss functions

The frequency-independent boundary condition, Eq. (3), is included in the loss function $\mathcal{L}_{BC} := \mathcal{L}_{INDEP}$ given by
$$\mathcal{L}_{INDEP} = \frac{1}{N_{BC}} \sum_{i=1}^{N_{BC}} \left| \frac{\partial}{\partial t}\mathcal{N}_f\left(x_{BC}^i, t_{BC}^i, x_{0,BC}^i; \mathbf{W}, \mathbf{b}\right) + c\,\xi\,\frac{\partial}{\partial n}\mathcal{N}_f\left(x_{BC}^i, t_{BC}^i, x_{0,BC}^i; \mathbf{W}, \mathbf{b}\right) \right|^2.$$

2.3.2 Frequency-dependent impedance boundary loss functions

For frequency-dependent boundaries, the ADEs need to be solved as well, approximating the ODEs from Eq. (6) by introducing an additional neural network $\mathcal{N}_{ADE}(x_b, t, x_0; \mathbf{W}, \mathbf{b})$ parameterized by $x_b$ and $x_0$ for boundary positions and moving sources, respectively. The network has multiple outputs $\mathcal{N}_{ADE}(x_b, t, x_0; \mathbf{W}, \mathbf{b}) = [\tilde\phi_0, \tilde\phi_1, \ldots, \tilde\phi_{Q-1};\ \tilde\psi_0^{(0)}, \tilde\psi_1^{(0)}, \ldots, \tilde\psi_{S-1}^{(0)};\ \tilde\psi_0^{(1)}, \tilde\psi_1^{(1)}, \ldots, \tilde\psi_{S-1}^{(1)}]$ corresponding to the scaled accumulators determined by scaling factors $l_{ADE}$, mapping $\tilde\phi_k = l_{ADE}^{\phi_k} \hat\phi_k$, $\tilde\psi_k^{(0)} = l_{ADE}^{\psi_k} \hat\psi_k^{(0)}$, and $\tilde\psi_k^{(1)} = l_{ADE}^{\psi_k} \hat\psi_k^{(1)}$, such that $\tilde\phi_k, \tilde\psi_k^{(0)}, \tilde\psi_k^{(1)} : (x_b, t, x_0) \mapsto [-1, 1]$, matching the range of the tanh activation function used in this work. The scaling factors are independent of the geometry and domain dimensionality; only the material properties determine the amplitude of the accumulators. In this work, the scaling factors are determined using the SEM solver, but they might be analytically estimated from the accumulators considering only a single reflection in a 1D domain. A graphical representation of the neural network architectures for approximating the governing physical equations and ADEs is depicted in Fig. 1. The accumulators with parameterized moving sources and boundary positions, $\tilde\phi_k(x_{BC}^i, t_{BC}^i, x_{0,BC}^i)$, $\tilde\psi_k^{(0)}(x_{BC}^i, t_{BC}^i, x_{0,BC}^i)$, $\tilde\psi_k^{(1)}(x_{BC}^i, t_{BC}^i, x_{0,BC}^i)$, can be learned by minimizing the mean squared error loss as (arguments omitted)

$$\mathcal{L}_{ADE} = \mathcal{L}_{\phi} + \mathcal{L}_{\psi^{(0)}} + \mathcal{L}_{\psi^{(1)}}, \tag{9}$$

where

$$\mathcal{L}_{\phi} = \frac{1}{N_{BC}} \sum_{i=1}^{N_{BC}} \sum_{k=0}^{Q-1} \left| \frac{\partial \tilde\phi_k}{\partial t} + \lambda_k \tilde\phi_k - l_{ADE}^{\phi_k}\,\hat{p} \right|^2, \tag{10a}$$
$$\mathcal{L}_{\psi^{(0)}} = \frac{1}{N_{BC}} \sum_{i=1}^{N_{BC}} \sum_{k=0}^{S-1} \left| \frac{\partial \tilde\psi_k^{(0)}}{\partial t} + \alpha_k \tilde\psi_k^{(0)} + \beta_k \tilde\psi_k^{(1)} - l_{ADE}^{\psi_k}\,\hat{p} \right|^2, \tag{10b}$$
$$\mathcal{L}_{\psi^{(1)}} = \frac{1}{N_{BC}} \sum_{i=1}^{N_{BC}} \sum_{k=0}^{S-1} \left| \frac{\partial \tilde\psi_k^{(1)}}{\partial t} + \alpha_k \tilde\psi_k^{(1)} - \beta_k \tilde\psi_k^{(0)} \right|^2. \tag{10c}$$

Fig. 1.

PINN scheme for frequency-dependent boundaries. Left: Two fully connected feed-forward neural network architectures, Nf (PDE + ICs) and NADE (ADEs). Right: The governing physical equations and ADEs are coupled via the loss function (ICs and scaling terms are omitted for brevity). Training is done when a maximum number of epochs is reached, or the total loss is smaller than a given threshold.


The frequency-dependent boundary conditions are satisfied by $\mathcal{L}_{DEP} = \frac{1}{N_{BC}} \sum_{i=1}^{N_{BC}} \left| (\partial/\partial n)\mathcal{N}_f(x_{BC}^i, t_{BC}^i, x_{0,BC}^i; \mathbf{W}, \mathbf{b}) + \rho_0\,[\partial v_n(x_{BC}^i, t_{BC}^i, x_{0,BC}^i)/\partial t] \right|^2$, where $v_n$ is the expression at the boundaries given in Eq. (5) with $\hat\phi_k = \tilde\phi_k / l_{ADE}^{\phi_k}$, $\hat\psi_k^{(0)} = \tilde\psi_k^{(0)} / l_{ADE}^{\psi_k^{(0)}}$, and $\hat\psi_k^{(1)} = \tilde\psi_k^{(1)} / l_{ADE}^{\psi_k^{(1)}}$, ensuring that the accumulators are properly re-scaled. The loss is included as the term $\mathcal{L}_{BC} := \mathcal{L}_{DEP}$ in Eq. (7) together with the loss for the ADEs in Eq. (9).
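The scaling and its inverse form a simple multiplicative pair, sketched below; the accumulator amplitude is illustrative, while the scaling factor 261.4 is taken from Table 1.

```python
def to_network_range(acc, l_ade):
    # Scale a raw accumulator so the network target lies in [-1, 1],
    # e.g. phi_tilde = l_ade * phi.
    return l_ade * acc

def from_network_range(acc_tilde, l_ade):
    # Inverse map applied before assembling v_n at the boundary.
    return acc_tilde / l_ade
```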

TensorFlow 2.5.1,18 SciANN 0.6.4.7,19 and Python 3.8.9 with 64-bit floating point precision for the neural network weights are used. The code for reproducing the results can be found online.20

Reference data for impedance boundaries are generated using a fourth-order Jacobi polynomial SEM solver. The grid was discretized with 20 points per wavelength spanning frequencies up to 1000 Hz, yielding an average grid resolution of Δx = 0.017 m, and the time step was Δt = CFL × Δx/c, where CFL is the Courant-Friedrichs-Lewy constant (CFL = 1.0 and CFL = 0.1 for frequency-independent and dependent boundaries, respectively). The speed of sound c = 1 m/s is used for the PINN setup, implying Δx = cΔt = Δt, which is a normalization introduced to ensure the same scaling in time and space required for the optimization problem to converge. With a normalized speed of sound, the effective normalized frequency is correspondingly f = fphys/cphys, since the wave now travels more slowly compared to the physical setup. To evaluate the results for a physical speed of sound cphys = 343 m/s, the temporal dimension should be converted back as tphys = t/cphys seconds. In the case of frequency-dependent boundaries, the velocity from Eq. (5) needs to be normalized accordingly regarding frequency and flow resistivity, σmat = σmat,phys/cphys, in Miki's model. Fitting the parameters for c = 1 m/s yields modified λk, αk, βk, and Y∞ values resulting in exactly the same surface impedance as for c = 343 m/s, but scaled by cphys in frequency range and amplitude. This can be seen from the complex wavenumber and characteristic impedance of the porous medium in Miki's model, which involve f/σmat and 2πf/c and are therefore not affected by the normalization.20
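The normalization described above can be summarized in two small helper functions (the function names are illustrative):

```python
def normalize(f_phys, sigma_mat_phys, c_phys=343.0):
    # With c = 1 m/s in the PINN setup, frequencies and the flow resistivity
    # are divided by the physical speed of sound.
    return f_phys / c_phys, sigma_mat_phys / c_phys

def to_physical_time(t_norm, c_phys=343.0):
    # Convert normalized time back to seconds: t_phys = t / c_phys.
    return t_norm / c_phys
```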

The point distribution in time and space, number of sources, penalty weights λ, and scaling factors lADE for the ADEs are listed in Table 1. Note that the number of (time and space) domain points (30%) per source (7) is 47,089 × 0.3/7 ≈ 2018, satisfying the Nyquist theorem, since Δx = 2/√2018 = 0.045 m (taking the square root gives the grid point distribution in the spatial dimension), resulting in ppw = λw/Δx = 7.6 points per wavelength, with λw = c/f being the wavelength for the physical frequency 1000 Hz and physical speed of sound 343 m/s. For the neural network, we have used the ADAM optimizer and the mean squared error for calculating the losses for both networks. The training was run with learning rate 10⁻⁴ and batch size 512 until a total loss of ϵ = 2 × 10⁻⁴ was reached (roughly 16k and 20k epochs needed for frequency-independent and dependent boundaries, respectively). The relatively large batch size was chosen to ensure that enough initial and boundary points were included in the optimization steps.
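The resolution argument above can be reproduced directly (numbers taken from the text and Table 1):

```python
import math

total_points = 47089
inner_fraction = 0.30   # share of points in the inner domain
n_sources = 7

# Domain points available per source position.
points_per_source = total_points * inner_fraction / n_sources  # ~2018

# Domain [-1, 1] m is 2 m long; the square root gives the spatial share
# of the space-time point distribution.
dx = 2.0 / math.sqrt(points_per_source)

# Points per wavelength at the physical frequency 1 kHz, c_phys = 343 m/s.
ppw = (343.0 / 1000.0) / dx
```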

Table 1.

Number of points in time and space; inner domain, boundaries, and initial condition point distributions; number of evenly distributed sources (srcs); values for the penalty weights λ; scaling factors l for normalizing the accumulators.

#total   #BC   #IC   #inner^a   #srcs   λ_IC   λ_BC   λ_ADE   l_ADE^φ0   l_ADE^φ1   l_ADE^ψ0(0)   l_ADE^ψ0(1)
47,089   45%   25%   30%        7       20     10     —       10.3       261.4      45.9          22.0

^a The centered Latin hypercube sampling strategy (Ref. 24) is used.

The network architecture of Nf consists of three layers, each with 256 neurons applying the sine activation function, except for a linear output layer, with proper weight initialization.21 Using sine activation functions can be seen as representing the signal using a Fourier series22 and is probably the reason for the significantly better convergence compared to the more common choice of tanh activation functions. However, experiments showed degraded interpolation properties when using sine activation functions and training on grids with source positions distributed more sparsely (0.3 m apart), even when lowering the number of neurons to prevent overfitting. A reason could be that the source spacing violates the Nyquist sampling theorem Δx < c/(2f) = 0.17 m and causes aliasing effects, but this remains an open question. Therefore, the source positions were distributed evenly with finer resolution to improve the results between the source positions, while keeping the total number of points the same, consequently resulting in a sparser grid per source. Despite the sparser grid, the convergence and final error still showed satisfying results. Distributing the source positions more densely is trivial in a data-free implementation, but if a combination of the underlying physics and simulated/measured data is considered later, a large number of source positions could be practically challenging.
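A sine-activated layer of the kind used in Nf can be sketched as a plain forward pass; the layer size, frequency factor w0, and weight values below are illustrative, not those of the trained model.

```python
import math

def sine_layer(x_vec, weights, biases, w0=1.0):
    # One fully connected layer with sine activation: sin(w0 * (W x + b)).
    # Outputs are bounded in [-1, 1], mirroring the periodic representation
    # interpretation of sine activations (cf. Refs. 21 and 22).
    return [math.sin(w0 * (sum(w * x for w, x in zip(row, x_vec)) + b))
            for row, b in zip(weights, biases)]
```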

The network architecture of NADE consists of three layers, each with 20 neurons applying the tanh activation function, except for a linear output layer, with Glorot normal initialization of the weights.23 Using the tanh function is an obvious choice, since we have chosen to scale the accumulators to take values in the range [−1, 1].

Frequency-independent and dependent boundary conditions are tested, each with parameterized moving sources trained at seven evenly distributed positions x0 = [−0.3, −0.2, …, 0.3] m and evaluated at five positions x0 = [−0.3, −0.15, 0.0, 0.15, 0.3] m. Additional results are included as supplementary material.20 The source is satisfied through the initial condition modeled as a Gaussian impulse from Eq. (2) with σ0 = 0.2 spanning frequencies up to 1000 Hz. The speed of sound cphys = 343 m/s and air density ρ0 = 1.2 kg/m3 are used for all studies.

First, we test the frequency-independent boundary condition with normalized impedance ξ = 5.83, depicted in Fig. 2(a), with corresponding wave propagation animations available as Mm. 1. Then, we test the frequency-dependent impedance boundary condition, where the boundary is modeled as a porous material mounted on a rigid backing with thickness dmat = 0.10 m and an air flow resistivity of σmat,phys = 8000 N·s/m4. The surface admittance Y of this material is estimated using Miki's model25 and mapped to a two-pole rational function in the form of Eq. (4) with Q = 2 and S = 1 using a vector fitting algorithm,26 yielding the coefficients for Eq. (5). The results are depicted in Fig. 2(b), with corresponding wave propagation animations available as Mm. 2.
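To illustrate the rational admittance representation, the partial-fraction form can be evaluated numerically. The pole/residue structure below (real poles plus complex conjugate pairs) is an assumption consistent with the description of Eq. (4), and the coefficient values are illustrative, not the fitted values for the studied porous material. A useful check is the Hermitian symmetry Y(−ω) = conj(Y(ω)), required for the time-domain velocity to be real-valued.

```python
def admittance(omega, y_inf, real_poles, cc_pairs):
    # Rational admittance with Q real poles (A_k, lam_k) and S complex
    # conjugate pole pairs (B_k, C_k, alpha_k, beta_k).
    y = complex(y_inf, 0.0)
    for a_k, lam_k in real_poles:
        y += a_k / (lam_k + 1j * omega)
    for b_k, c_k, al_k, be_k in cc_pairs:
        y += (b_k + 1j * c_k) / (al_k + 1j * be_k + 1j * omega)
        y += (b_k - 1j * c_k) / (al_k - 1j * be_k + 1j * omega)
    return y
```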

Fig. 2.

Wave propagations in a 1D domain [−1, 1] m.

Mm. 1.

Frequency-independent boundaries, animation. Same parameters as Fig. 2(a). File of type "mp4" (1.3 MB).

Mm. 2.

Frequency-dependent boundaries, animation. Same parameters as Fig. 2(b). File of type "mp4" (1.6 MB).


We observe that the shape of the wave propagations is well captured, and the impulse responses also fit the reference solutions very well for all boundary types. The relative mean error $\mu_{rel}(x, x_0) = (1/N)\sum_{i=0}^{N-1} \left[ |\hat{p}(x, t_i, x_0) - p(x, t_i, x_0)| / |p(x, t_i, x_0)| \right]$ within a $-60$ dB range and the absolute maximum error $\epsilon_{abs}(x, x_0) = \max\{ |\hat{p}(x, t_i, x_0) - p(x, t_i, x_0)| : i = 0, \ldots, N-1 \}$ of the impulse responses originating from various source and receiver positions are summarized in Table 2. Relative errors are below 2%/0.2 dB for all predictions. The absolute maximum errors are below 0.011 Pa for all predictions, indicating that no severe outliers are present.
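The two error metrics can be computed as follows; the −60 dB floor excludes near-zero reference samples from the relative error, which would otherwise dominate the mean.

```python
def relative_mean_error(p_hat, p_ref, db_floor=-60.0):
    # Mean of |p_hat - p| / |p| over samples where |p| lies within db_floor
    # (e.g. -60 dB) of the peak of the reference impulse response.
    peak = max(abs(v) for v in p_ref)
    thr = peak * 10.0 ** (db_floor / 20.0)
    terms = [abs(ph - pr) / abs(pr)
             for ph, pr in zip(p_hat, p_ref) if abs(pr) >= thr]
    return sum(terms) / len(terms)

def max_abs_error(p_hat, p_ref):
    # Maximum absolute deviation over the impulse response (Pa).
    return max(abs(ph - pr) for ph, pr in zip(p_hat, p_ref))
```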

Table 2.

Time domain errors for source/receiver pairs $(x_0, x)$, measured in meters, at positions $s_0 = (-0.3, -0.64)$, $s_1 = (-0.15, -0.58)$, $s_2 = (0.0, 0.5)$, $s_3 = (0.15, 0.58)$, $s_4 = (0.3, 0.66)$. $\mu_{rel}(x, x_0) = (1/N)\sum_{i=0}^{N-1}[|\hat{p}(x, t_i, x_0) - p(x, t_i, x_0)|/|p(x, t_i, x_0)|]$ is the relative mean error over time within a $-60$ dB range, and $\epsilon_{abs}(x, x_0) = \max\{|\hat{p}(x, t_i, x_0) - p(x, t_i, x_0)| : i = 0, \ldots, N-1\}$ (Pa) is the maximum absolute error for source/receiver pair $s_i$.

                   s0               s1               s2               s3               s4
              μ_rel   ϵ_abs    μ_rel   ϵ_abs    μ_rel   ϵ_abs    μ_rel   ϵ_abs    μ_rel   ϵ_abs
Freq. indep.  0.7%    0.004    0.3%    0.002    0.5%    0.001    0.4%    0.002    1.5%    0.006
Freq. dep.    0.5%    0.004    1.5%    0.008    1.7%    0.011    1.9%    0.009    1.1%    0.005

A novel method is presented for predicting the sound field in a 1D domain with impedance boundaries and parameterized moving Gaussian sources using PINNs. A coupled system of differential equations, consisting of the governing physical equations and a system of ODEs predicting the accumulators of the ADEs, was used for training the PINN. The equations for the ADEs depend only on time t but were parameterized to take boundary and source positions into account, yielding a very flexible implementation. The results are promising, with relative mean errors below 2%/0.2 dB for all cases. The approach of learning a compact surrogate model that is inexpensive to evaluate at runtime shows potential to overcome the limitations of current numerical methods in modeling flexible scenes, such as moving sources.

Compared to standard numerical methods, the PINN method takes up to three orders of magnitude more time to converge. Therefore, to solve realistic problems in 3D, the convergence rate needs to be improved. This is partly due to the need for a fairly large number of grid points, with 70% of the points located at the initial time step and at the boundaries, where penalty weights are also needed for balancing each loss term. Formulating an ansatz imposing initial and boundary conditions directly could overcome this problem.27 Also, considering other architectures taking (discrete) time dependence into account, instead of optimizing over the entire spatiotemporal domain at once, might improve training efficiency and produce more precise results. Moreover, we have observed challenges for the global optimizer for larger domain sizes and/or increasing frequencies due to the ratio between zero and non-zero pressure values. Domain decomposition methods28 have been introduced to overcome this limitation. In ongoing work, more complex benchmarks are being considered.

Thanks to DTU Computing Center GPULAB for access to GPU clusters and swift help. Also, a big thanks to Ehsan Haghighat for valuable discussions regarding PINNs and SciANN. Last but not least, thanks to Finnur Pind for making an SEM code available for calculating reference solutions.

1. T. Okuzono, T. Otsuru, R. Tomiku, and N. Okamoto, "A finite-element method using dispersion reduced spline elements for room acoustics simulation," Appl. Acoust. 79, 1–8 (2014).
2. F. Pind, A. P. Engsig-Karup, C.-H. Jeong, J. S. Hesthaven, M. S. Mejling, and J. Strømann-Andersen, "Time domain room acoustic simulations using the spectral element method," J. Acoust. Soc. Am. 145(6), 3299–3310 (2019).
3. A. Melander, E. Strøm, F. Pind, A. Engsig-Karup, C.-H. Jeong, T. Warburton, N. Chalmers, and J. S. Hesthaven, "Massive parallel nodal discontinuous Galerkin finite element method simulator for room acoustics," Int. J. High Perform. Comput. Appl. (2020), http://infoscience.epfl.ch/record/279868 (Last viewed 30/10/2021).
4. D. Botteldooren, "Finite-difference time-domain simulation of low-frequency room acoustic problems," J. Acoust. Soc. Am. 98(6), 3302–3308 (1995).
5. B. Hamilton and S. Bilbao, "FDTD methods for 3-D room acoustics simulation with high-order accuracy in space and time," IEEE/ACM Trans. Audio Speech Lang. Process. 25(11), 2112–2124 (2017).
6. N. Raghuvanshi and J. Snyder, "Parametric wave field coding for precomputed sound propagation," ACM Trans. Graph. 33(4), 1 (2014).
7. N. Raghuvanshi, "Dynamic portal occlusion for precomputed interactive sound propagation," arXiv:2107.11548 (2021).
8. J. Hesthaven, G. Rozza, and B. Stamm, Certified Reduced Basis Methods for Parametrized Partial Differential Equations (Springer, Berlin, 2015), pp. 1–131.
9. H. S. Llopis, A. P. Engsig-Karup, C.-H. Jeong, F. Pind, and J. S. Hesthaven, "Efficient numerical room acoustic simulations with parametrized boundaries using the spectral element and reduced basis method," arXiv:2103.11730 (2021).
10. D. C. Psichogios and L. H. Ungar, "A hybrid neural network-first principles approach to process modeling," AIChE J. 38(10), 1499–1511 (1992).
11. I. Lagaris, A. Likas, and D. Fotiadis, "Artificial neural networks for solving ordinary and partial differential equations," IEEE Trans. Neural Networks 9(5), 987–1000 (1998).
12. M. Raissi, P. Perdikaris, and G. E. Karniadakis, "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations," J. Comput. Phys. 378, 686–707 (2019).
13. Z. Fan, V. Vineet, H. Gamper, and N. Raghuvanshi, "Fast acoustic scattering using convolutional neural networks," in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing—Proceedings (2020), pp. 171–175.
14. B. Moseley, A. Markham, and T. Nissen-Meyer, "Solving the wave equation with physics-informed deep learning," arXiv:2006.11894 (2020).
15. M. Rasht-Behesht, C. Huber, K. Shukla, and G. E. Karniadakis, "Physics-informed neural networks (PINNs) for wave propagation and full waveform inversions," arXiv:2108.12035 (2021), pp. 1–29.
16. H. Kuttruff, Room Acoustics, 6th ed. (CRC Press, Boca Raton, 2016), p. 322.
17. R. Troian, D. Dragna, C. Bailly, and M. A. Galland, "Broadband liner impedance eduction for multimodal acoustic propagation in the presence of a mean flow," J. Sound Vib. 392, 200–216 (2017).
18. Google, "TensorFlow" (2021), https://www.tensorflow.org/ (Last viewed 30/10/2021).
19. E. Haghighat and R. Juanes, "SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks," Comput. Methods Appl. Mech. Eng. 373, 113552 (2021).
20. See supplementary material at https://www.scitation.org/doi/suppl/10.1121/10.0009057 for source code and additional results for Neumann boundaries, accumulator predictions, runtime efficiency of the surrogate model, and a detailed explanation of the normalization for the frequency-dependent impedance boundary formulation.
21. V. Sitzmann, J. N. P. Martel, A. W. Bergman, D. B. Lindell, and G. Wetzstein, "Implicit neural representations with periodic activation functions," arXiv:2006.09661 (2020).
22. N. Benbarka, T. Höfer, H. ul-moqeet Riaz, and A. Zell, "Seeing implicit neural representations as Fourier series," arXiv:2109.00249 (2021).
23. X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Vol. 9 of Proceedings of Machine Learning Research, edited by Y. W. Teh and M. Titterington (PMLR, Chia Laguna Resort, Sardinia, Italy, 2010), pp. 249–256.
24. M. Stein, "Large sample properties of simulations using Latin hypercube sampling," Technometrics 29(2), 143–151 (1987).
25. Y. Miki, "Acoustical properties of porous materials-modifications of Delany-Bazley models," J. Acoust. Soc. Jpn. (E) 11(1), 19–24 (1990).
26. B. Gustavsen and A. Semlyen, "Rational approximation of frequency domain responses by vector fitting," IEEE Trans. Power Deliv. 14(3), 1052–1061 (1999).
27. N. Sukumar and A. Srivastava, "Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks," arXiv:2104.08426 (2021).
28. K. Shukla, A. D. Jagtap, and G. E. Karniadakis, "Parallel physics-informed neural networks via domain decomposition," arXiv:2104.10013 (2021).

Supplementary Material