In this paper, the ℓ1-norm sparse regularization method is applied to the time domain reconstruction of transient acoustic fields such as impulse noise. This method properly reconstructs the back-propagated sound field where its amplitude should be null: for transient sources, this occurs mostly at positions and times that precede the arrival of the first wave front. It therefore significantly reduces the causal errors typically found in time domain reconstruction when standard Tikhonov regularization is applied. The reconstructions obtained from the Tikhonov and sparse regularization methods are compared using a transient baffled piston model, and the comparison shows that the global root-mean-square (RMS) error is significantly reduced when sparse regularization is used. The improvement depends on the level of sparsity of the reconstructed signal; for the studied cases, it can represent a reduction of the global RMS error by up to a factor of 3. The performance of the Pareto frontier curve for predicting the optimal sparse regularization parameter is examined; it leads to accurate predictions, especially at low noise levels. Finally, sparse regularization is applied to experimental data over the time and spatial domains in order to obtain an accurate reconstruction of the transient sound field produced by an impacted plate.

Nearfield acoustical holography (NAH) is the state-of-the-art method for visualizing acoustic fields over three-dimensional (3-D) space.1–3 It requires measuring the sound field with a two-dimensional (2-D) microphone array located in the nearfield of the source. This measurement acts as a boundary condition used to determine the sound field back toward the source by means of a deconvolution with a Green's function derived from the wave equation.

When the studied source is not stationary, the sound field is represented in the time domain instead of the frequency domain, since the frequency components of the measured signal are not statistically constant over time. A method based on the use of the 3-D linear convolution and a Green's function expressed in the time and spatial domains (TSDs) was developed to accurately visualize non-stationary fields for both the forward4 and inverse5 problems. An asset of this method is that the 3-D linear convolution completely suppresses wraparound errors and significantly reduces spectral leakage when the sound field is represented in the TSD.

It is well known that the inverse problem is ill-posed; it requires regularization since high frequency components with low signal-to-noise ratio (SNR) are amplified in the reconstruction process.6 In NAH, this amplification is considerable because the measurement performed in the nearfield includes the contribution of evanescent wave components, which are amplified exponentially in the back-propagation process. The standard regularization method for NAH is Tikhonov regularization;7 however, such a method fails to properly reconstruct the sound field where its amplitude is null, even though it correctly reconstructs the field where its amplitude is non-zero. The main idea behind this work is that sparse regularization, also known as the least absolute shrinkage and selection operator (LASSO) in statistics,8 which minimizes the ℓ1-norm of the solution instead of the ℓ2-norm, is better suited for the regularization of transient acoustic fields. This assumption is based on the fact that this regularization method favors the convergence of the solution to null values, which constitute a significant proportion of the transient signals studied. On the contrary, Tikhonov regularization tends to spread the energy of the reconstruction over time and space, rather than preserve its sparsity.

To the authors' knowledge, sparse regularization has not been applied to non-stationary NAH before. However, the idea has been explored in the context of stationary NAH. Chardon et al. applied compressed sensing with sparse regularization to stationary NAH.9 They showed that sparse regularization acts as an alternative regularization technique, and that reconstruction can be achieved with a much lower sampling rate (below that required by the Nyquist criterion) by using compressed sensing. The regularization errors they obtained are of the same order of magnitude as those from Tikhonov's method. Fernandez-Grande et al. demonstrated the application of compressed sensing and sparse regularization to the equivalent source method (ESM), a variant of standard NAH; sparse regularization successfully provided the reconstruction, with a resulting error below that of Tikhonov regularization.10 The sparse regularization method has also been applied to other methods related to NAH: Simard and Antoni applied ℓ1 regularization to an acoustic source identification method,11 and Alqadah and Valdivia applied sparse regularization to nearfield electromagnetic holography with the ESM.12

In all of these cases, the reconstruction is represented in the frequency domain: the analysis deals with signals that are sparse in the sense that their amplitude is spatially localized. This condition is rarely met by every frequency component, and consequently, the performance of the reconstruction varies with frequency. For the non-stationary case, the representation of the sparsity is different. We suppose that the studied signals are of finite duration, with null amplitude before t = t0 and after t = t0 + τ, where τ is the signal duration. The resulting sound field radiates from the source and, owing to the nature of the studied sources, the field is null before its first wave front reaches the position at which the reconstruction is performed. Since sparse regularization tends to properly reconstruct these null components, it is expected to significantly reduce the causal errors typically obtained when Tikhonov regularization is applied.

The main objective of this paper is to compare the regularization error obtained with the standard Tikhonov method with that of sparse regularization when applied to the reconstruction of transient sources. With both methods, regularization is applied over the 3-D spatio-temporal domains. The comparison is performed using the sound field obtained from a transient baffled piston model; the analysis is developed for planar geometry, but could be extended to other geometries in future work. We also investigate the performance of the Pareto frontier curve13 (an adaptation of the L-curve method14) for predicting the optimal regularization parameter with sparse regularization. In previous work, the method was successfully applied to electrical impedance tomography15 and acoustic beamforming.16 To the authors' knowledge, a method for predicting the optimal ℓ1 regularization parameter has not yet been proposed in the context of NAH. Finally, the sparse regularization method is applied to experimental measurements of the transient acoustic field radiated by an impacted plate.

In this section, we present an overview of the 3-D linear deconvolution formulation for time domain NAH, along with the two regularization methods compared in this paper.

The geometry of the problem is illustrated in Fig. 1. The finite acoustic source represented in gray is located on the source plane at z=0. For the forward problem, the measurement is performed in the nearfield of the source at z=z0, and the field is determined on the plane z=z0+d, with d being the propagation distance. The mathematical relationship between the pressure field p on the two planes is a 3-D convolution, as expressed in

$p(x, y, z_0 + d, t) = \iiint p(x', y', z_0, t')\, g(x - x',\, y - y',\, d,\, t - t')\, \mathrm{d}x'\, \mathrm{d}y'\, \mathrm{d}t', \qquad (1)$

where g is the TSD Green's function expressed as

(2)

with speed of sound c and propagation radius R = √(x² + y² + d²).
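As an illustration of the forward problem of Eq. (1), the sketch below evaluates the 3-D linear (zero-padded) convolution numerically with an FFT-based routine. The grid sizes and the placeholder kernel are illustrative assumptions only; they do not reproduce the paper's Green's function of Eq. (2).

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative hologram sampling: Nx x Ny microphone grid, Nt time samples (assumed values).
Nx, Ny, Nt = 16, 16, 128

# p_source[x, y, t]: pressure on the plane z = z0 (a toy impulsive source for illustration).
p_source = np.zeros((Nx, Ny, Nt))
p_source[Nx // 2, Ny // 2, 10] = 1.0

# g[x, y, t]: TSD Green's function sampled on the same grid; a hypothetical placeholder
# kernel (a delayed impulse) is used here, NOT the paper's Eq. (2).
g = np.zeros((Nx, Ny, Nt))
g[Nx // 2, Ny // 2, 1] = 1.0

# 3-D linear convolution (zero-padded by construction, hence free of wraparound errors).
p_forward = fftconvolve(p_source, g, mode="full")   # shape (2Nx-1, 2Ny-1, 2Nt-1)
```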

FIG. 1. Geometry of planar NAH for forward and inverse problems.

The inverse problem, that is, calculating the pressure p(z0) from a measurement p(z0+d), is solved by discretizing Eq. (1) and expressing the convolution as a matrix product

$p_m = G\, p_s. \qquad (3)$

Here, matrix G takes the form of a Toeplitz block-Toeplitz convolution matrix,17 ps is the unknown sound field near the source, and pm is the measurement. The reconstruction ps is obtained by multiplying pm with the inverse of G. Matrix G, in its Toeplitz form, typically contains too many elements for its inverse to be computed with a standard desktop computer. As detailed in Ref. 5, an approximation of its inverse is obtained by embedding the Green's function in a larger block circulant matrix by zero-padding g; this requires the Green's function to be expressed in the TSD. Such a block circulant matrix is diagonalized by the 3-D Fourier operator,18 and the calculation of its inverse can be performed efficiently with the fast Fourier transform (FFT)

$p_s = F^{-1}\!\left[ \dfrac{F(\tilde{p}_m)}{F(\tilde{g})} \right], \qquad (4)$

where the tilde represents the application of 3-D zero-padding to the signal, and F and F⁻¹ are the direct and inverse 3-D FFT operators.
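A minimal sketch of this FFT-based deconvolution is given below. The function and variable names are illustrative; the shape argument is the size of the zero-padded (block circulant) domain.

```python
import numpy as np

def fft_deconvolve(p_m, g, shape):
    """Unregularized 3-D FFT deconvolution in the spirit of Eq. (4): divide the spectra
    of the zero-padded measurement and Green's function, then invert the 3-D FFT."""
    G = np.fft.fftn(g, s=shape)            # spectrum of the zero-padded Green's function
    P_m = np.fft.fftn(p_m, s=shape)        # spectrum of the zero-padded measurement
    eps = 1e-12                            # small guard against division by ~0 (illustrative)
    return np.real(np.fft.ifftn(P_m / (G + eps)))
```

Without the regularization discussed below, this direct spectral division amplifies the noisy evanescent components.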

Tikhonov regularization consists in finding a compromise between the residual norm ‖Gps − pm‖₂ and the norm of the solution ‖Γps‖₂, where ‖·‖₂ denotes the ℓ2-norm. In this paper, we assume the Tikhonov matrix Γ to be the identity, and the solution to this regularization problem becomes analogous to applying a low-pass filter. When the regularization applied is not sufficient (corresponding to a high cutoff frequency), the amplified noise components dominate the reconstruction and the norm of the solution ‖ps‖₂ becomes large. Conversely, when too much regularization is applied, ‖ps‖₂ tends to zero and the residual norm ‖Gps − pm‖₂ becomes large. The optimal solution lies between these two cases, and such a compromise is found by minimizing over ps the Lagrangian expression

$L(p_s) = \|G p_s - p_m\|_2^2 + \lambda\, \|p_s\|_2^2, \qquad (5)$

where the regularization coefficient λ acts as a Lagrange multiplier. The solution p̂s is obtained by taking the derivative of Eq. (5) with respect to ps and setting it equal to zero.

In the Fourier domain, Tikhonov regularization corresponds to applying the low-pass Wiener filter expressed in Eq. (6), and the regularization parameter λ acts as its cutoff frequency

(6)

The operations required for solving the inverse problem with Tikhonov regularization are described in the block diagram presented in Fig. 2.

The first step is to apply 3-D zero-padding to the TSD Green's function in order to double the size of its domains in the TSDs. Padding can also be applied to the measured field pm to match the size of the Green's function, if needed. Then, a 3-D FFT is applied to both signals, and the measured field is divided by the Green's function in the Fourier domain. The Wiener filter with parameter λ is applied to regularize the reconstruction. Finally, the inverse 3-D FFT is applied and the reconstruction p̂s is obtained.
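These operations can be sketched as follows. The filter form |G̃|²/(|G̃|² + λ) used here is a common Tikhonov/Wiener parameterization and stands in for Eq. (6); the paper's exact expression may be parameterized differently.

```python
import numpy as np

def tikhonov_deconvolve(p_m, g, shape, lam):
    """3-D FFT deconvolution regularized with a Wiener-type low-pass filter.
    NOTE: the filter |G|^2 / (|G|^2 + lam) is a standard Tikhonov form assumed
    here for illustration; it is not guaranteed to match the paper's Eq. (6)."""
    G = np.fft.fftn(g, s=shape)            # zero-padded Green's function spectrum
    P_m = np.fft.fftn(p_m, s=shape)        # zero-padded measurement spectrum
    P_s = np.conj(G) * P_m / (np.abs(G) ** 2 + lam)   # filtered spectral division
    return np.real(np.fft.ifftn(P_s))
```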

Tikhonov regularization is not effective at reconstructing null components unless the value of the regularization parameter is high. However, in that case, the Wiener filter also considerably attenuates the non-zero components. For lower regularization parameter values, significant ripples are generated where the reconstructed pressure field should be null. In addition to creating causality errors, this considerably increases the global root-mean-square (RMS) error of the reconstruction.

Sparse regularization consists in replacing the ℓ2-norm applied to the solution ps with the ℓ1-norm; the ℓ2 residual norm is kept. This addresses the problem described above, since the ripples located where the solution should be null are given a greater weight in the cost function. The sparse regularization cost function is expressed as

$L_1(p_s) = \|G p_s - p_m\|_2^2 + \lambda\, \|p_s\|_1. \qquad (7)$

In this case, it is not possible to derive a solution in closed form, and the solution p̂s to Eq. (7) must be computed numerically. Such a problem is known as basis pursuit denoising: it is a convex optimization problem for which many algorithms have been developed.19 In this paper, the YALL1 module, based on an alternating direction algorithm, is used in matlab.20 An asset of this module is that the convolution matrix G does not have to be explicitly expressed, as YALL1 can take advantage of FFT-based computation for the matrix multiplication. It was shown that both the accuracy and computational efficiency of the YALL1 module are competitive in comparison with other matlab-based convex optimization algorithms applied to ℓ1 regularization.28 However, higher efficiency can be obtained with packages available in other programming environments. The operations involved in solving the inverse problem with sparse regularization are described in the block diagram presented in Fig. 3.

The 3-D zero-padding is applied to the TSD Green's function to obtain the linear deconvolution formulation; it can also be applied to the measurement pm to match the size of g. The Green's function, the measured field, and the regularization parameter are the inputs of the convex optimization algorithm.
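For illustration, the sketch below solves the same matrix-free ℓ1 problem with a generic proximal-gradient (ISTA) iteration rather than YALL1's alternating direction algorithm; it only shows how FFT-based operators replace explicit multiplications by G and Gᵀ. All names are illustrative, and the 1/2 scaling of the data-fit term simply rescales λ.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_deconvolve(p_m, g, shape, lam, n_iter=200):
    """Matrix-free l1-regularized deconvolution solved with ISTA (a generic sketch;
    the paper uses the YALL1 alternating-direction solver instead)."""
    G = np.fft.fftn(g, s=shape)
    P_m = np.fft.fftn(p_m, s=shape)
    L = np.max(np.abs(G)) ** 2             # Lipschitz constant of the data-fit gradient
    p_s = np.zeros(shape)
    for _ in range(n_iter):
        # Gradient of (1/2)*||G p_s - p_m||_2^2, computed entirely with FFTs.
        residual_spec = G * np.fft.fftn(p_s) - P_m
        grad = np.real(np.fft.ifftn(np.conj(G) * residual_spec))
        p_s = soft_threshold(p_s - grad / L, lam / L)
    return p_s
```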

Another way of interpreting the difference between Tikhonov and sparse regularization is through the Bayesian approach. Antoni proposed a Bayesian interpretation for NAH, as described in Ref. 21. It considers the measured pressure field pm and the pressure at the source ps as probabilistic quantities. Both are expressed in the TSDs, and one can be related to the other using Bayes' theorem

$P(p_s \mid p_m) = \dfrac{P(p_m \mid p_s)\, P(p_s)}{P(p_m)}. \qquad (8)$

The posterior P(ps|pm) is the probability density function (PDF) that expresses the probability of obtaining such a pressure field at the source given the measurement pm. By maximizing the posterior, that is, by selecting the most probable solution, the reconstruction can be determined. The posterior depends on the PDFs attributed to the likelihood function P(pm|ps) and the prior P(ps). P(pm) acts as a normalization constant, since the integral of the PDF must be unity.

According to the central limit theorem, it makes sense to assign a Gaussian PDF to the likelihood function, since the variations of the measurement usually depend on many independent random processes, such as electronic or background noise. In addition, assuming a Gaussian PDF is the most conservative choice, since it is the distribution with maximum entropy for a given variance.

The prior P(ps) can take many forms, depending on our knowledge of the source. By assuming a Gaussian PDF, the maximum a posteriori becomes equivalent to the Tikhonov cost function expressed in Eq. (5), where λ is interpreted as the noise-to-signal ratio (i.e., the ratio of the variances of the Gaussian likelihood and prior).21,22 For a sparser source, a Laplace distribution can be assumed, which leads to the sparse regularization cost function of Eq. (7). Gaussian (continuous line) and Laplace (dashed line) distributions with unit variance σ² and mean μ are compared in Fig. 4.
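The correspondence can be made explicit by writing the negative log-posterior under each prior. This is a standard derivation; the variance symbols below are generic and are not taken from the paper:

$-\log P(p_s \mid p_m) = \tfrac{1}{2\sigma_n^2}\,\|G p_s - p_m\|_2^2 \;-\; \log P(p_s) \;+\; \text{const.},$

$\text{Gaussian prior:}\quad -\log P(p_s) \propto \tfrac{1}{2\sigma_s^2}\,\|p_s\|_2^2 \;\Rightarrow\; \hat{p}_s = \arg\min_{p_s}\, \|G p_s - p_m\|_2^2 + \lambda\,\|p_s\|_2^2,\quad \lambda = \sigma_n^2/\sigma_s^2,$

$\text{Laplace prior:}\quad -\log P(p_s) \propto \tfrac{1}{b}\,\|p_s\|_1 \;\Rightarrow\; \hat{p}_s = \arg\min_{p_s}\, \|G p_s - p_m\|_2^2 + \lambda\,\|p_s\|_1,$

which recover the cost functions of Eqs. (5) and (7), respectively, up to the scaling of λ.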

FIG. 2. Block diagram of the operations involved in the 3-D linear deconvolution with Tikhonov regularization.
FIG. 3. Block diagram of the operations involved in the 3-D linear deconvolution with sparse regularization.
FIG. 4. (Color online) Comparison between the Gaussian and Laplacian PDFs.

The Laplace distribution has a larger proportion of its probability density contained near its maximum and in its tails. This reflects its tendency to produce reconstruction values that are either relatively large or close to zero.

With standard NAH formulations, the analytical Green's function is sampled in the frequency or wave number domains.1,23 In such cases, the application of the inverse discrete Fourier transform produces a truncation of the Green's function's spectrum, and consequently, important spectral leakage errors are generated in the non-stationary field represented in the TSD. As presented in Ref. 5, a means of avoiding the generation of such errors is to sample the analytical Green's function directly in the TSD [Eq. (2)] by rounding the term R/c to the closest time domain sample. Additionally, the truncation error of such a Green's function in the TSD is not significant, since its amplitude is located mostly near the origin due to its terms in 1/R² and 1/R³. As discussed in Ref. 5, it is not possible to fully suppress such a truncation error; however, its importance is significantly reduced as the sizes of the finite domains increase. Another asset of the TSD formulation is the use of the linear deconvolution operator obtained by appending zero-padding to the Green's function in both the spatial and time domains, which avoids the generation of wraparound errors.5 It is therefore recommended to use the TSD formulation when dealing with non-stationary signals, since it provides a more precise reconstruction than its Fourier domain counterpart.5
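The delay-rounding step can be sketched as follows. Only the placement of the delayed samples is shown; the amplitude law is left as a user-supplied placeholder because the paper's Eq. (2), with its 1/R² and 1/R³ terms, is not reproduced here, and all parameter names are illustrative.

```python
import numpy as np

def sample_tsd_kernel(Nx, Ny, Nt, dx, fs, d, c=343.0, amplitude=lambda R: 1.0 / R**2):
    """Sample a time domain propagation kernel on the TSD grid by rounding the
    delay R/c to the nearest time sample (Ref. 5). The default amplitude law
    1/R^2 is a hypothetical placeholder, NOT the paper's Green's function."""
    g = np.zeros((Nx, Ny, Nt))
    x = (np.arange(Nx) - Nx // 2) * dx
    y = (np.arange(Ny) - Ny // 2) * dx
    X, Y = np.meshgrid(x, y, indexing="ij")
    R = np.sqrt(X**2 + Y**2 + d**2)            # propagation radius
    n = np.rint(R / c * fs).astype(int)        # delay rounded to the nearest sample
    ix, iy = np.where(n < Nt)                  # keep delays that fit in the time window
    g[ix, iy, n[ix, iy]] = amplitude(R[ix, iy])
    return g
```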

These important assets of the TSD formulation require sampling both the Green's function and the deconvolved signal in the TSD. The reconstructed non-stationary signal is also represented in the TSD, since its frequency content is not statistically constant over time and space. However, in the context of ℓ1 regularization, the sparsity of such a representation is not guaranteed for an arbitrary non-stationary sound field, as it depends on the size of the finite domains considered with respect to the span and duration of the sound field radiated by the source. Sparsity is obtained in the case of transient sources (when the finite TSD considered is large enough), which is why this paper focuses on the analysis of such signals. Nonetheless, this constitutes an important constraint for the application of sparse regularization to more general non-stationary signals.

Other representations of the signals, such as in the wavelet domains, could provide a sparse representation for the generalized non-stationary case. However, as summarized above, a formulation in an alternative basis can be sensitive to the generation of leakage and wraparound errors. For the application of sparse regularization to the reconstruction of more general non-stationary signals, a compromise must thus be found between the generation of such signal processing errors and obtaining a signal with a high level of sparsity. This trade-off is not covered in the present paper, but its analysis would constitute interesting future work.

In this section, we present a comparative analysis of the regularization error obtained from the sparse and Tikhonov methods. The analysis is based on the use of a baffled circular piston model.24 This theoretical model is exact in the nearfield for any transient displacement of the piston. We consider an exponentially decaying sinusoidal displacement to simulate a damped vibrating source. An overview of the model and physical parameters used is presented in the  Appendix.

The approach is as follows:

  • A signal ps is generated on the reconstruction plane z=z0 using the baffled circular piston model.

  • The 3-D linear convolution of ps with the TSD Green's function is calculated to determine the propagated field on the plane z=z0+d.

  • We add white Gaussian noise of given SNR to the propagated field, which acts as the measurement pm.

  • We solve the inverse problem with both regularization methods for a wide range of regularization parameters λ to obtain the reconstructions p̂s.

  • We calculate the relative RMS error between the reconstructions p̂s and the simulated pressure field ps for both methods.

An asset of this approach is that only the noise and its amplification due to the ill-posedness of the problem contribute to the error, since the operators for propagation and back-propagation are the exact inverse of each other. Therefore, other sources of error typically found in NAH, such as truncation error, do not contribute to the RMS error; the analysis focuses solely on the regularization problem, as intended.
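A compact sketch of this validation loop is given below for the Tikhonov case, reusing the sample_tsd_kernel and tikhonov_deconvolve helpers sketched earlier. The toy source, the grid sizes, and the λ grid are illustrative placeholders; in the paper, the source field is generated with the baffled piston model of the Appendix.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
Nx, Ny, Nt, dx, fs, d = 16, 16, 128, 0.12, 50e3, 6 * 0.12   # illustrative values only

# 1) Toy source field p_s on the reconstruction plane (stands in for the piston model).
p_s = np.zeros((Nx, Ny, Nt))
p_s[Nx // 2, Ny // 2, 5:25] = np.sin(2 * np.pi * 500 * np.arange(20) / fs)

# 2) Forward propagation by 3-D linear convolution with the sampled kernel.
g = sample_tsd_kernel(Nx, Ny, Nt, dx, fs, d)
p_clean = fftconvolve(p_s, g, mode="full")

# 3) Additive white Gaussian noise at a prescribed SNR (here 30 dB).
snr_db = 30.0
noise_std = np.sqrt(np.mean(p_clean**2) / 10 ** (snr_db / 10))
p_m = p_clean + rng.normal(0.0, noise_std, p_clean.shape)

# 4)-5) Reconstruct over a range of lambdas and keep the minimum relative RMS error.
errors = {}
for lam in np.logspace(-6, 0, 13):
    p_hat = tikhonov_deconvolve(p_m, g, p_m.shape, lam)[:Nx, :Ny, :Nt]
    errors[lam] = np.linalg.norm(p_hat - p_s) / np.linalg.norm(p_s)
lam_opt = min(errors, key=errors.get)
print(f"optimal lambda = {lam_opt:.1e}, relative RMS error = {errors[lam_opt]:.3f}")
```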

At this point in the paper, we assume that we have the means of predicting the optimal regularization parameter. Therefore, for each studied case, we select the solution associated with the regularization parameter λopt that minimizes the RMS error. The comparison is thus made between the optimal reconstructions of both methods.

The relative RMS error expressed in Eq. (9) is used to compare the results obtained with the two methods

$\varepsilon = \dfrac{\|\hat{p}_s - p_s\|_2}{\|p_s\|_2} \times 100\%. \qquad (9)$

Here, ps is the simulated sound field at z = z0 and p̂s is the regularized reconstruction obtained from the noisy measurement.

We first present in Fig. 5 a comparative example of typical reconstruction results obtained with the two regularization methods. In this case, the distance between adjacent microphones is dx = 12 cm, the microphone array spans 1.2 × 1.2 m, and the back-propagation distance is d = 6dx. White Gaussian noise is added to pm in order to obtain an SNR of 30 dB.

FIG. 5. (Color online) Example of reconstructions obtained with the Tikhonov (ℓ2) and sparse (ℓ1) methods. (a) Relative RMS error with respect to the regularization parameter λ. (b) Spatial reconstructions at t = 2.4 ms. (c) Absolute error of the spatial reconstruction at t = 2.4 ms. (d) Time domain reconstructions at (x, y) = (0, 0.36) m.

In Fig. 5(a), we present the relative RMS error with respect to the regularization parameter λ. Tikhonov regularization produces a minimal relative RMS error of 12%, as opposed to 5% with sparse regularization: the global error is thus reduced by more than half with the sparse method. Furthermore, the error with sparse regularization remains lower than with Tikhonov regularization for a wide range of regularization parameters (up to λ = 5 × 10⁻³ in this example). Consequently, the improvement due to the sparse method remains effective even if the optimal regularization parameter is not accurately predicted. The improvement brought by sparse regularization is due to the sparse nature of the solution, as reflected in Figs. 5(b) and 5(c).

In Fig. 5(b), the spatial reconstructions p̂s obtained with both regularization methods are presented for t = 2.4 ms. The corresponding absolute errors (p̂s − ps) are presented in Fig. 5(c). The black dashed circles represent the limit of the propagation of the sound field at t = 2.4 ms; beyond this limit, the causal pressure field should be null. With Tikhonov regularization, we observe that the absolute error is distributed uniformly over the spatial domain; noisy values beyond the propagation limit produce causal errors in the reconstruction. With sparse regularization, the absolute error beyond the propagation limit drops. The average RMS error beyond the propagation boundary is more than four times smaller for the sparse regularized reconstruction (0.006 Pa) than for the Tikhonov regularized reconstruction (0.028 Pa). However, within the boundary, the absolute RMS error is lower for Tikhonov regularization than for the sparse method (0.031 Pa vs 0.041 Pa). This reflects the fact that the ℓ1-norm-based method is effective only when the reconstructed signal has a high level of sparsity; otherwise, the standard Tikhonov method should be used. When the boundary between the non-zero and null regions is well defined in the ℓ2 reconstruction, as in the studied case, thresholding the values of the reconstruction to zero beyond the main wave front can considerably improve the performance of the Tikhonov method. However, in more realistic cases, this is not possible because of the presence of important dispersive components or because the geometry of the source is more complex.
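For reference, the thresholding mentioned above can be sketched as a simple causal mask applied to the ℓ2 reconstruction. The geometry is deliberately simplified (circular source of radius a centered on the aperture, standoff distance neglected), so this is an illustrative post-processing step rather than the paper's procedure.

```python
import numpy as np

def causal_mask(p_hat, dx, fs, a, c=343.0, t0=0.0):
    """Zero the reconstruction at positions and times the first wave front cannot
    have reached: a point at radial distance r from a circular source of radius a
    is zeroed for t < t0 + max(r - a, 0)/c. Simplified, illustrative geometry."""
    Nx, Ny, Nt = p_hat.shape
    x = (np.arange(Nx) - Nx // 2) * dx
    y = (np.arange(Ny) - Ny // 2) * dx
    X, Y = np.meshgrid(x, y, indexing="ij")
    r = np.sqrt(X**2 + Y**2)
    t = np.arange(Nt) / fs
    arrival = t0 + np.maximum(r - a, 0.0) / c           # earliest possible arrival time
    mask = t[None, None, :] >= arrival[:, :, None]      # True where the field may be non-zero
    return p_hat * mask
```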

In Fig. 5(d), the time domain reconstruction at x = 0 m and y = 0.36 m is presented. The black vertical dashed line represents the time it takes for the first wave front to travel from the piston to the observation position. Again, before this limit, the amplitude of the pressure field should be null, which is approximately the case for the reconstruction obtained with the sparse regularization method. In fact, the reconstruction obtained with sparse regularization is almost identical to the solution. With Tikhonov regularization, we notice the presence of causal errors. Beyond the limit, both methods lead to similar results.

We present an analysis of the relative RMS error with respect to the level of white Gaussian noise and the back-propagation distance. The relative RMS error is presented in Fig. 6(a). The relative improvement, obtained from the difference in RMS errors between the two methods normalized by the relative RMS error of the sparse method, is presented in Fig. 6(b). In each case, the reconstruction with the regularization parameter that minimizes the error is selected. The area highlighted in orange in Fig. 6(b) represents the conditions in which sparse regularization is outperformed by Tikhonov regularization.

FIG. 6. (Color online) Error analysis of Tikhonov and sparse regularization with respect to the SNR and the normalized back-propagation distance. (a) Relative RMS errors ε. (b) Difference in RMS errors for both methods normalized by that of the sparse method.

In the analysis presented in Fig. 6, the SNR is studied for various propagation distances. This means that for a given SNR, the absolute noise level added to the measurement varies with the propagation distance, since the amplitude of the field decreases with propagation. In reality, we expect the SNR to decrease with propagation, due to the lower sound pressures relative to the transducer self-noise.

We observe in Fig. 6(a) that the relative RMS error increases with the noise level, as expected. Additionally, the relative RMS error obtained with the sparse regularization method (dashed curves) is below that of the Tikhonov method (solid curves) for almost all cases studied.

For all SNRs studied, the relative RMS error does not increase monotonically with the back-propagation distance: beyond about d = 6dx, it decreases even though the ill-posedness becomes more severe as d increases. This is due to the fact that, for a given SNR, the amplitude of the measurement pm decreases with d, and consequently, the added noise level also decreases. Thus, the absolute noise level added is higher for shorter propagation distances, and there is a trade-off between the added noise level and its amplification caused by the ill-posedness of the problem.

We observe in Fig. 6(b) that the improvement of the sparse method over the Tikhonov method decreases as the noise level increases, that is, for lower SNRs and smaller back-propagation distances. In fact, the improvement becomes nonexistent for high noise levels because the sparse character of the signal is lost. Indeed, in the presented example, when the added noise is at its highest (SNR of 10 dB and d = {2, 3}dx), Tikhonov regularization outperforms the sparse method.

To support the fact that the performance of sparse regularization increases with the degree of sparsity of the reconstruction, the error is analyzed with respect to the span of the square array. In Fig. 7, the proportion of sparsity increases with the size of the microphone array because, with wider spatial and time domain windows, it takes more time for the wave front to travel across the whole finite domain. In this case, the back-propagation distance is 6dx with dx = 12 cm.

FIG. 7. (Color online) Sparsity ratio (solid curve) and difference in RMS errors for both studied methods normalized by that of the sparse method (dashed curve) with respect to the span of the square array.

In Fig. 7, the proportion of sparsity is represented by the red solid curve. It refers to the ratio of the number of zero elements to the total number of elements in the solution ps. The improvement of sparse regularization with respect to Tikhonov regularization is represented by the blue dashed curve. Clearly, this improvement follows the sparsity of the solution.

The main drawback of sparse regularization compared to the standard method is its increased computational cost. Because the solution to ℓ2 regularization is obtained analytically, its computation is straightforward and very efficient; for a standard microphone array, it takes less than a second on a desktop computer. The basis pursuit problem requires an iterative convex optimization algorithm. With the YALL1 module, for a tolerance of 10⁻⁴ and with N on the order of 10⁴, the algorithm takes about 100 s to converge with matlab on a desktop computer. However, this calculation time could certainly be improved by lowering the floating-point precision or by using a more computationally efficient programming language.

The accuracy of the sparse reconstruction algorithm depends on the choice of the regularization parameter λ. The performance of standard methods for predicting λ with sparse regularization, such as generalized cross-validation, the Bayesian information criterion, and the Akaike information criterion, was investigated; however, the results obtained when these were applied in conjunction with the proposed non-stationary NAH method were of poor accuracy. Instead, in this section, we focus our analysis on the Pareto frontier curve, a variant of the L-curve method, which more adequately predicts the optimal regularization parameter.

The Pareto frontier curve is analogous to the L-curve method frequently used in NAH for predicting the optimal regularization parameter with Tikhonov regularization; with sparse regularization, however, it considers the ℓ1-norm of the solution. Since Eq. (7) consists in a trade-off between the residual norm and the norm of the solution, the Pareto frontier curve involves plotting the logarithm of the two quantities for various values of λ, and finding the trade-off value where the curvature of the curve is maximal.

It is demonstrated in Ref. 13 that when applied to sparse regularization, such a curve is convex and continuously differentiable over all points of interest, which ensures its applicability. An example of the Pareto frontier curve is shown in Fig. 8 for d=5dx and an SNR of 10 dB.

FIG. 8. (Color online) Pareto frontier curve for d = 5dx and an SNR of 10 dB.

The trade-off value for λ is obtained at the maximum negative curvature located in the "elbow" of the L-shaped curve, denoted by the arrow in Fig. 8. In this case, the prediction for the optimal value of λ is about 10⁻³. The minimum curvature is obtained numerically by minimizing the function

$\kappa(\lambda) = \dfrac{\rho'\,\eta'' - \rho''\,\eta'}{\left[(\rho')^2 + (\eta')^2\right]^{3/2}}, \qquad (10)$

where ρ = log(‖G p̂s(λ) − pm‖₂), η = log(‖p̂s(λ)‖₁), and the primes denote differentiation with respect to λ. We calculated the derivatives by interpolating with cubic spline functions.

Furthermore, Eq. (10) can be expressed as a function of λ; thus, the minimum can conveniently be obtained numerically without computing the solution ps for numerous regularization parameters, which speeds up the optimization process.13
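A simple sketch of the corner detection is given below. It assumes the curvature expression written above for Eq. (10) and, unlike the root-finding approach of Ref. 13, it evaluates the curve on a precomputed grid of λ values (for which the residual and solution norms must already be available, e.g., from the sparse solver sketched earlier); the cubic-spline differentiation follows the text.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def pareto_corner(lambdas, residual_norms, solution_norms):
    """Locate the 'elbow' of the Pareto frontier curve: the lambda of maximum
    negative curvature of (rho, eta) = (log ||G p_hat - p_m||_2, log ||p_hat||_1).
    lambdas must be sorted in increasing order. The curvature expression is the
    standard L-curve formula, assumed here to stand in for Eq. (10)."""
    log_lam = np.log10(np.asarray(lambdas))
    rho = CubicSpline(log_lam, np.log(residual_norms))
    eta = CubicSpline(log_lam, np.log(solution_norms))
    d_rho, dd_rho = rho.derivative(1)(log_lam), rho.derivative(2)(log_lam)
    d_eta, dd_eta = eta.derivative(1)(log_lam), eta.derivative(2)(log_lam)
    kappa = (d_rho * dd_eta - dd_rho * d_eta) / (d_rho**2 + d_eta**2) ** 1.5
    return np.asarray(lambdas)[np.argmin(kappa)]   # maximum negative curvature
```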

In Fig. 9, we present the relative RMS error with respect to the regularization parameter. The results are shown for three back-propagation distances and four noise levels. In each case, the black diamond markers indicate the predictions obtained with the Pareto frontier curve.

FIG. 9. (Color online) Relative error with respect to the regularization parameter. (a) d = 1dx. (b) d = 5dx. (c) d = 9dx.

We observe in Fig. 9 that for all back-propagation distances, at high SNRs (low noise levels), the relative RMS error does not vary significantly with the regularization parameter for low values of λ; for higher values of λ, the error starts increasing more significantly. Since the relative error curve remains flat with respect to λ, the accuracy of the reconstruction is not very sensitive to the regularization coefficient, as long as the prediction remains below a threshold of about λ = 10⁻³ for the examples presented. In all presented cases with high SNR, the Pareto frontier curve method accurately predicts the minimum.

For lower SNRs (high noise levels), the relative RMS error becomes more sensitive to the choice of the regularization parameter: the curves in Fig. 9 for an SNR of 10 dB are much steeper, especially for d = 5dx, where the error is high [see Fig. 6(a)]. In these cases, the prediction of the regularization parameter can have an important effect on the accuracy of the reconstruction, especially since the RMS error is already high. For example, in Fig. 9(a), the minimum is at λ = 1.6 × 10⁻² and the predicted value is λ = 2.5 × 10⁻³: this corresponds to increasing the error from 14.9% to 18.4% (an increase of about 20%).

In Fig. 10, we present the difference in relative RMS error between the optimal reconstruction (εopt) and the reconstruction obtained with the regularization parameter predicted with the Pareto frontier curve method (εℓ1), expressed as

$\Delta\varepsilon = \varepsilon_{\ell_1} - \varepsilon_{\mathrm{opt}}. \qquad (11)$
FIG. 10. (Color online) Difference in relative RMS error between the error obtained from the predicted regularization parameter and the minimum error with respect to the back-propagation distance.

In general, the Pareto frontier curve method produces a good prediction of the optimal regularization parameter. The difference in relative error remains low (under 1% in most cases), especially for higher SNRs. The difference becomes larger at lower SNRs; however, since the absolute error is already high, the relative effect on the result is not significant.

The prediction error Δε remains acceptable: when added to the optimal sparse regularization error in Fig. 6(a), the total error remains below the minimal error produced with the Tikhonov method.

In this section, both sparse and Tikhonov regularizations are applied to the reconstruction of a transient field measured in an anechoic environment. The field is produced by the impact of a metallic rod on a poly(methyl methacrylate) (PMMA) plate. The reconstructions obtained with both methods are compared.

The experimental setup considered is presented in Fig. 11. The studied acoustic source is a 500 × 300 × 6 mm PMMA plate impacted at its center. The plate is represented by the gray surface in Fig. 11. It is fixed to the frame of the apparatus using four metallic wires attached at its corners, which allows the plate to move freely along the vertical axis. The impact rod is released by an electromagnet, which ensures the repeatability of the impact conditions.

FIG. 11. Experimental setup. Adapted from Ref. 4.

After the impact, the plate vibrates for several milliseconds before its movement is completely damped. The measurement is performed using a linear array of 64 class-1 microphones spaced 1 cm center to center. A micrometric slide allows the array to be translated over the scanning surface (91 cm × 63 cm) with an increment of 1 cm. The impact is repeated for each measurement position by releasing the rod with the electromagnet, and the measurements are phased using a 65th fixed reference microphone. The electromagnet and rod setup allows a high repeatability of the plate excitation. The sampling frequency is 50 kHz, 1000 samples are acquired, and the acquisition is performed with 24-bit resolution.

With sparse regularization, the regularization parameter is chosen with the Pareto frontier curve method, as described in Sec. IV. With Tikhonov regularization, the parameter is predicted using generalized cross-validation, as described in Ref. 25. In both cases, the back-propagation distance is d = 5dx = 5 cm, which corresponds to the location of the plane source.
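For reference, a sketch of generalized cross-validation in the Fourier-diagonalized setting is given below; it follows the standard GCV definition of Ref. 25 under the Tikhonov filter form assumed in the earlier sketch, since the paper does not detail its exact implementation.

```python
import numpy as np

def gcv_tikhonov(p_m, g, shape, lambdas):
    """Generalized cross-validation for the Tikhonov parameter, evaluated in the
    Fourier domain where the zero-padded problem is diagonal. Standard GCV form
    (Ref. 25); an assumption, since the paper's implementation is not detailed."""
    G = np.fft.fftn(g, s=shape)
    P_m = np.fft.fftn(p_m, s=shape)
    G2 = np.abs(G) ** 2
    N = np.prod(shape)
    scores = []
    for lam in lambdas:
        one_minus_A = lam / (G2 + lam)                       # eigenvalues of I - A(lam)
        residual = np.sum(np.abs(one_minus_A * P_m) ** 2) / N   # ||(I - A) p_m||^2 (Parseval)
        scores.append(residual / np.sum(one_minus_A) ** 2)      # GCV(lam)
    return np.asarray(lambdas)[int(np.argmin(scores))]
```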

For the experimental results, the true values of the field on the reconstruction plane are unknown. Consequently, it is not possible to calculate the RMS error associated with the reconstruction in this case, and the analysis presented in this section is thus qualitative. The reconstructions obtained with both methods are presented in Fig. 12 in the spatial domain (for t = 3.5 ms) and in the time domain at the center of the aperture as well as at x = y = 15 cm from the center.

FIG. 12. (Color online) Reconstruction on the source plane obtained from experimental measurements with the sparse and Tikhonov regularization methods. (a) In the spatial domain at t = 3.5 ms. (b) In the time domain at the center of the aperture (x, y) = (0, 0) m. (c) In the time domain at (x, y) = (0.15, 0.15) m.

When the center of the plate is struck by the rod, a flexural wave is generated in the plate and propagates radially from the impact point. This is observable in the reconstructed acoustic field shown in Fig. 12(a) in the spatial domain. The main wave front, represented by a red circle, corresponds to the primary rise in amplitude caused by the impact, propagated over 3.5 ms. The wave front travels radially from its origin and is followed by a succession of lower-amplitude oscillations (i.e., smaller concentric circles) that correspond to the damped displacement of the plate following the impact. Because the flexural wave is dispersive, the presence of low-amplitude, high-frequency, and high-velocity wave components preceding the main wave front is also expected.26 Some of the ripples observed beyond the main wave front, in the form of larger concentric circles in Fig. 12(a), correspond to such high frequency components. A proportion of these ripples also indicates the presence of deconvolution errors due to the amplification of high frequency evanescent components. With ℓ1 regularization, the amplitude of the ripples falls to zero a few centimeters beyond the boundaries of the main wave front. The selection of the regularization parameter with the sparse method seems adequate, since the amplitude of the field is not null around the main wave front, which supports the presence of high frequency components, while causal errors are suppressed at positions close to the boundaries of the finite aperture. However, with ℓ2 regularization, the extent of these ripples is such that they reach the boundaries of the finite aperture. A majority of these components should correspond to non-causal deconvolution errors, since the dispersive components travel only slightly faster than the main wave front, their velocity being proportional to the square root of the band-limited frequency.27

In the time domain, both methods lead to comparable results at the central position, as shown in Fig. 12(b). The maxima are somewhat higher with sparse regularization, and some low amplitude oscillations are attenuated to zero. This oscillatory behavior of the plate is expected, since the vibration field should not contain any discontinuity, and it is likely that the sparse method over-attenuates these components. Similar conclusions are drawn for the time domain results obtained at x = y = 0.15 m and presented in Fig. 12(c). However, in this case, the reconstruction obtained with Tikhonov regularization exhibits significant oscillations from t = 2 ms (highlighted by the black arrow), even though the causal field should be null at that time and position. Indeed, since the wave originates around t = 2.7 ms at the impact point [as seen in Fig. 12(b)], and because some time is required for its travel, the wave is expected to reach the position in Fig. 12(c) later than t = 2.7 ms. With sparse regularization, these causal errors are either suppressed or significantly attenuated. In conclusion, the observations described for the simulated examples presented in Sec. III remain applicable to the experimental results: in general, with sparse regularization, the amplitude of the field beyond the main wave front converges to zero. The reconstruction of the causal field is thus improved by sparse regularization, as the causality of the field is preserved. This holds even with measurement noise, sampling frequencies, and finite aperture dimensions that are representative of typical experimental conditions.

In this paper, we have shown that sparse regularization based on the ℓ1-norm significantly reduces the causal errors typically found in the reconstruction of transient signals. For signals with a high level of sparsity, this considerably reduces the global RMS error in comparison to standard Tikhonov regularization. In the studied cases, the global RMS error is reduced by half when half of the reconstructed field values are null, and by a factor of 3 when the sparsity of the reconstructed field reaches 70%. At times and positions for which the reconstruction is non-null, ℓ2 regularization only slightly outperforms the ℓ1 method. It was shown that the Pareto frontier curve adequately predicts the optimal value of the regularization parameter with the TSD formulation. The difference in relative RMS error obtained with optimal regularization and with the Pareto frontier curve remains below 1% for SNRs of 30 and 40 dB, below 2% for an SNR of 20 dB, and below 6% for an SNR of 10 dB. Additionally, sparse regularization was successfully applied to the reconstruction of the experimental transient signal produced by the impact of a metallic rod on a PMMA plate.

In past years, much effort has been invested in finding methods to predict the optimal Tikhonov parameter. Since the performance of ℓ1 regularization is promising, future work could focus on investigating the application of such prediction methods to this regularization approach. In addition, the performance of the elastic net method, which combines the ℓ1 and ℓ2 penalties, could be investigated: an improved reconstruction could lie in such a combination, as it would profit from the advantages of both the sparse and Tikhonov methods.

This work was supported by the Natural Sciences and Engineering Research Council of Canada. The authors would like to express their gratitude to Marc Provost and Jeremy Pinto for their valuable help with the experimental part of this work.

An analytical model to determine the transient radiation of a circular baffled piston was developed by Stepanishen.24 The model is based on the Green's function formulation and is exact in both the nearfield and the far-field, and it is applicable to any displacement of the piston. Since the displacement of the piston is uniform in space, Stepanishen shows that the pressure field with respect to time and position r is expressed as

(A1)

The “∗” symbol denotes the time domain convolution of the acceleration with the impulse response h(r,t), and ρ0 is the density of air. When piston radius a>r, the impulse response function is defined as

(A2)

When a<r, the impulse response function becomes

(A3)

where Rmin and Rmax are, respectively, the shortest and longest distances from the observation point to the circumference of the piston

(A4)

and l is the projection of r over the z axis. In this analysis, an underdamped sinusoidal piston motion profile is used

(A5)
(A6)

where ζ is the decay coefficient. The physical parameters involved in the calculation are presented in Table I.

TABLE I.

Physical parameters of the transient piston.

Parameters              Values
Piston radius           a = 0.20 m
Sinusoidal frequency    f = 500 Hz
Decay coefficient       ζ = 1000 s⁻¹
Amplitude               A = 0.0001 m
Speed of sound          c = 343 m/s
Air density             ρ0 = 1.2 kg/m³
Measuring distance      z0 + d = 0.05 m
1. E. G. Williams, Fourier Acoustics: Sound Radiation and Nearfield Acoustical Holography (Academic, New York, 1999), pp. 1–306.
2. J. D. Maynard, E. G. Williams, and Y. Lee, "Nearfield acoustic holography: I. Theory of generalized holography and the development of NAH," J. Acoust. Soc. Am. 78, 1395–1413 (1985).
3. E. G. Williams and J. D. Maynard, "Holographic imaging without the wavelength resolution limit," Phys. Rev. Lett. 45(7), 554–557 (1980).
4. J.-M. Attendu and A. Ross, "Nearfield acoustical holography without wraparound error and spectral leakage for nonstationary forward propagation," J. Acoust. Soc. Am. 141(2), 1039–1050 (2017).
5. J.-M. Attendu and A. Ross, "Time domain nearfield acoustical holography with three-dimensional linear deconvolution," J. Acoust. Soc. Am. 143(3), 1672–1683 (2018).
6. P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems (SIAM, Philadelphia, 1998).
7. E. G. Williams, "Regularization methods for near-field acoustical holography," J. Acoust. Soc. Am. 110(4), 1976–1988 (2001).
8. R. Tibshirani, "Regression shrinkage and selection via the lasso," J. R. Statist. Soc. B 58(1), 267–288 (1996).
9. G. Chardon, L. Daudet, A. Peillot, F. Ollivier, N. Bertin, and R. Gribonval, "Nearfield acoustic holography using sparsity and compressive sampling principles," J. Acoust. Soc. Am. 132(3), 1521–1534 (2012).
10. E. Fernandez-Grande, A. Xenaki, and P. Gerstoft, "A sparse equivalent source method for near-field acoustic holography," J. Acoust. Soc. Am. 141(1), 532–542 (2017).
11. P. Simard and J. Antoni, "Acoustic source identification: Experimenting the l1 minimization approach," Appl. Acoust. 74, 974–986 (2013).
12. H. F. Alqadah and N. Valdivia, "Near-field electromagnetic holography for arbitrary surfaces using sparse regularization," in International Conference on Electromagnetics in Advanced Applications (ICEAA), Torino, Italy (2013).
13. E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM J. Sci. Comput. 31(2), 890–912 (2008).
14. P. C. Hansen and D. P. O'Leary, "The use of the L-curve in the regularization of discrete ill-posed problems," SIAM J. Sci. Comput. 14(6), 1487–1503 (1993).
15. J. Nasehi Tehrani, A. McEwan, C. Jin, and A. van Schaik, "L1 regularization method in electrical impedance tomography by using the L1-curve (Pareto frontier curve)," Appl. Math. Model. 36, 1095–1105 (2012).
16. P. Gerstoft, A. Xenaki, and C. F. Mecklenbräuker, "Multiple and single snapshot compressive beamforming," J. Acoust. Soc. Am. 138(4), 2003–2014 (2015).
17. P. C. Hansen, "Deconvolution and regularization with Toeplitz matrices," Numer. Algorithms 29, 323–378 (2002).
18. P. J. Davis, Circulant Matrices (Chelsea, New York, 1994).
19. S. Boyd and L. Vandenberghe, Convex Optimization (Cambridge University Press, New York, NY, 2004).
20. Y. Zhang, J. Yang, and W. Yin, "YALL1: Your ALgorithms for L1," available at yall1.blogs.rice.edu (Last viewed 1 July 2017).
21. J. Antoni, "A Bayesian approach to sound source reconstruction: Optimal basis, regularization, and focusing," J. Acoust. Soc. Am. 131(4), 2873–2890 (2012).
22. A. Pereira, J. Antoni, and Q. Leclère, "Empirical Bayesian regularization of the inverse acoustic problem," Appl. Acoust. 97, 11–29 (2015).
23. J.-H. Thomas, V. Grulier, S. Paillasseur, J.-C. Pascal, and J.-C. Leroux, "Real-time near-field acoustic holography for continuously visualizing non-stationary acoustic fields," J. Acoust. Soc. Am. 128(1), 3554–3567 (2010).
24. P. R. Stepanishen, "Transient radiation from pistons in an infinite baffle," J. Acoust. Soc. Am. 49(5), 1629–1638 (1971).
25. G. H. Golub, M. Heath, and G. Wahba, "Generalized cross-validation as a method for choosing a good ridge parameter," Technometrics 21(2), 215–223 (1979).
26. A. Ross and G. Ostiguy, "Propagation of the initial transient noise from an impacted plate," J. Sound Vib. 301(1–2), 28–42 (2007).
27. A. D. Pierce, Acoustics: An Introduction to Its Physical Principles and Applications (McGraw-Hill, New York, 1981).
28. J. Huang, C. R. Berger, S. Zhou, and J. Huang, "Comparison of basis pursuit algorithms for sparse channel estimation in underwater acoustic OFDM," in OCEANS IEEE, Sydney, Australia (2010).