During the past decade, several publications have described the use of compressive sensing principles to extend the frequency range supported by a given irregular microphone array for near-field acoustic holography. The applied numerical source model has typically been of the type used for the equivalent source method, i.e., a mesh of point sources, and a one-norm regularized inverse problem has been solved using a very stable but slow interior-point optimization algorithm. A few publications have investigated the use of simpler and faster iterative algorithms. The present paper gives a brief description of five such iterative algorithms and compares their performance with that of the interior-point algorithm based on a set of simulated measurements. A particular focus is on the suitability for industrial applications. Finally, an optimal choice of methodology is discussed based on the presented limited set of simulated tests.

Near-field acoustic holography is a powerful technique for reconstructing in three dimensions all parameters of the sound field near a sound source based on a near-field array measurement. Many different methods have been developed during the past 35 years, for example, to support sources of arbitrary geometry, to allow measurement over only a part of the source surface, to extend the frequency range, etc. An overview will not be given here because it can be found in several of the references.1–3 

All methods need to solve an ill-conditioned inverse problem in order to obtain a model of the sound source or sound field, supporting a three-dimensional (3D) sound-field reconstruction up to the source surface. In the present paper, a source model of the type known from the equivalent source method (ESM)4–6 is applied, and the inverse problem is solved using compressive sensing (CS)1–3,7–10 methods in order to extend the useful frequency range and improve accuracy. ESM models the sound radiated from a source surface by a layer of monopole point sources on a retracted surface (some small distance inside the physical source) radiating into free field. The complex source amplitudes must be determined based typically on sound pressure data measured by a microphone array near the source surface. Since the mesh spacing of the source model must be less than half of the acoustic wavelength, the inverse problem of determining the source amplitudes from the measured data is typically heavily underdetermined, in particular at high frequencies. Following the principles of CS, an accurate solution can be obtained anyway provided that (1) the sound field can be represented well by a sparse vector of source amplitudes and, (2) an irregular array geometry is applied providing a sufficient degree of non-redundancy in the matrix of transfer functions between the model sources and the microphones.1 The inverse problem must then be solved using a method that enforces sparsity in the solution vector, which is in practice accomplished by use of a penalty (regularization) term representing the one-norm of the solution vector.

In the majority of the existing publications on the subject,1–3,7–10 the inverse problem has been solved using the interior-point convex optimization algorithm implemented in the publicly available MATLAB toolbox CVX (MATLAB library for convex optimization).11 Two main drawbacks are associated with that solution: (a) The method is quite slow when applied to large scale problems and (b) source code is difficult and/or expensive to get for commercial applications. The method also has the problem of being sensitive to the selection of a constraint parameter, which should represent the noise floor. For that parameter selection there is no fast, automated solution. The present paper therefore investigates the possibilities for using much simpler and potentially faster iterative algorithms for solution of the one-norm regularized inverse problems. Two such algorithms have been investigated in earlier publications [iterative zoom-out-thresholding algorithm (IZOTA)2,8,12 and iteratively reweighted least squares (IRLS)12–15], but no comparison exists covering the many existing alternatives. Focus will be on the following areas, which are important in industrial applications:

  • How stable/robust are the different algorithms, avoiding manual selection of critical parameters?

  • How fast are the algorithms? In industrial applications, many measurements must be processed, each including many frequency lines.

  • What happens if a source cannot be represented by a truly sparse solution vector? Real sources have irregular geometries and continuously distributed sound radiation, although it is often concentrated in compact regions.

  • What is the optimal measurement distance, and how sensitive is the method to the applied distance? In several industrial applications, measurements cannot be made very close to the source.

  • Accurate sound-power ranking of partial sources is a primary application. Thus, the ability to calculate the sound-field parameters at specific points with high accuracy is less important.

The paper is organized as follows: Sec. II contains a brief theoretical introduction of all the methods to be investigated. Section III presents simulated measurements on monopole point sources and on a baffled, point-driven plate. Finally, Sec. IV contains a discussion and Sec. V presents the conclusions.

Table I contains a list of the main acronyms to be used in the paper.

TABLE I.

List of acronyms.

CS  Compressive sensing 
ESM  Equivalent source method 
GCV  Generalized cross validation 
CVX  MATLAB library for convex optimization 
RIP  Restricted isometry property 
L1CVX  one-norm compressive sensing algorithm using MATLAB library for convex optimization 
WBH  Wideband holography 
ISTA  Iterative shrinkage-thresholding algorithm 
FISTA  Fast iterative shrinkage-thresholding algorithm 
IZOTA  Iterative zoom-out-thresholding algorithm 
IRLS  Iteratively reweighted least squares 
YALL1  MATLAB library for one-norm problems 

All the sound-field reconstruction methods to be described briefly in the present section are based on the source model typically applied in the ESM:4–6 a mesh of monopole point sources on a retracted surface inside the physical source. Figure 1(a) illustrates a patch type of measurement, where a small, planar microphone array covers a partial source surface, and the source model represents the sound radiation from the underlying source surface, plus a section just outside. Figure 1(b) shows a simplified, approximate, planar source model, intended to represent well the sound field in a limited region around the array. The approximate planar source model avoids the need to know precisely the source geometry.

FIG. 1.

Illustration of two alternative setups for patch ESM.


In the following, we will be using a complex time-harmonic representation of data with an implicit time factor e^{jωt}, where j is the imaginary unit, ω is the angular frequency, and t is time. Microphone pressure data can be obtained, for example, from a fast Fourier transform (FFT) of a single time record or as a principal component from an averaged cross-spectral matrix. With A_mi representing the sound pressure at microphone m due to a unit excitation of monopole number i, the requirement that the modeled sound pressure at microphone m must equal the measured pressure p_m can be written as

\sum_{i=1}^{I} A_{mi}\, q_i = p_m, \quad m = 1, \ldots, M    (1)

Here, I is the number of point sources, and q_i, i = 1, 2, …, I, are the complex amplitudes of these sources. Equation (1) can be rewritten in matrix-vector notation as

A q = p    (2)

where A is an M × I matrix containing the quantities A_mi, p is a vector with elements p_m, and q is a vector with elements q_i. Equation (2) represents an inverse problem to be solved for the complex source amplitudes in q, and once that has been done, the 3D radiated sound field can be reconstructed. Typically, the number of point sources will be much larger than the number of microphones, in particular at high frequencies, making Eq. (2) highly underdetermined. In addition, this kind of inverse problem is by nature ill-conditioned because it will attempt a reconstruction of wave components that decay away from the source surface.
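As a concrete illustration of how the matrix in Eq. (2) is assembled, the sketch below builds a monopole transfer matrix in Python/NumPy. It is a minimal sketch rather than the authors' implementation: the free-field Green's function e^{-jkr}/(4πr) is assumed as the transfer function of a unit-amplitude monopole, and the function and variable names (esm_transfer_matrix, mic_pos, src_pos) are ours.

    import numpy as np

    def esm_transfer_matrix(mic_pos, src_pos, k):
        """Monopole transfer matrix A (M x I) between I equivalent sources and M microphones.

        mic_pos : (M, 3) array of microphone coordinates [m]
        src_pos : (I, 3) array of equivalent-source coordinates [m]
        k       : acoustic wavenumber omega / c [rad/m]

        A free-field Green's function exp(-1j*k*r) / (4*pi*r) is assumed for a
        unit-amplitude monopole, consistent with the e^{j*omega*t} convention.
        """
        # Pairwise distances between microphones and equivalent sources
        r = np.linalg.norm(mic_pos[:, None, :] - src_pos[None, :, :], axis=-1)
        return np.exp(-1j * k * r) / (4.0 * np.pi * r)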

Because of these factors, the problem needs regularization to provide a meaningful solution. Tikhonov regularization, which solves the following minimization problem with a penalty on the solution two-norm,

\min_{q} \left\{ \| p - A q \|_2^2 + \theta\, \| q \|_2^2 \right\}    (3)

is known to work well somewhat below the frequency f_hw of half-wavelength average microphone spacing, but to fail at the higher frequencies. When frequency increases above that limit, the minimization of the solution two-norm in Eq. (3) will favor solutions that represent well the measured pressure at the microphones, but minimum pressure elsewhere.2,8 With short measurement distances, the represented pressure can become very small between the microphones at high frequencies, although the true pressure distribution is smooth across the array area. Sound power values will therefore be increasingly underestimated toward higher frequencies, and ghost sources will appear in the estimated source models. Important advantages of Tikhonov regularization are the existence of a simple analytic solution and the availability of methods (generalized cross validation, L-curve, etc.) to compute an optimal value of the regularization parameter θ for given noisy input data p. Typically, the analytic solution is calculated through application of a singular value decomposition (SVD) of the matrix A. Manual selection of θ can then be through a specified applicable dynamic range D in decibels: Only data related to singular values down to D decibels below the largest singular value will be reconstructed.
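The SVD-based Tikhonov solution can be sketched as follows. The mapping from the dynamic range D to θ used here, θ = (s_max 10^{-D/20})^2, is an assumption chosen so that the regularized inverse cuts off roughly D dB below the largest singular value; it is not necessarily the exact rule used in the paper.

    import numpy as np

    def tikhonov_svd(A, p, D=30.0):
        """Tikhonov-regularized solution of A q = p, Eq. (3), via the SVD of A.

        D is the applied dynamic range in dB.  The mapping from D to theta,
        theta = (s_max * 10**(-D/20))**2, is an assumption for illustration.
        """
        U, s, Vh = np.linalg.svd(A, full_matrices=False)
        theta = (s[0] * 10.0 ** (-D / 20.0)) ** 2
        # Regularized inverse singular values s / (s^2 + theta) applied to U^H p
        coeff = (s / (s ** 2 + theta)) * (U.conj().T @ p)
        return Vh.conj().T @ coeff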

Many recent publications1–3,7–10 have shown that replacing the Tikhonov regularization by a sparsity promoting one-norm regularization,

\min_{q} \left\{ \| p - A q \|_2^2 + \theta\, \| q \|_1 \right\}    (4)

can produce good results at frequencies well above f_hw, provided an adequate irregular array geometry has been applied and provided there is a (close to) sparse solution vector q. This technique is of the CS type. Chardon et al.7 were the first to apply the CS methodology for the solution of acoustic inverse problems using a plane-wave model of plate vibration. The use of CS with an ESM source model was first described by Hald8 and has been treated later in several publications.1,2,9,10

An important issue with practical use of the formulation (4) is the determination of the regularization parameter θ. Reference 16 describes a method that can identify the solution as a function of θ in an efficient way. In the present paper, however, a simple method based on a specified dynamic range D in decibels will be tested. Let A = U S V^H = Σ_i s_i u_i v_i^H be an SVD of A with singular values s_i, left singular vectors u_i, and right singular vectors v_i, where H denotes the Hermitian (conjugate) transpose. Consider a hypothetical measured pressure vector p = y u_j containing only a component related to singular value number j, which we assume to be D decibels below the largest singular value s_max: s_j = s_max 10^{-D/20}. Reduction of the squared residual in Eq. (4) from ||p||_2^2 = |y|^2 to 0 requires the solution vector q = (y/s_j) v_j. Cutoff will happen around singular value s_j if we require the increase in the regularization term θ||q||_1 to equal the reduction ||p||_2^2 of the squared residual. This leads to the expression θ = s_max 10^{-D/20} ||p||_2 / ||v_j||_1. The one-norm ||v_j||_1 attains its maximum value of √I when all components of v_j have equal amplitude. For all the simulated measurements of the present paper, all one-norms ||v_i||_1 have been between 0.56√I and √I, with an average value around 0.86√I. Using that average value, we get the following expression, which has been used throughout this paper when solving the one-norm regularized problem of Eq. (4):

\theta = \frac{s_{\max}\, 10^{-D/20}\, \| p \|_2}{0.86\, \sqrt{I}}    (5)

Considering a realistic case where p has components along many of the vectors u_i, things are more complicated because the one-norm of q is not additive over its components along the vectors v_i. Equation (5) seems, however, to work well for the simulated measurements to be presented.
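For reference, Eq. (5) translates directly into a few lines of code (a sketch; the factor 0.86√I is the empirical average one-norm quoted above, and the function name is ours):

    import numpy as np

    def theta_from_dynamic_range(A, p, D=30.0):
        """Regularization parameter of Eq. (5) from a specified dynamic range D (dB)."""
        s_max = np.linalg.norm(A, 2)      # largest singular value of A
        I = A.shape[1]                    # number of equivalent sources
        return s_max * 10.0 ** (-D / 20.0) * np.linalg.norm(p) / (0.86 * np.sqrt(I))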

The regularization parameter θ in Eq. (4) can be avoided by solving instead the following equivalent constrained one-norm minimization problem:

\min_{q} \| q \|_1 \quad \text{subject to} \quad \| p - A q \|_2 \le \chi    (6)

Here, instead, the upper limit χ on the two-norm of the residual vector p − Aq must be specified. A suitable value of χ would be the noise floor in the measurement, but the ability of the applied source model to represent the measured sound field must also be considered. If, for example, there are significant reflections from the surroundings, a rather large value may be required. For that reason, it was suggested in Ref. 2 to replace the constraint on the residual two-norm in Eq. (6) by a constraint on the gradient of the squared two-norm f(q) of the residual vector

\min_{q} \| q \|_1 \quad \text{subject to} \quad \| \nabla f(q) \|_2 \le \varepsilon\, \| \nabla f(0) \|_2, \qquad f(q) \equiv \| p - A q \|_2^2    (7)

The gradient will be small near a minimum, even when the function value f is not. In this paper, however, Eq. (6) will be used with χ set to an estimate of the noise floor specified through an applied dynamic range D,

\chi = 10^{-D/20}\, \| p \|_2    (8)

Use of CVX to solve Eq. (6) constitutes the reference method in the present paper, and it will be referred to as “L1CVX.” All results to be presented have been produced using D = 25, except the data shown in Fig. 8, where the choice of D is investigated.
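For comparison, the constrained problem of Eq. (6) can be posed in a few lines with a convex-optimization front end. The sketch below uses the Python package cvxpy instead of the MATLAB CVX toolbox applied in the paper, sets χ from Eq. (8), and relies on cvxpy's support for complex variables; it is an illustrative sketch, not the reference implementation.

    import numpy as np
    import cvxpy as cp

    def l1_constrained(A, p, D=25.0):
        """Solve Eq. (6): minimize ||q||_1 subject to ||p - A q||_2 <= chi.

        chi is set from the applied dynamic range D as in Eq. (8).  This mirrors
        the L1CVX reference method, but uses cvxpy rather than MATLAB CVX.
        """
        chi = 10.0 ** (-D / 20.0) * np.linalg.norm(p)
        q = cp.Variable(A.shape[1], complex=True)
        prob = cp.Problem(cp.Minimize(cp.norm1(q)),
                          [cp.norm(p - A @ q, 2) <= chi])
        prob.solve()
        return q.value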

Several of the iterative methods to be dealt with subsequently [iterative shrinkage-thresholding algorithm (ISTA), IZOTA, and fast iterative shrinkage-thresholding algorithm (FISTA)] use steepest-descent steps in the negative gradient direction of the squared residual function f(q). The gradient of a real-valued function on a complex domain is not a trivial concept, so it needs some special treatment. Here, we just define the gradient as

\nabla f(q) \equiv 2\, ( A q - p )^{H} A    (9)

Using that definition, one can quite easily verify that

f(u) = f(q) + \operatorname{Re}\{ \nabla f(q)\, (u - q) \} + (u - q)^{H} A^{H} A\, (u - q)    (10)

Clearly, −∇f(q)^H is the steepest descent direction from q, and for a positive, real, and sufficiently small step s we have f(q + s d) ≈ f(q) + s ∇f(q) d, when d is in the positive or negative gradient direction. Equation (10) also shows that a point q with ∇f(q) = 0 must be a global minimizer for the function f. A mathematically more stringent treatment of the concept of gradients of real-valued functions over a complex domain is given, for example, in Ref. 17.
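With this convention, the residual function and its gradient are easily coded, and the first-order behaviour implied by Eq. (10) can be checked numerically. The sketch below uses random test data and our own variable names; the returned gradient is the column vector ∇f(q)^H.

    import numpy as np

    def f_and_grad(A, p, q):
        """Squared residual f(q) = ||p - A q||_2^2 and the column vector
        grad = 2 A^H (A q - p), i.e. the Hermitian transpose of the row-vector
        gradient in Eq. (9).  -grad is the steepest-descent direction."""
        r = A @ q - p
        return np.real(r.conj() @ r), 2.0 * A.conj().T @ r

    # Illustration of the first-order relation implied by Eq. (10):
    # f(q + s*d) ~ f(q) + s*grad^H d for a small real step s along d = -grad.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 20)) + 1j * rng.standard_normal((6, 20))
    p = rng.standard_normal(6) + 1j * rng.standard_normal(6)
    q = rng.standard_normal(20) + 1j * rng.standard_normal(20)
    f0, g = f_and_grad(A, p, q)
    s = 1e-6
    f1, _ = f_and_grad(A, p, q - s * g)           # small step along -grad
    print(f1 - f0, -s * np.real(g.conj() @ g))    # the two numbers should agree closely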

1. ISTA

ISTAs17–20 constitute one group of iterative algorithms. To the authors' knowledge, there are presently no publications describing applications to acoustic inverse problems, and since other applications typically involve only real variables, a few details will be given here.

Starting with an approximation q^(k−1), the kth iterate q^(k) (k ≥ 1) is found as the vector u that minimizes the function F(u) = f(u) + θ||u||_1 of Eq. (4), but with f(u) replaced by a local approximation f̂(u) to f(u) at q^(k−1) based on Eq. (10),

\hat f(u) = f(q^{(k-1)}) + \operatorname{Re}\{ \nabla f(q^{(k-1)})\, (u - q^{(k-1)}) \} + \lambda_{\max}\, \| u - q^{(k-1)} \|_2^2    (11)

Here, λ_max = s_max^2 is the largest eigenvalue of the matrix A^H A, implying that for any vector x we have x^H A^H A x ≤ λ_max ||x||_2^2 and therefore f(u) ≤ f̂(u) for any u. The strong second-order term in Eq. (11) will limit the step size, and its omni-directional form leads to the existence of an analytic minimizer for the function F̂(u) ≡ f̂(u) + θ||u||_1, as we shall see. To find the minimizer we introduce the function

G(u) \equiv \hat F(u) - f(q^{(k-1)}) + \frac{\| \nabla f(q^{(k-1)}) \|_2^2}{4 \lambda_{\max}}    (12)

which has the same minimizer as F ̂ ( u ) , since only constant terms have been subtracted and added. Through simple algebraic manipulations one can show that

G(u) = \lambda_{\max}\, \| u - \tilde q^{(k)} \|_2^2 + \theta\, \| u \|_1    (13)

where q̃^(k) is a steepest descent step from q^(k−1),

\tilde q^{(k)} = q^{(k-1)} - \frac{1}{2 \lambda_{\max}}\, \nabla f(q^{(k-1)})^{H}    (14)

Thus, first a steepest descent step is taken as described by Eq. (14), and subsequently the next iterate q^(k) is identified as the minimizer of the function G(u) in Eq. (13). Effectively, the solution one-norm is minimized within some distance from the steepest descent solution. The minimizer will be shown in the Appendix to be

u_i = q_i\, \max\!\left( 1 - \frac{\theta}{2 \lambda_{\max} | q_i |},\; 0 \right)    (15)

where q_i and u_i are the components of the vectors q̃^(k) and u, respectively. The operation expressed through Eq. (15), which minimizes G(u), is known as the proximity operation.
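In code, the proximity operation of Eq. (15) is a componentwise complex soft-thresholding, which can be written compactly as below (a sketch; the small floor guarding against division by zero is our addition):

    import numpy as np

    def soft_threshold(q_tilde, theta, lam_max):
        """Proximity operation of Eq. (15): componentwise complex soft-thresholding.

        Each component keeps its phase while its modulus is reduced by
        theta / (2 * lam_max), or set to zero if it falls below that threshold.
        """
        mag = np.abs(q_tilde)
        scale = np.maximum(1.0 - theta / (2.0 * lam_max * np.maximum(mag, 1e-300)), 0.0)
        return q_tilde * scale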

The iteration can be started with q^(0) equal to a Tikhonov solution or a vector of zeros. Comparison has shown very little difference. In the present paper, a Tikhonov solution was used. Calculation of that solution based on an SVD of A also provides s_max and λ_max = s_max^2, the first of which was used to obtain the regularization parameter θ from Eq. (5) with D = 30.

It is shown in Ref. 18 that sublinear convergence of the regularized function F of Eq. (4) to its minimum value F* can be guaranteed, meaning that F − F* will have an upper bound that is inversely proportional to the iteration count k. The ISTA algorithms are, in general, known to exhibit very slow convergence, like the pure steepest-descent methods. With θ = 0, the above ISTA algorithm is just a steepest-descent method. To increase the speed, several improved algorithms have been developed, such as the FISTA algorithm,18,19 which will be described in Sec. II B. FISTA is a two-step algorithm, meaning that each new iterate q^(k+1) is computed based on the two latest iterates, q^(k) and q^(k−1), while ISTA is a one-step algorithm with each new iterate computed from only the latest iterate.

2. IZOTA

IZOTA must start with a solution vector of zeros. Otherwise, it has some similarities with the ISTA methods, being a one-step iterative method with a steepest-descent step for the residual function f and a thresholding step at each iteration, but with both parts performed in a very different way:

  • Steepest-descent step: This step is extended to the minimum of the residual function f in the negative gradient direction.

  • Thresholding: All elements in the solution vector below a certain threshold are set to zero. In iteration k, the threshold is D_k decibels below the largest element. D_k is initially set to a very small number, D_0, and for each iteration it is increased by an amount ΔD around 1 dB. Thus, the method performs a “zoom-out thresholding,” which is the reason for the name used for the method here. Eventually, the threshold becomes so low that it has no effect, so it can be skipped and a switch can optionally be made to a conjugate gradient algorithm.

The initial use of a very narrow dynamic range will represent as much as possible of the measured sound by very few sources, i.e., promote sparse solutions. Optionally finalizing with a conjugate gradient algorithm will make sure that the residual function f is minimized, requiring in most cases a more distributed source component. In practice, an ESM source model of a real industrial source will almost never have a truly sparse solution anyway. Thus, the method guarantees a solution vector that represents the measured data well (i.e., with a minimized residual function f), while focusing on solutions with minimum one-norm during the initial stage. The method does not, however, provide a minimizer of the one-norm regularized function F. The first gradient step can be shown to be calculated much like the output from traditional frequency-domain beamforming/focusing, which has low resolution at low frequencies. Closely spaced point sources not resolvable by beamforming will therefore initially be partially represented by a single source at the peak in the beamformed map. That “central source” typically remains throughout the iteration because a minimum of F is not subsequently enforced. Since beamforming has poor resolution only at low frequencies, this turns out to be a problem only at low frequencies. Further details of the algorithm can be found in Refs. 2 and 8, where it was given the more generic name wideband holography (WBH).

All results in the present paper have been produced using D_0 = 0.1 dB and ΔD = 1.0 dB, with no switch made to a conjugate gradient algorithm. The iteration was stopped when the two-norm of the gradient vector became smaller than ε = 0.01 times the gradient norm at the starting point.
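A compact sketch of the IZOTA iteration, written from the description above rather than taken from the authors' code, could look as follows; the exact line search follows directly from the quadratic residual function, while details such as the maximum iteration count are our assumptions.

    import numpy as np

    def izota(A, p, D0=0.1, dD=1.0, eps=0.01, max_iter=200):
        """Sketch of the IZOTA / WBH iteration (not the authors' code).

        Each iteration takes a steepest-descent step on f(q) = ||p - A q||_2^2,
        extended to the minimum along the negative gradient direction, and then
        zeroes all solution components more than D_k dB below the largest one,
        with D_k growing from D0 by dD per iteration.  The iteration stops when
        the gradient two-norm drops below eps times its starting value.
        """
        q = np.zeros(A.shape[1], dtype=complex)
        Dk = D0
        g0_norm = None
        for _ in range(max_iter):
            r = p - A @ q                      # residual
            g = -2.0 * A.conj().T @ r          # column-vector gradient, cf. Eq. (9)
            g_norm = np.linalg.norm(g)
            if g0_norm is None:
                g0_norm = g_norm
            if g_norm < eps * g0_norm:
                break
            d = -g                             # steepest-descent direction
            Ad = A @ d
            alpha = np.real(r.conj() @ Ad) / np.real(Ad.conj() @ Ad)  # exact line search
            q = q + alpha * d
            # Zoom-out thresholding: keep only components within Dk dB of the peak
            thresh = np.abs(q).max() * 10.0 ** (-Dk / 20.0)
            q[np.abs(q) < thresh] = 0.0
            Dk += dD
        return q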

Several extensions of ISTA have been developed to speed up convergence, such as the two-step iterative shrinkage-thresholding algorithm (TwIST)21 and FISTA,18,19 which are both two-step methods. For FISTA, convergence to a solution of Eq. (4) has been proven, see Refs. 18 and 19, and according to tests in the same references it is the faster of the two methods. The present paper will therefore only deal with FISTA.

The main difference of FISTA compared with ISTA is that the steepest descent step at iteration k (k ≥ 1) is taken from a different, carefully chosen starting point y^(k),

\tilde q^{(k)} = y^{(k)} - \frac{1}{2 \lambda_{\max}}\, \nabla f(y^{(k)})^{H}    (16)

The next iterate q^(k) is then obtained as the minimizer of the function G(u) in Eq. (13) with q̃^(k) as input. With q^(k) and q^(k−1) available, the starting point for the next steepest descent step is calculated as a linear combination of these two,

y^{(k+1)} = q^{(k)} + \frac{t_k - 1}{t_{k+1}} \left( q^{(k)} - q^{(k-1)} \right)    (17)

where the parameters t_{k+1} are obtained from the iteration formula

t_{k+1} = \frac{1 + \sqrt{1 + 4 t_k^2}}{2}    (18)

Based on an initial solution vector q^(0) (for example, a Tikhonov solution or a vector of zeros), the iteration starts with y^(1) = q^(0) and t_1 = 1. The same regularization parameter θ applies as for ISTA. It was shown in Ref. 18 that convergence of F to its minimum value F* can be guaranteed and that F − F* will have an upper bound, which is inversely proportional to (k + 1)^2.

All results in the present paper have been obtained with θ calculated from Eq. (5) with D = 30 .
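For illustration, the FISTA update of Eqs. (15)–(18) can be transcribed as below (a sketch, not the authors' implementation; the zero or Tikhonov start and the fixed iteration count follow the choices described above).

    import numpy as np

    def fista(A, p, theta, n_iter=300, q0=None):
        """Sketch of FISTA for the one-norm regularized problem of Eq. (4).

        theta is the regularization parameter, e.g. from Eq. (5); q0 may be a
        Tikhonov solution or None for a zero start.
        """
        lam_max = np.linalg.norm(A, 2) ** 2       # largest eigenvalue of A^H A
        q = np.zeros(A.shape[1], dtype=complex) if q0 is None else q0.astype(complex)
        y, t = q.copy(), 1.0
        for _ in range(n_iter):
            # Steepest-descent step from y, Eq. (16)
            q_tilde = y - (A.conj().T @ (A @ y - p)) / lam_max
            # Proximity operation (complex soft-thresholding), Eq. (15)
            mag = np.abs(q_tilde)
            q_new = q_tilde * np.maximum(1.0 - theta / (2.0 * lam_max * np.maximum(mag, 1e-300)), 0.0)
            # Momentum update, Eqs. (17) and (18)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
            y = q_new + ((t - 1.0) / t_new) * (q_new - q)
            q, t = q_new, t_new
        return q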

The IRLS12–15,22,23 method solves in each iteration a Tikhonov regularized problem like Eq. (3), but between the iterations an increasing weight is put on the areas of the ESM model with the highest source amplitudes. Thus, an initially strongly underdetermined problem will slowly approach an overdetermined problem, if a sufficiently sparse solution exists. The algorithm has been shown to converge to a sparse solution when one exists.23

The relation to the one-norm regularized problem of Eq. (4) can be seen by rewriting that equation as follows:

\min_{q} \left\{ \| p - A q \|_2^2 + \theta \sum_{i=1}^{I} W_i^2\, | q_i |^2 \right\}    (19)

where the weights W_i must fulfil the equation W_i^2 |q_i| = 1, i.e., W_i = |q_i|^{−1/2}. Thus, to calculate the weights, the solution must be known! The recursive algorithm is based on using the solution vector q^(k−1) from the latest approximation to calculate the weights W_i^(k) needed for calculation of the next iterate

W_i^{(k)} = \left| q_i^{(k-1)} \right|^{-1/2}    (20)

Arranging the weights W_i^(k) on the diagonal of a diagonal matrix W^(k) allows Eq. (19) to be rewritten as

\min_{q} \left\{ \| p - A q \|_2^2 + \theta\, \| W^{(k)} q \|_2^2 \right\}    (21)

which has standard Tikhonov form. For k > 1, elementary sources with a small amplitude in q^(k−1) will be strongly penalized by the regularization term in the subsequent iteration. For k = 1, W^(1) is set equal to the unit diagonal matrix, W^(1) = I. Assuming the underlying inverse problem of Eq. (2) to be underdetermined, the analytic solution to Eq. (21) is

q^{(k)} = ( W^{(k)} )^{-1} ( G^{(k)} )^{H} \left[ G^{(k)} ( G^{(k)} )^{H} + \theta\, I \right]^{-1} p    (22)

where G^(k) ≡ A (W^(k))^{−1}. The calculation of Eq. (22) can conveniently be performed through an SVD of the matrix G^(k), supporting manual selection of θ through a specified dynamic range D of singular values. Automated methods like generalized cross validation or the L-curve can alternatively be used to find an optimal value of θ.

During the progress of the iteration, the focusing of the source power in smaller and smaller areas will have the effect of reducing the one-norm ||q^(k)||_1 of the solution vector. The iteration is stopped when the decay between two subsequent iterates becomes too small, i.e., when

\| q^{(k)} \|_1 > \rho\, \| q^{(k-1)} \|_1    (23)

for some ρ < 1. All results in the present paper have been produced using ρ = 0.9992 and a value of θ corresponding to an applied 30 dB dynamic range of singular values of G^(k).
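A sketch of the IRLS iteration described by Eqs. (19)–(23) is given below. The mapping from the dynamic range D to θ, here θ = (s_max 10^{-D/20})^2 for the singular values of G^(k), and the small floor that keeps the weights finite are our assumptions for illustration.

    import numpy as np

    def irls(A, p, D=30.0, rho=0.9992, max_iter=50, floor=1e-12):
        """Sketch of the IRLS iteration of Eqs. (19)-(23) (not the authors' code).

        Each iteration solves a Tikhonov problem for the reweighted matrix
        G = A W^{-1}, with theta set D dB below the largest singular value of G.
        """
        I = A.shape[1]
        w_inv = np.ones(I)                    # 1 / W_i, starting from W = identity
        q_prev = None
        for _ in range(max_iter):
            G = A * w_inv[None, :]            # G^(k) = A W^{-1}
            U, s, Vh = np.linalg.svd(G, full_matrices=False)
            theta = (s[0] * 10.0 ** (-D / 20.0)) ** 2
            z = Vh.conj().T @ ((s / (s ** 2 + theta)) * (U.conj().T @ p))
            q = w_inv * z                     # back to the unweighted amplitudes
            if q_prev is not None and np.sum(np.abs(q)) > rho * np.sum(np.abs(q_prev)):
                break                         # one-norm decay too small, Eq. (23)
            q_prev = q
            w_inv = np.sqrt(np.maximum(np.abs(q), floor))   # 1 / W_i = |q_i|^{1/2}
        return q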

Reference 24 describes an alternating direction algorithm for solution of the one-norm regularized problem of Eq. (4). It is a first-order primal-dual algorithm applied to an augmented Lagrangian function of the problem. Version 1.4 of the publicly available MATLAB library YALL1 was downloaded from the website given in Ref. 25 and used for the simulated measurements to be presented. Earlier versions require orthonormal rows of the matrix A, while the downloaded version supports non-orthonormal rows. YALL1 found, in some cases, a useful solution, but as will be apparent from the results, it was very unstable. The YALL1 User's Guide recommends orthonormalization of the rows of A, and that worked well at the high frequencies, where the rows are already quite linearly independent. At the low frequencies, however, there is a high degree of correlation between the rows, so orthonormalization would need some kind of regularization of its own, which is not trivial. Because of these complications, and because the results were not very good, no detailed description will be given.

The parameter settings used in YALL1 were opts.nonorth = 1 (non-orthonormal rows in A), opts.tol = 1e-3 (tolerance), opts.maxit = 300 (maximum iterations), and opts.rho = θ/2, where θ was obtained from Eq. (5) with D = 30.

Figure 2 shows the 60-element irregular array geometry with a 1 m diameter used for all simulated measurements. The average element spacing of the array is approximately 12 cm, which makes it suited for traditional ESM applications (with Tikhonov regularization) up to around f_Tikh = 0.85 f_hw ≈ 1.2 kHz. The array was optimized for minimum sidelobe level in beamforming applications up to 6 kHz, implying a maximum degree of ability to distinguish plane waves incident from different directions. This guarantees good restricted isometry properties (RIP),1 when the array is used at large distances. In Refs. 2 and 8, the optimal measurement distance for CS-based holography applications was argued to be between two and three times the average element spacing of the array. Shorter distances will, in some ways, improve the RIP properties because resolution is improved, but the risk of compact complex sources (with many partial sources) being captured effectively only by a small sub-array increases. That sub-array may not have the irregular array properties required for CS (or beamforming/focusing) to work properly at high frequencies. Section III C presents data indicating this to be true, but more work is needed. Several industrial applications do not allow very small measurement distances anyway, for example, because of irregular source surfaces, or because of hot or rotating components. Use with distances in the recommended range leads to similarities with the “generalized inverse beamforming” application from Ref. 13. The long distances will limit the resolution on the source surface and thus sacrifice point-wise reconstruction accuracy. Partial source sound power may, however, still be accurate. To investigate these aspects, two different measurement distances will be simulated for each source configuration, and the reconstruction error will be considered both point wise and for area-integrated sound power values.

FIG. 2.

Applied pseudo-random array geometry.


Simulated measurements on three different sources will be described: (1) a single monopole point source some short distance away from the plane of the ESM source model. Thus, there is no true sparse solution, but the sound field can be represented quite accurately by a sparse solution; (2) a similar setup, just with two coherent in-phase monopole sources. As opposed to the first case, this allows spatial resolution properties of the methods to be investigated; (3) a point-driven baffled plate, in which case there is no sparse solution providing good accuracy. But it is still of practical interest to see how the methods perform in that case.

With reference to Fig. 3(a), we consider a setup with a monopole point source located on the array axis at 28 cm distance from the array plane, while the source-model mesh is at 25.5 cm distance, and the sound field is reconstructed in a mapping plane 24 cm from the array plane. The reconstruction mesh has 51 columns and 51 rows with 1 cm spacing, covering a 0.5 m × 0.5 m area centered on the array axis. The source-model mesh is similar, i.e., with 1 cm spacing, but it is extended by six rows/columns in all four directions. In total, 63 × 63 = 3969 complex point-source amplitudes must be determined from the 60 measured complex sound-pressure values. Random noise was added to the complex microphone pressure data at a level 30 dB below the average sound pressure across the microphones. With 2.5 cm spacing between the real source and the plane of the ESM model there will be no true sparse solution, but the sound field can be represented quite well by a sparse solution. The number of source terms in the L1CVX solution larger than 1% of the largest term varied from 4 at 200 Hz to 54 at the maximum considered frequency of 6.4 kHz.

FIG. 3.

(Color online) (a) Illustration of setup for simulated measurements on a single monopole point source. (b) Sound intensity map at 4 kHz reconstructed by IZOTA. Contour interval is 2 dB and display range is 20 dB.


Figure 3(b) shows a contour plot of the sound intensity at 4 kHz reconstructed by IZOTA. All methods produced, in this case, almost identical maps. At 4 kHz, the average array-element spacing is 1.33 wavelengths, and the frequency is well above f_Tikh. Figure 4(a) shows the progress in the reconstruction of sound power at 4 kHz versus iteration count for the different methods. Sound power was calculated by area integration of maps like the one in Fig. 3(b) across the full mapping area. For graphical visibility, the L1CVX result is shown as a horizontal line, just like the value from the true sound field. The methods ISTA, FISTA, and IRLS all start with a Tikhonov regularized solution, producing a sound power estimate equal to 35.3 dB, while the sound power from the true intensity map is 41.2 dB. Figures 4(b) and 4(c) show the corresponding progress in the minimization of the relative residual two-norm ||p − Aq||_2/||p||_2 and the solution one-norm ||q||_1, respectively, at 4 kHz. The computational effort of the iterative methods cannot be judged based on only the number of iterations. IRLS requires a new SVD in each iteration, while ISTA, FISTA, and IZOTA require only matrix-vector multiplications. ISTA and FISTA require two such multiplications per iteration, while IZOTA requires one more to determine the step length to the minimum along the steepest descent direction. The very slow convergence of ISTA is apparent, and for that reason it will not be considered in the following. FISTA is much faster, but still slow as compared with the remaining methods. All subsequent FISTA results are based on 300 iterations. YALL1 is very unstable during the first approximately 130 iterations, but then it converges reasonably fast. The relative residual from L1CVX is seen to be high. A reduction would require a larger applied dynamic range D in Eq. (8), but then partial modeling of the noise starts to be a problem, as we shall see in Sec. III B. The iterative methods exhibit that conflict to a much smaller degree.

FIG. 4.

(Color online) Estimated sound power (top), relative residual two-norm (middle), and solution one-norm (bottom) at 4 kHz versus iteration count for the investigated methods.


Sound power spectra were calculated with 200 Hz intervals from 200 Hz to 6.4 kHz using all methods. Figure 5 shows the deviations 10 log10(W_calc/W_true) in decibels of all the reconstructed sound power spectra W_calc from the true power spectrum W_true. Thus, negative values represent underestimation. All spectra were calculated by area-integration of intensity maps. A Tikhonov solution (calculated with a fixed 30 dB dynamic range) is provided as a reference to illustrate the enhancement achieved by the CS methods. All CS methods are very accurate up to approximately 3.5 kHz, although FISTA would need more than 300 iterations to reach the accuracy of the other iterative methods. Above 3.5 kHz, all methods show a slowly increasing underestimation, in particular FISTA and IZOTA. At frequencies, where YALL1 converges, it also produces accurate estimates, but for approximately half of the frequencies it did not converge to a useful solution. Because of its high instability, it will be disregarded in the following.

FIG. 5.

(Color online) Error on the estimated sound power spectra: One monopole at 28 cm, source model at 25.5 cm and mapping plane at 24 cm distance from the array.


The accuracies/errors shown in Fig. 5 represent only the area-integrated results, not the intensity distributions, which contain, for example, the information about source positions. Figure 6 shows the relative average error on the intensity maps in percent, calculated as

(24)

where I_true,i is the true sound intensity normal to the mapping surface at calculation point i, and I_calc,i is the corresponding reconstructed intensity. The quite high relative errors between typically 40% and 50% are due to the long measurement distance and the reconstruction quite close to the source. These factors lead to distorted intensity maps, such as the peak shown in Fig. 3(b), which has circumferential ripples and is a bit narrower than it should be, but appears at the correct position. Such distortion will, in most cases, not be important in source ranking applications where only the sound power within sub-areas is important. For measurements on two point sources or more, position errors can also occur as we shall see in Sec. III B. Despite the rather large point by point errors, Fig. 5 shows that the sound power is accurately reconstructed. The relatively large low-frequency errors produced by L1CVX are due to a peak that is too narrow, as we will see in Fig. 8 for the case of two point sources. This error could be reduced by application of a slightly larger dynamic range D in Eq. (8), but as will be apparent in Sec. III B, this quickly leads to false peaks because a modeling of the noise starts being enforced. The slightly high low-frequency IZOTA errors could similarly be reduced through application of a smaller value of ε in the stopping criterion, but at the risk of modeling the noise.

FIG. 6.

(Color online) Relative average error in % on the intensity maps: One monopole at 28 cm, source model at 25.5 cm, and mapping plane at 24 cm distance from the array.


Figure 7 contains results from a simulated measurement, where the distances have been changed: The source model is now at 15 cm distance from the array, the monopole source is only 1 cm behind the source model, and the reconstruction/mapping plane is 8 cm from the array, i.e., midway between the monopole and the array. Clearly, the intensity reconstruction errors have been significantly reduced. Above approximately 3.7 kHz, the errors increase slightly, although the number of significant amplitudes in the L1CVX solutions remains well below 60 up to 6.4 kHz. The particularly high low-frequency errors of L1CVX seen in Fig. 6 have almost disappeared because the available ∼25 dB dynamic range supports a better reconstruction from the shorter distance. Overall, the CS methods produce very similar results/maps, although their regularization parameters are different and chosen in different ways. The regularization parameter value of Eq. (5) seems to work very well with Eq. (4), and it seems less critical than selection of the noise-floor constraint in Eq. (8) for the L1CVX solution. This must be because of the iterations being stopped after a limited number of steps, while CVX proceeds until a solution of Eq. (6) has been obtained with high accuracy.

FIG. 7.

(Color online) Relative average error in % on the intensity maps: One monopole at 16 cm, source model at 15 cm, and mapping plane at 8 cm distance from the array.


The main purpose of this simulated measurement is to demonstrate some important resolution properties at low frequencies. Thus, we use a setup similar to that of Fig. 3(a), the only difference being that there are now two sources 28 cm in front of the array plane at (x,y) coordinates (15,15) cm and (−15,−15) cm relative to the array axis. The source model is again at 25.5 cm distance and the mapping plane at 24 cm distance. Random noise was added to the complex microphone pressure data at a level 30 dB below the average sound pressure across the microphones.

Figure 8 contains contour plots of the sound intensity output from the different methods at 200 Hz. With the applied measurement distance, the two sources can just exactly be distinguished when using Tikhonov regularization with 30 dB dynamic range. This is not shown though. FISTA and IRLS provide significantly better resolution, and both methods correctly identify the locations of the two sources. That actually holds true over the entire frequency range. L1CVX results are shown for dynamic ranges D in Eq. (8) equal to 25 dB (as used otherwise in the paper) and 30 dB. With 25 dB range, two sources are identified, but they are slightly too close to each other. Increase of the range to 30 dB causes the source separation to be more correct, but the two sources get misplaced and ghost sources appear due to partial modeling of the added noise. Some noise realizations led to even higher-level ghosts. The IZOTA map has a high-level peak at the center position between the two real sources plus two lower-level sources in the correct directions, but too far apart. This is due to the previously mentioned allocation of source strength in the central area by the initial low-resolution gradient steps, the first being a conventional beamforming map. The central source is never removed because the algorithm does not subsequently enforce a simultaneous minimization of residual and solution one-norm. From 800 Hz and higher, the problem disappears in the present setup. According to experience, the problem is insignificant at frequencies above f Tikh . All methods will have limited low-frequency resolution in the presence of added noise, but the phenomenon is more pronounced for IZOTA than for the other methods.

FIG. 8.

(Color online) Sound intensity maps reproduced by the different methods at 200 Hz. The same scaling with 2 dB contour interval has been applied in all maps.


Figure 9 shows the error in decibels on the estimated sound power spectra for the full 0.5 m × 0.5 m mapping area. Disregarding the Tikhonov solution, the accuracy is very high between 800 Hz and 3.5 kHz as in the case of a single source, and above 3.5 kHz there is again a slowly increasing underestimation, which is somewhat faster in the present case. Above 800 Hz, all methods could easily resolve the two sources, and they produced almost identical intensity maps. Also, the progress had basically the same form as presented in Fig. 4.

FIG. 9.

(Color online) Error on the estimated sound power spectra: Two monopoles at 28 cm, source model at 25.5 cm, and mapping plane at 24 cm distance from the array.


Figure 10 shows the corresponding relative average error on the reconstructed intensity maps in percent as defined in Eq. (24). Again, the error is quite high despite the accurate area-integrated power levels documented in Fig. 9, the explanation being distorted maps as shown in Fig. 8. The reason for the particularly high error on the L1CVX result between 800 Hz and 3 kHz is the same as the reason given in relation to Fig. 6: The noise floor prevents an increased weight on residual reduction. Too much weight is therefore put on the minimization of the solution one-norm, producing too sharp and slightly misplaced peaks as seen in Fig. 8. Below 800 Hz, IZOTA generates the strong central source seen in Fig. 8, producing very large relative errors in the intensity maps. Above 800 Hz, the sources are correctly localized, and the area integrated sound power levels are very accurate, also for the two individual point sources as investigated in more depth in Refs. 2 and 8.

FIG. 10.

(Color online) Relative average error in % on the intensity maps: Two monopoles at 28 cm, source model at 25.5 cm, and mapping plane at 24 cm distance from the array.


As for the case of a single point source, the deviation between the true and the reconstructed intensity maps can be reduced significantly by reducing the measurement distance and by mapping closer to the array. Figure 11 shows intensity-map error spectra when using the same distances as used in the single-source case of Fig. 7. Except for large errors on the IZOTA maps below 600 Hz, the errors are only marginally larger with two sources than with one source. The large IZOTA errors below 600 Hz are due to the same source misplacement and distortion errors that started below 800 Hz with the larger measurement distance represented in Fig. 10.

FIG. 11.

(Color online) Relative average error in % on the intensity maps: Two monopoles at 16 cm, source model at 15 cm, and mapping plane at 8 cm distance from the array.


The aim of the simulated plate measurements is to show that the investigated methods can give quite good results even when the true source cannot be represented by a sparse model. As an example of a more distributed source, a baffled, center-driven, simply supported, 6 mm thick, 40 cm × 40 cm aluminum plate has been used. The coincidence frequency for the plate is at 2026 Hz. The vibration pattern was calculated using the formulation by Williams26 and, subsequently, the radiated sound field was obtained using the discretized Rayleigh integral, approximating the plate velocity distribution by 161 × 161 monopole point sources. This allowed the microphone sound-pressure values and the “true” pressure and particle velocity in a reconstruction plane 1 cm above the plate to be calculated. As for the simulated measurements on monopole point sources, random noise was added to the complex microphone pressure data at a level 30 dB below the average sound pressure across the microphones. The reconstruction mesh had 41 × 41 points with 1 cm spacing, covering exactly the plate area, and the array was placed 24 cm above the plate. A source model comprising 53 × 53 monopole point sources with 1 cm spacing was located 1 cm below the plate.
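The discretized Rayleigh integral used to generate the simulated pressures can be sketched as follows; the first Rayleigh integral for a baffled planar source is assumed, and the field points, plate mesh, and normal-velocity vector are placeholders for whatever plate model is used.

    import numpy as np

    def rayleigh_pressure(field_pts, plate_pts, v_n, dS, k, rho=1.21, c=343.0):
        """Discretized Rayleigh integral for a baffled planar source.

        p(r) = (j*omega*rho / (2*pi)) * sum_i v_n_i * exp(-1j*k*R_i) / R_i * dS,
        with the e^{j*omega*t} convention used in the paper.  field_pts (M, 3),
        plate_pts (N, 3) and the normal-velocity vector v_n (N,) are placeholders;
        rho and c are the air density and speed of sound.
        """
        omega = k * c
        R = np.linalg.norm(field_pts[:, None, :] - plate_pts[None, :, :], axis=-1)
        return (1j * omega * rho / (2.0 * np.pi)) * (np.exp(-1j * k * R) / R) @ (v_n * dS)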

The true and the IZOTA reconstructed intensities at 3 and 4 kHz are shown in Fig. 12. The other methods provide maps very much like the IZOTA maps. Up to around 3 kHz, the reconstructed intensity maps are all quite close to the true maps. Above 3 kHz, the reconstructed maps gradually become very distorted, as represented by the 4 kHz result in Fig. 12. It is perhaps surprising that quite good reconstruction accuracy can be obtained up to 3 kHz, even when using Tikhonov regularization.

FIG. 12.

(Color online) True (top row) and IZOTA reconstructed (bottom row) sound intensity maps at 3 kHz (left) and at 4 kHz (right). Display range is 20 dB with 2 dB contour interval. For each frequency, the same scale is used.


Figure 13 shows the deviations of all the reconstructed sound power spectra from the true power spectrum. All spectra were calculated by integration of sound intensity over the full mapping area. In general, all methods show a small underestimation up to around 3 kHz, with some consistent ripples. Above that frequency, the underestimation increases. Overall, the IRLS method seems more stable than the other algorithms, and perhaps surprisingly the L1CVX method shows significantly stronger underestimation than the iterative methods above 3 kHz. The two negative peaks in the error around 900 Hz and 1.6 kHz occur where there is a low radiation efficiency. These two frequencies are below coincidence, so there may be strong evanescent wave components, which are not picked up well at the quite long measurement distance. It is interesting then that the negative peaks produced by IRLS are not as deep.

FIG. 13.

(Color online) Error on the estimated sound power spectra: Plate at 24 cm, source model at 25 cm, and mapping plane at 23 cm distance from the array.


One can also notice that the Tikhonov regularized solution produces quite accurate sound power estimates up to 3 kHz. In the case of a single monopole source (represented in Fig. 5), Tikhonov produced significantly stronger underestimation at 3 kHz, although the source model and the measurement distance are very similar. The reason for that difference has not been identified, so it remains an interesting open question. Even though there is no sparse solution for the plate source, the iterative CS methods nevertheless provide a significant improvement of the estimated sound power above 3 kHz.

Figure 14 shows the corresponding relative average error spectra for the reconstructed sound intensity maps as defined in Eq. (24). Between 2.4 and 3 kHz the error levels are quite low for all the CS reconstructions and the Tikhonov regularized solution. Below 2.4 kHz, the CS methods produce higher errors than Tikhonov because of excessive sharpening by the minimization of the solution one-norm. In particular L1CVX and IZOTA concentrate the sources too much. Above 3 kHz, the error levels increase and the underestimation of the sound power increases slowly.

FIG. 14.

(Color online) Relative average error in % on the intensity maps: Plate at 24 cm, source model at 25 cm, and mapping plane at 23 cm distance from the array.


As for the case of simulated measurements on point sources, the effect of decreasing the measurement distance and reconstructing closer to the array shall be investigated. The distance to the plate is halved to become 12 cm, keeping the source model 1 cm below the plate, and the reconstruction plane is moved 1 cm further away from the plate to be 2 cm above the plate. Figure 15 shows the errors on the sound power spectra from the different methods. When comparing with the errors associated with the longer distance (in Fig. 13), the errors have been reduced up to 3 kHz. The two negative peaks associated with low radiation efficiency have been reduced as one would expect when decreasing the measurement distance. Above 3 kHz, however, the underestimation now increases much faster as compared to the case of the longer measurement distance. This could be because the exposure of the array microphones has become more inhomogeneous with effectively only a central sub-array exposed. Figure 16 contains the corresponding relative average deviations between the reconstructed intensity maps and the true map as defined in Eq. (24). When comparing with the errors resulting from the longer distance (Fig. 14), the low-frequency errors have been reduced because of better resolution, while the high-frequency errors have increased, perhaps because of the less uniform exposure of the array. Based on this example, it seems that the CS techniques do not extend the useful frequency range relative to the Tikhonov-based solution, when a short measurement distance is used with a non-sparse source. A longer measurement distance can improve the sound power estimation at high frequencies while sacrificing some resolution at low frequencies.

FIG. 15.

(Color online) Error on the estimated sound power spectra: Plate at 12 cm, source model at 13 cm, and mapping plane at 10 cm distance from the array.

FIG. 16.

(Color online) Relative average error in % on the intensity maps: Plate at 12 cm, source model at 13 cm, and mapping plane at 10 cm distance from the array.


A few concluding remarks shall be given in relation to the choice of measurement distance based on the limited data presented in the paper. The same conclusions seem to hold for all the considered CS methods.

If the sound field can be represented well by a sparse solution vector, then a small distance is clearly best—typically a distance approximately equal to the average microphone spacing. In that case, accurate point-wise reconstructions can be obtained quite close to the source. Longer measurement distances lead to poorer resolution near the source and therefore larger errors on the point-wise reconstruction. Sound power estimates, however, remain accurate.

If the sound field cannot be represented well by a sparse solution vector, then the use of a slightly longer measurement distance seems advantageous in the sense that it provides better estimates of sound power levels at the high frequencies. Point-wise reconstruction near the source surface is then not accurate. With measurement distances around two times the average array-microphone spacing, reasonable low-frequency resolution can still be achieved as a basis for sound-power ranking of sub-areas on the source surface. The application is then somewhat between near-field holography and generalized inverse beamforming, with a balanced focus on near-field mapping and far-field contribution estimation.

Table II contains the central processing unit (CPU) times in seconds taken by comparable MATLAB implementations of the different methods to solve the underdetermined problems at the 49 frequency lines in Fig. 13. IZOTA is fastest with IRLS second fastest. For the processing of the simulated measurements on point sources, the relative distributions of CPU seconds between the methods were very similar. It is noteworthy that IRLS is competitive despite the need for a new SVD as part of each iteration. The explanation is the rather small number of microphones.

TABLE II.

Central processing unit (CPU) time in seconds.

L1CVX FISTA IZOTA IRLS
480.0  268.0  48.3  119.5 

The paper has given, first, a short introduction to the solution of the inverse acoustic problem in the ESM using CS. When using a suitable irregular array, CS can accurately identify sparse source amplitude vectors well above the upper limiting frequency f_Tikh associated with Tikhonov regularization. The CS methods dealt with in the present paper are all based on use of one-norm regularization or minimization. The interior-point method implemented in the CVX library has been used as a reference. A main drawback of that method is its severe CPU load, but it also needs manual setting of one sensitive parameter, an upper bound on the residual. Five different iterative methods were then briefly described: ISTA, FISTA, IZOTA, IRLS, and YALL1. The methods were tested with two simulated measurements on monopole point sources (one with a single source and one with two sources) plus a simulated measurement on a baffled plate. The first two tests comply with the sparsity assumption behind CS, while this is not the case for the plate measurement. For practical applications, however, it is of interest to investigate how the methods work with non-sparse sources.

YALL1 was quickly found to be too unstable, so it was omitted after the single-monopole test. ISTA exhibited very slow convergence, as expected, so it was also omitted. IZOTA was shown to partially replace multiple sources by a single central source at the lowest frequencies, as known from earlier publications. The L1CVX algorithm also exhibited problems at low frequencies: Due to the added noise, the noise-floor parameter could not be chosen sufficiently small to prevent an excessive sharpening of the maps and some misplacement of the sources. Apart from that, all methods provided very comparable solutions.

Disregarding the lowest frequencies, where IZOTA has problems, the main differentiator among FISTA, IZOTA, and IRLS is the convergence speed and the CPU load. IRLS uses rather few iterations, typically less than 35, but each iteration requires a new SVD of a reweighted transfer matrix. FISTA and IZOTA need only matrix-vector multiplications: FISTA two per iteration and IZOTA three per iteration. However, FISTA typically needs five times the number of iterations needed by IZOTA to reach the same level of accuracy. The SVDs needed by IRLS are not a big problem with typical array applications because of the rather small number of microphones, which limits the size of the transfer matrix. Overall, IZOTA was fastest, with IRLS taking approximately 2.5 times as long and FISTA 5–6 times as long. L1CVX took even more CPU time and has the additional problem that the source code is much harder to get or implement. The iterative methods, except YALL1, are all very easy to implement. The most stable of all methods seems to be IRLS, and its associated parameter selection seems to be rather uncritical. Based on the presented limited data, an optimal choice of methodology could therefore be IRLS in the traditional Tikhonov frequency range and IZOTA at higher frequencies.

Table III gives a very short overview of the observed performance of the methods, with the accuracy (including stability) of the methods represented separately for low frequencies (LF) and high frequencies (HF). The number of "+" characters does not represent score values, only a subjective overall ranking across the benchmarks presented in the paper.

TABLE III.

Brief comparison of methods.

Accuracy LF Accuracy HF Calculation speed
L1CVX  ++  +++ 
ISTA  +++  +++ 
FISTA  +++  +++  ++ 
IZOTA  +++  ++++ 
IRLS  ++++  +++  +++ 

The function G to be minimized in Eq. (13) has the form

G(u) = \lambda\, \| u - q \|_2^2 + \theta\, \| u \|_1    (A1)

where q = {q_i} = {a_i + j b_i} and u = {u_i} = {x_i + j y_i} are complex vectors, and λ and θ are real positive parameters. The function can be rewritten as

G(u) = \sum_{i=1}^{I} \left( \lambda\, | u_i - q_i |^2 + \theta\, | u_i | \right)    (A2)

and further in terms of real variables

G(u) = \sum_{i=1}^{I} \left\{ \lambda \left[ ( x_i - a_i )^2 + ( y_i - b_i )^2 \right] + \theta \sqrt{ x_i^2 + y_i^2 } \right\}    (A3)

Clearly, the minimization can be performed independently for each component i, and the solution can be found by requiring the partial derivatives ∂G/∂x_i and ∂G/∂y_i to equal zero. After some trivial algebraic manipulation, the solution is found to be the one given in Eq. (15).

1. E. Fernandez-Grande, A. Xenaki, and P. Gerstoft, "A sparse equivalent source method for near-field acoustic holography," J. Acoust. Soc. Am. 141(1), 532–542 (2017).
2. J. Hald, "Fast wideband acoustical holography," J. Acoust. Soc. Am. 139(4), 1508–1517 (2016).
3. N. M. Abusag and D. J. Chappell, "On sparse reconstructions in near-field acoustic holography using the method of superposition," J. Comput. Acoust. 24, 1650009 (2016).
4. G. H. Koopmann, L. Song, and J. B. Fahnline, "A method for computing acoustic fields based on the principle of wave superposition," J. Acoust. Soc. Am. 86(6), 2433–2438 (1989).
5. C.-X. Bi, X. Z. Chen, J. Chen, and R. Zhou, "Nearfield acoustic holography based on the equivalent source method," Sci. China, Ser. E: Technol. Sci. 48(3), 338–353 (2005).
6. A. Sarkissian, "Method of superposition applied to patch near-field acoustical holography," J. Acoust. Soc. Am. 118, 671–678 (2005).
7. G. Chardon, L. Daudet, A. Peillot, F. Ollivier, N. Bertin, and R. Gribonval, "Near-field acoustic holography using sparse regularization and compressive sampling principles," J. Acoust. Soc. Am. 132(3), 1521–1534 (2012).
8. J. Hald, "Wideband acoustical holography," in Proceedings of Inter-Noise 2014, paper 44.
9. E. Fernandez-Grande and A. Xenaki, "The equivalent source method as a sparse signal reconstruction," in Proceedings of Inter-Noise 2015.
10. C.-X. Bi, Y. Liu, L. Xu, and Y.-B. Zhang, "Sound field reconstruction using compressed modal equivalent source method," J. Acoust. Soc. Am. 141(1), 73–79 (2017).
11. M. Grant and S. Boyd, "CVX: Matlab software for disciplined convex programming (version 2.1)," http://cvxr.com/cvx (Last viewed June 20, 2018).
12. G. Ping, Z. Chu, Z. Xu, and L. Shen, "A refined wideband acoustical holography based on equivalent source method," Sci. Rep. 7, 43458 (2017).
13. T. Suzuki, "L1 generalized inverse beamforming algorithm resolving coherent/incoherent, distributed and multipole sources," J. Sound Vib. 330, 5835–5851 (2011).
14. L. Xu, C.-X. Bi, X.-Z. Zhang, and C.-J. Zheng, "High resolution nearfield acoustic holography based on iterative weighted equivalent source method," in Proceedings of Inter-Noise 2014, paper 458.
15. B. Oudompheng, A. Pereira, C. Picard, Q. Leclère, and B. Nicolas, "A theoretical and experimental comparison of the iterative equivalent source method and the generalized inverse beamforming," in Proceedings of Berlin Beamforming Conference 2014, paper BeBec-2014-12.
16. P. Gerstoft, A. Xenaki, and C. F. Mecklenbrauker, "Multiple and single snapshot compressive beamforming," J. Acoust. Soc. Am. 138, 2003–2014 (2015).
17. H. F. Alqadah, N. Valdivia, and E. G. Williams, "A super-resolving nearfield electromagnetic holographic method," IEEE Trans. Antenn. Propag. 62(7), 3679–3692 (2014).
18. A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sci. 2(1), 183–202 (2009).
19. A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm with application to wavelet-based image deblurring," in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (2009), pp. 693–696.
20. I. Rish and G. Y. Grabarnik, Sparse Modelling: Theory, Algorithms and Applications (Chapman and Hall/CRC, New York, 2015), Sec. 5.3.
21. J. M. Bioucas-Dias and M. A. T. Figueiredo, "A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. Image Proc. 16(12), 2992–3004 (2007).
22. P. J. Huber, "Robust statistics," in Wiley Series in Probability and Statistics (Wiley, New York, 1981).
23. I. Daubechies, R. Devore, M. Fornasier, and C. S. Güntürk, "Iteratively reweighted least squares minimization for sparse recovery," Commun. Pure Appl. Math. 63(1), 1–38 (2010).
24. J. Yang and Y. Zhang, "Alternating direction algorithms for L1-problems in compressive sensing," SIAM J. Sci. Comput. 33(1–2), 250–278 (2011).
25. Y. Zhang, J. Yang, and W. Yin, "YALL1: Your ALgorithms for L1 (version 1.4)," available at yall1.blogs.rice.edu (Last viewed June 20, 2018).
26. E. G. Williams, Fourier Acoustics: Sound Radiation and Nearfield Acoustical Holography (Academic, London, 1999), Sec. 2.14.