Dispersion curves characterize many propagation media. When these curves are known, many methods can use them to analyze waves. Yet, in many scenarios, their exact values are unknown due to material and environmental uncertainty. This paper presents a fast implementation of sparse wavenumber analysis, a method for recovering dispersion curves from data. This approach, based on orthogonal matching pursuit, is compared with a prior implementation based on basis pursuit denoising. In the results, orthogonal matching pursuit provides a two to three order of magnitude improvement in speed and a small average reduction in prediction capability. The analysis is demonstrated across multiple scenarios and parameters.
1. Introduction
Accurately measuring wave phase velocity (i.e., dispersion curves) is a common challenge throughout different acoustics applications, including the fields of structural health monitoring and nondestructive testing. Dispersion curves, which describe how velocities or wavenumbers vary with frequency, can aid researchers in analyzing and predicting how waves travel through an environment. While theory can estimate the wave velocity and amplitude in some media, these estimates often fail to match experimental observations. This is often due to unaccounted-for parameters or boundary conditions in the wave equation and/or variability from environmental and operational effects.1
In response to this challenge, we introduced sparse wavenumber analysis in prior work.2 Sparse wavenumber analysis recovers the dispersion curves of a wave by recognizing that its frequency-wavenumber representation is sparse. In prior work, sparse wavenumber analysis utilized an optimization method known as basis pursuit denoising3 to accurately recover dispersion curves from data. This was demonstrated in the presence of distorting effects, such as multipath. While basis pursuit denoising is effective, it is often too slow for real-time applications. Its performance also depends on the choice of a regularization parameter τ, whose optimal value is generally scenario-specific and unknown.
We explore an alternative implementation of sparse wavenumber analysis based on orthogonal matching pursuit.4 Orthogonal matching pursuit is a method for recovering sparse representations when computational efficiency is important. In general, the recovery performance of orthogonal matching pursuit and basis pursuit denoising depends on many factors, including the number of measurements, the wavenumber sampling density, and the amount of noise in the data. In this paper, we compare the computational efficiency and recovery performance of both implementations using Lamb wave simulations. We show that, for ten frequencies, orthogonal matching pursuit performs up to 1400 times faster than basis pursuit denoising with a relatively small average reduction of 0.053 in its prediction correlation coefficient. We also illustrate each method's behavior as a function of various parameters and scenarios.
2. Sparse wavenumber analysis
Throughout this paper, we assume waves travel according to a multi-modal, dispersive wave model. That is, we assume a wave measurement X(d, ω) with angular frequency ω and travel distance d is represented by
In this representation, the wave is a sum of modes m = 1, 2,… . Each mode has its own frequency-dependent amplitude Gm(ω) and frequency-dependent dispersion relation km(ω) such that the mode speed can vary as a function of frequency. The term S(ω) represents the initial source excitation. We represent a collection of M wave measurements at frequency ωq as an M × 1 vector
The N × 1 vector vq represents amplitudes at N discretized wavenumbers κ1, κ2,…, κN for the frequency ωq. The vector d contains the M travel distances d = [d1 ⋯ dM] corresponding to each measurement in xq. The M × N matrix Φ(d)Dκ relates xq to vq and is defined by
The matrices Dκ and Φ(d) can be derived from Eq. (2).
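To make the structure of this model concrete, the following Python/NumPy sketch builds a plane-wave dictionary with assumed entries e^(−j κn dm) and an identity placeholder for Dκ, then synthesizes a measurement vector xq from a sparse vq. The helper name and entries are illustrative assumptions; the exact Φ(d) and Dκ of Eqs. (2)–(4) (see Ref. 2) may include additional amplitude or geometric-spreading factors.

```python
import numpy as np

def propagation_dictionary(d, kappa):
    """Sketch of an M x N plane-wave dictionary relating wavenumber amplitudes to
    measurements at travel distances d. Assumed entry: exp(-1j * kappa_n * d_m).
    The published Phi(d) and D_kappa may include extra scaling terms."""
    d = np.asarray(d, dtype=float).reshape(-1, 1)          # M x 1 travel distances
    kappa = np.asarray(kappa, dtype=float).reshape(1, -1)  # 1 x N discretized wavenumbers
    Phi = np.exp(-1j * kappa * d)                          # M x N complex exponentials
    D_kappa = np.eye(kappa.size)                           # placeholder for the diagonal D_kappa
    return Phi @ D_kappa

# Example: x_q = Phi(d) D_kappa v_q at a single frequency omega_q
kappa = np.arange(1, 501) * 2.0                            # N = 500 wavenumbers (rad/m)
d = np.array([0.31, 0.74, 1.22])                           # M = 3 travel distances (m)
v_q = np.zeros(kappa.size, dtype=complex)
v_q[[40, 250]] = [1.0, 0.5]                                # sparse vector: two active modes
x_q = propagation_dictionary(d, kappa) @ v_q               # simulated measurement vector
```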
The goal of sparse wavenumber analysis is to accurately recover the wavenumber vectors vq at each frequency ωq. These vectors represent the dispersion curves of the waves2 and can be used to predict how the waves behave in a medium.5 Sparse wavenumber analysis recovers vq by utilizing the knowledge that vq is sparse, or mostly zeros. Further details on this theoretical setup and the underlying physics can be found in prior work.2
2.1 Basis pursuit denoising
Basis pursuit denoising is a convex optimization approach for recovering the sparse representations of signals. Sparse representations have been used in various applications, including acoustic localization.6,7 Basis pursuit denoising has been widely studied and is currently regarded as one of the best methods, in terms of recovery performance, for extracting sparse representations. It minimizes the squared error between the measured data xq and the model Φ(d)Dκvq with regularization to encourage sparsity in vq. For sparse wavenumber analysis, basis pursuit denoising is implemented for each frequency ωq by solving the optimization problem
In Eq. (5), ‖·‖1 represents the ℓ1-norm and ‖·‖2 represents the ℓ2-norm, or Euclidean norm. The larger the regularization parameter τ, the greater the sparsity enforced in the recovered solution. If τ is too large, the recovered solution converges to an all-zero vector. One scalar normalization ensures that each column of Φ(d)Dκ has a unit ℓ2-norm; another ensures that xq, at each frequency ωq, has a unit ℓ2-norm. These normalizations are included for numerical stability and consistency across multiple data sets.
Two challenges in implementing basis pursuit denoising are its computational speed and its need for an effective regularization parameter. For complex-valued signals, the basis pursuit denoising problem can be solved using a second-order cone program.8 Second-order cone programs are typically solved using interior-point algorithms, which are computationally intensive due to the need to solve systems of linear equations in each iteration.9 Furthermore, the overall performance of sparse wavenumber analysis depends on the choice of τ. While prior work demonstrated 0.4 < τ < 0.6 to be effective against multipath noise, this rule of thumb is not necessarily applicable in all applications and scenarios.
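For reference, a constrained basis pursuit denoising form consistent with the description above (unit-normalized xq and dictionary columns, residual tolerance τ) can be prototyped with a general-purpose convex solver. The Python/CVXPY sketch below is an illustrative assumption, not the cvx/SeDuMi implementation used in this work; the exact objective and normalizations should follow Eq. (5).

```python
import numpy as np
import cvxpy as cp

def bpdn_recover(A, x_q, tau):
    """Sketch of basis pursuit denoising at one frequency: minimize the l1-norm of
    v_q subject to a residual bound tau. A is assumed to be the column-normalized
    dictionary Phi(d) D_kappa and x_q the unit-norm measurement vector."""
    M, N = A.shape
    v_q = cp.Variable(N, complex=True)
    objective = cp.Minimize(cp.norm1(v_q))
    constraints = [cp.norm(x_q - A @ v_q, 2) <= tau]
    cp.Problem(objective, constraints).solve()
    return v_q.value

# Usage sketch (normalizations as described in the text):
# A = Phi_Dk / np.linalg.norm(Phi_Dk, axis=0)   # unit l2-norm columns
# x = x_q / np.linalg.norm(x_q)                 # unit l2-norm measurement vector
# v_hat = bpdn_recover(A, x, tau=0.4)
```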
2.2 Orthogonal matching pursuit
Orthogonal matching pursuit is a greedy, iterative algorithm for extracting the sparse representation of a signal.4 We consider orthogonal matching pursuit since it is widely implemented and studied in conjunction with basis pursuit denoising. The method iteratively finds the column, or atom, of the matrix Φ(d) that best matches the measured data xq. The contribution of the selected atoms is then subtracted from xq and the process is repeated. Formally, we can describe the algorithm for frequencies ω1, ω2,…, ωQ as
In the algorithm, ∅ represents the empty set used to initialize the support set of selected wavenumber indices, and a subscript i denotes the ith element of a vector. A vector restricted to the support set contains only the values at the indices in that set, and Φ(d) restricted to the support set contains only the corresponding columns of Φ(d). Therefore, if the support set contains k0 indices, the restricted vector has dimensions k0 × 1 and the restricted matrix has dimensions M × k0.
Unlike basis pursuit denoising, orthogonal matching pursuit does not require a regularization parameter τ. Instead, orthogonal matching pursuit terminates when a desired sparsity k0 (number of non-zero values) is achieved. Other stopping criteria may be used. Because the approximate sparsity of many dispersion curves, i.e., the number of modes at any frequency, is known, we use the sparsity criterion. Due to its simplicity, orthogonal matching pursuit is generally faster than basis pursuit denoising, but often at the cost of performance. In the following sections, we explore the tradeoffs between computational efficiency and recovery performance.
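As a concrete reference for the procedure described above, the following NumPy sketch implements a standard orthogonal matching pursuit loop with the sparsity stopping criterion k0. It illustrates the generic algorithm rather than reproducing our exact listing; variable names are illustrative.

```python
import numpy as np

def omp_recover(A, x_q, k0):
    """Standard orthogonal matching pursuit: greedily select k0 atoms of A that
    best match the residual, then least-squares fit on the selected columns."""
    M, N = A.shape
    residual = x_q.astype(complex).copy()
    support = []                                   # indices of selected atoms
    v_hat = np.zeros(N, dtype=complex)
    for _ in range(k0):
        correlations = A.conj().T @ residual       # match each atom to the residual
        i = int(np.argmax(np.abs(correlations)))   # best-matching atom
        if i not in support:
            support.append(i)
        coeffs, *_ = np.linalg.lstsq(A[:, support], x_q, rcond=None)
        residual = x_q - A[:, support] @ coeffs    # orthogonalize against chosen atoms
    v_hat[support] = coeffs
    return v_hat

# e.g., v_hat_q = omp_recover(A, x, k0=2) for the two Lamb-wave modes considered here
```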
3. Simulation setup
We examine the performance of sparse wavenumber analysis with basis pursuit denoising and orthogonal matching pursuit by simulating Lamb waves, or plate waves, on a 2 m × 2 m aluminum plate. The Lamb waves are simulated according to Eq. (1) with S(ω)Gm(ω) = 1 and a dispersion relation km(ω) computed by numerically solving the Rayleigh–Lamb equation.10 The Rayleigh–Lamb equation is the solution to the wave equation for a plate and describes a Lamb wave's frequency and wavenumber dependence. We solved the Rayleigh–Lamb equation with a longitudinal bulk wave speed of 6334.31 m/s, a transverse bulk wave speed of 3042.90 m/s, and a plate thickness of 3 mm. To evaluate the performance of each algorithm, we consider the effects of five parameters: (1) the number of measurements M, (2) the sampled wavenumber density dκ, (3) the number of frequencies Q, (4) the strength of multipath noise R, and (5) the locations of the sensors.
To observe the effect of the number of measurements M on each method, we implement sparse wavenumber analysis for 12 sets of measurements with 10 ≤ M ≤ 120. To observe how the wavenumber density, the number of wavenumbers N divided by the maximum wavenumber, affects performance, we analyze six sets of wavenumbers with densities 0.05 ≤ dκ ≤ 0.55. To evaluate the effect of the number of frequencies Q on computational speed, we analyze ten frequency sets with 1 ≤ Q ≤ 10 over a frequency range of 110 to 200 kHz. When evaluating recovery performance, we only consider Q = 10; the recovery results, averaged over frequency, do not vary considerably for smaller Q. When implementing orthogonal matching pursuit, we use a sparsity of k0 = 2 since there are two modes in the Lamb wave dispersion curves within the chosen frequency range.
We evaluate the effect of different multipath noise strengths by varying R, the number of reflections modeled in the simulation. When the reflection order R = 0, no multipath is simulated. When R = 1, waves can reflect from the plate's boundaries once. When R = 2, the waves can reflect from the plate's boundaries up to two times. The wave reflections are approximated using the “method of mirror images.”11 We treat reflections as originating from “virtual sources.” The virtual source locations are determined by mirroring the initial source across each boundary around the plate's perimeter. We refer to the resulting multipath behavior as “noise” since it is not explicitly defined in the propagation matrix Φ(d).
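To make the reflection model concrete, the sketch below (Python/NumPy; the helper name is illustrative) generates the first-order (R = 1) virtual sources by mirroring a source position across the four boundaries of the 2 m × 2 m plate. Repeating the mirroring on these virtual sources approximates higher reflection orders.

```python
import numpy as np

def first_order_virtual_sources(src, plate=2.0):
    """Mirror a source across each boundary of a square plate (method of images).
    Returns the four R = 1 virtual source positions; applying the same mirroring
    to these positions recursively approximates higher reflection orders."""
    x, y = src
    return np.array([
        [-x,             y],              # mirror across the left edge   (x = 0)
        [2 * plate - x,  y],              # mirror across the right edge  (x = plate)
        [x,             -y],              # mirror across the bottom edge (y = 0)
        [x,              2 * plate - y],  # mirror across the top edge    (y = plate)
    ])

# Each virtual source adds a travel path not represented in Phi(d), which is why
# the resulting multipath behavior is treated as "noise" in the recovery problem.
```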
To consider the effects of sensor position, we place the sensors randomly according to uniform distributions and implement each method for 50 different random sensor setups. In each setup, we simulate one transmitter and M receivers to collect M individual measurements. The results in the following section are averaged over these 50 different scenarios to ensure our results are not biased by the sensor placement.
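A minimal sketch of this Monte Carlo placement, assuming one transmitter and M receivers drawn uniformly over the 2 m × 2 m plate (parameter names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
PLATE = 2.0        # plate side length in meters (2 m x 2 m)
M = 120            # number of receivers (one of the values swept in this study)
N_SETUPS = 50      # random sensor configurations to average over

setups = []
for _ in range(N_SETUPS):
    tx = rng.uniform(0.0, PLATE, size=2)        # transmitter position
    rx = rng.uniform(0.0, PLATE, size=(M, 2))   # receiver positions
    d = np.linalg.norm(rx - tx, axis=1)         # M direct-path distances used in Phi(d)
    setups.append((tx, rx, d))
```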
To implement basis pursuit denoising, we use the optimization toolbox cvx (Ref. 12) in MATLAB™, which utilizes the SeDuMi (self-dual-minimization)13 solver. Note that algorithm development for solving the basis pursuit denoising problem is an active area of research. cvx is a widely utilized and accepted tool in the optimization and signal processing communities and was utilized in our prior work on sparse wavenumber analysis.2 For these reasons, we consider cvx an appropriate benchmark for our current analysis. We implement basis pursuit denoising with τ = 0.001 when R = 0 and τ = 0.4 when R > 0, based on results from prior work.2
4. Results and discussion
In this section, we discuss (1) recovery performance, for orthogonal matching pursuit and basis pursuit denoising, as a function of the number of measurements M, (2) recovery performance as a function of the wavenumber density dκ, (3) computational speed as a function of the number of measurements M, and (4) computational speed as a function of the number of frequencies Q. To evaluate recovery performance, we simulate Lamb waves yq between one transmitter and 200 new receiver locations. We then use sparse wavenumber synthesis and the recovered dispersion curves to predict these 200 Lamb waves.2 Sparse wavenumber synthesis predicts measurements at each frequency ωq by solving
where r contains the 200 distances associated with each transmitter-receiver pair. Both basis pursuit denoising and orthogonal matching pursuit minimize the least-squares error between the measured data and the model with sparsity constraints. As a metric of recovery performance, we compute the average correlation coefficient (a measure of least-squares fit)
between the true yq and predicted signals across all Q frequencies under test.
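For illustration, and under the assumption that sparse wavenumber synthesis predicts each measurement as ŷq = Φ(r)Dκ v̂q and that the correlation coefficient is a magnitude-normalized inner product between the complex signals (the exact definitions follow Ref. 2 and the equations above), a minimal sketch is:

```python
import numpy as np

def predict_and_correlate(A_r, v_hat_q, y_q):
    """Predict measurements at new distances r from the recovered wavenumber
    amplitudes (assumed synthesis: y_hat = Phi(r) D_kappa v_hat) and score the
    prediction with a normalized correlation coefficient."""
    y_hat = A_r @ v_hat_q
    corr = np.abs(np.vdot(y_q, y_hat)) / (np.linalg.norm(y_q) * np.linalg.norm(y_hat))
    return y_hat, corr

# The reported metric averages this coefficient over all Q frequencies under test.
```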
Figure 1(a) illustrates recovery performance as a function of the number of measurements M. In these results, we use a constant wavenumber density of dκ = 0.55 and a constant number of frequencies Q = 10. The figure shows, for both implementations, that performance monotonically improves as M increases and degrades as the multipath noise order R increases. In general, the performance of orthogonal matching pursuit and basis pursuit denoising is similar; the correlation coefficient for orthogonal matching pursuit is, on average, 0.053 smaller.
Fig. 1. (Color online) Average algorithm recovery performance as a function of (a) the number of measurements M at Q = 10 and dκ = 0.55 and (b) the wavenumber density dκ at Q = 10 and M = 120. Bars represent one standard deviation around the mean. Average computation time of basis pursuit denoising (BPD) over orthogonal matching pursuit (OMP) as a function of (c) the number of measurements M at Q = 10 and (d) the number of frequencies Q at M = 120. Values next to each line represent wavenumber density dκ values.
Figure 1(b) illustrates recovery performance as a function of the wavenumber density dκ. In these results, we use a constant number of measurements M = 120 and a constant number of frequencies Q = 10. The figure shows, for both implementations, that performance monotonically improves as dκ increases and generally degrades as R increases. Note, however, that basis pursuit denoising for R = 0 and small dκ does not follow this behavior. For small wavenumber densities, the significant quantization that occurs prevents the inverse problem in Eq. (5) from being solved effectively. This implies that a larger τ value, which makes basis pursuit denoising more robust to errors, would be more appropriate for small wavenumber densities.
Overall, Figs. 1(a) and 1(b) show that the recovery performance of orthogonal matching pursuit remains weaker than, but relatively similar to, that of basis pursuit denoising in most situations. The recovery performance results also provide guidelines for effectively implementing sparse wavenumber analysis. For mild multipath noise (i.e., R = 1), the best performance is generally obtained once M and dκ reach the values at which the curves begin to level out. Note that the performance in stronger multipath scenarios can be improved through intelligent time-domain windowing.2
Figure 1(c) illustrates the average computational performance of each method as a function of the number of measurements M. In these results, we use a constant number of frequencies Q = 10. These results are averaged across 50 different sensor configurations as well as 3 different R values. The vertical axis represents the time to compute the wavenumber vectors with basis pursuit denoising divided by the time to compute them with orthogonal matching pursuit. Figure 1(c) shows that orthogonal matching pursuit is up to approximately 1400 times faster than basis pursuit denoising. The speed gains improve with increases in both M and dκ.
Figure 1(d) illustrates the average computational performance, using the same speed-ratio metric as Fig. 1(c), as a function of the number of frequencies Q. In these results, we use a constant number of sensors M = 120. These results are also averaged across 50 different sensor configurations and 3 different R values. The figure again shows that orthogonal matching pursuit is significantly faster than basis pursuit denoising, and that its computational advantage improves with increases in Q.
Overall, Figs. 1(c) and 1(d) show that orthogonal matching pursuit provides substantially better computational efficiency than basis pursuit denoising. Note that computational speeds can vary significantly based on the computational hardware and software used. The results in this paper are indicative of the potentially significant gains provided by orthogonal matching pursuit over basis pursuit denoising.
5. Conclusions
In this paper, we developed an implementation of sparse wavenumber analysis, a method for recovering dispersion curves from data, based on orthogonal matching pursuit, a fast algorithm for recovering the sparse representations of signals. We compared this approach with a previously studied implementation based on basis pursuit denoising. Our results illustrate the tradeoffs between the two methods and provide guidelines for implementing sparse wavenumber analysis. If accuracy is paramount, basis pursuit denoising should be employed, since the predictive power of sparse wavenumber analysis is consistently stronger with basis pursuit denoising. However, if near real-time processing is required, orthogonal matching pursuit should be used. Orthogonal matching pursuit is two to three orders of magnitude faster than basis pursuit denoising with a relatively small degradation in performance.
Acknowledgments
J.B.H. was supported by the National Science Foundation Graduate Research Fellowship under Grant No. 0946825.