This paper proposes efficient algorithms for accurate recovery of the directions-of-arrival (DoAs) of sources from single-snapshot measurements using compressive beamforming (CBF). In CBF, the conventional sensor array signal model is cast as an underdetermined complex-valued linear regression model, and sparse signal recovery methods are used for solving the DoA finding problem. A complex-valued pathwise weighted elastic net (c-PW-WEN) algorithm is developed that finds solutions at the knots of penalty parameter values over a path (or grid) of elastic net (EN) tuning parameter values. c-PW-WEN also computes the least absolute shrinkage and selection operator (LASSO) or weighted LASSO in its path. A sequential adaptive EN (SAEN) method is then proposed that is based on the c-PW-WEN algorithm and uses adaptive weights that depend on the previous solution. Extensive simulation studies illustrate that SAEN improves the probability of exact recovery of the true support compared to conventional sparse signal recovery approaches, such as the LASSO, the EN, or orthogonal matching pursuit, in several challenging multiple-target scenarios. The effectiveness of SAEN is more pronounced in the presence of high mutual coherence.
I. INTRODUCTION
Acoustic signal processing problems generally employ a system of linear equations as the data model. For overdetermined linear systems, least squares estimation (LSE) is often applied, but in underdetermined or ill-conditioned problems the LSE is no longer unique: the optimization problem has an infinite number of solutions. In these cases, additional constraints, such as those promoting sparse solutions, are commonly used, for example the least absolute shrinkage and selection operator (LASSO)1 or the elastic net (EN) penalty,2 which extends the LASSO by taking a convex combination of the $\ell_1$ and $\ell_2$ penalties of the LASSO and ridge regression.
It is now a common practice in acoustic applications3,4 to employ grid based sparse signal recovery methods for finding source parameters, e.g., direction-of-arrival (DoA) and power. This approach, referred to as compressive beamforming (CBF), was originally proposed in Ref. 5 and has then emerged as one of the most useful approaches in problems where only a few measurements are available. Since the pioneering work of Ref. 5, the usefulness of CBF approach has been shown in a series of papers.6–14 In this paper, we address the problem of estimating the unknown source parameters when only a single snapshot is available.
Existing approaches to the grid-based single-snapshot CBF problem often use the LASSO for sparse recovery. The LASSO, however, often performs poorly when the sources are closely spaced in the angular domain or when there are large variations in the source powers. The same holds true when the grid used for constructing the array steering matrix is dense, which is the case when one aims for high-resolution DoA finding. The problem is due to the fact that the LASSO performs poorly when the predictors are highly correlated (cf. Refs. 2, 15). Another problem is that the LASSO lacks group selection ability. This means that when two sources are closely spaced in the angular domain, the LASSO tends to choose only one of them in the estimation grid and ignores the other. EN often performs better in such cases as it enforces a sparse solution but, unlike the LASSO, has a tendency to pick or reject the correlated variables as a group. Furthermore, EN also enjoys the computational advantages of the LASSO,2 and adaptation using smartly chosen data-dependent weights can further enhance performance.
The main aim of this paper is to improve over the conventional sparse signal recovery methods, especially in the presence of high mutual coherence of the basis vectors or when the non-zero coefficients have largely varying amplitudes. The former case occurs in the CBF problem when a dense grid is used for constructing the steering matrix or when sources arrive at the sensor array from either neighboring or oblique angles. To this end, we propose a sequential adaptive approach using the weighted elastic net (WEN) framework. To achieve this in a computationally efficient way, we propose a homotopy method16 that is a complex-valued extension of the least angle regression and shrinkage (LARS)17 algorithm for the weighted LASSO problem, which we refer to as c-LARS-WLASSO. The developed c-LARS-WLASSO method is numerically cost effective and avoids an exhaustive grid-search over candidate values of the penalty parameter.
In this paper, we assume that the number of non-zero coefficients (i.e., the number of sources K arriving at the sensor array in the CBF problem) is known and propose a complex-valued pathwise (c-PW-)WEN algorithm that utilizes c-LARS-WLASSO along with the PW-LARS-EN algorithm proposed in Ref. 18 to compute the WEN path. c-PW-WEN computes the K-sparse WEN solutions over a grid of EN tuning parameter values and then selects the best final WEN solution. We also propose a novel sequential adaptive elastic net (SAEN) approach that applies adaptive c-PW-WEN sequentially by decreasing the sparsity level (order) from 3K to K in three stages. SAEN utilizes smartly chosen adaptive (i.e., data-dependent) weights that are based on the solutions obtained in the previous stage.
Application of the developed algorithms is illustrated in the single-snapshot CBF DoA estimation problem. In CBF, accurate recovery depends heavily on the user-specified angular estimation grid (the look directions of interest), which determines the array steering matrix consisting of the array response vectors to the look directions (estimation grid points) of interest. A dense angular grid implies high mutual coherence, which indicates a poor recovery region for most sparse recovery methods.19 The effectiveness of SAEN compared to state-of-the-art sparse signal recovery algorithms is illustrated via extensive simulation studies.
The paper is structured as follows. Section II discusses the WEN optimization problem and its benefits over the LASSO and adaptive LASSO. We introduce the c-LARS-WLASSO method in Sec. III. In Sec. IV, we develop the c-PW-WEN algorithm that finds the K-sparse WEN solutions over a grid of EN tuning parameter values. In Sec. V, the SAEN approach is proposed. Section VI lays out the DoA estimation problem from single-snapshot measurements using CBF. The simulation studies, using a large variety of set-ups, are provided in Sec. VII. Finally, Sec. VIII concludes the paper.
Notations: Lowercase boldface letters are used for vectors and uppercase for matrices. The $\ell_1$-norm and the $\ell_2$-norm are defined as $\|\mathbf{a}\|_1 = \sum_{j=1}^p |a_j|$ and $\|\mathbf{a}\|_2 = \big(\sum_{j=1}^p |a_j|^2\big)^{1/2}$, respectively, where $|a|$ denotes the modulus of a complex number $a$. The support of a vector $\mathbf{a} \in \mathbb{C}^p$ is the index set of its nonzero elements, i.e., $\mathrm{supp}(\mathbf{a}) = \{j : a_j \neq 0\}$. The $\ell_0$-(pseudo)norm of $\mathbf{a}$ is defined as $\|\mathbf{a}\|_0 = |\mathrm{supp}(\mathbf{a})|$, which is equal to the total number of nonzero elements in it. For a vector $\mathbf{a}$ (respectively, matrix $\mathbf{A}$) and an index set $\Gamma$ of cardinality $|\Gamma|$, we denote by $\mathbf{a}_\Gamma$ (respectively, $\mathbf{A}_\Gamma$) the vector (respectively, matrix) restricted to the components of $\mathbf{a}$ (respectively, columns of $\mathbf{A}$) indexed by the set $\Gamma$. Let $\mathbf{a} \circ \mathbf{b}$ denote the Hadamard (i.e., element-wise) product and $\mathbf{a} \oslash \mathbf{b}$ the element-wise division of the vectors $\mathbf{a}$ and $\mathbf{b}$. We denote by $\langle \mathbf{a}, \mathbf{b} \rangle = \mathbf{a}^{\mathsf H}\mathbf{b}$ the usual Hermitian inner product of $\mathbb{C}^n$. Finally, $\mathrm{diag}(\mathbf{a})$ denotes a matrix with the elements of $\mathbf{a}$ as its diagonal elements.
II. WEIGHTED ELASTIC NET FRAMEWORK
We consider the linear model, where the n complex-valued measurements are modeled as

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \qquad (1)$$

where $\mathbf{X} \in \mathbb{C}^{n \times p}$ is a known complex-valued design or measurement matrix, $\boldsymbol{\beta} \in \mathbb{C}^p$ is the unknown vector of complex-valued regression coefficients, and $\boldsymbol{\varepsilon} \in \mathbb{C}^n$ is the complex noise vector. For ease of exposition, we consider the centered linear model (i.e., we assume that the intercept is equal to zero). In this paper, we deal with the underdetermined or ill-posed linear model, where $p > n$, and the primary interest is to find a sparse estimate of the unknown parameter vector $\boldsymbol{\beta}$ given $\mathbf{y}$ and $\mathbf{X}$. In this paper, we assume that the sparsity level, $K = \|\boldsymbol{\beta}\|_0$, i.e., the number of non-zero elements of $\boldsymbol{\beta}$, is known. In the DoA finding problem using compressive beamforming, this is equivalent to assuming that the number of sources arriving at the sensor array is known.
The WEN estimator finds the solution to the following constrained optimization problem

$$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta} \in \mathbb{C}^p} \|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\|_2^2 \quad \text{subject to} \quad P_{\alpha,\mathbf{w}}(\boldsymbol{\beta}) \le t, \qquad (2)$$

where $t \ge 0$ is the threshold parameter chosen by the user and

$$P_{\alpha,\mathbf{w}}(\boldsymbol{\beta}) = \alpha\,\|\mathbf{w} \circ \boldsymbol{\beta}\|_1 + \frac{1-\alpha}{2}\,\|\mathbf{w} \circ \boldsymbol{\beta}\|_2^2$$

is the WEN constraint (or penalty) function, the vector $\mathbf{w} \ge \mathbf{0}$ collects the non-negative weights $w_j$, $j = 1, \ldots, p$, and $\alpha \in [0, 1]$ is an EN tuning parameter. Both the weights and $\alpha$ are chosen by the user. When the weights are data-dependent, we refer to the solution as the adaptive EN (AEN). Note that AEN is an extension of the adaptive LASSO.20
The constrained optimization problem in Eq. (2) can also be written in an equivalent penalized form

$$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta} \in \mathbb{C}^p} \tfrac{1}{2}\|\mathbf{y} - \mathbf{X}\boldsymbol{\beta}\|_2^2 + \lambda\, P_{\alpha,\mathbf{w}}(\boldsymbol{\beta}), \qquad (3)$$

where $\lambda \ge 0$ is the penalty (or regularization) parameter. The problems in Eqs. (2) and (3) are equivalent due to Lagrangian duality and either can be solved. Herein, we use Eq. (3).
Recall that the EN tuning parameter $\alpha$ offers a blend between the LASSO and ridge regression. The benefit of EN is its ability to select correlated variables as a group, which is illustrated in Fig. 1. The EN penalty has singularities at the vertices like the LASSO, which is a necessary property for sparse estimation. It also has strictly convex edges, which then help in selecting variables as a group, a useful property when high correlations exist between predictors. Moreover, for $\mathbf{w} = \mathbf{1}$ (a vector of ones) and α = 1, Eq. (3) results in the LASSO solution, and for $\mathbf{w} = \mathbf{1}$ and α = 0 we obtain the ridge regression21 estimator.
(Color online) The LASSO solution is often at the vertices (corners), but the EN solution can occur on the edges as well, depending on the correlations among the variables. In the uncorrelated case of (a), both the LASSO and the EN have a sparse solution. When the predictors are highly correlated as in (b), the EN exhibits group selection in contrast to the LASSO.
The WEN solution can be computed using any algorithm that can find the non-weighted ($\mathbf{w} = \mathbf{1}$) EN solution. To see this, let us write $\tilde{\boldsymbol{\beta}} = \mathbf{w} \circ \boldsymbol{\beta}$ and $\tilde{\mathbf{X}} = \mathbf{X}\,\mathrm{diag}(\mathbf{w})^{-1}$, so that $\mathbf{X}\boldsymbol{\beta} = \tilde{\mathbf{X}}\tilde{\boldsymbol{\beta}}$. Then, the WEN solution is found by applying the following steps.

- Solve the (non-weighted) EN solution on the transformed data $(\mathbf{y}, \tilde{\mathbf{X}})$:

$$\hat{\tilde{\boldsymbol{\beta}}} = \arg\min_{\tilde{\boldsymbol{\beta}} \in \mathbb{C}^p} \tfrac{1}{2}\|\mathbf{y} - \tilde{\mathbf{X}}\tilde{\boldsymbol{\beta}}\|_2^2 + \lambda\, P_{\alpha}(\tilde{\boldsymbol{\beta}}),$$

where $P_{\alpha}(\tilde{\boldsymbol{\beta}}) = \alpha\|\tilde{\boldsymbol{\beta}}\|_1 + \frac{1-\alpha}{2}\|\tilde{\boldsymbol{\beta}}\|_2^2$ is the EN penalty.

- The WEN solution for the original data is

$$\hat{\boldsymbol{\beta}} = \hat{\tilde{\boldsymbol{\beta}}} \oslash \mathbf{w}.$$
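To make the reduction concrete, the following minimal sketch solves a weighted EN problem for real-valued data via plain column scaling; it uses scikit-learn's ElasticNet (whose (alpha, l1_ratio) parameterization differs from our $(\lambda, \alpha)$ pair), and all data and variable names are ours for illustration, not from the reference implementation.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n, p = 20, 40
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[3, 17]] = [2.0, -1.5]
y = X @ beta_true + 0.05 * rng.standard_normal(n)

w = np.ones(p)
w[10:] = 2.0                      # penalize the last coefficients more heavily

X_tilde = X / w                   # step 1 data transform: X @ diag(w)^{-1}
en = ElasticNet(alpha=0.1, l1_ratio=0.9, fit_intercept=False, max_iter=10_000)
en.fit(X_tilde, y)                # non-weighted EN on the transformed data

beta_wen = en.coef_ / w           # step 2: map back by element-wise division
```

The same two-step trick carries over to the complex-valued case once a complex-valued EN solver (such as the c-PW-WEN algorithm developed below) is available.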
Yet, the standard (i.e., non-weighted, $\mathbf{w} = \mathbf{1}$) EN estimator may perform inconsistent variable selection. The EN solution depends largely on $\lambda$ (and $\alpha$), and tuning these parameters optimally is a difficult problem. The adaptive LASSO20 obtains the oracle variable selection property by using cleverly chosen adaptive weights for the regression coefficients in the $\ell_1$-penalty. We extend this idea to the WEN-penalty, coupling it with an active set approach, where WEN is applied only to the nonzero (active) coefficients. Our proposed adaptive EN uses data-dependent weights, defined as

$$w_j = \frac{1}{|\hat{\beta}_{\mathrm{init},j}|}, \quad j \in \mathrm{supp}(\hat{\boldsymbol{\beta}}_{\mathrm{init}}). \qquad (4)$$

Above, $\hat{\boldsymbol{\beta}}_{\mathrm{init}}$ denotes a sparse initial estimator of $\boldsymbol{\beta}$. Moreover, a larger weight means that the corresponding variable is penalized more heavily. The weight vector on the active set can be written compactly as

$$\mathbf{w} = \mathbf{1} \oslash |\hat{\boldsymbol{\beta}}_{\mathrm{init}}|,$$

where the notation $|\mathbf{a}|$ means element-wise application of the absolute value operator on the vector, i.e., $|\mathbf{a}| = (|a_1|, \ldots, |a_p|)^\top$. The idea is that only the nonzero coefficients are exploited, i.e., basis vectors with $\hat{\beta}_{\mathrm{init},j} = 0$ are omitted from the model, and thus the dimensionality of the linear model is reduced from p to $\|\hat{\boldsymbol{\beta}}_{\mathrm{init}}\|_0$.
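In code, forming the adaptive weights of Eq. (4) on the active set is a one-liner; the toy initial estimate below is ours.

```python
import numpy as np

beta_init = np.array([0.0, 1.2 - 0.5j, 0.0, -0.3j])  # toy sparse initial estimate
active = np.flatnonzero(beta_init)                    # basis vectors kept in the model
w = 1.0 / np.abs(beta_init[active])                   # Eq. (4) on the active set
```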
III. COMPLEX-VALUED LARS METHOD FOR WEIGHTED LASSO
In this section, we develop the c-LARS-WLASSO algorithm, which is a complex-valued extension of the LARS algorithm17 for the weighted LASSO framework. This is then used to construct the complex-valued pathwise (c-PW-)WEN algorithm. These methods compute the solution of Eq. (3) at particular penalty parameter values, called knots, at which a new variable enters (or leaves) the active set of nonzero coefficients. Our c-PW-WEN exploits the c-LARS-WLASSO as its core computational engine.
Let $\hat{\boldsymbol{\beta}}(\lambda)$ denote a solution to Eq. (3) for some fixed value of the penalty parameter $\lambda$ in the case that the LASSO penalty is used (α = 1) with unit weights (i.e., $\mathbf{w} = \mathbf{1}$). Also recall that the predictors are normalized so that $\|\mathbf{x}_j\|_2 = 1$. Then note that the solution needs to verify the generalized Karush-Kuhn-Tucker conditions. That is, $\hat{\boldsymbol{\beta}}(\lambda)$ is a solution to Eq. (3) if and only if it verifies the zero sub-gradient equations given by

$$\mathbf{x}_j^{\mathsf H}\big(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}(\lambda)\big) = \lambda\,\hat{s}_j, \quad j = 1, \ldots, p, \qquad (6)$$

where $\hat{s}_j \in \partial|\hat{\beta}_j|$, meaning that $\hat{s}_j = \hat{\beta}_j/|\hat{\beta}_j|$ if $\hat{\beta}_j \neq 0$ and $\hat{s}_j$ is some number inside the unit circle, $|\hat{s}_j| \le 1$, otherwise. Taking absolute values of both sides of Eq. (6), one notices that at the solution, the condition $|\mathbf{x}_j^{\mathsf H}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})| = \lambda$ holds for the active predictors, whereas $|\mathbf{x}_j^{\mathsf H}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})| < \lambda$ holds for the non-active predictors. Thus, as $\lambda$ decreases and more predictors join the active set, the active predictors become less correlated with the residual. Moreover, the absolute value of the correlation, or equivalently the angle, between any active predictor and the residual is the same. In the real-valued case, the LARS method exploits this feature and the piecewise linearity of the LASSO path to compute the knot values, i.e., the values of the penalty parameter where there is a change in the active set of predictors.
Let us briefly recall the main principle of the LARS algorithm. LARS starts with a model having no variables (so $\hat{\boldsymbol{\beta}} = \mathbf{0}$) and picks the predictor that has maximal correlation (i.e., smallest angle) with the residual $\mathbf{r} = \mathbf{y}$. Suppose the predictor $\mathbf{x}_{j_1}$ is chosen. Then, the magnitude of the coefficient of the selected predictor is increased (toward its least-squares value) until one has reached a step size such that another predictor (say, predictor $\mathbf{x}_{j_2}$) has the same absolute value of correlation with the evolving residual, i.e., the updated residual makes equal angles with both predictors as shown in Fig. 2. Thereafter, LARS moves in the new direction, which keeps the evolving residual equally correlated (i.e., equiangular) with the selected predictors, until another predictor becomes equally correlated with the residual. This process is repeated until all predictors are in the model or until a specified sparsity level is reached.
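The real-valued LARS machinery is readily available, and the snippet below (using scikit-learn's lars_path on our own example data) illustrates the notions of knots and of predictors entering the active set one at a time; the c-LARS-WLASSO of this section extends exactly this behavior to complex-valued, weighted problems.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(1)
n, p = 30, 10
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)              # unit-norm predictors, as assumed above
beta = np.zeros(p)
beta[[0, 4, 7]] = [3.0, -2.0, 1.0]
y = X @ beta + 0.1 * rng.standard_normal(n)

# 'alphas' are the knots: penalty values at which the active set changes;
# 'active' lists the predictors in their order of entry.
alphas, active, coefs = lars_path(X, y, method="lasso")
print("first knots:", np.round(alphas[:4], 3))
print("entry order:", active[:4])
```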
(Color online) Starting from all zeros, LARS picks the predictor that makes the least angle with the residual and moves in its direction until another predictor becomes equally correlated, at which point LARS changes direction. LARS then repeats this procedure, selecting the next equicorrelated predictor, and so on until some stopping criterion is met.
We first consider the weighted LASSO (WLASSO) problem (so α = 1) with weights $\mathbf{w}$, and we write $\hat{\boldsymbol{\beta}}(\lambda)$ for the solution of the optimization problem (3) in this case. Let $\lambda_0$ denote the smallest value of $\lambda$ such that all coefficients of the WLASSO solution are zero, i.e., $\hat{\boldsymbol{\beta}}(\lambda_0) = \mathbf{0}$. It is easy to see that $\hat{\boldsymbol{\beta}}(\lambda) = \mathbf{0}$ for $\lambda \ge \lambda_0$.15 Let $\mathcal{A}(\lambda) = \mathrm{supp}\{\hat{\boldsymbol{\beta}}(\lambda)\}$ denote the active set at the regularization parameter value $\lambda$. The knots $\lambda_0 > \lambda_1 > \cdots > \lambda_K$ are defined as the smallest values of the penalty parameter after which there is a change in the set of active predictors, i.e., the order of sparsity changes. The active set at a knot λk is denoted by $\mathcal{A}_k$. The first active set thus contains a single index, $\mathcal{A}_1 = \{j_1\}$, where j1 is the predictor that becomes active first, i.e., $j_1 = \arg\max_j |\mathbf{x}_j^{\mathsf H}\mathbf{y}|/w_j$, which follows from the weighted equicorrelation condition

$$\big|\mathbf{x}_j^{\mathsf H}\big(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}(\lambda)\big)\big| = \lambda w_j \;\;\forall j \in \mathcal{A}(\lambda), \qquad \big|\mathbf{x}_j^{\mathsf H}\big(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}(\lambda)\big)\big| \le \lambda w_j \;\;\forall j \notin \mathcal{A}(\lambda). \qquad (7)$$

By definition of the knots, one has that $\lambda_k < \lambda_{k-1}$ and $\mathcal{A}(\lambda) = \mathcal{A}_k$ for all $\lambda \in [\lambda_k, \lambda_{k-1})$.
The c-LARS-WLASSO outlined in algorithm 1 is a straightforward generalization of the LARS-LASSO algorithm to the complex-valued and weighted case. It does not carry the same theoretical guarantees as its real-valued counterpart for recovering the exact values of the knots. Namely, the LARS algorithm uses the property that (in the real-valued case) the LASSO regularization path is continuous and piecewise linear with respect to λ; see Refs. 17, 22–24. In the complex-valued case, however, the solution path between the knots is not necessarily linear.25 Hence, c-LARS-WLASSO may not give the precise values of the knots in all cases. However, simulations validate that the algorithm finds the knots with reasonable precision. Future work is needed to provide theoretical guarantees that the algorithm finds the knots.
Algorithm 1: c-LARS-WLASSO algorithm

input: $\mathbf{y}$, $\mathbf{X}$, $\mathbf{w}$, and K
output: the knots $\lambda_k$ and the solutions $\hat{\boldsymbol{\beta}}(\lambda_k)$, $k = 0, 1, \ldots, K$
initialize: $\hat{\boldsymbol{\beta}}(\lambda_0) = \mathbf{0}$ and the residual $\mathbf{r}_0 = \mathbf{y}$. Set $k = 1$.
1 Compute $\lambda_0 = \max_j |c_j|$ and $j_1 = \arg\max_j |c_j|$, where $c_j = \mathbf{x}_j^{\mathsf H}\mathbf{r}_0 / w_j$. Set $\mathcal{A}_1 = \{j_1\}$.
for $k = 1, 2, \ldots, K$ do
2 Find the active set $\mathcal{A}_k$ and its least-squares direction $\hat{\boldsymbol{\delta}}$ with $\hat{\boldsymbol{\delta}}_{\mathcal{A}_k} = \mathbf{X}_{\mathcal{A}_k}^{+}\mathbf{y}$, to have $\mathbf{x}_j^{\mathsf H}\big(\mathbf{y} - \mathbf{X}_{\mathcal{A}_k}\hat{\boldsymbol{\delta}}_{\mathcal{A}_k}\big) = 0$, $\forall j \in \mathcal{A}_k$.
3 Define the vector $\boldsymbol{\beta}(\gamma) = (1-\gamma)\,\hat{\boldsymbol{\beta}}(\lambda_{k-1}) + \gamma\,\hat{\boldsymbol{\delta}}$ for $\gamma \in [0, 1]$ and the corresponding residual as $\mathbf{r}(\gamma) = \mathbf{y} - \mathbf{X}\boldsymbol{\beta}(\gamma)$.
4 The knot λk is the largest λ-value, $\lambda < \lambda_{k-1}$, subject to (7), at which a new predictor (at index $j_{k+1} \notin \mathcal{A}_k$) becomes active, thus verifying $|\mathbf{x}_{j_{k+1}}^{\mathsf H}\mathbf{r}(\gamma)|/w_{j_{k+1}} = (1-\gamma)\lambda_{k-1}$ from Eq. (7).
5 Update the values at the knot λk: $\hat{\gamma} = 1 - \lambda_k/\lambda_{k-1}$. The LASSO solution is $\hat{\boldsymbol{\beta}}(\lambda_k) = \boldsymbol{\beta}(\hat{\gamma})$.
6 Update the active set: $\mathcal{A}_{k+1} = \mathcal{A}_k \cup \{j_{k+1}\}$.
end
Below we discuss how to solve the knot λk and the index $j_{k+1}$ in step 4 of the c-LARS-WLASSO algorithm.

Solving step 4: First we note that the weighted correlations evolve affinely in $\gamma$ along the direction of step 3,

$$c_j(\gamma) = \frac{\mathbf{x}_j^{\mathsf H}\mathbf{r}(\gamma)}{w_j} = b_j - \gamma\, a_j, \qquad (8)$$

where we have written $b_j = \mathbf{x}_j^{\mathsf H}\mathbf{r}(0)/w_j$ and $a_j = b_j - \mathbf{x}_j^{\mathsf H}\mathbf{r}(1)/w_j$. First we need to find λ for each $j \notin \mathcal{A}_k$ such that the equality in Eq. (7) holds. Due to Eqs. (7) and (8), this means finding $\lambda$ such that

$$|c_j(\gamma)| = \lambda. \qquad (9)$$

Let us reparametrize such that $\lambda = \lambda(\gamma) = (1-\gamma)\lambda_{k-1}$. Then identifying $\lambda$ is equivalent to identifying the auxiliary variable $\gamma \in [0, 1]$. Now Eq. (9) becomes

$$|b_j - \gamma\, a_j| = (1-\gamma)\,\lambda_{k-1}.$$

The last equation implies that $\gamma$ can be found by solving the roots of the second-order polynomial equation $u_j\gamma^2 + v_j\gamma + s_j = 0$, where $u_j = |a_j|^2 - \lambda_{k-1}^2$, $v_j = 2\big\{\lambda_{k-1}^2 - \mathrm{Re}(a_j^{*}b_j)\big\}$, and $s_j = |b_j|^2 - \lambda_{k-1}^2$. The roots are $\gamma_j^{\pm} = \big(-v_j \pm \sqrt{v_j^2 - 4u_j s_j}\big)/(2u_j)$ and the final value will be

$$\hat{\gamma}_j = \min\{\gamma_j^{+}, \gamma_j^{-}\} \;\; \text{subject to} \;\; \gamma_j^{\pm} \in [0, 1],$$

where $\hat{\gamma}_j = \infty$ if neither root is in $[0, 1]$, for $j \notin \mathcal{A}_k$. Thus, finding the largest λ that verifies Eq. (7) is equivalent to finding the smallest non-negative $\hat{\gamma}_j$. Hence, the variable that enters the active set is $j_{k+1} = \arg\min_{j \notin \mathcal{A}_k} \hat{\gamma}_j$ and the knot is thus $\lambda_k = (1 - \hat{\gamma}_{j_{k+1}})\,\lambda_{k-1}$. Thereafter, the solution at the knot λk is simple to find in step 5.
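For illustration, a direct NumPy transcription of this root-finding step might look as follows; the function name and the handling of degenerate roots are ours, under the reconstruction given above.

```python
import numpy as np

def next_knot(b, a, lam_prev):
    """Step 4 of c-LARS-WLASSO: given b_j = x_j^H r(0)/w_j and
    a_j = b_j - x_j^H r(1)/w_j for the non-active predictors, return the
    entering index, the step gamma, and the new knot lambda_k."""
    u = np.abs(a) ** 2 - lam_prev ** 2
    v = 2.0 * (lam_prev ** 2 - (np.conj(a) * b).real)
    s = np.abs(b) ** 2 - lam_prev ** 2

    gammas = np.full(b.shape, np.inf)
    disc = v ** 2 - 4.0 * u * s
    ok = (disc >= 0) & (u != 0)                   # keep well-defined real roots
    roots = np.stack([(-v[ok] + np.sqrt(disc[ok])) / (2 * u[ok]),
                      (-v[ok] - np.sqrt(disc[ok])) / (2 * u[ok])])
    roots[(roots < 0) | (roots > 1)] = np.inf     # admissible range: gamma in [0, 1]
    gammas[ok] = roots.min(axis=0)

    j = int(np.argmin(gammas))                    # smallest step wins ...
    return j, gammas[j], (1.0 - gammas[j]) * lam_prev   # ... and sets the knot
```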
IV. COMPLEX-VALUED PATHWISE WEIGHTED ELASTIC NET
Next we develop a complex-valued and weighted version of the PW-LARS-EN algorithm proposed in Ref. 18, which we refer to as the c-PW-WEN algorithm. The generalization to the complex-valued case is straightforward; the essential difference is that the c-LARS-WLASSO algorithm 1 is used instead of the (real-valued) LARS-LASSO algorithm. The algorithm finds the Kth knot λK and the corresponding WEN solutions over a dense grid of EN tuning parameter values α and then picks the final solution for the best α-value.
Let $\lambda_0(\alpha)$ denote the smallest value of λ such that all coefficients in the WEN estimate are zero, i.e., $\hat{\boldsymbol{\beta}}(\lambda_0(\alpha); \alpha) = \mathbf{0}$. The value of $\lambda_0(\alpha)$ can be expressed in closed-form,15

$$\lambda_0(\alpha) = \max_j \frac{|\mathbf{x}_j^{\mathsf H}\mathbf{y}|}{\alpha\, w_j}.$$

The c-PW-WEN algorithm computes the K-sparse WEN solutions for a set of α values in a dense grid

$$[\alpha] = \{\alpha_i \in (0, 1] : \alpha_1 = 1 > \alpha_2 > \cdots > \alpha_m > 0\}. \qquad (10)$$
Let $\mathcal{A}_\alpha(\lambda)$ denote the active set (i.e., the nonzero elements of the WEN solution) for a given fixed regularization parameter value λ and a given α value in the grid $[\alpha]$. The knots $\lambda_k(\alpha)$ are the border values of the regularization parameter after which there is a change in the set of active predictors. Since α is fixed, we drop the dependency of the penalty parameter on α and simply write λ or λK instead of $\lambda(\alpha)$ or $\lambda_K(\alpha)$. The reader should, however, keep in mind that the values of the knots are different for any given α. The active set at a knot λk is then denoted shortly by $\mathcal{A}_k$. Note that $\mathcal{A}_k \subseteq \mathcal{A}_{k+1}$ for all $k < K$. Note that it is assumed that a non-zero coefficient does not leave the active set for any value $\lambda \in [\lambda_K, \lambda_0]$, that is, the sparsity level is increasing from 0 (at λ0) to K (at λK).
To extract only the kth knot (and the corresponding solution) from the sequence of knot-solution pairs found by the c-LARS-WLASSO algorithm, we write

$$\big(\lambda_k, \hat{\boldsymbol{\beta}}(\lambda_k)\big) = \text{c-LARS-WLASSO}_k(\mathbf{y}, \mathbf{X}, \mathbf{w}, K).$$

Next note that we can write the EN objective function in augmented form as follows:

$$\tfrac{1}{2}\|\tilde{\mathbf{y}} - \tilde{\mathbf{X}}\boldsymbol{\beta}\|_2^2 + \eta\,\|\mathbf{w} \circ \boldsymbol{\beta}\|_1, \qquad (11)$$

where

$$\eta = \alpha\lambda \quad \text{and} \quad \lambda_2 = (1-\alpha)\lambda$$

are new parameterizations of the tuning and shrinkage parameter pair $(\lambda, \alpha)$, and

$$\tilde{\mathbf{y}} = \begin{pmatrix} \mathbf{y} \\ \mathbf{0}_p \end{pmatrix}, \qquad \tilde{\mathbf{X}} = \begin{pmatrix} \mathbf{X} \\ \sqrt{\lambda_2}\,\mathrm{diag}(\mathbf{w}) \end{pmatrix}$$

are the augmented forms of the response vector $\mathbf{y}$ and the predictor matrix $\mathbf{X}$, respectively. Note that Eq. (11) resembles the (weighted) LASSO objective function with penalty parameter η and that $\tilde{\mathbf{X}}$ is an $(n+p) \times p$ matrix.
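The identity behind Eq. (11) is easy to verify numerically; the following self-contained check (in our notation, with random data) confirms that the augmented weighted-LASSO objective equals the original WEN objective for an arbitrary coefficient vector.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 15, 25
X = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
beta = rng.standard_normal(p) + 1j * rng.standard_normal(p)
w = rng.uniform(0.5, 2.0, p)

lam, alpha = 0.7, 0.6
eta, lam2 = alpha * lam, (1.0 - alpha) * lam   # new parameterization (eta, lam2)

# original weighted EN objective
f_en = (0.5 * np.sum(np.abs(y - X @ beta) ** 2)
        + eta * np.sum(w * np.abs(beta))
        + 0.5 * lam2 * np.sum((w * np.abs(beta)) ** 2))

# augmented (n + p) x p weighted-LASSO objective, Eq. (11)
X_aug = np.vstack([X, np.sqrt(lam2) * np.diag(w)])
y_aug = np.concatenate([y, np.zeros(p)])
f_lasso = (0.5 * np.sum(np.abs(y_aug - X_aug @ beta) ** 2)
           + eta * np.sum(w * np.abs(beta)))

assert np.isclose(f_en, f_lasso)
```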
It means that we can compute the K-sparse WEN solution at the knot λK for fixed α using the c-LARS-WLASSO algorithm on the augmented data. Our c-PW-WEN method is given in algorithm 2. It computes the WEN solutions at the knots over the dense grid (10) of α values. After step 6 of algorithm 2, we have the solution $\hat{\boldsymbol{\beta}}(\lambda_K; \alpha)$ at the knot λK for a given α value on the grid. Having the solution available, we then, in steps 7–9, compute the residual sum of squares (RSS) of the debiased WEN solution at the knot (having K nonzeros),

$$\mathrm{RSS}(\alpha) = \big\|\mathbf{y} - \mathbf{X}_{\mathcal{A}}\,\hat{\boldsymbol{\beta}}_{\mathrm{LS}}(\mathcal{A})\big\|_2^2,$$

where $\mathcal{A} = \mathrm{supp}\{\hat{\boldsymbol{\beta}}(\lambda_K; \alpha)\}$ is the active set at the knot and $\hat{\boldsymbol{\beta}}_{\mathrm{LS}}(\mathcal{A})$ is the debiased LSE, defined as

$$\hat{\boldsymbol{\beta}}_{\mathrm{LS}}(\mathcal{A}) = \mathbf{X}_{\mathcal{A}}^{+}\mathbf{y},$$

where $\mathbf{X}_{\mathcal{A}}$ consists of the K active columns of $\mathbf{X}$ associated with the active set $\mathcal{A}$ and $\mathbf{X}_{\mathcal{A}}^{+}$ denotes its Moore-Penrose pseudo-inverse.

While sweeping through the grid of α values and computing the WEN solutions, we choose as our best candidate the WEN estimate that has the smallest RSS value, i.e., $\hat{\boldsymbol{\beta}} = \hat{\boldsymbol{\beta}}(\lambda_K; \hat{\alpha})$, where $\hat{\alpha} = \arg\min_{\alpha \in [\alpha]} \mathrm{RSS}(\alpha)$.
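Steps 7–9 amount to an LS refit on the active set followed by an RSS comparison; a compact sketch (with our own function name) is given below.

```python
import numpy as np

def debiased_rss(y, X, support):
    """Debiased LS refit on the given support and its residual sum of squares."""
    X_A = X[:, list(support)]
    beta_ls = np.linalg.pinv(X_A) @ y        # Moore-Penrose LS refit (debiasing)
    r = y - X_A @ beta_ls
    return np.real(np.vdot(r, r)), beta_ls

# Sweeping the alpha grid then reduces to keeping the K-sparse WEN solution
# whose debiased RSS is smallest.
```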
V. SEQUENTIALLY ADAPTIVE ELASTIC NET
Next we turn our attention to how to choose the adaptive (i.e., data-dependent) weights in c-PW-WEN. In the adaptive LASSO,20 one ideally uses the LSE or, if $p > n$, the LASSO as an initial estimator to construct the weights given in Eq. (4). The problem is that both the LSE and the LASSO estimator have very poor accuracy (high variance) when there exist high correlations between predictors, which is the condition we are concerned with in this paper. This lowers the probability of exact recovery of the adaptive LASSO significantly.
To overcome the problem above, we devise a sequential adaptive elastic net (SAEN) algorithm that obtains the K-sparse solution in a sequential manner, decreasing the sparsity level of the solution at each iteration and using the previous solution to form the adaptive weights for c-PW-WEN. The SAEN is described in algorithm 3; it runs the c-PW-WEN algorithm three times, as sketched after this paragraph. In the first step, it finds a standard (unit weight) WEN solution with 3K nonzero (active) coefficients, which we refer to as the initial EN solution. The obtained solution determines the adaptive weights via Eq. (4) (and hence the active set of nonzero coefficients), which is used in the second step to compute the WEN solution that has 2K nonzero coefficients. This again determines the adaptive weights via Eq. (4) (and hence the active set of nonzero coefficients), which is used in the third step to compute the final WEN solution that has the desired K nonzero coefficients. It is important to notice that since we start from a solution with 3K nonzeros, it is quite likely that the true K non-zero coefficients will be included in the active set of the initial EN solution computed in the first step of the SAEN algorithm. Note that the choice of 3K is similar to the CoSaMP algorithm,29 which also uses 3K as an initial support size. Using 3K also usually guarantees that $\mathbf{X}_{\mathcal{A}}$ is well conditioned, which may not be the case if a value larger than 3K is chosen.
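In pseudocode form, SAEN is a short loop around the c-PW-WEN solver. The sketch below assumes a callable c_pw_wen(y, X, w, k) that returns a k-sparse WEN estimate; this callable is a stand-in for algorithm 2, not its actual interface.

```python
import numpy as np

def saen(y, X, K, c_pw_wen):
    """Three-stage SAEN sketch: sparsity 3K -> 2K -> K, with the weights of
    each stage taken from the previous stage's solution via Eq. (4)."""
    p = X.shape[1]
    beta = c_pw_wen(y, X, np.ones(p), 3 * K)       # stage 1: unit weights, 3K nonzeros
    for k in (2 * K, K):                           # stages 2 and 3
        active = np.flatnonzero(beta)
        w = 1.0 / np.abs(beta[active])             # adaptive weights on the active set
        beta_active = c_pw_wen(y, X[:, active], w, k)
        beta = np.zeros(p, dtype=complex)
        beta[active] = beta_active                 # re-embed into the full model
    return beta
```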
VI. SINGLE-SNAPSHOT COMPRESSIVE BEAMFORMING
Estimating the source location, in terms of its DoA, plays an important role in many applications. In Ref. 5, it was observed that CS algorithms can be applied for DoA estimation (e.g., of sound sources) using sensor arrays when the array output can be expressed as a sparse (underdetermined) linear model by discretizing the DoA parameter space. This approach is referred to as compressive beamforming (CBF), and it has been subsequently used in a series of papers (e.g., Refs. 7–13, 26).
In CBF, after finding the sparse regression estimate, its support can be mapped to the DoA estimates on the grid. Thus, the DoA estimates in CBF are always selected from the resulting finite set of discretized DoA parameters. Hence, the resolution of CBF depends on the density of the grid (the spacing or grid size p). A denser grid implies a larger mutual coherence of the basis vectors (here equal to the steering vectors for the DoAs on the grid) and thus a poor recovery region for most sparse regression techniques.
The proposed SAEN estimator can effectively mitigate the effect of high mutual coherence caused by discretization of the DoA space with significantly better performance than state-of-the-art compressed sensing algorithms. This is illustrated in Sec. VII via extensive simulation studies using challenging multi-source set-ups of closely spaced sources and large variation of source powers.
We assume narrowband processing and a far-field source wave impinging on an array of sensors with known configuration. The sources are assumed to be located in the far-field of the sensor array (i.e., propagation radius ≫ array size). A uniform linear array (ULA) of n sensors (e.g., hydrophones or microphones) is used for estimating the DoA of the source with respect to the array axis. The array response (steering or wavefront vector) of the ULA for a source from DoA $\theta \in [-\pi/2, \pi/2)$ (in radians) is given by

$$\mathbf{a}(\theta) = \frac{1}{\sqrt{n}}\big(1,\; e^{\mathrm{j}\pi\sin\theta},\; \ldots,\; e^{\mathrm{j}\pi(n-1)\sin\theta}\big)^{\top},$$

where we assume half a wavelength inter-element spacing between sensors. We consider the case where K sources from distinct DoAs arrive at the array at some time instant t. A single snapshot obtained by the ULA can then be modeled as27

$$\mathbf{y}(t) = \mathbf{A}(\boldsymbol{\theta})\,\mathbf{s}(t) + \boldsymbol{\varepsilon}(t), \qquad (14)$$

where $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_K)^{\top}$ collects the DoAs, the matrix $\mathbf{A}(\boldsymbol{\theta}) = \big(\mathbf{a}(\theta_1)\;\cdots\;\mathbf{a}(\theta_K)\big) \in \mathbb{C}^{n \times K}$ is the dictionary of replicas, also known as the array steering matrix, $\mathbf{s}(t) \in \mathbb{C}^K$ contains the source waveforms, and $\boldsymbol{\varepsilon}(t)$ is complex noise at time instant t.
Consider an angular grid of size p (commonly p ≫ n) of look directions of interest,

$$[\boldsymbol{\vartheta}] = \{\vartheta_i : -\pi/2 \le \vartheta_1 < \cdots < \vartheta_p < \pi/2\}.$$

Let the ith column of the measurement matrix $\mathbf{X}$ in the model (1) be the array response for look direction $\vartheta_i$, so $\mathbf{x}_i = \mathbf{a}(\vartheta_i)$. Then, if the true source DoAs are contained in the angular grid, i.e., $\theta_k \in [\boldsymbol{\vartheta}]$ for $k = 1, \ldots, K$, the snapshot in Eq. (14) (where we drop the time index t) can be equivalently modeled by Eq. (1) as

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon},$$

where $\boldsymbol{\beta}$ is exactly K-sparse ($\|\boldsymbol{\beta}\|_0 = K$) and the nonzero elements of $\boldsymbol{\beta}$ maintain the source waveforms $\mathbf{s}$. Thus, identifying the true DoAs is equivalent to identifying the nonzero elements of $\boldsymbol{\beta}$, which we refer to as the CBF-principle. Hence, sparse regression and CS methods can be utilised for estimating the DoAs based on a single snapshot only. We assume that the number of sources K is known a priori.
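The CBF measurement model is straightforward to simulate; the sketch below builds the steering matrix for a 1° grid and draws one snapshot for a two-source scenario resembling set-up 2 of Sec. VII (the array size, SNR handling, and index arithmetic here are our own illustrative choices).

```python
import numpy as np

def ula_steering(theta, n):
    """Unit-norm ULA response for DoA theta (radians), half-wavelength spacing."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

rng = np.random.default_rng(3)
n, K = 40, 2
grid = np.deg2rad(np.arange(-90.0, 90.0, 1.0))      # 1-degree grid, p = 180
X = np.column_stack([ula_steering(t, n) for t in grid])

doas_deg = np.array([-6.0, 2.0])                    # two on-grid DoAs
idx = [int(np.argmin(np.abs(np.rad2deg(grid) - d))) for d in doas_deg]
s = np.sqrt([0.9, 1.0]) * np.exp(1j * rng.uniform(0, 2 * np.pi, K))

snr_db = 20.0
sigma2 = np.mean(np.abs(s) ** 2) * 10 ** (-snr_db / 10)   # noise variance from SNR
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = X[:, idx] @ s + noise                           # single-snapshot measurement
```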
Besides the SNR, also the size p, or the spacing, of the grid greatly affects the performance of CBF methods. The cross-correlation (coherence) between the true steering vector and the steering vectors on the grid depends on both the grid spacing and the obliqueness of the target DoAs with reference to the array. Moreover, the values of the cross-correlations in the Gram matrix $\mathbf{X}^{\mathsf H}\mathbf{X}$ also depend on the distance between array elements and the configuration of the sensor array.7 Let us construct a measure, called the maximal basis coherence (MBC), defined as the maximum absolute value of the cross-correlations among the true steering vectors and the basis,

$$\mathrm{MBC} = \max_{k \in \{1, \ldots, K\}} \;\; \max_{i\,:\,\vartheta_i \neq \theta_k} \big|\langle \mathbf{a}(\theta_k), \mathbf{x}_i \rangle\big|. \qquad (15)$$

Note that the steering vectors $\mathbf{a}(\theta)$, $\theta \in [-\pi/2, \pi/2)$, are assumed to be normalized ($\|\mathbf{a}(\theta)\|_2 = 1$). The MBC value measures the obliqueness of the incoming DoA to the array: the higher the MBC value, the more difficult it is for any CBF method to distinguish the true DoA in the grid. Note that the value of the MBC also depends on the grid spacing.
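Continuing the previous sketch, the MBC of Eq. (15) can be computed directly from the steering vectors; with n = 40 and a 1° grid this reproduces approximately the 0.814 value reported for set-up 2 in Table II.

```python
def mbc(doas_rad, grid, n):
    """Maximal absolute cross-correlation between the true steering vectors
    and the other grid steering vectors, Eq. (15)."""
    A = np.column_stack([ula_steering(t, n) for t in grid])
    vals = []
    for t in doas_rad:
        a = ula_steering(t, n)
        keep = ~np.isclose(grid, t)             # exclude the DoA's own grid point
        vals.append(np.max(np.abs(a.conj() @ A[:, keep])))
    return max(vals)

print(mbc(np.deg2rad([-6.0, 2.0]), grid, n))    # approx 0.814, cf. Table II
```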
Figure 3 shows the geometry of the DoA estimation problem, where the target DoAs have varying basis coherence, which increases with the level of obliqueness (inclination) on either side of the normal to the ULA axis. We say a DoA is straight when the angle of incidence of the target is in the shaded sector in Fig. 3; this is the region where the MBC has lower values. In contrast, a DoA is oblique when the angle of incidence of the target is oblique with respect to the array axis.
(Color online) The straight and oblique DoAs exhibit different basis coherence.
Consider a ULA with n = 40 elements receiving two sources at straight DoAs, θ1 = −6° and θ2 = 2°, or at oblique DoAs, θ1 = 44° and θ2 = 52°. These scenarios correspond to set-up 2 and set-up 3 of Sec. VII; see also Table II. The angular separation between the DoAs is 8° in both scenarios. In Table I we compute the correlation between the true steering vectors and between each true steering vector and a neighboring steering vector in the grid.
Correlation between the true steering vectors at DoAs θ1 and θ2 and a steering vector at angle ϑ on the grid in the two-source scenario set-ups with either straight or oblique DoAs.

| | θ1 | θ2 | ϑ | correlation |
|---|---|---|---|---|
| True straight DoAs | −6° | 2° | | 0.071 |
| | −6° | | grid point adjacent to θ1 | 0.814 |
| | | 2° | grid point adjacent to θ2 | 0.812 |
| True oblique DoAs | 44° | 52° | | 0.069 |
| | 44° | | grid point adjacent to θ1 | 0.901 |
| | | 52° | grid point adjacent to θ2 | 0.927 |
These values validate the fact highlighted in Fig. 3. Namely, a target with an oblique DoA with reference to the array has a larger maximal correlation (coherence) with the basis steering vectors. This makes it difficult for a sparse recovery method to distinguish the true steering vector from a spurious steering vector that simply has a very large correlation with the true one. Due to this mutual coherence, it may happen that neither of the true DoAs θ1 and θ2 is assigned a non-zero coefficient value in the estimate, or perhaps just one of them, in random fashion.
VII. SIMULATION STUDIES
We consider seven simulation set-ups. The first five set-ups use grid spacing Δϑ = 1° (leading to p = 180 look directions in the grid) and the last two employ grid spacing Δϑ = 2° (leading to p = 90 look directions in the grid). The number of sensors in the ULA is n = 40 for set-ups 1–5 and n = 30 for set-ups 6–7. Each set-up has K sources at different (straight or oblique) DoAs, and the source waveforms are generated as $s_k = |s_k|\,e^{\mathrm{j}\phi_k}$, where the source powers $|s_k|^2$ are fixed for each set-up but the source phases are randomly generated for each Monte-Carlo trial as $\phi_k \sim \mathrm{Unif}(0, 2\pi)$, for $k = 1, \ldots, K$. Table II specifies the values of the DoAs and the powers of the sources used in the set-ups. Also the MBC values (15) are reported for each case.
Details of all the set-ups tested in this paper. The first five set-ups have grid spacing Δϑ = 1° and the last two Δϑ = 2°.

| Set-up | Source powers | DoAs θ (°) | MBC |
|---|---|---|---|
| 1 | [0.9, 1, 1] | [−5, 3, 6] | 0.814 |
| 2 | [0.9, 1] | [−6, 2] | 0.814 |
| 3 | [0.9, 1] | [44, 52] | 0.927 |
| 4 | [0.8, 0.7, 1] | [43, 44, 52] | 0.927 |
| 5 | [0.9, 0.1, 1, 0.4] | [−8.7, −3.8, −3.5, 9.7] | 0.990 |
| 6 | [0.8, 1, 0.9, 0.4] | −[48.5, 46.4, 31.5, 22] | 0.991 |
| 7 | [0.7, 1, 0.6, 0.7] | [6, 8, 14, 18] | 0.643 |
The error terms $\varepsilon_i$ are independent and identically distributed random variables generated from the $\mathcal{CN}(0, \sigma^2)$ distribution, where the noise variance σ² depends on the signal-to-noise ratio (SNR) level in decibels (dB), given by $\mathrm{SNR}\ \mathrm{(dB)} = 10\log_{10}\big(\bar{\sigma}_s^2/\sigma^2\big)$, where $\bar{\sigma}_s^2 = \frac{1}{K}\sum_{k=1}^{K}|s_k|^2$ denotes the average source power. An SNR level of 20 dB is used in this paper unless specified otherwise, and the number of Monte-Carlo trials is L = 1000.
In each set-up, we evaluate the performance of all methods in recovering exactly the true support and the source powers. Due to the high mutual coherence and the large differences in source powers, DoA estimation is now a challenging task. A key performance measure is the (empirical) probability of exact recovery (PER) of all K sources, defined as

$$\mathrm{PER} = \frac{1}{L}\sum_{\ell=1}^{L}\mathrm{I}\big(\hat{\mathcal{A}}^{(\ell)} = \mathcal{A}\big),$$

where $\mathcal{A} = \mathrm{supp}(\boldsymbol{\beta})$ denotes the index set of the true source DoAs on the grid and $\hat{\mathcal{A}}^{(\ell)} = \mathrm{supp}(\hat{\boldsymbol{\beta}}^{(\ell)})$ the support set found in the ℓth trial, where $|\mathcal{A}| = |\hat{\mathcal{A}}^{(\ell)}| = K$, and the average is over all Monte-Carlo trials. Above, $\mathrm{I}(\cdot)$ denotes the indicator function. We also compute the average root mean squared error (RMSE) of the debiased estimate $\hat{\mathbf{s}}$ of the source vector $\mathbf{s}$ as $\mathrm{RMSE} = \big(\frac{1}{L}\sum_{\ell=1}^{L}\|\hat{\mathbf{s}}^{(\ell)} - \mathbf{s}\|_2^2\big)^{1/2}$.
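The two performance measures are simple to evaluate from the Monte-Carlo outputs; the helper below (our naming, with the RMSE normalization as reconstructed above) is a minimal sketch.

```python
import numpy as np

def per_and_rmse(supports, estimates, support_true, s_true):
    """Empirical PER and RMSE over L Monte-Carlo trials.

    supports:  list of L index sets (estimated supports)
    estimates: list of L debiased source-vector estimates (aligned with s_true)
    """
    A = set(support_true)
    per = np.mean([set(Ahat) == A for Ahat in supports])
    rmse = np.sqrt(np.mean([np.sum(np.abs(shat - s_true) ** 2)
                            for shat in estimates]))
    return per, rmse
```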
A. Compared methods
This paper compares the SAEN approach to the existing well-known greedy methods, orthogonal matching pursuit (OMP)28 and compressive sampling matching pursuit (CoSaMP).29 Moreover, we also draw comparisons to two special cases of the c-PW-WEN algorithm: the LASSO estimate that has K non-zeros (i.e., $\hat{\boldsymbol{\beta}}(\lambda_K)$, computed by c-PW-WEN using $\mathbf{w} = \mathbf{1}$ and α = 1) and the EN estimator that cherry-picks the best α in the grid [i.e., $\hat{\alpha} = \arg\min_{\alpha \in [\alpha]} \mathrm{RSS}(\alpha)$].
It is instructive to compare the SAEN to simpler adaptive EN (AEN) approaches that simply use adaptive weights to penalize different coefficients differently, in the spirit of the adaptive LASSO.20 This helps in understanding the effectiveness of the cleverly chosen weights and the usefulness of the three-step procedure used by the SAEN algorithm. Recall that the first step in the AEN approach is to compute the weights using some initial solution. After obtaining the (adaptive) weights, the final K-sparse AEN solution is computed. We devise three AEN approaches, each one using a different initial solution to compute the weights:

- AEN-LSE uses the weights found as

$$\mathbf{w} = \mathbf{1} \oslash |\mathbf{X}^{+}\mathbf{y}|,$$

where $\mathbf{X}^{+}$ is the Moore-Penrose pseudo-inverse of $\mathbf{X}$.

- AEN-n employs weights from an initial n-sparse EN solution at the nth knot, which is found by the c-PW-WEN algorithm with $\mathbf{w} = \mathbf{1}$.

- AEN-3K instead uses weights calculated from an initial EN solution having 3K nonzeros as in step 1 of the SAEN algorithm, but the remaining two steps of the SAEN algorithm are omitted.
The upper bound for the PER rate of SAEN is the (empirical) probability that the initial 3K-sparse solution computed in step 1 of algorithm 3 contains the true support $\mathcal{A}$, i.e., the value

$$\mathrm{PER}_{\mathrm{UB}} = \frac{1}{L}\sum_{\ell=1}^{L}\mathrm{I}\big(\mathcal{A} \subseteq \mathrm{supp}\{\hat{\boldsymbol{\beta}}_{\mathrm{init}}^{(\ell)}\}\big), \qquad (16)$$

where the average is over all Monte Carlo trials. We also compute this upper bound to illustrate the ability of SAEN to pick the true K-sparse support from the original 3K-sparse initial value. For set-up 1 (cf. Table II), the average recovery results for all of the abovementioned methods are provided in Table III.
The recovery results for set-up 1, illustrating the effectiveness of the three-step SAEN approach compared to its competitors. The SNR level was 20 dB and the upper bound (16) for the PER rate of SAEN is given in parentheses.

| | SAEN | AEN-3K | AEN-n | AEN-LSE | EN | LASSO | OMP | CoSaMP |
|---|---|---|---|---|---|---|---|---|
| PER | (0.957) 0.864 | 0.689 | 0.616 | 0 | 0.332 | 0.332 | 0.477 | 0.140 |
| RMSE | 0.449 | 0.694 | 0.889 | 1.870 | 1.163 | 1.163 | 1.060 | 35.58 |
It can be noted that the proposed SAEN outperforms all the other methods and weighting schemes, and recovers the true support and powers of the sources effectively. Note that SAEN's upper bound for the PER rate was 95.7% and SAEN reached a PER rate of 86.4%. The results of the AEN approaches validate the need for an accurate initial estimate to construct the adaptive weights. For example, AEN-3K performs better than AEN-n, but much worse than the SAEN method.
B. Straight and oblique DoAs
Set-up 2 and set-up 3 correspond to the cases where the targets are at straight and oblique DoAs, respectively. Performance results of the sparse recovery algorithms are tabulated in Table IV. As can be seen, the upper bound for the PER rate of SAEN is a full 100%, which means that the true support is correctly included in the 3K-sparse solution computed at step 1 of the algorithm. For set-up 2 (straight DoAs), all methods have almost full PER rates except CoSaMP with a 67.8% rate. The performance of all estimators except SAEN changes drastically in set-up 3 (oblique DoAs). Here SAEN achieves a nearly perfect (97.8%) PER rate, a reduction of only about 2% compared to set-up 2. The other methods perform poorly. For example, the PER rate of LASSO drops from about 98% to 40%. Similar behavior is observed for EN, OMP, and CoSaMP.
Recovery results for set-ups 2–4. Note that for oblique DoAs (set-ups 3 and 4) the SAEN method outperforms the other methods, and it has perfect recovery results for set-up 2 (straight DoAs). The SNR level is 20 dB. The upper bound (16) for the PER rate of SAEN is given in parentheses.

Set-up 2 with two straight DoAs

| | SAEN | EN | LASSO | OMP | CoSaMP |
|---|---|---|---|---|---|
| PER | (1.000) 1.000 | 0.981 | 0.981 | 0.998 | 0.678 |
| RMSE | 0.126 | 0.145 | 0.145 | 0.128 | 1.436 |

Set-up 3 with two oblique DoAs

| | SAEN | EN | LASSO | OMP | CoSaMP |
|---|---|---|---|---|---|
| PER | (1.000) 0.978 | 0.399 | 0.399 | 0.613 | 0.113 |
| RMSE | 0.154 | 0.916 | 0.916 | 0.624 | 2.296 |

Set-up 4 with three oblique DoAs

| | SAEN | EN | LASSO | OMP | CoSaMP |
|---|---|---|---|---|---|
| PER | (0.776) 0.749 | 0.392 | 0.378 | 0 | 0 |
| RMSE | 0.505 | 0.838 | 0.827 | 1.087 | 5.290 |
Next we discuss the results for set-up 4, which is similar to set-up 3, except that we have introduced a third source that also arrives from an oblique DoA and the variation of the source powers is slightly larger. As can be noted from Table IV, the PER rates of the greedy algorithms, OMP and CoSaMP, have declined to 0%. This is very different from the PER rates they had in set-up 3, which contained only two sources. Indeed, the inclusion of a third source from a DoA similar to the other two completely ruined their accuracy. This is in deep contrast with the SAEN method, which still achieves a PER rate of about 75%, more than twice the PER rate achieved by LASSO. SAEN again has the lowest RMSE values.
In summary, the recovery results for set-ups 1–4 (which express different degrees of basis coherence, proximity of the target DoAs, as well as variation of the source powers) clearly illustrate that the proposed SAEN performs very well in identifying the true support and the powers of the sources, and always outperforms the commonly used benchmark sparse recovery methods, namely, the LASSO, EN, OMP, and CoSaMP, by a significant margin. It is also noteworthy that EN often achieved better PER rates than LASSO, which is mainly due to its group selection ability. As a specific example of this particular feature, Fig. 4 shows the solution paths of LASSO and EN for one particular Monte Carlo trial, where EN correctly chooses the true DoAs but LASSO fails to select all the correct DoAs; in this particular instance, the EN tuning parameter satisfied α < 1. This is the reason behind the success of our c-PW-WEN algorithm, which is the core computational engine of the SAEN.
(Color online) The LASSO and EN solution paths (upper panel) and the respective DoA solutions at the knot λ3. Observe that LASSO fails to recover the true support, whereas EN successfully picks the true DoAs. The EN tuning parameter satisfied α < 1 in this example.
C. Off-grid sources
Set-ups 5 and 6 explore the case when the target DoAs are off the grid. Also note that set-up 5 uses the finer grid spacing Δϑ = 1° compared to set-up 6 with Δϑ = 2°. Both set-ups contain four target sources that have largely varying source powers. In the off-grid case, one would like the CBF method to localize the targets to the nearest DoAs in the angular grid that is used to construct the array steering matrix. Therefore, in the off-grid case, the PER rate refers to the case where the CBF method selects the K-sparse support that corresponds to the DoAs on the grid that are closest in distance to the true DoAs. Table V provides the recovery results. As can be seen, the SAEN again performs very well, outperforming the LASSO and EN. Note that OMP and CoSaMP completely fail in selecting the nearest grid points.
Performance results of the CBF methods for set-ups 5 and 6, where the target DoAs are off the grid. Here the PER rate refers to the case where the CBF method selects the K-sparse support that corresponds to the DoAs on the grid that are closest to the true DoAs. The upper bound (16) for the PER rate of SAEN is given in parentheses. The SNR level is 20 dB.

Set-up 5 with four off-grid straight DoAs

| | SAEN | EN | LASSO | OMP | CoSaMP |
|---|---|---|---|---|---|
| PER | (0.999) 0.649 | 0.349 | 0.328 | 0 | 0 |
| RMSE | 0.899 | 0.947 | 0.943 | 1.137 | 89.09 |

Set-up 6 with four off-grid oblique DoAs

| | SAEN | EN | LASSO | OMP | CoSaMP |
|---|---|---|---|---|---|
| PER | (0.794) 0.683 | 0.336 | 0.336 | 0 | 0.005 |
| RMSE | 0.811 | 0.913 | 0.911 | 1.360 | 28919 |
D. More targets and varying SNR levels
Next we consider set-up 7 (cf. Table II), which contains K = 4 sources. The first three sources are at straight DoAs and the fourth one is at a DoA with modest obliqueness (θ4 = 18°). We now compute the PER rates of the methods as a function of the SNR. From the PER rates shown in Fig. 5 we again notice that SAEN clearly outperforms all of the other methods; the upper bound (16) of the PER rate of SAEN is also plotted. Both greedy algorithms, OMP and CoSaMP, perform very poorly even at high SNR levels. LASSO and EN attain better recovery results than the greedy algorithms. Again, EN performs better than LASSO due to the additional flexibility offered by the EN tuning parameter and its ability to cope with correlated steering (basis) vectors. SAEN recovers the exact true support in most of the cases due to its step-wise adaptation using cleverly chosen weights. Furthermore, the improvement in PER rates offered by SAEN becomes larger as the SNR level increases. One can also notice that SAEN is close to the theoretical upper bound of the PER rate in the higher SNR regime.
(Color online) PER rates of CBF methods at different SNR levels for set-up 7.
VIII. CONCLUSIONS
We developed the c-PW-WEN algorithm, which computes weighted elastic net solutions at the knots of the penalty parameter over a grid of EN tuning parameter values. c-PW-WEN also computes the weighted LASSO as a special case (i.e., the solution at α = 1), and the adaptive EN (AEN) is obtained when adaptive (data-dependent) weights are used. We then proposed a novel SAEN approach that uses the c-PW-WEN method as its core computational engine together with a three-step adaptive weighting scheme in which the sparsity is decreased from 3K to K in three steps. Simulations illustrated that SAEN performs better than the adaptive EN approaches. Furthermore, we illustrated that the 3K-sparse initial solution computed at step 1 of SAEN provides smart weights for the further steps and includes the true K-sparse support with high accuracy. The proposed SAEN algorithm then accurately retains the true support at each step.
Using the K-sparse LASSO solution computed directly from the LASSO path at the Kth knot fails to provide exact support recovery in many cases, especially when we have high basis coherence and lower SNR. Greedy algorithms often fail in the face of high mutual coherence (due to dense grid spacing or oblique target DoAs) or low SNR. This is mainly due to the fact that their performance depends heavily on their ability to accurately detect the maximal correlation between the measurement vector and the basis vectors (the column vectors of $\mathbf{X}$). Our simulation study also showed that their performance (in terms of PER rate) deteriorates when the number of targets increases. In the off-grid case, the greedy algorithms also failed to find the nearby grid points.
Finally, the SAEN algorithm performed better than all the other methods in each set-up, and the improvement was more pronounced in the presence of high mutual coherence. This is due to the ability of SAEN to retain the true support correctly at all three steps of the algorithm. Our MATLAB package that implements the proposed algorithms is freely available.30 The package also contains a MATLAB live script demo on how to use the method in the CBF problem, along with an example from simulation set-up 4 presented in the paper.
ACKNOWLEDGMENTS
This research was partially supported by the Academy of Finland Grant No. 298118, which is gratefully acknowledged.