IEEE LATIN AMERICA TRANSACTIONS, VOL. 10, NO. 5, SEPTEMBER 2012


Direction-Of-Arrival Estimation Using AMLSS Method

M. Atashbar and M. H. Kahaei

Abstract— We propose the AMLSS method for DOA estimation of narrow-band signals. In this method, the distribution of the covariance matrix estimation error is used for Maximum Likelihood estimation of the variances of the potential source signals. This leads to a nonnegative LASSO sparse model for which a new criterion is proposed to determine the regularization parameter. Simulation results show that this method estimates the number of sources more accurately than the Picard, MUSIC, and SPICE methods. It also outperforms the Picard and SPICE methods in DOA estimation accuracy.

Keywords— DOA estimation, maximum likelihood, nonnegative sparse LASSO.

I. INTRODUCTION

Direction of Arrival (DOA) estimation of multiple narrowband sources is an important problem in signal processing, communication systems, etc. [1]. High-resolution methods such as Maximum Likelihood (ML) [2], Multiple Signal Classification (MUSIC) [3,4], and Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) [5,6] have already been addressed. However, they require a large number of snapshots and knowledge of the number of sources. In recent works, the spatial sparsity-based strategy has opened a new field for DOA estimation of multiple sources [7-16]. In this strategy, sensor signals are represented by an overcomplete dictionary of angles, referred to as the "angles dictionary", leading to an underdetermined problem which is solved using sparsity properties. With these methods, prior knowledge of the number of sources is no longer required, while better DOA estimation performance is achieved at low SNRs [8]. However, since a large number of variables enters the computations, these methods are intrinsically computationally expensive. In some of these methods, sensor signals are represented based on potential sources presumed at the dictionary angles [7-10]. Then, the signals of the potential sources are estimated by solving underdetermined problems, and the energies of the estimated potential signals at different angles are used for DOA estimation. As such, the Joint l2,0 Approximation Direction Of Arrival (JLZA-DOA) method [9] makes use of some convex

M. Atashbar, Iran University of Science & Technology, Tehran, Iran, [email protected]
M. H. Kahaei, Iran University of Science & Technology, Tehran, Iran, [email protected]

functions to approximate the l0-norm and solve the corresponding underdetermined problem. As a disadvantage, this method estimates T × D variables, where T and D denote the number of snapshots and of dictionary angles, respectively. In the l1-Singular Value Decomposition (l1-SVD) method [7-8], the SVD is used to reduce the number of variables to K × D, in which the number of sources K is less than T. Also, in [10], after applying the SVD, the Capon spectrum is used to model the problem by a weighted l1-norm. In the other spatial sparsity-based DOA estimation methods, the covariance matrix of the sensor signals is used for sparse modeling [11-15]. As such, in the method of Sparse Representation of Array Covariance Vectors (SRACV) [11], the covariance matrix of the sensor signals is represented by some vectors associated with the angles dictionary. Then, this problem is reformulated as a Second Order Cone Programming (SOCP) problem [17] in which M × D variables should be estimated, where M is the number of sensors. This method, as a result, is computationally burdensome. In the SParse Iterative Covariance-based Estimation (SPICE) method [12,13], minimization of a covariance fitting criterion is used for sparse modeling of the problem. In this method, D variables should be estimated, which is the smallest number of variables among the above methods. However, it suffers from low DOA resolution. Also, Picard et al. [14] have developed a method with the same number of variables as SPICE. Our simulation results indicate that the resolution of the DOA estimates of this method is higher than that of SPICE. However, due to the error of the sensor covariance matrix estimates, the performance of Picard's method degrades for a small number of snapshots. Moreover, no solution has been presented for its sparse model.
We propose the Approximate Maximum Likelihood Spatial Sparsity (AMLSS) method in three steps. First, the error of covariance matrix estimation is incorporated into the Picard method for a small number of snapshots, and the ML criterion is used for sparse modeling of DOA, which leads to a LASSO model [18,19] with a nonnegative variable. In the second step, we propose a method to determine the regularization parameter of this model. In the third step, the interior-point-based Newton method [20] is used to solve the obtained sparse model.
The paper is organized as follows. In Section 2, signal modeling for DOA estimation of multiple sources is introduced. The Picard method is reviewed in Section 3 and the AMLSS method is proposed in Section 4. Simulation


results are presented in Section 5, and Section 6 concludes the paper.

II. SIGNAL MODELING

Suppose that θ_k is the kth source direction, s_k(t) shows the kth narrowband source signal, n_m(t) is the zero-mean noise of the mth sensor, and x_m(t) denotes the mth sensor signal. Assuming that the sources are located in the far field with no reverberation in the environment, the mth sensor signal is defined as [21]

x_m(t) = Σ_{k=1}^{K} a_m(θ_k) s_k(t) + n_m(t)    (1)

where

a_m(θ_k) = e^{−j2πf_0(m−1)l sin(θ_k)/c}    (2)

denotes the coefficient between the kth source and the mth sensor, f_0 shows the central frequency, l is the distance between every two sensors in the linear array, and c is the wave speed. In spatial sparsity-based DOA estimation methods, the dictionary angles Θ̃ = {θ̃_1, θ̃_2, …, θ̃_D} are chosen so that, by some approximation, Θ = {θ_1, θ_2, …, θ_K} is considered as a subset of Θ̃. For instance, for Θ̃ = {1°, 2°, …, 180°}, the error of each source angle θ_k lies between ±0.5°. Also, we define s̃_d(t) as the potential source signal corresponding to the dth angle of the dictionary, for which we have s̃_d(t) = s_k(t) if θ̃_d = θ_k; otherwise, s̃_d(t) = 0. This scenario introduces a spatial sparsity feature for the potential sources. In this way, the sensor signals defined in (1) can be presented as a function of the potential source signals as

x_m(t) = Σ_{d=1}^{D} ã_m(θ̃_d) s̃_d(t) + n_m(t)    (3)

where ã_m(θ̃_d) = e^{−j2πf_0(m−1)l sin(θ̃_d)/c}. Then, in matrix form, we have

x(t) = Ã s̃(t) + n(t)    (4)

where x(t) = [x_1(t), x_2(t), …, x_M(t)]^T, n(t) = [n_1(t), n_2(t), …, n_M(t)]^T, s̃(t) = [s̃_1(t), s̃_2(t), …, s̃_D(t)]^T is an unknown sparse vector, and Ã is an M × D matrix whose mth row is ã_m = [ã_m(θ̃_1), ã_m(θ̃_2), …, ã_m(θ̃_D)]. Then, s̃(t) is determined by solving (4). Note that by increasing the number of snapshots, the number of unknown variables increases as well [9]. To cope with this problem, Picard et al. [14] have developed a method for a large number of snapshots as follows.

III. PICARD METHOD

Assuming statistical independence of the source signals and noise, the covariance matrix of the sensor signals using (4) is expressed as

R = Ã R_s̃ Ã^H + R_n    (5)

where R = E(x(t)x^H(t)), R_n = E(n(t)n^H(t)), and R_s̃ = E(s̃(t)s̃^H(t)) respectively show the covariance matrices of the sensor signals, sensor noise, and potential source signals, with E indicating statistical expectation and H denoting the Hermitian transpose.





For uncorrelated source signals, R_s̃ is a diagonal matrix with only K nonzero elements due to the spatial sparsity feature of the potential signals. This means that the vector of the main diagonal elements of R_s̃, i.e., x = [R_s̃(1,1), R_s̃(2,2), …, R_s̃(D,D)]^T, is sparse. Using (5), the (i,j)th element of R as a function of x is

R(i,j) = (ã_i ⊙ ã_j^*) x + R_n(i,j)    (6)

where * shows complex conjugation and ⊙ indicates an element-by-element multiplication of two vectors. Also, the elements of each diagonal of R are averaged to develop a sparse model as [14]

r = B x + r_n    (7)

where

B = J Φ    (8)

r = J (vec(R))    (9)

while vec(.) defines the column-wise ordered vector of a matrix, Φ is the M² × D matrix whose dth column is the vectorized form of the rank-one matrix ã^(d) (ã^(d))^H, with ã^(d) denoting the dth column of Ã, r_n = J(vec(R_n)), and

J = [j_1, j_2, …, j_M]^T    (10)

is an M × M² averaging matrix whose (i,q)th element is given by

J(i,q) = 1/(M−i+1) if the qth element of vec(R) lies on the (i−1)th upper diagonal of R, and J(i,q) = 0 otherwise,    (11)

for i = 1, 2, …, M and q = 1, 2, …, M² [14].
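The averaged-diagonal model above can be checked numerically. The following is a minimal numpy sketch under assumed array parameters (M = 8 sensors, half-wavelength spacing, a 1° dictionary), not the authors' code:

```python
import numpy as np

# Sketch of the Picard sparse model of Section III: for a ULA, the vector r of
# averaged covariance diagonals is linear in the potential-source variances x.
M, D = 8, 180
f0, c, l = 1e9, 3e8, 0.15                    # carrier, wave speed, spacing (assumed)
theta = np.deg2rad(np.arange(1, D + 1))      # dictionary angles 1..180 deg
A = np.exp(-2j * np.pi * f0 * np.arange(M)[:, None] * l * np.sin(theta) / c)

x = np.zeros(D)                              # sparse vector of potential-source variances
x[[59, 74]] = [1.0, 0.5]                     # two sources, at 60 and 75 degrees
sigma2 = 0.1
R = (A * x) @ A.conj().T + sigma2 * np.eye(M)        # covariance model (5)

# r: average of the kth upper diagonal of R, k = 0..M-1, cf. (7), (9)
r = np.array([np.mean(np.diag(R, k)) for k in range(M)])

# B: the same diagonal averaging applied to the steering products, cf. (6), (8)
B = np.array([np.mean(A[:M - k] * A[k:].conj(), axis=0) for k in range(M)])
# r equals B @ x plus the white-noise power, which enters the k = 0 entry only
```

Since the noise covariance is diagonal, it contributes only to the main-diagonal (k = 0) average, which motivates the diagonal-removal step of Section IV.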

IV. PROPOSED AMLSS METHOD

First, we introduce the ML-based sparse modeling of DOA estimation, which leads to a nonnegative LASSO model. Then, a method is introduced to determine the regularization parameter of the LASSO model, and finally the latter model is solved by an interior-point-based Newton method [20].

A. Sparse Modeling

Assume that the sensor noise is spatially uncorrelated or, otherwise, is whitened using a whitening procedure. Then, R_n in (5) is a diagonal matrix which only affects the main diagonal elements of R. To suppress the noise effect in the sparse model, we propose to ignore the main diagonal elements of R. For this purpose, in (7) we replace the M × M² matrix J with the (M−1) × M² matrix J̄ which contains all the rows of J except the first one. In this way, (7) is reformulated as

r̄ = B̄ x    (12)

where B̄, containing the last M−1 rows of B in (8), is an (M−1) × D matrix and r̄ = J̄(vec(R)) is an (M−1) × 1 vector defined by

r̄ = [ (1/(M−1)) Σ_{k=1}^{M−1} R(k, k+1), (1/(M−2)) Σ_{k=1}^{M−2} R(k, k+2), …, R(1, M) ]^T    (13)

whose ith element is the average of the elements of the ith upper diagonal of R.
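The noise-suppression property of this reformulation can be illustrated numerically: dropping the main-diagonal average makes the data vector independent of the white-noise power. A minimal sketch with assumed array parameters, not the authors' code:

```python
import numpy as np

# Sketch of (12)-(13): averaging only the upper diagonals k = 1..M-1 removes
# the white-noise term, so r_bar depends on the source variances x alone.
M, D = 8, 180
f0, c, l = 1e9, 3e8, 0.15                    # assumed ULA parameters
theta = np.deg2rad(np.arange(1, D + 1))
A = np.exp(-2j * np.pi * f0 * np.arange(M)[:, None] * l * np.sin(theta) / c)

x = np.zeros(D)
x[[89, 119]] = [1.0, 2.0]                    # sources at 90 and 120 degrees

def r_bar(sigma2):
    R = (A * x) @ A.conj().T + sigma2 * np.eye(M)
    # average the kth upper diagonal for k = 1..M-1 (main diagonal excluded)
    return np.array([np.mean(np.diag(R, k)) for k in range(1, M)])

# B_bar: rows of B without the first (main-diagonal) row, cf. (12)
B_bar = np.array([np.mean(A[:M - k] * A[k:].conj(), axis=0) for k in range(1, M)])
# r_bar is unaffected by the noise power and equals B_bar @ x exactly
```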


Furthermore, in practice, the sensor signals covariance matrix is estimated using a finite number of snapshots, R̂ = (1/T) Σ_{t=1}^{T} x(t) x^H(t). Then, we can write

R = R̂ − ΔR    (14)

where ΔR is the estimation error matrix whose vectorized form has an asymptotic normal distribution given by [11,22]

vec(ΔR) ∼ AsN(0, (1/T)(R^T ⊗ R))    (15)

where 0 is a zero vector and ⊗ indicates the Kronecker matrix product. Then, we can present (12) as

r̂ = B̄ x + Δr̄    (16)

where r̂ = J̄(vec(R̂)), Δr̄ = J̄(vec(ΔR)), and the distribution of Δr̄ is

Δr̄ ∼ AsN(0, W)    (17)

where W = (1/T) J̄ (R^T ⊗ R) J̄^H.
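The 1/T scaling of the error covariance in (15) is easy to verify numerically; the sketch below uses assumed dimensions and is not taken from the paper:

```python
import numpy as np

# Numeric check of the asymptotic law (15): the covariance estimation error
# Delta_R has variance proportional to 1/T, so its norm shrinks like 1/sqrt(T).
rng = np.random.default_rng(1)
M = 4
G = rng.standard_normal((M, M))
R = G @ G.T + M * np.eye(M)                  # an assumed "true" SPD covariance
L = np.linalg.cholesky(R)

def avg_error(T, trials=200):
    e = 0.0
    for _ in range(trials):
        X = L @ rng.standard_normal((M, T))  # T snapshots with covariance R
        R_hat = X @ X.T / T                  # sample covariance, cf. (14)
        e += np.linalg.norm(R_hat - R)
    return e / trials

e1 = avg_error(100)
e2 = avg_error(10000)
# increasing T by a factor of 100 reduces the error by roughly a factor of 10
```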

In this way, the conditional probability density function of r̂ in (16) becomes

p(r̂ | x) = (1/(π^{M−1}|W|)) e^{−(r̂ − B̄x)^H W^{−1} (r̂ − B̄x)}    (18)

Thus, the ML estimate of x in (16) is achieved by minimizing the following objective function

f(x) = ln|W| + (r̂ − B̄x)^H W^{−1} (r̂ − B̄x)    (19)

For simplicity, we replace W with Ŵ = (1/T) J̄ (R̂^T ⊗ R̂) J̄^H, which leads to an Approximate ML objective function as

f̂(x) = ln|Ŵ| + (r̂ − B̄x)^H Ŵ^{−1} (r̂ − B̄x)    (20)

wherein x is a real vector while the other quantities are complex. Note that ln|Ŵ| has a constant value with no effect on the minimization procedure. Using the Hermitian symmetry of Ŵ, (20) is rewritten as

f̂(x) = ‖ Ŵ^{−1/2} (r̂ − B̄x) ‖²    (21)

which is equivalent to

f̂(x) = ‖ r̃ − C x ‖²    (22)

where r̃ = [Re(Ŵ^{−1/2} r̂)^T, Im(Ŵ^{−1/2} r̂)^T]^T and C = [Re(Ŵ^{−1/2} B̄)^T, Im(Ŵ^{−1/2} B̄)^T]^T have real values.

As seen, (22) is a Least Squares estimation problem which does not ensure a sparse solution for x. To overcome this problem, (22) should be completed by adding a sparsity criterion such as the l1-norm, which is a convex function [23]. In this way, estimation of x leads to solving the following problem

min_x ‖ r̃ − C x ‖² + λ ‖x‖₁    (23)

which has the same form as the LASSO sparse model [19] with λ as a regularization parameter. In our definition in (16), x contains the variances of the potential signals, which have nonnegative values, and thus (23) presents a nonnegative LASSO model. In this model, by having x ≥ 0 and ‖x‖₁ = Σ_d |x_d| = Σ_d x_d, (23) reduces to

min_x ‖ r̃ − C x ‖² + λ 1^T x   s.t.  x ≥ 0    (24)

where 1 is a vector whose elements are one. For λ = 0, the optimum solution of (24) is the Least Squares (LS) solution, and for λ → ∞ it becomes a zero vector. For 0 < λ < ∞, there is a tradeoff between sparsity and covariance matrix fitting. In fact, the selection of λ is a challenging issue for the LASSO model solution, for which we present a new method in the following.

B. Regularization Parameter Selection

According to (15) and (16), Δr̄ = r̂ − B̄x has a normal distribution with zero mean and covariance matrix W. Thus, r̃ − Cx has a normal distribution with zero mean and identity covariance matrix, which leads to a chi-squared distribution for ‖r̃ − Cx‖². Hence, using the Cumulative Distribution Function (CDF) of the chi-squared distribution, we can determine a constant β so that Pr(‖r̃ − Cx‖² ≤ β) = P, where P has a large value like 0.99. Having ‖r̃ − Cx‖² ≤ β with a high probability, the sparse vector x in (16) is obtained by solving the Basis Pursuit (BP) model [24,25] defined as

min_x ‖x‖₁   s.t.  ‖ r̃ − C x ‖² ≤ β    (25)

The Lagrange unconstrained cost function of (25) is then given by

L(x, γ) = ‖x‖₁ + γ ( ‖ r̃ − C x ‖² − β )    (26)

where γ is the Lagrange variable. Note that by using (25), we do not intend to estimate x, but to determine λ. By selecting λ in the LASSO model (23) according to the optimum Lagrange variable of the BP in (26), both models lead to the same solution [26, 27]. We use this approach to find the regularization parameter of the LASSO model as follows. For the case of x ≥ 0, the equivalency of (23) and (25) reduces to the equivalency of the two following problems:

min_x 1^T x   s.t.  ‖ r̃ − C x ‖² ≤ β    (27)

min_x γ ‖ r̃ − C x ‖² + 1^T x    (28)

Thus, the determination of λ is equivalent to finding the optimum value of the Lagrange variable γ for (27), with λ = 1/γ. For this purpose, the Lagrange unconstrained cost function of (27) is written as

L(x, γ) = 1^T x + γ ( ‖ r̃ − C x ‖² − β )    (29)

Assuming that γ is fixed, by taking the gradient of L(x, γ) with respect to x and equating the result to zero, i.e.

1 − 2γ C^T ( r̃ − C x ) = 0    (30)

we get


x = C↑ r̃ − (1/(2γ)) (C^T C)^{−1} 1    (31)

where C↑ = (C^T C)^{−1} C^T is the pseudo-inverse of C. By substituting (31) into (29), we reformulate (29) as

L(γ) = 1^T C↑ r̃ − (α/(2γ)) + γ ( ‖ r̃ − C C↑ r̃ ‖² + α/(4γ²) − β )    (32)

= 1^T C↑ r̃ − (α/(4γ)) + γ ( ‖ r̃ − C C↑ r̃ ‖² − β )    (33)

where α = 1^T (C^T C)^{−1} 1, which is a function of γ. By equating the gradient of L(γ) with respect to γ to zero, we get

γ² = α / ( 4 ( β − ‖ r̃ − C C↑ r̃ ‖² ) )    (34)

Since the Lagrange variable is always positive, γ_opt is obtained as γ_opt = (1/2) √( α / ( β − ‖ r̃ − C C↑ r̃ ‖² ) ), and the regularization parameter of the LASSO model follows as λ = 1/γ_opt.

C. Nonnegative LASSO Model Solving

For solving purposes, the nonnegative LASSO model is rewritten as

min_x γ ‖ r̃ − C x ‖² + 1^T x   s.t.  x ≥ 0    (35)

where, by selecting γ = 1/λ, this problem is equivalent to (24). The method of solving (35) is briefly explained in the following. It is based on the interior-point strategy [28-29], which is among the most efficient strategies for constrained optimization [20]. In this strategy, an equivalent unconstrained function is defined using the logarithmic barrier function as

F(x) = γ ‖ r̃ − C x ‖² + 1^T x − (1/t) Σ_{d=1}^{D} log(x_d)    (36)

where t is a constant value which increases in each iteration of the Newton method used for the minimization of (36). The Newton recursive relationship is given by [30]

x_{n+1} = x_n + s Δx    (37)

where s is the step-size parameter and Δx determines the search direction vector, obtained by solving the following linear equation

H Δx = −g    (38)

where the gradient g and the Hessian H of (36) are

g = 2γ C^T ( C x − r̃ ) + 1 − (1/t) u    (39)

H = 2γ C^T C + (1/t) diag(v)    (40)

where

u = [1/x_1, 1/x_2, …, 1/x_D]^T    (41)

v = [1/x_1², 1/x_2², …, 1/x_D²]^T    (42)

The step-size parameter s is determined by a backtracking line search so that F(x_n + sΔx) is sufficiently decreased, with the two line-search constants a and b satisfying 0 < a < 1 and 0 < b < 1.
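The parameter-selection and solver steps of Sections IV-B and IV-C can be sketched end to end on a synthetic problem. Everything below is an assumption for illustration, not the authors' implementation: C and r̃ are random real stand-ins for the whitened quantities in (22), the probability level is P = 0.99, the Armijo constant a = 0.3, the shrink factor b = 0.5, and the barrier schedule t ← 4t are chosen arbitrarily.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, D = 120, 40
C = rng.standard_normal((n, D))                # stands in for the whitened matrix C
x_true = np.zeros(D)
x_true[[10, 25]] = [2.0, 1.0]                  # two active "potential sources"
r_tilde = C @ x_true + rng.standard_normal(n)  # unit-variance residual, cf. (22)

# Regularization parameter selection (Section IV-B):
beta = chi2.ppf(0.99, df=n)                    # Pr(||r - Cx||^2 <= beta) = 0.99
x_ls = np.linalg.lstsq(C, r_tilde, rcond=None)[0]
res2 = np.sum((r_tilde - C @ x_ls) ** 2)       # ||r - C C^up r||^2 in (34)
alpha = np.ones(D) @ np.linalg.solve(C.T @ C, np.ones(D))  # 1^T (C^T C)^-1 1
gamma = 0.5 * np.sqrt(alpha / (beta - res2))   # optimum Lagrange variable
lam = 1.0 / gamma                              # LASSO regularization parameter

# Log-barrier Newton solution of the nonnegative LASSO (Section IV-C):
x = np.ones(D)                                 # strictly feasible start
t = 1.0
for _ in range(25):                            # outer loop: tighten the barrier
    def F(z):
        return gamma * np.sum((C @ z - r_tilde) ** 2) + z.sum() - np.log(z).sum() / t
    for _ in range(30):                        # inner Newton iterations, (37)-(42)
        g = 2 * gamma * C.T @ (C @ x - r_tilde) + 1.0 - (1.0 / t) / x
        H = 2 * gamma * C.T @ C + np.diag((1.0 / t) / x ** 2)
        dx = np.linalg.solve(H, -g)
        if -g @ dx < 1e-10:                    # Newton decrement: converged
            break
        s = 1.0
        while np.any(x + s * dx <= 0):         # keep the iterate strictly positive
            s *= 0.5
        while F(x + s * dx) > F(x) + 0.3 * s * (g @ dx):  # backtracking, a = 0.3, b = 0.5
            s *= 0.5
        x = x + s * dx
    t *= 4.0
# x is now a sparse nonnegative estimate; its dominant entries mark the DOAs
```

The positivity check before the Armijo test keeps every evaluation of the barrier term log(x_d) well defined, and the Newton decrement test stops the inner loop once the quadratic model is essentially flat.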