IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 49, NO. 11, NOVEMBER 2001


Exploiting Arrays with Multiple Invariances Using MUSIC and MODE

A. Lee Swindlehurst, Senior Member, IEEE, Petre Stoica, Fellow, IEEE, and Magnus Jansson, Member, IEEE

Abstract—This paper describes several new techniques for direction of arrival (DOA) estimation using arrays composed of multiple translated and uncalibrated subarrays. The new algorithms can be thought of as generalizations of the MUSIC, Root-MUSIC, and MODE techniques originally developed for fully calibrated arrays. The advantage of these new approaches is that the DOAs can be estimated using either a simple one-dimensional (1-D) search or by rooting a polynomial, as opposed to the multidimensional search required by multiple invariance (MI)-ESPRIT. When it can be applied, the proposed MI-MODE algorithm shares the statistical optimality of MI-ESPRIT. While MI-MUSIC and Root-MI-MUSIC are only optimal for uncorrelated sources, they perform better than a single invariance implementation of ESPRIT and are thus better suited for finding the initial conditions required by the MI-ESPRIT search.

Index Terms—Antenna arrays, array signal processing, direction-of-arrival estimation, spectral analysis.

Manuscript received June 26, 2000; revised August 20, 2001. This work was supported in part by the Office of Naval Research under Grant N00014-00-1-0338 and by the Swedish Foundation for Strategic Research (SSF). The associate editor coordinating the review of this paper and approving it for publication was Dr. Brian Sadler.
A. L. Swindlehurst is with the Department of Electrical and Computer Engineering, Brigham Young University, Provo, UT 84602 USA (e-mail: [email protected]).
P. Stoica is with the Department of Systems and Control, Uppsala University, Uppsala, Sweden (e-mail: [email protected]).
M. Jansson is with the Department of Signals, Sensors, and Systems, Royal Institute of Technology, Stockholm, Sweden (e-mail: [email protected]).
Publisher Item Identifier S 1053-587X(01)09602-7.

I. INTRODUCTION

THE elements of a sensor array are often arranged in a very regular, structured geometry. Linear, circular, and rectangular arrays are common examples. These special geometries can be exploited in developing computationally efficient algorithms for direction of arrival (DOA) estimation. Root-MUSIC [1], [2], IQML [3], and MODE [4] are well-known techniques that take advantage of the structure of uniform linear arrays (ULAs) to obtain DOA estimates via polynomial rooting. ESPRIT [5] is an alternative that only requires an array composed of two identical, translated subarrays (of which a ULA is a special case) to achieve rooting-based DOA estimates. Similar techniques have been developed for circular arrays [6] and for rectangular arrays where two-dimensional (2-D) angles (e.g., azimuth and elevation) must be determined [7]–[13].

The so-called multiple invariance (MI-)ESPRIT algorithm [14]–[16], which generalizes ESPRIT to handle arrays composed of multiple identical subarrays, is of particular relevance to this paper. MI-ESPRIT allows one to exploit array structure

that falls somewhere “in between” a fully calibrated ULA and an array with only two subarrays. While MI-ESPRIT is known to be asymptotically statistically efficient for its given set of assumptions (provided that the subarrays do not share elements), it requires a multidimensional search for the DOA parameters. ESPRIT can always be applied to a multiple invariance array to obtain initial estimates for the search, but in doing so, it may not be able to use all of the array elements nor all of the available structure.

In this paper, we propose several novel algorithms for exploiting arrays with multiple translated subarrays. One of the new approaches can be viewed as a generalization of MUSIC [17], and thus, we refer to this algorithm as MI-MUSIC. Unlike MI-ESPRIT, it estimates the DOAs using only a one-dimensional (1-D) search, and unlike standard ESPRIT, it is able to enforce the constraint that the subarray responses for a given source are related by a scalar multiplier that lies on the unit circle. Although, in general, MI-MUSIC does not share the statistical optimality of MI-ESPRIT, it provides more accurate estimates than regular ESPRIT since it is always able to exploit all array invariances and enforce the aforementioned unit circle constraint. In addition, like MI-ESPRIT, MI-MUSIC can be applied to arrays whose subarrays are not necessarily translated along a straight line.

Generalizations of Root-MUSIC and MODE to arrays with multiple invariances are also derived in the paper. These algorithms are attractive since they eliminate the need for even a 1-D search for the DOA parameters. Although the standard version of Root-MUSIC finds the zeros of a scalar polynomial to estimate the DOAs, the MI version requires calculation of the zeros of a matrix polynomial. The asymptotic performance of Root-MI-MUSIC will be equivalent to that of MI-MUSIC and, hence, is suboptimal. On the other hand, like MI-ESPRIT, the MI version of MODE is statistically efficient when the subarrays do not overlap (i.e., share elements). However, the parameterization of the noise subspace used to derive MI-MODE only holds when the number of subarrays satisfies a certain constraint that depends on the number of signals present (see Section V for the precise requirement). In addition, unlike MI-MUSIC, both Root-MI-MUSIC and MI-MODE require that the subarrays be displaced along a straight line.

After a description of the assumed data model in the next section, we derive the various new algorithms in Sections III–V. A brief discussion of the computational complexity of the algorithms is given in Section VI, and some simulation results are then presented in Section VII.





II. MATHEMATICAL BACKGROUND

We assume an array composed of m sensors that receives the signals from d sources. The array output is denoted by the m-vector x(t) and satisfies the well-known model

x(t) = A(θ) s(t) + n(t)   (1)

where θ = [θ_1, …, θ_d]^T is the DOA vector, s(t) is the d-vector of signal waveforms, n(t) is the m-vector of noise and interference, and

A(θ) = [a(θ_1)  ⋯  a(θ_d)]   (2)

is the m × d matrix whose columns are the array response vectors for each source. In our notation, the symbol (·)* denotes the complex conjugate transpose operation, and E{·} denotes an expectation. In the development that follows, we assume that

P = E{s(t) s*(t)}

is full rank (no coherent signals), that the noise and source signals are uncorrelated, and that the noise is spatially white:

E{n(t) n*(t)} = σ² I.

With these assumptions, the array covariance matrix and its eigendecomposition are given by

R = E{x(t) x*(t)} = A(θ) P A*(θ) + σ² I   (3)
R = E_s Λ_s E_s* + σ² E_n E_n*   (4)

where E_s is m × d and contains the eigenvectors of R associated with the d largest eigenvalues, Λ_s is a diagonal matrix whose elements are the d largest eigenvalues, and the remaining eigenvectors (all of whose eigenvalues are equal to σ²) form the columns of E_n. When both A(θ) and P are full rank, the eigenvectors span the signal and noise subspaces

col span{E_s} = col span{A(θ)},   col span{E_n} = col span{A(θ)}^⊥.   (5)
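As a purely illustrative companion to (1)–(5) (and not part of the original development), the sketch below builds a covariance matrix of the form (3) for a hypothetical half-wavelength ULA and splits its eigenvectors into signal and noise subspaces as in (4) and (5). The array size, source angles, and noise power are arbitrary choices.

```python
import numpy as np

def ula_response(theta_deg, m):
    """Half-wavelength ULA response a(theta); an example manifold, not the paper's array."""
    k = np.arange(m)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

m, d, sigma2 = 8, 2, 0.1                      # sensors, sources, noise power (arbitrary)
thetas = [-5.0, 10.0]                         # example DOAs in degrees
A = np.column_stack([ula_response(t, m) for t in thetas])
P = np.eye(d)                                 # uncorrelated, unit-power sources
R = A @ P @ A.conj().T + sigma2 * np.eye(m)   # covariance model (3)

# Eigendecomposition (4): sort eigenvalues in descending order
w, E = np.linalg.eigh(R)
order = np.argsort(w)[::-1]
w, E = w[order], E[:, order]
Es, En = E[:, :d], E[:, d:]                   # signal and noise subspaces, cf. (5)

# Sanity check: the noise subspace is orthogonal to the array manifold
print(np.linalg.norm(En.conj().T @ A))        # ~0 up to numerical precision
```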

With noisy data, the signal and noise subspaces must be approximated using the sample covariance matrix

R̂ = (1/N) Σ_{t=1}^{N} x(t) x*(t).

We will denote the matrices made from the eigenvectors of R̂ associated with the d largest and m − d smallest eigenvalues by Ê_s and Ê_n, respectively.

A. Multiple Invariance Arrays

The sensor array is assumed to be composed of p translated subarrays of m_s identical elements each, the subarrays being displaced with arbitrary but known spacing. Fig. 1 shows a few examples of this type of multiple-invariance array. Notice that the second and third arrays have subarrays that are collinear, whereas the first does not. In addition, note that in general m ≤ p m_s, since different subarrays may have elements in common. If the full array were uniform and linear, there would be several ways in which to form the subarrays, depending on the amount of subarray overlap.

If the subarrays do not overlap, then no calibration information is used or required for the subarrays; that is, the gain and phase response, locations, and mutual coupling of the individual elements within the subarrays need not be known. However, the requirement that the subarrays be identical with identical orientation and known displacements cannot be exactly satisfied, and hence, multiple-invariance arrays are still subject to “calibration” errors (similar to those encountered in a standard array processing scenario involving identical elements in known locations). An interesting question for further research is whether or not the multiple invariance assumption is more or less robust to errors in the assumed array structure than a fully calibrated array. For overlapping subarrays, additional structure in the array is implicitly assumed. As an example, for the second array shown in Fig. 1, the fact that the second and third subarrays overlap implies that the two elements in the lower half of each triangle must be identical.

In describing multiple invariance arrays mathematically, one subarray is chosen as the reference, and the known translational and angular displacements of the other subarrays from the reference are denoted by Δ_k and ψ_k, respectively, for k = 1, …, p − 1. The structure of this type of array can be described by the following equation (see, e.g., [14]):

J A(θ) = [Ā(θ)^T  (Ā(θ)Φ_1(θ))^T  ⋯  (Ā(θ)Φ_{p−1}(θ))^T]^T ≜ A_J(θ, Ā)   (6)

where J is the known p m_s × m selection matrix, Ā(θ) is the unknown m_s × d response of the reference subarray, and Φ_k(θ) is a diagonal matrix defined by

Φ_k(θ) = diag{e^{jπΔ_k sin(θ_1 − ψ_k)}, …, e^{jπΔ_k sin(θ_d − ψ_k)}}   (7)

where Δ_k is assumed to be measured in units of half-wavelengths. We will assume that the subarrays have been ordered so that Δ_1 ≤ Δ_2 ≤ ⋯ ≤ Δ_{p−1}. For collinear subarrays, ψ_k = ψ for all k (ψ = 0 without loss of generality), and in such cases, we drop the subscript on ψ and simply write

Φ_k(θ) = diag{e^{jπΔ_k sin θ_1}, …, e^{jπΔ_k sin θ_d}}.   (8)

All of the dependence of the model on θ is contained in the phases of the diagonal elements of each Φ_k(θ), which lie on the unit circle. The selection matrix J is needed to handle overlapping subarrays, in which case J is tall. If no elements are shared between subarrays, then J is square, and for nonoverlapping subarrays, the rows of A(θ) can be arranged so that J = I. We can also write (6) column-wise as follows:

J a(θ_i) = (v(θ_i) ⊗ I_{m_s}) ā(θ_i)   (9)

where ⊗ denotes the Kronecker product, a(θ_i) and ā(θ_i) denote the ith columns of A(θ) and Ā(θ), respectively, and

v(θ_i) = [1  e^{jπΔ_1 sin(θ_i − ψ_1)}  ⋯  e^{jπΔ_{p−1} sin(θ_i − ψ_{p−1})}]^T.   (10)
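To make the subarray structure of (6)–(10) concrete, the following sketch stacks a reference-subarray response under its phase-shifted copies for collinear, non-overlapping subarrays (so that J = I). The reference response ā and the displacements Δ_k used here are placeholders chosen only for illustration; in practice ā is unknown.

```python
import numpy as np

def mi_steering(theta_deg, abar, deltas):
    """Build J a(theta) = (v(theta) ⊗ I) ā(theta) for collinear subarrays, cf. (8)-(10).

    abar   : (m_s,) complex reference-subarray response ā(θ) (unknown in practice)
    deltas : displacements Δ_1..Δ_{p-1} of the other subarrays, in half-wavelengths
    """
    s = np.sin(np.deg2rad(theta_deg))
    v = np.exp(1j * np.pi * np.concatenate(([0.0], np.asarray(deltas))) * s)  # v(θ) of (10)
    return np.kron(v, abar)   # stacks ā, ā·e^{jπΔ_1 sinθ}, ..., ā·e^{jπΔ_{p-1} sinθ}

# Example: p = 4 subarrays of m_s = 2 elements, arbitrary displacements
abar = np.array([1.0, 0.8 * np.exp(0.3j)])      # hypothetical uncalibrated response
a = mi_steering(12.0, abar, deltas=[1.0, 2.0, 5.0])
print(a.shape)                                   # (8,) = p * m_s
```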



Fig. 1. Examples of arrays with multiple invariances.

B. Identifiability

While the condition m > d is necessary for any DOA estimation algorithm based on an eigendecomposition of R, it may not be sufficient in our application due to the extra parameters in Ā that must be estimated. To derive a more precise condition, note that (5) implies the existence of a full-rank d × d matrix T that satisfies

J E_s = A_J(θ, Ā) T.   (11)

The number of equations (both real and imaginary) represented by (11) is 2 p m_s d, whereas the number of unknowns required to specify the right-hand side includes 2 m_s d for Ā, d for the DOAs that determine the Φ_k(θ), and the parameters of T. The matrix T requires fewer parameters than a general d × d complex matrix due to the fact that an arbitrary diagonal scaling can be placed between Ā and T; in other words, a scaling of the columns of Ā can be absorbed into T without changing the equation. If we set the number of equations to be greater than or equal to the number of unknowns, and combine the result with the fact that m ≤ p m_s, we obtain the necessary condition in (12) for MI arrays, which corresponds to the condition for identifiability derived in [14]. We note that identifiability conditions for the MI problem based on the notion of k-rank are given in [18], although a slightly different problem is considered there in which the diagonal elements of the Φ_k are not constrained to be on the unit circle.

C. Multiple Invariance ESPRIT

In [14], (6) and (11) are exploited by minimizing the following weighted signal subspace fitting criterion, which is referred to as MI-ESPRIT:

min_{θ, Ā, T} ||(J Ê_s − A_J(θ, Ā) T) W^{1/2}||_F²   (13)

where ||·||_F denotes the Frobenius norm, and W is a suitably chosen weighting matrix. While only collinear subarrays were explicitly considered in [14], MI-ESPRIT is general enough to handle subarrays with nonzero angular displacement as well. When there are no overlapping subarrays (m = p m_s), this algorithm has been shown to be asymptotically statistically efficient when the weighting matrix is chosen to be a consistent estimate of

W_opt = (Λ_s − σ² I)² Λ_s^{−1}.   (14)
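To illustrate the shape of the criterion that MI-ESPRIT must search, the sketch below evaluates an unweighted version of (13) for a candidate pair (θ, Ā), solving for T in the least-squares sense. Here W is taken as the identity purely for simplicity, whereas the statistically efficient choice is a consistent estimate of (14); the function and variable names are illustrative, not from the original paper.

```python
import numpy as np

def mi_manifold(thetas_deg, Abar, deltas):
    """Stack A_J(θ, Ā) = [Ā; ĀΦ_1; ...; ĀΦ_{p-1}] for collinear subarrays, cf. (6)."""
    s = np.sin(np.deg2rad(np.asarray(thetas_deg)))
    blocks = [Abar]
    for dk in deltas:
        blocks.append(Abar * np.exp(1j * np.pi * dk * s))   # right-multiply by diagonal Φ_k
    return np.vstack(blocks)

def mi_esprit_cost(Es, thetas_deg, Abar, deltas):
    """Unweighted version of the MI-ESPRIT criterion (13): min_T ||Es - A_J T||_F^2."""
    AJ = mi_manifold(thetas_deg, Abar, deltas)
    T, *_ = np.linalg.lstsq(AJ, Es, rcond=None)   # LS-optimal T for this candidate (θ, Ā)
    return np.linalg.norm(Es - AJ @ T, 'fro') ** 2
```

A full MI-ESPRIT implementation would minimize this cost jointly over θ and the entries of Ā, typically with a Gauss-Newton type search initialized as discussed below.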

Consistent estimates of Λ_s and σ² are easily obtained from the eigendecomposition of R̂. Note also that an efficient algorithm for tracking subspaces possessing the multiple invariance structure of (8) was recently proposed in [19]. However, in general, implementing (13) requires a search over θ and the elements of either Ā or T, whichever is of smaller dimension. While the standard single-invariance ESPRIT algorithm can be used to obtain an initial estimate of the desired parameters, the need for a multidimensional search of relatively high dimension is a significant drawback. In the next section, we present a new algorithm that (unlike ESPRIT) exploits the full invariance structure of the array and (unlike MI-ESPRIT) only requires a 1-D search for DOA estimation.

III. MULTIPLE INVARIANCE MUSIC

The key idea behind the approach presented in this section is a modification of the criterion in (13) that places the subspace fitting matrix T on the left-hand rather than the right-hand term [in which case, W in (13) is irrelevant]:

min_{θ, Ā, T} ||J Ê_s T − A_J(θ, Ā)||_F².   (15)

This has the advantage of making the error term linear in both Ā and T so that estimates of both can be obtained in closed form. The drawback is that an unknown coloring of the statistics of the error term is introduced. While a weighting matrix could be used to optimally account for this color (as in the so-called noise subspace fitting approach [4], [20], [21]), the resulting solution




would still require a multidimensional search over the DOAs in θ. In the interest of finding a simpler solution, we prefer to focus on the unweighted criterion of (15).

Minimization of the criterion in (15) with respect to T yields the following concentrated problem:

min_{θ, Ā} Tr{A_J*(θ, Ā) Π A_J(θ, Ā)}   (16)

where

Π = I − J Ê_s (Ê_s* J^T J Ê_s)^{−1} Ê_s* J^T.

Using (9), the minimization of (16) becomes

min_{θ, ā_1, …, ā_d} Σ_{i=1}^{d} ā*(θ_i) (v(θ_i) ⊗ I_{m_s})* Π (v(θ_i) ⊗ I_{m_s}) ā(θ_i).   (17)

Since each term in the sum depends only on the parameters of one source (θ_i and ā(θ_i)), the minimization can be performed for one source at a time by finding the d deepest minima of the criterion (see, e.g., [22]):

V(θ, ā) = ā* (v(θ) ⊗ I_{m_s})* Π (v(θ) ⊗ I_{m_s}) ā.   (18)

Minimizing (18) amounts to finding response vectors (v(θ) ⊗ I_{m_s}) ā that are as “close” as possible to the multiple-invariance signal subspace defined by J Ê_s. This is the basic idea behind the MUSIC algorithm, and consequently, we refer to this approach as multiple invariance (MI)-MUSIC. For convenience, we will rewrite (18) as follows:

V(θ, ā) = ā* [(v(θ) ⊗ I_{m_s})* Π (v(θ) ⊗ I_{m_s})] ā   (19)
        = ā* Q(θ) ā   (20)

where Q(θ) is defined in an obvious way. Since (20) is quadratic in ā, we can explicitly minimize it with respect to ā, provided that a constraint is employed to eliminate the trivial ā = 0 solution. The effect of enforcing such a constraint is a simple scaling of the signal vector, which poses no difficulty (see also the identifiability discussion in the previous section). We present the solution for two constraints below. In both cases, the DOAs are estimated via a 1-D search, and the subarray response vectors (columns of Ā) are solved for in closed form.

1) Linear Constraint: In this approach, we constrain the first element of ā to be unity: e_1^T ā = 1, where e_1 is the unit vector with a one in the first position and zeros elsewhere. Minimizing (20) subject to e_1^T ā = 1 yields the following solution [22]:

θ̂_i = arg max_θ [Q^{−1}(θ)]_{1,1}   (the d largest maxima)   (21)
ā̂_i = Q^{−1}(θ̂_i) e_1 / [Q^{−1}(θ̂_i)]_{1,1}.   (22)

In words, we find the d largest maxima of the (1, 1) element of Q^{−1}(θ), and for each estimate θ̂_i, we set ā̂_i to be the first column of Q^{−1}(θ̂_i) scaled by the (1, 1) element of Q^{−1}(θ̂_i).

2) Quadratic Constraint: Here, we force the unit-length constraint ||ā|| = 1. The solution in this case is given in terms of the eigenvalue decomposition of Q(θ):

θ̂_i = arg min_θ λ_min{Q(θ)}   (the d deepest minima)   (23)

where λ_min{·} is the minimum eigenvalue of its matrix argument. The subarray response vector associated with θ̂_i is taken to be the eigenvector associated with λ_min{Q(θ̂_i)}. Both of the above implementations reduce to the standard MUSIC algorithm in the limit where each subarray consists of a single array element.

We note here that a similar approach to that described above is possible for the case where one must estimate a 2-D angle of arrival for each source, e.g., azimuth φ and elevation θ. An array with displacement invariances in two dimensions is needed for such problems, and (10) must be written as a function of both θ and φ. To implement the MI-MUSIC algorithm for this case, we would simply replace Q(θ) by Q(θ, φ) and perform a 2-D rather than a 1-D search for the deepest minima.
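A minimal sketch of the resulting 1-D search follows, assuming collinear, non-overlapping subarrays (J = I). For each candidate θ it forms Q(θ) and returns both spectra used above: the smallest eigenvalue of Q(θ) for the quadratic constraint (23), and the (1, 1) element of Q^{-1}(θ) for the linear constraint (21) and (22). The grid and names are illustrative.

```python
import numpy as np

def mi_music_spectra(Es, ms, deltas, theta_grid_deg):
    """Return (lambda_min{Q(θ)}, [Q^{-1}(θ)]_{11}) over a grid of candidate DOAs."""
    pms = Es.shape[0]
    Pi = np.eye(pms) - Es @ np.linalg.pinv(Es)        # projector onto the complement of span{Ê_s}
    dvec = np.concatenate(([0.0], np.asarray(deltas)))
    lam_min, q11inv = [], []
    for th in theta_grid_deg:
        v = np.exp(1j * np.pi * dvec * np.sin(np.deg2rad(th)))
        V = np.kron(v[:, None], np.eye(ms))           # v(θ) ⊗ I_{m_s}, size (p*m_s, m_s)
        Q = V.conj().T @ Pi @ V                       # the m_s x m_s matrix Q(θ) of (19)-(20)
        lam_min.append(np.linalg.eigvalsh(Q)[0])      # quadratic constraint, cf. (23)
        q11inv.append(np.real(np.linalg.inv(Q)[0, 0]))  # linear constraint, cf. (21)-(22)
    return np.array(lam_min), np.array(q11inv)
```

The DOA estimates are the d deepest minima of the first spectrum or, equivalently, the d largest maxima of the second.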


IV. MULTIPLE INVARIANCE ROOT-MUSIC

The Root-MUSIC algorithm [1], [2] was developed as a specialization of MUSIC to uniform linear arrays. It takes advantage of the resulting Vandermonde structure in the steering vectors to write the MUSIC criterion as a polynomial of degree 2(m − 1), d of whose roots will ideally lie on the unit circle. The phases of these complex roots determine the DOAs. In the presence of noise, the roots closest to the unit circle are chosen for use in estimating the DOAs.

In this section, we show how the Root-MUSIC idea is extended to MI arrays. We must assume that the subarrays are all collinear and that the displacements are rational multiples of one another. In other words, we assume that ψ_k = 0 and that there exist integers n_k and n_l satisfying Δ_k/Δ_l = n_k/n_l for all k, l. To simplify the discussion that follows, we will assume that the displacements are integers (i.e., they are multiples of λ/2, where λ denotes the signal wavelength), although this is not strictly necessary. With the above assumptions, we can write (10) as the following polynomial vector:

v(z) = [1  z^{Δ_1}  ⋯  z^{Δ_{p−1}}]^T   (24)

where z = e^{jπ sin θ} and Δ_1, …, Δ_{p−1} are integers. The MUSIC criterion in (20) then becomes

ā* P(z) ā   (25)

where

P(z) = (v(z) ⊗ I_{m_s})* Π (v(z) ⊗ I_{m_s})   (26)
     = Σ_{k=0}^{p−1} Σ_{l=0}^{p−1} (z*)^{Δ_k} z^{Δ_l} Π_{kl}   (27)

and Δ_0 ≜ 0. The matrix Π_{kl} is the m_s × m_s sub-block of the matrix Π located at block row k and block column l. In the absence of noise, if we pick a value of z corresponding to one of the DOAs, then there will exist a vector ā such that P(z) ā = 0. In other words, we are looking for values of z where the matrix P(z) drops rank. The process of finding such z's is complicated by the presence of the conjugate powers of z in (27). To overcome this problem, we observe that since the desired values of z lie on the unit circle where z* = 1/z, we may instead consider the following matrix polynomial:

P̃(z) = Σ_{k=0}^{p−1} Σ_{l=0}^{p−1} z^{Δ_l − Δ_k} Π_{kl}.   (28)

Thus, to estimate the DOAs, we seek the roots of the matrix polynomial P̃(z) or the solutions to

det{P̃(z)} = 0   (29)

which represents a polynomial of degree 2 m_s Δ_{p−1} (after multiplication by z^{m_s Δ_{p−1}}). A criterion similar to this was recently proposed in [23] for the 2-D angle estimation problem. Note that the matrix polynomial in (28) is a spectral density matrix (i.e., P̃(z) = P̃*(1/z*)), and thus it may be factored as P̃(z) = L(z) L*(1/z*) for some L(z). Consequently, the order of the polynomial that must be rooted can be reduced to m_s Δ_{p−1} if L(z) is used instead of P̃(z). To find the subarray steering vector associated with a particular root ẑ_i, we simply solve the linear equation P̃(ẑ_i) ā = 0.

The process described above is analogous to the standard Root-MUSIC algorithm; in fact, if we specialize to the case of a ULA with m_s = 1 and Δ_k = k, then P̃(z) is equal to the scalar polynomial used by Root-MUSIC. As with Root-MUSIC, the order of the polynomial is larger than d, and some method is needed to separate the desired roots from the extraneous ones. In the presence of noise, this is done by picking the roots closest to the unit circle. Reliable numerical algorithms exist for finding the roots of matrix polynomials; for example, see [24] and the algorithms in the polynomial toolbox for Matlab described in [25] and at the website www.polyx.com.
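The sketch below illustrates one simple way to carry out the rooting step of (28) and (29) without a dedicated matrix-polynomial solver: it samples det{z^{m_s Δ_{p−1}} P̃(z)} on the unit circle, recovers the scalar polynomial coefficients by a DFT, and keeps the roots closest to the unit circle. Integer displacements are assumed, as in the text, and this is not intended to reproduce the algorithms of [24] and [25].

```python
import numpy as np

def root_mi_music(Es, ms, deltas, d):
    """Sketch of Root-MI-MUSIC for integer half-wavelength displacements (sorted ascending)."""
    pms = Es.shape[0]
    Pi = np.eye(pms) - Es @ np.linalg.pinv(Es)            # projector used in (26)-(28), J = I
    dvec = np.concatenate(([0], np.asarray(deltas, dtype=int)))
    Dmax = dvec[-1]

    def Ptilde(z):
        # P̃(z) = Σ_{k,l} z^(Δ_l - Δ_k) Π_{kl}, cf. (28)
        M = np.zeros((ms, ms), dtype=complex)
        for k, dk in enumerate(dvec):
            for l, dl in enumerate(dvec):
                M += z ** (dl - dk) * Pi[k*ms:(k+1)*ms, l*ms:(l+1)*ms]
        return M

    # det{z^(ms*Dmax) P̃(z)} is an ordinary polynomial of degree <= 2*ms*Dmax; recover its
    # coefficients by sampling on the unit circle and taking a DFT.
    deg = 2 * ms * Dmax
    N = deg + 1
    zs = np.exp(2j * np.pi * np.arange(N) / N)
    vals = np.array([zs[n] ** (ms * Dmax) * np.linalg.det(Ptilde(zs[n])) for n in range(N)])
    coeffs = np.fft.fft(vals) / N                          # c_0..c_deg of Σ_k c_k z^k
    roots = np.roots(coeffs[::-1])                         # np.roots wants highest power first

    # keep the d roots inside (or on) the unit circle that are closest to it
    roots = roots[np.abs(roots) <= 1.0 + 1e-8]
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:d]
    return np.rad2deg(np.arcsin(np.angle(roots) / np.pi))  # z = e^{jπ sinθ}
```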


V. MULTIPLE INVARIANCE MODE

The motivation for examining the use of MODE [4] for this problem is the desire to obtain statistically efficient DOA estimates without resorting to a search. We will see below that the extension of MODE to MI arrays presented here requires that the number of subarrays p exceed the number of signals d, which can be a serious restriction in some applications. However, when this condition is satisfied, MI-MODE provides a significant computational advantage over MI-ESPRIT.

To extend MODE to the MI case, we will make a connection with system identification and observe that A_J(θ, Ā) can be viewed as the observability matrix corresponding to a linear system if the subarrays are regularly spaced in the sense that Δ_k = kδ for some constant δ. With this observation, we can use the results presented in [26] to considerably simplify the minimization of the MI-ESPRIT criterion.

A. Nullspace Parameterization

To begin, consider the weighted signal subspace fitting criterion of (13), which we repeat here:

min_{θ, Ā, T} ||(J Ê_s − A_J(θ, Ā) T) W^{1/2}||_F².   (30)

If (30) is minimized with respect to the matrix T and the resulting estimate of T is inserted in (30), we get the following concentrated criterion:

min_{θ, Ā} Tr{Π_A^⊥ J Ê_s W Ê_s* J^T}   (31)

where

Π_A^⊥ = I − A_J (A_J* A_J)^{−1} A_J*

is the orthogonal projector onto the nullspace of A_J*. In order to use the MODE approach to minimize (31), we need to find a p m_s × (p m_s − d) full-rank matrix whose columns span the nullspace of A_J* and which depends linearly on a minimal set of parameters. To construct such a matrix, first permute the rows of A_J so that the rows associated with a given element of the reference subarray are grouped together:

Ã = [Ã_1^T  Ã_2^T  ⋯  Ã_{m_s}^T]^T   (32)

where Ã_j denotes the p × d matrix whose rows are ā_j^T, ā_j^T Φ, …, ā_j^T Φ^{p−1}, ā_j^T denotes row j of Ā, and Φ ≜ diag{e^{jπδ sin θ_1}, …, e^{jπδ sin θ_d}} (so that Φ_k(θ) = Φ^k under the regular-spacing assumption). As with MODE, define the polynomial

b(z) = b_0 z^d + b_1 z^{d−1} + ⋯ + b_d ≜ b_0 Π_{i=1}^{d} (z − e^{jπδ sin θ_i})   (33)

whose roots determine the DOAs. As in MODE, the coefficients are constrained to satisfy

b_k = b*_{d−k},   k = 0, …, d   (34)

where, for a scalar, (·)* reduces to complex conjugation; a normalization involving the real part of one of the coefficients removes the arbitrary scaling of b(z). Next, define the Toeplitz matrices in (35)–(37). These are banded block-Toeplitz matrices whose nonzero entries are built from the coefficients b_0, …, b_d, from a second set of complex scalars (the auxiliary coefficients of the construction in [26]), and from identity and zero blocks of appropriate dimensions. The assumption that the subarrays are regularly spaced (Δ_k = kδ) is needed for this construction to make sense. With the above definitions, the matrix Γ is assembled from these Toeplitz matrices in (38); by construction, its columns span the nullspace of Ã*, and its entries depend linearly on the b- and auxiliary coefficients.

For a given set of parameters in Ã, there is a unique set of b- and auxiliary parameters in Γ such that Γ* Ã = 0. Note that the number of real parameters that determine Ā (or Ã) is reduced by the fact that, as explained above, one of the rows of Ā can be arbitrarily normalized; the number of parameters in Γ matches this count, since the b-polynomial contributes only the real parameters allowed by the constraints in (34), with the remainder being the auxiliary coefficients. It is clear from the special structure of Γ that it has full rank, and hence, we have col span{Γ} = col span{Ã}^⊥. We can thus rewrite the criterion in terms of the new parameterization as

min Tr{Γ̃ (Γ̃* Γ̃)^{−1} Γ̃* J Ê_s W Ê_s* J^T}   (39)

where Γ̃ is equal to Γ with its rows permuted as those of A_J were in forming Ã, and we let β denote a vector containing the unknown parameters (i.e., the real and imaginary parts of the b_k subject to the conjugate symmetry constraint, and the real and imaginary parts of the auxiliary coefficients). The idea behind MODE is to use the fact that Γ̃* J Ê_s → 0 (asymptotically in N or SNR) to show that the inner factor (Γ̃* Γ̃)^{−1} can be replaced by a consistent estimate without impairing the asymptotic variance of the final estimates [4]. The advantage of making this replacement is that the criterion becomes quadratic in the unknowns. To explain this approach in more detail, let μ and Ψ be a parameter-independent vector and matrix, respectively, that are implicitly defined by

vec(Γ̃* J Ê_s) = μ + Ψ β.   (40)

Using this definition, the criterion can be rewritten as the weighted overdetermined least squares problem in (41) and (42), which is quadratic in β. The parameter vector β can hence be obtained by solving this least squares problem in closed form. Once an estimate of β is found, the weighting can be recomputed, and the least-squares minimization can be repeated. Although multiple iterations do not provide any asymptotic performance improvement [4], some gain may be observed in finite sample scenarios. Note that the initial value of β needed to start the iterations can be found using ESPRIT or one of the MI-MUSIC algorithms presented herein. Running MI-MODE once with W = I also provides a consistent estimate that can be used to initialize the algorithm.

B. Special Considerations for Overlapping Subarrays

The derivation above started from the MI-ESPRIT criterion in (30). When m = p m_s, the estimates obtained by minimizing this criterion have previously been shown [14] to be statistically efficient, provided that W is chosen as in (14). This also follows directly from the analysis of MODE in [21]. Hence, for nonoverlapping subarrays, MI-MODE is statistically efficient. However, when the subarrays share elements (m < p m_s), statistical efficiency is lost since the subspace fitting criterion is overparameterized (i.e., the shared elements are parameterized independently of one another). This implies that if we attempt to derive an optimally weighted subspace fitting method based on the overparameterized model, we should expect the residual covariance matrix (the inverse of which, if it existed, would constitute the optimal weighting) to be singular. Indeed, let us take a “MODE approach” and study the residual vector

r = vec(Γ̃* J Ê_s).   (43)

By using the first-order perturbation result of [21] for Ê_s, together with an expression for the covariance of vec(Ê_s), we immediately obtain (for large N) the expressions in (44)–(49) for the covariance of r. We note that the matrix appearing in (49) is of larger dimension than its rank: when the subarrays overlap, there are linearly independent vectors in its nullspace. Accordingly, when m < p m_s, the matrix in (49) is rank deficient, and hence the residual covariance matrix is singular, as expected. One approach to overcoming this difficulty would be to regularize the error covariance prior to using its inverse as the weighting for the least squares minimization [27].
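As a small, self-contained illustration of the final step shared by MODE-type estimators under the regular-spacing assumption Δ_k = kδ, the sketch below maps a coefficient vector of the polynomial b(z) in (33) to DOA estimates (and includes the forward map only to test it). It does not implement the Γ parameterization or the weighted least-squares iteration; the value of δ is an arbitrary choice.

```python
import numpy as np

def b_poly_from_doas(thetas_deg, delta):
    """Forward map: b(z) with roots e^{jπδ sinθ_i}, cf. (33); used here only to test the inverse."""
    roots = np.exp(1j * np.pi * delta * np.sin(np.deg2rad(thetas_deg)))
    return np.poly(roots)                      # coefficients b_0..b_d, highest power first

def doas_from_b_poly(b, delta):
    """Inverse map used by MODE-type estimators: root b(z), then θ_i = arcsin(arg z_i / (πδ))."""
    z = np.roots(b)
    return np.sort(np.rad2deg(np.arcsin(np.angle(z) / (np.pi * delta))))

b = b_poly_from_doas([-8.0, 4.0, 15.0], delta=1.0)   # δ = 1 half-wavelength (assumed)
print(doas_from_b_poly(b, delta=1.0))                # recovers [-8, 4, 15] up to numerics
```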



VI. COMPUTATIONAL COMPLEXITY

When implemented nonadaptively, the computational load of subspace-based DOA estimators like those presented above is usually dominated by formation of the covariance matrix and calculation of its eigendecomposition. However, in adaptive scenarios where the covariance or signal subspace can be efficiently updated, the cost of implementing the algorithm itself can become an issue. In comparing the algorithms derived above with their single-invariance counterparts, we will assume for simplicity that there is little overlap between the subarrays, so that m ≈ p m_s.

For MI-MUSIC, which employs a 1-D criterion that must be searched for local minima, the computational load is determined by the complexity of evaluating the criterion for each θ (the number of criterion evaluations required to find the minima depends on the particular search algorithm used, in addition to the quality of the estimates used to initialize the search). MI-MUSIC requires the formation of the m_s × m_s matrix Q(θ) for each θ, followed by calculation of the minimum eigenvalue of Q(θ) or the (1, 1) element of Q^{−1}(θ). The standard MUSIC algorithm requires only a scalar criterion evaluation for each θ; therefore, for an array with the same total number of elements, MI-MUSIC requires more computation per evaluation by a factor that grows with m_s, the number of elements per subarray.

The situation is somewhat different for Root-MI-MUSIC, whose computational load is primarily due to rooting a polynomial of degree 2 m_s Δ_{p−1}. For standard Root-MUSIC, the corresponding polynomial has degree 2(m − 1). If we compare Root-MI-MUSIC with Root-MUSIC assuming that the arrays used for each have roughly the same aperture and element density, then m_s Δ_{p−1} ≈ m, and the computational load for each would be about the same. However, if the first and last subarrays are separated by a large distance, then m_s Δ_{p−1} could be significantly larger than m, in which case Root-MI-MUSIC would require significantly more computation (although it would likely perform better due to the larger aperture).

The computational load of MI-MODE is typically larger than that of MI-MUSIC since it estimates the DOAs together with the unknown elements of the subarray response matrix Ā. It does this by minimizing the least-squares criterion in (42) in closed form, which requires more operations than the corresponding step in the standard MODE algorithm; the latter assumes a fully calibrated array and estimates only the parameters directly associated with the DOAs.

VII. SIMULATIONS

The output of an array composed of four identical subarrays of two elements each was simulated to compare the performance of MI-MUSIC, MI-Root-MUSIC, and MI-MODE with ESPRIT, MI-ESPRIT, and the alternating least-squares (ALS) method of [13], [18]. The two elements within each subarray are assumed to be separated by one quarter wavelength (λ/4). The subarrays are all collinear, with the first three subarrays separated by a common displacement and the first and fourth separated by a larger displacement, as depicted in Fig. 2. Under these assumptions, the steering vectors will be of the form

a(θ) = (v(θ) ⊗ I_2) ā(θ)   (50)

where ā(θ) is the (unknown) two-element subarray response and v(θ) is as in (10), evaluated at the subarray displacements shown in Fig. 2.


Fig. 2. Array used for the simulation examples.

Results for the algorithms were obtained using
• Case I: all four of the subarrays;
• Case II: only the first three subarrays (i.e., without elements 7 and 8).
Although the simulation data were generated assuming that every element in the array was identical, this fact was not exploited by any of the algorithms; only the array structure given by (50) was assumed. For Case I, there is no way for ESPRIT to form two identical subarrays that use all eight elements of the array (since we are not assuming that all elements in the array are identical). We note that there are “multiresolution” implementations of ESPRIT [28], [29] that attempt to remedy this deficiency, but for our purposes, we simply implement ESPRIT using only subarrays 1 and 4, which are separated by the largest displacement in the array. To overcome the inherent ambiguity involved in using subarrays separated by more than a half-wavelength, the sources were assumed to be close enough to broadside that they could be uniquely resolved. For Case II, ESPRIT was implemented with two overlapping subarrays composed of elements 1–4 and 3–6, respectively. Since the MI-MODE algorithm was only presented for the case of uniformly spaced subarrays, it was implemented for Case II only.

The ALS algorithm attempts a least-squares fit of the data to the following array response models:

Case I:  A = [Ā^T  (ĀΨ_1)^T  (ĀΨ_2)^T  (ĀΨ_3)^T]^T
Case II: A = [Ā^T  (ĀΨ_1)^T  (ĀΨ_2)^T]^T

where the Ψ_k are diagonal matrices that ALS leaves unconstrained, and thus, in its initial stage, it does not exploit knowledge of the subarray displacements. Consequently, the ALS estimates of the Ψ_k are unstructured (although diagonal), and care must be taken in deriving estimates of the DOAs from them. In the absence of noise, the fitted Ψ_k coincide with the Φ_k(θ) in (7), so several possibilities exist. In the simulations described below, the diagonal elements of one of the estimated Ψ_k matrices were used to determine the DOA estimates in each case. The DOA ambiguity associated with this approach was resolved as with ESPRIT. We note that providing ALS and ESPRIT with the information necessary to resolve the DOA ambiguity gives them a significant performance advantage over implementations using only the first three subarrays. If these algorithms had to base their DOA estimates on more closely spaced subarrays, their performance would be much poorer.
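For completeness, the following sketch constructs steering vectors with the structure assumed in (50): two-element subarrays with quarter-wavelength internal spacing, replicated at collinear subarray displacements. The displacement values shown are placeholders chosen only for illustration; Case II simply drops the fourth subarray (elements 7 and 8).

```python
import numpy as np

def sim_steering(theta_deg, deltas):
    """Steering vector with the structure of (50): quarter-wavelength element pair per subarray."""
    s = np.sin(np.deg2rad(theta_deg))
    abar = np.array([1.0, np.exp(1j * np.pi * 0.5 * s)])      # λ/4 spacing inside a subarray
    v = np.exp(1j * np.pi * np.concatenate(([0.0], np.asarray(deltas))) * s)
    return np.kron(v, abar)

deltas_case1 = [1.0, 2.0, 6.0]                    # placeholder displacements (half-wavelengths)
a_case1 = sim_steering(5.0, deltas_case1)         # all four subarrays (8 elements)
a_case2 = sim_steering(5.0, deltas_case1[:2])     # first three subarrays only (6 elements)
print(a_case1.shape, a_case2.shape)               # (8,) (6,)
```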




Fig. 3. Performance of ESPRIT, ALS, MI-MUSIC, and MI-ESPRIT as a function of SNR for Case I.

Since MI-MUSIC exploits the full structure of the array response matrix, there is no DOA ambiguity, and no prior information about the location of the sources is needed or used by the algorithm.

In each of the simulation examples outlined below, two sources were present, 100 samples of data were taken from the array, ESPRIT was used to initialize the MI-ESPRIT and ALS iterations, and algorithm performance for each case was calculated based on an average over 1000 independent Monte Carlo trials. There was virtually no difference in performance between the two different versions of MI-MUSIC in (22) and (23); therefore, only results for the linear constraint are shown in the plots. In no case were the simulated scenarios difficult enough to cause MI-MUSIC to be unable to resolve the source DOAs. Results for ALS and MI-ESPRIT are not shown on the plots for Case II since ALS performed virtually identically to ESPRIT, and MI-ESPRIT gave essentially the same results as MI-MODE.

1) Performance Versus SNR: In this example, two equipower sources were located at 5° and at a second nearby angle with respect to the array broadside. For Case I, the sources were uncorrelated, whereas for Case II, they were correlated with a correlation coefficient of 0.9 (with zero phase). The root mean square (RMS) DOA estimation error of the algorithms was calculated at various SNRs and is plotted in Figs. 3 and 4 for Cases I and II, respectively, together with the Cramér-Rao bound (CRB) associated with the two cases. For Case I, the performance of MI-ESPRIT and MI-MUSIC is essentially identical, except at the lowest SNRs, where MI-MUSIC has a lower (i.e., better) threshold but degrades more rapidly past the threshold. Both algorithms achieve the CRB for SNRs above about 0 dB, whereas the ESPRIT and ALS methods are significantly above the bound (by about 10 dB in SNR). The excellent performance of MI-MUSIC is not surprising in this case, as MUSIC has been shown to be asymptotically efficient for uncorrelated sources [30], [31]. For Case II in Fig. 4, MI-MODE achieves the CRB for SNRs at or above 0 dB.

Fig. 4. Performance of ESPRIT, MI-Root-MUSIC, and MI-MODE as a function of SNR for Case II.

Fig. 5. Performance of ESPRIT, ALS, MI-MUSIC, and MI-ESPRIT as a function of source separation for Case I.

Below 0 dB, the performance of MI-MODE drops below the CRB, primarily because of the large estimate bias in these cases. Due to the correlation of the sources, both ESPRIT and MI-Root-MUSIC are above the bound at all SNRs. If we define the “threshold” to be the point where the RMS error exceeds half the angular separation of the sources, then we see that the performance threshold for MI-MODE is nearly 10 dB lower than that for either ESPRIT or MI-Root-MUSIC.

2) Performance Versus Angular Separation: Similar results are observed when the angular separation between the sources is varied. Two correlated sources with SNRs of 10 dB were simulated, with the DOA of one source fixed at 5° and that of the other varied, beginning at 2°. As in the previous example, the correlation coefficient was chosen to be 0.9. The performance of the algorithms is depicted in Figs. 5 and 6 for Cases I and II, respectively. In Case I, for angular separations greater than about 6°, both MI-ESPRIT and MI-MUSIC achieve the CRB. In the toughest case (source separation of only 3°), MI-MUSIC has a notable advantage over MI-ESPRIT. Both ALS and ESPRIT are again significantly above the bound.




Fig. 6. Performance of ESPRIT, MI-Root-MUSIC, and MI-MODE as a function of source separation for Case II.

Fig. 7. Performance of ESPRIT, ALS, MI-MUSIC, and MI-ESPRIT versus correlation coefficient for Case I.

Fig. 8. Performance of ESPRIT, MI-Root-MUSIC, and MI-MODE versus correlation coefficient for Case II.

For Case II, MI-MODE achieves the bound for angular separations greater than about 7° (for points above the threshold region, where the CRB is less than half the angular separation), whereas ESPRIT and MI-Root-MUSIC are consistently above the bound due to the source correlation.

3) Performance Versus Correlation Coefficient: As seen in the previous examples, the advantage of MI-ESPRIT and MI-MODE is especially apparent when correlated sources are present. This example is similar to the first two, except that the two sources are held at fixed DOAs with 20-dB SNRs, and the magnitude of the correlation coefficient is varied between 0 and 1 (the phase of the correlation coefficient was assumed to be zero in all cases). The results are displayed in Figs. 7 and 8. The performance of MI-ESPRIT and MI-MODE is insensitive to correlation, and both achieve the CRB in all cases. The other algorithms begin to degrade when the sources are more than about 60% correlated.

VIII. CONCLUSION

We have presented generalizations of the MUSIC, Root-MUSIC, and MODE algorithms for DOA estimation that exploit arrays with multiple identical translated subarrays. Like the optimal MI-ESPRIT algorithm, the new MI-MUSIC approach is able to exploit the full invariance structure of the array, as well as the so-called unit circle constraint. However, it estimates the DOAs using a simple 1-D search, and, as such, enjoys a significant computational advantage over MI-ESPRIT. MI-Root-MUSIC and MI-MODE were presented only for the more restrictive case where the subarrays are uniformly spaced, but when applicable, they are able to obtain DOA estimates via polynomial rooting, without the need for any search procedure. In addition, MI-MODE shares the statistical optimality of MI-ESPRIT. Simulations showed that the MI-MUSIC and MI-Root-MUSIC algorithms have excellent performance for uncorrelated sources and that MI-MUSIC may enjoy better robustness than MI-ESPRIT in difficult threshold scenarios. For highly correlated sources, MI-ESPRIT and MI-MODE enjoy a significant performance advantage over MI-MUSIC and MI-Root-MUSIC, although these latter two algorithms typically outperform other suboptimal estimators (like ESPRIT or ALS) for the same problem. REFERENCES [1] A. J. Barabell, “Improving the resolution performance of eigenstructurebased direction-finding algorithms,” Proc. IEEE ICASSP, pp. 336–339, 1983. [2] B. D. Rao and K. V. S. Hari, “Performance analysis of root-MUSIC,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 1939–1949, Dec. 1989. [3] Y. Bresler and A. Macovski, “Exact maximum likelihood parameter estimation of superimposed exponential signals in noise,” IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP–34, pp. 1081–1089, Oct. 1986. [4] P. Stoica and K. Sharman, “Maximum likelihood methods for direction-of-arrival estimation,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, pp. 1132–1143, July 1990. [5] R. Roy and T. Kailath, “ESPRIT—Estimation of signal parameters via rotational invariance techniques,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 984–995, July 1989.




[6] C. Mathews and M. Zoltowski, “Eigenstructure techniques for 2-D angle estimation with uniform circular arrays,” IEEE Trans. Signal Processing, vol. 42, pp. 2395–2407, Sept. 1994. [7] A. van der Veen, P. Ober, and E. Deprettere, “Azimuth and elevation computation in high resolution DOA estimation,” IEEE Trans. Signal Processing, vol. 40, pp. 1828–1832, July 1992. [8] A. Swindlehurst and T. Kailath, “Azimuth/elevation direction finding using regular array geometries,” IEEE Trans. Aerosp. Eletron. Syst., vol. 29, pp. 145–156, Jan. 1993. [9] M. Clark and L. Scharf, “Two-dimensional modal analysis based on maximum likelihood,” IEEE Trans. Signal Processing, vol. 42, pp. 1443–1452, June 1994. [10] M. Zoltowski, M. Haardt, and C. Mathews, “Closed-form 2-D angle estimation with rectangular arrays in element space or beamspace via unitary ESPRIT,” IEEE Trans. Signal Processing, vol. 44, pp. 316–328, Feb. 1996. [11] K. T. Wong and M. D. Zoltowski, “Extended-aperture underwater acoustic multisource azimuth/elevation direction finding using uniformly but sparsely spaced vector hydrophones,” IEEE J. Oceanic Eng., vol. 22, pp. 659–672, Apr. 1997. [12] M. Haardt and J. Nossek, “Simultaneous schur decomposition of several nonsymmetric matrices to achieve automatic pairing in multidimensional harmonic retrieval problems,” IEEE Trans. Signal Processing, vol. 46, pp. 161–169, Jan. 1998. [13] R. Bro, N. Sidiropoulos, and G. Giannakis, “Optimal joint azimuth-elevation and signal array response estimation using parallel factor analysis,” in Proc. 32nd Asilomar Conf. Signals, Syst., Comput., 1998. [14] A. Swindlehurst, B. Ottersten, R. Roy, and T. Kailath, “Multiple invariance ESPRIT,” IEEE Trans. Signal Processing, vol. 40, pp. 867–881, Apr. 1992. [15] K. Wong and M. Zoltowski, “Closed-form multi-dimensional multi-invariance ESPRIT,” in Proc. ICASSP, Munich, Germany, 1997, pp. 3489–3492. [16] S.-J. Yu and J.-H. Lee, “Efficient eigenspace-based array signal processing using multiple shift-invariant subarrays,” IEEE Trans. Antennas Propagat., vol. 47, pp. 186–194, Jan. 1999. [17] R. Schmidt, “A signal subspace approach to multiple emitter location and spectral estimation,” Ph.D. dissertation, Stanford Univ., Stanford, CA, 1981. [18] N. Sidiropoulos, R. Bro, and G. Giannakis, “Parallel factor analysis in sensor array processing,” IEEE Trans. Signal Processing, vol. 48, pp. 2377–2388, Aug. 2000. [19] P. Strobach, “Bi-iteration multiple invariance subspace tracking and adaptive ESPRIT,” IEEE Trans. Signal Processing, vol. 48, pp. 442–456, Feb. 2000. [20] P. Stoica and A. Nehorai, “Performance study of conditional and unconditional direction-of-arrival estimation,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, pp. 1783–1795, Oct. 1990. [21] B. Ottersten, M. Viberg, P. Stoica, and A. Nehorai, “Exact and large sample ML techniques for parameter estimation and detection in array processing,” in Radar Array Processing, A. Haykin, A. Litva, and A. Shepherd, Eds, Berlin, Germany: Springer-Verlag, 1993, pp. 99–151. [22] P. Stoica and R. Moses, Introduction to Spectral Analysis. Upper Saddle River, NJ: Prentice-Hall, 1997. [23] K. T. Wong and M. Zoltowski, “Root-MUSIC-based azimuth-elevation angle-of-arrival estimation with uniformly spaced but arbitrarily oriented velocity hydrophones,” IEEE Trans. Signal Processing., vol. 47, pp. 3250–3260, Dec. 1999. [24] M. Hromcik and M. Sebek, “Numerical and symbolic computation of polynomial matrix determinant,” in Proc. 38th CDC Conf., Phoenix, AZ, Dec. 1999. [25] M. 
Sebek, H. Kwakernaak, D. Henrion, and S. Pejchova, “Recent progress in polynomial methods and polynomial toolbox for matlab version 2.0,” in Proc. 37th CDC Conf., Tampa, FL, Dec. 1998, pp. 3661–3668. [26] M. Viberg, B. Wahlberg, and B. Ottersten, “Analysis of state space system identification methods based on instrumental variables and subspace fitting,” Automatica, vol. 33, no. 9, pp. 1603–1616, 1997. [27] A. Gorokhov and P. Stoica, “Generalized quadratic minimization and blind multichannel deconvolution,” IEEE Trans. Signal Processing, vol. 48, pp. 201–213, Jan. 2000. [28] A. Lemma, A. van der Veen, and E. Deprettere, “On the multi-resolution ESPRIT algorithm,” in Proc. 9th Workshop Statist. Signal Array Process., Portland, OR, 1998, pp. 248–251. [29] K. T. Wong and M. D. Zoltowski, “Direction-finding with sparse rectangular dual-size spatial invariance array,” IEEE Trans. Aerosp. Electron. Syst., vol. 34, pp. 1320–1335, Oct. 1998.

[30] P. Stoica and A. Nehorai, "MUSIC, maximum likelihood, and Cramér-Rao bound," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 720–741, May 1989.
[31] P. Stoica and A. Nehorai, "MUSIC, maximum likelihood, and Cramér-Rao bound: Further results and comparisons," IEEE Trans. Acoust., Speech, Signal Processing, vol. 38, pp. 2140–2150, Dec. 1990.

A. Lee Swindlehurst (SM’99) received the B.S. (summa cum laude) and M.S. degrees in electrical engineering from Brigham Young University (BYU), Provo, UT, in 1985 and 1986, respectively, and the Ph.D. degree in electrical engineering from Stanford University, Stanford, CA, in 1991. From 1983 to 1984, he was with Eyring Research Institute, Provo, as a Scientific Programmer. From 1984 to 1986, he was a Research Assistant with the Department of Electrical Engineering, BYU, working on various problems in signal processing and estimation theory. He was awarded an Office of Naval Research Graduate Fellowship between 1985 and 1988, and during most of that time, he was with the Information Systems Laboratory, Stanford University. From 1986 to 1990, he was also with ESL, Inc., Sunnyvale, CA, where he was involved in the design of algorithms and architectures for several radar and sonar signal processing systems. He joined the faculty of the Department of Electrical and Computer Engineering at BYU in 1990, where he is a Full Professor. From 1996 to 1997, he held a joint appointment as a Visiting Professor at both Uppsala University, Uppsala, Sweden, and at the Royal Institute of Technology, Stockholm, Sweden. His research interests include sensor array signal processing for radar and wireless communications, detection and estimation theory, and system identification. Dr. Swindlehurst is a past Associate Editor for the IEEE TRANSACTIONS ON SIGNAL PROCESSING, a past member of the Statistical Signal and Array Processing Technical Committee of the IEEE Signal Processing Society, and past Vice-Chair of the Signal Processing for Communications Technical Committee within the same society. He has served as the Technical Program Chair for the 1998 IEEE Digital Signal Processing Workshop and for the 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. He received the 2000 IEEE W. R. G. Baker Prize Paper Award and is co-author of a paper that received a Signal Processing Society Young Author Best Paper Award in 2001.

Petre Stoica (F’94) received the D.Sc. degree in automatic control from the Bucharest Polytechnic Institute (BPI), Bucharest, Romania, in 1979 and an honorary doctorate degree in science from Uppsala University (UU), Uppsala, Sweden, in 1993. He is a Professor of System Modeling with the Department of Systems and Control at UU. Previously, he was a Professor of Signal Processing at BPI. He held longer visiting positions with the Eindhoven University of Technology, Eindhoven, The Netherlands; Chalmers University of Technology, Göteborg, Sweden (where he held a Jubilee Visiting Professorship); UU; the University of Florida, Gainesville; and Stanford University, Stanford, CA. His main scientific interests are in the areas of system identification, time series analysis and prediction, statistical signal and array processing, spectral analysis, wireless communications, and radar signal processing. He has published seven books, ten book chapters, and some 450 papers in archival journals and conference records on these topics. The most recent book he co-authored is entitled Introduction to Spectral Analysis (Englewood Cliffs, NJ: Prentice-Hall, 1997). Recently, he co-edited two books on signal processing advances in wireless and mobile communications (Englewood Cliffs, NJ: Prentice-Hall, 2000). He is on the editorial boards of five journals in the field: Signal Processing; Journal of Forecasting; Circuits, Signals, and Signal Processing; Multidimensional Systems and Signal Processing; and Digital Signal Processing: A Review Journal. He was a guest co-editor for several special issues on system identification, signal processing, spectral analysis, and radar for some of the aforementioned journals as well as for Proceedings of the IEE.




Dr. Stoica was co-recipient of the 1989 ASSP Society Senior Award for a paper on statistical aspects of array signal processing and recipient of the 1996 Technical Achievement Award of the IEEE Signal Processing Society for fundamental contributions to statistical signal processing with applications in time series analysis, system identification, and array processing. In 1998, he received a Senior Individual Grant Award from the Swedish Foundation for Strategic Research. He was also co-recipient of the 1998 EURASIP Best Paper Award for Signal Processing for a work on parameter estimation of exponential signals with time-varying amplitude, a 1999 Signal Processing Society Best Paper Award for a paper on parameter and rank estimation of reduced-rank regression, a 2000 IEEE Third Millenium Medal, and the 2000 IEEE W. R. G. Baker Paper Prize Award for a work on maximum likelihood methods for radar. He was a member of the international program committees of many topical conferences. From 1981 to 1986, he was a Director of the International Time Series Analysis and Forecasting Society, and he has been a Member of the IFAC Committee on Modeling, Identification, and Signal Processing since 1994. He is also a Member of the Romanian Academy and a Fellow of the Royal Statistical Society.

Magnus Jansson (M’98) was born in Enköping, Sweden, in 1968. He received the M.S., Tech.Lic., and Ph.D. degrees in electrical engineering from the Royal Institute of Technology (KTH), Stockholm, Sweden, in 1992, 1995, and 1997, respectively. From 1998 to 1999, he was with the Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis. He is currently a Research Associate with the Department of Signals, Sensors and Systems, Royal Institute of Technology, Stockholm, Sweden. His research interests include statistical signal processing, time series analysis, and system identification.
