MINIMUM-VARIANCE PSEUDO-UNBIASED REDUCED-RANK ESTIMATOR (MV-PURE) AND ITS APPLICATIONS TO ILL-CONDITIONED INVERSE PROBLEMS

Tomasz Piotrowski and Isao Yamada
Department of Communications and Integrated Systems (S3-60), Tokyo Institute of Technology, Tokyo 152-8552, Japan
Phone: +81-3-5734-2503, Fax: +81-3-5734-2905
E-mail addresses: {tpiotrowski, isao}@comm.ss.titech.ac.jp

ABSTRACT

This paper presents a mathematically novel estimator for the linear regression model, named the Minimum-Variance Pseudo-Unbiased Reduced-Rank Estimator (MV-PURE), designed specifically for applications where the model matrix under consideration is ill-conditioned and auxiliary knowledge on the unknown deterministic parameter vector is available in the form of linear constraints. We demonstrate the closed algebraic form of the MV-PURE estimator and provide a numerical example of its application, where we employ our estimator in the ill-conditioned problem of reconstructing a 2-D image subject to linear constraints from a blurred, noisy observation. It is shown that the MV-PURE estimator achieves a much smaller MSE for all values of SNR not only than the minimum-variance unbiased Gauss-Markov (BLUE) estimator, but also than the minimum-variance conditionally unbiased affine estimator subject to linear restrictions and the recently introduced generalized Marquardt reduced-rank estimator. In particular, it will be shown that all of the aforementioned estimators are particular cases of the MV-PURE estimator if the rank constraint on the estimator and/or the linear constraints on the unknown deterministic vector of parameters are not imposed.

1. INTRODUCTION

Since the seminal work of Gauss [1], the problem of linear estimation of an unknown deterministic vector of parameters has received much attention in the literature. The main reason is certainly the huge variety of applications of the linear regression model, ranging from economics or medicine to image and signal processing. Such a wide variety of applications implies that researchers encounter very different problems when accurately estimating the unknown quantity at hand, and thus various solutions have been proposed over the years to alleviate them. In particular, a problem prevailing among almost all applications of the linear regression model is the possibility of the design matrix under consideration being ill-conditioned, which immediately rules out the well-known unbiased estimators, the least squares estimator and the Gauss-Markov (BLUE) estimator, as reasonable solutions.¹

¹ See e.g. [2–4] for a discussion of the inadequacy of unbiased estimators for ill-conditioned problems.

Thus, to circumvent this difficulty, a variety of biased estimators have been proposed which achieve superior performance in ill-conditioned cases (in the sense of minimizing the mean square error, the most universally applied measure of performance of an estimator) compared to the unbiased estimators. These include, for example, the widely applied ridge regression estimator [2, 3], which is based on Tikhonov's regularization technique [5–8], the rank-shaping estimator [9] and the rank-reduction estimators [4, 10]. Intriguingly, despite their superb performance in a variety of applications, the mathematical background of the aforementioned estimators is not directly related to the mean square error expression, even though this is the very quantity they aim to minimize. Clearly, the main reason is that the mean square error expression contains the unknown deterministic parameter vector, and therefore it cannot be minimized directly. On the other hand, as will be shown below, the MV-PURE estimator is the solution to a problem arising from a natural inequality related to the mean square error expression, which makes our approach most natural and assures excellent performance of the obtained solution.

Ill-conditioning aside, another common setting in many practical situations is the possibility of obtaining a priori knowledge on the unknown deterministic parameter vector to be estimated. In particular, in this paper we focus on linear constraints imposed on the unknown vector, which covers various scenarios occurring in practical applications, e.g. exact knowledge of a given component of the unknown vector, formulating a hypothesis on a subvector of it, or knowledge of the ratios between certain coefficients (see e.g. [11]). Moreover, since the pioneering work of Toro-Vizcarrondo and Wallace [12] (see also [10]) it is well known that under certain conditions, imposing linear constraints leads to a reduction of the mean square error of the estimator even if the constraint set does not contain the unknown deterministic parameter vector to be estimated. Indeed, an efficient estimator, the minimum-variance conditionally unbiased affine estimator subject to linear constraints [13], has been developed specifically to incorporate auxiliary knowledge in the form of linear constraints.

However, the aforementioned estimator does not incorporate the reduced-rank approach; hence, just like the least squares and the Gauss-Markov (BLUE) estimators, it is inherently unsuitable for ill-conditioned problems. Such a situation clearly calls for research on developing an estimator which will not only be optimal among the reduced-rank estimators, but will also allow a priori knowledge on the unknown deterministic parameter vector to be incorporated in the form of linear constraints. Indeed, the MV-PURE estimator directly includes such constraints in its formulation, which guarantees the optimal usage of this auxiliary information.

The first goal of this paper is to provide firm mathematical reasoning behind the introduction of the novel MV-PURE estimator and to present the main result, a closed algebraic form of the MV-PURE estimator. The second goal is to demonstrate its usefulness in a practical application, where we employ our estimator in the ill-conditioned problem of reconstructing a 2-D image subject to linear constraints from a blurred, noisy observation, and obtain a significantly lower MSE for all values of SNR than all previously known estimators under consideration. Finally, we demonstrate that the MV-PURE estimator is the optimal extension of the Gauss-Markov (BLUE) estimator, and in particular we show that the minimum-variance conditionally unbiased affine estimator subject to linear restrictions [13] and the generalized Marquardt reduced-rank estimator [10] are particular cases of the MV-PURE estimator if the rank constraint on the estimator or the linear constraints on the unknown deterministic vector of parameters are not imposed.

This paper is a preliminary short version of [14] and contains partial results of the research project initiated by the second author in [15], where the term "low-rank" was used in place of "reduced-rank" to denote the reduced-rank approach employed in our research. An application of the MV-PURE estimator to a wireless communications problem is given in [16].

2. PRELIMINARIES

We begin by giving an introduction to the ill-conditioned linear inverse problem where linear constraints on the unknown deterministic parameter vector to be estimated are available. Furthermore, for a clear comparison with our method, we first present the known estimators used in such problems, and afterwards give the mathematical background behind the development of the MV-PURE estimator.

The linear statistical model assumes that we can observe a data vector y ∈ R^n of the form:

y = Lβ + ε,    (1)

where L ∈ R^{n×m} is a known design matrix (model matrix) of full column rank m, β ∈ R^m is an unknown deterministic parameter vector to be estimated, and ε ∈ R^n is a random vector with zero mean and positive definite covariance matrix E(εε^t) = Q ∈ R^{n×n}. Note that y is itself a random vector due to the presence of noise in the above model, and can be viewed as the outcome of an inexact measurement of the deterministic vector Lβ.

If linear constraints on β are available, they can be conveniently cast into the following form: let A ∈ R^{s×m}, where rk(A) = a ≤ s < m, be a given matrix, and let b ∈ R^s be a given vector. Then, we require the unknown deterministic parameter vector β ∈ R^m to be an element of the following set:

V = {β ∈ R^m : Aβ = b}.    (2)

Thus, instead of model (1), we now consider an alternative linear regression model of the following form:

y = Lβ + ε,  β ∈ V.    (3)

2.1. Ill-conditioned regression model (1)

The problem of developing a linear estimator of the unknown deterministic parameter vector β ∈ R^m, based on available observations y ∈ R^n, i.e. finding a constant matrix Φ ∈ R^{m×n}, called here an estimator, such that:

β̂ = Φy    (4)

is a well-behaved estimate of β, has been one of the fundamental statistical problems since the seminal work of Gauss [1]. The most widely applied measure of the performance of Φ has been the mean square error of the obtained estimate β̂:

J(Φ) = E‖β̂ − β‖² = E‖Φy − E(Φy)‖² + ‖E(Φy) − β‖² = tr(ΦQΦ^t) + ‖(ΦL − I_m)β‖²,    (5)

where the first term, tr(ΦQΦ^t), is the variance, the second term, ‖(ΦL − I_m)β‖², is the squared bias, E denotes expectation, and ‖·‖ denotes the Euclidean norm.
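For illustration, the decomposition in (5) can be checked numerically. The following minimal NumPy sketch (with arbitrarily chosen, hypothetical dimensions and matrices) compares a Monte Carlo estimate of E‖Φy − β‖² with tr(ΦQΦ^t) + ‖(ΦL − I_m)β‖² for an arbitrary Φ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5                        # hypothetical dimensions
L = rng.standard_normal((n, m))     # design matrix (full column rank with probability 1)
beta = rng.standard_normal(m)       # the "unknown" parameter vector, fixed for the experiment
C = rng.standard_normal((n, n))
Q = C @ C.T + np.eye(n)             # positive definite noise covariance
Phi = rng.standard_normal((m, n))   # an arbitrary linear estimator

# Right-hand side of (5): variance + squared bias.
mse_theory = np.trace(Phi @ Q @ Phi.T) + np.linalg.norm(Phi @ L @ beta - beta) ** 2

# Monte Carlo estimate of E||Phi y - beta||^2.
Q_chol = np.linalg.cholesky(Q)
trials = 100_000
eps = rng.standard_normal((trials, n)) @ Q_chol.T   # noise realizations with covariance Q
errs = (L @ beta + eps) @ Phi.T - beta              # Phi y - beta, one row per trial
mse_mc = np.mean(np.sum(errs ** 2, axis=1))

print(f"theory: {mse_theory:.2f}   Monte Carlo: {mse_mc:.2f}")   # the two agree closely
```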

However, as can be seen from expression (5), the fundamental problem is that it explicitly depends on the unknown vector of parameters β, hence it is impossible to minimize J(Φ) globally over Φ. Therefore, a variety of estimators have been proposed which aim at minimizing the obtained mean square error indirectly. In particular, for convenience of presentation, let us introduce a singular value decomposition (SVD) of L of the form:

L = UΣV^t = Σ_{i=1}^{m} σ_i u_i v_i^t.    (6)

Then, the least squares estimator [1, 17] is defined simply as the Moore-Penrose pseudoinverse of L:²

Φ_ls = (L^t L)^{-1} L^t = L^† = V Σ^† U^t = Σ_{i=1}^{m} (1/σ_i) v_i u_i^t.    (7)

² Without loss of generality, we assume that all SVDs considered have singular values organized in nonincreasing order. For a complete discussion of the singular value decomposition and its relation to the Moore-Penrose pseudoinverse, see e.g. [8].
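For illustration, a minimal sketch of (7) (hypothetical dimensions and matrices): the least squares estimator is formed both from the normal equations and from the SVD, and the uniform unbiasedness Φ_ls L = I_m is checked:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4
L = rng.standard_normal((n, m))                     # full column rank with probability 1

# Route 1: normal equations, Phi_ls = (L^t L)^{-1} L^t.
Phi_ls_ne = np.linalg.solve(L.T @ L, L.T)

# Route 2: SVD, Phi_ls = V Sigma^dagger U^t = sum_i (1/sigma_i) v_i u_i^t.
U, s, Vt = np.linalg.svd(L, full_matrices=False)    # singular values in nonincreasing order
Phi_ls_svd = Vt.T @ np.diag(1.0 / s) @ U.T

print(np.allclose(Phi_ls_ne, Phi_ls_svd))           # True: the two routes coincide
print(np.allclose(Phi_ls_svd @ L, np.eye(m)))       # True: Phi_ls L = I_m (uniform unbiasedness)
```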

Note that Φ_ls L = I_m; thus, in view of (5), the least squares estimator (7) is a uniformly unbiased estimator, since ‖(Φ_ls L − I_m)β‖² = 0 for all β ∈ R^m. Hence, it is only natural to inquire about the existence of an estimator which is not only uniformly unbiased, but also minimizes the variance among the unbiased estimators. In other words, we ask for a solution of the following optimization problem:

minimize  tr(ΦQΦ^t)
subject to  ΦL = I_m.    (8)

Indeed, there exists a unique solution to problem (8), the Gauss-Markov (BLUE) estimator [17–19], which, upon setting L̃ = Q^{-1/2}L, is given by:

Φ_GM = (L̃^t L̃)^{-1} L̃^t Q^{-1/2} = L̃^† Q^{-1/2}.    (9)

Clearly, Φ_GM satisfies J(Φ_GM) ≤ J(Φ_ls). However, for white noise, Q = σ²I_n, we have Φ_GM = Φ_ls, and from (5) and (7) we obtain that the whole mean square error is the variance term:

J(Φ_ls) = J(L^†) = σ² Σ_{i=1}^{m} 1/σ_i².
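A minimal sketch of (9) in the same hypothetical setting: whiten with the symmetric inverse square root Q^{-1/2}, pseudo-invert L̃ = Q^{-1/2}L, and verify the constraint of problem (8):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 12, 4
L = rng.standard_normal((n, m))
C = rng.standard_normal((n, n))
Q = C @ C.T + 0.1 * np.eye(n)                   # positive definite noise covariance

# Symmetric inverse square root Q^{-1/2} via the eigendecomposition of Q.
w, E = np.linalg.eigh(Q)
Q_inv_sqrt = E @ np.diag(w ** -0.5) @ E.T

L_tilde = Q_inv_sqrt @ L                        # whitened design matrix
Phi_GM = np.linalg.pinv(L_tilde) @ Q_inv_sqrt   # (9): L~^dagger Q^{-1/2}

print(np.allclose(Phi_GM @ L, np.eye(m)))       # True: Phi_GM is feasible for (8)
print(np.trace(Phi_GM @ Q @ Phi_GM.T))          # its variance, minimal among unbiased estimators
```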

We observe that the insistence on unbiasedness makes these estimators drastically unsuitable for applications where the linear regression model (1) is ill-conditioned, i.e. when L has some vanishingly small singular values σ_{r+1}, ..., σ_m. Thus, for ill-conditioned cases, biased estimators may be expected to yield significantly improved performance over their unbiased counterparts.

Therefore, to avoid the inadequacy posed by the very small singular values σ_{r+1}, ..., σ_m, a reduced-rank extension of the least squares estimator Φ_ls (7) has been proposed by Marquardt [4] for the white-noise case Q = σ²I_n as follows:

Φ_M = Σ_{i=1}^{r} (1/σ_i) v_i u_i^t = V_r V_r^t Φ_ls,    (10)

where V_r = (v_1, ..., v_r). This estimator is a natural choice in view of the Schmidt Approximation Theorem [8], since its Moore-Penrose pseudoinverse is closest to L in the sense of the Frobenius norm among matrices constrained to have rank at most r. Indeed, it has been shown in [4] that under the following reasonable assumption for ill-conditioned cases:

Σ_{i=r+1}^{m} 1/σ_i²  >  ‖β‖²/σ²,

we have J(Φ_M) < J(Φ_ls).

Recently, Marquardt's idea has been extended by Chipman [10] to the general case of Q being any positive definite matrix, by whitening the noise, which simply amounts to considering the following weighted model:

Q^{-1/2} (y = Lβ + ε).

Indeed, Chipman has demonstrated that the following generalized Marquardt reduced-rank estimator:

Φ_C = Ṽ_r Ṽ_r^t L̃^† Q^{-1/2} = Ṽ_r Ṽ_r^t Φ_GM,    (11)

where L̃ = Q^{-1/2}L has the SVD L̃ = Ũ Σ̃ Ṽ^t and Ṽ_r = (ṽ_1, ..., ṽ_r), achieves under mild conditions a lower mean square error than the Gauss-Markov (BLUE) estimator (see [10]).
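Both reduced-rank estimators amount to truncating an SVD; a minimal sketch (hypothetical matrices) of (10) and (11):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 12, 6, 3
L = rng.standard_normal((n, m))
C = rng.standard_normal((n, n))
Q = C @ C.T + 0.1 * np.eye(n)

# Marquardt's reduced-rank least squares (10): keep only the r largest singular values of L.
U, s, Vt = np.linalg.svd(L, full_matrices=False)
Phi_M = Vt.T[:, :r] @ np.diag(1.0 / s[:r]) @ U[:, :r].T

# Generalized Marquardt estimator (11): whiten first, then truncate the SVD of L~ = Q^{-1/2} L.
w, E = np.linalg.eigh(Q)
Q_inv_sqrt = E @ np.diag(w ** -0.5) @ E.T
L_tilde = Q_inv_sqrt @ L
_, _, Vt_tilde = np.linalg.svd(L_tilde, full_matrices=False)
V_r = Vt_tilde.T[:, :r]                            # \tilde{V}_r
Phi_GM = np.linalg.pinv(L_tilde) @ Q_inv_sqrt      # Gauss-Markov (9)
Phi_C = V_r @ V_r.T @ Phi_GM                       # (11)

print(np.linalg.matrix_rank(Phi_M), np.linalg.matrix_rank(Phi_C))   # both equal r
```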

We observe therefore that the estimators Φ_M (10) and Φ_C (11) are reduced-rank generalizations of the estimators Φ_ls (7) and Φ_GM (9), respectively. However, despite the great ingenuity behind them, they lack a discussion of the optimality of their performance among all reduced-rank estimators. As expected, this is due to the fact that their mathematical foundations are not related directly to the task of minimizing the mean square error. On the other hand, we will see below that the MV-PURE estimator provides an answer to the question of the optimal reduced-rank estimator, thus shedding new insight onto the statistical properties of the aforementioned estimators.

2.2. Linearly restricted regression model (3)

Firstly, let us emphasize the main difference we encounter in designing estimators of the unknown vector of parameters β ∈ R^m in the linear regression model (3) compared to the unconstrained model (1): since the constraint set V (2) is a linear variety (a translation of a subspace of R^m by a constant vector), we may consider using an affine estimator Ψ(y) : R^n → R^m in place of the previously employed linear estimator Φ. Thus, our goal is now to find an affine estimator Ψ(y) such that:

β̂ = Ψ(y),  β̂ ∈ V.    (12)

Indeed, an efficient affine estimator, the minimum-variance conditionally unbiased affine estimator subject to linear restrictions, has been proposed in [13] (see also [10]) for the case where the matrix A in (2) is of full row rank a = s:

Ψ_CR(y) = (I_m − A^‡ A) Φ_GM y + A^‡ b,    (13)

where:

A^‡ = (L̃^t L̃)^{-1} A^t [A (L̃^t L̃)^{-1} A^t]^{-1}.

Particular conditions upon which the affine estimator Ψ_CR(y) achieves a lower mean square error than the Gauss-Markov (BLUE) estimator Φ_GM have been given in [10]. However, as the affine estimator Ψ_CR does not incorporate the reduced-rank technique, and since with no linear constraints imposed on the unknown vector of parameters β ∈ R^m it can be immediately verified that Ψ_CR = Φ_GM, it inherently shares the same inadequacy for ill-conditioned problems as the unbiased estimators defined for the unconstrained linear regression model (1).
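A minimal sketch of (13) (hypothetical matrices): build A^‡, apply Ψ_CR to a dummy observation, and check that the estimate satisfies the restrictions Aβ̂ = b:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, s = 15, 6, 2
L = rng.standard_normal((n, m))
C = rng.standard_normal((n, n))
Q = C @ C.T + 0.1 * np.eye(n)
A = rng.standard_normal((s, m))                 # full row rank constraint matrix (a = s)
b = rng.standard_normal(s)

# Whitened design matrix and the Gauss-Markov estimator (9).
w, E = np.linalg.eigh(Q)
Q_inv_sqrt = E @ np.diag(w ** -0.5) @ E.T
L_tilde = Q_inv_sqrt @ L
Phi_GM = np.linalg.pinv(L_tilde) @ Q_inv_sqrt

# A^double-dagger of (13): (L~^t L~)^{-1} A^t [A (L~^t L~)^{-1} A^t]^{-1}.
G = np.linalg.inv(L_tilde.T @ L_tilde)
A_dd = G @ A.T @ np.linalg.inv(A @ G @ A.T)

def psi_CR(y):
    """Minimum-variance conditionally unbiased affine estimate subject to A beta = b, cf. (13)."""
    return (np.eye(m) - A_dd @ A) @ (Phi_GM @ y) + A_dd @ b

y = rng.standard_normal(n)                      # a dummy observation
beta_hat = psi_CR(y)
print(np.allclose(A @ beta_hat, b))             # True: the estimate lies in the constraint set V
```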

2.3. Introduction of MV-PURE estimator

Note firstly that all elements of the constraint set V (2) can be written in the following form (see e.g. [8]):

β = β′ + A^† b,  β′ ∈ N(A).    (14)

Thus, upon inserting the above expression into (3), we obtain the following equivalent linear regression model:

y′ = Lβ′ + ε,  β′ ∈ N(A),    (15)

where y′ = y − L A^† b. In view of Section 2.2, we will design an affine estimator Ψ(y), which can be written explicitly as:

Ψ(y′) = Φ y′ + c,    (16)

where Φ ∈ R^{m×n} is a matrix and c ∈ R^m is a vector, both to be determined. From (14)–(16), we observe that we should set c = A^† b in our estimator (16), and find a matrix Φ ∈ R^{m×n} such that β̃ = Φ y′ ∈ N(A) is an estimate of β′ in the linear regression model (15), in order to obtain an estimate of β ∈ V of the form:

Ψ(y − L A^† b) = Ψ(y′) = β̃ + A^† b = β̂ ∈ V.    (17)

We see therefore that in principle the problem has been reduced to finding an estimator Φ of β′ in model (15), with the additional constraint β̃ = Φ y′ ∈ N(A). Since the random variable y′ can clearly take any value in R^n, this constraint can be written as:

R(Φ) ⊆ N(A),    (18)

where by R(Φ) we mean the range of Φ. Thus, proceeding analogously as in (5), we obtain the following expression for the mean square error of our estimator:

J(Φ) = E‖β̃ − β′‖² = E‖Φy′ − E(Φy′)‖² + ‖E(Φy′) − β′‖² = tr(ΦQΦ^t) + ‖(ΦL − I_m)β′‖²,    (19)

where, as in (5), the first term is the variance and the second term is the squared bias. To remove the dependence on the unknown vector of parameters β′ in the above expression, we use the following inequality:

bias² = ‖(ΦL − I_m)β′‖² = ‖(Φ L P_N(A) − P_N(A)) z‖² ≤ ‖Φ L P_N(A) − P_N(A)‖²_F ‖z‖²,  z ∈ R^m,    (20)

where we have used the fact that β′ ∈ N(A), hence there exists z ∈ R^m such that β′ = P_N(A) z. Thus, we can exploit the reduced-rank approach discussed above as follows: let

X_r = {X_r ∈ R^{m×n} : rk(X_r) ≤ r},  r ≤ min(m, n),

and let us define:

ζ(Φ) = ‖Φ L P_N(A) − P_N(A)‖²_F,    (21)

and, for a given r ≤ rk(P_N(A)), let us set:

P_r = {Φ*_r ∈ X_r : ζ(Φ*_r) ≤ ζ(Φ_r) ∀ Φ_r ∈ X_r}.    (22)

Then a solution Φ_r^mvp of the problem:

minimize  tr[Φ*_r Q (Φ*_r)^t]
subject to  Φ*_r ∈ P_r and R(Φ*_r) ⊆ N(A)    (23)

induces the minimum-variance pseudo-unbiased reduced-rank estimator (MV-PURE) for the linear regression model (3), given by:

Ψ^mvp(y′) = Φ_r^mvp y′ + A^† b.    (24)

Let us close this section with the following important observations: since the set of rank-constrained matrices X_r is clearly non-convex, the two-stage optimization problem (23) is non-convex, and therefore we could not use the powerful methods of double-stage convex optimization (see e.g. [20–22]). However, we will show that for all rank constraints r ≤ rk(P_N(A)) a solution to problem (23) exists, as we will give an explicit algebraic form of such a solution.

3. CLOSED ALGEBRAIC FORM OF MV-PURE ESTIMATOR

We present the following result, which gives a closed algebraic form of the solution of problem (23), which induces the affine MV-PURE estimator (24).

Theorem 1

1. Let us set the rank constraint r < m − a, and let L′ = Q^{-1/2} L P_N(A), rk(L′) = m − a. Let us furthermore set an SVD of A = M Υ N^t, so that:

P_N(A) = I_m − A^† A = N′ [I_{m−a} 0; 0 0] N′^t,

where N′ = (N_{m−a}  N_a) with N_{m−a} = (n_{a+1}, ..., n_m) and N_a = (n_1, ..., n_a). Moreover, let us set:

K = [N′^t (L′)^† (L′^t)^† N′]_{sub((m−a)×(m−a))},

i.e. the leading (m−a)×(m−a) submatrix. Then K is positive definite, and Φ_r^mvp is a solution to problem (23) if and only if Φ_r^mvp is of the following form:

Φ_r^mvp = N′ [S_r S_r^t 0; 0 0] N′^t (L′)^† Q^{-1/2},    (25)

where K = S∆S^t is any eigenvalue decomposition of K with eigenvalues organized in nondecreasing order, 0 < δ_1 ≤ δ_2 ≤ ··· ≤ δ_{m−a}, and where we denoted S_r = (s_1, ..., s_r). The variance of Φ_r^mvp is given by:

tr[Φ_r^mvp Q (Φ_r^mvp)^t] = Σ_{i=1}^{r} δ_i.    (26)

Moreover, if δ_r ≠ δ_{r+1}, the solution Φ_r^mvp is unique.

2. For no rank constraint imposed, i.e. when r = m − a, the solution to problem (23) is uniquely given by:

Φ_{m−a}^mvp = P_N(A) (L′)^† Q^{-1/2},    (27)

with variance tr[Φ_{m−a}^mvp Q (Φ_{m−a}^mvp)^t] = Σ_{i=1}^{m−a} δ_i.

3. For all r ≤ m − a, we have:

rk(Φ_r^mvp) = r,    (28)

‖Φ_r^mvp L P_N(A) − P_N(A)‖²_F = m − a − r.    (29)
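A minimal numerical sketch of Theorem 1 (hypothetical ill-conditioned data, not an implementation of the experiments reported below): Φ_r^mvp is formed via (25), and the properties (28) and (29) are checked:

```python
import numpy as np

def mv_pure(L, Q, A, b, r):
    """Sketch of the MV-PURE estimator of Theorem 1: returns (Phi_r of (25), offset A^dagger b of (24))."""
    m = L.shape[1]
    a = np.linalg.matrix_rank(A)
    # Symmetric inverse square root Q^{-1/2}.
    w, E = np.linalg.eigh(Q)
    Q_inv_sqrt = E @ np.diag(w ** -0.5) @ E.T
    # Orthonormal basis N_{m-a} of N(A), taken from the SVD A = M Upsilon N^t.
    _, _, Nt = np.linalg.svd(A)
    N_ma = Nt[a:, :].T                              # columns n_{a+1}, ..., n_m
    P_NA = N_ma @ N_ma.T                            # projector onto N(A)
    # L' = Q^{-1/2} L P_{N(A)} and the matrix K of Theorem 1 (its leading (m-a)x(m-a) block).
    L_p = Q_inv_sqrt @ L @ P_NA
    L_p_pinv = np.linalg.pinv(L_p)
    K = N_ma.T @ L_p_pinv @ L_p_pinv.T @ N_ma
    # Eigenvectors of K for the r smallest eigenvalues (nondecreasing order, as in (25)).
    _, S = np.linalg.eigh(K)
    S_r = S[:, :r]
    Phi_r = N_ma @ S_r @ S_r.T @ N_ma.T @ L_p_pinv @ Q_inv_sqrt   # (25)
    return Phi_r, np.linalg.pinv(A) @ b

# Hypothetical ill-conditioned example.
rng = np.random.default_rng(5)
n, m, s, r = 20, 8, 2, 4
U, _, Vt = np.linalg.svd(rng.standard_normal((n, m)), full_matrices=False)
L = U @ np.diag(np.logspace(0, -6, m)) @ Vt         # rapidly decaying singular values
Q = 0.01 * np.eye(n)
A = rng.standard_normal((s, m))                     # full row rank (a = s = 2)
b = rng.standard_normal(s)

Phi_r, offset = mv_pure(L, Q, A, b, r)
P_NA = np.eye(m) - np.linalg.pinv(A) @ A
print(np.linalg.matrix_rank(Phi_r))                              # r, cf. (28)
print(np.linalg.norm(Phi_r @ L @ P_NA - P_NA, "fro") ** 2)       # m - a - r, cf. (29)

# Using the estimator as in (24): beta_hat = Phi_r (y - L A^dagger b) + A^dagger b.
beta_true = offset + P_NA @ rng.standard_normal(m)               # some beta satisfying A beta = b
y = L @ beta_true + rng.multivariate_normal(np.zeros(n), Q)
beta_hat = Phi_r @ (y - L @ offset) + offset
print(np.round(np.abs(beta_hat - beta_true), 3))                 # estimation error per coordinate
```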

Figure 1. An original image to be restored.

4. NUMERICAL EXAMPLE

In this section we provide a simple numerical example of an application of the MV-PURE estimator, where its superior performance over all previously known estimators under consideration in this paper is demonstrated. Moreover, we heuristically propose two new projected estimators and show that they also give a reasonable advantage in performance over their original counterparts, as in our example linear constraints on the unknown deterministic parameter vector are available.

We considered the problem of restoring a 2-D image from an observation degraded by atmospheric turbulence blur and contaminated with correlated Gaussian noise.³ The sparse blurring matrix is a realization of the MATLAB algorithm provided in [23] (see also [24]), and the small 8 × 8 pixel image to be restored is shown in Fig. 1. This image was originally stored in matrix form as P ∈ R^{8×8}, where grayscale pixels were represented as numbers in [0, 1], with 0 representing black and 1 representing white pixels. To accommodate linear constraints in our example, we have assumed that P is an element of the linear variety V of all images having the first two rows of white pixels. These linear constraints can be easily expressed as:

V = {X ∈ R^{8×8} : AX = B},  where  A = [I_2  0] ∈ R^{2×8}  and  B = [1 ⋯ 1; 1 ⋯ 1] ∈ R^{2×8} is the all-ones matrix.

³ We used MATLAB for the implementation of this problem.

By denoting vecX ∈ R^64 the columnwise stacked version of a matrix X ∈ R^{8×8}, and using the fact that a matrix equation AX = B can be equivalently expressed as (I ⊗ A)vecX = vecB, where by ⊗ we denote the Kronecker product of matrices, we see that our restoration problem can be cast into the linear regression model (3) as follows: let L ∈ R^{64×64} be the blurring matrix taken from [23],⁴ and let us rewrite the definition of the linear constraint V as:

V′ = {vecX ∈ R^64 : (I_8 ⊗ A)vecX = vecB}.

Thus, if we denote by Y ∈ R^{8×8} the blurred, noisy observation, we recognize that the model obtained is given by:

vecY = L vecP + ε,  vecP ∈ V′,    (30)

which is of the form (3).

⁴ This matrix is severely ill-conditioned, as its singular values are σ_1 ≈ 0.943, σ_2 ≈ 0.864, ..., σ_63 ≈ 0.052, σ_64 ≈ 0.039.
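For illustration, this construction can be sketched as follows (a random stand-in is used for the MATLAB-generated blurring matrix of [23] and for the correlated noise covariance; the actual experiment uses the matrices described above):

```python
import numpy as np

rng = np.random.default_rng(6)

# Constraint of the example: the first two rows of the 8x8 image are white (= 1).
A = np.hstack([np.eye(2), np.zeros((2, 6))])     # A = [I_2  0] in R^{2x8}
B = np.ones((2, 8))                              # B = all-ones matrix in R^{2x8}

# Columnwise stacking (vec) and the identity vec(AX) = (I_8 kron A) vec(X).
X = rng.random((8, 8))
print(np.allclose(np.kron(np.eye(8), A) @ X.flatten("F"), (A @ X).flatten("F")))   # True

# Model (30) with stand-ins: a random 64x64 "blurring" matrix and a correlated noise covariance.
L = np.eye(64) + 0.1 * rng.standard_normal((64, 64))
C = 0.05 * rng.standard_normal((64, 64))
Q = C @ C.T + 0.01 * np.eye(64)
P = rng.random((8, 8))
P[:2, :] = 1.0                                   # enforce the constraint: first two rows white
vecP, vecB = P.flatten("F"), B.flatten("F")
vecY = L @ vecP + rng.multivariate_normal(np.zeros(64), Q)       # observation, cf. (30)
print(np.allclose(np.kron(np.eye(8), A) @ vecP, vecB))           # True: vecP lies in V'
```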

Let us note that in our example the matrix I_8 ⊗ A is of full row rank, i.e. a = s in terms of the notation used in (3); thus, in particular, we can compare the performance of our estimator with the performance of the minimum-variance conditionally unbiased affine estimator subject to linear restrictions Ψ_CR (13).

Figure 2. Plot of MSE [dB] versus SNR [dB] for the MV-PURE, BLUE, projected BLUE, generalized Marquardt, projected generalized Marquardt, and Chipman's LC estimators.

We considered the following estimators: the Gauss-Markov (BLUE) estimator Φ_GM (9), the generalized Marquardt reduced-rank estimator Φ_C (11), the minimum-variance conditionally unbiased affine estimator subject to linear restrictions Ψ_CR (13), MV-PURE, as well as two heuristically proposed new estimators: the projected Gauss-Markov estimator and the projected generalized Marquardt estimator, defined by Ψ_GM(y) = P_N(A) Φ_GM y + A^† b and Ψ_C(y) = P_N(A) Φ_C y + A^† b, respectively.

The SNR is defined as SNR = ‖L vecP‖² / tr(Q). We examined the performance of the estimators at SNR levels of [−10, −5, ..., 50] dB. At each stage, we optimized the rank of the generalized Marquardt, projected generalized Marquardt and MV-PURE estimators in order to achieve their best performance. The MSEs obtained are averaged over 1000 realizations of the noise. Finally, let us note that in our numerical example δ_r ≠ δ_{r+1} for all r = 1, ..., m − a − 1; hence the MV-PURE estimator Ψ^mvp(y′) = Φ_r^mvp y′ + A^† b was always uniquely defined for all rank constraints.
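This evaluation protocol can be sketched as follows (a stand-in blurring matrix and, for brevity, only the Gauss-Markov estimator are used, so this is not the full comparison reported here): at each SNR level the noise covariance is rescaled so that SNR = ‖L vecP‖²/tr(Q), and the MSE is averaged over noise realizations:

```python
import numpy as np

rng = np.random.default_rng(7)
n = m = 64
L = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # stand-in for the blurring matrix
beta = rng.random(m)                                  # stand-in for vecP
Q0 = np.eye(n)                                        # base noise covariance (stand-in)
signal = np.linalg.norm(L @ beta) ** 2
trials = 200

for snr_db in range(-10, 51, 10):
    # Rescale the noise covariance so that SNR = ||L vecP||^2 / tr(Q) hits the target level.
    Q = Q0 * signal / (10 ** (snr_db / 10) * np.trace(Q0))
    Q_chol = np.linalg.cholesky(Q)
    # Gauss-Markov (BLUE) estimator (9), used here as a stand-in for the compared estimators.
    w, E = np.linalg.eigh(Q)
    Q_inv_sqrt = E @ np.diag(w ** -0.5) @ E.T
    Phi = np.linalg.pinv(Q_inv_sqrt @ L) @ Q_inv_sqrt
    errs = [np.linalg.norm(Phi @ (L @ beta + Q_chol @ rng.standard_normal(n)) - beta) ** 2
            for _ in range(trials)]
    print(f"SNR {snr_db:4d} dB   MSE {10 * np.log10(np.mean(errs)):8.2f} dB")
```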

The above results show clearly that the MV-PURE estimator achieves superior performance over all other estimators considered, for all values of SNR. Also, the heuristically proposed projected generalized Marquardt reduced-rank estimator achieves good performance, but due to its ad hoc nature it is difficult to verify whether in general such an estimator is a reasonable solution for ill-conditioned problems where linear constraints on the unknown vector of parameters are available.

5. MV-PURE ESTIMATOR AS THE EXTENSION OF BLUE ESTIMATOR

In this section we provide a result showing that the MV-PURE estimator is the extension of the minimum-variance unbiased Gauss-Markov (BLUE) estimator Φ_GM (9) to the case of a reduced-rank estimator, where linear constraints on the unknown deterministic vector of parameters are available. In particular, this result sheds new light on the optimality of the generalized Marquardt reduced-rank estimator Φ_C (11) and the minimum-variance conditionally unbiased affine estimator subject to linear restrictions Ψ_CR (13).

Theorem 2

1. Let us assume that no linear constraints have been imposed on the unknown vector of parameters β ∈ R^m. Then, for all rank constraints r < m, we have Ψ^mvp = Φ_C, and this estimator is unique if the r-th and the (r+1)-st singular values of L̃ = Q^{-1/2}L are distinct. Moreover, for r = m we have Ψ^mvp = Φ_GM.

2. Let us assume that no rank constraint has been imposed on the estimator, r = m − a. Then, for all linear constraints V (2) in the linear regression model (3) we have Ψ^mvp = Ψ_CR if A ∈ R^{s×m} in (2) is of full row rank, a = s, and:

N( P_{N(A(L̃^t L̃)^{-1})} (L̃^t L̃)^{-1} L̃^t ) = N( P_{N(A)} L̃^t ).

Acknowledgement: The authors would like to express their deep gratitude to Prof. Kohichi Sakaniwa of Tokyo Institute of Technology for helpful discussions and comments.

6. REFERENCES

[1] C. F. Gauss, Theory of Motion of Heavenly Bodies Moving About the Sun in Conic Sections. New York: Dover, 1963.
[2] A. E. Hoerl, "Application of ridge analysis to regression problems," Chemical Engineering Progress, vol. 58, pp. 54–59, 1962.
[3] A. E. Hoerl and R. W. Kennard, "Ridge regression: biased estimation for nonorthogonal problems," Technometrics, vol. 12, pp. 55–67, 1970.
[4] D. W. Marquardt, "Generalized inverses, ridge regression, biased linear estimation, and nonlinear estimation," Technometrics, vol. 12, pp. 591–612, 1970.
[5] J. D. Riley, "Solving systems of linear equations with a positive definite, symmetric, but possibly ill-conditioned matrix," Math. Tables Aids Comput., vol. 9, pp. 96–101, 1955.
[6] D. L. Phillips, "A technique for the numerical solution of certain integral equations of the first kind," J. Assoc. Comput. Mach., vol. 9, pp. 84–97, 1962.
[7] A. N. Tikhonov, "Solution of incorrectly formulated problems and the regularization method," Soviet Math. Dokl., vol. 5, pp. 1035–1038, 1963.
[8] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Second Edition. New York: Springer Verlag, 2003.
[9] A. J. Thorpe and L. L. Scharf, "Data adaptive rank-shaping methods for solving least squares problems," IEEE Trans. Signal Processing, vol. 43, no. 7, pp. 1591–1601, July 1995.
[10] J. S. Chipman, "Linear restrictions, rank reduction, and biased estimation in linear regression," Linear Algebra and its Applications, vol. 289, pp. 55–74, 1999.
[11] C. R. Rao and H. Toutenburg, Linear Models: Least Squares and Alternatives. Springer, 1999.
[12] C. Toro-Vizcarrondo and T. D. Wallace, "A test of the mean square error criterion for restrictions in linear regression," Journal of the American Statistical Association, vol. 63, no. 322, pp. 558–572, June 1968.
[13] J. S. Chipman and M. M. Rao, "The treatment of linear restrictions in regression analysis," Econometrica, vol. 32, no. 1/2, pp. 198–204, 1964.
[14] T. Piotrowski and I. Yamada, "MV-PURE estimator," IEEE Trans. Signal Processing, submitted after submission of this conference paper.
[15] I. Yamada and J. Elbadraoui, "Minimum-variance pseudo-unbiased low-rank estimator for ill-conditioned inverse problems," in Proc. ICASSP, Toulouse, France, May 2006, pp. 325–328.
[16] T. Piotrowski, G. Hori, K. Umeno, and I. Yamada, "CDMA signal estimation based on independent component analysis followed by MV-PURE estimator," in Proc. NDES 2007, Tokushima, Japan, July 2007, to appear.
[17] D. G. Luenberger, Optimization by Vector Space Methods. New York: John Wiley & Sons, 1969.
[18] T. Kailath, A. H. Sayed, and B. Hassibi, Linear Estimation. New Jersey: Prentice Hall, 2000.
[19] H. V. Poor, An Introduction to Signal Detection and Estimation. New York: Springer Verlag, 1994.
[20] I. Yamada, "Hybrid steepest descent method for variational inequality problem over the intersection of fixed point sets of nonexpansive mappings," in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, D. Butnariu, Y. Censor, and S. Reich, Eds. Elsevier, 2001, pp. 473–504.
[21] I. Yamada, N. Ogura, and N. Shirakawa, "A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems," in Inverse Problems, Image Analysis and Medical Imaging, Contemporary Mathematics, vol. 313, Z. Nashed and O. Scherzer, Eds. American Mathematical Society, 2002, pp. 269–305.
[22] I. Yamada and N. Ogura, "Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings," Numerical Functional Analysis and Optimization, vol. 25, no. 7–8, pp. 619–655, 2004.
[23] P. C. Hansen, Regularization Tools version 3.1 (for MATLAB version 6.0). [Online]. Available: http://www2.imm.dtu.dk/~pch/Regutools/Software.zip
[24] P. C. Hansen, "Regularization tools: A MATLAB package for analysis and solution of discrete ill-posed problems," Numerical Algorithms, no. 6, pp. 1–35, 1994.