arXiv:1602.05522v2 [math.ST] 13 Jul 2017

Central limit theorems for functionals of large dimensional sample covariance matrix and mean vector in matrix-variate skewed model




Taras Bodnar^{a,∗}, Stepan Mazur^b, Nestor Parolya^c

^a Department of Mathematics, Stockholm University, SE-10691 Stockholm, Sweden
^b Department of Statistics, Lund University, SE-22007 Lund, Sweden
^c Institute of Empirical Economics, Leibniz University of Hannover, D-30167 Hannover, Germany

Abstract

In this paper we consider the asymptotic distributions of functionals of the sample covariance matrix and the sample mean vector obtained under the assumption that the matrix of observations has a matrix-variate general skew normal distribution. A central limit theorem is derived for the product of the sample covariance matrix and the sample mean vector. Moreover, we consider the product of the inverse sample covariance matrix and the sample mean vector, for which a central limit theorem is established as well. All results are obtained under the large-dimensional asymptotic regime, where the dimension p and the sample size n tend to infinity such that p/n → c ∈ (0, 1).

AMS Classification: 62H10, 62E15, 62E20, 60F05, 60B20

Keywords: Skew normal distribution, large dimensional asymptotics, stochastic representation, random matrix theory.

∗ Corresponding author. E-mail address: [email protected]. The second author appreciates the financial support of the Swedish Research Council Grant Dnr: 2013-5180 and Riksbankens Jubileumsfond Grant Dnr: P13-1024:1.


1 Introduction

Functions of the sample covariance matrix and the sample mean vector appear in various statistical applications. Classical improvement techniques for mean estimation were already discussed by Stein (1956) and Jorion (1986). In particular, Efron (2006) constructed confidence regions of smaller volume than the standard spheres for the mean vector of a multivariate normal distribution. Fan et al. (2008), Bai and Shi (2011), Bodnar and Gupta (2011), Cai and Zhou (2012), Cai and Yuan (2012), Fan et al. (2013), Bodnar et al. (2014a), Bodnar et al. (2016a), and Wang et al. (2015), among others, suggested improved techniques for the estimation of the covariance matrix and the precision matrix (the inverse of the covariance matrix).

In our work we introduce the family of matrix-variate general skew normal (MVGSN) distributions, which generalizes the models considered by Azzalini and Dalla-Valle (1996), Azzalini and Capitanio (1999), Azzalini (2005), Liseo and Loperfido (2003, 2006), Bartoletti and Loperfido (2010), Loperfido (2010), Christiansen and Loperfido (2014), Adcock et al. (2015), and De Luca and Loperfido (2015), among others. Under the assumption of MVGSN we consider the sample mean vector x̄ and the sample covariance matrix S. In particular, we deal with the two products l^T S x̄ and l^T S^{-1} x̄, where l is a non-zero vector of constants. Expressions of this kind have not been studied intensively in the literature, although they appear in numerous important applications. The first application of the products arises in portfolio theory, where the vector of optimal portfolio weights is proportional to S^{-1} x̄. The second application is in discriminant analysis, where the coefficients of the discriminant function are expressed as a product of the inverse sample covariance matrix and the difference of the sample mean vectors. In the Bayesian context, the product S x̄ plays a closely related role.
Bodnar and Okhrin (2011) derived the exact distribution of the product of the inverse sample covariance matrix and the sample mean vector under the assumption of normality, while Kotsiuba and Mazur (2015) obtained its asymptotic distribution as well as an approximate density based on the Gaussian integral and a third-order Taylor series expansion. Moreover, Bodnar et al. (2013, 2014b) analyzed the product of the sample (singular) covariance matrix and the sample mean vector. In the present paper, we contribute to the existing literature by deriving central limit theorems (CLTs) under the introduced class of matrix-variate distributions, i.e., general matrix-variate skew normality, in the case of a high-dimensional observation matrix. Under the considered family of distributions the columns of the observation matrix are no longer independent, and thus the CLTs cover a more general class of random matrices.

Nowadays, modern scientific data sets include a number of sample points that is often comparable to the number of features (the dimension), so the sample covariance matrix and the sample mean vector are no longer efficient estimators. For example, stock markets include a large number of companies, which is often close to the number of available time points. In order to better understand the statistical properties of the traditional estimators and tests in high-dimensional settings, it is of interest to study the asymptotic distributions of the above-mentioned bilinear forms involving the sample covariance matrix and the sample mean vector. Appropriate central limit theorems, which neither suffer from the "curse of dimensionality" nor reduce the number of dimensions, are of great interest for high-dimensional statistics, because more efficient estimators and tests may then be constructed and applied in practice. Classical multivariate procedures are based on central limit theorems assuming that the dimension p is fixed and the sample size n increases. However, numerous authors have provided quite reasonable evidence that this assumption does not lead to precise distributional approximations for commonly used statistics, and that better approximations can be obtained under increasing-dimension asymptotics [see, e.g., Bai and Silverstein (2004) and references therein]. Technically speaking, by high-dimensional asymptotics we understand the case where the sample size n and the dimension p tend to infinity such that their ratio p/n converges to some positive constant c (here we assume c < 1). Under this condition the well-known Marchenko-Pastur and Silverstein equations were derived [see Marčenko and Pastur (1967), Silverstein (1995)].

The rest of the paper is structured as follows. In Section 2 we introduce a semi-parametric matrix-variate family of skewed distributions. The main results are given in Section 3, where we derive central limit theorems for the sample (inverse) covariance matrix and the sample mean vector under the MVGSN distribution in the high-dimensional asymptotic regime. Section 4 presents a short numerical study that verifies the obtained analytic results.

2 Semi-parametric matrix-variate family of skewed distributions

In this section we introduce a family of matrix-variate skewed distributions which generalizes the skew normal distribution. Let

$$
X=\begin{pmatrix} x_{11} & \ldots & x_{1n}\\ \vdots & \ddots & \vdots \\ x_{p1} & \ldots & x_{pn} \end{pmatrix}=(x_1,\ldots,x_n)
$$

be the p × n observation matrix, where x_j is the j-th observation vector. In the following, we assume that the random matrix X possesses the stochastic representation

$$
X \stackrel{d}{=} Y + B\nu 1_n^T, \qquad (1)
$$

where Y ∼ N_{p,n}(μ1_n^T, Σ ⊗ I_n) (the p × n-dimensional matrix-variate normal distribution with mean matrix μ1_n^T and covariance matrix Σ ⊗ I_n), ν is a q-dimensional random vector with continuous density function f_ν(·), and B is a p × q matrix of constants. Further, it is assumed that Y and ν are independently distributed. If the random matrix X follows model (1), then we say that X is generalized matrix-variate skew-normal distributed with parameters μ, Σ, B, and f_ν(·). The first three parameters are finite dimensional, while the fourth, f_ν(·), is infinite dimensional; this makes model (1) semi-parametric. We denote this assertion by X ∼ SN_{p,n;q}(μ, Σ, B; f_ν). If f_ν can be parametrized by a finite-dimensional parameter θ, then model (1) reduces to a parametric model, which we denote by X ∼ SN_{p,n;q}(μ, Σ, B; θ). If n = 1, we use the notation SN_{p;q}(·, ·, ·; ·) instead of SN_{p,1;q}(·, ·, ·; ·).
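As an illustration, model (1) can be sampled directly. The following numpy sketch (dimensions and parameter values are illustrative, not taken from the paper's simulation design) draws the columns of Y as i.i.d. N_p(μ, Σ) vectors and adds the common rank-one shift Bν1_n^T, which makes the columns of X dependent:

```python
import numpy as np

rng = np.random.default_rng(0)

def rvs_mvgsn(mu, Sigma, B, nu, n, rng):
    """Draw X = Y + B nu 1_n^T with Y ~ N_{p,n}(mu 1_n^T, Sigma ⊗ I_n).

    The covariance Sigma ⊗ I_n means the n columns of Y are i.i.d. N_p(mu, Sigma);
    the common shift B nu introduces dependence between the columns of X.
    """
    Y = rng.multivariate_normal(mu, Sigma, size=n).T   # p x n, i.i.d. columns
    return Y + np.outer(B @ nu, np.ones(n))            # rank-one shift B nu 1_n^T

p, n, q = 5, 10, 2
mu = np.zeros(p)
Sigma = np.eye(p)
B = rng.uniform(0.0, 1.0, size=(p, q))
nu = np.abs(rng.standard_normal(q))   # nu = |psi|, psi ~ N_q(0, I_q) (truncated-normal case)
X = rvs_mvgsn(mu, Sigma, B, nu, n, rng)
print(X.shape)   # (5, 10)
```

Here `nu` is drawn once per matrix, matching the fact that ν shifts all n observation vectors simultaneously.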


From (1) the density function of X is

$$
f_X(Z)=\int_{\mathbb{R}^q} f_{N_{p,n}(\mu 1_n^T,\,\Sigma\otimes I_n)}\big(Z-B\nu^* 1_n^T \,\big|\, \nu=\nu^*\big)\, f_\nu(\nu^*)\, d\nu^*. \qquad (2)
$$

Let C = Φ_q(0; −ξ, Ω). In the special case when ν = |ψ| is the vector formed by the absolute values of the elements of ψ, with ψ ∼ N_q(ξ, Ω), i.e., ν has a q-variate truncated normal distribution, we get

Proposition 1. Assume model (1). Let ν = |ψ| with ψ ∼ N_q(ξ, Ω). Then the density function of X is given by

$$
f_X(Z)=\tilde{C}^{-1}\,\Phi_q\big(0;\, -D[E\,\mathrm{vec}(Z-\mu 1_n^T)+\Omega^{-1}\xi],\, D\big)\,
\phi_{pn}\big(\mathrm{vec}(Z-\mu 1_n^T);\, F E^T D\Omega^{-1}\xi,\, F\big), \qquad (3)
$$

where D = (nB^TΣ^{-1}B + Ω^{-1})^{-1}, E = 1_n^T ⊗ B^TΣ^{-1}, F = (I_n ⊗ Σ^{-1} − E^T D E)^{-1}, and

$$
\tilde{C}^{-1} = C^{-1}\,\frac{|F|^{1/2}|D|^{1/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}\,
\exp\Big\{-\tfrac{1}{2}\,\xi^T\Omega^{-1}\big(-D+\Omega-DEFE^TD\big)\Omega^{-1}\xi\Big\}.
$$

The proof of Proposition 1 is presented in the Appendix. It is remarkable that model (1) includes several skew-normal distributions considered by Azzalini and Dalla-Valle (1996), Azzalini and Capitanio (1999), and Azzalini (2005). For example, in the case n = 1, q = 1, μ = 0, B = ∆1_p, and Σ = (I_p − ∆²)^{1/2} Ψ (I_p − ∆²)^{1/2} we get

$$
X \stackrel{d}{=} (I_p-\Delta^2)^{1/2} v_0 + \Delta 1_p |v_1|, \qquad (4)
$$

where v_0 ∼ N_p(0, Ψ) and v_1 ∼ N(0, 1) are independently distributed, Ψ is a correlation matrix, and ∆ = diag(δ_1, ..., δ_p) with δ_j ∈ (−1, 1). Model (4) was previously introduced by Azzalini (2005).

3 CLTs for expressions involving the sample covariance matrix and the sample mean vector

The sample estimators for the mean vector and the covariance matrix are given by

$$
\bar{x}=\frac{1}{n}\sum_{i=1}^n x_i=\frac{1}{n}X1_n
\qquad\text{and}\qquad
S=\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})(x_i-\bar{x})^T=\frac{1}{n-1}XVX^T,
$$

where V = I_n − (1/n) 1_n 1_n^T is a symmetric idempotent matrix, i.e., V = V^T and V² = V. The following theorem shows that x̄ and S are independently distributed and presents their marginal distributions under model (1). The results of Theorem 1 show that the independence of x̄ and S cannot be used as a characterization of the multivariate normal distribution when the observation vectors in the data matrix are dependent.

Theorem 1. Let X ∼ SN_{p,n;q}(μ, Σ, B; f_ν) with p < n − 1. Then


(a) (n − 1)S ∼ W_p(n − 1, Σ) (the p-dimensional Wishart distribution with n − 1 degrees of freedom and covariance matrix Σ);

(b) x̄ ∼ SN_{p;q}(μ, (1/n)Σ, B; f_ν);

(c) S and x̄ are independently distributed.

Proof. Let X* =^d X|ν = ν*, x̄* =^d x̄|ν = ν*, and S* =^d S|ν = ν*. Because Y and ν are independent, we get

$$
X^* \stackrel{d}{=} X|\nu=\nu^* \sim N_{p,n}\big((\mu+B\nu^*)1_n^T,\ \Sigma\otimes I_n\big).
$$

From Theorem 3.1.2 of Muirhead (1982) we obtain

$$
\bar{x}^* \sim N_p\Big(\mu+B\nu^*,\ \frac{1}{n}\Sigma\Big),
\qquad
(n-1)S^* \sim W_p(n-1,\Sigma),
$$

and S* and x̄* are independent. Hence, it follows that

$$
f_{\bar{x},S}(\tilde{x},\tilde{S})
=\int_{\mathbb{R}^q_+} f_{S|\nu=\nu^*}(\tilde{S})\, f_{\bar{x}|\nu=\nu^*}(\tilde{x})\, f_\nu(\nu^*)\, d\nu^*
= f_S(\tilde{S})\int_{\mathbb{R}^q_+} f_{\bar{x}|\nu=\nu^*}(\tilde{x})\, f_\nu(\nu^*)\, d\nu^*, \qquad (5)
$$

where the last equality follows from the fact that the density of S* does not depend on ν*. From (5) we directly get that x̄ and S are independent; that (n − 1)S is Wishart distributed with n − 1 degrees of freedom and covariance matrix Σ; and that x̄ is generalized skew-normal distributed with parameters μ, (1/n)Σ, B and f_ν. The theorem is proved.

For the validity of the asymptotic results presented in Sections 3.1 and 3.2 we need the following two conditions:

(A1) Let (λ_i, u_i) denote the eigenvalues and eigenvectors of Σ. We assume that there exist m_1 and M_1 such that 0 < m_1 ≤ λ_1 ≤ λ_2 ≤ ... ≤ λ_p ≤ M_1 < ∞ uniformly in p.

(A2) There exists M_2 such that |u_i^T μ| ≤ M_2 and |u_i^T b_j| ≤ M_2 for all i = 1, ..., p and j = 1, ..., q uniformly in p, where b_j, j = 1, ..., q, are the columns of B. More generally, we say that an arbitrary p-dimensional vector l satisfies condition (A2) if |u_i^T l| ≤ M_2 for all i = 1, ..., p.

Assumption (A1) is a classical condition in random matrix theory (see Bai and Silverstein (2004)), which bounds the spectrum of Σ from below as well as from above. Assumption (A2) is a technical one. In combination with (A1), it ensures that p^{-1}μ^TΣμ, p^{-1}μ^TΣ^{-1}μ, p^{-1}μ^TΣ³μ, p^{-1}μ^TΣ^{-3}μ, as well as all the diagonal elements of B^TΣB, B^TΣ³B, and B^TΣ^{-1}B, are uniformly bounded. All these quadratic forms appear in the statements and proofs of our results. Note that the constants appearing in the inequalities will be denoted by M_2 and may vary from one expression to another.
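The matrix formulas x̄ = n^{-1}X1_n and S = (n−1)^{-1}XVX^T from the previous subsection can be checked numerically. A minimal sketch with synthetic data (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 4, 12
X = rng.standard_normal((p, n))           # p x n observation matrix

ones = np.ones(n)
V = np.eye(n) - np.outer(ones, ones) / n  # centering matrix: symmetric, idempotent
x_bar = X @ ones / n                      # sample mean vector, (1/n) X 1_n
S = X @ V @ X.T / (n - 1)                 # sample covariance, (1/(n-1)) X V X^T

# V is idempotent and the matrix formulas agree with the direct definitions
assert np.allclose(V @ V, V)
assert np.allclose(x_bar, X.mean(axis=1))
assert np.allclose(S, np.cov(X))          # np.cov also uses the 1/(n-1) divisor
```

The agreement with `np.cov` confirms that the factor 1/(n − 1) belongs in front of XVX^T.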

3.1 CLT for the product of the sample covariance matrix and the sample mean vector

In this section we present the central limit theorem for the product of the sample covariance matrix and the sample mean vector.

Theorem 2. Assume X ∼ SN_{p,n;q}(μ, Σ, B; f_ν), p < n − 1, with Σ positive definite, and let p/n = c + o(n^{-1/2}), c ∈ [0, 1) as n → ∞. Let l be a p-dimensional vector of constants that satisfies condition (A2). Then, under (A1) and (A2), it holds that

$$
\sqrt{n}\,\sigma_\nu^{-1}\big(l^T S\bar{x}-l^T\Sigma\mu_\nu\big)\ \stackrel{D}{\longrightarrow}\ N(0,1)
\quad\text{for } p/n\to c\in[0,1)\text{ as } n\to\infty, \qquad (6)
$$

where

$$
\mu_\nu=\mu+B\nu, \qquad (7)
$$

$$
\sigma_\nu^2=\big(\mu_\nu^T\Sigma\mu_\nu+c\,\|\Sigma\|_F^2\big)\, l^T\Sigma l+\big(l^T\Sigma\mu_\nu\big)^2, \qquad (8)
$$

with ‖Σ‖_F² = tr(Σ²)/p denoting the squared normalized Frobenius norm of Σ.

Proof. Since S and x̄ are independently distributed, the conditional distribution of l^T S x̄ given x̄ = x* equals the distribution of l^T S x*. Let L* = (l, x*)^T and define S̃ = L* S L*^T = {S̃_ij}_{i,j=1,2} with S̃_11 = l^T S l, S̃_12 = l^T S x*, S̃_21 = x*^T S l, and S̃_22 = x*^T S x*. Similarly, let Σ̃ = L* Σ L*^T = {Σ̃_ij}_{i,j=1,2} with Σ̃_11 = l^T Σ l, Σ̃_12 = l^T Σ x*, Σ̃_21 = x*^T Σ l, and Σ̃_22 = x*^T Σ x*.

Using S ∼ W_p(n − 1, (1/(n−1))Σ) and rank(L*) = 2 ≤ p, we get from Theorem 3.2.5 of Muirhead (1982) that S̃ ∼ W_2(n − 1, (1/(n−1))Σ̃). As a result, applying Theorem 3.2.10 of Muirhead (1982) we obtain

$$
\tilde{S}_{12}\,\big|\,\tilde{S}_{22},\,\bar{x}=x^* \ \sim\ N\Big(\tilde{\Sigma}_{12}\tilde{\Sigma}_{22}^{-1}\tilde{S}_{22},\ \frac{1}{n-1}\tilde{\Sigma}_{11\cdot 2}\tilde{S}_{22}\Big),
$$

where Σ̃_{11·2} = Σ̃_11 − Σ̃_12²/Σ̃_22 is the Schur complement. Let ξ = (n − 1)S̃_22/Σ̃_22; then

$$
l^T S\bar{x}\,\big|\,\xi,\bar{x} \ \sim\ N\Big(\frac{\xi}{n-1}\, l^T\Sigma\bar{x},\ \frac{\xi}{(n-1)^2}\big[\bar{x}^T\Sigma\bar{x}\, l^T\Sigma l-(\bar{x}^T\Sigma l)^2\big]\Big).
$$

From Theorem 3.2.8 of Muirhead (1982) it follows that ξ and x̄ are independently distributed and ξ ∼ χ²_{n−1}. Hence, the stochastic representation of l^T S x̄ is given by

$$
l^T S\bar{x}\ \stackrel{d}{=}\ \frac{\xi}{n-1}\, l^T\Sigma\bar{x}
+\frac{\sqrt{\xi}}{n-1}\,\big(\bar{x}^T\Sigma\bar{x}\, l^T\Sigma l-(l^T\Sigma\bar{x})^2\big)^{1/2}\, z_0, \qquad (9)
$$

where ξ ∼ χ²_{n−1}, z_0 ∼ N(0, 1), and x̄ ∼ SN_{p;q}(μ, (1/n)Σ, B; f_ν); moreover, ξ, z_0 and x̄ are mutually independent. From the properties of the χ²-distribution we immediately obtain

$$
\sqrt{n}\Big(\frac{\xi}{n}-1\Big)\ \stackrel{D}{\longrightarrow}\ N(0,2)\quad\text{as } n\to\infty. \qquad (10)
$$

We further note that √n(z_0/√n) ∼ N(0, 1) for all n, so this is also its asymptotic distribution.

Next, we show that l^TΣx̄ and x̄^TΣx̄ are jointly asymptotically normally distributed given ν = ν*. For any a_1 ≠ 0 and a_2, we consider

$$
a_1\bar{x}^T\Sigma\bar{x}+2a_2\, l^T\Sigma\bar{x}
= a_1\Big(\bar{x}+\frac{a_2}{a_1}l\Big)^T\Sigma\Big(\bar{x}+\frac{a_2}{a_1}l\Big)-\frac{a_2^2}{a_1}\,l^T\Sigma l
= a_1\tilde{x}^T\Sigma\tilde{x}-\frac{a_2^2}{a_1}\,l^T\Sigma l,
$$

where x̃ | ν = ν* ∼ N_p(μ_{a,ν*}, (1/n)Σ) with μ_{a,ν*} = μ + Bν* + (a_2/a_1) l. By Provost and Rudiuk (1996) the random variable x̃^TΣx̃ can be expressed as

$$
\tilde{x}^T\Sigma\tilde{x}\ \stackrel{d}{=}\ \frac{1}{n}\sum_{i=1}^p \lambda_i^2\,\xi_i,
\qquad\text{where the }\xi_i\ \text{are independent with}\quad
\xi_i\sim\chi^2_1(\delta_i^2),\quad \delta_i=\big(n\lambda_i^{-1}\big)^{1/2}\,u_i^T\mu_{a,\nu^*}.
$$

The symbol χ²_d(δ_i²) denotes the chi-squared distribution with d degrees of freedom and non-centrality parameter δ_i². Now we apply the Lindeberg CLT to the independent random variables V_i = λ_i²ξ_i/n. For that reason, we first need to verify Lindeberg's condition. Denoting σ_n² = V(Σ_{i=1}^p V_i), we get

$$
\sigma_n^2=\sum_{i=1}^p V\Big(\frac{\lambda_i^2}{n}\,\xi_i\Big)
=\sum_{i=1}^p \frac{\lambda_i^4}{n^2}\,2(1+2\delta_i^2)
=\frac{1}{n^2}\Big(2\,\mathrm{tr}(\Sigma^4)+4n\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}\Big).
$$

We need to check that for any small ε > 0 it holds that

$$
\lim_{n\to\infty}\frac{1}{\sigma_n^2}\sum_{i=1}^p
E\Big[(V_i-E(V_i))^2\,\mathbb{1}_{\{|V_i-E(V_i)|>\varepsilon\sigma_n\}}\Big]= 0. \qquad (11)
$$

First, we get

$$
\sum_{i=1}^p E\Big[(V_i-E(V_i))^2\,\mathbb{1}_{\{|V_i-E(V_i)|>\varepsilon\sigma_n\}}\Big]
\ \overset{\text{Cauchy--Schwarz}}{\le}\
\sum_{i=1}^p E^{1/2}\big[(V_i-E(V_i))^4\big]\,P^{1/2}\big\{|V_i-E(V_i)|>\varepsilon\sigma_n\big\}
\ \overset{\text{Chebyshev}}{\le}\
\sum_{i=1}^p \frac{\lambda_i^4}{n^2}\sqrt{12(1+2\delta_i^2)^2+48(1+4\delta_i^2)}\ \frac{\sigma_i}{\varepsilon\sigma_n}
$$

with σ_i² = V(V_i), and thus

$$
\begin{aligned}
\frac{1}{\sigma_n^2}\sum_{i=1}^p E\Big[(V_i-E(V_i))^2\,\mathbb{1}_{\{|V_i-E(V_i)|>\varepsilon\sigma_n\}}\Big]
&\le \frac{1}{\varepsilon}\,
\frac{\sum_{i=1}^p \lambda_i^4\sqrt{12(1+2\delta_i^2)^2+48(1+4\delta_i^2)}\ \sigma_i/\sigma_n}
{2\,\mathrm{tr}(\Sigma^4)+4n\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}}\\
&= \frac{\sqrt{3}}{\varepsilon}\,
\frac{\sum_{i=1}^p \lambda_i^4\sqrt{(5+2\delta_i^2)^2-20}\ \sigma_i/\sigma_n}
{\mathrm{tr}(\Sigma^4)+2n\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}}\\
&\le \frac{\sqrt{3}}{\varepsilon}\,
\frac{\sum_{i=1}^p \lambda_i^4(5+2\delta_i^2)\ \sigma_i/\sigma_n}
{\mathrm{tr}(\Sigma^4)+2n\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}}\\
&\le \frac{\sqrt{3}}{\varepsilon}\,\frac{\sigma_{\max}}{\sigma_n}\,
\frac{5\,\mathrm{tr}(\Sigma^4)+2n\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}}
{\mathrm{tr}(\Sigma^4)+2n\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}}\\
&\le \frac{\sqrt{3}}{\varepsilon}\,\frac{\sigma_{\max}}{\sigma_n}
\Bigg(\frac{4}{1+2n\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}/\mathrm{tr}(\Sigma^4)}+1\Bigg)
\ \le\ \frac{5\sqrt{3}}{\varepsilon}\,\frac{\sigma_{\max}}{\sigma_n}.
\end{aligned}
$$

Finally, Assumptions (A1) and (A2) yield

$$
\frac{\sigma_{\max}^2}{\sigma_n^2}
=\frac{\sup_i \sigma_i^2}{\sigma_n^2}
=\frac{\sup_i \lambda_i^4(1+2\delta_i^2)}{\mathrm{tr}(\Sigma^4)+2n\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}}
=\frac{\sup_i \big[\lambda_i^4+2n\lambda_i^3(u_i^T\mu_{a,\nu^*})^2\big]}{\mathrm{tr}(\Sigma^4)+2n\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}}
\ \longrightarrow\ 0, \qquad (12)
$$

which verifies the Lindeberg condition, since

$$
\begin{aligned}
(u_i^T\mu_{a,\nu^*})^2
&=\Big(u_i^T\mu+u_i^TB\nu^*+\frac{a_2}{a_1}\,u_i^Tl\Big)^2\\
&=(u_i^T\mu)^2+\Big(\frac{a_2}{a_1}\,u_i^Tl\Big)^2+(u_i^TB\nu^*)^2
+2\,u_i^T\mu\cdot u_i^TB\nu^*
+2\,\frac{a_2}{a_1}\,u_i^Tl\,\big(u_i^T\mu+u_i^TB\nu^*\big)\\
&\overset{(A2)}{\le}
M_2^2+M_2^2\,\frac{a_2^2}{a_1^2}+qM_2^2\,\nu^{*T}\nu^*
+2M_2^2\sqrt{q\,\nu^{*T}\nu^*}
+2M_2^2\,\Big|\frac{a_2}{a_1}\Big|\Big(1+\sqrt{q\,\nu^{*T}\nu^*}\Big)\\
&= M_2^2\Big(1+\sqrt{q\,\nu^{*T}\nu^*}+\Big|\frac{a_2}{a_1}\Big|\Big)^2<\infty. \qquad (13)
\end{aligned}
$$

Thus, using (11) and

$$
\sum_{i=1}^p E(V_i)=\sum_{i=1}^p \frac{\lambda_i^2}{n}(1+\delta_i^2)
=\mathrm{tr}(\Sigma^2)/n+\mu_{a,\nu^*}^T\Sigma\mu_{a,\nu^*}, \qquad (14)
$$

we get the following CLT:

$$
\sqrt{n}\ \frac{\tilde{x}^T\Sigma\tilde{x}-\mathrm{tr}(\Sigma^2)/n-\mu_{a,\nu^*}^T\Sigma\mu_{a,\nu^*}}
{\sqrt{\mathrm{tr}(\Sigma^4)/n+2\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}}}
\ \stackrel{d}{\longrightarrow}\ N(0,2),
$$

and for a_1 x̄^TΣx̄ + 2a_2 l^TΣx̄ we have

$$
\sqrt{n}\ \frac{a_1\bar{x}^T\Sigma\bar{x}+2a_2\, l^T\Sigma\bar{x}
-a_1\big(\mathrm{tr}(\Sigma^2)/n+\mu_{a,\nu^*}^T\Sigma\mu_{a,\nu^*}\big)+\frac{a_2^2}{a_1}\,l^T\Sigma l}
{\sqrt{a_1^2\big(\mathrm{tr}(\Sigma^4)/n+2\,\mu_{a,\nu^*}^T\Sigma^3\mu_{a,\nu^*}\big)}}
\ \stackrel{d}{\longrightarrow}\ N(0,2).
$$

Denoting a = (a_1, 2a_2)^T and μ_ν* = μ + Bν*, we can rewrite this as

$$
\sqrt{n}\,\Bigg[a^T\begin{pmatrix}\bar{x}^T\Sigma\bar{x}\\ l^T\Sigma\bar{x}\end{pmatrix}
-a^T\begin{pmatrix}\mu_{\nu^*}^T\Sigma\mu_{\nu^*}+c\,\mathrm{tr}(\Sigma^2)/p\\ l^T\Sigma\mu_{\nu^*}\end{pmatrix}\Bigg]
\ \stackrel{d}{\longrightarrow}\
N\Bigg(0,\ a^T\begin{pmatrix}
2c\,\mathrm{tr}(\Sigma^4)/p+4\,\mu_{\nu^*}^T\Sigma^3\mu_{\nu^*} & 2\,l^T\Sigma^3\mu_{\nu^*}\\
2\,l^T\Sigma^3\mu_{\nu^*} & l^T\Sigma^3 l
\end{pmatrix}a\Bigg), \qquad (15)
$$

which implies that the vector (x̄^TΣx̄, l^TΣx̄)^T − (μ_ν*^TΣμ_ν* + c tr(Σ²)/p, l^TΣμ_ν*)^T is asymptotically multivariate normally distributed, because the vector a is arbitrary. Taking into account (15), (10) and the fact that ξ, z_0 and x̄ are mutually independent, we get the joint CLT

$$
\sqrt{n}\,\Bigg[\begin{pmatrix}\xi/n\\ \bar{x}^T\Sigma\bar{x}\\ l^T\Sigma\bar{x}\\ z_0/\sqrt{n}\end{pmatrix}
-\begin{pmatrix}1\\ \mu_{\nu^*}^T\Sigma\mu_{\nu^*}+c\,\mathrm{tr}(\Sigma^2)/p\\ l^T\Sigma\mu_{\nu^*}\\ 0\end{pmatrix}\Bigg]
\ \stackrel{d}{\longrightarrow}\
N\Bigg(0,\ \begin{pmatrix}
2 & 0 & 0 & 0\\
0 & 2c\,\mathrm{tr}(\Sigma^4)/p+4\,\mu_{\nu^*}^T\Sigma^3\mu_{\nu^*} & 2\,l^T\Sigma^3\mu_{\nu^*} & 0\\
0 & 2\,l^T\Sigma^3\mu_{\nu^*} & l^T\Sigma^3 l & 0\\
0 & 0 & 0 & 1
\end{pmatrix}\Bigg).
$$

The application of the multivariate delta method then leads to

$$
\sqrt{n}\,\sigma_{\nu^*}^{-1}\big(l^T S\bar{x}-l^T\Sigma\mu_{\nu^*}\big)\ \stackrel{D}{\longrightarrow}\ N(0,1), \qquad (16)
$$

where

$$
\sigma_{\nu^*}^2=\big(l^T\Sigma\mu_{\nu^*}\big)^2
+l^T\Sigma l\,\Big(\mu_{\nu^*}^T\Sigma\mu_{\nu^*}+c\,\frac{\mathrm{tr}(\Sigma^2)}{p}\Big).
$$

The asymptotic distribution in (16) does not depend on ν* and, thus, it is also the unconditional asymptotic distribution.

Theorem 2 shows that the properly normalized bilinear form l^T S x̄ can be accurately approximated by a mixture of normal distributions whose mean and variance both depend on ν. Moreover, this central limit theorem delivers the following approximation for the distribution of l^T S x̄: for large n and p we have

$$
p^{-1}\, l^T S\bar{x}\ \approx\ \mathcal{CN}\Big(p^{-1}\, l^T\Sigma\mu_\nu,\ \frac{p^{-2}\sigma_\nu^2}{n}\Big), \qquad (17)
$$

i.e., it has a compound normal distribution with random mean and variance. The usual asymptotic normality can be recovered as a special case of our result by taking ν to be a deterministic vector (e.g., zero). The proof of Theorem 2 shows, in particular, that its key ingredient is a stochastic representation of the product l^T S x̄ in terms of a χ²-distributed random variable, a standard normally distributed random variable, and a random vector which follows the generalized skew normal distribution. Assumption (A1) ensures that the spectrum of Σ is uniformly bounded away from zero, i.e., the maximum eigenvalue is bounded from above and the minimum eigenvalue from below by positive constants. The second assumption (A2) is a technical one; it ensures, in particular, the boundedness of the normalized Frobenius norm of the population covariance matrix. Both (A1) and (A2) guarantee that the asymptotic mean p^{-1} l^TΣμ_ν and variance p^{-2}σ_ν² stay bounded and that the covariance matrix Σ remains invertible as the dimension p increases. Note that the case of standard asymptotics can easily be recovered from our result by setting c → 0.

3.2 CLT for the product of the inverse sample covariance matrix and the sample mean vector

In this section we consider the distributional properties of the product of the inverse sample covariance matrix S^{-1} and the sample mean vector x̄. Again we prove that properly weighted bilinear forms involving S^{-1} and x̄ are asymptotically normally distributed. This result is summarized in Theorem 3.

Theorem 3. Assume X ∼ SN_{p,n;q}(μ, Σ, B; f_ν), p < n − 1, with Σ positive definite, and let p/n = c + o(n^{-1/2}), c ∈ [0, 1) as n → ∞. Let l be a p-dimensional vector of constants such that p^{-1} l^TΣ^{-1}l ≤ M_2 < ∞. Then, under (A1) and (A2), it holds that

$$
\sqrt{n}\,\tilde{\sigma}_\nu^{-1}\Big(l^T S^{-1}\bar{x}-\frac{1}{1-c}\,l^T\Sigma^{-1}\mu_\nu\Big)
\ \stackrel{D}{\longrightarrow}\ N(0,1), \qquad (18)
$$

where μ_ν = μ + Bν and

$$
\tilde{\sigma}_\nu^2=\frac{1}{(1-c)^3}\Big[\big(l^T\Sigma^{-1}\mu_\nu\big)^2
+l^T\Sigma^{-1}l\,\big(1+\mu_\nu^T\Sigma^{-1}\mu_\nu\big)\Big].
$$

Proof. From the properties of the Wishart distribution (see Muirhead (1982)) and Theorem 1 it holds that

$$
S^{-1}\sim \mathcal{IW}_p\big(n+p,\ (n-1)\Sigma^{-1}\big).
$$

Since S^{-1} and x̄ are independently distributed, the conditional distribution of l^T S^{-1} x̄ given x̄ = x* equals the distribution of l^T S^{-1} x*, which we decompose as

$$
l^T S^{-1}x^*=(n-1)\,x^{*T}\Sigma^{-1}x^*\cdot
\frac{l^T S^{-1}x^*}{x^{*T}S^{-1}x^*}\cdot
\frac{x^{*T}S^{-1}x^*}{(n-1)\,x^{*T}\Sigma^{-1}x^*}.
$$

Using Theorem 3.2.12 of Muirhead (1982) we get that

$$
(n-1)\,\frac{x^{*T}\Sigma^{-1}x^*}{x^{*T}S^{-1}x^*}\sim\chi^2_{n-p}
$$

and that this ratio is independent of x*. Hence

$$
\tilde{\xi}=(n-1)\,\frac{\bar{x}^T\Sigma^{-1}\bar{x}}{\bar{x}^T S^{-1}\bar{x}}\sim\chi^2_{n-p} \qquad (19)
$$

and ξ̃ is independent of x̄. Applying Theorem 3 of Bodnar and Okhrin (2008), it follows that x*^T S^{-1} x* is independent of l^T S^{-1} x*/x*^T S^{-1} x* for given x*. As a result, it is also independent of x*^TΣ^{-1}x* · l^T S^{-1}x*/x*^T S^{-1}x* and, respectively, of x̄^TΣ^{-1}x̄ · l^T S^{-1}x̄/x̄^T S^{-1}x̄. From the proof of Theorem 1 of Bodnar and Schmid (2008) we obtain

$$
(n-1)\,x^{*T}\Sigma^{-1}x^*\,\frac{l^T S^{-1}x^*}{x^{*T}S^{-1}x^*}
\sim t\Big(n-p+1;\ (n-1)\,l^T\Sigma^{-1}x^*;\ (n-1)^2\,\frac{x^{*T}\Sigma^{-1}x^*}{n-p+1}\,l^T R_{x^*}l\Big), \qquad (20)
$$

where R_a = Σ^{-1} − Σ^{-1}aa^TΣ^{-1}/a^TΣ^{-1}a, a ∈ R^p, and the symbol t(k, μ, σ²) denotes the t-distribution with k degrees of freedom, mean μ and variance σ². Combining (19) and (20), we get that the stochastic representation of l^T S^{-1} x* is given by

$$
l^T S^{-1}x^*\ \stackrel{d}{=}\
\tilde{\xi}^{-1}(n-1)\Bigg(l^T\Sigma^{-1}x^*
+t_0\sqrt{\frac{x^{*T}\Sigma^{-1}x^*}{n-p+1}\,l^T R_{x^*}l}\,\Bigg)
=\tilde{\xi}^{-1}(n-1)\Bigg(l^T\Sigma^{-1}x^*
+\frac{t_0}{\sqrt{n-p+1}}\sqrt{l^T\Sigma^{-1}l}\,\sqrt{x^{*T}R_l\,x^*}\Bigg),
$$

where x* ∼ N_p(μ + Bν*, (1/n)Σ), ξ̃ ∼ χ²_{n−p}, and t_0 ∼ t(n − p + 1, 0, 1); ξ̃, x* and t_0 are mutually independent. Since R_lΣR_l = R_l, tr(R_lΣ) = p − 1, and R_lΣΣ^{-1}l = 0, the application of Corollary 5.1.3a and Theorem 5.5.1 in Mathai and Provost (1992) leads to

$$
l^T\Sigma^{-1}x^*\sim N\Big(l^T\Sigma^{-1}(\mu+B\nu^*),\ \frac{1}{n}\,l^T\Sigma^{-1}l\Big),
\qquad
n\,x^{*T}R_l\,x^*\sim\chi^2_{p-1}\big(n\delta^2(\nu^*)\big)
\ \ \text{with}\ \ \delta^2(\nu^*)=(\mu+B\nu^*)^T R_l\,(\mu+B\nu^*),
$$

and l^TΣ^{-1}x* and x*^T R_l x* are independent. Finally, using the stochastic representation of a t-distributed random variable, we get

$$
l^T S^{-1}x^*\ \stackrel{d}{=}\
\tilde{\xi}^{-1}(n-1)\Bigg(l^T\Sigma^{-1}(\mu+B\nu^*)
+\sqrt{l^T\Sigma^{-1}l}\,\sqrt{1+\frac{p-1}{n-p+1}\,\eta^*}\ \frac{z_0}{\sqrt{n}}\Bigg), \qquad (21)
$$

where ξ̃ ∼ χ²_{n−p}, z_0 ∼ N(0, 1), and η* ∼ F_{p−1,n−p+1}(nδ²(ν*)) (the non-central F-distribution with p − 1 and n − p + 1 degrees of freedom and non-centrality parameter nδ²(ν*)); ξ̃, z_0 and η* are

mutually independent. From Lemma 6.4(b) in Bodnar et al. (2016b) we get

$$
\sqrt{n}\,\Bigg[\begin{pmatrix}\tilde{\xi}/(n-p)\\ \eta^*\\ z_0/\sqrt{n}\end{pmatrix}
-\begin{pmatrix}1\\ 1+\delta^2(\nu^*)/c\\ 0\end{pmatrix}\Bigg]
\ \stackrel{D}{\longrightarrow}\
N\Bigg(0,\ \begin{pmatrix}2/(1-c) & 0 & 0\\ 0 & \sigma_\eta^2 & 0\\ 0 & 0 & 1\end{pmatrix}\Bigg)
$$

for p/n = c + o(n^{-1/2}), c ∈ (0, 1) as n → ∞, with

$$
\sigma_\eta^2=\frac{2}{c}\Big(1+2\,\frac{\delta^2(\nu^*)}{c}\Big)
+\frac{2}{1-c}\Big(1+\frac{\delta^2(\nu^*)}{c}\Big)^2.
$$

Consequently,

$$
\sqrt{n}\,\Bigg[\begin{pmatrix}\tilde{\xi}/(n-1)\\ (p-1)\eta^*/(n-p+1)\\ z_0/\sqrt{n}\end{pmatrix}
-\begin{pmatrix}1-c\\ (c+\delta^2(\nu^*))/(1-c)\\ 0\end{pmatrix}\Bigg]
\ \stackrel{D}{\longrightarrow}\
N\Bigg(0,\ \begin{pmatrix}2(1-c) & 0 & 0\\ 0 & c^2\sigma_\eta^2/(1-c)^2 & 0\\ 0 & 0 & 1\end{pmatrix}\Bigg)
$$

for p/n = c + o(n^{-1/2}), c ∈ (0, 1) as n → ∞. Finally, the application of the delta method (cf. DasGupta (2008, Theorem 3.7)) leads to

$$
\sqrt{n}\,\Big(l^T S^{-1}x^*-\frac{1}{1-c}\,l^T\Sigma^{-1}(\mu+B\nu^*)\Big)
\ \stackrel{D}{\longrightarrow}\ N\big(0,\tilde{\sigma}_{\nu^*}^2\big)
$$

for p/n = c + o(n^{-1/2}), c ∈ (0, 1) as n → ∞, with

$$
\tilde{\sigma}_{\nu^*}^2=\frac{1}{(1-c)^3}\Big[\big(l^T\Sigma^{-1}(\mu+B\nu^*)\big)^2
+l^T\Sigma^{-1}l\,\big(1+\delta^2(\nu^*)\big)\Big].
$$

Consequently,

$$
\sqrt{n}\,\tilde{\sigma}_{\nu^*}^{-1}\Big(l^T S^{-1}x^*-\frac{1}{1-c}\,l^T\Sigma^{-1}(\mu+B\nu^*)\Big)
\ \stackrel{D}{\longrightarrow}\ N(0,1),
$$

where the asymptotic distribution does not depend on ν*. Hence, it is also the unconditional asymptotic distribution.

Again, Theorem 3 shows that the distribution of l^T S^{-1} x̄ can be approximated by a mixture of normal distributions. Indeed, for large n and p,

$$
p^{-1}\, l^T S^{-1}\bar{x}\ \approx\ \mathcal{CN}\Big(\frac{1}{1-c}\,p^{-1}\, l^T\Sigma^{-1}\mu_\nu,\ \frac{p^{-2}\tilde{\sigma}_\nu^2}{n}\Big). \qquad (22)
$$

From the proof of Theorem 3 we can read off that the stochastic representation for the product of the inverse sample covariance matrix and the sample mean vector is expressed in terms of a χ²-distributed random variable, a generalized skew normally distributed random vector, and a standard t-distributed random variable. This result is very useful in itself: it allows one to generate values of l^T S^{-1} x̄ by drawing just three random variables from standard univariate distributions together with the random vector ν, which determines the particular member of the matrix-variate skew normal family. The assumptions on the boundedness of the quadratic and bilinear forms involving Σ^{-1} play the same role here as in Theorem 2. Note that in this case no assumption on the Frobenius norm of the covariance matrix or of its inverse is needed.
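As a sketch of this remark, once ν is fixed the representation (21) requires only three standard univariate draws: a χ²_{n−p}, a standard normal, and a non-central F variate. In the snippet below all names and sizes are illustrative assumptions, and Σ is taken diagonal so that Σ^{-1} is explicit:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, q = 50, 200, 3
mu = rng.uniform(-1.0, 1.0, p)
B = rng.uniform(0.0, 1.0, (p, q))
d = rng.uniform(0.5, 1.5, p)
Sigma_inv = np.diag(1.0 / d)          # Sigma = diag(d), so Sigma^{-1} is explicit
l = np.ones(p)
nu = np.abs(rng.standard_normal(q))   # nu = |psi|, truncated-normal case
m = mu + B @ nu                       # mu_nu = mu + B nu

# delta^2(nu) = mu_nu^T R_l mu_nu with R_l = Sigma^{-1} - Sigma^{-1} l l^T Sigma^{-1} / l^T Sigma^{-1} l
lSi = l @ Sigma_inv
delta2 = m @ Sigma_inv @ m - (lSi @ m) ** 2 / (lSi @ l)   # >= 0 by Cauchy-Schwarz

# representation (21): only three standard univariate draws are needed
xi = rng.chisquare(n - p)                              # chi^2_{n-p}
z0 = rng.standard_normal()                             # N(0, 1)
eta = rng.noncentral_f(p - 1, n - p + 1, n * delta2)   # F_{p-1, n-p+1}(n delta^2)

lSinvx = (n - 1) / xi * (
    lSi @ m
    + np.sqrt(lSi @ l) * np.sqrt(1.0 + (p - 1) / (n - p + 1) * eta) * z0 / np.sqrt(n)
)
```

Neither the p × n data matrix nor a p × p sample covariance matrix is ever formed, which is exactly the computational advantage noted above.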

4 Numerical study

In this section we present a Monte Carlo simulation study to investigate the performance of the suggested CLTs for the products of the sample (inverse) covariance matrix and the sample mean vector. In our simulations we put l = 1_p, each element of the vector μ is uniformly distributed on [−1, 1], and each element of the matrix B is uniformly distributed on [0, 1]. Also, we take Σ as a diagonal matrix where each diagonal element is uniformly distributed on [0, 1]. It can be checked that in such a setting the assumptions (A1) and (A2) are satisfied. Indeed, the population covariance matrix satisfies condition (A1) because the probability of getting an exactly zero eigenvalue equals zero. On the other hand, condition (A2) is obviously valid too, because the i-th eigenvector of Σ is u_i = e_i = (0, ..., 0, 1, 0, ..., 0)^T, with the 1 in the i-th position.

In order to define the distribution of the random vector ν, we consider two special cases. In the first case we take ν = |ψ|, where ψ ∼ N_q(0, I_q), i.e., ν has a q-variate truncated normal distribution. In the second case we put ν ∼ GAL_q(I_q, 1_q, 10), i.e., ν has a q-variate generalized asymmetric Laplace distribution (cf. Kozubowski et al. (2013)). Also, we put q = 10. We compare the results for several values of c ∈ {0.1, 0.5, 0.8, 0.95}. The simulated data consist of N = 10^4 independent realizations, which are used to fit the corresponding kernel density estimators with a Gaussian kernel. The bandwidth parameters are determined via cross-validation for every sample. The asymptotic distributions are simulated using the results of Theorems 2 and 3. The corresponding algorithm is given next:

a) generate ν = |ψ|, where ψ ∼ N_q(0_q, I_q), or generate ν ∼ GAL_q(I_q, 1_q, 10);

b) generate l^T S x̄ by using the stochastic representation (9) obtained in the proof of Theorem 2, namely

$$
l^T S\bar{x}\ \stackrel{d}{=}\ \frac{\xi}{n-1}\, l^T\Sigma(y+B\nu)
+\frac{\sqrt{\xi}}{n-1}\,\Big((y+B\nu)^T\Sigma(y+B\nu)\, l^T\Sigma l-\big(l^T\Sigma(y+B\nu)\big)^2\Big)^{1/2} z_0,
$$

where ξ ∼ χ²_{n−1}, z_0 ∼ N(0, 1), and y ∼ N_p(μ, (1/n)Σ); ξ, z_0, y, and ν are mutually independent;

b') generate l^T S^{-1} x̄ by using the stochastic representation (21) obtained in the proof of Theorem 3, namely

$$
l^T S^{-1}\bar{x}\ \stackrel{d}{=}\
\tilde{\xi}^{-1}(n-1)\Bigg(l^T\Sigma^{-1}(\mu+B\nu)
+\sqrt{l^T\Sigma^{-1}l}\,\sqrt{1+\frac{p-1}{n-p+1}\,\eta}\ \frac{z_0}{\sqrt{n}}\Bigg),
$$

where ξ̃ ∼ χ²_{n−p}, z_0 ∼ N(0, 1), and η ∼ F_{p−1,n−p+1}(nδ²(ν)) with δ²(ν) = (μ + Bν)^T R_l (μ + Bν) and R_l = Σ^{-1} − Σ^{-1}ll^TΣ^{-1}/l^TΣ^{-1}l; ξ̃, z_0 and (η, ν) are mutually independent;

c) compute



and



−1 T nσν l Sx − lT Σµν



  1 T −1 T −1 n˜ σν l S x − l Σ µν 1−c −1

where µν 2

σν 2 σ ˜ν

= µ + Bν   = µTν Σµν + c||Σ||2F lT Σl + (lT Σµν )2   2 1 T −1 T −1 2 2 l Σ µ + l Σ l(1 + δ (ν)) = ν (1 − c)3

with δ 2 (ν) = µTν Rl µν , Rl = Σ−1 − Σ−1 llT Σ−1 /lT Σ−1 l. d) repeat a)-c) N times. It is remarkable that for generating lT Sx and lT S−1 x only random variables from the standard distributions are need. Neither the data matrix X nor the sample covariance matrix S are used. [ F igures 1 − 8 ] In Figures 1-4 we present the results of simulations for the asymptotic distribution that is given in Theorem 2 while the asymptotic distribution as given in Theorem 3 is presented in Figures 5-8 for different values of c = {0.1, 0.5, 0.8, 0.95}. The suggested asymptotic distributions are shown as a a dashed black line, while the standard normal distribution is a solid black line. All results demonstrate a good performance of both asymptotic distributions for all considered values of c. Even in the extreme case c = 0.95 our asymptotic results seem to produce a quite reasonable approximation. Moreover, we observe a good robustness of our theoretical results for different distributions of ν. Also, we observe that all asymptotic distributions are slightly skewed to the right for the finite dimensions. This effect is even more significant in the case of the generalized asymmetric Laplace distribution. Nevertheless, the skewness disappears with growing dimension and sample size, i.e., the distribution becomes symmetric one and converges to its asymptotic counterpart.
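Steps a)-c) above can be sketched in code for the statistic of Theorem 2. The following Monte Carlo sketch uses a reduced number of replications and illustrative dimensions (it is not the paper's exact experiment), and shows only the truncated-normal case for ν:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, q, N = 50, 100, 10, 2000           # c = p/n = 0.5; N reduced for speed
mu = rng.uniform(-1.0, 1.0, p)
B = rng.uniform(0.0, 1.0, (p, q))
d = rng.uniform(0.0, 1.0, p) + 1e-2      # diagonal of Sigma, kept away from 0
Sigma = np.diag(d)
l = np.ones(p)
c = p / n

stats = np.empty(N)
for k in range(N):
    # a) generate nu = |psi|, psi ~ N_q(0, I_q)
    nu = np.abs(rng.standard_normal(q))
    m = mu + B @ nu                       # mu_nu
    # b) stochastic representation (9) of l^T S xbar: only xi, z0 and y are drawn
    y = rng.multivariate_normal(mu, Sigma / n)
    x = y + B @ nu                        # xbar given nu
    xi = rng.chisquare(n - 1)
    z0 = rng.standard_normal()
    lSx = xi / (n - 1) * (l @ Sigma @ x) + np.sqrt(xi) / (n - 1) * np.sqrt(
        (x @ Sigma @ x) * (l @ Sigma @ l) - (l @ Sigma @ x) ** 2
    ) * z0
    # c) standardize with the asymptotic mean and variance from Theorem 2
    var = (l @ Sigma @ m) ** 2 + (l @ Sigma @ l) * (
        m @ Sigma @ m + c * np.trace(Sigma @ Sigma) / p
    )
    stats[k] = np.sqrt(n) * (lSx - l @ Sigma @ m) / np.sqrt(var)

# d) the standardized statistics should be close to N(0, 1)
print(stats.mean(), stats.std())
```

The printed mean and standard deviation should be near 0 and 1, respectively, mirroring the agreement with the standard normal reference curve reported in the figures.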


5 Summary

In this paper we introduce the family of matrix-variate generalized skew normal (MVGSN) distributions, which generalizes a large number of existing skew normal models. Under the MVGSN distribution we derive the distributions of the sample mean vector and the sample covariance matrix and show that they are independently distributed. Furthermore, we derive CLTs under the high-dimensional asymptotic regime for the products of the sample (inverse) covariance matrix and the sample mean vector. In the numerical study, we document the good finite-sample performance of both asymptotic distributions.

6 Appendix

Proof of Proposition 1.

Proof. Straightforward but tedious calculations give

$$
\begin{aligned}
f_X(Z)
&=C^{-1}\int_{\mathbb{R}^q_+} f_{N_{p,n}(\mu 1_n^T,\,\Sigma\otimes I_n)}\big(Z-B\nu^* 1_n^T\,\big|\,\nu=\nu^*\big)\, f_{N_q(\xi,\Omega)}(\nu^*)\, d\nu^*\\
&=C^{-1}\,\frac{(2\pi)^{-(np+q)/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}
\int_{\mathbb{R}^q_+}\exp\Big\{-\tfrac12(\nu^*-\xi)^T\Omega^{-1}(\nu^*-\xi)\Big\}\\
&\qquad\times\exp\Big\{-\tfrac12\,\mathrm{vec}\big(Z-\mu 1_n^T-B\nu^* 1_n^T\big)^T(I_n\otimes\Sigma)^{-1}\mathrm{vec}\big(Z-\mu 1_n^T-B\nu^* 1_n^T\big)\Big\}\,d\nu^*\\
&=C^{-1}\,\frac{(2\pi)^{-(np+q)/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}
\exp\Big\{-\tfrac12\big[\mathrm{vec}(Z-\mu 1_n^T)^T(I_n\otimes\Sigma)^{-1}\mathrm{vec}(Z-\mu 1_n^T)+\xi^T\Omega^{-1}\xi\big]\Big\}\\
&\qquad\times\exp\Big\{\tfrac12\big[E\,\mathrm{vec}(Z-\mu 1_n^T)+\Omega^{-1}\xi\big]^T D\big[E\,\mathrm{vec}(Z-\mu 1_n^T)+\Omega^{-1}\xi\big]\Big\}\\
&\qquad\times\int_{\mathbb{R}^q_+}\exp\Big\{-\tfrac12\big[\nu^*-D\big(E\,\mathrm{vec}(Z-\mu 1_n^T)+\Omega^{-1}\xi\big)\big]^T D^{-1}\big[\nu^*-D\big(E\,\mathrm{vec}(Z-\mu 1_n^T)+\Omega^{-1}\xi\big)\big]\Big\}\,d\nu^*\\
&=C^{-1}\,\frac{|F|^{1/2}|D|^{1/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}
\exp\Big\{-\tfrac12\,\xi^T\Omega^{-1}\big(-D+\Omega-DEFE^TD\big)\Omega^{-1}\xi\Big\}\\
&\qquad\times\Phi_q\big(0;\,-D[E\,\mathrm{vec}(Z-\mu 1_n^T)+\Omega^{-1}\xi],\,D\big)\,
\phi_{pn}\big(\mathrm{vec}(Z-\mu 1_n^T);\,FE^TD\Omega^{-1}\xi,\,F\big)\\
&=\tilde{C}^{-1}\,\Phi_q\big(0;\,-D[E\,\mathrm{vec}(Z-\mu 1_n^T)+\Omega^{-1}\xi],\,D\big)\,
\phi_{pn}\big(\mathrm{vec}(Z-\mu 1_n^T);\,FE^TD\Omega^{-1}\xi,\,F\big),
\end{aligned}
$$

where D = (nB^TΣ^{-1}B + Ω^{-1})^{-1}, E = 1_n^T ⊗ B^TΣ^{-1}, F = (I_n ⊗ Σ^{-1} − E^T D E)^{-1}, and

$$
\tilde{C}^{-1}=C^{-1}\,\frac{|F|^{1/2}|D|^{1/2}}{|\Omega|^{1/2}|\Sigma|^{n/2}}
\exp\Big\{-\tfrac12\,\xi^T\Omega^{-1}\big(-D+\Omega-DEFE^TD\big)\Omega^{-1}\xi\Big\}.
$$

References

Adcock, C., Eling, M., and Loperfido, N. (2015). Skewed distributions in finance and actuarial science: a review. The European Journal of Finance, 21:1253-1281.

Azzalini, A. (2005). The skew-normal distribution and related multivariate families. Scandinavian Journal of Statistics, 32:159-188.

Azzalini, A. and Capitanio, A. (1999). Statistical applications of the multivariate skew-normal distribution. Journal of the Royal Statistical Society: Series B, 61:579-602.

Azzalini, A. and Dalla-Valle, A. (1996). The multivariate skew-normal distribution. Biometrika, 83:715-726.

Bai, J. and Shi, S. (2011). Estimating high dimensional covariance matrices and its applications. Annals of Economics and Finance, 12:199-215.

Bai, Z. D. and Silverstein, J. W. (2004). CLT for linear spectral statistics of large-dimensional sample covariance matrices. Annals of Probability, 32:553-605.

Bartoletti, S. and Loperfido, N. (2010). Modelling air pollution data by the skew-normal distribution. Stochastic Environmental Research and Risk Assessment, 24:513-517.

Bodnar, T. and Gupta, A. K. (2011). Estimation of the precision matrix of a multivariate elliptically contoured stable distribution. Statistics, 45:131-142.

Bodnar, T., Gupta, A. K., and Parolya, N. (2014a). On the strong convergence of the optimal linear shrinkage estimator for large dimensional covariance matrix. Journal of Multivariate Analysis, 132:215-228.

Bodnar, T., Gupta, A. K., and Parolya, N. (2016a). Direct shrinkage estimation of large dimensional precision matrix. Journal of Multivariate Analysis, to appear.

Bodnar, T., Hautsch, N., and Parolya, N. (2016b). Consistent estimation of the high dimensional efficient frontier. Technical report.

Bodnar, T., Mazur, S., and Okhrin, Y. (2013). On the exact and approximate distributions of the product of a Wishart matrix with a normal vector. Journal of Multivariate Analysis, 125:176-189.

Bodnar, T., Mazur, S., and Okhrin, Y. (2014b). Distribution of the product of singular Wishart matrix and normal vector. Theory of Probability and Mathematical Statistics, 91:1-14.

Bodnar, T. and Okhrin, Y. (2008). Properties of the singular, inverse and generalized inverse partitioned Wishart distributions. Journal of Multivariate Analysis, 99:2389-2405.

Bodnar, T. and Okhrin, Y. (2011). On the product of inverse Wishart and normal distributions with applications to discriminant analysis and portfolio theory. Scandinavian Journal of Statistics, 38:311-331.

Bodnar, T. and Schmid, W. (2008). A test for the weights of the global minimum variance portfolio in an elliptical model. Metrika, 67(2):127-143.

Cai, T. T. and Yuan, M. (2012). Adaptive covariance matrix estimation through block thresholding. Annals of Statistics, 40:2014-2042.

Cai, T. T. and Zhou, H. H. (2012). Optimal rates of convergence for sparse covariance matrix estimation. Annals of Statistics, 40:2389-2420.

Christiansen, M. and Loperfido, N. (2014). Improved approximation of the sum of random vectors by the skew normal distribution. Journal of Applied Probability, 51(2):466-482.

DasGupta, A. (2008). Asymptotic Theory of Statistics and Probability. Springer Texts in Statistics. Springer.

De Luca, G. and Loperfido, N. (2015). Modelling multivariate skewness in financial returns: a SGARCH approach. The European Journal of Finance, 21:1113-1131.

Efron, B. (2006). Minimum volume confidence regions for a multivariate normal mean vector. Journal of the Royal Statistical Society: Series B, 68:655-670.

Fan, J., Fan, Y., and Lv, J. (2008). High dimensional covariance matrix estimation using a factor model. Journal of Econometrics, 147:186-197.

Fan, J., Liao, Y., and Mincheva, M. (2013). Large covariance estimation by thresholding principal orthogonal complements. Journal of the Royal Statistical Society: Series B, 75:603-680.

Jorion, P. (1986). Bayes-Stein estimation for portfolio analysis. Journal of Financial and Quantitative Analysis, 21:279-292.

Kotsiuba, I. and Mazur, S. (2015). On the asymptotic and approximate distributions of the product of an inverse Wishart matrix and a Gaussian random vector. Theory of Probability and Mathematical Statistics, 93:96-105.

Kozubowski, T. J., Podgórski, K., and Rychlik, I. (2013). Multivariate generalized Laplace distribution and related random fields. Journal of Multivariate Analysis, 113:59-72.

Liseo, B. and Loperfido, N. (2003). A Bayesian interpretation of the multivariate skew-normal distribution. Statistics & Probability Letters, 61:395-401.

Liseo, B. and Loperfido, N. (2006). A note on reference priors for the scalar skew-normal distribution. Journal of Statistical Planning and Inference, 136:373-389.

Loperfido, N. (2010). Canonical transformations of skew-normal variates. TEST, 19:146-165.

Marčenko, V. A. and Pastur, L. A. (1967). Distribution of eigenvalues for some sets of random matrices. Sbornik: Mathematics, 1:457-483.

Mathai, A. and Provost, S. B. (1992). Quadratic Forms in Random Variables. Marcel Dekker.

Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. Wiley, New York.

17

Provost, S. and Rudiuk, E. (1996). The exact distribution of indefinite quadratic forms in noncentral normal vectors. Annals of the Institute of Statistical Mathematics, 48:381–394. Silverstein, J. W. (1995). Strong convergence of the empirical distribution of eigenvalues of large-dimensional random matrices. Journal of Multivariate Analysis, 55:331–339. Stein, C. (1956). Inadmissibility of the usual estimator of the mean of a multivariate normal distribution. In Neyman, J., editor, Proceedings of the third Berkeley symposium on mathematical and statistical probability. University of California, Berkley. Wang, C., Pan, G., Tong, T., and Zhu, L. (2015). Shrinkage estimation of large dimensional precision matrix using random matrix theory. Statistica Sinica, 25:993–1008.
[Figure: four kernel density plots, each overlaying the kernel density estimate on the standard normal density; panels (a) p = 50, n = 500, ν ∼ TN_q(0, I_q); (b) p = 100, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 50, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 100, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]

Figure 1: The kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.1.

[Figure: four kernel density plots, each overlaying the kernel density estimate on the standard normal density; panels (a) p = 250, n = 500, ν ∼ TN_q(0, I_q); (b) p = 500, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 250, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 500, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]

Figure 2: The kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.5.

[Figure: four kernel density plots, each overlaying the kernel density estimate on the standard normal density; panels (a) p = 400, n = 500, ν ∼ TN_q(0, I_q); (b) p = 800, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 400, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 800, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]

Figure 3: The kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.8.

[Figure: four kernel density plots, each overlaying the kernel density estimate on the standard normal density; panels (a) p = 475, n = 500, ν ∼ TN_q(0, I_q); (b) p = 950, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 475, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 950, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]

Figure 4: The kernel density estimator of the asymptotic distribution as given in Theorem 2 for c = 0.95.

[Figure: four kernel density plots, each overlaying the kernel density estimate on the standard normal density; panels (a) p = 50, n = 500, ν ∼ TN_q(0, I_q); (b) p = 100, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 50, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 100, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]

Figure 5: The kernel density estimator of the asymptotic distribution as given in Theorem 3 for c = 0.1.

[Figure: four kernel density plots, each overlaying the kernel density estimate on the standard normal density; panels (a) p = 250, n = 500, ν ∼ TN_q(0, I_q); (b) p = 500, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 250, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 500, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]

Figure 6: The kernel density estimator of the asymptotic distribution as given in Theorem 3 for c = 0.5.

[Figure: four kernel density plots, each overlaying the kernel density estimate on the standard normal density; panels (a) p = 400, n = 500, ν ∼ TN_q(0, I_q); (b) p = 800, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 400, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 800, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]

Figure 7: The kernel density estimator of the asymptotic distribution as given in Theorem 3 for c = 0.8.

[Figure: four kernel density plots, each overlaying the kernel density estimate on the standard normal density; panels (a) p = 475, n = 500, ν ∼ TN_q(0, I_q); (b) p = 950, n = 1000, ν ∼ TN_q(0, I_q); (c) p = 475, n = 500, ν ∼ GAL_q(1_q, I_q, 10); (d) p = 950, n = 1000, ν ∼ GAL_q(1_q, I_q, 10).]

Figure 8: The kernel density estimator of the asymptotic distribution as given in Theorem 3 for c = 0.95.