Tail Dependence for Multivariate t-Distributions and Its Monotonicity

Yin Chan
Department of Statistics
Washington State University
Pullman, WA 99164, U.S.A.
yin [email protected]

Haijun Li
Department of Mathematics and Department of Statistics
Washington State University
Pullman, WA 99164, U.S.A.
[email protected]
February 2007

Abstract

The tail dependence indexes of a multivariate distribution describe the amount of dependence in the upper-right or lower-left tail of the distribution and can be used to analyze the dependence among extremal random events. This paper examines the tail dependence of multivariate t-distributions, whose copulas are not explicitly accessible. Tractable formulas for the tail dependence indexes of a multivariate t-distribution are derived in terms of the joint moments of its underlying multivariate normal distribution, and the monotonicity properties of these indexes with respect to the distribution parameters are established. Simulation results are presented to illustrate the findings.

Key words and phrases: Tail dependence, copula, multivariate t-distribution, inverse Gamma distribution, regularly varying distribution, orthant dependence.

MSC2000 classification: 62H20, 62P05.

1 Introduction

Let X_i, 1 ≤ i ≤ n, be the loss variable of the i-th sub-portfolio in a financial portfolio over a given time horizon. In financial applications there is often stronger dependence among big losses, and the loss variables X_1, ..., X_n usually exhibit strong dependence among their extremal values (Embrechts, Lindskog and McNeil 2003). Various multivariate distributions with heavy tails are utilized to characterize this extremal or tail dependence; in particular, the multivariate t-distribution is frequently used in the context of modeling multivariate financial return data. A multivariate t-distribution t(µ, Σ, ν) can be expressed as the distribution of the following multivariate normal variance mixture:

(X_1, ..., X_n) = (µ_1, ..., µ_n) + √R (Z_1, ..., Z_n),    (1.1)

where µ = (µ_1, ..., µ_n) is the vector of component means, (Z_1, ..., Z_n) has a multivariate normal distribution N(0, Σ) with covariance matrix Σ, and the scale variable R, independent of (Z_1, ..., Z_n), has an inverse Gamma distribution, denoted by IG(ν/2, ν/2), with distribution function

G(r) := Pr{R ≤ r} = Γ(ν/2, ν/(2r)) / Γ(ν/2),  r > 0, ν > 0,    (1.2)

where Γ(ν/2, x) = ∫_x^∞ t^{ν/2−1} e^{−t} dt is the upper incomplete gamma function (here evaluated at x = ν/(2r)) and Γ(ν/2) = ∫_0^∞ t^{ν/2−1} e^{−t} dt is the gamma function. The inverse gamma distribution IG(ν/2, ν/2) is known to have a regularly varying right tail, and the tail dependence of the multivariate t-distribution emerges from this heavy-tail phenomenon.

The tail dependence of a multivariate distribution can be defined and analyzed via the copula of the distribution. A copula C is a distribution function, defined on the unit cube [0, 1]^n, with uniform one-dimensional marginals. Given a copula C, if one defines

F(t_1, ..., t_n) = C(F_1(t_1), ..., F_n(t_n)),  (t_1, ..., t_n) ∈ R^n,    (1.3)

then F is a multivariate distribution with univariate marginal distributions F_1, ..., F_n. Conversely, given a distribution F with marginals F_1, ..., F_n, there exists a copula C such that (1.3) holds. If F_1, ..., F_n are all continuous, then the corresponding copula C is unique and can be written as C(u_1, ..., u_n) = F(F_1^{−1}(u_1), ..., F_n^{−1}(u_n)), (u_1, ..., u_n) ∈ [0, 1]^n. Thus, for continuous multivariate distribution functions such as that of (1.1), the univariate marginals and the multivariate dependence structure can be separated, and the dependence structure can be represented by a copula (Sklar 1959, Nelsen 1999). The copula of a multivariate distribution couples the distribution margins together and provides a tool to describe various dependencies among the marginals.

The tail dependence of a bivariate distribution has been discussed extensively in the statistics literature (Joe 1997). The bivariate notion can be extended to the general multivariate case as follows (Schmidt 2002, Li 2006a).

Definition 1.1. Let X = (X_1, ..., X_n) be a random vector with continuous marginals F_1, ..., F_n and copula C.

1. X is said to be upper-orthant tail dependent if for some subset ∅ ≠ J ⊂ {1, ..., n}, the following limit exists and is positive:

τ_J = lim_{u↑1} Pr{F_j(X_j) > u, ∀j ∉ J | F_i(X_i) > u, ∀i ∈ J} > 0.    (1.4)

If τ_J = 0 for all ∅ ≠ J ⊂ {1, ..., n}, then X is said to be upper-orthant tail independent.

2. X is said to be upper extremal dependent if the following limit exists and is positive:

γ = lim_{u↑1} Pr{F_j(X_j) > u, ∀j ∈ {1, ..., n} | F_i(X_i) > u, ∃i ∈ {1, ..., n}} > 0.    (1.5)

If γ = 0, then X is said to be upper extremal independent.

Thus, tail dependence describes the conditional probability of joint exceedance over a large threshold given that some components already exceed that threshold. The limits τ_J (resp. γ) are called the upper tail (resp. extremal) dependence indexes. Obviously, for any random vector, τ_J ≥ γ for all ∅ ≠ J ⊂ {1, ..., n}; thus the extremal dependence index provides a lower bound for the upper-orthant tail dependence indexes (Frahm 2006). Tail dependence is clearly a copula property, and these indexes do not depend on the marginal distributions. If X_1, ..., X_n are independent, then the corresponding upper tail indexes are all zero. The lower tail dependence can be defined similarly. Since multivariate t-distributions are radially symmetric, their upper and lower tail dependence coincide, so we only need to discuss the upper tail dependence.

It follows from Definition 1.1 that the upper tail and extremal dependence indexes of a distribution can be derived directly from its copula; this has been done for the bivariate orthant tail dependence (Embrechts, Lindskog and McNeil 2003) and for the bivariate extremal dependence (Buhl, Reich and Wegmann 2002, Frahm 2005). Using the copula method, Li (2006a, 2006b) derived explicit expressions of the upper- and lower-orthant tail dependence for Marshall-Olkin distributions and for multivariate Pareto distributions. It is evident in Li (2006a, 2006b) that even when the copula is explicitly available, this direct copula method already becomes cumbersome in the multivariate case. The copula method is clearly ineffective for the distributions of (1.1), whose copulas are not explicitly accessible. In this paper, we instead use a tail analysis of the inverse gamma distribution (1.2) to derive tractable formulas of tail dependence for the multivariate t-distributions of (1.1).

It is well known that the bivariate normal distribution is asymptotically tail independent if its correlation coefficient ρ < 1. Schmidt (2002) showed that bivariate elliptical distributions possess the upper- (and lower-) orthant tail dependence property if their generating random variable is heavy-tailed. Frahm (2006) derived the formula for the extremal dependence of bivariate elliptical distributions. Our tail analysis extends these formulas to general multivariate t-distributions and expresses their tail dependence indexes as ratios of higher moments of the underlying normal distributions. Such expressions enable us to investigate the structural properties of tail dependence for multivariate t-distributions, and also provide efficient simulation procedures for the tail dependence indexes.

The paper is organized as follows. In Section 2, we first investigate the tail behavior of the incomplete gamma function and derive explicitly the regularly varying representation of the inverse Gamma distribution (1.2). This explicit representation facilitates our limiting arguments, leading to explicit expressions of the tail dependence for the distributions of (1.1). In Section 3, we discuss sufficient conditions under which the tail dependencies of (1.1) can be compared; simulation-based numerical examples of multivariate t-distributions are presented to illustrate the results. Finally, some comments in Section 4 conclude the paper.
Throughout this paper, the terms ‘increasing’ and ‘decreasing’ are used in the weak sense, and the measurability of functions is assumed without explicit mention.
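The normal variance mixture (1.1)-(1.2) also gives a direct way to simulate from t(µ, Σ, ν). The following is a minimal sketch in Python rather than the authors' R program (assuming NumPy and SciPy; the function name rmvt is ours), using the standard fact that R = ν/W has the IG(ν/2, ν/2) distribution when W is chi-squared with ν degrees of freedom:

```python
import numpy as np
from scipy import stats

def rmvt(n, mu, Sigma, nu, rng):
    """Sample n draws from t(mu, Sigma, nu) via the mixture (1.1):
    X = mu + sqrt(R) * Z with Z ~ N(0, Sigma) and R ~ IG(nu/2, nu/2),
    where R is generated as nu / W for W ~ chi-squared(nu)."""
    d = len(mu)
    Z = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    R = nu / rng.chisquare(nu, size=n)          # inverse gamma IG(nu/2, nu/2)
    return np.asarray(mu) + np.sqrt(R)[:, None] * Z

rng = np.random.default_rng(0)
Sigma = np.array([[4.0, 0.0, 1.0], [0.0, 3.0, 1.5], [1.0, 1.5, 2.0]])
X = rmvt(100_000, [0.0, 0.0, 0.0], Sigma, nu=4, rng=rng)
# Each margin, scaled by its sigma_ii, should be univariate t with nu d.o.f.
ks = stats.kstest(X[:, 0] / 2.0, stats.t(df=4).cdf)
```

A Kolmogorov-Smirnov comparison of the first scaled margin against the univariate t distribution with 4 degrees of freedom serves as a sanity check on the mixture representation.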

2 Tail and Extremal Dependence Indexes of Multivariate t-Distributions

Since the upper tail dependence depends only on the tail behavior of the random variables, we can focus, without loss of generality, on the multivariate t-distributed random vector with zero mean vector:

(X_1, ..., X_n) = √R (Z_1, ..., Z_n).    (2.1)

Let σ_ii > 0 denote the standard deviation of Z_i, 1 ≤ i ≤ n, and consider

(σ_11^{−1} X_1, ..., σ_nn^{−1} X_n) = √R (σ_11^{−1} Z_1, ..., σ_nn^{−1} Z_n).

Since (σ_11^{−1} X_1, ..., σ_nn^{−1} X_n) is a componentwise, strictly increasing transform of (X_1, ..., X_n), the two random vectors have exactly the same copula, and thus the same tail dependence indexes. Observing that each σ_ii^{−1} X_i has the same marginal distribution, say H, for 1 ≤ i ≤ n, we have

τ_J = lim_{u↑1} Pr{H(σ_jj^{−1} X_j) > u, ∀j ∉ J | H(σ_ii^{−1} X_i) > u, ∀i ∈ J}
    = lim_{u↑1} Pr{σ_jj^{−1} X_j > H^{−1}(u), ∀j ∉ J | σ_ii^{−1} X_i > H^{−1}(u), ∀i ∈ J}
    = lim_{t→∞} Pr{X_j > σ_jj t, ∀j ∉ J | X_i > σ_ii t, ∀i ∈ J}.    (2.2)

Similarly,

γ = lim_{t→∞} Pr{X_j > σ_jj t, ∀j ∈ {1, ..., n} | X_i > σ_ii t, ∃i ∈ {1, ..., n}}.    (2.3)

Therefore, the assessment of the upper tail dependence indexes boils down to analyzing the properly scaled tail behavior of (X_1, ..., X_n). For this, the following technical lemmas are crucial. The distribution F of a non-negative random variable is said to be regularly varying at ∞ with index α if its survival function has the form

F̄(r) := 1 − F(r) = L(r) / r^α,  r > 0, α > 0,    (2.4)

where L is a slowly varying function; that is, L is a positive function on (0, ∞) with the property

lim_{r→∞} L(cr) / L(r) = 1, for every c > 0.    (2.5)

Lemma 2.1. Let Ḡ(r) = 1 − Γ(ν/2, ν/(2r)) / Γ(ν/2) be the survival function of R with the inverse gamma distribution IG(ν/2, ν/2) given by (1.2). Then Ḡ(r) is regularly varying with index ν/2.
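Lemma 2.1 can be checked numerically. A sketch assuming SciPy, whose invgamma distribution with shape ν/2 and scale ν/2 is exactly the IG(ν/2, ν/2) of (1.2):

```python
from scipy.stats import invgamma

nu, c = 4.0, 2.0
R = invgamma(nu / 2, scale=nu / 2)    # IG(nu/2, nu/2) as in (1.2)

# Regular variation with index nu/2: G-bar(c r) / G-bar(r) -> c^{-nu/2}.
r = 1_000.0
ratio = R.sf(c * r) / R.sf(r)
print(ratio)                          # close to c**(-nu/2) = 0.25

# L(r) = r^{nu/2} G-bar(r) approaches 2/(nu Gamma(nu/2)) (nu/2)^{nu/2},
# which equals 2 for nu = 4 (cf. (2.9) below).
print(r ** (nu / 2) * R.sf(r))
```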

Proof. Applying L'Hospital's Rule yields

lim_{r→∞} Ḡ(r) / (e^{−ν/(2r)} / r^{ν/2}) = lim_{r→∞} [d/dr ((Γ(ν/2) − Γ(ν/2, ν/(2r))) / Γ(ν/2))] / [d/dr (e^{−ν/(2r)} / r^{ν/2})] = (2 / (ν Γ(ν/2))) (ν/2)^{ν/2}.    (2.6)

Let L*(r) = (2 / (ν Γ(ν/2))) (ν/2)^{ν/2} e^{−ν/(2r)}. Since for any c > 0,

lim_{r→∞} L*(cr) / L*(r) = lim_{r→∞} e^{(ν/2 − ν/(2c)) / r} = 1,    (2.7)

L*(r) is slowly varying. It follows from (2.6) and (2.7) that

lim_{r→∞} Ḡ(cr) / Ḡ(r) = lim_{r→∞} [Ḡ(cr) / (L*(cr) / (cr)^{ν/2})] · [(L*(r) / r^{ν/2}) / Ḡ(r)] · [L*(cr) / L*(r)] · [r^{ν/2} / (cr)^{ν/2}] = c^{−ν/2}.    (2.8)

Now write Ḡ(r) = L(r) / r^{ν/2} for r > 0, where L(r) = r^{ν/2} Ḡ(r). The limit (2.8) implies that

lim_{r→∞} L(cr) / L(r) = lim_{r→∞} (cr)^{ν/2} Ḡ(cr) / (r^{ν/2} Ḡ(r)) = c^{ν/2} · c^{−ν/2} = 1.

That is, L is slowly varying, and thus Ḡ is regularly varying with index ν/2. □

Lemma 2.2. Let Ḡ(r) = 1 − Γ(ν/2, ν/(2r)) / Γ(ν/2) be the survival function of R with the inverse gamma distribution IG(ν/2, ν/2) given by (1.2). Then L(r) = r^{ν/2} Ḡ(r) is bounded over (0, ∞).

Proof. Clearly L(r) is continuous on (0, ∞). Again by L'Hospital's Rule, the limit

lim_{r→∞} L(r) = lim_{r→∞} Ḡ(r) / r^{−ν/2} = lim_{r→∞} (2 / (ν Γ(ν/2))) (ν/2)^{ν/2} e^{−ν/(2r)} = (2 / (ν Γ(ν/2))) (ν/2)^{ν/2} > 0    (2.9)

is finite. Therefore, L(r) is bounded on (0, ∞). □

In fact, it can also be shown easily that L is asymptotically monotone; that is, there exists a sufficiently large c > 0 such that L(r) is monotone for all r ≥ c. We are now in a position to derive the main result. In what follows, a ∨ b and a ∧ b denote the maximum and the minimum of a and b, respectively.

Theorem 2.3. Let (X_1, ..., X_n) have a multivariate t-distribution t(µ, Σ, ν) as in (1.1). Then the upper-orthant tail dependence indexes are given by

τ_J = E(∧_{i=1}^n σ_ii^{−1} Z̄_i)^ν / E(∧_{i∈J} σ_ii^{−1} Z̄_i)^ν,  for all ∅ ≠ J ⊂ {1, ..., n},    (2.10)

and the upper extremal dependence index is given by

γ = E(∧_{i=1}^n σ_ii^{−1} Z̄_i)^ν / E(∨_{i=1}^n σ_ii^{−1} Z̄_i)^ν,    (2.11)

where Z̄_i = Z_i ∨ 0 for 1 ≤ i ≤ n.

Proof. Let Φ(z_1, ..., z_n) be the cumulative distribution function of the multivariate normal distribution N(0, Σ). It follows from (2.2) that

τ_J = lim_{t→∞} Pr{X_j > σ_jj t, ∀j ∉ J | X_i > σ_ii t, ∀i ∈ J}
    = lim_{t→∞} Pr{X_j ∨ 0 > σ_jj t, ∀j ∈ {1, ..., n}} / Pr{X_i ∨ 0 > σ_ii t, ∀i ∈ J}
    = lim_{t→∞} Pr{√R > t / (∧_{j=1}^n σ_jj^{−1} Z̄_j)} / Pr{√R > t / (∧_{j∈J} σ_jj^{−1} Z̄_j)}
    = lim_{t→∞} Pr{R > (t / (∧_{i=1}^n σ_ii^{−1} Z̄_i))²} / Pr{R > (t / (∧_{i∈J} σ_ii^{−1} Z̄_i))²}
    = lim_{t→∞} [∫_{z_1>0,...,z_n>0} Pr{R > (t / (∧_{i=1}^n σ_ii^{−1} z_i))²} dΦ(z_1, ..., z_n)] / [∫_{z_i>0, i∈J} Pr{R > (t / (∧_{i∈J} σ_ii^{−1} z_i))²} dΦ(z_1, ..., z_n)]
    = lim_{t→∞} [∫_{z_1>0,...,z_n>0} (∧_{i=1}^n σ_ii^{−1} z_i / t)^ν L((t / (∧_{i=1}^n σ_ii^{−1} z_i))²) dΦ(z_1, ..., z_n)] / [∫_{z_i>0, i∈J} (∧_{i∈J} σ_ii^{−1} z_i / t)^ν L((t / (∧_{i∈J} σ_ii^{−1} z_i))²) dΦ(z_1, ..., z_n)],

where the last equality follows from Lemma 2.1, writing Ḡ(r) = L(r) / r^{ν/2}. By Lemma 2.2, L is bounded over (0, ∞). Thus, the dominated convergence theorem and (2.9) imply that for all ∅ ≠ J ⊂ {1, ..., n},

τ_J = [∫_{z_1>0,...,z_n>0} (∧_{i=1}^n σ_ii^{−1} z_i)^ν dΦ(z_1, ..., z_n)] / [∫_{z_i>0, i∈J} (∧_{i∈J} σ_ii^{−1} z_i)^ν dΦ(z_1, ..., z_n)] = E(∧_{i=1}^n σ_ii^{−1} Z̄_i)^ν / E(∧_{i∈J} σ_ii^{−1} Z̄_i)^ν.

Similarly, for the extremal dependence index, we have

γ = lim_{t→∞} Pr{X_j > σ_jj t, ∀j ∈ {1, ..., n} | X_i > σ_ii t, ∃i ∈ {1, ..., n}}

= lim_{t→∞} Pr{X_j ∨ 0 > σ_jj t, ∀j ∈ {1, ..., n}} / Pr{X_i ∨ 0 > σ_ii t, ∃i ∈ {1, ..., n}}
= lim_{t→∞} Pr{R > (t / (∧_{i=1}^n σ_ii^{−1} Z̄_i))²} / Pr{R > (t / (∨_{i=1}^n σ_ii^{−1} Z̄_i))²}
= lim_{t→∞} [∫_{z_1>0,...,z_n>0} Pr{R > (t / (∧_{i=1}^n σ_ii^{−1} z_i))²} dΦ(z_1, ..., z_n)] / [∫_{z_i>0, ∃i} Pr{R > (t / (∨_{i=1}^n σ_ii^{−1} z_i))²} dΦ(z_1, ..., z_n)]
= lim_{t→∞} [∫_{z_1>0,...,z_n>0} (∧_{i=1}^n σ_ii^{−1} z_i / t)^ν L((t / (∧_{i=1}^n σ_ii^{−1} z_i))²) dΦ(z_1, ..., z_n)] / [∫_{z_i>0, ∃i} (∨_{i=1}^n σ_ii^{−1} z_i / t)^ν L((t / (∨_{i=1}^n σ_ii^{−1} z_i))²) dΦ(z_1, ..., z_n)]
= E(∧_{i=1}^n σ_ii^{−1} Z̄_i)^ν / E(∨_{i=1}^n σ_ii^{−1} Z̄_i)^ν,

where the last equality follows from Lemma 2.2 and the dominated convergence theorem. □

Note that τ_J and γ can also be expressed in terms of (X_1, ..., X_n). Let (X_i − µ_i)^+ = (X_i − µ_i) ∨ 0, i = 1, ..., n; then we have from (1.1)

((X_1 − µ_1)^+, ..., (X_n − µ_n)^+) = √R (Z̄_1, ..., Z̄_n).

Since the scale variable R and (Z_1, ..., Z_n) are independent, the factor E(R^{ν/2}) appears in both numerator and denominator and cancels, so for all ∅ ≠ J ⊂ {1, ..., n},

τ_J = E(∧_{i=1}^n σ_ii^{−1} (X_i − µ_i)^+)^ν / E(∧_{i∈J} σ_ii^{−1} (X_i − µ_i)^+)^ν.    (2.12)

Similarly,

γ = E(∧_{i=1}^n σ_ii^{−1} (X_i − µ_i)^+)^ν / E(∨_{i=1}^n σ_ii^{−1} (X_i − µ_i)^+)^ν.    (2.13)

These expressions can be used to construct moment estimators of τ_J and γ based on observations of (X_1, ..., X_n).
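As an illustration of (2.12)-(2.13), the following is a Python sketch of such moment estimators applied to simulated t-observations (assuming NumPy; the function name tail_indexes and the sample size are ours, and J uses 0-based indexing):

```python
import numpy as np

def tail_indexes(X, mu, Sigma, nu, J):
    """Moment estimators of tau_J and gamma via (2.12)-(2.13): ratios of
    sample nu-th moments of the scaled positive parts of X - mu."""
    sd = np.sqrt(np.diag(Sigma))                       # sigma_ii
    P = np.maximum(np.asarray(X) - np.asarray(mu), 0.0) / sd
    num = np.mean(np.min(P, axis=1) ** nu)             # numerator of both (2.12) and (2.13)
    tau_J = num / np.mean(np.min(P[:, J], axis=1) ** nu)
    gamma = num / np.mean(np.max(P, axis=1) ** nu)
    return tau_J, gamma

# Demo on data simulated from t(0, Sigma, nu) via the mixture (1.1).
rng = np.random.default_rng(1)
Sigma = np.array([[4.0, 0.0, 1.0], [0.0, 3.0, 1.5], [1.0, 1.5, 2.0]])
m = 200_000
Z = rng.multivariate_normal(np.zeros(3), Sigma, size=m)
X = np.sqrt(4 / rng.chisquare(4, size=m))[:, None] * Z
tau_J, gamma = tail_indexes(X, np.zeros(3), Sigma, nu=4, J=[2])
```

By construction gamma ≤ tau_J ≤ 1 for every sample, matching the ordering τ_J ≥ γ noted after Definition 1.1.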

3 Monotonicity Properties of Tail Dependence Indexes

A tail dependence index of a multivariate distribution is a limiting conditional probability of the components' joint exceedance over a large threshold given that some components have exceeded that threshold. As we illustrate in this section, increasing the dependence of a multivariate distribution may or may not translate into increasing its tail dependence. To see this, we work with (2.10) and (2.11). Observe that

E(∧_{i=1}^n σ_ii^{−1} Z̄_i)^ν = ∫_0^∞ Pr{(∧_{i=1}^n σ_ii^{−1} Z̄_i)^ν > t} dt = ∫_0^∞ Pr{Z_1 > σ_11 t^{1/ν}, ..., Z_n > σ_nn t^{1/ν}} dt,    (3.1)

E(∨_{i=1}^n σ_ii^{−1} Z̄_i)^ν = ∫_0^∞ Pr{(∨_{i=1}^n σ_ii^{−1} Z̄_i)^ν > t} dt = ∫_0^∞ Pr{Z_i > σ_ii t^{1/ν}, ∃i ∈ {1, ..., n}} dt = ∫_0^∞ (1 − Pr{Z_1 ≤ σ_11 t^{1/ν}, ..., Z_n ≤ σ_nn t^{1/ν}}) dt.    (3.2)
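Identity (3.1) can be verified numerically in a small case. A sketch assuming SciPy (a bivariate example of our choosing; upper-orthant probabilities come from the multivariate normal cdf by central symmetry):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Bivariate check of (3.1): E(min_i sigma_ii^{-1} Zbar_i)^nu should equal the
# integral over t of Pr{Z_1 > sigma_11 t^{1/nu}, Z_2 > sigma_22 t^{1/nu}}.
Sigma = np.array([[4.0, 1.0], [1.0, 3.0]])
sd = np.sqrt(np.diag(Sigma))
nu = 4.0

mvn = stats.multivariate_normal(mean=[0.0, 0.0], cov=Sigma)
def upper_orthant(t):
    a = sd * t ** (1.0 / nu)
    return mvn.cdf(-a)        # Pr{Z > a} = Pr{Z < -a} by symmetry of N(0, Sigma)

rng = np.random.default_rng(3)
Z = rng.multivariate_normal([0.0, 0.0], Sigma, size=400_000)
lhs = np.mean(np.min(np.maximum(Z, 0.0) / sd, axis=1) ** nu)  # Monte Carlo expectation
rhs, _ = quad(upper_orthant, 0.0, np.inf)                     # right-hand side of (3.1)
```

The Monte Carlo left-hand side and the quadrature right-hand side should agree up to sampling error.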

To compare the upper and lower orthant probabilities of multivariate normal distributions, the following result from Tong (1980, Theorem 2.1.1 and its Corollary 1) is useful.


Lemma 3.1. Let (Z_1, ..., Z_n) and (Z'_1, ..., Z'_n) be normally distributed with distributions N(0, Σ) and N(0, Σ') respectively. If Σ ≥ Σ' entry-wise, then

Pr{Z_1 > t_1, ..., Z_n > t_n} ≥ Pr{Z'_1 > t_1, ..., Z'_n > t_n},
Pr{Z_1 ≤ t_1, ..., Z_n ≤ t_n} ≥ Pr{Z'_1 ≤ t_1, ..., Z'_n ≤ t_n},

for any t_1, ..., t_n. This lemma, together with (3.1) and (3.2), implies the following.

Proposition 3.2. Let (X_1, ..., X_n) have a multivariate t-distribution t(µ, Σ, ν), as in (1.1), with normal covariances σ_ij = Cov{Z_i, Z_j}, i ≠ j. The extremal dependence index γ is increasing in σ_ij.

Thus, increasing the covariances also increases the lower bound γ of the upper-orthant tail dependence indexes τ_J. The effect of the covariances on the τ_J's, however, is more subtle. As our next result shows, the monotonicity of τ_J in σ_ij depends not only on whether or not i, j ∈ J, but also on the underlying concentration matrix. Let Z_J = (Z_j, j ∈ J); then Z_J has a |J|-dimensional normal distribution N(0, Σ_J), where Σ_J is the corresponding covariance matrix of Z_J. Define

K = (k_ij) := Σ^{−1},  K_J = (k_ij^J) := Σ_J^{−1}.    (3.3)

The matrices K and K_J are called the concentration matrices of N(0, Σ) and N(0, Σ_J) respectively (Lauritzen 1996).

Proposition 3.3. Let (X_1, ..., X_n) have a multivariate t-distribution t(µ, Σ, ν), as in (1.1), with normal covariances σ_ij = Cov{Z_i, Z_j}, i ≠ j.

1. If i ∉ J or j ∉ J, then τ_J is increasing in σ_ij.

2. If i, j ∈ J and k_ij > k_ij^J (k_ij < k_ij^J), then ∂τ_J/∂σ_ij < 0 (> 0).

Proof. (1) Suppose that i ∉ J or j ∉ J. Then the multivariate marginal distribution of Z_J, and in particular E(∧_{i∈J} σ_ii^{−1} Z̄_i)^ν, are invariant when σ_ij is increased. It follows from (3.1) and Lemma 3.1 that E(∧_{i=1}^n σ_ii^{−1} Z̄_i)^ν is increasing in σ_ij, and thus τ_J is increasing in σ_ij in this case.

(2) If i, j ∈ J, then both E(∧_{i=1}^n σ_ii^{−1} Z̄_i)^ν and E(∧_{i∈J} σ_ii^{−1} Z̄_i)^ν are increasing in σ_ij. To examine the ratio τ_J, let φ(z, Σ) and φ_J(z_J, Σ_J) denote the normal densities

φ(z, Σ) = (2π)^{−n/2} |K|^{1/2} exp{−(1/2) z^T K z},
φ_J(z_J, Σ_J) = (2π)^{−|J|/2} |K_J|^{1/2} exp{−(1/2) z_J^T K_J z_J},

where (·)^T denotes the transpose of a column vector. Then we have

τ_J = [∫_{z≥0} (∧_{i=1}^n σ_ii^{−1} z_i)^ν φ(z, Σ) dz] / [∫_{z_J≥0} (∧_{i∈J} σ_ii^{−1} z_i)^ν φ_J(z_J, Σ_J) dz_J].

It follows from Plackett's Lemma (Tong 1980, page 9) that for i, j ∈ J with i ≠ j,

∂φ(z, Σ)/∂σ_ij = ∂²φ(z, Σ)/∂z_i ∂z_j,  and  ∂φ_J(z_J, Σ_J)/∂σ_ij = ∂²φ_J(z_J, Σ_J)/∂z_i ∂z_j.

Observe that

∂²φ(z, Σ)/∂z_i ∂z_j = (2π)^{−n/2} |K|^{1/2} exp{−(1/2) z^T K z} · ∂²(−(1/2) z^T K z)/∂z_i ∂z_j = −k_ij φ(z, Σ).    (3.4)

Similarly,

∂²φ_J(z_J, Σ_J)/∂z_i ∂z_j = −k_ij^J φ_J(z_J, Σ_J).    (3.5)

After interchanging the order of differentiation and integration, we obtain that ∂τ_J/∂σ_ij < 0 if and only if

[∫_{z≥0} (∧_{i=1}^n σ_ii^{−1} z_i)^ν ∂²φ(z, Σ)/∂z_i ∂z_j dz] · [∫_{z_J≥0} (∧_{i∈J} σ_ii^{−1} z_i)^ν φ_J(z_J, Σ_J) dz_J] < [∫_{z_J≥0} (∧_{i∈J} σ_ii^{−1} z_i)^ν ∂²φ_J(z_J, Σ_J)/∂z_i ∂z_j dz_J] · [∫_{z≥0} (∧_{i=1}^n σ_ii^{−1} z_i)^ν φ(z, Σ) dz].

Plugging (3.4) and (3.5) into this inequality shows that it is equivalent to −k_ij < −k_ij^J. Thus ∂τ_J/∂σ_ij < 0 whenever k_ij > k_ij^J; the positive-derivative case can be established similarly. □

Proposition 3.3 (1) can be interpreted as τ_J being increased, for i ∈ J and j ∉ J, when the correlation between Z_J and (Z_k, k ∉ J) is increased. For Proposition 3.3 (2), we interpret k_ij via the negative partial correlation coefficient between components i and j (Lauritzen 1996), that is, the conditional correlation of components i and j given that the other components are held fixed. Increasing σ_ij then yields a smaller increase in the expected minimum of correlated normal random variables with the smaller partial correlation, leading to a decrease in the ratio τ_J.

To illustrate our monotonicity results, we estimate τ_J and γ via classical Monte Carlo simulation (Robert and Casella 1999). Let z^(1), ..., z^(m) be a sample generated from the


multivariate normal distribution N(0, Σ). The upper-orthant tail dependence index τ_J can be estimated by

τ̄_J = [Σ_{k=1}^m (∧_{i=1}^n σ_ii^{−1} z_i^(k))^ν I{z^(k) ≥ 0}] / [Σ_{k=1}^m (∧_{i∈J} σ_ii^{−1} z_i^(k))^ν I{z_i^(k) ≥ 0, ∀i ∈ J}],    (3.6)

where z^(k) = (z_1^(k), ..., z_n^(k)), 1 ≤ k ≤ m, and I{·} is the indicator function. Since the normal distribution N(0, Σ) is radially symmetric, we can also use the estimator

τ̂_J = [Σ_{k=1}^m (∧_{i=1}^n σ_ii^{−1} |z_i^(k)|)^ν I{z^(k) ≥ 0 or z^(k) ≤ 0}] / [Σ_{k=1}^m (∧_{i∈J} σ_ii^{−1} |z_i^(k)|)^ν I{z^(k) ≥ 0 or z^(k) ≤ 0}].    (3.7)

Similarly, the estimator

γ̂ = [Σ_{k=1}^m (∧_{i=1}^n σ_ii^{−1} |z_i^(k)|)^ν I{z^(k) ≥ 0 or z^(k) ≤ 0}] / [Σ_{k=1}^m (∨_{i=1}^n σ_ii^{−1} |z_i^(k)|)^ν I{z^(k) ≥ 0 or z^(k) ≤ 0}]    (3.8)

is used to estimate the extremal dependence index γ. Note that the relative inefficiency of these estimators is due to the generation of vectors outside the domain of interest, especially when the random variables are negatively correlated.

In the following examples, a variety of 3 × 3 covariance matrices Σ are selected for simulation illustrations. For each covariance matrix, we simulate 7,000,000 random samples from the corresponding trivariate normal distribution and estimate the upper tail dependence indexes for various values of ν, using τ̂_J of (3.7) and γ̂ of (3.8). The sample variances of τ̂_J for ν = 10 are calculated using 1,000 independent runs and are observed to stabilize after 400 runs. The program is written in R, and the curves of the tail dependence indexes against the heavy-tail parameter ν are plotted in the figures.

1. Consider
 4 0 1   Σ =  0 3 1.5  , 1 1.5 2

The values of τ3 , τ23 and γ are plotted in Figure 1, showing that τ3 and τ23 are bounded by γ from below. The sample variance of τˆ3 for ν = 10 is 1.033265 × e−7 . 2. Consider



   4 0 0.5 4 1 0.5     Σ1 =  0 3 1  , Σ2 =  1 3 1  . 0.5 1 2 0.5 1 2 11

Figure 1: τ_3, τ_23 and γ, plotted against the degrees of freedom ν.

Note that the only difference between Σ_1 and Σ_2 is that σ_12 increases from 0 in Σ_1 to 1 in Σ_2. The concentration matrices K and K_{12} of Σ_1 are easily obtained, showing that k_12 = 0.02597403 > 0 = k_12^{12}. Thus, it follows from Proposition 3.3 (2) that τ_12 is locally decreasing in σ_12, as shown in the left graph of Figure 2. Proposition 3.3 (1) implies that τ_23 is increasing in σ_12, as shown in the right graph of Figure 2. The sample variance of τ̂_12 for ν = 10 is 0.0002762413.

3. Consider
   4 0.8 2 4 0.8 0.8     Σ1 =  0.8 4 0.8  , Σ2 =  0.8 4 0.8  . 2 0.8 4 0.8 0.8 4 

This is another illustration of Proposition 3.3. The difference between Σ1 and Σ2 is that σ13 increases from 0.8 in Σ1 to 2 in Σ2 . The concentration matrices K and K13 13 of Σ1 can be easily obtained, showing that k13 = −0.04464286 > −0.05208333 = k13 . Thus, it follows from Proposition 3.3 (2) that τ13 is locally decreasing in σ13 , as is showed in the left graph of Figure 3. Proposition 3.3 (1) implies that τ23 is increasing in σ13 , as is showed in the right graph of Figure 3. The sample variance of τˆ13 for ν = 10 is 8.634881 × e−5 .
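The concentration-matrix comparisons in Examples 2 and 3 are quick to reproduce. A sketch assuming NumPy (variable names are ours):

```python
import numpy as np

# Example 2: Sigma_1 with sigma_12 = 0; compare k_12 with k_12^{12} from (3.3).
S2 = np.array([[4.0, 0.0, 0.5], [0.0, 3.0, 1.0], [0.5, 1.0, 2.0]])
K2 = np.linalg.inv(S2)
K2_12 = np.linalg.inv(S2[np.ix_([0, 1], [0, 1])])
print(K2[0, 1], K2_12[0, 1])    # 0.02597403 > 0, so tau_12 is locally decreasing

# Example 3: Sigma_1 with sigma_13 = 0.8; compare k_13 with k_13^{13}.
S3 = np.array([[4.0, 0.8, 0.8], [0.8, 4.0, 0.8], [0.8, 0.8, 4.0]])
K3 = np.linalg.inv(S3)
K3_13 = np.linalg.inv(S3[np.ix_([0, 2], [0, 2])])
print(K3[0, 2], K3_13[0, 1])    # -0.04464286 > -0.05208333, same conclusion for tau_13
```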


Figure 2: Comparison of tail dependence indexes of Σ_1 and Σ_2 in Example 2 (left: τ_12, right: τ_23), plotted against the degrees of freedom ν.

4. Consider

   4 0.5 0.7 4 0 0.5     Σ1 =  0 3 1  , Σ2 =  0.5 3 1.4  . 0.7 1.4 2 0.5 1 2 

Each of σij , i 6= j is increased from Σ1 to Σ2 . The indexes τ12 and τ23 are also increased as is showed in Figure 4. The sample variance of τˆ12 for ν = 10 is 0.0002954590.
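The estimators (3.7) and (3.8) are straightforward to implement. A sketch assuming NumPy (function names are ours, J uses 0-based indexing, and the sample size is reduced from the paper's 7,000,000 for brevity):

```python
import numpy as np

def keep_mask(z):
    """Samples kept by (3.7)-(3.8): all coordinates >= 0 or all <= 0."""
    return np.all(z >= 0, axis=1) | np.all(z <= 0, axis=1)

def tau_hat(z, sd, nu, J):
    """Estimator (3.7) of tau_J, exploiting radial symmetry of N(0, Sigma)."""
    a = np.abs(z[keep_mask(z)]) / sd
    return np.sum(np.min(a, axis=1) ** nu) / np.sum(np.min(a[:, J], axis=1) ** nu)

def gamma_hat(z, sd, nu):
    """Estimator (3.8) of the extremal dependence index gamma."""
    a = np.abs(z[keep_mask(z)]) / sd
    return np.sum(np.min(a, axis=1) ** nu) / np.sum(np.max(a, axis=1) ** nu)

rng = np.random.default_rng(4)
Sigma1 = np.array([[4.0, 0.0, 0.5], [0.0, 3.0, 1.0], [0.5, 1.0, 2.0]])  # Example 2
sd = np.sqrt(np.diag(Sigma1))
z = rng.multivariate_normal(np.zeros(3), Sigma1, size=500_000)
t12, t23 = tau_hat(z, sd, 10, [0, 1]), tau_hat(z, sd, 10, [1, 2])
g = gamma_hat(z, sd, 10)
```

For every sample these estimates satisfy γ̂ ≤ τ̂_J ≤ 1, consistent with γ being the lower bound of the orthant indexes.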

4 Concluding Remarks
Tail dependence of random variables is the limiting proportion of joint exceedance of these random variables over a large threshold given that some of them have already exceeded the threshold. As such, whether or not increasing the dependence of random variables results in an increase of their tail dependence depends not only on the amount of the dependence increase but also on where the increase takes place. As we have shown in this paper, both theoretically and numerically, some tail dependence indexes of a multivariate t-distribution can even decrease when correlations of its underlying multivariate normal distribution are increased.

The tractable formulas obtained in this paper for a multivariate t-distribution express its various tail dependence indexes in terms of its degrees of freedom and its joint moments. Estimates of the tail dependence indexes based on these formulas have been carried out via simulation for various distribution parameters; these numerical results illustrate various monotonicity properties of tail dependence for multivariate t-distributions.

One issue which we have not addressed in this paper is the monotonicity of the tail dependence indexes with respect to the degrees of freedom. This monotonicity property, which we believe holds for multivariate t-distributions, remains an open problem. The formulas obtained here also suggest that the tail dependence indexes of multivariate t-distributions can be evaluated more accurately via numerical integration of multivariate normal distributions. This issue, as well as other estimation issues such as comparing the estimators proposed in this paper with non-parametric estimators of tail dependence indexes, needs further study.

Figure 3: Various upper-orthant tail dependence indexes (τ_13 and τ_23 of Σ_1 and Σ_2 in Example 3, plotted against the degrees of freedom ν).

References

[1] Embrechts, P., Lindskog, F. and McNeil, A. (2003). Modeling dependence with copulas and applications to risk management. In Handbook of Heavy Tailed Distributions in Finance (S. Rachev, ed.), Elsevier, Chapter 8, 329-384.

[2] Buhl, C., Reich, C. and Wegmann, P. (2002). Extremal dependence between return risk and liquidity risk: An analysis for the Swiss market. Technical report, Department of Finance, University of Basel.

Figure 4: Comparisons of τ_12 and τ_23 of Σ_1 and Σ_2 in Example 4, plotted against the degrees of freedom ν.

[3] Frahm, G. (2006). On the extremal dependence coefficient of multivariate distributions. Statistics & Probability Letters 76, 1470-1481.

[4] Joe, H. (1997). Multivariate Models and Dependence Concepts. Chapman & Hall, London.

[5] Lauritzen, S. L. (1996). Graphical Models. Oxford Science Publications, New York.

[6] Li, H. (2006a). Tail dependence comparison of survival Marshall-Olkin copulas. Technical Report 2006-4, Department of Mathematics, Washington State University, Pullman, WA (http://www.math.wsu.edu/TRS/2006-index.html).

[7] Li, H. (2006b). Tail dependence of multivariate Pareto distributions. Technical Report 2006-6, Department of Mathematics, Washington State University, Pullman, WA (http://www.math.wsu.edu/TRS/2006-index.html).

[8] Marshall, A. W. and Olkin, I. (1967). A multivariate exponential distribution. J. Amer. Statist. Assoc. 62, 30-44.

[9] Nelsen, R. (1999). An Introduction to Copulas. Springer, New York.

[10] Resnick, S. (1987). Extreme Values, Regular Variation, and Point Processes. Springer, New York.

[11] Robert, C. P. and Casella, G. (1999). Monte Carlo Statistical Methods. Springer, New York.

[12] Schmidt, R. (2002). Tail dependence for elliptically contoured distributions. Mathematical Methods of Operations Research 55, 301-327.

[13] Schmidt, R. (2003). Credit risk modeling and estimation via elliptical copulae. In Credit Risk: Measurement, Evaluation and Management (G. Bohl, G. Nakhaeizadeh, S. T. Rachev, T. Ridder and K. H. Vollmer, eds.), Physica-Verlag, Heidelberg, 267-289.

[14] Sklar, A. (1959). Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique de l'Université de Paris 8, 229-231.

[15] Tong, Y. L. (1980). Probability Inequalities in Multivariate Distributions. Academic Press, New York.
