Operator Geometric Stable Laws

Tomasz J. Kozubowski(1), Mark M. Meerschaert(2), and Anna K. Panorska(3)
University of Nevada, Reno

Hans-Peter Scheffler
University of Dortmund

Operator geometric stable laws are the weak limits of operator normed and centered geometric random sums of independent, identically distributed random vectors. They generalize operator stable laws and geometric stable laws. In this work we characterize operator geometric stable distributions, their divisibility and domains of attraction, and their application to finance. Operator geometric stable laws are useful for modeling financial portfolios where the cumulative price change vectors are sums of a random number of small random shocks with heavy tails, and each component has a different tail index.

AMS 2000 subject classifications: 60E07, 60F05, 60G50, 62H05, 62P05.

Key words and phrases: currency exchange rates; domains of attraction; geometric stable law; heavy tails; infinite divisibility; Linnik distribution; operator stable law; randomized sum; skew Laplace law; stability; stable distribution.

1. Introduction

(1) The research of this author was partially supported by NSF grant DMS-0139927.
(2) The research of this author was partially supported by NSF grants DES-9980484 and DMS-0139927.
(3) The research of this author was partially supported by NSF grants ATM-0231781 and DMS-0139927.

We introduce a new class of multivariate distributions called operator geometric stable, generalizing the geometric stable and operator stable laws. Our motivation comes from

a problem in finance, where a portfolio of stocks or other financial instruments changes price over time, resulting in a time series of random vectors. The daily price change vectors are each accumulations of a random number of random shocks. Price shocks are typically heavy tailed, with a tail parameter that is different for each stock [39]. Operator stable models can handle the variations in tail behavior [33], while geometric stable models [11, 14, 21, 24, 26, 34, 35, 40, 41] capture the fact that these are random sums. The combination of operator norming and geometric randomized sums pursued in this paper should provide a more useful and realistic class of distributions for portfolio modeling. The more general case, where the number of summands has an arbitrary distribution, is discussed in a companion paper [18]. The focus of this paper on geometric summation allows a simpler treatment and more complete results.

Let (X_i) be a sequence of independent and identically distributed (i.i.d.) random vectors (r.v.'s) in R^d. Consider a random sum X_1 + · · · + X_{N_p}, where N_p is a geometric random variable with mean 1/p, independent of the X_i's. If there exists a weak limit of

(1.1)    A_p ∑_{i=1}^{N_p} (X_i + b_p)   as p → 0,
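The limit scheme (1.1) is easy to explore by simulation. The following sketch (ours, purely illustrative; all parameter choices are assumptions) takes X_i standard normal, b_p = 0, and scalar norming A_p = p^{1/2}; the normalized geometric sum then has variance exactly one, and its weak limit as p → 0 is a Laplace law, whose kurtosis is 6 rather than the Gaussian 3.

```python
import numpy as np

def geometric_random_sum(p, n_trials, rng):
    """Simulate A_p * sum_{i=1}^{N_p} X_i from (1.1) with X_i ~ N(0, 1),
    b_p = 0, and scalar norming A_p = p**0.5 (finite-variance case)."""
    out = np.empty(n_trials)
    for k in range(n_trials):
        n = rng.geometric(p)                 # N_p, geometric with mean 1/p
        out[k] = np.sqrt(p) * rng.standard_normal(n).sum()
    return out

samples = geometric_random_sum(p=0.01, n_trials=20_000,
                               rng=np.random.default_rng(0))
# Var = p * E[N_p] * Var(X) = 1 exactly; the kurtosis approaches the
# Laplace value 6 (vs. 3 for the normal), reflecting the sharper peak.
kurt = ((samples - samples.mean())**4).mean() / samples.var()**2
print(samples.var(), kurt)
```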

where A_p is a linear operator on R^d and b_p ∈ R^d, then we call it an operator geometric stable (OGS) law. The limits in (1.1) under scalar norming A_p = a_p ∈ R are geometric stable (GS) vectors (see, e.g., [12, 27]). The same limit of a deterministic sum (N_p replaced with a positive integer n) is an operator stable (OS) vector (see, e.g., [10, 32]). Each component of an OGS vector may have different tail behavior, unlike GS laws, where the tail behavior is the same in every coordinate. All components of an OGS law are dependent, unlike OS laws, where the normal and heavy tailed components are necessarily independent. When all components have finite variance, an OGS vector has a skew Laplace distribution (see [14, 23, 32]). If the normalizing operator A_p is a scalar, the OGS law is GS. If the normalizing operator A_p is diagonal, the OGS law is marginally GS (all components have geometric stable distributions). Geometric summation arises naturally in various fields, including biology, economics, insurance mathematics, physics, reliability and queuing theories among others (see, e.g.,


[11]). Thus, as limits of random sums, OGS laws will undoubtedly find numerous applications in stochastic modeling. Their infinite divisibility provides for a natural model when the variable of interest can be thought of as a random sum of small quantities, which is often the case in finance and insurance. Finally, OGS laws can be asymmetric, which further adds to their modeling applicability. Univariate and multivariate geometric stable distributions, and their special cases of skew Laplace laws compete successfully with stable and other laws in modeling financial asset returns (see, e.g., [14, 21, 23, 24, 26, 28, 35, 39]). OGS models, which extend this class to allow different tail behavior for each vector component, should enhance their modeling potential. The problems of geometric summation and geometric stability with operator norming have also been considered for certain (noncommutative) groups that include finite dimensional real vector spaces as a special case. See [5, 6, 7, 8] for details. As we develop the theory of OGS laws in this paper, we will also point out the relations between our results, and those obtained in the more abstract setting of probability on groups. For the special case of finite dimensional real vector spaces treated in this paper, our treatment using characteristic functions is simpler, and leads to several new results. In this paper we derive fundamental properties of OGS laws. We start in Section 2 with a brief recounting of essential ideas from the theory of operator stable laws. In Section 3, we present the definition and characterization of OGS laws, their (generalized) domains of attraction, infinite divisibility and stability, and we discuss important special cases. In Section 4, we focus on marginally OGS laws with heavy tail and finite variance components and present an OGS model for financial data. All proofs are collected in Section 5.

2. Operator stable laws

Suppose X, X_1, X_2, . . . are independent and identically distributed random vectors on R^d with common distribution μ, and that Y_0 is a random vector whose distribution ω is full, i.e., not supported on any lower dimensional hyperplane. We say that ω is operator stable (OS) if there exist linear operators A_n on R^d and nonrandom vectors b_n ∈ R^d such


that

(2.1)    A_n ∑_{i=1}^{n} (X_i − b_n) ⇒ Y_0.

In terms of measures, we can rewrite (2.1) as

(2.2)    A_n μ^n ∗ ε_{s_n} ⇒ ω,

where A_n μ(dx) = μ(A_n^{-1} dx) is the probability distribution of A_n X, μ^n is the nth convolution power, and ε_{s_n} is the unit mass at the point s_n = −n A_n b_n. In this case, we say that μ (or X) belongs to the generalized domain of attraction of ω (or Y_0), and we write μ ∈ GDOA(ω), or X ∈ GDOA(Y_0). Theorem 7.2.1 in [32] shows that the operator stable law ω is infinitely divisible and

(2.3)    ω^t = t^E ω ∗ ε_{a_t}   for all t > 0,

where E is a linear operator called an exponent of ω, t^E = exp(E log t), and exp(A) = I + A + A^2/2! + A^3/3! + · · · is the usual operator exponential. Further, the characteristic function ω̂(x) = E[e^{i⟨x, Y_0⟩}] satisfies

(2.4)    ω̂(x)^t = ω̂(t^E x) e^{i⟨a_t, x⟩}   for all t > 0;

see, e.g., [18]. If (2.1) holds with all b_n = 0, we say that μ belongs to the strict generalized domain of attraction of ω. In this case ω is strictly operator stable, that is, (2.3) holds with all a_t = 0.

3. Operator geometric stable laws

Operator geometric stable laws arise as limits of convolutions with a geometrically distributed number of terms (see [13]). Let {N_p, p ∈ (0, 1)} be a family of geometric random variables with mean 1/p, so that

(3.1)    P(N_p = n) = p(1 − p)^{n−1},   n = 1, 2, . . . .

Definition 3.1. A full random vector Y on R^d is operator geometric stable (OGS) if for N_p geometric with mean 1/p there exist i.i.d. random vectors X_1, X_2, . . . independent of N_p, linear operators A_p, and centering constants b_p such that

(3.2)    A_p ∑_{i=1}^{N_p} (X_i + b_p) ⇒ Y   as p ↓ 0.

If (3.2) holds we say that the distribution of X_1 is weakly geometrically attracted to that of Y, and the collection of such distributions is called the generalized domain of geometric attraction of Y. The following result shows that there is a one-to-one correspondence between OS and OGS laws, and provides a fundamental representation of OGS vectors in terms of their OS counterparts. It is an analog of the relation between GS and stable distributions (see, e.g., [21]).

Theorem 3.2. Let Y be a full random vector on R^d with distribution λ and characteristic function λ̂(t) = ∫ e^{i⟨t, x⟩} λ(dx). Then the following are equivalent:

(a) Y is OGS;

(b) Y =_d Z^E X + aZ (with =_d denoting equality in distribution), where X is operator stable with distribution ω satisfying (2.3), Z is standard exponential, and X, Z are independent;

(c) Y =_d L(Z), where Z is standard exponential and {L(s) : s ≥ 0} is a stochastic process with stationary independent increments, independent of Z, and such that L(1) is OS and L(0) = 0 almost surely;

(d) The distribution λ has the form

(3.3)    λ(dx) = ∫_0^∞ ω^t(dx) ν(dt),

where ω is an operator stable probability distribution on R^d and ν(dt) = e^{−t} dt;

(e) The characteristic function λ̂ has the form

(3.4)    λ̂(t) = (1 − log ω̂(t))^{-1},   t ∈ R^d,

where ω̂ is an operator stable characteristic function on R^d.
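Theorem 3.2 can be checked numerically: sample Y through representation (b) in the strict case and compare the empirical characteristic function with (3.4). The sketch below is our illustration, not part of the paper; the choices E = diag(1/2, 1), independent normal and Cauchy components for X, and the parameter values are all assumptions, giving ω̂(t, s) = exp(−σ^2 t^2 − η|s|).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma, eta = 0.8, 1.2

# Representation (b), strict case: Y = Z^E X with E = diag(1/2, 1) and
# X = (X1, X2) strictly OS with independent normal and Cauchy components.
Z = rng.standard_exponential(n)
X1 = rng.normal(scale=sigma * np.sqrt(2), size=n)  # ch.f. exp(-sigma^2 t^2)
X2 = eta * rng.standard_cauchy(n)                  # ch.f. exp(-eta |s|)
Y1, Y2 = np.sqrt(Z) * X1, Z * X2

def emp_chf(t, s):
    # the imaginary part vanishes by symmetry, so the real part suffices
    return np.mean(np.cos(t * Y1 + s * Y2))

def ogs_chf(t, s):
    # (3.4) with log ch.f. of omega equal to -(sigma^2 t^2 + eta |s|)
    return 1.0 / (1.0 + sigma**2 * t**2 + eta * abs(s))

for t, s in [(0.5, 0.0), (0.0, 1.0), (1.0, -0.7)]:
    print(t, s, emp_chf(t, s), ogs_chf(t, s))
```

With 200,000 samples the two columns agree to roughly two decimal places.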

Remark 3.3. Theorem 3.2 complements known characterizations of OGS laws that follow from more general results on geometric stability on groups. To be more specific, the equivalence of (a), (b) and (d) in Theorem 3.2 in the special case a_t = 0 in (2.3) for all t > 0 follows from Theorem 2.10 in [7]. See also Proposition 5.3 in [6] and Chapter 2.13 in [8].

3.1. Special cases. Below we list important special cases of OGS laws.

3.1.1. Strictly OGS laws. If the OS law given by the characteristic function ω̂ in (3.4) is strictly OS, then the distribution given by λ̂ is called strictly OGS. For strictly OGS laws, convergence in (3.2) holds with b_p = 0. We also have the representation

(3.5)    Y =_d Z^E X,

where Z is a standard exponential variable and X is strictly OS with exponent E (and independent of Z).

3.1.2. Geometric stable laws. When the operators in (3.2) are of the form A_p = a_p I_d, where a_p > 0 and I_d is the d-dimensional identity matrix, the limiting distributions are called geometric stable (GS) laws (see, e.g., [16, 41], and also [27] for a summary of their properties, applications, and references). The characteristic function of a GS law is of the form (3.4), where ω̂ is the characteristic function of some α-stable distribution in R^d, so that

(3.6)    λ̂(t) = (1 + I_α(t) − i⟨t, m⟩)^{-1},   t ∈ R^d,

where m ∈ R^d is the location parameter (the mean if α > 1) and

(3.7)    I_α(t) = ∫_{S_d} ω_{α,1}(⟨t, s⟩) Γ(ds).

Here, S_d is the unit sphere in R^d, Γ is a finite measure on S_d, called the spectral measure, and

(3.8)    ω_{α,β}(u) = |u|^α [1 − iβ sign(u) tan(πα/2)]        for α ≠ 1,
         ω_{α,β}(u) = |u| [1 + iβ sign(u) (2/π) log |u|]      for α = 1.
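The function (3.8) and the univariate GS characteristic function (3.9) below translate directly into code; this is our own illustrative sketch (the function names are not from the paper):

```python
import numpy as np

def omega_ab(u, alpha, beta):
    """The function (3.8) appearing in stable and GS ch.f.'s."""
    u = np.asarray(u, dtype=float)
    if alpha != 1:
        return np.abs(u)**alpha * (1 - 1j * beta * np.sign(u)
                                   * np.tan(np.pi * alpha / 2))
    # alpha = 1 branch; log|u| is taken as 0 at u = 0 so that omega(0) = 0
    safe = np.where(u != 0.0, u, 1.0)
    lg = np.where(u != 0.0, np.log(np.abs(safe)), 0.0)
    return np.abs(u) * (1 + 1j * beta * np.sign(u) * (2 / np.pi) * lg)

def gs_chf(t, alpha, beta, sigma, mu):
    """Univariate GS characteristic function (3.9)."""
    return 1.0 / (1.0 + sigma**alpha * omega_ab(t, alpha, beta)
                  - 1j * mu * np.asarray(t))
```

For example, `gs_chf(t, 2, beta, sigma, 0)` reduces to the Laplace ch.f. 1/(1 + σ^2 t^2), since tan(π · 2/2) = 0 kills the skewness term.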

When α = 2 we obtain the special case of multivariate Laplace distribution (see below), while when α < 2, the probability P (Yj > x) associated with each component of a GS random variable Y decreases like the power function x−α as x increases to infinity. As


in the stable case, the spectral measure controls the dependence among the components of Y (which are dependent even if Γ is discrete and concentrated on the intersection of S_d with the coordinate axes, in which case the coordinates of the corresponding stable vector are independent). Note that (3.5) still holds with E = 1/α. In one dimension the characteristic function (3.6) reduces to

(3.9)    λ̂(t) = E e^{itY} = (1 + σ^α ω_{α,β}(t) − iμt)^{-1},

where the parameter α is the index of stability as before, β ∈ [−1, 1] is a skewness parameter, the parameters μ ∈ R and σ > 0 control the location and the scale, respectively, and ω_{α,β} is given by (3.8). Although GS distributions have the same type of tail behavior as stable laws, their densities are more peaked near the mode. Since such sharp peaks and heavy tails are often observed in financial data, GS laws have found applications in the area of financial modeling (see, e.g., [21, 26, 35, 36, 42]).

3.1.3. Skew Laplace distributions. When the variables X_i in (3.2) are in the (classical) domain of attraction of the normal law (for example, if they have finite second moments), the characteristic function ω̂ in (3.4) corresponds to a multivariate normal distribution, so that

(3.10)    λ̂(t) = (1 + (1/2)⟨t, Σt⟩ − i⟨m, t⟩)^{-1},   t ∈ R^d,

where m ∈ R^d and Σ is a d × d non-negative definite symmetric matrix. These are multivariate Laplace distributions (see [23]). In the symmetric case (m = 0), we obtain an elliptically contoured distribution with the density

g(y) = 2 (2π)^{-d/2} |Σ|^{-1/2} (⟨y, Σ^{-1} y⟩/2)^{v/2} K_v(√(2⟨y, Σ^{-1} y⟩)),

where v = (2 − d)/2 and K_v(u) is the modified Bessel function of the third kind (see, e.g., [1]). In one dimension we obtain a univariate skew Laplace distribution (with an explicit density) studied in [22]. More information on the theory and applications of Laplace laws can be found in [14].
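The symmetric case m = 0 can be sampled directly from the scale mixture Y equal in distribution to √Z · X with Z standard exponential and X ~ N(0, Σ) (a consequence of Theorem 3.2(b) with E = I/2). A short illustrative sketch of ours, with Σ chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])      # an arbitrary covariance choice

# Symmetric multivariate Laplace (3.10) with m = 0 as the scale mixture
# Y = sqrt(Z) * X, Z standard exponential, X ~ N(0, Sigma):
# E exp(i<t, Y>) = E exp(-Z <t, Sigma t>/2) = 1 / (1 + <t, Sigma t>/2).
Z = rng.standard_exponential(n)
X = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
Y = np.sqrt(Z)[:, None] * X

# E[Y Y^T] = E[Z] * Sigma = Sigma, so the sample covariance recovers Sigma.
print(np.cov(Y.T))
```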


3.1.4. Marginally geometric stable laws. If the operators A_p in (3.2) are diagonal matrices diag(a_{p1}, . . . , a_{pd}) for some positive a_{pi}'s, then the one-dimensional marginals of the limiting OGS vector Y are geometric stable with characteristic function (3.9) and possibly different values of α. The characteristic function of Y is given by (3.4), where this time ω̂ corresponds to a marginally stable OS random vector X introduced in [43] and studied in [3, 31] (see also [37]). If the values of α for all marginal distributions are strictly less than 2, then the characteristic function of Y takes the form

(3.11)    λ̂(t) = [ 1 + ∫_{S_d} ∫_0^∞ ( e^{i⟨t, r^E s⟩} − 1 − i⟨t, r^E s⟩/(1 + ‖r^E s‖^2) ) dr/r^2 Γ(ds) − i⟨t, m⟩ ]^{-1},

where E is the diagonal matrix

(3.12)    E = diag(1/α_1, . . . , 1/α_d),   0 < α_i < 2,   i = 1, . . . , d,

called the exponent of Y, the power r^E is the diagonal matrix diag(|r|^{1/α_1}, . . . , |r|^{1/α_d}), the vector m ∈ R^d is the shift parameter, and, as for geometric stable vectors, the spectral measure Γ is a finite measure on the unit sphere S_d in R^d. As in the stable and geometric stable cases, the spectral measure determines the dependence structure among the components of a marginally GS vector. The fact that these distributions allow different tail behavior for their marginals makes them, along with marginally stable laws, attractive in financial portfolio analysis (see [36, 39]).

3.2. Divisibility and stability properties. Since OS laws are infinitely divisible, and so is the exponential distribution ν, in view of (3.3) we conclude that OGS laws are infinitely divisible as well (see, e.g., Property (e), XVII.4 of [4]). Their Lévy representation can be obtained as a special case of the result below, where ν is any infinitely divisible law on R_+. In this general case, it follows from Theorem 2, XIII.7 of [4] that the Laplace transform ν̃ of ν has the form

(3.13)    ν̃(z) = ∫_0^∞ e^{−zt} ν(dt) = exp( ∫_{[0,∞)} (e^{−zs} − 1)/s dK(s) ),

where K is nondecreasing, continuous from the right, and fulfills

(3.14)    ∫_1^∞ (1/s) dK(s) < ∞.


(When s = 0 the integrand in (3.13) is extended by continuity to equal −z.) Using this representation, we obtain the following result, which is an extension of the one-dimensional cases studied in [9] and [25].

Theorem 3.4. Let ω be a full operator stable law with exponent E and let g(s, x) denote the Lebesgue density of ω^s for any s > 0. Assume further that ν is infinitely divisible and that (3.13) holds, where dK(s) has no atom at zero. Then

λ = ∫_0^∞ ω^t ν(dt)

is infinitely divisible with Lévy representation [a, 0, φ], where

(3.15)    a = ∫_0^∞ ∫_{R^d} x/(1 + ‖x‖^2) g(s, x) dx (1/s) dK(s)

and dφ(x) = h(x) dx with

(3.16)    h(x) = ∫_0^∞ (1/s) g(s, x) dK(s).

Remark 3.5. To obtain the Lévy measure of an OGS distribution λ, use the above result with the standard exponential distribution ν, so that dK(s) = e^{−s} ds (no atom at zero!).

Remark 3.6. Under the conditions of Theorem 3.4, λ has no normal component. If dK(s) has an atom b > 0 at zero then (3.13) becomes

ν̃(z) = exp( ∫_{[0,∞)} (e^{−zs} − 1)/s dK(s) ) = exp( −bz + ∫_{(0,∞)} (e^{−zs} − 1)/s dK(s) ),

so that λ̂(ξ) = ν̃(ψ(ξ)) = e^{−bψ(ξ)} λ̂_1(ξ), where −ψ(ξ) is the log-characteristic function of ω and λ_1 is infinitely divisible with Lévy representation [a, 0, φ] as described in Theorem 3.4. If ω has Lévy representation [a_2, Q_2, φ_2] then λ is infinitely divisible with Lévy representation [a + ba_2, bQ_2, φ + bφ_2]. For example, take Z standard exponential, b > 0, ν the distribution of b + Z, ω strictly operator stable with exponent E, and X, X_1 i.i.d. with distribution ω. Then the mixture λ defined by (3.3) is the distribution of b^E X + Z^E X_1, the sum of two independent infinitely divisible laws.

Remark 3.7. Theorem 3.4 can be obtained as a special case of Theorem 30.1 in Sato [45] for subordinated Lévy processes. Since Sato uses a different form of the Lévy representation, his formula for the centering constant is different.
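As a numerical check of Theorem 3.4 and Remark 3.5 (our illustration, not part of the paper), take d = 1 and ω normal with ch.f. exp(−σ^2 t^2), so that ω^s is N(0, 2σ^2 s); then (3.16) with dK(s) = e^{−s} ds should reproduce the classical Lévy density e^{−|x|/σ}/|x| of the Laplace law (see, e.g., [14]):

```python
import numpy as np
from scipy.integrate import quad

sigma = 0.8   # arbitrary scale for the illustration

def g(s, x):
    # density of omega^s, where omega = N(0, 2 sigma^2) has ch.f.
    # exp(-sigma^2 t^2); hence omega^s = N(0, 2 sigma^2 s)
    v = 2 * sigma**2 * s
    return np.exp(-x**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def levy_density(x):
    # (3.16) with dK(s) = exp(-s) ds, i.e. nu standard exponential
    val, _ = quad(lambda s: g(s, x) * np.exp(-s) / s, 0, np.inf)
    return val

x = 1.0
print(levy_density(x), np.exp(-abs(x) / sigma) / abs(x))  # should agree
```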


3.2.1. Geometric infinite divisibility. A random vector Y (and its probability distribution) is said to be geometric infinitely divisible if for all p ∈ (0, 1) we have

(3.17)    Y =_d ∑_{i=1}^{N_p} Y_{pi},

where N_p is a geometrically distributed random variable given by (3.1), the variables Y_{pi} are i.i.d. for each p, and N_p and (Y_{pi}) are independent (see, e.g., [12]). Since geometric infinitely divisible laws arise as the weak limits of triangular arrays with a geometric number of terms in each row, it follows that the OGS distributions are geometric infinitely divisible.

Proposition 3.8. Let Y be OGS, given by the characteristic function (3.4). Then Y is geometric infinitely divisible, and the relation (3.17) holds, where the Y_{pi}'s have the characteristic function of the form

(3.18)    λ̂_p(t) = (1 − log ω̂_p(t))^{-1}.

3.2.2. Stability with respect to geometric summation. The following characterization of strictly OGS distributions extends similar properties of geometric stable and Laplace distributions (see, e.g., [12, 14, 16]).

Theorem 3.9. Let Y, Y_1, Y_2, . . . be i.i.d. random vectors in R^d, and let N_p be a geometrically distributed random variable independent of the sequence (Y_i). Then

(3.19)    S_p = A_p ∑_{i=1}^{N_p} Y_i =_d Y,   p ∈ (0, 1),

with some operators A_p on R^d if and only if Y is strictly OGS, in which case Y admits the representation (3.5) for some OS random vector X with exponent E, and A_p = p^E for p ∈ (0, 1).

Remark 3.10. Theorem 3.9 also follows from a more general result on strict geometric stability on nilpotent Lie groups; see Theorem 2.12 in [7] and Theorem 4.3 in [6]. We give a simpler proof using characteristic functions.

The above result can be somewhat strengthened if the operators in (3.19) correspond to diagonal matrices. The following result follows from Theorem 3.9 combined with a similar


result for geometric stable distributions (see [15], Theorem 3.2), when we take into account that the stability relation (3.20) holds for each coordinate of Y.

Theorem 3.11. Let Y, Y_1, Y_2, . . . be i.i.d. random vectors in R^d, and let N_p be a geometrically distributed random variable independent of the sequence (Y_i). Then

(3.20)    S_p = A_p ∑_{i=1}^{N_p} (Y_i + b_p) =_d Y,   p ∈ (0, 1),

with some diagonal Ap ’s and bp ∈ Rd if and only if Y is marginally strictly GS with the representation (3.5), where E is the diagonal matrix (3.12), Z is standard exponential variable, and X is marginally strictly stable with indices α1 , . . . , αd . Moreover, we must necessarily have bp = 0 and Ap = pE for each p. 4. Infinitely divisible laws with Laplace and Linnik marginals and an application in financial modeling Here we consider marginally geometric stable laws discussed in Section 3.1.4, whose characteristic exponent (3.12) contains some αi ’s less than two and some equal to two. For simplicity, we focus on a bivariate symmetric case with α1 = 2 and 0 < α2 < 2. It is well known that a symmetric bivariate OS r.v. X = (X1 , X2 ) with the characteristic exponent (3.12) and the above αi ’s has independent components (see [43]) with characteristic functions (4.1)

EeitX1 = e−σ

2 t2

, t ∈ R,

(normal distribution with mean 0 and variance 2σ 2 ) and (4.2)

EeisX2 = e−η

α |s|α

, s ∈ R,

(symmetric α stable with scale parameter η > 0), respectively. Consequently, the ch.f. of X = (X1 , X2 ) is (4.3)

ω ˆ (t, s) = Eei(tX1 +sX2 ) = e−σ

2 t2 −η α |s|α

, (t, s) ∈ R2 ,

and the corresponding OGS ch.f. (3.4) takes the form (4.4)

ˆ s) = λ(t,

1 , (t, s) ∈ R2 . 1 + σ 2 t2 + η α |s|α


The marginal distributions of the OGS r.v. Y = (Y_1, Y_2) with the above ch.f. are classical Laplace and symmetric geometric stable (also called Linnik) distributions with characteristic functions

(4.5)    λ̂_1(t) = E e^{itY_1} = 1/(1 + σ^2 t^2),   t ∈ R,

and

(4.6)    λ̂_2(s) = E e^{isY_2} = 1/(1 + η^α |s|^α),   s ∈ R,

respectively (see, e.g., [14]). The respective densities are

(4.7)    f_1(x) = (1/(2σ)) e^{−|x|/σ},   x ∈ R,

and

(4.8)    f_2(y) = (1/η) ∫_0^∞ z^{−1/α} p_α(y/(η z^{1/α})) e^{−z} dz
               = (sin(πα/2)/(πη)) ∫_0^∞ v^α exp(−v|y|/η) / (1 + v^{2α} + 2v^α cos(πα/2)) dv,   y ≠ 0,

where p_α is the density of the standard symmetric stable law (with ch.f. (4.2), where η = 1). We shall refer to the above distribution as an OGS law with marginal Laplace and Linnik distributions (in short, MLL distribution), denoting it by MLL_α(σ, η).
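The second integral in (4.8) is convenient numerically because it avoids the stable density p_α. A sketch of ours (α = 1.5 and η = 1 are arbitrary choices) that implements it and checks that the density integrates to about one over a long truncated range:

```python
import numpy as np
from scipy.integrate import quad

alpha, eta = 1.5, 1.0    # arbitrary illustrative choices

def linnik_pdf(y):
    """Second integral expression in (4.8) for the Linnik density, y != 0."""
    c = np.sin(np.pi * alpha / 2) / (np.pi * eta)
    den = lambda v: (1 + v**(2 * alpha)
                     + 2 * v**alpha * np.cos(np.pi * alpha / 2))
    val, _ = quad(lambda v: v**alpha * np.exp(-v * abs(y) / eta) / den(v),
                  0, np.inf)
    return c * val

# By symmetry the total mass is 2 * integral over (0, inf); truncating at
# y = 200 loses only a tail mass of order 200**(-alpha) for this heavy-
# tailed law, which is negligible here.
total, _ = quad(linnik_pdf, 0, 200, limit=200)
print(2 * total)
```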

Since both Laplace and Linnik distributions have been found useful in modeling univariate data (see, e.g., [14] and references therein), multivariate laws with these marginals will also be valuable for modeling data whose one-dimensional components exhibit both power and exponential tail behavior. Many financial data exhibit features characteristic of Laplace and Linnik laws: a high peak at the mode and relatively slowly converging tail probabilities. We first collect basic properties of bivariate MLL distributions, some of which illustrate results of previous sections, and then fit a bivariate MLL model to foreign currency exchange rates and compare its fit with that of an OS model.

4.1. Basic properties. The following representation, which follows from Theorem 3.2(b), plays an important role in studying bivariate MLL distributions.


Theorem 4.1. If Y = (Y_1, Y_2) has an MLL_α(σ, η) distribution given by the ch.f. (4.4), then

(4.9)    Y =_d (Z^{1/2} X_1, Z^{1/α} X_2),

where the variables Z, X_1, X_2 are mutually independent, Z is standard exponential, and X_1, X_2 have normal and symmetric stable distributions with ch.f.'s (4.1) and (4.2), respectively.

Our next result gives the joint density of an MLL random vector.

Theorem 4.2. The distribution function and density of Y = (Y_1, Y_2) ∼ MLL_α(σ, η) are, respectively,

(4.10)    F(y_1, y_2) = ∫_0^∞ Φ(y_1/(√(2z) σ)) Ψ_α(y_2/(z^{1/α} η)) e^{−z} dz,   (y_1, y_2) ∈ R^2,

and

(4.11)    f(y_1, y_2) = C_{σ,η} ∫_0^∞ z^{−1/2−1/α} e^{−z − y_1^2/(4σ^2 z)} p_α(y_2/(z^{1/α} η)) dz,   (y_1, y_2) ≠ (0, 0),

where Φ is the standard normal distribution function, Ψ_α and p_α are the distribution function and the density of the standard symmetric α-stable distribution with ch.f. exp(−|t|^α), and

(4.12)    C_{σ,η} = 1/(2√π σ η).

The Lévy representation of the MLL ch.f. follows from Theorem 3.4.

Theorem 4.3. The ch.f. of Y = (Y_1, Y_2) ∼ MLL_α(σ, η) admits the Lévy representation [(a_1, a_2), 0, φ], where

(4.13)    a_i = ∫_0^∞ ∫_R ∫_R x_i/(1 + x_1^2 + x_2^2) g(s, x_1, x_2) dx_1 dx_2 (1/s) e^{−s} ds,   i = 1, 2,

and dφ(x) = h(x) dx, where

(4.14)    h(x_1, x_2) = ∫_0^∞ (1/s) g(s, x_1, x_2) e^{−s} ds.

Here,

(4.15)    g(s, x_1, x_2) = (2√π σ η s^{1/2+1/α})^{-1} e^{−x_1^2/(4σ^2 s)} p_α(x_2/(s^{1/α} η)),


where p_α is the density of the standard symmetric α-stable distribution with ch.f. exp(−|t|^α).

MLL distributions have the stability property (3.20) with A_p = diag(p^{1/2}, p^{1/α}) and b_p = 0. The following result is an extension of corresponding stability properties of univariate and multivariate Laplace and Linnik distributions (see, e.g., [14]).

Theorem 4.4. Let Y, Y_1, Y_2, . . . be i.i.d. symmetric bivariate random vectors whose first components have finite variance, and let N_p be a geometrically distributed random variable independent of the sequence (Y_i). Then

(4.16)    S_p = A_p ∑_{i=1}^{N_p} (Y_i + b_p) =_d Y,   p ∈ (0, 1),

with some diagonal Ap ’s and bp ∈ Rd if and only if Y has an MLL distribution given by the ch.f. (4.4). Moreover, we must necessarily have bp = 0 and Ap = diag(p1/2 , p1/α ) for each p. Our next result shows that like Laplace and Linnik laws, the conditional distributions of Y2 |Y1 = y and of Y1 |Y2 = y are scale mixtures of stable and normal distributions, respectively. Theorem 4.5. Let Y = (Y1 , Y2 ) ∼ MLLα (σ, η). (i) The conditional distribution of Y2 |Y1 = y 6= 0 is the same as that of (U + Vy )1/α S,

(4.17)

where the variables U , Vy , and S are mutually independent, U is gamma distributed with density (4.18)

fU (x) =

1 x−1/2 e−x , x > 0, Γ(1/2)

Vy has inverse Gaussian distribution with density (4.19)

|y|e|y|/σ −3/2 − fy (x) = √ e x 2 πσ

and S is symmetric stable with the ch.f. (4.2).



y2 +x 4σ 2 x

 , x > 0,


(ii) The conditional distribution of Y_1|Y_2 = y ≠ 0 is the same as that of

(4.20)    Z_y^{1/2} X,

where X and Z_y are independent, X is normally distributed with mean zero and variance 2σ^2, and Z_y has a weighted exponential distribution with density

(4.21)    f_y(x) = ω(x) e^{−x} / ∫_0^∞ ω(x) e^{−x} dx,   x > 0.

The weight function in (4.21) is

(4.22)    ω(x) = x^{−1/α} p_α(y/(η x^{1/α})),   x > 0,

where p_α is the density of the standard symmetric α-stable distribution.

Remark 4.6. Using the results of [17], we can obtain the weighted exponential r.v. Z_y with density (4.21) from a standard exponential r.v. Z via the transformation Z_y =_d q(Z), where q = q(z) is the unique solution of the equation

(4.23)    ∫_q^∞ ω(x) e^{−x} dx = e^{−z} ∫_0^∞ ω(x) e^{−x} dx.

The following two results deal with the tail behavior and joint moments of MLL variables. In the first result we present the exact tail behavior of linear combinations of the components of an MLL r.v., showing that they are heavy tailed with the same tail index α.

Theorem 4.7. Let Y = (Y_1, Y_2) ∼ MLL_α(σ, η) and let (a, b) ∈ R^2 with a^2 + b^2 > 0. Then, as x → ∞, we have

(4.24)    P(aY_1 + bY_2 > x) ∼ (1/π) η^α |b|^α Γ(α) sin(πα/2) x^{−α}    for a ∈ R, b ≠ 0,
          P(aY_1 + bY_2 > x) ∼ (1/2) e^{−x/(|a|σ)}                      for a ≠ 0, b = 0.

Finally, we give a condition for the existence of joint moments of MLL random vectors.

Theorem 4.8. Let Y = (Y_1, Y_2) ∼ MLL_α(σ, η) and let α_1, α_2 ≥ 0. Then the joint moment E|Y_1|^{α_1} |Y_2|^{α_2} exists if and only if α_2 < α, in which case we have

(4.25)    E|Y_1|^{α_1} |Y_2|^{α_2} = 2^{α_1} σ^{α_1} η^{α_2} (1 − α_2) Γ(α_1/2 + α_2/α + 1) Γ(α_1/2 + 1/2) Γ(1 − α_2/α) / (√π (2 − α_2) cos(πα_2/2)),

where for α_2 = 1 we set (1 − α_2)/cos(πα_2/2) = 2/π.


Remark 4.9. Note that for α_2 = 0 we obtain the absolute moments of the classical Laplace distribution (see, e.g., [14]), and for α_1 = 0 we get the fractional absolute moments of a symmetric Linnik distribution:

(4.26)    e(α_2) = E|Y_2|^{α_2} = η^{α_2} (1 − α_2) Γ(α_2/α + 1) Γ(1 − α_2/α) / ((2 − α_2) cos(πα_2/2)).

The above formula is useful in estimating the parameters of Linnik laws (see, e.g., [14]).

4.2. MLL model for financial asset returns. To illustrate the modeling potential of OGS laws, we fit a portfolio of foreign exchange rates with an MLL model and compare the fit with that of an operator stable model. We use the data set of foreign exchange rates presented in [33]. The data contains 2853 daily exchange log-rates for the US Dollar versus the German Deutsche Mark (X_1) and the Japanese Yen (X_2). In order to unmask the variations in tail behavior, we transform the original vector of log-returns using the same linear transformation as in [33] to obtain

Z_1 = 0.69X_1 − 0.72X_2 and Z_2 = 0.72X_1 + 0.69X_2. The tail parameters were estimated in [33] to be 1.998 for Z_1 and 1.656 for Z_2, indicating that Z_1 fits a finite-variance model, whereas Z_2 is heavy tailed. The operator stable model assumes Z_1 normal and Z_2 stable. The OGS model fits Z_1 with a Laplace and Z_2 with a Linnik law. We estimate all parameters of the normal and Laplace models using standard maximum likelihood techniques. The Linnik and stable models are estimated using moment-type estimators. Since we assume symmetry around zero, we only need to estimate the scale of the normal and Laplace (σ in (4.5)) fits. These were 0.999 and 0.7296, respectively. Scale parameters for the stable and Linnik (η in (4.6)) distributions are estimated to be 1.404 and 1.571, respectively. The scale estimator for the Linnik distribution is based on formula (4.26). Because of symmetry, the location and skewness for the stable model are taken as zero. The scale for the stable model (η in (4.2)) is estimated based on the moment formula 1.2.17 in [44].


              Operator stable        OGS model
              Z1        Z2           Z1        Z2
    KD        0.04620   0.08825     0.02538   0.05091
    AD        2.167     0.183       0.064     0.124

Table 1. The goodness-of-fit statistics (KD, Kolmogorov distance; AD, Anderson-Darling) for the operator stable and operator geometric stable (OGS) models. In the stable model, the variables Z1 and Z2 have normal and stable distributions, respectively. In the OGS model, they have Laplace and Linnik laws, respectively.

The goodness-of-fit was assessed using the Kolmogorov distance (KD) and the Anderson-Darling (AD) statistics for the fitted marginals. The former is defined as

KD = sup_x |F(x) − F_n(x)|,

where F_n and F are the empirical and the fitted distribution functions, respectively. The latter statistic is

AD = sup_x |F(x) − F_n(x)| / √(F(x)(1 − F(x))),

and measures the goodness-of-fit in the tails. The results are summarized in Table 1.
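Both statistics are straightforward to compute at the jump points of the empirical distribution function; for a continuous fitted F the supremum defining KD is attained there, while for the sup-type AD this is an approximation. A sketch (the function name is ours):

```python
import numpy as np

def kd_ad(sample, cdf):
    """KD and sup-type AD statistics for a fitted continuous cdf,
    evaluated at the jump points of the empirical distribution."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    F = cdf(x)
    lo, hi = np.arange(n) / n, np.arange(1, n + 1) / n  # F_n just below / at x_i
    d = np.maximum(np.abs(F - lo), np.abs(F - hi))
    kd = d.max()
    with np.errstate(divide="ignore", invalid="ignore"):
        ad = np.max(d / np.sqrt(F * (1 - F)))
    return kd, ad

kd, ad = kd_ad([0.25, 0.5, 0.75], lambda x: x)   # fit: uniform(0, 1) cdf
print(kd, ad)
```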

The KD statistics for Z_1 and Z_2 under the OGS model are about half of those under the operator stable model. The AD statistics are also smaller under the OGS model, indicating that the OGS distribution provides a better fit to this data. Figures 1 and 2 compare the fit of both distributions to the data. It is clear from the graphs that the OGS model fits the sharp central peak better than the OS model. In conclusion, both quantitative and graphical evidence show that OGS laws have high potential in modeling and in this case outperform the best OS models.

Remark 4.10. From the viewpoint of risk management, the investor is interested in the distribution of the original log-returns X_1 and X_2. Since the X_i's are linear functions of


the Z_i's, we can recover their density by a change of variables in the density of the Z_i's. In the above MLL model, the joint density of X_1 and X_2 is estimated as

g(x_1, x_2) = f(0.69x_1 − 0.72x_2, 0.72x_1 + 0.69x_2),

where f is the MLL density (4.11) with α = 1.656, σ = 0.7296, and η = 1.571.

Remark 4.11. To compare the fits of the stable and Linnik models, for consistency we used the same method of moments to estimate the parameters, although for the stable parameters maximum likelihood estimators (MLE's) are also available (see, e.g., [2, 29] for the symmetric case and [30, 38] for the skew case). Numerical routines for stable MLE's are available at John P. Nolan's website (http://academic2.american.edu/~jpnolan/), and also from Bravo Risk Management Group. We used MLE's to estimate the stable parameters of Z_2 in the operator stable case as well, obtaining essentially the same results as above; the goodness-of-fit statistics for Z_2 were KD = 0.0774 and AD = 0.183.

Figure 1. Histogram of Z1 with pdf's of normal (dashed line) and Laplace (solid line) models.

5. Appendix

5.1. Auxiliary results. The following two results, taken from [18], will be needed to prove Theorem 3.2.

Theorem 5.1. Suppose that X ∈ GDOA(Y_0) and (2.1) holds. If N_n are positive integer-valued random variables independent of (X_i) with N_n → ∞ in probability, and if N_n/k_n ⇒ Z for some random variable Z > 0 with distribution ν and some sequence of positive integers (k_n) tending to infinity, then

(5.1)    A_{k_n} ∑_{i=1}^{N_n} (X_i − b_{k_n}) ⇒ Y,

Figure 2. Histogram of Z2 with pdf's of stable (dashed line) and Linnik (solid line) models.

where Y has distribution

(5.2)    λ(dx) = ∫_0^∞ ω^t(dx) ν(dt)

and ω is the distribution of Y0 . Theorem 5.2. Suppose that (Xi ) are independent, identically distributed random vectors on Rd , Mn are positive integer-valued random variables independent of (Xi ) with Mn → ∞ in probability, and (5.3)

Bn

Mn X

(Xi − an ) ⇒ Y

i=1

for some random vector Y with distribution λ and some linear operators Bn on Rd and centering constants an ∈ Rd . Then there exists a sequence of positive integers (kn ) tending to infinity such that for any subsequence (n0 ) there exists a further subsequence (n00 ), a

21

random variable Z > 0 with distribution ν, and a random vector Y0 with distribution ω such that Mn00 /kn00 ⇒ Z,

(5.4)

Bn00

kn00 X

(Xi − an00 ) ⇒ Y0 ,

i=1

and (5.2) holds.

Remark 5.3. Theorem 5.1 is also called Gnedenko's transfer theorem and has been generalized to various algebraic structures, including locally compact groups; see Section 1 in [5] and Chapter 2.12 in [8] for details. The assertion of Theorem 5.2 is also known as Szasz's compactness theorem; see [46] for the real-valued case. It has been generalized to nilpotent Lie groups in [7].

5.2. Proof of Theorem 3.2. If (d) holds, take Z a random variable with distribution ν and let Np be a geometric random variable with mean 1/p. Then Np → ∞ a.s. and pNp ⇒ Z as p → 0. Take Y0, X1, X2, ... i.i.d. as ω. Since ω is operator stable, (2.3) shows that

$X_1 + \cdots + X_n \overset{d}{=} n^E Y_0 + a_n,$

so that (2.1) holds with $A_n = n^{-E}$ and $b_n = n^{-1}a_n$. Given a sequence pn → 0, let kn = [1/pn] and write Nn = Npn, so that kn → ∞ and

$\frac{N_n}{k_n} = \frac{p_n N_{p_n}}{p_n [1/p_n]} \Rightarrow Z,$

where Z is standard exponential. Then Theorem 5.1 shows that (5.1) holds, where the limit Y has distribution (5.2). Conditioning on the value of Z and using (2.3) shows that (b) holds. Since this is true for any sequence pn → 0, (a) also holds.

If (d) holds, take Z standard exponential and {L(s) : s > 0} a stochastic process with stationary independent increments, independent of Z, such that L(s) has characteristic function $\hat\omega(t)^s$. Then L(Z) has characteristic function

(5.5)  $E[E[e^{i\langle t, L(Z)\rangle} \mid Z]] = \int_0^\infty \hat\omega(t)^s e^{-s}\,ds = \hat\lambda(t),$

so that (c) and (d) are equivalent. Let ψ be the log-characteristic function of ω as in Definition 3.1.1 of [32], so that $\hat\omega(t)^s = e^{s\psi(t)}$. Then (5.5) implies

(5.6)  $\hat\lambda(t) = \int_0^\infty e^{s\psi(t)} e^{-s}\,ds = \frac{1}{1-\psi(t)} = \frac{1}{1-\log\hat\omega(t)},$

so that (d) and (e) are equivalent. Note that Re(ψ(t)) ≤ 0 since $|e^{\psi(t)}| \le 1$.

To see that (b) implies (d), choose any Borel set M ⊂ Rd and use (2.4) to compute

$P\{Y \in M\} = P\{Z^E X + a_Z \in M\} = \int_0^\infty P\{Z^E X + a_Z \in M \mid Z = t\}\,\nu(dt) = \int_0^\infty \left(t^E \omega * a_t\right)(M)\,\nu(dt) = \int_0^\infty \omega^t(M)\,\nu(dt),$

so that (d) holds.

Finally, we show that (a) implies (e). The proof is similar to the special case of geometric stable laws (see [34]). Conditioning on Np in (3.2) and taking characteristic functions, we obtain

(5.7)  $\frac{p\hat\mu_p(t)}{1-(1-p)\hat\mu_p(t)} \to \hat\lambda(t) \quad \text{as } p \downarrow 0,$

where $\hat\mu_p(t) = \hat\mu(A_p^* t) * \varepsilon_{A_p b_p}$ is the characteristic function of $A_p(X + b_p)$ and $\hat\lambda$ is the characteristic function of Y. The relation (5.7) can be written equivalently as

(5.8)  $1 - \frac{1-(1-p)\hat\mu_p(t)}{p\hat\mu_p(t)} \to 1 - \frac{1}{\hat\lambda(t)} \quad \text{as } p \downarrow 0,$

assuming that $\hat\lambda(t) \neq 0$, which we will verify at the end of the proof. Simplifying, we obtain

(5.9)  $\frac{\hat\mu_p(t)-1}{p\hat\mu_p(t)} \to 1 - \frac{1}{\hat\lambda(t)} \quad \text{as } p \downarrow 0,$

and also

(5.10)  $\frac{\hat\mu_p(t)-1}{\hat\mu_p(t)} \to 0 \quad \text{as } p \downarrow 0.$

Now, by (5.10), we conclude that $\hat\mu_p(t) \to 1$, which combined with (5.9) produces

(5.11)  $\frac{\hat\mu_p(t)-1}{p} \to 1 - \frac{1}{\hat\lambda(t)} \quad \text{as } p \downarrow 0.$

Letting p = 1/n, (5.11) takes the form

(5.12)  $n\left(\hat\mu(A_{1/n}^* t) * \varepsilon_{A_{1/n} b_{1/n}} - 1\right) \to 1 - \frac{1}{\hat\lambda(t)},$

which implies that

(5.13)  $\left(\hat\mu(A_{1/n}^* t) * \varepsilon_{A_{1/n} b_{1/n}}\right)^n \to \exp\left(1 - \frac{1}{\hat\lambda(t)}\right).$
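In the one-dimensional symmetric Laplace case, the limit in (5.13) can be checked in closed form: with $\hat\lambda(t) = 1/(1+\sigma^2 t^2)$, the right-hand side $\exp(1 - 1/\hat\lambda(t))$ reduces to the normal characteristic function $\exp(-\sigma^2 t^2)$. A small numerical sketch:

```python
import numpy as np

# One-dimensional sanity check of the limit in (5.13) (sketch):
# for the Laplace ch.f. lam(t) = 1/(1 + sigma^2 t^2), the limit
# exp(1 - 1/lam(t)) equals the normal ch.f. exp(-sigma^2 t^2),
# so the right-hand side of (5.13) is indeed a stable ch.f.
sigma = 0.8
t = np.linspace(-5, 5, 101)
lam = 1 / (1 + sigma**2 * t**2)
lhs = np.exp(1 - 1 / lam)
rhs = np.exp(-(sigma * t) ** 2)
print(np.max(np.abs(lhs - rhs)))   # zero up to rounding
```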

Since the left-hand side of (5.13) is the characteristic function of the centered and operator-normalized partial sums of a sequence of i.i.d. random vectors, the right-hand side of (5.13) must be the characteristic function $\hat\omega$ of some operator stable law. Solving

(5.14)  $\exp\left(1 - \frac{1}{\hat\lambda(t)}\right) = \hat\omega(t)$

for $\hat\lambda(t)$ shows that (e) holds. To verify that $\hat\lambda(t) \neq 0$, take an arbitrary sequence pn → 0 and apply (3.2) to see that (5.3) holds with $M_n = N_{p_n}$, $B_n = A_{p_n}$ and $a_n = -b_{p_n}$. Then Theorem 5.2 implies that there exists a sequence of positive integers (kn) tending to infinity such that for any subsequence (n′) there exist a further subsequence (n″), a random variable Z > 0 with distribution ν, and a random vector Y0 with distribution ω such that $N_{p_{n''}}/k_{n''} \Rightarrow Z$ and (5.4) holds, where ω is related to λ via (5.2). Since the limit Y0 in (5.4) is the weak limit of a triangular array, ω is infinitely divisible (cf. Theorem 3.3.4 in [32]). Since $N_{p_{n''}}/k_{n''} \Rightarrow Z$, the law ν of Z is infinitely divisible as the weak limit of infinitely divisible laws. Since (5.2) holds with both ω and ν infinitely divisible, λ is also infinitely divisible (see, e.g., Property (e), XVII of [4]). It then follows from the Lévy representation (cf. Theorem 3.1.11 in [32]) that $\hat\lambda(t) \neq 0$, so that the right-hand side in (5.8) is well-defined.

5.3. Proof of Theorem 3.4. Let $\hat\omega(\xi) = e^{-\psi(\xi)}$, where $-\psi(\xi)$ is the log-characteristic function of ω. Then

(5.15)  $\hat\lambda(\xi) = \int_0^\infty \hat\omega(\xi)^t\,\nu(dt) = \int_0^\infty e^{-t\psi(\xi)}\,\nu(dt) = \tilde\nu(\psi(\xi)).$

Moreover, since $e^{-s\psi(\xi)}$ is the characteristic function of $\omega^s$, we have

$e^{-s\psi(\xi)} = \int_{\mathbb{R}^d} e^{i\langle x,\xi\rangle} g(s,x)\,dx.$

Hence, by (3.13) and (5.15), we have

$\hat\lambda(\xi) = \exp\left(\int_0^\infty \left(e^{-s\psi(\xi)} - 1\right)\frac{1}{s}\,dK(s)\right) = \exp\left(\int_0^\infty \int_{\mathbb{R}^d} \left(e^{i\langle x,\xi\rangle} - 1\right) g(s,x)\,dx\,\frac{1}{s}\,dK(s)\right).$

Note that

$F(\xi) = \int_0^\infty \left(e^{-s\psi(\xi)} - 1\right)\frac{1}{s}\,dK(s) = \int_0^\infty \int_{\mathbb{R}^d} \left(e^{i\langle x,\xi\rangle} - 1\right) g(s,x)\,dx\,\frac{1}{s}\,dK(s)$

exists and is the log-characteristic function of the infinitely divisible law λ. Write

$F(\xi) = \int_0^\infty \int_{\mathbb{R}^d} \left(e^{i\langle x,\xi\rangle} - 1 - \frac{i\langle x,\xi\rangle}{1+\|x\|^2}\right) g(s,x)\,dx\,\frac{1}{s}\,dK(s) + i\langle a,\xi\rangle = I(\xi) + i\langle a,\xi\rangle,$

where a is given by (3.15). We will show below that I(ξ) exists for all ξ ∈ Rd; then, since F(ξ) exists, it follows that a ∈ Rd exists. Now let

$h(x,\xi) = e^{i\langle x,\xi\rangle} - 1 - \frac{i\langle x,\xi\rangle}{1+\|x\|^2}$

and note that $|h(x,\xi)| \le C_1\|x\|^2$ for $\|x\| \le 1$ and $|h(x,\xi)| \le C_2$ for all x ∈ Rd, where C1 and C2 are some constants. In order to show that I(ξ) exists, it suffices to show that

(5.16)  $\int_0^\infty \int_{\mathbb{R}^d} |h(x,\xi)|\, g(s,x)\,dx\,\frac{1}{s}\,dK(s) < \infty.$

For δ > 0 write the LHS of (5.16) as

$\int_0^\delta \int_{\mathbb{R}^d} |h(x,\xi)|\, g(s,x)\,dx\,\frac{1}{s}\,dK(s) + \int_\delta^\infty \int_{\mathbb{R}^d} |h(x,\xi)|\, g(s,x)\,dx\,\frac{1}{s}\,dK(s) = I_1 + I_2.$

In view of (3.14), we have

$I_2 \le \int_\delta^\infty \int_{\mathbb{R}^d} C_2\, g(s,x)\,dx\,\frac{1}{s}\,dK(s) = C_2 \int_\delta^\infty \frac{1}{s}\,dK(s) < \infty.$

On the other hand,

$\frac{1}{s}\int_{\mathbb{R}^d} |h(x,\xi)|\, g(s,x)\,dx = \frac{1}{s}\int_{\mathbb{R}^d} |h(x,\xi)|\,d\omega^s(x) \le \frac{1}{s}\int_{\mathbb{R}^d} f(x)\,d\omega^s(x),$

where f is a bounded $C^\infty$-function such that f(0) = 0 and $|h(x,\xi)| \le f(x)$ for all x ∈ Rd. Note that f ∈ D(A), where A is the generator of the continuous convolution semigroup $(\omega^t)_{t>0}$. Hence

$\lim_{s\downarrow 0} \frac{1}{s}\int_{\mathbb{R}^d} f(x)\,d\omega^s(x) = A(f).$

Therefore, for some δ > 0, we have

$\frac{1}{s}\int_{\mathbb{R}^d} f(x)\,d\omega^s(x) \le M$

for all 0 < s ≤ δ. Consequently,

$I_1 \le M \int_0^\delta dK(s) = M(K(\delta) - K(0)) < \infty,$

and (5.16) follows. Since I(ξ) exists, it follows from Fubini's theorem that

$I(\xi) = \int_{\mathbb{R}^d} \left(e^{i\langle x,\xi\rangle} - 1 - \frac{i\langle x,\xi\rangle}{1+\|x\|^2}\right) \left(\int_0^\infty \frac{1}{s}\, g(s,x)\,dK(s)\right) dx = \int_{\mathbb{R}^d} \left(e^{i\langle x,\xi\rangle} - 1 - \frac{i\langle x,\xi\rangle}{1+\|x\|^2}\right) h(x)\,dx,$

where

$h(x) = \int_0^\infty \frac{1}{s}\, g(s,x)\,dK(s)$

exists. Therefore, the log-characteristic function F of λ has the form

$F(\xi) = i\langle a,\xi\rangle + \int_{\mathbb{R}^d} \left(e^{i\langle x,\xi\rangle} - 1 - \frac{i\langle x,\xi\rangle}{1+\|x\|^2}\right) d\phi(x),$

where $d\phi(x) = h(x)\,dx$. This concludes the proof.

5.4. Proof of Proposition 3.8. Writing the relation (3.17) in terms of characteristic functions, we obtain

(5.17)  $\frac{p\hat\lambda_p(t)}{1-(1-p)\hat\lambda_p(t)} = \hat\lambda(t), \quad p \in (0,1),\ t \in \mathbb{R}^d,$

where $\hat\lambda$ and $\hat\lambda_p$ are the characteristic functions of Y and $Y_{p,i}$, respectively. Substituting (3.18) into (5.17) and noting that $\hat\lambda(t) = (1 - \log\hat\omega(t))^{-1}$, we immediately verify that (5.17) holds.
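The divisibility identity (5.17) can be sanity-checked numerically. A sketch, assuming (as in the strict case) that the components have characteristic function $\hat\lambda_p(t) = (1 - p\log\hat\omega(t))^{-1}$; the value u below stands for $\log\hat\omega(t)$:

```python
# Numerical check of the geometric divisibility identity (5.17) (sketch):
# assuming the components have ch.f. lam_p = (1 - p*u)^{-1} with
# u = log w(t), Re(u) <= 0, the geometric compound of lam_p
# recovers lam = (1 - u)^{-1}.
for p in (0.5, 0.1, 0.01):
    for u in (-0.3, -1.0 + 0.4j, -2.5 - 1.1j):
        lam, lam_p = 1 / (1 - u), 1 / (1 - p * u)
        lhs = p * lam_p / (1 - (1 - p) * lam_p)
        assert abs(lhs - lam) < 1e-12
print("identity (5.17) verified")
```

The assertion holds exactly, since algebraically $p\hat\lambda_p/(1-(1-p)\hat\lambda_p) = p/(p - pu) = (1-u)^{-1}$.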

5.5. Proof of Theorem 3.9. Assume that Y is strictly OGS, so that the representation (3.5) holds for some OS random variable X with exponent E. Conditioning on Np, we write the characteristic function of the LHS in (3.19) as follows:

(5.18)  $E[e^{i\langle t, S_p\rangle}] = \sum_{n=1}^\infty E\big[e^{i\langle t, A_p\sum_{i=1}^n Y_i\rangle}\big](1-p)^{n-1}p = \sum_{n=1}^\infty [\hat\lambda(A_p^* t)]^n (1-p)^{n-1} p = \frac{p\hat\lambda(A_p^* t)}{1-(1-p)\hat\lambda(A_p^* t)},$

where $\hat\lambda$ is the characteristic function of Y. Now, since $\hat\lambda(t) = (1 - \log\hat\omega(t))^{-1}$, where $\hat\omega$ is the characteristic function of a strictly OS law, the above equals

(5.19)  $\frac{p\hat\lambda(A_p^* t)}{1-(1-p)\hat\lambda(A_p^* t)} = \frac{p[1-\log\hat\omega(A_p^* t)]^{-1}}{1-(1-p)[1-\log\hat\omega(A_p^* t)]^{-1}} = \frac{1}{1 - \frac{1}{p}\log\hat\omega(A_p^* t)}.$

Substituting $A_p = p^E$ into (5.19), we obtain the characteristic function of Y, since the OS characteristic function $\hat\omega$ satisfies the relation $\hat\omega(p^{E^*} t) = [\hat\omega(t)]^p$ for each p > 0.

Conversely, assume that the relation (3.19) holds. Then, by definition, Y must be OGS with characteristic function $\hat\lambda$ of the form (3.4) for some OS characteristic function $\hat\omega$. Following the above calculations, we write relation (3.19) in terms of characteristic functions as follows:

(5.20)  $\frac{p\hat\lambda(A_p^* t)}{1-(1-p)\hat\lambda(A_p^* t)} = \hat\lambda(t), \quad p \in (0,1),\ t \in \mathbb{R}^d.$

Substituting $\hat\lambda(t) = (1 - \log\hat\omega(t))^{-1}$, we obtain the following relation for $\hat\omega$:

(5.21)  $\hat\omega(A_p^* t) = [\hat\omega(t)]^p, \quad p \in (0,1),\ t \in \mathbb{R}^d,$

which holds only for a strictly OS characteristic function $\hat\omega$ with some exponent E and $A_p = p^E$.
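For the simplest one-dimensional instance of the stability relation (3.19), the symmetric Laplace law is strictly geometric stable with $A_p = p^{1/2}$, so the normalized geometric sum reproduces the law of a single summand. A Monte Carlo sketch (scale parameter 1 assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# One-dimensional illustration of (3.19) (sketch, scale 1 assumed):
# the symmetric Laplace law is strictly geometric stable, so
# sqrt(p) * (Y_1 + ... + Y_{N_p}) has the same law as Y_1 when
# N_p is geometric with mean 1/p and the Y_i are i.i.d. Laplace.
p, reps = 0.02, 10000
n = rng.geometric(p, size=reps)                  # N_p, mean 1/p
s = np.array([np.sqrt(p) * rng.laplace(0, 1, k).sum() for k in n])
print(s.mean(), s.var())   # Laplace(0,1) has mean 0 and variance 2
```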

5.6. Proof of Theorem 4.2. To obtain (4.10), use the representation (4.9) coupled with the independence of X1 and X2, and apply a simple conditioning argument:

(5.22)  $F(y_1, y_2) = \int_0^\infty P(Y_1 \le y_1, Y_2 \le y_2 \mid Z = z)\, e^{-z}\,dz = \int_0^\infty P\left(X_1 \le \frac{y_1}{\sqrt{z}}\right) P\left(X_2 \le \frac{y_2}{z^{1/\alpha}}\right) e^{-z}\,dz.$

Since X1 and X2 have normal and stable laws with ch.f.'s (4.1) and (4.2), respectively, we obtain (4.10). To obtain (4.11), differentiate the above function with respect to y1 and y2. Alternatively, apply the standard transformation theorem for functions of random vectors to obtain the density of Y directly from the joint density of Z, X1, and X2.

5.7. Proof of Theorem 4.5. We start with Part (i). Proceeding as in the proof of relation (5.1.7) from [44], we obtain the following expression for the ch.f. of the conditional distribution of Y2 | Y1 = y:

(5.23)  $\hat\lambda_{2|1}(s) = E(e^{isY_2} \mid Y_1 = y) = \frac{\int_{\mathbb{R}} e^{-ity}\,\hat\lambda(t,s)\,dt}{2\pi f_1(y)},$

where $\hat\lambda$ is the joint ch.f. (4.4) and f1 is the marginal (Laplace) density of Y1 given by (4.7). Upon factoring $\hat\lambda$ as

(5.24)  $\hat\lambda(t,s) = \frac{1}{1+\eta^\alpha|s|^\alpha}\cdot\frac{1}{1+\sigma_s^2 t^2},$

where

(5.25)  $\sigma_s = \frac{\sigma}{\sqrt{1+\eta^\alpha|s|^\alpha}},$

we apply the Fourier inversion formula

(5.26)  $\frac{1}{2\pi}\int_{\mathbb{R}} e^{-ity}(1+\sigma_s^2 t^2)^{-1}\,dt = \frac{1}{2\sigma_s}\, e^{-|y|/\sigma_s},$

obtaining after some algebra

(5.27)  $\hat\lambda_{2|1}(s) = \frac{1}{\sqrt{1+\eta^\alpha|s|^\alpha}}\, e^{-\frac{|y|}{\sigma}\left(\sqrt{1+\eta^\alpha|s|^\alpha}-1\right)}.$

To finish the proof, apply the representation of ν-stable random variables (see, e.g., [20], Theorem 3.1), noting that

(5.28)  $\hat\lambda_{2|1}(s) = g(-\log\hat\phi(s)),$

where

(5.29)  $g(s) = \frac{1}{\sqrt{1+s}}\, e^{-\frac{|y|}{\sigma}\left(\sqrt{1+s}-1\right)}$

is the Laplace transform of the distribution of $U + V_y$, while $\hat\phi$ is the ch.f. of the α-stable r.v. X2 given by (4.2).

We now move to Part (ii). The characteristic function of the conditional distribution of Y1 given Y2 = y is

(5.30)  $\hat\lambda_{1|2}(t) = E(e^{itY_1} \mid Y_2 = y) = \frac{\int_{\mathbb{R}} e^{itu} f(u,y)\,du}{f_2(y)},$

where f is the joint density (4.11) of Y1 and Y2 and f2 is the marginal Linnik density (4.8) of Y2. Substituting these into (5.30) and changing the order of integration, we obtain after some elementary algebra

(5.31)  $\hat\lambda_{1|2}(t) = \frac{\int_{\mathbb{R}} e^{-zt^2\sigma^2}\,\omega(z)\,e^{-z}\,dz}{\int_{\mathbb{R}} \omega(z)\,e^{-z}\,dz},$

with ω as in (4.22). Thus, we have

(5.32)  $\hat\lambda_{1|2}(t) = h(-\log\hat\phi(t)),$

where h is the Laplace transform of the positive r.v. with density (4.21) and $\hat\phi$ is the normal ch.f. (4.1). By the representation of ν-stable r.v.'s cited above, we obtain the variance mixture (4.20) of normal distributions. This concludes the proof.

5.8. Proof of Theorem 4.7. For b = 0 and a ≠ 0 we obtain a Laplace variable with survival function

$P(aY_1 > x) = \frac{1}{2}\, e^{-\frac{x}{|a|\sigma}}, \quad x > 0.$

For b ≠ 0 we note that the power tail of the Linnik variable bY2 dominates the exponential tail of aY1:

$\lim_{x\to\infty} \frac{P(aY_1 > x)}{P(bY_2 > x)} = 0.$

Consequently, the tail behavior of aY1 + bY2 is the same as that of bY2 (see Lemma 4.4.2 of [44]). The latter follows from more general results for univariate ν-stable laws (see, e.g., [19]).
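The tail-domination step in the proof of Theorem 4.7 is easy to see numerically: an exponential tail divided by any power tail tends to 0. A sketch with illustrative stand-in constants (not the paper's exact tail constants):

```python
import math

# Tail comparison behind Theorem 4.7 (sketch): an exponential tail
# 0.5*exp(-x/sigma) is eventually negligible relative to any power
# tail C*x**(-alpha).  The constants are illustrative stand-ins
# for the Laplace and Linnik tails.
sigma, alpha, C = 1.0, 1.5, 1.0
ratio = lambda x: (0.5 * math.exp(-x / sigma)) / (C * x ** -alpha)
print([ratio(x) for x in (5, 20, 50)])   # decreasing toward 0
```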

5.9. Proof of Theorem 4.8. Apply the representation (4.9) to obtain

(5.33)  $|Y_1|^{\alpha_1}|Y_2|^{\alpha_2} \overset{d}{=} Z^{\frac{\alpha_1}{2}+\frac{\alpha_2}{\alpha}}\, |X_1|^{\alpha_1}|X_2|^{\alpha_2}.$

Since all positive absolute moments of Z and X1 exist, it is clear that the joint absolute moment of Y1 and Y2 exists if and only if the absolute moment of X2 of order α2 exists. The latter exists if and only if α2 < α and equals (see, e.g., [44])

(5.34)  $E|X_2|^{\alpha_2} = \frac{\eta^{\alpha_2}(1-\alpha_2)\,\Gamma\!\left(1-\frac{\alpha_2}{\alpha}\right)}{(2-\alpha_2)\cos\frac{\pi\alpha_2}{2}}.$

The moments of the exponential and normal distributions are straightforward to compute and well known:

(5.35)  $E Z^{\frac{\alpha_1}{2}+\frac{\alpha_2}{\alpha}} = \Gamma\!\left(\frac{\alpha_1}{2}+\frac{\alpha_2}{\alpha}+1\right),$

(5.36)  $E|X_1|^{\alpha_1} = \frac{1}{\sqrt{\pi}}\, 2^{\alpha_1}\sigma^{\alpha_1}\,\Gamma\!\left(\frac{\alpha_1}{2}+\frac{1}{2}\right).$

The result follows.
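The moment formulas (5.35) and (5.36) can be sanity-checked by simulation, assuming Z is standard exponential and X1 ~ N(0, 2σ²) (the normal law matching (5.36)); the parameter values below are arbitrary test choices:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo check of (5.35) and (5.36), assuming Z ~ Exp(1) and
# X1 ~ N(0, 2*sigma^2); parameter values are arbitrary test choices.
sigma, a1, a2, alpha = 1.3, 1.2, 0.4, 1.6
n = 400000
z = rng.exponential(size=n)
x1 = rng.normal(0, sigma * math.sqrt(2), size=n)

mz = np.mean(z ** (a1 / 2 + a2 / alpha))
mz_exact = math.gamma(a1 / 2 + a2 / alpha + 1)          # (5.35)

mx = np.mean(np.abs(x1) ** a1)
mx_exact = 2**a1 * sigma**a1 * math.gamma(a1 / 2 + 0.5) / math.sqrt(math.pi)  # (5.36)

print(mz, mz_exact, mx, mx_exact)
```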

ACKNOWLEDGMENTS

We thank the anonymous referee and the editors for helpful comments. We also thank Stoyan Stoyanov of Bravo Risk Management Group for providing us with numerical routines for stable MLE's and John P. Nolan for making his routines available on his website.

References

[1] Abramowitz, M. and Stegun, I.A. (1965) Handbook of Mathematical Functions, Dover Publications, New York.
[2] Brorsen, W.B. and Yang, S.R. (1990) Maximum likelihood estimates of symmetric stable distribution parameters, Comm. Statist. Simulation Comput. 19(4), 1459-1464.
[3] Cambanis, S. and Taraporevala, A. (1994) Infinitely divisible distributions with stable marginals, preprint.
[4] Feller, W. (1971) An Introduction to Probability Theory and Its Applications, Vol. 2 (2nd ed.), Wiley, New York.


[5] Hazod, W. (1994) On the limit behavior of products of a random number of group valued random variables, Theory Probab. Appl. 39, 374–394. English version: Theory Probab. Appl. 32/2, 249–263 (1995).
[6] Hazod, W. (1995) On geometric convolutions of distributions of group-valued random variables, in Probability Measures on Groups and Related Structures, Proceedings Oberwolfach 1994, 167–181, World Scientific.
[7] Hazod, W. and Khokhlov, Yu. S. (1996) On Szasz's compactness theorem and applications to geometric stability on groups, Prob. Math. Stat. 16, 143–156.
[8] Hazod, W. and Siebert, E. (2001) Stable Probability Measures on Euclidean Spaces and on Locally Compact Groups, Kluwer Academic Publishers.
[9] Huff, B.V. (1969) The strict subordination of differential processes, Sankhya Ser. A 31, 403-412.
[10] Jurek, Z. and Mason, J.D. (1993) Operator-Limit Distributions in Probability Theory, Wiley, New York.
[11] Kalashnikov, V. (1997) Geometric Sums: Bounds for Rare Events with Applications, Kluwer Acad. Publ., Dordrecht.
[12] Klebanov, L.B., Maniya, G.M., and Melamed, I.A. (1984) A problem of Zolotarev and analogs of infinitely divisible and stable distributions in a scheme for summing a random number of random variables, Theory Probab. Appl. 29, 791-794.
[13] Klebanov, L.B. and Rachev, S.T. (1996) Sums of random number of random variables and their approximations with ν-accompanying infinitely divisible laws, Serdica Math. J. 22, 471-496.
[14] Kotz, S., Kozubowski, T.J. and Podgórski, K. (2001) The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance, Birkhäuser, Boston.
[15] Kozubowski, T.J. (1994) The inner characterization of geometric stable laws, Statist. Decisions 12, 307-321.
[16] Kozubowski, T.J. (1997) Characterization of multivariate geometric stable distributions, Statist. Decisions 15, 397-416.
[17] Kozubowski, T.J. (2001) A general functional relation between a random variable and its length biased counterpart, Technical Report No. 52, Department of Mathematics, University of Nevada at Reno.
[18] Kozubowski, T.J., Meerschaert, M.M. and Scheffler, H.P. (2003) The operator ν-stable laws, Publ. Math. Debrecen (in press).
[19] Kozubowski, T.J. and Panorska, A.K. (1996) On moments and tail behavior of ν-stable random variables, Statist. Probab. Lett. 29, 307–315.
[20] Kozubowski, T.J. and Panorska, A.K. (1998) Weak limits for multivariate random sums, J. Multivariate Anal. 67, 398–413.


[21] Kozubowski, T.J. and Panorska, A.K. (1999) Multivariate geometric stable distributions in financial applications, Math. Comput. Modelling 29, 83-92.
[22] Kozubowski, T.J. and Podgórski, K. (2000) Asymmetric Laplace distributions, Math. Sci. 25, 37-46.
[23] Kozubowski, T.J. and Podgórski, K. (2000) A multivariate and asymmetric generalization of Laplace distribution, Comput. Statist. 15, 531-540.
[24] Kozubowski, T.J. and Podgórski, K. (2001) Asymmetric Laplace laws and modeling financial data, Math. Comput. Modelling 34, 1003-1021.
[25] Kozubowski, T.J., Podgórski, K. and Samorodnitsky, G. (1998) Tails of Lévy measure of geometric stable random variables, Extremes 1(3), 367-378.
[26] Kozubowski, T.J. and Rachev, S.T. (1994) The theory of geometric stable distributions and its use in modeling financial data, European J. Oper. Res. 74, 310-324.
[27] Kozubowski, T.J. and Rachev, S.T. (1999) Multivariate geometric stable laws, J. Comput. Anal. Appl. 1(4), 349-385.
[28] Madan, D.B., Carr, P.P. and Chang, E.C. (1998) The variance gamma process and option pricing, European Finance Rev. 2, 79-105.
[29] McCulloch, J.H. (1998) Linear regression with stable disturbances, in A Practical Guide to Heavy Tails: Statistical Techniques and Applications, (Eds., R.J. Adler, R.E. Feldman and M. Taqqu), pp. 359-376, Birkhäuser, Boston.
[30] Mittnik, S., Rachev, S.T., Doganoglu, T. and Chenyao, D. (1999) Maximum likelihood estimation of stable Paretian models, Math. Comput. Modelling 29, 275-293.
[31] Meerschaert, M. (1991) Regular variation in Rk and vector-normed domains of attraction, Statist. Probab. Lett. 11, 287–289.
[32] Meerschaert, M.M. and Scheffler, H.P. (2001) Limit Distributions for Sums of Independent Random Vectors, Wiley, New York.
[33] Meerschaert, M.M. and Scheffler, H.P. (2003) Portfolio modeling with heavy tailed random vectors, in Handbook of Heavy Tailed Distributions in Finance, S.T. Rachev (Ed.), Elsevier Science, Amsterdam, 595-640.
[34] Mittnik, S. and Rachev, S.T. (1991) Alternative multivariate stable distributions and their applications to financial modelling, in Stable Processes and Related Topics, (S. Cambanis et al., eds.), pp. 107-119, Birkhäuser, Boston.
[35] Mittnik, S. and Rachev, S.T. (1993) Modeling asset returns with alternative stable distributions, Econometric Rev. 12(3), 261-330.
[36] Mittnik, S. and Rachev, S.T. (1993) Reply to comments on "Modeling asset returns with alternative stable distributions" and some extensions, Econometric Rev. 12(3), 347-389.


[37] Mittnik, S., Rachev, S.T. and Rüschendorf, L. (1999) Test of association between multivariate stable vectors, Math. Comput. Modelling 29, 181-195.
[38] Nolan, J.P. (1997) Maximum likelihood estimation and diagnostics for stable distributions, in Lévy Processes, (Eds., O. Barndorff-Nielsen, T. Mikosch, and S. Resnick), pp. 379-400, Birkhäuser, Boston.
[39] Rachev, S.T. and Mittnik, S. (2000) Stable Paretian Models in Finance, Wiley, Chichester.
[40] Rachev, S.T. and Samorodnitsky, G. (1994) Geometric stable distributions in Banach spaces, J. Theoret. Probab. 7(2), 351-373.
[41] Rachev, S.T. and SenGupta, A. (1992) Geometric stable distributions and Laplace-Weibull mixtures, Statist. Decisions 10, 251-271.
[42] Rachev, S.T. and SenGupta, A. (1993) Laplace-Weibull mixtures for modeling price changes, Management Science 39(8), 1029-1038.
[43] Resnick, S. and Greenwood, P. (1979) A bivariate stable characterization and domains of attraction, J. Multivariate Anal. 9, 206-221.
[44] Samorodnitsky, G. and Taqqu, M. (1994) Stable Non-Gaussian Random Processes, Chapman & Hall, New York.
[45] Sato, K.I. (1999) Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press.
[46] Szasz, D. (1972) On classes of limit distributions for sums of a random number of identically distributed independent random variables, Theory Probab. Appl. 27, 401–415.