Simulating Multivariate Distributions with Specific Correlations

Abu T. M. Minhajuddin, Ian R. Harris, and William R. Schucany
Department of Statistical Science, Southern Methodist University, Dallas, Texas

July 17, 2003

Abstract

The mixture approach for simulating bivariate distributions introduced by Michael and Schucany (2002) is generalized to generate pseudo-random numbers from multivariate distributions. The simulated joint distributions have identical marginals and equal, positive pairwise correlations. The approach is illustrated for the p-dimensional families of beta and gamma distributions. For these families the formulas for the correlations have simple closed forms and the computations are quite simple.

Keywords: beta, conjugate prior, dependent, exchangeable, gamma, generating random deviates, joint, mixture method

1 Introduction

Simulated multivariate distributions are valuable for evaluating methods to analyze multivariate data. Simulation studies are used to verify theoretical, large-sample properties of statistical methods, estimators, and test statistics, and also to assess their small-sample properties. Indeed, simulation techniques are an integral part of data-analysis methodologies, e.g., Markov chain Monte Carlo and bootstrap resampling.

Almost all multivariate statistical methods assume some specific multivariate model for the underlying joint distribution. One of the most popular models used in practice is the multivariate normal distribution. Its familiarity and the convenience of simulating from it may account for this popularity. However, real life is more complicated than the nice, bell-shaped symmetry of the multivariate normal model. Observational data are often non-normal, either skewed or relatively heavy-tailed. Unfortunately, the simulation of multivariate non-normal data is not a very common practice, partly because of the lack of algorithms for doing so.

A few general techniques for simulating non-normal multivariate distributions have appeared in the literature. Examples include the conditional distribution approach, such as the Gibbs sampler, the transformation approach, and the rejection approach. However, these methods can have computational difficulties, are restrictive in form, and model only weak dependence (Johnson, 1987). Moreover, some of these approaches, such as rejection, may be computationally expensive.

Michael and Schucany (2002) presented a mixture approach for simulating bivariate distributions with specified correlations. The crux of their new bivariate beta is the conjugate-family property that a random variable generated from a beta prior eventually yields a random deviate from a posterior that is also beta. This paper generalizes the mixture approach to multivariate distributions with a specific correlation structure. The resulting p-variate distributions have correlation matrix I + ρ(J − I), where ρ is the correlation coefficient between any two of the exchangeable components, and I and J denote the identity and unit matrices, respectively.

This paper is organized as follows. Section 2 presents the basic theory of the mixture approach for multivariate distributions. Sections 3 and 4 apply this new mixture method to simulate new p-variate families of beta and gamma distributions, respectively. These may offer valuable alternatives to some current Bayesian approaches that use independent exchangeable priors; see Howard (1998). Section 5 contains some graphical illustrations of simulated trivariate beta and gamma distributions. Finally, Section 6 concludes the paper with some general comments on mixture approaches for general p-variate distributions and future research directions. The Appendix contains the derivations of the formulas for the correlation coefficients of these new p-variate beta and gamma families.

2 Theory of the Mixture Method

The mixture method for bivariate families in Michael and Schucany (2002) is sequential. For example, X1 ∼ beta (prior), K ∼ binomial (conditional), and X2 ∼ beta (posterior). The familiar terminology of Bayesian models may be helpful even though there is merely a three-stage hierarchy: X1 → K | x1 → X2 | k. The present paper gets exactly the same result by starting in the middle and working inside out to each xi. We establish the equivalence of beginning with the marginal of K ∼ beta-binomial and then taking independent draws from two beta posteriors, X1 | k and X2 | k. This two-stage hierarchy allows the extension from 2 to p exchangeable variates to be immediate.

Let the random variable K have a distribution with marginal probability density (or mass) function

m(k; θ),    (1)

where the parameter θ may be vector valued. Conditional on K = k, one can independently simulate p random variables X1, X2, ..., Xp, each with conditional pdf (or pmf)

p(xi | k, η),    i = 1, 2, ..., p,    (2)

where the parameter η may also be vector valued. Multiplying (1) and (2) yields the joint distribution of X1, X2, ..., Xp and K,

j(x1, x2, ..., xp, k; θ, η) = m(k; θ) ∏_{i=1}^{p} p(xi | k, η).    (3)

Notice that, in equation (3), the joint distribution is “symmetric” with respect to xi and xj for i ≠ j. Thus the random variables Xi, i = 1, 2, ..., p, are exchangeable, and conditional on K = k, Xi and Xj are independent. Integrating (or summing, in case K is discrete) the joint distribution (3) with respect to K, we can find the joint distribution of X1, X2, ..., Xp,

f(x1, x2, ..., xp; θ, η) = ∫ j(x1, x2, ..., xp, k; θ, η) dk.    (4)

The parameter η in the joint distribution (4) plays a critical role: the choice of the value of η precisely controls the correlation between any two exchangeable components Xi and Xj for i ≠ j through the outcomes of K. Therefore, there are only two steps in the mixture approach for simulating multivariate distributions:

Step 1: Simulate K = k from the marginal m(k; θ).

Step 2: Based on the same value of K = k, independently simulate p random variables Xi, i = 1, 2, ..., p, from the posterior p(xi | K = k, η).

Here, we take advantage of the Bayesian conjugate prior families of distributions. For example, the beta family is conjugate for the binomial family, i.e., if X has a beta(α, β) distribution and K | X = x has a binomial distribution, this yields a beta posterior for X and a beta-binomial marginal (unconditional) distribution for K. Similarly, gamma priors for the Poisson family lead to a negative binomial distribution for K. In this new mixture approach, we eliminate the prior distribution step: we start from the unconditional distribution of K and, conditional on K = k, generate the random variables Xi, i = 1, 2, ..., p, each from the conjugate family.
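The two steps above can be sketched as a small driver routine. This is a sketch in NumPy rather than the paper's S-Plus; the function and argument names (`mixture_sample`, `draw_k`, `draw_x_given_k`) are ours, not the paper's, and any compatible pair of generators for K and Xi | K can be plugged in.

```python
import numpy as np

def mixture_sample(n, p, draw_k, draw_x_given_k, rng=None):
    """Two-step mixture sampler: Step 1 draws the mixing variable K,
    Step 2 draws p conditionally independent components given K."""
    rng = np.random.default_rng() if rng is None else rng
    k = draw_k(rng, n)                 # Step 1: K ~ m(k; theta)
    return draw_x_given_k(rng, k, p)   # Step 2: X_i | K = k, i = 1..p

# Example plug-ins for the beta-binomial / beta conjugate pair of
# Section 3 (alpha = 2, beta = 1, nu = 12 gives rho = 0.8):
alpha, beta, nu = 2.0, 1.0, 12

def draw_k(rng, n):
    # beta-binomial generated via its beta mixture representation
    return rng.binomial(nu, rng.beta(alpha, beta, size=n))

def draw_x_given_k(rng, k, p):
    # independent beta(alpha + k, nu + beta - k) draws in each row
    a = (alpha + k)[:, None]
    b = (nu + beta - k)[:, None]
    return rng.beta(a, b, size=(len(k), p))
```

Each row of the returned array is one p-variate observation, sharing a single draw of K across its components.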


3 Simulating a p-Variate Beta Family

Let K be distributed as a beta-binomial random variable with parameters ν, α, and β. Conditional on K = k, we can independently generate p random variables from a beta(α + k, ν + β − k). The joint distribution of X and K is

j(x, k; ν, α, β) = (ν choose k) [x^(k+α−1) (1 − x)^(ν+β−k−1) / B(k + α, ν + β − k)] × [B(k + α, ν + β − k) / B(α, β)]
                 = [x^(α−1) (1 − x)^(β−1) / B(α, β)] × (ν choose k) x^k (1 − x)^(ν−k).    (5)

The marginal distribution for X is obtained by summing the joint distribution (5) over k. The terms involving k in (5) are the probabilities of a binomial random variable with parameters ν and x and hence sum to one. These calculations are valid irrespective of the subscript i of the random variable X, which proves the exchangeability of the Xi's. The correlation coefficient between any two Xi and Xj for i ≠ j simplifies to ρbeta = ν/(ν + α + β). Technical details are in the Appendix.

The required parameter ν for the beta-binomial distribution may be obtained by inverting the formula for the correlation coefficient. For specified α, β, and correlation ρbeta, we get ν = (α + β)ρbeta/(1 − ρbeta). For certain values of ρ, this gives a fractional value for ν. In practice, one can easily avoid this by slightly modifying the value of the correlation ρ; for example, ρ = 0.79 and ρ = 0.80 are interchangeable in most practical situations.
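As a concrete sketch of this recipe in NumPy (the function and argument names are ours, not the paper's), the beta-binomial K can be generated exactly by compounding a beta draw with a binomial draw:

```python
import numpy as np

def rmvbeta(n, p, alpha, beta, rho, rng=None):
    """n draws from the p-variate beta family of Section 3.

    Marginals are beta(alpha, beta); each pair of components has
    correlation rho = nu / (nu + alpha + beta)."""
    rng = np.random.default_rng() if rng is None else rng
    # Invert rho = nu/(nu + alpha + beta); round to an integer size,
    # slightly perturbing rho as the paper suggests.
    nu = int(round((alpha + beta) * rho / (1.0 - rho)))
    # Step 1: K ~ beta-binomial(nu, alpha, beta), via its beta mixture.
    q = rng.beta(alpha, beta, size=n)
    k = rng.binomial(nu, q)
    # Step 2: given K = k, the p components are independent
    # beta(alpha + k, nu + beta - k) draws.
    a = (alpha + k)[:, None]
    b = (nu + beta - k)[:, None]
    return rng.beta(a, b, size=(n, p))
```

For alpha = 2, beta = 1, rho = 0.8 this reproduces the nu = 12 setting used for Figure 1.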

4 Simulating a p-Variate Gamma Family

To simulate a multivariate gamma(r, λ) family, begin by simulating K = k from the negative binomial distribution with parameters r and π, where π = λ/(λ + θ). Conditional on the same value of K = k, then independently generate X1, X2, ..., Xp from a Gamma(r + k, λ + θ) with pdf

p(x | k; r, λ, θ) = [(λ + θ)^(r+k) / Γ(r + k)] x^(r+k−1) e^(−(λ+θ)x).    (6)

Therefore, the joint distribution of X and K is

j(x, k; r, θ, λ) = [(λ + θ)^(r+k) / Γ(r + k)] x^(r+k−1) e^(−(λ+θ)x) × (r + k − 1 choose k) π^r (1 − π)^k
                 = [λ^r θ^k / (Γ(r) Γ(k + 1))] x^(r+k−1) e^(−(λ+θ)x).    (7)

We evaluate the marginal of X by summing equation (7) over k. Recognizing that Σ_{k=0}^{∞} (θx)^k e^(−θx)/Γ(k + 1) = 1 is the total probability of a Poisson(θx) random variable, the marginal distribution of X is Gamma(r, λ). The above calculations are true for each Xi, i = 1, 2, ..., p. This proves the exchangeability of the Xi's. The correlation coefficient between any two Xi and Xj for i ≠ j reduces to the simple expression ρgamma = θ/(λ + θ). Details are shown in the Appendix. Solving for θ, we get θ = λρgamma/(1 − ρgamma).

5 Graphical Illustrations

The plots in this section illustrate the equicorrelation and exchangeability of the components of the simulated p-variate distributions through scatterplots and quantile-quantile plots. The univariate random numbers needed are generated by using the S-Plus functions rbeta, rbinom, rnbinom, and rgamma.

Figure 1 contains three pairwise scatterplots and three sample quantile-quantile plots for a simulated sample of size n = 50 from a trivariate beta(2, 1) distribution. The correlation coefficient ρbeta is specified to be 0.8, which dictates the value ν = 3(0.8)/0.2 = 12. The upper right panels of Figure 1 show the strong positive pairwise correlation of the components of this trivariate beta(2, 1). In each case the sample points are skewed to the left and concentrated near the top right corner of the plot, as expected for random observations from the beta(2, 1) distribution. The lower left panels are the sample quantile-quantile plots. These points are close to the straight line with unit slope passing through the origin, as with any components that are identically distributed. This empirically demonstrates the exchangeability of the components of the trivariate beta distribution.

[Figure 1: Scatterplots (above the diagonal) and sample quantile-quantile plots (below the diagonal) for beta(2, 1), sample size = 50, correlation coefficient = 0.8.]

Figure 2 is a plot of the sample quantiles of each component against the theoretical beta(2, 1) quantiles. The strong linear patterns provide graphical evidence that each marginal is the beta(2, 1). A similar plot for a random sample from the trivariate gamma(1, 3) distribution is presented in Figure 3. The scatterplots in the upper right panels of Figure 3 reflect the right-skewed nature of each of the components. In this case also, the correlation is set to ρgamma = 0.8. The quantile-quantile plots in the lower left panels of Figure 3 show the exchangeable nature of the components. Theoretical gamma quantile plots for each marginal offer the same graphical confirmation as Figure 2 and are not presented here.


[Figure 2: True beta(2, 1) quantile plots for the marginals.]

6 Summary and Discussion

The extended mixture approach gives an exact method for simulating multivariate distributions with a positive equicorrelation structure. The method is computationally efficient: for a p-variate random sample of size n, only n additional observations need be generated. It is available for most positive values of the correlation coefficient. The specific applications presented here used the hierarchical structure of the binomial-beta and negative binomial-gamma distributions. Whenever this form of hierarchy is available, the computations are relatively easy if univariate generators are readily available. However, such a hierarchy is not required for the application of the mixture approach; without it, the computational burden may be severe, as one might have to resort to numerical integration and iterative algorithms to simulate observations from a general univariate distribution. See Gentle (1998) for details of such methods.

Simulating a multivariate distribution with negative correlation using the mixture approach is not straightforward. If the distribution has finite support, then one can simulate a multivariate distribution with the same pairwise negative correlations. In this special case the correlation matrix will have the form ρ12 = ρ34 = · · · < 0, and ρij = 0 otherwise. To simulate a bivariate beta distribution with negative correlation, one can generate (X1′, X2′) with ρ12 > 0, and transform to X1 = X1′ and X2 = 1 − X2′, so that Cor(X1, X2) = −ρ12. One retains the exchangeability only if α = β. One can independently generate pairs with negative correlations, and then combine them to comprise the p-variate sample. Clearly, if p were not a multiple of two, the last component Xp would be independent of the other components Xi. We have no recommendation for using the mixture method for multivariate distributions with negative correlations if the support is unbounded.

[Figure 3: Scatterplots (above the diagonal) and sample quantile-quantile plots (below the diagonal) for gamma(1, 3), sample size = 50, correlation coefficient = 0.8.]

Two further extensions of the mixture approach are under consideration: to simulate 1) multivariate distributions with a correlation structure following the first-order autoregressive scheme; and 2) multivariate distributions with a specific correlation structure where the components are not exchangeable.
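The reflection device for negatively correlated pairs can be sketched as follows. This is our illustrative code, not the paper's; it assumes α = β so that the reflected coordinate 1 − X2′ keeps the same beta marginal.

```python
import numpy as np

def rbeta_negcorr(n, alpha, rho, rng=None):
    """Bivariate beta(alpha, alpha) pairs with correlation -rho.

    Generates a positively correlated exchangeable pair by the mixture
    method, then reflects the second coordinate (X2 -> 1 - X2), which
    flips the sign of the correlation.  alpha = beta is required so the
    reflected coordinate is still beta(alpha, alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    nu = int(round(2 * alpha * rho / (1.0 - rho)))  # beta-binomial size
    q = rng.beta(alpha, alpha, size=n)
    k = rng.binomial(nu, q)                         # K ~ beta-binomial
    x1 = rng.beta(alpha + k, nu + alpha - k)
    x2 = 1.0 - rng.beta(alpha + k, nu + alpha - k)  # reflected coordinate
    return np.column_stack([x1, x2])
```

Independent calls to this function supply the negatively correlated pairs that are then combined into the p-variate sample.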


7 References

1. Johnson, M. E. (1987), Multivariate Statistical Simulation, New York: Wiley.

2. Gentle, J. E. (1998), Random Number Generation and Monte Carlo Methods, New York: Springer-Verlag.

3. Howard, J. V. (1998), “The 2 × 2 table: A discussion from a Bayesian viewpoint,” Statistical Science, 13, 351–367.

4. Michael, J. R., and Schucany, W. R. (2002), “The mixture approach for simulating bivariate distributions with specific correlations,” The American Statistician, 56, 48–54.

8 Appendix

A Derivation for a Multivariate Beta

Let K ∼ beta-binomial(ν, α, β) with pmf

m(k) = (ν choose k) B(k + α, ν + β − k) / B(α, β),

mean E(K) = να/(α + β), and variance V(K) = ναβ(α + β + ν)/[(α + β)²(α + β + 1)], which implies that E(K²) = να(να + ν + β)/[(α + β + 1)(α + β)].

Conditional on the same fixed k, generate p random variables X1, X2, ..., Xp, independently of each other. The conditional distribution for each Xi is beta(α + k, ν + β − k) with pdf

p(xi | k) = xi^(k+α−1) (1 − xi)^(ν+β−k−1) / B(k + α, ν + β − k),    i = 1, 2, ..., p,

where B(·, ·) denotes the beta function B(a, b) = Γ(a)Γ(b)/Γ(a + b). Since the Xi's are generated independently, the joint pdf of Xi, i = 1, 2, ..., p, and K is given by

j(x1, x2, ..., xp, k; α, β, ν) = (ν choose k) [B(k + α, ν + β − k)/B(α, β)] ∏_{i=1}^{p} [xi^(k+α−1) (1 − xi)^(ν+β−k−1) / B(k + α, ν + β − k)].

To show that the unconditional marginal distribution of each Xi is beta(α, β), consider the case p = 2. The joint distribution of X1, X2, and K is given by

j(x1, x2, k; α, β, ν) = (ν choose k) [B(k + α, ν + β − k)/B(α, β)] ∏_{i=1}^{2} [xi^(k+α−1) (1 − xi)^(ν+β−k−1) / B(k + α, ν + β − k)]
                      = (ν choose k) (x1 x2)^(α+k−1) {(1 − x1)(1 − x2)}^(ν+β−k−1) / [B(α, β) B(α + k, ν + β − k)].

It follows that the joint distribution of X1 and K is

j(x1, k; α, β, ν) = [x1^(α−1) (1 − x1)^(β−1) / B(α, β)] (ν choose k) x1^k (1 − x1)^(ν−k),

which in turn implies that the unconditional marginal of X1 is beta(α, β). This is true for any p and each of the Xi, i = 1, 2, ..., p. Moreover,

E(Xi) = E_K{E(Xi | K)} = E_K[(K + α)/(α + β + ν)] = [να/(α + β) + α]/(α + β + ν) = α/(α + β),

and

V(Xi) = E_K{V(Xi | K)} + V_K{E(Xi | K)}
      = E_K[(K + α)(β + ν − K)/((α + β + ν + 1)(α + β + ν)²)] + V_K[(K + α)/(α + β + ν)]
      = αβ/[(α + β)²(α + β + 1)].

For any 1 ≤ i ≠ j ≤ p, we calculate the expected value of the product to be

E(Xi Xj) = E_K{E(Xi Xj | K)} = E_K{E(Xi | K) E(Xj | K)} = E_K[(K + α)²/(α + β + ν)²]
         = E_K{K² + 2αK + α²}/(α + β + ν)²
         = α{ν(α + 1) + α(α + β + 1)}/[(α + β + ν)(α + β + 1)(α + β)].

Hence, the correlation between any Xi and Xj for i ≠ j is given by

ρ(Xi, Xj) = Cov(Xi, Xj)/√(V(Xi)V(Xj)) = [E(Xi Xj) − E(Xi)E(Xj)]/V(Xi)
          = {ναβ/[(α + β)²(α + β + 1)(α + β + ν)]} × {(α + β)²(α + β + 1)/(αβ)}
          = ν/(α + β + ν).

B Derivation for a Multivariate Gamma

Let K ∼ negative-binomial(r, p) with pmf

m(k; r, p) = (r + k − 1 choose k) p^r (1 − p)^k,    k = 0, 1, 2, ...,    0 ≤ p ≤ 1,

mean E(K) = r(1 − p)/p, and variance V(K) = r(1 − p)/p². Let p = λ/(λ + θ), where λ > 0 and θ > 0. Hence E(K) = rθ/λ and V(K) = rθ(λ + θ)/λ². Therefore, E(K²) = (rθ/λ²)(rθ + λ + θ).

Conditional on the same k, independently generate p random variables X1, X2, ..., Xp so that each Xi is Gamma(r + k, λ + θ) with pdf

p(xi | k; r, λ, θ) = [(λ + θ)^(r+k)/Γ(r + k)] xi^(r+k−1) e^(−(λ+θ)xi).

It follows that the joint pdf of X1, X2, ..., Xp and K is

j(x1, x2, ..., xp, k; r, λ, θ) = [(λ + θ)^(r+k)/Γ(r + k)]^p (x1 x2 · · · xp)^(r+k−1) e^(−(λ+θ)(x1 + x2 + · · · + xp)) × [λ^r θ^k/(Γ(r)Γ(k + 1))] [Γ(r + k)/(λ + θ)^(r+k)].

It requires only trivial steps to show that the unconditional distribution of each Xi is Gamma(r, λ). Also,

E(Xi) = E_K{E(Xi | K)} = E_K[(r + K)/(λ + θ)] = [r + rθ/λ]/(λ + θ) = r(λ + θ)/[λ(λ + θ)] = r/λ,

and

V(Xi) = E_K{V(Xi | K)} + V_K{E(Xi | K)}
      = E_K[(r + K)/(λ + θ)²] + V_K[(r + K)/(λ + θ)]
      = [1/(λ + θ)²][r + rθ/λ + rθ(λ + θ)/λ²]
      = r/λ².

For any 1 ≤ i ≠ j ≤ p, we calculate the expected value of the product to be

E(Xi Xj) = E_K{E(Xi Xj | K)} = E_K{E(Xi | K) E(Xj | K)} = E_K[(r + K)²/(λ + θ)²]
         = r{r(λ + θ) + θ}/[(λ + θ)λ²].

Hence, the correlation between any Xi and Xj for i ≠ j is given by

ρ(Xi, Xj) = Cov(Xi, Xj)/√(V(Xi)V(Xj)) = [E(Xi Xj) − {E(Xi)}²]/V(Xi)
          = {rθ/[λ²(λ + θ)]} × (λ²/r)
          = θ/(λ + θ).

13

Abu T. M. Minhajuddin, Ian R. Harris, and William R. Schucany Department of Statistical Science, Southern Methodist University Dallas, Texas.

July 17, 2003

Abstract

The mixture approach for simulating bivariate distributions introduced by Michael and Schucany (2002) is generalized to generate pseudo-random numbers from multivariate distributions. The simulated joint distributions have identical marginals and equal, positive pairwise correlations. The approach is illustrated for the p-dimensional families of beta and gamma distributions. For these families the formulas for the correlations have simple closed forms and the computations are quite simple. Keywords: beta, conjugate prior, dependent, exchangeable, gamma, generating random deviates, joint, mixture method

1

Introduction

Simulated multivariate distributions are valuable for evaluating methods to analyze multivariate data. Simulation studies are used to verify theoretical, large sample prop-

1

erties of statistical methods, estimators, and test statistics and also to assess their small sample properties. Indeed simulation techniques are an integral part of data analysis methodologies, e.g., Markov Chain Monte Carlo and bootstrap resampling. Almost all multivariate statistical methods assume some specific multivariate model for the underlying joint distribution. One of the most popular models used in practice is the multivariate normal distribution. Its familiarity and convenience of simulation may be the reason for this popularity of the normal model. However, real life is more complicated than the nice, bell-shaped symmetry of the multivariate normal model. Often observational data are non-normal, either skewed or relatively heavy-tailed. Unfortunately, the simulation of multivariate non-normal data is not a very common practice, partly because of the lack of algorithms for doing so. A few general techniques for simulating non-normal multivariate distributions have appeared in the literature. Examples include the conditional distribution approach, such as the Gibbs sampler, the transformation approach, and the rejection approach. However, these methods have computational difficulties, are restrictive in form, and model only weak dependence (Johnson, 1987). Moreover, some of these approaches such as rejection may be computationally expensive. Michael and Schucany (2002) presented a mixture approach to simulate bivariate distributions with specified correlations. The crux of their new bivariate beta is the property of conjugate families that random variables from a beta prior can be generated and eventually yields a random deviates from a posterior that is also beta. This paper is a generalization of the mixture approach for multivariate distributions with a specific correlation structure. 
The resulting pvariate distributions have correlation matrix (I + (J − I) ∗ ρ), where ρ is the correlation coefficient between any two of the exchangeable components, I and J denote the identity and unit matrices, respectively. This paper is organized as follows: Section 2 presents the basic theory of the mixture approach for multivariate distributions. Sections 3 and 4 apply this new mixture

2

method to simulate new p-variate families of gamma and beta distributions, respectively. These may offer valuable alternatives to some current Bayesian approaches that use independent exchangeable priors; see Howard (1998). Section 5 contains some graphical illustrations of simulated trivariate gamma and beta distributions. Finally, Section 6 concludes the paper with some general comments on mixture approaches for general p-variate distributions and future research directions. The Appendix contains the derivations of formulas for the correlation coefficients for these new p-variate gamma and beta families.

2

Theory of the Mixture Method

The mixture method for bivariate families in Michael and Schucany (2002) is sequential. For example X1 ∼ beta (prior), K ∼ binomial (conditional), and X2 ∼ beta (posterior). The familiar terminology of Bayesian models may be helpful even though there is merely a three-stage hierarchy: X1 → K | x1 → X2 | k. The present paper gets exactly the same result by starting in the middle and working inside out to each xi . We establish the equivalence of beginning with the marginal of K ∼ beta-binomial and then taking independent draws from two beta posteriors, X1 | k and X2 | k. This two-stage hierarchy allows the extension from 2 to p exchangeable variates to be immediate. Let the random variable K have a distribution with marginal probability density (or mass) function m(k; θ),

(1)

where the parameter θ may be vector valued. Conditional on K = k, one can independently simulate p random variables X1 , X2 , . . . , Xp each with conditional pdf (or pmf) p(xi | k, η),

i = 1, 2, . . . , p,

(2)

where the parameter η may also be vector valued. Multiplying (1) and (2) yields the

3

joint distribution of X1 , X2 , . . . , Xp and K, j(x1 , x2 , . . . , xp , k; θ, η) = m(k; θ)

p Y

p(xi | k, η).

(3)

i=1

Notice that, in equation (3), the joint distribution is “symmetric” with respect to xi and xj for i 6= j. Thus the random variables Xi , i = 1, 2, . . . , p are exchangeable and conditional on K = k, Xi and Xj are independent. Integrating (or summing, in case K is discrete) the joint distribution (3) with respect to K, we can find the joint distribution of X1 , X2 , . . . , Xp , f (x1 , x2 , . . . , xp ; θ, η) =

Z

j(x1 , x2 , . . . , xp , k; θ, η)dk.

(4)

The parameter η in the joint distribution (4) plays a critical role: the choice of the value of η precisely controls the correlation between any two exchangeable components Xi and Xj for i 6= j through the outcomes of K. Therefore, there are only two steps in the mixture approach for simulating multivariate distributions: Step 1: Simulate K = k from marginal m(k; θ). Step 2: Based on the same value of K = k, independently simulate p random variables Xi , i = 1, 2, . . . , p from a posterior p(xi , | K = k, η). Here, we take advantage of the Bayesian conjugate prior families of distributions. For example, the beta family is conjugate for the binomial family, i. e., if X has a beta(α, β) distribution, and K|X = x has a binomial distribution, this yields a beta posterior for X and a beta-binomial marginal (unconditional) for K. Similarly, gamma priors for the Poisson family lead to a negative binomial distribution for K. In this new mixture approach, we eliminate the prior distribution step and start from the unconditional distribution of K and conditional on K = k, generate the random variables Xi , i = 1, 2, . . . , p, each from the conjugate family.

4

3

Simulating a p-Variate Beta Family

Let K be distributed as a beta-binomial random variable with parameters ν, α, and β. Conditional on K = k, we can independently generate p random variables from a beta(α + k, ν + β − 1). The joint distribution of X and K is j(x, k; ν, α, β)

= =

ν xk+α−1 (1 − x)ν+β−k−1 k B(k + α, ν + β − k) × B(k + α, ν + β − k) B(α, β) α−1 β−1 x (1 − x) ν k × x (1 − x)(ν−k) B(α, β) k

(5)

The marginal distribution for X is obtained by summing the joint distribution (5) over k. The terms involving k in (5) represent the probabilities of a binomial random variable with parameters ν and x and hence sum to one. These calculations are valid irrespective of the subscript i of the random variable X, which proves the exchangeability of the Xi s. The correlation coefficient between any two Xi and Xj for i 6= j simplifies to ρbeta = ν/(ν + α + β). Technical details are in the Appendix. The required parameter ν for the beta-binomial distribution may be obtained by solving the formula for the correlation coefficient. For specified α, β, and correlation ρbeta , we get ν = (α + β)ρbeta /(1 − ρbeta ). For certain values of ρ, one can get a fractional value for ν. However, in practice, one can easily avoid this by slightly modifying the value of the correlation ρ. For example, ρ = 0.79 and ρ = 0.80 are the same for most practical situations.

4

Simulating a p-Variate Gamma Family

To simulate a multivariate gamma(r, λ) family, begin by simulating K = k from the negative binomial distribution with parameters r and π, where π = λ/(λ + θ). Conditional on the same value of K = k, then independently generate X1 , X2 , . . . , Xp from Gamma(r + k, λ + θ) with pdf p(x | k; r, λ, θ) =

λ + θ r+k−1 −(λ+θ)x x e . Γ(r + k)

5

(6)

Therefore, the joint distribution of X and K is j(x, k; r, θ, λ)

= =

(λ + θ)r+k r+k−1 −(λ+θ)x r+k−1 r x e × π (1 − π)k Γ(r + k) k

λr θk xr+k−1 e−(λ+θ)x . Γ(r)Γ(k + 1)

(7)

We evaluate the marginal of X by summing equation (7) over k. Recognizing that ∞ P k=0

(θx)k e−θx /Γ(k) = 1 is the total probability of a Poisson(θx) random variable,

the marginal distribution of X is Gamma(r, λ). The above calculations are true for each Xi , i = 1, 2, . . . , p. This proves the exchangeability of the Xs. The correlation coefficient between any two Xi and Xj for i 6= j reduces to the simple expression ρgamma = θ/(λ + θ). Details are shown in the Appendix. Solving for θ, we get θ = λρgamma /(1 − ρgamma ).

5

Graphical Illustrations

The graphical illustrations presented in this section illustrate the equicorrelation and exchangeability of the components of the simulated p-variate distributions through scatterplots and quantile-quantile plots. The univariate random numbers needed are generated by using S-Plus functions rbeta, rbinom, rnbinom and rgamma. Figure 1 contains three pairwise scatterplots and three sample quantile-quantile plots for a simulated sample of size n = 50 from a trivariate beta(2, 1) distribution. The correlation coefficient ρbeta is specified to be 0.8, which dictated the value ν = 3(.8)/.2 = 12. The upper right panels of Figure 1 show the strong positive pairwise correlation of the components of this trivariate beta(2, 1). In each case the sample points are skewed to the left and concentrated near the top right corner of the plot, as expected from random observations from the beta(2, 1) distribution. The lower left panels are the sample quantile-quantile plots. These points are close to the straight line with unit slope passing through the origin as with any components that are identically distributed. This empirically demonstrates the exchangeability of the components of the trivariate

6

1.0 0.8 0.6 1.0 0.8 0.6

0.6

0.8

1.0

0.0

0.2

0.4

X[, 1 ]

0.4 0.2 0.0

0.6

0.8

1.0

0.0

0.2

0.4

X[, 2 ]

0.0

0.2

0.4

X[, 3 ]

0.0

0.2

0.4

0.6

0.8

1.0 0.0

0.2

0.4

0.6

0.8

1.0

Figure 1: Scatterplots (above the diagonal) and sample quantile-quantile plots (below the diagonal) for beta(2, 1), sample size = 50, correlation coefficient = 0.8. beta distribution. Figure 2 is a plot of the sample quantiles of each component against the theoretical beta(2, 1) quantiles. The strong linear patterns provide a graphical evidence that each marginal is the beta(2,1). A similar plot for a random sample from the trivariate gamma(1, 3) distribution is presented in Figure 3. The scatterplots in the upper right panels of Figure 3 suggest the right skewed nature of each of the components. In this case also, the correlation is set to be ρgamma = 0.8. The quantile-quantile plots in the lower left panels of Figure 3 show the exchangeable nature of the components. Theoretical gamma quantile plots for each marginal offer the same graphical confirmation as Figure 2 and are not presented here.

7

0.4

0.8

0.8 0.0

0.4

X[,3]

0.8 X[,2]

0.0

0.4

0.8 0.4 0.0

X[,1]

0.0

0.0

Beta ( 2 , 1 )

0.4

0.8

0.0

Beta ( 2 , 1 )

0.4

0.8

Beta ( 2 , 1 )

Figure 2: True beta (2,1) quantile plots for the marginals.

6

Summary and Discussion

The extended mixture approach gives an exact method for simulating multivariate distributions with a positive equicorrelation structure. The method is computationally efficient: for a p-variate random sample of size n, only n additional observations need be generated. It is available for most positive values of the correlation coefficient. The specific applications presented here use the hierarchical structure of the binomial-beta and negative binomial-gamma distributions. Whenever this form of hierarchy is available, the computations are relatively easy, provided univariate generators are readily available. However, such a hierarchy is not required for the application of the mixture approach; without it the computational burden may be severe, as one may have to resort to numerical integration and iterative algorithms to simulate observations from a general univariate distribution. See Gentle (1998) for details of such methods.

Simulating a multivariate distribution with negative correlation using the mixture approach is not straightforward. If the distribution has finite support, then one can simulate a multivariate distribution with the same pairwise negative correlations. In this special case the correlation matrix has the form ρ12 = ρ34 = ... < 0, and ρij = 0 otherwise. To simulate a bivariate beta distribution with negative correlation, one can generate (X1, X2) with ρ12 > 0 and transform to X1' = X1 and X2' = 1 − X2, so that Cor(X1', X2') = −ρ12. One retains exchangeability only if α = β. One can independently generate pairs with negative correlations and then combine them to comprise the p-variate sample. Clearly, if p is not a multiple of two, the last component Xp is independent of all other components Xi. We have no recommendation for using the mixture method for multivariate distributions with negative correlations if the support is unbounded.

Figure 3: Scatterplots (above the diagonal) and sample quantile-quantile plots (below the diagonal) for gamma(1, 3), sample size = 50, correlation coefficient = 0.8.

Two further extensions of the mixture approach are under consideration: to simulate 1) multivariate distributions with a correlation structure following the first-order autoregressive scheme, and 2) multivariate distributions with a specific correlation structure where the components are not exchangeable.
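The reflection device for negative correlation can be sketched in a few lines of Python. This is our own illustrative code, not part of the paper, and the function name rbivariate_beta_neg is hypothetical; it builds a positively correlated exchangeable beta(α, α) pair by the mixture construction and reflects the second coordinate:

```python
import numpy as np

def rbivariate_beta_neg(n, nu, alpha, rng):
    """n pairs with beta(alpha, alpha) marginals and correlation
    -nu / (2*alpha + nu), obtained by reflecting the second coordinate
    of a positively correlated mixture pair.  Exchangeability of the
    marginals requires alpha = beta, as noted in the text."""
    p_mix = rng.beta(alpha, alpha, size=n)      # Beta(alpha, alpha) prior draws
    k = rng.binomial(nu, p_mix)                 # K ~ beta-binomial(nu, alpha, alpha)
    x1 = rng.beta(alpha + k, nu + alpha - k)    # conditionally independent given K
    x2 = rng.beta(alpha + k, nu + alpha - k)
    return x1, 1.0 - x2                         # 1 - Beta(a, a) is still Beta(a, a)

rng = np.random.default_rng(0)
x1, x2 = rbivariate_beta_neg(100_000, nu=16, alpha=2.0, rng=rng)
# with alpha = 2 and nu = 16 the positive-pair correlation is 16/20 = 0.8,
# so np.corrcoef(x1, x2)[0, 1] should be near -0.8
```

The reflection changes only the sign of the correlation because 1 − X2 is a monotone decreasing, linear transform of X2.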

7  References

1. Johnson, M. E. (1987), Multivariate Statistical Simulation, New York: Wiley.

2. Gentle, J. E. (1998), Random Number Generation and Monte Carlo Methods, New York: Springer-Verlag.

3. Howard, J. V. (1998), "The 2 × 2 table: A discussion from a Bayesian viewpoint", Statistical Science, 13, 351-367.

4. Michael, J. R., and Schucany, W. R. (2002), "The mixture approach for simulating bivariate distributions with specific correlations", The American Statistician, 56, 48-54.

Appendix

A  Derivation for a Multivariate Beta

Let $K \sim$ beta-binomial$(\nu, \alpha, \beta)$ with pmf
\[
m(k) = \binom{\nu}{k}\,\frac{B(k+\alpha,\;\nu+\beta-k)}{B(\alpha,\beta)},\qquad k = 0, 1, \ldots, \nu,
\]
mean $E(K) = \dfrac{\nu\alpha}{\alpha+\beta}$ and variance $V(K) = \dfrac{\nu\alpha\beta(\alpha+\beta+\nu)}{(\alpha+\beta)^2(\alpha+\beta+1)}$, which implies that
\[
E(K^2) = \frac{\nu\alpha(\nu\alpha+\nu+\beta)}{(\alpha+\beta+1)(\alpha+\beta)}.
\]
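As a quick numerical sanity check (ours, not part of the paper), these beta-binomial moments can be compared against scipy.stats.betabinom, assuming a SciPy version (≥ 1.4) that provides that distribution:

```python
import numpy as np
from scipy.stats import betabinom

nu, a, b = 12, 2.0, 1.0
mean, var = betabinom.stats(nu, a, b, moments="mv")

# closed forms quoted in the text
mean_formula = nu * a / (a + b)
var_formula = nu * a * b * (a + b + nu) / ((a + b) ** 2 * (a + b + 1))

assert np.isclose(mean, mean_formula)   # nu*alpha/(alpha+beta) = 8
assert np.isclose(var, var_formula)
```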

Conditional on the same fixed $k$, generate $p$ random variables $X_1, X_2, \ldots, X_p$, independently of each other. The conditional distribution of each $X_i$ is beta$(\alpha+k, \nu+\beta-k)$ with pdf
\[
p(x_i \mid k) = \frac{x_i^{k+\alpha-1}(1-x_i)^{\nu+\beta-k-1}}{B(k+\alpha,\;\nu+\beta-k)},\qquad i = 1, 2, \ldots, p,
\]
where $B(\cdot,\cdot)$ denotes the beta function, $B(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$. Since the $X_i$'s are generated independently, the joint pdf of $X_i$, $i = 1, 2, \ldots, p$, and $K$ is given by
\[
j(x_1, \ldots, x_p, k; \alpha, \beta, \nu) = \binom{\nu}{k}\frac{B(k+\alpha,\;\nu+\beta-k)}{B(\alpha,\beta)}\;\prod_{i=1}^{p}\frac{x_i^{k+\alpha-1}(1-x_i)^{\nu+\beta-k-1}}{B(k+\alpha,\;\nu+\beta-k)}.
\]

To show that the unconditional marginal distribution of each $X_i$ is beta$(\alpha, \beta)$, consider the case $p = 2$. The joint distribution of $X_1$, $X_2$ and $K$ is given by
\begin{align*}
j(x_1, x_2, k; \alpha, \beta, \nu) &= \binom{\nu}{k}\frac{B(k+\alpha,\;\nu+\beta-k)}{B(\alpha,\beta)}\;\prod_{i=1}^{2}\frac{x_i^{k+\alpha-1}(1-x_i)^{\nu+\beta-k-1}}{B(k+\alpha,\;\nu+\beta-k)}\\
&= \binom{\nu}{k}\,\frac{(x_1x_2)^{\alpha+k-1}\{(1-x_1)(1-x_2)\}^{\nu+\beta-k-1}}{B(\alpha,\beta)\,B(\alpha+k,\;\nu+\beta-k)}.
\end{align*}


Integrating out $x_2$ shows that the joint distribution of $X_1$ and $K$ is
\[
j(x_1, k; \alpha, \beta, \nu) = \frac{x_1^{\alpha-1}(1-x_1)^{\beta-1}}{B(\alpha,\beta)}\,\binom{\nu}{k}\,x_1^{k}(1-x_1)^{\nu-k},
\]
and summing over $k = 0, 1, \ldots, \nu$ (the binomial theorem) shows that the unconditional marginal of $X_1$ is beta$(\alpha, \beta)$. This is true for any $p$ and each of the $X_i$, $i = 1, 2, \ldots, p$. Moreover,

\begin{align*}
E(X_i) &= E_K\{E_{X_i\mid K}(X_i)\}\\
&= E_K\!\left(\frac{K+\alpha}{\alpha+\beta+\nu}\right)\\
&= \frac{1}{\alpha+\beta+\nu}\left(\frac{\nu\alpha}{\alpha+\beta}+\alpha\right)\\
&= \frac{\alpha}{\alpha+\beta}
\end{align*}
and
\begin{align*}
V(X_i) &= E_K\{V_{X_i\mid K}(X_i)\} + V_K\{E_{X_i\mid K}(X_i)\}\\
&= E_K\!\left\{\frac{(K+\alpha)(\nu+\beta-K)}{(\alpha+\beta+\nu+1)(\alpha+\beta+\nu)^2}\right\} + V_K\!\left(\frac{K+\alpha}{\alpha+\beta+\nu}\right)\\
&= \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}.
\end{align*}

For any $1 \le i \ne j \le p$, we calculate the expected value of the product to be
\begin{align*}
E(X_iX_j) &= E_K\{E_{X_iX_j\mid K}(X_iX_j)\}\\
&= E_K\{E_{X_i\mid K}(X_i)\,E_{X_j\mid K}(X_j)\}\\
&= E_K\!\left\{\frac{(K+\alpha)^2}{(\alpha+\beta+\nu)^2}\right\}\\
&= \frac{E_K(K^2) + 2\alpha\,E_K(K) + \alpha^2}{(\alpha+\beta+\nu)^2}\\
&= \frac{\alpha\{\nu(\alpha+1)+\alpha(\alpha+\beta+1)\}}{(\alpha+\beta+\nu)(\alpha+\beta+1)(\alpha+\beta)}.
\end{align*}

Hence, the correlation between any $X_i$ and $X_j$ for $i \ne j$ is given by
\begin{align*}
\rho_{X_iX_j} &= \frac{\mathrm{Cov}(X_i, X_j)}{\sqrt{V(X_i)V(X_j)}}\\
&= \frac{E(X_iX_j) - E(X_i)E(X_j)}{\sqrt{V(X_i)V(X_j)}}\\
&= \frac{\nu\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)(\alpha+\beta+\nu)}\cdot\frac{(\alpha+\beta)^2(\alpha+\beta+1)}{\alpha\beta}\\
&= \frac{\nu}{\alpha+\beta+\nu}.
\end{align*}


B  Derivation for a Multivariate Gamma

Let $K \sim$ negative-binomial$(r, p)$ with pmf
\[
m(k; r, p) = \binom{r+k-1}{k}\,p^{r}(1-p)^{k},\qquad k = 0, 1, 2, \ldots;\quad 0 < p \le 1,
\]
mean $E(K) = r(1-p)/p$ and variance $V(K) = r(1-p)/p^2$. Let $p = \dfrac{\lambda}{\lambda+\theta}$, where $\lambda > 0$, $\theta > 0$. Hence $E(K) = \dfrac{r\theta}{\lambda}$ and $V(K) = \dfrac{r\theta(\lambda+\theta)}{\lambda^2}$. Therefore,
\[
E(K^2) = \frac{r\theta}{\lambda^2}\,(r\theta + \lambda + \theta).
\]
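A small numerical check of this reparameterization (our own, not the paper's) can be made against scipy.stats.nbinom, which uses the same pmf with its first argument playing the role of r:

```python
import numpy as np
from scipy.stats import nbinom

r, lam, theta = 1, 3.0, 12.0
p = lam / (lam + theta)
mean, var = nbinom.stats(r, p, moments="mv")

assert np.isclose(mean, r * theta / lam)                    # E(K) = r*theta/lambda
assert np.isclose(var, r * theta * (lam + theta) / lam**2)  # V(K) = r*theta*(lambda+theta)/lambda^2
```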

Conditional on the same $k$, independently generate $p$ random variables $X_1, X_2, \ldots, X_p$ so that each $X_i$ is gamma$(r+k, \lambda+\theta)$ with pdf
\[
p(x_i \mid k; r, \lambda, \theta) = \frac{(\lambda+\theta)^{r+k}}{\Gamma(r+k)}\,x_i^{r+k-1}e^{-(\lambda+\theta)x_i}.
\]
It follows that the joint pdf of $X_1, X_2, \ldots, X_p$ and $K$ is
\[
j(x_1, \ldots, x_p, k; r, \lambda, \theta) = \left[\frac{(\lambda+\theta)^{r+k}}{\Gamma(r+k)}\right]^{p}(x_1x_2\cdots x_p)^{r+k-1}e^{-(\lambda+\theta)(x_1+x_2+\cdots+x_p)} \times \frac{\lambda^{r}\theta^{k}\,\Gamma(r+k)}{\Gamma(r)\Gamma(k+1)\,(\lambda+\theta)^{r+k}}.
\]

It requires only trivial steps to show that the unconditional distribution of each $X_i$ is gamma$(r, \lambda)$. Also,
\begin{align*}
E(X_i) &= E_K\{E_{X_i\mid K}(X_i)\}\\
&= E_K\!\left(\frac{r+K}{\lambda+\theta}\right)\\
&= \frac{1}{\lambda+\theta}\left(r + \frac{r\theta}{\lambda}\right) = \frac{r(\lambda+\theta)}{\lambda(\lambda+\theta)}\\
&= \frac{r}{\lambda}
\end{align*}
and
\begin{align*}
V(X_i) &= E_K\{V_{X_i\mid K}(X_i)\} + V_K\{E_{X_i\mid K}(X_i)\}\\
&= E_K\!\left\{\frac{r+K}{(\lambda+\theta)^2}\right\} + V_K\!\left(\frac{r+K}{\lambda+\theta}\right)\\
&= \frac{1}{(\lambda+\theta)^2}\left\{r + \frac{r\theta}{\lambda} + \frac{r\theta(\lambda+\theta)}{\lambda^2}\right\}\\
&= \frac{r}{\lambda^2}.
\end{align*}

For any $1 \le i \ne j \le p$, we calculate the expected value of the product to be
\begin{align*}
E(X_iX_j) &= E_K\{E_{X_iX_j\mid K}(X_iX_j)\}\\
&= E_K\{E_{X_i\mid K}(X_i)\,E_{X_j\mid K}(X_j)\}\\
&= E_K\!\left\{\frac{(r+K)^2}{(\lambda+\theta)^2}\right\}\\
&= \frac{r\{r(\lambda+\theta)+\theta\}}{(\lambda+\theta)\lambda^2}.
\end{align*}


Hence, the correlation between any $X_i$ and $X_j$ for $i \ne j$ is given by
\begin{align*}
\rho_{X_iX_j} &= \frac{\mathrm{Cov}(X_i, X_j)}{\sqrt{V(X_i)V(X_j)}}\\
&= \frac{E(X_iX_j) - \{E(X_i)\}^2}{V(X_i)}\\
&= \frac{r\theta}{\lambda^2(\lambda+\theta)}\cdot\frac{\lambda^2}{r}\\
&= \frac{\theta}{\lambda+\theta}.
\end{align*}
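As with the beta case, the derivation above yields a direct simulation recipe. This is our own sketch (rmultivariate_gamma is a hypothetical name), using NumPy's negative_binomial for K and the scale (= 1/rate) parameterization of NumPy's gamma. With r = 1, λ = 3, θ = 12, the target correlation is 12/15 = 0.8, the value used in Figure 3.

```python
import numpy as np

def rmultivariate_gamma(n, p, r, lam, theta, rng):
    """n draws from the p-variate gamma with gamma(r, lam) marginals
    (rate parameterization) and pairwise correlation theta / (lam + theta)."""
    k = rng.negative_binomial(r, lam / (lam + theta), size=n)   # shared mixing variable
    # given K = k, the components are iid gamma(r + k, rate = lam + theta)
    return rng.gamma(r + k[:, None], 1.0 / (lam + theta), size=(n, p))

rng = np.random.default_rng(0)
X = rmultivariate_gamma(100_000, 3, r=1, lam=3.0, theta=12.0, rng=rng)
# X.mean(axis=0) should be near r/lam = 1/3 and np.corrcoef(X.T)
# should have off-diagonal entries near theta/(lam + theta) = 0.8
```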
