## HW#5 Solution


Problem Solutions – Chapter 5

Problem 5.1.1 Solution

The repair of each laptop can be viewed as an independent trial with four possible outcomes corresponding to the four types of needed repairs.

(a) Since the four types of repairs are mutually exclusive choices and since 4 laptops are returned for repair, the joint distribution of $N_1, \ldots, N_4$ is the multinomial PMF

$$P_{N_1,\ldots,N_4}(n_1,\ldots,n_4) = \binom{4}{n_1,n_2,n_3,n_4} p_1^{n_1} p_2^{n_2} p_3^{n_3} p_4^{n_4} \tag{1}$$

$$= \begin{cases} \dfrac{4!}{n_1!\,n_2!\,n_3!\,n_4!}\left(\dfrac{8}{15}\right)^{n_1}\left(\dfrac{4}{15}\right)^{n_2}\left(\dfrac{2}{15}\right)^{n_3}\left(\dfrac{1}{15}\right)^{n_4} & n_1+\cdots+n_4 = 4;\ n_i \ge 0 \\ 0 & \text{otherwise} \end{cases} \tag{2}$$

(b) Let $L_2$ denote the event that exactly two laptops need LCD repairs. Thus $P[L_2] = P_{N_1}(2)$. Since each laptop requires an LCD repair with probability $p_1 = 8/15$, the number of LCD repairs, $N_1$, is a binomial $(4, 8/15)$ random variable with PMF

$$P_{N_1}(n_1) = \binom{4}{n_1}(8/15)^{n_1}(7/15)^{4-n_1} \tag{3}$$

The probability that two laptops need LCD repairs is

$$P_{N_1}(2) = \binom{4}{2}(8/15)^2(7/15)^2 = 0.3717 \tag{4}$$
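As a quick numerical check of eq. (4), the binomial probability can be evaluated directly (a Python sketch, standard library only):

```python
from math import comb

# Check of eq. (4): P_N1(2) for a binomial (4, 8/15) random variable.
p1 = 8 / 15
pmf_2 = comb(4, 2) * p1**2 * (1 - p1)**2
print(round(pmf_2, 4))  # 0.3717
```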

(c) A repair is type (2) with probability $p_2 = 4/15$. A repair is type (3) with probability $p_3 = 2/15$; otherwise a repair is type "other" with probability $p_o = 9/15$. Define $X$ as the number of "other" repairs needed. The joint PMF of $X, N_2, N_3$ is the multinomial PMF

$$P_{N_2,N_3,X}(n_2,n_3,x) = \binom{4}{n_2,n_3,x}\left(\frac{4}{15}\right)^{n_2}\left(\frac{2}{15}\right)^{n_3}\left(\frac{9}{15}\right)^{x} \tag{5}$$

However, since $X = 4 - N_2 - N_3$, we observe that

$$P_{N_2,N_3}(n_2,n_3) = P_{N_2,N_3,X}(n_2,n_3,4-n_2-n_3) \tag{6}$$

$$= \binom{4}{n_2,n_3,4-n_2-n_3}\left(\frac{4}{15}\right)^{n_2}\left(\frac{2}{15}\right)^{n_3}\left(\frac{9}{15}\right)^{4-n_2-n_3} \tag{7}$$

$$= \binom{4}{n_2,n_3,4-n_2-n_3}\left(\frac{4}{9}\right)^{n_2}\left(\frac{2}{9}\right)^{n_3}\left(\frac{9}{15}\right)^{4} \tag{8}$$

Similarly, since each repair is a motherboard repair with probability $p_2 = 4/15$, the number of motherboard repairs has binomial PMF

$$P_{N_2}(n_2) = \binom{4}{n_2}\left(\frac{4}{15}\right)^{n_2}\left(\frac{11}{15}\right)^{4-n_2} \tag{9}$$

Finally, the probability that more laptops require motherboard repairs than keyboard repairs is

$$P[N_2 > N_3] = P_{N_2,N_3}(1,0) + P_{N_2,N_3}(2,0) + P_{N_2,N_3}(2,1) + P_{N_2}(3) + P_{N_2}(4) \tag{10}$$

where we use the fact that if $N_2 = 3$ or $N_2 = 4$, then we must have $N_2 > N_3$. Plugging in the various probabilities yields ...
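The sum in eq. (10) can be evaluated numerically; the sketch below enumerates every pair $(n_2, n_3)$ with $n_2 > n_3$, which is equivalent to the five terms listed above:

```python
from math import factorial

# Evaluating eq. (10) numerically: p2 = 4/15 (motherboard), p3 = 2/15
# (keyboard), p_other = 9/15, with N2 + N3 + X = 4 repairs in total.
def multinomial(n2, n3, x):
    coef = factorial(4) // (factorial(n2) * factorial(n3) * factorial(x))
    return coef * (4/15)**n2 * (2/15)**n3 * (9/15)**x

prob = sum(multinomial(n2, n3, 4 - n2 - n3)
           for n2 in range(5) for n3 in range(5 - n2) if n2 > n3)
print(round(prob, 4))  # 0.5129
```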

Problem 5.1.2 Solution

Whether a pizza has topping $i$ is a Bernoulli trial with success probability $p_i = 2^{-i}$. Given that $n$ pizzas were sold, the number of pizzas sold with topping $i$ has the binomial PMF

$$P_{N_i}(n_i) = \begin{cases} \dbinom{n}{n_i} p_i^{n_i}(1-p_i)^{n-n_i} & n_i = 0, 1, \ldots, n \\ 0 & \text{otherwise} \end{cases} \tag{1}$$

Since a pizza has topping $i$ with probability $p_i$ independent of whether any other topping is on the pizza, the number $N_i$ of pizzas with topping $i$ is independent of the number of pizzas with any other topping. That is, $N_1, \ldots, N_4$ are mutually independent and have joint PMF

$$P_{N_1,\ldots,N_4}(n_1,\ldots,n_4) = P_{N_1}(n_1)\,P_{N_2}(n_2)\,P_{N_3}(n_3)\,P_{N_4}(n_4) \tag{2}$$
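A small sketch of eq. (2): the joint PMF evaluates as a product of binomial PMFs. The values $n = 10$ and the counts below are hypothetical, chosen only for illustration:

```python
from math import comb

# Sketch of eq. (2): joint PMF of topping counts as a product of binomial
# PMFs, assuming n = 10 pizzas sold and p_i = 2**-i.
def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 10
counts = {1: 5, 2: 3, 3: 1, 4: 0}   # hypothetical n_i values
joint = 1.0
for i, n_i in counts.items():
    joint *= binom_pmf(n, n_i, 2**-i)
print(joint)
```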

Problem 5.1.3 Solution

(a) In terms of the joint PDF, we can write the joint CDF as

$$F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \int_{-\infty}^{x_1}\cdots\int_{-\infty}^{x_n} f_{X_1,\ldots,X_n}(y_1,\ldots,y_n)\,dy_1\cdots dy_n \tag{1}$$

However, simplifying the above integral depends on the values of each $x_i$. In particular, $f_{X_1,\ldots,X_n}(y_1,\ldots,y_n) = 1$ if and only if $0 \le y_i \le 1$ for each $i$. Since $F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = 0$ if any $x_i < 0$, we limit, for the moment, our attention to the case where $x_i \ge 0$ for all $i$. In this case, some thought will show that we can write the limits in the following way:

$$F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \int_0^{\min(1,x_1)}\cdots\int_0^{\min(1,x_n)} dy_1\cdots dy_n \tag{2}$$

$$= \min(1,x_1)\min(1,x_2)\cdots\min(1,x_n) \tag{3}$$

A complete expression for the CDF of $X_1, \ldots, X_n$ is

$$F_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \begin{cases} \prod_{i=1}^{n} \min(1,x_i) & 0 \le x_i,\ i = 1,2,\ldots,n \\ 0 & \text{otherwise} \end{cases} \tag{4}$$
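Eq. (4) can be sanity-checked by Monte Carlo simulation; the $n = 3$ evaluation point below is illustrative, not from the text:

```python
import random

# Monte Carlo check of eq. (4) for n = 3 iid uniform (0, 1) variables,
# at an arbitrarily chosen point.
random.seed(1)
x = (0.3, 0.8, 1.5)
exact = 1.0
for xi in x:
    exact *= min(1.0, xi)              # eq. (4): product of min(1, x_i)

trials = 200_000
hits = sum(all(random.random() <= xi for xi in x) for _ in range(trials))
print(exact, hits / trials)            # estimate should be close to 0.24
```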

Problem 5.3.4 Solution

For $0 \le y_1 \le y_4 \le 1$, the marginal PDF of $Y_1$ and $Y_4$ satisfies

$$f_{Y_1,Y_4}(y_1,y_4) = \int f_Y(y)\,dy_2\,dy_3 \tag{1}$$

$$= \int_{y_1}^{y_4}\left(\int_{y_2}^{y_4} 24\,dy_3\right) dy_2 \tag{2}$$

$$= \int_{y_1}^{y_4} 24(y_4 - y_2)\,dy_2 \tag{3}$$

$$= \left. -12(y_4 - y_2)^2 \right|_{y_2=y_1}^{y_2=y_4} = 12(y_4 - y_1)^2 \tag{4}$$

The complete expression for the joint PDF of $Y_1$ and $Y_4$ is

$$f_{Y_1,Y_4}(y_1,y_4) = \begin{cases} 12(y_4 - y_1)^2 & 0 \le y_1 \le y_4 \le 1 \\ 0 & \text{otherwise} \end{cases} \tag{5}$$

For $0 \le y_1 \le y_2 \le 1$, the marginal PDF of $Y_1$ and $Y_2$ is

$$f_{Y_1,Y_2}(y_1,y_2) = \int f_Y(y)\,dy_3\,dy_4 \tag{6}$$

$$= \int_{y_2}^{1}\left(\int_{y_3}^{1} 24\,dy_4\right) dy_3 \tag{7}$$

$$= \int_{y_2}^{1} 24(1 - y_3)\,dy_3 = 12(1 - y_2)^2 \tag{8}$$

The complete expression for the joint PDF of $Y_1$ and $Y_2$ is

$$f_{Y_1,Y_2}(y_1,y_2) = \begin{cases} 12(1 - y_2)^2 & 0 \le y_1 \le y_2 \le 1 \\ 0 & \text{otherwise} \end{cases} \tag{9}$$

For $0 \le y_1 \le 1$, the marginal PDF of $Y_1$ can be found from

$$f_{Y_1}(y_1) = \int_{-\infty}^{\infty} f_{Y_1,Y_2}(y_1,y_2)\,dy_2 = \int_{y_1}^{1} 12(1 - y_2)^2\,dy_2 = 4(1 - y_1)^3 \tag{10}$$

The complete expression of the PDF of $Y_1$ is

$$f_{Y_1}(y_1) = \begin{cases} 4(1 - y_1)^3 & 0 \le y_1 \le 1 \\ 0 & \text{otherwise} \end{cases} \tag{11}$$

Note that the integral $f_{Y_1}(y_1) = \int_{-\infty}^{\infty} f_{Y_1,Y_4}(y_1,y_4)\,dy_4$ would have yielded the same result. This is a good way to check our derivations of $f_{Y_1,Y_4}(y_1,y_4)$ and $f_{Y_1,Y_2}(y_1,y_2)$.
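Since $f_Y(y) = 24$ on $0 \le y_1 \le \cdots \le y_4 \le 1$ is the joint PDF of the order statistics of four iid uniform $(0,1)$ samples, eq. (11) can be checked by simulating $Y_1$ as the minimum of four uniforms (integrating $4(1-y_1)^3$ gives $P[Y_1 \le t] = 1 - (1-t)^4$):

```python
import random

# Check of eq. (11): if Y1 is the minimum of four iid uniform (0, 1)
# samples, the CDF implied by the PDF 4*(1 - y1)**3 is 1 - (1 - t)**4.
random.seed(2)
t = 0.25                                # illustrative threshold
exact = 1 - (1 - t)**4                  # CDF implied by eq. (11)

trials = 200_000
hits = sum(min(random.random() for _ in range(4)) <= t for _ in range(trials))
print(round(exact, 4), round(hits / trials, 4))
```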

Problem 5.3.5 Solution

The value of each byte is an independent experiment with 255 possible outcomes. Each byte takes on the value $b_i$ with probability $p_i = p = 1/255$. The joint PMF of $N_0, \ldots, N_{255}$ is the multinomial PMF

$$P_{N_0,\ldots,N_{255}}(n_0,\ldots,n_{255}) = \frac{10000!}{n_0!\,n_1!\cdots n_{255}!}\,p^{n_0} p^{n_1}\cdots p^{n_{255}} \qquad n_0+\cdots+n_{255} = 10000 \tag{1}$$

$$= \frac{10000!}{n_0!\,n_1!\cdots n_{255}!}\,(1/255)^{10000} \qquad n_0+\cdots+n_{255} = 10000 \tag{2}$$

(b) From the problem statement,

$$Y = \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} = \begin{bmatrix} 1 & -2 \\ 3 & 4 \end{bmatrix} X = AX. \tag{2}$$

By Theorem 5.13, Y has covariance matrix

$$C_Y = A C_X A' = \begin{bmatrix} 1 & -2 \\ 3 & 4 \end{bmatrix}\begin{bmatrix} 4 & 3 \\ 3 & 9 \end{bmatrix}\begin{bmatrix} 1 & 3 \\ -2 & 4 \end{bmatrix} = \begin{bmatrix} 28 & -66 \\ -66 & 252 \end{bmatrix}. \tag{3}$$
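The matrix product in eq. (3) is easy to verify numerically (a plain-Python sketch, no external libraries):

```python
# Numeric check of eq. (3): C_Y = A C_X A' for the 2x2 matrices above.
def matmul(P, Q):
    # Plain nested-loop matrix product for small lists of lists.
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

A  = [[1, -2], [3, 4]]
CX = [[4, 3], [3, 9]]
At = [[1, 3], [-2, 4]]                  # transpose of A
CY = matmul(matmul(A, CX), At)
print(CY)  # [[28, -66], [-66, 252]]
```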

Problem 5.6.2 Solution

The mean value of a sum of random variables is always the sum of their individual means.

$$E[Y] = \sum_{i=1}^{n} E[X_i] = 0 \tag{1}$$

The variance of any sum of random variables can be expressed in terms of the individual variances and covariances. Since $E[Y]$ is zero, $\operatorname{Var}[Y] = E[Y^2]$. Thus,

$$\operatorname{Var}[Y] = E\Bigl[\Bigl(\sum_{i=1}^{n} X_i\Bigr)^{2}\Bigr] = E\Bigl[\sum_{i=1}^{n}\sum_{j=1}^{n} X_i X_j\Bigr] = \sum_{i=1}^{n} E[X_i^2] + \sum_{i=1}^{n}\sum_{j \ne i} E[X_i X_j] \tag{2}$$

Since $E[X_i] = 0$, $E[X_i^2] = \operatorname{Var}[X_i] = 1$ and for $i \ne j$,

$$E[X_i X_j] = \operatorname{Cov}[X_i, X_j] = \rho \tag{3}$$

Thus, $\operatorname{Var}[Y] = n + n(n-1)\rho$.
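The closed form $\operatorname{Var}[Y] = n + n(n-1)\rho$ amounts to summing every entry of the covariance matrix (1 on the diagonal, $\rho$ off it), since $\operatorname{Var}[\sum_i X_i] = \sum_i \sum_j \operatorname{Cov}[X_i, X_j]$. A sketch with illustrative values of $n$ and $\rho$:

```python
# Check of Var[Y] = n + n*(n-1)*rho by summing all entries of the
# covariance matrix. The values n = 5, rho = 0.25 are illustrative.
n, rho = 5, 0.25
C = [[1.0 if i == j else rho for j in range(n)] for i in range(n)]
var_y = sum(sum(row) for row in C)
print(var_y, n + n * (n - 1) * rho)  # both 10.0
```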

Problem 5.6.3 Solution

Since X and Y are independent and $E[Y_j] = 0$ for all components $Y_j$, we observe that $E[X_i Y_j] = E[X_i]E[Y_j] = 0$. This implies that the cross-covariance matrix is

$$E[XY'] = E[X]\,E[Y'] = 0. \tag{1}$$

Problem 5.6.4 Solution

Inspection of the vector PDF $f_X(x)$ will show that $X_1, X_2, X_3,$ and $X_4$ are iid uniform $(0,1)$ random variables. That is,

$$f_X(x) = f_{X_1}(x_1)\,f_{X_2}(x_2)\,f_{X_3}(x_3)\,f_{X_4}(x_4) \tag{1}$$

where each $X_i$ has the uniform $(0,1)$ PDF

$$f_{X_i}(x) = \begin{cases} 1 & 0 \le x \le 1 \\ 0 & \text{otherwise} \end{cases} \tag{2}$$

It follows that for each $i$, $E[X_i] = 1/2$, $E[X_i^2] = 1/3$ and $\operatorname{Var}[X_i] = 1/12$. In addition, $X_i$ and $X_j$ have correlation

$$E[X_i X_j] = E[X_i]\,E[X_j] = 1/4, \tag{3}$$

and covariance $\operatorname{Cov}[X_i, X_j] = 0$ for $i \ne j$ since independent random variables always have zero covariance.
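The moments quoted above can be confirmed by a seeded Monte Carlo run:

```python
import random

# Monte Carlo check of the uniform (0, 1) moments quoted above:
# E[X_i] = 1/2, E[X_i^2] = 1/3, and E[X_i X_j] = 1/4 for independent draws.
random.seed(3)
trials = 200_000
m1 = m2 = m12 = 0.0
for _ in range(trials):
    x, y = random.random(), random.random()
    m1 += x
    m2 += x * x
    m12 += x * y
print(round(m1 / trials, 2), round(m2 / trials, 2), round(m12 / trials, 2))
```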

The PDF of Y is

$$f_Y(y) = \frac{1}{2\pi[\det(C_Y)]^{1/2}}\,e^{-(y-\mu_Y)' C_Y^{-1}(y-\mu_Y)/2} \tag{10}$$

$$= \frac{1}{\sqrt{48\pi^2}}\,e^{-(y_1^2 + y_1 y_2 - 16 y_1 - 20 y_2 + y_2^2 + 112)/6} \tag{11}$$

Since $Y = \begin{bmatrix} X_1 & X_2 \end{bmatrix}'$, the PDF of $X_1$ and $X_2$ is simply

$$f_{X_1,X_2}(x_1,x_2) = f_{Y_1,Y_2}(x_1,x_2) = \frac{1}{\sqrt{48\pi^2}}\,e^{-(x_1^2 + x_1 x_2 - 16 x_1 - 20 x_2 + x_2^2 + 112)/6} \tag{12}$$

(c) We can observe directly from $\mu_X$ and $C_X$ that $X_1$ is a Gaussian $(4, 2)$ random variable. Thus,

$$P[X_1 > 8] = P\left[\frac{X_1 - 4}{2} > \frac{8 - 4}{2}\right] = Q(2) = 0.0228 \tag{13}$$
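The value $Q(2)$ in eq. (13) can be computed from the error function in the standard library:

```python
from math import erf, sqrt

# Check of eq. (13): Q(2) = 1 - Phi(2), with Phi written via erf.
def Q(x):
    return 0.5 * (1 - erf(x / sqrt(2)))

print(round(Q(2), 4))  # 0.0228
```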

Problem 5.7.2 Solution

We are given that X is a Gaussian random vector with

$$\mu_X = \begin{bmatrix} 4 \\ 8 \\ 6 \end{bmatrix} \qquad C_X = \begin{bmatrix} 4 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 4 \end{bmatrix}. \tag{1}$$

We are also given that $Y = AX + b$ where

$$A = \begin{bmatrix} 1 & 1/2 & 2/3 \\ 1 & -1/2 & 2/3 \end{bmatrix} \qquad b = \begin{bmatrix} -4 \\ -4 \end{bmatrix}. \tag{2}$$

Since the two rows of A are linearly independent row vectors, A has rank 2. By Theorem 5.16, Y is a Gaussian random vector. Given these facts, the various parts of this problem are just straightforward calculations using Theorem 5.16.

(a) The expected value of Y is

$$\mu_Y = A\mu_X + b = \begin{bmatrix} 1 & 1/2 & 2/3 \\ 1 & -1/2 & 2/3 \end{bmatrix}\begin{bmatrix} 4 \\ 8 \\ 6 \end{bmatrix} + \begin{bmatrix} -4 \\ -4 \end{bmatrix} = \begin{bmatrix} 8 \\ 0 \end{bmatrix} \tag{3}$$

(b) The covariance matrix of Y is

$$C_Y = A C_X A' \tag{4}$$

$$= \begin{bmatrix} 1 & 1/2 & 2/3 \\ 1 & -1/2 & 2/3 \end{bmatrix}\begin{bmatrix} 4 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 4 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1/2 & -1/2 \\ 2/3 & 2/3 \end{bmatrix} = \frac{1}{9}\begin{bmatrix} 43 & 55 \\ 55 & 103 \end{bmatrix}. \tag{5}$$

(c) Y has correlation matrix

$$R_Y = C_Y + \mu_Y\mu_Y' = \frac{1}{9}\begin{bmatrix} 43 & 55 \\ 55 & 103 \end{bmatrix} + \begin{bmatrix} 8 \\ 0 \end{bmatrix}\begin{bmatrix} 8 & 0 \end{bmatrix} = \frac{1}{9}\begin{bmatrix} 619 & 55 \\ 55 & 103 \end{bmatrix} \tag{6}$$

(d) From $\mu_Y$, we see that $E[Y_2] = 0$. From the covariance matrix $C_Y$, we learn that $Y_2$ has variance $\sigma_2^2 = C_Y(2,2) = 103/9$. Since $Y_2$ is a Gaussian random variable,

$$P[-1 \le Y_2 \le 1] = P\left[-\frac{1}{\sigma_2} \le \frac{Y_2}{\sigma_2} \le \frac{1}{\sigma_2}\right] \tag{7}$$

$$= \Phi\!\left(\frac{1}{\sigma_2}\right) - \Phi\!\left(-\frac{1}{\sigma_2}\right) \tag{8}$$

$$= 2\Phi\!\left(\frac{1}{\sigma_2}\right) - 1 \tag{9}$$

$$= 2\Phi\!\left(\frac{3}{\sqrt{103}}\right) - 1 = 0.2325 \tag{10}$$
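All of the matrix arithmetic above, and the probability in part (d), can be verified with a short plain-Python sketch:

```python
from math import erf, sqrt

# Numeric check of parts (a), (b), and (d) of Problem 5.7.2.
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def phi(x):  # standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

A  = [[1, 0.5, 2/3], [1, -0.5, 2/3]]
CX = [[4, -2, 1], [-2, 4, -2], [1, -2, 4]]
At = [[1, 1], [0.5, -0.5], [2/3, 2/3]]   # transpose of A
muX, b = [[4], [8], [6]], [[-4], [-4]]

muY = [[matmul(A, muX)[i][0] + b[i][0]] for i in range(2)]
CY = matmul(matmul(A, CX), At)           # should be (1/9)[[43, 55], [55, 103]]
prob = 2 * phi(3 / sqrt(103)) - 1
print(muY, [[round(v, 4) for v in row] for row in CY], round(prob, 4))
```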

Problem 5.7.3 Solution

This problem is just a special case of Theorem 5.16 with the matrix A replaced by the row vector $a'$ and the vector b replaced by the scalar $b = 0$. In this case, the vector Y becomes the scalar $Y$. The expected value vector is $\mu_Y = [\mu_Y]$ and the covariance "matrix" of Y is just the $1 \times 1$ matrix $[\sigma_Y^2]$. Directly from Theorem 5.16, we can conclude that Y is a length-1 Gaussian random vector, which is just a Gaussian random variable. In addition, $\mu_Y = a'\mu_X$ and

$$\operatorname{Var}[Y] = C_Y = a' C_X a. \tag{1}$$
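A tiny numerical illustration of $\operatorname{Var}[Y] = a' C_X a$, with hypothetical $a$ and $C_X$ (not from the text):

```python
# Illustration of Var[Y] = a' C_X a for scalar Y = a' X.
# The vector a and matrix C_X below are hypothetical example values.
a = [1, 2]
CX = [[1.0, 0.5], [0.5, 1.0]]
var_y = sum(a[i] * CX[i][j] * a[j] for i in range(2) for j in range(2))
print(var_y)  # 7.0
```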

Problem 5.7.4 Solution

From Definition 5.17, the $n = 2$ dimensional Gaussian vector X has PDF

$$f_X(x) = \frac{1}{2\pi[\det(C_X)]^{1/2}} \exp\left(-\frac{1}{2}(x-\mu_X)' C_X^{-1}(x-\mu_X)\right) \tag{1}$$

where $C_X$ has determinant

$$\det(C_X) = \sigma_1^2\sigma_2^2 - \rho^2\sigma_1^2\sigma_2^2 = \sigma_1^2\sigma_2^2(1-\rho^2). \tag{2}$$

Thus,

$$\frac{1}{2\pi[\det(C_X)]^{1/2}} = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}. \tag{3}$$

Using the $2 \times 2$ matrix inverse formula

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}, \tag{4}$$
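The inverse formula in eq. (4) can be spot-checked on an arbitrary invertible matrix: multiplying $M$ by the formula's output should return the identity. The entries below are illustrative:

```python
# Quick check of the 2x2 inverse formula in eq. (4) on example entries.
a, b, c, d = 4.0, 2.0, 2.0, 3.0          # illustrative; ad - bc = 8
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]
M = [[a, b], [c, d]]
prod = [[sum(M[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # [[1.0, 0.0], [0.0, 1.0]]
```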