Expectations of Products of Independent Random Variables

Let X and Y be independent random variables, and let f and g be Borel-measurable functions on R. I will now show that

$$E[f(X)g(Y)] = (E[f(X)])(E[g(Y)]). \qquad (2.30)$$

In general, (2.30) does not hold, although there are cases in which it holds for dependent X and Y. As an example of a case in which (2.30) does not hold, let X = U0·U1 and Y = U0·U2, where U0, U1, and U2 are independent and uniformly [0, 1] distributed, and let f(x) = x, g(x) = x. The joint density of U0, U1, and U2 is

$$h(u_0, u_1, u_2) = \begin{cases} 1 & \text{if } (u_0, u_1, u_2)^{\mathsf{T}} \in [0, 1] \times [0, 1] \times [0, 1], \\ 0 & \text{elsewhere;} \end{cases}$$

hence,

$$E[f(X)g(Y)] = E[X \cdot Y] = E\left[U_0^2 U_1 U_2\right] = \int_0^1 \int_0^1 \int_0^1 u_0^2 u_1 u_2 \, du_0 \, du_1 \, du_2$$

$$= \left(\int_0^1 u_0^2 \, du_0\right) \left(\int_0^1 u_1 \, du_1\right) \left(\int_0^1 u_2 \, du_2\right) = (1/3) \times (1/2) \times (1/2) = 1/12,$$

whereas

$$E[f(X)] = E[X] = E[U_0 U_1] = \int_0^1 \int_0^1 u_0 u_1 \, du_0 \, du_1 = (1/2) \times (1/2) = 1/4,$$

and similarly, E[g(Y)] = E[Y] = 1/4. Hence, E[X·Y] = 1/12 ≠ 1/16 = (E[X])(E[Y]), so (2.30) indeed fails for this dependent pair.
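As a quick numerical sanity check of this counterexample (my own addition, not part of the original argument), the following Python sketch estimates both sides of (2.30) by simulation; NumPy and all variable names are illustrative choices.

```python
import numpy as np

# Monte Carlo check of the counterexample: X = U0*U1, Y = U0*U2 with
# U0, U1, U2 independent and uniform on [0, 1]. Theory: E[X*Y] = 1/12,
# whereas E[X]*E[Y] = (1/4)*(1/4) = 1/16, so (2.30) fails here.
rng = np.random.default_rng(0)
n = 1_000_000
u0, u1, u2 = rng.uniform(size=(3, n))
x, y = u0 * u1, u0 * u2

print("E[X*Y]     ~", (x * y).mean())       # ~ 0.0833 = 1/12
print("E[X]*E[Y]  ~", x.mean() * y.mean())  # ~ 0.0625 = 1/16
```

With a million draws the two estimates settle near 1/12 ≈ 0.083 and 1/16 = 0.0625, respectively, confirming the gap.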

As an example of dependent random variables X and Y for which (2.30) holds, now let X = U0(U1 - 0.5) and Y = U0(U2 - 0.5), where U0, U1, and U2 are the same as before, and again f(x) = x, g(x) = x. Then it is easy to show that E[X·Y] = E[X] = E[Y] = 0; for instance, E[X·Y] = E[U0²]E[U1 - 0.5]E[U2 - 0.5] = (1/3) × 0 × 0 = 0, by the independence of U0, U1, and U2.

To prove (2.30) for independent random variables X and Y, let f and g be simple functions, $f(x) = \sum_{i=1}^{m} \alpha_i I(x \in A_i)$, $g(x) = \sum_{j=1}^{n} \beta_j I(x \in B_j)$, where the $A_i$'s are disjoint Borel sets and the $B_j$'s are disjoint Borel sets. Then

$$E[f(X)g(Y)] = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i \beta_j P(\{\omega \in \Omega : X(\omega) \in A_i \text{ and } Y(\omega) \in B_j\})$$

$$= \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_i \beta_j P(\{\omega \in \Omega : X(\omega) \in A_i\}) P(\{\omega \in \Omega : Y(\omega) \in B_j\})$$

$$= \left(\sum_{i=1}^{m} \alpha_i P(\{\omega \in \Omega : X(\omega) \in A_i\})\right) \times \left(\sum_{j=1}^{n} \beta_j P(\{\omega \in \Omega : Y(\omega) \in B_j\})\right)$$

$$= (E[f(X)])(E[g(Y)])$$

because, by the independence of X and Y, P(X ∈ A_i and Y ∈ B_j) = P(X ∈ A_i)P(Y ∈ B_j). From this result the next theorem follows more generally:

 

Theorem 2.20: Let X and Y be random vectors in R^p and R^q, respectively. Then X and Y are independent if and only if E[f(X)g(Y)] = (E[f(X)])(E[g(Y)]) for all Borel-measurable functions f and g on R^p and R^q, respectively, for which the expectations involved are defined.
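As an informal illustration of the "if" direction of this identity (a sketch under my own choices of distributions and of f and g, not from the original text), simulation shows the two sides agreeing for independent X and Y:

```python
import numpy as np

# For independent X ~ N(0,1) and Y ~ Uniform[0,1], both sides of
# E[f(X)g(Y)] = E[f(X)]E[g(Y)] should agree up to simulation noise,
# whatever integrable Borel-measurable f and g we pick.
rng = np.random.default_rng(1)
n = 1_000_000
x = rng.standard_normal(n)
y = rng.uniform(size=n)

f = lambda t: np.cos(t)  # an arbitrary Borel-measurable f
g = lambda t: t**2       # an arbitrary Borel-measurable g

print("E[f(X)g(Y)]    ~", (f(x) * g(y)).mean())
print("E[f(X)]E[g(Y)] ~", f(x).mean() * g(y).mean())
```

Both estimates should come out near E[cos(X)]·E[Y²] = e^{-1/2} × (1/3) ≈ 0.202.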

This theorem implies that independent random variables are uncorrelated. The converse, however, is in general not true. A counterexample is the case I have considered before, namely, X = U0(U1 - 0.5) and Y = U0(U2 - 0.5), where U0, U1, and U2 are independent and uniformly [0, 1] distributed. In this case, E[X·Y] = E[X] = E[Y] = 0; hence, cov(X, Y) = 0, but X and Y are dependent owing to the common factor U0. The latter can be shown formally in different ways, but the easiest way is to verify that, for example, E[X²·Y²] ≠ (E[X²])(E[Y²]), and thus the dependence of X and Y follows from Theorem 2.20. Indeed, E[X²·Y²] = E[U0⁴]E[(U1 - 0.5)²]E[(U2 - 0.5)²] = (1/5) × (1/12) × (1/12) = 1/720, whereas (E[X²])(E[Y²]) = ((1/3) × (1/12))² = 1/1296.
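A final simulation sketch (again my own illustration, with arbitrary names) confirms this uncorrelated-but-dependent behavior: cov(X, Y) estimates to roughly zero, while E[X²Y²] and (E[X²])(E[Y²]) estimate the distinct values 1/720 and 1/1296.

```python
import numpy as np

# Uncorrelated but dependent: X = U0*(U1 - 0.5), Y = U0*(U2 - 0.5).
# Theory: cov(X, Y) = 0, yet E[X^2 * Y^2] = 1/720 differs from
# E[X^2] * E[Y^2] = 1/1296, so X and Y are dependent by Theorem 2.20.
rng = np.random.default_rng(2)
n = 1_000_000
u0, u1, u2 = rng.uniform(size=(3, n))
x, y = u0 * (u1 - 0.5), u0 * (u2 - 0.5)

print("cov(X, Y)     ~", np.cov(x, y)[0, 1])             # ~ 0
print("E[X^2 Y^2]    ~", (x**2 * y**2).mean())           # ~ 1/720  ~ 0.00139
print("E[X^2]E[Y^2]  ~", (x**2).mean() * (y**2).mean())  # ~ 1/1296 ~ 0.00077
```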
