The Uniform Distribution and Its Relation to the Standard Normal Distribution

As we have seen before in Chapter 1, the uniform [0,1] distribution has density

f(x) = 1 for 0 < x < 1, f(x) = 0 elsewhere.

More generally, the uniform [a, b] distribution (denoted by U[a, b]) has density

f(x) = 1/(b − a) for a ≤ x ≤ b, f(x) = 0 elsewhere.

A pair of independent standard normal random variables can be generated from a pair of independent U[0, 1] random variables by the transformation

X1 = √(−2·ln(U1))·cos(2π·U2),   X2 = √(−2·ln(U1))·sin(2π·U2),   (4.43)

where U1 and U2 are independent U[0, 1] distributed. Then X1 and X2 are independent, standard normally distributed. This method is called the Box–Muller algorithm.
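As an illustration (not part of the original text), the following Python sketch implements the Box–Muller transformation (4.43); it assumes numpy is available, and the function name box_muller is just a label chosen here.

```python
import numpy as np

def box_muller(n, rng=None):
    """Generate n pairs of independent standard normal draws from
    independent U[0, 1] draws via the Box-Muller transformation (4.43)."""
    rng = np.random.default_rng() if rng is None else rng
    u1 = 1.0 - rng.uniform(size=n)        # U1 in (0, 1]; avoids log(0)
    u2 = rng.uniform(size=n)              # U2 in [0, 1)
    r = np.sqrt(-2.0 * np.log(u1))        # common radius sqrt(-2 ln U1)
    x1 = r * np.cos(2.0 * np.pi * u2)     # X1 in (4.43)
    x2 = r * np.sin(2.0 * np.pi * u2)     # X2 in (4.43)
    return x1, x2

x1, x2 = box_muller(100_000)
# Both margins should have mean close to 0, variance close to 1, and
# correlation close to 0, consistent with independent N(0, 1) draws.
print(x1.mean(), x1.var(), x2.mean(), x2.var(), np.corrcoef(x1, x2)[0, 1])
```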

2.7. The Gamma Distribution

The χ²_n distribution is a special case of a Gamma distribution. The density of the Gamma distribution is

g(x) = x^(α−1)·exp(−x/β) / (Γ(α)·β^α),   x > 0, α > 0, β > 0.

This distribution is denoted by Γ(α, β). Thus, the χ²_n distribution is a Gamma distribution with α = n/2 and β = 2.

The Gamma distribution has moment-generating function

m_Γ(α,β)(t) = [1 − β·t]^(−α),   t < 1/β,   (4.44)

and characteristic function φ_Γ(α,β)(t) = [1 − β·i·t]^(−α). Therefore, the Γ(α, β) distribution has expectation α·β and variance α·β².

The Γ(α, β) distribution with α = 1 is called the exponential distribution.
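As a quick illustration (not part of the original text), the Python sketch below checks the expectation α·β and variance α·β² by simulation; it assumes numpy, whose gamma generator is parameterized by shape α and scale β.

```python
import numpy as np

alpha, beta = 2.5, 3.0                        # shape α and scale β of Γ(α, β)
rng = np.random.default_rng(0)
x = rng.gamma(shape=alpha, scale=beta, size=1_000_000)

print(x.mean(), alpha * beta)                 # sample mean vs. expectation αβ
print(x.var(), alpha * beta ** 2)             # sample variance vs. αβ²

# Special cases: α = n/2, β = 2 gives the χ²_n distribution,
# and α = 1 gives the exponential distribution with expectation β.
```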

2.8. Exercises

1. Derive (4.2).

2. Derive (4.4) and (4.5) directly from (4.3).

3. Derive (4.4) and (4.5) from the moment-generating function (4.6).

4. Derive (4.8), (4.9), and (4.10).

5. If X is discrete and Y = g(X), do we need to require that g be Borel measurable?

6. Prove the last equality in (4.14).

7. Prove Theorem 4.1, using characteristic functions.

8. Prove that (4.25) holds for all four cases in (4.24).

9. Let X be a random variable with continuous distribution function F(x). Derive the distribution of Y = F(X).

10. The standard normal distribution has density f(x) = exp(−x²/2)/√(2π), x ∈ ℝ. Let X1 and X2 be independent random drawings from the standard normal distribution involved, and let Y1 = X1 + X2, Y2 = X1 − X2. Derive the joint density h(y1, y2) of Y1 and Y2, and show that Y1 and Y2 are independent. Hint: Use Theorem 4.3.

11. The exponential distribution has density f(x) = θ⁻¹·exp(−x/θ) if x > 0 and f(x) = 0 if x < 0, where θ > 0 is a constant. Let X1 and X2 be independent random drawings from the exponential distribution involved and let Y1 = X1 + X2, Y2 = X1 − X2. Derive the joint density h(y1, y2) of Y1 and Y2. Hints: Determine first the support {(y1, y2)ᵀ ∈ ℝ² : h(y1, y2) > 0} of h(y1, y2) and then use Theorem 4.3.

12. Let X ~ N(0, 1). Derive E[X^(2k)] for k = 2, 3, 4, using the moment-generating function.

13. Let X1, X2, …, Xn be independent, standard normally distributed. Show that (1/√n)·Σ_{j=1}^{n} Xj is standard normally distributed.

14. Prove (4.31).

15. Show that for t < 1/2, (4.33) is the moment-generating function of (4.34).

16. Explain why the moment-generating function of the t_n distribution does not exist.

17. Prove (4.36).

18. Prove (4.37).

19. Let X1, X2, …, Xn be independent, standard Cauchy distributed. Show that (1/n)·Σ_{j=1}^{n} Xj is standard Cauchy distributed.

20. The class of standard stable distributions consists of distributions with characteristic functions of the type φ(t) = exp(−|t|^α / α), where α ∈ (0, 2]. Note that the standard normal distribution is stable with α = 2, and the standard Cauchy distribution is stable with α = 1. Show that for a random sample X1, X2, …, Xn from a standard stable distribution with parameter α, the random variable Yn = n^(−1/α)·Σ_{j=1}^{n} Xj has the same standard stable distribution (this is the reason for calling these distributions stable).

21. Let X and Y be independent, standard normally distributed. Derive the distribution of X/Y.

22. Derive the characteristic function of the distribution with density exp(−|x|)/2, −∞ < x < ∞.

23. Explain why the moment-generating function of the F_{m,n} distribution does not exist.

24. Prove (4.44).

25. Show that if U1 and U2 are independent U[0, 1] distributed, then X1 and X2 in (4.43) are independent, standard normally distributed.

26. If X and Y are independent Γ(1, 1) distributed, what is the distribution of X − Y?

APPENDICES

 

4.A. Tedious Derivations

Derivation of (4.38):

E[X²] = [n·Γ((n + 1)/2) / (√(nπ)·Γ(n/2))] ∫_{−∞}^{∞} (x²/n) / (1 + x²/n)^((n+1)/2) dx

= [n·Γ((n + 1)/2) / (√(nπ)·Γ(n/2))] ∫_{−∞}^{∞} (1 + x²/n) / (1 + x²/n)^((n+1)/2) dx
  − [n·Γ((n + 1)/2) / (√(nπ)·Γ(n/2))] ∫_{−∞}^{∞} 1 / (1 + x²/n)^((n+1)/2) dx

= [n·Γ((n − 1)/2 + 1)·Γ(n/2 − 1)] / [Γ(n/2)·Γ((n − 1)/2)] − n = n/(n − 2).

In this derivation I have used (4.36) and the fact that

1 = ∫_{−∞}^{∞} h_{n−2}(x) dx

= [Γ((n − 1)/2) / (√((n − 2)π)·Γ((n − 2)/2))] ∫_{−∞}^{∞} 1 / (1 + x²/(n − 2))^((n−1)/2) dx

= [Γ((n − 1)/2) / (√π·Γ((n − 2)/2))] ∫_{−∞}^{∞} 1 / (1 + x²)^((n−1)/2) dx.
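As an informal numerical check of (4.38) (not part of the original derivation), the sketch below compares E[X²] of the t_n distribution with n/(n − 2) using scipy.stats.t, which is assumed to be available.

```python
from scipy import stats

for n in (3, 5, 10, 30):
    # Second (non-central) moment E[X^2] of the Student t distribution
    # with n degrees of freedom; it should equal n / (n - 2).
    print(n, stats.t(df=n).moment(2), n / (n - 2))
```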

Derivation of (4.40): For m > 0, we have

(1/(2π)) ∫_{−m}^{m} exp(−i·t·x)·exp(−|t|) dt

= (1/(2π)) ∫_{0}^{m} exp(−i·t·x)·exp(−t) dt + (1/(2π)) ∫_{0}^{m} exp(i·t·x)·exp(−t) dt

= (1/(2π)) ∫_{0}^{m} exp[−(1 + i·x)·t] dt + (1/(2π)) ∫_{0}^{m} exp[−(1 − i·x)·t] dt

= (1/(2π))·[1 − exp(−(1 + i·x)·m)] / (1 + i·x) + (1/(2π))·[1 − exp(−(1 − i·x)·m)] / (1 − i·x)

= 1/(π·(1 + x²)) − [exp(−m)/(π·(1 + x²))]·[cos(m·x) − x·sin(m·x)].

Letting m → ∞, we find that (4.40) follows.

Derivation of (4.41):

h_{m,n}(x) = dH_{m,n}(x)/dx

= ∫_{0}^{∞} [ (m·x·y/n)^(m/2−1)·exp(−(m·x·y/n)/2) / (Γ(m/2)·2^(m/2)) ] · (m·y/n) · [ y^(n/2−1)·exp(−y/2) / (Γ(n/2)·2^(n/2)) ] dy

= [m^(m/2)·x^(m/2−1) / (n^(m/2)·Γ(m/2)·Γ(n/2)·2^(m/2+n/2))] ∫_{0}^{∞} y^(m/2+n/2−1)·exp(−[1 + m·x/n]·y/2) dy

= [m^(m/2)·x^(m/2−1) / (n^(m/2)·Γ(m/2)·Γ(n/2)·[1 + m·x/n]^(m/2+n/2))] ∫_{0}^{∞} z^(m/2+n/2−1)·exp(−z) dz

= [m^(m/2)·Γ(m/2 + n/2)·x^(m/2−1)] / [n^(m/2)·Γ(m/2)·Γ(n/2)·[1 + m·x/n]^(m/2+n/2)],   x > 0.


Derivation of (4.42): It follows from (4.41) that, for k < n/2,

∫_{0}^{∞} x^k·h_{m,n}(x) dx

= [m^(m/2)·Γ(m/2 + n/2) / (n^(m/2)·Γ(m/2)·Γ(n/2))] ∫_{0}^{∞} x^(m/2+k−1) / (1 + m·x/n)^(m/2+n/2) dx

= (n/m)^k·[Γ(m/2 + n/2) / (Γ(m/2)·Γ(n/2))] ∫_{0}^{∞} x^((m+2k)/2−1) / (1 + x)^((m+2k)/2+(n−2k)/2) dx

= (n/m)^k·Γ(m/2 + k)·Γ(n/2 − k) / (Γ(m/2)·Γ(n/2))

= (n/m)^k·∏_{j=0}^{k−1}(m/2 + j) / ∏_{j=1}^{k}(n/2 − j),

where the last equality follows from the fact that, by (4.36), Γ(α + k) = Γ(α)·∏_{j=0}^{k−1}(α + j) for α > 0. Thus,

μ_{m,n} = ∫_{0}^{∞} x·h_{m,n}(x) dx = n/(n − 2) if n ≥ 3,   μ_{m,n} = ∞ if n ≤ 2,   (4.46)

∫_{0}^{∞} x²·h_{m,n}(x) dx = n²·(m + 2) / (m·(n − 2)·(n − 4)) if n ≥ 5,   = ∞ if n ≤ 4.   (4.47)

The results in (4.42) follow now from (4.46) and (4.47).
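As an informal numerical check of (4.46) and (4.47) (not part of the original text), the sketch below compares the first two moments of the F_{m,n} distribution with the formulas above; it assumes scipy.stats.f.

```python
from scipy import stats

m, n = 4, 9                                    # degrees of freedom, with n >= 5
F = stats.f(dfn=m, dfd=n)

print(F.moment(1), n / (n - 2))                # first moment, see (4.46)
print(F.moment(2),                             # second moment, see (4.47)
      n**2 * (m + 2) / (m * (n - 2) * (n - 4)))
```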

4.B. Proof of Theorem 4.4

For notational convenience I will prove Theorem 4.4 for the case k = 2 only. First note that the distribution of Y is absolutely continuous because, for arbitrary Borel sets B in ℝ²,

P[Y ∈ B] = P[G(X) ∈ B] = P[X ∈ G⁻¹(B)] = ∫_{G⁻¹(B)} f(x) dx.

If B has Lebesgue measure zero, then, because G is a one-to-one mapping, the Borel set A = G⁻¹(B) has Lebesgue measure zero. Therefore, Y has a density h(y), say, and thus for arbitrary Borel sets B in ℝ²,

P[Y ∈ B] = ∫_B h(y) dy.

Choose a fixed y0 = (y0,1, y0,2)ᵀ in the support G(ℝ²) of Y such that x0 = G⁻¹(y0) is a continuity point of the density f of X and y0 is a continuity point of the density h of Y. Let Y(δ1, δ2) = [y0,1, y0,1 + δ1] × [y0,2, y0,2 + δ2] for some positive numbers δ1 and δ2. Then, with λ the Lebesgue measure,

P[Y ∈ Y(δ1, δ2)] ≤ ( sup_{y∈Y(δ1,δ2)} f(G⁻¹(y)) )·λ(G⁻¹(Y(δ1, δ2)))   (4.48)

and similarly,

P[Y ∈ Y(δ1, δ2)] ≥ ( inf_{y∈Y(δ1,δ2)} f(G⁻¹(y)) )·λ(G⁻¹(Y(δ1, δ2))).   (4.49)

It follows now from (4.48) and (4.49) that

h(y0) = lim_{δ1↓0} lim_{δ2↓0} P[Y ∈ Y(δ1, δ2)] / (δ1·δ2) = f(G⁻¹(y0))·lim_{δ1↓0} lim_{δ2↓0} λ(G⁻¹(Y(δ1, δ2))) / (δ1·δ2).   (4.50)

It remains to show that the latter limit is equal to |det[J(y0)]|.

If we let G⁻¹(y) = (g*1(y), g*2(y))ᵀ, it follows from the mean value theorem that for each element g*j(y) there exists a λj ∈ [0, 1] depending on y and y0 such that g*j(y) = g*j(y0) + Jj(y0 + λj(y − y0))(y − y0), where Jj(y) is the jth row of J(y). Thus, writing

D0(y) = ( J1(y0 + λ1(y − y0)) − J1(y0) )
        ( J2(y0 + λ2(y − y0)) − J2(y0) )

= J0(y) − J(y0),   (4.51)

for instance, we have G⁻¹(y) = G⁻¹(y0) + J(y0)(y − y0) + D0(y)(y − y0). Now, put A = J(y0)⁻¹ and b = y0 − J(y0)⁻¹G⁻¹(y0). Then,

G⁻¹(y) = A⁻¹(y − b) + D0(y)(y − y0);   (4.52)

hence,

G⁻¹(Y(δ1, δ2)) = {x ∈ ℝ² : x = A⁻¹(y − b) + D0(y)(y − y0), y ∈ Y(δ1, δ2)}.   (4.53)

The matrix A maps the set (4.53) onto

A[G⁻¹(Y(δ1, δ2))] = {x ∈ ℝ² : x = y − b + A·D0(y)(y − y0), y ∈ Y(δ1, δ2)},   (4.54)

where, by definition, for arbitrary Borel sets B conformable with a matrix A, A[B] = {x : x = Ay, y ∈ B}. Because the Lebesgue measure is invariant under location shifts (i.e., the vector b in (4.54)), it follows that

λ(A[G⁻¹(Y(δ1, δ2))]) = λ({x ∈ ℝ² : x = y + A·D0(y)(y − y0), y ∈ Y(δ1, δ2)}).   (4.55)

Observe from (4.51) that

A·D0(y) = J(y0)⁻¹·D0(y) = J(y0)⁻¹·J0(y) − I2   (4.56)

and

lim_{y→y0} J(y0)⁻¹·J0(y) = I2.   (4.57)

Then

λ(A[G⁻¹(Y(δ1, δ2))]) = λ({x ∈ ℝ² : x = y0 + J(y0)⁻¹·J0(y)·(y − y0), y ∈ Y(δ1, δ2)}).   (4.58)

It can be shown, using (4.57), that

lim_{δ1↓0} lim_{δ2↓0} λ(A[G⁻¹(Y(δ1, δ2))]) / λ(Y(δ1, δ2)) = 1.   (4.59)

Recall from Appendix I that the matrix A can be written as A = QDU, where Q is an orthogonal matrix, D is a diagonal matrix, and U is an upper-triangular matrix with diagonal elements all equal to 1. Let B = (0, 1) × (0, 1). Then it is not hard to verify in the 2 × 2 case that U maps B onto a parallelogram U[B] with the same area as B; hence, λ(U[B]) = λ(B) = 1. Consequently, the Lebesgue measure of the rectangle D[B] is the same as the Lebesgue measure of the set D[U[B]]. Moreover, an orthogonal matrix rotates a set of points around the origin, leaving all the angles and distances the same. Therefore, the set A[B] has the same Lebesgue measure as the rectangle D[B]: λ(A[B]) = λ(D[B]) = |det[D]| = |det[A]|. Along the same lines, the following more general result can be shown:

Lemma 4.B.1: For a k × k matrix A and a Borel set B in ℝᵏ, λ(A[B]) = |det[A]|·λ(B), where λ is the Lebesgue measure on the Borel sets in ℝᵏ.
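As an informal Monte Carlo illustration of Lemma 4.B.1 (not part of the original text), the Python sketch below estimates λ(A[B]) for the unit square B and compares it with |det[A]|; numpy is assumed, and the particular matrix A is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 1.0],
              [0.5, 1.5]])                 # an arbitrary nonsingular 2 x 2 matrix
A_inv = np.linalg.inv(A)

# B is the unit square (0,1) x (0,1); A[B] = {Ay : y in B}.
# Sample points uniformly in a box containing A[B] and count those with A^{-1}x in B.
corners = A @ np.array([[0.0, 1.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 1.0]])
lo, hi = corners.min(axis=1), corners.max(axis=1)
x = rng.uniform(lo, hi, size=(1_000_000, 2))
inside = np.all((x @ A_inv.T > 0.0) & (x @ A_inv.T < 1.0), axis=1)

print(inside.mean() * np.prod(hi - lo))    # Monte Carlo estimate of lambda(A[B])
print(abs(np.linalg.det(A)))               # |det[A]| * lambda(B), with lambda(B) = 1
```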

Thus, (4.59) now becomes

lim_{δ1↓0} lim_{δ2↓0} λ(A[G⁻¹(Y(δ1, δ2))]) / λ(Y(δ1, δ2)) = |det[A]|·lim_{δ1↓0} lim_{δ2↓0} λ(G⁻¹(Y(δ1, δ2))) / (δ1·δ2) = 1;

hence,

lim_{δ1↓0} lim_{δ2↓0} λ(G⁻¹(Y(δ1, δ2))) / (δ1·δ2) = 1/|det[A]| = |det[A⁻¹]| = |det[J(y0)]|.   (4.60)

Theorem 4.4 follows now from (4.50) and (4.60).
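To make the transformation formula h(y) = f(G⁻¹(y))·|det[J(y)]| of Theorem 4.4 concrete (an added illustration, not part of the original proof), the sketch below takes X bivariate standard normal and G(x1, x2) = (exp(x1), x2), for which G⁻¹(y) = (ln(y1), y2) and |det[J(y)]| = 1/y1, and compares a Monte Carlo probability with a numerical integral of h; numpy is assumed.

```python
import numpy as np

def f(x1, x2):
    # density of X: two independent standard normal components
    return np.exp(-0.5 * (x1**2 + x2**2)) / (2.0 * np.pi)

def h(y1, y2):
    # density of Y = G(X) via Theorem 4.4: h(y) = f(G^{-1}(y)) * |det J(y)| = f(ln y1, y2) / y1
    return f(np.log(y1), y2) / y1

# Monte Carlo estimate of P[Y in [1, 2] x [0, 1]]
rng = np.random.default_rng(2)
x = rng.standard_normal((1_000_000, 2))
y1, y2 = np.exp(x[:, 0]), x[:, 1]
p_mc = np.mean((y1 >= 1) & (y1 <= 2) & (y2 >= 0) & (y2 <= 1))

# Midpoint-rule integration of h over the same rectangle
step = 1.0 / 400
g1 = np.arange(1.0 + step / 2, 2.0, step)
g2 = np.arange(0.0 + step / 2, 1.0, step)
Y1, Y2 = np.meshgrid(g1, g2)
p_int = h(Y1, Y2).sum() * step * step

print(p_mc, p_int)   # the two probabilities should agree closely
```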
