We shall give further examples of applications of the convergence theorems of Sections 6.1 and 6.2. There will be more applications in Chapter 7 as well.

EXAMPLE 6.4.1 Let $\{X_i\}$ be independent with $EX_i = \mu$ and $VX_i = \sigma_i^2$. Under what conditions on $\sigma_i^2$ does $\bar{X} = \sum_{i=1}^{n} X_i / n$ converge to $\mu$ in probability?

We can answer this question by using either Theorem 6.1.1 (Chebyshev) or Theorem 6.2.1 (Khinchine). In the first case, note that $E(\bar{X} - \mu)^2 = V\bar{X} = n^{-2}\sum_{i=1}^{n} \sigma_i^2$. The required condition, therefore, is that this last quantity converge to 0 as $n$ goes to infinity. In the second case, we should assume that $\{X_i\}$ are identically distributed in addition to being independent.
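As a quick numerical illustration of the Chebyshev condition (my own sketch, not from the text): when $\sigma_i^2 = \sigma^2$ is constant, $n^{-2}\sum_{i=1}^{n}\sigma_i^2 = \sigma^2/n \to 0$, so $\bar{X}$ should concentrate around $\mu$ as $n$ grows.

```python
import random

# Sketch: independent N(mu, 1) draws, so sigma_i^2 = 1 for all i and
# n^-2 * sum(sigma_i^2) = 1/n -> 0, i.e. Chebyshev's condition holds.
random.seed(0)
mu = 2.0

def xbar(n):
    # Sample mean of n independent N(mu, 1) draws.
    return sum(random.gauss(mu, 1.0) for _ in range(n)) / n

# The deviation |xbar - mu| should shrink as n grows (in probability).
for n in (10, 1000, 100000):
    print(n, abs(xbar(n) - mu))
```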

EXAMPLE 6.4.2 In Example 6.4.1, assume further that $E|X_i - \mu|^3 = m_3$. Under what conditions on $\sigma_i^2$ does $(\bar{X} - \mu)/\sqrt{V\bar{X}}$ converge to $N(0, 1)$?

The condition of Theorem 6.2.3 (Liapounov) in this case becomes

$$\lim_{n \to \infty} \frac{(n m_3)^{1/3}}{\left(\sum_{i=1}^{n} \sigma_i^2\right)^{1/2}} = 0.$$

EXAMPLE 6.4.3 Let $\{X_i\}$ be i.i.d. with a finite mean $\mu$ and a finite variance $\sigma^2$. Prove that the sample variance, defined as $S_X^2 = n^{-1}\sum_{i=1}^{n} X_i^2 - \bar{X}^2$, converges to $\sigma^2$ in probability.

By Khinchine's LLN (Theorem 6.2.1) we have $\mathrm{plim}_{n\to\infty}\, n^{-1}\sum_{i=1}^{n} X_i^2 = EX^2$ and $\mathrm{plim}_{n\to\infty}\, \bar{X} = \mu$. Because $S_X^2$ is clearly a continuous function of $n^{-1}\sum_{i=1}^{n} X_i^2$ and $\bar{X}$, the desired result follows from Theorem 6.1.3.
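A hedged numerical check of this result (my own illustration, not from the text), drawing i.i.d. normals and watching $S_X^2$ approach $\sigma^2$:

```python
import random

# Sketch: the sample variance S_X^2 = n^-1 * sum(X_i^2) - xbar^2 for
# i.i.d. N(mu, sigma^2) draws should converge in probability to sigma^2.
random.seed(1)
mu, sigma = 3.0, 2.0

def sample_variance(n):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    return sum(x * x for x in xs) / n - xbar ** 2

for n in (100, 10000, 1000000):
    print(n, sample_variance(n))
```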

EXAMPLE 6.4.4 Let $\{X_i\}$ be i.i.d. with $EX_i = \mu_x \neq 0$ and $VX_i = \sigma_x^2$, and let $\{Y_i\}$ be i.i.d. with $EY_i = \mu_y$ and $VY_i = \sigma_y^2$. Assume that $\{X_i\}$ and $\{Y_i\}$ are independent of each other. Obtain the asymptotic distribution of $\bar{Y}/\bar{X}$.

By Theorem 6.2.1 (Khinchine), $\bar{X} \xrightarrow{P} \mu_x$ and $\bar{Y} \xrightarrow{P} \mu_y$. Therefore, by Theorem 6.1.3, $\bar{Y}/\bar{X} \xrightarrow{P} \mu_y/\mu_x$. The next step is to find an appropriate normalization of $(\bar{Y}/\bar{X} - \mu_y/\mu_x)$ to make it converge to a proper random variable. For this purpose note the identity

$$(6.4.1)\qquad \frac{\bar{Y}}{\bar{X}} - \frac{\mu_y}{\mu_x} = \frac{\mu_x(\bar{Y} - \mu_y) - \mu_y(\bar{X} - \mu_x)}{\mu_x \bar{X}}$$

Then we can readily see that the numerator will converge to a normal variable with an appropriate normalization and the denominator will converge to $\mu_x^2$ in probability, so that we can use (iii) of Theorem 6.1.4 (Slutsky). Define $W_i = \mu_x Y_i - \mu_y X_i$. Then $\{W_i\}$ satisfies the conditions of Theorem 6.2.2 (Lindeberg-Levy). Therefore

$$(6.4.2)\qquad Z_n = \frac{1}{\sqrt{n}\,\sigma_W} \sum_{i=1}^{n} (W_i - EW_i) \to N(0, 1),$$

where $EW_i = \mu_x\mu_y - \mu_y\mu_x = 0$ and $\sigma_W^2 = \mu_x^2\sigma_y^2 + \mu_y^2\sigma_x^2$. Using (iii) of Theorem 6.1.4 (Slutsky), we obtain from (6.4.1) and (6.4.2)

$$(6.4.3)\qquad \sqrt{n}\left(\frac{\bar{Y}}{\bar{X}} - \frac{\mu_y}{\mu_x}\right) \to N\!\left(0, \frac{\sigma_W^2}{\mu_x^4}\right).$$
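The limiting variance $\sigma_W^2/\mu_x^4$ can be checked by Monte Carlo (my own sketch, with normal draws and parameter values chosen only for illustration):

```python
import math
import random

# Sketch: with X_i ~ N(mu_x, sigma_x^2) and Y_i ~ N(mu_y, sigma_y^2)
# independent, sqrt(n) * (ybar/xbar - mu_y/mu_x) should be approximately
# N(0, sigma_w2 / mu_x^4), where sigma_w2 = mu_x^2*sigma_y^2 + mu_y^2*sigma_x^2.
random.seed(2)
mu_x, sigma_x = 2.0, 1.0
mu_y, sigma_y = 1.0, 1.0
n, reps = 500, 2000

sigma_w2 = mu_x**2 * sigma_y**2 + mu_y**2 * sigma_x**2
draws = []
for _ in range(reps):
    xbar = sum(random.gauss(mu_x, sigma_x) for _ in range(n)) / n
    ybar = sum(random.gauss(mu_y, sigma_y) for _ in range(n)) / n
    draws.append(math.sqrt(n) * (ybar / xbar - mu_y / mu_x))

# Compare the Monte Carlo variance with the theoretical sigma_w2 / mu_x^4.
mc_var = sum(d * d for d in draws) / reps - (sum(draws) / reps) ** 2
print(mc_var, sigma_w2 / mu_x**4)
```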

machines? Solve this exercise both by using the binomial distribution and by using the normal approximation.

5. (Section 6.3)

There is a coin which produces heads with an unknown probability p. How many times should we throw this coin if the proportion of heads is to lie within 0.05 of p with probability at least 0.9?
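A hedged sketch of the normal-approximation half of this exercise (my own calculation, not the book's worked solution): $\hat{p}$ is approximately $N(p, p(1-p)/n)$, the worst case over $p$ is $p = 1/2$, and we need $z_{0.95}\sqrt{p(1-p)/n} \le 0.05$ with $z_{0.95} \approx 1.645$.

```python
import math

# Sketch of the normal-approximation calculation: solve
# z * sqrt(p*(1-p)/n) <= 0.05 for n at the worst case p = 1/2.
z = 1.645          # approximate 95th percentile of N(0, 1)
eps = 0.05         # required accuracy |phat - p| <= eps
pq = 0.25          # worst-case value of p*(1-p), attained at p = 1/2

n = math.ceil((z / eps) ** 2 * pq)
print(n)
```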

6. (Section 6.4)

Let $\{X_i\}$ be as in Example 6.4.4. Obtain the asymptotic distribution of

(a) $\bar{X}^2$.

(b) $1/\bar{X}$.

(c) $\exp(\bar{X})$.
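One standard route for exercises like these is the same linearization idea as in Example 6.4.4 (often called the delta method): for differentiable $g$ with $g'(\mu) \neq 0$, $\sqrt{n}(g(\bar{X}) - g(\mu)) \to N(0, g'(\mu)^2\sigma^2)$. A hedged Monte Carlo check for part (a), with illustrative parameter values of my own:

```python
import math
import random

# Sketch: for g(x) = x^2, the linearization ("delta method") idea gives
# sqrt(n) * (xbar^2 - mu^2) -> N(0, (2*mu)^2 * sigma^2).
random.seed(3)
mu, sigma = 2.0, 1.0
n, reps = 500, 2000

draws = []
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    draws.append(math.sqrt(n) * (xbar**2 - mu**2))

# Compare the Monte Carlo variance with the theoretical (2*mu)^2 * sigma^2.
mc_var = sum(d * d for d in draws) / reps - (sum(draws) / reps) ** 2
print(mc_var, (2 * mu) ** 2 * sigma**2)
```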

7. (Section 6.4)

Suppose $X$ has a Poisson distribution, $P(X = k) = \lambda^k e^{-\lambda}/k!$. Derive the probability limit and the asymptotic distribution of the estimator

$$\hat{\lambda} = \frac{-1 + \sqrt{1 + 4Z_n}}{2}$$

based on a sample of size $n$, where $Z_n = n^{-1}\sum_{i=1}^{n} X_i^2$. Note that $EX = VX = \lambda$ and $V(X^2) = 4\lambda^3 + 6\lambda^2 + \lambda$.
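A hedged numerical check of the probability limit (my own sketch, not the book's solution): $Z_n \xrightarrow{P} EX^2 = \lambda + \lambda^2$, and $\hat{\lambda}$ is the continuous positive root of $\hat{\lambda}^2 + \hat{\lambda} = Z_n$, so $\hat{\lambda} \xrightarrow{P} \lambda$.

```python
import math
import random

# Sketch: Z_n = n^-1 * sum(X_i^2) ->p lambda + lambda^2, and
# lamhat = (-1 + sqrt(1 + 4*Z_n)) / 2 inverts z = lam^2 + lam continuously,
# so lamhat ->p lambda.
random.seed(4)
lam, n = 3.0, 100000

def poisson(rate):
    # Simple inverse-transform Poisson sampler (fine for moderate rates).
    u, k, p = random.random(), 0, math.exp(-rate)
    cdf = p
    while u > cdf:
        k += 1
        p *= rate / k
        cdf += p
    return k

z_n = sum(poisson(lam) ** 2 for _ in range(n)) / n
lamhat = (-1.0 + math.sqrt(1.0 + 4.0 * z_n)) / 2.0
print(lamhat)
```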

8. (Section 6.4)

Let $\{X_i\}$ be independent with $EX_i = \mu$ and $VX_i = \sigma^2$. What more assumptions on $\{X_i\}$ are needed in order for $\hat{\sigma}^2 = \sum_{i=1}^{n}(X_i - \bar{X})^2/n$ to converge to $\sigma^2$ in probability? What more assumptions are needed for its asymptotic normality?

9. (Section 6.4)

Suppose $\{X_i\}$ are i.i.d. with $EX_i = 0$ and $VX_i = \sigma^2 < \infty$.

(a) Obtain

$$\mathrm{plim}_{n\to\infty}\; n^{-1} \sum_{i=1}^{n} (X_i + X_{i+1}).$$

(b) Obtain the limit distribution of

$$n^{-1/2} \sum_{i=1}^{n} (X_i + X_{i+1}).$$
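For part (b), a hedged Monte Carlo sketch (my own illustration, assuming the summand is $(X_i + X_{i+1})$ as written): the sum counts every interior $X_i$ twice, so its variance is close to $4n\sigma^2$, suggesting an $N(0, 4\sigma^2)$ limit for the normalized sum.

```python
import math
import random

# Sketch: sum_{i=1}^{n} (X_i + X_{i+1}) = X_1 + X_{n+1} + 2 * sum of the
# interior X_i, so its variance is roughly 4*n*sigma^2 and
# n^{-1/2} * sum should be approximately N(0, 4*sigma^2) for large n.
random.seed(5)
sigma, n, reps = 1.0, 500, 2000

draws = []
for _ in range(reps):
    xs = [random.gauss(0.0, sigma) for _ in range(n + 1)]
    s = sum(xs[i] + xs[i + 1] for i in range(n))
    draws.append(s / math.sqrt(n))

# Compare the Monte Carlo variance with the conjectured 4*sigma^2.
mc_var = sum(d * d for d in draws) / reps
print(mc_var, 4 * sigma**2)
```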


10. (Section 6.4)

Let $\{X_i, Y_i\}$ be i.i.d. with means $\mu_x$ and $\mu_y$, variances $\sigma_x^2$ and $\sigma_y^2$, and covariance $\sigma_{xy}$. Derive the asymptotic distribution of

$$\frac{\bar{X} - \bar{Y}}{\bar{X} + \bar{Y}}.$$

Explain carefully each step of the derivation and at each step indicate what convergence theorems you have used. If a theorem has a well-known name, you may simply refer to it. Otherwise, describe it.

11. (Section 6.4)

Suppose $X \sim N[\exp(\alpha\beta), 1]$ and $Y \sim N[\exp(\alpha), 1]$, independent of each other. Let $\{X_i, Y_i\}$, $i = 1, 2, \ldots, n$, be i.i.d. observations on $(X, Y)$, and define $\bar{X} = n^{-1}\sum_{i=1}^{n} X_i$ and $\bar{Y} = n^{-1}\sum_{i=1}^{n} Y_i$. We are to estimate $\beta$ by $\hat{\beta} = \log\bar{X}/\log\bar{Y}$. Prove the consistency of $\hat{\beta}$ (see Definition 7.2.5, p. 132) and derive its asymptotic distribution.
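A hedged numerical check of the consistency claim (my own sketch, with illustrative values $\alpha = 1$, $\beta = 2$): $\bar{X} \xrightarrow{P} \exp(\alpha\beta)$ and $\bar{Y} \xrightarrow{P} \exp(\alpha)$, so by continuity of $\log$ and of the ratio, $\hat{\beta} \xrightarrow{P} \alpha\beta/\alpha = \beta$.

```python
import math
import random

# Sketch: xbar ->p exp(alpha*beta) and ybar ->p exp(alpha), so
# betahat = log(xbar) / log(ybar) ->p (alpha*beta) / alpha = beta
# (continuity arguments, assuming alpha != 0).
random.seed(6)
alpha, beta = 1.0, 2.0
n = 100000

xbar = sum(random.gauss(math.exp(alpha * beta), 1.0) for _ in range(n)) / n
ybar = sum(random.gauss(math.exp(alpha), 1.0) for _ in range(n)) / n
betahat = math.log(xbar) / math.log(ybar)
print(betahat)
```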

Chapters 7 and 8 are both concerned with estimation: Chapter 7 with point estimation and Chapter 8 with interval estimation. The goal of point estimation is to obtain a single-valued estimate of a parameter in question; the goal of interval estimation is to determine the degree of confidence we can attach to the statement that the true value of a parameter lies within a given interval. For example, suppose we want to estimate the probability (p) of heads on a given coin toss on the basis of five heads in ten tosses. Guessing p to be 0.5 is an act of point estimation. We can never be perfectly sure that the true value of p is 0.5, however. At most we can say that p lies within an interval, say, (0.3, 0.7), with a particular degree of confidence. This is an act of interval estimation.

In this chapter we discuss estimation from the standpoint of classical statistics. The Bayesian method, in which point estimation and interval estimation are more closely connected, will be discussed in Chapter 8.
