# Asymptotic Normality of Least Absolute Deviations Estimator

As noted in Section 4.6.1, the asymptotic normality of LAD cannot be proved by means of Theorem 4.1.3; nor is the proof of Section 4.6.1 easily generalizable to the regression case. We shall give a brief outline of a proof of asymptotic normality. The interested reader should refer to Koenker and Bassett (1982) and the references of that article.

The asymptotic normality of the LAD estimator $\hat{\beta}$ is based on the following three fundamental results:

$$
\frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t \,\psi(y_t - x_t'\hat{\beta}) \to 0, \tag{4.6.26}
$$

where $\psi(x) = \operatorname{sgn}(x)$,

$$
\frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t \,\psi(y_t - x_t'\hat{\beta}) - \frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t \,\psi(y_t - x_t'\beta_0) - \left[ \frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t \,E\psi(y_t - x_t'\hat{\beta}) - \frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t \,E\psi(y_t - x_t'\beta_0) \right] \to 0, \tag{4.6.27}
$$

and

$$
\frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t \,E\psi(y_t - x_t'\hat{\beta}) = \frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t \,E\psi(y_t - x_t'\beta_0) + \left[ \operatorname{plim} \frac{1}{T} \sum_{t=1}^{T} x_t \frac{\partial}{\partial \beta'} E\psi(y_t - x_t'\beta) \Big|_{\beta_0} \right] \sqrt{T}(\hat{\beta} - \beta_0). \tag{4.6.28}
$$

These results imply

$$
\sqrt{T}(\hat{\beta} - \beta_0) = -\left\{ \operatorname{plim} \frac{1}{T} \sum_{t=1}^{T} x_t \left[ \frac{\partial}{\partial \beta'} E\psi(y_t - x_t'\beta) \right]_{\beta_0} \right\}^{-1} \frac{1}{\sqrt{T}} \sum_{t=1}^{T} x_t \,\psi(u_t) + o_p(1). \tag{4.6.29}
$$

Noting that $E\psi(y_t - x_t'\beta) = 1 - 2F(x_t'(\beta - \beta_0))$, where $F$ is the distribution function of $u_t$, so that the derivative in (4.6.29) evaluated at $\beta_0$ is $-2f(0)x_t'$, and that $E\psi(u_t) = 0$ and $V\psi(u_t) = 1$, we obtain

$$
\sqrt{T}(\hat{\beta} - \beta_0) \to N\!\left(0, \; \tfrac{1}{4} f(0)^{-2} \left[ \lim_{T \to \infty} T^{-1} X'X \right]^{-1} \right). \tag{4.6.30}
$$

The proof of (4.6.26) is straightforward and is given in Ruppert and Carroll (1980, Lemma A2, p. 836). The proof of (4.6.27) is complicated; it follows from Lemma 1 (p. 831) and Lemma A3 (p. 836) of Ruppert and Carroll, who in turn used results of Bickel (1975). Equation (4.6.28) is a Taylor series approximation.
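As a numerical illustration (not part of the original argument), the limiting distribution (4.6.30) can be checked by Monte Carlo in the simplest special case $x_t = 1$, where the LAD estimator reduces to the sample median. The sketch below assumes Laplace errors, for which $f(0) = 1/2$ and the asymptotic variance is $1/(4f(0)^2) = 1$; the sample sizes and seed are arbitrary choices.

```python
import numpy as np

# Monte Carlo sketch of (4.6.30) with x_t = 1: the LAD estimator is the sample
# median, and sqrt(T)*(betahat - beta0) -> N(0, 1/(4 f(0)^2)).
# Laplace(0, 1) errors give f(0) = 1/2, so the limiting variance is 1.
# beta0, T, R, and the seed are illustrative choices, not from the text.
rng = np.random.default_rng(0)
beta0, T, R = 2.0, 2000, 2000

y = beta0 + rng.laplace(0.0, 1.0, size=(R, T))
stats = np.sqrt(T) * (np.median(y, axis=1) - beta0)

print(round(float(np.mean(stats)), 2), round(float(np.var(stats)), 2))
```

The empirical mean should sit near 0 and the empirical variance near 1, in line with (4.6.30).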

Exercises

1. (Section 4.1.1)

Prove (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) in Definition 4.1.1.

2. (Section 4.1.1)

In the model of Exercise 11 in Chapter 1, obtain the probability limits of the two roots of the likelihood equation, assuming $\lim_{T \to \infty} T^{-1}\mathbf{x}'\mathbf{x} = c$, where $c$ is a finite, positive constant.

3. (Section 4.1.1)

In the model of Exercise 2 (this chapter), prove the existence of a consistent root, using Theorem 4.1.2.

4. (Section 4.1.2)

Suppose that $\{X_T\}$ are essentially bounded; that is, for any $\epsilon > 0$, there exists $M_\epsilon$ such that $P(|X_T| < M_\epsilon) \ge 1 - \epsilon$ for all $T$. Show that if $\operatorname{plim}_{T \to \infty} Y_T = 0$, then $\operatorname{plim}_{T \to \infty} X_T Y_T = 0$. (This is needed in the proof of Theorem 4.1.4.)

5. (Section 4.2.2)

Prove (4.2.18) by verifying Condition D given after (4.1.7). Assume for simplicity that $\sigma^2$ is known. (The proof for the case of unknown $\sigma^2$ is similar but more complicated.)

6. (Section 4.2.2)

Let $X_{it}$, $i = 1, 2, \ldots, n$, $t = 1, 2, \ldots, T$, be independent with the distribution $N(\mu_i, \sigma^2)$. Obtain the probability limit of the maximum likelihood estimator of $\sigma^2$ assuming that $n$ is fixed and $T$ goes to infinity (cf. Neyman and Scott, 1948).

7. (Section 4.2.3)

Let $\{X_t\}$, $t = 1, 2, \ldots, T$, be i.i.d. with the probability distribution

$$
X_t = \begin{cases} 1 & \text{with probability } p \\ 0 & \text{with probability } 1 - p. \end{cases}
$$

Prove the consistency and asymptotic normality of the maximum likelihood estimator using Theorems 4.1.1 and 4.2.4. (The direct method is much simpler but is not to be used here for the sake of the exercise.)
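For intuition on what the exercise asks you to prove, note that the MLE of $p$ for i.i.d. Bernoulli data is the sample mean, and the asymptotic normality result gives $\sqrt{T}(\hat{p} - p) \to N(0, p(1-p))$, the inverse Fisher information. A quick simulation check (the values of $p$, $T$, $R$, and the seed are illustrative choices, not from the text):

```python
import numpy as np

# The MLE of p for i.i.d. Bernoulli(p) observations is the sample mean;
# sqrt(T)*(phat - p) should be approximately N(0, p*(1-p)).
rng = np.random.default_rng(1)
p, T, R = 0.3, 5000, 2000

x = (rng.random((R, T)) < p).astype(float)   # R replications of T Bernoulli draws
phat = x.mean(axis=1)                        # MLE of p in each replication
stats = np.sqrt(T) * (phat - p)

print(round(float(np.var(stats)), 3))        # close to p*(1-p) = 0.21
```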

8. (Section 4.2.3)

Prove the asymptotic normality of the consistent root in the model of Exercise 2 (this chapter).

9. (Section 4.2.3)

Let $\{X_t\}$ be i.i.d. with the uniform distribution over $(0, \theta)$. Show that if $\hat{\theta}$ is defined by $\hat{\theta} = T^{-1}(T + 1)\max(X_1, X_2, \ldots, X_T)$, then

$$
\lim_{T \to \infty} P[T(\hat{\theta} - \theta) \le x] = \exp(\theta^{-1}x - 1) \quad \text{for } x \le \theta.
$$
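The claimed limiting distribution can be checked by simulation. In the sketch below (an illustration, not a proof; $\theta$, $T$, $R$, and the seed are arbitrary choices), the maximum of $T$ uniforms is drawn directly by inverse transform, since its distribution function is $(m/\theta)^T$:

```python
import numpy as np

# Check of Exercise 9: with X_t i.i.d. U(0, theta) and
# thetahat = T^{-1}(T+1)*max(X_1, ..., X_T), the claim is
# P[T*(thetahat - theta) <= x] -> exp(x/theta - 1) for x <= theta.
rng = np.random.default_rng(2)
theta, T, R = 3.0, 10_000, 200_000

m = theta * rng.random(R) ** (1.0 / T)   # max of T i.i.d. U(0, theta), by inverse transform
stats = T * ((T + 1) / T * m - theta)    # T*(thetahat - theta)

for c in (-2.0, 0.0, 1.0):
    empirical = float((stats <= c).mean())
    limiting = float(np.exp(c / theta - 1.0))
    print(c, round(empirical, 3), round(limiting, 3))
```

The empirical probabilities should agree with $\exp(x/\theta - 1)$ to within Monte Carlo error at each evaluation point.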

10. (Section 4.2.3)

Consider the model

$$
y_t = \beta_0 + u_t, \quad t = 1, 2, \ldots, T,
$$

where $y_t$ and $u_t$ are scalar random variables and $\beta_0$ is a scalar unknown parameter. If $\{u_t\}$ are i.i.d. with $Eu_t = 0$, $Eu_t^2 = \beta_0^2$, $Eu_t^3 = 0$, and $Eu_t^4 = m_4$ (note that we do not assume the normality of $u_t$), which of the following three estimators do you prefer most and why?

(1) $\hat{\beta} = T^{-1}\sum_{t=1}^{T} y_t$,

(2) $\tilde{\beta}$, which maximizes $S = -(T/2)\log \beta^2 - (1/2\beta^2)\sum_{t=1}^{T}(y_t - \beta)^2$,

(3) $\check{\beta}$, defined as 0.5 times the value of $\beta$ that minimizes

11. (Section 4.2.3)

Derive the asymptotic variance of the estimator of $\beta$ obtained by minimizing $\sum_{t=1}^{T}(y_t - \beta x_t)^4$, where $y_t$ is independent with the distribution $N(\beta_0 x_t, \sigma_0^2)$ and $\lim_{T \to \infty} T^{-1}\sum_{t=1}^{T} x_t^2$ is a finite, positive constant. You may assume consistency and asymptotic normality. Indicate the additional assumptions on $x_t$ one needs. Note that if $Z \sim N(0, \sigma^2)$, then $EZ^{2k} = \sigma^{2k}(2k)!/(2^k k!)$.
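A quick sanity check of the normal moment formula used in this exercise (an illustration, not from the text; $\sigma$ and the sample size are arbitrary choices):

```python
import math
import numpy as np

# For Z ~ N(0, sigma^2), E Z^{2k} = sigma^{2k} * (2k)! / (2^k * k!),
# e.g. E Z^2 = sigma^2 and E Z^4 = 3 sigma^4. Monte Carlo comparison:
rng = np.random.default_rng(3)
sigma, n = 1.5, 2_000_000
z = rng.normal(0.0, sigma, size=n)

for k in (1, 2):
    formula = sigma ** (2 * k) * math.factorial(2 * k) / (2 ** k * math.factorial(k))
    print(k, round(float(np.mean(z ** (2 * k))), 3), round(formula, 3))
```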

12. (Section 4.2.4)

Complete the proof of Example 4.2.4—the derivation of the asymptotic normality of the superefficient estimator.

13. (Section 4.2.5)

In the model of Example 4.2.3, obtain the asymptotic variance-covariance matrix of $\hat{\beta}$ using the concentrated likelihood function in $\beta$.

14. (Section 4.3.2)

What assumptions are needed to prove consistency in Example 4.3.2 using Theorem 4.3.1?

15. (Section 4.3.3)

Prove the asymptotic normality of the NLLS estimator in Example 4.3.1.

16. (Section 4.3.3)

Consider a nonlinear regression model

$$
y_t = (\beta_0 + x_t)^2 + u_t,
$$

where we assume

(A) $\{u_t\}$ are i.i.d. with $Eu_t = 0$ and $Vu_t = \sigma_0^2$.

(B) The parameter space is $B = [-\tfrac{1}{2}, \tfrac{1}{2}]$.

(C) $\{x_t\}$ are i.i.d. with the uniform distribution over $[1, 2]$, distributed independently of $\{u_t\}$. [$EX^r = (r + 1)^{-1}(2^{r+1} - 1)$ for every positive or negative integer $r$ except $r = -1$; $EX^{-1} = \log 2$.]

Define two estimators of $\beta_0$:

(1) $\hat{\beta}$ minimizes $S_T(\beta) = \sum_{t=1}^{T}[y_t - (\beta + x_t)^2]^2$ over $B$.

(2) $\tilde{\beta}$ minimizes $\sum_{t=1}^{T}\left\{\left[\frac{y_t}{\beta + x_t} - (\beta + x_t)\right]^2 + \log[(\beta + x_t)^2]\right\}$ over $B$.

If $\beta_0 = 0$, which of the two estimators do you prefer? Explain your preference on the basis of asymptotic results.

17. (Section 4.3.5)

Your client wants to test the hypothesis $\alpha + \beta = 1$ in the nonlinear regression model

$$
Q_t = L_t^{\alpha} K_t^{\beta} + u_t, \quad t = 1, 2, \ldots, T,
$$

where $L_t$ and $K_t$ are assumed exogenous and $\{u_t\}$ are i.i.d. with $Eu_t = 0$ and $Vu_t = \sigma^2$. Write your answer in such a way that your client can perform the test by reading your answer without reading anything else, except that you may assume he can compute linear least squares estimates and has access to all the statistical tables and knows how to use them. Assume that your client understands high-school algebra but not calculus or matrix analysis.

18. (Section 4.4.3)

Prove the asymptotic efficiency of the second-round estimator of the Gauss-Newton iteration.

19. (Section 4.5.1)

Prove Wald $\to \chi^2(q)$, where Wald is defined in Eq. (4.5.4).

20. (Section 4.5.1)

Prove Rao $\to \chi^2(q)$, where Rao is defined in Eq. (4.5.5).

21. (Section 4.5.1)

Show Wald $\ge$ LRT $\ge$ Rao, where these statistics are defined by (4.5.23), (4.5.24), and (4.5.25).
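One standard way to see this ordering in the linear regression case with normal errors writes all three statistics in terms of the restricted and unrestricted sums of squared residuals: with $h = (\mathrm{SSR}_r - \mathrm{SSR}_u)/\mathrm{SSR}_u \ge 0$, Wald $= Th$, LRT $= T\log(1+h)$, and Rao $= Th/(1+h)$, so the inequality follows from $x \ge \log(1+x) \ge x/(1+x)$ for $x \ge 0$. The sketch below illustrates this numerically; the SSR-based forms are a common textbook parametrization and may differ in detail from Eqs. (4.5.23)-(4.5.25), and the data are simulated.

```python
import numpy as np

# Simulated linear model; test the restriction that the last coefficient is zero.
rng = np.random.default_rng(4)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T), rng.normal(size=T)])
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=T)

def ssr(M, y):
    """Sum of squared residuals from an OLS fit of y on M."""
    b, *_ = np.linalg.lstsq(M, y, rcond=None)
    e = y - M @ b
    return float(e @ e)

ssr_u = ssr(X, y)           # unrestricted
ssr_r = ssr(X[:, :2], y)    # restricted: last coefficient forced to zero

h = (ssr_r - ssr_u) / ssr_u          # h >= 0, since the restricted SSR is no smaller
wald = T * h
lrt = T * np.log(1.0 + h)
rao = T * h / (1.0 + h)
print(wald >= lrt >= rao)            # the ordering x >= log(1+x) >= x/(1+x)
```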

22. (Section 4.6.1)

Consider the regression model $y_t = \beta_0 x_t + u_t$, where $\{x_t\}$ are known constants such that $\lim_{T \to \infty} T^{-1}\sum_{t=1}^{T} x_t^2$ is a finite, positive constant and $\{u_t\}$ satisfy the conditions for $\{Y_t\}$ given in Section 4.6.1. By modifying the proof of the asymptotic normality of the median given in Section 4.6.1, prove the asymptotic normality of the estimator of $\beta$ obtained by minimizing $\sum_{t=1}^{T} |y_t - \beta x_t|$.

23. (Section 4.6.1)

Let $\{X_t\}$ be i.i.d. with a uniform density over $(-\tfrac{1}{2}, \tfrac{1}{2})$ and let $Y$ be a binary variable taking values $T^{-1/2}$ and $-T^{-1/2}$ with equal probability, distributed independently of $\{X_t\}$. Define $W_t = 1$ if $X_t + Y \ge 0$ and $W_t = 0$ if $X_t + Y < 0$. Prove that $T^{-1/2}\sum_{t=1}^{T}(W_t - \tfrac{1}{2})$ converges to a mixture of $N(1, \tfrac{1}{4})$ and $N(-1, \tfrac{1}{4})$ with equal probability.
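The mixture limit is easy to see by conditioning: given $Y = T^{-1/2}$, each $W_t$ is Bernoulli$(\tfrac{1}{2} + T^{-1/2})$, so the statistic has mean exactly 1 and variance tending to $\tfrac{1}{4}$; the case $Y = -T^{-1/2}$ is symmetric. A simulation sketch (sizes and seed are illustrative choices, not from the text):

```python
import numpy as np

# Simulate Exercise 23: the statistic T^{-1/2} * sum(W_t - 1/2) should be
# distributed, in the limit, as an equal-probability mixture of
# N(1, 1/4) and N(-1, 1/4).
rng = np.random.default_rng(5)
T, R = 10_000, 2000

stats = np.empty(R)
for r in range(R):
    y = rng.choice([-1.0, 1.0]) / np.sqrt(T)     # Y = +-T^{-1/2}, equal probability
    x = rng.uniform(-0.5, 0.5, size=T)           # X_t uniform on (-1/2, 1/2)
    w = (x + y >= 0).astype(float)               # W_t = 1{X_t + Y >= 0}
    stats[r] = (w - 0.5).sum() / np.sqrt(T)

# Overall mean near 0; |stats| concentrates near 1 (the two mixture means).
print(round(float(stats.mean()), 2), round(float(np.abs(stats).mean()), 2))
```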
