Efficiency of Least Squares Estimator

It is easy to show that in Model 6 the least squares estimator $\hat{\beta}$ is unbiased with its covariance matrix given by

$$V_{\hat{\beta}} = (X'X)^{-1}X'\Sigma X(X'X)^{-1}. \tag{6.1.5}$$

Because GLS is BLUE, it follows that

$$(X'X)^{-1}X'\Sigma X(X'X)^{-1} \geq (X'\Sigma^{-1}X)^{-1}, \tag{6.1.6}$$

which can also be directly proved. There are cases where the equality holds in (6.1.6), as shown in the following theorem.
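
Before stating that theorem, (6.1.6) can be illustrated numerically. The sketch below compares the two covariance matrices for a small polynomial design and an AR(1) error covariance; the sample size, $\rho$, and $\sigma^2$ are illustrative choices, not values taken from the text.

```python
import numpy as np

# Illustrative sketch (assumed design and parameter values): compare the LS
# covariance (X'X)^{-1} X' Sigma X (X'X)^{-1} of (6.1.5) with the GLS
# covariance (X' Sigma^{-1} X)^{-1} appearing on the right of (6.1.6).
T, rho, sigma2 = 30, 0.6, 1.0
t = np.arange(1, T + 1)
X = np.column_stack([np.ones(T), t / T, (t / T) ** 2])   # K = 3 polynomial regressors
Sigma = sigma2 / (1 - rho ** 2) * rho ** np.abs(np.subtract.outer(t, t))

XtX_inv = np.linalg.inv(X.T @ X)
V_ls = XtX_inv @ X.T @ Sigma @ X @ XtX_inv               # (6.1.5)
V_gls = np.linalg.inv(X.T @ np.linalg.inv(Sigma) @ X)

# (6.1.6) says V_ls - V_gls is positive semidefinite.
print(np.linalg.eigvalsh(V_ls - V_gls).min())            # >= 0 up to rounding error
```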

Theorem 6.1.1. Let $X'X$ and $\Sigma$ both be positive definite. Then the following statements are equivalent.

(A) $(X'X)^{-1}X'\Sigma X(X'X)^{-1} = (X'\Sigma^{-1}X)^{-1}$.

(B) $\Sigma X = XB$ for some nonsingular $B$.

(C) $(X'X)^{-1}X' = (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}$.

(D) $X = HA$ for some nonsingular $A$, where the columns of $H$ are $K$ characteristic vectors of $\Sigma$.

(E) $X'\Sigma Z = 0$ for any $Z$ such that $Z'X = 0$.

(F) $\Sigma = X\Gamma X' + Z\Theta Z' + \sigma^2 I$ for some $\Gamma$ and $\Theta$ and $Z$ such that $Z'X = 0$.

Proof. We show that statement A ⇒ statement B ⇒ statement C:

Statement A ⇒ $X'\Sigma X = X'X(X'\Sigma^{-1}X)^{-1}X'X$

⇒ $X'\Sigma^{1/2}\bigl[I - \Sigma^{-1/2}X(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1/2}\bigr]\Sigma^{1/2}X = 0$

⇒ $\Sigma^{1/2}X = \Sigma^{-1/2}XB$ for some $B$, using Theorem 14 of Appendix 1

⇒ $\Sigma X = XB$ for some $B$ ⇒ $(X'X)^{-1}X'\Sigma X = B$

⇒ $B$ is nonsingular because $X'\Sigma X$ is nonsingular ⇒ statement B

⇒ $X'\Sigma^{-1}X = (B')^{-1}X'X$ and $X'\Sigma^{-1} = (B')^{-1}X'$

⇒ statement C.

Statement C ⇒ statement D can be easily proved using Theorem 16 of Appendix 1. (Anderson, 1971, p. 561, has given the proof.) Statement D ⇒ statement A and statement B ⇒ statement E are straightforward. To prove statement E ⇒ statement B, note

statement E ⇒ $X'\Sigma(Z, X) = (0, X'\Sigma X)$

⇒ $X'\Sigma = (0, X'\Sigma X)\bigl[Z(Z'Z)^{-1}, X(X'X)^{-1}\bigr]'$

$= X'\Sigma X(X'X)^{-1}X'$ using Theorem 15 of Appendix 1

⇒ statement B because $X'\Sigma X(X'X)^{-1}$ is nonsingular.

For a proof of the equivalence of statement F and the other five statements, see Rao (1965).
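
To make these equivalences concrete, one can construct a $\Sigma$ of the form given in statement F (with the columns of $Z$ spanning the orthogonal complement of the columns of $X$) and check statements A, B, and C numerically. The sketch below is only an illustration with arbitrary, randomly generated $\Gamma$ and $\Theta$; it is not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 12, 3
X = rng.standard_normal((T, K))

# Build Sigma as in statement F: Sigma = X Gamma X' + Z Theta Z' + sigma^2 I,
# with Z'X = 0 (columns of Z span the orthogonal complement of col(X)).
Q, _ = np.linalg.qr(X, mode="complete")
Z = Q[:, K:]
G = rng.standard_normal((K, K)); Gamma = G @ G.T
H = rng.standard_normal((T - K, T - K)); Theta = H @ H.T
Sigma = X @ Gamma @ X.T + Z @ Theta @ Z.T + 1.0 * np.eye(T)

XtX_inv = np.linalg.inv(X.T @ X)
Sigma_inv = np.linalg.inv(Sigma)

# Statement A: LS and GLS covariance matrices coincide.
A_lhs = XtX_inv @ X.T @ Sigma @ X @ XtX_inv
A_rhs = np.linalg.inv(X.T @ Sigma_inv @ X)
# Statement B: Sigma X = X B with B = (X'X)^{-1} X' Sigma X.
B = XtX_inv @ X.T @ Sigma @ X
# Statement C: (X'X)^{-1} X' = (X' Sigma^{-1} X)^{-1} X' Sigma^{-1}.
C_lhs = XtX_inv @ X.T
C_rhs = A_rhs @ X.T @ Sigma_inv

print(np.allclose(A_lhs, A_rhs),
      np.allclose(Sigma @ X, X @ B),
      np.allclose(C_lhs, C_rhs))                 # all True
```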

There are situations in which LS is equal to GLS. For example, consider

$$y = X(\beta + v) + u, \tag{6.1.7}$$

where $\beta$ is a vector of unknown parameters and $u$ and $v$ are random variables with $Eu = 0$, $Ev = 0$, $Euu' = \sigma^2 I$, $Evv' = \Gamma$, and $Euv' = 0$. In this model, statement F of Theorem 6.1.1 is satisfied because $E(Xv + u)(Xv + u)' = X\Gamma X' + \sigma^2 I$. Therefore LS = GLS in the estimation of $\beta$.
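
A quick way to see this is to draw one sample from (6.1.7) and compute both estimators: because statement C holds, the LS and GLS estimates coincide for every realization of $y$. The sketch below uses arbitrary illustrative dimensions and parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, sigma2 = 50, 3, 0.5                         # assumed values
X = rng.standard_normal((T, K))
beta = np.array([1.0, 0.5, -2.0])
G = rng.standard_normal((K, K)); Gamma = G @ G.T  # Evv' = Gamma

# y = X(beta + v) + u, so E(Xv + u)(Xv + u)' = X Gamma X' + sigma^2 I.
v = np.linalg.cholesky(Gamma) @ rng.standard_normal(K)
u = np.sqrt(sigma2) * rng.standard_normal(T)
y = X @ (beta + v) + u

Sigma = X @ Gamma @ X.T + sigma2 * np.eye(T)
beta_ls = np.linalg.solve(X.T @ X, X.T @ y)
Si_X = np.linalg.solve(Sigma, X)                  # Sigma^{-1} X
beta_gls = np.linalg.solve(X.T @ Si_X, Si_X.T @ y)

print(np.allclose(beta_ls, beta_gls))             # True: LS = GLS here
```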

There are situations in which the conditions of Theorem 6.1.1 are asymptotically satisfied so that LS and GLS have the same asymptotic distribution. Anderson (1971, p. 581) has presented two such examples:

$$y_t = \beta_1 + \beta_2 t + \beta_3 t^2 + \cdots + \beta_K t^{K-1} + u_t \tag{6.1.8}$$

and

$$y_t = \beta_1 \cos \lambda_1 t + \beta_2 \cos \lambda_2 t + \cdots + \beta_K \cos \lambda_K t + u_t, \tag{6.1.9}$$

where in each case $\{u_t\}$ follow a general stationary process defined by (5.2.42). [That is, take the $y_t$ of (5.2.42) as the present $u_t$.]

We shall verify that condition B of Theorem 6.1.1 is approximately satisfied for the polynomial regression (6.1.8) with $K = 3$ when $\{u_t\}$ follow the stationary first-order autoregressive model (5.2.1). Define $X = (x_1, x_2, x_3)$, where the $t$th elements of the vectors $x_1$, $x_2$, and $x_3$ are $x_{1t} = 1$, $x_{2t} = t$, and $x_{3t} = t^2$, respectively. Then it is easy to show $\Sigma^{-1}X \cong XA$ (so that condition B holds approximately with $B = A^{-1}$), where $\Sigma$ is given in (5.2.9) and

$$A = \sigma^{-2}\begin{pmatrix} (1-\rho)^2 & 0 & -2\rho \\ 0 & (1-\rho)^2 & 0 \\ 0 & 0 & (1-\rho)^2 \end{pmatrix}.$$

The approximate equality is exact except for the first and the last rows.
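
This can be checked directly: forming $\Sigma$ from the AR(1) model and computing $\Sigma^{-1}X - XA$, only the first and last rows are nonzero. The sketch below assumes the familiar AR(1) covariance $\Sigma_{ts} = \sigma^2\rho^{|t-s|}/(1-\rho^2)$; the values of $T$ and $\rho$ are illustrative.

```python
import numpy as np

T, rho, sigma2 = 20, 0.5, 1.0                     # assumed values
t = np.arange(1, T + 1)
X = np.column_stack([np.ones(T), t, t ** 2])      # x1 = 1, x2 = t, x3 = t^2

# AR(1) covariance: Sigma_{ts} = sigma^2 rho^{|t-s|} / (1 - rho^2)
Sigma = sigma2 / (1 - rho ** 2) * rho ** np.abs(np.subtract.outer(t, t))

A = (1 / sigma2) * np.array([[(1 - rho) ** 2, 0.0, -2 * rho],
                             [0.0, (1 - rho) ** 2, 0.0],
                             [0.0, 0.0, (1 - rho) ** 2]])

resid = np.linalg.inv(Sigma) @ X - X @ A
print(np.abs(resid).max(axis=1))                  # nonzero only in rows 1 and T (up to rounding)
```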

We have seen in the preceding discussion that in Model 6 LS is generally not efficient. Use of LS in Model 6 has another possible drawback: the covariance matrix given in (6.1.5) may not be estimated consistently using the usual formula $\hat{\sigma}^2(X'X)^{-1}$. Under appropriate assumptions we have $\operatorname{plim} \hat{\sigma}^2 = \lim T^{-1}\operatorname{tr} M\Sigma$, where $M = I - X(X'X)^{-1}X'$. Therefore $\operatorname{plim} \hat{\sigma}^2(X'X)^{-1}$ is generally different from (6.1.5). Furthermore, we cannot unequivocally determine the direction of the bias. Consequently, the standard $t$ and $F$ tests of linear hypotheses developed in Chapter 1 are no longer valid.
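
As a rough numerical illustration of this inconsistency (under the same illustrative AR(1) assumptions used above, which are not prescribed by the text), one can compare $T^{-1}\operatorname{tr}(M\Sigma)\,(X'X)^{-1}$, the expectation of the usual formula, with the true LS covariance (6.1.5):

```python
import numpy as np

# Illustrative sketch (assumed AR(1) covariance and trend design): the usual
# formula sigma^2_hat (X'X)^{-1} does not target the true LS covariance (6.1.5).
T, rho, sigma2 = 100, 0.8, 1.0
t = np.arange(1, T + 1)
X = np.column_stack([np.ones(T), t / T])                 # constant and trend
Sigma = sigma2 / (1 - rho ** 2) * rho ** np.abs(np.subtract.outer(t, t))

XtX_inv = np.linalg.inv(X.T @ X)
M = np.eye(T) - X @ XtX_inv @ X.T
V_true = XtX_inv @ X.T @ Sigma @ X @ XtX_inv             # (6.1.5)
V_usual = (np.trace(M @ Sigma) / T) * XtX_inv            # E[sigma^2_hat] (X'X)^{-1}

print(np.diag(V_true))
print(np.diag(V_usual))                                  # generally differs from the line above
```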
