# Further Topics

The following topics have not been discussed in this chapter, but many important results in these areas have appeared in the literature over the past several years: (1) lagged endogenous variables included in X; (2) serially correlated errors; (3) prediction; and (4) undersized samples (that is, X having more columns than rows). Harvey (1981a) has given a detailed discussion of the first two topics. Recent results concerning the third topic can be found in articles by Sant (1978) and by Nagar and Sahay (1978). For a discussion of the fourth topic, consult the articles by Brundy and Jorgenson (1974) and by Swamy (1980).

## Exercises

1. (Section 7.3.1)

In the limited information model defined by (7.3.1) and (7.3.2), let $X = (X_1, X_2)$, where $X_1$ and $X_2$ have $K_1$ and $K_2$ columns, respectively. Suppose we define a class of instrumental variables estimators of $\alpha$ by $(S'Z)^{-1}S'y$, where $S = (X_2 A, X_1)$, with $A$ being a $K_2 \times N_1$ matrix of constants. Show that there exists some $A$ for which the instrumental variables estimator is consistent if and only if the rank condition of identifiability is satisfied for Eq. (7.3.1).
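This construction can be tried numerically. The sketch below is only an illustration: the parameter values, the data-generating process, and the choice of $A$ as the first-stage coefficient matrix are all assumptions made for the demo, not part of the exercise.

```python
import numpy as np

# Sketch (assumed DGP): y = Y*gamma + X1*beta + u, with one endogenous
# regressor Y and instruments S = (X2 @ A, X1).  A is taken to be the
# first-stage coefficient of Y on X2, a K2 x N1 matrix, which makes
# plim S'Z/T nonsingular (the rank condition).
rng = np.random.default_rng(0)
T = 100_000
gamma, beta = 0.5, 1.0                      # assumed true parameters
X1 = rng.normal(size=(T, 1))                # K1 = 1 included exogenous column
X2 = rng.normal(size=(T, 2))                # K2 = 2 excluded exogenous columns
u = rng.normal(size=(T, 1))
v = rng.normal(size=(T, 1)) + 0.8 * u       # correlation makes Y endogenous
Pi2 = np.array([[1.0], [0.7]])              # Y loads on X2: identification
Y = X1 * 0.3 + X2 @ Pi2 + v
y = Y * gamma + X1 * beta + u

A = Pi2                                     # K2 x N1 matrix of constants
S = np.hstack([X2 @ A, X1])
Z = np.hstack([Y, X1])
alpha_hat = np.linalg.solve(S.T @ Z, S.T @ y).ravel()
print(alpha_hat)                            # close to (gamma, beta)
```

Choosing $A$ proportional to the first-stage coefficients keeps $S'Z/T$ nonsingular in the limit; an $A$ orthogonal to those coefficients would destroy consistency, which is the point of the exercise.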

2. (Section 7.3.3)

Show that the LIML estimator of $\gamma$ is obtained by minimizing $\delta' W_1 \delta / \delta' W \delta$, where $\delta = (1, -\gamma')'$, with respect to $\gamma$. Hence the estimator is sometimes referred to as the least variance ratio estimator. Also, show that the minimization of $\delta' W_1 \delta - \delta' W \delta$ yields the 2SLS estimator of $\gamma$.
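The least variance ratio idea can be checked numerically. In the sketch below, $W_1$ and $W$ are taken to be $Y^{0\prime} M_1 Y^0$ and $Y^{0\prime} M Y^0$ with $Y^0 = (y, Y)$; since the chapter's definitions are not reproduced in this excerpt, that notation is an assumption, as are the simulated parameter values. With no included exogenous regressors, $M_1 = I$.

```python
import numpy as np

# Assumed DGP: y1 = gamma*y2 + u, y2 driven by two excluded exogenous
# variables X.  W1 = Y0'Y0 (since M1 = I here) and W = Y0' M Y0.
rng = np.random.default_rng(1)
T = 50_000
gamma = 0.5                                  # assumed true coefficient
X = rng.normal(size=(T, 2))                  # excluded exogenous variables
u = rng.normal(size=(T, 1))
v = rng.normal(size=(T, 1)) + 0.6 * u        # makes y2 endogenous
y2 = X @ np.array([[1.0], [0.5]]) + v
y1 = gamma * y2 + u

Y0 = np.hstack([y1, y2])
W1 = Y0.T @ Y0                               # Y0' M1 Y0 with M1 = I
XtY0 = X.T @ Y0
W = W1 - XtY0.T @ np.linalg.solve(X.T @ X, XtY0)   # Y0' M Y0, no T x T matrix

# LIML: delta = (1, -gamma)' minimizing delta'W1 delta / delta'W delta is the
# eigenvector for the smallest eigenvalue of W^{-1} W1, rescaled so that its
# first element is 1.
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(W, W1))
d = eigvecs[:, np.argmin(eigvals.real)].real
gamma_liml = -d[1] / d[0]

# 2SLS: minimizing delta'W1 delta - delta'W delta over gamma has the closed
# form gamma = G[0,1] / G[1,1] with G = W1 - W.
G = W1 - W
gamma_2sls = G[0, 1] / G[1, 1]
print(gamma_liml, gamma_2sls)                # both near the true gamma
```

The ratio form makes LIML invariant to the normalization of $\delta$, which is why it reduces to a generalized eigenvalue problem, whereas the difference form gives the 2SLS quadratic with a closed-form minimizer.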

3. (Section 7.3.3)

Consider the model

$$y = Y\gamma + X_1\beta + u \equiv Z\alpha + u,$$

$$Y = X\Pi + V \equiv \bar{Y} + V,$$

where $\bar{Y} = X\Pi$. Obtain the asymptotic variance-covariance matrix of $\hat{\alpha} = (\bar{Z}'\bar{Z})^{-1}\bar{Z}'y$, where $\bar{Z} = (\bar{Y}, X_1)$, and compare it with that of the 2SLS estimator of $\alpha$.

4. (Section 7.3.3)

In a two-equation model

$$y_1 = \gamma_1 y_2 + u_1,$$

$$y_2 = \gamma_2 y_1 + \beta_1 x_1 + \beta_2 x_2 + u_2,$$

compute the LIML and 2SLS estimates of $\gamma_1$, given the following moment matrices, where $X = (x_1, x_2)$ and $Y = (y_1, y_2)$.

5. (Section 7.3.7)

Show that Theil's G2SLS defined at the end of Section 7.3.7 is asymptotically equivalent to the definition (7.3.25).

6. (Section 7.3.7)

Define $\hat{\alpha} = [Z'X(X'\Psi X)^{-1}X'Z]^{-1}Z'X(X'\Psi X)^{-1}X'y$. Show that this is a consistent estimator of $\alpha$ in model (7.3.21) but not asymptotically as efficient as $\hat{\alpha}_{G2S}$ defined in (7.3.25).
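The consistency half of this exercise is easy to check by simulation. The sketch below uses a diagonal $\Psi$ built from heteroskedastic error variances; the data-generating process and all parameter values are assumptions for the demo.

```python
import numpy as np

# Assumed DGP: y = Y*gamma + X1*beta + u with heteroskedastic u, and
# Psi = diag(sig_t^2).  The estimator [Z'X(X'Psi X)^{-1}X'Z]^{-1}
# Z'X(X'Psi X)^{-1}X'y is computed without forming the T x T matrix Psi.
rng = np.random.default_rng(2)
T = 100_000
gamma, beta = 0.5, 1.0                      # assumed true parameters
X1 = rng.normal(size=(T, 1))
X2 = rng.normal(size=(T, 2))
X = np.hstack([X1, X2])
sig = np.exp(0.5 * X1[:, 0])                # assumed error std. deviations
u = sig[:, None] * rng.normal(size=(T, 1))
v = rng.normal(size=(T, 1)) + 0.5 * u       # makes Y endogenous
Y = X1 * 0.3 + X2 @ np.array([[1.0], [0.7]]) + v
y = Y * gamma + X1 * beta + u
Z = np.hstack([Y, X1])

Psi_X = sig[:, None] ** 2 * X               # Psi @ X, with Psi diagonal
XPsiX = X.T @ Psi_X
B = Z.T @ X @ np.linalg.solve(XPsiX, X.T)   # Z'X (X'Psi X)^{-1} X'
alpha_hat = np.linalg.solve(B @ Z, B @ y).ravel()
print(alpha_hat)                            # close to (gamma, beta)
```

Consistency holds because $X'u/T \to 0$ regardless of the weighting matrix; the efficiency loss relative to (7.3.25) is what the second half of the exercise asks you to establish analytically.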

7. (Section 7.4)

Suppose that a simultaneous equations model is defined by (7.1.5) and the reduced form $Z = (I \otimes X)\Pi + V$. Show that $\hat{\alpha}_{G2S}$ defined in (7.3.25) and Theil's G2SLS defined at the end of Section 7.3.7 will lead to the same 3SLS when applied to this model.

8. (Section 7.4)

Consider the following model:

(1) $y_1 = \gamma y_2 + u_1$

(2) $y_2 = \beta_1 x_1 + \beta_2 x_2 + u_2 \equiv X\beta + u_2$,

where $\gamma$, $\beta_1$, and $\beta_2$ are scalar unknown parameters; $x_1$ and $x_2$ are $T$-component vectors of constants such that $T^{-1}X'X$ is nonsingular for every $T$ and also in the limit; $y_1$ and $y_2$ are $T$-component vectors of observable random variables; and $u_1$ and $u_2$ are $T$-component vectors of unobservable random variables that are independently and identically distributed with zero mean and contemporaneous variance-covariance matrix $\Sigma$.

a. Prove that the 3SLS estimator of $\gamma$ is identical with the 2SLS estimator of $\gamma$.

b. Transform Eqs. (1) and (2) to obtain the following equivalent model:

(3) $y_2 = \gamma^{-1} y_1 - \gamma^{-1} u_1$

(4) $y_1 = \gamma\beta_1 x_1 + \gamma\beta_2 x_2 + \gamma u_2 + u_1$.

Define the reverse 2SLS estimator of $\gamma$ as the reciprocal of the 2SLS estimator of $1/\gamma$ obtained from Eqs. (3) and (4). Prove that the reverse 2SLS estimator of $\gamma$ has the same asymptotic distribution as the 2SLS estimator of $\gamma$.
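The asymptotic equivalence claimed in part b can be illustrated numerically. In the sketch below, the 2SLS estimator of $1/\gamma$ is taken to be the coefficient from regressing $y_2$ on $y_1$ with $y_1$ instrumented by $X$; that reading of the transformed model, and all simulated parameter values, are assumptions for the demo.

```python
import numpy as np

# Assumed DGP matching Eqs. (1)-(2): y1 = gamma*y2 + u1,
# y2 = b1*x1 + b2*x2 + u2, with x1, x2 exogenous.
rng = np.random.default_rng(3)
T = 50_000
gamma, b1, b2 = 0.5, 1.0, 0.7               # assumed true parameters
x1 = rng.normal(size=T)
x2 = rng.normal(size=T)
u1 = rng.normal(size=T)
u2 = rng.normal(size=T)
y2 = b1 * x1 + b2 * x2 + u2
y1 = gamma * y2 + u1

X = np.column_stack([x1, x2])

def proj(w):
    # P_X w, computed without forming the T x T projection matrix
    return X @ np.linalg.solve(X.T @ X, X.T @ w)

p1, p2 = proj(y1), proj(y2)
gamma_2sls = (p2 @ y1) / (p2 @ y2)          # 2SLS of gamma from Eq. (1)
inv_gamma_2sls = (p1 @ y2) / (p1 @ y1)      # 2SLS of 1/gamma, y1 instrumented
gamma_rev = 1.0 / inv_gamma_2sls            # reverse 2SLS
print(gamma_2sls, gamma_rev)                # nearly identical, near gamma
```

In finite samples the two estimates differ, but the gap shrinks faster than the sampling error itself, which is the numerical face of their common asymptotic distribution.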

c. Assume that at period $p$ outside the sample the following relationships hold:

(5) $y_{1p} = \gamma y_{2p} + u_{1p}$

(6) $y_{2p} = \beta_1 x_{1p} + \beta_2 x_{2p} + u_{2p} \equiv \mathbf{x}_p'\beta + u_{2p}$,

where $u_{1p}$ and $u_{2p}$ are independent of $u_1$ and $u_2$ with zero mean and variance-covariance matrix $\Sigma$. We want to predict $y_{1p}$ when $x_{1p}$ and $x_{2p}$ are given. Compare the mean squared prediction error of the indirect least squares predictor, defined by