Generalized Least Squares

9.1 GLS Is More Efficient than OLS.

a. Equation (7.5) of Chap. 7 gives $\hat\beta_{OLS} = \beta + (X'X)^{-1}X'u$, so that $E(\hat\beta_{OLS}) = \beta$ as long as $X$ and $u$ are uncorrelated and $u$ has zero mean. Also,

$$\mathrm{var}(\hat\beta_{OLS}) = E(\hat\beta_{OLS} - \beta)(\hat\beta_{OLS} - \beta)' = E[(X'X)^{-1}X'uu'X(X'X)^{-1}]$$
$$= (X'X)^{-1}X'E(uu')X(X'X)^{-1} = \sigma^2(X'X)^{-1}X'\Omega X(X'X)^{-1}.$$

b.
$$\mathrm{var}(\hat\beta_{OLS}) - \mathrm{var}(\hat\beta_{GLS}) = \sigma^2[(X'X)^{-1}X'\Omega X(X'X)^{-1} - (X'\Omega^{-1}X)^{-1}]$$
$$= \sigma^2[(X'X)^{-1}X'\Omega X(X'X)^{-1} - (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}\Omega\,\Omega^{-1}X(X'\Omega^{-1}X)^{-1}]$$
$$= \sigma^2[(X'X)^{-1}X' - (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}]\,\Omega\,[X(X'X)^{-1} - \Omega^{-1}X(X'\Omega^{-1}X)^{-1}]$$
$$= \sigma^2 A\Omega A'$$

where $A = (X'X)^{-1}X' - (X'\Omega^{-1}X)^{-1}X'\Omega^{-1}$. The second equality post-multiplies $(X'\Omega^{-1}X)^{-1}$ by $(X'\Omega^{-1}X)(X'\Omega^{-1}X)^{-1}$, which is an identity of dimension $K$. The third equality follows since the cross-product terms give $-2(X'\Omega^{-1}X)^{-1}$. The difference in variances is positive semi-definite since $\Omega$ is positive definite.

9.2 a. From Chap. 7, we know that $s^2 = e'e/(n-K) = u'\bar P_X u/(n-K)$, where $\bar P_X = I_n - P_X$ and $P_X = X(X'X)^{-1}X'$, or $(n-K)s^2 = u'\bar P_X u$. Hence,

$$(n-K)E(s^2) = E(u'\bar P_X u) = E[\mathrm{tr}(uu'\bar P_X)] = \mathrm{tr}[E(uu')\bar P_X] = \mathrm{tr}(\Sigma\bar P_X) = \sigma^2\mathrm{tr}(\Omega\bar P_X)$$

and $E(s^2) = \sigma^2\mathrm{tr}(\Omega\bar P_X)/(n-K)$, which in general is not equal to $\sigma^2$.

B. H. Baltagi, Solutions Manual for Econometrics, Springer Texts in Business and Economics, DOI 10.1007/978-3-642-54548-1-9, © Springer-Verlag Berlin Heidelberg 2015

b. From part (a),

$$(n-K)E(s^2) = \mathrm{tr}(\Sigma\bar P_X) = \mathrm{tr}(\Sigma) - \mathrm{tr}(\Sigma P_X).$$

But both $\Sigma$ and $P_X$ are non-negative definite. Hence, $\mathrm{tr}(\Sigma P_X) \ge 0$ and $(n-K)E(s^2) \le \mathrm{tr}(\Sigma)$, which upon rearranging yields $E(s^2) \le \mathrm{tr}(\Sigma)/(n-K)$. Also, $\Sigma$ and $\bar P_X$ are non-negative definite. Hence, $\mathrm{tr}(\Sigma\bar P_X) \ge 0$ and therefore $E(s^2) \ge 0$. This proves the bound derived by Dufour (1986):

$$0 \le E(s^2) \le \mathrm{tr}(\Sigma)/(n-K)$$

where $\mathrm{tr}(\Sigma) = \sum_{i=1}^n \sigma_i^2$. Under homoskedasticity, $\sigma_i^2 = \sigma^2$ for $i = 1, 2, \ldots, n$. Hence, $\mathrm{tr}(\Sigma) = n\sigma^2$ and the upper bound becomes $n\sigma^2/(n-K)$. A useful bound for $E(s^2)$ has been derived by Sathe and Vinod (1974) and Neudecker (1977, 1978). This is given by

$$0 \le \text{mean of the } (n-K) \text{ smallest characteristic roots of } \Sigma \le E(s^2) \le \text{mean of the } (n-K) \text{ largest characteristic roots of } \Sigma \le \mathrm{tr}(\Sigma)/(n-K).$$
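Both the Dufour bound and the Sathe-Vinod/Neudecker eigenvalue bound can be checked numerically. The sketch below (an illustration, not part of the text's derivation) computes $E(s^2) = \mathrm{tr}(\Sigma\bar P_X)/(n-K)$ for a random design and confirms the chain of inequalities:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 15, 4
X = rng.standard_normal((n, K))
B = rng.standard_normal((n, n))
Sigma = B @ B.T                                  # Sigma = E(uu'), non-negative definite
Pbar = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T

Es2 = np.trace(Sigma @ Pbar) / (n - K)           # E(s^2)
roots = np.sort(np.linalg.eigvalsh(Sigma))       # characteristic roots, ascending
lower = roots[: n - K].mean()                    # mean of the n-K smallest roots
upper = roots[K:].mean()                         # mean of the n-K largest roots
assert 0 <= lower <= Es2 <= upper <= np.trace(Sigma) / (n - K)
```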

c. Using $s^2 = u'\bar P_X u/(n-K) = u'u/(n-K) - u'P_X u/(n-K)$, we have

$$\mathrm{plim}\; s^2 = \mathrm{plim}\; u'u/(n-K) - \mathrm{plim}\; u'P_X u/(n-K).$$

By assumption, $\mathrm{plim}\; u'u/n = \sigma^2$. Hence, the first term tends in plim to $\sigma^2$ as $n \to \infty$. The second term has expectation $\sigma^2\mathrm{tr}(P_X\Omega)/(n-K)$. But $P_X\Omega$ has rank $K$ and therefore exactly $K$ non-zero characteristic roots, each of which cannot exceed $\lambda_{max}$, the largest characteristic root of $\Omega$. This means that

$$E[u'P_X u/(n-K)] \le \sigma^2 K\lambda_{max}/(n-K).$$

Using the condition that $\lambda_{max}/n \to 0$ as $n \to \infty$ proves that $\lim E[u'P_X u/(n-K)] = 0$ as $n \to \infty$. Hence, $\mathrm{plim}\; u'P_X u/(n-K) = 0$ as $n \to \infty$ and $\mathrm{plim}\; s^2 = \sigma^2$. Therefore, a sufficient condition for $s^2$ to be consistent for $\sigma^2$ irrespective of $X$ is that $\lambda_{max}/n \to 0$ and $\mathrm{plim}(u'u/n) = \sigma^2$ as $n \to \infty$; see Krämer and Berghoff (1991).

d. From (9.6), $s^{*2} = e^{*\prime}e^*/(n-K)$ where $e^* = y^* - X^*\hat\beta_{GLS} = y^* - X^*(X^{*\prime}X^*)^{-1}X^{*\prime}y^* = \bar P_{X^*}y^*$ using (9.4), where $\bar P_{X^*} = I_n - P_{X^*}$ and $P_{X^*} = X^*(X^{*\prime}X^*)^{-1}X^{*\prime}$. Substituting $y^*$ from (9.3), we get $e^* = \bar P_{X^*}u^*$ since $\bar P_{X^*}X^* = 0$. Hence, $(n-K)s^{*2} = e^{*\prime}e^* = u^{*\prime}\bar P_{X^*}u^*$ with

$$(n-K)E(s^{*2}) = E(u^{*\prime}\bar P_{X^*}u^*) = E[\mathrm{tr}(u^*u^{*\prime}\bar P_{X^*})] = \mathrm{tr}[E(u^*u^{*\prime})\bar P_{X^*}] = \mathrm{tr}(\sigma^2\bar P_{X^*}) = \sigma^2(n-K)$$

from the fact that $\mathrm{var}(u^*) = \sigma^2 I_n$. Hence, $E(s^{*2}) = \sigma^2$ and $s^{*2}$ is unbiased for $\sigma^2$.

9.3 The AR(1) Model.

a. The multiplication of $(1-\rho^2)\Omega^{-1}$ by $\Omega$, given in (9.9), is tedious but simple and yields $(1-\rho^2)I_T$. The (1,1) element automatically gives $(1-\rho^2)$. The (1,2) element gives $-\rho + \rho(1+\rho^2) - \rho\cdot\rho^2 = -\rho + \rho + \rho^3 - \rho^3 = 0$. The (2,2) element gives $-\rho^2 + (1+\rho^2) - \rho\cdot\rho = 1-\rho^2$, and so on.

b. Again, the multiplication is simple but tedious. The (1,1) element gives $\sqrt{1-\rho^2}\cdot\sqrt{1-\rho^2} - \rho(-\rho) = (1-\rho^2) + \rho^2 = 1$, the (1,2) element gives $\sqrt{1-\rho^2}\cdot 0 - \rho\cdot 1 = -\rho$, the (2,2) element gives $1 - \rho(-\rho) = 1 + \rho^2$, and so on.

c. From part (b), we verified that $P^{-1\prime}P^{-1} = (1-\rho^2)\Omega^{-1}$. Hence, $\Omega^{-1} = P^{-1\prime}P^{-1}/(1-\rho^2)$ or $\Omega = (1-\rho^2)PP'$. Therefore,

$$\mathrm{var}(P^{-1}u) = P^{-1}\mathrm{var}(u)P^{-1\prime} = \sigma_u^2 P^{-1}\Omega P^{-1\prime} = \sigma_u^2(1-\rho^2)P^{-1}PP'P^{-1\prime} = \sigma_\epsilon^2 I_T$$

since $\sigma_u^2 = \sigma_\epsilon^2/(1-\rho^2)$.
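The Prais-Winsten algebra above is easy to confirm numerically. The sketch below (illustrative, with $T$ and $\rho$ chosen arbitrarily) builds the AR(1) covariance $\Omega_{ts} = \rho^{|t-s|}$ and the transformation matrix $P^{-1}$, then checks that $P^{-1}$ whitens the disturbances:

```python
import numpy as np

rho, T = 0.6, 6
sigma_eps2 = 1.0
sigma_u2 = sigma_eps2 / (1 - rho**2)

# Omega_ts = rho^|t-s| for stationary AR(1) disturbances
t = np.arange(T)
Omega = rho ** np.abs(t[:, None] - t[None, :])

# Prais-Winsten transformation P^{-1}: sqrt(1-rho^2) in the (1,1) slot,
# then rows (-rho, 1) down the band
Pinv = np.eye(T)
Pinv[0, 0] = np.sqrt(1 - rho**2)
for i in range(1, T):
    Pinv[i, i - 1] = -rho

# var(P^{-1}u) = sigma_u^2 P^{-1} Omega P^{-1}' = sigma_eps^2 I_T
assert np.allclose(sigma_u2 * Pinv @ Omega @ Pinv.T, sigma_eps2 * np.eye(T))
# equivalently, P^{-1}' P^{-1} = (1 - rho^2) Omega^{-1}
assert np.allclose(Pinv.T @ Pinv, (1 - rho**2) * np.linalg.inv(Omega))
```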

9.4 Restricted GLS. From Chap. 7, restricted least squares is given by

$$\hat\beta_{RLS} = \hat\beta_{OLS} + (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(r - R\hat\beta_{OLS}).$$

Applying the same analysis to the transformed model in (9.3), we get $\hat\beta^*_{OLS} = (X^{*\prime}X^*)^{-1}X^{*\prime}y^* = \hat\beta_{GLS}$. From (9.4) and the above restricted estimator, we get

$$\hat\beta_{RGLS} = \hat\beta_{GLS} + (X^{*\prime}X^*)^{-1}R'[R(X^{*\prime}X^*)^{-1}R']^{-1}(r - R\hat\beta_{GLS})$$

where $X^*$ now replaces $X$. But $X^{*\prime}X^* = X'\Omega^{-1}X$; hence,

$$\hat\beta_{RGLS} = \hat\beta_{GLS} + (X'\Omega^{-1}X)^{-1}R'[R(X'\Omega^{-1}X)^{-1}R']^{-1}(r - R\hat\beta_{GLS}).$$
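As an illustrative check (all names and the particular restriction below are invented for the example), the restricted GLS formula can be confirmed by comparing it against restricted least squares on the transformed model $y^* = P^{-1}y$, $X^* = P^{-1}X$ with $\Omega = PP'$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, K = 30, 4
X = rng.standard_normal((n, K))
B = rng.standard_normal((n, n))
Omega = B @ B.T + n * np.eye(n)
Oinv = np.linalg.inv(Omega)
y = X @ np.ones(K) + rng.standard_normal(n)

R = np.array([[1.0, -1.0, 0.0, 0.0]])   # example restriction: beta1 = beta2
r = np.array([0.0])

XOX_inv = np.linalg.inv(X.T @ Oinv @ X)
b_gls = XOX_inv @ X.T @ Oinv @ y
b_rgls = b_gls + XOX_inv @ R.T @ np.linalg.inv(R @ XOX_inv @ R.T) @ (r - R @ b_gls)
assert np.allclose(R @ b_rgls, r)        # restriction holds exactly

# restricted LS on the transformed model gives the same estimator
L = np.linalg.cholesky(Omega)            # Omega = L L', so P^{-1} = L^{-1}
Xs, ys = np.linalg.inv(L) @ X, np.linalg.inv(L) @ y
XX_inv = np.linalg.inv(Xs.T @ Xs)
b_ols_s = XX_inv @ Xs.T @ ys
b_rls_s = b_ols_s + XX_inv @ R.T @ np.linalg.inv(R @ XX_inv @ R.T) @ (r - R @ b_ols_s)
assert np.allclose(b_rls_s, b_rgls)
```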

9.5 Best Linear Unbiased Prediction. This is based on Goldberger (1962).

a. Consider linear predictors of the scalar $y_{T+s}$ given by $\hat y_{T+s} = c'y$. From (9.1) we get $\hat y_{T+s} = c'X\beta + c'u$, and using the fact that $y_{T+s} = x'_{T+s}\beta + u_{T+s}$, we get

$$\hat y_{T+s} - y_{T+s} = (c'X - x'_{T+s})\beta + c'u - u_{T+s}.$$

The unbiasedness condition is given by $E(\hat y_{T+s} - y_{T+s}) = 0$. Since $E(u) = 0$ and $E(u_{T+s}) = 0$, this requires that $c'X = x'_{T+s}$ for this to hold for every $\beta$. Therefore, an unbiased predictor will have prediction error

$$\hat y_{T+s} - y_{T+s} = c'u - u_{T+s}.$$

b. The prediction variance is given by

$$\mathrm{var}(\hat y_{T+s}) = E(\hat y_{T+s} - y_{T+s})(\hat y_{T+s} - y_{T+s})' = E(c'u - u_{T+s})(c'u - u_{T+s})'$$
$$= c'E(uu')c + \mathrm{var}(u_{T+s}) - 2c'E(u_{T+s}u) = c'\Sigma c + \sigma^2_{T+s} - 2c'\omega$$

using the definitions $\sigma^2_{T+s} = \mathrm{var}(u_{T+s})$ and $\omega = E(u_{T+s}u)$.

c. Minimizing $\mathrm{var}(\hat y_{T+s})$ subject to $c'X = x'_{T+s}$ sets up the following Lagrangian function

$$\psi(c, \lambda) = c'\Sigma c - 2c'\omega - 2\lambda'(X'c - x_{T+s})$$

where $\sigma^2_{T+s}$ is fixed and $\lambda$ denotes the $K \times 1$ vector of Lagrange multipliers. The first-order conditions of $\psi$ with respect to $c$ and $\lambda$ yield $\partial\psi/\partial c = 2\Sigma c - 2\omega - 2X\lambda = 0$ and $\partial\psi/\partial\lambda = 2X'c - 2x_{T+s} = 0$. In matrix form, these two equations become

$$\begin{pmatrix} \Sigma & X \\ X' & 0 \end{pmatrix}\begin{pmatrix} c \\ -\lambda \end{pmatrix} = \begin{pmatrix} \omega \\ x_{T+s} \end{pmatrix}.$$

Using partitioned inverse matrix formulas, one gets

$$\begin{pmatrix} \Sigma & X \\ X' & 0 \end{pmatrix}^{-1} = \begin{pmatrix} \Sigma^{-1}[I_T - X(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}] & \Sigma^{-1}X(X'\Sigma^{-1}X)^{-1} \\ (X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1} & -(X'\Sigma^{-1}X)^{-1} \end{pmatrix}$$

so that

$$\hat c = \Sigma^{-1}X(X'\Sigma^{-1}X)^{-1}x_{T+s} + \Sigma^{-1}[I_T - X(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}]\omega.$$

Therefore, the BLUP is given by

$$\hat y_{T+s} = \hat c'y = x'_{T+s}(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y + \omega'\Sigma^{-1}y - \omega'\Sigma^{-1}X(X'\Sigma^{-1}X)^{-1}X'\Sigma^{-1}y$$
$$= x'_{T+s}\hat\beta_{GLS} + \omega'\Sigma^{-1}(y - X\hat\beta_{GLS}) = x'_{T+s}\hat\beta_{GLS} + \omega'\Sigma^{-1}e_{GLS}$$

where $e_{GLS} = y - X\hat\beta_{GLS}$. For $\Sigma = \sigma^2\Omega$, this can also be written as $\hat y_{T+s} = x'_{T+s}\hat\beta_{GLS} + \omega'\Omega^{-1}e_{GLS}/\sigma^2$.

d. For the stationary AR(1) case, $u_t = \rho u_{t-1} + \epsilon_t$ with $\epsilon_t \sim \mathrm{IID}(0, \sigma_\epsilon^2)$, $|\rho| < 1$ and $\mathrm{var}(u_t) = \sigma_u^2 = \sigma_\epsilon^2/(1-\rho^2)$. In this case, $\mathrm{cov}(u_t, u_{t-s}) = \rho^s\sigma_u^2$.

Therefore, for an $s$-period-ahead forecast, we get

$$\omega = E(u_{T+s}u) = \begin{pmatrix} E(u_{T+s}u_1) \\ E(u_{T+s}u_2) \\ \vdots \\ E(u_{T+s}u_T) \end{pmatrix} = \sigma_u^2\begin{pmatrix} \rho^{T+s-1} \\ \rho^{T+s-2} \\ \vdots \\ \rho^s \end{pmatrix}.$$

From $\Omega$ given in (9.9), we can deduce that $\omega = \rho^s\sigma_u^2\,(\text{last column of }\Omega)$. But $\Omega^{-1}\Omega = I_T$. Hence, $\Omega^{-1}(\text{last column of }\Omega) = (\text{last column of }I_T) = (0, 0, \ldots, 1)'$. Substituting for the last column of $\Omega$ the expression $\omega/\rho^s\sigma_u^2$ yields

$$\Omega^{-1}\omega/\rho^s\sigma_u^2 = (0, 0, \ldots, 1)'$$

which can be transposed and rewritten as

$$\omega'\Omega^{-1}/\sigma_u^2 = \rho^s(0, 0, \ldots, 1).$$

Substituting this expression in the BLUP for $\hat y_{T+s}$ in part (c), we get

$$\hat y_{T+s} = x'_{T+s}\hat\beta_{GLS} + \omega'\Omega^{-1}e_{GLS}/\sigma_u^2 = x'_{T+s}\hat\beta_{GLS} + \rho^s(0, 0, \ldots, 1)e_{GLS} = x'_{T+s}\hat\beta_{GLS} + \rho^s e_{T,GLS}$$

where $e_{T,GLS}$ is the $T$-th GLS residual. For $s = 1$, this gives $\hat y_{T+1} = x'_{T+1}\hat\beta_{GLS} + \rho e_{T,GLS}$ as shown in the text.
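The key step, that $\omega'\Omega^{-1}/\sigma_u^2$ picks out $\rho^s$ times the last GLS residual, can be verified numerically. The sketch below (parameter values chosen for illustration) checks both that $\omega/\sigma_u^2$ equals $\rho^s$ times the last column of $\Omega$ and that $\omega'\Omega^{-1} = \rho^s(0, \ldots, 0, 1)$:

```python
import numpy as np

rho, T, s = 0.5, 8, 3
t = np.arange(T)
Omega = rho ** np.abs(t[:, None] - t[None, :])   # var(u) / sigma_u^2 for AR(1)

# omega/sigma_u^2 = (rho^{T+s-1}, ..., rho^s)' = rho^s * (last column of Omega)
omega = rho ** (T + s - 1 - t)
assert np.allclose(omega, rho**s * Omega[:, -1])

# hence omega' Omega^{-1} = rho^s (0, ..., 0, 1)
w = omega @ np.linalg.inv(Omega)
assert np.allclose(w, rho**s * np.eye(T)[-1])
```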

9.6 The W, LR and LM Inequality. From Eq. (9.27), the Wald statistic W can be interpreted as an LR statistic conditional on $\hat\Sigma$, the unrestricted MLE of $\Sigma$, i.e.,

$$W = -2\log\left[\max_{R\beta=r} L(\beta/\hat\Sigma)\Big/\max_{\beta} L(\beta/\hat\Sigma)\right].$$

But, from (9.34), we know that the likelihood ratio statistic is

$$LR = -2\log\left[\max_{R\beta=r,\,\Sigma} L(\beta, \Sigma)\Big/\max_{\beta,\,\Sigma} L(\beta, \Sigma)\right].$$

Using (9.33), $\max_{R\beta=r} L(\beta/\hat\Sigma) \le \max_{R\beta=r,\,\Sigma} L(\beta, \Sigma)$. The right-hand side term is an unconditional maximum over all $\Sigma$, whereas the left-hand side is a conditional maximum based on $\hat\Sigma$ under the null hypothesis $H_0$: $R\beta = r$. Also, from (9.32), $\max_{\beta,\,\Sigma} L(\beta, \Sigma) = \max_{\beta} L(\beta/\hat\Sigma)$. Therefore, $W \ge LR$. Similarly, from Eq. (9.31), the Lagrange Multiplier statistic can be interpreted as an LR statistic conditional on $\tilde\Sigma$, the restricted maximum likelihood estimate of $\Sigma$, i.e.,

$$LM = -2\log\left[\max_{R\beta=r} L(\beta/\tilde\Sigma)\Big/\max_{\beta} L(\beta/\tilde\Sigma)\right].$$

Using (9.33), $\max_{R\beta=r} L(\beta/\tilde\Sigma) = \max_{R\beta=r,\,\Sigma} L(\beta, \Sigma)$, and from (9.32), we get $\max_{\beta} L(\beta/\tilde\Sigma) \le \max_{\beta,\,\Sigma} L(\beta, \Sigma)$ because the latter is an unconditional maximum over all $\Sigma$. Hence, $LR \ge LM$. Therefore, $W \ge LR \ge LM$.

9.7 The W, LR and LM for this simple regression with $H_0$: $\beta = 0$ were derived in problem 7.16 in Chap. 7. Here, we follow the alternative derivation proposed by Breusch (1979) and considered in problem 9.6. From (9.34), the LR is given by

$$LR = -2\log\left[L(\tilde\alpha, \beta = 0, \tilde\sigma^2)/L(\hat\alpha_{mle}, \hat\beta_{mle}, \hat\sigma^2_{mle})\right]$$

where $\tilde\alpha = \bar y$, $\tilde\beta = 0$, $\tilde\sigma^2 = \sum_{i=1}^n (y_i - \bar y)^2/n$ and

$$\hat\alpha_{mle} = \hat\alpha_{OLS} = \bar y - \hat\beta_{OLS}\bar x, \qquad \hat\beta_{mle} = \hat\beta_{OLS} = \sum_{i=1}^n x_iy_i\Big/\sum_{i=1}^n x_i^2, \qquad \hat\sigma^2_{mle} = \sum_{i=1}^n e_i^2/n$$

and $e_i = y_i - \hat\alpha_{OLS} - \hat\beta_{OLS}x_i$; see the solution to problem 7.16. But,

$$\log L(\alpha, \beta, \sigma^2) = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log\sigma^2 - \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2/2\sigma^2.$$

Therefore,

$$\log L(\tilde\alpha, \beta = 0, \tilde\sigma^2) = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log\tilde\sigma^2 - \sum_{i=1}^n (y_i - \bar y)^2/2\tilde\sigma^2 = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log\tilde\sigma^2 - \frac{n}{2}$$

and

$$\log L(\hat\alpha_{mle}, \hat\beta_{mle}, \hat\sigma^2_{mle}) = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log\hat\sigma^2_{mle} - \frac{n}{2}.$$

Therefore,

$$LR = -2\left[-\frac{n}{2}\log\tilde\sigma^2 + \frac{n}{2}\log\hat\sigma^2_{mle}\right] = n\log(\tilde\sigma^2/\hat\sigma^2_{mle}) = n\log(\mathrm{TSS}/\mathrm{RSS}) = n\log[1/(1 - R^2)]$$

where TSS = total sum of squares and RSS = residual sum of squares for the simple regression. Of course, $R^2 = 1 - (\mathrm{RSS}/\mathrm{TSS})$.

Similarly, from (9.31), we have

$$LM = -2\log\left[\max_{\beta=0} L(\alpha, \beta/\tilde\sigma^2)\Big/\max_{\alpha,\beta} L(\alpha, \beta/\tilde\sigma^2)\right].$$

But maximization of $L(\alpha, \beta/\tilde\sigma^2)$ gives $\hat\alpha_{OLS}$ and $\hat\beta_{OLS}$. Therefore,

$$\max_{\alpha,\beta} L(\alpha, \beta/\tilde\sigma^2) = L(\hat\alpha, \hat\beta, \tilde\sigma^2)$$

with

$$\log L(\hat\alpha, \hat\beta, \tilde\sigma^2) = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log\tilde\sigma^2 - \sum_{i=1}^n e_i^2/2\tilde\sigma^2.$$

Also, restricted maximization of $L(\alpha, \beta/\tilde\sigma^2)$ under $H_0$: $\beta = 0$ gives $\tilde\alpha = \bar y$ and $\beta = 0$. Therefore, $\max_{\beta=0} L(\alpha, \beta/\tilde\sigma^2) = L(\tilde\alpha, \beta = 0, \tilde\sigma^2)$. From this, we conclude that

$$LM = -2\left[-\frac{n}{2} + \sum_{i=1}^n e_i^2/2\tilde\sigma^2\right] = n - \sum_{i=1}^n e_i^2\Big/\left(\sum_{i=1}^n (y_i - \bar y)^2/n\right) = n[1 - (\mathrm{RSS}/\mathrm{TSS})] = nR^2.$$

Finally, from (9.27), we have

$$W = -2\log\left[\max_{\beta=0} L(\alpha, \beta/\hat\sigma^2_{mle})\Big/\max_{\alpha,\beta} L(\alpha, \beta/\hat\sigma^2_{mle})\right].$$

The maximization of $L(\alpha, \beta/\hat\sigma^2_{mle})$ gives $\hat\alpha_{OLS}$ and $\hat\beta_{OLS}$. Therefore,

$$\max_{\alpha,\beta} L(\alpha, \beta/\hat\sigma^2_{mle}) = L(\hat\alpha, \hat\beta, \hat\sigma^2_{mle}).$$

Also, restricted maximization of $L(\alpha, \beta/\hat\sigma^2_{mle})$ under $\beta = 0$ gives $\tilde\alpha = \bar y$ and $\beta = 0$. Therefore, $\max_{\beta=0} L(\alpha, \beta/\hat\sigma^2_{mle}) = L(\tilde\alpha, \beta = 0, \hat\sigma^2_{mle})$ with

$$\log L(\tilde\alpha, \beta = 0, \hat\sigma^2_{mle}) = -\frac{n}{2}\log 2\pi - \frac{n}{2}\log\hat\sigma^2_{mle} - \sum_{i=1}^n (y_i - \bar y)^2/2\hat\sigma^2_{mle}.$$

Hence,

$$W = \sum_{i=1}^n (y_i - \bar y)^2/\hat\sigma^2_{mle} - n = n\,\frac{\mathrm{TSS}}{\mathrm{RSS}} - n = n\,\frac{\mathrm{TSS} - \mathrm{RSS}}{\mathrm{RSS}} = \frac{nR^2}{1 - R^2}.$$

This is exactly what we got in problem 7.16, but now from Breusch's (1979) alternative derivation. From problem 9.6, we infer using this LR interpretation of all three statistics that $W \ge LR \ge LM$.
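The three closed forms $W = nR^2/(1-R^2)$, $LR = n\log[1/(1-R^2)]$ and $LM = nR^2$, and the inequality among them, can be checked on simulated data. The sketch below (illustrative data, not from the text) fits the simple regression in deviation form:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.standard_normal(n)
y = 1.0 + 0.3 * x + rng.standard_normal(n)

# OLS in deviation form for the simple regression y_i = alpha + beta x_i + u_i
xd, yd = x - x.mean(), y - y.mean()
beta_hat = (xd @ yd) / (xd @ xd)
e = yd - beta_hat * xd
R2 = 1 - (e @ e) / (yd @ yd)           # R^2 = 1 - RSS/TSS

W = n * R2 / (1 - R2)
LR = n * np.log(1 / (1 - R2))
LM = n * R2
assert W >= LR >= LM                    # the W >= LR >= LM inequality
```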

9.8 Sampling Distributions and Efficiency Comparison of OLS and GLS. This is based on Baltagi (1992).

a. From the model it is clear that $\sum_{t=1}^2 x_t^2 = 5$, $y_1 = 2 + u_1$, $y_2 = 4 + u_2$, and

$$\hat\beta_{OLS} = \sum_{t=1}^2 x_ty_t\Big/\sum_{t=1}^2 x_t^2 = \beta + \sum_{t=1}^2 x_tu_t\Big/\sum_{t=1}^2 x_t^2 = 2 + 0.2u_1 + 0.4u_2.$$

Let $u' = (u_1, u_2)$; then it is easy to verify that $E(u) = 0$ and

$$\Sigma = \mathrm{var}(u) = \begin{pmatrix} 1 & -1 \\ -1 & 4 \end{pmatrix}.$$

The disturbances have zero mean, are heteroskedastic, and are serially correlated with correlation coefficient $\rho = -0.5$.

b. Using the joint probability function $P(u_1, u_2)$ and $\hat\beta_{OLS}$ from part (a), one gets

    $\hat\beta_{OLS}$    Probability
    1                    1/8
    1.4                  3/8
    2.6                  3/8
    3                    1/8

Therefore, $E(\hat\beta_{OLS}) = \beta = 2$ and $\mathrm{var}(\hat\beta_{OLS}) = 0.52$. These results can also be verified from $\hat\beta_{OLS} = 2 + 0.2u_1 + 0.4u_2$. In fact, $E(\hat\beta_{OLS}) = 2$ since $E(u_1) = E(u_2) = 0$, and

$$\mathrm{var}(\hat\beta_{OLS}) = 0.04\,\mathrm{var}(u_1) + 0.16\,\mathrm{var}(u_2) + 0.16\,\mathrm{cov}(u_1, u_2) = 0.04 + 0.64 - 0.16 = 0.52.$$

Also,

$$\hat\beta_{GLS} = (x'\Sigma^{-1}x)^{-1}x'\Sigma^{-1}y = \tfrac{1}{4}(2y_1 + y_2)$$

which can be rewritten as $\hat\beta_{GLS} = 2 + \tfrac{1}{4}(2u_1 + u_2)$. Using $P(u_1, u_2)$ and this equation for $\hat\beta_{GLS}$, one gets

    $\hat\beta_{GLS}$    Probability
    1                    1/8
    2                    3/4
    3                    1/8

Therefore, $E(\hat\beta_{GLS}) = \beta = 2$ and $\mathrm{var}(\hat\beta_{GLS}) = 0.25$. This can also be verified from $\hat\beta_{GLS} = 2 + \tfrac{1}{4}(2u_1 + u_2)$. In fact, $E(\hat\beta_{GLS}) = 2$ since $E(u_1) = E(u_2) = 0$, and

$$\mathrm{var}(\hat\beta_{GLS}) = \tfrac{1}{16}[4\,\mathrm{var}(u_1) + \mathrm{var}(u_2) + 4\,\mathrm{cov}(u_1, u_2)] = \tfrac{1}{16}[4 + 4 - 4] = \tfrac{1}{4} = 0.25.$$

This variance is approximately 48% of the variance of the OLS estimator.

c. The OLS predictions are given by $\hat y_t = \hat\beta_{OLS}x_t$, which means that $\hat y_1 = \hat\beta_{OLS}$ and $\hat y_2 = 2\hat\beta_{OLS}$. The OLS residuals are given by $e_t = y_t - \hat y_t$, and their probability function is given by

    $(e_1, e_2)$     Probability
    (0, 0)           1/4
    (1.6, -0.8)      3/8
    (-1.6, 0.8)      3/8

so that $E[\text{estimated var}(\hat\beta_{OLS})] = 0.48 \ne \mathrm{var}(\hat\beta_{OLS}) = 0.52$.

Similarly, the GLS predictions are given by $\hat y_t = \hat\beta_{GLS}x_t$, which means that $\hat y_1 = \hat\beta_{GLS}$ and $\hat y_2 = 2\hat\beta_{GLS}$. The GLS residuals are given by $e_t = y_t - \hat y_t$, and their probability function is given by

    $(e_1, e_2)$     Probability
    (0, 0)           1/4
    (1, -2)          3/8
    (-1, 2)          3/8

The MSE of the GLS regression is given by $s^{*2} = e'\Omega^{-1}e = \tfrac{1}{3}[4e_1^2 + 2e_1e_2 + e_2^2]$, and this has probability function

    $s^{*2}$     Probability
    0            1/4
    4/3          3/4

with $E(s^{*2}) = 1$. An alternative solution of this problem is given by Im and Snow (1993).
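The discrete distribution in this problem can be enumerated exhaustively. The sketch below assumes the four-point joint distribution of $(u_1, u_2)$ implied by the tables above (the support and weights are inferred here, so treat them as an assumption consistent with the stated moments), and verifies the moments of both estimators:

```python
import numpy as np

# assumed four-point joint distribution of (u1, u2), consistent with
# E(u) = 0, var(u1) = 1, var(u2) = 4, cov(u1, u2) = -1 and the tables above
P = {(1, -2): 3/8, (-1, 2): 3/8, (1, 2): 1/8, (-1, -2): 1/8}
E = lambda f: sum(p * f(u1, u2) for (u1, u2), p in P.items())

assert E(lambda u1, u2: u1) == 0 and E(lambda u1, u2: u2) == 0
assert E(lambda u1, u2: u1 * u1) == 1 and E(lambda u1, u2: u2 * u2) == 4
assert E(lambda u1, u2: u1 * u2) == -1                  # so rho = -0.5

b_ols = lambda u1, u2: 2 + 0.2 * u1 + 0.4 * u2          # from part (a)
b_gls = lambda u1, u2: 2 + (2 * u1 + u2) / 4            # from part (b)
assert np.isclose(E(b_ols), 2) and np.isclose(E(b_gls), 2)   # both unbiased
assert np.isclose(E(lambda u1, u2: (b_ols(u1, u2) - 2) ** 2), 0.52)
assert np.isclose(E(lambda u1, u2: (b_gls(u1, u2) - 2) ** 2), 0.25)
```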

9.9 Equi-correlation.

a. For the regression with equi-correlated disturbances, OLS is equivalent to GLS as long as there is a constant in the regression model. Note that

$$\Omega = \begin{pmatrix} 1 & \rho & \cdots & \rho \\ \rho & 1 & \cdots & \rho \\ \vdots & & \ddots & \vdots \\ \rho & \rho & \cdots & 1 \end{pmatrix} = (1-\rho)I_T + \rho\,\iota_T\iota_T'$$

so that $u_t$ is homoskedastic and has constant serial correlation. In fact, $\mathrm{corr}(u_t, u_{t-s}) = \rho$ for $t \ne s$; therefore, this is called equi-correlated. Zyskind's (1967) condition given in (9.8) yields $P_X\Omega = \Omega P_X$. In this case,

$$P_X\Omega = (1-\rho)P_X + \rho\,P_X\iota_T\iota_T' \qquad\text{and}\qquad \Omega P_X = (1-\rho)P_X + \rho\,\iota_T\iota_T'P_X.$$

But we know that $X$ contains a constant, i.e., a column of ones denoted by $\iota_T$. Therefore, using $P_XX = X$, we get $P_X\iota_T = \iota_T$ since $\iota_T$ is a column of $X$. Substituting this in $P_X\Omega$, we get $P_X\Omega = (1-\rho)P_X + \rho\,\iota_T\iota_T'$. Similarly, substituting $\iota_T'P_X = \iota_T'$ in $\Omega P_X$, we get $\Omega P_X = (1-\rho)P_X + \rho\,\iota_T\iota_T'$. Hence, $\Omega P_X = P_X\Omega$ and OLS is equivalent to GLS for this model.

b. We know that $(T-K)s^2 = u'\bar P_Xu$; see Chap. 7. Also,

$$E(u'\bar P_Xu) = E[\mathrm{tr}(uu'\bar P_X)] = \mathrm{tr}[E(uu')\bar P_X] = \mathrm{tr}(\sigma^2\Omega\bar P_X) = \sigma^2\mathrm{tr}[(1-\rho)\bar P_X + \rho\,\iota_T\iota_T'\bar P_X] = \sigma^2(1-\rho)\mathrm{tr}(\bar P_X)$$

since $\iota_T'\bar P_X = \iota_T' - \iota_T'P_X = \iota_T' - \iota_T' = 0$; see part (a). But $\mathrm{tr}(\bar P_X) = T - K$; hence, $E(u'\bar P_Xu) = \sigma^2(1-\rho)(T-K)$ and $E(s^2) = \sigma^2(1-\rho)$.

Now for $\Omega$ to be positive semi-definite, it should be true that for every arbitrary non-zero vector $a$ we have $a'\Omega a \ge 0$. In particular, for $a = \iota_T$, we get

$$\iota_T'\Omega\iota_T = (1-\rho)\iota_T'\iota_T + \rho\,\iota_T'\iota_T\iota_T'\iota_T = T(1-\rho) + T^2\rho.$$

This should be non-negative for every $\rho$. Hence, $(T^2 - T)\rho + T \ge 0$, which gives $\rho \ge -1/(T-1)$. But we know that $|\rho| \le 1$. Hence, $-1/(T-1) \le \rho \le 1$ as required. This means that $0 \le E(s^2) \le [T/(T-1)]\sigma^2$, where the lower and upper bounds for $E(s^2)$ are attained at $\rho = 1$ and $\rho = -1/(T-1)$, respectively. These bounds were derived by Dufour (1986).
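Zyskind's condition and the bias of $s^2$ under equi-correlation are easy to verify numerically. The sketch below (illustrative dimensions and $\rho$) builds a design matrix with a constant column and checks both results:

```python
import numpy as np

rng = np.random.default_rng(3)
T, K, rho = 12, 3, 0.4
# design matrix including a column of ones (the constant)
X = np.column_stack([np.ones(T), rng.standard_normal((T, K - 1))])
Omega = (1 - rho) * np.eye(T) + rho * np.ones((T, T))

Px = X @ np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(Px @ Omega, Omega @ Px)       # Zyskind's condition: OLS = GLS

Pbar = np.eye(T) - Px
# E[(T-K) s^2] = sigma^2 tr(Omega Pbar) = sigma^2 (1-rho)(T-K), with sigma^2 = 1
assert np.isclose(np.trace(Omega @ Pbar), (1 - rho) * (T - K))
```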

9.10 a. The model can be written in vector form as $y = \alpha\iota_n + u$ where $y' = (y_1, \ldots, y_n)$, $\iota_n$ is a vector of ones of dimension $n$, and $u' = (u_1, \ldots, u_n)$. Therefore,

$$\hat\alpha_{OLS} = (\iota_n'\iota_n)^{-1}\iota_n'y = \sum_{i=1}^n y_i/n = \bar y$$

and

$$\Sigma = \mathrm{var}(u) = \sigma^2[(1-\rho)I_n + \rho J_n]$$

where $I_n$ is an identity matrix of dimension $n$ and $J_n$ is a matrix of ones of dimension $n$. Define $E_n = I_n - \bar J_n$ where $\bar J_n = J_n/n$; one can rewrite $\Sigma$ as $\Sigma = \sigma^2[(1-\rho)E_n + (1 + \rho(n-1))\bar J_n] = \sigma^2\Omega$ with

$$\Omega^{-1} = \frac{E_n}{1-\rho} + \frac{\bar J_n}{1 + \rho(n-1)}$$

since $E_n$ and $\bar J_n$ are idempotent and $E_n\bar J_n = 0$. Therefore, using $\iota_n'E_n = 0$ and $\iota_n'\bar J_n = \iota_n'$,

$$\hat\alpha_{GLS} = (\iota_n'\Omega^{-1}\iota_n)^{-1}\iota_n'\Omega^{-1}y = \left[\frac{n}{1+\rho(n-1)}\right]^{-1}\frac{\iota_n'y}{1+\rho(n-1)} = \frac{\iota_n'y}{n} = \bar y.$$

b. $s^2 = e'e/(n-1)$ where $e$ is the vector of OLS residuals with typical element $e_i = y_i - \bar y$ for $i = 1, \ldots, n$. In vector form, $e = E_ny$ and $s^2 = y'E_ny/(n-1) = u'E_nu/(n-1)$ since $E_n\iota_n = 0$. But,

$$E(u'E_nu) = \mathrm{tr}(\Sigma E_n) = \sigma^2(1-\rho)\mathrm{tr}(E_n) = \sigma^2(1-\rho)(n-1)$$

since $\bar J_nE_n = 0$ and $\mathrm{tr}(E_n) = n-1$. Hence, $E(s^2) = \sigma^2(1-\rho)$ and $E(s^2) - \sigma^2 = -\rho\sigma^2$. This bias is negative if $0 < \rho < 1$ and positive if $-1/(n-1) < \rho < 0$.

c. $s^{*2} = e_{GLS}'\Omega^{-1}e_{GLS}/(n-1) = e'\Omega^{-1}e/(n-1)$ where $e_{GLS}$ denotes the vector of GLS residuals, which in this case is identical to the vector of OLS residuals. Substituting $e = E_ny$, we get

$$s^{*2} = \frac{y'E_n\Omega^{-1}E_ny}{n-1} = \frac{u'E_n\Omega^{-1}E_nu}{n-1}$$

since $E_n\iota_n = 0$. Hence,

$$E(s^{*2}) = \sigma^2\mathrm{tr}(\Omega E_n\Omega^{-1}E_n)/(n-1) = \sigma^2\mathrm{tr}(E_n)/(n-1) = \sigma^2$$

using $\Omega E_n = (1-\rho)E_n$ and $\Omega^{-1}E_n = E_n/(1-\rho)$.

d. The true variance of $\hat\alpha_{OLS}$ is

$$\mathrm{var}(\hat\alpha_{OLS}) = (\iota_n'\iota_n)^{-1}\iota_n'\Sigma\iota_n(\iota_n'\iota_n)^{-1} = \iota_n'\Sigma\iota_n/n^2 = \sigma^2[1 + \rho(n-1)]\iota_n'\bar J_n\iota_n/n^2 = \sigma^2[1 + \rho(n-1)]/n$$

which is equal to $\mathrm{var}(\hat\alpha_{GLS}) = (\iota_n'\Sigma^{-1}\iota_n)^{-1}$, as it should be. The estimated variance is $\widehat{\mathrm{var}}(\hat\alpha_{OLS}) = s^2(\iota_n'\iota_n)^{-1} = s^2/n$, so that

$$E[\text{estimated var}(\hat\alpha_{OLS}) - \text{true var}(\hat\alpha_{OLS})] = E(s^2)/n - \sigma^2[1+\rho(n-1)]/n = \sigma^2[1 - \rho - 1 - \rho(n-1)]/n = -\rho\sigma^2.$$
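The $E_n$/$\bar J_n$ decomposition does all the work in this problem, and each claim can be checked in a few lines. The sketch below (illustrative $n$, $\rho$, $\sigma^2$) verifies the inverse, the bias of $s^2$, and the true variance of $\hat\alpha_{OLS}$:

```python
import numpy as np

n, rho, sigma2 = 10, 0.3, 2.0
Jbar = np.ones((n, n)) / n
En = np.eye(n) - Jbar
Sigma = sigma2 * ((1 - rho) * En + (1 + rho * (n - 1)) * Jbar)

# Omega^{-1} inverts term by term: En, Jbar are idempotent and En @ Jbar = 0
Omega_inv = En / (1 - rho) + Jbar / (1 + rho * (n - 1))
assert np.allclose((Sigma / sigma2) @ Omega_inv, np.eye(n))

# E(s^2) = sigma^2 tr(Omega En)/(n-1) = sigma^2 (1 - rho)
assert np.isclose(np.trace(Sigma @ En) / (n - 1), sigma2 * (1 - rho))

# true var(alpha_ols) = iota' Sigma iota / n^2 = sigma^2 [1 + rho(n-1)]/n
iota = np.ones(n)
assert np.isclose(iota @ Sigma @ iota / n**2, sigma2 * (1 + rho * (n - 1)) / n)
```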

9.15 Neighborhood Effects and Housing Demand

a. This replicates the first three columns of Table VII in Ioannides and Zabel (2003, p. 569) generating descriptive statistics on key variables, by year:

. by year, sort: sum price pincom1 highschool changehand white npersons married

-> year = 1985

      Variable |   Obs        Mean    Std. Dev.        Min        Max
  -------------+-------------------------------------------------------
         price |  1947    81.92058      25.0474   44.89161   146.1314
       pincom1 |  1947    28.55038     15.47855   3.557706   90.00319
    highschool |  1947    .8361582     .3702271          0          1
    changehand |  1947    .2891628     .4534901          0          1
         white |  1947    .8798151     .3252612          0          1
      npersons |  1947    2.850539     1.438622          1         11
       married |  1947    .7134052     .4522867          0          1

-> year = 1989

      Variable |   Obs        Mean    Std. Dev.        Min        Max
  -------------+-------------------------------------------------------
         price |  2318    116.7232     49.82718    48.3513   220.3118
       pincom1 |  2318    47.75942      30.3148   4.444763   174.0451
    highschool |  2318    .8597929     .3472767          0          1
    changehand |  2318    .3170837     .4654407          0          1
         white |  2318    .8658326     .3409056          0          1
      npersons |  2318    2.768335     1.469969          1         11
       married |  2318    .6535807     .4759314          0          1

-> year = 1993

      Variable |   Obs        Mean    Std. Dev.        Min        Max
  -------------+-------------------------------------------------------
         price |  2909    115.8608     44.73127   53.93157   240.2594
       pincom1 |  2909    50.07294     29.95046      6.201   184.7133
    highschool |  2909    .8697147     .3366749          0          1
    changehand |  2909    .2781024     .4481412          0          1
         white |  2909    .8480578     .3590266          0          1
      npersons |  2909    2.738398     1.435682          1          9
       married |  2909    .6452389     .4785231          0          1

This replicates the last column of Table VII in Ioannides and Zabel (2003, p. 569), generating descriptive statistics on key variables for the pooled data:

. sum price pincom1 highschool changehand white npersons married

      Variable |   Obs        Mean    Std. Dev.        Min        Max
  -------------+-------------------------------------------------------
         price |  7174    106.9282     44.90505   44.89161   240.2594
       pincom1 |  7174    43.48427     28.45273   3.557706   184.7133
    highschool |  7174    .8574017     .3496871          0          1
    changehand |  7174    .2936995     .4554877          0          1
         white |  7174    .8624198     .3444828          0          1
      npersons |  7174    2.778506     1.448163          1         11
       married |  7174    .6664343     .4715195          0          1

b. This replicates column 1 of Table VIII of Ioannides and Zabel (2003, p. 577), estimating the mean of neighbors' housing demand. The estimates are close but do not match.

. reg lnhdemm lnprice d89 d93 lnpincomem highschoolm changehandm whitem npersonsm marriedm hagem hage2m fullbathsm bedroomsm garagem

                                                  Number of obs =    7174
                                                  F(14, 7159)   =  788.35
                                                  Prob > F      =  0.0000
                                                  R-squared     =  0.6066
                                                  Adj R-squared =  0.6058
                                                  Root MSE      =  .28272

      lnhdemm |      Coef.   Std. Err.       t    P>|t|     [95% Conf. Interval]
  ------------+------------------------------------------------------------------
      lnprice |  -.2429891   .0266485    -9.12   0.000      -.295228   -.1907503
          d89 |  -.0967419   .0102894    -9.40   0.000     -.1169122   -.0765715
          d93 |  -.1497614   .0109956   -13.62   0.000     -.1713159   -.1282068
   lnpincomem |   .3622927   .0250064    14.49   0.000      .3132728    .4113126
  highschoolm |   .1185672   .0263588     4.50   0.000       .066896    .1702383
  changehandm |   .0249327   .0179043     1.39   0.164     -.0101651    .0600305
       whitem |   .2402858   .0144223    16.66   0.000      .2120139    .2685577
    npersonsm |  -.0692484   .0069556    -9.96   0.000     -.0828834   -.0556134
     marriedm |   .1034179   .0236629     4.37   0.000      .0570315    .1498042
        hagem |   .0074906   .0009053     8.27   0.000      .0057159    .0092652
       hage2m |   -.008222   .0010102    -8.14   0.000     -.0102023   -.0062417
   fullbathsm |   .2544969   .0085027    29.93   0.000      .2378291    .2711647
    bedroomsm |   .1770101    .009258    19.12   0.000      .1588616    .1951586
      garagem |   .2081956   .0135873    15.32   0.000      .1815604    .2348308
        _cons |   2.861724   .1188536    24.08   0.000      2.628735    3.094712

This replicates column 2 of Table VIII of Ioannides and Zabel (2003, p. 577), estimating a standard housing demand equation with no neighborhood effects. The estimates do not match. This may be because the authors included only one observation per cluster, whereas here all observations are used.

. reg lnhdem lnprice lnpincome highschool changehand white npersons married d89 d93, vce(cluster neigh)

Linear regression Number of obs = 7174

F(9,364) = 24.53

Prob > F = 0.0000

R-squared = 0.1456

Root MSE = .49666

(Std. Err. adjusted for 365 clusters in neigh)

              |               Robust
      lnhdemm |      Coef.   Std. Err.       t    P>|t|     [95% Conf. Interval]
  ------------+------------------------------------------------------------------
      lnprice |  -.1982708    .059528    -3.33   0.001     -.3153328   -.0812088
    lnpincome |   .3376063   .0346802     9.73   0.000      .2694076     .405805
   highschool |   .0702029   .0285124     2.46   0.014      .0141332    .1262727
   changehand |   .0009962   .0210258     0.05   0.962      -.040351    .0423434

This replicates column 3 of Table VIII of Ioannides and Zabel (2003, p. 577), estimating a reduced-form housing demand equation. The estimates do not match.

. reg lnhdem lnprice lnpincome highschool changehand white npersons married d89 > d93 lnpincomem highschoolm changehandm whitem npersonsm marriedm hagem hage2m fullbathsm bedroomsm garagem, vce(cluster neigh)

Linear regression                                 Number of obs =    7174
                                                  F(20, 364)    =   34.27
                                                  Prob > F      =  0.0000
                                                  R-squared     =  0.4220
                                                  Root MSE      =  .40883

(Std. Err. adjusted for 365 clusters in neigh)

              |               Robust
      lnhdemm |      Coef.   Std. Err.       t    P>|t|     [95% Conf. Interval]
  ------------+------------------------------------------------------------------
      lnprice |  -.2850123   .1280493    -2.23   0.027     -.5368216   -.0332029
    lnpincome |   .1042973   .0192407     5.42   0.000      .0664603    .1421342
   highschool |   .0027566   .0172223     0.16   0.873      -.031111    .0366242
   changehand |   .0154469   .0110693     1.40   0.164      -.006321    .0372148
        white |   .0189483   .0208229     0.91   0.363        -.022     .0598967
     npersons |   .0044466   .0042704     1.04   0.298     -.0039512    .0128443
      married |   .0079301   .0150311     0.53   0.598     -.0216286    .0374889
          d89 |  -.1015519   .0288299    -3.52   0.000      -.158246   -.0448578
          d93 |  -.1571002   .0365114    -4.30   0.000        -.2289   -.0853004
   lnpincomem |   .2963244   .1108145     2.67   0.008      .0784074    .5142413
  highschoolm |   .1294341   .0832116     1.56   0.121     -.0342018    .2930701
  changehandm |   .0029356   .0550799     0.05   0.958     -.1053792    .1112504
       whitem |    .207916   .0682558     3.05   0.002      .0736908    .3421412
    npersonsm |  -.0710319   .0216564    -3.28   0.001     -.1136194   -.0284444
     marriedm |   .1154993   .0812373     1.42   0.156     -.0442541    .2752526
        hagem |   .0079615   .0039834     2.00   0.046      .0001281    .0157949
       hage2m |  -.0085303   .0043061    -1.98   0.048     -.0169983   -.0000623
   fullbathsm |   .2520568   .0343649     7.33   0.000      .1844781    .3196356
    bedroomsm |   .1514635   .0385524     3.93   0.000      .0756501    .2272769
      garagem |   .2012601   .0555207     3.62   0.000      .0920784    .3104418
        _cons |   2.715015   .6149303     4.42   0.000      1.505753    3.924277

References

Baltagi, B. H. (1992), “Sampling Distributions and Efficiency Comparisons of OLS and GLS in the Presence of Both Serial Correlation and Heteroskedasticity,” Econometric Theory, Problem 92.2.3, 8: 304-305.

Breusch, T. S. (1979), “Conflict Among Criteria for Testing Hypotheses: Extensions and Comments,” Econometrica, 47: 203-207.

Dufour, J. M. (1986), “Bias of s2 in Linear Regressions with Dependent Errors,” The American Statistician, 40: 284-285.

Goldberger, A. S. (1962), “Best Linear Unbiased Prediction in the Generalized Linear Regression Model,” Journal of the American Statistical Association, 57: 369-375.

Ioannides, Y. M. and J. E. Zabel (2003), "Neighbourhood Effects and Housing Demand," Journal of Applied Econometrics, 18: 563-584.

Kramer, W. and S. Berghoff (1991), “Consistency of s2 in the Linear Regression Model with Correlated Errors,” Empirical Economics, 16: 375-377.

Neudecker, H. (1977), “Bounds for the Bias of the Least Squares Estimator of s2 in Case of a First-Order Autoregressive Process (positive autocorrelation),” Econometrica, 45: 1257-1262.

Neudecker, H. (1978), "Bounds for the Bias of the LS Estimator in the Case of a First-Order (positive) Autoregressive Process Where the Regression Contains a Constant Term," Econometrica, 46: 1223-1226.

Sathe, S. T. and H. D. Vinod (1974), “Bounds on the Variance of Regression Coef­ficients Due to Heteroscedastic or Autoregressive Errors,” Econometrica, 42: 333-340.

Zyskind, G. (1967), “On Canonical Forms, Non-Negative Covariance Matrices and Best and Simple Least Squares Linear Estimators in Linear Models,” The Annals of Mathematical Statistics, 38: 1092-1109.
