Springer Texts in Business and Economics

Pooling Time-Series of Cross-Section Data

12.1 Fixed Effects and the Within Transformation.

a. Premultiplying (12.11) by Q one gets

Qy = αQι_NT + QXβ + QZ_μμ + Qν

But PZ_μ = Z_μ and QZ_μ = 0. Also, Pι_NT = ι_NT and Qι_NT = 0. Hence, this transformed equation reduces to (12.12)

Qy = QXβ + Qν

Now E(Qν) = QE(ν) = 0 and var(Qν) = Q var(ν)Q′ = σ_ν²Q, since var(ν) = σ_ν²I_NT and Q is symmetric and idempotent.
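Since the argument rests entirely on the algebra of P and Q, it can be checked numerically. The sketch below (with illustrative sizes N = 3, T = 4, an assumption not in the text) builds P = I_N ⊗ J̄_T and Q = I_NT − P and confirms the identities used above.

```python
import numpy as np

N, T = 3, 4                                  # illustrative sizes
J_bar = np.full((T, T), 1.0 / T)             # J̄_T = J_T / T
P = np.kron(np.eye(N), J_bar)                # P = I_N ⊗ J̄_T
Q = np.eye(N * T) - P                        # the within transformation

iota = np.ones((N * T, 1))                   # ι_NT
Z_mu = np.kron(np.eye(N), np.ones((T, 1)))   # Z_μ: individual dummies

assert np.allclose(P @ Z_mu, Z_mu)           # P Z_μ = Z_μ
assert np.allclose(Q @ Z_mu, 0)              # Q Z_μ = 0
assert np.allclose(Q @ iota, 0)              # Q ι_NT = 0
assert np.allclose(Q, Q.T) and np.allclose(Q @ Q, Q)  # symmetric, idempotent
print("within-transformation identities hold")
```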

b. For the general linear model y = Xβ + u with E(uu′) = Ω, a necessary and sufficient condition for OLS to be equivalent to GLS is X′Ω⁻¹P̄_X = 0, where P̄_X = I − P_X and P_X = X(X′X)⁻¹X′, see Eq. (9.7) of Chap. 9. For Eq. (12.12), this condition can be written as

(X′Q)(Q/σ_ν²)P̄_QX = 0

Using the fact that Q is idempotent, the left hand side can be written as (X′Q)P̄_QX/σ_ν², which is clearly 0, since P̄_QX is the projection orthogonal to QX, so that P̄_QX(QX) = 0.
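This orthogonality condition can also be verified numerically. The sketch below draws an arbitrary regressor matrix (the random data and dimensions are assumptions) and checks that (X′Q)(Q)P̄_QX vanishes; the σ_ν² scaling is immaterial.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 3, 4, 2                               # illustrative sizes
P = np.kron(np.eye(N), np.full((T, T), 1.0 / T))
Q = np.eye(N * T) - P

X = rng.standard_normal((N * T, K))             # arbitrary regressors
QX = Q @ X
P_QX = QX @ np.linalg.solve(QX.T @ QX, QX.T)    # projection onto col(QX)
P_QX_bar = np.eye(N * T) - P_QX                 # P̄_QX

lhs = (X.T @ Q) @ Q @ P_QX_bar                  # (X′Q)(Q)P̄_QX, σ_ν² dropped
assert np.allclose(lhs, 0)
print("OLS on the within-transformed model coincides with GLS")
```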

One ca...


Variance-Covariance Matrix of Random Effects

a. From (12.17) we get

Ω = σ_μ²(I_N ⊗ J_T) + σ_ν²(I_N ⊗ I_T)

Replacing J_T by TJ̄_T, and I_T by (E_T + J̄_T), where E_T is by definition (I_T − J̄_T), one gets

Ω = Tσ_μ²(I_N ⊗ J̄_T) + σ_ν²(I_N ⊗ E_T) + σ_ν²(I_N ⊗ J̄_T)

Collecting terms with the same matrices, we get

Ω = (Tσ_μ² + σ_ν²)(I_N ⊗ J̄_T) + σ_ν²(I_N ⊗ E_T) = σ₁²P + σ_ν²Q, where σ₁² = Tσ_μ² + σ_ν².

b. p = z2(z;z2)“ z; = IN <S> JT is a projection matrix of Z2. Hence,

it is by definition symmetric and idempotent. Similarly, Q = INT — P is the orthogonal projection matrix of Z2. Hence, Q is also symmetric and idempotent. By definition, P + Q = INT. Also, PQ = P(Int—P) = P—P2 = P — P = 0.

c. From (12.18) and (12.19) one gets

ΩΩ⁻¹ = (σ₁²P + σ_ν²Q)(P/σ₁² + Q/σ_ν²) = P + Q = I_NT

since P² = P, Q² = Q and PQ = 0 as verified in part (b).
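Parts (a) through (c) can be confirmed numerically in one pass. In the sketch below the variance components σ_μ² = 2, σ_ν² = 0.5 and the sizes N = 3, T = 4 are illustrative assumptions.

```python
import numpy as np

N, T = 3, 4
sig_mu2, sig_nu2 = 2.0, 0.5          # assumed variance components
sig1_2 = T * sig_mu2 + sig_nu2       # σ₁² = Tσ_μ² + σ_ν²

P = np.kron(np.eye(N), np.full((T, T), 1.0 / T))   # I_N ⊗ J̄_T
Q = np.eye(N * T) - P

# Ω from (12.17): σ_μ²(I_N ⊗ J_T) + σ_ν²(I_N ⊗ I_T)
Omega = sig_mu2 * np.kron(np.eye(N), np.ones((T, T))) + sig_nu2 * np.eye(N * T)

assert np.allclose(Omega, sig1_2 * P + sig_nu2 * Q)                  # part (a)
assert np.allclose(P @ Q, 0) and np.allclose(P + Q, np.eye(N * T))   # part (b)
Omega_inv = P / sig1_2 + Q / sig_nu2
assert np.allclose(Omega @ Omega_inv, np.eye(N * T))                 # part (c)
print("Ω = σ₁²P + σ_ν²Q and ΩΩ⁻¹ = I_NT verified")
```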


Limited Dependent Variables

13.1 The Linear Probability Model

y_i    u_i         Prob.
1      1 − x_i′β   π_i
0      −x_i′β      1 − π_i

a. Let π_i = Pr[y_i = 1], then y_i = 1 when u_i = 1 − x_i′β with probability π_i as shown in the table above. Similarly, y_i = 0 when u_i = −x_i′β with probability 1 − π_i. Hence, E(u_i) = π_i(1 − x_i′β) + (1 − π_i)(−x_i′β).

For this to equal zero, we get π_i − π_i x_i′β + π_i x_i′β − x_i′β = 0, which gives π_i = x_i′β as required.

b. var(u_i) = E(u_i²) = (1 − x_i′β)²π_i + (−x_i′β)²(1 − π_i)

= [1 − 2x_i′β + (x_i′β)²]π_i + (x_i′β)²(1 − π_i)

= π_i − 2x_i′β π_i + (x_i′β)² = π_i − π_i² = π_i(1 − π_i) = x_i′β(1 − x_i′β)

using the fact that π_i = x_i′β.
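Because u_i has a two-point distribution, both moments can be checked exactly for any admissible value of x_i′β; the value 0.3 below is an illustrative assumption.

```python
xb = 0.3                  # x_i′β, an assumed value in (0, 1)
pi = xb                   # π_i = x_i′β from part (a)

# moments of the two-point distribution of u_i from the table
E_u = pi * (1 - xb) + (1 - pi) * (-xb)
var_u = pi * (1 - xb) ** 2 + (1 - pi) * (-xb) ** 2

assert abs(E_u) < 1e-12                      # E(u_i) = 0
assert abs(var_u - xb * (1 - xb)) < 1e-12    # var(u_i) = x_i′β(1 − x_i′β)
print(E_u, var_u)
```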

13.2 a. Since there are no slopes and only a constant, x_i′β = α and (13.16) becomes

log ℓ = Σ_{i=1}^n {y_i log F(α) + ...


Time-Series Analysis

14.1 The AR(1) Model. y_t = ρy_{t−1} + ε_t with |ρ| < 1 and ε_t ~ IIN(0, σ_ε²). Also, y_0 ~ N(0, σ_ε²/(1 − ρ²)).

a. By successive substitution

y_t = ρy_{t−1} + ε_t = ρ(ρy_{t−2} + ε_{t−1}) + ε_t = ρ²y_{t−2} + ρε_{t−1} + ε_t

= ρ²(ρy_{t−3} + ε_{t−2}) + ρε_{t−1} + ε_t = ρ³y_{t−3} + ρ²ε_{t−2} + ρε_{t−1} + ε_t

= ... = ρ^t y_0 + ρ^{t−1}ε_1 + ρ^{t−2}ε_2 + ... + ε_t

Then, E(y_t) = ρ^t E(y_0) = 0 for every t, since E(y_0) = E(ε_t) = 0. Also,

var(y_t) = ρ^{2t}var(y_0) + ρ^{2(t−1)}var(ε_1) + ρ^{2(t−2)}var(ε_2) + ... + var(ε_t)

= ρ^{2t}σ_ε²/(1 − ρ²) + σ_ε²(1 − ρ^{2t})/(1 − ρ²) = σ_ε²/(1 − ρ²)

using the geometric sum 1 + ρ² + ... + ρ^{2(t−1)} = (1 − ρ^{2t})/(1 − ρ²). If ρ = 1, then var(y_t) = σ_ε²/0 → ∞. Also, if |ρ| > 1, then 1 − ρ² < 0 and the formula would give var(y_t) < 0, which is impossible for a variance.
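A small simulation illustrates the stationarity result: starting y_0 from N(0, σ_ε²/(1 − ρ²)), the variance of y_t stays at σ_ε²/(1 − ρ²) for every t. The values ρ = 0.8, σ_ε = 1 and the number of replications below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
rho, sig_eps = 0.8, 1.0                       # assumed parameter values
sig_y2 = sig_eps**2 / (1 - rho**2)            # stationary variance σ_ε²/(1 − ρ²)

R, T = 200_000, 20                            # R independent paths of length T
y = rng.normal(0.0, np.sqrt(sig_y2), size=R)  # y_0 drawn from its stationary law
for t in range(T):
    y = rho * y + rng.normal(0.0, sig_eps, size=R)

# the simulated variance at t = T should be close to the stationary value
assert abs(y.var() - sig_y2) / sig_y2 < 0.02
print(y.var(), sig_y2)
```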

b. The AR(1) series y_t has zero mean and constant variance σ_y² = var(y_t) for t = 0, 1, 2, ... In part (a) we could have stopped the successive substitution at y_{t−s}; this yields

y_t = ρ^s y_{t−s} + ρ^{s−1}ε_{t−s+1} + ... + ε_t

Therefore, c...


Relative Efficiency of OLS Under Heteroskedasticity

a. From Eq. (5.9) we have

var(β̂_OLS) = Σ_{i=1}^n x_i²σ_i² / (Σ_{i=1}^n x_i²)² = σ² Σ_{i=1}^n x_i²X_i^δ / (Σ_{i=1}^n x_i²)²

where x_i = X_i − X̄, for X_i = 1, 2, ..., 10 and δ = 0.5, 1, 1.5 and 2. This is tabulated below.
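The entries of that tabulation can be reproduced with a few lines of code; normalizing σ² = 1 below is an assumption (it scales all entries equally).

```python
# var(β̂_OLS) = σ² Σ x_i² X_i^δ / (Σ x_i²)² with x_i = X_i − X̄ and σ² = 1
X = [float(i) for i in range(1, 11)]          # X_i = 1, 2, ..., 10
Xbar = sum(X) / len(X)                        # X̄ = 5.5
x = [Xi - Xbar for Xi in X]
Sxx = sum(xi**2 for xi in x)                  # Σ x_i² = 82.5

for delta in (0.5, 1.0, 1.5, 2.0):
    var_ols = sum(xi**2 * Xi**delta for xi, Xi in zip(x, X)) / Sxx**2
    print(f"delta = {delta}: var(b_OLS) = {var_ols:.4f}")
```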

b. Apply these four Wald statistics to the equation relating real per-capita consumption to real per-capita disposable income in the U.S. over the post-World War II period 1959-2007. The SAS program that generated these Wald statistics is given below.


dF/dx is for discrete change of dummy variable from 0 to 1; z and P>|z| correspond to the test of the underlying coefficient being 0.

One can also run logit and probit for the unemployment variable and repeat this for females. This is not done here to save space.



The General Linear Model: The Basics

7.1 Invariance of the fitted values and residuals to non-singular transformations of the independent variables.

The regression model in (7.1) can be written as y = XCC⁻¹β + u where C is a non-singular matrix. Let X* = XC, then y = X*β* + u where β* = C⁻¹β.

a. P_X* = X*(X*′X*)⁻¹X*′ = XC[C′X′XC]⁻¹C′X′ = XCC⁻¹(X′X)⁻¹C′⁻¹C′X′ = P_X.

Hence, the regression of y on X* yields

ŷ = X*β̂*_OLS = P_X* y = P_X y = Xβ̂_OLS

which are the same fitted values as those from the regression of y on X. Since the dependent variable y is the same, the residuals from both regressions will be the same.
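A numerical sketch of this invariance result, with random y, X and C (the dimensions and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 20, 3                                   # illustrative sizes
X = rng.standard_normal((n, K))
C = rng.standard_normal((K, K))                # generically non-singular
Xs = X @ C                                     # X* = XC
y = rng.standard_normal(n)

P_X = X @ np.linalg.solve(X.T @ X, X.T)        # P_X = X(X′X)⁻¹X′
P_Xs = Xs @ np.linalg.solve(Xs.T @ Xs, Xs.T)   # P_X*

assert np.allclose(P_X, P_Xs)                  # P_X* = P_X
assert np.allclose(P_X @ y, P_Xs @ y)          # identical fitted values
assert np.allclose(y - P_X @ y, y - P_Xs @ y)  # identical residuals
print("fitted values and residuals invariant to X -> XC")
```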

b. Multiplying each regressor X_k by a constant is equivalent to post-multiplying the matrix X by a diagonal matrix C with typical k-th element c_k. Each X_k will be multiplied by the constant c_k for k = 1, 2, ..., K...
