
Variance-Covariance Matrix of Random Effects

a. From (12.17) we get

$\Omega = \sigma_\mu^2(I_N \otimes J_T) + \sigma_\nu^2(I_N \otimes I_T)$

Replacing $J_T$ by $T\bar{J}_T$, and $I_T$ by $(E_T + \bar{J}_T)$ where $E_T$ is by definition $(I_T - \bar{J}_T)$, one gets

$\Omega = T\sigma_\mu^2(I_N \otimes \bar{J}_T) + \sigma_\nu^2(I_N \otimes E_T) + \sigma_\nu^2(I_N \otimes \bar{J}_T)$

Collecting terms with the same matrices, we get

$\Omega = (T\sigma_\mu^2 + \sigma_\nu^2)(I_N \otimes \bar{J}_T) + \sigma_\nu^2(I_N \otimes E_T) = \sigma_1^2 P + \sigma_\nu^2 Q, \quad \text{where } \sigma_1^2 = T\sigma_\mu^2 + \sigma_\nu^2.$

b. $P = Z_\mu(Z_\mu' Z_\mu)^{-1} Z_\mu' = I_N \otimes \bar{J}_T$ is the projection matrix on $Z_\mu$. Hence,

it is by definition symmetric and idempotent. Similarly, $Q = I_{NT} - P$ is the projection on the space orthogonal to $Z_\mu$. Hence, $Q$ is also symmetric and idempotent. By definition, $P + Q = I_{NT}$. Also, $PQ = P(I_{NT} - P) = P - P^2 = P - P = 0$.
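These properties are easy to verify numerically. A minimal sketch (the panel dimensions $N = 3$, $T = 4$ are arbitrary choices for illustration, not from the text):

```python
import numpy as np

N, T = 3, 4                             # arbitrary small panel dimensions
J_T_bar = np.ones((T, T)) / T           # J̄_T = J_T/T, the averaging matrix

P = np.kron(np.eye(N), J_T_bar)         # P = I_N ⊗ J̄_T
Q = np.eye(N * T) - P                   # Q = I_NT − P

assert np.allclose(P, P.T) and np.allclose(P @ P, P)    # P symmetric idempotent
assert np.allclose(Q, Q.T) and np.allclose(Q @ Q, Q)    # Q symmetric idempotent
assert np.allclose(P @ Q, np.zeros((N * T, N * T)))     # PQ = 0
assert np.allclose(P + Q, np.eye(N * T))                # P + Q = I_NT
```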

c. From (12.18) and (12.19) one gets

$\Omega\,\Omega^{-1} = (\sigma_1^2 P + \sigma_\nu^2 Q)\left(\frac{P}{\sigma_1^2} + \frac{Q}{\sigma_\nu^2}\right) = P + Q = I_{NT}$

since $P^2 = P$, $Q^2 = Q$ and $PQ = 0$, as verified in part (b)...
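The same kind of check confirms this inverse. A self-contained sketch (the variance components $\sigma_\mu^2 = 2$, $\sigma_\nu^2 = 1$ are arbitrary values chosen for the demonstration):

```python
import numpy as np

N, T = 3, 4
P = np.kron(np.eye(N), np.ones((T, T)) / T)     # P = I_N ⊗ J̄_T
Q = np.eye(N * T) - P                           # Q = I_NT − P

sigma_mu2, sigma_nu2 = 2.0, 1.0                 # arbitrary variance components
sigma_12 = T * sigma_mu2 + sigma_nu2            # σ₁² = Tσ_μ² + σ_ν²

Omega = sigma_12 * P + sigma_nu2 * Q            # Ω from part (a)
Omega_inv = P / sigma_12 + Q / sigma_nu2        # the claimed Ω⁻¹

assert np.allclose(Omega @ Omega_inv, np.eye(N * T))    # Ω Ω⁻¹ = I_NT
```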


Limited Dependent Variables

13.1 The Linear Probability Model

$y_i$    $u_i$              Prob.
$1$      $1 - x_i'\beta$    $\pi_i$
$0$      $-x_i'\beta$       $1 - \pi_i$

a. Let $\pi_i = \Pr[y_i = 1]$; then $y_i = 1$ when $u_i = 1 - x_i'\beta$ with probability $\pi_i$, as shown in the table above. Similarly, $y_i = 0$ when $u_i = -x_i'\beta$ with probability $1 - \pi_i$. Hence, $E(u_i) = \pi_i(1 - x_i'\beta) + (1 - \pi_i)(-x_i'\beta)$.

For this to equal zero, we get $\pi_i - \pi_i x_i'\beta + \pi_i x_i'\beta - x_i'\beta = 0$, which gives $\pi_i = x_i'\beta$ as required.

b. $\mathrm{var}(u_i) = E(u_i^2) = (1 - x_i'\beta)^2\pi_i + (-x_i'\beta)^2(1 - \pi_i)$

$= \left(1 - 2x_i'\beta + (x_i'\beta)^2\right)\pi_i + (x_i'\beta)^2(1 - \pi_i)$

$= \pi_i - 2x_i'\beta\,\pi_i + (x_i'\beta)^2 = \pi_i - \pi_i^2 = \pi_i(1 - \pi_i) = x_i'\beta(1 - x_i'\beta)$

using the fact that $\pi_i = x_i'\beta$.
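Both moments are easy to confirm by simulation. A minimal sketch (the value $\pi_i = 0.3$ is an arbitrary choice standing in for $x_i'\beta$):

```python
import numpy as np

rng = np.random.default_rng(0)
pi_i = 0.3                          # π_i = x_i'β, arbitrary value in (0, 1)
n = 1_000_000
y = rng.binomial(1, pi_i, size=n)   # y_i = 1 with probability π_i
u = y - pi_i                        # u_i = 1 − x_i'β or −x_i'β

print(u.mean())                     # ≈ 0, matching E(u_i) = 0
print(u.var(), pi_i * (1 - pi_i))   # ≈ π_i(1 − π_i) = x_i'β(1 − x_i'β)
```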

13.2 a. Since there are no slopes and only a constant, $x_i'\beta = \alpha$ and (13.16) becomes

$\log\ell = \sum_{i=1}^{n}\{y_i\log F(\alpha) + \dots$


Time-Series Analysis

14.1 The AR(1) Model. $y_t = \rho y_{t-1} + \epsilon_t$ with $|\rho| < 1$ and $\epsilon_t \sim \mathrm{IIN}(0, \sigma_\epsilon^2)$. Also, $y_0 \sim N(0, \sigma_\epsilon^2/(1 - \rho^2))$.

a. By successive substitution

$y_t = \rho y_{t-1} + \epsilon_t = \rho(\rho y_{t-2} + \epsilon_{t-1}) + \epsilon_t = \rho^2 y_{t-2} + \rho\epsilon_{t-1} + \epsilon_t$

$= \rho^2(\rho y_{t-3} + \epsilon_{t-2}) + \rho\epsilon_{t-1} + \epsilon_t = \rho^3 y_{t-3} + \rho^2\epsilon_{t-2} + \rho\epsilon_{t-1} + \epsilon_t$

$= \dots = \rho^t y_0 + \rho^{t-1}\epsilon_1 + \rho^{t-2}\epsilon_2 + \dots + \epsilon_t$

Then $E(y_t) = \rho^t E(y_0) = 0$ for every $t$, since $E(y_0) = E(\epsilon_t) = 0$. Also,

$\mathrm{var}(y_t) = \rho^{2t}\mathrm{var}(y_0) + \rho^{2(t-1)}\mathrm{var}(\epsilon_1) + \rho^{2(t-2)}\mathrm{var}(\epsilon_2) + \dots + \mathrm{var}(\epsilon_t)$

$= \rho^{2t}\frac{\sigma_\epsilon^2}{1-\rho^2} + \sigma_\epsilon^2\,\frac{1-\rho^{2t}}{1-\rho^2} = \frac{\sigma_\epsilon^2}{1-\rho^2}$

using the geometric sum $\rho^{2(t-1)} + \dots + \rho^2 + 1 = (1-\rho^{2t})/(1-\rho^2)$. If $\rho = 1$, then $\mathrm{var}(y_t) = \sigma_\epsilon^2/0 \to \infty$. Also, if $|\rho| > 1$, then $1 - \rho^2 < 0$ and $\mathrm{var}(y_t) < 0$.
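A short simulation corroborates the stationary mean and variance. A sketch (the values $\rho = 0.8$, $\sigma_\epsilon = 1$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma_eps = 0.8, 1.0
n_paths, t_max = 50_000, 200

# Draw y_0 from its stationary distribution N(0, σ_ε²/(1 − ρ²)).
y = rng.normal(0.0, sigma_eps / np.sqrt(1 - rho**2), size=n_paths)
for _ in range(t_max):
    y = rho * y + rng.normal(0.0, sigma_eps, size=n_paths)

print(y.mean())                                 # ≈ 0 = E(y_t)
print(y.var(), sigma_eps**2 / (1 - rho**2))     # ≈ σ_ε²/(1 − ρ²)
```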

b. The AR(1) series $y_t$ has zero mean and constant variance $\sigma_y^2 = \mathrm{var}(y_t)$ for $t = 0, 1, 2, \dots$ In part (a) we could have stopped the successive substitution at $y_{t-s}$; this yields

$y_t = \rho^s y_{t-s} + \rho^{s-1}\epsilon_{t-s+1} + \dots + \epsilon_t$

Therefore, c...
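The excerpt is cut off, but this representation leads to the standard autocovariance result: $y_{t-s}$ is uncorrelated with $\epsilon_{t-s+1}, \dots, \epsilon_t$, so $\mathrm{cov}(y_t, y_{t-s}) = \rho^s\,\sigma_y^2$. A simulation sketch of that fact (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, sigma_eps, s = 0.8, 1.0, 3
sigma_y2 = sigma_eps**2 / (1 - rho**2)   # stationary variance σ_y²

T = 300_000
eps = rng.normal(0.0, sigma_eps, size=T)
y = np.empty(T)
y[0] = rng.normal(0.0, np.sqrt(sigma_y2))        # stationary start
for t in range(1, T):
    y[t] = rho * y[t - 1] + eps[t]

autocov_s = np.mean((y[s:] - y.mean()) * (y[:-s] - y.mean()))
print(autocov_s, rho**s * sigma_y2)              # ≈ ρ^s σ_y²
```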


Relative Efficiency of OLS Under Heteroskedasticity

a. From Eq. (5.9) we have

$\mathrm{var}(\hat\beta_{OLS}) = \sum_{i=1}^{n} x_i^2\sigma_i^2\Big/\Big(\sum_{i=1}^{n} x_i^2\Big)^2 = \sigma^2\sum_{i=1}^{n} x_i^2 X_i^\delta\Big/\Big(\sum_{i=1}^{n} x_i^2\Big)^2$

where $x_i = X_i - \bar{X}$. For $X_i = 1, 2, \dots, 10$ and $\delta = 0.5, 1, 1.5$ and $2$, this is tabulated below.
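The tabulated values do not survive in this excerpt, but they follow directly from the displayed formula. A minimal sketch that reproduces the computation (normalizing $\sigma^2 = 1$, which is an assumption made here for display purposes):

```python
import numpy as np

X = np.arange(1, 11, dtype=float)   # X_i = 1, 2, ..., 10
x = X - X.mean()                    # x_i = X_i − X̄
Sxx = (x**2).sum()

for delta in (0.5, 1.0, 1.5, 2.0):
    # var(β̂_OLS) = σ² Σ x_i² X_i^δ / (Σ x_i²)², with σ² = 1
    var_ols = (x**2 * X**delta).sum() / Sxx**2
    print(f"delta = {delta}: var(beta_ols) = {var_ols:.4f}")
```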


b. Apply these four Wald statistics to the equation relating real per-capita consumption to real per-capita disposable income in the U.S. over the post-World War II period 1959-2007. The SAS program that generated these Wald statistics is given below...


[7] dF/dx is for discrete change of dummy variable from 0 to 1; z and P>|z| correspond to the test of the underlying coefficient being 0.

One can also run logit and probit for the unemployment variable and repeat this for females. This is not done here to save space.



Simple Versus Multiple Regression Coefficients. This is based on Baltagi (1987).

a. The OLS residuals from $Y_i = \gamma + \delta_2\hat\nu_{2i} + \delta_3\hat\nu_{3i} + w_i$, say $\hat{w}_i$, satisfy the following conditions:

$\sum_{i=1}^{n}\hat{w}_i = 0, \qquad \sum_{i=1}^{n}\hat{w}_i\hat\nu_{2i} = 0, \qquad \sum_{i=1}^{n}\hat{w}_i\hat\nu_{3i} = 0$

with $Y_i = \hat\gamma + \hat\delta_2\hat\nu_{2i} + \hat\delta_3\hat\nu_{3i} + \hat{w}_i$.

Multiply this last equation by $\hat\nu_{2i}$ and sum; we get $\sum_{i=1}^{n} Y_i\hat\nu_{2i} = \hat\delta_2\sum_{i=1}^{n}\hat\nu_{2i}^2 + \dots$ In the simple regression case, where $\hat\nu_i = x_i = X_i - \bar{X}$ are the residuals from regressing $X_i$ on the constant, this gives

$\hat\beta_{OLS} = \sum_{i=1}^{n} Y_i\hat\nu_i\Big/\sum_{i=1}^{n}\hat\nu_i^2 = \sum_{i=1}^{n} Y_i x_i\Big/\sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} y_i x_i\Big/\sum_{i=1}^{n} x_i^2.$

b. Regressing the constant 1 on $X_i$ we get $\hat{b} = \sum_{i=1}^{n} X_i\Big/\sum_{i=1}^{n} X_i^2$, with residuals

$\hat{w}_i = 1 - \Big(n\bar{X}\Big/\sum_{i=1}^{n} X_i^2\Big)X_i$

so that regressing $Y_i$ on $\hat{w}_i$ yields $\hat{a} = \sum_{i=1}^{n}\hat{w}_i Y_i\Big/\sum_{i=1}^{n}\hat{w}_i^2$.

But

$\sum_{i=1}^{n}\hat{w}_i Y_i = n\bar{Y} - n\bar{X}\sum_{i=1}^{n} X_iY_i\Big/\sum_{i=1}^{n} X_i^2 = \frac{n\bar{Y}\sum_{i=1}^{n} X_i^2 - n\bar{X}\sum_{i=1}^{n} X_iY_i}{\sum_{i=1}^{n} X_i^2}$

and

$\sum_{i=1}^{n}\hat{w}_i^2 = n - \frac{2n\bar{X}\sum_{i=1}^{n} X_i}{\sum_{i=1}^{n} X_i^2} + \frac{n^2\bar{X}^2\sum_{i=1}^{n} X_i^2}{\big(\sum_{i=1}^{n} X_i^2\big)^2} = \frac{n\sum_{i=1}^{n} X_i^2 - n^2\bar{X}^2}{\sum_{i=1}^{n} X_i^2} = \frac{n\sum_{i=1}^{n} x_i^2}{\sum_{i=1}^{n} X_i^2}$

using $\sum_{i=1}^{n} X_i = n\bar{X}$ and $\sum_{i=1}^{n} X_i^2 - n\bar{X}^2 = \sum_{i=1}^{n} x_i^2$. Hence, using $\sum_{i=1}^{n} X_iY_i - n\bar{X}\bar{Y} = \sum_{i=1}^{n} x_iy_i$,

$\hat{a} = \frac{\bar{Y}\sum_{i=1}^{n} X_i^2 - \bar{X}\sum_{i=1}^{n} X_iY_i}{\sum_{i=1}^{n} x_i^2} = \bar{Y} - \hat\beta_{OLS}\bar{X}$

which is the OLS intercept from the simple regression of $Y_i$ on $X_i$ with a constant.
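A compact numerical check of this intercept result (toy data chosen arbitrarily; a sketch, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = rng.normal(5.0, 2.0, size=n)
Y = 1.5 + 0.7 * X + rng.normal(size=n)

# Residuals from regressing the constant 1 on X (no intercept).
b_hat = X.sum() / (X**2).sum()
w_hat = 1.0 - b_hat * X

# Regressing Y on those residuals reproduces the intercept.
a_hat = (w_hat * Y).sum() / (w_hat**2).sum()

x, y = X - X.mean(), Y - Y.mean()
beta_ols = (x * y).sum() / (x**2).sum()
print(a_hat, Y.mean() - beta_ols * X.mean())    # equal: â = Ȳ − β̂_OLS X̄
```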

Independence and Simple Correlation

a. Assume that X and Y are continuous random variables. The proof is similar if X and Y are discrete random variables and is left to the reader. If X and Y are independent, then $f(x, y) = f_1(x)f_2(y)$, where $f_1(x)$ is the marginal probability density function (p.d.f.) of X and $f_2(y)$ is the marginal p.d.f. of Y. In this case,

E(XY) = ‘ xyf(x, y)dxdy = ‘ xyf1(x)f2(y)dxdy =(/ xf1(x)dx)(/ yf2(y)dy) = E(X)E(Y)


b. Here $Y = a + bX$, so $\mathrm{cov}(X, Y) = b\,\mathrm{var}(X)$, but $\mathrm{var}(Y) = b^2\,\mathrm{var}(X)$ from problem 2.1a. Hence,

$\rho_{XY} = \frac{\mathrm{cov}(X, Y)}{\sqrt{\mathrm{var}(X)\mathrm{var}(Y)}} = \frac{b\,\mathrm{var}(X)}{\sqrt{b^2(\mathrm{var}(X))^2}} = \frac{b}{|b|}$

so $\rho_{XY} = \pm 1$ depending on the sign of b.

c. With $Y = X^2$ and $E(X) = E(X^3) = 0$,

$E(YX) = E(X^2\cdot X) = E(X^3) = 0$

and

$\mathrm{cov}(Y, X) = E(X - E(X))(Y - \dots$ Hence, $\rho_{XY}\dots$
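The truncated conclusion is the classic point that zero correlation need not imply independence; a small simulation sketch of it (standard normal X is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal(1_000_000)    # symmetric around 0, so E(X³) = 0
Y = X**2                              # Y is a deterministic function of X

cov_XY = np.mean((X - X.mean()) * (Y - Y.mean()))
rho_XY = cov_XY / (X.std() * Y.std())
print(rho_XY)                         # ≈ 0, although Y depends perfectly on X
```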
