
Limited Dependent Variables

13.1 The Linear Probability Model

y_i    u_i          Prob.
1      1 − x_i'β    π_i
0      −x_i'β       1 − π_i

a. Let π_i = Pr[y_i = 1]; then y_i = 1 when u_i = 1 − x_i'β with probability π_i, as shown in the table above. Similarly, y_i = 0 when u_i = −x_i'β with probability 1 − π_i. Hence, E(u_i) = π_i(1 − x_i'β) + (1 − π_i)(−x_i'β).

For this to equal zero, we get π_i − π_i x_i'β + π_i x_i'β − x_i'β = 0, which gives π_i = x_i'β as required.

b. var(u_i) = E(u_i²) = (1 − x_i'β)² π_i + (−x_i'β)² (1 − π_i)

= [1 − 2x_i'β + (x_i'β)²] π_i + (x_i'β)² (1 − π_i)

= π_i − 2x_i'β π_i + (x_i'β)² = π_i − π_i² = π_i(1 − π_i) = x_i'β(1 − x_i'β)

using the fact that π_i = x_i'β.
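As a quick numerical sanity check (not part of the original solution), the two moments derived in parts a and b can be verified for an arbitrary value of x_i'β:

```python
# Check E(u_i) = 0 and var(u_i) = pi_i*(1 - pi_i) for the linear probability model.
# xb stands for x_i'beta; any value in (0, 1) works, since pi_i = x_i'beta.
xb = 0.3
pi_i = xb  # from part (a)

# u_i equals 1 - xb with probability pi_i, and -xb with probability 1 - pi_i
mean_u = pi_i * (1 - xb) + (1 - pi_i) * (-xb)
var_u = pi_i * (1 - xb) ** 2 + (1 - pi_i) * (-xb) ** 2

print(abs(mean_u) < 1e-12)                      # True
print(abs(var_u - pi_i * (1 - pi_i)) < 1e-12)   # True
```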

13.2 a. Since there are no slopes and only a constant, x_i'β = α and (13.16) becomes

log ℓ = Σᵢ₌₁ⁿ {y_i log F(α) + ...


Time-Series Analysis

14.1 The AR(1) Model. y_t = ρy_{t−1} + ε_t with |ρ| < 1 and ε_t ~ IIN(0, σ_ε²). Also, y_0 ~ N(0, σ_ε²/(1 − ρ²)).

a. By successive substitution

y_t = ρy_{t−1} + ε_t = ρ(ρy_{t−2} + ε_{t−1}) + ε_t = ρ²y_{t−2} + ρε_{t−1} + ε_t

= ρ²(ρy_{t−3} + ε_{t−2}) + ρε_{t−1} + ε_t = ρ³y_{t−3} + ρ²ε_{t−2} + ρε_{t−1} + ε_t

= ... = ρᵗy_0 + ρ^{t−1}ε_1 + ρ^{t−2}ε_2 + ... + ε_t

Then, E(y_t) = ρᵗE(y_0) = 0 for every t, since E(y_0) = E(ε_t) = 0.

var(y_t) = ρ^{2t} var(y_0) + ρ^{2(t−1)} var(ε_1) + ρ^{2(t−2)} var(ε_2) + ... + var(ε_t)

= ρ^{2t} σ_ε²/(1 − ρ²) + σ_ε²[ρ^{2(t−1)} + ... + ρ² + 1] = ρ^{2t} σ_ε²/(1 − ρ²) + σ_ε²(1 − ρ^{2t})/(1 − ρ²) = σ_ε²/(1 − ρ²) for every t.

If ρ = 1, then var(y_t) = σ_ε²/0 → ∞. Also, if |ρ| > 1, then 1 − ρ² < 0 and var(y_t) < 0, which is not admissible for a variance.

b. The AR(1) series y_t has zero mean and constant variance σ_y² = var(y_t) = σ_ε²/(1 − ρ²) for t = 0, 1, 2, ... In part (a) we could have stopped the successive substitution at y_{t−s}; this yields

y_t = ρˢ y_{t−s} + ρ^{s−1} ε_{t−s+1} + ... + ε_t

Therefore, ...
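A short sketch (not from the text) illustrates the constant variance in part b: starting from var(y_0) = σ_ε²/(1 − ρ²), the recursion var(y_t) = ρ² var(y_{t−1}) + σ_ε² leaves the variance unchanged for every t:

```python
# Starting from the stationary value sigma2/(1 - rho**2), the AR(1) variance
# recursion var(y_t) = rho**2 * var(y_{t-1}) + sigma2 stays constant in t.
rho, sigma2 = 0.8, 1.0           # illustrative values with |rho| < 1
v = sigma2 / (1 - rho ** 2)      # var(y_0)
for t in range(1, 50):
    v = rho ** 2 * v + sigma2    # variance of rho*y_{t-1} + eps_t
    assert abs(v - sigma2 / (1 - rho ** 2)) < 1e-9
print(v)
```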


Relative Efficiency of OLS Under Heteroskedasticity

a. From Eq. (5.9) we have

var(β̂_OLS) = Σᵢ₌₁ⁿ x_i² σ_i² / (Σᵢ₌₁ⁿ x_i²)² = σ² Σᵢ₌₁ⁿ x_i² X_i^δ / (Σᵢ₌₁ⁿ x_i²)²

where x_i = X_i − X̄ and σ_i² = σ² X_i^δ. For X_i = 1, 2, ..., 10 and δ = 0.5, 1, 1.5 and 2, this is tabulated below.
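The tabulation itself does not survive in this excerpt, but the formula can be evaluated directly; a minimal sketch with σ² normalized to 1 (values computed here, not copied from the book's table):

```python
# Evaluate var(beta_ols) = sigma^2 * sum(x_i^2 * X_i^delta) / (sum(x_i^2))^2
# for X_i = 1..10 with sigma_i^2 = sigma^2 * X_i^delta and sigma^2 = 1.
X = list(range(1, 11))
xbar = sum(X) / len(X)                 # 5.5
x = [Xi - xbar for Xi in X]            # deviations x_i = X_i - Xbar
Sxx = sum(xi ** 2 for xi in x)         # 82.5
for delta in (0.5, 1.0, 1.5, 2.0):
    var_ols = sum(xi ** 2 * Xi ** delta for xi, Xi in zip(x, X)) / Sxx ** 2
    print(delta, var_ols)              # variance in units of sigma^2
```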


b. Apply these four Wald statistics to the equation relating real per-capita consumption to real per-capita disposable income in the U.S. over the post-World War II period 1959–2007. The SAS program that generated these Wald statistics is given below.


[7] dF/dx is for discrete change of dummy variable from 0 to 1; z and P>|z| correspond to the test of the underlying coefficient being 0.

One can also run logit and probit for the unemployment variable and repeat this for females. This is not done here to save space.



The General Linear Model: The Basics

7.1 Invariance of the fitted values and residuals to non-singular transformations of the independent variables.

The regression model in (7.1) can be written as y = XCC⁻¹β + u, where C is a non-singular matrix. Let X* = XC; then y = X*β* + u, where β* = C⁻¹β.

a. P_X* = X*(X*′X*)⁻¹X*′ = XC[C′X′XC]⁻¹C′X′ = XCC⁻¹(X′X)⁻¹(C′)⁻¹C′X′ = X(X′X)⁻¹X′ = P_X.

Hence, the regression of y on X* yields

ŷ = X*β̂*_OLS = P_X* y = P_X y = Xβ̂_OLS

which gives the same fitted values as the regression of y on X. Since the dependent variable y is the same, the residuals from both regressions will be the same.

b. Multiplying each regressor by a constant is equivalent to post-multiplying the matrix X by a diagonal matrix C with typical k-th element c_k. Each X_k will be multiplied by the constant c_k for k = 1, 2, ..., K...
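Part a can be checked numerically; a minimal sketch with arbitrary illustrative X, y, and C (not from the text):

```python
import numpy as np

# Fitted values from regressing y on X equal those from y on X* = XC
# for any non-singular C, since P_{X*} = P_X.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = rng.standard_normal(20)
C = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])  # non-singular (determinant 5)
Xstar = X @ C

yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]              # X * beta_ols
yhat_star = Xstar @ np.linalg.lstsq(Xstar, y, rcond=None)[0]
print(np.allclose(yhat, yhat_star))  # True
```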


Simple Linear Regression

3.1 For least squares, the first-order conditions of minimization, given by Eqs. (3.2) and (3.3), yield immediately the first two numerical properties of OLS estimates, i.e., Σᵢ₌₁ⁿ e_i = 0 and Σᵢ₌₁ⁿ e_i X_i = 0. Now consider

Σᵢ₌₁ⁿ e_i Ŷ_i = α̂ Σᵢ₌₁ⁿ e_i + β̂ Σᵢ₌₁ⁿ e_i X_i = 0

where the first equality uses Ŷ_i = α̂ + β̂X_i and the second equality uses the first two numerical properties of OLS. Using the fact that e_i = Y_i − Ŷ_i, we can sum both sides to get Σᵢ₌₁ⁿ e_i = Σᵢ₌₁ⁿ Y_i − Σᵢ₌₁ⁿ Ŷ_i, but Σᵢ₌₁ⁿ e_i = 0; therefore Σᵢ₌₁ⁿ Y_i = Σᵢ₌₁ⁿ Ŷ_i. Dividing both sides by n, we get Ȳ = Ŷ̄ (the sample means of Y_i and Ŷ_i are equal).
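These numerical properties can be verified on any data set; a small sketch with made-up numbers:

```python
import numpy as np

# Verify sum(e_i) = 0, sum(e_i X_i) = 0, and hence mean(Y) = mean(Yhat)
# for a simple regression with an intercept. Data are illustrative.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

Z = np.column_stack([np.ones_like(X), X])        # columns [1, X_i]
alpha_hat, beta_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]
e = Y - (alpha_hat + beta_hat * X)               # OLS residuals

print(abs(e.sum()) < 1e-9)                       # True: sum e_i = 0
print(abs((e * X).sum()) < 1e-9)                 # True: sum e_i X_i = 0
print(abs(Y.mean() - (Y - e).mean()) < 1e-9)     # True: Ybar = Yhat-bar
```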

3.2 Minimizing Σᵢ₌₁ⁿ (Y_i − α)² with respect to α yields −2 Σᵢ₌₁ⁿ (Y_i − α) = 0. Solving for α yields α̂_OLS = Ȳ. Averaging Y_i = α + u_i we get Ȳ = α + ū. Hence, α̂_OLS = α + ū with E(α̂_OLS) = α since E(ū) = 0.
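A quick check of 3.2 (with illustrative data): the minimizer of Σ(Y_i − a)² over a fine grid coincides with the sample mean.

```python
import numpy as np

# The value of a minimizing sum((Y_i - a)^2) should be the sample mean Ybar.
Y = np.array([3.0, 5.0, 4.5, 6.5, 7.0])               # illustrative data
grid = np.linspace(Y.min(), Y.max(), 100001)          # candidate values of a
sse = ((Y[:, None] - grid[None, :]) ** 2).sum(axis=0) # sum of squares per a
a_star = grid[sse.argmin()]
print(a_star, Y.mean())  # both approximately 5.2
```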


Violations of the Classical Assumptions

5.1 s2 is Biased Under Heteroskedasticity. From Chap. 3 we have shown that

e_i = Y_i − α̂_OLS − β̂_OLS X_i = y_i − β̂_OLS x_i = −(β̂_OLS − β) x_i + (u_i − ū)

for i = 1, 2, ..., n. The second equality substitutes α̂_OLS = Ȳ − β̂_OLS X̄ and the third equality substitutes y_i = βx_i + (u_i − ū). Hence,

Σᵢ₌₁ⁿ e_i² = (β̂_OLS − β)² Σᵢ₌₁ⁿ x_i² + Σᵢ₌₁ⁿ (u_i − ū)² − 2(β̂_OLS − β) Σᵢ₌₁ⁿ x_i (u_i − ū)

Taking expectations, and using the fact that the u_i's are independent with var(u_i) = σ_i²,

E[(β̂_OLS − β)² Σᵢ₌₁ⁿ x_i²] = Σᵢ₌₁ⁿ x_i² σ_i² / Σᵢ₌₁ⁿ x_i²,  E[Σᵢ₌₁ⁿ (u_i − ū)²] = (1 − 1/n) Σᵢ₌₁ⁿ σ_i²,

and E[(β̂_OLS − β) Σᵢ₌₁ⁿ x_i(u_i − ū)] = Σᵢ₌₁ⁿ x_i² σ_i² / Σᵢ₌₁ⁿ x_i², so that

E(Σᵢ₌₁ⁿ e_i²) = (1 − 1/n) Σᵢ₌₁ⁿ σ_i² − Σᵢ₌₁ⁿ x_i² σ_i² / Σᵢ₌₁ⁿ x_i².

Hence, E(s²) = E(Σᵢ₌₁ⁿ e_i²)/(n − 2) = [(1 − 1/n) Σᵢ₌₁ⁿ σ_i² − Σᵢ₌₁ⁿ x_i² σ_i² / Σᵢ₌₁ⁿ x_i²]/(n − 2).

Under homoskedasticity, σ_i² = σ² for all i, and this reverts back to E(s²) = [(n − 1)σ² − σ²]/(n − 2) = σ².

B. H. Baltagi, Solutions Manual for Econometrics, Springer Texts in Business and Economics, DOI 10.1007/978-3-642-54548-1_5, © Springer-Verlag Berlin Heidelberg 2015
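The expression for E(Σe_i²) can be cross-checked against the exact identity E(Σe_i²) = tr[(I − H)Ω], where H is the hat matrix of the regression and Ω = diag(σ_i²). This identity is a standard result, used here only as an independent check with illustrative values:

```python
import numpy as np

# Cross-check E(sum e_i^2) = (1 - 1/n)*sum(sigma_i^2) - sum(x_i^2 sigma_i^2)/Sxx
# against tr[(I - H) Omega] for the simple regression with an intercept.
X = np.arange(1.0, 11.0)        # illustrative regressor X_i = 1..10
n = len(X)
sig2 = X                        # heteroskedastic variances sigma_i^2 = X_i
x = X - X.mean()                # deviations from the mean
Sxx = (x ** 2).sum()

Z = np.column_stack([np.ones(n), X])
H = Z @ np.linalg.inv(Z.T @ Z) @ Z.T          # hat matrix of [1, X]
exact = np.trace((np.eye(n) - H) @ np.diag(sig2))

formula = (1 - 1 / n) * sig2.sum() - (x ** 2 * sig2).sum() / Sxx
print(abs(exact - formula) < 1e-8)  # True
```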

 


 
