Springer Texts in Business and Economics

Limited Dependent Variables

13.1 The Linear Probability Model

y_i    u_i           Probability
1      1 − x_i'β     π_i
0      −x_i'β        1 − π_i

a. Let π_i = Pr[y_i = 1]; then y_i = 1 when u_i = 1 − x_i'β with probability π_i, as shown in the table above. Similarly, y_i = 0 when u_i = −x_i'β with probability 1 − π_i. Hence, E(u_i) = π_i(1 − x_i'β) + (1 − π_i)(−x_i'β).

For this to equal zero, we need π_i − π_i x_i'β + π_i x_i'β − x_i'β = 0, which gives π_i = x_i'β as required.

b. var(u_i) = E(u_i²) = (1 − x_i'β)² π_i + (−x_i'β)² (1 − π_i)

= [1 − 2x_i'β + (x_i'β)²] π_i + (x_i'β)² (1 − π_i)

= π_i − 2x_i'β π_i + (x_i'β)² = π_i − π_i² = π_i(1 − π_i) = x_i'β(1 − x_i'β), using the fact that π_i = x_i'β.
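The two moments derived above are easy to check by simulation; the following sketch (with an assumed value π_i = 0.3, not from the text) verifies E(u_i) = 0 and var(u_i) = π_i(1 − π_i):

```python
import numpy as np

rng = np.random.default_rng(0)
pi = 0.3                # pi_i = x_i' beta, the assumed success probability
n = 1_000_000

# y_i is Bernoulli(pi); the linear probability model error is u_i = y_i - x_i' beta
y = rng.binomial(1, pi, size=n)
u = y - pi

print(u.mean())                 # close to 0
print(u.var(), pi * (1 - pi))   # both close to 0.21
```

The simulated mean and variance match the derivation to sampling error.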

13.2 a. Since there are no slopes and only a constant, x_i'β = α and (13.16) becomes

log ℓ = Σ_{i=1}^n {y_i log F(α) + …
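Although the derivation is truncated here, the first-order condition of this constant-only log-likelihood sets F(α̂) equal to the sample mean ȳ. A quick numerical sketch, assuming a logit F (an assumption for illustration), confirms this:

```python
import numpy as np

def loglik(alpha, y):
    # constant-only binary log-likelihood with logit F(alpha)
    F = 1.0 / (1.0 + np.exp(-alpha))
    return np.sum(y * np.log(F) + (1 - y) * np.log(1 - F))

y = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # ybar = 0.3
ybar = y.mean()

# closed-form logit MLE: alpha_hat = log(ybar / (1 - ybar))
alpha_hat = np.log(ybar / (1 - ybar))

# grid search agrees with the closed-form maximizer
grid = np.linspace(-3, 3, 601)
best = grid[np.argmax([loglik(a, y) for a in grid])]
print(alpha_hat, best)  # both near log(0.3/0.7)
```

At the maximum, F(α̂) = ȳ, so the fitted probability is just the sample frequency of ones.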


Time-Series Analysis

14.1 The AR(1) Model. y_t = ρy_{t−1} + ε_t with |ρ| < 1 and ε_t ~ IIN(0, σ_ε²). Also, y_0 ~ N(0, σ_ε²/(1 − ρ²)).

a. By successive substitution

y_t = ρy_{t−1} + ε_t = ρ(ρy_{t−2} + ε_{t−1}) + ε_t = ρ²y_{t−2} + ρε_{t−1} + ε_t

= ρ²(ρy_{t−3} + ε_{t−2}) + ρε_{t−1} + ε_t = ρ³y_{t−3} + ρ²ε_{t−2} + ρε_{t−1} + ε_t

= … = ρ^t y_0 + ρ^{t−1}ε_1 + ρ^{t−2}ε_2 + … + ε_t

Then, E(y_t) = ρ^t E(y_0) = 0 for every t, since E(y_0) = E(ε_t) = 0.

var(y_t) = ρ^{2t} var(y_0) + ρ^{2(t−1)} var(ε_1) + ρ^{2(t−2)} var(ε_2) + … + var(ε_t)

= ρ^{2t} σ_ε²/(1 − ρ²) + σ_ε²[ρ^{2(t−1)} + … + ρ² + 1] = σ_ε²/(1 − ρ²) for every t.

If ρ = 1, then var(y_t) = σ_ε²/0 → ∞. Also, if |ρ| > 1, then 1 − ρ² < 0 and var(y_t) < 0.
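As an illustration (not part of the text), a short simulation across many independent paths confirms the zero mean and the stationary variance σ_ε²/(1 − ρ²):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma2 = 0.8, 1.0          # assumed parameter values for illustration
n_paths, t_max = 20_000, 50

# draw y_0 from the stationary distribution N(0, sigma2 / (1 - rho^2))
y = rng.normal(0.0, np.sqrt(sigma2 / (1 - rho**2)), size=n_paths)
for _ in range(t_max):
    y = rho * y + rng.normal(0.0, np.sqrt(sigma2), size=n_paths)

# across paths, mean stays near 0 and variance near sigma2 / (1 - rho^2)
print(y.mean(), y.var(), sigma2 / (1 - rho**2))
```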

b. The AR(1) series y_t has zero mean and constant variance σ_y² = var(y_t) for t = 0, 1, 2, … In part (a) we could have stopped the successive substitution at y_{t−s}; this yields y_t = ρ^s y_{t−s} + ρ^{s−1} ε_{t−s+1} + … + ε_t

Therefore, c...


Relative Efficiency of OLS Under Heteroskedasticity

a. From Eq. (5.9) we have

var(β̂_ols) = Σ_{i=1}^n x_i² σ_i² / (Σ_{i=1}^n x_i²)² = σ² Σ_{i=1}^n x_i² X_i^δ / (Σ_{i=1}^n x_i²)²

where x_i = X_i − X̄. For X_i = 1, 2, …, 10 and δ = 0.5, 1, 1.5 and 2, this is tabulated below.
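The tabulation can be reproduced numerically; the sketch below assumes the heteroskedastic form σ_i² = σ² X_i^δ with σ² normalized to 1 (an assumption consistent with the formula above, not confirmed by the truncated text):

```python
import numpy as np

X = np.arange(1, 11, dtype=float)     # X_i = 1, 2, ..., 10
x = X - X.mean()                      # deviations x_i = X_i - Xbar
sigma2 = 1.0                          # normalize sigma^2 = 1

for delta in (0.5, 1.0, 1.5, 2.0):
    s2_i = sigma2 * X**delta          # heteroskedastic variances sigma_i^2
    var_ols = (x**2 * s2_i).sum() / (x**2).sum()**2
    print(delta, var_ols)
```

For δ = 1 this evaluates to exactly 1/15, against the homoskedastic value σ²/Σx_i² = 1/82.5, showing how the heteroskedasticity inflates var(β̂_ols).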


b. Apply these four Wald statistics to the equation relating real per-capita consumption to real per-capita disposable income in the U.S. over the post-World War II period 1959–2007. The SAS program that generated these Wald statistics is given below.


Note: dF/dx is for discrete change of a dummy variable from 0 to 1; z and P>|z| correspond to the test of the underlying coefficient being 0.

One can also run logit and probit for the unemployment variable and repeat this for females. This is not done here to save space.



The General Linear Model: The Basics

7.1 Invariance of the fitted values and residuals to non-singular transformations of the independent variables.

The regression model in (7.1) can be written as y = XCC⁻¹β + u, where C is a non-singular matrix. Let X* = XC; then y = X*β* + u, where β* = C⁻¹β.

a. P_X* = X*(X*'X*)⁻¹X*' = XC[C'X'XC]⁻¹C'X' = XCC⁻¹(X'X)⁻¹C'⁻¹C'X' = P_X.

Hence, the regression of y on X* yields

ŷ = X*β̂*_ols = P_X* y = P_X y = Xβ̂_ols, which gives the same fitted values as the regression of y on X. Since the dependent variable y is the same, the residuals from both regressions will be the same.
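This invariance is easy to verify numerically; the following sketch (with an arbitrary non-singular C, chosen for illustration) checks that both regressions give identical fitted values:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

# any non-singular C leaves the column space of X, hence P_X, unchanged
C = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
X_star = X @ C

fitted = lambda Z: Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
print(np.allclose(fitted(X), fitted(X_star)))  # True
```

Since the fitted values coincide, so do the residuals y − ŷ.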

b. Multiplying each X_k by a constant is equivalent to post-multiplying the matrix X by a diagonal matrix C with typical k-th diagonal element c_k. Each X_k will be multiplied by the constant c_k for k = 1, 2, …, K…


Regression Diagnostics and Specification Tests

8.1 Since H = P_X is idempotent, it is positive semi-definite with b'Hb ≥ 0 for any arbitrary vector b. Specifically, for b' = (1, 0, …, 0) we get h_11 ≥ 0. Also, H² = H. Hence,

h_11 = Σ_{j=1}^n h_{1j}² ≥ h_11² ≥ 0.

From this inequality, we deduce that h_11² − h_11 ≤ 0, or that h_11(h_11 − 1) ≤ 0. But h_11 ≥ 0, hence 0 ≤ h_11 ≤ 1. There is nothing particular about our choice of h_11. The same proof holds for h_22 or h_33 or, in general, h_ii. Hence, 0 ≤ h_ii ≤ 1 for i = 1, 2, …, n.
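These bounds can be verified numerically; the sketch below (with a random X, not from the text) checks 0 ≤ h_ii ≤ 1 for every leverage value, together with the standard fact tr(H) = k:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 30, 4
X = rng.normal(size=(n, k))

# hat matrix H = X (X'X)^{-1} X'
H = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(H)

print(h.min() >= 0, h.max() <= 1)   # leverages lie in [0, 1]
print(np.isclose(h.sum(), k))       # tr(H) = k
```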

8.2 A Simple Regression With No Intercept. Consider y_i = x_iβ + u_i for i = 1, 2, …, n.

a. H = P_x = x(x'x)⁻¹x' = xx'/x'x since x'x is a scalar. Therefore, h_ii = x_i²/Σ_{i=1}^n x_i² for i = 1, 2, …, n. Note that the x_i's are not in deviation form as in the case of a simple regression with an intercept. In this case, tr(H) = tr(P_x) = tr(xx')/x'x …
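For the no-intercept case, a minimal sketch (with an arbitrary regressor vector chosen for illustration) confirms h_ii = x_i²/Σ x_i² and that the trace equals 1, the number of estimated parameters:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # arbitrary regressor, no intercept

# hat matrix for a single regressor: H = x x' / (x'x)
H = np.outer(x, x) / (x @ x)
h = np.diag(H)

print(np.allclose(h, x**2 / (x**2).sum()))  # True
print(np.isclose(np.trace(H), 1.0))         # tr(H) = 1
```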


Generalized Least Squares

9.1 GLS Is More Efficient than OLS.

a. Equation (7.5) of Chap. 7 gives β̂_ols = β + (X'X)⁻¹X'u, so that E(β̂_ols) = β as long as X and u are uncorrelated and u has zero mean. Also,

var(β̂_ols) = E(β̂_ols − β)(β̂_ols − β)' = E[(X'X)⁻¹X'uu'X(X'X)⁻¹]

= (X'X)⁻¹X'E(uu')X(X'X)⁻¹ = σ²(X'X)⁻¹X'ΩX(X'X)⁻¹.

b. var(β̂_ols) − var(β̂_gls) = σ²[(X'X)⁻¹X'ΩX(X'X)⁻¹ − (X'Ω⁻¹X)⁻¹]

= σ²[(X'X)⁻¹X'ΩX(X'X)⁻¹ − (X'Ω⁻¹X)⁻¹X'Ω⁻¹ΩΩ⁻¹X(X'Ω⁻¹X)⁻¹]

= σ²[(X'X)⁻¹X' − (X'Ω⁻¹X)⁻¹X'Ω⁻¹]Ω[X(X'X)⁻¹ − Ω⁻¹X(X'Ω⁻¹X)⁻¹]

= σ² AΩA'

where A = [(X'X)⁻¹X' − (X'Ω⁻¹X)⁻¹X'Ω⁻¹]. The second equality post-multiplies (X'Ω⁻¹X)⁻¹ by (X'Ω⁻¹X)(X'Ω⁻¹X)⁻¹, which is an identity of dimension K. The third equality follows since the cross-product terms give −2(X'Ω⁻¹X)⁻¹…
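A numerical check of this efficiency result is straightforward; the sketch below (with σ² = 1 and an assumed diagonal Ω, chosen for illustration) confirms that the variance difference equals AΩA' and is positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 40, 3
X = rng.normal(size=(n, k))

# an assumed positive definite Omega (diagonal, i.e. heteroskedastic errors)
Omega = np.diag(rng.uniform(0.5, 3.0, size=n))
Omega_inv = np.linalg.inv(Omega)

XtX_inv = np.linalg.inv(X.T @ X)
var_ols = XtX_inv @ X.T @ Omega @ X @ XtX_inv   # (X'X)^-1 X' Omega X (X'X)^-1
var_gls = np.linalg.inv(X.T @ Omega_inv @ X)    # (X' Omega^-1 X)^-1

# the difference equals A Omega A' and is positive semi-definite
diff = var_ols - var_gls
A = XtX_inv @ X.T - var_gls @ X.T @ Omega_inv
print(np.allclose(diff, A @ Omega @ A.T))            # True
print(np.linalg.eigvalsh(diff).min() >= -1e-10)      # True: PSD up to rounding
```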
