
Limited Dependent Variables

13.1 The Linear Probability Model

| $y_i$ | $u_i$ | Probability |
|-------|-----------------|----------------|
| 1 | $1 - x_i'\beta$ | $\pi_i$ |
| 0 | $-x_i'\beta$ | $1 - \pi_i$ |

a. Let $\pi_i = \Pr[y_i = 1]$; then $y_i = 1$ when $u_i = 1 - x_i'\beta$ with probability $\pi_i$, as shown in the table above. Similarly, $y_i = 0$ when $u_i = -x_i'\beta$ with probability $1 - \pi_i$. Hence, $E(u_i) = \pi_i(1 - x_i'\beta) + (1 - \pi_i)(-x_i'\beta)$.

For this to equal zero, we need $\pi_i - \pi_i x_i'\beta + \pi_i x_i'\beta - x_i'\beta = 0$, which gives $\pi_i = x_i'\beta$ as required.

b. $\operatorname{var}(u_i) = E(u_i^2) = (1 - x_i'\beta)^2\,\pi_i + (-x_i'\beta)^2\,(1 - \pi_i)$

$= \left[1 - 2x_i'\beta + (x_i'\beta)^2\right]\pi_i + (x_i'\beta)^2(1 - \pi_i)$

$= \pi_i - 2x_i'\beta\,\pi_i + (x_i'\beta)^2 = \pi_i - \pi_i^2 = \pi_i(1 - \pi_i) = x_i'\beta\,(1 - x_i'\beta)$, using the fact that $\pi_i = x_i'\beta$.
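As a quick check of these two moments, the following sketch (my own illustration, not part of the original solution) enumerates the two possible values of $u_i$ for a hypothetical $\pi_i = x_i'\beta = 0.3$ and confirms $E(u_i) = 0$ and $\operatorname{var}(u_i) = \pi_i(1 - \pi_i)$.

```python
import numpy as np

# Hypothetical value of pi_i = x_i' beta; the LPM needs it to lie in (0, 1)
pi = 0.3

# u_i equals 1 - pi with probability pi, and -pi with probability 1 - pi
values = np.array([1 - pi, -pi])
probs = np.array([pi, 1 - pi])

mean_u = np.sum(values * probs)        # should be 0
var_u = np.sum(values**2 * probs)      # should equal pi * (1 - pi)

print(mean_u)                          # 0.0 up to floating-point error
print(var_u, pi * (1 - pi))            # both 0.21
```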

13.2 a. Since there are no slopes and only a constant, $x_i'\beta = \alpha$ and (13.16) becomes

$\log \ell = \sum_{i=1}^{n}\{y_i \log F(\alpha) + \ldots$
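The excerpt is cut off here, but with only a constant the binary-response log-likelihood is $\sum_{i=1}^{n}\{y_i\log F(\alpha) + (1 - y_i)\log[1 - F(\alpha)]\}$, and its maximizer satisfies $F(\hat\alpha) = \bar y$, the sample proportion of ones. The sketch below is my own numerical check of that claim for a probit ($F = \Phi$); the data are simulated and purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)       # hypothetical 0/1 data, purely illustrative

def neg_loglik(alpha):
    F = norm.cdf(alpha)                # probit: F is the standard normal cdf
    return -np.sum(y * np.log(F) + (1 - y) * np.log(1 - F))

alpha_hat = minimize_scalar(neg_loglik, bounds=(-5, 5), method="bounded").x
print(norm.cdf(alpha_hat), y.mean())   # F(alpha_hat) matches the sample proportion of ones
```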


Time-Series Analysis

14.1 The AR(1) Model. $y_t = \rho y_{t-1} + \epsilon_t$ with $|\rho| < 1$ and $\epsilon_t \sim \text{IIN}(0, \sigma_\epsilon^2)$. Also, $y_0 \sim N(0, \sigma_\epsilon^2/(1 - \rho^2))$.

a. By successive substitution

$y_t = \rho y_{t-1} + \epsilon_t = \rho(\rho y_{t-2} + \epsilon_{t-1}) + \epsilon_t = \rho^2 y_{t-2} + \rho\epsilon_{t-1} + \epsilon_t$

$= \rho^2(\rho y_{t-3} + \epsilon_{t-2}) + \rho\epsilon_{t-1} + \epsilon_t = \rho^3 y_{t-3} + \rho^2\epsilon_{t-2} + \rho\epsilon_{t-1} + \epsilon_t$

$= \cdots = \rho^t y_0 + \rho^{t-1}\epsilon_1 + \rho^{t-2}\epsilon_2 + \cdots + \epsilon_t.$

Then $E(y_t) = \rho^t E(y_0) = 0$ for every $t$, since $E(y_0) = E(\epsilon_t) = 0$.

$\operatorname{var}(y_t) = \rho^{2t}\operatorname{var}(y_0) + \rho^{2(t-1)}\operatorname{var}(\epsilon_1) + \rho^{2(t-2)}\operatorname{var}(\epsilon_2) + \cdots + \operatorname{var}(\epsilon_t)$

$= \rho^{2t}\,\dfrac{\sigma_\epsilon^2}{1 - \rho^2} + \sigma_\epsilon^2\,\dfrac{1 - \rho^{2t}}{1 - \rho^2} = \dfrac{\sigma_\epsilon^2}{1 - \rho^2}$ for every $t$.

If $\rho = 1$, then $\operatorname{var}(y_t) = \sigma_\epsilon^2/0 \to \infty$. Also, if $|\rho| > 1$, then $1 - \rho^2 < 0$ and $\operatorname{var}(y_t) < 0$.
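To see the stationarity claim in action, here is a small simulation sketch (my own illustration, with arbitrarily chosen $\rho$ and $\sigma_\epsilon$): drawing $y_0$ from $N(0, \sigma_\epsilon^2/(1 - \rho^2))$ keeps the variance of $y_t$ roughly constant across $t$.

```python
import numpy as np

rng = np.random.default_rng(42)
rho, sigma_eps = 0.8, 1.0              # assumed parameter values for the illustration
T, reps = 50, 20_000                   # series length and number of replications

# Draw y_0 from its stationary distribution N(0, sigma_eps^2 / (1 - rho^2))
y = np.empty((reps, T + 1))
y[:, 0] = rng.normal(0.0, sigma_eps / np.sqrt(1 - rho**2), size=reps)
for t in range(1, T + 1):
    y[:, t] = rho * y[:, t - 1] + rng.normal(0.0, sigma_eps, size=reps)

# The cross-replication variance at each date should stay near sigma_eps^2 / (1 - rho^2)
print(sigma_eps**2 / (1 - rho**2))     # theoretical value, about 2.78
print(y.var(axis=0)[[0, 10, 25, 50]])  # simulated variances at a few dates
```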

b. The AR(1) series $y_t$ has zero mean and constant variance $\sigma^2 = \operatorname{var}(y_t)$ for $t = 0, 1, 2, \ldots$ In part (a) we could have stopped the successive substitution at $y_{t-s}$; this yields $y_t = \rho^s y_{t-s} + \rho^{s-1}\epsilon_{t-s+1} + \cdots + \epsilon_t.$

Therefore, ...


Relative Efficiency of OLS Under Heteroskedasticity

a. From Eq. (5.9) we have

$\operatorname{var}(\hat\beta_{OLS}) = \displaystyle\sum_{i=1}^{n} x_i^2\sigma_i^2 \Big/ \Big(\sum_{i=1}^{n} x_i^2\Big)^2 = \sigma^2 \sum_{i=1}^{n} x_i^2 X_i^\delta \Big/ \Big(\sum_{i=1}^{n} x_i^2\Big)^2,$

where $x_i = X_i - \bar X$ and $\sigma_i^2 = \sigma^2 X_i^\delta$. This is tabulated below for $X_i = 1, 2, \ldots, 10$ and $\delta = 0.5, 1, 1.5$ and $2$.
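The table itself is not reproduced in this excerpt. The sketch below (my own illustration) evaluates the formula above for $X_i = 1, \ldots, 10$ and each $\delta$, taking $\sigma^2 = 1$ as a normalization, so the printed entries are $\operatorname{var}(\hat\beta_{OLS})/\sigma^2$.

```python
import numpy as np

X = np.arange(1, 11, dtype=float)      # X_i = 1, ..., 10
x = X - X.mean()                       # deviations x_i = X_i - X-bar
sigma2 = 1.0                           # normalization; the variance scales linearly in sigma^2

for delta in (0.5, 1.0, 1.5, 2.0):
    var_ols = sigma2 * np.sum(x**2 * X**delta) / np.sum(x**2) ** 2
    print(delta, var_ols)
```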


b. Apply these four Wald statistics to the equation relating real per-capita consumption to real per-capita disposable income in the U.S. over the post-World War II period 1959-2007. The SAS program that generated these Wald statistics is given below.


[7] dF/dx is for the discrete change of a dummy variable from 0 to 1; z and P>|z| correspond to the test of the underlying coefficient being 0.

One can also run logit and probit for the unemployment variable and repeat this for females. This is not done here to save space.

[8] dF/dx is...


The General Linear Model: The Basics

7.1 Invariance of the fitted values and residuals to non-singular transformations of the independent variables.

The regression model in (7.1) can be written as $y = XCC^{-1}\beta + u$, where $C$ is a non-singular matrix. Let $X^* = XC$; then $y = X^*\beta^* + u$ where $\beta^* = C^{-1}\beta$.

a. $P_{X^*} = X^*(X^{*\prime}X^*)^{-1}X^{*\prime} = XC\,[C'X'XC]^{-1}C'X' = XCC^{-1}(X'X)^{-1}(C')^{-1}C'X' = P_X$.

Hence, the regression of $y$ on $X^*$ yields

$\hat y = X^*\hat\beta^*_{OLS} = P_{X^*}y = P_X y = X\hat\beta_{OLS},$

which gives the same fitted values as the regression of $y$ on $X$. Since the dependent variable $y$ is the same, the residuals from both regressions will be the same.
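A quick numerical check (my own sketch, with arbitrary simulated data and an arbitrary non-singular $C$) that the fitted values and residuals are unchanged, and that $\hat\beta^* = C^{-1}\hat\beta$:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -2.0]) + rng.normal(size=n)

C = rng.normal(size=(k, k))            # a random k x k matrix is non-singular with probability one
X_star = X @ C

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
beta_star_hat = np.linalg.lstsq(X_star, y, rcond=None)[0]

print(np.allclose(X @ beta_hat, X_star @ beta_star_hat))          # same fitted values
print(np.allclose(y - X @ beta_hat, y - X_star @ beta_star_hat))  # same residuals
print(np.allclose(beta_star_hat, np.linalg.solve(C, beta_hat)))   # beta*-hat = C^{-1} beta-hat
```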

b. Multiplying each regressor by a constant is equivalent to post-multiplying the matrix $X$ by a diagonal matrix $C$ with typical $k$-th element $c_k$. Each $X_k$ will be multiplied by the constant $c_k$ for $k = 1, 2, \ldots, K$...


The Wald, LR, and LM Inequality. This is based on Baltagi (1994). The likelihood is given by Eq. (2.1) in the text

[Equations (1)-(7), shown only as images in the original, are not reproduced here.]

where $I_{11}$ denotes the (1,1) element of the information matrix evaluated at the unrestricted maximum likelihood estimates. It is easy to show from (1) that

[Equation (8), also shown only as an image in the original, is not reproduced here.]

Hence, using (4) and (8), one gets

[Equation (12), again shown only as an image, is not reproduced here.]

Hence, using (3) and (11), one gets the next expression (also shown only as an image in the original), where the last equality follows from (10). Here $\hat\mu$ and $\hat\sigma^2$ denote the unrestricted maximum likelihood estimates and $\tilde\sigma^2$ the restricted estimate of $\sigma^2$ under $H_0: \mu = \mu_0$. $L(\mu_0, \tilde\sigma^2)$ is the restricted maximum; therefore $\log L(\mu_0, \hat\sigma^2) \le \log L(\mu_0, \tilde\sigma^2)$, from which we deduce that $W \ge LR$. Also, $L(\hat\mu, \hat\sigma^2)$ is the unrestricted maximum; therefore $\log L(\hat\mu, \hat\sigma^2) \ge \log L(\hat\mu, \tilde\sigma^2)$, from which we deduce that $LR \ge LM$.
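To make the ordering concrete, the sketch below (my own illustration with simulated data) computes the three statistics from their standard closed forms for testing $H_0: \mu = \mu_0$ in a random sample from $N(\mu, \sigma^2)$: $W = n(\tilde\sigma^2 - \hat\sigma^2)/\hat\sigma^2$, $LR = n\log(\tilde\sigma^2/\hat\sigma^2)$, and $LM = n(\tilde\sigma^2 - \hat\sigma^2)/\tilde\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu0 = 30, 0.0
x = rng.normal(loc=0.5, scale=2.0, size=n)   # hypothetical sample whose true mean differs from mu0

mu_hat = x.mean()
sigma2_hat = np.mean((x - mu_hat) ** 2)      # unrestricted MLE of sigma^2
sigma2_tilde = np.mean((x - mu0) ** 2)       # restricted MLE of sigma^2 under H0: mu = mu0

W = n * (sigma2_tilde - sigma2_hat) / sigma2_hat
LR = n * np.log(sigma2_tilde / sigma2_hat)
LM = n * (sigma2_tilde - sigma2_hat) / sigma2_tilde

print(W, LR, LM)                             # always ordered W >= LR >= LM
assert W >= LR >= LM
```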

An alternative derivation of this inequality shows first that LM W/n L...


Efficiency as Correlation. This is based on Zheng (1994)

3.12 Since $\hat\beta$ and $\tilde\beta$ are linear unbiased estimators of $\beta$, it follows that $\hat\beta + \lambda(\tilde\beta - \hat\beta)$, for any $\lambda$, is also a linear unbiased estimator of $\beta$. Since $\hat\beta$ is the BLU estimator of $\beta$,

$\operatorname{var}\big[\hat\beta + \lambda(\tilde\beta - \hat\beta)\big]$

is minimized at $\lambda = 0$. Setting the derivative of $\operatorname{var}\big[\hat\beta + \lambda(\tilde\beta - \hat\beta)\big]$ with respect to $\lambda$ equal to zero at $\lambda = 0$, we have $2E\big[\hat\beta(\tilde\beta - \hat\beta)\big] = 0$, or $E(\hat\beta^2) = E(\hat\beta\tilde\beta)$. Thus, the squared correlation between $\hat\beta$ and $\tilde\beta$ is

$\rho^2(\hat\beta, \tilde\beta) = \dfrac{\big[\operatorname{cov}(\hat\beta, \tilde\beta)\big]^2}{\operatorname{var}(\hat\beta)\operatorname{var}(\tilde\beta)} = \dfrac{\big[E(\hat\beta\tilde\beta) - \beta^2\big]^2}{\operatorname{var}(\hat\beta)\operatorname{var}(\tilde\beta)} = \dfrac{\big[\operatorname{var}(\hat\beta)\big]^2}{\operatorname{var}(\hat\beta)\operatorname{var}(\tilde\beta)} = \dfrac{\operatorname{var}(\hat\beta)}{\operatorname{var}(\tilde\beta)},$

where the third equality uses the result that $E(\hat\beta^2) = E(\hat\beta\tilde\beta)$. The final equality gives $\operatorname{var}(\hat\beta)/\operatorname{var}(\tilde\beta)$, which is the relative efficiency of $\hat\beta$ and $\tilde\beta$.
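A small Monte Carlo sketch (my own illustration) makes the result concrete. It assumes a regression through the origin, $y_i = \beta x_i + u_i$, with $\hat\beta$ the OLS slope (the BLU estimator under the classical assumptions) and $\tilde\beta = \sum_i y_i / \sum_i x_i$ as an arbitrary alternative linear unbiased estimator; the squared correlation between the two estimates then approximates $\operatorname{var}(\hat\beta)/\operatorname{var}(\tilde\beta)$.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, n, reps = 2.0, 25, 20_000
x = np.linspace(1.0, 5.0, n)                 # fixed regressors; model y_i = beta * x_i + u_i

u = rng.normal(size=(reps, n))               # homoskedastic errors, one row per replication
y = beta * x + u

b_hat = y @ x / np.sum(x**2)                 # OLS slope through the origin (BLUE here)
b_tilde = y.sum(axis=1) / x.sum()            # another linear unbiased estimator of beta

rho2 = np.corrcoef(b_hat, b_tilde)[0, 1] ** 2
rel_eff = b_hat.var() / b_tilde.var()
print(rho2, rel_eff)                         # the two numbers should be close
```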
