
Distributed Lags and Dynamic Models

6.1 a. Using the linear arithmetic lag given in Eq. (6.2), a 6-year lag on income gives a regression of consumption on a constant and $Z_t = \sum_{i=0}^{6} (7-i)X_{t-i}$, where $X_t$ denotes income. In this case,

$Z_t = 7X_t + 6X_{t-1} + \cdots + X_{t-6}.$

The Stata regression output is given below:

. gen z_6=7*ly+6*l.ly+5*l2.ly+4*l3.ly+3*l4.ly+2*l5.ly+l6.ly
(6 missing values generated)

. reg lc z_6

      Source |       SS           df       MS      Number of obs   =        43
-------------+----------------------------------   F(1, 41)        =   3543.62
       Model |  3.26755259         1  3.26755259   Prob > F        =    0.0000
    Residual |  .037805823        41  .000922093   R-squared       =    0.9886
-------------+----------------------------------   Adj R-squared   =    0.9883
       Total |  3.30535842        42   .07869901   Root MSE        =    .03037

------------------------------------------------------------------------------
          lc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         z_6 |   .0373029   .0006266    59.53   0.000     .0360374         ...
       _cons |  -.4950913   .1721567    -2.88   0.006     -...
------------------------------------------------------------------------------
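Under the arithmetic-lag restriction, the implied coefficient on $X_{t-i}$ is $(7-i)\hat\beta$, where $\hat\beta = .0373029$ is the estimate on z_6. A minimal Stata sketch to tabulate the implied lag coefficients, run immediately after the regression above (the loop itself is illustrative, not part of the original output):

forvalues i = 0/6 {
    display "implied coefficient on income lagged `i': " (7-`i')*_b[z_6]
}

For example, the implied coefficient on current income is 7 × .0373029 ≈ .2611.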


The Best Predictor

a. The problem is to minimize $E[Y - h(X)]^2$ with respect to $h(X)$. Add and subtract $E(Y/X)$ to get

$E\{[Y - E(Y/X)] + [E(Y/X) - h(X)]\}^2 = E[Y - E(Y/X)]^2 + E[E(Y/X) - h(X)]^2,$

since the cross-product term $E\{[Y - E(Y/X)][E(Y/X) - h(X)]\}$ is zero because of the law of iterated expectations; see the Appendix to Chapter 2 or Amemiya (1994). In fact, this law says that expectations can be written as $E = E_X E_{Y/X}$. Conditional on $X$, the factor $[E(Y/X) - h(X)]$ is a constant, so the cross-product term given above satisfies $E_{Y/X}\{[Y - E(Y/X)][E(Y/X) - h(X)]\} = [E(Y/X) - h(X)]\,E_{Y/X}[Y - E(Y/X)] = 0$. Hence, $E[Y - h(X)]^2$ is expressed as the sum of two non-negative terms. The first term is not affected by our choice of $h(X)$. The second term, however, is zero for $h(X) = E(Y/X)$. Clearly, this is the best predictor of $Y$ based on $X$.
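A small simulation makes the result concrete. The sketch below is a hypothetical check (the data-generating process, with $E(Y/X) = X^2$, and all variable names are our own choices): the conditional mean should beat the best linear predictor in mean squared error.

clear
set obs 10000
set seed 123
gen x = rnormal()
gen y = x^2 + rnormal()      // by construction E(Y/X) = X^2
gen se_cond = (y - x^2)^2    // squared error of the best predictor E(Y/X)
quietly regress y x          // best linear predictor of Y given X
predict yhat_lin, xb
gen se_lin = (y - yhat_lin)^2
summarize se_cond se_lin     // the mean of se_cond is the smaller of the two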

b. In the Appendix to Chapter 2, we considered the bivariate Normal distribu...


Effect of Additional Regressors on R²

a. Least squares on the $K = K_1 + K_2$ regressors minimizes the sum of squared errors and yields

$SSE_2 = \min \sum_{i=1}^{n} \left(Y_i - \alpha - \beta_2 X_{2i} - \cdots - \beta_{K_1} X_{K_1 i} - \cdots - \beta_K X_{Ki}\right)^2.$

Let us denote the corresponding estimates by $(\hat\alpha, \hat\beta_2, \ldots, \hat\beta_{K_1}, \ldots, \hat\beta_K)$. This implies that

$SSE^* = \sum_{i=1}^{n} \left(Y_i - \alpha^* - \beta_2^* X_{2i} - \cdots - \beta_{K_1}^* X_{K_1 i} - \cdots - \beta_K^* X_{Ki}\right)^2,$

based on arbitrary $(\alpha^*, \beta_2^*, \ldots, \beta_{K_1}^*, \ldots, \beta_K^*)$, satisfies $SSE^* \geq SSE_2$. In particular, the least squares estimates using only the first $K_1$ regressors, say $(\hat{\hat\alpha}, \hat{\hat\beta}_2, \ldots, \hat{\hat\beta}_{K_1})$ together with $\beta_{K_1+1}^* = 0, \ldots, \beta_K^* = 0$, satisfy the above inequality. Hence, $SSE_1 \geq SSE_2$, where $SSE_1$ is the sum of squared errors from the regression on the first $K_1$ regressors only. Since $\sum_{i=1}^n y_i^2$ is fixed, this means that $R_2^2 \geq R_1^2$. This is based on the solution by Rao and White (1988).
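The inequality is easy to verify numerically. A minimal Stata illustration on the built-in auto data (an arbitrary choice of dataset and regressors, not part of the problem):

sysuse auto, clear
quietly regress price weight
display "R-squared with K1 regressors: " e(r2)
quietly regress price weight length foreign
display "R-squared with K regressors:  " e(r2)   // never smaller than the first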

b. From the definition of $\bar R^2$, we get $(1 - \bar R^2) = \dfrac{\sum_{i=1}^n e_i^2/(n - K)}{\sum_{i=1}^n y_i^2/(n - 1)}$...

4.4 This regression suffers from perfect multicollinearity...
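As a generic illustration of the point (hypothetical variables, not the regression in the problem), Stata detects an exactly collinear regressor and omits it:

sysuse auto, clear
gen weight2 = 2*weight          // exact linear combination of an existing regressor
regress price weight weight2    // Stata notes the collinearity and drops weight2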


The Binomial Distribution

a. $\Pr[X = 5 \text{ or } 6] = \Pr[X = 5] + \Pr[X = 6]$

$= b(n = 20, X = 5, \theta = 0.1) + b(n = 20, X = 6, \theta = 0.1)$

$= \binom{20}{5}(0.1)^5(0.9)^{15} + \binom{20}{6}(0.1)^6(0.9)^{14} = 0.0319 + 0.0089 = 0.0408.$

This can be easily done with a calculator, on the computer, or using the Binomial tables; see Freund (1992).
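For instance, Stata's binomialp() function returns the Binomial probability mass directly:

display binomialp(20,5,0.1) + binomialp(20,6,0.1)   // 0.0408, matching the hand calculation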

b. Note that $\binom{n}{n-X} = \dfrac{n!}{(n-X)!\,[n-(n-X)]!} = \dfrac{n!}{(n-X)!\,X!} = \binom{n}{X}$. Hence,

$b(n, n-X, 1-\theta) = \binom{n}{n-X}(1-\theta)^{n-X}\,[1-(1-\theta)]^{n-(n-X)} = \binom{n}{X}\,\theta^X(1-\theta)^{n-X} = b(n, X, \theta).$

c. Using the MGF for the Binomial distribution given in problem 2.14a, we get $M_X(t) = [(1-\theta) + \theta e^t]^n$.

Differentiating with respect to $t$ yields $M_X'(t) = n[(1-\theta) + \theta e^t]^{n-1}\theta e^t$. Therefore, $M_X'(0) = n\theta = E(X)$.

Differentiating $M_X'(t)$ again with respect to $t$ yields

$M_X''(t) = n(n-1)[(1-\theta) + \theta e^t]^{n-2}(\theta e^t)^2 + n[(1-\theta) + \theta e^t]^{n-1}\theta e^t.$

Therefore $M_X''(0) = n(n-1)\theta^2 + n\theta = E(X^2)$.

Hence $\mathrm{var}(X) = E(X^2) - (E(X))^2 = n\theta + n^2\theta^2 - n\theta^2 - n^2\theta^2 = n\theta(1-\theta)$...
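These two moments are easy to check by simulation; with $n = 20$ and $\theta = 0.1$ they are $E(X) = 2$ and $\mathrm{var}(X) = 1.8$. A minimal Stata sketch (sample size and seed are arbitrary choices):

clear
set obs 100000
set seed 456
gen x = rbinomial(20, 0.1)
summarize x, detail          // mean close to 2 and variance close to 1.8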


Simple Linear Regression

3.1 For least squares, the first-order conditions of minimization, given by Eqs. (3.2) and (3.3), yield immediately the first two numerical properties of the OLS estimates, i.e., $\sum_{i=1}^n e_i = 0$ and $\sum_{i=1}^n e_i X_i = 0$. Now consider $\sum_{i=1}^n e_i \hat Y_i = \hat\alpha \sum_{i=1}^n e_i + \hat\beta \sum_{i=1}^n e_i X_i = 0$, where the first equality uses $\hat Y_i = \hat\alpha + \hat\beta X_i$ and the second equality uses the first two numerical properties of OLS. Using the fact that $e_i = Y_i - \hat Y_i$, we can sum both sides to get $\sum_{i=1}^n e_i = \sum_{i=1}^n Y_i - \sum_{i=1}^n \hat Y_i$; but $\sum_{i=1}^n e_i = 0$, therefore we get $\sum_{i=1}^n Y_i = \sum_{i=1}^n \hat Y_i$. Dividing both sides by $n$, we get $\bar Y = \bar{\hat Y}$.
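All three numerical properties can be verified on any dataset. A minimal Stata sketch using the built-in auto data (an arbitrary illustrative choice, not from the problem):

sysuse auto, clear
quietly regress price weight
predict yhat, xb             // fitted values
predict e, resid             // residuals
gen eX = e*weight
summarize e eX               // means, hence sums, are zero up to rounding
summarize price yhat         // the mean of Y equals the mean of Y-hat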

3.2 Minimizing $\sum_{i=1}^n (Y_i - \alpha)^2$ with respect to $\alpha$ yields $-2\sum_{i=1}^n (Y_i - \alpha) = 0$. Solving for $\alpha$ yields $\hat\alpha_{ols} = \bar Y$. Averaging $Y_i = \alpha + u_i$ we get $\bar Y = \alpha + \bar u$. Hence $\hat\alpha_{ols} = \alpha + \bar u$ with $E(\hat\alpha_{ols}) = \alpha$ since $E(\bar u) = 0$...
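Equivalently, regressing a variable on a constant alone reproduces its sample mean. A quick Stata check (again on the illustrative auto data):

sysuse auto, clear
quietly summarize price
display r(mean)              // the sample mean
regress price                // constant-only regression: _cons equals r(mean)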


Violations of the Classical Assumptions

5.1 $s^2$ is Biased Under Heteroskedasticity. From Chap. 3 we have shown that

$e_i = Y_i - \hat\alpha_{ols} - \hat\beta_{ols} X_i = y_i - \hat\beta_{ols} x_i = -(\hat\beta_{ols} - \beta)x_i + (u_i - \bar u)$

for $i = 1, 2, \ldots, n$. The second equality substitutes $\hat\alpha_{ols} = \bar Y - \hat\beta_{ols}\bar X$ and the third equality substitutes $y_i = \beta x_i + (u_i - \bar u)$. Hence,

$\sum_{i=1}^n e_i^2 = (\hat\beta_{ols} - \beta)^2 \sum_{i=1}^n x_i^2 + \sum_{i=1}^n (u_i - \bar u)^2 - 2(\hat\beta_{ols} - \beta)\sum_{i=1}^n x_i(u_i - \bar u).$

Taking expectations, and using $\hat\beta_{ols} - \beta = \sum_{i=1}^n x_i u_i \big/ \sum_{i=1}^n x_i^2$ together with $\mathrm{var}(u_i) = \sigma_i^2$, the three terms give

$E\left[(\hat\beta_{ols} - \beta)^2\right] \sum_{i=1}^n x_i^2 = \sum_{i=1}^n x_i^2\sigma_i^2 \Big/ \sum_{i=1}^n x_i^2, \qquad E\sum_{i=1}^n (u_i - \bar u)^2 = \sum_{i=1}^n \sigma_i^2 - \frac{1}{n}\sum_{i=1}^n \sigma_i^2,$

and, since $\sum_{i=1}^n x_i = 0$ implies $\sum_{i=1}^n x_i(u_i - \bar u) = \sum_{i=1}^n x_i u_i$,

$E\left[(\hat\beta_{ols} - \beta)\sum_{i=1}^n x_i(u_i - \bar u)\right] = E\left(\sum_{i=1}^n x_i u_i\right)^2 \Big/ \sum_{i=1}^n x_i^2 = \sum_{i=1}^n x_i^2\sigma_i^2 \Big/ \sum_{i=1}^n x_i^2.$


Hence,

$E(s^2) = \frac{E\left(\sum_{i=1}^n e_i^2\right)}{n - 2} = \frac{1}{n - 2}\left[\frac{n - 1}{n}\sum_{i=1}^n \sigma_i^2 - \sum_{i=1}^n x_i^2\sigma_i^2 \Big/ \sum_{i=1}^n x_i^2\right].$

Under homoskedasticity ($\sigma_i^2 = \sigma^2$ for all $i$) this reverts back to $E(s^2) = \sigma^2$.

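To make the bias concrete, consider a small worked example (a hypothetical design, not from the text): let $n = 10$, $X_i = i$, and $\sigma_i^2 = i^2$, with $x_i$ denoting deviations from $\bar X = 5.5$. Then $\sum_{i=1}^n \sigma_i^2 = 385$, $\sum_{i=1}^n x_i^2 = 82.5$, and $\sum_{i=1}^n x_i^2\sigma_i^2 \big/ \sum_{i=1}^n x_i^2 = 3704.25/82.5 = 44.9$, so that

$E(s^2) = \frac{1}{8}\left[\frac{9}{10}(385) - 44.9\right] = \frac{301.6}{8} = 37.7,$

which understates the average error variance $\sum_{i=1}^n \sigma_i^2/n = 38.5$.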