
Simple Linear Regression

3.1 For least squares, the first-order conditions of minimization, given by Eqs. (3.2) and (3.3), yield immediately the first two numerical properties of OLS estimates, i.e., $\sum_{i=1}^{n} e_i = 0$ and $\sum_{i=1}^{n} e_i X_i = 0$. Now consider

$$\sum_{i=1}^{n} e_i \hat{Y}_i = \hat{\alpha} \sum_{i=1}^{n} e_i + \hat{\beta} \sum_{i=1}^{n} e_i X_i = 0,$$

where the first equality uses $\hat{Y}_i = \hat{\alpha} + \hat{\beta} X_i$ and the second equality uses the first two numerical properties of OLS. Using the fact that $e_i = Y_i - \hat{Y}_i$, we can sum both sides to get $\sum_{i=1}^{n} e_i = \sum_{i=1}^{n} Y_i - \sum_{i=1}^{n} \hat{Y}_i$; but $\sum_{i=1}^{n} e_i = 0$, therefore $\sum_{i=1}^{n} Y_i = \sum_{i=1}^{n} \hat{Y}_i$. Dividing both sides by $n$, we get $\bar{\hat{Y}} = \bar{Y}$.
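These numerical properties are easy to confirm by simulation. The following sketch (the data, coefficients, and seed are made up for illustration) fits the OLS line by hand and checks each identity:

```python
import numpy as np

# Made-up data for illustration: Y = 2 + 0.5 X + noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
Y = 2.0 + 0.5 * X + rng.normal(0, 1, size=50)

x = X - X.mean()                                  # deviations from the mean
beta_hat = (x * (Y - Y.mean())).sum() / (x * x).sum()
alpha_hat = Y.mean() - beta_hat * X.mean()
Y_hat = alpha_hat + beta_hat * X
e = Y - Y_hat                                     # OLS residuals

assert abs(e.sum()) < 1e-8                        # sum of residuals is zero
assert abs((e * X).sum()) < 1e-8                  # residuals orthogonal to X
assert abs((e * Y_hat).sum()) < 1e-8              # residuals orthogonal to fitted values
assert abs(Y_hat.mean() - Y.mean()) < 1e-8        # mean of fitted = mean of Y
```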


3.2 Minimizing $\sum_{i=1}^{n} (Y_i - \alpha)^2$ with respect to $\alpha$ yields $-2\sum_{i=1}^{n} (Y_i - \alpha) = 0$. Solving for $\alpha$ yields $\hat{\alpha}_{ols} = \bar{Y}$. Averaging $Y_i = \alpha + u_i$ we get $\bar{Y} = \alpha + \bar{u}$. Hence $\hat{\alpha}_{ols} = \alpha + \bar{u}$ with $E(\hat{\alpha}_{ols}) = \alpha$ since $E(\bar{u}) = 0$.
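A quick numerical illustration (the data are made up): minimizing the sum of squares over a fine grid around the sample mean recovers $\bar{Y}$ as the least squares estimate of the intercept-only model.

```python
import numpy as np

# Made-up intercept-only sample: Y_i = 3 + u_i
rng = np.random.default_rng(1)
Y = 3.0 + rng.normal(0, 2, size=200)

sse = lambda a: ((Y - a) ** 2).sum()              # sum of squared errors
grid = np.linspace(Y.mean() - 1, Y.mean() + 1, 2001)
a_star = grid[np.argmin([sse(a) for a in grid])]  # grid minimizer

assert abs(a_star - Y.mean()) < 1e-3              # minimizer is Ybar
```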


Violations of the Classical Assumptions

5.1 $s^2$ is Biased Under Heteroskedasticity. From Chap. 3 we have shown that

$$e_i = Y_i - \hat{\alpha}_{ols} - \hat{\beta}_{ols} X_i = y_i - \hat{\beta}_{ols} x_i = -(\hat{\beta}_{ols} - \beta) x_i + (u_i - \bar{u})$$

for $i = 1, 2, \ldots, n$. The second equality substitutes $\hat{\alpha}_{ols} = \bar{Y} - \hat{\beta}_{ols} \bar{X}$ and the third equality substitutes $y_i = \beta x_i + (u_i - \bar{u})$. Hence,

$$\sum_{i=1}^{n} e_i^2 = (\hat{\beta}_{ols} - \beta)^2 \sum_{i=1}^{n} x_i^2 + \sum_{i=1}^{n} (u_i - \bar{u})^2 - 2(\hat{\beta}_{ols} - \beta) \sum_{i=1}^{n} x_i (u_i - \bar{u}).$$

Taking expectations, and using $\hat{\beta}_{ols} - \beta = \sum_{i=1}^{n} x_i u_i / \sum_{i=1}^{n} x_i^2$ with $\text{var}(u_i) = \sigma_i^2$, we get $E(\hat{\beta}_{ols} - \beta)^2 = \sum_{i=1}^{n} x_i^2 \sigma_i^2 / \left(\sum_{i=1}^{n} x_i^2\right)^2$, $E\left[\sum_{i=1}^{n} (u_i - \bar{u})^2\right] = \sum_{i=1}^{n} \sigma_i^2 - \frac{1}{n} \sum_{i=1}^{n} \sigma_i^2$, and $E\left[(\hat{\beta}_{ols} - \beta) \sum_{i=1}^{n} x_i (u_i - \bar{u})\right] = \sum_{i=1}^{n} x_i^2 \sigma_i^2 / \sum_{i=1}^{n} x_i^2$, so that

$$E\left(\sum_{i=1}^{n} e_i^2\right) = \frac{n-1}{n} \sum_{i=1}^{n} \sigma_i^2 - \frac{\sum_{i=1}^{n} x_i^2 \sigma_i^2}{\sum_{i=1}^{n} x_i^2}.$$
B. H. Baltagi, Solutions Manual for Econometrics, Springer Texts in Business and Economics, DOI 10.1007/978-3-642-54548-1_5, © Springer-Verlag Berlin Heidelberg 2015

Hence,

$$E(s^2) = \frac{E\left(\sum_{i=1}^{n} e_i^2\right)}{n-2} = \frac{\dfrac{n-1}{n} \sum_{i=1}^{n} \sigma_i^2 - \dfrac{\sum_{i=1}^{n} x_i^2 \sigma_i^2}{\sum_{i=1}^{n} x_i^2}}{n-2}.$$

Under homoskedasticity, $\sigma_i^2 = \sigma^2$ for all $i$, this reverts back to $E(s^2) = \sigma^2$.
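The expectation formula can be checked by Monte Carlo. In the sketch below, the design points and the heteroskedastic variance pattern $\sigma_i^2 = 0.5 + 0.1 X_i^2$ are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
X = np.linspace(1, 10, n)
x = X - X.mean()
sig2 = 0.5 + 0.1 * X**2            # made-up heteroskedastic variances

s2_draws = []
for _ in range(20000):
    u = rng.normal(0, np.sqrt(sig2))
    Y = 1.0 + 2.0 * X + u
    b = (x * (Y - Y.mean())).sum() / (x * x).sum()   # OLS slope
    a = Y.mean() - b * X.mean()                      # OLS intercept
    e = Y - a - b * X
    s2_draws.append((e * e).sum() / (n - 2))         # s^2 for this draw

# E(s^2) = [((n-1)/n) sum(sig2_i) - sum(x_i^2 sig2_i)/sum(x_i^2)] / (n-2)
theory = (((n - 1) / n) * sig2.sum()
          - (x * x * sig2).sum() / (x * x).sum()) / (n - 2)
assert abs(np.mean(s2_draws) - theory) < 0.1         # simulation matches formula
```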






The Wald, LR, and LM Inequality. This is based on Baltagi (1994). The likelihood is given by Eq. (2.1) in the text, that of a random sample $x_1, \ldots, x_n$ from $N(\mu, \sigma^2)$, so that

$$\log L(\mu, \sigma^2) = -\frac{n}{2} \log(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2. \qquad (1)$$

The unrestricted maximum likelihood estimates are $\hat{\mu} = \bar{x}$ and $\hat{\sigma}^2 = \sum_{i=1}^{n} (x_i - \bar{x})^2 / n$, while under $H_0: \mu = \mu_0$ the restricted maximum likelihood estimate of $\sigma^2$ is $\tilde{\sigma}^2 = \sum_{i=1}^{n} (x_i - \mu_0)^2 / n$. The Wald statistic is

$$W = (\hat{\mu} - \mu_0)^2 \, I_{11} = \frac{n(\bar{x} - \mu_0)^2}{\hat{\sigma}^2},$$

where $I_{11}$ denotes the (1,1) element of the information matrix evaluated at the unrestricted maximum likelihood estimates. It is easy to show from (1) that

$$\sum_{i=1}^{n} (x_i - \mu_0)^2 = \sum_{i=1}^{n} (x_i - \bar{x})^2 + n(\bar{x} - \mu_0)^2,$$

so that $\tilde{\sigma}^2 = \hat{\sigma}^2 + (\bar{x} - \mu_0)^2$. Similarly, the LM statistic evaluates the score at the restricted estimates and the LR statistic compares the restricted and unrestricted maxima:

$$LM = \frac{n(\bar{x} - \mu_0)^2}{\tilde{\sigma}^2}, \qquad LR = n \log(\tilde{\sigma}^2 / \hat{\sigma}^2).$$

Substituting (1), one gets

$$W = 2\left[\log L(\hat{\mu}, \hat{\sigma}^2) - \log L(\mu_0, \hat{\sigma}^2)\right], \qquad LM = 2\left[\log L(\hat{\mu}, \tilde{\sigma}^2) - \log L(\mu_0, \tilde{\sigma}^2)\right],$$

while by definition $LR = 2\left[\log L(\hat{\mu}, \hat{\sigma}^2) - \log L(\mu_0, \tilde{\sigma}^2)\right]$. Now $L(\mu_0, \tilde{\sigma}^2)$ is the restricted maximum; therefore, $\log L(\mu_0, \hat{\sigma}^2) \leq \log L(\mu_0, \tilde{\sigma}^2)$, from which we deduce that $W \geq LR$. Also, $L(\hat{\mu}, \hat{\sigma}^2)$ is the unrestricted maximum; therefore $\log L(\hat{\mu}, \hat{\sigma}^2) \geq \log L(\hat{\mu}, \tilde{\sigma}^2)$, from which we deduce that $LR \geq LM$.

An alternative derivation of this inequality shows first that $LM/n = (W/n)/(1 + W/n)$ and $LR/n = \log(1 + W/n)$, so that $W \geq LR \geq LM$ follows from $t \geq \log(1 + t) \geq t/(1 + t)$ for $t = W/n \geq 0$.
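The inequality is easy to confirm numerically. In this sketch the sample, $\mu_0$, and seed are made up; the closed forms for $W$, $LR$, and $LM$ are those given above:

```python
import numpy as np

# Made-up sample for testing H0: mu = mu0 in a N(mu, sigma^2) model
rng = np.random.default_rng(3)
x = rng.normal(1.0, 2.0, size=30)
mu0 = 0.0
n = len(x)

sig2_hat = ((x - x.mean()) ** 2).mean()      # unrestricted MLE of sigma^2
sig2_til = ((x - mu0) ** 2).mean()           # restricted MLE under H0

W = n * (x.mean() - mu0) ** 2 / sig2_hat
LM = n * (x.mean() - mu0) ** 2 / sig2_til
LR = n * np.log(sig2_til / sig2_hat)

assert W >= LR >= LM                          # the Wald/LR/LM inequality
# Alternative forms in terms of W/n:
assert abs(LM / n - (W / n) / (1 + W / n)) < 1e-12
assert abs(LR / n - np.log(1 + W / n)) < 1e-12
```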


Efficiency as Correlation. This is based on Zheng (1994).

3.12 Since $\hat{\beta}$ and $\tilde{\beta}$ are linear unbiased estimators of $\beta$, it follows that $\hat{\beta} + \lambda(\tilde{\beta} - \hat{\beta})$ for any $\lambda$ is a linear unbiased estimator of $\beta$. Since $\hat{\beta}$ is the BLU estimator of $\beta$,

$$\text{var}\left[\hat{\beta} + \lambda(\tilde{\beta} - \hat{\beta})\right]$$

is minimized at $\lambda = 0$. Setting the derivative of $\text{var}\left[\hat{\beta} + \lambda(\tilde{\beta} - \hat{\beta})\right]$ with respect to $\lambda$ at $\lambda = 0$ equal to zero, we have $2E\left[\hat{\beta}(\tilde{\beta} - \hat{\beta})\right] = 0$, or $E(\hat{\beta}^2) = E(\hat{\beta}\tilde{\beta})$. Thus, the squared correlation between $\hat{\beta}$ and $\tilde{\beta}$ is

$$\rho^2(\hat{\beta}, \tilde{\beta}) = \frac{\left[\text{cov}(\hat{\beta}, \tilde{\beta})\right]^2}{\text{var}(\hat{\beta})\,\text{var}(\tilde{\beta})} = \frac{\left[E(\hat{\beta}\tilde{\beta}) - \beta^2\right]^2}{\text{var}(\hat{\beta})\,\text{var}(\tilde{\beta})} = \frac{\left[E(\hat{\beta}^2) - \beta^2\right]^2}{\text{var}(\hat{\beta})\,\text{var}(\tilde{\beta})} = \frac{\left[\text{var}(\hat{\beta})\right]^2}{\text{var}(\hat{\beta})\,\text{var}(\tilde{\beta})} = \frac{\text{var}(\hat{\beta})}{\text{var}(\tilde{\beta})},$$

where the third equality uses the result that $E(\hat{\beta}^2) = E(\hat{\beta}\tilde{\beta})$. The final equality gives $\text{var}(\hat{\beta})/\text{var}(\tilde{\beta})$, which is the relative efficiency of $\hat{\beta}$ and $\tilde{\beta}$.
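A Monte Carlo sketch of this result. The design and the alternative estimator below (the slope through the first and last observations, which is linear and unbiased) are made up for illustration; the squared sample correlation between the two slope estimators should match their relative efficiency:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(0, 10, 21)
x = X - X.mean()

# 100,000 replications of the made-up regression Y = 1 + 2X + u
U = rng.normal(0, 1, size=(100000, X.size))
Y = 1.0 + 2.0 * X + U

b_ols = Y @ x / (x * x).sum()                 # BLU (OLS) slope per replication
b_alt = (Y[:, -1] - Y[:, 0]) / (X[-1] - X[0]) # another linear unbiased slope

rho2 = np.corrcoef(b_ols, b_alt)[0, 1] ** 2   # squared correlation
rel_eff = np.var(b_ols) / np.var(b_alt)       # relative efficiency
assert abs(rho2 - rel_eff) < 0.02             # Zheng (1994): they coincide
```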


Weighted Least Squares. This is based on Kmenta (1986).

a. From the first equation in (5.11), one could solve for $\hat{\alpha}$:

$$\hat{\alpha} \sum_{i=1}^{n} (1/\sigma_i^2) = \sum_{i=1}^{n} (Y_i/\sigma_i^2) - \hat{\beta} \sum_{i=1}^{n} (X_i/\sigma_i^2).$$

Dividing both sides by $\sum_{i=1}^{n} (1/\sigma_i^2)$ one gets

$$\hat{\alpha} = \frac{\sum_{i=1}^{n} (Y_i/\sigma_i^2)}{\sum_{i=1}^{n} (1/\sigma_i^2)} - \hat{\beta}\, \frac{\sum_{i=1}^{n} (X_i/\sigma_i^2)}{\sum_{i=1}^{n} (1/\sigma_i^2)} = \bar{Y}^* - \hat{\beta} \bar{X}^*.$$

Substituting $\hat{\alpha}$ in the second equation of (5.11) one gets

$$\sum_{i=1}^{n} (Y_i X_i/\sigma_i^2) = (\bar{Y}^* - \hat{\beta} \bar{X}^*) \sum_{i=1}^{n} (X_i/\sigma_i^2) + \hat{\beta} \sum_{i=1}^{n} (X_i^2/\sigma_i^2).$$

Solving for $\hat{\beta}$ one gets (5.12b)

$$\hat{\beta} = \frac{\sum_{i=1}^{n} (Y_i X_i/\sigma_i^2) - \bar{Y}^* \sum_{i=1}^{n} (X_i/\sigma_i^2)}{\sum_{i=1}^{n} (X_i^2/\sigma_i^2) - \bar{X}^* \sum_{i=1}^{n} (X_i/\sigma_i^2)} = \frac{\sum_{i=1}^{n} w_i^* (X_i - \bar{X}^*)(Y_i - \bar{Y}^*)}{\sum_{i=1}^{n} w_i^* (X_i - \bar{X}^*)^2},$$

where $w_i^* = 1/\sigma_i^2$. Averaging the original regression equation $Y_i = \alpha + \beta X_i + u_i$ with weights $w_i^*$ gives $\bar{Y}^* = \alpha + \beta \bar{X}^* + \bar{u}^*$. Subtract this equation from the original regression equation to get $Y_i - \bar{Y}^* = \beta(X_i - \bar{X}^*) + (u_i - \bar{u}^*)$. Substituting this in the expression for $\hat{\beta}$ in (5.12b), we get

$$\hat{\beta} = \beta + \frac{\sum_{i=1}^{n} w_i^* (X_i - \bar{X}^*)(u_i - \bar{u}^*)}{\sum_{i=1}^{n} w_i^* (X_i - \bar{X}^*)^2} = \beta + \frac{\sum_{i=1}^{n} w_i^* (X_i - \bar{X}^*) u_i}{\sum_{i=1}^{n} w_i^* (X_i - \bar{X}^*)^2},$$

where the second equality uses the fact that

$$\sum_{i=1}^{n} w_i^* (X_i - \bar{X}^*) = \sum_{i=1}^{n} w_i^* X_i - \left(\sum_{i=1}^{n} w_i^*\right) \left(\sum_{i=1}^{n} w_i^* X_i \Big/ \sum_{i=1}^{n} w_i^*\right) = 0.$$
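A short numerical sketch of (5.12b). The data and the variance pattern $\sigma_i = 0.2 X_i$ below are assumptions for illustration; the slope computed from the weighted-deviations form should agree with OLS applied to the model transformed by dividing each observation by $\sigma_i$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
X = rng.uniform(1, 10, size=n)
sig = 0.2 * X                          # made-up known heteroskedastic std devs
Y = 1.0 + 2.0 * X + rng.normal(0, sig)

w = 1.0 / sig**2                       # weights w_i* = 1 / sigma_i^2
Xbar_w = (w * X).sum() / w.sum()       # weighted mean Xbar*
Ybar_w = (w * Y).sum() / w.sum()       # weighted mean Ybar*

# (5.12b): WLS slope and intercept from the weighted normal equations
beta_wls = (w * (X - Xbar_w) * (Y - Ybar_w)).sum() / (w * (X - Xbar_w)**2).sum()
alpha_wls = Ybar_w - beta_wls * Xbar_w

# Cross-check: OLS on the transformed data (Y_i/sig_i on 1/sig_i and X_i/sig_i)
A = np.column_stack([1.0 / sig, X / sig])
coef, *_ = np.linalg.lstsq(A, Y / sig, rcond=None)
assert abs(alpha_wls - coef[0]) < 1e-8
assert abs(beta_wls - coef[1]) < 1e-8
```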



Poisson Distribution

a. Using the MGF for the Poisson derived in problem 2.14c one gets $M_X(t) = e^{\lambda(e^t - 1)}$.

Differentiating with respect to $t$ yields $M_X'(t) = e^{\lambda(e^t - 1)} \lambda e^t$.

Evaluating $M_X'(t)$ at $t = 0$, we get $M_X'(0) = E(X) = \lambda$.

Similarly, differentiating $M_X'(t)$ once more with respect to $t$, we get

$$M_X''(t) = e^{\lambda(e^t - 1)} (\lambda e^t)^2 + e^{\lambda(e^t - 1)} \lambda e^t;$$

evaluating it at $t = 0$ gives

$$M_X''(0) = \lambda^2 + \lambda = E(X^2),$$

so that $\text{var}(X) = E(X^2) - [E(X)]^2 = \lambda^2 + \lambda - \lambda^2 = \lambda$.

Hence, the mean and variance of the Poisson are both equal to $\lambda$.

b. The likelihood function is

$$L(\lambda) = \frac{e^{-n\lambda} \lambda^{\sum_{i=1}^{n} X_i}}{\prod_{i=1}^{n} X_i!},$$

so that

$$\log L(\lambda) = -n\lambda + \left(\sum_{i=1}^{n} X_i\right) \log \lambda - \sum_{i=1}^{n} \log X_i!$$

$$\frac{\partial \log L(\lambda)}{\partial \lambda} = -n + \frac{\sum_{i=1}^{n} X_i}{\lambda} = 0.$$

Solving for $\lambda$ yields $\hat{\lambda}_{mle} = \bar{X}$.

c. The method of moments equates $E(X)$ to $\bar{X}$, and since $E(X) = \lambda$ the solution is $\hat{\lambda} = \bar{X}$, the same as the ML method.

d. $E(\hat{\lambda}) = \sum_{i=1}^{n} E(X_i)/n = n\lambda/n = \lambda$, so $\hat{\lambda} = \bar{X}$ is unbiased for $\lambda$.
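These facts are easy to check numerically ($\lambda = 3$ and the sample size below are made up). The MGF derivatives at $t = 0$ are approximated by finite differences, and the MLE is compared with the true mean:

```python
import numpy as np

lam = 3.0                                   # made-up Poisson parameter
h = 1e-5
M = lambda t: np.exp(lam * (np.exp(t) - 1.0))  # Poisson MGF

EX = (M(h) - M(-h)) / (2 * h)               # central difference ~ M'(0)
EX2 = (M(h) - 2 * M(0) + M(-h)) / h**2      # central difference ~ M''(0)
assert abs(EX - lam) < 1e-4                 # E(X) = lambda
assert abs(EX2 - (lam**2 + lam)) < 1e-3     # E(X^2) = lambda^2 + lambda

rng = np.random.default_rng(6)
x = rng.poisson(lam, size=100000)
lam_mle = x.mean()                          # MLE = method of moments = Xbar
assert abs(lam_mle - lam) < 0.05            # Xbar is unbiased for lambda
```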
