Springer Texts in Business and Economics

The Wald, LR, and LM Inequality. This is based on Baltagi (1994). The likelihood is given by Eq. (2.1) in the text
where $I_{11}$ denotes the (1,1) element of the information matrix evaluated at the unrestricted maximum likelihood estimates. It is easy to show from (1) that

[several displayed equations, including one involving $\sum_{i=1}^{n}(X_i - \mu_0)^2$, were not recovered from the source]

Hence, using (4) and (8), one gets (12) [equation not recovered].

Hence, using (3) and (11), one gets [equation not recovered], where the last equality follows from (10). $L(\mu_0, \tilde{\sigma}^2)$ is the restricted maximum; therefore, $\log L(\mu_0, \hat{\sigma}^2) \leq \log L(\mu_0, \tilde{\sigma}^2)$, from which we deduce that $W \geq LR$. Also, $L(\hat{\mu}, \hat{\sigma}^2)$ is the unrestricted maximum; therefore $\log L(\hat{\mu}, \hat{\sigma}^2) \geq \log L(\hat{\mu}, \tilde{\sigma}^2)$, from which we deduce that $LR \geq LM$.
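Since several of the displayed equations above were lost in extraction, the standard derivation can be sketched as follows for a random sample $x_1, \dots, x_n$ from $N(\mu, \sigma^2)$ testing $H_0: \mu = \mu_0$. The display below is mine, and its pieces need not correspond to the equation numbers (1)–(12) cited in the text.

```latex
% Unrestricted MLEs and the restricted MLE of the variance:
\hat{\mu} = \bar{x}, \qquad
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2, \qquad
\tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\mu_0)^2
                 = \hat{\sigma}^2 + (\bar{x}-\mu_0)^2 .
% The three test statistics:
W  = \frac{n(\bar{x}-\mu_0)^2}{\hat{\sigma}^2}, \qquad
LR = n\log\!\left(\frac{\tilde{\sigma}^2}{\hat{\sigma}^2}\right), \qquad
LM = \frac{n(\bar{x}-\mu_0)^2}{\tilde{\sigma}^2}.
% Each statistic is twice a difference of log-likelihoods:
W  = 2\big[\log L(\hat{\mu},\hat{\sigma}^2) - \log L(\mu_0,\hat{\sigma}^2)\big], \quad
LR = 2\big[\log L(\hat{\mu},\hat{\sigma}^2) - \log L(\mu_0,\tilde{\sigma}^2)\big], \quad
LM = 2\big[\log L(\hat{\mu},\tilde{\sigma}^2) - \log L(\mu_0,\tilde{\sigma}^2)\big].
```

Comparing the three displays, $W - LR = 2[\log L(\mu_0,\tilde{\sigma}^2) - \log L(\mu_0,\hat{\sigma}^2)] \geq 0$ because $(\mu_0, \tilde{\sigma}^2)$ is the restricted maximum, and $LR - LM = 2[\log L(\hat{\mu},\hat{\sigma}^2) - \log L(\hat{\mu},\tilde{\sigma}^2)] \geq 0$ because $(\hat{\mu}, \hat{\sigma}^2)$ is the unrestricted maximum.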

An alternative derivation of this inequality shows first that LM can be written in terms of W/n …
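As a numerical sanity check of $W \geq LR \geq LM$ (a sketch with simulated data; none of these numbers come from the text):

```python
import math
import random

random.seed(0)

# Hypothetical sample and null value mu0 (illustration only)
x = [random.gauss(2.0, 1.5) for _ in range(30)]
mu0 = 1.5
n = len(x)
xbar = sum(x) / n

sig2_hat = sum((xi - xbar) ** 2 for xi in x) / n   # unrestricted MLE of sigma^2
sig2_til = sum((xi - mu0) ** 2 for xi in x) / n    # restricted MLE of sigma^2

W = n * (xbar - mu0) ** 2 / sig2_hat    # Wald
LR = n * math.log(sig2_til / sig2_hat)  # likelihood ratio
LM = n * (xbar - mu0) ** 2 / sig2_til   # Lagrange multiplier (score)

print(W >= LR >= LM)  # the inequality W >= LR >= LM holds for any sample
```

The check works for any draw because $\tilde{\sigma}^2/\hat{\sigma}^2 = 1 + W/n$, together with $\log(1+a) \leq a$ and $\log(1+a) \geq a/(1+a)$.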


Efficiency as Correlation. This is based on Zheng (1994)

3.12 Since $\hat{\beta}$ and $\tilde{\beta}$ are linear unbiased estimators of $\beta$, it follows that $\hat{\beta} + \lambda(\tilde{\beta} - \hat{\beta})$ is, for any $\lambda$, a linear unbiased estimator of $\beta$. Since $\hat{\beta}$ is the BLU estimator of $\beta$,

$$\mathrm{var}\big[\hat{\beta} + \lambda(\tilde{\beta} - \hat{\beta})\big]$$

is minimized at $\lambda = 0$. Setting the derivative of $\mathrm{var}[\hat{\beta} + \lambda(\tilde{\beta} - \hat{\beta})]$ with respect to $\lambda$ equal to zero at $\lambda = 0$, we have $2E[\hat{\beta}(\tilde{\beta} - \hat{\beta})] = 0$, or $E(\hat{\beta}^2) = E(\hat{\beta}\tilde{\beta})$. Thus, the squared correlation between $\hat{\beta}$ and $\tilde{\beta}$ is

$$r^2(\hat{\beta}, \tilde{\beta}) = \frac{\big[\mathrm{cov}(\hat{\beta}, \tilde{\beta})\big]^2}{\mathrm{var}(\hat{\beta})\,\mathrm{var}(\tilde{\beta})} = \frac{\big[E(\hat{\beta}\tilde{\beta}) - \beta^2\big]^2}{\mathrm{var}(\hat{\beta})\,\mathrm{var}(\tilde{\beta})} = \frac{\big[\mathrm{var}(\hat{\beta})\big]^2}{\mathrm{var}(\hat{\beta})\,\mathrm{var}(\tilde{\beta})} = \frac{\mathrm{var}(\hat{\beta})}{\mathrm{var}(\tilde{\beta})},$$

where the third equality uses the result that $E(\hat{\beta}^2) = E(\hat{\beta}\tilde{\beta})$, so that $\mathrm{cov}(\hat{\beta}, \tilde{\beta}) = E(\hat{\beta}\tilde{\beta}) - \beta^2 = E(\hat{\beta}^2) - \beta^2 = \mathrm{var}(\hat{\beta})$. The final equality gives $\mathrm{var}(\hat{\beta})/\mathrm{var}(\tilde{\beta})$, which is the relative efficiency of $\hat{\beta}$ and $\tilde{\beta}$.
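The result can be made concrete in a simple special case (a sketch; the model and numbers are mine, not the text's). In $y_i = \beta x_i + u_i$ with fixed regressors, $\hat{\beta} = \sum x_i y_i / \sum x_i^2$ is BLU, while $\tilde{\beta} = \bar{y}/\bar{x}$ is linear unbiased, and the population squared correlation between them equals the variance ratio:

```python
import math

# Fixed regressors and error variance (made up for illustration)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
sigma2 = 2.0
n = len(x)
xbar = sum(x) / n
Sxx = sum(xi ** 2 for xi in x)

# In y_i = beta*x_i + u_i with u_i iid (0, sigma2):
# beta_hat = sum(x_i y_i)/sum(x_i^2) is BLU; beta_tilde = ybar/xbar is linear unbiased.
var_hat = sigma2 / Sxx              # var(beta_hat)
var_til = sigma2 / (n * xbar ** 2)  # var(beta_tilde)
cov = sigma2 / Sxx                  # cov(beta_hat, beta_tilde) = var(beta_hat)

r2 = cov ** 2 / (var_hat * var_til)  # population squared correlation
print(r2, var_hat / var_til)         # the two numbers coincide
```

Here $\mathrm{cov}(\hat{\beta}, \tilde{\beta}) = \sigma^2 \sum x_i / (\sum x_i^2 \cdot n\bar{x}) = \sigma^2/\sum x_i^2 = \mathrm{var}(\hat{\beta})$, which is exactly the $E(\hat{\beta}^2) = E(\hat{\beta}\tilde{\beta})$ result above.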


Weighted Least Squares. This is based on Kmenta (1986)

a. From the first equation in (5.11), one could solve for $\hat{\alpha}$:

$$\hat{\alpha}\sum_{i=1}^{n}(1/\sigma_i^2) = \sum_{i=1}^{n}(Y_i/\sigma_i^2) - \hat{\beta}\sum_{i=1}^{n}(X_i/\sigma_i^2).$$

Dividing both sides by $\sum_{i=1}^{n}(1/\sigma_i^2)$ one gets

$$\hat{\alpha} = \sum_{i=1}^{n}(Y_i/\sigma_i^2)\Big/\sum_{i=1}^{n}(1/\sigma_i^2) - \hat{\beta}\sum_{i=1}^{n}(X_i/\sigma_i^2)\Big/\sum_{i=1}^{n}(1/\sigma_i^2) = \bar{Y}^* - \hat{\beta}\bar{X}^*,$$

where $\bar{Y}^* = \sum_{i=1}^{n}(Y_i/\sigma_i^2)\big/\sum_{i=1}^{n}(1/\sigma_i^2)$ and $\bar{X}^* = \sum_{i=1}^{n}(X_i/\sigma_i^2)\big/\sum_{i=1}^{n}(1/\sigma_i^2)$.

Substituting $\hat{\alpha}$ in the second equation of (5.11) one gets

$$\sum_{i=1}^{n}(Y_iX_i/\sigma_i^2) = (\bar{Y}^* - \hat{\beta}\bar{X}^*)\sum_{i=1}^{n}(X_i/\sigma_i^2) + \hat{\beta}\sum_{i=1}^{n}(X_i^2/\sigma_i^2).$$

Multiplying both sides by $\sum_{i=1}^{n}(1/\sigma_i^2)$ and solving for $\hat{\beta}$ one gets (5.12b):

$$\hat{\beta} = \frac{\sum_{i=1}^{n}(1/\sigma_i^2)\sum_{i=1}^{n}(Y_iX_i/\sigma_i^2) - \sum_{i=1}^{n}(X_i/\sigma_i^2)\sum_{i=1}^{n}(Y_i/\sigma_i^2)}{\sum_{i=1}^{n}(1/\sigma_i^2)\sum_{i=1}^{n}(X_i^2/\sigma_i^2) - \big[\sum_{i=1}^{n}(X_i/\sigma_i^2)\big]^2} = \frac{\sum_{i=1}^{n} w_i^*(X_i - \bar{X}^*)(Y_i - \bar{Y}^*)}{\sum_{i=1}^{n} w_i^*(X_i - \bar{X}^*)^2},$$

where $w_i^* = (1/\sigma_i^2)\big/\sum_{i=1}^{n}(1/\sigma_i^2)$.

Averaging the regression equation with the weights $w_i^*$ gives $\bar{Y}^* = \alpha + \beta\bar{X}^* + \bar{u}^*$. Subtract this equation from the original regression equation to get $Y_i - \bar{Y}^* = \beta(X_i - \bar{X}^*) + (u_i - \bar{u}^*)$. Substituting this in the expression for $\hat{\beta}$ in (5.12b), we get

$$\hat{\beta} = \beta + \frac{\sum_{i=1}^{n} w_i^*(X_i - \bar{X}^*)(u_i - \bar{u}^*)}{\sum_{i=1}^{n} w_i^*(X_i - \bar{X}^*)^2} = \beta + \frac{\sum_{i=1}^{n} w_i^*(X_i - \bar{X}^*)u_i}{\sum_{i=1}^{n} w_i^*(X_i - \bar{X}^*)^2},$$

where the second equality uses the fact that

$$\sum_{i=1}^{n} w_i^*(X_i - \bar{X}^*) = \sum_{i=1}^{n} w_i^*X_i - \Big(\sum_{i=1}^{n} w_i^*\Big)\Big(\sum_{i=1}^{n} w_i^*X_i\Big)\Big/\sum_{i=1}^{n} w_i^* = 0.$$
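As a quick numerical sketch (the data and the $\sigma_i^2$ are made up), the ratio form of (5.12b) can be checked against its weighted-deviations form:

```python
import math

# Hypothetical data with made-up heteroskedastic variances sigma_i^2
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]
sig2 = [0.5, 1.0, 1.5, 2.0, 2.5]

S1 = sum(1 / s for s in sig2)                        # sum 1/sigma_i^2
SX = sum(x / s for x, s in zip(X, sig2))             # sum X_i/sigma_i^2
SY = sum(y / s for y, s in zip(Y, sig2))             # sum Y_i/sigma_i^2
SXY = sum(x * y / s for x, y, s in zip(X, Y, sig2))  # sum X_i Y_i/sigma_i^2
SXX = sum(x * x / s for x, s in zip(X, sig2))        # sum X_i^2/sigma_i^2

# Ratio form of (5.12b)
beta1 = (S1 * SXY - SX * SY) / (S1 * SXX - SX ** 2)

# Weighted-deviations form with normalized weights w_i*
w = [(1 / s) / S1 for s in sig2]
Xbar = sum(wi * x for wi, x in zip(w, X))  # X-bar-star
Ybar = sum(wi * y for wi, y in zip(w, Y))  # Y-bar-star
beta2 = (sum(wi * (x - Xbar) * (y - Ybar) for wi, x, y in zip(w, X, Y))
         / sum(wi * (x - Xbar) ** 2 for wi, x in zip(w, X)))

alpha = Ybar - beta1 * Xbar  # intercept from the first normal equation
print(beta1, beta2)          # the two forms agree
```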



Poisson Distribution

a. Using the MGF for the Poisson derived in problem 2.14c one gets $M_X(t) = e^{\lambda(e^t - 1)}$.

Differentiating with respect to $t$ yields $M'_X(t) = e^{\lambda(e^t - 1)}\lambda e^t$.

Evaluating $M'_X(t)$ at $t = 0$, we get $M'_X(0) = E(X) = \lambda$.

Similarly, differentiating $M'_X(t)$ once more with respect to $t$, we get

$$M''_X(t) = e^{\lambda(e^t - 1)}(\lambda e^t)^2 + e^{\lambda(e^t - 1)}\lambda e^t.$$

Evaluating it at $t = 0$ gives

$$M''_X(0) = \lambda^2 + \lambda = E(X^2),$$

so that $\mathrm{var}(X) = E(X^2) - [E(X)]^2 = \lambda^2 + \lambda - \lambda^2 = \lambda$.

Hence, the mean and variance of the Poisson are both equal to $\lambda$.

b. The likelihood function is

$$L(\lambda) = \frac{e^{-n\lambda}\lambda^{\sum_{i=1}^{n} X_i}}{X_1!X_2!\cdots X_n!},$$

so that

$$\log L(\lambda) = -n\lambda + \Big(\sum_{i=1}^{n} X_i\Big)\log\lambda - \sum_{i=1}^{n}\log X_i!$$

and

$$\frac{\partial\log L(\lambda)}{\partial\lambda} = -n + \frac{\sum_{i=1}^{n} X_i}{\lambda} = 0.$$

Solving for $\lambda$ yields $\hat{\lambda}_{mle} = \bar{X}$.

c. The method of moments equates $E(X)$ to $\bar{X}$, and since $E(X) = \lambda$ the solution is $\hat{\lambda} = \bar{X}$, the same as the ML method.

d. $E(\hat{\lambda}) = E(\bar{X}) = \sum_{i=1}^{n} E(X_i)/n = n\lambda/n = \lambda$ …
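The first-order condition $-n + \sum X_i/\lambda = 0$ and its solution $\hat{\lambda}_{mle} = \bar{X}$ can be checked numerically (a sketch with made-up counts, not data from the text):

```python
import math

# Hypothetical Poisson counts (illustration only)
x = [2, 0, 3, 1, 4, 2, 1, 3]
n = len(x)
xbar = sum(x) / n  # candidate MLE: the sample mean

def loglik(lam):
    # log L(lambda) = -n*lambda + (sum x_i)*log(lambda) - sum log(x_i!)
    return (-n * lam + sum(x) * math.log(lam)
            - sum(math.lgamma(xi + 1) for xi in x))

score_at_mle = -n + sum(x) / xbar  # first-order condition, should be 0
# log L is strictly concave in lambda, so xbar beats nearby values
is_max = all(loglik(xbar) > loglik(xbar + d) for d in (-0.5, -0.1, 0.1, 0.5))
print(xbar, score_at_mle, is_max)
```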


Adding 5 to each observation of $X_i$ adds 5 to the sample average $\bar{X}$, which is now 12.5. This means that $x_i = X_i - \bar{X}$ is unaffected. Hence $\sum_{i=1}^{n} x_i^2$ is the same, and since $Y_i$ is unchanged, we conclude that $\hat{\beta}_{ols}$ is still the same at 0.8095. However, $\hat{\alpha}_{ols} = \bar{Y} - \hat{\beta}_{ols}\bar{X}$ changes because $\bar{X}$ has changed. This is now $\hat{\alpha}_{ols} = 6.5 - (0.8095)(12.5) = -3.6188$. It has decreased by $5\hat{\beta}_{ols}$ since $\bar{X}$ increased by 5 while $\hat{\beta}_{ols}$ and $\bar{Y}$ remained unchanged. It is easy to see that $\hat{Y}_i = \hat{\alpha}_{ols} + \hat{\beta}_{ols}X_i$ remains the same: when $X_i$ increases by 5, with $\hat{\beta}_{ols}$ the same, this increases $\hat{Y}_i$ by $5\hat{\beta}_{ols}$; but the fall in $\hat{\alpha}_{ols}$ decreases $\hat{Y}_i$ by $5\hat{\beta}_{ols}$. The net effect on $\hat{Y}_i$ is zero. Since $\hat{Y}_i$ is unchanged, the residual $e_i = Y_i - \hat{Y}_i$ is unchanged. Hence $s^2 = \sum_{i=1}^{n} e_i^2/(n-2)$ is unchanged. Since $\sum_{i=1}^{n} x_i^2$ is unchanged, $s^2/\sum_{i=1}^{n} x_i^2$ is unchanged, and $se(\hat{\beta}_{ols})$ and the t-statistic for $H_0: \beta = 0$ are unchanged. The …
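The invariance argument can be verified numerically (a sketch with made-up data, not the text's original sample, so the slope will not be 0.8095):

```python
import math

# Made-up sample (illustration only)
X = [1.0, 3.0, 4.0, 6.0, 8.0]
Y = [2.0, 4.5, 5.0, 7.5, 9.0]

def ols(X, Y):
    n = len(X)
    xbar, ybar = sum(X) / n, sum(Y) / n
    beta = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(X, Y))
            / sum((xi - xbar) ** 2 for xi in X))
    return ybar - beta * xbar, beta  # (alpha_hat, beta_hat)

a0, b0 = ols(X, Y)
a1, b1 = ols([xi + 5 for xi in X], Y)  # add 5 to every X_i

slope_unchanged = math.isclose(b0, b1)
intercept_drop = math.isclose(a0 - a1, 5 * b0)  # alpha falls by 5*beta_hat
fitted_same = all(math.isclose(a0 + b0 * xi, a1 + b1 * (xi + 5)) for xi in X)
print(slope_unchanged, intercept_drop, fitted_same)
```

Since the fitted values are identical, the residuals, $s^2$, $se(\hat{\beta}_{ols})$, and the t-statistic are identical as well.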


The AR(1) Model. From (5.26), by continuous substitution just like (5.29), one could stop at $u_{t-s}$ to get

$$u_t = \rho^s u_{t-s} + \rho^{s-1}\varepsilon_{t-s+1} + \rho^{s-2}\varepsilon_{t-s+2} + \dots + \rho\varepsilon_{t-1} + \varepsilon_t \quad \text{for } t > s.$$

Note that the power of $\rho$ and the subscript of $\varepsilon$ always sum to $t$. Multiplying both sides by $u_{t-s}$ and taking expected values, one gets

$$E(u_tu_{t-s}) = \rho^s E(u_{t-s}^2) + \rho^{s-1}E(\varepsilon_{t-s+1}u_{t-s}) + \dots + \rho E(\varepsilon_{t-1}u_{t-s}) + E(\varepsilon_tu_{t-s}).$$

Using (5.29), $u_{t-s}$ is a function of $\varepsilon_{t-s}$, earlier $\varepsilon$'s, and $u_0$. Since $u_0$ is independent of the $\varepsilon$'s, and the $\varepsilon$'s themselves are not serially correlated, $u_{t-s}$ is independent of $\varepsilon_t, \varepsilon_{t-1}, \dots, \varepsilon_{t-s+1}$. Hence, all the terms on the right-hand side of $E(u_tu_{t-s})$ except the first are zero. Therefore, $\mathrm{cov}(u_t, u_{t-s}) = E(u_tu_{t-s}) = \rho^s\sigma_u^2$ for $t > s$.
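The covariance result can be verified exactly by writing each $u_t$ as a linear combination of $u_0$ and $\varepsilon_1, \dots, \varepsilon_t$ (a sketch; $\rho$ and the horizon are made up, with $\sigma_\varepsilon^2 = 1$ so that $\sigma_u^2 = 1/(1-\rho^2)$):

```python
import math

rho = 0.6
T = 12
var_u = 1.0 / (1.0 - rho ** 2)  # stationary variance sigma_u^2 (sigma_eps^2 = 1)

# coef[t] holds the coefficients of u_t on (u_0, eps_1, ..., eps_T)
coef = [[0.0] * (T + 1) for _ in range(T + 1)]
coef[0][0] = 1.0
for t in range(1, T + 1):
    coef[t] = [rho * c for c in coef[t - 1]]  # rho * u_{t-1} part
    coef[t][t] += 1.0                         # plus eps_t

def cov(t, s):
    # var(u_0) = var_u, var(eps_j) = 1, all mutually independent
    return (coef[t][0] * coef[s][0] * var_u
            + sum(coef[t][j] * coef[s][j] for j in range(1, T + 1)))

ok = all(math.isclose(cov(t, t - s), rho ** s * var_u)
         for t in range(T + 1) for s in range(t + 1))
print(ok)  # cov(u_t, u_{t-s}) = rho^s * sigma_u^2 at every lag
```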

5.7 Relative Efficiency of OLS Under the AR(1) Model.


a. $\hat{\beta}_{ols} = \sum_{t=1}^{n} x_ty_t\big/\sum_{t=1}^{n} x_t^2 = \beta + \sum_{t=1}^{n} x_tu_t\big/\sum_{t=1}^{n} x_t^2$, with $E(\hat{\beta}_{ols}) = \beta$ since $x_t$ and $u_t$ are independent…
