Statistical Properties of Least Squares
Given assumptions 1-4, it is easy to show that $\hat{\beta}_{OLS}$ is unbiased for $\beta$. In fact, using equation (3.4) one can write
$$\hat{\beta}_{OLS} = \sum_{i=1}^{n} x_i y_i / \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i Y_i / \sum_{i=1}^{n} x_i^2 = \beta + \sum_{i=1}^{n} x_i u_i / \sum_{i=1}^{n} x_i^2 \qquad (3.5)$$
where the second equality follows from the fact that $y_i = Y_i - \bar{Y}$ and $\sum_{i=1}^{n} x_i \bar{Y} = \bar{Y} \sum_{i=1}^{n} x_i = 0$. The third equality follows from substituting $Y_i$ from (3.1) and using the fact that $\sum_{i=1}^{n} x_i = 0$. Taking expectations of both sides of (3.5) and using assumptions 1 and 4, one can show that $E(\hat{\beta}_{OLS}) = \beta$. Furthermore, one can derive the variance of $\hat{\beta}_{OLS}$ from (3.5) since
$$\mathrm{var}(\hat{\beta}_{OLS}) = E(\hat{\beta}_{OLS} - \beta)^2 = E\left(\sum_{i=1}^{n} x_i u_i / \sum_{i=1}^{n} x_i^2\right)^2 \qquad (3.6)$$
$$= \mathrm{var}\left(\sum_{i=1}^{n} x_i u_i\right) / \left(\sum_{i=1}^{n} x_i^2\right)^2 = \sigma^2 / \sum_{i=1}^{n} x_i^2$$
where the last equality uses assumptions 2 and 3, i.e., that the $u_i$'s are not correlated with each other and that their variance is constant, see problem 4. Note that the variance of the OLS estimator of $\beta$ depends upon $\sigma^2$, the variance of the disturbances in the true model, and on the variation in $X$. The larger the variation in $X$, the larger is $\sum_{i=1}^{n} x_i^2$ and the smaller is the variance of $\hat{\beta}_{OLS}$.
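The unbiasedness and variance results above can be checked numerically. The following is a minimal Monte Carlo sketch (not from the text; the parameter values and seed are illustrative assumptions): it simulates the model $Y_i = \alpha + \beta X_i + u_i$ many times with the regressors held fixed, and compares the empirical mean and variance of $\hat{\beta}_{OLS}$ with $\beta$ and $\sigma^2 / \sum_{i=1}^{n} x_i^2$.

```python
# Hypothetical simulation check of E(beta_hat) = beta and
# var(beta_hat) = sigma^2 / sum(x_i^2); all values below are assumed.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, sigma, n, reps = 2.0, 0.5, 1.0, 50, 20000

X = rng.uniform(0, 10, size=n)        # regressors, fixed across replications
x = X - X.mean()                      # deviations from the sample mean
theoretical_var = sigma**2 / np.sum(x**2)

betas = np.empty(reps)
for r in range(reps):
    u = rng.normal(0, sigma, size=n)  # assumptions 1-3: zero mean, constant
    Y = alpha + beta * X + u          # variance, no serial correlation
    betas[r] = np.sum(x * (Y - Y.mean())) / np.sum(x**2)

print(betas.mean())                   # close to beta = 0.5
print(betas.var())                    # close to theoretical_var
print(theoretical_var)
```

Only the disturbances are redrawn each replication, matching assumption 4 that the $X_i$'s are fixed in repeated samples.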
Next, we show that $\hat{\beta}_{OLS}$ is consistent for $\beta$. A sufficient condition for consistency is that $\hat{\beta}_{OLS}$ is unbiased and that its variance tends to zero as $n$ tends to infinity. We have already shown that $\hat{\beta}_{OLS}$ is unbiased; it remains to show that its variance tends to zero as $n$ tends to infinity.
$$\lim_{n \to \infty} \mathrm{var}(\hat{\beta}_{OLS}) = \lim_{n \to \infty} \left[ (\sigma^2/n) \big/ \left( \sum_{i=1}^{n} x_i^2 / n \right) \right] = 0$$
where the second equality follows from the fact that $(\sigma^2/n) \to 0$ while $(\sum_{i=1}^{n} x_i^2/n) \neq 0$ and has a finite limit, see assumption 4. Hence, $\mathrm{plim}\ \hat{\beta}_{OLS} = \beta$ and $\hat{\beta}_{OLS}$ is consistent for $\beta$. Similarly, one can show that $\hat{\alpha}_{OLS}$ is unbiased and consistent for $\alpha$ with variance $\sigma^2 \sum_{i=1}^{n} X_i^2 / n \sum_{i=1}^{n} x_i^2$, and $\mathrm{cov}(\hat{\alpha}_{OLS}, \hat{\beta}_{OLS}) = -\bar{X} \sigma^2 / \sum_{i=1}^{n} x_i^2$, see problem 5.
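The consistency argument can be illustrated directly. The short sketch below (an assumed setup, not from the text) evaluates $\mathrm{var}(\hat{\beta}_{OLS}) = \sigma^2 / \sum_{i=1}^{n} x_i^2$ for growing $n$: since $\sum_{i=1}^{n} x_i^2 / n$ settles near a finite nonzero limit, the variance falls roughly like $1/n$.

```python
# Hypothetical illustration of consistency: var(beta_hat) shrinks as n grows.
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0

variances = []
for n in (10, 100, 1000, 10000):
    X = rng.uniform(0, 10, size=n)        # assumed regressor design
    x = X - X.mean()
    variances.append(sigma**2 / np.sum(x**2))
    print(n, variances[-1])               # variance falls roughly like 1/n
```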
(iii) Best Linear Unbiased
Using (3.5) one can write $\hat{\beta}_{OLS} = \sum_{i=1}^{n} w_i Y_i$, where $w_i = x_i / \sum_{i=1}^{n} x_i^2$. This proves that $\hat{\beta}_{OLS}$ is a linear combination of the $Y_i$'s, with weights $w_i$ satisfying the following properties:
$$\sum_{i=1}^{n} w_i = 0; \qquad \sum_{i=1}^{n} w_i X_i = 1; \qquad \sum_{i=1}^{n} w_i^2 = 1 \big/ \sum_{i=1}^{n} x_i^2$$
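These three properties of the OLS weights are easy to verify numerically. The snippet below is a sketch with arbitrary illustrative regressor values (not data from the text):

```python
# Check the three properties of the OLS weights w_i = x_i / sum(x_i^2).
import numpy as np

X = np.array([1.0, 2.0, 4.0, 5.0, 8.0])   # arbitrary illustrative regressors
x = X - X.mean()                          # deviations from the mean
w = x / np.sum(x**2)                      # OLS weights

print(np.sum(w))                          # 0: weights sum to zero
print(np.sum(w * X))                      # 1: weights times X sum to one
print(np.sum(w**2))                       # equals 1 / sum(x_i^2)
print(1.0 / np.sum(x**2))
```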
The next theorem shows that among all linear unbiased estimators of $\beta$, it is $\hat{\beta}_{OLS}$ which has the smallest variance. This is known as the Gauss-Markov Theorem.
Theorem 1: Consider any arbitrary linear estimator $\tilde{\beta} = \sum_{i=1}^{n} a_i Y_i$ for $\beta$, where the $a_i$'s denote arbitrary constants. If $\tilde{\beta}$ is unbiased for $\beta$, and assumptions 1 to 4 are satisfied, then $\mathrm{var}(\tilde{\beta}) \geq \mathrm{var}(\hat{\beta}_{OLS})$.
Proof: Substituting $Y_i$ from (3.1) into $\tilde{\beta}$, one gets $\tilde{\beta} = \alpha \sum_{i=1}^{n} a_i + \beta \sum_{i=1}^{n} a_i X_i + \sum_{i=1}^{n} a_i u_i$. For $\tilde{\beta}$ to be unbiased for $\beta$ it must follow that $E(\tilde{\beta}) = \alpha \sum_{i=1}^{n} a_i + \beta \sum_{i=1}^{n} a_i X_i = \beta$ for all values of $\alpha$ and $\beta$. This means that $\sum_{i=1}^{n} a_i = 0$ and $\sum_{i=1}^{n} a_i X_i = 1$. Hence, $\tilde{\beta} = \beta + \sum_{i=1}^{n} a_i u_i$ with $\mathrm{var}(\tilde{\beta}) = \mathrm{var}(\sum_{i=1}^{n} a_i u_i) = \sigma^2 \sum_{i=1}^{n} a_i^2$, where the last equality follows from assumptions 2 and 3. But the $a_i$'s are constants which differ from the $w_i$'s, the weights of the OLS estimator, by some other constants, say $d_i$'s, i.e., $a_i = w_i + d_i$ for $i = 1, 2, \ldots, n$. Using the properties of the $a_i$'s and $w_i$'s one can deduce similar properties on the $d_i$'s, i.e., $\sum_{i=1}^{n} d_i = 0$ and $\sum_{i=1}^{n} d_i X_i = 0$. In fact,
$$\sum_{i=1}^{n} a_i^2 = \sum_{i=1}^{n} (w_i + d_i)^2 = \sum_{i=1}^{n} w_i^2 + \sum_{i=1}^{n} d_i^2 + 2\sum_{i=1}^{n} w_i d_i$$
where $\sum_{i=1}^{n} w_i d_i = \sum_{i=1}^{n} x_i d_i / \sum_{i=1}^{n} x_i^2 = 0$. This follows from the definition of $w_i$ and the fact that $\sum_{i=1}^{n} d_i = \sum_{i=1}^{n} d_i X_i = 0$. Hence,
$$\mathrm{var}(\tilde{\beta}) = \sigma^2 \sum_{i=1}^{n} a_i^2 = \sigma^2 \sum_{i=1}^{n} w_i^2 + \sigma^2 \sum_{i=1}^{n} d_i^2 = \mathrm{var}(\hat{\beta}_{OLS}) + \sigma^2 \sum_{i=1}^{n} d_i^2$$
Since $\sigma^2 \sum_{i=1}^{n} d_i^2$ is non-negative, this proves that $\mathrm{var}(\tilde{\beta}) \geq \mathrm{var}(\hat{\beta}_{OLS})$, with equality holding only if $d_i = 0$ for all $i = 1, 2, \ldots, n$, i.e., only if $a_i = w_i$, in which case $\tilde{\beta}$ reduces to $\hat{\beta}_{OLS}$. Therefore, any linear estimator of $\beta$, like $\tilde{\beta}$, that is unbiased for $\beta$ has variance at least as large as $\mathrm{var}(\hat{\beta}_{OLS})$. This proves that $\hat{\beta}_{OLS}$ is BLUE, Best among all Linear Unbiased Estimators of $\beta$.
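The proof's construction can be mirrored numerically. The sketch below (an assumed example, not from the text) builds an arbitrary linear unbiased estimator with weights $a_i = w_i + d_i$, where $d_i$ is forced to satisfy $\sum d_i = 0$ and $\sum d_i X_i = 0$, and confirms that its variance $\sigma^2 \sum a_i^2$ exceeds $\mathrm{var}(\hat{\beta}_{OLS}) = \sigma^2 \sum w_i^2$:

```python
# Hypothetical Gauss-Markov illustration: any unbiased competitor a = w + d
# with sum(d) = 0 and sum(d*X) = 0 has variance at least that of OLS.
import numpy as np

sigma = 1.0
X = np.array([1.0, 2.0, 4.0, 5.0, 8.0])   # arbitrary illustrative regressors
x = X - X.mean()
w = x / np.sum(x**2)                      # OLS weights

d = np.array([1.0, -1.0, 0.5, 0.0, -0.5]) # arbitrary perturbation, then
d -= d.mean()                             # enforce sum(d_i) = 0
d -= x * np.sum(d * x) / np.sum(x**2)     # enforce sum(d_i * X_i) = 0

a = w + d
print(np.sum(a), np.sum(a * X))           # ~0 and ~1: still unbiased
print(sigma**2 * np.sum(a**2))            # variance of the competitor
print(sigma**2 * np.sum(w**2))            # var(beta_hat_OLS), the smaller one
```

The second projection step subtracts the component of $d$ along $x$; since $\sum x_i = 0$, it leaves $\sum d_i = 0$ intact while making $\sum d_i X_i = 0$, exactly the two conditions derived in the proof.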
Similarly, one can show that $\hat{\alpha}_{OLS}$ is linear in $Y_i$ and has the smallest variance among all linear unbiased estimators of $\alpha$, if assumptions 1 to 4 are satisfied, see problem 6. This result implies that the OLS estimator of $\alpha$ is also BLUE.