The need to predict a value of the dependent variable outside the sample (a future value if we are dealing with time series) when the corresponding value of the independent variable is known arises frequently in practice. We add the following “prediction period” equation to the model (10.1.1):

(10.2.82)  y_p = α + βx_p + u_p,

where y_p and u_p are both unobservable, x_p is a known constant, and u_p is independent of {u_t}, t = 1, 2, . . . , T, with Eu_p = 0 and Vu_p = σ². Note that the parameters α, β, and σ² are the same as in the model (10.1.1). Consider the class of predictors of y_p which can be written in the form

(10.2.83)  ỹ_p = α̃ + β̃x_p,

where α̃ and β̃ are arbitrary unbiased estimators of α and β which are linear in {y_t}, t = 1, 2, . . . , T. We call this the class of linear unbiased predictors of y_p. The mean squared prediction error of ỹ_p is given by

(10.2.84)  E(y_p − ỹ_p)² = E{u_p − [(α̃ + β̃x_p) − (α + βx_p)]}²

           = σ² + V(α̃ + β̃x_p),

where the second equality follows from the independence of u_p and {y_t}, t = 1, 2, . . . , T.
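The decomposition in (10.2.84) can be checked by a small Monte Carlo sketch, here applied to the least squares predictor (itself a member of the class). All numerical values below (α, β, σ, the sample {x_t}, and x_p) are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values; the text leaves alpha, beta,
# sigma^2, the regressors {x_t}, and x_p unspecified.
alpha, beta, sigma = 2.0, 0.5, 1.0
T = 20
x = np.linspace(0.0, 10.0, T)   # fixed regressors x_1, ..., x_T
xp = 12.0                       # known prediction-period regressor

xbar = x.mean()
sxx = ((x - xbar) ** 2).sum()

# Simulate many samples, fit by least squares, and predict y_p.
n_rep = 100_000
Y = alpha + beta * x + rng.normal(0.0, sigma, (n_rep, T))
b = ((x - xbar) * (Y - Y.mean(axis=1, keepdims=True))).sum(axis=1) / sxx
a = Y.mean(axis=1) - b * xbar
yp = alpha + beta * xp + rng.normal(0.0, sigma, n_rep)   # realized y_p
sq_err = (yp - (a + b * xp)) ** 2

# (10.2.84): MSE = sigma^2 + V(predictor); for least squares,
# V(a + b*xp) = sigma^2 * (1/T + (xp - xbar)^2 / sxx).
mse_theory = sigma**2 * (1.0 + 1.0 / T + (xp - xbar) ** 2 / sxx)
print(sq_err.mean(), mse_theory)   # the two agree closely
```

The simulated mean squared prediction error matches σ² plus the variance of the predictor, as the independence argument requires.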

The least squares predictor of yp is given by

(10.2.85)  ŷ_p = α̂ + β̂x_p,

where α̂ and β̂ are the least squares estimators of α and β.

It is clearly a member of the class defined in (10.2.83). Since V(α̂ + β̂x_p) ≤ V(α̃ + β̃x_p) by the result of Section 10.2.2, we conclude that the least squares predictor is the best linear unbiased predictor. We have now reduced the problem of prediction to the problem of estimating a linear combination of α and β.
