# Confidence Intervals and Test of Hypotheses

We start by constructing a confidence interval for any linear combination of $\beta$, say $c'\beta$. We know that $c'\hat{\beta}_{OLS} \sim N(c'\beta,\ \sigma^2 c'(X'X)^{-1}c)$ and it is a scalar. Hence,

$$z_{obs} = \frac{c'\hat{\beta}_{OLS} - c'\beta}{\sigma\,(c'(X'X)^{-1}c)^{1/2}} \tag{7.26}$$

is a standardized $N(0,1)$ random variable. Replacing $\sigma$ by $s$ is equivalent to dividing $z_{obs}$ by the square root of a $\chi^2$ random variable divided by its degrees of freedom. The latter random variable is $(n-k)s^2/\sigma^2 = RSS/\sigma^2$, which was shown to be $\chi^2_{n-k}$. Problem 8 shows that $z_{obs}$ and $RSS/\sigma^2$ are independent. This means that
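The distributional fact used here, that $RSS/\sigma^2$ is $\chi^2_{n-k}$, can be illustrated with a small Monte Carlo sketch (simulated data; all names and the design matrix are illustrative assumptions, not from the text):

```python
import numpy as np

# Monte Carlo sketch: RSS/sigma^2 should behave like a chi-squared
# random variable with n - k degrees of freedom, whose mean is n - k.
rng = np.random.default_rng(42)
n, k, sigma = 20, 3, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T   # residual-maker matrix

reps = 5000
vals = np.empty(reps)
for r in range(reps):
    u = sigma * rng.normal(size=n)        # errors; beta drops out of the residuals
    vals[r] = (u @ M @ u) / sigma**2      # RSS / sigma^2

print(vals.mean())   # should be close to n - k = 17
```

The sample mean of the simulated draws should sit near $n-k$, consistent with the stated $\chi^2_{n-k}$ distribution.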

$$t_{obs} = \frac{c'\hat{\beta}_{OLS} - c'\beta}{s\,(c'(X'X)^{-1}c)^{1/2}} \tag{7.27}$$

is a $N(0,1)$ random variable divided by the square root of an independent $\chi^2_{n-k}/(n-k)$. This is a $t$-statistic with $(n-k)$ degrees of freedom. Hence, a $100(1-\alpha)\%$ confidence interval for $c'\beta$ is

$$c'\hat{\beta}_{OLS} \pm t_{\alpha/2}\, s\,(c'(X'X)^{-1}c)^{1/2} \tag{7.28}$$
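The interval in (7.28) can be computed directly from the OLS ingredients. Below is a minimal sketch using numpy and scipy on simulated data; the design matrix, the true $\beta$, and the choice of $c$ are all illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y                 # OLS estimate of beta
resid = y - X @ b_ols
s2 = resid @ resid / (n - k)              # s^2 = RSS/(n - k)

c = np.array([0.0, 1.0, 1.0])             # linear combination c'beta
se = np.sqrt(s2 * (c @ XtX_inv @ c))      # s (c'(X'X)^{-1} c)^{1/2}
t_crit = stats.t.ppf(0.975, df=n - k)     # t_{alpha/2} for alpha = 0.05

ci = (c @ b_ols - t_crit * se, c @ b_ols + t_crit * se)
print(ci)
```

Choosing $c$ with a 1 in the $j$-th position and zeros elsewhere reduces this to the familiar confidence interval for a single coefficient $\beta_j$.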

Example: Let us say we are predicting one year ahead, so that $T_0 = 1$ and $x'_0$ is a $(1 \times k)$ vector of next year's observations on the exogenous variables. The $100(1-\alpha)\%$ confidence interval for next year's forecast $\hat{y}_0$ will be $\hat{y}_0 \pm t_{\alpha/2}\, s(1 + x'_0(X'X)^{-1}x_0)^{1/2}$. Similarly, (7.28) allows us to construct a confidence interval for, or test any single hypothesis on, any single $\beta_j$ (again by picking $c$ to have 1 in its $j$-th position and zeros elsewhere). In this case we get the usual $t$-statistic reported by any regression package. More importantly, this allows us to test any hypothesis concerning any linear combination of the $\beta$'s, e.g., testing that the sum of the coefficients of the input variables in a Cobb-Douglas production function is equal to one. This is known as a test for constant returns to scale; see Chapter 4.
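The constant-returns-to-scale test is a direct application of (7.27) with $H_0\!: c'\beta = 1$. Here is a hedged sketch on simulated log-linear Cobb-Douglas data; the data-generating process, variable names, and sample size are all assumptions made for illustration:

```python
import numpy as np
from scipy import stats

# Simulated Cobb-Douglas data in logs; true beta_L + beta_K = 1, so H0 holds.
rng = np.random.default_rng(1)
n = 100
log_L = rng.normal(size=n)
log_K = rng.normal(size=n)
log_Q = 0.5 + 0.6 * log_L + 0.4 * log_K + 0.1 * rng.normal(size=n)

X = np.column_stack([np.ones(n), log_L, log_K])
n_obs, k = X.shape
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ log_Q
resid = log_Q - X @ b
s2 = resid @ resid / (n_obs - k)          # s^2 = RSS/(n - k)

c = np.array([0.0, 1.0, 1.0])             # picks out beta_L + beta_K
t_obs = (c @ b - 1.0) / np.sqrt(s2 * (c @ XtX_inv @ c))   # test c'beta = 1
p_value = 2 * stats.t.sf(abs(t_obs), df=n_obs - k)
print(t_obs, p_value)
```

Under the null of constant returns to scale, $t_{obs}$ follows a $t$-distribution with $n-k$ degrees of freedom, and a large $|t_{obs}|$ (small p-value) rejects the hypothesis.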