# The F Test

In this section we consider the test of the null hypothesis $Q'\beta = c$ against the alternative hypothesis $Q'\beta \neq c$ when it involves more than one constraint (that is, $q > 1$). In this case the t test cannot be used.

Again $Q'\hat\beta - c$ will play a central role in the test statistic. The distribution of $Q'\hat\beta$ given in (12.4.7) is valid even if $q > 1$ because of Theorem 5.4.2. Therefore, by Theorem 9.7.1,

$$(12.4.10)\quad \frac{(Q'\hat\beta - c)'\,[Q'(X'X)^{-1}Q]^{-1}(Q'\hat\beta - c)}{\sigma^2} \sim \chi^2_q.$$

If $\sigma^2$ were known, we could use the test statistic (12.4.10) right away and reject the null hypothesis if the left-hand side were greater than a certain value. The reader will recall from Section 9.7 that this would be the likelihood ratio test if $\hat\beta$ were normal and the generalized Wald test if $\hat\beta$ were only asymptotically normal.
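The claim that the statistic (12.4.10) is chi-square with $q$ degrees of freedom when $\sigma^2$ is known can be checked by simulation. The sketch below is not from the text; the design matrix, parameter values, and constraint matrix are arbitrary illustrative choices. It draws repeated samples under the null and compares the average of the statistic with $q$, the mean of a $\chi^2_q$ variable:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, q = 20, 3, 2
sigma = 1.5                       # sigma is treated as known here
X = rng.normal(size=(T, K))       # fixed design matrix
beta = np.array([1.0, -2.0, 0.5])
Qt = np.eye(K)[:q]                # Q' picks out the first two coefficients
c = Qt @ beta                     # so the null hypothesis Q'beta = c is true

A = np.linalg.inv(X.T @ X)
V = Qt @ A @ Qt.T                 # Q'(X'X)^{-1}Q

stats = []
for _ in range(20000):
    y = X @ beta + sigma * rng.normal(size=T)
    b = A @ X.T @ y               # least squares estimate
    d = Qt @ b - c
    # the statistic (12.4.10): quadratic form divided by sigma^2
    stats.append(d @ np.linalg.solve(V, d) / sigma**2)

print(np.mean(stats))             # should be close to q = 2
```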

Since $\hat\beta$ and $\hat u$ are independent, as shown in the argument leading to Theorem 12.4.2, the chi-square variables (12.4.2) and (12.4.10) are independent. Therefore, by Definition 3 of the Appendix, we have

$$(12.4.11)\quad \eta = \frac{T - K}{q}\cdot\frac{(Q'\hat\beta - c)'\,[Q'(X'X)^{-1}Q]^{-1}(Q'\hat\beta - c)}{\hat u'\hat u} \sim F(q,\, T - K).$$

The null hypothesis $Q'\beta = c$ is rejected if $\eta > d$, where $d$ is determined so that $P(\eta > d)$ is equal to a certain prescribed significance level under the null hypothesis.

Comparing (12.4.9) and (12.4.11), we see that if $q = 1$ (and therefore $Q'$ is a row vector), the F statistic (12.4.11) is the square of the t statistic (12.4.9). This fact indicates that if $q = 1$ we should use the t test rather than the F test, since a one-tail test is possible only with the t test.
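This relationship is easy to confirm numerically. The sketch below is illustrative only: the data, the single constraint $Q' = (0, 1, 0)$, and $c = 2$ are made-up choices, and the t statistic is formed in the usual way with $\hat\sigma^2 = \hat u'\hat u/(T - K)$. It checks that the F statistic of (12.4.11) with $q = 1$ equals the square of the t statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 50, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=T)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
u_hat = y - X @ beta_hat
s2 = (u_hat @ u_hat) / (T - K)    # unbiased estimate of sigma^2

# single constraint Q'beta = c with Q' = (0, 1, 0), c = 2
Q = np.array([0.0, 1.0, 0.0])
c = 2.0

# t statistic: standardized distance of Q'beta_hat from c
t_stat = (Q @ beta_hat - c) / np.sqrt(s2 * (Q @ XtX_inv @ Q))

# F statistic (12.4.11) with q = 1
q = 1
F_stat = ((T - K) / q) * (Q @ beta_hat - c) ** 2 / (Q @ XtX_inv @ Q) / (u_hat @ u_hat)

assert np.isclose(F_stat, t_stat ** 2)
```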

The F statistic can alternatively be written as follows. From equation (12.3.3) we have

$$(12.4.12)\quad S(\hat\beta^+) - S(\hat\beta) = (\hat\beta - \hat\beta^+)'X'X(\hat\beta - \hat\beta^+).$$

From equations (12.3.10) and (12.4.12) we have

$$(12.4.13)\quad S(\hat\beta^+) - S(\hat\beta) = (Q'\hat\beta - c)'\,[Q'(X'X)^{-1}Q]^{-1}(Q'\hat\beta - c).$$

Therefore we can write (12.4.11) alternatively as

$$(12.4.14)\quad \eta = \frac{T - K}{q}\cdot\frac{S(\hat\beta^+) - S(\hat\beta)}{S(\hat\beta)} \sim F(q,\, T - K).$$

Note that $S(\hat\beta^+) - S(\hat\beta)$ is always nonnegative by the definition of $\hat\beta^+$ and $\hat\beta$, and the closer $Q'\hat\beta$ is to $c$, the smaller $S(\hat\beta^+) - S(\hat\beta)$ becomes. Also note that (12.4.14) provides a more convenient form for computation than (12.4.11) if the constrained least squares residuals can be easily computed.
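The equivalence of the two forms (12.4.11) and (12.4.14) can be verified numerically. In the sketch below the data and the two constraints are arbitrary illustrative choices, and $\hat\beta^+$ is computed from the standard closed form for the constrained least squares estimator, $\hat\beta^+ = \hat\beta - (X'X)^{-1}Q[Q'(X'X)^{-1}Q]^{-1}(Q'\hat\beta - c)$, which is assumed here rather than taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, q = 40, 4, 2
X = rng.normal(size=(T, K))
y = X @ rng.normal(size=K) + rng.normal(size=T)

A = np.linalg.inv(X.T @ X)
beta_hat = A @ X.T @ y
u_hat = y - X @ beta_hat

# two constraints Q'beta = c (illustrative choice)
Qt = np.array([[1.0, -1.0, 0.0, 0.0],
               [0.0,  0.0, 1.0, 0.0]])   # Q' is q x K
c = np.zeros(q)

# constrained least squares estimator (standard closed form)
V = Qt @ A @ Qt.T
beta_plus = beta_hat - A @ Qt.T @ np.linalg.solve(V, Qt @ beta_hat - c)

S = lambda b: np.sum((y - X @ b) ** 2)   # sum of squared residuals

d = Qt @ beta_hat - c
F_direct = ((T - K) / q) * d @ np.linalg.solve(V, d) / (u_hat @ u_hat)   # (12.4.11)
F_ssr    = ((T - K) / q) * (S(beta_plus) - S(beta_hat)) / S(beta_hat)    # (12.4.14)

assert np.isclose(F_direct, F_ssr)
```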

The result (12.4.14) may be directly verified. Using the regression equation (12.3.13), we have

$$(12.4.15)\quad S(\hat\beta) = u'[I - Z(Z'Z)^{-1}Z']u \sim \sigma^2\chi^2_{T-K}$$

and

$$(12.4.16)\quad S(\hat\beta^+) = u'[I - Z_2(Z_2'Z_2)^{-1}Z_2']u \sim \sigma^2\chi^2_{T-K+q}.$$

Therefore, by Theorem 11.5.19,

$$(12.4.17)\quad S(\hat\beta^+) - S(\hat\beta) = u'\tilde Z_1(\tilde Z_1'\tilde Z_1)^{-1}\tilde Z_1'u,$$

where $\tilde Z_1 = [I - Z_2(Z_2'Z_2)^{-1}Z_2']Z_1$. Finally, (12.4.15) and (12.4.17) are independent because $[I - Z(Z'Z)^{-1}Z']\tilde Z_1 = 0$.

The F statistic $\eta$ given in (12.4.11) takes on a variety of forms as we insert specific values into $Q$ and $c$. Consider the case where $\beta$ is partitioned as $\beta' = (\beta_1', \beta_2')$, where $\beta_1$ is a $K_1$-vector and $\beta_2$ is a $K_2$-vector such that $K_1 + K_2 = K$, and the null hypothesis specifies $\beta_2 = \bar\beta_2$ and leaves $\beta_1$ unspecified. This hypothesis can be written in the form $Q'\beta = c$ by putting $Q' = (0, I)$, where $0$ is the $K_2 \times K_1$ matrix of zeros, $I$ is the identity matrix of size $K_2$, and $c = \bar\beta_2$. Inserting these values into (12.4.11) yields

$$(12.4.18)\quad \eta = \frac{T - K}{K_2}\cdot\frac{(\hat\beta_2 - \bar\beta_2)'\,[(0, I)(X'X)^{-1}(0, I)']^{-1}(\hat\beta_2 - \bar\beta_2)}{\hat u'\hat u} \sim F(K_2,\, T - K).$$

We can simplify (12.4.18) somewhat. Partition $X = (X_1, X_2)$ conformably with the partition of $\beta$, and define $M_1 = I - X_1(X_1'X_1)^{-1}X_1'$. Then, by Theorem 11.3.9, we have

$$(12.4.19)\quad [(0, I)(X'X)^{-1}(0, I)']^{-1} = X_2'M_1X_2.$$

Inserting (12.4.19) into (12.4.18) yields

$$(12.4.20)\quad \eta = \frac{T - K}{K_2}\cdot\frac{(\hat\beta_2 - \bar\beta_2)'X_2'M_1X_2(\hat\beta_2 - \bar\beta_2)}{\hat u'\hat u} \sim F(K_2,\, T - K).$$
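The partitioned-inverse identity (12.4.19) can be checked directly: the lower-right $K_2 \times K_2$ block of $(X'X)^{-1}$ is exactly $(0, I)(X'X)^{-1}(0, I)'$, and its inverse should equal $X_2'M_1X_2$. The dimensions and random design below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
T, K1, K2 = 30, 2, 3
X1 = rng.normal(size=(T, K1))
X2 = rng.normal(size=(T, K2))
X = np.hstack([X1, X2])

A = np.linalg.inv(X.T @ X)
lower_right = A[K1:, K1:]         # equals (0, I)(X'X)^{-1}(0, I)'

# M1 projects off the column space of X1
M1 = np.eye(T) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)

assert np.allclose(np.linalg.inv(lower_right), X2.T @ M1 @ X2)
```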

Of particular interest is a special case of (12.4.20) in which $K_1 = 1$, so that $\beta_1$ is the scalar coefficient on the first column of $X$, which we assume to be the vector of ones (denoted $\mathbf{1}$). Furthermore, we assume $\bar\beta_2 = 0$. Then $M_1$ in (12.4.19) becomes $L = I - T^{-1}\mathbf{1}\mathbf{1}'$. Also, we have from equation (12.2.14),

$$(12.4.21)\quad \hat\beta_2 = (X_2'LX_2)^{-1}X_2'Ly.$$

Therefore (12.4.20) can now be written as

$$(12.4.22)\quad \eta = \frac{T - K}{K - 1}\cdot\frac{y'LX_2(X_2'LX_2)^{-1}X_2'Ly}{\hat u'\hat u} \sim F(K - 1,\, T - K).$$

Using the definition of $R^2$ given in (12.2.33), we can further rewrite (12.4.22) as

$$(12.4.23)\quad \eta = \frac{T - K}{K - 1}\cdot\frac{R^2}{1 - R^2} \sim F(K - 1,\, T - K),$$

since $\hat u'\hat u = y'Ly - y'LX_2(X_2'LX_2)^{-1}X_2'Ly$ by (12.2.32).
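The agreement between the quadratic-form expression and the $R^2$ form of the overall F statistic can be checked numerically. In this sketch, which is not from the text, the data are arbitrary and $R^2$ is taken to be $1 - \hat u'\hat u / (y'Ly)$, the usual centered coefficient of determination:

```python
import numpy as np

rng = np.random.default_rng(3)
T, K = 60, 4                      # first regressor is the constant
X2 = rng.normal(size=(T, K - 1))
ones = np.ones(T)
X = np.column_stack([ones, X2])
y = 0.5 + X2 @ np.array([1.0, -1.0, 0.3]) + rng.normal(size=T)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
u_hat = y - X @ beta_hat

L = np.eye(T) - np.outer(ones, ones) / T      # demeaning matrix L = I - 11'/T
R2 = 1.0 - (u_hat @ u_hat) / (y @ L @ y)      # centered R^2

# direct form (12.4.22) and the R^2 form (12.4.23)
Ly = L @ y
LX2 = L @ X2
F_direct = ((T - K) / (K - 1)) * (Ly @ LX2 @ np.linalg.solve(X2.T @ LX2, LX2.T @ y)) / (u_hat @ u_hat)
F_r2 = ((T - K) / (K - 1)) * R2 / (1.0 - R2)

assert np.isclose(F_direct, F_r2)
```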
