# Hypothesis Testing with Artificial Regressions

Artificial regressions like the GNR are probably employed most frequently for hypothesis testing. Suppose we wish to test a set of $r$ equality restrictions on $\theta$. Without loss of generality, we can assume that these are zero restrictions. This allows us to partition $\theta$ into two subvectors, $\theta_1$ of length $k - r$ and $\theta_2$ of length $r$, the restrictions being that $\theta_2 = 0$. If the estimator $\hat{\theta}$ is not only root-n consistent but also asymptotically normal, an appropriate statistic for testing these restrictions is

$$\hat{\theta}_2^{\top}\bigl(\widehat{\operatorname{Var}}(\hat{\theta}_2)\bigr)^{-1}\hat{\theta}_2, \tag{1.19}$$

which will be asymptotically distributed as $\chi^2(r)$ under the null if $\widehat{\operatorname{Var}}(\hat{\theta}_2)$ is a suitable estimate of the covariance matrix of $\hat{\theta}_2$.

Suppose that $r(\theta)$ and $R(\theta)$ define an artificial regression for the estimator $\hat{\theta}$. Let $\acute{\theta} = [\acute{\theta}_1 \,\vdots\, 0]$ be a vector of root-n consistent estimates under the null. Then, if the variables of the artificial regression are evaluated at $\acute{\theta}$, the regression can be expressed as

$$r(\acute{\theta}_1, 0) = R_1(\acute{\theta}_1, 0)b_1 + R_2(\acute{\theta}_1, 0)b_2 + \text{residuals}, \tag{1.20}$$

where the partitioning of $R = [R_1 \;\; R_2]$ corresponds to the partitioning of $\theta$ as $[\theta_1 \,\vdots\, \theta_2]$. Regression (1.20) will usually be written simply as

$$r = R_1 b_1 + R_2 b_2 + \text{residuals},$$

although this notation hides the fact that $\acute{\theta}$ satisfies the null hypothesis.

By the one-step property, $b_2$ from (1.20) is asymptotically equivalent under the null to the estimator $\hat{\theta}_2$, since under the null the true value of $\theta_2$ is zero. This suggests that we may replace $\hat{\theta}_2$ in (1.19) by $b_2$. By property (2), the asymptotic covariance matrix of $n^{1/2}(\hat{\theta} - \theta_0)$ is estimated by $(n^{-1}R^{\top}R)^{-1}$. A suitable estimate of the covariance matrix of $\hat{\theta}_2$ can be obtained from this by use of the Frisch–Waugh–Lovell (FWL) theorem; see Davidson and MacKinnon (1993, ch. 1) for a full treatment of the FWL theorem. The estimate is $(R_2^{\top} M_1 R_2)^{-1}$, where the orthogonal projection matrix $M_1$ is defined by

$$M_1 = I - R_1(R_1^{\top}R_1)^{-1}R_1^{\top}. \tag{1.21}$$

By the same theorem, we have that

$$b_2 = (R_2^{\top} M_1 R_2)^{-1} R_2^{\top} M_1 r. \tag{1.22}$$

Thus the artificial regression version of the test statistic (1.19) is

$$b_2^{\top} R_2^{\top} M_1 R_2 b_2 = r^{\top} M_1 R_2 (R_2^{\top} M_1 R_2)^{-1} R_2^{\top} M_1 r. \tag{1.23}$$
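The algebra behind (1.21)–(1.23) is easy to check numerically. The sketch below uses made-up data (the dimensions `n`, `k`, `r_restr` and the random draws are illustrative, not from the text): it computes $b_2$ via the FWL formula (1.22), verifies that both sides of (1.23) agree, and confirms that $b_2$ coincides with the coefficient on $R_2$ from the full regression, as the FWL theorem asserts.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, r_restr = 200, 5, 2                    # sample size, total params, restrictions

R1 = rng.standard_normal((n, k - r_restr))   # regressors corresponding to theta_1
R2 = rng.standard_normal((n, r_restr))       # regressors corresponding to theta_2
r  = rng.standard_normal(n)                  # artificial regressand

# M1 = I - R1 (R1'R1)^{-1} R1', the projection off R1, as in (1.21)
M1 = np.eye(n) - R1 @ np.linalg.solve(R1.T @ R1, R1.T)

# b2 via the FWL theorem, equation (1.22)
A  = R2.T @ M1 @ R2
b2 = np.linalg.solve(A, R2.T @ M1 @ r)

# Left- and right-hand sides of (1.23)
lhs = b2 @ A @ b2
v   = R2.T @ M1 @ r
rhs = v @ np.linalg.solve(A, v)

# OLS on the full matrix [R1 R2] gives the same b2, by the FWL theorem
b_full = np.linalg.lstsq(np.hstack([R1, R2]), r, rcond=None)[0]

assert np.allclose(lhs, rhs)
assert np.allclose(b_full[-r_restr:], b2)
```
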

The following theorem demonstrates the asymptotic validity of (1.23).

Theorem 1. If the regressand $r(\theta)$ and the regressor matrix $R(\theta)$ define an artificial regression for the root-n consistent, asymptotically normal estimator $\hat{\theta}$, and if the partition $R = [R_1 \;\; R_2]$ corresponds to the partition $\theta = [\theta_1 \,\vdots\, \theta_2]$, then the statistic (1.23), computed at any root-n consistent $\acute{\theta} = [\acute{\theta}_1 \,\vdots\, 0]$, is asymptotically distributed as $\chi^2(r)$ under the null hypothesis that $\theta_2 = 0$, and is asymptotically equivalent to the generic statistic (1.19).

Proof. To prove this theorem, we need to show two things. The first is that

$$n^{-1} R_2^{\top} M_1 R_2 = n^{-1} R_2^{\top}(\theta_0) M_1(\theta_0) R_2(\theta_0) + o_p(1),$$

where 00 is the true parameter vector, and M1(00) is defined analogously to (1.21). This result follows by standard asymptotic arguments based on the one-step property. The second is that the vector

$$n^{-1/2} R_2^{\top} M_1 r = n^{-1/2} R_2^{\top}(\theta_0) M_1(\theta_0) r(\theta_0) + o_p(1)$$

is asymptotically normally distributed. The equality here also follows by standard asymptotic arguments. The asymptotic normality of $\hat{\theta}$ implies that $b$ is asymptotically normally distributed. Therefore, by (1.22), $n^{-1/2} R_2^{\top} M_1 r$ must also be asymptotically normally distributed. These two results imply that, asymptotically under the null hypothesis, the test statistic (1.23) is a quadratic form in a normally distributed $r$-vector with mean zero, the middle matrix being the inverse of its covariance matrix. Such a quadratic form follows the $\chi^2(r)$ distribution. ■
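The distributional fact invoked at the end of the proof is standard, but it may help to spell it out:

```latex
x \sim N(0,\Sigma), \quad \Sigma = LL^{\top} \text{ positive definite of order } r
\;\Longrightarrow\;
z \equiv L^{-1}x \sim N(0, I_r)
\;\Longrightarrow\;
x^{\top}\Sigma^{-1}x = z^{\top}z = \sum_{i=1}^{r} z_i^2 \sim \chi^2(r).
```

Replacing $\Sigma$ by a consistent estimate, as (1.23) does, changes nothing asymptotically.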

Remarks. The statistic (1.23) can be computed as the difference between the sums of squared residuals (SSR) from the regressions

$$r = R_1 b_1 + \text{residuals}, \tag{1.24}$$

and

$$r = R_1 b_1 + R_2 b_2 + \text{residuals}. \tag{1.25}$$

Equivalently, it can be computed as the difference between the explained sums of squares (ESS), with the opposite sign, or as the ESS from the FWL regression corresponding to (1.25):

$$M_1 r = M_1 R_2 b_2 + \text{residuals}.$$
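These equivalences are exact in finite samples, not merely asymptotic, so they can be verified directly. The sketch below (again with arbitrary made-up data; all names are illustrative) checks that the statistic (1.23) equals both the difference of SSRs from (1.24) and (1.25) and the ESS from the FWL regression:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k1, k2 = 150, 3, 2                  # illustrative sizes only

R1 = rng.standard_normal((n, k1))
R2 = rng.standard_normal((n, k2))
r  = rng.standard_normal(n)
R  = np.hstack([R1, R2])

def ssr(y, X):
    """Sum of squared residuals from OLS of y on X."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    return e @ e

M1 = np.eye(n) - R1 @ np.linalg.solve(R1.T @ R1, R1.T)
v  = R2.T @ M1 @ r
stat = v @ np.linalg.solve(R2.T @ M1 @ R2, v)   # the statistic (1.23)

# (i) difference of SSRs from regressions (1.24) and (1.25)
diff_ssr = ssr(r, R1) - ssr(r, R)

# (ii) ESS from the FWL regression of M1 r on M1 R2
y_fwl, X_fwl = M1 @ r, M1 @ R2
ess_fwl = y_fwl @ y_fwl - ssr(y_fwl, X_fwl)

assert np.allclose(stat, diff_ssr)
assert np.allclose(stat, ess_fwl)
```
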

If $\operatorname{plim} n^{-1} r^{\top} r = 1$ for all root-n consistent $\acute{\theta}$, there are other convenient ways of computing (1.23), or statistics asymptotically equivalent to it. One is the ordinary F-statistic for $b_2 = 0$ in regression (1.25):

$$F = \frac{r^{\top} M_1 R_2 (R_2^{\top} M_1 R_2)^{-1} R_2^{\top} M_1 r / r}{r^{\top} M_1 r / (n - k)}, \tag{1.26}$$

which works because the denominator tends to a probability limit of 1 as $n \to \infty$. This statistic is, of course, in F rather than $\chi^2$ form.

Another frequently used test statistic is available if $\acute{\theta}$ is actually the vector of restricted estimates, that is, the estimator that minimizes the criterion function when the restriction that $\theta_2 = 0$ is imposed. In this case, $n$ times the uncentered $R^2$ from (1.25) is a valid test statistic. With this choice of $\acute{\theta}$, the ESS from (1.24) is zero, by property (1). Thus (1.23) is just the ESS from (1.25). Since $nR^2 = \mathrm{ESS}/(\mathrm{TSS}/n)$, where TSS denotes the total sum of squares, and since $\mathrm{TSS}/n \to 1$ as $n \to \infty$, it follows that this statistic is asymptotically equivalent to (1.23).

Even though the GNR does not satisfy condition (2) when it is expressed in its usual form, with all variables not divided by the standard error $s$, the F-statistic (1.26) and the $nR^2$ statistic are still valid test statistics, because they are both ratios. In fact, variants of the GNR are routinely used to perform many types of specification tests. These include tests for serial correlation similar to the ones proposed by Godfrey (1978), nonnested hypothesis tests where both models are parametric (Davidson and MacKinnon, 1981), and nonnested hypothesis tests where the alternative model is nonparametric (Delgado and Stengos, 1994). They also include several Durbin–Wu–Hausman, or DWH, tests, in which an efficient estimator is compared with an inefficient estimator that is consistent under weaker conditions; see Sections 7.9 and 11.4 of Davidson and MacKinnon (1993).
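As a concrete illustration of the $nR^2$ variant, consider a linear regression, for which the GNR regressand is simply the vector of residuals and the regressor matrix is the matrix of regressors. The sketch below (the model, coefficients, and variable names are all made up for illustration) estimates under the restriction, runs the GNR at the restricted estimates, and forms $n$ times the uncentered $R^2$; it also confirms that the ESS from regressing the restricted residuals on the restricted regressors alone is zero, as property (1) requires.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y  = 1.0 + 0.5 * x1 + rng.standard_normal(n)   # data generated under the null beta_2 = 0

X1 = np.column_stack([np.ones(n), x1])         # restricted regressors
X  = np.column_stack([X1, x2])                 # full regressor matrix

# Restricted OLS estimates play the role of the restricted parameter vector
b_r = np.linalg.lstsq(X1, y, rcond=None)[0]
u   = y - X1 @ b_r                             # restricted residuals = GNR regressand

# Property (1): u is exactly orthogonal to X1, so this ESS is (numerically) zero
fit1 = X1 @ np.linalg.lstsq(X1, u, rcond=None)[0]

# GNR: regress u on the full matrix X; n * uncentered R^2 is the test statistic
fit = X @ np.linalg.lstsq(X, u, rcond=None)[0]
ess, tss = fit @ fit, u @ u
nR2 = n * ess / tss

assert np.allclose(fit1, 0.0)
assert 0.0 <= nR2 < n                          # uncentered R^2 lies in [0, 1)
```

Under the null, `nR2` is asymptotically a $\chi^2(1)$ draw; comparing it with the $\chi^2(1)$ critical value gives the LM-type test described above.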
