Double-Length Regressions

Up to this point, the number of observations for all the artificial regressions we have studied has been equal to n, the number of observations in the data. In some cases, however, artificial regressions may have 2n or even 3n observations. This can happen whenever each observation makes two or more contributions to the criterion function.

The first double-length artificial regression, or DLR, was proposed by Davidson and MacKinnon (1984a). We will refer to it as the DLR, even though it is no longer the only artificial regression with 2n observations. The class of models to which the DLR applies is a subclass of the one used for GMM estimation. Such models may be written as

ft(yt, θ) = εt,   t = 1, …, n,   εt ~ NID(0, 1),   (1.47)

where, as before, each ft(·) is a smooth function that depends on the data and on a k-vector of parameters θ. Here, however, the εt are assumed to be normally distributed conditional on the information sets Ωt, as well as being of mean zero, serially uncorrelated, and homoskedastic with variance 1. Further, ft may depend only on a scalar dependent variable yt, although lagged dependent variables are allowed as explanatory variables.

The class of models (1.47) is much less restrictive than it may at first appear to be. In particular, it is not essential that the error terms follow the normal distribution, although it is essential that they follow some specified, continuous distribution, which can be transformed into the standard normal distribution, so as to allow the model to be written in the form of (1.47). A great many models that involve transformations of the dependent variable can be put into the form of (1.47). For example, consider the Box-Cox regression model

τ(yt, λ) = Σ_{i=1}^{k} βi τ(Xti, λ) + Σ_{j=1}^{l} γj Ztj + ut,   ut ~ N(0, σ²),   (1.48)

where τ(x, λ) ≡ (x^λ − 1)/λ is the Box-Cox transformation (Box and Cox, 1964), yt is the dependent variable, the Xti are independent variables that are always positive, and the Ztj are additional independent variables. We can rewrite (1.48) in the form of (1.47) by making the definition

ft(yt, θ) = (1/σ) ( τ(yt, λ) − Σ_{i=1}^{k} βi τ(Xti, λ) − Σ_{j=1}^{l} γj Ztj ),

where θ denotes the full parameter vector consisting of the βi, the γj, λ, and σ.
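As a concrete illustration, here is a minimal numpy sketch of the Box-Cox transformation and the resulting ft for model (1.48). The function names and the way θ is split into (β, γ, λ, σ) are our own choices for the sketch, not anything specified in the text:

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transformation tau(x, lam) = (x**lam - 1)/lam,
    with the limiting case tau(x, 0) = log(x)."""
    if lam == 0.0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def f_t(y, X, Z, beta, gamma, lam, sigma):
    """f_t(y_t, theta) for the Box-Cox model (1.48) written in the
    form (1.47): the residual divided by sigma, so that it is N(0, 1)
    when the model is correct."""
    resid = boxcox(y, lam) - boxcox(X, lam) @ beta - Z @ gamma
    return resid / sigma
```

Note that λ = 1 makes (1.48) a linear regression (in yt − 1) and λ = 0 a loglinear one, which is what makes this model useful for testing between those two functional forms.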

For the model (1.47), the contribution of the t-th observation to the loglikelihood function ℓ(y, θ) is

ℓt(yt, θ) = −(1/2) log(2π) − (1/2) ft²(yt, θ) + kt(yt, θ),

where

kt(yt, θ) ≡ log |∂ft(yt, θ)/∂yt|

is a Jacobian term. Now let us make the definitions

Fti(yt, θ) ≡ ∂ft(yt, θ)/∂θi   and   Kti(yt, θ) ≡ ∂kt(yt, θ)/∂θi,

and define F(y, θ) and K(y, θ) as the n × k matrices with typical elements Fti(yt, θ) and Kti(yt, θ) and typical rows Ft(y, θ) and Kt(y, θ). Similarly, let f(y, θ) be the n-vector with typical element ft(yt, θ).
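To make these definitions concrete, consider the simple special case ft(yt, θ) = (log yt − μ)/σ with θ = (μ, σ), so that yt is lognormal. This is an illustrative choice of ours, not a model from the text; its virtue is that F and K can be written down analytically and checked against finite differences:

```python
import numpy as np

def f_vec(y, theta):
    """f_t(y_t, theta) = (log y_t - mu)/sigma for the lognormal case."""
    mu, sigma = theta
    return (np.log(y) - mu) / sigma

def k_vec(y, theta):
    """Jacobian term k_t = log|d f_t / d y_t| = -log(sigma) - log(y_t)."""
    mu, sigma = theta
    return -np.log(sigma) - np.log(y)

def num_jac(fun, y, theta, h=1e-6):
    """Central finite-difference derivatives with respect to each theta_i."""
    cols = []
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h
        tm[i] -= h
        cols.append((fun(y, tp) - fun(y, tm)) / (2 * h))
    return np.column_stack(cols)

y = np.array([0.5, 1.0, 2.7])
theta = np.array([0.3, 1.2])
mu, sigma = theta

# Analytic F and K (columns: derivatives w.r.t. mu, then sigma).
F = np.column_stack([-np.ones_like(y) / sigma,
                     -(np.log(y) - mu) / sigma**2])
K = np.column_stack([np.zeros_like(y),
                     -np.ones_like(y) / sigma])

assert np.allclose(F, num_jac(f_vec, y, theta), atol=1e-6)
assert np.allclose(K, num_jac(k_vec, y, theta), atol=1e-6)
```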

The DLR, which has 2n artificial observations, may be written as

⎡ f(y, θ) ⎤   ⎡ −F(y, θ) ⎤
⎢          ⎥ = ⎢           ⎥ b + residuals,   (1.49)
⎣    ι     ⎦   ⎣  K(y, θ)  ⎦

where ι denotes an n-vector of 1s.

Since the gradient of ℓ(y, θ) is

g(y, θ) = −Fᵀ(y, θ)f(y, θ) + Kᵀ(y, θ)ι,   (1.50)

we see that regression (1.49) satisfies condition (1′). It can also be shown that it satisfies conditions (2) and (3), and thus it has all the properties of an artificial regression.
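For the illustrative special case ft(yt, θ) = (log yt − μ)/σ with θ = (μ, σ) (again our choice, not the text's), the following numpy sketch stacks the 2n artificial observations of regression (1.49) and checks condition (1′), namely that Rᵀr equals the gradient (1.50):

```python
import numpy as np

y = np.array([0.5, 1.0, 2.7, 4.1])
n = len(y)
mu, sigma = 0.3, 1.2

# f, F, and K for f_t = (log y_t - mu)/sigma; theta = (mu, sigma).
f = (np.log(y) - mu) / sigma
F = np.column_stack([-np.ones(n) / sigma, -(np.log(y) - mu) / sigma**2])
K = np.column_stack([np.zeros(n), -np.ones(n) / sigma])
iota = np.ones(n)

r = np.concatenate([f, iota])   # 2n-vector regressand of (1.49)
R = np.vstack([-F, K])          # 2n x k regressor matrix of (1.49)

g = -F.T @ f + K.T @ iota       # gradient (1.50)
assert np.allclose(R.T @ r, g)  # condition (1')

# The artificial parameter estimates from (1.49):
b, *_ = np.linalg.lstsq(R, r, rcond=None)
```

At the MLE the gradient vanishes, so b computed there would be numerically zero; evaluated at restricted estimates instead, the regression provides the basis for LM-type tests.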

The DLR can be used for many purposes, including nonnested hypothesis tests of models with different functional forms (Davidson and MacKinnon, 1984a), tests of functional form (MacKinnon and Magee, 1990), and tests of linear and loglinear regressions against Box-Cox alternatives like (1.48) (Davidson and MacKinnon, 1985a). The latter application has recently been extended to models with AR(1) errors by Baltagi (1999). An accessible discussion of the DLR may be found in Davidson and MacKinnon (1988). When both the OPG regression and the DLR are available, the finite-sample performance of the latter always seems to be very much better than that of the former.

As we remarked earlier, the DLR is not the only artificial regression with 2n artificial observations. In particular, Orme (1995) showed how to construct such a regression for the widely-used tobit model, and Davidson and MacKinnon (1999) provided evidence that Orme’s regression generally works very well. It makes sense that a double-length regression should be needed in this case, because the tobit loglikelihood is the sum of two summations, which are quite different in form. One summation involves all the observations for which the dependent variable is equal to zero, and the other involves all the observations for which it takes on a positive value.
