# An Artificial Regression for GMM Estimation

Another useful artificial regression, much less well known than the OPG regression, is available for a class of models estimated by the generalized method of moments (GMM). Many such models can be formulated in terms of functions $f_t(\theta)$ of the model parameters and the data, such that, when they are evaluated at the true $\theta$, their expectations conditional on corresponding information sets, $\Omega_t$, vanish. The $\Omega_t$ usually contain all information available prior to the time of observation $t$, and so, as with the GNR and the OPG regression, lags of dependent variables are allowed.

Let the $n \times l$ matrix $W$ denote the instruments used to obtain the GMM estimates. The $t$th row of $W$, denoted $W_t$, must contain variables in $\Omega_t$ only. The dimension of $\theta$ is $k$, as before, and, for $\theta$ to be identified, we need $l \geq k$. The GMM estimates with $l \times l$ weighting matrix $A$ are obtained by minimizing the criterion function

$$ Q(\theta) = \tfrac{1}{2}\, f^{\top}(\theta)\, W A W^{\top} f(\theta) \qquad (1.31) $$

with respect to $\theta$. Here $f(\theta)$ is the $n$-vector with typical element $f_t(\theta)$. For the procedure known as efficient GMM, the weighting matrix $A$ is chosen so as to be proportional, asymptotically at least, to the inverse of the covariance matrix of $W^{\top} f(\theta)$. In the simplest case, the $f_t(\theta)$ are serially uncorrelated and homoskedastic with variance 1, and so an appropriate choice is $A = (W^{\top} W)^{-1}$. With this choice, the criterion function (1.31) becomes

$$ Q(\theta) = \tfrac{1}{2}\, f^{\top}(\theta)\, P_W\, f(\theta), \qquad (1.32) $$

where $P_W = W(W^{\top} W)^{-1} W^{\top}$ is the orthogonal projection on to the columns of $W$.
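The criterion (1.32) is straightforward to compute. The following is a minimal numerical sketch in Python with numpy; the instrument matrix `W` and the vector `f` of elementary zero functions are hypothetical placeholders, not data from the text.

```python
import numpy as np

# Hypothetical data: n observations, l instruments.
rng = np.random.default_rng(42)
n, l = 200, 3
W = rng.standard_normal((n, l))      # n x l instrument matrix
f = rng.standard_normal(n)           # elementary zero functions f(theta)

# Orthogonal projection onto the columns of W: P_W = W (W'W)^{-1} W'
P_W = W @ np.linalg.solve(W.T @ W, W.T)

# Efficient GMM criterion (1.32): Q(theta) = (1/2) f' P_W f
Q = 0.5 * f @ P_W @ f

# Equivalently, half the squared norm of the fitted values from
# regressing f on W -- a numerically safer way to compute Q,
# since P_W is idempotent and symmetric.
fitted = W @ np.linalg.lstsq(W, f, rcond=None)[0]
print(np.isclose(Q, 0.5 * fitted @ fitted))   # True
```

Forming $P_W$ explicitly is an $n \times n$ matrix and is shown here only for transparency; in practice one would work with the least-squares fitted values directly.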

Let $J(\theta)$ be the negative of the $n \times k$ Jacobian matrix of $f(\theta)$, so that the $ti$th element of $J(\theta)$ is $-\partial f_t(\theta)/\partial \theta_i$. The first-order conditions for minimizing (1.32) are

$$ J^{\top}(\hat\theta)\, P_W\, f(\hat\theta) = 0. \qquad (1.33) $$

By standard arguments, it can be seen that the vector $\hat\theta$ that solves (1.33) is asymptotically normal and asymptotically satisfies the equation

$$ n^{1/2}(\hat\theta - \theta_0) = \bigl(n^{-1} J_0^{\top} P_W J_0\bigr)^{-1} n^{-1/2} J_0^{\top} P_W f_0, \qquad (1.34) $$

with $J_0 = J(\theta_0)$ and $f_0 = f(\theta_0)$. See Davidson and MacKinnon (1993, ch. 17) for a full discussion of GMM estimation.

Now consider the artificial regression

$$ f(\theta) = P_W J(\theta)\, b + \text{residuals}. \qquad (1.35) $$

By the first-order conditions (1.33) for $\hat\theta$, this equation clearly satisfies condition (1), and in fact it also satisfies condition (1′) for the criterion function $Q(\theta)$ of (1.32). Since the covariance matrix of $f(\theta_0)$ is just the identity matrix, it follows from (1.34) that condition (2) is also satisfied. Arguments just like those presented in Section 3 for the GNR can be used to show that condition (3), the one-step property, is also satisfied by (1.35).
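The one-step property is easiest to see in the linear IV special case taken up below, where $f(\beta) = y - X\beta$ and $J(\beta) = X$ do not depend on $\beta$, so that a single step of the artificial regression from any starting value lands exactly on the GMM estimate. A hedged sketch with simulated data (all names hypothetical):

```python
import numpy as np

# One-step property of the artificial regression (1.35) in the linear
# IV case: f(beta) = y - X beta, J(beta) = X. Simulated data throughout.
rng = np.random.default_rng(7)
n, k, l = 500, 2, 3
W = rng.standard_normal((n, l))                       # instruments
X = W @ rng.standard_normal((l, k)) + 0.5 * rng.standard_normal((n, k))
beta_true = np.array([1.0, -2.0])
y = X @ beta_true + rng.standard_normal(n)

P_W = W @ np.linalg.solve(W.T @ W, W.T)

# GMM (IV) estimate: solves X' P_W (y - X beta) = 0, i.e. (1.33)
beta_hat = np.linalg.solve(X.T @ P_W @ X, X.T @ P_W @ y)

# One step from an arbitrary starting value beta_bar:
# regress f(beta_bar) = y - X beta_bar on P_W X, then update by b.
beta_bar = np.zeros(k)
b = np.linalg.lstsq(P_W @ X, y - X @ beta_bar, rcond=None)[0]
print(np.allclose(beta_bar + b, beta_hat))            # True: exact here
```

In the linear case the equality is exact rather than merely asymptotic, because the artificial regressors do not change with $\beta$.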

If the $f_t(\theta_0)$ are homoskedastic but with unknown variance $\sigma^2$, regression (1.35) can be used in exactly the same way as the GNR. Either the regressand and regressors can be divided by a suitable consistent estimate of $\sigma$, or else all test statistics can be computed as ratios, in $F$ or $nR^2$ form, as appropriate.
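As one illustration of the $nR^2$ form, consider testing an exclusion restriction in a linear IV model: estimate the restricted model, then regress the restricted zero functions on the projected full regressor set. Under the null, $nR^2$ (with the uncentered $R^2$) is asymptotically $\chi^2$ with as many degrees of freedom as restrictions. A sketch under simulated, hypothetical data:

```python
import numpy as np

# nR^2 test based on the artificial regression, linear IV case.
# Null hypothesis: the coefficient on X2 is zero. Simulated data.
rng = np.random.default_rng(3)
n, l = 400, 4
W = rng.standard_normal((n, l))                       # instruments
X1 = W[:, :2] + 0.3 * rng.standard_normal((n, 2))     # included regressors
X2 = W[:, 2:3] + 0.3 * rng.standard_normal((n, 1))    # regressor under test
y = X1 @ np.array([1.0, 1.0]) + rng.standard_normal(n)  # null is true

P_W = W @ np.linalg.solve(W.T @ W, W.T)

# Restricted IV estimate (X2 excluded) and restricted zero functions
b1 = np.linalg.solve(X1.T @ P_W @ X1, X1.T @ P_W @ y)
f_tilde = y - X1 @ b1

# Artificial regression of f_tilde on P_W [X1, X2]; uncentered R^2
Z = P_W @ np.hstack([X1, X2])
coef = np.linalg.lstsq(Z, f_tilde, rcond=None)[0]
resid = f_tilde - Z @ coef
R2 = 1.0 - (resid @ resid) / (f_tilde @ f_tilde)
nR2 = n * R2
print(nR2)   # compare with a chi-square(1) critical value, e.g. 3.84
```

Because the unknown $\sigma$ cancels in the ratio, no explicit estimate of it is needed in this form of the test.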

An important special case of (1.35) is provided by the class of regression models, linear or nonlinear, estimated with instrumental variables (IV). Such a model can be written in the form (1.3), but it is estimated by minimizing, not the sum-of-squares criterion function (1.4), but rather

$$ Q(\beta) = \tfrac{1}{2}\,\bigl(y - x(\beta)\bigr)^{\top} P_W \bigl(y - x(\beta)\bigr), $$

where $W$ is an $n \times l$ matrix of instrumental variables. This criterion function has exactly the same form as (1.32), with $\beta$ instead of $\theta$, and with $f(\beta) = y - x(\beta)$. In addition, $J(\beta) = X(\beta)$, where $X(\beta)$ is defined, exactly as for the GNR, to have $ti$th element $\partial x_t(\beta)/\partial \beta_i$. The resulting artificial regression for the IV model, which takes the form

$$ y - x(\beta) = P_W X(\beta)\, b + \text{residuals}, \qquad (1.36) $$

is often referred to as a GNR because, except for the projection matrix $P_W$, it is identical to (1.7); see Davidson and MacKinnon (1993, ch. 7).
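Running (1.36) at the IV estimate itself shows the regression's two basic uses: the estimated coefficients are zero by the first-order conditions (1.33), and the OLS covariance matrix from the artificial regression reproduces the familiar IV covariance estimate $\hat\sigma^2 (X^{\top} P_W X)^{-1}$. A sketch with simulated, hypothetical data, using the linear case $x(\beta) = X\beta$:

```python
import numpy as np

# The IV variant of the GNR, regression (1.36), evaluated at the IV
# estimate beta_hat. Simulated data; all names are hypothetical.
rng = np.random.default_rng(11)
n, k, l = 300, 2, 3
W = rng.standard_normal((n, l))
X = W @ rng.standard_normal((l, k)) + rng.standard_normal((n, k))
y = X @ np.array([0.5, 1.5]) + rng.standard_normal(n)

P_W = W @ np.linalg.solve(W.T @ W, W.T)
beta_hat = np.linalg.solve(X.T @ P_W @ X, X.T @ P_W @ y)
u_hat = y - X @ beta_hat                       # f(beta_hat)

# Regression (1.36): regress y - x(beta) on P_W X at beta = beta_hat.
# The coefficients vanish by the first-order conditions (1.33).
b = np.linalg.lstsq(P_W @ X, u_hat, rcond=None)[0]
print(np.allclose(b, 0.0, atol=1e-8))          # True

# The artificial regression's OLS covariance matrix gives the usual
# IV covariance estimate (up to the degrees-of-freedom convention).
s2 = (u_hat @ u_hat) / n
cov_artificial = s2 * np.linalg.inv(X.T @ P_W @ X)
```

This is the sense in which (1.36) "works like" the GNR: evaluated at the estimates it delivers zero coefficients and valid standard errors, and evaluated at restricted estimates it delivers test statistics, exactly as described above.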