A Companion to Theoretical Econometrics

The Gauss-Newton Regression

Associated with every nonlinear regression model is a somewhat nonstandard artificial regression which is probably more widely used than any other. Consider the univariate, nonlinear regression model

yt = xt(β) + ut,  ut ~ iid(0, σ²),  t = 1, …, n,    (1.2)

where yt is the tth observation on the dependent variable, and β is a k-vector of parameters to be estimated. The scalar function xt(β) is a nonlinear regression function. It determines the mean value of yt as a function of unknown parameters β and, usually, of explanatory variables, which may include lagged dependent variables. The explanatory variables are not shown explicitly in (1.2), but the t subscript on xt(β) reminds us that they are present. The model (1.2) may also be written as

y = x(β) + u,  u ~ iid(0, σ²I),    (1.3)

where y i...
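The GNR that goes with this model regresses the residual vector y − x(β) on the n × k Jacobian matrix X(β) of derivatives of the regression function, and iterating that regression yields the nonlinear least squares estimates. The following sketch illustrates the idea for a hypothetical regression function xt(β) = β1(1 − exp(−β2 zt)), chosen purely for illustration; the variable names and the example model are not from the text.

```python
import numpy as np

# Minimal sketch of the Gauss-Newton iteration for y = x(beta) + u,
# using the hypothetical regression function
# x_t(beta) = beta1 * (1 - exp(-beta2 * z_t)).
rng = np.random.default_rng(0)
n = 200
z = rng.uniform(0.5, 3.0, n)
beta_true = np.array([2.0, 1.5])

def x_of(beta):
    return beta[0] * (1.0 - np.exp(-beta[1] * z))

def jacobian(beta):
    # n x k matrix X(beta) with typical element dx_t / dbeta_i
    e = np.exp(-beta[1] * z)
    return np.column_stack([1.0 - e, beta[0] * z * e])

y = x_of(beta_true) + 0.1 * rng.standard_normal(n)

beta = np.array([1.0, 1.0])              # starting values
for _ in range(50):
    X = jacobian(beta)                   # regressors of the GNR
    resid = y - x_of(beta)               # regressand of the GNR
    step, *_ = np.linalg.lstsq(X, resid, rcond=None)  # GNR coefficients
    beta = beta + step                   # Gauss-Newton update
    if np.max(np.abs(step)) < 1e-10:
        break

print(beta)  # should be close to beta_true
```

At convergence the GNR coefficients are zero, and the same regression then delivers the usual estimate of the covariance matrix of the estimates.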


Hypothesis Testing with Artificial Regressions

Artificial regressions like the GNR are probably employed most frequently for hypothesis testing. Suppose we wish to test a set of r equality restrictions on θ. Without loss of generality, we can assume that these are zero restrictions. This allows us to partition θ into two subvectors, θ1 of length k − r, and θ2 of length r, the restrictions being that θ2 = 0. If the estimator θ̂ is not only root-n consistent but also asymptotically normal, an appropriate statistic for testing these restrictions is

θ̂2ᵀ(V̂ar(θ̂2))⁻¹θ̂2,    (1.19)

which will be asymptotically distributed as χ²(r) under the null if V̂ar(θ̂2) is a suitable estimate of the covariance matrix of θ̂2.
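The quadratic form in (1.19) is simple to compute. The sketch below uses hypothetical values for θ̂2 and its estimated covariance matrix, invented purely to show the arithmetic.

```python
import numpy as np

# Minimal sketch of the Wald-type statistic (1.19) for testing theta2 = 0,
# with made-up values for theta2_hat and its covariance estimate.
theta2_hat = np.array([0.8, -0.5])             # r = 2 restricted parameters
var_theta2 = np.array([[0.04, 0.01],
                       [0.01, 0.09]])          # estimate of Var(theta2_hat)

# theta2' (Var(theta2))^{-1} theta2
wald = theta2_hat @ np.linalg.solve(var_theta2, theta2_hat)

# Under the null, the statistic is asymptotically chi-squared(r);
# the 5% critical value of chi-squared(2) is 5.991, so these
# hypothetical values would lead to rejection.
print(wald)  # -> 21.6 (approximately)
```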

Suppose that r(θ) and R(θ) define an artificial regression for the estimator θ̂...


The OPG Regression

By no means all interesting econometric models are regression models. It is therefore useful to see if artificial regressions other than the GNR exist for wide classes of models. One of these is the outer-product-of-the-gradient regression, or OPG regression, a particularly simple artificial regression that can be used with most models that are estimated by maximum likelihood. Suppose we are interested in a model of which the loglikelihood function can be written as


ℓ(θ) = Σ_{t=1}^{n} ℓt(θ),    (1.27)

where ℓt(θ) denotes the contribution to the loglikelihood function associated with observation t. This is the log of the density of the dependent variable(s) for observation t, conditional on observations 1, …, t − 1. Thus lags of the dependent variable(s) are allowed. The key feature of (1...
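The OPG regression itself regresses a vector of ones on the n × k matrix G(θ) whose typical element is ∂ℓt/∂θi; the resulting (GᵀG)⁻¹ matrix estimates the covariance matrix of the MLE. A minimal sketch, using a hypothetical exponential model with ℓt(θ) = log θ − θyt chosen only for illustration:

```python
import numpy as np

# Minimal sketch of the OPG regression for an exponential model,
# with loglikelihood contributions l_t(theta) = log(theta) - theta * y_t.
rng = np.random.default_rng(1)
n = 500
y = rng.exponential(scale=0.5, size=n)     # true theta = 2

theta_hat = 1.0 / y.mean()                 # the ML estimator for this model

# G is the n x k matrix of score contributions dl_t/dtheta,
# evaluated at theta_hat; here k = 1 and dl_t/dtheta = 1/theta - y_t.
G = (1.0 / theta_hat - y).reshape(-1, 1)

# Regressing a vector of ones on G yields the OPG covariance
# estimate (G'G)^{-1} for the MLE.
var_opg = np.linalg.inv(G.T @ G)
se_opg = np.sqrt(var_opg[0, 0])
print(theta_hat, se_opg)
```

The OPG estimate here should be close to the true asymptotic standard error of θ̂, which for this model is θ/√n ≈ 0.089.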


An Artificial Regression for GMM Estimation

Another useful artificial regression, much less well known than the OPG regression, is available for a class of models estimated by the generalized method of moments (GMM). Many such models can be formulated in terms of functions ft(θ) of the model parameters and the data, such that, when they are evaluated at the true θ, their expectations conditional on corresponding information sets, Ωt, vanish. The Ωt usually contain all information available prior to the time of observation t, and so, as with the GNR and the OPG regression, lags of dependent variables are allowed.

Let the n × l matrix W denote the instruments used to obtain the GMM estimates. The tth row of W, denoted Wt, must contain variables in Ωt only...
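In the just-identified case, setting the sample moments Wᵀf(θ) to zero pins down the GMM estimates. The sketch below uses a hypothetical linear model with elementary zero functions ft(θ) = yt − θxt and a single instrument, invented purely to illustrate the moment conditions.

```python
import numpy as np

# Minimal sketch of GMM moment conditions E[f_t(theta) | Omega_t] = 0,
# for a hypothetical linear model f_t(theta) = y_t - theta * x_t
# with one instrument w_t (so l = 1 and W is n x 1).
rng = np.random.default_rng(2)
n = 400
w = rng.standard_normal(n)                  # instrument in Omega_t
x = w + 0.5 * rng.standard_normal(n)        # regressor correlated with w
y = 3.0 * x + rng.standard_normal(n)        # true theta = 3

# Solving W'f(theta) = 0 in the just-identified case gives
# the simple IV estimator.
theta_hat = (w @ y) / (w @ x)

f = y - theta_hat * x                       # elementary zero functions
print(theta_hat, w @ f)                     # w'f is zero by construction
```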


Artificial Regressions and Heteroskedasticity

Covariance matrices and test statistics calculated via the GNR (1.7), or via artificial regressions such as (1.35) and (1.36), are not asymptotically valid when the assumption that the error terms are iid is violated. Consider a modified version of the nonlinear regression model (1.3), in which E(uuᵀ) = Ω, where Ω is an n × n diagonal matrix with tth diagonal element ω²t. Let V denote an n × n diagonal matrix with the squared residual û²t as the tth diagonal element. It has been known since the work of White (1980) that the matrix

(XᵀX)⁻¹XᵀVX(XᵀX)⁻¹    (1.37)

provides an estimator of var(β̂), which can be used in place of the usual estimator, s²(XᵀX)⁻¹...
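The sandwich formula (1.37) can be computed directly from the residuals. A minimal sketch for a linear model with made-up heteroskedastic errors (the data-generating process below is invented for illustration):

```python
import numpy as np

# Minimal sketch of White's heteroskedasticity-consistent estimator (1.37)
# for a linear model y = X beta + u with error variance that grows
# with the regressor.
rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.uniform(1.0, 4.0, n)])
omega = 0.2 * X[:, 1]                        # sd of u_t grows with x_t
y = X @ np.array([1.0, 2.0]) + omega * rng.standard_normal(n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
u_hat = y - X @ beta_hat

XtX_inv = np.linalg.inv(X.T @ X)
V = np.diag(u_hat**2)                        # tth diagonal element: u_hat_t^2

# White's sandwich estimator (1.37): (X'X)^{-1} X'VX (X'X)^{-1}
cov_white = XtX_inv @ X.T @ V @ X @ XtX_inv

# The usual estimator s^2 (X'X)^{-1}, invalid under heteroskedasticity
s2 = u_hat @ u_hat / (n - X.shape[1])
cov_usual = s2 * XtX_inv
print(np.sqrt(np.diag(cov_white)), np.sqrt(np.diag(cov_usual)))
```

In practice one would avoid forming the full n × n matrix V explicitly, e.g. by computing X.T @ (u_hat[:, None]**2 * X) instead; the small example above keeps the notation of (1.37).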
