By no means all interesting econometric models are regression models. It is therefore useful to see if artificial regressions other than the GNR exist for wide classes of models. One of these is the outer-product-of-the-gradient regression, or OPG
regression, a particularly simple artificial regression that can be used with most models that are estimated by maximum likelihood. Suppose we are interested in a model for which the loglikelihood function can be written as
\ell(\theta) = \sum_{t=1}^{n} \ell_t(\theta), \qquad (1.27)
where \ell_t(\theta) denotes the contribution to the loglikelihood function associated with observation t. This is the log of the density of the dependent variable(s) for observation t, conditional on observations 1, …, t − 1. Thus lags of the dependent variable(s) are allowed. The key feature of (1.27) …
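Since (1.27) writes the loglikelihood as a sum of per-observation contributions, the OPG regression can be formed by stacking the score contributions ∂ℓ_t/∂θ into a matrix G and regressing a vector of ones on G; the matrix (G⊤G)⁻¹ then estimates the covariance matrix of the ML estimates. A minimal sketch, using an illustrative normal location model y_t ~ N(μ, 1) (the data and model are assumptions for illustration, not from the text):

```python
import numpy as np

# Normal location model: l_t = -0.5*log(2*pi) - 0.5*(y_t - mu)^2,
# so the score contribution for observation t is g_t = y_t - mu.
rng = np.random.default_rng(0)
n = 500
y = rng.normal(loc=2.0, scale=1.0, size=n)

mu_hat = y.mean()                  # ML estimate of mu
G = (y - mu_hat).reshape(-1, 1)    # n x k matrix of score contributions (k = 1)

# OPG regression: regress a vector of ones on G.
iota = np.ones(n)
b, *_ = np.linalg.lstsq(G, iota, rcond=None)

# (G'G)^{-1} is the OPG estimate of the covariance matrix of the MLE;
# here it should be close to 1/n, the true variance of mu_hat.
cov_opg = np.linalg.inv(G.T @ G)
print(cov_opg[0, 0])
```

The same construction applies with k parameters: G becomes n × k, and the uncentred R² from the regression also yields the familiar OPG-based test statistics.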
Another useful artificial regression, much less well known than the OPG regression, is available for a class of models estimated by the generalized method of moments (GMM). Many such models can be formulated in terms of functions f_t(\theta) of the model parameters and the data, such that, when they are evaluated at the true \theta, their expectations conditional on corresponding information sets, \Omega_t, vanish. The \Omega_t usually contain all information available prior to the time of observation t, and so, as with the GNR and the OPG regression, lags of dependent variables are allowed.
Let the n × l matrix W denote the instruments used to obtain the GMM estimates. The tth row of W, denoted W_t, must contain variables in \Omega_t only …
Covariance matrices and test statistics calculated via the GNR (1.7), or via artificial regressions such as (1.35) and (1.36), are not asymptotically valid when the assumption that the error terms are iid is violated. Consider a modified version of the nonlinear regression model (1.3), in which E(uu^\top) = \Omega, where \Omega is an n × n diagonal matrix with tth diagonal element \omega_t^2. Let \hat V denote an n × n diagonal matrix with the squared residual \hat u_t^2 as the tth diagonal element. It has been known since the work of White (1980) that the matrix

(X^\top X)^{-1} X^\top \hat V X \, (X^\top X)^{-1}

provides a heteroskedasticity-consistent estimator of \mathrm{var}(\hat\beta), which can be used in place of the usual estimator, s^2 (X^\top X)^{-1} …
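White's sandwich estimator is straightforward to compute directly. A minimal sketch, on simulated heteroskedastic data (the data-generating process and variable names here are illustrative assumptions, not from the text):

```python
import numpy as np

# White (1980) heteroskedasticity-consistent covariance matrix:
#   (X'X)^{-1} X' V_hat X (X'X)^{-1},
# where V_hat is diagonal with squared OLS residuals on the diagonal.
rng = np.random.default_rng(1)
n = 400
x = rng.uniform(1.0, 3.0, size=n)
X = np.column_stack([np.ones(n), x])
u = rng.normal(size=n) * x          # error variance grows with x
y = X @ np.array([1.0, 0.5]) + u

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat

XtX_inv = np.linalg.inv(X.T @ X)
# Conventional estimator s^2 (X'X)^{-1}, invalid under heteroskedasticity:
s2 = resid @ resid / (n - X.shape[1])
cov_ols = s2 * XtX_inv
# White's HCCME: replace s^2 * I with diag(resid^2) in the middle:
meat = X.T @ (resid[:, None] ** 2 * X)
cov_hc = XtX_inv @ meat @ XtX_inv

print(np.sqrt(np.diag(cov_ols)))   # conventional standard errors
print(np.sqrt(np.diag(cov_hc)))    # heteroskedasticity-robust standard errors
```

For a nonlinear regression, X would be replaced by the Jacobian of the regression function evaluated at the estimates.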
Up to this point, the number of observations for all the artificial regressions we have studied has been equal to n, the number of observations in the data. In some cases, however, artificial regressions may have 2n or even 3n observations. This can happen whenever each observation makes two or more contributions to the criterion function.
The first double-length artificial regression, or DLR, was proposed by Davidson and MacKinnon (1984a). We will refer to it as the DLR, even though it is no longer the only artificial regression with 2n observations. The class of models to which the DLR applies is a subclass of the one used for GMM estimation. Such models may be written as
f_t(y_t, \theta) = \varepsilon_t, \quad t = 1, \ldots, n, \quad \varepsilon_t \sim \mathrm{NID}(0, 1), \qquad (1.47)
where, as before, each f_t(\cdot) is a smooth function that depends on …
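To make the form (1.47) concrete, a linear regression with normal errors can always be rewritten this way: with \theta = (\beta, \sigma), set f_t(y_t, \theta) = (y_t − X_t\beta)/\sigma, which is standard normal at the true parameters. A minimal sketch (the data and names below are illustrative assumptions):

```python
import numpy as np

def f(y, X, beta, sigma):
    """Contributions f_t(y_t, theta) = (y_t - X_t beta) / sigma.
    At the true theta these are NID(0, 1), as required by (1.47)."""
    return (y - X @ beta) / sigma

# Simulate y_t = X_t beta + sigma * eps_t and check the transformation.
rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sigma_true = np.array([1.0, -0.5]), 2.0
y = X @ beta_true + sigma_true * rng.normal(size=n)

eps = f(y, X, beta_true, sigma_true)
print(eps.mean(), eps.std())   # close to 0 and 1 at the true parameters
```

Models with parameter-dependent transformations of y_t (such as Box–Cox models) fit the same mold, which is what makes the DLR, with its two contributions per observation, applicable to them.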
For binary response models such as the logit and probit models, there exists a very simple artificial regression that can be derived as an extension of the Gauss–Newton regression. It was independently suggested by Engle (1984) and Davidson and MacKinnon (1984b).
The object of a binary response model is to predict the probability that the binary dependent variable, yt, is equal to 1 conditional on some information set Qt. A useful class of binary response models can be written as
E(y_t \mid \Omega_t) = \Pr(y_t = 1) = F(Z_t \beta). \qquad (1.51)
Here Z_t is a row vector of explanatory variables that belong to \Omega_t, \beta is the vector of parameters to be estimated, and F(x) is the differentiable cumulative distribution function (CDF) of some scalar probability distribution …
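For the model (1.51), the binary-response artificial regression takes the regressand (y_t − F(Z_t\beta))/\sqrt{F(1−F)} and regressors f(Z_t\beta)Z_t/\sqrt{F(1−F)}, where f = F′ is the density. A minimal probit sketch (the data, the fixed-point ML loop, and all names are illustrative assumptions, not a production estimator):

```python
import numpy as np
from math import erf, sqrt, pi

# Standard normal CDF and density, so F = Phi, f = phi (probit case).
Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))
phi = lambda z: np.exp(-0.5 * z * z) / sqrt(2.0 * pi)

rng = np.random.default_rng(3)
n = 2000
Z = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.3, 0.8])
y = (rng.uniform(size=n) < Phi(Z @ beta_true)).astype(float)

# Iterating the artificial regression gives method-of-scoring updates:
beta = np.zeros(2)
for _ in range(25):
    idx = Z @ beta
    F, dens = Phi(idx), phi(idx)
    w = np.sqrt(F * (1.0 - F))
    r = (y - F) / w                 # regressand
    R = (dens / w)[:, None] * Z     # regressors
    step, *_ = np.linalg.lstsq(R, r, rcond=None)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

# At the ML estimates, (R'R)^{-1} estimates the covariance of beta_hat.
cov = np.linalg.inv(R.T @ R)
print(beta, np.sqrt(np.diag(cov)))
```

The same regression, evaluated at restricted estimates, yields LM test statistics in the usual artificial-regression fashion.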
Anil K. Bera and Gamini Premaratne*
The history of statistical hypothesis testing is, indeed, very long. Neyman and Pearson (1933) traced its origin to Bayes (1763). However, systematic applications of hypothesis testing began only after the publication of Karl Pearson's (1900) goodness-of-fit test, which is regarded as one of the 20 most important scientific breakthroughs in this century. In terms of the development of statistical methods, Ronald Fisher took up where Pearson left off. Fisher (1922) can be regarded as the analytical beginning of statistical methods. In his paper, Fisher advocated the use of maximum likelihood estimation and provided the general theory of parametric statistical inference …