A Companion to Theoretical Econometrics

Double-Length Regressions

Up to this point, the number of observations for all the artificial regressions we have studied has been equal to n, the number of observations in the data. In some cases, however, artificial regressions may have 2n or even 3n observations. This can happen whenever each observation makes two or more contributions to the criterion function.

The first double-length artificial regression, or DLR, was proposed by Davidson and MacKinnon (1984a). We will refer to it as the DLR, even though it is no longer the only artificial regression with 2n observations. The class of models to which the DLR applies is a subclass of the one used for GMM estimation. Such models may be written as

f_t(y_t, θ) = ε_t,   t = 1, …, n,   ε_t ~ NID(0, 1),   (1.47)

where, as before, each f_t(·) is a smooth function that depends on...
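To make the 2n-observation structure concrete, here is a minimal numpy sketch for the simplest member of the class (1.47): the linear regression model written as f_t(y_t, θ) = (y_t − x_tβ)/σ with θ = (β, σ). The layout of the regressand and regressors below follows the standard DLR construction (sign conventions vary across presentations, so treat this as an illustration rather than the chapter's exact formulation); all variable names and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
beta_true, sigma_true = 1.5, 0.8
y = beta_true * x + sigma_true * rng.normal(size=n)

def dlr_parts(beta, sigma):
    """Regressand and regressors of the DLR for the model
    f_t(y_t, theta) = (y_t - x_t*beta)/sigma = eps_t ~ NID(0, 1)."""
    f = (y - beta * x) / sigma
    # Jacobian term k_t = d f_t / d y_t = 1/sigma, so log k_t = -log(sigma).
    # Derivatives of f_t with respect to theta = (beta, sigma):
    F = np.column_stack([-x / sigma, -f / sigma])
    # Derivatives of log k_t with respect to theta:
    K = np.column_stack([np.zeros(n), -np.ones(n) / sigma])
    b = np.concatenate([f, np.ones(n)])   # regressand: 2n observations
    X = np.vstack([-F, K])                # regressors: 2n x p matrix
    return b, X

b, X = dlr_parts(beta_true, sigma_true)

# X'b reproduces the gradient of the loglikelihood
# l(theta) = sum_t ( -0.5*f_t^2 + log k_t ), up to the usual constant:
grad_dlr = X.T @ b
f = (y - beta_true * x) / sigma_true
grad_analytic = np.array([
    np.sum(f * x / sigma_true),                    # d l / d beta
    np.sum(f**2 / sigma_true - 1.0 / sigma_true),  # d l / d sigma
])
assert np.allclose(grad_dlr, grad_analytic)
```

Because X'b equals the score, OLS on the artificial regression evaluated at restricted estimates yields LM-type test statistics, which is the main use of the DLR.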


An Artificial Regression for Binary Response Models

For binary response models such as the logit and probit models, there exists a very simple artificial regression that can be derived as an extension of the Gauss-Newton regression. It was independently suggested by Engle (1984) and Davidson and MacKinnon (1984b).

The object of a binary response model is to predict the probability that the binary dependent variable, y_t, is equal to 1 conditional on some information set Ω_t. A useful class of binary response models can be written as

E(y_t | Ω_t) = Pr(y_t = 1 | Ω_t) = F(Z_t β).   (1.51)

Here Z_t is a row vector of explanatory variables that belong to Ω_t, β is the vector of parameters to be estimated, and F(x) is the differentiable cumulative distribution function (CDF) of some scalar probability distribution...
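As a sketch of how such an artificial regression can be built, the snippet below evaluates a regressand of the form (y_t − F_t)/√V_t and regressors f_t Z_t/√V_t, where f_t is the density corresponding to F and V_t = F_t(1 − F_t) is the conditional variance of y_t. The logit case is used so that everything stays in plain numpy; the weighting shown is the standard construction for this family of regressions, stated here as an assumption rather than the chapter's exact formulation, and the data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
Z = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([0.3, -0.7])

# Simulate logit data: Pr(y_t = 1 | Omega_t) = F(Z_t beta), F logistic.
p = 1.0 / (1.0 + np.exp(-(Z @ beta)))
y = (rng.uniform(size=n) < p).astype(float)

def brmr(b):
    """Regressand and regressors of the binary-response artificial
    regression at parameter vector b (logit case, where the density
    is f(x) = F(x)(1 - F(x)))."""
    F = 1.0 / (1.0 + np.exp(-(Z @ b)))   # fitted probabilities F_t
    f = F * (1.0 - F)                    # logistic density at the index
    V = F * (1.0 - F)                    # V_t = Var(y_t | Omega_t)
    r = (y - F) / np.sqrt(V)             # regressand
    R = (f / np.sqrt(V))[:, None] * Z    # regressors
    return r, R

r, R = brmr(beta)

# R'r reproduces the score of the binomial loglikelihood, which for
# the logit model collapses to sum_t (y_t - F_t) Z_t:
score_artificial = R.T @ r
score_analytic = Z.T @ (y - p)
assert np.allclose(score_artificial, score_analytic)
```

Evaluated at restricted estimates, the explained sum of squares from this regression gives LM statistics, just as with the Gauss-Newton regression for nonlinear least squares.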


General Hypothesis Testing

Anil K. Bera and Gamini Premaratne*

1 Introduction

The history of statistical hypothesis testing is, indeed, very long. Neyman and Pearson (1933) traced its origin to Bayes (1763). However, systematic applications of hypothesis testing began only after the publication of Karl Pearson’s (1900) goodness-of-fit test, which is regarded as one of the 20 most important scientific breakthroughs in this century. In terms of the development of statistical methods, Ronald Fisher took up where Pearson left off. Fisher (1922) can be regarded as the analytical beginning of statistical methods. In his paper Fisher advocated the use of maximum likelihood estimation and provided the general theory of parametric statistical inference...



This is the first companion volume in econometrics. It comprises 32 chapters written by international experts in the field. The emphasis of this companion is on "keeping things simple" so as to give students of econometrics a guide through the maze of important topics in econometrics. These chapters are helpful for readers and users of econometrics who are not looking for exhaustive surveys on the subject. Instead, these chapters give the reader some of the basics and point to further readings on the subject...


Some Test Principles Suggested in the Statistics Literature


We start by introducing some notation and concepts. Suppose we have n independent observations y_1, y_2, …, y_n on a random variable Y with density function f(y; θ), where θ is a p × 1 parameter vector with θ ∈ Θ ⊆ ℝ^p. It is assumed that f(y; θ) satisfies the regularity conditions stated in Rao (1973, p. 364) and Serfling (1980, p. 144). The likelihood function is given by

L(θ; y) = ∏_{i=1}^{n} f(y_i; θ),   (2.1)

where y = (y_1, y_2, …, y_n)′ denotes the sample.

Suppose we are interested in testing a simple hypothesis H_0 : θ = θ_0 against another simple hypothesis H_1 : θ = θ_1. Let S denote the sample space. In standard test procedures, S is partitioned into two regions, ω and its complement ω^c. We reject the null hypothesis if the sample y ∈ ω; otherwise, we do not reject H_0...
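As a small numerical illustration of the likelihood (2.1) and the simple-versus-simple setup, the sketch below evaluates the log-likelihood ratio for an assumed N(θ, 1) model, where the Neyman-Pearson construction rejects H_0 when L(θ_1; y)/L(θ_0; y) exceeds a constant. The model, parameter values, and data are invented for the example, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
theta0, theta1 = 0.0, 0.5   # simple null and simple alternative (illustrative)
y = rng.normal(loc=theta1, scale=1.0, size=n)

def loglik(theta, y):
    """log L(theta; y) for n independent N(theta, 1) observations,
    i.e. the log of the product of densities in (2.1)."""
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum((y - theta) ** 2)

# Neyman-Pearson: reject H0 when the likelihood ratio is large, i.e.
# when the log-likelihood ratio exceeds a constant.
llr = loglik(theta1, y) - loglik(theta0, y)

# For this model the ratio is a monotone function of the sample mean,
# so the rejection region omega reduces to {ybar > c}:
ybar = y.mean()
llr_closed = n * (theta1 - theta0) * ybar - 0.5 * n * (theta1**2 - theta0**2)
assert np.isclose(llr, llr_closed)
```

The closed form shows why, in this example, the abstract partition of S into ω and ω^c collapses to a one-dimensional threshold rule on a sufficient statistic.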
