Advanced Econometrics (Takeshi Amemiya)

Asymptotic Properties of Extremum Estimators

By extremum estimators we mean estimators obtained by either maximizing or minimizing a certain function defined over the parameter space. First, we shall establish conditions for the consistency and the asymptotic normality of extremum estimators (Section 4.1), and second, we shall apply the results to important special cases, namely, the maximum likelihood estimator (Section 4.2) and the nonlinear least squares estimator (Section 4.3).
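As an illustration, here is a minimal sketch (not from the text) of an extremum estimator in practice: the maximum likelihood estimator of a normal mean and standard deviation, obtained by numerically minimizing the negative log-likelihood over the parameter space. The use of scipy.optimize.minimize and the log-parameterization of sigma are assumptions of this example.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, y):
    # Criterion function; its minimizer is the extremum (here, ML) estimator
    mu, log_sigma = params            # log-parameterize to keep sigma > 0
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + ((y - mu) / sigma) ** 2)

rng = np.random.default_rng(0)
y = rng.normal(loc=1.0, scale=2.0, size=500)

# The extremum estimator: the minimizer of the criterion over the parameter space
result = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), args=(y,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)   # close to the true values (1.0, 2.0) in large samples
```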

What we call extremum estimators Huber called M estimators, meaning maximum-likelihood-like estimators. He developed the asymptotic properties in a series of articles (summarized in Huber, 1981). The emphasis here, however, will be different from his...


A Singular Covariance Matrix

If the covariance matrix $\Sigma$ is singular, we obviously cannot define GLS by (6.1.3). Suppose that the rank of $\Sigma$ is $S < T$. Then, by Theorem 3 of Appendix 1, we can find an orthogonal matrix $H = (H_1, H_2)$, where $H_1$ is $T \times S$ and $H_2$ is $T \times (T - S)$, such that $H_1'\Sigma H_1 = \Lambda$, a diagonal matrix consisting of the $S$ positive characteristic roots of $\Sigma$, $H_1'\Sigma H_2 = 0$, and $H_2'\Sigma H_2 = 0$. The premultiplication of (6.1.1) by $H'$ yields two vector equations:

$$H_1'y = H_1'X\beta + H_1'u \qquad (6.1.11)$$

and

$$H_2'y = H_2'X\beta. \qquad (6.1.12)$$

Note that there is no error term in (6.1.12) because $E\,H_2'uu'H_2 = H_2'\Sigma H_2 = 0$ and therefore $H_2'u$ is identically equal to a zero vector. Then the best linear unbiased estimator of $\beta$ is GLS applied to (6.1.11) subject to the linear constraints (6.1.12).$^1$ Or, equivalently, it is LS applied to

$$\Lambda^{-1/2}H_1'y = \Lambda^{-1/2}H_1'X\beta \ldots$$
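To make the construction concrete, the following is a minimal NumPy sketch (not from the text) of the estimator just described: eigendecompose $\Sigma$ to obtain $H_1$, $H_2$, and $\Lambda$, whiten equation (6.1.11) by $\Lambda^{-1/2}$, and apply LS subject to the exact constraints (6.1.12). The function name, the tolerance for a zero root, and the Lagrange-multiplier solution of the constrained LS problem are assumptions of this sketch.

```python
import numpy as np

def blue_singular_sigma(y, X, Sigma, tol=1e-10):
    # Sigma = H diag(lam) H' with H orthogonal (Theorem 3 of Appendix 1)
    lam, H = np.linalg.eigh(Sigma)
    pos = lam > tol                       # the S positive characteristic roots
    H1, H2 = H[:, pos], H[:, ~pos]        # T x S and T x (T - S)
    Lam = lam[pos]

    # Whitened version of (6.1.11): Lam^{-1/2} H1'y = Lam^{-1/2} H1'X beta + error
    y1 = (H1.T @ y) / np.sqrt(Lam)
    X1 = (H1.T @ X) / np.sqrt(Lam)[:, None]

    # Exact linear constraints (6.1.12): H2'y = H2'X beta (no error term)
    R, r = H2.T @ X, H2.T @ y

    # LS on (y1, X1) subject to R beta = r, via the Lagrange-multiplier
    # (KKT) system; lstsq guards against redundant constraints
    k, m = X.shape[1], R.shape[0]
    A = np.block([[X1.T @ X1, R.T],
                  [R, np.zeros((m, m))]])
    b = np.concatenate([X1.T @ y1, r])
    return np.linalg.lstsq(A, b, rcond=None)[0][:k]
```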


Bayesian Solution

The Bayesian solution to the selection-of-regressors problem provides a pedagogically useful starting point, although it does not necessarily lead to a useful solution in practice. We can obtain the Bayesian solution as a special case of the Bayes estimator (defined in Section 2.1.2) for which both $\Theta$ and $D$ consist of two elements. Let the losses be represented as shown in Table 2.1, where $L_{12}$ is the loss incurred by choosing model 1 when model 2 is the true model and $L_{21}$ is the loss incurred by choosing model 2 when model 1 is the true model.$^3$ Then, by the result of Section 2.1.2, the Bayesian strategy is to choose model 1 if

$$\frac{P(1|y)}{P(2|y)} > \frac{L_{12}}{L_{21}}, \qquad (2.1.3)$$

where $P(i|y)$, $i = 1$ and $2$, is the posterior probability that model $i$ is true given the sample $y$...
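A minimal sketch (not from the text) of this decision rule: given prior model probabilities and the marginal likelihoods $p(y \mid \text{model } i)$, form the posterior odds and compare them with the loss ratio as in (2.1.3). All names and default values below are illustrative assumptions.

```python
def choose_model(marg_lik_1, marg_lik_2, prior_1=0.5, prior_2=0.5,
                 loss_12=1.0, loss_21=1.0):
    """Return 1 or 2, the model chosen by the Bayesian strategy.

    loss_12: loss from choosing model 1 when model 2 is true (L12)
    loss_21: loss from choosing model 2 when model 1 is true (L21)
    """
    # Posterior probabilities by Bayes' rule; the normalizer cancels in the ratio
    post_1 = marg_lik_1 * prior_1
    post_2 = marg_lik_2 * prior_2
    # Rule (2.1.3): choose model 1 if P(1|y)/P(2|y) > L12/L21
    return 1 if post_1 / post_2 > loss_12 / loss_21 else 2

print(choose_model(marg_lik_1=0.8, marg_lik_2=0.3))   # -> 1
```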


Gauss-Newton Method

The Gauss-Newton method was specifically designed to calculate the nonlinear least squares estimator. Expanding $f_t(\beta)$ of Eq. (4.3.5) in a Taylor series around the initial estimate $\hat{\beta}_1$, we obtain

$$f_t(\beta) \cong f_t(\hat{\beta}_1) + \left(\frac{\partial f_t}{\partial \beta'}\right)_{\hat{\beta}_1}(\beta - \hat{\beta}_1). \qquad (4.4.10)$$

Substituting the right-hand side of (4.4.10) for $f_t(\beta)$ in (4.3.5) yields

$$S^* = \sum_{t=1}^{T}\left[y_t - f_t(\hat{\beta}_1) - \left(\frac{\partial f_t}{\partial \beta'}\right)_{\hat{\beta}_1}(\beta - \hat{\beta}_1)\right]^2. \qquad (4.4.11)$$

The second-round estimator $\hat{\beta}_2$ of the Gauss-Newton iteration is obtained by minimizing the right-hand side of approximation (4.4.11) with respect to $\beta$ as

$$\hat{\beta}_2 = \hat{\beta}_1 + (G_1'G_1)^{-1}G_1'\left[y - f(\hat{\beta}_1)\right], \qquad (4.4.12)$$

where $G_1$ is the $T \times K$ matrix whose $t$th row is $\partial f_t/\partial \beta'$ evaluated at $\hat{\beta}_1$, and $f(\hat{\beta}_1)$ is the $T$-vector with $t$th element $f_t(\hat{\beta}_1)$.

The iteration (4.4.12) is to be repeated until convergence is obtained. This method involves only the first derivatives of $f_t$, whereas the Newton-Raphson iteration applied to nonlinear least squares estimation involves the second derivatives of $f_t$ as well.
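Here is a minimal NumPy sketch (not from the text) of the iteration (4.4.12). The exponential regression function $f_t(\beta) = \beta_0 e^{\beta_1 x_t}$, the simulated data, and the convergence tolerance are assumptions of this example; note that only first derivatives of $f_t$ are needed.

```python
import numpy as np

def gauss_newton(y, x, beta, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        f = beta[0] * np.exp(beta[1] * x)             # f(beta), a T-vector
        # G: T x K matrix of first derivatives, t-th row = df_t/dbeta'
        G = np.column_stack([np.exp(beta[1] * x),
                             beta[0] * x * np.exp(beta[1] * x)])
        # Update (4.4.12): beta_new = beta + (G'G)^{-1} G'(y - f(beta))
        step = np.linalg.solve(G.T @ G, G.T @ (y - f))
        beta = beta + step
        if np.max(np.abs(step)) < tol:                # repeat until convergence
            break
    return beta

# Example usage with simulated data
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x) + 0.05 * rng.standard_normal(50)
print(gauss_newton(y, x, beta=np.array([1.0, 1.0])))   # near (2.0, 1.5)
```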

The Gauss-Newton iteration may be alternatively motivated as follows: Evaluatin...
