Advanced Econometrics (Takeshi Amemiya)

Regression Case

Let us generalize some of the estimation methods discussed earlier to the regression situation.

M Estimator. The M estimator is easily generalized to the regression model: it minimizes $\sum_{t=1}^{T}\rho[(y_t - \mathbf{x}_t'b)/s]$ with respect to the vector $b$. Its asymptotic variance-covariance matrix is given by $s_0^2(X'AX)^{-1}X'BX(X'AX)^{-1}$, where $A$ and $B$ are diagonal matrices with the $t$th diagonal elements equal to $E\rho''[(y_t - \mathbf{x}_t'\beta)/s_0]$ and $E\{\rho'[(y_t - \mathbf{x}_t'\beta)/s_0]^2\}$, respectively.
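As a minimal numerical sketch (not from the text), the regression M estimator for the Huber $\rho$ can be computed by iteratively reweighted least squares; the function name, tuning constant $c$, and the fixed scale $s$ below are illustrative assumptions, and the simulated data are only for demonstration.

```python
import numpy as np

def huber_m_estimate(y, X, s, c=1.345, n_iter=50):
    """Huber M estimator of regression coefficients via iteratively
    reweighted least squares: minimizes sum_t rho[(y_t - x_t'b)/s]
    for the Huber rho, treating the scale s as known and fixed."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]            # start from OLS
    for _ in range(n_iter):
        r = (y - X @ b) / s                             # scaled residuals
        # Huber weights: 1 inside [-c, c], c/|r| outside
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        WX = X * w[:, None]
        b = np.linalg.solve(X.T @ WX, WX.T @ y)         # weighted LS step
    return b

rng = np.random.default_rng(0)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.array([1.0, 2.0])
u = rng.normal(size=T)
u[:10] += 15.0                                          # a few gross outliers
y = X @ beta + u
b_hat = huber_m_estimate(y, X, s=1.0)
```

Because the Huber weights cap the influence of the large residuals, `b_hat` stays close to the true coefficients despite the contamination, whereas OLS would shift the intercept noticeably.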

Hill and Holland (1977) did a Monte Carlo study on the regression generalization of Andrews' M estimator described in (2.3.5). They used

s = (2.1)\,\mathrm{Median}\{\text{largest } T - K + 1 \text{ of } |y_t - \mathbf{x}_t'\hat\beta|\}

as the scale factor in the $\rho$ function, where $\hat\beta$ is the value of $b$ that minimizes $\sum_{t=1}^{T}|y_t - \mathbf{x}_t'b|$. Actually, their estimator, which they ...
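The scale factor above is straightforward to compute once the absolute residuals are in hand. A small sketch (function name is ours; the residuals here are a deterministic toy input, not a LAD fit):

```python
import numpy as np

def hill_holland_scale(abs_resid, K):
    """s = 2.1 * Median{largest T-K+1 of |y_t - x_t'beta_hat|},
    the scale factor used by Hill and Holland (1977)."""
    T = abs_resid.size
    largest = np.sort(abs_resid)[-(T - K + 1):]   # the T-K+1 largest
    return 2.1 * np.median(largest)

abs_resid = np.arange(1.0, 11.0)     # toy |residuals| 1, 2, ..., 10 (T = 10)
s = hill_holland_scale(abs_resid, K=2)
# largest T-K+1 = 9 values are 2..10, whose median is 6, so s = 2.1 * 6
```

In practice the `abs_resid` argument would be the absolute residuals from the least absolute deviations fit $\hat\beta$ described in the text.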


Estimation of p

Because $\Sigma$ defined in (5.2.9) depends on $\sigma^2$ only through a scalar multiplication, the GLS estimator defined in (6.1.3) does not depend on $\sigma^2$. Therefore, in obtaining FGLS (6.2.1), we need to estimate only $\rho$. The most natural estimator of $\rho$ is

\hat\rho = \frac{\sum_{t=2}^{T}\hat u_t \hat u_{t-1}}{\sum_{t=2}^{T}\hat u_{t-1}^2}, \qquad (6.3.3)

where $\hat u_t = y_t - \mathbf{x}_t'\hat\beta$. The consistency of $\hat\rho$ is straightforward. We shall prove its asymptotic normality.
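A minimal simulation sketch of (6.3.3) (variable names and the simulated design are illustrative): fit LS, form the residuals, and regress each residual on its lag.

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho = 3000, 0.6
x = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.array([0.5, 1.0])

eps = rng.normal(size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = rho * u[t - 1] + eps[t]          # AR(1) errors u_t = rho*u_{t-1} + eps_t
y = x @ beta + u

b_ls = np.linalg.lstsq(x, y, rcond=None)[0]  # first-stage LS estimate of beta
uhat = y - x @ b_ls                          # LS residuals
# Equation (6.3.3): sum of uhat_t * uhat_{t-1} over sum of uhat_{t-1}^2
rho_hat = (uhat[1:] @ uhat[:-1]) / (uhat[:-1] @ uhat[:-1])
```

With a sample this large, `rho_hat` is close to the true $\rho = 0.6$, consistent with the consistency claim in the text.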

Using $u_t = \rho u_{t-1} + \varepsilon_t$, we have

\sqrt{T}(\hat\rho - \rho) = \frac{\frac{1}{\sqrt{T}}\sum_{t=2}^{T} u_{t-1}\varepsilon_t + \Delta_1}{\frac{1}{T}\sum_{t=2}^{T} u_{t-1}^2 + \Delta_2}, \qquad (6.3.4)

where

\Delta_1 = -\frac{1}{\sqrt{T}}\sum_{t=2}^{T}\bigl[\varepsilon_t\mathbf{x}_{t-1} + u_{t-1}(\mathbf{x}_t - \rho\mathbf{x}_{t-1})\bigr]'(\hat\beta - \beta) + \frac{1}{\sqrt{T}}\sum_{t=2}^{T}(\hat\beta - \beta)'(\mathbf{x}_t - \rho\mathbf{x}_{t-1})\mathbf{x}_{t-1}'(\hat\beta - \beta) \qquad (6.3.5)

and

\Delta_2 = \frac{1}{T}\sum_{t=2}^{T}\Bigl\{\bigl[(\hat\beta - \beta)'\mathbf{x}_{t-1}\bigr]^2 - 2(\hat\beta - \beta)'\mathbf{x}_{t-1}u_{t-1}\Bigr\}. \qquad (6.3.6)

If we assume that $\lim_{T\to\infty} T^{-1}X'X$ is a finite nonsingular matrix, it is easy to show that both $\Delta_1$ and $\Delta_2$ converge to 0 in probability. For this we need only the consistency of the LS estimator $\hat\beta$ and $\sqrt{T}(\hat\beta - \beta) = O(1)$, but not the asymptotic normality...


Constrained Least Squares Estimator as Best Linear Unbiased Estimator

That $\hat\beta$ is the best linear unbiased estimator follows from the fact that $\hat\gamma_2$ is the best linear unbiased estimator of $\gamma_2$ in (1.4.9); however, we can also prove it directly. Inserting (1.4.9) into (1.4.11) and using (1.4.8), we obtain

\hat\beta = \beta + R(R'X'XR)^{-1}R'X'u. \qquad (1.4.14)

Therefore, $\hat\beta$ is unbiased and its variance-covariance matrix is given by

V(\hat\beta) = \sigma^2 R(R'X'XR)^{-1}R'. \qquad (1.4.15)

We shall now define the class of linear estimators by $\beta^* = C'y + d$, where $C'$ is a $K \times T$ matrix and $d$ is a $K$-vector. This class is broader than the class of linear estimators considered in Section 1.2.5 because of the additive constants $d$. We did not include $d$ previously because in the unconstrained model the unbiasedness condition would ensure $d = 0$...
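A small numerical sketch of the constrained LS estimator behind (1.4.14) and (1.4.15), under homogeneous constraints of the form $\beta = R\gamma$ (the specific constraint $\beta_1 = \beta_2$, the simulated design, and the variance estimate are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100
X = rng.normal(size=(T, 2))
beta = np.array([1.5, 1.5])                 # true beta satisfies b1 = b2
y = X @ beta + rng.normal(size=T)

# Encode the constraint b1 = b2 as beta = R*gamma, with R spanning {b : b1 = b2}
R = np.array([[1.0], [1.0]])
A = R.T @ X.T @ X @ R                       # R'X'XR
gamma_hat = np.linalg.solve(A, R.T @ X.T @ y)
beta_hat = R @ gamma_hat                    # constrained LS: R(R'X'XR)^{-1}R'X'y

# Variance-covariance matrix, as in (1.4.15): sigma^2 R(R'X'XR)^{-1}R'
sigma2_hat = np.sum((y - X @ beta_hat) ** 2) / (T - 1)  # 1 free parameter
V = sigma2_hat * R @ np.linalg.inv(A) @ R.T
```

By construction `beta_hat` lies in the column space of `R`, so it satisfies the constraint exactly, and `V` inherits the reduced rank of $R(R'X'XR)^{-1}R'$.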


Maximum Likelihood Estimator

4.1.2 Definition

Let $L_T(\theta) = L(\mathbf{y}, \theta)$ be the joint density of a $T$-vector of random variables $\mathbf{y} = (y_1, y_2, \ldots, y_T)'$ characterized by a $K$-vector of parameters $\theta$. When we regard it as a function of $\theta$, we call it the likelihood function. The term maximum likelihood estimator (MLE) is often used to mean two different concepts: (1) the value of $\theta$ that globally maximizes the likelihood function $L(\mathbf{y}, \theta)$ over the parameter space $\Theta$; or (2) any root of the likelihood equation

\frac{\partial L_T(\theta)}{\partial\theta} = 0 \qquad (4.2.1)

that corresponds to a local maximum. We use it only in the second sense and use the term global maximum likelihood estimator to refer to the first concept. We sometimes use the term local maximum likelihood estimator to refer to the second concept.
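A sketch of the second concept for a simple model not discussed in the text: for an i.i.d. exponential sample with rate $\lambda$, the log likelihood is $T\log\lambda - \lambda\sum_t y_t$, and the MLE is the root of the score (likelihood) equation. Here we find that root numerically by bisection and compare it with the closed form $\hat\lambda = T/\sum_t y_t$ (all names and the bracketing interval are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
lam_true = 2.0
y = rng.exponential(scale=1.0 / lam_true, size=500)
T = y.size

def score(lam):
    """Derivative of the exponential log likelihood T*log(lam) - lam*sum(y)."""
    return T / lam - y.sum()

# Solve the likelihood equation score(lam) = 0 by bisection.
# score is strictly decreasing, so the root it brackets is a local
# (here also global) maximum of the likelihood.
lo, hi = 1e-6, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if score(mid) > 0:
        lo = mid
    else:
        hi = mid
lam_hat = 0.5 * (lo + hi)
```

In this model the likelihood equation has a unique root, so concepts (1) and (2) coincide; in models with multimodal likelihoods (the situation the text's terminology anticipates) a root-finder started from different points can return different local MLEs.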

4.1.3 Consistency

The conditions for...
