Maximum Likelihood Estimators

In this section we show that if we assume the normality of $\{u_t\}$ in model (10.1.1), the least squares estimators $\hat{\alpha}$, $\hat{\beta}$, and $\hat{\sigma}^2$ are also the maximum likelihood estimators.

The likelihood function of the parameters (that is, the joint density of $y_1, y_2, \ldots, y_T$) is given by

(10.2.78) $L = \prod_{t=1}^{T} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[ -\frac{(y_t - \alpha - \beta x_t)^2}{2\sigma^2} \right] = (2\pi\sigma^2)^{-T/2} \exp\left[ -\frac{1}{2\sigma^2} \sum_{t=1}^{T} (y_t - \alpha - \beta x_t)^2 \right].$


Taking the natural logarithm of both sides of (10.2.78), we have

(10.2.79) $\log L = -\frac{T}{2} \log 2\pi - \frac{T}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \sum_{t=1}^{T} (y_t - \alpha - \beta x_t)^2.$

Since $\log L$ depends on $\alpha$ and $\beta$ only via the last term of the right-hand side of (10.2.79), the maximum likelihood estimators of $\alpha$ and $\beta$ are identical to the least squares estimators.
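To spell out this step: differentiating (10.2.79) with respect to $\alpha$ and $\beta$ and setting the derivatives equal to zero gives

$$\frac{\partial \log L}{\partial \alpha} = \frac{1}{\sigma^2} \sum_{t=1}^{T} (y_t - \alpha - \beta x_t) = 0, \qquad \frac{\partial \log L}{\partial \beta} = \frac{1}{\sigma^2} \sum_{t=1}^{T} x_t (y_t - \alpha - \beta x_t) = 0,$$

which, after multiplying through by $\sigma^2$, are exactly the least squares normal equations.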

Inserting $\hat{\alpha}$ and $\hat{\beta}$ into the right-hand side of (10.2.79), we obtain the so-called concentrated log-likelihood function, which depends only on $\sigma^2$:

(10.2.80) $\log L^* = -\frac{T}{2} \log 2\pi - \frac{T}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \sum_{t=1}^{T} \hat{u}_t^2,$

where $\hat{u}_t = y_t - \hat{\alpha} - \hat{\beta} x_t$ denotes the least squares residual.


Differentiating (10.2.80) with respect to $\sigma^2$ and equating the derivative to zero yields

(10.2.81) $\frac{\partial \log L^*}{\partial \sigma^2} = -\frac{T}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{t=1}^{T} \hat{u}_t^2 = 0.$

Solving (10.2.81) for $\sigma^2$ yields the maximum likelihood estimator $\hat{\sigma}^2 = T^{-1} \sum_{t=1}^{T} \hat{u}_t^2$, which is identical to the least squares estimator $\hat{\sigma}^2$. These results constitute a generalization of the results in Example 7.3.3.
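As a numerical illustration (not part of the text), the following sketch maximizes the log-likelihood (10.2.79) directly over $(\alpha, \beta, \sigma^2)$ on simulated data and compares the result with the closed-form least squares estimators; the data-generating values and the use of `scipy.optimize.minimize` are our own choices for the check.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical simulated data with true alpha = 1, beta = 2, sigma = 0.5
rng = np.random.default_rng(0)
T = 200
x = rng.normal(size=T)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=T)

# Least squares estimators in closed form
beta_ls = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_ls = y.mean() - beta_ls * x.mean()
u_hat = y - alpha_ls - beta_ls * x
sigma2_ls = np.mean(u_hat ** 2)  # T^{-1} * sum of squared residuals

def neg_log_lik(params):
    """Negative of the log-likelihood (10.2.79), with sigma^2 = exp(log_s2)
    to keep the variance positive during unconstrained optimization."""
    a, b, log_s2 = params
    s2 = np.exp(log_s2)
    r = y - a - b * x
    return 0.5 * T * np.log(2 * np.pi) + 0.5 * T * np.log(s2) + np.sum(r ** 2) / (2 * s2)

res = minimize(neg_log_lik, x0=np.zeros(3), method="BFGS")
alpha_ml, beta_ml, sigma2_ml = res.x[0], res.x[1], np.exp(res.x[2])

print("least squares:", alpha_ls, beta_ls, sigma2_ls)
print("max likelihood:", alpha_ml, beta_ml, sigma2_ml)
```

The two sets of estimates coincide up to optimizer tolerance, as the derivation predicts.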

In Section 12.2.5 we shall show that the least squares estimators $\hat{\alpha}$ and $\hat{\beta}$ are best unbiased if $\{u_t\}$ are normal.
