Continuous Sample

For the continuous case, the principle of the maximum likelihood estimator is essentially the same as for the discrete case, and we need to modify Definition 7.3.1 only slightly.

DEFINITION 7.3.2 Let $(X_1, X_2, \ldots, X_n)$ be a random sample on a continuous population with a density function $f(\cdot \mid \theta)$, where $\theta = (\theta_1, \theta_2, \ldots, \theta_K)$, and let $x_i$ be the observed value of $X_i$. Then we call $L = \prod_{i=1}^{n} f(x_i \mid \theta)$ the likelihood function of $\theta$ given $(x_1, x_2, \ldots, x_n)$, and the value of $\theta$ that maximizes $L$ the maximum likelihood estimator.
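For illustration, here is a minimal Python sketch of Definition 7.3.2 for a normal density; the data vector x and the function names likelihood and log_likelihood are hypothetical choices, not notation from the text. It also shows why computations are usually carried out with $\log L$: a product of many small densities can underflow, while a sum of log densities does not.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical observed sample (x_1, ..., x_n); purely illustrative values.
x = np.array([1.2, 0.7, 2.1, 1.5, 0.9])

def likelihood(mu, sigma2, x):
    # L = prod_{i=1}^{n} f(x_i | theta), with f the N(mu, sigma^2) density.
    return np.prod(norm.pdf(x, loc=mu, scale=np.sqrt(sigma2)))

def log_likelihood(mu, sigma2, x):
    # log L: the sum of log densities, numerically stable for large n.
    return np.sum(norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)))

print(likelihood(1.0, 1.0, x), log_likelihood(1.0, 1.0, x))
```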

EXAMPLE 7.3.3 Let $\{X_i\}$, $i = 1, 2, \ldots, n$, be a random sample on $N(\mu, \sigma^2)$ and let $\{x_i\}$ be their observed values. Then the likelihood function is given by

(7.3.13)   $L = \prod_{i=1}^{n} (2\pi\sigma^2)^{-1/2} \exp\left[-\frac{(x_i - \mu)^2}{2\sigma^2}\right]$,

so that

(7.3.14)   $\log L = -\frac{n}{2} \log(2\pi) - \frac{n}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2$.

Equating the derivatives to zero, we obtain

(7.3.15)   $\frac{\partial \log L}{\partial \mu} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu) = 0$

and

(7.3.16)   $\frac{\partial \log L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{i=1}^{n} (x_i - \mu)^2 = 0$.

The maximum likelihood estimators of $\mu$ and $\sigma^2$, denoted $\hat{\mu}$ and $\hat{\sigma}^2$, are obtained by solving (7.3.15) and (7.3.16). (Do they indeed give a maximum?) Therefore we have

(7.3.17)   $\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i = \bar{x}$

and

(7.3.18)   $\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2$.

They are the sample mean and the sample variance, respectively.
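As a quick numerical check of (7.3.17) and (7.3.18), the following sketch computes $\hat{\mu}$ and $\hat{\sigma}^2$ on simulated data; the seed and parameter values are arbitrary choices for illustration. Note that the MLE $\hat{\sigma}^2$ divides by $n$, not $n - 1$, which is what np.var computes by default (ddof=0).

```python
import numpy as np

# Simulated N(mu, sigma^2) sample; mu = 2.0 and sigma = 1.5 are arbitrary.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1000)

# (7.3.17) and (7.3.18): the MLEs are the sample mean and sample variance.
mu_hat = x.mean()
sigma2_hat = np.mean((x - mu_hat) ** 2)   # divide by n, not n - 1

# np.var with its default ddof=0 gives the same n-denominator variance.
assert np.isclose(sigma2_hat, x.var())
print(mu_hat, sigma2_hat)
```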

7.3.2 Computation

In all the examples of the maximum likelihood estimator in the preceding sections, it has been possible to solve the likelihood equation, obtained by equating the derivative of the log likelihood to zero as in (7.3.3), explicitly. Often, however, the likelihood equation is so highly nonlinear in the parameters that it can be solved only by some method of iteration.

The most common method of iteration is the Newton-Raphson method, which can be used to maximize or minimize a general function, not just the likelihood function, and is based on a quadratic approximation of the maximand or minimand. Let $Q(\theta)$ be the function we want to maximize (or minimize). Its quadratic Taylor expansion around an initial value $\theta_1$ is given by

(7.3.19)   $Q(\theta) \cong Q(\theta_1) + \frac{\partial Q}{\partial \theta}(\theta - \theta_1) + \frac{1}{2} \frac{\partial^2 Q}{\partial \theta^2} (\theta - \theta_1)^2$,

where the derivatives are evaluated at $\theta_1$. The second-round estimator of the iteration, denoted $\theta_2$, is the value of $\theta$ that maximizes the right-hand side of the above equation. Therefore,

(7.3.20)   $\theta_2 = \theta_1 - \left(\frac{\partial^2 Q}{\partial \theta^2}\right)^{-1} \frac{\partial Q}{\partial \theta}$.

Next, $\theta_2$ can be used as the initial value to compute the third-round estimator, and the iteration should be repeated until it converges. Whether the iteration will converge to the global maximum, rather than to some other stationary point, and, if it does, how fast it converges depend on the shape of $Q$ and on the initial value. Various modifications have been proposed to improve the convergence.
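As a sketch of how (7.3.20) is applied in practice, the following Python code iterates the Newton-Raphson update for the location parameter of a Cauchy sample, a standard case in which the likelihood equation is nonlinear and has no closed-form solution. The data, the function names, and the choice of the sample median as $\theta_1$ are illustrative assumptions rather than material from the text; as noted above, convergence depends on the shape of $Q$ and on the initial value.

```python
import numpy as np

def newton_raphson(grad, hess, theta, tol=1e-10, max_iter=100):
    # Iterate (7.3.20): theta <- theta - (d^2Q/dtheta^2)^(-1) (dQ/dtheta),
    # stopping when the step is negligible or max_iter is reached.
    for _ in range(max_iter):
        step = grad(theta) / hess(theta)
        theta = theta - step
        if abs(step) < tol:
            break
    return theta

# Illustrative sample assumed to come from a Cauchy(m, 1) population.
x = np.array([-0.8, 0.3, 1.1, 1.9, 2.4, 8.0])

def grad(m):
    # dQ/dm for Q = log L of the Cauchy(m, 1) density.
    return np.sum(2 * (x - m) / (1 + (x - m) ** 2))

def hess(m):
    # d^2Q/dm^2 for the same log likelihood.
    u = (x - m) ** 2
    return np.sum(2 * (u - 1) / (1 + u) ** 2)

# The sample median is a common, robust choice of the initial value theta_1.
m_hat = newton_raphson(grad, hess, theta=np.median(x))
print(m_hat)
```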
