Continuous Sample
For the continuous case, the principle of the maximum likelihood estimator is essentially the same as for the discrete case, and we need to modify Definition 7.3.1 only slightly.
DEFINITION 7.3.2 Let $(X_1, X_2, \ldots, X_n)$ be a random sample on a continuous population with a density function $f(\cdot \mid \theta)$, where $\theta = (\theta_1, \theta_2, \ldots, \theta_K)$, and let $x_i$ be the observed value of $X_i$. Then we call $L = \prod_{i=1}^{n} f(x_i \mid \theta)$ the likelihood function of $\theta$ given $(x_1, x_2, \ldots, x_n)$, and the value of $\theta$ that maximizes $L$ the maximum likelihood estimator.
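The definition can be made concrete in code: the likelihood is simply the product of density values evaluated at the observed sample. A minimal sketch in Python, assuming a normal density; the sample values below are illustrative and not from the text:

```python
import math

def normal_pdf(x, mu, sigma2):
    """Density of N(mu, sigma2) evaluated at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def likelihood(xs, mu, sigma2):
    """L = product over i of f(x_i | theta), per Definition 7.3.2."""
    L = 1.0
    for x in xs:
        L *= normal_pdf(x, mu, sigma2)
    return L

xs = [1.2, 0.8, 1.5, 0.9, 1.1]  # illustrative observed values
# The likelihood is larger at parameter values that fit the data better:
print(likelihood(xs, 1.1, 0.1) > likelihood(xs, 3.0, 0.1))  # True
```

In practice one works with the log likelihood instead, since a product of many small densities underflows quickly; the two have the same maximizer.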
EXAMPLE 7.3.3 Let $\{X_i\}$, $i = 1, 2, \ldots, n$, be a random sample on $N(\mu, \sigma^2)$ and let $\{x_i\}$ be their observed values. Then the likelihood function is given by

(7.3.13) $L = (2\pi\sigma^2)^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2\right]$

so that

(7.3.14) $\log L = -\frac{n}{2} \log(2\pi) - \frac{n}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2.$
Equating the derivatives to zero, we obtain

(7.3.15) $\frac{\partial \log L}{\partial \mu} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu) = 0$

and

(7.3.16) $\frac{\partial \log L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4} \sum_{i=1}^{n} (x_i - \mu)^2 = 0.$
The maximum likelihood estimators of $\mu$ and $\sigma^2$, denoted $\hat{\mu}$ and $\hat{\sigma}^2$, are obtained by solving (7.3.15) and (7.3.16). (Do they indeed give a maximum?) Therefore we have
(7.3.17) $\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i = \bar{x}$

and

(7.3.18) $\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2.$
They are the sample mean and the sample variance, respectively.
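The closed-form solutions (7.3.17) and (7.3.18) are straightforward to compute. A minimal sketch in Python (the sample values are illustrative); note that the MLE of the variance divides by $n$, not $n - 1$:

```python
def mle_normal(xs):
    """Closed-form MLE for a normal sample, per (7.3.17)-(7.3.18):
    the sample mean and the sample variance (dividing by n, not n - 1)."""
    n = len(xs)
    mu_hat = sum(xs) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n
    return mu_hat, sigma2_hat

xs = [1.2, 0.8, 1.5, 0.9, 1.1]  # illustrative observed values
mu_hat, sigma2_hat = mle_normal(xs)
print(mu_hat, sigma2_hat)  # 1.1 and 0.06 (up to rounding)
```

Perturbing either estimate and re-evaluating the log likelihood (7.3.14) confirms numerically that the pair $(\hat{\mu}, \hat{\sigma}^2)$ does give a maximum, answering the parenthetical question above.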
In all the examples of the maximum likelihood estimator in the preceding sections, it has been possible to solve the likelihood equation explicitly by equating the derivative of the log likelihood to zero, as in (7.3.3). Often, however, the likelihood equation is so highly nonlinear in the parameters that it can be solved only by some method of iteration.
The most common method of iteration is the Newton-Raphson method, which can be used to maximize or minimize a general function, not just the likelihood function, and is based on a quadratic approximation of the maximand or minimand. Let $Q(\theta)$ be the function we want to maximize (or minimize). Its quadratic Taylor expansion around an initial value $\theta_1$ is given by

(7.3.19) $Q(\theta) \cong Q(\theta_1) + Q'(\theta - \theta_1) + \frac{1}{2} Q''(\theta - \theta_1)^2,$
where the derivatives are evaluated at $\theta_1$. The second-round estimator of the iteration, denoted $\theta_2$, is the value of $\theta$ that maximizes the right-hand side of the above equation. Therefore,
(7.3.20) $\theta_2 = \theta_1 - \frac{Q'(\theta_1)}{Q''(\theta_1)}.$
Next, $\theta_2$ can be used as the initial value to compute the third-round estimator, and the iteration is repeated until it converges. Whether the iteration converges to the global maximum, rather than to some other stationary point, and, if it does, how fast it converges depend on the shape of $Q$ and on the initial value. Various modifications have been proposed to improve convergence.
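The iteration described above can be sketched in a few lines of Python. The update rule is exactly (7.3.20), applied repeatedly; the objective below is an illustrative function, not a likelihood from the text:

```python
def newton_raphson(dQ, d2Q, theta, tol=1e-10, max_iter=200):
    """Maximize Q by iterating theta_new = theta - Q'(theta)/Q''(theta),
    the update in (7.3.20), until the step size falls below tol."""
    for _ in range(max_iter):
        step = dQ(theta) / d2Q(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Illustrative objective: Q(theta) = -(theta - 2)**4, maximized at theta = 2.
# Its derivatives are Q'(theta) = -4(theta-2)**3 and Q''(theta) = -12(theta-2)**2.
theta_hat = newton_raphson(lambda t: -4 * (t - 2) ** 3,
                           lambda t: -12 * (t - 2) ** 2,
                           theta=0.0)
print(theta_hat)  # close to 2.0
```

Because this $Q$ is not quadratic, the iteration takes many rounds rather than one, illustrating why the choice of initial value and the shape of $Q$ govern the speed of convergence; a quadratic $Q$ would converge in a single step.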