First- and Second-Order Conditions

The following conditions guarantee that the first- and second-order conditions for a maximum hold.

Assumption 8.1: The parameter space $\Theta$ is convex and $\theta_0$ is an interior point of $\Theta$. The likelihood function $L_n(\theta)$ is, with probability 1, twice continuously differentiable in an open neighborhood $\Theta_0$ of $\theta_0$, and, for $i_1, i_2 = 1, 2, 3, \ldots, m$,

$$E\left[\sup_{\theta\in\Theta_0}\left|\frac{\partial^2 L_n(\theta)}{\partial\theta_{i_1}\,\partial\theta_{i_2}}\right|\right] < \infty \tag{8.21}$$

and

$$E\left[\sup_{\theta\in\Theta_0}\left|\frac{\partial^2 \ln(L_n(\theta))}{\partial\theta_{i_1}\,\partial\theta_{i_2}}\right|\right] < \infty. \tag{8.22}$$
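For example, if $z_1, \ldots, z_n$ is a random sample from the $N(\theta_0, 1)$ distribution, then $\ln(L_n(\theta)) = -\tfrac{n}{2}\ln(2\pi) - \tfrac{1}{2}\sum_{j=1}^n (z_j-\theta)^2$, so that

$$\frac{\partial^2 \ln(L_n(\theta))}{\partial\theta^2} = -n, \qquad \frac{\partial^2 L_n(\theta)}{\partial\theta^2} = L_n(\theta)\left[\Big(\sum_{j=1}^n (z_j-\theta)\Big)^2 - n\right].$$

Because $L_n(\theta) \le (2\pi)^{-n/2}$ and $\big(\sum_{j=1}^n (z_j-\theta)\big)^2 \le 2\big(\sum_{j=1}^n (z_j-\theta_0)\big)^2 + 2n^2(\theta-\theta_0)^2$, the suprema over a bounded open neighborhood $\Theta_0$ of $\theta_0$ have finite expectations, and thus conditions (8.21) and (8.22) hold in this case.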

Theorem 8.2: Under Assumption 8.1,

$$E\left[\left.\frac{\partial \ln(L_n(\theta))}{\partial\theta^{\mathrm{T}}}\right|_{\theta=\theta_0}\right] = 0 \quad\text{and}\quad E\left[\left.\frac{\partial^2 \ln(L_n(\theta))}{\partial\theta\,\partial\theta^{\mathrm{T}}}\right|_{\theta=\theta_0}\right] = -\mathrm{Var}\left(\left.\frac{\partial \ln(L_n(\theta))}{\partial\theta^{\mathrm{T}}}\right|_{\theta=\theta_0}\right).$$

Proof: For notational convenience I will prove this theorem for the univariate parameter case m = 1 only. Moreover, I will focus on the case that $Z = (z_1^{\mathrm{T}}, \ldots, z_n^{\mathrm{T}})^{\mathrm{T}}$ is a random sample from an absolutely continuous distribution with density $f(z|\theta_0)$.

Observe that

$$E[\ln(L_n(\theta))/n] = \frac{1}{n}\sum_{j=1}^{n} E[\ln(f(z_j|\theta))] = \int \ln(f(z|\theta))\, f(z|\theta_0)\,dz. \tag{8.23}$$

It follows from Taylor's theorem that, for $\theta \in \Theta_0$ and every $\delta \neq 0$ for which $\theta + \delta \in \Theta_0$, there exists a $\lambda(z,\delta) \in [0,1]$ such that

$$\ln(f(z|\theta+\delta)) - \ln(f(z|\theta)) = \delta\,\frac{d\ln(f(z|\theta))}{d\theta} + \frac{1}{2}\,\delta^2\,\frac{d^2\ln(f(z|\theta+\lambda(z,\delta)\delta))}{(d(\theta+\lambda(z,\delta)\delta))^2}. \tag{8.24}$$

Note that, by the convexity of $\Theta$, $\theta_0 + \lambda(z,\delta)\delta \in \Theta$. Therefore, it follows from condition (8.22), the definition of a derivative, and the dominated convergence theorem that

$$\frac{d}{d\theta}\int \ln(f(z|\theta))\, f(z|\theta_0)\,dz = \int \frac{d\ln(f(z|\theta))}{d\theta}\, f(z|\theta_0)\,dz. \tag{8.25}$$

Similarly, it follows from condition (8.21), Taylor’s theorem, and the dominated convergence theorem that

$$\int \frac{d f(z|\theta)}{d\theta}\,dz = \frac{d}{d\theta}\int f(z|\theta)\,dz = \frac{d\,1}{d\theta} = 0. \tag{8.26}$$

Moreover, because $\left.\frac{d\ln(f(z|\theta))}{d\theta}\right|_{\theta=\theta_0} f(z|\theta_0) = \left.\frac{d f(z|\theta)}{d\theta}\right|_{\theta=\theta_0}$,

$$\int \left.\frac{d\ln(f(z|\theta))}{d\theta}\right|_{\theta=\theta_0} f(z|\theta_0)\,dz = \int \left.\frac{d f(z|\theta)}{d\theta}\right|_{\theta=\theta_0} dz = 0. \tag{8.27}$$

The first part of Theorem 8.2 now follows from (8.23) through (8.27).

As to the second part of the theorem, observe that

$$\frac{d^2\ln(f(z|\theta))}{d\theta^2} = \frac{d^2 f(z|\theta)/d\theta^2}{f(z|\theta)} - \left(\frac{d\ln(f(z|\theta))}{d\theta}\right)^2. \tag{8.28}$$

As is the case for (8.25) and (8.26), it follows from the mean value theorem and conditions (8.21) and (8.22) that

$$\int \left.\frac{d^2 f(z|\theta)}{d\theta^2}\right|_{\theta=\theta_0} dz = \left.\frac{d^2}{d\theta^2}\int f(z|\theta)\,dz\,\right|_{\theta=\theta_0} = 0 \tag{8.29}$$

and

$$\int \left.\frac{d^2\ln(f(z|\theta))}{d\theta^2}\right|_{\theta=\theta_0} f(z|\theta_0)\,dz = \int \left.\frac{d^2 f(z|\theta)}{d\theta^2}\right|_{\theta=\theta_0} dz - \int \left(\left.\frac{d\ln(f(z|\theta))}{d\theta}\right|_{\theta=\theta_0}\right)^2 f(z|\theta_0)\,dz = -\int \left(\left.\frac{d\ln(f(z|\theta))}{d\theta}\right|_{\theta=\theta_0}\right)^2 f(z|\theta_0)\,dz,$$

which proves the second part of the theorem.

The adaptation of the proof to the general case is reasonably straightforward and is therefore left as an exercise. Q.E.D.
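To make Theorem 8.2 concrete, consider again a random sample $z_1, \ldots, z_n$ from the $N(\theta_0, 1)$ distribution. Then $\partial\ln(L_n(\theta))/\partial\theta = \sum_{j=1}^n (z_j - \theta)$, which has expectation zero at $\theta = \theta_0$, whereas $\partial^2\ln(L_n(\theta))/\partial\theta^2 = -n$ and $\mathrm{Var}\big(\sum_{j=1}^n (z_j - \theta_0)\big) = n$. Hence

$$E\left[\left.\frac{\partial^2\ln(L_n(\theta))}{\partial\theta^2}\right|_{\theta=\theta_0}\right] = -n = -\mathrm{Var}\left(\left.\frac{\partial\ln(L_n(\theta))}{\partial\theta}\right|_{\theta=\theta_0}\right),$$

in accordance with the theorem.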

The matrix

$$H = \mathrm{Var}\left(\left.\frac{\partial \ln(L_n(\theta))}{\partial\theta^{\mathrm{T}}}\right|_{\theta=\theta_0}\right) \tag{8.30}$$

is called the Fisher information matrix. As we have seen in Chapter 5, the inverse of the Fisher information matrix is just the Cramér-Rao lower bound of the variance matrix of an unbiased estimator of $\theta_0$.
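In the $N(\theta_0, 1)$ example above, $H = \mathrm{Var}\big(\sum_{j=1}^n (z_j - \theta_0)\big) = n$, so the Cramér-Rao lower bound is $H^{-1} = 1/n$. This bound is attained by the sample mean $\bar{z} = n^{-1}\sum_{j=1}^n z_j$, which is an unbiased estimator of $\theta_0$ with variance $1/n$.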
