Category: INTRODUCTION TO STATISTICS AND ECONOMETRICS

Asymptotic Normality

THEOREM 7.4.3 Let the likelihood function be L(X_1, X_2, . . . , X_n | θ). Then, under general conditions, the maximum likelihood estimator θ̂ is asymptotically distributed as

(7.4.16)   \hat{\theta} \sim N\!\left(\theta,\; -\left[E\,\frac{\partial^2 \log L}{\partial \theta^2}\right]^{-1}\right)

(Here we interpret the maximum likelihood estimator as a solution to the likelihood equation obtained by equating the derivative to zero, rather than the global maximum likelihood estimator. Since the asymptotic normality can be proved only for this local maximum likelihood estimator, henceforth this is always what we mean by the maximum likelihood estimator.)

Sketch of Proof. By definition, ∂ log L/∂θ evaluated at θ̂ is zero. We expand it in a Taylor series around θ₀ to obtain

(7.4.17)   0 = \left.\frac{\partial \log L}{\partial \theta}\right|_{\hat{\theta}} = \left.\frac{\partial \log L}{\partial \theta}\right|_{\theta_0} + \left.\frac{\partial^2 \log L}{\partial \theta^2}\right|_{\theta^*}\,(\hat{\theta} - \theta_0),

where θ* lies between θ̂ and θ₀...
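To see (7.4.16) at work, here is a minimal simulation sketch in Python, assuming an exponential density with rate θ (a choice made only for illustration, not taken from the text). For that model the likelihood equation gives θ̂ = 1/X̄ and −E ∂² log L/∂θ² = n/θ², so the theorem suggests θ̂ is approximately N(θ, θ²/n) for large n.

    import numpy as np

    rng = np.random.default_rng(0)
    theta0, n, reps = 2.0, 500, 5000        # true rate, sample size, replications (illustrative)

    # For an exponential density with rate theta, the likelihood equation gives
    # theta_hat = 1 / sample mean, and -E d^2 log L / d theta^2 = n / theta^2,
    # so (7.4.16) suggests theta_hat is roughly N(theta0, theta0**2 / n) for large n.
    mles = np.array([1.0 / rng.exponential(scale=1.0 / theta0, size=n).mean()
                     for _ in range(reps)])

    print("simulated mean of theta_hat:", mles.mean())   # close to theta0
    print("simulated variance:        ", mles.var())     # close to theta0**2 / n
    print("asymptotic variance:       ", theta0**2 / n)

Across replications the simulated variance of θ̂ should be close to θ²/n, in line with the theorem.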


Serial Correlation

In this section we allow a nonzero correlation between u_t and u_s for s ≠ t in the model (12.1.1). Correlation between the values of a time series at different periods is called serial correlation or autocorrelation. It can be specified in infinitely many ways; here we consider one particular form of serial correlation, associated with the stationary first-order autoregressive model. It is defined by

(13.1.15)   u_t = ρu_{t−1} + ε_t,   t = 1, 2, . . . , T,

where {ε_t} are i.i.d. with Eε_t = 0 and Vε_t = σ², and u_0 is independent of ε_1, ε_2, . . . , ε_T with Eu_0 = 0 and Vu_0 = σ²/(1 − ρ²).

Taking the expectation of both sides of (13.1.15) for t = 1 and using our assumptions, we see that Eu_1 = ρEu_0 + Eε_1 = 0. Repeating the same procedure for t = 2, 3, . . . , T, we conclude that

(13.1...
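The following is a short simulation sketch of (13.1.15), with illustrative values ρ = 0.8 and σ² = 1 that are not from the text: drawing u_0 with variance σ²/(1 − ρ²) makes the series stationary, so the sample moments should match Eu_t = 0, Vu_t = σ²/(1 − ρ²), and a lag-one correlation near ρ.

    import numpy as np

    rng = np.random.default_rng(1)
    rho, sigma2, T = 0.8, 1.0, 10000        # illustrative values, not from the text

    # Draw u_0 with variance sigma^2 / (1 - rho^2) so the process (13.1.15) is stationary.
    u = np.empty(T + 1)
    u[0] = rng.normal(0.0, np.sqrt(sigma2 / (1.0 - rho**2)))
    eps = rng.normal(0.0, np.sqrt(sigma2), size=T)
    for t in range(1, T + 1):
        u[t] = rho * u[t - 1] + eps[t - 1]

    print("sample mean:          ", u.mean())          # near 0, as Eu_t = 0
    print("sample variance:      ", u.var())           # near sigma2 / (1 - rho^2)
    print("lag-1 autocorrelation:", np.corrcoef(u[:-1], u[1:])[0, 1])   # near rho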


LEAST SQUARES ESTIMATORS

10.2.1 Definition

In this section we study the estimation of the parameters α, β, and σ² in the bivariate linear regression model (10.1.1). We first consider estimating α and β. The T observations on y and x can be plotted in a so-called scatter diagram, as in Figure 10.1. In that figure each dot represents a vector of observations on y and x. We have labeled one dot as the vector (y_t, x_t). We have also drawn a straight line through the scattered dots and labeled the point of intersection between the line and the dashed perpendicular line that goes through (y_t, x_t) as (ŷ_t, x_t). Then the problem of estimating α and β can be geometrically interpreted as the problem of drawing a straight line such that its slope is an estimate of β and its intercept is an estimate of α.

Since Eu_t = 0, a re...
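Since the excerpt gives only the geometric interpretation, the following sketch simply applies the familiar closed-form least squares estimates of α and β to simulated data; the true values (α = 1.5, β = 0.7) and sample size are hypothetical choices made for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    T = 100
    x = rng.uniform(0.0, 10.0, size=T)
    y = 1.5 + 0.7 * x + rng.normal(0.0, 1.0, size=T)     # hypothetical alpha = 1.5, beta = 0.7

    # Least squares estimates for y_t = alpha + beta * x_t + u_t:
    # the slope and intercept of the line that minimizes the sum of squared residuals.
    beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    alpha_hat = y.mean() - beta_hat * x.mean()

    print("alpha_hat:", alpha_hat)   # estimate of the intercept alpha
    print("beta_hat: ", beta_hat)    # estimate of the slope beta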


Estimators in General

We may sometimes want to estimate a parameter of a distribution other than a moment. An example is the probability (p) that the ace will turn up in a roll of a die. A "natural" estimator in this case is the ratio of the number of times the ace appears in n rolls to n—denote it by p̂. In general, we estimate a parameter θ by some function of the sample. Mathematically we express it as

(7.1.1)   θ̂ = φ(X_1, X_2, . . . , X_n).

We call any function of a sample a statistic. Thus an estimator is a statistic used to estimate a parameter. Note that an estimator is a random variable. Its observed value is called an estimate.

The p̂ just defined can be expressed as a function of the sample. Let X_i be the outcome of the ith roll of a die and define Y_i = 1 if X_i = 1 and Y_i = 0 otherwise...
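Here is a small sketch of (7.1.1) for the die example: the estimator p̂ is the statistic that averages the indicators Y_i over n simulated rolls (the sample size n = 600 is an arbitrary choice made for illustration).

    import numpy as np

    rng = np.random.default_rng(3)
    n = 600
    X = rng.integers(1, 7, size=n)       # outcomes of n rolls of a fair die
    Y = (X == 1).astype(float)           # Y_i = 1 if the ace turns up, 0 otherwise

    # The estimator is a statistic, i.e. a function of the sample as in (7.1.1):
    # here phi is simply the sample mean of the Y_i.
    p_hat = Y.mean()
    print("p_hat:", p_hat)               # an estimate of p; near 1/6 for a fair die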


DETERMINANTS AND INVERSES

Throughout this section, all the matrices are square and n × n.

Before we give a formal definition of the determinant of a square matrix, let us give some examples. The determinant of a 1 × 1 matrix, or a scalar, is the scalar itself. Consider a 2 × 2 matrix

A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}.

Its determinant, denoted by |A| or det A, is defined by

(11.3.1)   |A| = a_{11}a_{22} − a_{21}a_{12}.

The determinant of a 3 × 3 matrix

A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}

is given by

(11.3.2)   |A| = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} − a_{21} \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} + a_{31} \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}

= a_{11}a_{22}a_{33} − a_{11}a_{32}a_{23} − a_{21}a_{12}a_{33} + a_{21}a_{32}a_{13} + a_{31}a_{12}a_{23} − a_{31}a_{22}a_{13}.

Now we present a formal definition, given inductively on the assumption that the determinant of an (n − 1) × (n − 1) matrix has already been defined.

DEFINITION 11...
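As a sketch of the inductive idea, the function below computes a determinant by cofactor expansion along the first column, in the spirit of (11.3.2). It is an illustration written for clarity rather than efficiency, not the formal definition quoted in the text.

    def det(A):
        """Determinant by cofactor expansion along the first column."""
        n = len(A)
        if n == 1:                       # the determinant of a 1 x 1 matrix is the scalar itself
            return A[0][0]
        total = 0.0
        for i in range(n):
            # Minor: delete row i and column 0, then expand with alternating signs.
            minor = [row[1:] for k, row in enumerate(A) if k != i]
            total += (-1) ** i * A[i][0] * det(minor)
        return total

    print(det([[1.0, 2.0], [3.0, 4.0]]))                                   # 1*4 - 3*2 = -2
    print(det([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]]))       # evaluates to -3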


CONFIDENCE INTERVALS

We shall assume that confidence is a number between 0 and 1 and use it in statements such as "a parameter θ lies in the interval [a, b] with 0.95 confidence," or, equivalently, "a 0.95 confidence interval for θ is [a, b]." A confidence interval is constructed using some estimator of the parameter in question. Although some textbooks define it in a more general way, we shall define a confidence interval mainly when the estimator used to construct it is either normal or asymptotically normal. This restriction is not a serious one, because most reasonable estimators are at least asymptotically normal. (An exception occurs in Example 8.2.5, where a chi-square distribution is used to construct a confidence interval concerning a variance...
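A minimal sketch of a 0.95 confidence interval built from an asymptotically normal estimator, here the sample mean with an estimated standard error; the sample and the population values (mean 5, standard deviation 2) are hypothetical choices made for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 400
    X = rng.normal(5.0, 2.0, size=n)     # hypothetical sample: true mean 5, standard deviation 2

    # Asymptotically normal estimator (the sample mean) and its estimated standard error.
    theta_hat = X.mean()
    se = X.std(ddof=1) / np.sqrt(n)

    # A 0.95 confidence interval: estimator plus or minus 1.96 standard errors.
    lower, upper = theta_hat - 1.96 * se, theta_hat + 1.96 * se
    print("0.95 confidence interval for the mean: [%.3f, %.3f]" % (lower, upper))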
