DEFINITION 1 (Chi-square Distribution) Let $\{Z_i\}$, $i = 1, 2, \ldots, n$, be i.i.d. as $N(0, 1)$. Then the distribution of $\sum_{i=1}^{n} Z_i^2$ is called the chi-square distribution with $n$ degrees of freedom and is denoted by $\chi_n^2$.

THEOREM 1 If $X \sim \chi_n^2$ and $Y \sim \chi_m^2$, and if $X$ and $Y$ are independent, then $X + Y \sim \chi_{n+m}^2$.

THEOREM 2 If $X \sim \chi_n^2$, then $EX = n$ and $VX = 2n$.

THEOREM 3 Let $\{X_i\}$ be i.i.d. as $N(\mu, \sigma^2)$, $i = 1, 2, \ldots, n$. Define $\bar{X}_n = n^{-1} \sum_{i=1}^{n} X_i$. Then

$$\sum_{i=1}^{n} \frac{(X_i - \bar{X}_n)^2}{\sigma^2} \sim \chi_{n-1}^2.$$



Proof. Define $Z_i = (X_i - \mu)/\sigma$. Then $Z_i \sim N(0, 1)$ and

(1) $\quad \displaystyle\sum_{i=1}^{n} \frac{(X_i - \bar{X}_n)^2}{\sigma^2} = \sum_{i=1}^{n} (Z_i - \bar{Z}_n)^2.$

First, consider $n = 2$. We have

(2) $\quad \displaystyle\sum_{i=1}^{2} (Z_i - \bar{Z}_2)^2 = \left[ \frac{Z_1 - Z_2}{\sqrt{2}} \right]^2.$

But since $(Z_1 - Z_2)/\sqrt{2} \sim N(0, 1)$, the right-hand side of (2) is $\chi_1^2$ by Definition 1. Therefore, the theorem is true for $n = 2$. Second, assume it is true for $n$ and consider $n + 1$. We have

(3) $\quad \displaystyle\sum_{i=1}^{n+1} (Z_i - \bar{Z}_{n+1})^2 = \sum_{i=1}^{n} (Z_i - \bar{Z}_n)^2 + \frac{n}{n+1} (Z_{n+1} - \bar{Z}_n)^2 \ldots$

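As an illustrative aside (not part of the original text), a short Monte Carlo check of Theorems 2 and 3 can be sketched in Python; the sample size, replication count, and parameter values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 100_000  # arbitrary sample size and number of replications

# Theorem 2: if X ~ chi-square with n degrees of freedom, EX = n and VX = 2n.
Z = rng.standard_normal((reps, n))
chi2 = (Z ** 2).sum(axis=1)        # sum of n squared N(0, 1) variables
print(chi2.mean(), chi2.var())     # should be close to n and 2n

# Theorem 3: sum of (X_i - Xbar_n)^2 / sigma^2 ~ chi-square with n - 1 d.f.
mu, sigma = 5.0, 2.0               # arbitrary population parameters
X = rng.normal(mu, sigma, (reps, n))
stat = ((X - X.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / sigma ** 2
print(stat.mean(), stat.var())     # should be close to n - 1 and 2(n - 1)
```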

Sample Moments

In Chapter 4 we defined population moments of various kinds. Here we shall define the corresponding sample moments. Sample moments are “natural” estimators of the corresponding population moments. We define

Sample mean $\quad \bar{X} = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} X_i.$

Sample variance $\quad S^2 = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} (X_i - \bar{X})^2 = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} X_i^2 - (\bar{X})^2.$

Sample $k$th moment around zero $\quad \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} X_i^k.$

Sample $k$th moment around the mean $\quad \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} (X_i - \bar{X})^k.$


If $(X_i, Y_i)$, $i = 1, 2, \ldots, n$, are mutually independent in the sense of Definition 3.5.4 and have the same distribution as $(X, Y)$, we call $\{(X_i, Y_i)\}$ a bivariate sample of size $n$ on a bivariate population $(X, Y)$. We define

Sample covariance $\quad \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y}) = \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} X_i Y_i - \bar{X}\bar{Y}.$

Sample correlation $\quad \dfrac{\text{Sample covariance}}{S_X S_Y},$ where $S_X$ and $S_Y$ denote the sample standard deviations of $X$ and $Y$.

The observed values of the sample moments are also called by the same names...
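As a concrete illustration (added here, with made-up data), the definitions above translate directly into NumPy, keeping the divisor $n$ used throughout:

```python
import numpy as np

# A made-up bivariate sample of size n.
X = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
Y = np.array([0.5, 1.5, 3.0, 6.5, 9.5])
n = len(X)

mean_X = X.sum() / n                      # sample mean
var_X = ((X - mean_X) ** 2).sum() / n     # sample variance (divisor n)
k = 3
mom_zero = (X ** k).sum() / n             # sample kth moment around zero
mom_mean = ((X - mean_X) ** k).sum() / n  # sample kth moment around the mean

mean_Y = Y.sum() / n
var_Y = ((Y - mean_Y) ** 2).sum() / n
cov_XY = ((X - mean_X) * (Y - mean_Y)).sum() / n  # sample covariance
corr_XY = cov_XY / np.sqrt(var_X * var_Y)         # sample correlation
print(mean_X, var_X, mom_zero, mom_mean, cov_XY, corr_XY)
```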



Equality. If $A$ and $B$ are matrices of the same size and $A = \{a_{ij}\}$ and $B = \{b_{ij}\}$, then we write $A = B$ if and only if $a_{ij} = b_{ij}$ for every $i$ and $j$.

Addition or subtraction. If $A$ and $B$ are matrices of the same size and $A = \{a_{ij}\}$ and $B = \{b_{ij}\}$, then $A \pm B$ is a matrix of the same size as $A$ and $B$ whose $i,j$th element is equal to $a_{ij} \pm b_{ij}$. For example, we have

$$A \pm B = \begin{bmatrix} a_{11} \pm b_{11} & a_{12} \pm b_{12} \\ a_{21} \pm b_{21} & a_{22} \pm b_{22} \end{bmatrix}.$$

Scalar multiplication. Let $A$ be as in (11.1.1) and let $c$ be a scalar (that is, a real number). Then we define $cA$ or $Ac$, the product of a scalar and a matrix, to be an $n \times m$ matrix whose $i,j$th element is $ca_{ij}$. In other words, every element of $A$ is multiplied by $c$.

Matrix multiplication...
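A minimal sketch (added here, not from the text) of these elementwise rules, using arbitrary $2 \times 2$ matrices in Python with NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
c = 2.5  # an arbitrary scalar

print(np.array_equal(A, B))  # equality: A = B iff a_ij = b_ij for every i, j
print(A + B)                 # addition: (i, j)th element is a_ij + b_ij
print(A - B)                 # subtraction: (i, j)th element is a_ij - b_ij
print(c * A)                 # scalar multiplication: every element of A times c
print(A @ B)                 # matrix multiplication, the operation defined next
```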


Asymptotic Normality

THEOREM 7.4.3 Let the likelihood function be $L(X_1, X_2, \ldots, X_n \mid \theta)$. Then, under general conditions, the maximum likelihood estimator $\hat{\theta}$ is asymptotically distributed as

(7.4.16) $\quad \hat{\theta} \sim N\!\left( \theta_0, \left[ -E \dfrac{\partial^2 \log L}{\partial \theta^2} \right]^{-1} \right).$

(Here we interpret the maximum likelihood estimator as a solution to the likelihood equation obtained by equating the derivative to zero, rather than the global maximum likelihood estimator. Since the asymptotic normality can be proved only for this local maximum likelihood estimator, henceforth this is always what we mean by the maximum likelihood estimator.)

Sketch of Proof. By definition, $\partial \log L / \partial \theta$ evaluated at $\hat{\theta}$ is zero. We expand it in a Taylor series around $\theta_0$ to obtain

(7.4.17) $\quad 0 = \left. \dfrac{\partial \log L}{\partial \theta} \right|_{\hat{\theta}} = \left. \dfrac{\partial \log L}{\partial \theta} \right|_{\theta_0} + \left. \dfrac{\partial^2 \log L}{\partial \theta^2} \right|_{\theta^*} (\hat{\theta} - \theta_0),$

where $\theta^*$ lies between $\hat{\theta}$ and $\theta_0$...
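As an illustration of (7.4.16) (added here, not from the text), consider the exponential model $f(x \mid \theta) = \theta e^{-\theta x}$, for which $\hat{\theta} = 1/\bar{X}$ and $-E\,\partial^2 \log L/\partial\theta^2 = n/\theta^2$; a small simulation, with arbitrary parameter values, checks that $V\hat{\theta} \approx \theta_0^2/n$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta0, n, reps = 2.0, 500, 20_000  # arbitrary true value, sample size, replications

# For the exponential density theta * exp(-theta * x):
#   log L = n log(theta) - theta * sum(X),
#   d log L / d theta = n/theta - sum(X) = 0  =>  theta_hat = 1 / Xbar,
#   -E d^2 log L / d theta^2 = n / theta^2.
X = rng.exponential(scale=1 / theta0, size=(reps, n))
theta_hat = 1 / X.mean(axis=1)

print(theta_hat.mean())                  # close to theta0 (consistency)
print(theta_hat.var(), theta0 ** 2 / n)  # close to the inverse of the information
```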


Serial Correlation

In this section we allow a nonzero correlation between $u_t$ and $u_s$ for $s \neq t$ in the model (12.1.1). Correlation between the values at different periods of a time series is called serial correlation or autocorrelation. It can be specified in infinitely various ways; here we consider one particular form of serial correlation associated with the stationary first-order autoregressive model. It is defined by

(13.1.15) $\quad u_t = \rho u_{t-1} + \varepsilon_t, \quad t = 1, 2, \ldots, T,$

where $\{\varepsilon_t\}$ are i.i.d. with $E\varepsilon_t = 0$ and $V\varepsilon_t = \sigma^2$, and $u_0$ is independent of $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_T$ with $Eu_0 = 0$ and $Vu_0 = \sigma^2/(1 - \rho^2)$.

Taking the expectation of both sides of (13.1.15) for $t = 1$ and using our assumptions, we see that $Eu_1 = \rho Eu_0 + E\varepsilon_1 = 0$. Repeating the same procedure for $t = 2, 3, \ldots, T$, we conclude that $Eu_t = 0$ for every $t$.
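A brief simulation sketch (added here; the normal distribution for $\varepsilon_t$ and all parameter values are illustrative assumptions) draws $u_0$ from the stationary distribution assumed above and checks that the mean and variance of $u_t$ do not drift:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, sigma, T, reps = 0.7, 1.0, 50, 100_000  # arbitrary parameter choices

# u_0 independent of the eps_t, with Eu_0 = 0 and Vu_0 = sigma^2 / (1 - rho^2).
u = rng.normal(0.0, sigma / np.sqrt(1 - rho ** 2), size=reps)
for t in range(T):
    u = rho * u + rng.normal(0.0, sigma, size=reps)  # u_t = rho u_{t-1} + eps_t

print(u.mean())                              # close to 0: Eu_t = 0 for every t
print(u.var(), sigma ** 2 / (1 - rho ** 2))  # variance stays at its stationary value
```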




10.2.1 Definition

In this section we study the estimation of the parameters $\alpha$, $\beta$, and $\sigma^2$ in the bivariate linear regression model (10.1.1). We first consider estimating $\alpha$ and $\beta$. The $T$ observations on $y$ and $x$ can be plotted in a so-called scatter diagram, as in Figure 10.1. In that figure each dot represents a vector of observations on $y$ and $x$. We have labeled one dot as the vector $(y_t, x_t)$. We have also drawn a straight line through the scattered dots and labeled the point of intersection between the line and the dashed perpendicular line that goes through $(y_t, x_t)$ as $(\hat{y}_t, x_t)$. Then the problem of estimating $\alpha$ and $\beta$ can be geometrically interpreted as the problem of drawing a straight line such that its slope is an estimate of $\beta$ and its intercept is an estimate of $\alpha$.

Since $Eu_t = 0$, a re...
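As a sketch of where this leads (added here, with made-up data; the closed-form least squares formulas are standard results, not yet derived at this point in the text), the fitted line's slope and intercept can be computed as:

```python
import numpy as np

# Made-up observations (y_t, x_t) for the scatter diagram.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

# Least squares: the slope estimates beta, the intercept estimates alpha.
beta_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
alpha_hat = y.mean() - beta_hat * x.mean()
y_fit = alpha_hat + beta_hat * x  # (y_fit_t, x_t) lies on the fitted straight line

print(alpha_hat, beta_hat)
```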
