INTRODUCTION TO STATISTICS AND ECONOMETRICS

BIVARIATE REGRESSION MODEL

10.1 INTRODUCTION

In Chapters 1 through 9 we studied statistical inference about the distribution of a single random variable on the basis of independent observations on the variable. Let {X_t}, t = 1, 2, . . . , T, be a sequence of independent random variables with the same distribution F. Thus far we have considered statistical inference about F based on the observed values {x_t} of {X_t}.

In Chapters 10, 12, and 13 we shall study statistical inference about the relationship among more than one random variable. In the present chapter we shall consider the relationship between two random variables, x and y...


APPENDIX: DISTRIBUTION THEORY

 

DEFINITION 1 (Chi-square Distribution) Let {Z_i}, i = 1, 2, . . . , n, be i.i.d. as N(0, 1). Then the distribution of Σ_{i=1}^n Z_i² is called the chi-square distribution with n degrees of freedom and is denoted by χ²_n.

THEOREM 1 If X ~ χ²_n and Y ~ χ²_m, and if X and Y are independent, then X + Y ~ χ²_{n+m}.

THEOREM 2 If X ~ χ²_n, then EX = n and VX = 2n.
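Definition 1 and Theorems 1 and 2 are easy to check by simulation. The sketch below is only an illustration: it assumes NumPy is available, and the values of n, m, and the number of draws are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of Definition 1 and Theorems 1-2: sums of squared
# i.i.d. N(0, 1) variables are chi-square, with mean n and variance 2n.
# n, m, and n_draws are illustrative choices.
rng = np.random.default_rng(0)
n, m, n_draws = 5, 3, 200_000

Z = rng.standard_normal((n_draws, n))
X = (Z ** 2).sum(axis=1)                                    # chi-square with n degrees of freedom
Y = (rng.standard_normal((n_draws, m)) ** 2).sum(axis=1)    # chi-square_m, independent of X

print("mean of X:", X.mean(), "   (Theorem 2: n =", n, ")")
print("variance of X:", X.var(), "   (Theorem 2: 2n =", 2 * n, ")")
print("mean of X + Y:", (X + Y).mean(), "   (Theorem 1: n + m =", n + m, ")")
```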

THEOREM 3 Let {X_i} be i.i.d. as N(μ, σ²), i = 1, 2, . . . , n. Define X̄_n = n^{-1} Σ_{i=1}^n X_i. Then

\frac{\sum_{i=1}^{n} (X_i - \bar{X}_n)^2}{\sigma^2} \sim \chi^2_{n-1}.

 

Proof. Define Z_i = (X_i − μ)/σ. Then Z_i ~ N(0, 1), and since X_i − X̄_n = σ(Z_i − Z̄_n), it suffices to show that Σ_{i=1}^n (Z_i − Z̄_n)² ~ χ²_{n−1}. We proceed by induction. First, for n = 2,

(2)   \sum_{i=1}^{2} (Z_i - \bar{Z}_2)^2 = \frac{(Z_1 - Z_2)^2}{2} = \left( \frac{Z_1 - Z_2}{\sqrt{2}} \right)^2.

But since (Z_1 − Z_2)/√2 ~ N(0, 1), the right-hand side of (2) is χ²_1 by Definition 1. Therefore, the theorem is true for n = 2. Second, assume it is true for n and consider n + 1. We have

(3)   \sum_{i=1}^{n+1} (Z_i - \bar{Z}_{n+1})^2 = \sum_{i=1}^{n} (Z_i - \bar{Z}_n ...
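Theorem 3 can likewise be checked numerically. The following sketch assumes NumPy; the values of μ, σ, n, and the number of replications are illustrative choices. It verifies that Σ_i (X_i − X̄_n)²/σ² has mean n − 1 and variance 2(n − 1), as a χ²_{n−1} variable must.

```python
import numpy as np

# Monte Carlo check of Theorem 3: for X_1, ..., X_n i.i.d. N(mu, sigma^2),
# W = sum_i (X_i - Xbar_n)^2 / sigma^2 behaves as chi-square with n - 1
# degrees of freedom (mean n - 1, variance 2(n - 1)).
# mu, sigma, n, and n_draws are illustrative choices.
rng = np.random.default_rng(1)
mu, sigma, n, n_draws = 2.0, 3.0, 8, 200_000

X = rng.normal(mu, sigma, size=(n_draws, n))
Xbar = X.mean(axis=1, keepdims=True)
W = ((X - Xbar) ** 2).sum(axis=1) / sigma ** 2

print("mean of W:", W.mean(), "   (theory:", n - 1, ")")
print("variance of W:", W.var(), "   (theory:", 2 * (n - 1), ")")
```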


Sample Moments

In Chapter 4 we defined population moments of various kinds. Here we shall define the corresponding sample moments. Sample moments are "natural" estimators of the corresponding population moments. We define

Sample mean    \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i

Sample variance    S_X^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2 = \frac{1}{n} \sum_{i=1}^{n} X_i^2 - \bar{X}^2

Sample kth moment around zero    \frac{1}{n} \sum_{i=1}^{n} X_i^k

Sample kth moment around the mean    \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^k
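These sample moments can be computed directly; the sketch below assumes NumPy, and the data values and the choice of k are purely illustrative. Note that it follows the divisor-n definitions given above (n rather than n − 1).

```python
import numpy as np

# Sample moments for a small illustrative data set, using divisor n
# throughout, as in the definitions above.
x = np.array([1.2, 0.7, 2.4, 1.9, 0.5])

sample_mean = x.mean()                                   # (1/n) sum X_i
sample_variance = ((x - sample_mean) ** 2).mean()        # (1/n) sum (X_i - Xbar)^2
# equivalent second form of the sample variance:
assert np.isclose(sample_variance, (x ** 2).mean() - sample_mean ** 2)

k = 3                                                    # illustrative choice of k
kth_moment_about_zero = (x ** k).mean()                  # (1/n) sum X_i^k
kth_moment_about_mean = ((x - sample_mean) ** k).mean()  # (1/n) sum (X_i - Xbar)^k

print(sample_mean, sample_variance, kth_moment_about_zero, kth_moment_about_mean)
```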

If (X_i, Y_i), i = 1, 2, . . . , n, are mutually independent in the sense of Definition 3.5.4 and have the same distribution as (X, Y), we call {(X_i, Y_i)} a bivariate sample of size n on a bivariate population (X, Y). We define

Sample covariance

\frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y}) = \frac{1}{n} \sum_{i=1}^{n} X_i Y_i - \bar{X}\bar{Y}.

Sample correlation

\frac{\text{Sample covariance}}{S_X S_Y}

The observed values of the sample moments are also called by the same names...
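The sample covariance and sample correlation are computed the same way; in the sketch below (again assuming NumPy) the bivariate data are illustrative, and divisor n is used throughout.

```python
import numpy as np

# Sample covariance and correlation for a small illustrative bivariate sample.
x = np.array([1.2, 0.7, 2.4, 1.9, 0.5])
y = np.array([0.9, 1.1, 2.8, 2.0, 0.3])

xbar, ybar = x.mean(), y.mean()
s_xy = ((x - xbar) * (y - ybar)).mean()                  # (1/n) sum (X_i - Xbar)(Y_i - Ybar)
assert np.isclose(s_xy, (x * y).mean() - xbar * ybar)    # equivalent second form

s_x = np.sqrt(((x - xbar) ** 2).mean())                  # sample standard deviation of X
s_y = np.sqrt(((y - ybar) ** 2).mean())                  # sample standard deviation of Y
r_xy = s_xy / (s_x * s_y)                                # sample correlation

print("sample covariance:", s_xy)
print("sample correlation:", r_xy)
```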


MATRIX OPERATIONS

Equality. If A and B are matrices of the same size and A = {a_{ij}} and B = {b_{ij}}, then we write A = B if and only if a_{ij} = b_{ij} for every i and j.

Addition or subtraction. If A and B are matrices of the same size and A = {a_{ij}} and B = {b_{ij}}, then A ± B is a matrix of the same size as A and B whose i, jth element is equal to a_{ij} ± b_{ij}. For example, we have

\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
\pm
\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}
=
\begin{bmatrix} a_{11} \pm b_{11} & a_{12} \pm b_{12} \\ a_{21} \pm b_{21} & a_{22} \pm b_{22} \end{bmatrix}

Scalar multiplication. Let A be as in (11.1.1) and let c be a scalar (that is, a real number). Then we define cA or Ac, the product of a scalar and a matrix, to be an n × m matrix whose i, jth element is c a_{ij}. In other words, every element of A is multiplied by c.
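These operations correspond to ordinary element-wise array arithmetic; the sketch below assumes NumPy, and the particular 2 × 2 matrices are illustrative.

```python
import numpy as np

# Equality, addition/subtraction, and scalar multiplication with NumPy arrays.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(np.array_equal(A, B))   # equality: True only if a_ij == b_ij for every i, j
print(A + B)                  # addition: (i, j) element is a_ij + b_ij
print(A - B)                  # subtraction: (i, j) element is a_ij - b_ij
print(2.5 * A)                # scalar multiplication: every element multiplied by 2.5
```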

Matrix multiplication...


Asymptotic Normality

THEOREM 7.4.3 Let the likelihood function be L(X_1, X_2, . . . , X_n | θ). Then, under general conditions, the maximum likelihood estimator θ̂ is asymptotically distributed as

(7.4.16)   \hat{\theta} \sim N\left( \theta, \left[ -E \frac{\partial^2 \log L}{\partial \theta^2} \right]^{-1} \right).

(Here we interpret the maximum likelihood estimator as a solution to the likelihood equation obtained by equating the derivative to zero, rather than the global maximum likelihood estimator. Since the asymptotic normality can be proved only for this local maximum likelihood estimator, henceforth this is always what we mean by the maximum likelihood estimator.)

Sketch of Proof. By definition, ∂ log L/∂θ evaluated at θ̂ is zero. We expand it in a Taylor series around θ_0 to obtain

(7.4.17)   0 = \frac{\partial \log L}{\partial \theta} \bigg|_{\hat{\theta}} = \frac{\partial \log L}{\partial \theta} \bigg|_{\theta_0} + \frac{\partial^2 \log L}{\partial \theta^2} \bigg|_{\theta^*} (\hat{\theta} - \theta_0),

where θ* lies between θ̂ and θ_0...
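The practical content of (7.4.16) can be seen in a simple parametric case. For a Bernoulli(p) sample, log L = Σ[X_i log p + (1 − X_i) log(1 − p)], the maximum likelihood estimator is the sample mean, and −E ∂² log L/∂p² = n/[p(1 − p)], so the theorem predicts an asymptotic variance of p(1 − p)/n. The sketch below compares this with the simulated variance of the estimator; it assumes NumPy, and p, n, and the number of replications are illustrative choices.

```python
import numpy as np

# Checking (7.4.16) for a Bernoulli(p) likelihood: the MLE is the sample
# mean and the asymptotic variance is p(1 - p)/n.
rng = np.random.default_rng(2)
p, n, n_reps = 0.3, 500, 50_000

X = rng.binomial(1, p, size=(n_reps, n))
p_hat = X.mean(axis=1)                 # MLE of p in each replication

print("simulated variance of the MLE:", p_hat.var())
print("asymptotic variance p(1-p)/n: ", p * (1 - p) / n)
```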


Serial Correlation

In this section we allow a nonzero correlation between u_t and u_s for s ≠ t in the model (12.1.1). Correlation between the values at different periods of a time series is called serial correlation or autocorrelation. It can be specified in infinitely many ways; here we consider one particular form of serial correlation associated with the stationary first-order autoregressive model. It is defined by

(13.1.15)   u_t = ρu_{t−1} + ε_t,   t = 1, 2, . . . , T,

where {ε_t} are i.i.d. with Eε_t = 0 and Vε_t = σ², and u_0 is independent of ε_1, ε_2, . . . , ε_T with Eu_0 = 0 and Vu_0 = σ²/(1 − ρ²).

Taking the expectation of both sides of (13.1.15) for t = 1 and using our assumptions, we see that Eu_1 = ρEu_0 + Eε_1 = 0. Repeating the same procedure for t = 2, 3, . . . , T, we conclude that

(13.1...
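A short simulation makes the stationarity of this error process concrete. The sketch below assumes NumPy; ρ, σ, T, and the number of replications are illustrative choices. It generates errors according to (13.1.15), with u_0 drawn from its stationary distribution, and checks that u_t has mean zero and variance σ²/(1 − ρ²).

```python
import numpy as np

# Simulating the AR(1) errors of (13.1.15): u_t = rho*u_{t-1} + eps_t,
# with u_0 ~ N(0, sigma^2/(1 - rho^2)) drawn independently of the eps_t.
rng = np.random.default_rng(3)
rho, sigma, T, n_reps = 0.6, 1.0, 50, 20_000

u = np.empty((n_reps, T + 1))
u[:, 0] = rng.normal(0.0, sigma / np.sqrt(1 - rho ** 2), size=n_reps)   # u_0
for t in range(1, T + 1):
    u[:, t] = rho * u[:, t - 1] + rng.normal(0.0, sigma, size=n_reps)   # (13.1.15)

print("mean of u_T across replications:     ", u[:, T].mean())   # approximately 0
print("variance of u_T across replications: ", u[:, T].var())
print("stationary variance sigma^2/(1-rho^2):", sigma ** 2 / (1 - rho ** 2))
```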
