INTRODUCTION TO STATISTICS AND ECONOMETRICS

Cramer-Rao Lower Bound

We shall derive a lower bound to the variance of an unbiased estimator and show that in certain cases the variance of the maximum likelihood estimator attains the lower bound.

THEOREM 7.4.1 (Cramer-Rao) Let $L(X_1, X_2, \ldots, X_n \mid \theta)$ be the likelihood function and let $\hat{\theta}(X_1, X_2, \ldots, X_n)$ be an unbiased estimator of $\theta$. Then, under general conditions, we have

(7.4.1)   $V(\hat{\theta}) \;\geq\; \dfrac{1}{-E\left[\dfrac{\partial^2 \log L}{\partial \theta^2}\right]}$

The right-hand side is known as the Cramer-Rao lower bound (CRLB).
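A standard example helps fix ideas (a brief sketch of the normal-mean case, added here for illustration). Let $X_1, \ldots, X_n$ be i.i.d. $N(\mu, \sigma^2)$ with $\sigma^2$ known and $\mu$ the parameter of interest. Then

$\log L = -\dfrac{n}{2} \log(2\pi\sigma^2) - \dfrac{1}{2\sigma^2} \sum_{i=1}^{n} (X_i - \mu)^2$,   so that   $\dfrac{\partial^2 \log L}{\partial \mu^2} = -\dfrac{n}{\sigma^2}$.

The bound in (7.4.1) is therefore $\sigma^2/n$, which equals $V(\bar{X})$; the maximum likelihood estimator $\bar{X}$ attains the CRLB in this case.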

(In Section 7.3 the likelihood function was always evaluated at the observed values of the sample, because there we were only concerned with the definition and computation of the maximum likelihood estimator...


Heteroscedasticity

In the classical regression model it is assumed that the variance of the error term is constant (homoscedastic). Here we relax this assumption and specify more generally that

(13.1.12)   $Vu_t = \sigma_t^2$,   $t = 1, 2, \ldots, T$.

This assumption of nonconstant variances is called heteroscedasticity. The other assumptions remain the same. If the variances are known, this model is a special case of the model discussed in Section 13.1.1. In the present case, the covariance matrix of the error vector is a diagonal matrix whose tth diagonal element is equal to $\sigma_t^2$. The GLS estimator in this case is given a special name, the weighted least squares estimator.
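As a minimal sketch of how the weighted least squares estimator can be computed when the variances $\sigma_t^2$ are taken as known (the function and variable names below are illustrative, not from the text):

```python
import numpy as np

def weighted_least_squares(X, y, sigma2):
    """Weighted least squares: GLS with a diagonal error covariance matrix.

    X      : (T, k) matrix of regressors
    y      : (T,) vector of observations
    sigma2 : (T,) known error variances, V(u_t) = sigma2[t]

    Solves (X' S^{-1} X) b = X' S^{-1} y with S = diag(sigma2),
    i.e. each observation is weighted by the inverse of its variance.
    """
    w = 1.0 / sigma2                      # weights = inverse variances
    XtWX = X.T @ (w[:, None] * X)         # X' S^{-1} X
    XtWy = X.T @ (w * y)                  # X' S^{-1} y
    return np.linalg.solve(XtWX, XtWy)

# Small simulated example with heteroscedastic errors
rng = np.random.default_rng(0)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
sigma2 = np.exp(X[:, 1])                  # variances differ across t
u = rng.normal(size=T) * np.sqrt(sigma2)
y = X @ np.array([1.0, 2.0]) + u
print(weighted_least_squares(X, y, sigma2))
```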

If the variances are unknown, we must specify them as depending on a finite number of parameters. There are two main methods of parameterization.

In the first method, the variances are assum...


BIVARIATE REGRESSION MODEL

10.1 INTRODUCTION

In Chapters 1 through 9 we studied statistical inference about the distribution of a single random variable on the basis of independent observations on the variable. Let $\{X_t\}$, $t = 1, 2, \ldots, T$, be a sequence of independent random variables with the same distribution $F$. Thus far we have considered statistical inference about $F$ based on the observed values $\{x_t\}$ of $\{X_t\}$.

In Chapters 10, 12, and 13 we shall study statistical inference about the relationship among more than one random variable. In the present chapter we shall consider the relationship between two random variables, x and y...


APPENDIX: DISTRIBUTION THEORY

 

DEFINITION 1 (Chi-square Distribution) Let $\{Z_i\}$, $i = 1, 2, \ldots, n$, be i.i.d. as $N(0, 1)$. Then the distribution of $\sum_{i=1}^{n} Z_i^2$ is called the chi-square distribution with $n$ degrees of freedom and is denoted by $\chi_n^2$.

THEOREM 1 If $X \sim \chi_n^2$ and $Y \sim \chi_m^2$ and if $X$ and $Y$ are independent, then $X + Y \sim \chi_{n+m}^2$.

THEOREM 2 If $X \sim \chi_n^2$, then $EX = n$ and $VX = 2n$.
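Theorem 2 is easy to check by simulation; the following is a minimal sketch in Python (assuming NumPy), drawing $X = \sum_{i=1}^{n} Z_i^2$ many times and comparing the sample mean and variance with $n$ and $2n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000

# X = sum of n squared independent N(0, 1) variables ~ chi-square with n d.f.
Z = rng.standard_normal((reps, n))
X = (Z ** 2).sum(axis=1)

print(X.mean())  # close to n = 5   (EX = n)
print(X.var())   # close to 2n = 10 (VX = 2n)
```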

THEOREM 3 Let $\{X_i\}$ be i.i.d. as $N(\mu, \sigma^2)$, $i = 1, 2, \ldots, n$. Define $\bar{X}_n = n^{-1} \sum_{i=1}^{n} X_i$. Then

$\dfrac{\sum_{i=1}^{n} (X_i - \bar{X}_n)^2}{\sigma^2} \;\sim\; \chi_{n-1}^2 .$

 

Proof. Define $Z_i = (X_i - \mu)/\sigma$. Then $Z_i \sim N(0, 1)$ and $\sum_{i=1}^{n} (X_i - \bar{X}_n)^2/\sigma^2 = \sum_{i=1}^{n} (Z_i - \bar{Z}_n)^2$, where $\bar{Z}_n = n^{-1} \sum_{i=1}^{n} Z_i$. We prove the result by induction on $n$. First, for $n = 2$,

(2)   $\sum_{i=1}^{2} (Z_i - \bar{Z}_2)^2 = \left(\dfrac{Z_1 - Z_2}{\sqrt{2}}\right)^2 .$

But since $(Z_1 - Z_2)/\sqrt{2} \sim N(0, 1)$, the right-hand side of (2) is $\chi_1^2$ by Definition 1. Therefore, the theorem is true for $n = 2$. Second, assume it is true for $n$ and consider $n + 1$. We have

(3)   $\sum_{i=1}^{n+1} (Z_i - \bar{Z}_{n+1})^2 = \sum_{i=1}^{n} (Z_i - \bar{Z}_n)^2 + \dfrac{n}{n+1} (Z_{n+1} - \bar{Z}_n)^2$ ...


Sample Moments


In Chapter 4 we defined population moments of various kinds. Here we shall define the corresponding sample moments. Sample moments are “natural” estimators of the corresponding population moments. We define

Sample mean   $\bar{X} = \dfrac{1}{n} \sum_{i=1}^{n} X_i .$

Sample variance   $S_X^2 = \dfrac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2 = \dfrac{1}{n} \sum_{i=1}^{n} X_i^2 - \bar{X}^2 .$

Sample kth moment around zero   $\dfrac{1}{n} \sum_{i=1}^{n} X_i^k .$

Sample kth moment around the mean   $\dfrac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^k .$

If $(X_i, Y_i)$, $i = 1, 2, \ldots, n$, are mutually independent in the sense of Definition 3.5.4 and have the same distribution as $(X, Y)$, we call $\{(X_i, Y_i)\}$ a bivariate sample of size $n$ on a bivariate population $(X, Y)$. We define

Sample covariance   $\dfrac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y}) = \dfrac{1}{n} \sum_{i=1}^{n} X_i Y_i - \bar{X}\bar{Y} .$

Sample correlation   $\dfrac{\text{Sample covariance}}{S_X S_Y} .$

The observed values of the sample moments are also called by the same names...
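A minimal sketch of these sample moments in Python (assuming NumPy; the function and variable names are illustrative):

```python
import numpy as np

def sample_moments(x, y):
    """Sample moments as defined above (note the divisor n, not n - 1)."""
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    var_x = ((x - xbar) ** 2).sum() / n            # sample variance of X
    var_y = ((y - ybar) ** 2).sum() / n            # sample variance of Y
    cov_xy = ((x - xbar) * (y - ybar)).sum() / n   # sample covariance
    corr_xy = cov_xy / np.sqrt(var_x * var_y)      # sample correlation
    return xbar, var_x, cov_xy, corr_xy

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)
print(sample_moments(x, y))
```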


MATRIX OPERATIONS

Equality. If $A$ and $B$ are matrices of the same size and $A = \{a_{ij}\}$ and $B = \{b_{ij}\}$, then we write $A = B$ if and only if $a_{ij} = b_{ij}$ for every $i$ and $j$.

Addition or subtraction. If $A$ and $B$ are matrices of the same size and $A = \{a_{ij}\}$ and $B = \{b_{ij}\}$, then $A \pm B$ is a matrix of the same size as $A$ and $B$ whose $i$, $j$th element is equal to $a_{ij} \pm b_{ij}$. For example, we have

$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \pm \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} = \begin{pmatrix} a_{11} \pm b_{11} & a_{12} \pm b_{12} \\ a_{21} \pm b_{21} & a_{22} \pm b_{22} \end{pmatrix} .$

Scalar multiplication. Let $A$ be as in (11.1.1) and let $c$ be a scalar (that is, a real number). Then we define $cA$ or $Ac$, the product of a scalar and a matrix, to be an $n \times m$ matrix whose $i$, $j$th element is $ca_{ij}$. In other words, every element of $A$ is multiplied by $c$.
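A brief illustration of these operations in Python with NumPy (the matrices shown are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(A + B)    # addition: (i, j)th element is a_ij + b_ij
print(A - B)    # subtraction: (i, j)th element is a_ij - b_ij
print(3.0 * A)  # scalar multiplication: every element of A multiplied by 3
```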

Matrix multiplication...
