INTRODUCTION TO STATISTICS AND ECONOMETRICS

BAYESIAN METHOD

We have stated earlier that the goal of statistical inference is not merely to obtain an estimator but to be able to say, using the estimator, where the true value of the parameter is likely to lie. This is accomplished by constructing confidence intervals, but a shortcoming of this method is
that confidence can be defined only for certain restricted sets of intervals. In the Bayesian method this problem is alleviated, because there we can treat a parameter as a random variable and therefore define a probability distribution for it. If the parameter space is continuous, as is usually the case, we can define a density function over the parameter space and thereby consider the probability that the parameter lies in any given interval...
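To make this concrete, here is a minimal numerical sketch (not from the text; the uniform prior and the data are assumptions chosen for illustration). It approximates the posterior density of a Bernoulli parameter on a grid, so that the probability of any interval can be read off directly:

import numpy as np

# Hypothetical data: 10 Bernoulli trials with 7 successes.
n, k = 10, 7

# Grid approximation to the continuous parameter space (0, 1).
theta = np.linspace(0.001, 0.999, 999)
dtheta = theta[1] - theta[0]

# Flat prior times the binomial likelihood gives an unnormalized posterior.
posterior = theta**k * (1.0 - theta)**(n - k)
posterior /= (posterior * dtheta).sum()  # normalize to a proper density

# Probability that the parameter lies in a given interval, say (0.5, 0.9).
mask = (theta > 0.5) & (theta < 0.9)
print((posterior[mask] * dtheta).sum())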


TIME SERIES REGRESSION

In this section we consider the pth-order autoregressive model

(13.2.1) $y_t = \sum_{j=1}^{p} \beta_j y_{t-j} + \varepsilon_t, \qquad t = p+1, p+2, \ldots, T,$

where $\{\varepsilon_t\}$ are i.i.d. with $E\varepsilon_t = 0$ and $V\varepsilon_t = \sigma^2$, and $(y_1, y_2, \ldots, y_p)$ are independent of $(\varepsilon_{p+1}, \varepsilon_{p+2}, \ldots, \varepsilon_T)$. This model differs from (13.1.26) only in that the $\{y_t\}$ are observable, whereas the $\{u_t\}$ in the earlier equation are not. We can write (13.2.1) in matrix notation as

(13.2.2) $y = Y\beta + \varepsilon$

by defining

$y = (y_{p+1}, y_{p+2}, \ldots, y_T)', \qquad \varepsilon = (\varepsilon_{p+1}, \varepsilon_{p+2}, \ldots, \varepsilon_T)',$

$\beta = (\beta_1, \beta_2, \ldots, \beta_p)',$

$$Y = \begin{bmatrix} y_p & y_{p-1} & \cdots & y_1 \\ y_{p+1} & y_p & \cdots & y_2 \\ \vdots & \vdots & & \vdots \\ y_{T-1} & y_{T-2} & \cdots & y_{T-p} \end{bmatrix}.$$

Although the model superficially resembles (12.1.3), it is not a classical regression model because Y cannot be regarded as a nonstochastic matrix.

The LS estimator $\hat{\beta} = (Y'Y)^{-1}Y'y$...
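As an illustration of the LS estimator in this model (a sketch with simulated data; the AR(2) coefficients and sample size are assumptions), we stack the lagged values into the matrix Y defined above and solve the normal equations:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical AR(2) example: beta = (0.5, -0.3), sigma = 1, T = 500.
p, T = 2, 500
beta_true = np.array([0.5, -0.3])
y = np.zeros(T)
for t in range(p, T):
    y[t] = beta_true @ y[t - p:t][::-1] + rng.standard_normal()

# The (T - p) x p matrix Y of lagged values; the row for time t is (y_{t-1}, ..., y_{t-p}).
Y = np.column_stack([y[p - j:T - j] for j in range(1, p + 1)])
yy = y[p:]

# LS estimator beta_hat = (Y'Y)^{-1} Y'y.
beta_hat = np.linalg.solve(Y.T @ Y, Y.T @ yy)
print(beta_hat)  # should be near (0.5, -0.3)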


Estimation of σ²

We shall now consider the estimation of $\sigma^2$. If $\{u_t\}$ were observable, the most natural estimator of $\sigma^2$ would be the sample variance $T^{-1}\sum_{t=1}^{T} u_t^2$. Since $\{u_t\}$ are not observable, we must first predict them by the least squares residuals $\{\hat{u}_t\}$ defined in (10.2.7). Then $\sigma^2$ can be estimated by

(10.2.35) $\hat{\sigma}^2 = T^{-1}\sum_{t=1}^{T} \hat{u}_t^2,$

which we shall call the least squares estimator of $\sigma^2$. Although the use of the term least squares here is not as compelling as in the case of $\hat{\alpha}$ and $\hat{\beta}$, we use it because it is an estimator based on the least squares residuals. Using $\hat{\sigma}^2$ we can estimate $V\hat{\beta}$ and $V\hat{\alpha}$ given in (10.2.23) and (10.2.24) by substituting $\hat{\sigma}^2$ for $\sigma^2$ in the respective formulae.

We shall evaluate $E\hat{\sigma}^2$. From (10.2.7) we can write

(10.2.36) $\hat{u}_t = u_t - (\hat{\alpha} - \alpha) - (\hat{\beta} - \beta)x_t.$

Multiplying both sides of (10.2...
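A minimal sketch of the estimator in (10.2.35), with simulated data as the assumption: fit the bivariate regression by least squares, form the residuals, and average their squares:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bivariate regression y_t = alpha + beta * x_t + u_t.
T, alpha, beta, sigma = 200, 1.0, 2.0, 1.5
x = rng.uniform(0, 10, T)
y = alpha + beta * x + rng.normal(0, sigma, T)

# Least squares estimates of alpha and beta.
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()

# Residuals and sigma_hat^2 = T^{-1} * (sum of squared residuals).
u_hat = y - alpha_hat - beta_hat * x
sigma2_hat = np.mean(u_hat ** 2)
print(sigma2_hat)  # near sigma**2 = 2.25 for large T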


Various Measures of Closeness

The ambiguity of the first kind is resolved once we decide on a measure of closeness between the estimator and the parameter. There are many reasonable measures of closeness, however, and it is not easy to choose a particular one. In this section we shall consider six measures of closeness and establish relationships among them. In the following discussion we shall denote two competing estimators by X and Y and the parameter by θ. Note that θ is always a fixed number in the present analysis. Each of the six statements below gives the condition under which estimator X is preferred to estimator Y. (We allow for the possibility of a tie. If X is preferred to Y and Y is not preferred to X, we say X is strictly preferred to Y.) Or, we might say, X is "better" than Y...
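As an illustration of how different measures can rank estimators (the two measures below are common examples chosen for this sketch, not necessarily among the six discussed in the text), here is a Monte Carlo comparison of the sample mean and the sample median as estimators of a normal mean:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: estimate theta = 0 from n = 20 normal observations.
theta, n, reps = 0.0, 20, 100_000
samples = rng.normal(theta, 1.0, size=(reps, n))

X = samples.mean(axis=1)        # estimator X: the sample mean
Y = np.median(samples, axis=1)  # estimator Y: the sample median

# One measure of closeness: mean squared error E(X - theta)^2.
print(np.mean((X - theta) ** 2), np.mean((Y - theta) ** 2))

# Another: the probability that X is at least as close to theta as Y is.
print(np.mean(np.abs(X - theta) <= np.abs(Y - theta)))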


PROPERTIES OF THE SYMMETRIC MATRIX

Now we shall study the properties of symmetric matrices, which play a major role in multivariate statistical analysis. Throughout this section, A will denote an n × n symmetric matrix and X a matrix that is not necessarily square. We shall often assume that X is n × K with K < n.

The following theorem about the diagonalization of a symmetric matrix is central to this section.

THEOREM 11.5.1 For any symmetric matrix A, there exists an orthogonal matrix H (that is, a square matrix satisfying $H'H = I$) such that

(11.5.1) $H'AH = \Lambda,$

where $\Lambda$ is a diagonal matrix. The diagonal elements of $\Lambda$ are called the characteristic roots (or eigenvalues) of A...
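A quick numerical check of the theorem (an illustration using an arbitrary small symmetric matrix): numpy.linalg.eigh returns the characteristic roots and an orthogonal matrix H whose columns are the corresponding eigenvectors, so H′AH recovers the diagonal matrix Λ:

import numpy as np

# An arbitrary 3 x 3 symmetric matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is specialized to symmetric (Hermitian) matrices.
eigvals, H = np.linalg.eigh(A)

# H is orthogonal: H'H = I (up to rounding).
print(np.allclose(H.T @ H, np.eye(3)))

# H'AH = Lambda, the diagonal matrix of characteristic roots.
print(np.allclose(H.T @ A @ H, np.diag(eigvals)))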


TYPE I AND TYPE II ERRORS

The question of how to determine the critical region ideally should depend on the cost of making a wrong decision. In this regard it is useful to define the following two types of error.

DEFINITION 9.2.1 A Type I error is the error of rejecting H0 when it is true. A Type II error is the error of accepting H0 when it is false (that is, when Hi is true).

figure 9.1 Relationship between α and β

The probabilities of the two types of error are crucial in the choice of a critical region. We denote the probability of Type I error by α and that of Type II error by β. Therefore we can write mathematically

(9.2.1) $\alpha = P(X \in R \mid H_0)$ and

(9.2.2) $\beta = P(X \in \bar{R} \mid H_1),$

where $\bar{R}$ denotes the complement of R.

The probability of Type I error is also called the size of a test.
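A minimal numerical sketch (the hypotheses and critical region are assumptions chosen for illustration): for a single observation X ~ N(μ, 1), testing H₀: μ = 0 against H₁: μ = 1 with critical region R = {x > c}, both error probabilities follow directly from the normal distribution function:

from scipy.stats import norm

# Critical region R = {x > c}: reject H0 when the observation exceeds c.
c = 1.645

alpha = 1 - norm.cdf(c, loc=0, scale=1)  # P(X in R | H0), the size of the test
beta = norm.cdf(c, loc=1, scale=1)       # P(X not in R | H1)

print(alpha)  # about 0.05
print(beta)   # about 0.74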

Sometimes it is useful to consider a test which chooses ...
