The question of how to determine the critical region ideally should depend on the cost of making a wrong decision. In this regard it is useful to define the following two types of error.

DEFINITION 9.2.1 A Type I error is the error of rejecting H₀ when it is true. A Type II error is the error of accepting H₀ when it is false (that is, when H₁ is true).


Figure 9.1 Relationship between α and β

The probabilities of the two types of error are crucial in the choice of a critical region. We denote the probability of Type I error by α and that of Type II error by β. Therefore we can write mathematically

(9.2.1) α = P(X ∈ R | H₀) and

(9.2.2) β = P(X ∉ R | H₁).

The probability of Type I error is also called the size of a test.
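As a concrete illustration, α and β can be computed in closed form for a simple normal testing problem. The example below (a single observation, the particular hypotheses, and the cutoff c) is our own sketch, not taken from the text:

```python
from statistics import NormalDist

# Illustrative sketch (not from the text): one observation X ~ N(mu, 1),
# testing H0: mu = 0 against H1: mu = 1 with critical region R = {x : x > c}.
def error_probabilities(c):
    alpha = 1 - NormalDist(0, 1).cdf(c)  # Type I error: P(X in R | H0), the size
    beta = NormalDist(1, 1).cdf(c)       # Type II error: P(X not in R | H1)
    return alpha, beta
```

Raising c shrinks α but inflates β; this trade-off is precisely what makes the cost of each kind of error relevant to the choice of critical region.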

Sometimes it is useful to consider a test which chooses ...



A study of the simultaneous equations model was initiated by the researchers of the Cowles Commission at the University of Chicago in the 1940s. The model was extensively used by econometricians in the 1950s and 1960s. Although it was more frequently employed in macroeconomic analysis, we shall illustrate it by a supply and demand model. Consider

(13.3.1) y₁ = γ₁y₂ + X₁β₁ + u₁ and

(13.3.2) y₂ = γ₂y₁ + X₂β₂ + u₂,

where y₁ and y₂ are T-dimensional vectors of dependent variables, X₁ and X₂ are known nonstochastic matrices, and u₁ and u₂ are unobservable random variables such that Eu₁ = Eu₂ = 0, Vu₁ = σ₁²I, Vu₂ = σ₂²I, and Eu₁u₂′ = σ₁₂I. We give these equations the following interpretation.

A buyer comes to the market with the schedule (13.3...
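The algebra of the system can be checked numerically. In the sketch below (all parameter values are invented for illustration), the structural equations (13.3.1) and (13.3.2) are solved for the reduced form, which expresses y₁ and y₂ in terms of the exogenous variables and disturbances alone, assuming γ₁γ₂ ≠ 1:

```python
import numpy as np

# Invented parameter values, for illustration only.
rng = np.random.default_rng(0)
T = 200
g1, g2 = 0.5, -0.3                      # gamma_1, gamma_2 (need g1 * g2 != 1)
X1 = rng.standard_normal((T, 1)); b1 = np.array([1.0])
X2 = rng.standard_normal((T, 1)); b2 = np.array([2.0])
u1 = rng.standard_normal(T); u2 = rng.standard_normal(T)

# Reduced form: substitute one structural equation into the other and solve.
d = 1 - g1 * g2
y1 = (X1 @ b1 + g1 * (X2 @ b2) + u1 + g1 * u2) / d
y2 = (X2 @ b2 + g2 * (X1 @ b1) + u2 + g2 * u1) / d
```

Because y₂ depends on u₁ through the reduced form, regressing y₁ on y₂ by least squares would be inconsistent; this is the simultaneity problem that the methods developed in this chapter address.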


Asymptotic Properties of Least Squares Estimators

In this section we prove the consistency and the asymptotic normality of the least squares estimators α̂ and β̂ and the consistency of σ̂² under suitable assumptions about the regressor {xₜ}.

To prove the consistency of α̂ and β̂, we use Theorem 6.1.1, which states that convergence in mean square implies consistency. Since both α̂ and β̂ are unbiased estimators of the respective parameters, we need only show that the variances given in (10.2.23) and (10.2.24) converge to zero. Therefore, we conclude that α̂ and β̂ are consistent if

(10.2.55) lim Σ(1ₜ*)² = ∞ and

(10.2.56) lim Σ(xₜ*)² = ∞.


We shall rewrite these conditions in terms of the original variables {xt}...
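The role of such divergence conditions can be seen numerically. In the bivariate model yₜ = α + βxₜ + uₜ with Var(uₜ) = σ², the least squares slope has variance σ²/Σ(xₜ − x̄)², so the variance vanishes exactly when the sum of squared deviations diverges. The sketch below uses this familiar unstarred form, not the text's transformed variables:

```python
import numpy as np

# Var(beta_hat) = sigma^2 / sum((x_t - xbar)^2): a sketch of why divergence
# of the sum of squares delivers consistency of the least squares slope.
def slope_variance(x, sigma2=1.0):
    x = np.asarray(x, dtype=float)
    return sigma2 / np.sum((x - x.mean()) ** 2)

# The variance shrinks toward zero as T grows and the sum of squares diverges.
variances = [slope_variance(np.arange(T)) for T in (10, 100, 1000)]
```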


Strategies for Choosing an Estimator

How can we resolve the ambiguity of the second kind and choose between two admissible estimators, T and W, in Example 7.2.1?

Subjective strategy. One strategy is to compare the graphs of the mean squared errors for T and W in Figure 7.5 and to choose one after considering the a priori likely values of p. For example, suppose we believe a priori that any value of p is equally likely and express this situation by a uniform density over the interval [0, 1]. We would then choose the estimator which has the minimum area under the mean squared error function. In our example, T and W are equally good by this criterion. This strategy is highly subjective; therefore, it is usually not discussed in a textbook written in the framework of classical statistics...
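The "area under the MSE function" computation can be sketched as follows. The two estimators here (the sample proportion T = X/n and the shrinkage estimator W = (X+1)/(n+2) for a binomial proportion p) are stand-ins of our own choosing, not the T and W of Example 7.2.1:

```python
import numpy as np

# Area under a mean squared error function over the uniform prior on [0, 1],
# approximated by averaging over a fine grid (interval length 1, so mean ~ integral).
def integrated_mse(mse, n_grid=100001):
    p = np.linspace(0.0, 1.0, n_grid)
    return float(np.mean(mse(p)))

n = 10
mse_T = lambda p: p * (1 - p) / n                                      # exact MSE of X/n
mse_W = lambda p: (n * p * (1 - p) + (1 - 2 * p) ** 2) / (n + 2) ** 2  # of (X+1)/(n+2)

area_T = integrated_mse(mse_T)   # analytically 1/(6n)
area_W = integrated_mse(mse_W)   # analytically (n/6 + 1/3)/(n+2)**2
```

A nonuniform prior density f(p) would simply weight the integrand by f(p) before averaging, which is how the subjective element enters.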



We shall rewrite (12.1.1) in vector and matrix notation in two steps. Define the K-dimensional row vector x_t' = (x_t1, x_t2, . . . , x_tK) and the K-dimensional column vector β = (β_1, β_2, . . . , β_K)'. Then (12.1.1) can be written as

(12.1.2) y_t = x_t'β + u_t, t = 1, 2, . . . , T.

Although we have simplified the notation by going from (12.1.1) to (12.1.2), the real advantage of matrix notation is that we can write the T equations in (12.1.2) as a single vector equation.

Define the column vectors y = (y_1, y_2, . . . , y_T)' and u = (u_1, u_2, . . . , u_T)', and define the T × K matrix X whose t-th row is equal to x_t', so that X' = (x_1, x_2, . . . , x_T). Then we can rewrite (12.1.2) as

(12.1.3) y = Xβ + u, where

X = | x_11  x_12  . . .  x_1K |
    | x_21  x_22  . . .  x_2K |
    |  .     .            .   |
    | x_T1  x_T2  . . .  x_TK |.

We assume rank(X) = K...
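A brief numerical companion to (12.1.3) (all values below are invented for illustration): stacking the rows x_t' produces the T × K matrix X, and under the full-rank assumption least squares recovers β:

```python
import numpy as np

# Invented example data: row t of X is x_t', so y = X beta + u stacks the
# T scalar equations y_t = x_t' beta + u_t into one vector equation.
rng = np.random.default_rng(1)
T, K = 50, 3
X = rng.standard_normal((T, K))
beta = np.array([1.0, -2.0, 0.5])
u = 0.1 * rng.standard_normal(T)
y = X @ beta + u

assert np.linalg.matrix_rank(X) == K              # the assumption rank(X) = K
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares estimate of beta
```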



In this section we study the Bayesian strategy of choosing an optimal test among all the admissible tests and a practical method which enables us to find a best test of a given size. The latter is due to Neyman and Pearson and is stated in the lemma that bears their names. A Bayesian interpretation of the Neyman-Pearson lemma will be pedagogically useful here.


Figure 9.4 A set of admissible characteristics

We first consider how the Bayesian would solve the problem of hypothesis testing. For her it is a matter of choosing between H₀ and H₁ given the posterior probabilities P(H₀ | x) and P(H₁ | x), where x is the observed value of X. Suppose the loss of making a wrong decision is as given in Table 9.2. For example, if we choose H₀ when H₁ is in fact true, we incur a loss γ₂.
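The Bayesian's rule can be written down directly: choose the hypothesis with the smaller posterior expected loss. A minimal sketch follows; γ₂ is the loss of choosing H₀ when H₁ is true, as in the text, and we assume γ₁ is the loss of the opposite error and that a correct choice costs nothing:

```python
# Sketch of the Bayesian decision rule: compare the posterior expected loss
# of each action and pick the smaller.  gamma2 = loss of choosing H0 when H1
# is true; gamma1 (our assumption) = loss of choosing H1 when H0 is true.
def bayes_choice(p_h0, gamma1, gamma2):
    loss_choose_h0 = gamma2 * (1 - p_h0)  # wrong only if H1 is true
    loss_choose_h1 = gamma1 * p_h0        # wrong only if H0 is true
    return "H0" if loss_choose_h0 <= loss_choose_h1 else "H1"
```

With equal losses the rule reduces to choosing the hypothesis with the larger posterior probability; unequal losses tilt the choice toward avoiding the costlier mistake.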

