Various Measures of Closeness

The ambiguity of the first kind is resolved once we decide on a measure of closeness between the estimator and the parameter. There are many reasonable measures of closeness, however, and it is not easy to choose a particular one. In this section we shall consider six measures of closeness and establish relationships among them. In the following discussion we shall denote two competing estimators by X and Y and the parameter by θ. Note that θ is always a fixed number in the present analysis. Each of the six statements below gives the condition under which estimator X is preferred to estimator Y. (We allow for the possibility of a tie. If X is preferred to Y and Y is not preferred to X, we say X is strictly preferred to Y.) Or, we might say, X is "better" than Y...



Now we shall study the properties of symmetric matrices, which play a major role in multivariate statistical analysis. Throughout this section, A will denote an n × n symmetric matrix and X a matrix that is not necessarily square. We shall often assume that X is n × K with K < n.

The following theorem about the diagonalization of a symmetric matrix is central to this section.

THEOREM 11.5.1 For any symmetric matrix A, there exists an orthogonal matrix H (that is, a square matrix satisfying H'H = I) such that

(11.5.1) H'AH = Λ,

where Λ is a diagonal matrix. The diagonal elements of Λ are called the characteristic roots (or eigenvalues) of A...
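The diagonalization in Theorem 11.5.1 can be checked numerically. The sketch below uses an arbitrary symmetric matrix chosen only for illustration; NumPy's eigendecomposition routine for symmetric matrices returns the eigenvalues and an orthogonal matrix of eigenvectors H.

```python
import numpy as np

# An arbitrary 3 x 3 symmetric matrix, chosen only for this example.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# np.linalg.eigh handles symmetric (Hermitian) matrices; its columns of
# eigenvectors form the orthogonal matrix H of Theorem 11.5.1.
eigenvalues, H = np.linalg.eigh(A)
Lam = np.diag(eigenvalues)  # the diagonal matrix Λ of characteristic roots

print(np.allclose(H.T @ H, np.eye(3)))   # H is orthogonal: H'H = I
print(np.allclose(H.T @ A @ H, Lam))     # H'AH = Λ
```

Both checks print True: H'H recovers the identity and H'AH is diagonal, with the characteristic roots of A on its diagonal.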



The question of how to determine the critical region ideally should depend on the cost of making a wrong decision. In this regard it is useful to define the following two types of error.

DEFINITION 9.2.1 A Type I error is the error of rejecting H0 when it is true. A Type II error is the error of accepting H0 when it is false (that is, when H1 is true).


FIGURE 9.1 Relationship between α and β

The probabilities of the two types of error are crucial in the choice of a critical region. We denote the probability of Type I error by α and that of Type II error by β. Therefore we can write mathematically

(9.2.1) α = P(X ∈ R | H0) and

(9.2.2) β = P(X ∈ R̄ | H1),

where R̄ denotes the complement of the critical region R.

The probability of Type I error is also called the size of a test.
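As a small numerical sketch of these definitions (the setup is a hypothetical one, not taken from the text): suppose a single observation X ~ N(μ, 1) is used to test H0: μ = 0 against H1: μ = 1, with critical region R = {X > c}.

```python
from statistics import NormalDist

# Hypothetical test: X ~ N(mu, 1), H0: mu = 0 vs. H1: mu = 1,
# with critical region R = {X > c}.
c = 1.645

# alpha = P(X in R | H0): probability of rejecting H0 when it is true.
alpha = 1 - NormalDist(mu=0, sigma=1).cdf(c)

# beta = P(X not in R | H1): probability of accepting H0 when H1 is true.
beta = NormalDist(mu=1, sigma=1).cdf(c)

print(round(alpha, 3))  # about 0.05
print(round(beta, 3))   # about 0.74
```

Raising c shrinks α but enlarges β, which is the trade-off depicted in Figure 9.1.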

Sometimes it is useful to consider a test which chooses ...



A study of the simultaneous equations model was initiated by the researchers of the Cowles Commission at the University of Chicago in the 1940s. The model was extensively used by econometricians in the 1950s and 1960s. Although it was more frequently employed in macroeconomic analysis, we shall illustrate it by a supply and demand model. Consider

(13.3.1) y1 = γ1y2 + X1β1 + u1 and

(13.3.2) y2 = γ2y1 + X2β2 + u2,

where y1 and y2 are T-dimensional vectors of dependent variables, X1 and X2 are known nonstochastic matrices, and u1 and u2 are unobservable random variables such that Eu1 = Eu2 = 0, Vu1 = σ1²I, Vu2 = σ2²I, and Eu1u2' = σ12I. We give these equations the following interpretation.
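One can simulate data satisfying (13.3.1) and (13.3.2) by substituting one equation into the other and solving for y1 and y2 (the reduced form), provided γ1γ2 ≠ 1. The parameter values below are arbitrary illustrations, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
g1, g2 = 0.5, -0.3                      # assumed values of gamma1, gamma2 (g1*g2 != 1)
X1 = rng.normal(size=(T, 1)); b1 = np.array([2.0])   # assumed beta1
X2 = rng.normal(size=(T, 1)); b2 = np.array([-1.0])  # assumed beta2
u1 = rng.normal(size=T)
u2 = rng.normal(size=T)

# Reduced form: substitute (13.3.2) into (13.3.1) and solve for y1;
# symmetrically for y2. The common divisor is 1 - gamma1*gamma2.
d = 1.0 - g1 * g2
y1 = (X1 @ b1 + g1 * (X2 @ b2) + u1 + g1 * u2) / d
y2 = (g2 * (X1 @ b1) + X2 @ b2 + g2 * u1 + u2) / d

# The simulated data satisfy both structural equations exactly.
print(np.allclose(y1, g1 * y2 + X1 @ b1 + u1))
print(np.allclose(y2, g2 * y1 + X2 @ b2 + u2))
```

Note that y2 appears on the right-hand side of (13.3.1) yet depends on u1 through the reduced form, which is the source of the simultaneity problem discussed in this chapter.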

A buyer comes to the market with the schedule (13.3...


Asymptotic Properties of Least Squares Estimators

In this section we prove the consistency and the asymptotic normality of the least squares estimators α̂ and β̂ and the consistency of σ̂² under suitable assumptions about the regressor {xt}.

To prove the consistency of α̂ and β̂, we use Theorem 6.1.1, which states that convergence in mean square implies consistency. Since both α̂ and β̂ are unbiased estimators of the respective parameters, we need only show that the variances given in (10.2.23) and (10.2.24) converge to zero. Therefore, we conclude that α̂ and β̂ are consistent if

(10.2.55) lim Σ(1t*)² = ∞ and

(10.2.56) lim Σ(xt*)² = ∞,

where the limits are taken as T → ∞.
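As an informal illustration of this consistency (outside the formal proof), a small simulation of an assumed model yt = α + βxt + ut shows the least squares estimates settling near the true values as T grows:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha_true, beta_true = 1.0, 2.0   # assumed true parameters for the simulation

def ols(T):
    """Least squares estimates of (alpha, beta) from T simulated observations."""
    x = rng.normal(size=T)
    y = alpha_true + beta_true * x + rng.normal(size=T)
    X = np.column_stack([np.ones(T), x])       # regressors: intercept and x
    a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    return a_hat, b_hat

# Estimates tighten around (1.0, 2.0) as the sample size increases.
for T in (10, 1_000, 100_000):
    a_hat, b_hat = ols(T)
    print(T, round(a_hat, 3), round(b_hat, 3))
```

Here the regressors are random draws, so Σ(xt*)² grows roughly in proportion to T, and the variances of α̂ and β̂ shrink accordingly.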


We shall rewrite these conditions in terms of the original variables {xt}...


Strategies for Choosing an Estimator

How can we resolve the ambiguity of the second kind and choose between two admissible estimators, T and W, in Example 7.2.1?

Subjective strategy. One strategy is to compare the graphs of the mean squared errors for T and W in Figure 7.5 and to choose one after considering the a priori likely values of p. For example, suppose we believe a priori that any value of p is equally likely and express this situation by a uniform density over the interval [0, 1]. We would then choose the estimator which has the minimum area under the mean squared error function. In our example, T and W are equally good by this criterion. This strategy is highly subjective; therefore, it is usually not discussed in a textbook written in the framework of classical statistics...
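The area-under-the-MSE-curve criterion can be sketched numerically. The two estimators below are hypothetical stand-ins (not the T and W of Example 7.2.1): for X ~ B(n, p), take the sample proportion X/n and the shrinkage estimator (X + 1)/(n + 2), and compare the areas under their exact mean squared error curves over the uniform prior on [0, 1].

```python
from math import comb

n = 10  # hypothetical sample size

def mse(estimate, p):
    """Exact MSE at a fixed p, averaging (estimate(X) - p)^2 over X ~ B(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) * (estimate(x) - p)**2
               for x in range(n + 1))

def area(estimate, grid=1000):
    """Trapezoidal approximation to the area under the MSE curve on [0, 1]."""
    ps = [i / grid for i in range(grid + 1)]
    vals = [mse(estimate, p) for p in ps]
    return sum((vals[i] + vals[i + 1]) / 2 for i in range(grid)) / grid

area_prop = area(lambda x: x / n)              # MSE = p(1-p)/n, area = 1/(6n)
area_shrink = area(lambda x: (x + 1) / (n + 2))
print(round(area_prop, 5), round(area_shrink, 5))
```

For this hypothetical pair the shrinkage estimator has the smaller area, so the uniform-prior strategy would pick it; the T and W of the text's example happen to tie instead.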
