In this section we consider the regression model

(13.1.1)  y = Xβ + u,

where we assume that X is a full-rank T × K matrix of known constants and u is a T-dimensional vector of random variables such that Eu = 0 and

(13.1.2)  Euu′ = Σ.

We assume only that Σ is a positive definite matrix. This model differs from the classical regression model only in its general specification of the variance-covariance matrix given in (13.1.2).
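As a numeric aside not drawn from the text: under (13.1.1) with Euu′ = Σ, ordinary least squares remains unbiased, but its variance-covariance matrix becomes (X′X)⁻¹X′ΣX(X′X)⁻¹ rather than σ²(X′X)⁻¹. A minimal sketch, with purely illustrative matrices:

```python
import numpy as np

# Illustrative sketch (not from the text): OLS variance under a general
# positive definite Sigma is (X'X)^{-1} X' Sigma X (X'X)^{-1}.
T = 4
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])

# An AR(1)-style positive definite Sigma with correlation 0.5
rho = 0.5
idx = np.arange(T)
Sigma = rho ** np.abs(np.subtract.outer(idx, idx))

XtX_inv = np.linalg.inv(X.T @ X)
var_ols = XtX_inv @ X.T @ Sigma @ X @ XtX_inv  # K x K, symmetric p.d.
```

When Σ = σ²I this reduces to the classical formula, which is exactly the sense in which the model above generalizes the classical one.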

Those who are not familiar with matrix analysis should study Chapter 11 before reading this section. The results of this chapter will not be needed to understand Chapter 10. Insofar as possible, we shall illustrate our results in the two-dimensional case.

We consider the problem of testing H0: θ = θ0 against H1: θ ≠ θ0, where θ is a K-dimensional vector of parameters. We are to use the test statistic θ̂ ~ N(θ, Σ), where Σ is a K × K variance-covariance matrix: that is, Σ = E(θ̂ − θ)(θ̂ − θ)′. (Throughout this section a matrix is denoted by a boldface capital letter and a vector by a boldface lower-case letter.) In Section 9.7.1 we consider the case where Σ is completely known, and in Section 9.7...

FIGURE 9.9  Critical region for testing about two parameters
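A hedged sketch of the known-Σ case (the standard construction, not necessarily the text's exact derivation): since θ̂ ~ N(θ, Σ), the quadratic form W = (θ̂ − θ0)′Σ⁻¹(θ̂ − θ0) is chi-square with K degrees of freedom under H0, so rejecting when W exceeds a critical value gives an elliptical critical region of the kind pictured in Figure 9.9. All numbers below are illustrative:

```python
import numpy as np
from scipy import stats

# Sketch: quadratic-form test of H0: theta = theta0 when the K x K
# covariance matrix Sigma of theta_hat is completely known.
rng = np.random.default_rng(0)
K = 2
theta0 = np.array([1.0, -0.5])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

theta_hat = rng.multivariate_normal(theta0, Sigma)  # one draw under H0

diff = theta_hat - theta0
W = diff @ np.linalg.inv(Sigma) @ diff  # chi-square(K) under H0
p_value = stats.chi2.sf(W, df=K)        # reject H0 if p_value is small
```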

Tobin (1958) proposed the following important model:


y_i* = x_i′β + u_i

y_i = y_i*   if y_i* > 0
y_i = 0      if y_i* ≤ 0,   i = 1, 2, . . . , n,

where {u_i} are assumed to be i.i.d. N(0, σ²) and x_i is a known nonstochastic vector. It is assumed that {y_i} and {x_i} are observed for all i, but {y_i*} are unobserved if y_i* ≤ 0. This model is called the censored regression model or the Tobit model (after Tobin, in analogy to probit). If the observations corresponding to y_i* ≤ 0 are totally lost, that is, if {x_i} are not observed whenever y_i* ≤ 0, and if the researcher does not know how many observations exist for which y_i* ≤ 0, the model is called the truncated regression model.
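To make the censoring concrete, here is a minimal simulation sketch (names and parameter values are illustrative, not from the text). The Tobit log-likelihood combines a normal density term for uncensored observations with a normal CDF term Φ(−x_i′β/σ) for censored ones:

```python
import numpy as np
from scipy import stats

# Illustrative sketch: simulate censored (Tobit) data and evaluate the
# Tobit log-likelihood at the true parameters.
rng = np.random.default_rng(1)
n = 500
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([0.5, 1.0])
sigma = 1.0

y_star = x @ beta + rng.normal(scale=sigma, size=n)  # latent y_i*
y = np.where(y_star > 0, y_star, 0.0)                # observed, censored at 0
n_censored = int((y <= 0).sum())

def tobit_loglik(beta, sigma, y, x):
    """Normal logpdf for uncensored obs; Phi(-x'beta/sigma) for censored."""
    xb = x @ beta
    censored = y <= 0
    ll_cens = stats.norm.logcdf(-xb[censored] / sigma).sum()
    ll_unc = stats.norm.logpdf(y[~censored], loc=xb[~censored],
                               scale=sigma).sum()
    return ll_cens + ll_unc

ll = tobit_loglik(beta, sigma, y, x)
```

Maximizing this function over (β, σ) would give the Tobit maximum likelihood estimates; in the truncated model the censored term would instead be dropped and the density renormalized.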

Tobin used this model to explain a household’s expenditure (y) on a ...

In Chapter 10 we discussed the bivariate regression model using summation notation. In this chapter we present basic results in matrix analysis. The multiple regression model with many independent variables can be much more effectively analyzed by using vector and matrix notation. Since our goal is to familiarize the reader with basic results, we prove only those theorems which are so fundamental that the reader can learn important facts from the process of proof itself. For the other proofs we refer the reader to Bellman (1970).

Symmetric matrices play a major role in statistics, and Bellman’s discussion of them is especially good. Additional useful results, especially with respect to nonsymmetric matrices, may be found in a compact paperback volume, Marcus and Minc (1964)...

In Section 7.4.1 we show that the maximum likelihood estimator is the best unbiased estimator under certain conditions. We show this by means of the Cramér-Rao lower bound. In Sections 7.4.2 and 7.4.3 we show the consistency and the asymptotic normality of the maximum likelihood estimator under general conditions. In Section 7.4.3 we define the concept of asymptotic efficiency, which is closely related to the Cramér-Rao lower bound. In Section 7.4.4 examples are given. To avoid mathematical complexity, some results are given without full mathematical rigor. For a rigorous discussion, see Amemiya (1985).
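A small numeric illustration of the Cramér-Rao bound in the simplest case (not from the text; all numbers are illustrative): for an i.i.d. N(μ, σ²) sample of size n, the Fisher information for μ is n/σ², so any unbiased estimator of μ has variance at least σ²/n, and the sample mean attains this bound:

```python
import numpy as np

# Sketch: the sample mean attains the Cramer-Rao bound sigma^2 / n
# for the mean of a normal sample.
rng = np.random.default_rng(3)
mu, sigma, n = 1.0, 2.0, 50
reps = 20000

means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
cramer_rao_bound = sigma ** 2 / n   # = 0.08 here
empirical_var = means.var()         # should be close to the bound
```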

Known Variance-Covariance Matrix

In this subsection we develop the theory of generalized least squares under the assumption that Σ is known (known up to a scalar multiple, to be precise); in the remaining subsections we discuss various ways the elements of Σ are specified as a function of a finite number of parameters so that they can be consistently estimated.

Since Σ is symmetric, by Theorem 11.5.1 we can find an orthogonal matrix H which diagonalizes Σ as H′ΣH = Λ, where Λ is the diagonal matrix consisting of the characteristic roots of Σ. Moreover, since Σ is positive definite, the diagonal elements of Λ are positive by Theorem 11.5.10. Using (11.5.4), we define Σ^(−1/2) = HΛ^(−1/2)H′, where Λ^(−1/2) = D{λ_i^(−1/2)}, where λ_i is the ith diagonal element of Λ. Premultiplying (13.1.1) by Σ^(−1/2), we obtain (13.1...
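The construction above can be sketched numerically (a minimal illustration, with made-up matrices): build Σ^(−1/2) from the eigendecomposition of a known Σ, premultiply y and X by it, and run ordinary least squares on the transformed data. The result coincides with the direct GLS formula (X′Σ⁻¹X)⁻¹X′Σ⁻¹y:

```python
import numpy as np

# Sketch: GLS via the Sigma^{-1/2} = H Lambda^{-1/2} H' transformation.
rng = np.random.default_rng(2)
T, K = 200, 2
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.array([2.0, -1.0])

# A known heteroskedastic Sigma (diagonal for simplicity; any p.d. works)
Sigma = np.diag(0.5 + rng.uniform(size=T))
u = rng.multivariate_normal(np.zeros(T), Sigma)
y = X @ beta + u

lam, H = np.linalg.eigh(Sigma)                 # H' Sigma H = diag(lam)
Sigma_inv_half = H @ np.diag(lam ** -0.5) @ H.T

y_t = Sigma_inv_half @ y                       # transformed model has
X_t = Sigma_inv_half @ X                       # spherical errors
beta_gls, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)

# Same answer as the direct formula (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y
Si = np.linalg.inv(Sigma)
beta_direct = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
```

The transformed disturbances Σ^(−1/2)u have identity covariance (up to the scalar multiple), which is why OLS on the transformed data is appropriate.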
