Linear Simultaneous Equations Models
In this chapter we shall give only the basic facts concerning the estimation of the parameters in linear simultaneous equations. A major purpose of the chapter is to provide a basis for the discussion of nonlinear simultaneous equations to be given in the next chapter. Another purpose is to provide a rigorous derivation of the asymptotic properties of several commonly used estimators. For more detailed discussion of linear simultaneous equations, the reader is referred to textbooks by Christ (1966) and Malinvaud (1980).
We can write the simultaneous equations model as
YΓ = XB + U,  (7.1.1)
where Y is a T × N matrix of observable random variables (endogenous variables), X is a T × K matrix of known constants (exogenous variables), U is a T × N matrix of unobservable random variables, and Γ and B are N × N and K × N matrices of unknown parameters. We denote the (t, i)th element of Y by y_{ti}, the ith column of Y by y_i, and the tth row of Y by y'_{(t)}, and similarly for X and U. This notation is consistent with that of Chapter 1.
As an example of the simultaneous equations model, consider the following demand and supply equations:
Demand: p_t = γ_1 q_t + x'_{1t} β_1 + u_{1t}.
Supply: q_t = γ_2 p_t + x'_{2t} β_2 + u_{2t}.
The demand equation specifies the price the consumer is willing to pay for given values of the quantity and the independent variables plus the error term, and the supply equation specifies the quantity the producer is willing to supply for given values of the price and the independent variables plus the error term. The observed price and quantity are assumed to be the equilibrium values that satisfy both equations. This is the classic explanation of how a simultaneous equations model arises. For an interesting alternative explanation in which
the simultaneous equations model is regarded as the limit of a multivariate time series model as the length of the time lag goes to 0, see articles by Strotz (1960) and Fisher (1970).
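The equilibrium story above can be checked numerically. The following sketch (all parameter values are hypothetical, chosen only for illustration) solves the two structural equations jointly for the equilibrium price and quantity at each t:

```python
import numpy as np

# Hypothetical parameter values chosen only for illustration.
rng = np.random.default_rng(0)
T = 500
gamma1, gamma2 = -0.5, 0.8          # demand slope on q, supply slope on p
beta1, beta2 = 2.0, 1.0             # coefficients on the exogenous variables
x1 = rng.normal(size=T)             # demand shifter
x2 = rng.normal(size=T)             # supply shifter
u1 = rng.normal(scale=0.1, size=T)  # demand disturbance
u2 = rng.normal(scale=0.1, size=T)  # supply disturbance

# Solve the two structural equations jointly for the equilibrium (p_t, q_t):
#   p_t = gamma1*q_t + beta1*x1_t + u1_t
#   q_t = gamma2*p_t + beta2*x2_t + u2_t
det = 1.0 - gamma1 * gamma2
p = (gamma1 * (beta2 * x2 + u2) + beta1 * x1 + u1) / det
q = gamma2 * p + beta2 * x2 + u2

# Both structural equations hold at the observed (equilibrium) values.
assert np.allclose(p, gamma1 * q + beta1 * x1 + u1)
assert np.allclose(q, gamma2 * p + beta2 * x2 + u2)
```

Note that p_t and q_t are determined jointly, so each depends on both disturbances; this is the source of the correlation between regressors and errors that distinguishes simultaneous equations from ordinary regression.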
We impose the following assumptions:
Assumption 7.1.1. The sequence of N-vectors {u_{(t)}} is i.i.d. with zero mean and an unknown covariance matrix Σ. (Thus EU = 0 and E T^{-1} U'U = Σ.) We do not assume normality of {u_{(t)}}, although some estimators considered in this chapter are obtained by maximizing a normal density.
Assumption 7.1.2. The rank of X is K, and lim T^{-1} X'X exists and is nonsingular.
Assumption 7.1.3. Γ is nonsingular.
Solving (7.1.1) for Y, we obtain
Y = XΠ + V,  (7.1.2)
where
Π = BΓ^{-1}  (7.1.3)
and V = UΓ^{-1}. We define Λ = Γ'^{-1} Σ Γ^{-1}. We shall call (7.1.2) the reduced form equations, in contrast to (7.1.1), which are called the structural equations.
We assume that the diagonal elements of Γ are ones. This is merely a normalization and involves no loss of generality. In addition, we assume that certain elements of Γ and B are zeros.1 Let −γ_i be the column vector consisting of those elements of the ith column of Γ that are specified to be neither 1 nor 0, and let β_i be the column vector consisting of those elements of the ith column of B that are not specified to be 0. Also, let Y_i and X_i be the subsets of the columns of Y and X that are postmultiplied by −γ_i and β_i, respectively. Then we can write the ith structural equation as
y_i = Y_i γ_i + X_i β_i + u_i  (7.1.4)
    ≡ Z_i α_i + u_i,
where Z_i = (Y_i, X_i) and α_i = (γ'_i, β'_i)'.
We denote the number of columns of Y_i and X_i by N_i and K_i, respectively. Combining the N such equations, we can write (7.1.1) alternatively as
y = Zα + u,  (7.1.5)
where 
y = (y'_1, y'_2, . . . , y'_N)',   α = (α'_1, α'_2, . . . , α'_N)',   u = (u'_1, u'_2, . . . , u'_N)',

and

Z = diag(Z_1, Z_2, . . . , Z_N).
We define Ω ≡ Euu' = Σ ⊗ I_T. Note that (7.1.5) is analogous to the multivariate regression model (6.4.2), except that Z in (7.1.5) includes endogenous variables.
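The block structures just defined can be made concrete with a small sketch (all shapes and values are hypothetical):

```python
import numpy as np

# A tiny two-equation illustration with T = 3 observations.
T = 3
Z1 = np.ones((T, 2))        # regressors of equation 1, i.e. (Y_1, X_1)
Z2 = 2.0 * np.ones((T, 1))  # regressors of equation 2

# Z = diag(Z_1, Z_2): block-diagonal, of dimension (2T) x (2 + 1).
Z = np.zeros((2 * T, Z1.shape[1] + Z2.shape[1]))
Z[:T, :Z1.shape[1]] = Z1
Z[T:, Z1.shape[1]:] = Z2

# Omega = Sigma kron I_T for a hypothetical 2 x 2 contemporaneous
# covariance matrix Sigma.
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
Omega = np.kron(Sigma, np.eye(T))

assert Omega.shape == (2 * T, 2 * T)
# The (i, j) block of Omega is Sigma[i, j] * I_T, since E u_i u_j' =
# sigma_ij I_T when the rows of U are i.i.d. with covariance Sigma.
assert np.allclose(Omega[:T, T:], Sigma[0, 1] * np.eye(T))
```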
We now ask, Is α_i identified? The precise definition of identification differs among authors and can be very complicated. In this book we shall take a simple approach and use the word synonymously with "existence of a consistent estimator."2 Thus our question is, Is there a consistent estimator of α_i? Because there is a consistent estimator of Π under our assumptions (for example, the least squares estimator Π̂ = (X'X)^{-1}X'Y will do), our question can be paraphrased as, Does (7.1.3) uniquely determine α_i when Π is determined?
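A sketch of the least squares estimator of Π mentioned above, applied to simulated data with a hypothetical Π (the small noise scale is chosen only so the check is tight at moderate T):

```python
import numpy as np

rng = np.random.default_rng(1)
T, K, N = 200, 3, 2
X = rng.normal(size=(T, K))
Pi = np.array([[1.0, 0.5],
               [0.0, -1.0],
               [0.3, 0.2]])              # hypothetical reduced-form coefficients
V = rng.normal(scale=0.01, size=(T, N))  # reduced-form disturbances
Y = X @ Pi + V

# Least squares equation by equation: Pi_hat = (X'X)^{-1} X'Y.
Pi_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# With small disturbances, the estimate is close to the true Pi.
assert np.allclose(Pi_hat, Pi, atol=0.05)
```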
To answer this question, we write the part of (7.1.3) that involves γ_i and β_i as

π_{i1} − Π_{i1} γ_i = β_i  (7.1.6)

and

π_{i0} − Π_{i0} γ_i = 0.  (7.1.7)
Here, (π'_{i1}, π'_{i0})' is the ith column of Π, and (Π'_{i1}, Π'_{i0})' is the subset of the columns of Π that are postmultiplied by γ_i. The second subscript, 0 or 1, indicates the rows corresponding to the zero or nonzero elements of the ith column of B. Note that Π_{i0} is a K(i) × N_i matrix, where K(i) = K − K_i. From (7.1.7) it is clear that γ_i is uniquely determined if and only if

rank(Π_{i0}) = N_i.  (7.1.8)
This is called the rank condition of identifiability. It is clear from (7.1.6) that once γ_i is uniquely determined, β_i is uniquely determined. For (7.1.8) to hold, it is necessary to assume

K(i) ≥ N_i,  (7.1.9)

which means that the number of excluded exogenous variables is greater than or equal to the number of included endogenous variables. The condition (7.1.9) is called the order condition of identifiability.3
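Both conditions are mechanical to check once Π_{i0} is known. A sketch with a hypothetical Π_{i0} (K(i) = 3, N_i = 2), which also verifies that (7.1.7) then determines γ_i uniquely:

```python
import numpy as np

# A hypothetical reduced-form coefficient block Pi_i0: K(i) = 3 excluded
# exogenous variables, N_i = 2 included endogenous variables.
Pi_i0 = np.array([[1.0, 0.5],
                  [0.2, 1.0],
                  [0.0, 0.3]])
K_i0, N_i = Pi_i0.shape

# Order condition (7.1.9): K(i) >= N_i (necessary).
assert K_i0 >= N_i
# Rank condition (7.1.8): rank(Pi_i0) = N_i (necessary and sufficient).
assert np.linalg.matrix_rank(Pi_i0) == N_i

# Under (7.1.8), the linear system pi_i0 = Pi_i0 gamma_i of (7.1.7)
# has exactly one solution, so gamma_i is recovered uniquely.
gamma_true = np.array([0.7, -0.4])
pi_i0 = Pi_i0 @ gamma_true
gamma_solved, *_ = np.linalg.lstsq(Pi_i0, pi_i0, rcond=None)
assert np.allclose(gamma_solved, gamma_true)
```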
[Figure 7.1  Demand curves with different values of the independent variables, plotted against quantity q.]
If (7.1.8) does not hold, we say α_i is not identified or is underidentified. If (7.1.8) holds and, moreover, K(i) = N_i, we say α_i is exactly identified or just-identified. If (7.1.8) holds and K(i) > N_i, we say α_i is overidentified.
If β_1 ≠ 0 and β_2 = 0 in the demand and supply model given at the beginning of this section, γ_2 is identified but γ_1 is not. This fact is illustrated in Figure 7.1, where the equilibrium values of the quantity and the price are scattered along the supply curve as the demand curve shifts with the values of the independent variables. Under the same assumption on the β's, we have
Π_2 = γ_2 Π_1,  (7.1.10)
where Π_1 and Π_2 are the coefficients on x_1 in the reduced form equations for p and q, respectively. From (7.1.10) it is clear that if β_1 consists of a single element, γ_2 is exactly identified, whereas if β_1 is a vector of more than one element, γ_2 is overidentified.
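A one-line numerical check of this identification argument, with hypothetical parameter values (when β_2 = 0, solving the structural system gives Π_1 = β_1/(1 − γ_1 γ_2) for p and Π_2 = γ_2 Π_1 for q):

```python
import numpy as np

# Hypothetical structural parameters with beta_2 = 0.
gamma1, gamma2, beta1 = -0.5, 0.8, 2.0

# Reduced-form coefficients on x_1 implied by the structural system.
Pi1 = beta1 / (1.0 - gamma1 * gamma2)  # equation for p
Pi2 = gamma2 * Pi1                     # equation for q, i.e. (7.1.10)

# When beta_1 is a scalar, gamma_2 is exactly identified: it is recovered
# uniquely as the ratio of the two reduced-form coefficients.
assert np.isclose(Pi2 / Pi1, gamma2)
```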