MIMIC and reduced rank regression

Above, we considered elaborations of equation (8.18), which provided information about the otherwise unidentified parameters in the form of an additional indicator. The latent variable appeared once more as the exogenous variable in yet another regression equation. Another way in which additional information may be available is in the form of a regression equation with the latent variable as the endogenous variable:

ξ_n = w_n′α + u_n,   (8.19)

where w_n is an l-vector of observable variables that "cause" ξ_n, α is an l-vector of regression coefficients, and u_n is an iid disturbance term with mean zero and variance σ²_u. Note that in this case, w_n can also be used as a vector of instrumental variables.

We now show that an additional relation of the form (8.19) can help identification. To that end, it is convenient to write the model in matrix format. For (8.19) this gives ξ = Wα + u, and for the factor analysis structure, Y = ξλ′ + E, in self-evident notation. The model consisting of these two elements is known as the multiple indicators-multiple causes (MIMIC) model (Jöreskog and Goldberger, 1975) and relates a number of exogenous variables (causes) to a number of endogenous variables (indicators) through a single latent variable. In reduced form, it is

Y = Wαλ′ + (E + uλ′).

This multivariate regression system has two kinds of restriction on its parameters. First, the matrix of regression coefficients αλ′, which is of order l × M, has rank one. Second, the disturbance vector has a covariance matrix of the form Σ = Ψ + σ²_u λλ′, with Ψ diagonal, which is a one-factor FA structure. One normalization on the parameters α, λ, and σ²_u is required since, for any c > 0, multiplying α by c, dividing λ by c, and multiplying σ²_u by c² has no observable implications. Under this normalization, the model is identified.
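As an illustrative sketch (all dimensions, coefficients, and noise levels below are hypothetical, not taken from the text), one can simulate data from the MIMIC model and check numerically that the reduced-form coefficient matrix is close to rank one:

```python
import numpy as np

rng = np.random.default_rng(0)
N, l, M = 5000, 3, 4                       # hypothetical sample size and dimensions

alpha = np.array([1.0, -0.5, 0.8])         # hypothetical "cause" coefficients (l-vector)
lam = np.array([1.0, 0.7, -0.4, 1.2])      # hypothetical factor loadings (M-vector)
sigma_u = 0.5

W = rng.standard_normal((N, l))
xi = W @ alpha + sigma_u * rng.standard_normal(N)   # xi = W alpha + u
E = 0.3 * rng.standard_normal((N, M))
Y = np.outer(xi, lam) + E                  # Y = xi lam' + E

# Reduced-form OLS: the l x M coefficient matrix should approximate
# alpha lam', which has rank one, so the first singular value dominates.
Pi_hat = np.linalg.lstsq(W, Y, rcond=None)[0]
s = np.linalg.svd(Pi_hat, compute_uv=False)
print(s)
```

With a large sample, the second and later singular values shrink toward zero, reflecting the rank-one restriction on Wαλ′ in the reduced form.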

A frequently used generalization of the MIMIC model is the reduced rank regression (RRR) model. It extends MIMIC in two ways. First, the rank of the coefficient matrix can be larger than one. Second, the error covariance matrix need not be structured. Then

Y = WAΛ′ + F,

where A and Λ are l × r and M × r matrices, respectively, both of full column rank r < min(M, l), and F has iid rows with expectation zero and unrestricted covariance matrix Ψ. See, e.g., Cragg and Donald (1997) for tests of the rank r of the matrix of regression coefficients. Bekker, Dobbelstein, and Wansbeek (1996) showed that the arbitrage pricing theory model can be written as an RRR model with rank one. In its general form, A and Λ are not identified, because A*Λ*′ = (AT)(T⁻¹Λ′) = AΛ′ for every nonsingular r × r matrix T. In some cases, the identification problem may be resolved by using restrictions derived from substantive theory, whereas in others an arbitrary normalization can be used. Ten Berge (1993, section 4.6) and Reinsel and Velu (1998) give an extensive discussion of the reduced rank regression model and its relations to other multivariate statistical methods.
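A minimal numerical sketch of both points (all dimensions and data hypothetical; the rank-r fit here is obtained by SVD truncation of the OLS estimate, which coincides with the RRR estimator only under spherical errors, not for general Ψ):

```python
import numpy as np

rng = np.random.default_rng(1)
N, l, M, r = 2000, 5, 6, 2                 # hypothetical dimensions, rank r

A = rng.standard_normal((l, r))            # l x r, full column rank
Lam = rng.standard_normal((M, r))          # M x r, full column rank
W = rng.standard_normal((N, l))
F = 0.5 * rng.standard_normal((N, M))
Y = W @ A @ Lam.T + F                      # Y = W A Lam' + F

# Unrestricted OLS, then truncate to rank r via the SVD.
Pi = np.linalg.lstsq(W, Y, rcond=None)[0]
U, s, Vt = np.linalg.svd(Pi, full_matrices=False)
Pi_r = (U[:, :r] * s[:r]) @ Vt[:r]         # best rank-r approximation of Pi

# Rotation non-identification: (A T)(T^{-1} Lam') = A Lam'
# for any nonsingular r x r matrix T.
T = np.array([[2.0, 1.0], [0.0, 3.0]])
lhs = (A @ T) @ (np.linalg.inv(T) @ Lam.T)
print(np.allclose(lhs, A @ Lam.T))         # True
```

The last two lines illustrate why A and Λ are only identified up to a nonsingular transformation T, so substantive restrictions or a normalization must pin them down.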
