General linear structural equation models
Up till now, we have discussed several models that specify linear relations among observed and/or latent variables. Such models are called (linear) structural equation models. A general formulation of structural equation models can be given by the following equations.
x_n = Λ_x ξ_n + δ_n, (8.20a)

y_n = Λ_y η_n + ε_n, (8.20b)

η_n = B η_n + Γ ξ_n + ζ_n, (8.20c)
where η_n is a vector of latent endogenous variables for subject n, ξ_n is a vector of latent exogenous variables for subject n, ζ_n is a vector of random residuals, B and Γ are matrices of regression coefficients, Λ_x and Λ_y are matrices of factor loadings, and δ_n and ε_n are vectors of measurement errors. The random vectors δ_n, ε_n, ξ_n, and ζ_n are assumed mutually independent. The formulation (8.20) is known as the LISREL model, named after the widely used LISREL program in which it was implemented (Jöreskog and Sörbom, 1996). It consists of a simultaneous equations system in latent endogenous and exogenous variables (8.20c), where (8.20a) and (8.20b) relate the latent variables to observable variables through a factor analysis structure. The theory of structural equation modeling is discussed by Bollen (1989) and Hoyle (1995), the latter of which also contains applications and practicalities. An overview with more recent topics is given by Bentler and Dudgeon (1996).
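To make the structure of (8.20) concrete, the following numpy sketch computes the model-implied covariance matrix of z_n = (x_n′, y_n′)′ for a small hypothetical model: one exogenous factor ξ with two indicators, one endogenous factor η with two indicators, and η = γξ + ζ. All dimensions and parameter values are illustrative, not taken from the text.

```python
import numpy as np

# Hypothetical parameter values; first loading of each factor fixed to 1
# to set the scale of the latent variable.
Lx = np.array([[1.0], [0.8]])      # Lambda_x
Ly = np.array([[1.0], [0.9]])      # Lambda_y
B  = np.zeros((1, 1))              # no eta -> eta paths in this example
G  = np.array([[0.5]])             # Gamma: effect of xi on eta
Phi = np.array([[2.0]])            # var(xi)
Psi = np.array([[0.3]])            # var(zeta)
Td  = np.diag([0.4, 0.5])          # Theta_delta: var(delta)
Te  = np.diag([0.2, 0.6])          # Theta_epsilon: var(epsilon)

A = np.linalg.inv(np.eye(1) - B)   # reduced form of (8.20c): eta = A(G xi + zeta)

# Model-implied covariance matrix of z = (x', y')'
Sxx = Lx @ Phi @ Lx.T + Td
Sxy = Lx @ Phi @ G.T @ A.T @ Ly.T
Syy = Ly @ A @ (G @ Phi @ G.T + Psi) @ A.T @ Ly.T + Te
Sigma = np.block([[Sxx, Sxy], [Sxy.T, Syy]])
```

Every covariance restriction the model imposes is encoded in this mapping from parameters to Sigma; estimation then amounts to matching Sigma to its sample counterpart.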
It turns out that a large number of models can be written as submodels of this model. Examples of submodels are standard linear regression models, simultaneous equations linear regression models, linear regression models with measurement errors, MANOVA, factor analysis, and MIMIC models. The general model is, of course, highly underidentified. In practice, many restrictions are imposed on the parameters: for example, many loadings and regression coefficients are fixed to zero, and the scales of the latent variables are fixed by setting a factor loading or a variance parameter to one. The advantage of the general formulation is that all restricted models can easily be estimated by the same computer program and that theoretical properties of estimators can be derived for a large class of models at the same time. For a given set of restrictions (i.e., a given model), the identification of the model can be checked by the program IDLIS (Bekker et al., 1994). For the important special case of simultaneous equations with measurement error (i.e., x_n and ξ_n of the same order, Λ_x = I, and analogously for y_n, η_n, and Λ_y), identification conditions are given by Merckens and Bekker (1993) and estimation is discussed by Wooldridge (1996).
The best-known software packages for structural equation modeling are LISREL, including the preprocessor PRELIS (Jöreskog and Sörbom, 1996), EQS (Bentler, 1995), AMOS (Arbuckle, 1997), and SAS/CALIS. A newer software package, which can also estimate latent class models, is Mplus (Muthén and Muthén, 1998).
If a specific distribution of the observed variables is assumed, typically the normal distribution, the model can be estimated by maximum likelihood (ML). An alternative estimation method is (nonlinear) generalized least squares (GLS). Assume that we have a vector of sample statistics s_N, which usually consists of the diagonal and subdiagonal elements of the sample covariance matrix S_N of z_n = (x_n′, y_n′)′. Further, assume that
√N (s_N − σ(θ)) →_d N(0, Ψ),
where the vector σ(θ) = plim s_N and θ is the vector of free parameters. This assumption is usually satisfied under very mild regularity conditions. The estimator is obtained by minimizing the function

F(θ) = (s_N − σ(θ))′ W (s_N − σ(θ)), (8.21)
where W is a symmetric positive definite matrix. If plim W⁻¹ = Ψ, W is optimal in the sense that the resulting estimator has the smallest asymptotic covariance matrix in the Löwner sense.
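The GLS criterion can be sketched in a few lines of code. The example below fits a one-factor model with three indicators to simulated data by minimizing (s_N − σ(θ))′ W (s_N − σ(θ)); all values are hypothetical, and W = I is used, which is a valid but not optimal choice of weight matrix.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate data from a one-factor model with three indicators (illustrative only).
rng = np.random.default_rng(0)
N = 2000
xi = rng.normal(size=N)                      # latent factor scores
lam_true = np.array([1.0, 0.8, 0.6])
Z = xi[:, None] * lam_true + rng.normal(scale=0.5, size=(N, 3))

S = np.cov(Z, rowvar=False)                  # sample covariance matrix S_N
idx = np.tril_indices(3)                     # nonduplicated elements (diagonal + below)
s = S[idx]                                   # vector of sample statistics s_N
W = np.eye(len(s))                           # identity weight: unweighted least squares

def sigma(theta):
    # theta = (lam2, lam3, phi, psi1, psi2, psi3); lam1 fixed to 1 for scale
    lam = np.array([1.0, theta[0], theta[1]])
    Sig = theta[2] * np.outer(lam, lam) + np.diag(theta[3:6])
    return Sig[idx]

def F(theta):
    d = s - sigma(theta)
    return d @ W @ d

res = minimize(F, np.array([0.5, 0.5, 1.0, 0.3, 0.3, 0.3]), method="BFGS")
```

With three indicators this model is just identified (six moments, six parameters), so the minimized criterion is essentially zero; with more indicators the minimum would generally be positive.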
If s_N consists of the nonduplicated elements of the sample covariance matrix, the elements of the matrix Ψ are given by the formula Ψ_ij,kl = σ_ijkl − σ_ij σ_kl, where Ψ_ij,kl is the asymptotic covariance between the (i, j)th and (k, l)th elements of √N S_N, σ_ijkl = E(z_ni z_nj z_nk z_nl), and σ_ij = E(z_ni z_nj). An asymptotically optimal W is given by letting W⁻¹ have elements s_ijkl − s_ij s_kl. This estimator is called the asymptotically distribution free (ADF) estimator (Browne, 1984), denoted by θ̂_ADF (although in EQS it is called AGLS and in LISREL it is called WLS). The asymptotic distribution of the ADF estimator is given by
√N (θ̂_ADF − θ) →_d N(0, (Δ′ Ψ⁻¹ Δ)⁻¹),

where Δ = ∂σ/∂θ′, evaluated at the true value of θ. The asymptotic covariance matrix can be consistently estimated by evaluating Δ at θ̂ and inserting W for Ψ⁻¹. Note that the ADF estimator, as well as all estimators discussed before, are special cases of generalized method of moments (GMM) estimators; see Hall (2001).
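The optimal W⁻¹ just described is built from fourth-order sample moments. A minimal sketch of its computation, with a hypothetical function name and simulated data:

```python
import numpy as np

def adf_weight_inverse(Z):
    """Estimate of Psi: elements s_ijkl - s_ij * s_kl, indexed by the
    nonduplicated (lower-triangular) elements of the covariance matrix."""
    Zc = Z - Z.mean(axis=0)                 # center the data
    N, p = Zc.shape
    pairs = list(zip(*np.tril_indices(p)))  # pairs (i, j) with i >= j
    S = Zc.T @ Zc / N                       # s_ij (divisor N)
    Winv = np.empty((len(pairs), len(pairs)))
    for a, (i, j) in enumerate(pairs):
        for b, (k, l) in enumerate(pairs):
            s_ijkl = np.mean(Zc[:, i] * Zc[:, j] * Zc[:, k] * Zc[:, l])
            Winv[a, b] = s_ijkl - S[i, j] * S[k, l]
    return Winv

rng = np.random.default_rng(1)
Z = rng.normal(size=(500, 3))               # illustrative data
Winv = adf_weight_inverse(Z)                # 6 x 6 for p = 3 variables
```

In practice W is obtained by inverting this matrix, whose dimension grows rapidly with the number of variables; this is why ADF estimation requires large samples.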
Once a structural equation model has been estimated, it is important to assess the fit of the model, i.e., whether the model and the data agree. Many statistics have been proposed for assessing model fit. Most of these are functions of F̂ = F(θ̂), where F denotes the function (8.21) that is minimized. In this section, it is assumed that plim W⁻¹ = Ψ. The statistic most frequently used is the chi-square statistic χ² = N F̂, which is a formal test statistic for the null hypothesis that the model is correct in the population against the alternative hypothesis that it is not. Under the null hypothesis, this test statistic converges to a chi-square variate with df = p* − q degrees of freedom, where p* is the number of elements of σ(θ) and q is the number of elements of θ.
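The chi-square test itself is a one-liner; the sketch below uses hypothetical numbers for the sample size, the minimized fit function value, and the counts of moments and free parameters.

```python
from scipy.stats import chi2

# Hypothetical values: N observations, minimized fit function F_hat,
# p_star fitted moments, q free parameters.
N, F_hat, p_star, q = 400, 0.0175, 10, 7

chi_sq = N * F_hat          # chi-square statistic: 7.0
df = p_star - q             # degrees of freedom: 3
p_value = chi2.sf(chi_sq, df)
```

Here the p-value falls between 0.05 and 0.10, so the model would not be rejected at the 5 percent level; doubling N with the same F̂ would double χ² and reject it, which foreshadows the large-sample problem discussed next.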
In practice, however, models are obviously rarely entirely correct in the population. For the GLS estimators, F̂ converges to F+ = (σ+ − σ(θ+))′ Ψ⁻¹ (σ+ − σ(θ+)) for some θ+, where σ+ = plim s_N. If the model is correct in the population, σ+ = σ(θ+) and F+ = 0. If the model is not entirely correct in the population, σ+ ≠ σ(θ+) and F+ > 0. Hence, χ² ≈ N F+ → +∞. This explains the empirical finding that for large sample sizes, nonsaturated models (i.e., models with df > 0) tend to be rejected, even though they may describe the data very well. Therefore, alternative measures of fit have been developed. The quality of the model may be defined by the quantity
(F0 − F1)/F0, (8.22)
where F0 is defined similarly to F+, but for a highly restrictive baseline model or null model, and F1 is F+ for the target model. It is customary to use the independence model, in which all variables are assumed to be independently distributed, as the null model. Clearly, (8.22) is very similar to R². It is always between zero and one, with higher values indicating better fit. It may be estimated by the (Bentler–Bonett) normed fit index NFI = (F̂0 − F̂1)/F̂0. The NFI has been widely used since its introduction by Bentler and Bonett (1980).
However, simulation studies and theoretical derivations have shown that the NFI is biased in finite samples and that its mean is generally an increasing function of N. By approximating the distribution of N F̂ by a noncentral chi-square distribution, a better estimator of (8.22) has been derived. This is the relative noncentrality index (RNI),

RNI = (δ̂0 − δ̂1)/δ̂0,
where δ̂_i = F̂_i − df_i/N (McDonald and Marsh, 1990). A disadvantage of the RNI is that it is not necessarily between zero and one, although it usually is. This disadvantage is overcome by the comparative fit index (CFI; Bentler, 1990), which is generally equal to the RNI, except that CFI = 1 if RNI > 1 and CFI = 0 if RNI < 0, provided δ̂0 > 0, which is usually the case.
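All three indices can be computed directly from the two models' chi-square statistics, since F̂_i = χ²_i/N. A small sketch with hypothetical numbers:

```python
def fit_indices(chi0, df0, chi1, df1, N):
    """NFI, RNI, and CFI from the baseline (0) and target (1) model
    chi-square statistics; F_hat_i = chi_i / N, delta_i = F_hat_i - df_i / N."""
    F0, F1 = chi0 / N, chi1 / N
    nfi = (F0 - F1) / F0
    d0, d1 = F0 - df0 / N, F1 - df1 / N
    rni = (d0 - d1) / d0
    cfi = min(max(rni, 0.0), 1.0)   # truncate the RNI to [0, 1]
    return nfi, rni, cfi

# Hypothetical values: a badly fitting independence model and a good target model.
nfi, rni, cfi = fit_indices(chi0=900.0, df0=15, chi1=20.0, df1=8, N=500)
```

The degrees-of-freedom correction in δ̂_i is what removes the small-sample bias of the NFI; here the RNI is slightly larger than the NFI, and since it falls in [0, 1] the CFI equals it.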
* The authors would like to thank Anne Boomsma, Bart Boon, Jos ten Berge, Michel Wedel, and an anonymous referee for their helpful comments on an earlier version of this paper.
1 Strictly speaking, this violates the iid assumptions used in this chapter. It would be theoretically better to specify the model with nonzero means and intercepts. The practical consequences of this violation are, however, negligible, whereas the formulas are considerably less complicated. Therefore, in this chapter we ignore the resulting theoretical subtleties.
Aigner, D. J., C. Hsiao, A. Kapteyn, and T. J. Wansbeek (1984). Latent variable models in econometrics. In Z. Griliches and M. D. Intriligator (eds.) Handbook of Econometrics, Volume 2. pp. 1321-93. Amsterdam: North-Holland.
Alonso-Borrego, C., and M. Arellano (1999). Symmetrically normalized instrumental-variable estimation using panel data. Journal of Business & Economic Statistics 17, 36-49.
Angrist, J. D., G. W. Imbens, and A. B. Krueger (1999). Jackknife instrumental variables estimation. Journal of Applied Econometrics 14, 57-67.
Arbuckle, J. L. (1997). Amos User’s Guide. Version 3.6. Chicago: Smallwaters.
Baltagi, B. H. (1995). Econometric Analysis of Panel Data. Chichester: Wiley.
Bekker, P. A. (1986). Comment on identification in the linear errors in variables model. Econometrica 54, 215-17.
Bekker, P. A. (1994). Alternative approximations to the distributions of instrumental variable estimators. Econometrica 62, 657-81.
Bekker, P. A., P. Dobbelstein, and T. J. Wansbeek (1996). The APT model as reduced rank regression. Journal of Business & Economic Statistics 14, 199-202.
Bekker, P. A., A. Kapteyn, and T. J. Wansbeek (1984). Measurement error and endogeneity in regression: bounds for ML and 2SLS estimates. In T. K. Dijkstra (ed.) Misspecification Analysis. pp. 85-103. Berlin: Springer.
Bekker, P. A., A. Kapteyn, and T. J. Wansbeek (1987). Consistent sets of estimates for regressions with correlated or uncorrelated measurement errors in arbitrary subsets of all variables. Econometrica 55, 1223-30.
Bekker, P. A., A. Merckens, and T. J. Wansbeek (1994). Identification, Equivalent Models, and Computer Algebra. Boston: Academic Press.
Bekker, P. A., T. J. Wansbeek, and A. Kapteyn (1985). Errors in variables in econometrics: New developments and recurrent themes. Statistica Neerlandica 39, 129-41.
Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin 107, 238-46.
Bentler, P. M. (1995). EQS Structural Equations Program Manual. Encino, CA: Multivariate Software.
Bentler, P. M., and D. G. Bonett (1980). Significance tests and goodness of fit in the analysis of covariance structures. Psychological Bulletin 88, 588-606.
Bentler, P. M., and P. Dudgeon (1996). Covariance structure analysis: Statistical practice, theory, and directions. Annual Review of Psychology 47, 563-92.
Biørn, E. (1992a). The bias of some estimators for panel data models with measurement errors. Empirical Economics 17, 51-66.
Biørn, E. (1992b). Panel data with measurement errors. In L. Mátyás and P. Sevestre (eds.) The Econometrics of Panel Data. Dordrecht: Kluwer.
Bollen, K. A. (1989). Structural Equations with Latent Variables. New York: Wiley.
Bound, J., D. A. Jaeger, and R. M. Baker (1995). Problems with instrumental variables estimation when the correlation between the instruments and the endogenous explanatory variable is weak. Journal of the American Statistical Association 90, 443-50.
Bowden, R. J., and D. A. Turkington (1984). Instrumental Variables. Cambridge, UK: Cambridge University Press.
Browne, M. W. (1984). Asymptotically distribution-free methods for the analysis of covariance structures. British Journal of Mathematical and Statistical Psychology 37, 62-83.
Cheng, C.-L., and J. W. Van Ness (1999). Statistical Regression with Measurement Error. London: Arnold.
Cragg, J. G., and S. G. Donald (1997). Inferring the rank of a matrix. Journal of Econometrics 76, 223-50.
Erickson, T. (1993). Restricting regression slopes in the errors-in-variables model by bounding the error correlation. Econometrica 61, 959-69.
Fuller, W. A. (1987). Measurement Error Models. New York: Wiley.
Goldberger, A. S. (1984a). Redirecting reverse regression. Journal of Business & Economic Statistics 2, 114-16.
Goldberger, A. S. (1984b). Reverse regression and salary discrimination. The Journal of Human Resources 19, 293-319.
Griliches, Z. (1986). Economic data issues. In Z. Griliches and M. D. Intriligator (eds.) Handbook of Econometrics, Volume 3. Amsterdam: North-Holland.
Griliches, Z., and J. A. Hausman (1986). Errors in variables in panel data. Journal of Econometrics 32, 93-118.
Hall, A. R. (2001). Generalized method of moments. In B. H. Baltagi (ed.) A Companion to Theoretical Econometrics. Oxford: Blackwell Publishing. (this volume)
Hoyle, R. (ed.). (1995). Structural Equation Modeling: Concepts, Issues, and Applications. Thousand Oaks, CA: Sage.
Jöreskog, K. G., and A. S. Goldberger (1975). Estimation of a model with multiple indicators and multiple causes of a single latent variable. Journal of the American Statistical Association 70, 631-9.
Jöreskog, K. G., and D. Sörbom (1996). LISREL 8 User's Reference Guide. Chicago: Scientific Software International.
Kapteyn, A., and T. J. Wansbeek (1984). Errors in variables: Consistent Adjusted Least Squares (CALS) estimation. Communications in Statistics – Theory and Methods 13, 1811-37.
Manski, C. F. (1995). Identification Problems in the Social Sciences. Cambridge, MA: Harvard University Press.
McDonald, R. P., and H. W. Marsh (1990). Choosing a multivariate model: Noncentrality and goodness of fit. Psychological Bulletin 107, 247-55.
Merckens, A., and P. A. Bekker (1993). Identification of simultaneous equation models with measurement error: A computerized evaluation. Statistica Neerlandica 47, 233-44.
Muthén, B. O., and L. K. Muthén (1998). Mplus User's Guide. Los Angeles: Muthén & Muthén.
Nelson, C. R., and R. Startz (1990a). Some further results on the exact small sample properties of the instrumental variables estimator. Econometrica 58, 967-76.
Nelson, C. R., and R. Startz (1990b). The distribution of the instrumental variables estimator and its t-ratio when the instrument is a poor one. Journal of Business 63, 125-40.
Poirier, D. J. (1998). Revising beliefs in nonidentified models. Econometric Theory 14, 483-509.
Reiersøl, O. (1950). Identifiability of a linear relation between variables which are subject to error. Econometrica 18, 375-89.
Reinsel, G. C., and R. P. Velu (1998). Multivariate Reduced Rank Regression: Theory and Applications. New York: Springer.
Staiger, D., and J. H. Stock (1997). Instrumental variables regression with weak instruments. Econometrica 65, 557-86.
Ten Berge, J. M. F. (1993). Least Squares Optimization in Multivariate Analysis. Leiden, The Netherlands: DSWO Press.
Van Montfort, K., A. Mooijaart, and J. De Leeuw (1987). Regression with errors in variables: Estimators based on third order moments. Statistica Neerlandica 41, 223-39.
Wald, A. (1940). The fitting of straight lines if both variables are subject to error. Annals of Mathematical Statistics 11, 284-300.
Wansbeek, T. J. (1989). Permutation matrix – II. In S. Kotz and N. L. Johnson (eds.) Encyclopedia of Statistical Sciences, Supplement Volume. pp. 121-2. New York: Wiley.
Wansbeek, T. J., and R. H. Koning (1991). Measurement error and panel data. Statistica Neerlandica 45, 85-92.
Wooldridge, J. M. (1996). Estimating systems of equations with different instruments for different equations. Journal of Econometrics 74, 387-405.