Misspecification and tests

In the sample selection model (18.1), the data on observed outcomes y are censored, and least squares estimators of β obtained using the observed y suffer from selectivity bias when the disturbances u and e are correlated. If u and e were independent, however, the model would have a simple structure: β and σ² could be estimated by applying ordinary least squares to the observed outcome equation, and γ of the selection equation could be estimated by the probit MLE. To test for sample selectivity under normal disturbances, one may examine the hypothesis H₀: ρ = 0. The score test statistic for this hypothesis is derived in Melino (1982). The score test statistic has the same asymptotic distribution as the t-statistic for testing the significance of the coefficient of the bias-correction term in a two-stage estimation procedure. The simple t-statistic is an asymptotically efficient test statistic.
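As an illustration, the two-stage procedure and the t-statistic on the bias-correction (inverse Mills ratio) term can be sketched as follows. The data-generating process, sample size, and parameter values are hypothetical, and the naive OLS standard error is used for the t-statistic (the text's point is that under H₀: ρ = 0 this simple t-statistic is asymptotically valid):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 2000

# Selection equation: d = 1{z'gamma + u > 0}; y is observed only when d = 1.
z = np.column_stack([np.ones(n), rng.normal(size=n)])
gamma = np.array([0.2, 1.0])
u = rng.normal(size=n)
d = (z @ gamma + u > 0).astype(float)

# Outcome equation with disturbance correlated with u (rho = 0.5 here).
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
e = 0.5 * u + np.sqrt(1.0 - 0.25) * rng.normal(size=n)
y = x @ beta + e

# Stage 1: probit MLE for gamma of the selection equation.
def neg_loglik(g):
    idx = z @ g
    return -np.sum(d * norm.logcdf(idx) + (1 - d) * norm.logcdf(-idx))

g_hat = minimize(neg_loglik, np.zeros(2)).x

# Stage 2: OLS of y on x and the inverse Mills ratio, on the selected sample.
mills = norm.pdf(z @ g_hat) / norm.cdf(z @ g_hat)
sel = d == 1
X2 = np.column_stack([x[sel], mills[sel]])
coef, *_ = np.linalg.lstsq(X2, y[sel], rcond=None)

# Naive t-statistic on the bias-correction coefficient (the last column).
resid = y[sel] - X2 @ coef
s2 = resid @ resid / (X2.shape[0] - X2.shape[1])
se = np.sqrt(s2 * np.linalg.inv(X2.T @ X2).diagonal())
t_mills = coef[-1] / se[-1]
print(t_mills)
```

With ρ = 0.5 in the simulated data, the correction term should come out clearly significant; under the null it would be centered near zero.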

For special cases where the bias-correction term is perfectly collinear with the included regressors, these test statistics break down. The score vector evaluated at the restricted MLE under such a circumstance is identically zero and the corresponding information matrix is singular. Lee and Chesher (1986) suggest a generalization of the score test to extremum tests. The intuition behind the score test statistic is to ask whether the average loglikelihood of a model evaluated at the restricted MLE has a significantly nonzero gradient. When it does, we are led to reject the null hypothesis, because higher values of the loglikelihood can be achieved by moving away from the parameter vector under the null hypothesis. The score test exploits the first-order derivative condition for a maximum. Thus, when the score test statistic is identically zero, one should consider tests for an extremum based on higher-order derivatives, as in calculus. For testing the presence of sample selection bias, the extremum test statistic is asymptotically equivalent to a test of skewness constructed from the statistic Σ_{i=1}^{N} ê_i³/(3σ̂³), where ê_i is the least squares residual of the outcome equation. This statistic is intuitively appealing because, if ρ ≠ 0, the disturbance of the observed outcome equation has nonzero mean and its distribution (conditional on selection) is not symmetric.
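The skewness statistic is simple to compute from OLS residuals. A minimal sketch, with simulated data (sample size and design are purely illustrative), evaluates Σ ê_i³/(3σ̂³) under independent normal disturbances, where it should be small relative to its sampling variability:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical outcome equation y = X beta + e with symmetric (normal) errors,
# i.e. the case rho = 0 in which no selectivity skews the residuals.
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(size=n)

# OLS residuals and residual standard deviation of the outcome equation.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e_hat = y - X @ beta_hat
sigma_hat = np.sqrt(e_hat @ e_hat / n)

# The third-moment statistic from the text: sum of e_i^3 / (3 sigma^3).
skew_stat = np.sum(e_hat**3) / (3 * sigma_hat**3)
print(skew_stat)
```

Under ρ ≠ 0 the conditional disturbance distribution is asymmetric and the statistic drifts away from zero in proportion to the sample size.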

The normal distribution is a popular assumption in parametric sample selection models. Contrary to the standard linear regression model, misspecification of the normality of the disturbances will, in general, render the two-stage and maximum likelihood estimates inconsistent. Theoretical consequences of misspecification are well presented for limited dependent variables models (Goldberger, 1983). For the sample selection model, investigations of sensitivity to distributional misspecification can be found in Olsen (1982), Lee (1982), Mroz (1987), and others. Diagnostic test statistics have been developed for detecting model misspecification. A computationally simple and well-motivated approach is the Lagrange multiplier (efficient score) approach. For various misspecifications such as omitted variables, heteroskedasticity, serial correlation, and nonnormality of disturbances, the Lagrange multiplier statistics have simple moment structures (see the survey by Pagan and Vella, 1989). This can be seen as follows. In an econometric model with latent variables, suppose that g(y*, y | θ) is the joint density of the latent variables y* and the observed sample y, where θ is a vector of parameters. Let f(y | θ) be the density of y, and g(y* | y, θ) be the conditional density of y* given y. Since g(y*, y | θ) = g(y* | y, θ)f(y | θ), we have ln f(y | θ) = ln g(y*, y | θ) − ln g(y* | y, θ) and ∂ln f(y | θ)/∂θ = ∂ln g(y*, y | θ)/∂θ − ∂ln g(y* | y, θ)/∂θ. Integrating these expressions with respect to g(y* | y, θ⁺), where θ⁺ is an arbitrary value of θ, it follows that

ln f(y | θ) = ∫ [ln g(y*, y | θ)] g(y* | y, θ⁺) dy* − ∫ [ln g(y* | y, θ)] g(y* | y, θ⁺) dy*

and

∂ln f(y | θ)/∂θ = ∫ [∂ln g(y*, y | θ)/∂θ] g(y* | y, θ⁺) dy* − ∫ [∂ln g(y* | y, θ)/∂θ] g(y* | y, θ⁺) dy*.   (18.11)

At θ⁺ = θ, the first-order derivative of the loglikelihood in (18.11) becomes ∂ln f(y | θ)/∂θ = E_θ(∂ln g(y*, y | θ)/∂θ | y), because E_θ(∂ln g(y* | y, θ)/∂θ | y) = 0. A test statistic based on the score ∂ln f(y | θ)/∂θ will therefore be a conditional moment statistic of ∂ln g(y*, y | θ)/∂θ, conditional on the sample observations. For many specification tests of the sample selection model, the score test statistics are based on some simple conditional moments. Lee (1984) considered the efficient score test of normality for the sample selection model (18.1). The test statistic is derived within the bivariate Edgeworth series of distributions. For the truncated sample selection case, the test compares some sample moments of order (r, s), with r + s > 2, with correspondingly estimated hypothesized conditional moments of the disturbances. For the censored case, the test is equivalent to testing whether some sample semi-invariants of order (r, s), with r + s > 2, are zero.
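The identity used above, that E_θ(∂ln g(y* | y, θ)/∂θ | y) = 0 so the observed-data score equals the conditional expectation of the complete-data score, can be checked numerically in a toy latent-variable model chosen here purely for illustration: y* ~ N(θ, 1) with only y = 1{y* > 0} observed.

```python
import numpy as np
from scipy.stats import norm

theta = 0.7

# Observed-data loglikelihood for y = 1 is ln Phi(theta), so the
# observed-data score is phi(theta) / Phi(theta) (the inverse Mills ratio).
observed_score = norm.pdf(theta) / norm.cdf(theta)

# Complete-data score is d ln g(y*, y | theta)/d theta = y* - theta.
# Take its conditional expectation given y = 1 (i.e. y* > 0) by Monte Carlo.
rng = np.random.default_rng(1)
ystar = theta + rng.normal(size=1_000_000)
conditional_score = (ystar[ystar > 0] - theta).mean()

print(observed_score, conditional_score)
```

The two quantities agree up to Monte Carlo error, which is the content of the identity: scores of the observed-data likelihood reduce to conditional moments of the latent-model score, and this is what gives the Lagrange multiplier statistics their simple moment structure.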
