Testing Linear Versus Log-Linear Functional Form

In many economic applications where the explanatory variables take only positive values, econometricians must decide whether a linear or a log-linear regression model is appropriate. In general, the linear model is given by

yi = Σ_{j=1}^{k} βj Xij + Σ_{s=1}^{S} γs Zis + ui,  i = 1, 2,…,n    (8.84)

and the log-linear model is

log yi = Σ_{j=1}^{k} βj log Xij + Σ_{s=1}^{S} γs Zis + ui,  i = 1, 2,…,n    (8.85)

with ui ~ NID(0, σ²). Note that the log-linear model is general in that only the dependent variable y and a subset of the regressors, i.e., the X variables, are subject to the logarithmic transformation. Of course, one could estimate both models and compare their log-likelihood values. This would tell us which model fits best, but not whether either is a valid specification.

Box and Cox (1964) suggested the following transformation

B(yi, λ) = (yi^λ − 1)/λ  when λ ≠ 0;  B(yi, λ) = log yi  when λ = 0    (8.86)

where yi > 0. Note that for λ = 1, as long as there is a constant in the regression, subjecting the linear model to a Box-Cox transformation is equivalent to the linear model (8.84), while for λ = 0 the Box-Cox transformation yields the log-linear regression (8.85). Therefore, the Box-Cox model

B(yi, λ) = Σ_{j=1}^{k} βj B(Xij, λ) + Σ_{s=1}^{S} γs Zis + ui,  i = 1, 2,…,n    (8.87)
encompasses as special cases the linear and log-linear models given in (8.84) and (8.85), respectively. Box and Cox (1964) suggested estimating these models by ML and using the LR test to test (8.84) and (8.85) against (8.87). However, estimation of (8.87) is computationally burdensome, see Davidson and MacKinnon (1993). Instead, we give an LM test involving a Double Length Regression (DLR) due to Davidson and MacKinnon (1985) that is easier to compute. In fact, Davidson and MacKinnon (1993, p. 510) point out that "everything that one can do with the Gauss-Newton Regression for nonlinear regression models can be done with the DLR for models involving transformations of the dependent variable." The GNR is not applicable in cases where the dependent variable is subjected to a nonlinear transformation, so one should use a DLR in these cases. Conversely, in cases where the GNR is valid, there is no need to run the DLR, since in these cases the latter is equivalent to the GNR.
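The Box-Cox transformation in (8.86) is simple to compute. The following sketch (Python with NumPy; the function name is my own, not from the text) illustrates it, including the λ = 0 limiting case:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transformation B(y, lambda) of (8.86); requires y > 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)            # limiting case: lambda = 0
    return (y ** lam - 1.0) / lam   # (y^lambda - 1)/lambda otherwise

# lambda = 1 gives y - 1 (the linear model up to a constant shift),
# and lambda -> 0 approaches log y continuously.
y = np.array([0.5, 1.0, 2.0, 4.0])
print(box_cox(y, 1.0))   # equals y - 1
print(box_cox(y, 0.0))   # equals log y
```

With a constant in the regression, the shift by 1 in B(y, 1) = y − 1 is absorbed by the intercept, which is why λ = 1 reproduces the linear model.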

For the linear model (8.84), the null hypothesis is that λ = 1. In this case, Davidson and MacKinnon suggest running a regression with 2n observations where the dependent variable has observations (e1/σ̃,…, en/σ̃, 1,…, 1)', i.e., the first n observations are the OLS residuals from (8.84) divided by the MLE of σ, where σ̃² = e'e/n. The second n observations are all equal to 1. The 2n observations for the regressors have typical elements:

for βj: Xij − 1 for i = 1,…,n and 0 for the second n elements

for γs: Zis for i = 1,…,n and 0 for the second n elements

for σ: ei/σ̃ for i = 1,…,n and −1 for the second n elements

for λ: Σ_{j=1}^{k} β̃j (Xij log Xij − Xij + 1) − (yi log yi − yi + 1) for i = 1,…,n

and σ̃ log yi for the second n elements

The explained sum of squares from this DLR provides an asymptotically valid test for λ = 1. It is asymptotically distributed as χ²1 under the null hypothesis.
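The construction above can be sketched in code. The following is a minimal illustration (Python with NumPy; the function name and interface are my own, not from the text), assuming X holds the positive regressors subject to transformation and Z the untransformed regressors including a constant:

```python
import numpy as np

def dlr_test_linear(y, X, Z):
    """DLR test of H0: lambda = 1 (linear model). Returns the explained
    (uncentered) sum of squares, asymptotically chi2(1) under H0."""
    n, k = X.shape
    W0 = np.hstack([X, Z])
    coef, *_ = np.linalg.lstsq(W0, y, rcond=None)
    e = y - W0 @ coef                       # OLS residuals of (8.84)
    sig = np.sqrt(e @ e / n)                # MLE of sigma
    beta = coef[:k]                         # coefficients on X
    # dependent variable of the double-length regression: (e/sig, 1,...,1)'
    d = np.concatenate([e / sig, np.ones(n)])
    # regressor columns: beta_j, gamma_s, sigma, lambda
    top_lam = (X * np.log(X) - X + 1.0) @ beta - (y * np.log(y) - y + 1.0)
    top = np.column_stack([X - 1.0, Z, e / sig, top_lam])
    bot = np.column_stack([np.zeros((n, k + Z.shape[1])),
                           -np.ones(n), sig * np.log(y)])
    W = np.vstack([top, bot])
    b, *_ = np.linalg.lstsq(W, d, rcond=None)
    fitted = W @ b
    return fitted @ fitted                  # explained sum of squares
```

Values of the statistic above 3.84 reject λ = 1 at the 5% level.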

Similarly, when testing the log-linear model (8.85), the null hypothesis is that λ = 0. In this case, the dependent variable of the DLR has observations (e1/σ̂, e2/σ̂,…, en/σ̂, 1,…, 1)', i.e., the first n observations are the OLS residuals from (8.85) divided by the MLE of σ, i.e., σ̂ where σ̂² = e'e/n. The second n observations are all equal to 1. The 2n observations for the regressors have typical elements:

for βj: log Xij for i = 1,…,n and 0 for the second n elements

for γs: Zis for i = 1,…,n and 0 for the second n elements

for σ: ei/σ̂ for i = 1,…,n and −1 for the second n elements

for λ: ½ Σ_{j=1}^{k} β̂j (log Xij)² − ½ (log yi)² for i = 1,…,n and σ̂ log yi for the second n elements

The explained sum of squares from this DLR provides an asymptotically valid test for λ = 0. It is asymptotically distributed as χ²1 under the null hypothesis.
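The λ = 0 version differs from the λ = 1 version only in the residuals used and in the βj and λ columns. A matching sketch (Python with NumPy; hypothetical interface, not from the text):

```python
import numpy as np

def dlr_test_loglinear(y, X, Z):
    """DLR test of H0: lambda = 0 (log-linear model). Returns the explained
    (uncentered) sum of squares, asymptotically chi2(1) under H0."""
    n, k = X.shape
    W0 = np.hstack([np.log(X), Z])
    coef, *_ = np.linalg.lstsq(W0, np.log(y), rcond=None)
    e = np.log(y) - W0 @ coef               # OLS residuals of (8.85)
    sig = np.sqrt(e @ e / n)                # MLE of sigma
    beta = coef[:k]                         # coefficients on log X
    d = np.concatenate([e / sig, np.ones(n)])
    # lambda column: (1/2) sum_j beta_j (log X_ij)^2 - (1/2)(log y_i)^2
    top_lam = 0.5 * (np.log(X) ** 2 @ beta) - 0.5 * np.log(y) ** 2
    top = np.column_stack([np.log(X), Z, e / sig, top_lam])
    bot = np.column_stack([np.zeros((n, k + Z.shape[1])),
                           -np.ones(n), sig * np.log(y)])
    W = np.vstack([top, bot])
    b, *_ = np.linalg.lstsq(W, d, rcond=None)
    fitted = W @ b
    return fitted @ fitted
```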

For the cigarette data given in Table 3.2, the linear model is given by C = β0 + β1P + β2Y + u, whereas the log-linear model is given by log C = γ0 + γ1 log P + γ2 log Y + ε, and the Box-Cox model is given by B(C, λ) = δ0 + δ1B(P, λ) + δ2B(Y, λ) + v, where B(C, λ) is defined in (8.86). In this case, the DLR that tests the hypothesis H0: λ = 1, i.e., that the model is linear, gives an explained sum of squares equal to 15.55. This is greater than the 5% critical value of χ²1, which is 3.84, and is therefore significant at the 5% level. Similarly, the DLR that tests the hypothesis H0: λ = 0, i.e., that the model is log-linear, gives an explained sum of squares equal to 8.86. This is also greater than 3.84 and is therefore significant at the 5% level. In this case, both the linear and log-linear models are rejected by the data.

Finally, it is important to note that there are numerous other tests of linear versus log-linear models; the interested reader should refer to Davidson and MacKinnon (1993).

Notes

1. This section is based on Belsley, Kuh and Welsch (1980).

2. Other residuals that are linear unbiased with a scalar covariance matrix (LUS) are the BLUS residuals suggested by Theil (1971). Since we are explicitly dealing with time-series data, we use subscript t rather than i to index observations and T rather than n to denote the sample size.

3. Ramsey’s (1969) initial formulation was based on BLUS residuals, but Ramsey and Schmidt (1976) showed that this is equivalent to using OLS residuals.

4. This section is based on Davidson and MacKinnon (1993, 2001).

5. This section is based on Davidson and MacKinnon (1993, pp. 502-510).

Problems

1. We know that H = PX is idempotent. Also, (In − PX) is idempotent. Therefore, b'Hb ≥ 0 for any arbitrary vector b. Using these facts, show for b' = (1, 0,…, 0) that 0 ≤ h11 ≤ 1. Deduce that 0 ≤ hii ≤ 1 for i = 1,…,n.

2. For the simple regression with no constant yi = xiβ + ui for i = 1,…,n

(a) What is hii? Verify that Σ_{i=1}^{n} hii = 1.

(b) What is β̂ − β̂(i), see (8.13)? What is s²(i) in terms of s² and ei², see (8.18)? What is DFBETASij, see (8.19)?

(c) What are DFFITi and DFFITSi, see (8.21) and (8.22)?

(d) What is Cook's distance measure Di²(s) for this simple regression with no intercept, see (8.24)?

(e) Verify that (8.27) holds for this simple regression with no intercept. What is COVRATIOi, see (8.26)?

3. From the definition of s²(i) in (8.17), substitute (8.13) in (8.17) and verify (8.18).

4. Consider the augmented regression given in (8.5), y = Xβ* + diφ + u, where φ is a scalar and di = 1 for the i-th observation and 0 otherwise. Using the Frisch-Waugh-Lovell Theorem given in section 7.3, verify that

(a) β̂* = (X'(i)X(i))⁻¹X'(i)y(i) = β̂(i).

(b) φ̂ = (di'P̄Xdi)⁻¹di'P̄Xy = ei/(1 − hii) where P̄X = I − PX.

(c) Residual Sum of Squares from (8.5) = (Residual Sum of Squares with di deleted) − ei²/(1 − hii).

(d) Assuming Normality of u, show that the t-statistic for testing φ = 0 is t = φ̂/s.e.(φ̂) = ei* as given in (8.3).

5. Consider the augmented regression y = Xβ* + P̄XDpφ* + u, where Dp is an n × p matrix of dummy variables for the p suspected observations. Note that P̄XDp rather than Dp appears in this equation. Compare with (8.6). Let ep = Dp'e; then E(ep) = 0 and var(ep) = σ²Dp'P̄XDp. Verify that

(a) β̂* = (X'X)⁻¹X'y = β̂OLS and

(b) φ̂* = (Dp'P̄XDp)⁻¹Dp'P̄Xy = (Dp'P̄XDp)⁻¹Dp'e = (Dp'P̄XDp)⁻¹ep.

(c) Residual Sum of Squares = (Residual Sum of Squares with Dp deleted) − ep'(Dp'P̄XDp)⁻¹ep. Using the Frisch-Waugh-Lovell Theorem, show that this residual sum of squares is the same as that for (8.6).

(d) Assuming normality of u, verify (8.7) and (8.9).

(e) Repeat this exercise for problem 4 with P̄Xdi replacing di. What do you conclude?

6. Using the updating formula in (8.11), verify (8.12) and deduce (8.13).

7. Verify that Cook's distance measure given in (8.25) is related to DFFITSi(σ) as follows: DFFITSi(σ) = √k Di(σ).

8. Using the matrix identity det(Ik − ab') = 1 − b'a, where a and b are column vectors of dimension k, prove (8.27). Hint: use a = xi and b' = xi'(X'X)⁻¹ and the fact that det[X'(i)X(i)] = det[{Ik − xixi'(X'X)⁻¹}X'X].

9. For the cigarette data given in Table 3.2

(a) Replicate the results in Table 8.2.

(b) For the New Hampshire observation (NH), compute eNH, eNH*, β̂ − β̂(NH), DFBETASNH, DFFITNH, DFFITSNH, DNH²(s), COVRATIONH, and FVARATIONH.

(c) Repeat the calculations in part (b) for the following states: AR, CT, NJ and UT.

(d) What about the observations for NV, ME, NM and ND? Are they influential?

10. For the Consumption-Income data given in Table 5.3, compute

(a) The internally studentized residuals ẽ given in (8.1).

(b) The externally studentized residuals e* given in (8.3).

(c) Cook’s statistic given in (8.25).

(d) The leverage hii of each observation.

(e) The DFFITS given in (8.22).

(f) The COVRATIO given in (8.28).

(g) Based on the results in parts (a) to (f), identify the observations that are influential.

11. Repeat problem 10 for the 1982 data on earnings used in Chapter 4. This data is provided on the Springer web site as EARN.ASC.

12. Repeat problem 10 for the Gasoline data provided on the Springer web site as GASOLINE.DAT. Use the gasoline demand model given in Chapter 10, section 5. Do this for Austria and Belgium separately.

13. Independence of Recursive Residuals.

(a) Using the updating formula given in (8.11) with A = (Xt'Xt) and a = −b = xt+1', verify (8.31).

(b) Using (8.31), verify (8.32).

(c) For ut ~ IIN(0, σ²) and wt+1 defined in (8.30), verify (8.33). Hint: define vt+1 = √ft+1 wt+1. From (8.30), we have

vt+1 = √ft+1 wt+1 = yt+1 − xt+1'β̂t = xt+1'(β − β̂t) + ut+1  for t = k,…,T − 1

Since ft+1 is fixed, it suffices to show that cov(vt+1, vs+1) = 0 for t ≠ s.

14. Recursive Residuals are Linear Unbiased With Scalar Covariance Matrix (LUS).

(a) Verify that the (T – k) recursive residuals defined in (8.30) can be written in vector form as w = Cy where C is defined in (8.34). This shows that the recursive residuals are linear in y.

(b) Show that C satisfies the three properties given in (8.35), i.e., CX = 0, CC' = IT−k, and C'C = P̄X. Prove that CX = 0 means that the recursive residuals are unbiased with zero mean. Prove that CC' = IT−k means that the recursive residuals have a scalar covariance matrix. Prove that C'C = P̄X means that the sum of squares of the (T − k) recursive residuals is equal to the sum of squares of the T least squares residuals.

(c) If the true disturbances u ~ N(0, a2IT), prove that the recursive residuals w ~ N(0, a2IT-k) using parts (a) and (b).

(d) Verify (8.36), i.e., show that RSSt+1 = RSSt + w²t+1 for t = k,…,T − 1, where RSSt = (Yt − Xtβ̂t)'(Yt − Xtβ̂t).
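The properties in problem 14 lend themselves to a numerical check. A sketch (Python with NumPy; my own function name, assuming each Xt has full column rank) that computes the recursive residuals of (8.30) and relies on their sum of squares equaling the full-sample OLS residual sum of squares:

```python
import numpy as np

def recursive_residuals(y, X):
    """Recursive residuals w_{t+1} of (8.30), for t = k,...,T-1:
    w_{t+1} = (y_{t+1} - x'_{t+1} b_t) / sqrt(1 + x'_{t+1}(X_t'X_t)^{-1} x_{t+1})."""
    T, k = X.shape
    w = []
    for t in range(k, T):
        Xt, yt = X[:t], y[:t]
        bt = np.linalg.lstsq(Xt, yt, rcond=None)[0]        # OLS on first t obs
        x_next = X[t]
        f = 1.0 + x_next @ np.linalg.solve(Xt.T @ Xt, x_next)
        w.append((y[t] - x_next @ bt) / np.sqrt(f))
    return np.array(w)

# By C'C = P_X-bar (problem 14(b)), sum(w**2) equals the OLS residual
# sum of squares from the full sample of T observations.
```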

15. The Harvey and Collier (1977) Misspecification t-Test as a Variable Additions Test. This is based on Wu (1993).

(a) Show that the F-statistic for testing H0: γ = 0 versus H1: γ ≠ 0 in (8.44) is given by

F = [y'P̄Xy − y'P̄[X,z]y] / [y'P̄[X,z]y/(T − k − 1)] = y'Pzy / [y'(P̄X − Pz)y/(T − k − 1)]

and is distributed as F(1, T − k − 1) under the null hypothesis.

(b) Using the properties of C given in (8.35), show that the F-statistic given in part (a) is the square of the Harvey and Collier (1977) t-statistic given in (8.43).

16. For the Gasoline data for Austria given on the Springer web site as GASOLINE.DAT and the model given in Chapter 10, section 5, compute:

(a) The recursive residuals given in (8.30).

(b) The CUSUM given in (8.46) and plot it against r.

(c) Draw the 5% upper and lower lines given below (8.46) and see whether the CUSUM crosses these boundaries.

(d) The post-sample predictive test for 1978. Verify that computing it from (8.38) or (8.40) yields the same answer.

(e) The modified von Neumann ratio given in (8.42).

(f) The Harvey and Collier (1977) functional misspecification test given in (8.43).

17. The Differencing Test in a Regression with Equicorrelated Disturbances. This is based on Baltagi (1990). Consider the time-series regression

Y = ιT α + Xβ + u    (1)

where ιT is a vector of ones of dimension T, X is T × K, and [ιT, X] is of full column rank. u ~ (0, Ω), where Ω is positive definite. Differencing this model, we get

DY = DXβ + Du    (2)

where D is a (T − 1) × T matrix given below (8.50). Maeshiro and Wichers (1989) show that GLS on (1) yields, through partitioned inverse:

β̂ = (X'LX)⁻¹X'LY    (3)

where L = Ω⁻¹ − Ω⁻¹ιT(ιT'Ω⁻¹ιT)⁻¹ιT'Ω⁻¹. Also, GLS on (2) yields

β̂ = (X'MX)⁻¹X'MY    (4)

where M = D'(DDD’)-1D. Finally, they show that M = L, and GLS on (2) is equivalent to GLS on (1) as long as there is an intercept in (1).

Consider the special case of equicorrelated disturbances

Ω = σ²[(1 − ρ)IT + ρJT]    (5)

where IT is an identity matrix of dimension T and JT is a matrix of ones of dimension T.

(a) Derive the L and M matrices for the equicorrelated case, and verify the Maeshiro and Wichers result for this special case.

(b) Show that for the equicorrelated case, the differencing test given by Plosser, Schwert, and White (1982) can be obtained as the difference between the OLS and GLS estimators of the differenced equation (2). Hint: See the solution by Koning (1992).
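For part (a), the Maeshiro and Wichers equality M = L can be verified numerically in the equicorrelated case. In the sketch below (Python with NumPy), the values of T, σ², and ρ are arbitrary choices for illustration:

```python
import numpy as np

T, sig2, rho = 6, 2.0, 0.4                     # illustrative values only
iota = np.ones((T, 1))
# equicorrelated covariance: Omega = sig2 * [(1 - rho) I_T + rho J_T]
Omega = sig2 * ((1 - rho) * np.eye(T) + rho * np.ones((T, T)))
# first-difference matrix D of dimension (T-1) x T
D = -np.eye(T - 1, T) + np.eye(T - 1, T, k=1)
Oi = np.linalg.inv(Omega)
# L = Omega^{-1} - Omega^{-1} iota (iota' Omega^{-1} iota)^{-1} iota' Omega^{-1}
L = Oi - Oi @ iota @ np.linalg.inv(iota.T @ Oi @ iota) @ iota.T @ Oi
# M = D' (D Omega D')^{-1} D
M = D.T @ np.linalg.inv(D @ Omega @ D.T) @ D
print(np.allclose(L, M))   # the two matrices coincide
```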

18. For the 1982 data on earnings used in Chapter 4, provided as EARN.ASC on the Springer web site: (a) compute Ramsey's (1969) RESET; (b) compute White's (1982) information matrix test given in (8.69) and (8.70).

19. Repeat problem 18 for the Hedonic housing data given on the Springer web site as HEDONIC.XLS.

20. Repeat problem 18 for the cigarette data given in Table 3.2.

21. Repeat problem 18 for the Gasoline data for Austria given on the Springer web site as GASOLINE.DAT. Use the model given in Chapter 10, section 5. Also compute the PSW differencing test given in (8.54).

22. Use the 1982 data on earnings used in Chapter 4, provided on the Springer web site as EARN.ASC. Consider the two competing non-nested models

H0: log(wage) = β0 + β1ED + β2EXP + β3EXP² + β4WKS + β5MS + β6FEM + β7BLK + β8UNION + u

H1: log(wage) = γ0 + γ1ED + γ2EXP + γ3EXP² + γ4WKS + γ5OCC + γ6SOUTH + γ7SMSA + γ8IND + ε

Compute:

(a) The Davidson and MacKinnon (1981) J-test for H0 versus H1.

(b) The Fisher and McAleer (1981) JA-test for H0 versus H1.

(c) Reverse the roles of H0 and H1 and repeat parts (a) and (b).

(d) Both H0 and H1 can be artificially nested in the model used in Chapter 4. Using the F-test given in (8.62), test H0 against this augmented model. Repeat for H1 against this augmented model. What do you conclude?

23. For the Consumption-Income data given in Table 5.3,

(a) Test the hypothesis that the Consumption model is linear against a general Box-Cox alter­native.

(b) Test the hypothesis that the Consumption model is log-linear against a general Box-Cox alternative.

24. Repeat problem 23 for the Cigarette data given in Table 3.2.

25. RESET as a Gauss-Newton Regression. This is based on Baltagi (1998). Davidson and MacKinnon (1993) showed that Ramsey’s (1969) regression error specification test (RESET) can be derived as a Gauss-Newton Regression. This problem is a simple extension of their results. Suppose that the linear regression model under test is given by:

yt = Xt'β + ut,  t = 1, 2,…,T    (1)

where β is a k × 1 vector of unknown parameters. Suppose that the alternative is the nonlinear regression model between yt and Xt:

yt = Xt'β[1 + θ(Xt'β) + γ(Xt'β)² + λ(Xt'β)³] + ut,    (2)

where θ, γ, and λ are unknown scalar parameters. It is well known that Ramsey's (1969) RESET is obtained by regressing yt on Xt, ŷt², ŷt³, and ŷt⁴ and by testing that the coefficients of all powers of ŷt are jointly zero. Show that this RESET can be derived from a Gauss-Newton Regression on (2) which tests θ = γ = λ = 0.

References

This chapter is based on Belsley, Kuh and Welsch (1980), Johnston (1984), Maddala (1992) and Davidson and MacKinnon (1993). Additional references are the following:

Baltagi, B. H. (1990), "The Differencing Test in a Regression with Equicorrelated Disturbances," Econometric Theory, Problem 90.4.5, 6: 488.

Baltagi, B. H. (1998), “Regression Specification Error Test as A Gauss-Newton Regression,” Econometric Theory, Problem 98.4.3, 14: 526.

Belsley, D. A., E. Kuh and R. E. Welsch (1980), Regression Diagnostics (Wiley: New York).

Box, G. E. P. and D. R. Cox (1964), "An Analysis of Transformations," Journal of the Royal Statistical Society, Series B, 26: 211-252.

Brown, R. L., J. Durbin, and J. M. Evans (1975), "Techniques for Testing the Constancy of Regression Relationships Over Time," Journal of the Royal Statistical Society, Series B, 37: 149-192.

Chesher, A. and R. Spady (1991), “Asymptotic Expansions of the Information Matrix Test Statistic,” Econometrica 59: 787-815.

Cook, R. D. (1977), "Detection of Influential Observations in Linear Regression," Technometrics, 19: 15-18.

Cook, R. D. and S. Weisberg (1982), Residuals and Influences in Regression (Chapman and Hall: New York).

Cox, D. R. (1961), "Tests of Separate Families of Hypotheses," Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, 1: 105-123.

Davidson, R., L. G. Godfrey and J. G. MacKinnon (1985), "A Simplified Version of the Differencing Test," International Economic Review, 26: 639-647.

Davidson, R. and J. G. MacKinnon (1981), “Several Tests for Model Specification in the Presence of Alternative Hypotheses,” Econometrica, 49: 781-793.

Davidson, R. and J. G. MacKinnon (1985), “Testing Linear and Loglinear Regressions Against Box-Cox Alternatives,” Canadian Journal of Economics, 18: 499-517.

Davidson, R. and J. G. MacKinnon (1992), “A New Form of the Information Matrix Test,” Econometrica, 60: 145-157.

Davidson, R. and J. G. MacKinnon (2001), “Artificial Regressions,” Chapter 1 in Baltagi, B. H. (ed.) A Companion to Theoretical Econometrics (Blackwell: Massachusetts).

Fisher, G. R. and M. McAleer (1981), “Alternative Procedures and Associated Tests of Significance for Non-Nested Hypotheses,” Journal of Econometrics, 16: 103-119.

Gentleman, J. F. and M. B. Wilk (1975), “Detecting Outliers II: Supplementing the Direct Analysis of Residuals,” Biometrics, 31: 387-410.

Godfrey, L. G. (1988), Misspecification Tests in Econometrics: The Lagrange Multiplier Principle and Other Approaches (Cambridge University Press: Cambridge).

Hall, A. (1987), “The Information Matrix Test for the Linear Model,” Review of Economic Studies, 54: 257-263.

Harvey, A. C. (1976), “An Alternative Proof and Generalization of a Test for Structural Change,” The American Statistician, 30: 122-123.

Harvey, A. C. (1990), The Econometric Analysis of Time Series (MIT Press: Cambridge).

Harvey, A. C. and P. Collier (1977), “Testing for Functional Misspecification in Regression Analysis,” Journal of Econometrics, 6: 103-119.

Harvey, A. C. and G. D. A. Phillips (1974), "A Comparison of the Power of Some Tests for Heteroskedasticity in the General Linear Model," Journal of Econometrics, 2: 307-316.

Hausman, J. (1978), “Specification Tests in Econometrics,” Econometrica, 46: 1251-1271.

Koning, R. H. (1992), “The Differencing Test in a Regression with Equicorrelated Disturbances,” Econo­metric Theory, Solution 90.4.5, 8: 155-156.

Kramer, W. and H. Sonnberger (1986), The Linear Regression Model Under Test (Physica-Verlag: Heidelberg).

Krasker, W. S., E. Kuh and R. E. Welsch (1983), "Estimation for Dirty Data and Flawed Models," Chapter 11 in Handbook of Econometrics, Vol. I, eds. Z. Griliches and M. D. Intrilligator (North-Holland: Amsterdam).

Maeshiro, A. and R. Wichers (1989), “On the Relationship Between the Estimates of Level Models and Difference Models,” American Journal of Agricultural Economics, 71: 432-434.

Orme, C. (1990), "The Small Sample Performance of the Information Matrix Test," Journal of Econometrics, 46: 309-331.

Pagan, A. R. and A. D. Hall (1983), “Diagnostic Tests as Residual Analysis,” Econometric Reviews, 2: 159-254.

Pesaran, M. H. and M. Weeks (2001), "Nonnested Hypothesis Testing: An Overview," Chapter 13 in Baltagi, B. H. (ed.) A Companion to Theoretical Econometrics (Blackwell: Massachusetts).

Phillips, G. D.A. and A. C. Harvey (1974), “A Simple Test for Serial Correlation in Regression Analysis,” Journal of the American Statistical Association, 69: 935-939.

Plosser, C. I., G. W. Schwert, and H. White (1982), “Differencing as a Test of Specification,” International Economic Review, 23: 535-552.

Ramsey, J. B. (1969), "Tests for Specification Errors in Classical Linear Least-Squares Regression Analysis," Journal of the Royal Statistical Society, Series B, 31: 350-371.

Ramsey, J. B. and P. Schmidt (1976), “Some Further Results in the Use of OLS and BLUS Residuals in Error Specification Tests,” Journal of the American Statistical Association, 71: 389-390.

Schmidt, P. (1976), Econometrics (Marcel Dekker: New York).

Theil, H. (1971), Principles of Econometrics (Wiley: New York).

Thursby, J. and P. Schmidt (1977), “Some Properties of Tests for Specification Error in a Linear Regres­sion Model,” Journal of the American Statistical Association, 72: 635-641.

Utts, J. M. (1982), “The Rainbow Test for Lack of Fit in Regression,” Communications in Statistics, 11: 2801-2815.

Velleman, P. and R. Welsch (1981), “Efficient Computing of Regression Diagnostics,” The American Statistician, 35: 234-242.

White, H. (1980), “A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity,” Econometrica, 48: 817-838.

White, H. (1982), “Maximum Likelihood Estimation of Misspecified Models,” Econometrica, 50: 1-25.

Wooldridge, J. M. (2001), “Diagnostic Testing,” Chapter 9 in B. H. Baltagi (ed.) A Companion to Theo­retical Econometrics (Blackwell: Massachusetts).

Wu, P. (1993), “Variable Addition Test,” Econometric Theory, Problem 93.1.2, 9: 145-146.
