# Alternative Approaches to Testing Nonnested Hypotheses with Application to Linear Regression Models

To provide an intuitive introduction to the concepts that are integral to an understanding of nonnested hypothesis tests, we take the testing of linear regression models as a convenient starting point. In the ensuing discussion we demonstrate that, despite its special features, nonnested hypothesis testing is firmly rooted in the Neyman-Pearson framework.

There are three general approaches to nonnested hypothesis testing, all discussed in the pioneering contributions of Cox (1961) and Cox (1962):

(i) The modified (centered) log-likelihood ratio procedure, also known as the Cox test.

(ii) The comprehensive model approach, whereby the nonnested models are tested against an artificially constructed general model that includes the nonnested models as special cases. This approach was advocated by Atkinson (1970) and was later taken up, under a different guise, by Davidson and MacKinnon (1981) in developing their J-test, and by Fisher and McAleer (1981), who proposed a related alternative procedure known as the JA-test.

(iii) The encompassing procedure, originally considered by Deaton (1982) and Dastoor (1983) and further developed by Gourieroux et al. (1983) and Mizon and Richard (1986), in which the ability of one model to explain particular features of an alternative model is tested directly. The Wald and score encompassing tests (usually denoted by WET and SET) are typically constructed under the assumption that one of the rival models is correct. Encompassing tests for the case where the true model does not necessarily lie in the set of models (whether nested or nonnested) under consideration are proposed by Gourieroux and Monfort (1995) and Smith (1993).
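To make the comprehensive model approach concrete, the following is a minimal sketch of the Davidson and MacKinnon (1981) J-test for two nonnested linear regressions. The function `j_test` and the simulated data are illustrative constructions, not from the source: the fitted values of the rival model are added as an extra regressor, and the t-statistic on their coefficient is the test statistic.

```python
import numpy as np

def j_test(y, X, Z):
    """J-test of H_f: y = X*alpha + u_f against H_g: y = Z*beta + u_g.

    Augments H_f with the fitted values from H_g and returns the
    t-statistic on their coefficient; a significant statistic rejects
    H_f in the direction of H_g.
    """
    # Fitted values under the alternative model H_g
    b_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    y_g = Z @ b_hat
    # Augmented regression: y on [X, y_g]
    W = np.column_stack([X, y_g])
    c_hat, *_ = np.linalg.lstsq(W, y, rcond=None)
    resid = y - W @ c_hat
    T, k = W.shape
    s2 = resid @ resid / (T - k)            # unbiased error-variance estimate
    cov = s2 * np.linalg.inv(W.T @ W)       # OLS coefficient covariance
    # t-statistic on the coefficient of the H_g fitted values (last column)
    return c_hat[-1] / np.sqrt(cov[-1, -1])

# Illustration: data generated under H_g, so the J-test should reject H_f
rng = np.random.default_rng(0)
T = 500
X = np.column_stack([np.ones(T), rng.standard_normal(T)])
Z = np.column_stack([np.ones(T), rng.standard_normal(T)])
y = Z @ np.array([1.0, 2.0]) + rng.standard_normal(T)
print(abs(j_test(y, X, Z)) > 1.96)  # reject H_f at the 5% level
```

Note that the roles of the two models can be reversed to test $H_g$ against $H_f$; as discussed later in the literature, both, one, or neither model may be rejected.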

We shall now illustrate the main features of these three approaches in the context of the classical linear normal regression models (13.10) and (13.11) set out above. Rewriting these models in familiar matrix notation, we have:

$$H_f: \; y = X\alpha + u_f, \qquad u_f \sim N(0, \sigma^2 I_T), \qquad (13.19)$$

$$H_g: \; y = Z\beta + u_g, \qquad u_g \sim N(0, \omega^2 I_T), \qquad (13.20)$$

where $y$ is the $T \times 1$ vector of observations on the dependent variable, $X$ and $Z$ are the $T \times k_f$ and $T \times k_g$ observation matrices for the regressors of models $H_f$ and $H_g$, $\alpha$ and $\beta$ are the $k_f \times 1$ and $k_g \times 1$ unknown regression coefficient vectors, $u_f$ and $u_g$ are the $T \times 1$ disturbance vectors, and $I_T$ is an identity matrix of order $T$. In addition, throughout this section we assume that

$$T^{-1}X'u_f \overset{p}{\to} 0, \quad T^{-1}X'u_g \overset{p}{\to} 0, \quad T^{-1/2}X'u_f \overset{a}{\sim} N(0, \sigma^2 \Sigma_{xx}),$$

$$T^{-1}Z'u_g \overset{p}{\to} 0, \quad T^{-1}Z'u_f \overset{p}{\to} 0, \quad T^{-1/2}Z'u_g \overset{a}{\sim} N(0, \omega^2 \Sigma_{zz}),$$

$$T^{-1}X'X \overset{p}{\to} \Sigma_{xx}, \quad T^{-1}Z'Z \overset{p}{\to} \Sigma_{zz}, \quad T^{-1}Z'X \overset{p}{\to} \Sigma_{zx},$$

where $\overset{p}{\to}$ denotes convergence in probability and $\overset{a}{\sim}$ denotes the asymptotic distribution, the matrices $\Sigma_{xx}$ and $\Sigma_{zz}$ are non-singular, $\Sigma_{zx} = \Sigma_{xz}' \neq 0$, and set

$$\Sigma_f = \Sigma_{xx} - \Sigma_{xz}\Sigma_{zz}^{-1}\Sigma_{zx} \quad \text{and} \quad \Sigma_g = \Sigma_{zz} - \Sigma_{zx}\Sigma_{xx}^{-1}\Sigma_{xz}.$$
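These probability limits can be illustrated numerically. The sketch below is not from the source; it assumes, purely for illustration, that the regressors are i.i.d. jointly normal scalars with unit variances and correlation 0.5, so that the population values are $\Sigma_{xx} = \Sigma_{zz} = 1$ and $\Sigma_{zx} = 0.5$.

```python
import numpy as np

# Illustrative check that the sample moment matrices settle down to their
# population counterparts for large T, and that
# Sigma_g = Sigma_zz - Sigma_zx Sigma_xx^{-1} Sigma_xz is positive
# when Sigma_zx != 0.
rng = np.random.default_rng(1)
T = 200_000
# Correlated scalar regressors: unit variances, correlation 0.5 (assumed DGP)
S = np.array([[1.0, 0.5],
              [0.5, 1.0]])
xz = rng.multivariate_normal([0.0, 0.0], S, size=T)
X, Z = xz[:, :1], xz[:, 1:]

Sxx = X.T @ X / T   # approaches Sigma_xx = 1
Szz = Z.T @ Z / T   # approaches Sigma_zz = 1
Szx = Z.T @ X / T   # approaches Sigma_zx = 0.5

# Population value here is 1 - 0.5 * 1 * 0.5 = 0.75
Sigma_g = Szz - Szx @ np.linalg.inv(Sxx) @ Szx.T
print(Sigma_g[0, 0])  # close to 0.75 for large T
```

The positive-definiteness of $\Sigma_f$ and $\Sigma_g$ is what makes the asymptotic variances of the test statistics based on these models well defined.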
