Normality tests

Let us now consider the fundamental problem of testing disturbance normality in the context of the linear regression model:

y = Xβ + u,   (23.12)

where y = (y₁, …, yₙ)′ is a vector of observations on the dependent variable, X is the matrix of n observations on k regressors, β is a vector of unknown coefficients and u = (u₁, …, uₙ)′ is an n-dimensional vector of iid disturbances. The problem consists in testing:

H₀ : f(u) = φ(u; 0, σ),   σ > 0,   (23.13)

where f(u) is the probability density function (pdf) of uᵢ, and φ(u; μ, σ) is the normal pdf with mean μ and standard deviation σ. In this context, normality tests are typically based on the least squares residual vector

û = y − Xβ̂ = Mₓu,   (23.14)

where β̂ = (X′X)⁻¹X′y and Mₓ = Iₙ − X(X′X)⁻¹X′. Let û₁ₙ ≤ û₂ₙ ≤ … ≤ ûₙₙ denote the order statistics of the residuals, and

s² = (n − k)⁻¹ ∑ᵢ₌₁ⁿ ûᵢₙ²,   σ̂² = n⁻¹ ∑ᵢ₌₁ⁿ ûᵢₙ².   (23.15)
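As a concrete illustration, the quantities in (23.14)–(23.15) can be computed as in the following minimal Python/NumPy sketch; the function name and interface are illustrative choices, not part of the original text.

```python
import numpy as np

def ols_residuals_and_scales(y, X):
    """Return the OLS residuals u_hat = y - X beta_hat (= M_X u) together with
    s^2 = RSS/(n - k) and sigma_hat^2 = RSS/n as in (23.15)."""
    n, k = X.shape
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # beta_hat = (X'X)^{-1} X'y
    u_hat = y - X @ beta_hat                           # least squares residual vector
    rss = np.sum(u_hat ** 2)
    s2 = rss / (n - k)                                 # degrees-of-freedom-corrected estimate
    sigma2_hat = rss / n                               # divisor-n estimate used in Sk and Ku
    return u_hat, s2, sigma2_hat
```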

Here we focus on two tests: the Kolmogorov-Smirnov (KS) test (Kolmogorov, 1933; Smirnov, 1939), and the Jarque and Bera (1980, 1987; henceforth JB) test.

The KS test is based on a measure of discrepancy between the empirical and hypothesized distributions:

KS = max(D⁺, D⁻),   (23.16)

where D⁺ = maxᵢ [(i/n) − ẑᵢ] and D⁻ = maxᵢ [ẑᵢ − (i − 1)/n], with the maxima taken over i = 1, …, n, ẑᵢ = Φ(ûᵢₙ/s), and Φ(·) denoting the cumulative N(0, 1) distribution function. The exact and limiting distributions of the KS statistic are non-standard and even asymptotic critical points must be estimated. We have used significance points from D'Agostino and Stephens (1986, Table 4.7), although these were formally derived for the location-scale model.
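A corresponding sketch of the KS statistic in (23.16), built from the ordered residuals and s of (23.15); again, the function name is illustrative.

```python
import numpy as np
from scipy.stats import norm

def ks_residual_statistic(u_hat, s2):
    """Kolmogorov-Smirnov statistic (23.16) computed from OLS residuals."""
    n = u_hat.shape[0]
    z = norm.cdf(np.sort(u_hat) / np.sqrt(s2))   # z_i = Phi(u_hat_{in} / s), ordered residuals
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - z)                   # D+
    d_minus = np.max(z - (i - 1) / n)            # D-
    return max(d_plus, d_minus)
```

As noted above, the resulting value must be compared with estimated critical points, such as those tabulated by D'Agostino and Stephens (1986), since the statistic's null distribution is non-standard.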

The JB test combines the skewness (Sk) and kurtosis (Ku) coefficients:

JB = n[Sk²/6 + (Ku − 3)²/24],   (23.17)

where Sk = n⁻¹ ∑ᵢ₌₁ⁿ ûᵢₙ³/(σ̂²)^{3/2} and Ku = n⁻¹ ∑ᵢ₌₁ⁿ ûᵢₙ⁴/(σ̂²)². Under the null and appropriate regularity conditions, the JB statistic is asymptotically distributed as χ²(2); the statistic's exact distribution is intractable.

Table 23.2  Kolmogorov–Smirnov/Jarque–Bera residual-based tests: empirical type I errors

                              n = 25          n = 50          n = 100
  k1                          KS     JB       KS     JB       KS     JB
  0                    STD    0.050  0.029    0.055  0.039    0.055  0.041
                       MC     0.052  0.052    0.052  0.050    0.047  0.048
  2 (n = 25),          STD    0.114  0.048    0.163  0.064    0.131  0.131
  4 (n > 25)           MC     0.053  0.052    0.050  0.050    0.050  0.050
  k − 1 (n ≤ 50),      STD    0.282  0.067    0.301  0.084    0.322  0.322
  8 (n = 100)          MC     0.052  0.048    0.050  0.047    0.047  0.047

  STD refers to the standard normality test and MC denotes the (corresponding) Monte Carlo test.
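A matching sketch for the JB statistic in (23.17), computed from the same residual vector; the helper name is again illustrative.

```python
import numpy as np

def jarque_bera_stat(u_hat):
    """Jarque-Bera statistic (23.17) computed from OLS residuals."""
    n = u_hat.shape[0]
    sigma2_hat = np.mean(u_hat ** 2)                   # divisor-n variance estimate
    sk = np.mean(u_hat ** 3) / sigma2_hat ** 1.5       # skewness coefficient Sk
    ku = np.mean(u_hat ** 4) / sigma2_hat ** 2         # kurtosis coefficient Ku
    return n * (sk ** 2 / 6.0 + (ku - 3.0) ** 2 / 24.0)
```

In the standard (STD) implementation this value is referred to the asymptotic χ²(2) critical value.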

We next summarize relevant results from the simulation experiment reported in Dufour et al. (1998). The experiment based on (23.12) was performed as follows. For each disturbance distribution, the tests were applied to the residual vector, obtained as û = Mₓu. Hence, there was no need to specify the coefficient vector β. The matrix X included a constant term, k1 dummy variables, and a set of independent standard normal variates. Table 23.2 reports rejection percentages (from 10,000 replications) at the nominal size of 5 percent under the null hypothesis, with n = 25, 50, 100, k equal to the largest integer less than or equal to √n, and k1 = 0, 2, 4, …, k − 1. Our conclusions may be summarized as follows. Although the tests appear adequate when the explanatory variables are generated as standard normal, the sizes of all tests deviate substantially from the nominal 5 percent in all other designs, irrespective of the sample size. More specifically, (i) the KS test consistently overrejects, and (ii) the JB test based on σ̂ underrejects when the number of dummy variables relative to normal regressors is small and overrejects otherwise. We will discuss the MC test results in Section 4.
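To convey the flavor of such a size experiment, the following sketch estimates the empirical type I error of the asymptotic (STD) JB test for one design of the kind described above, reusing jarque_bera_stat from the previous sketch. The dummy-variable pattern, the fixed-regressor assumption, the replication count, and the function name are illustrative choices of ours, not details taken from Dufour et al. (1998).

```python
import numpy as np
from scipy.stats import chi2

def jb_size_experiment(n=25, k1=2, n_rep=1000, seed=0):
    """Estimate the type I error of the asymptotic (STD) JB test at the nominal
    5% level for one design: a constant, k1 dummy regressors and k - 1 - k1
    standard normal regressors, with k the largest integer <= sqrt(n) and
    i.i.d. N(0, 1) disturbances. The dummy pattern and fixed design are
    illustrative assumptions, not the exact design of the cited study."""
    rng = np.random.default_rng(seed)
    k = int(np.sqrt(n))                                  # k = floor(sqrt(n))
    crit = chi2.ppf(0.95, df=2)                          # asymptotic 5% critical value
    dummies = (rng.random((n, k1)) > 0.5).astype(float)
    normals = rng.standard_normal((n, k - 1 - k1))
    X = np.column_stack([np.ones(n), dummies, normals])  # fixed across replications
    rejections = 0
    for _ in range(n_rep):
        u = rng.standard_normal(n)                       # disturbances under H0
        beta_hat, *_ = np.linalg.lstsq(X, u, rcond=None)
        u_hat = u - X @ beta_hat                         # u_hat = M_X u; beta plays no role
        rejections += jarque_bera_stat(u_hat) >= crit    # from the sketch above
    return rejections / n_rep
```

For example, jb_size_experiment(n=25, k1=2) returns the estimated rejection frequency, to be compared with the nominal 0.05.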
