# A Test of Structural Change when Variances Are Unequal

In this section we shall remove the assumption $\sigma_1^2 = \sigma_2^2$ and shall study how to test the equality of $\beta_1$ and $\beta_2$. The problem is considerably more difficult than the case considered in the previous section; in fact, there is no definitive solution to this problem. Difficulty arises because (1.5.25) is no longer Model 1 because of the heteroscedasticity of $\mathbf{u}$. Another way to pinpoint the difficulty is to note that $\sigma_1^2$ and $\sigma_2^2$ do not drop out from the formula (1.5.39) for the t statistic.

Before proceeding to discuss tests of the equality of $\beta_1$ and $\beta_2$ when $\sigma_1^2 \ne \sigma_2^2$, we shall first consider a test of the equality of the variances. For, if the hypothesis $\sigma_1^2 = \sigma_2^2$ is accepted, we can use the F test of the previous section. The null hypothesis to be tested is $\sigma_1^2 = \sigma_2^2\;(=\sigma^2)$. Under the null hypothesis we have

$$\frac{\mathbf{y}_1'M_1\mathbf{y}_1}{\sigma^2} \sim \chi^2_{T_1-K^*} \quad\text{and}\quad \frac{\mathbf{y}_2'M_2\mathbf{y}_2}{\sigma^2} \sim \chi^2_{T_2-K^*}.$$

Because these two chi-square variables are independent by the assumptions of the model, we have by Theorem 4 of Appendix 2

$$\frac{\mathbf{y}_1'M_1\mathbf{y}_1/(T_1-K^*)}{\mathbf{y}_2'M_2\mathbf{y}_2/(T_2-K^*)} \sim F(T_1-K^*,\; T_2-K^*). \tag{1.5.44}$$

Unlike the F test of Section 1.5.2, a two-tailed test should be used here because either a large or a small value of (1.5.44) is a reason for rejecting the null hypothesis.
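As a minimal numerical sketch of this variance-ratio test (assuming NumPy; the function name and data layout are hypothetical, not from the text), the statistic (1.5.44) is simply the ratio of the two unbiased residual-variance estimates:

```python
import numpy as np

def variance_ratio_stat(y1, X1, y2, X2):
    """F statistic (1.5.44): ratio of the two unbiased variance estimates.

    Under H0: sigma1^2 == sigma2^2 it is F(T1-K*, T2-K*); the test is
    two-tailed, so both large and small values lead to rejection."""
    def s2(y, X):
        T, K = X.shape
        b = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS estimate
        e = y - X @ b                              # residual vector
        return e @ e / (T - K), T - K              # unbiased variance, df
    s1_sq, df1 = s2(y1, X1)
    s2_sq, df2 = s2(y2, X2)
    return s1_sq / s2_sq, df1, df2
```

When the two samples are identical the statistic equals 1, the center of the null distribution, as a quick sanity check.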

Regarding the test of the equality of the regression parameters, we shall consider only the special case considered at the end of Section 1.5.3, namely, the test of the equality of single elements, $\beta_{11} = \beta_{21}$, where the t test is applicable. The problem is essentially that of testing the equality of two normal means when the variances are unequal; it is well known among statisticians as the Behrens-Fisher problem. Many methods have been proposed and others are still being proposed in current journals; yet there is no definitive solution to the problem. Kendall and Stuart (1979, Vol. 2, p. 159) have discussed various methods of coping with the problem. We shall present one of the methods, which is attributable to Welch (1938).

As we noted earlier, the difficulty lies in the fact that one cannot derive (1.5.40) from (1.5.39) unless $\sigma_1^2 = \sigma_2^2$. We shall present a method based on the assumption that a slight modification of (1.5.40), namely,

$$\frac{\hat\beta_{11} - \hat\beta_{21}}{\left(\dfrac{\hat\sigma_1^2}{\bar{\mathbf{x}}_1'\bar{M}_1\bar{\mathbf{x}}_1} + \dfrac{\hat\sigma_2^2}{\bar{\mathbf{x}}_2'\bar{M}_2\bar{\mathbf{x}}_2}\right)^{1/2}}, \tag{1.5.45}$$

where $\hat\sigma_1^2 = (T_1-K^*)^{-1}\mathbf{y}_1'M_1\mathbf{y}_1$ and $\hat\sigma_2^2 = (T_2-K^*)^{-1}\mathbf{y}_2'M_2\mathbf{y}_2$, is approximately distributed as Student's t with degrees of freedom to be appropriately determined. Because the statement (1.5.37) is still valid, the assumption that

(1.5.45) is approximately Student's t is equivalent to the assumption that $w$ defined by

$$w = \nu\;\frac{\dfrac{\hat\sigma_1^2}{\bar{\mathbf{x}}_1'\bar{M}_1\bar{\mathbf{x}}_1} + \dfrac{\hat\sigma_2^2}{\bar{\mathbf{x}}_2'\bar{M}_2\bar{\mathbf{x}}_2}}{\dfrac{\sigma_1^2}{\bar{\mathbf{x}}_1'\bar{M}_1\bar{\mathbf{x}}_1} + \dfrac{\sigma_2^2}{\bar{\mathbf{x}}_2'\bar{M}_2\bar{\mathbf{x}}_2}} \tag{1.5.46}$$

is approximately $\chi^2_\nu$ for some $\nu$. Because $Ew = \nu$, $w$ has the same mean as $\chi^2_\nu$. We shall determine $\nu$ so as to satisfy

$$Vw = 2\nu. \tag{1.5.47}$$

Noting that $V\hat\sigma_i^2 = 2\sigma_i^4/(T_i - K^*)$ because $(T_i - K^*)\hat\sigma_i^2/\sigma_i^2 \sim \chi^2_{T_i-K^*}$, and solving (1.5.47) for $\nu$, we obtain

$$\nu = \frac{\left(\dfrac{\sigma_1^2}{\bar{\mathbf{x}}_1'\bar{M}_1\bar{\mathbf{x}}_1} + \dfrac{\sigma_2^2}{\bar{\mathbf{x}}_2'\bar{M}_2\bar{\mathbf{x}}_2}\right)^2}{\dfrac{\sigma_1^4}{(T_1-K^*)(\bar{\mathbf{x}}_1'\bar{M}_1\bar{\mathbf{x}}_1)^2} + \dfrac{\sigma_2^4}{(T_2-K^*)(\bar{\mathbf{x}}_2'\bar{M}_2\bar{\mathbf{x}}_2)^2}}. \tag{1.5.48}$$

Finally, using the standard normal variable (1.5.37) and the approximate

chi-square variable (1.5.46), we have approximately

$$\frac{Z}{\sqrt{w/\nu}} \sim t_\nu. \tag{1.5.49}$$

In practice $\nu$ will be estimated by inserting $\hat\sigma_1^2$ and $\hat\sigma_2^2$ into the right-hand side of (1.5.48) and then choosing the integer closest to the calculated value.
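Welch's degrees-of-freedom formula, with the unknown variances replaced by their estimates, can be sketched as follows (assuming NumPy conventions; the function name is hypothetical, and `a1`, `a2` stand for the scalar precision terms that divide the variances in the coefficient-variance formulas):

```python
def welch_df(s1_sq, s2_sq, a1, a2, df1, df2):
    """Welch (1938) approximate degrees of freedom, as in (1.5.48), with
    the unknown sigma_i^2 replaced by the estimates s_i^2.

    a1, a2  -- precision terms dividing the variances (assumed given)
    df1,df2 -- residual degrees of freedom T_i - K* of the two regressions
    """
    num = (s1_sq / a1 + s2_sq / a2) ** 2
    den = s1_sq ** 2 / (df1 * a1 ** 2) + s2_sq ** 2 / (df2 * a2 ** 2)
    return num / den
```

In the balanced case (equal variances, equal precisions, equal degrees of freedom) the formula returns `df1 + df2`, the degrees of freedom the pooled t test would use; in practice the integer closest to the returned value is taken.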

Unfortunately, Welch’s method does not easily generalize to a situation where the equality of vector parameters is involved. Toyoda (1974), like Welch, proposed that both the denominator and the numerator chi-square variables of (1.5.26) be approximated by the moment method; but the resulting test statistic is independent of the unknown parameters only under unrealistic assumptions. Schmidt and Sickles (1977) found Toyoda’s approximation to be rather deficient.

In view of the difficulty encountered in generalizing the Welch method, it seems that we should look for other ways to test the equality of the regression parameters in the unequal-variances case. There are two obvious methods that come to mind: They are (1) the asymptotic likelihood ratio test and (2) the asymptotic F test (see Goldfeld and Quandt, 1978).9

The likelihood function of the model defined by (1.5.23) and (1.5.24) is

$$L = (2\pi)^{-(T_1+T_2)/2}\,\sigma_1^{-T_1}\sigma_2^{-T_2} \tag{1.5.50}$$
$$\times \exp\left[-\tfrac{1}{2}\sigma_1^{-2}(\mathbf{y}_1 - X_1\beta_1)'(\mathbf{y}_1 - X_1\beta_1)\right]$$
$$\times \exp\left[-\tfrac{1}{2}\sigma_2^{-2}(\mathbf{y}_2 - X_2\beta_2)'(\mathbf{y}_2 - X_2\beta_2)\right].$$

The value of $L$ attained when it is maximized without constraint, denoted by $\hat L$, can be obtained by evaluating the parameters of $L$ at $\beta_1 = \hat\beta_1$, $\beta_2 = \hat\beta_2$, $\hat\sigma_1^2 = T_1^{-1}(\mathbf{y}_1 - X_1\hat\beta_1)'(\mathbf{y}_1 - X_1\hat\beta_1)$, and $\hat\sigma_2^2 = T_2^{-1}(\mathbf{y}_2 - X_2\hat\beta_2)'(\mathbf{y}_2 - X_2\hat\beta_2)$. The value of $L$ attained when it is maximized subject to the constraint $\beta_1 = \beta_2\,(=\beta)$, denoted by $\tilde L$, can be obtained by evaluating the parameters of $L$ at the constrained maximum likelihood estimates: $\tilde\beta_1 = \tilde\beta_2\,(=\tilde\beta)$, $\tilde\sigma_1^2$, and $\tilde\sigma_2^2$. These estimates can be iteratively obtained as follows:

Step 1. Calculate $\tilde\beta = (\hat\sigma_1^{-2}X_1'X_1 + \hat\sigma_2^{-2}X_2'X_2)^{-1}(\hat\sigma_1^{-2}X_1'\mathbf{y}_1 + \hat\sigma_2^{-2}X_2'\mathbf{y}_2)$.

Step 2. Calculate $\tilde\sigma_1^2 = T_1^{-1}(\mathbf{y}_1 - X_1\tilde\beta)'(\mathbf{y}_1 - X_1\tilde\beta)$ and $\tilde\sigma_2^2 = T_2^{-1}(\mathbf{y}_2 - X_2\tilde\beta)'(\mathbf{y}_2 - X_2\tilde\beta)$.

Step 3. Repeat Step 1, substituting $\tilde\sigma_1^2$ and $\tilde\sigma_2^2$ for $\hat\sigma_1^2$ and $\hat\sigma_2^2$.

Step 4. Repeat Step 2, substituting the estimate of $\beta$ obtained at Step 3 for $\tilde\beta$.

Continue this process until the estimates converge. In practice, however, the estimates obtained at the end of Step 1 and Step 2 may be used without changing the asymptotic result (1.5.51).
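The iteration in Steps 1–4 can be sketched as follows (a NumPy sketch under the stated model; the function name and convergence criterion are choices of this sketch, and the starting values are the unconstrained ML variance estimates):

```python
import numpy as np

def constrained_mle(y1, X1, y2, X2, n_iter=50, tol=1e-10):
    """Iterate Steps 1-4: a GLS-type pooled estimate of the common beta,
    then per-regime ML variance estimates, until convergence.

    Assumes neither regime fits perfectly (nonzero residual variance)."""
    T1, T2 = len(y1), len(y2)
    def ml_var(y, X, T):
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ b
        return e @ e / T                         # ML variance (divide by T)
    s1, s2 = ml_var(y1, X1, T1), ml_var(y2, X2, T2)
    for _ in range(n_iter):
        # Step 1: weighted pooled estimate of the common beta
        A = X1.T @ X1 / s1 + X2.T @ X2 / s2
        b = np.linalg.solve(A, X1.T @ y1 / s1 + X2.T @ y2 / s2)
        # Step 2: per-regime ML variances at the pooled beta
        e1, e2 = y1 - X1 @ b, y2 - X2 @ b
        s1_new, s2_new = e1 @ e1 / T1, e2 @ e2 / T2
        converged = abs(s1_new - s1) + abs(s2_new - s2) < tol
        s1, s2 = s1_new, s2_new                  # Steps 3-4: recycle
        if converged:
            break
    return b, s1, s2
```

The converged $\tilde\beta$, $\tilde\sigma_1^2$, $\tilde\sigma_2^2$ can then be inserted into the likelihood-ratio statistic described in the text; as noted there, one pass of Steps 1 and 2 already suffices for the asymptotic result.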

Using the asymptotic theory of the likelihood ratio test, which will be developed in Section 4.5.1, we have asymptotically (that is, approximately when both $T_1$ and $T_2$ are large)

$$-2\log\,(\tilde L/\hat L) = T_1\log\,(\tilde\sigma_1^2/\hat\sigma_1^2) + T_2\log\,(\tilde\sigma_2^2/\hat\sigma_2^2) \sim \chi^2_{K^*}. \tag{1.5.51}$$

The null hypothesis $\beta_1 = \beta_2$ is to be rejected when the statistic (1.5.51) is larger than a certain value.

The asymptotic F test is derived by the following simple procedure: First, estimate $\sigma_1^2$ and $\sigma_2^2$ by $\hat\sigma_1^2$ and $\hat\sigma_2^2$, respectively, and define $\hat\rho = \hat\sigma_1/\hat\sigma_2$. Second, multiply both sides of (1.5.24) by $\hat\rho$ and define the new equation

$$\mathbf{y}_2^* = X_2^*\beta_2 + \mathbf{u}_2^*, \tag{1.5.52}$$

where $\mathbf{y}_2^* = \hat\rho\mathbf{y}_2$, $X_2^* = \hat\rho X_2$, and $\mathbf{u}_2^* = \hat\rho\mathbf{u}_2$. Third, treat (1.5.23) and (1.5.52) as the given equations and perform the F test (1.5.26) on them. The method works asymptotically because the variance of $\mathbf{u}_2^*$ is approximately the same as that of $\mathbf{u}_1$ when $T_1$ and $T_2$ are large, because $\hat\rho$ converges to $\sigma_1/\sigma_2$ in probability. Goldfeld and Quandt (1978) conducted a Monte Carlo experiment that showed that, when $\sigma_1^2 \ne \sigma_2^2$, the asymptotic F test performs well, closely followed by the asymptotic likelihood ratio test, whereas the F test based on the assumption of equality of the variances could be considerably inferior.
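The three-step asymptotic F procedure can be sketched as follows (assuming NumPy; the function name is hypothetical, and the F statistic is the usual restricted-versus-unrestricted sum-of-squares comparison applied to the rescaled system):

```python
import numpy as np

def asymptotic_F(y1, X1, y2, X2):
    """Goldfeld-Quandt asymptotic F test: rescale regime 2 by
    rho_hat = sigma1_hat / sigma2_hat, then apply the standard F test
    for a common beta to the rescaled stacked system."""
    def ssr(y, X):
        b = np.linalg.lstsq(X, y, rcond=None)[0]
        e = y - X @ b
        return e @ e                               # residual sum of squares
    T1, K = X1.shape
    T2 = X2.shape[0]
    ssr1, ssr2 = ssr(y1, X1), ssr(y2, X2)
    # Step 1: rho_hat from the two unbiased variance estimates
    rho = np.sqrt((ssr1 / (T1 - K)) / (ssr2 / (T2 - K)))
    # Step 2: rescale regime 2 as in the new equation for y2*, X2*
    y2s, X2s = rho * y2, rho * X2
    # Step 3: F test on the rescaled system
    ssr_restricted = ssr(np.concatenate([y1, y2s]),
                         np.vstack([X1, X2s]))     # common beta imposed
    ssr_unrestricted = ssr1 + rho ** 2 * ssr2      # separate betas
    return ((ssr_restricted - ssr_unrestricted) / K) / \
           (ssr_unrestricted / (T1 + T2 - 2 * K))
```

With two identical regimes the restriction costs nothing and the statistic is (numerically) zero, which is a convenient sanity check.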