# Serial Correlation

In this section we allow a nonzero correlation between $u_t$ and $u_s$ for $s \neq t$ in the model (12.1.1). Correlation between the values of a time series at different periods is called serial correlation or autocorrelation. It can be specified in infinitely many ways; here we consider one particular form of serial correlation, associated with the stationary first-order autoregressive model. It is defined by

(13.1.15) $u_t = \rho u_{t-1} + \varepsilon_t, \quad t = 1, 2, \ldots, T,$

where $\{\varepsilon_t\}$ are i.i.d. with $E\varepsilon_t = 0$ and $V\varepsilon_t = \sigma^2$, and $u_0$ is independent of $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_T$ with $Eu_0 = 0$ and $Vu_0 = \sigma^2/(1 - \rho^2)$.

Taking the expectation of both sides of (13.1.15) for $t = 1$ and using our assumptions, we see that $Eu_1 = \rho Eu_0 + E\varepsilon_1 = 0$. Repeating the same procedure for $t = 2, 3, \ldots, T$, we conclude that

(13.1.16) $Eu_t = 0$ for all $t$.

Next we evaluate the variances and covariances of $\{u_t\}$. Taking the variance of both sides of (13.1.15) for $t = 1$, we obtain

(13.1.17) $Vu_1 = \rho^2\dfrac{\sigma^2}{1 - \rho^2} + \sigma^2 = \dfrac{\sigma^2}{1 - \rho^2}$.

Repeating the same procedure for $t = 2, 3, \ldots, T$ shows that $Vu_t = \sigma^2/(1 - \rho^2)$ for all $t$.

Multiplying both sides of (13.1.15) by $u_{t-1}$ and taking the expectation, we obtain

(13.1.18) $Eu_tu_{t-1} = \dfrac{\sigma^2\rho}{1 - \rho^2}$ for all $t$

because of (13.1.17) and because $u_{t-1}$ and $\varepsilon_t$ are independent. Next, multiplying both sides of (13.1.15) by $u_{t-2}$ and taking the expectation, we obtain

(13.1.19) $Eu_tu_{t-2} = \dfrac{\sigma^2\rho^2}{1 - \rho^2}$ for all $t$.

Repeating this process, we obtain

(13.1.20) $Eu_tu_{t-j} = \dfrac{\sigma^2\rho^j}{1 - \rho^2}, \quad t = 1, 2, \ldots, T; \quad j = 0, 1, \ldots, t - 1$.

Note that (13.1.20) contains (13.1.17), (13.1.18), and (13.1.19) as special cases. Conditions (13.1.16) and (13.1.20) constitute stationarity (more precisely, weak stationarity). In matrix notation, (13.1.16) can be written as $Eu = 0$, and (13.1.20) is equivalent to

(13.1.21) $\Sigma \equiv Euu' = \dfrac{\sigma^2}{1 - \rho^2}\begin{bmatrix} 1 & \rho & \rho^2 & \cdots & \rho^{T-1} \\ \rho & 1 & \rho & \cdots & \rho^{T-2} \\ \rho^2 & \rho & 1 & \cdots & \rho^{T-3} \\ \vdots & & & \ddots & \vdots \\ \rho^{T-1} & \rho^{T-2} & \rho^{T-3} & \cdots & 1 \end{bmatrix}$.
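The autocovariances in (13.1.20) can be checked by simulation. The following sketch (parameter values are illustrative) generates a stationary AR(1) series, starting $u_0$ from its stationary distribution as assumed above, and compares empirical autocovariances with $\sigma^2\rho^j/(1 - \rho^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma, T = 0.6, 1.0, 200_000

# Simulate u_t = rho*u_{t-1} + eps_t with u_0 drawn from the
# stationary distribution, variance sigma^2/(1 - rho^2).
u = np.empty(T)
u_prev = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))
for t in range(T):
    u_prev = rho * u_prev + rng.normal(0.0, sigma)
    u[t] = u_prev

# Compare empirical and theoretical autocovariances for a few lags j.
for j in range(4):
    empirical = np.mean(u[j:] * u[: T - j])
    theoretical = sigma**2 * rho**j / (1.0 - rho**2)
    print(j, round(empirical, 3), round(theoretical, 3))
```

With this sample size the empirical values should agree with the formula to about two decimal places.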

It can be shown that

(13.1.22) $\Sigma^{-1} = \dfrac{1}{\sigma^2}\begin{bmatrix} 1 & -\rho & 0 & \cdots & 0 \\ -\rho & 1+\rho^2 & -\rho & \cdots & 0 \\ 0 & -\rho & 1+\rho^2 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & -\rho \\ 0 & 0 & \cdots & -\rho & 1 \end{bmatrix}$.

If $\rho$ is known, we can compute the GLS estimator of $\beta$ by inserting the $\Sigma^{-1}$ obtained above into (13.1.5). Note that $\sigma^2$ need not be known because it drops out of the formula (13.1.5).

The computation of the GLS estimator is facilitated by noting that

(13.1.23) $\Sigma^{-1} = \dfrac{1}{\sigma^2}R'R$,

where

(13.1.24) $R = \begin{bmatrix} \sqrt{1-\rho^2} & 0 & 0 & \cdots & 0 \\ -\rho & 1 & 0 & \cdots & 0 \\ 0 & -\rho & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & -\rho & 1 \end{bmatrix}$.
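The factorization (13.1.23) can be verified numerically; the following small check (values chosen only for illustration) builds $\Sigma$ from (13.1.21) and $R$ from (13.1.24) and confirms that $R'R = \sigma^2\Sigma^{-1}$.

```python
import numpy as np

rho, sigma, T = 0.7, 2.0, 6

# Sigma as in (13.1.21): (sigma^2/(1-rho^2)) * rho^|t-s|
idx = np.arange(T)
Sigma = sigma**2 * rho ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - rho**2)

# R as in (13.1.24): first row scaled by sqrt(1-rho^2), then -rho below the diagonal.
R = np.eye(T)
R[0, 0] = np.sqrt(1.0 - rho**2)
R[np.arange(1, T), np.arange(T - 1)] = -rho

lhs = R.T @ R
rhs = sigma**2 * np.linalg.inv(Sigma)
print(np.allclose(lhs, rhs))  # True
```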

Using R, we can write the GLS estimator (13.1.5) as

(13.1.25) $\hat\beta_G = (X'R'RX)^{-1}X'R'Ry$.

Except for the first row, premultiplication of a $T$-vector $z = (z_1, z_2, \ldots, z_T)'$ by $R$ performs the operation $z_t - \rho z_{t-1}$, $t = 2, 3, \ldots, T$. Thus the GLS estimator is computed as the LS estimator after this operation is performed on the dependent and the independent variables. The asymptotic distribution of the GLS estimator is unchanged if the first row of $R$ is deleted in defining the estimator by (13.1.25).
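The computation just described can be sketched as follows: simulate a regression with AR(1) errors, transform $y$ and $X$ by $R$, and apply LS to the transformed data. The data-generating values here are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, sigma, T = 0.5, 1.0, 300

# Regression y = X beta + u with AR(1) errors.
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.array([1.0, 2.0])
u = np.empty(T)
u[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - rho**2))
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal(0.0, sigma)
y = X @ beta + u

# Build R as in (13.1.24) and transform both sides of the regression.
R = np.eye(T)
R[0, 0] = np.sqrt(1.0 - rho**2)
R[np.arange(1, T), np.arange(T - 1)] = -rho
Xs, ys = R @ X, R @ y

# LS on the transformed data is the GLS estimator (13.1.25).
beta_gls, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(beta_gls)  # close to [1, 2]
```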

Many economic variables exhibit a pattern of serial correlation similar to that in (13.1.20). Therefore the first-order autoregressive model (13.1.15) is an empirically useful model to the extent that the error term of the regression may be regarded as the sum of the omitted independent variables. If, however, we believe that $\{u_t\}$ follow a higher-order autoregressive process, we should modify the definition of $R$ used in (13.1.25) accordingly. For example, if we suppose that $\{u_t\}$ follow a $p$th-order autoregressive model

(13.1.26) $u_t = \sum_{j=1}^{p} \rho_j u_{t-j} + \varepsilon_t$,

we should perform the operation $z_t - \sum_{j=1}^{p} \rho_j z_{t-j}$ on both the dependent and independent variables and then apply the LS method.
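The $p$th-order quasi-differencing operation above can be written as a small helper (the function name is hypothetical; for simplicity the first $p$ observations are simply dropped rather than given the special treatment the first row of $R$ provides in the first-order case).

```python
import numpy as np

def quasi_difference(z, rhos):
    """Return z_t - sum_j rhos[j-1] * z_{t-j}, dropping the first p rows."""
    z = np.asarray(z, dtype=float)
    p = len(rhos)
    out = z[p:].copy()
    for j, r in enumerate(rhos, start=1):
        out -= r * z[p - j : len(z) - j]
    return out

z = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(quasi_difference(z, [0.5, 0.2]))  # [1.8, 2.1, 2.4]
```

Applying this operation to each column of $X$ and to $y$, then running LS, gives the (approximate) GLS estimator for the $p$th-order model.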

Another important process that gives rise to serial correlation is the moving-average process. It is defined by

(13.1.27) $u_t = \sum_{j=0}^{q} \alpha_j \varepsilon_{t-j}$,

where $\{\varepsilon_t\}$ are i.i.d. as before. Computation of the GLS estimator is still possible in this case, but with more difficulty than for an autoregressive process. Nevertheless, a moving-average process can be well approximated by an autoregressive process as long as its order is taken high enough.

We consider next the estimation of $\rho$ in the regression model defined by (12.1.1) and (13.1.15). If $\{u_t\}$, $t = 1, 2, \ldots, T$, were observable, we could estimate $\rho$ by the LS estimator applied to (13.1.15). Namely,

(13.1.28) $\hat\rho = \dfrac{\sum_{t=2}^{T} u_tu_{t-1}}{\sum_{t=2}^{T} u_{t-1}^2}$.
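The estimator (13.1.28) is simple to compute; the following sketch applies it to a simulated AR(1) series (parameter values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(2)
rho, T = 0.8, 100_000

# Simulate a stationary AR(1) series with unit innovation variance.
u = np.empty(T)
u[0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - rho**2))
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal()

# The LS estimator (13.1.28): regress u_t on u_{t-1}.
rho_hat = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)
print(round(rho_hat, 3))  # approximately 0.8
```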

Since (13.1.15) itself cannot be regarded as the classical regression model, because $u_{t-1}$ cannot be regarded as nonstochastic, $\hat\rho$ does not possess all the properties of the LS estimator under the classical regression model. For example, it can be shown that $\hat\rho$ is generally biased. But it can also be shown that $\hat\rho$ is consistent and its asymptotic distribution is given by

(13.1.29) $\sqrt{T}(\hat\rho - \rho) \to N(0,\, 1 - \rho^2)$.

Since $\{u_t\}$ are in fact unobservable, it is reasonable to replace them in (13.1.28) by the LS residuals $\hat u_t = y_t - x_t'\hat\beta$, where $\hat\beta$ is the LS estimator, and define

(13.1.30) $\tilde\rho = \dfrac{\sum_{t=2}^{T} \hat u_t\hat u_{t-1}}{\sum_{t=2}^{T} \hat u_{t-1}^2}$.

It can be shown that $\tilde\rho$ is consistent and has the same asymptotic distribution as $\hat\rho$ given in (13.1.29). Finally, inserting $\tilde\rho$ into $R$ in (13.1.24), we can compute the FGLS estimator.
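The whole FGLS procedure just described can be sketched end to end: first-stage LS, $\tilde\rho$ from the residuals, then LS on the $R$-transformed data. Variable names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
rho, T = 0.6, 2000
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.array([0.5, -1.0])

# AR(1) errors started from the stationary distribution.
u = np.empty(T)
u[0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - rho**2))
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal()
y = X @ beta + u

# Step 1: LS estimator and residuals.
b_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
res = y - X @ b_ls

# Step 2: rho estimated from the residuals as in (13.1.30).
rho_t = np.sum(res[1:] * res[:-1]) / np.sum(res[:-1] ** 2)

# Step 3: insert rho_t into R of (13.1.24) and apply LS to R X and R y.
R = np.eye(T)
R[0, 0] = np.sqrt(1.0 - rho_t**2)
R[np.arange(1, T), np.arange(T - 1)] = -rho_t
b_fgls, *_ = np.linalg.lstsq(R @ X, R @ y, rcond=None)
print(round(rho_t, 2), b_fgls)
```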

In the remainder of this section, we consider the test of independence against serial correlation. In particular, we take the classical regression model as the null hypothesis and the model defined by (12.1.1) and (13.1.15) as the alternative hypothesis. This test is equivalent to testing $H_0\colon \rho = 0$ versus $H_1\colon \rho \neq 0$ in (13.1.15). Therefore it would be reasonable to use (13.1.30) as the test statistic. It is customary, however, to use the Durbin-Watson statistic

(13.1.31) $d = \dfrac{\sum_{t=2}^{T} (\hat u_t - \hat u_{t-1})^2}{\sum_{t=1}^{T} \hat u_t^2}$,

which is approximately equal to $2 - 2\tilde\rho$, because its distribution can be more easily computed than that of $\tilde\rho$. Before the days of modern computer technology, researchers used the table of the upper and lower bounds of the statistic compiled by Durbin and Watson (1951). Today, however, the exact $p$-value of the statistic can be computed.
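The statistic (13.1.31) and its approximation by $2 - 2\tilde\rho$ can be illustrated on simulated data (values below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
rho, T = 0.5, 5000

# Regression with AR(1) errors.
x = rng.normal(size=T)
u = np.empty(T)
u[0] = rng.normal(0.0, 1.0 / np.sqrt(1.0 - rho**2))
for t in range(1, T):
    u[t] = rho * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

# LS residuals.
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
res = y - X @ b

# Durbin-Watson statistic (13.1.31) and its approximation 2 - 2*rho_tilde.
d = np.sum(np.diff(res) ** 2) / np.sum(res**2)
rho_t = np.sum(res[1:] * res[:-1]) / np.sum(res[:-1] ** 2)
print(round(d, 2), round(2 - 2 * rho_t, 2))  # nearly equal
```

With $\rho = 0.5$, $d$ should come out near $2 - 2(0.5) = 1$, well below the value 2 expected under the null hypothesis of no serial correlation.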