Unit Roots

Herman J. Bierens*

1 Introduction

In this chapter I will explain the two most frequently applied types of unit root tests, namely the Augmented Dickey-Fuller tests (see Fuller, 1996; Dickey and Fuller, 1979, 1981), and the Phillips-Perron tests (see Phillips, 1987; Phillips and Perron, 1988). The statistics and econometrics levels required for understanding the material below are Hogg and Craig (1978) or a similar level for statistics, and Greene (1997) or a similar level for econometrics. The functional central limit theorem (see Billingsley, 1968), which plays a key role in the derivations involved, will be explained in this chapter by showing its analogy with the concept of convergence in distribution of random variables, and by confining the discussion to Gaussian unit root processes.

This chapter is not a review of the vast literature on unit roots. Such a review would entail a long list of descriptions of the many different recipes for unit root testing proposed in the literature, and would leave no space for motivation, let alone proofs. I have opted for depth rather than breadth, by focusing on the most influential papers on unit root testing, and discussing them in detail, without assuming that the reader has any previous knowledge about this topic.

As an introduction to the concept of a unit root and its consequences, consider the Gaussian AR(1) process y_t = β_0 + β_1 y_{t−1} + u_t, or equivalently (1 − β_1 L)y_t = β_0 + u_t, where L is the lag operator: Ly_t = y_{t−1}, and the u_t's are iid N(0, σ²). The lag polynomial 1 − β_1 L has root equal to 1/β_1. If |β_1| < 1, then by backwards substitution we can write y_t = β_0/(1 − β_1) + Σ_{j=0}^∞ β_1^j u_{t−j}, so that y_t is strictly stationary, i.e. for arbitrary natural numbers m_1 < m_2 < … < m_{k−1} the joint distribution of y_t, y_{t−m_1}, y_{t−m_2}, …, y_{t−m_{k−1}} does not depend on t, but only on the lags or leads m_1, m_2, …, m_{k−1}. Moreover, the distribution of y_t, t > 0, conditional on y_0, y_{−1}, y_{−2}, …, then converges to the marginal distribution of y_t if t → ∞. In other words, y_t has a vanishing memory: y_t becomes independent of its past, y_0, y_{−1}, y_{−2}, …, if t → ∞.
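The vanishing memory of a stationary AR(1) process can be illustrated with a small simulation; the following sketch (our own illustration, using NumPy, with parameter values chosen by us) drives two paths with the same shocks but very different starting values and shows that the gap between them dies out geometrically.

```python
import numpy as np

# Simulate the Gaussian AR(1) process y_t = beta0 + beta1*y_{t-1} + u_t with
# |beta1| < 1, starting from two very different initial values y_0. After t
# steps the gap between the paths is (y0_a - y0_b)*beta1**t, which vanishes,
# illustrating that the process forgets its initial condition.
rng = np.random.default_rng(0)
beta0, beta1, sigma, T = 1.0, 0.5, 1.0, 2000

def simulate_ar1(y0, shocks):
    y = np.empty(len(shocks))
    prev = y0
    for t, u in enumerate(shocks):
        prev = beta0 + beta1 * prev + u
        y[t] = prev
    return y

shocks = rng.normal(0.0, sigma, T)       # the same shocks for both paths
path_a = simulate_ar1(y0=100.0, shocks=shocks)
path_b = simulate_ar1(y0=-100.0, shocks=shocks)

gap_start = abs(path_a[0] - path_b[0])   # 0.5 * 200 = 100 after one step
gap_end = abs(path_a[-1] - path_b[-1])   # 200 * 0.5**2000: numerically zero
print(gap_start, gap_end)
```

Feeding both paths identical shocks isolates the effect of the initial value; with β_1 = 1 instead, the gap would never shrink.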

If β_1 = 1, so that the lag polynomial 1 − β_1 L has a unit root, then y_t is called a unit root process. In this case the AR(1) process under review becomes y_t = y_{t−1} + β_0 + u_t, which by backwards substitution yields for t > 0, y_t = y_0 + β_0 t + Σ_{j=1}^t u_j. Thus now the distribution of y_t, t > 0, conditional on y_0, y_{−1}, y_{−2}, …, is N(y_0 + β_0 t, σ²t), so that y_t no longer has a vanishing memory: a shock in y_0 will have a persistent effect on y_t. The former intercept β_0 now becomes the drift parameter of the unit root process involved.
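The conditional distribution N(y_0 + β_0 t, σ²t) can be checked by Monte Carlo; the sketch below (our own illustration, with parameter values we chose) draws many replications of y_T = y_0 + β_0 T + Σ u_j and compares the empirical mean and variance with the theoretical values.

```python
import numpy as np

# With beta1 = 1 the process is y_t = y_0 + beta0*t + sum of the shocks, so
# conditional on y_0 its mean is y_0 + beta0*t and its variance is sigma^2 * t.
# Verify this at t = T across many independent replications.
rng = np.random.default_rng(42)
beta0, sigma, T, reps = 0.2, 1.0, 400, 5000
y0 = 10.0

shocks = rng.normal(0.0, sigma, size=(reps, T))
y_T = y0 + beta0 * T + shocks.sum(axis=1)    # y_T in each replication

emp_mean, emp_var = y_T.mean(), y_T.var()
print(emp_mean, emp_var)   # close to y0 + beta0*T = 90 and sigma^2*T = 400
```

The linearly growing variance σ²t is precisely the persistent-memory property: uncertainty about y_t never settles down, in contrast with the stationary case.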

It is important to distinguish stationary processes from unit root processes, for the following reasons.

1. Regressions involving unit root processes may give spurious results. If y_t and x_t are mutually independent unit root processes, i.e. y_t is independent of x_{t−j} for all t and j, then the OLS regression of y_t on x_t for t = 1, …, n, with or without an intercept, will yield a significant estimate of the slope parameter if n is large: the absolute value of the t-value of the slope converges in probability to ∞ if n → ∞. We then might conclude that y_t depends on x_t, while in reality the y_t's are independent of the x_t's. This phenomenon is called spurious regression.1 One should therefore be very cautious when conducting standard econometric analysis using time series. If the time series involved are unit root processes, naive application of regression analysis may yield nonsense results.

2. For two or more unit root processes there may exist linear combinations which are stationary, and these linear combinations may be interpreted as long-run relationships. This phenomenon is called cointegration,2 and plays a dominant role in modern empirical macroeconomic research.

3. Tests of parameter restrictions in (auto)regressions involving unit root processes have in general different null distributions than in the case of stationary processes. In particular, if one were to test the null hypothesis β_1 = 1 in the above AR(1) model using the usual t-test, the null distribution involved is nonnormal. Therefore, naive application of classical inference may give incorrect results. We will demonstrate the latter first, and in the process derive the Dickey-Fuller test (see Fuller, 1996; Dickey and Fuller, 1979, 1981), by rewriting the AR(1) model as

Δy_t = y_t − y_{t−1} = β_0 + (β_1 − 1)y_{t−1} + u_t = α_0 + α_1 y_{t−1} + u_t,    (29.1)

say, estimating the parameter α_1 by OLS on the basis of observations y_0, y_1, …, y_n, and then testing the unit root hypothesis α_1 = 0 against the stationarity hypothesis −2 < α_1 < 0, using the t-value of α_1. In Section 2 we consider the case where α_0 = 0 under both the unit root hypothesis and the stationarity hypothesis. In Section 3 we consider the case where α_0 = 0 under the unit root hypothesis but not under the stationarity hypothesis.
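The spurious regression phenomenon in point 1 above is easy to reproduce by simulation; the sketch below (our own illustration, using NumPy, with replication counts we chose) regresses one random walk on another, independent, random walk and records how often the usual t-test declares the slope significant.

```python
import numpy as np

# Regress one random walk on another, independent, random walk and record the
# ordinary OLS t-value of the slope. Although the series are independent, |t|
# exceeds 1.96 in a large fraction of replications, far above the nominal 5%.
rng = np.random.default_rng(1)

def slope_t_value(y, x):
    # OLS of y on x with intercept; returns the usual t-value of the slope.
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ b
    s2 = (e @ e) / (n - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return b[1] / np.sqrt(cov[1, 1])

reps, n = 500, 500
t_vals = np.empty(reps)
for r in range(reps):
    y = rng.normal(size=n).cumsum()   # two independent random walks
    x = rng.normal(size=n).cumsum()
    t_vals[r] = slope_t_value(y, x)

rejection_rate = np.mean(np.abs(t_vals) > 1.96)
print(rejection_rate)   # far above the nominal 0.05
```

Rerunning with larger n drives the rejection rate even higher, consistent with |t| diverging in probability as n → ∞.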
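The regression in (29.1) can be sketched in code as well; the following illustration (our own, using NumPy) estimates α_1 and its t-value by OLS for a simulated stationary AR(1) series. Note that under the unit root null the t-value must be compared with Dickey-Fuller critical values, not with standard normal ones.

```python
import numpy as np

# Estimate alpha_1 by OLS in Delta y_t = alpha_0 + alpha_1*y_{t-1} + u_t, as
# in (29.1), and form its t-value. Here the data are generated from a
# stationary AR(1) with beta1 = 0.5, so that alpha_1 = beta1 - 1 = -0.5.
rng = np.random.default_rng(7)
n = 1000

def df_regression(y):
    dy = np.diff(y)                  # Delta y_t
    ylag = y[:-1]                    # y_{t-1}
    X = np.column_stack([np.ones(len(dy)), ylag])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    e = dy - X @ b
    s2 = (e @ e) / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return b[1], b[1] / np.sqrt(cov[1, 1])   # (alpha_1 estimate, t-value)

y = np.empty(n)
y[0] = 0.0
u = rng.normal(size=n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + u[t]     # stationary: alpha_1 = -0.5

a1_hat, t_val = df_regression(y)
print(a1_hat, t_val)   # a1_hat near -0.5; t_val large and negative
```

Generating y with β_1 = 1 instead would put the t-value in the nonstandard Dickey-Fuller null distribution derived in the sections that follow.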

The assumption that the error process u_t is independent is quite unrealistic for macroeconomic time series. Therefore, in Sections 4 and 5 this assumption will be relaxed, and two types of appropriate unit root tests will be discussed: the augmented Dickey-Fuller (ADF) tests, and the Phillips-Perron (PP) tests.

In Section 6 we consider the unit root with drift case, and we discuss the ADF and PP tests of the unit root with drift hypothesis, against the alternative of trend stationarity.

Finally, Section 7 contains some concluding remarks.
