# AR(1): the probabilistic reduction perspective

The probabilistic reduction perspective was developed in Spanos (1986). This perspective begins with the observable process {y_t, t ∈ T} and specifies the statistical model exclusively in terms of this process. In particular, it contemplates the DGM (28.3) from left to right as an orthogonal decomposition of the form:

$$y_t = E(y_t \mid \sigma(Y^0_{t-1})) + u_t, \quad t \in T, \qquad (28.9)$$

where Y⁰_{t−1} := (y_{t−1}, y_{t−2}, …, y_0) and u_t = y_t − E(y_t | σ(Y⁰_{t−1})), with the underlying statistical model viewed as a reduction from the joint distribution of the underlying process {y_t, t ∈ T}. The form of the autoregressive function depends on the joint distribution:

$$f(y_0, y_1, y_2, \ldots, y_T; \psi), \quad \text{for all } (y_0, y_1, y_2, \ldots, y_T) \in \mathbb{R}^{T+1},$$

in the sense of Kolmogorov (1933). In the present case, the reduction assumptions on the joint distribution of the process {y_t, t ∈ T} that would yield (28.3) are: (i) normality, (ii) Markovness, and (iii) stationarity.

Let us consider this in some detail. The Markovness assumption for the underlying process enables one to concentrate on bivariate distributions, since:

$$f(y_0, y_1, y_2, \ldots, y_T; \psi) = f(y_0; \varphi_0)\prod_{t=1}^{T} f(y_t \mid y_{t-1}; \varphi), \quad (y_0, y_1, y_2, \ldots, y_T) \in \mathbb{R}^{T+1}. \qquad (28.10)$$

The underlying bivariate distribution for model (28.3) is:

$$\begin{pmatrix} y_t \\ y_{t-1} \end{pmatrix} \sim N\!\left(\begin{pmatrix} \mu \\ \mu \end{pmatrix}, \begin{pmatrix} \sigma_0 & \sigma_1 \\ \sigma_1 & \sigma_0 \end{pmatrix}\right), \quad t \in T, \qquad (28.11)$$

which, via the orthogonal decomposition (28.9) gives rise to:

$$y_t = \alpha_0 + \alpha_1 y_{t-1} + u_t, \quad t \in T. \qquad (28.12)$$

The statistical parameters φ := (α_0, α_1, σ²) are related to the primary parameters ψ := (μ, σ_0, σ_1) via:

$$\alpha_0 = \left(1 - \frac{\sigma_1}{\sigma_0}\right)\mu \in \mathbb{R}, \quad \alpha_1 = \frac{\sigma_1}{\sigma_0} \in (-1, 1), \quad \sigma^2 = \sigma_0 - \frac{\sigma_1^2}{\sigma_0} \in \mathbb{R}_+, \qquad (28.13)$$

and thus the two parameter spaces take the form:

$$\psi := (\mu, \sigma_1, \sigma_0) \in \mathbb{R}^2 \times \mathbb{R}_+, \qquad \varphi := (\alpha_0, \alpha_1, \sigma^2) \in \mathbb{R} \times (-1, 1) \times \mathbb{R}_+.$$

The inverse mapping from φ to ψ:

$$\mu = \frac{\alpha_0}{1 - \alpha_1}, \quad \sigma_0 = \frac{\sigma^2}{1 - \alpha_1^2}, \quad \sigma_1 = \frac{\alpha_1 \sigma^2}{1 - \alpha_1^2},$$

reveals that the admissible range of values of α_1 is (−1, 1), which excludes unity. Note that the parameterization (28.13) can be derived directly from (28.12) by utilizing the assumptions E(u_t | σ(Y⁰_{t−1})) = 0 and E(u_t² | σ(Y⁰_{t−1})) = σ²; see Spanos (1995).
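The mapping (28.13) and its inverse are easy to check numerically. The sketch below codes both directions and verifies the round trip; the parameter values are purely illustrative.

```python
import numpy as np

def psi_to_phi(mu, s0, s1):
    """Forward mapping (28.13): psi = (mu, sigma_0, sigma_1) -> phi = (alpha_0, alpha_1, sigma^2)."""
    a1 = s1 / s0
    return (1.0 - a1) * mu, a1, s0 - s1 ** 2 / s0

def phi_to_psi(a0, a1, sigma2):
    """Inverse mapping; requires |alpha_1| < 1 so that sigma_0 = sigma^2/(1 - alpha_1^2) > 0."""
    mu = a0 / (1.0 - a1)
    s0 = sigma2 / (1.0 - a1 ** 2)
    return mu, s0, a1 * s0

psi = (2.0, 4.0, 3.2)        # illustrative values, implying alpha_1 = 0.8 in (-1, 1)
phi = psi_to_phi(*psi)       # (0.4, 0.8, 1.44)
assert np.allclose(phi_to_psi(*phi), psi)
```

Attempting the inverse with |α_1| = 1 divides by zero, which is the numerical counterpart of unity being excluded from the admissible range.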

The probabilistic reduction approach views the AR(1) model specified in terms of (28.12) as comprising the following model assumptions concerning the conditional process {(y_t | Y⁰_{t−1}), t ∈ T}:

1c. f(y_t | Y⁰_{t−1}; φ) is normal;
2c. E(y_t | σ(Y⁰_{t−1})) = α_0 + α_1 y_{t−1}, linear in y_{t−1};
3c. var(y_t | σ(Y⁰_{t−1})) = σ², free of Y⁰_{t−1};
4c. (α_0, α_1, σ²) are not functions of t ∈ T;
5c. {(u_t | Y⁰_{t−1}), t ∈ T} is a martingale difference process.

Note that the temporal dependence assumption underlying the observable process {y_t, t ∈ T} is Markov autocorrelation, whose general form (see (28.8)) is:

$$\operatorname{cov}(y_t, y_{t-\tau}) := \sigma_{|\tau|} < c\,\lambda^{|\tau|}, \quad c > 0, \; 0 < \lambda < 1, \; \tau \neq 0, \; t = 1, 2, \ldots$$
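For the stationary normal AR(1) process, the autocovariance is σ_{|τ|} = σ_0 α_1^{|τ|}, so the bound holds with λ = |α_1| and any c > σ_0. A minimal simulation sketch (parameter values are illustrative, not from the text) confirms the geometric decay:

```python
import numpy as np

a0, a1, sigma2, T = 0.5, 0.8, 1.0, 50_000    # illustrative values, |a1| < 1
mu = a0 / (1 - a1)                           # stationary mean
s0 = sigma2 / (1 - a1 ** 2)                  # stationary variance sigma_0

rng = np.random.default_rng(0)
y = np.empty(T)
y[0] = rng.normal(mu, np.sqrt(s0))           # draw y_0 from the stationary marginal
for t in range(1, T):
    y[t] = a0 + a1 * y[t - 1] + rng.normal(0.0, np.sqrt(sigma2))

# Empirical autocovariances decay geometrically: sigma_|tau| = sigma_0 * a1**|tau|
for tau in (1, 2, 3):
    emp = np.cov(y[tau:], y[:-tau])[0, 1]
    print(tau, round(emp, 3), round(s0 * a1 ** tau, 3))
```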

The question that naturally arises at this stage is what kind of advantages the probabilistic reduction (PR) perspective offers (if any) when compared with the traditional DGM view. For a more systematic answer to this question we will consider the advantages at the different stages of empirical modeling: (i) specification, (ii) estimation, (iii) misspecification testing and (iv) respecification.

## Specification

This refers to the initial stage of choosing a statistical model in view of the observed data and the theoretical question(s) of interest. The postulated statistical model purports to provide an adequate description of the observable stochastic phenomenon of interest, that is, to model all the statistical systematic information exhibited by the observed data (see Spanos, 1999, ch. 1). The PR perspective of the AR(1) model (defined in terms of assumptions 1c-5c) has a distinct advantage over that of the traditional approach based on assumptions 1a-4a, insofar as the latter assumptions are not assessable a priori because they are defined in terms of the unobservable error term process {ε_t, t ∈ T}. In contrast, assumptions 1c-5c are specified directly in terms of the process {(y_t | Y⁰_{t−1}), t ∈ T}, and their validity can be assessed a priori via the reduction assumptions (i)-(iii) relating to {y_t, t ∈ T}, using graphical techniques such as t-plots and scatterplots. The relationship between the model assumptions 1c-5c and the reduction assumptions is given by the following theorem.

Theorem 1. Let {y_t, t ∈ T} be a stochastic process with bounded moments of order two. The process {y_t, t ∈ T} is normal, Markov and stationary if and only if the conditional process {(y_t | Y⁰_{t−1}), t ∈ T} satisfies the model assumptions 1c-5c.

Proof. The "if" part, (normality, Markovness, stationarity) ⇒ 1c-5c, is trivial since:

$$\begin{aligned} f(y_0, y_1, y_2, \ldots, y_T; \psi) &= f(y_0; \varphi_0)\prod_{t=1}^{T} f(y_t \mid y_{t-1}, y_{t-2}, \ldots, y_0; \varphi_t) \\ &= f(y_0; \varphi_0)\prod_{t=1}^{T} f_t(y_t \mid y_{t-1}; \varphi_t) \quad \text{(Markovness)} \\ &= f(y_0; \varphi_0)\prod_{t=1}^{T} f(y_t \mid y_{t-1}; \varphi) \quad \text{(stationarity)}. \end{aligned}$$

These, combined with normality, imply assumptions 1c-5c. The "only if" part follows directly from Theorem 1 in Spanos (1995). ■

## Estimation

Given that the likelihood function is defined in terms of the joint distribution of the observable process {y_t, t ∈ T}, the PR approach enjoys a minor advantage over the traditional approach because the need to transfer the probabilistic structure from the error process {ε_t, t ∈ T} onto the observable process does not arise. The primary advantage of the PR approach, however, arises from the implicit parameterization (28.13), which relates the model parameters φ := (α_0, α_1, σ²) to the primary parameters ψ := (μ, σ_0, σ_1). This parameterization plays an important role in bringing out the interrelationships among the model parameters as well as determining their admissible range. For instance, the PR statistical parameterization in (28.13) brings out two important points relating to the model parameters which are ignored by the traditional time series literature. The first point is that the admissible range of values of the model parameter α_1 is (−1, 1), which excludes the values |α_1| = 1. This has very important implications for the unit root testing literature. The second point is that the implicit restriction σ² = σ_0(1 − α_1²) does not involve only the initial condition (as traditionally assumed) but all observations. This has important implications for the MLEs of φ when α_1 is near the unit root, because the likelihood function is based on (28.10); see Spanos and McGuirk (1999) for further details.

The PR approach can also help shed some light on the finite sample distributions of the OLS estimators of (α_0, α_1). In view of the similarity between the conditioning information set σ(Y⁰_{t−1}) of the AR(1) model and that of the stochastic normal/linear regression model, σ(X_t) (see Spanos, 1986, ch. 20), one can conjecture that the finite sampling distributions of (α̂_0, α̂_1) are closer to the Student's t than to the normal.
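Finite-sample behavior of this kind can be probed with a small Monte Carlo sketch. The design below (α_1 = 0.9, T = 100, stationary start) is purely illustrative; near the unit root the sampling distribution of the OLS estimator of α_1 is known to be biased downward and left-skewed, far from the textbook normal approximation:

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1, T, reps = 0.0, 0.9, 100, 2000        # illustrative near-unit-root design
a1_hat = np.empty(reps)
for r in range(reps):
    y = np.empty(T + 1)
    y[0] = rng.normal(0.0, np.sqrt(1.0 / (1 - a1 ** 2)))   # stationary start
    for t in range(1, T + 1):
        y[t] = a0 + a1 * y[t - 1] + rng.normal()
    X = np.column_stack([np.ones(T), y[:-1]])              # regressors (1, y_{t-1})
    a1_hat[r] = np.linalg.lstsq(X, y[1:], rcond=None)[0][1]

print(a1_hat.mean())                       # noticeably below the true value 0.9
print(np.median(a1_hat) > a1_hat.mean())   # left skew: mean pulled below the median
```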

## Misspecification testing

This refers to the testing of the model assumptions using misspecification tests, which probe beyond the boundaries of the postulated model; this should be contrasted with Neyman-Pearson testing, which is viewed as testing within those boundaries (see Spanos, 1999, chs 14-15). The PR perspective again has certain distinct advantages over the traditional approach. First, the assumptions 1c-5c are specified explicitly in terms of the observable process and not the error process (assumptions 1a-4a). This makes it easier to develop misspecification tests for these assumptions. In addition, the various misspecification tests developed in the context of the normal/linear regression model (see Spanos, 1986, chs 21-23) can easily be adapted to the case of the AR(1) model with assumptions 1c-5c. Second, in the context of the PR approach, the connection between the reduction and model assumptions, utilized at the specification stage, can shed light on the likely directions of departures from 1c-5c, which can be useful in the choice of appropriate misspecification tests. Third, the same relationship can also be used to devise joint misspecification tests (see Spanos, 1999, ch. 15).
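To make this concrete, here is a hedged sketch of one such check, probing assumption 3c (conditional variance free of the past) with an auxiliary regression of squared AR(1) residuals on y_{t−1} and its square. The design values are illustrative, and a properly sized test would replace the informal inspection of the slopes:

```python
import numpy as np

rng = np.random.default_rng(2)
a1, T = 0.6, 2_000                           # illustrative design, intercept set to 0
y = np.zeros(T + 1)
for t in range(1, T + 1):
    y[t] = a1 * y[t - 1] + rng.normal()

# Step 1: estimate the AR(1) regression (28.12) by OLS and form the residuals.
X = np.column_stack([np.ones(T), y[:-1]])
b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
u = y[1:] - X @ b

# Step 2: auxiliary regression of u_t^2 on y_{t-1} and y_{t-1}^2; under 1c-5c
# the slope coefficients should be indistinguishable from zero.
Z = np.column_stack([np.ones(T), y[:-1], y[:-1] ** 2])
g = np.linalg.lstsq(Z, u ** 2, rcond=None)[0]
print(g[1], g[2])                            # both close to zero here
```

Systematic nonzero slopes in the auxiliary regression would point toward dynamic heteroskedasticity, i.e. a departure from 3c.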

## Respecification

This refers to the choice of an alternative statistical model when the original model is found to be statistically inadequate. In the context of the PR approach, respecification can be viewed as the choice of an alternative statistical model which can be devised by changing the reduction (not the model) assumptions in view of the misspecification testing results. For instance, if the misspecification testing has shown departures from assumptions 1c and 3c, the correspondence between reduction and model assumptions suggests changing the normality reduction assumption to another joint distribution with a linear autoregression and a heteroskedastic conditional variance; a member of the elliptically symmetric family of joint distributions, such as the Student's t, suggests itself in this case. Changing the normal to the Student's t distribution will give rise to a different AR(1) model, with assumption 1c replaced by the Student's t and assumption 3c by a particular dynamic heteroskedasticity formulation as suggested by the Student's t distribution (see Spanos, 1994). In contrast, in the context of the traditional approach, respecification takes the form of changing the model assumptions without worrying about the potential internal inconsistency among these assumptions. As shown in Spanos (1995), postulating some arbitrary dynamic heteroskedasticity formulation might give rise to internal inconsistencies.
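As a schematic illustration of the kind of formulation involved (the exact constants, which depend on the degrees of freedom ν, are derived in Spanos, 1994, and are suppressed here), a Student's t reduction keeps the autoregression linear while making the conditional variance a quadratic function of the conditioning variable:

```latex
E(y_t \mid \sigma(Y^0_{t-1})) = \alpha_0 + \alpha_1 y_{t-1},
\qquad
\operatorname{var}(y_t \mid \sigma(Y^0_{t-1})) \;\propto\; \nu + \frac{(y_{t-1}-\mu)^2}{\sigma_0},
```

so that, unlike an arbitrarily appended heteroskedasticity formulation, the conditional variance is internally consistent with the joint distribution that generated it.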
