The probabilistic reduction perspective

The probabilistic reduction perspective contemplates the DGM of the Vector Autoregressive representation from left to right as an orthogonal decomposition:

$$\mathbf{Z}_t = E(\mathbf{Z}_t \mid \sigma(\mathbf{Z}_{t-1}^{0})) + \mathbf{u}_t, \quad t \in \mathbb{T}, \qquad (28.28)$$

where $\mathbf{Z}_{t-1}^{0} := (\mathbf{Z}_{t-1}, \mathbf{Z}_{t-2}, \ldots, \mathbf{Z}_0)$ and $\mathbf{u}_t = \mathbf{Z}_t - E(\mathbf{Z}_t \mid \sigma(\mathbf{Z}_{t-1}^{0}))$, with the underlying statistical model viewed as a reduction from the joint distribution of the underlying process $\{\mathbf{Z}_t, t \in \mathbb{T}\}$. In direct analogy to the univariate case, the reduction assumptions on the joint distribution of the process $\{\mathbf{Z}_t, t \in \mathbb{T}\}$ that would yield the VAR(1) model are: (i) normality, (ii) Markovness, and (iii) stationarity.

Let us consider the question of probabilistic reduction in some detail by imposing the reduction assumptions in a certain sequence in order to reduce the joint distribution to an operational model:

$$
\begin{aligned}
D(\mathbf{Z}_0, \mathbf{Z}_1, \ldots, \mathbf{Z}_T; \psi) &= D(\mathbf{Z}_0; \varphi_0)\prod_{t=1}^{T} D_t(\mathbf{Z}_t \mid \mathbf{Z}_{t-1}^{0}; \varphi_t) \\
&\overset{\mathrm{M}}{=} D(\mathbf{Z}_0; \varphi_0)\prod_{t=1}^{T} D_t(\mathbf{Z}_t \mid \mathbf{Z}_{t-1}; \varphi_t) \\
&\overset{\mathrm{M\&S}}{=} D(\mathbf{Z}_0; \varphi_0)\prod_{t=1}^{T} D(\mathbf{Z}_t \mid \mathbf{Z}_{t-1}; \varphi), \quad (\mathbf{Z}_0, \ldots, \mathbf{Z}_T) \in \mathbb{R}^{m(T+1)}. \qquad (28.29)
\end{aligned}
$$

The first equality does not entail any assumptions, but the second follows from the Markovness (M) and the third from the stationarity (S) assumption. In order to see what happens to $D(\mathbf{Z}_t \mid \mathbf{Z}_{t-1}; \varphi)$ when the normality assumption:

$$\begin{pmatrix} \mathbf{Z}_t \\ \mathbf{Z}_{t-1} \end{pmatrix} \sim \mathsf{N}\!\left(\begin{pmatrix} \boldsymbol{\mu} \\ \boldsymbol{\mu} \end{pmatrix}, \begin{pmatrix} \boldsymbol{\Sigma}(0) & \boldsymbol{\Sigma}(1)^{\top} \\ \boldsymbol{\Sigma}(1) & \boldsymbol{\Sigma}(0) \end{pmatrix}\right), \qquad (28.30)$$

is imposed, the orthogonal decomposition (28.28) gives rise to:

$$\mathbf{Z}_t = \mathbf{a}_0 + \mathbf{A}_1\mathbf{Z}_{t-1} + \mathbf{u}_t, \quad t \in \mathbb{T}. \qquad (28.31)$$

The statistical parameters $\varphi := (\mathbf{a}_0, \mathbf{A}_1, \boldsymbol{\Omega})$ are related to the primary parameters $\psi := (\boldsymbol{\mu}, \boldsymbol{\Sigma}(0), \boldsymbol{\Sigma}(1))$ via:

$$\mathbf{a}_0 = (\mathbf{I} - \mathbf{A}_1)\boldsymbol{\mu}, \quad \mathbf{A}_1 = \boldsymbol{\Sigma}(1)^{\top}\boldsymbol{\Sigma}(0)^{-1}, \quad \boldsymbol{\Omega} = \boldsymbol{\Sigma}(0) - \boldsymbol{\Sigma}(1)^{\top}\boldsymbol{\Sigma}(0)^{-1}\boldsymbol{\Sigma}(1), \qquad (28.32)$$

and the discussion concerning the interrelationships between the two parameter spaces is analogous to the univariate case discussed in the previous section and will not be pursued any further; see Spanos (1986, chs 22-23).
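As a numerical illustration of the mapping in (28.32), here is a minimal sketch in numpy; the particular values of $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}(0)$ and $\boldsymbol{\Sigma}(1)$ are hypothetical, chosen only to keep the example concrete:

```python
import numpy as np

# Hypothetical primary parameters psi = (mu, Sigma0, Sigma1) for m = 2,
# with Sigma1 the first-order autocovariance cov(Z_{t-1}, Z_t).
mu = np.array([1.0, 2.0])
Sigma0 = np.array([[1.0, 0.3],
                   [0.3, 2.0]])
Sigma1 = np.array([[0.5, 0.2],
                   [0.1, 0.6]])

# Statistical parameters phi = (a0, A1, Omega) via (28.32)
Sigma0_inv = np.linalg.inv(Sigma0)
A1 = Sigma1.T @ Sigma0_inv
a0 = (np.eye(2) - A1) @ mu
Omega = Sigma0 - Sigma1.T @ Sigma0_inv @ Sigma1

print("a0 =", a0)
print("A1 =\n", A1)
print("Omega =\n", Omega)

# Omega is a conditional covariance matrix, so for admissible primary
# parameters it should come out symmetric and positive definite.
assert np.allclose(Omega, Omega.T)
assert np.all(np.linalg.eigvalsh(Omega) > 0)
```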

The probabilistic reduction approach views the VAR(1) model specified in terms of (28.12) as comprising the following model assumptions concerning the conditional process $\{(\mathbf{Z}_t \mid \mathbf{Z}_{t-1}), t \in \mathbb{T}\}$:

1. $D(\mathbf{Z}_t \mid \mathbf{Z}_{t-1}; \varphi)$ is normal;

2. $E(\mathbf{Z}_t \mid \sigma(\mathbf{Z}_{t-1})) = \mathbf{a}_0 + \mathbf{A}_1\mathbf{Z}_{t-1}$, linear in $\mathbf{Z}_{t-1}$;

3. $\mathrm{cov}(\mathbf{Z}_t \mid \sigma(\mathbf{Z}_{t-1})) = \boldsymbol{\Omega}$, free of $\mathbf{Z}_{t-1}$;

4. $(\mathbf{a}_0, \mathbf{A}_1, \boldsymbol{\Omega})$ are not functions of $t \in \mathbb{T}$;

5. $\{(\mathbf{u}_t \mid \mathbf{Z}_{t-1}), t \in \mathbb{T}\}$ is a vector martingale difference process.

Continuing with the analogies between the vector and univariate cases, the temporal dependence assumption for the process $\{\mathbf{Z}_t, t \in \mathbb{T}\}$ is Markov autocorrelation:

$$\mathrm{cov}(Z_{it}, Z_{j(t-\tau)}) = a_{ij}(\tau) \le c\,\lambda^{|\tau|}, \quad c > 0, \ 0 < \lambda < 1, \ \tau \neq 0, \ i, j = 1, 2, \ldots, \ t = 1, 2, \ldots
$$
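To see why a stationary VAR(1) satisfies a bound of this geometric form, note that its autocovariances obey $\mathrm{cov}(\mathbf{Z}_t, \mathbf{Z}_{t-\tau}) = \mathbf{A}_1^{\tau}\boldsymbol{\Sigma}(0)$, so their elements die out at a rate governed by the eigenvalues of $\mathbf{A}_1$. A minimal numerical sketch, with hypothetical parameter values and scipy's discrete Lyapunov solver used to obtain $\boldsymbol{\Sigma}(0)$:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical stable VAR(1): eigenvalues of A1 lie inside the unit circle.
A1 = np.array([[0.5, 0.2],
               [0.1, 0.4]])
Omega = np.array([[1.0, 0.3],
                  [0.3, 1.0]])

# The stationary covariance Sigma(0) solves Sigma(0) = A1 Sigma(0) A1' + Omega.
Sigma0 = solve_discrete_lyapunov(A1, Omega)

rho = max(abs(np.linalg.eigvals(A1)))  # spectral radius of A1, here < 1
print("spectral radius of A1:", rho)

# Autocovariances cov(Z_t, Z_{t-tau}) = A1^tau Sigma(0) decay geometrically,
# which is what the bound c * lambda^|tau| (0 < lambda < 1) captures.
Sigma_tau = Sigma0.copy()
for tau in range(1, 6):
    Sigma_tau = A1 @ Sigma_tau  # Sigma(tau) = A1 Sigma(tau-1)
    print(f"tau = {tau}, max |cov| = {np.abs(Sigma_tau).max():.4f}")
```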

For the VAR(1) model as specified by (28.31) and assumptions 1-5 above to be statistically adequate, the modeler should test the underlying assumptions, with


the misspecification tests being modifications of the ones for the univariate case (see Spanos, 1986, ch. 24).
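As a sketch of how such diagnostic checks might be carried out in practice, the following uses statsmodels' VAR implementation on simulated data; the simulation settings and the particular tests shown (multivariate normality and residual whiteness) are illustrative assumptions rather than the specific battery of misspecification tests referred to above:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Simulate T observations from a hypothetical stable bivariate VAR(1).
T = 500
a0 = np.array([0.5, 1.0])
A1 = np.array([[0.5, 0.2],
               [0.1, 0.4]])
L = np.linalg.cholesky(np.array([[1.0, 0.3],
                                 [0.3, 1.0]]))
Z = np.zeros((T, 2))
for t in range(1, T):
    Z[t] = a0 + A1 @ Z[t - 1] + L @ rng.standard_normal(2)

# Estimate the VAR(1) and run residual-based diagnostics.
res = VAR(Z).fit(1)
print(res.params)                              # estimates of a0 and A1
print(res.test_normality().summary())          # multivariate normality of the errors
print(res.test_whiteness(nlags=10).summary())  # Portmanteau test for residual autocorrelation
```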

The VAR(1) is much richer than the univariate ARMA(p, q) specification because, in addition to the self-temporal structure, it enables the modeler to consider the cross-temporal structure, fulfilling Moore’s basic objective in the classic (1914) study. This can be seen in the simplest case of a VAR(1) model with m = 2:

$$\begin{pmatrix} Z_{1t} \\ Z_{2t} \end{pmatrix} = \begin{pmatrix} a_{10} \\ a_{20} \end{pmatrix} + \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} Z_{1(t-1)} \\ Z_{2(t-1)} \end{pmatrix} + \begin{pmatrix} u_{1t} \\ u_{2t} \end{pmatrix}, \qquad \boldsymbol{\Omega} = \begin{pmatrix} \omega_{11} & \omega_{12} \\ \omega_{21} & \omega_{22} \end{pmatrix}.$$

The coefficients $(a_{12}, a_{21})$ measure the cross-temporal dependence between the two processes. In the case where $a_{12} = 0$, $Z_{2t}$ does not Granger cause $Z_{1t}$, and vice versa in the case where $a_{21} = 0$. This is an important concept in the context of forecasting. The covariance $\omega_{12}$ constitutes a measure of the contemporaneous ($\mathrm{cov}(Z_{1t}, Z_{2t} \mid \mathbf{Z}_{t-1}^{0})$) dependence. As argued in the next subsection, the dynamic linear regression model can be viewed as a further reduction of the VAR model which purports to model this contemporaneous dependence. For further discussion of the VAR model see Hamilton (1994).
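As a sketch of the Granger noncausality restriction $a_{12} = 0$, the following simulates a bivariate VAR(1) in which $Z_{2t}$ has no lagged effect on $Z_{1t}$ and then tests that restriction with statsmodels; the data-generating values are hypothetical:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)

# Hypothetical VAR(1) with a12 = 0 (Z2 does not Granger cause Z1),
# a21 = 0.4 (Z1 does Granger cause Z2), and omega12 = 0.5 giving
# contemporaneous dependence between the two errors.
a0 = np.array([0.2, -0.1])
A1 = np.array([[0.6, 0.0],
               [0.4, 0.3]])
L = np.linalg.cholesky(np.array([[1.0, 0.5],
                                 [0.5, 1.0]]))

T = 1000
Z = np.zeros((T, 2))
for t in range(1, T):
    Z[t] = a0 + A1 @ Z[t - 1] + L @ rng.standard_normal(2)

res = VAR(Z).fit(1)
# H0: variable 1 (Z2) does not Granger-cause variable 0 (Z1) -- should not be rejected.
print(res.test_causality(caused=0, causing=1, kind='f').summary())
# H0: Z1 does not Granger-cause Z2 -- should be rejected, since a21 = 0.4.
print(res.test_causality(caused=1, causing=0, kind='f').summary())
```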
