5.2.2 Second-Order Autoregressive Model
A stationary second-order autoregressive model, abbreviated as AR(2), is defined by

y_t = ρ₁y_{t−1} + ρ₂y_{t−2} + ε_t,   t = 0, ±1, ±2, …,   (5.2.16)
where we assume Assumptions A, C, and

Assumption B′. The roots of z² − ρ₁z − ρ₂ = 0 lie inside the unit circle.

Using the lag operator defined in Section 5.2.1, we can write (5.2.16) as

(1 − ρ₁L − ρ₂L²)y_t = ε_t,   (5.2.17)

or, factoring the polynomial in L, as

(1 − μ₁L)(1 − μ₂L)y_t = ε_t,   (5.2.18)

where μ₁ and μ₂ are the roots of z² − ρ₁z − ρ₂ = 0. Premultiplying (5.2.18) by (1 − μ₁L)⁻¹(1 − μ₂L)⁻¹, we obtain

y_t = (μ₁ − μ₂)⁻¹ Σ_{j=0}^∞ (μ₁^{j+1} − μ₂^{j+1}) ε_{t−j}.   (5.2.19)

Convergence in the mean-square sense of (5.2.19) is ensured by Assumption B′. Note that even if μ₁ and μ₂ are complex, the coefficients on the ε_{t−j} are always real.
The values of ρ₁ and ρ₂ for which the condition |μ₁| < 1, |μ₂| < 1 is satisfied correspond to the interior of the largest triangle in Figure 5.1. In the region above the parabola ρ₁² + 4ρ₂ = 0, the roots are real, whereas in the region below it, they are complex.
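The stationarity check implied by Assumption B′ is easy to carry out numerically. The following sketch (the function name and the illustrative parameter values are mine, not the text's) finds the roots of z² − ρ₁z − ρ₂ = 0 and tests whether both lie inside the unit circle:

```python
import numpy as np

def ar2_is_stationary(rho1, rho2):
    """Assumption B': both roots of z^2 - rho1*z - rho2 = 0
    must lie strictly inside the unit circle."""
    roots = np.roots([1.0, -rho1, -rho2])
    return bool(np.all(np.abs(roots) < 1.0))

# The admissible region is the triangle rho1 + rho2 < 1,
# rho2 - rho1 < 1, |rho2| < 1; the roots are complex below
# the parabola rho1**2 + 4*rho2 = 0.
print(ar2_is_stationary(0.5, 0.3))    # True: inside the triangle
print(ar2_is_stationary(0.5, 0.6))    # False: rho1 + rho2 > 1
```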
The autocovariances may be obtained as follows: Multiplying (5.2.16) by y_{t−1} and taking the expectation, we obtain

γ₁ = ρ₁γ₀ + ρ₂γ₁.   (5.2.20)

Squaring each side of (5.2.16) and taking the expectation, we obtain

γ₀ = (ρ₁² + ρ₂²)γ₀ + 2ρ₁ρ₂γ₁ + σ².   (5.2.21)

Solving (5.2.20) and (5.2.21) for γ₀ and γ₁, we obtain

γ₀ = (1 − ρ₂)σ² / {(1 + ρ₂)[(1 − ρ₂)² − ρ₁²]}   (5.2.22)

and

γ₁ = ρ₁σ² / {(1 + ρ₂)[(1 − ρ₂)² − ρ₁²]}.   (5.2.23)
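The closed forms for γ₀ and γ₁ can be checked against the two linear equations they solve. A minimal sketch, with illustrative parameter values ρ₁ = 0.5, ρ₂ = 0.3, σ² = 1 (my choices, not the text's):

```python
import numpy as np

rho1, rho2, sigma2 = 0.5, 0.3, 1.0

# Closed forms (5.2.22)-(5.2.23)
den = (1 + rho2) * ((1 - rho2)**2 - rho1**2)
g0 = (1 - rho2) * sigma2 / den
g1 = rho1 * sigma2 / den

# Independent check: solve (5.2.20)-(5.2.21) directly as a linear system
#   rho1*g0 + (rho2 - 1)*g1               = 0
#   (rho1^2 + rho2^2 - 1)*g0 + 2*rho1*rho2*g1 = -sigma2
A = np.array([[rho1, rho2 - 1.0],
              [rho1**2 + rho2**2 - 1.0, 2 * rho1 * rho2]])
b = np.array([0.0, -sigma2])
g0_chk, g1_chk = np.linalg.solve(A, b)
print(np.allclose([g0, g1], [g0_chk, g1_chk]))  # True
```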
Next, multiplying (5.2.16) by y_{t−h} and taking the expectation, we obtain

γ_h = ρ₁γ_{h−1} + ρ₂γ_{h−2},   h ≥ 2.   (5.2.24)

Note that {γ_h} satisfy the same difference equation as {y_t} except for the random part. This is also true for a higher-order autoregressive process. Thus, given the initial conditions (5.2.22) and (5.2.23), the second-order difference equation (5.2.24) can be solved (see Goldberg, 1958) as
γ_h = (μ₁ − μ₂)⁻¹[μ₁^h(γ₁ − μ₂γ₀) − μ₂^h(γ₁ − μ₁γ₀)]   if μ₁ ≠ μ₂   (5.2.25)
    = μ^{h−1}[h(γ₁ − μγ₀) + μγ₀]   if μ₁ = μ₂ = μ.

If μ₁ and μ₂ are complex, (5.2.25) may be rewritten as

γ_h = r^{h−1}(sin θ)⁻¹[γ₁ sin hθ − rγ₀ sin (h − 1)θ],   (5.2.26)

where μ₁ = re^{iθ} and μ₂ = re^{−iθ}.
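The closed-form solution of the difference equation can be verified against the recursion (5.2.24) directly. A sketch for the distinct-root case, with illustrative parameters:

```python
import numpy as np

rho1, rho2, sigma2 = 0.5, 0.3, 1.0
den = (1 + rho2) * ((1 - rho2)**2 - rho1**2)
g0 = (1 - rho2) * sigma2 / den   # gamma_0, as in (5.2.22)
g1 = rho1 * sigma2 / den         # gamma_1, as in (5.2.23)

# Recursion (5.2.24): gamma_h = rho1*gamma_{h-1} + rho2*gamma_{h-2}
gam = [g0, g1]
for h in range(2, 10):
    gam.append(rho1 * gam[-1] + rho2 * gam[-2])

# Closed form for distinct roots mu1 != mu2
mu1, mu2 = np.roots([1.0, -rho1, -rho2])
closed = [((mu1**h) * (g1 - mu2 * g0) - (mu2**h) * (g1 - mu1 * g0)) / (mu1 - mu2)
          for h in range(10)]
print(np.allclose(gam, closed))  # True
```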
Arranging the autocovariances given in (5.2.25) in the form of (5.1.1) yields the autocovariance matrix of AR(2), denoted Σ₂. We shall not write it explicitly; instead, we shall express it as a function of a transformation analogous to (5.2.10). If we define a T-vector

ε_(2) = (α₁y₁, α₂y₁ + α₃y₂, ε₃, ε₄, …, ε_T)′
and a T × T matrix

       [  α₁    0     0    0    .  .  .    0 ]
       [  α₂    α₃    0    0    .  .  .    0 ]
       [ −ρ₂   −ρ₁    1    0    .  .  .    0 ]
R₂ =   [   0   −ρ₂   −ρ₁   1    .  .  .    0 ]   (5.2.28)
       [   .          .    .    .          . ]
       [   0    .  .  .    0   −ρ₂   −ρ₁   1 ]

such that ε_(2) = R₂(y₁, y₂, …, y_T)′.
Now, if we determine α₁, α₂, and α₃ by solving V(α₁y₁) = σ², V(α₂y₁ + α₃y₂) = σ², and E[α₁y₁(α₂y₁ + α₃y₂)] = 0, we have

E ε_(2)ε_(2)′ = σ²I.

Therefore we obtain from (5.2.28)

Σ₂ = σ²R₂⁻¹(R₂′)⁻¹.   (5.2.29)
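The construction of R₂ and the normalization of α₁, α₂, α₃ can be checked numerically. In the sketch below, the explicit solution for the α's (α₁ = σ/√γ₀, and so on) is my own algebra for the three conditions stated in the text, and T = 8 and the ρ values are illustrative:

```python
import numpy as np

rho1, rho2, sigma2 = 0.5, 0.3, 1.0
T = 8

# Autocovariances via (5.2.22)-(5.2.24)
den = (1 + rho2) * ((1 - rho2)**2 - rho1**2)
gam = [(1 - rho2) * sigma2 / den, rho1 * sigma2 / den]
for h in range(2, T):
    gam.append(rho1 * gam[-1] + rho2 * gam[-2])
Sigma2 = np.array([[gam[abs(j - k)] for k in range(T)] for j in range(T)])

# alphas solving V(a1*y1) = V(a2*y1 + a3*y2) = sigma^2 and
# E[a1*y1*(a2*y1 + a3*y2)] = 0 (explicit forms are my derivation)
g0, g1 = gam[0], gam[1]
a1 = np.sqrt(sigma2 / g0)
a3 = np.sqrt(sigma2 / (g0 - g1**2 / g0))
a2 = -a3 * g1 / g0

R2 = np.zeros((T, T))
R2[0, 0] = a1
R2[1, 0], R2[1, 1] = a2, a3
for t in range(2, T):
    R2[t, t - 2], R2[t, t - 1], R2[t, t] = -rho2, -rho1, 1.0

# R2 * Sigma2 * R2' = sigma^2 * I, equivalently (5.2.29)
print(np.allclose(R2 @ Sigma2 @ R2.T, sigma2 * np.eye(T)))  # True
```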
Higher-order autoregressive processes can be handled similarly.
5.2.3 pth-Order Autoregressive Model

A stationary pth-order autoregressive process, abbreviated as AR(p), is defined by

y_t = Σ_{j=1}^p ρ_j y_{t−j} + ε_t,   t = 0, ±1, ±2, …,   (5.2.30)

where we assume Assumptions A and C and

Assumption B″. The roots of Σ_{j=0}^p ρ_j z^{p−j} = 0, where ρ₀ = −1, lie inside the unit circle.
A representation of the T × T autocovariance matrix Σ_p of AR(p) analogous to (5.2.12) or (5.2.29) is possible. The (j, k)th element (j, k = 0, 1, …, T − 1) of Σ_p⁻¹ can be shown to be the coefficient on ξ^j ζ^k in the generating function given by Whittle (1983, p. 73), where ρ₀ = −1.
We shall prove a series of theorems concerning the properties of a general autoregressive process. Each theorem except Theorem 5.2.4 is stated in such a way that its premise is the conclusion of the preceding theorem.
Theorem 5.2.1. {y_t} defined in (5.2.30) with Assumptions A, B″, and C can be written as a moving-average process of the form

y_t = Σ_{j=0}^∞ φ_j ε_{t−j},   Σ_{j=0}^∞ |φ_j| < ∞,   (5.2.31)

where {ε_t} are i.i.d. with Eε_t = 0 and Eε_t² = σ².
Proof. From (5.2.30) we have

Π_{j=1}^p (1 − μ_j L) y_t = ε_t,   (5.2.32)

where μ₁, μ₂, …, μ_p are the roots of Σ_{j=0}^p ρ_j z^{p−j} = 0. Therefore we have

y_t = Π_{j=1}^p (1 − μ_j L)⁻¹ ε_t.   (5.2.33)
Equating the coefficients of (5.2.31) and (5.2.33), we obtain

φ₁ = μ₁ + μ₂ + ⋯ + μ_p,   (5.2.34)
φ₂ = Σ_{i≤j} μ_i μ_j,
.
.
φ_n = Σ μ_{i₁} μ_{i₂} ⋯ μ_{i_n},
.
.

where the last sum is over all n-tuples with 1 ≤ i₁ ≤ i₂ ≤ ⋯ ≤ i_n ≤ p. Therefore

|φ_n| ≤ (n + 1)^{p−1} |μ_M|^n,   (5.2.35)

where μ_M = max[|μ₁|, |μ₂|, …, |μ_p|]; because |μ_M| < 1 under Assumption B″, Σ_{n=0}^∞ |φ_n| < ∞ follows.
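One convenient way to obtain the moving-average coefficients φ_j in practice is the recursion implied by (1 − ρ₁L − ⋯ − ρ_pL^p)(φ₀ + φ₁L + ⋯) = 1, rather than expanding the product of the (1 − μ_jL)⁻¹ factors. The sketch below (function name and parameter values are mine) also checks the geometric bound on |φ_n| for an AR(2):

```python
import numpy as np

def ma_weights(rhos, n):
    """First n MA coefficients phi_j of the AR(p) process
    y_t = sum_j rhos[j-1]*y_{t-j} + eps_t, via the recursion
    phi_0 = 1, phi_k = sum_{j=1}^{min(k,p)} rhos[j-1]*phi_{k-j}."""
    p = len(rhos)
    phi = [1.0]
    for k in range(1, n):
        phi.append(sum(rhos[j - 1] * phi[k - j] for j in range(1, min(k, p) + 1)))
    return np.array(phi)

phi = ma_weights([0.5, 0.3], 50)
mu_max = max(abs(np.roots([1.0, -0.5, -0.3])))  # largest root modulus

# Bound |phi_n| <= (n+1)^(p-1) * mu_max^n with p = 2 here
n = np.arange(50)
print(np.all(np.abs(phi) <= (n + 1) * mu_max**n + 1e-12))  # True
```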
Theorem 5.2.2. Let {y_t} be any sequence of random variables satisfying (5.2.31). Then

Σ_{h=0}^∞ |γ_h| < ∞,   (5.2.36)

where γ_h = E y_t y_{t+h}.
Proof. We have

γ₀ = σ²(φ₀² + φ₁² + ⋯),
γ₁ = σ²(φ₀φ₁ + φ₁φ₂ + ⋯),
γ₂ = σ²(φ₀φ₂ + φ₁φ₃ + ⋯),
.
.   (5.2.37)

from which the theorem follows, because Σ_{h=0}^∞ |γ_h| ≤ σ²(Σ_{j=0}^∞ |φ_j|)².
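The expansion (5.2.37) says γ_h = σ² Σ_j φ_j φ_{j+h}; for an AR(2) this can be checked against the exact autocovariances of Section 5.2.2 by truncating the MA weights. The truncation point and parameter values below are illustrative:

```python
import numpy as np

rho1, rho2, sigma2 = 0.5, 0.3, 1.0

# Truncated MA weights phi_j; they decay geometrically under B'
phi = [1.0, rho1]
for _ in range(2, 400):
    phi.append(rho1 * phi[-1] + rho2 * phi[-2])
phi = np.array(phi)

# gamma_h = sigma^2 * sum_j phi_j phi_{j+h}, as in (5.2.37)
gam_ma = [sigma2 * float(np.dot(phi[:len(phi) - h], phi[h:])) for h in range(5)]

# Exact values from (5.2.22)-(5.2.24)
den = (1 + rho2) * ((1 - rho2)**2 - rho1**2)
gam = [(1 - rho2) * sigma2 / den, rho1 * sigma2 / den]
for h in range(2, 5):
    gam.append(rho1 * gam[-1] + rho2 * gam[-2])

print(np.allclose(gam_ma, gam))  # True
```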
Theorem 5.2.3. Let {y_t} be any stationary sequence satisfying (5.2.36). Then the characteristic roots of the T × T autocovariance matrix Σ of {y_t} are bounded from above.
Proof. Let x = (x₀, x₁, …, x_{T−1})′ be the characteristic vector of Σ corresponding to a root λ. That is,

Σx = λx.   (5.2.38)

Suppose |x_t| = max[|x₀|, |x₁|, …, |x_{T−1}|]. The (t + 1)st equation of (5.2.38) can be written as

γ_t x₀ + γ_{t−1} x₁ + ⋯ + γ₀ x_t + ⋯ + γ_{T−1−t} x_{T−1} = λx_t.   (5.2.39)

Therefore, because λ ≥ 0,

|γ_t| + |γ_{t−1}| + ⋯ + |γ₀| + ⋯ + |γ_{T−1−t}| ≥ λ.   (5.2.40)

Therefore

2 Σ_{h=0}^{T−1} |γ_h| ≥ λ,   (5.2.41)

from which the theorem follows.
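The bound (5.2.41) on the characteristic roots can be observed numerically by building the Toeplitz autocovariance matrix of an AR(2); the parameter values and T below are illustrative:

```python
import numpy as np

rho1, rho2, sigma2 = 0.5, 0.3, 1.0
T = 30

# Autocovariances gamma_0, ..., gamma_{T-1} via (5.2.22)-(5.2.24)
den = (1 + rho2) * ((1 - rho2)**2 - rho1**2)
gam = [(1 - rho2) * sigma2 / den, rho1 * sigma2 / den]
for h in range(2, T):
    gam.append(rho1 * gam[-1] + rho2 * gam[-2])

# Toeplitz autocovariance matrix and its largest eigenvalue
Sigma = np.array([[gam[abs(j - k)] for k in range(T)] for j in range(T)])
lam_max = np.linalg.eigvalsh(Sigma).max()
bound = 2 * sum(abs(g) for g in gam)  # right-hand side of (5.2.41)
print(lam_max <= bound)  # True
```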
The premise of the next theorem is weaker than the conclusion of the preceding theorem. In terms of the spectral density, the premise of Theorem 5.2.4 is equivalent to its existence, and the conclusion of Theorem 5.2.3 to its continuity.
Theorem 5.2.4. Let {y_t} be any sequence of random variables satisfying

y_t = Σ_{j=0}^∞ φ_j ε_{t−j},   Σ_{j=0}^∞ φ_j² < ∞,   (5.2.42)

where {ε_t} are i.i.d. with Eε_t = 0 and Eε_t² = σ². Then lim_{h→∞} γ_h = 0.

Note that (5.2.31) implies (5.2.42).
Proof. The theorem follows from the Cauchy–Schwarz inequality, namely,

|γ_h| = σ²|φ₀φ_h + φ₁φ_{h+1} + ⋯| ≤ σ²(Σ_{j=0}^∞ φ_j²)^{1/2}(Σ_{j=h}^∞ φ_j²)^{1/2} → 0 as h → ∞.
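The Cauchy–Schwarz bound behind the proof can be checked numerically for an AR(2) with truncated MA weights (parameter values and lag choices are illustrative):

```python
import numpy as np

rho1, rho2, sigma2 = 0.5, 0.3, 1.0

# Truncated MA weights phi_j of the AR(2); they decay geometrically
phi = [1.0, rho1]
for _ in range(2, 300):
    phi.append(rho1 * phi[-1] + rho2 * phi[-2])
phi = np.array(phi)

total = np.sum(phi**2)
for h in (10, 50, 100):
    gam_h = sigma2 * np.dot(phi[:len(phi) - h], phi[h:])
    # |gamma_h| <= sigma^2 * sqrt(sum phi_j^2) * sqrt(sum_{j>=h} phi_j^2)
    bound_h = sigma2 * np.sqrt(total * np.sum(phi[h:]**2))
    print(h, abs(gam_h) <= bound_h + 1e-15)
```

Because the tail sum Σ_{j≥h} φ_j² vanishes as h grows, the bound forces γ_h → 0.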