# Autoregressive Models with Moving-Average Residuals

A stationary autoregressive, moving-average model is defined by

$$\sum_{j=0}^{p} \rho_j y_{t-j} = \sum_{j=0}^{q} \beta_j \epsilon_{t-j}, \qquad \rho_0 = \beta_0 = 1, \qquad t = 0, \pm 1, \pm 2, \ldots, \tag{5.3.1}$$

where we assume Assumptions A, B″, C, and

Assumption D. The roots of $\sum_{j=0}^{q} \beta_j z^{q-j} = 0$ lie inside the unit circle.

Such a model will be called ARMA(p, q) for short.

We can write (5.3.1) as

$$\rho(L) y_t = \beta(L)\epsilon_t, \tag{5.3.2}$$

where $\rho(L) = \sum_{j=0}^{p} \rho_j L^j$ and $\beta(L) = \sum_{j=0}^{q} \beta_j L^j$. Because of Assumptions B″ and C, we can express $y_t$ as an infinite moving average

$$y_t = \rho^{-1}(L)\beta(L)\epsilon_t = \phi(L)\epsilon_t, \tag{5.3.3}$$

where $\phi(L) = \sum_{j=0}^{\infty} \phi_j L^j$. Similarly, because of Assumption D, we can express $y_t$ as an infinite autoregressive process

$$\gamma(L) y_t = \beta^{-1}(L)\rho(L) y_t = \epsilon_t, \tag{5.3.4}$$

where $\gamma(L) = \sum_{j=0}^{\infty} \gamma_j L^j$. The spectral density of ARMA(p, q) is given by

$$f(\omega) = \sigma^2 \frac{|\beta(e^{i\omega})|^2}{|\rho(e^{i\omega})|^2}, \tag{5.3.5}$$

where $|z|^2 = z\bar{z}$ for a complex number $z$, with $\bar{z}$ its complex conjugate. Note that (5.3.5) reduces to (5.2.15) in the special case of AR(1). We also see from (5.3.5) that the spectral density of a moving-average model is, except for $\sigma^2$, the inverse of the spectral density of an autoregressive model with the same order and the same coefficients. Because the spectral density of a stationary process approximately corresponds to the set of the characteristic roots of the autocovariance matrix, as was noted earlier, we can show that the autocovariance matrix of a moving-average model is approximately equal to the inverse of the autocovariance matrix of the corresponding autoregressive model. We shall demonstrate this for the case of MA(1).
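This inverse relationship can be checked numerically. The sketch below is an illustration, not part of the original text (the function name and calling convention are my own): it evaluates the spectral density (5.3.5) for arbitrary ARMA coefficients and confirms that an MA(1) density and the AR(1) density with the same coefficient multiply to the constant $\sigma^4$ ($= 1$ here):

```python
import numpy as np

def arma_spectral_density(rho_coefs, beta_coefs, sigma2, omega):
    """Spectral density (5.3.5): f(w) = sigma^2 |beta(e^{iw})|^2 / |rho(e^{iw})|^2.

    rho_coefs = (rho_0, ..., rho_p), beta_coefs = (beta_0, ..., beta_q),
    with rho_0 = beta_0 = 1.
    """
    z = np.exp(1j * np.asarray(omega))
    num = np.abs(np.polynomial.polynomial.polyval(z, beta_coefs)) ** 2
    den = np.abs(np.polynomial.polynomial.polyval(z, rho_coefs)) ** 2
    return sigma2 * num / den

omega = np.linspace(-np.pi, np.pi, 201)
f_ma = arma_spectral_density([1.0], [1.0, -0.5], 1.0, omega)  # MA(1), beta_1 = -0.5
f_ar = arma_spectral_density([1.0, -0.5], [1.0], 1.0, omega)  # AR(1), same coefficient
# With sigma^2 = 1 the two densities are pointwise inverses of each other
assert np.allclose(f_ma * f_ar, 1.0)
```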

Consider an MA(1) model defined by

$$y_t = \epsilon_t - \rho\epsilon_{t-1}, \tag{5.3.6}$$

where $|\rho| < 1$ and $\{\epsilon_t\}$ are i.i.d. with $E\epsilon_t = 0$ and $V\epsilon_t = \sigma^2$. The $T \times T$ autocovariance matrix is given by

$$\Sigma_1 = \sigma^2 \begin{bmatrix} 1+\rho^2 & -\rho & & & 0 \\ -\rho & 1+\rho^2 & -\rho & & \\ & \ddots & \ddots & \ddots & \\ & & -\rho & 1+\rho^2 & -\rho \\ 0 & & & -\rho & 1+\rho^2 \end{bmatrix}. \tag{5.3.7}$$
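As a quick check of the tridiagonal pattern in (5.3.7), the short sketch below (illustrative; the names are mine) builds the autocovariances of $y_t = \epsilon_t - \rho\epsilon_{t-1}$ directly from its moving-average weights $(1, -\rho)$:

```python
import numpy as np

rho, s2, T = 0.5, 1.0, 5
psi = np.array([1.0, -rho])  # MA(1) weights of y_t = e_t - rho * e_{t-1}

def autocov(h):
    """gamma(h) = s2 * sum_m psi_m psi_{m+|h|}; zero once |h| exceeds the MA order."""
    h = abs(h)
    if h >= len(psi):
        return 0.0
    return s2 * float(np.dot(psi[: len(psi) - h], psi[h:]))

Sigma1 = np.array([[autocov(j - k) for k in range(T)] for j in range(T)])
assert np.isclose(Sigma1[0, 0], s2 * (1 + rho**2))  # diagonal: sigma^2 (1 + rho^2)
assert np.isclose(Sigma1[0, 1], -s2 * rho)          # first off-diagonal: -sigma^2 rho
assert np.isclose(Sigma1[0, 2], 0.0)                # zero beyond lag 1
```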

We wish to approximate the inverse of $\Sigma_1$. If we define a $T$-vector $\eta$ such that its first element is $\epsilon_1 - \rho\epsilon_0 - (1-\rho^2)^{1/2}\epsilon_1$ and the other $T-1$ elements are zeroes, we have

$$y = R_1\epsilon + \eta, \tag{5.3.8}$$

where $R_1$ is given by (5.2.10). Therefore

$$\Sigma_1 \cong \sigma^2 R_1 R_1'. \tag{5.3.9}$$
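The quality of this approximation is easy to inspect numerically. The sketch below is my own illustration: since (5.2.10) lies outside this excerpt, it assumes $R_1$ is the lower-bidiagonal matrix with $(1-\rho^2)^{1/2}$ in the top-left position, ones on the rest of the diagonal, and $-\rho$ on the subdiagonal, which is consistent with the definition of $\eta$ above. The discrepancy between $\Sigma_1$ and $\sigma^2 R_1 R_1'$ is then confined to the top-left corner:

```python
import numpy as np

T, rho, s2 = 8, 0.5, 1.0
# Assumed form of R1 (from (5.2.10), not shown in this excerpt):
# (1 - rho^2)^{1/2} in the (1,1) position, ones on the remaining
# diagonal, and -rho on the subdiagonal.
R1 = np.eye(T) - rho * np.eye(T, k=-1)
R1[0, 0] = np.sqrt(1 - rho**2)
# Exact MA(1) autocovariance matrix (5.3.7)
Sigma1 = s2 * ((1 + rho**2) * np.eye(T) - rho * (np.eye(T, k=1) + np.eye(T, k=-1)))
D = Sigma1 - s2 * R1 @ R1.T
# The approximation error lives only in the top-left corner
assert np.allclose(D[1:, 1:], 0.0)
assert np.isclose(D[0, 0], 2 * s2 * rho**2)
```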

But we can directly verify

$$R_1 R_1' \cong R_1' R_1, \tag{5.3.10}$$

the two products differing only in a few corner elements. From (5.2.12), (5.3.9), and (5.3.10), we conclude

$$\Sigma_1^{-1} \cong \sigma^{-2}(R_1')^{-1}R_1^{-1} \cong \sigma^{-4}\Sigma_2, \tag{5.3.11}$$

where $\Sigma_2$ denotes the autocovariance matrix of the stationary AR(1) process with the same coefficient $\rho$.
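The approximation (5.3.11) can be illustrated numerically. In the sketch below (my own check, assuming the standard AR(1) autocovariances $\sigma^2\rho^{|j-k|}/(1-\rho^2)$ from Section 5.2), the interior elements of $\Sigma_1^{-1}$ agree with $\sigma^{-4}$ times the AR(1) autocovariance matrix to high accuracy; only the corner elements differ appreciably:

```python
import numpy as np

T, rho, s2 = 40, 0.3, 1.0
# MA(1) autocovariance matrix (5.3.7)
Sigma1 = s2 * ((1 + rho**2) * np.eye(T) - rho * (np.eye(T, k=1) + np.eye(T, k=-1)))
# AR(1) autocovariance matrix with the same coefficient:
# elements sigma^2 rho^{|j-k|} / (1 - rho^2)  (standard result)
idx = np.arange(T)
Sigma2 = s2 / (1 - rho**2) * rho ** np.abs(idx[:, None] - idx[None, :])
inv1 = np.linalg.inv(Sigma1)
# Away from the corners, Sigma1^{-1} matches sigma^{-4} Sigma2
mid = slice(10, 30)
assert np.allclose(inv1[mid, mid], (Sigma2 / s2**2)[mid, mid], atol=1e-6)
```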

Whittle (1983, p. 75) has presented the exact inverse of $\Sigma_1$. The $j$, $k$th element ($j, k = 0, 1, \ldots, T-1$) of $\Sigma_1^{-1}$, denoted $\Sigma_1^{jk}$, is given for $j \le k$ by

$$\Sigma_1^{jk} = \frac{(\rho^{-(j+1)} - \rho^{j+1})(\rho^{-(T-k)} - \rho^{T-k})}{\sigma^2\rho\,(\rho^{-1} - \rho)(\rho^{-(T+1)} - \rho^{T+1})}, \tag{5.3.12}$$

and by symmetry for $j > k$.
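The closed-form inverse quoted above can be verified against a direct numerical inversion. The sketch below (illustrative; the function name is mine) fills a matrix from the formula, element by element, and compares it with `numpy.linalg.inv` applied to the tridiagonal $\Sigma_1$:

```python
import numpy as np

T, rho, s2 = 6, 0.4, 1.0
# MA(1) autocovariance matrix (5.3.7)
Sigma1 = s2 * ((1 + rho**2) * np.eye(T) - rho * (np.eye(T, k=1) + np.eye(T, k=-1)))

def whittle_inverse(j, k, T, rho, s2):
    """Element (j, k), 0-indexed, of Sigma1^{-1} from the closed form above."""
    if j > k:                      # the inverse is symmetric
        j, k = k, j
    num = (rho ** -(j + 1) - rho ** (j + 1)) * (rho ** -(T - k) - rho ** (T - k))
    den = s2 * rho * (rho ** -1 - rho) * (rho ** -(T + 1) - rho ** (T + 1))
    return num / den

inv_formula = np.array([[whittle_inverse(j, k, T, rho, s2) for k in range(T)]
                        for j in range(T)])
assert np.allclose(inv_formula, np.linalg.inv(Sigma1))
```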

However, the exact inverse is complicated for higher-order moving-average models, and the approximation (5.3.11) is attractive because the same inverse relationship is also valid for higher-order processes.

Anderson (1971, p. 223) has considered the estimation of the parameters of ARMA(p, q) without the normality assumption on $\{\epsilon_t\}$, and Box and Jenkins (1976) have derived the maximum likelihood estimator assuming normality in the autoregressive integrated moving-average process. This is the model in which the sequence obtained by repeatedly first-differencing the original series follows ARMA(p, q). The computation of MLE can be time-consuming because the inverse of the covariance matrix of the process, even if we use the approximation mentioned in the preceding paragraph, is a complicated nonlinear function of the parameters (see Harvey, 1981b, for various computational methods).
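The brute-force computation alluded to here can be sketched as follows. This is my own illustration, not any author's implementation: the ARMA(1,1) parametrization, the truncation length, and the function name are assumptions, and practical implementations use state-space (Kalman filter) recursions instead of inverting the full $T \times T$ covariance matrix at every likelihood evaluation:

```python
import numpy as np

def arma11_negloglik(phi, theta, s2, y, tail=50):
    """Gaussian negative log likelihood of a zero-mean ARMA(1,1) sample
    y_t = phi * y_{t-1} + e_t + theta * e_{t-1}, computed by brute force
    through the full T x T covariance matrix (O(T^3) per evaluation)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # MA(infinity) weights: psi_0 = 1, psi_j = phi^{j-1} (phi + theta), j >= 1
    J = T + tail
    psi = np.empty(J)
    psi[0] = 1.0
    psi[1:] = (phi + theta) * phi ** np.arange(J - 1)
    # Autocovariances gamma(h) = s2 * sum_m psi_m psi_{m+h}, truncated at J terms
    gamma = np.array([s2 * np.dot(psi[: J - h], psi[h:]) for h in range(T)])
    Sigma = gamma[np.abs(np.subtract.outer(np.arange(T), np.arange(T)))]
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (T * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(Sigma, y))

# Sanity check: with phi = theta = 0 the series is white noise, so the
# likelihood must reduce to the usual i.i.d. normal expression.
y = np.array([0.5, -1.0, 0.3])
iid = 0.5 * (len(y) * np.log(2 * np.pi) + len(y) * np.log(2.0) + y @ y / 2.0)
assert np.isclose(arma11_negloglik(0.0, 0.0, 2.0, y), iid)
```

Each call rebuilds and factors the covariance matrix, which is exactly the cost the paragraph above warns about; the nonlinearity of `Sigma` in `(phi, theta, s2)` is what makes the optimization expensive.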