Category: Advanced Econometrics (Takeshi Amemiya)

Autoregressive Models with Moving-Average Residuals

A stationary autoregressive, moving-average model is defined by

$\sum_{j=0}^{p} \rho_j y_{t-j} = \sum_{j=0}^{q} \beta_j \epsilon_{t-j}, \qquad \rho_0 = \beta_0 = 1, \qquad t = 0, \pm 1, \pm 2, \ldots,$ (5.3.1)


where we assume Assumptions A, B″, C, and

Assumption D. The roots of $\sum_{j=0}^{q} \beta_j z^{q-j} = 0$ lie inside the unit circle.

Such a model will be called ARMA(p, q) for short.

We can write (5.3.1) as

$\rho(L)y_t = \beta(L)\epsilon_t$, (5.3.2)

where $\rho(L) = \sum_{j=0}^{p} \rho_j L^j$ and $\beta(L) = \sum_{j=0}^{q} \beta_j L^j$. Because of Assumptions B″ and C, we can express $y_t$ as an infinite moving average

$y_t = \rho^{-1}(L)\beta(L)\epsilon_t = \phi(L)\epsilon_t$, (5.3.3)

where $\phi(L) = \sum_{j=0}^{\infty} \phi_j L^j$. Similarly, because of Assumption D, we can express $y_t$ as an infinite autoregressive process

$\psi(L)y_t = \beta^{-1}(L)\rho(L)y_t = \epsilon_t$, (5.3.4)

where $\psi(L) = \sum_{j=0}^{\infty} \psi_j L^j$.
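As a numerical illustration of (5.3.3) and (5.3.4), the sketch below (not from the text) recovers the leading weights $\phi_j$ and $\psi_j$ for a small ARMA(2, 1) model by recursively inverting the lag polynomials. The coefficient values are hypothetical, chosen so that the stationarity condition on $\rho(L)$ and the invertibility condition of Assumption D both hold.

```python
import numpy as np

def expand_weights(num, den, n_terms=20):
    """Coefficients of num(L)/den(L) = sum_j w_j L^j, assuming den[0] == 1."""
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    w = np.zeros(n_terms)
    for k in range(n_terms):
        acc = num[k] if k < len(num) else 0.0
        for j in range(1, min(k, len(den) - 1) + 1):
            acc -= den[j] * w[k - j]
        w[k] = acc
    return w

# Hypothetical ARMA(2, 1): rho(L) = 1 - 0.5L + 0.06L^2, beta(L) = 1 + 0.4L
rho = [1.0, -0.5, 0.06]   # rho_0, rho_1, rho_2
beta = [1.0, 0.4]         # beta_0, beta_1

phi = expand_weights(beta, rho)   # y_t = phi(L) e_t,  as in (5.3.3)
psi = expand_weights(rho, beta)   # psi(L) y_t = e_t,  as in (5.3.4)

print("phi_j:", np.round(phi[:6], 4))
print("psi_j:", np.round(psi[:6], 4))

# sanity check: rho(L) * phi(L) should reproduce beta(L) up to truncation
print(np.round(np.convolve(rho, phi)[:4], 4))   # approximately [1.0, 0.4, 0.0, 0.0]
```

The same recursion works for any ARMA(p, q), because (5.3.3) and (5.3.4) both define the unknown weights by matching coefficients of powers of $L$.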

The spectral density of ARMA(p, q) is given by

$f(\omega) = \dfrac{\sigma^2}{2\pi} \dfrac{|\beta(e^{i\omega})|^2}{|\rho(e^{i\omega})|^2}$, (5.3.5)

where $|z|^2 = z\bar{z}$ for a comp...
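The spectral density (5.3.5) can be evaluated directly. The sketch below (again not from the text) uses the $\sigma^2/2\pi$ normalization shown above and the same hypothetical ARMA(2, 1) coefficients as in the previous sketch.

```python
import numpy as np

def arma_spectral_density(omega, rho, beta, sigma2=1.0):
    """f(omega) = sigma^2/(2*pi) * |beta(e^{i*omega})|^2 / |rho(e^{i*omega})|^2."""
    z = np.exp(1j * np.asarray(omega))
    num = np.polyval(beta[::-1], z)   # beta(z) = sum_j beta_j z^j
    den = np.polyval(rho[::-1], z)    # rho(z)  = sum_j rho_j z^j
    return sigma2 / (2 * np.pi) * np.abs(num) ** 2 / np.abs(den) ** 2

# same hypothetical ARMA(2, 1) as before
rho = [1.0, -0.5, 0.06]
beta = [1.0, 0.4]
omegas = np.linspace(0.0, np.pi, 5)
print(np.round(arma_spectral_density(omegas, rho, beta), 4))
```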


Constrained Least Squares Estimator as Best Linear Unbiased Estimator

That $\hat\beta$ is the best linear unbiased estimator follows from the fact that $\hat\gamma_2$ is the best linear unbiased estimator of $\gamma_2$ in (1.4.9); however, we also can prove it directly. Inserting (1.4.9) into (1.4.11) and using (1.4.8), we obtain

$\hat\beta = \beta + R(R'X'XR)^{-1}R'X'u$. (1.4.14)

Therefore, $\hat\beta$ is unbiased and its variance-covariance matrix is given by

$V(\hat\beta) = \sigma^2 R(R'X'XR)^{-1}R'$. (1.4.15)
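Both (1.4.14) and (1.4.15) are easy to verify numerically. The sketch below is not from the text; it assumes the usual reparametrization behind these formulas: constraints $Q'\beta = c$, a matrix $R$ with $Q'R = 0$ and $[Q\ R]$ nonsingular, and $\hat\beta = Q(Q'Q)^{-1}c + R\hat\gamma_2$, where $\hat\gamma_2$ is the LS estimator in the transformed regression. The design matrix, the constraint, and $\sigma$ are all made up.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# Hypothetical design: T = 50 observations, K = 3 regressors, one constraint Q'beta = c
T, K = 50, 3
X = rng.normal(size=(T, K))
beta_true = np.array([1.0, 2.0, -1.0])      # satisfies the constraint below
Q = np.array([[1.0], [1.0], [0.0]])         # constraint: beta_1 + beta_2 = 3
c = np.array([3.0])
sigma = 0.5
y = X @ beta_true + sigma * rng.normal(size=T)

# R spans the null space of Q', so Q'R = 0 and [Q R] is nonsingular
R = null_space(Q.T)                          # K x (K - 1)

# Constrained LS via the reparametrization beta = Q(Q'Q)^{-1}c + R*gamma2
beta_part = Q @ np.linalg.solve(Q.T @ Q, c)  # particular solution of Q'beta = c
y_star = y - X @ beta_part
XR = X @ R
gamma2_hat = np.linalg.solve(XR.T @ XR, XR.T @ y_star)
beta_hat = beta_part + R @ gamma2_hat

# Variance-covariance matrix from (1.4.15): sigma^2 R (R'X'XR)^{-1} R'
V = sigma**2 * R @ np.linalg.solve(XR.T @ XR, R.T)

print("beta_hat:", np.round(beta_hat, 3))
print("Q'beta_hat:", np.round(Q.T @ beta_hat, 6))   # equals c = 3 by construction
print("diag of V(beta_hat):", np.round(np.diag(V), 4))
```

Because $Q'R = 0$, the constraint $Q'\hat\beta = c$ holds exactly by construction, and the covariance matrix is singular in the direction of $Q$, as (1.4.15) implies.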

We shall now define the class of linear estimators by $\beta^* = C'y - d$, where $C'$ is a $K \times T$ matrix and $d$ is a $K$-vector. This class is broader than the class of linear estimators considered in Section 1.2.5 because of the additive constants $d$. We did not include $d$ previously because in the unconstrained model the unbiasedness condition would ensure $d = 0$...


Maximum Likelihood Estimator

4.2.1 Definition

Let $L_T(\theta) = L(y, \theta)$ be the joint density of a T-vector of random variables $y = (y_1, y_2, \ldots, y_T)'$ characterized by a K-vector of parameters $\theta$. When we regard it as a function of $\theta$, we call it the likelihood function. The term maximum likelihood estimator (MLE) is often used to mean two different concepts: (1) the value of $\theta$ that globally maximizes the likelihood function $L(y, \theta)$ over the parameter space $\Theta$; or (2) any root of the likelihood equation

$\dfrac{\partial L_T(\theta)}{\partial \theta} = 0$ (4.2.1)

that corresponds to a local maximum. We use it only in the second sense and use the term global maximum likelihood estimator to refer to the first concept. We sometimes use the term local maximum likelihood estimator to refer to the second concept.
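The distinction matters whenever the likelihood has several local maxima. The sketch below (my own illustration, not from the text) uses a small Cauchy location sample: bracketing the score yields a root of the likelihood equation that is only a local maximum, while a crude grid search locates the global maximum likelihood estimator.

```python
import numpy as np
from scipy.optimize import brentq

# hand-picked Cauchy location sample chosen to make the likelihood multimodal
y = np.array([-5.0, -4.0, 4.5, 5.5, 6.0])

def loglik(theta):
    """Cauchy location log likelihood (additive constants dropped)."""
    return -np.sum(np.log1p((y - theta) ** 2))

def score(theta):
    """d loglik / d theta, the left-hand side of the likelihood equation."""
    d = y - theta
    return np.sum(2.0 * d / (1.0 + d ** 2))

# (2) a root of the likelihood equation at a local maximum:
# the score changes sign from + to - on (-4.5, -3.5)
theta_local = brentq(score, -4.5, -3.5)

# (1) the global maximum likelihood estimator, located by a crude grid search
grid = np.linspace(-10.0, 10.0, 4001)
theta_global = grid[np.argmax([loglik(t) for t in grid])]

print(f"local root:  theta = {theta_local:.3f}, loglik = {loglik(theta_local):.3f}")
print(f"global max:  theta = {theta_global:.3f}, loglik = {loglik(theta_global):.3f}")
```

With this sample the local root sits near the two left-hand observations, while the global maximizer sits near the three right-hand ones and attains a strictly higher likelihood.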

4.2.2 Consistency

The conditions for...


Estimation of ρ

Because $\Sigma$ defined in (5.2.9) depends on $\sigma^2$ only through a scalar multiplication, the GLS estimator $\hat\beta_G$ defined in (6.1.3) does not depend on $\sigma^2$. Therefore, in obtaining FGLS (6.2.1), we need to estimate only $\rho$. The most natural estimator of $\rho$ is

$\hat\rho = \dfrac{\sum_{t=2}^{T} \hat u_t \hat u_{t-1}}{\sum_{t=2}^{T} \hat u_{t-1}^2}$, (6.3.3)

where $\hat u_t = y_t - x_t'\hat\beta$. The consistency of $\hat\rho$ is straightforward. We shall prove its asymptotic normality.
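Before the asymptotic argument, here is a minimal sketch of (6.3.3), not from the text: simulate a regression with AR(1) errors, compute the LS residuals $\hat u_t = y_t - x_t'\hat\beta$, and form $\hat\rho$. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical regression with AR(1) errors: y_t = x_t' beta + u_t, u_t = rho*u_{t-1} + eps_t
T, rho_true = 500, 0.6
beta_true = np.array([1.0, -2.0])
X = np.column_stack([np.ones(T), rng.normal(size=T)])
eps = rng.normal(size=T)
u = np.zeros(T)
u[0] = eps[0] / np.sqrt(1.0 - rho_true**2)   # start from the stationary distribution
for t in range(1, T):
    u[t] = rho_true * u[t - 1] + eps[t]
y = X @ beta_true + u

# LS residuals u_hat_t = y_t - x_t' beta_hat, then rho_hat as in (6.3.3)
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ beta_hat
rho_hat = np.sum(u_hat[1:] * u_hat[:-1]) / np.sum(u_hat[:-1] ** 2)
print(f"rho_hat = {rho_hat:.3f} (true rho = {rho_true})")
```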

Using $u_t = \rho u_{t-1} + \epsilon_t$, we have

$\sqrt{T}(\hat\rho - \rho) = \dfrac{\frac{1}{\sqrt{T}}\sum_{t=2}^{T} u_{t-1}\epsilon_t + \Delta_1}{\frac{1}{T}\sum_{t=2}^{T} u_{t-1}^2 + \Delta_2}$, (6.3.4)

where

$\Delta_1 = -\dfrac{1}{\sqrt{T}}\sum_{t=2}^{T}\left[\epsilon_t(\hat\beta - \beta)'x_{t-1} + \hat u_{t-1}(\hat\beta - \beta)'(x_t - \rho x_{t-1})\right]$ (6.3.5)

and

$\Delta_2 = \dfrac{1}{T}\sum_{t=2}^{T}\left\{[(\hat\beta - \beta)'x_{t-1}]^2 - 2(\hat\beta - \beta)'x_{t-1}u_{t-1}\right\}$. (6.3.6)

If we assume that $\lim_{T\to\infty} T^{-1}X'X$ is a finite nonsingular matrix, it is easy to show that both $\Delta_1$ and $\Delta_2$ converge to 0 in probability. For this, we need only the consistency of the LS estimator $\hat\beta$ and $\sqrt{T}(\hat\beta - \beta) = O(1)$ but not the asymptotic normality...
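A small Monte Carlo sketch of this claim (again my own, with hypothetical parameter values): it compares $\sqrt{T}(\hat\rho - \rho)$ computed from the LS residuals with the same statistic computed from the true $u_t$; if $\Delta_1$ and $\Delta_2$ are negligible, the two should nearly coincide draw by draw.

```python
import numpy as np

rng = np.random.default_rng(2)

def one_draw(T=400, rho=0.6, beta=(1.0, -2.0)):
    """Return sqrt(T)*(rho_hat - rho) based on LS residuals and on the true u_t."""
    X = np.column_stack([np.ones(T), rng.normal(size=T)])
    eps = rng.normal(size=T)
    u = np.zeros(T)
    u[0] = eps[0] / np.sqrt(1.0 - rho**2)    # start from the stationary distribution
    for t in range(1, T):
        u[t] = rho * u[t - 1] + eps[t]
    y = X @ np.asarray(beta) + u

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    u_hat = y - X @ beta_hat

    feasible = np.sum(u_hat[1:] * u_hat[:-1]) / np.sum(u_hat[:-1] ** 2)
    infeasible = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)
    return np.sqrt(T) * (feasible - rho), np.sqrt(T) * (infeasible - rho)

draws = np.array([one_draw() for _ in range(2000)])
diff = np.abs(draws[:, 0] - draws[:, 1])
print("mean absolute difference:", round(diff.mean(), 4))
print("std of feasible vs infeasible statistic:",
      round(draws[:, 0].std(), 3), round(draws[:, 1].std(), 3))
```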
