Special Forms

If the disturbances are heteroskedastic but not serially correlated, then $\Omega = \mathrm{diag}[\sigma_i^2]$. In this case, $P = \mathrm{diag}[\sigma_i]$, $P^{-1} = \Omega^{-1/2} = \mathrm{diag}[1/\sigma_i]$ and $\Omega^{-1} = \mathrm{diag}[1/\sigma_i^2]$. Premultiplying the regression equation by $\Omega^{-1/2}$ is equivalent to dividing the $i$-th observation of this model by $\sigma_i$. This makes the new disturbance $u_i/\sigma_i$ have zero mean and homoskedastic variance $\sigma^2$, leaving properties like the absence of serial correlation intact. The new regression runs $y_i^* = y_i/\sigma_i$ on $X_{ik}^* = X_{ik}/\sigma_i$ for $i = 1, 2, \ldots, n$ and $k = 1, 2, \ldots, K$. Specific assumptions on the form of these $\sigma_i$'s were studied in the heteroskedasticity chapter.
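A minimal Python sketch of this weighted regression, assuming the $\sigma_i$ are known; the data-generating step, the assumed form of the $\sigma_i$'s, and the variable names are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 200, 3

# Artificial data with known heteroskedastic standard deviations sigma_i
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
beta = np.array([1.0, 2.0, -0.5])
sigma_i = np.exp(0.5 * X[:, 1])            # assumed (illustrative) form of the sigma_i's
y = X @ beta + sigma_i * rng.normal(size=n)

# Divide each observation by sigma_i, then run OLS on the transformed model
y_star = y / sigma_i
X_star = X / sigma_i[:, None]
beta_gls = np.linalg.solve(X_star.T @ X_star, X_star.T @ y_star)
print(beta_gls)
```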

If the disturbances follow an AR(1) process $u_t = \rho u_{t-1} + \epsilon_t$ for $t = 1, 2, \ldots, T$, with $|\rho| < 1$ and $\epsilon_t \sim \mathrm{IID}(0, \sigma_\epsilon^2)$, then $\mathrm{cov}(u_t, u_{t-s}) = \rho^s \sigma_u^2$ with $\sigma_u^2 = \sigma_\epsilon^2/(1 - \rho^2)$. This means that

$$\Omega = \begin{bmatrix} 1 & \rho & \rho^2 & \cdots & \rho^{T-1} \\ \rho & 1 & \rho & \cdots & \rho^{T-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho^{T-1} & \rho^{T-2} & \rho^{T-3} & \cdots & 1 \end{bmatrix} \tag{9.9}$$

and

$$\Omega^{-1} = \frac{1}{1-\rho^2} \begin{bmatrix} 1 & -\rho & 0 & \cdots & 0 & 0 & 0 \\ -\rho & 1+\rho^2 & -\rho & \cdots & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -\rho & 1+\rho^2 & -\rho \\ 0 & 0 & 0 & \cdots & 0 & -\rho & 1 \end{bmatrix} \tag{9.10}$$

Then

$$P^{-1} = \begin{bmatrix} \sqrt{1-\rho^2} & 0 & 0 & \cdots & 0 & 0 \\ -\rho & 1 & 0 & \cdots & 0 & 0 \\ 0 & -\rho & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ 0 & 0 & 0 & \cdots & -\rho & 1 \end{bmatrix} \tag{9.11}$$

is the matrix that satisfies the condition $P^{-1\prime}P^{-1} = (1 - \rho^2)\Omega^{-1}$. Premultiplying the regression model by $P^{-1}$ is equivalent to performing the Prais-Winsten transformation. In particular, the first observation on $y$ becomes $y_1^* = \sqrt{1-\rho^2}\, y_1$ and the remaining observations are given by $y_t^* = y_t - \rho y_{t-1}$ for $t = 2, 3, \ldots, T$, with similar terms for the $X$'s and the disturbances. Problem 3 shows that the variance-covariance matrix of the transformed disturbances $u^* = P^{-1}u$ is $\sigma_\epsilon^2 I_T$.
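A short Python sketch, assuming $\rho$ is known and using a small $T$ for illustration, that builds $P^{-1}$ as in (9.11), checks the condition $P^{-1\prime}P^{-1} = (1 - \rho^2)\Omega^{-1}$, and applies the Prais-Winsten transformation to a series $y$ (all names are illustrative):

```python
import numpy as np

rho, T = 0.6, 5

# Omega of (9.9): Omega[t, s] = rho**|t - s|
idx = np.arange(T)
Omega = rho ** np.abs(idx[:, None] - idx[None, :])

# P^{-1} of (9.11): sqrt(1 - rho^2) in the (1,1) slot, ones on the rest of
# the diagonal, -rho on the first subdiagonal
P_inv = np.eye(T)
P_inv[0, 0] = np.sqrt(1 - rho ** 2)
P_inv[idx[1:], idx[:-1]] = -rho

# Check the condition P^{-1}' P^{-1} = (1 - rho^2) Omega^{-1}
assert np.allclose(P_inv.T @ P_inv, (1 - rho ** 2) * np.linalg.inv(Omega))

# Prais-Winsten transformation of an illustrative series y
y = np.array([2.0, 1.5, 3.0, 2.5, 4.0])
y_star = P_inv @ y
# Equivalently: y_1* = sqrt(1 - rho^2) * y_1 and y_t* = y_t - rho * y_{t-1}
```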

Other examples where an explicit form for $P^{-1}$ has been derived include: (i) the MA(1) model, see Balestra (1980); (ii) the AR(2) model, see Lempers and Kloek (1973); (iii) the specialized AR(4) model for quarterly data, see Thomas and Wallis (1971); and (iv) the error components model, see Fuller and Battese (1974) and Chapter 12.

Maximum Likelihood Estimation

Assuming that $u \sim N(0, \sigma^2\Omega)$, the new likelihood function can be derived keeping in mind that $u^* = P^{-1}u = \Omega^{-1/2}u$ and $u^* \sim N(0, \sigma^2 I_n)$. In this case

$$f(u_1^*, \ldots, u_n^*; \sigma^2) = (1/2\pi\sigma^2)^{n/2} \exp\{-u^{*\prime}u^*/2\sigma^2\} \tag{9.12}$$

Making the transformation $u = Pu^* = \Omega^{1/2}u^*$, we get

$$f(u_1, \ldots, u_n; \sigma^2) = (1/2\pi\sigma^2)^{n/2}\, |\Omega^{-1/2}|\, \exp\{-u'\Omega^{-1}u/2\sigma^2\} \tag{9.13}$$

where $|\Omega^{-1/2}|$ is the Jacobian of the inverse transformation. Finally, substituting $y = X\beta + u$ in (9.13), one gets the likelihood function

$$L(\beta, \sigma^2; \Omega) = (1/2\pi\sigma^2)^{n/2}\, |\Omega^{-1/2}|\, \exp\{-(y - X\beta)'\Omega^{-1}(y - X\beta)/2\sigma^2\} \tag{9.14}$$

since the Jacobian of this last transformation is one. Knowing $\Omega$, maximizing (9.14) with respect to $\beta$ is equivalent to minimizing $u^{*\prime}u^*$ with respect to $\beta$. This means that $\hat{\beta}_{MLE}$ is the OLS estimate on the transformed model, i.e., $\hat{\beta}_{GLS}$. From (9.14), we see that this residual sum of squares is a weighted one, with the weight being the inverse of the variance-covariance matrix of the disturbances. Similarly, maximizing (9.14) with respect to $\sigma^2$ gives $\hat{\sigma}^2_{MLE} =$ the OLS residual sum of squares of the transformed regression (9.3) divided by $n$. From (9.6) this can be written as $\hat{\sigma}^2_{MLE} = e^{*\prime}e^*/n = (n - K)s^{*2}/n$. The distributions of these maximum likelihood estimates can be derived from the transformed model using the results in Chapter 7. In fact, $\hat{\beta}_{GLS} \sim N(\beta, \sigma^2(X'\Omega^{-1}X)^{-1})$

and $(n - K)s^{*2}/\sigma^2 \sim \chi^2_{n-K}$.
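As a numerical check of this equivalence, the following sketch (with simulated data, a known AR(1) $\rho$, and illustrative names) compares the GLS formula $(X'\Omega^{-1}X)^{-1}X'\Omega^{-1}y$ with OLS on the Prais-Winsten transformed model:

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho = 100, 0.7

idx = np.arange(T)
Omega = rho ** np.abs(idx[:, None] - idx[None, :])    # AR(1) Omega of (9.9)
Omega_inv = np.linalg.inv(Omega)

X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.array([1.0, 0.5])
u = np.linalg.cholesky(Omega) @ rng.normal(size=T)    # u ~ N(0, Omega)
y = X @ beta + u

# GLS formula
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)

# OLS on the Prais-Winsten transformed model, using P^{-1} of (9.11)
P_inv = np.eye(T)
P_inv[0, 0] = np.sqrt(1 - rho ** 2)
P_inv[idx[1:], idx[:-1]] = -rho
Xs, ys = P_inv @ X, P_inv @ y
beta_ols_star = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)

assert np.allclose(beta_gls, beta_ols_star)           # beta_MLE = beta_GLS

# sigma2_MLE = e*'e*/n, the transformed residual sum of squares over n
e_star = ys - Xs @ beta_ols_star
sigma2_mle = e_star @ e_star / T
```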
