A Companion to Theoretical Econometrics

Appendix A The Future Component

The future component we used was a fourth-order polynomial in the state variables. Below, in the interest of space and clarity, we will develop that polynomial only up to its third-order terms. The extension to the higher-order terms is obvious. From equation (22.12), the future component is the flexible functional form

$$F(X_{1t} + \chi(j=1),\; X_{2t} + \chi(j=2),\; S_t + \chi(j=3),\; t+1)$$

Define $\ell_k = \chi(j = k)$. Then, to third-order terms, we used the following polynomial to represent this function.

$$
\begin{aligned}
F(X_1 + \ell_1, X_2 + \ell_2, S + \ell_3, t + 1) = {}& \beta_1 + \beta_2(X_1 + \ell_1) + \beta_3(X_2 + \ell_2) + \beta_4(S + \ell_3) \\
& + \beta_5(t+1) + \beta_6(X_1 + \ell_1)^2 + \beta_7(X_2 + \ell_2)^2 + \beta_8(S + \ell_3)^2 \\
& + \beta_9(t+1)^2 + \beta_{10}(X_1 + \ell_1)^3 + \beta_{11}(X_2 + \ell_2)^3 + \beta_{12}(S + \ell_3)^3 \\
& + \beta_{13}(t+1)^3 + \beta_{14}(X_1 + \ell_1)^2(X_2 + \ell_2) + \beta_{15}(X_1 + \ell_1)^2(S + \ell_3) \\
& + \beta_{16}(X_1 + \ell_1)^2(t+1) + \beta_{17}(X_2 + \ldots
\end{aligned}
$$
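The terms of such a flexible polynomial need not be written out by hand; they can be generated mechanically from the list of state arguments. A minimal Python sketch (the numeric values and function name are illustrative placeholders, not from the chapter):

```python
from itertools import combinations_with_replacement

def poly_terms(values, order):
    """All monomials in the given variables up to `order`,
    starting with the constant term (degree 0)."""
    terms = [1.0]
    for d in range(1, order + 1):
        for combo in combinations_with_replacement(values, d):
            prod = 1.0
            for v in combo:
                prod *= v
            terms.append(prod)
    return terms

# Four state arguments, as in the future component:
# X1 + l1, X2 + l2, S + l3, and t + 1 (placeholder values).
args = [2.0, 3.0, 1.0, 4.0]
basis = poly_terms(args, 3)
# With 4 variables, degrees 0..3 give 1 + 4 + 10 + 20 = 35 terms,
# so F is the inner product of 35 coefficients with this basis.
assert len(basis) == 35
```

The future component is then the inner product of the coefficient vector $(\beta_1, \beta_2, \ldots)$ with this basis; going to fourth order only extends the loop bound.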


Economic Forecasting: A Theoretical Framework

2.1 Optimal forecasts, feasible forecasts, and forecast errors

Let $y_t$ denote the scalar time series variable that the forecaster wishes to forecast, let $h$ denote the horizon of the forecast, and let $F_t$ denote the set of data used at time $t$ for making the forecast ($F_t$ is sometimes referred to as the information set available to the forecaster). If the forecaster has squared error loss, the point forecast $y_{t+h|t}$ is the function of $F_t$ that minimizes the expected squared forecast error, that is, $E[(y_{t+h} - y_{t+h|t})^2 \mid F_t]$. This expected loss is minimized when the forecast is the conditional expectation, $E(y_{t+h} \mid F_t)$. In general, this conditional expectation might be a time-varying function of $F_t$...
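That the conditional expectation minimizes expected squared forecast error can be checked by simulation. A minimal sketch, assuming a one-step-ahead AR(1)-type setting where the conditional mean given $y_t$ is $\phi y_t$ (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Given F_t = {y_t}, suppose y_{t+1} = phi * y_t + e_{t+1} with e ~ N(0, 1).
# The optimal forecast under squared error loss is the conditional mean phi * y_t.
phi, n = 0.8, 200_000
y_t = rng.normal(size=n)
y_next = phi * y_t + rng.normal(size=n)

mse_cond_mean = np.mean((y_next - phi * y_t) ** 2)        # optimal forecast
mse_other = np.mean((y_next - 1.1 * phi * y_t) ** 2)      # any other rule

# The conditional mean attains the lower expected loss,
# which approaches Var(e) = 1 as n grows.
assert mse_cond_mean < mse_other
```

Any forecast rule other than the conditional mean adds the squared bias of that rule to the irreducible error variance, which is what the comparison above picks up.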


Preliminaries: Unit Roots and Cointegration

2.1 Some basic concepts

A well-known result in time series analysis is Wold's (1938) decomposition theorem, which states that a stationary time series process, after removal of any deterministic components, has an infinite moving average (MA) representation which, under some technical conditions (absolute summability of the MA coefficients), can be represented by a finite autoregressive moving average (ARMA) process.

However, as mentioned in the introduction, many time series need to be appropriately differenced in order to achieve stationarity. From this comes the definition of integration: a time series is said to be integrated of order d, in short I(d), if it has a stationary, invertible, non-deterministic ARMA representation after differencing d times...
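The definition of integration has a direct mechanical counterpart: cumulative summation integrates a stationary series, and differencing $d$ times undoes $d$ levels of integration exactly. A small illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# White-noise shocks e_t form a stationary series: I(0).
e = rng.normal(size=5000)

# Partial sums integrate once: y_t = y_{t-1} + e_t is a random walk, I(1).
y1 = np.cumsum(e)
# Integrating again gives an I(2) process.
y2 = np.cumsum(y1)

# Differencing d times recovers the stationary shocks.
assert np.allclose(np.diff(y1, n=1), e[1:])   # I(1): one difference suffices
assert np.allclose(np.diff(y2, n=2), e[2:])   # I(2): two differences needed
```

Differencing an I(1) series once, or an I(2) series twice, returns exactly the underlying I(0) shocks, which is the content of the definition above.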


Models and Their Specification

Suppose the focus of the analysis is to consider the behavior of the $n \times 1$ vector of random variables $w_t = (w_{1t}, w_{2t}, \ldots, w_{nt})'$ observed over the period $t = 1, 2, \ldots, T$. A model of $w_t$, indexed by $\mathcal{M}_i$, is defined by the joint probability distribution function (pdf) of the observations

[11] $+ \exp[(Z_{is} - Z_{it})'\gamma_2 + \gamma_1(y_{i,s-1} - y_{i,t+1}) + \gamma_1(y_{i,s+1} - y_{i,t-1})]\,\mathbf{1}(|t - s| \ge 3)$ (16.30)

[12] are unemployed in January and remain unemployed in December too;

[13] are unemployed in January and find a job before December.

• MC tests based on pivotal statistics: an exact randomized test procedure;

• MC tests in the presence of nuisance parameters:

(a) local MC p-value,

(b) bounds MC p-value,

(c) maximized MC p-value;

• MC tests versus the bootstrap:

(a) fundamental differences/similarities,

(b) the number...
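The first item in the outline above, the exact randomized MC test based on a pivotal statistic, can be sketched in a few lines. This is a toy illustration, not the chapter's notation: the statistic and sample size are placeholders, and the key fact used is that for a continuous pivotal statistic with $N$ replications, $p \le \alpha$ is an exact level-$\alpha$ rule whenever $\alpha(N+1)$ is an integer (e.g. $N = 99$, $\alpha = 0.05$):

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_pvalue(stat_obs, draw_stat, n_rep=99):
    """Monte Carlo p-value: rank of the observed statistic among
    n_rep simulated draws from its exact null distribution."""
    sims = np.array([draw_stat() for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

# Toy example: H0: mean = 0 for n = 20 normal observations, using a
# |t|-type statistic, which is pivotal under H0 (scale cancels).
n = 20
x = rng.normal(loc=0.0, size=n)                       # data generated under H0
t_obs = abs(x.mean()) / (x.std(ddof=1) / np.sqrt(n))

def draw_stat():
    z = rng.normal(size=n)                            # simulate under H0
    return abs(z.mean()) / (z.std(ddof=1) / np.sqrt(n))

p = mc_pvalue(t_obs, draw_stat)
assert 0 < p <= 1
```

Because the statistic is pivotal, the simulated draws come from its exact finite-sample null distribution, so no asymptotic approximation is involved.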


Additive Regressions

In recent years several researchers have attempted to estimate $m(x_i)$ by imposing some structure upon the nature of the conditional mean $m(x_i)$. One popular solution is the generalized additive model of Hastie and Tibshirani (1990), which is

$$y_i = m(x_i) + u_i = m_1(x_{i1}) + m_2(x_{i2}) + \cdots + m_q(x_{iq}) + u_i$$

where $m_s$, $s = 1, \ldots, q$, are functions of single variables with $E[m_s(x_{is})] = 0$, $s = 2, \ldots, q$, for identification. Each $m_s$, and hence $m(x_i)$, can then be estimated at the one-dimensional convergence rate $(nh)^{1/2}$, which is faster than the rate $(nh^q)^{1/2}$ achieved by direct nonparametric estimation of $m(x_i)$. The statistical properties of the Hastie and Tibshirani (1990) estimation algorithm are complicated...
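The standard way to fit such an additive model is backfitting: cycle over the components, smoothing the partial residuals against each $x_s$ in turn and recentering. A minimal sketch with a Nadaraya–Watson smoother (the bandwidth, sample size, and toy component functions are illustrative choices, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(7)

def kernel_smooth(x, y, h=0.3):
    """Nadaraya-Watson (Gaussian kernel) smoother, evaluated at the sample points."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def backfit(X, y, n_iter=20, h=0.3):
    """Estimate y = alpha + sum_s m_s(x_s) by backfitting,
    recentering each m_s so that mean(m_s) = 0."""
    n, q = X.shape
    alpha = y.mean()
    m = np.zeros((q, n))
    for _ in range(n_iter):
        for s in range(q):
            # Partial residuals: remove all components except m_s.
            resid = y - alpha - m.sum(axis=0) + m[s]
            m[s] = kernel_smooth(X[:, s], resid, h)
            m[s] -= m[s].mean()
    return alpha, m

# Toy additive model: m1(x) = sin(x), m2(x) = x^2 (up to centering).
n = 400
X = rng.uniform(-2, 2, size=(n, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)
alpha, m = backfit(X, y)

# The fitted first component tracks sin(x1) closely.
corr = np.corrcoef(m[0], np.sin(X[:, 0]))[0, 1]
assert corr > 0.8
```

Each smoothing step is a one-dimensional regression, which is what delivers the one-dimensional convergence rate discussed above.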


Monte Carlo tests in the presence of nuisance parameters: examples from the multivariate regression model

In this section, we provide examples from Dufour and Khalaf (1998a, 1998b) pertaining to LR test criteria in the MLR (reduced-form) model. The model was introduced in Section 2.3. Consider the three-equation system

$$
\begin{aligned}
Y_1 &= \beta_{10} + \beta_{11}X_1 + U_1,\\
Y_2 &= \beta_{20} + \beta_{22}X_2 + U_2,\\
Y_3 &= \beta_{30} + \beta_{33}X_3 + U_3,
\end{aligned} \qquad (23.38)
$$

imposing normality, and the hypothesis $H_0: \beta_{11} = \beta_{22} = \beta_{33}$. First restate $H_0$ in terms of the MLR model, which includes the SURE system as a special case, so that it incorporates the SURE exclusion restrictions. Formally, in the framework of the MLR model

$$
\begin{aligned}
Y_1 &= \beta_{10} + \beta_{11}X_1 + \beta_{12}X_2 + \beta_{13}X_3 + U_1,\\
Y_2 &= \beta_{20} + \beta_{21}X_1 + \beta_{22}X_2 + \beta_{23}X_3 + U_2,\\
Y_3 &= \beta_{30} + \beta_{31}X_1 + \beta_{32}X_2 + \beta_{33}X_3 + U_3,
\end{aligned} \qquad (23.39)
$$

H0 is equivalent to the joint hypothesis

$$H_0^*: \beta_{11} = \beta_{22} = \beta_{33} \ \text{ and } \ \beta_{12} = \beta_{13} = \beta_{21} = \beta_{23} = \beta_{31} = \beta_{32} = 0. \qquad (23.40)$$

The associated L...
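When the statistic is not pivotal, the null distribution depends on nuisance parameters (here, the error covariance and the common slope), and the MC p-value must be simulated at some chosen value of those parameters. The "local" MC p-value uses a consistent restricted estimate. The following is a deliberately scalar illustration of that mechanic, not the SURE LR computation itself: an LR test of $H_0: \mu = 0$ in $N(\mu, \sigma^2)$, simulating under $H_0$ with $\sigma^2$ replaced by its restricted estimate (in this toy case the statistic happens to be pivotal, but the code shows the local-MC recipe):

```python
import numpy as np

rng = np.random.default_rng(3)

def lr_stat(x):
    """LR statistic for H0: mu = 0 against mu free, sigma^2 unknown."""
    n = x.size
    s2_unres = np.var(x)          # MLE of sigma^2 with mu free
    s2_res = np.mean(x ** 2)      # MLE of sigma^2 under mu = 0
    return n * np.log(s2_res / s2_unres)

n, n_rep = 30, 99
x = rng.normal(loc=0.0, scale=2.0, size=n)   # data (generated under H0 here)
stat_obs = lr_stat(x)

# "Local" choice: simulate under H0 at the restricted nuisance estimate.
sigma_hat = np.sqrt(np.mean(x ** 2))
sims = np.array([lr_stat(rng.normal(scale=sigma_hat, size=n))
                 for _ in range(n_rep)])
p_local = (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)
assert 0 < p_local <= 1
```

The bounds and maximized MC p-values listed earlier replace this single nuisance-parameter value with, respectively, a bounding distribution and a maximization of the simulated p-value over the nuisance space.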
