Univariate forecasts are made solely using past observations on the series being forecast. Even when economic theory suggests additional variables that should be useful in forecasting a particular variable, univariate forecasts provide a simple and often reliable benchmark against which to assess the performance of multivariate methods. In this section, some linear and nonlinear univariate forecasting methods are briefly presented. The performance of these methods is then illustrated for the macroeconomic time series in Figures 27.1-27.5.
One of the simplest forecasting methods is the exponential smoothing or exponentially weighted moving average (EWMA) method. The EWMA forecast is,
ŷt+h|t = aŷt+h-1|t-1 + (1 - a)yt, (27.4)
where ŷt+h|t denotes the forecast of yt+h made using data through time t, and a is a parameter chosen by the forecaster or estimated by nonlinear least squares from historical data.
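The EWMA recursion in (27.4) can be sketched in a few lines of code. The function name, series, and smoothing parameter below are illustrative, not taken from the text.

```python
def ewma_forecast(y, a):
    """Exponentially weighted moving average forecast.

    Implements the recursion in (27.4): the forecast is a weighted average
    of the previous forecast and the latest observation, with smoothing
    parameter a in [0, 1]. The h-step forecast does not depend on h.
    """
    f = y[0]  # initialize the forecast at the first observation
    for obs in y[1:]:
        f = a * f + (1 - a) * obs  # blend old forecast with new data
    return f

# Illustrative series and smoothing parameter (not from the text)
y = [10.0, 12.0, 11.0, 13.0, 12.5]
print(ewma_forecast(y, a=0.5))  # prints 12.25
```

A smaller a puts more weight on recent observations; a = 1 ignores the data entirely and a = 0 reproduces the last observation.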
Autoregressive moving average (ARMA) models are a mainstay of univariate forecasting. The ARMA(p, q) model is,
a(L)yt = μt + b(L)et, (27.5)
where et is a serially uncorrelated disturbance and a(L) and b(L) are lag polynomials of orders p and q, respectively. For yt to be stationary, the roots of a(L) must lie outside the unit circle, and for b(L) to be invertible, the roots of b(L) must also lie outside the unit circle. The term μt summarizes the deterministic component of the series. For example, if μt is a constant, the series is stationary around a constant mean. If μt = μ0 + μ1t, the series is stationary around a linear time trend. If q > 0, estimation of the unknown parameters of a(L) and b(L) entails nonlinear maximization. Asymptotic Gaussian maximum likelihood estimates of these parameters are a staple of time series forecasting computer packages. Multistep forecasts are computed by iterating the one-step forecasts forward. A deficiency of ARMA models is the estimator bias introduced when the MA roots are large, the so-called unit MA root pileup problem (see Davis and Dunsmuir, 1996; and, for a general discussion and references, Stock, 1994).
An important special case of the ARMA model is the pure autoregressive model of order p (the AR(p) model). Meese and Geweke (1984) performed a large simulated out-of-sample forecasting comparison of a variety of linear forecasts and found that long autoregressions, and autoregressions with lags selected by information criteria, performed well and on average outperformed forecasts from ARMA models. The parameters of an AR(p) can be estimated by ordinary least squares (OLS), and the order of the autoregression can be estimated consistently by, for example, the BIC.
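OLS estimation of an AR(p) and BIC lag selection can be sketched as follows. The function names and the simulated series are illustrative; for simplicity each candidate p is fit on its own effective sample, whereas careful work would hold the estimation sample fixed across p.

```python
import numpy as np

def fit_ar_ols(y, p):
    """Estimate an AR(p) by OLS: regress y_t on a constant and p lags."""
    y = np.asarray(y, dtype=float)
    Y = y[p:]  # left-hand side: y_t for t = p, ..., T-1
    X = np.column_stack([np.ones(len(Y))] +
                        [y[p - j: len(y) - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta, Y - X @ beta  # coefficients and residuals

def bic_lag_order(y, pmax):
    """Choose the lag order minimizing the BIC, a consistent criterion."""
    best_p, best_bic = 1, np.inf
    for p in range(1, pmax + 1):
        _, resid = fit_ar_ols(y, p)
        T = len(resid)
        bic = np.log(resid @ resid / T) + (p + 1) * np.log(T) / T
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p

# Illustrative: simulated AR(1) data (not one of the series in the text)
rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
print(bic_lag_order(y, pmax=4))
```

Because the BIC penalty grows with log(T), it selects the true finite order with probability approaching one, whereas criteria with lighter penalties (such as the AIC) tend to overfit.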
Harvey (1989) has proposed a different framework for univariate forecasting, based on a decomposition of a series into components: trend, cycle, seasonal, and irregular. Conceptually, this framework draws on an old idea in economic time series analysis that a series has different properties at different horizons, so that, for example, one can discuss the cyclical properties of a series separately from its trend properties; Harvey therefore calls these structural time series models. He models the components as statistically uncorrelated at all leads and lags and parameterizes each to reflect its role; for example, the trend can be modeled as a random walk with drift or as a doubly integrated random walk, possibly with drift. Estimation is by asymptotic Gaussian maximum likelihood. The resulting forecasts are linear in the historical data (although nonlinear in the parameters of the model), so these too are linear forecasts. Harvey (1989) argues that this formulation produces forecasts that avoid some of the undesirable properties of ARMA models. As with ARMA models, user judgment is required to select the model. One interesting application of these models is trend estimation; see, for example, Stock and Watson (1998).
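The simplest structural time series model is the local level model, in which the trend is a driftless random walk and the only other component is the irregular. A minimal Kalman filter sketch of its forecast is given below, assuming the two variances are known; in Harvey's framework they would be estimated by Gaussian maximum likelihood, and the function name and inputs here are hypothetical.

```python
def local_level_forecast(y, var_eps, var_eta):
    """Kalman filter for the local level model
        y_t  = mu_t + eps_t,       eps_t ~ (0, var_eps)  (irregular)
        mu_t = mu_{t-1} + eta_t,   eta_t ~ (0, var_eta)  (trend)
    Returns the forecast of future y, which is flat in the horizon h
    because the level is a driftless random walk.
    """
    mu, P = y[0], var_eps + var_eta  # initialize at the first observation
    for obs in y[1:]:
        P = P + var_eta            # predict: level uncertainty grows
        K = P / (P + var_eps)      # Kalman gain
        mu = mu + K * (obs - mu)   # update the filtered level
        P = (1.0 - K) * P
    return mu

# Illustrative data and variances (not from the text)
print(local_level_forecast([1.0, 1.4, 0.9, 1.2], var_eps=1.0, var_eta=0.1))
```

In the limiting cases the filter reduces to familiar rules: with var_eta large relative to var_eps the forecast is essentially the last observation, while the steady-state gain in general makes the forecast an exponentially weighted average of past data, linking this model back to the EWMA forecast in (27.4).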