Discussion and Conclusion

One of the few truly safe predictions is that economic forecasters will remain the target of jokes in public discourse. In part this arises from a lack of understanding that all forecasts must in the end be wrong, and that forecast error is inevitable. Economic forecasters can, however, bolster their credibility by providing information about the possible range of forecast errors. Some consumers are uncomfortable with forecast uncertainty: when his advisors presented a forecast interval for economic growth, President Lyndon Johnson is said to have replied, "ranges are for cattle." Yet communication of forecast uncertainty to those who rely on forecasts helps them to create better, more flexible plans and supports the credibility of forecasters more generally.
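The kind of interval Johnson's advisors offered can be built from a model's track record of past forecast errors. As an illustrative sketch only (the function name, the 90 percent normal critical value, and the assumption of roughly normal, unbiased errors are all assumptions of this example, not methods from this chapter):

```python
import math

def forecast_interval(point_forecast, past_errors, z=1.645):
    # Root mean squared error of past one-step-ahead forecast errors.
    rmse = math.sqrt(sum(e * e for e in past_errors) / len(past_errors))
    # Roughly a 90% interval if errors are approximately normal and unbiased.
    return point_forecast - z * rmse, point_forecast + z * rmse

# Hypothetical: a growth forecast of 2.5% with five recent forecast errors.
lo, hi = forecast_interval(2.5, [0.4, -0.3, 0.5, -0.6, 0.2])
```

Reporting the pair (lo, hi) alongside the point forecast conveys exactly the range-of-error information discussed above.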

A theme of this chapter has been the tradeoff between complex models, which either use more information to forecast or allow subtle nonlinear formulations of the conditional mean, and simple models, which require fitting a small number of parameters and which thereby reduce parameter estimation uncertainty. The empirical results in Tables 27.1 and 27.2 provide a clear illustration of this tradeoff. The short-term interest rate is influenced by expected inflation, monetary policy, and the general supply and demand for funds, and, because the nominal rate must be positive, the "true" model for the interest rate must be nonlinear. Yet, of the autoregressions, neural nets, LSTAR models, and VARs considered in Tables 27.1 and 27.2, the best forecast was generated by a simple exponentially weighted moving average of past values of the interest rate. No attempt has been made to uncover the source of the relatively poor performance of the more sophisticated forecasts of the interest rate, but presumably it arises from a combination of parameter estimation error and temporal instability in the more complicated models.
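The exponentially weighted moving average that wins in Tables 27.1 and 27.2 is itself simple to state: the forecast is a geometrically declining weighted average of past values, updated recursively. A minimal sketch (the smoothing weight of 0.3 and the function name are illustrative choices, not the values used in the tables):

```python
def ewma_forecast(series, alpha=0.3):
    # One-step-ahead forecast: level_t = alpha * y_t + (1 - alpha) * level_{t-1}.
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical short history of 90-day T-bill rates (percent).
rates = [5.1, 5.0, 5.2, 5.3, 5.1]
next_rate_forecast = ewma_forecast(rates)
```

Only one parameter (the smoothing weight) is estimated, which is precisely why parameter estimation uncertainty is so small relative to the richer models.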

An important practical question is how to resolve this tradeoff in practice. Two methods have been discussed here. At a formal level, the tradeoff is captured by the use of information criteria. Information criteria can be misleading, however, when many models are being compared and/or when the forecasting environment changes over time. The other method is to perform a simulated out-of-sample forecast comparison of a small number of models. This is in fact closely related to information criteria (Wei, 1992) and shares some of their disadvantages. When applied to at most a few candidate models, however, it has the advantage of providing evidence on recent forecasting performance and on how the forecasting performance of a model has evolved over the simulated forecast period. These observations, along with those above about reporting forecast uncertainty, suggest a simple rule: even if your main interest is in more complicated models, it pays to maintain benchmark forecasts using a simple model with honest forecast standard errors evaluated using a simulated real-time experiment, and to convey the forecast uncertainty to the consumer of the forecast.
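The simulated out-of-sample comparison just described can be sketched in a few lines: at each date the candidate model is fit only on data through that date, a one-step-ahead forecast is made, and squared errors are accumulated. The recursive scheme and the two toy forecasters below are illustrative choices, not the models of Tables 27.1 and 27.2:

```python
def pseudo_out_of_sample_mse(series, forecaster, start):
    # At each t >= start, "fit" on series[:t] only, then forecast series[t].
    errors = [series[t] - forecaster(series[:t]) for t in range(start, len(series))]
    return sum(e * e for e in errors) / len(errors)

def mean_forecast(history):   # forecast with the sample mean to date
    return sum(history) / len(history)

def naive_forecast(history):  # random-walk forecast: the last observed value
    return history[-1]

# On a persistent (trending) series the naive forecast dominates the mean.
y = [float(t) for t in range(20)]
mse_naive = pseudo_out_of_sample_mse(y, naive_forecast, start=5)
mse_mean = pseudo_out_of_sample_mse(y, mean_forecast, start=5)
```

Tracking the errors date by date, rather than only the final MSE, is what reveals whether a model's performance has deteriorated recently.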

Finally, an important topic not addressed in this chapter is model instability. All forecasting models, no matter how sophisticated, are stylized and simplified ways to capture the complex and rich relations among economic time series variables. There is no particular reason to believe that these underlying relations are stable – technology, global trade, and macroeconomic policy have all evolved greatly over the past three decades – and even if they were, the implied parameters of the forecasting relations need not be stable. One therefore would expect estimated forecasting models to have parameters that vary over time, and in fact this appears to be the case empirically (Stock and Watson, 1996). Indeed, Clements and Hendry (1999) argue that most if not all major economic forecast failures arise because of unforeseen events that lead to a breakdown of the forecasting model; they survey existing methods and suggest some new techniques for detecting and adjusting to such structural shifts. The question of how best to forecast in a time-varying environment remains an important area of econometric research.
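One simple way to see the kind of parameter drift documented empirically is to re-estimate a forecasting equation over rolling windows and inspect how the coefficient moves. A sketch using a zero-intercept AR(1) (the window length and function names are illustrative; the cited work uses richer models and formal stability tests):

```python
def ar1_coef(y):
    # OLS slope of y_t on y_{t-1}, intercept suppressed for simplicity.
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

def rolling_ar1(y, window):
    # One estimate per window; drift across windows signals instability.
    return [ar1_coef(y[s:s + window]) for s in range(len(y) - window + 1)]

# For a series that truly follows y_t = 0.5 * y_{t-1}, every window agrees;
# substantial spread across windows in real data suggests time variation.
stable = [0.5 ** t for t in range(30)]
coefs = rolling_ar1(stable, 10)
```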


* The author thanks Lewis Chan for research assistance and four anonymous referees for useful suggestions.

1 All series were obtained from the Basic Economics Database maintained by DRI/McGraw-Hill. The series mnemonics are: PUNEW (the CPI); IP (industrial production); LHUR (the unemployment rate); FYGM3 (the 90-day U.S. Treasury bill rate); and IVMTQ (real manufacturing and trade inventories).

2 These results are drawn from the much larger model comparison exercise in Stock and Watson (1999a), to which the reader is referred for additional details on estimation method, model definitions, data sources, etc.

3 In influential work, Cooper (1972) and Nelson (1972) showed this in a particularly dramatic way. They found that simple ARMA models typically produced better forecasts of the major macroeconomic aggregates than did the main large structural macroeconomic models of the time. For a discussion of these papers and the ensuing literature, see Granger and Newbold (1986, ch. 9.4).
