During the last 20 years, computer-based simulation methods have revolutionized the way we approach statistical analysis. This has been made possible by the rapid development of increasingly quick and inexpensive computers. Important innovations in this field include the bootstrap methods for improving standard asymptotic approximations (for reviews, see Efron, 1982; Efron and Tibshirani, 1993; Hall, 1992; Jeong and Maddala, 1993; Vinod, 1993; Shao and Tu, 1995; Davison and Hinkley, 1997; Horowitz, 1997) and techniques where estimators and forecasts are obtained from criteria evaluated by simulation (see Mariano and Brown, 1993; Hajivassiliou, 1993; Keane, 1993; Gourieroux, Monfort, and Renault, 1993; Gallant and Tauchen, 1996).
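As a minimal illustration of the bootstrap idea surveyed above, the sketch below resamples an observed sample with replacement to approximate the sampling distribution of the mean, and reads off a percentile confidence interval. The data-generating process, sample size, and number of replications are all illustrative choices, not prescriptions from the text.

```python
import numpy as np

# Hedged sketch of the nonparametric bootstrap: approximate the sampling
# distribution of a statistic by resampling the data with replacement.
# The exponential sample and B = 2000 replications are illustrative.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100)   # "observed" sample (simulated)

B = 2000                                   # number of bootstrap replications
boot_means = np.empty(B)
for b in range(B):
    resample = rng.choice(x, size=x.size, replace=True)
    boot_means[b] = resample.mean()

# Percentile 95% confidence interval for the mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo, hi)
```

The percentile interval is the simplest of the bootstrap intervals reviewed in Efron and Tibshirani (1993); refinements such as the BCa interval adjust it for bias and skewness.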
A Companion to Theoretical Econometrics
In some cases, the object of forecasting is not to produce a point forecast but rather a range within which y_{t+h} falls with a prespecified probability. Even within the context of point forecasting, it is useful to provide users of forecasts with a measure of the forecast's uncertainty. Both ends can be accomplished by reporting prediction intervals.
In general, the form of the prediction interval depends on the underlying distribution of the data. The simplest prediction interval is obtained by assuming that the data are conditionally homoskedastic and normal.
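Under that homoskedastic-normal assumption, the interval is just the point forecast plus and minus a normal critical value times the forecast standard error. A minimal sketch (the inputs `point_forecast` and `sigma_h` are hypothetical, standing in for an h-step forecast and its error standard deviation):

```python
# Hedged sketch: a symmetric prediction interval for y_{t+h} under the
# assumption that the h-step forecast error is N(0, sigma_h^2).
# point_forecast and sigma_h are hypothetical inputs, not from the text.

def prediction_interval(point_forecast, sigma_h, z=1.96):
    """Return (lower, upper); z = 1.96 gives 95% nominal coverage."""
    half_width = z * sigma_h
    return point_forecast - half_width, point_forecast + half_width

low, high = prediction_interval(2.5, 0.8)
print(low, high)
```

With non-normal or conditionally heteroskedastic errors, sigma_h must be replaced by a conditional standard deviation (e.g. from a GARCH model) or the interval computed by simulation or bootstrap methods.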
Whereas in the previous section we confined the analysis to the case where there is at most a single cointegrating vector in a bivariate system, this setup is usually quite restrictive when analyzing the cointegrating properties of an n-dimensional vector of I(1) variables where several cointegration relationships may arise. For example, when dealing with a trivariate system formed by the logarithms of nominal wages, prices, and labor productivity, there may exist two relationships, one determining an employment equation and another determining a wage equation. In this section we survey some of the popular estimation and testing procedures for cointegration in this more general multivariate context, which will be denoted as system-based approaches.
In general, if y_t now represents a vector…
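The wages/prices/productivity example above can be made concrete with a small simulation. The sketch below (numpy only; all coefficients are invented for illustration) builds three I(1) series driven by one common stochastic trend, so that two cointegrating relations exist: the real wage w − p and the relative price p − a are stationary even though each level series is a random walk.

```python
import numpy as np

# Hedged illustration: a trivariate I(1) system with one common stochastic
# trend and hence two cointegrating relations. Coefficients are made up.
rng = np.random.default_rng(42)
T = 5000
trend = np.cumsum(rng.normal(size=T))      # common I(1) stochastic trend

# Three I(1) series loading on the same trend (unit loadings for simplicity)
w = trend + rng.normal(size=T)             # log nominal wages
p = trend + rng.normal(size=T)             # log prices
a = trend + rng.normal(size=T)             # log labor productivity

real_wage = w - p       # cointegrating relation 1: stationary
rel_price = p - a       # cointegrating relation 2: stationary

# A stationary combination has bounded variance; a random walk's sample
# variance grows with T.
print(np.var(real_wage), np.var(w))
```

System-based procedures such as Johansen's maximum-likelihood approach estimate the number of such relations (the cointegrating rank) and the cointegrating vectors jointly, rather than one equation at a time.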
An obvious question is how to carry out, within nonparametric and semiparametric models, the various diagnostic tests familiar from parametric econometrics. Several papers in the recent literature deal with this issue. We present them here and show their links.
First consider the problem of testing a specified parametric model against a nonparametric alternative, H0: f(β, x) = E(y_i | x_i) against H1: m(x) = E(y_i | x_i). The idea behind the Ullah (1985) test statistic is to compare the parametric RSS (PRSS), Σ_i û_i² with û_i = y_i − f(β̂, x_i), with the nonparametric RSS (NPRSS), Σ_i ũ_i² with ũ_i = y_i − m̂(x_i). His test statistic is

T₁ = (PRSS − NPRSS)/NPRSS,

or simply T* = PRSS − NPRSS, and the null hypothesis is rejected when T₁ is large. √n T₁ has a degenerate distribution under H0…
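The comparison of the two residual sums of squares can be sketched as follows. This is only an illustration of the goodness-of-fit idea, not the full test: the bandwidth, kernel, and data-generating process are ad hoc choices, and the critical values would in practice come from the (non-degenerate, suitably normalized) asymptotic distribution or from a bootstrap.

```python
import numpy as np

# Hedged sketch of the PRSS-vs-NPRSS comparison behind an Ullah-type
# statistic: OLS under H0 (linear) against a Nadaraya-Watson kernel
# regression under H1. Bandwidth h and the DGP are illustrative.
rng = np.random.default_rng(1)
n = 400
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.3, size=n)   # H0 is true here

# Parametric fit: OLS of y on (1, x)
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
prss = np.sum((y - X @ beta_hat) ** 2)

# Nonparametric fit: Nadaraya-Watson estimator with a Gaussian kernel
h = 0.3
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
m_hat = (K @ y) / K.sum(axis=1)
nprss = np.sum((y - m_hat) ** 2)

T1 = (prss - nprss) / nprss   # small when the parametric model is adequate
print(T1)
```

When the parametric model is misspecified, the kernel fit tracks the true regression function while the parametric fit cannot, so PRSS grows relative to NPRSS and T₁ becomes large.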
Stochastic frontier models are commonly used in the empirical study of firm efficiency and productivity. The seminal papers in the field are Aigner, Lovell, and Schmidt (1977) and Meeusen and van den Broeck (1977), while a recent survey is provided in Bauer (1990). The ideas underlying this class of models can be demonstrated using a simple production model where output of firm i, Yi, is produced using a vector of inputs, Xi, (i = 1 … N). The best practice technology for turning inputs into output depends on a vector of unknown parameters, β, and is given by:
Yi = f(Xi; β). (24.1)
This so-called production frontier captures the maximum amount of output that can be obtained from a given level of inputs.
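The composed-error structure of Aigner, Lovell, and Schmidt (1977) adds to the frontier a symmetric noise term v_i and a one-sided inefficiency term u_i ≥ 0, so observed output lies on or below the (noisy) frontier. A minimal simulation sketch, with all parameter values invented for illustration:

```python
import numpy as np

# Hedged simulation of a composed-error stochastic frontier:
#   log Y_i = beta0 + beta1 * log X_i + v_i - u_i,
# with v_i ~ N(0, sigma_v^2) symmetric noise and u_i >= 0 half-normal
# inefficiency. All parameter values below are illustrative.
rng = np.random.default_rng(7)
n = 1000
log_x = rng.normal(size=n)                 # single log input
beta0, beta1 = 1.0, 0.6                    # frontier parameters (made up)

v = rng.normal(scale=0.2, size=n)          # symmetric noise
u = np.abs(rng.normal(scale=0.4, size=n))  # half-normal inefficiency

log_frontier = beta0 + beta1 * log_x       # best-practice log output
log_y = log_frontier + v - u               # observed log output

# Technical efficiency of firm i is exp(-u_i), which lies in (0, 1].
te = np.exp(-u)
print(te.mean())
```

Because u_i is nonnegative, every firm's output (net of noise) sits weakly below the frontier; estimation of β and the variance parameters then typically proceeds by maximum likelihood on the composed error.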
Time series data have been used since the dawn of empirical analysis in the mid-seventeenth century. In the "Bills of Mortality" John Graunt compared data on births and deaths over the period 1604-60 and across regions (parishes); see Stigler (1986). The time dimension of such data, however, was not properly understood during the early stages of empirical analysis. Indeed, it can be argued that the time dimension continued to bedevil statistical analysis for the next three centuries before it could be tamed in the context of proper statistical models for time series data.
The descriptive statistics period: 1665-1926
Up until the last quarter of the nineteenth century the time dimension of observed data and what that entails was not apparent to the descriptive statistics literature which c…