The (augmented) Dickey-Fuller test can be used to test for the stationarity of your data. To perform this test, a few decisions have to be made regarding the time series, and these are usually made based on a visual inspection of the time-series plots. By looking at the plots you can determine whether the series have a linear or quadratic trend. If the trend in a series is quadratic, then the differenced version of the series will have a linear trend. In Figure 12.1 you can see that the Fed Funds rate appears to be trending downward and its difference appears to wander around some constant amount. Ditto for bonds. This suggests that the augmented Dickey-Fuller test regressions for each of these series should contain a constant, but not a time trend.
The GDP series in the uppe…
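The regression underlying the constant-only version of the test is Δy_t = α + γy_{t−1} + Σ a_s Δy_{t−s} + e_t, and the unit-root hypothesis is γ = 0. A minimal Python sketch of the γ estimate follows (no augmentation lags, constant only, made-up data; gretl's `adf` command is the proper tool, this only shows the mechanics):

```python
import math

def ols(y, X):
    """OLS coefficients via the normal equations (tiny helper, no libraries)."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):  # Gauss-Jordan elimination (fine for small, well-conditioned systems)
        piv = XtX[i][i]
        XtX[i] = [v / piv for v in XtX[i]]
        Xty[i] /= piv
        for r in range(k):
            if r != i:
                f = XtX[r][i]
                XtX[r] = [a - f * b for a, b in zip(XtX[r], XtX[i])]
                Xty[r] -= f * Xty[i]
    return Xty

def adf_gamma(y):
    """gamma in: Delta y_t = alpha + gamma*y_{t-1} + e_t (constant, no trend, no lags)."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    X = [[1.0, y[t - 1]] for t in range(1, len(y))]
    alpha, gamma = ols(dy, X)
    return gamma

# A strongly mean-reverting series: y_t = 0.5*y_{t-1} + a small deterministic wiggle.
y = [1.0]
for t in range(1, 200):
    y.append(0.5 * y[-1] + 0.01 * math.sin(t))
print(adf_gamma(y))  # approximately -0.5, well away from 0: the series is stationary
```

A unit-root series would instead produce a γ estimate close to zero; the formal test compares the t-ratio on γ to Dickey-Fuller critical values, not the usual t table.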
In this example, the probabilities of attending no college, a 2-year college, or a 4-year college after graduation are modeled as a function of a student’s grades. In principle, we would expect those with higher grades to be more likely to attend a 4-year college and less likely to skip college altogether. In the dataset, grades are measured on a scale of 1 to 13, with 1 being the highest. That means that if higher grades increase the probability of going to a 4-year college, the coefficient on grades will be negative. In this model the probabilities are modeled using the normal distribution, with the outcomes representing increasing levels of difficulty.
We can use gretl to estimate the ordered probit model because its probit command actually handles multinomial ordered choices as we…
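In the ordered probit model the three choice probabilities come from the normal CDF evaluated at cutoffs μ₁ < μ₂ minus the index βg. The Python sketch below (illustrative parameter values, not estimates from the dataset) shows how the probabilities are built and why a negative coefficient on grades raises the 4-year-college probability for students with low (i.e., good) grade numbers:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(grades, beta, mu1, mu2):
    """Choice probabilities for no college / 2-year / 4-year.

    Latent index = beta*grades + error, error ~ N(0,1), cutoffs mu1 < mu2.
    The parameter values used below are made up for illustration."""
    xb = beta * grades
    p_none = norm_cdf(mu1 - xb)
    p_2yr = norm_cdf(mu2 - xb) - norm_cdf(mu1 - xb)
    p_4yr = 1.0 - norm_cdf(mu2 - xb)
    return p_none, p_2yr, p_4yr

# grades run from 1 (best) to 13 (worst); beta < 0 means better (lower-numbered)
# grades raise the probability of attending a 4-year college.
probs_good = ordered_probit_probs(2.0, beta=-0.3, mu1=-2.9, mu2=-2.0)
probs_poor = ordered_probit_probs(12.0, beta=-0.3, mu1=-2.9, mu2=-2.0)
print(probs_good[2] > probs_poor[2])  # True: better grades, higher P(4-year college)
```

The three probabilities sum to one by construction, which is what makes the ordered model internally consistent.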
The vector autoregression model (VAR) is actually a little simpler to estimate than the VEC model. It is used when there is no cointegration among the variables and it is estimated using time-series that have been transformed to their stationary values.
In the example from POE4, we have macroeconomic data on RPDI and RPCE for the United States. The data are found in the fred.gdt dataset and have already been transformed into their natural logarithms. In the dataset, y is the log of real disposable income and c is the log of real consumption expenditures. As in the previous example, the first step is to determine whether the variables are stationary. If they are not, then you transform them into stationary time series and test for cointegration.
The data need to be analyzed in the same way as …
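Because each equation of a VAR contains only lagged variables on the right-hand side, the system can be estimated one equation at a time by least squares. As a rough illustration (in Python rather than gretl, with made-up coefficients and deterministic "shocks"), here is one equation of a bivariate VAR(1) fit by OLS:

```python
import math

def ols(y, X):
    """OLS coefficients via the normal equations (tiny helper, no libraries)."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):  # Gauss-Jordan elimination
        piv = XtX[i][i]
        XtX[i] = [v / piv for v in XtX[i]]
        Xty[i] /= piv
        for r in range(k):
            if r != i:
                f = XtX[r][i]
                XtX[r] = [a - f * b for a, b in zip(XtX[r], XtX[i])]
                Xty[r] -= f * Xty[i]
    return Xty

def var1_equation(dep, lag1, lag2):
    """One VAR(1) equation: regress dep_t on a constant, lag1_{t-1}, lag2_{t-1}."""
    yv = dep[1:]
    X = [[1.0, lag1[t - 1], lag2[t - 1]] for t in range(1, len(dep))]
    return ols(yv, X)

# Simulate a bivariate VAR(1) with known coefficients (0.5, 0.2) and (0.1, 0.3).
x, z = [1.0], [1.0]
for t in range(1, 300):
    x_prev, z_prev = x[-1], z[-1]
    x.append(0.5 * x_prev + 0.2 * z_prev + 0.01 * math.sin(t))
    z.append(0.1 * x_prev + 0.3 * z_prev + 0.01 * math.cos(t))

coefs = var1_equation(x, x, z)  # [constant, coefficient on x(-1), coefficient on z(-1)]
print([round(c, 2) for c in coefs])  # close to [0, 0.5, 0.2]
```

Repeating the same regression with z as the dependent variable completes the system, which is exactly what gretl's var command does internally, along with standard errors and lag-selection statistics.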
The first thing I usually do is to change the name to something less generic, e.g., cola, using
> cola <- gretldata
You can also load the current gretl data into R manually as shown below. To load the data in properly, you have to locate the Rdata.tmp file that gretl creates when you launch R from the GUI. Mine was cleverly hidden in C:/Users/Lee/AppData/Roaming/gretl/Rdata.tmp. Once found, use the read.table command in R as shown. The system you are using (Windows in my case) dictates whether the slashes are forward or backward. Also, I read the data in as cola rather than the generic gretldata to make things easier later.
> cola <- read.table("C:/Users/Lee/AppData/Roaming/gretl/Rdata.tmp",
+   header = TRUE)
The addition of header = TRUE to the code that gretl writes for you ensure…
Cragg and Donald (1993) have proposed a test statistic that can be used to test for weak identification (i.e., weak instruments). In order to compute it manually, you have to obtain a set of canonical correlations. These are not computed in gretl, so we will use another free software package, R, to do part of the computations. On the other hand, gretl prints the value of the Cragg-Donald statistic by default, so you won’t have to go to all of this trouble. Still, to illustrate a very powerful feature of gretl, we will use R to compute part of this statistic.
One solution to identifying weak instruments in models with more than one endogenous regressor is based on the use of canonical correlations…
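In the special case of a single endogenous regressor there is only one canonical correlation, and it is simply the square root of R² from the first-stage regression. Assuming the POE4 form of the statistic, CD = [(N − G − B)/L] × r²/(1 − r²), where G is the number of included exogenous regressors, B the number of endogenous regressors, and L the number of instruments (treat this formula as my reading of the text and verify it against POE4), a Python sketch with made-up data is:

```python
import math

def ols(y, X):
    """OLS coefficients via the normal equations (tiny helper, no libraries)."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):  # Gauss-Jordan elimination
        piv = XtX[i][i]
        XtX[i] = [v / piv for v in XtX[i]]
        Xty[i] /= piv
        for r in range(k):
            if r != i:
                f = XtX[r][i]
                XtX[r] = [a - f * b for a, b in zip(XtX[r], XtX[i])]
                Xty[r] -= f * Xty[i]
    return Xty

def r_squared(y, X):
    """R-squared from an OLS fit of y on X (X includes the constant)."""
    b = ols(y, X)
    fit = [sum(bi * xi for bi, xi in zip(b, row)) for row in X]
    my = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fit))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# First stage: the endogenous regressor x on a constant and two strong
# instruments z1, z2 (all data made up for illustration).
N = 100
z1 = [math.sin(t) for t in range(N)]
z2 = [math.cos(t / 2) for t in range(N)]
x = [0.8 * a + 0.4 * b + 0.1 * math.sin(3 * t) for t, (a, b) in enumerate(zip(z1, z2))]
X = [[1.0, a, b] for a, b in zip(z1, z2)]

r = math.sqrt(r_squared(x, X))   # the (only) canonical correlation when B = 1
G, B, L = 1, 1, 2                # constant; one endogenous regressor; two instruments
CD = ((N - G - B) / L) * (r ** 2 / (1 - r ** 2))
print(r, CD)  # r near 1 and CD large: the instruments are not weak here
```

With several endogenous regressors, the smallest of the canonical correlations takes the place of r, which is why R's canonical-correlation routine is needed for the general case.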
Before discussing such tests, another estimator of the model’s parameters deserves mention. The between estimator is also used in some circumstances. The between model is
ȳᵢ = β₁ + β₂x̄₂ᵢ + β₃x̄₃ᵢ + uᵢ + ēᵢ        (15.11)
where ȳᵢ is the average value of y for individual i, and x̄ₖᵢ is the average value of the kth regressor for individual i. Essentially, the observations in each group (or for each individual) are averaged over time, and the parameters are then estimated by least squares on these averages. Only the variation between individuals is used to estimate the parameters. The errors are uncorrelated across individuals and homoskedastic, and as long as the individual differences are not correlated with the regressors, the between estimator should be consistent for the parameters.
To obtain the between estimates, simply…
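The mechanics are simple enough to sketch in a few lines of Python (a toy one-regressor version, not gretl's implementation): average each individual's data over time, then run least squares on the group means.

```python
def between_estimates(ids, y, x):
    """Between estimator for y = b1 + b2*x: average y and x over time within
    each individual, then fit OLS (with a constant) to the group means."""
    groups = sorted(set(ids))
    ybar = [sum(yv for i, yv in zip(ids, y) if i == g) / ids.count(g) for g in groups]
    xbar = [sum(xv for i, xv in zip(ids, x) if i == g) / ids.count(g) for g in groups]
    n = len(groups)
    mx, my = sum(xbar) / n, sum(ybar) / n
    b2 = (sum((xi - mx) * (yi - my) for xi, yi in zip(xbar, ybar))
          / sum((xi - mx) ** 2 for xi in xbar))
    b1 = my - b2 * mx
    return b1, b2

# Three individuals observed for two periods each; y = 1 + 2*x holds exactly,
# so the regression on the group means recovers the coefficients exactly.
ids = [1, 1, 2, 2, 3, 3]
x = [1.0, 3.0, 2.0, 4.0, 5.0, 7.0]
y = [1 + 2 * xi for xi in x]
b1, b2 = between_estimates(ids, y, x)
print(b1, b2)  # 1.0 2.0
```

Note that the within-individual (over-time) variation in x and y is discarded entirely, which is exactly why this estimator is called the between estimator.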