# ARCH and GARCH

The basic ARCH(1) model can be expressed as:

    y_t = β + e_t                              (14.1)
    e_t | I_{t-1} ~ N(0, h_t)                  (14.2)
    h_t = α_0 + α_1 e_{t-1}²                   (14.3)
    α_0 > 0,  0 < α_1 < 1

The first equation describes the behavior of the mean of your time-series. In this case, equation (14.1) indicates that we expect the time-series to vary randomly about its mean, β. If the mean of your time-series drifts over time or is explained by other variables, you'd add them to this equation just as you would in a regular regression model. The second equation indicates that the errors of the regression, e_t, are normally distributed and heteroskedastic. The variance of the current period's error depends on information that is revealed in the preceding period, i.e., I_{t-1}. The variance of e_t is given the symbol h_t. The final equation describes how the variance behaves. Notice that h_t depends on the error in the preceding time period. The parameters in this equation have to be positive to ensure that the variance, h_t, is positive. Notice also that α_1 cannot be greater than one; if it were, the variance would be unstable.

The ARCH(1) model can be extended to include more lags of the errors, e_{t-q}. In this case, q refers to the order of the ARCH model. For example, ARCH(2) replaces (14.3) with h_t = α_0 + α_1 e_{t-1}² + α_2 e_{t-2}². When estimating regression models that have ARCH errors in gretl, you'll have to specify this order.
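To make the recursion in (14.1)–(14.3) concrete, here is a minimal sketch in Python (not gretl) of the ARCH(1) data-generating process. The parameter values are illustrative assumptions, not estimates from the text:

```python
import random

def simulate_arch1(n, beta, a0, a1, seed=42):
    """Simulate y_t = beta + e_t with ARCH(1) errors:
    h_t = a0 + a1 * e_{t-1}^2 and e_t | I_{t-1} ~ N(0, h_t)."""
    rng = random.Random(seed)
    e_prev = 0.0
    y, h = [], []
    for _ in range(n):
        h_t = a0 + a1 * e_prev ** 2        # conditional variance from last period's error
        e_t = rng.gauss(0.0, h_t ** 0.5)   # heteroskedastic, conditionally normal error
        y.append(beta + e_t)
        h.append(h_t)
        e_prev = e_t
    return y, h

# illustrative parameter values satisfying a0 > 0, 0 < a1 < 1
y, h = simulate_arch1(500, beta=1.0, a0=0.5, a1=0.5)
```

Because a0 > 0 and the squared error is nonnegative, every simulated h_t is positive, exactly as the parameter restrictions are meant to guarantee.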

ARCH is treated as a special case of a more general model in gretl called GARCH. GARCH stands for generalized autoregressive conditional heteroskedasticity, and it adds lagged values of the variance itself, h_{t-p}, to (14.3). The GARCH(1,1) model is:

    y_t = β + e_t
    e_t | I_{t-1} ~ N(0, h_t)

    h_t = δ + α_1 e_{t-1}² + β_1 h_{t-1}       (14.4)

The difference between ARCH (14.3) and its generalization (14.4) is the term β_1 h_{t-1}, a function of the lagged variance. In higher-order GARCH(p, q) models, q refers to the number of lags of e_t and p refers to the number of lags of h_t to include in the model of the regression's variance.
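The GARCH(1,1) recursion extends the earlier sketch by carrying the lagged variance forward. Again this is an illustrative Python simulation with assumed parameter values, not gretl output:

```python
import random

def simulate_garch11(n, beta, delta, a1, b1, seed=7):
    """Simulate y_t = beta + e_t with GARCH(1,1) errors:
    h_t = delta + a1 * e_{t-1}^2 + b1 * h_{t-1}."""
    rng = random.Random(seed)
    # start the recursion at the unconditional variance delta/(1 - a1 - b1)
    h_prev = delta / (1.0 - a1 - b1)
    e_prev = 0.0
    y = []
    for _ in range(n):
        h_t = delta + a1 * e_prev ** 2 + b1 * h_prev  # lagged error AND lagged variance
        e_t = rng.gauss(0.0, h_t ** 0.5)
        y.append(beta + e_t)
        e_prev, h_prev = e_t, h_t
    return y

# illustrative values; a1 + b1 < 1 keeps the variance stable
y = simulate_garch11(500, beta=1.0, delta=0.1, a1=0.1, b1=0.8)
```

The only change relative to the ARCH(1) code is the extra b1 * h_prev term, which is precisely what (14.4) adds to (14.3).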

To open the dialog for estimating ARCH and GARCH in gretl choose Model>Time series>GARCH from the main gretl window.1 This reveals the dialog box where you specify the model (Figure 14.1). To estimate the ARCH(1) model, you’ll place the time-series r into the dependent variable box and set q=1 and p=0. This yields the results:

Model 1: GARCH, using observations 1-500
Dependent variable: r
Standard errors based on Hessian

                 Coefficient   Std. Error      z       p-value
    const         1.06394      0.0399241    26.6491    0.0000
    alpha(0)      0.642139     0.0648195     9.9066    0.0000
    alpha(1)      0.569347     0.0913142     6.2350    0.0000

    S.D. dependent var     1.078294
    Log-likelihood        -740.7932
    Schwarz criterion      1506.445

Unconditional error variance = 1.49108
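The unconditional error variance reported by gretl follows directly from (14.3): when 0 < α_1 < 1, the long-run variance of an ARCH(1) error is α_0/(1 − α_1). Plugging in the estimates from the output above reproduces the figure:

```python
# estimates of alpha(0) and alpha(1) from the model output above
a0_hat = 0.642139
a1_hat = 0.569347

# unconditional variance of an ARCH(1) error: alpha_0 / (1 - alpha_1)
uncond_var = a0_hat / (1.0 - a1_hat)
print(round(uncond_var, 5))  # → 1.49108
```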

In a later version of gretl, an ARCH option has been added. You can use this as well, but the answer you get will be slightly different due to differences in the method used to estimate the model.

You will notice that the coefficient estimates and standard errors for the ARCH(1) and GARCH(1,1) models are quite close to those in chapter 14 of your textbook. To obtain these, you will have to change the default variance-covariance computation using set garch_vcv op before running the script. Although this gets you close to the results in POE4, using garch_vcv op is not usually recommended; just use the gretl default, set garch_vcv unset.

The standard errors and t-ratios often vary a bit, depending on which software and numerical techniques are used. This is the nature of maximum likelihood estimation of the model's parameters. With maximum likelihood, the model's parameters are estimated using numerical optimization techniques. All of the techniques usually get you to the same parameter estimates, i.e., those that maximize the likelihood function; but they do so in different ways. Each numerical algorithm arrives at the solution iteratively, based on reasonable starting values and the method used to measure the curvature of the likelihood function at each round of estimates. Once the algorithm finds the maximum of the function, the curvature measure is reused as an estimate of the variance-covariance matrix. Since curvature can be measured in slightly different ways, the routine will produce slightly different estimates of standard errors.
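A tiny Python illustration of this point, using a deliberately simple model rather than GARCH: for the mean of a N(μ, 1) sample, the inverse-Hessian and outer-product-of-the-gradient (OP) curvature measures give the same estimate in expectation but differ slightly in any finite sample.

```python
import random, math

# draw a sample and compute the MLE of mu for N(mu, 1) (sigma known)
rng = random.Random(1)
x = [rng.gauss(5.0, 1.0) for _ in range(200)]
n = len(x)
mu_hat = sum(x) / n

# curvature measure 1: inverse Hessian of the log-likelihood -> Var = 1/n
se_hessian = math.sqrt(1.0 / n)

# curvature measure 2: inverse outer product of the scores s_i = x_i - mu_hat
op = sum((xi - mu_hat) ** 2 for xi in x)
se_op = math.sqrt(1.0 / op)

print(se_hessian, se_op)  # close, but not identical
```

The same phenomenon, on a larger scale, is why different garch_vcv settings in gretl produce slightly different standard errors around identical point estimates.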

Gretl gives you a way to choose which method you would like to use for estimating the variance-covariance matrix. And, as expected, this choice will produce different standard errors and t-ratios. The set garch_vcv command allows you to choose among five alternatives: hessian, im (information matrix), op (outer product matrix), qml (QML estimator), or bw (Bollerslev-Wooldridge). If unset is given, the default is restored, which in this case is the Hessian; if the --robust option is given for the garch command, QML is used.

garch

Arguments:  p q ; depvar [ indepvars ]

Options:    --robust (robust standard errors)
            --verbose (print details of iterations)
            --vcv (print covariance matrix)
            --nc (do not include a constant)
            --stdresid (standardize the residuals)
            --fcp (use Fiorentini, Calzolari, Panattoni algorithm)
            --arma-init (initial variance parameters from ARMA)

The series are characterized by random, rapid changes and are said to be volatile. The volatility seems to change over time as well. For instance, the U.S. stock returns index (NASDAQ) experiences a relatively sedate period from 1992 to 1996. Then stock returns become much more volatile until early 2004. Volatility increases again at the end of the sample. The other series exhibit similar periods of relative calm followed by increased volatility.

A histogram graphs the empirical distribution of a variable. In gretl the freq command generates a histogram. A curve from a normal distribution is overlaid using the --normal option, and the Doornik-Hansen test for normality is performed. A histogram for the ALLORDS series appears below in Figure 14.1.

The series is leptokurtic. That means it has many observations around the average and a relatively large number of observations that are far from average; the center of the histogram has a high peak and the tails are relatively heavy compared to the normal. The normality test has a p-value of 0.0007 < 0.05 and is significant at the 5% level.
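Leptokurtosis is in fact a built-in feature of ARCH errors: even with conditionally normal innovations, the changing variance fattens the tails of the unconditional distribution. A quick Python check on simulated data (illustrative parameter values, not the ALLORDS series):

```python
import random

def sample_kurtosis(x):
    """Fourth standardized moment; equals 3 for a normal distribution."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m4 / m2 ** 2

rng = random.Random(3)

# ARCH(1) errors with normal innovations and assumed a0 = a1 = 0.5
e_prev, e = 0.0, []
for _ in range(5000):
    h = 0.5 + 0.5 * e_prev ** 2
    e_prev = rng.gauss(0.0, h ** 0.5)
    e.append(e_prev)

# an i.i.d. normal sample of the same size, for comparison
normal = [rng.gauss(0.0, 1.0) for _ in range(5000)]

print(sample_kurtosis(e), sample_kurtosis(normal))
```

The ARCH series shows kurtosis well above 3, while the plain normal sample stays near 3, mirroring the peaked, heavy-tailed histogram described above.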