A Heteroskedasticity Function

A commonly used model for the error variance is the multiplicative heteroskedasticity model. It appears below in equation 8.7.

σᵢ² = exp(α₁ + α₂zᵢ)    (8.7)

The variable zᵢ is an independent explanatory variable that determines how the error variance changes with each observation. You can add additional zs if you believe that the variance is related to them (e.g., σᵢ² = exp(α₁ + α₂zᵢ₂ + α₃zᵢ₃)). It's best to keep the number of zs relatively small. The idea is to estimate the parameters of (8.7) using least squares and then use the predictions as weights to transform the data.

In terms of the food expenditure model, let zᵢ = ln(incomeᵢ). Then, taking the natural logarithm of both sides of (8.7) and adding a random error term, vᵢ, yields

ln(σᵢ²) = α₁ + α₂zᵢ + vᵢ    (8.8)

To estimate the αs, first estimate the linear regression (8.2) (or, more generally, 8.1) using least squares and save the residuals. Square the residuals, then take the natural log; this forms an estimate of ln(σᵢ²) to use as the dependent variable in a regression. Now, add a constant and the zs to the right-hand side of the model and estimate the αs using least squares.

The regression model to estimate is

ln(êᵢ²) = α₁ + α₂zᵢ + vᵢ    (8.9)

where êᵢ are the least squares residuals from the estimation of equation (8.1). The predictions from this regression can then be transformed using the exponential function to provide weights for weighted least squares.
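The two-step procedure can be sketched numerically. The following is a minimal illustration on simulated data (the variable names and the simulated design are our own, not from the text): generate data whose error variance follows exp(α₁ + α₂zᵢ), run OLS, and regress the log of the squared residuals on a constant and z, as in (8.9). The slope estimate is consistent for α₂; the intercept absorbs the mean of ln(χ²₁) ≈ −1.27 and is therefore biased downward, which is harmless here because a constant scaling of the weights does not change the WLS point estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
a1, a2 = 1.0, 0.5                    # true alphas in sigma_i^2 = exp(a1 + a2*z_i)
z = rng.uniform(0.0, 3.0, n)
sigma = np.exp(0.5 * (a1 + a2 * z))  # sd_i = exp((a1 + a2*z_i)/2)
x = rng.uniform(0.0, 10.0, n)
y = 2.0 + 3.0 * x + sigma * rng.standard_normal(n)

# Step 1: OLS of y on a constant and x; save the residuals
X = np.column_stack([np.ones(n), x])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b_ols

# Step 2: regress ln(e^2) on a constant and z (equation 8.9)
Z = np.column_stack([np.ones(n), z])
alpha_hat = np.linalg.lstsq(Z, np.log(e**2), rcond=None)[0]
print(alpha_hat)  # slope near a2 = 0.5; intercept shifted down by ~1.27
```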

For the food expenditure example, the gretl code appears below.

1 ols food_exp const income
2 series lnsighat = log($uhat*$uhat)
3 series z = ln(income)
4 ols lnsighat const z
5 series predsighat = exp($yhat)
6 series w = 1/predsighat
7 wls w food_exp const income

The first line estimates the linear regression using least squares. Next, a new variable (lnsighat) is generated as the natural log of the squared residuals from the preceding regression. Then, generate z as the natural log of income. Estimate the skedasticity function using least squares, take the predicted values ($yhat), and use these in the exponential function (i.e., predsighat = exp($yhat), an estimate of σᵢ²). The reciprocals of these serve as weights for generalized least squares. Remember, gretl automatically takes the square roots of w for you in the wls command.
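For readers who want to trace the gretl script line by line, here is a Python/NumPy analogue. This is a sketch only: the food expenditure data are not included here, so a roughly comparable dataset is simulated, and the wls step is written out explicitly the way gretl performs it, multiplying each observation by √wᵢ before running least squares.

```python
import numpy as np

# Simulated stand-in for the food expenditure data (our own design)
rng = np.random.default_rng(7)
n = 200
income = rng.uniform(1.0, 30.0, n)
sigma = np.exp(0.5 * (1.0 + 1.5 * np.log(income)))
food_exp = 80.0 + 10.0 * income + sigma * rng.standard_normal(n)

X = np.column_stack([np.ones(n), income])
b = np.linalg.lstsq(X, food_exp, rcond=None)[0]  # ols food_exp const income
uhat = food_exp - X @ b
lnsighat = np.log(uhat**2)                       # series lnsighat = log($uhat*$uhat)
z = np.log(income)                               # series z = ln(income)
Z = np.column_stack([np.ones(n), z])
a = np.linalg.lstsq(Z, lnsighat, rcond=None)[0]  # ols lnsighat const z
predsighat = np.exp(Z @ a)                       # series predsighat = exp($yhat)
w = 1.0 / predsighat                             # series w = 1/predsighat

# wls w food_exp const income: multiply each row by sqrt(w), then run OLS
sw = np.sqrt(w)
b_wls = np.linalg.lstsq(X * sw[:, None], food_exp * sw, rcond=None)[0]
print(b_wls)  # estimates of the constant and the income slope
```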

The output is:

WLS, using observations 1-40
Dependent variable: food_exp
Variable used as weight: w

             coefficient   std. error   t-ratio   p-value
  const        76.0538      9.71349      7.830    1.91e-09  ***
  income       10.6335      0.971514    10.95     2.62e-13  ***

Statistics based on the weighted data:

Sum squared resid    90.91135   S.E. of regression   1.546740
R-squared            0.759187   Adjusted R-squared   0.752850
F(1, 38)             119.7991   P-value(F)           2.62e-13
Log-likelihood      -73.17765   Akaike criterion     150.3553
Schwarz criterion    153.7331   Hannan-Quinn         151.5766

Statistics based on the original data:

Mean dependent var   283.5735   S.D. dependent var   112.6752
Sum squared resid    304869.6   S.E. of regression   89.57055

The model was estimated by least squares with the HCCME standard errors in section 8.1. The parameter estimates from FGLS are not much different from those. However, the standard errors are much smaller now. The HC3 standard error for the slope was 1.88 and is now only 0.97. The constant is being estimated more precisely as well. So there are some potential benefits from using a more precise estimator of the parameters.
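The efficiency gain just described can be checked on simulated data. The sketch below (again our own simulated design, not the food expenditure data) computes the HC3-robust standard error from OLS and the conventional standard error from FGLS for the same slope; with strong multiplicative heteroskedasticity, the FGLS standard error comes out smaller.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.uniform(1.0, 30.0, n)
sigma = np.exp(0.5 * (0.5 + 2.0 * np.log(x)))   # strong multiplicative form
y = 80.0 + 10.0 * x + sigma * rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

# OLS with HC3 standard errors
XtXi = np.linalg.inv(X.T @ X)
b = XtXi @ X.T @ y
e = y - X @ b
h = np.einsum('ij,jk,ik->i', X, XtXi, X)        # leverage values h_i
V_hc3 = XtXi @ X.T @ np.diag(e**2 / (1 - h)**2) @ X @ XtXi
se_hc3 = np.sqrt(np.diag(V_hc3))

# FGLS using the estimated skedasticity function
Z = np.column_stack([np.ones(n), np.log(x)])
a = np.linalg.lstsq(Z, np.log(e**2), rcond=None)[0]
w = 1.0 / np.exp(Z @ a)
Xw, yw = X * np.sqrt(w)[:, None], y * np.sqrt(w)
XwtXwi = np.linalg.inv(Xw.T @ Xw)
b_fgls = XwtXwi @ Xw.T @ yw
ew = yw - Xw @ b_fgls
s2 = ew @ ew / (n - 2)
se_fgls = np.sqrt(s2 * np.diag(XwtXwi))
print(se_hc3[1], se_fgls[1])  # slope SEs: robust OLS vs. FGLS
```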
