When variables interact, the marginal effect of one variable on the mean of another has to be computed manually using calculus. Taking the partial derivative of average sales with respect to advertising produces the marginal effect on average sales of an increase in advertising.
The magnitude of the marginal effect depends on the parameters as well as on the level of advertising. In the example, the marginal effect is evaluated at two points, advert = 0.5 and advert = 2. The code is:
scalar me1 = $coeff(advert)+2*(0.5)*$coeff(a2)
scalar me2 = $coeff(advert)+2*2*$coeff(a2)
printf "\nThe marginal effect at $500 (advert=.5) is %.3f\
 and at $2000 (advert=2) is %.3f\n", me1, me2

and the result is:

The marginal effect at $500 (advert=.5) is 9.383 and at $2000 (advert=2) is 1.079
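The same arithmetic can be sketched in plain Python. The coefficient values below are assumptions chosen to be consistent with the output shown above, not values estimated here; in gretl they would come from the `$coeff` accessor after `ols`.

```python
# Hypothetical coefficient estimates for the quadratic advertising model
# (assumed values, consistent with the reported marginal effects).
b_advert = 12.1512   # coefficient on advert
b_a2 = -2.76796      # coefficient on advert^2 (a2)

def marginal_effect(advert):
    """d(sales)/d(advert) = b_advert + 2*b_a2*advert for the quadratic model."""
    return b_advert + 2 * b_a2 * advert

me1 = marginal_effect(0.5)
me2 = marginal_effect(2.0)
print(f"The marginal effect at $500 (advert=.5) is {me1:.3f} "
      f"and at $2000 (advert=2) is {me2:.3f}")
```

Because the model is quadratic in advert, the marginal effect declines linearly as advertising rises, which is why the effect at advert = 2 is so much smaller than at advert = 0.5.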
Using examples from Hill et al. (2011), a model of grouped heteroskedasticity is estimated and a Goldfeld-Quandt test is performed to determine whether the two sample subsets have the same error variance. The error variance associated with the first subset is \sigma_1^2 and that for the other subset is \sigma_2^2.
The null and alternative hypotheses are

H_0: \sigma_1^2 = \sigma_2^2
H_1: \sigma_1^2 \neq \sigma_2^2
Estimating both subsets separately and obtaining the estimated error variances allow us to construct the following ratio:
F = \frac{\hat{\sigma}_1^2 / \sigma_1^2}{\hat{\sigma}_2^2 / \sigma_2^2} \sim F_{df_1, df_2}    (8.3)
where df_1 = N_1 - K_1 is from the first subset and df_2 = N_2 - K_2 is from the second subset. Under the null hypothesis that the two variances are equal,
F = \frac{\hat{\sigma}_1^2}{\hat{\sigma}_2^2} \sim F_{df_1, df_2}
This is just the ratio of the estimated variances from the two subset regressions.
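A minimal, stdlib-only Python sketch of this computation follows; the residual series and regressor counts are made-up illustrative numbers, and `gq_stat` is a hypothetical helper name, not a gretl or library function.

```python
def sse(resids):
    """Sum of squared residuals."""
    return sum(e * e for e in resids)

def gq_stat(resid1, k1, resid2, k2):
    """Goldfeld-Quandt statistic: ratio of the two estimated error variances.

    resid1, resid2: residuals from the two subset regressions (made up here).
    k1, k2: number of estimated parameters in each subset regression.
    """
    df1 = len(resid1) - k1
    df2 = len(resid2) - k2
    s1 = sse(resid1) / df1   # sigma-hat^2 for subset 1
    s2 = sse(resid2) / df2   # sigma-hat^2 for subset 2
    return s1 / s2, df1, df2

# Made-up residuals: subset 1 is visibly more variable than subset 2.
resid1 = [1.0] * 12
resid2 = [0.5] * 12
F, df1, df2 = gq_stat(resid1, 2, resid2, 2)
print(F, df1, df2)
```

The statistic would then be compared against the F critical value with (df_1, df_2) degrees of freedom; a value far from 1 is evidence against equal subset variances.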
One use of regression analysis is to “explain” variation in the dependent variable as a function of the independent variables. A summary statistic that is used for this purpose is the coefficient of determination, also known as R2.
There are a number of different ways of obtaining R2 in gretl. The simplest way to get R2 is to read it directly off of gretl’s regression output. This is shown in Figure 4.3. Another way, and probably the most difficult, is to compute it manually using the analysis of variance (ANOVA) table. The ANOVA table can be produced after a regression by choosing Analysis>ANOVA from the model window’s pull-down menu as shown in Figure 4.1. Or, one can simply use the --anova option to ols to produce the table from the console or as part of a script.
ols income const i...
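The manual ANOVA route can be sketched in Python: R2 is one minus the residual sum of squares over the total sum of squares. The data below are made up for illustration, and `r_squared` is a hypothetical helper, not a gretl function.

```python
def r_squared(y, yhat):
    """R2 from the ANOVA decomposition: 1 - SSE/SST.

    y: observed values of the dependent variable (made-up data here).
    yhat: fitted values from the regression.
    """
    ybar = sum(y) / len(y)
    sst = sum((yi - ybar) ** 2 for yi in y)               # total sum of squares
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))  # residual sum of squares
    return 1 - sse / sst

y = [1.0, 2.0, 3.0, 4.0]
print(r_squared(y, y))            # perfect fit
print(r_squared(y, [2.5] * 4))    # fitting only the mean
```

A perfect fit gives R2 = 1, while a model that fits nothing beyond the mean gives R2 = 0, the same decomposition gretl reports with the --anova option.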
Choosing an appropriate model is part art and part science. Omitting relevant variables that are correlated with regressors causes least squares to be biased and inconsistent. Including irrelevant variables reduces the precision of least squares. So, from a purely technical point of view, it is important to estimate a model that has all of the necessary relevant variables and none that are irrelevant. It is also important to use a suitable functional form. There is no set of mechanical rules that one can follow to ensure that the model is correctly specified, but there are a few things you can do to increase your chances of having a suitable model to use for decision-making.
Here are a few rules of thumb:
The correlogram can also be used to check the assumption that the model errors have zero covariance, an important assumption in the proof of the Gauss-Markov theorem. The example that illustrates this is based on the Phillips curve that relates inflation and unemployment. The data used are from Australia and reside in the phillips-aus.gdt dataset.
The model to be estimated is
inf_t = \beta_1 + \beta_2 \Delta u_t + e_t    (9.6)
The data are quarterly and begin in 1987:1. A time-series plot of both series is shown below in Figure 9.10. The graphs show some evidence of serial correlation in both series.
The model is estimated by least squares and the residuals are plotted ag...
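What a correlogram plots at each lag can be sketched as a sample autocorrelation of the residuals. The residual series below is made up, and `autocorr` is a hypothetical helper, not gretl's corrgm command.

```python
def autocorr(e, lag=1):
    """Sample autocorrelation of a series at the given lag.

    e: residual series (made-up data in the example below).
    """
    ebar = sum(e) / len(e)
    num = sum((e[t] - ebar) * (e[t - lag] - ebar) for t in range(lag, len(e)))
    den = sum((et - ebar) ** 2 for et in e)
    return num / den

# Made-up, strongly alternating residuals: pronounced negative
# first-order autocorrelation.
e = [1.0, -1.0] * 5
print(autocorr(e, lag=1))
```

If the zero-covariance assumption held, the sample autocorrelations at all lags would be close to zero; spikes outside the correlogram's confidence band are evidence of serially correlated errors.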