# T-Tests, Critical Values, and p-values

In section 3.4 the GUI was used to obtain test statistics, critical values, and p-values. However, it is often much easier to use the genr or scalar commands, either from the console or in a script, to compute these. In this section, scripts are used to test various hypotheses about the sales model for Big Andy.

## Significance Tests

Multiple regression models include several independent variables because one believes that each has an independent effect on the mean of the dependent variable. To confirm this belief it is customary to perform tests of individual parameter significance. If a parameter is zero, then the corresponding variable does not belong in the model. In gretl the t-ratio associated with the null hypothesis that βk = 0 against the alternative βk ≠ 0 is printed in the regression results alongside the associated p-value. For the sake of completeness, these can be computed manually using a script as found below. For t-ratios and one- and two-sided hypothesis tests the appropriate commands are:

```
ols sales const price advert
scalar t1 = ($coeff(price)-0)/$stderr(price)
scalar t2 = ($coeff(advert)-0)/$stderr(advert)
printf "\nThe t-ratio for H0: b2=0 is = %.3f.\n\
The t-ratio for H0: b3=0 is = %.3f.\n", t1, t2
```

The results are shown in Figure 5.5. As you can see, the automatic results and the manually generated ones match perfectly.

Figure 5.5: The usual model estimation output produced by gretl prints the t-ratios needed for parameter significance by default. These match the manual computation.
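The same arithmetic can be sketched outside gretl. The following Python snippet is an illustration only, using simulated data rather than Big Andy's actual sample; it builds the OLS t-ratios by hand exactly as the script above does, dividing each coefficient by its standard error:

```python
import numpy as np
from scipy import stats

# Illustrative data (NOT Big Andy's sample): simulate a sales-style
# regression with two regressors, price and advert.
rng = np.random.default_rng(42)
n = 75
price = rng.uniform(4, 7, n)
advert = rng.uniform(0.5, 3, n)
sales = 119 - 7.9 * price + 1.86 * advert + rng.normal(0, 4.9, n)

# OLS by hand: X includes a constant, as in `ols sales const price advert`.
X = np.column_stack([np.ones(n), price, advert])
b, *_ = np.linalg.lstsq(X, sales, rcond=None)
resid = sales - X @ b
df = n - X.shape[1]                       # 75 - 3 = 72 degrees of freedom
sigma2 = resid @ resid / df
vcv = sigma2 * np.linalg.inv(X.T @ X)     # coefficient covariance matrix
se = np.sqrt(np.diag(vcv))

# t-ratios for H0: beta_k = 0 and their two-sided p-values
t = b / se
p = 2 * stats.t.sf(np.abs(t), df)
for name, tk, pk in zip(["const", "price", "advert"], t, p):
    print(f"t-ratio for {name}: {tk:.3f} (p = {pk:.4f})")
```

These are the same quantities gretl reports in its estimation output, so for real data the loop would reproduce the printed t-ratios and p-values.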

One of the advantages of doing t-tests manually is that you can test hypotheses other than parameter significance. You can test the hypothesis that a parameter is different from some value other than zero, test a one-sided hypothesis, or test a hypothesis involving a linear combination of parameters.

## One-tail Alternatives

If a decrease in price increases sales revenue then we can conclude that demand is elastic. So, if β2 ≥ 0 demand is inelastic and if β2 < 0 it is elastic. To test H0: β2 ≥ 0 versus H1: β2 < 0, the test statistic is the usual t-ratio.

```
scalar t1 = ($coeff(price)-0)/$stderr(price)
pvalue t $df t1
```

The rejection region for this test lies to the left of −tc, the α-level critical value from the t-distribution. This is a perfect opportunity to use the pvalue function. The result is:

```
t(72): area to the right of -7.21524 =~ 1 (to the left: 2.212e-010)
(two-tailed value = 4.424e-010; complement = 1)
```
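As a cross-check, the same tail areas can be computed with scipy's Student t distribution; this is an illustrative sketch that simply plugs in the t-ratio and degrees of freedom reported above:

```python
from scipy import stats

t1, df = -7.21524, 72            # values reported by gretl above

left = stats.t.cdf(t1, df)       # area to the left: the one-sided p-value
right = stats.t.sf(t1, df)       # area to the right (essentially 1 here)
two_tailed = 2 * stats.t.sf(abs(t1), df)

print(f"area to the left of {t1}: {left:.3e}")
print(f"two-tailed value: {two_tailed:.3e}")
```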

You can see that the area to the left of −7.21524 is close to zero. That is less than the 5% nominal level of the test, and therefore we reject the null hypothesis that β2 is non-negative.

A test of whether a dollar of additional advertising will generate at least a dollar's worth of sales is expressed parametrically as H0: β3 ≤ 1 versus H1: β3 > 1. This requires a new t-ratio, and again we use the pvalue function to conduct the test.

```
scalar t3 = ($coeff(advert)-1)/$stderr(advert)
pvalue t $df t3
```

The results are

```
t(72): area to the right of 1.26257 = 0.105408
(two-tailed value = 0.210817; complement = 0.789183)
```
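This right-tail area can likewise be reproduced with scipy, again plugging in the values reported above as an illustrative cross-check:

```python
from scipy import stats

t3, df = 1.26257, 72             # t-ratio for H0: b3 <= 1 reported above

p_right = stats.t.sf(t3, df)     # area to the right: one-sided p-value
print(f"p-value = {p_right:.6f}")
```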

The rejection region for this alternative hypothesis lies in the right tail, so the p-value is the area to the right of the computed t-ratio, 0.105. At the 5% level of significance, this null hypothesis cannot be rejected.

## Linear Combinations of Parameters

Big Andy's advertiser claims that dropping the price by 20 cents will increase sales more than spending an extra $500 on advertising. This claim can be translated into a parametric hypothesis that can be tested using the sample. If the advertiser is correct then −0.2β2 > 0.5β3. The hypothesis to be tested is:

H0: −0.2β2 − 0.5β3 ≤ 0
H1: −0.2β2 − 0.5β3 > 0

The test statistic is

t = (−0.2 b2 − 0.5 b3) / se(−0.2 b2 − 0.5 b3) ~ t(N−3)

provided the null hypothesis is true. The script is

```
ols sales const price advert --vcv
```
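The standard error of the linear combination uses the coefficient covariance matrix that the --vcv option prints: se(c'b) = sqrt(c'Vc). The following Python sketch shows the arithmetic with a small helper function; the numbers for b and V are placeholders purely for illustration, standing in for the estimates and covariance matrix you would read from gretl's output:

```python
import numpy as np
from scipy import stats

def lincomb_t(c, b, V):
    """t-ratio for H0: c'beta <= 0, given estimates b and covariance V."""
    c = np.asarray(c, dtype=float)
    est = c @ b                      # point estimate of the combination
    se = np.sqrt(c @ V @ c)          # se via the covariance matrix
    return est / se

# Placeholder numbers for illustration only -- take the actual values
# from gretl's --vcv output for (price, advert).
b = np.array([-7.90, 1.86])          # b2 (price), b3 (advert)
V = np.array([[1.20, -0.02],
              [-0.02, 0.47]])        # var/cov of (b2, b3)

c = np.array([-0.2, -0.5])           # weights in -0.2*b2 - 0.5*b3
t = lincomb_t(c, b, V)
df = 72
p = stats.t.sf(t, df)                # right-tail p-value for H1
print(f"t = {t:.3f}, one-sided p = {p:.4f}")
```

The helper generalizes to any linear combination: put the weights in c and supply the full coefficient covariance matrix for the parameters involved.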