Distributed Lags and Dynamic Models
Many economic models have lagged values of the regressors in the regression equation. For example, it takes time to build roads and highways. Therefore, the effect of this public investment on growth in GNP will show up with a lag, and this effect will probably linger on for several years. It takes time before investment in research and development pays off in new inventions, which in turn take time to develop into commercial products. In studying consumption behavior, a change in income may affect consumption over several periods. This is true in the permanent income theory of consumption, where it may take the consumer several periods to determine whether the change in real disposable income was temporary or permanent. For example, is the extra consulting money earned this year going to continue next year? Also, lagged values of real disposable income appear in the regression equation because the consumer takes into account his lifetime earnings in trying to smooth out his consumption behavior. In turn, one's lifetime income may be guessed by looking at past as well as current earnings. In other words, the regression relationship would look like
Yt = α + β0 Xt + β1 Xt-1 + … + βs Xt-s + ut,    t = 1, 2, …, T    (6.1)
where Yt denotes the tth observation on the dependent variable Y, and Xt-s denotes the (t-s)th observation on the independent variable X. α is the intercept and β0, β1, …, βs are the current and lagged coefficients of Xt. Equation (6.1) is known as a distributed lag since it distributes the effect of an increase in income on consumption over s periods. Note that the short-run effect of a unit change in X on Y is given by β0, while the long-run effect of a unit change in X on Y is (β0 + β1 + … + βs).
Suppose that you observe Xt from 1959 to 2007. Xt-1 is the same variable but for the previous period, i.e., 1958-2006. Since 1958 is not available in this data, the software you are using will start from 1959 for Xt-1, and end at 2006. This means that when we lag once, the current Xt series will have to start in 1960 and end in 2007. For practical purposes, this means that when we lag once we lose one observation from the sample. So if we lag s periods, we lose s observations. Furthermore, we are estimating one extra β with every lag. Therefore, there is double jeopardy with respect to loss of degrees of freedom: the number of observations falls (because we are lagging the same series), and the number of parameters to be estimated increases with every lagged variable introduced. Besides the loss of degrees of freedom, the regressors in (6.1) are likely to be highly correlated with each other. In fact, most economic time series are usually trended and very highly correlated with their lagged values. This introduces the problem of multicollinearity among the regressors and, as we saw in Chapter 4, the higher the multicollinearity among these regressors, the lower is the reliability of the regression estimates.
In this model, OLS is still BLUE because the classical assumptions are still satisfied. All we have done in (6.1) is introduce the additional regressors (Xt-1, …, Xt-s). These regressors are uncorrelated with the disturbances since they are lagged values of Xt, which are by assumption not correlated with ut for every t.
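The bookkeeping of lagging can be seen in a short sketch. The following Python fragment (illustrative, not part of the text; the names `make_lags` and `x` are assumptions) builds the rows of lagged regressors in (6.1) and shows the s observations lost at the start of the sample:

```python
# Sketch: building the lagged regressors [X_t, X_{t-1}, ..., X_{t-s}] of (6.1).
# Hypothetical data and names, for illustration only.
def make_lags(x, s):
    """Return rows [x[t], x[t-1], ..., x[t-s]] for t = s, ..., T-1.
    Lagging s periods costs s observations at the start of the sample."""
    return [[x[t - i] for i in range(s + 1)] for t in range(s, len(x))]

x = list(range(1959, 2008))   # stand-in for 49 annual observations, 1959-2007
rows = make_lags(x, 5)
print(len(x), len(rows))      # 49 44: lagging 5 periods loses 5 observations
```

With s = 5 and 49 annual observations, only 44 usable rows remain, which matches the sample sizes reported in the empirical example below.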
B. H. Baltagi, Econometrics, Springer Texts in Business and Economics, DOI 10.1007/978-3-642-20059-5_6, © Springer-Verlag Berlin Heidelberg 2011
In order to reduce the degrees of freedom problem, one could impose more structure on the β's. One of the simplest forms imposed on these coefficients is the linear arithmetic lag (see Figure 6.1), which can be written as
βi = [(s + 1) - i]β    for i = 0, 1, …, s    (6.2)
The lagged coefficients of X follow a linear distributed lag declining arithmetically from (s + 1)β for Xt to β for Xt-s. Substituting (6.2) in (6.1) one gets
Yt = α + Σ_{i=0}^{s} βi Xt-i + ut = α + β Σ_{i=0}^{s} [(s + 1) - i] Xt-i + ut    (6.3)
where the latter equation can be estimated by the regression of Yt on a constant and Zt, where
Zt = Σ_{i=0}^{s} [(s + 1) - i] Xt-i
This Zt can be calculated given s and Xt. Hence, we have reduced the estimation of β0, β1, …, βs to the estimation of just one β. Once β is obtained, βi can be deduced from (6.2), for i = 0, 1, …, s. Despite its simplicity, this lag is too restrictive to impose on the regression and is not usually used in practice.
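As a sketch of this construction, the following Python fragment (hypothetical names, not from the text) computes Zt from (6.2)-(6.3) and recovers the implied βi's from a single estimated β:

```python
# Sketch of the linear arithmetic lag (6.2)-(6.3): collapse beta_0, ..., beta_s
# into one beta by regressing Y on Z_t = sum_{i=0}^{s} [(s+1)-i] X_{t-i}.
# Names are illustrative assumptions.
def arithmetic_z(x, s):
    """Z_t for t = s, ..., T-1, as defined below equation (6.3)."""
    return [sum((s + 1 - i) * x[t - i] for i in range(s + 1))
            for t in range(s, len(x))]

def implied_betas(beta, s):
    """Recover beta_i = [(s+1) - i] * beta from (6.2): an arithmetic decline."""
    return [(s + 1 - i) * beta for i in range(s + 1)]
```

For example, with s = 5 the recovered coefficients run from 6β for Xt down to β for Xt-5, which is exactly the pattern imposed in the empirical example below.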
Alternatively, one can think of βi = f(i) for i = 0, 1, …, s. If f(i) is a continuous function over a closed interval, then it can be approximated by an rth degree polynomial,
f(i) = a0 + a1 i + … + ar i^r
For example, if r = 2, then
βi = a0 + a1 i + a2 i²    for i = 0, 1, 2, …, s
so that
β0 = a0
β1 = a0 + a1 + a2
β2 = a0 + 2a1 + 4a2
...
βs = a0 + s a1 + s² a2
Once a0, a1, and a2 are estimated, β0, β1, …, βs can be deduced. In fact, substituting βi = a0 + a1 i + a2 i² in (6.1) we get
Yt = α + Σ_{i=0}^{s} (a0 + a1 i + a2 i²) Xt-i + ut    (6.4)
   = α + a0 Σ_{i=0}^{s} Xt-i + a1 Σ_{i=0}^{s} i Xt-i + a2 Σ_{i=0}^{s} i² Xt-i + ut
This last equation shows that α, a0, a1 and a2 can be estimated from the regression of Yt on a constant, Z0 = Σ_{i=0}^{s} Xt-i, Z1 = Σ_{i=0}^{s} i Xt-i and Z2 = Σ_{i=0}^{s} i² Xt-i. This procedure was proposed by Almon (1965) and is known as the Almon lag. One of the problems with this procedure is the choice of s and r, the number of lags on Xt and the degree of the polynomial, respectively. In practice, neither is known. Davidson and MacKinnon (1993) suggest starting with a maximum reasonable lag s* that is consistent with the theory and then, based on the unrestricted regression given in (6.1), checking whether the fit of the model deteriorates as s* is reduced. Some criteria suggested for this choice include: (i) maximizing R̄2; (ii) minimizing Akaike's (1973) Information Criterion (AIC) with respect to s, given by AIC(s) = (RSS/T)e^{2s/T}; or (iii) minimizing Schwarz's (1978) Bayesian Information Criterion (BIC) with respect to s, given by BIC(s) = (RSS/T)T^{s/T}, where RSS denotes the residual sum of squares. Note that the AIC and BIC criteria, like R̄2, reward good fit but penalize the loss of degrees of freedom associated with a high value of s. These criteria are printed by most regression software including SHAZAM, EViews and SAS. Once the lag length s is chosen, it is straightforward to determine r, the degree of the polynomial. Start with a high value of r and construct the Z variables as described in (6.4). If r = 4 is the highest degree polynomial chosen and a4, the coefficient of Z4 = Σ_{i=0}^{s} i⁴ Xt-i, is insignificant, drop Z4 and run the regression for r = 3. Stop if the coefficient of Z3 is significant; otherwise drop Z3 and run the regression for r = 2.
Applied researchers usually impose end point constraints on this Almon lag. A near end point constraint means that β-1 = 0 in equation (6.1). For equation (6.4), this constraint yields the following restriction on the second degree polynomial in a's: β-1 = f(-1) = a0 - a1 + a2 = 0. This restriction allows us to solve for a0 given a1 and a2. In fact, substituting a0 = a1 - a2 into (6.4), the regression becomes
Yt = α + a1(Z1 + Z0) + a2(Z2 - Z0) + ut    (6.5)
and once a1 and a2 are estimated, a0 is deduced, and hence the βi's. This restriction essentially states that Xt+1 has no effect on Yt. This may not be a plausible assumption, especially in our consumption example, where income next year enters the calculation of permanent income or lifetime earnings. A more plausible assumption is the far end point constraint, where βs+1 = 0. This means that Xt-(s+1) does not affect Yt. The further you go back in time, the less is the effect on the current period. All we have to be sure of is that we have gone far back enough
Figure 6.2 A Polynomial Lag with End Point Constraints 
to reach an insignificant effect. This far end point constraint is imposed by removing Xt-(s+1) from the equation, as we have done above. But some researchers impose this restriction on βi = f(i), i.e., by restricting βs+1 = f(s + 1) = 0. For r = 2 this yields the following constraint: a0 + (s + 1)a1 + (s + 1)²a2 = 0. Solving for a0 and substituting in (6.4), the constrained regression becomes
Yt = α + a1[Z1 - (s + 1)Z0] + a2[Z2 - (s + 1)²Z0] + ut    (6.6)
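Recovering the βi's after estimating either constrained regression can be sketched in a few lines of Python (hypothetical helper names). Each end point constraint solves for a0 in terms of a1 and a2, so the fitted polynomial vanishes at i = -1 or at i = s + 1 by construction:

```python
# Sketch: recovering the lag coefficients after estimating (6.5) or (6.6).
# Illustrative names; a1, a2 would come from the constrained regression.
def near_endpoint_betas(a1, a2, s):
    a0 = a1 - a2                               # from f(-1) = a0 - a1 + a2 = 0
    return [a0 + a1 * i + a2 * i * i for i in range(s + 1)]

def far_endpoint_betas(a1, a2, s):
    a0 = -(s + 1) * a1 - (s + 1) ** 2 * a2     # from f(s+1) = 0
    return [a0 + a1 * i + a2 * i * i for i in range(s + 1)]
```

As a check, plugging i = -1 into the near end point polynomial, or i = s + 1 into the far end point polynomial, gives exactly zero for any a1 and a2.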
One can also impose both end point constraints and reduce the regression to the estimation of one a rather than three a's. Note that β-1 = βs+1 = 0 can be imposed by not including Xt+1 and Xt-(s+1) in the regression relationship. However, these end point restrictions impose the additional requirement that the polynomial on which the a's lie should pass through zero at i = -1 and i = (s + 1); see Figure 6.2.
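Imposing both constraints with r = 2 forces the quadratic to have roots at i = -1 and i = s + 1; substituting both constraints into βi = a0 + a1 i + a2 i² gives βi = a2(i + 1)(i - s - 1), with a2 the single free parameter. A minimal sketch (illustrative name, not from the text):

```python
# Sketch: both end point constraints with r = 2 leave one free parameter a2.
# The quadratic has roots at i = -1 and i = s + 1, so
#   beta_i = a2 * (i + 1) * (i - s - 1).
# A negative a2 yields the familiar hump of positive lag coefficients.
def both_endpoint_betas(a2, s):
    return [a2 * (i + 1) * (i - s - 1) for i in range(s + 1)]
```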
These additional restrictions on the polynomial may not necessarily be true. In other words, the polynomial could intersect the X-axis at points other than -1 or (s + 1). Imposing a restriction, whether true or not, reduces the variance of the estimates, but introduces bias if the restriction is untrue. The variance reduction is intuitive, because the restriction provides additional information which should increase the reliability of the estimates. The trade-off between the reduction in variance and the introduction of bias naturally leads to Mean Square Error criteria that help determine whether these restrictions should be imposed; see Wallace (1972). These criteria are beyond the scope of this chapter. In general, one should be careful in the use of restrictions that may not be plausible or even valid. In fact, one should always test these restrictions before using them. See Schmidt and Waud (1975).
Empirical Example: Using the Consumption-Income data from the Economic Report of the President over the period 1959-2007, given in Table 5.3, we estimate a consumption-income regression imposing a five year lag on income. In this case, all variables are in logs and s = 5 in equation (6.1). Table 6.1 gives the Stata output imposing the linear arithmetic lag given in equation (6.2).
The regression output reports β = 0.0498, which is statistically significant with a t-value of 64.4. One can test the arithmetic lag restrictions jointly using an F-test. The Unrestricted
Table 6.1 Regression with Arithmetic Lag Restriction

. tsset year
        time variable:  year, 1959 to 2007
                delta:  1 unit

. gen ly=ln(y)
. gen lc=ln(c)
. gen z=6*ly+5*l.ly+4*l2.ly+3*l3.ly+2*l4.ly+l5.ly
(5 missing values generated)

. reg lc z

      Source |       SS           df       MS          Number of obs =      44
-------------+----------------------------------      F(1, 42)      = 4149.96
       Model |   3.5705689         1   3.5705689      Prob > F      =  0.0000
    Residual |  .036136249        42  .000860387      R-squared     =  0.9900
-------------+----------------------------------      Adj R-squared =  0.9897
       Total |  3.60670515        43  .083876864      Root MSE      =  .02933

          lc |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
           z |    .049768   .0007726    64.42   0.000     .0482089    .0513271
       _cons |  -.5086255   .1591019    -3.20   0.003    -.8297061   -.1875449
Table 6.2 Almon Polynomial, r = 2, s = 5 and Near End-Point Constraint

Dependent Variable: LNC
Sample (adjusted): 1964 2007
Included observations: 44 after adjustments

                     Coefficient   Std. Error   t-Statistic    Prob.
C                     -0.770611     0.201648     -3.821563    0.0004
PDL01                  0.342152     0.056727      6.031589    0.0000
PDL02                 -0.067215     0.012960     -5.186494    0.0000

R-squared              0.990054    Mean dependent var      9.736786
Adjusted R-squared     0.989568    S.D. dependent var      0.289615
S.E. of regression     0.029580    Akaike info criterion  -4.137705
Sum squared resid      0.035874    Schwarz criterion      -4.016055
Log likelihood        94.02950    Hannan-Quinn criter.   -4.092591
F-statistic         2040.559      Durbin-Watson stat      0.382851
Prob(F-statistic)      0.000000

Lag Distribution of LNY:

        i    Coefficient   Std. Error   t-Statistic
        0      0.27494       0.04377      6.28161
        1      0.41544       0.06162      6.74167
        2      0.42152       0.05358      7.86768
        3      0.29317       0.01976     14.8332
        4      0.03039       0.04056      0.74937
        5     -0.36682       0.12630     -2.90445
  Sum of Lags  1.06865       0.01976     54.0919
Residual Sum of Squares (URSS) is obtained by regressing Ct on Yt, Yt-1, …, Yt-5 and a constant. This yields URSS = 0.016924. The RRSS is given in Table 6.1 as 0.036136 and it involves imposing the 5 restrictions given in (6.2). Therefore,

F = [(0.036136249 - 0.016924337)/5] / [0.016924337/37] = 8.40

and this is distributed as F5,37 under the null hypothesis. This rejects the linear arithmetic lag restrictions.
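This F-statistic arithmetic can be reproduced directly from the reported RSS values (a Python sketch; the helper name `chow_f` is an assumption):

```python
# Chow F-test of restrictions: compares the restricted RSS (RRSS) with the
# unrestricted RSS (URSS), here using the values reported in the text.
def chow_f(rrss, urss, num_restrictions, df_resid):
    return ((rrss - urss) / num_restrictions) / (urss / df_resid)

f = chow_f(0.036136249, 0.016924337, 5, 37)
print(round(f, 2))   # about 8.4, to be compared with the F(5, 37) critical value
```

The same function reproduces the statistics for the Almon-lag tests below by swapping in the corresponding RRSS and number of restrictions.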
Next we impose an Almon lag based on a second degree polynomial as described in equation (6.4). Table 6.2 reports the EViews output for s = 5 imposing the near end point constraint. To do this using EViews, one replaces the regressor Y by PDL(Y,5,2,1), indicating a request to fit a five year Almon lag on Y that is of the second degree, with a near end point constraint. In this case, the estimated regression coefficients rise and then fall, becoming negative: β0 = 0.275, β1 = 0.415, …, β5 = -0.367. Note that β4 is statistically insignificant. The Almon lag restrictions can be jointly tested using Chow's F-statistic. The URSS is obtained from the unrestricted regression of Ct on Yt, Yt-1, …, Yt-5 and a constant. This was reported above as URSS = 0.016924.
Table 6.3 Almon Polynomial, r = 2, s = 5 and Far End-Point Constraint

Dependent Variable: LNC
Method: Least Squares
Sample (adjusted): 1964 2007
Included observations: 44 after adjustments
The RRSS, given in Table 6.2, is 0.035874 and involves four restrictions. Therefore,

F = [(0.03587367 - 0.016924337)/4] / [0.016924337/37] = 10.36

and this is distributed as F4,37 under the null hypothesis. This rejects the second degree polynomial Almon lag specification with a near end point constraint.
Table 6.3 reports the EViews output for s = 5, imposing the far end point constraint. To do this using EViews, one replaces the regressor Y by PDL(Y,5,2,2), indicating a request to fit a five year Almon lag on Y that is of the second degree, with a far end point constraint. In this case, the β's are positive, then become negative: β0 = 0.879, β1 = 0.450, …, β5 = -0.136, all being statistically significant. This second degree polynomial Almon lag specification with a far end point constraint can be tested against the unrestricted lag model using Chow's F-statistic. The RRSS, given in Table 6.3, is 0.019955101 and involves four restrictions. Therefore,
F = [(0.019955101 - 0.016924337)/4] / [0.016924337/37] = 1.656
and this is distributed as F4,37 under the null hypothesis. This does not reject the restrictions imposed by this model.