# Additive Regressions

In recent years several researchers have attempted to estimate $m(x_i)$ by imposing some structure on the conditional mean $m(x_i)$. One popular solution is the generalized additive model of Hastie and Tibshirani (1990), which is

$$y_i = m(x_i) + u_i = m_1(x_{i1}) + m_2(x_{i2}) + \cdots + m_q(x_{iq}) + u_i,$$

where the $m_s$, $s = 1,\dots,q$, are functions of single variables with $E\,m_s(x_{is}) = 0$, $s = 2,\dots,q$, for identification. Each $m_s$, and hence $m(x_i)$, can then be estimated at the one-dimensional convergence rate $(nh)^{1/2}$, which is faster than the rate $(nh^q)^{1/2}$ achieved by direct nonparametric estimation of $m(x_i)$. The statistical properties of the Hastie and Tibshirani (1990) estimation algorithm are complicated. For practical implementation, simpler estimation techniques are proposed in Linton and Nielsen (1995) and Chen et al. (1996). The basic idea behind these is as follows. At

the first stage, estimate $m(x_i) = m(x_{i1},\dots,x_{iq}) = m(x_{i1}, \bar{x}_i)$ by the nonparametric LLS procedure, where $\bar{x}_i$ is the vector $(x_{i2},\dots,x_{iq})$. Then, using $E\,m_s(x_{is}) = 0$, we note that

$$m_1(x_{i1}) = \int m(x_{i1}, \bar{x})\, dF(\bar{x}),$$

and hence $\hat{m}_1(x_{i1}) = \int \hat{m}(x_{i1}, \bar{x})\, dF(\bar{x})$. Using the empirical distribution of $\bar{x}$, one can calculate $\hat{m}_1(x_{i1}) = n^{-1} \sum_{j=1}^{n} \hat{m}(x_{i1}, \bar{x}_j)$; $\hat{m}_s(x_{is})$ for any $s$ can be calculated similarly. Under the assumptions that $[y_i, x_i]$ are i.i.d. and that $nh^3 \to \infty$ and $nh^5 \to 0$ as $n \to \infty$, Linton and Nielsen show the $(nh)^{1/2}$ convergence to normality of $\hat{m}_1$. For a test of additivity of $m(x_i)$ see Linton and Gozalo (1996), and for an application to estimating a production function see Chen et al. (1996).
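The two-stage procedure above can be sketched in a few lines of code: fit the full $q$-dimensional regression by local linear smoothing, then average the fit over the empirical distribution of the remaining regressors. This is a minimal illustrative sketch, not the authors' implementation; the function names, the product Gaussian kernel, and the single bandwidth `h` are all simplifying assumptions.

```python
import numpy as np

def local_linear(x0, X, y, h):
    """Local linear (LLS) estimate of m(x0) = E[y | x = x0].

    X : (n, q) regressor matrix, x0 : (q,) evaluation point,
    h : bandwidth (scalar; a product Gaussian kernel is used for simplicity).
    """
    d = (X - x0) / h                                 # scaled deviations
    w = np.exp(-0.5 * np.sum(d**2, axis=1))          # product Gaussian weights
    Z = np.column_stack([np.ones(len(X)), X - x0])   # local linear design
    WZ = Z * w[:, None]
    # Weighted least squares; the intercept is the fitted value at x0.
    beta = np.linalg.lstsq(WZ.T @ Z, WZ.T @ y, rcond=None)[0]
    return beta[0]

def marginal_integration(x1, X, y, h):
    """Second stage: estimate the first additive component m_1(x1) by
    averaging the full fit m-hat(x1, xbar_j) over the empirical
    distribution of xbar_j = (x_{j2}, ..., x_{jq})."""
    n = len(X)
    fits = [local_linear(np.r_[x1, X[j, 1:]], X, y, h) for j in range(n)]
    return np.mean(fits)
```

For example, with data generated from $y_i = \sin(\pi x_{i1}) + x_{i2}^2 + u_i$, `marginal_integration(0.5, X, y, h=0.3)` recovers $m_1(0.5) = \sin(\pi/2)$ up to a constant shift absorbed by the other components.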

Alternative useful approaches that impose structure on $m(x_i)$ are the projection pursuit regression and neural network procedures. For details on these, see Friedman and Tukey (1974), Breiman and Friedman (1985), Kuan and White (1994), Härdle (1990), and Pagan and Ullah (1999).
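For context, projection pursuit regression replaces additivity in the original coordinates with additivity in estimated linear indices; in its standard formulation (not taken from the sources above),

$$m(x_i) = \sum_{k=1}^{K} g_k(\alpha_k^{\top} x_i),$$

where the directions $\alpha_k$ and the univariate ridge functions $g_k$ are estimated from the data, so each term is again a one-dimensional smoothing problem.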
