# Gauss-Newton Method

The Gauss-Newton method was specifically designed to calculate the nonlinear least squares estimator. Expanding $f_t(\beta)$ of Eq. (4.3.5) in a Taylor series around the initial estimate $\hat\beta_1$, we obtain

$$f_t(\beta) \simeq f_t(\hat\beta_1) + \left.\frac{\partial f_t}{\partial \beta'}\right|_{\hat\beta_1} (\beta - \hat\beta_1). \tag{4.4.10}$$

Substituting the right-hand side of (4.4.10) for $f_t(\beta)$ in (4.3.5) yields

$$S_T \simeq \sum_{t=1}^{T} \left[ y_t - f_t(\hat\beta_1) - \left.\frac{\partial f_t}{\partial \beta'}\right|_{\hat\beta_1} (\beta - \hat\beta_1) \right]^2. \tag{4.4.11}$$

The second-round estimator $\hat\beta_2$ of the Gauss-Newton iteration is obtained by minimizing the right-hand side of approximation (4.4.11) with respect to $\beta$ as

$$\hat\beta_2 = \hat\beta_1 + (G_1'G_1)^{-1} G_1' \left[ y - f(\hat\beta_1) \right], \tag{4.4.12}$$

where $G_1 = \left.\partial f/\partial \beta'\right|_{\hat\beta_1}$ is the matrix whose $t$th row is $\left.\partial f_t/\partial \beta'\right|_{\hat\beta_1}$, $y = (y_1, y_2, \ldots, y_T)'$, and $f(\hat\beta_1) = [f_1(\hat\beta_1), f_2(\hat\beta_1), \ldots, f_T(\hat\beta_1)]'$.

The iteration (4.4.12) is to be repeated until convergence is obtained. This method involves only the first derivatives of $f_t$, whereas the Newton-Raphson iteration applied to nonlinear least squares estimation involves the second derivatives of $f_t$ as well.
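As a concrete illustration, the iteration (4.4.12) can be sketched in a few lines of Python. The exponential model $f_t(\beta) = \beta_1 e^{\beta_2 x_t}$, the data, and the starting value below are hypothetical, chosen only to show the mechanics; a production implementation would add safeguards such as those discussed later in this section.

```python
import numpy as np

def gauss_newton(f, jac, y, beta, n_iter=20, tol=1e-10):
    """Iterate the Gauss-Newton update (4.4.12):
    beta_{k+1} = beta_k + (G'G)^{-1} G'(y - f(beta_k)),
    where G is the Jacobian df/dbeta' evaluated at beta_k."""
    for _ in range(n_iter):
        r = y - f(beta)                      # residual vector y - f(beta_k)
        G = jac(beta)                        # T x K Jacobian at beta_k
        step = np.linalg.solve(G.T @ G, G.T @ r)
        beta = beta + step
        if np.max(np.abs(step)) < tol:       # stop once the change is tiny
            break
    return beta

# Hypothetical example: f_t(beta) = b0 * exp(b1 * x_t)
x = np.linspace(0.0, 1.0, 25)
true_beta = np.array([2.0, -1.5])
f = lambda b: b[0] * np.exp(b[1] * x)
jac = lambda b: np.column_stack([np.exp(b[1] * x),
                                 b[0] * x * np.exp(b[1] * x)])
y = f(true_beta)                             # noise-free data for clarity
est = gauss_newton(f, jac, y, beta=np.array([1.0, -1.0]))
```

With noise-free data and a starting value near the truth, the iteration converges in a handful of steps; with a poor start it may fail to improve, which motivates the modifications discussed later in the section.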

The Gauss-Newton iteration may be alternatively motivated as follows: Evaluating the approximation (4.4.10) at $\beta_0$ and inserting it into Eq. (4.3.1), we obtain

$$y_t - f_t(\hat\beta_1) + \left.\frac{\partial f_t}{\partial \beta'}\right|_{\hat\beta_1} \hat\beta_1 \simeq \left.\frac{\partial f_t}{\partial \beta'}\right|_{\hat\beta_1} \beta_0 + u_t. \tag{4.4.14}$$

Then the second-round estimator $\hat\beta_2$ can be interpreted as the least squares estimate of $\beta_0$ applied to the linear regression equation (4.4.14), treating the whole left-hand side as the dependent variable and $\left.\partial f_t/\partial \beta'\right|_{\hat\beta_1}$ as the vector of independent variables. Equation (4.4.14) reminds us of the point raised at the beginning of Section 4.3.5, namely, that the nonlinear regression model asymptotically behaves like a linear regression model if we treat $\partial f/\partial \beta'$ evaluated at a good estimate of $\beta$ as the regressor matrix.
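This interpretation can be checked numerically: OLS applied to the pseudo-regression (4.4.14), with dependent variable $y_t - f_t(\hat\beta_1) + (\partial f_t/\partial \beta')|_{\hat\beta_1}\hat\beta_1$, reproduces the Gauss-Newton update (4.4.12) exactly. The exponential model, the data, and the trial estimate below are hypothetical, used only for the check.

```python
import numpy as np

# Hypothetical model f_t(beta) = b0 * exp(b1 * x_t) and a trial estimate beta1
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(-1.5 * x) + 0.02 * np.cos(5.0 * x)   # made-up data
beta1 = np.array([1.0, -1.0])

f = lambda b: b[0] * np.exp(b[1] * x)
G = np.column_stack([np.exp(beta1[1] * x),
                     beta1[0] * x * np.exp(beta1[1] * x)])  # df/dbeta' at beta1

# Gauss-Newton update (4.4.12)
beta2_gn = beta1 + np.linalg.solve(G.T @ G, G.T @ (y - f(beta1)))

# OLS on the linearized regression (4.4.14):
# regress y - f(beta1) + G @ beta1 on the rows of G
z = y - f(beta1) + G @ beta1
beta2_ols, *_ = np.linalg.lstsq(G, z, rcond=None)
```

The two estimates agree identically, since $(G'G)^{-1}G'z = (G'G)^{-1}G'[y - f(\hat\beta_1)] + \hat\beta_1$.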

The Gauss-Newton iteration suffers from weaknesses similar to those of the Newton-Raphson iteration, namely, the possibility of an exact or near singularity of the matrix to be inverted in (4.4.12) and the possibility of too much or too little change from $\hat\beta_1$ to $\hat\beta_2$. To deal with the first weakness, Marquardt (1963) proposed a modification

$$\hat\beta_2 = \hat\beta_1 + (G_1'G_1 + \alpha_1 I)^{-1} G_1' \left[ y - f(\hat\beta_1) \right], \tag{4.4.15}$$

where $\alpha_1$ is a positive scalar to be appropriately chosen. To deal with the second weakness, Hartley (1961) proposed the following modification: First, calculate

$$\Delta_1 = (G_1'G_1)^{-1} G_1' \left[ y - f(\hat\beta_1) \right] \tag{4.4.16}$$

and, second, choose $\lambda_1$ to minimize

$$S_T(\hat\beta_1 + \lambda_1 \Delta_1), \qquad 0 \le \lambda_1 \le 1. \tag{4.4.17}$$
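The two modifications can be sketched as follows, in the notation of (4.4.15)-(4.4.17). The exponential model and data are again hypothetical, and Hartley's one-dimensional minimization over $\lambda_1$ is done here by a simple grid search rather than any particular method from the original papers.

```python
import numpy as np

def marquardt_step(G, resid, alpha):
    """Modified update direction (4.4.15): add alpha*I to G'G before
    inverting, which guards against a (near-)singular G'G."""
    K = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(K), G.T @ resid)

def hartley_step(f, y, beta, G, n_grid=101):
    """Hartley's modification: compute the full Gauss-Newton direction
    Delta_1 (4.4.16), then pick lambda_1 in [0, 1] minimizing
    S_T(beta + lambda_1 * Delta_1) (4.4.17) -- here by grid search."""
    delta = np.linalg.solve(G.T @ G, G.T @ (y - f(beta)))
    lambdas = np.linspace(0.0, 1.0, n_grid)
    S = [np.sum((y - f(beta + lam * delta))**2) for lam in lambdas]
    return beta + lambdas[int(np.argmin(S))] * delta

# Hypothetical illustration: f_t(beta) = b0 * exp(b1 * x_t)
x = np.linspace(0.0, 1.0, 20)
f = lambda b: b[0] * np.exp(b[1] * x)
jac = lambda b: np.column_stack([np.exp(b[1] * x),
                                 b[0] * x * np.exp(b[1] * x)])
y = f(np.array([2.0, -1.5]))          # made-up, noise-free data
beta1 = np.array([1.0, -1.0])
G1 = jac(beta1)
beta2 = hartley_step(f, y, beta1, G1)
```

Since $\lambda_1 = 0$ lies in the grid, Hartley's step can never increase $S_T$; and as $\alpha_1 \to 0$ the Marquardt step reduces to the plain Gauss-Newton step.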

Hartley proved that under general conditions his iteration converges to a root of Eq. (4.3.6). (Gallant, 1975a, has made useful comments on Marquardt’s and Hartley’s algorithms.)

As in the Newton-Raphson method, it can be shown that the second-round estimator of the Gauss-Newton iteration is asymptotically as efficient as NLLS if the iteration is started from an estimator $\hat\beta_1$ such that $\sqrt{T}(\hat\beta_1 - \beta_0)$ converges to a nondegenerate random variable.

Finally, we want to mention several empirical papers in which the Gauss-Newton iteration and related iterative methods have been used. Bodkin and Klein (1967) estimated Cobb-Douglas and CES production functions by the Newton-Raphson method. Charatsis (1971) estimated the CES production function by a modification of the Gauss-Newton method similar to that of Hartley (1961) and found that in 64 out of 74 samples it converged within six iterations. Mizon (1977), in a paper whose major aim was to choose among nine production functions including Cobb-Douglas and CES, used the conjugate gradient method of Powell (1964) (see Quandt, 1983). Mizon's article also contained interesting econometric applications of various statistical techniques we shall discuss in Section 4.5, namely, a comparison of the likelihood ratio and related tests, the Akaike information criterion, tests of separate families of hypotheses, and the Box-Cox transformation (Section 8.1.2). Sargent (1978) estimated a rational expectations model (which gives rise to nonlinear constraints among parameters) by the DFP algorithm.