The nonlinear regression model is defined by

(13.4.1) $y_t = f_t(\beta) + u_t, \quad t = 1, 2, \ldots, T,$

where $f_t(\cdot)$ is a known function, $\beta$ is a $K$-vector of unknown parameters, and $\{u_t\}$ are i.i.d. with $Eu_t = 0$ and $Vu_t = \sigma^2$. In practice we often specify $f_t(\beta) = f(x_t, \beta)$, where $x_t$ is a vector of exogenous variables which, unlike in the linear regression model, may not necessarily be of the same dimension as $\beta$.

An example of the nonlinear regression model is the Cobb-Douglas production function with an additive error term,

(13.4.2) $Q_t = \beta_1 K_t^{\beta_2} L_t^{\beta_3} + u_t,$

where $Q$, $K$, and $L$ denote output, capital input, and labor input, respectively. Another example is the CES production function (see Arrow et al., 1961):

(13.4.3) $Q_t = \beta_1 \left[\beta_2 K_t^{-\beta_3} + (1 - \beta_2) L_t^{-\beta_3}\right]^{-\beta_4/\beta_3} + u_t.$
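As a minimal sketch, the two mean functions above can be written out in code; the parameter values and input levels below are invented purely for illustration.

```python
def cobb_douglas(K, L, b1, b2, b3):
    """Cobb-Douglas mean function (13.4.2): b1 * K^b2 * L^b3."""
    return b1 * K**b2 * L**b3

def ces(K, L, b1, b2, b3, b4):
    """CES mean function (13.4.3): b1 * [b2*K^-b3 + (1-b2)*L^-b3]^(-b4/b3)."""
    return b1 * (b2 * K**(-b3) + (1.0 - b2) * L**(-b3)) ** (-b4 / b3)

# Hypothetical inputs and parameters, chosen only to evaluate the formulas.
K, L = 4.0, 9.0
print(cobb_douglas(K, L, 1.0, 0.3, 0.7))
print(ces(K, L, 1.0, 0.5, 1.0, 1.0))
```

With $\beta_1 = 1$, both formulas return a generalized mean of the two inputs, so each output lies between $K$ and $L$.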

We can write (13.4.1) in vector notation as

(13.4.4) $y = f(\beta) + u,$

where $y$, $f$, and $u$ are $T$-vectors having $y_t$, $f_t$, and $u_t$, respectively, as the $t$-th element.

The nonlinear least squares (NLLS) estimator of $\beta$ is defined as the value of $\beta$ that minimizes


(13.4.5) $S_T(\beta) = \sum_{t=1}^{T} [y_t - f_t(\beta)]^2.$


Denoting the NLLS estimator by $\hat\beta$, we can estimate $\sigma^2$ by

(13.4.6) $\hat\sigma^2 = \frac{1}{T} S_T(\hat\beta).$

The estimators $\hat\beta$ and $\hat\sigma^2$ can be shown to be the maximum likelihood estimators if $\{u_t\}$ are assumed to be jointly normal. The derivation is analogous to the linear case given in Section 12.2.5.
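The definitions (13.4.5) and (13.4.6) can be illustrated directly. The sketch below minimizes $S_T(\beta)$ by brute-force grid search for a one-parameter exponential mean function $f_t(\beta) = e^{\beta x_t}$; the model, data, true value, and grid are all invented for the example.

```python
import numpy as np

# Simulate a nonlinear regression y_t = exp(beta * x_t) + u_t with an
# invented true value beta = 0.7 and i.i.d. normal errors.
rng = np.random.default_rng(0)
T = 500
x = rng.uniform(0.0, 1.0, T)
y = np.exp(0.7 * x) + rng.normal(0.0, 0.1, T)

def S(beta):
    """Sum of squared residuals S_T(beta) = sum_t [y_t - f_t(beta)]^2, as in (13.4.5)."""
    return np.sum((y - np.exp(beta * x)) ** 2)

# Minimize S_T over a fine grid (crude but transparent for one parameter).
grid = np.linspace(0.0, 2.0, 2001)
beta_hat = grid[np.argmin([S(b) for b in grid])]
sigma2_hat = S(beta_hat) / T   # estimate of sigma^2 via (13.4.6)
print(beta_hat, sigma2_hat)
```

Grid search is used here only because it makes the "value of $\beta$ that minimizes $S_T$" definition literal; practical problems require the iterative methods discussed next.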

The minimization of $S_T(\beta)$ must generally be done by an iterative method. The Newton-Raphson method described in Section 7.3.3 can be used for this purpose. Another iterative method, the Gauss-Newton method, is specifically designed for the nonlinear regression model. Let $\hat\beta_1$ be the initial value, be it an estimator or a mere guess. Expand $f_t(\beta)$ in a Taylor series around $\beta = \hat\beta_1$ as

(13.4.7) $f_t(\beta) \cong f_t(\hat\beta_1) + \left.\frac{\partial f_t}{\partial \beta'}\right|_{\hat\beta_1} (\beta - \hat\beta_1),$

where $\partial f_t/\partial \beta'$ is a $K$-dimensional row vector whose $j$th element is the derivative of $f_t$ with respect to the $j$th element of $\beta$. Note that (13.4.7) holds only approximately because the derivatives are evaluated at $\hat\beta_1$. Inserting (13.4.7) into the right-hand side of (13.4.1) and rearranging terms, we obtain

(13.4.8) $y_t - f_t(\hat\beta_1) + \left.\frac{\partial f_t}{\partial \beta'}\right|_{\hat\beta_1} \hat\beta_1 \cong \left.\frac{\partial f_t}{\partial \beta'}\right|_{\hat\beta_1} \beta + u_t.$

The second-round estimator of the iteration, $\hat\beta_2$, is obtained as the LS estimator applied to (13.4.8), treating the entire left-hand side as the dependent variable and $\partial f_t/\partial \beta'$ as the vector of regressors. The iteration is repeated until it converges. It is simpler than the Newton-Raphson method because it requires computation of only the first derivatives of $f_t$, whereas Newton-Raphson requires the second derivatives as well.
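The Gauss-Newton iteration above can be sketched for the one-parameter exponential model $f_t(\beta) = e^{\beta x_t}$, whose single derivative is $\partial f_t/\partial \beta = x_t e^{\beta x_t}$. The data and starting value are invented for illustration.

```python
import numpy as np

# Invented data: y_t = exp(0.7 * x_t) + u_t with i.i.d. normal errors.
rng = np.random.default_rng(1)
T = 500
x = rng.uniform(0.0, 1.0, T)
y = np.exp(0.7 * x) + rng.normal(0.0, 0.1, T)

beta = 0.2   # beta_1: an initial guess
for _ in range(20):
    f = np.exp(beta * x)   # f_t evaluated at the current iterate
    g = x * f              # df_t/dbeta evaluated at the current iterate
    # LS regression of (y_t - f_t + g_t * beta) on g_t, exactly as in (13.4.8):
    beta_new = np.dot(g, y - f + g * beta) / np.dot(g, g)
    if abs(beta_new - beta) < 1e-10:
        beta = beta_new
        break
    beta = beta_new
print(beta)
```

Note that the LS formula collapses to $\hat\beta_2 = \hat\beta_1 + (g'g)^{-1} g'(y - f)$, making explicit that each round is an ordinary least squares step on the linearized model.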

We can show that under general assumptions the NLLS estimator $\hat\beta$ is consistent and

(13.4.9) $\sqrt{T}(\hat\beta - \beta) \rightarrow N\left(0,\ \sigma^2 \left[\operatorname{plim}\, T^{-1}\, \frac{\partial f'}{\partial \beta} \frac{\partial f}{\partial \beta'}\right]^{-1}\right).$

The above result is analogous to the asymptotic normality of the LS estimator given in Theorem 12.2.4. Note that $\partial f/\partial \beta'$ above plays the role of $X$ in Theorem 12.2.4. The difference is that $\partial f/\partial \beta'$ depends on the unknown parameter $\beta$ and hence is unknown, whereas $X$ is assumed to be known. The practical implication of (13.4.9) is that

(13.4.10) $\hat\beta \sim N\left(\beta,\ \hat\sigma^2 \left[\frac{\partial f'}{\partial \beta} \frac{\partial f}{\partial \beta'}\right]^{-1}\right)$ approximately, where the derivatives are evaluated at $\hat\beta$.

The asymptotic variance-covariance matrix above is comparable to formula (12.2.22) for the LS estimator. We can test hypotheses about $\beta$ in the nonlinear regression model by the methods presented in Section 12.4, provided that we use $\partial f/\partial \beta'$ evaluated at $\hat\beta$ in place of $X$.
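A minimal sketch of the standard error implied by (13.4.10), again for the invented one-parameter exponential model $f_t(\beta) = e^{\beta x_t}$: after Gauss-Newton converges, the estimated asymptotic variance is $\hat\sigma^2 (g'g)^{-1}$ with $g = \partial f/\partial \beta$ evaluated at $\hat\beta$.

```python
import numpy as np

# Invented data: y_t = exp(0.7 * x_t) + u_t with i.i.d. normal errors.
rng = np.random.default_rng(2)
T = 500
x = rng.uniform(0.0, 1.0, T)
y = np.exp(0.7 * x) + rng.normal(0.0, 0.1, T)

beta = 0.5
for _ in range(50):   # Gauss-Newton iterations to obtain beta_hat
    f = np.exp(beta * x)
    g = x * f
    beta = np.dot(g, y - f + g * beta) / np.dot(g, g)

f = np.exp(beta * x)
g = x * f                                # df/dbeta' evaluated at beta_hat
sigma2_hat = np.sum((y - f) ** 2) / T    # sigma^2 estimate via (13.4.6)
avar = sigma2_hat / np.dot(g, g)         # scalar case of (13.4.10)
se = np.sqrt(avar)
print(beta, se)
```

In the $K$-parameter case `np.dot(g, g)` becomes the $K \times K$ matrix $(\partial f'/\partial \beta)(\partial f/\partial \beta')$ and its inverse gives the full variance-covariance matrix used for the tests of Section 12.4.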
