# Nonlinear Least Squares and Nonlinear Weighted Least Squares Estimators

In this subsection we shall consider four estimators: the NLLS and NLWLS estimators applied to (10.4.11), denoted $\hat\gamma_N$ and $\hat\gamma_{NW}$, respectively, and the NLLS and NLWLS estimators applied to (10.4.23), denoted $\tilde\gamma_N$ and $\tilde\gamma_{NW}$, respectively.

All these estimators are consistent, and their asymptotic distributions can be obtained straightforwardly by noting that all the results of a linear regression model hold asymptotically for a nonlinear regression model if we treat the derivative of the nonlinear regression function with respect to the parameter vector as the regression matrix.⁶ In this way we can verify the interesting fact that $\tilde\gamma_N$ and $\tilde\gamma_{NW}$ have the same asymptotic distributions as $\tilde\gamma$ and $\tilde\gamma_W$, respectively.⁷ We can also show that $\hat\gamma_N$ and $\hat\gamma_{NW}$ are asymptotically normal with mean $\gamma$ and with their respective asymptotic variance-covariance matrices given by

$$V\hat\gamma_N = \sigma^4 (S'S)^{-1} S' \Sigma S (S'S)^{-1} \qquad (10.4.34)$$

and

$$V\hat\gamma_{NW} = \sigma^4 (S' \Sigma^{-1} S)^{-1}, \qquad (10.4.35)$$

where $S = (\bar X_1, D_2 \bar\lambda_1)$ and $D_2$ is the $N_1 \times N_1$ diagonal matrix whose $i$th element is $1 + (x_i'\alpha)^2 + x_i'\alpha\,\lambda(x_i'\alpha)$. We cannot make a definite comparison either between (10.4.22) and (10.4.34) or between (10.4.32) and (10.4.35).
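The diagonal element of $D_2$ can be checked by a short derivation — a sketch, assuming (10.4.11) has the conditional-mean form $f_i(\gamma) = x_i'\beta + \sigma\lambda(x_i'\alpha)$ with $\alpha = \beta/\sigma$ and $\lambda$ the inverse Mills ratio, as the surrounding text suggests. Writing $z_i = x_i'\alpha$ and using the standard identity $\lambda'(z) = -\lambda(z)\,[z + \lambda(z)]$ for $\lambda = \phi/\Phi$, together with $\partial z_i/\partial\sigma = -x_i'\beta/\sigma^2 = -z_i/\sigma$,

$$
\frac{\partial f_i}{\partial \sigma}
= \lambda(z_i) + \sigma\,\lambda'(z_i)\,\frac{\partial z_i}{\partial \sigma}
= \lambda(z_i) - z_i\,\lambda'(z_i)
= \lambda(z_i)\,\bigl[1 + z_i^2 + z_i\,\lambda(z_i)\bigr],
$$

and the bracketed factor is exactly the $i$th diagonal element of $D_2$, so the $\sigma$-column of the derivative matrix is $D_2\bar\lambda_1$.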

In the two-step methods defining $\hat\gamma$ and $\tilde\gamma$ and their generalizations $\hat\gamma_W$ and $\tilde\gamma_W$, we can naturally define an iteration procedure by repeating the two steps. For example, having obtained $\hat\gamma$, we can obtain a new estimate of $\alpha$, insert it into the argument of $\lambda$, and apply least squares again to Eq. (10.4.11). The procedure is to be repeated until the sequence of estimates of $\alpha$ thus obtained converges. In the iteration starting from $\hat\gamma_W$, we use the $m$th-round estimate of $\gamma$ not only to evaluate $\lambda$ but also to estimate the variance-covariance matrix of the error term for the purpose of obtaining the $(m+1)$st-round estimate. Iterations starting from $\tilde\gamma$ and $\tilde\gamma_W$ can be similarly defined but are probably not worthwhile because $\tilde\gamma$ and $\tilde\gamma_W$ are asymptotically equivalent to $\tilde\gamma_N$ and $\tilde\gamma_{NW}$, as we indicated earlier. The estimators $(\hat\gamma_N, \hat\gamma_{NW}, \tilde\gamma_N, \tilde\gamma_{NW})$ are clearly stationary values of the iterations starting from $(\hat\gamma, \hat\gamma_W, \tilde\gamma, \tilde\gamma_W)$. However, they may not necessarily be the values to which the iterations converge.
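The repeated two-step iteration can be sketched on simulated data — a minimal illustration, assuming (10.4.11) has the form $y_i = x_i'\beta + \sigma\lambda(x_i'\alpha) + \epsilon_i$ over the positive observations; the data-generating values, starting rule, clipping bound, and tolerance are all illustrative choices, not the chapter's:

```python
import math
import numpy as np

def mills(z):
    """Inverse Mills ratio lambda(z) = phi(z) / Phi(z)."""
    # clip z for numerical stability when Phi(z) underflows
    z = np.clip(np.asarray(z, dtype=float), -8.0, 8.0)
    pdf = np.exp(-0.5 * z**2) / math.sqrt(2.0 * math.pi)
    cdf = np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])
    return pdf / cdf

rng = np.random.default_rng(42)

# Simulated censored sample; beta_true and sigma_true are made-up values.
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true, sigma_true = np.array([1.0, 0.5]), 1.0
y_star = X @ beta_true + sigma_true * rng.normal(size=n)
pos = y_star > 0                      # only positive observations enter (10.4.11)
X1, y1 = X[pos], y_star[pos]

# Crude starting value for alpha = beta / sigma from OLS on the positive sample.
b0, *_ = np.linalg.lstsq(X1, y1, rcond=None)
alpha = b0 / y1.std()

# Repeat the two steps: evaluate lambda at the current alpha, run least squares
# of y on (X, lambda) to get (beta, sigma), then update alpha = beta / sigma.
for it in range(200):
    lam = mills(X1 @ alpha)
    Z = np.column_stack([X1, lam])
    c, *_ = np.linalg.lstsq(Z, y1, rcond=None)
    beta_hat, sigma_hat = c[:2], c[2]
    alpha_new = beta_hat / sigma_hat
    if np.max(np.abs(alpha_new - alpha)) < 1e-10:   # convergence in alpha
        alpha = alpha_new
        break
    alpha = alpha_new

print("beta:", np.round(beta_hat, 3), "sigma:", round(float(sigma_hat), 3))
```

The fixed point of this loop is a stationary value of the iteration in the sense of the text; whether the loop actually reaches it depends on the starting value and the data.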

A simulation study by Wales and Woodland (1980), based on only one replication each with sample sizes of 1000 and 5000, showed that $\hat\gamma_N$ is distinctly inferior to the MLE and is rather unsatisfactory.
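Although no definite ranking holds across (10.4.22)/(10.4.34) or (10.4.32)/(10.4.35), within the pair (10.4.34)–(10.4.35) the usual Gauss–Markov ordering does hold: $(S'\Sigma^{-1}S)^{-1}$ never exceeds $(S'S)^{-1}S'\Sigma S(S'S)^{-1}$ in the matrix (positive semidefinite) sense. A quick numerical check with an arbitrary, made-up $S$ and $\Sigma$ (both hypothetical, chosen only to be full rank and positive definite):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical derivative ("regression") matrix S and error covariance Sigma.
n, k = 50, 3
S = rng.normal(size=(n, k))
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)       # positive definite by construction

# NLLS sandwich variance, (10.4.34) up to the common sigma^4 scale factor
StS_inv = np.linalg.inv(S.T @ S)
V_nlls = StS_inv @ S.T @ Sigma @ S @ StS_inv

# NLWLS variance, (10.4.35) up to the same scale factor
V_nlwls = np.linalg.inv(S.T @ np.linalg.inv(Sigma) @ S)

# Gauss-Markov ordering: V_nlls - V_nlwls is positive semidefinite
eigvals = np.linalg.eigvalsh(V_nlls - V_nlwls)
print(eigvals.min() >= -1e-8)   # True
```

This ordering compares only the two estimators built from the same equation; it says nothing about the cross-equation comparisons the text leaves open.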