{{Short description|Mathematical algorithm}}
[[File:Regression pic assymetrique.gif|thumb|300px|Fitting of a noisy curve by an asymmetrical peak model <math>f_{\beta}(x)</math> with parameters <math>\beta</math> by minimizing the sum of squared residuals <math>r_i(\beta) = y_i - f_{\beta}(x_i)</math> at grid points <math>x_i</math>, using the Gauss–Newton algorithm.<br />Top: Raw data and model.<br />Bottom: Evolution of the normalised sum of the squares of the errors.]]
The '''Gauss–Newton algorithm''' is used to solve [[non-linear least squares]] problems, which is equivalent to minimizing a sum of squared function values. It is an extension of [[Newton's method in optimization|Newton's method]] for finding a [[maxima and minima|minimum]] of a non-linear [[function (mathematics)|function]]. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate [[Zero of a function|zeroes]] of the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method for [[#Solving_overdetermined_systems_of_equations|solving overdetermined systems of equations]]. It has the advantage that second derivatives, which can be challenging to compute, are not required.<ref>{{cite book |first1=Ron C. |last1=Mittelhammer |first2=Douglas J. |last2=Miller |first3=George G. |last3=Judge |title=Econometric Foundations |location=Cambridge |publisher=Cambridge University Press |year=2000 |isbn=0-521-62394-4 |pages=197–198 |url=https://books.google.com/books?id=fycmsfkK6RQC&pg=PA197 }}</ref>

Non-linear least squares problems arise, for instance, in [[non-linear regression]], where parameters in a model are sought such that the model is in good agreement with available observations.

The method is named after the mathematicians [[Carl Friedrich Gauss]] and [[Isaac Newton]], and first appeared in Gauss's 1809 work ''Theoria motus corporum coelestium in sectionibus conicis solem ambientum''.<ref name=optimizationEncyc>{{Cite book |authorlink1=Christodoulos Floudas |last1=Floudas |first1=Christodoulos A. |last2=Pardalos |first2=Panos M. |title=Encyclopedia of Optimization |publisher=Springer |year=2008 |page=1130 |isbn=9780387747583}}</ref>

== Description ==
Given <math>m</math> functions <math>\textbf{r} = (r_1, \ldots, r_m)</math> (often called residuals) of <math>n</math> variables <math>\boldsymbol{\beta} = (\beta_1, \ldots, \beta_n),</math> with <math>m \geq n,</math> the Gauss–Newton algorithm [[iterative method|iteratively]] finds the value of <math>\boldsymbol\beta</math> that minimizes the sum of squares<ref name=ab>Björck (1996)</ref>
<math display="block"> S(\boldsymbol \beta) = \sum_{i=1}^m r_i(\boldsymbol \beta)^{2}.</math>

Starting with an initial guess <math>\boldsymbol \beta^{(0)}</math> for the minimum, the method proceeds by the iterations
<math display="block"> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}^\operatorname{T} \mathbf{J_r} \right)^{-1} \mathbf{J_r}^\operatorname{T} \mathbf{r}\left(\boldsymbol \beta^{(s)}\right), </math>
where, if '''r''' and '''''β''''' are [[column vectors]], the entries of the [[Jacobian matrix]] are
<math display="block"> \left(\mathbf{J_r}\right)_{ij} = \frac{\partial r_i \left(\boldsymbol \beta^{(s)}\right)}{\partial \beta_j},</math>
and the symbol <math>^\operatorname{T}</math> denotes the [[matrix transpose]].
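For illustration, one iteration of this update can be sketched in [[Julia (programming language)|Julia]] (a minimal sketch; the name <code>gauss_newton_step</code> is arbitrary, and <code>r</code> and <code>J</code> are assumed to be functions returning the residual vector and the Jacobian matrix at <code>β</code>):
<syntaxhighlight lang="julia">
# One Gauss-Newton update: β⁺ = β - (JᵀJ)⁻¹ Jᵀ r(β).
# `r` returns the m-vector of residuals, `J` its m×n Jacobian.
function gauss_newton_step(r, J, β)
    Jβ = J(β)
    Δ = -(Jβ' * Jβ) \ (Jβ' * r(β))   # solve the normal equations for Δ
    return β + Δ
end
</syntaxhighlight>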
At each iteration, the update <math>\Delta = \boldsymbol \beta^{(s+1)} - \boldsymbol \beta^{(s)}</math> can be found by rearranging the previous equation in the following two steps:
* <math>\Delta = -\left(\mathbf{J_r}^\operatorname{T} \mathbf{J_r} \right)^{-1} \mathbf{J_r}^\operatorname{T} \mathbf{r}\left(\boldsymbol \beta^{(s)}\right)</math>
* <math>\mathbf{J_r}^\operatorname{T} \mathbf{J_r} \Delta = -\mathbf{J_r}^\operatorname{T} \mathbf{r}\left(\boldsymbol \beta^{(s)}\right) </math>

With the substitutions <math display="inline">A = \mathbf{J_r}^\operatorname{T} \mathbf{J_r} </math>, <math>\mathbf{b} = -\mathbf{J_r}^\operatorname{T} \mathbf{r}\left(\boldsymbol \beta^{(s)}\right) </math>, and <math>\mathbf {x} = \Delta </math>, this turns into the conventional matrix equation of the form <math>A\mathbf {x} = \mathbf {b} </math>, which can then be solved by a variety of methods (see [[#Notes|Notes]]).

If {{math|1=''m'' = ''n''}}, the iteration simplifies to
<math display="block"> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}\right)^{-1} \mathbf{r}\left(\boldsymbol \beta^{(s)}\right),</math>
which is a direct generalization of [[Newton's method]] in one dimension.

In data fitting, where the goal is to find the parameters <math>\boldsymbol{\beta}</math> such that a given model function <math>\mathbf{f}(\mathbf{x}, \boldsymbol{\beta})</math> best fits some data points <math>(x_i, y_i)</math>, the functions <math>r_i</math> are the [[residual (statistics)|residuals]]:
<math display="block">r_i(\boldsymbol \beta) = y_i - f\left(x_i, \boldsymbol \beta\right).</math>

Then, the Gauss–Newton method can be expressed in terms of the Jacobian <math>\mathbf{J_f} = -\mathbf{J_r}</math> of the function <math>\mathbf{f}</math> as
<math display="block"> \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} + \left(\mathbf{J_f}^\operatorname{T} \mathbf{J_f} \right)^{-1} \mathbf{J_f}^\operatorname{T} \mathbf{r}\left(\boldsymbol \beta^{(s)}\right). </math>
Note that <math>\left(\mathbf{J_f}^\operatorname{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\operatorname{T}</math> is the left [[Moore–Penrose pseudoinverse|pseudoinverse]] of <math>\mathbf{J_f}</math>.

== Notes ==
The assumption {{math|''m'' ≥ ''n''}} in the algorithm statement is necessary, as otherwise the matrix <math>\mathbf{J_r}^\operatorname{T}\mathbf{J_r}</math> is not invertible and the normal equations cannot be solved (at least uniquely).

The Gauss–Newton algorithm can be derived by [[linear approximation|linearly approximating]] the vector of functions ''r''<sub>''i''</sub>. Using [[Taylor's theorem]], we can write at every iteration:
<math display="block">\mathbf{r}(\boldsymbol \beta) \approx \mathbf{r}\left(\boldsymbol \beta^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol \beta^{(s)}\right)\Delta</math>
with <math>\Delta = \boldsymbol \beta - \boldsymbol \beta^{(s)}</math>. The task of finding <math>\Delta</math> that minimizes the sum of squares of the right-hand side, i.e.,
<math display="block">\min \left\|\mathbf{r}\left(\boldsymbol \beta^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol \beta^{(s)}\right)\Delta\right\|_2^2,</math>
is a [[linear least squares (mathematics)|linear least-squares]] problem, which can be solved explicitly, yielding the normal equations in the algorithm.

The normal equations are ''n'' simultaneous linear equations in the unknown increments <math>\Delta</math>. They may be solved in one step, using [[Cholesky decomposition]], or, better, the [[QR factorization]] of <math>\mathbf{J_r}</math>.
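As a sketch of the QR approach (assuming the dense Jacobian <code>Jβ</code> and residual vector <code>rβ</code> have already been evaluated at the current iterate; the function name is illustrative), the increment can be computed in Julia without forming <math>\mathbf{J_r}^\operatorname{T} \mathbf{J_r}</math> explicitly:
<syntaxhighlight lang="julia">
using LinearAlgebra

# Increment Δ minimizing ‖r + JΔ‖₂, obtained from a QR factorization of J
# instead of forming the normal equations JᵀJ Δ = -Jᵀr.
function increment_qr(Jβ, rβ)
    F = qr(Jβ)        # thin QR factorization, J = QR
    return -(F \ rβ)  # least-squares solve; equivalently, -(Jβ \ rβ)
end
</syntaxhighlight>
Working with the factorization avoids forming <math>\mathbf{J_r}^\operatorname{T}\mathbf{J_r}</math>, whose condition number is the square of that of <math>\mathbf{J_r}</math>.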
For large systems, an [[iterative method]], such as the [[conjugate gradient]] method, may be more efficient. If there is a linear dependence between columns of '''J'''<sub>'''r'''</sub>, the iterations will fail, as <math>\mathbf{J_r}^\operatorname{T}\mathbf{J_r}</math> becomes singular.

When <math>\mathbf{r}</math> is complex, <math>\mathbf{r}:\Complex^n \to \Complex</math>, the conjugate form should be used: <math>\left(\overline{\mathbf{J_r}}^\operatorname{T} \mathbf{J_r}\right)^{-1}\overline{\mathbf{J_r}}^\operatorname{T}</math>.

== Example ==
[[File:Gauss Newton illustration.png|thumb|right|280px|Calculated curve obtained with <math>\hat\beta_1 = 0.362</math> and <math>\hat\beta_2 = 0.556</math> (in blue) versus the observed data (in red)]]
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and the model's predictions.

In a biology experiment studying the relation between substrate concentration {{math|[''S'']}} and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.

{| class="wikitable" style="text-align: center; margin-left: 1em;"
! {{mvar|i}}
| 1 || 2 || 3 || 4 || 5 || 6 || 7
|-
! {{math|[''S'']}}
| 0.038 || 0.194 || 0.425 || 0.626 || 1.253 || 2.500 || 3.740
|-
! Rate
| 0.050 || 0.127 || 0.094 || 0.2122 || 0.2729 || 0.2665 || 0.3317
|}

It is desired to find a curve (model function) of the form
<math display="block">\text{rate} = \frac{V_\text{max} \cdot [S]}{K_M + [S]}</math>
that best fits the data in the least-squares sense, with the parameters <math>V_\text{max}</math> and <math>K_M</math> to be determined.

Denote by <math>x_i</math> and <math>y_i</math> the values of {{math|[''S'']}} and '''rate''' respectively, with <math>i = 1, \dots, 7</math>. Let <math>\beta_1 = V_\text{max}</math> and <math>\beta_2 = K_M</math>. We will find <math>\beta_1</math> and <math>\beta_2</math> such that the sum of squares of the residuals
<math display="block">r_i = y_i - \frac{\beta_1 x_i}{\beta_2 + x_i}, \quad (i = 1, \dots, 7)</math>
is minimized.

The Jacobian <math>\mathbf{J_r}</math> of the vector of residuals <math>r_i</math> with respect to the unknowns <math>\beta_j</math> is a <math>7 \times 2</math> matrix with the <math>i</math>-th row having the entries
<math display="block">\frac{\partial r_i}{\partial \beta_1} = -\frac{x_i}{\beta_2 + x_i}; \quad \frac{\partial r_i}{\partial \beta_2} = \frac{\beta_1 \cdot x_i}{\left(\beta_2 + x_i\right)^2}.</math>

Starting with the initial estimates <math>\beta_1 = 0.9</math> and <math>\beta_2 = 0.2</math>, after five iterations of the Gauss–Newton algorithm the optimal values <math>\hat\beta_1 = 0.362</math> and <math>\hat\beta_2 = 0.556</math> are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters together with the observed data.
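This fit can be reproduced with a short Julia script (an illustrative sketch; the variable names are arbitrary, and the fixed count of five iterations simply mirrors the description above):
<syntaxhighlight lang="julia">
x = [0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740]        # substrate [S]
y = [0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317]    # observed rate

r(β) = y .- β[1] .* x ./ (β[2] .+ x)                          # residuals rᵢ
J(β) = hcat(-x ./ (β[2] .+ x), β[1] .* x ./ (β[2] .+ x).^2)   # 7×2 Jacobian

β = [0.9, 0.2]                         # initial estimates for (β₁, β₂)
for _ in 1:5                           # five Gauss-Newton iterations
    Jβ = J(β)
    β .+= -(Jβ' * Jβ) \ (Jβ' * r(β))   # in-place update of the parameters
end
# β ≈ [0.362, 0.556] and sum(abs2, r(β)) ≈ 0.00784, as reported above
</syntaxhighlight>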
== Convergence properties ==
The Gauss–Newton iteration is guaranteed to converge toward a local minimum point <math>\hat{\beta}</math> under four conditions:<ref name=DenSch>{{cite book |author1=J. E. Dennis, Jr. |author2=R. B. Schnabel |title=Numerical Methods for Unconstrained Optimization and Nonlinear Equations |date=1983 |publisher=SIAM 1996 reproduction of Prentice-Hall 1983 edition |page=222}}</ref>
* the functions <math>r_1,\ldots,r_m</math> are twice continuously differentiable in an open convex set <math>D\ni\hat{\beta}</math>,
* the Jacobian <math>\mathbf{J}_\mathbf{r}(\hat{\beta})</math> is of full column rank,
* the initial iterate <math>\beta^{(0)}</math> is near <math>\hat{\beta}</math>, and
* the local minimum value <math>|S(\hat{\beta})|</math> is small.
The convergence is quadratic if <math>|S(\hat{\beta})| = 0</math>.

It can be shown<ref>Björck (1996), p. 260.</ref> that the increment Δ is a [[descent direction]] for {{math|''S''}}, and, if the algorithm converges, then the limit is a [[stationary point]] of {{math|''S''}}. For a large minimum value <math>|S(\hat{\beta})|</math>, however, convergence is not guaranteed, not even [[local convergence]] as in [[Newton's method in optimization|Newton's method]], or convergence under the usual Wolfe conditions.<ref>{{citation |title=The divergence of the BFGS and Gauss Newton Methods |last1=Mascarenhas |journal=Mathematical Programming |date=2013 |volume=147 |issue=1 |pages=253–276 |doi=10.1007/s10107-013-0720-6 |arxiv=1309.7922 |s2cid=14700106}}</ref>

The rate of convergence of the Gauss–Newton algorithm can approach [[rate of convergence|quadratic]].<ref>Björck (1996), pp. 341, 342.</ref> The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix <math>\mathbf{J_r^\operatorname{T} J_r}</math> is [[ill-conditioned]]. For example, consider the problem with <math>m = 2</math> equations and <math>n = 1</math> variable, given by
<math display="block">\begin{align}
r_1(\beta) &= \beta + 1, \\
r_2(\beta) &= \lambda \beta^2 + \beta - 1.
\end{align}</math>
For <math>\lambda < 1</math>, <math>\beta = 0</math> is a local optimum. If <math>\lambda = 0</math>, then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1, then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally.<ref>Fletcher (1987), p. 113.</ref>

== Solving overdetermined systems of equations ==
The Gauss–Newton iteration
<math display="block">\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - J(\mathbf{x}^{(k)})^\dagger \mathbf{f}(\mathbf{x}^{(k)}), \quad k = 0, 1, \ldots</math>
is an effective method for solving [[overdetermined system]]s of equations in the form of <math>\mathbf{f}(\mathbf{x}) = \mathbf{0}</math> with
<math display="block">\mathbf{f}(\mathbf{x}) = \begin{bmatrix} f_1(x_1,\ldots,x_n) \\ \vdots \\ f_m(x_1,\ldots,x_n) \end{bmatrix}</math>
and <math>m > n</math>, where <math>J(\mathbf{x})^\dagger</math> is the [[Moore–Penrose inverse]] (also known as [[pseudoinverse]]) of the [[Jacobian matrix]] <math>J(\mathbf{x})</math> of <math>\mathbf{f}(\mathbf{x})</math>. It can be considered an extension of [[Newton's method]] and enjoys the same local quadratic convergence<ref name=DenSch/> toward isolated regular solutions.
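A minimal sketch of this iteration in Julia (the names <code>solve_overdetermined</code>, <code>f</code>, and <code>J</code> are illustrative, and a fixed iteration count stands in for a proper stopping test):
<syntaxhighlight lang="julia">
using LinearAlgebra

# Gauss-Newton iteration x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾ - J(x⁽ᵏ⁾)⁺ f(x⁽ᵏ⁾) for f(x) = 0 with m > n.
# `f` returns the m-vector of equation values and `J` its m×n Jacobian.
function solve_overdetermined(f, J, x₀; iters = 20)
    x = copy(x₀)
    for _ in 1:iters
        x -= pinv(J(x)) * f(x)   # Moore–Penrose pseudoinverse step;
                                 # for full column rank, J(x) \ f(x) gives the same step
    end
    return x
end
</syntaxhighlight>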
If a solution does not exist, but the initial iterate <math>\mathbf{x}^{(0)}</math> is near a point <math>\hat{\mathbf{x}} = (\hat{x}_1,\ldots,\hat{x}_n)</math> at which the sum of squares <math display="inline">\sum_{i=1}^m |f_i(x_1,\ldots,x_n)|^2 \equiv \|\mathbf{f}(\mathbf{x})\|_2^2</math> reaches a small local minimum, the Gauss–Newton iteration converges linearly to <math>\hat{\mathbf{x}}</math>. The point <math>\hat{\mathbf{x}}</math> is often called a [[least squares]] solution of the overdetermined system.

== Derivation from Newton's method ==
In what follows, the Gauss–Newton algorithm will be derived from [[Newton's method in optimization|Newton's method]] for function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear.<ref>{{Cite web |url=http://www.henley.ac.uk/web/FILES/maths/09-04.pdf |title=Archived copy |access-date=2014-04-25 |archive-url=https://web.archive.org/web/20160804022151/http://www.henley.ac.uk/web/FILES/maths/09-04.pdf |archive-date=2016-08-04 |url-status=dead }}</ref>

The recurrence relation for Newton's method for minimizing a function ''S'' of parameters <math>\boldsymbol\beta</math> is
<math display="block"> \boldsymbol\beta^{(s+1)} = \boldsymbol\beta^{(s)} - \mathbf H^{-1} \mathbf g,</math>
where '''g''' denotes the [[gradient|gradient vector]] of ''S'', and '''H''' denotes the [[Hessian matrix]] of ''S''.

Since <math display="inline">S = \sum_{i=1}^m r_i^2</math>, the gradient is given by
<math display="block">g_j = 2 \sum_{i=1}^m r_i \frac{\partial r_i}{\partial \beta_j}.</math>
Elements of the Hessian are calculated by differentiating the gradient elements, <math>g_j</math>, with respect to <math>\beta_k</math>:
<math display="block">H_{jk} = 2 \sum_{i=1}^m \left(\frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k} + r_i \frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}\right).</math>

The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by
<math display="block">H_{jk} \approx 2 \sum_{i=1}^m J_{ij} J_{ik},</math>
where <math display="inline">J_{ij} = {\partial r_i}/{\partial \beta_j}</math> are entries of the Jacobian '''J<sub>r</sub>'''. Note that when the exact Hessian is evaluated near an exact fit we have near-zero <math>r_i</math>, so the second term becomes near-zero as well, which justifies the approximation.

The gradient and the approximate Hessian can be written in matrix notation as
<math display="block">\mathbf{g} = 2 {\mathbf{J}_\mathbf{r}}^\operatorname{T} \mathbf{r}, \quad \mathbf{H} \approx 2 {\mathbf{J}_\mathbf{r}}^\operatorname{T} \mathbf{J_r}.</math>
These expressions are substituted into the recurrence relation above to obtain the operational equations
<math display="block"> \boldsymbol{\beta}^{(s+1)} = \boldsymbol\beta^{(s)} + \Delta; \quad \Delta = -\left(\mathbf{J_r}^\operatorname{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\operatorname{T} \mathbf{r}.</math>
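In matrix form, the quantities appearing in this derivation can be sketched in Julia as follows (assuming the residual vector <code>rβ</code> and Jacobian <code>Jβ</code> have been evaluated at the current iterate; the function name is illustrative):
<syntaxhighlight lang="julia">
# Gradient g = 2 Jᵀr and Gauss-Newton approximation H ≈ 2 JᵀJ of the Hessian,
# followed by the Newton-type step Δ = -H⁻¹ g = -(JᵀJ)⁻¹ Jᵀr.
function approximate_newton_step(rβ, Jβ)
    g = 2 * (Jβ' * rβ)    # gradient of S = ∑ rᵢ²
    H = 2 * (Jβ' * Jβ)    # second-derivative terms dropped
    return -(H \ g)       # the factors of 2 cancel in the step
end
</syntaxhighlight>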
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation
<math display="block">\left|r_i \frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}\right| \ll \left|\frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k}\right|</math>
that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected:<ref>Nocedal (1999), p. 259.</ref>
# The function values <math>r_i</math> are small in magnitude, at least around the minimum.
# The functions are only "mildly" nonlinear, so that <math display="inline">\frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k}</math> is relatively small in magnitude.

== Improved versions ==
With the Gauss–Newton method the sum of squares of the residuals ''S'' may not decrease at every iteration. However, since Δ is a descent direction, unless <math>S\left(\boldsymbol \beta^s\right)</math> is a stationary point, it holds that <math>S\left(\boldsymbol \beta^s + \alpha\Delta\right) < S\left(\boldsymbol \beta^s\right)</math> for all sufficiently small <math>\alpha > 0</math>. Thus, if divergence occurs, one solution is to employ a fraction <math>\alpha</math> of the increment vector Δ in the updating formula:
<math display="block"> \boldsymbol \beta^{s+1} = \boldsymbol \beta^s + \alpha \Delta.</math>
In other words, the increment vector is too long, but it still points "downhill", so going just a part of the way will decrease the objective function ''S''. An optimal value for <math>\alpha</math> can be found by using a [[line search]] algorithm, that is, the magnitude of <math>\alpha</math> is determined by finding the value that minimizes ''S'', usually using a [[line search|direct search method]] in the interval <math>0 < \alpha < 1</math> or a [[backtracking line search]] such as [[Backtracking line search|Armijo line search]]. Typically, <math>\alpha</math> should be chosen such that it satisfies the [[Wolfe conditions]] or the [[Goldstein conditions]].<ref>{{Cite book |title=Numerical optimization |last=Nocedal |first=Jorge |date=1999 |publisher=Springer |others=Wright, Stephen J., 1960- |isbn=0387227423 |location=New York |oclc=54849297}}</ref>

In cases where the direction of the shift vector is such that the optimal fraction α is close to zero, an alternative method for handling divergence is the use of the [[Levenberg–Marquardt algorithm]], a [[trust region]] method.<ref name="ab"/> The normal equations are modified in such a way that the increment vector is rotated towards the direction of [[steepest descent]],
<math display="block">\left(\mathbf{J^\operatorname{T} J + \lambda D}\right) \Delta = -\mathbf{J}^\operatorname{T} \mathbf{r},</math>
where '''D''' is a positive diagonal matrix. Note that when '''D''' is the identity matrix '''I''' and <math>\lambda \to +\infty</math>, then <math>\lambda \Delta = \lambda \left(\mathbf{J^\operatorname{T} J} + \lambda \mathbf{I}\right)^{-1} \left(-\mathbf{J}^\operatorname{T} \mathbf{r}\right) = \left(\mathbf{I} - \mathbf{J^\operatorname{T} J}/\lambda + \cdots \right) \left(-\mathbf{J}^\operatorname{T} \mathbf{r}\right) \to -\mathbf{J}^\operatorname{T} \mathbf{r}</math>; therefore the [[Direction (geometry, geography)|direction]] of Δ approaches the direction of the negative gradient <math>-\mathbf{J}^\operatorname{T} \mathbf{r}</math>.

The so-called Marquardt parameter <math>\lambda</math> may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time <math>\lambda</math> is changed.
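For a given value of the Marquardt parameter, the modified step can be sketched in Julia as follows (a sketch only; <code>λ</code> is assumed to be supplied, and '''D''' is taken as the identity purely for illustration):
<syntaxhighlight lang="julia">
using LinearAlgebra   # provides the identity operator I

# Damped increment from the modified normal equations (JᵀJ + λD) Δ = -Jᵀr,
# with D = I chosen here only for illustration.
damped_increment(Jβ, rβ, λ) = -((Jβ' * Jβ + λ * I) \ (Jβ' * rβ))
</syntaxhighlight>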
A more efficient strategy is this: when divergence occurs, increase the Marquardt parameter until there is a decrease in ''S''. Then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, when the Marquardt parameter can be set to zero; the minimization of ''S'' then becomes a standard Gauss–Newton minimization.

== Large-scale optimization ==
For large-scale optimization, the Gauss–Newton method is of special interest because it is often (though certainly not always) true that the matrix <math>\mathbf{J}_\mathbf{r}</math> is more [[Sparse matrix|sparse]] than the approximate Hessian <math>\mathbf{J}_\mathbf{r}^\operatorname{T} \mathbf{J_r}</math>. In such cases, the step calculation itself will typically need to be done with an approximate iterative method appropriate for large and sparse problems, such as the [[conjugate gradient method]].

In order to make this kind of approach work, one needs at least an efficient method for computing the product
<math display="block">{\mathbf{J}_\mathbf{r}}^\operatorname{T} \mathbf{J_r} \mathbf{p}</math>
for some vector '''p'''. With [[sparse matrix]] storage, it is in general practical to store the rows of <math>\mathbf{J}_\mathbf{r}</math> in a compressed form (e.g., without zero entries), making a direct computation of the above product tricky due to the transposition. However, if one defines '''c'''<sub>''i''</sub> as row ''i'' of the matrix <math>\mathbf{J}_\mathbf{r}</math>, the following simple relation holds:
<math display="block">{\mathbf{J}_\mathbf{r}}^\operatorname{T}\mathbf{J_r} \mathbf{p} = \sum_i \mathbf c_i \left(\mathbf c_i \cdot \mathbf{p}\right),</math>
so that every row contributes additively and independently to the product. In addition to respecting a practical sparse storage structure, this expression is well suited for [[Parallel computing|parallel computations]]. Note that every row '''c'''<sub>''i''</sub> is the gradient of the corresponding residual ''r''<sub>''i''</sub>; with this in mind, the formula above emphasizes the fact that residuals contribute to the problem independently of each other.
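A sketch of this row-wise accumulation in Julia (assuming the rows of <math>\mathbf{J}_\mathbf{r}</math> are available as an iterable <code>rows</code> of, possibly sparse, vectors; the function name is illustrative):
<syntaxhighlight lang="julia">
using LinearAlgebra   # provides dot

# Matrix-free product (JᵀJ) p accumulated row by row, where cᵢ is row i of J
# (equivalently, the gradient of the residual rᵢ).
function jtj_times_p(rows, p)
    out = zero(p)
    for cᵢ in rows
        out .+= cᵢ .* dot(cᵢ, p)    # each row contributes cᵢ (cᵢ ⋅ p)
    end
    return out
end
</syntaxhighlight>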
== Related algorithms ==
In a [[quasi-Newton method]], such as that due to [[Davidon–Fletcher–Powell formula|Davidon, Fletcher and Powell]] or Broyden–Fletcher–Goldfarb–Shanno ([[BFGS method]]), an estimate of the full Hessian <math display="inline">\frac{\partial^2 S}{\partial \beta_j \partial\beta_k}</math> is built up numerically using first derivatives <math display="inline">\frac{\partial r_i}{\partial\beta_j}</math> only, so that after ''n'' refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to nonlinear least-squares problems.

Another method for solving minimization problems using only first derivatives is [[gradient descent]]. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.

== Example implementations ==

=== Julia ===
The following implementation in [[Julia (programming language)|Julia]] provides one method which uses a provided Jacobian and another which computes it with [[automatic differentiation]].
<syntaxhighlight lang="julia">
"""
    gaussnewton(r,J,β₀,maxiter,tol)

Perform Gauss-Newton optimization to minimize the residual function `r` with
Jacobian `J` starting from `β₀`. The algorithm terminates when the norm of the
step is less than `tol` or after `maxiter` iterations.
"""
function gaussnewton(r,J,β₀,maxiter,tol)
    β = copy(β₀)
    for _ in 1:maxiter
        Jβ = J(β)
        Δ = -(Jβ'*Jβ) \ (Jβ'*r(β))
        β += Δ
        if sqrt(sum(abs2,Δ)) < tol
            break
        end
    end
    return β
end

import AbstractDifferentiation as AD, Zygote
backend = AD.ZygoteBackend() # other backends are available

"""
    gaussnewton(r,β₀,maxiter,tol)

Perform Gauss-Newton optimization to minimize the residual function `r`
starting from `β₀`. The relevant Jacobian is calculated using automatic
differentiation. The algorithm terminates when the norm of the step is less
than `tol` or after `maxiter` iterations.
"""
function gaussnewton(r,β₀,maxiter,tol)
    β = copy(β₀)
    for _ in 1:maxiter
        rβ, Jβ = AD.value_and_jacobian(backend,r,β)
        Δ = -(Jβ[1]'*Jβ[1]) \ (Jβ[1]'*rβ)
        β += Δ
        if sqrt(sum(abs2,Δ)) < tol
            break
        end
    end
    return β
end
</syntaxhighlight>

== Notes ==
<references />

== References ==
* {{cite book |last=Björck |first=A. |title=Numerical Methods for Least Squares Problems |publisher=SIAM |location=Philadelphia |year=1996 |isbn=0-89871-360-9}}
* {{Cite book |last1=Fletcher |first1=Roger |title=Practical Methods of Optimization |publisher=[[John Wiley & Sons]] |location=New York |edition=2nd |year=1987 |isbn=978-0-471-91547-8 |url-access=registration |url=https://archive.org/details/practicalmethods0000flet}}
* {{cite book |last1=Nocedal |first1=Jorge |last2=Wright |first2=Stephen |title=Numerical Optimization |publisher=Springer |location=New York |year=1999 |isbn=0-387-98793-2}}

== External links ==
* ''[http://www.incertitudes.fr/book.pdf Probability, Statistics and Estimation]'' The algorithm is detailed and applied to the biology experiment discussed as an example in this article (page 84 with the uncertainties on the estimated values).

=== Implementations ===
* [https://www.artelys.com/en/optimization-tools/knitro Artelys Knitro] is a non-linear solver with an implementation of the Gauss–Newton method. It is written in C and has interfaces to C++/C#/Java/Python/MATLAB/R.

{{Isaac Newton}}
{{Least Squares and Regression Analysis}}
{{Optimization algorithms|unconstrained|state=autocollapse}}

{{DEFAULTSORT:Gauss-Newton algorithm}}
[[Category:Optimization algorithms and methods]]
[[Category:Least squares]]
[[Category:Statistical algorithms]]