== Iterative methods ==

Although all root-finding algorithms proceed by [[iteration]], an [[iterative method|iterative]] root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root to obtain a new approximation. The iteration stops when a [[fixed-point iteration|fixed point]] of the auxiliary function is reached to the desired precision, i.e., when each newly computed value is sufficiently close to the preceding ones. Short code sketches of the methods described below are collected at the end of this section.

=== Newton's method (and similar derivative-based methods) ===

[[Newton's method]] assumes the function ''f'' to have a continuous [[derivative]]. Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method; its [[order of convergence]] is usually quadratic, whereas the bisection method's is linear. Newton's method is also important because it readily generalizes to higher-dimensional problems. [[Householder's method]]s are a class of Newton-like methods with higher orders of convergence. The first one after Newton's method is [[Halley's method]], with cubic order of convergence.

=== Secant method ===

Replacing the derivative in Newton's method with a [[finite difference]] gives the [[secant method]]. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order of convergence is the [[golden ratio]], approximately 1.62<ref>{{Cite web |last=Chasnov |first=Jeffrey R. |date=October 3, 2024 |title=Order of Convergence |url=https://math.libretexts.org/Bookshelves/Applied_Mathematics/Numerical_Methods_(Chasnov)/02%3A_Root_Finding/2.04%3A_Order_of_Convergence |access-date=October 3, 2024 |website=LibreTexts Mathematics}}</ref>). A generalization of the secant method to higher dimensions is [[Broyden's method]].

=== Steffensen's method ===

If we use a polynomial fit to remove the quadratic part of the finite difference used in the secant method, so that it better approximates the derivative, we obtain [[Steffensen's method]], which has quadratic convergence and whose behavior (both good and bad) is essentially the same as Newton's method, but which does not require a derivative.

=== Fixed point iteration method ===

We can use [[fixed-point iteration]] to find the root of a function. Given a function <math>f(x)</math> whose root is sought, i.e. for which <math>f(x)=0</math>, we rewrite the equation in terms of <math>x</math> so that <math>f(x)=0</math> becomes <math>x=g(x)</math> (note that there are often many possible functions <math>g(x)</math> for a given equation <math>f(x)=0</math>). Next, we relabel the two sides of the equation as <math>x_{n+1}=g(x_n)</math> so that we can perform the iteration. Finally, we pick a starting value <math>x_1</math> and iterate until the values converge towards a root of the function. If the iteration converges, it converges to a root. The iteration converges locally to a root <math>r</math> when <math>|g'(r)|<1</math>, and diverges from it when <math>|g'(r)|>1</math>.

As an example of converting <math>f(x)=0</math> to <math>x=g(x)</math>, given the function <math>f(x)=x^2+x-1</math>, we may rewrite it as any of the following iterations:
:<math>x_{n+1}=(1/x_n) - 1</math>,
:<math>x_{n+1}=1/(x_n+1)</math>,
:<math>x_{n+1}=1-x_n^2</math>,
:<math>x_{n+1}=x_n^2+2x_n-1</math>, or
:<math>x_{n+1}=\pm \sqrt{1-x_n}</math>.
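To make the updates above concrete, the following minimal sketches implement each method in Python for the example function <math>f(x)=x^2+x-1</math> used above. All function and parameter names are illustrative, and the code omits the safeguards (damping, bracketing, checks for a vanishing denominator) that a robust implementation would need. First, the Newton iteration <math>x_{n+1}=x_n-f(x_n)/f'(x_n)</math>:

<syntaxhighlight lang="python">
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)  # undefined if f'(x) = 0; no safeguard here
    raise RuntimeError("Newton's method did not converge")

# Positive root of x^2 + x - 1 is (sqrt(5) - 1)/2 ≈ 0.6180339887.
print(newton(lambda x: x * x + x - 1, lambda x: 2 * x + 1, x0=1.0))
</syntaxhighlight>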
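The secant method replaces the derivative with the finite difference through the last two iterates; a sketch under the same assumptions:

<syntaxhighlight lang="python">
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's update with f'(x_n) replaced by the
    finite difference (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1})."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)  # fails if f1 == f0
        f0, f1 = f1, f(x1)
    raise RuntimeError("secant method did not converge")

print(secant(lambda x: x * x + x - 1, 0.0, 1.0))
</syntaxhighlight>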
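One common way to write Steffensen's derivative-free step estimates the slope from values of <math>f</math> alone, as <math>\left(f(x_n+f(x_n))-f(x_n)\right)/f(x_n)</math>; a sketch:

<syntaxhighlight lang="python">
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method: a Newton-like step whose slope estimate
    (f(x + f(x)) - f(x)) / f(x) uses only values of f."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        slope = (f(x + fx) - fx) / fx  # no derivative required
        x -= fx / slope
    raise RuntimeError("Steffensen's method did not converge")

print(steffensen(lambda x: x * x + x - 1, x0=0.5))
</syntaxhighlight>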
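Finally, fixed-point iteration with <math>g(x)=1/(x+1)</math>, the second rewrite in the list above. Near the root <math>|g'(x)|\approx 0.38<1</math>, so the iteration converges; by contrast, the rewrite <math>g(x)=x^2+2x-1</math> has <math>|g'(x)|>1</math> there, and its iterates are repelled from the root.

<syntaxhighlight lang="python">
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# g(x) = 1/(x + 1): |g'(x)| = 1/(x + 1)^2 ≈ 0.38 < 1 near the root,
# so the iterates converge to (sqrt(5) - 1)/2 ≈ 0.6180339887.
print(fixed_point(lambda x: 1.0 / (x + 1.0), x0=1.0))
</syntaxhighlight>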
=== Inverse interpolation ===

The appearance of complex values in interpolation methods can be avoided by interpolating the [[inverse function|inverse]] of ''f'', resulting in the [[inverse quadratic interpolation]] method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
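A minimal sketch under the same assumptions as the sketches above: the inverse of <math>f</math> is interpolated by a quadratic in <math>y=f(x)</math> (in Lagrange form) through the last three iterates, and the next iterate is the value of that quadratic at <math>y=0</math>. The three function values must stay pairwise distinct, which this sketch does not guard against; robust solvers such as [[Brent's method]] combine the step with bisection.

<syntaxhighlight lang="python">
def inverse_quadratic(f, x0, x1, x2, tol=1e-12, max_iter=100):
    """Inverse quadratic interpolation: fit x as a quadratic in y = f(x)
    through the last three iterates; the next iterate is its value at y = 0."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        if abs(f2) < tol:
            return x2
        # Lagrange interpolation of the inverse function, evaluated at y = 0;
        # requires f0, f1, f2 to be pairwise distinct.
        x3 = (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
              + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
              + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))
        x0, x1, x2 = x1, x2, x3
        f0, f1, f2 = f1, f2, f(x3)
    raise RuntimeError("inverse quadratic interpolation did not converge")

print(inverse_quadratic(lambda x: x * x + x - 1, 0.0, 0.5, 1.0))
</syntaxhighlight>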