==Comparison with other root-finding methods==

The secant method does not require or guarantee that the root remains bracketed by successive iterates, as the [[bisection method]] does, and hence it does not always converge. The [[false position method]] (or {{lang|la|regula falsi}}) uses the same formula as the secant method. However, instead of applying the formula to <math>x_{n-1}</math> and <math>x_{n-2}</math>, as the secant method does, it applies it to <math>x_{n-1}</math> and to the last iterate <math>x_k</math> such that <math>f(x_k)</math> and <math>f(x_{n-1})</math> have opposite signs. This means that the false position method always converges, though generally only with a linear order of convergence. Bracketing with a super-linear order of convergence, like that of the secant method, can be attained with improvements to the false position method (see [[Regula falsi#Improvements in ''regula falsi''|Regula falsi § Improvements in ''regula falsi'']]) such as the [[ITP Method|ITP method]] or the [[Illinois Method|Illinois method]].

The recurrence formula of the secant method can be derived from the formula for [[Newton's method]]
:<math>x_n = x_{n-1} - \frac{f(x_{n-1})}{f'(x_{n-1})}</math>
by using the [[finite-difference]] approximation, for a small <math>\epsilon = x_{n-1} - x_{n-2}</math>:
:<math>f'(x_{n-1}) = \lim_{\epsilon \rightarrow 0} \frac{f(x_{n-1}) - f(x_{n-1} - \epsilon)}{\epsilon} \approx \frac{f(x_{n-1}) - f(x_{n-2})}{x_{n-1} - x_{n-2}}.</math>
The secant method can thus be interpreted as a method in which the derivative is replaced by an approximation, and it is therefore a [[quasi-Newton method]].

If we compare Newton's method with the secant method, we see that Newton's method converges faster (order 2 versus the [[golden ratio]] ''φ'' ≈ 1.6).<ref name=":0" /> However, Newton's method requires the evaluation of both <math>f</math> and its derivative <math>f'</math> at every step, while the secant method requires only the evaluation of <math>f</math>. Therefore, the secant method may sometimes be faster in practice. For instance, if evaluating <math>f</math> takes as much time as evaluating its derivative and all other costs are neglected, two steps of the secant method (decreasing the logarithm of the error by a factor of ''φ''<sup>2</sup> ≈ 2.6) cost the same as one step of Newton's method (decreasing the logarithm of the error by a factor of 2), so the secant method is faster.

In higher dimensions, the full set of [[partial derivative|partial derivatives]] required for Newton's method, that is, the [[Jacobian matrix and determinant|Jacobian matrix]], may become much more expensive to calculate than the function itself. If, however, parallel processing is available for evaluating the derivative or derivatives, Newton's method can be faster in wall-clock time while still costing more computational operations overall.
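For illustration, the following is a minimal Python sketch of the two iterations being compared. It is not drawn from the cited sources; the function names <code>secant</code> and <code>newton</code>, the test function, the tolerance, and the iteration cap are illustrative choices.

<syntaxhighlight lang="python">
import math

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton's update with f' replaced by the slope
    of the secant line through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:  # horizontal secant line: no further progress possible
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)  # secant update
        f0, f1 = f1, f(x1)                            # one new f-evaluation per step
        if abs(x1 - x0) < tol:
            return x1
    raise ArithmeticError("secant iteration did not converge")

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: evaluates both f and f' at every step."""
    for _ in range(max_iter):
        x1 = x0 - f(x0) / fprime(x0)  # Newton update
        if abs(x1 - x0) < tol:
            return x1
        x0 = x1
    raise ArithmeticError("Newton iteration did not converge")

def f(x):
    return x * x - 2  # root at sqrt(2); the starting pair need not bracket it

print(secant(f, 2.0, 3.0))              # ≈ 1.4142135623730951
print(newton(f, lambda x: 2 * x, 2.0))  # ≈ 1.4142135623730951
print(math.sqrt(2))                     # reference value
</syntaxhighlight>

Per step, the secant update needs one new evaluation of <math>f</math>, while the Newton update needs one evaluation each of <math>f</math> and <math>f'</math>; this is the cost trade-off quantified above.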