Newton's method
==Modifications==

===Quasi-Newton methods===
When the Jacobian is unavailable or too expensive to compute at every iteration, a [[quasi-Newton method]] can be used.

===Chebyshev's third-order method===
{{Main|Chebyshev iteration}}
Since higher-order Taylor expansions offer more accurate local approximations of a function {{mvar|f}}, it is reasonable to ask why Newton's method relies only on a second-order Taylor approximation. In the 19th century, the Russian mathematician Pafnuty Chebyshev explored this idea by developing a variant of Newton's method that used cubic approximations.<ref>{{cite book |last1=Chebyshev |first1=Pafnutii L'vovich |title=Polnoe sobranie sochinenii |last2=Bernshtein |first2=Sergei Natanovich |publisher=Izd-vo Akademii nauk SSSR |year=1947}}</ref><ref>{{Cite journal |last1=Ahmadi |first1=Amir Ali |last2=Chaudhry |first2=Abraar |last3=Zhang |first3=Jeffrey |date=2024 |title=Higher-order Newton methods with polynomial work per iteration |url=https://linkinghub.elsevier.com/retrieve/pii/S0001870824003232 |journal=Advances in Mathematics |language=en |volume=452 |pages=109808 |arxiv=2311.06374 |doi=10.1016/j.aim.2024.109808}}</ref><ref>{{Cite web |last=Hartnett |first=Kevin |date=2025-03-24 |title=Three Hundred Years Later, a Tool from Isaac Newton Gets an Update |url=https://www.quantamagazine.org/three-hundred-years-later-a-tool-from-isaac-newton-gets-an-update-20250324/ |access-date=2025-04-03 |website=Quanta Magazine |language=en}}</ref>

===Over {{mvar|p}}-adic numbers===
In {{mvar|p}}-adic analysis, the standard method to show that a polynomial equation in one variable has a {{mvar|p}}-adic root is [[Hensel's lemma]], which uses the recursion from Newton's method on the {{mvar|p}}-adic numbers.
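To make the lifting recursion concrete, it can be sketched with plain integer arithmetic. The sketch below is illustrative rather than taken from any cited source: the choices {{math|''f''(''x'') {{=}} ''x''{{sup|2}} − 2}} and {{math|''p'' {{=}} 7}} are arbitrary, and the name <code>hensel_lift</code> is ad hoc.

```python
# Illustrative sketch of Hensel lifting (not from the cited sources).
# A simple root of f mod p is lifted toward a p-adic root by the
# Newton recursion x <- x - f(x)/f'(x), carried out in modular
# arithmetic; the working precision doubles at every step.

def f(x):
    return x * x - 2          # arbitrary example polynomial

def df(x):
    return 2 * x              # its derivative

def hensel_lift(x, p, k):
    """Lift x, with f(x) = 0 (mod p) and f'(x) a unit mod p,
    through k doubling steps, returning (root, p**(2**k))."""
    modulus = p
    for _ in range(k):
        modulus *= modulus                # square the modulus
        inv = pow(df(x), -1, modulus)     # modular inverse of f'(x)
        x = (x - f(x) * inv) % modulus    # Newton step mod p^(2m)
    return x, modulus

# 3 is a square root of 2 mod 7, since 3^2 = 9 = 2 + 7
root, m = hensel_lift(3, 7, 4)
assert f(root) % m == 0                   # root of x^2 - 2 mod 7^16
```

Each pass squares the modulus, so four steps already yield a root modulo 7{{sup|16}}, mirroring the quadratic convergence of Newton's method over the reals.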
Because of the more stable behavior of addition and multiplication in the {{mvar|p}}-adic numbers compared to the real numbers (specifically, the unit ball in the {{mvar|p}}-adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line.

==={{mvar|q}}-analog===
Newton's method can be generalized with the [[q-analog|{{mvar|q}}-analog]] of the usual derivative.<ref>{{ cite journal| journal=Matematicki Vesnik| volume=54|number=3–4| pages=171–178 |date=2002| title=Mean value theorems in $q$-calculus| first1=Predrag M. | last1= Rajković|first2= Miomir S. | last2= Stanković| first3= Slađana D. | last3= Marinković |url=https://www.emis.de/journals/MV/0234/16.html}}</ref>

===Modified Newton methods===

====Maehly's procedure====
A nonlinear equation generally has multiple solutions. If the initial value is not chosen appropriately, Newton's method may fail to converge to the desired solution, or may converge to a solution that was already found. When {{mvar|N}} solutions of <math>f(x)=0</math> have already been found, the next root can be obtained by applying Newton's method to the following equation:<ref>{{harvnb|Press|Teukolsky|Vetterling|Flannery|2007}}</ref><ref>{{cite book | url=https://archive.org/details/introductiontonu0000stoe/mode/2up | page=279 | last1=Stoer|last2=Bulirsch|date=1980| first1= Josef | first2= Roland | oclc= 1244842246 | title= Introduction to numerical analysis | url-access= registration}}</ref>
<math display="block">F(x) = \frac{f(x)}{\prod_{i=1}^N(x-x_i)} = 0 .</math>
This method is applied to obtain zeros of the [[Bessel function]] of the second kind.<ref>{{cite book <!-- Citation bot: don't add identifiers.
there is a bogus doi at the s2cid for this title that actually points to a review of the book, and i know from experience that bots will conflate the two and bugger up the reference -->| last1= Zhang | first1= Shanjie | last2= Jin | first2= Jianming | date= 1996 | publisher= Wiley | title= Computation of Special Functions|isbn=9780471119630}}{{page needed |date=June 2024}}</ref>

====Hirano's modified Newton method====
Hirano's modified Newton method is a modification that preserves the convergence of Newton's method while avoiding instability.<ref>{{cite journal |first=Kazuo |last=Murota |date=1982 |doi=10.1137/0719055 |title=Global Convergence of a Modified Newton Iteration for Algebraic Equations |journal=SIAM Journal on Numerical Analysis |volume=19 |issue=4 |pages=793–799|bibcode=1982SJNA...19..793M }}</ref> It was developed to solve complex polynomials.

====Interval Newton's method====
{{cite check|section|date=February 2019}}
Combining Newton's method with [[interval arithmetic]] is very useful in some contexts. It provides a stopping criterion that is more reliable than the usual ones (a small value of the function, or a small change in the variable between consecutive iterations). It can also detect cases where Newton's method converges theoretically but diverges numerically because of insufficient [[floating-point arithmetic|floating-point precision]] (this is typically the case for polynomials of large degree, where a very small change of the variable may dramatically change the value of the function; see [[Wilkinson's polynomial]]).<ref>Moore, R. E. (1979). ''Methods and Applications of Interval Analysis'' (Vol. 2). SIAM.</ref><ref>Hansen, E. (1978). Interval forms of Newton's method.
''Computing'', 20(2), 153–163.</ref>

Consider {{math|{{var|f}} ∈ {{mathcal|C}}{{sup|1}}({{var|X}})}}, where {{mvar|X}} is a real interval, and suppose that we have an interval extension {{mvar|{{prime|F}}}} of {{mvar|{{prime|f}}}}, meaning that {{mvar|{{prime|F}}}} takes as input an interval {{math|{{var|Y}} ⊆ {{var|X}}}} and outputs an interval {{math|{{var|{{prime|F}}}}({{var|Y}})}} such that:
<math display="block">\begin{align} F'([y,y]) &= \{f'(y)\}\\[5pt] F'(Y) &\supseteq \{f'(y)\mid y \in Y\}. \end{align}</math>

We also assume that {{math|0 ∉ {{var|{{prime|F}}}}({{var|X}})}}, so in particular {{mvar|f}} has at most one root in {{mvar|X}}. We then define the interval Newton operator by:
<math display="block">N(Y) = m - \frac{f(m)}{F'(Y)} = \left\{\left.m - \frac{f(m)}{z} ~\right|~ z \in F'(Y)\right\}</math>
where {{math|{{var|m}} ∈ {{var|Y}}}}. Note that the hypothesis on {{mvar|{{prime|F}}}} implies that {{math|{{var|N}}({{var|Y}})}} is well defined and is an interval (see [[interval arithmetic]] for further details on interval operations). This naturally leads to the following sequence:
<math display="block">\begin{align} X_0 &= X\\ X_{k+1} &= N(X_k) \cap X_k. \end{align}</math>

The [[mean value theorem]] ensures that if there is a root of {{mvar|f}} in {{math|{{var|X}}{{sub|{{var|k}}}}}}, then it is also in {{math|{{var|X}}{{sub|{{var|k}} + 1}}}}. Moreover, the hypothesis on {{mvar|{{prime|F}}}} ensures that {{math|{{var|X}}{{sub|{{var|k}} + 1}}}} is at most half the size of {{math|{{var|X}}{{sub|{{var|k}}}}}} when {{mvar|m}} is the midpoint of {{mvar|Y}}, so this sequence converges towards {{math|[{{var|x*}}, {{var|x*}}]}}, where {{mvar|x*}} is the root of {{mvar|f}} in {{mvar|X}}.

If {{math|{{var|{{prime|F}}}}({{var|X}})}} strictly contains 0, the use of extended interval division produces a union of two intervals for {{math|{{var|N}}({{var|X}})}}; multiple roots are therefore automatically separated and bounded.