===Fourier conditions===

Suppose that {{math|''f''(''x'')}} is a [[concave function|concave]], [[strictly increasing]] function on an interval. If it is negative at the left endpoint and positive at the right endpoint, the [[intermediate value theorem]] guarantees that there is a zero {{math|ζ}} of {{mvar|f}} somewhere in the interval. From geometrical principles, it can be seen that the Newton iteration {{math|''x''<sub>''i''</sub>}} starting at the left endpoint is [[monotonic function|monotonically increasing]] and convergent, necessarily to {{math|ζ}}.<ref name="ostrowski">{{cite book|last1=Ostrowski|first1=A. M.|title=Solution of equations in Euclidean and Banach spaces|year=1973|mr=0359306|series=Pure and Applied Mathematics|volume=9|publisher=[[Academic Press]]|location=New York–London|author-link1=Alexander Ostrowski|zbl=0304.65002|edition=Third edition of 1960 original}}</ref>

[[Joseph Fourier]] introduced a modification of Newton's method starting at the right endpoint:

:<math>y_{i+1}=y_i-\frac{f(y_i)}{f'(x_i)}.</math>

This sequence is monotonically decreasing and convergent. By passing to the limit in this definition, it can be seen that the limit of {{math|''y''<sub>''i''</sub>}} must also be the zero {{math|ζ}}.<ref name="ostrowski" />

So, in the case of a concave increasing function with a zero, initialization is largely irrelevant: Newton iteration starting anywhere left of the zero will converge, as will Fourier's modified Newton iteration starting anywhere right of the zero. The accuracy at any step can be read off directly from the difference {{math|''y''<sub>''i''</sub> − ''x''<sub>''i''</sub>}} between the iterates approaching from the right and from the left, since the two iterates bracket the zero.
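The two-sided scheme described above can be sketched in a few lines. The function {{math|''f''(''x'') {{=}} 1 − ''e''<sup>−''x''</sup>}} used here is an illustrative choice, not one taken from the source: it is concave and strictly increasing on {{math|[−1, 1]}}, negative at the left endpoint, positive at the right, with zero {{math|ζ {{=}} 0}}. Note that both updates use the slope {{math|''f''′(''x''<sub>''i''</sub>)}} at the left iterate, as in Fourier's formula.

```python
import math

# Illustrative function (not from the source): concave, strictly increasing,
# f(-1) < 0 < f(1), with zero at zeta = 0.
f = lambda x: 1.0 - math.exp(-x)
fprime = lambda x: math.exp(-x)

x, y = -1.0, 1.0  # Newton iterate from the left, Fourier iterate from the right
for _ in range(8):
    slope = fprime(x)      # derivative at the left iterate, used by both updates
    x = x - f(x) / slope   # standard Newton step: monotonically increases to zeta
    y = y - f(y) / slope   # Fourier's modification: monotonically decreases to zeta
```

After a handful of iterations, both iterates agree with the zero to machine precision, and at every step the interval {{math|[''x''<sub>''i''</sub>, ''y''<sub>''i''</sub>]}} contains {{math|ζ}}, so the bracket width is a computable error bound.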
If {{mvar|f}} is twice continuously differentiable, it can be proved using [[Taylor's theorem]] that

:<math>\lim_{i\to\infty}\frac{y_{i+1}-x_{i+1}}{(y_i-x_i)^2}=-\frac{1}{2}\frac{f''(\zeta)}{f'(\zeta)},</math>

showing that this difference in locations converges quadratically to zero.<ref name="ostrowski" />

All of the above can be extended to systems of equations in multiple variables, although in that context the relevant concepts of [[monotonicity]] and concavity are more subtle to formulate.<ref>Ortega and Rheinboldt, Section 13.3</ref> In the case of single equations in a single variable, the above monotonic convergence of Newton's method can also be generalized to replace concavity by positivity or negativity conditions on an arbitrary higher-order derivative of {{mvar|f}}. However, in this generalization, Newton's iteration is modified so as to be based on [[Taylor polynomial]]s rather than the [[tangent line]]. In the case of concavity, this modification coincides with the standard Newton method.<ref>{{cite book|last1=Traub|first1=J. F.|title=Iterative methods for the solution of equations|year=1964|series=Prentice-Hall Series in Automatic Computation|publisher=[[Prentice-Hall, Inc.]]|location=Englewood Cliffs, NJ|mr=0169356|author-link1=Joseph F. Traub|zbl=0121.11204}}</ref>
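The quadratic decay of the bracket width can be checked numerically. The sketch below again uses the illustrative function {{math|''f''(''x'') {{=}} 1 − ''e''<sup>−''x''</sup>}} (an assumption, not from the source); since {{math|''f''′(0) {{=}} 1}} and {{math|''f''″(0) {{=}} −1}}, the limit predicted by the formula is {{math|−(1/2)·(−1)/1 {{=}} 1/2}}.

```python
import math

# Illustrative function (not from the source), zero at zeta = 0:
# fprime(0) = 1 and the second derivative at 0 is -1, so the predicted
# limit of (y_{i+1} - x_{i+1}) / (y_i - x_i)^2 is 1/2.
f = lambda x: 1.0 - math.exp(-x)
fprime = lambda x: math.exp(-x)

x, y = -1.0, 1.0
ratios = []
for _ in range(6):
    gap = y - x            # bracket width y_i - x_i before the update
    slope = fprime(x)
    x -= f(x) / slope      # Newton step from the left
    y -= f(y) / slope      # Fourier's step from the right
    ratios.append((y - x) / gap**2)
```

The successive ratios approach 0.5, matching {{math|−(1/2)''f''″(ζ)/''f''′(ζ)}} for this choice of {{mvar|f}}; a few more iterations would underflow the bracket width, so the loop is kept short.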