{{Short description|Failure of convergence in interpolation}}
[[Image:Runge_phenomenon.svg|right|thumb|upright=1.5|Demonstration of the Runge phenomenon in which the oscillations near the interval's boundaries increase with higher order polynomial interpolations<br> {{legend0|#FF0000|The function <math>\frac{1}{1+25x^2}\,</math>}}<br>{{legend0|#0000FF|A fifth order polynomial interpolation (exact replication of the red curve at 6 points)}}<br>{{legend0|#009F00|A ninth order polynomial interpolation (exact replication of the red curve at 10 points)}}]]
In the [[mathematics|mathematical]] field of [[numerical analysis]], '''Runge's phenomenon''' ({{IPA|de|ˈʁʊŋə|lang}}) is a problem of oscillation at the edges of an interval that occurs when using [[polynomial interpolation]] with polynomials of high degree over a set of equispaced interpolation points. It was discovered by [[Carl David Tolmé Runge]] (1901) when exploring the behavior of errors when using polynomial interpolation to approximate certain functions.<ref>{{Citation | first = Carl | last = Runge | author-link = Carl David Tolmé Runge | year = 1901 | title = Über empirische Funktionen und die Interpolation zwischen äquidistanten Ordinaten | journal = Zeitschrift für Mathematik und Physik | volume = 46 | pages = 224–243 | postscript = . }} available at [https://archive.org/details/zeitschriftfrma12runggoog www.archive.org]</ref> The discovery shows that going to higher degrees does not always improve accuracy. The phenomenon is similar to the [[Gibbs phenomenon]] in Fourier series approximations.
The [[Weierstrass approximation theorem]] states that for every [[continuous function]] <math>f(x)</math> defined on an [[interval (mathematics)|interval]] <math>[a, b]</math>, there exists a set of [[polynomial]] functions <math>P_n(x)</math> for <math>n=0, 1, 2, \ldots</math>, each of degree at most <math>n</math>, that approximates <math>f(x)</math> with [[uniform convergence]] over <math>[a, b]</math> as <math>n</math> tends to infinity. This can be expressed as:

:<math>\lim_{n \rightarrow \infty} \left( \sup_{a \leq x \leq b} \left| f(x) - P_n(x) \right| \right) = 0.</math>

Consider the case where one desires to [[Interpolation|interpolate]] through <math>n+1</math> equispaced points of a function <math>f(x)</math> using the <math>n</math>-degree polynomial <math>P_n(x)</math> that passes through those points. Naturally, one might expect from Weierstrass' theorem that using more points would lead to a more accurate reconstruction of <math>f(x)</math>. However, this ''particular'' set of polynomial functions <math>P_n(x)</math> is not guaranteed to have the property of uniform convergence; the theorem only states that a set of polynomial functions exists, without providing [[Bernstein polynomial|a general method of finding one]]. The <math>P_n(x)</math> produced in this manner may in fact diverge away from <math>f(x)</math> as <math>n</math> increases; this typically occurs in an oscillating pattern that magnifies near the ends of the interpolation points. The discovery of this phenomenon is attributed to Runge.<ref>{{cite journal|author=Epperson, James|title=On the Runge example|journal=Amer. Math. Monthly|volume=94|year=1987|issue=4 |pages=329–341|url=http://www.maa.org/programs/maa-awards/writing-awards/on-the-runge-example|doi=10.2307/2323093|jstor=2323093 }}</ref>

==Problem==
Consider the '''Runge function'''

:<math>f(x) = \frac{1}{1+25x^2}\,</math>

(a scaled version of the [[Witch of Agnesi]]).
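A short numerical experiment makes the failure concrete. The following is an illustrative sketch (not part of the original article): it assumes NumPy, uses <code>numpy.polyfit</code> to build the interpolating polynomial through equidistant samples of the Runge function, and measures the maximum error on a fine grid.

```python
import numpy as np

def runge(x):
    """Runge's function f(x) = 1 / (1 + 25 x^2)."""
    return 1.0 / (1.0 + 25.0 * x**2)

def max_error_equispaced(n, num_eval=2001):
    """Max |f - P_n| on [-1, 1], with P_n interpolating f at n+1 equidistant nodes."""
    nodes = np.linspace(-1.0, 1.0, n + 1)        # x_i = 2i/n - 1
    coeffs = np.polyfit(nodes, runge(nodes), n)  # degree-n interpolating polynomial
    x = np.linspace(-1.0, 1.0, num_eval)
    return np.max(np.abs(runge(x) - np.polyval(coeffs, x)))

for n in (5, 9, 13):
    print(n, max_error_equispaced(n))
```

Rather than shrinking, the printed maximum error grows with the degree, which is precisely the divergence at the interval's edges discussed in this article.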
Runge found that if this function is [[interpolation|interpolated]] at equidistant points ''x''<sub>''i''</sub> between −1 and 1 such that:

:<math>x_i = \frac{2i}{n} - 1,\quad i \in \left\{ 0, 1, \dots, n \right\}</math>

with a [[polynomial]] ''P''<sub>''n''</sub>(''x'') of degree ≤ ''n'', the resulting interpolation oscillates toward the ends of the interval, i.e. close to −1 and 1. It can even be proven that the interpolation error increases (without bound) when the degree of the polynomial is increased:

:<math>\lim_{n \rightarrow \infty} \left( \sup_{-1 \leq x \leq 1} | f(x) - P_n(x)| \right) = \infty.</math>

This shows that high-degree polynomial interpolation at equidistant points can be troublesome.

===Reason===
Runge's phenomenon is the consequence of two properties of this problem.
* The magnitude of the ''n''-th order derivatives of this particular function grows quickly when ''n'' increases.
* The equidistance between points leads to a [[Lebesgue constant]] that increases quickly when ''n'' increases.

The phenomenon is graphically obvious because both properties combine to increase the magnitude of the oscillations. The error between the generating function and the interpolating polynomial of order ''n'' is given by

:<math>f(x) - P_n(x) = \frac{f^{(n + 1)}(\xi)}{(n + 1)!} \prod_{i=0}^{n} (x - x_i)</math>

for some <math>\xi</math> in (−1, 1). Thus,

:<math>\max_{-1 \leq x \leq 1} |f(x) - P_n(x)| \leq \max_{-1 \leq x \leq 1} \frac{\left|f^{(n + 1)}(x)\right|}{(n + 1)!} \max_{-1 \leq x \leq 1} \prod_{i=0}^n |x - x_i|.</math>

Denote by <math>w_n(x)</math> the nodal function

:<math>w_n(x) = (x - x_0)(x - x_1)\cdots(x - x_n)</math>

and let <math>W_n</math> be the maximum of the magnitude of the <math>w_n</math> function:

:<math>W_n=\max_{-1 \leq x \leq 1} |w_n(x)|.</math>

It is elementary to prove that with equidistant nodes

:<math>W_n \leq n!h^{n+1}</math>

where <math>h=2/n</math> is the step size.
Moreover, assume that the (''n''+1)-th derivative of <math>f</math> is bounded, i.e.

:<math>\max_{-1 \leq x \leq 1} |f^{(n+1)}(x)| \leq M_{n+1}.</math>

Therefore,

:<math>\max_{-1 \leq x \leq 1} |f(x) - P_{n}(x)| \leq M_{n+1} \frac{h^{n+1}}{n+1}.</math>

But the magnitude of the (''n''+1)-th derivative of Runge's function increases when ''n'' increases. The consequence is that the resulting upper bound tends to infinity when ''n'' tends to infinity. Although often used to explain the Runge phenomenon, the fact that the upper bound of the error goes to infinity does not necessarily imply that the error itself also diverges with ''n''.

==Mitigations==
===Change of interpolation points===
The oscillation can be minimized by using nodes that are distributed more densely towards the edges of the interval, specifically, with asymptotic density (on the interval <math>[-1,1]</math>) given by the formula<ref>{{Citation | last1=Berrut | first1=Jean-Paul | last2=Trefethen | first2=Lloyd N. | author2-link=Lloyd N. Trefethen | title=Barycentric Lagrange interpolation | doi=10.1137/S0036144502417715 | year=2004 | journal=SIAM Review | issn=1095-7200 | volume=46 | pages=501–517 | issue=3| bibcode=2004SIAMR..46..501B | citeseerx=10.1.1.15.5097 }}</ref>

:<math>\frac{1}{\sqrt{1-x^2}}.</math>

A standard example of such a set of nodes is [[Chebyshev nodes]], for which the maximum error in approximating the Runge function is guaranteed to diminish with increasing polynomial order.
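The improvement from Chebyshev nodes can be checked directly by interpolating the Runge function at both node sets. This is an illustrative sketch, not from the article; it assumes NumPy, and the degree is an arbitrary choice.

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def max_error(nodes, num_eval=2001):
    """Max |f - P| on [-1, 1] for the polynomial interpolating f at the given nodes."""
    coeffs = np.polyfit(nodes, runge(nodes), len(nodes) - 1)
    x = np.linspace(-1.0, 1.0, num_eval)
    return np.max(np.abs(runge(x) - np.polyval(coeffs, x)))

n = 14
equi = np.linspace(-1.0, 1.0, n + 1)                             # equidistant nodes
cheb = np.cos((2.0 * np.arange(n + 1) + 1.0) * np.pi / (2.0 * (n + 1)))  # Chebyshev nodes

print(max_error(equi))   # large: Runge's phenomenon at the edges
print(max_error(cheb))   # small: clustering the nodes suppresses it
```

The Chebyshev nodes cluster toward ±1 with exactly the <math>1/\sqrt{1-x^2}</math> density given above, which is why the second error is orders of magnitude smaller than the first.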
===S-Runge algorithm without resampling===
When equidistant samples must be used because resampling on well-behaved sets of nodes is not feasible, the S-Runge algorithm can be considered.<ref name="FakeNodes">{{citation |first1 = Stefano |last1 = De Marchi | author1-link = Stefano De Marchi |first2 = Francesco |last2 = Marchetti |first3 = Emma |last3 = Perracchione |first4 = Davide |last4 = Poggiali |title = Polynomial interpolation via mapped bases without resampling |doi = 10.1016/j.cam.2019.112347 |journal = J. Comput. Appl. Math. |volume = 364 |year = 2020 |issn = 0377-0427 |doi-access = free }}</ref> In this approach, the original set of nodes is mapped onto the set of [[Chebyshev nodes]], providing a stable polynomial reconstruction. The peculiarity of this method is that there is no need to resample at the mapped nodes, which are also called ''[[fake nodes]]''. A [[Python (programming language)|Python]] implementation of this procedure can be found [https://github.com/pog87/FakeNodes here].

===Use of piecewise polynomials===
The problem can be avoided by using [[spline curve]]s, which are piecewise polynomials. To decrease the interpolation error, one can increase the number of polynomial pieces used to construct the spline instead of increasing the degree of the polynomials.

===Constrained minimization===
One can also fit a polynomial of higher degree (for instance, with <math>n</math> points use a polynomial of order <math>N = n^2</math> instead of <math>n + 1</math>), and fit an interpolating polynomial whose first (or second) derivative has minimal [[Lp space|<math>L^2</math> norm]]. A similar approach is to minimize a constrained version of the <math>L^p</math> distance between the polynomial's <math>m</math>-th derivative and the mean value of its <math>m</math>-th derivative.
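The first idea, an interpolant of inflated degree whose first derivative has minimal <math>L^2</math> norm, is an equality-constrained quadratic program that can be solved through its KKT system. The following is an illustrative NumPy sketch in the monomial basis (not from the article; the point count and degree are arbitrary choices, and a well-conditioned basis such as Chebyshev polynomials would be preferable for higher degrees).

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n, N = 9, 16                      # n interpolation points, polynomial degree N >= n - 1
x_i = np.linspace(-1.0, 1.0, n)
f_i = runge(x_i)

# A[j, k] = integral over [-1, 1] of (x^j)' (x^k)' dx; zero when j + k is odd
A = np.zeros((N + 1, N + 1))
for j in range(1, N + 1):
    for k in range(1, N + 1):
        if (j + k) % 2 == 0:
            A[j, k] = 2.0 * j * k / (j + k - 1)

V = np.vander(x_i, N + 1, increasing=True)    # interpolation constraints: V @ c = f_i

# KKT system of: minimize c^T A c  subject to  V c = f_i
kkt = np.block([[2.0 * A, V.T],
                [V, np.zeros((n, n))]])
rhs = np.concatenate([np.zeros(N + 1), f_i])
c = np.linalg.solve(kkt, rhs)[:N + 1]

x = np.linspace(-1.0, 1.0, 2001)
p = np.polyval(c[::-1], x)        # flip: polyval wants the highest power first
print(np.max(np.abs(p)))          # stays tame instead of blowing up at the edges
```

Because the objective penalizes derivative energy, the extra degrees of freedom are spent flattening the interpolant between the nodes rather than oscillating near ±1.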
Explicitly, to minimize

:<math> V_p = \int_a^b \left|\frac{\mathrm{d}^m P_N(x)}{\mathrm{d} x^m} - \frac{1}{b-a} \int_a^b \frac{\mathrm{d}^m P_N(z)}{\mathrm{d} z^m} \mathrm{d}z\right|^p \mathrm{d} x - \sum_{i=1}^n \lambda_i \, \left(P_N(x_i) - f(x_i)\right), </math>

where <math> N \ge n - 1</math> and <math> m < N </math>, with respect to the polynomial coefficients and the [[Lagrange multiplier]]s <math>\lambda_i</math>. When <math>N = n - 1</math>, the constraint equations generated by the Lagrange multipliers reduce <math>P_N(x)</math> to the minimum polynomial that passes through all <math>n</math> points. At the opposite end, <math>\lim_{N \to \infty} P_N(x)</math> will approach a form very similar to a piecewise polynomial approximation. When <math>m=1</math>, in particular, <math>\lim_{N \to \infty} P_N(x)</math> approaches the piecewise linear interpolant, i.e. it connects the interpolation points with straight lines.

The role played by <math>p</math> in the process of minimizing <math>V_p</math> is to control the importance of the size of the fluctuations away from the mean value. The larger <math>p</math> is, the more large fluctuations are penalized compared to small ones. The greatest advantage of the Euclidean norm, <math>p=2</math>, is that it allows for analytic solutions and guarantees that <math>V_p</math> will have only a single minimum. When <math>p\neq 2</math> there can be multiple minima in <math>V_p</math>, making it difficult to ensure that a particular minimum found will be the [[Maxima and minima|global minimum]] instead of a local one.

===Least squares fitting===
{{main|Polynomial fit}}
Another method is fitting a polynomial of lower degree using the method of [[least squares]].
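A minimal sketch of the least-squares approach (illustrative, not from the article; NumPy assumed, and the sample and degree counts are arbitrary choices): fitting a polynomial of degree well below the number of equidistant samples avoids the edge oscillations of exact interpolation.

```python
import numpy as np

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

m, N = 100, 15                               # many samples, modest degree
x_s = np.linspace(-1.0, 1.0, m)
coeffs = np.polyfit(x_s, runge(x_s), N)      # least-squares fit, not interpolation

x = np.linspace(-1.0, 1.0, 2001)
ls_err = np.max(np.abs(runge(x) - np.polyval(coeffs, x)))
print(ls_err)                                # small uniform error, no edge blow-up
```

With far more samples than coefficients, <code>polyfit</code> solves an overdetermined system, so the polynomial need not pass through every point and the Runge oscillations never build up.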
Generally, when using <math>m</math> equidistant points, if <math>N < 2\sqrt{m}</math> then the least squares approximation <math>P_N(x)</math> is well-conditioned.<ref>{{Citation | last1=Dahlquist | first1=Germund | last2=Björk | first2=Åke | author1-link=Germund Dahlquist | title=Numerical Methods | year=1974 | isbn=0-13-627315-7 | chapter=4.3.4. Equidistant Interpolation and the Runge Phenomenon | pages=[https://archive.org/details/numericalmethods00dahl/page/101 101–103] | url-access=registration | url=https://archive.org/details/numericalmethods00dahl/page/101 }}</ref>

===Bernstein polynomial===
Using [[Bernstein polynomial]]s, one can uniformly approximate every continuous function in a closed interval, although this method is rather computationally expensive.{{Citation needed|date=December 2019}}

===External fake constraints interpolation===
This method proposes to optimally stack a dense distribution of constraints of the type {{math|1=''P''″(''x'') = 0}} on nodes positioned externally near the endpoints of each side of the interpolation interval, where {{math|''P''″(''x'')}} is the second derivative of the interpolation polynomial. Those constraints are called External Fake Constraints as they do not belong to the interpolation interval and they do not match the behaviour of the Runge function.
The method has demonstrated better interpolation performance than piecewise polynomials (splines) in mitigating the Runge phenomenon.<ref>{{citation | last1 = Belanger | first1 = Nicolas | title = External Fake Constraints Interpolation: the end of Runge phenomenon with high degree polynomials relying on equispaced nodes – Application to aerial robotics motion planning | publisher = Proceedings of the 5th Institute of Mathematics and its Applications Conference on Mathematics in Defence | year = 2017 | url = https://cdn.ima.org.uk/wp/wp-content/uploads/2017/03/Belanger-paper.pdf}}</ref>

==Related statements from [[approximation theory]]==
For every predefined table of interpolation nodes there is a continuous function for which the sequence of interpolation polynomials on those nodes diverges.<ref>{{citation | last1 = Cheney | first1 = Ward | last2 = Light | first2 = Will | title = A Course in Approximation Theory | page = 19 | publisher = Brooks/Cole | year = 2000 | isbn = 0-534-36224-9 | url = http://www.ams.org/bookstore-getitem/item=gsm-101}}</ref> For every continuous function there is a table of nodes on which the interpolation process converges.{{Citation needed|date=March 2014}} Chebyshev interpolation (i.e., on [[Chebyshev nodes]]) converges uniformly for every absolutely continuous function.

==See also==
* [[Chebyshev nodes]]
* Compare with the [[Gibbs phenomenon]] for sinusoidal basis functions
* [[Occam's razor]] argues for simpler models
* [[Overfitting]] can be caused by excessive model complexity
* [[Schwarz lantern]], another example of failure of convergence
* [[Stone–Weierstrass theorem]]
* [[Taylor series]]
* [[Wilkinson's polynomial]]

==References==
<references />

[[Category:Interpolation]]
[[Category:Theory of continuous functions]]
[[Category:Numerical artifacts]]

[[de:Polynominterpolation#Runges Phänomen]]