==Constructing the interpolation polynomial==
[[File:Interpolation example polynomial.svg|thumb|right|The red dots denote the data points {{math|(''x<sub>k</sub>'', ''y<sub>k</sub>'')}}, while the blue curve shows the interpolation polynomial.]]

=== Lagrange Interpolation ===
{{main article|Lagrange polynomial}}
We may write down the polynomial immediately in terms of [[Lagrange polynomial]]s as:
<math display="block">\begin{align} p(x) &= \frac{(x-x_1)(x-x_2)\cdots(x-x_n)}{(x_0-x_1)(x_0-x_2)\cdots(x_0-x_n)} y_0 \\[4pt] &+ \frac{(x-x_0)(x-x_2)\cdots(x-x_n)}{(x_1-x_0)(x_1-x_2)\cdots(x_1-x_n)} y_1 \\[4pt] &+ \cdots \\[4pt] &+ \frac{(x-x_0)(x-x_1)\cdots(x-x_{n-1})}{(x_n-x_0)(x_n-x_1)\cdots(x_n-x_{n-1})} y_n \\[7pt] &= \sum_{i=0}^n \Biggl( \prod_{\stackrel{\!0\,\leq\, j\,\leq\, n}{j\,\neq\, i}} \frac{x-x_j}{x_i-x_j} \Biggr) y_i = \sum_{i=0}^n \frac{w(x)}{w'(x_i)(x-x_i)}\, y_i, \end{align}</math>
where <math display="inline">w(x) := \prod_{j=0}^{n} (x-x_j)</math>. For matrix arguments, this formula is called [[Sylvester's formula]] and the matrix-valued Lagrange polynomials are the [[Frobenius covariant]]s.
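The Lagrange form lends itself to direct evaluation. The following is a minimal illustrative sketch (plain Python with floats; the function name <code>lagrange_eval</code> is illustrative, not taken from the sources cited in this article):

<syntaxhighlight lang="python">
def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial at x via the Lagrange form.

    xs, ys: equal-length lists of distinct nodes x_i and values y_i.
    Costs O(n^2) operations per evaluation point.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # Lagrange basis L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# The interpolant reproduces the data: p(x_i) = y_i.
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
assert abs(lagrange_eval(xs, ys, 1.0) - 3.0) < 1e-12
</syntaxhighlight>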
=== Newton Interpolation ===
{{See also|Newton polynomial|Divided differences}}

==== Theorem ====
Let <math>p_n</math> be the polynomial of degree less than or equal to <math>n</math> that interpolates <math>f</math> at the nodes <math>x_i</math> where <math>i = 0,1,2,3,\cdots,n</math>, and let <math>p_{n+1}</math> be the polynomial of degree less than or equal to <math>n+1</math> that interpolates <math>f</math> at the nodes <math>x_i</math> where <math>i = 0,1,2,3,\cdots,n, n+1</math>. Then <math>p_{n+1}</math> is given by:<math display="block">p_{n+1}(x) = p_n(x) + a_{n+1}w_n(x),</math>where <math display="inline">w_n(x) := \prod_{i=0}^n (x-x_i)</math> is known as the Newton basis and <math display="inline">a_{n+1} := {f(x_{n+1})-p_n(x_{n+1}) \over w_n(x_{n+1})}</math>.

'''Proof:''' For the nodes with <math>i = 0,1,2,3,\cdots,n</math>:<math display="block">p_{n+1}(x_i) = p_n(x_i) + a_{n+1}\prod_{j=0}^n (x_i-x_j) = p_n(x_i),</math>and for <math>i = n+1</math>:<math display="block">p_{n+1}(x_{n+1}) = p_n(x_{n+1}) + {f(x_{n+1})-p_n(x_{n+1}) \over w_n(x_{n+1})}\, w_n(x_{n+1}) = f(x_{n+1}).</math>By the uniqueness of the interpolating polynomial of degree less than or equal to <math>n+1</math>, <math display="inline">p_{n+1}(x) = p_n(x) + a_{n+1}w_n(x)</math> is the required polynomial interpolation. The function can thus be expressed as:
<math display="inline">p_{n}(x) = a_0 + a_1(x-x_0) + a_2(x-x_0)(x-x_1) + \cdots + a_n(x-x_0)\cdots(x-x_{n-1}).</math>

==== Polynomial coefficients ====
To find the <math>a_i</math>, we have to solve the [[lower triangular matrix|lower triangular]] system obtained by arranging the equations <math display="inline">p_{n}(x_i) = f(x_i) = y_i</math> from above in matrix form:

: <math>\begin{bmatrix} 1 & & \ldots & & 0 \\ 1 & x_1-x_0 & & & \\ 1 & x_2-x_0 & (x_2-x_0)(x_2-x_1) & & \vdots \\ \vdots & \vdots & & \ddots & \\ 1 & x_n-x_0 & \ldots & \ldots & \prod_{j=0}^{n-1}(x_n - x_j) \end{bmatrix} \begin{bmatrix} a_0 \\ \\ \vdots \\ \\ a_{n} \end{bmatrix} = \begin{bmatrix} y_0 \\ \\ \vdots \\ \\ y_{n} \end{bmatrix}</math>

The coefficients are derived as

: <math>a_j := [y_0,\ldots,y_j]</math>

where

: <math>[y_0,\ldots,y_j]</math>

is the notation for [[divided differences]]. Thus, [[Newton polynomial]]s are used to provide a polynomial interpolation formula of ''n'' points.<ref name="Epperson 2013"/>

{| class="toccolours collapsible collapsed" width="80%" style="text-align:left"
!Proof
|-
|The first few coefficients can be calculated from the system of equations, and the form of the ''n''-th coefficient is assumed for proof by mathematical induction:
<math>\begin{align} a_0 &= y_0 = [y_0] \\ a_1 &= {y_1-y_0 \over x_1 - x_0} = [y_0,y_1] \\ &\;\;\vdots \\ a_n &= [y_0,\cdots,y_n] \quad \text{(induction hypothesis)} \end{align}</math>

Let <math>Q</math> be the interpolating polynomial of the points <math>(x_1, y_1), \ldots, (x_n, y_n)</math>. Adding <math>(x_0,y_0)</math> to the points interpolated by <math>Q</math> gives

: <math>Q(x) + a'_n (x - x_1)\cdot\ldots\cdot(x - x_n) = P_{n}(x),</math>

where <math display="inline">a'_n(x_{0} - x_1)\ldots(x_{0}-x_{n}) = y_{0} - Q(x_{0})</math>. By the uniqueness of the interpolating polynomial of the points <math>(x_0, y_0), \ldots, (x_n, y_n)</math>, equating the coefficients of <math>x^{n}</math> gives <math display="inline">a'_n = [y_0, \ldots, y_{n}]</math>. Hence the polynomial can be expressed as:

: <math>P_{n}(x) = Q(x) + [y_0,\ldots,y_n](x - x_1)\cdot\ldots\cdot(x - x_n).</math>

Adding <math>(x_{n+1},y_{n+1})</math> to the points interpolated by <math>Q</math>, the resulting polynomial must satisfy

: <math display="inline">[y_1, \ldots,y_{n+1}](x_{n+1} - x_1)\cdot\ldots\cdot(x_{n+1}-x_{n}) = y_{n+1} - Q(x_{n+1}),</math>

where the formula for <math display="inline">a_n</math> and the interpolating polynomial are used. The <math display="inline">a_{n+1}</math> term for the polynomial <math display="inline">P_{n+1}</math> can then be found by calculating:<math display="block">\begin{align} & [y_0,\ldots,y_{n+1}](x_{n+1} - x_0)\cdot\ldots\cdot(x_{n+1} - x_n) \\ &= \frac{[y_1,\ldots,y_{n+1}] - [y_0,\ldots,y_{n}]}{x_{n+1} - x_0}(x_{n+1} - x_0)\cdot\ldots\cdot(x_{n+1} - x_n) \\ &= \left([y_1,\ldots,y_{n+1}] - [y_0,\ldots,y_{n}]\right)(x_{n+1} - x_1)\cdot\ldots\cdot(x_{n+1} - x_n) \\ &= [y_1,\ldots,y_{n+1}](x_{n+1} - x_1)\cdot\ldots\cdot(x_{n+1} - x_n) - [y_0,\ldots,y_n](x_{n+1} - x_1)\cdot\ldots\cdot(x_{n+1} - x_n) \\ &= (y_{n+1} - Q(x_{n+1})) - [y_0,\ldots,y_n](x_{n+1} - x_1)\cdot\ldots\cdot(x_{n+1} - x_n) \\ &= y_{n+1} - (Q(x_{n+1}) + [y_0,\ldots,y_n](x_{n+1} - x_1)\cdot\ldots\cdot(x_{n+1} - x_n)) \\ &= y_{n+1}-P_n(x_{n+1}), \end{align}</math>which implies that <math>a_{n+1}={y_{n+1}-P_n(x_{n+1}) \over w_n(x_{n+1})} = [y_0,\ldots,y_{n+1}]</math>. Hence the result follows by the principle of mathematical induction.
|}
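The coefficients <math>a_j = [y_0,\ldots,y_j]</math> can be computed column by column in O(''n''<sup>2</sup>) operations, and the Newton form can then be evaluated by Horner-style nesting. A minimal sketch (plain Python; the function names are illustrative, not from the cited sources):

<syntaxhighlight lang="python">
def divided_differences(xs, ys):
    """Return the Newton coefficients a_j = [y_0, ..., y_j].

    Builds the divided-difference table in place, one column per pass,
    in O(n^2) operations.
    """
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # After pass j, coef[i] holds [y_{i-j}, ..., y_i] for i >= j.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef  # coef[j] == [y_0, ..., y_j]

def newton_eval(xs, coef, x):
    """Evaluate a_0 + a_1(x-x_0) + ... + a_n(x-x_0)...(x-x_{n-1}) by nesting."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
coef = divided_differences(xs, ys)
assert abs(newton_eval(xs, coef, 1.0) - 3.0) < 1e-12  # reproduces the data
</syntaxhighlight>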
==== Newton forward formula ====
The Newton polynomial can be expressed in a simplified form when <math>x_0, x_1, \dots, x_k</math> are arranged consecutively with equal spacing. If <math>x_0, x_1, \dots, x_k</math> are consecutively arranged and equally spaced with <math>{x}_{i}={x}_{0}+ih</math> for ''i'' = 0, 1, ..., ''k'', and some variable ''x'' is expressed as <math>{x}={x}_{0}+sh</math>, then the difference <math>x-x_i</math> can be written as <math>(s-i)h</math>. So the Newton polynomial becomes

: <math>\begin{align} N(x) &= [y_0] + [y_0,y_1]sh + \cdots + [y_0,\ldots,y_k] s (s-1) \cdots (s-k+1){h}^{k} \\ &= \sum_{i=0}^{k}s(s-1) \cdots (s-i+1){h}^{i}[y_0,\ldots,y_i] \\ &= \sum_{i=0}^{k}{s \choose i}i!{h}^{i}[y_0,\ldots,y_i]. \end{align}</math>

The relationship between divided differences and [[Divided differences#Forward and backward differences|forward differences]] is given as<ref>{{cite book |last1=Burden |first1=Richard L. |last2=Faires |first2=J. Douglas |date=2011 |title=Numerical Analysis |url=https://archive.org/details/numericalanalysi00rlbu |url-access=limited |edition=9th |page=[https://archive.org/details/numericalanalysi00rlbu/page/n146 129] |publisher=Cengage Learning |isbn=9780538733519}}</ref><math display="block">[y_j, y_{j+1}, \ldots , y_{j+n}] = \frac{1}{n!h^n}\Delta^{(n)}y_j.</math>Taking <math>y_i=f(x_i)</math>, if the representation of ''x'' above is instead taken to be <math>x=x_j+sh</math>, the '''Newton forward interpolation formula''' is expressed as:<math display="block">f(x) \approx N(x)=N(x_j+sh) = \sum_{i=0}^{k}{s \choose i}\Delta^{(i)} f(x_j),</math>which is the interpolation of all points after <math>x_j</math>. It is expanded as:<math display="block">f(x_j+sh)=f(x_j)+\frac{s}{1!}\Delta f(x_j)+ \frac{s(s-1)}{2!}\Delta^2 f(x_j)+\frac{s(s-1)(s-2)}{3!}\Delta^3 f(x_j)+\frac{s(s-1)(s-2)(s-3)}{4!}\Delta^4 f(x_j)+\cdots</math>

==== Newton backward formula ====
If the nodes are reordered as <math>{x}_{k},{x}_{k-1},\dots,{x}_{0}</math>, the Newton polynomial becomes

: <math>N(x)=[y_k]+[{y}_{k}, {y}_{k-1}](x-{x}_{k})+\cdots+[{y}_{k},\ldots,{y}_{0}](x-{x}_{k})(x-{x}_{k-1})\cdots(x-{x}_{1}).</math>

If <math>{x}_{k},\;{x}_{k-1},\;\dots,\;{x}_{0}</math> are equally spaced with <math>{x}_{i}={x}_{k}-(k-i)h</math> for ''i'' = 0, 1, ..., ''k'' and <math>{x}={x}_{k}+sh</math>, then

: <math>\begin{align} N(x) &= [{y}_{k}]+ [{y}_{k}, {y}_{k-1}]sh+\cdots+[{y}_{k},\ldots,{y}_{0}]s(s+1)\cdots(s+k-1){h}^{k} \\ &=\sum_{i=0}^{k}{(-1)}^{i}{-s \choose i}i!{h}^{i}[{y}_{k},\ldots,{y}_{k-i}]. \end{align}</math>

The relationship between divided differences and backward differences is given as{{Citation needed|date=December 2023}}<math display="block">[{y}_{j}, y_{j-1},\ldots,{y}_{j-n}] = \frac{1}{n!h^n}\nabla^{(n)}y_j.</math>Taking <math>y_i=f(x_i)</math>, if the representation of ''x'' above is instead taken to be <math>x=x_j+sh</math>, the '''Newton backward interpolation formula''' is expressed as:<math display="block">f(x) \approx N(x) =N(x_j+sh)=\sum_{i=0}^{k}{(-1)}^{i}{-s \choose i}\nabla^{(i)} f(x_j),</math>which is the interpolation of all points before <math>x_j</math>. It is expanded as:<math display="block">f(x_j+sh)=f(x_j)+\frac{s}{1!}\nabla f(x_j)+ \frac{s(s+1)}{2!}\nabla^2 f(x_j)+\frac{s(s+1)(s+2)}{3!}\nabla^3 f(x_j)+\frac{s(s+1)(s+2)(s+3)}{4!}\nabla^4 f(x_j)+\cdots</math>
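For equally spaced nodes, the forward formula needs only the difference table and the generalized binomial factors <math display="inline">{s \choose i}</math>. A minimal sketch of the forward case (plain Python; illustrative names, not from the cited sources):

<syntaxhighlight lang="python">
def forward_differences(ys):
    """Return [y_0, Δy_0, Δ²y_0, ...] for equally spaced samples ys."""
    diffs, row = [ys[0]], list(ys)
    for _ in range(len(ys) - 1):
        row = [b - a for a, b in zip(row, row[1:])]  # next difference column
        diffs.append(row[0])
    return diffs

def newton_forward(xs, ys, x):
    """Newton forward interpolation at x; xs must be equally spaced."""
    h = xs[1] - xs[0]
    s = (x - xs[0]) / h
    total, binom = 0.0, 1.0  # binom tracks the real-argument C(s, i)
    for i, d in enumerate(forward_differences(ys)):
        total += binom * d
        binom *= (s - i) / (i + 1)  # C(s, i+1) = C(s, i) * (s - i) / (i + 1)
    return total

# Interpolating f(x) = x^2 on {0, 1, 2} reproduces it exactly at x = 1.5.
assert abs(newton_forward([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5) - 2.25) < 1e-12
</syntaxhighlight>

The backward formula is the same computation run on the reversed difference table, with <math display="inline">{(-1)}^{i}{-s \choose i}</math> in place of <math display="inline">{s \choose i}</math>.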
=== Lozenge Diagram ===
A lozenge diagram is a device used to describe the different interpolation formulas that can be constructed for a given data set. A line starting on the left edge and tracing across the diagram to the right represents an interpolation formula if the following rules are followed:<ref name=":0">{{Cite book |last=Hamming |first=Richard W. |title=Numerical Methods for Scientists and Engineers |date=1986 |publisher=Dover |isbn=978-0-486-65241-2 |edition=Unabridged republication of the 2nd ed. (1973) |location=New York}}</ref>
[[File:Lozenge_Diagram.svg|thumb|Lozenge diagram: geometric representation of polynomial interpolations.]]
# Left-to-right steps indicate addition, whereas right-to-left steps indicate subtraction.
# If the slope of a step is positive, the term to be used is the product of the difference and the factor immediately below it. If the slope of a step is negative, the term to be used is the product of the difference and the factor immediately above it.
# If a step is horizontal and passes through a factor, use the product of the factor and the average of the two terms immediately above and below it.
# If a step is horizontal and passes through a difference, use the product of the difference and the average of the two terms immediately above and below it.

The factors are expressed using the formula:<math display="block">C(u+k,n)=\frac{(u+k)(u+k-1)\cdots(u+k-n+1)}{n!}</math>

==== Proof of equivalence ====
If a path goes from <math>\Delta^{n-1}y_s</math> to <math>\Delta^{n+1}y_{s-1}</math>, it can connect through three intermediate steps: (a) through <math>\Delta^{n}y_{s-1}</math>, (b) through <math display="inline">C(u-s,n)</math>, or (c) through <math>\Delta^{n}y_s</math>. Proving that these three two-step paths are equivalent shows that all paths with the same starting and ending points can be morphed into one another and represent the same formula.

Path (a): <math>C(u-s, n) \Delta^n y_{s-1}+C(u-s+1, n+1) \Delta^{n+1} y_{s-1}</math>

Path (b): <math>C(u-s, n) \Delta^n y_s + C(u-s, n+1) \Delta^{n+1} y_{s-1}</math>

Path (c): <math>C(u-s, n) \frac{\Delta^n y_{s-1}+\Delta^n y_{s}}{2}+\frac{C(u-s+1, n+1)+C(u-s, n+1)}{2} \Delta^{n+1} y_{s-1}</math>

Subtracting the contributions of paths (a) and (b):

: <math>\begin{aligned} \text{Path (a)} - \text{Path (b)} &= C(u-s, n)(\Delta^n y_{s-1}-\Delta^n y_s) + (C(u-s+1, n+1)-C(u-s, n+1)) \Delta^{n+1} y_{s-1} \\ &= -C(u-s, n)\Delta^{n+1} y_{s-1} + C(u-s, n) \frac{(u-s+1)-(u-s-n)}{n+1} \Delta^{n+1} y_{s-1} \\ &= C(u-s, n)(-\Delta^{n+1} y_{s-1}+\Delta^{n+1} y_{s-1}) = 0 \end{aligned}</math>

Thus, the contributions of paths (a) and (b) are the same. Since path (c) is the average of paths (a) and (b), it also contributes the same function to the polynomial. Hence the equivalence of paths with the same starting and ending points is shown.

To check whether the paths can be shifted to start at different values on the leftmost edge, taking only two-step paths is sufficient: (a) from <math>y_{s+1}</math> to <math>y_{s}</math> through <math>\Delta y_{s}</math>, (b) from the factor between <math>y_{s+1}</math> and <math>y_{s}</math>, to <math>y_{s}</math> through <math>\Delta y_{s}</math>, or (c) starting from <math>y_{s}</math>.

Path (a): <math>y_{s+1}+C(u-s-1,1) \Delta y_s - C(u-s, 1) \Delta y_s</math>

Path (b): <math>\frac{y_{s+1}+y_s}{2}+\frac{C(u-s-1,1)+C(u-s, 1)}{2} \Delta y_s - C(u-s, 1) \Delta y_s</math>

Path (c): <math>y_{s}</math>

Since <math>\Delta y_{s} = y_{s+1}-y_s</math>, substituting into the above equations shows that all of them reduce to <math>y_{s}</math> and are hence equivalent. These paths can therefore be morphed to start from the leftmost corner and end at a common point.<ref name=":0" />
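The cancellation above can be checked numerically with exact rational arithmetic. A throwaway sketch (plain Python; sample values are arbitrary, chosen consistently with the identity <math>\Delta^{n+1}y_{s-1} = \Delta^{n}y_{s} - \Delta^{n}y_{s-1}</math>):

<syntaxhighlight lang="python">
from fractions import Fraction
from math import factorial

def C(v, n):
    """Lozenge-diagram factor C(v, n) = v(v-1)...(v-n+1)/n! for rational v."""
    num = Fraction(1)
    for i in range(n):
        num *= v - i
    return num / factorial(n)

# Arbitrary exact sample values for u, s, n and the two n-th differences.
u, s, n = Fraction(7, 3), 1, 2
d_n_prev, d_n = Fraction(5, 2), Fraction(-1, 4)  # Δ^n y_{s-1}, Δ^n y_s
d_next = d_n - d_n_prev                          # Δ^{n+1} y_{s-1}

path_a = C(u - s, n) * d_n_prev + C(u - s + 1, n + 1) * d_next
path_b = C(u - s, n) * d_n + C(u - s, n + 1) * d_next
assert path_a == path_b  # the two two-step paths contribute identically
</syntaxhighlight>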
==== Newton formula ====
Taking a transversal of negative slope from <math>y_0</math> to <math>\Delta^n y_0</math> gives the interpolation formula of all the <math>n+1</math> consecutively arranged points, equivalent to Newton's forward interpolation formula:

: <math>\begin{aligned} y(s) &= y_0+C(s, 1) \Delta y_0+C(s, 2) \Delta^2 y_0+C(s, 3) \Delta^3 y_0+\cdots \\ &= y_0+s \Delta y_0+\frac{s(s-1)}{2} \Delta^2 y_0+\frac{s(s-1)(s-2)}{3!} \Delta^3 y_0+\frac{s(s-1)(s-2)(s-3)}{4!} \Delta^4 y_0+\cdots \end{aligned}</math>

whereas taking a transversal of positive slope from <math>y_k</math> to <math>\nabla^n y_k = \Delta^n y_0</math> (with <math>k = n</math>) gives the interpolation formula of all the <math>n+1</math> consecutively arranged points, equivalent to Newton's backward interpolation formula:

: <math>\begin{aligned} y(u) &= y_k+C(u-k, 1) \Delta y_{k-1}+C(u-k+1,2) \Delta^2 y_{k-2} + C(u-k+2,3) \Delta^3 y_{k-3}+\cdots \\ &= y_k+(u-k) \Delta y_{k-1} +\frac{(u-k+1)(u-k)}{2} \Delta^2 y_{k-2}+\frac{(u-k+2)(u-k+1)(u-k)}{3!} \Delta^3 y_{k-3}+\cdots \\ y(k+s) &= y_k+s \nabla y_{k} +\frac{(s+1)s}{2} \nabla^2 y_{k}+\frac{(s+2)(s+1)s}{3!} \nabla^3 y_{k}+\frac{(s+3)(s+2)(s+1)s}{4!} \nabla^4 y_{k}+\cdots \end{aligned}</math>

where <math>s=u-k</math> corresponds to the variable <math>s</math> introduced in the Newton interpolation above.

==== Gauss formula ====
Taking a zigzag line towards the right starting from <math>y_0</math> with negative slope, we get the Gauss forward formula:

: <math>y(u)=y_0+u \Delta y_0+\frac{u(u-1)}{2} \Delta^2 y_{-1} +\frac{(u+1)u(u-1)}{3!} \Delta^3 y_{-1}+ \frac{(u+1)u(u-1)(u-2)}{4!} \Delta^4 y_{-2} + \cdots</math>

whereas starting from <math>y_0</math> with positive slope, we get the Gauss backward formula:

: <math>y(u)=y_0+u \Delta y_{-1}+\frac{(u+1)u}{2} \Delta^2 y_{-1} +\frac{(u+1)u(u-1)}{3!} \Delta^3 y_{-2}+ \frac{(u+2)(u+1)u(u-1)}{4!} \Delta^4 y_{-2} + \cdots</math>

==== Stirling formula ====
By taking a horizontal path towards the right starting from <math>y_0</math>, we get the Stirling formula:

: <math>\begin{aligned} y(u) &= y_0 +u \frac{\Delta y_0+\Delta y_{-1}}{2}+\frac{C(u+1,2)+C(u, 2)}{2} \Delta^2 y_{-1} +C(u+1,3) \frac{\Delta^3 y_{-2}+\Delta^3 y_{-1}}{2}+\cdots \\ &= y_0+u \frac{\Delta y_0+\Delta y_{-1}}{2}+\frac{u^2}{2} \Delta^2 y_{-1}+\frac{u(u^2-1)}{3!} \frac{\Delta^3 y_{-2}+\Delta^3 y_{-1}}{2}+\frac{u^2(u^2-1)}{4!}\Delta^4 y_{-2}+\cdots \end{aligned}</math>

The Stirling formula is the average of the Gauss forward and Gauss backward formulas.
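Since all of these paths represent the same interpolating polynomial, the Gauss forward and backward formulas must agree exactly, and their average is the Stirling value. A minimal sketch checking this on five points with exact arithmetic (plain Python; function names and the centered-index bookkeeping are ours, not from the cited source):

<syntaxhighlight lang="python">
from fractions import Fraction

def diff_table(ys, lo):
    """Difference table for y_lo, ..., y_hi; table[k][j] = Δ^k y_j."""
    table = [dict(zip(range(lo, lo + len(ys)), ys))]
    for _ in range(len(ys) - 1):
        prev = table[-1]
        table.append({j: prev[j + 1] - prev[j] for j in list(prev)[:-1]})
    return table

def C(v, n):
    """Factor C(v, n) = v(v-1)...(v-n+1)/n! for rational v."""
    out = Fraction(1)
    for i in range(n):
        out = out * (v - i) / (i + 1)
    return out

def gauss_forward(t, u):
    # k-th term: C(u + floor((k-1)/2), k) * Δ^k y_{-floor(k/2)}
    return sum(C(u + (k - 1) // 2, k) * t[k][-(k // 2)] for k in range(len(t)))

def gauss_backward(t, u):
    # k-th term: C(u + floor(k/2), k) * Δ^k y_{-ceil(k/2)}
    return sum(C(u + k // 2, k) * t[k][-((k + 1) // 2)] for k in range(len(t)))

ys = [Fraction(v) for v in (3, -1, 4, 1, 5)]  # y_{-2}, ..., y_2
t = diff_table(ys, -2)
u = Fraction(1, 3)
f, b = gauss_forward(t, u), gauss_backward(t, u)
assert f == b                # same interpolating polynomial, exactly
stirling = (f + b) / 2       # equals both, per the average property
</syntaxhighlight>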
==== Bessel formula ====
By taking a horizontal path towards the right starting from the factor between <math>y_0</math> and <math>y_1</math>, we get the Bessel formula:

: <math>\begin{align} y(u) &= \frac{y_{0}+y_{1}}{2}+\frac{C(u,1)+C(u-1,1)}{2}\Delta y_{0}+C(u,2)\frac{\Delta^{2}y_{-1}+\Delta^{2}y_{0}}{2}+\cdots \\ &= \frac{y_{0}+y_{1}}{2}+\left(u-\frac{1}{2}\right)\Delta y_{0}+\frac{u(u-1)}{2}\,\frac{\Delta^{2}y_{-1}+\Delta^{2}y_{0}}{2}+\frac{\left(u-\frac{1}{2}\right)u\left(u-1\right)}{3!}\Delta^{3}y_{-1} + \frac{(u+1)u(u-1)(u-2)}{4!}\,\frac{\Delta^{4}y_{-1}+\Delta^{4}y_{-2}}{2}+\cdots \end{align}</math>

=== Vandermonde Algorithms ===
The [[Vandermonde matrix]] in the second proof above may have a large [[condition number]],<ref>{{cite journal |last=Gautschi |first=Walter |year=1975 |title=Norm Estimates for Inverses of Vandermonde Matrices |journal=Numerische Mathematik |volume=23 |issue=4 |pages=337–347 |doi=10.1007/BF01438260 |s2cid=122300795}}</ref> causing large errors when computing the coefficients {{math|''a<sub>i</sub>''}} if the system of equations is solved using [[Gaussian elimination]]. Several authors have therefore proposed algorithms which exploit the structure of the Vandermonde matrix to compute numerically stable solutions in O(''n''<sup>2</sup>) operations instead of the O(''n''<sup>3</sup>) required by Gaussian elimination.<ref>{{cite journal |last=Higham |first=N. J. |year=1988 |title=Fast Solution of Vandermonde-Like Systems Involving Orthogonal Polynomials |journal=IMA Journal of Numerical Analysis |volume=8 |issue=4 |pages=473–486 |doi=10.1093/imanum/8.4.473}}</ref><ref>{{cite journal |last=Björck |first=Å |author2=V. Pereyra |year=1970 |title=Solution of Vandermonde Systems of Equations |journal=Mathematics of Computation |publisher=American Mathematical Society |volume=24 |issue=112 |pages=893–903 |doi=10.2307/2004623 |jstor=2004623}}</ref><ref>{{cite journal |last1=Calvetti |first1=D. |author1-link=Daniela Calvetti |last2=Reichel |first2=L. |year=1993 |title=Fast Inversion of Vandermonde-Like Matrices Involving Orthogonal Polynomials |journal=BIT |volume=33 |issue=3 |pages=473–484 |doi=10.1007/BF01990529 |s2cid=119360991}}</ref> These methods rely on constructing first a [[Newton polynomial|Newton interpolation]] of the polynomial and then converting it to [[monomial form]].

=== Non-Vandermonde algorithms ===
To find the interpolation polynomial ''p''(''x'') in the vector space ''P''(''n'') of polynomials of degree at most {{mvar|n}}, we may use the usual [[monomial basis]] for ''P''(''n'') and invert the Vandermonde matrix by Gaussian elimination, giving a [[Computational complexity|computational cost]] of O(''n''<sup>3</sup>) operations. To improve this algorithm, a more convenient basis for ''P''(''n'') can simplify the calculation of the coefficients, which must then be translated back in terms of the [[monomial basis]].

One method is to write the interpolation polynomial in the [[Newton form]] (i.e. using the Newton basis) and use the method of [[divided differences]] to construct the coefficients, e.g. [[Neville's algorithm]], sketched below. The cost is [[Big O notation|O(''n''<sup>2</sup>)]] operations. Furthermore, only O(''n'') extra work is needed if an extra point is added to the data set, whereas with the other methods the whole computation has to be redone.

Another method is preferred when the aim is not to compute the ''coefficients'' of ''p''(''x''), but only a ''single value'' ''p''(''a'') at a point ''x'' = ''a'' not in the original data set. The [[Lagrange form]] computes the value ''p''(''a'') with complexity O(''n''<sup>2</sup>).<ref>R. Bevilaqua, D. Bini, M. Capovani and O. Menchi (2003). ''Appunti di Calcolo Numerico''. Chapter 5, p. 89. Servizio Editoriale Universitario Pisa - Azienda Regionale Diritto allo Studio Universitario.</ref>
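Neville's algorithm computes such a single value ''p''(''a'') by recursively combining the values of interpolants on overlapping subsets of the nodes, in O(''n''<sup>2</sup>) operations and without forming any coefficients. A minimal sketch (plain Python; the function name is illustrative, not from the cited references):

<syntaxhighlight lang="python">
def neville(xs, ys, a):
    """Evaluate the interpolating polynomial at a by Neville's algorithm.

    p[i] starts as the degree-0 interpolant y_i; after pass k, p[i] is the
    value at a of the polynomial through the points x_i, ..., x_{i+k}.
    """
    p = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - k):
            p[i] = ((a - xs[i + k]) * p[i]
                    + (xs[i] - a) * p[i + 1]) / (xs[i] - xs[i + k])
    return p[0]

# The parabola through (0,0), (1,1), (2,4) is x^2, so p(3) = 9.
assert abs(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0) - 9.0) < 1e-12
</syntaxhighlight>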
The [[Bernstein form]] was used in a constructive proof of the [[Weierstrass approximation theorem]] by [[Sergei Natanovich Bernstein|Bernstein]] and has gained great importance in computer graphics in the form of [[Bézier curve]]s.
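As an illustration of the Bernstein form in practice, a polynomial given by its coefficients in the Bernstein basis on [0, 1] can be evaluated by [[de Casteljau's algorithm]] through repeated convex combinations. A brief sketch (plain Python; the function name is illustrative):

<syntaxhighlight lang="python">
def de_casteljau(b, t):
    """Evaluate sum_i b_i * B_{i,n}(t) on [0, 1], where B_{i,n} are the
    Bernstein basis polynomials, by repeated linear interpolation.

    b: Bernstein (Bezier) coefficients b_0, ..., b_n.
    """
    c = list(b)
    for k in range(1, len(c)):
        for i in range(len(c) - k):
            c[i] = (1 - t) * c[i] + t * c[i + 1]  # convex combination at t
    return c[0]

# x^2 has Bernstein coefficients (0, 0, 1) in degree 2, so p(0.5) = 0.25.
assert abs(de_casteljau([0.0, 0.0, 1.0], 0.5) - 0.25) < 1e-12
</syntaxhighlight>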