Divided differences
{{Short description|Algorithm for computing polynomial coefficients}}
In [[mathematics]], '''divided differences''' is an [[algorithm]], historically used for computing tables of [[logarithms]] and [[trigonometric functions]].{{Citation needed |date=October 2017}} [[Charles Babbage]]'s [[difference engine]], an early [[mechanical calculator]], was designed to use this algorithm in its operation.<ref name="Isaacson2014">{{cite book |last1=Isaacson |first1=Walter |title=The Innovators |date=2014 |publisher=Simon & Schuster |isbn=978-1-4767-0869-0 |page=20 }}</ref>

Divided differences is a [[recursion|recursive]] [[division (mathematics)|division]] process. Given a sequence of data points <math>(x_0, y_0), \ldots, (x_{n}, y_{n})</math>, the method calculates the coefficients of the [[polynomial interpolation|interpolation polynomial]] of these points in the [[Newton polynomial|Newton form]]. It is sometimes denoted by a delta with a bar: <math>\text{△} \!\!\!|\,\,</math> or <math>\text{◿} \! \text{◺}</math>.

==Definition==
Given ''n'' + 1 data points
<math display="block">(x_0, y_0),\ldots,(x_{n}, y_{n})</math>
where the <math>x_k</math> are assumed to be pairwise distinct, the '''forward divided differences''' are defined as:
<math display="block">\begin{align}
\mathopen[y_k] &:= y_k, && k \in \{ 0,\ldots,n\} \\
\mathopen[y_k,\ldots,y_{k+j}] &:= \frac{[y_{k+1},\ldots , y_{k+j}] - [y_{k},\ldots , y_{k+j-1}]}{x_{k+j}-x_k}, && k\in\{0,\ldots,n-j\},\ j\in\{1,\ldots,n\}.
\end{align}</math>

To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of ''j'' above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by a difference of corresponding ''x''-values:
<math display="block">
\begin{matrix}
x_0 & y_0 = [y_0] & & & \\
 & & [y_0,y_1] & & \\
x_1 & y_1 = [y_1] & & [y_0,y_1,y_2] & \\
 & & [y_1,y_2] & & [y_0,y_1,y_2,y_3]\\
x_2 & y_2 = [y_2] & & [y_1,y_2,y_3] & \\
 & & [y_2,y_3] & & \\
x_3 & y_3 = [y_3] & & & \\
\end{matrix}
</math>
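The recurrence above translates directly into a short program. The following Python sketch (an illustrative implementation; the function name and layout are assumptions, not taken from the cited sources) builds the table column by column:
<syntaxhighlight lang="python">
def divided_differences(xs, ys):
    """Illustrative sketch: return the columns of the forward
    divided-difference table.

    table[j][k] holds [y_k, ..., y_{k+j}]; the nodes xs must be
    pairwise distinct.
    """
    n = len(xs) - 1
    table = [list(ys)]  # column j = 0: [y_k] = y_k
    for j in range(1, n + 1):
        prev = table[j - 1]
        table.append([(prev[k + 1] - prev[k]) / (xs[k + j] - xs[k])
                      for k in range(n - j + 1)])
    return table
</syntaxhighlight>
For example, <code>divided_differences([1, 2, 4], [3, 5, 9])</code> returns <code>[[3, 5, 9], [2.0, 2.0], [0.0]]</code>, i.e. <math>[y_0,y_1] = [y_1,y_2] = 2</math> and <math>[y_0,y_1,y_2] = 0</math>.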
=== Notation ===
Note that the divided difference <math>[y_k,\ldots,y_{k+j}]</math> depends on the values <math>x_k,\ldots,x_{k+j}</math> and <math>y_k,\ldots,y_{k+j}</math>, but the notation hides the dependency on the ''x''-values. If the data points are given by a function ''f'',
<math display="block">(x_0, y_0), \ldots, (x_n, y_n) = (x_0, f(x_0)), \ldots, (x_n, f(x_n))</math>
one sometimes writes the divided difference in the notation
<math display="block">f[x_k,\ldots,x_{k+j}] \ \stackrel{\text{def}}= \ [f(x_k),\ldots,f(x_{k+j})] = [y_k,\ldots,y_{k+j}].</math>
Other notations for the divided difference of the function ''f'' on the nodes ''x''<sub>0</sub>, ..., ''x''<sub>''n''</sub> are:
<math display="block">f[x_0,\ldots,x_n]=\mathopen[x_0,\ldots,x_n]f= \mathopen[x_0,\ldots,x_n;f]= D[x_0,\ldots,x_n]f.</math>

==Example==
Divided differences for <math>k=0</math> and the first few values of <math>j</math>:
<math display="block"> \begin{align}
\mathopen[y_0] &= y_0 \\
\mathopen[y_0,y_1] &= \frac{y_1-y_0}{x_1-x_0} \\
\mathopen[y_0,y_1,y_2] &= \frac{\mathopen[y_1,y_2]-\mathopen[y_0,y_1]}{x_2-x_0} = \frac{\frac{y_2-y_1}{x_2-x_1}-\frac{y_1-y_0}{x_1-x_0}}{x_2-x_0} = \frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)}-\frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)} \\
\mathopen[y_0,y_1,y_2,y_3] &= \frac{\mathopen[y_1,y_2,y_3]-\mathopen[y_0,y_1,y_2]}{x_3-x_0}
\end{align} </math>
<!-- the \mathopen command is there because latex otherwise thinks that [...] denotes an optional argument -->
Thus, the table corresponding to these terms, up to the first two difference columns, has the following form:
<math display="block">\begin{matrix}
x_0 & y_{0} & & \\
 & & {y_{1}-y_{0}\over x_1 - x_0} & \\
x_1 & y_{1} & & {{y_{2}-y_{1}\over x_2 - x_1}-{y_{1}-y_{0}\over x_1 - x_0} \over x_2 - x_0} \\
 & & {y_{2}-y_{1}\over x_2 - x_1} & \\
x_2 & y_{2} & & \vdots \\
 & & \vdots & \\
\vdots & & & \vdots \\
 & & \vdots & \\
x_n & y_{n} & & \\
\end{matrix} </math>

==Properties==
* [[Linear functional|Linearity]] <math display="block">\begin{align} (f+g)[x_0,\dots,x_n] &= f[x_0,\dots,x_n] + g[x_0,\dots,x_n] \\ (\lambda\cdot f)[x_0,\dots,x_n] &= \lambda\cdot f[x_0,\dots,x_n] \end{align}</math>
* [[Leibniz rule (generalized product rule)|Leibniz rule]] <math display="block">(f\cdot g)[x_0,\dots,x_n] = f[x_0]\cdot g[x_0,\dots,x_n] + f[x_0,x_1]\cdot g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]\cdot g[x_n] = \sum_{r=0}^n f[x_0,\ldots,x_r]\cdot g[x_r,\ldots,x_n]</math>
* Divided differences are symmetric: If <math>\sigma : \{0, \dots, n\} \to \{0, \dots, n\}</math> is a permutation then <math display="block">f[x_0, \dots, x_n] = f[x_{\sigma(0)}, \dots, x_{\sigma(n)}]</math>
* [[Polynomial interpolation]] in the [[Newton polynomial|Newton form]]: if <math>p</math> is a polynomial function of degree <math>\leq n</math>, then <math display="block">p(x) = p[x_0] + p[x_0,x_1](x-x_0) + p[x_0,x_1,x_2](x-x_0)(x-x_1) + \cdots + p[x_0,\ldots,x_n] (x-x_0)(x-x_1)\cdots(x-x_{n-1})</math> (a sketch of evaluating this form appears after this list).
* If <math>p</math> is a polynomial function of degree <math><n</math>, then <math display="block">p[x_0, \dots, x_n] = 0.</math>
* [[Mean value theorem for divided differences]]: if <math>f</math> is ''n'' times differentiable, then <math display="block">f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!}</math> for a number <math>\xi</math> in the open interval determined by the smallest and largest of the <math>x_k</math>'s.
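Continuing the sketch above, the Newton form can be evaluated with nested (Horner-like) multiplication, using the first entry of each table column as the coefficients <math>[y_0], [y_0,y_1], \ldots</math> (again an illustrative helper, not from the cited sources):
<syntaxhighlight lang="python">
def newton_eval(xs, table, x):
    """Illustrative sketch: evaluate the Newton-form interpolation
    polynomial at x.

    Uses the coefficients [y_0], [y_0,y_1], ..., [y_0,...,y_n]
    (the first entry of each column) with nested multiplication.
    """
    coeffs = [col[0] for col in table]
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result
</syntaxhighlight>
For the example data above (coefficients <code>[3, 2.0, 0.0]</code>), this reproduces the line <math>y = 2x + 1</math> on which the three data points lie.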
==Matrix form==
The divided difference scheme can be put into an upper [[triangular matrix]]:
<math display="block">T_f(x_0,\dots,x_n)=
\begin{pmatrix}
f[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \ldots & f[x_0,\dots,x_n] \\
0 & f[x_1] & f[x_1,x_2] & \ldots & f[x_1,\dots,x_n] \\
0 & 0 & f[x_2] & \ldots & f[x_2,\dots,x_n] \\
\vdots & \vdots & & \ddots & \vdots \\
0 & 0 & 0 & \ldots & f[x_n]
\end{pmatrix}.</math>
Then the following hold:
* <math>T_{f+g}(x) = T_f(x) + T_g(x)</math>
* <math>T_{\lambda f}(x) = \lambda T_f(x)</math> if <math>\lambda</math> is a scalar
* <math>T_{f\cdot g}(x) = T_f(x) \cdot T_g(x)</math> {{pb}} The third identity follows from the Leibniz rule. It implies that multiplication of such matrices is [[commutativity|commutative]]. In summary, the matrices of divided difference schemes with respect to the same set of nodes ''x'' form a [[commutative ring]].
* Since <math> T_f(x) </math> is a triangular matrix, its [[eigenvalue]]s are <math> f(x_0), \dots, f(x_n) </math>.
* Let <math>\delta_\xi</math> be a [[Kronecker delta]]-like function, that is <math display="block">\delta_\xi(t) = \begin{cases} 1 &: t=\xi , \\ 0 &: \mbox{else}. \end{cases}</math> Then <math>f\cdot \delta_\xi = f(\xi)\cdot \delta_\xi</math>, so <math>\delta_\xi</math> is an [[eigenfunction]] of pointwise function multiplication. In this sense, <math>T_{\delta_{x_i}}(x)</math> is an "[[eigenmatrix]]" of <math>T_f(x)</math>: <math> T_f(x) \cdot T_{\delta_{x_i}} (x) = f(x_i) \cdot T_{\delta_{x_i}}(x) </math>. However, all columns of <math>T_{\delta_{x_i}}(x)</math> are multiples of each other: the [[matrix rank]] of <math>T_{\delta_{x_i}}(x)</math> is 1. One can therefore assemble a matrix of eigenvectors of <math>T_f(x)</math> from the <math>i</math>-th column of each <math>T_{\delta_{x_i}}(x)</math>. Denote this matrix of eigenvectors by <math>U(x)</math>. For example,
<math display="block"> U(x_0,x_1,x_2,x_3) = \begin{pmatrix}
1 & \frac{1}{(x_1-x_0)} & \frac{1}{(x_2-x_0) (x_2-x_1)} & \frac{1}{(x_3-x_0) (x_3-x_1) (x_3-x_2)} \\
0 & 1 & \frac{1}{(x_2-x_1)} & \frac{1}{(x_3-x_1) (x_3-x_2)} \\
0 & 0 & 1 & \frac{1}{(x_3-x_2)} \\
0 & 0 & 0 & 1
\end{pmatrix} </math>
The [[diagonalizable matrix|diagonalization]] of <math>T_f(x)</math> can be written as
<math display="block"> U(x) \cdot \operatorname{diag}(f(x_0),\dots,f(x_n)) = T_f(x) \cdot U(x) .</math>

===Polynomials and power series===
The matrix
<math display="block"> J = \begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 & & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 & \; \ddots & 1\\
0 & 0 & 0 & 0 & & x_n
\end{pmatrix} </math>
contains the divided difference scheme for the [[identity function]] with respect to the nodes <math>x_0,\dots,x_n</math>, so <math>J^m</math> contains the divided differences for the [[monomial|power function]] with [[exponent]] <math>m</math>. Consequently, the divided differences for a [[polynomial function]] <math>p</math> can be obtained by applying <math>p</math> to the matrix <math>J</math>: if
<math display="block">p(\xi) = a_0 + a_1 \cdot \xi + \dots + a_m \cdot \xi^m</math>
and
<math display="block">p(J) = a_0 + a_1\cdot J + \dots + a_m\cdot J^m</math>
then
<math display="block">T_p(x) = p(J).</math>
This is known as ''Opitz' formula''.<ref>[[Carl de Boor|de Boor, Carl]], ''Divided Differences'', Surv. Approx. Theory 1 (2005), 46–69, [http://www.emis.de/journals/SAT/papers/2/]</ref><ref>Opitz, G. ''Steigungsmatrizen'', Z. Angew. Math. Mech. (1964), 44, T52–T54</ref>
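Opitz' formula lends itself to a direct numerical check. A minimal sketch in Python with [[NumPy]] (the function name <code>opitz_matrix</code> is an assumption for illustration) evaluates <math>p(J)</math> by a Horner scheme; the first row of the result is <math>p[x_0], p[x_0,x_1], \dots, p[x_0,\dots,x_n]</math>:
<syntaxhighlight lang="python">
import numpy as np

def opitz_matrix(xs, coeffs):
    """Illustrative sketch of Opitz' formula: T_p = p(J).

    coeffs lists a_0, ..., a_m of p(xi) = a_0 + a_1*xi + ... + a_m*xi^m;
    J is the upper bidiagonal matrix with the nodes xs on its diagonal.
    """
    n = len(xs)
    J = np.diag(np.asarray(xs, dtype=float)) + np.diag(np.ones(n - 1), 1)
    Tp = np.zeros((n, n))
    for a in reversed(coeffs):  # Horner scheme applied to the matrix J
        Tp = Tp @ J + a * np.eye(n)
    return Tp
</syntaxhighlight>
For <math>p(\xi)=\xi^2</math> on the nodes <code>[1, 2, 4]</code>, the first row of <code>opitz_matrix([1, 2, 4], [0, 0, 1])</code> is <code>[1, 3, 1]</code>, in agreement with <math>p[x_0]=1</math>, <math>p[x_0,x_1]=3</math> and <math>p[x_0,x_1,x_2]=1</math> computed directly from the recurrence.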
Now consider letting the degree of <math>p</math> tend to infinity, i.e. turning a Taylor polynomial into a [[Taylor series]]. Let <math>f</math> be a function given by a [[power series]]. Then the divided difference scheme for <math>f</math> can be computed by applying the corresponding matrix series to <math>J</math>: if
<math display="block">f(\xi) = \sum_{k=0}^\infty a_k \xi^k</math>
and
<math display="block">f(J)=\sum_{k=0}^\infty a_k J^k</math>
then
<math display="block">T_f(x)=f(J).</math>

==Alternative characterizations==
===Expanded form===
<math display="block"> \begin{align}
f[x_0] &= f(x_0) \\
f[x_0,x_1] &= \frac{f(x_0)}{(x_0-x_1)} + \frac{f(x_1)}{(x_1-x_0)} \\
f[x_0,x_1,x_2] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)} + \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)} \\
f[x_0,x_1,x_2,x_3] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)\cdot(x_0-x_3)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)\cdot(x_1-x_3)} +\\
&\quad\quad \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)\cdot(x_2-x_3)} + \frac{f(x_3)}{(x_3-x_0)\cdot(x_3-x_1)\cdot(x_3-x_2)} \\
f[x_0,\dots,x_n] &= \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k\in\{0,\dots,n\}\setminus\{j\}} (x_j-x_k)}
\end{align} </math>
With the help of the [[polynomial function]] <math>\omega(\xi) = (\xi-x_0) \cdots (\xi-x_n)</math> this can be written as
<math display="block"> f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega'(x_j)}. </math>

===Peano form===
If <math>x_0<x_1<\cdots<x_n</math> and <math>n\geq 1</math>, the divided differences can be expressed as<ref>{{Cite book |last=Skof |first=Fulvia |url=https://books.google.com/books?id=ulUM2GagwacC&dq=This+is+called+the+Peano+form+of+the+divided+differences&pg=PA41 |title=Giuseppe Peano between Mathematics and Logic: Proceeding of the International Conference in honour of Giuseppe Peano on the 150th anniversary of his birth and the centennial of the Formulario Mathematico Torino (Italy) October 2-3, 2008 |date=2011-04-30 |publisher=Springer Science & Business Media |isbn=978-88-470-1836-5 |pages=40 |language=en}}</ref>
<math display="block">f[x_0,\ldots,x_n] = \frac{1}{(n-1)!} \int_{x_0}^{x_n} f^{(n)}(t)\;B_{n-1}(t) \, dt</math>
where <math>f^{(n)}</math> is the <math>n</math>-th [[derivative]] of the function <math>f</math> and <math>B_{n-1}</math> is a certain [[B-spline]] of degree <math>n-1</math> for the data points <math>x_0,\dots,x_n</math>, given by the formula
<math display="block">B_{n-1}(t) = \sum_{k=0}^n \frac{(\max(0,x_k-t))^{n-1}}{\omega'(x_k)}.</math>
This is a consequence of the [[Peano kernel theorem]]; it is called the ''Peano form'' of the divided differences, and <math>B_{n-1}</math> is the ''Peano kernel'' for the divided differences, all named after [[Giuseppe Peano]].

===Forward and backward differences===
{{details|Finite difference}}
When the data points are equidistantly spaced we get the special case called '''forward differences'''. They are easier to calculate than the more general divided differences.

Given ''n''+1 data points
<math display="block">(x_0, y_0), \ldots, (x_n, y_n)</math>
with
<math display="block">x_{k} = x_0 + k h,\ \text{ for } \ k=0,\ldots,n \text{ and fixed } h>0</math>
the forward differences are defined as
<math display="block">\begin{align}
\Delta^{(0)} y_k &:= y_k,\qquad k=0,\ldots,n \\
\Delta^{(j)}y_k &:= \Delta^{(j-1)}y_{k+1} - \Delta^{(j-1)}y_k,\qquad k=0,\ldots,n-j,\ j=1,\dots,n,
\end{align}</math>
whereas the backward differences are defined as:
<math display="block">\begin{align}
\nabla^{(0)} y_k &:= y_k,\qquad k=0,\ldots,n \\
\nabla^{(j)}y_k &:= \nabla^{(j-1)}y_{k} - \nabla^{(j-1)}y_{k-1},\qquad k=j,\ldots,n,\ j=1,\dots,n.
\end{align}</math>
Thus the forward difference table is written as:
<math display="block"> \begin{matrix} y_0 & & & \\ & \Delta y_0 & & \\ y_1 & & \Delta^2 y_0 & \\ & \Delta y_1 & & \Delta^3 y_0\\ y_2 & & \Delta^2 y_1 & \\ & \Delta y_2 & & \\ y_3 & & & \\ \end{matrix} </math>
whereas the backward difference table is written as:
<math display="block"> \begin{matrix} y_0 & & & \\ & \nabla y_1 & & \\ y_1 & & \nabla^2 y_2 & \\ & \nabla y_2 & & \nabla^3 y_3\\ y_2 & & \nabla^2 y_3 & \\ & \nabla y_3 & & \\ y_3 & & & \\ \end{matrix} </math>
The relationship between divided differences and forward differences is<ref>{{cite book|last1=Burden|first1=Richard L.| last2=Faires|first2=J. Douglas | title=Numerical Analysis |url=https://archive.org/details/numericalanalysi00rlbu |url-access=limited | date=2011|page=[https://archive.org/details/numericalanalysi00rlbu/page/n146 129]|publisher=Cengage Learning | isbn=9780538733519| edition=9th}}</ref>
<math display="block">[y_j, y_{j+1}, \ldots , y_{j+k}] = \frac{1}{k!h^k}\Delta^{(k)}y_j,</math>
whereas for backward differences:{{Citation needed|date=December 2023}}
<math display="block">[y_j, y_{j-1},\ldots,y_{j-k}] = \frac{1}{k!h^k}\nabla^{(k)}y_j.</math>
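For equidistant nodes, the relation above yields the interpolation coefficients without any divisions inside the recursion. A minimal illustrative Python sketch (the function names are assumptions, not from the cited textbook):
<syntaxhighlight lang="python">
from math import factorial

def forward_differences(ys):
    """Illustrative sketch: return Delta^(0) y_0, ..., Delta^(n) y_0."""
    diffs, row = [ys[0]], list(ys)
    while len(row) > 1:
        row = [row[k + 1] - row[k] for k in range(len(row) - 1)]
        diffs.append(row[0])
    return diffs

def newton_forward_coeffs(ys, h):
    """Divided differences [y_0,...,y_k] = Delta^(k) y_0 / (k! h^k)."""
    return [d / (factorial(k) * h**k)
            for k, d in enumerate(forward_differences(ys))]
</syntaxhighlight>
For <code>ys = [1, 4, 9]</code>, the values of <math>x^2</math> at <math>x = 1, 2, 3</math> with <math>h = 1</math>, <code>newton_forward_coeffs(ys, 1)</code> returns <code>[1.0, 3.0, 1.0]</code>, matching the divided differences of <math>x^2</math> at those nodes.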
== See also ==
* [[Difference quotient]]
* [[Neville's algorithm]]
* [[Polynomial interpolation]]
* [[Mean value theorem for divided differences]]
* [[Nörlund–Rice integral]]
* [[Pascal's triangle]]

==References==
{{Reflist|1}}
* {{cite book|author=Louis Melville Milne-Thomson|author-link=L. M. Milne-Thomson|title=The Calculus of Finite Differences|year=2000|orig-year=1933|publisher=American Mathematical Soc.|isbn=978-0-8218-2107-7|at=Chapter 1: Divided Differences}}
* {{cite book|author1=Myron B. Allen|author2=Eli L. Isaacson|title=Numerical Analysis for Applied Science|year=1998|publisher=John Wiley & Sons|isbn=978-1-118-03027-1|at=Appendix A}}
* {{cite book|author=Ron Goldman|title=Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling|year=2002|publisher=Morgan Kaufmann|isbn=978-0-08-051547-2|at=Chapter 4: Newton Interpolation and Difference Triangles}}

== External links ==
* An [http://code.henning-thielemann.de/htam/src/Numerics/Interpolation/DividedDifference.hs implementation] in [[Haskell (programming language)|Haskell]].

[[Category:Finite differences]]
[[de:Polynominterpolation#Bestimmung der Koeffizienten: Schema der dividierten Differenzen]]