==Alternative characterizations==

===Expanded form===
<math display="block">
\begin{align}
f[x_0] &= f(x_0) \\
f[x_0,x_1] &= \frac{f(x_0)}{(x_0-x_1)} + \frac{f(x_1)}{(x_1-x_0)} \\
f[x_0,x_1,x_2] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)} + \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)} \\
f[x_0,x_1,x_2,x_3] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)\cdot(x_0-x_3)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)\cdot(x_1-x_3)} +\\
&\quad\quad \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)\cdot(x_2-x_3)} + \frac{f(x_3)}{(x_3-x_0)\cdot(x_3-x_1)\cdot(x_3-x_2)} \\
f[x_0,\dots,x_n] &= \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k\in\{0,\dots,n\}\setminus\{j\}} (x_j-x_k)}
\end{align}
</math>

With the help of the [[polynomial function]] <math>\omega(\xi) = (\xi-x_0) \cdots (\xi-x_n)</math>, this can be written as
<math display="block">
f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega'(x_j)}.
</math>

===Peano form===
If <math>x_0<x_1<\cdots<x_n</math> and <math>n\geq 1</math>, the divided differences can be expressed as<ref>{{Cite book |last=Skof |first=Fulvia |url=https://books.google.com/books?id=ulUM2GagwacC&dq=This+is+called+the+Peano+form+of+the+divided+differences&pg=PA41 |title=Giuseppe Peano between Mathematics and Logic: Proceeding of the International Conference in honour of Giuseppe Peano on the 150th anniversary of his birth and the centennial of the Formulario Mathematico Torino (Italy) October 2-3, 2008 |date=2011-04-30 |publisher=Springer Science & Business Media |isbn=978-88-470-1836-5 |pages=40 |language=en}}</ref>
<math display="block">f[x_0,\ldots,x_n] = \frac{1}{(n-1)!} \int_{x_0}^{x_n} f^{(n)}(t)\;B_{n-1}(t) \, dt</math>
where <math>f^{(n)}</math> is the <math>n</math>-th [[derivative]] of the function <math>f</math> and <math>B_{n-1}</math> is a certain [[B-spline]] of degree <math>n-1</math> for the data points <math>x_0,\dots,x_n</math>, given by the formula
<math display="block">B_{n-1}(t) = \sum_{k=0}^n \frac{(\max(0,x_k-t))^{n-1}}{\omega'(x_k)}.</math>
This is a consequence of the [[Peano kernel theorem]]; it is called the ''Peano form'' of the divided differences, and <math>B_{n-1}</math> is the ''Peano kernel'' for the divided differences, both named after [[Giuseppe Peano]].

===Forward and backward differences===
{{details|Finite difference}}
When the data points are equidistantly distributed, we get the special case called '''forward differences'''. They are easier to calculate than the more general divided differences.

Given ''n''+1 data points
<math display="block">(x_0, y_0), \ldots, (x_n, y_n)</math>
with
<math display="block">x_{k} = x_0 + k h,\ \text{ for } \ k=0,\ldots,n \text{ and fixed } h>0,</math>
the forward differences are defined as
<math display="block">\begin{align}
\Delta^{(0)} y_k &:= y_k,\qquad k=0,\ldots,n \\
\Delta^{(j)}y_k &:= \Delta^{(j-1)}y_{k+1} - \Delta^{(j-1)}y_k,\qquad k=0,\ldots,n-j,\ j=1,\dots,n,
\end{align}</math>
whereas the backward differences are defined as
<math display="block">\begin{align}
\nabla^{(0)} y_k &:= y_k,\qquad k=0,\ldots,n \\
\nabla^{(j)}y_k &:= \nabla^{(j-1)}y_{k} - \nabla^{(j-1)}y_{k-1},\qquad k=j,\ldots,n,\ j=1,\dots,n.
\end{align}</math>

Thus the forward difference table is written as
<math display="block">
\begin{matrix}
y_0 & & & \\
& \Delta y_0 & & \\
y_1 & & \Delta^2 y_0 & \\
& \Delta y_1 & & \Delta^3 y_0\\
y_2 & & \Delta^2 y_1 & \\
& \Delta y_2 & & \\
y_3 & & & \\
\end{matrix}
</math>
whereas the backward difference table is written as
<math display="block">
\begin{matrix}
y_0 & & & \\
& \nabla y_1 & & \\
y_1 & & \nabla^2 y_2 & \\
& \nabla y_2 & & \nabla^3 y_3\\
y_2 & & \nabla^2 y_3 & \\
& \nabla y_3 & & \\
y_3 & & & \\
\end{matrix}
</math>

The relationship between divided differences and forward differences is<ref>{{cite book|last1=Burden|first1=Richard L.|last2=Faires|first2=J. Douglas |title=Numerical Analysis |url=https://archive.org/details/numericalanalysi00rlbu |url-access=limited |date=2011 |page=[https://archive.org/details/numericalanalysi00rlbu/page/n146 129] |publisher=Cengage Learning |isbn=9780538733519 |edition=9th}}</ref>
<math display="block">[y_j, y_{j+1}, \ldots , y_{j+k}] = \frac{1}{k!h^k}\Delta^{(k)}y_j,</math>
whereas for backward differences:{{Citation needed|date=December 2023}}
<math display="block">[{y}_{j}, y_{j-1},\ldots,{y}_{j-k}] = \frac{1}{k!h^k}\nabla^{(k)}y_j.</math>
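As an illustrative sketch (not drawn from the cited sources; the function name <code>divided_difference</code> is ours), the expanded form above can be evaluated directly as a sum over the nodes, with each value <math>f(x_j)</math> weighted by the reciprocal of <math>\prod_{k\neq j}(x_j-x_k) = \omega'(x_j)</math>:

```python
def divided_difference(xs, ys):
    """Divided difference f[x_0, ..., x_n] via the expanded form:
    sum over j of f(x_j) / prod_{k != j} (x_j - x_k).

    xs: distinct nodes x_0, ..., x_n; ys: values f(x_0), ..., f(x_n).
    """
    n = len(xs)
    total = 0.0
    for j in range(n):
        # Denominator is omega'(x_j) = prod over k != j of (x_j - x_k).
        denom = 1.0
        for k in range(n):
            if k != j:
                denom *= xs[j] - xs[k]
        total += ys[j] / denom
    return total
```

For example, for <math>f(x)=x^2</math> any second-order divided difference equals the leading coefficient 1, regardless of the (distinct) nodes chosen.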
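The forward-difference table and the relation <math>[y_j,\ldots,y_{j+k}] = \Delta^{(k)}y_j/(k!\,h^k)</math> can be sketched as follows (an illustration under our own naming, not code from the cited textbook; <code>forward_differences</code> and <code>divided_from_forward</code> are hypothetical helper names):

```python
from math import factorial

def forward_differences(ys):
    """Forward-difference table: row j holds Delta^(j) y_k for k = 0..n-j."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[k + 1] - prev[k] for k in range(len(prev) - 1)])
    return table

def divided_from_forward(ys, h, k):
    """f[x_0, ..., x_k] = Delta^(k) y_0 / (k! h^k) for equispaced nodes x_i = x_0 + i*h."""
    return forward_differences(ys)[k][0] / (factorial(k) * h ** k)
```

For instance, with <math>f(x)=x^3</math> sampled at <math>x=0,1,2,3</math> (so <math>h=1</math>), the third forward difference is <math>\Delta^{(3)}y_0 = 6</math>, and dividing by <math>3!\,h^3 = 6</math> recovers the third-order divided difference 1, the leading coefficient of <math>x^3</math>.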