==Example==
Divided differences for <math>k=0</math> and the first few values of <math>j</math>:
<math display="block">
\begin{align}
\mathopen[y_0] &= y_0 \\
\mathopen[y_0,y_1] &= \frac{y_1-y_0}{x_1-x_0} \\
\mathopen[y_0,y_1,y_2] &= \frac{\mathopen[y_1,y_2]-\mathopen[y_0,y_1]}{x_2-x_0} = \frac{\frac{y_2-y_1}{x_2-x_1}-\frac{y_1-y_0}{x_1-x_0}}{x_2-x_0} = \frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)}-\frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)} \\
\mathopen[y_0,y_1,y_2,y_3] &= \frac{\mathopen[y_1,y_2,y_3]-\mathopen[y_0,y_1,y_2]}{x_3-x_0}
\end{align}
</math>
<!-- the \mathopen command is there because latex otherwise thinks that [...] denotes an optional argument -->

Thus, the table corresponding to these terms, up to two columns, has the following form:
<math display="block">\begin{matrix}
x_0 & y_{0} & & \\
& & {y_{1}-y_{0}\over x_1 - x_0} & \\
x_1 & y_{1} & & {{y_{2}-y_{1}\over x_2 - x_1}-{y_{1}-y_{0}\over x_1 - x_0} \over x_2 - x_0} \\
& & {y_{2}-y_{1}\over x_2 - x_1} & \\
x_2 & y_{2} & & \vdots \\
& & \vdots & \\
\vdots & & & \vdots \\
& & \vdots & \\
x_n & y_{n} & & \\
\end{matrix}</math>

==Properties==
* [[Linear functional|Linearity]]: <math display="block">\begin{align} (f+g)[x_0,\dots,x_n] &= f[x_0,\dots,x_n] + g[x_0,\dots,x_n] \\ (\lambda\cdot f)[x_0,\dots,x_n] &= \lambda\cdot f[x_0,\dots,x_n] \end{align}</math>
* [[Leibniz rule (generalized product rule)|Leibniz rule]]: <math display="block">(f\cdot g)[x_0,\dots,x_n] = f[x_0]\cdot g[x_0,\dots,x_n] + f[x_0,x_1]\cdot g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]\cdot g[x_n] = \sum_{r=0}^n f[x_0,\ldots,x_r]\cdot g[x_r,\ldots,x_n]</math>
* Divided differences are symmetric: if <math>\sigma : \{0, \dots, n\} \to \{0, \dots, n\}</math> is a permutation, then <math display="block">f[x_0, \dots, x_n] = f[x_{\sigma(0)}, \dots, x_{\sigma(n)}]</math>
* [[Polynomial interpolation]] in the [[Newton polynomial|Newton form]]: if <math>P</math> is a polynomial function of degree <math>\leq n</math> with divided differences <math>P[x_0, \dots , x_n]</math>, then <math display="block">P(x) = P[x_0] + P[x_0,x_1](x-x_0) + P[x_0,x_1,x_2](x-x_0)(x-x_1) + \cdots + P[x_0,\ldots,x_n] (x-x_0)(x-x_1)\cdots(x-x_{n-1})</math>
* If <math>p</math> is a polynomial function of degree less than <math>n</math>, then <math display="block">p[x_0, \dots, x_n] = 0.</math>
* [[Mean value theorem for divided differences]]: if <math>f</math> is ''n'' times differentiable, then <math display="block">f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!}</math> for a number <math>\xi</math> in the open interval determined by the smallest and largest of the <math>x_k</math>'s.
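The recursive scheme above is straightforward to compute. The following sketch is illustrative only: the helper names <code>divided_differences</code> and <code>newton_polynomial</code> are chosen here for exposition and are not from any standard library, and pairwise distinct nodes are assumed. It builds the table column by column and evaluates the Newton form from the interpolation property above.

<syntaxhighlight lang="python">
# Illustrative sketch; nodes must be pairwise distinct.

def divided_differences(xs, ys):
    """Return the triangular table; column k holds [y_j, ..., y_{j+k}] for all j."""
    n = len(xs)
    table = [list(ys)]  # zeroth-order differences [y_j]
    for k in range(1, n):
        prev = table[-1]
        col = [(prev[j + 1] - prev[j]) / (xs[j + k] - xs[j]) for j in range(n - k)]
        table.append(col)
    return table

def newton_polynomial(xs, ys, x):
    """Evaluate p[x0] + p[x0,x1](x-x0) + ... at x with a Horner-like scheme."""
    coeffs = [col[0] for col in divided_differences(xs, ys)]  # p[x0], p[x0,x1], ...
    result = coeffs[-1]
    for c, xk in zip(reversed(coeffs[:-1]), reversed(xs[:-1])):
        result = result * (x - xk) + c
    return result

# Example: f(x) = x^2 sampled at three nodes.
xs = [0.0, 1.0, 3.0]
ys = [x * x for x in xs]
print(divided_differences(xs, ys))    # [[0.0, 1.0, 9.0], [1.0, 4.0], [1.0]]
print(newton_polynomial(xs, ys, 2.0)) # 4.0
</syntaxhighlight>

With <math>f(x)=x^2</math> the second-order entry is exactly 1, and adding further nodes would only append zero columns, consistent with the vanishing property stated above.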
==Matrix form==
The divided difference scheme can be put into an upper [[triangular matrix]]:
<math display="block">T_f(x_0,\dots,x_n)=
\begin{pmatrix}
f[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \ldots & f[x_0,\dots,x_n] \\
0 & f[x_1] & f[x_1,x_2] & \ldots & f[x_1,\dots,x_n] \\
0 & 0 & f[x_2] & \ldots & f[x_2,\dots,x_n] \\
\vdots & \vdots & & \ddots & \vdots \\
0 & 0 & 0 & \ldots & f[x_n]
\end{pmatrix}.</math>

Then the following hold:
* <math>T_{f+g}(x) = T_f(x) + T_g(x)</math>
* <math>T_{\lambda f}(x) = \lambda T_f(x)</math> if <math>\lambda</math> is a scalar
* <math>T_{f\cdot g}(x) = T_f(x) \cdot T_g(x)</math> {{pb}} This follows from the Leibniz rule. It means that multiplication of such matrices is [[commutativity|commutative]]. Summarised, the matrices of divided difference schemes with respect to the same set of nodes ''x'' form a [[commutative ring]].
* Since <math>T_f(x)</math> is a triangular matrix, its [[eigenvalue]]s are <math>f(x_0), \dots, f(x_n)</math>.
* Let <math>\delta_\xi</math> be a [[Kronecker delta]]-like function, that is <math display="block">\delta_\xi(t) = \begin{cases} 1 &: t=\xi , \\ 0 &: \mbox{else}. \end{cases}</math> Obviously <math>f\cdot \delta_\xi = f(\xi)\cdot \delta_\xi</math>, thus <math>\delta_\xi</math> is an [[eigenfunction]] of pointwise function multiplication. That is, <math>T_{\delta_{x_i}}(x)</math> is in a sense an "[[eigenmatrix]]" of <math>T_f(x)</math>: <math> T_f(x) \cdot T_{\delta_{x_i}} (x) = f(x_i) \cdot T_{\delta_{x_i}}(x) </math>. However, all columns of <math>T_{\delta_{x_i}}(x)</math> are multiples of each other, so the [[matrix rank]] of <math>T_{\delta_{x_i}}(x)</math> is 1. The matrix of all eigenvectors of <math>T_f(x)</math> can therefore be composed from the <math>i</math>-th column of each <math>T_{\delta_{x_i}}(x)</math>. Denote this matrix of eigenvectors by <math>U(x)</math>. For example, <math display="block"> U(x_0,x_1,x_2,x_3) = \begin{pmatrix} 1 & \frac{1}{(x_1-x_0)} & \frac{1}{(x_2-x_0) (x_2-x_1)} & \frac{1}{(x_3-x_0) (x_3-x_1) (x_3-x_2)} \\ 0 & 1 & \frac{1}{(x_2-x_1)} & \frac{1}{(x_3-x_1) (x_3-x_2)} \\ 0 & 0 & 1 & \frac{1}{(x_3-x_2)} \\ 0 & 0 & 0 & 1 \end{pmatrix} </math> The [[diagonalizable matrix|diagonalization]] of <math>T_f(x)</math> can be written as <math display="block"> U(x) \cdot \operatorname{diag}(f(x_0),\dots,f(x_n)) = T_f(x) \cdot U(x) .</math>

===Polynomials and power series===
The matrix
<math display="block"> J = \begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 & & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 & \; \ddots & 1 \\
0 & 0 & 0 & 0 & & x_n
\end{pmatrix} </math>
contains the divided difference scheme for the [[identity function]] with respect to the nodes <math>x_0,\dots,x_n</math>; thus <math>J^m</math> contains the divided differences for the [[monomial|power function]] with [[exponent]] <math>m</math>. Consequently, you can obtain the divided differences for a [[polynomial function]] <math>p</math> by applying <math>p</math> to the matrix <math>J</math>: if <math display="block">p(\xi) = a_0 + a_1 \cdot \xi + \dots + a_m \cdot \xi^m</math> and <math display="block">p(J) = a_0 + a_1\cdot J + \dots + a_m\cdot J^m</math> then <math display="block">T_p(x) = p(J).</math>
This is known as ''Opitz' formula''.<ref>[[Carl de Boor|de Boor, Carl]], ''Divided Differences'', Surv. Approx. Theory 1 (2005), 46–69, [http://www.emis.de/journals/SAT/papers/2/]</ref><ref>Opitz, G., ''Steigungsmatrizen'', Z. Angew. Math. Mech. (1964), 44, T52–T54</ref>

Now consider increasing the degree of <math>p</math> to infinity, i.e. turning the Taylor polynomial into a [[Taylor series]]. Let <math>f</math> be a function which corresponds to a [[power series]]. You can compute the divided difference scheme for <math>f</math> by applying the corresponding matrix series to <math>J</math>: if <math display="block">f(\xi) = \sum_{k=0}^\infty a_k \xi^k</math> and <math display="block">f(J)=\sum_{k=0}^\infty a_k J^k</math> then <math display="block">T_f(x)=f(J).</math>
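Opitz' formula lends itself to a quick numerical check. The sketch below is illustrative only and uses [[NumPy]]; the helper <code>divided_difference</code>, the chosen nodes, and the polynomial <math>p(\xi) = 2 + \xi^3</math> are assumptions made here for the example, not part of the formula itself. It builds <math>J</math> for four nodes, evaluates <math>p</math> at the matrix <math>J</math>, and compares <math>p(J)</math> with the scheme <math>T_p(x)</math> assembled entry by entry from the recursive definition.

<syntaxhighlight lang="python">
# Illustrative sketch of Opitz' formula T_p(x) = p(J); nodes must be pairwise distinct.
import numpy as np

xs = np.array([0.0, 1.0, 3.0, 4.0])
n = len(xs)

# J: the nodes on the diagonal, ones on the first superdiagonal
J = np.diag(xs) + np.diag(np.ones(n - 1), k=1)

def p(t):
    return 2 + t ** 3

# p applied to the matrix J: p(J) = 2*I + J^3
p_of_J = 2 * np.eye(n) + np.linalg.matrix_power(J, 3)

def divided_difference(nodes, values):
    """Divided difference from the recursive definition."""
    if len(nodes) == 1:
        return values[0]
    return (divided_difference(nodes[1:], values[1:])
            - divided_difference(nodes[:-1], values[:-1])) / (nodes[-1] - nodes[0])

# Build T_p(x) entry by entry: entry (i, j) is p[x_i, ..., x_j]
T = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        T[i, j] = divided_difference(xs[i:j + 1], [p(t) for t in xs[i:j + 1]])

print(np.allclose(T, p_of_J))  # True: the two matrices agree
</syntaxhighlight>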