In numerical analysis, numerical differentiation algorithms estimate the derivative of a mathematical function or subroutine using values of the function and perhaps other knowledge about the function.
Finite differences
The simplest method is to use finite difference approximations.
A simple two-point estimation is to compute the slope of a nearby secant line through the points <math>(x, f(x))</math> and <math>(x + h, f(x + h))</math>.<ref>Richard L. Burden, J. Douglas Faires (2000), ''Numerical Analysis'' (7th ed.), Brooks/Cole.</ref> Choosing a small number <math>h</math>, <math>h</math> represents a small change in <math>x</math>, and it can be either positive or negative. The slope of this line is <math display="block">\frac{f(x + h) - f(x)}{h}.</math> This expression is Newton's difference quotient (also known as a first-order divided difference).
The slope of this secant line differs from the slope of the tangent line by an amount that is approximately proportional to <math>h</math>. As <math>h</math> approaches zero, the slope of the secant line approaches the slope of the tangent line. Therefore, the true derivative of <math>f</math> at <math>x</math> is the limit of the value of the difference quotient as the secant lines get closer and closer to being a tangent line: <math display="block">f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.</math>
Since immediately substituting 0 for <math>h</math> results in the indeterminate form <math>\frac{0}{0}</math>, the derivative cannot be computed by direct substitution; the quotient must instead be evaluated at a small nonzero <math>h</math>.
Equivalently, the slope could be estimated by employing the positions <math>x - h</math> and <math>x</math>.
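As a minimal sketch, Newton's difference quotient translates directly into code; the helper name <code>forward_difference</code> and the choice of function and point are illustrative, not taken from any library:

<syntaxhighlight lang="python3">
import math


def forward_difference(f, x, h):
    # Slope of the secant line through (x, f(x)) and (x + h, f(x + h)).
    return (f(x + h) - f(x)) / h


# Example: approximate the derivative of sin at x = 1; the exact value is cos(1).
print(forward_difference(math.sin, 1.0, 1e-6))  # agrees with cos(1) to about 6 digits
print(math.cos(1.0))
</syntaxhighlight>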
Another two-point formula is to compute the slope of a nearby secant line through the points <math>(x - h, f(x - h))</math> and <math>(x + h, f(x + h))</math>. The slope of this line is <math display="block">\frac{f(x + h) - f(x - h)}{2h}.</math>
This formula is known as the symmetric difference quotient. In this case the first-order errors cancel, so the slope of these secant lines differs from the slope of the tangent line by an amount that is approximately proportional to <math>h^2</math>. Hence for small values of <math>h</math> this is a more accurate approximation to the tangent line than the one-sided estimation. However, although the slope is being computed at <math>x</math>, the value of the function at <math>x</math> is not involved.
The estimation error is given by <math display="block">R = \frac{-f^{(3)}(c)}{6} h^2,</math> where <math>c</math> is some point between <math>x - h</math> and <math>x + h</math>. This error does not include the rounding error due to numbers being represented and calculations being performed in limited precision.
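The gain in accuracy is easy to observe numerically. In the sketch below (the helper name <code>central_difference</code> is illustrative), the error shrinks roughly a hundredfold for each tenfold reduction in <math>h</math>, reflecting the <math>h^2</math> error term:

<syntaxhighlight lang="python3">
import math


def central_difference(f, x, h):
    # Symmetric difference quotient: slope through (x - h, f(x - h))
    # and (x + h, f(x + h)); the first-order error terms cancel.
    return (f(x + h) - f(x - h)) / (2 * h)


for h in [1e-2, 1e-3, 1e-4]:
    print(h, abs(central_difference(math.sin, 1.0, h) - math.cos(1.0)))
</syntaxhighlight>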
The symmetric difference quotient is employed as the method of approximating the derivative in a number of calculators, including the TI-82, TI-83, TI-84, and TI-85, all of which use this method with <math>h = 0.001</math>.<ref name="Merseth2003">Template:Cite book</ref><ref name="RubySellers2014">Template:Cite book</ref>
Step size
An important consideration in practice when the function is calculated using floating-point arithmetic of finite precision is the choice of step size, <math>h</math>. If chosen too small, the subtraction will yield a large rounding error. In fact, all the finite-difference formulae are ill-conditioned<ref name=Fornberg1>Numerical Differentiation of Analytic Functions, B. Fornberg, ACM Transactions on Mathematical Software (TOMS), 1981.</ref> and due to cancellation will produce a value of zero if <math>h</math> is small enough.<ref name=SquireTrapp1>Using Complex Variables to Estimate Derivatives of Real Functions, W. Squire, G. Trapp, SIAM Review, 1998.</ref> If chosen too large, the slope of the secant line is computed accurately, but the secant itself may be a poor approximation of the tangent, so the estimate of the derivative is worse.<ref name="edo2021">Template:Cite book</ref>
For basic central differences, the optimal step is the cube root of machine epsilon.<ref>Sauer, Timothy (2012). ''Numerical Analysis''. Pearson. p. 248.</ref> For the numerical derivative formula evaluated at <math>x</math> and <math>x + h</math>, a choice for <math>h</math> that is small without producing a large rounding error is <math>\sqrt{\varepsilon}\,x</math> (though not when <math>x = 0</math>), where the machine epsilon <math>\varepsilon</math> is typically of the order of <math>2.2 \times 10^{-16}</math> for double precision.<ref>Following Numerical Recipes in C, Chapter 5.7.</ref> A formula for <math>h</math> that balances the rounding error against the secant error for optimum accuracy is<ref>p. 263.</ref> <math display="block">h = 2\sqrt{\varepsilon\left|\frac{f(x)}{f''(x)}\right|}</math> (though not when <math>f''(x) = 0</math>), and employing it requires knowledge of the function.
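The trade-off can be seen by sweeping <math>h</math> over several orders of magnitude. In the following sketch, the choice of <math>f = \exp</math> and the point <math>x = 1</math> are arbitrary examples: the error of the one-sided quotient first falls with <math>h</math> (truncation error, proportional to <math>h</math>) and then rises again (rounding error, proportional to <math>\varepsilon/h</math>), with the minimum near <math>\sqrt{\varepsilon}</math>:

<syntaxhighlight lang="python3">
import math
import sys

f, fprime = math.exp, math.exp  # example: exp is its own derivative
x = 1.0
eps = sys.float_info.epsilon    # about 2.2e-16 for double precision

# Total error is smallest near h = sqrt(eps) ~ 1e-8 for this formula.
for k in range(1, 16):
    h = 10.0 ** (-k)
    err = abs((f(x + h) - f(x)) / h - fprime(x))
    print(f"h = 1e-{k:02d}   error = {err:.2e}")
</syntaxhighlight>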
For computer calculations the problems are exacerbated because, although <math>x</math> necessarily holds a representable floating-point number in some precision (32- or 64-bit, etc.), <math>x + h</math> almost certainly will not be exactly representable in that precision. This means that <math>x + h</math> will be changed (by rounding or truncation) to a nearby machine-representable number, with the consequence that <math>(x + h) - x</math> will not equal <math>h</math>; the two function evaluations will not be exactly <math>h</math> apart. In this regard, since most decimal fractions are recurring sequences in binary (just as 1/3 is in decimal), a seemingly round step such as <math>h = 0.1</math> will not be a round number in binary; it is 0.000110011001100...<sub>2</sub>. A possible approach is as follows:
<syntaxhighlight>
h := sqrt(eps) * x;
xph := x + h;
dx := xph - x;
slope := (F(xph) - F(x)) / dx;
</syntaxhighlight>
However, with computers, compiler optimization facilities may fail to attend to the details of actual computer arithmetic and instead apply the axioms of mathematics to deduce that <code>dx</code> and <code>h</code> are the same. With C and similar languages, a directive that <code>xph</code> is a volatile variable will prevent this.
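The same recipe can be expressed in Python, where no volatile directive is needed because the intermediate values are stored explicitly; the function name <code>slope</code> mirrors the pseudocode above:

<syntaxhighlight lang="python3">
import math
import sys


def slope(F, x):
    # Step proportional to x, as in the recipe above; assumes x != 0.
    h = math.sqrt(sys.float_info.epsilon) * x
    xph = x + h
    dx = xph - x  # the step actually taken between the two evaluations
    return (F(xph) - F(x)) / dx


x = 0.1
h = math.sqrt(sys.float_info.epsilon) * x
print((x + h) - x == h)  # typically False: x + h is rounded
print(slope(math.sin, x), math.cos(x))
</syntaxhighlight>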
Other methods
Higher-order methods
To obtain more general derivative approximation formulas for some function <math>f(x)</math>, let <math>h > 0</math> be a positive number close to zero. The Taylor expansion of <math>f</math> about the base point <math>x</math> is
<math display="block">f(x+h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(x) + \cdots. \qquad (1)</math>
Replacing <math>h</math> by <math>2h</math> gives
<math display="block">f(x+2h) = f(x) + 2h f'(x) + \frac{4h^2}{2!} f''(x) + \frac{8h^3}{3!} f'''(x) + \cdots. \qquad (2)</math>
Multiplying identity (1) by 4 gives
<math display="block">4f(x+h) = 4f(x) + 4h f'(x) + \frac{4h^2}{2!} f''(x) + \frac{4h^3}{3!} f'''(x) + \cdots. \qquad (3)</math>
Subtracting identity (3) from identity (2) eliminates the <math>h^2</math> term:
<math display="block">f(x+2h) - 4f(x+h) = -3f(x) - 2h f'(x) + \frac{4h^3}{3!} f'''(x) + \cdots,</math>
which can be written as
<math display="block">f(x+2h) - 4f(x+h) = -3f(x) - 2h f'(x) + O(h^3).</math>
Rearranging terms gives <math display="block"> f'(x) = \frac{-3f(x) + 4f(x+h) -f(x+2h)}{2h} + O(h^{2}),</math>
which is called the three-point forward difference formula for the derivative. Using a similar approach, one can show
<math display="block"> f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^{2})</math> which is called the three-point central difference formula, and <math display="block"> f'(x) = \frac{f(x-2h) -4f(x-h) + 3f(x)}{2h} + O(h^{2})</math> which is called the three-point backward difference formula.
By a similar approach, the five-point midpoint approximation formula can be derived as:<ref>Abramowitz & Stegun, Table 25.2.</ref> <math display="block">f'(x) = \frac{-f(x + 2h) + 8 f(x + h) - 8 f(x - h) + f(x - 2h)}{12h} + O(h^{4}).</math>
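As a direct transcription of the five-point formula (the helper name and test function are illustrative), its fourth-order behavior is visible in the errors:

<syntaxhighlight lang="python3">
import math


def five_point(f, x, h):
    # Five-point midpoint formula; truncation error is O(h**4).
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)


# The error shrinks by roughly a factor of 10**4 per tenfold reduction
# in h, until rounding error takes over.
for h in [1e-1, 1e-2, 1e-3]:
    print(h, abs(five_point(math.sin, 1.0, h) - math.cos(1.0)))
</syntaxhighlight>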
Numerical example
Consider approximating the derivative of <math>f(x)=x \sin{x}</math> at the point <math>x_{0} = \frac{\pi}{4}</math>. Since <math>f'(x)=\sin{x} + x \cos{x}</math>, the exact value is <math display="block"> f'(\frac{\pi}{4}) = \sin{\frac{\pi}{4}} + \frac{\pi}{4}\cos{\frac{\pi}{4}} = \frac{1}{\sqrt{2}} + \frac{\pi}{4\sqrt{2}} \approx 1.2624671484563432. </math>
| Formula | h | Approximation | Error |
|---|---|---|---|
| Three-point forward difference formula | 0.1 | 1.2719084899816118 | 9.441 × 10<sup>−3</sup> |
| Three-point forward difference formula | 0.01 | 1.2625569346253918 | 8.978 × 10<sup>−5</sup> |
| Three-point forward difference formula | 0.001 | 1.2624680412510747 | 8.927 × 10<sup>−7</sup> |
| Three-point forward difference formula | 0.0001 | 1.2624671573796542 | 8.923 × 10<sup>−9</sup> |
| Three-point central difference formula | 0.1 | 1.2580094219247624 | 4.457 × 10<sup>−3</sup> |
| Three-point central difference formula | 0.01 | 1.2624225374520737 | 4.461 × 10<sup>−5</sup> |
| Three-point central difference formula | 0.001 | 1.2624667023429792 | 4.461 × 10<sup>−7</sup> |
| Three-point central difference formula | 0.0001 | 1.2624671439953605 | 4.460 × 10<sup>−9</sup> |
| Three-point backward difference formula | 0.1 | 1.2707750261498707 | 8.307 × 10<sup>−3</sup> |
| Three-point backward difference formula | 0.01 | 1.2625557981227442 | 8.864 × 10<sup>−5</sup> |
| Three-point backward difference formula | 0.001 | 1.2624680401146504 | 8.916 × 10<sup>−7</sup> |
| Three-point backward difference formula | 0.0001 | 1.262467157379099 | 8.922 × 10<sup>−9</sup> |
Code
The following is an example of a Python implementation for finding derivatives numerically for <math>f(x) = \frac{2x}{1+\sqrt{x}}</math> using the various three-point difference formulas at <math>x_{0} = 4</math>. The function <code>func</code> has derivative <code>func_prime</code>.
Example implementation in Python:
<syntaxhighlight lang="python3" line="1">
import math


def func(x):
    return (2 * x) / (1 + math.sqrt(x))


def func_prime(x):
    return (2 + math.sqrt(x)) / ((1 + math.sqrt(x)) ** 2)


def three_point_forward(value, h):
    # f'(x) ~ (-3 f(x) + 4 f(x+h) - f(x+2h)) / (2h)
    return (-1.5 * func(value) + 2 * func(value + h) - 0.5 * func(value + 2 * h)) / h


def three_point_backward(value, h):
    # f'(x) ~ (f(x-2h) - 4 f(x-h) + 3 f(x)) / (2h)
    return (0.5 * func(value - 2 * h) - 2 * func(value - h) + 1.5 * func(value)) / h


def three_point_central(value, h):
    # f'(x) ~ (f(x+h) - f(x-h)) / (2h)
    return (-0.5 * func(value - h) + 0.5 * func(value + h)) / h


value = 4
actual = func_prime(value)
print("Actual value " + str(actual))
print("============================================")
for step_size in [0.1, 0.01, 0.001, 0.0001]:
    print("Step size " + str(step_size))
    forward = three_point_forward(value, step_size)
    backward = three_point_backward(value, step_size)
    central = three_point_central(value, step_size)
    print("Forward  {:.12f}, Error = {:.3e}".format(forward, abs(forward - actual)))
    print("Backward {:.12f}, Error = {:.3e}".format(backward, abs(backward - actual)))
    print("Central  {:.12f}, Error = {:.3e}".format(central, abs(central - actual)))
    print("============================================")
</syntaxhighlight>
Output:
<syntaxhighlight lang="text">
Actual value 0.4444444444444444
============================================
Step size 0.1
Forward  0.444396301805, Error = 4.814e-05
Backward 0.444392124681, Error = 5.232e-05
Central  0.444419361798, Error = 2.508e-05
============================================
Step size 0.01
Forward  0.444443944979, Error = 4.995e-07
Backward 0.444443940808, Error = 5.036e-07
Central  0.444444193672, Error = 2.508e-07
============================================
Step size 0.001
Forward  0.444444439431, Error = 5.013e-09
Backward 0.444444439426, Error = 5.018e-09
Central  0.444444441937, Error = 2.508e-09
============================================
Step size 0.0001
Forward  0.444444444390, Error = 5.482e-11
Backward 0.444444444394, Error = 5.038e-11
Central  0.444444444419, Error = 2.512e-11
============================================
</syntaxhighlight>
Higher derivatives
Using Newton's difference quotient, <math display="block">f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h},</math> the following can be shown<ref>Template:Cite book</ref> (for <math>n > 0</math>): <math display="block">f^{(n)}(x) = \lim_{h\to 0} \frac{1}{h^n} \sum_{k=0}^n (-1)^{k+n} \binom{n}{k} f(x + kh).</math>
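For a fixed small <math>h</math>, the sum can be evaluated directly. The sketch below is a transcription of the formula above (the helper name is illustrative), applied to the third derivative of sin at 0, whose exact value is <math>-\cos(0) = -1</math>:

<syntaxhighlight lang="python3">
import math


def nth_derivative(f, x, n, h):
    # Finite-difference analogue of the limit above, at a fixed small h.
    total = 0.0
    for k in range(n + 1):
        total += (-1) ** (k + n) * math.comb(n, k) * f(x + k * h)
    return total / h ** n


print(nth_derivative(math.sin, 0.0, 3, 1e-3))  # close to -1
</syntaxhighlight>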
Complex-variable methods
The classical finite-difference approximations for numerical differentiation are ill-conditioned. However, if <math>f</math> is a holomorphic function, real-valued on the real line, which can be evaluated at points in the complex plane near <math>x</math>, then there are stable methods. For example,<ref name=SquireTrapp1/> the first derivative can be calculated by the complex-step derivative formula:<ref>Template:Cite journal</ref><ref>Differentiation With(out) a Difference, by Nicholas Higham.</ref><ref>Article from the MathWorks blog, posted by Cleve Moler.</ref> <math display="block">f'(x) = \frac{\Im(f(x + \mathrm{i}h))}{h} + O(h^2), \quad \mathrm{i}^2 := -1.</math>
The recommended step size to obtain accurate derivatives for a range of conditions is <math>h = 10^{-200}</math>.<ref name="edo2021"/> This formula can be obtained by Taylor series expansion: <math display="block">f(x+\mathrm{i}h) = f(x) + \mathrm{i}h f'(x) - \tfrac{1}{2!} h^2 f''(x) - \tfrac{\mathrm{i}}{3!} h^3 f^{(3)}(x) + \cdots.</math> Taking the imaginary part and dividing by <math>h</math> leaves <math>f'(x) + O(h^2)</math>; since no subtraction of nearly equal quantities occurs, there is no cancellation error, which is why <math>h</math> can be taken so small.
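A minimal sketch of the complex-step formula, assuming an implementation of <math>f</math> that accepts complex arguments (here Python's cmath version of a standard function); the helper name is illustrative:

<syntaxhighlight lang="python3">
import cmath


def complex_step(f, x, h=1e-200):
    # No subtraction of nearly equal values occurs, so h can be tiny.
    return f(complex(x, h)).imag / h


print(complex_step(cmath.sin, 1.0))  # matches cos(1) to full double precision
</syntaxhighlight>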
The complex-step derivative formula is only valid for calculating first-order derivatives. A generalization of the above for calculating derivatives of any order employs multicomplex numbers, resulting in multicomplex derivatives.<ref>Template:Cite journal</ref> <math display="block">f^{(n)}(x) \approx \frac{\mathcal{C}^{(n)}_{2^n-1}(f(x + \mathrm{i}^{(1)} h + \cdots + \mathrm{i}^{(n)} h))}{h^n},</math> where the <math>\mathrm{i}^{(k)}</math> denote the multicomplex imaginary units, with <math>\mathrm{i}^{(1)} \equiv \mathrm{i}</math>. The <math>\mathcal{C}^{(n)}_k</math> operator extracts the <math>k</math>th component of a multicomplex number of level <math>n</math>; e.g., <math>\mathcal{C}^{(n)}_0</math> extracts the real component and <math>\mathcal{C}^{(n)}_{2^n-1}</math> extracts the last, "most imaginary" component. The method can be applied to mixed derivatives, e.g., for a second-order derivative <math display="block">\frac{\partial^2 f(x, y)}{\partial x \,\partial y} \approx \frac{\mathcal{C}^{(2)}_3(f(x + \mathrm{i}^{(1)} h, y + \mathrm{i}^{(2)} h))}{h^2}.</math>
A C++ implementation of multicomplex arithmetic is available.
In general, derivatives of any order can be calculated using Cauchy's integral formula:<ref>Ablowitz, M. J., Fokas, A. S. (2003). Complex Variables: Introduction and Applications. Cambridge University Press. See Theorem 2.6.2.</ref> <math display="block">f^{(n)}(a) = \frac{n!}{2\pi i} \oint_\gamma \frac{f(z)}{(z - a)^{n+1}} \,\mathrm{d}z,</math> where the integration is done numerically.
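For example, the contour can be taken as a circle of radius <math>r</math> around <math>a</math> and the integral approximated with the trapezoidal rule, which converges rapidly for periodic integrands. In the sketch below the helper name and parameter choices are illustrative; substituting <math>z = a + re^{\mathrm{i}\theta}</math> into Cauchy's formula reduces the contour integral to the discrete sum shown:

<syntaxhighlight lang="python3">
import cmath
import math


def cauchy_derivative(f, a, n, r=0.5, m=64):
    # Trapezoidal rule on z = a + r*exp(i*theta); Cauchy's formula becomes
    # n!/(m * r**n) * sum_k f(z_k) * exp(-i*n*theta_k) over m equal angles.
    s = 0.0 + 0.0j
    for k in range(m):
        theta = 2 * math.pi * k / m
        s += f(a + r * cmath.exp(1j * theta)) * cmath.exp(-1j * n * theta)
    return math.factorial(n) * s / (m * r ** n)


print(cauchy_derivative(cmath.exp, 0.0, 4).real)  # exact value is 1
</syntaxhighlight>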
Using complex variables for numerical differentiation was started by Lyness and Moler in 1967.<ref name=LynessMoler1>Template:Cite journal</ref> Their algorithm is applicable to higher-order derivatives.
A method based on numerical inversion of a complex Laplace transform was developed by Abate and Dubner.<ref>Template:Cite journal</ref> An algorithm that can be used without requiring knowledge about the method or the character of the function was developed by Fornberg.<ref name=Fornberg1/>
Differential quadrature
Differential quadrature is the approximation of derivatives by using weighted sums of function values.<ref>Differential Quadrature and Its Application in Engineering: Engineering Applications, Chang Shu, Springer, 2000.</ref><ref>Advanced Differential Quadrature Methods, Yingyan Zhang, CRC Press, 2009.</ref> Differential quadrature is of practical interest because it allows one to compute derivatives from noisy data. The name is in analogy with quadrature, meaning numerical integration, where weighted sums are used in methods such as Simpson's rule or the trapezoidal rule. There are various methods for determining the weight coefficients, for example, the Savitzky–Golay filter. Differential quadrature is used to solve partial differential equations. There are further methods for computing derivatives from noisy data.<ref>Template:Cite journal</ref>
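As one possible illustration (assuming SciPy is available; the Savitzky–Golay filter is just one way to choose the weights), the derivative of noisy samples can be estimated by differentiating local polynomial fits rather than differencing the noise directly:

<syntaxhighlight lang="python3">
import numpy as np
from scipy.signal import savgol_filter

# Noisy samples of sin on a uniform grid.
x = np.linspace(0, 2 * np.pi, 200)
rng = np.random.default_rng(0)
y = np.sin(x) + 0.01 * rng.standard_normal(x.size)

# Weighted-sum derivative estimate: fit a cubic to each 21-point window
# and differentiate the fit.
dy = savgol_filter(y, window_length=21, polyorder=3, deriv=1, delta=x[1] - x[0])

err = np.max(np.abs(dy[30:-30] - np.cos(x[30:-30])))
print(err)  # far smaller than the error of differencing the raw noisy samples
</syntaxhighlight>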
See also
- List of numerical-analysis software