{{Short description|Sequence acceleration method in numerical analysis}}
[[File:Richardson extra 2d.gif|thumb|An example of the Richardson extrapolation method in two dimensions.]]
In [[numerical analysis]], '''Richardson extrapolation''' is a [[Series acceleration|sequence acceleration]] method used to improve the [[rate of convergence]] of a [[sequence]] of estimates of some value <math>A^\ast = \lim_{h\to 0} A(h)</math>. In essence, given the value of <math>A(h)</math> for several values of <math>h</math>, we can estimate <math>A^\ast</math> by extrapolating the estimates to <math>h=0</math>. It is named after [[Lewis Fry Richardson]], who introduced the technique in the early 20th century,<ref>{{cite journal | last=Richardson | first=L. F. | author-link=Lewis Fry Richardson | title=The approximate arithmetical solution by finite differences of physical problems including differential equations, with an application to the stresses in a masonry dam | journal=Philosophical Transactions of the Royal Society A | year=1911 | volume=210 | issue=459–470 | pages=307–357 | doi=10.1098/rsta.1911.0009 | doi-access=free}}</ref><ref>{{cite journal | last1=Richardson | first1=L. F. | author-link=Lewis Fry Richardson | title=The deferred approach to the limit | journal=Philosophical Transactions of the Royal Society A | year=1927 | volume=226 | issue=636–646 | pages=299–349 | doi=10.1098/rsta.1927.0008 | last2=Gaunt | first2=J. A. | doi-access=free }}</ref> though the idea was already known to [[Christiaan Huygens]] in [[Christiaan_Huygens#De_Circuli_Magnitudine_Inventa|his calculation]] of <math>\pi</math>.<ref>{{Citation|last=Brezinski|first=Claude|title=Some pioneers of extrapolation methods|date=2009-11-01|url=https://www.worldscientific.com/doi/10.1142/9789812836267_0001|work=The Birth of Numerical Analysis|pages=1–22|publisher=WORLD SCIENTIFIC|doi=10.1142/9789812836267_0001|isbn=978-981-283-625-0}}</ref> In the words of [[Garrett Birkhoff|Birkhoff]] and [[Gian-Carlo Rota|Rota]], "its usefulness for practical computations can hardly be overestimated."<ref>Page 126 of {{cite book | last=Birkhoff | first=Garrett | author-link=Garrett Birkhoff |author2=Gian-Carlo Rota |author2-link=Gian-Carlo Rota | title=Ordinary differential equations | publisher=John Wiley and sons | year=1978 | edition=3rd | isbn=0-471-07411-X | oclc= 4379402}}</ref>

Practical applications of Richardson extrapolation include [[Romberg integration]], which applies Richardson extrapolation to the [[trapezoid rule]], and the [[Bulirsch–Stoer algorithm]] for solving ordinary differential equations.

==General formula==

=== Notation ===
Let <math>A_0(h)</math> be an approximation of the exact value <math>A^*</math> that depends on a step size {{mvar|h}} (where <math display="inline">0 < h < 1</math>) with an [[Approximation error|error]] formula of the form
<math display="block"> A^* = A_0(h)+a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \cdots </math>
where the <math>a_i</math> are unknown constants and the <math>k_i</math> are known constants such that <math>h^{k_i} > h^{k_{i+1}}</math> (equivalently, <math>k_i < k_{i+1}</math>). When <math>A^* = A_i(h)+O(h^{k_i}),</math> the term <math>O(h^{k_i})</math> represents the [[truncation error]] of the approximation <math>A_i(h)</math>, and <math>A_i(h)</math> is said to be an <math>O(h^{k_i})</math> approximation.

Note that by simplifying with [[Big O notation]], the following formulae are equivalent:
<math display="block"> \begin{align} A^* &= A_0(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \cdots \\ A^* &= A_0(h)+ a_0h^{k_0} + O(h^{k_1}) \\ A^* &= A_0(h)+O(h^{k_0}) \end{align} </math>
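As a concrete illustration (a minimal MATLAB/Octave sketch; the forward-difference formula and the test function <code>sin</code> are chosen here purely for demonstration and are not part of the exposition above), the forward-difference approximation of a derivative has an error expansion of this form with <math>k_0 = 1, k_1 = 2, \ldots</math>:

<syntaxhighlight lang="matlab">
% Illustrative sketch (not from the article text): the forward-difference
% approximation A0(h) = (f(x0+h) - f(x0))/h of f'(x0) has the error
% expansion A* = A0(h) + a0*h + a1*h^2 + ..., i.e. k0 = 1.
f     = @(x) sin(x);    % test function (arbitrary choice)
x0    = 1.0;
exact = cos(x0);        % A* = f'(x0)
A0    = @(h) (f(x0 + h) - f(x0)) / h;
for h = [0.1, 0.05, 0.025]
    % Halving h should roughly halve the error, confirming k0 = 1.
    fprintf('h = %6.3f   error = %.3e\n', h, abs(A0(h) - exact));
end
</syntaxhighlight>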
=== Purpose ===
Richardson extrapolation is a process that finds a better approximation of <math>A^*</math> by changing the error formula from <math>A^*=A_0(h)+O(h^{k_0})</math> to <math>A^* = A_1(h) + O(h^{k_1}).</math> Replacing <math>A_0(h)</math> with <math>A_1(h)</math> therefore reduces the [[truncation error]] from <math>O(h^{k_0})</math> to <math>O(h^{k_1})</math> for the same step size <math>h</math>. In general, <math>A_i(h)</math> is a more accurate estimate than <math>A_j(h)</math> when <math>i>j</math>. This process produces a better approximation of <math>A^*</math> by subtracting the largest term in the error, <math>O(h^{k_0})</math>, and it can be repeated to remove further error terms and obtain still better approximations.

=== Process ===
Using the step sizes <math>h</math> and <math>h / t</math> for some constant <math>t</math>, the two formulas for <math>A^*</math> are:
{{NumBlk||<math display="block">A^* = A_0(h)+ a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + O(h^{k_3}) </math>|{{EquationRef|1}}}}
{{NumBlk||<math display="block">A^* = A_0\!\left(\frac{h}{t}\right) + a_0\left(\frac{h}{t}\right)^{k_0} + a_1\left(\frac{h}{t}\right)^{k_1} + a_2\left(\frac{h}{t}\right)^{k_2} + O(h^{k_3}) </math>|{{EquationRef|2}}}}

To improve our approximation from <math>O(h^{k_0})</math> to <math>O(h^{k_1})</math> by removing the first error term, we multiply {{EquationNote|2|equation 2}} by <math>t^{k_0}</math> and subtract {{EquationNote|1|equation 1}}, so that the <math>a_0</math> terms cancel:
<math display="block"> (t^{k_0}-1)A^* = \bigg[t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)\bigg] + \bigg(t^{k_0}a_1\bigg(\frac{h}{t}\bigg)^{k_1}-a_1h^{k_1}\bigg)+ \bigg(t^{k_0}a_2\bigg(\frac{h}{t}\bigg)^{k_2}-a_2h^{k_2}\bigg) + O(h^{k_3}). </math>
This multiplication and subtraction is performed because <math display="inline">\big[t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)\big]</math> is an <math>O(h^{k_1})</math> approximation of <math>(t^{k_0}-1)A^*</math>. Solving this formula for <math>A^*</math> gives
<math display="block">A^* = \frac{\bigg[t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)\bigg]}{t^{k_0}-1} + \frac{\bigg(t^{k_0}a_1\bigg(\frac{h}{t}\bigg)^{k_1}-a_1h^{k_1}\bigg)}{t^{k_0}-1} + \frac{\bigg(t^{k_0}a_2\bigg(\frac{h}{t}\bigg)^{k_2}-a_2h^{k_2}\bigg)}{t^{k_0}-1} +O(h^{k_3}) </math>
which can be written as <math>A^* = A_1(h)+O(h^{k_1})</math> by setting
<math display="block">A_1(h) = \frac{t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0}-1} .</math>

=== Recurrence relation ===
A general [[recurrence relation]] can be defined for the approximations by
<math display="block"> A_{i+1}(h) = \frac{t^{k_i}A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_i}-1} </math>
where <math>k_{i+1}</math> satisfies
<math display="block"> A^* = A_{i+1}(h) + O(h^{k_{i+1}}) .</math>
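As an illustration, the following MATLAB/Octave sketch implements this recurrence under the simplifying assumptions <math>t = 2</math> and <math>k_i = i + 1</math> (which hold for the forward-difference example above, but are not dictated by the general formula):

<syntaxhighlight lang="matlab">
% Sketch of the recurrence (illustrative assumptions: t = 2, k_i = i + 1).
% Column j of the table holds A_{j-1} evaluated at successively smaller h.
A0 = @(h) (sin(1 + h) - sin(1)) / h;  % any A0(h); here A* = cos(1)
t = 2; n = 4; h = 0.1;
A = zeros(n, n);
for i = 1:n
    A(i, 1) = A0(h / t^(i - 1));      % column 1: plain approximations
end
for j = 1:n-1
    for i = j+1:n
        % Recurrence with k_{j-1} = j under the assumption above:
        A(i, j + 1) = (t^j * A(i, j) - A(i - 1, j)) / (t^j - 1);
    end
end
fprintf('error of the most-extrapolated value: %.3e\n', abs(A(n, n) - cos(1)));
</syntaxhighlight>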
=== Properties ===
Richardson extrapolation can be considered a linear [[sequence transformation]]. Additionally, the general formula can be used to estimate <math>k_0</math> (the leading-order step size behavior of the [[truncation error]]) when neither its value nor <math>A^*</math> is known ''a priori''. Such a technique can be useful for quantifying an unknown [[rate of convergence]].

Given approximations of <math>A^*</math> from three distinct step sizes <math>h</math>, <math>h / t</math>, and <math>h / s</math>, the exact relationship
<math display="block">A^*=\frac{t^{k_0}A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0}-1} + O(h^{k_1}) = \frac{s^{k_0}A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0}-1} + O(h^{k_1})</math>
yields an approximate relationship (note that the two {{math|''O''}} terms above indicate only the leading-order step size behavior; their explicit forms differ, so cancelling them against each other is only approximately valid)
<math display="block">A_i\left(\frac{h}{t}\right) + \frac{A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0}-1} \approx A_i\left(\frac{h}{s}\right) +\frac{A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0}-1}</math>
which can be solved numerically to estimate <math>k_0</math> for some arbitrary valid choices of <math>h</math>, <math>s</math>, and <math>t</math>. As <math>t \neq 1</math>, if <math>t>0</math> and <math>s</math> is chosen so that <math>s = t^2</math>, this approximate relationship reduces to a quadratic equation in <math>t^{k_0}</math>, which is readily solved for <math>k_0</math>.
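The following MATLAB/Octave sketch illustrates this estimate with <math>t = 2</math> and <math>s = t^2</math>, using a forward-difference <math>A_0(h)</math> whose true leading order is <math>k_0 = 1</math> (an illustrative choice; the reduction of the quadratic to a simple ratio below follows from the relationship above but is not spelled out in the text):

<syntaxhighlight lang="matlab">
% Sketch: estimate the unknown leading order k0 from three step sizes
% h, h/t, h/t^2 (i.e. s = t^2). Illustrative assumptions: t = 2 and a
% forward-difference A0(h), whose true leading order is k0 = 1.
A0 = @(h) (sin(1 + h) - sin(1)) / h;
t = 2; h = 0.01;
B = A0(h); C = A0(h / t); D = A0(h / t^2);
% With x = t^k0, the approximate relationship becomes the quadratic
% (C - D)*x^2 + (C - B)*x = 0, whose nontrivial root is:
x  = (B - C) / (C - D);
k0 = log(x) / log(t);
fprintf('estimated k0 = %.4f (true value 1)\n', k0);
</syntaxhighlight>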
==Example of Richardson extrapolation==
Suppose that we wish to approximate <math>A^*</math>, and we have a method <math>A(h)</math> that depends on a small parameter <math>h</math> in such a way that
<math display="block">A(h) = A^\ast + C h^n + O(h^{n+1}).</math>
Let us define a new function
<math display="block"> R(h,t) := \frac{ t^n A(h/t) - A(h)}{t^n-1} </math>
where <math>h</math> and <math>\frac{h}{t}</math> are two distinct step sizes. Then
<math display="block"> R(h, t) = \frac{ t^n \left( A^* + C \left(\frac{h}{t}\right)^n + O(h^{n+1}) \right) - \left( A^* + C h^n + O(h^{n+1}) \right) }{ t^n - 1} = A^* + O(h^{n+1}). </math>

<math>R(h,t)</math> is called the Richardson [[extrapolation]] of ''A''(''h''), and has a higher-order error estimate <math>O(h^{n+1})</math> compared to <math>A(h)</math>. Very often, it is much easier to obtain a given precision by using ''R''(''h'') rather than ''A''(''h′'') with a much smaller ''h′'', where ''A''(''h′'') can cause problems due to limited precision ([[rounding error]]s) and/or due to the increasing [[Computational cost|number of calculations]] needed (see examples below).
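A minimal numerical check of this construction (a sketch only; the choices of a forward-difference <math>A(h)</math> with <math>n = 1</math>, <math>t = 2</math>, and the test function <code>sin</code> are ours):

<syntaxhighlight lang="matlab">
% Sketch: one step of Richardson extrapolation R(h,t) for a
% forward-difference A(h) (illustrative choice, n = 1), with t = 2.
f     = @(x) sin(x);
x0    = 1.0;
exact = cos(x0);
A = @(h) (f(x0 + h) - f(x0)) / h;            % A(h) = A* + C*h + O(h^2)
R = @(h, t, n) (t^n * A(h / t) - A(h)) / (t^n - 1);
h = 0.1;
fprintf('A(h)   error: %.3e\n', abs(A(h) - exact));         % O(h) error
fprintf('R(h,2) error: %.3e\n', abs(R(h, 2, 1) - exact));   % O(h^2) error
</syntaxhighlight>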
==Example pseudocode for Richardson extrapolation==
The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODE <math>y'(t) = -y^2</math>, <math>y(0) = 1</math> with the [[Trapezoidal method]]. In this example we halve the step size <math>h</math> each iteration, so in the discussion above we have <math>t = 2</math>. The error of the trapezoidal method can be expressed in terms of odd powers of the step size, so the error over multiple steps can be expressed in even powers; this leads us to raise <math>t</math> to the second power and to take powers of <math>4 = 2^2 = t^2</math> in the pseudocode. We want to find the value of <math>y(5)</math>, which has the exact value <math>\frac{1}{5 + 1} = \frac{1}{6} = 0.1666\ldots</math> since the exact solution of the ODE is <math>y(t) = \frac{1}{1 + t}</math>.

This pseudocode assumes that a function called <code>Trapezoidal(f, tStart, tEnd, h, y0)</code> exists which attempts to compute <code>y(tEnd)</code> by performing the trapezoidal method on the function <code>f</code>, starting from <code>y0</code> at time <code>tStart</code> with step size <code>h</code>. Note that starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then to allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.

<syntaxhighlight lang="matlab">
tStart = 0;           % Starting time
tEnd = 5;             % Ending time
f = @(t, y) -y^2;     % The derivative of y, so y' = f(t, y(t)) = -y^2
                      % The solution to this ODE is y = 1/(1 + t)
y0 = 1;               % The initial position (i.e. y0 = y(tStart) = y(0) = 1)
tolerance = 1e-11;    % 10 digit accuracy is desired

maxRows = 20;         % Don't allow the iteration to continue indefinitely

initialH = tEnd - tStart;   % Pick a large initial step size (a single step)

% Were we able to find the solution to within the desired tolerance? Not yet.
haveWeFoundSolution = false;

h = initialH;

% Create a 2D matrix of size maxRows by maxRows to hold the Richardson extrapolates.
% Note that this will be a lower triangular matrix and that at most two rows are
% actually needed at any time in the computation.
A = zeros(maxRows, maxRows);

% Compute the top left element of the matrix.
% The first row of this (lower triangular) matrix has now been filled.
A(1, 1) = Trapezoidal(f, tStart, tEnd, h, y0);

% Each row of the matrix requires one call to Trapezoidal.
% This loop starts by filling the second row of the matrix,
% since the first row was computed above.
for i = 1 : maxRows - 1   % Starting at i = 1, iterate at most maxRows - 1 times
    % Halve the previous value of h since this is the start of a new row.
    h = h / 2;

    % Start filling row i+1 from the left by calling
    % the Trapezoidal function with this new smaller step size.
    A(i + 1, 1) = Trapezoidal(f, tStart, tEnd, h, y0);

    % Go across the current (i+1)-th row until the diagonal is reached.
    for j = 1 : i
        % To compute A(i + 1, j + 1), the next Richardson extrapolate,
        % use the most recently computed value (i.e. A(i + 1, j))
        % and the value from the row above it (i.e. A(i, j)).
        A(i + 1, j + 1) = (4^j * A(i + 1, j) - A(i, j)) / (4^j - 1);
    end

    % After leaving the above inner loop, the diagonal element of row i + 1
    % has been computed; it is the latest Richardson extrapolate.
    % The difference between this extrapolate and the last extrapolate of
    % row i is a good indication of the error.
    if abs(A(i + 1, i + 1) - A(i, i)) < tolerance  % If the result is within tolerance
        % Display the result of the Richardson extrapolation.
        fprintf('y = %.12f\n', A(i + 1, i + 1));
        haveWeFoundSolution = true;
        break  % Done, so leave the loop
    end
end

% If we were not able to find a solution to within the desired tolerance
if ~haveWeFoundSolution
    fprintf('Warning: Not able to find solution to within the desired tolerance of %g\n', tolerance);
    fprintf('The last computed extrapolate was %.12f\n', A(maxRows, maxRows));
end
</syntaxhighlight>

==See also==
* [[Aitken's delta-squared process]]
* [[Takebe Kenko]]
* [[Richardson iteration]]

==References==
{{Reflist}}
{{refbegin}}
*''Extrapolation Methods. Theory and Practice'' by C. Brezinski and [[Michela Redivo-Zaglia|M. Redivo Zaglia]], North-Holland, 1991.
*Ivan Dimov, Zahari Zlatev, Istvan Farago, Agnes Havasi: ''Richardson Extrapolation: Practical Aspects and Applications'', Walter de Gruyter GmbH & Co KG, {{ISBN|9783110533002}} (2017).
{{refend}}

==External links==
* {{cite web|url=http://www.math.ubc.ca/~feldman/m256/richard.pdf|first1=Joel|last1=Feldman|title=Richardson-Extrapolation|year=2000}}
* {{cite web|url=http://www.math.ubc.ca/~israel/m215/rich/rich.html|title=Richardson extrapolation|first1=Robert|last1=Israel}}
* [https://www.mathworks.com/matlabcentral/fileexchange/24388-richardson-extrapolation Matlab code] for generic Richardson extrapolation.
* [https://github.com/JuliaMath/Richardson.jl Julia code] for generic Richardson extrapolation.

[[Category:Series acceleration methods]]
[[Category:Extrapolation]]
[[Category:Articles with example MATLAB/Octave code]]