==General formula==

=== Notation ===
Let <math>A_0(h)</math> be an approximation of the exact value <math>A^*</math> that depends on a step size {{mvar|h}} (where <math display="inline">0 < h < 1</math>) with an [[Approximation error|error]] formula of the form
<math display="block"> A^* = A_0(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \cdots </math>
where the <math>a_i</math> are unknown constants and the <math>k_i</math> are known constants such that <math>h^{k_i} > h^{k_{i+1}}</math> (that is, <math>k_i < k_{i+1}</math>, since <math>0 < h < 1</math>).

The term <math>O(h^{k_i})</math> represents the [[truncation error]] of the approximation <math>A_i(h)</math>, so that <math>A^* = A_i(h) + O(h^{k_i})</math>; the approximation <math>A_i(h)</math> is then said to be an <math>O(h^{k_i})</math> approximation. Note that, simplifying with [[Big O notation]], the following formulae are equivalent:
<math display="block"> \begin{align} A^* &= A_0(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + \cdots \\ A^* &= A_0(h) + a_0h^{k_0} + O(h^{k_1}) \\ A^* &= A_0(h) + O(h^{k_0}) \end{align} </math>

=== Purpose ===
Richardson extrapolation is a process that finds a better approximation of <math>A^*</math> by changing the error formula from <math>A^* = A_0(h) + O(h^{k_0})</math> to <math>A^* = A_1(h) + O(h^{k_1})</math>. Therefore, replacing <math>A_0(h)</math> with <math>A_1(h)</math> reduces the [[truncation error]] from <math>O(h^{k_0})</math> to <math>O(h^{k_1})</math> for the same step size <math>h</math>, and in general <math>A_i(h)</math> is a more accurate estimate than <math>A_j(h)</math> when <math>i > j</math>. The better approximation is achieved by subtracting off the largest error term, <math>a_0 h^{k_0}</math>; this process can be repeated to remove further error terms and obtain even better approximations.
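As a concrete illustration, the following minimal sketch applies one extrapolation step to the forward-difference approximation of a derivative. The choice of scheme is an assumption for illustration: for the forward difference, <math>k_0 = 1</math> and <math>k_1 = 2</math>, and we take <math>t = 2</math>.

```python
import math

# Assumed example: forward-difference approximation of f'(x),
# whose error expansion is a_0*h + a_1*h^2 + ... (so k_0 = 1, k_1 = 2).
def A0(f, x, h):
    return (f(x + h) - f(x)) / h

f, x, h = math.sin, 1.0, 0.1
exact = math.cos(1.0)              # A* = f'(1) for f = sin

# One Richardson step with t = 2 and k_0 = 1:
# A_1(h) = (2*A0(h/2) - A0(h)) / (2 - 1)
A1 = 2 * A0(f, x, h / 2) - A0(f, x, h)

print(abs(A0(f, x, h) - exact))    # O(h) error
print(abs(A1 - exact))             # O(h^2) error, noticeably smaller
```

Here the extrapolated value <math>A_1(h)</math> is accurate to roughly <math>h^2</math>, while the raw forward difference is only accurate to roughly <math>h</math>.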
=== Process ===
Using the step sizes <math>h</math> and <math>h / t</math> for some constant <math>t</math>, the two formulas for <math>A^*</math> are:
{{NumBlk||<math display="block">A^* = A_0(h) + a_0h^{k_0} + a_1h^{k_1} + a_2h^{k_2} + O(h^{k_3}) </math>|{{EquationRef|1}}}}
{{NumBlk||<math display="block">A^* = A_0\!\left(\frac{h}{t}\right) + a_0\left(\frac{h}{t}\right)^{k_0} + a_1\left(\frac{h}{t}\right)^{k_1} + a_2\left(\frac{h}{t}\right)^{k_2} + O(h^{k_3}) </math>|{{EquationRef|2}}}}
To improve the approximation from <math>O(h^{k_0})</math> to <math>O(h^{k_1})</math> by removing the first error term, multiply {{EquationNote|2|equation 2}} by <math>t^{k_0}</math> and subtract {{EquationNote|1|equation 1}}, giving
<math display="block"> (t^{k_0}-1)A^* = \bigg[t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)\bigg] + \bigg(t^{k_0}a_1\bigg(\frac{h}{t}\bigg)^{k_1}-a_1h^{k_1}\bigg) + \bigg(t^{k_0}a_2\bigg(\frac{h}{t}\bigg)^{k_2}-a_2h^{k_2}\bigg) + O(h^{k_3}). </math>
This combination is chosen because the <math>a_0 h^{k_0}</math> terms cancel exactly, so <math display="inline">\big[t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)\big]</math> is an <math>O(h^{k_1})</math> approximation of <math>(t^{k_0}-1)A^*</math>.
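The cancellation of the leading error term can be checked numerically with a synthetic error expansion. All constants below (<math>A^*</math>, <math>a_0</math>, <math>a_1</math>, <math>k_0</math>, <math>k_1</math>, <math>t</math>) are assumed values chosen purely for illustration.

```python
# Synthetic check: build an approximation with a known error expansion,
# then form the combination t^{k0}*approx0(h/t) - approx0(h).
A_star, a0, a1 = 2.0, 3.0, -5.0   # hypothetical exact value and error coefficients
k0, k1, t = 1, 2, 2               # hypothetical error exponents and step ratio

def approx0(h):
    return A_star + a0 * h**k0 + a1 * h**k1

h = 0.01
lhs = t**k0 * approx0(h / t) - approx0(h)   # the bracketed combination
# The a0*h^k0 terms cancel exactly; what remains is (t^{k0}-1)*A* plus
# the surviving O(h^{k1}) term, namely a1*h^{k1}*(t^{k0-k1} - 1).
residual = lhs - (t**k0 - 1) * A_star
print(residual)
```

The residual is of size <math>h^{k_1}</math>, confirming that the leading <math>h^{k_0}</math> error has been eliminated.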
Solving this formula for <math>A^*</math> gives
<math display="block">A^* = \frac{\bigg[t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)\bigg]}{t^{k_0}-1} + \frac{\bigg(t^{k_0}a_1\bigg(\frac{h}{t}\bigg)^{k_1}-a_1h^{k_1}\bigg)}{t^{k_0}-1} + \frac{\bigg(t^{k_0}a_2\bigg(\frac{h}{t}\bigg)^{k_2}-a_2h^{k_2}\bigg)}{t^{k_0}-1} + O(h^{k_3}), </math>
which can be written as <math>A^* = A_1(h) + O(h^{k_1})</math> by setting
<math display="block">A_1(h) = \frac{t^{k_0}A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0}-1} .</math>

=== Recurrence relation ===
A general [[recurrence relation]] can be defined for the approximations by
<math display="block"> A_{i+1}(h) = \frac{t^{k_i}A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_i}-1} </math>
where <math>k_{i+1}</math> satisfies
<math display="block"> A^* = A_{i+1}(h) + O(h^{k_{i+1}}) .</math>

=== Properties ===
Richardson extrapolation can be considered a linear [[sequence transformation]]. Additionally, the general formula can be used to estimate <math>k_0</math> (the leading-order step-size behavior of the [[truncation error]]) when neither its value nor <math>A^*</math> is known ''a priori''. Such a technique can be useful for quantifying an unknown [[rate of convergence]].
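The recurrence relation can be iterated to build successively better approximations. The sketch below assumes error exponents <math>k_i = i + 1</math> and step ratio <math>t = 2</math> (both assumptions hold for the forward-difference example used here, which is itself an assumed choice of scheme):

```python
import math

def richardson(approx, h, t=2.0, levels=4, ks=(1, 2, 3, 4)):
    """Return A_levels(h) via A_{i+1}(h) = (t^{k_i} A_i(h/t) - A_i(h)) / (t^{k_i} - 1)."""
    # col[j] holds A_i evaluated at h / t^j; each pass eliminates one error order
    col = [approx(h / t**j) for j in range(levels + 1)]
    for i in range(levels):
        col = [(t**ks[i] * col[j + 1] - col[j]) / (t**ks[i] - 1)
               for j in range(len(col) - 1)]
    return col[0]

# forward difference of sin at x = 1; its error exponents are k_i = i + 1
A0 = lambda h: (math.sin(1.0 + h) - math.sin(1.0)) / h
approx = richardson(A0, 0.1)
print(abs(approx - math.cos(1.0)))  # far smaller than |A_0(0.1) - cos(1)|
```

Each pass of the loop applies the recurrence once, removing the current leading error term; after four passes the result is an <math>O(h^{k_4})</math> approximation built entirely from evaluations of <math>A_0</math>.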
Given approximations of <math>A^*</math> from three distinct step sizes <math>h</math>, <math>h / t</math>, and <math>h / s</math>, the exact relationship
<math display="block">A^* = \frac{t^{k_0}A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0}-1} + O(h^{k_1}) = \frac{s^{k_0}A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0}-1} + O(h^{k_1})</math>
yields the approximate relationship
<math display="block">A_i\left(\frac{h}{t}\right) + \frac{A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0}-1} \approx A_i\left(\frac{h}{s}\right) + \frac{A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0}-1},</math>
which can be solved numerically to estimate <math>k_0</math> for some arbitrary valid choices of <math>h</math>, <math>s</math>, and <math>t</math>. (Note that the two {{math|''O''}} terms above indicate only the leading-order step-size behavior; their explicit forms differ, so cancelling them against each other is only approximately valid.) As <math>t \neq 1</math>, if <math>t > 0</math> and <math>s</math> is chosen so that <math>s = t^2</math>, this approximate relationship reduces to a quadratic equation in <math>t^{k_0}</math>, which is readily solved for <math>k_0</math> in terms of <math>h</math> and <math>t</math>.
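A minimal sketch of this estimation, assuming the choice <math>s = t^2</math>: expanding the approximate relationship over a common denominator, the constant terms cancel and the quadratic in <math>x = t^{k_0}</math> degenerates to <math>x \approx \big(A_i(h) - A_i(h/t)\big)/\big(A_i(h/t) - A_i(h/t^2)\big)</math>, from which <math>k_0</math> follows by taking logarithms. The test function below is a hypothetical approximation with known leading order <math>k_0 = 2</math>.

```python
import math

def estimate_k0(approx, h, t=2.0):
    """Estimate k_0 from approximations at h, h/t, h/t^2 (the s = t^2 case).

    Uses t^{k_0} ~ (A(h) - A(h/t)) / (A(h/t) - A(h/t^2)), obtained by
    simplifying the approximate relationship with s = t^2.
    """
    B0, B1, B2 = approx(h), approx(h / t), approx(h / t**2)
    return math.log((B0 - B1) / (B1 - B2)) / math.log(t)

# Hypothetical approximation A(h) = A* + a_0*h^2 + higher order, so k_0 = 2
A = lambda h: 1.0 + 3.0 * h**2 + h**3
print(estimate_k0(A, 0.01))   # close to 2
```

The estimate is only approximate because the higher-order error terms do not cancel exactly, but it converges to the true <math>k_0</math> as <math>h \to 0</math>.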