==Implicit Runge–Kutta methods==

All Runge–Kutta methods mentioned up to now are [[explicit and implicit methods|explicit methods]]. Explicit Runge–Kutta methods are generally unsuitable for the solution of [[stiff equation]]s because their region of absolute stability is small; in particular, it is bounded.<ref>{{harvnb|Süli|Mayers|2003|pp=349–351}}</ref> This issue is especially important in the solution of [[Numerical partial differential equations|partial differential equations]].

The instability of explicit Runge–Kutta methods motivates the development of implicit methods. An implicit Runge–Kutta method has the form
:<math> y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i, </math>
where
:<math> k_i = f\left( t_n + c_i h,\ y_{n} + h \sum_{j=1}^s a_{ij} k_j \right), \quad i = 1, \ldots, s.</math><ref>{{harvnb|Iserles|1996|p=41}}; {{harvnb|Süli|Mayers|2003|pp=351–352}}</ref>

The difference with an explicit method is that in an explicit method, the sum over ''j'' only goes up to ''i'' − 1. This also shows up in the Butcher tableau: the coefficient matrix <math>a_{ij}</math> of an explicit method is strictly lower triangular. In an implicit method, the sum over ''j'' goes up to ''s'' and the coefficient matrix is not triangular, yielding a Butcher tableau of the form<ref name="Süli 2003 352"/>
:<math>
\begin{array}{c|cccc}
c_1 & a_{11} & a_{12} & \dots & a_{1s} \\
c_2 & a_{21} & a_{22} & \dots & a_{2s} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & a_{s2} & \dots & a_{ss} \\
\hline
 & b_1 & b_2 & \dots & b_s \\
 & b^*_1 & b^*_2 & \dots & b^*_s \\
\end{array}
=
\begin{array}{c|c}
\mathbf{c} & A \\
\hline
 & \mathbf{b^T} \\
\end{array}
</math>
See [[#Adaptive Runge–Kutta methods|Adaptive Runge–Kutta methods above]] for the explanation of the <math>b^*</math> row.

The consequence of this difference is that at every step, a system of algebraic equations has to be solved. This increases the computational cost considerably. If a method with ''s'' stages is used to solve a differential equation with ''m'' components, then the system of algebraic equations has ''ms'' components. This can be contrasted with implicit [[linear multistep method]]s (the other big family of methods for ODEs): an implicit ''s''-step linear multistep method needs to solve a system of algebraic equations with only ''m'' components, so the size of the system does not increase as the number of steps increases.<ref name="Süli 2003 353">{{harvnb|Süli|Mayers|2003|p=353}}</ref>

===Examples===
The simplest example of an implicit Runge–Kutta method is the [[backward Euler method]]:
:<math>y_{n + 1} = y_n + h f(t_n + h,\ y_{n + 1}). \,</math>
The Butcher tableau for this is simply:
:<math>
\begin{array}{c|c}
1 & 1 \\
\hline
 & 1 \\
\end{array}
</math>
This Butcher tableau corresponds to the formulae
:<math> k_1 = f(t_n + h,\ y_n + h k_1) \quad\text{and}\quad y_{n+1} = y_n + h k_1, </math>
which can be re-arranged to get the formula for the backward Euler method listed above.
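Because the stage equation <math> k_1 = f(t_n + h,\ y_n + h k_1) </math> is implicit, it has to be solved numerically at every step. The following is a minimal sketch for a scalar equation; the names, and the choice of Newton's method with a finite-difference derivative, are illustrative only and not part of the method itself:

<syntaxhighlight lang="python">
import math

def backward_euler_step(f, t_n, y_n, h, tol=1e-12, max_iter=20):
    """One step of the backward Euler method for the scalar ODE y' = f(t, y).

    The stage equation k1 = f(t_n + h, y_n + h*k1) is solved with Newton's
    method, using a finite-difference approximation of the derivative.
    """
    def residual(k):
        # The stage value k1 is a root of this function.
        return k - f(t_n + h, y_n + h * k)

    k = f(t_n, y_n)                          # initial guess: the explicit Euler slope
    for _ in range(max_iter):
        g = residual(k)
        if abs(g) < tol:
            break
        eps = 1e-8 * (1.0 + abs(k))
        dg = (residual(k + eps) - g) / eps   # approximate derivative of the residual
        k -= g / dg                          # Newton update
    return y_n + h * k                       # y_{n+1} = y_n + h * k1

# Stiff test problem y' = -50 (y - cos t), y(0) = 0; with step size h = 0.1 the
# explicit Euler method is unstable here, while backward Euler remains stable.
f = lambda t, y: -50.0 * (y - math.cos(t))
t, y, h = 0.0, 0.0, 0.1
for _ in range(10):
    y = backward_euler_step(f, t, y, h)
    t += h
</syntaxhighlight>

For a system of ''m'' equations the same idea leads to ''m'' (and, for a general ''s''-stage implicit method, ''ms'') nonlinear equations per step, in line with the cost discussion above.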
Another example of an implicit Runge–Kutta method is the [[trapezoidal rule (differential equations)|trapezoidal rule]]. Its Butcher tableau is:
:<math>
\begin{array}{c|cc}
0 & 0 & 0 \\
1 & \frac{1}{2} & \frac{1}{2} \\
\hline
 & \frac{1}{2} & \frac{1}{2} \\
 & 1 & 0 \\
\end{array}
</math>
The trapezoidal rule is a [[collocation method]] (as discussed in that article). All collocation methods are implicit Runge–Kutta methods, but not all implicit Runge–Kutta methods are collocation methods.<ref>{{harvnb|Iserles|1996|pp=43–44}}</ref>

The [[Gauss–Legendre method]]s form a family of collocation methods based on [[Gauss quadrature]]. A Gauss–Legendre method with ''s'' stages has order 2''s'' (thus, methods with arbitrarily high order can be constructed).<ref>{{harvnb|Iserles|1996|p=47}}</ref> The method with two stages (and thus order four) has Butcher tableau:
:<math>
\begin{array}{c|cc}
\frac12 - \frac16 \sqrt3 & \frac14 & \frac14 - \frac16 \sqrt3 \\
\frac12 + \frac16 \sqrt3 & \frac14 + \frac16 \sqrt3 & \frac14 \\
\hline
 & \frac12 & \frac12 \\
 & \frac12 + \frac12 \sqrt3 & \frac12 - \frac12 \sqrt3
\end{array}
</math><ref name="Süli 2003 353"/>

===Stability===
The advantage of implicit Runge–Kutta methods over explicit ones is their greater stability, especially when applied to [[stiff equation]]s. Consider the linear test equation <math> y' = \lambda y </math>. A Runge–Kutta method applied to this equation reduces to the iteration <math> y_{n+1} = r(h\lambda) \, y_n </math>, with ''r'' given by
:<math> r(z) = 1 + z b^T (I - zA)^{-1} e = \frac{\det(I - zA + z e b^T)}{\det(I - zA)}, </math><ref>{{harvnb|Hairer|Wanner|1996|pp=40–41}}</ref>
where ''e'' stands for the vector of ones. The function ''r'' is called the ''stability function''.<ref>{{harvnb|Hairer|Wanner|1996|p=40}}</ref> It follows from the formula that ''r'' is the quotient of two polynomials of degree ''s'' if the method has ''s'' stages. Explicit methods have a strictly lower triangular matrix ''A'', which implies that det(''I'' − ''zA'') = 1 and that the stability function is a polynomial.<ref name="Iserles 1996 60">{{harvnb|Iserles|1996|p=60}}</ref>

The numerical solution to the linear test equation decays to zero if | ''r''(''z'') | < 1 with ''z'' = ''h''λ. The set of such ''z'' is called the ''domain of absolute stability''. In particular, the method is said to be [[Stiff equation#A-stability|A-stable]] if all ''z'' with Re(''z'') < 0 are in the domain of absolute stability. The stability function of an explicit Runge–Kutta method is a polynomial, so explicit Runge–Kutta methods can never be A-stable.<ref name="Iserles 1996 60"/>

If the method has order ''p'', then the stability function satisfies <math> r(z) = \textrm{e}^z + O(z^{p+1}) </math> as <math> z \to 0 </math>. Thus, it is of interest to study quotients of polynomials of given degrees that approximate the exponential function best. These are known as [[Padé approximant]]s. A Padé approximant with numerator of degree ''m'' and denominator of degree ''n'' is A-stable if and only if ''m'' ≤ ''n'' ≤ ''m'' + 2.<ref>{{harvnb|Iserles|1996|pp=62–63}}</ref>

The Gauss–Legendre method with ''s'' stages has order 2''s'', so its stability function is the Padé approximant with ''m'' = ''n'' = ''s''. It follows that the method is A-stable.<ref>{{harvnb|Iserles|1996|p=63}}</ref> This shows that A-stable Runge–Kutta methods can have arbitrarily high order. In contrast, the order of A-stable [[linear multistep method]]s cannot exceed two.<ref>This result is due to {{harvtxt|Dahlquist|1963}}.</ref>
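As an illustration of the formula for ''r'', the stability function can be evaluated numerically from a Butcher tableau. The following is a minimal sketch (the function and variable names are illustrative, not standard) that checks <math>|r(z)| < 1</math> at a sample point with Re(''z'') < 0 for the backward Euler method and the two-stage Gauss–Legendre method given above:

<syntaxhighlight lang="python">
import numpy as np

def stability_function(A, b, z):
    """Evaluate r(z) = 1 + z * b^T (I - z A)^{-1} e for a Runge-Kutta method (A, b)."""
    s = len(b)
    e = np.ones(s)
    # Solve (I - z A) x = e instead of forming the inverse explicitly.
    x = np.linalg.solve(np.eye(s) - z * A, e)
    return 1.0 + z * (b @ x)

# Backward Euler: r(z) = 1 / (1 - z).
A_be = np.array([[1.0]])
b_be = np.array([1.0])

# Two-stage Gauss-Legendre method (order 4), from the tableau above;
# its stability function is the (2, 2) Pade approximant of e^z.
r3 = np.sqrt(3.0)
A_gl = np.array([[0.25,            0.25 - r3 / 6.0],
                 [0.25 + r3 / 6.0, 0.25           ]])
b_gl = np.array([0.5, 0.5])

z = -2.0 + 3.0j                                   # a sample point with Re(z) < 0
print(abs(stability_function(A_be, b_be, z)))     # about 0.24, i.e. less than 1
print(abs(stability_function(A_gl, b_gl, z)))     # about 0.22, i.e. less than 1
</syntaxhighlight>

Checking a single point does not, of course, establish A-stability; it only illustrates how the stability function is computed from the method's coefficients.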