Rate of convergence
==== R-convergence ====

The definitions of Q-convergence rates have the shortcoming that they do not naturally capture the convergence behavior of sequences that do converge, but do not converge at an asymptotically constant rate with every step, so that the Q-convergence limit does not exist. One class of examples is the staggered geometric progressions that get closer to their limits only every other step or every several steps, for instance the example <math display="inline">(b_k) = 1, 1, 1/4, 1/4, 1/16, 1/16, \ldots, 1/4^{\left\lfloor \frac{k}{2} \right\rfloor}, \ldots</math> detailed below (where <math display="inline">\lfloor x \rfloor</math> is the [[floor function]] applied to <math>x</math>). The defining Q-linear convergence limits do not exist for this sequence because the subsequence of error quotients starting from odd steps converges to 1 while the subsequence starting from even steps converges to 1/4; when two subsequences of a sequence converge to different limits, the sequence itself does not converge to a limit. In cases like these, a closely related but more technical definition of rate of convergence, called R-convergence, is more appropriate. The "R-" prefix stands for "root."<ref name=":0" /><ref name="NocedalWright2006">{{Cite book |last1=Nocedal |first1=Jorge |title=Numerical Optimization |last2=Wright |first2=Stephen J. |publisher=[[Springer-Verlag]] |year=2006 |isbn=978-0-387-30303-1 |edition=2nd |location=Berlin, New York}}</ref>{{rp|620}}

A sequence <math>(x_k)</math> that converges to <math>L</math> is said to ''converge at least R-linearly'' if there exists an error-bounding sequence <math>(\varepsilon_k)</math> such that <math display="inline"> |x_k - L|\le\varepsilon_k\quad\text{for all }k </math> and <math> (\varepsilon_k) </math> converges Q-linearly to zero; analogous definitions hold for R-superlinear convergence, R-sublinear convergence, R-quadratic convergence, and so on.<ref name=":0" />

Any error-bounding sequence <math>(\varepsilon_k)</math> provides a lower bound on the rate and order of R-convergence, and the greatest lower bound gives the exact rate and order of R-convergence. As for Q-convergence, sequences with larger orders <math>q</math> converge more quickly, and those with smaller rates <math>\mu</math> converge more quickly for a given order, so the tightest error-bounding sequences are those with the greatest possible <math>q</math> and, for that <math>q</math>, the smallest possible <math>\mu</math>. For the example <math display="inline">(b_k)</math> given above, the tight bounding sequence <math display="inline">(\varepsilon_k) = 2, 1, 1/2, 1/4, 1/8, 1/16, \ldots, 1/2^{k - 1}, \ldots</math> converges Q-linearly with rate 1/2, so <math display="inline">(b_k)</math> converges R-linearly with rate 1/2. More generally, any staggered geometric progression <math>(a r^{\lfloor k / m \rfloor})</math> does not converge Q-linearly but converges R-linearly with rate <math display="inline">\sqrt[m]{|r|}.</math> These examples demonstrate why the "R" in R-linear convergence is short for "root."
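The staggered example above can be checked numerically. The following is a minimal Python sketch (variable names are illustrative, not from any source): it computes the error quotients of <math display="inline">(b_k)</math>, whose alternation between 1 and 1/4 shows the Q-linear limit fails, and verifies the bounding sequence <math display="inline">\varepsilon_k = 1/2^{k-1}</math> that gives R-linear convergence with rate 1/2.

```python
# Staggered geometric progression b_k = (1/4)^floor(k/2), converging to L = 0.
b = [0.25 ** (k // 2) for k in range(12)]

# Q-linear error quotients |b_{k+1} - L| / |b_k - L| alternate between
# 1 and 1/4, so the Q-convergence limit does not exist.
quotients = [b[k + 1] / b[k] for k in range(11)]
print(quotients[:4])  # [1.0, 0.25, 1.0, 0.25]

# The bounding sequence eps_k = 1/2^(k-1) = 2/2^k satisfies |b_k - L| <= eps_k
# and converges Q-linearly to 0 with rate 1/2, so (b_k) converges
# R-linearly with rate 1/2 = sqrt(1/4), the square root of the stagger ratio.
eps = [2.0 / 2 ** k for k in range(12)]
assert all(b[k] <= eps[k] for k in range(12))
```

Since every quantity involved is a power of 2, the floating-point comparisons here are exact.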