==Accuracy==
{{Confusing|1=section|reason=this section uses ε, while there exist [[Machine epsilon#Variant definitions|two conflicting definitions]]. I suggest to use '''u''' instead, whose definition is rather standard|date=December 2021}}

A careful analysis of the errors in compensated summation is needed to appreciate its accuracy characteristics. While it is more accurate than naive summation, it can still give large relative errors for ill-conditioned sums.

Suppose that one is summing <math>n</math> values <math>x_i</math>, for <math>i = 1, \, \ldots, \, n</math>. The exact sum is

: <math>S_n = \sum_{i=1}^n x_i</math>

(computed with infinite precision). With compensated summation, one instead obtains <math>S_n + E_n</math>, where the error <math>E_n</math> is bounded by<ref name=Higham93/>

: <math>|E_n| \le \big[2\varepsilon + O(n\varepsilon^2)\big] \sum_{i=1}^n |x_i|,</math>

where <math>\varepsilon</math> is the [[machine precision]] of the arithmetic being employed (e.g. <math>\varepsilon \approx 10^{-16}</math> for IEEE standard [[double-precision]] floating point). Usually, the quantity of interest is the [[relative error]] <math>|E_n|/|S_n|</math>, which is therefore bounded above by

: <math>\frac{|E_n|}{|S_n|} \le \big[2\varepsilon + O(n\varepsilon^2)\big] \frac{\sum\limits_{i=1}^n |x_i|}{\left|\sum\limits_{i=1}^n x_i\right|}.</math>

In the expression for the relative error bound, the fraction <math>\Sigma |x_i| / |\Sigma x_i|</math> is the [[condition number]] of the summation problem. Essentially, the condition number represents the ''intrinsic'' sensitivity of the summation problem to errors, regardless of how it is computed.<ref>{{cite book |first1=Lloyd N. |authorlink1=Lloyd N. Trefethen |last1=Trefethen |first2=David |last2=Bau |title=Numerical Linear Algebra |publisher=SIAM |location=Philadelphia |year=1997 |isbn=978-0-89871-361-9}}</ref> The relative error bound of ''every'' ([[backwards stable]]) summation method computed by a fixed algorithm in fixed precision (i.e. not those that use [[arbitrary-precision]] arithmetic, nor algorithms whose memory and time requirements change based on the data) is proportional to this condition number.<ref name=Higham93/> An ''ill-conditioned'' summation problem is one in which this ratio is large, and in this case even compensated summation can have a large relative error. For example, if the summands <math>x_i</math> are uncorrelated random numbers with zero mean, the sum is a [[random walk]], and the condition number will grow proportional to <math>\sqrt{n}</math>. On the other hand, for random inputs with nonzero mean the condition number asymptotes to a finite constant as <math>n \to \infty</math>. If the inputs are all [[non-negative]], then the condition number is 1.

Given a condition number, the relative error of compensated summation is effectively independent of <math>n</math>. In principle, there is the <math>O(n \varepsilon^2)</math> term that grows linearly with <math>n</math>, but in practice this term is effectively zero: since the final result is rounded to a precision <math>\varepsilon</math>, the <math>n \varepsilon^2</math> term rounds to zero unless <math>n</math> is roughly <math>1 / \varepsilon</math> or larger.<ref name=Higham93/> In double precision, this corresponds to an <math>n</math> of roughly <math>10^{16}</math>, much larger than most sums. So, for a fixed condition number, the errors of compensated summation are effectively <math>O(\varepsilon)</math>, independent of <math>n</math>.
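This behavior can be illustrated numerically. The following minimal Python sketch (the helper name <code>kahan_sum</code> and the test setup are chosen here purely for illustration) compares naive and compensated summation on non-negative inputs, for which the condition number is 1; <code>math.fsum</code>, which returns a correctly rounded sum, serves as the reference:

<syntaxhighlight lang="python">
import math
import random

def kahan_sum(xs):
    """Compensated (Kahan) summation of an iterable of floats."""
    s = 0.0  # running sum
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - c        # apply the compensation from the previous step
        t = s + y        # low-order digits of y may be lost in this addition
        c = (t - s) - y  # algebraically zero; in floating point it captures the lost part
        s = t
    return s

random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(10**6)]  # non-negative: condition number 1

naive = 0.0
for x in xs:
    naive += x  # one rounding error of size ~eps per addition

exact = math.fsum(xs)  # correctly rounded reference sum
print(abs(naive - exact) / exact)          # grows with n (typically like eps*sqrt(n))
print(abs(kahan_sum(xs) - exact) / exact)  # stays near eps, independent of n
</syntaxhighlight>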
In comparison, the relative error bound for naive summation (simply adding the numbers in sequence, rounding at each step) grows as <math>O(\varepsilon n)</math> multiplied by the condition number.<ref name=Higham93/> This worst-case error is rarely observed in practice, however, because it only occurs if the rounding errors are all in the same direction. In practice, it is much more likely that the rounding errors have a random sign, with zero mean, so that they form a random walk; in this case, naive summation has a [[root mean square]] relative error that grows as <math>O\left(\varepsilon \sqrt{n}\right)</math> multiplied by the condition number.<ref name=Tasche>{{cite book |first1=Manfred |last1=Tasche |first2=Hansmartin |last2=Zeuner |title=Handbook of Analytic-Computational Methods in Applied Mathematics |location=Boca Raton, FL |publisher=CRC Press |year=2000}}</ref> This is still much worse than compensated summation. If, however, the sum can be performed in twice the precision, then <math>\varepsilon</math> is replaced by <math>\varepsilon^2</math>, and naive summation has a worst-case error comparable to the <math>O(n \varepsilon^2)</math> term in compensated summation at the original precision.

By the same token, the <math>\Sigma |x_i|</math> that appears in the bound on <math>E_n</math> above is a worst-case value that occurs only if all the rounding errors have the same sign (and are of maximal possible magnitude).<ref name=Higham93/> In practice, it is more likely that the errors have random sign, so that the terms in <math>\Sigma |x_i|</math> are replaced by a random walk; in this case, even for random inputs with zero mean, the error <math>E_n</math> grows only as <math>O\left(\varepsilon \sqrt{n}\right)</math> (ignoring the <math>n \varepsilon^2</math> term), the same rate at which the sum <math>S_n</math> grows, canceling the <math>\sqrt{n}</math> factors when the relative error is computed. So, even for asymptotically ill-conditioned sums, the relative error for compensated summation can often be much smaller than a worst-case analysis might suggest.
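The remark about carrying the sum in twice the working precision can be sketched the same way. In the sketch below (again illustrative only, not from the referenced analysis), NumPy's <code>float32</code> plays the role of the working precision, so that ordinary double-precision <code>float</code> provides "twice the precision":

<syntaxhighlight lang="python">
import math
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 10**5).astype(np.float32)  # data in the working precision

exact = math.fsum(float(x) for x in xs)  # correctly rounded reference sum

s32 = np.float32(0.0)
for x in xs:
    s32 = s32 + x  # naive summation entirely in the working (single) precision

s64 = 0.0
for x in xs:
    s64 += float(x)  # same naive recurrence, accumulated in twice the precision

print(abs(float(s32) - exact) / exact)              # grows like eps_single * sqrt(n) or worse
print(abs(float(np.float32(s64)) - exact) / exact)  # ~eps_single: eps replaced by eps^2
</syntaxhighlight>

NumPy is used here only to obtain a genuine single-precision accumulator; the same comparison applies to double versus quadruple precision.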