=== Machine precision and backward error analysis ===
''Machine precision'' is a quantity that characterizes the accuracy of a floating-point system, and is used in [[error analysis (mathematics)#Error analysis in numerical modeling|backward error analysis]] of floating-point algorithms. It is also known as unit roundoff or ''[[machine epsilon]]''. Usually denoted {{math|{{var|ε}}<sub>mach</sub>}}, its value depends on the particular rounding mode being used. With rounding toward zero, <math display=block>\epsilon_\text{mach} = B^{1-P},</math> whereas with rounding to nearest, <math display=block>\epsilon_\text{mach} = \tfrac{1}{2} B^{1-P},</math> where ''B'' is the base of the system and ''P'' is the precision of the significand (in base ''B''). Machine precision is important because it bounds the ''[[relative error]]'' in representing any non-zero real number {{math|{{var|x}}}} within the normalized range of a floating-point system: <math display=block>\left| \frac{\operatorname{fl}(x) - x}{x} \right| \le \epsilon_\text{mach}.</math>

Backward error analysis, the theory of which was developed and popularized by [[James H. Wilkinson]], can be used to establish that an algorithm implementing a numerical function is numerically stable.<ref name="RalstonReilly2003"/> The basic approach is to show that although the calculated result will not be exactly correct, due to roundoff errors, it is the exact solution to a nearby problem with slightly perturbed input data. If the required perturbation is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as ''[[numerical stability#Forward, backward, and mixed stability|backward stable]]''.
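As a concrete illustration (a sketch, not part of the analysis above), these bounds can be checked for IEEE 754 binary64, where {{math|{{var|B}} {{=}} 2}} and {{math|{{var|P}} {{=}} 53}}. The following Python sketch uses the standard <code>sys</code> and <code>fractions</code> modules to verify that the relative error of representing 0.1, which is not exactly representable in binary, stays below {{math|{{var|ε}}<sub>mach</sub>}}:

```python
import sys
from fractions import Fraction

# IEEE 754 binary64: base B = 2, significand precision P = 53 digits.
B, P = 2, 53

# Round-to-nearest machine epsilon: (1/2) * B**(1 - P) = 2**-53.
eps_mach = B ** (1 - P) / 2

# sys.float_info.epsilon is the spacing between 1.0 and the next larger
# float, i.e. B**(1 - P); the round-to-nearest bound is half of that.
assert eps_mach == sys.float_info.epsilon / 2

# fl(0.1): the decimal 0.1 must be rounded to the nearest binary64 value.
x = Fraction(1, 10)        # the exact real number 0.1
fl_x = Fraction(0.1)       # the exact value of the nearest binary64 float

# Relative representation error |fl(x) - x| / |x|, computed exactly.
rel_err = abs((fl_x - x) / x)
assert rel_err <= eps_mach
```

Here <code>Fraction(0.1)</code> recovers the exact rational value of the rounded double, so the relative error is computed without further rounding.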
Stability is a measure of the sensitivity of a given numerical procedure to rounding errors; by contrast, the [[condition number]] of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input, and is independent of the implementation used to solve the problem.<ref name="Einarsson_2005"/>

As a trivial example, consider computing the inner product of (length two) vectors <math>x</math> and <math>y</math>. Then <math display=block>\begin{align} \operatorname{fl}(x \cdot y) &= \operatorname{fl}\big(\operatorname{fl}(x_1 \cdot y_1) + \operatorname{fl}(x_2 \cdot y_2)\big), && \text{ where } \operatorname{fl}() \text{ indicates correctly rounded floating-point arithmetic} \\ &= \operatorname{fl}\big((x_1 \cdot y_1)(1 + \delta_1) + (x_2 \cdot y_2)(1 + \delta_2)\big), && \text{ where } |\delta_n| \leq \epsilon_\text{mach}, \text{ from above} \\ &= \big((x_1 \cdot y_1)(1 + \delta_1) + (x_2 \cdot y_2)(1 + \delta_2)\big)(1 + \delta_3) \\ &= (x_1 \cdot y_1)(1 + \delta_1)(1 + \delta_3) + (x_2 \cdot y_2)(1 + \delta_2)(1 + \delta_3), \end{align}</math> and so <math display=block>\operatorname{fl}(x \cdot y) = \hat{x} \cdot \hat{y},</math> where <math display=block>\begin{align} \hat{x}_1 &= x_1(1 + \delta_1); & \hat{x}_2 &= x_2(1 + \delta_2);\\ \hat{y}_1 &= y_1(1 + \delta_3); & \hat{y}_2 &= y_2(1 + \delta_3),\\ \end{align}</math> with <math display=block>|\delta_n| \leq \epsilon_\text{mach}.</math> The computed result is therefore the exact inner product of input data perturbed on the order of {{math|{{var|ε}}<sub>mach</sub>}}, and so the computation is backward stable. For more realistic examples in [[numerical linear algebra]], see Higham 2002<ref name="Higham_2002"/> and the other references below.
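The derivation above can be checked numerically. The following Python sketch (an illustrative check under the assumption that binary64 floats serve as the system's fl) uses exact rational arithmetic from the standard <code>fractions</code> module as a reference, and verifies that the forward error of the two-term inner product is bounded as the backward analysis predicts:

```python
from fractions import Fraction

eps_mach = 2.0 ** -53   # round-to-nearest machine epsilon for binary64

def fl(q):
    """Correctly round an exact rational to the nearest binary64 float."""
    return float(q)

# Length-two vectors whose entries are not exactly representable in binary.
x = [Fraction(1, 3), Fraction(1, 7)]
y = [Fraction(2, 3), Fraction(5, 7)]

# fl(x . y): each product and the final sum are rounded individually.
p1 = Fraction(fl(x[0] * y[0]))    # (x1*y1)(1 + d1)
p2 = Fraction(fl(x[1] * y[1]))    # (x2*y2)(1 + d2)
computed = Fraction(fl(p1 + p2))  # (p1 + p2)(1 + d3)

# Exact inner product, with no rounding.
exact = x[0] * y[0] + x[1] * y[1]

# Each term carries a factor (1 + d_i)(1 + d3) with |d_n| <= eps_mach, so
# the forward error per term is at most (1 + eps)^2 - 1, just under 3*eps.
bound = 3 * eps_mach * (abs(x[0] * y[0]) + abs(x[1] * y[1]))
assert abs(computed - exact) <= bound
```

Because <code>float()</code> on a <code>Fraction</code> rounds correctly to nearest, each call to <code>fl</code> introduces exactly one relative perturbation bounded by ε<sub>mach</sub>, matching the δ<sub>''n''</sub> in the derivation.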