Fixed-point arithmetic
===Comparison with floating-point===

Fixed-point computations can be faster and/or use less hardware than floating-point ones. If the range of the values to be represented is known in advance and is sufficiently limited, fixed point can make better use of the available bits. For example, if 32 bits are available to represent a number between 0 and 1, a fixed-point representation can have error less than 1.2 × 10<sup>−10</sup>, whereas the standard floating-point representation may have error up to 596 × 10<sup>−10</sup>, because 9 of the bits are wasted on the sign and exponent of the dynamic scaling factor. Specifically, comparing 32-bit fixed-point to [[IEEE 754|floating-point]] audio, a recording requiring less than 40 [[Decibel|dB]] of [[Headroom (audio signal processing)|headroom]] has a higher [[signal-to-noise ratio]] using 32-bit fixed.

Programs using fixed-point computations are usually more portable than those using floating-point, since they do not depend on the availability of an FPU. This advantage was particularly strong before the [[IEEE Floating Point Standard]] was widely adopted, when floating-point computations with the same data would yield different results depending on the manufacturer, and often on the computer model.

Many embedded processors lack an FPU, because integer arithmetic units require substantially fewer [[logic gate]]s and consume much smaller [[integrated circuit|chip]] area than an FPU, and software [[emulation (computing)|emulation]] of floating-point on low-speed devices would be too slow for most applications. CPU chips for earlier [[personal computer]]s and [[game console]]s, like the [[Intel 386]] and [[Intel 486|486SX]], also lacked an FPU.

The ''absolute'' resolution (difference between successive values) of any fixed-point format is constant over the whole range, namely the scaling factor ''S''.
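The error figures above can be checked with a few lines of arithmetic. The following is an illustrative sketch (the helper <code>binary32_spacing</code> is hypothetical, not a standard function), assuming an unsigned fixed-point format with 32 fraction bits versus IEEE 754 binary32:

```python
import math

# Fixed point with scaling factor S = 2**-32: the step between successive
# values is constant, and round-to-nearest errs by at most half a step.
S = 2.0 ** -32
fixed_max_error = S / 2          # about 1.16e-10, i.e. below 1.2e-10

def binary32_spacing(x):
    """Gap between consecutive IEEE 754 binary32 values near x.
    (Hypothetical helper; assumes x is positive and in the normal range.)"""
    e = math.floor(math.log2(x))
    return 2.0 ** (e - 23)       # 24-bit significand -> spacing 2**(e-23)

# Just below 1.0 the spacing is 2**-24 = 596e-10: the 1 sign bit and
# 8 exponent bits leave only 24 significand bits (23 of them stored).
print(fixed_max_error)           # 1.1641532182693481e-10
print(binary32_spacing(0.75))    # 5.960464477539063e-08

# Unlike the constant fixed-point step, floating-point spacing shrinks with x:
print(binary32_spacing(0.001))   # 1.1641532182693481e-10
```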
In contrast, the ''relative'' resolution of a floating-point format is approximately constant over its whole range, varying within a factor of the base ''b''; whereas its ''absolute'' resolution varies by many orders of magnitude, like the values themselves.

In many cases, the [[Quantization (signal processing)|rounding and truncation]] errors of fixed-point computations are easier to analyze than those of the equivalent floating-point computations. Applying linearization techniques to truncation, such as [[dither]]ing and/or [[noise shaping]], is more straightforward within fixed-point arithmetic.

On the other hand, the use of fixed point requires greater care by the programmer. Avoidance of overflow requires much tighter estimates for the ranges of variables and all intermediate values in the computation, and often also extra code to adjust their scaling factors. Fixed-point programming normally requires the use of [[C data types#Main types|integer types of different widths]].

Fixed-point applications can make use of [[block floating point]], which is a fixed-point environment in which each array (block) of fixed-point data is scaled with a common exponent stored in a single word.
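The block floating point scheme can be sketched in a few lines. This is a minimal illustration under stated assumptions (hypothetical helper names, 16-bit mantissas, power-of-two exponents), not a definitive implementation:

```python
def encode_block(values, mantissa_bits=16):
    """Scale a block of reals so all share one power-of-two exponent.
    (Hypothetical helper for illustration, not a standard API.)"""
    limit = 2 ** (mantissa_bits - 1) - 1       # e.g. 32767 for 16-bit words
    peak = max(abs(v) for v in values)
    exp = 0
    while peak / 2.0 ** exp > limit:           # raise exponent until peak fits
        exp += 1
    while peak > 0 and peak / 2.0 ** (exp - 1) <= limit:
        exp -= 1                               # lower it to use all the bits
    mantissas = [round(v / 2.0 ** exp) for v in values]
    return mantissas, exp                      # block of ints + shared exponent

def decode_block(mantissas, exp):
    """Recover the real values from the shared-exponent block."""
    return [m * 2.0 ** exp for m in mantissas]

block, e = encode_block([0.5, 0.25, -0.125])
print(block, e)    # [16384, 8192, -4096] -15
```

Within a block, each element behaves as ordinary fixed-point data; only a change of block requires consulting the exponent word.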