==Constant factors==

Analysis of algorithms typically focuses on asymptotic performance, particularly at the elementary level, but in practical applications constant factors matter, and real-world data is always limited in size. The limit is typically the size of addressable memory: on 32-bit machines, 2<sup>32</sup> bytes = 4 GiB (more if [[segmented memory]] is used), and on 64-bit machines, 2<sup>64</sup> bytes = 16 EiB. Given a limited size, an order of growth (in time or space) can therefore be replaced by a constant factor, and in this sense all practical algorithms are {{math|''O''(1)}} for a large enough constant or small enough data.

This interpretation is primarily useful for functions that grow extremely slowly: the (binary) [[iterated logarithm]] (log<sup>*</sup>) is less than 5 for all practical data (2<sup>65536</sup> bits); the (binary) log-log (log log ''n'') is less than 6 for virtually all practical data (2<sup>64</sup> bits); and the binary log (log ''n'') is less than 64 for virtually all practical data (2<sup>64</sup> bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant-time algorithm results in a larger constant factor: for example, one may have <math>K > k \log \log n</math> so long as <math>K/k > 6</math> and <math>n < 2^{2^6} = 2^{64}</math> (see the numeric sketch below).

For large data, linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is exploited in [[hybrid algorithm]]s like [[Timsort]], which use an asymptotically efficient algorithm (here [[merge sort]], with time complexity <math>n \log n</math>) but switch to an asymptotically inefficient algorithm (here [[insertion sort]], with time complexity <math>n^2</math>) for small data, since the simpler algorithm is faster on small inputs (see the hybrid-sort sketch below).
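As a minimal numeric sketch of these growth rates (Python; the helper <code>iterated_log2</code> is illustrative rather than a library function, and the exact value of log<sup>*</sup> shifts by one depending on the stopping convention used):

<syntaxhighlight lang="python">
import math

def iterated_log2(n: float) -> int:
    """log* n: how many times log2 must be applied before the value drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

n = 2.0 ** 64                                   # roughly the largest practically addressable size
print(iterated_log2(n))                         # 5 -- log* barely grows at all
print(math.log2(math.log2(n)))                  # 6.0 -- log log n reaches 6 only at n = 2^64

# Constant cost K versus log-log cost k * log2(log2(n)): with K/k > 6 (here 7),
# the non-constant algorithm is cheaper for every n below 2^64.
K, k = 7.0, 1.0                                 # illustrative costs
print(K > k * math.log2(math.log2(2.0 ** 63)))  # True
</syntaxhighlight>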
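A minimal sketch of the hybrid idea, assuming an illustrative cutoff of 32 elements (real libraries tune this empirically, and Timsort itself is considerably more elaborate, detecting natural runs and using galloping merges):

<syntaxhighlight lang="python">
CUTOFF = 32  # illustrative threshold below which insertion sort wins

def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place: O(n^2) worst case, but very fast on small slices."""
    for i in range(lo + 1, hi):
        x = a[i]
        j = i
        while j > lo and a[j - 1] > x:
            a[j] = a[j - 1]
            j -= 1
        a[j] = x

def hybrid_merge_sort(a, lo=0, hi=None):
    """O(n log n) merge sort that hands small sub-arrays to insertion sort."""
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:
        insertion_sort(a, lo, hi)      # asymptotically worse, yet faster at this size
        return
    mid = (lo + hi) // 2
    hybrid_merge_sort(a, lo, mid)
    hybrid_merge_sort(a, mid, hi)
    merged = []                        # merge the two sorted halves
    i, j = lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:mid])
    merged.extend(a[j:hi])
    a[lo:hi] = merged

import random
data = [random.randrange(10**6) for _ in range(1000)]
hybrid_merge_sort(data)
assert data == sorted(data)
</syntaxhighlight>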