===Shortcomings of empirical metrics===

Since algorithms are [[platform-independent]] (i.e. a given algorithm can be implemented in an arbitrary [[programming language]] on an arbitrary [[computer]] running an arbitrary [[operating system]]), there are additional significant drawbacks to using an [[empirical]] approach to gauge the comparative performance of a given set of algorithms.

Take as an example a program that looks up a specific entry in a [[collation|sorted]] [[list (computing)|list]] of size ''n''. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a [[linear search]] algorithm, and on Computer B, a much slower machine, using a [[binary search algorithm]]. [[benchmark (computing)|Benchmark testing]] on the two computers running their respective programs might look something like the following:

{| class="wikitable"
|-
! ''n'' (list size)
! Computer A run-time<br />(in [[nanosecond]]s)
! Computer B run-time<br />(in [[nanosecond]]s)
|-
| 16
| 8
| 100,000
|-
| 63
| 32
| 150,000
|-
| 250
| 125
| 200,000
|-
| 1,000
| 500
| 250,000
|}

Based on these metrics, it would be easy to jump to the conclusion that ''Computer A'' is running an algorithm that is far superior in efficiency to that of ''Computer B''. However, if the size of the input list is increased sufficiently, that conclusion turns out to be dramatically in error:

{| class="wikitable"
|-
! ''n'' (list size)
! Computer A run-time<br />(in [[nanosecond]]s)
! Computer B run-time<br />(in [[nanosecond]]s)
|-
| 16
| 8
| 100,000
|-
| 63
| 32
| 150,000
|-
| 250
| 125
| 200,000
|-
| 1,000
| 500
| 250,000
|-
| ...
| ...
| ...
|-
| 1,000,000
| 500,000
| 500,000
|-
| 4,000,000
| 2,000,000
| 550,000
|-
| 16,000,000
| 8,000,000
| 600,000
|-
| ...
| ...
| ...
|-
| 63,072 × 10<sup>12</sup>
| 31,536 × 10<sup>12</sup> ns,<br />or 1 year
| 1,375,000 ns,<br />or 1.375 milliseconds
|}

Computer A, running the linear search program, exhibits a [[linear]] growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a [[logarithm]]ic growth rate. Quadrupling the input size only increases the run-time by a [[wiktionary:Constant|constant]] amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it is running an algorithm with a much slower growth rate.
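The contrast between the two growth rates can also be seen directly in the number of comparisons each algorithm performs, independent of any particular machine. The following is a minimal, illustrative sketch, not drawn from the benchmarks above: it counts comparisons as a machine-independent stand-in for run-time, and the function names and the choice of a worst-case target are assumptions made only for this example.

<syntaxhighlight lang="python">
# Illustrative sketch: count the comparisons made by a linear search and a
# binary search on a sorted sequence, as a machine-independent proxy for
# the run-times in the tables above.

def linear_search_comparisons(sorted_seq, target):
    """Scan left to right; the worst case grows linearly with the input size."""
    comparisons = 0
    for value in sorted_seq:
        comparisons += 1
        if value == target:
            break
    return comparisons

def binary_search_comparisons(sorted_seq, target):
    """Repeatedly halve the search interval; the worst case grows logarithmically."""
    comparisons = 0
    low, high = 0, len(sorted_seq) - 1
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if sorted_seq[mid] == target:
            break
        elif sorted_seq[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return comparisons

for n in (1_000, 1_000_000, 16_000_000):
    data = range(n)          # already sorted; range avoids materializing a list
    target = n - 1           # last element: worst case for the linear scan
    print(n,
          linear_search_comparisons(data, target),
          binary_search_comparisons(data, target))
</syntaxhighlight>

Run on these sizes, the linear count grows in step with ''n'' (1,000; 1,000,000; 16,000,000 comparisons), while the binary count grows only from roughly 10 to roughly 24, mirroring the linear and logarithmic growth rates described above.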