{{Short description|Study of resources used by an algorithm}}
{{more footnotes|date=March 2010}}
[[File:Binary search vs Linear search example svg.svg|thumb|For looking up a given entry in a given ordered list, both the [[binary search algorithm|binary]] and the [[linear search]] algorithm (which ignores ordering) can be used. The analysis of the former and the latter algorithm shows that it takes at most {{math|log<sub>2</sub> ''n''}} and {{mvar|n}} check steps, respectively, for a list of size {{mvar|n}}. In the depicted example list of size 33, searching for ''"Morin, Arthur"'' takes 5 and 28 steps with binary (shown in {{color|#008080|cyan}}) and linear ({{color|#800080|magenta}}) search, respectively.]]
[[File:comparison_computational_complexity.svg|thumb|Graphs of functions commonly used in the analysis of algorithms, showing the number of operations {{mvar|N}} versus input size {{mvar|n}} for each function]]
In [[computer science]], the '''analysis of algorithms''' is the process of finding the [[computational complexity]] of [[algorithm]]s – the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a [[Function (mathematics)|function]] that relates the size of an algorithm's input to the number of steps it takes (its [[time complexity]]) or the number of storage locations it uses (its [[space complexity]]). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so [[best, worst and average case]] descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an [[upper bound]], determined from the worst-case inputs to the algorithm.

The term "analysis of algorithms" was coined by [[Donald Knuth]].<ref>{{cite web|url=http://www-cs-faculty.stanford.edu/~uno/news.html|archive-url=https://web.archive.org/web/20160828152021/http://www-cs-faculty.stanford.edu/~uno/news.html|url-status=dead|archive-date=28 August 2016|title=Knuth: Recent News|date=28 August 2016}}</ref> Algorithm analysis is an important part of a broader [[computational complexity theory]], which provides theoretical estimates for the resources needed by any algorithm that solves a given [[computational problem]]. These estimates provide insight into reasonable directions of search for [[Algorithmic efficiency|efficient algorithms]].

In the theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. [[Big O notation]], [[Big-omega notation]] and [[Big-theta notation]] are used to this end.<ref>{{Cite book |url=https://www.worldcat.org/title/311310321 |title=Introduction to algorithms |date=2009 |publisher=MIT Press |isbn=978-0-262-03384-8 |editor-last=Cormen |editor-first=Thomas H. |edition=3rd |location=Cambridge, Mass |pages=44–52 |oclc=311310321}}</ref> For instance, [[binary search]] is said to run in a number of steps proportional to the logarithm of the size {{mvar|n}} of the sorted list being searched, or in {{math|''O''(log ''n'')}}, colloquially "in [[logarithmic time]]".
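The contrast can be made concrete by counting checks directly. The following sketch is a minimal illustration in Python, assuming straightforward implementations of the two strategies; the function names <code>linear_search</code> and <code>binary_search</code> and the step-counting convention are chosen here for exposition, not taken from the sources above:

<syntaxhighlight lang="python">
# Illustrative step-counting sketch (for exposition only).
# Both functions return (index, number_of_checks); index is -1 if absent.

def linear_search(items, target):
    """Scan items left to right, counting one check per element examined."""
    checks = 0
    for i, item in enumerate(items):
        checks += 1
        if item == target:
            return i, checks
    return -1, checks

def binary_search(items, target):
    """Repeatedly halve the sorted list, counting one check per probe."""
    checks = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        checks += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, checks
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, checks

items = list(range(33))               # a sorted list of size 33, as in the figure
target = items[27]                    # the 28th entry
print(linear_search(items, target))   # (27, 28): 28 checks
print(binary_search(items, target))   # (27, 5): 5 checks
</syntaxhighlight>

With this midpoint-probe rule, the counts reproduce the 5 and 28 steps of the depicted size-33 example; a different probe rule would change the constants slightly, but not the logarithmic-versus-linear growth.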
Usually, asymptotic estimates are used because different [[implementation]]s of the same algorithm may differ in efficiency. However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a ''hidden constant''.

Exact (not asymptotic) measures of efficiency can sometimes be computed, but they usually require certain assumptions concerning the particular implementation of the algorithm, called a [[model of computation]]. A model of computation may be defined in terms of an [[abstract machine|abstract computer]], e.g., a [[Turing machine]], and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has {{mvar|n}} elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most {{math|log<sub>2</sub>(''n'') + 1}} time units are needed to return an answer. <!-- Exact measures of efficiency are useful to the people who actually implement and use algorithms, because they are more precise and thus enable them to know how much time they can expect to spend in execution. To some people (e.g. game programmers), a hidden constant can make all the difference between success and failure.-->
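Under this unit-cost assumption, the bound can be checked by exhaustive counting. The sketch below is illustrative only (the helper name <code>probes_needed</code> is chosen for exposition): it charges one time unit per list lookup and verifies that, over every possible target in a sorted list of 1,000 elements, binary search never needs more than {{math|1=⌊log<sub>2</sub> ''n''⌋ + 1 = 10}} lookups.

<syntaxhighlight lang="python">
# Illustrative check of the unit-cost bound (for exposition only):
# every probe of the list is charged one time unit.
import math

def probes_needed(items, target):
    """Count the probes binary search makes before returning an answer."""
    probes = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return probes
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return probes  # unsuccessful search

n = 1000
items = list(range(n))
worst = max(probes_needed(items, t) for t in items)
bound = math.floor(math.log2(n)) + 1
print(worst, bound)  # 10 10 -- the worst case never exceeds log2(n) + 1
</syntaxhighlight>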