===Best, worst and average case complexity===
[[File:Sorting quicksort anim.gif|thumb|Visualization of the [[quicksort]] [[algorithm]], which has [[Best, worst and average case|average case performance]] <math>\mathcal{O}(n\log n)</math>]]

The [[best, worst and average case]] complexities are three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size <math>n</math> may be faster to solve than others, we define the following complexities:

# Best-case complexity: This is the complexity of solving the problem for the best input of size <math>n</math>.
# Average-case complexity: This is the complexity of solving the problem on average. This complexity is only defined with respect to a [[probability distribution]] over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average-case complexity can be defined with respect to the uniform distribution over all inputs of size <math>n</math>.
# [[Amortized analysis]]: Amortized analysis considers both the costly and less costly operations together over the whole series of operations of the algorithm.
# [[Worst-case complexity]]: This is the complexity of solving the problem for the worst input of size <math>n</math>.

Ordered from cheapest to most costly, these are: best, average (over the [[discrete uniform distribution]]), amortized, worst.

For example, the deterministic sorting algorithm [[quicksort]] addresses the problem of sorting a list of integers. The worst case occurs when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time [[Big O notation|O]](<math>n^2</math>). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is <math>O(n \log n)</math>. The best case occurs when each pivot divides the list in half, also needing <math>O(n \log n)</math> time.
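The gap between these cases can be made concrete by counting comparisons. The following is a minimal sketch, not part of the article's text: it assumes a deterministic quicksort that always picks the first element as pivot, and the comparison-counting scheme is an illustrative choice made for this demonstration.

<syntaxhighlight lang="python">
import math
import random


def quicksort(xs):
    """Sort xs, returning (sorted_list, comparison_count).

    Uses a deterministic first-element pivot, so an already-sorted
    input triggers the worst case described above.
    """
    if len(xs) <= 1:
        return list(xs), 0
    pivot, rest = xs[0], xs[1:]
    less, greater = [], []
    for x in rest:  # each element is compared against the pivot once
        (less if x < pivot else greater).append(x)
    sorted_less, c_less = quicksort(less)
    sorted_greater, c_greater = quicksort(greater)
    return (sorted_less + [pivot] + sorted_greater,
            c_less + c_greater + len(rest))


n = 200
_, worst = quicksort(list(range(n)))  # sorted input: pivot is always the minimum
_, average = quicksort(random.sample(range(n), n))  # random permutation
print(f"sorted input:   {worst} comparisons (n(n-1)/2 = {n * (n - 1) // 2})")
print(f"shuffled input: {average} comparisons (order of n log2 n = {round(n * math.log2(n))})")
</syntaxhighlight>

On the already-sorted list, one side of every partition is empty, so exactly <math>n(n-1)/2</math> comparisons are made, matching the <math>O(n^2)</math> worst case; on a random permutation the expected count stays on the order of <math>n \log n</math>.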