{{Short description|Estimate of time taken for running an algorithm}} {{Redirect-distinguish|Running time|Running Time (film)}} [[File:comparison computational complexity.svg|thumb|Graphs of functions commonly used in the [[analysis of algorithms]], showing the number of operations ''N'' as the result of input size ''n'' for each function]] In [[theoretical computer science]], the '''time complexity''' is the [[computational complexity]] that describes the amount of computer time it takes to run an [[algorithm]]. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a [[constant factor]].

Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the [[Worst-case complexity|worst-case time complexity]], which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the [[average-case complexity]], which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is generally expressed as a [[Function (mathematics)|function]] of the size of the input.<ref name="Sipser" />{{RP|226}} Since this function is generally difficult to compute exactly, and the running time for small inputs is usually not consequential, one commonly focuses on the behavior of the complexity when the input size increases—that is, the [[asymptotic analysis|asymptotic behavior]] of the complexity.
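The worst-case versus average-case distinction can be made concrete with a small sketch (illustrative only; the function name and comparison counting are ours, not part of the article). Linear search over a list of size ''n'' makes ''n'' comparisons in the worst case (target absent) and about (''n''&nbsp;+&nbsp;1)/2 on average when the target is equally likely to be at any position:

```python
def linear_search(items, target):
    """Return (index, comparison count); index is -1 if target is absent.

    Each equality test is treated as one elementary operation, so the
    comparison count stands in for the running time up to a constant factor.
    """
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

data = [1, 2, 3, 4]
# Worst case: target absent, so all n = 4 elements are compared.
idx, worst = linear_search(data, 99)
# A found target stops early; averaging over all n positions
# gives roughly (n + 1) / 2 comparisons.
idx2, found = linear_search(data, 2)
```
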
Therefore, the time complexity is commonly expressed using [[big O notation]], typically {{nowrap|<math>O(n)</math>,}} {{nowrap|<math>O(n\log n)</math>,}} {{nowrap|<math>O(n^\alpha)</math>,}} {{nowrap|<math>O(2^n)</math>,}} etc., where {{mvar|n}} is the size in units of [[bit]]s needed to represent the input. Algorithmic complexities are classified according to the type of function appearing in the big O notation. For example, an algorithm with time complexity <math>O(n)</math> is a ''linear time algorithm'' and an algorithm with time complexity <math>O(n^\alpha)</math> for some constant <math>\alpha > 0</math> is a ''polynomial time algorithm''.
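As an illustrative sketch of these classes (the function names and operation counting below are ours, not part of the article), one can count elementary operations directly and observe that a single pass over the input grows linearly in ''n'' while examining all ordered pairs grows quadratically:

```python
def count_linear(items):
    # O(n): one elementary operation (an addition) per element, single pass.
    ops = 0
    total = 0
    for x in items:
        total += x
        ops += 1
    return ops

def count_quadratic(items):
    # O(n^2): one comparison per ordered pair of distinct elements,
    # i.e. n * (n - 1) operations for input size n.
    ops = 0
    n = len(items)
    for i in range(n):
        for j in range(n):
            if i != j:
                ops += 1
    return ops

# Doubling n doubles the linear count but quadruples (roughly)
# the quadratic count, which is what the big O classes capture.
```

Both functions are polynomial-time algorithms in the sense above; the big O notation distinguishes them by the exponent of the dominant term.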