==Machine models and complexity measures==

===Turing machine===
{{main|Turing machine}}
[[File:Turing machine 2b.svg|thumb|right|An illustration of a Turing machine]]

A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the [[Church–Turing thesis]]. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a [[RAM machine]], [[Conway's Game of Life]], [[cellular automata]], [[lambda calculus]] or any programming language, can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.

Many types of Turing machines are used to define complexity classes, such as [[deterministic Turing machine]]s, [[probabilistic Turing machine]]s, [[non-deterministic Turing machine]]s, [[quantum Turing machine]]s, [[symmetric Turing machine]]s and [[alternating Turing machine]]s. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.

A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called [[randomized algorithm]]s. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model; it is just a theoretically interesting [[abstract machine]] that gives rise to particularly interesting complexity classes. For examples, see [[non-deterministic algorithm]].

===Other machine models===
Many machine models different from the standard [[Turing machine equivalents#Multi-tape Turing machines|multi-tape Turing machines]] have been proposed in the literature, for example [[random-access machine]]s. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary.<ref>See {{harvnb|Arora|Barak|2009|loc=Chapter 1: The computational model and why it doesn't matter}}</ref> What all these models have in common is that the machines operate [[deterministic algorithm|deterministically]]. However, some computational problems are easier to analyze in terms of more unusual resources.
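Before turning to those less conventional resources, the basic deterministic model can be made concrete with a short sketch. The following Python fragment is an illustrative simulation of a single-tape deterministic Turing machine that counts its own steps; the particular transition table and alphabet are toy examples chosen here for illustration, not drawn from the article.

<syntaxhighlight lang="python">
# A minimal sketch of a single-tape deterministic Turing machine.
# The machine, its alphabet and its transition table are illustrative only.

def run_dtm(transitions, tape, start_state, accept_state, reject_state, blank="_"):
    """Run a deterministic Turing machine and count its steps (its running time).

    `transitions` maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right).
    """
    tape = dict(enumerate(tape))          # sparse tape indexed by integer positions
    state, head, steps = start_state, 0, 0
    while state not in (accept_state, reject_state):
        symbol = tape.get(head, blank)    # read the cell under the head
        state, tape[head], move = transitions[(state, symbol)]
        head += move
        steps += 1
    return state == accept_state, steps

# Toy machine: accepts exactly the strings consisting only of zeros.
transitions = {
    ("scan", "0"): ("scan", "0", +1),     # keep moving right over zeros
    ("scan", "_"): ("accept", "_", +1),   # reached the blank: accept
    ("scan", "1"): ("reject", "1", +1),   # saw a one: reject
}
print(run_dtm(transitions, "0000", "scan", "accept", "reject"))  # (True, 5)
</syntaxhighlight>

Counting steps in this way is exactly the notion of running time used in the complexity measures below.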
A non-deterministic Turing machine, for example, is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that [[non-deterministic time]] is a very important resource in analyzing computational problems.

===Complexity measures===
For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the [[deterministic Turing machine]] is used. The time required by a deterministic Turing machine <math>M</math> on input <math>x</math> is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine <math>M</math> is said to operate within time <math>f(n)</math> if the time required by <math>M</math> on each input of length <math>n</math> is at most <math>f(n)</math>. A decision problem <math>A</math> can be solved in time <math>f(n)</math> if there exists a Turing machine operating in time <math>f(n)</math> that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time <math>f(n)</math> on a deterministic Turing machine is then denoted by [[DTIME]](<math>f(n)</math>).

Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any [[Complexity|complexity measure]] can be viewed as a [[computational resource]]. Complexity measures are very generally defined by the [[Blum complexity axioms]]. Other complexity measures used in complexity theory include [[communication complexity]], [[circuit complexity]], and [[decision tree complexity]]. The complexity of an algorithm is often expressed using [[big O notation]].

===Best, worst and average case complexity===
[[File:Sorting quicksort anim.gif|thumb|Visualization of the [[quicksort]] [[algorithm]], which has [[Best, worst and average case|average case performance]] <math>\mathcal{O}(n\log n)</math>]]
The [[best, worst and average case]] complexities refer to different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size <math>n</math> may be faster to solve than others, we define the following complexities:
# Best-case complexity: This is the complexity of solving the problem for the best input of size <math>n</math>.
# Average-case complexity: This is the complexity of solving the problem on average. This complexity is only defined with respect to a [[probability distribution]] over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average-case complexity can be defined with respect to the uniform distribution over all inputs of size <math>n</math>.
# [[Amortized analysis]]: Amortized analysis considers both the costly and less costly operations together over the whole series of operations of the algorithm (a small sketch follows below).
# [[Worst-case complexity]]: This is the complexity of solving the problem for the worst input of size <math>n</math>.
The order from cheap to costly is: best, average (of [[discrete uniform distribution]]), amortized, worst.
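The difference between worst-case and amortized cost can be illustrated with the standard textbook example of a dynamic array that doubles its capacity whenever it fills up; the sketch below is a generic illustration of that idea rather than anything specific to this article.

<syntaxhighlight lang="python">
# A sketch of amortized versus worst-case cost, using the standard
# doubling dynamic array as an illustrative example (not from the article).

def append_costs(n):
    """Return the cost (number of element writes) of each of n appends
    to a dynamic array that doubles its capacity when full."""
    capacity, size, costs = 1, 0, []
    for _ in range(n):
        if size == capacity:          # full: copy every element, then double
            cost = size + 1           # copy `size` elements plus the new write
            capacity *= 2
        else:
            cost = 1                  # room left: a single write
        size += 1
        costs.append(cost)
    return costs

costs = append_costs(1024)
print(max(costs))               # worst single append is O(n): 513 here
print(sum(costs) / len(costs))  # amortized cost per append stays below 3
</syntaxhighlight>

A single append can be expensive when it triggers a full copy, but averaged over the whole sequence of operations the cost per append remains bounded by a constant.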
As an example of best, average and worst cases, consider the deterministic sorting algorithm [[quicksort]], which addresses the problem of sorting a list of integers. The worst case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time [[Big O notation|O]](<math>n^2</math>). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is <math>O(n \log n)</math>. The best case occurs when each pivot divides the list in half, also needing <math>O(n \log n)</math> time.

===Upper and lower bounds on the complexity of problems===
To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field of [[analysis of algorithms]].

To show an upper bound <math>T(n)</math> on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most <math>T(n)</math>. However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of <math>T(n)</math> for a problem requires showing that no algorithm can have time complexity lower than <math>T(n)</math>.

Upper and lower bounds are usually stated using the [[big O notation]], which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if <math>T(n) = 7n^2 + 15n + 40</math>, in big O notation one would write <math>T(n) \in O(n^2)</math>.
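As a worked check of that last claim, the definition of big O asks for constants <math>c</math> and <math>n_0</math> such that <math>T(n) \le c \cdot n^2</math> for all <math>n \ge n_0</math>; the witnesses <math>c = 8</math> and <math>n_0 = 18</math> used below are one valid choice picked here for illustration.

<syntaxhighlight lang="python">
# A quick numeric check (with illustrative constants, not from the article) that
# T(n) = 7n^2 + 15n + 40 satisfies T(n) <= c * n^2 for all n >= n0,
# which is exactly what T(n) being in O(n^2) means.

def T(n):
    return 7 * n**2 + 15 * n + 40

c, n0 = 8, 18          # one valid choice of witnesses for the O(n^2) bound
assert all(T(n) <= c * n * n for n in range(n0, 100_000))
print(f"T(n) <= {c} * n^2 for every n checked from {n0} upward")
</syntaxhighlight>

The finite loop is only a sanity check; algebraically, <math>n^2 \ge 15n + 40</math> once <math>n \ge 18</math>, so <math>7n^2 + 15n + 40 \le 8n^2</math> holds for every larger <math>n</math> as well.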