{{short description|Set of problems in computational complexity theory}} [[Image:Complexity subsets pspace.svg|thumb|right|A representation of the relationships between several important complexity classes]] In [[computational complexity theory]], a '''complexity class''' is a [[set (mathematics)|set]] of [[computational problem]]s "of related resource-based [[computational complexity|complexity]]".{{sfnp|Johnson|1990}} The two most commonly analyzed resources are [[time complexity|time]] and [[space complexity|memory]]. In general, a complexity class is defined in terms of a type of computational problem, a [[model of computation]], and a bounded resource like [[time complexity|time]] or [[space complexity|memory]]. In particular, most complexity classes consist of [[decision problem]]s that are solvable with a [[Turing machine]], and are differentiated by their time or space (memory) requirements. For instance, the class '''[[P (complexity)|P]]''' is the set of decision problems solvable by a deterministic Turing machine in [[polynomial time]]. There are, however, many complexity classes defined in terms of other types of problems (e.g. [[Counting problem (complexity)|counting problem]]s and [[function problem]]s) and using other models of computation (e.g. [[probabilistic Turing machine]]s, [[interactive proof system]]s, [[Boolean circuit]]s, and [[quantum computer]]s). The study of the relationships between complexity classes is a major area of research in [[theoretical computer science]]. There are often general hierarchies of complexity classes; for example, it is known that a number of fundamental time and space complexity classes relate to each other in the following way: [[L (complexity)|'''L''']]⊆'''[[NL (complexity)|NL]]'''⊆'''[[P (complexity)|P]]'''⊆'''[[NP (complexity)|NP]]'''⊆'''[[PSPACE]]'''⊆'''[[EXPTIME]]'''⊆'''[[NEXPTIME]]'''⊆'''[[EXPSPACE]]''' (where ⊆ denotes the [[subset]] relation). However, many relationships are not yet known; for example, one of the most famous [[open problem]]s in computer science concerns whether [[P versus NP|'''P''' equals '''NP''']]. The relationships between classes often answer questions about the fundamental nature of computation. The '''P''' versus '''NP''' problem, for instance, is directly related to questions of whether [[Nondeterministic algorithm|nondeterminism]] adds any computational power to computers and whether problems having solutions that can be quickly checked for correctness can also be quickly solved. ==Background== Complexity classes are [[Set (mathematics)|sets]] of related [[computational problem]]s. They are defined in terms of the computational difficulty of solving the problems contained within them with respect to particular computational resources like time or memory. More formally, the definition of a complexity class consists of three things: a type of computational problem, a model of computation, and a bounded computational resource. In particular, most complexity classes consist of [[decision problem]]s that can be solved by a [[Turing machine]] with bounded [[time complexity|time]] or [[space complexity|space]] resources. For example, the complexity class '''[[P (complexity)|P]]''' is defined as the set of [[decision problem]]s that can be solved by a [[deterministic Turing machine]] in [[polynomial time]]. ===Computational problems=== Intuitively, a [[computational problem]] is just a question that can be solved by an [[algorithm]]. 
For example, "is the [[natural number]] <math>n</math> [[prime number|prime]]?" is a computational problem. A computational problem is mathematically represented as the [[set (mathematics)|set]] of answers to the problem. In the primality example, the problem (call it <math>\texttt{PRIME}</math>) is represented by the set of all natural numbers that are prime: <math>\texttt{PRIME} = \{ n \in \mathbb{N} | n \text{ is prime}\}</math>. In the theory of computation, these answers are represented as [[string (computer science)|strings]]; for example, in the primality example the natural numbers could be represented as strings of [[bit]]s that represent [[binary number]]s. For this reason, computational problems are often synonymously referred to as languages, since strings of bits represent [[formal language]]s (a concept borrowed from [[linguistics]]); for example, saying that the <math>\texttt{PRIME}</math> problem is in the complexity class '''[[P (complexity)|P]]''' is equivalent to saying that the language <math>\texttt{PRIME}</math> is in '''P'''. ====Decision problems==== [[Image:Decision Problem.svg|thumb|A [[decision problem]] has only two possible outputs, ''yes'' or ''no'' (alternatively, 1 or 0) on any input.]] The most commonly analyzed problems in theoretical computer science are [[decision problem]]s—the kinds of problems that can be posed as [[yes–no question]]s. The primality example above, for instance, is an example of a decision problem as it can be represented by the yes–no question "is the [[natural number]] <math>n</math> [[prime number|prime]]". In terms of the theory of computation, a decision problem is represented as the set of input strings that a computer running a correct [[algorithm]] would answer "yes" to. In the primality example, <math>\texttt{PRIME}</math> is the set of strings representing natural numbers that, when input into a computer running an algorithm that correctly [[primality testing|tests for primality]], the algorithm answers "yes, this number is prime". This "yes-no" format is often equivalently stated as "accept-reject"; that is, an algorithm "accepts" an input string if the answer to the decision problem is "yes" and "rejects" if the answer is "no". While some problems cannot easily be expressed as decision problems, they nonetheless encompass a broad range of computational problems.{{sfn|Arora|Barak|2009|p=28}} Other types of problems that certain complexity classes are defined in terms of include: * [[Function problem]]s (e.g. '''[[FP (complexity)|FP]]''') * [[counting problem (complexity)|Counting problem]]s (e.g. '''[[Sharp-P|#P]]''') * [[Optimization problem]]s * [[Promise problem]]s (see section "Other types of problems") ===Computational models=== To make concrete the notion of a "computer", in theoretical computer science problems are analyzed in the context of a [[computational model]]. Computational models make exact the notions of computational resources like "time" and "memory". In [[computational complexity theory]], complexity classes deal with the ''inherent'' resource requirements of problems and not the resource requirements that depend upon how a physical computer is constructed. For example, in the real world different computers may require different amounts of time and memory to solve the same problem because of the way that they have been engineered. 
By providing abstract mathematical representations of computers, computational models abstract away superfluous complexities of the real world (like differences in [[Processor (computing)|processor]] speed) that obstruct an understanding of fundamental principles. The most commonly used computational model is the [[Turing machine]]. While other models exist and many complexity classes are defined in terms of them (see section [[Complexity class#Other models of computation|"Other models of computation"]]), the Turing machine is used to define most basic complexity classes. With the Turing machine, instead of using standard units of time like the second (which make it impossible to disentangle running time from the speed of physical hardware) and standard units of memory like [[byte]]s, the notion of time is abstracted as the number of elementary steps that a Turing machine takes to solve a problem and the notion of memory is abstracted as the number of cells that are used on the machine's tape. These are explained in greater detail below. It is also possible to use the [[Blum axioms]] to define complexity classes without referring to a concrete [[computational model]], but this approach is less frequently used in complexity theory. ====Deterministic Turing machines==== {{Main|Turing machine}} [[File:Turing machine 2b.svg|thumb|right|An illustration of a Turing machine. The "B" indicates the blank symbol.]] A '''Turing machine''' is a mathematical model of a general computing machine. It is the most commonly used model in complexity theory, owing in large part to the fact that it is believed to be as powerful as any other model of computation and is easy to analyze mathematically. Importantly, it is believed that if there exists an algorithm that solves a particular problem then there also exists a Turing machine that solves that same problem (this is known as the [[Church–Turing thesis]]); this means that it is believed that ''every'' algorithm can be represented as a Turing machine. Mechanically, a Turing machine (TM) manipulates symbols (generally restricted to the bits 0 and 1 to provide an intuitive connection to real-life computers) contained on an infinitely long strip of tape. The TM can read and write symbols, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6". The Turing machine starts with only the input string on its tape and blanks everywhere else. The TM accepts the input if it enters a designated accept state and rejects the input if it enters a reject state. The deterministic Turing machine (DTM) is the most basic type of Turing machine. It uses a fixed set of rules to determine its future actions (which is why it is called "[[deterministic]]"). A computational problem can then be defined in terms of a Turing machine as the set of input strings that a particular Turing machine accepts. For example, the primality problem <math>\texttt{PRIME}</math> from above is the set of strings (representing natural numbers) that a Turing machine running an algorithm that correctly [[primality test|tests for primality]] accepts.
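To make the correspondence between problems, languages, and accepting machines concrete, the following is a minimal illustrative sketch in Python, with an ordinary program standing in for a Turing machine and trial division standing in for any correct primality test (the function name and encoding are illustrative only). The language <math>\texttt{PRIME}</math> is exactly the set of input strings on which this procedure accepts.

<syntaxhighlight lang="python">
def accepts_prime(w: str) -> bool:
    """Decide PRIME: accept iff w is the binary encoding of a prime number."""
    # Reject strings that are not valid binary encodings of a natural number.
    if w == "" or any(ch not in "01" for ch in w):
        return False          # reject
    n = int(w, 2)
    if n < 2:
        return False          # reject: 0 and 1 are not prime
    d = 2
    while d * d <= n:         # trial division up to sqrt(n)
        if n % d == 0:
            return False      # reject: n has a nontrivial divisor
        d += 1
    return True               # accept: w belongs to the language PRIME

# The language PRIME is the set of strings this procedure accepts:
print(accepts_prime("101"))   # 5 is prime  -> True (accept)
print(accepts_prime("110"))   # 6 is not    -> False (reject)
</syntaxhighlight>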
A Turing machine is said to '''recognize''' a language (recall that "problem" and "language" are largely synonymous in computability and complexity theory) if it accepts all inputs that are in the language and is said to '''decide''' a language if it additionally rejects all inputs that are not in the language (certain inputs may cause a Turing machine to run forever, so [[Recursive set|decidability]] places the additional constraint over [[Recursively enumerable set|recognizability]] that the Turing machine must halt on all inputs). A Turing machine that "solves" a problem is generally understood to mean one that decides the language. Turing machines enable intuitive notions of "time" and "space". The [[time complexity]] of a TM on a particular input is the number of elementary steps that the Turing machine takes to reach either an accept or reject state. The [[space complexity]] is the number of cells on its tape that it uses to reach either an accept or reject state. ====Nondeterministic Turing machines==== {{Main|Nondeterministic Turing machine}} [[Image:Difference_between_deterministic_and_Nondeterministic.svg|thumb|350px|right| A comparison of deterministic and nondeterministic computation. If any branch on the nondeterministic computation accepts then the NTM accepts.]] The nondeterministic Turing machine (NTM) is a variant of the deterministic Turing machine (DTM). Intuitively, an NTM is just a regular Turing machine that has the added capability of being able to explore multiple possible future actions from a given state, and "choosing" a branch that accepts (if any accept). That is, while a DTM must follow only one branch of computation, an NTM can be imagined as a computation tree, branching into many possible computational pathways at each step (see image). If at least one branch of the tree halts with an "accept" condition, then the NTM accepts the input. In this way, an NTM can be thought of as simultaneously exploring all computational possibilities in parallel and selecting an accepting branch.{{sfn|Sipser|2006|p=48, 150}} NTMs are not meant to be physically realizable models; they are simply theoretically interesting abstract machines that give rise to a number of interesting complexity classes (which often do have physically realizable equivalent definitions). The [[time complexity]] of an NTM is the maximum number of steps that the NTM uses on ''any'' branch of its computation.{{sfn|Sipser|2006|p=255}} Similarly, the [[space complexity]] of an NTM is the maximum number of cells that the NTM uses on any branch of its computation. DTMs can be viewed as a special case of NTMs that do not make use of the power of nondeterminism. Hence, every computation that can be carried out by a DTM can also be carried out by an equivalent NTM. It is also possible to simulate any NTM using a DTM (the DTM will simply compute every possible computational branch one-by-one). Hence, the two are equivalent in terms of computability. However, simulating an NTM with a DTM often requires greater time and/or memory resources; as will be seen, how significant this slowdown is for certain classes of computational problems is an important question in computational complexity theory. ===Resource bounds=== Complexity classes group computational problems by their resource requirements. To do this, computational problems are differentiated by [[upper bound]]s on the maximum amount of resources that the most efficient algorithm takes to solve them.
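As an illustrative sketch of how "time" is abstracted into a count of elementary steps (here in Python rather than on a literal Turing machine, with function names chosen purely for illustration), the following instruments two procedures with explicit step counters. The counts bound these particular programs, not the inherent difficulty of the underlying problems, which is the caveat developed in the next paragraph.

<syntaxhighlight lang="python">
from itertools import combinations

def membership_steps(xs, target):
    """Linear scan: is target one of the values in xs? Also returns a step count."""
    steps = 0
    for x in xs:
        steps += 1                      # one comparison counts as one elementary step
        if x == target:
            return True, steps
    return False, steps                 # at most len(xs) steps: a linear upper bound

def subset_sum_steps(xs, target):
    """Brute force: does some subset of xs sum to target? Also returns a step count."""
    steps = 0
    for subset_size in range(len(xs) + 1):
        for subset in combinations(xs, subset_size):
            steps += 1                  # checking one candidate subset counts as one step
            if sum(subset) == target:
                return True, steps
    return False, steps                 # up to 2**len(xs) steps: an exponential upper bound

# The first count grows linearly with the input size, while the second can double
# with every additional element.
print(membership_steps(list(range(30)), -1))        # (False, 30)
print(subset_sum_steps(list(range(1, 16)), -1))     # (False, 32768)
</syntaxhighlight>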
More specifically, complexity classes are concerned with the ''rate of growth'' in the resources required to solve particular computational problems as the input size increases. For example, the amount of time it takes to solve problems in the complexity class '''[[P (complexity)|P]]''' grows at a [[polynomial]] rate as the input size increases, which is comparatively slow compared to problems in the exponential complexity class '''[[EXPTIME]]''' (or more accurately, for problems in '''EXPTIME''' that are outside of '''P''', since <math>\mathsf{P}\subseteq\mathsf{EXPTIME}</math>). Note that the study of complexity classes is intended primarily to understand the ''inherent'' complexity required to solve computational problems. Complexity theorists are thus generally concerned with finding the smallest complexity class that a problem falls into and are therefore concerned with identifying which class a computational problem falls into using the ''most efficient'' algorithm. There may be an algorithm, for instance, that solves a particular problem in exponential time, but if the most efficient algorithm for solving this problem runs in polynomial time then the inherent time complexity of that problem is better described as polynomial. ====Time bounds==== {{Main|Time complexity}} The [[time complexity]] of an algorithm with respect to the Turing machine model is the number of steps it takes for a Turing machine to run an algorithm on a given input size. Formally, the time complexity for an algorithm implemented with a Turing machine <math>M</math> is defined as the function <math>t_M: \mathbb{N} \to \mathbb{N}</math>, where <math>t_M(n)</math> is the maximum number of steps that <math>M</math> takes on any input of length <math>n</math>. In computational complexity theory, theoretical computer scientists are concerned less with particular runtime values and more with the general class of functions that the time complexity function falls into. For instance, is the time complexity function a [[polynomial]]? A [[logarithmic function]]? An [[exponential function]]? Or another kind of function? ====Space bounds==== {{Main|Space complexity}} The [[space complexity]] of an algorithm with respect to the Turing machine model is the number of cells on the Turing machine's tape that are required to run an algorithm on a given input size. Formally, the space complexity of an algorithm implemented with a Turing machine <math>M</math> is defined as the function <math>s_M: \mathbb{N} \to \mathbb{N}</math>, where <math>s_M(n)</math> is the maximum number of cells that <math>M</math> uses on any input of length <math>n</math>. ==Basic complexity classes== {{See also|List of complexity classes}} ===Basic definitions=== Complexity classes are often defined using granular sets of complexity classes called '''DTIME''' and '''NTIME''' (for time complexity) and '''DSPACE''' and '''NSPACE''' (for space complexity). Using [[big O notation]], they are defined as follows: * The time complexity class <math>\mathsf{DTIME}(t(n))</math> is the set of all problems that are decided by an <math>O(t(n))</math> time deterministic Turing machine. * The time complexity class <math>\mathsf{NTIME}(t(n))</math> is the set of all problems that are decided by an <math>O(t(n))</math> time nondeterministic Turing machine. * The space complexity class <math>\mathsf{DSPACE}(s(n))</math> is the set of all problems that are decided by an <math>O(s(n))</math> space deterministic Turing machine. 
* The space complexity class <math>\mathsf{NSPACE}(s(n))</math> is the set of all problems that are decided by an <math>O(s(n))</math> space nondeterministic Turing machine. ===Time complexity classes=== {{Main|Time complexity}} ====P and NP==== {{Main|P (complexity)|NP (complexity)}} '''P''' is the class of problems that are solvable by a [[deterministic Turing machine]] in [[polynomial time]] and '''NP''' is the class of problems that are solvable by a [[nondeterministic Turing machine]] in polynomial time. Or more formally, : <math>\mathsf{P} = \bigcup_{k\in\mathbb{N}} \mathsf{DTIME}(n^k) </math> : <math>\mathsf{NP} = \bigcup_{k\in\mathbb{N}} \mathsf{NTIME}(n^k) </math> '''P''' is often said to be the class of problems that can be solved "quickly" or "efficiently" by a deterministic computer, since the [[time complexity]] of solving a problem in '''P''' increases relatively slowly with the input size. An important characteristic of the class '''NP''' is that it can be equivalently defined as the class of problems whose solutions are ''verifiable'' by a deterministic Turing machine in polynomial time. That is, a language is in '''NP''' if there exists a ''deterministic'' polynomial time Turing machine, referred to as the verifier, that takes as input a string <math>w</math> ''and'' a polynomial-size [[Certificate (complexity)|certificate]] string <math>c</math>, and accepts <math>w</math> if <math>w</math> is in the language and rejects <math>w</math> if <math>w</math> is not in the language. Intuitively, the certificate acts as a [[Mathematical proof|proof]] that the input <math>w</math> is in the language. Formally:{{sfn|Aaronson|2017|p=12}} : '''NP''' is the class of languages <math>L</math> for which there exists a polynomial-time deterministic Turing machine <math>M</math> and a polynomial <math>p</math> such that for all <math>w \in \{0,1\}^*</math>, <math>w</math> is in <math>L</math> ''if and only if'' there exists some <math>c \in \{0,1\}^{p(|w|)}</math> such that <math>M(w,c)</math> accepts. This equivalence between the nondeterministic definition and the verifier definition highlights a fundamental connection between [[Nondeterministic algorithm|nondeterminism]] and solution verifiability. Furthermore, it also provides a useful method for proving that a language is in '''NP'''—simply identify a suitable certificate and show that it can be verified in polynomial time. =====The P versus NP problem===== While there might seem to be an obvious difference between the class of problems that are efficiently solvable and the class of problems whose solutions are merely efficiently checkable, '''P''' and '''NP''' are actually at the center of one of the most famous unsolved problems in computer science: the [[P versus NP|'''P''' versus '''NP''']] problem. While it is known that <math>\mathsf{P} \subseteq \mathsf{NP}</math> (intuitively, deterministic Turing machines are just a subclass of nondeterministic Turing machines that don't make use of their nondeterminism; or under the verifier definition, '''P''' is the class of problems whose polynomial time verifiers need only receive the empty string as their certificate), it is not known whether '''NP''' is strictly larger than '''P'''. 
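Returning to the verifier characterization of '''NP''', the following is a minimal Python sketch for the well-known '''NP''' problem [[Subset sum problem|SUBSET-SUM]] (the function and variable names are illustrative only, and an ordinary program stands in for the verifier <math>M</math>). The verifier checks a proposed certificate in polynomial time; an instance is a "yes" instance exactly when ''some'' certificate makes the verifier accept.

<syntaxhighlight lang="python">
def verify_subset_sum(instance, certificate):
    """Polynomial-time verifier: does the certificate select a subset summing to target?

    instance:    (numbers, target), playing the role of the input w
    certificate: a list of indices into numbers, playing the role of c
    """
    numbers, target = instance
    # Reject malformed certificates (repeated or out-of-range indices).
    if len(set(certificate)) != len(certificate):
        return False
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False
    # The check itself is a single pass over the certificate: polynomial time.
    return sum(numbers[i] for i in certificate) == target

# (numbers, target) is in SUBSET-SUM iff SOME certificate makes the verifier accept.
print(verify_subset_sum(([3, 34, 4, 12, 5, 2], 9), [2, 4]))   # 4 + 5 = 9  -> True
print(verify_subset_sum(([3, 34, 4, 12, 5, 2], 9), [0, 1]))   # 3 + 34 != 9 -> False
</syntaxhighlight>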
If '''P'''='''NP''', then it follows that nondeterminism provides ''no additional computational power'' over determinism with regard to the ability to quickly find a solution to a problem; that is, being able to explore ''all possible branches'' of computation provides ''at most'' a polynomial speedup over being able to explore only a single branch. Furthermore, it would follow that if there exists a proof for a problem instance and that proof can quickly be checked for correctness (that is, if the problem is in '''NP'''), then there also exists an algorithm that can quickly ''construct'' that proof (that is, the problem is in '''P''').{{sfn|Aaronson|2017|p=3}} However, the overwhelming majority of computer scientists believe that <math>\mathsf{P}\neq\mathsf{NP}</math>,{{sfn|Gasarch|2019}} and most [[Cryptography#Modern cryptography|cryptographic schemes]] employed today rely on the assumption that <math>\mathsf{P}\neq\mathsf{NP}</math>.{{sfn|Aaronson|2017|p=4}} ====EXPTIME and NEXPTIME==== {{Main|EXPTIME|NEXPTIME}} '''EXPTIME''' (sometimes shortened to '''EXP''') is the class of decision problems solvable by a deterministic Turing machine in exponential time and '''NEXPTIME''' (sometimes shortened to '''NEXP''') is the class of decision problems solvable by a nondeterministic Turing machine in exponential time. Or more formally, : <math>\mathsf{EXPTIME} = \bigcup_{k\in\mathbb{N}} \mathsf{DTIME}(2^{n^k}) </math> : <math>\mathsf{NEXPTIME} = \bigcup_{k\in\mathbb{N}} \mathsf{NTIME}(2^{n^k}) </math> '''EXPTIME''' is a strict superset of '''P''' and '''NEXPTIME''' is a strict superset of '''NP'''. It is further the case that '''EXPTIME'''<math>\subseteq</math>'''NEXPTIME'''. It is not known whether this is proper, but if '''P'''='''NP''' then '''EXPTIME''' must equal '''NEXPTIME'''. ===Space complexity classes=== {{Main|Space complexity}} ====L and NL==== {{Main|L (complexity)|NL (complexity)}} While it is possible to define [[Logarithmic growth|logarithmic]] time complexity classes, these are extremely narrow classes as sublinear times do not even enable a Turing machine to read the entire input (because <math>\log n < n </math>).{{efn|While a logarithmic runtime of <math>c \log n</math>, i.e. <math>\log n</math> multiplied by a constant <math>c</math>, allows a Turing machine to read inputs of size <math>n < c \log n</math>, there will invariably come a point where <math>n > c \log n </math>.}}{{sfn|Sipser|2006|p=320}} However, there are a meaningful number of problems that can be solved in logarithmic space. The definitions of these classes require a [[multitape Turing machine|two-tape Turing machine]] so that it is possible for the machine to store the entire input (it can be shown that in terms of [[computability]] the two-tape Turing machine is equivalent to the single-tape Turing machine).{{sfn|Sipser|2006|p=321}} In the two-tape Turing machine model, one tape is the input tape, which is read-only. The other is the work tape, which allows both reading and writing and is the tape on which the Turing machine performs computations. The space complexity of the Turing machine is measured as the number of cells that are used on the work tape. '''L''' (sometimes lengthened to '''LOGSPACE''') is then defined as the class of problems solvable in logarithmic space on a deterministic Turing machine and '''NL''' (sometimes lengthened to '''NLOGSPACE''') is the class of problems solvable in logarithmic space on a nondeterministic Turing machine.
Or more formally,{{sfn|Sipser|2006|p=321}} :<math>\mathsf{L} = \mathsf{DSPACE}(\log n)</math> :<math>\mathsf{NL} = \mathsf{NSPACE}(\log n)</math> It is known that <math>\mathsf{L}\subseteq\mathsf{NL}\subseteq\mathsf{P}</math>. However, it is not known whether any of these relationships is proper. ====PSPACE and NPSPACE==== {{Main|PSPACE (complexity)}} The complexity classes '''PSPACE''' and '''NPSPACE''' are the space analogues to '''[[P (complexity) | P]]''' and '''[[NP (complexity) | NP]]'''. That is, '''PSPACE''' is the class of problems solvable in polynomial space by a deterministic Turing machine and '''NPSPACE''' is the class of problems solvable in polynomial space by a nondeterministic Turing machine. More formally, :<math>\mathsf{PSPACE} = \bigcup_{k\in\mathbb{N}} \mathsf{DSPACE}(n^k)</math> :<math>\mathsf{NPSPACE} = \bigcup_{k\in\mathbb{N}} \mathsf{NSPACE}(n^k)</math> While it is not known whether '''P'''='''NP''', [[Savitch's theorem ]] famously showed that '''PSPACE'''='''NPSPACE'''. It is also known that <math>\mathsf{P} \subseteq \mathsf{PSPACE}</math>, which follows intuitively from the fact that, since writing to a cell on a Turing machine's tape is defined as taking one unit of time, a Turing machine operating in polynomial time can only write to polynomially many cells. It is suspected that '''P''' is strictly smaller than '''PSPACE''', but this has not been proven. ====EXPSPACE and NEXPSPACE==== {{Main article|EXPSPACE}} The complexity classes '''EXPSPACE''' and '''NEXPSPACE''' are the space analogues to '''[[EXPTIME]]''' and '''[[NEXPTIME]]'''. That is, '''EXPSPACE''' is the class of problems solvable in exponential space by a deterministic Turing machine and '''NEXPSPACE''' is the class of problems solvable in exponential space by a nondeterministic Turing machine. Or more formally, :<math>\mathsf{EXPSPACE} = \bigcup_{k\in\mathbb{N}} \mathsf{DSPACE}(2^{n^k})</math> :<math>\mathsf{NEXPSPACE} = \bigcup_{k\in\mathbb{N}} \mathsf{NSPACE}(2^{n^k})</math> [[Savitch's theorem]] showed that '''EXPSPACE'''='''NEXPSPACE'''. This class is extremely broad: it is known to be a strict superset of '''PSPACE''', '''NP''', and '''P''', and is believed to be a strict superset of '''EXPTIME'''. ==Properties of complexity classes== ===Closure=== Complexity classes have a variety of [[Closure (mathematics)|closure]] properties. For example, decision classes may be closed under [[negation]], [[disjunction]], [[Logical conjunction|conjunction]], or even under all [[Logical connective|Boolean operations]]. Moreover, they might also be closed under a variety of quantification schemes. '''P''', for instance, is closed under all Boolean operations, and under quantification over polynomially sized domains. Closure properties can be helpful in separating classes—one possible route to separating two complexity classes is to find some closure property possessed by one class but not by the other. Each class '''X''' that is not closed under negation has a complement class '''co-X''', which consists of the complements of the languages contained in '''X''' (i.e. <math>\textsf{co-X} = \{L| \overline{L} \in \mathsf{X} \}</math>). '''[[co-NP]]''', for instance, is one important complement complexity class, and sits at the center of the unsolved problem over whether '''co-NP'''='''NP'''. 
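The closure of '''P''' under Boolean operations can be illustrated with a short sketch: given polynomial-time deciders for two languages, deciders for the complement, intersection, and union are obtained simply by composing them, and the total running time remains polynomial. The Python below is an informal illustration under that assumption; the two toy deciders are hypothetical examples, not problems of independent interest.

<syntaxhighlight lang="python">
def complement(decider):
    """If decider runs in polynomial time, so does the returned procedure,
    so the class of polynomial-time decidable languages is closed under negation."""
    return lambda w: not decider(w)

def intersection(decider1, decider2):
    """Run both deciders; the total time is the sum of two polynomials, still polynomial."""
    return lambda w: decider1(w) and decider2(w)

def union(decider1, decider2):
    return lambda w: decider1(w) or decider2(w)

# Two toy polynomial-time deciders over binary strings:
is_even_length = lambda w: len(w) % 2 == 0
starts_with_one = lambda w: w.startswith("1")

both = intersection(is_even_length, starts_with_one)
print(both("10"))    # True: even length and starts with 1
print(both("110"))   # False: odd length
</syntaxhighlight>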
Closure properties are one of the key reasons many complexity classes are defined in the way that they are.{{sfn|Aaronson|2017|p=7}} Take, for example, a problem that can be solved in <math>O(n)</math> time (that is, in linear time) and one that can be solved in, at best, <math>O(n^{1000})</math> time. Both of these problems are in '''P''', yet the runtime of the second grows considerably faster than the runtime of the first as the input size increases. One might ask whether it would be better to define the class of "efficiently solvable" problems using some smaller polynomial bound, like <math>O(n^3)</math>, rather than all polynomials, which allows for such large discrepancies. It turns out, however, that the set of all polynomials is the smallest class of functions containing the linear functions that is also closed under addition, multiplication, and composition (for instance, composing an <math>O(n^3)</math>-time algorithm with an <math>O(n^2)</math>-time algorithm yields an <math>O(n^6)</math>-time algorithm, which is still polynomial, albeit of higher degree).{{sfn|Aaronson|2017|p=7}} Since we would like composing one efficient algorithm with another efficient algorithm to still be considered efficient, the polynomials are the smallest class that ensures composition of "efficient algorithms".{{sfn|Aaronson|2017|p=5}} (Note that the definition of '''P''' is also useful because, empirically, almost all problems in '''P''' that are practically useful do in fact have low order polynomial runtimes, and almost all problems outside of '''P''' that are practically useful do not have any known algorithms with small exponential runtimes, i.e. with <math>O(c^n)</math> runtimes where {{mvar|c}} is close to 1.{{sfn|Aaronson|2017|p=6}}) ===Reductions=== {{See also|Reduction (complexity)}} Many complexity classes are defined using the concept of a '''reduction'''. A reduction is a transformation of one problem into another problem, i.e. a reduction takes inputs from one problem and transforms them into inputs of another problem. For instance, one can reduce ordinary base-10 addition <math>x+y</math> to base-2 addition by transforming <math>x</math> and <math>y</math> to their base-2 notation (e.g. 5+7 becomes 101+111). Formally, a problem <math>X</math> reduces to a problem <math>Y</math> if there exists a function <math>f</math> such that for every <math>x \in \Sigma^* </math>, <math>x \in X</math> ''if and only if'' <math>f(x) \in Y</math>. Generally, reductions are used to capture the notion of a problem being at least as difficult as another problem. Thus we are generally interested in using a polynomial-time reduction, since any problem <math>X</math> that can be efficiently reduced to another problem <math>Y</math> is no more difficult than <math>Y</math>. Formally, a problem <math>X</math> is polynomial-time reducible to a problem <math>Y</math> if there exists a ''polynomial-time'' computable function <math>p</math> such that for all <math>x \in \Sigma^*</math>, <math>x \in X</math> ''if and only if'' <math> p(x) \in Y</math>. Note that reductions can be defined in many different ways. Common types of reduction include [[Cook reduction]]s, [[Karp reduction]]s and [[Levin reduction]]s, and reductions can vary based on resource bounds, such as [[polynomial-time reduction]]s and [[log-space reduction]]s. ====Hardness==== Reductions motivate the concept of a problem being '''hard''' for a complexity class. A problem <math>X</math> is hard for a class of problems '''C''' if every problem in '''C''' can be polynomial-time reduced to <math>X</math>.
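As a concrete sketch of a reduction, and of how an algorithm for one problem yields an algorithm for every problem that reduces to it, consider the classic polynomial-time reduction from [[Independent set (graph theory)|INDEPENDENT-SET]] to the [[clique problem|CLIQUE]] problem: a graph has an independent set of size <math>k</math> exactly when its [[complement graph]] has a clique of size <math>k</math>. In the illustrative Python below the CLIQUE solver is a brute-force stand-in; only the reduction itself needs to run in polynomial time.

<syntaxhighlight lang="python">
from itertools import combinations

def complement_graph(vertices, edges):
    """The reduction f: complementing the edge set takes polynomial time."""
    edges = {frozenset(e) for e in edges}
    return {frozenset((u, v)) for u, v in combinations(vertices, 2)} - edges

def has_clique(vertices, edges, k):
    """Stand-in solver for CLIQUE (brute force here, purely for illustration)."""
    edges = {frozenset(e) for e in edges}
    return any(all(frozenset((u, v)) in edges for u, v in combinations(group, 2))
               for group in combinations(vertices, k))

def has_independent_set(vertices, edges, k):
    """INDEPENDENT-SET reduces to CLIQUE: call the CLIQUE solver on the reduced instance."""
    return has_clique(vertices, complement_graph(vertices, edges), k)

# A path on 4 vertices has an independent set of size 2 (e.g. {1, 3}) but not of size 3.
print(has_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)], 2))  # True
print(has_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)], 3))  # False
</syntaxhighlight>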
Thus no problem in '''C''' is harder than <math>X</math>, since an algorithm for <math>X</math> allows us to solve any problem in '''C''' with at most polynomial slowdown. Of particular importance, the set of problems that are hard for '''NP''' is called the set of '''[[NP-hard]]''' problems. ====Completeness==== If a problem <math>X</math> is hard for '''C''' and is also in '''C''', then <math>X</math> is said to be '''[[complete (complexity)|complete]]''' for '''C'''. This means that <math>X</math> is the hardest problem in '''C''' (more precisely, since there could be many problems that are equally hard, <math>X</math> is as hard as the hardest problems in '''C'''). Of particular importance is the class of [[NP-complete|'''NP'''-complete]] problems—the most difficult problems in '''NP'''. Because all problems in '''NP''' can be polynomial-time reduced to '''NP'''-complete problems, finding an '''NP'''-complete problem that can be solved in polynomial time would mean that '''P''' = '''NP'''. ==Relationships between complexity classes== ===Savitch's theorem=== {{Main|Savitch's theorem}} Savitch's theorem establishes the relationship between deterministic and nondeterministic space resources. It shows that if a nondeterministic Turing machine can solve a problem using <math>f(n)</math> space, then a deterministic Turing machine can solve the same problem in <math>f(n)^2</math> space, i.e. in the square of the space. Formally, Savitch's theorem states that for any <math>f(n) > n </math>,{{sfn|Lee|2014}} :<math>\mathsf{NSPACE}\left(f\left(n\right)\right) \subseteq \mathsf{DSPACE}\left(f\left(n\right)^2\right).</math> Important corollaries of Savitch's theorem are that '''PSPACE''' = '''NPSPACE''' (since the square of a polynomial is still a polynomial) and '''EXPSPACE''' = '''NEXPSPACE''' (since the square of an exponential is still an exponential). These relationships answer fundamental questions about the power of nondeterminism compared to determinism. Specifically, Savitch's theorem shows that any problem that a nondeterministic Turing machine can solve in polynomial space, a deterministic Turing machine can also solve in polynomial space. Similarly, any problem that a nondeterministic Turing machine can solve in exponential space, a deterministic Turing machine can also solve in exponential space. ===Hierarchy theorems=== {{main article|Time hierarchy theorem|Space hierarchy theorem}} By definition of '''DTIME''', it follows that <math>\mathsf{DTIME}(n^{k_1})</math> is contained in <math>\mathsf{DTIME}(n^{k_2})</math> if <math>k_1 \leq k_2 </math>, since <math>O(n^{k_1}) \subseteq O(n^{k_2})</math> if <math>k_1 \leq k_2</math>. However, this definition gives no indication of whether this inclusion is strict. For time and space requirements, the conditions under which the inclusion is strict are given by the time and space hierarchy theorems, respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. The hierarchy theorems enable one to make quantitative statements about how much additional time or space is needed in order to increase the number of problems that can be solved. The [[time hierarchy theorem]] states that :<math>\mathsf{DTIME}\big(f(n) \big) \subsetneq \mathsf{DTIME} \big(f(n) \sdot \log^{2}(f(n)) \big)</math>. The [[space hierarchy theorem]] states that :<math>\mathsf{DSPACE}\big(f(n)\big) \subsetneq \mathsf{DSPACE} \big(f(n) \sdot \log(f(n)) \big)</math>.
The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem establishes that '''P''' is strictly contained in '''EXPTIME''', and the space hierarchy theorem establishes that '''L''' is strictly contained in '''PSPACE'''. ==Other models of computation== While deterministic and non-deterministic [[Turing machine]]s are the most commonly used models of computation, many complexity classes are defined in terms of other computational models. In particular, * A number of classes are defined using [[probabilistic Turing machine]]s, including the classes '''[[Bounded-error probabilistic polynomial|BPP]]''', '''[[PP (complexity)|PP]]''', '''[[RP (complexity)|RP]]''', and '''[[ZPP (complexity)|ZPP]]''' * A number of classes are defined using [[interactive proof system]]s, including the classes '''[[IP (complexity)|IP]]''', '''[[MA (complexity)|MA]]''', and '''[[AM (complexity)|AM]]''' * A number of classes are defined using [[Boolean circuit]]s, including the classes '''[[P/poly]]''' and its subclasses '''[[NC (complexity)|NC]]''' and '''[[AC (complexity)|AC]]''' * A number of classes are defined using [[quantum Turing machine]]s, including the classes '''[[BQP]]''' and '''[[QMA]]''' These are explained in greater detail below. ===Randomized computation=== {{Main|Randomized computation}} A number of important complexity classes are defined using the '''[[probabilistic Turing machine]]''', a variant of the [[Turing machine]] that can toss random coins. These classes help to better describe the complexity of [[randomized algorithm]]s. A probabilistic Turing machine is similar to a deterministic Turing machine, except rather than following a single transition function (a set of rules for how to proceed at each step of the computation) it probabilistically selects between multiple transition functions at each step. The standard definition of a probabilistic Turing machine specifies two transition functions, so that the selection of transition function at each step resembles a coin flip. The randomness introduced at each step of the computation introduces the potential for error; that is, strings that the Turing machine is meant to accept may on some occasions be rejected and strings that the Turing machine is meant to reject may on some occasions be accepted. As a result, the complexity classes based on the probabilistic Turing machine are defined in large part around the amount of error that is allowed. Formally, they are defined using an error probability <math>\epsilon</math>. A probabilistic Turing machine <math>M</math> is said to recognize a language <math>L</math> with error probability <math>\epsilon</math> if: # a string <math>w</math> in <math>L</math> implies that <math>\text{Pr}[M \text{ accepts } w] \geq 1 - \epsilon</math> # a string <math>w</math> not in <math>L</math> implies that <math>\text{Pr}[M \text{ rejects } w] \geq 1 - \epsilon</math> ====Important complexity classes==== [[File:Randomized Complexity Classes.svg|thumb|The relationships between the fundamental probabilistic complexity classes. BQP is a probabilistic [[Quantum complexity theory|quantum complexity]] class and is described in the quantum computing section.]] The fundamental randomized time complexity classes are '''[[ZPP (complexity)|ZPP]]''', '''[[RP (complexity)|RP]]''', '''[[co-RP]]''', '''[[BPP (complexity)|BPP]]''', and '''[[PP (complexity)|PP]]'''. 
The strictest class is '''[[ZPP (complexity)|ZPP]]''' (zero-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability 0. Intuitively, this is the strictest class of probabilistic problems because it demands ''no error whatsoever''. A slightly looser class is '''[[RP (complexity)|RP]]''' (randomized polynomial time), which maintains no error for strings not in the language but allows bounded error for strings in the language. More formally, a language is in '''RP''' if there is a probabilistic polynomial-time Turing machine <math>M</math> such that if a string is not in the language then <math>M</math> always rejects and if a string is in the language then <math>M</math> accepts with a probability at least 1/2. The class '''[[co-RP]]''' is similarly defined except the roles are flipped: error is not allowed for strings in the language but is allowed for strings not in the language. Taken together, the classes '''RP''' and '''co-RP''' encompass all of the problems that can be solved by probabilistic Turing machines with [[one-sided error]]. Loosening the error requirements further to allow for [[two-sided error]] yields the class '''[[BPP (complexity)|BPP]]''' (bounded-error probabilistic polynomial time), the class of problems solvable in polynomial time by a probabilistic Turing machine with error probability less than 1/3 (for both strings in the language and not in the language). '''BPP''' is the most practically relevant of the probabilistic complexity classes—problems in '''BPP''' have efficient [[randomized algorithm]]s that can be run quickly on real computers. '''BPP''' is also at the center of the important unsolved problem in computer science over whether '''[[BPP (complexity)#Problems|P=BPP]]''', which if true would mean that randomness does not increase the computational power of computers, i.e. any probabilistic Turing machine could be simulated by a deterministic Turing machine with at most polynomial slowdown. The broadest class of efficiently-solvable probabilistic problems is '''[[PP (complexity)|PP]]''' (probabilistic polynomial time), the set of languages solvable by a probabilistic Turing machine in polynomial time with an error probability of less than 1/2 for all strings. '''ZPP''', '''RP''' and '''co-RP''' are all subsets of '''BPP''', which in turn is a subset of '''PP'''. The reason for this is intuitive: the classes allowing zero error and only one-sided error are all contained within the class that allows two-sided error, and '''PP''' simply relaxes the error probability of '''BPP'''. '''ZPP''' relates to '''RP''' and '''co-RP''' in the following way: <math>\textsf{ZPP}=\textsf{RP}\cap\textsf{co-RP}</math>. That is, '''ZPP''' consists exactly of those problems that are in both '''RP''' and '''co-RP'''. Intuitively, this follows from the fact that '''RP''' and '''co-RP''' allow only one-sided error: '''co-RP''' does not allow error for strings in the language and '''RP''' does not allow error for strings not in the language. Hence, if a problem is in both '''RP''' and '''co-RP''', then there must be no error for strings both in ''and'' not in the language (i.e. no error whatsoever), which is exactly the definition of '''ZPP'''. Important randomized space complexity classes include '''[[BPL (complexity)|BPL]]''', '''[[RL (complexity)|RL]]''', and '''[[Randomized Logarithmic-space Polynomial-time|RLP]]'''. 
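A standard concrete illustration of one-sided error (in the style of '''co-RP''') is [[Freivalds' algorithm]], which checks a claimed matrix product using random 0/1 vectors: a correct product is always accepted, while an incorrect one is rejected in each trial with probability at least 1/2, so repetition drives the error down exponentially. The Python below is a minimal sketch of this idea; the helper names are illustrative.

<syntaxhighlight lang="python">
import random

def freivalds(A, B, C, trials=20):
    """Randomized check of whether A*B == C for n x n matrices (lists of lists).

    Each trial multiplies by a random 0/1 vector, which costs O(n^2) time instead
    of the roughly n^3 time of recomputing A*B directly. If A*B == C the check
    always accepts; if A*B != C each trial exposes the mismatch with probability
    at least 1/2, so after 20 trials the error probability is at most 2**-20.
    """
    n = len(A)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # a witness was found: definitely A*B != C
    return True            # probably A*B == C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds(A, B, [[19, 22], [43, 50]]))  # correct product: always True
print(freivalds(A, B, [[19, 22], [43, 51]]))  # wrong product: almost surely False
</syntaxhighlight>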
===Interactive proof systems=== {{Main|Interactive proof system}} A number of complexity classes are defined using '''[[interactive proof systems]]'''. Interactive proofs generalize the proof (certificate) definition of the complexity class '''[[NP (complexity)|NP]]''' and yield insights into [[cryptography]], [[approximation algorithm]]s, and [[formal verification]]. [[File:Interactive proof (complexity).svg|thumb|300px|General representation of an interactive proof protocol]] Interactive proof systems are [[abstract machine]]s that model computation as the exchange of messages between two parties: a prover <math>P</math> and a verifier <math>V</math>. The parties interact by exchanging messages, and an input string is accepted by the system if the verifier decides to accept the input on the basis of the messages it has received from the prover. The prover <math>P</math> has unlimited computational power while the verifier has bounded computational power (the standard definition of interactive proof systems defines the verifier to be polynomial-time bounded). The prover, however, is untrustworthy (this prevents all languages from being trivially recognized by the proof system by having the computationally unbounded prover solve for whether a string is in a language and then sending a trustworthy "YES" or "NO" to the verifier), so the verifier must conduct an "interrogation" of the prover by "asking it" successive rounds of questions, accepting only if it develops a high degree of confidence that the string is in the language.{{sfn|Arora|Barak|2009|p=144}} ====Important complexity classes==== The class '''[[NP (complexity)|NP]]''' corresponds to a simple proof system in which the verifier is restricted to being a deterministic polynomial-time [[Turing machine]] and the procedure is restricted to one round (that is, the prover sends only a single, full proof—typically referred to as the [[Certificate (complexity)|certificate]]—to the verifier). Put another way, the definition of the class '''NP''' (the set of decision problems for which the problem instances, when the answer is "YES", have proofs verifiable in polynomial time by a deterministic Turing machine) describes a proof system in which the proof is constructed by an unmentioned prover and the deterministic Turing machine is the verifier. For this reason, '''NP''' can also be called '''dIP''' (deterministic interactive proof), though it is rarely referred to as such. It turns out that '''NP''' captures the full power of interactive proof systems with deterministic (polynomial-time) verifiers because it can be shown that for any proof system with a deterministic verifier it is never necessary to exchange more than a single round of messages between the prover and the verifier. Interactive proof systems that provide greater computational power than standard complexity classes thus require ''probabilistic'' verifiers, which means that the verifier's questions to the prover are computed using [[probabilistic algorithm]]s. As noted in the section above on [[randomized computation]], probabilistic algorithms introduce error into the system, so complexity classes based on probabilistic proof systems are defined in terms of an error probability <math>\epsilon</math>.
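A classic example of the extra power of a probabilistic verifier is the private-coin protocol for [[graph isomorphism problem|graph non-isomorphism]]: the verifier secretly relabels one of the two input graphs at random and asks the prover which graph it started from. If the graphs are not isomorphic, an all-powerful prover can always answer correctly; if they are isomorphic, no prover can do better than guessing, so each round catches a false claim with probability 1/2. The Python below is an informal sketch of this protocol under those assumptions, with a brute-force prover standing in for the computationally unbounded prover.

<syntaxhighlight lang="python">
import random
from itertools import permutations

def relabel(edges, perm):
    """Apply a vertex permutation to a graph given as a set of frozenset edges."""
    return {frozenset({perm[u], perm[v]}) for u, v in edges}

def isomorphic(n, e1, e2):
    """Brute-force isomorphism test, acceptable for the unbounded prover."""
    return any(relabel(e1, p) == e2 for p in permutations(range(n)))

def gni_protocol(n, g0, g1, rounds=20):
    """Private-coin interactive proof that g0 and g1 are NOT isomorphic."""
    for _ in range(rounds):
        b = random.randint(0, 1)                    # verifier's private coin
        perm = list(range(n))
        random.shuffle(perm)
        h = relabel(g0 if b == 0 else g1, perm)     # challenge sent to the prover
        claim = 0 if isomorphic(n, g0, h) else 1    # honest prover identifies the source
        if claim != b:                              # when g0 and g1 are isomorphic,
            return False                            # no prover beats 1/2 per round
    return True                                     # accepted every round

tri = {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})}   # triangle
path = {frozenset({0, 1}), frozenset({1, 2})}                     # path on 3 vertices
print(gni_protocol(3, tri, path))   # non-isomorphic: always accepted
print(gni_protocol(3, tri, set(tri)))  # isomorphic: accepted only with probability 2**-20
</syntaxhighlight>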
The most general complexity class arising out of this characterization is the class '''[[IP (complexity)|IP]]''' (interactive polynomial time), which is the class of all problems solvable by an interactive proof system <math>(P,V)</math>, where <math>V</math> is probabilistic polynomial-time and the proof system satisfies two properties: for a language <math>L \in \mathsf{IP}</math> # (Completeness) a string <math>w</math> in <math>L</math> implies <math>\Pr[V \text{ accepts }w \text{ after interacting with } P] \ge \tfrac{2}{3}</math> # (Soundness) a string <math>w</math> not in <math>L</math> implies <math>\Pr[V \text{ accepts }w \text{ after interacting with } P] \le \tfrac{1}{3}</math> An important feature of '''IP''' is that it equals '''[[PSPACE]]'''. In other words, any problem that can be solved by a polynomial-time interactive proof system can also be solved by a [[deterministic Turing machine]] with polynomial space resources, and vice versa. A modification of the protocol for '''IP''' produces another important complexity class: '''[[AM (complexity)|AM]]''' (Arthur–Merlin protocol). In the definition of interactive proof systems used by '''IP''', the prover was not able to see the coins utilized by the verifier in its probabilistic computation—it was only able to see the messages that the verifier produced with these coins. For this reason, the coins are called ''private random coins''. The interactive proof system can be constrained so that the coins used by the verifier are ''public random coins''; that is, the prover is able to see the coins. Formally, '''AM''' is defined as the class of languages with an interactive proof in which the verifier sends a random string to the prover, the prover responds with a message, and the verifier either accepts or rejects by applying a deterministic polynomial-time function to the message from the prover. '''AM''' can be generalized to '''AM'''[''k''], where ''k'' is the number of messages exchanged (so in the generalized form the standard '''AM''' defined above is '''AM'''[2]). However, it is the case that for all <math>k \geq 2</math>, '''AM'''[''k'']='''AM'''[2]. It is also the case that <math>\mathsf{AM}[k]\subseteq\mathsf{IP}[k]</math>. Other complexity classes defined using interactive proof systems include '''[[Interactive proof system#MIP|MIP]]''' (multiprover interactive polynomial time) and '''[[QIP (complexity)|QIP]]''' (quantum interactive polynomial time). ===Boolean circuits=== {{Main|Circuit complexity}} [[File:Three input boolean circuit.svg|thumb|right|350px|Example Boolean circuit computing the Boolean function <math>f_C(x_1,x_2,x_3)=\neg (x_1 \wedge x_2) \wedge ((x_2 \wedge x_3) \vee \neg x_3)</math>, with example input <math>x_1=0</math>, <math>x_2=1</math>, and <math>x_3=0</math>. The <math>\wedge</math> nodes are [[AND gate]]s, the <math>\vee</math> nodes are [[OR gate]]s, and the <math>\neg</math> nodes are [[NOT gate]]s.]] An alternative model of computation to the [[Turing machine]] is the '''[[Boolean circuit]]''', a simplified model of the [[digital circuit]]s used in modern [[computer]]s. Not only does this model provide an intuitive connection between computation in theory and computation in practice, but it is also a natural model for [[non-uniform computation]] (computation in which different input sizes within the same problem use different algorithms). 
Formally, a Boolean circuit <math>C</math> is a [[directed acyclic graph]] in which edges represent wires (which carry the [[bit]] values 0 and 1), the input bits are represented by source vertices (vertices with no incoming edges), and all non-source vertices represent [[logic gate]]s (generally the [[AND gate|AND]], [[OR gate|OR]], and [[NOT gate]]s). One logic gate is designated the output gate, and represents the end of the computation. The input/output behavior of a circuit <math>C</math> with <math>n</math> input variables is represented by the [[Boolean function]] <math>f_C:\{0,1\}^n \to \{0,1\}</math>; for example, on input bits <math>x_1,x_2,...,x_n</math>, the output bit <math>b</math> of the circuit is represented mathematically as <math>b = f_C(x_1,x_2,...,x_n)</math>. The circuit <math>C</math> is said to ''compute'' the Boolean function <math>f_C</math>. Any particular circuit has a fixed number of input vertices, so it can only act on inputs of that size. [[Formal language|Languages]] (the formal representations of [[decision problem]]s), however, contain strings of differing lengths, so languages cannot be fully captured by a single circuit (this contrasts with the Turing machine model, in which a language is fully described by a single Turing machine that can act on any input size). A language is thus represented by a '''circuit family'''. A circuit family is an infinite list of circuits <math>(C_0,C_1,C_2,...)</math>, where <math>C_n</math> is a circuit with <math>n</math> input variables. A circuit family is said to decide a language <math>L</math> if, for every string <math>w</math>, <math>w</math> is in the language <math>L</math> if and only if <math>C_n(w)=1</math>, where <math>n</math> is the length of <math>w</math>. In other words, a string <math>w</math> of size <math>n</math> is in the language represented by the circuit family <math>(C_0,C_1,C_2,...)</math> if the circuit <math>C_n</math> (the circuit with the same number of input vertices as the number of bits in <math>w</math>) evaluates to 1 when <math>w</math> is its input. While complexity classes defined using Turing machines are described in terms of [[time complexity]], circuit complexity classes are defined in terms of circuit size — the number of vertices in the circuit. The size complexity of a circuit family <math>(C_0,C_1,C_2,...)</math> is the function <math>f:\mathbb{N} \to \mathbb{N}</math>, where <math>f(n)</math> is the circuit size of <math>C_n</math>. The familiar function classes follow naturally from this; for example, a polynomial-size circuit family is one such that the function <math>f</math> is a [[polynomial]]. ====Important complexity classes==== The complexity class '''[[P/poly]]''' is the set of languages that are decidable by polynomial-size circuit families. It turns out that there is a natural connection between circuit complexity and time complexity. Intuitively, a language with small time complexity (that is, requires relatively few sequential operations on a Turing machine), also has a small circuit complexity (that is, requires relatively few Boolean operations). Formally, it can be shown that if a language is in <math>\mathsf{DTIME}(t(n))</math>, where <math>t</math> is a function <math>t:\mathbb{N} \to \mathbb{N}</math>, then it has circuit complexity <math>O(t^2(n))</math>.{{sfn|Sipser|2006|p=355}} It follows directly from this fact that [[P (complexity)|<math>\mathsf{\color{Blue}P}\subset\textsf{P/poly}</math>]]. 
In other words, any problem that can be solved in polynomial time by a deterministic Turing machine can also be solved by a polynomial-size circuit family. It is further the case that the inclusion is proper, i.e. <math>\textsf{P}\subsetneq \textsf{P/poly}</math> (for example, there are some [[undecidable problem]]s that are in '''P/poly'''). '''P/poly''' has a number of properties that make it highly useful in the study of the relationships between complexity classes. In particular, it is helpful in investigating problems related to [[P versus NP|'''P''' versus '''NP''']]. For example, if there is any language in '''NP''' that is not in '''P/poly''', then <math>\mathsf{P}\neq\mathsf{NP}</math>.{{sfn|Arora|Barak|2009|p=286}} '''P/poly''' is also helpful in investigating properties of the [[polynomial hierarchy]]. For example, if '''[[NP (complexity)|NP]]''' ⊆ '''P/poly''', then '''PH''' collapses to <math>\Sigma_2^{\mathsf P}</math>. A full description of the relations between '''P/poly''' and other complexity classes is available at "[[P/poly#Importance of P/poly|Importance of P/poly]]". '''P/poly''' is also helpful in the general study of the properties of [[Turing machine]]s, as the class can be equivalently defined as the class of languages recognized by a polynomial-time Turing machine with a polynomial-bounded [[advice (complexity)|advice function]]. Two subclasses of '''P/poly''' that have interesting properties in their own right are '''[[NC (complexity)|NC]]''' and '''[[AC (complexity)|AC]]'''. These classes are defined not only in terms of their circuit size but also in terms of their '''depth'''. The depth of a circuit is the length of the longest [[directed path]] from an input node to the output node. The class '''NC''' is the set of languages that can be solved by circuit families that are restricted not only to having polynomial-size but also to having polylogarithmic depth. The class '''AC''' is defined similarly to '''NC''', however gates are allowed to have unbounded fan-in (that is, the AND and OR gates can be applied to more than two bits). '''NC''' is a notable class because it can be equivalently defined as the class of languages that have efficient [[parallel algorithm]]s. ===Quantum computation=== {{expand section|date=April 2017}} The classes '''[[BQP]]''' and '''[[QMA]]''', which are of key importance in [[quantum information science]], are defined using '''[[quantum Turing machine]]s'''. ==Other types of problems== While most complexity classes studied by computer scientists are sets of [[decision problem]]s, there are also a number of complexity classes defined in terms of other types of problems. In particular, there are complexity classes consisting of [[counting problem (complexity)|counting problems]], [[function problem]]s, and [[promise problem]]s. These are explained in greater detail below. 
===Counting problems=== {{Main|Counting problem (complexity)}} A '''counting problem''' asks not only ''whether'' a solution exists (as with a [[decision problem]]), but also ''how many'' solutions exist.{{sfn|Fortnow|1997}} For example, the decision problem <math>\texttt{CYCLE}</math> asks ''whether'' a particular graph <math>G</math> has a [[simple cycle]] (the answer is a simple yes/no); the corresponding counting problem <math>\#\texttt{CYCLE}</math> (pronounced "sharp cycle") asks ''how many'' simple cycles <math>G</math> has.{{sfn|Arora|2003}} The output to a counting problem is thus a number, in contrast to the output for a decision problem, which is a simple yes/no (or accept/reject, 0/1, or other equivalent scheme).{{sfn|Arora|Barak|2009|p=342}} Thus, whereas decision problems are represented mathematically as [[formal language]]s, counting problems are represented mathematically as [[Function (mathematics)|functions]]: a counting problem is formalized as the function <math>f:\{0,1\}^* \to \mathbb{N}</math> such that for every input <math>w \in \{0,1\}^*</math>, <math>f(w)</math> is the number of solutions. For example, in the <math>\#\texttt{CYCLE}</math> problem, the input is a graph <math>G \in \{0,1\}^*</math> (a graph represented as a string of [[bit]]s) and <math>f(G)</math> is the number of simple cycles in <math>G</math>. Counting problems arise in a number of fields, including [[statistical estimation]], [[statistical physics]], [[network design]], and [[economics]].{{sfn|Arora|Barak|2009|p=341–342}} ==== Important complexity classes ==== {{Main|♯P}} '''#P''' (pronounced "sharp P") is an important class of counting problems that can be thought of as the counting version of '''NP'''.{{sfn|Barak|2006}} The connection to '''NP''' arises from the fact that the number of solutions to a problem equals the number of accepting branches in a [[nondeterministic Turing machine]]'s computation tree. '''#P''' is thus formally defined as follows: : '''#P''' is the set of all functions <math>f:\{0,1\}^* \to \mathbb{N}</math> such that there is a polynomial-time nondeterministic Turing machine <math>M</math> such that for all <math>w \in \{0,1\}^*</math>, <math>f(w)</math> equals the number of accepting branches in <math>M</math>'s computation tree on <math>w</math>.{{sfn|Barak|2006}} And just as '''NP''' can be defined both in terms of nondeterminism and in terms of a verifier (i.e. as an [[interactive proof system]]), so too can '''#P''' be equivalently defined in terms of a verifier. Recall that a decision problem is in '''NP''' if there exists a polynomial-time checkable [[certificate (complexity)|certificate]] to a given problem instance—that is, '''NP''' asks whether there exists a proof of membership (a certificate) for the input that can be checked for correctness in polynomial time. The class '''#P''' asks ''how many'' such certificates exist.{{sfn|Barak|2006}} In this context, '''#P''' is defined as follows: : '''#P''' is the set of functions <math>f: \{0,1\}^* \to \mathbb{N}</math> such that there exists a polynomial <math>p: \mathbb{N} \to \mathbb{N}</math> and a polynomial-time Turing machine <math>V</math> (the verifier), such that for every <math>w \in \{0,1\}^*</math>, <math>f(w)=\Big| \big\{c \in \{0,1\}^{p(|w|)} : V(w,c)=1 \big\}\Big| </math>.{{sfn|Arora|Barak|2009|p=344}} In other words, <math>f(w)</math> equals the size of the set containing all of the polynomial-size certificates for <math>w</math>.
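The certificate-counting view can be made concrete by reusing the kind of verifier that defines '''NP''': the counting version of a problem simply asks how many certificates the verifier accepts. The Python below is an illustrative brute-force sketch for a SUBSET-SUM-style instance (the names are illustrative, and the enumeration itself takes exponential time; the point is only the relationship between the decision and counting versions).

<syntaxhighlight lang="python">
from itertools import combinations

def verify_subset_sum(instance, certificate):
    """The same kind of polynomial-time verifier used to define NP."""
    numbers, target = instance
    return sum(numbers[i] for i in certificate) == target

def count_subset_sum(instance):
    """The #P-style counting version: how many certificates does the verifier accept?"""
    numbers, _ = instance
    indices = range(len(numbers))
    return sum(1
               for size in range(len(numbers) + 1)
               for certificate in combinations(indices, size)
               if verify_subset_sum(instance, certificate))

instance = ([3, 34, 4, 12, 5, 2], 9)
print(count_subset_sum(instance))   # -> 2, namely {4, 5} and {3, 4, 2}
</syntaxhighlight>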
===Function problems===
{{Main|Function problem}}

Counting problems are a subset of a broader class of problems called '''function problems'''. A function problem is a type of problem in which the value of a [[Function (mathematics)|function]] <math>f:A \to B</math> is computed for a given input. Formally, a function problem <math>f</math> is defined as a relation <math>R</math> over strings of an arbitrary [[Alphabet (formal languages)|alphabet]] <math>\Sigma</math>:

:<math> R \subseteq \Sigma^* \times \Sigma^*</math>

An algorithm solves <math>f</math> if for every input <math>x</math> such that there exists a <math>y</math> satisfying <math>(x, y) \in R</math>, the algorithm produces one such <math>y</math>. This is just another way of saying that <math>f</math> is a [[function (mathematics)|function]] and that the algorithm computes <math>f(x)</math> for every input <math>x \in \Sigma^*</math> on which <math>f</math> is defined.

====Important complexity classes====
An important function complexity class is '''[[FP (complexity)|FP]]''', the class of efficiently solvable function problems.{{sfn|Arora|Barak|2009|p=344}} More specifically, '''FP''' is the set of function problems that can be solved by a [[deterministic Turing machine]] in [[polynomial time]].{{sfn|Arora|Barak|2009|p=344}} '''FP''' can be thought of as the function problem equivalent of '''[[P (complexity)|P]]'''.

Importantly, '''FP''' provides some insight into both counting problems and [[P versus NP|'''P''' versus '''NP''']]. If '''#P'''='''FP''', then the functions that determine the number of certificates for problems in '''NP''' are efficiently computable. And since computing the number of certificates is at least as hard as determining whether a certificate exists, it follows that if '''#P'''='''FP''' then '''P'''='''NP''' (it is not known whether this holds in reverse, i.e. whether '''P'''='''NP''' implies '''#P'''='''FP''').{{sfn|Arora|Barak|2009|p=344}}

Just as '''FP''' is the function problem equivalent of '''P''', '''[[FNP (complexity)|FNP]]''' is the function problem equivalent of '''[[NP (complexity)|NP]]'''. Importantly, '''FP'''='''FNP''' if and only if '''P'''='''NP'''.{{sfn|Rich|2008|p=689 (510 in provided PDF)}}

===Promise problems===
{{Main|Promise problem}}

'''Promise problems''' are a generalization of decision problems in which the input to a problem is guaranteed ("promised") to be from a particular subset of all possible inputs. Recall that with a decision problem <math>L \subseteq \{0,1\}^*</math>, an algorithm <math>M</math> for <math>L</math> must act (correctly) on ''every'' <math>w \in \{0,1\}^*</math>. A promise problem loosens the input requirement on <math>M</math> by restricting the input to some subset of <math>\{0,1\}^*</math>.

Specifically, a promise problem is defined as a pair of non-intersecting sets <math>(\Pi_{\text{ACCEPT}},\Pi_{\text{REJECT}})</math>, where:{{sfn|Watrous|2006|p=1}}
* <math>\Pi_{\text{ACCEPT}} \subseteq \{0,1\}^*</math> is the set of all inputs that are accepted.
* <math>\Pi_{\text{REJECT}} \subseteq \{0,1\}^*</math> is the set of all inputs that are rejected.

The input to an algorithm <math>M</math> for a promise problem <math>(\Pi_{\text{ACCEPT}},\Pi_{\text{REJECT}})</math> is thus drawn from <math>\Pi_{\text{ACCEPT}} \cup \Pi_{\text{REJECT}}</math>, which is called the '''promise'''. Strings in <math>\Pi_{\text{ACCEPT}} \cup \Pi_{\text{REJECT}}</math> are said to ''satisfy the promise''.{{sfn|Watrous|2006|p=1}} By definition, <math>\Pi_{\text{ACCEPT}}</math> and <math>\Pi_{\text{REJECT}}</math> must be disjoint, i.e.
<math>\Pi_{\text{ACCEPT}} \cap \Pi_{\text{REJECT}} = \emptyset</math>. Within this formulation, it can be seen that decision problems are just the subset of promise problems with the trivial promise <math>\Pi_{\text{ACCEPT}} \cup \Pi_{\text{REJECT}} = \{0,1\}^*</math>. With decision problems it is thus simpler to define the problem as only <math>\Pi_{\text{ACCEPT}}</math> (with <math>\Pi_{\text{REJECT}}</math> implicitly being <math>\{0,1\}^* \setminus \Pi_{\text{ACCEPT}}</math>), which throughout this article is denoted <math>L</math> to emphasize that <math>\Pi_{\text{ACCEPT}}=L</math> is a [[formal language]].

Promise problems make for a more natural formulation of many computational problems. For instance, a computational problem could be something like "given a [[planar graph]], determine whether or not..."{{sfn|Goldreich|2006|p=255 (2–3 in provided pdf)}} This is often stated as a decision problem, where it is assumed that there is some translation schema that takes ''every'' string <math>s \in \{0,1\}^*</math> to a planar graph. However, it is more straightforward to define this as a promise problem in which the input is promised to be a planar graph.

====Relation to complexity classes====
Promise problems provide an alternate definition for standard complexity classes of decision problems. '''P''', for instance, can be defined as a promise problem:{{sfn|Goldreich|2006|p=257 (4 in provided pdf)}}

: '''P''' is the class of promise problems that are solvable in deterministic polynomial time. That is, the promise problem <math>(\Pi_{\text{ACCEPT}},\Pi_{\text{REJECT}})</math> is in '''P''' if there exists a polynomial-time algorithm <math>M</math> such that:
:* For every <math>x \in \Pi_{\text{ACCEPT}}, M(x)=1</math>
:* For every <math>x \in \Pi_{\text{REJECT}}, M(x)=0</math>

Classes of decision problems—that is, classes of problems defined as formal languages—thus translate naturally to promise problems, where a language <math>L</math> in the class is simply <math>L= \Pi_{\text{ACCEPT}}</math> and <math>\Pi_{\text{REJECT}}</math> is implicitly <math>\{0,1\}^* \setminus \Pi_{\text{ACCEPT}}</math>.

Formulating many basic complexity classes like '''P''' as promise problems provides little additional insight into their nature. However, there are some complexity classes for which the promise-problem formulation has been useful to computer scientists. Promise problems have, for instance, played a key role in the study of '''SZK''' (statistical zero-knowledge).{{sfn|Goldreich|2006|p=266 (11–12 in provided pdf)}}

==Summary of relationships between complexity classes==
The following table shows some of the classes of problems that are considered in complexity theory. If class '''X''' is a strict [[subset]] of '''Y''', then '''X''' is shown below '''Y''' with a dark line connecting them. If '''X''' is a subset, but it is unknown whether they are equal sets, then the line is lighter and dotted. Technically, the breakdown into decidable and undecidable pertains more to the study of [[computability theory]], but is useful for putting the complexity classes in perspective.
{| cellpadding="0" cellspacing="0" border="0" style="margin:auto;" |- style="text-align:center;" | colspan=2 | | colspan=4 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightBlue; width:100%; height:100%;" |- | style="text-align:center;" | [[decision problem|Decision Problem]] |} |- style="text-align:center;" | colspan=2 | | [[File:solidLine.png]] | colspan=2 | | [[File:solidLine.png]] |- style="text-align:center;" | colspan=3 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightBlue; width:100%; height:100%;" |- | style="text-align:center;" | [[recursively enumerable language|Type 0 (Recursively enumerable)]] |} | | colspan=4 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightBlue; width:100%; height:100%;" |- | style="text-align:center;" | [[List of undecidable problems|Undecidable]] |} |- style="text-align:center;" | colspan=3 | [[File:solidLine.png]] |- style="text-align:center;" | colspan=3 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightBlue; width:100%; height:100%;" |- | style="text-align:center;" | [[recursive language|Decidable]] |} |- style="text-align:center;" | colspan=3 | [[File:solidLine.png]] |- style="text-align:center;" | colspan=3 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[EXPSPACE]] |} |- style="text-align:center;" | colspan=3 | [[File:dottedLine.png]] |- style="text-align:center;" | colspan=3 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[NEXPTIME]] |} |- style="text-align:center;" | colspan=3 | [[File:dottedLine.png]] |- style="text-align:center;" | colspan=3 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[EXPTIME]] |} |- style="text-align:center;" | colspan=3 | [[File:dottedLine.png]] |- style="text-align:center;" | colspan=8 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[PSPACE]] |} |- style="text-align:center;" | [[File:solidLine.png]] | width=40 | [[File:solidLine.png]] | [[File:dottedLine.png]] | [[File:dottedLine.png]] | | [[File:dottedLine.png]] |- style="text-align:center;" | {| cellpadding="0" cellspacing="0" border="1" style="background:lightBlue; width:100%; height:100%;" |- | style="text-align:center;" | [[context-sensitive grammar|Type 1 (Context Sensitive)]] |} | [[File:solidLine.png]] | [[File:dottedLine.png]] | border="1" | [[File:dottedLine.png]] | | [[File:dottedLine.png]] |- style="text-align:center;" | [[File:solidLine.png]] | [[File:solidLine.png]] | [[File:dottedLine.png]] | [[File:dottedLine.png]] | | [[File:dottedLine.png]] |- style="text-align:center;" | [[File:solidLine.png]] | [[File:solidLine.png]] | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[co-NP]] |} | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[BQP]] |} | | colspan=2 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[NP (complexity)|NP]] |} |- style="text-align:center;" | [[File:solidLine.png]] | [[File:solidLine.png]] | [[File:dottedLine.png]] | 
[[File:dottedLine.png]] | | [[File:dottedLine.png]] |- style="text-align:center;" | [[File:solidLine.png]] | [[File:solidLine.png]] | [[File:dottedLine.png]] | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[Bounded-error probabilistic polynomial|BPP]] |} | width=10 | | [[File:dottedLine.png]] |- style="text-align:center;" | [[File:solidLine.png]] | [[File:solidLine.png]] | [[File:dottedLine.png]] | [[File:dottedLine.png]] | | [[File:dottedLine.png]] |- style="text-align:center;" | [[File:solidLine.png]] | [[File:solidLine.png]] | colspan=5 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[P (complexity)|P]] |} |- style="text-align:center;" | [[File:solidLine.png]] | [[File:solidLine.png]] | [[File:dottedLine.png]] |- style="text-align:center;" | [[File:solidLine.png]] | colspan=2 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightGreen; width:100%; height:100%;" |- | style="text-align:center;" | [[NC (complexity)|NC]] |} |- style="text-align:center;" | [[File:solidLine.png]] | colspan=2 | [[File:solidLine.png]] |- style="text-align:center;" | colspan=3 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightBlue; width:100%; height:100%;" |- | style="text-align:center;" | [[context-free grammar|Type 2 (Context Free)]] |} |- style="text-align:center;" | colspan=3 | [[File:solidLine.png]] |- style="text-align:center;" | colspan=3 | {| cellpadding="0" cellspacing="0" border="1" style="background:lightBlue; width:100%; height:100%;" |- | style="text-align:center;" | [[regular grammar|Type 3 (Regular)]] |} |}

==See also==
* [[List of complexity classes]]

==Notes==
{{notelist}}

==References==
{{reflist}}

==Bibliography==
* {{cite web |last1=Aaronson |first1=Scott |author-link=Scott Aaronson |title=P=?NP |url=https://eccc.weizmann.ac.il/report/2017/004/ |website=Electronic Colloquium on Computational Complexity |publisher=Weizmann Institute of Science |date=8 January 2017|archive-url=https://web.archive.org/web/20200617175017/https://eccc.weizmann.ac.il/report/2017/004/download/|archive-date=June 17, 2020|url-status=live}}
* {{cite book |last1=Arora |first1=Sanjeev |last2=Barak|author1-link=Sanjeev Arora |first2=Boaz|author2-link=Boaz Barak |title=Computational Complexity: A Modern Approach |url=https://archive.org/details/computationalcom00aror |url-access=limited |date=2009 |publisher=Cambridge University Press |isbn=978-0-521-42426-4|at=[http://theory.cs.princeton.edu/complexity/book.pdf Draft]. [https://web.archive.org/web/20220223014316/https://theory.cs.princeton.edu/complexity/book.pdf Archived] from the original on February 23, 2022.}}
* {{cite web |last1=Arora |first1=Sanjeev |title=Complexity classes having to do with counting |url=https://www.cs.princeton.edu/courses/archive/spring03/cs522/book2.ps |website=Computer Science 522: Computational Complexity Theory |publisher=Princeton University |date=Spring 2003 |archive-url=https://web.archive.org/web/20220521204917/https://www.cs.princeton.edu/courses/archive/spring03/cs522/book2.ps |archive-date= May 21, 2022 |url-status=live}}
* {{cite web |last1=Barak |first1=Boaz |author-link=Boaz Barak |title=Complexity of counting |url=https://www.cs.princeton.edu/courses/archive/spring06/cos522/count.pdf |website=Computer Science 522: Computational Complexity |publisher=[[Princeton University]] |date=Spring 2006|at=[https://web.archive.org/web/20210403191124/https://www.cs.princeton.edu/courses/archive/spring06/cos522/count.pdf Archived] from the original on April 3, 2021.}}
* {{Cite book |last=Fortnow |first=Lance |title=Complexity Theory Retrospective II |publisher=Springer |year=1997 |isbn=9780387949734 |editor-last=Hemaspaandra |editor-first=Lane A. |pages=81–106 |chapter=Counting Complexity |editor-last2=Selman |editor-first2=Alan L. |chapter-url=https://lance.fortnow.com/papers/files/counting.pdf |archive-url=https://web.archive.org/web/20220618222631/https://lance.fortnow.com/papers/files/counting.pdf |archive-date=June 18, 2022}}
* {{cite web |last=Gasarch |first=William I. |date=2019 |url=https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/pollpaper3.pdf |url-status=live |title=Guest Column: The Third P =? NP Poll |website=[[University of Maryland]]|archive-url=https://web.archive.org/web/20211102102656/https://www.cs.umd.edu/users/gasarch/BLOGPAPERS/pollpaper3.pdf|archive-date=November 2, 2021}}
* {{Cite book |last=Goldreich |first=Oded |author-link=Oded Goldreich |url=https://www.wisdom.weizmann.ac.il/~oded/PSX/prpr-r.pdf |title=Theoretical Computer Science. Lecture Notes in Computer Science, vol 3895 |publisher=Springer |year=2006 |isbn=978-3-540-32881-0 |editor-last=Goldreich |editor-first=Oded |pages=254–290 |chapter=On Promise Problems: A Survey |volume=3895 |doi=10.1007/11685654_12 |editor-last2=Rosenberg |editor-first2=Arnold L. |editor-last3=Selman |editor-first3=Alan L. |chapter-url=https://www.wisdom.weizmann.ac.il/~oded/PSX/prpr-r.pdf |archive-url=https://web.archive.org/web/20210506131638/https://www.wisdom.weizmann.ac.il/~oded/PSX/prpr-r.pdf |archive-date=May 6, 2021 |url-status=live}}
* {{cite encyclopedia | last = Johnson | first = David S. | title = A Catalog of Complexity Classes | doi = 10.1016/b978-0-444-88071-0.50007-2 | pages = 67–161 | publisher = Elsevier | series = Handbook of Theoretical Computer Science | encyclopedia = Algorithms and Complexity | year = 1990| isbn = 978-0-444-88071-0 }}
* {{Cite web |last=Lee |first=James R. |date=May 22, 2014 |title=Lecture 16 |url=https://courses.cs.washington.edu/courses/cse431/14sp/scribes/lec16.pdf |url-status=live |archive-url=https://web.archive.org/web/20211129075858/https://courses.cs.washington.edu/courses/cse431/14sp/scribes/lec16.pdf |archive-date=November 29, 2021 |access-date=October 5, 2022 |website=CSE431: Introduction to Theory of Computation |publisher=[[University of Washington]]}}
* {{Cite book |last=Rich |first=Elaine|author-link=Elaine Rich |title=Automata, Computability and Complexity: Theory and Applications |publisher=[[Prentice Hall]] |year=2008 |isbn=978-0132288064|url=https://www.cs.utexas.edu/~ear/cs341/automatabook/AutomataTheoryBook.pdf|archive-url=https://web.archive.org/web/20220121174820/https://www.cs.utexas.edu/~ear/cs341/automatabook/AutomataTheoryBook.pdf|archive-date=January 21, 2022|url-status=live}}
* {{cite book|last=Sipser|first=Michael|author-link=Michael Sipser|title=Introduction to the Theory of Computation|edition=2nd|year=2006|publisher=Thomson Course Technology|location=USA|isbn=0-534-95097-3|url=http://fuuu.be/polytech/INFOF408/Introduction-To-The-Theory-Of-Computation-Michael-Sipser.pdf|archive-url=https://web.archive.org/web/20220207141236/http://fuuu.be/polytech/INFOF408/Introduction-To-The-Theory-Of-Computation-Michael-Sipser.pdf|archive-date=February 7, 2022}}
* {{Cite web |last=Watrous |first=John |author-link=John Watrous (computer scientist) |date=April 11, 2006 |title=Lecture 22: Quantum computational complexity |url=https://cs.uwaterloo.ca/~watrous/QC-notes/QC-notes.22.pdf |url-status=live |archive-url=https://web.archive.org/web/20220618022421/https://cs.uwaterloo.ca/~watrous/QC-notes/QC-notes.22.pdf |archive-date=June 18, 2022 |website=[[University of Waterloo]]}}

==Further reading==
*[https://complexityzoo.uwaterloo.ca/Complexity_Zoo The Complexity Zoo] {{Webarchive|url=https://web.archive.org/web/20190827233504/https://complexityzoo.uwaterloo.ca/Complexity_Zoo |date=2019-08-27 }}: A huge list of complexity classes, a reference for experts.
*{{cite web |url=http://www.cs.umass.edu/~immerman/complexity_theory.html |author=[[Neil Immerman]] |title=Computational Complexity Theory |archive-url=https://web.archive.org/web/20160416021243/https://people.cs.umass.edu/~immerman/complexity_theory.html |archive-date=2016-04-16}} Includes a diagram showing the hierarchy of complexity classes and how they fit together.
*[[Michael Garey]] and [[David S. Johnson]]: ''Computers and Intractability: A Guide to the Theory of NP-Completeness.'' New York: W. H. Freeman & Co., 1979. The standard reference on NP-Complete problems - an important category of problems whose solutions appear to require an impractically long time to compute.

{{ComplexityClasses}}

[[Category:Complexity classes| ]]
[[Category:Computational complexity theory|*]]
[[Category:Measures of complexity]]
[[Category:Theoretical computer science]]