Theory of computation
== Branches ==

=== Automata theory ===
{{main|Automata theory}}
{| class="wikitable"
|-
! Grammar
! Languages
! Automaton
! Production rules (constraints)
|-
| Type-0
| [[recursively enumerable language|Recursively enumerable]]
| [[Turing machine]]
| <math>\alpha \rightarrow \beta</math> (no restrictions)
|-
| Type-1
| [[context-sensitive grammar|Context-sensitive]]
| [[Linear bounded automaton|Linear-bounded non-deterministic Turing machine]]
| <math>\alpha A \beta \rightarrow \alpha \gamma \beta</math>
|-
| Type-2
| [[context-free grammar|Context-free]]
| Non-deterministic [[pushdown automaton]]
| <math>A \rightarrow \gamma</math>
|-
| Type-3
| [[regular grammar|Regular]]
| [[Finite-state automaton]]
| <math>A \rightarrow a</math><br /> and <br /><math>A \rightarrow aB</math>
|}
Automata theory is the study of abstract machines (more precisely, abstract mathematical machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. The word comes from the Greek αὐτόματα, meaning "self-acting". Automata theory is also closely related to [[formal language]] theory,<ref name=hopcroft-ullman>{{cite book|author =[[John Hopcroft|Hopcroft, John E.]] and [[Jeffrey D. Ullman]] | year = 2006| title =Introduction to Automata Theory, Languages, and Computation. 3rd ed| publisher =Reading, MA: Addison-Wesley. |isbn= 978-0-321-45536-9| title-link = Introduction to Automata Theory, Languages, and Computation}}</ref> as automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as theoretical models for computing machines, and are used for proofs about computability.
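As an illustration of the Type-3 row of the table, a finite-state automaton can be simulated directly: the machine is a finite set of states, a transition function, a start state, and a set of accepting states. The following is a minimal sketch (not part of the article); the function names and the example language (binary strings with an even number of 1s) are chosen for illustration only.

```python
# Sketch of a deterministic finite automaton (DFA) as a recognizer.
# The DFA is the 5-tuple (states, alphabet, delta, start, accepting).

def make_dfa(alphabet, delta, start, accepting):
    """Return a recognizer function for the given DFA."""
    def accepts(word):
        state = start
        for symbol in word:
            if symbol not in alphabet:
                return False  # symbols outside the alphabet are rejected
            state = delta[(state, symbol)]
        return state in accepting
    return accepts

# Two states suffice for "even number of 1s": reading a '1' toggles parity.
even_ones = make_dfa(
    alphabet={"0", "1"},
    delta={("even", "0"): "even", ("even", "1"): "odd",
           ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even",
    accepting={"even"},
)

print(even_ones("1010"))  # True: two 1s
print(even_ones("111"))   # False: three 1s
```

The automaton is a finite object, yet the language it recognizes (all even-parity binary strings) is infinite, which is exactly the sense in which an automaton is a finite representation of a possibly infinite formal language.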
==== Formal language theory ====
{{main|Formal language}}
[[Image:Chomsky-hierarchy.svg|thumb|right|200px|alt=The Chomsky hierarchy|Set inclusions described by the Chomsky hierarchy]]
Formal language theory is a branch of mathematics concerned with describing languages as sets of operations over an [[Alphabet (formal languages)|alphabet]]. It is closely linked with automata theory, as automata are used to generate and recognize formal languages. There are several classes of formal languages, each allowing more complex language specification than the one before it, organized in the [[Chomsky hierarchy]],<ref>{{cite journal |doi=10.1109/TIT.1956.1056813 |s2cid=19519474 |title=Three models for the description of language |date=1956 |last1=Chomsky |first1=N. |journal=IEEE Transactions on Information Theory |volume=2 |issue=3 |pages=113–124 }}</ref> and each corresponding to a class of automata that recognizes it. Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed.

=== Computability theory ===
{{main|Computability theory}}
Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. The statement that the [[halting problem]] cannot be solved by a Turing machine<ref>{{cite journal |last=Turing |first=A. M. |author-link=Alan Turing |date=1937 |title=On computable numbers, with an application to the Entscheidungsproblem |url=http://www.turingarchive.org/browse.php/B/12 |journal=Proceedings of the London Mathematical Society |volume=2 |issue=42 |pages=230–265 |doi=10.1112/plms/s2-42.1.230 |s2cid=73712 |access-date=6 January 2015}}</ref> is one of the most important results in computability theory, as it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine. Much of computability theory builds on the halting problem result.
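The core of Turing's argument is a diagonalization, which can be sketched in code (an illustrative sketch, not part of the article; `halts` and `paradox` are hypothetical names). Suppose, for contradiction, that a total, always-correct decider `halts(f, x)` existed:

```python
# Sketch of the diagonal argument behind the undecidability of the
# halting problem. The decider below is assumed only for contradiction;
# no such total, correct function can exist.

def halts(f, x):
    """Hypothetically reports whether f(x) eventually halts."""
    raise NotImplementedError("assumed for contradiction only")

def paradox(f):
    # Do the opposite of whatever the decider predicts about f run on itself.
    if halts(f, f):
        while True:      # predicted to halt -> loop forever
            pass
    return "halted"      # predicted to loop -> halt immediately

# Running paradox on itself yields a contradiction:
# - if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
# - if it is False, then paradox(paradox) halts.
# Either way the decider answered incorrectly, so it cannot exist.
```

The contradiction does not depend on any detail of how `halts` might work, only on its assumed existence, which is what makes the result a statement about all Turing machines rather than about one algorithm.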
Another important step in computability theory was [[Rice's theorem]], which states that for all non-trivial properties of partial functions, it is [[Decidability (logic)|undecidable]] whether a Turing machine computes a partial function with that property.<ref>{{cite journal |last=Rice |first=Henry Gordon |date=1953 |title=Classes of Recursively Enumerable Sets and Their Decision Problems |journal=Transactions of the American Mathematical Society |publisher=American Mathematical Society |volume=74 |issue=2 |pages=358–366 |doi=10.2307/1990888 |jstor=1990888 |doi-access=free }}</ref> Computability theory is closely related to the branch of [[mathematical logic]] called [[recursion theory]], which removes the restriction of studying only models of computation that are reducible to the Turing model.<ref name=davis>{{cite book|author =Martin Davis |year = 2004 |title =The undecidable: Basic papers on undecidable propositions, unsolvable problems and computable functions (Dover Ed) |publisher =Dover Publications |isbn=978-0486432281|author-link = Martin Davis (mathematician) }}</ref> Many mathematicians and computational theorists who study recursion theory refer to it as computability theory.

=== Computational complexity theory ===
{{main|Computational complexity theory}}
[[Image:Complexity subsets pspace.svg|250px|thumb|right|A representation of the relations among complexity classes]]
Computational complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently it can be solved. Two major aspects are considered: [[time complexity]] and [[space complexity]], which are respectively how many steps it takes to perform a computation and how much memory is required to perform that computation. To analyze how much time and space a given [[algorithm]] requires, computer scientists express the time or space required to solve the problem as a function of the size of the input.
For example, finding a particular number in a long list of numbers becomes harder as the list grows. If there are ''n'' numbers in the list, and the list is not sorted or indexed in any way, we may have to look at every number in order to find the one we seek. We thus say that, to solve this problem, the computer needs to perform a number of steps that grows linearly with the size of the problem. To simplify analysis, computer scientists have adopted [[Big O notation|big ''O'' notation]], which allows functions to be compared in a way that ensures that particular aspects of a machine's construction need not be considered, but rather only the [[Asymptotic analysis|asymptotic behavior]] as problems become large. So in our previous example, we might say that the problem requires <math>O(n)</math> steps to solve.

Perhaps the most important open problem in all of [[computer science]] is whether a certain broad class of problems, denoted [[NP (complexity)|NP]], can be solved efficiently. This is discussed further at [[P = NP problem|Complexity classes P and NP]]; the [[P versus NP problem]] is one of the seven [[Millennium Prize Problems]] stated by the [[Clay Mathematics Institute]] in 2000. The official problem description was given by [[Turing Award]] winner [[Stephen Cook]].
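The linear-search example above can be sketched directly (an illustrative sketch, not part of the article; the function name and sample data are chosen for illustration):

```python
# Searching an unsorted list of n numbers: in the worst case every
# element must be inspected, so the search takes O(n) steps.

def linear_search(numbers, target):
    """Return the index of target in numbers, or -1 if absent."""
    for i, value in enumerate(numbers):
        if value == target:
            return i          # found after i + 1 comparisons
    return -1                 # all n elements were inspected

data = [7, 3, 9, 1, 4]
print(linear_search(data, 1))   # 3: found after four comparisons
print(linear_search(data, 8))   # -1: all five elements were inspected
```

The big-''O'' bound deliberately ignores machine-dependent constants, such as how long one comparison takes on particular hardware: doubling the list length doubles the worst-case number of comparisons regardless of those constants, and that asymptotic growth rate is what <math>O(n)</math> records.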