Computational complexity theory
==History==
An early example of algorithm complexity analysis is the running-time analysis of the [[Euclidean algorithm]] done by [[Gabriel Lamé]] in 1844. Before research explicitly devoted to the complexity of algorithmic problems began, foundations were laid by various researchers. Most influential among these was the definition of Turing machines by [[Alan Turing]] in 1936, which proved to be a very robust and flexible simplification of a computer. The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" by [[Juris Hartmanis]] and [[Richard E. Stearns]], which laid out the definitions of [[time complexity]] and [[space complexity]] and proved the hierarchy theorems.<ref name="Fortnow 2003">{{Harvtxt|Fortnow|Homer|2003}}</ref> In addition, in 1965 [[Jack Edmonds|Edmonds]] proposed considering a "good" algorithm to be one with running time bounded by a polynomial of the input size.<ref>Richard M. Karp, "[http://cecas.clemson.edu/~shierd/Shier/MthSc816/turing-karp.pdf Combinatorics, Complexity, and Randomness]", 1985 Turing Award Lecture</ref> Earlier papers studying problems solvable by Turing machines with specific bounded resources include<ref name="Fortnow 2003"/> [[John Myhill]]'s definition of [[linear bounded automata]] (Myhill 1960), [[Raymond Smullyan]]'s study of rudimentary sets (1961), and [[Hisao Yamada]]'s paper<ref>{{Cite journal | last1 = Yamada | first1 = H. | title = Real-Time Computation and Recursive Functions Not Real-Time Computable | journal = IEEE Transactions on Electronic Computers | volume = EC-11 | issue = 6 | pages = 753–760 | year = 1962 | doi = 10.1109/TEC.1962.5219459}}</ref> on real-time computations (1962).
Somewhat earlier, [[Boris Trakhtenbrot]] (1956), a pioneer in the field from the USSR, studied another specific complexity measure.<ref>Trakhtenbrot, B.A.: Signalizing functions and tabular operators. Uchionnye Zapiski Penzenskogo Pedinstituta (Transactions of the Penza Pedagogical Institute) 4, 75–87 (1956) (in Russian)</ref> As he recalls: {{blockquote|However, [my] initial interest [in automata theory] was increasingly set aside in favor of computational complexity, an exciting fusion of combinatorial methods, inherited from [[switching theory]], with the conceptual arsenal of the theory of algorithms. These ideas had occurred to me earlier in 1955 when I coined the term "signalizing function", which is nowadays commonly known as "complexity measure".<ref>Boris Trakhtenbrot, "[https://books.google.com/books?id=GFX2qiLuRAMC&pg=PA1 From Logic to Theoretical Computer Science – An Update]". In: ''Pillars of Computer Science'', LNCS 4800, Springer 2008.</ref>}} In 1967, [[Manuel Blum]] formulated a set of [[axiom]]s (now known as [[Blum axioms]]) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-called [[Blum's speedup theorem|speed-up theorem]]. The field began to flourish in 1971 when [[Stephen Cook]] and [[Leonid Levin]] [[Cook–Levin theorem|proved]] the existence of practically relevant problems that are [[NP-complete]]. In 1972, [[Richard Karp]] took this idea a major step forward with his landmark paper "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse [[combinatorics|combinatorial]] and [[graph theory|graph theoretical]] problems, each infamous for its computational intractability, are NP-complete.<ref>{{Citation | author = Richard M. Karp | chapter = Reducibility Among Combinatorial Problems | chapter-url = http://www.cs.berkeley.edu/~luca/cs172/karp.pdf | title = Complexity of Computer Computations | editor = R. E. Miller | editor2 = J. W. Thatcher | publisher = New York: Plenum | pages = 85–103 | year = 1972 | access-date = September 28, 2009 | archive-date = June 29, 2011 | archive-url = https://web.archive.org/web/20110629023717/http://www.cs.berkeley.edu/~luca/cs172/karp.pdf | url-status = dead }}</ref>