{{Short description|Subfield of computer science and mathematics}} {{about|the branch of computer science and mathematics|the journal|Theoretical Computer Science (journal){{!}}''Theoretical Computer Science'' (journal)}} {{Distinguish|Theory of computation}} [[File:DFAexample.svg|thumb| 250px|A [[finite-state automaton]] from [[automata theory]], a branch of theoretical computer science]] '''Theoretical computer science''' is a subfield of [[computer science]] and [[mathematics]] that focuses on the [[Abstraction|abstract]] and mathematical foundations of [[computation]]. It is difficult to circumscribe the theoretical areas precisely. The [[Association for Computing Machinery|ACM]]'s [[Special Interest Group on Algorithms and Computation Theory]] (SIGACT) provides the following description:<ref>{{cite web | title = SIGACT | url = https://www.sigact.org/ | access-date = 2017-01-19}}</ref> {{"|TCS covers a wide variety of topics including [[algorithms]], [[data structure]]s, [[computational complexity theory|computational complexity]], [[parallel computation|parallel]] and [[distributed computation|distributed]] computation, [[probabilistic computation]], [[quantum computation]], [[automata theory]], [[information theory]], [[cryptography]], [[program semantics]] and [[Formal methods|verification]], [[algorithmic game theory]], [[machine learning]], [[computational biology]], [[computational economics]], [[computational geometry]], and [[computational number theory]] and [[Symbolic computation|algebra]]. Work in this field is often distinguished by its emphasis on mathematical technique and [[rigor#Mathematical rigour|rigor]].}} == History == {{Main|History of computer science}} While logical inference and mathematical proof had existed previously, in 1931 [[Kurt Gödel]] proved with his [[incompleteness theorem]] that there are fundamental limitations on what statements could be proved or disproved. [[Information theory]] was added to the field with [[A Mathematical Theory of Communication|a 1948 mathematical theory of communication]] by [[Claude Shannon]]. In the same decade, [[Donald Hebb]] introduced a mathematical model of [[Hebbian learning|learning]] in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of [[neural network]]s and [[parallel distributed processing]] were established. In 1971, [[Stephen Cook]] and, working [[Multiple discovery|independently]], [[Leonid Levin]], proved that there exist practically relevant problems that are [[NP-complete]] – a landmark result in [[computational complexity theory]].<ref>{{cite book |doi=10.1145/800157.805047 |chapter=The complexity of theorem-proving procedures |title=Proceedings of the third annual ACM symposium on Theory of computing - STOC '71 |date=1971 |last1=Cook |first1=Stephen A. |pages=151–158 |isbn=978-1-4503-7464-4 }}</ref> Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below: {| style="border:1px solid #ddd; text-align:center; margin: 0 auto;" cellspacing="15" | <math> P \rightarrow Q \,</math> | [[File:DFAexample.svg|96px]] | [[File:Elliptic curve simple.svg|96px]] | [[File:6n-graf.svg|96px]] | [[File:Wang tiles.svg|96px]] | '''P = NP''' ? 
|- | [[Mathematical logic]] | [[Automata theory]] | [[Number theory]] | [[Graph theory]] | [[Computability theory]] | [[Computational complexity theory]] |- | '''GNITIRW-TERCES''' | <math>\Gamma\vdash x: \text{Int}</math> | [[File:Commutative diagram for morphism.svg|96px]] | [[File:SimplexRangeSearching.svg|96px]] | [[File:TSP Deutschland 3.png|96px]] | [[File:Blochsphere.svg|96px]] |- | [[Cryptography]] | [[Type theory]] | [[Category theory]] | [[Computational geometry]] | [[Combinatorial optimization]] | [[Quantum computer|Quantum computing theory]] |} == Topics == ===Algorithms=== {{main|Algorithm}} An [[algorithm]] is a step-by-step procedure for calculations. Algorithms are used for [[calculation]], [[data processing]], and [[automated reasoning]]. An algorithm is an [[effective method]] expressed as a [[wikt:finite|finite]] list<ref>"Any classical mathematical algorithm, for example, can be described in a finite number of English words". {{cite book|title=Theory of Recursive Functions and Effective Computability|year=1967|publisher=McGraw-Hill|first=Hartley Jr.|last=Rogers|author-link=Hartley Rogers Jr.}} Page 2.</ref> of well-defined instructions<ref>Well defined with respect to the agent that executes the algorithm: "There is a computing agent, usually human, which can react to the instructions and carry out the computations" {{harv|Rogers|1967|p=2}}.</ref> for calculating a [[Function (mathematics)|function]].<ref>"an algorithm is a procedure for computing a ''function'' (with respect to some chosen notation for integers) ... this limitation (to numerical functions) results in no loss of generality", {{harv|Rogers|1967|p=1}}.</ref> Starting from an initial state and initial input (perhaps [[null string|empty]]),<ref>"An algorithm has [[zero]] or more inputs, i.e., [[quantity|quantities]] which are given to it initially before the algorithm begins" (Knuth 1973:5).</ref> the instructions describe a [[computation]] that, when [[Execution (computing)|executed]], proceeds through a finite<ref>"A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method'" (Knuth 1973:5).</ref> number of well-defined successive states, eventually producing "output"<ref>"An algorithm has one or more outputs, i.e. quantities which have a specified relation to the inputs" (Knuth 1973:5).</ref> and terminating at a final ending state. The transition from one state to the next is not necessarily [[deterministic]]; some algorithms, known as [[randomized algorithms]], incorporate random input.<ref>Whether or not a process with random interior processes (not including the input) is an algorithm is debatable. Rogers opines that: "a computation is carried out in a discrete stepwise fashion, without the use of continuous methods or analog devices . . . carried forward deterministically, without resort to random methods or devices, e.g., dice" {{harv|Rogers|1967|p=2}}.</ref> ===Automata theory=== {{main|Automata theory}} [[Automata theory]] is the study of ''[[abstract machine]]s'' and ''[[automaton|automata]]'', as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under [[discrete mathematics]] (a section of [[mathematics]] and also of [[computer science]]). ''Automata'' comes from the Greek word αὐτόματα meaning "self-acting". 
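For illustration, a deterministic finite automaton such as the one pictured at the top of the article can be written down directly as a transition table. The following minimal Python sketch (the names <code>TRANSITIONS</code> and <code>accepts</code> are illustrative, not a standard API) decides whether a binary string contains an even number of 0s:
<syntaxhighlight lang="python">
# A deterministic finite automaton (DFA) as a transition table.
# States: "even" (the accepting start state) and "odd", tracking the
# parity of the number of 0s read so far.
TRANSITIONS = {
    ("even", "0"): "odd",
    ("even", "1"): "even",
    ("odd", "0"): "even",
    ("odd", "1"): "odd",
}

def accepts(word: str) -> bool:
    """Run the DFA on `word`; accept iff it halts in the accepting state."""
    state = "even"
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"

print(accepts("1001"))  # True: two 0s
print(accepts("0111"))  # False: one 0
</syntaxhighlight>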
Automata theory is the study of self-operating virtual machines, used to aid in the logical understanding of the process from input to output, with or without intermediate stage(s) of [[computation]] (or any [[Function (engineering)|function]]/process). ===Coding theory=== {{main|Coding theory}} [[Coding theory]] is the study of the properties of codes and their fitness for a specific application. Codes are used for [[data compression]], [[cryptography]], [[error correction]] and more recently also for [[network coding]]. Codes are studied by various scientific disciplines – such as [[information theory]], [[electrical engineering]], [[mathematics]], and [[computer science]] – for the purpose of designing efficient and reliable [[data transmission]] methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data. ===Computational complexity theory=== {{main|Computational complexity theory}} [[Computational complexity theory]] is a branch of the [[theory of computation]] that focuses on classifying [[computational problems]] according to their inherent difficulty, and relating those [[Complexity class|classes]] to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an [[algorithm]]. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the [[algorithm]] used. The theory formalizes this intuition by introducing mathematical [[models of computation]] to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other [[complexity]] measures are also used, such as the amount of communication (used in [[communication complexity]]), the number of [[logic gate|gates]] in a circuit (used in [[circuit complexity]]) and the number of processors (used in [[parallel computing]]). One of the roles of computational complexity theory is to determine the practical limits on what [[computer]]s can and cannot do. ===Computational geometry=== {{main|Computational geometry}} [[Computational geometry]] is a branch of computer science devoted to the study of algorithms that can be stated in terms of [[geometry]]. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. The main impetus for the development of computational geometry as a discipline was progress in [[computer graphics]] and computer-aided design and manufacturing ([[Computer-aided design|CAD]]/[[Computer-aided manufacturing|CAM]]), but many problems in computational geometry are classical in nature, and may come from [[mathematical visualization]]. Other important applications of computational geometry include [[robotics]] (motion planning and visibility problems), [[geographic information system]]s (GIS) (geometrical location and search, route planning), [[integrated circuit]] design (IC geometry design and verification), [[computer-aided engineering]] (CAE) (mesh generation), and [[computer vision]] (3D reconstruction).
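Many of these geometric algorithms are built from a small set of primitives. As an illustrative sketch (the function name <code>orientation</code> is made up for this example, not a standard library call), the following predicate reports whether three points in the plane make a left turn, a right turn, or are collinear; this is the basic test inside convex-hull and segment-intersection algorithms:
<syntaxhighlight lang="python">
# Orientation predicate: the sign of the cross product of (q - p) and (r - p),
# i.e. twice the signed area of the triangle (p, q, r).
# Positive: counter-clockwise (left turn); negative: clockwise; zero: collinear.
def orientation(p, q, r):
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

print(orientation((0, 0), (1, 0), (1, 1)))  # positive: left turn
print(orientation((0, 0), (1, 0), (2, 0)))  # zero: collinear
</syntaxhighlight>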
===Computational learning theory=== {{main|Computational learning theory}} Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples, including samples that the algorithm has not previously seen. The goal of the supervised learning algorithm is to optimize some measure of performance, such as minimizing the number of mistakes made on new samples. ===Computational number theory=== {{main|Computational number theory}} [[Computational number theory]], also known as '''algorithmic number theory''', is the study of [[algorithm]]s for performing [[number theory|number theoretic]] [[computation]]s. The best known problem in the field is [[integer factorization]]. ===Cryptography=== {{main|Cryptography}} [[Cryptography]] is the practice and study of techniques for [[secure communication]] in the presence of third parties (called [[adversary (cryptography)|adversaries]]).<ref name="rivest90">{{cite book|first=Ronald L.|last=Rivest|author-link=Ron Rivest|editor=J. Van Leeuwen|title=Handbook of Theoretical Computer Science|chapter=Cryptology|volume=1|publisher=Elsevier|year=1990}}</ref> More generally, it is about constructing and analyzing [[communications protocol|protocol]]s that overcome the influence of adversaries<ref name="modern-crypto">{{Cite book|first1=Mihir|last1=Bellare|first2=Phillip|last2=Rogaway|title=Introduction to Modern Cryptography|chapter=Introduction|page=10|date=21 September 2005}}</ref> and that are related to various aspects in [[information security]] such as data [[confidentiality]], [[data integrity]], [[authentication]], and [[non-repudiation]].<ref name="hac">{{cite book |first1=A. J. |last1=Menezes |first2=P. C. |last2=van Oorschot |first3=S. A. |last3=Vanstone |url=https://archive.org/details/handbookofapplie0000mene |title=Handbook of Applied Cryptography |isbn=978-0-8493-8523-0 |year=1997 |publisher=Taylor & Francis }}</ref> Modern cryptography intersects the disciplines of [[mathematics]], [[computer science]], and [[electrical engineering]]. Applications of cryptography include [[automated teller machine|ATM cards]], [[password|computer passwords]], and [[electronic commerce]]. Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around [[computational hardness assumption]]s, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in [[integer factorization]] algorithms, and faster computing technology require these solutions to be continually adapted. There exist [[Information theoretic security|information-theoretically secure]] schemes that {{not a typo|provably}} cannot be broken even with unlimited computing power—an example is the [[one-time pad]]—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
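As a toy sketch of the one-time pad just mentioned (illustrative only; the scheme is information-theoretically secure only if the key is truly random, kept secret, as long as the message, and never reused):
<syntaxhighlight lang="python">
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the corresponding byte of `key`."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))       # one fresh random key byte per message byte
ciphertext = xor_bytes(message, key)          # encryption
assert xor_bytes(ciphertext, key) == message  # decryption is the same operation
</syntaxhighlight>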
===Data structures=== {{main|Data structure}} A [[data structure]] is a particular way of organizing [[data (computing)|data]] in a computer so that it can be used [[algorithmic efficiency|efficiently]].<ref>Paul E. Black (ed.), entry for ''data structure'' in ''[[Dictionary of Algorithms and Data Structures]]. U.S. [[National Institute of Standards and Technology]]. 15 December 2004. [http://xlinux.nist.gov/dads/HTML/datastructur.html Online version] Accessed May 21, 2009.''</ref><ref>Entry ''data structure'' in the [[Encyclopædia Britannica]] (2009) [http://www.britannica.com/EBchecked/topic/152190/data-structure Online entry] accessed on May 21, 2009.</ref> Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases use [[B-tree]] indexes to retrieve small portions of data, and [[compiler]]s and databases use dynamic [[hash table]]s as lookup tables. Data structures provide a means to manage large amounts of data efficiently for uses such as large [[database]]s and [[web indexing|internet indexing services]]. Usually, efficient data structures are key to designing efficient [[algorithm]]s. Some formal design methods and [[programming language]]s emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both [[main memory]] and [[secondary memory]]. ===Distributed computation=== {{main|Distributed computation}} [[Distributed computing]] studies distributed systems. A distributed system is a software system in which components located on [[computer network|networked computers]] communicate and coordinate their actions by [[message passing|passing messages]].<ref name="Coulouris">{{cite book|last=Coulouris|first=George|author2=Jean Dollimore|author-link2=Jean Dollimore|author3=Tim Kindberg|author4=Gordon Blair|title=Distributed Systems: Concepts and Design|publisher = Addison-Wesley|year=2011|location=Boston|isbn=978-0-132-14301-1|edition=5th}}</ref> The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components.<ref name="Coulouris"/> Examples of distributed systems vary from [[Service-oriented architecture|SOA-based systems]] to [[massively multiplayer online game]]s to [[Peer-to-peer| peer-to-peer applications]], and blockchain networks like [[Bitcoin]]. A [[computer program]] that runs in a distributed system is called a '''distributed program''', and distributed programming is the process of writing such programs.<ref>{{cite book | last=Ghosh | first=Sukumar | title=Distributed Systems – An Algorithmic Approach | page = 10 | publisher=Chapman & Hall/CRC | year=2007 | isbn=978-1-58488-564-1 }}</ref> There are many alternatives for the message passing mechanism, including [[Remote procedure call|RPC-like]] connectors and [[Message-oriented middleware|message queues]]. An important goal and challenge of distributed systems is [[location transparency]]. ===Information-based complexity=== {{main|Information-based complexity}} [[Information-based complexity]] (IBC) studies optimal algorithms and computational complexity for continuous problems. IBC has studied such continuous problems as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration.
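For a flavour of the problems IBC studies, a plain Monte Carlo sketch (illustrative; <code>mc_integrate</code> is not a library function) estimates a 100-dimensional integral, with error decaying like <math>O(1/\sqrt{n})</math> independently of the dimension:
<syntaxhighlight lang="python">
import random

def mc_integrate(f, dim, n):
    """Estimate the integral of f over the unit cube [0, 1]^dim
    as the average of f at n uniformly random sample points."""
    return sum(f([random.random() for _ in range(dim)]) for _ in range(n)) / n

# The integral of x_1 + ... + x_100 over [0, 1]^100 is exactly 50.
print(mc_integrate(sum, dim=100, n=10_000))  # prints a value close to 50
</syntaxhighlight>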
===Formal methods=== {{main|Formal methods}} [[Formal methods]] are [[mathematics|mathematically]] based techniques for the [[formal specification|specification]], development and [[formal verification|verification]] of [[software]] and [[computer hardware|hardware]] systems.<ref name="butler">{{cite web|author=R. W. Butler|title=What is Formal Methods?|url=http://shemesh.larc.nasa.gov/fm/fm-what.html|date=2001-08-06|access-date=2006-11-16}}</ref> The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.<ref>{{cite web|author=C. Michael Holloway|title=Why Engineers Should Consider Formal Methods|url=http://klabs.org/richcontent/verification/holloway/nasa-97-16dasc-cmh.pdf|publisher=16th Digital Avionics Systems Conference (27–30 October 1997)|access-date=2006-11-16|url-status=dead|archive-url=https://web.archive.org/web/20061116210448/http://klabs.org/richcontent/verification/holloway/nasa-97-16dasc-cmh.pdf|archive-date=16 November 2006}}</ref> Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular [[logic in computer science|logic]] calculi, [[formal language]]s, [[automata theory]], and [[program semantics]], but also [[type systems]] and [[algebraic data types]], to problems in software and hardware specification and verification.<ref>Monin, pp.3–4</ref> ===Information theory=== {{main|Information theory}} [[Information theory]] is a branch of [[applied mathematics]], [[electrical engineering]], and [[computer science]] involving the [[Quantification (science)|quantification]] of [[information]]. Information theory was developed by [[Claude E. Shannon]] to find fundamental limits on [[signal processing]] operations such as [[data compression|compressing data]] and on reliably [[Computer data storage|storing]] and [[Telecommunications|communicating]] data. Since its inception it has broadened to find applications in many other areas, including [[statistical inference]], [[natural language processing]], [[cryptography]], [[neurobiology]],<ref>{{cite book|author1=F. Rieke |author2=D. Warland |author3=R Ruyter van Steveninck |author4=W Bialek |title=Spikes: Exploring the Neural Code|publisher=The MIT press|year=1997|isbn=978-0262681087}}</ref> the evolution<ref>{{cite journal | last1=Huelsenbeck | first1=J. P. |first2=F. |last2=Ronquist|first3=R. |last3=Nielsen |first4=J. P. |last4=Bollback| title=Bayesian Inference of Phylogeny and Its Impact on Evolutionary Biology | journal=Science | publisher=American Association for the Advancement of Science (AAAS) | volume=294 | issue=5550 | date=2001-12-14 | issn=0036-8075 | doi=10.1126/science.1065889 | pages=2310–2314| pmid=11743192 | bibcode=2001Sci...294.2310H | s2cid=2138288 }}</ref> and function<ref>Rando Allikmets, Wyeth W. Wasserman, Amy Hutchinson, Philip Smallwood, Jeremy Nathans, Peter K. Rogan, [http://alum.mit.edu/www/toms/ Thomas D. Schneider], Michael Dean (1998) Organization of the ABCR gene: analysis of promoter and splice junction sequences, ''Gene'' '''215''':1, 111–122</ref> of molecular codes, [[model selection]] in statistics,<ref>Burnham, K. P. and Anderson D. R.
(2002) ''Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Second Edition'' (Springer Science, New York) {{isbn|978-0-387-95364-9}}.</ref> thermal physics,<ref>{{cite journal | last=Jaynes | first=E. T. | title=Information Theory and Statistical Mechanics | journal=[[Physical Review]] | publisher=American Physical Society (APS) | volume=106 | issue=4 | date=1957-05-15 | issn=0031-899X | doi=10.1103/physrev.106.620 | pages=620–630| bibcode=1957PhRv..106..620J | s2cid=17870175 }}</ref> [[quantum computing]], [[linguistics]], plagiarism detection,<ref>Charles H. Bennett, Ming Li, and Bin Ma (2003) [http://sciamdigital.com/index.cfm?fa=Products.ViewIssuePreview&ARTICLEID_CHAR=08B64096-0772-4904-9D48227D5C9FAC75 Chain Letters and Evolutionary Histories] {{Webarchive|url=https://web.archive.org/web/20071007041539/http://sciamdigital.com/index.cfm?fa=Products.ViewIssuePreview&ARTICLEID_CHAR=08B64096-0772-4904-9D48227D5C9FAC75 |date=2007-10-07 }}, ''Scientific American'' '''288''':6, 76–81</ref> [[pattern recognition]], [[anomaly detection]] and other forms of [[data analysis]].<ref> {{Cite web |author = David R. Anderson |title = Some background on why people in the empirical sciences may want to better understand the information-theoretic methods |date = November 1, 2003 |url = http://aicanderson2.home.comcast.net/~aicanderson2/home.pdf |access-date = 2010-06-23 |url-status = dead |archive-url = https://web.archive.org/web/20110723045720/http://aicanderson2.home.comcast.net/~aicanderson2/home.pdf |archive-date = July 23, 2011 }} </ref> Applications of fundamental topics of information theory include [[lossless data compression]] (e.g. [[ZIP (file format)|ZIP files]]), [[lossy data compression]] (e.g. [[MP3]]s and [[JPEG]]s), and [[channel capacity|channel coding]] (e.g. for [[DSL|Digital Subscriber Line (DSL)]]). The field is at the intersection of [[mathematics]], [[statistics]], [[computer science]], [[physics]], [[neurobiology]], and [[electrical engineering]]. Its impact has been crucial to the success of the [[Voyager program|Voyager]] missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the [[Internet]], the study of [[linguistics]] and of human perception, the understanding of [[black hole]]s, and numerous other fields. Important sub-fields of information theory are [[source coding]], [[channel coding]], [[algorithmic complexity theory]], [[algorithmic information theory]], [[information-theoretic security]], and measures of information. ===Machine learning=== {{main|Machine learning}} [[Machine learning]] is a [[academic disciplines|scientific discipline]] that deals with the construction and study of [[algorithm]]s that can [[learning|learn]] from data.<ref>{{cite journal |title=Glossary of terms |author1=Ron Kovahi |author2=Foster Provost |journal=[[Machine Learning (journal)|Machine Learning]] |volume=30 |pages=271–274 |year=1998 |url=https://ai.stanford.edu/~ronnyk/glossary.html|doi=10.1023/A:1007411609915 |doi-access=free }}</ref> Such algorithms operate by building a [[Statistical model|model]] based on inputs<ref name="bishop">{{cite book |author=C. M. Bishop |author-link=Christopher M. Bishop |year=2006 |title=Pattern Recognition and Machine Learning |publisher=Springer |isbn=978-0-387-31073-2}}</ref>{{rp|2}} and using that to make predictions or decisions, rather than following only explicitly programmed instructions. 
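A one-nearest-neighbour rule is perhaps the smallest example of this "build a model from data, then predict" pattern (a deliberately naive, illustrative sketch; the helper names are invented for this example):
<syntaxhighlight lang="python">
# One-nearest-neighbour classification: the "model" is just the stored
# labeled samples; a new point receives the label of its closest sample.
def predict(labeled_samples, point):
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(labeled_samples, key=lambda sample: sq_dist(sample[0], point))
    return label

training = [((0.0, 0.0), "edible"), ((1.0, 1.0), "poisonous")]
print(predict(training, (0.2, 0.1)))  # "edible": nearest to (0.0, 0.0)
</syntaxhighlight>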
Machine learning can be considered a subfield of computer science and [[statistics]]. It has strong ties to [[artificial intelligence]] and [[mathematical optimization|optimization]], which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based [[algorithm]]s is infeasible. Example applications include [[spam filter]]ing, [[optical character recognition]] (OCR),<ref name=Wernick-Signal-Proc-July-2010>Wernick, Yang, Brankov, Yourganov and Strother, Machine Learning in Medical Imaging, ''[[IEEE Signal Processing Society|IEEE Signal Processing Magazine]]'', vol. 27, no. 4, July 2010, pp. 25–38</ref> [[Learning to rank|search engines]] and [[computer vision]]. Machine learning is sometimes conflated with [[data mining]],<ref>{{cite conference |last=Mannila |first=Heikki |title=Data mining: machine learning, statistics, and databases |conference=Int'l Conf. Scientific and Statistical Database Management |publisher=IEEE Computer Society |year=1996}}</ref> although that focuses more on exploratory data analysis.<ref>{{cite journal |last=Friedman |first=Jerome H. |author-link=Jerome H. Friedman |title=Data Mining and Statistics: What's the connection? |journal=Computing Science and Statistics |volume=29 |issue=1 |year=1998 |pages=3–9}}</ref> Machine learning and [[pattern recognition]] "can be viewed as two facets of the same field."<ref name="bishop"/>{{rp|vii}} ===Natural computation=== {{excerpt|Natural computing}}<ref>{{cite book |doi=10.1142/9789812810403_0005 |chapter=Natural Computing |title=Current Trends in Theoretical Computer Science |date=2001 |last1=Rozenberg |first1=Grzegorz |pages=543–690 |isbn=978-981-02-4473-6 }}</ref> ===Parallel computation=== {{main|Parallel computation}} [[Parallel computing]] is a form of [[computing|computation]] in which many calculations are carried out simultaneously,<ref>{{cite book|last=Gottlieb|first=Allan|title=Highly parallel computing|year=1989|publisher=Benjamin/Cummings|location=Redwood City, Calif.|isbn=978-0-8053-0177-9|url=http://dl.acm.org/citation.cfm?id=160438|author2=Almasi, George S.}}</ref> operating on the principle that large problems can often be divided into smaller ones, which are then solved [[Parallelism (computing)|"in parallel"]]. There are several different forms of parallel computing: [[bit-level parallelism|bit-level]], [[instruction level parallelism|instruction level]], [[data parallelism|data]], and [[task parallelism]]. Parallelism has been employed for many years, mainly in [[high performance computing|high-performance computing]], but interest in it has grown lately due to the physical constraints preventing [[frequency scaling]].<ref>S.V. Adve et al. (November 2008). [http://www.upcrc.illinois.edu/documents/UPCRC_Whitepaper.pdf "Parallel Computing Research at Illinois: The UPCRC Agenda"] {{Webarchive|url=https://web.archive.org/web/20081209154000/http://www.upcrc.illinois.edu/documents/UPCRC_Whitepaper.pdf |date=2008-12-09 }} (PDF). Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits – increased clock frequency and smarter but increasingly complex architectures – are now hitting the so-called power wall. 
The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."</ref> As power consumption (and consequently heat generation) by computers has become a concern in recent years,<ref>[[Krste Asanović|Asanovic]] et al. Old [conventional wisdom]: Power is free, but transistors are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are "free".</ref> parallel computing has become the dominant paradigm in [[computer architecture]], mainly in the form of [[multi-core processor]]s.<ref name="View-Power">[[Asanovic, Krste]] et al. (December 18, 2006). [http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.pdf "The Landscape of Parallel Computing Research: A View from Berkeley"] (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance ... Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limit."</ref> [[Parallel algorithm|Parallel computer programs]] are more difficult to write than sequential ones,<ref>{{cite book|last=Hennessy|first=John L.|title=Computer organization and design : the hardware/software interface|year=1999|publisher=Kaufmann|location=San Francisco|isbn=978-1-55860-428-5|edition=2. ed., 3rd print.|author2=Patterson, David A.|author3=Larus, James R.|url=https://archive.org/details/computerorganiz000henn}}</ref> because concurrency introduces several new classes of potential [[software bug]]s, of which [[race condition]]s are the most common. [[Computer networking|Communication]] and [[Synchronization (computer science)|synchronization]] between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. The maximum possible [[speedup|speed-up]] of a single program as a result of parallelization is known as [[Amdahl's law]].
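Amdahl's law can be stated concretely: if a fraction <math>p</math> of a program's running time benefits from parallelization across <math>s</math> processors while the rest stays sequential, the overall speed-up is <math display="block">S(s) = \frac{1}{(1 - p) + \tfrac{p}{s}},</math> so a program that is 90% parallelizable (<math>p = 0.9</math>) can never run more than <math>1/(1 - 0.9) = 10</math> times faster, however many processors are added.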
===Programming language theory and program semantics=== {{main|Programming language theory|Program semantics}} [[Programming language theory]] is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of [[programming language]]s and their individual [[Programming language#Elements|features]]. It falls within the discipline of theoretical computer science, both depending on and affecting [[mathematics]], software engineering, and [[linguistics]]. It is an active research area, with numerous dedicated academic journals. In [[programming language theory]], '''semantics''' is the field concerned with the rigorous mathematical study of the meaning of [[programming language]]s. It does so by evaluating the meaning of [[programming language syntax|syntactically]] legal [[String (computer science)|strings]] defined by a specific programming language, showing the computation involved; syntactically illegal strings have no defined meaning and describe no computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or an explanation of how the program will execute on a certain [[computer platform|platform]], hence creating a [[model of computation]]. ===Quantum computation=== {{main|Quantum computation}} A [[quantum computer]] is a [[computation]] system that makes direct use of [[quantum mechanics|quantum-mechanical]] [[phenomena]], such as [[quantum superposition|superposition]] and [[quantum entanglement|entanglement]], to perform [[Instruction (computer science)|operations]] on [[data]].<ref>"[http://cba.mit.edu/docs/papers/98.06.sciqc.pdf Quantum Computing with Molecules]" article in ''[[Scientific American]]'' by [[Neil Gershenfeld]] and [[Isaac L. Chuang]]</ref> Quantum computers are different from digital computers based on [[transistor]]s. Whereas digital computers require data to be encoded into binary digits ([[bit]]s), each of which is always in one of two definite states (0 or 1), quantum computation uses [[qubits]] (quantum bits), which can be in [[quantum superposition|superpositions]] of states. A theoretical model is the [[quantum Turing machine]], also known as the universal quantum computer. Quantum computers share theoretical similarities with [[Non-deterministic Turing machine|non-deterministic]] and [[probabilistic automaton|probabilistic computers]]; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by [[Yuri Manin]] in 1980<ref name="manin1980vychislimoe">{{cite book| author=Manin, Yu. I.| title=Vychislimoe i nevychislimoe| trans-title=Computable and Noncomputable| year=1980| publisher=Sov.Radio| url=http://publ.lib.ru/ARCHIVES/M/MANIN_Yuriy_Ivanovich/Manin_Yu.I._Vychislimoe_i_nevychislimoe.(1980).%5Bdjv%5D.zip| pages=13–15| language=ru| access-date=4 March 2013| url-status=dead| archive-url=https://web.archive.org/web/20130510173823/http://publ.lib.ru/ARCHIVES/M/MANIN_Yuriy_Ivanovich/Manin_Yu.I._Vychislimoe_i_nevychislimoe.(1980).%5Bdjv%5D.zip| archive-date=10 May 2013}}</ref> and [[Richard Feynman]] in 1982.<ref name="Feynman82">{{cite journal |last=Feynman |first=R. P. |title=Simulating physics with computers |journal=[[International Journal of Theoretical Physics]] |year=1982 |volume=21 |issue=6 |pages=467–488 |doi=10.1007/BF02650179 |bibcode=1982IJTP...21..467F |citeseerx=10.1.1.45.9310 |s2cid=124545445 }}</ref><ref>{{cite journal |title=Quantum computation |author-link=David Deutsch |first=David |last=Deutsch |journal=Physics World |date=1992-01-06 |volume=5 |issue=6 |pages=57–61 |doi=10.1088/2058-7058/5/6/38 }}</ref> A quantum computer with spins as quantum bits was also formulated for use as a quantum [[space–time]] in 1968.<ref>{{cite book |first=David |last=Finkelstein |chapter=Space-Time Structure in High Energy Interactions |title=Fundamental Interactions at High Energy |editor1-first=T. |editor1-last=Gudehus |editor2-first=G.
|editor2-last=Kaiser |location=New York |publisher=Gordon & Breach |year=1968 }}</ref> Experiments have been carried out in which quantum computational operations were executed on a very small number of qubits.<ref>{{cite web|url=http://phys.org/news/2013-01-qubit-bodes-future-quantum.html|title=New qubit control bodes well for future of quantum computing|access-date=26 October 2014}}</ref> Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum [[computer]]s for both civilian and national security purposes, such as [[cryptanalysis]].<ref>[http://qist.lanl.gov/qcomp_map.shtml Quantum Information Science and Technology Roadmap] for a sense of where the research is heading.</ref> ===Symbolic computation=== {{main|Symbolic computation}} [[Computer algebra]], also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of [[algorithm]]s and [[software]] for manipulating [[expression (mathematics)|mathematical expressions]] and other [[mathematical object]]s. Although, properly speaking, computer algebra should be a subfield of [[scientific computing]], they are generally considered distinct fields because scientific computing is usually based on [[numerical computation]] with approximate [[floating point number]]s, while symbolic computation emphasizes ''exact'' computation with expressions containing [[variable (mathematics)|variable]]s that have no given value and are thus manipulated as symbols (hence the name ''symbolic computation''). [[Software]] applications that perform symbolic calculations are called ''[[computer algebra system]]s'', with the term ''system'' alluding to the complexity of the main applications, which include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a [[user interface]] for the input/output of mathematical expressions, and a large set of [[function (computer science)|routines]] to perform usual operations, like simplification of expressions, [[differentiation (mathematics)|differentiation]] using the [[chain rule]], [[polynomial factorization]], [[indefinite integration]], etc. ===Very-large-scale integration=== {{main|VLSI}} [[Very-large-scale integration]] ('''VLSI''') is the process of creating an [[integrated circuit]] (IC) by combining thousands of [[transistors]] into a single chip. VLSI began in the 1970s when complex [[semiconductor]] and [[communication]] technologies were being developed. The [[microprocessor]] is a VLSI device. Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An [[electronic circuit]] might consist of a [[central processing unit|CPU]], [[Read-only memory|ROM]], [[Random Access Memory|RAM]] and other [[glue logic]]. VLSI allows IC makers to add all of these circuits into one chip.
== Organizations == * [[European Association for Theoretical Computer Science]] * [[SIGACT]] * [[Simons Institute for the Theory of Computing]] == Journals and newsletters == * [[Discrete Mathematics and Theoretical Computer Science]] * ''[[Information and Computation]]'' * ''[[Theory of Computing (journal)|Theory of Computing]]'' ([[Open access (publishing)|open access]] journal) * ''[[Formal Aspects of Computing]]'' * ''[[Journal of the ACM]]'' * ''[[SIAM Journal on Computing]]'' (SICOMP) * ''[[SIGACT News]]'' * ''[[Theoretical Computer Science (journal)|Theoretical Computer Science]]'' * ''[[Theory of Computing Systems]]'' * [https://theoretics.episciences.org/ TheoretiCS] ([[Open access (publishing)|open access]] journal) * ''[[International Journal of Foundations of Computer Science]]'' * ''[[Chicago Journal of Theoretical Computer Science]]'' ([[Open access (publishing)|open access]] journal) * ''[[Foundations and Trends in Theoretical Computer Science]]'' * ''[[Journal of Automata, Languages and Combinatorics]]'' * ''[[Acta Informatica]]'' * ''[[Fundamenta Informaticae]]'' * ''[[ACM Transactions on Computation Theory]]'' * ''[[Computational Complexity (journal)|Computational Complexity]]'' * ''[[Journal of Complexity (journal)|Journal of Complexity]]'' * ACM Transactions on Algorithms * Information Processing Letters * [https://www.degruyter.com/view/j/comp Open Computer Science] ([[Open access (publishing)|open access]] journal) == Conferences == * Annual ACM [[Symposium on Theory of Computing]] (STOC)<ref name="core-a-plus">The [http://www.core.edu.au/rankings/Conference%20Ranking%20Main.html 2007 Australian Ranking of ICT Conferences] {{Webarchive|url=https://web.archive.org/web/20091002002344/http://www.core.edu.au/rankings/Conference%20Ranking%20Main.html |date=2009-10-02 }}: tier A+.</ref> * Annual IEEE [[Symposium on Foundations of Computer Science]] (FOCS)<ref name="core-a-plus"/> * [[Innovations in Theoretical Computer Science]] (ITCS) * [[Mathematical Foundations of Computer Science]] (MFCS)<ref>{{Cite web |url=http://mfcs2017.cs.aau.dk/ |title=MFCS 2017 |access-date=2018-01-09 |archive-date=2018-01-10 |archive-url=https://web.archive.org/web/20180110054700/http://mfcs2017.cs.aau.dk/ |url-status=dead }}</ref> * [[International Computer Science Symposium in Russia]] (CSR)<ref>[https://logic.pdmi.ras.ru/csr2018/ CSR 2018]</ref> * ACM–SIAM [[Symposium on Discrete Algorithms]] (SODA)<ref name="core-a-plus"/> * IEEE [[Symposium on Logic in Computer Science]] (LICS)<ref name="core-a-plus"/> * [[Computational Complexity Conference]] (CCC)<ref name="core-a"/> * [[International Colloquium on Automata, Languages and Programming]] (ICALP)<ref name="core-a"/> * Annual [[Symposium on Computational Geometry]] (SoCG)<ref name="core-a">The [http://www.core.edu.au/rankings/Conference%20Ranking%20Main.html 2007 Australian Ranking of ICT Conferences] {{Webarchive|url=https://web.archive.org/web/20091002002344/http://www.core.edu.au/rankings/Conference%20Ranking%20Main.html |date=2009-10-02 }}: tier A.</ref> * ACM [[Symposium on Principles of Distributed Computing]] (PODC)<ref name="core-a-plus"/> * ACM [[Symposium on Parallelism in Algorithms and Architectures]] (SPAA)<ref name="core-a"/> * [[Annual Conference on Learning Theory]] (COLT)<ref name="core-a"/> * [[International Conference on Current Trends in Theory and Practice of Computer Science]] (SOFSEM)<ref>[https://www.sofsem.cz/ SOFSEM webpage] (retrieved 2024-09-03)</ref> * [[Symposium on Theoretical Aspects of Computer Science]] 
(STACS)<ref name="core-a"/> * [[European Symposium on Algorithms]] (ESA)<ref name="core-a"/> * [[Workshop on Approximation Algorithms for Combinatorial Optimization Problems]] (APPROX)<ref name="core-a"/> * [[Workshop on Randomization and Computation]] (RANDOM)<ref name="core-a"/> * [[International Symposium on Algorithms and Computation]] (ISAAC)<ref name="core-a"/> * [[International Symposium on Fundamentals of Computation Theory]] (FCT)<ref>[http://fct11.ifi.uio.no/ FCT 2011] (retrieved 2013-06-03)</ref> * [[International Workshop on Graph-Theoretic Concepts in Computer Science]] (WG) ==See also== * [[Formal science]] * [[Unsolved problems in computer science]] * [[Sun–Ni law]] == Notes == <references/> == Further reading == * [[Martin Davis (mathematician)|Martin Davis]], Ron Sigal, [[Elaine Weyuker|Elaine J. Weyuker]], ''Computability, complexity, and languages: fundamentals of theoretical computer science'', 2nd ed., Academic Press, 1994, {{isbn|0-12-206382-1}}. Covers [[theory of computation]], but also [[program semantics]] and [[quantification theory]]. Aimed at graduate students. == External links == * [https://web.archive.org/web/20170715101741/http://www.sigact.org/webpages.php SIGACT directory of additional theory links] (archived 15 July 2017) * [http://theorymatters.org/ Theory Matters Wiki] Theoretical Computer Science (TCS) Advocacy Wiki * [http://www.confsearch.org/confsearch/faces/pages/topic.jsp?topic=Theory&sortMode=1&graphicView=1 List of academic conferences in the area of theoretical computer science] at [http://www.confsearch.org confsearch] * [https://cstheory.stackexchange.com/ Theoretical Computer Science – StackExchange], a Question and Answer site for researchers in theoretical computer science * [http://www.csanimated.com/browse.php Computer Science Animated] * [http://theory.csail.mit.edu/ Theory of computation] at the [[Massachusetts Institute of Technology]] {{Computer science}} {{Authority control}} [[Category:Theoretical computer science| ]] [[Category:Formal sciences]]