
Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others. Computational linguistics is closely related to mathematical linguistics.

Origins

The field has overlapped with artificial intelligence since efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English.<ref>John Hutchins: Retrospect and prospect in computer-based translation. Template:Webarchive Proceedings of MT Summit VII, 1999, pp. 30–44.</ref> Since computers can make arithmetic (systematic) calculations much faster and more accurately than humans, it was expected that lexicon, morphology, syntax and semantics could likewise be captured using explicit rules. After the failure of rule-based approaches, David Hays<ref>Template:Cite web</ref> coined the term in order to distinguish the field from AI and co-founded both the Association for Computational Linguistics (ACL) and the International Committee on Computational Linguistics (ICCL) in the 1970s and 1980s. What started as an effort to translate between languages evolved into the much wider field of natural language processing.<ref>Natural Language Processing by Liz Liddy, Eduard Hovy, Jimmy Lin, John Prager, Dragomir Radev, Lucy Vanderwende, Ralph Weischedel</ref><ref>Arnold B. Barach: Translating Machine 1975: And the Changes To Come.</ref>

Annotated corpora

In order to study the English language meticulously, annotated text corpora were needed. The Penn Treebank<ref>Template:Cite journal</ref> was one of the most used: it consisted of IBM computer manuals, transcribed telephone conversations, and other texts, together containing over 4.5 million words of American English, annotated using both part-of-speech tagging and syntactic bracketing.<ref>Template:Cite book</ref>
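
Both annotation layers, part-of-speech tags and constituency brackets, can be inspected programmatically. The following is a minimal sketch using the small Penn Treebank sample bundled with the NLTK library (a fraction of the full corpus), assuming NLTK is installed and its "treebank" data package has been downloaded:

<syntaxhighlight lang="python">
import nltk

# Fetch the Penn Treebank sample shipped with NLTK (not the full corpus).
nltk.download("treebank", quiet=True)
from nltk.corpus import treebank

# Part-of-speech layer: each token is paired with a Penn Treebank tag,
# e.g. [('Pierre', 'NNP'), ('Vinken', 'NNP'), (',', ','), ('61', 'CD'), ...]
print(treebank.tagged_words("wsj_0001.mrg")[:5])

# Syntactic bracketing layer: the same sentence as a labelled constituency tree.
tree = treebank.parsed_sents("wsj_0001.mrg")[0]
tree.pretty_print()
</syntaxhighlight>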

Statistical analysis of Japanese sentence corpora has found a pattern of log-normality in the distribution of sentence length.<ref name="autogenerated3">Template:Cite journal</ref>
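
A log-normal fit of this kind can be checked with standard statistical tooling. Below is a minimal sketch using SciPy on an invented handful of tokenized English sentences (the cited study used Japanese corpora; the fitting procedure is the same):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Invented toy data: in practice, lengths come from a large corpus.
sentences = [
    "this is a short sentence".split(),
    "sentence length distributions are often heavy tailed".split(),
    "a log normal fit models the logarithm of length as gaussian".split(),
]
lengths = np.array([len(s) for s in sentences], dtype=float)

# If lengths are log-normal, log(lengths) should be approximately normal.
shape, loc, scale = stats.lognorm.fit(lengths, floc=0)
print(f"fitted sigma={shape:.2f}, median length={scale:.1f} tokens")

# Kolmogorov–Smirnov test of the fitted distribution against the data.
ks = stats.kstest(lengths, "lognorm", args=(shape, loc, scale))
print(f"KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")
</syntaxhighlight>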

Modeling language acquisition

During language acquisition, children are largely exposed only to positive evidence,<ref>Bowerman, M. (1988). The "no negative evidence" problem: How do children avoid constructing an overly general grammar? Explaining language universals.</ref> meaning that they receive evidence for which forms are correct but none for which forms are incorrect.<ref name="autogenerated1971">Braine, M.D.S. (1971). On two types of models of the internalization of grammars. In D.I. Slobin (Ed.), The ontogenesis of grammar: A theoretical perspective. New York: Academic Press.</ref> This was a limitation for the models of the time, because the deep learning models available today did not yet exist in the late 1980s.<ref name="powers1989">Powers, D.M.W. & Turk, C.C.R. (1989). Machine Learning of Natural Language. Springer-Verlag. Template:ISBN.</ref>

It has been shown that languages can be learned with a combination of simple input presented incrementally as the child develops better memory and a longer attention span,<ref name="autogenerated1993">Template:Cite journal</ref> which would explain the long period of language acquisition in human infants and children.<ref name="autogenerated1993"/>
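
A minimal sketch of this "starting small" regime follows, using a toy n-gram predictor in place of the recurrent networks of the original simulations, with a context window that grows stage by stage as a stand-in for improving memory and attention (the corpus and stages are invented):

<syntaxhighlight lang="python">
from collections import Counter, defaultdict

# Invented toy corpus; real simulations used generated grammatical sentences.
corpus = "the cat sees the dog . the dog sees the cat .".split()

# Maps a context tuple to counts of the tokens that followed it.
model = defaultdict(Counter)

def train(tokens, window):
    # Record each token against at most `window` preceding tokens of context.
    for i, tok in enumerate(tokens):
        context = tuple(tokens[max(0, i - window):i])
        model[context][tok] += 1

# Stage 1: memory span of one token; stage 2: two tokens; and so on,
# mimicking the incremental growth of memory and attention span.
for stage, window in enumerate([1, 2, 3], start=1):
    train(corpus, window)
    print(f"stage {stage}: learned {len(model)} contexts")
</syntaxhighlight>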

Robots have been used to test linguistic theories.<ref>Template:Cite journal</ref> To enable them to learn as children might, models were created based on an affordance model, in which mappings between actions, perceptions, and effects were created and linked to spoken words. Crucially, these robots were able to acquire functioning word-to-meaning mappings without needing grammatical structure.
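
A minimal sketch of such grammar-free word-to-meaning learning, reduced to cross-situational co-occurrence counting between words and percepts, is shown below; the episodes, words, and percept labels are invented, and real affordance models pair much richer action and effect features:

<syntaxhighlight lang="python">
from collections import Counter, defaultdict

# Each episode pairs an unordered bag of words with the percepts present;
# no word order or grammatical structure is recorded.
episodes = [
    ("grasp the ball".split(), {"ball", "grasp-action"}),
    ("tap the ball".split(), {"ball", "tap-action"}),
    ("grasp the box".split(), {"box", "grasp-action"}),
    ("push the box".split(), {"box", "push-action"}),
]

cooccur = defaultdict(Counter)
for words, percepts in episodes:
    for w in words:
        for p in percepts:
            cooccur[w][p] += 1

# The best meaning for a word is its most frequent co-occurring percept;
# syntax is never consulted at any point.
for word in ("ball", "grasp", "box"):
    meaning, count = cooccur[word].most_common(1)[0]
    print(f"{word!r} -> {meaning} (seen together {count} times)")
</syntaxhighlight>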

Using the Price equation and Pólya urn dynamics, researchers have created a system which not only predicts future linguistic evolution but also gives insight into the evolutionary history of modern-day languages.<ref>Template:Cite journal</ref>
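
The urn component of such a model is easy to illustrate. Below is a minimal sketch of basic Pólya urn dynamics for two competing linguistic variants, in which each use of a variant reproduces it and so reinforces it, letting early random fluctuations entrench one form (the variant names are invented; the cited system combines this with the Price equation, which is not shown):

<syntaxhighlight lang="python">
import random

random.seed(0)
urn = ["variant-A", "variant-B"]  # one token of each competing form

for _ in range(10_000):
    drawn = random.choice(urn)  # a speaker samples a variant in use...
    urn.append(drawn)           # ...and reproduces it, reinforcing it

share_a = urn.count("variant-A") / len(urn)
print(f"variant-A frequency after 10,000 uses: {share_a:.2%}")
</syntaxhighlight>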

Chomsky's theories

Noam Chomsky's theories have influenced computational linguistics, particularly in understanding how infants learn complex grammatical structures, such as those described by Chomsky normal form.<ref>Template:Cite web</ref> Attempts have been made to determine how an infant could learn grammars that do not fit Chomsky normal form ("non-normal grammars").<ref name="autogenerated1971"/> Research in this area combines structural approaches with computational models to analyze large linguistic corpora, such as the Penn Treebank, helping to uncover patterns in language acquisition.<ref>Template:Cite web</ref>
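
Chomsky normal form restricts every rule to the shape A → B C or A → terminal, which is what makes cubic-time chart parsing such as the CYK algorithm possible. Below is a minimal sketch of CYK recognition over an invented toy grammar already in normal form:

<syntaxhighlight lang="python">
from itertools import product

# Invented toy grammar in Chomsky normal form:
# binary rules A -> B C, and lexical rules A -> 'terminal'.
binary = {("NP", "VP"): {"S"}, ("Det", "N"): {"NP"}, ("V", "NP"): {"VP"}}
lexical = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "sees": {"V"}}

def cyk(words):
    n = len(words)
    # chart[i][j] holds the nonterminals deriving words[i:j+1].
    chart = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][i] |= lexical.get(w, set())
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # try every split point of the span
                for b, c in product(chart[i][k], chart[k + 1][j]):
                    chart[i][j] |= binary.get((b, c), set())
    return "S" in chart[0][n - 1]

print(cyk("the dog sees the cat".split()))  # True
</syntaxhighlight>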


References

Template:Reflist

Further reading


  • Steven Bird, Ewan Klein, and Edward Loper (2009). Natural Language Processing with Python. O'Reilly Media.
  • Daniel Jurafsky and James H. Martin (2008). Speech and Language Processing, 2nd edition. Pearson Prentice Hall.
  • Mohamed Zakaria Kurdi (2016). Natural Language Processing and Computational Linguistics: Speech, Morphology, and Syntax, Volume 1. ISTE-Wiley.
  • Mohamed Zakaria Kurdi (2017). Natural Language Processing and Computational Linguistics: Semantics, Discourse, and Applications, Volume 2. ISTE-Wiley.

