=== Natural language processing ===
<!-- This is linked to in the introduction -->
[[Natural language processing]] (NLP)<ref>[[Natural language processing]] (NLP): {{Harvtxt|Russell|Norvig|2021|loc=chpt. 23–24}}, {{Harvtxt|Poole|Mackworth|Goebel|1998|pp=91–104}}, {{Harvtxt|Luger|Stubblefield|2004|pp=591–632}}</ref> allows programs to read, write and communicate in human languages such as [[English language|English]]. Specific problems include [[speech recognition]], [[speech synthesis]], [[machine translation]], [[information extraction]], [[information retrieval]] and [[question answering]].<ref>Subproblems of [[Natural language processing|NLP]]: {{Harvtxt|Russell|Norvig|2021|pp=849–850}}</ref>

Early work, based on [[Noam Chomsky]]'s [[generative grammar]] and [[semantic network]]s, had difficulty with [[word-sense disambiguation]]{{Efn|See {{Section link|AI winter|Machine translation and the ALPAC report of 1966}}}} unless restricted to small domains called "[[blocks world|micro-worlds]]" (due to the common sense knowledge problem<ref name="Breadth of commonsense knowledge"/>). [[Margaret Masterman]] believed that it was meaning and not grammar that was the key to understanding languages, and that [[thesauri]] and not dictionaries should be the basis of computational language structure.

Modern deep learning techniques for NLP include [[word embedding]] (representing words, typically as [[Vector space|vectors]] encoding their meaning),{{Sfnp|Russell|Norvig|2021|pp=856–858}} [[transformer (machine learning model)|transformer]]s (a deep learning architecture using an [[Attention (machine learning)|attention]] mechanism),{{Sfnp|Dickson|2022}} and others.<ref>Modern statistical and deep learning approaches to [[Natural language processing|NLP]]: {{Harvtxt|Russell|Norvig|2021|loc=chpt. 24}}, {{Harvtxt|Cambria|White|2014}}</ref> In 2019, [[generative pre-trained transformer]] (or "GPT") language models began to generate coherent text,{{Sfnp|Vincent|2019}} and by 2023 these models were able to achieve human-level scores on the [[bar exam]], the [[SAT]], the [[GRE]], and many other real-world tests.{{Sfnp|Russell|Norvig|2021|pp=875–878}}{{Sfnp|Bushwick|2023}}
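The attention mechanism mentioned above can be illustrated with a minimal sketch. The following Python example (NumPy only; the token embeddings and dimensions are toy values chosen for illustration, not taken from any cited source) computes scaled dot-product self-attention, the core operation inside a transformer layer: each output vector is a weighted average of the value vectors, with the weights given by a softmax over query–key similarities.

<syntaxhighlight lang="python">
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.

    Q, K, V: arrays of shape (sequence_length, d_k) holding the
    query, key and value vectors. Returns one context vector per query.
    """
    d_k = Q.shape[-1]
    # Similarity of every query to every key, scaled by sqrt(d_k)
    # to keep the softmax in a numerically well-behaved range.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the keys turns raw scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted average of the value vectors.
    return weights @ V

# Toy self-attention: 3 tokens with 4-dimensional embeddings
# (hypothetical values, for illustration only).
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))                   # token embeddings
out = scaled_dot_product_attention(x, x, x)   # Q = K = V = x
print(out.shape)                              # (3, 4)
</syntaxhighlight>

In a real transformer the queries, keys and values are learned linear projections of the token embeddings and many such attention "heads" run in parallel; the sketch above omits those details to show only the weighting mechanism itself.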