==History==
The program [[STUDENT (computer program)|STUDENT]], written in 1964 by [[Daniel Bobrow]] for his PhD dissertation at [[MIT]], is one of the earliest known attempts at NLU by a computer.<ref>[[American Association for Artificial Intelligence]] ''Brief History of AI'' [http://www.aaai.org/AITopics/pmwiki/pmwiki.php/AITopics/BriefHistory]</ref><ref>[[Daniel Bobrow]]'s PhD Thesis [http://hdl.handle.net/1721.1/5922 Natural Language Input for a Computer Problem Solving System].</ref><ref>''Machines who think'' by Pamela McCorduck 2004 {{ISBN|1-56881-205-1}} page 286</ref><ref>Russell, Stuart J.; Norvig, Peter (2003), ''Artificial Intelligence: A Modern Approach'', Prentice Hall, {{ISBN|0-13-790395-2}}, http://aima.cs.berkeley.edu/, p. 19</ref><ref>''Computer Science Logo Style: Beyond programming'' by Brian Harvey 1997 {{ISBN|0-262-58150-7}} page 278</ref> Eight years after [[John McCarthy (computer scientist)|John McCarthy]] coined the term [[artificial intelligence]], Bobrow's dissertation (titled ''Natural Language Input for a Computer Problem Solving System'') showed how a computer could understand simple natural language input to solve algebra word problems.

A year later, in 1965, [[Joseph Weizenbaum]] at MIT wrote [[ELIZA]], an interactive program that carried on a dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases (see the sketch at the end of this section), and Weizenbaum sidestepped the problem of giving the program a [[database]] of real-world knowledge or a rich [[lexicon]]. Yet ELIZA gained surprising popularity as a toy project, and it can be seen as a very early precursor to current commercial systems such as those used by [[Ask.com]].<ref>Weizenbaum, Joseph (1976). ''Computer power and human reason: from judgment to calculation'' W. H. Freeman and Company. {{ISBN|0-7167-0463-3}} pages 188–189</ref>

In 1969, [[Roger Schank]] at [[Stanford University]] introduced the [[conceptual dependency theory]] for NLU.<ref>[[Roger Schank]], 1969, ''A conceptual dependency parser for natural language'' Proceedings of the 1969 conference on Computational linguistics, Sång-Säby, Sweden, pages 1–3</ref> This model, partially influenced by the work of [[Sydney Lamb]], was used extensively by Schank's students at [[Yale University]], such as [[Robert Wilensky]], [[Wendy Lehnert]], and [[Janet Kolodner]].

In 1970, [[William Aaron Woods|William A. Woods]] introduced the [[augmented transition network]] (ATN) to represent natural language input.<ref>Woods, William A (1970). "Transition Network Grammars for Natural Language Analysis". Communications of the ACM 13 (10): 591–606 [http://www.eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=ED037733&ERICExtSearch_SearchType_0=no&accno=ED037733]</ref> Instead of ''[[phrase structure rules]]'', ATNs used an equivalent set of [[finite-state automata]] that were called recursively. ATNs, and their more general form called "generalized ATNs", continued to be used for a number of years.

In 1971, [[Terry Winograd]] finished writing [[SHRDLU]] for his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks well enough to direct a robotic arm to move items.
The successful demonstration of SHRDLU provided significant momentum for continued research in the field.<ref>''Artificial intelligence: critical concepts'', Volume 1 by Ronald Chrisley, Sander Begeer 2000 {{ISBN|0-415-19332-X}} page 89</ref><ref>Terry Winograd's SHRDLU page at Stanford [http://hci.stanford.edu/~winograd/shrdlu/ SHRDLU]</ref> Winograd continued to be a major influence in the field with the publication of his book ''Language as a Cognitive Process''.<ref>Winograd, Terry (1983), ''Language as a Cognitive Process'', Addison–Wesley, Reading, MA.</ref> At Stanford, Winograd would later advise [[Larry Page]], who co-founded [[Google]].

In the 1970s and 1980s, the natural language processing group at [[SRI International]] continued research and development in the field. A number of commercial efforts based on this research were undertaken; ''e.g.'', in 1982 [[Gary Hendrix]] formed [[Symantec Corporation]], originally as a company for developing a natural language interface for database queries on personal computers. However, with the advent of mouse-driven [[graphical user interface]]s, Symantec changed direction. A number of other commercial efforts were started around the same time, ''e.g.'', by Larry R. Harris at the Artificial Intelligence Corporation and by Roger Schank and his students at Cognitive Systems Corp.<ref>Larry R. Harris, ''Research at the Artificial Intelligence corp.'' ACM SIGART Bulletin, issue 79, January 1982 [http://portal.acm.org/citation.cfm?id=1056663.1056670]</ref><ref>''Inside case-based reasoning'' by Christopher K. Riesbeck, Roger C. Schank 1989 {{ISBN|0-89859-767-6}} page xiii</ref> In 1983, Michael Dyer developed the BORIS system at Yale, which bore similarities to the work of Roger Schank and W. G. Lehnert.<ref>''In Depth Understanding: A Model of Integrated Process for Narrative Comprehension''. Michael G. Dyer. MIT Press. {{ISBN|0-262-04073-5}}</ref>

The third millennium saw the introduction of systems using machine learning for text classification, such as IBM [[Watson (computer)|Watson]]. However, experts debate how much "understanding" such systems demonstrate; according to [[John Searle]], for example, Watson did not even understand the questions.<ref>{{Cite news | url=https://www.wsj.com/articles/SB10001424052748703407304576154313126987674 | title=Watson Doesn't Know It Won on 'Jeopardy!'| newspaper=Wall Street Journal| date=23 February 2011| last1=Searle| first1=John}}</ref> [[John Ball (cognitive scientist)|John Ball]], a cognitive scientist and the inventor of [[Patom Theory]], supports this assessment. Natural language processing has made inroads into applications that support human productivity in service and e-commerce, but this has largely been made possible by narrowing the scope of the application.
A human language offers thousands of ways to phrase a single request, a variability that still defies conventional natural language processing.{{Citation needed|date=February 2024}} According to Wibe Wagemans, "To have a meaningful conversation with machines is only possible when we match every word to the correct meaning based on the meanings of the other words in the sentence – just like a 3-year-old does without guesswork."<ref>{{Cite web |last=Brandon |first=John |date=2016-07-12 |title=What Natural Language Understanding tech means for chatbots |url=https://venturebeat.com/business/what-natural-language-understanding-tech-means-for-chatbots/ |access-date=2024-02-29 |website=VentureBeat |language=en-US}}</ref>
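
The keyword-substitution mechanism described above for ELIZA can be made concrete with a minimal sketch. The patterns, reflections, and canned phrases below are hypothetical stand-ins rather than Weizenbaum's original script; only the mechanism is the point: scan the input for a keyword pattern, reflect first-person words, and substitute the matched fragment into a template.

<syntaxhighlight lang="python">
import random
import re

# Hypothetical, much-simplified rules in the spirit of ELIZA's
# keyword-and-template approach; not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]
DEFAULT_REPLIES = ["Please go on.", "Can you elaborate on that?"]

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned phrase with the first matching keyword fragment substituted in."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    # No keyword matched: fall back to a content-free prompt, as ELIZA did.
    return random.choice(DEFAULT_REPLIES)

print(respond("I am worried about my exam"))
# e.g. "Why do you say you are worried about your exam?"
</syntaxhighlight>

Because every reply is produced by shallow pattern matching of this kind, such a program needs neither real-world knowledge nor a rich lexicon, which is precisely the problem Weizenbaum sidestepped.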