==Scope and context==
The umbrella term "natural language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to [[robot]]s, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages. Many real-world applications fall between the two extremes; for instance, [[Document classification|text classification]] for the automatic analysis of emails and their routing to a suitable department in a corporation does not require an in-depth understanding of the text,<ref>''An approach to hierarchical email categorization'' by Peifeng Li et al. in ''Natural language processing and information systems'' edited by Zoubida Kedad, Nadira Lammari 2007 {{ISBN|3-540-73350-7}}</ref> but needs to deal with a much larger vocabulary and more diverse syntax than the management of simple queries to database tables with fixed schemata.

Over the years, various attempts to process natural language or ''English-like'' sentences presented to computers have been made, with varying degrees of complexity. Some attempts have not resulted in systems with deep understanding, but have helped overall system usability. For example, [[Wayne Ratliff]] originally developed the ''Vulcan'' program with an English-like syntax to mimic the English-speaking computer in [[Star Trek]]. Vulcan later became the [[dBase]] system, whose easy-to-use syntax effectively launched the personal computer database industry.<ref>[[InfoWorld]], Nov 13, 1989, page 144</ref><ref>[[InfoWorld]], April 19, 1984, page 71</ref> Systems with an easy-to-use or English-like syntax are, however, quite distinct from systems that use a rich [[lexicon]] and include an internal [[knowledge representation and reasoning|representation]] (often as [[first order logic]]) of the semantics of natural language sentences.
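As an illustration of the kind of first-order-logic representation mentioned above, a sentence such as "Every student reads some book" is commonly translated as (this is a standard textbook-style example, not the output of any particular system):

```latex
% "Every student reads some book" rendered in first-order logic:
\forall x\, \bigl( \mathit{Student}(x) \rightarrow \exists y\, ( \mathit{Book}(y) \wedge \mathit{Reads}(x, y) ) \bigr)
```

A system with such an internal representation can then apply logical inference to the sentence's meaning, rather than merely matching its surface form.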
Hence the breadth and depth of "understanding" aimed at by a system determine both the complexity of the system (and the implied challenges) and the types of applications it can deal with. The "breadth" of a system is measured by the sizes of its vocabulary and grammar. The "depth" is measured by the degree to which its understanding approximates that of a fluent native speaker. At the narrowest and shallowest, ''English-like'' command interpreters require minimal complexity, but have a small range of applications. Narrow but deep systems explore and model mechanisms of understanding,<ref>''Building Working Models of Full Natural-Language Understanding in Limited Pragmatic Domains'' by James Mason 2010 [http://www.yorku.ca/jmason/UnderstandingEnglishInLimitedPragmaticDomains.html]</ref> but they still have limited application. Systems that attempt to understand the contents of a document such as a news release beyond simple keyword matching and to judge its suitability for a user are broader and require significant complexity,<ref>''Mining the Web: discovering knowledge from hypertext data'' by Soumen Chakrabarti 2002 {{ISBN|1-55860-754-4}} page 289</ref> but they are still somewhat shallow. Systems that are both very broad and very deep are beyond the current state of the art.
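The shallow, keyword-matching end of this spectrum can be sketched in a few lines: an email router that scores each department by counting keyword hits, with no syntactic or semantic analysis at all. The department names and keyword lists below are hypothetical, chosen only for illustration:

```python
# Minimal sketch of shallow, keyword-based email routing: no deep
# understanding, just counting keyword hits per department.
# The departments and keyword sets are hypothetical examples.

DEPARTMENT_KEYWORDS = {
    "billing": {"invoice", "payment", "refund", "charge"},
    "support": {"error", "crash", "password", "login"},
    "sales": {"pricing", "quote", "demo", "purchase"},
}

def route_email(text: str) -> str:
    """Return the department whose keywords appear most often in the text."""
    words = [word.strip(".,!?") for word in text.lower().split()]
    scores = {
        dept: sum(word in keywords for word in words)
        for dept, keywords in DEPARTMENT_KEYWORDS.items()
    }
    # Pick the highest-scoring department; ties resolve by dictionary order.
    return max(scores, key=scores.get)

print(route_email("I was charged twice, please issue a refund for my invoice."))
```

Such a router handles a large vocabulary trivially (breadth), but its "understanding" is minimal (depth): it would misroute any email whose intent is not signaled by the listed keywords.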