=== The Frame Problem: knowledge representation challenges for first-order logic ===
{{Main|Philosophy of artificial intelligence}}

Limitations were discovered in using simple first-order logic to reason about dynamic domains. Problems arose both in enumerating the preconditions for an action to succeed and in providing axioms for what did not change after an action was performed.

McCarthy and Hayes introduced the [[Frame problem|Frame Problem]] in 1969 in the paper "Some Philosophical Problems from the Standpoint of Artificial Intelligence".{{sfn|McCarthy|Hayes|1969}} A simple example occurs in "proving that one person could get into conversation with another": for the deduction to succeed, an axiom would be required asserting that "if a person has a telephone he still has it after looking up a number in the telephone book". Similar axioms would be required for other domain actions to specify what ''did not'' change (a sketch of such a frame axiom appears below). A related problem, the [[Qualification problem|Qualification Problem]], occurs in trying to enumerate the ''preconditions'' for an action to succeed. An infinite number of pathological conditions can be imagined; for example, a banana in a tailpipe could prevent a car from operating correctly.

McCarthy's approach to the frame problem was [[Circumscription (logic)|circumscription]], a kind of [[non-monotonic logic]] in which deductions could be made from actions that need only specify what would change, without having to explicitly specify everything that would not change. Other [[non-monotonic logic]]s provided [[Reason maintenance|truth maintenance systems]] that revised beliefs leading to contradictions. Other ways of handling more open-ended domains included [[Probabilistic logic|probabilistic reasoning]] systems and machine learning to learn new concepts and rules. McCarthy's [[Advice taker|Advice Taker]] can be viewed as an inspiration here, as it could incorporate new knowledge provided by a human in the form of assertions or rules. For example, experimental symbolic machine learning systems explored the ability to take high-level natural language advice and interpret it into domain-specific actionable rules.

Similar to the problems in handling dynamic domains, common-sense reasoning is also difficult to capture in formal reasoning. Examples of common-sense reasoning include implicit reasoning about how people think or general knowledge of day-to-day events, objects, and living creatures. This kind of knowledge is taken for granted and not viewed as noteworthy. Common-sense reasoning is an open area of research and challenging both for symbolic systems (e.g., Cyc has attempted to capture key parts of this knowledge over more than a decade) and neural systems (e.g., [[self-driving car]]s that do not know not to drive into cones or not to hit pedestrians walking a bicycle). McCarthy viewed his [[Advice taker|Advice Taker]] as having common sense, but his definition of common sense was different from the one above.{{sfn|McCarthy|1959}} He defined a program as having common sense "''if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows''."
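For illustration, in the [[situation calculus]] of McCarthy and Hayes, the telephone example above might be rendered as a frame axiom along the following lines (the predicate and action names here are illustrative rather than quoted from the original paper):

<math>\mathit{has}(p, \mathit{Telephone}, s) \rightarrow \mathit{has}(p, \mathit{Telephone}, \mathit{result}(\mathit{LookupNumber}, s))</math>

That is, if a person ''p'' has a telephone in situation ''s'', then ''p'' still has it in the situation that results from performing the lookup action. A separate axiom of this form is needed for every combination of action and property that the action leaves unchanged, which is the combinatorial burden the frame problem names.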
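The effect of circumscription can be sketched with the standard "abnormality" example. Given the default rule

<math>\forall x\, (\mathit{Bird}(x) \wedge \neg \mathit{Ab}(x) \rightarrow \mathit{Flies}(x))</math>

and the fact <math>\mathit{Bird}(\mathit{Tweety})</math>, circumscribing (minimizing the extension of) the abnormality predicate ''Ab'' makes <math>\mathit{Flies}(\mathit{Tweety})</math> a consequence. Later adding the fact that Tweety is a penguin enlarges ''Ab'' and retracts the conclusion, illustrating the non-monotonic character of the logic.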