{{Short description|Method of forming inferences}}
{{distinguish|Backward chaining (applied behavior analysis)|Back-chaining}}
'''Backward chaining''' (or '''backward reasoning''') is an [[inference]] method described colloquially as working backward from the goal. It is used in [[automated theorem prover]]s, [[inference engine]]s, [[proof assistant]]s, and other [[artificial intelligence]] applications.<ref>{{cite book|last=Feigenbaum|first=Edward|title=The Rise of the Expert Company|url=https://archive.org/details/riseofexpertcomp00feig|url-access=registration|year=1988|publisher=Times Books|isbn=0-8129-1731-6|page=[https://archive.org/details/riseofexpertcomp00feig/page/317 317]}}</ref>

In [[game theory]], researchers apply it to (simpler) [[subgame]]s to find a solution to the game, in a process called ''[[backward induction]]''. In chess, it is called [[retrograde analysis]], and it is used to generate [[Endgame tablebase|tablebases]] for [[chess endgame]]s in [[computer chess]].

Backward chaining is implemented in [[logic programming]] by [[SLD resolution]]. It is one of the two most commonly used methods of [[reasoning]] with [[inference rule]]s and [[Logical consequence|logical implications]] – the other is [[forward chaining]]. Both are based on the [[modus ponens]] inference rule. Backward chaining systems usually employ a [[depth-first search]] strategy, e.g. [[Prolog]].<ref name="CheinMugnier2009">{{cite book|author1=Michel Chein|author2=Marie-Laure Mugnier|title=Graph-based knowledge representation: computational foundations of conceptual graphs|url=https://books.google.com/books?id=iz3y6WK2EMEC&pg=PA297|year=2009|publisher=Springer|isbn=978-1-84800-285-2|page=297}}</ref>

== Usage ==

Backward chaining starts with a list of [[goal]]s (or a [[hypothesis]]) and works backwards from the [[consequent]] to the [[antecedent (logic)|antecedent]] to see if any [[data]] supports any of these consequents.<ref name="Norwig Definition"/> An [[inference engine]] using backward chaining would search the [[inference]] rules until it finds one with a consequent ('''Then''' clause) that matches a desired goal. If the antecedent ('''If''' clause) of that rule is not known to be true, then it is added to the list of goals (for one's goal to be confirmed, one must also provide data that confirms this new rule).

For example, suppose a new pet, Fritz, is delivered in an opaque box along with two facts about Fritz:
* Fritz croaks
* Fritz eats flies

The goal is to decide whether Fritz is green, based on a [[rule base]] containing the following four rules:

[[File:Backward Chaining Frog Color Example.png|upright=0.56|thumb|alt=An Example of Backward Chaining.|An example of backward chaining]]

# '''If''' X croaks and X eats flies – '''Then''' X is a frog
# '''If''' X chirps and X sings – '''Then''' X is a canary
# '''If''' X is a frog – '''Then''' X is green
# '''If''' X is a canary – '''Then''' X is yellow

With backward reasoning, an inference engine can determine whether Fritz is green in four steps. To start, the query is phrased as a goal assertion that is to be proven: "Fritz is green".

1. Fritz is substituted for X in rule #3 to see if its consequent matches the goal, so rule #3 becomes:

 '''If''' Fritz is a frog – '''Then''' Fritz is green

Since the consequent matches the goal ("Fritz is green"), the rules engine now needs to see if the antecedent ("Fritz is a frog") can be proven. The antecedent, therefore, becomes the new goal:

 Fritz is a frog

2. Again substituting Fritz for X, rule #1 becomes:

 '''If''' Fritz croaks and Fritz eats flies – '''Then''' Fritz is a frog

Since the consequent matches the current goal ("Fritz is a frog"), the inference engine now needs to see if the antecedent ("Fritz croaks and Fritz eats flies") can be proven. The antecedent, therefore, becomes the new goal:

 Fritz croaks and Fritz eats flies

3. Since this goal is a conjunction of two statements, the inference engine breaks it into two sub-goals, both of which must be proven:

 Fritz croaks
 Fritz eats flies

4. The inference engine sees that both of these sub-goals were given as initial facts. Therefore, the conjunction is true:

 Fritz croaks and Fritz eats flies

therefore the antecedent of rule #1 is true and the consequent must be true:

 Fritz is a frog

therefore the antecedent of rule #3 is true and the consequent must be true:

 Fritz is green

This derivation therefore allows the inference engine to prove that Fritz is green. Rules #2 and #4 were not used. (A short program reproducing this derivation is sketched at the end of this section.)

Note that the goals always match the affirmed versions of the consequents of implications (and not the negated versions, as in [[modus tollens]]); even then, their antecedents are taken as the new goals (and not the conclusions, as in [[affirming the consequent]]), which must ultimately match known facts (usually defined as consequents whose antecedents are always true). Thus, the inference rule used is [[modus ponens]].

Because the list of goals determines which rules are selected and used, this method is called [[goal-oriented|goal-driven]], in contrast to [[Data science|data-driven]] [[forward chaining|forward-chaining]] inference. The backward chaining approach is often employed by [[expert systems]].

Programming languages such as [[Prolog]], [[Knowledge Machine]] and [[ECLiPSe]] support backward chaining within their inference engines.<ref name="Programming Languages"/>
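The derivation above can be reproduced by a short program. The following is a minimal, illustrative sketch in Python, not drawn from the cited sources: because only one individual (Fritz) is involved, the rules are treated propositionally, and the names (<code>RULES</code>, <code>FACTS</code>, <code>prove</code>) are made up for this example.

<syntaxhighlight lang="python">
# Illustrative backward-chaining sketch (names and encoding are assumptions,
# chosen to mirror the Fritz example; rules are propositional, no variables).

# Each rule pairs its antecedents (If clause) with its consequent (Then clause).
RULES = [
    (["croaks", "eats flies"], "is a frog"),    # rule 1
    (["chirps", "sings"],      "is a canary"),  # rule 2
    (["is a frog"],            "is green"),     # rule 3
    (["is a canary"],          "is yellow"),    # rule 4
]

FACTS = {"croaks", "eats flies"}  # the two facts given about Fritz


def prove(goal: str) -> bool:
    """Goal-driven, depth-first search: a goal holds if it is a known fact,
    or if some rule's consequent matches it and all of that rule's
    antecedents can in turn be proven (modus ponens applied backwards)."""
    if goal in FACTS:
        return True
    return any(
        all(prove(antecedent) for antecedent in antecedents)
        for antecedents, consequent in RULES
        if consequent == goal
    )


print(prove("is green"))   # True  – proven via rules 3 and 1, as in the derivation above
print(prove("is yellow"))  # False – no facts support the antecedents of rule 2
</syntaxhighlight>

A full inference engine generalizes this sketch: Prolog's SLD resolution, for example, handles variables through unification and must also cope with rule sets that would send a naive depth-first search into an infinite loop.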
==See also==
*[[Backtracking]]
*[[Backward induction]]
*[[Forward chaining]]
*[[Opportunistic reasoning]]

==References==
<references>
<ref name="Norwig Definition">
Definition of backward chaining as a depth-first search method:
* {{Harvnb|Russell|Norvig|2009|p=337}}
</ref>
<ref name="Programming Languages">
Languages that support backward chaining:
* {{Harvnb|Russell|Norvig|2009|p=339}}
</ref>
</references>

===Sources===
* {{cite book |last1=Russell |first1=Stuart |author1link=Stuart J. Russell |last2=Norvig |first2=Peter |author2link=Peter Norvig |title=Artificial Intelligence: A Modern Approach |date=2009 |publisher=Prentice Hall |isbn=978-0-13-604259-4 |url=https://books.google.com/books?id=8jZBksh-bUMC}}

==External links==
* [http://www.j-paine.org/students/lectures/lect3/node12.html Backward chaining example]

{{Automated reasoning}}

[[Category:Expert systems]]
[[Category:Logic in computer science]]
[[Category:Reasoning]]
[[Category:Automated reasoning]]