{{more footnotes|date=September 2009}}

'''Reason maintenance'''<ref name="insNouts">Doyle, J. (1983). The ins and outs of reason maintenance. In ''Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI'83)'', Volume 1, pages 349–351. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.</ref><ref name="originalTR">Doyle, J. (1978). Truth maintenance systems for problem solving. Tech. Rep. AI-TR-419, Department of Electrical Engineering and Computer Science, MIT.</ref> is a [[knowledge representation]] approach to the efficient handling of inferred information that is explicitly stored. Reason maintenance distinguishes between base facts, which can be [[Defeasible reasoning|defeated]], and derived facts. In this it differs from [[belief revision]], which, in its basic form, assumes that all facts are equally important.

Reason maintenance was originally developed as a technique for implementing problem solvers.<ref name="originalTR"/> It encompasses a variety of techniques that share a common architecture:<ref name="mcAllesterInterface">McAllester, D. A. (1990). Truth maintenance. In ''Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90)''.</ref> two components, a reasoner and a reason maintenance system, communicate with each other via an interface. The reasoner uses the reason maintenance system to record its inferences and the justifications of ("reasons" for) those inferences. The reasoner also informs the reason maintenance system which base facts (assumptions) are currently valid. The reason maintenance system uses this information to compute the truth values of the stored derived facts and to restore consistency if an inconsistency is derived.

A '''truth maintenance system''', or '''TMS''', is a [[knowledge representation]] method for representing both beliefs and their dependencies, together with an algorithm, the "truth maintenance algorithm", that manipulates and maintains the dependencies. The name ''truth maintenance'' refers to the ability of these systems to restore consistency.
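The division of labour between reasoner and reason maintenance system can be sketched in a few lines of Python. This is a minimal illustration, not a standard API: all class and method names here are hypothetical. The reasoner records justifications and declares which base facts currently hold; the reason maintenance system recomputes which derived facts hold whenever the assumptions change.

```python
class ReasonMaintenanceSystem:
    """Illustrative sketch: stores justifications recorded by a reasoner
    and recomputes derived beliefs from the current base facts."""

    def __init__(self):
        self.justifications = {}   # derived fact -> list of antecedent sets
        self.assumptions = set()   # currently valid base facts

    def record(self, fact, antecedents):
        """The reasoner records one justification ("reason") for a fact."""
        self.justifications.setdefault(fact, []).append(frozenset(antecedents))

    def set_assumptions(self, assumptions):
        """The reasoner declares which base facts are currently valid."""
        self.assumptions = set(assumptions)

    def holds(self, fact, _seen=frozenset()):
        """A fact holds if it is a valid base fact, or if every antecedent
        of at least one of its justifications holds."""
        if fact in self.assumptions:
            return True
        if fact in _seen:
            return False  # guard against cyclic justifications
        return any(all(self.holds(a, _seen | {fact}) for a in ants)
                   for ants in self.justifications.get(fact, []))
```

Retracting a base fact automatically retracts everything that depended on it, without the reasoner re-deriving anything.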
A truth maintenance system maintains consistency between previously believed knowledge and currently believed knowledge in the knowledge base (KB) through revision. If the currently believed statements contradict the knowledge in the KB, the KB is updated with the new knowledge. It may happen that the same data come to be believed again later, so that the previous knowledge is needed in the KB once more; if that knowledge was retained in the KB, it does not have to be re-derived. A TMS avoids such re-derivation by keeping track of contradictory data with the help of a dependency record. This record reflects the retractions and additions, which keeps the [[inference engine]] (IE) aware of its current belief set.

Each statement having at least one valid justification is made part of the current belief set. When a contradiction is found, the statement(s) responsible for the contradiction are identified and the records are updated accordingly. This process is called dependency-directed backtracking.

The TMS algorithm maintains the records in the form of a dependency network. Each node in the network is an entry in the KB (a premise, an antecedent, an inference rule, etc.). Each arc of the network represents an inference step through which the node was derived. A premise is a fundamental belief that is assumed to be true; premises need no justification. The set of premises is the basis from which justifications for all other nodes are derived.

There are two types of justification for a node:
# Support list (SL)
# Conditional proof (CP)

Many kinds of truth maintenance systems exist. Two major types are single-context and multi-context truth maintenance. In single-context systems, consistency is maintained among all facts in memory (the KB), corresponding to the notion of consistency found in [[classical logic]].
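The dependency network and dependency-directed backtracking described above can be sketched as follows. This is a deliberately simplified, hypothetical implementation (the names <code>TMS</code>, <code>CONTRADICTION</code>, and the greedy retraction policy are illustrative assumptions, not part of any standard TMS): premises need no justification, other nodes are believed exactly when some support-list justification has all antecedents believed, and when a contradiction becomes believed, the assumptions supporting it are traced through the network and one is retracted.

```python
class TMS:
    """Simplified single-context TMS sketch with support-list justifications."""

    def __init__(self):
        self.premises = set()      # fundamental beliefs, never retracted
        self.assumptions = set()   # defeasible base facts
        self.sl = {}               # node -> list of support lists (antecedent tuples)

    def add_premise(self, node):
        self.premises.add(node)

    def assume(self, node):
        self.assumptions.add(node)

    def justify(self, node, antecedents):
        """Record a support-list (SL) justification for a node."""
        self.sl.setdefault(node, []).append(tuple(antecedents))

    def believed(self, node, _seen=frozenset()):
        """A node is believed if it is a premise or current assumption,
        or if all antecedents of some justification are believed."""
        if node in self.premises or node in self.assumptions:
            return True
        if node in _seen:
            return False  # guard against cyclic justifications
        return any(all(self.believed(a, _seen | {node}) for a in ants)
                   for ants in self.sl.get(node, []))

    def culprits(self, node):
        """Trace the dependency network to find the assumptions
        underlying a believed node (candidates for retraction)."""
        if node in self.assumptions:
            return {node}
        found = set()
        for ants in self.sl.get(node, []):
            if all(self.believed(a) for a in ants):
                for a in ants:
                    found |= self.culprits(a)
        return found

    def restore_consistency(self):
        """Greatly simplified dependency-directed backtracking: while the
        contradiction node is believed, retract one supporting assumption."""
        while self.believed("CONTRADICTION"):
            c = self.culprits("CONTRADICTION")
            if not c:
                raise ValueError("contradiction rests on premises only")
            self.assumptions.discard(sorted(c)[0])
```

A real TMS retracts culprits more carefully (recording "nogood" sets, choosing which assumption to abandon), but the sketch shows the essential use of the dependency records: the contradiction is traced back to the assumptions that produced it, rather than to every belief in the KB.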
Multi-context systems support [[paraconsistency]] by allowing consistency to be maintained relative to a subset of the facts in memory, a context, according to the history of logical inference. This is achieved by tagging each fact or deduction with its logical history. Multi-agent truth maintenance systems perform truth maintenance across multiple memories, often located on different machines. De Kleer's assumption-based truth maintenance system (ATMS, 1986) was used in systems based upon [[AI winter#The fall of expert systems|KEE]] on the [[Lisp Machine]]. The first multi-agent TMS, a multi-context system, was created by Mason and Johnson. Bridgeland and Huhns created the first single-context multi-agent system.

==See also==
* [[Artificial intelligence]]
* [[Belief revision]]
* [[Knowledge acquisition]]
* [[Knowledge representation]]
* [[Neurath's boat]]

==References==
<references />

==Other references==
* Bridgeland, D. M. and Huhns, M. N. (1990). Distributed truth maintenance. In ''Proceedings of AAAI-90: Eighth National Conference on Artificial Intelligence''.
* J. de Kleer (1986). An assumption-based TMS. ''Artificial Intelligence'', 28:127–162.
* J. Doyle (1979). A truth maintenance system. ''Artificial Intelligence'', 12(3):251–272.
* U. Junker and K. Konolige (1990). Computing the extensions of autoepistemic and default logics with a truth maintenance system. In ''Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI'90)'', pages 278–283. [[MIT Press]].
* C. Mason and R. Johnson (1989). DATMS: A framework for assumption-based reasoning. In ''Distributed Artificial Intelligence'', Vol. 2. [[Morgan Kaufmann Publishers]], Inc.
* D. A. McAllester (1978). A three-valued truth maintenance system. AI Memo 473, Artificial Intelligence Laboratory, [[Massachusetts Institute of Technology]].
* G. M. Provan (1988). A complexity analysis of assumption-based truth maintenance systems. In B. Smith and G. Kelleher, editors, ''Reason Maintenance Systems and their Applications'', pages 98–113. Ellis Horwood, New York.
* G. M. Provan (1990). The computational complexity of multiple-context truth maintenance systems. In ''Proceedings of the Ninth European Conference on Artificial Intelligence (ECAI'90)'', pages 522–527.
* R. Reiter and J. de Kleer (1987). Foundations of assumption-based truth maintenance systems: Preliminary report. In ''Proceedings of the Sixth National Conference on Artificial Intelligence (AAAI'87)'', pages 183–188. [https://web.archive.org/web/20050424041254/http://www2.parc.com/spl/members/dekleer/Publications/Foundations%20of%20Assumption-Based%20Truth%20Maintenance%20Systems.pdf PDF]

==External links==
* [https://scholar.google.com/scholar?q=Truth+maintenance+system&ie=UTF-8&oe=UTF-8&hl=en&btnG=Search Google Scholar on TMSs]
* [http://plato.stanford.edu/entries/logic-ai/#3.2.1 Belief Revision and TMSs] at the [[Stanford Encyclopedia of Philosophy]]

[[Category:Belief revision]]
[[Category:Knowledge representation]]
[[Category:Information systems]]