== Techniques and contributions ==

This section provides an overview of techniques and contributions in an overall context leading to many other, more detailed articles in Wikipedia. Sections on [[#Machine_learning|Machine Learning]] and [[#Uncertain_reasoning|Uncertain Reasoning]] are covered earlier in the [[#A short history|history section]].

=== AI programming languages ===

The key AI programming language in the US during the last symbolic AI boom period was [[LISP (programming language)|LISP]]. LISP is the second oldest programming language after [[FORTRAN]] and was created in 1958 by [[John McCarthy (computer scientist)|John McCarthy]]. LISP provided the first [[read-eval-print loop]] to support rapid program development. Compiled functions could be freely mixed with interpreted functions. Program tracing, stepping, and breakpoints were also provided, along with the ability to change values or functions and continue from breakpoints or errors. It had the first [[Self-hosting (compilers)|self-hosting compiler]], meaning that the compiler itself was originally written in LISP and then ran interpretively to compile the compiler code.

Other key innovations pioneered by LISP that have spread to other programming languages include:

* [[Garbage collection (computer science)|Garbage collection]]
* [[Dynamic typing]]
* [[Higher-order function]]s
* [[Recursion]]
* [[Conditional (computer programming)|Conditionals]]

Programs were themselves data structures that other programs could operate on, allowing the easy definition of higher-level languages.

In contrast to the US, in Europe the key AI programming language during that same period was [[Prolog]]. Prolog provided a built-in store of facts and clauses that could be queried by a [[read-eval-print loop]]. The store could act as a knowledge base and the clauses could act as rules or a restricted form of logic. As a subset of first-order logic, Prolog was based on [[Horn clauses]] with a [[closed-world assumption]]—any facts not known were considered false—and a [[unique name assumption]] for primitive terms—e.g., the identifier barack_obama was considered to refer to exactly one object. [[Backtracking]] and [[Unification (computer science)|unification]] are built into Prolog; a minimal sketch of these two mechanisms appears below. [[Alain Colmerauer]] and Philippe Roussel are credited as the inventors of Prolog. Prolog is a form of logic programming, which was invented by [[Robert Kowalski]]. Its history was also influenced by [[Carl Hewitt]]'s [[PLANNER]], an assertional database with pattern-directed invocation of methods. For more detail see the [[Planner (programming language)#The genesis of Prolog|section on the origins of Prolog in the PLANNER article]].

Prolog is also a kind of [[declarative programming]]. The logic clauses that describe programs are directly interpreted to run the programs specified. No explicit series of actions is required, as is the case with [[imperative programming]] languages.
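As a rough illustration of the unification and backtracking described above, the following is a minimal Python sketch, not how any actual Prolog engine is implemented; the fact store, the uppercase-variable convention, and names such as <code>unify</code> and <code>query</code> are invented for this example.

<syntaxhighlight lang="python">
# Minimal sketch of Prolog-style unification and backtracking.
# Illustrative only -- real Prolog engines are far more elaborate.

# Terms are tuples; strings starting with an uppercase letter are variables.
def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def unify(a, b, subst):
    """Try to unify terms a and b under substitution subst.
    Returns an extended substitution dict, or None on failure."""
    if subst is None:
        return None
    if is_var(a):
        a = subst.get(a, a)
    if is_var(b):
        b = subst.get(b, b)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # clash of two distinct constants

# A tiny fact store, analogous to the Prolog facts parent(tom, bob). etc.
facts = [
    ("parent", "tom", "bob"),
    ("parent", "bob", "ann"),
]

def query(goal):
    """Backtracking search: yield every substitution that makes goal true."""
    for fact in facts:
        subst = unify(goal, fact, {})
        if subst is not None:
            yield subst  # after each answer, the loop "backtracks" to the next fact

# Who are Tom's children?  X is a variable by the uppercase convention above.
for answer in query(("parent", "tom", "X")):
    print(answer)   # {'X': 'bob'}
</syntaxhighlight>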
Japan championed Prolog for its [[Fifth Generation Project]], intending to build special hardware for high performance. Similarly, [[LISP machines]] were built to run LISP, but as the second AI boom turned to bust these companies could not compete with new workstations that could now run LISP or Prolog natively at comparable speeds. See the [[#A short history|history section]] for more detail.

[[Smalltalk]] was another influential AI programming language. For example, it introduced [[metaclasses]] and, along with [[Flavors (programming language)|Flavors]] and [[CommonLoops]], influenced the [[Common Lisp Object System]] ([[CLOS]]), which is now part of [[Common Lisp]], the current standard Lisp dialect. [[CLOS]] is a Lisp-based object-oriented system that allows [[multiple inheritance]], in addition to incremental extensions to both classes and metaclasses, thus providing a run-time [[meta-object protocol]].<ref name="meta-object protocol">{{Cite book| edition = 1st | publisher = The MIT Press| isbn = 978-0-262-61074-2| last1 = Kiczales| first1 = Gregor| last2 = Rivieres| first2 = Jim des| last3 = Bobrow| first3 = Daniel G.| title = The Art of the Metaobject Protocol| location = Cambridge, Mass| date = 1991-07-30}}</ref>

For other AI programming languages see this [[list of programming languages for artificial intelligence]]. Currently, [[Python (programming language)|Python]], a [[multi-paradigm programming language]], is the most popular programming language, partly due to its extensive package library that supports [[data science]], natural language processing, and deep learning. Python includes a read-eval-print loop, functional elements such as [[higher-order function]]s, and [[object-oriented programming]] that includes metaclasses.

=== Search ===
{{Main|Combinatorial search}}
Search arises in many kinds of problem solving, including [[automated planning|planning]], [[constraint satisfaction]], and playing games such as [[checkers]], [[chess]], and [[go (game)|go]]. The best-known AI tree search algorithms are [[breadth-first search]], [[depth-first search]], [[A* search algorithm|A*]], and [[Monte Carlo tree search]]. Key search algorithms for [[Boolean satisfiability]] are [[WalkSAT]], [[conflict-driven clause learning]], and the [[DPLL algorithm]]. For adversarial search when playing games, [[alpha-beta pruning]], [[branch and bound]], and [[minimax]] were early contributions.
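To make one of these algorithms concrete, here is a minimal sketch of A* in Python; the toy graph, the zero heuristic, and names such as <code>a_star</code> are invented for illustration, and a practical implementation would add cycle handling and a domain-specific heuristic.

<syntaxhighlight lang="python">
import heapq

def a_star(graph, h, start, goal):
    """Minimal A*: graph maps node -> [(neighbor, edge_cost)];
    h(node) is an admissible heuristic estimate of remaining cost."""
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbor, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(frontier,
                               (g2 + h(neighbor), g2, neighbor, path + [neighbor]))
    return None  # no route exists

# A toy weighted graph.  The zero heuristic makes A* behave like
# uniform-cost search; an informed heuristic would focus the search.
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1), ("d", 5)], "c": [("d", 1)]}
print(a_star(graph, lambda n: 0, "a", "d"))   # (3, ['a', 'b', 'c', 'd'])
</syntaxhighlight>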
=== Knowledge representation and reasoning ===
{{Main|Knowledge representation and reasoning}}
Multiple different approaches to represent knowledge and then reason with those representations have been investigated. Below is a quick overview of approaches to knowledge representation and automated reasoning.

==== Knowledge representation ====
{{Main|Knowledge Representation}}
[[Semantic networks]], [[conceptual graphs]], [[Frame (artificial intelligence)|frames]], and [[formal logic|logic]] are all approaches to modeling knowledge such as domain knowledge, problem-solving knowledge, and the semantic meaning of language. [[Ontologies]] model key concepts and their relationships in a domain. Example ontologies are [[YAGO (database)|YAGO]], [[WordNet]], and [[Upper ontology#DOLCE|DOLCE]]. DOLCE is an example of an [[upper ontology]] that can be used for any domain, while WordNet is a lexical resource that can also be viewed as an [[upper ontology#WordNet|ontology]]. YAGO incorporates WordNet as part of its ontology, to align facts extracted from [[Wikipedia]] with WordNet [[synsets]]. The [[Disease Ontology]] is an example of a medical ontology currently in use.

[[Description logic]] is a logic for automated classification of ontologies and for detecting inconsistent classification data. [[Web Ontology Language|OWL]] is a language used to represent ontologies with [[description logic]]. [[Protégé (software)|Protégé]] is an ontology editor that can read in OWL ontologies and then check consistency with [[deductive classifier]]s such as HermiT.<ref name="HermiT">{{Cite journal| doi = 10.1613/jair.2811| issn = 1076-9757| volume = 36| pages = 165–228| last1 = Motik| first1 = Boris| last2 = Shearer| first2 = Rob| last3 = Horrocks| first3 = Ian| title = Hypertableau Reasoning for Description Logics| journal = Journal of Artificial Intelligence Research| date = 2009-10-28| arxiv = 1401.3485| s2cid = 190609}}</ref>

First-order logic is more general than description logic. The automated theorem provers discussed below can prove theorems in first-order logic. [[Horn clause]] logic is more restricted than first-order logic and is used in logic programming languages such as Prolog. Extensions to first-order logic include [[temporal logic]], to handle time; [[epistemic logic]], to reason about agent knowledge; [[modal logic]], to handle possibility and necessity; and [[probabilistic logic]]s, to handle logic and probability together.

==== Automatic theorem proving ====
{{Main|Automated theorem proving}}
Examples of automated theorem provers for first-order logic are:

* [[Prover9]]
* [[ACL2]]
* [[Vampire (theorem prover)|Vampire]]

[[Prover9]] can be used in conjunction with the [[Mace4]] [[Model checking|model checker]]. [[ACL2]] is a theorem prover that can handle proofs by induction and is a descendant of the Boyer–Moore Theorem Prover, also known as [[Nqthm]].

==== Reasoning in knowledge-based systems ====
{{Main|Reasoning system}}
Knowledge-based systems have an explicit [[knowledge base]], typically of rules, to enhance reusability across domains by separating procedural code and domain knowledge. A separate [[inference engine]] processes rules and adds, deletes, or modifies a knowledge store. [[Forward chaining]] inference engines are the most common, and are seen in [[CLIPS]] and [[OPS5]]; a minimal sketch of forward chaining appears below. [[Backward chaining]] occurs in Prolog, where a more restricted logical representation, [[Horn clause|Horn clauses]], is used. Pattern-matching, specifically [[unification (computer science)|unification]], is used in Prolog.

A more flexible kind of problem-solving occurs when a system reasons about what to do next, rather than simply choosing one of the available actions. This kind of meta-level reasoning is used in Soar and in the BB1 blackboard architecture. Cognitive architectures such as ACT-R may have additional capabilities, such as the ability to compile frequently used knowledge into higher-level [[Chunking (psychology)|chunks]].
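The following is a minimal sketch of forward chaining in Python, not the CLIPS or OPS5 implementation: rules whose premises are all present in working memory fire and assert their conclusions until a fixpoint is reached. The rule format and the example facts are invented for illustration.

<syntaxhighlight lang="python">
# Minimal forward-chaining sketch: rules are (premises, conclusion) pairs.
# Real production systems (CLIPS, OPS5) add efficient pattern matching
# (e.g., the Rete algorithm), conflict resolution, and retraction;
# none of that is modeled here.

rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "can_swim"}, "is_waterfowl"),
]

def forward_chain(facts, rules):
    """Keep firing rules until no rule adds a new fact (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: assert its conclusion
                changed = True
    return facts

working_memory = {"has_feathers", "lays_eggs", "can_swim"}
print(forward_chain(working_memory, rules))
# {'has_feathers', 'lays_eggs', 'can_swim', 'is_bird', 'is_waterfowl'}
</syntaxhighlight>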
==== Commonsense reasoning ====
{{Main|Commonsense reasoning}}
[[Marvin Minsky]] first proposed frames as a way of interpreting common visual situations, such as an office, and Roger Schank extended this idea to [[Script theory|scripts]] for common routines, such as dining out. [[Cyc]] has attempted to capture useful common-sense knowledge and has "micro-theories" to handle particular kinds of domain-specific reasoning. Qualitative simulation, such as [[Benjamin Kuipers]]'s QSIM,<ref name="QSIM">{{Cite book| publisher = MIT Press| isbn = 978-0-262-51540-5| last = Kuipers| first = Benjamin| title = Qualitative Reasoning: Modeling and Simulation with Incomplete Knowledge| date = 1994}}</ref> approximates human reasoning about naive physics, such as what happens when we heat a liquid in a pot on the stove. We expect it to heat and possibly boil over, even though we may not know its temperature, its boiling point, or other details, such as atmospheric pressure.

Similarly, [[James F. Allen (computer scientist)|Allen]]'s [[Allen's interval algebra|temporal interval algebra]] is a simplification of reasoning about time, and [[Region Connection Calculus]] is a simplification of reasoning about spatial relationships. Both can be solved with [[constraint programming|constraint solvers]].

==== Constraints and constraint-based reasoning ====
{{Main|Constraint programming|Spatial–temporal reasoning}}
Constraint solvers perform a more limited kind of inference than first-order logic. They can simplify sets of spatiotemporal constraints, such as those for [[Region connection calculus|RCC]] or [[Allen's interval algebra|Temporal Algebra]], along with solving other kinds of puzzle problems, such as [[Wordle]], [[Sudoku]], [[verbal arithmetic|cryptarithmetic problems]], and so on; a minimal cryptarithmetic example appears below. [[Constraint logic programming]] can be used to solve scheduling problems, for example with [[constraint handling rules]] (CHR).
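As a rough illustration, the classic cryptarithm SEND + MORE = MONEY can be stated as constraints (distinct digits, no leading zeros, the arithmetic must hold). The Python sketch below solves it by naive generate-and-test; a real constraint solver would instead prune the search with constraint propagation, but the constraints themselves are the same. The function names are invented for this example.

<syntaxhighlight lang="python">
from itertools import permutations

def word_value(word, assignment):
    """Numeric value of a word under a letter-to-digit assignment."""
    value = 0
    for letter in word:
        value = value * 10 + assignment[letter]
    return value

def solve_send_more_money():
    letters = "SENDMORY"          # the distinct letters of the puzzle
    for digits in permutations(range(10), len(letters)):
        assignment = dict(zip(letters, digits))
        if assignment["S"] == 0 or assignment["M"] == 0:   # no leading zeros
            continue
        if (word_value("SEND", assignment) + word_value("MORE", assignment)
                == word_value("MONEY", assignment)):
            return assignment

print(solve_send_more_money())
# {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}
</syntaxhighlight>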
=== Automated planning ===
{{Main|Automated planning and scheduling}}
The [[General Problem Solver]] (GPS) cast planning as problem solving and used [[means-ends analysis]] to create plans. [[Stanford Research Institute Problem Solver|STRIPS]] took a different approach, viewing planning as theorem proving. [[Graphplan]] takes a least-commitment approach to planning, rather than sequentially choosing actions from an initial state, working forwards, or from a goal state, working backwards. [[Satplan]] is an approach to planning where a planning problem is reduced to a [[Boolean satisfiability problem]].

=== Natural language processing ===
{{Main|Natural language processing}}
Natural language processing focuses on treating language as data to perform tasks such as identifying topics without necessarily understanding the intended meaning. Natural language understanding, in contrast, constructs a meaning representation and uses that for further processing, such as answering questions. [[Parsing]], [[Tokenization (lexical analysis)|tokenizing]], [[spell checker|spelling correction]], [[part-of-speech tagging]], and [[shallow parsing|noun and verb phrase chunking]] are all aspects of natural language processing long handled by symbolic AI, but since improved by deep learning approaches. In symbolic AI, [[discourse representation theory]] and first-order logic have been used to represent sentence meanings. [[Latent semantic analysis]] (LSA) and [[explicit semantic analysis]] also provided vector representations of documents. In the latter case, vector components are interpretable as concepts named by Wikipedia articles.

New deep learning approaches based on [[Transformer (machine learning model)|Transformer models]] have now eclipsed these earlier symbolic AI approaches and attained state-of-the-art performance in natural language ''processing''. However, Transformer models are opaque and do not yet produce human-interpretable semantic representations for sentences and documents. Instead, they produce task-specific vectors where the meaning of the vector components is opaque.

=== Agents and multi-agent systems ===
{{Main|Agent architecture|Multi-agent system}}
[[Software agent|Agents]] are autonomous systems embedded in an environment they perceive and act upon. Russell and Norvig's standard textbook on artificial intelligence is organized to reflect agent architectures of increasing sophistication.{{sfn|Russell|Norvig|2021}} The sophistication of agents varies from simple reactive agents (a minimal sketch of one appears below), to those with a model of the world and [[automated planning]] capabilities, possibly a [[Belief–desire–intention software model|BDI agent]], i.e., one with beliefs, desires, and intentions – or alternatively a [[reinforcement learning]] model learned over time to choose actions – up to a combination of alternative architectures, such as a neuro-symbolic architecture<ref name=":0" /> that includes deep learning for perception.<ref>Leo de Penning, Artur S. d'Avila Garcez, Luís C. Lamb, John-Jules Ch. Meyer: "A Neural-Symbolic Cognitive Agent for Online Learning and Reasoning." IJCAI 2011: 1653–1658</ref>

In contrast, a [[multi-agent system]] consists of multiple agents that communicate amongst themselves with some inter-agent communication language such as [[Knowledge Query and Manipulation Language]] (KQML). The agents need not all have the same internal architecture. Advantages of multi-agent systems include the ability to divide work among the agents and to increase fault tolerance when agents are lost. Research problems include [[Consensus dynamics|how agents reach consensus]], [[Cooperative distributed problem solving|distributed problem solving]], [[multi-agent learning]], [[multi-agent planning]], and [[distributed constraint optimization]].
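To make the simplest end of this spectrum concrete, the following is a minimal Python sketch of a purely reactive (simple reflex) agent: condition–action rules map each percept directly to an action, with no world model, memory, or planning. The two-square vacuum world is a toy environment, and its names are invented for this illustration.

<syntaxhighlight lang="python">
# Minimal simple-reflex agent: each percept maps directly to an action.
# No world model, memory, or planning -- the simplest agent architecture.

def reflex_vacuum_agent(percept):
    """percept is (location, status); return an action."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

# A tiny two-square environment to run the agent in.
world = {"A": "dirty", "B": "dirty"}
location = "A"
for _ in range(4):                       # a few perceive-act cycles
    action = reflex_vacuum_agent((location, world[location]))
    print(location, world[location], "->", action)
    if action == "suck":
        world[location] = "clean"
    elif action == "right":
        location = "B"
    elif action == "left":
        location = "A"
</syntaxhighlight>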