{{Short description|Methods in artificial intelligence research}}
{{Artificial intelligence|Approaches}}
In [[artificial intelligence]], '''symbolic artificial intelligence''' (also known as '''classical artificial intelligence''' or '''logic-based artificial intelligence''')<ref name="Garnelo2019">{{Cite journal|last1=Garnelo|first1=Marta|last2=Shanahan|first2=Murray|date=2019-10-01|title=Reconciling deep learning with symbolic artificial intelligence: representing objects and relations|journal=Current Opinion in Behavioral Sciences|language=en|volume=29|pages=17–23|doi=10.1016/j.cobeha.2018.12.010|s2cid=72336067|doi-access=free|hdl=10044/1/67796|hdl-access=free}}</ref><ref>{{Cite SEP|url-id=thomason|title=Logic-Based Artificial Intelligence|first=Richmond|last=Thomason|date=February 27, 2024}}</ref> is the term for the collection of all methods in artificial intelligence research that are based on high-level [[physical symbol systems hypothesis|symbolic]] (human-readable) representations of problems, [[Formal logic|logic]] and [[search algorithm|search]].<ref name="Garnelo2019" /> Symbolic AI used tools such as [[logic programming]], [[production (computer science)|production rules]], [[semantic nets]] and [[frame (artificial intelligence)|frames]], and it developed applications such as [[knowledge-based systems]] (in particular, [[expert systems]]), [[symbolic mathematics]], [[automated theorem provers]], [[ontologies]], the [[semantic web]], and [[automated planning and scheduling]] systems; a minimal illustrative sketch of such a rule-based representation is shown below. The symbolic AI paradigm led to seminal ideas in [[Artificial intelligence#Search and optimization|search]], [[symbolic programming]] languages, [[Intelligent agent|agents]], [[multi-agent systems]], the [[semantic web]], and the strengths and limitations of formal knowledge and [[automated reasoning|reasoning systems]].

Symbolic AI was the dominant [[paradigm]] of AI research from the mid-1950s until the mid-1990s.{{sfn|Kolata|1982}} Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with [[artificial general intelligence]] and considered this the ultimate goal of their field.<ref>{{Cite journal |last1=Newell |first1=Allen |last2=Simon |first2=Herbert A. |date=1976-03-01 |title=Computer science as empirical inquiry: symbols and search |url=https://dl.acm.org/doi/10.1145/360018.360022 |journal=Communications of the ACM |volume=19 |issue=3 |pages=113–126 |doi=10.1145/360018.360022 |issn=0001-0782}}</ref> An early boom, with successes such as the [[Logic Theorist]] and [[Arthur Samuel (computer scientist)|Samuel]]'s [[Arthur Samuel (computer scientist)|checkers-playing program]], led to unrealistic expectations and promises and was followed by the first [[AI winter|AI Winter]] as funding dried up.{{sfn|Kautz|2022|pp=107–109}}{{sfn|Russell|Norvig|2021|p=19}}

A second boom (1969–1986) occurred with the rise of expert systems, their promise of capturing corporate expertise, and an enthusiastic corporate embrace.{{sfn|Russell|Norvig|2021|pp=22–23}}{{sfn|Kautz|2022|pp=109–110}} That boom, despite early successes such as [[XCON]] at [[Digital Equipment Corporation|DEC]], was again followed by disappointment.{{sfn|Kautz|2022|pp=109–110}} Difficulties arose with knowledge acquisition, with maintaining large knowledge bases, and with brittleness in handling out-of-domain problems. A second AI Winter (1988–2011) followed.{{sfn|Kautz|2022|p=110}} Subsequently, AI researchers focused on addressing the underlying problems of handling uncertainty and of knowledge acquisition.{{sfn|Kautz|2022|pp=110–111}} Uncertainty was addressed with formal methods such as [[hidden Markov model]]s, [[Bayesian reasoning]], and [[statistical relational learning]].{{sfn|Russell|Norvig|2021|p=25}}{{sfn|Kautz|2022|p=111}} Symbolic machine learning addressed the knowledge acquisition problem with contributions including [[Version space learning|Version Space]], [[Leslie Valiant|Valiant]]'s [[Probably approximately correct learning|PAC learning]], [[Ross Quinlan|Quinlan]]'s [[ID3 algorithm|ID3]] [[decision-tree]] learning, [[Case-based reasoning|case-based learning]], and [[inductive logic programming]] for learning relations.{{sfn|Kautz|2020|pp=110–111}}

[[Artificial neural network|Neural networks]], a subsymbolic approach, had been pursued from the early days and reemerged strongly in 2012. Early examples are [[Frank Rosenblatt|Rosenblatt]]'s [[perceptron]] learning work, the [[backpropagation]] work of Rumelhart, Hinton and Williams,<ref>{{cite journal| doi = 10.1038/323533a0| issn = 1476-4687| volume = 323| issue = 6088| pages = 533–536| last1 = Rumelhart| first1 = David E.| last2 = Hinton| first2 = Geoffrey E.| last3 = Williams| first3 = Ronald J.| title = Learning representations by back-propagating errors| journal = Nature| date = 1986| bibcode = 1986Natur.323..533R| s2cid = 205001834}}</ref> and work in [[convolutional neural network]]s by LeCun et al. in 1989.<ref>{{Cite journal| volume = 1| issue = 4| pages = 541–551| last1 = LeCun| first1 = Y.| last2 = Boser| first2 = B.| last3 = Denker| first3 = J.| last4 = Henderson| first4 = D.| last5 = Howard| first5 = R.| last6 = Hubbard| first6 = W.| last7 = Jackel| first7 = L.| title = Backpropagation Applied to Handwritten Zip Code Recognition| journal = Neural Computation| date = 1989| doi = 10.1162/neco.1989.1.4.541| s2cid = 41312633}}</ref> However, neural networks were not viewed as successful until about 2012: "Until Big Data became commonplace, the general consensus in the AI community was that the so-called neural-network approach was hopeless. Systems just didn't work that well, compared to other methods. ... A revolution came in 2012, when a number of people, including a team of researchers working with Hinton, worked out a way to use the power of [[GPUs]] to enormously increase the power of neural networks."{{sfn|Marcus|Davis|2019}} Over the next several years, [[deep learning]] had spectacular success in handling vision, [[speech recognition]], speech synthesis, image generation, and machine translation. However, since 2020, as inherent difficulties with bias, explanation, comprehensibility, and robustness became more apparent with deep learning approaches, an increasing number of AI researchers have called for [[Neuro-symbolic AI|combining]] the best of both the symbolic and neural network approaches<ref name="Rossi">{{cite web |last1=Rossi |first1=Francesca |title=Thinking Fast and Slow in AI |url=https://aaai-2022.virtualchair.net/plenary_13.html |publisher=AAAI |access-date=5 July 2022}}</ref><ref name="Selman">{{cite web |last1=Selman |first1=Bart |title=AAAI Presidential Address: The State of AI |url=https://aaai-2022.virtualchair.net/plenary_2.html |publisher=AAAI |access-date=5 July 2022}}</ref> and addressing areas that both approaches have difficulty with, such as [[Commonsense reasoning|common-sense reasoning]].{{sfn|Marcus|Davis|2019}}
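
The following is a minimal, illustrative sketch, not drawn from the cited sources, of the kind of human-readable, rule-based representation described above: a tiny forward-chaining production-rule interpreter written in Python. The facts, the single rule, and the predicate names (such as <code>parent</code> and <code>grandparent</code>) are hypothetical examples chosen only to show how symbolic rules operate over explicit, readable facts.

<syntaxhighlight lang="python">
# Illustrative sketch only: a tiny forward-chaining production-rule
# system over human-readable symbols. All facts, rules, and predicate
# names below are hypothetical examples.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# A rule asserts its conclusion whenever every condition (terms
# starting with "?" are variables) matches the working memory.
rules = [
    {
        "if": [("parent", "?x", "?y"), ("parent", "?y", "?z")],
        "then": ("grandparent", "?x", "?z"),
    }
]

def match(pattern, fact, bindings):
    """Match one pattern against one fact, extending the variable bindings."""
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):          # variable: must agree with any prior binding
            if bindings.get(p, f) != f:
                return None
            bindings[p] = f
        elif p != f:                   # constant: must match exactly
            return None
    return bindings

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            # Collect every set of bindings that satisfies all conditions.
            candidates = [dict()]
            for cond in rule["if"]:
                candidates = [
                    b2
                    for b in candidates
                    for f in derived
                    if (b2 := match(cond, f, b)) is not None
                ]
            for b in candidates:
                new_fact = tuple(b.get(t, t) for t in rule["then"])
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(forward_chain(facts, rules))
</syntaxhighlight>

Running the sketch derives the new fact <code>("grandparent", "alice", "carol")</code> from the two <code>parent</code> facts by matching the rule's conditions against working memory until a fixed point is reached, which is the basic cycle of a production-rule system.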