== Controversies ==

Controversies arose from early on in symbolic AI, both within the field—e.g., between logicists (the pro-logic [[Neats and scruffies|"neats"]]) and non-logicists (the anti-logic [[Neats and scruffies|"scruffies"]])—and between those who embraced AI but rejected symbolic approaches—primarily [[Connectionism|connectionists]]—and those outside the field. Critiques from outside the field came primarily from philosophers, on intellectual grounds, but also from funding agencies, especially during the two [[AI winter]]s.

=== The Frame Problem: knowledge representation challenges for first-order logic ===
{{Main|Philosophy of artificial intelligence}}

Limitations were discovered in using simple first-order logic to reason about dynamic domains. Problems arose both in enumerating the preconditions for an action to succeed and in providing axioms for what did not change after an action was performed.

McCarthy and Hayes introduced the [[Frame problem|Frame Problem]] in 1969 in the paper "Some Philosophical Problems from the Standpoint of Artificial Intelligence."{{sfn|McCarthy|Hayes|1969}} A simple example occurs in "proving that one person could get into conversation with another", as an axiom asserting "if a person has a telephone he still has it after looking up a number in the telephone book" would be required for the deduction to succeed. Similar axioms would be required for other domain actions to specify what ''did not'' change (an illustrative frame axiom of this kind is given below). A related problem, called the [[Qualification problem|Qualification Problem]], occurs in trying to enumerate the ''preconditions'' for an action to succeed. An infinite number of pathological conditions can be imagined; e.g., a banana in a tailpipe could prevent a car from operating correctly.

McCarthy's approach to fixing the frame problem was [[Circumscription (logic)|circumscription]], a kind of [[non-monotonic logic]] in which deductions could be made from action descriptions that specify only what would change, without having to explicitly specify everything that would not change. Other [[non-monotonic logic]]s provided [[Reason maintenance|truth maintenance systems]] that revised beliefs leading to contradictions. Other ways of handling more open-ended domains included [[Probabilistic logic|probabilistic reasoning]] systems and machine learning to learn new concepts and rules. McCarthy's [[Advice taker|Advice Taker]] can be viewed as an inspiration here, as it could incorporate new knowledge provided by a human in the form of assertions or rules. For example, experimental symbolic machine learning systems explored the ability to take high-level natural language advice and to interpret it into domain-specific actionable rules.

Similar to the problems in handling dynamic domains, common-sense reasoning is also difficult to capture in formal reasoning. Examples of common-sense reasoning include implicit reasoning about how people think or general knowledge of day-to-day events, objects, and living creatures. This kind of knowledge is taken for granted and not viewed as noteworthy. Common-sense reasoning is an open area of research and challenging both for symbolic systems (e.g., [[Cyc]] has attempted to capture key parts of this knowledge over more than a decade) and for neural systems (e.g., [[self-driving car]]s that do not know not to drive into cones or not to hit pedestrians walking a bicycle).
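For illustration, the telephone example above can be written as an explicit frame axiom in the style of the [[situation calculus]] used by McCarthy and Hayes; the predicate and function names here are illustrative rather than drawn from the original paper. The axiom says that a person who has a telephone in situation <math>s</math> still has it in the situation that results from looking up a number:

<math display="block">\forall p\, \forall s\; \bigl(\mathit{HasPhone}(p, s) \rightarrow \mathit{HasPhone}(p, \mathit{Result}(\mathit{LookUpNumber}(p), s))\bigr)</math>

One such axiom is needed for every combination of action and unaffected fact, which is why enumerating them by hand quickly becomes unmanageable and why non-monotonic devices such as circumscription were proposed instead.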
McCarthy viewed his [[Advice taker|Advice Taker]] as having common sense, but his definition of common sense was different from the one above.{{sfn|McCarthy|1959}} He defined a program as having common sense "''if it automatically deduces for itself a sufficiently wide class of immediate consequences of anything it is told and what it already knows''."

=== Connectionist AI: philosophical challenges and sociological conflicts ===

Connectionist approaches range from earlier work on [[Artificial neural network|neural networks]],{{sfn|Nilsson|1998|p=7}} such as [[perceptron]]s; through work in the mid-to-late 1980s, such as [[Danny Hillis]]'s [[Connection Machine]] and [[Yann LeCun]]'s advances in [[convolutional neural network]]s; to today's more advanced approaches, such as [[Transformer (machine learning model)|Transformers]], [[Generative adversarial network|GANs]], and other work in deep learning.

Three philosophical positions{{sfn|Olazaran|1993|pp=411-416}} have been outlined among connectionists:
# Implementationism—where connectionist architectures implement the capabilities for symbolic processing,
# Radical connectionism—where symbolic processing is rejected totally, and connectionist architectures underlie intelligence and are fully sufficient to explain it,
# Moderate connectionism—where symbolic processing and connectionist architectures are viewed as complementary and <u>both</u> are required for intelligence.

Olazaran, in his sociological history of the controversies within the neural network community, described the moderate connectionist view as essentially compatible with current research in neuro-symbolic hybrids:

<blockquote>The third and last position I would like to examine here is what I call the moderate connectionist view, a more eclectic view of the current debate between [[connectionism]] and symbolic AI. One of the researchers who has elaborated this position most explicitly is [[Andy Clark]], a philosopher from the School of Cognitive and Computing Sciences of the University of Sussex (Brighton, England). Clark defended hybrid (partly symbolic, partly connectionist) systems. He claimed that (at least) two kinds of theories are needed in order to study and model cognition. On the one hand, for some information-processing tasks (such as pattern recognition) connectionism has advantages over symbolic models. But on the other hand, for other cognitive processes (such as serial, deductive reasoning, and generative symbol manipulation processes) the symbolic paradigm offers adequate models, and not only "approximations" (contrary to what radical connectionists would claim).{{sfn|Olazaran|1993|pp=415-416}}</blockquote>

[[Gary Marcus]] has claimed that the animus in the deep learning community against symbolic approaches now may be more sociological than philosophical:

<blockquote>To think that we can simply abandon symbol-manipulation is to suspend disbelief.
<p>And yet, for the most part, that's how most current AI proceeds. [[Geoffrey Hinton|Hinton]] and many others have tried hard to banish symbols altogether. The deep learning hope—seemingly grounded not so much in science, but in a sort of historical grudge—is that intelligent behavior will emerge purely from the confluence of massive data and deep learning.
Where classical computers and software solve tasks by defining sets of symbol-manipulating rules dedicated to particular jobs, such as editing a line in a word processor or performing a calculation in a spreadsheet, [[Artificial neural network|neural networks]] typically try to solve tasks by statistical approximation and learning from examples.</p></blockquote>

According to Marcus, [[Geoffrey Hinton]] and his colleagues have been vehemently "anti-symbolic":

<blockquote>When deep learning reemerged in 2012, it was with a kind of take-no-prisoners attitude that has characterized most of the last decade. By 2015, his hostility toward all things symbols had fully crystallized. He gave a talk at an AI workshop at Stanford comparing symbols to [[Aether (classical element)|aether]], one of science's greatest mistakes. ... Since then, his anti-symbolic campaign has only increased in intensity. In 2016, [[Yann LeCun]], [[Yoshua Bengio|Bengio]], and Hinton wrote a manifesto for deep learning in one of science's most important journals, Nature. It closed with a direct attack on symbol manipulation, calling not for reconciliation but for outright replacement. Later, Hinton told a gathering of European Union leaders that investing any further money in symbol-manipulating approaches was "a huge mistake," likening it to investing in internal combustion engines in the era of electric cars.{{sfn|Marcus|2020|p=20}}</blockquote>

Part of these disputes may be due to unclear terminology:

<blockquote>Turing award winner [[Judea Pearl]] offers a critique of machine learning which, unfortunately, conflates the terms machine learning and deep learning. Similarly, when Geoffrey Hinton refers to symbolic AI, the connotation of the term tends to be that of expert systems dispossessed of any ability to learn. The use of the terminology is in need of clarification. Machine learning is not confined to [[Association rule learning|association rule]] mining, c.f. the body of work on symbolic ML and relational learning (the differences to deep learning being the choice of representation, localist logical rather than distributed, and the non-use of [[gradient descent|gradient-based learning algorithms]]). Equally, symbolic AI is not just about [[Production system (computer science)|production rules]] written by hand. A proper definition of AI concerns [[knowledge representation and reasoning]], autonomous [[multi-agent system]]s, planning and [[Argumentation framework|argumentation]], as well as learning.{{sfn|Garcez|Lamb|2020|p=8}}</blockquote>

From a theoretical perspective, the boundary between the strengths of connectionist AI and those of symbolic AI may not be as clear-cut as it appears. For instance, Heng Zhang and his colleagues have proved that mainstream knowledge representation formalisms are recursively isomorphic, provided they are universal or have equivalent expressive power.<ref>{{Cite journal |last1=Zhang |first1=Heng |last2=Jiang |first2=Guifei |last3=Quan |first3=Donghui |date=2025-04-11 |title=A Theory of Formalisms for Representing Knowledge |url=https://ojs.aaai.org/index.php/AAAI/article/view/33674 |journal=Proceedings of the AAAI Conference on Artificial Intelligence |language=en |volume=39 |issue=14 |pages=15257–15264 |doi=10.1609/aaai.v39i14.33674 |issn=2374-3468 |arxiv=2412.11855}}</ref> This finding implies that there is no fundamental distinction between using symbolic and connectionist knowledge representation formalisms for the realization of [[artificial general intelligence]] (AGI).
Moreover, the existence of recursive isomorphisms suggests that different technical approaches can draw insights from one another. From this perspective, it seems unnecessary to overemphasize the advantages of any single technical school; instead, mutual learning and integration may offer the most promising path toward the realization of AGI.

=== Situated robotics: the world as a model ===

Another critique of symbolic AI is the [[embodied cognition]] approach:

{{blockquote |text=The [[embodied cognition]] approach claims that it makes no sense to consider the brain separately: cognition takes place within a body, which is embedded in an environment. We need to study the system as a whole; the brain's functioning exploits regularities in its environment, including the rest of its body. Under the embodied cognition approach, robotics, vision, and other sensors become central, not peripheral.{{sfn|Russell|Norvig|2021|p=982}} }}

[[Rodney Brooks]] invented [[behavior-based robotics]], one approach to embodied cognition. [[Nouvelle AI]], another name for this approach, is viewed as an alternative to ''both'' symbolic AI and connectionist AI. His approach rejected representations, either symbolic or distributed, as not only unnecessary but detrimental. Instead, he created the [[subsumption architecture]], a layered architecture for embodied agents. Each layer achieves a different purpose and must function in the real world. For example, the first robot he describes in ''Intelligence Without Representation'' has three layers. The bottom layer interprets sonar sensors to avoid objects. The middle layer causes the robot to wander around when there are no obstacles. The top layer causes the robot to go to more distant places for further exploration. Each layer can temporarily inhibit or suppress a lower-level layer (a simplified code sketch of this layered control is given below). He criticized AI researchers for defining AI problems for their systems, when: "There is no clean division between perception (abstraction) and reasoning in the real world."{{sfn|Brooks|1991|p=143}} He called his robots "Creatures", and each layer was "composed of a fixed-topology network of simple finite state machines."{{sfn|Brooks|1991|p=151}}

In the Nouvelle AI approach, "First, it is vitally important to test the Creatures we build in the real world; i.e., in the same world that we humans inhabit. It is disastrous to fall into the temptation of testing them in a simplified world first, even with the best intentions of later transferring activity to an unsimplified world."{{sfn|Brooks|1991|p=150}} His emphasis on real-world testing was in contrast to earlier work in AI that "concentrated on games, geometrical problems, symbolic algebra, theorem proving, and other formal systems"{{sfn|Brooks|1991|p=142}} and to the use of the [[blocks world]] in symbolic AI systems such as [[SHRDLU]].

=== Current views ===

Each approach—symbolic, connectionist, and behavior-based—has advantages, but has been criticized by proponents of the other approaches. Symbolic AI has been criticized as disembodied, liable to the qualification problem, and poor in handling the perceptual problems where deep learning excels. In turn, [[Connectionism|connectionist AI]] has been criticized as poorly suited for deliberative step-by-step problem solving, incorporating knowledge, and handling planning. Finally, Nouvelle AI excels in reactive and real-world robotics domains but has been criticized for difficulties in incorporating learning and knowledge.
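As an illustration of the layered control Brooks described, the following is a minimal, hypothetical sketch of a subsumption-style controller with his three layers (avoid, wander, explore). The class names, sensor fields, and priority-based arbitration are assumptions made for this example; Brooks's actual Creatures were built from networks of augmented finite state machines rather than from code like this.

<syntaxhighlight lang="python">
# Hypothetical sketch of a three-layer subsumption-style controller.
# Names and the priority-based arbitration are illustrative only.

class AvoidObstacles:
    """Bottom layer: interpret sonar readings and steer away from nearby objects."""
    def propose(self, sensors):
        if sensors["sonar_min_m"] < 0.5:            # an object is closer than 0.5 m
            return {"turn_deg": 90, "speed": 0.0}   # stop and turn away
        return None                                  # no opinion; defer to other layers

class Wander:
    """Middle layer: wander in a random direction when nothing blocks the way."""
    def propose(self, sensors):
        return {"turn_deg": sensors["random_heading_deg"], "speed": 0.3}

class Explore:
    """Top layer: head toward a distant, unvisited place for further exploration."""
    def propose(self, sensors):
        if sensors["frontier_heading_deg"] is not None:
            return {"turn_deg": sensors["frontier_heading_deg"], "speed": 0.5}
        return None

def control_step(layers, sensors):
    """Pick one motor command per cycle.

    Obstacle avoidance (the bottom layer) always wins when it fires; otherwise the
    highest layer with an opinion takes over. This priority rule is a simplified
    stand-in for the inhibition and suppression links between Brooks's layers.
    """
    bottom, *upper = layers
    command = bottom.propose(sensors)
    if command is not None:
        return command
    for layer in reversed(upper):
        command = layer.propose(sensors)
        if command is not None:
            return command
    return {"turn_deg": 0, "speed": 0.0}

# One example control cycle: no obstacle nearby, so the Explore layer takes over.
sensors = {"sonar_min_m": 2.0, "random_heading_deg": 15, "frontier_heading_deg": 40}
print(control_step([AvoidObstacles(), Wander(), Explore()], sensors))
</syntaxhighlight>

In the actual subsumption architecture the layers run concurrently and interact by suppressing one another's inputs or inhibiting outputs, rather than through a single arbitration loop as above.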
[[Hybrid intelligent system|Hybrid AIs]] incorporating one or more of these approaches are currently viewed as the path forward.{{sfn|Kautz|2020}}{{sfn|Rossi|2022}}{{sfn|Selman|2022}} Russell and Norvig conclude that:

<blockquote>Overall, [[Hubert Dreyfus|Dreyfus]] saw areas where AI did not have complete answers and said that AI is therefore impossible; we now see many of these same areas undergoing continued research and development leading to increased capability, not impossibility.{{sfn|Russell|Norvig|2021|p=982}}</blockquote>