=== Connectionist AI: philosophical challenges and sociological conflicts ===
Connectionist approaches span earlier work on [[Artificial neural network|neural networks]],{{sfn|Nilsson|1998|p=7}} such as [[perceptron]]s; work in the mid-to-late 1980s, such as [[Danny Hillis]]'s [[Connection Machine]] and [[Yann LeCun]]'s advances in [[convolutional neural network]]s; and today's more advanced approaches, such as [[Transformer (machine learning model)|Transformers]], [[Generative adversarial network|GANs]], and other work in deep learning.

Three philosophical positions{{sfn|Olazaran|1993|pp=411-416}} have been outlined among connectionists:
# Implementationism—where connectionist architectures implement the capabilities for symbolic processing,
# Radical connectionism—where symbolic processing is rejected totally, and connectionist architectures underlie intelligence and are fully sufficient to explain it,
# Moderate connectionism—where symbolic processing and connectionist architectures are viewed as complementary and <u>both</u> are required for intelligence.

Olazaran, in his sociological history of the controversies within the neural network community, described the moderate connectionist view as essentially compatible with current research in neuro-symbolic hybrids:

<blockquote>The third and last position I would like to examine here is what I call the moderate connectionist view, a more eclectic view of the current debate between [[connectionism]] and symbolic AI. One of the researchers who has elaborated this position most explicitly is [[Andy Clark]], a philosopher from the School of Cognitive and Computing Sciences of the University of Sussex (Brighton, England). Clark defended hybrid (partly symbolic, partly connectionist) systems. He claimed that (at least) two kinds of theories are needed in order to study and model cognition. On the one hand, for some information-processing tasks (such as pattern recognition) connectionism has advantages over symbolic models. But on the other hand, for other cognitive processes (such as serial, deductive reasoning, and generative symbol manipulation processes) the symbolic paradigm offers adequate models, and not only "approximations" (contrary to what radical connectionists would claim).{{sfn|Olazaran|1993|pp=415-416}}</blockquote>

[[Gary Marcus]] has claimed that the animus in the deep learning community against symbolic approaches may now be more sociological than philosophical:

<blockquote>To think that we can simply abandon symbol-manipulation is to suspend disbelief.

And yet, for the most part, that's how most current AI proceeds. [[Geoffrey Hinton|Hinton]] and many others have tried hard to banish symbols altogether. The deep learning hope—seemingly grounded not so much in science, but in a sort of historical grudge—is that intelligent behavior will emerge purely from the confluence of massive data and deep learning. Where classical computers and software solve tasks by defining sets of symbol-manipulating rules dedicated to particular jobs, such as editing a line in a word processor or performing a calculation in a spreadsheet, [[Artificial neural network|neural networks]] typically try to solve tasks by statistical approximation and learning from examples.</blockquote>

According to Marcus, [[Geoffrey Hinton]] and his colleagues have been vehemently "anti-symbolic":

<blockquote>When deep learning reemerged in 2012, it was with a kind of take-no-prisoners attitude that has characterized most of the last decade. By 2015, his hostility toward all things symbols had fully crystallized. He gave a talk at an AI workshop at Stanford comparing symbols to [[Aether (classical element)|aether]], one of science's greatest mistakes. ... Since then, his anti-symbolic campaign has only increased in intensity. In 2016, [[Yann LeCun]], [[Yoshua Bengio|Bengio]], and Hinton wrote a manifesto for deep learning in one of science's most important journals, Nature. It closed with a direct attack on symbol manipulation, calling not for reconciliation but for outright replacement. Later, Hinton told a gathering of European Union leaders that investing any further money in symbol-manipulating approaches was "a huge mistake," likening it to investing in internal combustion engines in the era of electric cars.{{sfn|Marcus|2020|p=20}}</blockquote>

Part of these disputes may be due to unclear terminology:

<blockquote>Turing award winner [[Judea Pearl]] offers a critique of machine learning which, unfortunately, conflates the terms machine learning and deep learning. Similarly, when Geoffrey Hinton refers to symbolic AI, the connotation of the term tends to be that of expert systems dispossessed of any ability to learn. The use of the terminology is in need of clarification. Machine learning is not confined to [[Association rule learning|association rule]] mining, c.f. the body of work on symbolic ML and relational learning (the differences to deep learning being the choice of representation, localist logical rather than distributed, and the non-use of [[gradient descent|gradient-based learning algorithms]]). Equally, symbolic AI is not just about [[Production system (computer science)|production rules]] written by hand. A proper definition of AI concerns [[knowledge representation and reasoning]], autonomous [[multi-agent system]]s, planning and [[Argumentation framework|argumentation]], as well as learning.{{sfn|Garcez|Lamb|2020|p=8}}</blockquote>

From a theoretical perspective, moreover, the boundary between the advantages of connectionist AI and those of symbolic AI may not be as clear-cut as it appears. Heng Zhang and his colleagues have proved that mainstream knowledge representation formalisms are recursively isomorphic, provided that they are universal or have equivalent expressive power.<ref>{{Cite journal |last=Zhang |first=Heng |last2=Jiang |first2=Guifei |last3=Quan |first3=Donghui |date=2025-04-11 |title=A Theory of Formalisms for Representing Knowledge |url=https://ojs.aaai.org/index.php/AAAI/article/view/33674 |journal=Proceedings of the AAAI Conference on Artificial Intelligence |language=en |volume=39 |issue=14 |pages=15257–15264 |doi=10.1609/aaai.v39i14.33674 |issn=2374-3468 |arxiv=2412.11855}}</ref> This finding implies that there is no fundamental distinction between using symbolic or connectionist knowledge representation formalisms to realize [[artificial general intelligence]] (AGI). The existence of recursive isomorphisms also suggests that the different technical approaches can draw insights from one another; from this perspective, overemphasizing the advantages of any single school seems unnecessary, and mutual learning and integration may offer the most promising path toward AGI.
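As a rough sketch of the notion involved (assuming the classical computability-theoretic definition of recursive isomorphism; the symbols <math>F_1</math>, <math>F_2</math>, <math>L(\cdot)</math>, and <math>f</math> below are illustrative and are not drawn from the cited paper), two formalisms with expression sets <math>L(F_1)</math> and <math>L(F_2)</math> are recursively isomorphic when a computable bijection translates each into the other while preserving semantic content:

<math display="block">F_1 \cong_{\mathrm{rec}} F_2 \;\iff\; \exists\, f\colon L(F_1)\to L(F_2)\ \text{bijective, with } f \text{ and } f^{-1} \text{ computable, such that } \varphi \text{ and } f(\varphi) \text{ represent the same knowledge for every } \varphi\in L(F_1).</math>

Under a mapping of this kind, any knowledge base expressible in one formalism has a computable, meaning-preserving counterpart in the other, which is the sense in which neither family of formalisms would be fundamentally more capable than the other.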