==Basic principle==
The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could represent [[neurons]] and the connections could represent [[synapses]], as in the [[human brain]]. This principle has been seen as an alternative to GOFAI and the classical [[theory of mind|theories of mind]] based on symbolic computation, but the extent to which the two approaches are compatible has been the subject of much debate since their inception.<ref name=":0" />

===Activation function===
{{Main|Activation function}}
The internal state of a network changes over time as neurons send signals to a succeeding layer of neurons (in a feedforward network) or to a previous layer (in a recurrent network). The discovery of non-linear activation functions enabled the second wave of connectionism.

===Memory and learning===
{{Main|Artificial neural networks|Deep learning}}
Neural networks follow two basic principles:
# Any mental state can be described as an ''n''-dimensional [[Vector (mathematics)|vector]] of numeric activation values over neural units in a network.
# Memory and learning are created by modifying the 'weights' of the connections between neural units, generally represented as an ''n''×''m'' [[Matrix (mathematics)|matrix]].
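The two principles above can be made concrete in a minimal sketch (not from the article; all weights and activation values are illustrative): the network state is a vector of activations, and one feedforward step multiplies it by a weight matrix and applies a non-linear activation function.

```python
import math

def logistic(x):
    """Logistic activation: squashes a summed input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def feedforward_step(activations, weights):
    """Propagate an activation vector through one weight matrix,
    applying the non-linear activation to each unit's summed input."""
    return [
        logistic(sum(w * a for w, a in zip(row, activations)))
        for row in weights
    ]

state = [0.2, 0.9, 0.4]       # 3-dimensional activation vector
W = [[0.5, -1.0, 0.3],        # 2x3 weight matrix: 3 units -> 2 units
     [1.2, 0.4, -0.7]]
next_state = feedforward_step(state, W)   # new 2-dimensional state
```

A recurrent network would differ only in feeding part of `next_state` back into an earlier layer on the next time step.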
The weights are adjusted according to some [[learning rule]] or algorithm, such as [[Hebbian theory|Hebbian learning]].<ref>{{Cite journal|last1=Novo|first1=María-Luisa|last2=Alsina|first2=Ángel|last3=Marbán|first3=José-María|last4=Berciano|first4=Ainhoa|date=2017|title=Connective Intelligence for Childhood Mathematics Education|journal=Comunicar|language=es|volume=25|issue=52|pages=29–39|doi=10.3916/c52-2017-03|issn=1134-3478|doi-access=free|hdl=10272/14085|hdl-access=free}}</ref>

Most of the variety among the models comes from:
* ''Interpretation of units'': Units can be interpreted as neurons or groups of neurons.
* ''Definition of activation'': Activation can be defined in a variety of ways. For example, in a [[Boltzmann machine]], the activation is interpreted as the probability of generating an action potential spike, and is determined via a [[logistic function]] on the sum of the inputs to a unit.
* ''Learning algorithm'': Different networks modify their connections differently. In general, any mathematically defined change in connection weights over time is referred to as the "learning algorithm".
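A Hebbian learning rule of the kind mentioned above ("units that fire together wire together") can be sketched as follows; the learning rate and activation values here are illustrative, not from any particular model:

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Return an updated weight matrix in which each weight grows in
    proportion to the product of pre- and post-synaptic activations:
    w_ij += lr * post_i * pre_j."""
    return [
        [w + lr * post_i * pre_j for w, pre_j in zip(row, pre)]
        for row, post_i in zip(weights, post)
    ]

pre = [1.0, 0.0]     # activations of the two sending units
post = [1.0]         # activation of the one receiving unit
W = [[0.0, 0.0]]     # 1x2 weight matrix, initially zero
W = hebbian_update(W, pre, post)
# Only the connection from the co-active sending unit is strengthened:
# W == [[0.1, 0.0]]
```

Any other mathematically defined change in the weights over time would, in the article's terms, equally count as a learning algorithm.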
===Biological realism===
Connectionist work in general does not need to be biologically realistic.<ref>{{Cite web|url=http://www.encephalos.gr/48-1-01e.htm|title=Encephalos Journal|website=www.encephalos.gr|access-date=2018-02-20}}</ref><ref>{{Cite book|url=https://books.google.com/books?id=s-OCCwAAQBAJ&q=%22accurate%22&pg=PT18|title=Neural Geographies: Feminism and the Microstructure of Cognition|last=Wilson|first=Elizabeth A.|date=2016-02-04|publisher=Routledge|isbn=978-1-317-95876-5|language=en}}</ref><ref name=OIR_1>{{cite journal| title=Organismically-inspired robotics: homeostatic adaptation and teleology beyond the closed sensorimotor loop| author=Di Paolo, E.A| url=https://users.sussex.ac.uk/~ezequiel/dp-erasmus.pdf| publisher=[[University of Sussex]]| journal=Dynamical Systems Approach to Embodiment and Sociality, Advanced Knowledge International| date=1 January 2003| access-date=29 December 2023| s2cid=15349751}}</ref><ref>{{Cite journal|last1=Zorzi|first1=Marco|last2=Testolin|first2=Alberto|last3=Stoianov|first3=Ivilin P.|date=2013-08-20|title=Modeling language and cognition with deep unsupervised learning: a tutorial overview|journal=Frontiers in Psychology|volume=4|page=515|doi=10.3389/fpsyg.2013.00515|issn=1664-1078|pmc=3747356|pmid=23970869|doi-access=free}}</ref><ref name=AA_1>{{cite journal| title=Analytic and Continental Philosophy, Science, and Global Philosophy| author=Tieszen, R.| url=https://scholarworks.sjsu.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1015&context=comparativephilosophy| journal=Comparative Philosophy| volume=2| issue=2| pages=4–22| date=2011| access-date=29 December 2023}}</ref><ref>{{Cite book|url=https://books.google.com/books?id=uV9TZzOITMwC&q=%22biological%20plausibility%22&pg=PA17|title=Neural Network Perspectives on Cognition and Adaptive Robotics|last=Browne|first=A.|date=1997-01-01|publisher=CRC Press|isbn=978-0-7503-0455-9|language=en}}</ref><ref>{{Cite book|url=https://books.google.com/books?id=7pPv0STSos8C&q=%22biological+realism%22&pg=PA63|title=Connectionism in Perspective|last1=Pfeifer|first1=R.|last2=Schreter|first2=Z.|last3=Fogelman-Soulié|first3=F.|last4=Steels|first4=L.|date=1989-08-23|publisher=Elsevier|isbn=978-0-444-59876-9|language=en}}</ref> One area where connectionist models are considered biologically implausible is the error-propagation networks needed to support learning.<ref>{{Cite journal|last=Crick|first=Francis|date=January 1989|title=The recent excitement about neural networks|journal=Nature|language=en|volume=337|issue=6203|pages=129–132|doi=10.1038/337129a0|pmid=2911347|issn=1476-4687|bibcode=1989Natur.337..129C|s2cid=5892527}}</ref><ref name=":4">{{Cite journal|last1=Rumelhart|first1=David E.|last2=Hinton|first2=Geoffrey E.|last3=Williams|first3=Ronald J.|date=October 1986|title=Learning representations by back-propagating errors|journal=Nature|language=en|volume=323|issue=6088|pages=533–536|doi=10.1038/323533a0|issn=1476-4687|bibcode=1986Natur.323..533R|s2cid=205001834}}</ref> However, error propagation can explain some of the biologically generated electrical activity seen at the scalp in [[event-related potential]]s such as the [[N400 (neuroscience)|N400]] and [[P600 (neuroscience)|P600]],<ref>{{Cite journal|last1=Fitz|first1=Hartmut|last2=Chang|first2=Franklin|date=2019-06-01|title=Language ERPs reflect learning through prediction error propagation|journal=Cognitive Psychology|volume=111|pages=15–52|doi=10.1016/j.cogpsych.2019.03.002|pmid=30921626|issn=0010-0285|hdl=21.11116/0000-0003-474F-6|s2cid=85501792|hdl-access=free}}</ref> which provides some biological support for one of the key assumptions of connectionist learning procedures. Many recurrent connectionist models also incorporate [[dynamical systems theory]].
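The simplest instance of the error-propagation idea discussed above is a single logistic unit trained by gradient descent on its squared error; this minimal sketch (with an illustrative input, target, and learning rate, none of them from the article) shows the weight being nudged opposite the error gradient:

```python
import math

def logistic(x):
    """Logistic activation: squashes a summed input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def train_step(w, x, target, lr=0.5):
    """One gradient step: w -= lr * dE/dw for E = (y - target)**2 / 2."""
    y = logistic(w * x)
    grad = (y - target) * y * (1.0 - y) * x   # chain rule through the logistic
    return w - lr * grad

w = 0.0
for _ in range(2000):
    w = train_step(w, x=1.0, target=0.9)
# After training, logistic(w * 1.0) approaches the target of 0.9
```

In a multi-layer network, the same error signal is propagated backward through each layer's weights; it is this backward pass whose biological plausibility has been questioned.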
Many researchers, such as the connectionist [[Paul Smolensky]], have argued that connectionist models will evolve toward fully [[Continuous function|continuous]], high-dimensional, [[non-linear]], [[dynamic systems]] approaches.