==Principles==
In [[artificial neuron]]s and [[artificial neural network]]s, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously, and decreases if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.

The following is a formulaic description of Hebbian learning (many other descriptions are possible):

:<math>\,w_{ij}=x_ix_j,</math>

where <math>w_{ij}</math> is the weight of the connection from neuron <math>j</math> to neuron <math>i</math>, and <math>x_i</math> is the input for neuron <math>i</math>. This is an example of pattern learning, where weights are updated after every training example. In a [[Hopfield network]], connections <math>w_{ij}</math> are set to zero if <math>i=j</math> (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.{{fact|date=March 2025}}

When several training patterns are used, the expression becomes an average over the individual patterns:

:<math>w_{ij} = \frac{1}{p} \sum_{k=1}^p x_i^k x_j^k,</math>

where <math>w_{ij}</math> is the weight of the connection from neuron <math>j</math> to neuron <math>i</math>, <math>p</math> is the number of training patterns, and <math>x_i^k</math> is the <math>k</math>-th input for neuron <math>i</math>. This is learning by epoch, with weights updated after all the training examples are presented; this form applies to both discrete and continuous training sets. Again, in a Hopfield network, connections <math>w_{ij}</math> are set to zero if <math>i=j</math> (no reflexive connections). Both update rules are illustrated in the code sketch at the end of this section.

A variation of Hebbian learning that takes into account phenomena such as blocking and other neural learning phenomena is the mathematical model of [[Harry Klopf]]. [[Harry Klopf|Klopf]]'s model assumes that parts of a system with simple adaptive mechanisms can underlie more complex systems with more advanced adaptive behavior, such as neural networks.<ref>Klopf, A. H. (1972). [https://web.archive.org/web/20170212151545/http://www.dtic.mil/dtic/tr/fulltext/u2/742259.pdf Brain function and adaptive systems—A heterostatic theory]. Technical Report AFCRL-72-0164, Air Force Cambridge Research Laboratories, Bedford, MA.</ref>
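The two rules above translate directly into code. The following is a minimal, illustrative sketch in Python with [[NumPy]]; the function names and the bipolar (&plusmn;1) example patterns are chosen here purely for illustration and come from no particular implementation.

<syntaxhighlight lang="python">
import numpy as np


def hebbian_pattern(x):
    # Single-pattern rule: w_ij = x_i * x_j, i.e. the outer product of the
    # activation vector with itself. The diagonal is zeroed so there are
    # no reflexive connections (i == j), as in a Hopfield network.
    w = np.outer(x, x).astype(float)
    np.fill_diagonal(w, 0.0)
    return w


def hebbian_epoch(patterns):
    # Epoch rule: w_ij = (1/p) * sum_k x_i^k * x_j^k, the average of the
    # single-pattern weights over all p training patterns.
    p = len(patterns)
    w = sum(np.outer(x, x).astype(float) for x in patterns) / p
    np.fill_diagonal(w, 0.0)
    return w


# Two bipolar patterns on four neurons. Pairs of neurons that always
# disagree (here 1 & 4, and 2 & 3) get weight -1; pairs that agree as
# often as they disagree average out to 0.
patterns = [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
print(hebbian_epoch(patterns))
</syntaxhighlight>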