=== Relationship to classical Hopfield network with continuous variables ===
The classical formulation of continuous Hopfield networks<ref name=":0" /> can be understood<ref name=":4" /> as a special limiting case of the modern Hopfield networks with one hidden layer. Continuous Hopfield networks for neurons with graded response are typically described<ref name=":0" /> by the dynamical equations

{{NumBlk2|:|<math display="block">\tau_f \frac{d x_i}{dt} = \sum\limits_{j=1}^{N_f}T_{ij} V_j - x_i + I_i</math>|5}}

and the energy function

{{NumBlk2|:|<math display="block">E = -\frac{1}{2}\sum\limits_{i,j=1}^{N_f} T_{ij} V_i V_j - \sum\limits_{i=1}^{N_f} V_i I_i + \sum\limits_{i=1}^{N_f} \int\limits^{V_i} g^{-1}(z)\,dz</math>|6}}

where <math display="inline">V_i = g(x_i)</math>, and <math>g^{-1}(z)</math> is the inverse of the activation function <math>g(x)</math>. This model is a special limit of the class of models called models A,<ref name=":4" /> with the following choice of the Lagrangian functions

{{NumBlk2|:|<math display="block">L_v = \sum\limits_{i=1}^{N_f}\int\limits^{x_i} g(x)\,dx,\ \ \ \ \ \text{and}\ \ \ \ \ L_h = \frac{1}{2} \sum\limits_{\mu=1}^{N_h} h_\mu^2</math>|7}}

which, according to the definition ({{EquationNote|2}}), leads to the activation functions

{{NumBlk2|:|<math display="block">V_i = g(x_i), \ \ \ \ \ \text{and}\ \ \ \ \ f_\mu = h_\mu</math>|8}}

If the hidden neurons are integrated out, the system of equations ({{EquationNote|1}}) reduces to the equations for the feature neurons ({{EquationNote|5}}) with <math>T_{ij} = \sum\limits_{\mu=1}^{N_h} \xi_{\mu i }\xi_{\mu j}</math>, and the general expression for the energy ({{EquationNote|3}}) reduces to the effective energy

{{NumBlk2|:|<math display="block">E = -\frac{1}{2} \sum\limits_{i,j=1}^{N_f} T_{ij} V_i V_j - \sum\limits_{i=1}^{N_f} V_i I_i +\sum\limits_{i=1}^{N_f} \Big( x_i V_i - \int\limits^{x_i} g(x)\,dx \Big)</math>|9}}

While the first two terms in equation ({{EquationNote|6}}) are the same as those in equation ({{EquationNote|9}}), the third terms look superficially different. In equation ({{EquationNote|9}}) the third term is a Legendre transform of the Lagrangian for the feature neurons, while in ({{EquationNote|6}}) it is an integral of the inverse activation function. Nevertheless, these two expressions are in fact equivalent, since the derivatives of a function and its Legendre transform are inverse functions of each other. The easiest way to see that these two terms are equal explicitly is to differentiate each one with respect to <math>x_i</math>: differentiating the term from ({{EquationNote|6}}) gives <math display="inline">g^{-1}(V_i)\, g'(x_i) = x_i\, g'(x_i)</math>, while differentiating the term from ({{EquationNote|9}}) gives <math display="inline">V_i + x_i\, g'(x_i) - g(x_i) = x_i\, g'(x_i)</math>, using <math display="inline">V_i = g(x_i)</math>. Since the derivatives are equal, the two expressions are equal up to an additive constant. This completes the proof<ref name=":4" /> that the classical Hopfield network with continuous states<ref name=":0" /> is a special limiting case of the modern Hopfield network ({{EquationNote|1}}) with energy ({{EquationNote|3}}).
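The equivalence of the two energy expressions, and the Lyapunov property of the dynamics ({{EquationNote|5}}), can be checked numerically. The sketch below is not part of the article: it assumes <math display="inline">g = \tanh</math> (for which both antiderivatives have closed forms), Gaussian stored patterns, and arbitrary illustrative system sizes.

```python
import numpy as np

# Numerical sketch (illustrative assumptions, not from the source):
# g = tanh, Gaussian patterns xi_{mu i}, small system sizes.
rng = np.random.default_rng(0)
N_f, N_h = 8, 4                                  # feature / hidden neuron counts
xi = rng.standard_normal((N_h, N_f)) / np.sqrt(N_f)   # stored patterns xi_{mu i}
T = xi.T @ xi                                    # T_ij = sum_mu xi_{mu i} xi_{mu j}
I = rng.standard_normal(N_f)                     # input currents I_i

g = np.tanh
G = lambda x: np.log(np.cosh(x))                 # antiderivative of g
# antiderivative of g^{-1} = artanh:  V artanh(V) + (1/2) log(1 - V^2)
G_inv_int = lambda V: V * np.arctanh(V) + 0.5 * np.log1p(-V**2)

def E_classical(x):                              # energy (6)
    V = g(x)
    return -0.5 * V @ T @ V - V @ I + G_inv_int(V).sum()

def E_effective(x):                              # energy (9)
    V = g(x)
    return -0.5 * V @ T @ V - V @ I + (x * V - G(x)).sum()

x = rng.standard_normal(N_f)
# With these particular antiderivatives the additive constant vanishes,
# so the two energies coincide exactly:
energies_match = np.isclose(E_classical(x), E_effective(x))

# Euler-integrate the dynamics (5); the energy is a Lyapunov function
# of the continuous-time flow, so it should decrease along the trajectory.
tau_f, dt = 1.0, 0.01
Es = []
for _ in range(500):
    Es.append(E_effective(x))
    x = x + dt / tau_f * (T @ g(x) - x + I)

print(energies_match, Es[-1] < Es[0])
```

The choice <math display="inline">g = \tanh</math> is convenient because both <math display="inline">\int^{x} g</math> and <math display="inline">\int^{V} g^{-1}</math> are elementary; any other smooth, monotonically increasing activation would work the same way.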