=== Specific Networks ===
Here, we highlight some characteristics of select networks. The details of each are given in the comparison table below.
{{glossary}}
{{term |1=[[Hopfield network]]}}
{{defn |1=Hopfield networks are inspired by ferromagnetism. A neuron corresponds to an iron domain with binary magnetic moments Up and Down, and neural connections correspond to the domains' influence on each other. Symmetric connections enable a global energy formulation. During inference the network updates each state using the standard step activation function; symmetric weights and the right energy function guarantee convergence to a stable activation pattern (a minimal update sketch is given after the glossary). Asymmetric weights are difficult to analyze. Hopfield nets are used as content-addressable memories (CAM).}}
{{term |1=[[Boltzmann machine]]}}
{{defn |1=These are stochastic Hopfield nets. Their state value is sampled from this [[Probability density function|pdf]] as follows: suppose a binary neuron fires with Bernoulli probability p(1) = 1/3 and rests with p(0) = 2/3. One samples from it by taking a ''uniformly'' distributed random number y and plugging it into the inverted [[cumulative distribution function]], which in this case is the step function thresholded at 2/3. The inverse function = { 0 if y ≤ 2/3, 1 if y > 2/3 } (see the sampling sketch after the glossary).}}
{{term |1=Sigmoid belief net}}
{{defn |1=Introduced by Radford Neal in 1992, this network applies ideas from probabilistic graphical models to neural networks. A key difference is that nodes in graphical models have pre-assigned meanings, whereas belief net neurons' features are determined after training. The network is a sparsely connected directed acyclic graph composed of binary stochastic neurons. The learning rule comes from maximum likelihood on p(X): Δw<sub>ij</sub> <math>\propto</math> s<sub>j</sub> · (s<sub>i</sub> − p<sub>i</sub>), where p<sub>i</sub> = 1 / (1 + e<sup>−(weighted inputs into neuron i)</sup>) (see the learning-rule sketch after the glossary). The s<sub>j</sub>'s are activations from an unbiased sample of the posterior distribution, which is problematic to obtain because of the explaining away problem raised by Judea Pearl. [[Variational Bayesian methods]] use a surrogate posterior and blatantly disregard this complexity.}}
{{term |1=[[Deep belief network]]}}
{{defn |1=Introduced by Hinton, this network is a hybrid of the RBM and the sigmoid belief network. The top two layers form an RBM, and from the second layer downwards it is a sigmoid belief network. One trains it by the [[Stacked Restricted Boltzmann Machine|stacked RBM]] method and then discards the recognition weights below the top RBM. As of 2009, 3–4 layers seems to be the optimal depth.<ref name=HintonMlss2009/>}}
{{term |1=[[Helmholtz machine]]}}
{{defn |1=These are early inspirations for variational autoencoders. It is two networks combined into one: forward weights operate recognition and backward weights implement imagination. It is perhaps the first network to do both. Helmholtz did not work in machine learning, but he inspired the view of a "statistical inference engine whose function is to infer probable causes of sensory input".<ref name="nc95">{{Cite journal|title = The Helmholtz machine.|journal = Neural Computation|date = 1995|pages = 889–904|volume = 7|issue = 5|first1 = Peter|last1 = Dayan|authorlink1=Peter Dayan|first2 = Geoffrey E.|last2 = Hinton|authorlink2=Geoffrey Hinton|first3 = Radford M.|last3 = Neal|authorlink3=Radford M. Neal|first4 = Richard S.|last4 = Zemel|authorlink4=Richard Zemel|doi = 10.1162/neco.1995.7.5.889|pmid = 7584891|s2cid = 1890561|hdl = 21.11116/0000-0002-D6D3-E|hdl-access = free}} {{closed access}}</ref> The stochastic binary neuron outputs a probability that its state is 0 or 1. The data input is normally not considered a layer, but in the Helmholtz machine's generation mode the data layer receives input from the middle layer and has separate weights for this purpose, so it is considered a layer; hence this network has three layers.}}
{{term |1=[[Variational autoencoder]]}}
{{defn |1=These are inspired by Helmholtz machines and combine probability networks with neural networks. An autoencoder is a three-layer CAM network, where the middle layer is supposed to be some internal representation of input patterns. The encoder neural network is a probability distribution q<sub>φ</sub>(z given x) and the decoder network is p<sub>θ</sub>(x given z). The weights are named φ and θ rather than W and V as in the Helmholtz machine, a cosmetic difference. These two networks can be fully connected, or use another NN scheme.}}
{{glossary end}}
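The Hopfield update and energy computation described above can be written compactly. Below is a minimal sketch, assuming bipolar (±1) states, Hebbian pattern storage, and a symmetric weight matrix with zero diagonal; the function names and pattern values are illustrative, not from any particular library.

<syntaxhighlight lang="python">
import numpy as np

def hopfield_energy(W, s):
    """Global energy of state s under symmetric weights W."""
    return -0.5 * s @ W @ s

def hopfield_recall(W, s, n_sweeps=10):
    """Asynchronous inference: update one unit at a time with a step
    (sign) activation until the pattern stops changing."""
    s = s.copy()
    for _ in range(n_sweeps):
        changed = False
        for i in np.random.permutation(len(s)):
            new_si = 1 if W[i] @ s >= 0 else -1
            if new_si != s[i]:
                s[i] = new_si
                changed = True
        if not changed:          # converged to a stable activation pattern
            break
    return s

# Hebbian storage of two toy patterns (one common choice, not the only one)
patterns = np.array([[1, -1, 1, -1], [1, 1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)                 # symmetric, no self-connections
noisy = np.array([1, -1, 1, 1])          # corrupted cue
recalled = hopfield_recall(W, noisy)
print(recalled, hopfield_energy(W, recalled))
</syntaxhighlight>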
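The inverse-CDF sampling of a binary stochastic neuron described under the Boltzmann machine entry amounts to thresholding a uniform draw. A minimal sketch, using the p(1) = 1/3 neuron from the text (the function name is illustrative):

<syntaxhighlight lang="python">
import random

def sample_bernoulli_neuron(p_fire=1/3):
    """Sample a binary stochastic neuron by inverse CDF:
    draw a uniform y and threshold it at p(0) = 1 - p_fire."""
    y = random.random()                   # uniform on [0, 1)
    return 0 if y <= 1 - p_fire else 1    # 0 if y <= 2/3, 1 if y > 2/3

samples = [sample_bernoulli_neuron() for _ in range(10_000)]
print(sum(samples) / len(samples))        # close to 0.33, i.e. p(1) = 1/3
</syntaxhighlight>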
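The sigmoid belief net learning rule Δw<sub>ij</sub> ∝ s<sub>j</sub>(s<sub>i</sub> − p<sub>i</sub>) can be applied layer by layer once a posterior sample is available; obtaining that sample is the hard part, as noted above. A minimal sketch, assuming such a sample is given and using illustrative layer sizes:

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sbn_weight_update(W, s_parents, s_children, lr=0.1):
    """One maximum-likelihood step for a sigmoid belief net layer.

    W[i, j] connects parent j to child i; s_parents and s_children are
    binary activations drawn from a posterior sample of the network.
    Delta w_ij is proportional to s_j * (s_i - p_i), with p_i = sigmoid(net_i)."""
    p = sigmoid(W @ s_parents)                        # p_i for every child i
    return W + lr * np.outer(s_children - p, s_parents)

# toy sample: 3 parent units driving 2 child units (hypothetical sizes)
W = np.zeros((2, 3))
s_parents = np.array([1, 0, 1])
s_children = np.array([1, 0])
W = sbn_weight_update(W, s_parents, s_children)
print(W)
</syntaxhighlight>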