=== Autoencoders ===
An [[autoencoder]] is a feed-forward [[neural network]] which is trained to approximate the identity function; that is, it is trained to map a vector of values to the same vector. When used for dimensionality reduction, one of the hidden layers of the network is limited to contain only a small number of units, so the network must learn to encode the vector into a small number of dimensions and then decode it back into the original space. The first half of the network is thus a model which maps from high-dimensional to low-dimensional space, and the second half maps from low-dimensional to high-dimensional space. Although the idea of autoencoders is quite old,<ref>{{cite book |last1=DeMers |first1=D. |last2=Cottrell |first2=G.W. |date=1993 |chapter=Non-linear dimensionality reduction |title=Advances in neural information processing systems |pages=580–7 |volume=5 |publisher=Morgan Kaufmann |isbn=1558600159 |oclc=928936290}}</ref> training of deep autoencoders has only recently become possible through the use of [[restricted Boltzmann machine]]s and stacked denoising autoencoders.

Related to autoencoders is the [[NeuroScale]] algorithm, which uses stress functions inspired by [[multidimensional scaling]] and [[Sammon mapping]]s (see above) to learn a non-linear mapping from the high-dimensional space to the embedded space. The mappings in NeuroScale are based on [[radial basis function network]]s.
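The bottleneck architecture described above can be sketched minimally as follows. This is an illustrative example only, here assuming the [[PyTorch]] library; the layer sizes, optimizer, and training loop are arbitrary choices and not drawn from a specific reference.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Feed-forward autoencoder with a low-dimensional bottleneck layer."""
    def __init__(self, input_dim=784, bottleneck_dim=2):
        super().__init__()
        # Encoder: maps from the high-dimensional input to the bottleneck
        # (illustrative layer sizes, not from a specific reference).
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, bottleneck_dim),
        )
        # Decoder: maps from the bottleneck back to the original space.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)                  # a batch of example input vectors
for _ in range(100):                     # train to approximate the identity map
    reconstruction = model(x)
    loss = loss_fn(reconstruction, x)    # reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the encoder half provides the low-dimensional embedding.
with torch.no_grad():
    embedding = model.encoder(x)
</syntaxhighlight>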