===Graph-based kernel PCA===
Other prominent nonlinear techniques include [[manifold learning]] techniques such as [[Isomap]], [[locally linear embedding]] (LLE),<ref>{{cite journal |last1=Roweis |first1=S. T. |last2=Saul |first2=L. K. |title=Nonlinear Dimensionality Reduction by Locally Linear Embedding |doi=10.1126/science.290.5500.2323 |journal=Science |volume=290 |issue=5500 |pages=2323–2326 |year=2000 |pmid=11125150 |bibcode=2000Sci...290.2323R |citeseerx=10.1.1.111.3313 |s2cid=5987139 }}</ref> Hessian LLE, Laplacian eigenmaps, and methods based on tangent space analysis.<ref>{{cite journal |last1=Zhang |first1=Zhenyue |last2=Zha |first2=Hongyuan |date=2004 |title=Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment |journal=SIAM Journal on Scientific Computing |volume=26 |issue=1 |pages=313–338 |doi=10.1137/s1064827502419154 |bibcode=2004SJSC...26..313Z }}</ref> These techniques construct a low-dimensional data representation using a cost function that retains local properties of the data, and can be viewed as defining a graph-based kernel for kernel PCA.

More recently, techniques have been proposed that, instead of defining a fixed kernel, try to learn the kernel using [[semidefinite programming]]. The most prominent example of such a technique is [[maximum variance unfolding]] (MVU). The central idea of MVU is to exactly preserve all pairwise distances between nearest neighbors (in the inner product space) while maximizing the distances between points that are not nearest neighbors.

An alternative approach to neighborhood preservation is through the minimization of a cost function that measures differences between distances in the input and output spaces. Important examples of such techniques include: classical [[multidimensional scaling]], which is identical to PCA; [[Isomap]], which uses geodesic distances in the data space; [[diffusion map]]s, which use diffusion distances in the data space; [[t-distributed stochastic neighbor embedding]] (t-SNE), which minimizes the divergence between distributions over pairs of points; and curvilinear component analysis.

A different approach to nonlinear dimensionality reduction is through the use of [[autoencoder]]s, a special kind of [[feedforward neural network]] with a bottleneck hidden layer.<ref>Hongbing Hu, Stephen A. Zahorian (2010), [http://ws2.binghamton.edu/zahorian/pdf/Hu2010Dimensionality.pdf "Dimensionality Reduction Methods for HMM Phonetic Recognition"], ICASSP 2010, Dallas, TX.</ref> The training of deep encoders is typically performed using a greedy layer-wise pre-training (e.g., using a stack of [[restricted Boltzmann machine]]s) that is followed by a fine-tuning stage based on [[backpropagation]].

[[File:LDA Projection Illustration 01.gif|thumb|A visual depiction of the resulting LDA projection for a set of 2D points.]]
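As an illustrative sketch, several of the neighborhood-preserving methods described above are available in the [[scikit-learn]] library; the data set and parameter values below (such as <code>n_neighbors</code>, <code>perplexity</code>, and the RBF kernel width <code>gamma</code>) are arbitrary choices for demonstration, not recommended settings:

<syntaxhighlight lang="python">
# Illustrative sketch: nonlinear dimensionality reduction of a synthetic
# "Swiss roll" data set using scikit-learn (parameter values are arbitrary).
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import KernelPCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding, TSNE

# 1,000 points sampled (with noise) from a 2-D manifold embedded in 3-D space.
X, _ = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Kernel PCA with a Gaussian (RBF) kernel.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.01).fit_transform(X)

# Isomap: preserves geodesic distances estimated on a nearest-neighbor graph.
X_iso = Isomap(n_components=2, n_neighbors=10).fit_transform(X)

# Locally linear embedding: preserves local linear reconstruction weights.
X_lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10).fit_transform(X)

# t-SNE: minimizes the divergence between distributions over pairs of points.
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
</syntaxhighlight>

Each estimator returns a two-dimensional embedding of the same 1,000 points; the neighborhood size and kernel width control how much of the manifold's local structure is retained in the low-dimensional representation.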