=== Locally-linear embedding ===

[[Locally-linear Embedding]] (LLE) was presented at approximately the same time as Isomap.<ref>{{cite journal |first1=S. T. |last1=Roweis |first2=L. K. |last2=Saul |title=Nonlinear Dimensionality Reduction by Locally Linear Embedding |journal=Science |volume=290 |issue=5500 |year=2000 |pages=2323–6 |doi=10.1126/science.290.5500.2323 |pmid=11125150 |bibcode=2000Sci...290.2323R |s2cid=5987139 }}</ref> It has several advantages over Isomap, including faster optimization when implemented to take advantage of [[sparse matrix]] algorithms, and better results on many problems. LLE also begins by finding a set of the nearest neighbors of each point. It then computes a set of weights for each point that best describes the point as a linear combination of its neighbors. Finally, it uses an eigenvector-based optimization technique to find the low-dimensional embedding of points, such that each point is still described by the same linear combination of its neighbors. LLE tends to handle non-uniform sample densities poorly because there is no fixed unit to prevent the weights from drifting as regions differ in sample density. LLE has no internal model.

The original data points are <math>X_i \in \R^D</math>, and the goal of LLE is to embed each point <math>X_i</math> into a low-dimensional point <math>Y_i \in \R^d</math>, where <math>d \ll D</math>.

LLE has two steps. In the '''first''' step, it computes, for each point ''X''<sub>''i''</sub>, the best approximation of ''X''<sub>''i''</sub> based on barycentric coordinates of its neighbors ''X''<sub>''j''</sub>. The original point is approximately reconstructed by a linear combination, given by the weight matrix ''W''<sub>''ij''</sub>, of its neighbors. The reconstruction error is:

:<math>
E(W) = \sum_i \left|\mathbf{X}_i - \sum_j \mathbf{W}_{ij}\mathbf{X}_j\right|^2
</math>

The weight ''W''<sub>''ij''</sub> is the contribution of the point ''X''<sub>''j''</sub> to the reconstruction of the point ''X''<sub>''i''</sub>. The cost function is minimized under two constraints:
* Each data point ''X''<sub>''i''</sub> is reconstructed only from its neighbors. That is, <math>W_{ij} = 0</math> if point ''X''<sub>''j''</sub> is not a neighbor of the point ''X''<sub>''i''</sub>.
* The sum of every row of the weight matrix equals 1, that is, <math>\sum_j \mathbf{W}_{ij} = 1</math>.
These two constraints ensure that the weights <math>W</math> are unaffected by rotation and translation of the data.

In the '''second''' step, a neighborhood-preserving map is created based on the weights. Each point <math>X_i \in \R^D</math> is mapped onto a point <math>Y_i \in \R^d</math> by minimizing another cost:

:<math>
C(Y) = \sum_i \left|\mathbf{Y}_i - \sum_j \mathbf{W}_{ij}\mathbf{Y}_j\right|^{2}
</math>

Unlike in the previous cost function, the weights ''W''<sub>''ij''</sub> are kept fixed and the minimization is done over the points ''Y''<sub>''i''</sub> to optimize the coordinates. This minimization problem can be solved as a sparse ''N'' × ''N'' [[Eigendecomposition of a matrix|eigenvalue problem]] (''N'' being the number of data points), whose bottom ''d'' nonzero eigenvectors provide an orthogonal set of coordinates.

The only hyperparameter in the algorithm is what counts as a "neighbor" of a point. Generally the data points are reconstructed from their ''K'' nearest neighbors, as measured by [[Euclidean distance]]. In this case, the algorithm has a single integer-valued hyperparameter ''K'', which can be chosen by cross-validation.
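The two steps above can be made concrete in a short script. The following is a minimal sketch in Python/NumPy, using dense matrices for clarity even though a practical implementation would exploit sparsity, as noted above; the function name, default parameter values, and regularisation scheme are illustrative assumptions rather than part of the original algorithm specification.

<syntaxhighlight lang="python">
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Illustrative LLE sketch: X has shape (N, D); returns an (N, n_components) embedding."""
    N = X.shape[0]

    # Find the K nearest neighbors of each point by Euclidean distance.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point from its own neighborhood
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]

    # Step 1: barycentric weights that best reconstruct each X_i from its neighbors.
    W = np.zeros((N, N))
    for i in range(N):
        Z = X[nbrs[i]] - X[i]             # neighbors centred on X_i
        C = Z @ Z.T                       # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # small ridge term for conditioning (assumed)
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()       # enforce the sum-to-one constraint

    # Step 2: keep W fixed and minimize C(Y), i.e. take the bottom eigenvectors
    # of M = (I - W)^T (I - W); the very bottom one is the constant vector with
    # eigenvalue ~0 and is discarded.
    I = np.eye(N)
    M = (I - W).T @ (I - W)
    _, eigvecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    return eigvecs[:, 1:n_components + 1]
</syntaxhighlight>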
==== Hessian locally-linear embedding (Hessian LLE) ====

Like LLE, [[Hessian LLE]] is also based on sparse matrix techniques.<ref>{{cite journal |first1=D. |last1=Donoho |first2=C. |last2=Grimes |title=Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data |journal=Proc Natl Acad Sci U S A |date=2003 |volume=100 |issue=10 |pages=5591–6 |doi=10.1073/pnas.1031596100 |pmid=16576753 |pmc=156245 |bibcode=2003PNAS..100.5591D |doi-access=free }}</ref> It tends to yield results of much higher quality than LLE. Unfortunately, it has a very costly computational complexity, so it is not well-suited for heavily sampled manifolds. It has no internal model.

==== Modified Locally-Linear Embedding (MLLE) ====

Modified LLE (MLLE)<ref>{{cite journal |first1=Z. |last1=Zhang |first2=J. |last2=Wang |title=MLLE: Modified Locally Linear Embedding Using Multiple Weights |journal=NIPS'06: Proceedings of the 19th International Conference on Neural Information Processing Systems |year=2006 |pages=1593–1600 |url=https://dl.acm.org/doi/abs/10.5555/2976456.2976656 }}</ref> is another LLE variant that uses multiple weights in each neighborhood to address the local weight-matrix conditioning problem that leads to distortions in LLE maps. Loosely speaking, the multiple weights are the local [[orthogonal projection]]s of the original weights produced by LLE. The creators of this regularised variant are also the authors of Local Tangent Space Alignment (LTSA), which is implicit in the MLLE formulation: globally optimising the orthogonal projections of each weight vector, in essence, aligns the local tangent spaces of every data point. The theoretical and empirical implications of correctly applying this algorithm are far-reaching.<ref name=borntolose>{{cite journal |author-last=Sidhu |author-first=Gagan |title=Locally Linear Embedding and fMRI feature selection in psychiatric classification |journal=IEEE Journal of Translational Engineering in Health and Medicine |year=2019 |volume=7 |pages=1–11 |doi=10.1109/JTEHM.2019.2936348 |pmid=31497410 |pmc=6726465 |arxiv=1908.06319 |s2cid=201832756}}</ref>
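For experimentation, standard LLE and both variants described above are implemented in scikit-learn's <code>LocallyLinearEmbedding</code> class. The following brief usage sketch compares the three on a synthetic data set; the swiss-roll data, neighbor count, and output dimension are illustrative choices, not values prescribed by the algorithms.

<syntaxhighlight lang="python">
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Synthetic manifold data; sample size and seed are arbitrary choices.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

for method in ("standard", "hessian", "modified"):
    # Hessian LLE requires n_neighbors > n_components * (n_components + 3) / 2.
    lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                                 method=method, random_state=0)
    Y = lle.fit_transform(X)
    print(f"{method}: embedding shape {Y.shape}, "
          f"reconstruction error {lle.reconstruction_error_:.3g}")
</syntaxhighlight>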