==Types==
MDS algorithms fall into a [[Taxonomy (general)|taxonomy]], depending on the meaning of the input matrix:

===Classical multidimensional scaling===
It is also known as '''Principal Coordinates Analysis''' (PCoA), Torgerson Scaling or Torgerson–Gower scaling. It takes an input matrix giving dissimilarities between pairs of items and outputs a coordinate matrix whose configuration minimizes a [[loss function]] called ''strain'',<ref name="borg"/> which is given by
<math display=block>\text{Strain}_D(x_1,x_2,...,x_n)=\Biggl(\frac{ \sum_{i,j} \bigl( b_{ij} - x_i^T x_j \bigr)^2}{\sum_{i,j}b_{ij}^2} \Biggr)^{1/2},</math>
where <math>x_{i}</math> denote vectors in ''N''-dimensional space, <math>x_i^T x_j</math> denotes the scalar product between <math>x_{i}</math> and <math>x_{j}</math>, and <math>b_{ij}</math> are the elements of the matrix <math>B</math> defined in step 2 of the following algorithm, which are computed from the distances.

: '''Steps of a Classical MDS algorithm:'''
: Classical MDS uses the fact that the coordinate matrix <math>X</math> can be derived by [[Eigendecomposition of a matrix|eigenvalue decomposition]] from <math display="inline">B=XX'</math>, and the matrix <math display="inline">B</math> can be computed from the proximity matrix <math display="inline">D</math> by using double centering.<ref>Wickelmaier, Florian. "An introduction to MDS." ''Sound Quality Research Unit, Aalborg University, Denmark'' (2003): 46</ref>
:# Set up the squared proximity matrix <math display="inline">D^{(2)}=[d_{ij}^2]</math>.
:# Apply double centering: <math display="inline">B=-\frac{1}{2}CD^{(2)}C</math> using the [[centering matrix]] <math display="inline">C=I-\frac{1}{n}J_n</math>, where <math display="inline">n</math> is the number of objects, <math display="inline">I</math> is the <math display="inline">n \times n</math> identity matrix, and <math display="inline">J_{n}</math> is an <math display="inline">n\times n</math> matrix of all ones.
:# Determine the <math display="inline">m</math> largest [[Eigenvalues and eigenvectors|eigenvalues]] <math display="inline">\lambda_1,\lambda_2,...,\lambda_m</math> and corresponding [[Eigenvalues and eigenvectors|eigenvectors]] <math display="inline">e_1,e_2,...,e_m</math> of <math display="inline">B</math> (where <math display="inline">m</math> is the number of dimensions desired for the output).
:# Now, <math display="inline">X=E_m\Lambda_m^{1/2}</math>, where <math display="inline">E_m</math> is the matrix of <math display="inline">m</math> eigenvectors and <math display="inline">\Lambda_m</math> is the [[diagonal matrix]] of the corresponding <math display="inline">m</math> eigenvalues of <math display="inline">B</math>.
: Classical MDS assumes metric distances, so it is not applicable to direct dissimilarity ratings.
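The steps above can be illustrated with a short numerical sketch. The following assumes NumPy is available and that <code>D</code> is a symmetric matrix of pairwise distances; the function name <code>classical_mds</code> is illustrative only and not taken from the cited sources.

<syntaxhighlight lang="python">
import numpy as np

def classical_mds(D, m=2):
    """Minimal sketch of classical MDS (Principal Coordinates Analysis)."""
    n = D.shape[0]
    D2 = D ** 2                                  # step 1: squared proximity matrix
    C = np.eye(n) - np.ones((n, n)) / n          # centering matrix C = I - J/n
    B = -0.5 * C @ D2 @ C                        # step 2: double centering
    lam, E = np.linalg.eigh(B)                   # step 3: eigendecomposition of B
    idx = np.argsort(lam)[::-1][:m]              # keep the m largest eigenvalues
    # step 4: X = E_m * Lambda_m^(1/2); negative eigenvalues are clamped to zero
    # purely as a numerical safeguard
    return E[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))
</syntaxhighlight>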
=== Metric multidimensional scaling (mMDS) ===
It is a superset of classical MDS that generalizes the optimization procedure to a variety of loss functions and input matrices of known distances with weights and so on. A useful loss function in this context is called ''stress'', which is often minimized using a procedure called [[stress majorization]].

Metric MDS minimizes the cost function called "stress", which is a residual sum of squares:
<blockquote><math>\text{Stress}_D(x_1,x_2,...,x_n)=\sqrt{\sum_{i\ne j=1,...,n}\bigl(d_{ij}-\|x_i-x_j\|\bigr)^2}.</math></blockquote>
Metric scaling uses a power transformation with a user-controlled exponent <math display="inline">p</math>: <math display="inline">d_{ij}^p</math> and <math display="inline">-d_{ij}^{2p}</math> for distance. In classical scaling <math display="inline">p=1.</math> Non-metric scaling is defined by the use of isotonic regression to nonparametrically estimate a transformation of the dissimilarities.

===Non-metric multidimensional scaling (NMDS)===
In contrast to metric MDS, non-metric MDS finds both a [[non-parametric]] [[monotonic]] relationship between the dissimilarities in the item-item matrix and the Euclidean distances between items, and the location of each item in the low-dimensional space.

Let <math>d_{ij}</math> be the dissimilarity between points <math>i, j</math>. Let <math>\hat d_{ij} = \| x_i - x_j\|</math> be the Euclidean distance between embedded points <math>x_i, x_j</math>.

Now, for each choice of the embedded points <math>x_i</math> and a monotonically increasing function <math>f</math>, define the "stress" function:
<blockquote><math>S(x_1, ..., x_n; f)=\sqrt{\frac{\sum_{i<j}\bigl(f(d_{ij})-\hat d_{ij}\bigr)^2}{\sum_{i<j} \hat d_{ij}^2}}.</math></blockquote>
The factor of <math>\sum_{i<j} \hat d_{ij}^2</math> in the denominator is necessary to prevent a "collapse": if we instead defined <math>S=\sqrt{\sum_{i<j}\bigl(f(d_{ij})-\hat d_{ij}\bigr)^2}</math>, it could be trivially minimized by setting <math>f = 0</math> and collapsing every point to the same point. A few variants of this cost function exist. MDS programs automatically minimize stress in order to obtain the MDS solution.

The core of a non-metric MDS algorithm is a twofold optimization process. First, the optimal monotonic transformation of the proximities has to be found. Secondly, the points of a configuration have to be optimally arranged, so that their distances match the scaled proximities as closely as possible. NMDS needs to optimize two objectives simultaneously. This is usually done iteratively:
:# Initialize <math>x_i</math> randomly, e.g. by sampling from a normal distribution.
:# Do until a stopping criterion is met (for example, <math>S < \epsilon</math>):
:## Solve for <math>f = \arg\min_f S(x_1, ..., x_n ; f)</math> by [[isotonic regression]].
:## Solve for <math>x_1, ..., x_n = \arg\min_{x_1, ..., x_n} S(x_1, ..., x_n ; f)</math> by gradient descent or other methods.
:# Return <math>x_i</math> and <math>f</math>.

[[Louis Guttman]]'s smallest space analysis (SSA) is an example of a non-metric MDS procedure.
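The iterative procedure above can be sketched as follows, assuming NumPy, SciPy and scikit-learn are available. The monotone transformation is fitted with scikit-learn's <code>IsotonicRegression</code>, and the configuration update uses a general-purpose optimizer rather than a hand-derived gradient; the helper name <code>nmds</code> is illustrative only.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize
from sklearn.isotonic import IsotonicRegression

def _pairwise(X):
    """Euclidean distances between all rows of X."""
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

def _stress(X, f_of_d, iu):
    """Normalized stress between transformed dissimilarities and embedded distances."""
    dhat = _pairwise(X)[iu]
    return np.sqrt(np.sum((f_of_d - dhat) ** 2) / np.sum(dhat ** 2))

def nmds(D, dim=2, iters=50, seed=0):
    """Sketch of the alternating NMDS loop for a dissimilarity matrix D."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    iu = np.triu_indices(n, k=1)
    X = rng.normal(size=(n, dim))            # step 1: random initial configuration
    for _ in range(iters):                   # step 2: alternate the two sub-problems
        # 2a: optimal monotone transform f(d_ij) by isotonic regression
        f_of_d = IsotonicRegression().fit_transform(D[iu], _pairwise(X)[iu])
        # 2b: rearrange the configuration to match f(d_ij) with f held fixed
        res = minimize(lambda v: _stress(v.reshape(n, dim), f_of_d, iu), X.ravel())
        X = res.x.reshape(n, dim)
    return X
</syntaxhighlight>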
===Generalized multidimensional scaling (GMDS)===
{{main|Generalized multidimensional scaling}}
An extension of metric multidimensional scaling, in which the target space is an arbitrary smooth non-Euclidean space. In cases where the dissimilarities are distances on a surface and the target space is another surface, GMDS allows finding the minimum-distortion embedding of one surface into another.<ref name="bron">{{cite journal |vauthors=Bronstein AM, Bronstein MM, Kimmel R |title=Generalized multidimensional scaling: a framework for isometry-invariant partial surface matching |journal=Proc. Natl. Acad. Sci. U.S.A. |volume=103 |issue=5 |pages=1168–72 |date=January 2006 |pmid=16432211 |pmc=1360551 |doi=10.1073/pnas.0508601103 |bibcode=2006PNAS..103.1168B |doi-access=free }}</ref>

=== Super multidimensional scaling (SMDS) ===
An extension of MDS, known as Super MDS, incorporates both distance and angle information for improved source localization. Unlike traditional MDS, which uses only distance measurements, Super MDS processes both distance and angle-of-arrival (AOA) data algebraically (without iteration) to achieve better accuracy.<ref>{{cite conference |last1=de Abreu |first1=G. T. F. |last2=Destino |first2=G. |title=Super MDS: Source Location from Distance and Angle Information |conference=2007 IEEE Wireless Communications and Networking Conference |location=Hong Kong, China |pages=4430–4434 |year=2007 |doi=10.1109/WCNC.2007.807}}</ref>

The method proceeds in the following steps:
# '''Construct the reduced edge Gram kernel:''' For a network of <math>N</math> sources in an <math>\eta</math>-dimensional space, define the edge vectors as <math>v_{i} = x_{m} - x_{n}</math>. The dissimilarity is given by <math>k_{i,j} = \langle v_i, v_j \rangle</math>. Assemble these into the full kernel <math>K = VV^T</math>, and then form the reduced kernel using the <math>N-1</math> independent vectors: <math>\bar{K} = [V]_{(N-1)\times\eta}\ [V]_{(N-1)\times\eta}^T</math>.
# '''Eigendecomposition:''' Compute the eigendecomposition of <math>\bar{K}</math>.
# '''Estimate edge vectors:''' Recover the edge vectors as <math> \hat{V} = \Bigl( U_{M \times \eta}\, \Lambda^{\odot \frac{1}{2}}_{\eta \times \eta} \Bigr)^T </math>.
# '''Procrustes alignment:''' Align <math>\hat{V}</math> with <math>V</math> via a Procrustes transformation.
# '''Compute coordinates:''' Solve the following linear system to obtain the coordinate estimates:
#: <math>\begin{pmatrix} 1 \vline \mathbf{0}_{1 \times N-1} \\ \hline \mathbf{[C]}_{N-1 \times N} \end{pmatrix} \cdot \begin{pmatrix}\mathbf{x}_{1} \\ \hline[\mathbf{X}]_{N-1 \times \eta} \end{pmatrix}=\begin{pmatrix} \mathbf{x}_{1} \\ \hline[\mathbf{V}]_{N-1 \times \eta} \end{pmatrix}.</math>

This concise approach reduces the need for multiple anchors and enhances localization precision by leveraging angle constraints.
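The step structure above can be illustrated with a toy sketch under strong simplifying assumptions: all edge vectors are measured from a single reference node whose position is known, the ranges and angles of arrival are noiseless and two-dimensional, and the Procrustes alignment is performed against the true edge vectors purely for demonstration. This is an illustration of the listed steps, not the algorithm of the cited paper.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
eta, N = 2, 6
X_true = rng.uniform(0, 10, size=(N, eta))       # unknown source positions
V_true = X_true[1:] - X_true[0]                  # edge vectors from the reference node

# Measurements at the reference node: ranges and angles of arrival
d = np.linalg.norm(V_true, axis=1)
theta = np.arctan2(V_true[:, 1], V_true[:, 0])

# Step 1: reduced edge Gram kernel built from distance and angle information
K_bar = np.outer(d, d) * np.cos(theta[:, None] - theta[None, :])

# Step 2: eigendecomposition of the kernel
lam, U = np.linalg.eigh(K_bar)
idx = np.argsort(lam)[::-1][:eta]

# Step 3: estimated edge vectors (determined only up to an orthogonal transform)
V_hat = U[:, idx] * np.sqrt(lam[idx])

# Step 4: Procrustes alignment (here against the true edge vectors, for illustration)
R, _ = orthogonal_procrustes(V_hat, V_true)
V_hat = V_hat @ R

# Step 5: recover coordinates from the reference position and the edge vectors
X_hat = np.vstack([X_true[0], X_true[0] + V_hat])
print(np.allclose(X_hat, X_true, atol=1e-6))     # True in this noiseless toy example
</syntaxhighlight>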