== Details ==
PCA is defined as an [[orthogonal transformation|orthogonal]] [[linear transformation]] on a real [[inner product space]] that transforms the data to a new [[coordinate system]] such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.<ref name="Jolliffe2002"/>

Consider an <math>n \times p</math> data [[Matrix (mathematics)|matrix]], '''X''', with column-wise zero [[empirical mean]] (the sample mean of each column has been shifted to zero), where each of the ''n'' rows represents a different repetition of the experiment, and each of the ''p'' columns gives a particular kind of feature (say, the results from a particular sensor).

Mathematically, the transformation is defined by a set of size <math>l</math> of ''p''-dimensional vectors of weights or coefficients <math>\mathbf{w}_{(k)} = (w_1, \dots, w_p)_{(k)} </math> that map each row vector <math>\mathbf{x}_{(i)} = (x_1, \dots, x_p)_{(i)}</math> of '''X''' to a new vector of principal component ''scores'' <math>\mathbf{t}_{(i)} = (t_1, \dots, t_l)_{(i)}</math>, given by
:<math>{t_{k}}_{(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)} \qquad \mathrm{for} \qquad i = 1,\dots,n \qquad k = 1,\dots,l </math>
in such a way that the individual variables <math>t_1, \dots, t_l</math> of '''t''' considered over the data set successively inherit the maximum possible variance from '''X''', with each coefficient vector '''w''' constrained to be a [[unit vector]] (where <math>l</math> is usually selected to be strictly less than <math>p</math> to reduce dimensionality).

The above may equivalently be written in matrix form as
:<math>\mathbf{T} = \mathbf{X} \mathbf{W}</math>
where <math>{\mathbf{T}}_{ik} = {t_{k}}_{(i)}</math>, <math>{\mathbf{X}}_{ij} = {x_{j}}_{(i)}</math>, and <math>{\mathbf{W}}_{jk} = {w_{j}}_{(k)}</math>.

=== First component ===
In order to maximize variance, the first weight vector '''w'''<sub>(1)</sub> thus has to satisfy
:<math>\mathbf{w}_{(1)} = \arg\max_{\Vert \mathbf{w} \Vert = 1} \,\left\{ \sum_i (t_1)^2_{(i)} \right\} = \arg\max_{\Vert \mathbf{w} \Vert = 1} \,\left\{ \sum_i \left(\mathbf{x}_{(i)} \cdot \mathbf{w} \right)^2 \right\}</math>
Equivalently, writing this in matrix form gives
:<math>\mathbf{w}_{(1)} = \arg\max_{\left\| \mathbf{w} \right\| = 1} \left\{ \left\| \mathbf{Xw} \right\|^2 \right\} = \arg\max_{\left\| \mathbf{w} \right\| = 1} \left\{ \mathbf{w}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X w} \right\}</math>
Since '''w'''<sub>(1)</sub> has been defined to be a unit vector, it equivalently also satisfies
:<math>\mathbf{w}_{(1)} = \arg\max \left\{ \frac{\mathbf{w}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X w}}{\mathbf{w}^\mathsf{T} \mathbf{w}} \right\}</math>
The quantity to be maximised can be recognised as a [[Rayleigh quotient]]. A standard result for a [[positive semidefinite matrix]] such as '''X'''<sup>T</sup>'''X''' is that the quotient's maximum possible value is the largest [[eigenvalue]] of the matrix, which occurs when '''''w''''' is the corresponding [[eigenvector]].

With '''w'''<sub>(1)</sub> found, the first principal component of a data vector '''x'''<sub>(''i'')</sub> can then be given as a score ''t''<sub>1(''i'')</sub> = '''x'''<sub>(''i'')</sub> ⋅ '''w'''<sub>(1)</sub> in the transformed coordinates, or as the corresponding vector in the original variables, {'''x'''<sub>(''i'')</sub> ⋅ '''w'''<sub>(1)</sub>} '''w'''<sub>(1)</sub>.
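For illustration, the first principal component can be obtained numerically as the leading eigenvector of '''X'''<sup>T</sup>'''X'''. The following is a minimal sketch, assuming [[NumPy]]; the array names (<code>X</code>, <code>w1</code>, <code>t1</code>) and the random test data are purely illustrative.

<syntaxhighlight lang="python">
# Minimal sketch: first principal component as the leading eigenvector of X^T X.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # n = 100 rows (observations), p = 5 columns (variables)
X = X - X.mean(axis=0)                 # column-wise zero empirical mean

vals, vecs = np.linalg.eigh(X.T @ X)   # eigh returns eigenvalues in ascending order
w1 = vecs[:, -1]                       # unit eigenvector for the largest eigenvalue
t1 = X @ w1                            # first principal component scores t_1(i) = x_(i) . w_(1)

# The largest eigenvalue equals the maximal sum of squared scores over unit vectors w.
print(vals[-1], (t1 ** 2).sum())       # the two numbers agree (up to rounding)
</syntaxhighlight>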
=== Further components ===
The ''k''-th component can be found by subtracting the first ''k'' − 1 principal components from '''X''':
:<math>\mathbf{\hat{X}}_k = \mathbf{X} - \sum_{s = 1}^{k - 1} \mathbf{X} \mathbf{w}_{(s)} \mathbf{w}_{(s)}^{\mathsf{T}} </math>
and then finding the weight vector which extracts the maximum variance from this new data matrix
:<math>\mathbf{w}_{(k)} = \mathop{\operatorname{arg\,max}}_{\left\| \mathbf{w} \right\| = 1} \left\{ \left\| \mathbf{\hat{X}}_{k} \mathbf{w} \right\|^2 \right\} = \arg\max \left\{ \tfrac{\mathbf{w}^\mathsf{T} \mathbf{\hat{X}}_{k}^\mathsf{T} \mathbf{\hat{X}}_{k} \mathbf{w}}{\mathbf{w}^\mathsf{T} \mathbf{w}} \right\}</math>
It turns out that this gives the remaining eigenvectors of '''X'''<sup>T</sup>'''X''', with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of '''X'''<sup>T</sup>'''X'''.

The ''k''-th principal component of a data vector '''x'''<sub>(''i'')</sub> can therefore be given as a score ''t''<sub>''k''(''i'')</sub> = '''x'''<sub>(''i'')</sub> ⋅ '''w'''<sub>(''k'')</sub> in the transformed coordinates, or as the corresponding vector in the space of the original variables, {'''x'''<sub>(''i'')</sub> ⋅ '''w'''<sub>(''k'')</sub>} '''w'''<sub>(''k'')</sub>, where '''w'''<sub>(''k'')</sub> is the ''k''-th eigenvector of '''X'''<sup>T</sup>'''X'''.

The full principal components decomposition of '''X''' can therefore be given as
:<math>\mathbf{T} = \mathbf{X} \mathbf{W}</math>
where '''W''' is a ''p''-by-''p'' matrix of weights whose columns are the eigenvectors of '''X'''<sup>T</sup>'''X'''. The transpose of '''W''' is sometimes called the [[whitening transformation|whitening or sphering transformation]]. Columns of '''W''' multiplied by the square roots of the corresponding eigenvalues, that is, eigenvectors scaled up by the square roots of the variances, are called ''loadings'' in PCA or in factor analysis.

=== Covariances ===
'''X'''<sup>T</sup>'''X''' itself can be recognized as proportional to the empirical sample [[covariance matrix]] of the dataset '''X'''<sup>T</sup>.<ref name="Jolliffe2002"/>{{rp|30–31}}

The sample covariance ''Q'' between two of the different principal components over the dataset is given by:
:<math>\begin{align} Q(\mathrm{PC}_{(j)}, \mathrm{PC}_{(k)}) & \propto (\mathbf{X}\mathbf{w}_{(j)})^\mathsf{T} (\mathbf{X}\mathbf{w}_{(k)}) \\ & = \mathbf{w}_{(j)}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X} \mathbf{w}_{(k)} \\ & = \mathbf{w}_{(j)}^\mathsf{T} \lambda_{(k)} \mathbf{w}_{(k)} \\ & = \lambda_{(k)} \mathbf{w}_{(j)}^\mathsf{T} \mathbf{w}_{(k)} \end{align}</math>
where the eigenvalue property of '''w'''<sub>(''k'')</sub> has been used to move from line 2 to line 3. However, eigenvectors '''w'''<sub>(''j'')</sub> and '''w'''<sub>(''k'')</sub> corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset.

Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix.
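The deflation construction above can also be sketched numerically. The following illustrative NumPy sketch (the function name <code>pca_by_deflation</code> is hypothetical) repeatedly removes the variance captured by each component before extracting the next one.

<syntaxhighlight lang="python">
# Sketch of the deflation described above: after removing the first k-1 components,
# the leading eigenvector of the deflated matrix gives the k-th weight vector.
import numpy as np

def pca_by_deflation(X, l):
    """Illustrative sketch: first l weight vectors and scores for a centred X."""
    n, p = X.shape
    X_hat = X.copy()
    W = np.zeros((p, l))
    for k in range(l):
        vals, vecs = np.linalg.eigh(X_hat.T @ X_hat)
        w = vecs[:, -1]                          # maximises ||X_hat w||^2 over unit vectors
        W[:, k] = w
        X_hat = X_hat - X_hat @ np.outer(w, w)   # subtract the k-th component
    T = X @ W                                    # scores; T.T @ T is (numerically) diagonal
    return W, T
</syntaxhighlight>

Because the columns of '''W''' returned here are orthonormal eigenvectors of '''X'''<sup>T</sup>'''X''', the off-diagonal entries of '''T'''<sup>T</sup>'''T''' vanish up to rounding, matching the covariance argument above.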
In matrix form, the empirical covariance matrix for the original variables can be written
:<math>\mathbf{Q} \propto \mathbf{X}^\mathsf{T} \mathbf{X} = \mathbf{W} \mathbf{\Lambda} \mathbf{W}^\mathsf{T}</math>
The empirical covariance matrix between the principal components becomes
:<math>\mathbf{W}^\mathsf{T} \mathbf{Q} \mathbf{W} \propto \mathbf{W}^\mathsf{T} \mathbf{W} \, \mathbf{\Lambda} \, \mathbf{W}^\mathsf{T} \mathbf{W} = \mathbf{\Lambda} </math>
where '''Λ''' is the diagonal matrix of eigenvalues ''λ''<sub>(''k'')</sub> of '''X'''<sup>T</sup>'''X'''. ''λ''<sub>(''k'')</sub> is equal to the sum of the squares over the dataset associated with each component ''k'', that is, ''λ''<sub>(''k'')</sub> = Σ<sub>''i''</sub> ''t''<sub>''k''</sub><sup>2</sup><sub>(''i'')</sub> = Σ<sub>''i''</sub> ('''x'''<sub>(''i'')</sub> ⋅ '''w'''<sub>(''k'')</sub>)<sup>2</sup>.

=== Dimensionality reduction ===
The transformation '''T''' = '''X''' '''W''' maps a data vector '''x'''<sub>(''i'')</sub> from an original space of ''p'' variables to a new space of ''p'' variables which are uncorrelated over the dataset.

To non-dimensionalize the centered data, let ''X<sub>c</sub>'' represent the characteristic values of data vectors ''X<sub>i</sub>'', given by:
* <math>\|X\|_{\infty}</math> (maximum norm),
* <math>\frac{1}{n} \|X\|_1</math> (mean absolute value), or
* <math>\frac{1}{\sqrt{n}} \|X\|_2</math> (normalized Euclidean norm),
for a dataset of size ''n''. These norms are used to transform the original space of variables ''x, y'' to a new space of uncorrelated variables ''p, q'' (given ''Y<sub>c</sub>'' with the same meaning), such that <math>p_i = \frac{X_i}{X_c}, \quad q_i = \frac{Y_i}{Y_c}</math>; and the new variables are linearly related as <math>q = \alpha p</math>. To find the optimal linear relationship, we minimize the total squared reconstruction error
:<math>E(\alpha) = \frac{1}{1 + \alpha^2} \sum_{i=1}^{n} (\alpha p_i - q_i)^2</math>
such that setting the derivative of the error function to zero (<math>E'(\alpha) = 0</math>) yields
:<math>\alpha = \frac{1}{2} \left( -\lambda \pm \sqrt{\lambda^2 + 4} \right)</math>
where
:<math>\lambda = \frac{p \cdot p - q \cdot q}{p \cdot q}.</math><ref name="Holmes2023" />

[[File:PCA of Haplogroup J using 37 STRs.png|thumb|right|A principal components analysis scatterplot of [[Y-STR]] [[haplotype]]s calculated from repeat-count values for 37 Y-chromosomal STR markers from 354 individuals.<br /> PCA has successfully found linear combinations of the markers that separate out different clusters corresponding to different lines of individuals' Y-chromosomal genetic descent.]]

Such [[dimensionality reduction]] can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting ''L'' = 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data is most spread out, so if the data contains [[Cluster analysis|clusters]] these too may be most spread out, and therefore most visible when plotted in a two-dimensional diagram; whereas if two directions through the data (or two of the original variables) are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlay each other, making them indistinguishable.
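A reduction to ''L'' = 2 components for plotting can be sketched in a few lines of NumPy (the function name <code>reduce_to_two</code> is illustrative): the full weight matrix diagonalises the empirical covariance, and only the first two columns are retained.

<syntaxhighlight lang="python">
# Sketch of dimensionality reduction to L = 2 components via the
# eigendecomposition of X^T X.
import numpy as np

def reduce_to_two(X):
    """Illustrative sketch: project centred data onto its first two components."""
    X = X - X.mean(axis=0)                    # centre the columns
    vals, vecs = np.linalg.eigh(X.T @ X)      # ascending eigenvalues
    order = np.argsort(vals)[::-1]            # re-order to descending variance
    W = vecs[:, order]                        # p-by-p weight matrix
    # W^T (X^T X) W is diagonal: the components are uncorrelated over the dataset.
    return X @ W[:, :2]                       # n-by-2 scores, e.g. for a scatterplot
</syntaxhighlight>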
Similarly, in [[regression analysis]], the larger the number of [[explanatory variable]]s allowed, the greater is the chance of [[overfitting]] the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called [[principal component regression]].

Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of '''T''' will also contain similarly identically distributed Gaussian noise (such a distribution is invariant under the effects of the matrix '''W''', which can be thought of as a high-dimensional rotation of the coordinate axes). However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less: the first few components achieve a higher [[signal-to-noise ratio]]. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction; while the later principal components may be dominated by noise, and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using [[Bootstrapping (statistics)#Parametric bootstrap|parametric bootstrap]], as an aid in determining how many principal components to retain.<ref>{{Cite journal |author=Forkman J., Josse, J., Piepho, H. P. |year=2019 |title=Hypothesis tests for principal component analysis when variables are standardized |journal=Journal of Agricultural, Biological, and Environmental Statistics |volume=24 |issue=2 |pages=289–308 |doi=10.1007/s13253-019-00355-5 |doi-access=free |bibcode=2019JABES..24..289F}}</ref>

=== Singular value decomposition ===
{{Main|Singular value decomposition}}
The principal components transformation can also be associated with another matrix factorization, the [[singular value decomposition]] (SVD) of '''X''',
:<math>\mathbf{X} = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^\mathsf{T}</math>
Here '''Σ''' is an ''n''-by-''p'' [[Diagonal matrix|rectangular diagonal matrix]] of positive numbers ''σ''<sub>(''k'')</sub>, called the singular values of '''X'''; '''U''' is an ''n''-by-''n'' matrix, the columns of which are orthogonal unit vectors of length ''n'', called the left singular vectors of '''X'''; and '''W''' is a ''p''-by-''p'' matrix whose columns are orthogonal unit vectors of length ''p'', called the right singular vectors of '''X'''.

In terms of this factorization, the matrix '''X'''<sup>T</sup>'''X''' can be written
:<math>\begin{align} \mathbf{X}^\mathsf{T}\mathbf{X} & = \mathbf{W}\mathbf{\Sigma}^\mathsf{T} \mathbf{U}^\mathsf{T} \mathbf{U}\mathbf{\Sigma}\mathbf{W}^\mathsf{T} \\ & = \mathbf{W}\mathbf{\Sigma}^\mathsf{T} \mathbf{\Sigma} \mathbf{W}^\mathsf{T} \\ & = \mathbf{W}\mathbf{\hat{\Sigma}}^2 \mathbf{W}^\mathsf{T} \end{align}</math>
where <math>\mathbf{\hat{\Sigma}}</math> is the square diagonal matrix with the singular values of '''X''' and the excess zeros chopped off, which satisfies <math>\mathbf{\hat{\Sigma}}^2 = \mathbf{\Sigma}^\mathsf{T} \mathbf{\Sigma}</math>.
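The identity '''X'''<sup>T</sup>'''X''' = '''W'''<math>\mathbf{\hat{\Sigma}}^2</math>'''W'''<sup>T</sup> can be checked numerically. The following minimal sketch assumes NumPy; the data and variable names are illustrative.

<syntaxhighlight lang="python">
# Sketch: verify X^T X = W Sigma_hat^2 W^T from the SVD of a centred X.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X = X - X.mean(axis=0)

U, sigma, Wt = np.linalg.svd(X, full_matrices=False)    # X = U diag(sigma) W^T
Sigma_hat = np.diag(sigma)                               # square diagonal matrix of singular values

print(np.allclose(X.T @ X, Wt.T @ Sigma_hat**2 @ Wt))    # True, as derived above
</syntaxhighlight>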
Comparison with the eigenvector factorization of '''X'''<sup>T</sup>'''X''' establishes that the right singular vectors '''W''' of '''X''' are equivalent to the eigenvectors of '''X'''<sup>T</sup>'''X''', while the singular values ''σ''<sub>(''k'')</sub> of <math>\mathbf{X}</math> are equal to the square roots of the eigenvalues ''λ''<sub>(''k'')</sub> of '''X'''<sup>T</sup>'''X'''.

Using the singular value decomposition the score matrix '''T''' can be written
:<math>\begin{align} \mathbf{T} & = \mathbf{X} \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^\mathsf{T} \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma} \end{align}</math>
so each column of '''T''' is given by one of the left singular vectors of '''X''' multiplied by the corresponding singular value. This form is also the [[polar decomposition]] of '''T'''.

Efficient algorithms exist to calculate the SVD of '''X''' without having to form the matrix '''X'''<sup>T</sup>'''X''', so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix,<ref>{{Cite book |last1=Boyd |first1=Stephen |last2=Vandenberghe |first2=Lieven |date=2004-03-08 |title=Convex Optimization |url=http://dx.doi.org/10.1017/cbo9780511804441 |publisher=Cambridge University Press |doi=10.1017/cbo9780511804441 |isbn=978-0-521-83378-3}}</ref> unless only a handful of components are required.

As with the eigen-decomposition, a truncated {{math|''n'' × ''L''}} score matrix '''T'''<sub>''L''</sub> can be obtained by considering only the first ''L'' largest singular values and their singular vectors:
:<math>\mathbf{T}_L = \mathbf{U}_L\mathbf{\Sigma}_L = \mathbf{X} \mathbf{W}_L </math>
The truncation of a matrix '''M''' or '''T''' using a truncated singular value decomposition in this way produces a truncated matrix that is the nearest possible matrix of [[Rank (linear algebra)|rank]] ''L'' to the original matrix, in the sense of the difference between the two having the smallest possible [[Frobenius norm]], a result known as the [[Low-rank approximation#Proof of Eckart–Young–Mirsky theorem (for Frobenius norm)|Eckart–Young theorem]] (1936).

<blockquote>
'''Theorem (Optimal ''k''-dimensional fit).''' Let '''P''' be an ''n''×''m'' data matrix whose columns have been mean-centered and scaled, and let <math>P = U \,\Sigma\, V^{T}</math> be its singular value decomposition. Then the best rank-''k'' approximation to '''P''' in the least-squares (Frobenius-norm) sense is <math>P_{k} = U_{k}\,\Sigma_{k}\,V_{k}^{T}</math>, where ''V<sub>k</sub>'' consists of the first ''k'' columns of ''V''. Moreover, the relative residual variance is <math>R(k)=\frac{\sum_{j=k+1}^{m}\sigma_{j}^{2}}{\sum_{j=1}^{m}\sigma_{j}^{2}}</math>.
</blockquote><ref name="Holmes2023" />
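The truncated score matrix '''T'''<sub>''L''</sub> and the relative residual variance ''R''(''k'') of the theorem above (with ''L'' playing the role of ''k'') can be computed directly from the SVD. The following is an illustrative NumPy sketch; the function name <code>truncated_scores</code> is hypothetical.

<syntaxhighlight lang="python">
# Sketch: truncated scores T_L = U_L Sigma_L = X W_L, plus the residual variance R(L).
import numpy as np

def truncated_scores(X, L):
    """Illustrative sketch: return T_L and the fraction of variance left out."""
    X = X - X.mean(axis=0)
    U, sigma, Wt = np.linalg.svd(X, full_matrices=False)
    T_L = U[:, :L] * sigma[:L]                        # equivalently X @ Wt[:L].T
    R = (sigma[L:] ** 2).sum() / (sigma ** 2).sum()   # relative residual variance R(L)
    return T_L, R
</syntaxhighlight>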