== Further considerations ==

The singular values (in '''Σ''') are the square roots of the [[eigenvalue]]s of the matrix '''X'''<sup>T</sup>'''X'''. Each eigenvalue is proportional to the portion of the "variance" (more correctly, of the sum of the squared distances of the points from their multidimensional mean) that is associated with each eigenvector. The sum of all the eigenvalues is equal to the sum of the squared distances of the points from their multidimensional mean. PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information (see [[Principal component analysis#PCA and information theory|below]]). PCA is often used in this manner for [[dimensionality reduction]]. PCA has the distinction of being the optimal orthogonal transformation for keeping the subspace that has the largest "variance" (as defined above). This advantage, however, comes at the price of greater computational requirements when compared, for example, and where applicable, to the [[discrete cosine transform]], and in particular to the DCT-II, which is simply known as the "DCT". [[Nonlinear dimensionality reduction]] techniques tend to be more computationally demanding than PCA.

PCA is sensitive to the scaling of the variables. Mathematically, this sensitivity comes from the way a rescaling changes the sample covariance matrix that PCA diagonalises.<ref name="Holmes2023">{{cite book |last=Holmes |first=Mark H. |title=Introduction to Scientific Computing and Data Analysis |series=Texts in Computational Science and Engineering |edition=2nd |year=2023 |publisher=Springer |isbn=978-3-031-22429-4 |pages=475–490}}</ref> Let <math>\mathbf X_\text{c}</math> be the ''centered'' data matrix (''n'' rows, ''p'' columns) and define the covariance <math>\Sigma = \frac{1}{n}\,\mathbf X_\text{c}^{\mathsf T}\mathbf X_\text{c}.</math> If the <math>j</math>-th variable is multiplied by a factor <math>\alpha_j</math> we obtain <math>\mathbf X_\text{c}^{(\alpha)} = \mathbf X_\text{c}D,\qquad D = \operatorname{diag}(\alpha_1,\ldots,\alpha_p).</math> Hence the new covariance is <math>\Sigma^{(\alpha)} = D^{\mathsf T}\,\Sigma\,D.</math> Because <math>\Sigma^{(\alpha)}</math> has, in general, different eigenvalues and eigenvectors from <math>\Sigma</math>, the principal axes rotate toward any column whose variance has been inflated, exactly as the two-variable example below illustrates.

If we have just two variables and they have the same [[sample variance]] and are completely correlated, then the PCA will entail a rotation by 45° and the "weights" (they are the cosines of rotation) for the two variables with respect to the principal component will be equal. But if we multiply all values of the first variable by 100, then the first principal component will be almost the same as that variable, with a small contribution from the other variable, whereas the second component will be almost aligned with the second original variable. This means that whenever the different variables have different units (like temperature and mass), PCA is a somewhat arbitrary method of analysis. (Different results would be obtained if one used Fahrenheit rather than Celsius, for example.)
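The scaling effect just described can be reproduced in a few lines of NumPy. The sketch below is an illustrative example only (the data are synthetic and the helper <code>leading_axis</code> is a name introduced here, not a library function): it computes the leading principal axis of two perfectly correlated variables, first with equal scales and then after multiplying the first variable by 100.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Two perfectly correlated variables with equal sample variance.
t = rng.standard_normal(1000)
X = np.column_stack([t, t])


def leading_axis(data):
    """Return the first principal axis (unit eigenvector of the largest eigenvalue)."""
    centered = data - data.mean(axis=0)
    cov = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    v = eigvecs[:, -1]                      # eigenvector of the largest eigenvalue
    return v if v[0] >= 0 else -v           # fix the arbitrary sign of the eigenvector


# Equal scales: the leading axis is the 45-degree direction, roughly [0.707, 0.707].
print(leading_axis(X))

# First variable multiplied by 100: the leading axis collapses onto that variable.
print(leading_axis(X * np.array([100.0, 1.0])))
</syntaxhighlight>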
Pearson's original paper was entitled "On Lines and Planes of Closest Fit to Systems of Points in Space" – "in space" implies physical Euclidean space, where such concerns do not arise. One way of making the PCA less arbitrary is to use variables scaled so as to have unit variance, by standardizing the data and hence using the autocorrelation matrix instead of the autocovariance matrix as a basis for PCA. However, this compresses (or expands) the fluctuations in all dimensions of the signal space to unit variance.

Classical PCA assumes the cloud of points has already been translated so that its centroid is at the origin.<ref name="Holmes2023" /> Write each observation as <math>\mathbf q_i = \boldsymbol\mu + \mathbf z_i,\qquad \boldsymbol\mu = \tfrac{1}{n}\sum_{i=1}^{n}\mathbf q_i.</math> Without subtracting <math>\boldsymbol\mu</math> we are in effect diagonalising <math>\Sigma_{\text{unc}} \;=\; \boldsymbol\mu\boldsymbol\mu^{\mathsf T} \;+\;\tfrac{1}{n}\,\mathbf Z^{\mathsf T}\mathbf Z,</math> where <math>\mathbf Z</math> is the centered matrix whose <math>i</math>-th row is <math>\mathbf z_i</math> (the cross terms vanish because the columns of <math>\mathbf Z</math> have zero mean). The rank-one term <math>\boldsymbol\mu\boldsymbol\mu^{\mathsf T}</math> often dominates, forcing the leading eigenvector to point almost exactly toward the mean and obliterating any structure in the centered part <math>\mathbf Z</math>. After mean subtraction that term vanishes and the principal axes align with the true directions of maximal variance.

Mean-centering is unnecessary if performing a principal components analysis on a correlation matrix, as the data are already centered after calculating correlations. Correlations are derived from the cross-product of two standard scores (Z-scores) or statistical moments (hence the name: ''Pearson Product-Moment Correlation''). Also see the article by Kromrey & Foster-Johnson (1998) on ''"Mean-centering in Moderated Regression: Much Ado About Nothing"''. Since [[Covariance matrix#Relation to the correlation matrix|covariances are correlations of normalized variables]] ([[Standard score#Calculation|Z- or standard-scores]]), a PCA based on the correlation matrix of '''X''' is [[Equality (mathematics)|equal]] to a PCA based on the covariance matrix of '''Z''', the standardized version of '''X'''.

PCA is a popular primary technique in [[pattern recognition]]. It is not, however, optimized for class separability.<ref>{{Cite book| last=Fukunaga|first=Keinosuke|author-link=Keinosuke Fukunaga | title = Introduction to Statistical Pattern Recognition |publisher=Elsevier | year = 1990 | url=https://dl.acm.org/doi/book/10.5555/92131| isbn=978-0-12-269851-4}}</ref> However, it has been used to quantify the distance between two or more classes by calculating the center of mass for each class in principal component space and reporting the Euclidean distance between the centers of mass of two or more classes.<ref>{{cite journal|last1=Alizadeh|first1=Elaheh|last2=Lyons|first2=Samanthe M|last3=Castle|first3=Jordan M|last4=Prasad|first4=Ashok|title=Measuring systematic changes in invasive cancer cell shape using Zernike moments|journal=Integrative Biology|date=2016|volume=8|issue=11|pages=1183–1193|doi=10.1039/C6IB00100A|pmid=27735002|url=https://pubs.rsc.org/en/Content/ArticleLanding/2016/IB/C6IB00100A}}</ref> [[Linear discriminant analysis]] is an alternative which is optimized for class separability.
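Two of the claims above can be checked numerically: without mean subtraction the leading eigenvector points almost exactly toward the mean, and a PCA based on the correlation matrix of '''X''' coincides with a PCA based on the covariance matrix of the standardized matrix '''Z'''. The NumPy sketch below is illustrative only; the data are synthetic and are not taken from the cited sources.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data with a large offset from the origin and modest spread around it.
mu = np.array([50.0, -30.0, 80.0])
X = mu + rng.standard_normal((500, 3)) @ np.diag([3.0, 1.0, 0.5])

# (a) Without mean subtraction we diagonalise mu mu^T + (1/n) Z^T Z;
#     the rank-one term dominates, so the leading eigenvector points toward the mean.
second_moment = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(second_moment)
v1 = eigvecs[:, -1]
print(abs(v1 @ (mu / np.linalg.norm(mu))))  # close to 1.0

# (b) The correlation matrix of X equals the covariance matrix of Z,
#     the standardized (z-scored) version of X, so the two PCAs coincide.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.allclose(np.corrcoef(X, rowvar=False), np.cov(Z, rowvar=False, bias=True)))
</syntaxhighlight>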