=== Covariances ===
'''X'''<sup>T</sup>'''X''' itself can be recognized as proportional to the empirical sample [[covariance matrix]] of the dataset '''X'''.<ref name="Jolliffe2002"/>{{rp|30–31}}

The sample covariance ''Q'' between two of the different principal components over the dataset is given by:
:<math>\begin{align} Q(\mathrm{PC}_{(j)}, \mathrm{PC}_{(k)}) & \propto (\mathbf{X}\mathbf{w}_{(j)})^\mathsf{T} (\mathbf{X}\mathbf{w}_{(k)}) \\ & = \mathbf{w}_{(j)}^\mathsf{T} \mathbf{X}^\mathsf{T} \mathbf{X} \mathbf{w}_{(k)} \\ & = \mathbf{w}_{(j)}^\mathsf{T} \lambda_{(k)} \mathbf{w}_{(k)} \\ & = \lambda_{(k)} \mathbf{w}_{(j)}^\mathsf{T} \mathbf{w}_{(k)} \end{align}</math>
where the eigenvalue property of '''w'''<sub>(''k'')</sub> has been used to move from line 2 to line 3. However, eigenvectors '''w'''<sub>(''j'')</sub> and '''w'''<sub>(''k'')</sub> corresponding to eigenvalues of a symmetric matrix are orthogonal (if the eigenvalues are different), or can be orthogonalised (if the vectors happen to share an equal repeated value). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset.

Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix. In matrix form, the empirical covariance matrix for the original variables can be written
:<math>\mathbf{Q} \propto \mathbf{X}^\mathsf{T} \mathbf{X} = \mathbf{W} \mathbf{\Lambda} \mathbf{W}^\mathsf{T}</math>
The empirical covariance matrix between the principal components becomes
:<math>\mathbf{W}^\mathsf{T} \mathbf{Q} \mathbf{W} \propto \mathbf{W}^\mathsf{T} \mathbf{W} \, \mathbf{\Lambda} \, \mathbf{W}^\mathsf{T} \mathbf{W} = \mathbf{\Lambda} </math>
where '''Λ''' is the diagonal matrix of eigenvalues ''λ''<sub>(''k'')</sub> of '''X'''<sup>T</sup>'''X'''.
''λ''<sub>(''k'')</sub> is equal to the sum of the squares over the dataset associated with each component ''k'', that is, ''λ''<sub>(''k'')</sub> = Σ<sub>''i''</sub> ''t''<sub>''k''(''i'')</sub><sup>2</sup> = Σ<sub>''i''</sub> ('''x'''<sub>(''i'')</sub> ⋅ '''w'''<sub>(''k'')</sub>)<sup>2</sup>.
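The diagonalisation property above can be checked numerically: projecting centred data onto the eigenvectors of '''X'''<sup>T</sup>'''X''' yields scores whose cross-covariances vanish and whose sums of squares equal the eigenvalues. The following is a minimal NumPy sketch (the mixing matrix and sample size are arbitrary illustrative choices, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative dataset: 200 observations of 3 correlated variables
X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.5]])
X = X - X.mean(axis=0)  # centre the data

# Eigendecomposition of X^T X (proportional to the sample covariance matrix);
# columns of W are the eigenvectors w_(k)
eigvals, W = np.linalg.eigh(X.T @ X)

# Scores: the data expressed in principal-component coordinates, T = XW
T = X @ W

# W^T (X^T X) W should equal Lambda, the diagonal matrix of eigenvalues
C = T.T @ T
off_diag = C - np.diag(np.diag(C))
print(np.allclose(off_diag, 0.0, atol=1e-8))  # no covariance between distinct components
print(np.allclose(np.diag(C), eigvals))       # lambda_(k) = sum_i t_k(i)^2
```

Both checks print `True`: the off-diagonal covariances between different principal components are zero to numerical precision, and each diagonal entry matches the corresponding eigenvalue, as in the identity '''W'''<sup>T</sup>'''Q''''''W''' ∝ '''Λ''' above.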