=== The NIPALS method ===
''Non-linear iterative partial least squares (NIPALS)'' is a variant of the classical [[power iteration]] with matrix deflation by subtraction, implemented for computing the first few components in a principal component or [[partial least squares]] analysis. For very-high-dimensional datasets, such as those generated in the *omics sciences (for example, [[genomics]] and [[metabolomics]]), it is usually only necessary to compute the first few PCs. The NIPALS algorithm updates iterative approximations to the leading scores and loadings '''t'''<sub>1</sub> and '''r'''<sub>1</sub><sup>T</sup> by multiplying by '''X''' on the left and on the right at every [[power iteration]] step. Calculation of the covariance matrix {{math|'''X<sup>T</sup>X'''}} is thereby avoided, just as in a matrix-free implementation of the power iteration applied to {{math|'''X<sup>T</sup>X'''}}, which is based on the function evaluating the product {{math|1='''X<sup>T</sup>(X r)''' = '''((X r)<sup>T</sup>X)<sup>T</sup>'''}}. The matrix deflation by subtraction is performed by subtracting the outer product '''t'''<sub>1</sub>'''r'''<sub>1</sub><sup>T</sup> from '''X''', leaving a deflated residual matrix that is used to calculate the subsequent leading PCs.<ref>{{Cite journal | last1 = Geladi | first1 = Paul | last2 = Kowalski | first2 = Bruce | title = Partial Least Squares Regression: A Tutorial | journal = Analytica Chimica Acta | volume = 185 | pages = 1–17 | year = 1986 | doi = 10.1016/0003-2670(86)80028-9 | bibcode = 1986AcAC..185....1G }}</ref> For large data matrices, or matrices that have a high degree of column collinearity, NIPALS suffers from loss of orthogonality of the PCs due to machine-precision [[round-off errors]] accumulated in each iteration and in the matrix deflation by subtraction.<ref>{{cite book |last=Kramer |first=R. |year=1998 |title=Chemometric Techniques for Quantitative Analysis |publisher=CRC Press |location=New York |isbn=9780203909805 |url=https://books.google.com/books?id=iBpOzwAOfHYC}}</ref> A [[Gram–Schmidt]] re-orthogonalization algorithm can be applied to both the scores and the loadings at each iteration step to eliminate this loss of orthogonality.<ref>{{cite journal |first=M. |last=Andrecut |title=Parallel GPU Implementation of Iterative PCA Algorithms |journal=Journal of Computational Biology |volume=16 |issue=11 |year=2009 |pages=1593–1599 |doi=10.1089/cmb.2008.0221 |pmid=19772385 |arxiv=0811.1081 |s2cid=1362603 }}</ref> The reliance of NIPALS on single-vector multiplications cannot take advantage of high-level [[BLAS]] and results in slow convergence for clustered leading singular values; both of these deficiencies are resolved in more sophisticated matrix-free block solvers, such as the Locally Optimal Block Preconditioned Conjugate Gradient ([[LOBPCG]]) method.
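The alternating multiplication by '''X''' on the left and right, followed by deflation with the rank-one outer product, can be sketched in Python as follows. This is an illustrative sketch only, not a reference implementation; the function name, starting vector, iteration cap, and tolerance are choices made for the example, and the data is assumed to be column-centered inside the function.

```python
import numpy as np

def nipals_pca(X, n_components, n_iter=500, tol=1e-9):
    """Sketch of NIPALS PCA: power-iteration-style updates with
    matrix deflation by subtraction, never forming X^T X."""
    X = X - X.mean(axis=0)              # center columns
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, 0].copy()              # initial score: any nonzero column
        for _ in range(n_iter):
            r = X.T @ t                 # multiply by X on the left...
            r /= np.linalg.norm(r)      # ...and normalize the loading
            t_new = X @ r               # ...then by X on the right
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - np.outer(t, r)          # deflation: subtract t r^T
        scores.append(t)
        loadings.append(r)
    return np.array(scores).T, np.array(loadings)
```

On a well-separated spectrum the recovered loadings match the right singular vectors of the centered data up to sign; in floating point, repeated deflation gradually erodes this orthogonality, which is what the Gram–Schmidt re-orthogonalization mentioned above corrects.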