Partial least squares (PLS) regression is a statistical method that bears some relation to principal components regression and is a reduced-rank regression method;<ref>Template:Cite book</ref> instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space of maximum covariance (see below). Because both the X and Y data are projected to new spaces, the PLS family of methods are known as bilinear factor models. Partial least squares discriminant analysis (PLS-DA) is a variant used when the Y is categorical.
PLS is used to find the fundamental relations between two matrices (X and Y), i.e., it is a latent variable approach to modeling the covariance structures in these two spaces. A PLS model seeks the direction in the X space that explains the direction of maximum variance in the Y space. PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among X values. By contrast, standard regression will fail in these cases (unless it is regularized).
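As an illustration of this regime, the following sketch fits a PLS model with scikit-learn's PLSRegression on synthetic data with far more predictors than observations; the data-generating choices here are arbitrary assumptions, not from any cited source.
<syntaxhighlight lang="python">
# Minimal sketch: PLS fit where the predictors vastly outnumber the
# observations (p >> n), a setting where ordinary least squares breaks down.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p = 30, 200                       # far more predictors than observations
T = rng.normal(size=(n, 2))          # two latent factors drive both blocks
X = T @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))
y = T @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=n)

pls = PLSRegression(n_components=2).fit(X, y)
print(pls.score(X, y))               # R^2 close to 1 despite n < p
</syntaxhighlight>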
Partial least squares was introduced by the Swedish statistician Herman O. A. Wold, who then developed it with his son, Svante Wold. An alternative term for PLS is projection to latent structures,<ref name="wold_2001">Template:Cite journal</ref><ref>Template:Cite journal</ref> but the term partial least squares is still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology.
Core idea
We are given a sample of <math>n</math> paired observations <math>(\vec{x}_i, \vec{y}_i),\; i \in \{1,\ldots,n\}</math>. In the first step <math>j=1</math>, partial least squares regression searches for the normalized directions <math>\vec{p}_j</math>, <math>\vec{q}_j</math> that maximize the covariance<ref>See lecture https://www.youtube.com/watch?v=Px2otK2nZ1c&t=46s</ref>
- <math>\max_{\vec{p}_j, \vec{q}_j} \operatorname E [\underbrace{(\vec{p}_j\cdot \vec{X})}_{t_j} \underbrace{(\vec{q}_j\cdot \vec{Y})}_{u_j} ]. </math>
Below, the algorithm is stated in matrix notation.
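For centered data, the maximizing pair of directions is the leading singular-vector pair of the sample cross-covariance matrix, which suggests a short numerical sketch (synthetic data; not from the cited lecture):
<syntaxhighlight lang="python">
# Sketch of the first step above: the unit vectors maximizing the sample
# covariance between X- and Y-projections are the leading singular-vector
# pair of the cross-covariance matrix of the centered blocks.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Y = rng.normal(size=(100, 3)) + X[:, :3]     # correlate Y with part of X

Xc, Yc = X - X.mean(0), Y - Y.mean(0)        # center both blocks
U, s, Vt = np.linalg.svd(Xc.T @ Yc / (len(X) - 1))
p1, q1 = U[:, 0], Vt[0]                      # maximizing directions
t, u = Xc @ p1, Yc @ q1                      # first pair of scores
print(np.cov(t, u)[0, 1], s[0])              # the two numbers agree
</syntaxhighlight>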
Underlying model
The general underlying model of multivariate PLS with <math>\ell</math> components is
- <math>X = T P^\mathrm{T} + E</math>
- <math>Y = U Q^\mathrm{T} + F</math>
where
- X is an <math>n \times m</math> matrix of predictors
- Y is an <math>n \times p</math> matrix of responses
- T and U are <math>n \times \ell</math> matrices that are, respectively, projections of X (the X score, component or factor matrix) and projections of Y (the Y scores)
- P and Q are, respectively, <math>m \times \ell</math> and <math>p \times \ell</math> loading matrices
- and matrices E and F are the error terms, assumed to be independent and identically distributed random normal variables.
The decompositions of X and Y are made so as to maximise the covariance between T and U.
Note that this covariance is defined pair by pair: the covariance of column i of T (length n) with column i of U (length n) is maximized. Additionally, the covariance of column i of T with column j of U (with <math>i \ne j</math>) is zero.
In PLSR, the loadings are thus chosen so that the scores form an orthogonal basis. This is a major difference from PCA, where orthogonality is imposed on the loadings (and not the scores).
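This difference can be checked numerically. The following sketch assumes scikit-learn's PLSRegression and PCA estimators, whose fitted x_scores_ and components_ attributes hold the score matrix T and the PCA loadings respectively:
<syntaxhighlight lang="python">
# Numerical check: in PLS regression the X-scores are orthogonal
# (T^T T is diagonal), whereas in PCA the loadings are orthonormal.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 8))
Y = rng.normal(size=(50, 2)) + X[:, :2]

pls = PLSRegression(n_components=3).fit(X, Y)
T = pls.x_scores_
print(np.round(T.T @ T, 6))          # diagonal: scores are orthogonal

pca = PCA(n_components=3).fit(X)
W = pca.components_
print(np.round(W @ W.T, 6))          # identity: loadings are orthonormal
</syntaxhighlight>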
Algorithms
A number of variants of PLS exist for estimating the factor and loading matrices T, U, P and Q. Most of them construct estimates of the linear regression between X and Y as <math>Y = X \tilde{B} + \tilde{B}_0</math>. Some PLS algorithms are only appropriate for the case where Y is a column vector, while others deal with the general case of a matrix Y. Algorithms also differ on whether they estimate the factor matrix T as an orthogonal (that is, orthonormal) matrix or not.<ref> Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref> The final prediction will be the same for all these varieties of PLS, but the components will differ.
PLS iteratively repeats the following steps k times (for k components); a sketch of one such pass follows the list:
- finding the directions of maximal covariance in input and output space
- performing least squares regression on the input score
- deflating the input <math>X</math> and/or target <math>Y</math>
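In NumPy, one pass for a general matrix Y might look as follows. The helper name pls_step and the SVD-based choice of direction are illustrative assumptions rather than a specific published variant:
<syntaxhighlight lang="python">
# Illustrative sketch of one pass of the loop above: find the
# covariance-maximizing direction, regress on the input score, deflate X.
import numpy as np

def pls_step(X, Y):
    # direction of maximal covariance between the two blocks
    U, s, Vt = np.linalg.svd(X.T @ Y)
    w = U[:, 0]
    t = X @ w                        # input score
    t /= np.linalg.norm(t)
    p = X.T @ t                      # least-squares loading for X
    c = Y.T @ t                      # least-squares regression of Y on t
    X_deflated = X - np.outer(t, p)  # remove what this component explains
    return t, p, c, X_deflated

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 6)); X -= X.mean(0)
Y = rng.normal(size=(40, 2)) + X[:, :2]; Y -= Y.mean(0)
for _ in range(2):                   # extract two components
    t, p, c, X = pls_step(X, Y)
</syntaxhighlight>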
PLS1
PLS1 is a widely used algorithm appropriate for the vector y case. It estimates T as an orthonormal matrix. In pseudocode it is expressed below (capital letters are matrices, lower case letters are vectors if they are superscripted and scalars if they are subscripted).
function PLS1(X, y, ℓ)
    X^{(0)} ← X
    w^{(0)} ← X^T y / ‖X^T y‖, an initial estimate of w
    for k = 0 to ℓ − 1
        t^{(k)} ← X^{(k)} w^{(k)}
        t_k ← (t^{(k)})^T t^{(k)} (note this is a scalar)
        t^{(k)} ← t^{(k)} / t_k
        p^{(k)} ← (X^{(k)})^T t^{(k)}
        q_k ← y^T t^{(k)} (note this is a scalar)
        if q_k = 0
            ℓ ← k, break the for loop
        if k < (ℓ − 1)
            X^{(k+1)} ← X^{(k)} − t_k t^{(k)} (p^{(k)})^T
            w^{(k+1)} ← (X^{(k+1)})^T y
    end for
    define W to be the matrix with columns w^{(0)}, …, w^{(ℓ−1)}; form the matrix P and the vector q analogously from the p^{(k)} and q_k
    B ← W (P^T W)^{−1} q
    B_0 ← q_0 − (p^{(0)})^T B
    return B, B_0
This form of the algorithm does not require centering of the input X and y, as this is performed implicitly by the algorithm. This algorithm features 'deflation' of the matrix X (subtraction of <math>t_k t^{(k)} {p^{(k)}}^\mathrm{T}</math>), but deflation of the vector y is not performed, as it is not necessary (it can be proved that deflating y yields the same results as not deflating<ref>Template:Cite journal</ref>). The user-supplied variable <math>\ell</math> is the limit on the number of latent factors in the regression; if it equals the rank of the matrix X, the algorithm will yield the least squares regression estimates for B and <math>B_0</math>.
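The pseudocode above translates almost line for line into NumPy. The following sketch mirrors it; the function name pls1 and the absence of numerical safeguards are illustrative choices:
<syntaxhighlight lang="python">
# Direct NumPy transcription of the PLS1 pseudocode above. As in the
# pseudocode, only the initial weight vector is normalized.
import numpy as np

def pls1(X, y, ell):
    W, P, q = [], [], []
    w = X.T @ y
    w /= np.linalg.norm(w)                  # initial estimate of w
    Xk = X.copy()
    for k in range(ell):
        t = Xk @ w
        tk = t @ t                          # scalar
        t = t / tk
        p = Xk.T @ t
        qk = y @ t                          # scalar
        if qk == 0:
            break                           # ℓ ← k, stop early
        W.append(w); P.append(p); q.append(qk)
        if k < ell - 1:
            Xk = Xk - tk * np.outer(t, p)   # deflation of X
            w = Xk.T @ y
    W, P, q = np.column_stack(W), np.column_stack(P), np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    B0 = q[0] - P[:, 0] @ B
    return B, B0
</syntaxhighlight>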
Extensions
OPLS
In 2002, a new method called orthogonal projections to latent structures (OPLS) was published. In OPLS, continuous variable data is separated into predictive and uncorrelated (orthogonal) information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictivity, of the PLS models.<ref>Template:Cite journal </ref> Similarly, OPLS-DA (discriminant analysis) may be applied when working with discrete variables, as in classification and biomarker studies. A sketch of the Y-orthogonal filtering step appears after the model equations below.
The general underlying model of OPLS is
- <math>X = T P^\mathrm{T} +T_\text{Y-orth} P^\mathrm{T}_\text{Y-orth} + E</math>
- <math>Y = U Q^\mathrm{T} + F</math>
or in O2-PLS<ref>Eriksson, S. Wold, and J. Trygg. "O2PLS® for improved analysis and visualization of complex data." https://www.dynacentrix.com/telecharg/SimcaP/O2PLS.pdf</ref>
- <math>X = T P^\mathrm{T} +T_\text{Y-orth} P^\mathrm{T}_\text{Y-orth} + E</math>
- <math>Y = U Q^\mathrm{T} +U_\text{X-orth} Q^\mathrm{T}_\text{X-orth} + F</math>
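A minimal, illustrative sketch of the Y-orthogonal filtering step for a single response vector y, loosely following the first model above; the function name and the one-component structure are assumptions, not a faithful reimplementation of the cited sources:
<syntaxhighlight lang="python">
# Remove one Y-orthogonal component from X before ordinary PLS,
# in the spirit of X = T P^T + T_Y-orth P_Y-orth^T + E.
import numpy as np

def remove_y_orthogonal(X, y):
    w = X.T @ y
    w /= np.linalg.norm(w)                # predictive weight direction
    t = X @ w
    p = X.T @ t / (t @ t)                 # X loading for the predictive score
    w_orth = p - (w @ p) * w              # loading part orthogonal to w
    w_orth /= np.linalg.norm(w_orth)
    t_orth = X @ w_orth                   # Y-orthogonal score (T_Y-orth)
    p_orth = X.T @ t_orth / (t_orth @ t_orth)
    return X - np.outer(t_orth, p_orth)   # filtered X
</syntaxhighlight>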
L-PLS
Another extension of PLS regression, named L-PLS for its L-shaped matrices, connects three related data blocks to improve predictability.<ref>Template:Cite journal</ref> In brief, a new Z matrix, with the same number of columns as the X matrix, is added to the PLS regression analysis and may be suitable for including additional background information on the interdependence of the predictor variables.
3PRF
In 2015 partial least squares was related to a procedure called the three-pass regression filter (3PRF).<ref>Template:Cite journal</ref> When the number of observations and the number of variables are both large, the 3PRF (and hence PLS) is asymptotically normal for the "best" forecast implied by a linear latent factor model. In stock market data, PLS has been shown to provide accurate out-of-sample forecasts of returns and cash-flow growth.<ref>Template:Cite journal</ref>
Partial least squares SVD
A PLS version based on singular value decomposition (SVD) provides a memory-efficient implementation that can be used to address high-dimensional problems, such as relating millions of genetic markers to thousands of imaging features in imaging genetics, on consumer-grade hardware.<ref>Template:Cite journal</ref>
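A sketch of how such an implementation might be organized: the cross-product matrix <math>X^\mathrm{T} Y</math> is accumulated over row blocks so the full X never resides in memory, and its SVD yields the weight vectors. The chunked interface here is an assumption, not the cited implementation:
<syntaxhighlight lang="python">
# PLS-SVD-style sketch with chunked accumulation of X^T Y.
import numpy as np

def pls_svd(chunks, k):
    """chunks yields (X_block, Y_block) pairs of centered row blocks."""
    C = None
    for Xb, Yb in chunks:
        part = Xb.T @ Yb                  # contribution of this row block
        C = part if C is None else C + part
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k].T      # X weights, singular values, Y weights
</syntaxhighlight>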
PLS correlation
PLS correlation (PLSC) is another methodology related to PLS regression,<ref name=":0">Template:Cite journal</ref> which has been used in neuroimaging<ref name=":0" /><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref> and sport science,<ref>Template:Cite journal</ref> to quantify the strength of the relationship between data sets. Typically, PLSC divides the data into two blocks (sub-groups) each containing one or more variables, and then uses singular value decomposition (SVD) to establish the strength of any relationship (i.e. the amount of shared information) that might exist between the two component sub-groups.<ref name=":1">Template:Citation</ref> It does this by using SVD to determine the inertia (i.e. the sum of the singular values) of the covariance matrix of the sub-groups under consideration.<ref name=":1" /><ref name=":0" />
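A sketch of the computation described above; z-scoring each block is one common convention, and published PLSC variants differ in normalization:
<syntaxhighlight lang="python">
# PLSC sketch: SVD of the cross-block correlation matrix; the inertia is
# the sum of the singular values.
import numpy as np

def plsc(X, Y):
    Zx = (X - X.mean(0)) / X.std(0, ddof=1)   # standardize block 1
    Zy = (Y - Y.mean(0)) / Y.std(0, ddof=1)   # standardize block 2
    R = Zx.T @ Zy / (len(X) - 1)              # cross-block correlations
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    inertia = s.sum()                         # total shared information
    return U, s, Vt.T, inertia
</syntaxhighlight>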
See also
- Canonical correlation
- Data mining
- Deming regression
- Feature extraction
- Machine learning
- Partial least squares path modeling
- Principal component analysis
- Regression analysis
- Total sum of squares
- Projection pursuit regression
References
External links
- A short introduction to PLS regression and its history
- Video: Derivation of PLS by Prof. H. Harry Asada