==Statistical model==

===Definition===

The model attempts to explain a set of <math>p</math> observations in each of <math>n</math> individuals with a set of <math>k</math> ''common factors'' (<math>f_{i,j}</math>) where there are fewer factors per unit than observations per unit (<math>k<p</math>). Each individual has <math>k</math> of their own common factors, and these are related to the observations via the factor ''loading matrix'' (<math>L \in \mathbb{R}^{p \times k}</math>), for a single observation, according to

: <math>x_{i,m} - \mu_{i} = l_{i,1} f_{1,m} + \dots + l_{i,k} f_{k,m} + \varepsilon_{i,m} </math>

where
* <math>x_{i,m}</math> is the value of the <math>i</math>th observation of the <math>m</math>th individual,
* <math>\mu_i</math> is the observation mean for the <math>i</math>th observation,
* <math>l_{i,j}</math> is the loading for the <math>i</math>th observation of the <math>j</math>th factor,
* <math>f_{j,m}</math> is the value of the <math>j</math>th factor of the <math>m</math>th individual, and
* <math>\varepsilon_{i,m}</math> is the <math>(i,m)</math>th ''unobserved stochastic error term'' with mean zero and finite variance.

In matrix notation

: <math>X - \Mu = L F + \varepsilon</math>

where observation matrix <math>X \in \mathbb{R}^{p \times n}</math>, loading matrix <math>L \in \mathbb{R}^{p \times k}</math>, factor matrix <math>F \in \mathbb{R}^{k \times n}</math>, error term matrix <math>\varepsilon \in \mathbb{R}^{p \times n}</math> and mean matrix <math>\Mu \in \mathbb{R}^{p \times n}</math> whereby the <math>(i,m)</math>th element is simply <math>\Mu_{i,m}=\mu_i</math>.

Also we will impose the following assumptions on <math>F</math>:
# <math>F</math> and <math>\varepsilon</math> are independent.
# <math>\mathrm{E}(F) = 0</math>, where <math>\mathrm{E}</math> is the [[Multivariate random variable#Expected value|expectation]].
# <math>\mathrm{Cov}(F)=I</math>, where <math>\mathrm{Cov}</math> is the [[covariance matrix]], to make sure that the factors are uncorrelated, and <math>I</math> is the [[identity matrix]].

Suppose <math>\mathrm{Cov}(X - \Mu)=\Sigma</math>. Then

: <math>\Sigma=\mathrm{Cov}(X - \Mu)=\mathrm{Cov}(LF + \varepsilon),\,</math>

and therefore, from conditions 1 and 2 imposed on <math>F</math> above, <math>\mathrm{E}[LF]=L\,\mathrm{E}[F]=0</math> and <math>\mathrm{Cov}(LF+\varepsilon)=\mathrm{Cov}(LF)+\mathrm{Cov}(\varepsilon)</math>, giving

: <math>\Sigma = L \mathrm{Cov}(F) L^T + \mathrm{Cov}(\varepsilon),\,</math>

or, setting <math>\Psi:=\mathrm{Cov}(\varepsilon)</math>,

: <math>\Sigma = LL^T + \Psi.\,</math>

For any [[orthogonal matrix]] <math>Q</math>, if we set <math>L^\prime=LQ</math> and <math>F^\prime=Q^T F</math>, the criteria for being factors and factor loadings still hold. Hence a set of factors and factor loadings is unique only up to an [[orthogonal transformation]].

===Example===

Suppose a psychologist has the hypothesis that there are two kinds of [[intelligence (trait)|intelligence]], "verbal intelligence" and "mathematical intelligence", neither of which is directly observed.{{Explanatory footnote|In this example, "verbal intelligence" and "mathematical intelligence" are latent variables. The fact that they're not directly observed is what makes them latent.|name=latent variables|group=note}} [[Evidence]] for the hypothesis is sought in the examination scores from each of 10 different academic fields of 1000 students. If each student is chosen randomly from a large [[population (statistics)|population]], then each student's 10 scores are random variables.
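To make the structure of the definition above concrete, the following minimal sketch (not part of the example; the loading values, uniquenesses and seed are arbitrary illustrations) simulates data from the model with <math>p=10</math> observed variables, <math>k=2</math> factors and <math>n=1000</math> individuals, and checks that the sample covariance approaches <math>LL^T + \Psi</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
p, k, n = 10, 2, 1000             # observed variables, factors, individuals

L = rng.normal(size=(p, k))       # hypothetical loading matrix
psi = rng.uniform(0.2, 1.0, p)    # hypothetical error (uniqueness) variances
mu = rng.normal(size=p)           # observation means

F = rng.normal(size=(k, n))                             # factors: mean 0, Cov = I
eps = rng.normal(size=(p, n)) * np.sqrt(psi)[:, None]   # independent errors
X = mu[:, None] + L @ F + eps                           # model: X - Mu = L F + eps

Sigma_model = L @ L.T + np.diag(psi)                    # implied covariance LL^T + Psi
Sigma_sample = np.cov(X)                                # sample covariance (rows = variables)
print(np.abs(Sigma_sample - Sigma_model).max())         # small for large n
</syntaxhighlight>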
The psychologist's hypothesis may say that for each of the 10 academic fields, the score averaged over the group of all students who share some common pair of values for verbal and mathematical "intelligences" is some [[Constant (mathematics)|constant]] times their level of verbal intelligence plus another constant times their level of mathematical intelligence, i.e., it is a linear combination of those two "factors". The numbers for a particular subject, by which the two kinds of intelligence are multiplied to obtain the expected score, are posited by the hypothesis to be the same for all intelligence level pairs, and are called the '''factor loadings''' for this subject.{{Clarify|date=July 2019}} For example, the hypothesis may hold that the predicted average student's aptitude in the field of [[astronomy]] is

:{10 × the student's verbal intelligence} + {6 × the student's mathematical intelligence}.

The numbers 10 and 6 are the factor loadings associated with astronomy. Other academic subjects may have different factor loadings.

Two students assumed to have identical degrees of verbal and mathematical intelligence may have different measured aptitudes in astronomy because individual aptitudes differ from average aptitudes (predicted above) and because of measurement error itself. Such differences make up what is collectively called the "error" – a statistical term that means the amount by which an individual, as measured, differs from what is average for or predicted by his or her levels of intelligence (see [[errors and residuals in statistics]]).

The observable data that go into factor analysis would be 10 scores of each of the 1000 students, a total of 10,000 numbers. The factor loadings and levels of the two kinds of intelligence of each student must be inferred from the data.

===Mathematical model of the same example===

In the following, matrices will be indicated by indexed variables. "Academic subject" indices will be indicated using letters <math>a</math>, <math>b</math> and <math>c</math>, with values running from <math>1</math> to <math>p</math>, which is equal to <math>10</math> in the above example. "Factor" indices will be indicated using letters <math>p</math>, <math>q</math> and <math>r</math>, with values running from <math>1</math> to <math>k</math>, which is equal to <math>2</math> in the above example. "Instance" or "sample" indices will be indicated using letters <math>i</math>, <math>j</math> and <math>k</math>, with values running from <math>1</math> to <math>N</math>. In the example above, if a sample of <math>N=1000</math> students participated in the <math>p=10</math> exams, the <math>i</math>th student's score for the <math>a</math>th exam is given by <math>x_{ai}</math>. The purpose of factor analysis is to characterize the correlations between the variables <math>x_a</math>, of which the <math>x_{ai}</math> are a particular instance, or set of observations.
In order for the variables to be on equal footing, they are [[normalization (statistics)|normalized]] into standard scores <math>z</math>:

:<math>z_{ai}=\frac{x_{ai}-\hat\mu_a}{\hat\sigma_a}</math>

where the sample mean is

:<math>\hat\mu_a=\tfrac{1}{N}\sum_i x_{ai}</math>

and the sample variance is given by

:<math>\hat\sigma_a^2=\tfrac{1}{N-1}\sum_i (x_{ai}-\hat\mu_a)^2</math>

The factor analysis model for this particular sample is then:

:<math>\begin{matrix}z_{1,i} & = & \ell_{1,1}F_{1,i} & + & \ell_{1,2}F_{2,i} & + & \varepsilon_{1,i} \\ \vdots & & \vdots & & \vdots & & \vdots \\ z_{10,i} & = & \ell_{10,1}F_{1,i} & + & \ell_{10,2}F_{2,i} & + & \varepsilon_{10,i} \end{matrix}</math>

or, more succinctly:

:<math> z_{ai}=\sum_p \ell_{ap}F_{pi}+\varepsilon_{ai} </math>

where
* <math>F_{1i}</math> is the <math>i</math>th student's "verbal intelligence",
* <math>F_{2i}</math> is the <math>i</math>th student's "mathematical intelligence",
* <math>\ell_{ap}</math> are the factor loadings for the <math>a</math>th subject, for <math>p=1,2</math>.

In [[Matrix (mathematics)|matrix]] notation, we have

:<math>Z=LF+\varepsilon</math>

Observe that doubling the scale on which "verbal intelligence"—the first component in each column of <math>F</math>—is measured, and simultaneously halving the factor loadings for verbal intelligence, makes no difference to the model. Thus, no generality is lost by assuming that the standard deviation of the factors for verbal intelligence is <math>1</math>. Likewise for mathematical intelligence. Moreover, for similar reasons, no generality is lost by assuming the two factors are [[uncorrelated]] with each other. In other words:

:<math>\sum_i F_{pi}F_{qi}=\delta_{pq}</math>

where <math>\delta_{pq}</math> is the [[Kronecker delta]] (<math>0</math> when <math>p \ne q</math> and <math>1</math> when <math>p=q</math>). The errors are assumed to be independent of the factors:

:<math>\sum_i F_{pi}\varepsilon_{ai}=0</math>

Since any rotation of a solution is also a solution, this makes interpreting the factors difficult. See disadvantages below. In this particular example, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence without an outside argument.

The values of the loadings <math>L</math>, the averages <math>\mu</math>, and the [[variance]]s of the "errors" <math>\varepsilon</math> must be estimated given the observed data <math>X</math> and <math>F</math> (the assumption about the levels of the factors is fixed for a given <math>F</math>). The "fundamental theorem" may be derived from the above conditions:

:<math>\sum_i z_{ai}z_{bi}=\sum_j \ell_{aj}\ell_{bj}+\sum_i \varepsilon_{ai}\varepsilon_{bi}</math>

The term on the left is the <math>(a,b)</math>-term of the correlation matrix (a <math>p \times p</math> matrix derived as the product of the <math>p \times N</math> matrix of standardized observations with its transpose) of the observed data, and its <math>p</math> diagonal elements will be <math>1</math>s. The second term on the right will be a diagonal matrix with terms less than unity. The first term on the right is the "reduced correlation matrix" and will be equal to the correlation matrix except for its diagonal values, which will be less than unity.
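This decomposition can be illustrated numerically. The sketch below (illustrative only; the loadings are made up, and the variables are constructed to have unit variance so that the uniquenesses are <math>\psi_a = 1 - \sum_p \ell_{ap}^2</math>) compares the sample correlation matrix of simulated standardized scores with <math>LL^T + \Psi</math>. The sample version of the fundamental theorem holds only approximately, since the constraints on <math>F</math> hold exactly only in expectation:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
p, k, N = 10, 2, 1000

# Hypothetical loadings chosen so each variable has unit variance:
# communality + uniqueness = sum_p l_ap^2 + psi_a = 1.
L = rng.uniform(-0.7, 0.7, size=(p, k))
psi = 1.0 - (L ** 2).sum(axis=1)           # uniquenesses (positive by construction)

F = rng.normal(size=(k, N))                # uncorrelated, unit-variance factors
eps = rng.normal(size=(p, N)) * np.sqrt(psi)[:, None]
Z = L @ F + eps                            # mean-zero, approximately unit-variance scores

R = np.corrcoef(Z)                         # sample correlation matrix
R_model = L @ L.T + np.diag(psi)           # "fundamental theorem": LL^T + Psi
print(np.abs(R - R_model).max())           # approaches 0 as N grows
</syntaxhighlight>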
The diagonal elements of the reduced correlation matrix are called "communalities" (which represent the fraction of the variance in the observed variable that is accounted for by the factors):

:<math> h_a^2=1-\psi_a=\sum_j \ell_{aj}\ell_{aj} </math>

The sample data <math>z_{ai}</math> will not exactly obey the fundamental equation given above due to sampling errors, inadequacy of the model, etc. The goal of any analysis of the above model is to find the factors <math>F_{pi}</math> and loadings <math>\ell_{ap}</math> which give a "best fit" to the data. In factor analysis, the best fit is defined as the minimum of the mean square error in the off-diagonal residuals of the correlation matrix:<ref name="Harman">{{cite book |last=Harman |first=Harry H. |year=1976 |title=Modern Factor Analysis |publisher=University of Chicago Press |pages=175, 176 |isbn=978-0-226-31652-9 }}</ref>

:<math>\varepsilon^2 = \sum_{a\ne b} \left[\sum_i z_{ai}z_{bi}-\sum_j \ell_{aj}\ell_{bj}\right]^2</math>

This is equivalent to minimizing the off-diagonal components of the error covariance which, in the model equations, have expected values of zero. This is to be contrasted with principal component analysis, which seeks to minimize the mean square error of all residuals.<ref name="Harman"/> Before the advent of high-speed computers, considerable effort was devoted to finding approximate solutions to the problem, particularly in estimating the communalities by other means, which then simplifies the problem considerably by yielding a known reduced correlation matrix. This was then used to estimate the factors and the loadings. With the advent of high-speed computers, the minimization problem can be solved iteratively with adequate speed, and the communalities are calculated in the process, rather than being needed beforehand. The [[Generalized minimal residual method|MinRes]] algorithm is particularly suited to this problem, but is hardly the only iterative means of finding a solution.

If the solution factors are allowed to be correlated (as in 'oblimin' rotation, for example), then the corresponding mathematical model uses [[skew coordinates]] rather than orthogonal coordinates.

===Geometric interpretation===

[[File:FactorPlot.svg|thumb|upright=1.5|Geometric interpretation of factor analysis parameters for 3 respondents to question "a". The "answer" is represented by the unit vector <math>\mathbf{z}_a</math>, which is projected onto a plane defined by two orthonormal vectors <math>\mathbf{F}_1</math> and <math>\mathbf{F}_2</math>. The projection vector is <math>\hat{\mathbf{z}}_a</math> and the error <math>\boldsymbol{\varepsilon}_a</math> is perpendicular to the plane, so that <math>\mathbf{z}_a=\hat{\mathbf{z}}_a+\boldsymbol{\varepsilon}_a</math>. The projection vector <math>\hat{\mathbf{z}}_a</math> may be represented in terms of the factor vectors as <math>\hat{\mathbf{z}}_a=\ell_{a1}\mathbf{F}_1+\ell_{a2}\mathbf{F}_2</math>. The square of the length of the projection vector is the communality: <math>||\hat{\mathbf{z}}_a||^2=h^2_a</math>. If another data vector <math>\mathbf{z}_b</math> were plotted, the cosine of the angle between <math>\mathbf{z}_a</math> and <math>\mathbf{z}_b</math> would be <math>r_{ab}</math>: the <math>(a,b)</math>-entry in the correlation matrix. (Adapted from Harman Fig. 4.3)<ref name="Harman"/>]]

The parameters and variables of factor analysis can be given a geometrical interpretation.
The data (<math>z_{ai}</math>), the factors (<math>F_{pi}</math>) and the errors (<math>\varepsilon_{ai}</math>) can be viewed as vectors in an <math>N</math>-dimensional Euclidean space (sample space), represented as <math>\mathbf{z}_a</math>, <math>\mathbf{F}_p</math> and <math>\boldsymbol{\varepsilon}_a</math> respectively. Since the data are standardized, the data vectors are of unit length (<math>||\mathbf{z}_a||=1</math>). The factor vectors define a <math>k</math>-dimensional linear subspace (i.e. a hyperplane) in this space, upon which the data vectors are projected orthogonally. This follows from the model equation

:<math>\mathbf{z}_a=\sum_p \ell_{ap} \mathbf{F}_p+\boldsymbol{\varepsilon}_a</math>

and the independence of the factors and the errors: <math>\mathbf{F}_p\cdot\boldsymbol{\varepsilon}_a=0</math>. In the above example, the hyperplane is just a 2-dimensional plane defined by the two factor vectors. The projection of the data vectors onto the hyperplane is given by

:<math>\hat{\mathbf{z}}_a=\sum_p \ell_{ap}\mathbf{F}_p</math>

and the errors are vectors from that projected point to the data point and are perpendicular to the hyperplane. The goal of factor analysis is to find a hyperplane which is a "best fit" to the data in some sense, so it does not matter how the factor vectors which define this hyperplane are chosen, as long as they are independent and lie in the hyperplane. We are free to specify them as both orthogonal and normal (<math>\mathbf{F}_p\cdot \mathbf{F}_q=\delta_{pq}</math>) with no loss of generality. After a suitable set of factors is found, they may also be arbitrarily rotated within the hyperplane, so that any rotation of the factor vectors will define the same hyperplane, and also be a solution. As a result, in the above example, in which the fitting hyperplane is two dimensional, if we do not know beforehand that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence, or whether the factors are linear combinations of both, without an outside argument.

The data vectors <math>\mathbf{z}_a</math> have unit length. The entries of the correlation matrix for the data are given by <math>r_{ab}=\mathbf{z}_a\cdot\mathbf{z}_b</math>, so each entry can be geometrically interpreted as the cosine of the angle between the two data vectors <math>\mathbf{z}_a</math> and <math>\mathbf{z}_b</math>. The diagonal elements will clearly be <math>1</math>s and the off-diagonal elements will have absolute values less than or equal to unity. The "reduced correlation matrix" is defined as

:<math>\hat{r}_{ab}=\hat{\mathbf{z}}_a\cdot\hat{\mathbf{z}}_b</math>.

The goal of factor analysis is to choose the fitting hyperplane such that the reduced correlation matrix reproduces the correlation matrix as nearly as possible, except for the diagonal elements of the correlation matrix which are known to have unit value. In other words, the goal is to reproduce as accurately as possible the cross-correlations in the data. Specifically, for the fitting hyperplane, the mean square error in the off-diagonal components

:<math>\varepsilon^2=\sum_{a\ne b} \left(r_{ab}-\hat{r}_{ab}\right)^2</math>

is to be minimized, and this is accomplished by minimizing it with respect to a set of orthonormal factor vectors.
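The projection picture can be reproduced numerically. The sketch below is illustrative only: the loadings and data are simulated as before, and for simplicity the hyperplane is taken to be the span of the simulated factor score vectors rather than a fitted one. It builds an orthonormal basis for that hyperplane in sample space, projects the unit-length data vectors onto it, and recovers the correlation matrix, the reduced correlation matrix and the communalities as dot products:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
p, k, N = 10, 2, 1000

# Simulated standardized scores (hypothetical loadings, as before)
L = rng.uniform(-0.7, 0.7, size=(p, k))
psi = 1.0 - (L ** 2).sum(axis=1)
F = rng.normal(size=(k, N))
Z = L @ F + rng.normal(size=(p, N)) * np.sqrt(psi)[:, None]
Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # unit-length data vectors z_a

# Orthonormal basis of the factor hyperplane in N-dimensional sample space
Q, _ = np.linalg.qr(F.T)          # columns of Q: orthonormal factor vectors F_p
Z_hat = (Z @ Q) @ Q.T             # orthogonal projections z_hat_a

r = Z @ Z.T                       # correlation matrix  r_ab = z_a . z_b
r_hat = Z_hat @ Z_hat.T           # reduced correlation matrix  r_hat_ab
h2 = np.diag(r_hat)               # communalities = squared projection lengths
print(h2.round(2))
print(np.abs(r - r_hat)[~np.eye(p, dtype=bool)].max())   # off-diagonal residuals are small
</syntaxhighlight>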
It can be seen that

:<math> r_{ab}-\hat{r}_{ab}= \boldsymbol{\varepsilon}_a\cdot\boldsymbol{\varepsilon}_b </math>

The term on the right is just the covariance of the errors. In the model, the error covariance is stated to be a diagonal matrix and so the above minimization problem will in fact yield a "best fit" to the model: it will yield a sample estimate of the error covariance which has its off-diagonal components minimized in the mean square sense. It can be seen that since the <math>\hat{\mathbf{z}}_a</math> are orthogonal projections of the data vectors, their length will be less than or equal to the length of the projected data vector, which is unity. The squares of these lengths are just the diagonal elements of the reduced correlation matrix. These diagonal elements of the reduced correlation matrix are known as "communalities":

:<math> {h_a}^2=||\hat{\mathbf{z}}_a||^2= \sum_p {\ell_{ap}}^2 </math>

Large values of the communalities will indicate that the fitting hyperplane is rather accurately reproducing the correlation matrix. The mean values of the factors must also be constrained to be zero, from which it follows that the mean values of the errors will also be zero.
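In practice the loadings are estimated from data. As a rough illustration (not the MinRes procedure described above, but a maximum-likelihood fit from scikit-learn, used here only because it is readily available; the simulated loadings are again hypothetical), the following sketch fits a two-factor model to simulated scores and compares the reproduced correlations with the observed ones:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
p, k, N = 10, 2, 1000

# Simulate scores from a two-factor model with hypothetical loadings
L_true = rng.uniform(-0.7, 0.7, size=(p, k))
psi_true = 1.0 - (L_true ** 2).sum(axis=1)
F = rng.normal(size=(k, N))
X = (L_true @ F + rng.normal(size=(p, N)) * np.sqrt(psi_true)[:, None]).T  # shape (N, p)

fa = FactorAnalysis(n_components=k).fit(X)   # maximum-likelihood fit, not MinRes
L_hat = fa.components_.T                     # estimated loadings, shape (p, k)
psi_hat = fa.noise_variance_                 # estimated uniquenesses
# Note: L_hat matches L_true only up to an orthogonal rotation.

h2 = (L_hat ** 2).sum(axis=1)                # communalities
R = np.corrcoef(X, rowvar=False)             # observed correlation matrix
R_hat = L_hat @ L_hat.T                      # reduced correlation matrix
off = ~np.eye(p, dtype=bool)
print(np.abs(R - R_hat)[off].max())          # off-diagonal residuals should be small
</syntaxhighlight>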