Eigenvalue algorithm

In numerical analysis, one of the most important problems is designing efficient and stable algorithms for finding the eigenvalues of a matrix. These eigenvalue algorithms may also find eigenvectors.

Eigenvalues and eigenvectors

Given an n × n square matrix A of real or complex numbers, an eigenvalue λ and its associated generalized eigenvector v are a pair obeying the relation<ref name="Axler">Template:Citation</ref>

<math>\left(A - \lambda I\right)^k {\mathbf v} = 0,</math>

where v is a nonzero n × 1 column vector, I is the n × n identity matrix, k is a positive integer, and both λ and v are allowed to be complex even when A is real. When k = 1, the vector is called simply an eigenvector, and the pair is called an eigenpair. In this case, Av = λv. Any eigenvalue λ of A has ordinary<ref group="note">The term "ordinary" is used here only to emphasize the distinction between "eigenvector" and "generalized eigenvector".</ref> eigenvectors associated to it, for if k is the smallest integer such that <math>(A - \lambda I)^k \mathbf{v} = 0</math> for a generalized eigenvector v, then <math>(A - \lambda I)^{k-1} \mathbf{v}</math> is an ordinary eigenvector. The value k can always be taken as less than or equal to n. In particular, <math>(A - \lambda I)^n \mathbf{v} = 0</math> for all generalized eigenvectors v associated with λ.

For each eigenvalue λ of A, the kernel <math>\ker(A - \lambda I)</math> consists of all eigenvectors associated with λ (along with 0), called the eigenspace of λ, while the vector space <math>\ker\left((A - \lambda I)^n\right)</math> consists of all generalized eigenvectors, and is called the generalized eigenspace. The geometric multiplicity of λ is the dimension of its eigenspace. The algebraic multiplicity of λ is the dimension of its generalized eigenspace. The latter terminology is justified by the equation

<math>p_A\left(z\right) = \det\left( zI - A \right) = \prod_{i=1}^k (z - \lambda_i)^{\alpha_i},</math>

where det is the determinant function, the <math>\lambda_i</math> are all the distinct eigenvalues of A and the <math>\alpha_i</math> are the corresponding algebraic multiplicities. The function <math>p_A(z)</math> is the characteristic polynomial of A. So the algebraic multiplicity is the multiplicity of the eigenvalue as a zero of the characteristic polynomial. Since any eigenvector is also a generalized eigenvector, the geometric multiplicity is less than or equal to the algebraic multiplicity. The algebraic multiplicities sum up to n, the degree of the characteristic polynomial. The equation <math>p_A(z) = 0</math> is called the characteristic equation, as its roots are exactly the eigenvalues of A. By the Cayley–Hamilton theorem, A itself obeys the same equation: <math>p_A(A) = 0</math>.<ref group="note">where the constant term is multiplied by the identity matrix I.</ref> As a consequence, the columns of the matrix <math display="inline">\prod_{i \ne j} (A - \lambda_iI)^{\alpha_i}</math> must be either 0 or generalized eigenvectors of the eigenvalue <math>\lambda_j</math>, since they are annihilated by <math>(A - \lambda_jI)^{\alpha_j}</math>. In fact, the column space is the generalized eigenspace of <math>\lambda_j</math>.

Any collection of generalized eigenvectors of distinct eigenvalues is linearly independent, so a basis for all of <math>\mathbb{C}^n</math> can be chosen consisting of generalized eigenvectors. More particularly, this basis <math>\{\mathbf{v}_i\}_{i=1}^n</math> can be chosen and organized so that

  • if <math>\mathbf{v}_i</math> and <math>\mathbf{v}_j</math> have the same eigenvalue, then so does <math>\mathbf{v}_k</math> for each k between i and j, and
  • if <math>\mathbf{v}_i</math> is not an ordinary eigenvector, and if <math>\lambda_i</math> is its eigenvalue, then <math>(A - \lambda_i I)\mathbf{v}_i = \mathbf{v}_{i-1}</math> (in particular, <math>\mathbf{v}_1</math> must be an ordinary eigenvector).

If these basis vectors are placed as the column vectors of a matrix <math>V = [\,\mathbf{v}_1\; \mathbf{v}_2\; \cdots\; \mathbf{v}_n\,]</math>, then V can be used to convert A to its Jordan normal form:

<math>V^{-1}AV = \begin{bmatrix} \lambda_1 & \beta_1 & 0 & \ldots & 0 \\ 0 & \lambda_2 & \beta_2 & \ldots & 0 \\ 0 & 0 & \lambda_3 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & \lambda_n \end{bmatrix},</math>

where the <math>\lambda_i</math> are the eigenvalues, <math>\beta_i = 1</math> if <math>(A - \lambda_{i+1}I)\mathbf{v}_{i+1} = \mathbf{v}_i</math> and <math>\beta_i = 0</math> otherwise.

More generally, if W is any invertible matrix, and λ is an eigenvalue of A with generalized eigenvector v, then <math>\left(W^{-1}AW - \lambda I\right)^k W^{-1}\mathbf{v} = 0</math>. Thus λ is an eigenvalue of <math>W^{-1}AW</math> with generalized eigenvector <math>W^{-1}\mathbf{v}</math>. That is, similar matrices have the same eigenvalues.

Normal, Hermitian, and real-symmetric matrices

The adjoint <math>M^*</math> of a complex matrix M is the transpose of the conjugate of M: <math>M^* = \overline{M}^{\mathsf{T}}</math>. A square matrix A is called normal if it commutes with its adjoint: <math>A^*A = AA^*</math>. It is called Hermitian if it is equal to its adjoint: <math>A^* = A</math>. All Hermitian matrices are normal. If A has only real elements, then the adjoint is just the transpose, and A is Hermitian if and only if it is symmetric. When applied to column vectors, the adjoint can be used to define the canonical inner product on <math>\mathbb{C}^n</math>: <math>\mathbf{w} \cdot \mathbf{v} = \mathbf{w}^*\mathbf{v}</math>.<ref group="note">This ordering of the inner product (with the conjugate-linear position on the left) is preferred by physicists. Algebraists often place the conjugate-linear position on the right: <math>\mathbf{w} \cdot \mathbf{v} = \mathbf{v}^*\mathbf{w}</math>.</ref> Normal, Hermitian, and real-symmetric matrices have several useful properties:

  • Every generalized eigenvector of a normal matrix is an ordinary eigenvector.
  • Any normal matrix is similar to a diagonal matrix, since its Jordan normal form is diagonal.
  • Eigenvectors of distinct eigenvalues of a normal matrix are orthogonal.
  • The null space and the image (or column space) of a normal matrix are orthogonal to each other.
  • For any normal matrix A, <math>\mathbb{C}^n</math> has an orthonormal basis consisting of eigenvectors of A. The corresponding matrix of eigenvectors is unitary.
  • The eigenvalues of a Hermitian matrix are real, since the scalar <math>\mathbf{v}^*A\mathbf{v} = \lambda\,\mathbf{v}^*\mathbf{v}</math> equals its own conjugate transpose when <math>A^* = A</math>, and <math>\mathbf{v}^*\mathbf{v} > 0</math> for a non-zero eigenvector v.
  • If A is real, there is an orthonormal basis for <math>\mathbb{R}^n</math> consisting of eigenvectors of A if and only if A is symmetric.

It is possible for a real or complex matrix to have all real eigenvalues without being Hermitian. For example, a real triangular matrix has its eigenvalues along its diagonal, but in general is not symmetric.
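
These properties are easy to check numerically. The following sketch (with arbitrary illustrative matrices, not taken from any reference) verifies the real spectrum and orthonormal eigenvector basis of a symmetric matrix, and contrasts it with a non-symmetric triangular matrix whose eigenvalues are nevertheless real:

<syntaxhighlight lang="matlab">
% Illustrative check (arbitrary example matrices): a real symmetric matrix has
% real eigenvalues and an orthonormal basis of eigenvectors, while a
% non-symmetric triangular matrix still has real eigenvalues on its diagonal.
S = [2 1 0; 1 3 1; 0 1 4];            % real symmetric
[V, D] = eig(S);
disp(norm(V' * V - eye(3)))           % ~0: eigenvector matrix is orthogonal
disp(max(abs(imag(diag(D)))))         % 0: eigenvalues are real

T = [1 5 7; 0 2 9; 0 0 3];            % upper triangular, not symmetric
disp(sort(eig(T))')                   % eigenvalues 1, 2, 3 read off the diagonal
</syntaxhighlight>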

Condition number

Any problem of numeric calculation can be viewed as the evaluation of some function f for some input x. The condition number κ(f, x) of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input. The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be very ill-conditioned. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.

For the problem of solving the linear equation Av = b where A is invertible, the matrix condition number <math>\kappa(A^{-1}, b)</math> is given by <math>\|A\|_{\text{op}}\|A^{-1}\|_{\text{op}}</math>, where <math>\|\cdot\|_{\text{op}}</math> is the operator norm subordinate to the normal Euclidean norm on <math>\mathbb{C}^n</math>. Since this number is independent of b and is the same for A and <math>A^{-1}</math>, it is usually just called the condition number κ(A) of the matrix A. This value κ(A) is also the absolute value of the ratio of the largest singular value of A to its smallest. If A is unitary, then <math>\|A\|_{\text{op}} = \|A^{-1}\|_{\text{op}} = 1</math>, so κ(A) = 1. For general matrices, the operator norm is often difficult to calculate. For this reason, other matrix norms are commonly used to estimate the condition number.

For the eigenvalue problem, Bauer and Fike proved that if λ is an eigenvalue for a diagonalizable n × n matrix A with eigenvector matrix V, then the absolute error in calculating λ is bounded by the product of κ(V) and the absolute error in A.<ref>Template:Citation</ref> As a result, the condition number for finding λ is <math>\kappa(\lambda, A) = \kappa(V) = \|V\|_{\text{op}}\|V^{-1}\|_{\text{op}}</math>. If A is normal, then V is unitary, and <math>\kappa(\lambda, A) = 1</math>. Thus the eigenvalue problem for all normal matrices is well-conditioned.
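
As a rough numerical illustration of the Bauer–Fike bound (a sketch only; the matrix and the size of the perturbation are arbitrary choices):

<syntaxhighlight lang="matlab">
% Sketch: perturb a diagonalizable (non-normal) matrix A by a small E and
% compare the eigenvalue movement with the Bauer-Fike bound kappa(V)*norm(E).
A = [3 1 0; 0 2 4; 0 0 1];
[V, D] = eig(A);
lambda = diag(D);
kappaV = norm(V) * norm(inv(V));      % condition number of the eigenvector matrix
E = 1e-6 * randn(3);
mu = eig(A + E);
% each perturbed eigenvalue lies within kappa(V)*norm(E) of some eigenvalue of A
worst = max(arrayfun(@(m) min(abs(m - lambda)), mu));
fprintf('worst eigenvalue shift %g <= bound %g\n', worst, kappaV * norm(E));
</syntaxhighlight>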

The condition number for the problem of finding the eigenspace of a normal matrix A corresponding to an eigenvalue λ has been shown to be inversely proportional to the minimum distance between λ and the other distinct eigenvalues of A.<ref>Template:Citation</ref> In particular, the eigenspace problem for normal matrices is well-conditioned for isolated eigenvalues. When eigenvalues are not isolated, the best that can be hoped for is to identify the span of all eigenvectors of nearby eigenvalues.

Algorithms

The most reliable and most widely used algorithm for computing eigenvalues is John G. F. Francis' and Vera N. Kublanovskaya's QR algorithm, considered one of the top ten algorithms of the 20th century.<ref name="t10">Template:Cite journal</ref>

Any monic polynomial is the characteristic polynomial of its companion matrix. Therefore, a general algorithm for finding eigenvalues could also be used to find the roots of polynomials. The Abel–Ruffini theorem shows that any such algorithm for dimensions greater than 4 must either be infinite, or involve functions of greater complexity than elementary arithmetic operations and fractional powers. For this reason algorithms that exactly calculate eigenvalues in a finite number of steps only exist for a few special classes of matrices. For general matrices, algorithms are iterative, producing better approximate solutions with each iteration.
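
For instance, the roots of a monic polynomial can be obtained by handing its companion matrix to a general eigenvalue routine (a sketch; the polynomial below is an arbitrary example):

<syntaxhighlight lang="matlab">
% Sketch: the roots of p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) are the
% eigenvalues of the companion matrix of p.
a = [-6, 11, -6];                     % a0, a1, a2 in p(x) = x^3 + a2*x^2 + a1*x + a0
C = [0 0 -a(1);
     1 0 -a(2);
     0 1 -a(3)];
disp(sort(eig(C))')                   % prints 1 2 3, the roots of p
</syntaxhighlight>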

Some algorithms produce every eigenvalue, others will produce a few, or only one. However, even the latter algorithms can be used to find all eigenvalues. Once an eigenvalue λ of a matrix A has been identified, it can be used to either direct the algorithm towards a different solution next time, or to reduce the problem to one that no longer has λ as a solution.

Redirection is usually accomplished by shifting: replacing A with A − μI for some constant μ. The eigenvalue found for A − μI must have μ added back in to get an eigenvalue for A. For example, for power iteration, μ = λ. Power iteration finds the largest eigenvalue in absolute value, so even when λ is only an approximate eigenvalue, power iteration is unlikely to find it a second time. Conversely, inverse iteration based methods find the lowest eigenvalue, so μ is chosen well away from λ and hopefully closer to some other eigenvalue.
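
A bare-bones sketch of this idea (the function name, random starting vector, and fixed iteration count are illustrative choices, not part of any standard library):

<syntaxhighlight lang="matlab">
% Minimal sketch: power iteration applied to the shifted matrix A - mu*I.
% The shift mu is added back at the end to recover an eigenvalue of A.
function [lambda, v] = shifted_power_iteration(A, mu, num_iters)
  n = size(A, 1);
  B = A - mu * eye(n);
  v = randn(n, 1);
  v = v / norm(v);
  for k = 1:num_iters
    w = B * v;
    v = w / norm(w);                  % renormalize to avoid overflow/underflow
  end
  lambda = v' * (B * v) + mu;         % Rayleigh quotient of B, shifted back to A
end
</syntaxhighlight>

With μ = 0 this is plain power iteration; choosing μ equal to an eigenvalue that has already been found shifts that eigenvalue to zero, so the iteration is steered toward a different one.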

Reduction can be accomplished by restricting A to the column space of the matrix A − λI, which A carries to itself. Since A − λI is singular, the column space is of lesser dimension. The eigenvalue algorithm can then be applied to the restricted matrix. This process can be repeated until all eigenvalues are found.

If an eigenvalue algorithm does not produce eigenvectors, a common practice is to use an inverse iteration based algorithm with μ set to a close approximation to the eigenvalue. This will quickly converge to the eigenvector of the eigenvalue closest to μ. For small matrices, an alternative is to look at the column space of the product of <math>A - \lambda' I</math> for each of the other eigenvalues <math>\lambda'</math>.
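
A matching sketch of inverse iteration (again a hypothetical helper with an arbitrary fixed iteration count, not a library routine):

<syntaxhighlight lang="matlab">
% Minimal sketch: inverse iteration converges to an eigenvector whose
% eigenvalue is closest to the shift mu.
function [lambda, v] = inverse_iteration(A, mu, num_iters)
  n = size(A, 1);
  B = A - mu * eye(n);
  v = randn(n, 1);
  v = v / norm(v);
  for k = 1:num_iters
    w = B \ v;                        % one linear solve per step; B is never inverted explicitly
    v = w / norm(w);
  end
  lambda = v' * (A * v);              % Rayleigh quotient estimate of the eigenvalue of A
end
</syntaxhighlight>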

A formula for the norm of unit eigenvector components of normal matrices was discovered by Robert Thompson in 1966 and rediscovered independently by several others.<ref>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref> If A is an <math display="inline">n \times n</math> normal matrix with eigenvalues <math>\lambda_i(A)</math> and corresponding unit eigenvectors <math>\mathbf{v}_i</math> whose component entries are <math>v_{i,j}</math>, let <math>A_j</math> be the <math display="inline">(n - 1) \times (n - 1)</math> matrix obtained by removing the j-th row and column from A, and let <math>\lambda_k(A_j)</math> be its k-th eigenvalue. Then <math display="block"> |v_{i,j}|^2 \prod_{k=1,k\ne i}^n (\lambda_i(A) - \lambda_k(A)) = \prod_{k=1}^{n-1}(\lambda_i(A) - \lambda_k(A_j))</math>

If <math>p, p_j</math> are the characteristic polynomials of <math>A</math> and <math>A_j</math>, the formula can be re-written as <math display="block"> |v_{i,j}|^2 = \frac{p_j(\lambda_i(A))}{p'(\lambda_i(A))}</math> assuming the derivative <math>p'</math> is not zero at <math>\lambda_i(A)</math>.
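
The identity is easy to check numerically; the following sketch uses an arbitrary real symmetric (hence normal) matrix and compares the two sides for one choice of i and j:

<syntaxhighlight lang="matlab">
% Sketch: verify |v_{i,j}|^2 * prod_{k~=i}(lambda_i(A)-lambda_k(A))
%               = prod_k (lambda_i(A)-lambda_k(A_j)) for a normal matrix A.
A = [2 1 0; 1 3 1; 0 1 4];            % real symmetric, hence normal
[V, D] = eig(A);
lam = diag(D);
i = 1;  j = 2;                        % which eigenvalue and which component
Aj = A;  Aj(j, :) = [];  Aj(:, j) = [];   % delete the j-th row and column
lhs = abs(V(j, i))^2 * prod(lam(i) - lam([1:i-1, i+1:end]));
rhs = prod(lam(i) - eig(Aj));
disp([lhs, rhs])                      % the two values agree up to rounding
</syntaxhighlight>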

Hessenberg and tridiagonal matrices


Because the eigenvalues of a triangular matrix are its diagonal elements, for general matrices there is no finite method like Gaussian elimination to convert a matrix to triangular form while preserving eigenvalues. But it is possible to reach something close to triangular. An upper Hessenberg matrix is a square matrix for which all entries below the subdiagonal are zero. A lower Hessenberg matrix is one for which all entries above the superdiagonal are zero. Matrices that are both upper and lower Hessenberg are tridiagonal. Hessenberg and tridiagonal matrices are the starting points for many eigenvalue algorithms because the zero entries reduce the complexity of the problem. Several methods are commonly used to convert a general matrix into a Hessenberg matrix with the same eigenvalues. If the original matrix was symmetric or Hermitian, then the resulting matrix will be tridiagonal.

When only eigenvalues are needed, there is no need to calculate the similarity matrix, as the transformed matrix has the same eigenvalues. If eigenvectors are needed as well, the similarity matrix may be needed to transform the eigenvectors of the Hessenberg matrix back into eigenvectors of the original matrix.
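
As an illustration of the first method in the table below, the following sketch reduces a symmetric matrix to tridiagonal form with Householder reflections (a minimal, unoptimized version; production code works in place and accumulates the reflectors more cheaply):

<syntaxhighlight lang="matlab">
% Minimal sketch: Householder reduction of a symmetric matrix to tridiagonal
% form. H = Q'*A*Q is similar to A, so it has the same eigenvalues.
% (Applied to a general matrix, the same loop produces upper Hessenberg form.)
function [H, Q] = householder_tridiag(A)
  n = size(A, 1);
  H = A;
  Q = eye(n);
  for k = 1:n-2
    x = H(k+1:n, k);
    e1 = zeros(length(x), 1);
    e1(1) = 1;
    u = x + sign(x(1) + (x(1) == 0)) * norm(x) * e1;    % reflector direction
    if norm(u) > 0
      u = u / norm(u);
      P = eye(n);
      P(k+1:n, k+1:n) = eye(length(x)) - 2 * (u * u');  % Householder reflector
      H = P * H * P;                 % similarity transform (P is symmetric orthogonal)
      Q = Q * P;
    end
  end
end
</syntaxhighlight>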

{| class="wikitable"
! Method !! Applies to !! Produces !! Cost without similarity matrix !! Cost with similarity matrix !! Description
|-
| Householder transformations || General || Hessenberg || Template:Math<ref name=NumericalRecipes>Template:Cite book</ref>Template:Rp || Template:Math<ref name=NumericalRecipes />Template:Rp || Reflect each column through a subspace to zero out its lower entries.
|-
| Givens rotations || General || Hessenberg || Template:Math<ref name=NumericalRecipes />Template:Rp || || Apply planar rotations to zero out individual entries. Rotations are ordered so that later ones do not cause zero entries to become non-zero again.
|-
| Arnoldi iteration || General || Hessenberg || || || Perform Gram–Schmidt orthogonalization on Krylov subspaces.
|-
| Lanczos algorithm || Hermitian || Tridiagonal || || || Arnoldi iteration for Hermitian matrices, with shortcuts.
|}

For symmetric tridiagonal eigenvalue problems, all eigenvalues (without eigenvectors) can be computed numerically in time O(n log(n)), using bisection on the characteristic polynomial.<ref name=CoakleyRokhlin>Template:Citation</ref>

Iterative algorithms

Iterative algorithms solve the eigenvalue problem by producing sequences that converge to the eigenvalues. Some algorithms also produce sequences of vectors that converge to the eigenvectors. Most commonly, the eigenvalue sequences are expressed as sequences of similar matrices which converge to a triangular or diagonal form, allowing the eigenvalues to be read easily. The eigenvector sequences are expressed as the corresponding similarity matrices.
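
The archetype of such a sequence is the unshifted QR iteration listed in the table below; a few lines suffice for a sketch (practical implementations first reduce to Hessenberg form and add shifts and deflation):

<syntaxhighlight lang="matlab">
% Minimal sketch of unshifted QR iteration: every iterate R*Q = Q'*(Q*R)*Q is
% similar to A, and under mild assumptions the iterates approach triangular
% form, so the diagonal approaches the eigenvalues.
function d = qr_iteration_eigenvalues(A, num_iters)
  for k = 1:num_iters
    [Q, R] = qr(A);
    A = R * Q;
  end
  d = diag(A);                        % eigenvalue estimates (exact only in the limit)
end
</syntaxhighlight>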

{| class="wikitable"
! Method !! Applies to !! Produces !! Cost per step !! Convergence !! Description
|-
| Lanczos algorithm || Hermitian || m largest/smallest eigenpairs || || ||
|-
| Power iteration || general || eigenpair with largest value || Template:Math || linear || Repeatedly applies the matrix to an arbitrary starting vector and renormalizes.
|-
| Inverse iteration || general || eigenpair with value closest to μ || || linear || Power iteration for <math>(A - \mu I)^{-1}</math>
|-
| Rayleigh quotient iteration || Hermitian || any eigenpair || || cubic || Power iteration for <math>(A - \mu_i I)^{-1}</math>, where <math>\mu_i</math> for each iteration is the Rayleigh quotient of the previous iteration.
|-
| Preconditioned inverse iteration<ref>Template:Citation</ref> or LOBPCG algorithm || positive-definite real symmetric || eigenpair with value closest to μ || || || Inverse iteration using a preconditioner (an approximate inverse to A).
|-
| Bisection method || real symmetric tridiagonal || any eigenvalue || || linear || Uses the bisection method to find roots of the characteristic polynomial, supported by the Sturm sequence.
|-
| Laguerre iteration || real symmetric tridiagonal || any eigenvalue || || cubic<ref>Template:Citation</ref> || Uses Laguerre's method to find roots of the characteristic polynomial, supported by the Sturm sequence.
|-
| rowspan="2" | QR algorithm || rowspan="2" | Hessenberg || all eigenvalues || Template:Math || rowspan="2" | cubic || rowspan="2" | Factors A = QR, where Q is orthogonal and R is triangular, then applies the next iteration to RQ.
|-
| all eigenpairs || Template:Math
|-
| Jacobi eigenvalue algorithm || real symmetric || all eigenvalues || Template:Math || quadratic || Uses Givens rotations to attempt clearing all off-diagonal entries. This fails, but strengthens the diagonal.
|-
| rowspan="2" | Divide-and-conquer || rowspan="2" | Hermitian tridiagonal || all eigenvalues || Template:Math || || rowspan="2" | Divides the matrix into submatrices that are diagonalized then recombined.
|-
| all eigenpairs || Template:Math ||
|-
| Homotopy method || real symmetric tridiagonal || all eigenpairs || Template:Math || || Constructs a computable homotopy path from a diagonal eigenvalue problem.
|-
| Folded spectrum method || real symmetric || eigenpair with value closest to μ || || || Preconditioned inverse iteration applied to <math>(A - \mu I)^2</math>
|-
| MRRR algorithm<ref>Template:Citation</ref> || real symmetric tridiagonal || some or all eigenpairs || Template:Math || || "Multiple relatively robust representations" – performs inverse iteration on a LDL<sup>T</sup> decomposition of the shifted matrix.
|-
| Gram iteration<ref>Template:Citation</ref> || general || eigenpair with largest eigenvalue || || super-linear || Repeatedly computes the Gram product and rescales, deterministically.
|}

Direct calculation

While there is no simple algorithm to directly calculate eigenvalues for general matrices, there are numerous special classes of matrices where eigenvalues can be directly calculated. These include:

Triangular matrices

Since the determinant of a triangular matrix is the product of its diagonal entries, if T is triangular, then <math display="inline">\det(\lambda I - T) = \prod_i (\lambda - T_{ii})</math>. Thus the eigenvalues of T are its diagonal entries.

Factorable polynomial equations

If p is any polynomial and p(A) = 0, then the eigenvalues of A also satisfy the same equation. If p happens to have a known factorization, then the eigenvalues of A lie among its roots.

For example, a projection is a square matrix P satisfying <math>P^2 = P</math>. The roots of the corresponding scalar polynomial equation, <math>\lambda^2 = \lambda</math>, are 0 and 1. Thus any projection has 0 and 1 for its eigenvalues. The multiplicity of 0 as an eigenvalue is the nullity of P, while the multiplicity of 1 is the rank of P.

Another example is a matrix A that satisfies <math>A^2 = \alpha^2 I</math> for some scalar α. The eigenvalues must be ±α. The projection operators

<math>P_+=\frac{1}{2}\left(I+\frac{A}{\alpha}\right)</math>
<math>P_-=\frac{1}{2}\left(I-\frac{A}{\alpha}\right)</math>

satisfy

<math>AP_+=\alpha P_+ \quad AP_-=-\alpha P_-</math>

and

<math>P_+P_+=P_+ \quad P_-P_-=P_- \quad P_+P_-=P_-P_+=0.</math>

The column spaces of <math>P_+</math> and <math>P_-</math> are the eigenspaces of A corresponding to +α and −α, respectively.
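
A quick numerical sketch (the matrix below, a scaled reflection, is an arbitrary example of a matrix satisfying A² = α²I):

<syntaxhighlight lang="matlab">
% Sketch: projectors onto the +alpha and -alpha eigenspaces of a matrix
% satisfying A^2 = alpha^2 * I (here alpha times a reflection).
alpha = 2;
A = alpha * [0 1 0; 1 0 0; 0 0 1];    % A^2 = alpha^2 * I
Pp = (eye(3) + A / alpha) / 2;        % projector onto the +alpha eigenspace
Pm = (eye(3) - A / alpha) / 2;        % projector onto the -alpha eigenspace
disp(norm(A * Pp - alpha * Pp))       % ~0
disp(norm(A * Pm + alpha * Pm))       % ~0
disp(norm(Pp * Pp - Pp) + norm(Pp * Pm))  % ~0: idempotent and complementary
</syntaxhighlight>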

2×2 matrices

For dimensions 2 through 4, formulas involving radicals exist that can be used to find the eigenvalues. While this is common practice for 2×2 and 3×3 matrices, for 4×4 matrices the increasing complexity of the root formulas makes this approach less attractive.

For the 2×2 matrix

<math>A = \begin{bmatrix} a & b \\ c & d \end{bmatrix},</math>

the characteristic polynomial is

<math>\det \begin{bmatrix} \lambda - a & -b \\ -c & \lambda - d \end{bmatrix} = \lambda^2\, -\, \left( a + d \right )\lambda\, +\, \left ( ad - bc \right ) = \lambda^2\, -\, \lambda\, {\rm tr}(A)\, +\, \det(A).</math>

Thus the eigenvalues can be found by using the quadratic formula:

<math>\lambda = \frac{{\rm tr}(A) \pm \sqrt{{\rm tr}^2 (A) - 4 \det(A)}}{2}.</math>

Defining <math display="inline"> {\rm gap}\left ( A \right ) = \sqrt{{\rm tr}^2 (A) - 4 \det(A)}</math> to be the distance between the two eigenvalues, it is straightforward to calculate

<math>\frac{\partial\lambda}{\partial a} = \frac{1}{2}\left ( 1 \pm \frac{a - d}{{\rm gap}(A)} \right ),\qquad \frac{\partial\lambda}{\partial b} = \frac{\pm c}{{\rm gap}(A)}</math>

with similar formulas for c and d. From this it follows that the calculation is well-conditioned if the eigenvalues are isolated.

Eigenvectors can be found by exploiting the Cayley–Hamilton theorem. If <math>\lambda_1, \lambda_2</math> are the eigenvalues, then <math>(A - \lambda_1 I)(A - \lambda_2 I) = (A - \lambda_2 I)(A - \lambda_1 I) = 0</math>, so the columns of <math>A - \lambda_2 I</math> are annihilated by <math>A - \lambda_1 I</math> and vice versa. Assuming neither matrix is zero, the columns of each must include eigenvectors for the other eigenvalue. (If either matrix is zero, then A is a multiple of the identity and any non-zero vector is an eigenvector.)

For example, suppose

<math>A = \begin{bmatrix} 4 & 3 \\ -2 & -3 \end{bmatrix},</math>

then tr(A) = 1 and det(A) = −6, so the characteristic equation is

<math> 0 = \lambda^2 - \lambda - 6 = (\lambda - 3)(\lambda + 2),</math>

and the eigenvalues are 3 and -2. Now,

<math>A - 3I = \begin{bmatrix} 1 & 3 \\ -2 & -6 \end{bmatrix}, \qquad A + 2I = \begin{bmatrix} 6 & 3 \\ -2 & -1 \end{bmatrix}.</math>

In both matrices, the columns are multiples of each other, so either column can be used. Thus, (1, −2) can be taken as an eigenvector associated with the eigenvalue −2, and (3, −1) as an eigenvector associated with the eigenvalue 3, as can be verified by multiplying them by A.
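
The steps of this example are easy to reproduce mechanically (a sketch following the formulas above):

<syntaxhighlight lang="matlab">
% Sketch reproducing the 2x2 example: eigenvalues from the quadratic formula,
% eigenvectors from the Cayley-Hamilton argument above.
A = [4 3; -2 -3];
t = trace(A);
d = det(A);
gap = sqrt(t^2 - 4*d);
lambda1 = (t + gap) / 2;              % 3
lambda2 = (t - gap) / 2;              % -2
v1 = (A - lambda2 * eye(2)) * [1; 0]; % column of A - lambda2*I, eigenvector for lambda1
v2 = (A - lambda1 * eye(2)) * [1; 0]; % column of A - lambda1*I, eigenvector for lambda2
disp(norm(A * v1 - lambda1 * v1))     % ~0
disp(norm(A * v2 - lambda2 * v2))     % ~0
</syntaxhighlight>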

Symmetric 3×3 matrices

The characteristic equation of a symmetric 3×3 matrix Template:Math is:

<math>\det \left( \alpha I - A \right) = \alpha^3 - \alpha^2 {\rm tr}(A) - \alpha \frac{1}{2}\left( {\rm tr}(A^2) - {\rm tr}^2(A) \right) - \det(A) = 0.</math>

This equation may be solved using the methods of Cardano or Lagrange, but an affine change to A will simplify the expression considerably, and lead directly to a trigonometric solution. If A = pB + qI, then A and B have the same eigenvectors, and β is an eigenvalue of B if and only if α = pβ + q is an eigenvalue of A. Letting <math display="inline"> q = {\rm tr}(A)/3</math> and <math display="inline"> p =\left({\rm tr}\left((A - qI)^2\right)/ 6\right)^{1/2}</math> gives

<math>\det \left( \beta I - B \right) = \beta^3 - 3 \beta - \det(B) = 0.</math>

The substitution <math>\beta = 2\cos\theta</math> and some simplification using the identity <math>\cos 3\theta = 4\cos^3\theta - 3\cos\theta</math> reduces the equation to <math>\cos 3\theta = \det(B)/2</math>. Thus

<math>\beta = 2{\cos}\left(\frac{1}{3}{\arccos}\left( \det(B)/2 \right) + \frac{2k\pi}{3}\right), \quad k = 0, 1, 2.</math>

If det(B) is complex or is greater than 2 in absolute value, the arccosine should be taken along the same branch for all three values of k. This issue doesn't arise when A is real and symmetric, resulting in a simple algorithm:<ref name=Smith>Template:Citation</ref>

<syntaxhighlight lang="matlab">
% Given a real symmetric 3x3 matrix A, compute the eigenvalues.
% Note that acos and cos operate on angles in radians.

p1 = A(1,2)^2 + A(1,3)^2 + A(2,3)^2;
if (p1 == 0)
   % A is diagonal.
   eig1 = A(1,1);
   eig2 = A(2,2);
   eig3 = A(3,3);
else
   q = trace(A)/3;               % trace(A) is the sum of all diagonal values
   p2 = (A(1,1) - q)^2 + (A(2,2) - q)^2 + (A(3,3) - q)^2 + 2 * p1;
   p = sqrt(p2 / 6);
   B = (1 / p) * (A - q * eye(3));   % eye(3) is the 3x3 identity matrix
   r = det(B) / 2;

   % In exact arithmetic for a symmetric matrix -1 <= r <= 1,
   % but computation error can leave it slightly outside this range.
   if (r <= -1)
      phi = pi / 3;
   elseif (r >= 1)
      phi = 0;
   else
      phi = acos(r) / 3;
   end

   % The eigenvalues satisfy eig3 <= eig2 <= eig1.
   eig1 = q + 2 * p * cos(phi);
   eig3 = q + 2 * p * cos(phi + (2*pi/3));
   eig2 = 3 * q - eig1 - eig3;   % since trace(A) = eig1 + eig2 + eig3
end
</syntaxhighlight>

Once again, the eigenvectors of A can be obtained by recourse to the Cayley–Hamilton theorem. If <math>\lambda_1, \lambda_2, \lambda_3</math> are distinct eigenvalues of A, then <math>(A - \lambda_1 I)(A - \lambda_2 I)(A - \lambda_3 I) = 0</math>. Thus the columns of the product of any two of these matrices will contain an eigenvector for the third eigenvalue. However, if <math>\lambda_3 = \lambda_1</math>, then <math>(A - \lambda_1 I)^2(A - \lambda_2 I) = 0</math> and <math>(A - \lambda_2 I)(A - \lambda_1 I)^2 = 0</math>. Thus the generalized eigenspace of <math>\lambda_1</math> is spanned by the columns of <math>A - \lambda_2 I</math> while the ordinary eigenspace is spanned by the columns of <math>(A - \lambda_1 I)(A - \lambda_2 I)</math>. The ordinary eigenspace of <math>\lambda_2</math> is spanned by the columns of <math>(A - \lambda_1 I)^2</math>.

For example, let

<math>A = \begin{bmatrix} 3 & 2 & 6 \\ 2 & 2 & 5 \\ -2 & -1 & -4 \end{bmatrix}.</math>

The characteristic equation is

<math> 0 = \lambda^3 - \lambda^2 - \lambda + 1 = (\lambda - 1)^2(\lambda + 1),</math>

with eigenvalues 1 (of multiplicity 2) and -1. Calculating,

<math>A - I = \begin{bmatrix} 2 & 2 & 6 \\ 2 & 1 & 5 \\ -2 & -1 & -5 \end{bmatrix}, \qquad A + I = \begin{bmatrix} 4 & 2 & 6 \\ 2 & 3 & 5 \\ -2 & -1 & -3 \end{bmatrix}</math>

and

<math>(A - I)^2 = \begin{bmatrix} -4 & 0 & -8 \\ -4 & 0 & -8 \\ 4 & 0 & 8 \end{bmatrix}, \qquad (A - I)(A + I) = \begin{bmatrix} 0 & 4 & 4 \\ 0 & 2 & 2 \\ 0 & -2 & -2 \end{bmatrix}</math>

Thus (−4, −4, 4) is an eigenvector for −1, and (4, 2, −2) is an eigenvector for 1. (2, 3, −1) and (6, 5, −3) are both generalized eigenvectors associated with 1, either one of which could be combined with (−4, −4, 4) and (4, 2, −2) to form a basis of generalized eigenvectors of A. Once found, the eigenvectors can be normalized if needed.
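
The same computation in code form (a sketch; the column choices match the example above):

<syntaxhighlight lang="matlab">
% Sketch reproducing the 3x3 example: Cayley-Hamilton products expose
% eigenvectors and the (generalized) eigenspaces.
A = [3 2 6; 2 2 5; -2 -1 -4];
M1 = (A - eye(3))^2;                  % columns span the ordinary eigenspace of -1
M2 = (A - eye(3)) * (A + eye(3));     % columns span the ordinary eigenspace of 1
v_minus = M1(:, 1);                   % (-4, -4, 4), eigenvector for -1
v_plus  = M2(:, 2);                   % (4, 2, -2), eigenvector for 1
disp(norm(A * v_minus + v_minus))     % ~0
disp(norm(A * v_plus - v_plus))       % ~0
</syntaxhighlight>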

Eigenvectors of normal 3×3 matrices

If a 3×3 matrix <math>A</math> is normal, then the cross-product can be used to find eigenvectors. If <math>\lambda</math> is an eigenvalue of <math>A</math>, then the null space of <math>A - \lambda I</math> is perpendicular to its column space. The cross product of two independent columns of <math>A - \lambda I</math> will be in the null space. That is, it will be an eigenvector associated with <math>\lambda</math>. Since the column space is two dimensional in this case, the eigenspace must be one dimensional, so any other eigenvector will be parallel to it.

If <math>A - \lambda I</math> does not contain two independent columns but is not 0, the cross-product can still be used. In this case <math>\lambda</math> is an eigenvalue of multiplicity 2, so any vector perpendicular to the column space will be an eigenvector. Suppose <math>\mathbf v</math> is a non-zero column of <math>A - \lambda I</math>. Choose an arbitrary vector <math>\mathbf u</math> not parallel to <math>\mathbf v</math>. Then <math>\mathbf v\times \mathbf u</math> and <math>(\mathbf v\times \mathbf u)\times \mathbf v</math> will be perpendicular to <math>\mathbf v</math> and thus will be eigenvectors of <math>\lambda</math>.

This does not work when <math>A</math> is not normal, as the null space and column space do not need to be perpendicular for such matrices.
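
A short sketch of the cross-product approach (the matrix is an arbitrary symmetric, hence normal, example):

<syntaxhighlight lang="matlab">
% Sketch: for a normal 3x3 matrix, the cross product of two independent
% columns of A - lambda*I lies in its null space, i.e. is an eigenvector.
A = [2 1 0; 1 3 1; 0 1 4];            % real symmetric, hence normal
lam = max(eig(A));                    % pick one (simple) eigenvalue
M = A - lam * eye(3);
v = cross(M(:, 1), M(:, 2));
if norm(v) < 1e-12                    % the chosen columns were dependent; try another pair
  v = cross(M(:, 1), M(:, 3));
end
v = v / norm(v);
disp(norm(A * v - lam * v))           % ~0
</syntaxhighlight>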

See also

Notes

Template:Reflist

References

Template:Reflist

Further reading
