Eigenvalue algorithm
===Symmetric 3×3 matrices===
The characteristic equation of a symmetric 3×3 matrix {{math|''A''}} is:

:<math>\det \left( \alpha I - A \right) = \alpha^3 - \alpha^2 {\rm tr}(A) - \alpha \frac{1}{2}\left( {\rm tr}(A^2) - {\rm tr}^2(A) \right) - \det(A) = 0.</math>

This equation may be solved using the methods of [[Cubic equation#Cardano's method|Cardano]] or [[Cubic equation#Lagrange's method|Lagrange]], but an affine change to {{math|''A''}} will simplify the expression considerably and lead directly to a [[Cubic equation#Trigonometric and hyperbolic solutions|trigonometric solution]]. If {{math|1=''A'' = ''pB'' + ''qI''}}, then {{math|''A''}} and {{math|''B''}} have the same eigenvectors, and {{math|''β''}} is an eigenvalue of {{math|''B''}} if and only if {{math|1=''α'' = ''pβ'' + ''q''}} is an eigenvalue of {{math|''A''}}. Letting <math display="inline"> q = {\rm tr}(A)/3</math> and <math display="inline"> p =\left({\rm tr}\left((A - qI)^2\right)/ 6\right)^{1/2}</math> gives

:<math>\det \left( \beta I - B \right) = \beta^3 - 3 \beta - \det(B) = 0.</math>

The substitution {{math|1=''β'' = 2cos ''θ''}} and some simplification using the identity {{math|1=cos 3''θ'' = 4cos<sup>3</sup> ''θ'' − 3cos ''θ''}} reduce the equation to {{math|1=cos 3''θ'' = det(''B'') / 2}}. Thus

:<math>\beta = 2{\cos}\left(\frac{1}{3}{\arccos}\left( \det(B)/2 \right) + \frac{2k\pi}{3}\right), \quad k = 0, 1, 2.</math>

If {{math|det(''B'')}} is complex or greater than 2 in absolute value, the arccosine should be taken along the same branch for all three values of {{math|''k''}}. This issue does not arise when {{math|''A''}} is real and symmetric, resulting in a simple algorithm:<ref name=Smith>{{Citation |last=Smith |first=Oliver K. |title=Eigenvalues of a symmetric 3 × 3 matrix |journal=[[Communications of the ACM]] |volume=4 |issue=4 |date=April 1961 |page=168 |doi=10.1145/355578.366316 |s2cid=37815415 |doi-access=free }}</ref>

<syntaxhighlight lang="matlab">
% Given a real symmetric 3x3 matrix A, compute the eigenvalues
% Note that acos and cos operate on angles in radians

p1 = A(1,2)^2 + A(1,3)^2 + A(2,3)^2
if (p1 == 0)
   % A is diagonal.
   eig1 = A(1,1)
   eig2 = A(2,2)
   eig3 = A(3,3)
else
   q = trace(A)/3               % trace(A) is the sum of all diagonal values
   p2 = (A(1,1) - q)^2 + (A(2,2) - q)^2 + (A(3,3) - q)^2 + 2 * p1
   p = sqrt(p2 / 6)
   B = (1 / p) * (A - q * I)    % I is the identity matrix
   r = det(B) / 2

   % In exact arithmetic for a symmetric matrix  -1 <= r <= 1
   % but computation error can leave it slightly outside this range.
   if (r <= -1)
      phi = pi / 3
   elseif (r >= 1)
      phi = 0
   else
      phi = acos(r) / 3
   end

   % the eigenvalues satisfy eig3 <= eig2 <= eig1
   eig1 = q + 2 * p * cos(phi)
   eig3 = q + 2 * p * cos(phi + (2*pi/3))
   eig2 = 3 * q - eig1 - eig3   % since trace(A) = eig1 + eig2 + eig3
end
</syntaxhighlight>

Once again, the eigenvectors of {{math|''A''}} can be obtained by recourse to the [[Cayley–Hamilton theorem]]. If {{math|''α''<sub>1</sub>, ''α''<sub>2</sub>, ''α''<sub>3</sub>}} are distinct eigenvalues of {{math|''A''}}, then {{math|1=(''A'' − ''α''<sub>1</sub>''I'')(''A'' − ''α''<sub>2</sub>''I'')(''A'' − ''α''<sub>3</sub>''I'') = 0}}. Thus the columns of the product of any two of these matrices will contain an eigenvector for the third eigenvalue. However, if {{math|1=''α''<sub>3</sub> = ''α''<sub>1</sub>}}, then {{math|1=(''A'' − ''α''<sub>1</sub>''I'')<sup>2</sup>(''A'' − ''α''<sub>2</sub>''I'') = 0}} and {{math|1=(''A'' − ''α''<sub>2</sub>''I'')(''A'' − ''α''<sub>1</sub>''I'')<sup>2</sup> = 0}}.
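The algorithm above can also be sketched in plain Python for cross-checking, using only the standard library; the function name and the clamping of {{math|''r''}} into {{math|[−1, 1]}} follow the pseudocode above, but this port is illustrative rather than part of the cited reference.

```python
import math

def symmetric_3x3_eigenvalues(A):
    """Eigenvalues of a real symmetric 3x3 matrix A (list of three rows),
    in descending order, via the trigonometric method described above."""
    p1 = A[0][1]**2 + A[0][2]**2 + A[1][2]**2
    if p1 == 0:
        # A is diagonal; its eigenvalues are the diagonal entries.
        return sorted([A[0][0], A[1][1], A[2][2]], reverse=True)
    q = (A[0][0] + A[1][1] + A[2][2]) / 3          # trace(A)/3
    p2 = (A[0][0]-q)**2 + (A[1][1]-q)**2 + (A[2][2]-q)**2 + 2*p1
    p = math.sqrt(p2 / 6)
    # B = (A - q*I) / p
    B = [[(A[i][j] - (q if i == j else 0)) / p for j in range(3)]
         for i in range(3)]
    # r = det(B) / 2, by cofactor expansion along the first row
    r = (B[0][0]*(B[1][1]*B[2][2] - B[1][2]*B[2][1])
         - B[0][1]*(B[1][0]*B[2][2] - B[1][2]*B[2][0])
         + B[0][2]*(B[1][0]*B[2][1] - B[1][1]*B[2][0])) / 2
    # Exact arithmetic guarantees -1 <= r <= 1; clamp rounding error.
    r = max(-1.0, min(1.0, r))
    phi = math.acos(r) / 3
    eig1 = q + 2*p*math.cos(phi)
    eig3 = q + 2*p*math.cos(phi + 2*math.pi/3)
    eig2 = 3*q - eig1 - eig3                       # trace = sum of eigenvalues
    return [eig1, eig2, eig3]
```

For instance, the tridiagonal matrix with diagonal 2 and off-diagonal 1 has eigenvalues {{math|2 + {{sqrt|2}}, 2, 2 − {{sqrt|2}}}}, which the sketch reproduces to machine precision.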
Thus the ''generalized'' eigenspace of {{math|''α''<sub>1</sub>}} is spanned by the columns of {{math|''A'' − ''α''<sub>2</sub>''I''}} while the ordinary eigenspace is spanned by the columns of {{math|1=(''A'' − ''α''<sub>1</sub>''I'')(''A'' − ''α''<sub>2</sub>''I'')}}. The ordinary eigenspace of {{math|''α''<sub>2</sub>}} is spanned by the columns of {{math|(''A'' − ''α''<sub>1</sub>''I'')<sup>2</sup>}}.

For example, let

:<math>A = \begin{bmatrix} 3 & 2 & 6 \\ 2 & 2 & 5 \\ -2 & -1 & -4 \end{bmatrix}.</math>

The characteristic equation is

:<math> 0 = \lambda^3 - \lambda^2 - \lambda + 1 = (\lambda - 1)^2(\lambda + 1),</math>

with eigenvalues 1 (of multiplicity 2) and −1. Calculating,

:<math>A - I = \begin{bmatrix} 2 & 2 & 6 \\ 2 & 1 & 5 \\ -2 & -1 & -5 \end{bmatrix}, \qquad A + I = \begin{bmatrix} 4 & 2 & 6 \\ 2 & 3 & 5 \\ -2 & -1 & -3 \end{bmatrix}</math>

and

:<math>(A - I)^2 = \begin{bmatrix} -4 & 0 & -8 \\ -4 & 0 & -8 \\ 4 & 0 & 8 \end{bmatrix}, \qquad (A - I)(A + I) = \begin{bmatrix} 0 & 4 & 4 \\ 0 & 2 & 2 \\ 0 & -2 & -2 \end{bmatrix}.</math>

Thus {{math|(−4, −4, 4)}} is an eigenvector for −1, and {{math|(4, 2, −2)}} is an eigenvector for 1. {{math|(2, 3, −1)}} and {{math|(6, 5, −3)}} are both generalized eigenvectors associated with 1, either one of which could be combined with {{math|(−4, −4, 4)}} and {{math|(4, 2, −2)}} to form a basis of generalized eigenvectors of {{math|''A''}}. Once found, the eigenvectors can be normalized if needed.

==== Eigenvectors of normal 3×3 matrices ====
If a 3×3 matrix <math>A</math> is normal, then the cross product can be used to find eigenvectors. If <math>\lambda</math> is an eigenvalue of <math>A</math>, then the null space of <math>A - \lambda I</math> is perpendicular to its column space. The [[cross product]] of two independent columns of <math>A - \lambda I</math> will be in the null space. That is, it will be an eigenvector associated with <math>\lambda</math>.
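The worked example above can be checked numerically. The following Python sketch (helper names are illustrative) forms the two matrix products and confirms that their columns are eigenvectors of {{math|''A''}} for −1 and 1 respectively:

```python
def matmul(X, Y):
    """Product of two 3x3 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(X, v):
    """Product of a 3x3 matrix with a 3-vector."""
    return [sum(X[i][k] * v[k] for k in range(3)) for i in range(3)]

A = [[3, 2, 6], [2, 2, 5], [-2, -1, -4]]
AmI = [[A[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]  # A - I
ApI = [[A[i][j] + (1 if i == j else 0) for j in range(3)] for i in range(3)]  # A + I

P_minus = matmul(AmI, AmI)  # (A - I)^2: columns span the eigenspace of -1
P_plus = matmul(AmI, ApI)   # (A - I)(A + I): columns span the eigenspace of 1

v = [P_minus[i][0] for i in range(3)]  # first column: (-4, -4, 4)
w = [P_plus[i][1] for i in range(3)]   # second column: (4, 2, -2)
```

Here `matvec(A, v)` returns the negation of `v` and `matvec(A, w)` returns `w`, as the eigenvalue equations require.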
Since the column space is two dimensional in this case, the eigenspace must be one dimensional, so any other eigenvector will be parallel to it. If <math>A - \lambda I</math> does not contain two independent columns but is not {{math|'''0'''}}, the cross-product can still be used. In this case <math>\lambda</math> is an eigenvalue of multiplicity 2, so any vector perpendicular to the column space will be an eigenvector. Suppose <math>\mathbf v</math> is a non-zero column of <math>A - \lambda I</math>. Choose an arbitrary vector <math>\mathbf u</math> not parallel to <math>\mathbf v</math>. Then <math>\mathbf v\times \mathbf u</math> and <math>(\mathbf v\times \mathbf u)\times \mathbf v</math> will be perpendicular to <math>\mathbf v</math> and thus will be eigenvectors of <math>\lambda</math>. This does not work when <math>A</math> is not normal, as the null space and column space do not need to be perpendicular for such matrices.
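The cross-product recipe can be sketched in a few lines of Python. The matrix and the eigenvalue 2 below are illustrative assumptions chosen so the example is self-contained; {{math|''A''}} is symmetric, hence normal, and {{math|''A'' − 2''I''}} has two independent columns:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

A = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]  # symmetric, hence normal
lam = 2                                # a known eigenvalue of this A
M = [[A[i][j] - (lam if i == j else 0) for j in range(3)]
     for i in range(3)]                # A - lam*I
col0 = [M[i][0] for i in range(3)]
col1 = [M[i][1] for i in range(3)]     # two independent columns of M
v = cross(col0, col1)                  # perpendicular to the column space,
                                       # hence in the null space of M
Av = [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]
```

Since the columns of {{math|''A'' − 2''I''}} here are (0, 1, 0) and (1, 0, 1), the cross product gives the eigenvector (1, 0, −1), and `Av` equals `2 * v` componentwise.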