Eigenvalue algorithm
==Condition number==

Any problem of numeric calculation can be viewed as the evaluation of some function {{math|''f''}} for some input {{math|''x''}}. The [[condition number]] {{math|''κ''(''f'', ''x'')}} of the problem is the ratio of the relative error in the function's output to the relative error in the input, and varies with both the function and the input. The condition number describes how error grows during the calculation. Its base-10 logarithm tells how many fewer digits of accuracy exist in the result than existed in the input.

The condition number is a best-case scenario. It reflects the instability built into the problem, regardless of how it is solved. No algorithm can ever produce more accurate results than indicated by the condition number, except by chance. However, a poorly designed algorithm may produce significantly worse results. For example, as mentioned below, the problem of finding eigenvalues for normal matrices is always well-conditioned. However, the problem of finding the roots of a polynomial can be [[Wilkinson's polynomial|very ill-conditioned]]. Thus eigenvalue algorithms that work by finding the roots of the characteristic polynomial can be ill-conditioned even when the problem is not.

For the problem of solving the linear equation {{math|1=''A'''''v''' = '''b'''}} where {{math|''A''}} is invertible, the [[Condition number#Matrices|matrix condition number]] {{math|1=''κ''(''A''<sup>−1</sup>, '''b''')}} is given by {{math|1={{!!}}''A''{{!!}}<sub>op</sub>{{!!}}''A''<sup>−1</sup>{{!!}}<sub>op</sub>}}, where {{nowrap|{{!!}} {{!!}}<sub>op</sub>}} is the [[operator norm]] subordinate to the normal [[Norm (mathematics)#Euclidean norm|Euclidean norm]] on {{math|'''C'''<sup>''n''</sup>}}. Since this number is independent of {{math|'''b'''}} and is the same for {{math|''A''}} and {{math|''A''<sup>−1</sup>}}, it is usually just called the condition number {{math|''κ''(''A'')}} of the matrix {{math|''A''}}.
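As a brief numerical sketch of the definition {{math|1=''κ''(''A'') = {{!!}}''A''{{!!}}<sub>op</sub>{{!!}}''A''<sup>−1</sup>{{!!}}<sub>op</sub>}} (the example matrix below is invented for illustration and is not from the text):

```python
import numpy as np

# Illustrative matrix, nearly singular (det = 0.001), hence ill-conditioned.
A = np.array([[1.0, 2.0],
              [2.0, 4.001]])

# kappa(A) = ||A||_op * ||A^-1||_op, with the operator norm subordinate
# to the Euclidean norm (the spectral / 2-norm in NumPy).
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# NumPy's built-in condition number uses the same 2-norm definition.
print(kappa)
assert np.isclose(kappa, np.linalg.cond(A), rtol=1e-6)
```

Here {{math|''κ''(''A'')}} is on the order of 10<sup>4</sup>, so roughly four decimal digits of accuracy are lost when solving {{math|1=''A'''''v''' = '''b'''}} in the worst case.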
This value {{math|''κ''(''A'')}} is also the absolute value of the ratio of the largest [[singular value]] of {{math|''A''}} to its smallest. If {{math|''A''}} is [[Unitary matrix|unitary]], then {{math|1={{!!}}''A''{{!!}}<sub>op</sub> = {{!!}}''A''<sup>−1</sup>{{!!}}<sub>op</sub> = 1}}, so {{math|1=''κ''(''A'') = 1}}. For general matrices, the operator norm is often difficult to calculate. For this reason, other [[matrix norms]] are commonly used to estimate the condition number.

For the eigenvalue problem, [[Bauer–Fike theorem|Bauer and Fike proved]] that if {{math|''λ''}} is an eigenvalue for a [[Diagonalizable matrix|diagonalizable]] {{math|''n'' × ''n''}} matrix {{math|''A''}} with [[eigenvector matrix]] {{math|''V''}}, then the absolute error in calculating {{math|''λ''}} is bounded by the product of {{math|''κ''(''V'')}} and the absolute error in {{math|''A''}}.<ref>{{Citation | author = F. L. Bauer | author2 = C. T. Fike | title = Norms and exclusion theorems | journal = Numer. Math. | volume = 2 | pages = 137–141 | year = 1960 | doi = 10.1007/bf01386217 | s2cid = 121278235}}</ref> [[Bauer-Fike theorem#Corollary|As a result]], the condition number for finding {{math|''λ''}} is {{math|1=''κ''(''λ'', ''A'') = ''κ''(''V'') = {{!!}}''V''{{!!}}<sub>op</sub> {{!!}}''V''<sup>−1</sup>{{!!}}<sub>op</sub>}}. If {{math|''A''}} is normal, then {{math|''V''}} is unitary, and {{math|1=''κ''(''λ'', ''A'') = 1}}. Thus the eigenvalue problem for all normal matrices is well-conditioned.

The condition number for the problem of finding the eigenspace of a normal matrix {{math|''A''}} corresponding to an eigenvalue {{math|''λ''}} has been shown to be inversely proportional to the minimum distance between {{math|''λ''}} and the other distinct eigenvalues of {{math|''A''}}.<ref>{{Citation | author = S.C. Eisenstat | author2 = I.C.F. Ipsen | title = Relative Perturbation Results for Eigenvalues and Eigenvectors of Diagonalisable Matrices | journal = BIT | volume = 38 | issue = 3 | pages = 502–9 | year = 1998 | doi = 10.1007/bf02510256 | s2cid = 119886389 | url = http://www.lib.ncsu.edu/resolver/1840.4/286 | url-access = subscription}}</ref> In particular, the eigenspace problem for normal matrices is well-conditioned for isolated eigenvalues. When eigenvalues are not isolated, the best that can be hoped for is to identify the span of all eigenvectors of nearby eigenvalues.
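A small numerical sketch of the Bauer–Fike bound for the normal case (the random matrices below are invented for illustration): a symmetric matrix is normal, so its eigenvector matrix {{math|''V''}} is unitary and {{math|1=''κ''(''V'') = 1}}; every eigenvalue of the perturbed matrix then lies within {{math|{{!!}}''E''{{!!}}<sub>op</sub>}} of an exact eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric (hence normal) matrix: kappa(V) = 1, so Bauer-Fike gives
# |mu - lambda| <= ||E||_op for each perturbed eigenvalue mu.
A = rng.standard_normal((6, 6))
A = A + A.T                        # symmetric, therefore normal

E = 1e-6 * rng.standard_normal((6, 6))
E = E + E.T                        # small symmetric perturbation

lam = np.linalg.eigvalsh(A)        # exact eigenvalues, sorted ascending
mu = np.linalg.eigvalsh(A + E)     # perturbed eigenvalues, sorted ascending

bound = np.linalg.norm(E, 2)       # kappa(V) * ||E||_op with kappa(V) = 1
worst_shift = np.max(np.abs(mu - lam))

print(worst_shift, "<=", bound)
assert worst_shift <= bound + 1e-12
```

For a non-normal {{math|''A''}} the same bound holds with the extra factor {{math|''κ''(''V'')}}, which can be arbitrarily large, so eigenvalues of non-normal matrices may move far more than the perturbation itself.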