== Implementation options ==
The method is defined by the formula:
<math display="block">
b_{k+1} = \frac{(A - \mu I)^{-1}b_k}{C_k}.
</math>
There are, however, multiple options for its implementation.

=== Calculate inverse matrix or solve system of linear equations ===
We can rewrite the formula in the following way:
<math display="block">
(A - \mu I) b_{k+1} = \frac{b_k}{C_k},
</math>
emphasizing that to find the next approximation <math>b_{k+1}</math> we may solve a system of linear equations. There are two options: one may choose an algorithm that solves a linear system, or one may calculate the inverse <math>(A - \mu I)^{-1}</math> and then apply it to the vector. Both options have complexity ''O''(''n''<sup>3</sup>); the exact number of operations depends on the chosen method.

The choice also depends on the number of iterations. Naively, if at each iteration one solves a linear system, the complexity will be ''k'' ''O''(''n''<sup>3</sup>), where ''k'' is the number of iterations; similarly, calculating the inverse matrix and applying it at each iteration is of complexity ''k'' ''O''(''n''<sup>3</sup>). Note, however, that if the eigenvalue estimate <math>\mu</math> remains constant, then we may reduce the complexity to ''O''(''n''<sup>3</sup>) + ''k'' ''O''(''n''<sup>2</sup>) with either method. Calculating the inverse matrix once and storing it to apply at each iteration is of complexity ''O''(''n''<sup>3</sup>) + ''k'' ''O''(''n''<sup>2</sup>). Storing an [[LU decomposition]] of <math>(A - \mu I)</math> and using [[Triangular matrix#Forward and back substitution|forward and back substitution]] to solve the system of equations at each iteration is also of complexity ''O''(''n''<sup>3</sup>) + ''k'' ''O''(''n''<sup>2</sup>). Inverting the matrix will typically have a greater initial cost, but a lower cost at each iteration; conversely, solving systems of linear equations will typically have a lesser initial cost, but require more operations for each iteration.
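A minimal sketch of the stored-factorization variant with a constant shift <math>\mu</math>, assuming [[NumPy]] and [[SciPy]] are available; the function name, starting vector, and iteration count are illustrative choices rather than part of the method:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration(A, mu, num_iters=50):
    """Inverse iteration with a fixed shift mu and a stored LU factorization."""
    n = A.shape[0]
    lu, piv = lu_factor(A - mu * np.eye(n))  # O(n^3), performed once
    b = np.random.rand(n)                    # arbitrary starting vector
    for _ in range(num_iters):
        b = lu_solve((lu, piv), b)  # forward and back substitution, O(n^2)
        b /= np.linalg.norm(b)      # here C_k is the Euclidean norm of the iterate
    return b
</syntaxhighlight>

If the shift <math>\mu</math> changed at every step, the factorization would have to be recomputed each time, giving the ''k'' ''O''(''n''<sup>3</sup>) cost mentioned above.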
=== Tridiagonalization, [[Hessenberg form]] ===
If it is necessary to perform many iterations (or few iterations, but for many eigenvectors), then it might be wise to bring the matrix to the upper [[Hessenberg form]] first (for a symmetric matrix this will be the [[tridiagonal matrix|tridiagonal form]]). This costs <math display="inline">\frac{10}{3} n^3 + O(n^2)</math> arithmetic operations using a technique based on [[Householder transformation|Householder reduction]], with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition.<ref name=Demmel>{{citation | last = Demmel | first = James W. | authorlink = James Demmel | mr = 1463942 | isbn = 0-89871-389-7 | location = Philadelphia, PA | publisher = [[Society for Industrial and Applied Mathematics]] | title = Applied Numerical Linear Algebra | year = 1997}}.</ref><ref name=Trefethen>{{citation | last1 = Trefethen | first1 = Lloyd N. | last2 = Bau | first2 = David | title = Numerical Linear Algebra | location = Philadelphia, PA | publisher = [[Society for Industrial and Applied Mathematics]] | year = 1997}}.</ref> (For QR decomposition, the Householder reflections are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) For [[symmetric matrix|symmetric matrices]] this procedure costs <math display="inline">\frac{4}{3} n^3 + O(n^2)</math> arithmetic operations.<ref name=Demmel/><ref name=Trefethen/>

Solving the system of linear equations for a [[tridiagonal matrix]] costs <math>O(n)</math> operations, so the complexity grows like <math>O(n^3) + kO(n)</math>, where <math>k</math> is the number of iterations, which is better than for the direct inversion. However, for only a few iterations such a transformation may not be practical. Moreover, the transformation to [[Hessenberg form]] involves square roots and division operations, which are not universally supported by hardware.

=== Choice of the normalization constant {{math|''C<sub>k</sub>''}} ===
On general-purpose processors (e.g. those produced by Intel) the execution times of addition, multiplication and division are approximately equal. But on embedded and/or low-power hardware ([[digital signal processor]]s, [[FPGA]]s, [[Application-specific integrated circuit|ASIC]]s) division may not be supported in hardware, and so should be avoided. Choosing <math>C_k = 2^{n_k}</math> allows fast division without explicit hardware support, as division by a power of 2 may be implemented as either a [[Bitwise operation#Bit shifts|bit shift]] (for [[fixed-point arithmetic]]) or subtraction of <math>n_k</math> from the exponent (for [[Floating point|floating-point arithmetic]]).

When implementing the algorithm using [[fixed-point arithmetic]], the choice of the constant <math>C_k</math> is especially important. Small values will lead to fast growth of the norm of <math>b_k</math> and to [[Integer overflow|overflow]]; large values of <math>C_k</math> will cause the vector <math>b_k</math> to tend toward zero.
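One way to pick the shift amount <math>n_k</math> in the fixed-point case is to aim the largest component of <math>b_k</math> at a fixed target magnitude, so that the iterates neither overflow nor decay toward zero. A minimal sketch in plain-integer Python; the word length <code>target_bits</code> and the selection rule are illustrative assumptions, not prescribed by the method:

<syntaxhighlight lang="python">
def normalize_pow2(b, target_bits=14):
    """Normalize an integer vector by C_k = 2**n_k using a bit shift.

    n_k is chosen so the largest component lands just below 2**target_bits:
    too small an n_k risks overflow on later iterations, too large drives
    the vector toward zero.
    """
    peak = max(abs(x) for x in b)
    n_k = max(peak.bit_length() - target_bits, 0)  # shift amount
    return [x >> n_k for x in b]                   # division by 2**n_k
</syntaxhighlight>

For example, with <code>target_bits = 14</code>, a component of magnitude around <math>3 \times 10^6</math> (22 bits) yields <math>n_k = 8</math>, so every component is shifted right by 8 places; in floating-point arithmetic the same choice amounts to subtracting <math>n_k</math> from the exponent.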