'''Rayleigh quotient iteration''' is an [[eigenvalue algorithm]] which extends the idea of [[inverse iteration]] by using the [[Rayleigh quotient]] to obtain increasingly accurate [[eigenvalue]] estimates.

Rayleigh quotient iteration is an [[iterative method]], that is, it delivers a sequence of approximate solutions that [[Limit of a sequence|converges]] to a true solution in the limit. Very rapid convergence is guaranteed and no more than a few iterations are needed in practice to obtain a reasonable approximation. The Rayleigh quotient iteration algorithm [[rate of convergence|converges cubically]] for Hermitian or symmetric matrices, given an initial vector that is sufficiently close to an [[eigenvector]] of the [[Matrix (mathematics)|matrix]] that is being analyzed.

== Algorithm ==

The algorithm is very similar to inverse iteration, but replaces the estimated eigenvalue at the end of each iteration with the Rayleigh quotient. Begin by choosing some value <math>\mu_0</math> as an initial eigenvalue guess for the Hermitian matrix <math>A</math>. An initial vector <math>b_0</math> must also be supplied as an initial eigenvector guess.

Calculate the next approximation of the eigenvector <math>b_{i+1}</math> by
<math display="block"> b_{i+1} = \frac{(A-\mu_i I)^{-1}b_i}{\|(A-\mu_i I)^{-1}b_i\|}, </math>
where <math>I</math> is the identity matrix, and set the next approximation of the eigenvalue to the Rayleigh quotient of the current iterate:
<math display="block"> \mu_{i+1} = \frac{b^*_{i+1} A b_{i+1}}{b^*_{i+1} b_{i+1}}. </math>

To compute more than one eigenvalue, the algorithm can be combined with a deflation technique.{{Citation needed|date=February 2025}}

Note that for very small problems it is beneficial to replace the [[matrix inverse]] with the [[adjugate matrix|adjugate]], which yields the same iteration because the adjugate equals the inverse up to an irrelevant scale factor (the inverse of the determinant, specifically).
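The iteration above can be sketched as follows in Python with NumPy. The function name and the stopping criterion (change in the eigenvalue estimate) are illustrative choices, not part of the algorithm's definition; the shifted system is solved directly rather than forming the matrix inverse.

```python
import numpy as np

def rayleigh_quotient_iteration(A, b, mu, tol=1e-10, max_iter=50):
    """Sketch of Rayleigh quotient iteration.

    b is the initial eigenvector guess, mu the initial eigenvalue guess.
    Returns an (eigenvalue, eigenvector) approximation.
    """
    I = np.eye(A.shape[0])
    b = b / np.linalg.norm(b)
    for _ in range(max_iter):
        # Inverse-iteration step with the current shift: solve the
        # shifted linear system instead of inverting the matrix.
        y = np.linalg.solve(A - mu * I, b)
        b = y / np.linalg.norm(y)
        # Rayleigh quotient of the new iterate (b has unit norm).
        mu_next = b.conj() @ A @ b
        if abs(mu_next - mu) < tol:
            return mu_next, b
        mu = mu_next
    return mu, b

# The 3x3 matrix used in the example below; the iteration converges
# to the eigenvalue 3 + sqrt(5).
A = np.array([[1.0, 2, 3], [1, 2, 1], [3, 2, 1]])
mu, v = rayleigh_quotient_iteration(A, np.ones(3), 200.0)
```

Note that the shifted matrix <math>A-\mu_i I</math> becomes nearly singular as <math>\mu_i</math> approaches an eigenvalue; this is harmless here, since the solve then amplifies precisely the eigenvector direction, which the normalization recovers.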
The adjugate is easier to compute explicitly than the inverse (though the inverse is easier to apply to a vector for problems that are not small), and it is more numerically sound because it remains well defined as the shift converges to an eigenvalue.

== Example ==

Consider the matrix
:<math> A = \left[\begin{matrix} 1 & 2 & 3\\ 1 & 2 & 1\\ 3 & 2 & 1\\ \end{matrix}\right] </math>
for which the exact eigenvalues are <math>\lambda_1 = 3+\sqrt5</math>, <math>\lambda_2 = 3-\sqrt5</math> and <math>\lambda_3 = -2</math>, with corresponding eigenvectors
:<math>v_1 = \left[ \begin{matrix} 1 \\ \varphi-1 \\ 1 \\ \end{matrix}\right]</math>, <math>v_2 = \left[ \begin{matrix} 1 \\ -\varphi \\ 1 \\ \end{matrix}\right]</math> and <math>v_3 = \left[ \begin{matrix} 1 \\ 0 \\ -1 \\ \end{matrix}\right]</math>
(where <math>\textstyle\varphi=\frac{1+\sqrt5}2</math> is the golden ratio).

The largest eigenvalue is <math>\lambda_1 \approx 5.2361</math> and corresponds to any eigenvector proportional to <math>v_1 \approx \left[ \begin{matrix} 1 \\ 0.6180 \\ 1 \\ \end{matrix}\right]. </math>

We begin with initial guesses for the eigenvector and eigenvalue of
:<math>b_0 = \left[\begin{matrix} 1 \\ 1 \\ 1 \\ \end{matrix}\right], ~\mu_0 = 200</math>.
Then, the first iteration yields
:<math>b_1 \approx \left[\begin{matrix} -0.57927 \\ -0.57348 \\ -0.57927 \\ \end{matrix}\right], ~\mu_1 \approx 5.3355 </math>
the second iteration,
:<math>b_2 \approx \left[\begin{matrix} 0.64676 \\ 0.40422 \\ 0.64676 \\ \end{matrix}\right], ~\mu_2 \approx 5.2418 </math>
and the third,
:<math>b_3 \approx \left[\begin{matrix} -0.64793 \\ -0.40045 \\ -0.64793 \\ \end{matrix}\right], ~\mu_3 \approx 5.2361 </math>
from which the cubic convergence is evident.

== Octave implementation ==

The following is a simple implementation of the algorithm in [[GNU Octave|Octave]].
<syntaxhighlight lang="matlab">
function x = rayleigh(A, epsilon, mu, x)
  x = x / norm(x);
  % the backslash operator in Octave solves a linear system
  y = (A - mu * eye(rows(A))) \ x;
  lambda = y' * x;
  % omitting the semicolon prints each eigenvalue estimate and error
  mu = mu + 1 / lambda
  err = norm(y - lambda * x) / norm(y)
  while err > epsilon
    x = y / norm(y);
    y = (A - mu * eye(rows(A))) \ x;
    lambda = y' * x;
    mu = mu + 1 / lambda
    err = norm(y - lambda * x) / norm(y)
  end
end
</syntaxhighlight>

== See also ==
* [[Power iteration]]
* [[Inverse iteration]]

== References ==
* Lloyd N. Trefethen and David Bau, III, ''Numerical Linear Algebra'', Society for Industrial and Applied Mathematics, 1997. {{isbn|0-89871-361-7}}.
* Rainer Kress, ''Numerical Analysis'', Springer, 1991. {{isbn|0-387-98408-9}}

{{Numerical linear algebra}}

[[Category:Numerical linear algebra]]
[[Category:Articles with example MATLAB/Octave code]]