{{Short description|Largest absolute value of an operator's eigenvalues}} {{distinguish|Spectral norm}} {{More footnotes|date=July 2022}} In [[mathematics]], the '''spectral radius''' of a [[matrix (mathematics)|square matrix]] is the maximum of the absolute values of its [[eigenvalues]].<ref>{{Cite book |last=Gradshteĭn |first=I. S. |url=https://www.worldcat.org/oclc/5892996 |title=Table of integrals, series, and products |date=1980 |publisher=Academic Press |others=I. M. Ryzhik, Alan Jeffrey |isbn=0-12-294760-6 |edition=Corr. and enl. |location=New York |oclc=5892996}}</ref> More generally, the spectral radius of a [[bounded linear operator]] is the [[supremum]] of the absolute values of the elements of its [[spectrum (functional analysis)|spectrum]]. The spectral radius is often denoted by {{math|ρ(·)}}. ==Definition== ===Matrices=== Let {{math|''λ''<sub>1</sub>, ..., ''λ<sub>n</sub>''}} be the eigenvalues of a matrix {{math|''A'' ∈ '''C'''<sup>''n''×''n''</sup>}}. The spectral radius of {{math|''A''}} is defined as :<math>\rho(A) = \max \left \{ |\lambda_1|, \dotsc, |\lambda_n| \right \}.</math> The spectral radius can be thought of as an infimum of all norms of a matrix. Indeed, on the one hand, <math> \rho(A) \leqslant \|A\| </math> for every [[matrix norm#Matrix norms induced by vector norms|natural matrix norm]] <math>\|\cdot\|</math>; and on the other hand, Gelfand's formula states that <math> \rho(A) = \lim_{k\to\infty} \|A^k\|^{1/k} </math>. Both of these results are shown below. However, the spectral radius does not necessarily satisfy <math> \|A\mathbf{v}\| \leqslant \rho(A) \|\mathbf{v}\| </math> for arbitrary vectors <math> \mathbf{v} \in \mathbb{C}^n </math>. To see why, let <math>r > 1</math> be arbitrary and consider the matrix :<math> C_r = \begin{pmatrix} 0 & r^{-1} \\ r & 0 \end{pmatrix} </math>. 
The [[characteristic polynomial]] of <math> C_r </math> is <math> \lambda^2 - 1 </math>, so its eigenvalues are <math>\{-1, 1\}</math> and thus <math>\rho(C_r) = 1</math>. However, <math>C_r \mathbf{e}_1 = r \mathbf{e}_2</math>. As a result, :<math> \| C_r \mathbf{e}_1 \| = r > 1 = \rho(C_r) \|\mathbf{e}_1\|. </math> As an illustration of Gelfand's formula, note that <math>\|C_r^k\|^{1/k} \to 1</math> as <math>k \to \infty</math>, since <math>C_r^k = I</math> if <math>k</math> is even and <math>C_r^k = C_r</math> if <math>k</math> is odd. A special case in which <math> \|A\mathbf{v}\| \leqslant \rho(A) \|\mathbf{v}\| </math> for all <math> \mathbf{v} \in \mathbb{C}^n </math> is when <math>A</math> is a [[Hermitian matrix]] and <math> \|\cdot\| </math> is the [[Euclidean norm]]. This is because any Hermitian matrix is [[diagonalizable matrix|diagonalizable]] by a [[unitary matrix]], and unitary matrices preserve vector length. As a result, : <math> \|A\mathbf{v}\| = \|U^*DU\mathbf{v}\| = \|DU\mathbf{v}\| \leqslant \rho(A) \|U\mathbf{v}\| = \rho(A) \|\mathbf{v}\| .</math> ===Bounded linear operators=== In the context of a [[bounded linear operator]] {{mvar|A}} on a [[Banach space]], the eigenvalues need to be replaced with the elements of the [[Spectrum of an operator|spectrum of the operator]], i.e. the values <math>\lambda</math> for which <math>A - \lambda I</math> is not bijective. 
We denote the spectrum by :<math>\sigma(A) = \left\{ \lambda \in \Complex: A - \lambda I \; \text{is not bijective} \right\}</math> The spectral radius is then defined as the supremum of the magnitudes of the elements of the spectrum: :<math>\rho(A) = \sup_{\lambda \in \sigma(A)} |\lambda|</math> Gelfand's formula, also known as the spectral radius formula, also holds for bounded linear operators: letting <math>\|\cdot\|</math> denote the [[operator norm]], we have :<math>\rho(A) = \lim_{k \to \infty}\|A^k\|^{\frac{1}{k}}=\inf_{k\in\mathbb{N}^*} \|A^k\|^{\frac{1}{k}}.</math> A bounded operator (on a complex Hilbert space) is called a '''spectraloid operator''' if its spectral radius coincides with its [[numerical radius]]. An example of such an operator is a [[normal operator]]. ===Graphs=== The spectral radius of a finite [[Graph (discrete mathematics)|graph]] is defined to be the spectral radius of its [[adjacency matrix]]. This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e. there exists some real number {{mvar|C}} such that the degree of every vertex of the graph is smaller than {{mvar|C}}). In this case, for the graph {{mvar|G}} define: :<math> \ell^2(G) = \left \{ f : V(G) \to \mathbf{R} \ : \ \sum\nolimits_{v \in V(G)} \left | f(v) \right |^2 < \infty \right \}.</math> Let {{mvar|γ}} be the adjacency operator of {{mvar|G}}: :<math> \begin{cases} \gamma : \ell^2(G) \to \ell^2(G) \\ (\gamma f)(v) = \sum_{(u,v) \in E(G)} f(u) \end{cases}</math> The spectral radius of {{mvar|G}} is defined to be the spectral radius of the bounded linear operator {{mvar|γ}}. ==Upper bounds== ===Upper bounds on the spectral radius of a matrix=== The following proposition gives simple yet useful upper bounds on the spectral radius of a matrix. 
'''Proposition.''' Let {{math|''A'' ∈ '''C'''<sup>''n''×''n''</sup>}} with spectral radius {{math|''ρ''(''A'')}} and a [[sub-multiplicative norm|sub-multiplicative matrix norm]] {{math|{{!!}}⋅{{!!}}}}. Then for each integer <math>k \geqslant 1</math>: ::<math>\rho(A)\leq \|A^k\|^{\frac{1}{k}}.</math> '''Proof''' Let {{math|('''v''', ''λ'')}} be an [[eigenvector]]-[[eigenvalue]] pair for a matrix ''A''. By the sub-multiplicativity of the matrix norm, we get: :<math>|\lambda|^k\|\mathbf{v}\| = \|\lambda^k \mathbf{v}\| = \|A^k \mathbf{v}\| \leq \|A^k\|\cdot\|\mathbf{v}\|.</math> Since {{math|'''v''' ≠ 0}}, we have :<math>|\lambda|^k \leq \|A^k\|</math> and therefore :<math>\rho(A)\leq \|A^k\|^{\frac{1}{k}}.</math> concluding the proof. === Upper bounds for spectral radius of a graph === There are many upper bounds for the spectral radius of a graph in terms of its number ''n'' of vertices and its number ''m'' of edges. For instance, if :<math>\frac{(k-2)(k-3)}{2} \leq m-n \leq \frac{k(k-3)}{2}</math> where <math>3 \le k \le n</math> is an integer, then<ref>{{Cite journal|last1=Guo|first1=Ji-Ming|last2=Wang|first2=Zhi-Wen|last3=Li|first3=Xin|date=2019|title=Sharp upper bounds of the spectral radius of a graph|journal=Discrete Mathematics|language=en|volume=342|issue=9|pages=2559–2563|doi=10.1016/j.disc.2019.05.017|s2cid=198169497|doi-access=free}}</ref> :<math>\rho(G) \leq \sqrt{2 m-n-k+\frac{5}{2}+\sqrt{2 m-2 n+\frac{9}{4}}}</math> ==Symmetric matrices== For real-valued matrices <math>A</math> the inequality <math>\rho(A) \leq {\|A\|}_{2}</math> holds in particular, where <math>{\|\cdot\|}_{2}</math> denotes the [[Matrix_norm#Spectral_norm|spectral norm]]. 
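These bounds can be verified numerically. The following is a minimal sketch using NumPy; the matrix <code>A</code> is an arbitrary illustrative example (chosen so that its eigenvalues are ±1), not one taken from the text:

```python
import numpy as np

# Check rho(A) <= ||A^k||^(1/k) for a sub-multiplicative norm (Frobenius)
# and rho(A) <= ||A||_2; A is an arbitrary illustrative example.
A = np.array([[0.0, 2.0], [0.5, 0.0]])
rho = max(abs(np.linalg.eigvals(A)))  # eigenvalues are +/- 1, so rho = 1

for k in (1, 2, 5, 10):
    bound = np.linalg.norm(np.linalg.matrix_power(A, k), 'fro') ** (1.0 / k)
    assert rho <= bound + 1e-12  # proposition: rho(A) <= ||A^k||^(1/k)

assert rho <= np.linalg.norm(A, 2) + 1e-12  # spectral-norm bound
```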
In the case where <math>A</math> is [[:en:Symmetric_matrix|symmetric]], this inequality is tight: '''Theorem.''' Let <math>A \in \mathbb{R}^{n \times n}</math> be symmetric, i.e., <math>A = A^T.</math> Then it holds that <math>\rho(A) = {\|A\|}_{2}.</math> '''Proof''' Let <math>(v_i, \lambda_i)_{i=1}^{n}</math> be the eigenpairs of ''A''. Due to the symmetry of ''A'', all <math>v_i</math> and <math>\lambda_i</math> are real-valued and the eigenvectors <math>v_i</math> can be chosen [[Orthonormal_basis|orthonormal]]. By the definition of the spectral norm, there exists an <math>x \in \mathbb{R}^{n}</math> with <math>{\|x\|}_{2} = 1</math> such that <math>{\|A\|}_{2} = {\| A x \|}_{2}.</math> Since the eigenvectors <math>v_i</math> form a basis of <math>\mathbb{R}^{n},</math> there exist coefficients <math>\delta_{1}, \ldots, \delta_{n} \in \mathbb{R}</math> such that <math>\textstyle x = \sum_{i = 1}^{n} \delta_{i} v_{i},</math> which implies that :<math>A x = \sum_{i = 1}^{n}\delta_{i} A v_{i} = \sum_{i = 1}^{n} \delta_{i} \lambda_{i} v_{i}.</math> From the orthonormality of the eigenvectors <math>v_i</math> it follows that :<math>{\| A x\|}_{2}^{2} = \left\| \sum_{i = 1}^{n} \delta_{i} \lambda_{i} v_{i}\right\|_{2}^{2} = \sum_{i = 1}^{n} \delta_{i}^{2} \lambda_{i}^{2} </math> and :<math>{\|x\|}_{2}^{2} = \left\| \sum_{i = 1}^{n} \delta_{i} v_{i} \right\|_{2}^{2} = \sum_{i = 1}^{n} \delta_{i}^{2} = 1.</math> Since <math>x</math> is chosen such that it maximizes <math>{\|Ax\|}_{2}</math> while satisfying <math>{\|x\|}_{2} = 1,</math> the values of <math>\delta_{i}</math> must be such that they maximize <math>\textstyle \sum_{i = 1}^{n} \delta_{i}^{2} \lambda_{i}^{2}</math> while satisfying <math>\textstyle \sum_{i = 1}^{n} \delta_{i}^{2} = 1.</math> This is achieved by setting <math>\delta_{k} = 1</math> for <math>k = \mathrm{arg\,max}_{i=1}^{n} {|\lambda_i|}</math> and 
<math>\delta_{i} = 0</math> otherwise, yielding a value of <math>{\|Ax\|}_{2} = {|\lambda_k|} = \rho(A).</math> ==Power sequence== The spectral radius is closely related to the convergence behavior of the power sequence of a matrix, as the following theorem shows. '''Theorem.''' Let {{math|''A'' ∈ '''C'''<sup>''n''×''n''</sup>}} with spectral radius {{math|''ρ''(''A'')}}. Then {{math|''ρ''(''A'') < 1}} if and only if :<math>\lim_{k \to \infty} A^k = 0.</math> On the other hand, if {{math|''ρ''(''A'') > 1}}, <math>\lim_{k \to \infty} \|A^k\| = \infty</math>. The statement holds for any choice of matrix norm on {{math|'''C'''<sup>''n''×''n''</sup>}}. '''Proof''' Assume that <math>A^k</math> goes to zero as <math>k</math> goes to infinity. We will show that {{math|''ρ''(''A'') < 1}}. Let {{math|('''v''', ''λ'')}} be an [[eigenvector]]-[[eigenvalue]] pair for ''A''. Since {{math|''A<sup>k</sup>'''''v''' {{=}} ''λ<sup>k</sup>'''''v'''}}, we have :<math>\begin{align} 0 &= \left(\lim_{k \to \infty} A^k \right) \mathbf{v} \\ &= \lim_{k \to \infty} \left(A^k\mathbf{v} \right ) \\ &= \lim_{k \to \infty} \lambda^k\mathbf{v} \\ &= \mathbf{v} \lim_{k \to \infty} \lambda^k \end{align}</math> Since {{math|'''v''' ≠ 0}} by hypothesis, we must have :<math>\lim_{k \to \infty}\lambda^k = 0,</math> which implies <math>|\lambda| < 1</math>. Since this must be true for any eigenvalue <math>\lambda</math>, we can conclude that {{math|''ρ''(''A'') < 1}}. Now, assume the spectral radius of {{mvar|A}} is less than {{math|1}}. 
From the [[Jordan normal form]] theorem, we know that for all {{math|''A'' ∈ '''C'''<sup>''n''×''n''</sup>}}, there exist {{math|''V'', ''J'' ∈ '''C'''<sup>''n''×''n''</sup>}} with {{mvar|V}} non-singular and {{mvar|J}} block diagonal such that: :<math>A = VJV^{-1}</math> with :<math>J=\begin{bmatrix} J_{m_1}(\lambda_1) & 0 & 0 & \cdots & 0 \\ 0 & J_{m_2}(\lambda_2) & 0 & \cdots & 0 \\ \vdots & \cdots & \ddots & \cdots & \vdots \\ 0 & \cdots & 0 & J_{m_{s-1}}(\lambda_{s-1}) & 0 \\ 0 & \cdots & \cdots & 0 & J_{m_s}(\lambda_s) \end{bmatrix}</math> where :<math>J_{m_i}(\lambda_i)=\begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_i & 1 \\ 0 & 0 & \cdots & 0 & \lambda_i \end{bmatrix}\in \mathbf{C}^{m_i \times m_i}, 1\leq i\leq s.</math> It is easy to see that :<math>A^k=VJ^kV^{-1}</math> and, since {{mvar|J}} is block-diagonal, :<math>J^k=\begin{bmatrix} J_{m_1}^k(\lambda_1) & 0 & 0 & \cdots & 0 \\ 0 & J_{m_2}^k(\lambda_2) & 0 & \cdots & 0 \\ \vdots & \cdots & \ddots & \cdots & \vdots \\ 0 & \cdots & 0 & J_{m_{s-1}}^k(\lambda_{s-1}) & 0 \\ 0 & \cdots & \cdots & 0 & J_{m_s}^k(\lambda_s) \end{bmatrix}</math> Now, a standard result on the {{mvar|k}}-power of an <math>m_i \times m_i</math> Jordan block states that, for <math>k \geq m_i-1</math>: :<math>J_{m_i}^k(\lambda_i)=\begin{bmatrix} \lambda_i^k & {k \choose 1}\lambda_i^{k-1} & {k \choose 2}\lambda_i^{k-2} & \cdots & {k \choose m_i-1}\lambda_i^{k-m_i+1} \\ 0 & \lambda_i^k & {k \choose 1}\lambda_i^{k-1} & \cdots & {k \choose m_i-2}\lambda_i^{k-m_i+2} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_i^k & {k \choose 1}\lambda_i^{k-1} \\ 0 & 0 & \cdots & 0 & \lambda_i^k \end{bmatrix}</math> Thus, if <math>\rho(A) < 1</math> then for all {{mvar|i}} <math>|\lambda_i| < 1</math>. 
Hence for all {{mvar|i}} we have: :<math>\lim_{k \to \infty}J_{m_i}^k=0</math> which implies :<math>\lim_{k \to \infty} J^k = 0.</math> Therefore, :<math>\lim_{k \to \infty}A^k=\lim_{k \to \infty}VJ^kV^{-1}=V \left (\lim_{k \to \infty}J^k \right )V^{-1}=0</math> On the other hand, if <math>\rho(A)>1</math>, there is at least one element in {{mvar|J}} that does not remain bounded as {{mvar|k}} increases, thereby proving the second part of the statement. ==Gelfand's formula== Gelfand's formula, named after [[Israel Gelfand]], gives the spectral radius as a limit of matrix norms. === Theorem === For any [[matrix norm]] {{math|{{!!}}⋅{{!!}},}} we have<ref>The formula holds for any [[Banach algebra]]; see Lemma IX.1.8 in {{harvnb|Dunford|Schwartz|1963}} and {{harvnb|Lax|2002|pp=195–197}}</ref> :<math>\rho(A)=\lim_{k \to \infty} \left \|A^k \right \|^{\frac{1}{k}}</math>. Moreover, in the case of a [[matrix norm|consistent]] matrix norm <math>\lim_{k \to \infty} \left \|A^k \right \|^{\frac{1}{k}}</math> approaches <math>\rho(A)</math> from above (indeed, in that case <math>\rho(A) \leq \left \|A^k \right \|^{\frac{1}{k}}</math> for all <math>k</math>). 
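The theorem can be illustrated numerically. Below is a minimal NumPy sketch; the matrix is an arbitrary illustrative example (chosen so that its eigenvalues are ±0.5i), not one from the text:

```python
import numpy as np

# Illustration of Gelfand's formula: ||A^k||^(1/k) converges to rho(A),
# and for a consistent norm (here the spectral norm) from above.
A = np.array([[0.0, 1.0], [-0.25, 0.0]])
rho = max(abs(np.linalg.eigvals(A)))  # eigenvalues are +/- 0.5i, so rho = 0.5

estimates = [np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k)
             for k in range(1, 61)]

assert all(e >= rho - 1e-12 for e in estimates)  # approach from above
assert abs(estimates[-1] - rho) < 1e-6           # close to rho(A) for large k
```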
===Proof=== For any {{math|''ε'' > 0}}, let us define the two following matrices: :<math>A_{\pm}= \frac{1}{\rho(A) \pm\varepsilon}A.</math> Thus, :<math>\rho \left (A_{\pm} \right ) = \frac{\rho(A)}{\rho(A) \pm \varepsilon}, \qquad \rho (A_+) < 1 < \rho (A_-).</math> We start by applying the previous theorem on limits of power sequences to {{math|''A''<sub>+</sub>}}: :<math>\lim_{k \to \infty} A_+^k=0.</math> This shows the existence of {{math|''N''<sub>+</sub> ∈ '''N'''}} such that, for all {{math|''k'' ≥ ''N''<sub>+</sub>}}, :<math>\left\|A_+^k \right \| < 1.</math> Therefore, :<math>\left \|A^k \right \|^{\frac{1}{k}} < \rho(A)+\varepsilon.</math> Similarly, the theorem on power sequences implies that <math>\|A_-^k\|</math> is not bounded and that there exists {{math|''N''<sub>−</sub> ∈ '''N'''}} such that, for all {{math|''k'' ≥ ''N''<sub>−</sub>}}, :<math>\left\|A_-^k \right \| > 1.</math> Therefore, :<math>\left\|A^k \right\|^{\frac{1}{k}} > \rho(A)-\varepsilon.</math> Let {{math|''N'' {{=}} max{''N''<sub>+</sub>, ''N''<sub>−</sub>}}}. Then, :<math>\forall \varepsilon>0\quad \exists N\in\mathbf{N} \quad \forall k\geq N \quad \rho(A)-\varepsilon < \left \|A^k \right \|^{\frac{1}{k}} < \rho(A)+\varepsilon,</math> that is, :<math>\lim_{k \to \infty} \left \|A^k \right \|^{\frac{1}{k}} = \rho(A).</math> This concludes the proof. ===Corollary=== Gelfand's formula yields a bound on the spectral radius of a product of commuting matrices: if <math>A_1, \ldots, A_n</math> are matrices that all commute, then :<math>\rho(A_1 \cdots A_n) \leq \rho(A_1) \cdots \rho(A_n).</math> ===Numerical example=== [[File:Gelfand's formula for a 3x3 matrix.svg|thumb|The convergence of all 3 matrix norms to the spectral radius.]] Consider the matrix :<math>A=\begin{bmatrix} 9 & -1 & 2\\ -2 & 8 & 4\\ 1 & 1 & 8 \end{bmatrix}</math> whose eigenvalues are {{math|5, 10, 10}}; by definition, {{math|''ρ''(''A'') {{=}} 10}}. 
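The convergence for this particular matrix can be checked with a short NumPy sketch (the norms and values of {{mvar|k}} chosen here are for illustration):

```python
import numpy as np

# The example matrix with eigenvalues 5, 10, 10, so rho(A) = 10.
# Every consistent norm satisfies ||A^k||^(1/k) >= rho(A), and the
# estimates approach 10 as k grows (Gelfand's formula).
A = np.array([[9.0, -1.0, 2.0], [-2.0, 8.0, 4.0], [1.0, 1.0, 8.0]])
rho = max(abs(np.linalg.eigvals(A)))  # = 10

for k in (1, 2, 5, 10, 20):
    for norm in (1, 2, np.inf, 'fro'):
        est = np.linalg.norm(np.linalg.matrix_power(A, k), norm) ** (1.0 / k)
        assert est >= rho - 1e-9  # each norm bounds the spectral radius
```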
In the following table, the values of <math>\|A^k\|^{\frac{1}{k}}</math> for four commonly used matrix norms are listed against several increasing values of {{mvar|k}} (note that, due to the particular form of this matrix, <math>\|.\|_1=\|.\|_\infty</math>): == Notes and references == {{reflist}} ==Bibliography== * {{citation | last1=Dunford | first1=Nelson | last2=Schwartz | first2=Jacob | title = Linear operators II. Spectral Theory: Self Adjoint Operators in Hilbert Space | publisher = Interscience Publishers, Inc. | year = 1963 }} * {{citation | last=Lax| first = Peter D. |author-link=Peter Lax | title = Functional Analysis | publisher = Wiley-Interscience | year = 2002 | isbn = 0-471-55604-1}} == See also == * [[Power iteration]] * [[Spectral gap]] * The [[Joint spectral radius]] is a generalization of the spectral radius to sets of matrices. * [[Spectrum of a matrix]] * [[Spectral abscissa]] {{Functional Analysis}} {{SpectralTheory}} [[Category:Spectral theory]] [[Category:Articles containing proofs]]