In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below the main diagonal), and the superdiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:

<math>\begin{pmatrix}
1 & 4 & 0 & 0 \\
3 & 4 & 1 & 0 \\
0 & 2 & 3 & 4 \\
0 & 0 & 1 & 3 \\
\end{pmatrix}.</math>

The determinant of a tridiagonal matrix is given by the continuant of its elements.<ref>Template:Cite book</ref>

An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.

Properties

A tridiagonal matrix is a matrix that is both upper and lower Hessenberg.<ref>Template:Cite book</ref> In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that p + q = n, the dimension of the tridiagonal. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies <math>a_{k,k+1} a_{k+1,k} > 0</math> for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix by a diagonal change-of-basis matrix; hence its eigenvalues are real. If the strict inequality is relaxed to <math>a_{k,k+1} a_{k+1,k} \ge 0</math>, then by continuity the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix.<ref>Horn & Johnson, page 174</ref>

The set of all n × n tridiagonal matrices forms a (3n − 2)-dimensional vector space.

Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well.

Determinant

The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation.<ref>Template:Cite journal</ref> Write <math>f_1 = |a_1| = a_1</math> (i.e., <math>f_1</math> is the determinant of the 1-by-1 matrix consisting only of <math>a_1</math>), and let

<math>f_n = \begin{vmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{vmatrix}.</math>

The sequence <math>(f_i)</math> is called the continuant and satisfies the recurrence relation

<math>f_n = a_n f_{n-1} - c_{n-1}b_{n-1}f_{n-2}</math>

with initial values <math>f_0 = 1</math> and <math>f_{-1} = 0</math>. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in n, while the cost is cubic for a general matrix.
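The recurrence above can be sketched in a few lines of Python (the function name `tridiag_det` is illustrative):

```python
def tridiag_det(a, b, c):
    """Determinant of a tridiagonal matrix via the continuant recurrence.

    a: main diagonal (length n); b: superdiagonal and c: subdiagonal
    (each of length n - 1). Runs in O(n), versus O(n^3) for a general matrix.
    """
    f_prev2, f_prev1 = 0.0, 1.0  # f_{-1} = 0, f_0 = 1
    for k in range(len(a)):
        # f_k = a_k * f_{k-1} - c_{k-1} * b_{k-1} * f_{k-2}
        f = a[k] * f_prev1 - (b[k - 1] * c[k - 1] * f_prev2 if k > 0 else 0.0)
        f_prev2, f_prev1 = f_prev1, f
    return f_prev1
```

For the 4-by-4 example at the top of the article, `tridiag_det([1, 4, 3, 3], [4, 1, 4], [3, 2, 1])` returns −46, matching cofactor expansion.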

Inversion

The inverse of a non-singular tridiagonal matrix T

<math>T = \begin{pmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{pmatrix}</math>

is given by

<math>(T^{-1})_{ij} = \begin{cases}
(-1)^{i+j} b_i \cdots b_{j-1} \theta_{i-1} \phi_{j+1}/\theta_n & \text{if } i < j \\
\theta_{i-1} \phi_{j+1}/\theta_n & \text{if } i = j \\
(-1)^{i+j} c_j \cdots c_{i-1} \theta_{j-1} \phi_{i+1}/\theta_n & \text{if } i > j \\
\end{cases}</math>

where the <math>\theta_i</math> satisfy the recurrence relation

<math>\theta_i = a_i \theta_{i-1} - b_{i-1} c_{i-1} \theta_{i-2}, \qquad i = 2, 3, \ldots, n</math>

with initial conditions <math>\theta_0 = 1</math>, <math>\theta_1 = a_1</math>, and the <math>\phi_i</math> satisfy

<math>\phi_i = a_i \phi_{i+1} - b_i c_i \phi_{i+2}, \qquad i = n-1, \ldots, 1</math>

with initial conditions <math>\phi_{n+1} = 1</math> and <math>\phi_n = a_n</math>.<ref>Template:Cite journal</ref><ref>Template:Cite journal</ref>
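The formula translates directly into code; a minimal sketch (1-indexed recurrences mapped onto 0-indexed Python lists, `tridiag_inverse` is an illustrative name):

```python
def tridiag_inverse(a, b, c):
    """Inverse of a nonsingular tridiagonal matrix via the theta/phi recurrences.

    a: diagonal (length n); b: superdiagonal, c: subdiagonal (length n - 1).
    theta[i] and phi[i] hold theta_i and phi_i from the recurrences in the text.
    """
    n = len(a)
    theta = [0.0] * (n + 1)
    theta[0], theta[1] = 1.0, a[0]                 # theta_0 = 1, theta_1 = a_1
    for i in range(2, n + 1):
        theta[i] = a[i - 1] * theta[i - 1] - b[i - 2] * c[i - 2] * theta[i - 2]
    phi = [0.0] * (n + 2)
    phi[n + 1], phi[n] = 1.0, a[n - 1]             # phi_{n+1} = 1, phi_n = a_n
    for i in range(n - 1, 0, -1):
        phi[i] = a[i - 1] * phi[i + 1] - b[i - 1] * c[i - 1] * phi[i + 2]
    det = theta[n]                                 # theta_n is det(T)
    inv = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i < j:
                prod = 1.0
                for k in range(i, j):              # b_i ... b_{j-1}
                    prod *= b[k - 1]
                inv[i - 1][j - 1] = (-1) ** (i + j) * prod * theta[i - 1] * phi[j + 1] / det
            elif i == j:
                inv[i - 1][j - 1] = theta[i - 1] * phi[i + 1] / det
            else:
                prod = 1.0
                for k in range(j, i):              # c_j ... c_{i-1}
                    prod *= c[k - 1]
                inv[i - 1][j - 1] = (-1) ** (i + j) * prod * theta[j - 1] * phi[i + 1] / det
    return inv
```

As written this costs O(n²) products for the off-diagonal entries; the prefix products of the b's and c's can be cached to bring the whole inverse to O(n²) total work.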

Closed-form solutions can be computed for special cases, such as symmetric matrices with all diagonal and off-diagonal elements equal<ref>Template:Cite journal</ref> or Toeplitz matrices,<ref>Template:Cite journal</ref> and for the general case as well.<ref>Template:Cite journal</ref><ref>Template:Cite journal</ref>

In general, the inverse of a tridiagonal matrix is a semiseparable matrix and vice versa.<ref name="VandebrilBarel2008">Template:Cite book</ref> The inverse of a symmetric tridiagonal matrix can be written as a single-pair matrix (a.k.a. generator-representable semiseparable matrix) of the form<ref name="Meurant1992">Template:Cite journal</ref><ref>Template:Cite journal</ref>

<math>\begin{pmatrix}
\alpha_1 & -\beta_1 \\
-\beta_1 & \alpha_2 & -\beta_2 \\
& \ddots & \ddots & \ddots \\
& & \ddots & \ddots & -\beta_{n-1} \\
& & & -\beta_{n-1} & \alpha_n
\end{pmatrix}^{-1} =
\begin{pmatrix}
a_1 b_1 & a_1 b_2 & \cdots & a_1 b_n \\
a_1 b_2 & a_2 b_2 & \cdots & a_2 b_n \\
\vdots & \vdots & \ddots & \vdots \\
a_1 b_n & a_2 b_n & \cdots & a_n b_n
\end{pmatrix}
= \left( a_{\min(i,j)} b_{\max(i,j)} \right)</math>

where <math>\begin{cases} \displaystyle a_i = \frac{\beta_{i} \cdots \beta_{n-1}}{\delta_i \cdots \delta_n\, b_n} \\ \displaystyle b_i = \frac{\beta_1 \cdots \beta_{i-1}}{d_1 \cdots d_i} \end{cases}</math> with <math>\begin{cases}
d_n = \alpha_n, \quad d_{i-1} = \alpha_{i-1} - \dfrac{\beta_{i-1}^2}{d_{i}}, & i = n, n-1, \ldots, 2, \\
\delta_1 = \alpha_1, \quad \delta_{i+1} = \alpha_{i+1} - \dfrac{\beta_{i}^2}{\delta_{i}}, & i = 1, 2, \ldots, n-1.
\end{cases}</math>
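Assuming all the <math>d_i</math> and <math>\delta_i</math> are nonzero, the recurrences can be sketched as follows (the function name `single_pair_inverse` is illustrative; `alpha` is the diagonal and `beta` the magnitude of the off-diagonal, with the sign convention of the matrix above):

```python
def single_pair_inverse(alpha, beta):
    """Inverse of the symmetric tridiagonal matrix with diagonal alpha and
    off-diagonal -beta, returned as the single-pair form a_min(i,j) * b_max(i,j)."""
    n = len(alpha)
    d = [0.0] * (n + 1)                # 1-indexed; d_n = alpha_n, backward recurrence
    d[n] = alpha[n - 1]
    for i in range(n, 1, -1):
        d[i - 1] = alpha[i - 2] - beta[i - 2] ** 2 / d[i]
    delta = [0.0] * (n + 1)            # delta_1 = alpha_1, forward recurrence
    delta[1] = alpha[0]
    for i in range(1, n):
        delta[i + 1] = alpha[i] - beta[i - 1] ** 2 / delta[i]
    # b_i = beta_1 ... beta_{i-1} / (d_1 ... d_i)
    b = [0.0] * (n + 1)
    num, den = 1.0, 1.0
    for i in range(1, n + 1):
        if i > 1:
            num *= beta[i - 2]
        den *= d[i]
        b[i] = num / den
    # a_i = beta_i ... beta_{n-1} / (delta_i ... delta_n * b_n)
    a = [0.0] * (n + 1)
    for i in range(1, n + 1):
        num = 1.0
        for k in range(i, n):
            num *= beta[k - 1]
        den = b[n]
        for k in range(i, n + 1):
            den *= delta[k]
        a[i] = num / den
    return [[a[min(i, j)] * b[max(i, j)] for j in range(1, n + 1)]
            for i in range(1, n + 1)]
```

For the second-difference matrix with <math>\alpha_i = 2</math>, <math>\beta_i = 1</math> and n = 3, this reproduces the known inverse <math>\tfrac{1}{4}\begin{pmatrix}3&2&1\\2&4&2\\1&2&3\end{pmatrix}</math>.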

Solution of linear system

A system of equations Ax = b for <math>b \in \R^n</math> can be solved by an efficient form of Gaussian elimination when A is tridiagonal, called the tridiagonal matrix algorithm, requiring O(n) operations.<ref>Template:Cite book</ref>
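This elimination, often called the Thomas algorithm, can be sketched as follows (no pivoting is performed, so it assumes the system is well conditioned, e.g. diagonally dominant; the function name is illustrative):

```python
def thomas_solve(a, b, c, d):
    """Solve T x = d for tridiagonal T in O(n).

    a: diagonal (length n); b: superdiagonal, c: subdiagonal (length n - 1);
    d: right-hand side (length n).
    """
    n = len(a)
    cp = [0.0] * (n - 1)   # modified superdiagonal
    dp = [0.0] * n         # modified right-hand side
    cp[0] = b[0] / a[0]
    dp[0] = d[0] / a[0]
    # Forward sweep: eliminate the subdiagonal.
    for i in range(1, n):
        m = a[i] - c[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = b[i] / m
        dp[i] = (d[i] - c[i - 1] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each entry is touched a constant number of times, giving the O(n) cost quoted above, compared with O(n³) for Gaussian elimination on a dense matrix.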

Eigenvalues

When a tridiagonal matrix is also Toeplitz, there is a simple closed-form solution for its eigenvalues, namely:<ref>Template:Cite journal</ref><ref>This can also be written as <math> a + 2 \sqrt{bc} \cos(k \pi / {(n+1)}) </math> because <math> \cos(x) = -\cos(\pi-x) </math>, as is done in: Template:Cite journal</ref>

<math> a - 2 \sqrt{bc} \cos \left (\frac{k\pi}{n+1} \right ), \qquad k=1, \ldots, n. </math>
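A minimal numerical rendering of this formula (assuming <math>bc \ge 0</math> so the square root is real; the function name is illustrative):

```python
import math

def toeplitz_tridiag_eigs(a, b, c, n):
    """Closed-form eigenvalues of the n x n tridiagonal Toeplitz matrix with
    constant diagonal a, superdiagonal b, and subdiagonal c (b*c >= 0 assumed)."""
    return [a - 2.0 * math.sqrt(b * c) * math.cos(k * math.pi / (n + 1))
            for k in range(1, n + 1)]
```

For example, the 2-by-2 matrix with a = 2 and b = c = −1 has eigenvalues 1 and 3, which the formula reproduces with k = 1, 2.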

A real symmetric tridiagonal matrix has real eigenvalues, and all the eigenvalues are distinct (simple) if all off-diagonal elements are nonzero.<ref>Template:Cite book</ref> Numerous methods exist for the numerical computation of the eigenvalues of a real symmetric tridiagonal matrix to arbitrary finite precision, typically requiring <math>O(n^2)</math> operations for a matrix of size <math>n\times n</math>, although fast algorithms exist which (without parallel computation) require only <math>O(n\log n)</math>.<ref>Template:Cite journal</ref>

An unreduced symmetric tridiagonal matrix is one whose off-diagonal elements are all nonzero; its eigenvalues are distinct, and its eigenvectors are unique up to a scale factor and mutually orthogonal.<ref>Template:Cite thesis</ref>

Similarity to symmetric tridiagonal matrix

For a nonsymmetric (unsymmetric) tridiagonal matrix, one can compute the eigendecomposition using a similarity transformation. Given a real nonsymmetric tridiagonal matrix

<math>T = \begin{pmatrix}
a_1 & b_1 \\
c_1 & a_2 & b_2 \\
& c_2 & \ddots & \ddots \\
& & \ddots & \ddots & b_{n-1} \\
& & & c_{n-1} & a_n
\end{pmatrix}</math>

where <math>b_i \neq c_i</math>. Assume that each product of off-diagonal entries is strictly positive, <math>b_i c_i > 0</math>, and define a transformation matrix <math>D</math> by<ref name=kreer_operator>Template:Cite journal</ref>

<math>D := \operatorname{diag}(\delta_1, \dots, \delta_n) \quad \text{for} \quad \delta_i := \begin{cases} 1, & i = 1 \\ \sqrt{\dfrac{c_{i-1} \cdots c_1}{b_{i-1} \cdots b_1}}, & i = 2, \dots, n. \end{cases}</math>

The similarity transformation <math>D^{-1} T D </math> yields a symmetric tridiagonal matrix <math>J</math> by:<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref name=kreer_operator/>

<math>J := D^{-1} T D = \begin{pmatrix}
a_1 & \sgn b_1 \, \sqrt{b_1 c_1} \\
\sgn b_1 \, \sqrt{b_1 c_1} & a_2 & \sgn b_2 \, \sqrt{b_2 c_2} \\
& \sgn b_2 \, \sqrt{b_2 c_2} & \ddots & \ddots \\
& & \ddots & \ddots & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} \\
& & & \sgn b_{n-1} \, \sqrt{b_{n-1} c_{n-1}} & a_n
\end{pmatrix} \,.</math>

Note that <math>T</math> and <math>J</math> have the same eigenvalues.
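The transformation can be sketched as follows, returning the diagonal of <math>D</math> and the off-diagonal of <math>J</math> (the function name is illustrative):

```python
import math

def symmetrize_tridiag(a, b, c):
    """Given a nonsymmetric tridiagonal T (diagonal a, superdiagonal b,
    subdiagonal c) with b_i * c_i > 0, return the diagonal of D and the
    off-diagonal of the similar symmetric matrix J = D^{-1} T D."""
    n = len(a)
    # delta_i = delta_{i-1} * sqrt(c_{i-1} / b_{i-1}), delta_1 = 1
    delta = [1.0]
    for i in range(1, n):
        delta.append(delta[-1] * math.sqrt(c[i - 1] / b[i - 1]))
    # Off-diagonal of J: sgn(b_i) * sqrt(b_i * c_i)
    off = [math.copysign(math.sqrt(b[i] * c[i]), b[i]) for i in range(n - 1)]
    return delta, off
```

The diagonal of <math>J</math> equals that of <math>T</math>, so `a` together with `off` specifies the symmetric matrix, to which symmetric eigensolvers can then be applied.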

Computer programming

A transformation that reduces a general matrix to Hessenberg form will reduce a Hermitian matrix to tridiagonal form. So, many eigenvalue algorithms, when applied to a Hermitian matrix, reduce the input Hermitian matrix to (symmetric real) tridiagonal form as a first step.<ref>Template:Cite journal</ref>

A tridiagonal matrix can also be stored more efficiently than a general matrix by using a special storage scheme. For instance, the LAPACK Fortran package stores an unsymmetric tridiagonal matrix of order n in three one-dimensional arrays, one of length n containing the diagonal elements, and two of length n − 1 containing the subdiagonal and superdiagonal elements.
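The three-array layout pays off in operations as well as storage; for example, a matrix-vector product touches only the 3n − 2 stored entries. A sketch (the names `dl`, `d`, `du` mirror LAPACK's DL, D, DU arrays; the function name is illustrative):

```python
def tridiag_matvec(dl, d, du, x):
    """Compute y = T x for a tridiagonal T stored as three arrays:
    d: diagonal (length n); dl: subdiagonal, du: superdiagonal (length n - 1).
    O(n) time and O(n) storage, versus O(n^2) for a dense matrix."""
    n = len(d)
    y = [d[i] * x[i] for i in range(n)]
    for i in range(n - 1):
        y[i] += du[i] * x[i + 1]      # superdiagonal contribution
        y[i + 1] += dl[i] * x[i]      # subdiagonal contribution
    return y
```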

Applications

The discretization in space of the one-dimensional diffusion or heat equation

<math>\frac{\partial u(t,x)}{\partial t} = \alpha \frac{\partial^2 u(t,x)}{\partial x^2}</math>

using second-order central finite differences results in

<math>\begin{pmatrix}
\frac{\partial u_{1}(t)}{\partial t} \\
\frac{\partial u_{2}(t)}{\partial t} \\
\vdots \\
\frac{\partial u_{N}(t)}{\partial t}
\end{pmatrix}
= \frac{\alpha}{\Delta x^2} \begin{pmatrix}
-2 & 1 & 0 & \ldots & 0 \\
1 & -2 & 1 & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & & 1 & -2 & 1 \\
0 & \ldots & 0 & 1 & -2
\end{pmatrix} \begin{pmatrix}
u_{1}(t) \\
u_{2}(t) \\
\vdots \\
u_{N}(t)
\end{pmatrix}</math>

with discretization constant <math>\Delta x</math>. The matrix is tridiagonal with <math>a_{i} = -2</math> and <math>b_{i} = c_{i} = 1</math>. Note: no boundary conditions are used here.
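The semidiscrete right-hand side can be evaluated without forming the matrix at all; a minimal sketch (boundary terms are omitted, matching the text; the function name is illustrative):

```python
def heat_semidiscrete_rhs(u, alpha, dx):
    """du/dt = (alpha / dx^2) * L u for the 1-D heat equation, where L is the
    tridiagonal second-difference matrix (-2 on the diagonal, 1 on the
    off-diagonals). Missing neighbors at the ends are treated as zero."""
    n = len(u)
    r = alpha / dx ** 2
    return [r * ((u[i - 1] if i > 0 else 0.0)
                 - 2.0 * u[i]
                 + (u[i + 1] if i < n - 1 else 0.0))
            for i in range(n)]
```

Coupling this with a time integrator (e.g. backward Euler, which requires a tridiagonal solve per step) is the standard route to simulating the heat equation.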


Notes

<references/>
