Matrix exponential


In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group.

Let <math>X</math> be an <math>n \times n</math> real or complex matrix. The exponential of <math>X</math>, denoted by <math>e^X</math> or <math>\exp(X)</math>, is the <math>n \times n</math> matrix given by the power series

<math display="block">e^X = \sum_{k=0}^\infty \frac{1}{k!} X^k</math>

where <math>X^0</math> is defined to be the identity matrix <math>I</math> with the same dimensions as <math>X</math>, and <math>X^k</math> denotes the <math>k</math>-th matrix power of <math>X</math>.<ref>Template:Harvnb Equation 2.1</ref> The series always converges, so the exponential of <math>X</math> is well-defined.
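The series can be checked numerically against a library implementation; the following minimal sketch (using NumPy and SciPy's <code>expm</code>; the particular matrix is an arbitrary example) sums the first few terms:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # an arbitrary 2 x 2 example

# Partial sums of sum_{k >= 0} X^k / k!
S = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ X / k          # term is now X^k / k!
    S += term

print(np.allclose(S, expm(X)))   # True: the truncated series matches expm
</syntaxhighlight>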

Equivalently, <math display="block">e^X = \lim_{k \rightarrow \infty} \left(I + \frac{X}{k} \right)^k</math>

for integer values of <math>k</math>, where <math>I</math> is the <math>n \times n</math> identity matrix.

Equivalently, <math>e^{Xt}</math> is given by the solution to the differential equation

<math display="block">\frac d {dt} e^{X t} = X e^{X t}, \quad e^{X 0} = I</math>

When <math>X</math> is an <math>n \times n</math> diagonal matrix, <math>e^X</math> will be an <math>n \times n</math> diagonal matrix with each diagonal element equal to the ordinary exponential applied to the corresponding diagonal element of <math>X</math>.

Properties

Elementary properties

Let <math>X</math> and <math>Y</math> be <math>n \times n</math> complex matrices and let <math>a</math> and <math>b</math> be arbitrary complex numbers. We denote the <math>n \times n</math> identity matrix by <math>I</math> and the zero matrix by 0. The matrix exponential satisfies the following properties.<ref>Template:Harvnb Proposition 2.3</ref>

We begin with the properties that are immediate consequences of the definition as a power series:

  • <math>e^0 = I</math>
  • <math>\exp\left(X^\mathsf{T}\right) = \left(\exp X\right)^\mathsf{T}</math>, where <math>X^\mathsf{T}</math> denotes the transpose of <math>X</math>
  • <math>\exp\left(X^*\right) = \left(\exp X\right)^*</math>, where <math>X^*</math> denotes the conjugate transpose of <math>X</math>
  • If <math>Y</math> is invertible, then <math>e^{YXY^{-1}} = Ye^XY^{-1}</math>

The next key result is this one:

  • If <math>XY=YX</math> then <math>e^Xe^Y=e^{X+Y}</math>.

The proof of this identity is the same as the standard power-series argument for the corresponding identity for the exponential of real numbers. That is to say, as long as <math>X</math> and <math>Y</math> commute, it makes no difference to the argument whether <math>X</math> and <math>Y</math> are numbers or matrices. This identity typically does not hold if <math>X</math> and <math>Y</math> do not commute (see the Golden–Thompson inequality below).

Consequences of the preceding identity are the following:

  • <math>e^{aX}e^{bX} = e^{(a+b)X}</math>
  • <math>e^Xe^{-X} = I</math>, i.e. <math>e^X</math> is invertible with inverse <math>e^{-X}</math>

Using the above results, we can easily verify the following claims. If <math>X</math> is symmetric then <math>e^X</math> is also symmetric, and if <math>X</math> is skew-symmetric then <math>e^X</math> is orthogonal. If <math>X</math> is Hermitian then <math>e^X</math> is also Hermitian, and if <math>X</math> is skew-Hermitian then <math>e^X</math> is unitary.

Finally, a Laplace transform of matrix exponentials amounts to the resolvent, <math display="block">\int_0^\infty e^{-ts}e^{tX}\,dt = (sI - X)^{-1}</math> for all sufficiently large positive values of <math>s</math>.

Linear differential equation systems


One of the reasons for the importance of the matrix exponential is that it can be used to solve systems of linear ordinary differential equations. The solution of <math display="block"> \frac{d}{dt} y(t) = Ay(t), \quad y(0) = y_0, </math> where <math>A</math> is a constant matrix and <math>y</math> is a column vector, is given by <math display="block"> y(t) = e^{At} y_0. </math>
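The solution formula can be sanity-checked against a direct numerical integration; the sketch below (the matrix, initial data, and the crude Euler integrator are all just illustrative choices) compares the two:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

A = np.array([[2., -1.,  1.],
              [0.,  3., -1.],
              [2.,  1.,  3.]])
y0 = np.array([1., 1., 1.])

t = 0.1
y = expm(A * t) @ y0                 # y(t) = e^{At} y_0

# Cross-check with forward Euler (only first-order accurate)
h, steps = 1e-5, 10000               # steps * h == t
z = y0.copy()
for _ in range(steps):
    z = z + h * (A @ z)

print(np.allclose(y, z, rtol=1e-3))  # True
</syntaxhighlight>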

The matrix exponential can also be used to solve the inhomogeneous equation <math display="block"> \frac{d}{dt} y(t) = Ay(t) + z(t), \quad y(0) = y_0. </math> See the section on applications below for examples.

There is no closed-form solution for differential equations of the form <math display="block"> \frac{d}{dt} y(t) = A(t) \, y(t), \quad y(0) = y_0, </math> where <math>A(t)</math> is not constant, but the Magnus series gives the solution as an infinite sum.

The determinant of the matrix exponential

By Jacobi's formula, for any complex square matrix the following trace identity holds:<ref>Template:Harvnb Theorem 2.12</ref> <math display="block">\det\left(e^A\right) = e^{\operatorname{tr}(A)} ~.</math>

In addition to providing a computational tool, this formula demonstrates that a matrix exponential is always an invertible matrix. This follows from the fact that the right hand side of the above equation is always non-zero, and so <math>\det e^A \ne 0</math>, which implies that <math>e^A</math> must be invertible.

In the real-valued case, the formula also exhibits the map <math display="block">\exp \colon M_n(\R) \to \mathrm{GL}(n, \R)</math> to not be surjective, in contrast to the complex case mentioned earlier. This follows from the fact that, for real-valued matrices, the right-hand side of the formula is always positive, while there exist invertible matrices with a negative determinant.
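A quick numerical check of the determinant identity (a sketch; the random test matrix is arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

print(np.isclose(np.linalg.det(expm(A)),   # det(e^A)
                 np.exp(np.trace(A))))     # e^{tr A}  -> True
</syntaxhighlight>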

Real symmetric matrices

The matrix exponential of a real symmetric matrix is positive definite. Let <math>S</math> be an <math>n \times n</math> real symmetric matrix and <math>x \in \R^n</math> a column vector. Using the elementary properties of the matrix exponential and of symmetric matrices, we have:

<math display="block">x^Te^Sx=x^Te^{S/2}e^{S/2}x=x^T(e^{S/2})^Te^{S/2}x =(e^{S/2}x)^Te^{S/2}x=\lVert e^{S/2}x\rVert^2\geq 0.</math>

Since <math>e^{S/2}</math> is invertible, the equality only holds for <math>x=0</math>, and we have <math>x^Te^Sx > 0</math> for all non-zero <math>x</math>. Hence <math>e^S</math> is positive definite.

The exponential of sums

For any real numbers (scalars) <math>x</math> and <math>y</math> we know that the exponential function satisfies <math>e^{x+y} = e^x e^y</math>. The same is true for commuting matrices. If matrices <math>X</math> and <math>Y</math> commute (meaning that <math>XY = YX</math>), then <math display="block">e^{X+Y} = e^Xe^Y.</math>

However, for matrices that do not commute the above equality does not necessarily hold.
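The following sketch illustrates both cases numerically (the matrices are arbitrary examples):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

X = np.array([[1., 2.],
              [0., 1.]])
Y = 3.0 * X                            # scalar multiples always commute
print(np.allclose(expm(X + Y), expm(X) @ expm(Y)))   # True

P = np.array([[0., 1.], [0., 0.]])
Q = np.array([[0., 0.], [1., 0.]])     # PQ != QP
print(np.allclose(expm(P + Q), expm(P) @ expm(Q)))   # False
</syntaxhighlight>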

The Lie product formula

Even if <math>X</math> and <math>Y</math> do not commute, the exponential <math>e^{X+Y}</math> can be computed by the Lie product formula<ref>Template:Harvnb Theorem 2.11</ref> <math display="block">e^{X+Y} = \lim_{k\to\infty} \left(e^{\frac{1}{k}X}e^{\frac{1}{k}Y}\right)^k.</math>

Using a large finite <math>k</math> to approximate the above is the basis of the Suzuki–Trotter expansion, often used in numerical time evolution.
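A numerical illustration of the Lie product formula for a non-commuting pair (a sketch; the error decays like <math>O(1/k)</math>):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])             # X and Y do not commute

k = 1000
trotter = np.linalg.matrix_power(expm(X / k) @ expm(Y / k), k)
print(np.linalg.norm(trotter - expm(X + Y)))   # small, shrinks as k grows
</syntaxhighlight>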

The Baker–Campbell–Hausdorff formula

In the other direction, if <math>X</math> and <math>Y</math> are sufficiently small (but not necessarily commuting) matrices, we have <math display="block">e^Xe^Y = e^Z,</math> where <math>Z</math> may be computed as a series in commutators of <math>X</math> and <math>Y</math> by means of the Baker–Campbell–Hausdorff formula:<ref>Template:Harvnb Chapter 5</ref> <math display="block">Z = X + Y + \frac{1}{2}[X,Y] + \frac{1}{12}[X,[X,Y]] - \frac{1}{12}[Y,[X,Y]]+ \cdots,</math> where the remaining terms are all iterated commutators involving <math>X</math> and <math>Y</math>. If <math>X</math> and <math>Y</math> commute, then all the commutators are zero and we have simply <math>Z = X + Y</math>.
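One can check the first terms of the series numerically (a sketch; <code>logm</code> is SciPy's matrix logarithm, and the matrices are scaled small so that the series applies):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm, logm

eps = 1e-2
X = eps * np.array([[0., 1.], [0., 0.]])
Y = eps * np.array([[0., 0.], [1., 0.]])

Z = logm(expm(X) @ expm(Y))              # exact Z with e^X e^Y = e^Z
bch = X + Y + 0.5 * (X @ Y - Y @ X)      # series truncated after [X, Y]/2
print(np.linalg.norm(Z - bch))           # O(eps^3): the dropped nested commutators
</syntaxhighlight>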

Inequalities for exponentials of Hermitian matrices

For Hermitian matrices there is a notable theorem related to the trace of matrix exponentials.

If <math>A</math> and <math>B</math> are Hermitian matrices, then<ref>Template:Cite book</ref> <math display="block">\operatorname{tr}\exp(A + B) \leq \operatorname{tr}\left[\exp(A)\exp(B)\right].</math>

There is no requirement of commutativity. There are counterexamples to show that the Golden–Thompson inequality cannot be extended to three matrices – and, in any event, <math>\operatorname{tr}(\exp(A)\exp(B)\exp(C))</math> is not guaranteed to be real for Hermitian <math>A</math>, <math>B</math>, <math>C</math>. However, Lieb proved<ref>Template:Cite journal</ref><ref>Template:Cite journal</ref> that it can be generalized to three matrices if we modify the expression as follows: <math display="block">\operatorname{tr}\exp(A + B + C) \leq \int_0^\infty \mathrm{d}t\, \operatorname{tr}\left[e^A\left(e^{-B} + t\right)^{-1}e^C \left(e^{-B} + t\right)^{-1}\right].</math>
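A numerical spot check of the Golden–Thompson inequality for a random Hermitian pair (a sketch, not a proof):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                 # Hermitian
B = (N + N.conj().T) / 2                 # Hermitian

lhs = np.trace(expm(A + B)).real
rhs = np.trace(expm(A) @ expm(B)).real   # real: trace of a product of positive definite factors
print(lhs <= rhs)                        # True
</syntaxhighlight>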

The exponential map

The exponential of a matrix is always an invertible matrix. The inverse matrix of <math>e^X</math> is given by <math>e^{-X}</math>. This is analogous to the fact that the exponential of a complex number is always nonzero. The matrix exponential then gives us a map <math display="block">\exp \colon M_n(\Complex) \to \mathrm{GL}(n, \Complex)</math> from the space of all n × n matrices to the general linear group of degree <math>n</math>, i.e. the group of all n × n invertible matrices. In fact, this map is surjective, which means that every invertible matrix can be written as the exponential of some other matrix<ref>Template:Harvnb Exercises 2.9 and 2.10</ref> (for this, it is essential to consider the field C of complex numbers and not R).

For any two matrices <math>X</math> and <math>Y</math>, <math display="block">\left\| e^{X+Y} - e^X\right\| \le \|Y\| e^{\|X\|} e^{\|Y\|}, </math>

where <math>\| \cdot \|</math> denotes an arbitrary matrix norm. It follows that the exponential map is continuous and Lipschitz continuous on compact subsets of <math>M_n(\Complex)</math>.

The map <math display="block">t \mapsto e^{tX}, \qquad t \in \R</math> defines a smooth curve in the general linear group which passes through the identity element at <math>t = 0</math>.

In fact, this gives a one-parameter subgroup of the general linear group since <math display="block">e^{tX}e^{sX} = e^{(t + s)X}.</math>

The derivative of this curve (or tangent vector) at a point <math>t</math> is given by <math display="block">\frac{d}{dt}e^{tX} = Xe^{tX} = e^{tX}X ~.</math> The derivative at <math>t = 0</math> is just the matrix <math>X</math>, which is to say that <math>X</math> generates this one-parameter subgroup.

More generally,<ref>Template:Cite journal</ref> for a generic <math>t</math>-dependent exponent, <math>X(t)</math>, <math display="block">\frac{d}{dt}e^{X(t)} = \int_0^1 e^{\alpha X(t)} \frac{dX(t)}{dt} e^{(1-\alpha) X(t)}\,d\alpha ~.</math>

Taking the above expression <math>e^{X(t)}</math> outside the integral sign and expanding the integrand with the help of the Hadamard lemma, one can obtain the following useful expression for the derivative of the matrix exponential,<ref>Template:Harvnb Theorem 5.4</ref> <math display="block">e^{-X(t)}\left(\frac{d}{dt}e^{X(t)}\right) = \frac{d}{dt}X(t) - \frac{1}{2!} \left[X(t), \frac{d}{dt}X(t)\right] + \frac{1}{3!} \left[X(t), \left[X(t), \frac{d}{dt}X(t)\right]\right] - \cdots </math>

The coefficients in the expression above are different from what appears in the exponential. For a closed form, see derivative of the exponential map.

Directional derivatives when restricted to Hermitian matrices

Let <math>X</math> be an <math>n \times n</math> Hermitian matrix with distinct eigenvalues. Let <math>X = E \textrm{diag}(\Lambda) E^*</math> be its eigen-decomposition where <math>E</math> is a unitary matrix whose columns are the eigenvectors of <math>X</math>, <math>E^*</math> is its conjugate transpose, and <math>\Lambda = \left(\lambda_1, \ldots, \lambda_n\right)</math> the vector of corresponding eigenvalues. Then, for any <math>n \times n</math> Hermitian matrix <math>V</math>, the directional derivative of <math>\exp \colon X \mapsto e^X</math> at <math>X</math> in the direction <math>V</math> is <ref name="lewis">Template:Cite journal See Theorem 3.3.</ref> <ref name="deledalle">Template:Cite journal See Propositions 1 and 2. </ref> <math display="block"> D \exp (X) [V] \triangleq \lim_{\epsilon \to 0} \frac{1}{\epsilon} \left(\displaystyle e^{X + \epsilon V} - e^{X} \right) = E(G \odot \bar{V}) E^* </math> where <math>\bar{V} = E^* V E</math>, the operator <math>\odot</math> denotes the Hadamard product, and, for all <math>1 \leq i, j \leq n</math>, the matrix <math>G</math> is defined as <math display="block"> G_{i, j} = \left\{\begin{align} & \frac{e^{\lambda_i} - e^{\lambda_j}}{\lambda_i - \lambda_j} & \text{ if } i \neq j,\\ & e^{\lambda_i} & \text{ otherwise}.\\ \end{align}\right. </math> In addition, for any <math>n \times n</math> Hermitian matrix <math>U</math>, the second directional derivative in directions <math>U</math> and <math>V</math> is<ref name="deledalle"/> <math display="block"> D^2 \exp (X) [U, V] \triangleq \lim_{\epsilon_u \to 0} \lim_{\epsilon_v \to 0} \frac{1}{4 \epsilon_u \epsilon_v} \left(\displaystyle e^{X + \epsilon_u U + \epsilon_v V} - e^{X - \epsilon_u U + \epsilon_v V} - e^{X + \epsilon_u U - \epsilon_v V} + e^{X - \epsilon_u U - \epsilon_v V} \right) = E F(U, V) E^* </math> where the matrix-valued function <math>F</math> is defined, for all <math>1 \leq i, j \leq n</math>, as <math display="block"> F(U, V)_{i,j} = \sum_{k=1}^n \phi_{i,j,k}(\bar{U}_{ik}\bar{V}_{jk}^* + \bar{V}_{ik}\bar{U}_{jk}^*) </math> with <math display="block"> \phi_{i,j,k} = \left\{\begin{align} & \frac{G_{ik} - G_{jk}}{\lambda_i - \lambda_j} & \text{ if } i \ne j,\\ & \frac{G_{ii} - G_{ik}}{\lambda_i - \lambda_k} & \text{ if } i = j \text{ and } k \ne i,\\ & \frac{G_{ii}}{2} & \text{ if } i = j = k.\\ \end{align}\right. </math>
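The first-order formula can be verified numerically against a finite difference (a sketch assuming NumPy/SciPy; the random Hermitian matrices generically have distinct eigenvalues):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = (M + M.conj().T) / 2                       # Hermitian base point
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V = (W + W.conj().T) / 2                       # Hermitian direction

lam, E = np.linalg.eigh(X)
Li, Lj = np.meshgrid(lam, lam, indexing="ij")
with np.errstate(divide="ignore", invalid="ignore"):
    G = (np.exp(Li) - np.exp(Lj)) / (Li - Lj)  # divided differences, i != j
np.fill_diagonal(G, np.exp(lam))               # diagonal entries: e^{lambda_i}

Vbar = E.conj().T @ V @ E
D = E @ (G * Vbar) @ E.conj().T                # E (G o Vbar) E*

eps = 1e-6                                     # central-difference comparison
fd = (expm(X + eps * V) - expm(X - eps * V)) / (2 * eps)
print(np.allclose(D, fd, atol=1e-5))           # True
</syntaxhighlight>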

Computing the matrix exponential

Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis. Matlab, GNU Octave, R, and SciPy all use the Padé approximant.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In this section, we discuss methods that are applicable in principle to any matrix, and which can be carried out explicitly for small matrices.<ref>See Template:Harvnb Section 2.2</ref> Subsequent sections describe methods suitable for numerical evaluation on large matrices.

Diagonalizable case

If a matrix is diagonal: <math display="block">A = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{bmatrix} ,</math> then its exponential can be obtained by exponentiating each entry on the main diagonal: <math display="block">e^A = \begin{bmatrix} e^{a_1} & 0 & \cdots & 0 \\ 0 & e^{a_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{a_n} \end{bmatrix} .</math>

This result also allows one to exponentiate diagonalizable matrices. If <math>A = UDU^{-1}</math> then <math display="block">e^A = Ue^DU^{-1},</math> which is especially easy to compute when <math>D</math> is diagonal.

Application of Sylvester's formula yields the same result. (To see this, note that addition and multiplication of diagonal matrices are equivalent to element-wise addition and multiplication, and hence so is exponentiation; in particular, the "one-dimensional" exponentiation is applied element-wise in the diagonal case.)

Example: Diagonalizable

For example, the matrix <math display="block"> A = \begin{bmatrix} 1 & 4\\ 1 & 1\\ \end{bmatrix}</math> can be diagonalized as <math display="block">\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}\begin{bmatrix} -1 & 0\\ 0 & 3\\ \end{bmatrix}\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}^{-1}.</math>

Thus, <math display="block">e^A = \begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}e^\begin{bmatrix} -1 & 0\\ 0 & 3\\ \end{bmatrix}\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}^{-1}=\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}\begin{bmatrix} \frac{1}{e} & 0\\ 0 & e^3\\ \end{bmatrix}\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}^{-1} = \begin{bmatrix} \frac{e^4+1}{2e} & \frac{e^4-1}{e}\\ \frac{e^4-1}{4e} & \frac{e^4+1}{2e}\\ \end{bmatrix}.</math>
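The same computation in code (a sketch; <code>eig</code> recovers the diagonalization used above):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

A = np.array([[1., 4.],
              [1., 1.]])

w, U = np.linalg.eig(A)               # A = U diag(w) U^{-1}, eigenvalues -1 and 3
E = U @ np.diag(np.exp(w)) @ np.linalg.inv(U)

print(np.allclose(E, expm(A)))        # True
print(np.isclose(E[0, 0], (np.e**4 + 1) / (2 * np.e)))  # matches the closed form
</syntaxhighlight>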

Nilpotent case

A matrix <math>N</math> is nilpotent if <math>N^q = 0</math> for some integer <math>q</math>. In this case, the matrix exponential <math>e^N</math> can be computed directly from the series expansion, as the series terminates after a finite number of terms:

<math display="block">e^N = I + N + \frac{1}{2}N^2 + \frac{1}{6}N^3 + \cdots + \frac{1}{(q - 1)!}N^{q-1} ~.</math>

Since the series has a finite number of terms, it is a matrix polynomial, which can be computed efficiently.
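For example (a sketch; this <math>4 \times 4</math> strictly upper triangular <math>N</math> satisfies <math>N^4 = 0</math>, so the series stops after four terms):

<syntaxhighlight lang="python">
import numpy as np
from math import factorial
from scipy.linalg import expm

N = np.diag([1., 2., 3.], k=1)     # strictly upper triangular: N**4 == 0

S = np.zeros((4, 4))
P = np.eye(4)
for k in range(4):                 # terms k = 0, ..., q - 1 = 3
    S += P / factorial(k)
    P = P @ N

print(np.allclose(S, expm(N)))     # True
</syntaxhighlight>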

General case

Using the Jordan–Chevalley decomposition

By the Jordan–Chevalley decomposition, any <math>n \times n</math> matrix <math>X</math> with complex entries can be expressed as <math display="block">X = A + N </math> where

  • A is diagonalizable
  • N is nilpotent
  • A commutes with N

This means that we can compute the exponential of X by reducing to the previous two cases: <math display="block">e^X = e^{A+N} = e^A e^N. </math>

Note that we need the commutativity of A and N for the last step to work.

Using the Jordan canonical form

A closely related method is, if the field is algebraically closed, to work with the Jordan form of <math>X</math>. Suppose that <math>X = PJP^{-1}</math> where <math>J</math> is the Jordan form of <math>X</math>. Then <math display="block">e^{X} = Pe^{J}P^{-1}.</math>

Also, since <math display="block">\begin{align}

   J &= J_{a_1}(\lambda_1) \oplus J_{a_2}(\lambda_2) \oplus \cdots \oplus J_{a_n}(\lambda_n), \\
 e^J &= \exp \big( J_{a_1}(\lambda_1) \oplus J_{a_2}(\lambda_2) \oplus \cdots \oplus J_{a_n}(\lambda_n) \big) \\
     &= \exp \big( J_{a_1}(\lambda_1) \big) \oplus \exp \big( J_{a_2}(\lambda_2) \big) \oplus \cdots \oplus \exp \big( J_{a_n}(\lambda_n) \big).

\end{align}</math>

Therefore, we need only know how to compute the matrix exponential of a Jordan block. But each Jordan block is of the form <math display="block">\begin{align}

 &            &     J_a(\lambda) &= \lambda I + N \\
 &\Rightarrow & e^{J_a(\lambda)} &= e^{\lambda I + N} = e^\lambda e^N.

\end{align}</math>

where <math>N</math> is a special nilpotent matrix. The matrix exponential of <math>J</math> is then given by <math display="block">e^J = e^{\lambda_1} e^{N_{a_1}} \oplus e^{\lambda_2} e^{N_{a_2}} \oplus \cdots \oplus e^{\lambda_n} e^{N_{a_n}}</math>

Projection case

If <math>P</math> is a projection matrix (i.e. <math>P</math> is idempotent: <math>P^2 = P</math>), its matrix exponential is <math display="block">e^P = I + (e - 1)P ~.</math>

Deriving this by expansion of the exponential function, each power of <math>P</math> reduces to <math>P</math>, which becomes a common factor of the sum: <math display="block">e^P = \sum_{k=0}^{\infty} \frac{P^k}{k!} = I + \left(\sum_{k=1}^{\infty} \frac{1}{k!}\right)P = I + (e - 1)P ~.</math>

Rotation case

For a simple rotation in which the perpendicular unit vectors <math>\mathbf{a}</math> and <math>\mathbf{b}</math> specify a plane,<ref>in a Euclidean space</ref> the rotation matrix <math>R</math> can be expressed in terms of a similar exponential function involving a generator <math>G</math> and angle <math>\theta</math>.<ref>Template:Cite book</ref><ref>Template:Cite book</ref> <math display="block">\begin{align}

 G &= \mathbf{ba}^\mathsf{T} - \mathbf{ab}^\mathsf{T} & P &= -G^2 = \mathbf{aa}^\mathsf{T} + \mathbf{bb}^\mathsf{T} \\
 P^2 &= P & PG &= G = GP ~,

\end{align}</math> <math display="block">\begin{align}

 R\left( \theta \right) = e^{G\theta}
   &= I + G\sin (\theta) + G^2(1 - \cos(\theta)) \\
   &= I - P + P\cos (\theta) + G\sin (\theta ) ~.\\

\end{align}</math>

The formula for the exponential results from reducing the powers of <math>G</math> in the series expansion and identifying the respective series coefficients of <math>G^2</math> and <math>G</math> with <math>1 - \cos(\theta)</math> and <math>\sin(\theta)</math> respectively. The second expression here for <math>e^{G\theta}</math> is the same as the expression for <math>R(\theta)</math> in the article containing the derivation of the generator, <math>R(\theta) = e^{G\theta}</math>.

In two dimensions, if <math>a = \left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]</math> and <math>b = \left[ \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right]</math>, then <math>G = \left[ \begin{smallmatrix} 0 & -1 \\ 1 & 0\end{smallmatrix} \right]</math>, <math>G^2 = \left[ \begin{smallmatrix}-1 & 0 \\ 0 & -1\end{smallmatrix} \right]</math>, and <math display="block">R(\theta) = \begin{bmatrix}\cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\end{bmatrix} = I \cos(\theta) + G \sin(\theta)</math> reduces to the standard matrix for a plane rotation.
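In code (a sketch; <code>expm</code> applied to the generator reproduces the rotation matrix):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

G = np.array([[0., -1.],
              [1.,  0.]])
theta = 0.7                                   # an arbitrary angle

R = expm(G * theta)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R, expected))               # True
</syntaxhighlight>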

The matrix <math>P = -G^2</math> projects a vector onto the <math>\mathbf{ab}</math>-plane and the rotation only affects this part of the vector. An example illustrating this is a rotation of <math>30^\circ = \pi/6</math> in the plane spanned by <math>\mathbf{a}</math> and <math>\mathbf{b}</math>,

<math display="block">\begin{align}

 \mathbf{a} &= \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} &
 \mathbf{b} &= \frac{1}{\sqrt{5}}\begin{bmatrix} 0 \\ 1 \\ 2 \\ \end{bmatrix}

\end{align}</math> <math display="block">\begin{align}

 G = \frac{1}{\sqrt{5}}&\begin{bmatrix}
    0 & -1 & -2 \\
    1 &  0 &  0 \\
    2 &  0 &  0 \\
 \end{bmatrix} &
 P = -G^2 &= \frac{1}{5}\begin{bmatrix}
    5 & 0 & 0 \\
    0 & 1 & 2 \\
    0 & 2 & 4 \\
 \end{bmatrix} \\
 P\begin{bmatrix} 1 \\ 2 \\ 3 \\ \end{bmatrix} =
   \frac{1}{5}&\begin{bmatrix} 5 \\ 8 \\ 16 \\ \end{bmatrix} =
   \mathbf{a} + \frac{8}{\sqrt{5}}\mathbf{b} &
 R\left(\frac{\pi}{6}\right) &= \frac{1}{10}\begin{bmatrix}
    5\sqrt{3} &      -\sqrt{5} &     -2\sqrt{5} \\
     \sqrt{5} &   8 + \sqrt{3} & -4 + 2\sqrt{3} \\
    2\sqrt{5} & -4 + 2\sqrt{3} &  2 + 4\sqrt{3} \\
 \end{bmatrix} \\

\end{align}</math>

Let <math>N = I - P</math>, so <math>N^2 = N</math> and its products with <math>P</math> and <math>G</math> are zero. This will allow us to evaluate powers of <math>R</math>.

<math display="block">\begin{align}

      R\left( \frac{\pi}{6} \right) &= N + P\frac{\sqrt{3}}{2} + G\frac{1}{2} \\
    R\left( \frac{\pi}{6} \right)^2 &= N + P\frac{1}{2} + G\frac{\sqrt{3}}{2} \\
    R\left( \frac{\pi}{6} \right)^3 &= N + G \\
    R\left( \frac{\pi}{6} \right)^6 &= N - P \\
 R\left( \frac{\pi}{6} \right)^{12} &= N + P = I \\

\end{align}</math>


Evaluation by Laurent series

By virtue of the Cayley–Hamilton theorem the matrix exponential is expressible as a polynomial of degree <math>n-1</math>.

If <math>P</math> and <math>Q_t</math> are nonzero polynomials in one variable, such that <math>P(A) = 0</math>, and if the meromorphic function <math display="block">f(z)=\frac{e^{t z}-Q_t(z)}{P(z)}</math> is entire, then <math display="block">e^{t A} = Q_t(A).</math> To prove this, multiply the first of the two above equalities by <math>P(z)</math> and replace <math>z</math> by <math>A</math>.

Such a polynomial <math>Q_t(z)</math> can be found as follows (see Sylvester's formula). Letting <math>a</math> be a root of <math>P</math>, <math>Q_{a,t}(z)</math> is solved from the product of <math>P</math> by the principal part of the Laurent series of <math>f</math> at <math>a</math>: it is proportional to the relevant Frobenius covariant. Then the sum <math>S_t</math> of the <math>Q_{a,t}</math>, where <math>a</math> runs over all the roots of <math>P</math>, can be taken as a particular <math>Q_t</math>. All the other <math>Q_t</math> will be obtained by adding a multiple of <math>P</math> to <math>S_t(z)</math>. In particular, <math>S_t(z)</math>, the Lagrange−Sylvester polynomial, is the only <math>Q_t</math> whose degree is less than that of <math>P</math>.

Example: Consider the case of an arbitrary <math>2 \times 2</math> matrix, <math display="block">A := \begin{bmatrix}

 a & b \\
 c & d

\end{bmatrix}.</math>

The exponential matrix <math>e^{tA}</math>, by virtue of the Cayley–Hamilton theorem, must be of the form <math display="block">e^{tA} = s_0(t)\, I + s_1(t)\,A.</math>

(For any complex number <math>z</math> and any C-algebra <math>B</math>, we denote again by <math>z</math> the product of <math>z</math> by the unit of <math>B</math>.)

Let <math>\alpha</math> and <math>\beta</math> be the roots of the characteristic polynomial of <math>A</math>, <math display="block">P(z) = z^2 - (a + d)\ z + ad - bc = (z - \alpha)(z - \beta) ~ .</math>

Then we have <math display="block">S_t(z) = e^{\alpha t} \frac{z - \beta}{\alpha - \beta} + e^{\beta t} \frac{z - \alpha}{\beta - \alpha}~,</math> hence <math display="block">\begin{align}

 s_0(t) &= \frac{\alpha\,e^{\beta t} - \beta\,e^{\alpha t}}{\alpha - \beta}, &
 s_1(t) &= \frac{e^{\alpha t} - e^{\beta t}}{\alpha - \beta}

\end{align}</math>

if <math>\alpha \ne \beta</math>; while, if <math>\alpha = \beta</math>, <math display="block">S_t(z) = e^{\alpha t} (1 + t (z - \alpha)) ~,</math>

so that <math display="block">\begin{align}

 s_0(t) &= (1 - \alpha\,t)\,e^{\alpha t},&
 s_1(t) &= t\,e^{\alpha t}~.

\end{align}</math>

Defining <math display="block">\begin{align}

 s &\equiv \frac{\alpha + \beta}{2} = \frac{\operatorname{tr} A}{2}~, &
 q &\equiv \frac{\alpha - \beta}{2} = \pm\sqrt{-\det\left(A - sI\right)},

\end{align}</math>

we have <math display="block">\begin{align}

 s_0(t) &= e^{st}\left(\cosh(qt) - s\frac{\sinh(qt)}{q}\right), &
 s_1(t) &= e^{st}\frac{\sinh(qt)}{q},

\end{align}</math>

where <math>\frac{\sinh(qt)}{q}</math> is 0 if <math>t = 0</math>, and <math>t</math> if <math>q = 0</math>.

Thus, <math display="block">e^{tA} = e^{st}\left(\left(\cosh(qt) - s\frac{\sinh(qt)}{q}\right)I + \frac{\sinh(qt)}{q}\,A\right) ~.</math>

Thus, as indicated above, the matrix <math>A</math> having decomposed into the sum of two mutually commuting pieces, the traceful piece and the traceless piece, <math display="block">A = sI + (A-sI)~,</math>

the matrix exponential reduces to a plain product of the exponentials of the two respective pieces. This is a formula often used in physics, as it amounts to the analog of Euler's formula for Pauli spin matrices, that is rotations of the doublet representation of the group SU(2).

The polynomial <math>S_t</math> can also be given the following "interpolation" characterization. Define <math>e_t(z) \equiv e^{tz}</math>, and <math>n \equiv \deg P</math>. Then <math>S_t(z)</math> is the unique degree <math>< n</math> polynomial which satisfies <math>S_t^{(k)}(a) = e_t^{(k)}(a)</math> whenever <math>k</math> is less than the multiplicity of <math>a</math> as a root of <math>P</math>. We assume, as we obviously can, that <math>P</math> is the minimal polynomial of <math>A</math>. We further assume that <math>A</math> is a diagonalizable matrix. In particular, the roots of <math>P</math> are simple, and the "interpolation" characterization indicates that <math>S_t</math> is given by the Lagrange interpolation formula, so it is the Lagrange−Sylvester polynomial.

At the other extreme, if <math>P = (z - a)^n</math>, then <math display="block">S_t = e^{at}\ \sum_{k=0}^{n-1}\ \frac{t^k}{k!}\ (z - a)^k ~.</math>

The simplest case not covered by the above observations is when <math>P = (z - a)^2\,(z - b)</math> with <math>a \ne b</math>, which yields <math display="block">S_t = e^{at}\ \frac{z - b}{a - b}\ \left(1 + \left(t + \frac{1}{b - a}\right)(z - a)\right) + e^{bt}\ \frac{(z - a)^2}{(b - a)^2}.</math>
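The closed form for the <math>2 \times 2</math> case above translates directly into code (a sketch; the complex square root keeps the formula valid whether <math>q</math> is real or imaginary):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

A = np.array([[1., 2.],
              [3., 4.]])                      # an arbitrary 2 x 2 matrix
t = 0.5

s = np.trace(A) / 2
q = np.sqrt(-np.linalg.det(A - s * np.eye(2)) + 0j)

s1 = np.exp(s * t) * np.sinh(q * t) / q if q != 0 else t * np.exp(s * t)
s0 = np.exp(s * t) * np.cosh(q * t) - s * s1

E = s0 * np.eye(2) + s1 * A
print(np.allclose(E, expm(t * A)))            # True
</syntaxhighlight>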

Evaluation by implementation of Sylvester's formula

A practical, expedited computation of the above reduces to the following rapid steps. Recall from above that an <math>n \times n</math> matrix <math>e^{tA}</math> amounts to a linear combination of the first <math>n-1</math> powers of <math>A</math> by the Cayley–Hamilton theorem. For diagonalizable matrices, as illustrated above, e.g. in the <math>2 \times 2</math> case, Sylvester's formula yields <math>e^{tA} = B_\alpha e^{\alpha t} + B_\beta e^{\beta t}</math>, where the <math>B</math>s are the Frobenius covariants of <math>A</math>.

It is easiest, however, to simply solve for these <math>B</math>s directly, by evaluating this expression and its first derivative at <math>t = 0</math>, in terms of <math>A</math> and <math>I</math>, to find the same answer as above.

But this simple procedure also works for defective matrices, in a generalization due to Buchheim.<ref>Rinehart, R. F. (1955). "The equivalence of definitions of a matric function". The American Mathematical Monthly, 62 (6), 395-414.</ref> This is illustrated here for a <math>4 \times 4</math> example of a matrix which is not diagonalizable, and the <math>B</math>s are not projection matrices.

Consider <math display="block">A = \begin{bmatrix}

 1 & 1 &           0 &            0 \\
 0 & 1 &           1 &            0 \\
 0 & 0 &           1 & -\frac{1}{8} \\
 0 & 0 & \frac{1}{2} &  \frac{1}{2}

\end{bmatrix} ~,</math> with eigenvalues <math>\lambda_1 = \frac{3}{4}</math> and <math>\lambda_2 = 1</math>, each with a multiplicity of two.

Consider the exponential of each eigenvalue multiplied by <math>t</math>, <math>e^{\lambda t}</math>. Multiply each exponentiated eigenvalue by the corresponding undetermined coefficient matrix <math>B_i</math>. If the eigenvalues have an algebraic multiplicity greater than 1, then repeat the process, but now multiplying by an extra factor of <math>t</math> for each repetition, to ensure linear independence.

(If one eigenvalue had a multiplicity of three, then there would be the three terms: <math>B_{i_1} e^{\lambda_i t}, ~ B_{i_2} t e^{\lambda_i t}, ~ B_{i_3} t^2 e^{\lambda_i t} </math>. By contrast, when all eigenvalues are distinct, the <math>B</math>s are just the Frobenius covariants, and solving for them as below just amounts to the inversion of the Vandermonde matrix of these 4 eigenvalues.)

Sum all such terms; here there are four: <math display="block">\begin{align}

 e^{At} &= B_{1_1} e^{\lambda_1 t} + B_{1_2} t e^{\lambda_1 t} + B_{2_1} e^{\lambda_2 t} + B_{2_2} t e^{\lambda_2 t} , \\
 e^{At} &= B_{1_1} e^{\frac{3}{4} t} + B_{1_2} t e^{\frac{3}{4} t} + B_{2_1} e^{1 t} + B_{2_2} t e^{1 t} ~.

\end{align}</math>

To solve for all of the unknown matrices <math>B</math> in terms of the first three powers of <math>A</math> and the identity, one needs four equations, the above one providing one such at <math>t = 0</math>. Further, differentiate it with respect to <math>t</math>, <math display="block">A e^{A t} = \frac{3}{4} B_{1_1} e^{\frac{3}{4} t} + \left( \frac{3}{4} t + 1 \right) B_{1_2} e^{\frac{3}{4} t} + 1 B_{2_1} e^{1 t} + \left(1 t + 1 \right) B_{2_2} e^{1 t} ~,</math>

and again, <math display="block">\begin{align}

 A^2 e^{At} &= \left(\frac{3}{4}\right)^2 B_{1_1} e^{\frac{3}{4} t} + \left( \left(\frac{3}{4}\right)^2 t + \left( \frac{3}{4} + 1 \cdot \frac{3}{4}\right) \right) B_{1_2} e^{\frac{3}{4} t} + B_{2_1} e^{1 t} + \left(1^2 t + (1 + 1 \cdot 1 )\right) B_{2_2} e^{1 t} \\
            &= \left(\frac{3}{4}\right)^2 B_{1_1} e^{\frac{3}{4} t} + \left( \left(\frac{3}{4}\right)^2 t + \frac{3}{2} \right) B_{1_2} e^{\frac{3}{4} t} + B_{2_1} e^{t} + \left(t + 2\right) B_{2_2} e^{t} ~,

\end{align}</math>

and once more, <math display="block">\begin{align}

 A^3 e^{At} &= \left(\frac{3}{4}\right)^3 B_{1_1} e^{\frac{3}{4} t} + \left( \left(\frac{3}{4}\right)^3 t + \left( \left(\frac{3}{4}\right)^2 + \left(\frac{3}{2}\right) \cdot \frac{3}{4}\right) \right) B_{1_2} e^{\frac{3}{4} t} + B_{2_1} e^{1 t} + \left(1^3 t + (1 + 2) \cdot 1 \right) B_{2_2} e^{1 t} \\
            &= \left(\frac{3}{4}\right)^3 B_{1_1} e^{\frac{3}{4} t}\! + \left( \left(\frac{3}{4}\right)^3 t\! + \frac{27}{16} \right) B_{1_2} e^{\frac{3}{4} t}\! + B_{2_1} e^{t}\! + \left(t + 3\cdot 1\right) B_{2_2} e^{t} ~.

\end{align}</math>

(In the general case, <math>n-1</math> derivatives need be taken.)

Setting <math>t = 0</math> in these four equations, the four coefficient matrices <math>B</math>s may now be solved for, <math display="block">\begin{align} I &= B_{1_1} + B_{2_1} \\ A &= \frac{3}{4} B_{1_1} + B_{1_2} + B_{2_1} + B_{2_2} \\ A^2 &= \left(\frac{3}{4}\right)^2 B_{1_1} + \frac{3}{2} B_{1_2} + B_{2_1} + 2 B_{2_2} \\ A^3 &= \left(\frac{3}{4}\right)^3 B_{1_1} + \frac{27}{16} B_{1_2} + B_{2_1} + 3 B_{2_2} ~, \end{align} </math>

to yield <math display="block">\begin{align} B_{1_1} &= 128 A^3 - 336 A^2 + 288 A - 80 I \\ B_{1_2} &= 16 A^3 - 44 A^2 + 40 A - 12 I \\ B_{2_1} &= -128 A^3 + 336 A^2 - 288 A + 81 I \\ B_{2_2} &= 16 A^3 - 40 A^2 + 33 A - 9 I ~. \end{align}</math>

Substituting the value for <math>A</math> yields the coefficient matrices <math display="block">\begin{align} B_{1_1} &= \begin{bmatrix}0 & 0 & 48 & -16\\ 0 & 0 & -8 & 2\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\\ B_{1_2} &= \begin{bmatrix}0 & 0 & 4 & -2\\ 0 & 0 & -1 & \frac{1}{2}\\ 0 & 0 & \frac{1}{4} & -\frac{1}{8}\\ 0 & 0 & \frac{1}{2} & -\frac{1}{4} \end{bmatrix}\\ B_{2_1} &= \begin{bmatrix}1 & 0 & -48 & 16\\ 0 & 1 & 8 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{bmatrix}\\ B_{2_2} &= \begin{bmatrix}0 & 1 & 8 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{bmatrix} \end{align}</math>

so the final answer is <math display="block">e^{tA} = \begin{bmatrix}

   e^t & te^t & \left(8t - 48\right) e^t\! + \left(4t + 48\right)e^{\frac{3}{4}t} & \left(16 - 2\,t\right)e^t\! + \left(-2t - 16\right)e^{\frac{3}{4}t}\\
   0 & e^t & 8e^t\! + \left(-t - 8\right) e^{\frac{3}{4}t} & -2e^t + \frac{t + 4}{2}e^{\frac{3}{4}t}\\
   0 & 0 & \frac{t + 4}{4}e^{\frac{3}{4}t} & -\frac{t}{8}e^{\frac{3}{4}t}\\
   0 & 0 & \frac{t}{2}e^{\frac{3}{4}t} & -\frac{t - 4}{4}e^{\frac{3}{4}t} ~.
 \end{bmatrix}

</math>

The procedure is much shorter than Putzer's algorithm sometimes utilized in such cases.
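The coefficient matrices above can be checked numerically (a sketch; the <math>B</math> combinations come from the formulas just derived):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

A = np.array([[1., 1., 0.,   0.   ],
              [0., 1., 1.,   0.   ],
              [0., 0., 1.,  -0.125],
              [0., 0., 0.5,  0.5  ]])
I = np.eye(4)
A2 = A @ A
A3 = A2 @ A

B11 = 128*A3 - 336*A2 + 288*A - 80*I
B12 =  16*A3 -  44*A2 +  40*A - 12*I
B21 = I - B11
B22 =  16*A3 -  40*A2 +  33*A -  9*I

t = 1.3
E = (B11 + t*B12) * np.exp(0.75*t) + (B21 + t*B22) * np.exp(t)
print(np.allclose(E, expm(t * A)))            # True
</syntaxhighlight>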


Illustrations

Suppose that we want to compute the exponential of <math display="block">B = \begin{bmatrix}

 21 & 17 &  6 \\
 -5 & -1 & -6 \\
  4 &  4 & 16

\end{bmatrix}.</math>

Its Jordan form is <math display="block">J = P^{-1}BP = \begin{bmatrix}

 4 &  0 &  0 \\
 0 & 16 &  1 \\
 0 &  0 & 16

\end{bmatrix},</math> where the matrix P is given by <math display="block">P = \begin{bmatrix}

 -\frac14 &  2 &  \frac54 \\
  \frac14 & -2 & -\frac14 \\
        0 &  4 &        0

\end{bmatrix}.</math>

Let us first calculate exp(J). We have <math display="block">J = J_1(4) \oplus J_2(16) </math>

The exponential of a <math>1 \times 1</math> matrix is just the exponential of the one entry of the matrix, so <math>\exp(J_1(4)) = \left[e^4\right]</math>. The exponential of <math>J_2(16)</math> can be calculated by the formula <math>e^{\lambda I + N} = e^\lambda e^N</math> mentioned above; this yields<ref>This can be generalized; in general, the exponential of <math>J_n(a)</math> is an upper triangular matrix with <math>e^a/0!</math> on the main diagonal, <math>e^a/1!</math> on the one above, <math>e^a/2!</math> on the next one, and so on.</ref>

<math display="block">\begin{align}

       &\exp \left( \begin{bmatrix} 16 & 1 \\ 0 & 16 \end{bmatrix} \right)
   = e^{16} \exp \left( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) = \\[6pt]
 {}={} &e^{16} \left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + {1 \over 2!}\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} + \cdots {} \right)
   = \begin{bmatrix} e^{16} & e^{16} \\ 0 & e^{16} \end{bmatrix}.

\end{align} </math>

Therefore, the exponential of the original matrix <math>B</math> is <math display="block">\begin{align}

 \exp(B)
 &= P \exp(J) P^{-1}
  = P \begin{bmatrix} e^4 & 0 & 0 \\ 0 & e^{16} & e^{16} \\ 0 & 0 & e^{16} \end{bmatrix} P^{-1} \\[6pt]
 &= {1 \over 4} \begin{bmatrix}
   13e^{16} - e^4 & 13e^{16} - 5e^4 &  2e^{16} - 2e^4 \\
   -9e^{16} + e^4 & -9e^{16} + 5e^4 & -2e^{16} + 2e^4 \\
   16e^{16}       & 16e^{16}        &  4e^{16}
 \end{bmatrix}.

\end{align}</math>
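The result can be confirmed numerically (a sketch comparing SciPy's <code>expm</code> with the closed form just obtained):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

B = np.array([[21., 17.,  6.],
              [-5., -1., -6.],
              [ 4.,  4., 16.]])

e4, e16 = np.exp(4), np.exp(16)
expected = 0.25 * np.array(
    [[13*e16 - e4,  13*e16 - 5*e4,  2*e16 - 2*e4],
     [-9*e16 + e4,  -9*e16 + 5*e4, -2*e16 + 2*e4],
     [16*e16,       16*e16,         4*e16       ]])

print(np.allclose(expm(B), expected))         # True
</syntaxhighlight>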

Applications

Linear differential equations

The matrix exponential has applications to systems of linear differential equations. (See also matrix differential equation.) Recall from earlier in this article that a homogeneous differential equation of the form <math display="block"> \mathbf{y}' = A\mathbf{y} </math> has solution <math>\mathbf{y}(t) = e^{At}\mathbf{y}(0)</math>.

If we consider the vector <math display="block"> \mathbf{y}(t) = \begin{bmatrix} y_1(t) \\ \vdots \\y_n(t) \end{bmatrix} ~,</math> we can express a system of inhomogeneous coupled linear differential equations as <math display="block"> \mathbf{y}'(t) = A\mathbf{y}(t)+\mathbf{b}(t).</math> Making an ansatz to use an integrating factor of <math>e^{-At}</math> and multiplying throughout, yields <math display="block">\begin{align}

 &            &       e^{-At}\mathbf{y}'-e^{-At}A\mathbf{y} &= e^{-At}\mathbf{b} \\
 &\Rightarrow &       e^{-At}\mathbf{y}'-Ae^{-At}\mathbf{y} &= e^{-At}\mathbf{b} \\
 &\Rightarrow & \frac{d}{dt} \left(e^{-At}\mathbf{y}\right) &= e^{-At}\mathbf{b}~.

\end{align}</math>

The second step is possible due to the fact that, if <math>AB = BA</math>, then <math>e^{At}B = Be^{At}</math>. So, calculating <math>e^{At}</math> leads to the solution to the system, by simply integrating the third step with respect to <math>t</math>.

A solution to this can be obtained by integrating and multiplying by <math>e^{\textbf{A}t}</math> to eliminate the exponent in the LHS. Notice that while <math>e^{\textbf{A}t}</math> is a matrix, given that it is a matrix exponential, we can say that <math>e^{\textbf{A}t} e^{-\textbf{A}t} = I</math>. In other words, <math>\exp\left(\textbf{A}t\right) = \left(\exp\left(-\textbf{A}t\right)\right)^{-1}</math>.

Example (homogeneous)

Consider the system <math display="block">\begin{matrix}

 x' &=& 2x & -y &  +z \\
 y' &=&    & 3y & -z \\
 z' &=& 2x & +y & +3z

\end{matrix}~.</math>

The associated defective matrix is <math display="block">A = \begin{bmatrix}

 2 & -1 &  1 \\
 0 &  3 & -1 \\
 2 &  1 &  3

\end{bmatrix}~.</math>

The matrix exponential is <math display="block">e^{tA} = \frac{1}{2}\begin{bmatrix}

   e^{2t}\left( 1 + e^{2t} - 2t\right) & -2te^{2t}       &  e^{2t}\left(-1 + e^{2t}\right) \\
  -e^{2t}\left(-1 + e^{2t} - 2t\right) &  2(t + 1)e^{2t} & -e^{2t}\left(-1 + e^{2t}\right) \\
   e^{2t}\left(-1 + e^{2t} + 2t\right) &  2te^{2t}       &  e^{2t}\left( 1 + e^{2t}\right)

\end{bmatrix}~,</math>

so that the general solution of the homogeneous system is <math display="block">\begin{bmatrix}x \\y \\ z\end{bmatrix} =

   \frac{x(0)}{2}\begin{bmatrix}e^{2t}\left(1 + e^{2t} - 2t\right) \\ -e^{2t}\left(-1 + e^{2t} - 2t\right) \\ e^{2t}\left(-1 + e^{2t} + 2t\right)\end{bmatrix}
 + \frac{y(0)}{2}\begin{bmatrix}-2te^{2t} \\ 2(t + 1)e^{2t} \\ 2te^{2t}\end{bmatrix}
 + \frac{z(0)}{2}\begin{bmatrix}e^{2t}\left(-1 + e^{2t}\right) \\ -e^{2t}\left(-1 + e^{2t}\right) \\ e^{2t}\left(1 + e^{2t}\right)\end{bmatrix} ~,

</math>

amounting to <math display="block">\begin{align}

 2x &= x(0)e^{2t}\left(1 + e^{2t} - 2t\right) + y(0)\left(-2te^{2t}\right) + z(0)e^{2t}\left(-1 + e^{2t}\right) \\[2pt]
 2y &= x(0)\left(-e^{2t}\right)\left(-1 + e^{2t} - 2t\right) + y(0)2(t + 1)e^{2t} + z(0)\left(-e^{2t}\right)\left(-1 + e^{2t}\right) \\[2pt]
 2z &= x(0)e^{2t}\left(-1 + e^{2t} + 2t\right) + y(0)2te^{2t} + z(0)e^{2t}\left(1 + e^{2t}\right) ~.

\end{align}</math>

Example (inhomogeneous)

Consider now the inhomogeneous system <math display="block">\begin{matrix}

 x' &=& 2x & - & y & + & z & + & e^{2t} \\
 y' &=&    &   & 3y& - & z & \\
 z' &=& 2x & + & y & + & 3z & + & e^{2t}

\end{matrix} ~.</math>

We again have <math display="block">A = \left[\begin{array}{rrr}

 2 & -1 &  1 \\
 0 &  3 & -1 \\
 2 &  1 &  3

\end{array}\right] ~,</math>

and <math display="block">\mathbf{b} = e^{2t}\begin{bmatrix}1 \\0\\1\end{bmatrix}.</math>

From before, we already have the general solution to the homogeneous equation. Since the sum of the homogeneous and particular solutions gives the general solution to the inhomogeneous problem, we now only need to find the particular solution.

We have, by above, <math display="block">\begin{align}

 \mathbf{y}_p
 &= e^{tA}\int_0^t e^{(-u)A}\begin{bmatrix}e^{2u} \\0\\e^{2u}\end{bmatrix}\,du+e^{tA}\mathbf{c} \\[6pt]
 &= e^{tA}\int_0^t \begin{bmatrix}
        2e^u - 2ue^{2u} & -2ue^{2u}    & 0 \\
   -2e^u + 2(u+1)e^{2u} & 2(u+1)e^{2u} & 0 \\
               2ue^{2u} & 2ue^{2u}     & 2e^u
 \end{bmatrix}\begin{bmatrix}e^{2u} \\0 \\e^{2u}\end{bmatrix}\,du + e^{tA}\mathbf{c} \\[6pt]
 &= e^{tA}\int_0^t \begin{bmatrix}
    e^{2u}\left( 2e^u - 2ue^{2u}\right) \\
    e^{2u}\left(-2e^u + 2(1 + u)e^{2u}\right) \\
   2e^{3u} + 2ue^{4u}
 \end{bmatrix}\,du + e^{tA}\mathbf{c} \\[6pt]
 &= e^{tA}\begin{bmatrix}
   -{1 \over 24}e^{3t}\left(3e^t(4t - 1) - 16\right) \\
    {1 \over 24}e^{3t}\left(3e^t(4t + 4) - 16\right) \\
    {1 \over 24}e^{3t}\left(3e^t(4t - 1) - 16\right)
 \end{bmatrix} + \begin{bmatrix}
          2e^t - 2te^{2t} &      -2te^{2t} & 0 \\
   -2e^t + 2(t + 1)e^{2t} & 2(t + 1)e^{2t} & 0 \\
                 2te^{2t} &       2te^{2t} & 2e^t
 \end{bmatrix}\begin{bmatrix}c_1 \\c_2 \\c_3\end{bmatrix} ~,

\end{align}</math> which could be further simplified to get the requisite particular solution determined through variation of parameters. Note c = yp(0). For more rigor, see the following generalization.

Inhomogeneous case generalization: variation of parameters

For the inhomogeneous case, we can use integrating factors (a method akin to variation of parameters). We seek a particular solution of the form <math>\mathbf{y}_p(t) = e^{tA}\,\mathbf{z}(t)</math>, <math display="block">\begin{align}

 \mathbf{y}_p'(t)
   & = \left(e^{tA}\right)'\mathbf{z}(t) + e^{tA}\mathbf{z}'(t) \\[6pt]
   & = Ae^{tA}\mathbf{z}(t) + e^{tA}\mathbf{z}'(t) \\[6pt]
   & = A\mathbf{y}_p(t) + e^{tA}\mathbf{z}'(t)~.

\end{align}</math>

For <math>\mathbf{y}_p</math> to be a solution, <math display="block">\begin{align}

 e^{tA}\mathbf{z}'(t) &= \mathbf{b}(t) \\[6pt]
       \mathbf{z}'(t) &= \left(e^{tA}\right)^{-1}\mathbf{b}(t) \\[6pt]
        \mathbf{z}(t) &= \int_0^t e^{-uA}\mathbf{b}(u)\,du + \mathbf{c} ~.

\end{align}</math>

Thus, <math display="block">\begin{align}

 \mathbf{y}_p(t)
   & = e^{tA}\int_0^t e^{-uA}\mathbf{b}(u)\,du + e^{tA}\mathbf{c} \\
   & = \int_0^t e^{(t - u)A}\mathbf{b}(u)\,du + e^{tA}\mathbf{c}~,

\end{align}</math> where <math>\mathbf{c}</math> is determined by the initial conditions of the problem.

More precisely, consider the equation <math display="block">Y' - A\ Y = F(t)</math>

with the initial condition <math>Y(t_0) = Y_0</math>, where <math>A</math> is a constant <math>n \times n</math> complex matrix and <math>F</math> is a continuous function defined on some open interval containing <math>t_0</math>.

Multiplying the above displayed equality by the integrating factor <math>e^{-tA}</math> and integrating yields <math display="block">Y(t) = e^{(t - t_0)A}\ Y_0 + \int_{t_0}^t e^{(t - x)A}\ F(x)\ dx ~.</math>
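This formula can be checked numerically (a sketch; <code>quad_vec</code> from SciPy evaluates the integral term, and the matrix and forcing term reuse the example above):

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad_vec
from scipy.linalg import expm

A = np.array([[2., -1.,  1.],
              [0.,  3., -1.],
              [2.,  1.,  3.]])
F = lambda x: np.exp(2 * x) * np.array([1., 0., 1.])
Y0, t0 = np.array([1., 0., 0.]), 0.0

def Y(t):
    integral, _ = quad_vec(lambda x: expm((t - x) * A) @ F(x), t0, t)
    return expm((t - t0) * A) @ Y0 + integral

# Verify the ODE Y' - A Y = F(t) at one point via a central difference
t, eps = 0.5, 1e-6
Yprime = (Y(t + eps) - Y(t - eps)) / (2 * eps)
print(np.allclose(Yprime - A @ Y(t), F(t), atol=1e-4))  # True
</syntaxhighlight>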

We claim that the solution to the equation <math display="block">P(d/dt)\ y = f(t)</math>

with the initial conditions <math>y^{(k)}(t_0) = y_k</math> for <math>0 \le k < n</math> is <math display="block">y(t) = \sum_{k=0}^{n-1}\ y_k\ s_k(t - t_0) + \int_{t_0}^t s_{n-1}(t - x)\ f(x)\ dx ~,</math>

where the notation is as follows:

  • <math>P\in\mathbb{C}[X]</math> is a monic polynomial of degree <math>n \ge 1</math>,
  • <math>f</math> is a continuous complex-valued function defined on some open interval <math>I</math>,
  • <math>t_0</math> is a point of <math>I</math>,
  • <math>y_k</math> is a complex number, and
  • <math>s_k(t)</math> is the coefficient of <math>X^k</math> in the polynomial denoted by <math>S_t\in\mathbb{C}[X]</math> in Subsection Evaluation by Laurent series above.

To justify this claim, we transform our order <math>n</math> scalar equation into an order one vector equation by the usual reduction to a first-order system. Our vector equation takes the form <math display="block">\frac{dY}{dt} - A\ Y = F(t),\quad Y(t_0) = Y_0,</math> where <math>A</math> is the transpose of the companion matrix of <math>P</math>. We solve this equation as explained above, computing the matrix exponentials by the observation made in Subsection Evaluation by implementation of Sylvester's formula above.

In the case <math>n = 2</math> we get the following statement. The solution to <math display="block">

 y'' - (\alpha + \beta)\ y' + \alpha\,\beta\ y = f(t),\quad
 y(t_0) = y_0,\quad
 y'(t_0) = y_1

</math>

is <math display="block">y(t) = y_0\ s_0(t - t_0) + y_1\ s_1(t - t_0) + \int_{t_0}^t s_1(t - x)\,f(x)\ dx,</math>

where the functions <math>s_0</math> and <math>s_1</math> are as in Subsection Evaluation by Laurent series above.

Matrix-matrix exponentials

The matrix exponential of another matrix (matrix-matrix exponential),<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> is defined as <math display="block">X^Y = e^{\log(X) \cdot Y}</math> <math display="block">^Y\!X = e^{Y \cdot \log(X)}</math> for any normal and non-singular <math>n \times n</math> matrix <math>X</math>, and any complex <math>n \times n</math> matrix <math>Y</math>.

For matrix-matrix exponentials, there is a distinction between the left exponential <math>{}^Y\!X</math> and the right exponential <math>X^Y</math>, because matrix multiplication is not commutative.


References

Template:Reflist
