Matrix exponential
== Computing the matrix exponential ==

Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis. [[Matlab]], [[GNU Octave]], [[R (programming_language)|R]], and [[SciPy]] all use the [[Padé approximant]].<ref>{{cite web|url=http://www.mathworks.de/help/techdoc/ref/expm.html |title=Matrix exponential – MATLAB expm – MathWorks Deutschland |publisher=Mathworks.de |date=2011-04-30 |access-date=2013-06-05}}</ref><ref>{{cite web |url=http://www.network-theory.co.uk/docs/octave3/octave_200.html |title=GNU Octave – Functions of a Matrix |publisher=Network-theory.co.uk |date=2007-01-11 |access-date=2013-06-05 |archive-url=https://web.archive.org/web/20150529200317/http://www.network-theory.co.uk/docs/octave3/octave_200.html |archive-date=2015-05-29 |url-status=dead }}</ref><ref>{{cite web| url=https://search.r-project.org/CRAN/refmans/Matrix/html/expm.html|title=R - pkg {Matrix}: Matrix Exponential |date=2005-02-28 |access-date=2023-07-17}}</ref><ref>{{cite web| url=http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.linalg.expm.html |title=scipy.linalg.expm function documentation |publisher=The SciPy Community |date=2015-01-18 |access-date=2015-05-29}}</ref> In this section, we discuss methods that are applicable in principle to any matrix, and which can be carried out explicitly for small matrices.<ref>See {{harvnb|Hall|2015}} Section 2.2</ref> Subsequent sections describe methods suitable for numerical evaluation on large matrices.
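The scaling-and-squaring idea behind these library routines can be sketched in a few lines. The version below is only an illustrative sketch on 2 × 2 matrices using the simple (1,1) Padé approximant {{math|(''I'' − ''A''/2)<sup>−1</sup>(''I'' + ''A''/2)}}; production routines such as {{code|scipy.linalg.expm}} use higher-order Padé approximants with more careful scaling, and the threshold {{code|1e-3}} here is an arbitrary choice for this sketch.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def expm_pade(A):
    """exp(A) via scaling and squaring with the (1,1) Pade approximant."""
    norm = max(abs(x) for row in A for x in row)
    # Scale A by 2^-s until its entries are small (illustrative threshold).
    s = max(0, math.ceil(math.log2(norm / 1e-3))) if norm > 0 else 0
    H = [[x / 2**s for x in row] for row in A]
    I = [[1.0, 0.0], [0.0, 1.0]]
    num = [[I[i][j] + H[i][j] / 2 for j in range(2)] for i in range(2)]
    den = [[I[i][j] - H[i][j] / 2 for j in range(2)] for i in range(2)]
    R = mat_mul(mat_inv(den), num)   # (I - H/2)^-1 (I + H/2) ~ exp(H)
    for _ in range(s):               # undo the scaling: square s times
        R = mat_mul(R, R)
    return R

# Diagonal sanity check: exp(diag(1, 2)) = diag(e, e^2)
E = expm_pade([[1.0, 0.0], [0.0, 2.0]])
```

For a diagonal input the approximant stays diagonal, so the off-diagonal entries come out exactly zero and the diagonal entries approximate {{math|''e''}} and {{math|''e''<sup>2</sup>}}.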
=== Diagonalizable case ===

If a matrix is [[diagonal matrix|diagonal]]:
<math display="block">A = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{bmatrix} ,</math>
then its exponential can be obtained by exponentiating each entry on the main diagonal:
<math display="block">e^A = \begin{bmatrix} e^{a_1} & 0 & \cdots & 0 \\ 0 & e^{a_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{a_n} \end{bmatrix} .</math>
This result also allows one to exponentiate [[diagonalizable matrix|diagonalizable matrices]]. If
{{block indent|em=1.2|text={{math|1=''A'' = ''UDU''<sup>−1</sup>}} }}
then
{{block indent|em=1.2|text={{math|1=''e''<sup>''A''</sup> = ''Ue''<sup>''D''</sup>''U''<sup>−1</sup>}},}}
which is especially easy to compute when {{math|''D''}} is diagonal. Application of [[Sylvester's formula]] yields the same result. (To see this, note that addition and multiplication of diagonal matrices are performed entry-wise, so the same holds for exponentiation: exponentiating a diagonal matrix amounts to applying the ordinary "one-dimensional" exponential to each diagonal entry.)
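As a quick numerical sketch of {{math|1=''e''<sup>''A''</sup> = ''Ue''<sup>''D''</sup>''U''<sup>−1</sup>}}, the matrix below and its diagonalization are chosen for illustration (they are not the article's worked example): {{math|1=''A'' = [[0, 1], [1, 0]]}} has eigenvalues {{math|±1}} with eigenvectors {{math|(1, 1)}} and {{math|(1, −1)}}, so {{math|''e''<sup>''A''</sup>}} should have {{math|cosh 1}} on the diagonal and {{math|sinh 1}} off it.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# A = [[0, 1], [1, 0]] diagonalizes as A = U D U^{-1} with
# eigenvalues 1, -1 and eigenvectors (1, 1), (1, -1).
U = [[1.0, 1.0], [1.0, -1.0]]
U_inv = [[0.5, 0.5], [0.5, -0.5]]
exp_D = [[math.e, 0.0], [0.0, 1 / math.e]]   # exponentiate the diagonal of D

exp_A = mat_mul(U, mat_mul(exp_D, U_inv))
# exp_A equals [[cosh 1, sinh 1], [sinh 1, cosh 1]]
```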
=== Example: Diagonalizable ===

For example, the matrix
<math display="block"> A = \begin{bmatrix} 1 & 4\\ 1 & 1\\ \end{bmatrix}</math>
can be diagonalized as
<math display="block">\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}\begin{bmatrix} -1 & 0\\ 0 & 3\\ \end{bmatrix}\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}^{-1}.</math>
Thus,
<math display="block">e^A = \begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}e^\begin{bmatrix} -1 & 0\\ 0 & 3\\ \end{bmatrix}\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}^{-1}=\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}\begin{bmatrix} \frac{1}{e} & 0\\ 0 & e^3\\ \end{bmatrix}\begin{bmatrix} -2 & 2\\ 1 & 1\\ \end{bmatrix}^{-1} = \begin{bmatrix} \frac{e^4+1}{2e} & \frac{e^4-1}{e}\\ \frac{e^4-1}{4e} & \frac{e^4+1}{2e}\\ \end{bmatrix}.</math>

=== Nilpotent case ===

A matrix {{mvar|N}} is [[nilpotent matrix|nilpotent]] if {{math|1=''N''<sup>''q''</sup> = 0}} for some positive integer {{mvar|q}}. In this case, the matrix exponential {{math|''e''<sup>''N''</sup>}} can be computed directly from the series expansion, as the series terminates after a finite number of terms:
<math display="block">e^N = I + N + \frac{1}{2}N^2 + \frac{1}{6}N^3 + \cdots + \frac{1}{(q - 1)!}N^{q-1} ~.</math>
Since the series has only finitely many terms, it is a matrix polynomial, which can be [[Polynomial evaluation#Matrix polynomials|computed efficiently]].

=== General case ===

==== Using the Jordan–Chevalley decomposition ====

By the [[Jordan–Chevalley decomposition]], any <math>n \times n</math> matrix ''X'' with complex entries can be expressed as
<math display="block">X = A + N </math>
where
* ''A'' is diagonalizable
* ''N'' is nilpotent
* ''A'' [[commutativity|commutes]] with ''N''
This means that we can compute the exponential of ''X'' by reducing to the previous two cases:
<math display="block">e^X = e^{A+N} = e^A e^N. </math>
Note that we need the commutativity of ''A'' and ''N'' for the last step to work.
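A small sketch of the decomposition in action, with a matrix chosen for illustration: {{math|1=''X'' = [[2, 1], [0, 2]]}} splits as the diagonalizable part {{math|1=''A'' = 2''I''}} plus the nilpotent part {{math|1=''N'' = [[0, 1], [0, 0]]}} (with {{math|1=''N''<sup>2</sup> = 0}}), and {{mvar|A}} commutes with {{mvar|N}}, so {{math|1=''e''<sup>''X''</sup> = ''e''<sup>''A''</sup>''e''<sup>''N''</sup>}} with {{math|''e''<sup>''N''</sup>}} computed from its terminating series.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# X = A + N with A = 2I diagonalizable, N nilpotent (N^2 = 0), AN = NA.
N = [[0.0, 1.0], [0.0, 0.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
exp_A = [[math.exp(2), 0.0], [0.0, math.exp(2)]]   # exp of the diagonal part
# The nilpotent series terminates: exp(N) = I + N, since N^2 = 0.
exp_N = [[I[i][j] + N[i][j] for j in range(2)] for i in range(2)]

exp_X = mat_mul(exp_A, exp_N)
# exp_X equals e^2 * [[1, 1], [0, 1]]
```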
==== Using the Jordan canonical form ====

A closely related method is, if the field is [[algebraically closed]], to work with the [[Jordan form]] of {{mvar|X}}. Suppose that {{math|1=''X'' = ''PJP''{{i sup|−1}}}} where {{mvar|J}} is the Jordan form of {{mvar|X}}. Then
<math display="block">e^{X} = Pe^{J}P^{-1}.</math>
Also, since
<math display="block">\begin{align} J &= J_{a_1}(\lambda_1) \oplus J_{a_2}(\lambda_2) \oplus \cdots \oplus J_{a_n}(\lambda_n), \\ e^J &= \exp \big( J_{a_1}(\lambda_1) \oplus J_{a_2}(\lambda_2) \oplus \cdots \oplus J_{a_n}(\lambda_n) \big) \\ &= \exp \big( J_{a_1}(\lambda_1) \big) \oplus \exp \big( J_{a_2}(\lambda_2) \big) \oplus \cdots \oplus \exp \big( J_{a_n}(\lambda_n) \big), \end{align}</math>
we need only know how to compute the matrix exponential of a [[Jordan block]]. But each Jordan block is of the form
<math display="block">\begin{align} & & J_a(\lambda) &= \lambda I + N \\ &\Rightarrow & e^{J_a(\lambda)} &= e^{\lambda I + N} = e^\lambda e^N, \end{align}</math>
where {{mvar|N}} is the nilpotent matrix with ones on the superdiagonal and zeros elsewhere. The matrix exponential of {{mvar|J}} is then given by
<math display="block">e^J = e^{\lambda_1} e^{N_{a_1}} \oplus e^{\lambda_2} e^{N_{a_2}} \oplus \cdots \oplus e^{\lambda_n} e^{N_{a_n}} .</math>

=== Projection case ===

If {{mvar|P}} is a [[projection matrix]] (i.e. is [[idempotent]]: {{math|1=''P''<sup>2</sup> = ''P''}}), its matrix exponential is:
{{block indent|em=1.2|text= {{math|1=''e''<sup>''P''</sup> = ''I'' + (''e'' − 1)''P''}}.}}
Deriving this by expansion of the exponential function, each power of {{mvar|P}} reduces to {{mvar|P}}, which becomes a common factor of the sum:
<math display="block">e^P = \sum_{k=0}^{\infty} \frac{P^k}{k!} = I + \left(\sum_{k=1}^{\infty} \frac{1}{k!}\right)P = I + (e - 1)P ~.</math>

=== Rotation case ===

For a simple rotation in which the perpendicular unit vectors {{Math|'''a'''}} and {{Math|'''b'''}} specify a plane,<ref>in a Euclidean space</ref> the [[Rotation matrix#Exponential map|rotation matrix]] {{mvar|R}} can be expressed in terms of a similar exponential function involving a [[Euler's rotation theorem#Generators of rotations|generator]] {{mvar|G}} and angle {{mvar|θ}}.<ref>{{cite book|last=Weyl|first=Hermann|url=https://books.google.com/books?id=KCgZAQAAIAAJ&pg=PA142 |title=Space Time Matter| year=1952|publisher=Dover|isbn=978-0-486-60267-7|page=142}}</ref><ref>{{cite book|last1=Bjorken|first1=James D.| last2=Drell| first2=Sidney D.|title=Relativistic Quantum Mechanics|url=https://archive.org/details/relativisticquan0000bjor|url-access=registration | publisher=McGraw-Hill|year=1964|page=[https://archive.org/details/relativisticquan0000bjor/page/22 22]}}</ref>
<math display="block">\begin{align} G &= \mathbf{ba}^\mathsf{T} - \mathbf{ab}^\mathsf{T} & P &= -G^2 = \mathbf{aa}^\mathsf{T} + \mathbf{bb}^\mathsf{T} \\ P^2 &= P & PG &= G = GP ~, \end{align}</math>
<math display="block">\begin{align} R\left( \theta \right) = e^{G\theta} &= I + G\sin (\theta) + G^2(1 - \cos(\theta)) \\ &= I - P + P\cos (\theta) + G\sin (\theta ) ~.\\ \end{align}</math>
The formula for the exponential results from reducing the powers of {{mvar|G}} in the series expansion and identifying the respective series coefficients of {{math|''G<sup>2</sup>''}} and {{mvar|G}} with {{math|−cos(''θ'')}} and {{math|sin(''θ'')}} respectively. The second expression here for {{math|''e<sup>Gθ</sup>''}} is the same as the expression for {{math|''R''(''θ'')}} in the article containing the derivation of the [[Euler's rotation theorem#Generators of rotations|generator]], {{math|1=''R''(''θ'') = ''e<sup>Gθ</sup>''}}.

In two dimensions, if <math>a = \left[\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right]</math> and <math>b = \left[ \begin{smallmatrix} 0 \\ 1 \end{smallmatrix} \right]</math>, then <math>G = \left[ \begin{smallmatrix} 0 & -1 \\ 1 & 0\end{smallmatrix} \right]</math>, <math>G^2 = \left[ \begin{smallmatrix}-1 & 0 \\ 0 & -1\end{smallmatrix} \right]</math>, and
<math display="block">R(\theta) = \begin{bmatrix}\cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\end{bmatrix} = I \cos(\theta) + G \sin(\theta)</math>
reduces to the standard matrix for a plane rotation. The matrix {{math|1=''P'' = −''G''<sup>2</sup>}} [[Projection (linear algebra)|projects]] a vector onto the {{math|'''ab'''}}-plane and the rotation only affects this part of the vector.
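The two-dimensional case can be checked numerically: summing the Taylor series of {{math|''Gθ''}} term by term should reproduce the standard plane-rotation matrix {{math|''I'' cos(''θ'') + ''G'' sin(''θ'')}}. The sketch below truncates the series at 30 terms, which is more than enough for small {{mvar|θ}}.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, terms=30):
    """Truncated Taylor series exp(A) = sum A^k / k!, fine for small norms."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        power = mat_mul(power, A)
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / math.factorial(k)
    return result

theta = math.pi / 6
G = [[0.0, -1.0], [1.0, 0.0]]     # generator of plane rotations
R = expm_series([[g * theta for g in row] for row in G])
# R matches [[cos t, -sin t], [sin t, cos t]]
```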
An example illustrating this is a rotation of {{math|1=30° = π/6}} in the plane spanned by {{math|'''a'''}} and {{math|'''b'''}},
<math display="block">\begin{align} \mathbf{a} &= \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} & \mathbf{b} &= \frac{1}{\sqrt{5}}\begin{bmatrix} 0 \\ 1 \\ 2 \\ \end{bmatrix} \end{align}</math>
<math display="block">\begin{align} G = \frac{1}{\sqrt{5}}&\begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & 0 \\ 2 & 0 & 0 \\ \end{bmatrix} & P = -G^2 &= \frac{1}{5}\begin{bmatrix} 5 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 2 & 4 \\ \end{bmatrix} \\ P\begin{bmatrix} 1 \\ 2 \\ 3 \\ \end{bmatrix} = \frac{1}{5}&\begin{bmatrix} 5 \\ 8 \\ 16 \\ \end{bmatrix} = \mathbf{a} + \frac{8}{\sqrt{5}}\mathbf{b} & R\left(\frac{\pi}{6}\right) &= \frac{1}{10}\begin{bmatrix} 5\sqrt{3} & -\sqrt{5} & -2\sqrt{5} \\ \sqrt{5} & 8 + \sqrt{3} & -4 + 2\sqrt{3} \\ 2\sqrt{5} & -4 + 2\sqrt{3} & 2 + 4\sqrt{3} \\ \end{bmatrix} \\ \end{align}</math>
Let {{math|1=''N'' = ''I'' − ''P''}}, so {{math|1=''N''<sup>2</sup> = ''N''}} and its products with {{math|''P''}} and {{math|''G''}} are zero. This will allow us to evaluate powers of {{math|''R''}}.
<math display="block">\begin{align} R\left( \frac{\pi}{6} \right) &= N + P\frac{\sqrt{3}}{2} + G\frac{1}{2} \\ R\left( \frac{\pi}{6} \right)^2 &= N + P\frac{1}{2} + G\frac{\sqrt{3}}{2} \\ R\left( \frac{\pi}{6} \right)^3 &= N + G \\ R\left( \frac{\pi}{6} \right)^6 &= N - P \\ R\left( \frac{\pi}{6} \right)^{12} &= N + P = I \\ \end{align}</math>
{{further|Rodrigues' rotation formula|Axis–angle representation#Exponential map from so(3) to SO(3)}}
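These identities can be verified numerically. The sketch below rebuilds {{mvar|G}}, {{mvar|P}}, and {{math|''R''(π/6)}} from the vectors {{math|'''a'''}} and {{math|'''b'''}} given above, then checks one entry of {{math|''R''(π/6)}} against the closed form and confirms that twelve 30° steps return to the identity, {{math|1=''R''<sup>12</sup> = ''I''}}.

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a = [1.0, 0.0, 0.0]
s5 = math.sqrt(5)
b = [0.0, 1 / s5, 2 / s5]

# G = b a^T - a b^T, and P = -G^2 projects onto the a-b plane.
G = [[b[i] * a[j] - a[i] * b[j] for j in range(3)] for i in range(3)]
G2 = mat_mul(G, G)
P = [[-G2[i][j] for j in range(3)] for i in range(3)]

theta = math.pi / 6
I = [[float(i == j) for j in range(3)] for i in range(3)]
# R(theta) = I - P + P cos(theta) + G sin(theta)
R = [[I[i][j] - P[i][j] + P[i][j] * math.cos(theta) + G[i][j] * math.sin(theta)
      for j in range(3)] for i in range(3)]

# Twelve 30-degree rotations compose to the identity: R^12 = I.
R12 = I
for _ in range(12):
    R12 = mat_mul(R12, R)
```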