Matrix exponential
== Applications ==

=== Linear differential equations ===

The matrix exponential has applications to systems of [[linear differential equation]]s. (See also [[matrix differential equation]].) Recall from earlier in this article that a ''homogeneous'' differential equation of the form <math display="block"> \mathbf{y}' = A\mathbf{y} </math> has solution {{math|''e''<sup>''At''</sup> '''y'''(0)}}.

If we consider the vector <math display="block"> \mathbf{y}(t) = \begin{bmatrix} y_1(t) \\ \vdots \\y_n(t) \end{bmatrix} ~,</math> we can express a system of ''inhomogeneous'' coupled linear differential equations as <math display="block"> \mathbf{y}'(t) = A\mathbf{y}(t)+\mathbf{b}(t).</math> Making an [[ansatz]] to use an integrating factor of {{math|''e''<sup>−''At''</sup>}} and multiplying throughout yields <math display="block">\begin{align} & & e^{-At}\mathbf{y}'-e^{-At}A\mathbf{y} &= e^{-At}\mathbf{b} \\ &\Rightarrow & e^{-At}\mathbf{y}'-Ae^{-At}\mathbf{y} &= e^{-At}\mathbf{b} \\ &\Rightarrow & \frac{d}{dt} \left(e^{-At}\mathbf{y}\right) &= e^{-At}\mathbf{b}~. \end{align}</math> The second step is possible because, if {{math|1=''AB'' = ''BA''}}, then {{math|1=''e''<sup>''At''</sup>''B'' = ''Be''<sup>''At''</sup>}}. The solution of the system is then obtained by integrating the third step with respect to {{mvar|t}} and left-multiplying by <math>e^{At}</math> to eliminate the exponential on the left-hand side. Note that although <math>e^{At}</math> is a matrix, it is always invertible, with <math>e^{At} e^{-At} = I</math>; in other words, <math>\exp(At) = \left(\exp(-At)\right)^{-1}</math>.
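The two facts used above, that <math>e^{At}</math> is invertible with inverse <math>e^{-At}</math> and that <math>e^{At}\mathbf{y}(0)</math> solves the homogeneous system, can be spot-checked numerically. The sketch below uses NumPy; <code>expm_taylor</code> is an illustrative truncated-Taylor-series stand-in for a library routine such as <code>scipy.linalg.expm</code>, adequate only for small <math>\lVert At \rVert</math>.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via a truncated Taylor series (fine for small ||M||)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

# Matrix from the examples below; any square matrix works here.
A = np.array([[2.0, -1.0, 1.0],
              [0.0,  3.0, -1.0],
              [2.0,  1.0, 3.0]])
t = 0.1

# exp(At) exp(-At) = I, i.e. exp(At) is invertible with inverse exp(-At).
E, Einv = expm_taylor(A * t), expm_taylor(-A * t)
assert np.allclose(E @ Einv, np.eye(3))

# y(t) = exp(At) y(0) satisfies y' = A y, checked by a central difference.
y0 = np.array([1.0, 2.0, -1.0])
h = 1e-5
deriv = (expm_taylor(A * (t + h)) @ y0 - expm_taylor(A * (t - h)) @ y0) / (2 * h)
assert np.allclose(deriv, A @ (E @ y0), atol=1e-6)
```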
==== Example (homogeneous) ====

Consider the system <math display="block">\begin{matrix} x' &=& 2x & -y & +z \\ y' &=& & 3y & -z \\ z' &=& 2x & +y & +3z \end{matrix}~.</math> The associated [[defective matrix]] is <math display="block">A = \begin{bmatrix} 2 & -1 & 1 \\ 0 & 3 & -1 \\ 2 & 1 & 3 \end{bmatrix}~.</math> The matrix exponential is <math display="block">e^{tA} = \frac{1}{2}\begin{bmatrix} e^{2t}\left( 1 + e^{2t} - 2t\right) & -2te^{2t} & e^{2t}\left(-1 + e^{2t}\right) \\ -e^{2t}\left(-1 + e^{2t} - 2t\right) & 2(t + 1)e^{2t} & -e^{2t}\left(-1 + e^{2t}\right) \\ e^{2t}\left(-1 + e^{2t} + 2t\right) & 2te^{2t} & e^{2t}\left( 1 + e^{2t}\right) \end{bmatrix}~,</math> so that the general solution of the homogeneous system is <math display="block">\begin{bmatrix}x \\y \\ z\end{bmatrix} = \frac{x(0)}{2}\begin{bmatrix}e^{2t}\left(1 + e^{2t} - 2t\right) \\ -e^{2t}\left(-1 + e^{2t} - 2t\right) \\ e^{2t}\left(-1 + e^{2t} + 2t\right)\end{bmatrix} + \frac{y(0)}{2}\begin{bmatrix}-2te^{2t} \\ 2(t + 1)e^{2t} \\ 2te^{2t}\end{bmatrix} + \frac{z(0)}{2}\begin{bmatrix}e^{2t}\left(-1 + e^{2t}\right) \\ -e^{2t}\left(-1 + e^{2t}\right) \\ e^{2t}\left(1 + e^{2t}\right)\end{bmatrix} ~, </math> amounting to <math display="block">\begin{align} 2x &= x(0)e^{2t}\left(1 + e^{2t} - 2t\right) + y(0)\left(-2te^{2t}\right) + z(0)e^{2t}\left(-1 + e^{2t}\right) \\[2pt] 2y &= x(0)\left(-e^{2t}\right)\left(-1 + e^{2t} - 2t\right) + y(0)2(t + 1)e^{2t} + z(0)\left(-e^{2t}\right)\left(-1 + e^{2t}\right) \\[2pt] 2z &= x(0)e^{2t}\left(-1 + e^{2t} + 2t\right) + y(0)2te^{2t} + z(0)e^{2t}\left(1 + e^{2t}\right) ~. 
\end{align}</math>

==== Example (inhomogeneous) ====

Consider now the inhomogeneous system <math display="block">\begin{matrix} x' &=& 2x & - & y & + & z & + & e^{2t} \\ y' &=& & & 3y& - & z & \\ z' &=& 2x & + & y & + & 3z & + & e^{2t} \end{matrix} ~.</math> We again have <math display="block">A = \left[\begin{array}{rrr} 2 & -1 & 1 \\ 0 & 3 & -1 \\ 2 & 1 & 3 \end{array}\right] ~,</math> and <math display="block">\mathbf{b} = e^{2t}\begin{bmatrix}1 \\0\\1\end{bmatrix}.</math> From before, we already have the general solution to the homogeneous equation. Since the sum of the homogeneous and particular solutions gives the general solution to the inhomogeneous problem, we now only need to find a particular solution. We have, by the above, <math display="block">\begin{align} \mathbf{y}_p &= e^{tA}\int_0^t e^{(-u)A}\begin{bmatrix}e^{2u} \\0\\e^{2u}\end{bmatrix}\,du+e^{tA}\mathbf{c} \\[6pt] &= e^{tA}\int_0^t \begin{bmatrix} 2e^u - 2ue^{2u} & -2ue^{2u} & 0 \\ -2e^u + 2(u+1)e^{2u} & 2(u+1)e^{2u} & 0 \\ 2ue^{2u} & 2ue^{2u} & 2e^u \end{bmatrix}\begin{bmatrix}e^{2u} \\0 \\e^{2u}\end{bmatrix}\,du + e^{tA}\mathbf{c} \\[6pt] &= e^{tA}\int_0^t \begin{bmatrix} e^{2u}\left( 2e^u - 2ue^{2u}\right) \\ e^{2u}\left(-2e^u + 2(1 + u)e^{2u}\right) \\ 2e^{3u} + 2ue^{4u} \end{bmatrix}\,du + e^{tA}\mathbf{c} \\[6pt] &= e^{tA}\begin{bmatrix} -{1 \over 24}e^{3t}\left(3e^t(4t - 1) - 16\right) \\ {1 \over 24}e^{3t}\left(3e^t(4t + 4) - 16\right) \\ {1 \over 24}e^{3t}\left(3e^t(4t - 1) - 16\right) \end{bmatrix} + \begin{bmatrix} 2e^t - 2te^{2t} & -2te^{2t} & 0 \\ -2e^t + 2(t + 1)e^{2t} & 2(t + 1)e^{2t} & 0 \\ 2te^{2t} & 2te^{2t} & 2e^t \end{bmatrix}\begin{bmatrix}c_1 \\c_2 \\c_3\end{bmatrix} ~, \end{align}</math> which can be simplified further to obtain the requisite particular solution determined through variation of parameters. Note '''c''' = '''y'''<sub>''p''</sub>(0). For more rigor, see the following generalization.
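The closed-form expression for <math>e^{tA}</math> used in these examples can be spot-checked numerically. The following sketch (not part of the article) compares it against a truncated Taylor series; <code>expm_taylor</code> is an illustrative helper, and a library routine such as <code>scipy.linalg.expm</code> would serve the same purpose.

```python
import numpy as np

A = np.array([[2.0, -1.0, 1.0],
              [0.0,  3.0, -1.0],
              [2.0,  1.0, 3.0]])

def expm_taylor(M, terms=60):
    """Matrix exponential via a truncated Taylor series (fine for small ||M||)."""
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Closed form of exp(tA) from the homogeneous example, evaluated at t = 0.3.
t = 0.3
e2 = np.exp(2 * t)
closed_form = 0.5 * np.array([
    [ e2 * (1 + e2 - 2*t),   -2*t*e2,        e2 * (-1 + e2)],
    [-e2 * (-1 + e2 - 2*t),   2*(t + 1)*e2, -e2 * (-1 + e2)],
    [ e2 * (-1 + e2 + 2*t),   2*t*e2,        e2 * (1 + e2)],
])
assert np.allclose(expm_taylor(t * A), closed_form)
```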
=== Inhomogeneous case generalization: variation of parameters ===

For the inhomogeneous case, we can use [[integrating factor]]s (a method akin to [[variation of parameters]]). We seek a particular solution of the form {{math|1='''y'''<sub>p</sub>(''t'') = exp(''tA'') '''z'''(''t'')}}, <math display="block">\begin{align} \mathbf{y}_p'(t) & = \left(e^{tA}\right)'\mathbf{z}(t) + e^{tA}\mathbf{z}'(t) \\[6pt] & = Ae^{tA}\mathbf{z}(t) + e^{tA}\mathbf{z}'(t) \\[6pt] & = A\mathbf{y}_p(t) + e^{tA}\mathbf{z}'(t)~. \end{align}</math> For {{math|'''y'''<sub>''p''</sub>}} to be a solution, <math display="block">\begin{align} e^{tA}\mathbf{z}'(t) &= \mathbf{b}(t) \\[6pt] \mathbf{z}'(t) &= \left(e^{tA}\right)^{-1}\mathbf{b}(t) \\[6pt] \mathbf{z}(t) &= \int_0^t e^{-uA}\mathbf{b}(u)\,du + \mathbf{c} ~. \end{align}</math> Thus, <math display="block">\begin{align} \mathbf{y}_p(t) & = e^{tA}\int_0^t e^{-uA}\mathbf{b}(u)\,du + e^{tA}\mathbf{c} \\ & = \int_0^t e^{(t - u)A}\mathbf{b}(u)\,du + e^{tA}\mathbf{c}~, \end{align}</math> where {{math|'''''c'''''}} is determined by the initial conditions of the problem.

More precisely, consider the equation <math display="block">Y' - A\ Y = F(t)</math> with the initial condition {{math|1=''Y''(''t''<sub>0</sub>) = ''Y''<sub>0</sub>}}, where
* {{mvar|A}} is an {{mvar|n}} by {{mvar|n}} complex matrix,
* {{mvar|F}} is a continuous function from some open interval {{mvar|I}} to {{math|'''C'''<sup>''n''</sup>}},
* <math>t_0</math> is a point of {{mvar|I}}, and
* <math>Y_0</math> is a vector of {{math|'''C'''<sup>''n''</sup>}}.
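The particular solution <math>\mathbf{y}_p(t) = e^{tA}\left(\int_0^t e^{-uA}\mathbf{b}(u)\,du + \mathbf{c}\right)</math> derived above can be verified numerically: it should satisfy <math>\mathbf{y}_p' = A\mathbf{y}_p + \mathbf{b}</math>. The sketch below (not part of the article) uses the matrix and forcing term from the inhomogeneous example, a truncated-Taylor-series helper <code>expm_taylor</code> in place of a library <code>expm</code>, and the trapezoid rule for the integral.

```python
import numpy as np

A = np.array([[2.0, -1.0, 1.0],
              [0.0,  3.0, -1.0],
              [2.0,  1.0, 3.0]])
b = lambda u: np.exp(2 * u) * np.array([1.0, 0.0, 1.0])  # b(t) from the example
c = np.array([1.0, 0.0, 2.0])                            # arbitrary choice of c

def expm_taylor(M, terms=40):
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def y_p(t, n=1000):
    """exp(tA) (int_0^t exp(-uA) b(u) du + c), trapezoid rule."""
    us = np.linspace(0.0, t, n + 1)
    vals = np.array([expm_taylor(-u * A) @ b(u) for u in us])
    integral = (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2) * (t / n)
    return expm_taylor(t * A) @ (integral + c)

# y_p' - A y_p - b should vanish; derivative taken by central difference.
t, h = 0.2, 1e-4
residual = (y_p(t + h) - y_p(t - h)) / (2 * h) - A @ y_p(t) - b(t)
assert np.allclose(residual, 0.0, atol=1e-4)
```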
Left-multiplying the above displayed equality by {{math|''e''<sup>−''tA''</sup>}} and integrating yields <math display="block">Y(t) = e^{(t - t_0)A}\ Y_0 + \int_{t_0}^t e^{(t - x)A}\ F(x)\ dx ~.</math> We claim that the solution to the equation <math display="block">P(d/dt)\ y = f(t)</math> with the initial conditions <math>y^{(k)}(t_0) = y_k</math> for {{math|0 ≤ ''k'' < ''n''}} is <math display="block">y(t) = \sum_{k=0}^{n-1}\ y_k\ s_k(t - t_0) + \int_{t_0}^t s_{n-1}(t - x)\ f(x)\ dx ~,</math> where the notation is as follows:
* <math>P\in\mathbb{C}[X]</math> is a monic polynomial of degree {{math|''n'' > 0}},
* {{mvar|f}} is a continuous complex valued function defined on some open interval {{mvar|I}},
* <math>t_0</math> is a point of {{mvar|I}},
* <math>y_k</math> is a complex number, and
* {{math|''s<sub>k</sub>''(''t'')}} is the coefficient of <math>X^k</math> in the polynomial denoted by <math>S_t\in\mathbb{C}[X]</math> in Subsection [[matrix exponential#Evaluation by Laurent series|Evaluation by Laurent series]] above.

To justify this claim, we transform our order {{mvar|n}} scalar equation into an order one vector equation by the usual [[Ordinary differential equation#Reduction to a first-order system|reduction to a first order system]]. Our vector equation takes the form <math display="block">\frac{dY}{dt} - A\ Y = F(t),\quad Y(t_0) = Y_0,</math> where {{mvar|A}} is the [[transpose]] [[companion matrix]] of {{mvar|P}}. We solve this equation as explained above, computing the matrix exponentials by the observation made in Subsection [[matrix exponential#Evaluation by implementation of Sylvester's formula|Evaluation by implementation of Sylvester's formula]] above. In the case {{mvar|n}} = 2 we get the following statement.
The solution to <math display="block"> y'' - (\alpha + \beta)\ y' + \alpha\,\beta\ y = f(t),\quad y(t_0) = y_0,\quad y'(t_0) = y_1 </math> is <math display="block">y(t) = y_0\ s_0(t - t_0) + y_1\ s_1(t - t_0) + \int_{t_0}^t s_1(t - x)\,f(x)\ dx,</math> where the functions {{math|''s''<sub>0</sub>}} and {{math|''s''<sub>1</sub>}} are as in Subsection [[matrix exponential#Evaluation by Laurent series|Evaluation by Laurent series]] above.
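This statement can be checked numerically for a concrete case. The sketch below (not part of the article) assumes distinct roots {{math|''α'' ≠ ''β''}}, for which the functions from the Laurent-series subsection reduce to {{math|1=''s''<sub>0</sub>(''t'') = (''αe''<sup>''βt''</sup> − ''βe''<sup>''αt''</sup>)/(''α'' − ''β'')}} and {{math|1=''s''<sub>1</sub>(''t'') = (''e''<sup>''αt''</sup> − ''e''<sup>''βt''</sup>)/(''α'' − ''β'')}}; the forcing term, initial values, and the comparison solution obtained by undetermined coefficients are all illustrative choices.

```python
import numpy as np

alpha, beta = 1.0, 2.0     # distinct roots assumed
y0, y1 = 1.0, -1.0         # y(0) and y'(0); t0 = 0
f = np.cos                 # forcing term (illustrative choice)

# For distinct roots, s0 and s1 satisfy s0(0)=1, s0'(0)=0, s1(0)=0, s1'(0)=1
# and both solve the homogeneous equation y'' - (a+b)y' + ab y = 0.
s0 = lambda t: (alpha * np.exp(beta * t) - beta * np.exp(alpha * t)) / (alpha - beta)
s1 = lambda t: (np.exp(alpha * t) - np.exp(beta * t)) / (alpha - beta)

def y(t, n=4000):
    """y0 s0(t) + y1 s1(t) + int_0^t s1(t - x) f(x) dx via the trapezoid rule."""
    xs = np.linspace(0.0, t, n + 1)
    vals = s1(t - xs) * f(xs)
    integral = (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2) * (t / n)
    return y0 * s0(t) + y1 * s1(t) + integral

# Closed-form solution of y'' - 3y' + 2y = cos t with these initial values,
# found by undetermined coefficients: y = 2.5 e^t - 1.6 e^{2t} + 0.1 cos t - 0.3 sin t.
t = 0.5
exact = 2.5 * np.exp(t) - 1.6 * np.exp(2 * t) + 0.1 * np.cos(t) - 0.3 * np.sin(t)
assert abs(y(t) - exact) < 1e-6
```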