{{Short description|Equation from stability analysis}}
The '''Lyapunov equation''', named after the Russian mathematician [[Aleksandr Lyapunov]], is a matrix equation used in the [[Lyapunov stability|stability analysis]] of [[Linear dynamical system|linear dynamical systems]].<ref>{{Cite journal |last=Parks |first=P. C. |date=1992-01-01 |title=A. M. Lyapunov's stability theory—100 years on * |url=https://doi.org/10.1093/imamci/9.4.275 |journal=IMA Journal of Mathematical Control and Information |volume=9 |issue=4 |pages=275–303 |doi=10.1093/imamci/9.4.275 |issn=0265-0754|url-access=subscription }}</ref><ref>{{Cite journal |last=Simoncini |first=V. |author-link=Valeria Simoncini |date=2016-01-01 |title=Computational Methods for Linear Matrix Equations |url=https://epubs.siam.org/doi/10.1137/130912839 |journal=SIAM Review |volume=58 |issue=3 |pages=377–441 |doi=10.1137/130912839 |issn=0036-1445 |hdl-access=free |hdl=11585/586011}}</ref>

In particular, the '''discrete-time Lyapunov equation''' (also known as the '''Stein equation''') for <math>X</math> is
:<math>A X A^{H} - X + Q = 0,</math>
where <math>Q</math> is a [[Hermitian matrix]] and <math>A^H</math> is the [[conjugate transpose]] of <math>A</math>, while the '''continuous-time Lyapunov equation''' is
:<math>AX + XA^H + Q = 0.</math>

==Application to stability==
In the following theorems <math>A, P, Q \in \mathbb{R}^{n \times n}</math>, and <math>P</math> and <math>Q</math> are symmetric. The notation <math>P>0</math> means that the matrix <math>P</math> is [[Positive-definite matrix|positive definite]].

'''Theorem''' (continuous-time version). Given any <math>Q>0</math>, there exists a unique <math>P>0</math> satisfying <math>A^T P + P A + Q = 0</math> if and only if the linear system <math>\dot{x}=A x</math> is globally asymptotically stable. The quadratic function <math>V(x)=x^T P x</math> is a [[Lyapunov function]] that can be used to verify stability.

'''Theorem''' (discrete-time version).
Given any <math>Q>0</math>, there exists a unique <math>P>0</math> satisfying <math>A^T P A - P + Q = 0</math> if and only if the linear system <math>x_{t+1}=A x_{t}</math> is globally asymptotically stable. As before, <math>x^T P x</math> is a Lyapunov function.

==Computational aspects of solution==
The Lyapunov equation is linear: written in terms of the <math>n^2</math> entries of <math>X</math>, it is an ordinary linear system, so it can be solved in <math>\mathcal O(n^6)</math> time using standard matrix factorization methods. However, specialized algorithms are available which can yield solutions much more quickly, in <math>\mathcal O(n^3)</math> time, by exploiting the specific structure of the Lyapunov equation. For the discrete case, the Schur method of Kitagawa is often used.<ref>{{cite journal |last=Kitagawa |first=G. |title=An Algorithm for Solving the Matrix Equation X = F X F' + S |journal=International Journal of Control |volume=25 |issue=5 |pages=745–753 |year=1977 |doi=10.1080/00207177708922266 }}</ref> For the continuous Lyapunov equation the [[Bartels–Stewart algorithm]] can be used.<ref>{{cite journal |first=R. H. |last=Bartels |first2=G. W. |last2=Stewart |title=Algorithm 432: Solution of the matrix equation AX + XB = C |journal=Comm. ACM |volume=15 |year=1972 |issue=9 |pages=820–826 |doi=10.1145/361573.361582 |doi-access=free }}</ref>

==Analytic solution==
Defining the [[Vectorization (mathematics)|vectorization operator]] <math>\operatorname{vec} (A)</math> as stacking the columns of a matrix <math>A</math> and <math>A \otimes B</math> as the [[Kronecker product]] of <math>A</math> and <math>B</math>, the continuous-time and discrete-time Lyapunov equations can be expressed as solutions of a matrix equation. Furthermore, if the matrix <math>A</math> is "stable", the solution can also be expressed as an integral (continuous-time case) or as an infinite sum (discrete-time case).
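The infinite-sum representation just mentioned also underlies simple structured solvers. The following is a minimal NumPy sketch of one such method, the squared Smith iteration for the discrete case (it is not Kitagawa's Schur method; the matrices are arbitrary demonstration choices):

```python
import numpy as np

def stein_smith(A, Q, iters=30):
    """Solve A X A^H - X + Q = 0 for Schur-stable A (squared Smith iteration).

    Exploits X = sum_k A^k Q (A^H)^k: each step squares A, so after `iters`
    steps of O(n^3) cost the partial sum covers 2**iters terms of the series.
    """
    X, Ak = Q.copy(), A.copy()
    for _ in range(iters):
        X = X + Ak @ X @ Ak.conj().T
        Ak = Ak @ Ak
    return X

A = np.array([[0.5, 0.1],
              [0.0, -0.3]])   # spectral radius < 1 (demo matrix)
Q = np.eye(2)
X = stein_smith(A, Q)
residual = A @ X @ A.conj().T - X + Q   # should be numerically zero
```

Since the truncation error after <math>j</math> doubling steps is of order <math>\rho(A)^{2^j}</math>, a handful of iterations already gives full floating-point accuracy here.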
===Discrete time===
Using the result that <math> \operatorname{vec}(ABC)=(C^{T} \otimes A)\operatorname{vec}(B) </math>, one has
:<math> (I_{n^2}-\bar{A} \otimes A)\operatorname{vec}(X) = \operatorname{vec}(Q), </math>
where <math>I_{n^2}</math> is a [[conformable]] identity matrix and <math>\bar{A}</math> is the element-wise complex conjugate of <math>A</math>.<ref>{{cite book |first=J. |last=Hamilton |year=1994 |title=Time Series Analysis |at=Equations 10.2.13 and 10.2.18 |publisher=Princeton University Press |isbn=0-691-04289-6 }}</ref> One may then solve for <math>\operatorname{vec}(X)</math> by inverting or solving the linear equations. To get <math>X</math>, one need only reshape <math>\operatorname{vec} (X)</math> appropriately.

Moreover, if <math>A</math> is stable (in the sense of [[Stable polynomial|Schur stability]], i.e., having eigenvalues with magnitude less than 1), the solution <math>X</math> can also be written as
:<math> X = \sum_{k=0}^{\infty} A^{k} Q (A^{H})^k. </math>

For comparison, consider the one-dimensional case, where this just says that the solution of <math> (1 - a^2) x = q </math> is
:<math> x = \frac{q}{1-a^2} = \sum_{k=0}^{\infty} qa^{2k}. </math>

===Continuous time===
Using again the Kronecker product notation and the vectorization operator, one has the matrix equation
:<math> (I_n \otimes A + \bar{A} \otimes I_n) \operatorname{vec}X = -\operatorname{vec}Q, </math>
where <math>\bar{A}</math> denotes the matrix obtained by complex conjugating the entries of <math>A</math>.
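This vectorized form can be checked numerically in a few lines. A minimal NumPy sketch, with an arbitrarily chosen Hurwitz-stable demo matrix (note that NumPy's column-major `order="F"` matches the column-stacking convention of <math>\operatorname{vec}</math>):

```python
import numpy as np

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])   # Hurwitz stable: eigenvalues -1, -2 (demo matrix)
Q = np.eye(2)                 # Hermitian (here real symmetric) right-hand side
n = A.shape[0]

# (I_n (x) A + conj(A) (x) I_n) vec(X) = -vec(Q), column-stacking vec
K = np.kron(np.eye(n), A) + np.kron(A.conj(), np.eye(n))
X = np.linalg.solve(K, -Q.flatten(order="F")).reshape(n, n, order="F")

residual = A @ X + X @ A.conj().T + Q   # should be numerically zero
```

Consistent with the stability theorem above, the computed <math>X</math> comes out symmetric positive definite.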
Similar to the discrete-time case, if <math>A</math> is stable (in the sense of [[Hurwitz-stable matrix|Hurwitz stability]], i.e., having eigenvalues with negative real parts), the solution <math>X</math> can also be written as
:<math> X = \int_0^\infty \mathrm{e}^{A \tau} Q \mathrm{e}^{A^H \tau} \, d\tau, </math>
which holds because
:<math> \begin{align} AX+XA^H &= \int_0^\infty \left( A\mathrm{e}^{A \tau} Q \mathrm{e}^{A^H \tau}+\mathrm{e}^{A \tau} Q \mathrm{e}^{A^H \tau}A^H \right) d\tau \\ &=\int_0^\infty \frac{d}{d\tau} \left( \mathrm{e}^{A \tau} Q \mathrm{e}^{A^H \tau} \right) d\tau \\ &= \mathrm{e}^{A \tau} Q \mathrm{e}^{A^H \tau} \bigg|_0^\infty\\ &= -Q. \end{align} </math>

For comparison, consider the one-dimensional case, where this just says that the solution of <math> 2ax = -q </math> is
:<math> x = \frac{-q}{2a} = \int_0^{\infty} q\mathrm{e}^{2 a \tau} \, d\tau. </math>

==Relationship between discrete and continuous Lyapunov equations==
We start with the continuous-time linear dynamics
:<math> \dot{\mathbf{x}} = \mathbf{A}\mathbf{x} </math>
and then discretize it as follows:
:<math> \dot{\mathbf{x}} \approx \frac{\mathbf{x}_{t+1}-\mathbf{x}_{t}}{\delta}, </math>
where <math> \delta > 0 </math> indicates a small forward displacement in time. Substituting the bottom equation into the top and rearranging terms, we get a discrete-time equation for <math> \mathbf{x}_{t+1}</math>:
:<math> \mathbf{x}_{t+1} = \mathbf{x}_t + \delta \mathbf{A} \mathbf{x}_t = (\mathbf{I} + \delta\mathbf{A})\mathbf{x}_t = \mathbf{B}\mathbf{x}_t, </math>
where we have defined <math> \mathbf{B} \equiv \mathbf{I} + \delta\mathbf{A}</math>.
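Before carrying the derivation further, this discretization can be checked numerically: for small <math>\delta</math>, solving the discrete-time (Stein) equation for <math>\mathbf{B} = \mathbf{I} + \delta\mathbf{A}</math> with right-hand side <math>-\delta\mathbf{Q}</math> should reproduce the continuous-time solution. A hedged NumPy sketch (matrices are arbitrary demo choices, real case only):

```python
import numpy as np

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])   # Hurwitz stable (demo matrix)
Q = np.eye(2)
n = A.shape[0]
I = np.eye(n)

def solve_stein(B, Qd):
    """Solve B^T M B - M = -Qd (real case) via vectorization."""
    K = np.kron(B.T, B.T) - np.eye(n * n)
    return np.linalg.solve(K, -Qd.flatten(order="F")).reshape(n, n, order="F")

# Continuous-time solution of A^T M + M A = -Q, also via vectorization
Kc = np.kron(I, A.T) + np.kron(A.T, I)
M_cont = np.linalg.solve(Kc, -Q.flatten(order="F")).reshape(n, n, order="F")

# Discrete solutions with B = I + delta*A approach M_cont as delta -> 0
errors = []
for delta in (1e-2, 1e-3, 1e-4):
    M_disc = solve_stein(I + delta * A, delta * Q)
    errors.append(np.max(np.abs(M_disc - M_cont)))
# errors shrink roughly in proportion to delta
```

The discretization error here is <math>O(\delta)</math>, coming from the <math>\delta^2 \mathbf{A}^T\mathbf{M}\mathbf{A}</math> term that vanishes in the limit.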
Now we can use the discrete-time Lyapunov equation for <math> \mathbf{B}</math>:
:<math> \mathbf{B}^T\mathbf{M}\mathbf{B} - \mathbf{M} = -\delta\mathbf{Q}.</math>
Plugging in our definition for <math> \mathbf{B}</math>, we get:
:<math> (\mathbf{I} + \delta \mathbf{A})^T\mathbf{M}(\mathbf{I} + \delta \mathbf{A}) - \mathbf{M} = -\delta \mathbf{Q}.</math>
Expanding this expression out yields:
:<math> (\mathbf{M} + \delta \mathbf{A}^T\mathbf{M}) (\mathbf{I} + \delta \mathbf{A}) - \mathbf{M} = \delta(\mathbf{A}^T\mathbf{M} + \mathbf{M}\mathbf{A}) + \delta^2 \mathbf{A}^T\mathbf{M}\mathbf{A} = -\delta \mathbf{Q}.</math>

Recall that <math> \delta </math> is a small displacement in time. Letting <math> \delta </math> go to zero brings us closer and closer to having continuous dynamics, and in the limit we achieve them. It stands to reason that we should also recover the continuous-time Lyapunov equation in the limit. Dividing through by <math> \delta </math> on both sides, and then letting <math> \delta \to 0 </math>, we find that
:<math> \mathbf{A}^T\mathbf{M} + \mathbf{M}\mathbf{A} = -\mathbf{Q}, </math>
which is the continuous-time Lyapunov equation, as desired.

==See also==
* [[Sylvester equation]], which generalizes the Lyapunov equation
* [[Algebraic Riccati equation]]
* [[Kalman filter]]

==References==
{{Reflist}}

{{Authority control}}

{{DEFAULTSORT:Lyapunov Equation}}
[[Category:Control theory]]