== Linear systems ==
[[Image:Typical State Space model.svg|framed|left|Block diagram representation of the linear state-space equations]]
{{clear}}
The most general state-space representation of a linear system with <math>p</math> inputs, <math>q</math> outputs and <math>n</math> state variables is written in the following form:<ref>{{cite book|last1=Brogan |first1=William L. |year=1974 |title=Modern Control Theory |edition=1st |publisher=Quantum Publishers, Inc. |page=172}}</ref>
<math display="block">\dot{\mathbf{x}}(t) = \mathbf{A}(t) \mathbf{x}(t) + \mathbf{B}(t) \mathbf{u}(t)</math>
<math display="block">\mathbf{y}(t) = \mathbf{C}(t) \mathbf{x}(t) + \mathbf{D}(t) \mathbf{u}(t)</math>
where:
{{unbulleted list | style = padding-left: 1.5em
| <math>\mathbf{x}(\cdot)</math> is called the "state vector", <math>\mathbf{x}(t) \in \mathbb{R}^n</math>;
| <math>\mathbf{y}(\cdot)</math> is called the "output vector", <math>\mathbf{y}(t) \in \mathbb{R}^q</math>;
| <math>\mathbf{u}(\cdot)</math> is called the "input (or control) vector", <math>\mathbf{u}(t) \in \mathbb{R}^p</math>;
| <math>\mathbf{A}(\cdot)</math> is the "state (or system) matrix", <math>\dim[\mathbf{A}(\cdot)] = n \times n</math>,
| <math>\mathbf{B}(\cdot)</math> is the "input matrix", <math>\dim[\mathbf{B}(\cdot)] = n \times p</math>,
| <math>\mathbf{C}(\cdot)</math> is the "output matrix", <math>\dim[\mathbf{C}(\cdot)] = q \times n</math>,
| <math>\mathbf{D}(\cdot)</math> is the "feedthrough (or feedforward) matrix" (in cases where the system model does not have a direct feedthrough, <math>\mathbf{D}(\cdot)</math> is the zero matrix), <math>\dim[\mathbf{D}(\cdot)] = q \times p</math>,
| <math>\dot{\mathbf{x}}(t) := \frac{d}{dt} \mathbf{x}(t)</math>.
}}
In this general formulation, all matrices are allowed to be time-variant (i.e. their elements can depend on time); however, in the common [[LTI system|LTI]] case, matrices will be time invariant. The time variable <math>t</math> can be continuous (e.g. <math>t \in \mathbb{R}</math>) or discrete (e.g. <math>t \in \mathbb{Z}</math>). In the latter case, the time variable <math>k</math> is usually used instead of <math>t</math>. [[Hybrid system]]s allow for time domains that have both continuous and discrete parts. Depending on the assumptions made, the state-space model representation can assume the following forms:

{| class="wikitable"
|- valign="top"
! System type !! State-space model
|- valign="top"
| Continuous time-invariant || <math>\dot{\mathbf{x}}(t) = \mathbf{A} \mathbf{x}(t) + \mathbf{B} \mathbf{u}(t)</math><br /><math>\mathbf{y}(t) = \mathbf{C} \mathbf{x}(t) + \mathbf{D} \mathbf{u}(t)</math>
|- valign="top"
| Continuous time-variant || <math>\dot{\mathbf{x}}(t) = \mathbf{A}(t) \mathbf{x}(t) + \mathbf{B}(t) \mathbf{u}(t)</math><br /><math>\mathbf{y}(t) = \mathbf{C}(t) \mathbf{x}(t) + \mathbf{D}(t) \mathbf{u}(t)</math>
|- valign="top"
| Explicit discrete time-invariant || <math>\mathbf{x}(k+1) = \mathbf{A} \mathbf{x}(k) + \mathbf{B} \mathbf{u}(k)</math><br /><math>\mathbf{y}(k) = \mathbf{C} \mathbf{x}(k) + \mathbf{D} \mathbf{u}(k)</math>
|- valign="top"
| Explicit discrete time-variant || <math>\mathbf{x}(k+1) = \mathbf{A}(k) \mathbf{x}(k) + \mathbf{B}(k) \mathbf{u}(k)</math><br /><math>\mathbf{y}(k) = \mathbf{C}(k) \mathbf{x}(k) + \mathbf{D}(k) \mathbf{u}(k)</math>
|- valign="top"
| [[Laplace domain]] of<br />continuous time-invariant || <math>s \mathbf{X}(s) - \mathbf{x}(0) = \mathbf{A} \mathbf{X}(s) + \mathbf{B} \mathbf{U}(s)</math><br /><math>\mathbf{Y}(s) = \mathbf{C} \mathbf{X}(s) + \mathbf{D} \mathbf{U}(s)</math>
|- valign="top"
| [[Z-domain]] of<br />discrete time-invariant || <math>z \mathbf{X}(z) - z \mathbf{x}(0) = \mathbf{A} \mathbf{X}(z) + \mathbf{B} \mathbf{U}(z)</math><br /><math>\mathbf{Y}(z) = \mathbf{C} \mathbf{X}(z) + \mathbf{D} \mathbf{U}(z)</math>
|}

=== Example: continuous-time LTI case ===
Stability and natural response characteristics of a continuous-time [[LTI system]] (i.e., linear with matrices that are constant with respect to time) can be studied from the [[eigenvalue]]s of the matrix <math>\mathbf{A}</math>. The stability of a time-invariant state-space model can be determined by looking at the system's [[transfer function]] in factored form. It will then look something like this:
<math display="block"> \mathbf{G}(s) = k \frac{ (s - z_{1})(s - z_{2})(s - z_{3}) }{ (s - p_{1})(s - p_{2})(s - p_{3})(s - p_{4}) }. </math>
The denominator of the transfer function is equal to the [[characteristic polynomial]] found by taking the [[determinant]] of <math>s\mathbf{I} - \mathbf{A}</math>,
<math display="block">\lambda(s) = \left|s\mathbf{I} - \mathbf{A}\right|. </math>
The roots of this polynomial (the [[eigenvalue]]s) are the system transfer function's [[complex pole|pole]]s (i.e., the [[Mathematical singularity|singularities]] where the transfer function's magnitude is unbounded). These poles can be used to analyze whether the system is [[exponential stability|asymptotically stable]] or [[marginal stability|marginally stable]]. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's [[Lyapunov stability]]. The zeros found in the numerator of <math>\mathbf{G}(s)</math> can similarly be used to determine whether the system is [[minimum phase]].

The system may still be '''input–output stable''' (see [[BIBO stability|BIBO stable]]) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros (i.e., if those singularities in the transfer function are [[removable singularity|removable]]).

=== Controllability ===
{{Main| Controllability}}
The state controllability condition implies that it is possible – by admissible inputs – to steer the states from any initial value to any final value within some finite time window.
A continuous time-invariant linear state-space model is '''controllable''' [[iff|if and only if]]
<math display="block">\operatorname{rank}\begin{bmatrix}\mathbf{B}& \mathbf{A}\mathbf{B}& \mathbf{A}^{2}\mathbf{B}& \cdots & \mathbf{A}^{n-1} \mathbf{B}\end{bmatrix} = n, </math>
where [[rank (linear algebra)|rank]] is the number of linearly independent rows in a matrix, and where ''n'' is the number of state variables.

=== Observability ===
{{Main| Observability}}
Observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs. The observability and controllability of a system are mathematical duals (i.e., just as controllability guarantees that an input is available that brings any initial state to any desired final state, observability guarantees that knowledge of an output trajectory provides enough information to determine the initial state of the system).

A continuous time-invariant linear state-space model is '''observable''' if and only if
<math display="block">\operatorname{rank}\begin{bmatrix}\mathbf{C}\\ \mathbf{C}\mathbf{A}\\ \vdots\\ \mathbf{C}\mathbf{A}^{n-1}\end{bmatrix} = n. </math>

=== Transfer function ===
The "[[transfer function]]" of a continuous time-invariant linear state-space model can be derived in the following way:

First, taking the [[Laplace transform]] of
<math display="block">\dot{\mathbf{x}}(t) = \mathbf{A} \mathbf{x}(t) + \mathbf{B} \mathbf{u}(t)</math>
yields
<math display="block">s\mathbf{X}(s)-\mathbf{x}(0) = \mathbf{A} \mathbf{X}(s) + \mathbf{B} \mathbf{U}(s). </math>
Next, solving for <math>\mathbf{X}(s)</math> gives
<math display="block">(s\mathbf{I} - \mathbf{A})\mathbf{X}(s) =\mathbf{x}(0)+ \mathbf{B}\mathbf{U}(s) </math>
and thus
<math display="block">\mathbf{X}(s) =(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{x}(0)+ (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B}\mathbf{U}(s). </math>
Substituting for <math>\mathbf{X}(s)</math> in the output equation
<math display="block">\mathbf{Y}(s) = \mathbf{C}\mathbf{X}(s) + \mathbf{D}\mathbf{U}(s)</math>
gives
<math display="block">\mathbf{Y}(s) = \mathbf{C}((s\mathbf{I} - \mathbf{A})^{-1}\mathbf{x}(0)+ (s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B}\mathbf{U}(s)) + \mathbf{D}\mathbf{U}(s). </math>
Assuming zero initial conditions <math>\mathbf{x}(0) =\mathbf{0} </math> and a [[Single-input single-output system|single-input single-output (SISO) system]], the [[transfer function]] is defined as the ratio of output to input, <math>G(s)=Y(s)/U(s)</math>. For a [[MIMO|multiple-input multiple-output (MIMO) system]], however, this ratio is not defined. Therefore, assuming zero initial conditions, the [[transfer function matrix]] is derived from
<math display="block">\mathbf{Y}(s) = \mathbf{G}(s) \mathbf{U}(s) </math>
by equating coefficients, which yields
<math display="block">\mathbf{G}(s) = \mathbf{C}(s\mathbf{I} - \mathbf{A})^{-1}\mathbf{B} + \mathbf{D} . </math>
Consequently, <math>\mathbf{G}(s)</math> is a matrix of dimension <math>q \times p</math> that contains a transfer function for each input–output combination. Due to the simplicity of this matrix notation, the state-space representation is commonly used for multiple-input, multiple-output systems. The [[Rosenbrock system matrix]] provides a bridge between the state-space representation and its [[transfer function]].
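The rank tests and the transfer function matrix above can be checked numerically. The following sketch is an illustration only: the system matrices are arbitrary placeholders, not values taken from this article. It builds the controllability and observability matrices and evaluates <math>\mathbf{G}(s)</math> at a single point using NumPy.

<syntaxhighlight lang="python">
import numpy as np

# Arbitrary example system (n = 2 states, p = 1 input, q = 1 output).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

n = A.shape[0]

# Controllability matrix [B, AB, ..., A^(n-1) B] and its rank.
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print("controllable:", np.linalg.matrix_rank(ctrb) == n)

# Observability matrix [C; CA; ...; C A^(n-1)] and its rank.
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
print("observable:", np.linalg.matrix_rank(obsv) == n)

# Transfer function matrix G(s) = C (sI - A)^{-1} B + D, evaluated at one point s.
def G(s):
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

print(G(1.0))   # G evaluated at s = 1
</syntaxhighlight>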
=== Canonical realizations ===
{{main|Realization (systems)}}
Any given transfer function which is [[strictly proper]] can be converted into state-space form by the following approach (this example is for a 4-dimensional, single-input, single-output system):

Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:
<math display="block"> \mathbf{G}(s) = \frac{n_1 s^3 + n_2 s^2 + n_3 s + n_4}{s^4 + d_1 s^3 + d_2 s^2 + d_3 s + d_4}.</math>
The coefficients can now be inserted directly into the state-space model:
<math display="block">\dot{\mathbf{x}}(t) = \begin{bmatrix} 0& 1& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\\ -d_4 & -d_3 & -d_2 & -d_1 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0\\ 0\\ 0\\ 1 \end{bmatrix}\mathbf{u}(t)</math>
<math display="block"> \mathbf{y}(t) = \begin{bmatrix} n_4 & n_3 & n_2 & n_1 \end{bmatrix} \mathbf{x}(t). </math>
This state-space realization is called '''controllable canonical form''' because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).

The transfer function coefficients can also be used to construct another type of canonical form
<math display="block">\dot{\mathbf{x}}(t) = \begin{bmatrix} 0& 0& 0& -d_{4}\\ 1& 0& 0& -d_{3}\\ 0& 1& 0& -d_{2}\\ 0& 0& 1& -d_{1} \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} n_{4}\\ n_{3}\\ n_{2}\\ n_{1} \end{bmatrix}\mathbf{u}(t)</math>
<math display="block"> \mathbf{y}(t) = \begin{bmatrix} 0& 0& 0& 1 \end{bmatrix}\mathbf{x}(t). </math>
This state-space realization is called '''observable canonical form''' because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).

=== Proper transfer functions ===
Transfer functions which are only [[proper transfer function|proper]] (and not [[strictly proper]]) can also be realized easily. The approach is to separate the transfer function into two parts: a strictly proper part and a constant,
<math display="block"> \mathbf{G}(s) = \mathbf{G}_\mathrm{SP}(s) + \mathbf{G}(\infty). </math>
The strictly proper transfer function can then be transformed into a canonical state-space realization using the techniques shown above. The state-space realization of the constant is trivially <math>\mathbf{y}(t) = \mathbf{G}(\infty)\mathbf{u}(t)</math>. Together this gives a state-space realization with matrices ''A'', ''B'' and ''C'' determined by the strictly proper part, and matrix ''D'' determined by the constant.

For example, the transfer function
<math display="block"> \mathbf{G}(s) = \frac{s^2 + 3s + 3}{s^2 + 2s + 1} = \frac{s + 2}{s^2 + 2s + 1} + 1</math>
yields the following controllable realization
<math display="block">\dot{\mathbf{x}}(t) = \begin{bmatrix} -2& -1\\ 1& 0\\ \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 1\\ 0\end{bmatrix}\mathbf{u}(t)</math>
<math display="block"> \mathbf{y}(t) = \begin{bmatrix} 1& 2\end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 1\end{bmatrix}\mathbf{u}(t)</math>
Notice how the output also depends directly on the input. This is due to the <math>\mathbf{G}(\infty)</math> constant in the transfer function.
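The controllable canonical form can be assembled directly from the transfer-function coefficients. The following sketch is one possible implementation under the stated conventions (monic denominator, strictly proper numerator); it follows the bottom-companion layout of the 4-dimensional form shown earlier, which differs from, but is equivalent to, the ordering used in the worked example just above.

<syntaxhighlight lang="python">
import numpy as np

def controllable_canonical(num, den):
    """Controllable canonical form of a strictly proper SISO transfer function.

    num -- numerator coefficients [n_1, ..., n_n] (highest power first, degree < n)
    den -- monic denominator coefficients [1, d_1, ..., d_n] (degree n)
    Returns (A, B, C) in the bottom-companion layout shown above.
    """
    den = np.asarray(den, dtype=float)
    n = len(den) - 1
    # Left-pad the numerator with zeros so it has exactly n entries.
    num = np.concatenate([np.zeros(n - len(num)), np.asarray(num, dtype=float)])

    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)      # ones on the superdiagonal (chain of integrators)
    A[-1, :] = -den[:0:-1]          # last row: [-d_n, ..., -d_1]
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0
    C = num[::-1].reshape(1, n)     # output row: [n_n, ..., n_1]
    return A, B, C

# Strictly proper part of the example above: (s + 2) / (s^2 + 2s + 1)
A, B, C = controllable_canonical([1, 2], [1, 2, 1])
print(A)  # [[ 0.  1.]
          #  [-1. -2.]]
</syntaxhighlight>

Control libraries offer comparable conversions (for example, <code>scipy.signal.tf2ss</code>), although the particular companion ordering they return may differ.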
=== Feedback ===
[[Image:Typical State Space model with feedback.svg|framed|Typical state-space model with feedback]]
A common method for feedback is to multiply the output by a matrix ''K'' and set this as the input to the system: <math>\mathbf{u}(t) = K \mathbf{y}(t)</math>. Since the values of ''K'' are unrestricted, they can easily be negated for [[negative feedback]]. The negative sign (the common convention) is merely notational, and its absence has no impact on the end results.
<math display="block">\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)</math>
<math display="block">\mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)</math>
becomes
<math display="block">\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B K \mathbf{y}(t)</math>
<math display="block">\mathbf{y}(t) = C \mathbf{x}(t) + D K \mathbf{y}(t)</math>
Solving the output equation for <math>\mathbf{y}(t)</math> and substituting into the state equation results in
<math display="block">\dot{\mathbf{x}}(t) = \left(A + B K \left(I - D K\right)^{-1} C \right) \mathbf{x}(t)</math>
<math display="block">\mathbf{y}(t) = \left(I - D K\right)^{-1} C \mathbf{x}(t)</math>
The advantage of this is that the [[eigenvalues]] of the closed-loop system matrix <math>A + B K \left(I - D K\right)^{-1} C</math> can be placed by choosing ''K'' appropriately, for example through its [[Eigendecomposition of a matrix|eigendecomposition]]. This assumes that the closed-loop system is [[controllability|controllable]] or that the unstable eigenvalues of ''A'' can be made stable through appropriate choice of ''K''.

==== Example ====
For a strictly proper system ''D'' equals zero. Another fairly common situation is when all states are outputs, i.e. ''y'' = ''x'', which yields ''C'' = ''I'', the [[identity matrix]]. This then results in the simpler equations
<math display="block">\dot{\mathbf{x}}(t) = \left(A + B K \right) \mathbf{x}(t)</math>
<math display="block">\mathbf{y}(t) = \mathbf{x}(t)</math>
This reduces the necessary eigendecomposition to just <math>A + B K</math>.

=== Feedback with setpoint (reference) input ===
[[Image:Typical State Space model with feedback and input.png|framed|Output feedback with set point]]
In addition to feedback, an input, <math>r(t)</math>, can be added such that <math>\mathbf{u}(t) = -K \mathbf{y}(t) + \mathbf{r}(t)</math>.
<math display="block">\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)</math>
<math display="block">\mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)</math>
becomes
<math display="block">\dot{\mathbf{x}}(t) = A \mathbf{x}(t) - B K \mathbf{y}(t) + B \mathbf{r}(t)</math>
<math display="block">\mathbf{y}(t) = C \mathbf{x}(t) - D K \mathbf{y}(t) + D \mathbf{r}(t)</math>
Solving the output equation for <math>\mathbf{y}(t)</math> and substituting into the state equation results in
<math display="block">\dot{\mathbf{x}}(t) = \left(A - B K \left(I + D K\right)^{-1} C \right) \mathbf{x}(t) + B \left(I - K \left(I + D K\right)^{-1}D \right) \mathbf{r}(t)</math>
<math display="block">\mathbf{y}(t) = \left(I + D K\right)^{-1} C \mathbf{x}(t) + \left(I + D K\right)^{-1} D \mathbf{r}(t)</math>
One fairly common simplification to this system is removing ''D'', which reduces the equations to
<math display="block">\dot{\mathbf{x}}(t) = \left(A - B K C \right) \mathbf{x}(t) + B \mathbf{r}(t)</math>
<math display="block">\mathbf{y}(t) = C \mathbf{x}(t)</math>
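The closed-loop matrices derived above can be evaluated numerically. The following sketch simply applies the setpoint-feedback formulas to an assumed example plant and gain (all values are placeholders chosen for illustration); it is not a design procedure for choosing ''K''.

<syntaxhighlight lang="python">
import numpy as np

# Assumed example plant (2 states, 1 input, 1 output) and an assumed gain K.
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[3.0]])          # output-feedback gain

I = np.eye(C.shape[0])         # identity sized to the output dimension

# Closed-loop matrices for u(t) = -K y(t) + r(t), as derived above.
M = np.linalg.inv(I + D @ K)
A_cl = A - B @ K @ M @ C
B_cl = B @ (np.eye(K.shape[0]) - K @ M @ D)
C_cl = M @ C
D_cl = M @ D

print(np.linalg.eigvals(A_cl))   # closed-loop eigenvalues
</syntaxhighlight>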
=== Moving object example ===
A classical linear system is that of one-dimensional movement of an object (e.g., a cart). [[Newton's laws of motion]] for an object moving horizontally on a plane and attached to a wall with a spring give
<math display="block">m \ddot{y}(t) = u(t) - b\dot{y}(t) - k y(t)</math>
where
*<math>y(t)</math> is position; <math>\dot y(t)</math> is velocity; <math>\ddot{y}(t)</math> is acceleration
*<math>u(t)</math> is an applied force
*<math>b</math> is the viscous friction coefficient
*<math>k</math> is the spring constant
*<math>m</math> is the mass of the object
The state equation would then become
<math display="block">\begin{bmatrix} \dot{\mathbf x}_1(t) \\ \dot{\mathbf x}_2(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{b}{m} \end{bmatrix} \begin{bmatrix} \mathbf{x}_1(t) \\ \mathbf{x}_2(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} \mathbf{u}(t)</math>
<math display="block">\mathbf{y}(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} \mathbf{x}_1(t) \\ \mathbf{x}_2(t) \end{bmatrix}</math>
where
*<math>x_1(t)</math> represents the position of the object
*<math>x_2(t) = \dot{x}_1(t)</math> is the velocity of the object
*<math>\dot{x}_2(t) = \ddot{x}_1(t)</math> is the acceleration of the object
*the output <math>\mathbf{y}(t)</math> is the position of the object
The [[controllability]] test is then
<math display="block">\begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} & \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{b}{m} \end{bmatrix} \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 0 & \frac{1}{m} \\ \frac{1}{m} & -\frac{b}{m^2} \end{bmatrix}</math>
which has full rank for all <math>b</math> and <math>m</math> (the mass <math>m</math> is nonzero). This means that, if the initial state of the system (position <math>y(t)</math> and velocity <math>\dot y(t)</math>) is known, and if <math>b</math> and <math>m</math> are constants, then there is a force <math>u</math> that can move the cart to any other position and velocity within a finite time.

The [[observability]] test is then
<math display="block">\begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} \begin{bmatrix} 1 & 0 \end{bmatrix} \\ \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{b}{m} \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}</math>
which also has full rank. Therefore, this system is both controllable and observable.
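The two rank tests for this example can be reproduced numerically. The following sketch assumes placeholder values for <math>m</math>, <math>b</math> and <math>k</math>; any values with <math>m \neq 0</math> lead to the same conclusion.

<syntaxhighlight lang="python">
import numpy as np

# Assumed physical parameters (placeholders; any values with m != 0 work).
m, b, k = 1.0, 0.5, 2.0

# State-space matrices of the mass-spring-damper cart, as derived above.
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])

# Controllability matrix [B, AB] and observability matrix [C; CA].
ctrb = np.hstack([B, A @ B])
obsv = np.vstack([C, C @ A])

print("controllability matrix rank:", np.linalg.matrix_rank(ctrb))  # 2
print("observability matrix rank:", np.linalg.matrix_rank(obsv))    # 2
</syntaxhighlight>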