== Discrete-time systems ==
Almost everything in continuous-time systems has a counterpart in discrete-time systems.<!-- this section may be very redundant. don't remove this redundancy because these should probably be separate articles. -->

=== Discrete-time systems from continuous-time systems ===
In many contexts, a discrete-time (DT) system is really part of a larger continuous-time (CT) system. For example, a digital recording system takes an analog sound, digitizes it, possibly processes the digital signals, and plays back an analog sound for people to listen to.

In practical systems, the DT signals obtained are usually uniformly sampled versions of CT signals. If <math>x(t)</math> is a CT signal, then the [[Sample and hold|sampling circuit]] used before an [[analog-to-digital converter]] will transform it to a DT signal:
<math display="block">x_n \mathrel{\stackrel{\text{def}}{=}} x(nT) \qquad \forall \, n \in \mathbb{Z},</math>
where ''T'' is the [[sampling frequency|sampling period]]. Before sampling, the input signal is normally run through a so-called [[anti-aliasing filter|Nyquist filter]], which removes frequencies above the "folding frequency" 1/(2''T''); this guarantees that no information in the filtered signal will be lost. Without filtering, any frequency component ''above'' the folding frequency (or [[Nyquist frequency]]) is [[Aliasing|aliased]] to a different frequency (thus distorting the original signal), since a DT signal can only support frequency components lower than the folding frequency.
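As a minimal numerical sketch of aliasing (the 8&nbsp;Hz sampling rate and the two test frequencies are chosen purely for illustration), a 5&nbsp;Hz sinusoid sampled at 8&nbsp;Hz yields exactly the same samples as a 3&nbsp;Hz sinusoid, so the two cannot be distinguished after sampling:

<syntaxhighlight lang="python">
import numpy as np

T = 1.0 / 8        # assumed sampling period: 8 Hz sampling, so folding frequency 1/(2T) = 4 Hz
n = np.arange(16)  # sample indices

f_high = 5.0                     # 5 Hz lies above the 4 Hz folding frequency
f_alias = abs(f_high - 1.0 / T)  # it aliases to |5 - 8| = 3 Hz

x_high = np.cos(2 * np.pi * f_high * n * T)
x_alias = np.cos(2 * np.pi * f_alias * n * T)

print(np.allclose(x_high, x_alias))  # True: both sinusoids produce identical samples
</syntaxhighlight>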
=== Impulse response and convolution ===
Let <math>\{x[m - k];\ m\}</math> represent the sequence <math>\{x[m - k];\text{ for all integer values of } m\},</math> and let the shorter notation <math>\{x\}</math> represent <math>\{x[m];\ m\}.</math>

A discrete system transforms an input sequence, <math>\{x\},</math> into an output sequence, <math>\{y\}.</math> In general, every element of the output can depend on every element of the input. Representing the transformation operator by <math>O</math>, we can write:
<math display="block">y[n] \mathrel{\stackrel{\text{def}}{=}} O_n\{x\}.</math>
Note that unless the transform itself changes with ''n'', the output sequence is just constant, and the system is uninteresting. (Thus the subscript, ''n''.) In a typical system, ''y''[''n''] depends most heavily on the elements of ''x'' whose indices are near ''n''.

For the special case of the [[Kronecker delta function]], <math>x[m] = \delta[m],</math> the output sequence is the '''impulse response''':
<math display="block">h[n] \mathrel{\stackrel{\text{def}}{=}} O_n\{\delta[m];\ m\}.</math>

For a linear system, <math>O</math> must satisfy:

{{NumBlk|:|<math> O_n\left\{\sum_{k=-\infty}^{\infty} c_k\cdot x_k[m];\ m\right\} = \sum_{k=-\infty}^{\infty} c_k\cdot O_n\{x_k\}. </math>|{{EquationRef|Eq.4}}}}

And the time-invariance requirement is:

{{NumBlk|:|<math> \begin{align} O_n\{x[m-k];\ m\} &\mathrel{\stackrel{\quad}{=}} y[n-k]\\ &\mathrel{\stackrel{\text{def}}{=}} O_{n-k}\{x\}.\, \end{align} </math>|{{EquationRef|Eq.5}}}}

In such a system, the impulse response, <math>\{h\}</math>, characterizes the system completely. That is, for any input sequence, the output sequence can be calculated in terms of the input and the impulse response. To see how that is done, consider the identity:
<math display="block">x[m] \equiv \sum_{k=-\infty}^{\infty} x[k] \cdot \delta[m - k],</math>
which expresses <math>\{x\}</math> in terms of a sum of weighted delta functions.

Therefore:
<math display="block">\begin{align} y[n] = O_n\{x\} &= O_n\left\{\sum_{k=-\infty}^\infty x[k]\cdot \delta[m-k];\ m \right\}\\ &= \sum_{k=-\infty}^\infty x[k]\cdot O_n\{\delta[m-k];\ m\},\, \end{align}</math>
where we have invoked {{EquationNote|Eq.4}} for the case <math>c_k = x[k]</math> and <math>x_k[m] = \delta[m-k]</math>.

And because of {{EquationNote|Eq.5}}, we may write:
<math display="block">\begin{align} O_n\{\delta[m-k];\ m\} &\mathrel{\stackrel{\quad}{=}} O_{n-k}\{\delta[m];\ m\} \\ &\mathrel{\stackrel{\text{def}}{=}} h[n-k]. \end{align}</math>

Therefore:
:{|
| <math>y[n]</math>
| <math>= \sum_{k=-\infty}^{\infty} x[k] \cdot h[n - k]</math>
|-
|
| <math>= \sum_{k=-\infty}^{\infty} x[n-k] \cdot h[k],</math> {{spaces|5}} ([[Convolution#Commutativity|commutativity]])
|}
which is the familiar discrete convolution formula. The operator <math>O_n</math> can therefore be interpreted as proportional to a weighted average of the function ''x''[''k'']. The weighting function is ''h''[−''k''], simply shifted by amount ''n''. As ''n'' changes, the weighting function emphasizes different parts of the input function. Equivalently, the system's response to an impulse at ''n'' = 0 is a "time"-reversed copy of the unshifted weighting function. When ''h''[''k''] is zero for all negative ''k'', the system is said to be [[Causal system|causal]].
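A short numerical sketch of the convolution formula (the input and impulse-response values here are arbitrary, chosen only for illustration): the direct double sum and a library convolution give the same output sequence.

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0])  # finite-length input x[0], ..., x[3]
h = np.array([0.5, 0.25, 0.125])     # impulse response h[0], h[1], h[2] (causal)

# Direct evaluation of y[n] = sum_k x[k] * h[n - k]
N = len(x) + len(h) - 1
y_direct = np.zeros(N)
for n in range(N):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y_direct[n] += x[k] * h[n - k]

y_lib = np.convolve(x, h)            # same computation via the library routine
print(np.allclose(y_direct, y_lib))  # True
</syntaxhighlight>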
=== Exponentials as eigenfunctions ===
An [[eigenfunction]] is a function for which the output of the operator is the same function, scaled by some constant. In symbols,
<math display="block">\mathcal{H}f = \lambda f ,</math>
where ''f'' is the eigenfunction and <math>\lambda</math> is the [[eigenvalue]], a constant.

The [[exponential function]]s <math>z^n = e^{sT n}</math>, where <math>n \in \mathbb{Z}</math>, are [[eigenfunction]]s of a [[linear]], [[time-invariant]] operator. <math>T \in \mathbb{R}</math> is the sampling interval, and <math>z = e^{sT}, \ z,s \in \mathbb{C}</math>. A simple proof illustrates this concept.

Suppose the input is <math>x[n] = z^n</math>. The output of the system with impulse response <math>h[n]</math> is then
<math display="block">\sum_{m=-\infty}^{\infty} h[n-m] \, z^m</math>
which is equivalent to the following by the commutative property of [[convolution]]
<math display="block">\sum_{m=-\infty}^{\infty} h[m] \, z^{(n - m)} = z^n \sum_{m=-\infty}^{\infty} h[m] \, z^{-m} = z^n H(z)</math>
where
<math display="block">H(z) \mathrel{\stackrel{\text{def}}{=}} \sum_{m=-\infty}^\infty h[m] z^{-m}</math>
is dependent only on the parameter ''z''.

So <math>z^n</math> is an [[eigenfunction]] of an LTI system because the system response is the same as the input times the constant <math>H(z)</math>.

=== Z and discrete-time Fourier transforms ===
The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The [[Z transform]]
<math display="block">H(z) = \mathcal{Z}\{h[n]\} = \sum_{n=-\infty}^\infty h[n] z^{-n}</math>
is exactly the way to get the eigenvalues from the impulse response.{{clarify|date=September 2020}} Of particular interest are pure sinusoids; i.e., exponentials of the form <math>e^{j \omega n}</math>, where <math>\omega \in \mathbb{R}</math>. These can also be written as <math>z^n</math> with <math>z = e^{j \omega}</math>{{clarify|date=September 2020}}. The [[discrete-time Fourier transform]] (DTFT) <math>H(e^{j \omega}) = \mathcal{F}\{h[n]\}</math> gives the eigenvalues of pure sinusoids{{clarify|date=September 2020}}.

Both <math>H(z)</math> and <math>H(e^{j\omega})</math> are called the ''system function'', ''system response'', or ''transfer function''.

Like the one-sided Laplace transform, the Z transform is usually used in the context of one-sided signals, i.e. signals that are zero for ''n'' < 0. The discrete-time [[Fourier series]] may be used for analyzing periodic signals.

Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain. That is,
<math display="block">y[n] = (h*x)[n] = \sum_{m=-\infty}^\infty h[n-m] x[m] = \mathcal{Z}^{-1}\{H(z)X(z)\}.</math>

Just as with the Laplace transform transfer function in continuous-time system analysis, the Z transform makes it easier to analyze systems and gain insight into their behavior.

=== Examples ===
{{bulleted list
| A simple example of an LTI operator is the delay operator <math>D\{x[n]\} \mathrel{\stackrel{\text{def}}{=}} x[n-1]</math>.
* <math> D \left( c_1 \cdot x_1[n] + c_2 \cdot x_2[n] \right) = c_1 \cdot x_1[n - 1] + c_2\cdot x_2[n - 1] = c_1\cdot Dx_1[n] + c_2\cdot Dx_2[n]</math> (i.e., it is linear)
* <math> D\{x[n - m]\} = x[n - m - 1] = x[(n - 1) - m] = D\{x\}[n - m]</math> (i.e., it is time invariant)

The Z transform of the delay operator is a simple multiplication by ''z''<sup>−1</sup>. That is,
<math display="block"> \mathcal{Z}\left\{Dx[n]\right\} = z^{-1} X(z). </math>
| Another simple LTI operator is the averaging operator
<math display="block"> \mathcal{A}\left\{x[n]\right\} \mathrel{\stackrel{\text{def}}{=}} \sum_{k=n-a}^{n+a} x[k].</math>
Because of the linearity of sums,
<math display="block">\begin{align} \mathcal{A}\left\{c_1 x_1[n] + c_2 x_2[n] \right\} &= \sum_{k=n-a}^{n+a} \left( c_1 x_1[k] + c_2 x_2[k] \right)\\ &= c_1 \sum_{k=n-a}^{n+a} x_1[k] + c_2 \sum_{k=n-a}^{n+a} x_2[k]\\ &= c_1 \mathcal{A}\left\{x_1[n] \right\} + c_2 \mathcal{A}\left\{x_2[n] \right\}, \end{align}</math>
and so it is linear. Because
<math display="block">\begin{align} \mathcal{A}\left\{x[n-m]\right\} &= \sum_{k=n-a}^{n+a} x[k-m]\\ &= \sum_{k'=(n-m)-a}^{(n-m)+a} x[k']\\ &= \mathcal{A}\left\{x\right\}[n-m], \end{align}</math>
it is also time invariant.
}}

=== Important system properties ===
The input-output characteristics of a discrete-time LTI system are completely described by its impulse response <math>h[n]</math>. Two of the most important properties of a system are causality and stability. Non-causal (in time) systems can be defined and analyzed as above, but cannot be realized in real-time. Unstable systems can also be analyzed and built, but are only useful as part of a larger system whose overall transfer function ''is'' stable.

==== Causality ====
{{Main|Causal system}}
<!--the causal system article needs work-->
A discrete-time LTI system is causal if the current value of the output depends on only the current value and past values of the input.<ref>Phillips 2007, p. 508.</ref> A necessary and sufficient condition for causality is
<math display="block">h[n] = 0 \ \forall n < 0,</math>
where <math>h[n]</math> is the impulse response. It is not possible in general to determine causality from the Z transform alone, because the inverse transform is not unique{{dubious|date=September 2020}}. When a [[region of convergence]] is specified, causality can be determined.
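A small sketch of the causality condition (the impulse response and inputs are arbitrary illustrative values): with <math>h[n] = 0</math> for <math>n < 0</math>, the output at time ''n'' depends only on inputs up to time ''n'', so two inputs that agree up to some time produce outputs that agree up to that time.

<syntaxhighlight lang="python">
import numpy as np

h = np.array([1.0, 0.5, 0.25])  # causal impulse response: h[n] = 0 for n < 0

rng = np.random.default_rng(0)
x1 = rng.standard_normal(10)
x2 = x1.copy()
x2[6:] += 1.0                   # the inputs agree for n < 6 and differ afterwards

y1 = np.convolve(x1, h)
y2 = np.convolve(x2, h)
print(np.array_equal(y1[:6], y2[:6]))  # True: outputs match until the inputs first differ
</syntaxhighlight>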
==== Stability ====
{{Main|BIBO stability}}
A system is '''bounded input, bounded output stable''' (BIBO stable) if, for every bounded input, the output is bounded. Mathematically, if
<math display="block">\|x[n]\|_{\infty} < \infty</math>
implies that
<math display="block">\|y[n]\|_{\infty} < \infty</math>
(that is, if a bounded input implies a bounded output, in the sense that the [[Infinity norm|maximum absolute values]] of <math>x[n]</math> and <math>y[n]</math> are finite), then the system is stable. A necessary and sufficient condition is that <math>h[n]</math>, the impulse response, satisfies
<math display="block">\|h[n]\|_1 \mathrel{\stackrel{\text{def}}{=}} \sum_{n = -\infty}^\infty |h[n]| < \infty.</math>

In the frequency domain, the [[region of convergence]] must contain the [[unit circle]] (i.e., the [[locus (mathematics)|locus]] satisfying <math>|z| = 1</math> for complex ''z'').
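As a numerical sketch of the absolute-summability condition (using the hypothetical impulse response <math>h[n] = a^n</math> for <math>n \ge 0</math>, zero otherwise), the absolute sum stays finite when <math>|a| < 1</math> and grows without bound when <math>|a| \ge 1</math>:

<syntaxhighlight lang="python">
import numpy as np

n = np.arange(200)             # truncate the (infinite) impulse response for the demo

for a in (0.9, 1.1):
    h = a ** n                 # h[n] = a^n for n >= 0
    print(a, np.abs(h).sum())  # ~10 for a = 0.9 (i.e. 1/(1-a)); keeps growing with n for a = 1.1
</syntaxhighlight>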