Linear time-invariant system
== Continuous-time systems ==

=== Impulse response and convolution ===

The behavior of a linear, continuous-time, time-invariant system with input signal ''x''(''t'') and output signal ''y''(''t'') is described by the convolution integral:<ref>Crutchfield, p. 1. ''Welcome!''</ref>

:{|
| <math>y(t) = (x * h)(t)</math>
| <math>\mathrel{\stackrel{\mathrm{def}}{=}} \int\limits_{-\infty}^{\infty} x(t - \tau)\cdot h(\tau) \, \mathrm{d}\tau</math>
|-
|
| <math>= \int\limits_{-\infty}^\infty x(\tau)\cdot h(t - \tau) \,\mathrm{d}\tau,</math> {{spaces|5}} (using [[Convolution#Commutativity|commutativity]])
|}

where <math display="inline"> h(t)</math> is the system's response to an [[Dirac delta function|impulse]]: <math display="inline">x(\tau) = \delta(\tau)</math>. <math display="inline"> y(t) </math> is therefore proportional to a weighted average of the input function <math display="inline">x(\tau)</math>. The weighting function is <math display="inline"> h(-\tau)</math>, simply shifted by amount <math display="inline"> t</math>. As <math display="inline"> t</math> changes, the weighting function emphasizes different parts of the input function. When <math display="inline"> h(\tau)</math> is zero for all negative <math display="inline"> \tau</math>, <math display="inline"> y(t)</math> depends only on values of <math display="inline"> x</math> prior to time <math display="inline"> t</math>, and the system is said to be [[Causal system|causal]].

To understand why the convolution produces the output of an LTI system, let the notation <math display="inline"> \{x(u-\tau);\ u\}</math> represent the function <math display="inline"> x(u-\tau)</math> with variable <math display="inline"> u</math> and constant <math display="inline"> \tau</math>. And let the shorter notation <math display="inline"> \{x\}</math> represent <math display="inline"> \{x(u);\ u\}</math>.

Then a continuous-time system transforms an input function, <math display="inline"> \{x\},</math> into an output function, <math display="inline">\{y\}</math>. And in general, every value of the output can depend on every value of the input. This concept is represented by:
<math display="block">y(t) \mathrel{\stackrel{\text{def}}{=}} O_t\{x\},</math>
where <math display="inline"> O_t</math> is the transformation operator for time <math display="inline"> t</math>. In a typical system, <math display="inline"> y(t)</math> depends most heavily on the values of <math display="inline"> x</math> that occurred near time <math display="inline"> t</math>. Unless the transform itself changes with <math display="inline"> t</math>, the output function is just constant, and the system is uninteresting.

For a linear system, <math display="inline"> O</math> must satisfy {{EquationNote|Eq.2}}:

{{NumBlk|:|<math> O_t\left\{\int\limits_{-\infty}^\infty c_{\tau}\ x_{\tau}(u) \, \mathrm{d}\tau ;\ u\right\} = \int\limits_{-\infty}^\infty c_\tau\ \underbrace{y_\tau(t)}_{O_t\{x_\tau\}} \, \mathrm{d}\tau.</math>|{{EquationRef|Eq.2}}}}
And the time-invariance requirement is:

{{NumBlk|:|<math> \begin{align} O_t\{x(u - \tau);\ u\} &\mathrel{\stackrel{\quad}{=}} y(t - \tau)\\ &\mathrel{\stackrel{\text{def}}{=}} O_{t-\tau}\{x\}.\, \end{align} </math>|{{EquationRef|Eq.3}}}}

In this notation, we can write the '''impulse response''' as <math display="inline"> h(t) \mathrel{\stackrel{\text{def}}{=}} O_t\{\delta(u);\ u\}.</math> Similarly:

:{|
| <math>h(t - \tau)</math>
| <math>\mathrel{\stackrel{\text{def}}{=}} O_{t-\tau}\{\delta(u);\ u\}</math>
|-
|
| <math>= O_t\{\delta(u - \tau);\ u\}.</math> {{spaces|5}} (using {{EquationNote|Eq.3}})
|}

Substituting this result into the convolution integral:
<math display="block"> \begin{align} (x * h)(t) &= \int_{-\infty}^\infty x(\tau)\cdot h(t - \tau) \,\mathrm{d}\tau \\[4pt] &= \int_{-\infty}^\infty x(\tau)\cdot O_t\{\delta(u-\tau);\ u\} \, \mathrm{d}\tau,\, \end{align} </math>
which has the form of the right side of {{EquationNote|Eq.2}} for the case <math display="inline"> c_\tau = x(\tau)</math> and <math display="inline"> x_\tau(u) = \delta(u-\tau).</math>

{{EquationNote|Eq.2}} then allows this continuation:
<math display="block"> \begin{align} (x * h)(t) &= O_t\left\{\int_{-\infty}^\infty x(\tau)\cdot \delta(u-\tau) \, \mathrm{d}\tau;\ u \right\}\\[4pt] &= O_t\left\{x(u);\ u \right\}\\ &\mathrel{\stackrel{\text{def}}{=}} y(t).\, \end{align} </math>

In summary, the input function, <math display="inline"> \{x\}</math>, can be represented by a continuum of time-shifted impulse functions, combined "linearly", as shown at {{EquationNote|Eq.1}}. The system's linearity property allows the system's response to be represented by the corresponding continuum of impulse <u>responses</u>, combined in the same way. And the time-invariance property allows that combination to be represented by the convolution integral. The mathematical operations above have a simple graphical simulation.<ref>Crutchfield, p. 1. ''Exercises''</ref>

=== Exponentials as eigenfunctions ===

An [[eigenfunction]] is a function for which the output of the operator is a scaled version of the same function. That is,
<math display="block">\mathcal{H}f = \lambda f,</math>
where ''f'' is the eigenfunction and <math>\lambda</math> is the [[eigenvalue]], a constant.

The [[exponential function]]s <math>A e^{s t}</math>, where <math>A, s \in \mathbb{C}</math>, are [[eigenfunction]]s of a [[linear]], [[time-invariant]] operator. A simple proof illustrates this concept. Suppose the input is <math>x(t) = A e^{s t}</math>. The output of the system with impulse response <math>h(t)</math> is then
<math display="block">\int_{-\infty}^\infty h(t - \tau) A e^{s \tau}\, \mathrm{d} \tau</math>
which, by the commutative property of [[convolution]], is equivalent to
<math display="block">\begin{align} \overbrace{\int_{-\infty}^\infty h(\tau) \, A e^{s (t - \tau)} \, \mathrm{d} \tau}^{\mathcal{H} f} &= \int_{-\infty}^\infty h(\tau) \, A e^{s t} e^{-s \tau} \, \mathrm{d} \tau \\[4pt] &= A e^{s t} \int_{-\infty}^{\infty} h(\tau) \, e^{-s \tau} \, \mathrm{d} \tau \\[4pt] &= \overbrace{\underbrace{A e^{s t}}_{\text{Input}}}^{f} \, \overbrace{\underbrace{H(s)}_{\text{Scalar}}}^{\lambda} \, , \\ \end{align}</math>
where the scalar
<math display="block">H(s) \mathrel{\stackrel{\text{def}}{=}} \int_{-\infty}^\infty h(t) e^{-s t} \, \mathrm{d} t</math>
is dependent only on the parameter ''s''.

So the system's response is a scaled version of the input. In particular, for any <math>A, s \in \mathbb{C}</math>, the system output is the product of the input <math>A e^{st}</math> and the constant <math>H(s)</math>. Hence, <math>A e^{s t}</math> is an [[eigenfunction]] of an LTI system, and the corresponding [[eigenvalue]] is <math>H(s)</math>.
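The eigenfunction property can be checked numerically by approximating the convolution integral on a discrete grid. The following sketch is only an illustration, assuming NumPy and an arbitrarily chosen causal impulse response <math>h(t) = e^{-t}</math> for <math>t \ge 0</math> (not one used elsewhere in this article), whose transfer function is <math>H(s) = 1/(s+1)</math> for <math>\operatorname{Re}(s) > -1</math>:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch (not from the article): check numerically that
# x(t) = A*exp(s*t) is an eigenfunction of convolution with the assumed
# impulse response h(t) = exp(-t), t >= 0, whose eigenvalue is H(s) = 1/(s+1).
A = 2.0                                # input amplitude
s = 1j * 3.0                           # complex frequency, Re(s) > -1
dt = 1e-3
tau = np.arange(0.0, 20.0, dt)         # grid covering the support of h
h = np.exp(-tau)                       # h(tau); zero for tau < 0 (causal)

t = 5.0                                # evaluate the output at one instant
y_numeric = np.sum(A * np.exp(s * (t - tau)) * h) * dt   # integral of x(t - tau) h(tau) d tau

H = 1.0 / (s + 1.0)                    # closed-form eigenvalue H(s)
y_predicted = H * A * np.exp(s * t)    # eigenfunction property: output = H(s) * input

print(abs(y_numeric - y_predicted))    # small discretization/truncation error
</syntaxhighlight>

The same check works for any <math>s</math> in the region where <math>H(s)</math> converges.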
==== Direct proof ====

It is also possible to directly derive complex exponentials as eigenfunctions of LTI systems. Let <math>v(t) = e^{i \omega t}</math> be a complex exponential and <math>v_a(t) = e^{i \omega (t+a)}</math> a time-shifted version of it.

<math>H[v_a](t) = e^{i\omega a} H[v](t)</math> by linearity with respect to the constant <math>e^{i \omega a}</math>.

<math>H[v_a](t) = H[v](t+a)</math> by time invariance of <math>H</math>.

So <math>H[v](t+a) = e^{i \omega a} H[v](t)</math>. Setting <math>t = 0</math> and renaming the variable, we get:
<math display="block">H[v](\tau) = e^{i\omega \tau} H[v](0),</math>
i.e. a complex exponential <math>e^{i \omega \tau}</math> as input gives a complex exponential of the same frequency as output.

=== Fourier and Laplace transforms ===

The eigenfunction property of exponentials is very useful for both analysis and insight into LTI systems. The one-sided [[Laplace transform]]
<math display="block">H(s) \mathrel{\stackrel{\text{def}}{=}} \mathcal{L}\{h(t)\} \mathrel{\stackrel{\text{def}}{=}} \int_0^\infty h(t) e^{-s t} \, \mathrm{d} t</math>
is exactly the way to get the eigenvalues from the impulse response. Of particular interest are pure sinusoids (i.e., exponential functions of the form <math>e^{j \omega t}</math> where <math>\omega \in \mathbb{R}</math> and <math>j \mathrel{\stackrel{\text{def}}{=}} \sqrt{-1}</math>). The [[Fourier transform]] <math>H(j \omega) = \mathcal{F}\{h(t)\}</math> gives the eigenvalues for pure complex sinusoids. Both <math>H(s)</math> and <math>H(j\omega)</math> are called the ''system function'', ''system response'', or ''transfer function''.

The Laplace transform is usually used in the context of one-sided signals, i.e. signals that are zero for all values of ''t'' less than some value. Usually, this "start time" is set to zero, for convenience and without loss of generality, with the transform integral being taken from zero to infinity (the transform shown above with lower limit of integration of negative infinity is formally known as the [[bilateral Laplace transform]]).

The Fourier transform is used for analyzing systems that process signals that are infinite in extent, such as modulated sinusoids, even though it cannot be directly applied to input and output signals that are not [[square integrable]]. The Laplace transform actually works directly for these signals if they are zero before a start time, even if they are not square integrable, for stable systems. The Fourier transform is often applied to spectra of infinite signals via the [[Wiener–Khinchin theorem]] even when Fourier transforms of the signals do not exist.

Due to the convolution property of both of these transforms, the convolution that gives the output of the system can be transformed to a multiplication in the transform domain, given signals for which the transforms exist:
<math display="block">y(t) = (h*x)(t) \mathrel{\stackrel{\text{def}}{=}} \int_{-\infty}^\infty h(t - \tau) x(\tau) \, \mathrm{d} \tau \mathrel{\stackrel{\text{def}}{=}} \mathcal{L}^{-1}\{H(s)X(s)\}.</math>

One can use the system response directly to determine how any particular frequency component is handled by a system with that Laplace transform. If we evaluate the system response (Laplace transform of the impulse response) at complex frequency {{nowrap|''s'' {{=}} ''jω''}}, where {{nowrap|''ω'' {{=}} 2''πf''}}, we obtain |''H''(''s'')|, which is the system gain for frequency ''f''. The relative phase shift between the output and input for that frequency component is likewise given by arg(''H''(''s'')).
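As a numerical illustration of reading gain and phase from the system response, the following sketch evaluates an assumed first-order transfer function <math>H(s) = 1/(s+1)</math> (an arbitrary example, not derived above) at <math>s = j\omega</math> using NumPy:

<syntaxhighlight lang="python">
import numpy as np

# Sketch with an assumed first-order system H(s) = 1/(s + 1), evaluated on
# the imaginary axis s = j*omega to read off gain and phase at frequency f.
f = 0.5                                   # frequency in hertz
omega = 2.0 * np.pi * f                   # omega = 2*pi*f
s = 1j * omega
H = 1.0 / (s + 1.0)                       # system response at s = j*omega

gain = abs(H)                             # |H(j*omega)|: system gain at f
gain_db = 20.0 * np.log10(gain)           # the same gain in decibels
phase_deg = np.degrees(np.angle(H))       # arg(H(j*omega)): relative phase shift

print(gain, gain_db, phase_deg)
</syntaxhighlight>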
=== Examples ===

{{bulleted list
| A simple example of an LTI operator is the [[derivative]].
* <math> \frac{\mathrm{d}}{\mathrm{d}t} \left( c_1 x_1(t) + c_2 x_2(t) \right) = c_1 x'_1(t) + c_2 x'_2(t) </math> (i.e., it is linear)
* <math> \frac{\mathrm{d}}{\mathrm{d}t} x(t-\tau) = x'(t-\tau) </math> (i.e., it is time invariant)
When the Laplace transform of the derivative is taken, it transforms to a simple multiplication by the Laplace variable ''s''.
<math display="block"> \mathcal{L}\left\{\frac{\mathrm{d}}{\mathrm{d}t}x(t)\right\} = s X(s) </math>
That the derivative has such a simple Laplace transform partly explains the utility of the transform.
| Another simple LTI operator is an averaging operator
<math display="block"> \mathcal{A}\left\{x(t)\right\} \mathrel{\stackrel{\text{def}}{=}} \int_{t-a}^{t+a} x(\lambda) \, \mathrm{d} \lambda. </math>
By the linearity of integration,
<math display="block">\begin{align} \mathcal{A} \{c_1 x_1(t) + c_2 x_2(t)\} &= \int_{t-a}^{t+a} ( c_1 x_1(\lambda) + c_2 x_2(\lambda)) \, \mathrm{d} \lambda\\ &= c_1 \int_{t-a}^{t+a} x_1(\lambda) \, \mathrm{d} \lambda + c_2 \int_{t-a}^{t+a} x_2(\lambda) \, \mathrm{d} \lambda\\ &= c_1 \mathcal{A}\{x_1(t)\} + c_2 \mathcal{A} \{x_2(t) \}, \end{align}</math>
it is linear. Additionally, because
<math display="block">\begin{align} \mathcal{A}\left\{x(t-\tau)\right\} &= \int_{t-a}^{t+a} x(\lambda-\tau) \, \mathrm{d} \lambda\\ &= \int_{(t-\tau)-a}^{(t-\tau)+a} x(\xi) \, \mathrm{d} \xi\\ &= \mathcal{A}\{x\}(t-\tau), \end{align}</math>
it is time invariant. In fact, <math>\mathcal{A}</math> can be written as a convolution with the [[boxcar function]] <math>\Pi(t)</math>. That is,
<math display="block">\mathcal{A}\left\{x(t)\right\} = \int_{-\infty}^\infty \Pi\left(\frac{\lambda-t}{2a}\right) x(\lambda) \, \mathrm{d} \lambda,</math>
where the boxcar function
<math display="block">\Pi(t) \mathrel{\stackrel{\text{def}}{=}} \begin{cases} 1 &\text{if } |t| < \frac{1}{2},\\ 0 &\text{if } |t| > \frac{1}{2}. \end{cases}</math>
}}

=== Important system properties ===

Some of the most important properties of a system are causality and stability. Causality is a necessity for a physical system whose independent variable is time; however, this restriction is not present in other cases such as image processing.

==== Causality ====
{{Main|Causal system}}
<!--the causal system article needs work-->
A system is causal if the output depends only on present and past, but not future inputs. A necessary and sufficient condition for causality is
<math display="block">h(t) = 0 \quad \forall t < 0,</math>
where <math>h(t)</math> is the impulse response. It is not possible in general to determine causality from the [[two-sided Laplace transform]]. However, when working in the time domain, one normally uses the [[Laplace transform|one-sided Laplace transform]], which requires causality.
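A sampled impulse response can be tested against this condition directly. The sketch below is a minimal illustration assuming NumPy; the two example responses (a one-sided and a two-sided exponential) are arbitrary choices, not taken from the text above:

<syntaxhighlight lang="python">
import numpy as np

# Sketch: test whether h(t) = 0 for all t < 0 on sampled data (hypothetical examples).
t = np.linspace(-5.0, 5.0, 2001)
h_causal = np.where(t >= 0.0, np.exp(-t), 0.0)   # e^{-t} for t >= 0, else 0
h_noncausal = np.exp(-np.abs(t))                 # nonzero for t < 0

def is_causal(t, h, tol=1e-12):
    """True if the sampled impulse response vanishes (within tol) for every t < 0."""
    return bool(np.all(np.abs(h[t < 0.0]) <= tol))

print(is_causal(t, h_causal))      # True
print(is_causal(t, h_noncausal))   # False
</syntaxhighlight>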
==== Stability ====
{{Main|BIBO stability}}
A system is '''bounded-input, bounded-output stable''' (BIBO stable) if, for every bounded input, the output is also bounded. Mathematically, if every input satisfying
<math display="block">\ \|x(t)\|_{\infty} < \infty</math>
leads to an output satisfying
<math display="block">\ \|y(t)\|_{\infty} < \infty</math>
(that is, a finite [[Infinity norm|maximum absolute value]] of <math>x(t)</math> implies a finite maximum absolute value of <math>y(t)</math>), then the system is stable. A necessary and sufficient condition is that <math>h(t)</math>, the impulse response, is in [[Lp space|L<sup>1</sup>]] (has a finite L<sup>1</sup> norm):
<math display="block">\|h(t)\|_1 = \int_{-\infty}^\infty |h(t)| \, \mathrm{d}t < \infty.</math>
In the frequency domain, the [[region of convergence]] must contain the imaginary axis <math>s = j\omega</math>.

As an example, the ideal [[low-pass filter]] with impulse response equal to a [[sinc function]] is not BIBO stable, because the sinc function does not have a finite L<sup>1</sup> norm. Thus, for some bounded input, the output of the ideal low-pass filter is unbounded. In particular, if the input is zero for <math>t < 0</math> and equal to a sinusoid at the [[cut-off frequency]] for <math>t > 0</math>, then the output will be unbounded for all times other than the zero crossings.{{dubious|date=September 2020}}
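The L<sup>1</sup> criterion can be probed numerically. The following sketch (assuming NumPy; the truncation lengths are arbitrary) compares the truncated L<sup>1</sup> norms of a stable exponential response and of the sinc response of the ideal low-pass filter:

<syntaxhighlight lang="python">
import numpy as np

# Sketch: truncated L1 norms of two impulse responses as the truncation grows.
# h1(t) = exp(-t), t >= 0, has a finite L1 norm (BIBO stable);
# h2(t) = sinc(t) = sin(pi*t)/(pi*t) does not (its truncated L1 norm keeps growing).
dt = 1e-3
for T in (10.0, 100.0, 1000.0):
    t = np.arange(0.0, T, dt)
    l1_exp = np.sum(np.abs(np.exp(-t))) * dt         # approaches 1 as T grows
    l1_sinc = 2.0 * np.sum(np.abs(np.sinc(t))) * dt  # grows roughly like log(T) (sinc is even)
    print(T, round(l1_exp, 4), round(l1_sinc, 4))
</syntaxhighlight>

The exponential's truncated norm settles near 1, while the sinc's keeps increasing without bound, consistent with the instability of the ideal low-pass filter.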