In control theory and signal processing, a linear, time-invariant system is said to be minimum-phase if the system and its inverse are causal and stable.<ref>J. O. Smith III, Introduction to Digital Filters with Audio Applications (September 2007 edition).</ref>

The most general causal LTI transfer function can be uniquely factored into the cascade of an all-pass and a minimum-phase system. The system function is then the product of the two parts, and in the time domain the response of the system is the convolution of the two partial responses. The difference between a minimum-phase system and a general transfer function is that a minimum-phase system has all of the poles and zeros of its transfer function in the left half of the s-plane (in discrete time, correspondingly, inside the unit circle of the z-plane). Since inverting a system function turns poles into zeros and vice versa, and since poles to the right of the imaginary axis of the s-plane (or outside the unit circle of the z-plane) lead to unstable systems, only the class of minimum-phase systems is closed under inversion. Intuitively, the minimum-phase part of a general causal system implements its amplitude response with minimal group delay, while its all-pass part corrects the phase response alone to match the original system function.
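For a rational (here FIR) transfer function, this factorization can be sketched numerically by reflecting each zero outside the unit circle to its conjugate reciprocal; the helper name below is illustrative, not a library API.

```python
import numpy as np

def reflect_to_minimum_phase(b):
    """Return FIR coefficients with the same magnitude response as b, but
    with every zero inside the unit circle (the minimum-phase part of the
    factorization).  Illustrative sketch, not a library routine."""
    zeros = np.roots(b)
    outside = zeros[np.abs(zeros) > 1]
    inside = zeros[np.abs(zeros) <= 1]
    # |1 - z0 e^{-jw}| = |z0| * |1 - (1/conj(z0)) e^{-jw}|, so scaling by
    # prod(|z0|) over the reflected zeros preserves the magnitude response.
    b_min = b[0] * np.prod(np.abs(outside)) * np.poly(
        np.concatenate([inside, 1.0 / np.conj(outside)]))
    return np.real(b_min)  # imaginary parts are numerical round-off

# 1 - 2.5 z^{-1} + z^{-2} has zeros at 2 (outside) and 0.5 (inside).
b = np.array([1.0, -2.5, 1.0])
b_min = reflect_to_minimum_phase(b)
w = np.linspace(0, np.pi, 64)
mag = np.abs(np.polyval(b, np.exp(1j * w)))
mag_min = np.abs(np.polyval(b_min, np.exp(1j * w)))
assert np.allclose(mag, mag_min)   # identical magnitude responses
```

The all-pass part of the factorization is then the ratio of the original to the minimum-phase transfer function, whose poles (the reflected zeros) lie inside the unit circle.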

The analysis in terms of poles and zeros is exact only for transfer functions that can be expressed as ratios of polynomials. In continuous time, such systems correspond to networks of conventional, idealized LCR elements (inductors, capacitors, and resistors). In discrete time, they conveniently correspond to approximations thereof, built from addition, multiplication, and unit delay. It can be shown that in both cases, rational system functions of increasing order can efficiently approximate any other system function; thus even system functions lacking a rational form, and so possessing an infinitude of poles and/or zeros, can in practice be implemented as efficiently as any other.

In the context of causal, stable systems, we would in theory be free to choose whether the zeros of the system function lie outside of the stable range (to the right in the s-plane, or outside the unit circle in the z-plane) if closure under inversion were not an issue. However, inversion is of great practical importance, just as theoretically perfect factorizations are in their own right. (Cf. the spectral symmetric/antisymmetric decomposition as another important example, leading e.g. to Hilbert transform techniques.) Many physical systems also naturally tend towards a minimum-phase response, and sometimes have to be inverted using other physical systems obeying the same constraint.

Insight is given below as to why this system is called minimum-phase, and why the basic idea applies even when the system function cannot be cast into a rational form that could be implemented.

Inverse system

A system <math>\mathbb{H}</math> is invertible if we can uniquely determine its input from its output. I.e., we can find a system <math>\mathbb{H}_\text{inv}</math> such that if we apply <math>\mathbb{H}</math> followed by <math>\mathbb{H}_\text{inv}</math>, we obtain the identity system <math>\mathbb{I}</math>. (See Inverse matrix for a finite-dimensional analog). That is, <math display="block">

\mathbb{H}_\text{inv} \mathbb{H} = \mathbb{I}.

</math>

Suppose that <math>\tilde{x}</math> is input to system <math>\mathbb{H}</math> and gives output <math>\tilde{y}</math>: <math display="block">

\mathbb{H} \tilde{x} = \tilde{y}.

</math>

Applying the inverse system <math>\mathbb{H}_\text{inv}</math> to <math>\tilde{y}</math> gives <math display="block">

\mathbb{H}_\text{inv} \tilde{y} = \mathbb{H}_\text{inv} \mathbb{H} \tilde{x} = \mathbb{I} \tilde{x} = \tilde{x}.

</math>

So we see that the inverse system <math>\mathbb{H}_\text{inv}</math> allows us to determine uniquely the input <math>\tilde{x}</math> from the output <math>\tilde{y}</math>.

Discrete-time example

Suppose that the system <math>\mathbb{H}</math> is a discrete-time, linear, time-invariant (LTI) system described by its impulse response <math>h(n)</math>, defined for all integers <math>n</math>. Additionally, suppose <math>\mathbb{H}_\text{inv}</math> has impulse response <math>h_\text{inv}(n)</math>. The cascade of two LTI systems is a convolution. In this case, the above relation is the following: <math display="block">

(h_\text{inv} * h)(n) = (h * h_\text{inv})(n) = \sum_{k=-\infty}^\infty h(k) h_\text{inv}(n - k) = \delta(n),

</math> where <math>\delta(n)</math> is the Kronecker delta, or the identity system in the discrete-time case. (Changing the order of <math>h_\text{inv}</math> and <math>h</math> is allowed because of commutativity of the convolution operation.) Note that this inverse system <math>\mathbb{H}_\text{inv}</math> need not be unique.
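A minimal numerical sketch of this relation: for the FIR system <math>h(n)</math> with taps <math>(1, 0.5)</math> (an illustrative choice, with its zero at <math>-0.5</math> inside the unit circle), the causal stable inverse is <math>h_\text{inv}(n) = (-0.5)^n</math>, and the cascade reproduces the Kronecker delta.

```python
import numpy as np

# h(n) = [1, 0.5]: H(z) = 1 + 0.5 z^{-1} has its zero at z = -0.5,
# inside the unit circle, so the causal inverse 1/H(z) is stable:
# h_inv(n) = (-0.5)^n for n >= 0.
N = 20
h = np.array([1.0, 0.5])
h_inv = (-0.5) ** np.arange(N)

# The cascade should be the identity system: (h * h_inv)(n) = delta(n).
cascade = np.convolve(h, h_inv)[:N]   # drop the truncation edge sample
delta = np.zeros(N)
delta[0] = 1.0
assert np.allclose(cascade, delta)
```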

Minimum-phase system


When we impose the constraints of causality and stability, the inverse system is unique; and the system <math>\mathbb{H}</math> and its inverse <math>\mathbb{H}_\text{inv}</math> are called minimum-phase. The causality and stability constraints in the discrete-time case are the following (for time-invariant systems, where <math>h</math> is the system's impulse response and <math>\|{\cdot}\|_1</math> is the <math>\ell^1</math> norm):

Causality

<math display="block">

h(n) = 0\ \forall n < 0

</math> and <math display="block">

h_\text{inv}(n) = 0\ \forall n < 0.

</math>

Stability

<math display="block">

\sum_{n=-\infty}^\infty |h(n)| = \|h\|_1 < \infty

</math> and <math display="block">

\sum_{n=-\infty}^\infty |h_\text{inv}(n)| = \|h_\text{inv}\|_1 < \infty.

</math>

See the article on stability for the analogous conditions for the continuous-time case.

Frequency analysis


Discrete-time frequency analysis

Performing frequency analysis for the discrete-time case will provide some insight. The time-domain equation is <math display="block">

(h * h_\text{inv})(n) = \delta(n).

</math>

Applying the Z-transform gives the following relation in the z domain: <math display="block">

H(z) H_\text{inv}(z) = 1.

</math>

From this relation, we realize that <math display="block">

H_\text{inv}(z) = \frac{1}{H(z)}.

</math>

For simplicity, we consider only the case of a rational transfer function <math>H(z)</math>. Causality and stability imply that all poles of <math>H(z)</math> must be strictly inside the unit circle (see stability). Suppose <math display="block">

H(z) = \frac{A(z)}{D(z)},

</math> where <math>A(z)</math> and <math>D(z)</math> are polynomials in <math>z</math>. Causality and stability imply that the poles (the roots of <math>D(z)</math>) must be strictly inside the unit circle. We also know that <math display="block">

H_\text{inv}(z) = \frac{D(z)}{A(z)},

</math> so causality and stability for <math>H_\text{inv}(z)</math> imply that its poles (the roots of <math>A(z)</math>) must be strictly inside the unit circle. These two constraints imply that both the zeros and the poles of a minimum-phase system must be strictly inside the unit circle.
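These two root conditions are easy to check numerically. The helper below is an illustrative sketch (not a library function), taking numerator and denominator coefficients in powers of <math>z^{-1}</math>.

```python
import numpy as np

def is_minimum_phase(b, a, tol=1e-12):
    """Check whether H(z) = B(z)/A(z), with coefficients given in powers
    of z^{-1}, is minimum-phase: every pole *and* every zero strictly
    inside the unit circle.  Illustrative helper, not a library API."""
    zeros, poles = np.roots(b), np.roots(a)
    return bool(np.all(np.abs(zeros) < 1 - tol) and
                np.all(np.abs(poles) < 1 - tol))

# H(z) = (1 - 0.5 z^{-1}) / (1 - 0.9 z^{-1}): zero at 0.5, pole at 0.9.
assert is_minimum_phase([1, -0.5], [1, -0.9])
# Reflecting the zero to 2 (outside the unit circle) breaks the property.
assert not is_minimum_phase([1, -2.0], [1, -0.9])
```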

Continuous-time frequency analysis

Analysis for the continuous-time case proceeds in a similar manner, except that we use the Laplace transform for frequency analysis. The time-domain equation is <math display="block">

(h * h_\text{inv})(t) = \delta(t),

</math> where <math>\delta(t)</math> is the Dirac delta function (the identity operator in the continuous-time case, because of the sifting property with any signal <math>x(t)</math>): <math display="block">

(\delta * x)(t) = \int_{-\infty}^\infty \delta(t - \tau) x(\tau) \,d\tau = x(t).

</math>

Applying the Laplace transform gives the following relation in the s-plane: <math display="block">

H(s) H_\text{inv}(s) = 1,

</math> from which we realize that <math display="block">

H_\text{inv}(s) = \frac{1}{H(s)}.

</math>

Again, for simplicity, we consider only the case of a rational transfer function <math>H(s)</math>. Causality and stability imply that all poles of <math>H(s)</math> must be strictly inside the left-half s-plane (see stability). Suppose <math display="block">

H(s) = \frac{A(s)}{D(s)},

</math> where <math>A(s)</math> and <math>D(s)</math> are polynomials in <math>s</math>. Causality and stability imply that the poles (the roots of <math>D(s)</math>) must be strictly inside the left-half s-plane. We also know that <math display="block">

H_\text{inv}(s) = \frac{D(s)}{A(s)},

</math> so causality and stability for <math>H_\text{inv}(s)</math> imply that its poles (the roots of <math>A(s)</math>) must be strictly inside the left-half s-plane. These two constraints imply that both the zeros and the poles of a minimum-phase system must be strictly inside the left-half s-plane.

Relationship of magnitude response to phase response

A minimum-phase system, whether discrete-time or continuous-time, has the additional useful property that the natural logarithm of the magnitude of the frequency response (the "gain", measured in nepers, which is proportional to dB) is related to the phase angle of the frequency response (measured in radians) by the Hilbert transform. That is, in the continuous-time case, let <math display="block">

H(j\omega)\ \stackrel{\text{def}}{=}\ H(s)\Big|_{s=j\omega}

</math> be the complex frequency response of the system <math>H(s)</math>. Then, only for a minimum-phase system, the phase response of <math>H(s)</math> is related to the gain by <math display="block">

\arg[H(j\omega)] = -\mathcal{H}\big\{\log\big(|H(j\omega)|\big)\big\},

</math> where <math>\mathcal{H}</math> denotes the Hilbert transform, and, inversely, <math display="block">

\log\big(|H(j\omega)|\big) = \log\big(|H(j\infty)|\big) + \mathcal{H}\big\{\arg[H(j\omega)]\big\}.

</math>

Stated more compactly, let <math display="block">

H(j\omega) = |H(j\omega)| e^{j\arg[H(j\omega)]}\ \stackrel{\text{def}}{=}\ e^{\alpha(\omega)} e^{j\phi(\omega)} = e^{\alpha(\omega) + j\phi(\omega)},

</math> where <math>\alpha(\omega)</math> and <math>\phi(\omega)</math> are real functions of a real variable. Then <math display="block">

\phi(\omega) = -\mathcal{H}\{\alpha(\omega)\}

</math> and <math display="block">

\alpha(\omega) = \alpha(\infty) + \mathcal{H}\{\phi(\omega)\}.

</math>

The Hilbert transform operator is defined to be <math display="block">

\mathcal{H}\{x(t)\}\ \stackrel{\text{def}}{=}\ \hat{x}(t) = \frac{1}{\pi} \int_{-\infty}^\infty \frac{x(\tau)}{t - \tau} \,d\tau.

</math> where the integral is understood as a Cauchy principal value.

An equivalent corresponding relationship is also true for discrete-time minimum-phase systems.
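In discrete time, the minimum phase can be recovered from the magnitude response alone via the real cepstrum, the standard computational counterpart of the Hilbert-transform relation above. The sketch below assumes <math>|H|</math> is nonzero on the whole FFT grid; the filter <math>h = (1, 0.5)</math> is an illustrative minimum-phase example.

```python
import numpy as np

# Reconstruct the minimum-phase frequency response from |H| alone by
# folding the real cepstrum of log|H| onto its causal part.
N = 512
h = np.array([1.0, 0.5])                 # minimum phase: zero at -0.5
H = np.fft.fft(h, N)

c = np.real(np.fft.ifft(np.log(np.abs(H))))   # real cepstrum of log|H|
fold = np.zeros(N)
fold[0] = c[0]                                # keep the DC term
fold[N // 2] = c[N // 2]                      # keep the Nyquist term
fold[1:N // 2] = 2 * c[1:N // 2]              # fold anticausal part forward
H_min = np.exp(np.fft.fft(fold))              # minimum-phase spectrum

# Since h is already minimum phase, the reconstructed phase matches the
# true phase of H, even though only the magnitude was used.
assert np.allclose(np.angle(H_min), np.angle(H), atol=1e-6)
assert np.allclose(np.abs(H_min), np.abs(H), atol=1e-6)
```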

Minimum phase in the time domain


For all causal and stable systems that have the same magnitude response, the minimum-phase system has its energy concentrated near the start of its impulse response. That is, it minimizes the following function, which we can think of as the delay of energy in the impulse response: <math display="block">

\sum_{n=m}^\infty |h(n)|^2 \quad \forall m \in \mathbb{Z}^+.

</math>
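This energy concentration is visible already for a two-tap example (an illustrative pair, not from the text above): a zero at <math>-1/2</math> versus its reflection at <math>-2</math> gives the same magnitude response, but the minimum-phase filter front-loads its energy.

```python
import numpy as np

# Two FIR filters with identical magnitude response: h_min has its zero
# at -1/2 (inside the unit circle), h_max has it reflected to -2.
h_min = np.array([2.0, 1.0])
h_max = np.array([1.0, 2.0])
w = np.linspace(0, np.pi, 64)
assert np.allclose(np.abs(np.polyval(h_min, np.exp(1j * w))),
                   np.abs(np.polyval(h_max, np.exp(1j * w))))

# Running energy sums: the minimum-phase filter dominates at every
# sample (equivalently, its tail energy sums are the smallest).
E_min = np.cumsum(h_min ** 2)   # [4, 5]
E_max = np.cumsum(h_max ** 2)   # [1, 5]
assert np.all(E_min >= E_max)
assert E_min[-1] == E_max[-1]   # equal total energy
```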

Minimum phase as minimum group delay

For all causal and stable systems that have the same magnitude response, the minimum phase system has the minimum group delay. The following proof illustrates this idea of minimum group delay.

Suppose we consider one zero <math>a</math> of the transfer function <math>H(z)</math>. Let's place this zero <math>a</math> inside the unit circle (<math>\left| a \right| < 1</math>) and see how the group delay is affected. <math display="block">a = \left| a \right| e^{i \theta_a} \, \text{ where } \, \theta_a = \operatorname{Arg}(a)</math>

Since the zero <math>a</math> contributes the factor <math>1 - a z^{-1}</math> to the transfer function, the phase contributed by this term is the following. <math display="block">\begin{align} \phi_a \left(\omega \right) &= \operatorname{Arg} \left(1 - a e^{-i \omega} \right)\\ &= \operatorname{Arg} \left(1 - \left| a \right| e^{i \theta_a} e^{-i \omega} \right)\\ &= \operatorname{Arg} \left(1 - \left| a \right| e^{-i (\omega - \theta_a)} \right)\\ &= \operatorname{Arg} \left( \left\{ 1 - \left| a \right| \cos( \omega - \theta_a ) \right\} + i \left\{ \left| a \right| \sin( \omega - \theta_a ) \right\}\right)\\ &= \operatorname{Arg} \left( \left\{ \left| a \right|^{-1} - \cos( \omega - \theta_a ) \right\} + i \left\{ \sin( \omega - \theta_a ) \right\} \right) \end{align}</math>

<math>\phi_a (\omega)</math> contributes the following to the group delay.

<math display="block">\begin{align} -\frac{d \phi_a (\omega)}{d \omega} &= \frac{ \sin^2( \omega - \theta_a ) + \cos^2( \omega - \theta_a ) - \left| a \right|^{-1} \cos( \omega - \theta_a ) }{ \sin^2( \omega - \theta_a ) + \cos^2( \omega - \theta_a ) + \left| a \right|^{-2} - 2 \left| a \right|^{-1} \cos( \omega - \theta_a ) } \\ &= \frac{ \left| a \right| - \cos( \omega - \theta_a ) }{ \left| a \right| + \left| a \right|^{-1} - 2 \cos( \omega - \theta_a ) } \end{align} </math>

The denominator and <math>\theta_a</math> are invariant under reflecting the zero <math>a</math> outside of the unit circle, i.e., replacing <math>a</math> with <math>(a^{-1})^{*}</math>. However, reflecting <math>a</math> outside of the unit circle replaces <math>\left| a \right|</math> in the numerator with <math>\left| a \right|^{-1} > \left| a \right|</math>, which increases the numerator. Thus, having <math>a</math> inside the unit circle minimizes the group delay contributed by the factor <math>1 - a z^{-1}</math>. We can extend this result to the general case of more than one zero, since the phase of multiplicative factors of the form <math>1 - a_i z^{-1}</math> is additive. That is, for a transfer function with <math>N</math> zeros, <math display="block">\operatorname{Arg}\left( \prod_{i = 1}^N \left( 1 - a_i z^{-1} \right) \right) = \sum_{i = 1}^N \operatorname{Arg}\left( 1 - a_i z^{-1} \right) </math>

So, a minimum phase system with all zeros inside the unit circle minimizes the group delay since the group delay of each individual zero is minimized.
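The derived group-delay formula can be checked directly: evaluating it for a zero inside the unit circle and for its reflection (here <math>a = 0.5</math>, an illustrative choice) shows the inside zero contributes less group delay at every frequency.

```python
import numpy as np

# Group delay contributed by the factor (1 - a z^{-1}), from the formula
# above: (|a| - cos(w - theta_a)) / (|a| + |a|^{-1} - 2 cos(w - theta_a)).
def zero_group_delay(a, w):
    r, theta = np.abs(a), np.angle(a)
    return (r - np.cos(w - theta)) / (r + 1 / r - 2 * np.cos(w - theta))

w = np.linspace(0, 2 * np.pi, 256)
a = 0.5                                       # zero inside the unit circle
gd_in = zero_group_delay(a, w)
gd_out = zero_group_delay(1 / np.conj(a), w)  # reflected outside, at 2

# Reflection leaves the denominator and theta_a unchanged but increases
# the numerator, so the inside zero gives smaller group delay everywhere.
assert np.all(gd_in < gd_out)
```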

File:Minimum and maximum phase responses.gif
Illustration of the analysis above. The top and bottom filters have the same gain response (left: Nyquist diagrams; right: phase responses), but the top filter, with <math>a = 0.8 < 1</math>, has the smaller excursion in its phase response.

Non-minimum phase

Causal and stable systems whose inverses are causal but unstable are known as non-minimum-phase systems. A given non-minimum-phase system will have a greater phase contribution than the minimum-phase system with the equivalent magnitude response.

Maximum phase

A maximum-phase system is the opposite of a minimum-phase system. A causal and stable LTI system is a maximum-phase system if its inverse is causal and unstable. That is,

  • The zeros of the discrete-time system are outside the unit circle.
  • The zeros of the continuous-time system are in the right half of the complex plane.

Such a system is called a maximum-phase system because it has the maximum group delay of the set of systems that have the same magnitude response. In this set of equal-magnitude-response systems, the maximum phase system will have maximum energy delay.

For example, the two continuous-time LTI systems described by the transfer functions <math display="block">\frac{s + 10}{s + 5} \qquad \text{and} \qquad \frac{s - 10}{s + 5}</math>

have equivalent magnitude responses; however, the second system has a much larger contribution to the phase shift. Hence, in this set, the second system is the maximum-phase system and the first system is the minimum-phase system. Such systems are also widely known as non-minimum-phase systems, and they raise many stability concerns in control. One recent solution is to move the right-half-plane (RHP) zeros to the left half-plane (LHP) using the PFCD method.<ref>Template:Cite book</ref>
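Evaluating the two transfer functions from this example on the imaginary axis <math>s = j\omega</math> confirms both claims numerically: the magnitudes agree, while the RHP zero adds far more phase.

```python
import numpy as np

# The two example systems evaluated on s = j*omega.
w = np.linspace(0.01, 100, 500)
s = 1j * w
H_min = (s + 10) / (s + 5)    # zero at s = -10: left half-plane
H_max = (s - 10) / (s + 5)    # zero at s = +10: right half-plane

# Identical magnitude responses, since |jw + 10| = |jw - 10| ...
assert np.allclose(np.abs(H_min), np.abs(H_max))
# ... but the right-half-plane zero contributes much more phase.
assert np.all(np.abs(np.angle(H_max)) > np.abs(np.angle(H_min)))
```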

Mixed phase

A mixed-phase system has some of its zeros inside the unit circle and others outside the unit circle. Thus, its group delay is neither minimum nor maximum but lies somewhere between the group delays of its minimum-phase and maximum-phase equivalent systems.

For example, the continuous-time LTI system described by the transfer function <math display="block">\frac{ (s + 1)(s - 5)(s + 10) }{ (s+2)(s+4)(s+6) }</math> is stable and causal; however, it has zeros in both the left and right halves of the complex plane. Hence, it is a mixed-phase system. To control transfer functions that include such systems, methods such as the internal model controller (IMC),<ref>Template:Cite book</ref> the generalized Smith predictor (GSP),<ref>Template:Cite journal</ref> and parallel feedforward control with derivative (PFCD)<ref>Template:Cite book</ref> have been proposed.

Linear phase

A linear-phase system has constant group delay. Non-trivial linear-phase or nearly-linear-phase systems are also mixed-phase.

See also

References

Template:Reflist

Further reading

Template:Refbegin

  • Dimitris G. Manolakis, Vinay K. Ingle, Stephen M. Kogon: Statistical and Adaptive Signal Processing, pp. 54–56, McGraw-Hill, Template:ISBN
  • Boaz Porat: A Course in Digital Signal Processing, pp. 261–263, John Wiley and Sons, Template:ISBN

Template:Refend