==Properties==

===Linearity===
The DFT is a linear transform, i.e. if <math>\mathcal{F}(\{x_n\})_k=X_k</math> and <math>\mathcal{F}(\{y_n\})_k=Y_k</math>, then for any complex numbers <math>a,b</math>:
:<math>\mathcal{F}(\{a x_n + b y_n\})_k=a X_k + b Y_k</math>

===Time and frequency reversal===
Reversing the time (i.e. replacing <math>n</math> by <math>N-n</math>){{ efn-ua|Time reversal for the DFT means replacing <math>n</math> by <math>N-n</math> and not <math>n</math> by <math>-n</math> to avoid negative indices. }} in <math>x_n</math> corresponds to reversing the frequency (i.e. <math>k</math> by <math>N-k</math>).<ref name=ProakisManolakis/>{{rp|p.421}} Mathematically, if <math>\{x_n\}</math> represents the vector '''x''' then
:if <math>\mathcal{F}(\{x_n\})_k=X_k</math>
:then <math>\mathcal{F}(\{ x_{N-n} \})_k=X_{N-k}</math>

===Conjugation in time===
If <math>\mathcal{F}(\{x_n\})_k = X_k</math> then <math>\mathcal{F}(\{ x_n^* \})_k = X_{N-k}^*</math>.<ref name=ProakisManolakis/>{{rp|p.423}}

===Real and imaginary part===
This table shows some mathematical operations on <math>x_n</math> in the time domain and the corresponding effects on its DFT <math>X_k</math> in the frequency domain.
{| class="wikitable"
|-
! Property
! Time domain<br /><math>x_n</math>
! Frequency domain<br /><math>X_k</math>
|-
| Real part in time
| <math>\operatorname{Re}{\left(x_n\right)}</math>
| <math>\frac{1}{2}\left(X_k + X^*_{N-k}\right)</math>
|-
| Imaginary part in time
| <math>\operatorname{Im}{\left(x_n\right)}</math>
| <math>\frac{1}{2i}\left(X_k - X^*_{N-k}\right)</math>
|-
| Real part in frequency
| <math>\frac{1}{2}\left(x_n + x^*_{N-n}\right)</math>
| <math>\operatorname{Re}{\left(X_k\right)}</math>
|-
| Imaginary part in frequency
| <math>\frac{1}{2i}\left(x_n - x^*_{N-n}\right)</math>
| <math>\operatorname{Im}{\left(X_k\right)}</math>
|}

=== Orthogonality ===
The vectors <math>u_k = \left[\left. e^{ \frac{i 2\pi}{N} kn} \;\right|\; n=0,1,\ldots,N-1 \right]^\mathsf{T}</math>, for <math>k=0,1,\ldots,N-1</math>, form an [[orthogonal basis]] over the set of ''N''-dimensional complex vectors:
:<math>u^\mathsf{T}_k u_{k'}^* = \sum_{n=0}^{N-1} \left(e^{ \frac{i 2\pi}{N} kn}\right) \left(e^{\frac{i 2\pi}{N} (-k')n}\right) = \sum_{n=0}^{N-1} e^{ \frac{i 2\pi}{N} (k-k') n} = N~\delta_{kk'} </math>
where <math>\delta_{kk'}</math> is the [[Kronecker delta]]. (In the last step, the summation is trivial if <math>k=k'</math>, where it is {{nowrap|1=1 + 1 + ⋯ = ''N'',}} and otherwise is a [[geometric series]] that can be explicitly summed to obtain zero.) This orthogonality condition can be used to derive the formula for the IDFT from the definition of the DFT, and is equivalent to the unitarity property below.

=== The Plancherel theorem and Parseval's theorem ===
If <math>X_k</math> and <math>Y_k</math> are the DFTs of <math>x_n</math> and <math>y_n</math> respectively then [[Parseval's theorem]] states:
:<math>\sum_{n=0}^{N-1} x_n y^*_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k Y^*_k</math>
where the star denotes [[Complex conjugate|complex conjugation]]. The [[Plancherel theorem]] is a special case of Parseval's theorem and states:
:<math>\sum_{n=0}^{N-1} |x_n|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X_k|^2.</math>
These theorems are also equivalent to the unitarity condition below.
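
These identities are easy to check numerically. The following is a minimal sketch using NumPy, whose <code>numpy.fft.fft</code> follows the same unnormalized sign convention as the definition above; the transform length, the random test vectors, and the comparison tolerances are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.normal(size=N) + 1j * rng.normal(size=N)
y = rng.normal(size=N) + 1j * rng.normal(size=N)
X, Y = np.fft.fft(x), np.fft.fft(y)

# Linearity: F{a x + b y} = a X + b Y
a, b = 2.0 - 1.0j, 0.5 + 3.0j
assert np.allclose(np.fft.fft(a * x + b * y), a * X + b * Y)

# Time reversal (indices taken mod N): F{x_{N-n}}_k = X_{N-k}
rev = lambda v: v[(-np.arange(N)) % N]   # v_0, v_{N-1}, ..., v_1
assert np.allclose(np.fft.fft(rev(x)), rev(X))

# Conjugation in time: F{x_n^*}_k = X_{N-k}^*
assert np.allclose(np.fft.fft(np.conj(x)), np.conj(rev(X)))

# Parseval: sum_n x_n y_n^* = (1/N) sum_k X_k Y_k^*
assert np.isclose(np.sum(x * np.conj(y)), np.sum(X * np.conj(Y)) / N)
</syntaxhighlight>
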
===Periodicity===
The periodicity can be shown directly from the definition:
: <math>X_{k+N} \ \triangleq \ \sum_{n=0}^{N-1} x_n e^{-\frac{i 2\pi}{N} (k+N) n} = \sum_{n=0}^{N-1} x_n e^{-\frac{i 2\pi}{N} k n} \underbrace{e^{-i 2 \pi n}}_{1} = \sum_{n=0}^{N-1} x_n e^{-\frac{i 2\pi}{N} k n} = X_k. </math>
Similarly, it can be shown that the IDFT formula leads to a periodic extension of <math>x_n</math>.

===Shift theorem===
Multiplying <math>x_n</math> by a ''linear phase'' <math>e^{\frac{i 2\pi}{N} nm}</math> for some integer ''m'' corresponds to a ''circular shift'' of the output <math>X_k</math>: <math>X_k</math> is replaced by <math>X_{k-m}</math>, where the subscript is interpreted [[modular arithmetic|modulo]] ''N'' (i.e., periodically). Similarly, a circular shift of the input <math>x_n</math> corresponds to multiplying the output <math>X_k</math> by a linear phase. Mathematically, if <math>\{x_n\}</math> represents the vector '''x''' then
:if <math>\mathcal{F}(\{x_n\})_k=X_k</math>
:then <math>\mathcal{F}\left(\left\{ x_n \cdot e^{\frac{i 2\pi}{N}n m} \right\}\right)_k=X_{k-m}</math>
:and <math>\mathcal{F}\left(\left\{x_{n-m}\right\}\right)_k=X_k \cdot e^{-\frac{i 2\pi}{N}k m}</math>
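
As an illustrative check of the shift theorem (again assuming NumPy's convention, which matches the definitions here), a circular shift can be performed with <code>numpy.roll</code>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N, m = 8, 3
x = rng.normal(size=N) + 1j * rng.normal(size=N)
X = np.fft.fft(x)
k = np.arange(N)

# Multiplying by a linear phase circularly shifts the spectrum:
# F{x_n exp(+2*pi*i*n*m/N)}_k = X_{k-m}
assert np.allclose(np.fft.fft(x * np.exp(2j * np.pi * k * m / N)), np.roll(X, m))

# Circularly shifting the input multiplies the output by a linear phase:
# F{x_{n-m}}_k = X_k exp(-2*pi*i*k*m/N)
assert np.allclose(np.fft.fft(np.roll(x, m)), X * np.exp(-2j * np.pi * k * m / N))
</syntaxhighlight>
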
===Circular convolution theorem and cross-correlation theorem===
{{Main|Convolution theorem#Functions of a discrete variable (sequences)}}
The [[DTFT#Convolution|convolution theorem]] for the [[discrete-time Fourier transform]] (DTFT) indicates that a convolution of two sequences can be obtained as the inverse transform of the product of the individual transforms. An important simplification occurs when one of the sequences is N-periodic, denoted here by <math>y_{_N},</math> because <math>\scriptstyle \text{DTFT} \displaystyle \{y_{_N}\}</math> is non-zero at only discrete frequencies (see {{slink|DTFT#Periodic_data}}), and therefore so is its product with the continuous function <math>\scriptstyle \text{DTFT} \displaystyle \{x\}.</math> That leads to a considerable simplification of the inverse transform.
:<math>x * y_{_N}\ =\ \scriptstyle{\rm DTFT}^{-1} \displaystyle \left[\scriptstyle{\rm DTFT} \displaystyle \{x\}\cdot \scriptstyle{\rm DTFT} \displaystyle \{y_{_N}\}\right]\ =\ \scriptstyle{\rm DFT}^{-1} \displaystyle \left[\scriptstyle{\rm DFT} \displaystyle \{x_{_N}\}\cdot \scriptstyle{\rm DFT} \displaystyle \{y_{_N}\}\right],</math>
where <math>x_{_N}</math> is a [[periodic summation]] of the <math>x</math> sequence''':''' <math>(x_{_N})_n\ \triangleq \sum_{m=-\infty}^{\infty} x_{(n-mN)}.</math> Customarily, the DFT and inverse DFT summations are taken over the domain <math>[0,N-1]</math>. Defining those DFTs as <math>X</math> and <math>Y</math>, the result is''':'''
:<math> (x * y_{_N})_n \triangleq \sum_{\ell=-\infty}^{\infty}x_\ell \cdot (y_{_N})_{n-\ell} = \underbrace{\mathcal{F}^{-1}}_{\rm DFT^{-1}} \left \{ X\cdot Y \right \}_n.</math>
In practice, the <math>x</math> sequence is usually length ''N'' or less, and <math>y_{_N}</math> is a periodic extension of an N-length <math>y</math>-sequence, which can also be expressed as a ''circular function''''':'''
:<math>(y_{_N})_n = \sum_{p=-\infty}^\infty y_{(n-pN)} = y_{(n\operatorname{mod}N)}, \quad n\in\mathbb{Z}.</math>
Then the convolution can be written as''':'''
{{Equation box 1|title= |indent =: |cellpadding= 6 |border |border colour = #0073CF |background colour=#F5FFFA |equation = :<math> \mathcal{F}^{-1} \left \{ X\cdot Y \right \}_n = \sum_{\ell=0}^{N-1}x_\ell \cdot y_{_{(n-\ell)\operatorname{mod}N}} </math> }}
which gives rise to the interpretation as a ''circular'' convolution of <math>x</math> and <math>y.</math><ref name=Oppenheim/><ref name=McGillem/> It is often used to efficiently compute their linear convolution (see [[Circular convolution#Example|Circular convolution]], [[Convolution#Fast convolution algorithms|Fast convolution algorithms]], and [[Overlap-save method|Overlap-save]]).

Similarly, the [[cross-correlation]] of <math>x</math> and <math>y_{_N}</math> is given by''':'''
:<math>(x \star y_{_N})_n \triangleq \sum_{\ell=-\infty}^{\infty} x_\ell^* \cdot (y_{_N})_{n+\ell} = \mathcal{F}^{-1} \left \{ X^* \cdot Y \right \}_n.</math>

=== Uniqueness of the Discrete Fourier Transform ===
As seen above, the discrete Fourier transform has the fundamental property of carrying convolution into componentwise product. A natural question is whether it is the only one with this ability. It has been shown<ref>{{cite book |last1=Amiot |first1=Emmanuel |title=Music through Fourier Space |series=Computational Music Science |date=2016 |publisher=Springer |location=Zürich |isbn=978-3-319-45581-5 |page=8 |doi=10.1007/978-3-319-45581-5 |s2cid=6224021 |url=https://link.springer.com/book/10.1007/978-3-319-45581-5 |ref=Theorem 1.11}}</ref><ref name=Baraquin/> that any linear transform that turns convolution into pointwise product is the DFT up to a permutation of coefficients. Since the number of permutations of ''n'' elements equals ''n''!, there exist exactly ''n''! linear and invertible maps with the same fundamental property as the DFT with respect to convolution.

=== Convolution theorem duality ===
It can also be shown that''':'''
:<math>\mathcal{F} \left \{ \mathbf{x\cdot y} \right \}_k \ \triangleq \sum_{n=0}^{N-1} x_n \cdot y_n \cdot e^{-i \frac{2\pi}{N} k n}</math>
::<math>=\frac{1}{N} (\mathbf{X * Y_N})_k, </math>
which is the circular convolution of <math>\mathbf{X}</math> and <math>\mathbf{Y}</math>.
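
The circular convolution and cross-correlation identities can be sketched numerically as follows; the direct sums are written out naively for clarity, and the sequence length is an arbitrary example.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
N = 8
x = rng.normal(size=N)
y = rng.normal(size=N)

# Direct circular convolution: sum over l of x[l] * y[(n - l) mod N]
direct = np.array([sum(x[l] * y[(n - l) % N] for l in range(N)) for n in range(N)])

# Via the DFT: inverse transform of the pointwise product of the transforms
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y))
assert np.allclose(direct, via_dft)

# Circular cross-correlation: sum over l of conj(x[l]) * y[(n + l) mod N]
xcorr = np.array([sum(np.conj(x[l]) * y[(n + l) % N] for l in range(N)) for n in range(N)])
assert np.allclose(xcorr, np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)))
</syntaxhighlight>
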
===Trigonometric interpolation polynomial===
The [[trigonometric interpolation polynomial]]
:<math>p(t) = \begin{cases} \displaystyle\frac{1}{N} \left[ \begin{alignat}{3} X_0 + X_1 e^{i 2\pi t} + \cdots &+ X_{\frac{N}{2}-1} e^{i 2\pi\big(\!\frac{N}{2}-1\!\big) t} &\\ &+ X_{\frac{N}{2}} \cos(N\pi t) &\\ &+ X_{\frac{N}{2}+1} e^{-i 2\pi\big(\!\frac{N}{2}-1\!\big) t} &+ \cdots + X_{N-1} e^{-i 2\pi t} \end{alignat} \right] & N\text{ even} \\ \displaystyle\frac{1}{N} \left[ \begin{alignat}{3} X_0 + X_1 e^{i 2\pi t} + \cdots &+ X_{\frac{N-1}{2}} e^{i 2\pi\frac{N-1}{2} t} &\\ &+ X_{\frac{N+1}{2}} e^{-i 2\pi\frac{N-1}{2} t} &+ \cdots + X_{N-1} e^{-i 2\pi t} \end{alignat} \right] & N\text{ odd} \end{cases}</math>
where the coefficients ''X''<sub>''k''</sub> are given by the DFT of ''x''<sub>''n''</sub> above, satisfies the interpolation property <math>p(n/N) = x_n</math> for <math>n = 0, \ldots, N-1</math>.

For even ''N'', notice that the [[Nyquist frequency|Nyquist component]] <math display="inline">\frac{X_{N/2}}{N} \cos(N\pi t)</math> is handled specially.

This interpolation is ''not unique'': aliasing implies that one could add ''N'' to any of the complex-sinusoid frequencies (e.g. changing <math>e^{-it}</math> to <math>e^{i(N-1)t}</math>) without changing the interpolation property, but giving ''different'' values in between the <math>x_n</math> points. The choice above, however, is typical because it has two useful properties. First, it consists of sinusoids whose frequencies have the smallest possible magnitudes: the interpolation is [[bandlimited]]. Second, if the <math>x_n</math> are real numbers, then <math>p(t)</math> is real as well.

In contrast, the most obvious trigonometric interpolation polynomial is the one in which the frequencies range from 0 to <math>N-1</math> (instead of roughly <math>-N/2</math> to <math>+N/2</math> as above), similar to the inverse DFT formula. This interpolation does ''not'' minimize the slope, and is ''not'' generally real-valued for real <math>x_n</math>; its use is a common mistake.

=== The unitary DFT ===
Another way of looking at the DFT is to note that in the above discussion, the DFT can be expressed as the [[DFT matrix]], a [[Vandermonde matrix]], [[Generalizations of Pauli matrices#Construction: The clock and shift matrices|introduced by Sylvester]] in 1867,
:<math>\mathbf{F} = \begin{bmatrix} \omega_N^{0 \cdot 0} & \omega_N^{0 \cdot 1} & \cdots & \omega_N^{0 \cdot (N-1)} \\ \omega_N^{1 \cdot 0} & \omega_N^{1 \cdot 1} & \cdots & \omega_N^{1 \cdot (N-1)} \\ \vdots & \vdots & \ddots & \vdots \\ \omega_N^{(N-1) \cdot 0} & \omega_N^{(N-1) \cdot 1} & \cdots & \omega_N^{(N-1) \cdot (N-1)} \\ \end{bmatrix} </math>
where <math>\omega_N = e^{-i 2 \pi/N}</math> is a primitive [[roots of unity|''N''th root of unity]]. For example, in the case when <math>N = 2</math>, <math>\omega_N = e^{-i \pi}=-1</math>, and
:<math>\mathbf{F} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \\ \end{bmatrix}, </math>
(which is a [[Hadamard matrix]]) or when <math>N = 4</math> as in the {{Section link|2=Example}} above, <math>\omega_N = e^{-i \pi/2}=-i</math>, and
:<math>\mathbf{F} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \\ \end{bmatrix}.</math>
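
A brief numerical sketch of the DFT matrix in NumPy, reproducing the <math>N = 4</math> example above; the test vector <code>x</code> is an arbitrary choice.

<syntaxhighlight lang="python">
import numpy as np

N = 4
omega = np.exp(-2j * np.pi / N)            # primitive N-th root of unity
n = np.arange(N)
F = omega ** np.outer(n, n)                # Vandermonde / DFT matrix

# For N = 4 this reproduces the matrix shown above (omega = -i)
assert np.allclose(F, np.array([[1, 1, 1, 1],
                                [1, -1j, -1, 1j],
                                [1, -1, 1, -1],
                                [1, 1j, -1, -1j]]))

# Multiplying by F computes the DFT of a vector
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(F @ x, np.fft.fft(x))
</syntaxhighlight>
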
The inverse transform is then given by the inverse of the above matrix,
:<math>\mathbf{F}^{-1}=\frac{1}{N}\mathbf{F}^*</math>
With [[unitary operator|unitary]] normalization constants <math display="inline">1/\sqrt{N}</math>, the DFT becomes a [[unitary transformation]], defined by a unitary matrix:
:<math>\begin{align} \mathbf{U} &= \frac{1}{\sqrt{N}}\mathbf{F} \\ \mathbf{U}^{-1} &= \mathbf{U}^* \\ \left|\det(\mathbf{U})\right| &= 1 \end{align}</math>
where <math>\det()</math> is the [[determinant]] function. The determinant is the product of the eigenvalues, which are always <math>\pm 1</math> or <math>\pm i</math> as described below. In a real vector space, a unitary transformation can be thought of as simply a rigid rotation of the coordinate system, and all of the properties of a rigid rotation can be found in the unitary DFT.

The orthogonality of the DFT is now expressed as an [[orthonormal]]ity condition (which arises in many areas of mathematics as described in [[root of unity]]):
:<math>\sum_{m=0}^{N-1}U_{km}U_{mn}^* = \delta_{kn}</math>
If '''X''' is defined as the unitary DFT of the vector '''x''', then
:<math>X_k = \sum_{n=0}^{N-1} U_{kn} x_n</math>
and [[Parseval's theorem]] is expressed as
:<math>\sum_{n=0}^{N-1}x_n y_n^* = \sum_{k=0}^{N-1}X_k Y_k^*</math>
If we view the DFT as just a coordinate transformation which simply specifies the components of a vector in a new coordinate system, then the above is just the statement that the [[dot product]] of two vectors is preserved under a unitary DFT transformation. For the special case <math>\mathbf{x} = \mathbf{y}</math>, this implies that the length of a vector is preserved as well; this is just the [[Plancherel theorem]],
:<math>\sum_{n=0}^{N-1} |x_n|^2 = \sum_{k=0}^{N-1} |X_k|^2</math>
A consequence of the [[#Circular convolution theorem and cross-correlation theorem|circular convolution theorem]] is that the DFT matrix {{mvar|F}} diagonalizes any [[circulant matrix]].

=== Expressing the inverse DFT in terms of the DFT ===
A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.)

First, we can compute the inverse DFT by reversing all but one of the inputs (Duhamel ''et al.'', 1988):
:<math>\mathcal{F}^{-1}(\{x_n\}) = \frac{1}{N}\mathcal{F}(\{x_{N - n}\})</math>
(As usual, the subscripts are interpreted [[modular arithmetic|modulo]] ''N''; thus, for <math>n = 0</math>, we have <math>x_{N-0} = x_0</math>.)

Second, one can also conjugate the inputs and outputs:
:<math>\mathcal{F}^{-1}(\mathbf{x}) = \frac{1}{N}\mathcal{F}\left(\mathbf{x}^*\right)^*</math>

Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying [[pointer (computer programming)|pointer]]s). Define <math display="inline">\operatorname{swap}(x_n)</math> as <math>x_n</math> with its real and imaginary parts swapped—that is, if <math>x_n = a + b i</math> then <math display="inline">\operatorname{swap}(x_n)</math> is <math>b + a i</math>. Equivalently, <math display="inline">\operatorname{swap}(x_n)</math> equals <math>i x_n^*</math>. Then
:<math>\mathcal{F}^{-1}(\mathbf{x}) = \frac{1}{N}\operatorname{swap}(\mathcal{F}(\operatorname{swap}(\mathbf{x})))</math>
That is, the inverse transform is the same as the forward transform with the real and imaginary parts swapped for both input and output, up to a normalization (Duhamel ''et al.'', 1988).
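
The three tricks can be illustrated as follows; <code>numpy.fft.ifft</code>, which already includes the <math>1/N</math> factor, is used only as the reference result, and the test vector is arbitrary.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
N = 8
x = rng.normal(size=N) + 1j * rng.normal(size=N)
target = np.fft.ifft(x)          # reference inverse DFT (includes the 1/N factor)

# Trick 1: reverse all but the first input, then apply a forward DFT
reverse = lambda v: v[(-np.arange(N)) % N]
assert np.allclose(np.fft.fft(reverse(x)) / N, target)

# Trick 2: conjugate the input and the output
assert np.allclose(np.conj(np.fft.fft(np.conj(x))) / N, target)

# Trick 3: swap real and imaginary parts of input and output (swap(v) = i * conj(v))
swap = lambda v: v.imag + 1j * v.real
assert np.allclose(swap(np.fft.fft(swap(x))) / N, target)
</syntaxhighlight>
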
The conjugation trick can also be used to define a new transform, closely related to the DFT, that is [[Involution (mathematics)|involutory]]—that is, which is its own inverse. In particular, <math>T(\mathbf{x}) = \mathcal{F}\left(\mathbf{x}^*\right) / \sqrt{N}</math> is clearly its own inverse: <math>T(T(\mathbf{x})) = \mathbf{x}</math>. A closely related involutory transformation (by a factor of <math display=inline>\frac{1 + i}{\sqrt{2}}</math>) is <math>H(\mathbf{x}) = \mathcal{F}\left((1 + i) \mathbf{x}^*\right) / \sqrt{2N}</math>, since the <math>(1 + i)</math> factors in <math>H(H(\mathbf{x}))</math> cancel the 2. For real inputs <math>\mathbf{x}</math>, the real part of <math>H(\mathbf{x})</math> is none other than the [[discrete Hartley transform]], which is also involutory.

=== Eigenvalues and eigenvectors ===
The [[eigenvalue]]s of the DFT matrix are simple and well-known, whereas the [[eigenvector]]s are complicated, not unique, and are the subject of ongoing research. Explicit formulas are given with a significant amount of number theory.<ref>{{cite journal |last1=Morton |first1=Patrick |title=On the eigenvectors of Schur's matrix |journal=Journal of Number Theory |date=1980 |volume=12 |issue=1 |pages=122–127 |doi=10.1016/0022-314X(80)90083-9 |url=https://dx.doi.org/10.1016/0022-314X%2880%2990083-9 |hdl=2027.42/23371 |hdl-access=free}}</ref>

Consider the unitary form <math>\mathbf{U}</math> defined above for the DFT of length ''N'', where
:<math>\mathbf{U}_{m,n} = \frac 1{\sqrt{N}}\omega_N^{(m-1)(n-1)} = \frac 1{\sqrt{N}}e^{-\frac{i 2\pi}N (m-1)(n-1)}.</math>
This matrix satisfies the [[matrix polynomial]] equation:
:<math>\mathbf{U}^4 = \mathbf{I}.</math>
This can be seen from the inverse properties above: operating <math>\mathbf{U}</math> twice gives the original data in reverse order, so operating <math>\mathbf{U}</math> four times gives back the original data and is thus the [[identity matrix]]. This means that the eigenvalues <math>\lambda</math> satisfy the equation:
:<math>\lambda^4 = 1.</math>
Therefore, the eigenvalues of <math>\mathbf{U}</math> are the fourth [[roots of unity]]: <math>\lambda</math> is +1, −1, +''i'', or −''i''.

Since there are only four distinct eigenvalues for this <math>N\times N</math> matrix, they have some [[algebraic multiplicity|multiplicity]]. The multiplicity gives the number of [[linearly independent]] eigenvectors corresponding to each eigenvalue. (There are ''N'' independent eigenvectors; a unitary matrix is never [[defective matrix|defective]].)

The problem of their multiplicity was solved by McClellan and Parks (1972), although it was later shown to have been equivalent to a problem solved by [[Carl Friedrich Gauss|Gauss]] (Dickinson and Steiglitz, 1982). The multiplicity depends on the value of ''N'' [[modular arithmetic|modulo]] 4, and is given by the following table:

{| class="wikitable" style="margin:auto;"
|+ align="bottom" | Multiplicities of the eigenvalues λ of the unitary DFT matrix '''U''' as a function of the transform size ''N'' (in terms of an integer ''m'').
|-
! size ''N''
! λ = +1
! λ = −1
! λ = −''i''
! λ = +''i''
|-
| 4''m'' || ''m'' + 1 || ''m'' || ''m'' || ''m'' − 1
|-
| 4''m'' + 1 || ''m'' + 1 || ''m'' || ''m'' || ''m''
|-
| 4''m'' + 2 || ''m'' + 1 || ''m'' + 1 || ''m'' || ''m''
|-
| 4''m'' + 3 || ''m'' + 1 || ''m'' + 1 || ''m'' + 1 || ''m''
|}
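
The tabulated multiplicities can be checked numerically for any particular size; the sketch below counts eigenvalues of the unitary DFT matrix near the four roots of unity (the tolerance <code>1e-8</code> is an arbitrary numerical threshold).

<syntaxhighlight lang="python">
import numpy as np

def dft_eigenvalue_multiplicities(N):
    """Count eigenvalues of the unitary DFT matrix U near +1, -1, -i, +i."""
    n = np.arange(N)
    U = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
    eig = np.linalg.eigvals(U)
    return [int(np.sum(np.abs(eig - lam) < 1e-8)) for lam in (1, -1, -1j, 1j)]

# N = 8 is 4m with m = 2, so the table predicts multiplicities 3, 2, 2, 1
# for lambda = +1, -1, -i, +i respectively.
print(dft_eigenvalue_multiplicities(8))   # [3, 2, 2, 1]
</syntaxhighlight>
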
Otherwise stated, the [[characteristic polynomial]] of <math>\mathbf{U}</math> is:
:<math>\det (\lambda I - \mathbf{U})= (\lambda-1)^{\left\lfloor \tfrac {N+4}{4}\right\rfloor} (\lambda+1)^{\left\lfloor \tfrac {N+2}{4}\right\rfloor} (\lambda+i)^{\left\lfloor \tfrac {N+1}{4}\right\rfloor} (\lambda-i)^{\left\lfloor \tfrac {N-1}{4}\right\rfloor}.</math>

No simple analytical formula for general eigenvectors is known. Moreover, the eigenvectors are not unique because any linear combination of eigenvectors for the same eigenvalue is also an eigenvector for that eigenvalue. Various researchers have proposed different choices of eigenvectors, selected to satisfy useful properties like [[orthogonality]] and to have "simple" forms (e.g., McClellan and Parks, 1972; Dickinson and Steiglitz, 1982; Grünbaum, 1982; Atakishiyev and Wolf, 1997; Candan ''et al.'', 2000; Hanna ''et al.'', 2004; Gurevich and Hadani, 2008).

One method to construct DFT eigenvectors to an eigenvalue <math>\lambda</math> is based on the linear combination of operators:<ref>Bose, N. K. "Eigenvectors and eigenvalues of 1-D and nD DFT matrices." [[AEU — International Journal of Electronics and Communications]] 55.2 (2001): 131–133.</ref><ref name="candan">Candan, Ç. (2011). On the eigenstructure of DFT matrices [DSP education]. IEEE Signal Processing Magazine, 28(2), 105–108.</ref><ref name="pei">Pei, S. C., Ding, J. J., Hsue, W. L., & Chang, K. W. (2008). Generalized commuting matrices and their eigenvectors for DFTs, offset DFTs, and other periodic operations. IEEE Transactions on Signal Processing, 56(8), 3891–3904.</ref>
:<math>\mathcal{P}_\lambda=\frac{1}{4}\left( \mathbf{I}+\lambda^{-1}\mathbf{U}+\lambda^{-2}\mathbf{U}^2+\lambda^{-3} \mathbf{U}^3\right)</math>
For an arbitrary vector <math>\mathbf{v}</math>, the vector <math>\mathbf{u}(\lambda)=\mathcal{P}_{\lambda}\mathbf{v}</math> satisfies:
:<math>\textbf{U}\mathbf{u}(\lambda)=\lambda \mathbf{u}(\lambda)</math>
hence <math>\mathbf{u}(\lambda)</math> is indeed an eigenvector of the DFT matrix <math>\mathbf{U}</math>. The operators <math>\mathcal{P}_{\lambda}</math> project vectors onto subspaces which are orthogonal for each value of <math>\lambda</math>.<ref name="candan"/> That is, for two eigenvectors <math>\mathbf{u}(\lambda)=\mathcal{P}_{\lambda}\mathbf{v}</math> and <math>\mathbf{u}'(\lambda')=\mathcal{P}_{\lambda'}\mathbf{v}'</math> we have:
:<math>\mathbf{u}^\dagger(\lambda) \mathbf{u}'(\lambda')= \delta_{\lambda\lambda'}\mathbf{u}^\dagger(\lambda) \mathbf{v}'</math>
However, in general, the projection operator method does not produce orthogonal eigenvectors within one subspace.<ref name="pei"/> The operator <math>\mathcal{P}_{\lambda}</math> can be seen as a matrix whose columns are eigenvectors of <math>\mathbf{U}</math>, but they are not orthogonal. When a set of vectors <math>\{\mathbf{v}_n\}_{n=1,\dots,N_{\lambda}}</math> spanning the <math>N_{\lambda}</math>-dimensional space (where <math>N_{\lambda}</math> is the multiplicity of eigenvalue <math>\lambda</math>) is chosen to generate the set of eigenvectors <math>\{\mathbf{u}_n(\lambda)=\mathcal{P}_{\lambda}\mathbf{v}_n\}_{n=1,\dots,N_{\lambda}}</math> for eigenvalue <math>\lambda</math>, the mutual orthogonality of the <math>\mathbf{u}_n(\lambda)</math> is not guaranteed. However, an orthogonal set can be obtained by applying an orthogonalization algorithm to the set <math>\{\mathbf{u}_n(\lambda)\}_{n=1,\dots,N_{\lambda}}</math>, e.g. the [[Gram–Schmidt process]].<ref>Erseghe, T., & Cariolaro, G. (2003). An orthonormal class of exact and simple DFT eigenvectors with a high degree of symmetry. IEEE Transactions on Signal Processing, 51(10), 2527–2539.</ref>
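
A small sketch of the projection-operator construction for <math>N = 8</math> and <math>\lambda = -1</math> (multiplicity 2); the QR step stands in for a Gram–Schmidt orthogonalization, and the random generating vectors are arbitrary.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
N = 8
n = np.arange(N)
U = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

def projector(lam):
    # P_lam = (I + lam^-1 U + lam^-2 U^2 + lam^-3 U^3) / 4
    return (np.eye(N) + U / lam + (U @ U) / lam**2 + (U @ U @ U) / lam**3) / 4

lam = -1
P = projector(lam)

# Any projected vector is an eigenvector of U for eigenvalue lam
v = rng.normal(size=N) + 1j * rng.normal(size=N)
u = P @ v
assert np.allclose(U @ u, lam * u)

# Projected vectors are generally not orthogonal to each other, but a QR step
# (a Gram-Schmidt-like orthogonalization) yields an orthonormal basis of the
# eigenspace, whose dimension is 2 for lam = -1 when N = 8.
V = P @ (rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2)))
Q, _ = np.linalg.qr(V)
assert np.allclose(U @ Q, lam * Q)
assert np.allclose(Q.conj().T @ Q, np.eye(2))
</syntaxhighlight>
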
A straightforward approach to obtain DFT eigenvectors is to discretize an eigenfunction of the continuous [[Fourier transform]], of which the most famous is the [[Gaussian function]]. Since [[periodic summation]] of the function means discretizing its frequency spectrum and discretization means periodic summation of the spectrum, the discretized and periodically summed Gaussian function yields an eigenvector of the discrete transform:
* <math>F(m) = \sum_{k\in\mathbb{Z}} \exp\left(-\frac{\pi\cdot(m+N\cdot k)^2}{N}\right).</math>
The closed form expression for the series can be expressed by [[Jacobi theta function]]s as
* <math>F(m) = \frac1{\sqrt{N}}\vartheta_3\left(\frac{\pi m}N, \exp\left(-\frac{\pi}N \right)\right).</math>

Several other simple closed-form analytical eigenvectors have been found for special DFT periods ''N'' (Kong, 2008 and Casper-Yakimov, 2024):

For DFT period ''N'' = 2''L'' + 1 = 4''K'' + 1, where ''K'' is an integer, the following is an eigenvector of the DFT:
* <math>F(m) = \prod_{s=K+1}^L \left[\cos\left(\frac{2\pi}{N}m\right) - \cos\left(\frac{2\pi}{N}s\right)\right]</math>

For DFT period ''N'' = 2''L'' = 4''K'', where ''K'' is an integer, the following are eigenvectors of the DFT:
* <math>F(m) = \sin\left(\frac{2\pi}{N}m\right) \prod_{s=K+1}^{L-1}\left[\cos\left(\frac{2\pi}{N}m\right)- \cos\left(\frac{2\pi}{N}s\right)\right]</math>
* <math>F(m) = \cos\left(\frac{\pi}{N}m\right)\prod_{s=K+1}^{3K-1} \sin\left(\frac{\pi(s-m)}{N}\right)</math>

For DFT period ''N'' = 4''K'' − 1, where ''K'' is an integer, the following are eigenvectors of the DFT:
* <math>F(m) = \sin\left(\frac{2\pi}{N}m\right)\prod_{s=K+1}^{3K-2} \sin\left(\frac{\pi(s-m)}{N}\right)</math>
* <math>F(m) = \left(\cos\left(\frac{2\pi}{N}m\right)-\cos\left(\frac{2\pi}{N}K\right)\pm\sin\left(\frac{2\pi}{N}K\right)\right)\prod_{s=K+1}^{3K-2} \sin\left(\frac{\pi(s-m)}{N}\right)</math>

The choice of eigenvectors of the DFT matrix has become important in recent years in order to define a discrete analogue of the [[fractional Fourier transform]]—the DFT matrix can be taken to fractional powers by exponentiating the eigenvalues (e.g., Rubio and Santhanam, 2005). For the [[continuous Fourier transform]], the natural orthogonal eigenfunctions are the [[Hermite function]]s, so various discrete analogues of these have been employed as the eigenvectors of the DFT, such as the [[Kravchuk polynomials]] (Atakishiyev and Wolf, 1997). The "best" choice of eigenvectors to define a fractional discrete Fourier transform remains an open question, however.
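
As a numerical illustration of the discretized Gaussian discussed above, the periodically summed Gaussian can be checked to be an eigenvector of the unitary DFT with eigenvalue +1; the truncation range of the sum below is an arbitrary choice that is more than sufficient at this size.

<syntaxhighlight lang="python">
import numpy as np

N = 16
n = np.arange(N)
U = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# Periodically summed Gaussian; a few terms of the sum over k suffice at this N
F_gauss = sum(np.exp(-np.pi * (n + N * k) ** 2 / N) for k in range(-5, 6))

# It is an eigenvector of the unitary DFT with eigenvalue +1
assert np.allclose(U @ F_gauss, F_gauss)
</syntaxhighlight>
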
=== Uncertainty principles ===

==== Probabilistic uncertainty principle ====
If the random variable {{math|''X''<sub>''k''</sub>}} is constrained by
:<math>\sum_{n=0}^{N-1} |X_n|^2 = 1 ,</math>
then
:<math>P_n=|X_n|^2</math>
may be considered to represent a discrete [[probability mass function]] of {{mvar|n}}, with an associated probability mass function constructed from the transformed variable,
:<math>Q_m = N |x_m|^2 .</math>
For the case of continuous functions <math>P(x)</math> and <math>Q(k)</math>, the [[Heisenberg uncertainty principle]] states that
:<math>D_0(X)D_0(x)\ge\frac{1}{16\pi^2}</math>
where <math>D_0(X)</math> and <math>D_0(x)</math> are the variances of <math>|X|^2</math> and <math>|x|^2</math> respectively, with the equality attained in the case of a suitably normalized [[Gaussian distribution]]. Although the variances may be analogously defined for the DFT, an analogous uncertainty principle is not useful, because the uncertainty will not be shift-invariant. Still, a meaningful uncertainty principle has been introduced by Massar and Spindel.<ref name=Massar/>

However, the Hirschman [[entropic uncertainty]] will have a useful analog for the case of the DFT.<ref name=DeBrunner/> The Hirschman uncertainty principle is expressed in terms of the [[Entropy (information theory)|Shannon entropy]] of the two probability functions. In the discrete case, the Shannon entropies are defined as
:<math>H(X)=-\sum_{n=0}^{N-1} P_n\ln P_n</math>
and
:<math>H(x)=-\sum_{m=0}^{N-1} Q_m\ln Q_m ,</math>
and the [[entropic uncertainty]] principle becomes<ref name=DeBrunner/>
:<math>H(X)+H(x) \ge \ln(N) .</math>
The equality is obtained for <math>P_n</math> equal to translations and modulations of a suitably normalized [[Kronecker comb]] of period <math>A</math> where <math>A</math> is any exact integer divisor of <math>N</math>. The probability mass function <math>Q_m</math> will then be proportional to a suitably translated [[Kronecker comb]] of period <math>B=N/A</math>.<ref name=DeBrunner/>

==== Deterministic uncertainty principle ====
There is also a well-known deterministic uncertainty principle that uses signal sparsity (or the number of non-zero coefficients).<ref name=Donoho/> Let <math>\left\|x\right\|_0</math> and <math>\left\|X\right\|_0</math> be the number of non-zero elements of the time and frequency sequences <math>x_0,x_1,\ldots,x_{N-1}</math> and <math>X_0,X_1,\ldots,X_{N-1}</math>, respectively. Then,
:<math>N \leq \left\|x\right\|_0 \cdot \left\|X\right\|_0.</math>
As an immediate consequence of the [[Arithmetic–geometric mean|inequality of arithmetic and geometric means]], one also has <math>2\sqrt{N} \leq \left\|x\right\|_0 + \left\|X\right\|_0</math>. Both uncertainty principles were shown to be tight for specifically chosen "picket-fence" sequences (discrete impulse trains), and find practical use for signal recovery applications.<ref name=Donoho/>
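
The sparsity bound, and its tightness for a picket-fence sequence, can be illustrated as follows; the period <math>A = 4</math> dividing <math>N = 16</math> is an arbitrary example.

<syntaxhighlight lang="python">
import numpy as np

# Sparsity-based uncertainty: N <= ||x||_0 * ||X||_0, with equality for a "picket fence".
N, A = 16, 4                      # A must divide N
x = np.zeros(N)
x[::A] = 1.0                      # impulse train with period A: N/A = 4 nonzero samples
X = np.fft.fft(x)                 # its DFT is an impulse train with period N/A

x0 = np.count_nonzero(np.abs(x) > 1e-9)
X0 = np.count_nonzero(np.abs(X) > 1e-9)
print(x0, X0, x0 * X0)            # 4, 4, 16 = N: the bound is tight here
assert N <= x0 * X0
</syntaxhighlight>
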
=== DFT of real and purely imaginary signals ===
* If <math>x_0, \ldots, x_{N-1}</math> are [[real number]]s, as they often are in practical applications, then the DFT <math>X_0, \ldots, X_{N-1}</math> is [[Even and odd functions|even symmetric]]:
:<math>x_n \in \mathbb{R} \quad \forall n \in \{0,\ldots,N-1 \} \implies X_k = X_{-k \mod N}^* \quad \forall k \in \{0,\ldots,N-1 \}</math>, where <math>X^*\,</math> denotes [[Complex conjugate|complex conjugation]].
:It follows that, for even <math>N</math>, <math>X_0</math> and <math>X_{N/2}</math> are real-valued, and the remainder of the DFT is completely specified by just <math>N/2-1</math> complex numbers.
* If <math>x_0, \ldots, x_{N-1}</math> are purely imaginary numbers, then the DFT <math>X_0, \ldots, X_{N-1}</math> is [[Even and odd functions|odd symmetric]]:
:<math>x_n \in i \mathbb{R} \quad \forall n \in \{0,\ldots,N-1 \} \implies X_k = -X_{-k \mod N}^* \quad \forall k \in \{0,\ldots,N-1 \}</math>, where <math>X^*\,</math> denotes [[Complex conjugate|complex conjugation]].
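
This conjugate symmetry is what real-input FFT routines exploit; for example, NumPy's <code>numpy.fft.rfft</code> returns only the non-redundant half of the spectrum. A short sketch with an arbitrary real test vector:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
N = 8
x = rng.normal(size=N)                     # real input
X = np.fft.fft(x)

# Conjugate (even) symmetry: X_k = conj(X_{(-k) mod N})
k = np.arange(N)
assert np.allclose(X, np.conj(X[(-k) % N]))

# For even N, X_0 and X_{N/2} are real, so the spectrum is determined by
# N/2 + 1 values; np.fft.rfft returns exactly that non-redundant half.
assert abs(X[0].imag) < 1e-12 and abs(X[N // 2].imag) < 1e-12
assert np.allclose(np.fft.rfft(x), X[:N // 2 + 1])
</syntaxhighlight>
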