== Applications ==
{{see also|Spectral density#Applications}}
[[File:Commutative diagram illustrating problem solving via the Fourier transform.svg|class=skin-invert-image|thumb|400px|Some problems, such as certain differential equations, become easier to solve when the Fourier transform is applied. In that case the solution to the original problem is recovered using the inverse Fourier transform.]]
Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of [[derivative|differentiation]] in the time domain corresponds to multiplication by the frequency,<ref group="note">Up to an imaginary constant factor whose magnitude depends on what Fourier transform convention is used.</ref> so some [[differential equation]]s are easier to analyze in the frequency domain. Also, [[convolution]] in the time domain corresponds to ordinary multiplication in the frequency domain (see [[Convolution theorem]]). After performing the desired operations, the result can be transformed back to the time domain. [[Harmonic analysis]] is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics.

=== Analysis of differential equations ===
Perhaps the most important use of the Fourier transformation is to solve [[partial differential equation]]s. Many of the equations of the mathematical physics of the nineteenth century can be treated this way.
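The two correspondences described above (differentiation becoming multiplication by the frequency variable, and convolution becoming pointwise multiplication) can be checked numerically with the discrete Fourier transform. A minimal sketch using NumPy on a periodic grid, with an arbitrary smooth test function:

```python
import numpy as np

# A smooth function sampled on a periodic unit interval.
n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
f = np.exp(np.sin(2 * np.pi * x))
g = np.exp(np.cos(2 * np.pi * x))

# Convolution theorem: circular convolution in the "time" domain equals
# pointwise multiplication of the DFTs.
conv_direct = np.array([np.sum(f * np.roll(g[::-1], j + 1)) for j in range(n)])
conv_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
assert np.allclose(conv_direct, conv_fft)

# Differentiation corresponds to multiplication by i*2*pi*xi in frequency
# (with NumPy's sign convention for the DFT kernel).
xi = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies on [0, 1)
df_fft = np.fft.ifft(1j * 2 * np.pi * xi * np.fft.fft(f)).real
df_exact = 2 * np.pi * np.cos(2 * np.pi * x) * f   # chain rule by hand
assert np.allclose(df_fft, df_exact)
```

The factor {{math|''i''2π''ξ''}} here reflects the convention used by `np.fft.fft`; as the footnote above notes, the imaginary constant depends on the chosen convention.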
Fourier studied the heat equation, which in one dimension and in dimensionless units is <math display="block">\frac{\partial^2 y(x, t)}{\partial x^2} = \frac{\partial y(x, t)}{\partial t}.</math> The example we will give, a slightly more difficult one, is the wave equation in one dimension, <math display="block">\frac{\partial^2 y(x, t)}{\partial x^2} = \frac{\partial^2 y(x, t)}{\partial t^2}.</math> As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions" <math display="block">y(x, 0) = f(x),\qquad \frac{\partial y(x, 0)}{\partial t} = g(x).</math> Here, {{mvar|f}} and {{mvar|g}} are given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions {{mvar|y}} which satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution. It is easier to find the Fourier transform {{mvar|ŷ}} of the solution than to find the solution directly. This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After {{mvar|ŷ}} is determined, we can apply the inverse Fourier transformation to find {{mvar|y}}. Fourier's method is as follows. First, note that any function of the forms <math display="block"> \cos\bigl(2\pi\xi(x\pm t)\bigr) \text{ or } \sin\bigl(2\pi\xi(x \pm t)\bigr)</math> satisfies the wave equation. These are called the elementary solutions. 
Second, note that therefore any integral <math display="block">\begin{align} y(x, t) = \int_{0}^{\infty} d\xi \Bigl[ &a_+(\xi)\cos\bigl(2\pi\xi(x + t)\bigr) + a_-(\xi)\cos\bigl(2\pi\xi(x - t)\bigr) +{} \\ &b_+(\xi)\sin\bigl(2\pi\xi(x + t)\bigr) + b_-(\xi)\sin\left(2\pi\xi(x - t)\right) \Bigr] \end{align}</math> satisfies the wave equation for arbitrary {{math|''a''<sub>+</sub>, ''a''<sub>−</sub>, ''b''<sub>+</sub>, ''b''<sub>−</sub>}}. This integral may be interpreted as a continuous linear combination of solutions for the linear equation. Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of {{math|''a''<sub>±</sub>}} and {{math|''b''<sub>±</sub>}} in the variable {{mvar|x}}. The third step is to examine how to find the specific unknown coefficient functions {{math|''a''<sub>±</sub>}} and {{math|''b''<sub>±</sub>}} that will lead to {{mvar|y}} satisfying the boundary conditions. We are interested in the values of these solutions at {{math|1=''t'' = 0}}. So we will set {{math|1=''t'' = 0}}. 
Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable {{mvar|x}}) of both sides and obtain <math display="block"> 2\int_{-\infty}^\infty y(x,0) \cos(2\pi\xi x) \, dx = a_+ + a_-</math> and <math display="block">2\int_{-\infty}^\infty y(x,0) \sin(2\pi\xi x) \, dx = b_+ + b_-.</math> Similarly, taking the derivative of {{mvar|y}} with respect to {{mvar|t}} and then applying the Fourier sine and cosine transformations yields <math display="block">2\int_{-\infty}^\infty \frac{\partial y(x,0)}{\partial t} \sin (2\pi\xi x) \, dx = (2\pi\xi)\left(-a_+ + a_-\right)</math> and <math display="block">2\int_{-\infty}^\infty \frac{\partial y(x,0)}{\partial t} \cos (2\pi\xi x) \, dx = (2\pi\xi)\left(b_+ - b_-\right).</math> These are four linear equations for the four unknowns {{math|''a''<sub>±</sub>}} and {{math|''b''<sub>±</sub>}}, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found. In summary, we chose a set of elementary solutions, parametrized by {{mvar|ξ}}, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter {{mvar|ξ}}. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions {{mvar|f}} and {{mvar|g}}. But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions {{math|''a''<sub>±</sub>}} and {{math|''b''<sub>±</sub>}} in terms of the given boundary conditions {{mvar|f}} and {{mvar|g}}. 
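Fourier's procedure can be sketched numerically. The sketch below works on a periodic interval with the complex FFT instead of sine and cosine transforms on the whole line (a simplification, not the text's setting), and takes {{math|1=''g'' = 0}} so the result can be checked against d'Alembert's formula {{math|1=''y''(''x'', ''t'') = (''f''(''x'' + ''t'') + ''f''(''x'' − ''t''))/2}}:

```python
import numpy as np

# Transforming the wave equation in x gives, for each frequency xi,
#   y_hat(xi, t) = f_hat(xi) cos(2 pi xi t) + g_hat(xi) sin(2 pi xi t)/(2 pi xi).
n, L = 512, 1.0
x = np.linspace(0, L, n, endpoint=False)
xi = np.fft.fftfreq(n, d=L / n)
w = 2 * np.pi * xi

f = np.exp(-100 * (x - 0.5) ** 2)          # initial displacement: a bump
g = np.zeros(n)                            # initial velocity

def evolve(t):
    f_hat, g_hat = np.fft.fft(f), np.fft.fft(g)
    # sin(w t)/w, with its limit t at the zero frequency
    s = np.where(w == 0, t, np.sin(w * t) / np.where(w == 0, 1, w))
    return np.fft.ifft(f_hat * np.cos(w * t) + g_hat * s).real

# With g = 0 this reproduces d'Alembert's solution: half the bump
# travelling each way (periodically wrapped).
t = 0.2
dalembert = 0.5 * (np.exp(-100 * ((x - t) % L - 0.5) ** 2)
                   + np.exp(-100 * ((x + t) % L - 0.5) ** 2))
assert np.allclose(evolve(t), dalembert)
```

All grid sizes and the Gaussian initial bump are illustrative choices; the per-frequency formula is exactly the continuous linear combination of elementary solutions described above, rewritten in complex exponentials.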
From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both {{mvar|x}} and {{mvar|t}}, rather than transforming only in the spatial variable, as Fourier did. Note that {{mvar|ŷ}} must be considered in the sense of a distribution since {{math|''y''(''x'', ''t'')}} is not going to be {{math|''L''<sup>1</sup>}}: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in {{mvar|x}} to multiplication by {{math|''i''2π''ξ''}} and differentiation with respect to {{mvar|t}} to multiplication by {{math|''i''2π''f''}} where {{mvar|f}} is the frequency. Then the wave equation becomes an algebraic equation in {{mvar|ŷ}}: <math display="block">\xi^2 \hat y (\xi, f) = f^2 \hat y (\xi, f).</math> This is equivalent to requiring {{math|1=''ŷ''(''ξ'', ''f'') = 0}} unless {{math|1=''ξ'' = ±''f''}}. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously {{math|1=''ŷ'' = ''δ''(''ξ'' ± ''f'')}} will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic {{math|1=''ξ''{{isup|2}} − ''f''{{isup|2}} = 0}}. 
We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line {{math|1=''ξ'' = ''f''}} plus distributions on the line {{math|''ξ'' {{=}} −''f''}} as follows: if {{mvar|φ}} is any test function, <math display="block">\iint \hat y \phi(\xi,f) \, d\xi \, df = \int s_+ \phi(\xi,\xi) \, d\xi + \int s_- \phi(\xi,-\xi) \, d\xi,</math> where {{math|''s''<sub>+</sub>}} and {{math|''s''<sub>−</sub>}} are distributions of one variable. Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put {{math|1=''φ''(''ξ'', ''f'') = ''e''<sup>''i''2π(''xξ''+''tf'')</sup>}}, which is clearly of polynomial growth): <math display="block"> y(x,0) = \int\bigl\{s_+(\xi) + s_-(\xi)\bigr\} e^{i 2\pi \xi x} \, d\xi </math> and <math display="block"> \frac{\partial y(x,0)}{\partial t} = \int\bigl\{s_+(\xi) - s_-(\xi)\bigr\} i 2\pi \xi e^{i 2\pi\xi x} \, d\xi.</math> Now, as before, applying the one-variable Fourier transformation in the variable {{mvar|x}} to these functions of {{mvar|x}} yields two equations in the two unknown distributions {{math|''s''<sub>±</sub>}} (which can be taken to be ordinary functions if the boundary conditions are {{math|''L''<sup>1</sup>}} or {{math|''L''<sup>2</sup>}}). From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used. 
The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well.

=== Fourier-transform spectroscopy ===
{{Main|Fourier-transform spectroscopy}}
The Fourier transform is also used in [[nuclear magnetic resonance]] (NMR) and in other kinds of [[spectroscopy]], e.g. infrared ([[Fourier-transform infrared spectroscopy|FTIR]]). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in [[magnetic resonance imaging]] (MRI) and [[mass spectrometry]].

=== Quantum mechanics ===
The Fourier transform is useful in [[quantum mechanics]] in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of [[complementary variables]], connected by the [[Heisenberg uncertainty principle]]. For example, in one dimension, the spatial variable {{mvar|q}} of, say, a particle, can only be measured by the quantum mechanical "[[position operator]]" at the cost of losing information about the momentum {{mvar|p}} of the particle. Therefore, the physical state of the particle can either be described by a function, called "the wave function", of {{mvar|q}} or by a function of {{mvar|p}} but not by a function of both variables. The variable {{mvar|p}} is called the conjugate variable to {{mvar|q}}. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both {{mvar|p}} and {{mvar|q}} simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a {{mvar|p}}-axis and a {{mvar|q}}-axis called the [[phase space]]. 
In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the {{mvar|q}}-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the {{mvar|p}}-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wave function are related by a Fourier transform, such that <math display="block">\phi(p) = \int dq\, \psi (q) e^{-i pq/h} ,</math> or, equivalently, <math display="block">\psi(q) = \int dp \, \phi (p) e^{i pq/h}.</math> Physically realisable states are {{math|''L''<sup>2</sup>}}, and so by the [[Plancherel theorem]], their Fourier transforms are also {{math|''L''<sup>2</sup>}}. (Note that since {{mvar|q}} is in units of distance and {{mvar|p}} is in units of momentum, the presence of the Planck constant in the exponent makes the exponent [[Nondimensionalization|dimensionless]], as it should be.) Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason for the Heisenberg [[#Uncertainty principle|uncertainty principle]]. The other use of the Fourier transform in both quantum mechanics and [[quantum field theory]] is to solve the applicable wave equation. 
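The position and momentum representations above can be illustrated numerically. The sketch below sets {{math|1=''h'' = 2π}} (so the kernel is {{math|''e''<sup>−''ipq''</sup>}}, a convenience not used in the text), checks Plancherel's theorem on a normalised Gaussian, and confirms that a Gaussian saturates the uncertainty bound {{math|1=Δ''q''Δ''p'' = 1/2}} in these units:

```python
import numpy as np

# Position-space wave function: a normalised Gaussian on a grid.
# Units: h = 2*pi, so the transform kernel is e^{-i p q}.
n, L = 2048, 40.0
q = np.linspace(-L / 2, L / 2, n, endpoint=False)
dq = L / n
sigma = 0.7                                      # illustrative width
psi = np.exp(-q ** 2 / (2 * sigma ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dq)    # L2-normalise

# Momentum-space wave function via the FFT; the grid offset only
# contributes a phase factor, so |phi| is correct as computed.
p = 2 * np.pi * np.fft.fftfreq(n, d=dq)
dp = 2 * np.pi / L
phi = np.fft.fft(psi) * dq / np.sqrt(2 * np.pi)

# Plancherel: total probability is the same in both representations.
assert np.isclose(np.sum(np.abs(phi) ** 2) * dp, 1.0)

# A Gaussian saturates the Heisenberg bound: Delta q * Delta p = 1/2.
dq_std = np.sqrt(np.sum(q ** 2 * np.abs(psi) ** 2) * dq)
dp_std = np.sqrt(np.sum(p ** 2 * np.abs(phi) ** 2) * dp)
assert np.isclose(dq_std * dp_std, 0.5, rtol=1e-3)
```

The narrower the Gaussian is made in {{mvar|q}}, the wider its transform becomes in {{mvar|p}}, which is the uncertainty principle in miniature.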
In non-relativistic quantum mechanics, the [[Schrödinger equation]] for a time-varying wave function in one dimension, not subject to external forces, is <math display="block">-\frac{\partial^2}{\partial x^2} \psi(x,t) = i \frac h{2\pi} \frac{\partial}{\partial t} \psi(x,t).</math> This is the same as the heat equation except for the presence of the imaginary unit {{mvar|i}}. Fourier methods can be used to solve this equation. In the presence of a potential, given by the potential energy function {{math|''V''(''x'')}}, the equation becomes <math display="block">-\frac{\partial^2}{\partial x^2} \psi(x,t) + V(x)\psi(x,t) = i \frac h{2\pi} \frac{\partial}{\partial t} \psi(x,t).</math> The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of {{mvar|ψ}} given its values for {{math|''t'' {{=}} 0}}. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function are not of much practical interest: it is the stationary states that are most important. In relativistic quantum mechanics, the Schrödinger equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units, <math display="block">\left (\frac{\partial^2}{\partial x^2} - 1 \right) \psi(x,t) = \frac{\partial^2}{\partial t^2} \psi(x,t).</math> This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). 
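Because the transform turns {{math|∂<sub>''x''</sub>}} into multiplication by {{math|''ik''}}, each Fourier mode of the standard dimensionless Klein–Gordon equation {{math|1=''ψ''<sub>''tt''</sub> = ''ψ''<sub>''xx''</sub> − ''ψ''}} evolves as an independent harmonic oscillator with angular frequency {{math|1=''ω''(''k'') = {{sqrt|''k''{{isup|2}} + 1}}}}. A sketch (NumPy, periodic grid, zero initial velocity; all parameters are illustrative):

```python
import numpy as np

# Klein-Gordon in dimensionless units, psi_tt = psi_xx - psi:
# after transforming in x, each mode exp(i k x) is an independent
# harmonic oscillator with angular frequency omega(k) = sqrt(k^2 + 1).
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
omega = np.sqrt(k ** 2 + 1)

psi0 = np.exp(-x ** 2)                 # initial data, zero initial velocity

def evolve(t):
    # Each mode just oscillates: psi_hat(k, t) = psi_hat(k, 0) cos(omega t).
    return np.fft.ifft(np.fft.fft(psi0) * np.cos(omega * t))

# Check the PDE at t = 0.7: second time derivative by finite differences,
# second space derivative spectrally.
t, h = 0.7, 1e-3
psi_tt = (evolve(t + h) - 2 * evolve(t) + evolve(t - h)) / h ** 2
psi_xx = np.fft.ifft(-k ** 2 * np.fft.fft(evolve(t)))
assert np.allclose(psi_tt, psi_xx - evolve(t), atol=1e-4)
```

Treating each mode as its own oscillator is precisely the structure exploited below in second quantization.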
This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions. Finally, the [[Quantum harmonic oscillator#Ladder operator method|number operator]] of the [[quantum harmonic oscillator]] can be interpreted, for example via the [[Mehler kernel#Physics version|Mehler kernel]], as the [[Symmetry in quantum mechanics|generator]] of the [[#Eigenfunctions|Fourier transform]] <math>\mathcal{F}</math>.<ref name="auto"/>

=== Signal processing ===
The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found advisable in practice to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function. The autocorrelation function {{mvar|R}} of a function {{mvar|f}} is defined by <math display="block">R_f (\tau) = \lim_{T\rightarrow \infty} \frac{1}{2T} \int_{-T}^T f(t) f(t+\tau) \, dt. </math> This function is a function of the time-lag {{mvar|τ}} elapsing between the values of {{mvar|f}} to be correlated. For most functions {{mvar|f}} that occur in practice, {{mvar|R}} is a bounded even function of the time-lag {{mvar|τ}} and for typical noisy signals it turns out to be uniformly continuous with a maximum at {{math|''τ'' {{=}} 0}}. 
The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of {{mvar|f}} separated by a time lag. This is a way of searching for the correlation of {{mvar|f}} with its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, if {{math|''f''(''t'')}} represents the temperature at time {{mvar|t}}, one expects a strong correlation with the temperature at a time lag of 24 hours. The autocorrelation function {{mvar|R}} possesses a Fourier transform, <math display="block"> P_f(\xi) = \int_{-\infty}^\infty R_f (\tau) e^{-i 2\pi \xi\tau} \, d\tau. </math> This Fourier transform is called the [[Spectral density#Power spectral density|power spectral density]] function of {{mvar|f}}. (Unless all periodic components are first filtered out from {{mvar|f}}, this integral will diverge, but it is easy to filter out such periodicities.) The power spectrum, as indicated by this density function {{mvar|P}}, measures the amount of variance contributed to the data by the frequency {{mvar|ξ}}. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series ([[ANOVA]]). Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data. The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out. 
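The relationship between the autocorrelation function and the power spectral density has a finite-sample analogue that can be checked directly: the discrete Fourier transform of the circular autocorrelation estimate equals the periodogram {{math|{{abs|''F''}}{{isup|2}}/''n''}} exactly. A sketch (the sinusoid-in-noise signal is an arbitrary illustration, not data from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)                       # unit sampling interval
# An arbitrary "stationary-looking" signal: a sinusoid in white noise.
f = np.sin(2 * np.pi * 0.05 * t) + rng.normal(0.0, 1.0, n)

# Circular autocorrelation estimate, the finite-sample analogue of R_f.
R = np.array([np.mean(f * np.roll(f, tau)) for tau in range(n)])

# Wiener-Khinchin in discrete form: the DFT of the autocorrelation is
# the power spectral density, here the periodogram |F|^2 / n.
P_from_R = np.fft.fft(R).real
periodogram = np.abs(np.fft.fft(f)) ** 2 / n
assert np.allclose(P_from_R, periodogram)
```

The peak of the resulting spectrum sits at the sinusoid's frequency, illustrating how the power spectrum identifies the "important" frequencies in a noisy record.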
Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool.