== Definition ==

=== Energy spectral density{{Anchor|Energy}} ===
{{distinguish-redirect|Energy spectral density|Energy spectrum}}

In [[signal processing]], the [[Energy (signal processing)|energy]] of a signal <math>x(t)</math> is given by
<math display="block"> E \triangleq \int_{-\infty}^\infty \left|x(t)\right|^2\ dt.</math>
Assuming the total energy is finite (i.e. <math>x(t)</math> is a [[square-integrable function]]) allows applying [[Parseval's theorem]] (or [[Plancherel's theorem]]).{{sfn | Oppenheim | Verghese | 2016 | p=60}} That is,
<math display="block">\int_{-\infty}^\infty |x(t)|^2\, dt = \int_{-\infty}^\infty \left|\hat{x}(f)\right|^2\, df,</math>
where
<math display="block">\hat{x}(f) = \int_{-\infty}^\infty e^{-i 2\pi ft}x(t) \ dt</math>
is the [[Fourier transform]] of <math>x(t)</math> at [[frequency]] <math>f</math> (in [[Hz]]).{{sfn | Stein | 2000 | pp=108,115}} The theorem also holds in the discrete-time case. Since the integral on the left-hand side is the energy of the signal, the value of <math>\left| \hat{x}(f) \right|^2 df</math> can be interpreted as a [[density function]] multiplied by an infinitesimally small frequency interval, describing the energy contained in the signal in the frequency interval <math>(f, f + df)</math>. Therefore, the '''energy spectral density''' of <math>x(t)</math> is defined as:{{sfn | Oppenheim | Verghese | 2016 | p=14}}

{{Equation box 1
|indent = :
|title =
|equation = {{NumBlk||<math> \bar{S}_{xx}(f) \triangleq \left| \hat{x}(f) \right|^2 </math>|{{EquationRef|Eq.1}}}}
|cellpadding = 6
|border
|border colour = #0073CF
|background colour = #F5FFFA
}}

The function <math>\bar{S}_{xx}(f)</math> and the [[autocorrelation]] of <math>x(t)</math> form a Fourier transform pair, a result also known as the [[Wiener–Khinchin theorem]] (see also [[Periodogram#Definition|Periodogram]]).

As a physical example of how one might measure the energy spectral density of a signal, suppose <math>V(t)</math> represents the [[electric potential|potential]] (in [[volt]]s) of an electrical pulse propagating along a [[transmission line]] of [[Electrical impedance|impedance]] <math>Z</math>, and suppose the line is terminated with a [[impedance matching|matched]] resistor (so that all of the pulse energy is delivered to the resistor and none is reflected back). By [[Ohm's law]], the power delivered to the resistor at time <math>t</math> is equal to <math>V(t)^2/Z</math>, so the total energy is found by integrating <math>V(t)^2/Z</math> with respect to time over the duration of the pulse. To find the value of the energy spectral density <math>\bar{S}_{xx}(f)</math> at frequency <math>f</math>, one could insert between the transmission line and the resistor a [[bandpass filter]] which passes only a narrow range of frequencies (<math>\Delta f</math>, say) near the frequency of interest and then measure the total energy <math>E(f)</math> dissipated across the resistor. The value of the energy spectral density at <math>f</math> is then estimated to be <math>E(f)/\Delta f</math>. In this example, since the power <math>V(t)^2/Z</math> has units of V<sup>2</sup> Ω<sup>−1</sup>, the energy <math>E(f)</math> has units of V<sup>2</sup> s Ω<sup>−1</sup> = [[Joule|J]], and hence the estimate <math>E(f)/\Delta f</math> of the energy spectral density has units of J Hz<sup>−1</sup>, as required. In many situations, the step of dividing by <math>Z</math> is omitted, so that the energy spectral density instead has units of V<sup>2</sup> Hz<sup>−1</sup>.
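As a numerical counterpart to the example above, the following sketch is illustrative only: the sampling interval, pulse shape, and units are assumed, and the Fourier transform is approximated by <math>\Delta t</math> times a discrete Fourier transform. It estimates the energy spectral density of a sampled pulse and checks Parseval's theorem:

<syntaxhighlight lang="python">
import numpy as np

# Minimal numerical sketch (all parameters and the pulse shape are assumed
# for illustration): estimate the energy spectral density of a sampled,
# finite-energy pulse, approximating x_hat(f) by dt * FFT, and verify
# Parseval's theorem.
dt = 1e-3                                      # sampling interval, s
t = np.arange(-1.0, 1.0, dt)                   # time grid, s
x = np.exp(-t**2 / (2 * 0.05**2))              # Gaussian test pulse (say, volts)

x_hat = dt * np.fft.fft(x)                     # approximate Fourier transform, V s
esd = np.abs(x_hat)**2                         # energy spectral density, V^2 s/Hz
f = np.fft.fftfreq(t.size, d=dt)               # matching frequency grid for esd, Hz

E_time = np.sum(np.abs(x)**2) * dt             # energy from the time domain
E_freq = np.sum(esd) / (t.size * dt)           # energy from the ESD (df = 1/(N dt))
print(E_time, E_freq)                          # agree, per Parseval's theorem
</syntaxhighlight>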
This definition generalizes in a straightforward manner to a discrete signal with a [[countably infinite]] number of values <math>x_n</math>, such as a signal sampled at discrete times <math>t_n = t_0 + (n\,\Delta t)</math>:
<math display="block">\bar{S}_{xx}(f) = \lim_{N\to \infty} (\Delta t)^2 \underbrace{\left|\sum_{n=-N}^N x_n e^{-i 2\pi f n \, \Delta t}\right|^2}_{\left|\hat x_d(f)\right|^2},</math>
where <math>\hat x_d(f)</math> is the [[discrete-time Fourier transform]] of <math>x_n.</math> The sampling interval <math>\Delta t</math> is needed to keep the correct physical units and to ensure that we recover the continuous case in the limit <math>\Delta t\to 0.</math> In the mathematical sciences, however, the interval is often set to 1, which simplifies the results at the expense of generality. (See also [[Normalized frequency (unit)|normalized frequency]].)

=== Power spectral density ===
{{distinguish|spectral power distribution}}
[[File:PowerSpectrumExt.svg|thumb|right|300px|The power spectrum of the measured [[cosmic microwave background radiation]] temperature anisotropy in terms of the angular scale. The solid line is a theoretical model, for comparison.]]

The above definition of energy spectral density is suitable for transients (pulse-like signals) whose energy is concentrated around one time window; then the Fourier transforms of the signals generally exist. For continuous signals over all time, one must rather define the ''power spectral density'' (PSD), which exists for [[stationary process]]es; this describes how the [[power (physics)|power]] of a signal or time series is distributed over frequency, as in the simple example given previously. Here, power can be the actual physical power or, more often and for convenience with abstract signals, simply the squared value of the signal. For example, statisticians study the [[variance]] of a function over time <math>x(t)</math> (or over another independent variable), and, by analogy with electrical signals (among other physical processes), it is customary to refer to it as the ''power spectrum'' even when there is no physical power involved. If one were to create a physical [[voltage]] source which followed <math>x(t)</math> and applied it to the terminals of a one [[ohm]] [[resistor]], then indeed the instantaneous power dissipated in that resistor would be given by <math>x^2(t)</math> [[watt]]s.

The average power <math>P</math> of a signal <math>x(t)</math> over all time is therefore given by the following time average, where the period <math>T</math> is centered about some arbitrary time <math>t = t_0</math>:
<math display="block"> P = \lim_{T\to \infty} \frac 1 {T} \int_{t_{0}-T/2}^{t_{0}+T/2} \left|x(t)\right|^2\,dt.</math>
When it is more convenient to deal with time limits in the signal itself rather than in the bounds of the integral, the average power can also be written as
<math display="block"> P = \lim_{T\to \infty} \frac 1 {T} \int_{-\infty}^{\infty} \left|x_{T}(t)\right|^2\,dt,</math>
where <math>x_{T}(t) = x(t)w_{T}(t)</math> and <math>w_{T}(t)</math> is unity within the arbitrary period and zero elsewhere. When <math>P</math> is non-zero, the integral must grow without bound at least as fast as <math>T</math> does. That is why the energy of the signal, which is that diverging integral, cannot be used here.
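As a quick illustration (with an assumed sampling rate and test signal), this time average can be computed directly from samples, and it matches the frequency-domain expression developed in the following paragraphs:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch (sampling rate and test signal assumed): the average
# power over a finite window, computed in the time domain, equals the
# corresponding frequency-domain integral derived below.
fs = 1000.0
dt = 1.0 / fs
t = np.arange(0.0, 4.0, dt)                    # observation window, T = 4 s
T = t.size * dt
x = 3.0 * np.cos(2 * np.pi * 100 * t)          # test signal; power = 3**2 / 2

P_time = np.sum(np.abs(x)**2) * dt / T         # (1/T) * integral of |x_T(t)|^2 dt

x_hat = dt * np.fft.fft(x)                     # approximate Fourier transform
df = 1.0 / T                                   # frequency resolution
P_freq = np.sum(np.abs(x_hat)**2) * df / T     # (1/T) * integral of |x_hat_T(f)|^2 df

print(P_time, P_freq)                          # both ~ 4.5 W (into a 1-ohm load)
</syntaxhighlight>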
In analyzing the frequency content of the signal <math>x(t)</math>, one might like to compute the ordinary Fourier transform <math>\hat{x}(f)</math>; however, for many signals of interest the ordinary Fourier transform does not formally exist.<ref group=nb>Some authors, e.g., {{harv| Risken | Frank | 1996 | p=30}}, still use the non-normalized Fourier transform in a formal way to formulate a definition of the power spectral density
<math display="block"> \langle \hat x(\omega) \hat x^\ast(\omega') \rangle = 2\pi f(\omega) \delta(\omega - \omega'),</math>
where <math> \delta(\omega-\omega')</math> is the [[Dirac delta function]]. Such formal statements may sometimes be useful to guide the intuition, but should always be used with utmost care.</ref> However, under suitable conditions, certain generalizations of the Fourier transform (e.g. the [[Fourier_transform#Fourier–Stieltjes_transform_on_measurable_spaces|Fourier–Stieltjes transform]]) still adhere to [[Parseval's theorem]]. As such,
<math display="block"> P = \lim_{T\to \infty} \frac 1 {T} \int_{-\infty}^{\infty} |\hat{x}_{T}(f)|^2\,df,</math>
where the integrand defines the '''power spectral density''':{{sfn | Oppenheim | Verghese | 2016 | pp=422-423}}{{sfn | Miller | Childers | 2012 | pp=429-431}}

{{Equation box 1
|indent = :
|title =
|equation = {{NumBlk||<math> S_{xx}(f) = \lim_{T\to \infty} \frac 1 {T} |\hat{x}_{T}(f)|^2\,</math>|{{EquationRef|Eq.2}}}}
|cellpadding = 6
|border
|border colour = #0073CF
|background colour = #F5FFFA
}}

The [[convolution theorem]] then allows regarding <math>|\hat{x}_{T}(f)|^2</math> as the [[Fourier transform]] of the time [[convolution]] of <math>x_{T}^*(-t)</math> and <math>x_{T}(t)</math>, where * represents the complex conjugate. In order to deduce Eq.2, we first find an expression for <math>[ \hat{x}_{T}(f) ] ^*</math>; specifically, we demonstrate that <math>[ \hat{x}_{T}(f) ] ^* = \mathcal{F}\left\{ x_{T}^* ( - t ) \right\}</math>. Starting from
<math display="block"> \mathcal{F}\left\{ x_{T}^* ( - t ) \right\} = \int ^\infty _{ - \infty} x_{T}^* ( - t ) e^{-i 2\pi f t } dt,</math>
substitute <math> z = -t </math>, so that <math> z \rightarrow - \infty </math> as <math> t \rightarrow \infty </math> and vice versa. Then
<math display="block"> \begin{align} \int ^\infty _{ - \infty} x_{T}^* ( - t ) e^{-i 2\pi f t } dt &= \int ^{ - \infty} _\infty x_{T}^* ( z ) e^{i 2\pi f z } \left( -dz \right) \\ &= \int ^\infty _{ - \infty} x_{T}^* ( z ) e^{i 2\pi f z } dz \\ &= \int ^\infty _{ - \infty} x_{T}^* ( t ) e^{i 2\pi f t } dt, \end{align}</math>
where, in the last line, we have used the fact that <math>z</math> and <math>t</math> are dummy variables. So we have
<math display="block"> \begin{align} \mathcal{F}\left\{ x_{T}^* ( - t ) \right\} &= \int ^\infty _{ - \infty} x_{T}^* ( - t ) e^{-i 2\pi f t } dt \\ &= \int ^\infty _{ - \infty} x_{T}^* ( t ) e^{ i 2\pi f t } dt \\ &= \int ^\infty _{ - \infty} x_{T}^* ( t ) [ e^{ - i 2\pi f t } ]^* dt \\ &= \left[\int ^\infty _{ - \infty} x_{T} ( t ) e^{ - i 2\pi f t } dt \right]^* \\ &= \left[\mathcal{F} \left\{ x_{T} ( t )\right\}\right] ^* \\ &= \left[\hat{x}_T(f) \right] ^*, \end{align}</math>
as claimed.
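This identity can also be checked numerically in its discrete form, where time reversal is taken modulo the record length; the sketch below uses an arbitrary complex test signal:

<syntaxhighlight lang="python">
import numpy as np

# Numerical check (illustrative; the test signal is arbitrary) of the
# identity above in its discrete form: the DFT of the conjugated,
# time-reversed sequence x*[(-n) mod N] equals the conjugate of the DFT of x.
rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

x_conj_rev = np.conj(np.roll(x[::-1], 1))      # x*[(-n) mod N]
lhs = np.fft.fft(x_conj_rev)                   # discrete analogue of F{x*(-t)}
rhs = np.conj(np.fft.fft(x))                   # [x_hat(f)]*
print(np.allclose(lhs, rhs))                   # True
</syntaxhighlight>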
Now, using this identity and making the substitution <math> u(t) = x_T^{*}(-t)</math>, we can demonstrate Eq.2:
<math display="block">\begin{align} \left|\hat{x}_{T}(f)\right|^2 &= [ \hat{x}_{T}(f) ] ^* \cdot \hat{x}_{T}(f) \\ & = \mathcal{F}\left\{ x_{T}^* ( - t ) \right\} \cdot \mathcal{F}\left\{ x_{T} ( t ) \right\} \\ & = \mathcal{F}\left\{ u(t) \right\} \cdot \mathcal{F}\left\{ x_{T} ( t ) \right\} \\ &= \mathcal{F}\left\{ u(t) \mathbin{\mathbf{*}} x_{T}(t) \right\} \\ &= \int_{-\infty}^\infty \left[ \int_{-\infty}^\infty u (\tau - t) x_T ( t ) dt \right] e^{-i 2\pi f\tau} d\tau \\ &= \int_{-\infty}^\infty \left[\int_{-\infty}^\infty x_{T}^*(t - \tau)x_{T}(t) dt \right]e^{-i 2\pi f\tau} \ d\tau, \end{align}</math>
where the convolution theorem has been used when passing from the third to the fourth line. Now, if we divide the time convolution above by the period <math>T</math> and take the limit as <math>T \rightarrow \infty</math>, it becomes the [[autocorrelation]] function of the non-windowed signal <math>x(t)</math>, which is denoted as <math>R_{xx}(\tau)</math>, provided that <math>x(t)</math> is [[ergodic]], which is true in most, but not all, practical cases:<ref group=nb>The [[Wiener–Khinchin theorem]] makes sense of this formula for any [[wide-sense stationary process]] under weaker hypotheses: <math>R_{xx}</math> does not need to be absolutely integrable, it only needs to exist. But the integral can no longer be interpreted as usual. The formula also makes sense if interpreted as involving [[Distribution (mathematics)|distributions]] (in the sense of [[Laurent Schwartz]], not in the sense of a statistical [[cumulative distribution function]]) instead of functions. If <math>R_{xx}</math> is continuous, [[Bochner's theorem]] can be used to prove that its Fourier transform exists as a positive [[Measure (mathematics)|measure]], whose distribution function is <math>F</math> (but not necessarily as a function and not necessarily possessing a probability density).</ref>
<math display="block"> \lim_{T\to \infty} \frac{1}{T} \left|\hat{x}_{T}(f)\right|^2 = \int_{-\infty}^\infty \left[\lim_{T\to \infty} \frac{1}{T}\int_{-\infty}^\infty x_{T}^*(t - \tau)x_{T}(t) dt \right]e^{-i 2\pi f\tau} \ d\tau = \int_{-\infty}^\infty R_{xx}(\tau)e^{-i 2\pi f\tau} d\tau. </math>

Assuming the ergodicity of <math>x(t)</math>, the power spectral density can be found once more as the Fourier transform of the autocorrelation function ([[Wiener–Khinchin theorem]]).{{sfn | Miller | Childers | 2012 | p=433}}

{{Equation box 1
|indent = :
|title =
|equation = {{NumBlk||<math>S_{xx}(f) = \int_{-\infty}^\infty R_{xx}(\tau) e^{-i 2 \pi f \tau}\,d \tau = \hat R_{xx}(f)</math>|{{EquationRef|Eq.3}}}}
|cellpadding = 6
|border
|border colour = #0073CF
|background colour = #F5FFFA
}}

Many authors use this equality to actually define the power spectral density.<ref>{{cite book|title = Echo Signal Processing| author = Dennis Ward Ricker | publisher = Springer | year = 2003 | isbn = 978-1-4020-7395-3 | url = https://books.google.com/books?id=NF2Tmty9nugC&q=%22power+spectral+density%22+%22energy+spectral+density%22&pg=PA23 }}</ref>

The power of the signal in a given frequency band <math>[f_1, f_2]</math>, where <math> 0<f_1 < f_2</math>, can be calculated by integrating over frequency. Since <math>S_{xx}(-f) = S_{xx}(f)</math>, an equal amount of power can be attributed to positive and negative frequency bands, which accounts for the factor of 2 in the following form (such trivial factors depend on the conventions used):
<math display="block"> P_\textsf{bandlimited} = 2 \int_{f_1}^{f_2} S_{xx}(f) \, df.</math>
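As an illustration (parameters assumed throughout), the band-limited power can be obtained by integrating an estimated one-sided PSD over <math>[f_1, f_2]</math>; note that SciPy's <code>welch</code> estimator already folds the factor of 2 into its one-sided output for real signals:

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

# Sketch of band-limited power (parameters assumed throughout).  For real
# input, scipy's welch() returns a one-sided density, so the factor of 2
# from the two-sided formula above is already folded into S_xx here.
rng = np.random.default_rng(1)
fs = 1000.0                                    # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)

f, S_xx = signal.welch(x, fs=fs, nperseg=1024, scaling='density')

f1, f2 = 40.0, 60.0                            # band of interest, Hz
band = (f >= f1) & (f <= f2)
P_band = np.sum(S_xx[band]) * (f[1] - f[0])    # integrate the PSD over the band
print(P_band)                                  # ~ 0.5: the 50 Hz sine's power,
                                               # plus a small noise contribution
</syntaxhighlight>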
More generally, similar techniques may be used to estimate a time-varying spectral density. In this case the time interval <math>T</math> is finite rather than approaching infinity. This results in decreased spectral coverage and resolution, since frequencies of less than <math>1/T</math> are not sampled, and results at frequencies which are not an integer multiple of <math>1/T</math> are not independent. Using just a single such time series, the estimated power spectrum will be very "noisy"; however, this can be alleviated if it is possible to evaluate the expected value (in the above equation) using a large (or infinite) number of short-term spectra corresponding to [[statistical ensemble]]s of realizations of <math>x(t)</math> evaluated over the specified time window.

Just as with the energy spectral density, the definition of the power spectral density can be generalized to [[discrete time]] variables <math>x_n</math>. As before, we can consider a window of <math>-N\le n\le N</math> with the signal sampled at discrete times <math>t_n = t_0 + (n\,\Delta t)</math> for a total measurement period <math>T = (2N + 1) \,\Delta t</math>:
<math display="block">S_{xx}(f) = \lim_{N\to \infty}\frac{(\Delta t)^2}{T}\left|\sum_{n=-N}^N x_n e^{-i 2\pi f n \,\Delta t}\right|^2.</math>
Note that a single estimate of the PSD can be obtained through a finite number of samplings. As before, the actual PSD is achieved when <math>N</math> (and thus <math>T</math>) approaches infinity and the expected value is formally applied. In a real-world application, one would typically average a finite-measurement PSD over many trials to obtain a more accurate estimate of the theoretical PSD of the physical process underlying the individual measurements. This computed PSD is sometimes called a [[periodogram]]. This periodogram converges to the true PSD as the number of estimates as well as the averaging time interval <math>T</math> approach infinity.{{sfn | Brown | Hwang | 1997}}
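The effect of this averaging can be sketched numerically (all parameters below are assumed): individual periodograms of white noise scatter widely about the true flat density <math>\sigma^2/f_s</math>, while their mean converges toward it:

<syntaxhighlight lang="python">
import numpy as np

# Sketch of the averaging described above (all parameters assumed): a single
# white-noise periodogram is very noisy, but the mean of many independent
# periodograms converges toward the true flat density sigma**2 / fs.
rng = np.random.default_rng(42)
fs, N, sigma = 1000.0, 1024, 2.0
dt = 1.0 / fs
T = N * dt

def periodogram(x):
    # (dt**2 / T) |sum_n x_n exp(-i 2 pi f n dt)|**2, evaluated via the FFT
    return (dt**2 / T) * np.abs(np.fft.fft(x))**2

single = periodogram(sigma * rng.standard_normal(N))
averaged = np.mean([periodogram(sigma * rng.standard_normal(N))
                    for _ in range(500)], axis=0)

print(single.std(), averaged.std())            # scatter shrinks ~ 1/sqrt(500)
print(averaged.mean(), sigma**2 / fs)          # both ~ 0.004 V^2/Hz
</syntaxhighlight>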
If two signals both possess power spectral densities, then the [[#Cross-spectral density|cross-spectral density]] can similarly be calculated; as the PSD is related to the autocorrelation, so is the cross-spectral density related to the [[cross-correlation]].

==== Properties of the power spectral density ====

Some properties of the PSD include:{{sfn | Miller | Childers | 2012 | p=431}}
{{bulleted list
| The power spectrum is always real and non-negative, and the spectrum of a real-valued process is also an [[even function]] of frequency: <math>S_{xx}(-f) = S_{xx}(f)</math>.
| For a continuous [[stochastic process]] <math>x(t)</math>, the autocorrelation function <math>R_{xx}(\tau)</math> can be reconstructed from its power spectrum <math>S_{xx}(f)</math> by using the [[inverse Fourier transform]].
| Using [[Parseval's theorem]], one can compute the [[variance]] (average power) of a process by integrating the power spectrum over all frequency:
<math display="block">P = \operatorname{Var}(x) = \int_{-\infty}^{\infty}\! S_{xx}(f) \, df.</math>
| For a real process <math>x(t)</math> with power spectral density <math>S_{xx}(f)</math>, one can compute the ''integrated spectrum'' or ''power spectral distribution'' <math>F(f)</math>, which specifies the average ''bandlimited'' power contained in frequencies from DC to <math>f</math>, using:{{sfn | Davenport | Root | 1987}}
<math display="block">F(f) = 2 \int _0^f S_{xx}(f')\, df'.</math>
Note that the previous expression for total power (signal variance) is a special case where {{math|''f'' → ∞}}.
}}

=== Cross power spectral density {{anchor|Cross|Cross spectral density|Cross-spectral density}} ===
{{See also|Coherence (signal processing)}}

Given two signals <math>x(t)</math> and <math>y(t)</math>, with power spectral densities <math>S_{xx}(f)</math> and <math>S_{yy}(f)</math> respectively, it is possible to define a '''cross power spectral density''' ('''CPSD''') or '''cross spectral density''' ('''CSD'''). To begin, let us consider the average power of such a combined signal:
<math display="block">\begin{align} P &= \lim_{T\to \infty} \frac{1}{T} \int_{-\infty}^{\infty} \left[x_T(t) + y_T(t)\right]^*\left[x_T(t) + y_T(t)\right]dt \\ &= \lim_{T\to \infty} \frac{1}{T} \int_{-\infty}^{\infty} \left[ |x_T(t)|^2 + x^*_T(t) y_T(t) + y^*_T(t) x_{T}(t) + |y_T(t)|^2 \right] dt. \end{align}</math>
Using the same notation and methods as used for the power spectral density derivation, we exploit Parseval's theorem and obtain
<math display="block">\begin{align} S_{xy}(f) &= \lim_{T\to\infty} \frac{1}{T} \left[\hat{x}^*_T(f) \hat{y}_T(f)\right] & S_{yx}(f) &= \lim_{T\to\infty} \frac{1}{T} \left[\hat{y}^*_T(f) \hat{x}_T(f)\right], \end{align}</math>
where, again, the contributions of <math>S_{xx}(f)</math> and <math>S_{yy}(f)</math> are already understood. Note that <math>S^*_{xy}(f) = S_{yx}(f)</math>, so the full contribution to the cross power is, generally, twice the real part of either individual '''CPSD'''. Just as before, from here we recast these products as the Fourier transform of a time convolution, which when divided by the period and taken to the limit <math>T\to\infty</math> becomes the Fourier transform of a [[cross-correlation]] function:<ref>{{cite web|author=William D Penny|year=2009|title=Signal Processing Course, chapter 7|url=http://www.fil.ion.ucl.ac.uk/~wpenny/course/course.html}}</ref>
<math display="block">\begin{align} S_{xy}(f) &= \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac 1 {T} \int_{-\infty}^{\infty} x^*_{T}(t-\tau) y_{T}(t) dt \right] e^{-i 2 \pi f \tau} d\tau= \int_{-\infty}^{\infty} R_{xy}(\tau) e^{-i 2 \pi f \tau} d\tau \\ S_{yx}(f) &= \int_{-\infty}^{\infty} \left[\lim_{T\to\infty} \frac 1 {T} \int_{-\infty}^{\infty} y^*_{T}(t-\tau) x_{T}(t) dt \right] e^{-i 2 \pi f \tau} d\tau= \int_{-\infty}^{\infty} R_{yx}(\tau) e^{-i 2 \pi f \tau} d\tau, \end{align}</math>
where <math>R_{xy}(\tau)</math> is the [[cross-correlation]] of <math>x(t)</math> with <math>y(t)</math> and <math>R_{yx}(\tau)</math> is the cross-correlation of <math>y(t)</math> with <math>x(t)</math>. In light of this, the PSD is seen to be a special case of the CSD for <math>x(t) = y(t)</math>. If <math>x(t)</math> and <math>y(t)</math> are real signals (e.g. voltage or current), their Fourier transforms <math>\hat{x}(f)</math> and <math>\hat{y}(f)</math> are usually restricted to positive frequencies by convention. Therefore, in typical signal processing, the full '''CPSD''' is just one of the '''CPSD'''s scaled by a factor of two:
<math display="block">\operatorname{CPSD}_\text{Full} = 2S_{xy}(f) = 2 S_{yx}(f).</math>

For discrete signals {{math|''x<sub>n</sub>''}} and {{math|''y<sub>n</sub>''}}, the relationship between the cross-spectral density and the cross-covariance is
<math display="block">S_{xy}(f) = \sum_{n=-\infty}^\infty R_{xy}(\tau_n)e^{-i 2 \pi f \tau_n}\,\Delta\tau.</math>
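As a final illustration (the signals, delay, and estimator settings are assumed), a cross-spectral density estimate exhibits the conjugate symmetry <math>S^*_{xy}(f) = S_{yx}(f)</math> noted above, and its phase at a tone frequency encodes the relative delay between the signals:

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal

# Closing sketch (signals, delay, and parameters all assumed): a CPSD
# estimate shows the conjugate symmetry S_xy* = S_yx, and its phase at the
# tone frequency encodes the relative delay between the two signals.
rng = np.random.default_rng(7)
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.sin(2 * np.pi * 80 * t) + rng.standard_normal(t.size)
y = np.roll(x, 3) + 0.5 * rng.standard_normal(t.size)   # x delayed by 3 samples

f, S_xy = signal.csd(x, y, fs=fs, nperseg=1024)
_, S_yx = signal.csd(y, x, fs=fs, nperseg=1024)

print(np.allclose(S_xy, np.conj(S_yx)))        # True: S_xy(f) = S_yx*(f)
idx = np.argmin(np.abs(f - 80.0))
print(np.angle(S_xy[idx]))                     # ~ -2*pi*80*(3/fs) ~ -1.51 rad
</syntaxhighlight>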