Uncertainty principle


Template:Short description {{#invoke:other uses|otheruses}} Template:Use American English Template:Quantum mechanics

File:Werner Heisenberg - Canonical commutation rule for position and momentum variables of a particle - Uncertainty principle, 1927.jpg
Canonical commutation rule for position q and momentum p variables of a particle, 1927: pq − qp = h/(2πi). Uncertainty principle of Heisenberg, 1927.

The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known.

More formally, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the product of the accuracy of certain related pairs of measurements on a quantum system, such as position, x, and momentum, p.<ref name=Sen2014>Template:Cite journal</ref> Such paired variables are known as complementary variables or canonically conjugate variables.

First introduced in 1927 by German physicist Werner Heisenberg,<ref name=":0">Template:Cite journalTemplate:Cite journal</ref><ref>Werner Heisenberg (1989), Encounters with Einstein and Other Essays on People, Places and Particles, Princeton University Press, p. 53. Template:ISBN?</ref><ref>Template:Cite book</ref><ref>Kumar, Manjit. Quantum: Einstein, Bohr, and the great debate about the nature of reality. 1st American ed., 2008. Chap. 10, Note 37. Template:ISBN?</ref> the formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard<ref name="Kennard">Template:Citation</ref> later that year and by Hermann Weyl<ref name="Weyl1928">Template:Cite bookTemplate:Page?</ref> in 1928: Template:Equation box 1 where <math>\hbar = \frac{h}{2\pi}</math> is the reduced Planck constant.

The quintessentially quantum mechanical uncertainty principle comes in many forms other than position–momentum. The energy–time relationship is widely used to relate quantum state lifetime to measured energy widths but its formal derivation is fraught with confusing issues about the nature of time. The basic principle has been extended in numerous directions; it must be considered in many kinds of fundamental physical measurements.

Position–momentum

Template:Main article

File:Sequential superposition of plane waves.gif
The superposition of several plane waves to form a wave packet. This wave packet becomes increasingly localized with the addition of many waves. The Fourier transform is a mathematical operation that separates a wave packet into its individual plane waves. The waves shown here are real for illustrative purposes only; in quantum mechanics the wave function is generally complex.

It is vital to illustrate how the principle applies to relatively intelligible physical situations since it is indiscernible on the macroscopic<ref>Template:Cite journal</ref> scales that humans experience. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily.

Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized at the same time.<ref>See Appendix B in Template:Citation</ref> A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation Template:Math, where Template:Mvar is the wavenumber.

In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables is subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue). For example, if a measurement of an observable Template:Mvar is performed, then the system is in a particular eigenstate Template:Mvar of that observable. However, the particular eigenstate of the observable Template:Mvar need not be an eigenstate of another observable Template:Mvar: if it is not, then the state does not have a unique associated measurement value for that observable, as the system is not in an eigenstate of it.<ref>Template:Citation</ref>

Visualization

The uncertainty principle can be visualized using the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension.

The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized so the possible momentum components the particle could have are more widespread. Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread. These wavefunctions are Fourier transforms of each other: mathematically, the uncertainty principle expresses the relationship between conjugate variables in the transform.

File:Quantum mechanics travelling wavefunctions wavelength.svg
Position x and momentum p wavefunctions corresponding to quantum particles. The colour opacity of the particles corresponds to the probability density of finding the particle with position x or momentum component p.
Top: If wavelength λ is unknown, so are momentum p, wave-vector k and energy E (de Broglie relations). The particle is more localized in position space: Δx is smaller, while Δpx is larger.
Bottom: If λ is known, so are p, k, and E. The particle is more localized in momentum space: Δp is smaller, while Δx is larger.

Wave mechanics interpretation

Template:Main article Template:Multiple image According to the de Broglie hypothesis, every object in the universe is associated with a wave. Thus every object, from an elementary particle to atoms, molecules and on up to planets and beyond, is subject to the uncertainty principle.

The time-independent wave function of a single-mode plane wave of wavenumber k0 or momentum p0 is<ref>Template:Citation</ref> <math display="block">\psi(x) \propto e^{ik_0 x} = e^{ip_0 x/\hbar} ~.</math>

The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is <math display="block"> \operatorname P [a \leq X \leq b] = \int_a^b |\psi(x)|^2 \, \mathrm{d}x ~.</math>

In the case of the single-mode plane wave, <math>|\psi(x)|^2</math> is the same for every value of x: the probability density is uniform. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet.

On the other hand, consider a wave function that is a sum of many waves, which we may write as <math display="block">\psi(x) \propto \sum_n A_n e^{i p_n x/\hbar}~, </math> where An represents the relative contribution of the mode pn to the overall total. The figures to the right show how with the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes <math display="block">\psi(x) = \frac{1}{\sqrt{2 \pi \hbar}} \int_{-\infty}^\infty \varphi(p) \cdot e^{i p x/\hbar} \, dp ~, </math> with <math>\varphi(p)</math> representing the amplitude of these modes and is called the wave function in momentum space. In mathematical terms, we say that <math>\varphi(p)</math> is the Fourier transform of <math>\psi(x)</math> and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta.<ref name="L&L">Template:Cite book</ref>
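
This localization can be illustrated with a short numerical sketch (illustrative only; the grid, the ℏ = 1 units, and the momentum spacing are arbitrary assumptions, not part of any standard treatment). It superposes an increasing number of plane waves with momenta clustered around a central value and prints the resulting position spread:

<syntaxhighlight lang="python">
import numpy as np

hbar = 1.0                                   # illustrative units
x = np.linspace(-50, 50, 4001)
dx = x[1] - x[0]

def sigma_x(num_modes, p0=1.0, dp=0.05):
    """Superpose num_modes plane waves with momenta clustered around p0
    and return the standard deviation of the resulting |psi|^2."""
    momenta = p0 + dp * np.arange(-(num_modes // 2), num_modes // 2 + 1)
    psi = sum(np.exp(1j * p * x / hbar) for p in momenta)
    prob = np.abs(psi) ** 2
    prob /= prob.sum() * dx                  # normalize on the grid
    mean = (x * prob).sum() * dx
    return np.sqrt(((x - mean) ** 2 * prob).sum() * dx)

for n in (1, 5, 21, 101):
    print(f"{n:4d} plane waves -> sigma_x = {sigma_x(n):.2f}")
</syntaxhighlight>

The printed values of σx shrink as more modes are added, mirroring the figures described above.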

One way to quantify the precision of the position and momentum is the standard deviation σ. Since <math>|\psi(x)|^2</math> is a probability density function for position, we calculate its standard deviation.

The precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound.

Proof of the Kennard inequality using wave mechanics

We are interested in the variances of position and momentum, defined as <math display="block">\sigma_x^2 = \int_{-\infty}^\infty x^2 \cdot |\psi(x)|^2 \, dx - \left( \int_{-\infty}^\infty x \cdot |\psi(x)|^2 \, dx \right)^2</math> <math display="block">\sigma_p^2 = \int_{-\infty}^\infty p^2 \cdot |\varphi(p)|^2 \, dp - \left( \int_{-\infty}^\infty p \cdot |\varphi(p)|^2 \, dp \right)^2~.</math>

Without loss of generality, we will assume that the means vanish, which just amounts to a shift of the origin of our coordinates. (A more general proof that does not make this assumption is given below.) This gives us the simpler form <math display="block">\sigma_x^2 = \int_{-\infty}^\infty x^2 \cdot |\psi(x)|^2 \, dx</math> <math display="block">\sigma_p^2 = \int_{-\infty}^\infty p^2 \cdot |\varphi(p)|^2 \, dp~.</math>

The function <math>f(x) = x \cdot \psi(x)</math> can be interpreted as a vector in a function space. We can define an inner product for a pair of functions u(x) and v(x) in this vector space: <math display="block">\langle u \mid v \rangle = \int_{-\infty}^\infty u^*(x) \cdot v(x) \, dx,</math> where the asterisk denotes the complex conjugate.

With this inner product defined, we note that the variance for position can be written as <math display="block">\sigma_x^2 = \int_{-\infty}^\infty |f(x)|^2 \, dx = \langle f \mid f \rangle ~.</math>

We can repeat this for momentum by interpreting the function <math>\tilde{g}(p)=p \cdot \varphi(p)</math> as a vector, but we can also take advantage of the fact that <math>\psi(x)</math> and <math>\varphi(p)</math> are Fourier transforms of each other. We evaluate the inverse Fourier transform through integration by parts: <math display="block">\begin{align} g(x) &= \frac{1}{\sqrt{2 \pi \hbar}} \cdot \int_{-\infty}^\infty \tilde{g}(p) \cdot e^{ipx/\hbar} \, dp \\ &= \frac{1}{\sqrt{2 \pi \hbar}} \int_{-\infty}^\infty p \cdot \varphi(p) \cdot e^{ipx/\hbar} \, dp \\ &= \frac{1}{2 \pi \hbar} \int_{-\infty}^\infty \left[ p \cdot \int_{-\infty}^\infty \psi(\chi) e^{-ip\chi/\hbar} \, d\chi \right] \cdot e^{ipx/\hbar} \, dp \\ &= \frac{i}{2 \pi} \int_{-\infty}^\infty \left[ \cancel{ \left. \psi(\chi) e^{-ip\chi/\hbar} \right|_{-\infty}^\infty } - \int_{-\infty}^\infty \frac{d\psi(\chi)}{d\chi} e^{-ip\chi/\hbar} \, d\chi \right] \cdot e^{ipx/\hbar} \, dp \\ &= -i \int_{-\infty}^\infty \frac{d\psi(\chi)}{d\chi} \left[ \frac{1}{2 \pi}\int_{-\infty}^\infty \, e^{ip(x - \chi)/\hbar} \, dp \right]\, d\chi\\ &= -i \int_{-\infty}^\infty \frac{d\psi(\chi)}{d\chi} \left[ \delta\left(\frac{x - \chi }{\hbar}\right) \right]\, d\chi\\ &= -i \hbar \int_{-\infty}^\infty \frac{d\psi(\chi)}{d\chi} \left[ \delta\left(x - \chi \right) \right]\, d\chi\\ &= -i \hbar \frac{d\psi(x)}{dx} \\ &= \left( -i \hbar \frac{d}{dx} \right) \cdot \psi(x) , \end{align}</math> where <math>v=\frac{\hbar}{-ip}e^{-ip\chi/\hbar}</math> in the integration by parts, the cancelled term vanishes because the wave function vanishes at both infinities and <math>|e^{-ip\chi/\hbar}|=1</math>, and then use the Dirac delta function which is valid because <math>\dfrac{d\psi(\chi)}{d\chi}</math> does not depend on p .

The term <math display="inline">-i \hbar \frac{d}{dx}</math> is called the momentum operator in position space. Applying Plancherel's theorem, we see that the variance for momentum can be written as <math display="block">\sigma_p^2 = \int_{-\infty}^\infty |\tilde{g}(p)|^2 \, dp = \int_{-\infty}^\infty |g(x)|^2 \, dx = \langle g \mid g \rangle.</math>

The Cauchy–Schwarz inequality asserts that <math display="block">\sigma_x^2 \sigma_p^2 = \langle f \mid f \rangle \cdot \langle g \mid g \rangle \ge |\langle f \mid g \rangle|^2 ~.</math>

The modulus squared of any complex number z can be expressed as <math display="block">|z|^{2} = \Big(\text{Re}(z)\Big)^{2}+\Big(\text{Im}(z)\Big)^{2} \geq \Big(\text{Im}(z)\Big)^{2} = \left(\frac{z-z^{\ast}}{2i}\right)^{2}. </math> We let <math>z=\langle f|g\rangle</math> and <math>z^{*}=\langle g\mid f\rangle</math> and substitute these into the equation above to get <math display="block">|\langle f\mid g\rangle|^2 \geq \left(\frac{\langle f\mid g\rangle-\langle g \mid f \rangle}{2i}\right)^2 ~.</math>

All that remains is to evaluate these inner products.

<math display="block">\begin{align} \langle f\mid g\rangle-\langle g\mid f\rangle &= \int_{-\infty}^\infty \psi^*(x) \, x \cdot \left(-i \hbar \frac{d}{dx}\right) \, \psi(x) \, dx - \int_{-\infty}^\infty \psi^*(x) \, \left(-i \hbar \frac{d}{dx}\right) \cdot x \, \psi(x) \, dx \\ &= i \hbar \cdot \int_{-\infty}^\infty \psi^*(x) \left[ \left(-x \cdot \frac{d\psi(x)}{dx}\right) + \frac{d(x \psi(x))}{dx} \right] \, dx \\ &= i \hbar \cdot \int_{-\infty}^\infty \psi^*(x) \left[ \left(-x \cdot \frac{d\psi(x)}{dx}\right) + \psi(x) + \left(x \cdot \frac{d\psi(x)}{dx}\right)\right] \, dx \\ &= i \hbar \cdot \int_{-\infty}^\infty \psi^*(x) \psi(x) \, dx \\ &= i \hbar \cdot \int_{-\infty}^\infty |\psi(x)|^2 \, dx \\ &= i \hbar \end{align}</math>

Plugging this into the above inequalities, we get <math display="block">\sigma_x^2 \sigma_p^2 \ge |\langle f \mid g \rangle|^2 \ge \left(\frac{\langle f\mid g\rangle-\langle g\mid f\rangle}{2i}\right)^2 = \left(\frac{i \hbar}{2 i}\right)^2 = \frac{\hbar^2}{4}</math> and taking the square root <math display="block">\sigma_x \sigma_p \ge \frac{\hbar}{2}~.</math>

Equality holds in the Cauchy–Schwarz step if and only if <math>f</math> and <math>g</math> are linearly dependent. Note that the only physics involved in this proof was that <math>\psi(x)</math> and <math>\varphi(p)</math> are wave functions for position and momentum, which are Fourier transforms of each other. A similar result would hold for any pair of conjugate variables.
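
Because the proof uses only the Fourier relation between <math>\psi(x)</math> and <math>\varphi(p)</math>, the Kennard bound can be spot-checked numerically for any normalizable test wavefunction. The following sketch is illustrative only; the hyperbolic-secant test state, the grid, and the ℏ = 1 units are arbitrary assumptions:

<syntaxhighlight lang="python">
import numpy as np

hbar = 1.0                                    # illustrative units
x = np.linspace(-40.0, 40.0, 2**14, endpoint=False)
dx = x[1] - x[0]

psi = 1.0 / np.cosh(x)                        # hypothetical non-Gaussian test state
psi = psi / np.sqrt((np.abs(psi) ** 2).sum() * dx)

def spread(grid, density, step):
    density = density / (density.sum() * step)
    mean = (grid * density).sum() * step
    return np.sqrt(((grid - mean) ** 2 * density).sum() * step)

sigma_x = spread(x, np.abs(psi) ** 2, dx)

# momentum grid and |phi(p)|^2 obtained from the discrete Fourier transform
p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
dp = 2 * np.pi * hbar / (x.size * dx)
sigma_p = spread(p, np.abs(np.fft.fft(psi)) ** 2, dp)

print(sigma_x * sigma_p, ">=", hbar / 2)      # comes out ~0.52, above hbar/2, for this state
</syntaxhighlight>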

Matrix mechanics interpretation

Template:Main article In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators.<ref name="L&L"/> When considering pairs of observables, an important quantity is the commutator. For a pair of operators Template:Mvar and <math>\hat{B}</math>, one defines their commutator as <math display="block">[\hat{A},\hat{B}]=\hat{A}\hat{B}-\hat{B}\hat{A}.</math> In the case of position and momentum, the commutator is the canonical commutation relation <math display="block">[\hat{x},\hat{p}]=i \hbar.</math>

The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let <math>|\psi\rangle</math> be a right eigenstate of position with a constant eigenvalue Template:Math. By definition, this means that <math>\hat{x}|\psi\rangle = x_0 |\psi\rangle.</math> Applying the commutator to <math>|\psi\rangle</math> yields <math display="block">[\hat{x},\hat{p}] | \psi \rangle = (\hat{x}\hat{p}-\hat{p}\hat{x}) | \psi \rangle = (\hat{x} - x_0 \hat{I}) \hat{p} \, | \psi \rangle = i \hbar | \psi \rangle,</math> where Template:Mvar is the identity operator.

Suppose, for the sake of proof by contradiction, that <math>|\psi\rangle</math> is also a right eigenstate of momentum, with constant eigenvalue Template:Mvar. If this were true, then one could write <math display="block">(\hat{x} - x_0 \hat{I}) \hat{p} \, | \psi \rangle = (\hat{x} - x_0 \hat{I}) p_0 \, | \psi \rangle = (x_0 \hat{I} - x_0 \hat{I}) p_0 \, | \psi \rangle=0.</math> On the other hand, the above canonical commutation relation requires that <math display="block">[\hat{x},\hat{p}] | \psi \rangle=i \hbar | \psi \rangle \ne 0.</math> This implies that no quantum state can simultaneously be both a position and a momentum eigenstate.

When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state amounts to a position eigenstate. This means that the state is not a momentum eigenstate; rather, it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations, <math display="block">\sigma_x=\sqrt{\langle \hat{x}^2 \rangle-\langle \hat{x}\rangle^2}</math> <math display="block">\sigma_p=\sqrt{\langle \hat{p}^2 \rangle-\langle \hat{p}\rangle^2}.</math>

As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle.

Quantum harmonic oscillator stationary states

Template:Main article Consider a one-dimensional quantum harmonic oscillator. It is possible to express the position and momentum operators in terms of the creation and annihilation operators: <math display="block">\hat x = \sqrt{\frac{\hbar}{2m\omega}}(a+a^\dagger)</math> <math display="block">\hat p = i\sqrt{\frac{m \omega\hbar}{2}}(a^\dagger-a).</math>

Using the standard rules for creation and annihilation operators on the energy eigenstates, <math display="block">a^{\dagger}|n\rangle=\sqrt{n+1}|n+1\rangle</math> <math display="block">a|n\rangle=\sqrt{n}|n-1\rangle, </math> the variances may be computed directly, <math display="block">\sigma_x^2 = \frac{\hbar}{m\omega} \left( n+\frac{1}{2}\right)</math> <math display="block">\sigma_p^2 = \hbar m\omega \left( n+\frac{1}{2}\right)\, .</math> The product of these standard deviations is then <math display="block">\sigma_x \sigma_p = \hbar \left(n+\frac{1}{2}\right) \ge \frac{\hbar}{2}.~</math>

In particular, the above Kennard bound<ref name="Kennard" /> is saturated for the ground state Template:Math, for which the probability density is just the normal distribution.
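
This result can be checked with a small numerical sketch using truncated matrix representations of the ladder operators (illustrative only; the units ℏ = m = ω = 1 and the truncation dimension are arbitrary assumptions):

<syntaxhighlight lang="python">
import numpy as np

hbar = m = omega = 1.0                         # illustrative units
N = 30                                         # Fock-space truncation (assumption)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.conj().T                                # creation operator
x = np.sqrt(hbar / (2 * m * omega)) * (a + ad)
p = 1j * np.sqrt(m * omega * hbar / 2) * (ad - a)

def sigma(op, state):
    mean = state.conj() @ op @ state
    mean_sq = state.conj() @ op @ op @ state
    return np.sqrt((mean_sq - mean ** 2).real)

for n in range(4):                             # energy eigenstates |0>, ..., |3>
    ket = np.zeros(N, dtype=complex)
    ket[n] = 1.0
    print(n, sigma(x, ket) * sigma(p, ket), "expected", hbar * (n + 0.5))
</syntaxhighlight>

For the low-lying states the printed products match ℏ(n + 1/2), and only the ground state reaches the Kennard bound ℏ/2.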

Quantum harmonic oscillators with Gaussian initial condition

Template:Multiple image

In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0 as <math display="block">\psi(x)=\left(\frac{m \Omega}{\pi \hbar}\right)^{1/4} \exp{\left( -\frac{m \Omega (x-x_0)^2}{2\hbar}\right)},</math> where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the time-dependent solution. After many cancellations, the probability densities reduce to <math display="block">|\Psi(x,t)|^2 \sim \mathcal{N}\left( x_0 \cos{(\omega t)} , \frac{\hbar}{2 m \Omega} \left( \cos^2(\omega t) + \frac{\Omega^2}{\omega^2} \sin^2{(\omega t)} \right)\right)</math> <math display="block">|\Phi(p,t)|^2 \sim \mathcal{N}\left( -m x_0 \omega \sin(\omega t), \frac{\hbar m \Omega}{2} \left( \cos^2{(\omega t)} + \frac{\omega^2}{\Omega^2} \sin^2{(\omega t)} \right)\right),</math> where we have used the notation <math>\mathcal{N}(\mu, \sigma^2)</math> to denote a normal distribution of mean μ and variance σ². Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as <math display="block">\begin{align} \sigma_x \sigma_p&=\frac{\hbar}{2}\sqrt{\left( \cos^2{(\omega t)} + \frac{\Omega^2}{\omega^2} \sin^2{(\omega t)} \right)\left( \cos^2{(\omega t)} + \frac{\omega^2}{\Omega^2} \sin^2{(\omega t)} \right)} \\ &= \frac{\hbar}{4}\sqrt{3+\frac{1}{2}\left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-\left(\frac{1}{2}\left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-1\right) \cos{(4 \omega t)}} \end{align}</math>

From the relations <math display="block">\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2} \ge 2, \quad |\cos(4 \omega t)| \le 1,</math> we can conclude the following (the rightmost equality holds only when Template:Nowrap): <math display="block">\sigma_x \sigma_p \ge \frac{\hbar}{4}\sqrt{3+\frac{1}{2} \left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-\left(\frac{1}{2} \left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-1\right)} = \frac{\hbar}{2}. </math>

Coherent states

Template:Main article A coherent state is a right eigenstate of the annihilation operator, <math display="block">\hat{a}|\alpha\rangle=\alpha|\alpha\rangle,</math> which may be represented in terms of Fock states as <math display="block">|\alpha\rangle =e^{-{|\alpha|^2\over2}} \sum_{n=0}^\infty {\alpha^n \over \sqrt{n!}}|n\rangle</math>

In the picture where the coherent state is a massive particle in a quantum harmonic oscillator, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances, <math display="block">\sigma_x^2 = \frac{\hbar}{2 m \omega},</math> <math display="block">\sigma_p^2 = \frac{\hbar m \omega}{2}.</math> Therefore, every coherent state saturates the Kennard bound <math display="block">\sigma_x \sigma_p = \sqrt{\frac{\hbar}{2 m \omega}} \, \sqrt{\frac{\hbar m \omega}{2}} = \frac{\hbar}{2}. </math> with position and momentum each contributing an amount <math display="inline">\sqrt{\hbar/2}</math> in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound although the individual contributions of position and momentum need not be balanced in general.
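
A numerical sketch of this saturation builds a coherent state in a truncated Fock space (illustrative only; the amplitude α, the truncation dimension, and the ℏ = m = ω = 1 units are arbitrary assumptions):

<syntaxhighlight lang="python">
import math
import numpy as np

hbar = m = omega = 1.0                         # illustrative units
N = 60                                         # Fock-space truncation (assumption)
alpha = 0.8 + 0.3j                             # arbitrary coherent-state amplitude

a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
ad = a.conj().T
x = np.sqrt(hbar / (2 * m * omega)) * (a + ad)
p = 1j * np.sqrt(m * omega * hbar / 2) * (ad - a)

# |alpha> = exp(-|alpha|^2 / 2) * sum_n alpha^n / sqrt(n!) |n>
ket = np.array([alpha ** n / math.sqrt(math.factorial(n)) for n in range(N)], dtype=complex)
ket *= np.exp(-abs(alpha) ** 2 / 2)

def sigma(op, state):
    mean = state.conj() @ op @ state
    return np.sqrt((state.conj() @ op @ op @ state - mean ** 2).real)

print(sigma(x, ket) * sigma(p, ket), "~", hbar / 2)   # ~ hbar/2, independent of alpha
</syntaxhighlight>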

Particle in a box

Template:Main article Consider a particle in a one-dimensional box of length <math>L</math>. The eigenfunctions in position and momentum space are <math display="block">\psi_n(x,t) =\begin{cases} A \sin(k_n x)\mathrm{e}^{-\mathrm{i}\omega_n t}, & 0 < x < L,\\ 0, & \text{otherwise,} \end{cases}</math> and <math display="block">\varphi_n(p,t)=\sqrt{\frac{\pi L}{\hbar}}\,\,\frac{n\left(1-(-1)^ne^{-ikL} \right) e^{-i \omega_n t}}{\pi ^2 n^2-k^2 L^2},</math> where <math display="inline">k_n=\frac{n\pi}{L}</math>, <math display="inline">\omega_n=\frac{\pi^2 \hbar n^2}{2 L^2 m}</math>, and we have used the de Broglie relation <math>p=\hbar k</math>. The variances of <math>x</math> and <math>p</math> can be calculated explicitly: <math display="block">\sigma_x^2=\frac{L^2}{12}\left(1-\frac{6}{n^2\pi^2}\right)</math> <math display="block">\sigma_p^2=\left(\frac{\hbar n\pi}{L}\right)^2. </math>

The product of the standard deviations is therefore <math display="block">\sigma_x \sigma_p = \frac{\hbar}{2} \sqrt{\frac{n^2\pi^2}{3}-2}.</math> For all <math>n=1, \, 2, \, 3,\, \ldots</math>, the quantity <math display="inline">\sqrt{\frac{n^2\pi^2}{3}-2}</math> is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when <math>n = 1</math>, in which case <math display="block">\sigma_x \sigma_p = \frac{\hbar}{2} \sqrt{\frac{\pi^2}{3}-2} \approx 0.568 \hbar > \frac{\hbar}{2}.</math>
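
The closed-form variances can be cross-checked numerically (an illustrative sketch; the box length and grid are arbitrary assumptions): σx is computed directly from the stated eigenfunctions and combined with the closed-form σp.

<syntaxhighlight lang="python">
import numpy as np

hbar = 1.0
L_box = 1.0                                    # box length (arbitrary)
x = np.linspace(0.0, L_box, 20001)
dx = x[1] - x[0]

for n in range(1, 4):
    psi = np.sqrt(2.0 / L_box) * np.sin(n * np.pi * x / L_box)   # standard normalization
    prob = psi ** 2
    mean = (x * prob).sum() * dx
    sigma_x = np.sqrt(((x - mean) ** 2 * prob).sum() * dx)
    sigma_p = hbar * n * np.pi / L_box                           # from the closed form above
    # the ratio to hbar/2 should equal sqrt(n^2 pi^2 / 3 - 2)
    print(n, sigma_x * sigma_p / (hbar / 2), np.sqrt(n**2 * np.pi**2 / 3 - 2))
</syntaxhighlight>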

Constant momentum

Template:Main article

File:Guassian Dispersion.gif
Position space probability density of an initially Gaussian state moving at minimally uncertain, constant momentum in free space

Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0 according to <math display="block">\varphi(p) = \left(\frac{x_0}{\hbar \sqrt{\pi}} \right)^{1/2} \exp\left(\frac{-x_0^2 (p-p_0)^2}{2\hbar^2}\right),</math> where we have introduced a reference scale <math display="inline">x_0=\sqrt{\hbar/m\omega_0}</math>, with <math>\omega_0>0</math> describing the width of the distribution—cf. nondimensionalization. If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are <math display="block">\Phi(p,t) = \left(\frac{x_0}{\hbar \sqrt{\pi}} \right)^{1/2} \exp\left(\frac{-x_0^2 (p-p_0)^2}{2\hbar^2}-\frac{ip^2 t}{2m\hbar}\right),</math> <math display="block">\Psi(x,t) = \left(\frac{1}{x_0 \sqrt{\pi}} \right)^{1/2} \frac{e^{-x_0^2 p_0^2 /2\hbar^2}}{\sqrt{1+i\omega_0 t}} \, \exp\left(-\frac{(x-ix_0^2 p_0/\hbar)^2}{2x_0^2 (1+i\omega_0 t)}\right).</math>

Since <math> \langle p(t) \rangle = p_0</math> and <math>\sigma_p(t) = \hbar /(\sqrt{2}x_0)</math>, this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. On the other hand, the standard deviation of the position is <math display="block">\sigma_x = \frac{x_0}{\sqrt{2}} \sqrt{1+\omega_0^2 t^2}</math> such that the uncertainty product can only increase with time as <math display="block">\sigma_x(t) \sigma_p(t) = \frac{\hbar}{2} \sqrt{1+\omega_0^2 t^2}</math>

Mathematical formalism

Starting with Kennard's derivation of position-momentum uncertainty, Howard Percy Robertson developed<ref name="Robertson1929">Template:Citation</ref><ref name=Sen2014/> a formulation for arbitrary Hermitian operators <math>\hat{\mathcal{O}}</math> expressed in terms of their standard deviation <math display="block">\sigma_{\mathcal{O}} = \sqrt{\langle \hat{\mathcal{O}}^2 \rangle-\langle \hat{\mathcal{O}}\rangle^2},</math> where the brackets <math>\langle\hat{\mathcal{O}}\rangle</math> indicate an expectation value of the observable represented by operator <math>\hat{\mathcal{O}}</math>. For a pair of operators <math>\hat{A}</math> and <math>\hat{B}</math>, define their commutator as <math display="block">[\hat{A},\hat{B}]=\hat{A}\hat{B}-\hat{B}\hat{A},</math> and the Robertson uncertainty relation is given by<ref>Template:Citation</ref> <math display="block">\sigma_A \sigma_B \geq \left| \frac{1}{2i}\langle[\hat{A},\hat{B}]\rangle \right| = \frac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle \right|.</math>
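
The Robertson relation holds for any state and any pair of Hermitian operators, which makes it easy to spot-check numerically. The following sketch is illustrative only; the dimension, random seed, operators, and state are arbitrary assumptions:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
d = 4                                          # Hilbert-space dimension (assumption)

def random_hermitian(dim):
    M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (M + M.conj().T) / 2

A, B = random_hermitian(d), random_hermitian(d)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)                     # random normalized state

def expval(op):
    return (psi.conj() @ op @ psi).real        # real for Hermitian operators

def sigma(op):
    return np.sqrt(expval(op @ op) - expval(op) ** 2)

commutator = A @ B - B @ A
bound = 0.5 * abs(psi.conj() @ commutator @ psi)   # Robertson lower bound
print(sigma(A) * sigma(B), ">=", bound)
</syntaxhighlight>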

Erwin Schrödinger<ref>Schrödinger, E., Zum Heisenbergschen Unschärfeprinzip, Berliner Berichte, 1930, pp. 296–303.</ref> showed how to allow for correlation between the operators, giving a stronger inequality, known as the Robertson–Schrödinger uncertainty relation,<ref name="Schrodinger1930">Template:Citation</ref><ref name=Sen2014/>

Template:Equation box 1 where the anticommutator, <math>\{\hat{A},\hat{B}\}=\hat{A}\hat{B}+\hat{B}\hat{A}</math> is used.

Template:Math proof

Phase space

In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function <math>W(x,p)</math> with star product ★ and a function f, the following is generally true:<ref>Template:Cite journal</ref> <math display="block">\langle f^* \star f \rangle =\int (f^* \star f) \, W(x,p) \, dx \, dp \ge 0 ~.</math>

Choosing <math>f = a + bx + cp</math>, we arrive at <math display="block">\langle f^* \star f \rangle =\begin{bmatrix}a^* & b^* & c^* \end{bmatrix}\begin{bmatrix}1 & \langle x \rangle & \langle p \rangle \\ \langle x \rangle & \langle x \star x \rangle & \langle x \star p \rangle \\ \langle p \rangle & \langle p \star x \rangle & \langle p \star p \rangle \end{bmatrix}\begin{bmatrix}a \\ b \\ c\end{bmatrix} \ge 0 ~.</math>

Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are non-negative.

The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant, <math display="block">\det\begin{bmatrix}1 & \langle x \rangle & \langle p \rangle \\ \langle x \rangle & \langle x \star x \rangle & \langle x \star p \rangle \\ \langle p \rangle & \langle p \star x \rangle & \langle p \star p \rangle \end{bmatrix} = \det\begin{bmatrix}1 & \langle x \rangle & \langle p \rangle \\ \langle x \rangle & \langle x^2 \rangle & \left\langle xp + \frac{i\hbar}{2} \right\rangle \\ \langle p \rangle & \left\langle xp - \frac{i\hbar}{2} \right\rangle & \langle p^2 \rangle \end{bmatrix} \ge 0~,</math> or, explicitly, after algebraic manipulation, <math display="block">\sigma_x^2 \sigma_p^2 = \left( \langle x^2 \rangle - \langle x \rangle^2 \right)\left( \langle p^2 \rangle - \langle p \rangle^2 \right)\ge \left( \langle xp \rangle - \langle x \rangle \langle p \rangle \right)^2 + \frac{\hbar^2}{4} ~.</math>

Examples

Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below.

  • Position–linear momentum uncertainty relation: for the position and linear momentum operators, the canonical commutation relation <math>[\hat{x}, \hat{p}] = i\hbar</math> implies the Kennard inequality from above: <math display="block">\sigma_x \sigma_p \geq \frac{\hbar}{2}.</math>
  • Angular momentum uncertainty relation: For two orthogonal components of the total angular momentum operator of an object: <math display="block">\sigma_{J_i} \sigma_{J_j} \geq \frac{\hbar}{2} \big|\langle J_k\rangle\big|,</math> where i, j, k are distinct, and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, for <math>[J_x, J_y] = i \hbar \varepsilon_{xyz} J_z</math>, a choice <math>\hat{A} = J_x</math>, <math>\hat{B} = J_y</math>, in angular momentum multiplets, ψ = |j, m⟩, bounds the Casimir invariant (angular momentum squared, <math>\langle J_x^2+ J_y^2 + J_z^2 \rangle</math>) from below and thus yields useful constraints such as Template:Nobr, and hence j ≥ m, among others (see the sketch below).
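
A minimal numerical check of the angular-momentum relation for a spin-1 system, using the standard spin-1 matrices with ℏ = 1 (the particular state is an arbitrary assumption):

<syntaxhighlight lang="python">
import numpy as np

hbar = 1.0
s = 1.0 / np.sqrt(2.0)
Jx = hbar * s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar * s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

psi = np.array([1.0, 0.5j, 0.2], dtype=complex)    # arbitrary spin-1 state (assumption)
psi /= np.linalg.norm(psi)

def expval(op):
    return (psi.conj() @ op @ psi).real

def sigma(op):
    return np.sqrt(expval(op @ op) - expval(op) ** 2)

# sigma_Jx * sigma_Jy >= (hbar/2) |<Jz>|
print(sigma(Jx) * sigma(Jy), ">=", 0.5 * hbar * abs(expval(Jz)))
</syntaxhighlight>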

Limitations

The derivation of the Robertson inequality for operators <math>\hat{A}</math> and <math>\hat{B}</math> requires <math>\hat{A}\hat{B}\psi</math> and <math>\hat{B}\hat{A}\psi</math> to be defined. There are quantum systems where these conditions are not valid.<ref>Template:Cite journal</ref> One example is a quantum particle on a ring, where the wave function depends on an angular variable <math>\theta</math> in the interval <math>[0,2\pi]</math>. Define "position" and "momentum" operators <math>\hat{A}</math> and <math>\hat{B}</math> by <math display="block">\hat{A}\psi(\theta)=\theta\psi(\theta),\quad \theta\in [0,2\pi],</math> and <math display="block">\hat{B}\psi=-i\hbar\frac{d\psi}{d\theta},</math> with periodic boundary conditions on <math>\hat{B}</math>. The definition of <math>\hat{A}</math> depends on taking the <math>\theta</math> range to be from 0 to <math>2\pi</math>. These operators satisfy the usual commutation relations for position and momentum operators, <math>[\hat{A},\hat{B}]=i\hbar</math>. More precisely, <math>\hat{A}\hat{B}\psi-\hat{B}\hat{A}\psi=i\hbar\psi</math> whenever both <math>\hat{A}\hat{B}\psi</math> and <math>\hat{B}\hat{A}\psi</math> are defined, and the space of such <math>\psi</math> is a dense subspace of the quantum Hilbert space.<ref>Template:Citation</ref>

Now let <math>\psi</math> be any of the eigenstates of <math>\hat{B}</math>, which are given by <math>\psi(\theta)=e^{in\theta}</math> for integer <math>n</math>. These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator <math>\hat{A}</math> is bounded, since <math>\theta</math> ranges over a bounded interval. Thus, in the state <math>\psi</math>, the uncertainty of <math>B</math> is zero and the uncertainty of <math>A</math> is finite, so that <math display="block">\sigma_A\sigma_B=0.</math> The Robertson uncertainty principle does not apply in this case: <math>\psi</math> is not in the domain of the operator <math>\hat{B}\hat{A}</math>, since multiplication by <math>\theta</math> disrupts the periodic boundary conditions imposed on <math>\hat{B}</math>.<ref name="Hall2013">Template:Citation</ref>
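
A short numerical sketch of this counterexample (illustrative; the grid and the value of n are arbitrary assumptions): for a momentum eigenstate on the ring, the position spread is finite and the momentum spread vanishes, so the product falls below ℏ/2.

<syntaxhighlight lang="python">
import numpy as np

hbar = 1.0
n = 3                                                  # any integer labels a momentum eigenstate
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
dtheta = theta[1] - theta[0]

psi = np.exp(1j * n * theta) / np.sqrt(2.0 * np.pi)    # normalized eigenstate of B
prob = np.abs(psi) ** 2                                # uniform on [0, 2*pi)

mean_theta = (theta * prob).sum() * dtheta
sigma_A = np.sqrt(((theta - mean_theta) ** 2 * prob).sum() * dtheta)   # = pi / sqrt(3)
sigma_B = 0.0                                          # exact momentum eigenstate on the ring

print(sigma_A * sigma_B, "<", hbar / 2)                # the product is zero here
</syntaxhighlight>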

For the usual position and momentum operators <math>\hat{X}</math> and <math>\hat{P}</math> on the real line, no such counterexamples can occur. As long as <math>\sigma_x</math> and <math>\sigma_p</math> are defined in the state <math>\psi</math>, the Heisenberg uncertainty principle holds, even if <math>\psi</math> fails to be in the domain of <math>\hat{X}\hat{P}</math> or of <math>\hat{P}\hat{X}</math>.<ref>Template:Citation</ref>

Mixed states

The Robertson–Schrödinger uncertainty can be improved by noting that it must hold for all components <math>\varrho_k</math> in any decomposition of the density matrix given as <math display="block"> \varrho=\sum_k p_k \varrho_k. </math> Here the probabilities satisfy <math>p_k\ge0</math> and <math>\sum_k p_k=1</math>. Then, using the relation <math display="block"> \sum_k a_k \sum_k b_k \ge \left(\sum_k \sqrt{a_k b_k}\right)^2 </math> for <math> a_k,b_k\ge 0</math>, it follows that<ref name="PhysRevResearch21">Template:Cite journal</ref> <math display="block"> \sigma_A^2 \sigma_B^2 \geq \left[\sum_k p_k L(\varrho_k)\right]^2, </math> where the function in the bound is defined as <math display="block"> L(\varrho) = \sqrt{\left | \frac{1}{2}\operatorname{tr}(\rho\{A,B\}) - \operatorname{tr}(\rho A)\operatorname{tr}(\rho B)\right |^2 +\left | \frac{1}{2i} \operatorname{tr}(\rho[A,B])\right | ^2}. </math> The above relation very often has a bound larger than that of the original Robertson–Schrödinger uncertainty relation. Thus, one calculates the Robertson–Schrödinger bound for each component of a decomposition of the quantum state rather than for the state itself, and averages the square roots <math>L(\varrho_k)</math> of these bounds. The following expression is stronger than the Robertson–Schrödinger uncertainty relation <math display="block"> \sigma_A^2 \sigma_B^2 \geq \left[\max_{p_k,\varrho_k} \sum_k p_k L(\varrho_k)\right]^2, </math> where on the right-hand side there is a concave roof over the decompositions of the density matrix. The improved relation above is saturated by all single-qubit quantum states.<ref name="PhysRevResearch21" />

With similar arguments, one can derive a relation with a convex roof on the right-hand side<ref name="PhysRevResearch21" /> <math display="block"> \sigma_A^2 F_Q[\varrho,B] \geq 4 \left[\min_{p_k,\Psi_k} \sum_k p_k L(\vert \Psi_k\rangle\langle \Psi_k\vert)\right]^2 </math> where <math>F_Q[\varrho,B]</math> denotes the quantum Fisher information and the density matrix is decomposed to pure states as <math display="block"> \varrho=\sum_k p_k \vert \Psi_k\rangle \langle \Psi_k\vert. </math>

The derivation takes advantage of the fact that the quantum Fisher information is the convex roof of the variance times four.<ref>Template:Cite journal</ref><ref>Template:Cite arXiv</ref>

A simpler inequality follows without a convex roof<ref>Template:Cite journal</ref> <math display="block"> \sigma_A^2 F_Q[\varrho,B] \geq \vert \langle i[A,B]\rangle\vert^2, </math> which is stronger than the Heisenberg uncertainty relation, since for the quantum Fisher information we have <math display="block"> F_Q[\varrho,B]\le 4 \sigma_B^2, </math> while for pure states the equality holds.

The Maccone–Pati uncertainty relations

The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be an eigenstate of one of the observables. The stronger uncertainty relations proved by Lorenzo Maccone and Arun K. Pati give non-trivial bounds on the sum of the variances for two incompatible observables.<ref>Template:Cite journal</ref> (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., Ref.<ref>Template:Cite journal</ref> due to Yichen Huang.) For two non-commuting observables <math>A</math> and <math>B</math> the first stronger uncertainty relation is given by <math display="block"> \sigma_{A}^2 + \sigma_{ B}^2 \ge \pm i \langle \Psi\mid [A, B]|\Psi \rangle + \mid \langle \Psi\mid(A \pm i B)\mid{\bar \Psi} \rangle|^2, </math> where <math> \sigma_{A}^2 = \langle \Psi |A^2 |\Psi \rangle - \langle \Psi \mid A \mid \Psi \rangle^2 </math>, <math> \sigma_{B}^2 = \langle \Psi |B^2 |\Psi \rangle - \langle \Psi \mid B \mid\Psi \rangle^2 </math>, <math>|{\bar \Psi} \rangle </math> is a normalized vector that is orthogonal to the state of the system <math>|\Psi \rangle </math>, and one should choose the sign of <math>\pm i \langle \Psi\mid[A, B]\mid\Psi \rangle </math> to make this real quantity a positive number.

The second stronger uncertainty relation is given by <math display="block"> \sigma_A^2 + \sigma_B^2 \ge \frac{1}{2}| \langle {\bar \Psi}_{A+B} \mid(A + B)\mid \Psi \rangle|^2 </math> where <math>| {\bar \Psi}_{A+B} \rangle </math> is a state orthogonal to <math> |\Psi \rangle </math>. The form of <math>| {\bar \Psi}_{A+B} \rangle </math> implies that the right-hand side of the new uncertainty relation is nonzero unless <math>| \Psi\rangle </math> is an eigenstate of <math>(A + B)</math>. One may note that <math>|\Psi \rangle </math> can be an eigenstate of <math>( A+ B)</math> without being an eigenstate of either <math> A</math> or <math> B </math>. However, when <math> |\Psi \rangle </math> is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. But the lower bound in the new relation is nonzero unless <math> |\Psi \rangle </math> is an eigenstate of both.

Energy–time

Template:Anchor An energy–time uncertainty relation like <math display="block"> \Delta E \Delta t \gtrsim \hbar/2,</math> has a long, controversial history; the meaning of <math>\Delta t</math> and <math>\Delta E</math> varies and different formulations have different arenas of validity.<ref name="Busch2002">Template:Cite book</ref> However, one well-known application is both well established<ref>Template:Cite book</ref><ref name=Hilgevoord/> and experimentally verified:<ref>Template:Cite journal</ref><ref>Template:Cite book</ref> the connection between the lifetime of a resonance state, <math>\tau_{\sqrt{1/2}}</math>, and its energy width <math>\Delta E</math>: <math display=block>\tau_{\sqrt{1/2}} \Delta E = \pi\hbar/4.</math> In particle physics, widths from experimental fits to the Breit–Wigner energy distribution are used to characterize the lifetime of quasi-stable or decaying states.<ref>Template:Cite journal</ref>

An informal, heuristic meaning of the principle is the following:<ref>Karplus, Martin, and Porter, Richard Needham (1970). Atoms and Molecules. California: Benjamin Cummings. p. 68 Template:ISBN. Template:Oclc</ref> A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth.<ref>The broad linewidth of fast-decaying states makes it difficult to accurately measure the energy of the state, and researchers have even used detuned microwave cavities to slow down the decay rate, to get sharper peaks. Template:Cite journal</ref> The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width).
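
The lifetime–linewidth connection can be illustrated with an order-of-magnitude estimate using ΔE ≈ ℏ/(2τ); the exact numerical prefactor depends on the convention and on the line shape, and the lifetime values below are merely representative:

<syntaxhighlight lang="python">
# Order-of-magnitude linewidth estimates from lifetimes, using Delta_E ~ hbar / (2 tau).
# The numerical prefactor is convention-dependent; the lifetimes are only representative.
hbar_eVs = 6.582119569e-16          # reduced Planck constant in eV*s

for tau in (1.0e-8, 1.0e-23):       # a typical atomic excited state; a strongly decaying hadron
    width_eV = hbar_eVs / (2.0 * tau)
    print(f"lifetime {tau:.0e} s  ->  natural width ~ {width_eV:.1e} eV")
</syntaxhighlight>

The first case gives a width of order 10<sup>−8</sup> eV, while the second gives tens of MeV, consistent with the qualitative statements above.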

Time in quantum mechanics

The concept of "time" in quantum mechanics offers many challenges.<ref name=HilgevoordConfusion/> There is no quantum theory of time measurement; relativity is both fundamental to time and difficult to include in quantum mechanics.<ref name="Busch2002"/> While position and momentum are associated with a single particle, time is a system property: it has no operator needed for the Robertson–Schrödinger relation.<ref name=Sen2014/> The mathematical treatment of stable and unstable quantum systems differ.<ref>Template:Cite journal</ref> These factors combine to make energy–time uncertainty principles controversial.

Three notions of "time" can be distinguished:<ref name="Busch2002"/> external, intrinsic, and observable. External or laboratory time is seen by the experimenter; intrinsic time is inferred by changes in dynamic variables, like the hands of a clock or the motion of a free particle; observable time concerns time as an observable, the measurement of time-separated events.

An external-time energy–time uncertainty principle might say that measuring the energy of a quantum system to an accuracy <math>\Delta E</math> requires a time interval <math>\Delta t > h/\Delta E</math>.<ref name=Hilgevoord>Template:Cite journal</ref> However, Yakir Aharonov and David Bohm<ref>Template:Cite journal</ref><ref name="Busch2002"/> have shown that, in some quantum systems, energy can be measured accurately within an arbitrarily short time: external-time uncertainty principles are not universal.

Intrinsic time is the basis for several formulations of energy–time uncertainty relations, including the Mandelstam–Tamm relation discussed in the next section. A physical system with an intrinsic time closely matching the external laboratory time is called a "clock".<ref name=HilgevoordConfusion>Template:Cite journal</ref>Template:Rp

Observable time, measuring time between two events, remains a challenge for quantum theories; some progress has been made using positive operator-valued measure concepts.<ref name="Busch2002"/>

Mandelstam–Tamm

In 1945, Leonid Mandelstam and Igor Tamm derived a non-relativistic time–energy uncertainty relation as follows.<ref>L. I. Mandelstam, I. E. Tamm, The uncertainty relation between energy and time in nonrelativistic quantum mechanics Template:Webarchive, 1945.</ref><ref name="Busch2002"/> From Heisenberg mechanics, the generalized Ehrenfest theorem for an observable B without explicit time dependence, represented by a self-adjoint operator <math>\hat B</math>, relates the time dependence of the average value of <math>\hat B</math> to the average of its commutator with the Hamiltonian:

<math display=block> \frac{d\langle \hat{B} \rangle}{dt} = \frac{i}{\hbar}\langle [\hat{H},\hat{B}]\rangle. </math>

The value of <math>\langle [\hat{H},\hat{B}]\rangle</math> is then substituted in the Robertson uncertainty relation for the energy operator <math>\hat H</math> and <math>\hat B</math>: <math display=block> \sigma_H\sigma_B \geq \left|\frac{1}{2i} \langle[ \hat{H}, \hat{B}] \rangle\right|, </math> giving <math display="block"> \sigma_H \frac{\sigma_B}{\left| \frac{d\langle \hat B \rangle}{dt}\right |} \ge \frac{\hbar}{2}</math> (whenever the denominator is nonzero). While this is a universal result, it depends upon the observable chosen and that the deviations <math>\sigma_H</math> and <math>\sigma_B</math> are computed for a particular state. Identifying <math>\Delta E \equiv \sigma_E </math> and the characteristic time <math display="block">\tau_B \equiv \frac{\sigma_B}{\left| \frac{d\langle \hat B \rangle}{dt}\right |}</math> gives an energy–time relationship <math>\Delta E \tau_B \ge \frac{\hbar}{2}.</math> Although <math>\tau_B</math> has the dimension of time, it is different from the time parameter t that enters the Schrödinger equation. This <math>\tau_B</math> can be interpreted as time for which the expectation value of the observable, <math>\langle \hat B \rangle,</math> changes by an amount equal to one standard deviation.<ref>Template:Cite book</ref> Examples:

  • The time a free quantum particle passes a point in space is more uncertain as the energy of the state is more precisely controlled: <math>\Delta T = \hbar/2\Delta E.</math> Since the time spread is related to the particle position spread and the energy spread is related to the momentum spread, this relation is directly related to position–momentum uncertainty.<ref name="GriffithsSchroeter2018" />Template:Rp
  • A Delta particle, a quasistable composite of quarks related to protons and neutrons, has a lifetime of 10−23 s, so its measured mass equivalent to energy, 1232 MeV/c2, varies by ±120 MeV/c2; this variation is intrinsic and not caused by measurement errors.<ref name="GriffithsSchroeter2018" />Template:Rp
  • Two energy states <math>\psi_{1,2}</math> with energies <math>E_{1,2}</math> are superimposed to create a composite state
<math display="block">\Psi(x,t) = a\psi_1(x) e^{-iE_1t/\hbar} + b\psi_2(x) e^{-iE_2t/\hbar}.</math>
The probability density of this state has a time-dependent interference term (for real <math>a</math>, <math>b</math>, and <math>\psi_{1,2}</math>):
<math display="block">|\Psi(x,t)|^2 = a^2|\psi_1(x)|^2 + b^2|\psi_2(x)|^2 + 2ab\,\psi_1(x)\psi_2(x)\cos\left(\frac{E_2 - E_1}{\hbar}t\right).</math>
The oscillation period varies inversely with the energy difference: <math>\tau = 2\pi\hbar/(E_2 - E_1)</math>.<ref name="GriffithsSchroeter2018" />Template:Rp

Each example has a different meaning for the time uncertainty, according to the observable and state used.

Quantum field theory

Some formulations of quantum field theory use temporary electron–positron pairs in their calculations called virtual particles. The mass-energy and lifetime of these particles are related by the energy–time uncertainty relation. The energy of a quantum system is not known with enough precision to limit its behavior to a single, simple history. Thus the influence of all histories must be incorporated into quantum calculations, including those with much greater or much less energy than the mean of the measured/calculated energy distribution.

The energy–time uncertainty principle does not temporarily violate conservation of energy; it does not imply that energy can be "borrowed" from the universe as long as it is "returned" within a short amount of time.<ref name="GriffithsSchroeter2018" />Template:Rp The energy of the universe is not an exactly known parameter at all times.<ref name=Sen2014/> When events transpire at very short time intervals, there is uncertainty in the energy of these events.

Harmonic analysis

Template:Main article In the context of harmonic analysis the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds, <math display="block">\left(\int_{-\infty}^\infty x^2 |f(x)|^2\,dx\right)\left(\int_{-\infty}^\infty \xi^2 |\hat{f}(\xi)|^2\,d\xi\right)\ge \frac{\|f\|_2^4}{16\pi^2}.</math>

Further mathematical uncertainty inequalities, including the entropic uncertainty discussed below, hold between a function Template:Mvar and its Fourier transform Template:Math:<ref>Template:Citation</ref><ref>Template:Citation</ref><ref>Template:Springer</ref> <math display="block">H_x+H_\xi \ge \log(e/2)</math>

Signal processing Template:Anchor

In the context of time–frequency analysis uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem", below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded domain)—see bandlimited versus timelimited. More accurately, the time-bandwidth or duration-bandwidth product satisfies <math display="block">\sigma_{t} \sigma_{f} \ge \frac{1}{4\pi} \approx 0.08 \text{ cycles},</math> where <math>\sigma_{t}</math> and <math>\sigma_{f}</math> are the standard deviations of the time and frequency energy concentrations respectively.<ref>Template:Cite book</ref> The minimum is attained for a Gaussian-shaped pulse (Gabor wavelet) [For the un-squared Gaussian (i.e. signal amplitude) and its un-squared Fourier transform magnitude <math>\sigma_t\sigma_f=1/2\pi</math>; squaring reduces each <math>\sigma</math> by a factor <math>\sqrt 2</math>.] Another common measure is the product of the time and frequency full width at half maximum (of the power/energy), which for the Gaussian equals <math>2 \ln 2 / \pi \approx 0.44</math> (see bandwidth-limited pulse).
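
A numerical sketch of the Gabor limit (illustrative only; the sampling grid and the two test pulses are arbitrary assumptions): the Gaussian pulse sits essentially at the bound 1/(4π), while a two-sided exponential pulse has a larger duration–bandwidth product.

<syntaxhighlight lang="python">
import numpy as np

t = np.linspace(-20.0, 20.0, 2**15, endpoint=False)
dt = t[1] - t[0]
f = np.fft.fftfreq(t.size, d=dt)                  # frequency grid in cycles per unit time
df = 1.0 / (t.size * dt)

def spread(grid, energy, step):
    energy = energy / (energy.sum() * step)
    mean = (grid * energy).sum() * step
    return np.sqrt(((grid - mean) ** 2 * energy).sum() * step)

def duration_bandwidth(signal):
    sigma_t = spread(t, np.abs(signal) ** 2, dt)
    sigma_f = spread(f, np.abs(np.fft.fft(signal)) ** 2, df)
    return sigma_t * sigma_f

gaussian = np.exp(-t ** 2 / (2.0 * 0.5 ** 2))     # Gabor-type pulse
two_sided_exp = np.exp(-np.abs(t))                # non-Gaussian comparison pulse

print(duration_bandwidth(gaussian), ">= 1/(4*pi) =", 1.0 / (4.0 * np.pi))
print(duration_bandwidth(two_sided_exp))          # strictly larger than the Gaussian value
</syntaxhighlight>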

Stated differently, one cannot simultaneously sharply localize a signal Template:Mvar in both the time domain and frequency domain.

When applied to filters, the result implies that one cannot simultaneously achieve high temporal resolution and high frequency resolution; a concrete example is the resolution issues of the short-time Fourier transform: if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off.

Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other.

As a result, in order to analyze signals where the transients are important, the wavelet transform is often used instead of the Fourier transform.

Discrete Fourier transform

Let <math>\left \{ \mathbf{ x_n } \right \} := x_0, x_1, \ldots, x_{N-1}</math> be a sequence of N complex numbers and <math>\left \{ \mathbf{X_k} \right \} := X_0, X_1, \ldots, X_{N-1},</math> be its discrete Fourier transform.

Denote by <math>\|x\|_0</math> the number of non-zero elements in the time sequence <math>x_0,x_1,\ldots,x_{N-1}</math> and by <math>\|X\|_0</math> the number of non-zero elements in the frequency sequence <math>X_0,X_1,\ldots,X_{N-1}</math>. Then, <math display="block">\|x\|_0 \cdot \|X\|_0 \ge N.</math>

This inequality is sharp, with equality achieved when x or X is a Dirac mass, or more generally when x is a nonzero multiple of a Dirac comb supported on a subgroup of the integers modulo N (in which case X is also a Dirac comb supported on a complementary subgroup, and vice versa).
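
A short numerical check of the support inequality (illustrative; N and the test vectors are arbitrary assumptions), including a Dirac-comb equality case:

<syntaxhighlight lang="python">
import numpy as np

def support_size(v, tol=1e-9):
    return int(np.count_nonzero(np.abs(v) > tol))

N = 12
comb = np.zeros(N)
comb[::3] = 1.0                   # Dirac comb on a subgroup of Z/NZ (every 3rd sample)
print(support_size(comb) * support_size(np.fft.fft(comb)), ">=", N)   # 4 * 3 = 12: equality

generic = np.random.default_rng(1).normal(size=N)
print(support_size(generic) * support_size(np.fft.fft(generic)), ">=", N)   # typically 12 * 12
</syntaxhighlight>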

More generally, if T and W are subsets of the integers modulo N, let <math>L_T,R_W : \ell^2(\mathbb Z/N\mathbb Z)\to\ell^2(\mathbb Z/N\mathbb Z)</math> denote the time-limiting operator and band-limiting operators, respectively. Then <math display="block">\|L_TR_W\|^2 \le \frac{|T||W|}{|G|} </math> where the norm is the operator norm of operators on the Hilbert space <math>\ell^2(\mathbb Z/N\mathbb Z)</math> of functions on the integers modulo N. This inequality has implications for signal reconstruction.<ref name="Donoho">Template:Cite journal</ref>

When N is a prime number, a stronger inequality holds: <math display="block">\|x\|_0 + \|X\|_0 \ge N + 1.</math> Discovered by Terence Tao, this inequality is also sharp.<ref>Template:Citation</ref>

Benedicks's theorem

The Amrein–Berthier<ref> Template:Citation</ref> and Benedicks<ref>Template:Citation</ref> theorem intuitively says that the set of points where Template:Mvar is non-zero and the set of points where Template:Math is non-zero cannot both be small.

Specifically, it is impossible for a function Template:Mvar in Template:Math and its Fourier transform Template:Math to both be supported on sets of finite Lebesgue measure. A more quantitative version is<ref>Template:Citation</ref><ref>Template:Citation</ref> <math display="block">\|f\|_{L^2(\mathbf{R}^d)}\leq Ce^{C|S||\Sigma|} \bigl(\|f\|_{L^2(S^c)} + \| \hat{f} \|_{L^2(\Sigma^c)} \bigr) ~.</math>

One expects that the factor Template:Math may be replaced by Template:Math, which is only known if either Template:Mvar or Template:Mvar is convex.

Hardy's uncertainty principle

The mathematician G. H. Hardy formulated the following uncertainty principle:<ref>Template:Citation</ref> it is not possible for Template:Mvar and Template:Math to both be "very rapidly decreasing". Specifically, if Template:Mvar in <math>L^2(\mathbb{R})</math> is such that <math display="block">|f(x)|\leq C(1+|x|)^Ne^{-a\pi x^2}</math> and <math display="block">|\hat{f}(\xi)|\leq C(1+|\xi|)^Ne^{-b\pi \xi^2}</math> (<math>C>0,N</math> an integer), then, if <math>ab>1</math>, <math>f=0</math>, while if <math>ab=1</math>, then there is a polynomial <math>P</math> of degree <math>\leq N</math> such that <math display="block">f(x)=P(x)e^{-a\pi x^2}. </math>

This was later improved as follows: if <math>f \in L^2(\mathbb{R}^d)</math> is such that <math display="block">\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}|f(x)||\hat{f}(\xi)|\frac{e^{\pi|\langle x,\xi\rangle|}}{(1+|x|+|\xi|)^N} \, dx \, d\xi < +\infty ~,</math> then <math display="block">f(x)=P(x)e^{-\pi\langle Ax,x\rangle} ~,</math> where Template:Mvar is a polynomial of degree Template:Math and Template:Mvar is a real Template:Math positive definite matrix.

This result was stated in Beurling's complete works without proof and proved in Hörmander<ref>Template:Citation</ref> (the case <math>d=1,N=0</math>) and Bonami, Demange, and Jaming<ref>Template:Citation</ref> for the general case. Note that Hörmander–Beurling's version implies the case Template:Math in Hardy's Theorem while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref.<ref>Template:Citation</ref>

A full description of the case Template:Math as well as the following extension to Schwartz class distributions appears in ref.<ref>Template:Citation</ref>

Template:Math theorem

Additional uncertainty relations

Heisenberg limit

In quantum metrology, and especially interferometry, the Heisenberg limit is the optimal rate at which the accuracy of a measurement can scale with the energy used in the measurement. Typically, this is the measurement of a phase (applied to one arm of a beam-splitter) and the energy is given by the number of photons used in an interferometer. Although some claim to have broken the Heisenberg limit, this reflects disagreement on the definition of the scaling resource.<ref>Template:Cite journal; arXiv Template:Webarchive</ref> Suitably defined, the Heisenberg limit is a consequence of the basic principles of quantum mechanics and cannot be beaten, although the weak Heisenberg limit can be beaten.<ref>Template:Cite journal</ref>

Systematic and statistical errors

The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation <math>\sigma</math>. Heisenberg's original version, however, dealt with the systematic error, a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect.

If we let <math>\varepsilon_A</math> represent the error (i.e., inaccuracy) of a measurement of an observable A and <math>\eta_B</math> the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Masanao Ozawa, encompassing both systematic and statistical errors, holds:<ref name="Ozawa2003"/> Template:Equation box 1

Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of the Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as Template:Equation box 1 The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years.<ref>Template:Cite journal</ref><ref>Template:Cite journal</ref> Also, it must be stressed that the Heisenberg formulation does not take into account the intrinsic statistical errors <math>\sigma_A</math> and <math>\sigma_B</math>. There is increasing experimental evidence<ref name="Rozema"/><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref> that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all three terms of the Ozawa inequality.

Using the same formalism,<ref name="Sen2014"/> it is also possible to introduce a different kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time): Template:Equation box 1 The two simultaneous measurements on A and B are necessarily<ref>Template:Cite journal</ref> unsharp or weak.

It is also possible to derive an uncertainty relation that, like Ozawa's, combines both the statistical and systematic error components, but keeps a form very close to Heisenberg's original inequality. By adding the Robertson<ref name="Sen2014"/> Template:Equation box 1 and Ozawa relations we obtain <math display="block">\varepsilon_A \eta_B + \varepsilon_A \, \sigma_B + \sigma_A \, \eta_B + \sigma_A \sigma_B \geq \left|\Bigl\langle \bigl[\hat{A},\hat{B}\bigr] \Bigr\rangle \right| .</math> The four terms can be written as: <math display="block">(\varepsilon_A + \sigma_A) \, (\eta_B + \sigma_B) \, \geq \, \left|\Bigl\langle\bigl[\hat{A},\hat{B} \bigr] \Bigr\rangle \right| .</math> Defining: <math display="block">\bar \varepsilon_A \, \equiv \, (\varepsilon_A + \sigma_A)</math> as the inaccuracy in the measured values of the variable A and <math display="block">\bar \eta_B \, \equiv \, (\eta_B + \sigma_B)</math> as the resulting fluctuation in the conjugate variable B, Kazuo Fujikawa<ref>Template:Cite journal</ref> established an uncertainty relation similar to Heisenberg's original one, but valid both for systematic and statistical errors: Template:Equation box 1
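As a concrete case, for position and momentum, where <math>\left|\langle[\hat{x},\hat{p}]\rangle\right| = \hbar</math>, the combined relation above takes a form closely resembling Heisenberg's original statement,
<math display="block">\bar\varepsilon_x \, \bar\eta_p \;=\; (\varepsilon_x + \sigma_x)(\eta_p + \sigma_p) \;\geq\; \hbar.</math>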

Quantum entropic uncertainty principleEdit

For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period.<ref name="CarruthersNieto" /><ref>Template:Citation</ref><ref>Template:Citation</ref><ref>Template:Citation</ref> Other examples include highly bimodal distributions, or unimodal distributions with divergent variance.

A solution that overcomes these issues is an uncertainty relation based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic uncertainty.<ref>Template:Citation</ref> This conjecture, also studied by I. I. Hirschman<ref>Template:Citation</ref> and proven in 1975 by W. Beckner<ref name="Beckner">Template:Citation</ref> and by Iwo Bialynicki-Birula and Jerzy Mycielski,<ref name="BBM">Template:Citation</ref> is that, for two normalized, dimensionless Fourier transform pairs Template:Math and Template:Math where

<math>f(a) = \int_{-\infty}^\infty g(b)\ e^{2\pi i a b}\,db</math>Template:Spaces and Template:Spaces <math> \,\,\,g(b) = \int_{-\infty}^\infty f(a)\ e^{- 2\pi i a b}\,da</math>

the Shannon information entropies <math display="block">H_a = -\int_{-\infty}^\infty |f(a)|^2 \log |f(a)|^2\,da,</math> and <math display="block">H_b = -\int_{-\infty}^\infty |g(b)|^2 \log |g(b)|^2\,db</math> are subject to the following constraint, Template:Equation box 1 where the logarithms may be in any base.

The probability distribution functions associated with the position wave function Template:Math and the momentum wave function Template:Math have dimensions of inverse length and inverse momentum respectively, but the entropies may be rendered dimensionless by <math display="block">H_x = - \int |\psi(x)|^2 \ln \left(x_0 \, |\psi(x)|^2 \right) dx =-\left\langle \ln \left(x_0 \, \left|\psi(x)\right|^2 \right) \right\rangle</math> <math display="block">H_p = - \int |\varphi(p)|^2 \ln (p_0\,|\varphi(p)|^2) \,dp =-\left\langle \ln (p_0\left|\varphi(p)\right|^2 ) \right\rangle</math> where Template:Math and Template:Math are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function Template:Math and the momentum wavefunction Template:Math, the above constraint can be written for the corresponding entropies as Template:Equation box 1 where Template:Mvar is the Planck constant.

Depending on one's choice of the Template:Math product, the expression may be written in many ways. If Template:Math is chosen to be Template:Mvar, then <math display="block">H_x + H_p \ge \log \left(\frac{e}{2}\right)</math>

If, instead, Template:Math is chosen to be Template:Mvar, then <math display="block">H_x + H_p \ge \log (e\,\pi)</math>

If Template:Math and Template:Math are chosen to be unity in whatever system of units is being used, then <math display="block">H_x + H_p \ge \log \left(\frac{e\,h }{2}\right)</math> where Template:Mvar is interpreted as a dimensionless number equal to the value of the Planck constant in the chosen system of units. Note that these inequalities can be extended to multimode quantum states, or wavefunctions in more than one spatial dimension.<ref>Template:Cite journal</ref>

The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities<ref>Template:Citation</ref> <math display="block">H_x \le \frac{1}{2} \log ( 2e\pi \sigma_x^2 / x_0^2 )~,</math> <math display="block">H_p \le \frac{1}{2} \log ( 2e\pi \sigma_p^2 /p_0^2 )~,</math> (equivalently, from the fact that normal distributions maximize the entropy among all distributions with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations, because <math display="block">\sigma_x \sigma_p \ge \frac{\hbar}{2} \exp\left(H_x + H_p - \log \left(\frac{e\,h}{2\,x_0\,p_0}\right)\right) \ge \frac{\hbar}{2}~.</math>

In other words, the Heisenberg uncertainty principle is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities are in order. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall that the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance.
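A minimal numerical sketch (assuming natural logarithms, <math>x_0 = p_0 = 1</math>, <math>\hbar = 1</math> so that <math>h = 2\pi</math>, and an arbitrarily chosen packet width) illustrates that a Gaussian wave packet saturates the bound <math>H_x + H_p \ge \log(e\,h/2)</math>:
<syntaxhighlight lang="python">
import numpy as np

# Assumed conventions for this sketch: natural logs, x0 = p0 = 1, hbar = 1 (so h = 2*pi).
hbar = 1.0
h = 2 * np.pi * hbar
sigma_x = 0.7                        # arbitrary width of the Gaussian packet
sigma_p = hbar / (2 * sigma_x)       # minimum-uncertainty partner width

# Position- and momentum-space probability densities of the Gaussian packet
x = np.linspace(-15 * sigma_x, 15 * sigma_x, 100001)
p = np.linspace(-15 * sigma_p, 15 * sigma_p, 100001)
rho_x = np.exp(-x**2 / (2 * sigma_x**2)) / (np.sqrt(2 * np.pi) * sigma_x)
rho_p = np.exp(-p**2 / (2 * sigma_p**2)) / (np.sqrt(2 * np.pi) * sigma_p)

def entropy(rho, grid):
    """Differential Shannon entropy, -integral of rho*ln(rho), by a simple Riemann sum."""
    d = grid[1] - grid[0]
    return float(-np.sum(rho * np.log(rho)) * d)

H_x, H_p = entropy(rho_x, x), entropy(rho_p, p)
bound = np.log(np.e * h / 2)         # entropic bound log(e*h/2) for x0 = p0 = 1

print(H_x + H_p, bound)              # the two numbers agree up to grid error
</syntaxhighlight>
Any non-Gaussian packet would give a strictly larger sum of entropies.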

A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is <math display="block">\operatorname P[x_j]= \int_{(j-1/2)\delta x-c}^{(j+1/2)\delta x-c}| \psi(x)|^2 \, dx</math>

To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as <math display="block">H_x=-\sum_{j=-\infty}^\infty \operatorname P[x_j] \ln \operatorname P[x_j].</math>

Under the above definition, the entropic uncertainty relation is <math display="block">H_x + H_p > \ln\left(\frac{e}{2}\right)-\ln\left(\frac{\delta x \delta p}{h} \right).</math>

Here we note that Template:Math is a typical infinitesimal phase space volume used in the calculation of a partition function. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research.
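The discretized relation can likewise be checked numerically. The following minimal sketch (assuming <math>\hbar = 1</math> so that <math>h = 2\pi</math>, a zero-mean Gaussian packet, and hypothetical bin widths <math>\delta x = \delta p = 0.5</math>) evaluates the binned entropies via the Born rule and compares their sum with the bound:
<syntaxhighlight lang="python">
from math import erf, sqrt, log, pi, e

# Assumed conventions for this sketch: hbar = 1 (so h = 2*pi), zero-mean Gaussian packet.
hbar = 1.0
h = 2 * pi * hbar
sigma_x = 0.7                      # position-space width of the Gaussian packet
sigma_p = hbar / (2 * sigma_x)     # corresponding momentum-space width
dx, dp = 0.5, 0.5                  # bin widths (delta x, delta p) of the apparatus
c = 0.0                            # offset of the zeroth bin

def binned_probabilities(sigma, delta, offset, jmax=400):
    """P[j]: integral of a zero-mean Gaussian density over the j-th bin (Born rule)."""
    cdf = lambda t: 0.5 * (1.0 + erf(t / (sigma * sqrt(2.0))))
    return [cdf((j + 0.5) * delta - offset) - cdf((j - 0.5) * delta - offset)
            for j in range(-jmax, jmax + 1)]

def shannon(probabilities):
    return -sum(P * log(P) for P in probabilities if P > 0.0)

H_x = shannon(binned_probabilities(sigma_x, dx, c))
H_p = shannon(binned_probabilities(sigma_p, dp, c))
bound = log(e / 2) - log(dx * dp / h)

print(H_x + H_p, bound)            # H_x + H_p strictly exceeds the bound
</syntaxhighlight>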

Uncertainty relation with three angular momentum componentsEdit

For a particle of total angular momentum <math>j</math> the following uncertainty relation holds <math display="block"> \sigma_{J_x}^2+\sigma_{J_y}^2+\sigma_{J_z}^2\ge j, </math> where <math>J_l</math> are angular momentum components. The relation can be derived from <math display="block"> \langle J_x^2+J_y^2+J_z^2\rangle = j(j+1), </math> and <math display="block"> \langle J_x\rangle^2+\langle J_y\rangle^2+\langle J_z\rangle^2\le j^2. </math> The relation can be strengthened as<ref name="PhysRevResearch21" /><ref>Template:Cite journal</ref> <math display="block"> \sigma_{J_x}^2+\sigma_{J_y}^2+F_Q[\varrho,J_z]/4\ge j, </math> where <math>F_Q[\varrho,J_z]</math> is the quantum Fisher information.
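A minimal numerical check of the first relation (assuming <math>\hbar = 1</math>, spin <math>j = 1</math>, and randomly drawn pure states) is sketched below; the variance sum never falls below <math>j</math>:
<syntaxhighlight lang="python">
import numpy as np

# Spin-1 angular momentum matrices (hbar = 1); Jx^2 + Jy^2 + Jz^2 = j(j+1) * identity
j = 1
Jx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Jy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Jz = np.diag([1.0, 0.0, -1.0])

def variance(op, psi):
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ (op @ psi)).real
    return mean_sq - mean**2

rng = np.random.default_rng(0)
totals = []
for _ in range(1000):
    psi = rng.normal(size=3) + 1j * rng.normal(size=3)   # random pure state
    psi /= np.linalg.norm(psi)
    totals.append(sum(variance(op, psi) for op in (Jx, Jy, Jz)))

print(min(totals))   # stays at or above j = 1
</syntaxhighlight>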

HistoryEdit

Template:See also In 1925 Heisenberg published the Umdeutung (reinterpretation) paper, in which he showed that a central aspect of quantum theory was non-commutativity: the theory implied that the relative order of position and momentum measurement was significant. Working with Max Born and Pascual Jordan, he continued to develop matrix mechanics, which would become the first modern formulation of quantum mechanics.<ref>Template:Cite book</ref>

File:Heisenbergbohr.jpg
Werner Heisenberg and Niels Bohr

In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. Writing to Wolfgang Pauli in February 1927, he worked out the basic concepts.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>

In his celebrated 1927 paper "{{#invoke:Lang|lang}}" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement,<ref name=":0" /> but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. His paper gave an analysis in terms of a microscope that Bohr showed was incorrect; Heisenberg included an addendum to the publication.

In his 1930 Chicago lecture<ref name="Heisenberg_1930">Template:Citation English translation The Physical Principles of Quantum Theory. Chicago: University of Chicago Press, 1930.</ref> he refined his principle: Template:NumBlk

Later work broadened the concept. Any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote:

It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant.<ref>Heisenberg, W., Die Physik der Atomkerne, Taylor & Francis, 1952, p. 30.</ref>

Kennard<ref name="Kennard" /><ref name=Sen2014 />Template:Rp in 1927 first proved the modern inequality: Template:NumBlk where Template:Math, and Template:Math, Template:Math are the standard deviations of position and momentum. (Heisenberg only proved relation (Template:EquationNote) for the special case of Gaussian states.<ref name="Heisenberg_1930"/>) In 1929, Robertson generalized the inequality to all observables, and in 1930 Schrödinger extended the form to allow non-zero covariance of the operators; this result is referred to as the Robertson–Schrödinger inequality.<ref name=Sen2014 />Template:Rp

Terminology and translationEdit

Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word "Ungenauigkeit"<ref name=":0" /> to describe the basic theoretical principle. Only in the endnote did he switch to the word "Unsicherheit". Later on, he always used "Unbestimmtheit". When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, only the English word "uncertainty" was used, and it became the standard term in the English language.<ref>Template:Citation</ref>

Heisenberg's microscopeEdit

File:Heisenberg gamma ray microscope.svg
Heisenberg's gamma-ray microscope for locating an electron (shown in blue). The incoming gamma ray (shown in green) is scattered by the electron up into the microscope's aperture angle θ. The scattered gamma-ray is shown in red. Classical optics shows that the electron position can be resolved only up to an uncertainty Δx that depends on θ and the wavelength λ of the incoming light.

Template:Main article

The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurement schemes intended to violate it were bound to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by using the observer effect of an imaginary microscope as a measuring device.<ref name="Heisenberg_1930"/>

He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it.<ref name=GreensteinZajonc2006>Template:Cite book</ref>Template:Rp

  • Problem 1 – If the photon has a short wavelength, and therefore, a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely.
  • Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum and hence, the new momentum of the electron resolves poorly. If a small aperture is used, the trade-off is reversed: the momentum transferred to the electron is better determined, but its position is resolved only poorly.

The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to the Planck constant.<ref>Template:Citation</ref> Heisenberg did not care to formulate the uncertainty principle as an exact limit; he preferred to use it as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable.
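A common back-of-the-envelope version of the argument (a heuristic estimate, not Heisenberg's detailed treatment) is the following: with light of wavelength <math>\lambda</math> collected over an aperture half-angle <math>\theta</math>, the position can be resolved only to about <math>\Delta x \approx \lambda/\sin\theta</math>, while the transverse momentum imparted to the electron by the scattered photon is uncertain by roughly <math>\Delta p \approx (h/\lambda)\sin\theta</math>, so that
<math display="block">\Delta x \, \Delta p \approx \frac{\lambda}{\sin\theta} \cdot \frac{h}{\lambda}\sin\theta = h,</math>
independently of the wavelength and aperture chosen, up to factors of order unity.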

Intrinsic quantum uncertaintyEdit

Historically, the uncertainty principle has been confused<ref>Template:Citation</ref><ref name="Ozawa2003">Template:Citation</ref> with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system,<ref>Template:Citation</ref><ref>Template:Citation</ref> that is, without changing something in a system. Heisenberg used such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.<ref>Werner Heisenberg, The Physical Principles of the Quantum Theory, p. 20</ref> It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems,<ref name="Rozema">Template:Cite journal</ref> and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects.<ref>Template:Cite journal</ref> Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology.<ref name=nptel>Template:YouTube</ref>

Critical reactionsEdit

Template:Main article

The Copenhagen interpretation of quantum mechanics and Heisenberg's uncertainty principle were, in fact, initially seen as twin targets by detractors. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be.

Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years.

Ideal detached observerEdit

Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German):

<blockquote>"Like the moon has a definite position," Einstein said to me last winter, "whether or not we look at the moon, the same must also hold for the atomic objects, as there is no sharp distinction possible between these and macroscopic objects. Observation cannot create an element of reality like a position, there must be something contained in the complete description of physical reality which corresponds to the possibility of observing a position, already before the observation has been actually made." I hope, that I quoted Einstein correctly; it is always difficult to quote somebody out of memory with whom one does not agree. It is precisely this kind of postulate which I call the ideal of the detached observer. (Letter from Pauli to Niels Bohr, February 15, 1955)<ref>Template:Cite book</ref></blockquote>

Einstein's slitEdit

The first of Einstein's thought experiments challenging the uncertainty principle went as follows:

Template:Quote

Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy Template:Math, the momentum of the wall must be known to this accuracy before the particle passes through. This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to Template:Math, and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement.

A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.<ref>Feynman lectures on Physics, vol 3, 2–2</ref>

Einstein's boxEdit

Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to the Planck constant."<ref name="Gamow">Gamow, G., The great physicists from Galileo to Einstein, Courier Dover, 1988, p.260.</ref> Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box."<ref>Kumar, M., Quantum: Einstein, Bohr and the Great Debate About the Nature of Reality, Icon, 2009, p. 282.</ref> "Now, weigh the box again. The change of mass tells the energy of the emitted light. In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle."<ref name="Gamow" />

Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the Earth's surface will result in an uncertainty in the rate of the clock",<ref>Gamow, G., The great physicists from Galileo to Einstein, Courier Dover, 1988, pp. 260–261. Template:ISBN?</ref> because of Einstein's own theory of gravity's effect on time. "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape."<ref>Template:Cite book</ref>

EPR paradox for entangled particlesEdit

Template:Main article In 1935, Einstein, Boris Podolsky and Nathan Rosen published an analysis of spatially separated entangled particles (EPR paradox).<ref>Template:Cite journal</ref> According to EPR, one could measure the position of one of the entangled particles and the momentum of the second particle, and from those measurements deduce the position and momentum of both particles to any precision, violating the uncertainty principle. In order to avoid such a possibility, the measurement of one particle must modify the probability distribution of the other particle instantaneously, possibly violating the principle of locality.<ref>Template:Cite book</ref>

In 1964, John Stewart Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out EPR's basic assumption of local hidden variables.

Popper's criticismEdit

Template:Main article Science philosopher Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist.<ref name="Popper1959">Template:Cite book</ref> He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations".<ref name="Popper1959" /><ref name="Jarvie2006">Template:Cite book</ref> In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory.

In 1934, Popper published {{#invoke:Lang|lang}} ("Critique of the Uncertainty Relations") in {{#invoke:Lang|lang}},<ref name="Popper1934">Template:Cite journal</ref> and in the same year {{#invoke:Lang|lang}} (translated and updated by the author as The Logic of Scientific Discovery in 1959<ref name="Popper1959" />), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing:

Template:Quote

Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Carl Friedrich von Weizsäcker, Heisenberg, and Einstein; Popper sent his paper to Einstein and it may have influenced the formulation of the EPR paradox.<ref name="Mehra2001">Template:Cite book</ref>Template:Rp

Free willEdit

Some scientists, including Arthur Compton<ref>Template:Cite journal</ref> and Martin Heisenberg,<ref>Template:Cite journal</ref> have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature.<ref name="ReferenceA">Template:Cite journal</ref> Proponents of this theory commonly say that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.<ref name="ReferenceA"/>

ThermodynamicsEdit

There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics.<ref>Template:Cite journal</ref> See Gibbs paradox.

Rejection of the principleEdit

Uncertainty principles relate quantum particles (electrons, for example) to classical concepts (position and momentum). This presumes quantum particles have position and momentum. Edwin C. Kemble pointed out<ref>Template:Cite book</ref>Template:Clarify inline in 1937 that such properties cannot be experimentally verified and that assuming they exist gives rise to many contradictions; similarly, Rudolf Haag notes that position in quantum mechanics is an attribute of an interaction, say between an electron and a detector, not an intrinsic property.<ref>Template:Cite bookTemplate:Page?Template:ISBN?</ref><ref>Template:Cite journal</ref> From this point of view the uncertainty principle is not a fundamental quantum property but a concept "carried over from the language of our ancestors", as Kemble says.

ApplicationsEdit

Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. All forms of spectroscopy, including particle physics, use the relationship to relate measured energy line-width to the lifetime of quantum states. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting<ref>Template:Cite journal</ref> or quantum optics<ref>Template:Cite journal</ref> systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers.<ref>Template:Cite journal</ref>

See alsoEdit

Template:Div col

Template:Div col end

ReferencesEdit

Template:Reflist

External linksEdit

Template:Sister project Template:Sister project

Template:Quantum mechanics topics Template:Authority control