===Quantum entropic uncertainty principle===
For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period.<ref name="CarruthersNieto" /><ref>{{Citation |first=D. |last=Judge |title=On the uncertainty relation for angle variables |journal=Il Nuovo Cimento |year=1964 |volume=31 |issue=2 |pages=332–340 |doi=10.1007/BF02733639 |bibcode=1964NCim...31..332J |s2cid=120553526 }}</ref><ref>{{Citation |first1=M. |last1=Bouten |first2=N. |last2=Maene |first3=P. |last3=Van Leuven |title=On an uncertainty relation for angle variables |journal=Il Nuovo Cimento |year=1965 |volume=37 |issue=3 |pages=1119–1125 |doi=10.1007/BF02773197 |bibcode=1965NCim...37.1119B |s2cid=122838645 }}</ref><ref>{{Citation |first=W. H. |last=Louisell |title=Amplitude and phase uncertainty relations |journal=Physics Letters |year=1963 |volume=7 |issue=1 |pages=60–61 |doi=10.1016/0031-9163(63)90442-6 |bibcode=1963PhL.....7...60L }}</ref> Other examples include highly [[bimodal distribution]]s, or [[unimodal distribution]]s with divergent variance. A solution that overcomes these issues is an uncertainty relation based on [[entropic uncertainty]] instead of the product of variances.

While formulating the [[many-worlds interpretation]] of quantum mechanics in 1957, [[Hugh Everett III]] conjectured a stronger extension of the uncertainty principle based on entropic uncertainty.<ref>{{Citation |last1=DeWitt |first1=B. S. |last2=Graham |first2=N. |year=1973 |title=The Many-Worlds Interpretation of Quantum Mechanics |location=Princeton |publisher=[[Princeton University Press]] |pages=52–53 |isbn=0-691-08126-3 }}</ref> This conjecture, also studied by I. I. Hirschman<ref>{{Citation |first=I. I. Jr. |last=Hirschman |title=A note on entropy |journal=[[American Journal of Mathematics]] |year=1957 |volume=79 |issue=1 |pages=152–156 |doi=10.2307/2372390 |postscript=. |jstor=2372390 }}</ref> and proven in 1975 by W. Beckner<ref name="Beckner">{{Citation |first=W. |last=Beckner |title=Inequalities in Fourier analysis |journal=[[Annals of Mathematics]] |volume=102 |issue=6 |year=1975 |pages=159–182 |doi=10.2307/1970980 |postscript=. |jstor=1970980 |pmid=16592223 |pmc=432369 }}</ref> and by Iwo Bialynicki-Birula and Jerzy Mycielski<ref name="BBM">{{Citation |first1=I. |last1=Bialynicki-Birula |last2=Mycielski |first2=J.
|title=Uncertainty Relations for Information Entropy in Wave Mechanics |journal=[[Communications in Mathematical Physics]] |volume=44 |year=1975 |pages=129–132 |doi=10.1007/BF01608825 |issue=2 |bibcode=1975CMaPh..44..129B |s2cid=122277352 |url=http://projecteuclid.org/euclid.cmp/1103899297 |access-date=2021-08-17 |archive-date=2021-02-08 |archive-url=https://web.archive.org/web/20210208011223/https://projecteuclid.org/euclid.cmp/1103899297 |url-status=live }}</ref> is that, for a Fourier transform pair of normalized, dimensionless functions {{math|''f''(''a'')}} and {{math|''g''(''b'')}}, where
:<math>f(a) = \int_{-\infty}^\infty g(b)\ e^{2\pi i a b}\,db</math>{{spaces|3}} and {{spaces|3}}<math>g(b) = \int_{-\infty}^\infty f(a)\ e^{- 2\pi i a b}\,da,</math>
the Shannon [[Information entropy|information entropies]]
<math display="block">H_a = -\int_{-\infty}^\infty |f(a)|^2 \log |f(a)|^2\,da</math>
and
<math display="block">H_b = -\int_{-\infty}^\infty |g(b)|^2 \log |g(b)|^2\,db</math>
are subject to the following constraint,
{{Equation box 1
|indent =:
|equation =<math>H_a + H_b \ge \log (e/2)</math>
|cellpadding= 6
|border
|border colour = #0073CF
|background colour=#F5FFFA}}
where the logarithms may be in any base.

The probability distribution functions associated with the position wave function {{math|''ψ''(''x'')}} and the momentum wave function {{math|''φ''(''p'')}} have dimensions of inverse length and momentum respectively, but the entropies may be rendered dimensionless by
<math display="block">H_x = - \int |\psi(x)|^2 \ln \left(x_0 \, |\psi(x)|^2 \right) dx =-\left\langle \ln \left(x_0 \, \left|\psi(x)\right|^2 \right) \right\rangle</math>
<math display="block">H_p = - \int |\varphi(p)|^2 \ln (p_0\,|\varphi(p)|^2) \,dp =-\left\langle \ln (p_0\left|\varphi(p)\right|^2 ) \right\rangle</math>
where {{math|''x''<sub>0</sub>}} and {{math|''p''<sub>0</sub>}} are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the [[Wavefunction#Relation between wave functions|Fourier transform relation]] between the position wave function {{math|''ψ''(''x'')}} and the momentum wave function {{math|''φ''(''p'')}}, the above constraint can be written for the corresponding entropies as
{{Equation box 1
|indent =:
|equation = <math>H_x + H_p \ge \log \left(\frac{e\,h}{2\,x_0\,p_0}\right)</math>
|cellpadding= 6
|border
|border colour = #0073CF
|background colour=#F5FFFA}}
where {{mvar|h}} is the [[Planck constant]].

Depending on one's choice of the {{math|''x''<sub>0</sub> ''p''<sub>0</sub>}} product, the expression may be written in many ways. If {{math|''x''<sub>0</sub> ''p''<sub>0</sub>}} is chosen to be {{mvar|h}}, then
<math display="block">H_x + H_p \ge \log \left(\frac{e}{2}\right).</math>
If, instead, {{math|''x''<sub>0</sub> ''p''<sub>0</sub>}} is chosen to be {{mvar|ħ}}, then
<math display="block">H_x + H_p \ge \log (e\,\pi).</math>
If {{math|''x''<sub>0</sub>}} and {{math|''p''<sub>0</sub>}} are chosen to be unity in whatever system of units is being used, then
<math display="block">H_x + H_p \ge \log \left(\frac{e\,h }{2}\right),</math>
where {{mvar|h}} is interpreted as a dimensionless number equal to the value of the Planck constant in the chosen system of units.
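The bound is straightforward to check numerically. The sketch below is an illustration, not a standard derivation: it evaluates {{math|''H<sub>x</sub>'' + ''H<sub>p</sub>''}} for a Gaussian wave packet in units where {{math|''ħ'' {{=}} ''x''<sub>0</sub> {{=}} ''p''<sub>0</sub> {{=}} 1}}, so that {{math|''x''<sub>0</sub>''p''<sub>0</sub> {{=}} ''ħ''}} and the bound is {{math|log(''eπ'')}}. The width <code>s</code> and the grid parameters are arbitrary illustrative choices.
<syntaxhighlight lang="python">
# Numerical sketch: entropic uncertainty of a Gaussian wave packet.
# Units: hbar = x_0 = p_0 = 1, so the bound is ln(e*pi) ~ 2.1447.
import numpy as np

s = 0.7                                        # arbitrary position standard deviation
x = np.linspace(-40.0, 40.0, 200001)           # grid reused for both densities
dx = x[1] - x[0]

rho_x = np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))  # |psi(x)|^2
rho_p = np.exp(-2 * s**2 * x**2) * s * np.sqrt(2 / np.pi)      # |phi(p)|^2, sigma_p = 1/(2s)

def shannon(rho):
    """-sum(rho * ln(rho)) * dx, treating 0*ln(0) as 0 where the density underflows."""
    safe = np.where(rho > 0, rho, 1.0)         # ln(1) = 0 contributes nothing where rho == 0
    return -np.sum(rho * np.log(safe)) * dx

H_x, H_p = shannon(rho_x), shannon(rho_p)
print(H_x + H_p, ">=", np.log(np.e * np.pi))   # both ~2.1447: the Gaussian saturates the bound
</syntaxhighlight>
Since the state is Gaussian, the printed sum matches the bound; as discussed below, the normal distribution is the only state that saturates it, so any other normalized wave function inserted here gives a strictly larger sum.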
Note that these inequalities can be extended to multimode quantum states, or wave functions in more than one spatial dimension.<ref>{{cite journal |last1=Huang |first1=Yichen |title=Entropic uncertainty relations in multidimensional position and momentum spaces |journal=Physical Review A |date=24 May 2011 |volume=83 |issue=5 |page=052124 |doi=10.1103/PhysRevA.83.052124 |bibcode=2011PhRvA..83e2124H |arxiv=1101.2944 |s2cid=119243096 }}</ref>

The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse [[logarithmic Sobolev inequalities]]<ref>{{citation |first=D. |last=Chafaï |chapter=Gaussian maximum of entropy and reversed log-Sobolev inequality |arxiv=math/0102227 |doi=10.1007/978-3-540-36107-7_5 |year=2003 |isbn=978-3-540-00072-3 |pages=194–200 |title=Séminaire de Probabilités XXXVI |volume=1801 |series=Lecture Notes in Mathematics |s2cid=17795603 }}</ref>
<math display="block">H_x \le \frac{1}{2} \log ( 2e\pi \sigma_x^2 / x_0^2 )~,</math>
<math display="block">H_p \le \frac{1}{2} \log ( 2e\pi \sigma_p^2 /p_0^2 )~,</math>
(equivalently, from the fact that normal distributions maximize the entropy among all distributions with a given variance), it readily follows that this entropic uncertainty principle is ''stronger than the one based on standard deviations'', because
<math display="block">\sigma_x \sigma_p \ge \frac{\hbar}{2} \exp\left(H_x + H_p - \log \left(\frac{e\,h}{2\,x_0\,p_0}\right)\right) \ge \frac{\hbar}{2}~.</math>
In other words, the Heisenberg uncertainty principle is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities are in order. First, the choice of [[base e]] is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall that the [[Shannon entropy]] has been used, ''not'' the quantum [[von Neumann entropy]]. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the [[maximum entropy probability distribution]] among those with fixed variance (cf. [[differential entropy#Maximization in the normal distribution|here]] for proof).

{| class="toccolours collapsible collapsed" width="70%" style="text-align:left"
!Entropic uncertainty of the normal distribution
|-
|We demonstrate this method on the ground state of the QHO, which as discussed above saturates the usual uncertainty based on standard deviations. The length scale can be set to whatever is convenient, so we assign
<math display="block">x_0 = \sqrt{\frac{\hbar}{2m\omega}}</math>
<math display="block">\begin{align} \psi(x) &= \left(\frac{m \omega}{\pi \hbar}\right)^{1/4} \exp{\left( -\frac{m \omega x^2}{2\hbar}\right)} \\ &= \left(\frac{1}{2\pi x_0^2}\right)^{1/4} \exp{\left( -\frac{x^2}{4x_0^2}\right)} \end{align}</math>
The probability distribution is the normal distribution
<math display="block">|\psi(x)|^2 = \frac{1}{x_0 \sqrt{2\pi}} \exp{\left( -\frac{x^2}{2x_0^2}\right)}</math>
with Shannon entropy
<math display="block">\begin{align} H_x &= - \int |\psi(x)|^2 \ln (|\psi(x)|^2 \cdot x_0 ) \,dx \\ &= -\frac{1}{x_0 \sqrt{2\pi}} \int_{-\infty}^\infty \exp{\left( -\frac{x^2}{2x_0^2}\right)} \ln \left[\frac{1}{\sqrt{2\pi}} \exp{\left( -\frac{x^2}{2x_0^2}\right)}\right] \, dx \\ &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty \exp{\left( -\frac{u^2}{2}\right)} \left[\ln(\sqrt{2\pi}) + \frac{u^2}{2}\right] \, du\\ &= \ln(\sqrt{2\pi}) + \frac{1}{2}.
\end{align}</math>
A completely analogous calculation proceeds for the momentum distribution. Choosing a standard momentum of <math>p_0=\hbar/x_0</math>:
<math display="block">\varphi(p) = \left(\frac{2 x_0^2}{\pi \hbar^2}\right)^{1/4} \exp{\left( -\frac{x_0^2 p^2}{\hbar^2}\right)}</math>
<math display="block">|\varphi(p)|^2 = \sqrt{\frac{2 x_0^2}{\pi \hbar^2}} \exp{\left( -\frac{2x_0^2 p^2}{\hbar^2}\right)}</math>
<math display="block">\begin{align} H_p &= - \int |\varphi(p)|^2 \ln (|\varphi(p)|^2 \cdot \hbar / x_0 ) \,dp \\ &= -\sqrt{\frac{2 x_0^2}{\pi \hbar^2}} \int_{-\infty}^\infty \exp{\left( -\frac{2x_0^2 p^2}{\hbar^2}\right)} \ln \left[\sqrt{\frac{2}{\pi}} \exp{\left( -\frac{2x_0^2 p^2}{\hbar^2}\right)}\right] \, dp \\ &= \sqrt{\frac{2}{\pi}} \int_{-\infty}^\infty \exp{\left( -2v^2\right)} \left[\ln\left(\sqrt{\frac{\pi}{2}}\right) + 2v^2 \right] \, dv \\ &= \ln\left(\sqrt{\frac{\pi}{2}}\right) + \frac{1}{2}. \end{align}</math>
The entropic uncertainty is therefore the limiting value
<math display="block">\begin{align} H_x+H_p &= \ln(\sqrt{2\pi}) + \frac{1}{2} + \ln\left(\sqrt{\frac{\pi}{2}}\right) + \frac{1}{2}\\ &= 1 + \ln \pi = \ln(e\pi). \end{align}</math>
|}

A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let ''δx'' be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset ''c''. The probability of lying within the ''j''th interval of width ''δx'' is
<math display="block">\operatorname P[x_j]= \int_{(j-1/2)\delta x-c}^{(j+1/2)\delta x-c}| \psi(x)|^2 \, dx</math>
To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as
<math display="block">H_x=-\sum_{j=-\infty}^\infty \operatorname P[x_j] \ln \operatorname P[x_j].</math>
Under the above definition, the entropic uncertainty relation is
<math display="block">H_x + H_p > \ln\left(\frac{e}{2}\right)-\ln\left(\frac{\delta x \delta p}{h} \right).</math>
Here we note that {{math|''δx'' ''δp''/''h''}} is a typical infinitesimal phase space volume used in the calculation of a [[partition function (statistical mechanics)|partition function]]. The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research. The two collapsed examples below evaluate these binned entropies for specific states; short numerical sketches of both calculations follow the examples.

{| class="toccolours collapsible collapsed" width="70%" style="text-align:left"
!Normal distribution example
|-
|We demonstrate this method first on the ground state of the QHO, which as discussed above saturates the usual uncertainty based on standard deviations.
<math display="block">\psi(x)=\left(\frac{m \omega}{\pi \hbar}\right)^{1/4} \exp{\left( -\frac{m \omega x^2}{2\hbar}\right)}</math>
The probability of lying within one of these bins can be expressed in terms of the [[error function]].
<math display="block">\begin{align} \operatorname P[x_j] &= \sqrt{\frac{m \omega}{\pi \hbar}} \int_{(j-1/2)\delta x}^{(j+1/2)\delta x} \exp\left( -\frac{m \omega x^2}{\hbar}\right) \, dx \\ &= \sqrt{\frac{1}{\pi}} \int_{(j-1/2)\delta x\sqrt{m \omega / \hbar}}^{(j+1/2)\delta x\sqrt{m \omega / \hbar}} e^{u^2} \, du \\ &= \frac{1}{2} \left[ \operatorname{erf} \left( \left(j+\frac{1}{2}\right)\delta x \cdot \sqrt{\frac{m \omega}{\hbar}}\right)- \operatorname {erf} \left( \left(j-\frac{1}{2}\right)\delta x \cdot \sqrt{\frac{m \omega}{\hbar}}\right) \right] \end{align}</math> The momentum probabilities are completely analogous. <math display="block">\operatorname P[p_j] = \frac{1}{2} \left[ \operatorname{erf} \left( \left(j+\frac{1}{2}\right)\delta p \cdot \frac{1}{\sqrt{\hbar m \omega}}\right)- \operatorname{erf} \left( \left(j-\frac{1}{2}\right)\delta x \cdot \frac{1}{\sqrt{\hbar m \omega}}\right) \right]</math> For simplicity, we will set the resolutions to <math display="block">\delta x = \sqrt{\frac{h}{m \omega}}</math> <math display="block">\delta p = \sqrt{h m \omega}</math> so that the probabilities reduce to <math display="block">\operatorname P[x_j] = \operatorname P[p_j] = \frac{1}{2} \left[ \operatorname {erf} \left( \left(j+\frac{1}{2}\right) \sqrt{2\pi} \right)- \operatorname {erf} \left( \left(j-\frac{1}{2}\right) \sqrt{2\pi} \right) \right]</math> The Shannon entropy can be evaluated numerically. <math display="block">\begin{align} H_x = H_p &= -\sum_{j=-\infty}^\infty \operatorname P[x_j] \ln \operatorname P[x_j] \\ &= -\sum_{j=-\infty}^\infty \frac{1}{2} \left[ \operatorname {erf} \left( \left(j+\frac{1}{2}\right) \sqrt{2\pi} \right)- \operatorname {erf} \left( \left(j-\frac{1}{2}\right) \sqrt{2\pi} \right) \right] \ln \frac{1}{2} \left[ \operatorname {erf} \left( \left(j+\frac{1}{2}\right) \sqrt{2\pi} \right)- \operatorname {erf} \left( \left(j-\frac{1}{2}\right) \sqrt{2\pi} \right) \right] \\ &\approx 0.3226 \end{align}</math> The entropic uncertainty is indeed larger than the limiting value. <math display="block">H_x + H_p \approx 0.3226 + 0.3226 = 0.6452 >\ln\left(\frac{e}{2}\right)-\ln 1 \approx 0.3069</math> Note that despite being in the optimal case, the inequality is not saturated. |} {| class="toccolours collapsible collapsed" width="70%" style="text-align:left" !Sinc function example |- |An example of a unimodal distribution with infinite variance is the [[sinc function]]. If the wave function is the correctly normalized uniform distribution, <math display="block">\psi(x) = \begin{cases} {1}/{\sqrt{2a}} & \text{for } |x| \le a, \\[8pt] 0 & \text{for } |x|>a \end{cases}</math> then its Fourier transform is the sinc function, <math display="block">\varphi(p)=\sqrt{\frac{a}{\pi \hbar}} \cdot \operatorname{sinc}\left(\frac{a p}{\hbar}\right)</math> which yields infinite momentum variance despite having a centralized shape. The entropic uncertainty, on the other hand, is finite. Suppose for simplicity that the spatial resolution is just a two-bin measurement, ''δx'' = ''a'', and that the momentum resolution is ''δp'' = ''h''/''a''. Partitioning the uniform spatial distribution into two equal bins is straightforward. We set the offset ''c'' = 1/2 so that the two bins span the distribution. 
<math display="block">\operatorname P[x_0] = \int_{-a}^0 \frac{1}{2a} \, dx = \frac{1}{2}</math> <math display="block">\operatorname P[x_1] = \int_0^a \frac{1}{2a} \, dx = \frac{1}{2}</math> <math display="block">H_x = -\sum_{j=0}^{1} \operatorname P[x_j] \ln \operatorname P[x_j] = -\frac{1}{2} \ln \frac{1}{2} - \frac{1}{2} \ln \frac{1}{2} = \ln 2</math> The bins for momentum must cover the entire real line. As done with the spatial distribution, we could apply an offset. It turns out, however, that the Shannon entropy is minimized when the zeroth bin for momentum is centered at the origin. (The reader is encouraged to try adding an offset.) The probability of lying within an arbitrary momentum bin can be expressed in terms of the [[sine integral]]. <math display="block">\begin{align} \operatorname P[p_j] &= \frac{a}{\pi \hbar} \int_{(j-1/2)\delta p}^{(j+1/2)\delta p} \operatorname{sinc}^2\left(\frac{a p}{\hbar}\right) \, dp \\ &= \frac{1}{\pi} \int_{2\pi (j-1/2)}^{2\pi (j+1/2)} \operatorname{sinc}^2(u) \, du \\ &= \frac{1}{\pi} \left[ \operatorname {Si} ((4j+2)\pi)- \operatorname {Si} ((4j-2)\pi) \right] \end{align}</math> The Shannon entropy can be evaluated numerically. <math display="block">H_p = -\sum_{j=-\infty}^\infty \operatorname P[p_j] \ln \operatorname P[p_j] = -\operatorname P[p_0] \ln \operatorname P[p_0]-2 \cdot \sum_{j=1}^{\infty} \operatorname P[p_j] \ln \operatorname P[p_j] \approx 0.53</math> The entropic uncertainty is indeed larger than the limiting value. <math display="block">H_x+H_p \approx 0.69 + 0.53 = 1.22 >\ln\left(\frac{e}{2}\right)-\ln 1 \approx 0.31</math> |}