== Von Neumann entropy ==<!-- This section is linked from [[Physical information]] -->
{{main|Von Neumann entropy}}

Of particular significance for describing the randomness of a state is the von Neumann entropy of ''S'', formally defined by
<math display="block"> \operatorname{H}(S) = -\operatorname{Tr}(S \log_2 S). </math>
The operator ''S'' log<sub>2</sub> ''S'' is not necessarily trace class; however, if ''S'' is a non-negative self-adjoint operator that is not of trace class, we define Tr(''S'') = +∞. Note also that any density operator ''S'' can be diagonalized, so that it can be represented in some orthonormal basis by a (possibly infinite) matrix of the form
<math display="block"> \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 & \cdots \\ 0 & \lambda_2 & \cdots & 0 & \cdots\\ \vdots & \vdots & \ddots & \\ 0 & 0 & & \lambda_n & \\ \vdots & \vdots & & & \ddots \end{bmatrix} </math>
and we define
<math display="block"> \operatorname{H}(S) = - \sum_i \lambda_i \log_2 \lambda_i. </math>
The convention is that <math> 0 \log_2 0 = 0</math>, since an event with probability zero should not contribute to the entropy. This value is an extended real number (that is, it lies in [0, ∞]) and is clearly a unitary invariant of ''S''.

'''Remark'''. It is indeed possible that H(''S'') = +∞ for some density operator ''S''. For example, let ''T'' be the diagonal matrix
<math display="block"> T = \begin{bmatrix} \frac{1}{2 (\log_2 2)^2 }& 0 & \cdots & 0 & \cdots \\ 0 & \frac{1}{3 (\log_2 3)^2 } & \cdots & 0 & \cdots\\ \vdots & \vdots & \ddots & \\ 0 & 0 & & \frac{1}{n (\log_2 n)^2 } & \\ \vdots & \vdots & & & \ddots \end{bmatrix} </math>
Then ''T'' is non-negative and trace class, but ''T'' log<sub>2</sub> ''T'' is not trace class: the diagonal entries of −''T'' log<sub>2</sub> ''T'' dominate <math>\tfrac{1}{n \log_2 n}</math>, and <math>\textstyle\sum_n \tfrac{1}{n \log_2 n}</math> diverges.

{{math theorem | Entropy is a unitary invariant.}}

In analogy with [[Shannon entropy#Formal definitions|classical entropy]] (notice the similarity in the definitions), H(''S'') measures the amount of randomness in the state ''S''. The more dispersed the eigenvalues are, the larger the entropy of the system. For a system in which the space ''H'' is finite-dimensional, entropy is maximized by the states ''S'' which in diagonal form have the representation
<math display="block"> \begin{bmatrix} \frac{1}{n} & 0 & \cdots & 0 \\ 0 & \frac{1}{n} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{n} \end{bmatrix} </math>
For such an ''S'', each eigenvalue equals 1/''n'', so
<math display="block"> \operatorname{H}(S) = -\sum_{i=1}^n \frac{1}{n} \log_2 \frac{1}{n} = \log_2 n. </math>
The state ''S'' is called the maximally mixed state.

Recall that a [[pure state]] is one of the form
<math display="block"> S = | \psi \rangle \langle \psi |, </math>
for ψ a vector of norm 1.

{{math theorem | math_statement = {{math|1=H(''S'') = 0}} if and only if {{mvar|S}} is a pure state.}}

Indeed, {{mvar|S}} is a pure state if and only if its diagonal form has exactly one non-zero entry, which is a 1: each term −λ<sub>''i''</sub> log<sub>2</sub> λ<sub>''i''</sub> vanishes exactly when λ<sub>''i''</sub> is 0 or 1, and the condition Tr(''S'') = 1 then forces a single eigenvalue equal to 1.

Entropy can be used as a measure of [[quantum entanglement]].
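
The finite-dimensional statements above can be checked numerically. The following Python sketch (an illustration, not part of the formal development; the function name <code>von_neumann_entropy</code> and the dimension ''n'' = 4 are arbitrary choices) computes H(''S'') from the eigenvalues of a density matrix and verifies that the maximally mixed state attains log<sub>2</sub> ''n'' while a pure state has entropy zero:

<syntaxhighlight lang="python">
import numpy as np

def von_neumann_entropy(rho):
    """H(S) = -Tr(S log2 S), computed from the eigenvalues of rho."""
    eigenvalues = np.linalg.eigvalsh(rho)           # rho is Hermitian (self-adjoint)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]  # convention: 0 log2 0 = 0
    return float(-np.sum(eigenvalues * np.log2(eigenvalues)))

n = 4

# Maximally mixed state: all eigenvalues equal to 1/n.
maximally_mixed = np.eye(n) / n
print(von_neumann_entropy(maximally_mixed))  # 2.0, i.e. log2(4)

# Pure state S = |psi><psi| for a unit vector psi.
psi = np.zeros(n)
psi[0] = 1.0
pure = np.outer(psi, psi.conj())
print(von_neumann_entropy(pure))             # 0.0 (possibly printed as -0.0)
</syntaxhighlight>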
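
The remark that H(''T'') = +∞ can be made tangible in the same spirit: truncating ''T'' to its first ''m'' diagonal entries gives entropy partial sums that grow without bound, though only at roughly the rate of log log ''m''. This is again a hedged numerical sketch; the truncation levels are arbitrary.

<syntaxhighlight lang="python">
import numpy as np

# Diagonal entries of T: lambda_k = 1 / (k (log2 k)^2) for k = 2, 3, ...
# The trace partial sums converge (T is trace class), while the entropy
# partial sums behave like sum 1/(k log2 k) and therefore diverge.
for m in (10, 1_000, 100_000, 1_000_000):
    k = np.arange(2, m + 1, dtype=float)
    lam = 1.0 / (k * np.log2(k) ** 2)
    partial_entropy = -np.sum(lam * np.log2(lam))
    print(f"m = {m:>9}: trace = {lam.sum():.4f}, entropy partial sum = {partial_entropy:.4f}")
</syntaxhighlight>

The trace column stabilizes while the entropy column keeps growing, matching the claim that ''T'' is trace class but ''T'' log<sub>2</sub> ''T'' is not.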