== Pauli vectors ==
The Pauli vector is defined by{{efn| The Pauli vector is a formal device. It may be thought of as an element of <math>\mathcal M_2(\Complex) \otimes \R^3</math>, where the [[Tensor product|tensor product space]] is endowed with a mapping <math>\cdot : \mathbb{R}^3 \times (\mathcal M_2(\Complex) \otimes \R^3) \to \mathcal M_2(\Complex)</math> induced by the [[dot product]] on <math>\mathbb{R}^3.</math> }}
<math display="block"> \vec{\sigma} = \sigma_1 \hat{x}_1 + \sigma_2 \hat{x}_2 + \sigma_3 \hat{x}_3, </math>
where <math>\hat{x}_1</math>, <math>\hat{x}_2</math>, and <math>\hat{x}_3</math> are an equivalent notation for the more familiar <math>\hat{x}</math>, <math>\hat{y}</math>, and <math>\hat{z}</math>.

The Pauli vector provides a mapping from a vector basis to a Pauli matrix basis<ref>See the [[Lorentz group#Relation to the Möbius group|spinor map]].</ref> as follows:
<math display="block">\begin{align} \vec{a} \cdot \vec{\sigma} &= \sum_{k,\ell} a_k\, \sigma_\ell\, \hat{x}_k \cdot \hat{x}_\ell \\ &= \sum_k a_k\, \sigma_k \\ &= \begin{pmatrix} a_3 & a_1 - i a_2 \\ a_1 + i a_2 & -a_3 \end{pmatrix}. \end{align} </math>

More formally, this defines a map from <math>\mathbb{R}^3</math> to the vector space of traceless Hermitian <math>2\times 2</math> matrices. This map encodes the structure of <math>\mathbb{R}^3</math> both as a normed vector space and as a Lie algebra (with the [[cross-product]] as its Lie bracket) via functions of matrices, making the map an isomorphism of Lie algebras. This makes the Pauli matrices intertwiners from the point of view of representation theory.

Another way to view the Pauli vector is as a <math>2\times 2</math> Hermitian traceless matrix-valued dual vector, that is, an element of <math>\text{Mat}_{2\times 2}(\mathbb{C}) \otimes (\mathbb{R}^3)^*</math> that maps <math>\vec a \mapsto \vec a \cdot \vec \sigma.</math>

=== Completeness relation ===
Each component of <math>\vec a</math> can be recovered from the matrix (see [[#completeness anchor|completeness relation]] below)
<math display="block"> \frac{1}{2} \operatorname{tr} \Bigl( \bigl( \vec{a} \cdot \vec{\sigma} \bigr) \vec{\sigma} \Bigr) = \vec{a}. </math>
This constitutes an inverse to the map <math>\vec a \mapsto \vec a \cdot \vec \sigma</math>, making it manifest that the map is a bijection.

=== Determinant ===
The norm is given by the determinant (up to a minus sign):
<math display="block"> \det \bigl( \vec{a} \cdot \vec{\sigma} \bigr) = -\vec{a} \cdot \vec{a} = -|\vec{a}|^2. </math>
Then, considering the conjugation action of an <math>\text{SU}(2)</math> matrix <math>U</math> on this space of matrices,
: <math>U * \vec a \cdot \vec \sigma := U \, \vec a \cdot \vec \sigma \, U^{-1},</math>
we find <math>\det(U * \vec a \cdot \vec\sigma) = \det(\vec a \cdot \vec \sigma),</math> and that <math>U * \vec a \cdot \vec \sigma</math> is Hermitian and traceless. It then makes sense to define <math>U * \vec a \cdot \vec\sigma = \vec a' \cdot \vec\sigma,</math> where <math>\vec a'</math> has the same norm as <math>\vec a,</math> and therefore to interpret <math>U</math> as a rotation of three-dimensional space. In fact, it turns out that the ''special'' restriction on <math>U</math> implies that the rotation is orientation preserving.
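These properties are easy to spot-check numerically. The following is a minimal NumPy sketch; the vector <code>a</code> and the SU(2) matrix <code>U</code> below are arbitrary examples chosen for illustration, not taken from the text.

<syntaxhighlight lang="python">
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([s1, s2, s3])

a = np.array([0.3, -1.2, 0.8])                   # arbitrary example vector
a_dot_sigma = np.einsum('k,kij->ij', a, sigma)   # a . sigma

# Each component is recovered by (1/2) tr((a.sigma) sigma_k)
recovered = np.array([0.5 * np.trace(a_dot_sigma @ s).real for s in sigma])
assert np.allclose(recovered, a)

# det(a.sigma) = -|a|^2
assert np.isclose(np.linalg.det(a_dot_sigma).real, -a @ a)

# Conjugation by an SU(2) element preserves Hermiticity, tracelessness and the determinant
U = np.array([[np.cos(0.7), 1j * np.sin(0.7)],
              [1j * np.sin(0.7), np.cos(0.7)]])  # exp(i*0.7*sigma_1), an example SU(2) matrix
M = U @ a_dot_sigma @ np.conj(U).T
assert np.allclose(M, np.conj(M).T)                              # Hermitian
assert np.isclose(np.trace(M), 0)                                # traceless
assert np.isclose(np.linalg.det(M), np.linalg.det(a_dot_sigma))  # same determinant
</syntaxhighlight>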
This allows the definition of a map <math>R: \mathrm{SU}(2) \to \mathrm{SO}(3)</math> given by
: <math>U * \vec a \cdot \vec \sigma = \vec a' \cdot \vec \sigma =: (R(U)\ \vec a) \cdot \vec \sigma,</math>
where <math>R(U) \in \mathrm{SO}(3).</math> This map is the concrete realization of the double cover of <math>\mathrm{SO}(3)</math> by <math>\mathrm{SU}(2),</math> and therefore shows that <math>\text{SU}(2) \cong \mathrm{Spin}(3).</math> The components of <math>R(U)</math> can be recovered using the tracing process above:
: <math>R(U)_{ij} = \frac{1}{2} \operatorname{tr} \left( \sigma_i U \sigma_j U^{-1} \right).</math>

=== Cross-product ===
The cross-product is given by the matrix commutator (up to a factor of <math>2i</math>)
<math display="block"> [\vec a \cdot \vec \sigma, \vec b \cdot \vec \sigma] = 2i\,(\vec a \times \vec b) \cdot \vec \sigma. </math>
In fact, the existence of a norm follows from the fact that <math>\mathbb{R}^3</math> is a Lie algebra (see [[Killing form]]). This cross-product can be used to prove the orientation-preserving property of the map above.

=== Eigenvalues and eigenvectors ===
The eigenvalues of <math>\ \vec a \cdot \vec \sigma\ </math> are <math>\ \pm |\vec{a}|.</math> This follows immediately from tracelessness and explicitly computing the determinant.

More abstractly, without computing the determinant (which requires explicit properties of the Pauli matrices), this follows from <math>\ (\vec a \cdot \vec \sigma)^2 - |\vec a|^2 = 0\ ,</math> since this can be factorised into <math>\ (\vec a \cdot \vec \sigma - |\vec a|)(\vec a \cdot \vec \sigma + |\vec a|)= 0.</math> A standard result in linear algebra (a linear map that satisfies a polynomial equation with distinct linear factors is diagonalizable) then implies that <math>\ \vec a \cdot \vec \sigma\ </math> is diagonalizable with possible eigenvalues <math>\ \pm |\vec a|.</math> The tracelessness of <math>\ \vec a \cdot \vec \sigma\ </math> means it has exactly one of each eigenvalue.

Its normalized eigenvectors are
<math display="block"> \psi_+ = \frac{1}{\sqrt{2 \left|\vec{a} \right|\ (a_3+\left|\vec{a}\right|)\ }\ } \begin{bmatrix} a_3 + \left|\vec{a}\right| \\ a_1 + ia_2 \end{bmatrix}; \qquad \psi_- = \frac{1}{\sqrt{2|\vec{a}|(a_3+|\vec{a}|)}} \begin{bmatrix} ia_2 - a_1 \\ a_3 + |\vec{a}| \end{bmatrix} ~ . </math>
These expressions become singular for <math>a_3\to -\left|\vec{a} \right|</math>. They can be rescued by letting <math>\vec{a}=\left|\vec{a} \right|(\epsilon,0,-(1-\epsilon^2/2))</math> and taking the limit <math>\epsilon\to0</math>, which yields the correct eigenvectors (0,1) and (1,0) of <math>\sigma_z</math>.

Alternatively, one may use spherical coordinates <math>\vec{a}=a(\sin\vartheta\cos\varphi, \sin\vartheta\sin\varphi, \cos\vartheta)</math> to obtain the eigenvectors <math>\psi_+=(\cos(\vartheta/2), \sin(\vartheta/2)\exp(i\varphi))</math> and <math>\psi_-=(-\sin(\vartheta/2)\exp(-i\varphi), \cos(\vartheta/2))</math>.

=== Pauli 4-vector ===
The Pauli 4-vector, used in spinor theory, is written <math>\ \sigma^\mu\ </math> with components
:<math>\sigma^\mu = (I, \vec\sigma).</math>
This defines a map from <math>\mathbb{R}^{1,3}</math> to the vector space of Hermitian matrices,
:<math>x_\mu \mapsto x_\mu\sigma^\mu\ ,</math>
which also encodes the [[Minkowski metric]] (with ''mostly minus'' convention) in its determinant:
:<math>\det (x_\mu\sigma^\mu) = \eta(x,x).</math>
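The determinant identity can be checked directly; a minimal NumPy sketch (the 4-vector <code>x</code> below is an arbitrary example, not taken from the text):

<syntaxhighlight lang="python">
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma4 = np.array([I2, s1, s2, s3])      # sigma^mu = (I, sigma_1, sigma_2, sigma_3)

x = np.array([1.5, 0.2, -0.7, 1.1])      # arbitrary example 4-vector (x_0, x_1, x_2, x_3)
X = np.einsum('m,mij->ij', x, sigma4)    # x_mu sigma^mu

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, mostly-minus convention
assert np.isclose(np.linalg.det(X).real, x @ eta @ x)   # det(x_mu sigma^mu) = eta(x, x)
</syntaxhighlight>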
This 4-vector also has a completeness relation. It is convenient to define a second Pauli 4-vector
:<math>\bar\sigma^\mu = (I, -\vec\sigma)</math>
and to allow raising and lowering of indices using the Minkowski metric tensor. The relation can then be written
<math display="block">x_\nu = \tfrac{1}{2} \operatorname{tr} \Bigl( \bar\sigma_\nu\bigl( x_\mu \sigma^\mu \bigr) \Bigr) .</math>

Similarly to the Pauli 3-vector case, we can find a matrix group that acts as isometries on <math>\ \mathbb{R}^{1,3}\ ;</math> in this case the matrix group is <math>\ \mathrm{SL}(2,\mathbb{C})\ ,</math> and this shows <math>\ \mathrm{SL}(2,\mathbb{C}) \cong \mathrm{Spin}(1,3).</math> Similarly to above, this can be explicitly realized for <math>\ S \in \mathrm{SL}(2,\mathbb{C})\ </math> with components
:<math>\Lambda(S)^\mu{}_\nu = \tfrac{1}{2}\operatorname{tr} \left( \bar\sigma_\nu S \sigma^\mu S^{\dagger}\right).</math>

In fact, the determinant property follows abstractly from trace properties of the <math>\ \sigma^\mu.</math> For <math>\ 2\times 2\ </math> matrices, the following identity holds:
:<math>\det(A + B) = \det(A) + \det(B) + \operatorname{tr}(A)\operatorname{tr}(B) - \operatorname{tr}(AB).</math>
That is, the 'cross-terms' can be written as traces. When <math>\ A,B\ </math> are chosen to be different <math>\ \sigma^\mu\ ,</math> the cross-terms vanish. It then follows, now showing summation explicitly, <math display="inline">\det\left(\sum_\mu x_\mu \sigma^\mu\right) = \sum_\mu \det\left(x_\mu\sigma^\mu\right).</math> Since the matrices are <math>\ 2 \times 2\ ,</math> this is equal to <math display="inline">\sum_\mu x_\mu^2 \det(\sigma^\mu) = \eta(x,x).</math>

===Relation to dot and cross product===
Pauli vectors elegantly map these commutation and anticommutation relations to corresponding vector products. Adding the commutator to the anticommutator gives
:<math> \begin{align} \left[ \sigma_j, \sigma_k\right] + \{\sigma_j, \sigma_k\} &= (\sigma_j \sigma_k - \sigma_k \sigma_j ) + (\sigma_j \sigma_k + \sigma_k \sigma_j) \\ 2i\varepsilon_{j k \ell}\,\sigma_\ell + 2 \delta_{j k}I &= 2\sigma_j \sigma_k \end{align} </math>
so that,
{{Equation box 1 |indent =: |equation = <math>~~ \sigma_j \sigma_k = \delta_{j k}I + i\varepsilon_{j k \ell}\,\sigma_\ell ~ .~</math> |cellpadding= 6 |border |border colour = #0073CF |bgcolor=#F9FFF7 }}

[[tensor contraction|Contracting]] each side of the equation with the components of two {{math|3}}-vectors {{math|''a{{sub|p}}''}} and {{math|''b{{sub|q}}''}} (which commute with the Pauli matrices, i.e., {{math|1=''a{{sub|p}}σ{{sub|q}}'' = ''σ{{sub|q}}a{{sub|p}}''}}) for each matrix {{math|''σ{{sub|q}}''}} and vector component {{math|''a{{sub|p}}''}} (and likewise with {{math|''b{{sub|q}}''}}) yields
:<math>~~ \begin{align} a_j b_k \sigma_j \sigma_k & = a_j b_k \left(i\varepsilon_{jk\ell}\,\sigma_\ell + \delta_{jk}I\right) \\ a_j \sigma_j b_k \sigma_k & = i\varepsilon_{jk\ell}\,a_j b_k \sigma_\ell + a_j b_k \delta_{jk}I \end{align}.~</math>
Finally, translating the index notation for the [[dot product]] and [[cross product#Index notation for tensors|cross product]] results in

{{NumBlk||{{Equation box 1 |indent =: |equation = <math>~~\Bigl(\vec{a} \cdot \vec{\sigma}\Bigr)\Bigl(\vec{b} \cdot \vec{\sigma}\Bigr) = \Bigl(\vec{a} \cdot \vec{b}\Bigr) \, I + i \Bigl(\vec{a} \times \vec{b}\Bigr) \cdot \vec{\sigma}~~</math> |cellpadding= 6 |border |border colour = #0073CF |bgcolor=#F9FFF7 }} |{{EquationRef|1}} }}
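Identity {{EquationNote|(1)}} can be verified numerically; a minimal NumPy sketch with arbitrary example vectors:

<syntaxhighlight lang="python">
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([s1, s2, s3])
I2 = np.eye(2, dtype=complex)

a = np.array([0.5, -1.0, 2.0])           # arbitrary example vectors
b = np.array([1.5, 0.4, -0.3])

lhs = np.einsum('k,kij->ij', a, sigma) @ np.einsum('k,kij->ij', b, sigma)
rhs = (a @ b) * I2 + 1j * np.einsum('k,kij->ij', np.cross(a, b), sigma)
assert np.allclose(lhs, rhs)             # (a.sigma)(b.sigma) = (a.b) I + i (a x b).sigma
</syntaxhighlight>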
If {{mvar|i}} is identified with the pseudoscalar {{math|''σ{{sub|x}}σ{{sub|y}}σ{{sub|z}}''}}, then the right-hand side becomes <math> a \cdot b + a \wedge b</math>, which is also the definition of the product of two vectors in geometric algebra.

If we define the spin operator as {{math|1='''''J''''' = {{sfrac|''ħ''|2}}'''''σ'''''}}, then {{math|'''''J'''''}} satisfies the commutation relation
<math display="block">\mathbf{J} \times \mathbf{J} = i\hbar \mathbf{J},</math>
or, equivalently, the Pauli vector satisfies
<math display="block">\frac{\vec{\sigma}}{2} \times \frac{\vec{\sigma}}{2} = i\frac{\vec{\sigma}}{2}.</math>

=== Some trace relations ===
The following traces can be derived using the commutation and anticommutation relations.
:<math>\begin{align} \operatorname{tr}\left(\sigma_j \right) &= 0 \\ \operatorname{tr}\left(\sigma_j \sigma_k \right) &= 2\delta_{jk} \\ \operatorname{tr}\left(\sigma_j \sigma_k \sigma_\ell \right) &= 2i\varepsilon_{jk\ell} \\ \operatorname{tr}\left(\sigma_j \sigma_k \sigma_\ell \sigma_m \right) &= 2\left(\delta_{jk}\delta_{\ell m} - \delta_{j\ell}\delta_{km} + \delta_{jm}\delta_{k\ell}\right) \end{align}</math>

If the matrix {{math|1=''σ''{{sub|0}} = ''I''}} is also considered, these relationships become
<math display=block>\begin{align} \operatorname{tr}\left(\sigma_\alpha \right) &= 2\delta_{0 \alpha} \\ \operatorname{tr}\left(\sigma_\alpha \sigma_\beta \right) &= 2\delta_{\alpha \beta} \\ \operatorname{tr}\left(\sigma_\alpha \sigma_\beta \sigma_\gamma \right) &= 2 \sum_{(\alpha \beta \gamma)} \delta_{\alpha \beta} \delta_{0 \gamma} - 4 \delta_{0 \alpha} \delta_{0 \beta} \delta_{0 \gamma} + 2i\varepsilon_{0 \alpha \beta \gamma} \\ \operatorname{tr}\left(\sigma_\alpha \sigma_\beta \sigma_\gamma \sigma_\mu \right) &= 2\left(\delta_{\alpha \beta}\delta_{\gamma \mu} - \delta_{\alpha \gamma}\delta_{\beta \mu} + \delta_{\alpha \mu}\delta_{\beta \gamma}\right) + 4\left(\delta_{\alpha \gamma} \delta_{0 \beta} \delta_{0 \mu} + \delta_{\beta \mu} \delta_{0 \alpha} \delta_{0 \gamma}\right) - 8 \delta_{0 \alpha} \delta_{0 \beta} \delta_{0 \gamma} \delta_{0 \mu} + 2 i \sum_{(\alpha \beta \gamma \mu)} \varepsilon_{0 \alpha \beta \gamma} \delta_{0 \mu} \end{align}</math>
where Greek indices {{math|''α'', ''β'', ''γ''}} and {{mvar|μ}} assume values from {{math|{0, ''x'', ''y'', ''z''}<nowiki/>}} and the notation <math display="inline">\sum_{(\alpha \ldots)}</math> is used to denote the sum over the [[cyclic permutation]]s of the included indices.

===Exponential of a Pauli vector===
For
:<math>\vec{a} = a\hat{n}, \quad |\hat{n}| = 1,</math>
one has, for even powers, {{math|1=2''p'', ''p'' = 0, 1, 2, 3, ...}}
:<math>(\hat{n} \cdot \vec{\sigma})^{2p} = I ,</math>
which can be shown first for the {{math|1=''p'' = 1}} case using the anticommutation relations. For convenience, the case {{math|1=''p'' = 0}} is taken to be {{mvar|I}} by convention.
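Both the three-index trace identity and the {{math|1=''p'' = 1}} case of the even-power identity can be spot-checked numerically; a minimal NumPy sketch (the unit vector <code>n</code> is an arbitrary example):

<syntaxhighlight lang="python">
import numpy as np
from itertools import product

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

def eps(j, k, l):
    """Levi-Civita symbol for indices in {0, 1, 2}."""
    return (j - k) * (k - l) * (l - j) / 2

# tr(sigma_j sigma_k sigma_l) = 2 i eps_{jkl}
for j, k, l in product(range(3), repeat=3):
    assert np.isclose(np.trace(sigma[j] @ sigma[k] @ sigma[l]), 2j * eps(j, k, l))

# (n.sigma)^2 = I for a unit vector n (the p = 1 case of the even-power identity)
n = np.array([2.0, -1.0, 2.0]) / 3.0     # arbitrary example unit vector
n_dot_sigma = sum(n[k] * sigma[k] for k in range(3))
assert np.allclose(n_dot_sigma @ n_dot_sigma, np.eye(2))
</syntaxhighlight>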
For odd powers, {{math|1=2''q'' + 1, ''q'' = 0, 1, 2, 3, ...}}
:<math>\left(\hat{n} \cdot \vec{\sigma}\right)^{2q+1} = \hat{n} \cdot \vec{\sigma} \, .</math>

[[Matrix exponential|Matrix exponentiating]], and using the [[Taylor series#List of Maclaurin series of some common functions|Taylor series for sine and cosine]],
:<math>\begin{align} e^{i a\left(\hat{n} \cdot \vec{\sigma}\right)} &= \sum_{k=0}^\infty{\frac{i^k \left[a \left(\hat{n} \cdot \vec{\sigma}\right)\right]^k}{k!}} \\ &= \sum_{p=0}^\infty{\frac{(-1)^p (a\hat{n}\cdot \vec{\sigma})^{2p}}{(2p)!}} + i\sum_{q=0}^\infty{\frac{(-1)^q (a\hat{n}\cdot \vec{\sigma})^{2q + 1}}{(2q + 1)!}} \\ &= I\sum_{p=0}^\infty{\frac{(-1)^p a^{2p}}{(2p)!}} + i (\hat{n}\cdot \vec{\sigma}) \sum_{q=0}^\infty{\frac{(-1)^q a^{2q+1}}{(2q + 1)!}}. \end{align}</math>
In the last line, the first sum is the cosine, while the second sum is the sine; so, finally,

{{NumBlk||{{Equation box 1 |indent =: |equation = <math>~~e^{i a\left(\hat{n} \cdot \vec{\sigma}\right)} = I\cos{a} + i (\hat{n} \cdot \vec{\sigma}) \sin{a} ~~</math> |cellpadding= 6 |border |border colour = #0073CF |bgcolor=#F9FFF7 }} |{{EquationRef|2}} }}

which is [[quaternions and spatial rotation#Using quaternion as rotations|analogous]] to [[Euler's formula]], extended to [[quaternions]]. In particular,
:<math>e^{i a \sigma_1} = \begin{pmatrix} \cos a & i \sin a \\ i \sin a & \cos a \end{pmatrix}, \quad e^{i a \sigma_2} = \begin{pmatrix} \cos a & \sin a \\ - \sin a & \cos a \end{pmatrix}, \quad e^{i a \sigma_3} = \begin{pmatrix} e^{ia} & 0 \\ 0 & e^{-ia} \end{pmatrix}.</math>

Note that
:<math>\det[i a(\hat{n} \cdot \vec{\sigma})] = a^2,</math>
while the determinant of the exponential itself is just {{math|1}}, which makes it the '''generic group element of [[SU(2)]]'''.

A more abstract version of formula {{EquationNote|(2)}} for a general {{math|2 × 2}} matrix can be found in the article on [[Matrix exponential#Evaluation by Laurent series|matrix exponentials]]. A general version of {{EquationNote|(2)}} for an analytic (at ''a'' and −''a'') function is provided by application of [[Sylvester's formula]],<ref>{{cite book |title=Quantum Computation and Quantum Information |last1=Nielsen |first1=Michael A. |author-link1=Michael Nielsen |last2=Chuang |first2=Isaac L. |author-link2=Isaac Chuang |year=2000 |publisher=Cambridge University Press |location=Cambridge, UK |isbn=978-0-521-63235-5 |oclc=43641333}}</ref>
:<math>f(a(\hat{n} \cdot \vec{\sigma})) = I\frac{f(a) + f(-a)}{2} + \hat{n} \cdot \vec{\sigma} \frac{f(a) - f(-a)}{2}.</math>
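Formula {{EquationNote|(2)}} and the Sylvester-formula version can both be checked against a generic matrix exponential; a minimal sketch, assuming SciPy's <code>expm</code> is available (the angle <code>a</code>, axis <code>n</code>, and the choice {{math|1=''f''(''x'') = exp(''x'')}} are arbitrary examples):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

a = 0.9                                   # arbitrary example angle
n = np.array([1.0, 2.0, -2.0]) / 3.0      # arbitrary example unit axis
n_dot_sigma = n[0] * s1 + n[1] * s2 + n[2] * s3

# Formula (2): exp(i a n.sigma) = I cos a + i (n.sigma) sin a
lhs = expm(1j * a * n_dot_sigma)
rhs = I2 * np.cos(a) + 1j * n_dot_sigma * np.sin(a)
assert np.allclose(lhs, rhs)
assert np.isclose(np.linalg.det(lhs), 1.0)   # an SU(2) element

# Sylvester's formula for an analytic function, here f(x) = exp(x)
f = np.exp
lhs_f = expm(a * n_dot_sigma)                                    # f(a n.sigma)
rhs_f = I2 * (f(a) + f(-a)) / 2 + n_dot_sigma * (f(a) - f(-a)) / 2
assert np.allclose(lhs_f, rhs_f)
</syntaxhighlight>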
====The group composition law of {{math|SU(2)}}====
A straightforward application of formula {{EquationNote|(2)}} provides a parameterization of the composition law of the group {{math|SU(2)}}.{{efn|The relation among {{math|''a, b, c,'' ''' n, m, k '''}} derived here in the {{math|2 × 2}} representation holds for ''all representations'' of {{math|SU(2)}}, being a ''group identity''. Note that, by virtue of the standard normalization of that group's generators as ''half'' the Pauli matrices, the parameters ''a'', ''b'', ''c'' correspond to ''half'' the rotation angles of the rotation group. That is, the Gibbs formula linked amounts to <math>\hat k \tan c/2= (\hat n \tan a/2+ \hat m \tan b/2 -\hat m \times \hat n \tan a/2 ~ \tan b/2 )/(1-\hat m\cdot \hat n \tan a/2 ~\tan b/2 )</math>.}} One may directly solve for {{mvar|c}} in
<math display=block>\begin{align} e^{ia\left(\hat{n} \cdot \vec{\sigma}\right)} e^{ib\left(\hat{m} \cdot \vec{\sigma}\right)} &= I\left(\cos a \cos b - \hat{n} \cdot \hat{m} \sin a \sin b\right) + i\left(\hat{n} \sin a \cos b + \hat{m} \sin b \cos a - \hat{n} \times \hat{m} ~ \sin a \sin b \right) \cdot \vec{\sigma} \\ &= I\cos{c} + i \left(\hat{k} \cdot \vec{\sigma}\right) \sin c \\ &= e^{ic \left(\hat{k} \cdot \vec{\sigma}\right)}, \end{align}</math>
which specifies the generic group multiplication, where, manifestly,
<math display=block>\cos c = \cos a \cos b - \hat{n} \cdot \hat{m} \sin a \sin b~,</math>
the [[spherical law of cosines]]. Given {{mvar|c}}, then,
<math display=block>\hat{k} = \frac{1}{\sin c}\left(\hat{n} \sin a \cos b + \hat{m} \sin b \cos a - \hat{n}\times\hat{m} \sin a \sin b\right).</math>

Consequently, the composite rotation parameters in this group element (a closed form of the respective [[Baker–Campbell–Hausdorff formula|BCH expansion]] in this case) simply amount to<ref>{{cite book |first=J.W. |last=Gibbs |year=1884 |title=Elements of Vector Analysis |place=New Haven, CT |page=67 |author-link=J. W. Gibbs |chapter=4. Concerning the differential and integral calculus of vectors |chapter-url={{GBurl|VurzAAAAMAAJ|p=67}} |publisher=Tuttle, Moorehouse & Taylor }} In fact, however, the formula goes back to [[Olinde Rodrigues]] (1840), replete with half-angle: {{cite journal |first=Olinde |last=Rodrigues |author-link=Olinde Rodrigues |year=1840 |title=Des lois géometriques qui regissent les déplacements d'un systéme solide dans l'espace, et de la variation des coordonnées provenant de ces déplacement considérées indépendant des causes qui peuvent les produire |journal=[[J. Math. Pures Appl.]] |volume=5 |pages=380–440 |url=http://sites.mathdoc.fr/JMPA/PDF/JMPA_1840_1_5_A39_0.pdf}}</ref>
<math display=block> e^{ic \hat{k} \cdot \vec{\sigma}} = \exp \left( i\frac{c}{\sin c} \left(\hat{n} \sin a \cos b + \hat{m} \sin b \cos a - \hat{n}\times\hat{m} ~ \sin a \sin b\right) \cdot \vec{\sigma}\right). </math>
(Of course, when <math>\hat{n}</math> is parallel to <math>\hat{m}</math>, so is <math>\hat{k}</math>, and {{math|1=''c'' = ''a + b''}}.)

{{see also|Rotation formalisms in three dimensions#Rodrigues vector|Spinor#Three dimensions}}

====Adjoint action====
It is also straightforward to likewise work out the adjoint action on the Pauli vector, namely rotation by any angle <math>a</math> about any axis <math>\hat n</math>:
<math display=block> R_n(-a) ~ \vec{\sigma} ~ R_n(a) = e^{i \frac{a}{2}\left(\hat{n} \cdot \vec{\sigma}\right)} ~ \vec{\sigma} ~ e^{-i \frac{a}{2}\left(\hat{n} \cdot \vec{\sigma}\right)} = \vec{\sigma}\cos (a) + \hat{n} \times \vec{\sigma} ~ \sin(a) + \hat{n} ~ \hat{n} \cdot \vec{\sigma} ~ (1 - \cos(a)) ~ . </math>
Taking the dot product of any unit vector with the above formula generates the expression of any single-qubit operator under any rotation. For example, it can be shown that <math display="inline">R_y\mathord\left(-\frac{\pi}{2}\right)\, \sigma_x\, R_y\mathord\left(\frac{\pi}{2}\right) = \hat{x} \cdot \left(\hat{y} \times \vec{\sigma}\right) = \sigma_z</math>.
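The adjoint-action formula and the <math>R_y</math> example can be verified numerically; a minimal sketch, assuming SciPy's <code>expm</code> and the convention <math>R_n(a) = e^{-i\frac{a}{2}\hat n \cdot \vec\sigma}</math> read off from the displayed formula (the angle and axis below are arbitrary examples):

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([s1, s2, s3])

def rot(n, a):
    """R_n(a) = exp(-i (a/2) n.sigma)."""
    return expm(-0.5j * a * np.einsum('k,kij->ij', n, sigma))

a = 1.3                                    # arbitrary example angle
n = np.array([2.0, 1.0, 2.0]) / 3.0        # arbitrary example unit axis

# R_n(-a) sigma R_n(a) = sigma cos a + (n x sigma) sin a + n (n.sigma)(1 - cos a)
lhs = np.array([rot(n, -a) @ s @ rot(n, a) for s in sigma])
n_cross_sigma = np.array([n[1] * sigma[2] - n[2] * sigma[1],
                          n[2] * sigma[0] - n[0] * sigma[2],
                          n[0] * sigma[1] - n[1] * sigma[0]])   # the vector of matrices n x sigma
n_dot_sigma = np.einsum('k,kij->ij', n, sigma)
rhs = (sigma * np.cos(a)
       + n_cross_sigma * np.sin(a)
       + n[:, None, None] * n_dot_sigma * (1 - np.cos(a)))
assert np.allclose(lhs, rhs)

# Example from the text: R_y(-pi/2) sigma_x R_y(pi/2) = sigma_z
y = np.array([0.0, 1.0, 0.0])
assert np.allclose(rot(y, -np.pi / 2) @ s1 @ rot(y, np.pi / 2), s3)
</syntaxhighlight>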
{{see also|Rodrigues' rotation formula}}

=== Completeness relation<span class="anchor" id="completeness_anchor"></span> ===
An alternative notation that is commonly used for the Pauli matrices is to write the vector index {{mvar|k}} in the superscript, and the matrix indices as subscripts, so that the element in row {{mvar|α}} and column {{mvar|β}} of the {{mvar|k}}-th Pauli matrix is {{math|''σ {{sup|k}}{{sub|αβ}}''.}}

In this notation, the ''completeness relation'' for the Pauli matrices can be written
:<math>\vec{\sigma}_{\alpha\beta}\cdot\vec{\sigma}_{\gamma\delta}\equiv \sum_{k=1}^3 \sigma^k_{\alpha\beta}\,\sigma^k_{\gamma\delta} = 2\,\delta_{\alpha\delta} \,\delta_{\beta\gamma} - \delta_{\alpha\beta}\,\delta_{\gamma\delta}.</math>

{{math proof | proof =
The fact that the Pauli matrices, along with the identity matrix {{mvar|I}}, form an orthogonal basis for the Hilbert space of all 2 × 2 [[complex number|complex]] matrices <math>\mathcal{M}_{2,2}(\mathbb{C})</math> over <math>\mathbb{C}</math> means that we can express any 2 × 2 complex matrix {{mvar|M}} as
<math display="block">M = c\,I + \sum_k a_k \,\sigma^k </math>
where {{mvar|c}} is a complex number, and {{mvar|a}} is a 3-component, complex vector. It is straightforward to show, using the properties listed above, that
<math display="block">\operatorname{tr}\left( \sigma^j\,\sigma^k \right) = 2\,\delta_{jk}</math>
where "{{math|tr}}" denotes the [[trace (linear algebra)|trace]], and hence that
<math display="block">\begin{align} c &= \tfrac{1}{2}\, \operatorname{tr} M\,, & a_k &= \tfrac{1}{2}\,\operatorname{tr}\left(\sigma^k M\right). \\[3pt] \therefore ~~ 2\,M &= I\,\operatorname{tr} M + \sum_k \sigma^k\,\operatorname{tr}\left( \sigma^k M \right) ~, \end{align}</math>
which can be rewritten in terms of matrix indices as
<math display="block">2\, M_{\alpha\beta} = \delta_{\alpha\beta}\,M_{\gamma\gamma} + \sum_k \sigma^k_{\alpha\beta}\,\sigma^k_{\gamma\delta}\,M_{\delta\gamma}~,</math>
where [[Einstein notation|summation over the repeated indices]] {{mvar|γ}} and {{mvar|δ}} is implied. Since this is true for any choice of the matrix {{mvar|M}}, the completeness relation follows as stated above. [[Q.E.D.]]
}}

As noted above, it is common to denote the 2 × 2 unit matrix by {{math|''σ''{{sub|0}},}} so {{math|1=''σ{{sup|0}}{{sub|αβ}}'' = ''δ{{sub|αβ}}''.}} The completeness relation can alternatively be expressed as
<math display="block">\sum_{k=0}^3 \sigma^k_{\alpha\beta}\,\sigma^k_{\gamma\delta} = 2\,\delta_{\alpha\delta}\,\delta_{\beta\gamma} ~ .</math>

The fact that any Hermitian [[complex number|complex]] 2 × 2 matrix can be expressed in terms of the identity matrix and the Pauli matrices also leads to the [[Bloch sphere]] representation of the density matrices of 2 × 2 [[mixed state (physics)|mixed state]]s ([[Positive semidefinite matrix|positive semidefinite]] 2 × 2 matrices with unit trace). This can be seen by first expressing an arbitrary Hermitian matrix as a real linear combination of {{math|{''σ''{{sub|0}}, ''σ''{{sub|1}}, ''σ''{{sub|2}}, ''σ''{{sub|3}}<nowiki>}</nowiki>}} as above, and then imposing the positive-semidefinite and [[Trace (linear algebra)|trace]] {{math|1}} conditions.
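Both forms of the completeness relation, and the expansion of an arbitrary 2 × 2 matrix they rest on, can be checked numerically; a minimal NumPy sketch (the matrix <code>M</code> is an arbitrary example):

<syntaxhighlight lang="python">
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma4 = np.array([I2, s1, s2, s3])       # sigma^0 = I, sigma^1..3 = Pauli matrices

def delta(i, j):
    return 1.0 if i == j else 0.0

# sum_{k=1..3} sigma^k_{ab} sigma^k_{cd} = 2 d_{ad} d_{bc} - d_{ab} d_{cd}
# sum_{k=0..3} sigma^k_{ab} sigma^k_{cd} = 2 d_{ad} d_{bc}
for a, b, c, d in product(range(2), repeat=4):
    lhs3 = sum(sigma4[k][a, b] * sigma4[k][c, d] for k in range(1, 4))
    lhs4 = sum(sigma4[k][a, b] * sigma4[k][c, d] for k in range(4))
    assert np.isclose(lhs3, 2 * delta(a, d) * delta(b, c) - delta(a, b) * delta(c, d))
    assert np.isclose(lhs4, 2 * delta(a, d) * delta(b, c))

# Expansion of an arbitrary 2 x 2 complex matrix: 2 M = sum_{k=0..3} sigma^k tr(sigma^k M)
M = np.array([[1.0 + 2.0j, 0.5], [-1.0j, 3.0 - 1.0j]])   # arbitrary example matrix
assert np.allclose(2 * M, sum(sigma4[k] * np.trace(sigma4[k] @ M) for k in range(4)))
</syntaxhighlight>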
For a pure state, in polar coordinates,
<math display="block">\vec{a} = \begin{pmatrix}\sin\theta \cos\phi & \sin\theta \sin\phi & \cos\theta\end{pmatrix},</math>
the [[idempotent]] density matrix
<math display="block"> \tfrac{1}{2} \left(\mathbf{1} + \vec{a} \cdot \vec{\sigma}\right) = \begin{pmatrix} \cos^2\left(\frac{\,\theta\,}{2}\right) & e^{-i\,\phi}\sin\left(\frac{\,\theta\,}{2}\right)\cos\left(\frac{\,\theta\,}{2}\right) \\ e^{+i\,\phi}\sin\left(\frac{\,\theta\,}{2}\right)\cos\left(\frac{\,\theta\,}{2}\right) & \sin^2\left(\frac{\,\theta\,}{2}\right) \end{pmatrix} </math>
acts on the state eigenvector <math>\begin{pmatrix}\cos\left(\frac{\,\theta\,}{2}\right) & e^{+i\phi}\,\sin\left(\frac{\,\theta\,}{2}\right) \end{pmatrix} </math> with eigenvalue +1, hence it acts like a [[projection (linear algebra)|projection operator]].

=== Relation with the permutation operator ===
Let {{math|''P{{sub|jk}}''}} be the [[transposition (mathematics)|transposition]] (also known as a permutation) between two spins {{math|''σ{{sub|j}}''}} and {{math|''σ{{sub|k}}''}} living in the [[tensor product]] space {{nowrap|{{tmath|\Complex^2 \otimes \Complex^2}},}}
:<math>P_{jk} \left| \sigma_j \sigma_k \right\rangle = \left| \sigma_k \sigma_j \right\rangle .</math>
This operator can also be written more explicitly as [[Exchange interaction#Inclusion of spin|Dirac's spin exchange operator]],
:<math>P_{jk} = \frac{1}{2}\,\left(\vec{\sigma}_j \cdot \vec{\sigma}_k + 1\right) ~ .</math>
Its eigenvalues are therefore{{efn| Explicitly, in the convention of "right-space matrices into elements of left-space matrices", it is <math>\left(\begin{smallmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{smallmatrix}\right) ~ .</math> }} 1 or −1. It may thus be utilized as an interaction term in a Hamiltonian, splitting the energy eigenvalues of its symmetric versus antisymmetric eigenstates.
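As a final check, the spin-exchange form of {{math|''P{{sub|jk}}''}} can be compared with the explicit 4 × 4 swap matrix quoted in the footnote; a minimal NumPy sketch:

<syntaxhighlight lang="python">
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac's spin exchange operator: P = (sigma_j . sigma_k + 1)/2 acting on C^2 (x) C^2
P = 0.5 * (sum(np.kron(s, s) for s in (s1, s2, s3)) + np.kron(I2, I2))

swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)   # the 4 x 4 matrix from the footnote
assert np.allclose(P, swap)
assert np.allclose(sorted(np.linalg.eigvalsh(P)), [-1, 1, 1, 1])   # eigenvalue -1 (antisymmetric) and +1 (symmetric)
</syntaxhighlight>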