Pauli matrix

Plane wave and spinor under time reversal

December 16, 2015 phy1520


Q: [1] pr 4.7

  1. (a)
    Find the time reversed form of a spinless plane wave state in three dimensions.

  2. (b)
For the eigenspinor of \( \Bsigma \cdot \ncap \) expressed in terms of the azimuthal and polar angles \( \beta \) and \( \gamma \), show that \( -i \sigma_y \chi^\conj(\ncap) \) has the reversed spin direction.

A: part (a)

The Hamiltonian for a plane wave is

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:20}
H = \frac{\Bp^2}{2m} = i \Hbar \PD{t}{}.
\end{equation}

Under time reversal the momentum side transforms as

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:40}
\begin{aligned}
\Theta \frac{\Bp^2}{2m} \Theta^{-1}
&=
\frac{\lr{ \Theta \Bp \Theta^{-1}} \cdot \lr{ \Theta \Bp \Theta^{-1}} }{2m} \\
&=
\frac{(-\Bp) \cdot (-\Bp)}{2m} \\
&=
\frac{\Bp^2}{2m}.
\end{aligned}
\end{equation}

The time derivative side of the equation is also time reversal invariant
\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:60}
\begin{aligned}
\Theta i \PD{t}{} \Theta^{-1}
&=
\Theta i \Theta^{-1} \Theta \PD{t}{} \Theta^{-1} \\
&=
-i \PD{(-t)}{} \\
&=
i \PD{t}{}.
\end{aligned}
\end{equation}

Solutions to this equation are linear combinations of

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:80}
\psi(\Bx, t) = e^{i \Bk \cdot \Bx - i E t/\Hbar},
\end{equation}

where \( \Hbar^2 \Bk^2/2m = E \), the energy of the particle. Under time reversal we have

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:100}
\begin{aligned}
\psi(\Bx, t)
\rightarrow e^{-i \Bk \cdot \Bx + i E (-t)/\Hbar}
&= \lr{ e^{i \Bk \cdot \Bx - i E (-t)/\Hbar} }^\conj \\
&=
\psi^\conj(\Bx, -t).
\end{aligned}
\end{equation}
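As a numerical sanity check (a sketch, not part of the problem: numpy, \( \Hbar = m = 1 \), one spatial dimension, and arbitrarily chosen \( k, x, t \) values), both the plane wave and its time reverse \( \psi^\conj(\Bx, -t) \) should satisfy the same free particle Schr&ouml;dinger equation:

```python
import numpy as np

# hbar = m = 1; one spatial dimension for simplicity.
k = 1.7
E = k**2 / 2.0

def psi(x, t):
    # forward plane wave: exp(i k x - i E t)
    return np.exp(1j * (k * x - E * t))

def psi_rev(x, t):
    # time reversed wave function: psi*(x, -t)
    return np.conj(psi(x, -t))

def residual(f, x, t, h=1e-4):
    # residual of i df/dt + (1/2) d^2 f/dx^2, via central differences
    dt = (f(x, t + h) - f(x, t - h)) / (2 * h)
    dxx = (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2
    return 1j * dt + dxx / 2.0

x0, t0 = 0.3, 0.9
res_fwd = abs(residual(psi, x0, t0))
res_rev = abs(residual(psi_rev, x0, t0))
```

Both residuals vanish to finite difference accuracy.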

A: part (b)

The text uses a requirement for time reversal of spin states to show that the Pauli matrix form of the time reversal operator is

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:120}
\Theta = -i \sigma_y K,
\end{equation}

where \( K \) is a complex conjugating operator. The form of the spin up state used in that demonstration was

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:140}
\begin{aligned}
\ket{\ncap ; +}
&= e^{-i S_z \beta/\Hbar} e^{-i S_y \gamma/\Hbar} \ket{+} \\
&= e^{-i \sigma_z \beta/2} e^{-i \sigma_y \gamma/2} \ket{+} \\
&= \lr{ \cos(\beta/2) - i \sigma_z \sin(\beta/2) }
\lr{ \cos(\gamma/2) - i \sigma_y \sin(\gamma/2) } \ket{+} \\
&= \lr{ \cos(\beta/2) - i \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \sin(\beta/2) }
\lr{ \cos(\gamma/2) - i \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \sin(\gamma/2) } \ket{+} \\
&=
\begin{bmatrix}
e^{-i\beta/2} & 0 \\
0 & e^{i \beta/2}
\end{bmatrix}
\begin{bmatrix}
\cos(\gamma/2) & -\sin(\gamma/2) \\
\sin(\gamma/2) & \cos(\gamma/2)
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix} \\
&=
\begin{bmatrix}
e^{-i\beta/2} & 0 \\
0 & e^{i \beta/2}
\end{bmatrix}
\begin{bmatrix}
\cos(\gamma/2) \\
\sin(\gamma/2) \\
\end{bmatrix} \\
&=
\begin{bmatrix}
\cos(\gamma/2)
e^{-i\beta/2}
\\
\sin(\gamma/2)
e^{i \beta/2}
\end{bmatrix}.
\end{aligned}
\end{equation}

The state orthogonal to this one is claimed to be

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:180}
\begin{aligned}
\ket{\ncap ; -}
&= e^{-i S_z \beta/\Hbar} e^{-i S_y (\gamma + \pi)/\Hbar} \ket{+} \\
&= e^{-i \sigma_z \beta/2} e^{-i \sigma_y (\gamma + \pi)/2} \ket{+}.
\end{aligned}
\end{equation}

We have

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:200}
\begin{aligned}
\cos((\gamma + \pi)/2)
&=
\textrm{Re} e^{i(\gamma + \pi)/2} \\
&=
\textrm{Re} i e^{i\gamma/2} \\
&=
-\sin(\gamma/2),
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:220}
\begin{aligned}
\sin((\gamma + \pi)/2)
&=
\textrm{Im} e^{i(\gamma + \pi)/2} \\
&=
\textrm{Im} i e^{i\gamma/2} \\
&=
\cos(\gamma/2),
\end{aligned}
\end{equation}

so we should have

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:240}
\ket{\ncap ; -}
=
\begin{bmatrix}
-\sin(\gamma/2)
e^{-i\beta/2}
\\
\cos(\gamma/2)
e^{i \beta/2}
\end{bmatrix}.
\end{equation}

This looks right, but we can sanity check orthogonality

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:260}
\begin{aligned}
\braket{\ncap ; -}{\ncap ; +}
&=
\begin{bmatrix}
-\sin(\gamma/2)
e^{i\beta/2}
&
\cos(\gamma/2)
e^{-i \beta/2}
\end{bmatrix}
\begin{bmatrix}
\cos(\gamma/2)
e^{-i\beta/2}
\\
\sin(\gamma/2)
e^{i \beta/2}
\end{bmatrix} \\
&=
0,
\end{aligned}
\end{equation}

as expected.

The task at hand appears to be the operation on the column representation of \( \ket{\ncap; +} \) using the Pauli representation of the time reversal operator. That is

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:160}
\begin{aligned}
\Theta \ket{\ncap ; +}
&=
-i \sigma_y K
\begin{bmatrix}
e^{-i\beta/2} \cos(\gamma/2) \\
e^{i \beta/2} \sin(\gamma/2)
\end{bmatrix} \\
&=
-i \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
e^{i\beta/2} \cos(\gamma/2) \\
e^{-i \beta/2} \sin(\gamma/2)
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
e^{i\beta/2} \cos(\gamma/2) \\
e^{-i \beta/2} \sin(\gamma/2)
\end{bmatrix} \\
&=
\begin{bmatrix}
-e^{-i \beta/2} \sin(\gamma/2) \\
e^{i\beta/2} \cos(\gamma/2) \\
\end{bmatrix} \\
&= \ket{\ncap ; -},
\end{aligned}
\end{equation}

which is the result to be demonstrated.
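A quick numerical check of this result (a sketch, not part of the text: numpy, with \( \gamma \) playing the role of the polar angle and \( \beta \) the azimuthal angle in the spinor above, and arbitrary angle values) confirms that \( -i \sigma_y K \) maps the \( +1 \) eigenspinor of \( \Bsigma \cdot \ncap \) to a \( +1 \) eigenspinor of \( \Bsigma \cdot (-\ncap) \):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

beta, gamma = 1.1, 0.6  # arbitrary azimuthal / polar angles
n = np.array([np.sin(gamma) * np.cos(beta),
              np.sin(gamma) * np.sin(beta),
              np.cos(gamma)])
sdotn = n[0] * sx + n[1] * sy + n[2] * sz

# the spin up eigenspinor chi(n) computed above
chi = np.array([np.cos(gamma / 2) * np.exp(-1j * beta / 2),
                np.sin(gamma / 2) * np.exp(1j * beta / 2)])

# time reversal: Theta chi = -i sigma_y K chi = -i sigma_y chi*
chi_rev = -1j * sy @ np.conj(chi)

err_up = np.linalg.norm(sdotn @ chi - chi)            # chi points along +n
err_rev = np.linalg.norm((-sdotn) @ chi_rev - chi_rev)  # chi_rev points along -n
```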

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Determining the rotation angle and normal for a rotation through Euler angles

November 2, 2015 phy1520


[1] pr. 3.9 poses the problem to determine the total rotation angle for a set of Euler rotations given by

\begin{equation}\label{eqn:eulerAngleRotationAngleAndNormal:20}
\mathcal{D}^{1/2}(\alpha, \beta, \gamma)
=
\begin{bmatrix}
e^{-i(\alpha+\gamma)/2} \cos \frac{\beta}{2} & -e^{-i(\alpha-\gamma)/2} \sin \frac{\beta}{2} \\
e^{i(\alpha-\gamma)/2} \sin \frac{\beta}{2} & e^{i(\alpha+\gamma)/2} \cos \frac{\beta}{2}
\end{bmatrix}.
\end{equation}

Compare this to the matrix for a rotation about a normal direction, given by

\begin{equation}\label{eqn:eulerAngleRotationAngleAndNormal:40}
\mathcal{R}
= e^{-i \Bsigma \cdot \ncap \theta/2}
= \cos \frac{\theta}{2} I - i \Bsigma \cdot \ncap \sin \frac{\theta}{2}.
\end{equation}

With \( \ncap = \lr{ \sin \Theta \cos\Phi, \sin \Theta \sin\Phi, \cos\Theta} \), the normal direction in its Pauli basis is

\begin{equation}\label{eqn:eulerAngleRotationAngleAndNormal:60}
\Bsigma \cdot \ncap
=
\begin{bmatrix}
\cos\Theta & \sin \Theta \cos\Phi - i \sin \Theta \sin\Phi \\
\sin \Theta \cos\Phi + i \sin \Theta \sin\Phi & -\cos\Theta
\end{bmatrix}
=
\begin{bmatrix}
\cos\Theta & \sin \Theta e^{-i \Phi} \\
\sin \Theta e^{i \Phi} & -\cos\Theta
\end{bmatrix},
\end{equation}

so

\begin{equation}\label{eqn:eulerAngleRotationAngleAndNormal:80}
\mathcal{R} =
\begin{bmatrix}
\cos \frac{\theta}{2} -i \sin \frac{\theta}{2} \cos\Theta & -i \sin \Theta e^{-i \Phi} \sin \frac{\theta}{2} \\
-i \sin \Theta e^{i \Phi} \sin \frac{\theta}{2} & \cos \frac{\theta}{2} +i \sin \frac{\theta}{2} \cos\Theta \\
\end{bmatrix}.
\end{equation}

It’s not obvious how to put this into correspondence with the matrix for the Euler rotations. Doing so certainly doesn’t look fun. To solve this problem, let’s go the opposite direction, and put the matrix for the Euler rotations into the form of \ref{eqn:eulerAngleRotationAngleAndNormal:40}.

That is
\begin{equation}\label{eqn:eulerAngleRotationAngleAndNormal:100}
\begin{aligned}
\mathcal{D}^{1/2}(\alpha, \beta, \gamma)
&=
\begin{bmatrix}
e^{-i(\alpha+\gamma)/2} \cos \frac{\beta}{2} & -e^{-i(\alpha-\gamma)/2} \sin \frac{\beta}{2} \\
e^{i(\alpha-\gamma)/2} \sin \frac{\beta}{2} & e^{i(\alpha+\gamma)/2} \cos \frac{\beta}{2}
\end{bmatrix} \\
&=
\begin{bmatrix}
\cos\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2} & - \cos\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2} \\
\cos\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2} & \cos\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2}
\end{bmatrix} \\
&\quad +
i
\begin{bmatrix}
- \sin\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2} & \sin\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2} \\
\sin\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2} & \sin\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2}
\end{bmatrix} \\
&=
\cos\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2} I
+ i \sin\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2} \sigma_x
- i \cos\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2} \sigma_y
- i \sin\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2} \sigma_z.
\end{aligned}
\end{equation}

This gives us

\begin{equation}\label{eqn:eulerAngleRotationAngleAndNormal:120}
\begin{aligned}
\cos\frac{\theta}{2} &= \cos\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2} \\
\ncap \sin\frac{\theta}{2} &= \lr{ -\sin\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2}, \cos\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2}, \sin\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2} }.
\end{aligned}
\end{equation}

The angle is

\begin{equation}\label{eqn:eulerAngleRotationAngleAndNormal:140}
\theta
= 2 \arctan \frac{
\sqrt{\sin^2\frac{\beta}{2} + \sin^2\frac{\alpha+\gamma}{2} \cos^2\frac{\beta}{2}
}
}{\cos\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2}},
\end{equation}

or
\begin{equation}\label{eqn:eulerAngleRotationAngleAndNormal:180}
\boxed{
\theta = 2 \arctan \frac{
\sqrt{\tan^2\frac{\beta}{2} + \sin^2\frac{\alpha+\gamma}{2}
}
}{\cos\frac{\alpha+\gamma}{2}
},
}
\end{equation}

and the normal direction is
\begin{equation}\label{eqn:eulerAngleRotationAngleAndNormal:160}
\boxed{
\ncap
=
\inv{\sqrt{1 - \cos^2\frac{\alpha+\gamma}{2} \cos^2\frac{\beta}{2} }}
\lr{ -\sin\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2}, \cos\frac{\alpha-\gamma}{2} \sin \frac{\beta}{2}, \sin\frac{\alpha+\gamma}{2} \cos \frac{\beta}{2} }.
}
\end{equation}
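These boxed results are easy to check numerically (a sketch, not from the text: numpy, with arbitrarily chosen Euler angles). Build \( \mathcal{D}^{1/2} \), compute \( \theta \) and \( \ncap \) from the formulas above, and confirm that \( \cos(\theta/2) I - i \Bsigma \cdot \ncap \sin(\theta/2) \) reproduces the same matrix:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

alpha, beta, gamma = 0.7, 1.2, -0.4  # arbitrary Euler angles
cb, sb = np.cos(beta / 2), np.sin(beta / 2)
D = np.array([
    [np.exp(-1j * (alpha + gamma) / 2) * cb, -np.exp(-1j * (alpha - gamma) / 2) * sb],
    [np.exp(1j * (alpha - gamma) / 2) * sb, np.exp(1j * (alpha + gamma) / 2) * cb],
])

# cos(theta/2) and n sin(theta/2), read off from the Pauli decomposition
cos_half = np.cos((alpha + gamma) / 2) * cb
n_sin_half = np.array([-np.sin((alpha - gamma) / 2) * sb,
                       np.cos((alpha - gamma) / 2) * sb,
                       np.sin((alpha + gamma) / 2) * cb])
sin_half = np.linalg.norm(n_sin_half)
n = n_sin_half / sin_half
theta = 2 * np.arctan2(sin_half, cos_half)

sdotn = n[0] * sx + n[1] * sy + n[2] * sz
R = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sdotn
err = np.linalg.norm(R - D)
```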

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Bra-ket and spin one-half problems

July 27, 2015 phy1520



Question: Operator matrix representation ([1] pr. 1.5)

(a)

Determine the matrix representation of \( \ket{\alpha}\bra{\beta} \) given a complete set of eigenvectors \( \ket{a^r} \).

(b)

Verify with \( \ket{\alpha} = \ket{s_z = \Hbar/2} \) and \( \ket{\beta} = \ket{s_x = \Hbar/2} \).

Answer

(a)

Forming the matrix element

\begin{equation}\label{eqn:moreBraKetProblems:20}
\begin{aligned}
\bra{a^r} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^s}
&=
\braket{a^r}{\alpha}\braket{\beta}{a^s} \\
&=
\braket{a^r}{\alpha}
\braket{a^s}{\beta}^\conj,
\end{aligned}
\end{equation}

the matrix representation is seen to be

\begin{equation}\label{eqn:moreBraKetProblems:40}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
\bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}
=
\begin{bmatrix}
\braket{a^1}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^1}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\braket{a^2}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^2}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}.
\end{equation}

(b)

First compute the spin-z representation of \( \ket{s_x = \Hbar/2 } \).

\begin{equation}\label{eqn:moreBraKetProblems:60}
\begin{aligned}
\lr{ S_x - \frac{\Hbar}{2} I }
\begin{bmatrix}
a \\
b
\end{bmatrix}
&=
\lr{
\begin{bmatrix}
0 & \Hbar/2 \\
\Hbar/2 & 0 \\
\end{bmatrix}
-
\begin{bmatrix}
\Hbar/2 & 0 \\
0 & \Hbar/2 \\
\end{bmatrix}
}
\begin{bmatrix}
a \\
b
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
-1 & 1 \\
1 & -1 \\
\end{bmatrix}
\begin{bmatrix}
a \\
b
\end{bmatrix}
= 0,
\end{aligned}
\end{equation}

so \( \ket{s_x = \Hbar/2 } \propto (1,1) \).

Normalized we have

\begin{equation}\label{eqn:moreBraKetProblems:80}
\begin{aligned}
\ket{\alpha} &= \ket{s_z = \Hbar/2 } =
\begin{bmatrix}
1 \\
0
\end{bmatrix} \\
\ket{\beta} &= \ket{s_x = \Hbar/2 } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}.
\end{aligned}
\end{equation}

Using \ref{eqn:moreBraKetProblems:40} the matrix representation is

\begin{equation}\label{eqn:moreBraKetProblems:100}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
(1) (1/\sqrt{2})^\conj & (1) (1/\sqrt{2})^\conj \\
(0) (1/\sqrt{2})^\conj & (0) (1/\sqrt{2})^\conj \\
\end{bmatrix}
=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{equation}

This can be confirmed with direct computation
\begin{equation}\label{eqn:moreBraKetProblems:120}
\begin{aligned}
\ket{\alpha}\bra{\beta}
&=
\begin{bmatrix}
1 \\
0
\end{bmatrix}
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}
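The same outer product is a one-liner in numpy (a sketch; note the conjugate on the bra, which happens to be real here):

```python
import numpy as np

alpha = np.array([1, 0], dtype=complex)               # |s_z = +hbar/2>
beta = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |s_x = +hbar/2>

# |alpha><beta| : outer product of the ket with the conjugated bra
op = np.outer(alpha, np.conj(beta))
expected = np.array([[1, 1], [0, 0]]) / np.sqrt(2)
err = np.linalg.norm(op - expected)
```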

Question: eigenvalue of sum of kets ([1] pr. 1.6)

Given eigenkets \( \ket{i}, \ket{j} \) of an operator \( A \), what are the conditions that \( \ket{i} + \ket{j} \) is also an eigenvector?

Answer

Let \( A \ket{i} = i \ket{i}, A \ket{j} = j \ket{j} \), and suppose that the sum is an eigenket. Then there must be a value \( a \) such that

\begin{equation}\label{eqn:moreBraKetProblems:140}
A \lr{ \ket{i} + \ket{j} } = a \lr{ \ket{i} + \ket{j} },
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:160}
i \ket{i} + j \ket{j} = a \lr{ \ket{i} + \ket{j} }.
\end{equation}

Operating with \( \bra{i} \) and \( \bra{j} \) respectively (assuming the eigenkets are orthonormal), gives

\begin{equation}\label{eqn:moreBraKetProblems:180}
\begin{aligned}
i &= a \\
j &= a,
\end{aligned}
\end{equation}

so for the sum to be an eigenket, the two eigenvalues must be identical (i.e., linear combinations of degenerate eigenkets are also eigenkets).

Question: Null operator ([1] pr. 1.7)

Given eigenkets \( \ket{a'} \) of operator \( A \)

(a)

show that

\begin{equation}\label{eqn:moreBraKetProblems:200}
\prod_{a'} \lr{ A - a' }
\end{equation}

is the null operator.

(b)

What is the significance of

\begin{equation}\label{eqn:moreBraKetProblems:220}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''}?
\end{equation}

(c)

Illustrate using \( S_z \) for a spin 1/2 system.

Answer

(a)

Application of \( \ket{a} \), an eigenket of \( A \) with eigenvalue \( a \), to any factor \( A - a' \) scales \( \ket{a} \) by \( a - a' \), so the product operating on \( \ket{a} \) is

\begin{equation}\label{eqn:moreBraKetProblems:240}
\prod_{a'} \lr{ A - a' } \ket{a} = \prod_{a'} \lr{ a - a' } \ket{a}.
\end{equation}

Since \( \ket{a} \) is one of the \( \setlr{\ket{a'}} \) eigenkets of \( A \), one of the factors \( a - a' \) must be zero. The product therefore annihilates every eigenket, so it is the null operator.

(b)

Again, consider the action of the operator on \( \ket{a} \),

\begin{equation}\label{eqn:moreBraKetProblems:260}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a}
=
\prod_{a'' \ne a'} \frac{\lr{ a - a'' }}{a' - a''} \ket{a}.
\end{equation}

If \( \ket{a} = \ket{a'} \), then every factor is unity, so \( \prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a} = \ket{a} \), whereas if \( a \ne a' \), then \( a \) equals one of the \( a'' \) values in the product, and that factor kills the ket. This operator is therefore a representation of the Kronecker delta

\begin{equation}\label{eqn:moreBraKetProblems:300}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a} \equiv \delta_{a', a} \ket{a}.
\end{equation}

(c)

For operator \( S_z \) the eigenvalues are \( \setlr{ \Hbar/2, -\Hbar/2 } \), so the null operator must be

\begin{equation}\label{eqn:moreBraKetProblems:280}
\begin{aligned}
\prod_{a'} \lr{ A - a' }
&=
\lr{ \frac{\Hbar}{2} }^2 \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\lr{ \frac{\Hbar}{2} }^2
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix}
\begin{bmatrix}
2 & 0 \\
0 & 0 \\
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 0 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

For the delta function representation, consider the \( \ket{\pm} \) states and their eigenvalues. The delta operators are

\begin{equation}\label{eqn:moreBraKetProblems:320}
\begin{aligned}
\prod_{a'' \ne \Hbar/2} \frac{\lr{ A - a'' }}{\Hbar/2 - a''}
&=
\frac{S_z - (-\Hbar/2) I}{\Hbar/2 - (-\Hbar/2)} \\
&=
\inv{2} \lr{ \sigma_z + I } \\
&=
\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\inv{2}
\begin{bmatrix}
2 & 0 \\
0 & 0
\end{bmatrix}
\\
&=
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreBraKetProblems:340}
\begin{aligned}
\prod_{a'' \ne -\Hbar/2} \frac{\lr{ A - a'' }}{-\Hbar/2 - a''}
&=
\frac{S_z - (\Hbar/2) I}{-\Hbar/2 - \Hbar/2} \\
&=
-\inv{2} \lr{ \sigma_z - I } \\
&=
-\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
-\inv{2}
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 1
\end{bmatrix}.
\end{aligned}
\end{equation}

These clearly have the expected delta function property acting on kets \( \ket{+} = (1,0), \ket{-} = (0, 1) \).
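A small numpy sketch (with \( \Hbar = 1 \), a convenience assumption) confirms both the null operator of part (a) and the two projectors of part (b):

```python
import numpy as np

Sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)

# product over both eigenvalues: (S_z - 1/2)(S_z + 1/2) should be null
null_op = (Sz - 0.5 * I) @ (Sz + 0.5 * I)

# normalized single-factor products: projectors onto |+> and |->
proj_plus = (Sz - (-0.5) * I) / (0.5 - (-0.5))
proj_minus = (Sz - 0.5 * I) / (-0.5 - 0.5)
```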

Question: Spin half general normal ([1] pr. 1.9)

Construct \( \ket{\BS \cdot \ncap ; + } \), where \( \ncap = ( \cos\alpha \sin\beta, \sin\alpha \sin\beta, \cos\beta ) \) such that

\begin{equation}\label{eqn:moreBraKetProblems:360}
\BS \cdot \ncap \ket{\BS \cdot \ncap ; + } =
\frac{\Hbar}{2} \ket{\BS \cdot \ncap ; + },
\end{equation}

Solve this as an eigenvalue problem.

Answer

The spin operator for this direction is

\begin{equation}\label{eqn:moreBraKetProblems:380}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \Bsigma \cdot \ncap \\
&= \frac{\Hbar}{2}
\lr{
\cos\alpha \sin\beta \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + \sin\alpha \sin\beta \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + \cos\beta \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}
} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta
\end{bmatrix}.
\end{aligned}
\end{equation}

Observe that this is traceless and has determinant \( -\Hbar^2/4 \), like each of the \( x,y,z \) spin operators.

Assuming an \( \Hbar/2 \) eigenvalue (to be verified later), the matrix of the eigenvalue problem is

\begin{equation}\label{eqn:moreBraKetProblems:400}
\begin{aligned}
\BS \cdot \ncap - \frac{\Hbar}{2} I
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta -1 &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta -1
\end{bmatrix} \\
&=
\Hbar
\begin{bmatrix}
- \sin^2 \frac{\beta}{2} &
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
\\
e^{i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
& -\cos^2 \frac{\beta}{2}
\end{bmatrix}.
\end{aligned}
\end{equation}

This has a zero determinant as expected, and the eigenvector \( (a,b) \) will satisfy

\begin{equation}\label{eqn:moreBraKetProblems:420}
\begin{aligned}
0
&= - \sin^2 \frac{\beta}{2} a +
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
b \\
&= \sin\frac{\beta}{2} \lr{ - \sin \frac{\beta}{2} a +
e^{-i\alpha} \cos\frac{\beta}{2} b
},
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:440}
\begin{bmatrix}
a \\
b
\end{bmatrix}
\propto
\begin{bmatrix}
\cos\frac{\beta}{2} \\
e^{i\alpha}
\sin\frac{\beta}{2}
\end{bmatrix}.
\end{equation}

This is appropriately normalized, so the ket for \( \BS \cdot \ncap \) is

\begin{equation}\label{eqn:moreBraKetProblems:460}
\ket{ \BS \cdot \ncap ; + } =
\cos\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\sin\frac{\beta}{2}
\ket{-}.
\end{equation}

Note that the eigenket for the other eigenvalue, \( -\Hbar/2 \), is

\begin{equation}\label{eqn:moreBraKetProblems:480}
\ket{ \BS \cdot \ncap ; – } =
-\sin\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\cos\frac{\beta}{2}
\ket{-}.
\end{equation}

It is straightforward to show that these are orthogonal, and that the second has the \( -\Hbar/2 \) eigenvalue.
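Both eigenspinors can also be verified numerically (a sketch: numpy, \( \Hbar = 1 \), arbitrary angle values):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

alpha, beta = 0.8, 1.3  # arbitrary angles
n = np.array([np.cos(alpha) * np.sin(beta),
              np.sin(alpha) * np.sin(beta),
              np.cos(beta)])
S = 0.5 * (n[0] * sx + n[1] * sy + n[2] * sz)

up = np.array([np.cos(beta / 2), np.exp(1j * alpha) * np.sin(beta / 2)])
dn = np.array([-np.sin(beta / 2), np.exp(1j * alpha) * np.cos(beta / 2)])

err_up = np.linalg.norm(S @ up - 0.5 * up)   # eigenvalue +1/2
err_dn = np.linalg.norm(S @ dn + 0.5 * dn)   # eigenvalue -1/2
overlap = abs(np.vdot(up, dn))               # orthogonality
```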

Question: Two state Hamiltonian ([1] pr. 1.10)

Solve the eigenproblem for

\begin{equation}\label{eqn:moreBraKetProblems:500}
H = a \biglr{
\ket{1}\bra{1}
-\ket{2}\bra{2}
+\ket{1}\bra{2}
+\ket{2}\bra{1}
}
\end{equation}

Answer

In matrix form the Hamiltonian is

\begin{equation}\label{eqn:moreBraKetProblems:520}
H = a
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{equation}

The eigenvalue problem is

\begin{equation}\label{eqn:moreBraKetProblems:540}
\begin{aligned}
0
&= \Abs{ H - \lambda I } \\
&= (a - \lambda)(-a - \lambda) - a^2 \\
&= (-a + \lambda)(a + \lambda) - a^2 \\
&= \lambda^2 - a^2 - a^2,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:560}
\lambda = \pm \sqrt{2} a.
\end{equation}

An eigenket proportional to \( (\alpha,\beta) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:580}
0
= ( 1 \mp \sqrt{2} ) \alpha + \beta,
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:600}
\ket{\pm} \propto
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix},
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:620}
\begin{aligned}
\ket{\pm}
&=
\inv{\sqrt{ 2 \lr{ 2 \mp \sqrt{2} } }}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix} \\
&=
\frac{\sqrt{2 \pm \sqrt{2}}}{2}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix},
\end{aligned}
\end{equation}

since \( 1 + \lr{ 1 \mp \sqrt{2} }^2 = 2 \lr{ 2 \mp \sqrt{2} } \). That is

\begin{equation}\label{eqn:moreBraKetProblems:640}
\ket{\pm} =
\frac{\sqrt{2 \pm \sqrt{2}}}{2} \lr{
-\ket{1} + (1 \mp \sqrt{2}) \ket{2}
}.
\end{equation}
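A numerical check of this eigensystem (a sketch, with \( a = 1 \) as an arbitrary choice) confirms the eigenvalues \( \pm \sqrt{2} a \), the eigenvectors proportional to \( (-1, 1 \mp \sqrt{2}) \), and the normalization \( 1 + (1 \mp \sqrt{2})^2 = 2(2 \mp \sqrt{2}) \):

```python
import numpy as np

a = 1.0
H = a * np.array([[1.0, 1.0], [1.0, -1.0]])

errs = []
for sign in (1.0, -1.0):
    ket = np.array([-1.0, 1.0 - sign * np.sqrt(2)])
    # |ket|^2 = 1 + (1 -+ sqrt(2))^2 = 2 (2 -+ sqrt(2))
    errs.append(abs(np.dot(ket, ket) - 2 * (2 - sign * np.sqrt(2))))
    ket = ket / np.linalg.norm(ket)
    # H ket = (+-) sqrt(2) a ket
    errs.append(np.linalg.norm(H @ ket - sign * np.sqrt(2) * a * ket))
max_err = max(errs)
```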

Question: Spin half probability and dispersion ([1] pr. 1.12)

A spin \( 1/2 \) system is in an eigenstate of \( \BS \cdot \ncap \), with \( \ncap = \sin \gamma \xcap + \cos\gamma \zcap \), having eigenvalue \( \Hbar/2 \).

(a)

If \( S_x \) is measured, what is the probability of getting \( +\Hbar/2 \)?

(b)

Evaluate the dispersion in \( S_x \), that is,

\begin{equation}\label{eqn:moreBraKetProblems:660}
\expectation{\lr{ S_x - \expectation{S_x}}^2}.
\end{equation}

Answer

(a)

In matrix form the spin operator for the system is

\begin{equation}\label{eqn:moreBraKetProblems:680}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \lr{ \cos\gamma \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \sin\gamma \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}} \\
&= \frac{\Hbar}{2}
\begin{bmatrix}
\cos\gamma & \sin\gamma \\
\sin\gamma & -\cos\gamma \\
\end{bmatrix}
\end{aligned}
\end{equation}

An eigenket \( \ket{\BS \cdot \ncap ; + } = (a,b) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:700}
\begin{aligned}
0
&= \lr{ \cos \gamma - 1 } a + \sin\gamma b \\
&= \lr{ -2 \sin^2 \frac{\gamma}{2} } a + 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} b \\
&= -\sin \frac{\gamma}{2} a + \cos\frac{\gamma}{2} b,
\end{aligned}
\end{equation}

so the eigenstate is
\begin{equation}\label{eqn:moreBraKetProblems:720}
\ket{\BS \cdot \ncap ; + }
=
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}.
\end{equation}

Pick \( \ket{S_x ; \pm } = \inv{\sqrt{2}}
\begin{bmatrix}
1 \\ \pm 1
\end{bmatrix} \) as the basis for the \( S_x \) operator. Then, for the probability that the system will end up in the \( + \Hbar/2 \) state of \( S_x \), we have

\begin{equation}\label{eqn:moreBraKetProblems:740}
\begin{aligned}
P
&= \Abs{\braket{ S_x ; + }{ \BS \cdot \ncap ; + } }^2 \\
&= \Abs{ \inv{\sqrt{2} }
{
\begin{bmatrix}
1 \\
1
\end{bmatrix}}^\dagger
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=\inv{2}
\Abs{
\begin{bmatrix}
1 & 1
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=
\inv{2}
\lr{
\cos\frac{\gamma}{2} +
\sin\frac{\gamma}{2}
}^2 \\
&=
\inv{2}
\lr{ 1 + 2 \cos\frac{\gamma}{2} \sin\frac{\gamma}{2} } \\
&=
\inv{2}
\lr{ 1 + \sin\gamma }.
\end{aligned}
\end{equation}

This is a reasonable seeming result, with \( P \in [0, 1] \). Some special values also further validate this

\begin{equation}\label{eqn:moreBraKetProblems:760}
\begin{aligned}
\gamma &= 0, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\ket{S_z ; +}
=
\inv{\sqrt{2}} \ket{S_x;+}
+\inv{\sqrt{2}} \ket{S_x;-}
\\
\gamma &= \pi/2, \ket{\BS \cdot \ncap ; + } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}
=
\ket{S_x ; +}
\\
\gamma &= \pi, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
0 \\
1
\end{bmatrix}
=
\ket{S_z ; -}
=
\inv{\sqrt{2}} \ket{S_x;+}
-\inv{\sqrt{2}} \ket{S_x;-},
\end{aligned}
\end{equation}

where we see that the probabilities are in proportion to the projection of the initial state onto the measured state \( \ket{S_x ; +} \).

(b)

The \( S_x \) expectation is

\begin{equation}\label{eqn:moreBraKetProblems:780}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\sin\frac{\gamma}{2} \\
\cos\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2} 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} \\
&=
\frac{\Hbar}{2} \sin\gamma.
\end{aligned}
\end{equation}

Note that \( S_x^2 = (\Hbar/2)^2I \), so

\begin{equation}\label{eqn:moreBraKetProblems:800}
\begin{aligned}
\expectation{S_x^2}
&=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{ \cos^2\frac{\gamma}{2} + \sin^2 \frac{\gamma}{2} } \\
&=
\lr{ \frac{\Hbar}{2} }^2.
\end{aligned}
\end{equation}

The dispersion is

\begin{equation}\label{eqn:moreBraKetProblems:820}
\begin{aligned}
\expectation{\lr{ S_x - \expectation{S_x}}^2}
&=
\expectation{S_x^2} - \expectation{S_x}^2 \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{1 - \sin^2 \gamma} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\cos^2 \gamma.
\end{aligned}
\end{equation}

At \( \gamma = \pi/2 \) the dispersion is 0, which is expected since \( \ket{\BS \cdot \ncap ; + } = \ket{ S_x ; + } \) at that point. Similarly, the dispersion is maximized at \( \gamma = 0, \pi \), where the \( \ket{\BS \cdot \ncap ; + } \) component in the \( \ket{S_x ; + } \) direction is minimized.
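Both closed forms check out numerically (a sketch: numpy, \( \Hbar = 1 \), an arbitrary \( \gamma \)):

```python
import numpy as np

gamma = 0.9  # arbitrary
state = np.array([np.cos(gamma / 2), np.sin(gamma / 2)])  # |S.n ; +>
Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
sx_plus = np.array([1.0, 1.0]) / np.sqrt(2)               # |S_x ; +>

P = abs(np.vdot(sx_plus, state))**2
exp_Sx = np.vdot(state, Sx @ state).real
exp_Sx2 = np.vdot(state, Sx @ Sx @ state).real
dispersion = exp_Sx2 - exp_Sx**2

err_P = abs(P - 0.5 * (1 + np.sin(gamma)))        # P = (1 + sin(gamma))/2
err_disp = abs(dispersion - 0.25 * np.cos(gamma)**2)  # (hbar/2)^2 cos^2(gamma)
```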

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Pauli matrix problems

July 21, 2015 phy1520


Q: [1] problem 1.2.

Given an arbitrary \( 2 \times 2 \) matrix \( X = a_0 + \Bsigma \cdot \Ba \),
show the relationships between \( a_\mu \) and \( \textrm{tr}(X), \textrm{tr}(\sigma_k X) \), and \( X_{ij} \).

A.

Observe that each of the Pauli matrices \( \sigma_k \) are traceless

\begin{equation}\label{eqn:pauliProblems:20}
\begin{aligned}
\sigma_x &= \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \\
\sigma_y &= \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \\
\sigma_z &= \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix},
\end{aligned}
\end{equation}

so \( \textrm{tr}(X) = 2 a_0 \). Note that \( \textrm{tr}(\sigma_k \sigma_m) = 2 \delta_{k m} \), so \( \textrm{tr}(\sigma_k X) = 2 a_k \).

Notationally, it would seem to make sense to define \( \sigma_0 \equiv I \), so that \( a_\mu = \inv{2} \textrm{tr}(\sigma_\mu X) \) uniformly. I don’t know if that is common practice.

For the opposite relations, given

\begin{equation}\label{eqn:pauliProblems:40}
\begin{aligned}
X
&= a_0 + \Bsigma \cdot \Ba \\
&= \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} a_0 + \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} a_1 + \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} a_2 + \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} a_3 \\
&=
\begin{bmatrix}
a_0 + a_3 & a_1 - i a_2 \\
a_1 + i a_2 & a_0 - a_3
\end{bmatrix} \\
&=
\begin{bmatrix}
X_{11} & X_{12} \\
X_{21} & X_{22} \\
\end{bmatrix},
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:pauliProblems:80}
\begin{aligned}
a_0 &= \inv{2} \lr{ X_{11} + X_{22} } \\
a_1 &= \inv{2} \lr{ X_{12} + X_{21} } \\
a_2 &= \inv{2 i} \lr{ X_{21} - X_{12} } \\
a_3 &= \inv{2} \lr{ X_{11} - X_{22} }.
\end{aligned}
\end{equation}
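A round trip numerical check of both directions (a sketch; the seeded random matrix is an arbitrary choice):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# a_mu = (1/2) tr(sigma_mu X), with sigma_0 = I
a0 = np.trace(X) / 2
a = [np.trace(s @ X) / 2 for s in (sx, sy, sz)]

# rebuild X = a_0 I + sigma . a and compare
X_rebuilt = a0 * np.eye(2) + a[0] * sx + a[1] * sy + a[2] * sz
err = np.linalg.norm(X_rebuilt - X)
```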

Q: [1] problem 1.3.

Determine the structure and determinant of the transformation

\begin{equation}\label{eqn:pauliProblems:100}
\Bsigma \cdot \Ba \rightarrow
\Bsigma \cdot \Ba’ =
\exp\lr{ i \Bsigma \cdot \ncap \phi/2}
\Bsigma \cdot \Ba
\exp\lr{ -i \Bsigma \cdot \ncap \phi/2}.
\end{equation}

A.

Knowing Geometric Algebra, this is recognized as a rotation transformation. In GA, \( i \) is treated as a pseudoscalar (which commutes with all grades in \R{3}), and the expression can be reduced to one involving dot and wedge products. Let’s see how this can be reduced using only the Pauli matrix toolbox.

First, consider the determinant of one of the exponentials. Showing that one such exponential has unit determinant is sufficient. The matrix representation of the unit normal is

\begin{equation}\label{eqn:pauliProblems:120}
\begin{aligned}
\Bsigma \cdot \ncap
&= n_x \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
+ n_y \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
+ n_z \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \\
&=
\begin{bmatrix}
n_z & n_x – i n_y \\
n_x + i n_y & -n_z
\end{bmatrix}.
\end{aligned}
\end{equation}

This is expected to have a unit square, and does

\begin{equation}\label{eqn:pauliProblems:140}
\begin{aligned}
\lr{ \Bsigma \cdot \ncap }^2
&=
\begin{bmatrix}
n_z & n_x – i n_y \\
n_x + i n_y & -n_z
\end{bmatrix}
\begin{bmatrix}
n_z & n_x – i n_y \\
n_x + i n_y & -n_z
\end{bmatrix} \\
&=
\lr{ n_x^2 + n_y^2 + n_z^2 }
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix} \\
&=
1.
\end{aligned}
\end{equation}

This allows for a cosine and sine expansion of the exponential, as in

\begin{equation}\label{eqn:pauliProblems:160}
\begin{aligned}
\exp\lr{ i \Bsigma \cdot \ncap \theta}
&=
\cos\theta + i \Bsigma \cdot \ncap \sin\theta \\
&=
\cos\theta
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
+
i \sin\theta
\begin{bmatrix}
n_z & n_x – i n_y \\
n_x + i n_y & -n_z
\end{bmatrix} \\
&=
\begin{bmatrix}
\cos\theta + i n_z \sin\theta & \lr{ n_x – i n_y } i \sin\theta \\
\lr{ n_x + i n_y } i \sin\theta & \cos\theta – i n_z \sin\theta \\
\end{bmatrix}.
\end{aligned}
\end{equation}

This has determinant

\begin{equation}\label{eqn:pauliProblems:180}
\begin{aligned}
\Abs{\exp\lr{ i \Bsigma \cdot \ncap \theta} }
&=
\cos^2\theta + n_z^2 \sin^2\theta
-
\lr{ -n_x^2 - n_y^2 } \sin^2\theta \\
&=
\cos^2\theta + \lr{ n_x^2 + n_y^2 + n_z^2 } \sin^2\theta \\
&= 1,
\end{aligned}
\end{equation}

as expected.

Next step is to show that this transformation is a rotation, and determine the sense of the rotation. Let \( C = \cos\phi/2, S = \sin\phi/2 \), so that

\begin{equation}\label{eqn:pauliProblems:200}
\begin{aligned}
\Bsigma \cdot \Ba’
&=
\exp\lr{ i \Bsigma \cdot \ncap \phi/2}
\Bsigma \cdot \Ba
\exp\lr{ -i \Bsigma \cdot \ncap \phi/2} \\
&=
\lr{ C + i \Bsigma \cdot \ncap S }
\Bsigma \cdot \Ba
\lr{ C - i \Bsigma \cdot \ncap S } \\
&=
\lr{ C + i \Bsigma \cdot \ncap S }
\lr{ C \Bsigma \cdot \Ba - i \Bsigma \cdot \Ba \Bsigma \cdot \ncap S } \\
&=
C^2 \Bsigma \cdot \Ba + \Bsigma \cdot \ncap \Bsigma \cdot \Ba \Bsigma \cdot \ncap S^2
+ i \lr{
-\Bsigma \cdot \Ba \Bsigma \cdot \ncap
+ \Bsigma \cdot \ncap \Bsigma \cdot \Ba
} S C \\
&=
\inv{2} \lr{ 1 + \cos\phi}
\Bsigma \cdot \Ba
+ \Bsigma \cdot \ncap \Bsigma \cdot \Ba \Bsigma \cdot \ncap \inv{2} \lr{ 1 - \cos\phi}
+ i
\antisymmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
\inv{2} \sin\phi \\
&=
\inv{2}
\Bsigma \cdot \ncap
\symmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
+ \inv{2}
\Bsigma \cdot \ncap
\antisymmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba } \cos\phi
+
\inv{2}
i
\antisymmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
\sin\phi.
\end{aligned}
\end{equation}

Observe that the angle dependent portion can be written in a compact exponential form

\begin{equation}\label{eqn:pauliProblems:220}
\begin{aligned}
\Bsigma \cdot \Ba’
&=
\inv{2}
\Bsigma \cdot \ncap
\symmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
+
\lr{
\cos\phi
+
i
\Bsigma \cdot \ncap
\sin\phi
}
\inv{2}
\Bsigma \cdot \ncap
\antisymmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba } \\
&=
\inv{2}
\Bsigma \cdot \ncap
\symmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
+
\exp\lr{ i \Bsigma \cdot \ncap \phi }
\inv{2}
\Bsigma \cdot \ncap
\antisymmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }.
\end{aligned}
\end{equation}
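This decomposition can be sanity checked numerically against the direct conjugation. A numpy sketch (the axis, vector, and angle are arbitrary choices):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def sdot(v):
    """sigma . v"""
    return sum(v[k] * sigma[k] for k in range(3))

def exp_isn(n, angle):
    """exp(i sigma . n angle) for a unit vector n."""
    return np.cos(angle) * np.eye(2) + 1j * np.sin(angle) * sdot(n)

n = np.array([1.0, 2.0, 2.0]) / 3.0  # arbitrary unit rotation normal
a = np.array([0.3, -1.1, 0.7])       # arbitrary vector
phi = 1.234                          # arbitrary rotation angle

sn, sa = sdot(n), sdot(a)

# Direct conjugation: e^{i sigma.n phi/2} sigma.a e^{-i sigma.n phi/2}.
lhs = exp_isn(n, phi / 2) @ sa @ exp_isn(n, -phi / 2)

# Symmetric term plus exponential acting on the antisymmetric term.
sym = sn @ sa + sa @ sn  # {sigma.n, sigma.a}
com = sn @ sa - sa @ sn  # [sigma.n, sigma.a]
rhs = 0.5 * sn @ sym + exp_isn(n, phi) @ (0.5 * sn @ com)

assert np.allclose(lhs, rhs)
```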

The anticommutator and commutator products with the unit normal can be identified as projections and rejections respectively. Consider the symmetric product first

\begin{equation}\label{eqn:pauliProblems:240}
\begin{aligned}
\symmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
&=
\sum n_r a_s \lr{ \sigma_r \sigma_s + \sigma_s \sigma_r } \\
&=
\sum_{r \ne s} n_r a_s \lr{ \sigma_r \sigma_s + \sigma_s \sigma_r }
+
\sum_{r} n_r a_r 2 \\
&= 2 \ncap \cdot \Ba,
\end{aligned}
\end{equation}

since distinct Pauli matrices anticommute, killing the \( r \ne s \) sum. This shows that
\begin{equation}\label{eqn:pauliProblems:260}
\inv{2}
\Bsigma \cdot \ncap
\symmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
=
\lr{ \ncap \cdot \Ba } \Bsigma \cdot \ncap,
\end{equation}

which is the projection of \( \Ba \) in the direction of the normal \( \ncap \). To show that the commutator term is the rejection, consider the sum of the two

\begin{equation}\label{eqn:pauliProblems:280}
\begin{aligned}
\inv{2}
\Bsigma \cdot \ncap
\symmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
+
\inv{2}
\Bsigma \cdot \ncap
\antisymmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
&=
\Bsigma \cdot \ncap
\Bsigma \cdot \ncap \Bsigma \cdot \Ba \\
&=
\Bsigma \cdot \Ba,
\end{aligned}
\end{equation}

so we must have

\begin{equation}\label{eqn:pauliProblems:300}
\Bsigma \cdot \Ba – \lr{ \ncap \cdot \Ba } \Bsigma \cdot \ncap
=
\inv{2}
\Bsigma \cdot \ncap
\antisymmetric{
\Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }.
\end{equation}

This is the component of \( \Ba \) with the projection in the \( \ncap \) direction removed. Looking back to \ref{eqn:pauliProblems:220}, the transformation leaves components of the vector that are collinear with the unit normal unchanged, and applies an exponential operation to the component that lies in what is presumed to be the rotation plane. To verify that this latter portion of the transformation is a rotation, and to determine the sense of the rotation, let's expand the factor of the sine of \ref{eqn:pauliProblems:200}.

That is

\begin{equation}\label{eqn:pauliProblems:320}
\begin{aligned}
\frac{i}{2} \antisymmetric{ \Bsigma \cdot \ncap }{\Bsigma \cdot \Ba }
&=
\frac{i}{2} \sum n_r a_s \antisymmetric{ \sigma_r }{\sigma_s } \\
&=
\frac{i}{2} \sum n_r a_s 2 i \epsilon_{r s t} \sigma_t \\
&=
- \sum \sigma_t n_r a_s \epsilon_{r s t} \\
&=
-\Bsigma \cdot \lr{ \ncap \cross \Ba } \\
&=
\Bsigma \cdot \lr{ \Ba \cross \ncap }.
\end{aligned}
\end{equation}

Since \( \Ba \cross \ncap = \lr{ \Ba – \ncap (\ncap \cdot \Ba) } \cross \ncap \), this vector is seen to lie in the plane normal to \( \ncap \), but perpendicular to the rejection of \( \ncap \) from \( \Ba \). That completes the demonstration that this is a rotation transformation.
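The three identities used above (the projection from the anticommutator, the rejection from the commutator, and the cross product from the sine term) can each be verified numerically. A numpy sketch with arbitrarily chosen \( \ncap, \Ba \):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sdot(v):
    """sigma . v"""
    return v[0] * sx + v[1] * sy + v[2] * sz

n = np.array([2.0, -1.0, 2.0]) / 3.0  # arbitrary unit normal
a = np.array([0.5, 1.5, -0.25])       # arbitrary vector

sn, sa = sdot(n), sdot(a)
sym = sn @ sa + sa @ sn  # {sigma.n, sigma.a}
com = sn @ sa - sa @ sn  # [sigma.n, sigma.a]

proj = np.dot(n, a) * n  # projection of a along n
rej = a - proj           # rejection, perpendicular to n

assert np.allclose(0.5 * sn @ sym, sdot(proj))        # projection identity
assert np.allclose(0.5 * sn @ com, sdot(rej))         # rejection identity
assert np.allclose(0.5j * com, sdot(np.cross(a, n)))  # cross product identity
```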

To understand the sense of this rotation, consider \( \ncap = \zcap, \Ba = \xcap \), so

\begin{equation}\label{eqn:pauliProblems:340}
\Bsigma \cdot \lr{ \Ba \cross \ncap }
=
\Bsigma \cdot \lr{ \xcap \cross \zcap }
=
-\Bsigma \cdot \ycap,
\end{equation}

and
\begin{equation}\label{eqn:pauliProblems:360}
\Bsigma \cdot \Ba'
=
\Bsigma \cdot \lr{ \xcap \cos\phi - \ycap \sin\phi },
\end{equation}

showing that this rotation transformation has a clockwise sense, as viewed from the tip of the rotation normal \( \ncap \).
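This special case can be confirmed numerically as well. A numpy sketch (the angle is an arbitrary choice) conjugating \( \sigma_x \) by \( \exp\lr{ i \sigma_z \phi/2 } \):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

phi = 0.4  # arbitrary rotation angle
# U = exp(i sigma_z phi/2); U is unitary, so its inverse is its adjoint.
U = np.cos(phi / 2) * np.eye(2) + 1j * np.sin(phi / 2) * sz

rotated = U @ sx @ U.conj().T
expected = np.cos(phi) * sx - np.sin(phi) * sy  # xcap cos(phi) - ycap sin(phi)

assert np.allclose(rotated, expected)
```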

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

An observation about the geometry of Pauli x,y matrices

July 19, 2015 phy1520


Motivation

The conventional form for the Pauli matrices is

\begin{equation}\label{eqn:pauliMatrixXYgeometry:20}
\begin{aligned}
\sigma_x &=
\begin{bmatrix}
0 & 1 \\
1 & 0 \\
\end{bmatrix} \\
\sigma_y &=
\begin{bmatrix}
0 & -i \\
i & 0 \\
\end{bmatrix} \\
\sigma_z &=
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

In [1] these forms are derived based on the commutation relations

\begin{equation}\label{eqn:pauliMatrixXYgeometry:40}
\antisymmetric{\sigma_r}{\sigma_s} = 2 i \epsilon_{r s t} \sigma_t,
\end{equation}

by defining raising and lowering operators \( \sigma_{\pm} = \sigma_x \pm i \sigma_y \) and figuring out what form the matrix must take. I noticed an interesting geometrical relation hiding in that derivation if \( \sigma_{+} \) is not assumed to be real.

Derivation

For completeness, I’ll repeat the argument of [1], which builds on the commutation relations of the raising and lowering operators. Those are

\begin{equation}\label{eqn:pauliMatrixXYgeometry:60}
\begin{aligned}
\antisymmetric{\sigma_z}{\sigma_{\pm}}
&=
\sigma_z \lr{ \sigma_x \pm i \sigma_y }
-\lr{ \sigma_x \pm i \sigma_y } \sigma_z \\
&=
\antisymmetric{\sigma_z}{\sigma_x} \pm i \antisymmetric{\sigma_z}{\sigma_y} \\
&=
2 i \sigma_y \pm i (-2 i) \sigma_x \\
&= \pm 2 \lr{ \sigma_x \pm i \sigma_y } \\
&= \pm 2 \sigma_{\pm},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:pauliMatrixXYgeometry:80}
\begin{aligned}
\antisymmetric{\sigma_{+}}{\sigma_{-}}
&=
\lr{ \sigma_x + i \sigma_y } \lr{ \sigma_x - i \sigma_y }
-\lr{ \sigma_x - i \sigma_y } \lr{ \sigma_x + i \sigma_y } \\
&=
-i \sigma_x \sigma_y + i \sigma_y \sigma_x
- i \sigma_x \sigma_y + i \sigma_y \sigma_x \\
&= 2 i \antisymmetric{ \sigma_y }{\sigma_x} \\
&= 2 i (-2i) \sigma_z \\
&= 4 \sigma_z.
\end{aligned}
\end{equation}
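Before assuming a matrix form, the three commutation relations that drive the derivation can be confirmed numerically from the standard representation. A numpy sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

s_plus = sx + 1j * sy   # raising operator
s_minus = sx - 1j * sy  # lowering operator

assert np.allclose(comm(sz, s_plus), 2 * s_plus)     # [sz, s+] = +2 s+
assert np.allclose(comm(sz, s_minus), -2 * s_minus)  # [sz, s-] = -2 s-
assert np.allclose(comm(s_plus, s_minus), 4 * sz)    # [s+, s-] = 4 sz
```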

From these a matrix representation containing unknown values can be assumed. Let

\begin{equation}\label{eqn:pauliMatrixXYgeometry:100}
\sigma_{+} =
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}.
\end{equation}

The commutator with \( \sigma_z \) can be computed

\begin{equation}\label{eqn:pauliMatrixXYgeometry:120}
\begin{aligned}
\antisymmetric{\sigma_z}{\sigma_{+}}
&=
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
-
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
\\
&=
\begin{bmatrix}
a & b \\
-c & -d
\end{bmatrix}
-
\begin{bmatrix}
a & -b \\
c & -d
\end{bmatrix} \\
&=
2
\begin{bmatrix}
0 & b \\
-c & 0
\end{bmatrix}
\end{aligned}
\end{equation}

Now compare this with \ref{eqn:pauliMatrixXYgeometry:60}

\begin{equation}\label{eqn:pauliMatrixXYgeometry:140}
2
\begin{bmatrix}
0 & b \\
-c & 0
\end{bmatrix}
=
2 \sigma_{+}
=
2
\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}.
\end{equation}

This shows that \( a = 0 \) and \( d = 0 \). Similarly, with \( \sigma_{-} = \sigma_{+}^\dagger \), the \( \sigma_z \) commutator with the lowering operator is

\begin{equation}\label{eqn:pauliMatrixXYgeometry:160}
\begin{aligned}
\antisymmetric{\sigma_z}{\sigma_{-}}
&=
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
\begin{bmatrix}
0 & c^\conj \\
b^\conj & 0
\end{bmatrix}
-
\begin{bmatrix}
0 & c^\conj \\
b^\conj & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
\\
&=
\begin{bmatrix}
0 & c^\conj \\
-b^\conj & 0
\end{bmatrix}
-
\begin{bmatrix}
0 & -c^\conj \\
b^\conj & 0
\end{bmatrix} \\
&=
-2
\begin{bmatrix}
0 & -c^\conj \\
b^\conj & 0
\end{bmatrix}
\end{aligned}
\end{equation}

Again comparing to \ref{eqn:pauliMatrixXYgeometry:60}, we have
\begin{equation}\label{eqn:pauliMatrixXYgeometry:180}
-2
\begin{bmatrix}
0 & -c^\conj \\
b^\conj & 0
\end{bmatrix}
= -2 \sigma_{-}
= -2
\begin{bmatrix}
0 & c^\conj \\
b^\conj & 0
\end{bmatrix},
\end{equation}

so \( c = 0 \). Computing the commutator of the raising and lowering operators fixes \( b \)

\begin{equation}\label{eqn:pauliMatrixXYgeometry:200}
\begin{aligned}
\antisymmetric{\sigma_{+}}{\sigma_{-}}
&=
\begin{bmatrix}
0 & b \\
0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
0 & 0 \\
b^\conj & 0 \\
\end{bmatrix}
-
\begin{bmatrix}
0 & 0 \\
b^\conj & 0 \\
\end{bmatrix}
\begin{bmatrix}
0 & b \\
0 & 0 \\
\end{bmatrix} \\
&=
\begin{bmatrix}
\Abs{b}^2 & 0 \\
0 & 0
\end{bmatrix}
-
\begin{bmatrix}
0 & 0 \\
0 & \Abs{b}^2 \\
\end{bmatrix} \\
&=
\Abs{b}^2
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
\\
&=
\Abs{b}^2 \sigma_z.
\end{aligned}
\end{equation}

From \ref{eqn:pauliMatrixXYgeometry:80} it must be that \( \Abs{b}^2 = 4\), so the most general form of the raising operator is

\begin{equation}\label{eqn:pauliMatrixXYgeometry:220}
\sigma_{+}
=
2
\begin{bmatrix}
0 & e^{i \phi} \\
0 & 0
\end{bmatrix}.
\end{equation}
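That the phase \( \phi \) is left completely free by the algebra can be checked directly: for any value of \( \phi \), this \( \sigma_{+} \), together with \( \sigma_{-} = \sigma_{+}^\dagger \), reproduces all three commutation relations. A numpy sketch with an arbitrary phase:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A @ B - B @ A

phi = 0.8  # arbitrary phase
s_plus = 2 * np.array([[0, np.exp(1j * phi)], [0, 0]])
s_minus = s_plus.conj().T  # lowering operator as the adjoint of raising

assert np.allclose(comm(sz, s_plus), 2 * s_plus)
assert np.allclose(comm(sz, s_minus), -2 * s_minus)
assert np.allclose(comm(s_plus, s_minus), 4 * sz)
```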

Observation

The conventional choice is to set \( \phi = 0 \), but I found it interesting to see the form of \( \sigma_x, \sigma_y \) without that choice. That is

\begin{equation}\label{eqn:pauliMatrixXYgeometry:240}
\begin{aligned}
\sigma_x
&= \inv{2} \lr{ \sigma_{+} + \sigma_{-} } \\
&=
\begin{bmatrix}
0 & e^{i \phi} \\
e^{-i \phi} & 0 \\
\end{bmatrix}
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:pauliMatrixXYgeometry:260}
\begin{aligned}
\sigma_y
&= \inv{2 i} \lr{ \sigma_{+} - \sigma_{-} } \\
&=
\begin{bmatrix}
0 & -i e^{i \phi} \\
i e^{-i \phi} & 0 \\
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & e^{i (\phi - \pi/2) } \\
e^{-i (\phi - \pi/2)} & 0 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

Notice that \( \sigma_y \) now has exactly the same form as \( \sigma_x \), with the phase of its complex argument shifted by \( 90^\circ \). That \( 90^\circ \) separation isn't obvious in the standard form \ref{eqn:pauliMatrixXYgeometry:20}.

It’s a small detail, but I thought it was kind of cool that the orthogonality of these matrix unit vector representations is built directly into the structure of their matrix representations.
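That orthogonality can be backed up numerically: for any phase \( \phi \), the Hermitian phase-shifted pair still satisfies the full Pauli algebra. A numpy sketch with an arbitrary phase choice:

```python
import numpy as np

phi = 1.1  # arbitrary phase
e = np.exp(1j * phi)

# Phase-generalized sigma_x and sigma_y (both Hermitian).
sigx = np.array([[0, e], [np.conj(e), 0]])
sigy = np.array([[0, -1j * e], [1j * np.conj(e), 0]])
sigz = np.array([[1, 0], [0, -1]], dtype=complex)

I = np.eye(2)
assert np.allclose(sigx @ sigx, I)                        # squares to identity
assert np.allclose(sigy @ sigy, I)
assert np.allclose(sigx @ sigy + sigy @ sigx, 0 * I)      # orthogonal: anticommute
assert np.allclose(sigx @ sigy - sigy @ sigx, 2j * sigz)  # [sigx, sigy] = 2 i sigz
```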

References

[1] BR Desai. Quantum mechanics with basic field theory. Cambridge University Press, 2009.