Question: Uncertainty relation. ([1] pr. 1.20)
Find the ket that maximizes the uncertainty product
\begin{equation}\label{eqn:moreKet:140}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2},
\end{equation}
and compare to the uncertainty bound \( \inv{4} \Abs{ \expectation{\antisymmetric{S_x}{S_y}}}^2 \).
Answer
To parameterize the ket space, consider first the kets with a non-zero first component, for which (up to an overall factor) a single complex number parameterizes the ket
\begin{equation}\label{eqn:moreKet:160}
\ket{s} =
\begin{bmatrix}
\beta' e^{i\phi'} \\
\alpha' e^{i\theta'} \\
\end{bmatrix}
\propto
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix}.
\end{equation}
The expectation values with respect to this ket are
\begin{equation}\label{eqn:moreKet:180}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 &
\alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix}
\alpha e^{i\theta} \\
1 \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\lr{ \alpha e^{i\theta} + \alpha e^{-i\theta} } \\
&=
\frac{\Hbar}{2}
2 \alpha \cos\theta \\
&=
\Hbar \alpha \cos\theta.
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:moreKet:200}
\begin{aligned}
\expectation{S_y}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{i\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix}
-\alpha e^{i\theta} \\
1 \\
\end{bmatrix} \\
&=
\frac{-i \alpha \Hbar}{2} 2 i \sin\theta \\
&=
\alpha \Hbar \sin\theta.
\end{aligned}
\end{equation}
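There is lots of room for algebra slips in computations like these, so here is a quick numerical check of these expectation values. This is just a sketch, assuming a Python/numpy environment, with \( \Hbar = 1 \), arbitrary test values for \( \alpha, \theta \), and the same unnormalized ket used above.
\begin{verbatim}
import numpy as np

hbar = 1.0
alpha, theta = 0.7, 0.3  # arbitrary test values
ket = np.array([1.0, alpha * np.exp(1j * theta)])  # unnormalized, as above

Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]])

# <S_x> = hbar alpha cos(theta), <S_y> = hbar alpha sin(theta)
assert np.isclose((ket.conj() @ Sx @ ket).real, hbar * alpha * np.cos(theta))
assert np.isclose((ket.conj() @ Sy @ ket).real, hbar * alpha * np.sin(theta))
\end{verbatim}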
With \( \Delta S_x = S_x - \expectation{S_x} \), the squared deviation operators are
\begin{equation}\label{eqn:moreKet:220}
\begin{aligned}
\lr{ \Delta S_x }^2
&=
\lr{
\frac{\Hbar}{2}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix}
}^2 \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 & -4 \alpha \cos\theta \\
-4 \alpha \cos\theta & 4 \alpha^2 \cos^2\theta + 1 \\
\end{bmatrix},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:moreKet:240}
\begin{aligned}
\lr{ \Delta S_y }^2
&=
\lr{
\frac{\Hbar}{2}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix}
}^2 \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 & 4 \alpha i \sin\theta \\
-4 \alpha i \sin\theta & 4 \alpha^2 \sin^2\theta + 1 \\
\end{bmatrix}.
\end{aligned}
\end{equation}
The variances are
\begin{equation}\label{eqn:moreKet:260}
\begin{aligned}
\expectation{\lr{\Delta S_x}^2}
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 & -4 \alpha \cos\theta \\
-4 \alpha \cos\theta & 4 \alpha^2 \cos^2\theta + 1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta}
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 -4 \alpha^2 \cos\theta e^{i\theta} \\
-4 \alpha \cos\theta + 4 \alpha^3 \cos^2\theta e^{i\theta} + \alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta + 1 -4 \alpha^2 \cos\theta e^{i\theta}
-4 \alpha^2 \cos\theta e^{-i\theta} + 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta + 1 -8 \alpha^2 \cos^2\theta
+ 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
-4 \alpha^2 \cos^2\theta + 1
+ 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2 + 1
}
,
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:moreKet:280}
\begin{aligned}
\expectation{ \lr{ \Delta S_y }^2 }
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 & 4 \alpha i \sin\theta \\
-4 \alpha i \sin\theta & 4 \alpha^2 \sin^2\theta + 1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta}
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 + 4 \alpha^2 i \sin\theta e^{i\theta} \\
-4 \alpha i \sin\theta + 4 \alpha^3 \sin^2\theta e^{i\theta} + \alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \sin^2\theta + 1 + 4 \alpha^2 i \sin\theta e^{i\theta}
-4 \alpha^2 i \sin\theta e^{-i\theta} + 4 \alpha^4 \sin^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
-4 \alpha^2 \sin^2\theta + 1
+ 4 \alpha^4 \sin^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \sin^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2
+ 1
}
.
\end{aligned}
\end{equation}
The uncertainty product can finally be calculated
\begin{equation}\label{eqn:moreKet:300}
\begin{aligned}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
&=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^2 \cos^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2 + 1
}
\lr{
4 \alpha^2 \sin^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2
+ 1
} \\
&=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^4 \sin^2 \lr{ 2\theta } \lr{ \alpha^2 - 1 }^2
+ 4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
}.
\end{aligned}
\end{equation}
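This closed form can be spot-checked numerically against a direct matrix computation of the product. Again just a sketch, assuming numpy, \( \Hbar = 1 \), and the same unnormalized-ket conventions used above.
\begin{verbatim}
import numpy as np

hbar = 1.0
alpha, theta = 0.7, 0.3  # arbitrary test values
ket = np.array([1.0, alpha * np.exp(1j * theta)])  # unnormalized, as above

Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]])

def variance(S):
    # <s| (S - <S>)^2 |s>, with the unnormalized <S> used in the text
    avg = (ket.conj() @ S @ ket).real
    D = S - avg * np.eye(2)
    return (ket.conj() @ D @ D @ ket).real

direct = variance(Sx) * variance(Sy)
closed = (hbar / 2) ** 4 * (
    4 * alpha**4 * np.sin(2 * theta) ** 2 * (alpha**2 - 1) ** 2
    + 4 * alpha**2 * (alpha**4 - 1)
    + (alpha**2 + 1) ** 2
)
assert np.isclose(direct, closed)
\end{verbatim}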
With respect to \( \theta \), the extrema occur where \( f = \sin^2 2 \theta \) is extremized. The critical points satisfy
\begin{equation}\label{eqn:moreKet:320}
\begin{aligned}
0
&= \PD{\theta}{f} \\
&= 4 \sin 2 \theta \cos 2\theta \\
&= 2 \sin 4 \theta.
\end{aligned}
\end{equation}
These occur at \( 4 \theta = \pi n \), for integer \( n \), or
\begin{equation}\label{eqn:moreKet:340}
\theta = \frac{\pi}{4} n, \qquad n \in \setlr{ 0, 1, \cdots, 7 }.
\end{equation}
Minima of \( f \) occur when
\begin{equation}\label{eqn:moreKet:360}
0 < \PDSq{\theta}{f} = 8 \cos 4\theta,
\end{equation}
or
\begin{equation}\label{eqn:moreKet:380}
n = 0, 2, 4, 6.
\end{equation}
At these points \( \sin^2 2\theta \) takes the values
\begin{equation}\label{eqn:moreKet:400}
\sin^2 \lr{ 2 \frac{\pi}{4} \setlr{ 0, 2, 4, 6 } }
=
\sin^2 \lr{ \pi \setlr{ 0, 1, 2, 3 } }
= 0,
\end{equation}
so at these points the \( \theta \) dependence drops out, and the maximization of the uncertainty product reduces to that of
\begin{equation}\label{eqn:moreKet:420}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
}.
\end{equation}
We seek
\begin{equation}\label{eqn:moreKet:440}
\begin{aligned}
0
&=
\PD{\alpha}{}
\lr{
4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
} \\
&=
\lr{
8 \alpha \lr{ \alpha^4 - 1 }
+16 \alpha^5
+ 4 \lr{\alpha^2 + 1 } \alpha
} \\
&=
4 \alpha
\lr{
2 \alpha^4 - 2
+4 \alpha^4
+ \alpha^2 + 1
} \\
&=
4 \alpha
\lr{ 2 \alpha^2 + 1 }
\lr{ 3 \alpha^2 - 1 }.
\end{aligned}
\end{equation}
The real roots of this polynomial are \( \alpha = 0 \) and \( \alpha = \pm 1/\sqrt{3} \). At \( \alpha^2 = 1/3 \) the bracketed term of \ref{eqn:moreKet:420} takes the value \( 16/27 \), a local minimum, whereas at \( \alpha = 0 \) it takes the value one, a local maximum, so the ket of this parameterization that maximizes the uncertainty product is
\begin{equation}\label{eqn:moreKet:460}
\ket{s}
=
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= \ket{+}.
\end{equation}
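The roots of \ref{eqn:moreKet:440}, and the corresponding values of the bracketed term of \ref{eqn:moreKet:420}, can be cross-checked numerically. A sketch, assuming numpy:
\begin{verbatim}
import numpy as np

# bracketed term of the reduced product:
# 4 a^2 (a^4 - 1) + (a^2 + 1)^2 = 4 a^6 + a^4 - 2 a^2 + 1
h = np.poly1d([4, 0, 1, 0, -2, 0, 1])

# real critical points, the roots of h' = 24 a^5 + 4 a^3 - 4 a
crit = sorted(r.real for r in h.deriv().roots if abs(r.imag) < 1e-9)
for r in crit:
    print(f"alpha = {r:+.5f}, term = {h(r):.5f}")
# alpha = -0.57735, term = 0.59259  (16/27, a local minimum)
# alpha = +0.00000, term = 1.00000  (the local maximum)
# alpha = +0.57735, term = 0.59259
\end{verbatim}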
The search for this maximizing value excluded those kets proportional to \( \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \ket{-} \). Let's evaluate the uncertainty product at both \( \ket{\pm} \), and compare to the commutator bound. First, for \( \ket{s} = \ket{+} \)
\begin{equation}\label{eqn:moreKet:480}
\expectation{S_x}
=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\ 0
\end{bmatrix}
=
0.
\end{equation}
\begin{equation}\label{eqn:moreKet:500}
\expectation{S_y}
=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\ 0
\end{bmatrix}
=
0.
\end{equation}
so
\begin{equation}\label{eqn:moreKet:520}
\expectation{ \lr{ \Delta S_x }^2 }
=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ 0
\end{bmatrix}
=
\lr{\frac{\Hbar}{2}}^2
\end{equation}
\begin{equation}\label{eqn:moreKet:540}
\expectation{ \lr{ \Delta S_y }^2 }
=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ 0
\end{bmatrix}
=
\lr{\frac{\Hbar}{2}}^2.
\end{equation}
For the commutator side of the uncertainty relation we have
\begin{equation}\label{eqn:moreKet:560}
\begin{aligned}
\inv{4} \Abs{ \expectation{ \antisymmetric{ S_x}{ S_y } } }^2
&=
\inv{4} \Abs{ \expectation{ i \Hbar S_z } }^2 \\
&=
\lr{ \frac{\Hbar}{2} }^4
\Abs{
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}
\begin{bmatrix}
1 \\ 0
\end{bmatrix}
}^2,
\end{aligned}
\end{equation}
so for the \( \ket{+} \) state we have an equality condition for the uncertainty relation
\begin{equation}\label{eqn:moreKet:580}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
=
\inv{4} \Abs{ \expectation{\antisymmetric{S_x}{S_y}}}^2
=
\lr{ \frac{\Hbar}{2} }^4.
\end{equation}
It's reasonable to guess that the \( \ket{-} \) state also matches the equality condition. Let's check
\begin{equation}\label{eqn:moreKet:600}
\expectation{S_x}
=
\frac{\Hbar}{2}
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
0 \\ 1
\end{bmatrix}
=
0.
\end{equation}
\begin{equation}\label{eqn:moreKet:620}
\expectation{S_y}
=
\frac{\Hbar}{2}
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
0 \\ 1
\end{bmatrix}
=
0.
\end{equation}
so \( \expectation{ \lr{ \Delta S_x }^2 } = \expectation{ \lr{ \Delta S_y }^2 } = \lr{\frac{\Hbar}{2}}^2 \).
The commutator side of the uncertainty relation is identical, so the equality of \ref{eqn:moreKet:580} is satisfied for both \( \ket{\pm} \). Note that it wasn't explicitly verified that \( \ket{-} \) maximizes the uncertainty product, but I don't feel like working through that second algebraic mess.
Satisfying the equality condition does not by itself mean that the product is maximized. For example, it is straightforward to show that \( \ket{ S_x ; \pm } \) also satisfy the equality condition of the uncertainty relation, but for those kets the product is not maximized; it is zero.
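That claim is easy to check numerically. A sketch, assuming numpy and \( \Hbar = 1 \), for the \( \ket{S_x ; +} \) case:
\begin{verbatim}
import numpy as np

hbar = 1.0
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]])
ket = np.array([1, 1]) / np.sqrt(2)  # |S_x ; +>

def variance(S):
    # <(Delta S)^2> = <S^2> - <S>^2 for this normalized ket
    avg = (ket.conj() @ S @ ket).real
    return (ket.conj() @ S @ S @ ket).real - avg**2

lhs = variance(Sx) * variance(Sy)
rhs = 0.25 * abs(ket.conj() @ (Sx @ Sy - Sy @ Sx) @ ket) ** 2
assert np.isclose(lhs, 0.0) and np.isclose(rhs, 0.0)  # equal, and both zero
\end{verbatim}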
Question: Degenerate ket space example. ([1] pr. 1.23)
Consider operators with representation
\begin{equation}\label{eqn:moreKet:20}
A =
\begin{bmatrix}
a & 0 & 0 \\
0 & -a & 0 \\
0 & 0 & -a
\end{bmatrix}
,
\qquad
B =
\begin{bmatrix}
b & 0 & 0 \\
0 & 0 & -ib \\
0 & ib & 0
\end{bmatrix}.
\end{equation}
Show that both operators have degenerate eigenvalues, show that they commute, and find a simultaneous eigenbasis for both.
Answer
The eigenvalues and eigenvectors for \( A \) can be read off by inspection, with values of \( a, -a, -a \), and kets
\begin{equation}\label{eqn:moreKet:40}
\ket{a_1} =
\begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix},
\ket{a_2} =
\begin{bmatrix}
0 \\
1 \\
0
\end{bmatrix},
\ket{a_3} =
\begin{bmatrix}
0 \\
0 \\
1 \\
\end{bmatrix}.
\end{equation}
Notice that the lower-right \( 2 \times 2 \) submatrix of \( B \) is proportional to \( \sigma_y \), so its eigenkets can be read off by inspection
\begin{equation}\label{eqn:moreKet:60}
\ket{b_1} =
\begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix},
\ket{b_2} =
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
1 \\
i
\end{bmatrix},
\ket{b_3} =
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
1 \\
-i \\
\end{bmatrix}.
\end{equation}
Computing \( B \ket{b_i} \) shows that the eigenvalues are \( b, b, -b \) respectively.
Because of the two-fold degeneracy in the \( -a \) eigenvalues of \( A \), any linear combination of \( \ket{a_2}, \ket{a_3} \) will also be an eigenket. In particular,
\begin{equation}\label{eqn:moreKet:80}
\begin{aligned}
\inv{\sqrt{2}} \lr{ \ket{a_2} + i \ket{a_3} } &= \ket{b_2} \\
\inv{\sqrt{2}} \lr{ \ket{a_2} - i \ket{a_3} } &= \ket{b_3},
\end{aligned}
\end{equation}
so the basis \( \setlr{ \ket{b_i} } \) is a simultaneous eigenbasis for both \( A \) and \( B \). Because there is a simultaneous eigenbasis, the matrices must commute. This can be confirmed with direct computation
\begin{equation}\label{eqn:moreKet:100}
\begin{aligned}
A B
&= a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & -i \\
0 & i & 0
\end{bmatrix} \\
&=
a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & i \\
0 & -i & 0
\end{bmatrix},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:moreKet:120}
\begin{aligned}
B A
&= a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & -i \\
0 & i & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{bmatrix} \\
&=
a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & i \\
0 & -i & 0
\end{bmatrix}.
\end{aligned}
\end{equation}
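Both the commutation and the simultaneous eigenkets are easy to verify numerically. A sketch, assuming numpy, with arbitrary non-zero values for \( a, b \):
\begin{verbatim}
import numpy as np

a, b = 2.0, 3.0  # arbitrary non-zero values
A = np.diag([a, -a, -a]).astype(complex)
B = b * np.array([[1, 0, 0],
                  [0, 0, -1j],
                  [0, 1j, 0]])

assert np.allclose(A @ B, B @ A)  # the operators commute

b2 = np.array([0, 1, 1j]) / np.sqrt(2)
b3 = np.array([0, 1, -1j]) / np.sqrt(2)
assert np.allclose(B @ b2, b * b2)   # eigenvalue b
assert np.allclose(B @ b3, -b * b3)  # eigenvalue -b
assert np.allclose(A @ b2, -a * b2)  # simultaneous eigenket, eigenvalue -a
assert np.allclose(A @ b3, -a * b3)  # simultaneous eigenket, eigenvalue -a
\end{verbatim}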
Question: Unitary transformation. ([1] pr. 1.26)
Construct the transformation matrix that maps from the \( S_z \) diagonal basis to the \( S_x \) diagonal basis.
Answer
Based on the definition
\begin{equation}\label{eqn:moreKet:640}
U \ket{a^{(r)}} = \ket{b^{(r)}},
\end{equation}
the matrix elements can be computed
\begin{equation}\label{eqn:moreKet:660}
\bra{a^{(s)}} U \ket{a^{(r)}} = \braket{a^{(s)}}{b^{(r)}},
\end{equation}
that is
\begin{equation}\label{eqn:moreKet:680}
\begin{aligned}
U
&=
\begin{bmatrix}
\bra{a^{(1)}} U \ket{a^{(1)}} & \bra{a^{(1)}} U \ket{a^{(2)}} \\
\bra{a^{(2)}} U \ket{a^{(1)}} & \bra{a^{(2)}} U \ket{a^{(2)}}
\end{bmatrix} \\
&=
\begin{bmatrix}
\braket{a^{(1)}}{b^{(1)}} & \braket{a^{(1)}}{b^{(2)}} \\
\braket{a^{(2)}}{b^{(1)}} & \braket{a^{(2)}}{b^{(2)}}
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ 1
\end{bmatrix} &
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ -1
\end{bmatrix} \\
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
1 \\ 1
\end{bmatrix} &
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
1 \\ -1
\end{bmatrix} \\
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{aligned}
\end{equation}
As a similarity transformation, we have
\begin{equation}\label{eqn:moreKet:700}
\begin{aligned}
\bra{b^{(r)}} S_z \ket{b^{(s)}}
&=
\braket{b^{(r)}}{a^{(t)}}\bra{a^{(t)}} S_z \ket{a^{(u)}}\braket{a^{(u)}}{b^{(s)}} \\
&=
\bra{a^{(r)}} U^\dagger \ket{a^{(t)}}\bra{a^{(t)}} S_z \ket{a^{(u)}}\bra{a^{(u)}} U \ket{a^{(s)}},
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:moreKet:720}
S_z' = U^\dagger S_z U.
\end{equation}
Let's check that the computed similarity transformation does its job.
\begin{equation}\label{eqn:moreKet:740}
\begin{aligned}
\sigma_z'
&= U^\dagger \sigma_z U \\
&= \inv{2}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\inv{2}
\begin{bmatrix}
1 & -1 \\
1 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\inv{2}
\begin{bmatrix}
0 & 2 \\
2 & 0
\end{bmatrix} \\
&= \sigma_x.
\end{aligned}
\end{equation}
The transformation matrix can also be computed more directly
\begin{equation}\label{eqn:moreKet:760}
\begin{aligned}
U
&= U \ket{a^{(r)}} \bra{a^{(r)}} \\
&= \ket{b^{(r)}}\bra{a^{(r)}} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}
\begin{bmatrix}
1 & 0
\end{bmatrix}
+
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
-1
\end{bmatrix}
\begin{bmatrix}
0 & 1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 0 \\
1 & 0
\end{bmatrix}
+
\inv{\sqrt{2}}
\begin{bmatrix}
0 & 1 \\
0 & -1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{aligned}
\end{equation}
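As a final check, here is a numerical verification that \( U \) maps the \( S_z \) eigenkets onto the \( S_x \) eigenkets, and that the similarity transformation recovers \( \sigma_x \). A sketch, assuming numpy:
\begin{verbatim}
import numpy as np

U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
sigma_z = np.array([[1, 0], [0, -1]])
sigma_x = np.array([[0, 1], [1, 0]])

# U takes the S_z eigenkets to the S_x eigenkets
assert np.allclose(U @ np.array([1, 0]), np.array([1, 1]) / np.sqrt(2))
assert np.allclose(U @ np.array([0, 1]), np.array([1, -1]) / np.sqrt(2))

# and the similarity transformation takes sigma_z to sigma_x
assert np.allclose(U.conj().T @ sigma_z @ U, sigma_x)
\end{verbatim}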
References
[1] Jun John Sakurai and Jim J. Napolitano. Modern Quantum Mechanics. Pearson Higher Ed, 2014.