
SHO translation operator expectation

September 2, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Question: SHO translation operator expectation ([1] pr. 2.12)

Using the Heisenberg picture evaluate the expectation of the position operator \( \expectation{x} \) with respect to the initial time state

\begin{equation}\label{eqn:translationExpectation:20}
\ket{\alpha, 0} = e^{-i p_0 a/\Hbar} \ket{0},
\end{equation}

where \( p_0 \) is the initial time momentum operator, and \( a \) is a constant with dimensions of position.

Answer

Recall that the Heisenberg picture position operator expands to

\begin{equation}\label{eqn:translationExpectation:40}
x^{\textrm{H}}(t)
= U^\dagger x U
= x_0 \cos(\omega t) + \frac{p_0}{m \omega} \sin(\omega t),
\end{equation}

so the expectation of the position operator is
\begin{equation}\label{eqn:translationExpectation:60}
\begin{aligned}
\expectation{x}
&=
\bra{0} e^{i p_0 a/\Hbar} \lr{ x_0 \cos(\omega t) + \frac{p_0}{m \omega}
\sin(\omega t) } e^{-i p_0 a/\Hbar} \ket{0} \\
&=
\bra{0} \lr{ e^{i p_0 a/\Hbar} x_0 e^{-i p_0 a/\Hbar} \cos(\omega t) + \frac{p_0}{m \omega} \sin(\omega t) } \ket{0}.
\end{aligned}
\end{equation}

The exponential sandwich above can be expanded using the Baker-Campbell-Hausdorff [2] formula

\begin{equation}\label{eqn:translationExpectation:80}
\begin{aligned}
e^{i p_0 a/\Hbar} x_0 e^{-i p_0 a/\Hbar}
&=
x_0
+ \frac{i a}{\Hbar} \antisymmetric{p_0}{x_0}
+ \inv{2!} \lr{\frac{i a}{\Hbar}}^2 \antisymmetric{p_0}{\antisymmetric{p_0}{x_0}}
+ \cdots \\
&=
x_0
+ \frac{i a}{\Hbar} \lr{ -i \Hbar }
+ \inv{2!} \lr{\frac{i a}{\Hbar}}^2 \antisymmetric{p_0}{-i \Hbar}
+ \cdots \\
&=
x_0 + a.
\end{aligned}
\end{equation}

The position expectation with respect to this translated state is

\begin{equation}\label{eqn:translationExpectation:100}
\begin{aligned}
\expectation{x}
&= \bra{0} \lr{ (x_0 + a)\cos(\omega t) + \frac{p_0}{m \omega} \sin(\omega t)
}\ket{0} \\
&= a \cos(\omega t).
\end{aligned}
\end{equation}

The final simplification above follows from \( \bra{n} x \ket{n} = \bra{n} p \ket{n} = 0 \).
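This result is easy to check numerically in a truncated Fock basis. Here's a quick sketch, where \( \Hbar = m = \omega = 1 \) and the basis size are assumptions, and the exponential is built by diagonalizing \( p_0 \):

```python
import numpy as np

# Numerical check of <x(t)> = a cos(w t) for the translated ground state
# exp(-i p_0 a/hbar) |0>, in a truncated Fock basis.  hbar = m = w = 1
# and the basis size N are assumptions of this sketch.
N = 60
adag = np.diag(np.sqrt(np.arange(1, N)), -1)   # creation operator
a_op = adag.T                                  # annihilation operator
x0 = (a_op + adag) / np.sqrt(2)                # x_0 = (a + a^dag)/sqrt(2)
p0 = -1j * (a_op - adag) / np.sqrt(2)          # p_0 = -i(a - a^dag)/sqrt(2)

a = 0.5                                        # translation distance
w, V = np.linalg.eigh(p0)                      # diagonalize p_0 to exponentiate it
U = V @ np.diag(np.exp(-1j * a * w)) @ V.conj().T   # exp(-i p_0 a)
psi = U[:, 0]                                  # exp(-i p_0 a) |0>

for t in (0.0, 0.4, 1.3):
    xt = x0 * np.cos(t) + p0 * np.sin(t)       # Heisenberg x(t)
    print(np.real(psi.conj() @ xt @ psi), a * np.cos(t))
```

The two printed columns agree to machine precision for this small displacement.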

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

[2] Wikipedia. Baker–Campbell–Hausdorff formula — Wikipedia, The Free Encyclopedia, 2015. URL https://en.wikipedia.org/w/index.php?title=Baker\%E2\%80\%93Campbell\%E2\%80\%93Hausdorff_formula&oldid=665123858. [Online; accessed 16-August-2015].

Quantum Virial Theorem

August 31, 2015 phy1520


Question: Quantum Virial Theorem ([1] pr. 2.7)

Consider a particle with Hamiltonian

\begin{equation}\label{eqn:qmVirialTheorem:20}
H = \frac{\Bp^2}{2 m} + V(\Bx).
\end{equation}

By calculating the time evolution of \( \Bx \cdot \Bp \) using the commutator \( \antisymmetric{\Bx \cdot \Bp}{H} \), identify the quantum virial theorem and show the conditions where it is satisfied.

Answer

\begin{equation}\label{eqn:qmVirialTheorem:40}
\begin{aligned}
\antisymmetric{\Bx \cdot \Bp}{H}
&=
\inv{2 m} \antisymmetric{\Bx \cdot \Bp}{\Bp^2} + \antisymmetric{\Bx \cdot \Bp}{V(\Bx)} \\
&=
\inv{2 m} \lr{ x_r p_r \Bp^2 - \Bp^2 x_r p_r}
+
\lr{ x_r p_r V(\Bx) - V(\Bx) x_r p_r } \\
&=
\inv{2 m} \antisymmetric{ x_r }{\Bp^2} p_r
+
x_r \antisymmetric{ p_r}{ V(\Bx)},
\end{aligned}
\end{equation}

Evaluating those commutators separately gives

\begin{equation}\label{eqn:qmVirialTheorem:60}
\begin{aligned}
\antisymmetric{ x_r }{\Bp^2}
&=
\antisymmetric{ x_r }{p_r^2}\qquad \text{no sum} \\
&=
2 i \Hbar p_r,
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:qmVirialTheorem:80}
\antisymmetric{ p_r}{ V(\Bx)}
= -i \Hbar \PD{x_r}{V(\Bx)},
\end{equation}

so
\begin{equation}\label{eqn:qmVirialTheorem:100}
\begin{aligned}
\ddt{}\lr{\Bx \cdot \Bp}
&=
\inv{i \Hbar}
\antisymmetric{\Bx \cdot \Bp}{H} \\
&=
\inv{2 m} 2 p_r p_r - x_r \PD{x_r}{V(\Bx)} \\
&=
\frac{\Bp^2}{m} - \Bx \cdot \spacegrad V(\Bx).
\end{aligned}
\end{equation}

Taking expectation values, assuming that the states are independent of time, we have

\begin{equation}\label{eqn:qmVirialTheorem:120}
\begin{aligned}
0
&= \ddt{} \expectation{ \Bx \cdot \Bp } \\
&= \expectation{\frac{\Bp^2}{m}} - \expectation{\Bx \cdot \spacegrad V(\Bx)}.
\end{aligned}
\end{equation}

Note that taking the expectation with respect to stationary states was required to interchange the time derivative with the expectation operation.

The right hand side is the quantum equivalent of the virial theorem, relating the average kinetic energy to the potential

\begin{equation}\label{eqn:qmVirialTheorem:140}
2 \expectation{T} = \expectation{\Bx \cdot \spacegrad V(\Bx)}.
\end{equation}
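As a sanity check, this equality can be verified numerically for SHO stationary states, for which \( V = x^2/2 \) gives \( \Bx \cdot \spacegrad V = x^2 \). This is a sketch assuming \( \Hbar = m = \omega = 1 \) and a truncated Fock basis:

```python
import numpy as np

# Check 2<T> = <x V'(x)> for SHO stationary states |n>, with V = x^2/2
# so that x V'(x) = x^2.  hbar = m = w = 1 and the basis truncation N
# are assumptions of this sketch.
N = 40
adag = np.diag(np.sqrt(np.arange(1, N)), -1)   # creation operator
a_op = adag.T                                  # annihilation operator
x = (a_op + adag) / np.sqrt(2)
p = -1j * (a_op - adag) / np.sqrt(2)

p2 = (p @ p).real                # 2 T = p^2 (a real matrix in this basis)
x2 = x @ x                       # x V'(x) = x^2

for n in range(5):
    ket = np.zeros(N)
    ket[n] = 1.0
    print(ket @ p2 @ ket, ket @ x2 @ ket)   # both equal n + 1/2
```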

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Heisenberg picture position commutator

August 14, 2015 phy1520


Question: Heisenberg picture position commutator ([1] pr. 2.5)

Evaluate

\begin{equation}\label{eqn:positionCommutator:20}
\antisymmetric{x(t)}{x(0)},
\end{equation}

for a Heisenberg picture operator \( x(t) \) for a free particle.

Answer

The free particle Hamiltonian is

\begin{equation}\label{eqn:positionCommutator:40}
H = \frac{p^2}{2m},
\end{equation}

so the time evolution operator is

\begin{equation}\label{eqn:positionCommutator:60}
U(t) = e^{-i p^2 t/(2 m \Hbar)}.
\end{equation}

The Heisenberg picture position operator is

\begin{equation}\label{eqn:positionCommutator:80}
\begin{aligned}
x^\textrm{H}
&= U^\dagger x U \\
&= e^{i p^2 t/(2 m \Hbar)} x e^{-i p^2 t/(2 m \Hbar)} \\
&= \sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i p^2 t}{2 m \Hbar} }^k
x
e^{-i p^2 t/(2 m \Hbar)} \\
&= \sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k p^{2k} x
e^{-i p^2 t/(2 m \Hbar)} \\
&=
\sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k \lr{ \antisymmetric{p^{2k}}{x} + x p^{2k} }
e^{-i p^2 t/(2 m \Hbar)} \\
&= x +
\sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k \antisymmetric{p^{2k}}{x}
e^{-i p^2 t/(2 m \Hbar)} \\
&= x +
\sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k \lr{ -i \Hbar \PD{p}{p^{2k}} }
e^{-i p^2 t/(2 m \Hbar)} \\
&= x +
\sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k \lr{ -i \Hbar 2 k p^{2 k -1} }
e^{-i p^2 t/(2 m \Hbar)} \\
&= x
- 2 i \Hbar p \frac{i t}{2 m \Hbar} \sum_{k = 1}^\infty \inv{(k-1)!} \lr{ \frac{i t}{2 m \Hbar} }^{k-1} p^{2(k - 1)}
e^{-i p^2 t/(2 m \Hbar)} \\
&= x + t \frac{p}{m}.
\end{aligned}
\end{equation}

This has the structure of a classical free particle \( x(t) = x + v t \), but in this case \( x,p \) are operators.

The evolved position commutator is
\begin{equation}\label{eqn:positionCommutator:100}
\begin{aligned}
\antisymmetric{x(t)}{x(0)}
&=
\antisymmetric{x + t p/m}{x} \\
&=
\frac{t}{m} \antisymmetric{p}{x} \\
&=
-i \Hbar \frac{t}{m}.
\end{aligned}
\end{equation}

Compare this to the classical Poisson bracket
\begin{equation}\label{eqn:positionCommutator:120}
\antisymmetric{x(t)}{x(0)}_{\textrm{classical}}
=
\PD{x}{}\lr{x + p t/m} \PD{p}{x} - \PD{p}{}\lr{x + p t/m} \PD{x}{x}
=
- \frac{t}{m}.
\end{equation}

This has the expected relation \( \antisymmetric{x(t)}{x(0)} = i \Hbar \antisymmetric{x(t)}{x(0)}_{\textrm{classical}} \).
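This commutator can also be spot checked symbolically in the momentum representation, where \( x = i \Hbar\, \partial/\partial p \) acting on a test function (a representation choice made for verification only, not something used in the derivation above):

```python
import sympy as sp

# Momentum-representation check of [x(t), x(0)] = -i hbar t/m for the
# free particle, using x = i hbar d/dp acting on a test function.
p, t, m, hbar = sp.symbols('p t m hbar', positive=True)
f = sp.Function('f')(p)

X = lambda g: sp.I * hbar * sp.diff(g, p)    # x(0)
Xt = lambda g: X(g) + (t * p / m) * g        # x(t) = x + t p/m

comm = sp.simplify(Xt(X(f)) - X(Xt(f)))      # [x(t), x(0)] acting on f
assert sp.simplify(comm + sp.I * hbar * (t / m) * f) == 0
```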

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Translation operator problems

August 7, 2015 phy1520


Question: One dimensional translation operator. ([1] pr. 1.28)

(a)

Evaluate the classical Poisson bracket

\begin{equation}\label{eqn:translation:420}
\antisymmetric{x}{F(p)}_{\textrm{classical}}
\end{equation}

(b)

Evaluate the commutator

\begin{equation}\label{eqn:translation:440}
\antisymmetric{x}{e^{i p a/\Hbar}}
\end{equation}

(c)

Using the result in \ref{problem:translation:28:b}, prove that
\begin{equation}\label{eqn:translation:460}
e^{i p a/\Hbar} \ket{x'},
\end{equation}

is an eigenstate of the coordinate operator \( x \).

Answer

(a)

\begin{equation}\label{eqn:translation:480}
\begin{aligned}
\antisymmetric{x}{F(p)}_{\textrm{classical}}
&=
\PD{x}{x} \PD{p}{F(p)} - \PD{p}{x} \PD{x}{F(p)} \\
&=
\PD{p}{F(p)}.
\end{aligned}
\end{equation}

(b)

Having worked backwards through these problems, the answer for this one dimensional problem can be obtained from \ref{eqn:translation:140} by substituting \( \Bl \rightarrow -a \Be_1 \), and is

\begin{equation}\label{eqn:translation:500}
\antisymmetric{x}{e^{i p a/\Hbar}} = -a e^{i p a/\Hbar}.
\end{equation}

(c)

\begin{equation}\label{eqn:translation:520}
\begin{aligned}
x e^{i p a/\Hbar} \ket{x'}
&=
\lr{
\antisymmetric{x}{e^{i p a/\Hbar}}
+
e^{i p a/\Hbar} x
}
\ket{x'} \\
&=
\lr{ -a e^{i p a/\Hbar} + e^{i p a/\Hbar} x' } \ket{x'} \\
&= \lr{ x' - a } e^{i p a/\Hbar} \ket{x'}.
\end{aligned}
\end{equation}

This demonstrates that \( e^{i p a/\Hbar} \ket{x'} \) is an eigenstate of \( x \) with eigenvalue \( x' - a \).

Question: Polynomial commutators. ([1] pr. 1.29)

(a)

For power series \( F, G \), verify

\begin{equation}\label{eqn:translation:180}
\antisymmetric{x_k}{G(\Bp)} = i \Hbar \PD{p_k}{G}, \qquad
\antisymmetric{p_k}{F(\Bx)} = -i \Hbar \PD{x_k}{F}.
\end{equation}

(b)

Evaluate \( \antisymmetric{x^2}{p^2} \), and compare to the classical Poisson bracket \( \antisymmetric{x^2}{p^2}_{\textrm{classical}} \).

Answer

(a)

Let

\begin{equation}\label{eqn:translation:200}
\begin{aligned}
G(\Bp) &= \sum_{k l m} a_{k l m} p_1^k p_2^l p_3^m \\
F(\Bx) &= \sum_{k l m} b_{k l m} x_1^k x_2^l x_3^m.
\end{aligned}
\end{equation}

It is simpler to work with a specific \( x_k \), say \( x_k = y \); the general result will still be clear after doing so. Expanding the commutator gives

\begin{equation}\label{eqn:translation:220}
\begin{aligned}
\antisymmetric{y}{G(\Bp)}
&=
\sum_{k l m} a_{k l m} \antisymmetric{y}{p_1^k p_2^l p_3^m } \\
&=
\sum_{k l m} a_{k l m} \lr{
y p_1^k p_2^l p_3^m - p_1^k p_2^l p_3^m y
} \\
&=
\sum_{k l m} a_{k l m} \lr{
p_1^k y p_2^l p_3^m - p_1^k p_2^l y p_3^m
} \\
&=
\sum_{k l m} a_{k l m}
p_1^k
\antisymmetric{y}{p_2^l}
p_3^m.
\end{aligned}
\end{equation}

From \ref{eqn:translation:100}, we have \( \antisymmetric{y}{p_2^l} = l i \Hbar p_2^{l-1} \), so

\begin{equation}\label{eqn:translation:240}
\begin{aligned}
\antisymmetric{y}{G(\Bp)}
&=
\sum_{k l m} a_{k l m}
p_1^k
\lr{ l
i \Hbar p_2^{l-1}
}
p_3^m \\
&=
i \Hbar \PD{p_2}{G(\Bp)}.
\end{aligned}
\end{equation}

It is straightforward to show that
\( \antisymmetric{p}{x^l} = -l i \Hbar x^{l-1} \), allowing for a similar computation of the momentum commutator

\begin{equation}\label{eqn:translation:260}
\begin{aligned}
\antisymmetric{p_y}{F(\Bx)}
&=
\sum_{k l m} b_{k l m} \antisymmetric{p_y}{x_1^k x_2^l x_3^m } \\
&=
\sum_{k l m} b_{k l m} \lr{
p_y x_1^k x_2^l x_3^m - x_1^k x_2^l x_3^m p_y
} \\
&=
\sum_{k l m} b_{k l m} \lr{
x_1^k p_y x_2^l x_3^m - x_1^k x_2^l p_y x_3^m
} \\
&=
\sum_{k l m} b_{k l m}
x_1^k
\antisymmetric{p_y}{x_2^l}
x_3^m \\
&=
\sum_{k l m} b_{k l m}
x_1^k
\lr{ -l i \Hbar x_2^{l-1}}
x_3^m \\
&=
-i \Hbar \PD{x_2}{F(\Bx)}.
\end{aligned}
\end{equation}
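Both identities are easy to spot check symbolically, representing \( p_k = -i\Hbar\, \partial/\partial x_k \) on functions of \( \Bx \), and \( x_k = i\Hbar\, \partial/\partial p_k \) on functions of \( \Bp \). The sample polynomials here are arbitrary choices:

```python
import sympy as sp

# Spot check of [x_k, G(p)] = i hbar dG/dp_k and [p_k, F(x)] = -i hbar dF/dx_k,
# using differential-operator representations acting on test functions.
# The sample polynomials F, G are arbitrary choices.
hbar = sp.symbols('hbar', positive=True)

# [p_y, F(x)] in the position representation, p_y = -i hbar d/dx2.
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
f = sp.Function('f')(x1, x2, x3)
Py = lambda g: -sp.I * hbar * sp.diff(g, x2)
F = x1**2 * x2**3 + 5 * x2 * x3
assert sp.simplify(Py(F * f) - F * Py(f) + sp.I * hbar * sp.diff(F, x2) * f) == 0

# [y, G(p)] in the momentum representation, y = i hbar d/dp2.
p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)
g = sp.Function('g')(p1, p2, p3)
Y = lambda h: sp.I * hbar * sp.diff(h, p2)
G = p1 * p2**4 + 2 * p3**2 * p2
assert sp.simplify(Y(G * g) - G * Y(g) - sp.I * hbar * sp.diff(G, p2) * g) == 0
```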

(b)

It isn’t clear to me how the results above can be used directly to compute \( \antisymmetric{x^2}{p^2} \). However, when the first factor of such a commutator is a monomial, it can be expanded in terms of an \( x \) commutator

\begin{equation}\label{eqn:translation:280}
\begin{aligned}
\antisymmetric{x^2}{G(\Bp)}
&= x^2 G - G x^2 \\
&= x \lr{ x G } - G x^2 \\
&= x \lr{ \antisymmetric{ x }{ G } + G x } - G x^2 \\
&= x \antisymmetric{ x }{ G } + \lr{ x G } x - G x^2 \\
&= x \antisymmetric{ x }{ G } + \lr{ \antisymmetric{ x }{ G } + G x } x - G x^2 \\
&= x \antisymmetric{ x }{ G } + \antisymmetric{ x }{ G } x.
\end{aligned}
\end{equation}

Similarly,

\begin{equation}\label{eqn:translation:300}
\antisymmetric{x^3}{G(\Bp)} = x^2 \antisymmetric{ x }{ G } + x \antisymmetric{ x }{ G } x + \antisymmetric{ x }{ G } x^2.
\end{equation}

An induction hypothesis can be formed

\begin{equation}\label{eqn:translation:320}
\antisymmetric{x^k}{G(\Bp)} = \sum_{j = 0}^{k-1} x^{k-1-j} \antisymmetric{ x }{ G } x^j,
\end{equation}

and demonstrated

\begin{equation}\label{eqn:translation:340}
\begin{aligned}
\antisymmetric{x^{k+1}}{G(\Bp)}
&=
x^{k+1} G - G x^{k+1} \\
&=
x \lr{ x^{k} G } - G x^{k+1} \\
&=
x \lr{ \antisymmetric{x^{k}}{G} + G x^k } - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \lr{ x G } x^k - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \lr{ \antisymmetric{x}{G} + G x } x^k - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \antisymmetric{x}{G} x^k \\
&=
x \sum_{j = 0}^{k-1} x^{k-1-j} \antisymmetric{ x }{ G } x^j + \antisymmetric{x}{G} x^k \\
&=
\sum_{j = 0}^{k-1} x^{(k+1)-1-j} \antisymmetric{ x }{ G } x^j + \antisymmetric{x}{G} x^k \\
&=
\sum_{j = 0}^{k} x^{(k+1)-1-j} \antisymmetric{ x }{ G } x^j.
\end{aligned}
\end{equation}

That was a bit overkill for this problem, but may be useful later. Application of this to the problem gives

\begin{equation}\label{eqn:translation:360}
\begin{aligned}
\antisymmetric{x^2}{p^2}
&=
x \antisymmetric{x}{p^2}
+ \antisymmetric{x}{p^2} x \\
&=
x i \Hbar \PD{p}{p^2}
+ i \Hbar \PD{p}{p^2} x \\
&=
x 2 i \Hbar p
+ 2 i \Hbar p x \\
&= i \Hbar \lr{ 2 x p + 2 p x }.
\end{aligned}
\end{equation}
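A quick check of this result in the position representation, with \( p = -i\Hbar\, d/dx \) acting on a test function (again a representation choice made only for verification):

```python
import sympy as sp

# Verify [x^2, p^2] = i hbar (2 x p + 2 p x), with p = -i hbar d/dx
# acting on a test function f(x).
x, hbar = sp.symbols('x hbar', positive=True)
f = sp.Function('f')(x)
P = lambda g: -sp.I * hbar * sp.diff(g, x)

lhs = x**2 * P(P(f)) - P(P(x**2 * f))             # [x^2, p^2] f
rhs = sp.I * hbar * (2 * x * P(f) + 2 * P(x * f)) # i hbar (2 x p + 2 p x) f
assert sp.simplify(lhs - rhs) == 0
```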

The classical Poisson bracket is
\begin{equation}\label{eqn:translation:380}
\begin{aligned}
\antisymmetric{x^2}{p^2}_{\textrm{classical}}
&=
\PD{x}{x^2} \PD{p}{p^2} - \PD{p}{x^2} \PD{x}{p^2} \\
&=
2 x 2 p \\
&= 2 x p + 2 p x.
\end{aligned}
\end{equation}

This demonstrates the expected relation between the classical and quantum commutators

\begin{equation}\label{eqn:translation:400}
\antisymmetric{x^2}{p^2} = i \Hbar \antisymmetric{x^2}{p^2}_{\textrm{classical}}.
\end{equation}

Question: Translation operator and position expectation. ([1] pr. 1.30)

The translation operator for a finite spatial displacement is given by

\begin{equation}\label{eqn:translation:20}
J(\Bl) = \exp\lr{ -i \Bp \cdot \Bl/\Hbar },
\end{equation}

where \( \Bp \) is the momentum operator.

(a)

Evaluate

\begin{equation}\label{eqn:translation:40}
\antisymmetric{x_i}{J(\Bl)}.
\end{equation}

(b)

Demonstrate how the expectation value \( \expectation{\Bx} \) changes under translation.

Answer

(a)

For clarity, let’s set \( x_i = y \); the general result will be clear regardless.

\begin{equation}\label{eqn:translation:60}
\antisymmetric{y}{J(\Bl)}
=
\sum_{k = 0}^\infty \inv{k!} \lr{\frac{-i}{\Hbar}}^k
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}.
\end{equation}

The commutator expands as

\begin{equation}\label{eqn:translation:80}
\begin{aligned}
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}
+ \lr{ \Bp \cdot \Bl }^k y
&=
y \lr{ \Bp \cdot \Bl }^k \\
&=
y \lr{ p_x l_x + p_y l_y + p_z l_z } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ p_x l_x y + y p_y l_y + p_z l_z y } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ p_x l_x y + l_y \lr{ p_y y + i \Hbar } + p_z l_z y } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ \Bp \cdot \Bl } y \lr{ \Bp \cdot \Bl }^{k-1}
+ i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&= \cdots \\
&=
\lr{ \Bp \cdot \Bl }^{k-1} y \lr{ \Bp \cdot \Bl }^{k-(k-1)}
+ (k-1) i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ \Bp \cdot \Bl }^{k} y
+ k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1}.
\end{aligned}
\end{equation}

In the above expansion, the commutation of \( y \) with \( p_x, p_z \) has been used. This gives, for \( k \ne 0 \),

\begin{equation}\label{eqn:translation:100}
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}
=
k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1}.
\end{equation}

Note that this also holds for the \( k = 0 \) case, since \( y \) commutes with the identity operator. Plugging back into the \( J \) commutator, we have

\begin{equation}\label{eqn:translation:120}
\begin{aligned}
\antisymmetric{y}{J(\Bl)}
&=
\sum_{k = 1}^\infty \inv{k!} \lr{\frac{-i}{\Hbar}}^k
k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
l_y \sum_{k = 1}^\infty \inv{(k-1)!} \lr{\frac{-i}{\Hbar}}^{k-1}
\lr{ \Bp \cdot \Bl }^{k-1} \\
&=
l_y J(\Bl).
\end{aligned}
\end{equation}

The same pattern clearly applies with the other \( x_i \) values, providing the desired relation.

\begin{equation}\label{eqn:translation:140}
\antisymmetric{\Bx}{J(\Bl)} = \sum_{m = 1}^3 \Be_m l_m J(\Bl) = \Bl J(\Bl).
\end{equation}
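This relation can also be spot checked in the momentum representation, where \( y = i \Hbar\, \partial/\partial p_y \) and \( J(\Bl) \) acts by multiplication (a verification sketch, not part of the problem):

```python
import sympy as sp

# Momentum-representation check of [y, J(l)] = l_y J(l) for
# J(l) = exp(-i p . l / hbar), with y = i hbar d/dp_y.
px, py, pz = sp.symbols('px py pz', real=True)
lx, ly, lz, hbar = sp.symbols('lx ly lz hbar', positive=True)
f = sp.Function('f')(px, py, pz)

J = sp.exp(-sp.I * (px * lx + py * ly + pz * lz) / hbar)
Y = lambda g: sp.I * hbar * sp.diff(g, py)

comm = Y(J * f) - J * Y(f)                   # [y, J(l)] acting on f
assert sp.simplify(comm - ly * J * f) == 0
```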

(b)

Suppose that the translated state is defined as \( \ket{\alpha_{\Bl}} = J(\Bl) \ket{\alpha} \). The expectation value with respect to this state is

\begin{equation}\label{eqn:translation:160}
\begin{aligned}
\expectation{\Bx’}
&=
\bra{\alpha_{\Bl}} \Bx \ket{\alpha_{\Bl}} \\
&=
\bra{\alpha} J^\dagger(\Bl) \Bx J(\Bl) \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger(\Bl) \lr{ \Bx J(\Bl) } \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger(\Bl) \lr{ J(\Bl) \Bx + \Bl J(\Bl) } \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger J \Bx + \Bl J^\dagger J \ket{\alpha} \\
&=
\bra{\alpha} \Bx \ket{\alpha} + \Bl \braket{\alpha}{\alpha} \\
&=
\expectation{\Bx} + \Bl.
\end{aligned}
\end{equation}
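This shift of \( \expectation{\Bx} \) by \( \Bl \) is easy to see numerically in one dimension, applying \( J(l) = e^{-i p l/\Hbar} \) to a wavepacket via the FFT. The grid parameters, the Gaussian packet, and \( \Hbar = 1 \) are all assumptions of this sketch:

```python
import numpy as np

# Apply J(l) = exp(-i p l / hbar) to a Gaussian wavepacket on a grid,
# acting in the momentum (FFT) representation, and verify that <x>
# shifts by l.  hbar = 1 and the grid parameters are assumptions.
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # momentum grid (hbar = 1)

psi = np.exp(-(x - 1.0) ** 2)                # packet centered at x = 1
psi = psi / np.linalg.norm(psi)

l = 3.0
psi_l = np.fft.ifft(np.exp(-1j * k * l) * np.fft.fft(psi))   # J(l) psi

mean = lambda w: np.real(np.sum(np.conj(w) * x * w))
print(mean(psi), mean(psi_l))                # second value shifted by l
```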

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

More ket problems

August 5, 2015 phy1520


Question: Uncertainty relation. ([1] pr. 1.20)

Find the ket that maximizes the uncertainty product

\begin{equation}\label{eqn:moreKet:140}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2},
\end{equation}

and compare to the uncertainty bound \( \inv{4} \Abs{ \expectation{\antisymmetric{S_x}{S_y}}}^2 \).

Answer

To parameterize the ket space, consider first the kets for which both components are nonzero; such a ket can be parameterized by a single complex number

\begin{equation}\label{eqn:moreKet:160}
\ket{s} =
\begin{bmatrix}
\beta’ e^{i\phi’} \\
\alpha’ e^{i\theta’} \\
\end{bmatrix}
\propto
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix}
\end{equation}

The expectation values with respect to this ket are
\begin{equation}\label{eqn:moreKet:180}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 &
\alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix}
\alpha e^{i\theta} \\
1 \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\lr{ \alpha e^{i\theta} + \alpha e^{-i\theta} } \\
&=
\frac{\Hbar}{2}
2 \alpha \cos\theta \\
&=
\Hbar \alpha \cos\theta.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreKet:200}
\begin{aligned}
\expectation{S_y}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{i\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix}
-\alpha e^{i\theta} \\
1 \\
\end{bmatrix} \\
&=
\frac{-i \alpha \Hbar}{2} 2 i \sin\theta \\
&=
\alpha \Hbar \sin\theta.
\end{aligned}
\end{equation}

The variances are
\begin{equation}\label{eqn:moreKet:220}
\begin{aligned}
\lr{ \Delta S_x }^2
&=
\lr{
\frac{\Hbar}{2}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix}
}^2 \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 & -4 \alpha \cos\theta \\
-4 \alpha \cos\theta & 4 \alpha^2 \cos^2\theta + 1 \\
\end{bmatrix},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:moreKet:240}
\begin{aligned}
\lr{ \Delta S_y }^2
&=
\lr{
\frac{\Hbar}{2}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix}
}^2 \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 & 4 \alpha i \sin\theta \\
-4 \alpha i \sin\theta & 4 \alpha^2 \sin^2\theta + 1 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

The uncertainty factors are

\begin{equation}\label{eqn:moreKet:260}
\begin{aligned}
\expectation{\lr{\Delta S_x}^2}
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 & -4 \alpha \cos\theta \\
-4 \alpha \cos\theta & 4 \alpha^2 \cos^2\theta + 1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta}
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 -4 \alpha^2 \cos\theta e^{i\theta} \\
-4 \alpha \cos\theta + 4 \alpha^3 \cos^2\theta e^{i\theta} + \alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta + 1 -4 \alpha^2 \cos\theta e^{i\theta}
-4 \alpha^2 \cos\theta e^{-i\theta} + 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta + 1 -8 \alpha^2 \cos^2\theta
+ 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
-4 \alpha^2 \cos^2\theta + 1
+ 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2 + 1
}
,
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:moreKet:280}
\begin{aligned}
\expectation{ \lr{ \Delta S_y }^2 }
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 & 4 \alpha i \sin\theta \\
-4 \alpha i \sin\theta & 4 \alpha^2 \sin^2\theta + 1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta}
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 + 4 \alpha^2 i \sin\theta e^{i\theta} \\
-4 \alpha i \sin\theta + 4 \alpha^3 \sin^2\theta e^{i\theta} + \alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \sin^2\theta + 1 + 4 \alpha^2 i \sin\theta e^{i\theta}
-4 \alpha^2 i \sin\theta e^{-i\theta} + 4 \alpha^4 \sin^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
-4 \alpha^2 \sin^2\theta + 1
+ 4 \alpha^4 \sin^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \sin^2\theta \lr{ \alpha^2 - 1}
+ \alpha^2
+ 1
}
.
\end{aligned}
\end{equation}

The uncertainty product can finally be calculated

\begin{equation}\label{eqn:moreKet:300}
\begin{aligned}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
&=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^2 \cos^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2 + 1
}
\lr{
4 \alpha^2 \sin^2\theta \lr{ \alpha^2 - 1}
+ \alpha^2
+ 1
} \\
&=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^4 \sin^2 \lr{ 2\theta } \lr{ \alpha^2 - 1 }^2
+ 4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
}.
\end{aligned}
\end{equation}

The maximum occurs when \( f = \sin^2 2 \theta \) is extremized. Those points are
\begin{equation}\label{eqn:moreKet:320}
\begin{aligned}
0
&= \PD{\theta}{f} \\
&= 4 \sin 2 \theta \cos 2\theta \\
&= 2 \sin 4 \theta.
\end{aligned}
\end{equation}

Those points are at \( 4 \theta = \pi n \), for integer \( n \), or

\begin{equation}\label{eqn:moreKet:340}
\theta = \frac{\pi}{4} n, \qquad n = 0, 1, \cdots, 7.
\end{equation}

Minima occur when

\begin{equation}\label{eqn:moreKet:360}
0 < \PDSq{\theta}{f} = 8 \cos 4\theta,
\end{equation}

or

\begin{equation}\label{eqn:moreKet:380}
n = 0, 2, 4, 6.
\end{equation}

At these points \( \sin^2 2\theta \) takes the values

\begin{equation}\label{eqn:moreKet:400}
\sin^2 \lr{ 2 \frac{\pi}{4} \setlr{ 0, 2, 4, 6 } } = \sin^2 \lr{ \pi \setlr{ 0, 1, 2, 3 } } \in \setlr{ 0 },
\end{equation}

so the maximization of the uncertainty product can be reduced to that of

\begin{equation}\label{eqn:moreKet:420}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
}.
\end{equation}

We seek

\begin{equation}\label{eqn:moreKet:440}
\begin{aligned}
0
&= \PD{\alpha}{}
\lr{
4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
} \\
&=
8 \alpha \lr{ \alpha^4 - 1 }
+ 16 \alpha^5
+ 4 \lr{\alpha^2 + 1 } \alpha \\
&=
4 \alpha
\lr{
2 \alpha^4 - 2
+ 4 \alpha^4
+ \alpha^2 + 1
} \\
&=
4 \alpha \lr{ 3 \alpha^2 - 1 } \lr{ 2 \alpha^2 + 1 }.
\end{aligned}
\end{equation}

The real roots of this polynomial are \( \alpha = 0, \pm 1/\sqrt{3} \), and of these \( \alpha = 0 \) gives the largest uncertainty product, so the ket with both components allowed to be nonzero that maximizes the uncertainty product is

\begin{equation}\label{eqn:moreKet:460}
\ket{s} =
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= \ket{+}.
\end{equation}

The search for this maximizing value excluded those kets proportional to \( \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \ket{-} \). Let's evaluate this uncertainty product at both \( \ket{\pm} \), and compare to the uncertainty commutator. First \( \ket{s} = \ket{+} \)

\begin{equation}\label{eqn:moreKet:480}
\expectation{S_x}
=
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= 0,
\end{equation}

\begin{equation}\label{eqn:moreKet:500}
\expectation{S_y}
=
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= 0,
\end{equation}

so

\begin{equation}\label{eqn:moreKet:520}
\expectation{ \lr{ \Delta S_x }^2 }
=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= \lr{\frac{\Hbar}{2}}^2,
\end{equation}

\begin{equation}\label{eqn:moreKet:540}
\expectation{ \lr{ \Delta S_y }^2 }
=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= \lr{\frac{\Hbar}{2}}^2.
\end{equation}

For the commutator side of the uncertainty relation we have

\begin{equation}\label{eqn:moreKet:560}
\begin{aligned}
\inv{4} \Abs{ \expectation{ \antisymmetric{ S_x}{ S_y } } }^2
&=
\inv{4} \Abs{ \expectation{ i \Hbar S_z } }^2 \\
&=
\lr{ \frac{\Hbar}{2} }^4
\Abs{
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
}^2,
\end{aligned}
\end{equation}

so for the \( \ket{+} \) state we have an equality condition for the uncertainty relation

\begin{equation}\label{eqn:moreKet:580}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
= \inv{4} \Abs{ \expectation{\antisymmetric{S_x}{S_y}}}^2
= \lr{ \frac{\Hbar}{2} }^4.
\end{equation}

It's reasonable to guess that the \( \ket{-} \) state also matches the equality condition. Let's check

\begin{equation}\label{eqn:moreKet:600}
\expectation{S_x}
=
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
0 \\
1
\end{bmatrix}
= 0,
\end{equation}

\begin{equation}\label{eqn:moreKet:620}
\expectation{S_y}
=
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
0 \\
1
\end{bmatrix}
= 0,
\end{equation}

so \( \expectation{ \lr{ \Delta S_x }^2 } = \expectation{ \lr{ \Delta S_y }^2 } = \lr{\frac{\Hbar}{2}}^2 \). The commutator side of the uncertainty relation is identical, so the equality of \ref{eqn:moreKet:580} is satisfied for both \( \ket{\pm} \).

Note that it wasn't explicitly verified that \( \ket{-} \) maximizes the uncertainty product, but I don't feel like working through that second algebraic mess. Note also that the equality condition does not by itself mean that the product is maximized. For example, it is straightforward to show that \( \ket{ S_x ; \pm } \) also satisfy the equality condition of the uncertainty relation; in that case the product is not maximized, but is zero.
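These claims are cheap to verify numerically with the Pauli matrix representations, using \( S_i = (\Hbar/2) \sigma_i \) with \( \Hbar = 1 \) (a unit choice):

```python
import numpy as np

# Verify the uncertainty product and commutator bound for |+>, |->, and
# |Sx;+>, with S_i = (hbar/2) sigma_i and hbar = 1 (a unit choice).
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]])

def variance(S, ket):
    mean = np.real(ket.conj() @ S @ ket)
    D = S - mean * np.eye(2)
    return np.real(ket.conj() @ D @ D @ ket)

def bound(ket):
    C = Sx @ Sy - Sy @ Sx                    # [Sx, Sy] = i hbar Sz
    return abs(ket.conj() @ C @ ket) ** 2 / 4

plus = np.array([1.0, 0.0], dtype=complex)
minus = np.array([0.0, 1.0], dtype=complex)
sxplus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

for ket in (plus, minus, sxplus):
    product = variance(Sx, ket) * variance(Sy, ket)
    print(product, bound(ket))               # equal in every case
```

For \( \ket{\pm} \) both sides are \( 1/16 = (\Hbar/2)^4 \), while for \( \ket{S_x; +} \) both sides vanish, illustrating equality with a product that is zero rather than maximal.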

Question: Degenerate ket space example. ([1] pr. 1.23)

Consider operators with representation

\begin{equation}\label{eqn:moreKet:20}
A =
\begin{bmatrix}
a & 0 & 0 \\
0 & -a & 0 \\
0 & 0 & -a
\end{bmatrix}
,
\qquad
B =
\begin{bmatrix}
b & 0 & 0 \\
0 & 0 & -ib \\
0 & ib & 0
\end{bmatrix}.
\end{equation}

Show that these both have degeneracies, commute, and compute a simultaneous ket space for both operators.

Answer

The eigenvalues and eigenvectors for \( A \) can be read off by inspection, with values of \( a, -a, -a \), and kets

\begin{equation}\label{eqn:moreKet:40}
\ket{a_1} =
\begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix},
\ket{a_2} =
\begin{bmatrix}
0 \\
1 \\
0
\end{bmatrix},
\ket{a_3} =
\begin{bmatrix}
0 \\
0 \\
1 \\
\end{bmatrix}
\end{equation}

Notice that the lower-right \( 2 \times 2 \) submatrix of \( B \) is proportional to \( \sigma_y \), so its eigenvectors can be formed by inspection

\begin{equation}\label{eqn:moreKet:60}
\ket{b_1} =
\begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix},
\ket{b_2} =
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
1 \\
i
\end{bmatrix},
\ket{b_3} =
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
1 \\
-i \\
\end{bmatrix}.
\end{equation}

Computing \( B \ket{b_i} \) shows that the eigenvalues are \( b, b, -b \) respectively.

Because of the two-fold degeneracy in the \( -a \) eigenvalues of \( A \), any linear combination of \( \ket{a_2}, \ket{a_3} \) will also be an eigenket. In particular,

\begin{equation}\label{eqn:moreKet:80}
\begin{aligned}
\inv{\sqrt{2}} \lr{ \ket{a_2} + i \ket{a_3} } &= \ket{b_2} \\
\inv{\sqrt{2}} \lr{ \ket{a_2} - i \ket{a_3} } &= \ket{b_3},
\end{aligned}
\end{equation}

so the basis \( \setlr{ \ket{b_i}} \) is a simultaneous eigenbasis for both \( A \) and \( B \). Because there is a simultaneous eigenbasis, the matrices must commute. This can be confirmed with direct computation

\begin{equation}\label{eqn:moreKet:100}
\begin{aligned}
A B
&= a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & -i \\
0 & i & 0
\end{bmatrix} \\
&=
a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & i \\
0 & -i & 0
\end{bmatrix},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:moreKet:120}
\begin{aligned}
B A
&= a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & -i \\
0 & i & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{bmatrix} \\
&=
a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & i \\
0 & -i & 0
\end{bmatrix}.
\end{aligned}
\end{equation}
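With \( a = b = 1 \) (an arbitrary choice) the commutation and the simultaneous eigenkets can both be confirmed numerically:

```python
import numpy as np

# Confirm [A, B] = 0 and that the |b_i> kets are simultaneous
# eigenvectors of A and B, using a = b = 1 (an arbitrary choice).
A = np.diag([1.0, -1.0, -1.0]).astype(complex)
B = np.array([[1, 0, 0],
              [0, 0, -1j],
              [0, 1j, 0]])

assert np.allclose(A @ B, B @ A)             # the operators commute

b1 = np.array([1, 0, 0], dtype=complex)
b2 = np.array([0, 1, 1j]) / np.sqrt(2)
b3 = np.array([0, 1, -1j]) / np.sqrt(2)

# (ket, A eigenvalue, B eigenvalue) for each simultaneous eigenket
for ket, aval, bval in ((b1, 1, 1), (b2, -1, 1), (b3, -1, -1)):
    assert np.allclose(A @ ket, aval * ket)
    assert np.allclose(B @ ket, bval * ket)
```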

Question: Unitary transformation. ([1] pr. 1.26)

Construct the transformation matrix that maps between the \( S_z \) diagonal basis, to the \( S_x \) diagonal basis.

Answer

Based on the definition

\begin{equation}\label{eqn:moreKet:640}
U \ket{a^{(r)}} = \ket{b^{(r)}},
\end{equation}

the matrix elements can be computed

\begin{equation}\label{eqn:moreKet:660}
\bra{a^{(s)}} U \ket{a^{(r)}} = \braket{a^{(s)}}{b^{(r)}},
\end{equation}

that is

\begin{equation}\label{eqn:moreKet:680}
\begin{aligned}
U
&=
\begin{bmatrix}
\bra{a^{(1)}} U \ket{a^{(1)}} & \bra{a^{(1)}} U \ket{a^{(2)}} \\
\bra{a^{(2)}} U \ket{a^{(1)}} & \bra{a^{(2)}} U \ket{a^{(2)}}
\end{bmatrix} \\
&=
\begin{bmatrix}
\braket{a^{(1)}}{b^{(1)}} & \braket{a^{(1)}}{b^{(2)}} \\
\braket{a^{(2)}}{b^{(1)}} & \braket{a^{(2)}}{b^{(2)}}
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ 1
\end{bmatrix} &
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ -1
\end{bmatrix} \\
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
1 \\ 1
\end{bmatrix} &
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
1 \\ -1
\end{bmatrix} \\
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{aligned}
\end{equation}

As a similarity transformation, we have

\begin{equation}\label{eqn:moreKet:700}
\begin{aligned}
\bra{b^{(r)}} S_z \ket{b^{(s)}}
&=
\braket{b^{(r)}}{a^{(t)}}\bra{a^{(t)}} S_z \ket{a^{(u)}}\braket{a^{(u)}}{b^{(s)}} \\
&=
\bra{a^{(r)}} U^\dagger \ket{a^{(t)}}\bra{a^{(t)}} S_z \ket{a^{(u)}}\bra{a^{(u)}} U \ket{a^{(s)}},
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:moreKet:720}
S_z’ = U^\dagger S_z U.
\end{equation}

Let’s check that the computed similarity transformation does its job.
\begin{equation}\label{eqn:moreKet:740}
\begin{aligned}
\sigma_z’
&= U^\dagger \sigma_z U \\
&= \inv{2}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\inv{2}
\begin{bmatrix}
1 & -1 \\
1 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\inv{2}
\begin{bmatrix}
0 & 2 \\
2 & 0
\end{bmatrix} \\
&= \sigma_x.
\end{aligned}
\end{equation}

The transformation matrix can also be computed more directly

\begin{equation}\label{eqn:moreKet:760}
\begin{aligned}
U
&= U \ket{a^{(r)}} \bra{a^{(r)}} \\
&= \ket{b^{(r)}}\bra{a^{(r)}} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}
\begin{bmatrix}
1 & 0
\end{bmatrix}
+
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
-1
\end{bmatrix}
\begin{bmatrix}
0 & 1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 0 \\
1 & 0
\end{bmatrix}
+
\inv{\sqrt{2}}
\begin{bmatrix}
0 & 1 \\
0 & -1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{aligned}
\end{equation}
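As a final numerical sanity check, this \( U \) is unitary, maps the \( S_z \) eigenkets to the \( S_x \) eigenkets, and takes \( \sigma_z \) to \( \sigma_x \) under the similarity transformation:

```python
import numpy as np

# Check the computed transformation matrix U = (1/sqrt(2)) [[1,1],[1,-1]]:
# unitarity, U|z;+/-> = |x;+/->, and U^dag sigma_z U = sigma_x.
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
sigma_z = np.diag([1.0, -1.0])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

assert np.allclose(U.T @ U, np.eye(2))             # unitary (U is real)
assert np.allclose(U.T @ sigma_z @ U, sigma_x)     # similarity transform
assert np.allclose(U @ np.array([1.0, 0.0]),
                   np.array([1.0, 1.0]) / np.sqrt(2))   # |z;+> -> |x;+>
assert np.allclose(U @ np.array([0.0, 1.0]),
                   np.array([1.0, -1.0]) / np.sqrt(2))  # |z;-> -> |x;->
```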

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.