
Translation operator problems

August 7, 2015 phy1520


Question: One dimensional translation operator. ([1] pr. 1.28)

(a)

Evaluate the classical Poisson bracket

\begin{equation}\label{eqn:translation:420}
\antisymmetric{x}{F(p)}_{\textrm{classical}}
\end{equation}

(b)

Evaluate the commutator

\begin{equation}\label{eqn:translation:440}
\antisymmetric{x}{e^{i p a/\Hbar}}
\end{equation}

(c)

Using the result in \ref{problem:translation:28:b}, prove that
\begin{equation}\label{eqn:translation:460}
e^{i p a/\Hbar} \ket{x'},
\end{equation}

is an eigenstate of the coordinate operator \( x \).

Answer

(a)

\begin{equation}\label{eqn:translation:480}
\begin{aligned}
\antisymmetric{x}{F(p)}_{\textrm{classical}}
&=
\PD{x}{x} \PD{p}{F(p)} - \PD{p}{x} \PD{x}{F(p)} \\
&=
\PD{p}{F(p)}.
\end{aligned}
\end{equation}

(b)

Having worked backwards through these problems, the answer for this one dimensional problem can be obtained from \ref{eqn:translation:140} by setting \( \Bl = -a \Be_1 \), and is

\begin{equation}\label{eqn:translation:500}
\antisymmetric{x}{e^{i p a/\Hbar}} = -a e^{i p a/\Hbar}.
\end{equation}
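
As a quick cross-check of the sign, here is a small sympy sketch (my addition, not part of the original problem set) that uses the position representation, where \( p = -i \Hbar d/dx \), so that \( e^{i p a/\Hbar} \) acts as the translation \( \psi(x) \rightarrow \psi(x+a) \).

```python
import sympy as sp

x, a = sp.symbols('x a', real=True)
psi = sp.Function('psi')

# In the position representation p = -i hbar d/dx, so exp(i p a/hbar) = exp(a d/dx)
# translates the wavefunction: psi(x) -> psi(x + a).
shifted = psi(x + a)                              # exp(i p a/hbar) psi
commutator = x * shifted - (x + a) * shifted      # [x, exp(i p a/hbar)] psi
print(sp.simplify(commutator + a * shifted))      # 0, so the commutator is -a exp(i p a/hbar)
```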

(c)

\begin{equation}\label{eqn:translation:520}
\begin{aligned}
x e^{i p a/\Hbar} \ket{x'}
&=
\lr{
\antisymmetric{x}{e^{i p a/\Hbar}}
+
e^{i p a/\Hbar} x
}
\ket{x'} \\
&=
\lr{ -a e^{i p a/\Hbar} + e^{i p a/\Hbar} x' } \ket{x'} \\
&= \lr{ x' - a } e^{i p a/\Hbar} \ket{x'}.
\end{aligned}
\end{equation}

This demonstrates that \( e^{i p a/\Hbar} \ket{x'} \) is an eigenstate of \( x \) with eigenvalue \( x' - a \).

Question: Polynomial commutators. ([1] pr. 1.29)

(a)

For power series \( F, G \), verify

\begin{equation}\label{eqn:translation:180}
\antisymmetric{x_k}{G(\Bp)} = i \Hbar \PD{p_k}{G}, \qquad
\antisymmetric{p_k}{F(\Bx)} = -i \Hbar \PD{x_k}{F}.
\end{equation}

(b)

Evaluate \( \antisymmetric{x^2}{p^2} \), and compare to the classical Poisson bracket \( \antisymmetric{x^2}{p^2}_{\textrm{classical}} \).

Answer

(a)

Let

\begin{equation}\label{eqn:translation:200}
\begin{aligned}
G(\Bp) &= \sum_{k l m} a_{k l m} p_1^k p_2^l p_3^m \\
F(\Bx) &= \sum_{k l m} b_{k l m} x_1^k x_2^l x_3^m.
\end{aligned}
\end{equation}

It is simpler to work with a specific \( x_k \), say \( x_k = y \); the validity of the general result will still be clear from this special case. Expanding the commutator gives

\begin{equation}\label{eqn:translation:220}
\begin{aligned}
\antisymmetric{y}{G(\Bp)}
&=
\sum_{k l m} a_{k l m} \antisymmetric{y}{p_1^k p_2^l p_3^m } \\
&=
\sum_{k l m} a_{k l m} \lr{
y p_1^k p_2^l p_3^m - p_1^k p_2^l p_3^m y
} \\
&=
\sum_{k l m} a_{k l m} \lr{
p_1^k y p_2^l p_3^m - p_1^k p_2^l y p_3^m
} \\
&=
\sum_{k l m} a_{k l m}
p_1^k
\antisymmetric{y}{p_2^l}
p_3^m.
\end{aligned}
\end{equation}

From \ref{eqn:translation:100}, we have \( \antisymmetric{y}{p_2^l} = l i \Hbar p_2^{l-1} \), so

\begin{equation}\label{eqn:translation:240}
\begin{aligned}
\antisymmetric{y}{G(\Bp)}
&=
\sum_{k l m} a_{k l m}
p_1^k
\lr{ l i \Hbar p_2^{l-1} }
p_3^m \\
&=
i \Hbar \PD{p_y}{G(\Bp)}.
\end{aligned}
\end{equation}

It is straightforward to show that
\( \antisymmetric{p}{x^l} = -l i \Hbar x^{l-1} \), allowing for a similar computation of the momentum commutator

\begin{equation}\label{eqn:translation:260}
\begin{aligned}
\antisymmetric{p_y}{F(\Bx)}
&=
\sum_{k l m} b_{k l m} \antisymmetric{p_y}{x_1^k x_2^l x_3^m } \\
&=
\sum_{k l m} b_{k l m} \lr{
p_y x_1^k x_2^l x_3^m - x_1^k x_2^l x_3^m p_y
} \\
&=
\sum_{k l m} b_{k l m} \lr{
x_1^k p_y x_2^l x_3^m - x_1^k x_2^l p_y x_3^m
} \\
&=
\sum_{k l m} b_{k l m}
x_1^k
\antisymmetric{p_y}{x_2^l}
x_3^m \\
&=
\sum_{k l m} b_{k l m}
x_1^k
\lr{ -l i \Hbar x_2^{l-1}}
x_3^m \\
&=
-i \Hbar \PD{y}{F(\Bx)}.
\end{aligned}
\end{equation}
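
A sympy spot check of the second of these relations in the position representation (a sketch of my own, using an arbitrary monomial in place of \( F(\Bx) \)):

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
psi = sp.Function('psi')(x, y, z)

def p_y(f):
    """Momentum component p_y = -i hbar d/dy in the position representation."""
    return -sp.I * hbar * sp.diff(f, y)

F = x**2 * y**3 * z                               # arbitrary monomial standing in for F(x)
lhs = p_y(F * psi) - F * p_y(psi)                 # [p_y, F] acting on psi
rhs = -sp.I * hbar * sp.diff(F, y) * psi          # -i hbar (dF/dy) psi
print(sp.simplify(lhs - rhs))                     # 0
```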

(b)

It isn’t clear to me how the results above can be used directly to compute \( \antisymmetric{x^2}{p^2} \). However, when the first argument of such a commutator is a monomial, it can be expanded in terms of an \( x \) commutator

\begin{equation}\label{eqn:translation:280}
\begin{aligned}
\antisymmetric{x^2}{G(\Bp)}
&= x^2 G - G x^2 \\
&= x \lr{ x G } - G x^2 \\
&= x \lr{ \antisymmetric{ x }{ G } + G x } - G x^2 \\
&= x \antisymmetric{ x }{ G } + \lr{ x G } x - G x^2 \\
&= x \antisymmetric{ x }{ G } + \lr{ \antisymmetric{ x }{ G } + G x } x - G x^2 \\
&= x \antisymmetric{ x }{ G } + \antisymmetric{ x }{ G } x.
\end{aligned}
\end{equation}

Similarly,

\begin{equation}\label{eqn:translation:300}
\antisymmetric{x^3}{G(\Bp)} = x^2 \antisymmetric{ x }{ G } + x \antisymmetric{ x }{ G } x + \antisymmetric{ x }{ G } x^2.
\end{equation}

An induction hypothesis can be formed

\begin{equation}\label{eqn:translation:320}
\antisymmetric{x^k}{G(\Bp)} = \sum_{j = 0}^{k-1} x^{k-1-j} \antisymmetric{ x }{ G } x^j,
\end{equation}

and demonstrated

\begin{equation}\label{eqn:translation:340}
\begin{aligned}
\antisymmetric{x^{k+1}}{G(\Bp)}
&=
x^{k+1} G - G x^{k+1} \\
&=
x \lr{ x^{k} G } - G x^{k+1} \\
&=
x \lr{ \antisymmetric{x^{k}}{G} + G x^k } - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \lr{ x G } x^k - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \lr{ \antisymmetric{x}{G} + G x } x^k - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \antisymmetric{x}{G} x^k \\
&=
x \sum_{j = 0}^{k-1} x^{k-1-j} \antisymmetric{ x }{ G } x^j + \antisymmetric{x}{G} x^k \\
&=
\sum_{j = 0}^{k-1} x^{(k+1)-1-j} \antisymmetric{ x }{ G } x^j + \antisymmetric{x}{G} x^k \\
&=
\sum_{j = 0}^{k} x^{(k+1)-1-j} \antisymmetric{ x }{ G } x^j.
\end{aligned}
\end{equation}
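
The induction result is also easy to spot check with noncommutative symbols; the following sympy sketch (my addition) expands both sides for the first few powers:

```python
import sympy as sp

x, G = sp.symbols('x G', commutative=False)

def comm(A, B):
    return A * B - B * A

# Verify [x^k, G] = sum_{j=0}^{k-1} x^{k-1-j} [x, G] x^j for small k.
for k in range(1, 6):
    lhs = comm(x**k, G)
    rhs = sum(x**(k - 1 - j) * comm(x, G) * x**j for j in range(k))
    assert sp.expand(lhs - rhs) == 0
print("identity verified for k = 1..5")
```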

That was a bit overkill for this problem, but may be useful later. Application of this to the problem gives

\begin{equation}\label{eqn:translation:360}
\begin{aligned}
\antisymmetric{x^2}{p^2}
&=
x \antisymmetric{x}{p^2}
+ \antisymmetric{x}{p^2} x \\
&=
x i \Hbar \PD{p}{p^2}
+ i \Hbar \PD{p}{p^2} x \\
&=
x 2 i \Hbar p
+ 2 i \Hbar p x \\
&= i \Hbar \lr{ 2 x p + 2 p x }.
\end{aligned}
\end{equation}

The classical Poisson bracket is
\begin{equation}\label{eqn:translation:380}
\begin{aligned}
\antisymmetric{x^2}{p^2}_{\textrm{classical}}
&=
\PD{x}{x^2} \PD{p}{p^2} - \PD{p}{x^2} \PD{x}{p^2} \\
&=
2 x 2 p \\
&= 2 x p + 2 p x.
\end{aligned}
\end{equation}

This demonstrates the expected relation between the classical and quantum commutators

\begin{equation}\label{eqn:translation:400}
\antisymmetric{x^2}{p^2} = i \Hbar \antisymmetric{x^2}{p^2}_{\textrm{classical}}.
\end{equation}
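
This operator result can be checked in the position representation as well; a sympy sketch (my addition), with \( p = -i \Hbar d/dx \):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
psi = sp.Function('psi')(x)

def p(f):
    """Momentum operator p = -i hbar d/dx in the position representation."""
    return -sp.I * hbar * sp.diff(f, x)

lhs = x**2 * p(p(psi)) - p(p(x**2 * psi))              # [x^2, p^2] psi
rhs = sp.I * hbar * (2 * x * p(psi) + 2 * p(x * psi))  # i hbar (2 x p + 2 p x) psi
print(sp.simplify(lhs - rhs))                          # 0
```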

Question: Translation operator and position expectation. ([1] pr. 1.30)

The translation operator for a finite spatial displacement is given by

\begin{equation}\label{eqn:translation:20}
J(\Bl) = \exp\lr{ -i \Bp \cdot \Bl/\Hbar },
\end{equation}

where \( \Bp \) is the momentum operator.

(a)

Evaluate

\begin{equation}\label{eqn:translation:40}
\antisymmetric{x_i}{J(\Bl)}.
\end{equation}

(b)

Demonstrate how the expectation value \( \expectation{\Bx} \) changes under translation.

Answer

(a)

For clarity, let’s set \( x_i = y \); the general result will be clear from this special case.

\begin{equation}\label{eqn:translation:60}
\antisymmetric{y}{J(\Bl)}
=
\sum_{k = 0}^\infty \inv{k!} \lr{\frac{-i}{\Hbar}}^k
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}.
\end{equation}

The commutator expands as

\begin{equation}\label{eqn:translation:80}
\begin{aligned}
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}
+ \lr{ \Bp \cdot \Bl }^k y
&=
y \lr{ \Bp \cdot \Bl }^k \\
&=
y \lr{ p_x l_x + p_y l_y + p_z l_z } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ p_x l_x y + y p_y l_y + p_z l_z y } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ p_x l_x y + l_y \lr{ p_y y + i \Hbar } + p_z l_z y } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ \Bp \cdot \Bl } y \lr{ \Bp \cdot \Bl }^{k-1}
+ i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&= \cdots \\
&=
\lr{ \Bp \cdot \Bl }^{k-1} y \lr{ \Bp \cdot \Bl }^{k-(k-1)}
+ (k-1) i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ \Bp \cdot \Bl }^{k} y
+ k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1}.
\end{aligned}
\end{equation}

In the above expansion, the commutation of \( y \) with \( p_x, p_z \) has been used. This gives, for \( k \ne 0 \),

\begin{equation}\label{eqn:translation:100}
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}
=
k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1}.
\end{equation}

Note that this also holds for the \( k = 0 \) case, since \( y \) commutes with the identity operator. Plugging back into the \( J \) commutator, we have

\begin{equation}\label{eqn:translation:120}
\begin{aligned}
\antisymmetric{y}{J(\Bl)}
&=
\sum_{k = 1}^\infty \inv{k!} \lr{\frac{-i}{\Hbar}}^k
k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
l_y \sum_{k = 1}^\infty \inv{(k-1)!} \lr{\frac{-i}{\Hbar}}^{k-1}
\lr{ \Bp \cdot \Bl }^{k-1} \\
&=
l_y J(\Bl).
\end{aligned}
\end{equation}

The same pattern clearly applies with the other \( x_i \) values, providing the desired relation.

\begin{equation}\label{eqn:translation:140}
\antisymmetric{\Bx}{J(\Bl)} = \sum_{m = 1}^3 \Be_m l_m J(\Bl) = \Bl J(\Bl).
\end{equation}

(b)

Suppose that the translated state is defined as \( \ket{\alpha_{\Bl}} = J(\Bl) \ket{\alpha} \). The expectation value with respect to this state is

\begin{equation}\label{eqn:translation:160}
\begin{aligned}
\expectation{\Bx'}
&=
\bra{\alpha_{\Bl}} \Bx \ket{\alpha_{\Bl}} \\
&=
\bra{\alpha} J^\dagger(\Bl) \Bx J(\Bl) \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger(\Bl) \lr{ \Bx J(\Bl) } \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger(\Bl) \lr{ J(\Bl) \Bx + \Bl J(\Bl) } \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger J \Bx + \Bl J^\dagger J \ket{\alpha} \\
&=
\bra{\alpha} \Bx \ket{\alpha} + \Bl \braket{\alpha}{\alpha} \\
&=
\expectation{\Bx} + \Bl.
\end{aligned}
\end{equation}
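
A numerical illustration of this shift, applying \( J(\Bl) \) to a discretized Gaussian wavepacket in momentum (Fourier) space. This is a sketch of my own with \( \Hbar = 1 \) and arbitrary grid parameters, not part of the original solution:

```python
import numpy as np

N, L = 2048, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)            # momentum grid (hbar = 1)

psi = np.exp(-(x + 5.0)**2)                        # Gaussian centered at x = -5
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

l = 3.0
psi_l = np.fft.ifft(np.exp(-1j * p * l) * np.fft.fft(psi))   # J(l) psi = psi(x - l)

mean_x = lambda f: np.real(np.sum(np.conj(f) * x * f) * dx)
print(mean_x(psi), mean_x(psi_l))                  # approximately -5.0 and -2.0
```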

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Bra-ket and spin one-half problems

July 27, 2015 phy1520


Question: Operator matrix representation ([1] pr. 1.5)

(a)

Determine the matrix representation of \( \ket{\alpha}\bra{\beta} \) given a complete set of eigenvectors \( \ket{a^r} \).

(b)

Verify with \( \ket{\alpha} = \ket{s_z = \Hbar/2}, \ket{\beta} = \ket{s_x = \Hbar/2} \).

Answer

(a)

Forming the matrix element

\begin{equation}\label{eqn:moreBraKetProblems:20}
\begin{aligned}
\bra{a^r} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^s}
&=
\braket{a^r}{\alpha}\braket{\beta}{a^s} \\
&=
\braket{a^r}{\alpha}
\braket{a^s}{\beta}^\conj,
\end{aligned}
\end{equation}

the matrix representation is seen to be

\begin{equation}\label{eqn:moreBraKetProblems:40}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
\bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}
=
\begin{bmatrix}
\braket{a^1}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^1}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\braket{a^2}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^2}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}.
\end{equation}

(b)

First compute the spin-z representation of \( \ket{s_x = \Hbar/2 } \).

\begin{equation}\label{eqn:moreBraKetProblems:60}
\begin{aligned}
0
&=
\lr{ S_x - \frac{\Hbar}{2} I }
\begin{bmatrix}
a \\
b
\end{bmatrix} \\
&=
\lr{
\begin{bmatrix}
0 & \Hbar/2 \\
\Hbar/2 & 0 \\
\end{bmatrix}
-
\begin{bmatrix}
\Hbar/2 & 0 \\
0 & \Hbar/2 \\
\end{bmatrix}
}
\begin{bmatrix}
a \\
b
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
-1 & 1 \\
1 & -1 \\
\end{bmatrix}
\begin{bmatrix}
a \\
b
\end{bmatrix},
\end{aligned}
\end{equation}

so \( \ket{s_x = \Hbar/2 } \propto (1,1) \).

Normalized we have

\begin{equation}\label{eqn:moreBraKetProblems:80}
\begin{aligned}
\ket{\alpha} &= \ket{s_z = \Hbar/2 } =
\begin{bmatrix}
1 \\
0
\end{bmatrix} \\
\ket{\beta} &= \ket{s_x = \Hbar/2 } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}.
\end{aligned}
\end{equation}

Using \ref{eqn:moreBraKetProblems:40} the matrix representation is

\begin{equation}\label{eqn:moreBraKetProblems:100}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
(1) (1/\sqrt{2})^\conj & (1) (1/\sqrt{2})^\conj \\
(0) (1/\sqrt{2})^\conj & (0) (1/\sqrt{2})^\conj \\
\end{bmatrix}
=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{equation}

This can be confirmed with direct computation
\begin{equation}\label{eqn:moreBraKetProblems:120}
\begin{aligned}
\ket{\alpha}\bra{\beta}
&=
\begin{bmatrix}
1 \\
0
\end{bmatrix}
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}
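
The same matrix falls out of a one-line numpy outer product (my addition, with the kets expressed in the \( s_z \) basis):

```python
import numpy as np

alpha = np.array([1.0, 0.0])                  # |s_z = +hbar/2>
beta = np.array([1.0, 1.0]) / np.sqrt(2)      # |s_x = +hbar/2>
print(np.outer(alpha, beta.conj()))           # [[0.707, 0.707], [0, 0]]
```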

Question: Eigenvalue of a sum of kets ([1] pr. 1.6)

Given eigenkets \( \ket{i}, \ket{j} \) of an operator \( A \), under what conditions is \( \ket{i} + \ket{j} \) also an eigenket?

Answer

Let \( A \ket{i} = i \ket{i}, A \ket{j} = j \ket{j} \), and suppose that the sum is an eigenket. Then there must be a value \( a \) such that

\begin{equation}\label{eqn:moreBraKetProblems:140}
A \lr{ \ket{i} + \ket{j} } = a \lr{ \ket{i} + \ket{j} },
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:160}
i \ket{i} + j \ket{j} = a \lr{ \ket{i} + \ket{j} }.
\end{equation}

Operating with \( \bra{i} \) and \( \bra{j} \) respectively, and using orthonormality of the eigenkets, gives

\begin{equation}\label{eqn:moreBraKetProblems:180}
\begin{aligned}
i &= a \\
j &= a,
\end{aligned}
\end{equation}

so for the sum to be an eigenket, both of the corresponding eigenvalues must be identical (i.e. linear combinations of degenerate eigenkets are also eigenkets).

Question: Null operator ([1] pr. 1.7)

Given eigenkets \( \ket{a'} \) of an operator \( A \)

(a)

show that

\begin{equation}\label{eqn:moreBraKetProblems:200}
\prod_{a'} \lr{ A - a' }
\end{equation}

is the null operator.

(b)

What is the significance of the following operator?

\begin{equation}\label{eqn:moreBraKetProblems:220}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''}
\end{equation}

(c)

Illustrate using \( S_z \) for a spin 1/2 system.

Answer

(a)

Application of any factor \( A - a' \) to an eigenket \( \ket{a} \) of \( A \) with eigenvalue \( a \) scales \( \ket{a} \) by \( a - a' \), so the product operating on \( \ket{a} \) is

\begin{equation}\label{eqn:moreBraKetProblems:240}
\prod_{a'} \lr{ A - a' } \ket{a} = \prod_{a'} \lr{ a - a' } \ket{a}.
\end{equation}

Since \( \ket{a} \) is one of the \( \setlr{\ket{a'}} \) eigenkets of \( A \), one of these factors is zero, so the product annihilates every eigenket. Because the eigenkets span the space, the product must be the null operator.

(b)

Again, consider the action of the operator on \( \ket{a} \),

\begin{equation}\label{eqn:moreBraKetProblems:260}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a}
=
\prod_{a'' \ne a'} \frac{\lr{ a - a'' }}{a' - a''} \ket{a}.
\end{equation}

If \( \ket{a} = \ket{a'} \), every factor is \( (a' - a'')/(a' - a'') = 1 \), so the operator leaves \( \ket{a'} \) unchanged. If \( \ket{a} \ne \ket{a'} \), then \( a \) equals one of the \( a'' \) values in the product, and the corresponding factor annihilates the ket. This is a representation of the Kronecker delta

\begin{equation}\label{eqn:moreBraKetProblems:300}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a} \equiv \delta_{a', a} \ket{a}.
\end{equation}

(c)

For operator \( S_z \) the eigenvalues are \( \setlr{ \Hbar/2, -\Hbar/2 } \), so the null operator must be

\begin{equation}\label{eqn:moreBraKetProblems:280}
\begin{aligned}
\prod_{a'} \lr{ A - a' }
&=
\lr{ \frac{\Hbar}{2} }^2 \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\lr{ \frac{\Hbar}{2} }^2
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix}
\begin{bmatrix}
2 & 0 \\
0 & 0 \\
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 0 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

For the delta representation, consider the \( \ket{\pm} \) states and their eigenvalues. The delta operators are

\begin{equation}\label{eqn:moreBraKetProblems:320}
\begin{aligned}
\prod_{a'' \ne \Hbar/2} \frac{\lr{ A - a'' }}{\Hbar/2 - a''}
&=
\frac{S_z - (-\Hbar/2) I}{\Hbar/2 - (-\Hbar/2)} \\
&=
\inv{2} \lr{ \sigma_z + I } \\
&=
\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\inv{2}
\begin{bmatrix}
2 & 0 \\
0 & 0
\end{bmatrix}
\\
&=
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreBraKetProblems:340}
\begin{aligned}
\prod_{a'' \ne -\Hbar/2} \frac{\lr{ A - a'' }}{-\Hbar/2 - a''}
&=
\frac{S_z - (\Hbar/2) I}{-\Hbar/2 - \Hbar/2} \\
&=
\inv{2} \lr{ \sigma_z – I } \\
&=
\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} – \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\inv{2}
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 1
\end{bmatrix}.
\end{aligned}
\end{equation}

These clearly have the expected delta function property acting on kets \( \ket{+} = (1,0), \ket{-} = (0, 1) \).
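
A small numpy check of both the null operator and the two projection (delta) operators for \( S_z \) (a sketch of my own, with \( \Hbar = 1 \)):

```python
import numpy as np

hbar = 1.0
Sz = (hbar / 2) * np.diag([1.0, -1.0])
I2 = np.eye(2)

print((Sz - (hbar/2) * I2) @ (Sz + (hbar/2) * I2))   # zero matrix: the null operator
print((Sz + (hbar/2) * I2) / hbar)                   # [[1,0],[0,0]]: picks out |+>
print((Sz - (hbar/2) * I2) / (-hbar))                # [[0,0],[0,1]]: picks out |->
```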

Question: Spin half general normal ([1] pr. 1.9)

Construct \( \ket{\BS \cdot \ncap ; + } \), where \( \ncap = ( \cos\alpha \sin\beta, \sin\alpha \sin\beta, \cos\beta ) \) such that

\begin{equation}\label{eqn:moreBraKetProblems:360}
\BS \cdot \ncap \ket{\BS \cdot \ncap ; + } =
\frac{\Hbar}{2} \ket{\BS \cdot \ncap ; + },
\end{equation}

Solve this as an eigenvalue problem.

Answer

The spin operator for this direction is

\begin{equation}\label{eqn:moreBraKetProblems:380}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \Bsigma \cdot \ncap \\
&= \frac{\Hbar}{2}
\lr{
\cos\alpha \sin\beta \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + \sin\alpha \sin\beta \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + \cos\beta \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}
} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta
\end{bmatrix}.
\end{aligned}
\end{equation}

Observe that this is traceless and has a \( -(\Hbar/2)^2 \) determinant, like any of the \( x,y,z \) spin operators.

Assuming that this has an \( \Hbar/2 \) eigenvalue (to be verified later), the eigenvalue problem is

\begin{equation}\label{eqn:moreBraKetProblems:400}
\begin{aligned}
\BS \cdot \ncap - \frac{\Hbar}{2} I
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta -1 &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta -1
\end{bmatrix} \\
&=
\Hbar
\begin{bmatrix}
– \sin^2 \frac{\beta}{2} &
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
\\
e^{i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
& -\cos^2 \frac{\beta}{2}
\end{bmatrix}
\end{aligned}
\end{equation}

This has a zero determinant as expected, and the eigenvector \( (a,b) \) will satisfy

\begin{equation}\label{eqn:moreBraKetProblems:420}
\begin{aligned}
0
&= – \sin^2 \frac{\beta}{2} a +
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
b \\
&= \sin\frac{\beta}{2} \lr{ – \sin \frac{\beta}{2} a +
e^{-i\alpha} b
\cos\frac{\beta}{2}
}
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreBraKetProblems:440}
\begin{bmatrix}
a \\
b
\end{bmatrix}
\propto
\begin{bmatrix}
\cos\frac{\beta}{2} \\
e^{i\alpha}
\sin\frac{\beta}{2}
\end{bmatrix}.
\end{equation}

This is appropriately normalized, so the ket for \( \BS \cdot \ncap \) is

\begin{equation}\label{eqn:moreBraKetProblems:460}
\ket{ \BS \cdot \ncap ; + } =
\cos\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\sin\frac{\beta}{2}
\ket{-}.
\end{equation}

Note that the ket for the other (\( -\Hbar/2 \)) eigenvalue is

\begin{equation}\label{eqn:moreBraKetProblems:480}
\ket{ \BS \cdot \ncap ; – } =
-\sin\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\cos\frac{\beta}{2}
\ket{-}.
\end{equation}

It is straightforward to show that these are orthogonal and that this has the \( -\Hbar/2 \) eigenvalue.
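
Those claims are easy to confirm numerically; here is a numpy sketch of my own using arbitrary test angles:

```python
import numpy as np

hbar, alpha, beta = 1.0, 0.7, 1.2                    # arbitrary test values
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
n = np.array([np.cos(alpha) * np.sin(beta), np.sin(alpha) * np.sin(beta), np.cos(beta)])
Sn = (hbar / 2) * (n[0] * sx + n[1] * sy + n[2] * sz)

plus = np.array([np.cos(beta/2), np.exp(1j*alpha) * np.sin(beta/2)])
minus = np.array([-np.sin(beta/2), np.exp(1j*alpha) * np.cos(beta/2)])
print(np.allclose(Sn @ plus, (hbar/2) * plus))       # True
print(np.allclose(Sn @ minus, -(hbar/2) * minus))    # True
print(np.isclose(np.vdot(plus, minus), 0))           # True: orthogonal
```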

Question: Two state Hamiltonian ([1] pr. 1.10)

Solve the eigenproblem for

\begin{equation}\label{eqn:moreBraKetProblems:500}
H = a \biglr{
\ket{1}\bra{1}
-\ket{2}\bra{2}
+\ket{1}\bra{2}
+\ket{2}\bra{1}
}
\end{equation}

Answer

In matrix form the Hamiltonian is

\begin{equation}\label{eqn:moreBraKetProblems:520}
H = a
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{equation}

The eigenvalue problem is

\begin{equation}\label{eqn:moreBraKetProblems:540}
\begin{aligned}
0
&= \Abs{ H – \lambda I } \\
&= (a - \lambda)(-a - \lambda) - a^2 \\
&= (-a + \lambda)(a + \lambda) - a^2 \\
&= \lambda^2 - a^2 - a^2,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:560}
\lambda = \pm \sqrt{2} a.
\end{equation}

An eigenket proportional to \( (\alpha,\beta) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:580}
0
= ( 1 \mp \sqrt{2} ) \alpha + \beta,
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:600}
\ket{\pm} \propto
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix},
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:620}
\begin{aligned}
\ket{\pm}
&=
\inv{\sqrt{2 (2 \mp \sqrt{2})}}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix} \\
&=
\frac{\sqrt{2 \pm \sqrt{2}}}{2}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix}.
\end{aligned}
\end{equation}

That is
\begin{equation}\label{eqn:moreBraKetProblems:640}
\ket{\pm} =
\frac{\sqrt{2 \pm \sqrt{2}}}{2} \lr{
-\ket{1} + (1 \mp \sqrt{2}) \ket{2}
}.
\end{equation}
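
A numpy sanity check of the eigenvalues, the normalization, and the eigenvector property (my addition, with an arbitrary value for \( a \)):

```python
import numpy as np

a = 1.3                                        # arbitrary nonzero constant
H = a * np.array([[1.0, 1.0], [1.0, -1.0]])
for sign in (+1, -1):
    lam = sign * np.sqrt(2) * a
    ket = np.array([-1.0, 1.0 - sign * np.sqrt(2)]) * np.sqrt(2 + sign * np.sqrt(2)) / 2
    print(np.isclose(np.linalg.norm(ket), 1.0),       # normalized
          np.allclose(H @ ket, lam * ket))            # eigenvector with lambda = +/- sqrt(2) a
```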

Question: Spin half probability and dispersion ([1] pr. 1.12)

A spin \( 1/2 \) system \( \BS \cdot \ncap \), with \( \ncap = \sin \gamma \xcap + \cos\gamma \zcap \), is in the eigenstate with eigenvalue \( \Hbar/2 \).

(a)

If \( S_x \) is measured, what is the probability of getting \( + \Hbar/2 \)?

(b)

Evaluate the dispersion in \( S_x \), that is,

\begin{equation}\label{eqn:moreBraKetProblems:660}
\expectation{\lr{ S_x – \expectation{S_x}}^2}.
\end{equation}

Answer

(a)

In matrix form the spin operator for the system is

\begin{equation}\label{eqn:moreBraKetProblems:680}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \lr{ \cos\gamma \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \sin\gamma \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}} \\
&= \frac{\Hbar}{2}
\begin{bmatrix}
\cos\gamma & \sin\gamma \\
\sin\gamma & -\cos\gamma \\
\end{bmatrix}
\end{aligned}
\end{equation}

An eigenket \( \ket{\BS \cdot \ncap ; + } = (a,b) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:700}
\begin{aligned}
0
&= \lr{ \cos \gamma - 1 } a + \sin\gamma b \\
&= \lr{ -2 \sin^2 \frac{\gamma}{2} } a + 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} b \\
&= -\sin \frac{\gamma}{2} a + \cos\frac{\gamma}{2} b,
\end{aligned}
\end{equation}

so the eigenstate is
\begin{equation}\label{eqn:moreBraKetProblems:720}
\ket{\BS \cdot \ncap ; + }
=
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}.
\end{equation}

Pick \( \ket{S_x ; \pm } = \inv{\sqrt{2}}
\begin{bmatrix}
1 \\ \pm 1
\end{bmatrix} \) as the basis for the \( S_x \) operator. Then, for the probability that the system will end up in the \( + \Hbar/2 \) state of \( S_x \), we have

\begin{equation}\label{eqn:moreBraKetProblems:740}
\begin{aligned}
P
&= \Abs{\braket{ S_x ; + }{ \BS \cdot \ncap ; + } }^2 \\
&= \Abs{ \inv{\sqrt{2} }
{
\begin{bmatrix}
1 \\
1
\end{bmatrix}}^\dagger
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=\inv{2}
\Abs{
\begin{bmatrix}
1 & 1
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=
\inv{2}
\lr{
\cos\frac{\gamma}{2} +
\sin\frac{\gamma}{2}
}^2 \\
&=
\inv{2}
\lr{ 1 + 2 \cos\frac{\gamma}{2} \sin\frac{\gamma}{2} } \\
&=
\inv{2}
\lr{ 1 + \sin\gamma }.
\end{aligned}
\end{equation}
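
A quick numerical check of this probability for an arbitrary angle (a sketch of my own, in the \( s_z \) basis):

```python
import numpy as np

gamma = 0.9                                          # arbitrary test angle
ket_n = np.array([np.cos(gamma/2), np.sin(gamma/2)]) # |S.n ; +>
ket_sx = np.array([1.0, 1.0]) / np.sqrt(2)           # |S_x ; +>
print(abs(ket_sx @ ket_n)**2, (1 + np.sin(gamma)) / 2)   # both ~0.89
```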

This is a reasonable looking result, with \( P \in [0, 1] \). Some special values further validate it

\begin{equation}\label{eqn:moreBraKetProblems:760}
\begin{aligned}
\gamma &= 0, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\ket{S_z ; +}
=
\inv{\sqrt{2}} \ket{S_x;+}
+\inv{\sqrt{2}} \ket{S_x;-}
\\
\gamma &= \pi/2, \ket{\BS \cdot \ncap ; + } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}
=
\ket{S_x ; +}
\\
\gamma &= \pi, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
0 \\
1
\end{bmatrix}
=
\ket{S_z ; -}
=
\inv{\sqrt{2}} \ket{S_x;+}
-\inv{\sqrt{2}} \ket{S_x;-},
\end{aligned}
\end{equation}

where we see that the probabilities are proportional to the squared magnitude of the projection of the initial state onto the measured state \( \ket{S_x ; +} \).

(b)

The \( S_x \) expectation is

\begin{equation}\label{eqn:moreBraKetProblems:780}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\sin\frac{\gamma}{2} \\
\cos\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2} 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} \\
&=
\frac{\Hbar}{2} \sin\gamma.
\end{aligned}
\end{equation}

Note that \( S_x^2 = (\Hbar/2)^2I \), so

\begin{equation}\label{eqn:moreBraKetProblems:800}
\begin{aligned}
\expectation{S_x^2}
&=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{ \cos^2\frac{\gamma}{2} + \sin^2 \frac{\gamma}{2} } \\
&=
\lr{ \frac{\Hbar}{2} }^2.
\end{aligned}
\end{equation}

The dispersion is

\begin{equation}\label{eqn:moreBraKetProblems:820}
\begin{aligned}
\expectation{\lr{ S_x – \expectation{S_x}}^2}
&=
\expectation{S_x^2} – \expectation{S_x}^2 \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{1 – \sin^2 \gamma} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\cos^2 \gamma.
\end{aligned}
\end{equation}
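
The dispersion formula can be checked the same way (my addition, again with \( \Hbar = 1 \) and an arbitrary angle):

```python
import numpy as np

hbar, gamma = 1.0, 0.9                               # arbitrary test values
Sx = (hbar / 2) * np.array([[0.0, 1.0], [1.0, 0.0]])
ket = np.array([np.cos(gamma/2), np.sin(gamma/2)])
ex = ket @ Sx @ ket                                  # <S_x>
ex2 = ket @ Sx @ Sx @ ket                            # <S_x^2>
print(ex2 - ex**2, (hbar/2)**2 * np.cos(gamma)**2)   # both ~0.0966
```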

At \( \gamma = \pi/2 \) the dispersion is 0, which is expected since \( \ket{\BS \cdot \ncap ; + } = \ket{ S_x ; + } \) at that point. Similarly, the dispersion is maximized at \( \gamma = 0,\pi \), where \( \ket{\BS \cdot \ncap ; + } \) has equal components along \( \ket{S_x ; + } \) and \( \ket{S_x ; - } \).

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.