
Operator matrix element

August 29, 2015 phy1520



Weird dreams

I woke up today with a dream still in my head from the night, but it was a strange one. I was expanding out the Dirac notation representation of an operator in matrix form, but the symbols in the kets were elaborate pictures of Disney princesses that I was drawing with forestry scenery in the background, including little bears. At the point that I woke up from the dream, I noticed that I’d gotten the proportions of the bears wrong in one of the pictures, and they looked like they were ready to eat one of the princess characters.

Guts

As a side effect of this weird dream, I started thinking about the matrix element representation of operators.

When forming the matrix element of an operator using Dirac notation, the elements are of the form \( \bra{\textrm{row}} A \ket{\textrm{column}} \). I’ve gotten that mixed up a couple of times, so I thought it would be helpful to write this out explicitly for a \( 2 \times 2 \) operator representation.

To start, consider a change of basis for a single matrix element from basis \( \setlr{\ket{q}, \ket{r} } \) to basis \( \setlr{\ket{a}, \ket{b} } \)

\begin{equation}\label{eqn:operatorMatrixElement:20}
\begin{aligned}
\bra{q} A \ket{r}
&=
\braket{q}{a} \bra{a} A \ket{r}
+
\braket{q}{b} \bra{b} A \ket{r} \\
&=
\braket{q}{a} \bra{a} A \ket{a}\braket{a}{r}
+ \braket{q}{a} \bra{a} A \ket{b}\braket{b}{r} \\
&+ \braket{q}{b} \bra{b} A \ket{a}\braket{a}{r}
+ \braket{q}{b} \bra{b} A \ket{b}\braket{b}{r} \\
&=
\braket{q}{a}
\begin{bmatrix}
\bra{a} A \ket{a} & \bra{a} A \ket{b}
\end{bmatrix}
\begin{bmatrix}
\braket{a}{r} \\
\braket{b}{r}
\end{bmatrix}
+
\braket{q}{b}
\begin{bmatrix}
\bra{b} A \ket{a} & \bra{b} A \ket{b}
\end{bmatrix}
\begin{bmatrix}
\braket{a}{r} \\
\braket{b}{r}
\end{bmatrix} \\
&=
\begin{bmatrix}
\braket{q}{a} &
\braket{q}{b}
\end{bmatrix}
\begin{bmatrix}
\bra{a} A \ket{a} & \bra{a} A \ket{b} \\
\bra{b} A \ket{a} & \bra{b} A \ket{b}
\end{bmatrix}
\begin{bmatrix}
\braket{a}{r} \\
\braket{b}{r}
\end{bmatrix}.
\end{aligned}
\end{equation}

Suppose the matrix representations of \( \ket{q}, \ket{r} \) are, respectively,

\begin{equation}\label{eqn:operatorMatrixElement:40}
\begin{aligned}
\ket{q} &\sim
\begin{bmatrix}
\braket{a}{q} \\
\braket{b}{q} \\
\end{bmatrix} \\
\ket{r} &\sim
\begin{bmatrix}
\braket{a}{r} \\
\braket{b}{r} \\
\end{bmatrix},
\end{aligned}
\end{equation}

then

\begin{equation}\label{eqn:operatorMatrixElement:60}
\bra{q} \sim
{\begin{bmatrix}
\braket{a}{q} \\
\braket{b}{q} \\
\end{bmatrix}}^\dagger
=
\begin{bmatrix}
\braket{q}{a} &
\braket{q}{b}
\end{bmatrix}.
\end{equation}

The matrix element is then

\begin{equation}\label{eqn:operatorMatrixElement:80}
\bra{q} A \ket{r}
\sim
\bra{q}
\begin{bmatrix}
\bra{a} A \ket{a} & \bra{a} A \ket{b} \\
\bra{b} A \ket{a} & \bra{b} A \ket{b}
\end{bmatrix}
\ket{r},
\end{equation}

and the corresponding matrix representation of the operator is

\begin{equation}\label{eqn:operatorMatrixElement:100}
A \sim
\begin{bmatrix}
\bra{a} A \ket{a} & \bra{a} A \ket{b} \\
\bra{b} A \ket{a} & \bra{b} A \ket{b}
\end{bmatrix}.
\end{equation}

More ket problems

August 5, 2015 phy1520


Question: Uncertainty relation. ([1] pr. 1.20)

Find the ket that maximizes the uncertainty product

\begin{equation}\label{eqn:moreKet:140}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2},
\end{equation}

and compare to the uncertainty bound \( \inv{4} \Abs{ \expectation{\antisymmetric{S_x}{S_y}}}^2 \).

Answer

To parameterize the ket space, consider first the kets where both components are nonzero, for which a single complex number parameterizes the ket (up to an overall factor)

\begin{equation}\label{eqn:moreKet:160}
\ket{s} =
\begin{bmatrix}
\beta' e^{i\phi'} \\
\alpha' e^{i\theta'} \\
\end{bmatrix}
\propto
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix}.

The expectation values with respect to this ket are
\begin{equation}\label{eqn:moreKet:180}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 &
\alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix}
\alpha e^{i\theta} \\
1 \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\lr{ \alpha e^{i\theta} + \alpha e^{-i\theta} } \\
&=
\frac{\Hbar}{2}
2 \alpha \cos\theta \\
&=
\Hbar \alpha \cos\theta.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreKet:200}
\begin{aligned}
\expectation{S_y}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{i\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix}
-\alpha e^{i\theta} \\
1 \\
\end{bmatrix} \\
&=
\frac{-i \alpha \Hbar}{2} 2 i \sin\theta \\
&=
\alpha \Hbar \sin\theta.
\end{aligned}
\end{equation}

The variances are
\begin{equation}\label{eqn:moreKet:220}
\begin{aligned}
\lr{ \Delta S_x }^2
&=
\lr{
\frac{\Hbar}{2}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix}
}^2 \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 & -4 \alpha \cos\theta \\
-4 \alpha \cos\theta & 4 \alpha^2 \cos^2\theta + 1 \\
\end{bmatrix},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:moreKet:240}
\begin{aligned}
\lr{ \Delta S_y }^2
&=
\lr{
\frac{\Hbar}{2}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix}
}^2 \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 & 4 \alpha i \sin\theta \\
-4 \alpha i \sin\theta & 4 \alpha^2 \sin^2\theta + 1 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

The uncertainty factors are

\begin{equation}\label{eqn:moreKet:260}
\begin{aligned}
\expectation{\lr{\Delta S_x}^2}
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 & -4 \alpha \cos\theta \\
-4 \alpha \cos\theta & 4 \alpha^2 \cos^2\theta + 1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta}
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 -4 \alpha^2 \cos\theta e^{i\theta} \\
-4 \alpha \cos\theta + 4 \alpha^3 \cos^2\theta e^{i\theta} + \alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta + 1 -4 \alpha^2 \cos\theta e^{i\theta}
-4 \alpha^2 \cos\theta e^{-i\theta} + 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta + 1 -8 \alpha^2 \cos^2\theta
+ 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
-4 \alpha^2 \cos^2\theta + 1
+ 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2 + 1
}
,
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:moreKet:280}
\begin{aligned}
\expectation{ \lr{ \Delta S_y }^2 }
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 & 4 \alpha i \sin\theta \\
-4 \alpha i \sin\theta & 4 \alpha^2 \sin^2\theta + 1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta}
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 + 4 \alpha^2 i \sin\theta e^{i\theta} \\
-4 \alpha i \sin\theta + 4 \alpha^3 \sin^2\theta e^{i\theta} + \alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \sin^2\theta + 1 + 4 \alpha^2 i \sin\theta e^{i\theta}
-4 \alpha^2 i \sin\theta e^{-i\theta} + 4 \alpha^4 \sin^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
-4 \alpha^2 \sin^2\theta + 1
+ 4 \alpha^4 \sin^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \sin^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2
+ 1
}
.
\end{aligned}
\end{equation}

The uncertainty product can finally be calculated

\begin{equation}\label{eqn:moreKet:300}
\begin{aligned}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
&=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^2 \cos^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2 + 1
}
\lr{
4 \alpha^2 \sin^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2
+ 1
} \\
&=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^4 \sin^2 \lr{ 2\theta } \lr{ \alpha^2 - 1 }^2
+ 4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
}.
\end{aligned}
\end{equation}

All of the \( \theta \) dependence is contained in the factor \( f = \sin^2 2 \theta \). The extreme points of \( f \) satisfy
\begin{equation}\label{eqn:moreKet:320}
\begin{aligned}
0
&= \PD{\theta}{f} \\
&= 4 \sin 2 \theta \cos 2\theta \\
&= 2 \sin 4 \theta.
\end{aligned}
\end{equation}

Those points are at \( 4 \theta = \pi n \), for integer \( n \), or

\begin{equation}\label{eqn:moreKet:340}
\theta = \frac{\pi}{4} n, \qquad n \in \setlr{ 0, 1, \cdots, 7 }.
\end{equation}

Minima of \( f \) will occur when

\begin{equation}\label{eqn:moreKet:360}
0 < \PDSq{\theta}{f} = 8 \cos 4\theta,
\end{equation}

or

\begin{equation}\label{eqn:moreKet:380}
n = 0, 2, 4, 6.
\end{equation}

At these points \( \sin^2 2\theta \) takes the values

\begin{equation}\label{eqn:moreKet:400}
\sin^2 \lr{ 2 \frac{\pi}{4} \setlr{ 0, 2, 4, 6 } } = \sin^2 \lr{ \pi \setlr{ 0, 1, 2, 3 } } \in \setlr{ 0 },
\end{equation}

so the maximization of the uncertainty product can be reduced to that of

\begin{equation}\label{eqn:moreKet:420}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
}.
\end{equation}

We seek

\begin{equation}\label{eqn:moreKet:440}
\begin{aligned}
0
&= \PD{\alpha}{}
\lr{
4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
} \\
&=
8 \alpha \lr{ \alpha^4 - 1 }
+ 16 \alpha^5
+ 4 \lr{\alpha^2 + 1 } \alpha \\
&= 4 \alpha \lr{
2 \alpha^4 - 2
+ 4 \alpha^4
+ \alpha^2 + 1
} \\
&= 4 \alpha \lr{ 6 \alpha^4 + \alpha^2 - 1 } \\
&= 4 \alpha \lr{ 3 \alpha^2 - 1 } \lr{ 2 \alpha^2 + 1 }.
\end{aligned}
\end{equation}

The real roots of this polynomial are \( \alpha = 0, \pm 1/\sqrt{3} \). Of these, only \( \alpha = 0 \) is a local maximum of the product (the \( \alpha^2 = 1/3 \) roots are minima), so the ket with both \( \ket{+} \) and \( \ket{-} \) components nonzero that maximizes the uncertainty product is

\begin{equation}\label{eqn:moreKet:460}
\ket{s} =
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= \ket{+}.
\end{equation}

The search for this maximizing value excluded those kets proportional to \( \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \ket{-} \). Let's compute the values of this uncertainty product at both of \( \ket{\pm} \), and compare to the uncertainty commutator. First, for \( \ket{s} = \ket{+} \),

\begin{equation}\label{eqn:moreKet:480}
\expectation{S_x}
=
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= 0,
\end{equation}

and

\begin{equation}\label{eqn:moreKet:500}
\expectation{S_y}
=
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= 0,
\end{equation}

so

\begin{equation}\label{eqn:moreKet:520}
\expectation{ \lr{ \Delta S_x }^2 }
=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\lr{\frac{\Hbar}{2}}^2,
\end{equation}

and

\begin{equation}\label{eqn:moreKet:540}
\expectation{ \lr{ \Delta S_y }^2 }
=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\lr{\frac{\Hbar}{2}}^2.
\end{equation}

For the commutator side of the uncertainty relation we have

\begin{equation}\label{eqn:moreKet:560}
\begin{aligned}
\inv{4} \Abs{ \expectation{ \antisymmetric{ S_x}{ S_y } } }^2
&=
\inv{4} \Abs{ \expectation{ i \Hbar S_z } }^2 \\
&=
\lr{ \frac{\Hbar}{2} }^4
\Abs{
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
}^2,
\end{aligned}
\end{equation}

so for the \( \ket{+} \) state we have an equality condition for the uncertainty relation

\begin{equation}\label{eqn:moreKet:580}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
=
\inv{4} \Abs{ \expectation{\antisymmetric{S_x}{S_y}}}^2
=
\lr{ \frac{\Hbar}{2} }^4.
\end{equation}

It's reasonable to guess that the \( \ket{-} \) state also matches the equality condition. Let's check

\begin{equation}\label{eqn:moreKet:600}
\expectation{S_x}
=
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
0 \\
1
\end{bmatrix}
= 0,
\end{equation}

and

\begin{equation}\label{eqn:moreKet:620}
\expectation{S_y}
=
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
0 \\
1
\end{bmatrix}
= 0,
\end{equation}

so \( \expectation{ \lr{ \Delta S_x }^2 } = \expectation{ \lr{ \Delta S_y }^2 } = \lr{\frac{\Hbar}{2}}^2 \). The commutator side of the uncertainty relation is identical, so the equality of \ref{eqn:moreKet:580} is satisfied for both of \( \ket{\pm} \). Note that it wasn't explicitly verified that \( \ket{-} \) maximizes the uncertainty product, but I don't feel like working through that second set of algebraic mess.

We can see by example that the equality condition does not mean that the product is maximized. For example, it is straightforward to show that the \( \ket{ S_x ; \pm } \) states also satisfy the equality condition of the uncertainty relation, but for those states the product is not maximized; it is zero.
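These equality claims are easy to spot check numerically. Here's a minimal numpy sketch (with \( \Hbar = 1 \)) that compares the uncertainty product to the commutator bound for \( \ket{+} \), \( \ket{-} \), and \( \ket{S_x ; + } \):

import numpy as np

# Pauli-based spin operators, hbar = 1.
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)

def variance(op, ket):
    # <(Delta op)^2> = <op^2> - <op>^2 for a normalized ket.
    mean = np.vdot(ket, op @ ket).real
    return np.vdot(ket, op @ op @ ket).real - mean**2

comm = Sx @ Sy - Sy @ Sx  # equals i S_z
for ket in (np.array([1, 0], dtype=complex),                # |+>
            np.array([0, 1], dtype=complex),                # |->
            np.array([1, 1], dtype=complex) / np.sqrt(2)):  # |Sx ; +>
    product = variance(Sx, ket) * variance(Sy, ket)
    bound = 0.25 * abs(np.vdot(ket, comm @ ket))**2
    assert np.isclose(product, bound)  # equality holds; both are zero for |Sx ; +>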

Question: Degenerate ket space example. ([1] pr. 1.23)

Consider operators with representation

\begin{equation}\label{eqn:moreKet:20}
A =
\begin{bmatrix}
a & 0 & 0 \\
0 & -a & 0 \\
0 & 0 & -a
\end{bmatrix}
,
\qquad
B =
\begin{bmatrix}
b & 0 & 0 \\
0 & 0 & -ib \\
0 & ib & 0
\end{bmatrix}.
\end{equation}

Show that these both have degeneracies and commute, and compute a simultaneous eigenket basis for both operators.

Answer

The eigenvalues and eigenvectors for \( A \) can be read off by inspection, with values of \( a, -a, -a \), and kets

\begin{equation}\label{eqn:moreKet:40}
\ket{a_1} =
\begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix},
\ket{a_2} =
\begin{bmatrix}
0 \\
1 \\
0
\end{bmatrix},
\ket{a_3} =
\begin{bmatrix}
0 \\
0 \\
1 \\
\end{bmatrix}
\end{equation}

Notice that the lower-right \( 2 \times 2 \) submatrix of \( B \) is proportional to \( \sigma_y \), so its eigenkets can be formed by inspection

\begin{equation}\label{eqn:moreKet:60}
\ket{b_1} =
\begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix},
\ket{b_2} =
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
1 \\
i
\end{bmatrix},
\ket{b_3} =
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
1 \\
-i \\
\end{bmatrix}.
\end{equation}

Computing \( B \ket{b_i} \) shows that the eigenvalues are \( b, b, -b \) respectively.

Because of the two-fold degeneracy in the \( -a \) eigenvalues of \( A \), any linear combination of \( \ket{a_2}, \ket{a_3} \) will also be an eigenket. In particular,

\begin{equation}\label{eqn:moreKet:80}
\begin{aligned}
\ket{a_2} + i \ket{a_3} &= \sqrt{2} \ket{b_2} \\
\ket{a_2} - i \ket{a_3} &= \sqrt{2} \ket{b_3},
\end{aligned}
\end{equation}

so the basis \( \setlr{ \ket{b_i}} \) is a simultaneous eigenbasis for both \( A \) and \(B\). Because there is a simultaneous eigenbasis, the matrices must commute. This can be confirmed with direct computation

\begin{equation}\label{eqn:moreKet:100}
\begin{aligned}
A B
&= a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & -i \\
0 & i & 0
\end{bmatrix} \\
&=
a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & i \\
0 & -i & 0
\end{bmatrix},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:moreKet:120}
\begin{aligned}
B A
&= a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & -i \\
0 & i & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{bmatrix} \\
&=
a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & i \\
0 & -i & 0
\end{bmatrix}.
\end{aligned}
\end{equation}
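Here's a minimal numpy sketch of the same checks (setting \( a = b = 1 \), since the scale factors don't matter), confirming both the commutation and the simultaneous eigenkets:

import numpy as np

A = np.diag([1.0, -1.0, -1.0]).astype(complex)
B = np.array([[1, 0, 0],
              [0, 0, -1j],
              [0, 1j, 0]], dtype=complex)

assert np.allclose(A @ B, B @ A)  # [A, B] = 0

b2 = np.array([0, 1, 1j]) / np.sqrt(2)
b3 = np.array([0, 1, -1j]) / np.sqrt(2)

# Simultaneous eigenkets: A eigenvalue -1 for both, B eigenvalues +1, -1.
assert np.allclose(A @ b2, -b2) and np.allclose(B @ b2, b2)
assert np.allclose(A @ b3, -b3) and np.allclose(B @ b3, -b3)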

Question: Unitary transformation. ([1] pr. 1.26)

Construct the transformation matrix that maps from the \( S_z \) diagonal basis to the \( S_x \) diagonal basis.

Answer

Based on the definition

\begin{equation}\label{eqn:moreKet:640}
U \ket{a^{(r)}} = \ket{b^{(r)}},
\end{equation}

the matrix elements can be computed

\begin{equation}\label{eqn:moreKet:660}
\bra{a^{(s)}} U \ket{a^{(r)}} = \braket{a^{(s)}}{b^{(r)}},
\end{equation}

that is

\begin{equation}\label{eqn:moreKet:680}
\begin{aligned}
U
&=
\begin{bmatrix}
\bra{a^{(1)}} U \ket{a^{(1)}} & \bra{a^{(1)}} U \ket{a^{(2)}} \\
\bra{a^{(2)}} U \ket{a^{(1)}} & \bra{a^{(2)}} U \ket{a^{(2)}}
\end{bmatrix} \\
&=
\begin{bmatrix}
\braket{a^{(1)}}{b^{(1)}} & \braket{a^{(1)}}{b^{(2)}} \\
\braket{a^{(2)}}{b^{(1)}} & \braket{a^{(2)}}{b^{(2)}}
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ 1
\end{bmatrix} &
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ -1
\end{bmatrix} \\
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
1 \\ 1
\end{bmatrix} &
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
1 \\ -1
\end{bmatrix} \\
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{aligned}
\end{equation}

As a similarity transformation, we have

\begin{equation}\label{eqn:moreKet:700}
\begin{aligned}
\bra{b^{(r)}} S_z \ket{b^{(s)}}
&=
\braket{b^{(r)}}{a^{(t)}}\bra{a^{(t)}} S_z \ket{a^{(u)}}\braket{a^{(u)}}{b^{(s)}} \\
&=
\bra{a^{(r)}} U^\dagger \ket{a^{(t)}}\bra{a^{(t)}} S_z \ket{a^{(u)}}\bra{a^{(u)}} U \ket{a^{(s)}},
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:moreKet:720}
S_z' = U^\dagger S_z U.
\end{equation}

Let’s check that the computed similarity transformation does its job.
\begin{equation}\label{eqn:moreKet:740}
\begin{aligned}
\sigma_z'
&= U^\dagger \sigma_z U \\
&= \inv{2}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\inv{2}
\begin{bmatrix}
1 & -1 \\
1 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\inv{2}
\begin{bmatrix}
0 & 2 \\
2 & 0
\end{bmatrix} \\
&= \sigma_x.
\end{aligned}
\end{equation}

The transformation matrix can also be computed more directly

\begin{equation}\label{eqn:moreKet:760}
\begin{aligned}
U
&= U \ket{a^{(r)}} \bra{a^{(r)}} \\
&= \ket{b^{(r)}}\bra{a^{(r)}} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}
\begin{bmatrix}
1 & 0
\end{bmatrix}
+
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
-1
\end{bmatrix}
\begin{bmatrix}
0 & 1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 0 \\
1 & 0
\end{bmatrix}
+
\inv{\sqrt{2}}
\begin{bmatrix}
0 & 1 \\
0 & -1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{aligned}
\end{equation}
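A minimal numpy sketch confirms both properties of this \( U \): it implements the similarity transformation, and its columns are the \( S_x \) eigenkets expressed in the \( S_z \) basis:

import numpy as np

U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

# Similarity transformation: U^dagger sigma_z U = sigma_x.
assert np.allclose(U.conj().T @ sz @ U, sx)

# Columns of U are the S_x eigenkets written in the S_z basis.
assert np.allclose(sx @ U[:, 0], U[:, 0])   # eigenvalue +1
assert np.allclose(sx @ U[:, 1], -U[:, 1])  # eigenvalue -1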

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Bra-ket and spin one-half problems

July 27, 2015 phy1520


Question: Operator matrix representation ([1] pr. 1.5)

(a)

Determine the matrix representation of \( \ket{\alpha}\bra{\beta} \) given a complete set of eigenvectors \( \ket{a^r} \).

(b)

Verify with \( \ket{\alpha} = \ket{s_z = \Hbar/2}, \ket{\beta} = \ket{s_x = \Hbar/2} \).

Answer

(a)

Forming the matrix element

\begin{equation}\label{eqn:moreBraKetProblems:20}
\begin{aligned}
\bra{a^r} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^s}
&=
\braket{a^r}{\alpha}\braket{\beta}{a^s} \\
&=
\braket{a^r}{\alpha}
\braket{a^s}{\beta}^\conj,
\end{aligned}
\end{equation}

the matrix representation is seen to be

\begin{equation}\label{eqn:moreBraKetProblems:40}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
\bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}
=
\begin{bmatrix}
\braket{a^1}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^1}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\braket{a^2}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^2}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}.
\end{equation}

(b)

First compute the spin-z representation of \( \ket{s_x = \Hbar/2 } \).

\begin{equation}\label{eqn:moreBraKetProblems:60}
\begin{aligned}
0
&=
\lr{ S_x - \frac{\Hbar}{2} I }
\begin{bmatrix}
a \\
b
\end{bmatrix} \\
&=
\lr{
\begin{bmatrix}
0 & \Hbar/2 \\
\Hbar/2 & 0 \\
\end{bmatrix}
-
\begin{bmatrix}
\Hbar/2 & 0 \\
0 & \Hbar/2 \\
\end{bmatrix}
}
\begin{bmatrix}
a \\
b
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
-1 & 1 \\
1 & -1 \\
\end{bmatrix}
\begin{bmatrix}
a \\
b
\end{bmatrix},
\end{aligned}
\end{equation}

so \( \ket{s_x = \Hbar/2 } \propto (1,1) \).

Normalized we have

\begin{equation}\label{eqn:moreBraKetProblems:80}
\begin{aligned}
\ket{\alpha} &= \ket{s_z = \Hbar/2 } =
\begin{bmatrix}
1 \\
0
\end{bmatrix} \\
\ket{\beta} &= \ket{s_x = \Hbar/2 } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}.
\end{aligned}
\end{equation}

Using \ref{eqn:moreBraKetProblems:40} the matrix representation is

\begin{equation}\label{eqn:moreBraKetProblems:100}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
(1) (1/\sqrt{2})^\conj & (1) (1/\sqrt{2})^\conj \\
(0) (1/\sqrt{2})^\conj & (0) (1/\sqrt{2})^\conj \\
\end{bmatrix}
=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{equation}

This can be confirmed with direct computation
\begin{equation}\label{eqn:moreBraKetProblems:120}
\begin{aligned}
\ket{\alpha}\bra{\beta}
&=
\begin{bmatrix}
1 \\
0
\end{bmatrix}
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}
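The same outer product falls out of numpy directly, keeping in mind that the second argument has to be conjugated to form \( \ket{\alpha}\bra{\beta} \):

import numpy as np

alpha = np.array([1, 0], dtype=complex)              # |s_z = hbar/2>
beta = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |s_x = hbar/2>

# np.outer(u, v) forms u v^T, so conjugate beta to get |alpha><beta|.
op = np.outer(alpha, beta.conj())
expected = np.array([[1, 1], [0, 0]]) / np.sqrt(2)
assert np.allclose(op, expected)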

Question: Eigenvalue of a sum of kets ([1] pr. 1.6)

Given eigenkets \( \ket{i}, \ket{j} \) of an operator \( A \), what are the conditions for \( \ket{i} + \ket{j} \) to also be an eigenket?

Answer

Let \( A \ket{i} = i \ket{i}, A \ket{j} = j \ket{j} \), and suppose that the sum is an eigenket. Then there must be a value \( a \) such that

\begin{equation}\label{eqn:moreBraKetProblems:140}
A \lr{ \ket{i} + \ket{j} } = a \lr{ \ket{i} + \ket{j} },
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:160}
i \ket{i} + j \ket{j} = a \lr{ \ket{i} + \ket{j} }.
\end{equation}

Operating with \( \bra{i}, \bra{j} \) respectively, gives

\begin{equation}\label{eqn:moreBraKetProblems:180}
\begin{aligned}
i &= a \\
j &= a,
\end{aligned}
\end{equation}

so for the sum to be an eigenket, both of the corresponding eigenvalues must be identical (i.e., linear combinations of degenerate eigenkets are also eigenkets).

Question: Null operator ([1] pr. 1.7)

Given eigenkets \( \ket{a'} \) of operator \( A \)

(a)

show that

\begin{equation}\label{eqn:moreBraKetProblems:200}
\prod_{a'} \lr{ A - a' }
\end{equation}

is the null operator.

(b)

What is the significance of

\begin{equation}\label{eqn:moreBraKetProblems:220}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''}?
\end{equation}

(c)

Illustrate using \( S_z \) for a spin 1/2 system.

Answer

(a)

Application of any factor \( A - a' \) to \( \ket{a} \), an eigenket of \( A \) with eigenvalue \( a \), scales \( \ket{a} \) by \( a - a' \), so the product operating on \( \ket{a} \) is

\begin{equation}\label{eqn:moreBraKetProblems:240}
\prod_{a'} \lr{ A - a' } \ket{a} = \prod_{a'} \lr{ a - a' } \ket{a}.
\end{equation}

Since \( \ket{a} \) is one of the \( \setlr{\ket{a'}} \) eigenkets of \( A \), one of these factors must be zero. The product therefore annihilates every eigenket, and since the eigenkets span the space, it is the null operator.

(b)

Again, consider the action of the operator on \( \ket{a} \),

\begin{equation}\label{eqn:moreBraKetProblems:260}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a}
=
\prod_{a'' \ne a'} \frac{\lr{ a - a'' }}{a' - a''} \ket{a}.
\end{equation}

If \( \ket{a} = \ket{a'} \), then each factor is unity and \( \prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a} = \ket{a} \), whereas if it does not, then \( a \) equals one of the \( a'' \) eigenvalues, one numerator factor vanishes, and the result is zero. This operator is a representation of the Kronecker delta

\begin{equation}\label{eqn:moreBraKetProblems:300}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a} \equiv \delta_{a', a} \ket{a},
\end{equation}

that is, a projector onto the \( \ket{a'} \) subspace.

(c)

For operator \( S_z \) the eigenvalues are \( \setlr{ \Hbar/2, -\Hbar/2 } \), so the null operator must be

\begin{equation}\label{eqn:moreBraKetProblems:280}
\begin{aligned}
\prod_{a'} \lr{ A - a' }
&=
\lr{ \frac{\Hbar}{2} }^2 \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\lr{ \frac{\Hbar}{2} }^2
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix}
\begin{bmatrix}
2 & 0 \\
0 & 0 \\
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 0 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

For the delta representation, consider the \( \ket{\pm} \) states and their eigenvalues. The delta operators are

\begin{equation}\label{eqn:moreBraKetProblems:320}
\begin{aligned}
\prod_{a'' \ne \Hbar/2} \frac{\lr{ A - a'' }}{\Hbar/2 - a''}
&=
\frac{S_z - (-\Hbar/2) I}{\Hbar/2 - (-\Hbar/2)} \\
&=
\inv{2} \lr{ \sigma_z + I } \\
&=
\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\inv{2}
\begin{bmatrix}
2 & 0 \\
0 & 0
\end{bmatrix}
\\
&=
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreBraKetProblems:340}
\begin{aligned}
\prod_{a'' \ne -\Hbar/2} \frac{\lr{ A - a'' }}{-\Hbar/2 - a''}
&=
\frac{S_z - (\Hbar/2) I}{-\Hbar/2 - \Hbar/2} \\
&=
\inv{2} \lr{ \sigma_z - I } \\
&=
\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\inv{2}
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 1
\end{bmatrix}.
\end{aligned}
\end{equation}

These clearly have the expected Kronecker delta property acting on the kets \( \ket{+} = (1,0), \ket{-} = (0, 1) \).
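A trivial numpy sketch makes that delta property explicit:

import numpy as np

P_plus = np.array([[1, 0], [0, 0]])   # selects |+>
P_minus = np.array([[0, 0], [0, 1]])  # selects |->
up = np.array([1, 0])
down = np.array([0, 1])

assert np.allclose(P_plus @ up, up) and np.allclose(P_plus @ down, 0)
assert np.allclose(P_minus @ down, down) and np.allclose(P_minus @ up, 0)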

Question: Spin half general normal ([1] pr. 1.9)

Construct \( \ket{\BS \cdot \ncap ; + } \), where \( \ncap = ( \cos\alpha \sin\beta, \sin\alpha \sin\beta, \cos\beta ) \) such that

\begin{equation}\label{eqn:moreBraKetProblems:360}
\BS \cdot \ncap \ket{\BS \cdot \ncap ; + } =
\frac{\Hbar}{2} \ket{\BS \cdot \ncap ; + }.
\end{equation}

Solve this as an eigenvalue problem.

Answer

The spin operator for this direction is

\begin{equation}\label{eqn:moreBraKetProblems:380}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \Bsigma \cdot \ncap \\
&= \frac{\Hbar}{2}
\lr{
\cos\alpha \sin\beta \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + \sin\alpha \sin\beta \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + \cos\beta \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}
} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta
\end{bmatrix}.
\end{aligned}
\end{equation}

Observe that this is traceless and has determinant \( -(\Hbar/2)^2 \), like each of the \( x,y,z \) spin operators.

Assuming that this has an \( \Hbar/2 \) eigenvalue (to be verified later), the matrix of the eigenvalue problem \( \lr{ \BS \cdot \ncap - \frac{\Hbar}{2} I } \ket{\BS \cdot \ncap ; + } = 0 \) is

\begin{equation}\label{eqn:moreBraKetProblems:400}
\begin{aligned}
\BS \cdot \ncap - \frac{\Hbar}{2} I
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta - 1 &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta - 1
\end{bmatrix} \\
&=
\Hbar
\begin{bmatrix}
- \sin^2 \frac{\beta}{2} &
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
\\
e^{i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
& -\cos^2 \frac{\beta}{2}
\end{bmatrix}.
\end{aligned}
\end{equation}

This has a zero determinant as expected, and the eigenvector \( (a,b) \) will satisfy

\begin{equation}\label{eqn:moreBraKetProblems:420}
\begin{aligned}
0
&= - \sin^2 \frac{\beta}{2} a +
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
b \\
&= \sin\frac{\beta}{2} \lr{ - \sin \frac{\beta}{2} a +
e^{-i\alpha} b
\cos\frac{\beta}{2}
},
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:440}
\begin{bmatrix}
a \\
b
\end{bmatrix}
\propto
\begin{bmatrix}
\cos\frac{\beta}{2} \\
e^{i\alpha}
\sin\frac{\beta}{2}
\end{bmatrix}.
\end{equation}

This is appropriately normalized, so the ket for \( \BS \cdot \ncap \) is

\begin{equation}\label{eqn:moreBraKetProblems:460}
\ket{ \BS \cdot \ncap ; + } =
\cos\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\sin\frac{\beta}{2}
\ket{-}.
\end{equation}

Note that the eigenket for the other eigenvalue is

\begin{equation}\label{eqn:moreBraKetProblems:480}
\ket{ \BS \cdot \ncap ; – } =
-\sin\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\cos\frac{\beta}{2}
\ket{-}.
\end{equation}

It is straightforward to show that these are orthogonal and that this has the \( -\Hbar/2 \) eigenvalue.
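Here's a minimal numpy sketch (with \( \Hbar = 1 \) and a couple of arbitrary angles) that verifies both eigenkets and their orthogonality:

import numpy as np

def S_n(alpha, beta):
    # Matrix of S . n in the S_z basis, hbar = 1.
    return 0.5 * np.array(
        [[np.cos(beta), np.exp(-1j * alpha) * np.sin(beta)],
         [np.exp(1j * alpha) * np.sin(beta), -np.cos(beta)]])

for alpha, beta in ((0.3, 1.1), (2.0, 0.7)):
    plus = np.array([np.cos(beta / 2),
                     np.exp(1j * alpha) * np.sin(beta / 2)])
    minus = np.array([-np.sin(beta / 2),
                      np.exp(1j * alpha) * np.cos(beta / 2)])
    assert np.allclose(S_n(alpha, beta) @ plus, 0.5 * plus)
    assert np.allclose(S_n(alpha, beta) @ minus, -0.5 * minus)
    assert np.isclose(np.vdot(plus, minus), 0)  # orthogonal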

Question: Two state Hamiltonian ([1] pr. 1.10)

Solve the eigenproblem for

\begin{equation}\label{eqn:moreBraKetProblems:500}
H = a \biglr{
\ket{1}\bra{1}
-\ket{2}\bra{2}
+\ket{1}\bra{2}
+\ket{2}\bra{1}
}
\end{equation}

Answer

In matrix form the Hamiltonian is

\begin{equation}\label{eqn:moreBraKetProblems:520}
H = a
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{equation}

The eigenvalue problem is

\begin{equation}\label{eqn:moreBraKetProblems:540}
\begin{aligned}
0
&= \Abs{ H - \lambda I } \\
&= (a - \lambda)(-a - \lambda) - a^2 \\
&= (-a + \lambda)(a + \lambda) - a^2 \\
&= \lambda^2 - 2 a^2,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:560}
\lambda = \pm \sqrt{2} a.
\end{equation}

An eigenket proportional to \( (\alpha,\beta) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:580}
0
= ( 1 \mp \sqrt{2} ) \alpha + \beta,
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:600}
\ket{\pm} \propto
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix},
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:620}
\begin{aligned}
\ket{\pm}
&=
\inv{\sqrt{ 1 + \lr{ 1 \mp \sqrt{2} }^2 }}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix} \\
&=
\inv{\sqrt{ 4 \mp 2 \sqrt{2} }}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix}.
\end{aligned}
\end{equation}

That is
\begin{equation}\label{eqn:moreBraKetProblems:640}
\ket{\pm} =
\inv{\sqrt{ 4 \mp 2 \sqrt{2} }} \lr{
-\ket{1} + (1 \mp \sqrt{2}) \ket{2}
}.
\end{equation}
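As a sanity check, numpy's Hermitian eigensolver (with \( a = 1 \)) reproduces the \( \pm \sqrt{2} a \) eigenvalues, and the kets above can be verified directly:

import numpy as np

H = np.array([[1.0, 1.0],
              [1.0, -1.0]])  # a = 1

evals, _ = np.linalg.eigh(H)
assert np.allclose(evals, [-np.sqrt(2), np.sqrt(2)])

# The normalized kets found above, checked as eigenvectors.
for lam, sign in ((np.sqrt(2), -1), (-np.sqrt(2), +1)):
    ket = np.array([-1.0, 1.0 + sign * np.sqrt(2)])
    ket /= np.linalg.norm(ket)
    assert np.allclose(H @ ket, lam * ket)
    assert np.isclose(np.linalg.norm(ket), 1)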

Question: Spin half probability and dispersion ([1] pr. 1.12)

A spin \( 1/2 \) system \( \BS \cdot \ncap \), with \( \ncap = \sin \gamma \xcap + \cos\gamma \zcap \), is in the state with eigenvalue \( \Hbar/2 \).

(a)

If \( S_x \) is measured, what is the probability of getting \( + \Hbar/2 \)?

(b)

Evaluate the dispersion in \( S_x \), that is,

\begin{equation}\label{eqn:moreBraKetProblems:660}
\expectation{\lr{ S_x – \expectation{S_x}}^2}.
\end{equation}

Answer

(a)

In matrix form the spin operator for the system is

\begin{equation}\label{eqn:moreBraKetProblems:680}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \lr{ \cos\gamma \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \sin\gamma \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}} \\
&= \frac{\Hbar}{2}
\begin{bmatrix}
\cos\gamma & \sin\gamma \\
\sin\gamma & -\cos\gamma \\
\end{bmatrix}
\end{aligned}
\end{equation}

An eigenket \( \ket{\BS \cdot \ncap ; + } = (a,b) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:700}
\begin{aligned}
0
&= \lr{ \cos \gamma - 1 } a + \sin\gamma b \\
&= \lr{ -2 \sin^2 \frac{\gamma}{2} } a + 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} b \\
&= -\sin \frac{\gamma}{2} a + \cos\frac{\gamma}{2} b,
\end{aligned}
\end{equation}

so the eigenstate is
\begin{equation}\label{eqn:moreBraKetProblems:720}
\ket{\BS \cdot \ncap ; + }
=
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}.
\end{equation}

Pick \( \ket{S_x ; \pm } = \inv{\sqrt{2}}
\begin{bmatrix}
1 \\ \pm 1
\end{bmatrix} \) as the basis for the \( S_x \) operator. Then, for the probability that the system will end up in the \( + \Hbar/2 \) state of \( S_x \), we have

\begin{equation}\label{eqn:moreBraKetProblems:740}
\begin{aligned}
P
&= \Abs{\braket{ S_x ; + }{ \BS \cdot \ncap ; + } }^2 \\
&= \Abs{ \inv{\sqrt{2} }
{
\begin{bmatrix}
1 \\
1
\end{bmatrix}}^\dagger
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=\inv{2}
\Abs{
\begin{bmatrix}
1 & 1
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=
\inv{2}
\lr{
\cos\frac{\gamma}{2} +
\sin\frac{\gamma}{2}
}^2 \\
&=
\inv{2}
\lr{ 1 + 2 \cos\frac{\gamma}{2} \sin\frac{\gamma}{2} } \\
&=
\inv{2}
\lr{ 1 + \sin\gamma }.
\end{aligned}
\end{equation}

This is a reasonable seeming result, with \( P \in [0, 1] \). Some special values further validate this

\begin{equation}\label{eqn:moreBraKetProblems:760}
\begin{aligned}
\gamma &= 0, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\ket{S_z ; +}
=
\inv{\sqrt{2}} \ket{S_x;+}
+\inv{\sqrt{2}} \ket{S_x;-}
\\
\gamma &= \pi/2, \ket{\BS \cdot \ncap ; + } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}
=
\ket{S_x ; +}
\\
\gamma &= \pi, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
0 \\
1
\end{bmatrix}
=
\ket{S_z ; -}
=
\inv{\sqrt{2}} \ket{S_x;+}
-\inv{\sqrt{2}} \ket{S_x;-},
\end{aligned}
\end{equation}

where we see that the probabilities are in proportion to the projection of the initial state onto the measured state \( \ket{S_x ; +} \).

(b)

The \( S_x \) expectation is

\begin{equation}\label{eqn:moreBraKetProblems:780}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\sin\frac{\gamma}{2} \\
\cos\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2} 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} \\
&=
\frac{\Hbar}{2} \sin\gamma.
\end{aligned}
\end{equation}

Note that \( S_x^2 = (\Hbar/2)^2I \), so

\begin{equation}\label{eqn:moreBraKetProblems:800}
\begin{aligned}
\expectation{S_x^2}
&=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{ \cos^2\frac{\gamma}{2} + \sin^2 \frac{\gamma}{2} } \\
&=
\lr{ \frac{\Hbar}{2} }^2.
\end{aligned}
\end{equation}

The dispersion is

\begin{equation}\label{eqn:moreBraKetProblems:820}
\begin{aligned}
\expectation{\lr{ S_x – \expectation{S_x}}^2}
&=
\expectation{S_x^2} – \expectation{S_x}^2 \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{1 – \sin^2 \gamma} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\cos^2 \gamma.
\end{aligned}
\end{equation}

At \( \gamma = \pi/2 \) the dispersion is 0, which is expected since \( \ket{\BS \cdot \ncap ; + } = \ket{ S_x ; + } \) at that point. Similarly, the dispersion is maximized at \( \gamma = 0,\pi \), where the \( \ket{\BS \cdot \ncap ; + } \) component in the \( \ket{S_x ; + } \) direction is minimized.
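Both the probability and the dispersion results can be spot checked with a minimal numpy sketch (\( \Hbar = 1 \), arbitrary angle):

import numpy as np

Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)  # hbar = 1
gamma = 0.9  # arbitrary angle

ket = np.array([np.cos(gamma / 2), np.sin(gamma / 2)], dtype=complex)
sx_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# P = |<S_x ; + | S.n ; +>|^2 = (1 + sin(gamma))/2
prob = abs(np.vdot(sx_plus, ket))**2
assert np.isclose(prob, 0.5 * (1 + np.sin(gamma)))

# Dispersion <S_x^2> - <S_x>^2 = cos^2(gamma)/4
mean = np.vdot(ket, Sx @ ket).real
disp = np.vdot(ket, Sx @ Sx @ ket).real - mean**2
assert np.isclose(disp, 0.25 * np.cos(gamma)**2)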

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

bra-ket manipulation problems

July 22, 2015 phy1520


Some bra-ket manipulation problems. ([1] pr. 1.4)

Using bra-ket logic, expand

(a)

\begin{equation}\label{eqn:braketManip:20}
\textrm{tr}{X Y}
\end{equation}

(b)

\begin{equation}\label{eqn:braketManip:40}
(X Y)^\dagger
\end{equation}

(c)

\begin{equation}\label{eqn:braketManip:60}
e^{i f(A)},
\end{equation}

where \( A \) is Hermitian with a complete set of eigenkets.

(d)

\begin{equation}\label{eqn:braketManip:80}
\sum_{a'} \Psi_{a'}(\Bx')^\conj \Psi_{a'}(\Bx''),
\end{equation}

where \( \Psi_{a'}(\Bx') = \braket{\Bx'}{a'} \).

Answers

(a)

\begin{equation}\label{eqn:braketManip:100}
\begin{aligned}
\textrm{tr}{X Y}
&= \sum_a \bra{a} X Y \ket{a} \\
&= \sum_{a,b} \bra{a} X \ket{b}\bra{b} Y \ket{a} \\
&= \sum_{a,b}
\bra{b} Y \ket{a}
\bra{a} X \ket{b} \\
&= \sum_{b}
\bra{b} Y
X \ket{b} \\
&= \textrm{tr}{ Y X }.
\end{aligned}
\end{equation}

(b)

\begin{equation}\label{eqn:braketManip:120}
\begin{aligned}
\bra{a} \lr{ X Y}^\dagger \ket{b}
&=
\lr{ \bra{b} X Y \ket{a} }^\conj \\
&=
\sum_c \lr{ \bra{b} X \ket{c}\bra{c} Y \ket{a} }^\conj \\
&=
\sum_c \lr{ \bra{b} X \ket{c} }^\conj \lr{ \bra{c} Y \ket{a} }^\conj \\
&=
\sum_c
\lr{ \bra{c} Y \ket{a} }^\conj
\lr{ \bra{b} X \ket{c} }^\conj \\
&=
\sum_c
\bra{a} Y^\dagger \ket{c}
\bra{c} X^\dagger \ket{b} \\
&=
\bra{a} Y^\dagger
X^\dagger \ket{b},
\end{aligned}
\end{equation}

so \( \lr{ X Y }^\dagger = Y^\dagger X^\dagger \).

(c)

Let’s presume that the function \( f \) has a Taylor series representation

\begin{equation}\label{eqn:braketManip:140}
f(A) = \sum_r b_r A^r.
\end{equation}

If the eigenvalues of \( A \) are given by

\begin{equation}\label{eqn:braketManip:160}
A \ket{a_s} = a_s \ket{a_s},
\end{equation}

this operator can be expanded like

\begin{equation}\label{eqn:braketManip:180}
\begin{aligned}
A
&= \sum_{a_s} A \ket{a_s} \bra{a_s} \\
&= \sum_{a_s} a_s \ket{a_s} \bra{a_s},
\end{aligned}
\end{equation}

To compute powers of this operator, consider first the square

\begin{equation}\label{eqn:braketManip:200}
\begin{aligned}
A^2
&=
\sum_{a_s} a_s \ket{a_s} \bra{a_s}
\sum_{a_r} a_r \ket{a_r} \bra{a_r} \\
&=
\sum_{a_s, a_r} a_s a_r \ket{a_s} \bra{a_s} \ket{a_r} \bra{a_r} \\
&=
\sum_{a_s, a_r} a_s a_r \ket{a_s} \delta_{s r} \bra{a_r} \\
&=
\sum_{a_s} a_s^2 \ket{a_s} \bra{a_s}.
\end{aligned}
\end{equation}

The pattern for higher powers will clearly just be

\begin{equation}\label{eqn:braketManip:220}
A^k =
\sum_{a_s} a_s^k \ket{a_s} \bra{a_s},
\end{equation}

so the expansion of \( f(A) \) will be

\begin{equation}\label{eqn:braketManip:240}
\begin{aligned}
f(A)
&= \sum_r b_r A^r \\
&= \sum_r b_r
\sum_{a_s} a_s^r \ket{a_s} \bra{a_s} \\
&=
\sum_{a_s} \lr{ \sum_r b_r a_s^r } \ket{a_s} \bra{a_s} \\
&=
\sum_{a_s} f(a_s) \ket{a_s} \bra{a_s}.
\end{aligned}
\end{equation}

The exponential expansion is

\begin{equation}\label{eqn:braketManip:260}
\begin{aligned}
e^{i f(A)}
&=
\sum_t \frac{i^t}{t!} f^t(A) \\
&=
\sum_t \frac{i^t}{t!}
\lr{ \sum_{a_s} f(a_s) \ket{a_s} \bra{a_s} }^t \\
&=
\sum_t \frac{i^t}{t!}
\sum_{a_s} f^t(a_s) \ket{a_s} \bra{a_s} \\
&=
\sum_{a_s}
e^{i f(a_s) }
\ket{a_s} \bra{a_s}.
\end{aligned}
\end{equation}
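This spectral expansion can be validated numerically by comparing it against a direct matrix exponential. Here's a minimal sketch using numpy and scipy, with the arbitrary choice \( f(x) = x^2 \):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2  # random Hermitian operator

f = lambda x: x**2  # arbitrary smooth function
evals, evecs = np.linalg.eigh(A)

# Spectral form: sum_s e^{i f(a_s)} |a_s><a_s|.
spectral = sum(np.exp(1j * f(a)) * np.outer(v, v.conj())
               for a, v in zip(evals, evecs.T))

# Direct form: matrix exponential of i f(A) = i A^2.
assert np.allclose(spectral, expm(1j * (A @ A)))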

(d)

\begin{equation}\label{eqn:braketManip:n}
\begin{aligned}
\sum_{a'} \Psi_{a'}(\Bx')^\conj \Psi_{a'}(\Bx'')
&=
\sum_{a'}
\braket{\Bx'}{a'}^\conj
\braket{\Bx''}{a'} \\
&=
\sum_{a'}
\braket{a'}{\Bx'}
\braket{\Bx''}{a'} \\
&=
\sum_{a'}
\braket{\Bx''}{a'}
\braket{a'}{\Bx'} \\
&=
\braket{\Bx''}{\Bx'} \\
&= \delta(\Bx'' - \Bx').
\end{aligned}
\end{equation}

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Schwartz inequality in bra-ket notation

July 6, 2015 phy1520


Motivation

In [2] the Schwartz inequality

\begin{equation}\label{eqn:qmSchwartz:20}
\boxed{
\braket{a}{a}
\braket{b}{b}
\ge \Abs{\braket{a}{b}}^2,
}
\end{equation}

is used in the derivation of the uncertainty relation. The proof of the Schwartz inequality uses a sneaky substitution that doesn’t seem obvious, and is even less obvious since there is a typo in the value to be substituted. Let’s understand where that sneakiness is coming from.

Without being sneaky

My ancient first year linear algebra text [1] contains a non-sneaky proof, but it only works for real vector spaces. Recast in bra-ket notation, this method examines the bounds of the norms of sums and differences of unit states (i.e. \( \braket{a}{a} = \braket{b}{b} = 1 \).)

\begin{equation}\label{eqn:qmSchwartz:40}
\braket{a - b}{a - b}
= \braket{a}{a} + \braket{b}{b} - \braket{a}{b} - \braket{b}{a}
= 2 - 2 \textrm{Re} \braket{a}{b}
\ge 0,
\end{equation}

so
\begin{equation}\label{eqn:qmSchwartz:60}
1 \ge \textrm{Re} \braket{a}{b}.
\end{equation}

Similarly

\begin{equation}\label{eqn:qmSchwartz:80}
\braket{a + b}{a + b}
= \braket{a}{a} + \braket{b}{b} + \braket{a}{b} + \braket{b}{a}
= 2 + 2 \textrm{Re} \braket{a}{b}
\ge 0,
\end{equation}

so
\begin{equation}\label{eqn:qmSchwartz:100}
\textrm{Re} \braket{a}{b} \ge -1.
\end{equation}

This means that for normalized state vectors

\begin{equation}\label{eqn:qmSchwartz:120}
-1 \le \textrm{Re} \braket{a}{b} \le 1,
\end{equation}

or
\begin{equation}\label{eqn:qmSchwartz:140}
\Abs{\textrm{Re} \braket{a}{b}} \le 1.
\end{equation}

Writing out the unit vectors explicitly, that last inequality is

\begin{equation}\label{eqn:qmSchwartz:180}
\Abs{ \textrm{Re} \braket{ \frac{a}{\sqrt{\braket{a}{a}}} }{ \frac{b}{\sqrt{\braket{b}{b}}} } } \le 1,
\end{equation}

squaring and rearranging gives

\begin{equation}\label{eqn:qmSchwartz:200}
\Abs{\textrm{Re} \braket{a}{b}}^2 \le
\braket{a}{a}
\braket{b}{b}.
\end{equation}

This is similar to, but not identical to, the Schwartz inequality. Since \( \Abs{\textrm{Re} \braket{a}{b}} \le \Abs{\braket{a}{b}} \), the Schwartz inequality cannot be demonstrated with this argument. This first year algebra method works nicely for demonstrating the inequality for real vector spaces, but a different argument is required for a complex vector space (i.e. quantum mechanics state space.)

Arguing with projected and rejected components

Notice that the equality condition in the inequality holds when the vectors are colinear, and the largest inequality holds when the vectors are normal to each other. Given those geometrical observations, it seems reasonable to examine the norms of projected or rejected components of a vector. To do so in bra-ket notation, the correct form of a projection operation must be determined. Care is required to get the ordering of the bra-kets right when expressing such a projection.

Suppose we wish to calculate the rejection of \( \ket{a} \) from \( \ket{b} \), that is, \( \ket{b - \alpha a}\), such that

\begin{equation}\label{eqn:qmSchwartz:220}
0
= \braket{a}{b - \alpha a}
= \braket{a}{b} - \alpha \braket{a}{a},
\end{equation}

or
\begin{equation}\label{eqn:qmSchwartz:240}
\alpha =
\frac{\braket{a}{b} }{ \braket{a}{a} }.
\end{equation}

Therefore, the projection of \( \ket{b} \) on \( \ket{a} \) is

\begin{equation}\label{eqn:qmSchwartz:260}
\textrm{Proj}_{\ket{a}} \ket{b}
= \frac{\braket{a}{b} }{ \braket{a}{a} } \ket{a}
= \frac{\braket{b}{a}^\conj }{ \braket{a}{a} } \ket{a}.
\end{equation}

The conventional way to write this in QM is in the operator form

\begin{equation}\label{eqn:qmSchwartz:300}
\textrm{Proj}_{\ket{a}} \ket{b}
= \frac{\ket{a}\bra{a}}{\braket{a}{a}} \ket{b}.
\end{equation}

In this form the rejection of \( \ket{a} \) from \( \ket{b} \) can be expressed as

\begin{equation}\label{eqn:qmSchwartz:280}
\textrm{Rej}_{\ket{a}} \ket{b} = \ket{b} - \frac{\ket{a}\bra{a}}{\braket{a}{a}} \ket{b}.
\end{equation}

This state vector is normal to \( \ket{a} \) as desired

\begin{equation}\label{eqn:qmSchwartz:320}
\braket{a}{b - \frac{\braket{a}{b} }{ \braket{a}{a} } a }
=
\braket{a}{ b} - \frac{ \braket{a}{b} }{ \braket{a}{a} } \braket{a}{a}
=
\braket{a}{ b} - \braket{a}{b}
= 0.
\end{equation}

How about its length? That is

\begin{equation}\label{eqn:qmSchwartz:340}
\begin{aligned}
\braket{b - \frac{\braket{a}{b} }{ \braket{a}{a} } a}{b - \frac{\braket{a}{b} }{ \braket{a}{a} } a }
&=
\braket{b}{b} - 2 \frac{\Abs{\braket{a}{b}}^2}{\braket{a}{a}} +\frac{\Abs{\braket{a}{b}}^2 }{ \braket{a}{a}^2 } \braket{a}{a} \\
&=
\braket{b}{b} - \frac{\Abs{\braket{a}{b}}^2}{\braket{a}{a}}.
\end{aligned}
\end{equation}

Observe that this must be greater than or equal to zero, so

\begin{equation}\label{eqn:qmSchwartz:360}
\braket{b}{b} \ge \frac{ \Abs{ \braket{a}{b} }^2 }{ \braket{a}{a} }.
\end{equation}

Rearranging this gives \ref{eqn:qmSchwartz:20} as desired. The Schwartz proof in [2] obscures the geometry involved and starts with

\begin{equation}\label{eqn:qmSchwartz:380}
\braket{b + \lambda a}{b + \lambda a} \ge 0,
\end{equation}

where the “proof” is nothing more than a statement that one can “pick” \( \lambda = -\braket{a}{b}/\braket{a}{a} \). The Pythagorean context of the Schwartz inequality is not mentioned, and without thinking about it, one is left wondering what sort of magic hat that \( \lambda \) selection came from.
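As a final numeric illustration of the rejection argument, the squared length \( \braket{b}{b} - \Abs{\braket{a}{b}}^2/\braket{a}{a} \) is indeed non-negative for random complex vectors:

import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    a = rng.normal(size=4) + 1j * rng.normal(size=4)
    b = rng.normal(size=4) + 1j * rng.normal(size=4)
    # Squared length of the rejection of |a> from |b>.
    rej = np.vdot(b, b).real - abs(np.vdot(a, b))**2 / np.vdot(a, a).real
    assert rej >= -1e-10  # Schwartz inequality, up to rounding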

References

[1] W Keith Nicholson. Elementary linear algebra, with applications. PWS-Kent Publishing Company, 1990.

[2] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.