Plane wave ground state expectation for SHO

October 18, 2015 phy1520

Problem 2.18 of [1] asks us to show that, for a 1D SHO,

\begin{equation}\label{eqn:exponentialExpectationGroundState:20}
\bra{0} e^{i k x} \ket{0} = \exp\lr{ -k^2 \bra{0} x^2 \ket{0}/2 }.
\end{equation}

Despite the simple appearance of this problem, I found this quite involved to show. To do so, start with a series expansion of the expectation

\begin{equation}\label{eqn:exponentialExpectationGroundState:40}
\bra{0} e^{i k x} \ket{0}
=
\sum_{m=0}^\infty \frac{(i k)^m}{m!} \bra{0} x^m \ket{0}.
\end{equation}

Let

\begin{equation}\label{eqn:exponentialExpectationGroundState:60}
X = \lr{ a + a^\dagger },
\end{equation}

so that

\begin{equation}\label{eqn:exponentialExpectationGroundState:80}
x
= \sqrt{\frac{\Hbar}{2 \omega m}} X
= \frac{x_0}{\sqrt{2}} X.
\end{equation}

Consider the first few values of \( \bra{0} X^n \ket{0} \)

\begin{equation}\label{eqn:exponentialExpectationGroundState:100}
\begin{aligned}
\bra{0} X \ket{0}
&=
\bra{0} \lr{ a + a^\dagger } \ket{0} \\
&=
\braket{0}{1} \\
&=
0,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:exponentialExpectationGroundState:120}
\begin{aligned}
\bra{0} X^2 \ket{0}
&=
\bra{0} \lr{ a + a^\dagger }^2 \ket{0} \\
&=
\braket{1}{1} \\
&=
1,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:exponentialExpectationGroundState:140}
\begin{aligned}
\bra{0} X^3 \ket{0}
&=
\bra{0} \lr{ a + a^\dagger }^3 \ket{0} \\
&=
\bra{1} \lr{ \sqrt{2} \ket{2} + \ket{0} } \\
&=
0.
\end{aligned}
\end{equation}

Whenever the power \( n \) in \( X^n \) is odd, \( X^n \ket{0} \) is a sum of odd-numbered eigenstates only (each application of \( X \) flips the parity of the state number), so it is orthogonal to \( \ket{0} \). We conclude that \( \bra{0} X^n \ket{0} = 0 \) when \( n \) is odd.

Noting that \( \bra{0} x^2 \ket{0} = \ifrac{x_0^2}{2} \), this leaves

\begin{equation}\label{eqn:exponentialExpectationGroundState:160}
\begin{aligned}
\bra{0} e^{i k x} \ket{0}
&=
\sum_{m=0}^\infty \frac{(i k)^{2 m}}{(2 m)!} \bra{0} x^{2m} \ket{0} \\
&=
\sum_{m=0}^\infty \frac{(i k)^{2 m}}{(2 m)!} \lr{ \frac{x_0^2}{2} }^m \bra{0} X^{2m} \ket{0} \\
&=
\sum_{m=0}^\infty \frac{1}{(2 m)!} \lr{ -k^2 \bra{0} x^2 \ket{0} }^m \bra{0} X^{2m} \ket{0}.
\end{aligned}
\end{equation}

This problem is now reduced to showing that

\begin{equation}\label{eqn:exponentialExpectationGroundState:180}
\frac{1}{(2 m)!} \bra{0} X^{2m} \ket{0} = \inv{m! 2^m},
\end{equation}

or

\begin{equation}\label{eqn:exponentialExpectationGroundState:200}
\begin{aligned}
\bra{0} X^{2m} \ket{0}
&= \frac{(2m)!}{m! 2^m} \\
&= \frac{ (2m)(2m-1)(2m-2) \cdots (2)(1) }{2^m m!} \\
&= \frac{ 2^m (m)(2m-1)(m-1)(2m-3)(m-2) \cdots (2)(3)(1)(1) }{2^m m!} \\
&= (2m-1)!!,
\end{aligned}
\end{equation}

where \( n!! = n(n-2)(n-4)\cdots \).
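Before grinding through the induction below, the double factorial claim is easy to sanity check numerically using truncated ladder operator matrices. Here's a little numpy sketch (my addition; the truncation size is an arbitrary choice, harmless since \( X^{2m} \ket{0} \) only reaches \( \ket{2m} \)):

```python
import numpy as np

N = 40  # number-basis truncation, plenty for the powers checked below
# Lowering operator: a|n> = sqrt(n)|n-1>, i.e. superdiagonal entries sqrt(1..N-1).
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = a + a.conj().T  # X = a + a^dagger

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

for m in range(6):
    braket = np.linalg.matrix_power(X, 2 * m)[0, 0]  # <0| X^{2m} |0>
    assert np.isclose(braket, double_factorial(2 * m - 1))
```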

It looks like \( \bra{0} X^{2m} \ket{0} \) can be expanded by inserting an identity operator and proceeding recursively, like

\begin{equation}\label{eqn:exponentialExpectationGroundState:220}
\begin{aligned}
\bra{0} X^{2m} \ket{0}
&=
\bra{0} X^2 \lr{ \sum_{n=0}^\infty \ket{n}\bra{n} } X^{2m-2} \ket{0} \\
&=
\bra{0} X^2 \lr{ \ket{0}\bra{0} + \ket{2}\bra{2} } X^{2m-2} \ket{0} \\
&=
\bra{0} X^{2m-2} \ket{0} + \bra{0} X^2 \ket{2} \bra{2} X^{2m-2} \ket{0}.
\end{aligned}
\end{equation}

This has made use of the observation that \( \bra{0} X^2 \ket{n} = 0 \) for all \( n \ne 0,2 \). The remaining term includes the factor

\begin{equation}\label{eqn:exponentialExpectationGroundState:240}
\begin{aligned}
\bra{0} X^2 \ket{2}
&=
\bra{0} \lr{a + a^\dagger}^2 \ket{2} \\
&=
\lr{ \bra{0} + \sqrt{2} \bra{2} } \ket{2} \\
&=
\sqrt{2}.
\end{aligned}
\end{equation}

Since \( \sqrt{2} \ket{2} = \lr{a^\dagger}^2 \ket{0} \), the expectation of interest can be written

\begin{equation}\label{eqn:exponentialExpectationGroundState:260}
\bra{0} X^{2m} \ket{0}
=
\bra{0} X^{2m-2} \ket{0} + \bra{0} a^2 X^{2m-2} \ket{0}.
\end{equation}

How do we expand the second term? Let's look at how \( a \) and \( X \) commute:

\begin{equation}\label{eqn:exponentialExpectationGroundState:280}
\begin{aligned}
a X
&=
\antisymmetric{a}{X} + X a \\
&=
\antisymmetric{a}{a + a^\dagger} + X a \\
&=
\antisymmetric{a}{a^\dagger} + X a \\
&=
1 + X a,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:exponentialExpectationGroundState:300}
\begin{aligned}
a^2 X
&=
a \lr{ a X } \\
&=
a \lr{ 1 + X a } \\
&=
a + a X a \\
&=
a + \lr{ 1 + X a } a \\
&=
2 a + X a^2.
\end{aligned}
\end{equation}

Proceeding to expand \( a^2 X^n \) we find
\begin{equation}\label{eqn:exponentialExpectationGroundState:320}
\begin{aligned}
a^2 X^3 &= 6 X + 6 X^2 a + X^3 a^2 \\
a^2 X^4 &= 12 X^2 + 8 X^3 a + X^4 a^2 \\
a^2 X^5 &= 20 X^3 + 10 X^4 a + X^5 a^2 \\
a^2 X^6 &= 30 X^4 + 12 X^5 a + X^6 a^2.
\end{aligned}
\end{equation}

It appears that we have
\begin{equation}\label{eqn:exponentialExpectationGroundState:340}
a^2 X^n = \beta_n X^{n-2} + 2 n X^{n-1} a + X^n a^2,
\end{equation}

where

\begin{equation}\label{eqn:exponentialExpectationGroundState:360}
\beta_n = \beta_{n-1} + 2 (n-1),
\end{equation}

and \( \beta_2 = 2 \). Some goofing around shows that \( \beta_n = n(n-1) \), so the induction hypothesis is

\begin{equation}\label{eqn:exponentialExpectationGroundState:380}
a^2 X^n = n(n-1) X^{n-2} + 2 n X^{n-1} a + X^n a^2.
\end{equation}

Let’s check the induction
\begin{equation}\label{eqn:exponentialExpectationGroundState:400}
\begin{aligned}
a^2 X^{n+1}
&=
a^2 X^{n} X \\
&=
\lr{ n(n-1) X^{n-2} + 2 n X^{n-1} a + X^n a^2 } X \\
&=
n(n-1) X^{n-1} + 2 n X^{n-1} a X + X^n a^2 X \\
&=
n(n-1) X^{n-1} + 2 n X^{n-1} \lr{ 1 + X a } + X^n \lr{ 2 a + X a^2 } \\
&=
n(n-1) X^{n-1} + 2 n X^{n-1} + 2 n X^{n} a
+ 2 X^n a
+ X^{n+1} a^2 \\
&=
X^{n+1} a^2 + (2 + 2 n) X^{n} a + \lr{ 2 n + n(n-1) } X^{n-1} \\
&=
X^{n+1} a^2 + 2(n + 1) X^{n} a + (n+1) n X^{n-1},
\end{aligned}
\end{equation}

which concludes the induction, giving

\begin{equation}\label{eqn:exponentialExpectationGroundState:420}
\bra{ 0 } a^2 X^{n} \ket{0 } = n(n-1) \bra{0} X^{n-2} \ket{0},
\end{equation}

and

\begin{equation}\label{eqn:exponentialExpectationGroundState:440}
\bra{0} X^{2m} \ket{0}
=
\bra{0} X^{2m-2} \ket{0} + (2m-2)(2m-3) \bra{0} X^{2m-4} \ket{0}.
\end{equation}

Let

\begin{equation}\label{eqn:exponentialExpectationGroundState:460}
\sigma_{n} = \bra{0} X^n \ket{0},
\end{equation}

so that the recurrence relation, for \( 2n \ge 4 \), is

\begin{equation}\label{eqn:exponentialExpectationGroundState:480}
\sigma_{2n} = \sigma_{2n-2} + (2n-2)(2n-3) \sigma_{2n-4}.
\end{equation}

We want to show that this simplifies to

\begin{equation}\label{eqn:exponentialExpectationGroundState:500}
\sigma_{2n} = (2n-1)!!.
\end{equation}

The first values are

\begin{equation}\label{eqn:exponentialExpectationGroundState:540}
\sigma_0 = \bra{0} X^0 \ket{0} = 1
\end{equation}
\begin{equation}\label{eqn:exponentialExpectationGroundState:560}
\sigma_2 = \bra{0} X^2 \ket{0} = 1
\end{equation}

which gives us the right result for the first term in the induction

\begin{equation}\label{eqn:exponentialExpectationGroundState:580}
\begin{aligned}
\sigma_4
&= \sigma_2 + 2 \times 1 \times \sigma_0 \\
&= 1 + 2 \\
&= 3!!
\end{aligned}
\end{equation}

For the general induction term, consider

\begin{equation}\label{eqn:exponentialExpectationGroundState:600}
\begin{aligned}
\sigma_{2n + 2}
&= \sigma_{2n} + 2 n (2n - 1) \sigma_{2n-2} \\
&= (2n-1)!! + 2n (2n - 1) (2n-3)!! \\
&= (2n + 1) (2n -1)!! \\
&= (2n + 1)!!,
\end{aligned}
\end{equation}

which completes the final induction. That was also the last thing required to complete the proof, so we are done!
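As a final numerical sanity check (my addition, not part of the original problem), the full identity can be confirmed in a truncated number basis using scipy's matrix exponential, in units where \( x_0 = 1 \) and with a few arbitrary \( k \) values:

```python
import numpy as np
from scipy.linalg import expm

N = 60                                   # number-basis truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.conj().T) / np.sqrt(2)        # x in units where x_0 = sqrt(hbar/(m omega)) = 1

x2_00 = (x @ x)[0, 0]                    # <0| x^2 |0> = 1/2
for k in (0.5, 1.0, 2.0):
    lhs = expm(1j * k * x)[0, 0]         # <0| e^{ikx} |0>
    rhs = np.exp(-k**2 * x2_00 / 2)
    assert np.isclose(lhs, rhs)
```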

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

PHY1520H Graduate Quantum Mechanics. Lecture 3: Density matrix (cont.). Taught by Prof. Arun Paramekanti

September 24, 2015 phy1520

Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 3 content.

Density matrix (cont.)

An example of a partitioned system with four total states (two spin 1/2 particles) is sketched in fig. 1.

fig. 1. Two spins

An example of a partitioned system with eight total states (three spin 1/2 particles) is sketched in fig. 2.

fig. 2. Three spins

The density matrix

\begin{equation}\label{eqn:qmLecture3:20}
\hat{\rho} = \ket{\Psi}\bra{\Psi}
\end{equation}

is clearly an operator as can be seen by applying it to a state

\begin{equation}\label{eqn:qmLecture3:40}
\hat{\rho} \ket{\phi} = \ket{\Psi} \lr{ \braket{ \Psi }{\phi} }.
\end{equation}

The quantity in braces is just a complex number.

After expanding the pure state \( \ket{\Psi} \) in terms of basis states for each of the two partitions

\begin{equation}\label{eqn:qmLecture3:60}
\ket{\Psi}
= \sum_{m,n} C_{m, n} \ket{m}_{\textrm{L}} \ket{n}_{\textrm{R}},
\end{equation}

with \( \textrm{L} \) and \( \textrm{R} \) implied for the \( \ket{m}, \ket{n} \) indexed states respectively, this can be written

\begin{equation}\label{eqn:qmLecture3:460}
\ket{\Psi}
= \sum_{m,n} C_{m, n} \ket{m} \ket{n}.
\end{equation}

The density operator is

\begin{equation}\label{eqn:qmLecture3:80}
\hat{\rho} =
\sum_{m,n}
\sum_{m',n'}
C_{m, n}
C_{m', n'}^\conj
\ket{m} \ket{n}
\bra{m'} \bra{n'}.
\end{equation}

Suppose we trace over the right partition of the state space, defining such a trace as the reduced density operator \( \hat{\rho}_{\textrm{red}} \)

\begin{equation}\label{eqn:qmLecture3:100}
\begin{aligned}
\hat{\rho}_{\textrm{red}}
&\equiv
\textrm{Tr}_{\textrm{R}}(\hat{\rho}) \\
&= \sum_{\tilde{n}} \bra{\tilde{n}} \hat{\rho} \ket{ \tilde{n}} \\
&= \sum_{\tilde{n}}
\bra{\tilde{n} }
\lr{
\sum_{m,n}
C_{m, n}
\ket{m} \ket{n}
}
\lr{
\sum_{m',n'}
C_{m', n'}^\conj
\bra{m'} \bra{n'}
}
\ket{ \tilde{n} } \\
&=
\sum_{\tilde{n}}
\sum_{m,n}
\sum_{m',n'}
C_{m, n}
C_{m', n'}^\conj
\ket{m} \delta_{\tilde{n} n}
\bra{m'}
\delta_{ \tilde{n} n' } \\
&=
\sum_{\tilde{n}, m, m'}
C_{m, \tilde{n}}
C_{m', \tilde{n}}^\conj
\ket{m} \bra{m'}.
\end{aligned}
\end{equation}

Computing the matrix element of \( \hat{\rho}_{\textrm{red}} \), we have

\begin{equation}\label{eqn:qmLecture3:120}
\begin{aligned}
\bra{\tilde{m}} \hat{\rho}_{\textrm{red}} \ket{\tilde{m}}
&=
\sum_{m, m', \tilde{n}} C_{m, \tilde{n}} C_{m', \tilde{n}}^\conj \braket{ \tilde{m}}{m} \braket{m'}{\tilde{m}} \\
&=
\sum_{\tilde{n}} \Abs{C_{\tilde{m}, \tilde{n}} }^2.
\end{aligned}
\end{equation}

This is the probability that the left partition is in state \( \tilde{m} \).
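The last line says that, as a matrix in the left partition indexes, \( \hat{\rho}_{\textrm{red}} = C C^\dagger \), where \( C \) is the coefficient matrix of the pure state. That makes both the partial trace and this probability interpretation easy to verify numerically; here's a little numpy sketch (my addition, not from the lecture) for a random two-qubit pure state:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
C /= np.linalg.norm(C)             # normalize |Psi> = sum_{mn} C_{mn} |m>|n>

rho_red = C @ C.conj().T           # trace over the right partition
probs = np.real(np.diag(rho_red))  # P(left partition in state m)

assert np.isclose(np.trace(rho_red), 1)
assert np.allclose(probs, np.sum(np.abs(C)**2, axis=1))
```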

Average of an observable

Suppose we have two spin half particles. For such a system the total magnetization is

\begin{equation}\label{eqn:qmLecture3:140}
S_{\textrm{Total}} =
S_1^z
+
S_2^z,
\end{equation}

as sketched in fig. 3.

fig. 3. Magnetic moments from two spins.

The average of some observable is

\begin{equation}\label{eqn:qmLecture3:160}
\expectation{\hatA}
= \sum_{m, n, m', n'} C_{m, n}^\conj C_{m', n'}
\bra{m}\bra{n} \hatA \ket{n'} \ket{m'}.
\end{equation}

Consider the trace of the density operator observable product

\begin{equation}\label{eqn:qmLecture3:180}
\textrm{Tr}( \hat{\rho} \hatA )
= \sum_{m, n} \braket{m n}{\Psi} \bra{\Psi} \hatA \ket{m, n}.
\end{equation}

Let

\begin{equation}\label{eqn:qmLecture3:200}
\ket{\Psi} = \sum_{m, n} C_{m n} \ket{m, n},
\end{equation}

so that

\begin{equation}\label{eqn:qmLecture3:220}
\begin{aligned}
\textrm{Tr}( \hat{\rho} \hatA )
&= \sum_{m, n, m', n', m'', n''} C_{m', n'} C_{m'', n''}^\conj
\braket{m, n}{m', n'} \bra{m'', n''} \hatA \ket{m, n} \\
&= \sum_{m, n, m'', n''} C_{m, n} C_{m'', n''}^\conj
\bra{m'', n''} \hatA \ket{m, n}.
\end{aligned}
\end{equation}

This is just

\begin{equation}\label{eqn:qmLecture3:240}
\boxed{
\bra{\Psi} \hatA \ket{\Psi} = \textrm{Tr}( \hat{\rho} \hatA ).
}
\end{equation}
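This boxed result is also trivial to check numerically (my addition), with a random state and a random Hermitian observable:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                                     # e.g. two spin 1/2 particles
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (M + M.conj().T) / 2                  # Hermitian observable
rho = np.outer(psi, psi.conj())           # |Psi><Psi|

assert np.isclose(psi.conj() @ A @ psi, np.trace(rho @ A))
```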

Left observables

Consider

\begin{equation}\label{eqn:qmLecture3:260}
\begin{aligned}
\bra{\Psi} \hatA_{\textrm{L}} \ket{\Psi}
&= \textrm{Tr}(\hat{\rho} \hatA_{\textrm{L}}) \\
&=
\textrm{Tr}_{\textrm{L}}
\textrm{Tr}_{\textrm{R}}
(\hat{\rho} \hatA_{\textrm{L}}) \\
&=
\textrm{Tr}_{\textrm{L}}
\lr{
\lr{
\textrm{Tr}_{\textrm{R}} \hat{\rho}
}
\hatA_{\textrm{L}}
} \\
&=
\textrm{Tr}_{\textrm{L}}
\lr{
\hat{\rho}_{\textrm{red}}
\hatA_{\textrm{L}}
}.
\end{aligned}
\end{equation}

We see

\begin{equation}\label{eqn:qmLecture3:280}
\bra{\Psi} \hatA_{\textrm{L}} \ket{\Psi}
=
\textrm{Tr}_{\textrm{L}} \lr{ \hat{\rho}_{\textrm{red}, \textrm{L}} \hatA_{\textrm{L}} }.
\end{equation}

We find that we don’t need to know the state of the complete system to answer questions about portions of the system, but instead just need \( \hat{\rho} \), a “probability operator” that provides all the required information about the partitioning of the system.

Pure states vs. mixed states

For pure states we can assign a state vector and compute reduced descriptions from it. For mixed states we must work with the reduced density matrix directly.

Example: Two particle spin half pure states

Consider

\begin{equation}\label{eqn:qmLecture3:300}
\ket{\psi_1} = \inv{\sqrt{2}} \lr{ \ket{ \uparrow \downarrow } - \ket{ \downarrow \uparrow } }
\end{equation}

\begin{equation}\label{eqn:qmLecture3:320}
\ket{\psi_2} = \inv{\sqrt{2}} \lr{ \ket{ \uparrow \downarrow } + \ket{ \uparrow \uparrow } }.
\end{equation}

For the first pure state the density operator is
\begin{equation}\label{eqn:qmLecture3:360}
\hat{\rho} = \inv{2}
\lr{ \ket{ \uparrow \downarrow } - \ket{ \downarrow \uparrow } }
\lr{ \bra{ \uparrow \downarrow } - \bra{ \downarrow \uparrow } }.
\end{equation}

What are the reduced density matrices?

\begin{equation}\label{eqn:qmLecture3:340}
\begin{aligned}
\hat{\rho}_{\textrm{L}}
&= \textrm{Tr}_{\textrm{R}} \lr{ \hat{\rho} } \\
&=
\inv{2} (-1)(-1) \ket{\downarrow}\bra{\downarrow}
+\inv{2} (+1)(+1) \ket{\uparrow}\bra{\uparrow},
\end{aligned}
\end{equation}

so the matrix representation of this reduced density operator is

\begin{equation}\label{eqn:qmLecture3:380}
\hat{\rho}_{\textrm{L}}
=
\inv{2}
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}.
\end{equation}

For the second pure state the density operator is
\begin{equation}\label{eqn:qmLecture3:400}
\hat{\rho} = \inv{2}
\lr{ \ket{ \uparrow \downarrow } + \ket{ \uparrow \uparrow } }
\lr{ \bra{ \uparrow \downarrow } + \bra{ \uparrow \uparrow } }.
\end{equation}

The reduced density operator for this state is

\begin{equation}\label{eqn:qmLecture3:420}
\begin{aligned}
\hat{\rho}_{\textrm{L}}
&= \textrm{Tr}_{\textrm{R}} \lr{ \hat{\rho} } \\
&=
\inv{2} \ket{\uparrow}\bra{\uparrow}
+\inv{2} \ket{\uparrow}\bra{\uparrow} \\
&=
\ket{\uparrow}\bra{\uparrow} .
\end{aligned}
\end{equation}

This has a matrix representation

\begin{equation}\label{eqn:qmLecture3:440}
\hat{\rho}_{\textrm{L}}
=
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}.
\end{equation}

In this second example, we have more information about the left partition. That will be seen as a zero entanglement entropy in the problem set. In contrast we have less information about the first state, and will find a non-zero positive entanglement entropy in that case.
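Both of these reduced density operators can be double checked mechanically (my addition), reshaping each four component state into its \( C_{mn} \) coefficient matrix and forming \( C C^\dagger \) as before:

```python
import numpy as np

# Basis ordering: |up,up>, |up,down>, |down,up>, |down,down>.
psi1 = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|ud> - |du>)/sqrt(2)
psi2 = np.array([1, 1, 0, 0]) / np.sqrt(2)    # (|ud> + |uu>)/sqrt(2)

for psi, expected in [
    (psi1, np.eye(2) / 2),                    # maximally mixed left partition
    (psi2, np.diag([1.0, 0.0])),              # pure |up><up|
]:
    C = psi.reshape(2, 2)                     # C[m, n]: left index m, right index n
    rho_left = C @ C.conj().T                 # trace over the right spin
    assert np.allclose(rho_left, expected)
```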

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Free particle propagator

September 7, 2015 phy1520

Question: Free particle propagator ([1] pr. 2.31)

Derive the free particle propagator in one and three dimensions.

Answer

I found the description in the text confusing, so let's start from scratch with the definition of the propagator. This is the kernel of the spatial convolution integral that encodes time evolution, and can be found by expanding a general time state using two sets of identity operators. Let the position representation of the state at time \( t \), relative to an initial time \( t_0 \), be \( \braket{\Bx}{\alpha, t ; t_0 } \), and expand it in terms of a complete basis of energy eigenstates \( \ket{a'} \) and the time evolution operator

\begin{equation}\label{eqn:freeParticlePropagator:20}
\begin{aligned}
\braket{\Bx''}{\alpha, t ; t_0 }
&= \bra{\Bx''} U \ket{\alpha, t_0 } \\
&= \bra{\Bx''} e^{-i H (t - t_0)/\Hbar} \ket{\alpha, t_0 } \\
&= \bra{\Bx''} e^{-i H (t - t_0)/\Hbar} \lr{ \sum_{a'} \ket{a'} \bra{a'} } \ket{\alpha, t_0 } \\
&= \bra{\Bx''} \sum_{a'} e^{-i E_{a'} (t - t_0)/\Hbar} \ket{a'} \braket{a'}{\alpha, t_0 } \\
&=
\bra{\Bx''} \sum_{a'} e^{-i E_{a'} (t - t_0)/\Hbar} \ket{a'} \bra{a'}
\lr{ \int d^3 \Bx'
\ket{\Bx'}\bra{\Bx'}
}
\ket{\alpha, t_0 } \\
&=
\int d^3 \Bx'
\lr{
\bra{\Bx''} \sum_{a'} e^{-i E_{a'} (t - t_0)/\Hbar} \ket{a'} \braket{a'}{\Bx'}
}
\braket{\Bx'}{\alpha, t_0 } \\
&=
\int d^3 \Bx' K(\Bx'', t ; \Bx', t_0) \braket{\Bx'}{\alpha, t_0 },
\end{aligned}
\end{equation}

where

\begin{equation}\label{eqn:freeParticlePropagator:40}
K(\Bx'', t ; \Bx', t_0) =
\sum_{a'}
\braket{\Bx''}{a'}\braket{a'}{\Bx'}
e^{-i E_{a'} (t - t_0)/\Hbar},
\end{equation}

the propagator, is the kernel of the convolution integral that takes the state \( \ket{\alpha, t_0} \) to state \( \ket{\alpha, t ; t_0} \). Evaluating this over the momentum states (where integration and not plain summation is required), we have

\begin{equation}\label{eqn:freeParticlePropagator:60}
\begin{aligned}
K(\Bx'', t ; \Bx', t_0)
&=
\int d^3 \Bp'
\braket{\Bx''}{\Bp'}\braket{\Bp'}{\Bx'}
e^{-i E_{\Bp'} (t - t_0)/\Hbar} \\
&=
\int d^3 \Bp'
\braket{\Bx''}{\Bp'}\braket{\Bp'}{\Bx'}
\exp\lr{-i \frac{(\Bp')^2 (t - t_0)}{2 m \Hbar}} \\
&=
\int d^3 \Bp'
\frac{e^{i \Bx'' \cdot \Bp'/\Hbar}}{(\sqrt{2 \pi \Hbar})^3}
\frac{e^{-i \Bx' \cdot \Bp'/\Hbar}}{(\sqrt{2 \pi \Hbar})^3}
\exp\lr{-i \frac{(\Bp')^2 (t - t_0)}{2 m \Hbar}} \\
&=
\inv{(2 \pi \Hbar)^3}
\int d^3 \Bp'
e^{i (\Bx'' - \Bx') \cdot \Bp'/\Hbar}
\exp\lr{-i \frac{(\Bp')^2 (t - t_0)}{2 m \Hbar}} \\
&=
\inv{ 2 \pi \Hbar }
\int_{-\infty}^\infty dp_1'
e^{i (x_1'' - x_1') p_1'/\Hbar}
\exp\lr{-i \frac{(p_1')^2 (t - t_0)}{2 m \Hbar}} \times \\
&\quad \inv{ 2 \pi \Hbar }
\int_{-\infty}^\infty dp_2'
e^{i (x_2'' - x_2') p_2'/\Hbar}
\exp\lr{-i \frac{(p_2')^2 (t - t_0)}{2 m \Hbar}} \times \\
&\quad \inv{ 2 \pi \Hbar }
\int_{-\infty}^\infty dp_3'
e^{i (x_3'' - x_3') p_3'/\Hbar}
\exp\lr{-i \frac{(p_3')^2 (t - t_0)}{2 m \Hbar}}.
\end{aligned}
\end{equation}

With \( a = \ifrac{(t -t_0)}{2 m \Hbar} \), each of these three integral factors is of the form

\begin{equation}\label{eqn:freeParticlePropagator:80}
\begin{aligned}
\inv{ 2 \pi \Hbar }
\int_{-\infty}^\infty dp
e^{i \Delta x p/\Hbar }
\exp\lr{-i a p^2}
&=
\inv{2 \pi \Hbar \sqrt{a}}
\int_{-\infty}^\infty du
e^{i \Delta x u/(\sqrt{a}\Hbar) }
\exp\lr{-i u^2} \\
&=
\inv{2 \pi \Hbar \sqrt{a}}
\int_{-\infty}^\infty du
e^{i \Delta x u/(\sqrt{a} \Hbar) }
\exp\lr{-i (u - \Delta x /(2\sqrt{a}\Hbar))^2 + i(\Delta x/(2\sqrt{a}\Hbar))^2} \\
&=
\inv{2 \pi \Hbar \sqrt{a}}
\exp\lr{ \frac{i(\Delta x)^2 2 m \Hbar}{4 (t -t_0) \Hbar^2} }
\int_{-\infty}^\infty dz
e^{-i z^2} \\
&= \sqrt{ \frac{ -i \pi 2 m \Hbar}{ 4 \pi^2 \Hbar^2 (t -t_0)} }
\exp\lr{ \frac{i(\Delta x)^2 m}{2 (t -t_0) \Hbar} } \\
&= \sqrt{ \frac{ m }{ 2 \pi i \Hbar (t -t_0)} }
\exp\lr{ \frac{i(\Delta x)^2 m}{2 (t -t_0) \Hbar} }.
\end{aligned}
\end{equation}

Note that the integral above has value \( \sqrt{-i\pi} \) which can be found by integrating over the contour of fig. 1, letting \( R \rightarrow \infty \).

fig. 1. Integration contour for \( \int e^{-i z^2} \)
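The same value can be found without the contour integral by inserting a damping factor, evaluating \( \int e^{-(\epsilon + i) z^2} dz = \sqrt{\pi/(\epsilon + i)} \) for \( \epsilon > 0 \), and letting \( \epsilon \rightarrow 0 \). Here's a rough numerical confirmation of that limit (my addition):

```python
import numpy as np

z = np.linspace(-40, 40, 1_200_001)
dz = z[1] - z[0]
for eps in (0.1, 0.01):
    numeric = np.sum(np.exp(-(eps + 1j) * z**2)) * dz
    assert np.isclose(numeric, np.sqrt(np.pi / (eps + 1j)))

# Limiting value as eps -> 0: sqrt(-i pi) ~ 1.2533 - 1.2533i.
print(np.sqrt(-1j * np.pi))
```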

Multiplying out each of the spatial direction factors gives the propagator in its closed form
\begin{equation}\label{eqn:freeParticlePropagator:120}
\boxed{
K(\Bx”, t ; \Bx’, t_0)
= \lr{ \sqrt{ \frac{ m }{ 2 \pi i \Hbar (t -t_0)} } }^3
\exp\lr{ \frac{i(\Bx'' - \Bx')^2 m}{2 (t - t_0) \Hbar} }.
}
\end{equation}

In one or two dimensions, the power of \( 3 \) need only be replaced by the number of dimensions.
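Here's a numerical spot check of the 1D version (my addition, not part of the problem): evolving a Gaussian wavepacket by convolution with \( K \) should match evolution done in momentum space, where free particle time evolution is just multiplication by \( e^{-i p^2 (t - t_0)/2 m \Hbar} \). The grid, packet width, and time below are arbitrary choices, with \( \Hbar = m = 1 \) and \( t_0 = 0 \):

```python
import numpy as np

hbar = m = 1.0
t = 1.0
x = np.linspace(-15, 15, 1024)
dx = x[1] - x[0]
psi0 = (2 / np.pi) ** 0.25 * np.exp(-(x**2))       # Gaussian packet at t = 0

# Momentum space evolution: multiply by exp(-i p^2 t / (2 m hbar)).
p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
psi_t = np.fft.ifft(np.exp(-1j * p**2 * t / (2 * m * hbar)) * np.fft.fft(psi0))

# The same evolution by direct convolution with the 1D propagator.
K = np.sqrt(m / (2j * np.pi * hbar * t)) * np.exp(
    1j * m * (x[:, None] - x[None, :]) ** 2 / (2 * hbar * t))
psi_conv = (K @ psi0) * dx

assert np.allclose(psi_t, psi_conv, atol=1e-6)
```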

Question: Momentum space free particle propagator ([1] pr. 2.33)

Derive the free particle propagator in momentum space.

Answer

The momentum space propagator follows in the same fashion as the spatial propagator

\begin{equation}\label{eqn:freeParticlePropagator:140}
\begin{aligned}
\braket{\Bp''}{\alpha, t ; t_0 }
&= \bra{\Bp''} U \ket{\alpha, t_0 } \\
&= \bra{\Bp''} e^{-i H (t - t_0)/\Hbar} \ket{\alpha, t_0 } \\
&= \bra{\Bp''} e^{-i H (t - t_0)/\Hbar} \lr{ \sum_{a'} \ket{a'} \bra{a'} } \ket{\alpha, t_0 } \\
&= \bra{\Bp''} \sum_{a'} e^{-i E_{a'} (t - t_0)/\Hbar} \ket{a'} \braket{a'}{\alpha, t_0 } \\
&=
\bra{\Bp''} \sum_{a'} e^{-i E_{a'} (t - t_0)/\Hbar} \ket{a'} \bra{a'}
\lr{ \int d^3 \Bp'
\ket{\Bp'}\bra{\Bp'}
}
\ket{\alpha, t_0 } \\
&=
\int d^3 \Bp'
\lr{
\bra{\Bp''} \sum_{a'} e^{-i E_{a'} (t - t_0)/\Hbar} \ket{a'} \braket{a'}{\Bp'}
}
\braket{\Bp'}{\alpha, t_0 } \\
&=
\int d^3 \Bp' K(\Bp'', t ; \Bp', t_0) \braket{\Bp'}{\alpha, t_0 },
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:freeParticlePropagator:160}
K(\Bp'', t ; \Bp', t_0)
=
\sum_{a'}
\braket{\Bp''}{a'}
\braket{a'}{\Bp'}
e^{-i E_{a'} (t - t_0)/\Hbar}.
\end{equation}

For the free particle Hamiltonian, this can be evaluated over a momentum space basis

\begin{equation}\label{eqn:freeParticlePropagator:170}
\begin{aligned}
K(\Bp'', t ; \Bp', t_0)
&=
\int d^3 \Bp'''
\braket{\Bp''}{\Bp'''}
\braket{\Bp'''}{\Bp'}
e^{-i E_{\Bp'''} (t - t_0)/\Hbar} \\
&=
\int d^3 \Bp'''
\braket{\Bp''}{\Bp'''}
\delta(\Bp''' - \Bp')
\exp\lr{ -i \frac{(\Bp''')^2 (t - t_0)}{2 m \Hbar}} \\
&=
\braket{\Bp''}{\Bp'}
\exp\lr{ -i \frac{(\Bp')^2 (t - t_0)}{2 m \Hbar}},
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:freeParticlePropagator:200}
\boxed{
K(\Bp'', t ; \Bp', t_0)
=
\delta( \Bp'' - \Bp' )
\exp\lr{ -i \frac{(\Bp')^2 (t - t_0)}{2 m \Hbar}}.
}
\end{equation}

This is what we expect since the time evolution is given by just this exponential factor

\begin{equation}\label{eqn:freeParticlePropagator:220}
\begin{aligned}
\braket{\Bp'}{\alpha, t ; t_0 }
&= \bra{\Bp'} \exp\lr{ -i \frac{(\Bp')^2 (t - t_0)}{2 m \Hbar}} \ket{\alpha, t_0} \\
&=
\exp\lr{ -i \frac{(\Bp')^2 (t - t_0)}{2 m \Hbar}}
\braket{\Bp'}{\alpha, t_0}.
\end{aligned}
\end{equation}

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

More ket problems

August 5, 2015 phy1520

Question: Uncertainty relation. ([1] pr. 1.20)

Find the ket that maximizes the uncertainty product

\begin{equation}\label{eqn:moreKet:140}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2},
\end{equation}

and compare to the uncertainty bound \( \inv{4} \Abs{ \expectation{\antisymmetric{S_x}{S_y}}}^2 \).

Answer

To parameterize the ket space, consider first the kets where both components are nonzero, in which case a single complex number parameterizes the ket

\begin{equation}\label{eqn:moreKet:160}
\ket{s} =
\begin{bmatrix}
\beta’ e^{i\phi’} \\
\alpha’ e^{i\theta’} \\
\end{bmatrix}
\propto
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix}
\end{equation}

The expectation values with respect to this ket are
\begin{equation}\label{eqn:moreKet:180}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 &
\alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix}
\alpha e^{i\theta} \\
1 \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\lr{ \alpha e^{i\theta} + \alpha e^{-i\theta} } \\
&=
\frac{\Hbar}{2}
2 \alpha \cos\theta \\
&=
\Hbar \alpha \cos\theta.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreKet:200}
\begin{aligned}
\expectation{S_y}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{i\Hbar}{2}
\begin{bmatrix}
1 & \alpha e^{-i\theta} \\
\end{bmatrix}
\begin{bmatrix}
-\alpha e^{i\theta} \\
1 \\
\end{bmatrix} \\
&=
\frac{-i \alpha \Hbar}{2} 2 i \sin\theta \\
&=
\alpha \Hbar \sin\theta.
\end{aligned}
\end{equation}

The variances are
\begin{equation}\label{eqn:moreKet:220}
\begin{aligned}
\lr{ \Delta S_x }^2
&=
\lr{
\frac{\Hbar}{2}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix}
}^2 \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix}
\begin{bmatrix}
-2 \alpha \cos\theta & 1 \\
1 & -2 \alpha \cos\theta \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 & -4 \alpha \cos\theta \\
-4 \alpha \cos\theta & 4 \alpha^2 \cos^2\theta + 1 \\
\end{bmatrix},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:moreKet:240}
\begin{aligned}
\lr{ \Delta S_y }^2
&=
\lr{
\frac{\Hbar}{2}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix}
}^2 \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix}
\begin{bmatrix}
-2 \alpha \sin\theta & -i \\
i & -2 \alpha \sin\theta \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 & 4 \alpha i \sin\theta \\
-4 \alpha i \sin\theta & 4 \alpha^2 \sin^2\theta + 1 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

The uncertainty factors are

\begin{equation}\label{eqn:moreKet:260}
\begin{aligned}
\expectation{\lr{\Delta S_x}^2}
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 & -4 \alpha \cos\theta \\
-4 \alpha \cos\theta & 4 \alpha^2 \cos^2\theta + 1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta}
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \cos^2\theta + 1 -4 \alpha^2 \cos\theta e^{i\theta} \\
-4 \alpha \cos\theta + 4 \alpha^3 \cos^2\theta e^{i\theta} + \alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta + 1 -4 \alpha^2 \cos\theta e^{i\theta}
-4 \alpha^2 \cos\theta e^{-i\theta} + 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta + 1 -8 \alpha^2 \cos^2\theta
+ 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
-4 \alpha^2 \cos^2\theta + 1
+ 4 \alpha^4 \cos^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \cos^2\theta \lr{ \alpha^2 – 1 }
+ \alpha^2 + 1
}
,
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:moreKet:280}
\begin{aligned}
\expectation{ \lr{ \Delta S_y }^2 }
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 & 4 \alpha i \sin\theta \\
-4 \alpha i \sin\theta & 4 \alpha^2 \sin^2\theta + 1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
\alpha e^{i\theta}
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\begin{bmatrix}
1 & \alpha e^{-i\theta}
\end{bmatrix}
\begin{bmatrix}
4 \alpha^2 \sin^2\theta + 1 + 4 \alpha^2 i \sin\theta e^{i\theta} \\
-4 \alpha i \sin\theta + 4 \alpha^3 \sin^2\theta e^{i\theta} + \alpha e^{i\theta} \\
\end{bmatrix} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \sin^2\theta + 1 + 4 \alpha^2 i \sin\theta e^{i\theta}
-4 \alpha^2 i \sin\theta e^{-i\theta} + 4 \alpha^4 \sin^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
-4 \alpha^2 \sin^2\theta + 1
+ 4 \alpha^4 \sin^2\theta + \alpha^2
} \\
&=
\frac{\Hbar^2}{4}
\lr{
4 \alpha^2 \sin^2\theta \lr{ \alpha^2 – 1}
+ \alpha^2
+ 1
}
.
\end{aligned}
\end{equation}

The uncertainty product can finally be calculated

\begin{equation}\label{eqn:moreKet:300}
\begin{aligned}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
&=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^2 \cos^2\theta \lr{ \alpha^2 - 1 }
+ \alpha^2 + 1
}
\lr{
4 \alpha^2 \sin^2\theta \lr{ \alpha^2 - 1}
+ \alpha^2
+ 1
} \\
&=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^4 \sin^2 \lr{ 2\theta } \lr{ \alpha^2 - 1 }^2
+ 4 \alpha^2 \lr{ \alpha^4 - 1 }
+ \lr{\alpha^2 + 1 }^2
}.
\end{aligned}
\end{equation}

The maximum occurs when \( f = \sin^2 2 \theta \) is extremized. Those points are
\begin{equation}\label{eqn:moreKet:320}
\begin{aligned}
0
&= \PD{\theta}{f} \\
&= 4 \sin 2 \theta \cos 2\theta \\
&= 2 \sin 4 \theta.
\end{aligned}
\end{equation}

Those points are at \( 4 \theta = \pi n \), for integer \( n \), or

\begin{equation}\label{eqn:moreKet:340}
\theta = \frac{\pi n}{4}, \qquad n \in \setlr{ 0, 1, \cdots, 7 }.
\end{equation}

Minima will occur when

\begin{equation}\label{eqn:moreKet:360}
0 < \PDSq{\theta}{f} = 8 \cos 4\theta,
\end{equation}

or

\begin{equation}\label{eqn:moreKet:380}
n = 0, 2, 4, 6.
\end{equation}

At these points \( \sin^2 2\theta \) takes the values

\begin{equation}\label{eqn:moreKet:400}
\sin^2 \lr{ 2 \frac{\pi}{4} \setlr{ 0, 2, 4, 6 } } = \sin^2 \lr{ \pi \setlr{ 0, 1, 2, 3 } } \in \setlr{ 0 },
\end{equation}

so the maximization of the uncertainty product can be reduced to that of

\begin{equation}\label{eqn:moreKet:420}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
=
\lr{\frac{\Hbar}{2} }^4
\lr{
4 \alpha^2 \lr{ \alpha^4 - 1 } + \lr{\alpha^2 + 1 }^2
}.
\end{equation}

We seek

\begin{equation}\label{eqn:moreKet:440}
\begin{aligned}
0
&= \PD{\alpha}{} \lr{ 4 \alpha^2 \lr{ \alpha^4 - 1 } + \lr{\alpha^2 + 1 }^2 } \\
&= 8 \alpha \lr{ \alpha^4 - 1 } + 16 \alpha^5 + 4 \lr{\alpha^2 + 1 } \alpha \\
&= 4 \alpha \lr{ 2 \alpha^4 - 2 + 4 \alpha^4 + \alpha^2 + 1 } \\
&= 4 \alpha \lr{ 6 \alpha^4 + \alpha^2 - 1 } \\
&= 4 \alpha \lr{ 3 \alpha^2 - 1 } \lr{ 2 \alpha^2 + 1 }.
\end{aligned}
\end{equation}

The real roots of this polynomial are \( \alpha = 0 \) and \( \alpha = \pm 1/\sqrt{3} \). Evaluating the product at these points gives \( \lr{\Hbar/2}^4 \) at \( \alpha = 0 \), but only \( \lr{\Hbar/2}^4 \times 16/27 \) at \( \alpha^2 = 1/3 \), so of these critical points it is \( \alpha = 0 \) that (locally) maximizes the uncertainty product, giving

\begin{equation}\label{eqn:moreKet:460}
\ket{s} =
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= \ket{+}.
\end{equation}

The search for this maximizing value excluded those kets proportional to \( \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \ket{-} \). Let's compute the values of this uncertainty product at both \( \ket{\pm} \), and compare to the uncertainty commutator. First, for \( \ket{s} = \ket{+} \),

\begin{equation}\label{eqn:moreKet:480}
\expectation{S_x}
=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
0 & 1 \\
1 & 0 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= 0,
\end{equation}

and

\begin{equation}\label{eqn:moreKet:500}
\expectation{S_y}
=
\frac{\Hbar}{2}
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
0 & -i \\
i & 0 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
= 0,
\end{equation}

so

\begin{equation}\label{eqn:moreKet:520}
\expectation{ \lr{ \Delta S_x }^2 }
=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\lr{\frac{\Hbar}{2}}^2,
\end{equation}

and

\begin{equation}\label{eqn:moreKet:540}
\expectation{ \lr{ \Delta S_y }^2 }
=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\lr{\frac{\Hbar}{2}}^2.
\end{equation}

For the commutator side of the uncertainty relation we have

\begin{equation}\label{eqn:moreKet:560}
\begin{aligned}
\inv{4} \Abs{ \expectation{ \antisymmetric{ S_x}{ S_y } } }^2
&=
\inv{4} \Abs{ \expectation{ i \Hbar S_z } }^2 \\
&=
\lr{ \frac{\Hbar}{2} }^4
\Abs{
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & -1 \\
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix}
}^2,
\end{aligned}
\end{equation}

so for the \( \ket{+} \) state we have an equality condition for the uncertainty relation

\begin{equation}\label{eqn:moreKet:580}
\expectation{\lr{\Delta S_x}^2}
\expectation{\lr{\Delta S_y}^2}
=
\inv{4} \Abs{ \expectation{\antisymmetric{S_x}{S_y}}}^2
=
\lr{ \frac{\Hbar}{2} }^4.
\end{equation}

It's reasonable to guess that the \( \ket{-} \) state also satisfies the equality condition. Let's check:

\begin{equation}\label{eqn:moreKet:600}
\expectation{S_x}
=
\frac{\Hbar}{2}
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
0 & 1 \\
1 & 0 \\
\end{bmatrix}
\begin{bmatrix}
0 \\
1
\end{bmatrix}
= 0,
\end{equation}

and

\begin{equation}\label{eqn:moreKet:620}
\expectation{S_y}
=
\frac{\Hbar}{2}
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
0 & -i \\
i & 0 \\
\end{bmatrix}
\begin{bmatrix}
0 \\
1
\end{bmatrix}
= 0,
\end{equation}

so \( \expectation{ \lr{ \Delta S_x }^2 } = \expectation{ \lr{ \Delta S_y }^2 } = \lr{\frac{\Hbar}{2}}^2 \). The commutator side of the uncertainty relation is identical, so the equality of \ref{eqn:moreKet:580} is satisfied for both \( \ket{\pm} \). Note that it wasn't explicitly verified that \( \ket{-} \) maximizes the uncertainty product, but I don't feel like working through that second set of algebraic mess. We can also see by example that the equality condition does not mean that the product is maximized: it is straightforward to show that the \( \ket{ S_x ; \pm } \) states also satisfy the equality condition of the uncertainty relation, but for them the product is not maximized; it is zero.
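Here's a quick numeric check of these equality claims (my addition), with \( \Hbar = 1 \) so that \( \lr{\Hbar/2}^4 = 1/16 \):

```python
import numpy as np

Sx = np.array([[0, 1], [1, 0]]) / 2          # hbar = 1
Sy = np.array([[0, -1j], [1j, 0]]) / 2

def uncertainty_sides(ket):
    ex = lambda A: (ket.conj() @ A @ ket).real
    var = lambda A: ex(A @ A) - ex(A) ** 2
    comm = Sx @ Sy - Sy @ Sx                 # i hbar S_z
    bound = abs(ket.conj() @ comm @ ket) ** 2 / 4
    return var(Sx) * var(Sy), bound

for ket, expected in [
    (np.array([1, 0]), 1 / 16),              # |+>
    (np.array([0, 1]), 1 / 16),              # |->
    (np.array([1, 1]) / np.sqrt(2), 0.0),    # |S_x ; +>
    (np.array([1, -1]) / np.sqrt(2), 0.0),   # |S_x ; ->
]:
    product, bound = uncertainty_sides(ket)
    assert np.isclose(product, bound) and np.isclose(product, expected)
```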

Question: Degenerate ket space example. ([1] pr. 1.23)

Consider operators with representation

\begin{equation}\label{eqn:moreKet:20}
A =
\begin{bmatrix}
a & 0 & 0 \\
0 & -a & 0 \\
0 & 0 & -a
\end{bmatrix}
,
\qquad
B =
\begin{bmatrix}
b & 0 & 0 \\
0 & 0 & -ib \\
0 & ib & 0
\end{bmatrix}.
\end{equation}

Show that these both have degeneracies, that they commute, and compute a simultaneous eigenbasis for both operators.

Answer

The eigenvalues and eigenvectors for \( A \) can be read off by inspection, with values of \( a, -a, -a \), and kets

\begin{equation}\label{eqn:moreKet:40}
\ket{a_1} =
\begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix},
\ket{a_2} =
\begin{bmatrix}
0 \\
1 \\
0
\end{bmatrix},
\ket{a_3} =
\begin{bmatrix}
0 \\
0 \\
1 \\
\end{bmatrix}
\end{equation}

Notice that the lower-right \( 2 \times 2 \) submatrix of \( B \) is proportional to \( \sigma_y \), so its eigenkets can be formed by inspection

\begin{equation}\label{eqn:moreKet:60}
\ket{b_1} =
\begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix},
\ket{b_2} =
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
1 \\
i
\end{bmatrix},
\ket{b_3} =
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
1 \\
-i \\
\end{bmatrix}.
\end{equation}

Computing \( B \ket{b_i} \) shows that the eigenvalues are \( b, b, -b \) respectively.

Because of the two-fold degeneracy in the \( -a \) eigenvalues of \( A \), any linear combination of \( \ket{a_2}, \ket{a_3} \) will also be an eigenket. In particular,

\begin{equation}\label{eqn:moreKet:80}
\begin{aligned}
\inv{\sqrt{2}} \lr{ \ket{a_2} + i \ket{a_3} } &= \ket{b_2} \\
\inv{\sqrt{2}} \lr{ \ket{a_2} - i \ket{a_3} } &= \ket{b_3},
\end{aligned}
\end{equation}

so the basis \( \setlr{ \ket{b_i}} \) is a simultaneous eigenbasis for both \( A \) and \(B\). Because there is a simultaneous eigenbasis, the matrices must commute. This can be confirmed with direct computation

\begin{equation}\label{eqn:moreKet:100}
\begin{aligned}
A B
&= a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & -i \\
0 & i & 0
\end{bmatrix} \\
&=
a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & i \\
0 & -i & 0
\end{bmatrix},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:moreKet:120}
\begin{aligned}
B A
&= a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & -i \\
0 & i & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & -1
\end{bmatrix} \\
&=
a b
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & i \\
0 & -i & 0
\end{bmatrix}.
\end{aligned}
\end{equation}
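All of these claims can be verified with a few lines of numpy (my addition, arbitrarily taking \( a = b = 1 \)):

```python
import numpy as np

a = b = 1.0
A = a * np.diag([1, -1, -1])
B = b * np.array([[1, 0, 0], [0, 0, -1j], [0, 1j, 0]])

assert np.allclose(A @ B, B @ A)             # [A, B] = 0

b1 = np.array([1, 0, 0])
b2 = np.array([0, 1, 1j]) / np.sqrt(2)
b3 = np.array([0, 1, -1j]) / np.sqrt(2)

# Simultaneous eigenkets, with eigenvalue pairs (a, b), (-a, b), (-a, -b).
for ket, eigA, eigB in [(b1, a, b), (b2, -a, b), (b3, -a, -b)]:
    assert np.allclose(A @ ket, eigA * ket)
    assert np.allclose(B @ ket, eigB * ket)
```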

Question: Unitary transformation. ([1] pr. 1.26)

Construct the transformation matrix that maps from the \( S_z \) diagonal basis to the \( S_x \) diagonal basis.

Answer

Based on the definition

\begin{equation}\label{eqn:moreKet:640}
U \ket{a^{(r)}} = \ket{b^{(r)}},
\end{equation}

the matrix elements can be computed

\begin{equation}\label{eqn:moreKet:660}
\bra{a^{(s)}} U \ket{a^{(r)}} = \braket{a^{(s)}}{b^{(r)}},
\end{equation}

that is

\begin{equation}\label{eqn:moreKet:680}
\begin{aligned}
U
&=
\begin{bmatrix}
\bra{a^{(1)}} U \ket{a^{(1)}} & \bra{a^{(1)}} U \ket{a^{(2)}} \\
\bra{a^{(2)}} U \ket{a^{(1)}} & \bra{a^{(2)}} U \ket{a^{(2)}}
\end{bmatrix} \\
&=
\begin{bmatrix}
\braket{a^{(1)}}{b^{(1)}} & \braket{a^{(1)}}{b^{(2)}} \\
\braket{a^{(2)}}{b^{(1)}} & \braket{a^{(2)}}{b^{(2)}}
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ 1
\end{bmatrix} &
\begin{bmatrix}
1 & 0
\end{bmatrix}
\begin{bmatrix}
1 \\ -1
\end{bmatrix} \\
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
1 \\ 1
\end{bmatrix} &
\begin{bmatrix}
0 & 1
\end{bmatrix}
\begin{bmatrix}
1 \\ -1
\end{bmatrix} \\
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{aligned}
\end{equation}

As a similarity transformation, we have

\begin{equation}\label{eqn:moreKet:700}
\begin{aligned}
\bra{b^{(r)}} S_z \ket{b^{(s)}}
&=
\braket{b^{(r)}}{a^{(t)}}\bra{a^{(t)}} S_z \ket{a^{(u)}}\braket{a^{(u)}}{b^{(s)}} \\
&=
\bra{a^{(r)}} U^\dagger \ket{a^{(t)}}\bra{a^{(t)}} S_z \ket{a^{(u)}}\bra{a^{(u)}} U \ket{a^{(s)}},
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:moreKet:720}
S_z’ = U^\dagger S_z U.
\end{equation}

Let's check that the computed similarity transformation does its job.
\begin{equation}\label{eqn:moreKet:740}
\begin{aligned}
\sigma_z’
&= U^\dagger \sigma_z U \\
&= \inv{2}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\inv{2}
\begin{bmatrix}
1 & -1 \\
1 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\inv{2}
\begin{bmatrix}
0 & 2 \\
2 & 0
\end{bmatrix} \\
&= \sigma_x.
\end{aligned}
\end{equation}
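Here's the same check in numpy form (my addition):

```python
import numpy as np

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
sigma_z = np.diag([1, -1])
sigma_x = np.array([[0, 1], [1, 0]])

assert np.allclose(U.conj().T @ U, np.eye(2))          # U is unitary
assert np.allclose(U.conj().T @ sigma_z @ U, sigma_x)  # maps sigma_z to sigma_x
```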

The transformation matrix can also be computed more directly

\begin{equation}\label{eqn:moreKet:760}
\begin{aligned}
U
&= U \ket{a^{(r)}} \bra{a^{(r)}} \\
&= \ket{b^{(r)}}\bra{a^{(r)}} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}
\begin{bmatrix}
1 & 0
\end{bmatrix}
+
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
-1
\end{bmatrix}
\begin{bmatrix}
0 & 1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 0 \\
1 & 0
\end{bmatrix}
+
\inv{\sqrt{2}}
\begin{bmatrix}
0 & 1 \\
0 & -1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{aligned}
\end{equation}

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Bra-ket and spin one-half problems

July 27, 2015 phy1520

Question: Operator matrix representation ([1] pr. 1.5)

(a)

Determine the matrix representation of \( \ket{\alpha}\bra{\beta} \) given a complete set of eigenvectors \( \ket{a^r} \).

(b)

Verify with \( \ket{\alpha} = \ket{s_z = \Hbar/2}, \ket{\beta} = \ket{s_x = \Hbar/2} \).

Answer

(a)

Forming the matrix element

\begin{equation}\label{eqn:moreBraKetProblems:20}
\begin{aligned}
\bra{a^r} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^s}
&=
\braket{a^r}{\alpha}\braket{\beta}{a^s} \\
&=
\braket{a^r}{\alpha}
\braket{a^s}{\beta}^\conj,
\end{aligned}
\end{equation}

the matrix representation is seen to be

\begin{equation}\label{eqn:moreBraKetProblems:40}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
\bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}
=
\begin{bmatrix}
\braket{a^1}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^1}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\braket{a^2}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^2}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}.
\end{equation}

(b)

First compute the spin-z representation of \( \ket{s_x = \Hbar/2 } \).

\begin{equation}\label{eqn:moreBraKetProblems:60}
\begin{aligned}
0
&=
\lr{ S_x - \frac{\Hbar}{2} I }
\begin{bmatrix}
a \\
b
\end{bmatrix} \\
&=
\lr{
\begin{bmatrix}
0 & \Hbar/2 \\
\Hbar/2 & 0 \\
\end{bmatrix}
-
\begin{bmatrix}
\Hbar/2 & 0 \\
0 & \Hbar/2 \\
\end{bmatrix}
}
\begin{bmatrix}
a \\
b
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
-1 & 1 \\
1 & -1 \\
\end{bmatrix}
\begin{bmatrix}
a \\
b
\end{bmatrix},
\end{aligned}
\end{equation}

so \( \ket{s_x = \Hbar/2 } \propto (1,1) \).

Normalized we have

\begin{equation}\label{eqn:moreBraKetProblems:80}
\begin{aligned}
\ket{\alpha} &= \ket{s_z = \Hbar/2 } =
\begin{bmatrix}
1 \\
0
\end{bmatrix}, \\
\ket{\beta} &= \ket{s_x = \Hbar/2 } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}.
\end{aligned}
\end{equation}

Using \ref{eqn:moreBraKetProblems:40} the matrix representation is

\begin{equation}\label{eqn:moreBraKetProblems:100}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
(1) (1/\sqrt{2})^\conj & (1) (1/\sqrt{2})^\conj \\
(0) (1/\sqrt{2})^\conj & (0) (1/\sqrt{2})^\conj \\
\end{bmatrix}
=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{equation}

This can be confirmed with direct computation
\begin{equation}\label{eqn:moreBraKetProblems:120}
\begin{aligned}
\ket{\alpha}\bra{\beta}
&=
\begin{bmatrix}
1 \\
0
\end{bmatrix}
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}

Question: Eigenvalue of a sum of kets ([1] pr. 1.6)

Given eigenkets \( \ket{i}, \ket{j} \) of an operator \( A \), what are the conditions under which \( \ket{i} + \ket{j} \) is also an eigenket?

Answer

Let \( A \ket{i} = i \ket{i}, A \ket{j} = j \ket{j} \), and suppose that the sum is an eigenket. Then there must be a value \( a \) such that

\begin{equation}\label{eqn:moreBraKetProblems:140}
A \lr{ \ket{i} + \ket{j} } = a \lr{ \ket{i} + \ket{j} },
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:160}
i \ket{i} + j \ket{j} = a \lr{ \ket{i} + \ket{j} }.
\end{equation}

Operating with \( \bra{i}, \bra{j} \) respectively, gives

\begin{equation}\label{eqn:moreBraKetProblems:180}
\begin{aligned}
i &= a \\
j &= a,
\end{aligned}
\end{equation}

so for the sum to be an eigenket, both of the corresponding eigenvalues must be identical (i.e., linear combinations of degenerate eigenkets are also eigenkets).

Question: Null operator ([1] pr. 1.7)

Given eigenkets \( \ket{a’} \) of operator \( A \)

(a)

show that

\begin{equation}\label{eqn:moreBraKetProblems:200}
\prod_{a'} \lr{ A - a' }
\end{equation}

is the null operator.

(b)

What is the significance of

\begin{equation}\label{eqn:moreBraKetProblems:220}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \, ?
\end{equation}

(c)

Illustrate using \( S_z \) for a spin 1/2 system.

Answer

(a)

Application of any factor \( A - a' \) to \( \ket{a} \), an eigenket of \( A \) with eigenvalue \( a \), scales \( \ket{a} \) by \( a - a' \), so the product operating on \( \ket{a} \) is

\begin{equation}\label{eqn:moreBraKetProblems:240}
\prod_{a'} \lr{ A - a' } \ket{a} = \prod_{a'} \lr{ a - a' } \ket{a}.
\end{equation}

Since \( \ket{a} \) is one of the \( \setlr{\ket{a'}} \) eigenkets of \( A \), one of the factors \( a - a' \) must be zero, so the product kills every eigenket. Since the eigenkets span the space, the product is the null operator.

(b)

Again, consider the action of the operator on \( \ket{a} \),

\begin{equation}\label{eqn:moreBraKetProblems:260}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a}
=
\prod_{a'' \ne a'} \frac{\lr{ a - a'' }}{a' - a''} \ket{a}.
\end{equation}

If \( \ket{a} = \ket{a'} \), then \( \prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a} = \ket{a} \), whereas if not, then \( a \) equals one of the \( a'' \) eigenvalues, one of the numerator factors vanishes, and the result is zero. This is a representation of the Kronecker delta

\begin{equation}\label{eqn:moreBraKetProblems:300}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a} \equiv \delta_{a', a} \ket{a}.
\end{equation}

(c)

For operator \( S_z \) the eigenvalues are \( \setlr{ \Hbar/2, -\Hbar/2 } \), so the null operator must be

\begin{equation}\label{eqn:moreBraKetProblems:280}
\begin{aligned}
\prod_{a'} \lr{ A - a' }
&=
\lr{ \frac{\Hbar}{2} }^2 \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix}
\begin{bmatrix}
2 & 0 \\
0 & 0 \\
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 0 \\
\end{bmatrix}
\end{aligned}
\end{equation}

For the delta representation, consider the \( \ket{\pm} \) states and their eigenvalues. The delta operators are

\begin{equation}\label{eqn:moreBraKetProblems:320}
\begin{aligned}
\prod_{a'' \ne \Hbar/2} \frac{\lr{ A - a'' }}{\Hbar/2 - a''}
&=
\frac{S_z - (-\Hbar/2) I}{\Hbar/2 - (-\Hbar/2)} \\
&=
\inv{2} \lr{ \sigma_z + I } \\
&=
\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\inv{2}
\begin{bmatrix}
2 & 0 \\
0 & 0
\end{bmatrix}
\\
&=
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreBraKetProblems:340}
\begin{aligned}
\prod_{a'' \ne -\Hbar/2} \frac{\lr{ A - a'' }}{-\Hbar/2 - a''}
&=
\frac{S_z - (\Hbar/2) I}{-\Hbar/2 - \Hbar/2} \\
&=
-\inv{2} \lr{ \sigma_z - I } \\
&=
-\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
-\inv{2}
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 1
\end{bmatrix}.
\end{aligned}
\end{equation}

These clearly have the expected delta function property acting on kets \( \ket{+} = (1,0), \ket{-} = (0, 1) \).
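Here's a numeric confirmation of the null operator and both projection properties (my addition, with \( \Hbar = 1 \)):

```python
import numpy as np

Sz = np.diag([1, -1]) / 2                     # hbar = 1, eigenvalues +-1/2
I = np.eye(2)

# Null operator: product over all eigenvalues.
assert np.allclose((Sz - I / 2) @ (Sz + I / 2), np.zeros((2, 2)))

# Kronecker delta operators: projections onto |+> and |->.
P_plus = (Sz + I / 2) / 1.0                   # (S_z - (-1/2) I)/(1/2 - (-1/2))
P_minus = (Sz - I / 2) / -1.0                 # (S_z - (1/2) I)/(-1/2 - 1/2)
assert np.allclose(P_plus, np.diag([1, 0]))
assert np.allclose(P_minus, np.diag([0, 1]))
```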

Question: Spin half general normal ([1] pr. 1.9)

Construct \( \ket{\BS \cdot \ncap ; + } \), where \( \ncap = ( \cos\alpha \sin\beta, \sin\alpha \sin\beta, \cos\beta ) \) such that

\begin{equation}\label{eqn:moreBraKetProblems:360}
\BS \cdot \ncap \ket{\BS \cdot \ncap ; + } =
\frac{\Hbar}{2} \ket{\BS \cdot \ncap ; + },
\end{equation}

Solve this as an eigenvalue problem.

Answer

The spin operator for this direction is

\begin{equation}\label{eqn:moreBraKetProblems:380}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \Bsigma \cdot \ncap \\
&= \frac{\Hbar}{2}
\lr{
\cos\alpha \sin\beta \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + \sin\alpha \sin\beta \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + \cos\beta \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}
} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta
\end{bmatrix}.
\end{aligned}
\end{equation}

Observe that this is traceless and has determinant \( -\Hbar^2/4 \), like any of the \( x,y,z \) spin operators.

Assuming that this has an \( \Hbar/2 \) eigenvalue (to be verified later), the matrix of interest for the eigenvalue problem is

\begin{equation}\label{eqn:moreBraKetProblems:400}
\begin{aligned}
\BS \cdot \ncap - \frac{\Hbar}{2} I
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta - 1 &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta - 1
\end{bmatrix} \\
&=
\Hbar
\begin{bmatrix}
- \sin^2 \frac{\beta}{2} &
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
\\
e^{i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
& -\cos^2 \frac{\beta}{2}
\end{bmatrix}
\end{equation}

This has a zero determinant as expected, and the eigenvector \( (a,b) \) will satisfy

\begin{equation}\label{eqn:moreBraKetProblems:420}
\begin{aligned}
0
&= - \sin^2 \frac{\beta}{2} a +
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
b \\
&= \sin\frac{\beta}{2} \lr{ - \sin \frac{\beta}{2} a +
e^{-i\alpha}
\cos\frac{\beta}{2}
b
},
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreBraKetProblems:440}
\begin{bmatrix}
a \\
b
\end{bmatrix}
\propto
\begin{bmatrix}
\cos\frac{\beta}{2} \\
e^{i\alpha}
\sin\frac{\beta}{2}
\end{bmatrix}.
\end{equation}

This is appropriately normalized, so the \( +\Hbar/2 \) eigenket of \( \BS \cdot \ncap \) is

\begin{equation}\label{eqn:moreBraKetProblems:460}
\ket{ \BS \cdot \ncap ; + } =
\cos\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\sin\frac{\beta}{2}
\ket{-}.
\end{equation}

Note that the other eigenket is

\begin{equation}\label{eqn:moreBraKetProblems:480}
\ket{ \BS \cdot \ncap ; – } =
-\sin\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\cos\frac{\beta}{2}
\ket{-}.
\end{equation}

It is straightforward to show that these are orthogonal and that this has the \( -\Hbar/2 \) eigenvalue.
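Here's a numeric spot check of both eigenkets (my addition, with \( \Hbar = 1 \) and arbitrarily chosen angles):

```python
import numpy as np

alpha, beta = 0.7, 1.2                         # arbitrary direction angles
n = np.array([np.cos(alpha) * np.sin(beta),
              np.sin(alpha) * np.sin(beta),
              np.cos(beta)])
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
Sn = sum(nk * sk for nk, sk in zip(n, sigma)) / 2   # hbar = 1

plus = np.array([np.cos(beta / 2), np.exp(1j * alpha) * np.sin(beta / 2)])
minus = np.array([-np.sin(beta / 2), np.exp(1j * alpha) * np.cos(beta / 2)])

assert np.allclose(Sn @ plus, plus / 2)        # +1/2 eigenket
assert np.allclose(Sn @ minus, -minus / 2)     # -1/2 eigenket
assert np.isclose(plus.conj() @ minus, 0)      # orthogonality
```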

Question: Two state Hamiltonian ([1] pr. 1.10)

Solve the eigenproblem for

\begin{equation}\label{eqn:moreBraKetProblems:500}
H = a \biglr{
\ket{1}\bra{1}
-\ket{2}\bra{2}
+\ket{1}\bra{2}
+\ket{2}\bra{1}
}
\end{equation}

Answer

In matrix form the Hamiltonian is

\begin{equation}\label{eqn:moreBraKetProblems:520}
H = a
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{equation}

The eigenvalue problem is

\begin{equation}\label{eqn:moreBraKetProblems:540}
\begin{aligned}
0
&= \Abs{ H - \lambda I } \\
&= (a - \lambda)(-a - \lambda) - a^2 \\
&= (-a + \lambda)(a + \lambda) - a^2 \\
&= \lambda^2 - a^2 - a^2,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:560}
\lambda = \pm \sqrt{2} a.
\end{equation}

An eigenket proportional to \( (\alpha,\beta) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:580}
0
= ( 1 \mp \sqrt{2} ) \alpha + \beta,
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:600}
\ket{\pm} \propto
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix},
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:620}
\begin{aligned}
\ket{\pm}
&=
\inv{\sqrt{1 + (1 \mp \sqrt{2})^2}}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix} \\
&=
\inv{\sqrt{4 \mp 2 \sqrt{2}}}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix}.
\end{aligned}
\end{equation}

That is
\begin{equation}\label{eqn:moreBraKetProblems:640}
\ket{\pm} =
\inv{\sqrt{4 \mp 2 \sqrt{2}}} \lr{
-\ket{1} + (1 \mp \sqrt{2}) \ket{2}
}.
\end{equation}
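Verifying the eigensystem numerically (my addition, with \( a = 1 \)):

```python
import numpy as np

a = 1.0
H = a * np.array([[1, 1], [1, -1]])

for sign in (+1, -1):
    lam = sign * np.sqrt(2) * a
    ket = np.array([-1, 1 - sign * np.sqrt(2)]) / np.sqrt(4 - sign * 2 * np.sqrt(2))
    assert np.isclose(np.linalg.norm(ket), 1)        # properly normalized
    assert np.allclose(H @ ket, lam * ket)           # eigenvector check
```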

Question: Spin half probability and dispersion ([1] pr. 1.12)

A spin \( 1/2 \) system \( \BS \cdot \ncap \), with \( \ncap = \sin \gamma \xcap + \cos\gamma \zcap \), is in the state with eigenvalue \( \Hbar/2 \).

(a)

If \( S_x \) is measured. What is the probability of getting \( + \Hbar/2 \)?

(b)

Evaluate the dispersion in \( S_x \), that is,

\begin{equation}\label{eqn:moreBraKetProblems:660}
\expectation{\lr{ S_x – \expectation{S_x}}^2}.
\end{equation}

Answer

(a)

In matrix form the spin operator for the system is

\begin{equation}\label{eqn:moreBraKetProblems:680}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \lr{ \cos\gamma \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \sin\gamma \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}} \\
&= \frac{\Hbar}{2}
\begin{bmatrix}
\cos\gamma & \sin\gamma \\
\sin\gamma & -\cos\gamma \\
\end{bmatrix}
\end{aligned}
\end{equation}

An eigenket \( \ket{\BS \cdot \ncap ; + } = (a,b) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:700}
\begin{aligned}
0
&= \lr{ \cos \gamma - 1 } a + \sin\gamma b \\
&= \lr{ -2 \sin^2 \frac{\gamma}{2} } a + 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} b \\
&= -\sin \frac{\gamma}{2} a + \cos\frac{\gamma}{2} b,
\end{aligned}
\end{equation}

so the eigenstate is
\begin{equation}\label{eqn:moreBraKetProblems:720}
\ket{\BS \cdot \ncap ; + }
=
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}.
\end{equation}

Pick \( \ket{S_x ; \pm } = \inv{\sqrt{2}}
\begin{bmatrix}
1 \\ \pm 1
\end{bmatrix} \) as the basis for the \( S_x \) operator. Then, for the probability that the system will end up in the \( + \Hbar/2 \) state of \( S_x \), we have

\begin{equation}\label{eqn:moreBraKetProblems:740}
\begin{aligned}
P
&= \Abs{\braket{ S_x ; + }{ \BS \cdot \ncap ; + } }^2 \\
&= \Abs{ \inv{\sqrt{2} }
{
\begin{bmatrix}
1 \\
1
\end{bmatrix}}^\dagger
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=\inv{2}
\Abs{
\begin{bmatrix}
1 & 1
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=
\inv{2}
\lr{
\cos\frac{\gamma}{2} +
\sin\frac{\gamma}{2}
}^2 \\
&=
\inv{2}
\lr{ 1 + 2 \cos\frac{\gamma}{2} \sin\frac{\gamma}{2} } \\
&=
\inv{2}
\lr{ 1 + \sin\gamma }.
\end{aligned}
\end{equation}

This is a reasonable looking result, with \( P \in [0, 1] \). Some special values further validate it:

\begin{equation}\label{eqn:moreBraKetProblems:760}
\begin{aligned}
\gamma &= 0, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\ket{S_z ; +}
=
\inv{\sqrt{2}} \ket{S_x;+}
+\inv{\sqrt{2}} \ket{S_x;-}
\\
\gamma &= \pi/2, \ket{\BS \cdot \ncap ; + } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}
=
\ket{S_x ; +}
\\
\gamma &= \pi, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
0 \\
1
\end{bmatrix}
=
\ket{S_z ; -}
=
\inv{\sqrt{2}} \ket{S_x;+}
-\inv{\sqrt{2}} \ket{S_x;-},
\end{aligned}
\end{equation}

where we see that the probabilities are given by the squared projections of the initial state onto the measured state \( \ket{S_x ; +} \).

(b)

The \( S_x \) expectation is

\begin{equation}\label{eqn:moreBraKetProblems:780}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\sin\frac{\gamma}{2} \\
\cos\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2} 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} \\
&=
\frac{\Hbar}{2} \sin\gamma.
\end{aligned}
\end{equation}

Note that \( S_x^2 = (\Hbar/2)^2I \), so

\begin{equation}\label{eqn:moreBraKetProblems:800}
\begin{aligned}
\expectation{S_x^2}
&=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{ \cos^2\frac{\gamma}{2} + \sin^2 \frac{\gamma}{2} } \\
&=
\lr{ \frac{\Hbar}{2} }^2.
\end{aligned}
\end{equation}

The dispersion is

\begin{equation}\label{eqn:moreBraKetProblems:820}
\begin{aligned}
\expectation{\lr{ S_x – \expectation{S_x}}^2}
&=
\expectation{S_x^2} - \expectation{S_x}^2 \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{1 - \sin^2 \gamma} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\cos^2 \gamma.
\end{aligned}
\end{equation}

At \( \gamma = \pi/2 \) the dispersion is 0, which is expected since \( \ket{\BS \cdot \ncap ; + } = \ket{ S_x ; + } \) at that point. Similarly, the dispersion is maximized at \( \gamma = 0, \pi \), where the state is an \( S_z \) eigenstate, as far from an \( S_x \) eigenstate as possible.
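Both results are easy to verify over a sweep of \( \gamma \) values (my addition, with \( \Hbar = 1 \)):

```python
import numpy as np

Sx = np.array([[0, 1], [1, 0]]) / 2            # hbar = 1
sx_plus = np.array([1, 1]) / np.sqrt(2)

for gamma in np.linspace(0, np.pi, 7):
    ket = np.array([np.cos(gamma / 2), np.sin(gamma / 2)])
    P = abs(sx_plus @ ket) ** 2
    assert np.isclose(P, (1 + np.sin(gamma)) / 2)

    ex = ket @ Sx @ ket
    ex2 = ket @ Sx @ Sx @ ket
    assert np.isclose(ex2 - ex**2, np.cos(gamma) ** 2 / 4)
```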

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.