
Time evolution of spin half probability and dispersion

October 15, 2015 phy1520


Question: Time evolution of spin half probability and dispersion ([1] pr. 2.3)

A spin \( 1/2 \) system, in the \( \BS \cdot \ncap \) eigenstate with eigenvalue \( \Hbar/2 \) for \( \ncap = \sin \beta \xcap + \cos\beta \zcap \), is acted on by a magnetic field of strength \( B \) in the \( +z \) direction.

(a)

If \( S_x \) is measured at time \( t \), what is the probability of getting \( + \Hbar/2 \)?

(b)

Evaluate the dispersion in \( S_x \) as a function of \( t \), that is,

\begin{equation}\label{eqn:spinTimeEvolution:20}
\expectation{\lr{ S_x - \expectation{S_x}}^2}.
\end{equation}

(c)

Check your answers for \( \beta \rightarrow 0, \pi/2 \) to see if they make sense.

Answer

(a)

The spin operator in matrix form is
\begin{equation}\label{eqn:spinTimeEvolution:40}
\begin{aligned}
S \cdot \ncap
&=
\frac{\Hbar}{2} \lr{ \sigma_z \cos\beta + \sigma_x \sin\beta } \\
&=
\frac{\Hbar}{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \cos\beta + \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} \sin\beta } \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta & \sin\beta \\
\sin\beta & -\cos\beta
\end{bmatrix}.
\end{aligned}
\end{equation}

The \( \ket{S \cdot \ncap ; + } \) eigenstate is found from

\begin{equation}\label{eqn:spinTimeEvolution:60}
\lr{ S \cdot \ncap - \Hbar/2}
\begin{bmatrix}
a \\
b
\end{bmatrix}
= 0,
\end{equation}

or

\begin{equation}\label{eqn:spinTimeEvolution:80}
\begin{aligned}
0
&=
\lr{ \cos\beta - 1 } a + \sin\beta b \\
&=
\lr{ -2 \sin^2(\beta/2) } a + 2 \sin(\beta/2) \cos(\beta/2) b \\
&=
\lr{ - \sin(\beta/2) } a + \cos(\beta/2) b,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:spinTimeEvolution:100}
\ket{ S \cdot \ncap ; + }
=
\begin{bmatrix}
\cos(\beta/2) \\
\sin(\beta/2) \\
\end{bmatrix}.
\end{equation}
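As a quick sanity check, the half angle form of this eigenstate can be verified numerically. This sketch is mine, not part of the original problem; the test angle is arbitrary:

```python
import numpy as np

beta = 0.7  # arbitrary test angle

# S . ncap in units of hbar/2: sigma_z cos(beta) + sigma_x sin(beta)
sdotn = np.array([[np.cos(beta), np.sin(beta)],
                  [np.sin(beta), -np.cos(beta)]])

# claimed eigenvector for eigenvalue +1 (i.e. +hbar/2)
ket = np.array([np.cos(beta / 2), np.sin(beta / 2)])

print(np.allclose(sdotn @ ket, ket))  # prints True (half-angle identities)
```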

The Hamiltonian is

\begin{equation}\label{eqn:spinTimeEvolution:120}
H
= – \frac{e B}{m c} S_z
= – \frac{e B \Hbar}{2 m c} \sigma_z,
\end{equation}

so the time evolution operator is

\begin{equation}\label{eqn:spinTimeEvolution:140}
U
= e^{-i H t/\Hbar}
= e^{ \frac{i e B t }{2 m c} \sigma_z }.
\end{equation}

Let \( \omega = e B/(2 m c) \), so

\begin{equation}\label{eqn:spinTimeEvolution:160}
\begin{aligned}
U
&=
e^{i \sigma_z \omega t} \\
&=
\cos(\omega t) + i \sigma_z \sin(\omega t) \\
&=
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
\cos(\omega t)
+
i \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \sin(\omega t) \\
&=
\begin{bmatrix}
e^{i \omega t} & 0 \\
0 & e^{-i \omega t}
\end{bmatrix}.
\end{aligned}
\end{equation}
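The closed diagonal form of \( U \) can be compared against a brute-force matrix exponential. This is my own sketch (the value of \( \omega t \) is an arbitrary test choice), using a truncated Taylor series for the exponential:

```python
import numpy as np

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
wt = 0.4  # omega * t, arbitrary test value
A = 1j * sigma_z * wt

# Taylor series for exp(A); plenty of terms for a matrix this small
U_series = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)
for k in range(1, 30):
    U_series += term
    term = term @ A / k

U_closed = np.diag([np.exp(1j * wt), np.exp(-1j * wt)])
print(np.allclose(U_series, U_closed))  # prints True
```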

The time evolution of the initial state is

\begin{equation}\label{eqn:spinTimeEvolution:180}
\begin{aligned}
\ket{S \cdot \ncap ; + }(t)
&=
U \ket{S \cdot \ncap ; + }(0) \\
&=
\begin{bmatrix}
e^{i \omega t} & 0 \\
0 & e^{-i \omega t}
\end{bmatrix}
\begin{bmatrix}
\cos(\beta/2) \\
\sin(\beta/2) \\
\end{bmatrix} \\
&=
\begin{bmatrix}
\cos(\beta/2) e^{i \omega t} \\
\sin(\beta/2) e^{-i \omega t} \\
\end{bmatrix}.
\end{aligned}
\end{equation}

The probability of finding the state in \( \ket{S \cdot \xcap ; + } \) at time \( t \) (i.e. measuring \( S_x \) and finding \( \Hbar/2 \)) is

\begin{equation}\label{eqn:spinTimeEvolution:200}
\begin{aligned}
\Abs{\braket{S \cdot \xcap ; + }{S \cdot \ncap ; + }}^2
&=
\Abs{\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
\end{bmatrix}
\begin{bmatrix}
\cos(\beta/2) e^{i \omega t} \\
\sin(\beta/2) e^{-i \omega t} \\
\end{bmatrix}
}^2 \\
&=
\inv{2}
\Abs{
\cos(\beta/2) e^{i \omega t} +
\sin(\beta/2) e^{-i \omega t} }^2 \\
&=
\inv{2} \lr{ 1 + 2 \cos(\beta/2) \sin(\beta/2) \cos(2 \omega t) } \\
&=
\inv{2} \lr{ 1 + \sin(\beta) \cos( 2 \omega t) }.
\end{aligned}
\end{equation}
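This closed form probability can be spot-checked numerically. The sketch below is mine (arbitrary \( \beta \), \( \omega \) test values), comparing the explicit projection against the formula:

```python
import numpy as np

beta, omega = 0.9, 1.3  # arbitrary test values
sx_plus = np.array([1.0, 1.0]) / np.sqrt(2)  # S_x "+" eigenket

for t in np.linspace(0.0, 5.0, 11):
    # time evolved state from above
    psi = np.array([np.cos(beta / 2) * np.exp(1j * omega * t),
                    np.sin(beta / 2) * np.exp(-1j * omega * t)])
    prob = abs(sx_plus @ psi) ** 2
    closed_form = 0.5 * (1 + np.sin(beta) * np.cos(2 * omega * t))
    assert np.isclose(prob, closed_form)
print("probability check passed")
```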

(b)

To calculate the dispersion first note that

\begin{equation}\label{eqn:spinTimeEvolution:300}
S_x^2
= \lr{ \frac{\Hbar}{2} }^2 \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}^2
= \lr{ \frac{\Hbar}{2} }^2,
\end{equation}

so only the first order expectation is non-trivial to calculate. That is

\begin{equation}\label{eqn:spinTimeEvolution:320}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos(\beta/2) e^{-i \omega t} &
\sin(\beta/2) e^{i \omega t}
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
\cos(\beta/2) e^{i \omega t} \\
\sin(\beta/2) e^{-i \omega t} \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos(\beta/2) e^{-i \omega t} &
\sin(\beta/2) e^{i \omega t}
\end{bmatrix}
\begin{bmatrix}
\sin(\beta/2) e^{-i \omega t} \\
\cos(\beta/2) e^{i \omega t} \\
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\sin(\beta/2) \cos(\beta/2) \lr{ e^{-2 i \omega t} + e^{ 2 i \omega t} } \\
&=
\frac{\Hbar}{2} \sin\beta \cos( 2 \omega t ).
\end{aligned}
\end{equation}

This gives

\begin{equation}\label{eqn:spinTimeEvolution:340}
\boxed{
\expectation{(\Delta S_x)^2}
=
\lr{ \frac{\Hbar}{2} }^2 \lr{ 1 - \sin^2\beta \cos^2( 2 \omega t ) }.
}
\end{equation}
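The boxed result can also be verified numerically. This sketch of mine computes the dispersion directly from the evolved state and compares it to the closed form (test values arbitrary, \( \Hbar \) scaled to one):

```python
import numpy as np

hbar, beta, omega = 1.0, 1.1, 0.8  # hbar scaled to 1; angles arbitrary
Sx = (hbar / 2) * np.array([[0.0, 1.0], [1.0, 0.0]])

for t in (0.0, 0.3, 1.7):
    psi = np.array([np.cos(beta / 2) * np.exp(1j * omega * t),
                    np.sin(beta / 2) * np.exp(-1j * omega * t)])
    ex = (psi.conj() @ Sx @ psi).real        # <S_x>
    ex2 = (psi.conj() @ Sx @ Sx @ psi).real  # <S_x^2> = (hbar/2)^2
    dispersion = ex2 - ex ** 2
    boxed = (hbar / 2) ** 2 * (1 - np.sin(beta) ** 2 * np.cos(2 * omega * t) ** 2)
    assert np.isclose(dispersion, boxed)
print("dispersion check passed")
```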

(c)

For \( \beta = 0 \) we have \( \ncap = \zcap \), and for \( \beta = \pi/2 \), \( \ncap = \xcap \). For the first case, the state is an eigenstate of \( S_z \), so must evolve as

\begin{equation}\label{eqn:spinTimeEvolution:220}
\ket{S \cdot \ncap ; + }(t) = \ket{S \cdot \ncap ; + }(0) e^{i \omega t}.
\end{equation}

The probability of finding it in state \( \ket{S \cdot \xcap ; + } \) is therefore

\begin{equation}\label{eqn:spinTimeEvolution:240}
\begin{aligned}
\Abs{
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1
\end{bmatrix}
\begin{bmatrix}
e^{i \omega t} \\
0
\end{bmatrix}
}^2
&=
\inv{2} \Abs{ e^{i\omega t} }^2 \\
&=
\inv{2} \\
&=
\inv{2} \lr{ 1 + \sin(0) \cos(2 \omega t) }.
\end{aligned}
\end{equation}

This matches \ref{eqn:spinTimeEvolution:200} as expected.

For \( \beta = \pi/2 \) we have

\begin{equation}\label{eqn:spinTimeEvolution:260}
\begin{aligned}
\ket{S \cdot \xcap ; + }(t)
&=
\inv{\sqrt{2}}
\begin{bmatrix}
e^{i \omega t} & 0 \\
0 & e^{-i \omega t}
\end{bmatrix}
\begin{bmatrix}
1 \\
1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
e^{i \omega t} \\
e^{-i \omega t}
\end{bmatrix}.
\end{aligned}
\end{equation}

The probability for the \( \Hbar/2 \) \( S_x \) measurement at time \( t \) is
\begin{equation}\label{eqn:spinTimeEvolution:280}
\begin{aligned}
\Abs{
\inv{2}
\begin{bmatrix}
1 & 1
\end{bmatrix}
\begin{bmatrix}
e^{i \omega t} \\
e^{-i \omega t}
\end{bmatrix}
}^2
&=
\inv{4} \Abs{ e^{i \omega t} + e^{-i \omega t} }^2 \\
&=
\cos^2(\omega t) \\
&=
\inv{2}\lr{ 1 + \sin(\pi/2) \cos( 2 \omega t )}.
\end{aligned}
\end{equation}

Again, this matches the expected value.

For the dispersions, at \( \beta = 0 \) we have

\begin{equation}\label{eqn:spinTimeEvolution:360}
\lr{\frac{\Hbar}{2}}^2.
\end{equation}

This is the maximum dispersion, which makes sense since we are measuring \( S_x \) when the initial state is \( \ket{S \cdot \zcap ; + } \). For \( \beta = \pi/2 \) the dispersion is

\begin{equation}\label{eqn:spinTimeEvolution:380}
\lr{\frac{\Hbar}{2}}^2 \sin^2 ( 2 \omega t ).
\end{equation}

This starts off as zero dispersion (because the initial state is \( \ket{ S \cdot \xcap ; + } \)), but then oscillates.

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

PHY1520H Graduate Quantum Mechanics. Lecture 9: Dirac equation (cont.). Taught by Prof. Arun Paramekanti

October 15, 2015 phy1520


Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti.

Where we left off

\begin{equation}\label{eqn:qmLecture9:20}
-i \Hbar \PD{t}{}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix}
=
\begin{bmatrix}
-i \Hbar c \PD{x}{} & m c^2 \\
m c^2 & i \Hbar c \PD{x}{} \\
\end{bmatrix}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix}.
\end{equation}

With a potential this would be

\begin{equation}\label{eqn:qmLecture9:40}
-i \Hbar \PD{t}{}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix}
=
\begin{bmatrix}
-i \Hbar c \PD{x}{} + V(x) & m c^2 \\
m c^2 & i \Hbar c \PD{x}{} + V(x) \\
\end{bmatrix}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix}.
\end{equation}

This means that the potential shifts the energy eigenvalues of the system.

Free Particle

Assuming a form

\begin{equation}\label{eqn:qmLecture9:60}
\begin{bmatrix}
\psi_1(x,t) \\
\psi_2(x,t)
\end{bmatrix}
=
e^{i k x}
\begin{bmatrix}
f_1(t) \\
f_2(t) \\
\end{bmatrix},
\end{equation}

and plugging back into the Dirac equation we have

\begin{equation}\label{eqn:qmLecture9:80}
-i \Hbar \PD{t}{}
\begin{bmatrix}
f_1 \\
f_2
\end{bmatrix}
=
\begin{bmatrix}
k \Hbar c & m c^2 \\
m c^2 & - \Hbar k c \\
\end{bmatrix}
\begin{bmatrix}
f_1 \\
f_2
\end{bmatrix}.
\end{equation}

We can use a diagonalizing rotation

\begin{equation}\label{eqn:qmLecture9:100}
\begin{bmatrix}
f_1 \\
f_2
\end{bmatrix}
=
\begin{bmatrix}
\cos\theta_k & -\sin\theta_k \\
\sin\theta_k & \cos\theta_k \\
\end{bmatrix}
\begin{bmatrix}
f_{+} \\
f_{-} \\
\end{bmatrix}.
\end{equation}

Plugging this in reduces the system to the form

\begin{equation}\label{eqn:qmLecture9:140}
-i \Hbar \PD{t}{}
\begin{bmatrix}
f_{+} \\
f_{-} \\
\end{bmatrix}
=
\begin{bmatrix}
E_k & 0 \\
0 & -E_k
\end{bmatrix}
\begin{bmatrix}
f_{+} \\
f_{-} \\
\end{bmatrix}.
\end{equation}

Where the rotation angle is found to be given by

\begin{equation}\label{eqn:qmLecture9:160}
\begin{aligned}
\sin(2 \theta_k) &= \frac{m c^2}{\sqrt{(\Hbar k c)^2 + m^2 c^4}} \\
\cos(2 \theta_k) &= \frac{\Hbar k c}{\sqrt{(\Hbar k c)^2 + m^2 c^4}} \\
E_k &= \sqrt{(\Hbar k c)^2 + m^2 c^4}
\end{aligned}
\end{equation}
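These relations can be verified numerically by checking that the rotation actually diagonalizes the Hamiltonian matrix. A sketch of mine, with arbitrary values for \( k \) and \( m \) in \( \Hbar = c = 1 \) units:

```python
import numpy as np

hbar = c = 1.0  # natural units for the check
k, m = 0.7, 1.3  # arbitrary momentum and mass

E_k = np.sqrt((hbar * k * c) ** 2 + (m * c ** 2) ** 2)
H = np.array([[hbar * k * c, m * c ** 2],
              [m * c ** 2, -hbar * k * c]])

# theta_k from cos(2 theta_k) = hbar k c / E_k, sin(2 theta_k) = m c^2 / E_k
theta_k = 0.5 * np.arctan2(m * c ** 2, hbar * k * c)
R = np.array([[np.cos(theta_k), -np.sin(theta_k)],
              [np.sin(theta_k), np.cos(theta_k)]])

D = R.T @ H @ R  # R is orthogonal, so R^{-1} = R^T
print(np.allclose(D, np.diag([E_k, -E_k])))  # prints True
```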

See fig. 1 for a sketch of energy vs momentum. The asymptotes are the limiting cases when \( m c^2 \rightarrow 0 \). The \( + \) branch is what we usually associate with particles. What about the other energy states? For fermions, Dirac argued that the lower energy states could be thought of as “filled up”, using the Pauli principle to leave only the positive energy states available. This was called the “Dirac sea”. This isn’t a good solution, and won’t work, for example, for bosons.

fig. 1. Dirac equation solution space

Another way to rationalize this is to employ ideas from solid state theory. For example consider a semiconductor with a valence and conduction band as sketched in fig. 2.

fig. 2. Solid state valence and conduction band transition

A photon can excite an electron from the valence band to the conduction band, leaving all the valence band states filled except for one (a hole). For an electron we can use almost the same picture, as sketched in fig. 3.

fig. 3. Pair creation

A photon with energy \( E_k - (-E_k) = 2 E_k \) can create a positron-electron pair from the vacuum, where the electron and the positron each carry energy \( E_k \).

At high enough energies, we can see this pair creation occur.

Zitterbewegung

If a particle is created at a non-eigenstate such as one on the asymptotes, then oscillations between the positive and negative branches are possible as sketched in fig. 4.

fig. 4. Zitterbewegung oscillation

Only “vertical” oscillations between the positive and negative locations on these branches are possible, since those are the points that match the particle momentum. Examining this will be the aim of one of the problem set problems.

Probability and current density

If we define a probability density

\begin{equation}\label{eqn:qmLecture9:180}
\rho(x, t) = \Abs{\psi_1}^2 + \Abs{\psi_2}^2,
\end{equation}

does this satisfy a probability conservation relation

\begin{equation}\label{eqn:qmLecture9:200}
\PD{t}{\rho} + \PD{x}{j} = 0,
\end{equation}

where \( j \) is the probability current? Plugging in the density, we have

\begin{equation}\label{eqn:qmLecture9:220}
\PD{t}{\rho}
=
\PD{t}{\psi_1^\conj} \psi_1
+
\psi_1^\conj \PD{t}{\psi_1}
+
\PD{t}{\psi_2^\conj} \psi_2
+
\psi_2^\conj \PD{t}{\psi_2}.
\end{equation}

It turns out that the probability current has the form

\begin{equation}\label{eqn:qmLecture9:240}
j(x,t) = c \lr{ \psi_1^\conj \psi_1 - \psi_2^\conj \psi_2 }.
\end{equation}

Here the speed of light \( c \) is the slope of the line in the plots above. We can think of this current density as right movers minus left movers. Any given state can be thought of as a combination of right moving and left moving states, neither of which is an eigenstate of the free particle Hamiltonian.

Potential step

The next logical thing to think about, as in non-relativistic quantum mechanics, is to think about what occurs when the particle hits a potential step, as in fig. 5.

fig. 5. Reflection off a potential barrier

The approach is the same. We write down the wave functions for the \( V = 0 \) region (I), and the higher potential region (II).

The eigenstates are found on the solid lines above the asymptotes on the right hand movers side as sketched in fig. 6. The right and left moving designations are based on the group velocity \( \PDi{k}{E} \) (approaching \( \pm c \) on the top-right and top-left quadrants respectively).

fig. 6. Right movers and left movers

For \( k > 0 \), an eigenstate for the incident wave is

\begin{equation}\label{eqn:qmLecture9:261}
\Bpsi_{\textrm{inc}}(x) =
\begin{bmatrix}
\cos\theta_k \\
\sin\theta_k
\end{bmatrix}
e^{i k x}.
\end{equation}

For the reflected wave function, we pick a function on the left moving side of the positive energy branch.

\begin{equation}\label{eqn:qmLecture9:260}
\Bpsi_{\textrm{ref}}(x) =
\begin{bmatrix}
? \\
?
\end{bmatrix}
e^{-i k x}.
\end{equation}

We’ll go through this in more detail next time.

Question: Calculate the right going diagonalization

Prove \ref{eqn:qmLecture9:160}.

Answer

To determine the relations for \( \theta_k \) we have to solve

\begin{equation}\label{eqn:qmLecture9:280}
\begin{bmatrix}
E_k & 0 \\
0 & -E_k
\end{bmatrix}
= R^{-1} H R.
\end{equation}

Working temporarily with \( \Hbar = c = 1 \), and writing \( C = \cos\theta_k \), \( S = \sin\theta_k \), that is

\begin{equation}\label{eqn:qmLecture9:300}
\begin{aligned}
\begin{bmatrix}
E_k & 0 \\
0 & -E_k
\end{bmatrix}
&=
\begin{bmatrix}
C & S \\
-S & C
\end{bmatrix}
\begin{bmatrix}
k & m \\
m & -k
\end{bmatrix}
\begin{bmatrix}
C & -S \\
S & C
\end{bmatrix} \\
&=
\begin{bmatrix}
C & S \\
-S & C
\end{bmatrix}
\begin{bmatrix}
k C + m S & -k S + m C \\
m C - k S & -m S - k C
\end{bmatrix} \\
&=
\begin{bmatrix}
k C^2 + m S C + m C S - k S^2 & -k S C + m C^2 - m S^2 - k C S \\
-k C S - m S^2 + m C^2 - k S C & k S^2 - m C S - m S C - k C^2
\end{bmatrix} \\
&=
\begin{bmatrix}
k \cos(2 \theta_k) + m \sin(2 \theta_k) & m \cos(2 \theta_k) - k \sin(2 \theta_k) \\
m \cos(2 \theta_k) - k \sin(2 \theta_k) & -k \cos(2 \theta_k) - m \sin(2 \theta_k) \\
\end{bmatrix}.
\end{aligned}
\end{equation}

This gives

\begin{equation}\label{eqn:qmLecture9:320}
\begin{aligned}
E_k
\begin{bmatrix}
1 \\
0
\end{bmatrix}
&=
\begin{bmatrix}
k \cos(2 \theta_k) + m \sin(2 \theta_k) \\
m \cos(2 \theta_k) - k \sin(2 \theta_k) \\
\end{bmatrix} \\
&=
\begin{bmatrix}
k & m \\
m & -k
\end{bmatrix}
\begin{bmatrix}
\cos(2 \theta_k) \\
\sin(2 \theta_k) \\
\end{bmatrix}.
\end{aligned}
\end{equation}

Adding back in the \(\Hbar\)’s and \(c\)’s this is

\begin{equation}\label{eqn:qmLecture9:340}
\begin{aligned}
\begin{bmatrix}
\cos(2 \theta_k) \\
\sin(2 \theta_k) \\
\end{bmatrix}
&=
\frac{E_k}{-(\Hbar k c)^2 -(m c^2)^2}
\begin{bmatrix}
- \Hbar k c & - m c^2 \\
- m c^2 & \Hbar k c
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix} \\
&=
\inv{E_k}
\begin{bmatrix}
\Hbar k c \\
m c^2
\end{bmatrix}.
\end{aligned}
\end{equation}

Question: Verify the Dirac current relationship.

Prove \ref{eqn:qmLecture9:240}.

Answer

The components of the Schrödinger equation are

\begin{equation}\label{eqn:qmLecture9:360}
\begin{aligned}
-i \Hbar \PD{t}{\psi_1} &= -i \Hbar c \PD{x}{\psi_1} + m c^2 \psi_2 \\
-i \Hbar \PD{t}{\psi_2} &= m c^2 \psi_1 + i \Hbar c \PD{x}{\psi_2}.
\end{aligned}
\end{equation}

The conjugates of these are
\begin{equation}\label{eqn:qmLecture9:380}
\begin{aligned}
i \Hbar \PD{t}{\psi_1^\conj} &= i \Hbar c \PD{x}{\psi_1^\conj} + m c^2 \psi_2^\conj \\
i \Hbar \PD{t}{\psi_2^\conj} &= m c^2 \psi_1^\conj - i \Hbar c \PD{x}{\psi_2^\conj}.
\end{aligned}
\end{equation}

This gives
\begin{equation}\label{eqn:qmLecture9:400}
\begin{aligned}
i \Hbar \PD{t}{\rho}
&=
\lr{ i \Hbar c \PD{x}{\psi_1^\conj} + m c^2 \psi_2^\conj } \psi_1 \\
&+ \psi_1^\conj \lr{ i \Hbar c \PD{x}{\psi_1} - m c^2 \psi_2 } \\
&+ \lr{ m c^2 \psi_1^\conj - i \Hbar c \PD{x}{\psi_2^\conj} } \psi_2 \\
&+ \psi_2^\conj \lr{ -m c^2 \psi_1 - i \Hbar c \PD{x}{\psi_2} }.
\end{aligned}
\end{equation}

All the non-derivative terms cancel, leaving

\begin{equation}\label{eqn:qmLecture9:420}
\inv{c} \PD{t}{\rho}
=
\PD{x}{\psi_1^\conj} \psi_1
+ \psi_1^\conj \PD{x}{\psi_1}
- \PD{x}{\psi_2^\conj} \psi_2
- \psi_2^\conj \PD{x}{\psi_2}
=
\PD{x}{}
\lr{
\psi_1^\conj \psi_1 -
\psi_2^\conj \psi_2
}.
\end{equation}

Expectations for SHO Hamiltonian, and virial theorem.

October 15, 2015 phy1520


Question: Expectations for SHO Hamiltonian, and virial theorem. ([1] pr. 2.3)

(a)

For a 1D SHO, compute \(
\bra{m} x \ket{n},
\bra{m} x^2 \ket{n},
\bra{m} p \ket{n},
\bra{m} p^2 \ket{n} \) and \( \bra{m} \symmetric{x}{p} \ket{n} \).

(b)

Verify the virial theorem is satisfied for energy eigenstates.

Answer

(a)

Using

\begin{equation}\label{eqn:shoExpectations:20}
\begin{aligned}
x &= \frac{x_0}{\sqrt{2}} \lr{ a + a^\dagger } \\
p &= \frac{i\Hbar}{x_0 \sqrt{2}} \lr{ a^\dagger - a} \\
a(t) &= a(0) e^{-i \omega t} \\
a(0) \ket{n} &= \sqrt{n} \ket{n-1} \\
a^\dagger(0) \ket{n} &= \sqrt{n+1} \ket{n+1} \\
x_0^2 &= \frac{\Hbar}{\omega m},
\end{aligned}
\end{equation}

we have

\begin{equation}\label{eqn:shoExpectations:40}
\begin{aligned}
\bra{m} x \ket{n}
&=
\frac{x_0}{\sqrt{2}} \bra{m} \lr{ a + a^\dagger } \ket{n} \\
&=
\frac{x_0}{\sqrt{2}} \bra{m}
\lr{
e^{-i \omega t} \sqrt{n} \ket{n-1}
+
e^{i \omega t} \sqrt{n+1} \ket{n+1}
} \\
&=
\frac{x_0}{\sqrt{2}} \lr{
\delta_{m, n-1} e^{-i \omega t} \sqrt{n}
+
\delta_{m, n+1} e^{i \omega t} \sqrt{n+1}
},
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:shoExpectations:60}
\begin{aligned}
\bra{m} x^2 \ket{n}
&=
\frac{x_0^2}{2} \bra{m} \lr{ a + a^\dagger }^2 \ket{n} \\
&=
\frac{x_0^2}{2}
\lr{
e^{i \omega t} \sqrt{m} \bra{m-1}
+
e^{-i \omega t} \sqrt{m+1} \bra{m+1}
}
\lr{
e^{-i \omega t} \sqrt{n} \ket{n-1}
+
e^{i \omega t} \sqrt{n+1} \ket{n+1}
} \\
&=
\frac{x_0^2}{2}
\lr{
\delta_{m+1,n+1} \sqrt{(m+1)(n+1)}
+\delta_{m+1,n-1} \sqrt{(m+1)n} e^{-2 i \omega t}
+\delta_{m-1,n+1} \sqrt{m(n+1)} e^{2 i \omega t}
+\delta_{m-1,n-1} \sqrt{m n}
},
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:shoExpectations:80}
\begin{aligned}
\bra{m} p \ket{n}
&=
\frac{i\Hbar}{\sqrt{2} x_0} \bra{m} \lr{ a^\dagger - a} \ket{n} \\
&=
\frac{i\Hbar}{\sqrt{2} x_0} \bra{m} \lr{
e^{i \omega t} \sqrt{n+1} \ket{n+1}
-
e^{-i \omega t} \sqrt{n} \ket{n-1}
} \\
&=
\frac{i\Hbar}{\sqrt{2} x_0} \lr{
\delta_{m,n+1} e^{i \omega t} \sqrt{n+1}
-
\delta_{m,n-1} e^{-i \omega t} \sqrt{n}
},
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:shoExpectations:100}
\begin{aligned}
\bra{m} p^2 \ket{n}
&=
\frac{\Hbar^2}{2 x_0^2} \bra{m} \lr{ a - a^\dagger } \lr{ a^\dagger - a}
\ket{n} \\
&=
\frac{\Hbar^2}{2 x_0^2}
\lr{
e^{-i \omega t} \sqrt{m+1} \bra{m+1}
-
e^{i \omega t} \sqrt{m} \bra{m-1}
}
\lr{
e^{i \omega t} \sqrt{n+1} \ket{n+1}
-
e^{-i \omega t} \sqrt{n} \ket{n-1}
} \\
&=
\frac{\Hbar^2}{2 x_0^2}
\lr{
\delta_{m+1,n+1} \sqrt{(m+1)(n+1)}
-\delta_{m+1,n-1} \sqrt{(m+1)n} e^{-2 i \omega t}
-\delta_{m-1,n+1} \sqrt{m(n+1)} e^{2 i \omega t}
+\delta_{m-1,n-1} \sqrt{m n}
}.
\end{aligned}
\end{equation}
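These matrix elements can be spot-checked at \( t = 0 \) with truncated matrix representations of the ladder operators. This numerical sketch is mine (the truncation size \( N \) is arbitrary); note that the off diagonal \( p^2 \) elements carry the opposite sign to those of \( x^2 \):

```python
import numpy as np

hbar = mass = omega = 1.0
x0 = np.sqrt(hbar / (omega * mass))
N = 8  # truncation size; keep the indices probed well below N

# a|n> = sqrt(n)|n-1>: sqrt(1), ..., sqrt(N-1) on the superdiagonal
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.T

x = (x0 / np.sqrt(2)) * (a + ad)
p = (1j * hbar / (x0 * np.sqrt(2))) * (ad - a)
p2 = (p @ p).real

# <2| x |3> at t = 0: the delta_{m,n-1} sqrt(n) term
assert np.isclose(x[2, 3], (x0 / np.sqrt(2)) * np.sqrt(3))
# <3| p^2 |3>: hbar^2 (2n + 1) / (2 x0^2) with n = 3
assert np.isclose(p2[3, 3], hbar ** 2 * 7 / (2 * x0 ** 2))
# <1| p^2 |3>: the off diagonal terms are negative
assert np.isclose(p2[1, 3], -hbar ** 2 * np.sqrt(6) / (2 * x0 ** 2))
print("SHO matrix element checks passed")
```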

For the anticommutator \( \symmetric{x}{p} \), we have

\begin{equation}\label{eqn:shoExpectations:120}
\begin{aligned}
\symmetric{x}{p}
&=
\frac{i\Hbar}{2}
\lr{
\lr{ a e^{-i \omega t} + a^\dagger e^{i \omega t} } \lr{ a^\dagger e^{i \omega t} - a e^{-i \omega t} }
+
\lr{ a^\dagger e^{i \omega t} - a e^{-i \omega t} }
\lr{ a e^{-i \omega t} + a^\dagger e^{i \omega t} }
} \\
&=
\frac{i\Hbar}{2}
\lr{
- a^2 e^{- 2 i \omega t}
+ (a^\dagger)^2 e^{ 2 i \omega t}
+ a a^\dagger
- a^\dagger a
- a^2 e^{- 2 i \omega t}
+ (a^\dagger)^2 e^{ 2 i \omega t}
+ a^\dagger a
- a a^\dagger
} \\
&=
i\Hbar
\lr{
(a^\dagger)^2 e^{ 2 i \omega t}
- a^2 e^{- 2 i \omega t}
},
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:shoExpectations:140}
\begin{aligned}
\bra{m} \symmetric{x}{p} \ket{n}
&=
i\Hbar
\bra{m}
\lr{
(a^\dagger)^2 e^{2 i \omega t} - a^2 e^{-2 i \omega t}
}
\ket{n} \\
&=
i\Hbar
\lr{
\delta_{m,n+2} \sqrt{(n+1)(n+2)} e^{2 i \omega t}
-
\delta_{m,n-2} \sqrt{(n-1)n} e^{-2 i \omega t}
}.
\end{aligned}
\end{equation}
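At \( t = 0 \) the anticommutator can be checked against truncated ladder operator matrices (a numerical sketch of mine); in particular the diagonal matrix elements \( \bra{n} \symmetric{x}{p} \ket{n} \) vanish, as expected for stationary states:

```python
import numpy as np

hbar, N = 1.0, 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # lowering operator
ad = a.T
x0 = 1.0

x = (x0 / np.sqrt(2)) * (a + ad)
p = (1j * hbar / (x0 * np.sqrt(2))) * (ad - a)

anti = x @ p + p @ x
# t = 0 value of i hbar ((a^dag)^2 e^{2 i w t} - a^2 e^{-2 i w t})
closed = 1j * hbar * (ad @ ad - a @ a)

assert np.allclose(anti, closed)
assert np.allclose(np.diag(anti), 0)  # <n|{x,p}|n> = 0
print("anticommutator check passed")
```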

(b)

For the SHO, the virial theorem requires \( \expectation{p^2/m} = \expectation{m \omega^2 x^2} \). The momentum expectation with respect to the eigenstate \( \ket{n} \) is

\begin{equation}\label{eqn:shoExpectations:160}
\begin{aligned}
\expectation{p^2/m}
&=
\frac{\Hbar^2}{2 x_0^2 m}
\lr{
\sqrt{(n+1)(n+1)}
+
\sqrt{n n}
} \\
&=
\frac{\Hbar^2 m \omega}{2 \Hbar m} \lr{ 2 n + 1 } \\
&=
\Hbar \omega \lr{ n + \inv{2} }.
\end{aligned}
\end{equation}

For the position expectation we’ve got

\begin{equation}\label{eqn:shoExpectations:180}
\begin{aligned}
\expectation{m \omega^2 x^2}
&=
\frac{m \omega^2 x_0^2}{2}
\lr{
\sqrt{(n+1)(n+1)}
+ \sqrt{n n}
} \\
&=
\frac{m \omega^2 \Hbar}{2 m \omega}
\lr{
\sqrt{(n+1)(n+1)}
+ \sqrt{n n}
} \\
&=
\frac{\omega \Hbar}{2 }
\lr{ 2 n + 1 } \\
&=
\omega \Hbar
\lr{ n + \inv{2} }.
\end{aligned}
\end{equation}

This shows that the virial theorem holds for the SHO Hamiltonian for eigenstates.
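The virial balance can also be confirmed numerically with a truncated ladder operator representation (my own sketch; units chosen so \( \Hbar = m = \omega = 1 \)):

```python
import numpy as np

hbar = mass = omega = 1.0
x0 = np.sqrt(hbar / (omega * mass))
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.T

x = (x0 / np.sqrt(2)) * (a + ad)
p = (1j * hbar / (x0 * np.sqrt(2))) * (ad - a)
x2 = (x @ x).real
p2 = (p @ p).real

for n in range(N - 1):  # stay below the truncation edge
    kinetic_like = p2[n, n] / mass                 # <n| p^2/m |n>
    potential_like = mass * omega ** 2 * x2[n, n]  # <n| m w^2 x^2 |n>
    assert np.isclose(kinetic_like, potential_like)
    assert np.isclose(kinetic_like, hbar * omega * (n + 0.5))
print("virial check passed")
```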

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Damn, couldn’t resist walking out on Oh Canada during “New math” objections.

October 15, 2015 Incoherent ramblings

I made the mistake of walking away from Oh Canada during a conversation.  I don’t respect the training in conformity and patriotism (a trait second only to religion for inspiring war) that the Oh Canada ritual provides.  This and other similar patriotism rituals like the US pledge of allegiance have no more business in the schools than the old religious pledges that we used to have to mumble as kids, not knowing what we were even saying.  In my opinion, Oh Canada is an important brainwashing technique, and trains us to love our country regardless of the actions done in our name by the government and its unending bureaucracy.  This ritual trains us in blind obedience.  The end result is that we are comfortable dismissing real insanities like the political party whip, a true destroyer of democratic representation.  With blind love for our country we can ignore insanities of our government, thinking of them as yet another facet of Canadian values.  It is training to not question the system imposed on us.
I know that I am extremely isolated in my distaste for Oh Canada, and I haven’t figured out a good strategy for dealing with it.  We’ve been trained to respect it with so much force, regardless of there being no good reason to do so, that breaking with the convention unfortunately upsets people when they see it done.  Once I came to the conclusion that Oh Canada should not be respected, it was hard to fight the training in conformity that I had been subjected to.  It would be much easier to just stand and pretend to like it, and let the old feelings of patriotism back in, but I feel I have a moral obligation not to do so.
Unfortunately, walking out on the go-to-war-for-country song interrupted my conversation with Mr Dixon (the Unionville public school principal) on the more important topic of some bizarre multiplication techniques that are being taught in fourth grade, techniques that have the side effect of severe confusion, making mathematics suddenly a hated subject where it used to be loved.  This hatred is coming from a boy with a natural aptitude for numbers, who, for example, memorized 50 digits of pi in three days just for the sheer pleasure of doing so.  Because of the effort of trying to keep the “four multiplication methods” all straight in his head, he’s not sure how to do the most important of them all: the standard multiplication technique that has been the workhorse of paper multiplication for hundreds of years before these bizarre teaching ideas invaded the curriculum.  I recall when my kids came home with these techniques.  They had the good sense to go through the motions mechanically and forget about them after that.  To spend all this effort just for the best case outcome of having all the kids that learn it forget it after the test is nonsensical to say the least.  But when it also has the side effect of destroying love for the subject, it is unforgivable.