
Solving Maxwell’s equation in free space: Multivector plane wave representation

March 14, 2018 math and physics play


The geometric algebra form of Maxwell’s equations in free space (or source-free isotropic media with group velocity \( c \)) is the multivector equation
\begin{equation}\label{eqn:planewavesMultivector:20}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F(\Bx, t) = 0.
\end{equation}
Here \( F = \BE + I c \BB \) is a multivector with grades 1 and 2 (vector and bivector components). The velocity \( c \) is called the group velocity since \( F \), or its components \( \BE, \BB \), satisfy the wave equation, which can be seen by pre-multiplying with \( \spacegrad - (1/c)\PDi{t}{} \) to find
\begin{equation}\label{eqn:planewavesMultivector:n}
\lr{ \spacegrad^2 - \inv{c^2}\PDSq{t}{} } F(\Bx, t) = 0.
\end{equation}

Let’s look at the frequency domain solution of this equation with a presumed phasor representation
\begin{equation}\label{eqn:planewavesMultivector:40}
F(\Bx, t) = \textrm{Re} \lr{ F(\Bk) e^{-j \Bk \cdot \Bx + j \omega t} },
\end{equation}
where \( j \) is a scalar imaginary, not necessarily with any geometric interpretation.
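The gradient of the scalar exponential is

\begin{equation}\label{eqn:planewavesMultivector:50}
\spacegrad e^{-j \Bk \cdot \Bx + j \omega t} = -j \Bk e^{-j \Bk \cdot \Bx + j \omega t},
\end{equation}

where this \( \Bk \) multiplies \( F(\Bk) \) with a full geometric (not dot) product, while the time partial just brings down a factor of \( j \omega \).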

Maxwell’s equation reduces to just
\begin{equation}\label{eqn:planewavesMultivector:60}
0
=
-j \lr{ \Bk - \frac{\omega}{c} } F(\Bk).
\end{equation}

If \( F(\Bk) \) has a left multivector factor
\begin{equation}\label{eqn:planewavesMultivector:80}
F(\Bk) =
\lr{ \Bk + \frac{\omega}{c} } \tilde{F},
\end{equation}
where \( \tilde{F} \) is a multivector to be determined, then
\begin{equation}\label{eqn:planewavesMultivector:100}
\begin{aligned}
\lr{ \Bk - \frac{\omega}{c} }
F(\Bk)
&=
\lr{ \Bk - \frac{\omega}{c} }
\lr{ \Bk + \frac{\omega}{c} } \tilde{F} \\
&=
\lr{ \Bk^2 - \lr{\frac{\omega}{c}}^2 } \tilde{F},
\end{aligned}
\end{equation}
which is zero if \( \Norm{\Bk} = \ifrac{\omega}{c} \), since \( \Bk^2 = \Norm{\Bk}^2 \) for any vector.

Let \( \kcap = \ifrac{\Bk}{\Norm{\Bk}} \), and \( \Norm{\Bk} \tilde{F} = F_0 + F_1 + F_2 + F_3 \), where \( F_0, F_1, F_2 \), and \( F_3 \) respectively have grades 0, 1, 2, and 3. Then
\begin{equation}\label{eqn:planewavesMultivector:120}
\begin{aligned}
F(\Bk)
&= \lr{ 1 + \kcap } \lr{ F_0 + F_1 + F_2 + F_3 } \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap F_1 + \kcap F_2 + \kcap F_3 \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap \cdot F_1 + \kcap \cdot F_2 + \kcap \cdot F_3
+
\kcap \wedge F_1 + \kcap \wedge F_2 \\
&=
\lr{
F_0 + \kcap \cdot F_1
}
+
\lr{
F_1 + \kcap F_0 + \kcap \cdot F_2
}
+
\lr{
F_2 + \kcap \cdot F_3 + \kcap \wedge F_1
}
+
\lr{
F_3 + \kcap \wedge F_2
}.
\end{aligned}
\end{equation}
Since the field \( F \) has only vector and bivector grades, the grade zero and grade three components of the expansion above must be zero, or
\begin{equation}\label{eqn:planewavesMultivector:140}
\begin{aligned}
F_0 &= - \kcap \cdot F_1 \\
F_3 &= - \kcap \wedge F_2,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:planewavesMultivector:160}
\begin{aligned}
F(\Bk)
&=
\lr{ 1 + \kcap } \lr{
F_1 - \kcap \cdot F_1 +
F_2 - \kcap \wedge F_2
} \\
&=
\lr{ 1 + \kcap } \lr{
F_1 - \kcap F_1 + \kcap \wedge F_1 +
F_2 - \kcap F_2 + \kcap \cdot F_2
}.
\end{aligned}
\end{equation}
The multivector \( 1 + \kcap \) has the projective property of gobbling any leading factors of \( \kcap \)
\begin{equation}\label{eqn:planewavesMultivector:180}
\begin{aligned}
(1 + \kcap)\kcap
&= \kcap + \kcap^2 \\
&= \kcap + 1 \\
&= 1 + \kcap,
\end{aligned}
\end{equation}
so for \( F_i \in \{ F_1, F_2 \} \)
\begin{equation}\label{eqn:planewavesMultivector:200}
(1 + \kcap) ( F_i - \kcap F_i )
=
(1 + \kcap) ( F_i - F_i )
= 0,
\end{equation}
leaving
\begin{equation}\label{eqn:planewavesMultivector:220}
F(\Bk)
=
\lr{ 1 + \kcap } \lr{
\kcap \cdot F_2 +
\kcap \wedge F_1
}.
\end{equation}

For \( \kcap \cdot F_2 \) to be non-zero, \( F_2 \) must be a bivector that lies in a plane containing \( \kcap \), and \( \kcap \cdot F_2 \) is a vector in that plane that is perpendicular to \( \kcap \). On the other hand, \( \kcap \wedge F_1 \) is non-zero only if \( F_1 \) has a non-zero component that does not lie along the \( \kcap \) direction, but \( \kcap \wedge F_1 \), like \( F_2 \), describes a plane containing \( \kcap \). This means that having both bivector and vector free variables \( F_2 \) and \( F_1 \) provides more degrees of freedom than required. For example, if \( \BE \) is any vector, and \( F_2 = \kcap \wedge \BE \), then
\begin{equation}\label{eqn:planewavesMultivector:240}
\begin{aligned}
\lr{ 1 + \kcap }
\kcap \cdot F_2
&=
\lr{ 1 + \kcap }
\kcap \cdot \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\lr{
\BE
-
\kcap \lr{ \kcap \cdot \BE }
} \\
&=
\lr{ 1 + \kcap }
\kcap \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\kcap \wedge \BE,
\end{aligned}
\end{equation}
which has the form \( \lr{ 1 + \kcap } \lr{ \kcap \wedge F_1 } \), so the solution of the free space Maxwell’s equation can be written
\begin{equation}\label{eqn:planewavesMultivector:260}
\boxed{
F(\Bx, t)
=
\textrm{Re} \lr{
\lr{ 1 + \kcap }
\BE\,
e^{-j \Bk \cdot \Bx + j \omega t}
}
,
}
\end{equation}
where \( \BE \) is any vector for which \( \BE \cdot \Bk = 0 \).
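As a consistency check, the grade split of this solution recovers the conventional transverse plane wave relations. Since \( \kcap \cdot \BE = 0 \),

\begin{equation}\label{eqn:planewavesMultivector:280}
\lr{ 1 + \kcap } \BE
= \BE + \kcap \cdot \BE + \kcap \wedge \BE
= \BE + I \lr{ \kcap \cross \BE },
\end{equation}

so comparing with \( F = \BE + I c \BB \), the bivector grade encodes \( c \BB = \kcap \cross \BE \): the electric and magnetic fields are perpendicular to each other and to the propagation direction, with \( \Norm{\BE} = c \Norm{\BB} \).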

Plane wave and spinor under time reversal

December 16, 2015 phy1520


Q: [1] pr 4.7

  1. (a)
    Find the time reversed form of a spinless plane wave state in three dimensions.

  2. (b)
    For the eigenspinor of \( \Bsigma \cdot \ncap \) expressed in terms of polar and azimuthal angles \( \beta\) and \( \gamma \), show that \( -i \sigma_y \chi^\conj(\ncap) \) has the reversed spin direction.

A: part (a)

The Hamiltonian for a plane wave is

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:20}
H = \frac{\Bp^2}{2m} = i \Hbar \PD{t}{}.
\end{equation}

Under time reversal the momentum side transforms as

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:40}
\begin{aligned}
\Theta \frac{\Bp^2}{2m} \Theta^{-1}
&=
\frac{\lr{ \Theta \Bp \Theta^{-1}} \cdot \lr{ \Theta \Bp \Theta^{-1}} }{2m} \\
&=
\frac{(-\Bp) \cdot (-\Bp)}{2m} \\
&=
\frac{\Bp^2}{2m}.
\end{aligned}
\end{equation}

The time derivative side of the equation is also time reversal invariant
\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:60}
\begin{aligned}
\Theta i \Hbar \PD{t}{} \Theta^{-1}
&=
\Theta i \Theta^{-1} \Theta \Hbar \PD{t}{} \Theta^{-1} \\
&=
-i \Hbar \PD{(-t)}{} \\
&=
i \Hbar \PD{t}{}.
\end{aligned}
\end{equation}

Solutions to this equation are linear combinations of

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:80}
\psi(\Bx, t) = e^{i \Bk \cdot \Bx - i E t/\Hbar},
\end{equation}

where \( \Hbar^2 \Bk^2/2m = E \), the energy of the particle. Under time reversal we have

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:100}
\begin{aligned}
\psi(\Bx, t)
\rightarrow e^{-i \Bk \cdot \Bx + i E (-t)/\Hbar}
&= \lr{ e^{i \Bk \cdot \Bx - i E (-t)/\Hbar} }^\conj \\
&=
\psi^\conj(\Bx, -t).
\end{aligned}
\end{equation}

A: part (b)

The text uses a requirement for time reversal of spin states to show that the Pauli matrix form of the time reversal operator is

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:120}
\Theta = -i \sigma_y K,
\end{equation}

where \( K \) is a complex conjugating operator. The form of the spin up state used in that demonstration was

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:140}
\begin{aligned}
\ket{\ncap ; +}
&= e^{-i S_z \beta/\Hbar} e^{-i S_y \gamma/\Hbar} \ket{+} \\
&= e^{-i \sigma_z \beta/2} e^{-i \sigma_y \gamma/2} \ket{+} \\
&= \lr{ \cos(\beta/2) - i \sigma_z \sin(\beta/2) }
\lr{ \cos(\gamma/2) - i \sigma_y \sin(\gamma/2) } \ket{+} \\
&= \lr{ \cos(\beta/2) - i \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \sin(\beta/2) }
\lr{ \cos(\gamma/2) - i \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} \sin(\gamma/2) } \ket{+} \\
&=
\begin{bmatrix}
e^{-i\beta/2} & 0 \\
0 & e^{i \beta/2}
\end{bmatrix}
\begin{bmatrix}
\cos(\gamma/2) & -\sin(\gamma/2) \\
\sin(\gamma/2) & \cos(\gamma/2)
\end{bmatrix}
\begin{bmatrix}
1 \\
0
\end{bmatrix} \\
&=
\begin{bmatrix}
e^{-i\beta/2} & 0 \\
0 & e^{i \beta/2}
\end{bmatrix}
\begin{bmatrix}
\cos(\gamma/2) \\
\sin(\gamma/2) \\
\end{bmatrix} \\
&=
\begin{bmatrix}
\cos(\gamma/2)
e^{-i\beta/2}
\\
\sin(\gamma/2)
e^{i \beta/2}
\end{bmatrix}.
\end{aligned}
\end{equation}

The state orthogonal to this one is claimed to be

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:180}
\begin{aligned}
\ket{\ncap ; -}
&= e^{-i S_z \beta/\Hbar} e^{-i S_y (\gamma + \pi)/\Hbar} \ket{+} \\
&= e^{-i \sigma_z \beta/2} e^{-i \sigma_y (\gamma + \pi)/2} \ket{+}.
\end{aligned}
\end{equation}

We have

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:200}
\begin{aligned}
\cos((\gamma + \pi)/2)
&=
\textrm{Re} e^{i(\gamma + \pi)/2} \\
&=
\textrm{Re} i e^{i\gamma/2} \\
&=
-\sin(\gamma/2),
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:220}
\begin{aligned}
\sin((\gamma + \pi)/2)
&=
\textrm{Im} e^{i(\gamma + \pi)/2} \\
&=
\textrm{Im} i e^{i\gamma/2} \\
&=
\cos(\gamma/2),
\end{aligned}
\end{equation}

so we should have

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:240}
\ket{\ncap ; -}
=
\begin{bmatrix}
-\sin(\gamma/2)
e^{-i\beta/2}
\\
\cos(\gamma/2)
e^{i \beta/2}
\end{bmatrix}.
\end{equation}

This looks right, but we can sanity check orthogonality

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:260}
\begin{aligned}
\braket{\ncap ; -}{\ncap ; +}
&=
\begin{bmatrix}
-\sin(\gamma/2)
e^{i\beta/2}
&
\cos(\gamma/2)
e^{-i \beta/2}
\end{bmatrix}
\begin{bmatrix}
\cos(\gamma/2)
e^{-i\beta/2}
\\
\sin(\gamma/2)
e^{i \beta/2}
\end{bmatrix} \\
&=
0,
\end{aligned}
\end{equation}

as expected.

The task at hand appears to be to apply the Pauli representation of the time reversal operator to the column representation of \( \ket{\ncap; +} \). That is

\begin{equation}\label{eqn:timeReversalPlaneWaveAndSpinor:160}
\begin{aligned}
\Theta \ket{\ncap ; +}
&=
-i \sigma_y K
\begin{bmatrix}
e^{-i\beta/2} \cos(\gamma/2) \\
e^{i \beta/2} \sin(\gamma/2)
\end{bmatrix} \\
&=
-i \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
e^{i\beta/2} \cos(\gamma/2) \\
e^{-i \beta/2} \sin(\gamma/2)
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
e^{i\beta/2} \cos(\gamma/2) \\
e^{-i \beta/2} \sin(\gamma/2)
\end{bmatrix} \\
&=
\begin{bmatrix}
-e^{-i \beta/2} \sin(\gamma/2) \\
e^{i\beta/2} \cos(\gamma/2) \\
\end{bmatrix} \\
&= \ket{\ncap ; -},
\end{aligned}
\end{equation}

which is the result to be demonstrated.
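As a numerical spot check (my addition, not part of the problem), both the orthogonality and the action of \( \Theta = -i \sigma_y K \) can be verified for arbitrary angles with a few lines of numpy:

import numpy as np

beta, gamma = 0.7, 1.2  # arbitrary test angles

# column representations of |n;+> and |n;-> from above
up = np.array([np.exp(-1j * beta / 2) * np.cos(gamma / 2),
               np.exp(1j * beta / 2) * np.sin(gamma / 2)])
down = np.array([-np.exp(-1j * beta / 2) * np.sin(gamma / 2),
                 np.exp(1j * beta / 2) * np.cos(gamma / 2)])

sigma_y = np.array([[0, -1j], [1j, 0]])

# Theta = -i sigma_y K: complex conjugate first, then apply -i sigma_y
theta_up = -1j * sigma_y @ up.conj()

print(np.allclose(theta_up, down))       # True
print(np.isclose(np.vdot(down, up), 0))  # True: orthogonality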

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

PHY1520H Graduate Quantum Mechanics. Lecture 10: 1D Dirac scattering off potential step. Taught by Prof. Arun Paramekanti

October 20, 2015 phy1520


Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti.

Dirac scattering off a potential step

For the non-relativistic case we have

\begin{equation}\label{eqn:qmLecture10:20}
\begin{aligned}
E < V_0 &\Rightarrow T = 0, R = 1 \\
E > V_0 &\Rightarrow T > 0, R < 1.
\end{aligned}
\end{equation}

What happens for a relativistic 1D particle?

Referring to fig. 1,

fig. 1. Potential step

the region I Hamiltonian is

\begin{equation}\label{eqn:qmLecture10:40}
H =
\begin{bmatrix}
\hat{p} c & m c^2 \\
m c^2 & - \hat{p} c
\end{bmatrix},
\end{equation}

for which the solution is

\begin{equation}\label{eqn:qmLecture10:60}
\Phi = e^{i k_1 x }
\begin{bmatrix}
\cos \theta_1 \\
\sin \theta_1
\end{bmatrix},
\end{equation}

where
\begin{equation}\label{eqn:qmLecture10:80}
\begin{aligned}
\cos 2 \theta_1 &= \frac{ \Hbar c k_1 }{E_{k_1}} \\
\sin 2 \theta_1 &= \frac{ m c^2 }{E_{k_1}}.
\end{aligned}
\end{equation}

To consider the \( k_1 < 0 \) case, note that

\begin{equation}\label{eqn:qmLecture10:100}
\begin{aligned}
\cos^2 \theta_1 - \sin^2 \theta_1 &= \cos 2 \theta_1 \\
2 \sin\theta_1 \cos\theta_1 &= \sin 2 \theta_1,
\end{aligned}
\end{equation}

so after flipping the signs on all the \( k_1 \) terms we find for the reflected wave

\begin{equation}\label{eqn:qmLecture10:120}
\Phi = e^{-i k_1 x}
\begin{bmatrix}
\sin\theta_1 \\
\cos\theta_1
\end{bmatrix}.
\end{equation}

FIXME: this reasoning doesn’t entirely make sense to me. Make sense of this by trying this solution as was done for the form of the incident wave solution.
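Attempting that check (my addition): acting on the proposed reflected solution with the region I Hamiltonian, using \( \hat{p} e^{-i k_1 x} = -\Hbar k_1 e^{-i k_1 x} \), gives

\begin{equation}\label{eqn:qmLecture10:540}
H e^{-i k_1 x}
\begin{bmatrix}
\sin\theta_1 \\
\cos\theta_1
\end{bmatrix}
=
e^{-i k_1 x}
\begin{bmatrix}
- \Hbar c k_1 \sin\theta_1 + m c^2 \cos\theta_1 \\
m c^2 \sin\theta_1 + \Hbar c k_1 \cos\theta_1
\end{bmatrix}
=
E_{k_1} e^{-i k_1 x}
\begin{bmatrix}
\sin\theta_1 \\
\cos\theta_1
\end{bmatrix},
\end{equation}

where the last step uses the \( \cos 2\theta_1, \sin 2\theta_1 \) relations above with the angle difference expansions \( \sin(2\theta_1 - \theta_1), \cos(2\theta_1 - \theta_1) \), confirming an energy \( E_{k_1} \) eigenstate with momentum \( -\Hbar k_1 \).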

The region I wave has the form

\begin{equation}\label{eqn:qmLecture10:140}
\Phi_I
=
A e^{i k_1 x}
\begin{bmatrix}
\cos\theta_1 \\
\sin\theta_1 \\
\end{bmatrix}
+
B e^{-i k_1 x}
\begin{bmatrix}
\sin\theta_1 \\
\cos\theta_1 \\
\end{bmatrix}.
\end{equation}

By the time we are done we want to have computed the reflection coefficient

\begin{equation}\label{eqn:qmLecture10:160}
R =
\frac{\Abs{B}^2}{\Abs{A}^2}.
\end{equation}

The region I energy is

\begin{equation}\label{eqn:qmLecture10:180}
E = \sqrt{ \lr{ m c^2}^2 + \lr{ \Hbar c k_1 }^2 }.
\end{equation}

We must have
\begin{equation}\label{eqn:qmLecture10:200}
E
=
\sqrt{ \lr{ m c^2}^2 + \lr{ \Hbar c k_2 }^2 } + V_0
=
\sqrt{ \lr{ m c^2}^2 + \lr{ \Hbar c k_1 }^2 },
\end{equation}

so

\begin{equation}\label{eqn:qmLecture10:220}
\begin{aligned}
\lr{ \Hbar c k_2 }^2
&=
\lr{ E - V_0 }^2 - \lr{ m c^2}^2 \\
&=
\underbrace{\lr{ E - V_0 + m c^2 }}_{r_1}\underbrace{\lr{ E - V_0 - m c^2 }}_{r_2}.
\end{aligned}
\end{equation}

The \( r_1 \) and \( r_2 \) branches are sketched in fig. 2.

fig. 2. Energy signs

For low energies (\( E < V_0 - m c^2 \)), we will have propagation, despite having a potential barrier. For intermediate energies the product \( r_1 r_2 \) will be negative, so the solutions will be decaying. Finally, for still higher energies (\( E > V_0 + m c^2 \)), there will again be propagation.

The non-relativistic case is sketched in fig. 3.

fig. 3. Effects of increasing potential for non-relativistic case

For the relativistic case we must consider three different cases, sketched in fig. 4, fig. 5, and fig. 6 respectively. For the low potential energy case, a particle with positive group velocity (what we’ve called right moving) can be matched to an equal energy portion of the potential-shifted parabola in region II. This is a case where we have transmission, but no antiparticle creation. There will be an energy region where the region II wave function has only a decaying term, since no region of either of the region II parabolic branches is available at the incident energy. When the potential is shifted still higher so that \( V_0 > E + m c^2 \), a positive group velocity in region I with a given energy can be matched to an antiparticle branch in the region II parabolic energy curve.

fig. 4. Low potential energy

fig. 5. High enough potential energy for no propagation

fig. 6. High potential energy

Boundary value conditions

We want to ensure that the current across the barrier is conserved (no particles are lost), as sketched in fig. 7.

fig. 7. Transmitted, reflected and incident components.

Recall that given a wave function

\begin{equation}\label{eqn:qmLecture10:240}
\Psi =
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix},
\end{equation}

the density and currents are respectively

\begin{equation}\label{eqn:qmLecture10:260}
\begin{aligned}
\rho &= \psi_1^\conj \psi_1 + \psi_2^\conj \psi_2 \\
j &= \psi_1^\conj \psi_1 - \psi_2^\conj \psi_2.
\end{aligned}
\end{equation}

Matching boundary value conditions requires

  1. For both the relativistic and non-relativistic cases we must have\begin{equation}\label{eqn:qmLecture10:280}
    \Psi_{\textrm{L}} = \Psi_{\textrm{R}}, \qquad \mbox{at \( x = 0 \).}
    \end{equation}
  2. For the non-relativistic case we want
    \begin{equation}\label{eqn:qmLecture10:300}
    \int_{-\epsilon}^\epsilon -\frac{\Hbar^2}{2m} \PDSq{x}{\Psi} =
    {\int_{-\epsilon}^\epsilon \lr{ E - V(x) } \Psi(x)}.
    \end{equation}The RHS integral is zero, so

    \begin{equation}\label{eqn:qmLecture10:320}
    -\frac{\Hbar^2}{2m} \lr{ \evalbar{\PD{x}{\Psi}}{{\textrm{R}}} - \evalbar{\PD{x}{\Psi}}{{\textrm{L}}} } = 0.
    \end{equation}

    We have to match both the wave function and its derivative at the boundary.

    For the relativistic case

    \begin{equation}\label{eqn:qmLecture10:460}
    -i \Hbar c \sigma_z \int_{-\epsilon}^\epsilon \PD{x}{\psi} +
    {m c^2 \sigma_x \int_{-\epsilon}^\epsilon \psi}
    =
    {\int_{-\epsilon}^\epsilon \lr{ E - V_0 } \psi},
    \end{equation}

the second two integrals vanish as \( \epsilon \rightarrow 0 \), so

\begin{equation}\label{eqn:qmLecture10:340}
-i \Hbar c \sigma_z \lr{ \psi(\epsilon) - \psi(-\epsilon) }
=
-i \Hbar c \sigma_z \lr{ \psi_{\textrm{R}} - \psi_{\textrm{L}} } = 0,
\end{equation}

so we must match

\begin{equation}\label{eqn:qmLecture10:360}
\sigma_z \psi_{\textrm{R}} = \sigma_z \psi_{\textrm{L}} .
\end{equation}

It appears that things are simpler, because we only have to match the wave function values at the boundary, and don’t have to match the derivatives too. However, we have a two-component wave function, so there are still two conditions to satisfy.

Solving the system

Let’s look for a solution for the \( V_0 > E + m c^2 \) case on the right branch, as sketched in fig. 8.

fig. 8. High potential region. Anti-particle transmission.

While the right branch in this case is left going, this might work out since that is an antiparticle. We could try both.

Try

\begin{equation}\label{eqn:qmLecture10:480}
\Psi_{II} = D e^{i k_2 x}
\begin{bmatrix}
-\sin\theta_2 \\
\cos\theta_2
\end{bmatrix}.
\end{equation}

This is justified by

\begin{equation}\label{eqn:qmLecture10:500}
+E \rightarrow
\begin{bmatrix}
\cos\theta \\
\sin\theta
\end{bmatrix},
\end{equation}

so

\begin{equation}\label{eqn:qmLecture10:520}
-E \rightarrow
\begin{bmatrix}
-\sin\theta \\
\cos\theta \\
\end{bmatrix}.
\end{equation}

At \( x = 0 \) the exponentials are all unity, so equating the waves at that point means

\begin{equation}\label{eqn:qmLecture10:380}
\begin{bmatrix}
\cos\theta_1 \\
\sin\theta_1 \\
\end{bmatrix}
+
\frac{B}{A}
\begin{bmatrix}
\sin\theta_1 \\
\cos\theta_1 \\
\end{bmatrix}
=
\frac{D}{A}
\begin{bmatrix}
-\sin\theta_2 \\
\cos\theta_2
\end{bmatrix}.
\end{equation}
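Multiplying the top row by \( \cos\theta_2 \), the bottom row by \( \sin\theta_2 \), and adding eliminates \( D/A \), leaving

\begin{equation}\label{eqn:qmLecture10:560}
\cos( \theta_1 - \theta_2 ) + \frac{B}{A} \sin( \theta_1 + \theta_2 ) = 0.
\end{equation}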

Solving this yields

\begin{equation}\label{eqn:qmLecture10:400}
\frac{B}{A} = - \frac{\cos(\theta_1 - \theta_2)}{\sin(\theta_1 + \theta_2)}.
\end{equation}

Squaring, and applying the double angle identities \( \cos^2 x = \lr{ 1 + \cos 2x }/2 \) and \( \sin^2 x = \lr{ 1 - \cos 2x }/2 \), this yields

\begin{equation}\label{eqn:qmLecture10:420}
\boxed{
R = \frac{1 + \cos( 2 \theta_1 - 2 \theta_2) }{1 - \cos( 2 \theta_1 + 2 \theta_2)}.
}

As \( V_0 \rightarrow \infty \) this simplifies to

\begin{equation}\label{eqn:qmLecture10:440}
R = \frac{ E - \sqrt{ E^2 - \lr{ m c^2 }^2 } }{ E + \sqrt{ E^2 - \lr{ m c^2 }^2 } }.
\end{equation}
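To see this (a sketch, assuming the region II angle is defined by the analogous relations with \( E - V_0 \) in place of \( E_{k_1} \)): as \( V_0 \rightarrow \infty \), \( \cos 2\theta_2 = \ifrac{\Hbar c k_2}{(E - V_0)} \rightarrow -1 \) and \( \sin 2\theta_2 = \ifrac{m c^2}{(E - V_0)} \rightarrow 0 \), so \( 2\theta_2 \rightarrow \pi \), and

\begin{equation}\label{eqn:qmLecture10:580}
R \rightarrow \frac{1 - \cos 2\theta_1 }{1 + \cos 2\theta_1} = \frac{E - \Hbar c k_1}{E + \Hbar c k_1},
\end{equation}

with \( \Hbar c k_1 = \sqrt{ E^2 - \lr{ m c^2 }^2 } \).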

Filling in the details for these results is part of problem set 4.

Second update of aggregate notes for phy1520, Graduate Quantum Mechanics

October 20, 2015 phy1520

I’ve posted a second update of my aggregate notes for PHY1520H Graduate Quantum Mechanics, taught by Prof. Arun Paramekanti. In addition to what was noted previously, this contains lecture notes up to lecture 9, my ungraded solutions for the second problem set, and some additional worked practice problems.

Most of the content was posted individually in the following locations, but those original documents will not be maintained individually any further.

Plane wave ground state expectation for SHO

October 18, 2015 phy1520


Problem 2.18 of [1] is, for a 1D SHO, to show that

\begin{equation}\label{eqn:exponentialExpectationGroundState:20}
\bra{0} e^{i k x} \ket{0} = \exp\lr{ -k^2 \bra{0} x^2 \ket{0}/2 }.
\end{equation}

Despite the simple appearance of this problem, I found this quite involved to show. To do so, start with a series expansion of the expectation

\begin{equation}\label{eqn:exponentialExpectationGroundState:40}
\bra{0} e^{i k x} \ket{0}
=
\sum_{m=0}^\infty \frac{(i k)^m}{m!} \bra{0} x^m \ket{0}.
\end{equation}

Let

\begin{equation}\label{eqn:exponentialExpectationGroundState:60}
X = \lr{ a + a^\dagger },
\end{equation}

so that

\begin{equation}\label{eqn:exponentialExpectationGroundState:80}
x
= \sqrt{\frac{\Hbar}{2 \omega m}} X
= \frac{x_0}{\sqrt{2}} X.
\end{equation}

Consider the first few values of \( \bra{0} X^n \ket{0} \)

\begin{equation}\label{eqn:exponentialExpectationGroundState:100}
\begin{aligned}
\bra{0} X \ket{0}
&=
\bra{0} \lr{ a + a^\dagger } \ket{0} \\
&=
\braket{0}{1} \\
&=
0,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:exponentialExpectationGroundState:120}
\begin{aligned}
\bra{0} X^2 \ket{0}
&=
\bra{0} \lr{ a + a^\dagger }^2 \ket{0} \\
&=
\braket{1}{1} \\
&=
1,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:exponentialExpectationGroundState:140}
\begin{aligned}
\bra{0} X^3 \ket{0}
&=
\bra{0} \lr{ a + a^\dagger }^3 \ket{0} \\
&=
\bra{1} \lr{ \sqrt{2} \ket{2} + \ket{0} } \\
&=
0.
\end{aligned}
\end{equation}

Whenever the power \( n \) in \( X^n \) is odd, the ket \( X^n \ket{0} \) is a superposition of only odd numbered eigenstates, all of which are orthogonal to \( \bra{0} \). We conclude that \( \bra{0} X^n \ket{0} = 0 \) when \( n \) is odd.

Noting that \( \bra{0} x^2 \ket{0} = \ifrac{x_0^2}{2} \), this leaves

\begin{equation}\label{eqn:exponentialExpectationGroundState:160}
\begin{aligned}
\bra{0} e^{i k x} \ket{0}
&=
\sum_{m=0}^\infty \frac{(i k)^{2 m}}{(2 m)!} \bra{0} x^{2m} \ket{0} \\
&=
\sum_{m=0}^\infty \frac{(i k)^{2 m}}{(2 m)!} \lr{ \frac{x_0^2}{2} }^m \bra{0} X^{2m} \ket{0} \\
&=
\sum_{m=0}^\infty \frac{1}{(2 m)!} \lr{ -k^2 \bra{0} x^2 \ket{0} }^m \bra{0} X^{2m} \ket{0}.
\end{aligned}
\end{equation}

This problem is now reduced to showing that

\begin{equation}\label{eqn:exponentialExpectationGroundState:180}
\frac{1}{(2 m)!} \bra{0} X^{2m} \ket{0} = \inv{m! 2^m},
\end{equation}

or

\begin{equation}\label{eqn:exponentialExpectationGroundState:200}
\begin{aligned}
\bra{0} X^{2m} \ket{0}
&= \frac{(2m)!}{m! 2^m} \\
&= \frac{ (2m)(2m-1)(2m-2) \cdots (2)(1) }{2^m m!} \\
&= \frac{ 2^m (m)(2m-1)(m-1)(2m-3)(m-2) \cdots (2)(3)(1)(1) }{2^m m!} \\
&= (2m-1)!!,
\end{aligned}
\end{equation}

where \( n!! = n(n-2)(n-4)\cdots \).

It looks like \( \bra{0} X^{2m} \ket{0} \) can be expanded by inserting an identity operator and proceeding recursively, like

\begin{equation}\label{eqn:exponentialExpectationGroundState:220}
\begin{aligned}
\bra{0} X^{2m} \ket{0}
&=
\bra{0} X^2 \lr{ \sum_{n=0}^\infty \ket{n}\bra{n} } X^{2m-2} \ket{0} \\
&=
\bra{0} X^2 \lr{ \ket{0}\bra{0} + \ket{2}\bra{2} } X^{2m-2} \ket{0} \\
&=
\bra{0} X^{2m-2} \ket{0} + \bra{0} X^2 \ket{2} \bra{2} X^{2m-2} \ket{0}.
\end{aligned}
\end{equation}

This has made use of the observation that \( \bra{0} X^2 \ket{n} = 0 \) for all \( n \ne 0,2 \). The remaining term includes the factor

\begin{equation}\label{eqn:exponentialExpectationGroundState:240}
\begin{aligned}
\bra{0} X^2 \ket{2}
&=
\bra{0} \lr{a + a^\dagger}^2 \ket{2} \\
&=
\lr{ \bra{0} + \sqrt{2} \bra{2} } \ket{2} \\
&=
\sqrt{2}.
\end{aligned}
\end{equation}

Since \( \sqrt{2} \ket{2} = \lr{a^\dagger}^2 \ket{0} \), the expectation of interest can be written

\begin{equation}\label{eqn:exponentialExpectationGroundState:260}
\bra{0} X^{2m} \ket{0}
=
\bra{0} X^{2m-2} \ket{0} + \bra{0} a^2 X^{2m-2} \ket{0}.
\end{equation}

How do we expand the second term? Let’s look at how \( a \) and \( X \) commute

\begin{equation}\label{eqn:exponentialExpectationGroundState:280}
\begin{aligned}
a X
&=
\antisymmetric{a}{X} + X a \\
&=
\antisymmetric{a}{a + a^\dagger} + X a \\
&=
\antisymmetric{a}{a^\dagger} + X a \\
&=
1 + X a,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:exponentialExpectationGroundState:300}
\begin{aligned}
a^2 X
&=
a \lr{ a X } \\
&=
a \lr{ 1 + X a } \\
&=
a + a X a \\
&=
a + \lr{ 1 + X a } a \\
&=
2 a + X a^2.
\end{aligned}
\end{equation}

Proceeding to expand \( a^2 X^n \) we find
\begin{equation}\label{eqn:exponentialExpectationGroundState:320}
\begin{aligned}
a^2 X^3 &= 6 X + 6 X^2 a + X^3 a^2 \\
a^2 X^4 &= 12 X^2 + 8 X^3 a + X^4 a^2 \\
a^2 X^5 &= 20 X^3 + 10 X^4 a + X^5 a^2 \\
a^2 X^6 &= 30 X^4 + 12 X^5 a + X^6 a^2.
\end{aligned}
\end{equation}

It appears that we have
\begin{equation}\label{eqn:exponentialExpectationGroundState:340}
\antisymmetric{a^2}{X^n} = a^2 X^n - X^n a^2 = \beta_n X^{n-2} + 2 n X^{n-1} a,
\end{equation}

where

\begin{equation}\label{eqn:exponentialExpectationGroundState:360}
\beta_n = \beta_{n-1} + 2 (n-1),
\end{equation}

and \( \beta_2 = 2 \). Some goofing around shows that \( \beta_n = n(n-1) \), so the induction hypothesis is

\begin{equation}\label{eqn:exponentialExpectationGroundState:380}
\antisymmetric{a^2}{X^n} = a^2 X^n - X^n a^2 = n(n-1) X^{n-2} + 2 n X^{n-1} a.
\end{equation}
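Before verifying the induction formally, here is a quick numerical sanity check (my addition) using truncated matrix representations of \( a \) and \( X \) in the number basis; the assumption is that a 40 level truncation keeps edge effects out of the compared low-index block:

import numpy as np

N = 40  # truncated oscillator basis; identities are exact away from the edge
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation: a|n> = sqrt(n) |n-1>
X = a + a.T                                 # X = a + a^dagger
mp = np.linalg.matrix_power

# the hypothesis: [a^2, X^n] = n(n-1) X^{n-2} + 2n X^{n-1} a
for n in range(2, 7):
    comm = a @ a @ mp(X, n) - mp(X, n) @ a @ a
    rhs = n * (n - 1) * mp(X, n - 2) + 2 * n * mp(X, n - 1) @ a
    print(n, np.allclose(comm[:20, :20], rhs[:20, :20]))  # all True

# the target expectation: <0| X^{2m} |0> = (2m-1)!!
for m in range(6):
    print(m, np.isclose(mp(X, 2 * m)[0, 0], np.prod(np.arange(1, 2 * m, 2))))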

Let’s check the induction
\begin{equation}\label{eqn:exponentialExpectationGroundState:400}
\begin{aligned}
a^2 X^{n+1}
&=
a^2 X^{n} X \\
&=
\lr{ n(n-1) X^{n-2} + 2 n X^{n-1} a + X^n a^2 } X \\
&=
n(n-1) X^{n-1} + 2 n X^{n-1} a X + X^n a^2 X \\
&=
n(n-1) X^{n-1} + 2 n X^{n-1} \lr{ 1 + X a } + X^n \lr{ 2 a + X a^2 } \\
&=
n(n-1) X^{n-1} + 2 n X^{n-1} + 2 n X^{n} a
+ 2 X^n a
+ X^{n+1} a^2 \\
&=
X^{n+1} a^2 + (2 + 2 n) X^{n} a + \lr{ 2 n + n(n-1) } X^{n-1} \\
&=
X^{n+1} a^2 + 2(n + 1) X^{n} a + (n+1) n X^{n-1},
\end{aligned}
\end{equation}

which concludes the induction, giving

\begin{equation}\label{eqn:exponentialExpectationGroundState:420}
\bra{ 0 } a^2 X^{n} \ket{0 } = n(n-1) \bra{0} X^{n-2} \ket{0},
\end{equation}

and

\begin{equation}\label{eqn:exponentialExpectationGroundState:440}
\bra{0} X^{2m} \ket{0}
=
\bra{0} X^{2m-2} \ket{0} + (2m-2)(2m-3) \bra{0} X^{2m-4} \ket{0}.
\end{equation}

Let

\begin{equation}\label{eqn:exponentialExpectationGroundState:460}
\sigma_{n} = \bra{0} X^n \ket{0},
\end{equation}

so that the recurrence relation, for \( 2n \ge 4 \), is

\begin{equation}\label{eqn:exponentialExpectationGroundState:480}
\sigma_{2n} = \sigma_{2n -2} + (2n-2)(2n-3) \sigma_{2n -4}.
\end{equation}

We want to show that this simplifies to

\begin{equation}\label{eqn:exponentialExpectationGroundState:500}
\sigma_{2n} = (2n-1)!!.
\end{equation}

The first values are

\begin{equation}\label{eqn:exponentialExpectationGroundState:540}
\sigma_0 = \bra{0} X^0 \ket{0} = 1
\end{equation}
\begin{equation}\label{eqn:exponentialExpectationGroundState:560}
\sigma_2 = \bra{0} X^2 \ket{0} = 1
\end{equation}

which gives us the right result for the first term in the induction

\begin{equation}\label{eqn:exponentialExpectationGroundState:580}
\begin{aligned}
\sigma_4
&= \sigma_2 + 2 \times 1 \times \sigma_0 \\
&= 1 + 2 \\
&= 3!!
\end{aligned}
\end{equation}

For the general induction term, consider

\begin{equation}\label{eqn:exponentialExpectationGroundState:600}
\begin{aligned}
\sigma_{2n + 2}
&= \sigma_{2n} + 2 n (2n - 1) \sigma_{2n -2} \\
&= (2n-1)!! + 2n ( 2n - 1) (2n -3)!! \\
&= (2n + 1) (2n -1)!! \\
&= (2n + 1)!!,
\end{aligned}
\end{equation}

which completes the final induction. That was also the last thing required to complete the proof, so we are done!
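For completeness, substituting \( \bra{0} X^{2m} \ket{0} = (2m-1)!! = \ifrac{(2m)!}{m! 2^m} \) back into the series gives

\begin{equation}\label{eqn:exponentialExpectationGroundState:620}
\bra{0} e^{i k x} \ket{0}
=
\sum_{m=0}^\infty \inv{m!} \lr{ -\frac{k^2 \bra{0} x^2 \ket{0}}{2} }^m
=
\exp\lr{ -k^2 \bra{0} x^2 \ket{0}/2 },
\end{equation}

which is the desired result.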

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.