Month: November 2015

Alternate Dirac equation representation

November 27, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Given an alternate representation of the Dirac equation

\begin{equation}\label{eqn:diracAlternate:20}
H =
\begin{bmatrix}
m c^2 + V_0 & c \hat{p} \\
c \hat{p} & – m c^2 + V_0
\end{bmatrix},
\end{equation}

calculate the constant momentum solutions, the Heisenberg velocity operator \( \hat{v} \), and find the form of the probability density current.

Plane wave solutions

The action of the Hamiltonian on

\begin{equation}\label{eqn:diracAlternate:40}
\psi =
e^{i k x – i E t/\Hbar}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix}
\end{equation}

is
\begin{equation}\label{eqn:diracAlternate:60}
\begin{aligned}
H \psi
&=
\begin{bmatrix}
m c^2 + V_0 & c (-i \Hbar) i k \\
c (-i \Hbar) i k & – m c^2 + V_0
\end{bmatrix}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix}
e^{i k x – i E t/\Hbar} \\
&=
\begin{bmatrix}
m c^2 + V_0 & c \Hbar k \\
c \Hbar k & – m c^2 + V_0
\end{bmatrix}
\psi.
\end{aligned}
\end{equation}

Writing

\begin{equation}\label{eqn:diracAlternate:80}
H_k
=
\begin{bmatrix}
m c^2 + V_0 & c \Hbar k \\
c \Hbar k & – m c^2 + V_0
\end{bmatrix}
\end{equation}

the characteristic equation is

\begin{equation}\label{eqn:diracAlternate:100}
0
=
(m c^2 + V_0 – \lambda)
(-m c^2 + V_0 – \lambda)
– (c \Hbar k)^2
=
\lr{ (\lambda – V_0)^2 – (m c^2)^2 } – (c \Hbar k)^2,
\end{equation}

so

\begin{equation}\label{eqn:diracAlternate:120}
\lambda = V_0 \pm \epsilon,
\end{equation}

where
\begin{equation}\label{eqn:diracAlternate:140}
\epsilon^2 = (m c^2)^2 + (c \Hbar k)^2.
\end{equation}

We’ve got

\begin{equation}\label{eqn:diracAlternate:160}
\begin{aligned}
H – ( V_0 + \epsilon )
&=
\begin{bmatrix}
m c^2 – \epsilon & c \Hbar k \\
c \Hbar k & – m c^2 – \epsilon
\end{bmatrix} \\
H – ( V_0 – \epsilon )
&=
\begin{bmatrix}
m c^2 + \epsilon & c \Hbar k \\
c \Hbar k & – m c^2 + \epsilon
\end{bmatrix},
\end{aligned}
\end{equation}

so the eigenkets are

\begin{equation}\label{eqn:diracAlternate:180}
\begin{aligned}
\ket{V_0+\epsilon}
&\propto
\begin{bmatrix}
-c \Hbar k \\
m c^2 – \epsilon
\end{bmatrix} \\
\ket{V_0-\epsilon}
&\propto
\begin{bmatrix}
-c \Hbar k \\
m c^2 + \epsilon
\end{bmatrix}.
\end{aligned}
\end{equation}

Up to an arbitrary phase for each, these are

\begin{equation}\label{eqn:diracAlternate:200}
\begin{aligned}
\ket{V_0 + \epsilon}
&=
\inv{\sqrt{ 2 \epsilon ( \epsilon – m c^2) }}
\begin{bmatrix}
c \Hbar k \\
\epsilon -m c^2
\end{bmatrix} \\
\ket{V_0 – \epsilon}
&=
\inv{\sqrt{ 2 \epsilon ( \epsilon + m c^2) }}
\begin{bmatrix}
-c \Hbar k \\
\epsilon + m c^2
\end{bmatrix}.
\end{aligned}
\end{equation}

We can now write

\begin{equation}\label{eqn:diracAlternate:220}
H_k =
E
\begin{bmatrix}
V_0 + \epsilon & 0 \\
0 & V_0 – \epsilon
\end{bmatrix}
E^{-1},
\end{equation}

where
\begin{equation}\label{eqn:diracAlternate:240}
\begin{aligned}
E &=
\inv{\sqrt{2 \epsilon} }
\begin{bmatrix}
\frac{c \Hbar k}{ \sqrt{ \epsilon – m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } \\
\sqrt{ \epsilon – m c^2 } & \sqrt{ \epsilon + m c^2 }
\end{bmatrix}, \qquad k > 0 \\
E &=
\inv{\sqrt{2 \epsilon} }
\begin{bmatrix}
-\frac{c \Hbar k}{ \sqrt{ \epsilon – m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } \\
-\sqrt{ \epsilon – m c^2 } & \sqrt{ \epsilon + m c^2 }
\end{bmatrix}, \qquad k < 0.
\end{aligned}
\end{equation}

Here the signs have been adjusted to ensure the transformation matrix has a unit determinant. Observe that there's redundancy in this matrix since \( \ifrac{c \Hbar \Abs{k}}{ \sqrt{ \epsilon - m c^2 } } = \sqrt{ \epsilon + m c^2 } \), and \( \ifrac{c \Hbar \Abs{k}}{ \sqrt{ \epsilon + m c^2 } } = \sqrt{ \epsilon - m c^2 } \), which allows the transformation matrix to be written in the form of a rotation matrix

\begin{equation}\label{eqn:diracAlternate:260}
\begin{aligned}
E &=
\inv{\sqrt{2 \epsilon} }
\begin{bmatrix}
\frac{c \Hbar k}{ \sqrt{ \epsilon - m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } \\
\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } & \frac{c \Hbar k}{ \sqrt{ \epsilon - m c^2 } }
\end{bmatrix}, \qquad k > 0 \\
E &=
\inv{\sqrt{2 \epsilon} }
\begin{bmatrix}
-\frac{c \Hbar k}{ \sqrt{ \epsilon - m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } \\
\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon - m c^2 } }
\end{bmatrix}, \qquad k < 0.
\end{aligned}
\end{equation}

With

\begin{equation}\label{eqn:diracAlternate:280}
\begin{aligned}
\cos\theta &= \frac{c \Hbar \Abs{k}}{ \sqrt{ 2 \epsilon( \epsilon - m c^2) } } = \frac{\sqrt{ \epsilon + m c^2} }{ \sqrt{ 2 \epsilon}} \\
\sin\theta &= \frac{c \Hbar k}{ \sqrt{ 2 \epsilon( \epsilon + m c^2) } } = \frac{\textrm{sgn}(k) \sqrt{ \epsilon - m c^2}}{ \sqrt{ 2 \epsilon } },
\end{aligned}
\end{equation}

the transformation matrix (and eigenkets) is

\begin{equation}\label{eqn:diracAlternate:300}
\boxed{
E =
\begin{bmatrix}
\ket{V_0 + \epsilon} & \ket{V_0 - \epsilon}
\end{bmatrix}
=
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix}.
}
\end{equation}

Observe that \ref{eqn:diracAlternate:280} can be simplified by using double angle formulas

\begin{equation}\label{eqn:diracAlternate:320}
\begin{aligned}
\cos(2 \theta)
&= \frac{\lr{ \epsilon + m c^2} }{ 2 \epsilon } - \frac{\lr{ \epsilon - m c^2}}{ 2 \epsilon } \\
&= \frac{1}{ 2 \epsilon } \lr{ \epsilon + m c^2 - \epsilon + m c^2 } \\
&= \frac{m c^2 }{ \epsilon },
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:diracAlternate:340}
\sin(2\theta) = 2 \frac{1}{2 \epsilon} \textrm{sgn}(k) \sqrt{ \epsilon^2 - (m c^2)^2 } = \frac{\Hbar k c}{\epsilon}.
\end{equation}

This allows all the \( \theta \) dependence on \( \Hbar k c \) and \( m c^2 \) to be expressed as a ratio of momenta

\begin{equation}\label{eqn:diracAlternate:360}
\boxed{
\tan(2\theta) = \frac{\Hbar k}{m c}.
}
\end{equation}
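
As a quick numerical sanity check (not part of the original derivation; \( \Hbar = c = 1 \) and arbitrary numeric values for \( m \), \( k \), \( V_0 \) are assumed), the eigenvalues and the rotation-matrix diagonalization above can be verified directly:

import numpy as np

# hbar = c = 1, with arbitrary sample values for m, k, V0
hbar = c = 1.0
m, k, V0 = 1.3, 0.7, 0.2
eps = np.sqrt((m*c**2)**2 + (c*hbar*k)**2)
Hk = np.array([[m*c**2 + V0, c*hbar*k],
               [c*hbar*k, -m*c**2 + V0]])

# eigenvalues should be V0 +/- eps
print(np.allclose(np.sort(np.linalg.eigvalsh(Hk)), np.sort([V0 - eps, V0 + eps])))   # True

# rotation matrix with tan(2 theta) = hbar k/(m c) diagonalizes H_k
theta = 0.5*np.arctan2(hbar*k*c, m*c**2)
E = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(E @ np.diag([V0 + eps, V0 - eps]) @ E.T, Hk))                      # True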

Hyperbolic solutions

For a wave function of the form

\begin{equation}\label{eqn:diracAlternate:380}
\psi =
e^{k x – i E t/\Hbar}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix},
\end{equation}

some of the work above can be recycled if we substitute \( k \rightarrow -i k \), which yields unnormalized eigenfunctions

\begin{equation}\label{eqn:diracAlternate:400}
\begin{aligned}
\ket{V_0+\epsilon}
&\propto
\begin{bmatrix}
i c \Hbar k \\
m c^2 – \epsilon
\end{bmatrix} \\
\ket{V_0-\epsilon}
&\propto
\begin{bmatrix}
i c \Hbar k \\
m c^2 + \epsilon
\end{bmatrix},
\end{aligned}
\end{equation}

where

\begin{equation}\label{eqn:diracAlternate:420}
\epsilon^2 = (m c^2)^2 – (c \Hbar k)^2.
\end{equation}

The squared magnitudes of these wavefunctions are

\begin{equation}\label{eqn:diracAlternate:440}
\begin{aligned}
(c \Hbar k)^2 + (m c^2 \mp \epsilon)^2
&=
(c \Hbar k)^2 + (m c^2)^2 + \epsilon^2 \mp 2 m c^2 \epsilon \\
&=
(c \Hbar k)^2 + (m c^2)^2 + (m c^2)^2 - (c \Hbar k)^2 \mp 2 m c^2 \epsilon \\
&= 2 (m c^2)^2 \mp 2 m c^2 \epsilon \\
&= 2 m c^2 ( m c^2 \mp \epsilon ),
\end{aligned}
\end{equation}

so, up to a constant phase for each, the normalized kets are

\begin{equation}\label{eqn:diracAlternate:460}
\begin{aligned}
\ket{V_0+\epsilon}
&=
\inv{\sqrt{ 2 m c^2 ( m c^2 – \epsilon ) }}
\begin{bmatrix}
i c \Hbar k \\
m c^2 – \epsilon
\end{bmatrix} \\
\ket{V_0-\epsilon}
&=
\inv{\sqrt{ 2 m c^2 ( m c^2 + \epsilon ) }}
\begin{bmatrix}
i c \Hbar k \\
m c^2 + \epsilon
\end{bmatrix},
\end{aligned}
\end{equation}

After the \( k \rightarrow -i k \) substitution, \( H_k \) is not Hermitian, so these kets aren’t expected to be orthonormal, which is readily verified

\begin{equation}\label{eqn:diracAlternate:480}
\begin{aligned}
\braket{V_0+\epsilon}{V_0-\epsilon}
&=
\inv{\sqrt{ 2 m c^2 ( m c^2 – \epsilon ) }}
\inv{\sqrt{ 2 m c^2 ( m c^2 + \epsilon ) }}
\begin{bmatrix}
-i c \Hbar k &
m c^2 – \epsilon
\end{bmatrix}
\begin{bmatrix}
i c \Hbar k \\
m c^2 + \epsilon
\end{bmatrix} \\
&=
\frac{ 2 ( c \Hbar k )^2 }{2 m c^2 \sqrt{(\Hbar k c)^2} } \\
&=
\textrm{sgn}(k)
\frac{
\Hbar k }{m c } .
\end{aligned}
\end{equation}
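
This can be spot-checked numerically (a sketch assuming \( \Hbar = c = 1 \), with \( \Abs{\Hbar k} < m c \) so that \( \epsilon \) stays real):

import numpy as np

hbar = c = 1.0
m, k = 1.0, 0.4
eps = np.sqrt((m*c**2)**2 - (c*hbar*k)**2)
up = np.array([1j*c*hbar*k, m*c**2 - eps]) / np.sqrt(2*m*c**2*(m*c**2 - eps))
dn = np.array([1j*c*hbar*k, m*c**2 + eps]) / np.sqrt(2*m*c**2*(m*c**2 + eps))
print(np.vdot(up, dn))                  # ~0.4
print(np.sign(k)*hbar*k/(m*c))          # 0.4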

Heisenberg velocity operator

\begin{equation}\label{eqn:diracAlternate:500}
\begin{aligned}
\hat{v}
&= \inv{i \Hbar} \antisymmetric{ \hat{x} }{ H} \\
&= \inv{i \Hbar} \antisymmetric{ \hat{x} }{ m c^2 \sigma_z + V_0 + c \hat{p} \sigma_x } \\
&= \frac{c \sigma_x}{i \Hbar} \antisymmetric{ \hat{x} }{ \hat{p} } \\
&= c \sigma_x.
\end{aligned}
\end{equation}

Probability current

Acting on a completely general wavefunction, Schrodinger's equation for this Hamiltonian is

\begin{equation}\label{eqn:diracAlternate:520}
\begin{aligned}
i \Hbar \PD{t}{\psi}
&= m c^2 \sigma_z \psi + V_0 \psi + c \hat{p} \sigma_x \psi \\
&= m c^2 \sigma_z \psi + V_0 \psi -i \Hbar c \sigma_x \PD{x}{\psi}.
\end{aligned}
\end{equation}

The conjugate of this equation is

\begin{equation}\label{eqn:diracAlternate:540}
-i \Hbar \PD{t}{\psi^\dagger}
= m c^2 \psi^\dagger \sigma_z + V_0 \psi^\dagger +i \Hbar c \PD{x}{\psi^\dagger} \sigma_x.
\end{equation}

These give

\begin{equation}\label{eqn:diracAlternate:560}
\begin{aligned}
i \Hbar \psi^\dagger \PD{t}{\psi}
&=
m c^2 \psi^\dagger \sigma_z \psi + V_0 \psi^\dagger \psi -i \Hbar c \psi^\dagger \sigma_x \PD{x}{\psi} \\
-i \Hbar \PD{t}{\psi^\dagger} \psi
&= m c^2 \psi^\dagger \sigma_z \psi + V_0 \psi^\dagger \psi +i \Hbar c \PD{x}{\psi^\dagger} \sigma_x \psi.
\end{aligned}
\end{equation}

Taking the difference of these and dividing by \( i \Hbar \) gives
\begin{equation}\label{eqn:diracAlternate:580}
\psi^\dagger \PD{t}{\psi} + \PD{t}{\psi^\dagger} \psi
=
– c \psi^\dagger \sigma_x \PD{x}{\psi} – c \PD{x}{\psi^\dagger} \sigma_x \psi,
\end{equation}

or

\begin{equation}\label{eqn:diracAlternate:600}
0
=
\PD{t}{}
\lr{
\psi^\dagger \psi
}
+
\PD{x}{}
\lr{
c \psi^\dagger \sigma_x \psi
}.
\end{equation}

The probability density still has the usual form \( \rho = \psi^\dagger \psi = \psi_1^\conj \psi_1 + \psi_2^\conj \psi_2 \), but the probability current with this representation of the Dirac Hamiltonian is

\begin{equation}\label{eqn:diracAlternate:620}
\begin{aligned}
j
&= c \psi^\dagger \sigma_x \psi \\
&= c
\begin{bmatrix}
\psi_1^\conj &
\psi_2^\conj
\end{bmatrix}
\begin{bmatrix}
\psi_2 \\
\psi_1
\end{bmatrix} \\
&= c \lr{ \psi_1^\conj \psi_2 + \psi_2^\conj \psi_1 }.
\end{aligned}
\end{equation}
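
As a sanity check (a sketch, not from the original post; it assumes \( \Hbar = m = c = 1 \) and \( V_0 = 0 \)), the continuity equation can be verified numerically for a superposition of two positive energy plane-wave states, using finite differences:

import numpy as np

hbar = m = c = 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def eigenstate(k):
    # positive energy eigenvector of H_k with V0 = 0, from the plane wave solutions above
    eps = np.sqrt((m*c**2)**2 + (c*hbar*k)**2)
    u = np.array([c*hbar*k, eps - m*c**2]) / np.sqrt(2*eps*(eps - m*c**2))
    return u, eps

def psi(x, t):
    # superposition of two plane wave eigenstates (sample momenta and amplitudes)
    out = np.zeros(2, dtype=complex)
    for k, a in ((0.8, 1.0), (-1.3, 0.5)):
        u, E = eigenstate(k)
        out = out + a*u*np.exp(1j*(k*x - E*t/hbar))
    return out

rho = lambda x, t: np.vdot(psi(x, t), psi(x, t)).real
j = lambda x, t: (np.conj(psi(x, t)) @ (c*sx) @ psi(x, t)).real

x0, t0, h = 0.3, 0.2, 1e-5
drho_dt = (rho(x0, t0 + h) - rho(x0, t0 - h))/(2*h)
dj_dx = (j(x0 + h, t0) - j(x0 - h, t0))/(2*h)
print(drho_dt + dj_dx)    # ~0, up to finite difference error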

PHY1520H Graduate Quantum Mechanics. Lecture 19: Variational method. Taught by Prof. Arun Paramekanti

November 27, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 5 content.

Variational method

Today we want to use the variational degree of freedom to try to solve some problems that we don’t have analytic solutions for.

Anharmonic oscillator

\begin{equation}\label{eqn:qmLecture19:20}
V(x) = \inv{2} m \omega^2 x^2 + \lambda x^4, \qquad \lambda \ge 0.
\end{equation}

This potential grows faster than the harmonic oscillator potential, which had the ground state solution

\begin{equation}\label{eqn:qmLecture19:40}
\psi(x) = \inv{\pi^{1/4}} \inv{a_0^{1/2} } e^{- x^2/2 a_0^2},
\end{equation}

where
\begin{equation}\label{eqn:qmLecture19:60}
a_0 = \sqrt{\frac{\Hbar}{m \omega}}.
\end{equation}

Let’s try allowing \( a_0 \rightarrow a \), to be a variational degree of freedom

\begin{equation}\label{eqn:qmLecture19:80}
\psi_a(x) = \inv{\pi^{1/4}} \inv{a^{1/2} } e^{- x^2/2 a^2},
\end{equation}

\begin{equation}\label{eqn:qmLecture19:100}
\bra{\psi_a} H \ket{\psi_a}
=
\bra{\psi_a} \frac{p^2}{2m} + \inv{2} m \omega^2 x^2 + \lambda x^4 \ket{\psi_a}
\end{equation}

We can find
\begin{equation}\label{eqn:qmLecture19:120}
\expectation{x^2} = \inv{2} a^2
\end{equation}
\begin{equation}\label{eqn:qmLecture19:140}
\expectation{x^4} = \frac{3}{4} a^4
\end{equation}
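
Both moments follow from simple Gaussian integrals; a quick sympy check (not part of the lecture):

import sympy as sp

x, a = sp.symbols('x a', positive=True)
psi_a = sp.pi**sp.Rational(-1, 4)/sp.sqrt(a)*sp.exp(-x**2/(2*a**2))
print(sp.integrate(x**2*psi_a**2, (x, -sp.oo, sp.oo)))   # a**2/2
print(sp.integrate(x**4*psi_a**2, (x, -sp.oo, sp.oo)))   # 3*a**4/4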

Define

\begin{equation}\label{eqn:qmLecture19:160}
\tilde{\omega} = \frac{\Hbar}{m a^2},
\end{equation}

so that

\begin{equation}\label{eqn:qmLecture19:180}
\overline{{E}}_a
=
\bra{\psi_a} \lr{ \frac{p^2}{2m} + \inv{2} m \tilde{\omega}^2 x^2 }
+ \lr{
\inv{2} m \lr{ \omega^2 – \tilde{\omega}^2 } x^2
+
\lambda x^4 }
\ket{\psi_a}
=
\inv{2} \Hbar \tilde{\omega} + \inv{2} m \lr{ \omega^2 – \tilde{\omega}^2 } \inv{2} a^2 + \frac{3}{4} \lambda a^4.
\end{equation}

Write this as
\begin{equation}\label{eqn:qmLecture19:200}
\overline{{E}}_{\tilde{\omega}}
=
\inv{2} \Hbar \tilde{\omega} + \inv{4} \frac{\Hbar}{\tilde{\omega}} \lr{ \omega^2 – \tilde{\omega}^2 } + \frac{3}{4} \lambda \frac{\Hbar^2}{m^2 \tilde{\omega}^2 }.
\end{equation}

This might look something like fig. 1.

fig. 1: Energy after perturbation.

Demand that

\begin{equation}\label{eqn:qmLecture19:220}
0
= \PD{\tilde{\omega}}{ \overline{{E}}_{\tilde{\omega}}}
=
\frac{\Hbar}{2} – \frac{\Hbar}{4} \frac{\omega^2}{\tilde{\omega}^2}
– \frac{\Hbar}{4}
+ \frac{3}{4} (-2) \frac{\lambda \Hbar^2}{m^2 \tilde{\omega}^3}
=
\frac{\Hbar}{4}
\lr{
1 – \frac{\omega^2}{\tilde{\omega}^2}
– 6 \frac{\lambda \Hbar}{m^2 \tilde{\omega}^3}
}
\end{equation}

or
\begin{equation}\label{eqn:qmLecture19:260}
\tilde{\omega}^3 – \omega^2 \tilde{\omega} – \frac{6 \lambda \Hbar}{m^2} = 0.
\end{equation}

For \( \lambda a_0^4 \ll \Hbar \omega \), we have something like \( \tilde{\omega} = \omega + \epsilon \). Expanding \ref{eqn:qmLecture19:260} to first order in \( \epsilon \), this gives

\begin{equation}\label{eqn:qmLecture19:280}
\omega^3 + 3 \omega^2 \epsilon – \omega^2 \lr{ \omega + \epsilon } – \frac{6 \lambda \Hbar}{m^2} = 0,
\end{equation}

so that

\begin{equation}\label{eqn:qmLecture19:300}
2 \omega^2 \epsilon = \frac{6 \lambda \Hbar}{m^2},
\end{equation}

and

\begin{equation}\label{eqn:qmLecture19:320}
\Hbar \epsilon = \frac{ 3 \lambda \Hbar^2}{m^2 \omega^2 } = 3 \lambda a_0^4.
\end{equation}

Plugging into

\begin{equation}\label{eqn:qmLecture19:340}
\overline{{E}}_{\omega + \epsilon}
=
\inv{2} \Hbar \lr{ \omega + \epsilon }
+ \inv{4} \frac{\Hbar}{\omega} \lr{ -2 \omega \epsilon - \epsilon^2 } + \frac{3}{4} \lambda \frac{\Hbar^2}{m^2 \omega^2 }
\approx
\inv{2} \Hbar \lr{ \omega + \epsilon }
– \inv{2} \Hbar \epsilon
+ \frac{3}{4} \lambda \frac{\Hbar^2}{m^2 \omega^2 }
=
\inv{2} \Hbar \omega + \frac{3}{4} \lambda a_0^4.
\end{equation}

With \ref{eqn:qmLecture19:320}, that is

\begin{equation}\label{eqn:qmLecture19:540}
\overline{{E}}_{\tilde{\omega} = \omega + \epsilon} \approx \inv{2} \Hbar \lr{ \omega + \frac{\epsilon}{2} }.
\end{equation}

That is, the ground state energy is shifted up slightly, by half of the shift \( \epsilon \) in the effective oscillator frequency.

What do we have in the extreme anharmonic limit, where \( \lambda a_0^4 \gg \Hbar \omega \)? Now we get

\begin{equation}\label{eqn:qmLecture19:360}
\tilde{\omega}^\conj = \lr{ \frac{ 6 \Hbar \lambda }{m^2} }^{1/3},
\end{equation}

and
\begin{equation}\label{eqn:qmLecture19:380}
\overline{{E}}_{\tilde{\omega}^\conj} = \frac{\Hbar^{4/3} \lambda^{1/3}}{m^{2/3}} \frac{3}{8} 6^{1/3}.
\end{equation}

(this last result is pulled from a web treatment of the anharmonic oscillator). Note that the \( \lr{ \Hbar^4 \lambda/m^2 }^{1/3} \) factor in this energy could have been worked out on dimensional grounds.
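
That coefficient can also be obtained directly (a quick verification, not from the lecture): substituting \( \tilde{\omega}^\conj \) into \ref{eqn:qmLecture19:200} and dropping the \( \omega^2 \) term gives

\begin{equation}\label{eqn:qmLecture19:390}
\overline{{E}}_{\tilde{\omega}^\conj}
\approx
\inv{4} \Hbar \tilde{\omega}^\conj + \frac{3}{4} \frac{\lambda \Hbar^2}{m^2 \lr{\tilde{\omega}^\conj}^2}
=
\lr{ \inv{4} 6^{1/3} + \frac{3}{4} 6^{-2/3} } \frac{\Hbar^{4/3} \lambda^{1/3}}{m^{2/3}}
=
\frac{9}{4} 6^{-2/3} \frac{\Hbar^{4/3} \lambda^{1/3}}{m^{2/3}}
=
\frac{3}{8} 6^{1/3} \frac{\Hbar^{4/3} \lambda^{1/3}}{m^{2/3}}.
\end{equation}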

This variational method tends to work quite well in these limits. For this problem, with \( m = \omega = \Hbar = 1 \), we have the comparison of tab. 1.

tab. 1: Comparing numeric and variational solutions.
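
A rough numeric version of that comparison (a sketch, assuming \( m = \omega = \Hbar = 1 \)) just minimizes \ref{eqn:qmLecture19:200} over \( \tilde{\omega} \) and compares against the two limiting forms above:

import numpy as np
from scipy.optimize import minimize_scalar

hbar = m = w = 1.0

def Ebar(wt, lam):
    # the variational energy as a function of the trial frequency
    return 0.5*hbar*wt + 0.25*(hbar/wt)*(w**2 - wt**2) + 0.75*lam*hbar**2/(m**2*wt**2)

for lam in (0.01, 1.0, 100.0):
    res = minimize_scalar(lambda wt: Ebar(wt, lam), bounds=(0.05, 100.0), method='bounded')
    small = 0.5*hbar*w + 0.75*lam*(hbar/(m*w))**2       # small lambda a0^4 limit
    large = (3.0/8.0)*6**(1.0/3.0)*lam**(1.0/3.0)       # large lambda a0^4 limit
    print(lam, res.fun, small, large)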

Example: (sketch) double well potential

fig. 2: Double well potential.

\begin{equation}\label{eqn:qmLecture19:400}
V(x) = \frac{m \omega^2}{8 a^2} \lr{ x – a }^2\lr{ x + a}^2.
\end{equation}

Note that this potential, and the Hamiltonian, both commute with parity.

We are interested in the regime where \( a_0^2 = \frac{\Hbar}{m \omega} \ll a^2 \).

Near \( x = \pm a \), this will be approximately

\begin{equation}\label{eqn:qmLecture19:420}
V(x) = \inv{2} m \omega^2 \lr{ x \mp a }^2.
\end{equation}

Guessing a wave function that is an eigenstate of parity

\begin{equation}\label{eqn:qmLecture19:440}
\Psi_{\pm} = g_{\pm} \lr{ \phi_{\textrm{R}}(x) \pm \phi_{\textrm{L}}(x) }.
\end{equation}

These perhaps look like the even and odd functions sketched in fig. 3 and fig. 4.

fig. 3. Even double well function

fig. 4. Odd double well function

Using harmonic oscillator functions

\begin{equation}\label{eqn:qmLecture19:460}
\begin{aligned}
\phi_{\textrm{L}}(x) &= \Psi_{{\textrm{H}}.{\textrm{O}}.}(x + a) \\
\phi_{\textrm{R}}(x) &= \Psi_{{\textrm{H}}.{\textrm{O}}.}(x – a)
\end{aligned}
\end{equation}

After doing a lot of integrals (i.e. in the problem set), we will see a splitting of the variational energy levels as sketched in fig. 5.

fig. 5. Splitting for double well potential.

This sort of level splitting was what was used in the very first masers.

Perturbation theory (outline)

Given

\begin{equation}\label{eqn:qmLecture19:480}
H = H_0 + \lambda V,
\end{equation}

where \( \lambda V \) is “small”. We want to figure out the eigenvalues and eigenstates of this Hamiltonian

\begin{equation}\label{eqn:qmLecture19:500}
H \ket{n} = E_n \ket{n}.
\end{equation}

We don’t know what these are, but do know that

\begin{equation}\label{eqn:qmLecture19:520}
H_0 \ket{n^{(0)}} = E_n^{(0)} \ket{n^{(0)}}.
\end{equation}

We are hoping that the levels map adiabatically between the original and perturbed systems as sketched in fig. 6.

fig. 6. Adiabatic transitions.

and not crossed level transitions as sketched in fig. 7.

fig. 7. Crossed level transitions.

If we have level crossings (which can in general occur), as opposed to adiabatic transitions, then we have no hope of using perturbation theory.

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

PHY1520H Graduate Quantum Mechanics. Lecture 18: Approximation methods. Taught by Prof. Arun Paramekanti

November 26, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough, especially since I didn’t attend this class myself, and am doing a walkthrough of notes provided by Nishant.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 5 content.

Approximation methods

Suppose we have a perturbed Hamiltonian

\begin{equation}\label{eqn:qmLecture18:20}
H = H_0 + \lambda V,
\end{equation}

where \( \lambda = 0 \) represents a solvable (perhaps known) system, and \( \lambda = 1 \) is the case of interest. There are two approaches of interest

  1. Direct solution of \( H \) with \( \lambda = 1 \).
  2. Take \( \lambda \) small, and do a series expansion. This is perturbation theory.

Variational methods

Given

\begin{equation}\label{eqn:qmLecture18:40}
H \ket{\phi_n} = E_n \ket{\phi_n},
\end{equation}

where we don’t know \( \ket{\phi_n} \), we can compute the expectation with respect to an arbitrary state \( \ket{\psi} \)

\begin{equation}\label{eqn:qmLecture18:60}
\bra{\psi} H \ket{\psi}
=
\bra{\psi} H \lr{ \sum_n \ket{\phi_n} \bra{\phi_n} } \ket{\psi}
=
\sum_n E_n \braket{\psi}{\phi_n} \braket{\phi_n}{\psi}
=
\sum_n E_n \Abs{\braket{\psi}{\phi_n}}^2.
\end{equation}

Define

\begin{equation}\label{eqn:qmLecture18:80}
\overline{{E}}
= \frac{\bra{\psi} H \ket{\psi}}{\braket{\psi}{\psi}}.
\end{equation}

Assuming that it is possible to express the state in the Hamiltonian energy basis

\begin{equation}\label{eqn:qmLecture18:100}
\ket{\psi}
=
\sum_n a_n \ket{\phi_n},
\end{equation}

this average energy is
\begin{equation}\label{eqn:qmLecture18:120}
\overline{{E}}
= \frac{ \sum_{m,n}\bra{\phi_m} a_m^\conj H a_n \ket{\phi_n}}{ \sum_n \Abs{a_n}^2 }
= \frac{ \sum_{n} \Abs{a_n}^2 E_n }{ \sum_n \Abs{a_n}^2 }
= \sum_{n}
\frac{\Abs{a_n}^2 }{ \sum_m \Abs{a_m}^2 }
E_n
= \sum_n \frac{P_n}{\sum_m P_m} E_n,
\end{equation}

where \( P_m = \Abs{a_m}^2 \), which has the structure of a probability coefficient once divided by \( \sum_m P_m \), as sketched in fig. 1.

fig. 1. A decreasing probability distribution

This average energy is a probability weighted average of the individual energy basis states. One of those energies is the ground state energy \( E_1 \), so we necessarily have

\begin{equation}\label{eqn:qmLecture18:140}
\boxed{
\overline{{E}} \ge E_1.
}
\end{equation}

Example: particle in a \( [0,L] \) box.

Consider the infinite potential box sketched in fig. 2.

fig. 2. Infinite potential [0,L] box.

The exact solutions for such a system are found to be

\begin{equation}\label{eqn:qmLecture18:220}
\psi(x) = \sqrt{\frac{2}{L}} \sin\lr{ \frac{n \pi}{L} x },
\end{equation}

where the energies are

\begin{equation}\label{eqn:qmLecture18:240}
E = \frac{\Hbar^2}{2m} \frac{n^2 \pi^2}{L^2}.
\end{equation}

The function \( \psi' = x (L-x) \) also satisfies the boundary value constraints. How close in energy is that function to the ground state?

\begin{equation}\label{eqn:qmLecture18:260}
\overline{{E}}
=
-\frac{\Hbar^2}{2m} \frac{\int_0^L dx x (L-x) \frac{d^2}{dx^2} \lr{ x (L-x) }}{
\int_0^L dx x^2 (L-x)^2
}
=
\frac{\Hbar^2}{2m} \frac{\frac{2 L^3}{6}}{
\frac{L^5}{30}
}
=
\frac{\Hbar^2}{2m} \frac{10}{L^2}.
\end{equation}

This average energy is quite close to the ground state energy

\begin{equation}\label{eqn:qmLecture18:280}
\frac{\overline{{E}} }{E_1} = \frac{10}{\pi^2} \approx 1.013.
\end{equation}
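
The integrals behind this estimate are simple enough to check symbolically (a quick verification, not part of the lecture):

import sympy as sp

x, L, hbar, m = sp.symbols('x L hbar m', positive=True)
psi = x*(L - x)                                  # trial function for the [0, L] box
num = -sp.integrate(psi*sp.diff(psi, x, 2), (x, 0, L))
den = sp.integrate(psi**2, (x, 0, L))
Ebar = (hbar**2/(2*m))*num/den
E1 = (hbar**2/(2*m))*sp.pi**2/L**2
print(sp.simplify(Ebar))                         # 5*hbar**2/(L**2*m)
print(sp.simplify(Ebar/E1), float(Ebar/E1))      # 10/pi**2, about 1.013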

Example II: particle in a \( [-L/2,L/2] \) box.

fig. 3. Infinite potential [-L/2,L/2] box.

Shifting the boundaries, as sketched in fig. 3 doesn’t change the energy levels. For this potential let’s try a shifted trial function

\begin{equation}\label{eqn:qmLecture18:300}
\psi(x) = \lr{ x – \frac{L}{2} } \lr{ x + \frac{L}{2} } = x^2 – \frac{L^2}{4},
\end{equation}

without worrying about the form of the exact solution. This produces the same result as above

\begin{equation}\label{eqn:qmLecture18:270}
\overline{{E}}
=
-\frac{\Hbar^2}{2m} \frac{\int_{-L/2}^{L/2} dx \lr{ x^2 - \frac{L^2}{4} } \frac{d^2}{dx^2} \lr{ x^2 - \frac{L^2}{4} }}{
\int_{-L/2}^{L/2} dx \lr{x^2 - \frac{L^2}{4} }^2
}
=
-\frac{\Hbar^2}{2m} \frac{- 2 L^3/6}{
\frac{L^5}{30}
}
=
\frac{\Hbar^2}{2m} \frac{10}{L^2}.
\end{equation}

Summary (Nishant)

The above example is that of a particle in a box. The actual wave function is a sine, as shown. But we can
come up with a guess wave function that meets the boundary conditions and ask how accurate it is
compared to the actual one.

Basically we are assuming a wave function form and then seeing how it differs from the exact form.
We cannot do this if we have nothing to compare it against. But, we note that the variance of the
number operator in the system's eigenstate is zero. So we can still calculate the variance and try to
minimize it. This is one way of coming up with an approximate wave function. This does not necessarily
give the ground state wave function though. For this we need to minimize the energy itself.

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

PHY1520H Graduate Quantum Mechanics. Lecture 17: Clebsch-Gordan. Taught by Prof. Arun Paramekanti

November 21, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 3 content.

Clebsch-Gordan

How can we relate total angular momentum states to the individual angular momentum states in the \( 1 \otimes 2 \) space?

\begin{equation}\label{eqn:qmLecture16:720}
\ket{l_1, l_2, l, m } = \sum_{m_1, m_2} C_{l_1 l_2 m_1 m_2}^{l_1 l_2 l m} \ket{l_1 m_1 ; l_2 m_2 }
\end{equation}

The values \( C_{l_1 l_2 m_1 m_2}^{l_1 l_2 l m} \) are called the Clebsch-Gordan coefficients.

Example: spin one and spin one half

With individual momentum states \( \ket{l_1 m_1 ; l_2 m_2 } \)

\begin{equation}\label{eqn:qmLecture16:740}
\begin{aligned}
l_1 &= 1 \\
m_1 &= \pm 1, 0 \\
l_2 &= \inv{2} \\
m_2 &= \pm \inv{2}
\end{aligned}
\end{equation}

The total angular momentum numbers are

\begin{equation}\label{eqn:qmLecture16:760}
l_{\textrm{tot}} \in [l_1 – l_2, l_1 + l_2] = [\ifrac{1}{2}, \ifrac{3}{2}]
\end{equation}

The possible states \( \ket{ l_{\textrm{tot}}, m_{\textrm{tot}} } \) are

\begin{equation}\label{eqn:qmLecture16:780}
\ket{\inv{2}, \inv{2} }, \ket{\inv{2}, -\inv{2} },
\end{equation}

and
\begin{equation}\label{eqn:qmLecture16:800}
\ket{\frac{3}{2}, \frac{3}{2} }, \ket{\frac{3}{2}, \frac{1}{2} },
\ket{\frac{3}{2}, -\frac{1}{2} }
\ket{\frac{3}{2}, -\frac{3}{2} }.
\end{equation}

The Clebsch-Gordan procedure is the search for an orthogonal total angular momentum basis, built up from the individual momentum bases. For the total momentum basis we want the basis states to satisfy the total ladder operators, but also to track the action of the constituent ladder operators for the individual particle angular momenta. This procedure is sketched in fig. 1.

fig. 1. Spin one,one-half Clebsch-Gordan procedure

Demonstrating by example, let the highest total momentum state be proportional to the highest product of individual momentum states

\begin{equation}\label{eqn:qmLecture16:820}
\ket{ \frac{3}{2} \frac{3}{2} } = \ket{ 1 1 } \otimes \ket{ \frac{1}{2} \frac{1}{2} }.
\end{equation}

A lowered state can be constructed in two different ways, one using the total angular momentum lowering operator

\begin{equation}\label{eqn:qmLecture16:840}
\begin{aligned}
\ket{ \frac{3}{2} \frac{1}{2} }
&=
\hat{L}_{-}^{\textrm{tot}} \ket{ \frac{3}{2} \frac{3}{2} } \\
&= \Hbar \sqrt{\lr{\frac{3}{2} + \frac{3}{2}}\lr{\frac{3}{2} – \frac{3}{2} + 1}} \ket{ \frac{3}{2} \frac{1}{2} } \\
&= \Hbar \sqrt{3} \ket{ \frac{3}{2} \frac{1}{2} }.
\end{aligned}
\end{equation}

On the other hand, the lowering operator can also be expressed as \( \hat{L}_{-}^{\textrm{tot}} = \hat{L}_{-}^{(1)} \otimes 1 + 1 \otimes \hat{L}_{-}^{(2)} \). Operating with that gives

\begin{equation}\label{eqn:qmLecture16:860}
\begin{aligned}
\hat{L}_{-}^{\textrm{tot}} \lr{ \ket{ 1 1 } \otimes \ket{ \frac{1}{2} \frac{1}{2} } }
\\
&=
\hat{L}_{-}^{(1)} \ket{ 1 1 } \otimes \ket{ \frac{1}{2} \frac{1}{2} }
+
\ket{ 1 1 } \otimes \hat{L}_{-}^{(2)} \ket{ \frac{1}{2} \frac{1}{2} } \\
&=
\Hbar \sqrt{\lr{1 + 1}\lr{1 – 1 + 1}} \ket{ 1 0 } \otimes \ket{ \frac{1}{2} \frac{1}{2} }
+
\Hbar \sqrt{\lr{\frac{1}{2} + \frac{1}{2}}\lr{\frac{1}{2} – \frac{1}{2} + 1}}
\ket{ 1 1 } \otimes \ket{ \frac{1}{2} -\frac{1}{2} } \\
&=
\Hbar \sqrt{2} \ket{ 1 0 } \otimes \ket{ \frac{1}{2} \frac{1}{2} }
+
\Hbar \ket{ 1 1 } \otimes \ket{ \frac{1}{2} -\frac{1}{2} }.
\end{aligned}
\end{equation}

Equating both sides and dispensing with the direct product notation, this is

\begin{equation}\label{eqn:qmLecture16:880}
\sqrt{3} \ket{ \frac{3}{2} \frac{1}{2} }
=
\sqrt{2} \ket{ 1 0 ; \frac{1}{2} \frac{1}{2} }
+
\ket{ 1 1 ; \frac{1}{2} -\frac{1}{2} },
\end{equation}

or

\begin{equation}\label{eqn:qmLecture16:900}
\boxed{
\ket{ \frac{3}{2} \frac{1}{2} }
=
\sqrt{\frac{2}{3}} \ket{ 1 0 ; \frac{1}{2} \frac{1}{2} }
+
\inv{\sqrt{3}} \ket{ 1 1 ; \frac{1}{2} -\frac{1}{2} }.
}
\end{equation}

This is clearly both a unit ket, and normal to \( \ket{ \frac{3}{2} \frac{3}{2} } \). We can continue operating with the lowering operator for the total angular momentum to construct all the states down to \( \ket{ \frac{3}{2} \frac{-3}{2} } \). Working with \( \Hbar = 1 \), since we see that it cancels out, the next lower state follows from

\begin{equation}\label{eqn:qmLecture16:920}
\begin{aligned}
\hat{L}_{-}^{\textrm{tot}} \ket{ \frac{3}{2} \frac{1}{2} }
&=
\sqrt{2 \times 2} \ket{ \frac{3}{2} \frac{-1}{2} } \\
&=
2 \ket{ \frac{3}{2} \frac{-1}{2} }.
\end{aligned}
\end{equation}

and from the individual lowering operators on the components of \( \ket{ \frac{3}{2} \frac{1}{2} } \).

\begin{equation}\label{eqn:qmLecture16:940}
\hat{L}_{-}^{(1)} \ket{ 1 0 ; \frac{1}{2} \frac{1}{2} }
=
\sqrt{ 1 \times 2 } \ket{ 1 -1 ; \frac{1}{2} \frac{1}{2} },
\end{equation}

and

\begin{equation}\label{eqn:qmLecture16:960}
\hat{L}_{-}^{(2)} \ket{ 1 0 ; \frac{1}{2} \frac{1}{2} }
=
\sqrt{ 1 \times 1 } \ket{ 1 0 ; \frac{1}{2} \frac{-1}{2} },
\end{equation}

and

\begin{equation}\label{eqn:qmLecture16:980}
\hat{L}_{-}^{(1)}
\ket{ 1 1 ; \frac{1}{2} -\frac{1}{2} }
=
\sqrt{ 2 \times 1 }
\ket{ 1 0 ; \frac{1}{2} -\frac{1}{2} }.
\end{equation}

This gives

\begin{equation}\label{eqn:qmLecture16:1000}
2 \ket{ \frac{3}{2} \frac{-1}{2} } =
\sqrt{\frac{2}{3}} \lr{
\sqrt{ 2 } \ket{ 1 -1 ; \frac{1}{2} \frac{1}{2} }
+
\ket{ 1 0 ; \frac{1}{2} \frac{-1}{2} }
}
+ \inv{\sqrt{3}}
\sqrt{ 2 }
\ket{ 1 0 ; \frac{1}{2} -\frac{1}{2} },
\end{equation}

or

\begin{equation}\label{eqn:qmLecture16:1001}
\boxed{
\ket{ \frac{3}{2} \frac{-1}{2} } =
\inv{\sqrt{ 3 }} \ket{ 1 -1 ; \frac{1}{2} \frac{1}{2} }
+
\sqrt{\frac{2}{3}}
\ket{ 1 0 ; \frac{1}{2} \frac{-1}{2} }.
}
\end{equation}

There’s one more possible state with total angular momentum \( \frac{3}{2} \). This time

\begin{equation}\label{eqn:qmLecture16:1060}
\begin{aligned}
\hat{L}_{-}^{\textrm{tot}}
\ket{ \frac{3}{2} \frac{-1}{2} }
&=
\sqrt{ 1 \times 3 }
\ket{ \frac{3}{2} \frac{-3}{2} } \\
&=
\inv{\sqrt{ 3 }} \hat{L}_{-}^{(2)} \ket{ 1 -1 ; \frac{1}{2} \frac{1}{2} }
+
\sqrt{\frac{2}{3}}
\hat{L}_{-}^{(1)} \ket{ 1 0 ; \frac{1}{2} \frac{-1}{2} } \\
&=
\inv{\sqrt{ 3 }} \sqrt{1 \times 1 } \ket{ 1 -1 ; \frac{1}{2} \frac{-1}{2} }
+
\sqrt{\frac{2}{3}}
\sqrt{ 1 \times 2 } \ket{ 1 -1 ; \frac{1}{2} \frac{-1}{2} },
\end{aligned}
\end{equation}

or
\begin{equation}\label{eqn:qmLecture16:1080}
\boxed{
\ket{ \frac{3}{2} \frac{-3}{2} }
=
\ket{ 1 -1 ; \frac{1}{2} \frac{-1}{2} }.
}
\end{equation}

The \( \ket{ \frac{1}{2} \frac{1}{2} } \) state is constructed as normal to \( \ket{ \frac{3}{2} \frac{1}{2} } \), or

\begin{equation}\label{eqn:qmLecture17:1120}
\boxed{
\ket{ \frac{1}{2} \frac{1}{2} } =
\sqrt{\frac{1}{3}} \ket{ 1 0 ; \frac{1}{2} \frac{1}{2} }
-
\sqrt{ \frac{2}{3} } \ket{ 1 1 ; \frac{1}{2} -\frac{1}{2} },
}
\end{equation}

and \( \ket{ \frac{1}{2} -\frac{1}{2} } \) by lowering that. With

\begin{equation}\label{eqn:qmLecture17:1160}
\hat{L}_{-}^{\textrm{tot}} \ket{ \frac{1}{2} \frac{1}{2} } = \sqrt{ 1 \times 1 } \ket{ \frac{1}{2} -\frac{1}{2} },
\end{equation}

we have

\begin{equation}\label{eqn:qmLecture17:1180}
\ket{ \frac{1}{2} -\frac{1}{2} } =
\sqrt{\frac{1}{3}} \lr{
\sqrt{ 1 \times 2 } \ket{ 1 -1 ; \frac{1}{2} \frac{1}{2} }
+ \ket{ 1 0 ; \frac{1}{2} -\frac{1}{2} }
}
-\sqrt{ \frac{2}{3} } \sqrt{ 2 \times 1 } \ket{ 1 0 ; \frac{1}{2} -\frac{1}{2} },
\end{equation}

or
\begin{equation}\label{eqn:qmLecture17:1200}
\boxed{
\ket{ \frac{1}{2} -\frac{1}{2} } =
\sqrt{ \frac{2}{3} } \ket{ 1 -1 ; \frac{1}{2} \frac{1}{2} }
– \inv{\sqrt{3}} \ket{ 1 0 ; \frac{1}{2} -\frac{1}{2} }.
}
\end{equation}

Observe that further lowering this produces zero

\begin{equation}\label{eqn:qmLecture17:1240}
\hat{L}_{-}^{\textrm{tot}} \ket{ \frac{1}{2} -\frac{1}{2} }
=
\sqrt{ \frac{2}{3} } \sqrt{ 1 \times 1 } \ket{ 1 -1 ; \frac{1}{2} -\frac{1}{2} }
- \inv{\sqrt{3}} \sqrt{ 1 \times 2 } \ket{ 1 -1 ; \frac{1}{2} -\frac{1}{2} }
= 0.
\end{equation}

All the basis elements have been determined, and are summarized in table 1.

tab. 1: Clebsch-Gordan coefficients for the spin one, spin one-half system.
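
These coefficients can be compared against sympy's Clebsch-Gordan support (a sketch; sympy uses the Condon-Shortley phase convention, so the \( j = \ifrac{1}{2} \) entries can differ from the above by an overall sign):

from sympy import S
from sympy.physics.quantum.cg import CG

half = S(1)/2
# <j1 m1; j2 m2 | j m> for j1 = 1, j2 = 1/2
print(CG(1, 0, half, half, S(3)/2, half).doit())      # sqrt(6)/3 = sqrt(2/3)
print(CG(1, 1, half, -half, S(3)/2, half).doit())     # sqrt(3)/3 = 1/sqrt(3)
print(CG(1, -1, half, half, S(3)/2, -half).doit())    # sqrt(3)/3
print(CG(1, 0, half, -half, S(3)/2, -half).doit())    # sqrt(6)/3
print(CG(1, 0, half, half, half, half).doit())        # -sqrt(3)/3 (overall sign convention differs)
print(CG(1, 1, half, -half, half, half).doit())       # sqrt(6)/3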

Example. Spin two, spin one.

With \( j_1 = 2 \) and \( j_2 = 1 \), we have \( j \in \{1, 2, 3\} \), and can proceed the same way as sketched in fig. 2.

fig. 2. Spin two,one Clebsch-Gordan procedure

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Dirac delta function potential

November 19, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]


Q: Dirac delta function potential

Problem 2.24/2.25 [1] introduces a Dirac delta function potential

\begin{equation}\label{eqn:diracPotential:20}
H = \frac{p^2}{2m} – V_0 \delta(x),
\end{equation}

which vanishes after \( t = 0 \). Solve for the bound state for \( t < 0 \) and then the time evolution after that.

A:

The first part of this problem was assigned back in phy356, where we solved this for a rectangular potential that had the limiting form of a delta function potential. However, this problem can be solved directly by considering the \( \Abs{x} > 0 \) and \( x = 0 \) regions separately.

For \( \Abs{x} > 0 \) Schrodinger’s equation takes the form

\begin{equation}\label{eqn:diracPotential:40}
E \psi = -\frac{\Hbar^2}{2m} \frac{d^2 \psi}{dx^2}.
\end{equation}

With

\begin{equation}\label{eqn:diracPotential:60}
\kappa =
\frac{\sqrt{-2 m E}}{\Hbar},
\end{equation}

this has solutions

\begin{equation}\label{eqn:diracPotential:80}
\psi = e^{\pm \kappa x}.
\end{equation}

For \( x > 0 \) we must have
\begin{equation}\label{eqn:diracPotential:100}
\psi = a e^{-\kappa x},
\end{equation}

and for \( x < 0 \)
\begin{equation}\label{eqn:diracPotential:120}
\psi = b e^{\kappa x}.
\end{equation}

requiring that \( \psi \) is continuous at \( x = 0 \) means \( a = b \), or

\begin{equation}\label{eqn:diracPotential:140}
\psi = \psi(0) e^{-\kappa \Abs{x}}.
\end{equation}

For the \( x = 0 \) region, consider an interval \( [-\epsilon, \epsilon] \) around the origin. We must have

\begin{equation}\label{eqn:diracPotential:160}
E \int_{-\epsilon}^\epsilon \psi(x) dx = \frac{-\Hbar^2}{2m} \int_{-\epsilon}^\epsilon \frac{d^2 \psi}{dx^2} dx – V_0 \int_{-\epsilon}^\epsilon \delta(x) \psi(x) dx.
\end{equation}

The LHS goes to zero in the \( \epsilon \rightarrow 0 \) limit

\begin{equation}\label{eqn:diracPotential:180}
E \int_{-\epsilon}^\epsilon \psi(x) dx
=
E \psi(0) \frac{ 1 - e^{-\kappa \epsilon} }{\kappa}
+ E \psi(0) \frac{ 1 - e^{-\kappa \epsilon} }{\kappa}
\rightarrow
0.
\end{equation}

That leaves
\begin{equation}\label{eqn:diracPotential:200}
\begin{aligned}
V_0 \int_{-\epsilon}^\epsilon \delta(x) \psi(x) dx
&=
\frac{-\Hbar^2}{2m} \int_{-\epsilon}^\epsilon \frac{d^2 \psi}{dx^2} dx \\
&=
\frac{-\Hbar^2}{2m} \evalrange{\frac{d \psi}{dx}}{-\epsilon}{\epsilon} \\
&=
\frac{-\Hbar^2}{2m}
\psi(0)
\lr{ -\kappa e^{-\kappa \epsilon} - \kappa e^{-\kappa \epsilon} }.
\end{aligned}
\end{equation}

In the \( \epsilon \rightarrow 0 \) limit this gives

\begin{equation}\label{eqn:diracPotential:220}
V_0 = \frac{\Hbar^2 \kappa}{m}.
\end{equation}

Equating relations for \( \kappa \) we have

\begin{equation}\label{eqn:diracPotential:240}
\kappa = \frac{m V_0}{\Hbar^2} = \frac{\sqrt{-2 m E}}{\Hbar},
\end{equation}

or

\begin{equation}\label{eqn:diracPotential:260}
E = -\inv{2 m} \lr{ \frac{m V_0}{\Hbar} }^2,
\end{equation}

with

\begin{equation}\label{eqn:diracPotential:280}
\psi(x, t < 0) = C \exp\lr{ -i E t/\hbar – \kappa \Abs{x}}.
\end{equation}

The normalization requires

\begin{equation}\label{eqn:diracPotential:300}
1
= 2 \Abs{C}^2 \int_0^\infty e^{- 2 \kappa x} dx
= 2 \Abs{C}^2 \evalrange{\frac{e^{- 2 \kappa x}}{-2 \kappa}}{0}{\infty}
= \frac{\Abs{C}^2}{\kappa},
\end{equation}

so
\begin{equation}\label{eqn:diracPotential:320}
\boxed{
\psi(x, t < 0) = \sqrt{\kappa} \exp\lr{ -i E t/\hbar - \kappa \Abs{x}}.
}
\end{equation}

There is only one bound state for such a potential.

After turning off the potential, any plane wave

\begin{equation}\label{eqn:diracPotential:360}
\psi(x, t) = e^{i k x - i E(k) t/\Hbar},
\end{equation}

where

\begin{equation}\label{eqn:diracPotential:380}
k = \frac{\sqrt{2 m E}}{\Hbar},
\end{equation}

is a solution. In particular, at \( t = 0 \), the wave packet

\begin{equation}\label{eqn:diracPotential:400}
\psi(x,0) = \inv{\sqrt{2\pi}} \int_{-\infty}^\infty e^{i k x} A(k) dk,
\end{equation}

is a solution. To solve for \( A(k) \), we require

\begin{equation}\label{eqn:diracPotential:420}
\inv{\sqrt{2\pi}} \int_{-\infty}^\infty e^{i k x} A(k) dk = \sqrt{\kappa} e^{ - \kappa \Abs{x} },
\end{equation}

or

\begin{equation}\label{eqn:diracPotential:440}
\boxed{
A(k) = \sqrt{\frac{\kappa}{2\pi}} \int_{-\infty}^\infty e^{-i k x} e^{ - m V_0 \Abs{x}/\Hbar^2 } dx.
}
\end{equation}

The initial time state established by the delta function potential evolves as

\begin{equation}\label{eqn:diracPotential:480}
\boxed{
\psi(x, t > 0) = \inv{\sqrt{2\pi}} \int_{-\infty}^\infty e^{i k x - i \Hbar k^2 t/2m} A(k) dk.
}
\end{equation}
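
The Fourier coefficient \ref{eqn:diracPotential:440} can also be evaluated in closed form; here is a quick sympy check (not part of the original post), using the even symmetry of \( e^{-\kappa \Abs{x}} \) to reduce the integral to a cosine transform:

import sympy as sp

x, k, kappa = sp.symbols('x k kappa', positive=True)
# A(k) = sqrt(kappa/(2 pi)) * Fourier transform of e^{-kappa |x|}; by even symmetry
# this reduces to twice a cosine transform over half the real line.
Ak = sp.sqrt(kappa/(2*sp.pi)) * 2*sp.integrate(sp.cos(k*x)*sp.exp(-kappa*x), (x, 0, sp.oo))
print(sp.simplify(Ak))    # sqrt(2)*kappa**(3/2)/(sqrt(pi)*(k**2 + kappa**2))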

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.