Heisenberg picture

PHY2403H Quantum Field Theory. Lecture 13: Forced Klein-Gordon equation, coherent states, number density, time ordered product, pole shifting, perturbation theory, Heisenberg picture, interaction picture, Dyson’s formula. Taught by Prof. Erich Poppitz

October 24, 2018 phy2403


DISCLAIMER: Very rough notes from class, with some additional side notes.

These are notes for the UofT course PHY2403H, Quantum Field Theory, taught by Prof. Erich Poppitz, fall 2018.

Review: “particle creation problem”.

fig. 1. Finite window impulse response.

We imagined that we have a windowed source function \( j(y^0, \By) \), as sketched in fig. 1, which is acting as a forcing source for the non-homogeneous Klein-Gordon equation

\begin{equation}\label{eqn:qftLecture13:20}
\lr{ \partial_\mu \partial^\mu + m^2 } \phi = j
\end{equation}

Our solution was
\begin{equation}\label{eqn:qftLecture13:40}
\phi(x) = \phi_0(x) + i \int d^4 y D_R( x - y) j(y),
\end{equation}
where \( \phi_0(x) \) obeys the homogeneous equation, and
\begin{equation}\label{eqn:qftLecture13:60}
D_R(x - y) = \Theta(x^0 - y^0) \lr{ D(x - y) - D(y - x) },
\end{equation}
and \( D(x) = \int \frac{d^3 p}{(2\pi)^3 2 \omega_\Bp } \evalbar{ e^{-i p \cdot x} }{p^0 = \omega_\Bp} \) is the Wightman function.

For \( x^0 > t_{\text{after}} \)
\begin{equation}\label{eqn:qftLecture13:80}
\phi(x)
=
\int \frac{d^3 p}{(2\pi)^3 \sqrt{ 2 \omega_\Bp }}
\evalbar{
\lr{ e^{-i p \cdot x} a_\Bp + e^{i p \cdot x } a_\Bp^\dagger }
}{
p^0 = \omega_\Bp
}
+ i
\int \frac{d^3 p}{(2\pi)^3 2 \omega_\Bp }
\evalbar{
\lr{ e^{-i p \cdot x} \tilde{j}(p) + e^{i p \cdot x} \tilde{j}(p_0, -\Bp) }
}{
p^0 = \omega_\Bp
}
\end{equation}
where we have used \( \tilde{j}^\conj(p_0, \Bp) = \tilde{j}(p_0, -\Bp) \). This gives
\begin{equation}\label{eqn:qftLecture13:100}
\phi(x) =
\int \frac{d^3 p}{(2\pi)^3 \sqrt{ 2 \omega_\Bp } }
\evalbar{
\lr{
e^{-i p \cdot x}
\lr{ a_\Bp + i \frac{\tilde{j}(p)}{\sqrt{2 \omega_\Bp}} }
+ e^{i p \cdot x }
\lr{ a_\Bp^\dagger - i \frac{\tilde{j}^\conj(p)}{\sqrt{2 \omega_\Bp}} }
}
}{
p^0 = \omega_\Bp
}
\end{equation}

It was left as an exercise to show that given
\begin{equation}\label{eqn:qftLecture13:120}
H = \int d^3 x \lr{ \inv{2} \pi^2 + \inv{2} \lr{ \spacegrad \phi}^2 + \frac{m^2}{2} \phi^2 },
\end{equation}
we obtain
\begin{equation}\label{eqn:qftLecture13:140}
H_{\text{after}} =
\int d^3 p \, \omega_\Bp
\lr{ a_\Bp^\dagger - i \frac{\tilde{j}^\conj(p)}{\sqrt{2 \omega_\Bp}} }
\lr{ a_\Bp + i \frac{\tilde{j}(p)}{\sqrt{2 \omega_\Bp}} }
\end{equation}

System in ground state
\begin{equation}\label{eqn:qftLecture13:160}
\bra{0} \hatH_{\text{before}} \ket{0} = \expectation{E}_{\text{before}} = 0.
\end{equation}
\begin{equation}\label{eqn:qftLecture13:180}
\begin{aligned}
\bra{0} \hatH_{\text{after}} \ket{0} = \expectation{E}_{\text{after}}
&=
\int d^3 p \, \omega_\Bp
\frac{ \tilde{j}^\conj(p) \tilde{j}(p)}{2 \omega_\Bp} \\
&=
\inv{2} \int d^3 p
\Abs{\tilde{j}(p)}^2.
\end{aligned}
\end{equation}
We can identify
\begin{equation}\label{eqn:qftLecture13:200}
N(\Bp) =
\frac{\Abs{\tilde{j}(p)}^2}{2 \omega_\Bp},
\end{equation}
as the number density of particles with momentum \( \Bp \).
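
As a quick numerical sanity check (my addition, not from the lecture), here is a small Python sketch of this number density for a hypothetical spatially point-like source \( j(y) = \delta^3(\By) g(y^0) \) with a Gaussian window \( g \); the window shape and all parameter values are assumptions made only for illustration.

import numpy as np

# assumed windowed source: j(y) = delta^3(y) g(t), with a Gaussian window g(t)
m = 1.0            # particle mass (assumed)
tau = 2.0          # window width (assumed)

t = np.linspace(-40.0, 40.0, 80001)
dt = t[1] - t[0]
g = np.exp(-t**2 / (2 * tau**2))

p = np.linspace(0.1, 3.0, 7)          # a few |p| samples
omega = np.sqrt(p**2 + m**2)

# for this source, j~(omega_p, p) reduces to the time Fourier transform of g
j_tilde = np.array([np.sum(g * np.exp(1j * w * t)) * dt for w in omega])
N_numeric = np.abs(j_tilde)**2 / (2 * omega)

# analytic Fourier transform of the Gaussian window, for comparison
N_analytic = 2 * np.pi * tau**2 * np.exp(-omega**2 * tau**2) / (2 * omega)

print(np.max(np.abs(N_numeric - N_analytic)))   # tiny: the two agree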

Digression: coherent states.

Definition: Coherent state.

A coherent state is an eigenstate of the destruction operator
\begin{equation*}
a \ket{\alpha} = \alpha \ket{\alpha}.
\end{equation*}

For the SHO, if we solve for such a coherent state, we find
\begin{equation}\label{eqn:qftLecture13:240}
\ket{\alpha} = \text{constant} \times \sum_{n = 0}^\infty \frac{\alpha^n}{n!} \lr{ a^\dagger }^n \ket{0}.
\end{equation}
If we assume the existence of a coherent state
\begin{equation}\label{eqn:qftLecture13:260}
a_\Bp \ket{
\frac{j(p)}{\sqrt{2 \omega_\Bp}}
}
=
\frac{j(p)}{\sqrt{2 \omega_\Bp}}
\ket{
\frac{j(p)}{\sqrt{2 \omega_\Bp}}
},
\end{equation}
then the expectation value of the number operator with respect to this state is the number density identified in \ref{eqn:qftLecture13:200}
\begin{equation}\label{eqn:qftLecture13:1200}
\bra{
\frac{j(p)}{\sqrt{2 \omega_\Bp}}
}
a_\Bp^\dagger a_\Bp
\ket{
\frac{j(p)}{\sqrt{2 \omega_\Bp}}
} = \frac{\Abs{j(p)}^2}{2 \omega_\Bp} = N(\Bp).
\end{equation}
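
None of this was in the lecture, but the coherent state properties are easy to check numerically in a truncated Fock space. Here is a minimal Python sketch; the truncation dimension and the value of \( \alpha \) are arbitrary assumptions.

import numpy as np
from math import factorial

N = 80                                       # Fock space truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>

alpha = 1.3 + 0.4j                           # arbitrary eigenvalue (assumed)
ket = np.array([alpha**n / np.sqrt(float(factorial(n))) for n in range(N)])
ket *= np.exp(-abs(alpha)**2 / 2)            # normalization constant

print(np.linalg.norm(a @ ket - alpha * ket))            # ~0: eigenstate of a
number = (ket.conj() @ (a.conj().T @ (a @ ket))).real
print(number, abs(alpha)**2)                            # both ~1.85: <a^dagger a> = |alpha|^2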

Feynman’s Green’s function

\begin{equation}\label{eqn:qftLecture13:280}
\begin{aligned}
D_F(x)
&=
\Theta(x^0) D(x) +
\Theta(-x^0) D(-x) \\
&=
\Theta(x^0) \bra{0} \phi(x) \phi(0) \ket{0}
+\Theta(-x^0) \bra{0} \phi(-x) \phi(0) \ket{0}
\end{aligned}
\end{equation}
Utilizing the translation operator \( U(a) = e^{i a_\mu P^\mu } \), where \( U(a) \phi(y) U^\dagger(a) = \phi(y + a) \), this second term can be written as
\begin{equation}\label{eqn:qftLecture13:300}
\begin{aligned}
\bra{0} \phi(-x) \phi(0) \ket{0}
&=
\bra{0} U^\dagger(a) U(a) \phi(-x) U^\dagger(a) U(a) \phi(0) U^\dagger(a) U(a) \ket{0} \\
&=
\bra{0} U(a) \phi(-x) U^\dagger(a) U(a) \phi(0) U^\dagger(a) \ket{0} \\
&=
\bra{0} \phi(-x + a) \phi(a) \ket{0},
\end{aligned}
\end{equation}
In particular, with \( a = x \)
\begin{equation}\label{eqn:qftLecture13:320}
\bra{0} \phi(-x) \phi(0) \ket{0}
=
\bra{0} \phi(0) \phi(x) \ket{0},
\end{equation}
so Feynman's Green's function can be written
\begin{equation}\label{eqn:qftLecture13:340}
D_F(x) =
\Theta(x^0) \bra{0} \phi(x) \phi(0) \ket{0}
+\Theta(-x^0) \bra{0} \phi(0) \phi(x) \ket{0}
=
\bra{0}
\lr{
\Theta(x^0)
\phi(x) \phi(0)
+
\Theta(-x^0)
\phi(0) \phi(x)
}
\ket{0}.
\end{equation}
We define

Definition: Time ordered product.

The time ordered product of two operators is defined as
\begin{equation*}
T(\phi(x) \phi(y)) =
\left\{
\begin{array}{l l}
\phi(x)\phi(y) & \quad \mbox{\( x^0 > y^0 \)} \\
\phi(y)\phi(x) & \quad \mbox{\( x^0 < y^0 \)} \\
\end{array}
\right.,
\end{equation*}
or
\begin{equation*}
T(\phi(x) \phi(y)) =
\phi(x)\phi(y) \Theta(x^0 - y^0)
+
\phi(y)\phi(x) \Theta(y^0 - x^0).
\end{equation*}

Using this construct, Feynman's Green's function can now be written very simply
\begin{equation}\label{eqn:qftLecture13:380}
\boxed{
D_F(x) = \bra{0} T(\phi(x) \phi(0)) \ket{0}.
}
\end{equation}
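
As an aside (not from the lecture), the time ordering operation is simple to spell out concretely. Here's a tiny Python sketch where the "operators" are just matrices tagged with time labels; the particular matrices are assumptions for illustration.

import numpy as np

def time_ordered(op_x, t_x, op_y, t_y):
    """T(op_x(t_x) op_y(t_y)): the later-time operator stands to the left."""
    return op_x @ op_y if t_x > t_y else op_y @ op_x

# two non-commuting stand-ins for field operators at different times (assumed values)
phi_1 = np.array([[0.0, 1.0], [1.0, 0.0]])
phi_2 = np.array([[1.0, 0.0], [0.0, -1.0]])

print(time_ordered(phi_1, 2.0, phi_2, 1.0))   # = phi_1 @ phi_2
print(time_ordered(phi_1, 0.5, phi_2, 1.0))   # = phi_2 @ phi_1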

Remark:

Recall that the four dimensional form of the Green’s function was
\begin{equation}\label{eqn:qftLecture13:400}
D_F = i \int \frac{d^4 p}{(2 \pi)^4} e^{-i p \cdot x} \inv{ p^2 - m^2 }.
\end{equation}
For the Feynman case, the contour deformation around the poles can equivalently be implemented by shifting the poles off the real axis, as sketched in fig. 2.

fig. 2. Feynman deformation or equivalent shift of the poles.

 

This shift can be made explicit algebraically by introducing an offset
\begin{equation}\label{eqn:qftLecture13:420}
D_F = i \int \frac{d^4 p}{(2 \pi)^4} e^{-i p \cdot x} \inv{ p^2 - m^2 + i \epsilon },
\end{equation}
which puts the poles at
which puts the poles at

\begin{equation}\label{eqn:qftLecture13:440}
\begin{aligned}
p^0
&= \pm \sqrt{ \omega_\Bp^2 - i \epsilon } \\
&= \pm \omega_\Bp \lr{ 1 - \frac{i \epsilon}{\omega_\Bp^2} }^{1/2} \\
&\approx \pm \omega_\Bp \lr{ 1 - \inv{2} \frac{i \epsilon}{\omega_\Bp^2} } \\
&=
\left\{
\begin{array}{l}
+\omega_\Bp - \inv{2} i \frac{\epsilon}{\omega_\Bp} \\
-\omega_\Bp + \inv{2} i \frac{\epsilon}{\omega_\Bp} \\
\end{array}
\right.
\end{aligned}
\end{equation}
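
A one-line symbolic check of this pole location expansion (my addition, using sympy):

from sympy import symbols, sqrt, I, series, simplify

omega, eps = symbols('omega epsilon', positive=True)

# expand sqrt(omega^2 - i eps) to first order in eps
expansion = series(sqrt(omega**2 - I * eps), eps, 0, 2).removeO()
print(simplify(expansion - (omega - I * eps / (2 * omega))))   # 0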

 

Interacting field theory: perturbation theory in QFT.

We perturb the Hamiltonian
\begin{equation}\label{eqn:qftLecture13:500}
H = H_0 + H_{\text{int}}
\end{equation}
where \( H_0 \) is the free Hamiltonian and \( H_{\text{int}} \) is the interaction term (the perturbation).

Example:

\begin{equation}\label{eqn:qftLecture13:460}
\begin{aligned}
H_0 &= SHO = \frac{p^2}{2} + \frac{\omega^2 q^2}{2} \\
H_{\text{int}} &= \lambda q^4,
\end{aligned}
\end{equation}
i.e. the anharmonic oscillator.

In QFT
\begin{equation}\label{eqn:qftLecture13:480}
\begin{aligned}
H_0 &=
\int d^3 x \lr{ \inv{2} \pi^2 + \inv{2} \lr{ \spacegrad \phi}^2 + \frac{m^2}{2} \phi^2 } \\
H_{\text{int}} &=
\lambda \int d^3 x \phi^4.
\end{aligned}
\end{equation}

We will expand the interaction in small \( \lambda \). Perturbation theory is the expansion in a small dimensionless coupling constant, such as

  • \( \lambda \) in \( \lambda \phi^4 \) theory,
  • \( \alpha = e^2/4 \pi \sim \inv{137} \) in QED, and
  • \( \alpha_s \) in QCD.

Perturbation theory, interaction representation and Dyson formula

\begin{equation}\label{eqn:qftLecture13:520}
H = H_0 + H_{\text{int}}
\end{equation}
Example interaction
\begin{equation}\label{eqn:qftLecture13:540}
H_{\text{int}} = \lambda \int d^3 x \phi^4
\end{equation}

We know all there is to know about \( H_0 \) (decoupled SHOs, …)
\begin{equation}\label{eqn:qftLecture13:560}
H_0 \ket{0} = \ket{0} E^0_{\text{vac}}
\end{equation}
where \( E^0_{\text{vac}} = 0 \). Assume
\begin{equation}\label{eqn:qftLecture13:580}
\lr{ H_0 + H_{\text{int}} } \ket{\Omega} = \ket{\Omega} E_{\text{vac}},
\end{equation}
where the ground state energy of the perturbed system is zero when \( \lambda = 0 \). That is \( E_{\text{vac}}(\lambda = 0 ) = 0 \).

So for
\begin{equation}\label{eqn:qftLecture13:600}
\evalbar{\phi(x) }{x^0 = t_0, \text{some fixed value}}
=
\int \frac{d^3 p}{(2 \pi)^3 \sqrt{ 2 \omega_\Bp } }
\evalbar{
\lr{
e^{-i p \cdot x} a_\Bp
+ e^{i p \cdot x} a_\Bp^\dagger }
}
{
p^0 = \omega_\Bp
}.
\end{equation}
Let’s call \( \phi(\Bx, t_0) \), evaluated at the fixed time \( t_0 \), the free Schrödinger operator. At that instant the Schrödinger and Heisenberg pictures coincide.
\begin{equation}\label{eqn:qftLecture13:620}
\antisymmetric{\phi(\Bx, t_0)}{\pi(\By, t_0)} = i \delta^3(\Bx – \By).
\end{equation}

Normally (QM) one defines the Heisenberg operator as
\begin{equation}\label{eqn:qftLecture13:640}
O_H = e^{i H(t - t_0)} O_S e^{-i H(t - t_0)},
\end{equation}
where \( O_H \) depends on time, and \( O_S \) is defined at a fixed time \( t_0 \), usually 0.
From \ref{eqn:qftLecture13:640} we find
\begin{equation}\label{eqn:qftLecture13:660}
\ddt{O_H} = i \antisymmetric{H}{O_H}.
\end{equation}
The equivalent of \ref{eqn:qftLecture13:640} in QFT is very complicated. We’d like to develop an intermediate picture.

We will define an intermediate picture, called the “interaction representation”, which is equivalent to the Heisenberg picture with respect to \( H_0 \).

Definition: Intermediate picture operator.

\begin{equation*}
\phi_I(t, \Bx) =
e^{i H_0(t - t_0) }
\phi(t_0, \Bx)
e^{-i H_0(t - t_0) }.
\end{equation*}

This is familiar, and is the Heisenberg picture operator that we had in free QFT
\begin{equation}\label{eqn:qftLecture13:700}
\phi_I(t, \Bx) =
\int \frac{d^3 p}{(2 \pi)^3 \sqrt{ 2 \omega_\Bp } }
\evalbar{
\lr{
e^{-i p \cdot x} a_\Bp
+ e^{i p \cdot x} a_\Bp^\dagger }
}
{
p^0 = \omega_\Bp
},
\end{equation}
where \( x^0 = t \).

The Heisenberg picture operator is
\begin{equation}\label{eqn:qftLecture13:720}
\begin{aligned}
\phi_H(t, \Bx)
&=
\phi(t, \Bx) \\
&=
e^{i H(t - t_0) }
e^{-i H_0(t - t_0) }
\lr{
e^{i H_0(t - t_0) }
\phi_S(t_0, \Bx)
e^{-i H_0(t - t_0) }
}
e^{i H_0(t - t_0) }
e^{-i H(t - t_0) } \\
&=
e^{i H(t - t_0) }
e^{-i H_0(t - t_0) }
\phi_I(t, \Bx)
e^{i H_0(t - t_0) }
e^{-i H(t - t_0) }
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:qftLecture13:760}
\phi_H(t, \Bx)
=
U^\dagger(t, t_0)
\phi_I(t, \Bx)
U(t, t_0),
\end{equation}
where
\begin{equation}\label{eqn:qftLecture13:740}
U(t, t_0) =
e^{i H_0(t - t_0) }
e^{-i H(t - t_0) }.
\end{equation}

We want to apply perturbative techniques to find \( U(t, t_0) \), which is otherwise complicated to compute directly.

\begin{equation}\label{eqn:qftLecture13:780}
\begin{aligned}
i \PD{t}{} U(t, t_0)
&=
i e^{i H_0(t - t_0) } i H_0
e^{-i H(t - t_0) }
+
i e^{i H_0(t - t_0) }
e^{-i H(t - t_0) } (-i H) \\
&=
e^{i H_0(t - t_0) }
\lr{ -H_0 + H }
e^{-i H(t - t_0) } \\
&=
e^{i H_0(t - t_0) }
H_{\text{int}}
e^{-i H_0(t - t_0) }
e^{i H_0(t - t_0) }
e^{-i H(t - t_0) }
\end{aligned}
\end{equation}
so we have
\begin{equation}\label{eqn:qftLecture13:800}
\boxed{
i \PD{t}{} U(t, t_0)
=
H_{\text{int}, I}(t) U(t, t_0).
}
\end{equation}
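
It's worth checking this operator identity numerically before using it (my addition, not part of the lecture). The sketch below uses an arbitrary two-level \( H_0 \) and \( H_{\text{int}} \), forms \( U(t, t_0) = e^{i H_0 (t - t_0)} e^{-i H (t - t_0)} \) directly, and compares a finite difference of \( i \partial_t U \) against \( H_{\text{int}, I}(t) U(t, t_0) \).

import numpy as np
from scipy.linalg import expm

# toy two-level system (all matrices and values are assumptions)
H0 = np.diag([0.0, 1.0])
Hint = 0.3 * np.array([[0.0, 1.0], [1.0, 0.0]])
H = H0 + Hint
t0, t, dt = 0.0, 0.7, 1e-6

def U(s):
    return expm(1j * H0 * (s - t0)) @ expm(-1j * H * (s - t0))

dU = (U(t + dt) - U(t - dt)) / (2 * dt)                        # centered finite difference
Hint_I = expm(1j * H0 * (t - t0)) @ Hint @ expm(-1j * H0 * (t - t0))

print(np.max(np.abs(1j * dU - Hint_I @ U(t))))                 # ~0 up to finite difference error
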
For the (Schrödinger) interaction \( H_{\text{int}} = \lambda \int d^3 x \phi^4(\Bx, t_0) \), what we really mean by
\( H_{\text{int}, I}(t) \) is
\begin{equation}\label{eqn:qftLecture13:820}
H_{\text{int}, I}(t) = \lambda \int d^3 x \phi_I^4(\Bx, t).
\end{equation}

It will be more convenient to remove the explicit \( \lambda \) factor from the interaction Hamiltonian, and write instead
\begin{equation}\label{eqn:qftLecture13:880}
H_{\text{int}, I}(t) = \int d^3 x \phi_I^4(\Bx, t),
\end{equation}
so the equation to solve is
\begin{equation}\label{eqn:qftLecture13:1220}
i \PD{t}{} U(t, t_0)
=
\lambda H_{\text{int}, I}(t) U(t, t_0).
\end{equation}

We assume that
\begin{equation}\label{eqn:qftLecture13:900}
U(t, t_0)
=
U_0(t, t_0)
+ \lambda U_1(t, t_0)
+ \lambda^2 U_2(t, t_0)
+ \cdots
+ \lambda^n U_n(t, t_0)
\end{equation}

Plugging into \ref{eqn:qftLecture13:1220} we have
\begin{equation}\label{eqn:qftLecture13:1160}
\begin{aligned}
i &\lambda^0 \PD{t}{}U_0(t, t_0)
+ i \lambda^1 \PD{t}{}U_1(t, t_0)
+ i \lambda^2 \PD{t}{}U_2(t, t_0)
+ \cdots
+ i \lambda^n \PD{t}{}U_n(t, t_0) \\
&=
\lambda H_{\text{int}, I}(t)
\lr{
1
+ \lambda U_1(t, t_0)
+ \lambda^2 U_2(t, t_0)
+ \cdots
+ \lambda^n U_n(t, t_0)
},
\end{aligned}
\end{equation}
so equating coefficients of equal powers of \( \lambda \) on each side gives a recurrence relation for each \( U_k, k > 0 \)
\begin{equation}\label{eqn:qftLecture13:1180}
\PD{t}{}U_k(t, t_0) = -i H_{\text{int}, I}(t) U_{k-1}(t, t_0).
\end{equation}

Let’s consider each power in turn.

\(O(\lambda^0)\):

Solving \ref{eqn:qftLecture13:800} to \( O(\lambda^0) \) gives
\begin{equation}\label{eqn:qftLecture13:840}
i \PD{t}{} U_0(t, t_0) = 0,
\end{equation}
or
\begin{equation}\label{eqn:qftLecture13:860}
U(t, t_0) = 1 + O(\lambda).
\end{equation}

\(O(\lambda^1)\):

\begin{equation}\label{eqn:qftLecture13:940}
\PD{t}{U_1(t, t_0)} = -i H_{\text{int}, I}(t),
\end{equation}
which has solution
\begin{equation}\label{eqn:qftLecture13:960}
U_1(t, t_0) = -i \int_{t_0}^t H_{\text{int}, I}(t') dt'.
\end{equation}

\(O(\lambda^2)\):

\begin{equation}\label{eqn:qftLecture13:1000}
\begin{aligned}
\PD{t}{U_2(t, t_0)}
&= -i H_{\text{int}, I}(t) U_1(t, t_0) \\
&= (-i)^2 H_{\text{int}, I}(t)
\int_{t_0}^t H_{\text{int}, I}(t') dt',
\end{aligned}
\end{equation}
which has solution
\begin{equation}\label{eqn:qftLecture13:1020}
\begin{aligned}
U_2(t, t_0)
&= (-i )^2
\int_{t_0}^t H_{\text{int}, I}(t'') dt''
\int_{t_0}^{t''} H_{\text{int}, I}(t') dt' \\
&= (-i )^2
\int_{t_0}^t dt''
\int_{t_0}^{t''}
dt'
H_{\text{int}, I}(t'')
H_{\text{int}, I}(t').
\end{aligned}
\end{equation}

\(O(\lambda^3)\):

\begin{equation}\label{eqn:qftLecture13:1060}
\PD{t}{U_3(t, t_0)}
=
-i
H_{\text{int}, I}(t) U_2(t, t_0)
\end{equation}
so
\begin{equation}\label{eqn:qftLecture13:1240}
\begin{aligned}
U_3(t, t_0)
&=
-i
\int_{t_0}^t dt'''
H_{\text{int}, I}(t''') U_2(t''', t_0) \\
&=
(-i )^3
\int_{t_0}^t dt'''
H_{\text{int}, I}(t''')
\int_{t_0}^{t'''} dt''
\int_{t_0}^{t''}
dt'
H_{\text{int}, I}(t'')
H_{\text{int}, I}(t') \\
&=
(-i)^3
\int_{t_0}^t dt'''
\int_{t_0}^{t'''} dt''
\int_{t_0}^{t''} dt'
H_{\text{int}, I}(t''')
H_{\text{int}, I}(t'')
H_{\text{int}, I}(t')
\end{aligned}
\end{equation}

Simplifying the integration region.

For the two fold integral, the integration range is the upper triangular region sketched in fig. 3.

fig. 3. Upper triangular integration region.

Claim:

We can integrate over the entire square, and divide by two, provided we keep the time ordering
\begin{equation}\label{eqn:qftLecture13:1040}
U_2(t, t_0)
= \frac{(-i )^2}{2}
\int_{t_0}^t dt''
\int_{t_0}^{t}
dt'
T(H_{\text{int}, I}(t'') H_{\text{int}, I}(t') )
\end{equation}

Demonstration:
\begin{equation}\label{eqn:qftLecture13:1100}
\begin{aligned}
\frac{(-i)^2}{2}
&\int_{t_0}^t dt''
\int_{t_0}^t dt'
T( H_I(t'') H_I(t') ) \\
&=
\frac{(-i)^2}{2}
\int_{t_0}^t dt''
\int_{t_0}^t dt'
\Theta(t''- t')
H_I(t'') H_I(t')
+
\frac{(-i)^2}{2}
\int_{t_0}^t dt''
\int_{t_0}^t dt'
\Theta(t'- t'')
H_I(t') H_I(t''),
\end{aligned}
\end{equation}
but the \( \Theta(t'' - t') \) function is non-zero only for \( t'' - t' > 0 \), or \( t' < t'' \), and the \( \Theta(t' - t'') \) function is non-zero only for \( t' - t'' > 0 \), or \( t'' < t' \), so we can restrict the integration ranges accordingly
\begin{equation}\label{eqn:qftLecture13:1260}
\begin{aligned}
\frac{(-i)^2}{2}
&\int_{t_0}^t dt''
\int_{t_0}^t dt'
T( H_I(t'') H_I(t') ) \\
&=
\frac{(-i)^2}{2}
\int_{t_0}^t dt''
\int_{t_0}^{t''} dt'
H_I(t'') H_I(t')
+
\frac{(-i)^2}{2}
\int_{t_0}^t dt'
\int_{t_0}^{t'} dt''
H_I(t') H_I(t'') \\
&=
\frac{(-i)^2}{2}
\int_{t_0}^t dt''
\int_{t_0}^{t''} dt'
H_I(t'') H_I(t')
+
\frac{(-i)^2}{2}
\int_{t_0}^t dt''
\int_{t_0}^{t''} dt'
H_I(t'') H_I(t') \\
&=
U_2(t, t_0),
\end{aligned}
\end{equation}
where we swapped the integration variable names in the second integral. We can clearly do the same thing for the higher order repeated integrals, but instead of a \(1/2 = 1/2!\) adjustment for the number of orderings, we will require a \( 1/n! \) adjustment for an \( n \)-fold integral.

Summary:

\begin{equation}\label{eqn:qftLecture13:1120}
\begin{aligned}
U_0 &= 1 \\
U_1 &= -i \int_{t_0}^t dt_1 H_I(t_1) \\
U_2 &= \frac{(-i)^2}{2}
\int_{t_0}^t dt_1
\int_{t_0}^t dt_2
T( H_I(t_1)
H_I(t_2) ) \\
U_3 &= \frac{(-i)^3}{3!}
\int_{t_0}^t dt_1
\int_{t_0}^t dt_2
\int_{t_0}^t dt_3
T( H_I(t_1)
H_I(t_2)
H_I(t_3)
) \\
U_n &= \frac{(-i)^n}{n!}
\int_{t_0}^t dt_1
\int_{t_0}^t dt_2
\int_{t_0}^t dt_3
\cdots
\int_{t_0}^t dt_n
T( H_I(t_1)
H_I(t_2)
\cdots
H_I(t_n)
) \\
\end{aligned}
\end{equation}

Summing we find
\begin{equation}\label{eqn:qftLecture13:1140}
\begin{aligned}
U(t, t_0)
&= T \exp\lr{-i
\int_{t_0}^t dt' H_I(t')
} \\
&=
\sum_{n = 0}^\infty
\frac{(-i)^n}{n!} \int_{t_0}^t dt_1 \cdots dt_n T( H_I(t_1) \cdots H_I(t_n) ).
\end{aligned}
\end{equation}

This is called Dyson’s formula.
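
As a numerical illustration (my addition), the sketch below compares the Dyson series truncated at second order against a direct time-stepped solution of \( i \partial_t U = H_I(t) U \), for an arbitrary two-level interaction that does not commute with itself at different times; the discrepancy is of order \( \lambda^3 \), as expected. Everything about the toy model is an assumption made just for this check.

import numpy as np
from scipy.linalg import expm

lam = 0.1                                            # small coupling (assumed)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H_I(t):                                          # toy interaction-picture Hamiltonian
    return lam * (np.cos(t) * sx + np.sin(t) * sz)

t0, t_final, n = 0.0, 1.0, 400
dt = (t_final - t0) / n
ts = np.linspace(t0, t_final, n, endpoint=False) + 0.5 * dt   # midpoints of the time slices
Hs = [H_I(t) for t in ts]

# "exact" time-ordered evolution: product of short-time propagators, later times on the left
U_exact = np.eye(2, dtype=complex)
for Hk in Hs:
    U_exact = expm(-1j * Hk * dt) @ U_exact

# truncated Dyson series: U ~ 1 + U_1 + U_2, using the triangular (time-ordered) integrals
U1 = -1j * dt * sum(Hs)
U2 = np.zeros((2, 2), dtype=complex)
for i in range(n):
    for j in range(i):                               # t_j < t_i, so H_I(t_i) stands to the left
        U2 += (-1j)**2 * Hs[i] @ Hs[j] * dt**2

print(np.max(np.abs(U_exact - (np.eye(2) + U1 + U2))))   # small, O(lam^3)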

Next time.

Our goal is to compute: \( \bra{\Omega} T(\phi(x_1) \cdots \phi(x_n)) \ket{\Omega} \).

1D SHO linear superposition that maximizes expectation

October 7, 2015 phy1520


Question: 1D SHO linear superposition that maximizes expectation ([1] pr. 2.17)

For a 1D SHO

(a)

Construct a linear combination of \( \ket{0}, \ket{1} \) that maximizes \( \expectation{x} \) without using wave functions.

(b)

How does this state evolve with time?

(c)

Evaluate \( \expectation{x} \) using the Schrodinger picture.

(d)

Evaluate \( \expectation{x} \) using the Heisenberg picture.

(e)

Evaluate \( \expectation{(\Delta x)^2} \).

Answer

(a)

Forming

\begin{equation}\label{eqn:shoSuperposition:20}
\ket{\psi} = \frac{\ket{0} + \sigma \ket{1}}{\sqrt{1 + \Abs{\sigma}^2}}
\end{equation}

the position expectation is

\begin{equation}\label{eqn:shoSuperposition:40}
\bra{\psi} x \ket{\psi}
=
\inv{1 + \Abs{\sigma}^2} \lr{ \bra{0} + \sigma^\conj \bra{1} } \frac{x_0}{\sqrt{2}} \lr{ a^\dagger + a } \lr{ \ket{0} + \sigma \ket{1} }.
\end{equation}

Evaluating the action of the operators on the kets, we’ve got

\begin{equation}\label{eqn:shoSuperposition:60}
\lr{ a^\dagger + a } \lr{ \ket{0} + \sigma \ket{1} }
=
\ket{1} + \sqrt{2} \sigma \ket{2} + \sigma \ket{0}.
\end{equation}

The \( \ket{2} \) term is killed by the bras, leaving

\begin{equation}\label{eqn:shoSuperposition:80}
\begin{aligned}
\expectation{x}
&=
\inv{1 + \Abs{\sigma}^2} \frac{x_0}{\sqrt{2}} \lr{ \sigma + \sigma^\conj} \\
&=
\frac{\sqrt{2} x_0 \textrm{Re} \sigma}{1 + \Abs{\sigma}^2}.
\end{aligned}
\end{equation}

Any imaginary component in \( \sigma \) will reduce the expectation, so we are constrained to picking a real value.

The derivative of

\begin{equation}\label{eqn:shoSuperposition:100}
f(\sigma) = \frac{\sigma}{1 + \sigma^2},
\end{equation}

is

\begin{equation}\label{eqn:shoSuperposition:120}
f'(\sigma) = \frac{1 - \sigma^2}{(1 + \sigma^2)^2}.
\end{equation}

That has zeros at \( \sigma = \pm 1 \). The second derivative is

\begin{equation}\label{eqn:shoSuperposition:140}
f''(\sigma) = \frac{-2 \sigma (3 - \sigma^2)}{(1 + \sigma^2)^3}.
\end{equation}

That is negative at \( \sigma = 1 \) (so this extremum is a maximum), so the linear superposition of these first two energy eigenkets that maximizes the position expectation is

\begin{equation}\label{eqn:shoSuperposition:160}
\ket{\psi} = \inv{\sqrt{2}}\lr{ \ket{0} + \ket{1} }.
\end{equation}

That maximized position expectation is

\begin{equation}\label{eqn:shoSuperposition:180}
\expectation{x}
=
\frac{x_0}{\sqrt{2}}.
\end{equation}
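
A quick numerical check of this maximization (my addition): build truncated matrix representations of \( a \) and \( x \), scan real \( \sigma \), and confirm that the maximum sits at \( \sigma = 1 \) with \( \expectation{x} = x_0/\sqrt{2} \). Units with \( x_0 = 1 \) and the truncation size are assumptions.

import numpy as np

N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
x = (a + a.T) / np.sqrt(2)                   # x in units of x_0

def x_expectation(sigma):
    ket = np.zeros(N)
    ket[0], ket[1] = 1.0, sigma
    ket /= np.linalg.norm(ket)
    return ket @ x @ ket

sigmas = np.linspace(-3.0, 3.0, 6001)
vals = np.array([x_expectation(s) for s in sigmas])
print(sigmas[np.argmax(vals)], vals.max())   # ~1.0, ~0.7071 = 1/sqrt(2)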

(b)

The time evolution is given by

\begin{equation}\label{eqn:shoSuperposition:200}
\begin{aligned}
\ket{\Psi(t)}
&= e^{-iH t/\Hbar} \inv{\sqrt{2}}\lr{ \ket{0} + \ket{1} } \\
&= \inv{\sqrt{2}}\lr{ e^{-i(0+ \ifrac{1}{2})\Hbar \omega t/\Hbar} \ket{0} +
e^{-i(1+ \ifrac{1}{2})\Hbar \omega t/\Hbar} \ket{1} } \\
&= \inv{\sqrt{2}}\lr{ e^{-i \omega t/2} \ket{0} + e^{-3 i \omega t/2} \ket{1} }.
\end{aligned}
\end{equation}

(c)

The position expectation in the Schrodinger representation is

\begin{equation}\label{eqn:shoSuperposition:220}
\begin{aligned}
\expectation{x(t)}
&=
\inv{2}
\lr{ e^{i \omega t/2} \bra{0} + e^{3 i \omega t/2} \bra{1} } \frac{x_0}{\sqrt{2}} \lr{ a^\dagger + a }
\lr{ e^{-i \omega t/2} \ket{0} + e^{-3 i \omega t/2} \ket{1} } \\
&=
\frac{x_0}{2\sqrt{2}}
\lr{ e^{i \omega t/2} \bra{0} + e^{3 i \omega t/2} \bra{1} }
\lr{ e^{-i \omega t/2} \ket{1} + e^{-3 i \omega t/2} \sqrt{2} \ket{2} + e^{-3 i \omega t/2} \ket{0} } \\
&=
\frac{x_0}{\sqrt{2}} \cos(\omega t).
\end{aligned}
\end{equation}

(d)

\begin{equation}\label{eqn:shoSuperposition:240}
\begin{aligned}
\expectation{x(t)}
&=
\inv{2}
\lr{ \bra{0} + \bra{1} } \frac{x_0}{\sqrt{2}}
\lr{ a^\dagger e^{i\omega t} + a e^{-i \omega t} }
\lr{ \ket{0} + \ket{1} } \\
&=
\frac{x_0}{2 \sqrt{2}}
\lr{ \bra{0} + \bra{1} }
\lr{ e^{i\omega t} \ket{1} + \sqrt{2} e^{i\omega t} \ket{2} + e^{-i \omega t} \ket{0} } \\
&=
\frac{x_0}{\sqrt{2}} \cos(\omega t),
\end{aligned}
\end{equation}

matching the calculation using the Schrodinger picture.

(e)

Let’s use the Heisenberg picture for the uncertainty calculation. Using the calculation above we have

\begin{equation}\label{eqn:shoSuperposition:260}
\begin{aligned}
\expectation{x^2}
&=
\inv{2} \frac{x_0^2}{2}
\lr{ e^{-i\omega t} \bra{1} + \sqrt{2} e^{-i\omega t} \bra{2} + e^{i \omega t} \bra{0} }
\lr{ e^{i\omega t} \ket{1} + \sqrt{2} e^{i\omega t} \ket{2} + e^{-i \omega t} \ket{0} } \\
&=
\frac{x_0^2}{4} \lr{ 1 + 2 + 1} \\
&=
x_0^2.
\end{aligned}
\end{equation}

The uncertainty is
\begin{equation}\label{eqn:shoSuperposition:280}
\begin{aligned}
\expectation{(\Delta x)^2}
&=
\expectation{x^2} - \expectation{x}^2 \\
&=
x_0^2 - \frac{x_0^2}{2} \cos^2(\omega t) \\
&=
\frac{x_0^2}{2} \lr{ 2 - \cos^2(\omega t) } \\
&=
\frac{x_0^2}{2} \lr{ 1 + \sin^2(\omega t) }
\end{aligned}
\end{equation}

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

PHY1520H Graduate Quantum Mechanics. Lecture 2: Basic concepts, time evolution, and density operators. Taught by Prof. Arun Paramekanti

September 22, 2015 phy1520


Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering chapter 1 (basic concepts) and chapter 3 (density operator) content from [1].

Basic concepts

We’ve reviewed the basic concepts that we will encounter in Quantum Mechanics.

  1. Abstract state vector. \( \ket{ \psi} \)
  2. Basis states. \( \ket{ x } \)
  3. Observables, special Hermitian operators. We’ll only deal with linear observables.
  4. Measurement.

We can either express the wave functions \( \psi(x) = \braket{x}{\psi} \) in terms of a basis for the observable, or can express the observable in terms of the basis of the wave function (position or momentum for example).

We saw that the position space representation of a momentum operator (also an observable) was

\begin{equation}\label{eqn:lecture2:20}
\hat{p} \rightarrow -i \Hbar \PD{x}{}.
\end{equation}

In general we can find the matrix element representation of any operator by considering its representation in a given basis. For example, in a position basis, that would be

\begin{equation}\label{eqn:lecture2:40}
\bra{x'} \hat{A} \ket{x} \leftrightarrow A_{x' x}
\end{equation}

The Hermitian property of the observable means that \( A_{x x'} = A_{x' x}^\conj \)

\begin{equation}\label{eqn:lecture2:60}
\int dx \bra{x'} \hat{A} \ket{x} \braket{x }{\psi} = \braket{x'}{\phi}
\leftrightarrow
A_{x' x} \psi_x = \phi_{x'}.
\end{equation}

Example: Polarization measurement

 


fig. 1. Polarizer apparatus

Consider a polarization apparatus as sketched in fig. 1, where the output is of the form \( I_{\textrm{out}} = I_{\textrm{in}} \cos^2 \theta \).

A general input state can be written in terms of each of the possible polarizations

\begin{equation}\label{eqn:lecture2:80}
\alpha \ket{ \updownarrow } + \beta \ket{ \leftrightarrow } \sim
\cos\theta \ket{ \updownarrow } + \sin\theta \ket{ \leftrightarrow }
\end{equation}

Here \( \abs{\alpha}^2 \) is the probability that the input state is found in the vertical polarization state \( \ket{\updownarrow} \), and \( \abs{\beta}^2 \) is the probability that it is found in the horizontal polarization state \( \ket{\leftrightarrow} \).

The measurement of the polarization results in an output state that has a specific polarization. That measurement is said to collapse the wavefunction.

A measurement that looks for a specific value affects the state of the system, and is called a strong or projective measurement. Such a measurement is

  • (i) Probabilistic.
  • (ii) Requires many measurements.

This measurement process results in a determination of an eigenvalue of the operator. The fact that measurement produces eigenvalues is why we demand that observables be Hermitian.

It is also possible to try to do a weaker (perturbative) measurement, where some information is extracted from the input state without completely altering it.

Time evolution

  1. Schrodinger picture.
    The time evolution process is governed by a Schrodinger equation of the following form\begin{equation}\label{eqn:lecture2:100}
    i \Hbar \PD{t}{} \ket{\Psi(t)} = \hat{H} \ket{\Psi(t)}.
    \end{equation}

    This Hamiltonian could be, for example,

    \begin{equation}\label{eqn:lecture2:120}
    \hat{H} = \frac{\hat{p}^2}{2m} + V(x),
    \end{equation}

    Such a representation of time evolution is expressed in terms of operators \( \hat{x}, \hat{p}, \hat{H}, \cdots \) that are independent of time.

  2. Heisenberg picture. Suppose we have a state \( \ket{\Psi(t)} \) and operate on this with an operator

    \begin{equation}\label{eqn:lecture2:140}
    \hat{A} \ket{\Psi(t)}.
    \end{equation}

    This will have time evolution of the form

    \begin{equation}\label{eqn:lecture2:160}
    \hat{A} e^{-i \hat{H} t/\Hbar} \ket{\Psi(0)},
    \end{equation}

    or in matrix element form

    \begin{equation}\label{eqn:lecture2:180}
    \bra{\phi(t)} \hat{A} \ket{\Psi(t)}
    =
    \bra{\phi(0)}
    e^{i \hat{H} t/\Hbar}
    \hat{A} e^{-i \hat{H} t/\Hbar} \ket{\Psi(0)}.
    \end{equation}

    We work with states that do not evolve in time \( \ket{\phi(0)}, \ket{\Psi(0)}, \cdots \), but operators do evolve in time according to

    \begin{equation}\label{eqn:lecture2:200}
    \hat{A}(t) =
    e^{i \hat{H} t/\Hbar}
    \hat{A} e^{-i \hat{H} t/\Hbar}.
    \end{equation}

Density operator

We can have situations where it is impossible to determine a single state that describes the system. For example, given the gas in the room that you are sitting in, there are things that we can measure, but it is impossible to describe the state that describes all the particles and also impossible to construct a Hamiltonian that governs all the interactions of those many many particles.

We need a probabilistic description to even describe such a complex system.

Suppose we have a complex system that can be partitioned into two subsets, left and right, as sketched in fig. 2.

fig. 2. System partitioned into separate set of states

 

If the states in each partition can be enumerated separately, we can write the state of the system as a sum over probability amplitudes for the combined states.

\begin{equation}\label{eqn:lecture2:220}
\ket{\Psi}
=
\sum_{m, n} C_{m,n} \ket{m} \ket{n}
\end{equation}

Here \( C_{m, n} \) is the probability amplitude to find the state in the combined state \( \ket{m} \ket{n} \).

As an example of such a system, we could investigate a two particle configuration where spin up or spin down can be separately measured for each particle.

\begin{equation}\label{eqn:lecture2:240}
\ket{\psi} = \inv{\sqrt{2}} \lr{
\ket{\uparrow}\ket{\downarrow}
+
\ket{\downarrow}\ket{\uparrow}
}
\end{equation}

Considering such a system we could ask questions such as

  • What is the probability that the left half is in state \( m \)? This would be\begin{equation}\label{eqn:lecture2:260}
    \sum_n \Abs{C_{m, n}}^2
    \end{equation}
  • What is the joint probability that the left half is in state \( m \) and the right half is in state \( n \)? That is\begin{equation}\label{eqn:lecture2:280}
    \Abs{C_{m, n}}^2
    \end{equation}

We define the density operator

\begin{equation}\label{eqn:lecture2:300}
\hat{\rho} = \ket{\Psi} \bra{\Psi}.
\end{equation}

This is idempotent for a normalized state (\( \braket{\Psi}{\Psi} = 1 \))

\begin{equation}\label{eqn:lecture2:320}
\hat{\rho}^2 =
\lr{ \ket{\Psi} \bra{\Psi} }
\lr{ \ket{\Psi} \bra{\Psi} }
=
\ket{\Psi} \bra{\Psi}
\end{equation}
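
A small numerical sketch of this idempotency (my addition, with an arbitrary normalized two-spin state standing in for \( \ket{\Psi} \)):

import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)   # normalized (assumed) state

rho = np.outer(psi, psi.conj())
print(np.allclose(rho @ rho, rho))   # True: idempotent
print(np.trace(rho))                 # 1.0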

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Correlation function. Partition function and ground state energy.

September 5, 2015 phy1520


Question: Correlation function ([1] pr. 2.16)

A correlation function can be defined as

\begin{equation}\label{eqn:correlationSHO:20}
C(t) = \expectation{ x(t) x(0) }.
\end{equation}

Using a Heisenberg picture \( x(t) \) calculate this correlation for the one dimensional SHO ground state.

Answer

The time dependent Heisenberg picture position operator was found to be

\begin{equation}\label{eqn:correlationSHO:40}
x(t) = x(0) \cos(\omega t) + \frac{p(0)}{m \omega} \sin(\omega t),
\end{equation}

so the correlation function is

\begin{equation}\label{eqn:correlationSHO:60}
\begin{aligned}
C(t)
&=
\bra{0} \lr{ x(0) \cos(\omega t) + \frac{p(0)}{m \omega} \sin(\omega t)} x(0) \ket{0} \\
&=
\cos(\omega t) \bra{0} x(0)^2 \ket{0} + \frac{\sin(\omega t)}{m \omega} \bra{0} p(0) x(0) \ket{0} \\
&=
\frac{\Hbar \cos(\omega t) }{2 m \omega} \bra{0} \lr{ a + a^\dagger}^2 \ket{0} - \frac{i \Hbar}{2 m \omega} \sin(\omega t),
\end{aligned}
\end{equation}

But
\begin{equation}\label{eqn:correlationSHO:80}
\begin{aligned}
\lr{ a + a^\dagger} \ket{0}
&=
a^\dagger \ket{0} \\
&=
\sqrt{1} \ket{1} \\
&=
\ket{1},
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:correlationSHO:100}
C(t) = \frac{x_0^2}{2} \lr{ \cos(\omega t) - i \sin(\omega t) } = \frac{x_0^2}{2} e^{-i \omega t},
\end{equation}

where \( x_0^2 = \Hbar/(m \omega) \), not to be confused with \( x(0)^2 \).
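
A numerical check of this result (my addition), using truncated matrix representations of the ladder operators and units \( \Hbar = m = \omega = 1 \) so that \( x_0^2 = 1 \) (those units and the truncation size are assumptions):

import numpy as np

N = 10
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.T) / np.sqrt(2)               # x(0) in units of x_0
p = 1j * (a.T - a) / np.sqrt(2)          # p(0) in the same units

ket0 = np.zeros(N)
ket0[0] = 1.0

for wt in (0.0, 0.3, 1.7):
    x_t = x * np.cos(wt) + p * np.sin(wt)          # Heisenberg picture x(t)
    C = ket0 @ (x_t @ (x @ ket0))                  # <0| x(t) x(0) |0>
    print(C, 0.5 * np.exp(-1j * wt))               # the two columns agree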


Question: Partition function and ground state energy ([1] pr. 2.32)

Define the partition function as

\begin{equation}\label{eqn:partitionFunction:20}
Z = \int d^3 x' \evalbar{ K( \Bx', t ; \Bx', 0 ) }{\beta = i t/\Hbar}.
\end{equation}

Show that the ground state energy is given by

\begin{equation}\label{eqn:partitionFunction:40}
-\inv{Z} \PD{\beta}{Z}, \qquad \beta \rightarrow \infty.
\end{equation}

Answer

The propagator evaluated at the same point is

\begin{equation}\label{eqn:partitionFunction:60}
\begin{aligned}
K( \Bx', t ; \Bx', 0 )
&=
\sum_{a'} \braket{\Bx'}{a'} \braket{a'}{\Bx'} \exp\lr{ -\frac{i E_{a'} t}{\Hbar}} \\
&=
\sum_{a'} \Abs{\braket{\Bx'}{a'}}^2 \exp\lr{ -\frac{i E_{a'} t}{\Hbar}} \\
&=
\sum_{a'} \Abs{\braket{\Bx'}{a'}}^2 \exp\lr{ -E_{a'} \beta}.
\end{aligned}
\end{equation}

The derivative is
\begin{equation}\label{eqn:partitionFunction:80}
\PD{\beta}{Z}
=
-\int d^3 x' \sum_{a'} E_{a'} \Abs{\braket{\Bx'}{a'}}^2 \exp\lr{ -E_{a'} \beta}.
\end{equation}

In the \( \beta \rightarrow \infty \) limit this sum will be dominated by the term with the lowest value of \( E_{a'} \). Suppose that state is \( a' = 0 \), then

\begin{equation}\label{eqn:partitionFunction:100}
\lim_{ \beta \rightarrow \infty }
-\inv{Z} \PD{\beta}{Z}
= \frac{
\int d^3 x' E_{0} \Abs{\braket{\Bx'}{0}}^2 \exp\lr{ -E_{0} \beta}
}
{
\int d^3 x' \Abs{\braket{\Bx'}{0}}^2 \exp\lr{ -E_{0} \beta}
}
= E_0.
\end{equation}
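
A small numerical illustration of this limit (my addition), using the SHO spectrum \( E_n = n + 1/2 \) in units \( \Hbar = \omega = 1 \) (the units and the level cutoff are assumptions):

import numpy as np

E = np.arange(200) + 0.5                 # truncated SHO spectrum

def minus_dlnZ_dbeta(beta):
    w = np.exp(-beta * E)
    return np.sum(E * w) / np.sum(w)     # = -(1/Z) dZ/dbeta

for beta in (1.0, 5.0, 20.0):
    print(beta, minus_dlnZ_dbeta(beta))  # approaches E_0 = 0.5 as beta grows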

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Quantum SHO ladder operators as a diagonal change of basis for the Heisenberg EOMs

August 19, 2015 phy1520


Many authors pull the definitions of the raising and lowering (or ladder) operators out of their butt with no attempt at motivation. This is pointed out nicely in [1] by Eli along with one justification based on factoring the Hamiltonian.

In [2] there is a small exception to the usual presentation. In that text, these operators are defined as usual with no motivation. However, after the utility of these operators has been shown, the raising and lowering operators show up in a context that does provide that missing motivation as a side effect.
It doesn’t look like the author was trying to provide a motivation, but it can be interpreted that way.

When seeking the time evolution of Heisenberg-picture position and momentum operators, we will see that those solutions can be trivially expressed using the raising and lowering operators. No special tools or black magic are required to find the structure of these operators. Unfortunately, we must first switch to both the Heisenberg picture representation of the position and momentum operators, and also employ the Heisenberg equations of motion. Neither of these last two fits into the standard narrative of most introductory quantum mechanics treatments. We will also see that these raising and lowering “operators” could also be introduced in classical mechanics, provided we were attempting to solve the SHO system using the Hamiltonian equations of motion.

I’ll outline this route to finding the structure of the ladder operators below. Because these are encountered trying to solve the time evolution problem, I’ll first show a simpler way to solve that problem. Because that simpler method depends a bit on lucky observation and is somewhat unstructured, I’ll then outline a more structured procedure that leads to the ladder operators directly, also providing the solution to the time evolution problem as a side effect.

The starting point is the Heisenberg equations of motion. For a time independent Hamiltonian \( H \), and a Heisenberg operator \( A^{(H)} \), those equations are

\begin{equation}\label{eqn:harmonicOscDiagonalize:20}
\ddt{A^{(H)}} = \inv{i \Hbar} \antisymmetric{A^{(H)}}{H}.
\end{equation}

Here the Heisenberg operator \( A^{(H)} \) is related to the Schrodinger operator \( A^{(S)} \) by

\begin{equation}\label{eqn:harmonicOscDiagonalize:60}
A^{(H)} = U^\dagger A^{(S)} U,
\end{equation}

where \( U \) is the time evolution operator. For this discussion, we need only know that \( U \) commutes with \( H \), and do not need to know the specific structure of that operator. In particular, the Heisenberg equations of motion take the form

\begin{equation}\label{eqn:harmonicOscDiagonalize:80}
\begin{aligned}
\ddt{A^{(H)}}
&= \inv{i \Hbar}
\antisymmetric{A^{(H)}}{H} \\
&= \inv{i \Hbar}
\antisymmetric{U^\dagger A^{(S)} U}{H} \\
&= \inv{i \Hbar}
\lr{
U^\dagger A^{(S)} U H
- H U^\dagger A^{(S)} U
} \\
&= \inv{i \Hbar}
\lr{
U^\dagger A^{(S)} H U
- U^\dagger H A^{(S)} U
} \\
&= \inv{i \Hbar} U^\dagger \antisymmetric{A^{(S)}}{H} U.
\end{aligned}
\end{equation}

The Hamiltonian for the harmonic oscillator, with Schrodinger-picture position and momentum operators \( x, p \) is

\begin{equation}\label{eqn:harmonicOscDiagonalize:40}
H = \frac{p^2}{2m} + \inv{2} m \omega^2 x^2,
\end{equation}

so the equations of motions are

\begin{equation}\label{eqn:harmonicOscDiagonalize:100}
\begin{aligned}
\ddt{x^{(H)}}
&= \inv{i \Hbar} U^\dagger \antisymmetric{x}{H} U \\
&= \inv{i \Hbar} U^\dagger \antisymmetric{x}{\frac{p^2}{2m}} U \\
&= \inv{2 m i \Hbar} U^\dagger \lr{ i \Hbar \PD{p}{p^2} } U \\
&= \inv{m } U^\dagger p U \\
&= \inv{m } p^{(H)},
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:harmonicOscDiagonalize:120}
\begin{aligned}
\ddt{p^{(H)}}
&= \inv{i \Hbar} U^\dagger \antisymmetric{p}{H} U \\
&= \inv{i \Hbar} U^\dagger \antisymmetric{p}{\inv{2} m \omega^2 x^2 } U \\
&= \frac{m \omega^2}{2 i \Hbar} U^\dagger \lr{ -i \Hbar \PD{x}{x^2} } U \\
&= -m \omega^2 U^\dagger x U \\
&= -m \omega^2 x^{(H)}.
\end{aligned}
\end{equation}

In the Heisenberg picture the equations of motion are precisely those of classical Hamiltonian mechanics, except that we are dealing with operators instead of scalars

\begin{equation}\label{eqn:harmonicOscDiagonalize:140}
\begin{aligned}
\ddt{p^{(H)}} &= -m \omega^2 x^{(H)} \\
\ddt{x^{(H)}} &= \inv{m } p^{(H)}.
\end{aligned}
\end{equation}

In the text the ladder operators are used to simplify the solution of these coupled equations, since they can decouple them. That’s not really required since we can solve them directly in matrix form with little work

\begin{equation}\label{eqn:harmonicOscDiagonalize:160}
\ddt{}
\begin{bmatrix}
p^{(H)} \\
x^{(H)}
\end{bmatrix}
=
\begin{bmatrix}
0 & -m \omega^2 \\
\inv{m} & 0
\end{bmatrix}
\begin{bmatrix}
p^{(H)} \\
x^{(H)}
\end{bmatrix},
\end{equation}

or, with length scaled variables

\begin{equation}\label{eqn:harmonicOscDiagonalize:180}
\begin{aligned}
\ddt{}
\begin{bmatrix}
\frac{p^{(H)}}{m \omega} \\
x^{(H)}
\end{bmatrix}
&=
\begin{bmatrix}
0 & -\omega \\
\omega & 0
\end{bmatrix}
\begin{bmatrix}
\frac{p^{(H)}}{m \omega} \\
x^{(H)}
\end{bmatrix} \\
&=
-i \omega
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
\frac{p^{(H)}}{m \omega} \\
x^{(H)}
\end{bmatrix} \\
&=
-i \omega
\sigma_y
\begin{bmatrix}
\frac{p^{(H)}}{m \omega} \\
x^{(H)}
\end{bmatrix}.
\end{aligned}
\end{equation}

Writing \( y = \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \), the solution can then be written immediately as

\begin{equation}\label{eqn:harmonicOscDiagonalize:200}
\begin{aligned}
y(t)
&=
\exp\lr{ -i \omega \sigma_y t } y(0) \\
&=
\lr{ \cos \lr{ \omega t } I - i \sigma_y \sin\lr{ \omega t } } y(0) \\
&=
\begin{bmatrix}
\cos\lr{ \omega t } & -\sin\lr{ \omega t } \\
\sin\lr{ \omega t } & \cos\lr{ \omega t }
\end{bmatrix}
y(0),
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:harmonicOscDiagonalize:220}
\begin{aligned}
\frac{p^{(H)}(t)}{m \omega} &= \cos\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} - \sin\lr{ \omega t } x^{(H)}(0) \\
x^{(H)}(t) &= \sin\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} + \cos\lr{ \omega t } x^{(H)}(0).
\end{aligned}
\end{equation}

This solution depends on being lucky enough to recognize that the matrix has a Pauli matrix as a factor (which squares to unity, and allows the exponential to be evaluated easily.)
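
Here is a short numerical confirmation of that matrix exponential (my addition; the values of \( \omega \) and \( t \) are arbitrary assumptions):

import numpy as np
from scipy.linalg import expm

sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
w, t = 1.3, 0.4                                     # arbitrary values (assumed)

lhs = expm(-1j * w * t * sigma_y)
rhs = np.array([[np.cos(w * t), -np.sin(w * t)],
                [np.sin(w * t),  np.cos(w * t)]])
print(np.allclose(lhs, rhs))                        # True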

If we hadn’t been that observant, then the first tool we’d have used instead would have been to diagonalize the matrix. For such diagonalization, it’s natural to work in completely dimensionless variables. Such a non-dimensionalisation can be had by defining

\begin{equation}\label{eqn:harmonicOscDiagonalize:240}
x_0 = \sqrt{\frac{\Hbar}{m \omega}},
\end{equation}

and dividing the working (operator) variables through by those values. Let \( z = \inv{x_0} y \), and \( \tau = \omega t \) so that the equations of motion are

\begin{equation}\label{eqn:harmonicOscDiagonalize:260}
\frac{dz}{d\tau}
=
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
z.
\end{equation}

This matrix can be diagonalized as

\begin{equation}\label{eqn:harmonicOscDiagonalize:280}
A
=
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
=
V
\begin{bmatrix}
i & 0 \\
0 & -i
\end{bmatrix}
V^{-1},
\end{equation}

where

\begin{equation}\label{eqn:harmonicOscDiagonalize:300}
V =
\inv{\sqrt{2}}
\begin{bmatrix}
i & -i \\
1 & 1
\end{bmatrix}.
\end{equation}

The equations of motion can now be written

\begin{equation}\label{eqn:harmonicOscDiagonalize:320}
\frac{d}{d\tau} \lr{ V^{-1} z } =
\begin{bmatrix}
i & 0 \\
0 & -i
\end{bmatrix}
\lr{ V^{-1} z }.
\end{equation}

This final change of variables \( V^{-1} z \) decouples the system as desired. Expanding that gives

\begin{equation}\label{eqn:harmonicOscDiagonalize:340}
\begin{aligned}
V^{-1} z
&=
\inv{\sqrt{2}}
\begin{bmatrix}
-i & 1 \\
i & 1
\end{bmatrix}
\begin{bmatrix}
\frac{p^{(H)}}{x_0 m \omega} \\
\frac{x^{(H)}}{x_0}
\end{bmatrix} \\
&=
\inv{\sqrt{2} x_0}
\begin{bmatrix}
-i \frac{p^{(H)}}{m \omega} + x^{(H)} \\
i \frac{p^{(H)}}{m \omega} + x^{(H)}
\end{bmatrix} \\
&=
\begin{bmatrix}
a^\dagger \\
a
\end{bmatrix},
\end{aligned}
\end{equation}

where
\begin{equation}\label{eqn:harmonicOscDiagonalize:n}
\begin{aligned}
a^\dagger &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ -i \frac{p^{(H)}}{m \omega} + x^{(H)} } \\
a &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ i \frac{p^{(H)}}{m \omega} + x^{(H)} }.
\end{aligned}
\end{equation}

Lo and behold, we have the standard form of the raising and lowering operators, and can write the system equations as

\begin{equation}\label{eqn:harmonicOscDiagonalize:360}
\begin{aligned}
\ddt{a^\dagger} &= i \omega a^\dagger \\
\ddt{a} &= -i \omega a.
\end{aligned}
\end{equation}
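
A quick numerical sanity check of this diagonalization (my addition): the \( V \) above does indeed bring \( A \) to \( \text{diag}(i, -i) \), which is what decouples the \( (a^\dagger, a) \) pair.

import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])
V = np.array([[1.0j, -1.0j], [1.0, 1.0]]) / np.sqrt(2)

print(np.allclose(np.linalg.inv(V) @ A @ V, np.diag([1.0j, -1.0j])))   # True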

It is actually a bit fluky that this matched exactly, since we could have chosen eigenvectors that differ by constant phase factors, like

\begin{equation}\label{eqn:harmonicOscDiagonalize:380}
V = \inv{\sqrt{2}}
\begin{bmatrix}
i e^{i\phi} & -i e^{i \psi} \\
e^{i\phi} & e^{i \psi}
\end{bmatrix},
\end{equation}

so

\begin{equation}\label{eqn:harmonicOscDiagonalize:341}
\begin{aligned}
V^{-1} z
&=
\frac{e^{-i(\phi + \psi)}}{\sqrt{2}}
\begin{bmatrix}
-i e^{i\psi} & e^{i \psi} \\
i e^{i\phi} & e^{i \phi}
\end{bmatrix}
\begin{bmatrix}
\frac{p^{(H)}}{x_0 m \omega} \\
\frac{x^{(H)}}{x_0}
\end{bmatrix} \\
&=
\inv{\sqrt{2} x_0}
\begin{bmatrix}
-i e^{-i\phi} \frac{p^{(H)}}{m \omega} + e^{-i\phi} x^{(H)} \\
i e^{-i\psi} \frac{p^{(H)}}{m \omega} + e^{-i\psi} x^{(H)}
\end{bmatrix} \\
&=
\begin{bmatrix}
e^{-i\phi} a^\dagger \\
e^{-i\psi} a
\end{bmatrix}.
\end{aligned}
\end{equation}

To make the resulting pairs of operators Hermitian conjugates, we’d want to constrain those constant phase factors by setting \( \phi = -\psi \). If we were only interested in solving the time evolution problem no such additional constraints are required.

The raising and lowering operators are seen to naturally occur when seeking the solution of the Heisenberg equations of motion. This is found using the standard technique of non-dimensionalisation and then seeking a change of basis that diagonalizes the system matrix. Because the Heisenberg equations of motion are identical to the classical Hamiltonian equations of motion in this case, what we call the raising and lowering operators in quantum mechanics could also be utilized in the classical simple harmonic oscillator problem. However, in a classical context we wouldn’t have a justification to call this more than a change of basis.

References

[1] Eli Lansey. The Quantum Harmonic Oscillator Ladder Operators, 2009. URL http://behindtheguesses.blogspot.ca/2009/03/quantum-harmonic-oscillator-ladder.html. [Online; accessed 18-August-2015].

[2] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics, chapter {Time Development of the Oscillator}. Pearson Higher Ed, 2014.