Hamiltonian

PHY2403H Quantum Field Theory. Lecture 14: Time evolution, Hamiltonian perturbation, ground state. Taught by Prof. Erich Poppitz

October 29, 2018 phy2403

[Click here for a PDF of this post with nicer formatting]

DISCLAIMER: Very rough notes from class, with some additional side notes.

These are notes for the UofT course PHY2403H, Quantum Field Theory, taught by Prof. Erich Poppitz, fall 2018.

Review

Given a field \( \phi(t_0, \Bx) \), satisfying the commutation relations
\begin{equation}\label{eqn:qftLecture14:20}
\antisymmetric{\pi(t_0, \Bx)}{\phi(t_0, \By)} = -i \delta(\Bx – \By)
\end{equation}
we introduced an interaction picture field given by
\begin{equation}\label{eqn:qftLecture14:40}
\phi_I(t, x) = e^{i H_0(t- t_0)} \phi(t_0, \Bx) e^{-iH_0(t – t_0)}
\end{equation}
related to the Heisenberg picture representation by
\begin{equation}\label{eqn:qftLecture14:60}
\phi_H(t, x)
= e^{i H(t- t_0)} \phi(t_0, \Bx) e^{-iH(t – t_0)}
= U^\dagger(t, t_0) \phi_I(t, \Bx) U(t, t_0),
\end{equation}
where \( U(t, t_0) \) is the time evolution operator.
\begin{equation}\label{eqn:qftLecture14:80}
U(t, t_0) =
e^{i H_0(t – t_0)}
e^{-i H(t – t_0)}
\end{equation}
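As a quick side-note check, this \( U \) does relate the two pictures as claimed, since
\begin{equation}
U^\dagger(t, t_0) \phi_I(t, \Bx) U(t, t_0)
= e^{i H(t - t_0)} e^{-i H_0(t - t_0)}
\lr{ e^{i H_0(t - t_0)} \phi(t_0, \Bx) e^{-i H_0(t - t_0)} }
e^{i H_0(t - t_0)} e^{-i H(t - t_0)}
= \phi_H(t, \Bx).
\end{equation}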
We argued that
\begin{equation}\label{eqn:qftLecture14:100}
i \PD{t}{} U(t, t_0) = H_{\text{I,int}}(t) U(t, t_0)
\end{equation}
We found the glorious expression
\begin{equation}\label{eqn:qftLecture14:120}
\boxed{
\begin{aligned}
U(t, t_0)
&= T \exp{\lr{ -i \int_{t_0}^t H_{\text{I,int}}(t') dt'}} \\
&=
\sum_{n = 0}^\infty \frac{(-i)^n}{n!} \int_{t_0}^t dt_1 dt_2 \cdots dt_n T\lr{ H_{\text{I,int}}(t_1) H_{\text{I,int}}(t_2) \cdots H_{\text{I,int}}(t_n) }
\end{aligned}
}
\end{equation}
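Written out explicitly, the first few terms of this time ordered exponential are
\begin{equation}
U(t, t_0)
= 1
- i \int_{t_0}^t dt_1 H_{\text{I,int}}(t_1)
+ \frac{(-i)^2}{2!} \int_{t_0}^t dt_1 \int_{t_0}^t dt_2 \, T\lr{ H_{\text{I,int}}(t_1) H_{\text{I,int}}(t_2) }
+ \cdots,
\end{equation}
where the time ordering symbol \( T \) places the operators with the latest times furthest to the left.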

However, what we are really after is
\begin{equation}\label{eqn:qftLecture14:140}
\bra{\Omega} T(\phi(x_1) \cdots \phi(x_n)) \ket{\Omega}
\end{equation}
Such a product has many labels and names, and we’ll describe it as “vacuum expectation values of time-ordered products of arbitrary #’s of local Heisenberg operators”.

Perturbation

Following section 4.2, [1].

\begin{equation}\label{eqn:qftLecture14:160}
\begin{aligned}
H &= \text{exact Hamiltonian} = H_0 + H_{\text{int}}, \\
H_0 &= \text{free Hamiltonian}.
\end{aligned}
\end{equation}
We know all about \( H_0 \), and assume that it has a lowest energy (ground) state \( \ket{0} \), the “vacuum” state of \( H_0 \).

\( H \) has eigenstates, in particular \( H \) is assumed to have a unique ground state \( \ket{\Omega} \) satisfying
\begin{equation}\label{eqn:qftLecture14:180}
H \ket{\Omega} = E_0 \ket{\Omega},
\end{equation}
and excited (non-vacuum) states \( \ket{n} \), with energies greater than \( E_0 \).
These states are assumed to be a complete basis
\begin{equation}\label{eqn:qftLecture14:200}
\mathbf{1} = \ket{\Omega}\bra{\Omega} + \sum_n \ket{n}\bra{n} + \int dn \ket{n}\bra{n}.
\end{equation}
The latter terms may be written with a superimposed sum-integral notation as
\begin{equation}\label{eqn:qftLecture14:440}
\sum_n + \int dn
=
{\int\kern-1em\sum}_n,
\end{equation}
so the identity operator takes the more compact form
\begin{equation}\label{eqn:qftLecture14:460}
\mathbf{1} = \ket{\Omega}\bra{\Omega} + {\int\kern-1em\sum}_n \ket{n}\bra{n}.
\end{equation}

For some time \( T \) we have
\begin{equation}\label{eqn:qftLecture14:220}
e^{-i H T} \ket{0} = e^{-i H T}
\lr{
\ket{\Omega}\braket{\Omega}{0} + {\int\kern-1em\sum}_n \ket{n}\braket{n}{0}
}.
\end{equation}

We now wish to argue that the \( {\int\kern-1em\sum}_n \) term can be ignored.

Argument 1:

This is something of a fast one, but one can consider a formal transformation \( T \rightarrow T(1 – i \epsilon) \), where \( \epsilon \rightarrow 0^+ \), and consider very large \( T \). This gives
\begin{equation}\label{eqn:qftLecture14:240}
\begin{aligned}
\lim_{T \rightarrow \infty, \epsilon \rightarrow 0^+}
e^{-i H T(1 – i \epsilon)} \ket{0}
&=
\lim_{T \rightarrow \infty, \epsilon \rightarrow 0^+}
e^{-i H T(1 – i \epsilon)}
\lr{
\ket{\Omega}\braket{\Omega}{0} + {\int\kern-1em\sum}_n \ket{n}\braket{n}{0}
} \\
&=
\lim_{T \rightarrow \infty, \epsilon \rightarrow 0^+}
e^{-i E_0 T – E_0 \epsilon T}
\ket{\Omega}\braket{\Omega}{0} + {\int\kern-1em\sum}_n e^{-i E_n T – \epsilon E_n T} \ket{n}\braket{n}{0} \\
&=
\lim_{T \rightarrow \infty, \epsilon \rightarrow 0^+}
e^{-i E_0 T – E_0 \epsilon T}
\lr{
\ket{\Omega}\braket{\Omega}{0} + {\int\kern-1em\sum}_n e^{-i (E_n -E_0) T – \epsilon T (E_n – E_0)} \ket{n}\braket{n}{0}
}
\end{aligned}
\end{equation}
The limits are evaluated by first taking \( T \) to infinity, then only after that take \( \epsilon \rightarrow 0^+ \). Doing this, the sum is dominated by the ground state contribution, since each excited state also has a \( e^{-\epsilon T(E_n – E_0)} \) suppression factor (in addition to the leading suppression factor).

Argument 2:

With the hand waving required for the argument above, it’s worth pointing other (less formal) ways to arrive at the same result. We can write
\begin{equation}\label{eqn:qftLecture14:260}
{\int\kern-1em\sum}_n \ket{n}\bra{n} \rightarrow
\sum_k \int \frac{d^3 p}{(2 \pi)^3} \ket{\Bp, k}\bra{\Bp, k}
\end{equation}
where \( k \) is some unknown quantity that we are summing over.
If we have
\begin{equation}\label{eqn:qftLecture14:280}
H \ket{\Bp, k} = E_{\Bp, k} \ket{\Bp, k},
\end{equation}
then
\begin{equation}\label{eqn:qftLecture14:300}
e^{-i H T} {\int\kern-1em\sum}_n \ket{n}\bra{n}
=
\sum_k \int \frac{d^3 p}{(2 \pi)^3} \ket{\Bp, k} e^{-i E_{\Bp, k} T} \bra{\Bp, k}.
\end{equation}
If we take matrix elements
\begin{equation}\label{eqn:qftLecture14:320}
\begin{aligned}
\bra{A}
e^{-i H T} {\int\kern-1em\sum}_n \ket{n}\bra{n} \ket{B}
&=
\sum_k \int \frac{d^3 p}{(2 \pi)^3} \braket{A}{\Bp, k} e^{-i E_{\Bp, k} T} \braket{\Bp, k}{B} \\
&=
\sum_k \int \frac{d^3 p}{(2 \pi)^3} e^{-i E_{\Bp, k} T} f(\Bp).
\end{aligned}
\end{equation}
If we assume that \( f(\Bp) \) is a well behaved smooth function, we have “infinite” frequency oscillation within the envelope provided by the amplitude of that function, as depicted in fig. 1.
The Riemann-Lebesgue lemma [2] describes such integrals, the result of which is that such an integral goes to zero. This is a different sort of hand waving argument, but either way, we can argue that only the ground state contributes to the sum \ref{eqn:qftLecture14:220} above.

fig. 1. High frequency oscillations within envelope of well behaved function.
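
As a numerical aside (my own check, not from class), here is a minimal Python sketch of this decay, assuming a Gaussian envelope \( f(p) \) and the relativistic dispersion \( E_p = \sqrt{p^2 + m^2} \) in one dimension; the parameter values are arbitrary choices for illustration:

import numpy as np

# Riemann-Lebesgue check: the integral of f(p) exp(-i E(p) T) dp tends to zero as T grows.
m = 1.0
p = np.linspace(-20.0, 20.0, 200001)
f = np.exp(-p**2)                    # a smooth, well behaved envelope
E = np.sqrt(p**2 + m**2)

for T in (1.0, 10.0, 100.0, 1000.0):
    I = np.sum(f * np.exp(-1j * E * T)) * (p[1] - p[0])   # simple Riemann sum
    print(T, abs(I))                 # the magnitude shrinks as T increases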

 

Ground state of the perturbed Hamiltonian.

With the excited states ignored, we are left with
\begin{equation}\label{eqn:qftLecture14:340}
e^{-i H T} \ket{0} = e^{-i E_0 T} \ket{\Omega}\braket{\Omega}{0}
\end{equation}
in the \( T \rightarrow \infty(1 – i \epsilon) \) limit. We can now write the ground state as

\begin{equation}\label{eqn:qftLecture14:360}
\begin{aligned}
\ket{\Omega}
&=
\evalbar{
\frac{ e^{i E_0 T – i H T } \ket{0} }{
\braket{\Omega}{0}
}
}{ T \rightarrow \infty(1 – i \epsilon) } \\
&=
\evalbar{
\frac{ e^{- i H T } \ket{0} }{
e^{-i E_0 T} \braket{\Omega}{0}
}
}{ T \rightarrow \infty(1 – i \epsilon) }.
\end{aligned}
\end{equation}
Shifting the very large \( T \rightarrow T + t_0 \) shouldn’t change things, so
\begin{equation}\label{eqn:qftLecture14:480}
\ket{\Omega}
=
\evalbar{
\frac{ e^{- i H (T + t_0) } \ket{0} }{
e^{-i E_0 (T + t_0) } \braket{\Omega}{0}
}
}{ T \rightarrow \infty(1 – i \epsilon) }.
\end{equation}

A bit of manipulation shows that the operator in the numerator has the structure of a time evolution operator.

Claim: (DIY):

\Cref{eqn:qftLecture14:80}, \ref{eqn:qftLecture14:120} may be generalized to
\begin{equation}\label{eqn:qftLecture14:400}
U(t, t') = e^{i H_0(t - t_0)} e^{-i H(t - t')} e^{-i H_0(t' - t_0)} =
T \exp{\lr{ -i \int_{t'}^t H_{\text{I,int}}(t'') dt''}}.
\end{equation}
Observe that we recover \ref{eqn:qftLecture14:120} when \( t' = t_0 \).  Using \ref{eqn:qftLecture14:400} we find
\begin{equation}\label{eqn:qftLecture14:520}
\begin{aligned}
U(t_0, -T) \ket{0}
&= e^{i H_0(t_0 – t_0)} e^{-i H(t_0 + T)} e^{-i H_0(-T – t_0)} \ket{0} \\
&= e^{-i H(t_0 + T)} e^{-i H_0(-T – t_0)} \ket{0} \\
&= e^{-i H(t_0 + T)} \ket{0},
\end{aligned}
\end{equation}
where we use the fact that \( e^{i H_0 \tau} \ket{0} = \lr{ 1 + i H_0 \tau + \cdots } \ket{0} = 1 \ket{0}, \) since \( H_0 \ket{0} = 0 \).

We are left with
\begin{equation}\label{eqn:qftLecture14:420}
\boxed{
\ket{\Omega}
= \frac{U(t_0, -T) \ket{0} }{e^{-i E_0(t_0 – (-T))} \braket{\Omega}{0}}.
}
\end{equation}

We are close to where we want to be. Wednesday we finish off, and then start scattering and Feynman diagrams.

References

[1] Michael E Peskin and Daniel V Schroeder. An introduction to Quantum Field Theory. Westview, 1995.

[2] Wikipedia contributors. Riemann-lebesgue lemma — Wikipedia, the free encyclopedia, 2018. URL https://en.wikipedia.org/w/index.php?title=Riemann%E2%80%93Lebesgue_lemma&oldid=856778941. [Online; accessed 29-October-2018].

Hamiltonian for the non-homogeneous Klein-Gordon equation

October 25, 2018 phy2403

[Click here for a PDF of this post with nicer formatting]

In class we derived the field for the non-homogeneous Klein-Gordon equation
\begin{equation}\label{eqn:nonhomoKGhamiltonian:20}
\begin{aligned}
\phi(x)
&= \int \frac{d^3 p}{(2\pi)^3} \inv{\sqrt{2 \omega_\Bp}}
\evalbar{
\lr{
e^{-i p \cdot x} \lr{ a_\Bp + \frac{ i \tilde{j}(p) }{\sqrt{2 \omega_\Bp}} }
+
e^{i p \cdot x} \lr{ a_\Bp^\dagger – \frac{ i \tilde{j}^\conj(p) }{\sqrt{2 \omega_\Bp}} }
}
}
{
p^0 = \omega_\Bp
} \\
&= \int \frac{d^3 p}{(2\pi)^3} \inv{\sqrt{2 \omega_\Bp}}
\lr{
e^{-i \omega_\Bp t + i \Bp \cdot \Bx} \lr{ a_\Bp + \frac{ i \tilde{j}(p) }{\sqrt{2 \omega_\Bp}} }
+
e^{i \omega_\Bp t – i \Bp \cdot \Bx} \lr{ a_\Bp^\dagger – \frac{ i \tilde{j}^\conj(p) }{\sqrt{2 \omega_\Bp}} }
}.
\end{aligned}
\end{equation}
This means that we have
\begin{equation}\label{eqn:nonhomoKGhamiltonian:40}
\begin{aligned}
\pi = \dot{\phi}
&= \int \frac{d^3 p}{(2\pi)^3} \frac{i \omega_\Bp}{\sqrt{2 \omega_\Bp}}
\lr{
– e^{-i \omega_\Bp t + i \Bp \cdot \Bx} \lr{ a_\Bp + \frac{ i \tilde{j}(p) }{\sqrt{2 \omega_\Bp}} }
+
e^{i \omega_\Bp t – i \Bp \cdot \Bx} \lr{ a_\Bp^\dagger – \frac{ i \tilde{j}^\conj(p) }{\sqrt{2 \omega_\Bp}} }
} \\
(\spacegrad \phi)_k
&= \int \frac{d^3 p}{(2\pi)^3} \frac{i p_k}{\sqrt{2 \omega_\Bp}}
\lr{
e^{-i \omega_\Bp t + i \Bp \cdot \Bx} \lr{ a_\Bp + \frac{ i \tilde{j}(p) }{\sqrt{2 \omega_\Bp}} }
-
e^{i \omega_\Bp t - i \Bp \cdot \Bx} \lr{ a_\Bp^\dagger - \frac{ i \tilde{j}^\conj(p) }{\sqrt{2 \omega_\Bp}} }
},
\end{aligned}
\end{equation}
and could plug these into the Hamiltonian
\begin{equation}\label{eqn:nonhomoKGhamiltonian:60}
H = \int d^3 x \lr{ \inv{2} \pi^2 + \inv{2} \lr{ \spacegrad \phi}^2 + \frac{m^2}{2} \phi^2 },
\end{equation}
to find \( H \) in terms of \( \tilde{j} \) and \( a_\Bp^\dagger, a_\Bp \). The result was mentioned in class, and it was left as an exercise to verify.

There’s an easy way and a dumb way to do this exercise. I did it the dumb way, and then after suffering through two long pages, where the equations were so long that I had to write on the paper sideways, I realized the way I should have done it.

The easy way is to observe that we’ve already done exactly this for the case \( \tilde{j} = 0 \), which had the answer
\begin{equation}\label{eqn:nonhomoKGhamiltonian:80}
H = \inv{2} \int \frac{d^3 p}{(2 \pi)^3} \omega_\Bp \lr{ a_\Bp^\dagger a_\Bp + a_\Bp a_\Bp^\dagger }.
\end{equation}
To handle this more general case, all we have to do is apply a transformation
\begin{equation}\label{eqn:nonhomoKGhamiltonian:100}
a_\Bp \rightarrow
a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}},
\end{equation}
to \ref{eqn:nonhomoKGhamiltonian:80}, which gives
\begin{equation}\label{eqn:nonhomoKGhamiltonian:120}
\begin{aligned}
H
&=
\inv{2} \int \frac{d^3 p}{(2 \pi)^3} \omega_\Bp \lr{\lr{ a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}} }^\dagger\lr{ a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}} } +\lr{ a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}} }\lr{ a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}} }^\dagger } \\
&=
\inv{2} \int \frac{d^3 p}{(2 \pi)^3} \omega_\Bp \lr{\lr{ a_\Bp^\dagger – \frac{i \tilde{j}^\conj(p)}{\sqrt{2 \omega_\Bp}} } \lr{ a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}} } +\lr{ a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}} }\lr{ a_\Bp^\dagger – \frac{i \tilde{j}^\conj(p)}{\sqrt{2 \omega_\Bp}} }
}.
\end{aligned}
\end{equation}

Like the \( \tilde{j} = 0 \) case, we can use normal ordering. This is easily seen by direct expansion:
\begin{equation}\label{eqn:nonhomoKGhamiltonian:140}
\begin{aligned}
\lr{ a_\Bp^\dagger - \frac{i \tilde{j}^\conj(p)}{\sqrt{2 \omega_\Bp}} } \lr{ a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}} }
&=
a_\Bp^\dagger a_\Bp
- \frac{i \tilde{j}^\conj(p) a_\Bp}{\sqrt{2 \omega_\Bp}}
+ \frac{i \tilde{j}(p) a_\Bp^\dagger}{\sqrt{2 \omega_\Bp}}
+ \frac{\Abs{\tilde{j}}^2}{2 \omega_\Bp} \\
\lr{ a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}} }\lr{ a_\Bp^\dagger - \frac{i \tilde{j}^\conj(p)}{\sqrt{2 \omega_\Bp}} }
&=
a_\Bp a_\Bp^\dagger
- \frac{i \tilde{j}^\conj(p) a_\Bp}{\sqrt{2 \omega_\Bp}}
+ \frac{i \tilde{j}(p) a_\Bp^\dagger}{\sqrt{2 \omega_\Bp}}
+ \frac{\Abs{\tilde{j}}^2}{2 \omega_\Bp}.
\end{aligned}
\end{equation}
Because \( \tilde{j} \) is just a complex valued function, it commutes with \( a_\Bp, a_\Bp^\dagger \), and these are equal up to the normal ordering, allowing us to write
\begin{equation}\label{eqn:nonhomoKGhamiltonian:160}
:H: =
\int \frac{d^3 p}{(2 \pi)^3} \omega_\Bp \lr{ a_\Bp^\dagger – \frac{i \tilde{j}^\conj(p)}{\sqrt{2 \omega_\Bp}}} \lr{ a_\Bp + \frac{i \tilde{j}(p)}{\sqrt{2 \omega_\Bp}} },
\end{equation}
which is the result mentioned in class.
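
Expanded out (a side note of my own), this normal ordered Hamiltonian is the free Hamiltonian plus a linear source coupling and a c-number constant,
\begin{equation}
:H: =
\int \frac{d^3 p}{(2 \pi)^3} \lr{
\omega_\Bp a_\Bp^\dagger a_\Bp
+ i \sqrt{\frac{\omega_\Bp}{2}} \lr{ \tilde{j}(p) a_\Bp^\dagger - \tilde{j}^\conj(p) a_\Bp }
+ \inv{2} \Abs{\tilde{j}(p)}^2
}.
\end{equation}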

PHY2403H Quantum Field Theory. Lecture 4: Scalar action, least action principle, Euler-Lagrange equations for a field, canonical quantization. Taught by Prof. Erich Poppitz

September 23, 2018 phy2403

[Click here for a PDF of this post with nicer formatting]

DISCLAIMER: Very rough notes from class. May have some additional side notes, but otherwise probably barely edited.

These are notes for the UofT course PHY2403H, Quantum Field Theory I, taught by Prof. Erich Poppitz fall 2018.

Principles (cont.)

  • Lorentz (Poincar\'e : Lorentz and spacetime translations)
  • locality
  • dimensional analysis
  • gauge invariance

These are the requirements for an action. We postulated an action that had the form
\begin{equation}\label{eqn:qftLecture4:20}
\int d^d x \partial_\mu \phi \partial^\mu \phi,
\end{equation}
called the “Kinetic term”, which mimics \( \int dt \dot{q}^2 \) that we’d see in quantum or classical mechanics. In principle there exists an infinite number of local Poincar\'e invariant terms that we can write. Examples:

  • \( \partial_\mu \phi \partial^\mu \phi \)
  • \( \partial_\mu \phi \partial_\nu \partial^\nu \partial^\mu \phi \)
  • \( \lr{\partial_\mu \phi \partial^\mu \phi}^2 \)
  • \( f(\phi) \partial_\mu \phi \partial^\mu \phi \)
  • \( f(\phi, \partial_\mu \phi \partial^\mu \phi) \)
  • \( V(\phi) \)

It turns out that nature (i.e. three spatial dimensions and one time dimension) is described by a finite number of terms. We will now utilize dimensional analysis to determine some of the allowed forms of the action for scalar field theories in \( d = 2, 3, 4, 5 \) dimensions. Even though the real world is only \( d = 4 \), some of the \( d < 4 \) theories are relevant in condensed matter studies, and \( d = 5 \) is just for fun (but also applies to string theories.)

With \( [x] \sim \inv{M} \) in natural units, we must define \([\phi]\) such that the kinetic term is dimensionless in d spacetime dimensions

\begin{equation}\label{eqn:qftLecture4:40}
\begin{aligned}
[d^d x] &\sim \inv{M^d} \\
[\partial_\mu] &\sim M
\end{aligned}
\end{equation}

so it must be that
\begin{equation}\label{eqn:qftLecture4:60}
[\phi] = M^{(d-2)/2}
\end{equation}

It will be easier to characterize the dimensionality of any given term by the power of the mass units, that is

\begin{equation}\label{eqn:qftLecture4:80}
\begin{aligned}
[\text{mass}] &= 1 \\
[d^d x] &= -d \\
[\partial_\mu] &= 1 \\
[\phi] &= (d-2)/2 \\
[S] &= 0.
\end{aligned}
\end{equation}
Since the action is
\begin{equation}\label{eqn:qftLecture4:100}
S = \int d^d x \lr{ \LL(\phi, \partial_\mu \phi) },
\end{equation}
and because the action has dimensions of \( \Hbar \), which is dimensionless in natural units, the Lagrangian density must have mass dimension \( d \). We will abuse language in QFT and call the Lagrangian density the Lagrangian.
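As a quick check, with these assignments the kinetic term is indeed dimensionless
\begin{equation}
[d^d x \, \partial_\mu \phi \partial^\mu \phi] = -d + 1 + \frac{d-2}{2} + 1 + \frac{d-2}{2} = 0.
\end{equation}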

\( d = 2 \)

Because \( [\partial_\mu \phi \partial^\mu \phi ] = 2 \), the scalar field must be dimension zero, or in symbols
\begin{equation}\label{eqn:qftLecture4:120}
[\phi] = 0.
\end{equation}
This means that introducing any function \( f(\phi) = 1 + a \phi + b\phi^2 + c \phi^3 + \cdots \) is also dimensionless, and
\begin{equation}\label{eqn:qftLecture4:140}
[f(\phi) \partial_\mu \phi \partial^\mu \phi ] = 2,
\end{equation}
for any \( f(\phi) \). Another implication is that a potential term, with \( [V(\phi)] = 0 \), needs a coupling constant of mass dimension 2. Letting \( \mu \) have mass dimension one, our Lagrangian must have the form
\begin{equation}\label{eqn:qftLecture4:160}
f(\phi) \partial_\mu \phi \partial^\mu \phi + \mu^2 V(\phi).
\end{equation}
An infinite number of coupling constants of positive mass dimensions for \( V(\phi) \) are also allowed. If we have higher order derivative terms, then we need to compensate for the negative mass dimensions. Example (still for \( d = 2 \)).
\begin{equation}\label{eqn:qftLecture4:180}
\LL =
f(\phi) \partial_\mu \phi \partial^\mu \phi + \mu^2 V(\phi) + \inv{{\mu’}^2}\partial_\mu \phi \partial_\nu \partial^\nu \partial^\mu \phi + \lr{ \partial_\mu \phi \partial^\mu \phi }^2 \inv{\tilde{\mu}^2}.
\end{equation}
The last two terms, called \underline{couplings} (i.e. any non-kinetic term), are examples of terms with negative mass dimension. There is an infinite number of those in any theory in any dimension.

Definitions

  • Couplings that are dimensionless are called (classically) marginal.
  • Couplings that have positive mass dimension are called (classically) relevant.
  • Couplings that have negative mass dimension are called (classically) irrelevant.

In QFT we are generally interested in the couplings that are measurable at long distances for some given energy. Classically irrelevant couplings are generally not interesting in \( d > 2 \), so we are very lucky that we don’t live in two dimensional spacetime. This means that we can get away with a finite number of classically marginal and relevant couplings in 3 or 4 dimensions. This was mentioned in Wilczek’s article referenced in the class forum [1]\footnote{There’s currently more in that article that I don’t understand than I do, so it is hard to find it terribly illuminating.}

Long distance physics in any dimension is described by the marginal and relevant couplings. The irrelevant couplings die off at low energy. In two dimensions, a priori, an infinite number of marginal and relevant couplings are possible. 2D is a bad place to live!

\( d = 3 \)

Now we have
\begin{equation}\label{eqn:qftLecture4:200}
[\phi] = \inv{2}
\end{equation}
so that
\begin{equation}\label{eqn:qftLecture4:220}
[\partial_\mu \phi \partial^\mu \phi] = 3.
\end{equation}

A 3D Lagrangian could have local terms such as
\begin{equation}\label{eqn:qftLecture4:240}
\LL = \partial_\mu \phi \partial^\mu \phi + m^2 \phi^2 + \mu^{3/2} \phi^3 + \mu' \phi^4
+ \lr{\mu''}^{1/2} \phi^5
+ \lambda \phi^6,
\end{equation}
where \( m, \mu, \mu', \mu'' \) all have positive mass dimensions, and \( \lambda \) is dimensionless. i.e. \( m, \mu, \mu', \mu'' \) are relevant, and \( \lambda \) is marginal. We stop at the sixth power, since any higher power would require a coupling of negative mass dimension (an irrelevant coupling).

\( d = 4 \)

Now we have
\begin{equation}\label{eqn:qftLecture4:260}
[\phi] = 1
\end{equation}
so that
\begin{equation}\label{eqn:qftLecture4:280}
[\partial_\mu \phi \partial^\mu \phi] = 4.
\end{equation}

In this number of dimensions \( \phi^k \partial_\mu \phi \partial^\mu \phi \) is an irrelevant coupling for any \( k \ge 1 \).
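As a quick check, such a term has mass dimension
\begin{equation}
[\phi^k \partial_\mu \phi \partial^\mu \phi] = k + 2 + 2 = k + 4,
\end{equation}
so its coupling constant must have mass dimension \( -k \), negative for any \( k \ge 1 \).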

A 4D Lagrangian could have local terms such as
\begin{equation}\label{eqn:qftLecture4:300}
\LL = \partial_\mu \phi \partial^\mu \phi + m^2 \phi^2 + \mu \phi^3 + \lambda \phi^4.
\end{equation}
where \( m, \mu \) have mass dimensions, and \( \lambda \) is dimensionless. i.e. \( m, \mu \) are relevant, and \( \lambda \) is marginal.

\( d = 5 \)

Now we have
\begin{equation}\label{eqn:qftLecture4:320}
[\phi] = \frac{3}{2},
\end{equation}
so that
\begin{equation}\label{eqn:qftLecture4:340}
[\partial_\mu \phi \partial^\mu \phi] = 5.
\end{equation}

A 5D Lagrangian could have local terms such as
\begin{equation}\label{eqn:qftLecture4:360}
\LL = \partial_\mu \phi \partial^\mu \phi + m^2 \phi^2 + \sqrt{\mu} \phi^3 + \inv{\mu’} \phi^4.
\end{equation}
where \( m, \mu, \mu’ \) all have mass dimensions. In 5D there are no marginal couplings. Dimension 4 is the last dimension where marginal couplings exist. In condensed matter physics 4D is called the “upper critical dimension”.

From the point of view of particle physics, all the terms in the Lagrangian must be the ones that are relevant at long distances.

Least action principle (classical field theory).

Now we want to study 4D scalar theories. We have some action
\begin{equation}\label{eqn:qftLecture4:380}
S[\phi] = \int d^4 x \LL(\phi, \partial_\mu \phi).
\end{equation}

Let’s keep an example such as the following in mind
\begin{equation}\label{eqn:qftLecture4:400}
\LL = \underbrace{\inv{2} \partial_\mu \phi \partial^\mu \phi}_{\text{Kinetic term}} \underbrace{- \inv{2} m^2 \phi^2 - \frac{\lambda}{4} \phi^4}_{\text{all relevant and marginal couplings}}.
\end{equation}
The even powers can be justified by assuming there is some symmetry that kills the odd powered terms.

fig. 1. Cylindrical spacetime boundary.

We will be integrating over a space time region such as that depicted in fig. 1, where a cylindrical spatial cross section is depicted that we allow to tend towards infinity. We demand that the field is fixed on the infinite spatial boundaries. The easiest way to impose this is to demand that the field dies off at spatial infinity, that is
\begin{equation}\label{eqn:qftLecture4:420}
\lim_{\Abs{\Bx} \rightarrow \infty} \phi(\Bx) \rightarrow 0.
\end{equation}
The classical field configuration \( \phi(\Bx, t) \) is the one that obeys this boundary condition and extremizes \( S[\phi] \).

Extremizing the action means that we seek \( \phi(\Bx, t) \)
\begin{equation}\label{eqn:qftLecture4:440}
\delta S[\phi] = 0 = S[\phi + \delta \phi] – S[\phi].
\end{equation}

How do we compute the variation?
\begin{equation}\label{eqn:qftLecture4:460}
\begin{aligned}
\delta S
&= \int d^d x \lr{ \LL(\phi + \delta \phi, \partial_\mu \phi + \partial_\mu \delta \phi) – \LL(\phi, \partial_\mu \phi) } \\
&= \int d^d x \lr{ \PD{\phi}{\LL} \delta \phi + \PD{(\partial_\mu \phi)}{\LL} (\partial_\mu \delta \phi) } \\
&= \int d^d x \lr{ \PD{\phi}{\LL} \delta \phi
+ \partial_\mu \lr{ \PD{(\partial_\mu \phi)}{\LL} \delta \phi}
- \lr{ \partial_\mu \PD{(\partial_\mu \phi)}{\LL} } \delta \phi
} \\
&=
\int d^d x
\delta \phi
\lr{ \PD{\phi}{\LL}
- \partial_\mu \PD{(\partial_\mu \phi)}{\LL} }
+ \int d^3 \sigma_\mu \lr{ \PD{(\partial_\mu \phi)}{\LL} \delta \phi }
\end{aligned}
\end{equation}

If we are explicit about the boundary term, we write it as
\begin{equation}\label{eqn:qftLecture4:480}
\int dt d^3 \Bx \partial_t \lr{ \PD{(\partial_t \phi)}{\LL} \delta \phi }
– \spacegrad \cdot \lr{ \PD{(\spacegrad \phi)}{\LL} \delta \phi }
=
\int d^3 \Bx \evalrange{ \PD{(\partial_t \phi)}{\LL} \delta \phi }{t = -T}{t = T}
– \int dt d^2 \BS \cdot \lr{ \PD{(\spacegrad \phi)}{\LL} \delta \phi }.
\end{equation}
but \( \delta \phi = 0 \) at \( t = \pm T \) and also at the spatial boundaries of the integration region.

This leaves
\begin{equation}\label{eqn:qftLecture4:500}
\delta S[\phi] = \int d^d x \delta \phi
\lr{ \PD{\phi}{\LL} - \partial_\mu \PD{(\partial_\mu \phi)}{\LL} } = 0 \qquad \forall \delta \phi.
\end{equation}
That is

\begin{equation}\label{eqn:qftLecture4:540}
\boxed{
\PD{\phi}{\LL} - \partial_\mu \PD{(\partial_\mu \phi)}{\LL} = 0.
}
\end{equation}

These are the Euler-Lagrange equations for a single scalar field.

Returning to our sample scalar Lagrangian
\begin{equation}\label{eqn:qftLecture4:560}
\LL = \inv{2} \partial_\mu \phi \partial^\mu \phi – \inv{2} m^2 \phi^2 – \frac{\lambda}{4} \phi^4.
\end{equation}
This example is related to the Ising model which has a \( \phi \rightarrow -\phi \) symmetry. Applying the Euler-Lagrange equations, we have
\begin{equation}\label{eqn:qftLecture4:580}
\PD{\phi}{\LL} = -m^2 \phi – \lambda \phi^3,
\end{equation}
and
\begin{equation}\label{eqn:qftLecture4:600}
\begin{aligned}
\PD{(\partial_\mu \phi)}{\LL}
&=
\PD{(\partial_\mu \phi)}{} \lr{
\inv{2} \partial_\nu \phi \partial^\nu \phi } \\
&=
\inv{2} \partial^\nu \phi
\PD{(\partial_\mu \phi)}{}
\partial_\nu \phi
+
\inv{2} \partial_\nu \phi
\PD{(\partial_\mu \phi)}{}
\partial_\alpha \phi g^{\nu\alpha} \\
&=
\inv{2} \partial^\mu \phi
+
\inv{2} \partial_\nu \phi g^{\nu\mu} \\
&=
\partial^\mu \phi
\end{aligned}
\end{equation}
so we have
\begin{equation}\label{eqn:qftLecture4:620}
\begin{aligned}
0
&=
\PD{\phi}{\LL} -\partial_\mu
\PD{(\partial_\mu \phi)}{\LL} \\
&=
-m^2 \phi – \lambda \phi^3 – \partial_\mu \partial^\mu \phi.
\end{aligned}
\end{equation}

For \( \lambda = 0 \), the free field theory limit, this is just
\begin{equation}\label{eqn:qftLecture4:640}
\partial_\mu \partial^\mu \phi + m^2 \phi = 0.
\end{equation}
Written out from the observer frame, this is
\begin{equation}\label{eqn:qftLecture4:660}
(\partial_t)^2 \phi – \spacegrad^2 \phi + m^2 \phi = 0.
\end{equation}

With a non-zero mass term
\begin{equation}\label{eqn:qftLecture4:680}
\lr{ \partial_t^2 – \spacegrad^2 + m^2 } \phi = 0,
\end{equation}
is called the Klein-Gordon equation.
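
As a quick check, a plane wave \( \phi \propto e^{-i \omega_\Bp t + i \Bp \cdot \Bx} \) satisfies this equation provided
\begin{equation}
-\omega_\Bp^2 + \Bp^2 + m^2 = 0,
\end{equation}
that is, \( \omega_\Bp = \sqrt{ \Bp^2 + m^2 } \), the relativistic dispersion relation used when the field is expanded in creation and annihilation operators.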

If we also had \( m = 0 \) we’d have
\begin{equation}\label{eqn:qftLecture4:700}
\lr{ \partial_t^2 – \spacegrad^2 } \phi = 0,
\end{equation}
which is the wave equation (for a massless free field). This is also called the D’Alembert equation, which is familiar from electromagnetism where we have
\begin{equation}\label{eqn:qftLecture4:720}
\begin{aligned}
\lr{ \partial_t^2 – \spacegrad^2 } \BE &= 0 \\
\lr{ \partial_t^2 – \spacegrad^2 } \BB &= 0,
\end{aligned}
\end{equation}
in a source free region.

Canonical quantization.

\begin{equation}\label{eqn:qftLecture4:740}
\LL = \inv{2} \dot{q}^2 - \frac{\omega^2}{2} q^2
\end{equation}
This has the equation of motion \(\ddot{q} = -\omega^2 q\).

Let
\begin{equation}\label{eqn:qftLecture4:760}
p = \PD{\dot{q}}{\LL} = \dot{q}
\end{equation}
\begin{equation}\label{eqn:qftLecture4:780}
H(p,q) = \evalbar{p \dot{q} – \LL}{\dot{q}(p, q)}
= p^2 - \inv{2} p^2 + \frac{\omega^2}{2} q^2 = \frac{p^2}{2} + \frac{\omega^2}{2} q^2
\end{equation}

In QM we quantize by mapping Poisson brackets to commutators.
\begin{equation}\label{eqn:qftLecture4:800}
\antisymmetric{\hatp}{\hat{q}} = -i
\end{equation}
One way to represent this is to take the states to be wave functions \( \Psi(q) \), with \( \hat{q} \) acting by multiplication by \( q \)
\begin{equation}\label{eqn:qftLecture4:820}
\hat{q} \Psi = q \Psi(q)
\end{equation}
With
\begin{equation}\label{eqn:qftLecture4:840}
\hatp = -i \PD{q}{},
\end{equation}
so
\begin{equation}\label{eqn:qftLecture4:860}
\antisymmetric{ -i \PD{q}{} } { q} = -i
\end{equation}

Let’s introduce an explicit space time split. We’ll write
\begin{equation}\label{eqn:qftLecture4:880}
L = \int d^3 x \lr{
\inv{2} (\partial_0 \phi(\Bx, t))^2 - \inv{2} \lr{ \spacegrad \phi(\Bx, t) }^2 - \frac{m^2}{2} \phi^2
},
\end{equation}
so that the action is
\begin{equation}\label{eqn:qftLecture4:900}
S = \int dt L.
\end{equation}
The dynamical variables are \( \phi(\Bx) \). We define
\begin{equation}\label{eqn:qftLecture4:920}
\begin{aligned}
\pi(\Bx, t) = \frac{\delta L}{\delta (\partial_0 \phi(\Bx, t))}
&=
\partial_0 \phi(\Bx, t) \\
&=
\dot{\phi}(\Bx, t),
\end{aligned}
\end{equation}
called the canonical momentum, or the momentum conjugate to \( \phi(\Bx, t) \). (Why a functional derivative \( \delta \)? Because there is an implicit Dirac delta function that eliminates the spatial integral.)

\begin{equation}\label{eqn:qftLecture4:940}
\begin{aligned}
H
&= \int d^3 x \evalbar{\lr{ \pi(\bar{\Bx}, t) \dot{\phi}(\bar{\Bx}, t) - L }}{\dot{\phi}(\bar{\Bx}, t) = \pi(\bar{\Bx}, t) } \\
&= \int d^3 x \lr{ (\pi(\Bx, t))^2 - \inv{2} (\pi(\Bx, t))^2 + \inv{2} (\spacegrad \phi)^2 + \frac{m^2}{2} \phi^2 },
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:qftLecture4:960}
H
= \int d^3 x \lr{ \inv{2} (\pi(\Bx, t))^2 + \inv{2} (\spacegrad \phi(\Bx, t))^2 + \frac{m^2}{2} (\phi(\Bx, t))^2 }
\end{equation}

In analogy to the momentum, position commutator in QM
\begin{equation}\label{eqn:qftLecture4:1000}
\antisymmetric{\hat{p}_i}{\hat{q}_j} = -i \delta_{ij},
\end{equation}
we “quantize” the scalar field theory by promoting \( \pi, \phi \) to operators and insisting that they also obey a commutator relationship
\begin{equation}\label{eqn:qftLecture4:980}
\antisymmetric{\pi(\Bx, t)}{\phi(\By, t)} = -i \delta^3(\Bx – \By).
\end{equation}

References

[1] Frank Wilczek. Fundamental constants. arXiv preprint arXiv:0708.4361, 2007. URL https://arxiv.org/abs/0708.4361.

Hamiltonian for a scalar field

January 3, 2016 phy2403

[Click here for a PDF of this post with nicer formatting]

In [1] it is left as an exercise to expand the scalar field Hamiltonian in terms of the raising and lowering operators. Let’s do that.

The field operator expanded in terms of the raising and lowering operators is

\begin{equation}\label{eqn:scalarFieldHamiltonian:20}
\phi(x) =
\int \frac{ d^3 k}{ (2 \pi)^{3/2} \sqrt{ 2 \omega_k } } \lr{
a_k e^{-i k \cdot x}
+ a_k^\dagger e^{i k \cdot x}
}.
\end{equation}

Note that \( x \) and \( k \) here are both four-vectors, so this field depends on a spacetime point, but the integration is over the spatial components of the momentum only.

The Hamiltonian in terms of the fields was
\begin{equation}\label{eqn:scalarFieldHamiltonian:40}
H = \inv{2} \int d^3 x \lr{ \Pi^2 + \lr{ \spacegrad \phi }^2 + \mu^2 \phi^2 }.
\end{equation}

The field derivatives are

\begin{equation}\label{eqn:scalarFieldHamiltonian:60}
\Pi
= \partial_0 \phi
= \partial_0
\int \frac{ d^3 k}{ (2 \pi)^{3/2} \sqrt{ 2 \omega_k } } \lr{
a_k e^{-i \omega_k t + i \Bk \cdot \Bx}
+a_k^\dagger e^{i \omega_k t – i \Bk \cdot \Bx}
}
=
i
\int \frac{ d^3 k \, \omega_k}{ (2 \pi)^{3/2} \sqrt{ 2 \omega_k } } \lr{
-a_k e^{-i \omega_k t + i \Bk \cdot \Bx}
+a_k^\dagger e^{i \omega_k t – i \Bk \cdot \Bx}
},
\end{equation}

and

\begin{equation}\label{eqn:scalarFieldHamiltonian:80}
\partial_n \phi
= \partial_n
\int \frac{ d^3 k}{ (2 \pi)^{3/2} \sqrt{ 2 \omega_k } } \lr{
a_k e^{-i \omega_k t + i \Bk \cdot \Bx}
+a_k^\dagger e^{i \omega_k t – i \Bk \cdot \Bx}
}
=
i
\int \frac{ d^3 k k^n}{ (2 \pi)^{3/2} \sqrt{ 2 \omega_k } } \lr{
a_k e^{-i \omega_k t + i \Bk \cdot \Bx}
-a_k^\dagger e^{i \omega_k t – i \Bk \cdot \Bx}
}.
\end{equation}

Introducing a second set of momentum variables with \( j = \Abs{\Bj} \), the momentum portion of the Hamiltonian is

\begin{equation}\label{eqn:scalarFieldHamiltonian:100}
\begin{aligned}
\inv{2} \int d^3 x \Pi^2
&=
-\inv{2}
\inv{(2 \pi)^{3}}
\int d^3 x
\int
d^3 j
d^3 k
\inv{ \sqrt{ 4 \omega_j \omega_k } }
\omega_j
\omega_k
\lr{
-a_j e^{-i \omega_j t + i \Bj \cdot \Bx}
+a_j^\dagger e^{i \omega_j t – i \Bj \cdot \Bx}
}
\lr{
-a_k e^{-i \omega_k t + i \Bk \cdot \Bx}
+a_k^\dagger e^{i \omega_k t – i \Bk \cdot \Bx}
} \\
&=
-\inv{4}
\inv{(2 \pi)^{3}}
\int d^3 x
\int
d^3 j
d^3 k
\sqrt{
\omega_j
\omega_k}
\lr{
a_j^\dagger a_k^\dagger e^{i (\omega_k + \omega_j) t – i (\Bk + \Bj) \cdot \Bx}
+ a_j a_k e^{-i (\omega_j + \omega_k) t + i (\Bj + \Bk) \cdot \Bx}
– a_j^\dagger a_k e^{-i (\omega_k -\omega_j) t – i (\Bj – \Bk) \cdot \Bx}
– a_j a_k^\dagger e^{-i (\omega_j – \omega_k) t – i (\Bk – \Bj) \cdot \Bx}
} \\
&=
-\inv{4}
\int
d^3 j
d^3 k
\sqrt{
\omega_j
\omega_k}
\lr{
a_j^\dagger a_k^\dagger e^{i (\omega_k + \omega_j) t } \delta^3(\Bk + \Bj)
+ a_j a_k e^{-i (\omega_j + \omega_k) t } \delta^3(-\Bj – \Bk)
– a_j^\dagger a_k e^{-i (\omega_k -\omega_j) t } \delta^3(\Bj – \Bk)
– a_j a_k^\dagger e^{-i (\omega_j – \omega_k) t } \delta^3(\Bk – \Bj)
} \\
&=
-\inv{4}
\int
d^3 k
\omega_k
\lr{
a_{-k}^\dagger a_k^\dagger e^{2 i \omega_k t }
+ a_{-k} a_k e^{- 2 i \omega_k t }
– a_k^\dagger a_k
– a_k a_k^\dagger
}.
\end{aligned}
\end{equation}

For the gradient portion of the Hamiltonian we have

\begin{equation}\label{eqn:scalarFieldHamiltonian:120}
\begin{aligned}
\inv{2} \int d^3 x \lr{ \spacegrad \phi }^2
&=
-\inv{2}
\inv{(2 \pi)^{3}}
\int d^3 x
\int
d^3 j
d^3 k
\inv{ \sqrt{ 4 \omega_j \omega_k } }
\lr{ \sum_{n=1}^3 j^n k^n }
\lr{
a_j e^{-i \omega_j t + i \Bj \cdot \Bx}
-a_j^\dagger e^{i \omega_j t – i \Bj \cdot \Bx}
}
\lr{
a_k e^{-i \omega_k t + i \Bk \cdot \Bx}
-a_k^\dagger e^{i \omega_k t – i \Bk \cdot \Bx}
} \\
&=
-\inv{4}
\int
d^3 j
d^3 k
\frac{\Bj \cdot \Bk}{ \sqrt{ \omega_j \omega_k } }
\lr{
a_j^\dagger a_k^\dagger e^{i (\omega_k + \omega_j) t } \delta^3(\Bk + \Bj)
+ a_j a_k e^{-i (\omega_j + \omega_k) t } \delta^3(-\Bj – \Bk)
– a_j^\dagger a_k e^{-i (\omega_k -\omega_j) t } \delta^3(\Bj – \Bk)
– a_j a_k^\dagger e^{-i (\omega_j – \omega_k) t } \delta^3(\Bk – \Bj)
} \\
&=
-\inv{4}
\int
d^3 k
\frac{\Bk^2}{ \omega_k }
\lr{
– a_{-k}^\dagger a_k^\dagger e^{2 i \omega_k t }
– a_{-k} a_k e^{- 2 i \omega_k t }
– a_k^\dagger a_k
– a_k a_k^\dagger
}.
\end{aligned}
\end{equation}

Finally, for the mass term, we have

\begin{equation}\label{eqn:scalarFieldHamiltonian:140}
\begin{aligned}
\inv{2} \int d^3 x \mu^2 \phi^2
&=
\frac{\mu^2}{2}
\inv{(2 \pi)^{3}}
\int d^3 x
\int
d^3 j
d^3 k
\inv{ \sqrt{ 4 \omega_j \omega_k } }
\lr{
a_j e^{-i \omega_j t + i \Bj \cdot \Bx}
+a_j^\dagger e^{i \omega_j t – i \Bj \cdot \Bx}
}
\lr{
a_k e^{-i \omega_k t + i \Bk \cdot \Bx}
+a_k^\dagger e^{i \omega_k t – i \Bk \cdot \Bx}
} \\
&=
\frac{\mu^2}{2}
\inv{(2 \pi)^{3}}
\int d^3 x
\int
d^3 j
d^3 k
\inv{ \sqrt{ 4 \omega_j \omega_k } }
\lr{
a_j a_k e^{-i (\omega_k + \omega_j) t + i (\Bk + \Bj) \cdot \Bx}
+a_j^\dagger a_k^\dagger e^{i (\omega_j + \omega_k) t – i (\Bk + \Bj) \cdot \Bx}
+a_j a_k^\dagger e^{i (\omega_k – \omega_j) t – i (\Bk – \Bj) \cdot \Bx}
+a_j^\dagger a_k e^{-i (\omega_k - \omega_j) t - i (\Bj - \Bk) \cdot \Bx}
} \\
&=
\frac{\mu^2}{2}
\int
d^3 j
d^3 k
\inv{ \sqrt{ 4 \omega_j \omega_k } }
\lr{
a_j a_k e^{-i (\omega_k + \omega_j) t } \delta^3(- \Bk – \Bj)
+a_j^\dagger a_k^\dagger e^{i (\omega_j + \omega_k) t } \delta^3( \Bk + \Bj)
+a_j a_k^\dagger e^{i (\omega_k – \omega_j) t } \delta^3 (\Bk – \Bj)
+a_j^\dagger a_k e^{-i (\omega_k - \omega_j) t } \delta^3 (\Bj - \Bk)
} \\
&=
\frac{\mu^2}{4}
\int
d^3 k
\inv{ \omega_k }
\lr{
a_{-k} a_k e^{- 2 i \omega_k t }
+a_{-k}^\dagger a_k^\dagger e^{2 i \omega_k t }
+a_k a_k^\dagger
+a_k^\dagger a_k
}.
\end{aligned}
\end{equation}

Now all the pieces can be put back together again

\begin{equation}\label{eqn:scalarFieldHamiltonian:160}
\begin{aligned}
H
&=
\inv{4}
\int d^3 k
\inv{\omega_k}
\lr{
-\omega_k^2
\lr{
a_{-k}^\dagger a_k^\dagger e^{2 i \omega_k t }
+ a_{-k} a_k e^{- 2 i \omega_k t }
– a_k^\dagger a_k
– a_k a_k^\dagger
}
+
\Bk^2
\lr{
a_{-k}^\dagger a_k^\dagger e^{2 i \omega_k t }
+ a_{-k} a_k e^{- 2 i \omega_k t }
+ a_k^\dagger a_k
+ a_k a_k^\dagger
}
+
\mu^2
\lr{
a_{-k} a_k e^{- 2 i \omega_k t }
+a_{-k}^\dagger a_k^\dagger e^{2 i \omega_k t }
+a_k a_k^\dagger
+a_k^\dagger a_k
}
} \\
&=
\inv{4}
\int d^3 k
\inv{\omega_k}
\lr{
a_{-k}^\dagger a_k^\dagger e^{2 i \omega_k t }
\lr{
-\omega_k^2
+ \Bk^2
+
\mu^2
}
+ a_{-k} a_k e^{- 2 i \omega_k t }
\lr{
-\omega_k^2
+ \Bk^2
+
\mu^2
}
+ a_k a_k^\dagger
\lr{
\omega_k^2
+ \Bk^2
+
\mu^2
}
+ a_k^\dagger a_k
\lr{
\omega_k^2
+ \Bk^2
+
\mu^2
}
}.
\end{aligned}
\end{equation}

With \( \omega_k^2 = \Bk^2 + \mu^2 \), the time dependent terms are killed leaving
\begin{equation}\label{eqn:scalarFieldHamiltonian:180}
H
=
\inv{2}
\int d^3 k
\omega_k
\lr{
a_k a_k^\dagger
+ a_k^\dagger a_k
}.
\end{equation}
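
As a side note, assuming the commutator normalization \( \antisymmetric{a_k}{a_{k'}^\dagger} = \delta^3(\Bk - \Bk') \) that goes with this field normalization, the symmetric product can be rewritten using \( a_k a_k^\dagger = a_k^\dagger a_k + \delta^3(0) \), giving
\begin{equation}
H
=
\int d^3 k \, \omega_k
\lr{
a_k^\dagger a_k
+ \inv{2} \delta^3(0)
},
\end{equation}
so normal ordering amounts to dropping the (divergent) zero point contribution.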

References

[1] Michael Luke. PHY2403F Lecture Notes: Quantum Field Theory, 2015. URL lecturenotes.pdf. [Online; accessed 02-Jan-2016].

Time reversal behavior of solutions to crystal spin Hamiltonian

December 15, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Q: [1] pr 4.12

Solve the spin 1 Hamiltonian
\begin{equation}\label{eqn:crystalSpinHamiltonianTimeReversal:20}
H = A S_z^2 + B(S_x^2 – S_y^2).
\end{equation}

Is this Hamiltonian invariant under time reversal?

How do the eigenkets change under time reversal?

A:

In spinMatrices.nb the matrix representation of the Hamiltonian is found to be
\begin{equation}\label{eqn:crystalSpinHamiltonianTimeReversal:40}
H =
\Hbar^2
\begin{bmatrix}
A+\frac{B}{2} & 0 & \frac{B}{2} \\
-\frac{i B}{\sqrt{2}} & B & -\frac{i B}{\sqrt{2}} \\
\frac{B}{2} & 0 & A+\frac{B}{2} \\
\end{bmatrix}.
\end{equation}

The eigenvalues are
\begin{equation}\label{eqn:crystalSpinHamiltonianTimeReversal:60}
\setlr{ \Hbar^2 A, \Hbar^2 B, \Hbar^2(A + B)},
\end{equation}

and the respective eigenkets (unnormalized) are

\begin{equation}\label{eqn:crystalSpinHamiltonianTimeReversal:80}
\setlr{
\begin{bmatrix}
-1 \\
0 \\
1
\end{bmatrix},
\begin{bmatrix}
0 \\
1 \\
0
\end{bmatrix},
\begin{bmatrix}
1 \\
-\frac{i \sqrt{2} B}{A} \\
1 \\
\end{bmatrix}
}.
\end{equation}

Under time reversal, the Hamiltonian is

\begin{equation}\label{eqn:crystalSpinHamiltonianTimeReversal:100}
H \rightarrow A (-S_z)^2 + B ( (-S_x)^2 – (-S_y)^2 ) = H,
\end{equation}

so we expect the eigenkets for this Hamiltonian to vary by at most a phase factor. To check this, first recall that the time reversal action on a spin one state is

\begin{equation}\label{eqn:crystalSpinHamiltonianTimeReversal:120}
\Theta \ket{1, m} = (-1)^m \ket{1, -m},
\end{equation}

or

\begin{equation}\label{eqn:crystalSpinHamiltonianTimeReversal:140}
\begin{aligned}
\Theta \ket{1} &= -\ket{-1} \\
\Theta \ket{0} &= \ket{0} \\
\Theta \ket{-1} &= -\ket{1}.
\end{aligned}
\end{equation}

Let’s write the eigenkets respectively as

\begin{equation}\label{eqn:crystalSpinHamiltonianTimeReversal:160}
\begin{aligned}
\ket{A} &= -\ket{1} + \ket{-1} \\
\ket{B} &= \ket{0} \\
\ket{A+B} &= \ket{1} + \ket{-1} – \frac{i \sqrt{2} B}{A} \ket{0}.
\end{aligned}
\end{equation}

Noting that the time reversal operator maps complex numbers onto their conjugates, the time reversed eigenkets are

\begin{equation}\label{eqn:crystalSpinHamiltonianTimeReversal:180}
\begin{aligned}
\ket{A} &\rightarrow \ket{-1} – \ket{-1} = -\ket{A} \\
\ket{B} &\rightarrow \ket{0} = \ket{B} \\
\ket{A+B} &\rightarrow -\ket{1} – \ket{-1} + \frac{i \sqrt{2} B}{A} \ket{0} = -\ket{A+B}.
\end{aligned}
\end{equation}

Up to a sign, the time reversed states match the unreversed states.

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Energy estimate for an absolute value potential

December 4, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Here’s a simple problem, a lot like the problem set 6 variational calculation.

Q: [1] 5.21

Estimate the lowest eigenvalue \( \lambda \) of the differential equation

\begin{equation}\label{eqn:absolutePotentialVariation:20}
\frac{d^2}{dx^2}\psi + \lr{ \lambda – \Abs{x} } \psi = 0.
\end{equation}

Using \( \alpha \) variation with the trial function

\begin{equation}\label{eqn:absolutePotentialVariation:40}
\psi =
\left\{
\begin{array}{l l}
c(\alpha – \Abs{x}) & \quad \mbox{\(\Abs{x} < \alpha \) } \\ 0 & \quad \mbox{\(\Abs{x} > \alpha \) }
\end{array}
\right.
\end{equation}

A:

First rewrite the differential equation in a Hamiltonian like fashion

\begin{equation}\label{eqn:absolutePotentialVariation:60}
H \psi = -\frac{d^2}{dx^2}\psi + \Abs{x} \psi = \lambda \psi.
\end{equation}

We need the derivatives of the trial distribution. The first derivative is

\begin{equation}\label{eqn:absolutePotentialVariation:80}
\begin{aligned}
\frac{d}{dx} \psi
&=
-c \frac{d}{dx} \Abs{x} \\
&=
-c \frac{d}{dx} \lr{ x \theta(x) – x \theta(-x) } \\
&=
-c \lr{
\theta(x) – \theta(-x)
+
x \delta(x) + x \delta(-x)
} \\
&=
-c \lr{
\theta(x) – \theta(-x)
+
2 x \delta(x)
}.
\end{aligned}
\end{equation}

The second derivative is
\begin{equation}\label{eqn:absolutePotentialVariation:100}
\begin{aligned}
\frac{d^2}{dx^2} \psi
&=
-c \frac{d}{dx} \lr{
\theta(x) – \theta(-x)
+
2 x \delta(x)
} \\
&=
-c \lr{
\delta(x) + \delta(-x)
+
2 \delta(x)
+
2 x \delta'(x)
} \\
&=
-c \lr{
4 \delta(x)
+
2 x \frac{-\delta(x) }{x}
} \\
&=
-2 c \delta(x).
\end{aligned}
\end{equation}

This gives

\begin{equation}\label{eqn:absolutePotentialVariation:120}
H \psi = 2 c \delta(x) + c \Abs{x} \lr{ \alpha - \Abs{x} }.
\end{equation}

We are now set to compute some of the inner products. The normalization is the simplest

\begin{equation}\label{eqn:absolutePotentialVariation:140}
\begin{aligned}
\braket{\psi}{\psi}
&= c^2 \int_{-\alpha}^\alpha ( \alpha – \Abs{x} )^2 dx \\
&= 2 c^2 \int_{0}^\alpha ( x – \alpha )^2 dx \\
&= 2 c^2 \int_{-\alpha}^0 u^2 du \\
&= 2 c^2 \lr{ -\frac{(-\alpha)^3}{3} } \\
&= \frac{2}{3} c^2 \alpha^3.
\end{aligned}
\end{equation}

For the energy
\begin{equation}\label{eqn:absolutePotentialVariation:160}
\begin{aligned}
\braket{\psi}{H \psi}
&=
c^2 \int dx \lr{ \alpha - \Abs{x} } \lr{ 2 \delta(x) + \Abs{x} \lr{ \alpha - \Abs{x} } } \\
&=
c^2 \lr{ 2 \alpha + \int_{-\alpha}^\alpha dx \lr{ \alpha - \Abs{x} }^2 \Abs{x} } \\
&=
c^2 \lr{ 2 \alpha + 2 \int_{-\alpha}^0 du u^2 \lr{ u + \alpha } } \\
&=
c^2 \lr{ 2 \alpha + 2 \evalrange{\lr{ \frac{u^4}{4} + \alpha \frac{u^3}{3} }}{-\alpha}{0} } \\
&=
c^2 \lr{ 2 \alpha - 2 \lr{ \frac{\alpha^4}{4} - \frac{\alpha^4}{3} } } \\
&=
c^2 \lr{ 2 \alpha + \inv{6} \alpha^4 }.
\end{aligned}
\end{equation}

The energy estimate is

\begin{equation}\label{eqn:absolutePotentialVariation:180}
\begin{aligned}
\overline{{E}}
&=
\frac{\braket{\psi}{H \psi}}{\braket{\psi}{\psi}} \\
&=
\frac{ 2 \alpha + \inv{6} \alpha^4 }{ \frac{2}{3} \alpha^3} \\
&=
\frac{3}{\alpha^2} + \inv{4} \alpha.
\end{aligned}
\end{equation}

This has its minimum at
\begin{equation}\label{eqn:absolutePotentialVariation:200}
0 = -\frac{6}{\alpha^3} + \inv{4},
\end{equation}

or
\begin{equation}\label{eqn:absolutePotentialVariation:220}
\alpha = 2 \times 3^{1/3}.
\end{equation}

Back substituting into the energy gives

\begin{equation}\label{eqn:absolutePotentialVariation:240}
\begin{aligned}
\overline{{E}}
&=
\frac{3}{4 \times 3^{2/3}} + \inv{2} 3^{1/3} \\
&= \frac{3^{4/3}}{4} \\
&\approx 1.08.
\end{aligned}
\end{equation}

The problem says the exact answer is 1.019, so the variation gets within 6 %.
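
As an aside (not part of the problem), here is a minimal Python sketch of this minimization, using the energy functional \( \overline{E}(\alpha) = 3/\alpha^2 + \alpha/4 \) found above; the search bounds are arbitrary choices:

import numpy as np
from scipy.optimize import minimize_scalar

# Variational estimate for -psi'' + |x| psi = lambda psi with the triangular trial function.
Ebar = lambda alpha: 3.0 / alpha**2 + alpha / 4.0
res = minimize_scalar(Ebar, bounds=(0.1, 10.0), method='bounded')
print(res.x, 2 * 3**(1.0 / 3))    # optimal alpha, compared against 2 * 3^(1/3)
print(res.fun, 3**(4.0 / 3) / 4)  # estimate ~1.08, matching 3^(4/3)/4; the exact value quoted above is 1.019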

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

PHY1520H Graduate Quantum Mechanics. Lecture 20: Perturbation theory. Taught by Prof. Arun Paramekanti

December 3, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] ch. 5 content.

Perturbation theory

Given a \( 2 \times 2 \) Hamiltonian \( H = H_0 + V \), where

\begin{equation}\label{eqn:qmLecture20:20}
H =
\begin{bmatrix}
a & c \\
c^\conj & b
\end{bmatrix}
\end{equation}

which has eigenvalues

\begin{equation}\label{eqn:qmLecture20:40}
\lambda_\pm = \frac{a + b}{2} \pm \sqrt{ \lr{ \frac{a – b}{2}}^2 + \Abs{c}^2 }.
\end{equation}

If \( c = 0 \),

\begin{equation}\label{eqn:qmLecture20:60}
H_0 =
\begin{bmatrix}
a & 0 \\
0 & b
\end{bmatrix},
\end{equation}

so

\begin{equation}\label{eqn:qmLecture20:80}
V =
\begin{bmatrix}
0 & c \\
c^\conj & 0
\end{bmatrix}.
\end{equation}

Suppose that \( \Abs{c} \ll \Abs{a – b} \), then

\begin{equation}\label{eqn:qmLecture20:100}
\lambda_\pm \approx \frac{a + b}{2} \pm \Abs{ \frac{a – b}{2} } \lr{ 1 + 2 \frac{\Abs{c}^2}{\Abs{a – b}^2} }.
\end{equation}

If \( a > b \), then

\begin{equation}\label{eqn:qmLecture20:120}
\lambda_\pm \approx \frac{a + b}{2} \pm \frac{a – b}{2} \lr{ 1 + 2 \frac{\Abs{c}^2}{\lr{a – b}^2} }.
\end{equation}

\begin{equation}\label{eqn:qmLecture20:140}
\begin{aligned}
\lambda_{+}
&= \frac{a + b}{2} + \frac{a – b}{2} \lr{ 1 + 2 \frac{\Abs{c}^2}{\lr{a – b}^2} } \\
&= a + \lr{a – b} \frac{\Abs{c}^2}{\lr{a – b}^2} \\
&= a + \frac{\Abs{c}^2}{a – b},
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:qmLecture20:680}
\begin{aligned}
\lambda_{-}
&= \frac{a + b}{2} – \frac{a – b}{2} \lr{ 1 + 2 \frac{\Abs{c}^2}{\lr{a – b}^2} } \\
&=
b - \lr{a - b} \frac{\Abs{c}^2}{\lr{a - b}^2} \\
&= b - \frac{\Abs{c}^2}{a - b}.
\end{aligned}
\end{equation}

This adiabatic evolution displays a “level repulsion”, quadratic in \( \Abs{c} \), as sketched in fig. 1, and is described as a non-degenerate perturbation.

fig. 1. Adiabatic (non-degenerate) perturbation
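
As a numerical aside (my own check, not from class), here is a minimal Python sketch comparing the exact eigenvalues with these second order estimates, for assumed values \( a = 2 \), \( b = 1 \), \( c = 0.05 \):

import numpy as np

# Compare exact 2x2 eigenvalues with the non-degenerate estimates
# lambda_+ ~ a + |c|^2/(a - b), lambda_- ~ b - |c|^2/(a - b).
a, b, c = 2.0, 1.0, 0.05            # |c| << |a - b|
H = np.array([[a, c], [np.conj(c), b]])
exact = np.linalg.eigvalsh(H)       # ascending order: (lambda_-, lambda_+)
approx = (b - abs(c)**2 / (a - b), a + abs(c)**2 / (a - b))
print(exact, approx)                # the two agree up to higher order corrections in |c|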

If \( \Abs{c} \gg \Abs{a -b} \), then

\begin{equation}\label{eqn:qmLecture20:160}
\begin{aligned}
\lambda_\pm
&= \frac{a + b}{2} \pm \Abs{c} \sqrt{ 1 + \inv{\Abs{c}^2} \lr{ \frac{a – b}{2}}^2 } \\
&\approx \frac{a + b}{2} \pm \Abs{c} \lr{ 1 + \inv{2 \Abs{c}^2} \lr{ \frac{a – b}{2}}^2 } \\
&= \frac{a + b}{2} \pm \Abs{c} \pm \frac{\lr{a – b}^2}{8 \Abs{c}}.
\end{aligned}
\end{equation}

Here we lose the adiabaticity, and have “level repulsion” that is linear in \( \Abs{c} \), as sketched in fig. 2. We no longer have the sign of \( a - b \) in the expansion. This is described as a degenerate perturbation.

fig. 2. Degenerate perturbation

General non-degenerate perturbation

Given an unperturbed system with solutions of the form

\begin{equation}\label{eqn:qmLecture20:180}
H_0 \ket{n^{(0)}} = E_n^{(0)} \ket{n^{(0)}},
\end{equation}

we want to solve the perturbed Hamiltonian equation

\begin{equation}\label{eqn:qmLecture20:200}
\lr{ H_0 + \lambda V } \ket{ n } = \lr{ E_n^{(0)} + \Delta_n } \ket{n}.
\end{equation}

Here \( \Delta_n \) is an energy shift that goes to zero as \( \lambda \rightarrow 0 \). We can write this as

\begin{equation}\label{eqn:qmLecture20:220}
\lr{ E_n^{(0)} – H_0 } \ket{ n } = \lr{ \lambda V – \Delta_n } \ket{n}.
\end{equation}

We are hoping to iterate with application of the inverse to an initial estimate of \( \ket{n} \)

\begin{equation}\label{eqn:qmLecture20:240}
\ket{n} = \lr{ E_n^{(0)} – H_0 }^{-1} \lr{ \lambda V – \Delta_n } \ket{n}.
\end{equation}

This gets us into trouble if \( \lambda \rightarrow 0 \), which can be fixed by using

\begin{equation}\label{eqn:qmLecture20:260}
\ket{n} = \lr{ E_n^{(0)} – H_0 }^{-1} \lr{ \lambda V – \Delta_n } \ket{n} + \ket{ n^{(0)} },
\end{equation}

which can be seen to be a solution to \ref{eqn:qmLecture20:220}. We want to ask if

\begin{equation}\label{eqn:qmLecture20:280}
\lr{ \lambda V – \Delta_n } \ket{n} ,
\end{equation}

contains a bit of \( \ket{ n^{(0)} } \)? To determine this act with \( \bra{n^{(0)}} \) on the left

\begin{equation}\label{eqn:qmLecture20:300}
\begin{aligned}
\bra{ n^{(0)} } \lr{ \lambda V – \Delta_n } \ket{n}
&=
\bra{ n^{(0)} } \lr{ E_n^{(0)} – H_0 } \ket{n} \\
&=
\lr{ E_n^{(0)} – E_n^{(0)} } \braket{n^{(0)}}{n} \\
&=
0.
\end{aligned}
\end{equation}

This shows that \( \lr{ \lambda V - \Delta_n } \ket{n} \) has no component along \( \ket{n^{(0)}} \).

Define a projection operator

\begin{equation}\label{eqn:qmLecture20:320}
P_n = \ket{n^{(0)}}\bra{n^{(0)}},
\end{equation}

which has the idempotent property \( P_n^2 = P_n \) that we expect of a projection operator.

Define a rejection operator
\begin{equation}\label{eqn:qmLecture20:340}
\overline{{P}}_n
= 1 –
\ket{n^{(0)}}\bra{n^{(0)}}
= \sum_{m \ne n}
\ket{m^{(0)}}\bra{m^{(0)}}.
\end{equation}

Because \( \lr{ \lambda V - \Delta_n } \ket{n} \) has no component in the direction \( \ket{n^{(0)}} \), the rejection operator can be inserted much like we normally do with the identity operator, yielding

\begin{equation}\label{eqn:qmLecture20:360}
\ket{n}' = \lr{ E_n^{(0)} - H_0 }^{-1} \overline{{P}}_n \lr{ \lambda V - \Delta_n } \ket{n} + \ket{ n^{(0)} },
\end{equation}

valid for any initial \( \ket{n} \).

Power series perturbation expansion

Instead of iterating, suppose that the unknown state and unknown energy difference operator can be expanded in a \( \lambda \) power series, say

\begin{equation}\label{eqn:qmLecture20:380}
\ket{n}
=
\ket{n_0}
+ \lambda \ket{n_1}
+ \lambda^2 \ket{n_2}
+ \lambda^3 \ket{n_3} + \cdots
\end{equation}

and

\begin{equation}\label{eqn:qmLecture20:400}
\Delta_{n} = \Delta_{n_0}
+ \lambda \Delta_{n_1}
+ \lambda^2 \Delta_{n_2}
+ \lambda^3 \Delta_{n_3} + \cdots
\end{equation}

We usually interpret functions of operators in terms of power series expansions. In the case of \( \lr{ E_n^{(0)} - H_0 }^{-1} \), we have a concrete interpretation when acting on one of the unperturbed eigenstates

\begin{equation}\label{eqn:qmLecture20:420}
\inv{ E_n^{(0)} – H_0 } \ket{m^{(0)}} =
\inv{ E_n^{(0)} - E_m^{(0)} } \ket{m^{(0)}}.
\end{equation}

This gives

\begin{equation}\label{eqn:qmLecture20:440}
\ket{n}
=
\inv{ E_n^{(0)} – H_0 }
\sum_{m \ne n}
\ket{m^{(0)}}\bra{m^{(0)}}
\lr{ \lambda V – \Delta_n } \ket{n} + \ket{ n^{(0)} },
\end{equation}

or

\begin{equation}\label{eqn:qmLecture20:460}
\boxed{
\ket{n}
=
\ket{ n^{(0)} }
+
\sum_{m \ne n}
\frac{\ket{m^{(0)}}\bra{m^{(0)}}}
{
E_n^{(0)} – E_m^{(0)}
}
\lr{ \lambda V – \Delta_n } \ket{n}.
}
\end{equation}

From \ref{eqn:qmLecture20:220}, note that

\begin{equation}\label{eqn:qmLecture20:500}
\Delta_n =
\frac{\bra{n^{(0)}} \lambda V \ket{n}}{\braket{n^{(0)}}{n}},
\end{equation}

however, we will normalize by setting \( \braket{n^{(0)}}{n} = 1 \), so

\begin{equation}\label{eqn:qmLecture20:521}
\boxed{
\Delta_n =
\bra{n^{(0)}} \lambda V \ket{n}.
}
\end{equation}

to \( O(\lambda^0) \)

Collecting the \( \lambda^0 \) (zeroth order) terms, we have

\begin{equation}\label{eqn:qmLecture20:740}
\ket{n_0}
=
\ket{ n^{(0)} }
+
\sum_{m \ne n}
\frac{\ket{m^{(0)}}\bra{m^{(0)}}}
{
E_n^{(0)} – E_m^{(0)}
}
\lr{ – \Delta_{n_0} } \ket{n_0}
\end{equation}
\begin{equation}\label{eqn:qmLecture20:800}
\Delta_{n_0} \braket{n^{(0)}}{n_0} = 0
\end{equation}

so

\begin{equation}\label{eqn:qmLecture20:540}
\begin{aligned}
\ket{n_0} &= \ket{n^{(0)}} \\
\Delta_{n_0} &= 0.
\end{aligned}
\end{equation}

to \( O(\lambda^1) \)

Requiring identity for all \( \lambda^1 \) terms means

\begin{equation}\label{eqn:qmLecture20:760}
\ket{n_1} \lambda
=
\sum_{m \ne n}
\frac{\ket{m^{(0)}}\bra{m^{(0)}}}
{
E_n^{(0)} – E_m^{(0)}
}
\lr{ \lambda V – \Delta_{n_1} \lambda } \ket{n_0},
\end{equation}

so

\begin{equation}\label{eqn:qmLecture20:560}
\ket{n_1}
=
\sum_{m \ne n}
\frac{
\ket{m^{(0)}} \bra{ m^{(0)}}
}
{
E_n^{(0)} – E_m^{(0)}
}
\lr{ V – \Delta_{n_1} } \ket{n_0}.
\end{equation}

With the assumption that \( \ket{n^{(0)}} \) is normalized, and with the shorthand

\begin{equation}\label{eqn:qmLecture20:600}
V_{m n} = \bra{ m^{(0)}} V \ket{n^{(0)}},
\end{equation}

that is

\begin{equation}\label{eqn:qmLecture20:580}
\begin{aligned}
\ket{n_1}
&=
\sum_{m \ne n}
\frac{
\ket{m^{(0)}}
}
{
E_n^{(0)} – E_m^{(0)}
}
V_{m n}
\\
\Delta_{n_1} &= \bra{ n^{(0)} } V \ket{ n^{(0)}} = V_{nn}.
\end{aligned}
\end{equation}

to \( O(\lambda^2) \)

The second order perturbation states are found by selecting only the \( \lambda^2 \) contributions to

\begin{equation}\label{eqn:qmLecture20:820}
\lambda^2 \ket{n_2}
=
\sum_{m \ne n}
\frac{\ket{m^{(0)}}\bra{m^{(0)}}}
{
E_n^{(0)} – E_m^{(0)}
}
\lr{ \lambda V – (\lambda \Delta_{n_1} + \lambda^2 \Delta_{n_2}) }
\lr{
\ket{n_0}
+ \lambda \ket{n_1}
}.
\end{equation}

Because \( \ket{n_0} = \ket{n^{(0)}} \), the \( \lambda^2 \Delta_{n_2} \) is killed, leaving

\begin{equation}\label{eqn:qmLecture20:840}
\begin{aligned}
\ket{n_2}
&=
\sum_{m \ne n}
\frac{\ket{m^{(0)}}\bra{m^{(0)}}}
{
E_n^{(0)} – E_m^{(0)}
}
\lr{ V – \Delta_{n_1} }
\ket{n_1} \\
&=
\sum_{m \ne n}
\frac{\ket{m^{(0)}}\bra{m^{(0)}}}
{
E_n^{(0)} – E_m^{(0)}
}
\lr{ V – \Delta_{n_1} }
\sum_{l \ne n}
\frac{
\ket{l^{(0)}}
}
{
E_n^{(0)} – E_l^{(0)}
}
V_{l n},
\end{aligned}
\end{equation}

which can be written as

\begin{equation}\label{eqn:qmLecture20:620}
\ket{n_2}
=
\sum_{l,m \ne n}
\ket{m^{(0)}}
\frac{V_{m l} V_{l n}}
{
\lr{ E_n^{(0)} – E_m^{(0)} }
\lr{ E_n^{(0)} – E_l^{(0)} }
}
-
\sum_{m \ne n}
\ket{m^{(0)}}
\frac{V_{n n} V_{m n}}
{
\lr{ E_n^{(0)} – E_m^{(0)} }^2
}.
\end{equation}

For the second energy perturbation we have

\begin{equation}\label{eqn:qmLecture20:860}
\lambda^2 \Delta_{n_2} =
\bra{n^{(0)}} \lambda V \lr{ \lambda \ket{n_1} },
\end{equation}

or

\begin{equation}\label{eqn:qmLecture20:880}
\begin{aligned}
\Delta_{n_2}
&=
\bra{n^{(0)}} V \ket{n_1} \\
&=
\bra{n^{(0)}} V
\sum_{m \ne n}
\frac{
\ket{m^{(0)}}
}
{
E_n^{(0)} – E_m^{(0)}
}
V_{m n}.
\end{aligned}
\end{equation}

That is

\begin{equation}\label{eqn:qmLecture20:900}
\Delta_{n_2}
=
\sum_{m \ne n} \frac{V_{n m} V_{m n} }{E_n^{(0)} – E_m^{(0)}}.
\end{equation}
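
As a side-note check, applying this to the \( 2 \times 2 \) example from the start of the lecture, with \( H_0 = \text{diag}(a, b) \) and \( V_{12} = c \), the first and second order shifts for the state with unperturbed energy \( a \) are
\begin{equation}
\Delta_{1_1} = V_{11} = 0, \qquad
\Delta_{1_2} = \frac{V_{12} V_{21}}{a - b} = \frac{\Abs{c}^2}{a - b},
\end{equation}
reproducing \( \lambda_{+} \approx a + \Abs{c}^2/(a - b) \) found above.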

to \( O(\lambda^3) \)

Similarly, it can be shown that

\begin{equation}\label{eqn:qmLecture20:640}
\Delta_{n_3} =
\sum_{l, m \ne n} \frac{V_{n m} V_{m l} V_{l n} }{
\lr{ E_n^{(0)} – E_m^{(0)} }
\lr{ E_n^{(0)} – E_l^{(0)} }
}
-
\sum_{ m \ne n} \frac{V_{n m} V_{n n} V_{m n} }{
\lr{ E_n^{(0)} – E_m^{(0)} }^2
}.
\end{equation}

In general, the energy perturbation is given by

\begin{equation}\label{eqn:qmLecture20:660}
\Delta_n^{(l)} = \bra{n^{(0)}} V \ket{n^{(l-1)}}.
\end{equation}

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Alternate Dirac equation representation

November 27, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Given an alternate representation of the Dirac equation

\begin{equation}\label{eqn:diracAlternate:20}
H =
\begin{bmatrix}
m c^2 + V_0 & c \hat{p} \\
c \hat{p} & – m c^2 + V_0
\end{bmatrix},
\end{equation}

calculate the constant momentum solutions, the Heisenberg velocity operator \( \hat{v} \), and find the form of the probability density current.

Plane wave solutions

The action of the Hamiltonian on

\begin{equation}\label{eqn:diracAlternate:40}
\psi =
e^{i k x – i E t/\Hbar}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix}
\end{equation}

is
\begin{equation}\label{eqn:diracAlternate:60}
\begin{aligned}
H \psi
&=
\begin{bmatrix}
m c^2 + V_0 & c (-i \Hbar) i k \\
c (-i \Hbar) i k & – m c^2 + V_0
\end{bmatrix}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix}
e^{i k x – i E t/\Hbar} \\
&=
\begin{bmatrix}
m c^2 + V_0 & c \Hbar k \\
c \Hbar k & – m c^2 + V_0
\end{bmatrix}
\psi.
\end{aligned}
\end{equation}

Writing

\begin{equation}\label{eqn:diracAlternate:80}
H_k
=
\begin{bmatrix}
m c^2 + V_0 & c \Hbar k \\
c \Hbar k & – m c^2 + V_0
\end{bmatrix}
\end{equation}

the characteristic equation is

\begin{equation}\label{eqn:diracAlternate:100}
0
=
(m c^2 + V_0 – \lambda)
(-m c^2 + V_0 – \lambda)
– (c \Hbar k)^2
=
\lr{ (\lambda – V_0)^2 – (m c^2)^2 } – (c \Hbar k)^2,
\end{equation}

so

\begin{equation}\label{eqn:diracAlternate:120}
\lambda = V_0 \pm \epsilon,
\end{equation}

where
\begin{equation}\label{eqn:diracAlternate:140}
\epsilon^2 = (m c^2)^2 + (c \Hbar k)^2.
\end{equation}

We’ve got

\begin{equation}\label{eqn:diracAlternate:160}
\begin{aligned}
H – ( V_0 + \epsilon )
&=
\begin{bmatrix}
m c^2 – \epsilon & c \Hbar k \\
c \Hbar k & – m c^2 – \epsilon
\end{bmatrix} \\
H – ( V_0 – \epsilon )
&=
\begin{bmatrix}
m c^2 + \epsilon & c \Hbar k \\
c \Hbar k & – m c^2 + \epsilon
\end{bmatrix},
\end{aligned}
\end{equation}

so the eigenkets are

\begin{equation}\label{eqn:diracAlternate:180}
\begin{aligned}
\ket{V_0+\epsilon}
&\propto
\begin{bmatrix}
-c \Hbar k \\
m c^2 – \epsilon
\end{bmatrix} \\
\ket{V_0-\epsilon}
&\propto
\begin{bmatrix}
-c \Hbar k \\
m c^2 + \epsilon
\end{bmatrix}.
\end{aligned}
\end{equation}

Up to an arbitrary phase for each, these are

\begin{equation}\label{eqn:diracAlternate:200}
\begin{aligned}
\ket{V_0 + \epsilon}
&=
\inv{\sqrt{ 2 \epsilon ( \epsilon – m c^2) }}
\begin{bmatrix}
c \Hbar k \\
\epsilon -m c^2
\end{bmatrix} \\
\ket{V_0 – \epsilon}
&=
\inv{\sqrt{ 2 \epsilon ( \epsilon + m c^2) }}
\begin{bmatrix}
-c \Hbar k \\
\epsilon + m c^2
\end{bmatrix} \\
\end{aligned}
\end{equation}

We can now write

\begin{equation}\label{eqn:diracAlternate:220}
H_k =
E
\begin{bmatrix}
V_0 + \epsilon & 0 \\
0 & V_0 – \epsilon
\end{bmatrix}
E^{-1},
\end{equation}

where
\begin{equation}\label{eqn:diracAlternate:240}
\begin{aligned}
E &=
\inv{\sqrt{2 \epsilon} }
\begin{bmatrix}
\frac{c \Hbar k}{ \sqrt{ \epsilon – m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } \\
\sqrt{ \epsilon – m c^2 } & \sqrt{ \epsilon + m c^2 }
\end{bmatrix}, \qquad k > 0 \\
E &=
\inv{\sqrt{2 \epsilon} }
\begin{bmatrix}
-\frac{c \Hbar k}{ \sqrt{ \epsilon – m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } \\
-\sqrt{ \epsilon – m c^2 } & \sqrt{ \epsilon + m c^2 }
\end{bmatrix}, \qquad k < 0.
\end{aligned}
\end{equation}

Here the signs have been adjusted to ensure the transformation matrix has a unit determinant. Observe that there's redundancy in this matrix since \( \ifrac{c \Hbar \Abs{k}}{ \sqrt{ \epsilon - m c^2 } } = \sqrt{ \epsilon + m c^2 } \), and \( \ifrac{c \Hbar \Abs{k}}{ \sqrt{ \epsilon + m c^2 } } = \sqrt{ \epsilon - m c^2 } \), which allows the transformation matrix to be written in the form of a rotation matrix

\begin{equation}\label{eqn:diracAlternate:260}
\begin{aligned}
E &=
\inv{\sqrt{2 \epsilon} }
\begin{bmatrix}
\frac{c \Hbar k}{ \sqrt{ \epsilon - m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } \\
\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } & \frac{c \Hbar k}{ \sqrt{ \epsilon - m c^2 } }
\end{bmatrix}, \qquad k > 0 \\
E &=
\inv{\sqrt{2 \epsilon} }
\begin{bmatrix}
-\frac{c \Hbar k}{ \sqrt{ \epsilon - m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } \\
\frac{c \Hbar k}{ \sqrt{ \epsilon + m c^2 } } & -\frac{c \Hbar k}{ \sqrt{ \epsilon - m c^2 } }
\end{bmatrix}, \qquad k < 0.
\end{aligned}
\end{equation}

With

\begin{equation}\label{eqn:diracAlternate:280}
\begin{aligned}
\cos\theta &= \frac{c \Hbar \Abs{k}}{ \sqrt{ 2 \epsilon( \epsilon - m c^2) } } = \frac{\sqrt{ \epsilon + m c^2} }{ \sqrt{ 2 \epsilon}} \\
\sin\theta &= \frac{c \Hbar k}{ \sqrt{ 2 \epsilon( \epsilon + m c^2) } } = \frac{\textrm{sgn}(k) \sqrt{ \epsilon - m c^2}}{ \sqrt{ 2 \epsilon } },
\end{aligned}
\end{equation}

the transformation matrix (and eigenkets) is

\begin{equation}\label{eqn:diracAlternate:300}
\boxed{
E =
\begin{bmatrix}
\ket{V_0 + \epsilon} & \ket{V_0 - \epsilon}
\end{bmatrix}
=
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix}.
}
\end{equation}

Observe that \ref{eqn:diracAlternate:280} can be simplified by using double angle formulas

\begin{equation}\label{eqn:diracAlternate:320}
\begin{aligned}
\cos(2 \theta)
&= \frac{\lr{ \epsilon + m c^2} }{ 2 \epsilon } - \frac{\lr{ \epsilon - m c^2}}{ 2 \epsilon } \\
&= \frac{1}{ 2 \epsilon } \lr{ \epsilon + m c^2 - \epsilon + m c^2 } \\
&= \frac{m c^2 }{ \epsilon },
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:diracAlternate:340}
\sin(2\theta) = 2 \frac{1}{2 \epsilon} \textrm{sgn}(k ) \sqrt{ \epsilon^2 - (m c^2)^2 } = \frac{\Hbar k c}{\epsilon}.
\end{equation}

This allows all the \( \theta \) dependence on \( \Hbar k c \) and \( m c^2 \) to be expressed as a ratio of momenta

\begin{equation}\label{eqn:diracAlternate:360}
\boxed{
\tan(2\theta) = \frac{\Hbar k}{m c}.
}
\end{equation}

Hyperbolic solutions

For a wave function of the form

\begin{equation}\label{eqn:diracAlternate:380}
\psi =
e^{k x – i E t/\Hbar}
\begin{bmatrix}
\psi_1 \\
\psi_2
\end{bmatrix},
\end{equation}

some of the work above can be recycled if we substitute \( k \rightarrow -i k \), which yields unnormalized eigenfunctions

\begin{equation}\label{eqn:diracAlternate:400}
\begin{aligned}
\ket{V_0+\epsilon}
&\propto
\begin{bmatrix}
i c \Hbar k \\
m c^2 – \epsilon
\end{bmatrix} \\
\ket{V_0-\epsilon}
&\propto
\begin{bmatrix}
i c \Hbar k \\
m c^2 + \epsilon
\end{bmatrix},
\end{aligned}
\end{equation}

where

\begin{equation}\label{eqn:diracAlternate:420}
\epsilon^2 = (m c^2)^2 – (c \Hbar k)^2.
\end{equation}

The squared magnitudes of these wavefunctions are

\begin{equation}\label{eqn:diracAlternate:440}
\begin{aligned}
(c \Hbar k)^2 + (m c^2 \mp \epsilon)^2
&=
(c \Hbar k)^2 + (m c^2)^2 + \epsilon^2 \mp 2 m c^2 \epsilon \\
&=
(c \Hbar k)^2 + (m c^2)^2 + (m c^2)^2 – (c \Hbar k)^2 \mp 2 m c^2 \epsilon \\
&= 2 (m c^2)^2 \mp 2 m c^2 \epsilon \\
&= 2 m c^2 ( m c^2 \mp \epsilon ),
\end{aligned}
\end{equation}

so, up to a constant phase for each, the normalized kets are

\begin{equation}\label{eqn:diracAlternate:460}
\begin{aligned}
\ket{V_0+\epsilon}
&=
\inv{\sqrt{ 2 m c^2 ( m c^2 – \epsilon ) }}
\begin{bmatrix}
i c \Hbar k \\
m c^2 – \epsilon
\end{bmatrix} \\
\ket{V_0-\epsilon}
&=
\inv{\sqrt{ 2 m c^2 ( m c^2 + \epsilon ) }}
\begin{bmatrix}
i c \Hbar k \\
m c^2 + \epsilon
\end{bmatrix},
\end{aligned}
\end{equation}

After the \( k \rightarrow -i k \) substitution, \( H_k \) is not Hermitian, so these kets aren’t expected to be orthonormal, which is readily verified

\begin{equation}\label{eqn:diracAlternate:480}
\begin{aligned}
\braket{V_0+\epsilon}{V_0-\epsilon}
&=
\inv{\sqrt{ 2 m c^2 ( m c^2 – \epsilon ) }}
\inv{\sqrt{ 2 m c^2 ( m c^2 + \epsilon ) }}
\begin{bmatrix}
-i c \Hbar k &
m c^2 – \epsilon
\end{bmatrix}
\begin{bmatrix}
i c \Hbar k \\
m c^2 + \epsilon
\end{bmatrix} \\
&=
\frac{ 2 ( c \Hbar k )^2 }{2 m c^2 \sqrt{(\Hbar k c)^2} } \\
&=
\textrm{sgn}(k)
\frac{
\Hbar k }{m c } .
\end{aligned}
\end{equation}
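
A quick numerical check of this overlap (my own addition), again in units where \( m = c = \Hbar = 1 \), with an arbitrary \( \Hbar k < m c \) so that \( \epsilon \) is real:

```python
import numpy as np

# Arbitrary values, units where m = c = hbar = 1; need |hbar k| < m c for real eps.
m = c = hbar = 1.0
k = 0.3
eps = np.sqrt((m*c**2)**2 - (c*hbar*k)**2)

ketP = np.array([1j*c*hbar*k, m*c**2 - eps])/np.sqrt(2*m*c**2*(m*c**2 - eps))
ketM = np.array([1j*c*hbar*k, m*c**2 + eps])/np.sqrt(2*m*c**2*(m*c**2 + eps))

# np.vdot conjugates its first argument
print(np.vdot(ketP, ketM).real, hbar*k/(m*c))   # both 0.3, not zero
```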

Heisenberg velocity operator

\begin{equation}\label{eqn:diracAlternate:500}
\begin{aligned}
\hat{v}
&= \inv{i \Hbar} \antisymmetric{ \hat{x} }{ H} \\
&= \inv{i \Hbar} \antisymmetric{ \hat{x} }{ m c^2 \sigma_z + V_0 + c \hat{p} \sigma_x } \\
&= \frac{c \sigma_x}{i \Hbar} \antisymmetric{ \hat{x} }{ \hat{p} } \\
&= c \sigma_x.
\end{aligned}
\end{equation}

Probability current

Acting on a completely general wavefunction, the Hamiltonian action \( H \psi \) is

\begin{equation}\label{eqn:diracAlternate:520}
\begin{aligned}
i \Hbar \PD{t}{\psi}
&= m c^2 \sigma_z \psi + V_0 \psi + c \hat{p} \sigma_x \psi \\
&= m c^2 \sigma_z \psi + V_0 \psi -i \Hbar c \sigma_x \PD{x}{\psi}.
\end{aligned}
\end{equation}

Correspondingly, the conjugate \( (H \psi)^\dagger \) is

\begin{equation}\label{eqn:diracAlternate:540}
-i \Hbar \PD{t}{\psi^\dagger}
= m c^2 \psi^\dagger \sigma_z + V_0 \psi^\dagger +i \Hbar c \PD{x}{\psi^\dagger} \sigma_x.
\end{equation}

These give

\begin{equation}\label{eqn:diracAlternate:560}
\begin{aligned}
i \Hbar \psi^\dagger \PD{t}{\psi}
&=
m c^2 \psi^\dagger \sigma_z \psi + V_0 \psi^\dagger \psi -i \Hbar c \psi^\dagger \sigma_x \PD{x}{\psi} \\
-i \Hbar \PD{t}{\psi^\dagger} \psi
&= m c^2 \psi^\dagger \sigma_z \psi + V_0 \psi^\dagger \psi +i \Hbar c \PD{x}{\psi^\dagger} \sigma_x \psi.
\end{aligned}
\end{equation}

Taking differences
\begin{equation}\label{eqn:diracAlternate:580}
\psi^\dagger \PD{t}{\psi} + \PD{t}{\psi^\dagger} \psi
=
– c \psi^\dagger \sigma_x \PD{x}{\psi} – c \PD{x}{\psi^\dagger} \sigma_x \psi,
\end{equation}

or

\begin{equation}\label{eqn:diracAlternate:600}
0
=
\PD{t}{}
\lr{
\psi^\dagger \psi
}
+
\PD{x}{}
\lr{
c \psi^\dagger \sigma_x \psi
}.
\end{equation}

The probability density still has the usual form \( \rho = \psi^\dagger \psi = \psi_1^\conj \psi_1 + \psi_2^\conj \psi_2 \), but the probability current with this representation of the Dirac Hamiltonian is

\begin{equation}\label{eqn:diracAlternate:620}
\begin{aligned}
j
&= c \psi^\dagger \sigma_x \psi \\
&= c
\begin{bmatrix}
\psi_1^\conj &
\psi_2^\conj
\end{bmatrix}
\begin{bmatrix}
\psi_2 \\
\psi_1
\end{bmatrix} \\
&= c \lr{ \psi_1^\conj \psi_2 + \psi_2^\conj \psi_1 }.
\end{aligned}
\end{equation}
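
As a final sanity check (my own addition, not part of the original problem), for a normalized plane wave eigenket the current \( j \) should equal the expectation value of the velocity operator \( c \sigma_x \), which works out to \( c^2 \Hbar k/\epsilon \). A quick numerical verification, with arbitrary values in units where \( m = c = \Hbar = 1 \):

```python
import numpy as np

# Arbitrary values, units where m = c = hbar = 1.
m = c = hbar = 1.0
k = 0.7
eps = np.sqrt((m*c**2)**2 + (c*hbar*k)**2)

# normalized spinor for the |V_0 + eps> plane wave state
psi = np.array([c*hbar*k, eps - m*c**2])/np.sqrt(2*eps*(eps - m*c**2))
sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

j = c*(np.conj(psi[0])*psi[1] + np.conj(psi[1])*psi[0]).real   # current density
v = c*(np.conj(psi) @ sigma_x @ psi).real                      # <c sigma_x>

print(j, v, c**2*hbar*k/eps)   # all three agree
```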

PHY1520H Graduate Quantum Mechanics. Lecture 18: Approximation methods. Taught by Prof. Arun Paramekanti

November 26, 2015 phy1520 No comments , ,

[Click here for a PDF of this post with nicer formatting]

Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough, especially since I didn’t attend this class myself, and am doing a walkthrough of notes provided by Nishant.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 5 content.

Approximation methods

Suppose we have a perturbed Hamiltonian

\begin{equation}\label{eqn:qmLecture18:20}
H = H_0 + \lambda V,
\end{equation}

where \( \lambda = 0 \) represents a solvable (perhaps known) system, and \( \lambda = 1 \) is the case of interest. There are two approaches to consider

  1. Direct solution of \( H \) with \( \lambda = 1 \).
  2. Take \( \lambda \) small, and do a series expansion. This is perturbation theory.

Variational methods

Given

\begin{equation}\label{eqn:qmLecture18:40}
H \ket{\phi_n} = E_n \ket{\phi_n},
\end{equation}

where we don’t know \( \ket{\phi_n} \), we can compute the expectation with respect to an arbitrary state \( \ket{\psi} \)

\begin{equation}\label{eqn:qmLecture18:60}
\bra{\psi} H \ket{\psi}
=
\bra{\psi} H \lr{ \sum_n \ket{\phi_n} \bra{\phi_n} } \ket{\psi}
=
\sum_n E_n \braket{\psi}{\phi_n} \braket{\phi_n}{\psi}
=
\sum_n E_n \Abs{\braket{\psi}{\phi_n}}^2.
\end{equation}

Define

\begin{equation}\label{eqn:qmLecture18:80}
\overline{{E}}
= \frac{\bra{\psi} H \ket{\psi}}{\braket{\psi}{\psi}}.
\end{equation}

Assuming that it is possible to express the state in the Hamiltonian energy basis

\begin{equation}\label{eqn:qmLecture18:100}
\ket{\psi}
=
\sum_n a_n \ket{\phi_n},
\end{equation}

this average energy is
\begin{equation}\label{eqn:qmLecture18:120}
\overline{{E}}
= \frac{ \sum_{m,n}\bra{\phi_m} a_m^\conj H a_n \ket{\phi_n}}{ \sum_n \Abs{a_n}^2 }
= \frac{ \sum_{n} \Abs{a_n}^2 E_n }{ \sum_n \Abs{a_n}^2 }
= \sum_{n}
\frac{\Abs{a_n}^2 }{ \sum_m \Abs{a_m}^2 }
E_n
= \sum_n \frac{P_n}{\sum_m P_m} E_n,
\end{equation}

where \( P_m = \Abs{a_m}^2 \), which has the structure of a probability coefficient once divided by \( \sum_m P_m \), as sketched in fig. 1.

fig. 1. A decreasing probability distribution

This average energy is a probability-weighted average of the energies of the individual basis states. One of those energies is the ground state energy \( E_1 \), so we necessarily have

\begin{equation}\label{eqn:qmLecture18:140}
\boxed{
\overline{{E}} \ge E_1.
}
\end{equation}

Example: particle in a \( [0,L] \) box.

For the infinite potential box sketched in fig. 2.

fig. 2. Infinite potential [0,L] box.

The exact solutions for such a system are found to be

\begin{equation}\label{eqn:qmLecture18:220}
\psi(x) = \sqrt{\frac{2}{L}} \sin\lr{ \frac{n \pi}{L} x },
\end{equation}

where the energies are

\begin{equation}\label{eqn:qmLecture18:240}
E = \frac{\Hbar^2}{2m} \frac{n^2 \pi^2}{L^2}.
\end{equation}

The function \( \psi’ = x (L-x) \) also satisfies the boundary value constraints. How close in energy is that trial function to the ground state?

\begin{equation}\label{eqn:qmLecture18:260}
\overline{{E}}
=
-\frac{\Hbar^2}{2m} \frac{\int_0^L dx x (L-x) \frac{d^2}{dx^2} \lr{ x (L-x) }}{
\int_0^L dx x^2 (L-x)^2
}
=
\frac{\Hbar^2}{2m} \frac{\frac{2 L^3}{6}}{
\frac{L^5}{30}
}
=
\frac{\Hbar^2}{2m} \frac{10}{L^2}.
\end{equation}

This average energy is quite close to the ground state energy

\begin{equation}\label{eqn:qmLecture18:280}
\frac{\overline{{E}} }{E_1} = \frac{10}{\pi^2} \approx 1.013.
\end{equation}
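
This variational estimate is also easy to verify symbolically (my own addition, not from the lecture); a short sympy computation of \( \overline{E} \) for the trial function \( x(L-x) \):

```python
import sympy as sp

x, L, hbar, m = sp.symbols('x L hbar m', positive=True)
psi = x*(L - x)                                    # trial function, zero at x = 0 and x = L

num = sp.integrate(psi*(-hbar**2/(2*m))*sp.diff(psi, x, 2), (x, 0, L))
den = sp.integrate(psi**2, (x, 0, L))
Ebar = sp.simplify(num/den)                        # 5*hbar**2/(L**2*m) = (hbar^2/2m)(10/L^2)

E1 = sp.pi**2*hbar**2/(2*m*L**2)                   # exact ground state energy
print(Ebar, sp.simplify(Ebar/E1), float(10/sp.pi**2))   # ratio 10/pi**2 ~ 1.0132
```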

Example II: particle in a \( [-L/2,L/2] \) box.

fig. 3. Infinite potential [-L/2,L/2] box.

Shifting the boundaries, as sketched in fig. 3, doesn’t change the energy levels. For this potential let’s try a shifted trial function

\begin{equation}\label{eqn:qmLecture18:300}
\psi(x) = \lr{ x – \frac{L}{2} } \lr{ x + \frac{L}{2} } = x^2 – \frac{L^2}{4},
\end{equation}

without worrying about the form of the exact solution. This produces the same result as above

\begin{equation}\label{eqn:qmLecture18:270}
\overline{{E}}
=
-\frac{\Hbar^2}{2m} \frac{\int_{-L/2}^{L/2} dx \lr{ x^2 – \frac{L^2}{4} } \frac{d^2}{dx^2} \lr{ x^2 – \frac{L^2}{4} }}{
\int_{-L/2}^{L/2} dx \lr{x^2 – \frac{L^2}{4} }^2
}
=
-\frac{\Hbar^2}{2m} \frac{- 2 L^3/6}{
\frac{L^5}{30}
}
=
\frac{\Hbar^2}{2m} \frac{10}{L^2}.
\end{equation}

Summary (Nishant)

The above example is that of a particle in a box. The actual wave function is a sine, as shown. But we can come up with a guess wave function that meets the boundary conditions and ask how accurate it is compared to the actual one.

Basically we are assuming a form for the wave function and then seeing how it differs from the exact form. We cannot do this if we have nothing to compare it against. However, we note that the variance of the Hamiltonian in one of its eigenstates is zero, so we can still calculate that variance for a trial state and try to minimize it. This is one way of coming up with an approximate wave function, but it does not necessarily give the ground state wave function. For that we need to minimize the energy itself.

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Two spin time evolution

November 14, 2015 phy1520 No comments , , , ,

[Click here for a PDF of this post with nicer formatting]

Motivation

Our midterm posed a low-mark “quick question” that I didn’t complete (or at least not properly). This shouldn’t have been a difficult question, but I spent way too much time on it, costing me time that I needed for other questions.

It turns out that there isn’t anything fancy required for this question, just perseverance and careful work.

Guts

The question asked for the time evolution of a two particle state

\begin{equation}\label{eqn:twoSpinHamiltonian:20}
\psi = \inv{\sqrt{2}} \lr{ \ket{\uparrow \downarrow} – \ket{\downarrow \uparrow} }
\end{equation}

under the action of the Hamiltonian

\begin{equation}\label{eqn:twoSpinHamiltonian:40}
H = – B S_{z,1} + 2 B S_{x,2} = \frac{\Hbar B}{2}\lr{ -\sigma_{z,1} + 2 \sigma_{x,2} } .
\end{equation}

We have to know the action of the Hamiltonian on all the states

\begin{equation}\label{eqn:twoSpinHamiltonian:60}
\begin{aligned}
H \ket{\uparrow \uparrow} &= \frac{B \Hbar}{2} \lr{ -\ket{\uparrow \uparrow} + 2 \ket{\uparrow \downarrow} } \\
H \ket{\uparrow \downarrow} &= \frac{B \Hbar}{2} \lr{ -\ket{\uparrow \downarrow} + 2 \ket{\uparrow \uparrow} } \\
H \ket{\downarrow \uparrow} &= \frac{B \Hbar}{2} \lr{ \ket{\downarrow \uparrow} + 2 \ket{\downarrow \downarrow} } \\
H \ket{\downarrow \downarrow} &= \frac{B \Hbar}{2} \lr{ \ket{\downarrow \downarrow} + 2 \ket{\downarrow \uparrow} } \\
\end{aligned}
\end{equation}

With respect to the basis \( \setlr{ \ket{\uparrow \uparrow}, \ket{\uparrow \downarrow}, \ket{\downarrow \uparrow}, \ket{\downarrow \downarrow} } \), the matrix of the Hamiltonian is

\begin{equation}\label{eqn:twoSpinHamiltonian:80}
H =
\frac{ \Hbar B }{2}
\begin{bmatrix}
-1 & 2 & 0 & 0 \\
2 & -1 & 0 & 0 \\
0 & 0 & 1 & 2 \\
0 & 0 & 2 & 1 \\
\end{bmatrix}
\end{equation}

Utilizing the block diagonal form (and ignoring the \( \Hbar B/2 \) factor for now), the characteristic equation is

\begin{equation}\label{eqn:twoSpinHamiltonian:100}
0
=
\begin{vmatrix}
-1 -\lambda & 2 \\
2 & -1 – \lambda
\end{vmatrix}
\begin{vmatrix}
1 -\lambda & 2 \\
2 & 1 – \lambda
\end{vmatrix}
=
\lr{(1 + \lambda)^2 – 4}
\lr{(1 – \lambda)^2 – 4}.
\end{equation}

This has solutions

\begin{equation}\label{eqn:twoSpinHamiltonian:120}
1 + \lambda = \pm 2, \qquad 1 – \lambda = \pm 2,
\end{equation}

or, with the \( \Hbar B/2 \) factors put back in

\begin{equation}\label{eqn:twoSpinHamiltonian:140}
\lambda = \pm \Hbar B/2 , \pm 3 \Hbar B/2.
\end{equation}
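
These eigenvalues are easy to double check numerically (my own addition); in units of \( \Hbar B/2 \):

```python
import numpy as np

# eigenvalues of H in units of hbar*B/2
H = np.array([[-1,  2,  0,  0],
              [ 2, -1,  0,  0],
              [ 0,  0,  1,  2],
              [ 0,  0,  2,  1]], dtype=float)
print(np.linalg.eigvalsh(H))   # [-3. -1.  1.  3.]
```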

I was thinking that we needed to compute the time evolution operator

\begin{equation}\label{eqn:twoSpinHamiltonian:160}
U = e^{-i H t/\Hbar},
\end{equation}

but we actually only need the eigenvectors, and the inverse relations. We can find the eigenvectors by inspection in each case from

\begin{equation}\label{eqn:twoSpinHamiltonian:180}
\begin{aligned}
H – (1) \frac{ \Hbar B }{2}
&=
\frac{ \Hbar B }{2}
\begin{bmatrix}
-2 & 2 & 0 & 0 \\
2 & -2 & 0 & 0 \\
0 & 0 & 0 & 2 \\
0 & 0 & 2 & 0 \\
\end{bmatrix} \\
H – (-1) \frac{ \Hbar B }{2}
&=
\frac{ \Hbar B }{2}
\begin{bmatrix}
0 & 2 & 0 & 0 \\
2 & 0 & 0 & 0 \\
0 & 0 & 2 & 2 \\
0 & 0 & 2 & 2 \\
\end{bmatrix} \\
H – (3) \frac{ \Hbar B }{2}
&=
\frac{ \Hbar B }{2}
\begin{bmatrix}
-4 & 2 & 0 & 0 \\
2 & -4 & 0 & 0 \\
0 & 0 &-2 & 2 \\
0 & 0 & 2 &-2 \\
\end{bmatrix} \\
H – (-3) \frac{ \Hbar B }{2}
&=
\frac{ \Hbar B }{2}
\begin{bmatrix}
2 & 2 & 0 & 0 \\
2 & 2 & 0 & 0 \\
0 & 0 & 4 & 2 \\
0 & 0 & 2 & 4 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

The eigenkets are

\begin{equation}\label{eqn:twoSpinHamiltonian:280}
\begin{aligned}
\ket{1} &=
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1 \\
0 \\
0 \\
\end{bmatrix} \\
\ket{-1} &=
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
0 \\
1 \\
-1 \\
\end{bmatrix} \\
\ket{3} &=
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
0 \\
1 \\
1 \\
\end{bmatrix} \\
\ket{-3} &=
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
-1 \\
0 \\
0 \\
\end{bmatrix},
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:twoSpinHamiltonian:300}
\begin{aligned}
\sqrt{2} \ket{1} &= \ket{\uparrow \uparrow} + \ket{\uparrow \downarrow} \\
\sqrt{2} \ket{-1} &= \ket{\downarrow \uparrow} – \ket{\downarrow \downarrow} \\
\sqrt{2} \ket{3} &= \ket{\downarrow \uparrow} + \ket{\downarrow \downarrow} \\
\sqrt{2} \ket{-3} &= \ket{\uparrow \uparrow} – \ket{\uparrow \downarrow}.
\end{aligned}
\end{equation}

We can invert these

\begin{equation}\label{eqn:twoSpinHamiltonian:220}
\begin{aligned}
\ket{\uparrow \uparrow} &= \inv{\sqrt{2}} \lr{ \ket{1} + \ket{-3} } \\
\ket{\uparrow \downarrow} &= \inv{\sqrt{2}} \lr{ \ket{1} – \ket{-3} } \\
\ket{\downarrow \uparrow} &= \inv{\sqrt{2}} \lr{ \ket{3} + \ket{-1} } \\
\ket{\downarrow \downarrow} &= \inv{\sqrt{2}} \lr{ \ket{3} – \ket{-1} } \\
\end{aligned}
\end{equation}

The original state of interest can now be expressed in terms of the eigenkets

\begin{equation}\label{eqn:twoSpinHamiltonian:240}
\psi
=
\inv{2} \lr{
\ket{1} – \ket{-3} –
\ket{3} – \ket{-1}
}
\end{equation}

The time evolution of this ket is

\begin{equation}\label{eqn:twoSpinHamiltonian:260}
\begin{aligned}
\psi(t)
&=
\inv{2}
\lr{
e^{-i B t/2} \ket{1}
– e^{3 i B t/2} \ket{-3}
– e^{-3 i B t/2} \ket{3}
– e^{i B t/2} \ket{-1}
} \\
&=
\inv{2 \sqrt{2}}
\Biglr{
e^{-i B t/2} \lr{ \ket{\uparrow \uparrow} + \ket{\uparrow \downarrow} }
– e^{3 i B t/2} \lr{ \ket{\uparrow \uparrow} – \ket{\uparrow \downarrow} }
– e^{-3 i B t/2} \lr{ \ket{\downarrow \uparrow} + \ket{\downarrow \downarrow} }
– e^{i B t/2} \lr{ \ket{\downarrow \uparrow} – \ket{\downarrow \downarrow} }
} \\
&=
\inv{2 \sqrt{2}}
\Biglr{
\lr{ e^{-i B t/2} – e^{3 i B t/2} } \ket{\uparrow \uparrow}
+ \lr{ e^{-i B t/2} + e^{3 i B t/2} } \ket{\uparrow \downarrow}
– \lr{ e^{-3 i B t/2} + e^{i B t/2} } \ket{\downarrow \uparrow}
+ \lr{ e^{i B t/2} – e^{-3 i B t/2} } \ket{\downarrow \downarrow}
} \\
&=
\inv{2 \sqrt{2}}
\Biglr{
e^{i B t/2} \lr{ e^{-2 i B t/2} – e^{2 i B t/2} } \ket{\uparrow \uparrow}
+ e^{i B t/2} \lr{ e^{-2 i B t/2} + e^{2 i B t/2} } \ket{\uparrow \downarrow}
– e^{- i B t/2} \lr{ e^{-2 i B t/2} + e^{2 i B t/2} } \ket{\downarrow \uparrow}
+ e^{- i B t/2} \lr{ e^{2 i B t/2} – e^{-2 i B t/2} } \ket{\downarrow \downarrow}
} \\
&=
\inv{\sqrt{2}}
\lr{
i \sin( B t )
\lr{
e^{- i B t/2} \ket{\downarrow \downarrow} – e^{i B t/2} \ket{\uparrow \uparrow}
}
+ \cos( B t ) \lr{
e^{i B t/2} \ket{\uparrow \downarrow}
– e^{- i B t/2} \ket{\downarrow \uparrow}
}
}
\end{aligned}
\end{equation}

Note that this returns to the original state, up to an overall sign \( (-1)^n \), when \( t = \frac{2 \pi n}{B}, n \in \mathbb{Z} \). I think I’ve got it right this time (although I got a slightly different answer on paper before typing it up.)
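
As a check on the algebra (my own addition, not part of the midterm solution), the closed form above can be compared against a direct numerical evolution of the initial state with \( e^{-i H t/\Hbar} \), built here from the spectral decomposition of the \( 4 \times 4 \) matrix, with \( \Hbar = B = 1 \) and an arbitrary time:

```python
import numpy as np

hbar = B = 1.0
# matrix of H in the basis |uu>, |ud>, |du>, |dd>
H = (hbar*B/2)*np.array([[-1,  2,  0,  0],
                         [ 2, -1,  0,  0],
                         [ 0,  0,  1,  2],
                         [ 0,  0,  2,  1]], dtype=complex)
psi0 = np.array([0, 1, -1, 0], dtype=complex)/np.sqrt(2)   # (|ud> - |du>)/sqrt(2)

# e^{-i H t/hbar} from the spectral decomposition
evals, evecs = np.linalg.eigh(H)
t = 0.37
U = evecs @ np.diag(np.exp(-1j*evals*t/hbar)) @ evecs.conj().T
psi_numeric = U @ psi0

# closed form from above
s, c = 1j*np.sin(B*t), np.cos(B*t)
psi_closed = (1/np.sqrt(2))*np.array([-s*np.exp(1j*B*t/2),    # |uu>
                                       c*np.exp(1j*B*t/2),    # |ud>
                                      -c*np.exp(-1j*B*t/2),   # |du>
                                       s*np.exp(-1j*B*t/2)])  # |dd>

print(np.allclose(psi_numeric, psi_closed))   # True
```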

This doesn’t exactly seem like a quick answer question, at least to me. Is there some easier way to do it?