
Quantum SHO ladder operators as a diagonal change of basis for the Heisenberg EOMs

August 19, 2015 phy1520


Many authors pull the definitions of the raising and lowering (or ladder) operators out of their butt with no attempt at motivation. This is pointed out nicely in [1] by Eli along with one justification based on factoring the Hamiltonian.

The text [2] is a small exception to the usual presentation. There these operators are defined as usual, with no motivation. However, after the utility of these operators has been shown, they reappear in a context that does provide that missing motivation as a side effect.
It doesn’t look like the author was trying to provide a motivation, but it can be interpreted that way.

When seeking the time evolution of Heisenberg-picture position and momentum operators, we will see that those solutions can be trivially expressed using the raising and lowering operators. No special tools nor black magic are required to find the structure of these operators. Unfortunately, we must first switch to the Heisenberg picture representation of the position and momentum operators, and employ the Heisenberg equations of motion. Neither of these fits into the standard narrative of most introductory quantum mechanics treatments. We will also see that these raising and lowering “operators” could also be introduced in classical mechanics, provided we were attempting to solve the SHO system using the Hamiltonian equations of motion.

I’ll outline this route to finding the structure of the ladder operators below. Because these are encountered trying to solve the time evolution problem, I’ll first show a simpler way to solve that problem. Because that simpler method depends a bit on lucky observation and is somewhat unstructured, I’ll then outline a more structured procedure that leads to the ladder operators directly, also providing the solution to the time evolution problem as a side effect.

The starting point is the Heisenberg equations of motion. For a time independent Hamiltonian \( H \), and a Heisenberg operator \( A^{(H)} \), those equations are

\begin{equation}\label{eqn:harmonicOscDiagonalize:20}
\ddt{A^{(H)}} = \inv{i \Hbar} \antisymmetric{A^{(H)}}{H}.
\end{equation}

Here the Heisenberg operator \( A^{(H)} \) is related to the Schrodinger operator \( A^{(S)} \) by

\begin{equation}\label{eqn:harmonicOscDiagonalize:60}
A^{(H)} = U^\dagger A^{(S)} U,
\end{equation}

where \( U \) is the time evolution operator. For this discussion, we need only know that \( U \) commutes with \( H \), and do not need to know the specific structure of that operator. In particular, the Heisenberg equations of motion take the form

\begin{equation}\label{eqn:harmonicOscDiagonalize:80}
\begin{aligned}
\ddt{A^{(H)}}
&= \inv{i \Hbar}
\antisymmetric{A^{(H)}}{H} \\
&= \inv{i \Hbar}
\antisymmetric{U^\dagger A^{(S)} U}{H} \\
&= \inv{i \Hbar}
\lr{
U^\dagger A^{(S)} U H
- H U^\dagger A^{(S)} U
} \\
&= \inv{i \Hbar}
\lr{
U^\dagger A^{(S)} H U
- U^\dagger H A^{(S)} U
} \\
&= \inv{i \Hbar} U^\dagger \antisymmetric{A^{(S)}}{H} U.
\end{aligned}
\end{equation}

The Hamiltonian for the harmonic oscillator, with Schrodinger-picture position and momentum operators \( x, p \) is

\begin{equation}\label{eqn:harmonicOscDiagonalize:40}
H = \frac{p^2}{2m} + \inv{2} m \omega^2 x^2,
\end{equation}

so the equations of motion are

\begin{equation}\label{eqn:harmonicOscDiagonalize:100}
\begin{aligned}
\ddt{x^{(H)}}
&= \inv{i \Hbar} U^\dagger \antisymmetric{x}{H} U \\
&= \inv{i \Hbar} U^\dagger \antisymmetric{x}{\frac{p^2}{2m}} U \\
&= \inv{2 m i \Hbar} U^\dagger \lr{ i \Hbar \PD{p}{p^2} } U \\
&= \inv{m } U^\dagger p U \\
&= \inv{m } p^{(H)},
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:harmonicOscDiagonalize:120}
\begin{aligned}
\ddt{p^{(H)}}
&= \inv{i \Hbar} U^\dagger \antisymmetric{p}{H} U \\
&= \inv{i \Hbar} U^\dagger \antisymmetric{p}{\inv{2} m \omega^2 x^2 } U \\
&= \frac{m \omega^2}{2 i \Hbar} U^\dagger \lr{ -i \Hbar \PD{x}{x^2} } U \\
&= -m \omega^2 U^\dagger x U \\
&= -m \omega^2 x^{(H)}.
\end{aligned}
\end{equation}

In the Heisenberg picture the equations of motion are precisely those of classical Hamiltonian mechanics, except that we are dealing with operators instead of scalars

\begin{equation}\label{eqn:harmonicOscDiagonalize:140}
\begin{aligned}
\ddt{p^{(H)}} &= -m \omega^2 x^{(H)} \\
\ddt{x^{(H)}} &= \inv{m } p^{(H)}.
\end{aligned}
\end{equation}
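For comparison, Hamilton’s equations for a classical oscillator with the same Hamiltonian are

\begin{equation}\label{eqn:harmonicOscDiagonalize:145}
\ddt{p} = -\PD{x}{H} = -m \omega^2 x, \qquad
\ddt{x} = \PD{p}{H} = \inv{m} p.
\end{equation}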

In the text the ladder operators are used to simplify the solution of these coupled equations, since they can decouple them. That’s not really required since we can solve them directly in matrix form with little work

\begin{equation}\label{eqn:harmonicOscDiagonalize:160}
\ddt{}
\begin{bmatrix}
p^{(H)} \\
x^{(H)}
\end{bmatrix}
=
\begin{bmatrix}
0 & -m \omega^2 \\
\inv{m} & 0
\end{bmatrix}
\begin{bmatrix}
p^{(H)} \\
x^{(H)}
\end{bmatrix},
\end{equation}

or, with length scaled variables

\begin{equation}\label{eqn:harmonicOscDiagonalize:180}
\begin{aligned}
\ddt{}
\begin{bmatrix}
\frac{p^{(H)}}{m \omega} \\
x^{(H)}
\end{bmatrix}
&=
\begin{bmatrix}
0 & -\omega \\
\omega & 0
\end{bmatrix}
\begin{bmatrix}
\frac{p^{(H)}}{m \omega} \\
x^{(H)}
\end{bmatrix} \\
&=
-i \omega
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
\begin{bmatrix}
\frac{p^{(H)}}{m \omega} \\
x^{(H)}
\end{bmatrix} \\
&=
-i \omega
\sigma_y
\begin{bmatrix}
\frac{p^{(H)}}{m \omega} \\
x^{(H)}
\end{bmatrix}.
\end{aligned}
\end{equation}

Writing \( y = \begin{bmatrix} \frac{p^{(H)}}{m \omega} \\ x^{(H)} \end{bmatrix} \), the solution can then be written immediately as

\begin{equation}\label{eqn:harmonicOscDiagonalize:200}
\begin{aligned}
y(t)
&=
\exp\lr{ -i \omega \sigma_y t } y(0) \\
&=
\lr{ \cos \lr{ \omega t } I - i \sigma_y \sin\lr{ \omega t } } y(0) \\
&=
\begin{bmatrix}
\cos\lr{ \omega t } & -\sin\lr{ \omega t } \\
\sin\lr{ \omega t } & \cos\lr{ \omega t }
\end{bmatrix}
y(0),
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:harmonicOscDiagonalize:220}
\begin{aligned}
\frac{p^{(H)}(t)}{m \omega} &= \cos\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} - \sin\lr{ \omega t } x^{(H)}(0) \\
x^{(H)}(t) &= \sin\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} + \cos\lr{ \omega t } x^{(H)}(0).
\end{aligned}
\end{equation}
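As a check, differentiating the second of these gives

\begin{equation}\label{eqn:harmonicOscDiagonalize:230}
\ddt{x^{(H)}(t)}
= \omega \cos\lr{ \omega t } \frac{p^{(H)}(0)}{m \omega} - \omega \sin\lr{ \omega t } x^{(H)}(0)
= \inv{m} p^{(H)}(t),
\end{equation}

as required by \ref{eqn:harmonicOscDiagonalize:140}.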

This solution depends on being lucky enough to recognize that the matrix has a Pauli matrix as a factor (which squares to unity, allowing the exponential to be evaluated easily).
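Explicitly, since \( \sigma_y^2 = I \), the even and odd powers of the exponential series collapse separately into cosine and sine terms

\begin{equation}\label{eqn:harmonicOscDiagonalize:205}
\exp\lr{ -i \omega \sigma_y t }
= I \sum_{k=0}^\infty \frac{(-1)^k (\omega t)^{2k}}{(2k)!}
- i \sigma_y \sum_{k=0}^\infty \frac{(-1)^k (\omega t)^{2k+1}}{(2k+1)!}
= I \cos\lr{ \omega t } - i \sigma_y \sin\lr{ \omega t }.
\end{equation}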

If we hadn’t been that observant, then the first tool we’d have used instead would have been to diagonalize the matrix. For such diagonalization, it’s natural to work in completely dimensionless variables. Such a non-dimensionalisation can be had by defining

\begin{equation}\label{eqn:harmonicOscDiagonalize:240}
x_0 = \sqrt{\frac{\Hbar}{m \omega}},
\end{equation}

and dividing the working (operator) variables through by that length scale. Let \( z = \inv{x_0} y \), and \( \tau = \omega t \), so that the equations of motion are

\begin{equation}\label{eqn:harmonicOscDiagonalize:260}
\frac{dz}{d\tau}
=
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
z.
\end{equation}

This matrix can be diagonalized as

\begin{equation}\label{eqn:harmonicOscDiagonalize:280}
A
=
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
=
V
\begin{bmatrix}
i & 0 \\
0 & -i
\end{bmatrix}
V^{-1},
\end{equation}

where

\begin{equation}\label{eqn:harmonicOscDiagonalize:300}
V =
\inv{\sqrt{2}}
\begin{bmatrix}
i & -i \\
1 & 1
\end{bmatrix}.
\end{equation}
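It is easy to verify that the columns of \( V \) are eigenvectors of this matrix for the eigenvalues \( \pm i \) respectively

\begin{equation}\label{eqn:harmonicOscDiagonalize:310}
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
i \\
1
\end{bmatrix}
=
i
\begin{bmatrix}
i \\
1
\end{bmatrix},
\qquad
\begin{bmatrix}
0 & -1 \\
1 & 0
\end{bmatrix}
\begin{bmatrix}
-i \\
1
\end{bmatrix}
=
-i
\begin{bmatrix}
-i \\
1
\end{bmatrix}.
\end{equation}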

The equations of motion can now be written

\begin{equation}\label{eqn:harmonicOscDiagonalize:320}
\frac{d}{d\tau} \lr{ V^{-1} z } =
\begin{bmatrix}
i & 0 \\
0 & -i
\end{bmatrix}
\lr{ V^{-1} z }.
\end{equation}

This final change of variables \( V^{-1} z \) decouples the system as desired. Expanding that gives

\begin{equation}\label{eqn:harmonicOscDiagonalize:340}
\begin{aligned}
V^{-1} z
&=
\inv{\sqrt{2}}
\begin{bmatrix}
-i & 1 \\
i & 1
\end{bmatrix}
\begin{bmatrix}
\frac{p^{(H)}}{x_0 m \omega} \\
\frac{x^{(H)}}{x_0}
\end{bmatrix} \\
&=
\inv{\sqrt{2} x_0}
\begin{bmatrix}
-i \frac{p^{(H)}}{m \omega} + x^{(H)} \\
i \frac{p^{(H)}}{m \omega} + x^{(H)}
\end{bmatrix} \\
&=
\begin{bmatrix}
a^\dagger \\
a
\end{bmatrix},
\end{aligned}
\end{equation}

where
\begin{equation}\label{eqn:harmonicOscDiagonalize:n}
\begin{aligned}
a^\dagger &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ -i \frac{p^{(H)}}{m \omega} + x^{(H)} } \\
a &= \sqrt{\frac{m \omega}{2 \Hbar}} \lr{ i \frac{p^{(H)}}{m \omega} + x^{(H)} }.
\end{aligned}
\end{equation}
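A direct computation using \( \antisymmetric{x^{(H)}}{p^{(H)}} = i \Hbar \) shows that these satisfy the familiar ladder operator commutation relation

\begin{equation}\label{eqn:harmonicOscDiagonalize:345}
\antisymmetric{a}{a^\dagger}
= \frac{m \omega}{2 \Hbar}
\antisymmetric{ x^{(H)} + i \frac{p^{(H)}}{m \omega} }{ x^{(H)} - i \frac{p^{(H)}}{m \omega} }
= \frac{m \omega}{2 \Hbar} \lr{ -\frac{2 i}{m \omega} } \antisymmetric{ x^{(H)} }{ p^{(H)} }
= 1.
\end{equation}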

Lo and behold, we have the standard form of the raising and lowering operators, and can write the system equations as

\begin{equation}\label{eqn:harmonicOscDiagonalize:360}
\begin{aligned}
\ddt{a^\dagger} &= i \omega a^\dagger \\
\ddt{a} &= -i \omega a.
\end{aligned}
\end{equation}
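These decoupled equations integrate immediately, giving

\begin{equation}\label{eqn:harmonicOscDiagonalize:365}
a^\dagger(t) = a^\dagger(0) e^{i \omega t}, \qquad
a(t) = a(0) e^{-i \omega t},
\end{equation}

which is equivalent to the time evolution solution \ref{eqn:harmonicOscDiagonalize:220} found above.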

It is actually a bit fluky that this matched exactly, since we could have chosen eigenvectors that differ by constant phase factors, like

\begin{equation}\label{eqn:harmonicOscDiagonalize:380}
V = \inv{\sqrt{2}}
\begin{bmatrix}
i e^{i\phi} & -i e^{i \psi} \\
e^{i\phi} & e^{i \psi}
\end{bmatrix},
\end{equation}

so

\begin{equation}\label{eqn:harmonicOscDiagonalize:341}
\begin{aligned}
V^{-1} z
&=
\frac{e^{-i(\phi + \psi)}}{\sqrt{2}}
\begin{bmatrix}
-i e^{i\psi} & e^{i \psi} \\
i e^{i\phi} & e^{i \phi}
\end{bmatrix}
\begin{bmatrix}
\frac{p^{(H)}}{x_0 m \omega} \\
\frac{x^{(H)}}{x_0}
\end{bmatrix} \\
&=
\inv{\sqrt{2} x_0}
\begin{bmatrix}
-i e^{-i\phi} \frac{p^{(H)}}{m \omega} + e^{-i\phi} x^{(H)} \\
i e^{-i\psi} \frac{p^{(H)}}{m \omega} + e^{-i\psi} x^{(H)}
\end{bmatrix} \\
&=
\begin{bmatrix}
e^{-i\phi} a^\dagger \\
e^{-i\psi} a
\end{bmatrix}.
\end{aligned}
\end{equation}

To make the resulting pairs of operators Hermitian conjugates, we’d want to constrain those constant phase factors by setting \( \phi = -\psi \). If we were only interested in solving the time evolution problem, no such additional constraint is required.

The raising and lowering operators are seen to naturally occur when seeking the solution of the Heisenberg equations of motion. This is found using the standard technique of non-dimensionalisation and then seeking a change of basis that diagonalizes the system matrix. Because the Heisenberg equations of motion are identical to the classical Hamiltonian equations of motion in this case, what we call the raising and lowering operators in quantum mechanics could also be utilized in the classical simple harmonic oscillator problem. However, in a classical context we wouldn’t have a justification to call this more than a change of basis.

References

[1] Eli Lansey. The Quantum Harmonic Oscillator Ladder Operators, 2009. URL http://behindtheguesses.blogspot.ca/2009/03/quantum-harmonic-oscillator-ladder.html. [Online; accessed 18-August-2015].

[2] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics, chapter “Time Development of the Oscillator”. Pearson Higher Ed, 2014.

Heisenberg picture position commutator

August 14, 2015 phy1520


Question: Heisenberg picture position commutator ([1] pr. 2.5)

Evaluate

\begin{equation}\label{eqn:positionCommutator:20}
\antisymmetric{x(t)}{x(0)},
\end{equation}

for a Heisenberg picture operator \( x(t) \) for a free particle.

Answer

The free particle Hamiltonian is

\begin{equation}\label{eqn:positionCommutator:40}
H = \frac{p^2}{2m},
\end{equation}

so the time evolution operator is

\begin{equation}\label{eqn:positionCommutator:60}
U(t) = e^{-i p^2 t/(2 m \Hbar)}.
\end{equation}

The Heisenberg picture position operator is

\begin{equation}\label{eqn:positionCommutator:80}
\begin{aligned}
x^\textrm{H}
&= U^\dagger x U \\
&= e^{i p^2 t/(2 m \Hbar)} x e^{-i p^2 t/(2 m \Hbar)} \\
&= \sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i p^2 t}{2 m \Hbar} }^k
x
e^{-i p^2 t/(2 m \Hbar)} \\
&= \sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k p^{2k} x
e^{-i p^2 t/(2 m \Hbar)} \\
&=
\sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k \lr{ \antisymmetric{p^{2k}}{x} + x p^{2k} }
e^{-i p^2 t/(2 m \Hbar)} \\
&= x +
\sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k \antisymmetric{p^{2k}}{x}
e^{-i p^2 t/(2 m \Hbar)} \\
&= x +
\sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k \lr{ -i \Hbar \PD{p}{p^{2k}} }
e^{-i p^2 t/(2 m \Hbar)} \\
&= x +
\sum_{k = 0}^\infty \inv{k!} \lr{ \frac{i t}{2 m \Hbar} }^k \lr{ -i \Hbar 2 k p^{2 k -1} }
e^{-i p^2 t/(2 m \Hbar)} \\
&= x
- 2 i \Hbar p \frac{i t}{2 m \Hbar} \sum_{k = 1}^\infty \inv{(k-1)!} \lr{ \frac{i t}{2 m \Hbar} }^{k-1} p^{2(k - 1)}
e^{-i p^2 t/(2 m \Hbar)} \\
&= x + t \frac{p}{m}.
\end{aligned}
\end{equation}

This has the structure of a classical free particle \( x(t) = x + v t \), but in this case \( x,p \) are operators.
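The same result follows more directly from the Heisenberg equations of motion: since \( \antisymmetric{p}{H} = 0 \), the momentum operator is a constant of the motion, and

\begin{equation}\label{eqn:positionCommutator:90}
\ddt{x(t)} = \inv{i \Hbar} \antisymmetric{x(t)}{H} = \frac{p}{m},
\end{equation}

which integrates to \( x(t) = x + t p/m \).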

The evolved position commutator is
\begin{equation}\label{eqn:positionCommutator:100}
\begin{aligned}
\antisymmetric{x(t)}{x(0)}
&=
\antisymmetric{x + t p/m}{x} \\
&=
\frac{t}{m} \antisymmetric{p}{x} \\
&=
-i \Hbar \frac{t}{m}.
\end{aligned}
\end{equation}

Compare this to the classical Poisson bracket
\begin{equation}\label{eqn:positionCommutator:120}
\antisymmetric{x(t)}{x(0)}_{\textrm{classical}}
=
\PD{x}{}\lr{x + p t/m} \PD{p}{x} - \PD{p}{}\lr{x + p t/m} \PD{x}{x}
=
- \frac{t}{m}.
\end{equation}

This has the expected relation \( \antisymmetric{x(t)}{x(0)} = i \Hbar \antisymmetric{x(t)}{x(0)}_{\textrm{classical}} \).

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Translation operator problems

August 7, 2015 phy1520


Question: One dimensional translation operator. ([1] pr. 1.28)

(a)

Evaluate the classical Poisson bracket

\begin{equation}\label{eqn:translation:420}
\antisymmetric{x}{F(p)}_{\textrm{classical}}
\end{equation}

(b)

Evaluate the commutator

\begin{equation}\label{eqn:translation:440}
\antisymmetric{x}{e^{i p a/\Hbar}}
\end{equation}

(c)

Using the result from part (b), prove that
\begin{equation}\label{eqn:translation:460}
e^{i p a/\Hbar} \ket{x'},
\end{equation}

is an eigenstate of the coordinate operator \( x \).

Answer

(a)

\begin{equation}\label{eqn:translation:480}
\begin{aligned}
\antisymmetric{x}{F(p)}_{\textrm{classical}}
&=
\PD{x}{x} \PD{p}{F(p)} - \PD{p}{x} \PD{x}{F(p)} \\
&=
\PD{p}{F(p)}.
\end{aligned}
\end{equation}

(b)

Having worked backwards through these problems, the answer for this one dimensional problem can be obtained from \ref{eqn:translation:140} by setting \( l = -a \), so that \( J(-a) = e^{i p a/\Hbar} \), and is

\begin{equation}\label{eqn:translation:500}
\antisymmetric{x}{e^{i p a/\Hbar}} = -a e^{i p a/\Hbar}.
\end{equation}
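This can also be verified directly using the general result \( \antisymmetric{x}{G(p)} = i \Hbar \PD{p}{G} \) of the next problem

\begin{equation}\label{eqn:translation:510}
\antisymmetric{x}{e^{i p a/\Hbar}}
= i \Hbar \PD{p}{} e^{i p a/\Hbar}
= i \Hbar \lr{ \frac{i a}{\Hbar} } e^{i p a/\Hbar}
= -a e^{i p a/\Hbar}.
\end{equation}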

(c)

\begin{equation}\label{eqn:translation:520}
\begin{aligned}
x e^{i p a/\Hbar} \ket{x'}
&=
\lr{
\antisymmetric{x}{e^{i p a/\Hbar}}
+
e^{i p a/\Hbar} x
}
\ket{x'} \\
&=
\lr{ -a e^{i p a/\Hbar} + e^{i p a/\Hbar} x' } \ket{x'} \\
&= \lr{ x' - a } e^{i p a/\Hbar} \ket{x'}.
\end{aligned}
\end{equation}

This demonstrates that \( e^{i p a/\Hbar} \ket{x'} \) is an eigenstate of \( x \) with eigenvalue \( x' - a \).

Question: Polynomial commutators. ([1] pr. 1.29)

(a)

For power series \( F, G \), verify

\begin{equation}\label{eqn:translation:180}
\antisymmetric{x_k}{G(\Bp)} = i \Hbar \PD{p_k}{G}, \qquad
\antisymmetric{p_k}{F(\Bx)} = -i \Hbar \PD{x_k}{F}.
\end{equation}

(b)

Evaluate \( \antisymmetric{x^2}{p^2} \), and compare to the classical Poisson bracket \( \antisymmetric{x^2}{p^2}_{\textrm{classical}} \).

Answer

(a)

Let

\begin{equation}\label{eqn:translation:200}
\begin{aligned}
G(\Bp) &= \sum_{k l m} a_{k l m} p_1^k p_2^l p_3^m \\
F(\Bx) &= \sum_{k l m} b_{k l m} x_1^k x_2^l x_3^m.
\end{aligned}
\end{equation}

It is simpler to work with a specific \( x_k \), say \( x_k = y \); the validity of the general result will still be clear. Expanding the commutator gives

\begin{equation}\label{eqn:translation:220}
\begin{aligned}
\antisymmetric{y}{G(\Bp)}
&=
\sum_{k l m} a_{k l m} \antisymmetric{y}{p_1^k p_2^l p_3^m } \\
&=
\sum_{k l m} a_{k l m} \lr{
y p_1^k p_2^l p_3^m - p_1^k p_2^l p_3^m y
} \\
&=
\sum_{k l m} a_{k l m} \lr{
p_1^k y p_2^l p_3^m - p_1^k p_2^l y p_3^m
} \\
&=
\sum_{k l m} a_{k l m}
p_1^k
\antisymmetric{y}{p_2^l}
p_3^m.
\end{aligned}
\end{equation}

From \ref{eqn:translation:100}, we have \( \antisymmetric{y}{p_2^l} = l i \Hbar p_2^{l-1} \), so

\begin{equation}\label{eqn:translation:240}
\begin{aligned}
\antisymmetric{y}{G(\Bp)}
&=
\sum_{k l m} a_{k l m}
p_1^k
\lr{ l i \Hbar p_2^{l-1} }
p_3^m \\
&=
i \Hbar \PD{p_y}{G(\Bp)}.
\end{aligned}
\end{equation}

It is straightforward to show that
\( \antisymmetric{p}{x^l} = -l i \Hbar x^{l-1} \), allowing for a similar computation of the momentum commutator

\begin{equation}\label{eqn:translation:260}
\begin{aligned}
\antisymmetric{p_y}{F(\Bx)}
&=
\sum_{k l m} b_{k l m} \antisymmetric{p_y}{x_1^k x_2^l x_3^m } \\
&=
\sum_{k l m} b_{k l m} \lr{
p_y x_1^k x_2^l x_3^m - x_1^k x_2^l x_3^m p_y
} \\
&=
\sum_{k l m} b_{k l m} \lr{
x_1^k p_y x_2^l x_3^m - x_1^k x_2^l p_y x_3^m
} \\
&=
\sum_{k l m} b_{k l m}
x_1^k
\antisymmetric{p_y}{x_2^l}
x_3^m \\
&=
\sum_{k l m} b_{k l m}
x_1^k
\lr{ -l i \Hbar x_2^{l-1}}
x_3^m \\
&=
-i \Hbar \PD{y}{F(\Bx)}.
\end{aligned}
\end{equation}
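For completeness, the identity \( \antisymmetric{p}{x^l} = -l i \Hbar x^{l-1} \) used above follows from repeated application of \( \antisymmetric{A}{B C} = \antisymmetric{A}{B} C + B \antisymmetric{A}{C} \)

\begin{equation}\label{eqn:translation:270}
\antisymmetric{p_y}{x_2^l}
= \antisymmetric{p_y}{x_2} x_2^{l-1} + x_2 \antisymmetric{p_y}{x_2^{l-1}}
= -i \Hbar x_2^{l-1} + x_2 \antisymmetric{p_y}{x_2^{l-1}}
= \cdots
= -l i \Hbar x_2^{l-1}.
\end{equation}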

(b)

It isn’t clear to me how the results above can be used directly to compute \( \antisymmetric{x^2}{p^2} \). However, when the first term of such a commutator is a monomial, it can be expanded in terms of an \( x \) commutator

\begin{equation}\label{eqn:translation:280}
\begin{aligned}
\antisymmetric{x^2}{G(\Bp)}
&= x^2 G - G x^2 \\
&= x \lr{ x G } - G x^2 \\
&= x \lr{ \antisymmetric{ x }{ G } + G x } - G x^2 \\
&= x \antisymmetric{ x }{ G } + \lr{ x G } x - G x^2 \\
&= x \antisymmetric{ x }{ G } + \lr{ \antisymmetric{ x }{ G } + G x } x - G x^2 \\
&= x \antisymmetric{ x }{ G } + \antisymmetric{ x }{ G } x.
\end{aligned}
\end{equation}

Similarly,

\begin{equation}\label{eqn:translation:300}
\antisymmetric{x^3}{G(\Bp)} = x^2 \antisymmetric{ x }{ G } + x \antisymmetric{ x }{ G } x + \antisymmetric{ x }{ G } x^2.
\end{equation}

An induction hypothesis can be formed

\begin{equation}\label{eqn:translation:320}
\antisymmetric{x^k}{G(\Bp)} = \sum_{j = 0}^{k-1} x^{k-1-j} \antisymmetric{ x }{ G } x^j,
\end{equation}

and demonstrated

\begin{equation}\label{eqn:translation:340}
\begin{aligned}
\antisymmetric{x^{k+1}}{G(\Bp)}
&=
x^{k+1} G - G x^{k+1} \\
&=
x \lr{ x^{k} G } - G x^{k+1} \\
&=
x \lr{ \antisymmetric{x^{k}}{G} + G x^k } - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \lr{ x G } x^k - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \lr{ \antisymmetric{x}{G} + G x } x^k - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \antisymmetric{x}{G} x^k \\
&=
x \sum_{j = 0}^{k-1} x^{k-1-j} \antisymmetric{ x }{ G } x^j + \antisymmetric{x}{G} x^k \\
&=
\sum_{j = 0}^{k-1} x^{(k+1)-1-j} \antisymmetric{ x }{ G } x^j + \antisymmetric{x}{G} x^k \\
&=
\sum_{j = 0}^{k} x^{(k+1)-1-j} \antisymmetric{ x }{ G } x^j.
\end{aligned}
\end{equation}

That was a bit overkill for this problem, but may be useful later. Application of this to the problem gives

\begin{equation}\label{eqn:translation:360}
\begin{aligned}
\antisymmetric{x^2}{p^2}
&=
x \antisymmetric{x}{p^2}
+ \antisymmetric{x}{p^2} x \\
&=
&= x i \Hbar \PD{p}{p^2}
+ i \Hbar \PD{p}{p^2} x \\
&=
x 2 i \Hbar p
+ 2 i \Hbar p x \\
&= i \Hbar \lr{ 2 x p + 2 p x }.
\end{aligned}
\end{equation}

The classical Poisson bracket is
\begin{equation}\label{eqn:translation:380}
\begin{aligned}
\antisymmetric{x^2}{p^2}_{\textrm{classical}}
&=
\PD{x}{x^2} \PD{p}{p^2} - \PD{p}{x^2} \PD{x}{p^2} \\
&=
\lr{ 2 x } \lr{ 2 p } \\
&= 2 x p + 2 p x.
\end{aligned}
\end{equation}

This demonstrates the expected relation between the classical and quantum commutators

\begin{equation}\label{eqn:translation:400}
\antisymmetric{x^2}{p^2} = i \Hbar \antisymmetric{x^2}{p^2}_{\textrm{classical}}.
\end{equation}

Question: Translation operator and position expectation. ([1] pr. 1.30)

The translation operator for a finite spatial displacement is given by

\begin{equation}\label{eqn:translation:20}
J(\Bl) = \exp\lr{ -i \Bp \cdot \Bl/\Hbar },
\end{equation}

where \( \Bp \) is the momentum operator.

(a)

Evaluate

\begin{equation}\label{eqn:translation:40}
\antisymmetric{x_i}{J(\Bl)}.
\end{equation}

(b)

Demonstrate how the expectation value \( \expectation{\Bx} \) changes under translation.

Answer

(a)

For clarity, let’s set \( x_i = y \); the general result will be clear despite this specialization.

\begin{equation}\label{eqn:translation:60}
\antisymmetric{y}{J(\Bl)}
=
\sum_{k = 0}^\infty \inv{k!} \lr{\frac{-i}{\Hbar}}^k
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}.
\end{equation}

The commutator expands as

\begin{equation}\label{eqn:translation:80}
\begin{aligned}
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}
+ \lr{ \Bp \cdot \Bl }^k y
&=
y \lr{ \Bp \cdot \Bl }^k \\
&=
y \lr{ p_x l_x + p_y l_y + p_z l_z } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ p_x l_x y + y p_y l_y + p_z l_z y } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ p_x l_x y + l_y \lr{ p_y y + i \Hbar } + p_z l_z y } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ \Bp \cdot \Bl } y \lr{ \Bp \cdot \Bl }^{k-1}
+ i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&= \cdots \\
&=
\lr{ \Bp \cdot \Bl }^{k-1} y \lr{ \Bp \cdot \Bl }^{k-(k-1)}
+ (k-1) i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ \Bp \cdot \Bl }^{k} y
+ k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1}.
\end{aligned}
\end{equation}

In the above expansion, the commutation of \( y \) with \( p_x, p_z \) has been used. This gives, for \( k \ne 0 \),

\begin{equation}\label{eqn:translation:100}
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}
=
k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1}.
\end{equation}

Note that this also holds for the \( k = 0 \) case, since \( y \) commutes with the identity operator. Plugging back into the \( J \) commutator, we have

\begin{equation}\label{eqn:translation:120}
\begin{aligned}
\antisymmetric{y}{J(\Bl)}
&=
\sum_{k = 1}^\infty \inv{k!} \lr{\frac{-i}{\Hbar}}^k
k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
l_y \sum_{k = 1}^\infty \inv{(k-1)!} \lr{\frac{-i}{\Hbar}}^{k-1}
\lr{ \Bp \cdot \Bl }^{k-1} \\
&=
l_y J(\Bl).
\end{aligned}
\end{equation}

The same pattern clearly applies with the other \( x_i \) values, providing the desired relation.

\begin{equation}\label{eqn:translation:140}
\antisymmetric{\Bx}{J(\Bl)} = \sum_{m = 1}^3 \Be_m l_m J(\Bl) = \Bl J(\Bl).
\end{equation}

(b)

Suppose that the translated state is defined as \( \ket{\alpha_{\Bl}} = J(\Bl) \ket{\alpha} \). The expectation value with respect to this state is

\begin{equation}\label{eqn:translation:160}
\begin{aligned}
\expectation{\Bx'}
&=
\bra{\alpha_{\Bl}} \Bx \ket{\alpha_{\Bl}} \\
&=
\bra{\alpha} J^\dagger(\Bl) \Bx J(\Bl) \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger(\Bl) \lr{ \Bx J(\Bl) } \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger(\Bl) \lr{ J(\Bl) \Bx + \Bl J(\Bl) } \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger J \Bx + \Bl J^\dagger J \ket{\alpha} \\
&=
\bra{\alpha} \Bx \ket{\alpha} + \Bl \braket{\alpha}{\alpha} \\
&=
\expectation{\Bx} + \Bl.
\end{aligned}
\end{equation}

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Update to old phy356 (Quantum Mechanics I) notes.

February 12, 2015 math and physics play

It’s been a long time since I took QM I. My notes from that class were pretty rough, but I’ve cleaned them up a bit.

The main value to these notes is that I worked a number of introductory Quantum Mechanics problems.

These were my personal lecture notes for the Fall 2010, University of Toronto Quantum mechanics I course (PHY356H1F), taught by Prof. Vatche Deyirmenjian.

The official description of this course was:

The general structure of wave mechanics; eigenfunctions and eigenvalues; operators; orbital angular momentum; spherical harmonics; central potential; separation of variables, hydrogen atom; Dirac notation; operator methods; harmonic oscillator and spin.

This document contains a few things

• My lecture notes.
Typos, if any, are probably mine (Peeter), and no claim of spelling or grammar correctness is made. I chose not to take notes for the first four lectures since they followed the text very closely.
• Notes from reading of the text. This includes observations, notes on what seem like errors, and some solved problems. None of these problems have been graded. Note that my informal errata sheet for the text has been separated out from this document.
• Some assigned problems. I have corrected some of the errors after receiving grading feedback, and where I have not done so I at least recorded some of the grading comments as a reference.
• Some worked problems associated with exam preparation.