L_z and L^2 eigenvalues and probabilities for a given wave function

December 13, 2015 phy1520


Q: ([1] pr. 3.17)

Given a wave function

\begin{equation}\label{eqn:LsquaredLzProblem:20}
\psi(r,\theta, \phi) = f(r) \lr{ x + y + 3 z },
\end{equation}

  • (a) Determine if this wave function is an eigenfunction of \( \BL^2 \), and the value of \( l \) if it is an eigenfunction.

  • (b) Determine the probabilities for the particle to be found in any given \( \ket{l, m} \) state.
  • (c) If it is known that \( \psi \) is an energy eigenfunction with energy \( E \), indicate how we can find \( V(r) \).

A: (a)

Using
\begin{equation}\label{eqn:LsquaredLzProblem:40}
\BL^2
=
-\Hbar^2 \lr{ \inv{\sin^2\theta} \partial_{\phi\phi} + \inv{\sin\theta} \partial_\theta \lr{ \sin\theta \partial_\theta} },
\end{equation}

and

\begin{equation}\label{eqn:LsquaredLzProblem:60}
\begin{aligned}
x &= r \sin\theta \cos\phi \\
y &= r \sin\theta \sin\phi \\
z &= r \cos\theta
\end{aligned}
\end{equation}

it’s a quick computation to show that

\begin{equation}\label{eqn:LsquaredLzProblem:80}
\BL^2 \psi = 2 \Hbar^2 \psi = 1(1 + 1) \Hbar^2 \psi,
\end{equation}

so this function is an eigenket of \( \BL^2 \) with an eigenvalue of \( 2 \Hbar^2 \), which corresponds to \( l = 1 \), a p-orbital state.
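
This is easy to double check with a bit of sympy. A small sketch (the split into an angular factor is my own choice, not from the text):

```python
# Sketch: verify that the angular part of psi = f(r) (x + y + 3 z) is an
# eigenfunction of L^2 with eigenvalue 2 hbar^2, i.e. l = 1.
import sympy as sp

theta, phi, hbar = sp.symbols('theta phi hbar', positive=True)

# Angular factor of psi: (x + y + 3 z)/r in spherical coordinates.
g = sp.sin(theta)*(sp.cos(phi) + sp.sin(phi)) + 3*sp.cos(theta)

def L2(f):
    """Coordinate representation of L^2, from the operator above."""
    return -hbar**2*(sp.diff(f, phi, 2)/sp.sin(theta)**2
                     + sp.diff(sp.sin(theta)*sp.diff(f, theta), theta)/sp.sin(theta))

print(sp.simplify(L2(g)/g))  # 2*hbar**2
```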

(b)

Recall that the angular representation of \( L_z \) is

\begin{equation}\label{eqn:LsquaredLzProblem:100}
L_z = -i \Hbar \partial_\phi,
\end{equation}

so we have

\begin{equation}\label{eqn:LsquaredLzProblem:120}
\begin{aligned}
L_z x &= i \Hbar y \\
L_z y &= -i \Hbar x \\
L_z z &= 0.
\end{aligned}
\end{equation}

The \( L_z \) action on \( \psi \) is

\begin{equation}\label{eqn:LsquaredLzProblem:140}
L_z \psi = -i \Hbar f(r) \lr{ x - y }.
\end{equation}

This wave function is not an eigenket of \( L_z \). Expressed in terms of the \( L_z \) basis states \( e^{i m \phi} \), this wave function is

\begin{equation}\label{eqn:LsquaredLzProblem:160}
\begin{aligned}
\psi
&= r f(r) \lr{ \sin\theta \lr{ \cos\phi + \sin\phi} + 3 \cos\theta } \\
&= r f(r) \lr{ \frac{\sin\theta}{2} \lr{ e^{i \phi} \lr{ 1 + \inv{i}} + e^{-i\phi} \lr{ 1 - \inv{i} } } + 3 \cos\theta } \\
&= r f(r) \lr{
\frac{(1-i)\sin\theta}{2} e^{1 i \phi}
+
\frac{(1+i)\sin\theta}{2} e^{- 1 i \phi}
+ 3 \cos\theta e^{0 i \phi}
}.
\end{aligned}
\end{equation}

Assuming that \( \psi \) is normalized, the probabilities for measuring \( m = 1,-1,0 \) respectively are

\begin{equation}\label{eqn:LsquaredLzProblem:180}
\begin{aligned}
P_{\pm 1}
&= 2 \pi \rho \Abs{\frac{1\mp i}{2}}^2 \int_0^\pi \sin\theta d\theta \sin^2 \theta \\
&= -\pi \rho \int_1^{-1} du (1-u^2) \\
&= \pi \rho \evalrange{ \lr{ u - \frac{u^3}{3} } }{-1}{1} \\
&= \pi \rho \lr{ 2 - \frac{2}{3}} \\
&= \frac{ 4 \pi \rho}{3},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:LsquaredLzProblem:200}
P_{0} = 2 \pi \rho \cdot 3^2 \int_0^\pi \sin\theta \cos^2\theta \, d\theta = 18 \pi \rho \cdot \frac{2}{3} = 12 \pi \rho,
\end{equation}

where

\begin{equation}\label{eqn:LsquaredLzProblem:220}
\rho = \int_0^\infty r^4 \Abs{f(r)}^2 dr.
\end{equation}

Because the probabilities must sum to 1, \( 2 \lr{ 4 \pi \rho/3 } + 12 \pi \rho = 44 \pi \rho/3 = 1 \), fixing \( \rho = 3/(44 \pi) \), so \( P_{\pm 1} = 1/11 \) and \( P_0 = 9/11 \), even without knowing \( f(r) \).
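
These values are easy to cross-check numerically from the expansion coefficients, since the common \( 2 \pi \rho \) factor cancels in the ratios. A small scipy sketch:

```python
# Sketch: probabilities from the unnormalized weights of the m = 1, -1, 0 terms.
import numpy as np
from scipy import integrate

sin3 = integrate.quad(lambda t: np.sin(t)**3, 0, np.pi)[0]               # 4/3
sincos2 = integrate.quad(lambda t: np.sin(t)*np.cos(t)**2, 0, np.pi)[0]  # 2/3

w_plus = abs((1 - 1j)/2)**2*sin3
w_minus = abs((1 + 1j)/2)**2*sin3
w_zero = 3**2*sincos2
total = w_plus + w_minus + w_zero
print(w_plus/total, w_minus/total, w_zero/total)  # 1/11, 1/11, 9/11
```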

(c)

The operator \( r^2 \Bp^2 \) can be decomposed into a \( \BL^2 \) component and some other portions, from which we can write

\begin{equation}\label{eqn:LsquaredLzProblem:240}
\begin{aligned}
H \psi
&= \lr{ \frac{\Bp^2}{2m} + V(r) } \psi \\
&=
\lr{
- \frac{\Hbar^2}{2m} \lr{ \partial_{rr} + \frac{2}{r} \partial_r - \inv{\Hbar^2 r^2} \BL^2 } + V(r) } \psi.
\end{aligned}
\end{equation}

(See: [1] eq. 6.21)

In this case where \( \BL^2 \psi = 2 \Hbar^2 \psi \) we can rearrange for \( V(r) \)

\begin{equation}\label{eqn:LsquaredLzProblem:260}
\begin{aligned}
V(r)
&= E + \inv{\psi} \frac{\Hbar^2}{2m} \lr{ \partial_{rr} + \frac{2}{r} \partial_r - \frac{2}{r^2} } \psi \\
&= E + \inv{r f(r)} \frac{\Hbar^2}{2m} \lr{ \partial_{rr} + \frac{2}{r} \partial_r - \frac{2}{r^2} } \lr{ r f(r) } \\
&= E + \inv{f(r)} \frac{\Hbar^2}{2m} \lr{ \partial_{rr} + \frac{4}{r} \partial_r } f(r),
\end{aligned}
\end{equation}

where the factor of \( r \) in \( x + y + 3 z = r \lr{ \sin\theta \lr{ \cos\phi + \sin\phi } + 3 \cos\theta } \) has been grouped with \( f(r) \), since the radial derivatives act on it too.
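
As a concrete (hypothetical) example, if \( f(r) = e^{-\beta r} \), so that the radial factor \( r f(r) \) is hydrogenic \( 2p \)-like, this rearrangement produces a Coulomb potential plus a constant. A sympy sketch:

```python
# Sketch: V(r) for an assumed radial factor f(r) = exp(-beta*r).
import sympy as sp

r, beta, E, hbar, m = sp.symbols('r beta E hbar m', positive=True)
f = sp.exp(-beta*r)

V = E + hbar**2/(2*m)*sp.simplify((sp.diff(f, r, 2) + (4/r)*sp.diff(f, r))/f)
print(sp.expand(V))  # E + beta**2*hbar**2/(2*m) - 2*beta*hbar**2/(m*r)
```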

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Two spin time evolution

November 14, 2015 phy1520


Motivation

Our midterm posed a (low mark “quick question”) that I didn’t complete (or at least not properly). This shouldn’t have been a difficult question, but I spent way too much time on it, costing me time that I needed for other questions.

It turns out that there isn’t anything fancy required for this question, just perseverance and careful work.

Guts

The question asked for the time evolution of a two particle state

\begin{equation}\label{eqn:twoSpinHamiltonian:20}
\psi = \inv{\sqrt{2}} \lr{ \ket{\uparrow \downarrow} - \ket{\downarrow \uparrow} }
\end{equation}

under the action of the Hamiltonian

\begin{equation}\label{eqn:twoSpinHamiltonian:40}
H = – B S_{z,1} + 2 B S_{x,2} = \frac{\Hbar B}{2}\lr{ -\sigma_{z,1} + 2 \sigma_{x,2} } .
\end{equation}

We have to know the action of the Hamiltonian on all the states

\begin{equation}\label{eqn:twoSpinHamiltonian:60}
\begin{aligned}
H \ket{\uparrow \uparrow} &= \frac{B \Hbar}{2} \lr{ -\ket{\uparrow \uparrow} + 2 \ket{\uparrow \downarrow} } \\
H \ket{\uparrow \downarrow} &= \frac{B \Hbar}{2} \lr{ -\ket{\uparrow \downarrow} + 2 \ket{\uparrow \uparrow} } \\
H \ket{\downarrow \uparrow} &= \frac{B \Hbar}{2} \lr{ \ket{\downarrow \uparrow} + 2 \ket{\downarrow \downarrow} } \\
H \ket{\downarrow \downarrow} &= \frac{B \Hbar}{2} \lr{ \ket{\downarrow \downarrow} + 2 \ket{\downarrow \uparrow} } \\
\end{aligned}
\end{equation}

With respect to the basis \( \setlr{ \ket{\uparrow \uparrow}, \ket{\uparrow \downarrow}, \ket{\downarrow \uparrow}, \ket{\downarrow \downarrow} } \), the matrix of the Hamiltonian is

\begin{equation}\label{eqn:twoSpinHamiltonian:80}
H =
\frac{ \Hbar B }{2}
\begin{bmatrix}
-1 & 2 & 0 & 0 \\
2 & -1 & 0 & 0 \\
0 & 0 & 1 & 2 \\
0 & 0 & 2 & 1 \\
\end{bmatrix}
\end{equation}

Utilizing the block diagonal form (and ignoring the \( \Hbar B/2 \) factor for now), the characteristic equation is

\begin{equation}\label{eqn:twoSpinHamiltonian:100}
0
=
\begin{vmatrix}
-1 -\lambda & 2 \\
2 & -1 - \lambda
\end{vmatrix}
\begin{vmatrix}
1 -\lambda & 2 \\
2 & 1 - \lambda
\end{vmatrix}
=
\lr{(1 + \lambda)^2 - 4}
\lr{(1 - \lambda)^2 - 4}.
\end{equation}

This has solutions

\begin{equation}\label{eqn:twoSpinHamiltonian:120}
1 + \lambda = \pm 2, \qquad 1 - \lambda = \pm 2,
\end{equation}

or, with the \( \Hbar B/2 \) factors put back in

\begin{equation}\label{eqn:twoSpinHamiltonian:140}
\lambda = \pm \Hbar B/2 , \pm 3 \Hbar B/2.
\end{equation}
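
These are easy to double check numerically (a sketch, with \( \Hbar = B = 1 \) assumed):

```python
# Sketch: eigenvalues of the two spin Hamiltonian, hbar = B = 1.
import numpy as np

H = 0.5*np.array([[-1., 2., 0., 0.],
                  [ 2.,-1., 0., 0.],
                  [ 0., 0., 1., 2.],
                  [ 0., 0., 2., 1.]])
print(np.linalg.eigvalsh(H))  # [-1.5 -0.5  0.5  1.5]
```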

I was thinking that we needed to compute the time evolution operator

\begin{equation}\label{eqn:twoSpinHamiltonian:160}
U = e^{-i H t/\Hbar},
\end{equation}

but we actually only need the eigenvectors, and the inverse relations. We can find the eigenvectors by inspection in each case from

\begin{equation}\label{eqn:twoSpinHamiltonian:180}
\begin{aligned}
H - (1) \frac{ \Hbar B }{2} I
&=
\frac{ \Hbar B }{2}
\begin{bmatrix}
-2 & 2 & 0 & 0 \\
2 & -2 & 0 & 0 \\
0 & 0 & 0 & 2 \\
0 & 0 & 2 & 0 \\
\end{bmatrix} \\
H - (-1) \frac{ \Hbar B }{2} I
&=
\frac{ \Hbar B }{2}
\begin{bmatrix}
0 & 2 & 0 & 0 \\
2 & 0 & 0 & 0 \\
0 & 0 & 2 & 2 \\
0 & 0 & 2 & 2 \\
\end{bmatrix} \\
H - (3) \frac{ \Hbar B }{2} I
&=
\frac{ \Hbar B }{2}
\begin{bmatrix}
-4 & 2 & 0 & 0 \\
2 & -4 & 0 & 0 \\
0 & 0 &-2 & 2 \\
0 & 0 & 2 &-2 \\
\end{bmatrix} \\
H - (-3) \frac{ \Hbar B }{2} I
&=
\frac{ \Hbar B }{2}
\begin{bmatrix}
2 & 2 & 0 & 0 \\
2 & 2 & 0 & 0 \\
0 & 0 & 4 & 2 \\
0 & 0 & 2 & 4 \\
\end{bmatrix}.
\end{aligned}
\end{equation}

The eigenkets are

\begin{equation}\label{eqn:twoSpinHamiltonian:280}
\begin{aligned}
\ket{1} &=
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1 \\
0 \\
0 \\
\end{bmatrix} \\
\ket{-1} &=
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
0 \\
1 \\
-1 \\
\end{bmatrix} \\
\ket{3} &=
\inv{\sqrt{2}}
\begin{bmatrix}
0 \\
0 \\
1 \\
1 \\
\end{bmatrix} \\
\ket{-3} &=
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
-1 \\
0 \\
0 \\
\end{bmatrix},
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:twoSpinHamiltonian:300}
\begin{aligned}
\sqrt{2} \ket{1} &= \ket{\uparrow \uparrow} + \ket{\uparrow \downarrow} \\
\sqrt{2} \ket{-1} &= \ket{\downarrow \uparrow} - \ket{\downarrow \downarrow} \\
\sqrt{2} \ket{3} &= \ket{\downarrow \uparrow} + \ket{\downarrow \downarrow} \\
\sqrt{2} \ket{-3} &= \ket{\uparrow \uparrow} - \ket{\uparrow \downarrow}.
\end{aligned}
\end{equation}

We can invert these

\begin{equation}\label{eqn:twoSpinHamiltonian:220}
\begin{aligned}
\ket{\uparrow \uparrow} &= \inv{\sqrt{2}} \lr{ \ket{1} + \ket{-3} } \\
\ket{\uparrow \downarrow} &= \inv{\sqrt{2}} \lr{ \ket{1} - \ket{-3} } \\
\ket{\downarrow \uparrow} &= \inv{\sqrt{2}} \lr{ \ket{3} + \ket{-1} } \\
\ket{\downarrow \downarrow} &= \inv{\sqrt{2}} \lr{ \ket{3} - \ket{-1} }.
\end{aligned}
\end{equation}

The original state of interest can now be expressed in terms of the eigenkets

\begin{equation}\label{eqn:twoSpinHamiltonian:240}
\psi
=
\inv{2} \lr{
\ket{1} - \ket{-3} -
\ket{3} - \ket{-1}
}.
\end{equation}

The time evolution of this ket is

\begin{equation}\label{eqn:twoSpinHamiltonian:260}
\begin{aligned}
\psi(t)
&=
\inv{2}
\lr{
e^{-i B t/2} \ket{1}
- e^{3 i B t/2} \ket{-3}
- e^{-3 i B t/2} \ket{3}
- e^{i B t/2} \ket{-1}
} \\
&=
\inv{2 \sqrt{2}}
\Biglr{
e^{-i B t/2} \lr{ \ket{\uparrow \uparrow} + \ket{\uparrow \downarrow} }
- e^{3 i B t/2} \lr{ \ket{\uparrow \uparrow} - \ket{\uparrow \downarrow} }
- e^{-3 i B t/2} \lr{ \ket{\downarrow \uparrow} + \ket{\downarrow \downarrow} }
- e^{i B t/2} \lr{ \ket{\downarrow \uparrow} - \ket{\downarrow \downarrow} }
} \\
&=
\inv{2 \sqrt{2}}
\Biglr{
\lr{ e^{-i B t/2} - e^{3 i B t/2} } \ket{\uparrow \uparrow}
+ \lr{ e^{-i B t/2} + e^{3 i B t/2} } \ket{\uparrow \downarrow}
- \lr{ e^{-3 i B t/2} + e^{i B t/2} } \ket{\downarrow \uparrow}
+ \lr{ e^{i B t/2} - e^{-3 i B t/2} } \ket{\downarrow \downarrow}
} \\
&=
\inv{2 \sqrt{2}}
\Biglr{
e^{i B t/2} \lr{ e^{-i B t} - e^{i B t} } \ket{\uparrow \uparrow}
+ e^{i B t/2} \lr{ e^{-i B t} + e^{i B t} } \ket{\uparrow \downarrow}
- e^{- i B t/2} \lr{ e^{-i B t} + e^{i B t} } \ket{\downarrow \uparrow}
+ e^{- i B t/2} \lr{ e^{i B t} - e^{-i B t} } \ket{\downarrow \downarrow}
} \\
&=
\inv{\sqrt{2}}
\lr{
i \sin( B t )
\lr{
e^{- i B t/2} \ket{\downarrow \downarrow} - e^{i B t/2} \ket{\uparrow \uparrow}
}
+ \cos( B t ) \lr{
e^{i B t/2} \ket{\uparrow \downarrow}
- e^{- i B t/2} \ket{\downarrow \uparrow}
}
}.
\end{aligned}
\end{equation}

Note that this returns to the original state (up to an overall phase) when \( t = \frac{2 \pi n}{B}, n \in \mathbb{Z} \). I think I’ve got it right this time (although I got a slightly different answer on paper before typing it up.)
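
As a cross-check, here's a sketch comparing this closed form against a direct matrix exponential (with \( \Hbar = B = 1 \) and an arbitrary sample time, my own choices):

```python
# Sketch: compare the closed form psi(t) against expm(-i H t), hbar = B = 1.
import numpy as np
from scipy.linalg import expm

B, t = 1.0, 0.7
H = (B/2)*np.array([[-1, 2, 0, 0],
                    [ 2,-1, 0, 0],
                    [ 0, 0, 1, 2],
                    [ 0, 0, 2, 1]], dtype=complex)
psi0 = np.array([0, 1, -1, 0], dtype=complex)/np.sqrt(2)  # (|ud> - |du>)/sqrt(2)

# Closed form coefficients in the basis (uu, ud, du, dd).
closed = (1/np.sqrt(2))*np.array([
    -1j*np.sin(B*t)*np.exp(1j*B*t/2),
    np.cos(B*t)*np.exp(1j*B*t/2),
    -np.cos(B*t)*np.exp(-1j*B*t/2),
    1j*np.sin(B*t)*np.exp(-1j*B*t/2)])

print(np.allclose(expm(-1j*H*t) @ psi0, closed))  # True
```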

This doesn’t exactly seem like a quick answer question, at least to me. Is there some easier way to do it?

Some spin problems

October 30, 2015 phy1520


Problems from angular momentum chapter of [1].

Q: \( S_y \) eigenvectors

Find the eigenvectors of \( \sigma_y \), and then find the probability that a measurement of \( S_y \) will be \( \Hbar/2 \) when the state is initially

\begin{equation}\label{eqn:someSpinProblems:20}
\begin{bmatrix}
\alpha \\
\beta
\end{bmatrix}
\end{equation}

A:

The eigenvalues should be \( \pm 1 \), which is easily checked

\begin{equation}\label{eqn:someSpinProblems:40}
\begin{aligned}
0
&=
\Abs{ \sigma_y - \lambda I } \\
&=
\begin{vmatrix}
-\lambda & -i \\
i & -\lambda
\end{vmatrix} \\
&=
\lambda^2 – 1.
\end{aligned}
\end{equation}

For \( \ket{+} = (a,b)^\T \) we must have

\begin{equation}\label{eqn:someSpinProblems:60}
-a - i b = 0,
\end{equation}

so

\begin{equation}\label{eqn:someSpinProblems:80}
\ket{+} \propto
\begin{bmatrix}
-i \\
1
\end{bmatrix},
\end{equation}

or
\begin{equation}\label{eqn:someSpinProblems:100}
\ket{+} =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
i
\end{bmatrix}.
\end{equation}

For \( \ket{-} \) we must have

\begin{equation}\label{eqn:someSpinProblems:120}
a - i b = 0,
\end{equation}

so

\begin{equation}\label{eqn:someSpinProblems:140}
\ket{-} \propto
\begin{bmatrix}
i \\
1
\end{bmatrix},
\end{equation}

or
\begin{equation}\label{eqn:someSpinProblems:160}
\ket{-} =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
-i
\end{bmatrix}.
\end{equation}

The normalized eigenvectors are

\begin{equation}\label{eqn:someSpinProblems:180}
\boxed{
\ket{\pm} =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
\pm i
\end{bmatrix}.
}
\end{equation}

For the probability question we are interested in

\begin{equation}\label{eqn:someSpinProblems:200}
\begin{aligned}
\Abs{\bra{S_y; +}
\begin{bmatrix}
\alpha \\
\beta
\end{bmatrix}
}^2
&=
\inv{2} \Abs{
\begin{bmatrix}
1 & -i
\end{bmatrix}
\begin{bmatrix}
\alpha \\
\beta
\end{bmatrix}
}^2 \\
&=
\inv{2} \Abs{ \alpha - i \beta }^2 \\
&=
\inv{2} \lr{ \Abs{\alpha}^2 + \Abs{\beta}^2 } - \textrm{Im}\lr{ \alpha \beta^\conj } \\
&=
\inv{2} - \textrm{Im}\lr{ \alpha \beta^\conj }.
\end{aligned}
\end{equation}

For real \( \alpha, \beta \) (an \( S_z \) or \( S_x \) eigenstate, for example) there is a 50 % chance of finding the particle in the \( \ket{S_y;+} \) state, but in general the probability depends on the relative phase of \( \alpha \) and \( \beta \).
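
The phase dependence is easy to see numerically. A small sketch with a few sample states (my own choices):

```python
# Sketch: probability of measuring S_y = +hbar/2 for a few sample states.
import numpy as np

ket_sy_plus = np.array([1, 1j])/np.sqrt(2)
samples = [(1, 0),                          # |S_z;+>
           (1/np.sqrt(2), 1/np.sqrt(2)),    # |S_x;+>
           (1/np.sqrt(2), 1j/np.sqrt(2))]   # |S_y;+> itself
for alpha, beta in samples:
    print(abs(np.vdot(ket_sy_plus, [alpha, beta]))**2)  # 0.5, 0.5, 1.0
```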

Q: Magnetic Hamiltonian eigenvectors

Using Pauli matrices, find the eigenvectors for the magnetic spin interaction Hamiltonian

\begin{equation}\label{eqn:someSpinProblems:220}
H = - \inv{\Hbar} 2 \mu \BS \cdot \BB.
\end{equation}

A:

\begin{equation}\label{eqn:someSpinProblems:240}
\begin{aligned}
H
&= - \mu \Bsigma \cdot \BB \\
&= - \mu \lr{ B_x \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + B_y
\begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + B_z \begin{bmatrix} 1 & 0
\\ 0 & -1 \\ \end{bmatrix} } \\
&= - \mu
\begin{bmatrix}
B_z & B_x - i B_y \\
B_x + i B_y & -B_z
\end{bmatrix}.
\end{aligned}
\end{equation}

The characteristic equation is
\begin{equation}\label{eqn:someSpinProblems:260}
\begin{aligned}
0
&=
\begin{vmatrix}
-\mu B_z -\lambda & -\mu(B_x - i B_y) \\
-\mu(B_x + i B_y) & \mu B_z - \lambda
\end{vmatrix} \\
&=
-\lr{ (\mu B_z)^2 - \lambda^2 }
- \mu^2\lr{ B_x^2 - (iB_y)^2 } \\
&=
\lambda^2 - \mu^2 \BB^2.
\end{aligned}
\end{equation}

That is
\begin{equation}\label{eqn:someSpinProblems:360}
\boxed{
\lambda = \pm \mu B.
}
\end{equation}

Now for the eigenvectors. We are looking for \( \ket{\pm} = (a,b)^\T \) such that

\begin{equation}\label{eqn:someSpinProblems:300}
0
= (-\mu B_z \mp \mu B) a -\mu(B_x - i B_y) b
\end{equation}

or

\begin{equation}\label{eqn:someSpinProblems:320}
\ket{\pm} \propto
\begin{bmatrix}
B_x - i B_y \\
-(B_z \pm B)
\end{bmatrix}.
\end{equation}

This squares to

\begin{equation}\label{eqn:someSpinProblems:340}
B_x^2 + B_y^2 + B_z^2 + B^2 \pm 2 B B_z
= 2 B( B \pm B_z ),
\end{equation}

so the normalized eigenkets are
\begin{equation}\label{eqn:someSpinProblems:380}
\boxed{
\ket{\pm}
=
\inv{\sqrt{2 B( B \pm B_z )}}
\begin{bmatrix}
B_x - i B_y \\
-(B_z \pm B)
\end{bmatrix}.
}
\end{equation}
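
A numeric check of the boxed result, for an arbitrary sample field (the field components and \( \mu = 1 \) are my own choices):

```python
# Sketch: check the boxed eigenkets for an assumed sample field, mu = 1.
import numpy as np

mu = 1.0
Bv = np.array([0.3, -1.2, 0.8])
B = np.linalg.norm(Bv)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = -mu*(Bv[0]*sx + Bv[1]*sy + Bv[2]*sz)

for sign in (+1, -1):
    ket = np.array([Bv[0] - 1j*Bv[1], -(Bv[2] + sign*B)])/np.sqrt(2*B*(B + sign*Bv[2]))
    print(np.allclose(H @ ket, sign*mu*B*ket))  # True, True
```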

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Translation operator problems

August 7, 2015 phy1520


Question: One dimensional translation operator. ([1] pr. 1.28)

(a)

Evaluate the classical Poisson bracket

\begin{equation}\label{eqn:translation:420}
\antisymmetric{x}{F(p)}_{\textrm{classical}}
\end{equation}

(b)

Evaluate the commutator

\begin{equation}\label{eqn:translation:440}
\antisymmetric{x}{e^{i p a/\Hbar}}
\end{equation}

(c)

Using the result from part (b), prove that
\begin{equation}\label{eqn:translation:460}
e^{i p a/\Hbar} \ket{x'},
\end{equation}

is an eigenstate of the coordinate operator \( x \).

Answer

(a)

\begin{equation}\label{eqn:translation:480}
\begin{aligned}
\antisymmetric{x}{F(p)}_{\textrm{classical}}
&=
\PD{x}{x} \PD{p}{F(p)} - \PD{p}{x} \PD{x}{F(p)} \\
&=
\PD{p}{F(p)}.
\end{aligned}
\end{equation}

(b)

Having worked backwards through these problems, the answer for this one dimensional problem can be obtained from \ref{eqn:translation:140} with \( \Bl = -a \xcap \), and is

\begin{equation}\label{eqn:translation:500}
\antisymmetric{x}{e^{i p a/\Hbar}} = -a e^{i p a/\Hbar}.
\end{equation}

(c)

\begin{equation}\label{eqn:translation:520}
\begin{aligned}
x e^{i p a/\Hbar} \ket{x'}
&=
\lr{
\antisymmetric{x}{e^{i p a/\Hbar}}
+
e^{i p a/\Hbar} x
}
\ket{x'} \\
&=
\lr{ -a e^{i p a/\Hbar} + e^{i p a/\Hbar} x' } \ket{x'} \\
&= \lr{ x' - a } e^{i p a/\Hbar} \ket{x'}.
\end{aligned}
\end{equation}

This demonstrates that \( e^{i p a/\Hbar} \ket{x'} \) is an eigenstate of \( x \) with eigenvalue \( x' - a \).

Question: Polynomial commutators. ([1] pr. 1.29)

(a)

For power series \( F, G \), verify

\begin{equation}\label{eqn:translation:180}
\antisymmetric{x_k}{G(\Bp)} = i \Hbar \PD{p_k}{G}, \qquad
\antisymmetric{p_k}{F(\Bx)} = -i \Hbar \PD{x_k}{F}.
\end{equation}

(b)

Evaluate \( \antisymmetric{x^2}{p^2} \), and compare to the classical Poisson bracket \( \antisymmetric{x^2}{p^2}_{\textrm{classical}} \).

Answer

(a)

Let

\begin{equation}\label{eqn:translation:200}
\begin{aligned}
G(\Bp) &= \sum_{k l m} a_{k l m} p_1^k p_2^l p_3^m \\
F(\Bx) &= \sum_{k l m} b_{k l m} x_1^k x_2^l x_3^m.
\end{aligned}
\end{equation}

It is simpler to work with a specific \( x_k \), say \( x_k = y \). The validity of the general result will still be clear doing so. Expanding the commutator gives

\begin{equation}\label{eqn:translation:220}
\begin{aligned}
\antisymmetric{y}{G(\Bp)}
&=
\sum_{k l m} a_{k l m} \antisymmetric{y}{p_1^k p_2^l p_3^m } \\
&=
\sum_{k l m} a_{k l m} \lr{
y p_1^k p_2^l p_3^m - p_1^k p_2^l p_3^m y
} \\
&=
\sum_{k l m} a_{k l m} \lr{
p_1^k y p_2^l p_3^m - p_1^k p_2^l y p_3^m
} \\
&=
\sum_{k l m} a_{k l m}
p_1^k
\antisymmetric{y}{p_2^l}
p_3^m.
\end{aligned}
\end{equation}

From \ref{eqn:translation:100}, we have \( \antisymmetric{y}{p_2^l} = l i \Hbar p_2^{l-1} \), so

\begin{equation}\label{eqn:translation:240}
\begin{aligned}
\antisymmetric{y}{G(\Bp)}
&=
\sum_{k l m} a_{k l m}
p_1^k
\lr{ l i \Hbar p_2^{l-1} }
p_3^m \\
&=
i \Hbar \PD{p_y}{G(\Bp)}.
\end{aligned}
\end{equation}

It is straightforward to show that
\( \antisymmetric{p}{x^l} = -l i \Hbar x^{l-1} \), allowing for a similar computation of the momentum commutator

\begin{equation}\label{eqn:translation:260}
\begin{aligned}
\antisymmetric{p_y}{F(\Bx)}
&=
\sum_{k l m} b_{k l m} \antisymmetric{p_y}{x_1^k x_2^l x_3^m } \\
&=
\sum_{k l m} b_{k l m} \lr{
p_y x_1^k x_2^l x_3^m - x_1^k x_2^l x_3^m p_y
} \\
&=
\sum_{k l m} b_{k l m} \lr{
x_1^k p_y x_2^l x_3^m - x_1^k x_2^l p_y x_3^m
} \\
&=
\sum_{k l m} b_{k l m}
x_1^k
\antisymmetric{p_y}{x_2^l}
x_3^m \\
&=
\sum_{k l m} b_{k l m}
x_1^k
\lr{ -l i \Hbar x_2^{l-1}}
x_3^m \\
&=
-i \Hbar \PD{y}{F(\Bx)}.
\end{aligned}
\end{equation}

(b)

It isn’t clear to me how the results above can be used directly to compute \( \antisymmetric{x^2}{p^2} \). However, when the first term of such a commutator is a monomial, it can be expanded in terms of an \( x \) commutator

\begin{equation}\label{eqn:translation:280}
\begin{aligned}
\antisymmetric{x^2}{G(\Bp)}
&= x^2 G - G x^2 \\
&= x \lr{ x G } - G x^2 \\
&= x \lr{ \antisymmetric{ x }{ G } + G x } - G x^2 \\
&= x \antisymmetric{ x }{ G } + \lr{ x G } x - G x^2 \\
&= x \antisymmetric{ x }{ G } + \lr{ \antisymmetric{ x }{ G } + G x } x - G x^2 \\
&= x \antisymmetric{ x }{ G } + \antisymmetric{ x }{ G } x.
\end{aligned}
\end{equation}

Similarly,

\begin{equation}\label{eqn:translation:300}
\antisymmetric{x^3}{G(\Bp)} = x^2 \antisymmetric{ x }{ G } + x \antisymmetric{ x }{ G } x + \antisymmetric{ x }{ G } x^2.
\end{equation}

An induction hypothesis can be formed

\begin{equation}\label{eqn:translation:320}
\antisymmetric{x^k}{G(\Bp)} = \sum_{j = 0}^{k-1} x^{k-1-j} \antisymmetric{ x }{ G } x^j,
\end{equation}

and demonstrated

\begin{equation}\label{eqn:translation:340}
\begin{aligned}
\antisymmetric{x^{k+1}}{G(\Bp)}
&=
x^{k+1} G - G x^{k+1} \\
&=
x \lr{ x^{k} G } - G x^{k+1} \\
&=
x \lr{ \antisymmetric{x^{k}}{G} + G x^k } - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \lr{ x G } x^k - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \lr{ \antisymmetric{x}{G} + G x } x^k - G x^{k+1} \\
&=
x \antisymmetric{x^{k}}{G} + \antisymmetric{x}{G} x^k \\
&=
x \sum_{j = 0}^{k-1} x^{k-1-j} \antisymmetric{ x }{ G } x^j + \antisymmetric{x}{G} x^k \\
&=
\sum_{j = 0}^{k-1} x^{(k+1)-1-j} \antisymmetric{ x }{ G } x^j + \antisymmetric{x}{G} x^k \\
&=
\sum_{j = 0}^{k} x^{(k+1)-1-j} \antisymmetric{ x }{ G } x^j.
\end{aligned}
\end{equation}

That was a bit overkill for this problem, but may be useful later. Application of this to the problem gives

\begin{equation}\label{eqn:translation:360}
\begin{aligned}
\antisymmetric{x^2}{p^2}
&=
x \antisymmetric{x}{p^2}
+ \antisymmetric{x}{p^2} x \\
&=
x i \Hbar \PD{p}{p^2}
+ i \Hbar \PD{p}{p^2} x \\
&=
x 2 i \Hbar p
+ 2 i \Hbar p x \\
&= i \Hbar \lr{ 2 x p + 2 p x }.
\end{aligned}
\end{equation}

The classical Poisson bracket is
\begin{equation}\label{eqn:translation:380}
\begin{aligned}
\antisymmetric{x^2}{p^2}_{\textrm{classical}}
&=
\PD{x}{x^2} \PD{p}{p^2} - \PD{p}{x^2} \PD{x}{p^2} \\
&=
2 x 2 p \\
&= 2 x p + 2 p x.
\end{aligned}
\end{equation}

This demonstrates the expected relation between the classical and quantum commutators

\begin{equation}\label{eqn:translation:400}
\antisymmetric{x^2}{p^2} = i \Hbar \antisymmetric{x^2}{p^2}_{\textrm{classical}}.
\end{equation}
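
This is easy to verify with sympy, representing \( p \) as \( -i \Hbar \partial_x \) acting on an arbitrary test function (a sketch):

```python
# Sketch: verify [x^2, p^2] psi = i hbar (2 x p + 2 p x) psi, with p = -i hbar d/dx.
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

def p(f):
    return -sp.I*hbar*sp.diff(f, x)

lhs = x**2*p(p(psi)) - p(p(x**2*psi))
rhs = sp.I*hbar*(2*x*p(psi) + 2*p(x*psi))
print(sp.simplify(sp.expand(lhs - rhs)))  # 0
```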

Question: Translation operator and position expectation. ([1] pr. 1.30)

The translation operator for a finite spatial displacement is given by

\begin{equation}\label{eqn:translation:20}
J(\Bl) = \exp\lr{ -i \Bp \cdot \Bl/\Hbar },
\end{equation}

where \( \Bp \) is the momentum operator.

(a)

Evaluate

\begin{equation}\label{eqn:translation:40}
\antisymmetric{x_i}{J(\Bl)}.
\end{equation}

(b)

Demonstrate how the expectation value \( \expectation{\Bx} \) changes under translation.

Answer

(a)

For clarity, let’s set \( x_i = y \). The general result will be clear despite doing so.

\begin{equation}\label{eqn:translation:60}
\antisymmetric{y}{J(\Bl)}
=
\sum_{k = 0}^\infty \inv{k!} \lr{\frac{-i}{\Hbar}}^k
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}.
\end{equation}

The commutator expands as

\begin{equation}\label{eqn:translation:80}
\begin{aligned}
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}
+ \lr{ \Bp \cdot \Bl }^k y
&=
y \lr{ \Bp \cdot \Bl }^k \\
&=
y \lr{ p_x l_x + p_y l_y + p_z l_z } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ p_x l_x y + y p_y l_y + p_z l_z y } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ p_x l_x y + l_y \lr{ p_y y + i \Hbar } + p_z l_z y } \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ \Bp \cdot \Bl } y \lr{ \Bp \cdot \Bl }^{k-1}
+ i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&= \cdots \\
&=
\lr{ \Bp \cdot \Bl }^{k-1} y \lr{ \Bp \cdot \Bl }^{k-(k-1)}
+ (k-1) i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
\lr{ \Bp \cdot \Bl }^{k} y
+ k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1}.
\end{aligned}
\end{equation}

In the above expansion, the commutation of \( y \) with \( p_x, p_z \) has been used. This gives, for \( k \ne 0 \),

\begin{equation}\label{eqn:translation:100}
\antisymmetric{y}{
\lr{ \Bp \cdot \Bl }^k
}
=
k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1}.
\end{equation}

Note that this also holds for the \( k = 0 \) case, since \( y \) commutes with the identity operator. Plugging back into the \( J \) commutator, we have

\begin{equation}\label{eqn:translation:120}
\begin{aligned}
\antisymmetric{y}{J(\Bl)}
&=
\sum_{k = 1}^\infty \inv{k!} \lr{\frac{-i}{\Hbar}}^k
k i \Hbar l_y \lr{ \Bp \cdot \Bl }^{k-1} \\
&=
l_y \sum_{k = 1}^\infty \inv{(k-1)!} \lr{\frac{-i}{\Hbar}}^{k-1}
\lr{ \Bp \cdot \Bl }^{k-1} \\
&=
l_y J(\Bl).
\end{aligned}
\end{equation}

The same pattern clearly applies with the other \( x_i \) values, providing the desired relation.

\begin{equation}\label{eqn:translation:140}
\antisymmetric{\Bx}{J(\Bl)} = \sum_{m = 1}^3 \Be_m l_m J(\Bl) = \Bl J(\Bl).
\end{equation}
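
This commutator can also be verified in the momentum representation, where \( x = i \Hbar \partial_p \). A one dimensional sympy sketch (the test function is arbitrary):

```python
# Sketch: verify [x, J(l)] f = l J(l) f in one dimension, momentum space.
import sympy as sp

p, l, hbar = sp.symbols('p l hbar', positive=True)
f = sp.Function('f')(p)
J = sp.exp(-sp.I*p*l/hbar)

def x(g):
    return sp.I*hbar*sp.diff(g, p)  # position operator acting on momentum space functions

print(sp.simplify((x(J*f) - J*x(f))/(J*f)))  # l
```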

(b)

Suppose that the translated state is defined as \( \ket{\alpha_{\Bl}} = J(\Bl) \ket{\alpha} \). The expectation value with respect to this state is

\begin{equation}\label{eqn:translation:160}
\begin{aligned}
\expectation{\Bx’}
&=
\bra{\alpha_{\Bl}} \Bx \ket{\alpha_{\Bl}} \\
&=
\bra{\alpha} J^\dagger(\Bl) \Bx J(\Bl) \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger(\Bl) \lr{ \Bx J(\Bl) } \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger(\Bl) \lr{ J(\Bl) \Bx + \Bl J(\Bl) } \ket{\alpha} \\
&=
\bra{\alpha} J^\dagger J \Bx + \Bl J^\dagger J \ket{\alpha} \\
&=
\bra{\alpha} \Bx \ket{\alpha} + \Bl \braket{\alpha}{\alpha} \\
&=
\expectation{\Bx} + \Bl.
\end{aligned}
\end{equation}

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Bra-ket and spin one-half problems

July 27, 2015 phy1520



Question: Operator matrix representation ([1] pr. 1.5)

(a)

Determine the matrix representation of \( \ket{\alpha}\bra{\beta} \) given a complete set of eigenvectors \( \ket{a^r} \).

(b)

Verify with \( \ket{\alpha} = \ket{s_z = \Hbar/2} \), \( \ket{\beta} = \ket{s_x = \Hbar/2} \).

Answer

(a)

Forming the matrix element

\begin{equation}\label{eqn:moreBraKetProblems:20}
\begin{aligned}
\bra{a^r} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^s}
&=
\braket{a^r}{\alpha}\braket{\beta}{a^s} \\
&=
\braket{a^r}{\alpha}
\braket{a^s}{\beta}^\conj,
\end{aligned}
\end{equation}

the matrix representation is seen to be

\begin{equation}\label{eqn:moreBraKetProblems:40}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
\bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^1} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^1} & \bra{a^2} \lr{ \ket{\alpha}\bra{\beta} } \ket{a^2} & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}
=
\begin{bmatrix}
\braket{a^1}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^1}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\braket{a^2}{\alpha} \braket{a^1}{\beta}^\conj & \braket{a^2}{\alpha} \braket{a^2}{\beta}^\conj & \cdots \\
\vdots & \vdots & \ddots \\
\end{bmatrix}.
\end{equation}

(b)

First compute the spin-z representation of \( \ket{s_x = \Hbar/2 } \).

\begin{equation}\label{eqn:moreBraKetProblems:60}
\begin{aligned}
0
&=
\lr{ S_x - \frac{\Hbar}{2} I }
\begin{bmatrix}
a \\
b
\end{bmatrix} \\
&=
\lr{
\begin{bmatrix}
0 & \Hbar/2 \\
\Hbar/2 & 0 \\
\end{bmatrix}
-
\begin{bmatrix}
\Hbar/2 & 0 \\
0 & \Hbar/2 \\
\end{bmatrix}
}
\begin{bmatrix}
a \\
b
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
-1 & 1 \\
1 & -1 \\
\end{bmatrix}
\begin{bmatrix}
a \\
b
\end{bmatrix},
\end{aligned}
\end{equation}

so \( \ket{s_x = \Hbar/2 } \propto (1,1) \).

Normalized we have

\begin{equation}\label{eqn:moreBraKetProblems:80}
\begin{aligned}
\ket{\alpha} &= \ket{s_z = \Hbar/2 } =
\begin{bmatrix}
1 \\
0
\end{bmatrix} \\
\ket{\beta} &= \ket{s_x = \Hbar/2 } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}.
\end{aligned}
\end{equation}

Using \ref{eqn:moreBraKetProblems:40} the matrix representation is

\begin{equation}\label{eqn:moreBraKetProblems:100}
\ket{\alpha}\bra{\beta}
\sim
\begin{bmatrix}
(1) (1/\sqrt{2})^\conj & (1) (1/\sqrt{2})^\conj \\
(0) (1/\sqrt{2})^\conj & (0) (1/\sqrt{2})^\conj \\
\end{bmatrix}
=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{equation}

This can be confirmed with direct computation
\begin{equation}\label{eqn:moreBraKetProblems:120}
\begin{aligned}
\ket{\alpha}\bra{\beta}
&=
\begin{bmatrix}
1 \\
0
\end{bmatrix}
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1
\end{bmatrix} \\
&=
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}

Question: Eigenvalue of sum of kets ([1] pr. 1.6)

Given eigenkets \( \ket{i}, \ket{j} \) of an operator \( A \), what are the conditions that \( \ket{i} + \ket{j} \) is also an eigenvector?

Answer

Let \( A \ket{i} = i \ket{i}, A \ket{j} = j \ket{j} \), and suppose that the sum is an eigenket. Then there must be a value \( a \) such that

\begin{equation}\label{eqn:moreBraKetProblems:140}
A \lr{ \ket{i} + \ket{j} } = a \lr{ \ket{i} + \ket{j} },
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:160}
i \ket{i} + j \ket{j} = a \lr{ \ket{i} + \ket{j} }.
\end{equation}

Operating with \( \bra{i}, \bra{j} \) respectively, gives

\begin{equation}\label{eqn:moreBraKetProblems:180}
\begin{aligned}
i &= a \\
j &= a,
\end{aligned}
\end{equation}

so for the sum to be an eigenket, both of the corresponding eigenvalues must be identical (i.e. linear combinations of degenerate eigenkets are also eigenkets).

Question: Null operator ([1] pr. 1.7)

Given eigenkets \( \ket{a'} \) of operator \( A \)

(a)

show that

\begin{equation}\label{eqn:moreBraKetProblems:200}
\prod_{a'} \lr{ A - a' }
\end{equation}

is the null operator.

(b)

What is the significance of

\begin{equation}\label{eqn:moreBraKetProblems:220}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''}?
\end{equation}

(c)

Illustrate using \( S_z \) for a spin 1/2 system.

Answer

(a)

Application of any factor \( A - a' \) to \( \ket{a} \), the eigenket of \( A \) with eigenvalue \( a \), scales \( \ket{a} \) by \( a - a' \), so the product operating on \( \ket{a} \) is

\begin{equation}\label{eqn:moreBraKetProblems:240}
\prod_{a'} \lr{ A - a' } \ket{a} = \prod_{a'} \lr{ a - a' } \ket{a}.
\end{equation}

Since \( \ket{a} \) is one of the \( \setlr{\ket{a'}} \) eigenkets of \( A \), one of these factors must be zero, so the product is the null operator.

(b)

Again, consider the action of the operator on \( \ket{a} \),

\begin{equation}\label{eqn:moreBraKetProblems:260}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a}
=
\prod_{a'' \ne a'} \frac{\lr{ a - a'' }}{a' - a''} \ket{a}.
\end{equation}

If \( \ket{a} = \ket{a'} \), then \( \prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a} = \ket{a} \), whereas if it does not, then \( a \) equals one of the \( a'' \) eigenvalues and one of the factors is zero. This is a representation of the Kronecker delta

\begin{equation}\label{eqn:moreBraKetProblems:300}
\prod_{a'' \ne a'} \frac{\lr{ A - a'' }}{a' - a''} \ket{a} \equiv \delta_{a', a} \ket{a}.
\end{equation}

(c)

For operator \( S_z \) the eigenvalues are \( \setlr{ \Hbar/2, -\Hbar/2 } \), so the null operator must be

\begin{equation}\label{eqn:moreBraKetProblems:280}
\begin{aligned}
\prod_{a'} \lr{ A - a' }
&=
\lr{ \frac{\Hbar}{2} }^2 \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix}
\begin{bmatrix}
2 & 0 \\
0 & 0 \\
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 0 \\
\end{bmatrix}
\end{aligned}
\end{equation}

For the delta representation, consider the \( \ket{\pm} \) states and their eigenvalues. The delta operators are

\begin{equation}\label{eqn:moreBraKetProblems:320}
\begin{aligned}
\prod_{a'' \ne \Hbar/2} \frac{\lr{ A - a'' }}{\Hbar/2 - a''}
&=
\frac{S_z - (-\Hbar/2) I}{\Hbar/2 - (-\Hbar/2)} \\
&=
\inv{2} \lr{ \sigma_z + I } \\
&=
\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
\inv{2}
\begin{bmatrix}
2 & 0 \\
0 & 0
\end{bmatrix}
\\
&=
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:moreBraKetProblems:340}
\begin{aligned}
\prod_{a'' \ne -\Hbar/2} \frac{\lr{ A - a'' }}{-\Hbar/2 - a''}
&=
\frac{S_z - (\Hbar/2) I}{-\Hbar/2 - \Hbar/2} \\
&=
-\inv{2} \lr{ \sigma_z - I } \\
&=
-\inv{2} \lr{ \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} } \\
&=
-\inv{2}
\begin{bmatrix}
0 & 0 \\
0 & -2
\end{bmatrix} \\
&=
\begin{bmatrix}
0 & 0 \\
0 & 1
\end{bmatrix}.
\end{aligned}
\end{equation}

These clearly have the expected delta function property acting on kets \( \ket{+} = (1,0), \ket{-} = (0, 1) \).

Question: Spin half general normal ([1] pr. 1.9)

Construct \( \ket{\BS \cdot \ncap ; + } \), where \( \ncap = ( \cos\alpha \sin\beta, \sin\alpha \sin\beta, \cos\beta ) \) such that

\begin{equation}\label{eqn:moreBraKetProblems:360}
\BS \cdot \ncap \ket{\BS \cdot \ncap ; + } =
\frac{\Hbar}{2} \ket{\BS \cdot \ncap ; + },
\end{equation}

Solve this as an eigenvalue problem.

Answer

The spin operator for this direction is

\begin{equation}\label{eqn:moreBraKetProblems:380}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \Bsigma \cdot \ncap \\
&= \frac{\Hbar}{2}
\lr{
\cos\alpha \sin\beta \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix} + \sin\alpha \sin\beta \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix} + \cos\beta \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}
} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta
\end{bmatrix}.
\end{aligned}
\end{equation}

Observe that this is traceless and has determinant \( -(\Hbar/2)^2 \), like any of the \( x,y,z \) spin operators.

Assuming that this has a \( \Hbar/2 \) eigenvalue (to be verified later), the eigenvalue problem requires reducing

\begin{equation}\label{eqn:moreBraKetProblems:400}
\begin{aligned}
\BS \cdot \ncap - \frac{\Hbar}{2} I
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\beta -1 &
e^{-i\alpha}
\sin\beta
\\
e^{i\alpha}
\sin\beta
& -\cos\beta -1
\end{bmatrix} \\
&=
\Hbar
\begin{bmatrix}
- \sin^2 \frac{\beta}{2} &
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
\\
e^{i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
& -\cos^2 \frac{\beta}{2}
\end{bmatrix}.
\end{aligned}
\end{equation}

This has a zero determinant as expected, and the eigenvector \( (a,b) \) will satisfy

\begin{equation}\label{eqn:moreBraKetProblems:420}
\begin{aligned}
0
&= - \sin^2 \frac{\beta}{2} a +
e^{-i\alpha}
\sin\frac{\beta}{2} \cos\frac{\beta}{2}
b \\
&= \sin\frac{\beta}{2} \lr{ - \sin \frac{\beta}{2} a +
e^{-i\alpha} b
\cos\frac{\beta}{2}
},
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:440}
\begin{bmatrix}
a \\
b
\end{bmatrix}
\propto
\begin{bmatrix}
\cos\frac{\beta}{2} \\
e^{i\alpha}
\sin\frac{\beta}{2}
\end{bmatrix}.
\end{equation}

This is appropriately normalized, so the ket for \( \BS \cdot \ncap \) is

\begin{equation}\label{eqn:moreBraKetProblems:460}
\ket{ \BS \cdot \ncap ; + } =
\cos\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\sin\frac{\beta}{2}
\ket{-}.
\end{equation}

Note that the other eigenket is

\begin{equation}\label{eqn:moreBraKetProblems:480}
\ket{ \BS \cdot \ncap ; – } =
-\sin\frac{\beta}{2} \ket{+} +
e^{i\alpha}
\cos\frac{\beta}{2}
\ket{-}.
\end{equation}

It is straightforward to show that these are orthogonal and that this has the \( -\Hbar/2 \) eigenvalue.
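
Both assertions are also easy to check numerically for sample angles (the angles and \( \Hbar = 1 \) are my own choices):

```python
# Sketch: check the S . ncap eigenkets for sample angles, hbar = 1.
import numpy as np

alpha, beta = 0.7, 1.1
n = np.array([np.cos(alpha)*np.sin(beta), np.sin(alpha)*np.sin(beta), np.cos(beta)])
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
Sn = 0.5*sum(nk*sk for nk, sk in zip(n, sigma))

plus = np.array([np.cos(beta/2), np.exp(1j*alpha)*np.sin(beta/2)])
minus = np.array([-np.sin(beta/2), np.exp(1j*alpha)*np.cos(beta/2)])
print(np.allclose(Sn @ plus, 0.5*plus),      # True: +hbar/2 eigenket
      np.allclose(Sn @ minus, -0.5*minus),   # True: -hbar/2 eigenket
      np.isclose(np.vdot(plus, minus), 0))   # True: orthogonal
```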

Question: Two state Hamiltonian ([1] pr. 1.10)

Solve the eigenproblem for

\begin{equation}\label{eqn:moreBraKetProblems:500}
H = a \biglr{
\ket{1}\bra{1}
-\ket{2}\bra{2}
+\ket{1}\bra{2}
+\ket{2}\bra{1}
}
\end{equation}

Answer

In matrix form the Hamiltonian is

\begin{equation}\label{eqn:moreBraKetProblems:520}
H = a
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}.
\end{equation}

The eigenvalue problem is

\begin{equation}\label{eqn:moreBraKetProblems:540}
\begin{aligned}
0
&= \Abs{ H - \lambda I } \\
&= (a - \lambda)(-a - \lambda) - a^2 \\
&= (-a + \lambda)(a + \lambda) - a^2 \\
&= \lambda^2 - a^2 - a^2,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:560}
\lambda = \pm \sqrt{2} a.
\end{equation}

An eigenket proportional to \( (\alpha,\beta) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:580}
0
= ( 1 \mp \sqrt{2} ) \alpha + \beta,
\end{equation}

so

\begin{equation}\label{eqn:moreBraKetProblems:600}
\ket{\pm} \propto
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix},
\end{equation}

or

\begin{equation}\label{eqn:moreBraKetProblems:620}
\begin{aligned}
\ket{\pm}
&=
\inv{\sqrt{2 \lr{ 2 \mp \sqrt{2} } }}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix} \\
&=
\frac{\sqrt{2 \pm \sqrt{2}}}{2}
\begin{bmatrix}
-1 \\
1 \mp \sqrt{2}
\end{bmatrix}.
\end{aligned}
\end{equation}

That is
\begin{equation}\label{eqn:moreBraKetProblems:640}
\ket{\pm} =
\frac{\sqrt{2 \pm \sqrt{2}}}{2} \lr{
-\ket{1} + (1 \mp \sqrt{2}) \ket{2}
}.
\end{equation}
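
A quick numeric verification (with \( a = 1 \) assumed):

```python
# Sketch: eigensystem of the two state Hamiltonian, a = 1.
import numpy as np

a = 1.0
H = a*np.array([[1., 1.], [1., -1.]])
print(np.linalg.eigvalsh(H))  # [-1.41421356  1.41421356], i.e. -+ sqrt(2) a

plus = (np.sqrt(2 + np.sqrt(2))/2)*np.array([-1., 1. - np.sqrt(2)])
print(np.allclose(H @ plus, np.sqrt(2)*a*plus))  # True
print(np.isclose(np.vdot(plus, plus), 1))        # True: normalized
```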

Question: Spin half probability and dispersion ([1] pr. 1.12)

A spin \( 1/2 \) system \( \BS \cdot \ncap \), with \( \ncap = \sin \gamma \xcap + \cos\gamma \zcap \), is in the state with eigenvalue \( \Hbar/2 \).

(a)

If \( S_x \) is measured, what is the probability of getting \( + \Hbar/2 \)?

(b)

Evaluate the dispersion in \( S_x \), that is,

\begin{equation}\label{eqn:moreBraKetProblems:660}
\expectation{\lr{ S_x - \expectation{S_x}}^2}.
\end{equation}

Answer

(a)

In matrix form the spin operator for the system is

\begin{equation}\label{eqn:moreBraKetProblems:680}
\begin{aligned}
\BS \cdot \ncap
&= \frac{\Hbar}{2} \lr{ \cos\gamma \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} + \sin\gamma \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}} \\
&= \frac{\Hbar}{2}
\begin{bmatrix}
\cos\gamma & \sin\gamma \\
\sin\gamma & -\cos\gamma \\
\end{bmatrix}
\end{aligned}
\end{equation}

An eigenket \( \ket{\BS \cdot \ncap ; + } = (a,b) \) must satisfy

\begin{equation}\label{eqn:moreBraKetProblems:700}
\begin{aligned}
0
&= \lr{ \cos \gamma - 1 } a + \sin\gamma b \\
&= \lr{ -2 \sin^2 \frac{\gamma}{2} } a + 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} b \\
&= -\sin \frac{\gamma}{2} a + \cos\frac{\gamma}{2} b,
\end{aligned}
\end{equation}

so the eigenstate is
\begin{equation}\label{eqn:moreBraKetProblems:720}
\ket{\BS \cdot \ncap ; + }
=
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}.
\end{equation}

Pick \( \ket{S_x ; \pm } = \inv{\sqrt{2}}
\begin{bmatrix}
1 \\ \pm 1
\end{bmatrix} \) as the basis for the \( S_x \) operator. Then, for the probability that the system will end up in the \( + \Hbar/2 \) state of \( S_x \), we have

\begin{equation}\label{eqn:moreBraKetProblems:740}
\begin{aligned}
P
&= \Abs{\braket{ S_x ; + }{ \BS \cdot \ncap ; + } }^2 \\
&= \Abs{ \inv{\sqrt{2} }
{
\begin{bmatrix}
1 \\
1
\end{bmatrix}}^\dagger
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=\inv{2}
\Abs{
\begin{bmatrix}
1 & 1
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix}
}^2 \\
&=
\inv{2}
\lr{
\cos\frac{\gamma}{2} +
\sin\frac{\gamma}{2}
}^2 \\
&=
\inv{2}
\lr{ 1 + 2 \cos\frac{\gamma}{2} \sin\frac{\gamma}{2} } \\
&=
\inv{2}
\lr{ 1 + \sin\gamma }.
\end{aligned}
\end{equation}

This seems like a reasonable result, with \( P \in [0, 1] \). Some special values further validate it

\begin{equation}\label{eqn:moreBraKetProblems:760}
\begin{aligned}
\gamma &= 0, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
1 \\
0
\end{bmatrix}
=
\ket{S_z ; +}
=
\inv{\sqrt{2}} \ket{S_x;+}
+\inv{\sqrt{2}} \ket{S_x;-}
\\
\gamma &= \pi/2, \ket{\BS \cdot \ncap ; + } =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}
=
\ket{S_x ; +}
\\
\gamma &= \pi, \ket{\BS \cdot \ncap ; + } =
\begin{bmatrix}
0 \\
1
\end{bmatrix}
=
\ket{S_z ; -}
=
\inv{\sqrt{2}} \ket{S_x;+}
-\inv{\sqrt{2}} \ket{S_x;-},
\end{aligned}
\end{equation}

where we see that the probabilities are in proportion to the projection of the initial state onto the measured state \( \ket{S_x ; +} \).

(b)

The \( S_x \) expectation is

\begin{equation}\label{eqn:moreBraKetProblems:780}
\begin{aligned}
\expectation{S_x}
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2}
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\sin\frac{\gamma}{2} \\
\cos\frac{\gamma}{2}
\end{bmatrix} \\
&=
\frac{\Hbar}{2} 2 \sin\frac{\gamma}{2} \cos\frac{\gamma}{2} \\
&=
\frac{\Hbar}{2} \sin\gamma.
\end{aligned}
\end{equation}

Note that \( S_x^2 = (\Hbar/2)^2I \), so

\begin{equation}\label{eqn:moreBraKetProblems:800}
\begin{aligned}
\expectation{S_x^2}
&=
\lr{\frac{\Hbar}{2}}^2
\begin{bmatrix}
\cos\frac{\gamma}{2} & \sin\frac{\gamma}{2}
\end{bmatrix}
\begin{bmatrix}
\cos\frac{\gamma}{2} \\
\sin\frac{\gamma}{2}
\end{bmatrix} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{ \cos^2\frac{\gamma}{2} + \sin^2 \frac{\gamma}{2} } \\
&=
\lr{ \frac{\Hbar}{2} }^2.
\end{aligned}
\end{equation}

The dispersion is

\begin{equation}\label{eqn:moreBraKetProblems:820}
\begin{aligned}
\expectation{\lr{ S_x - \expectation{S_x}}^2}
&=
\expectation{S_x^2} - \expectation{S_x}^2 \\
&=
\lr{ \frac{\Hbar}{2} }^2
\lr{1 - \sin^2 \gamma} \\
&=
\lr{ \frac{\Hbar}{2} }^2
\cos^2 \gamma.
\end{aligned}
\end{equation}

At \( \gamma = \pi/2 \) the dispersion is 0, which is expected since \( \ket{\BS \cdot \ncap ; + } = \ket{ S_x ; + } \) at that point. Similarly, the dispersion is maximized at \( \gamma = 0, \pi \), where the state has equal probability for either \( S_x \) measurement outcome.
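
A spot check of the expectation and the dispersion for an arbitrary sample angle (with \( \Hbar = 1 \), my own choices):

```python
# Sketch: <S_x> and the S_x dispersion for a sample gamma, hbar = 1.
import numpy as np

gamma = 0.9
ket = np.array([np.cos(gamma/2), np.sin(gamma/2)])
Sx = 0.5*np.array([[0., 1.], [1., 0.]])

ex = ket @ Sx @ ket
disp = ket @ Sx @ Sx @ ket - ex**2
print(np.isclose(ex, 0.5*np.sin(gamma)),
      np.isclose(disp, 0.25*np.cos(gamma)**2))  # True True
```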

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Update to old phy356 (Quantum Mechanics I) notes.

February 12, 2015 math and physics play

It’s been a long time since I took QM I. My notes from that class were pretty rough, but I’ve cleaned them up a bit.

The main value to these notes is that I worked a number of introductory Quantum Mechanics problems.

These were my personal lecture notes for the Fall 2010, University of Toronto Quantum mechanics I course (PHY356H1F), taught by Prof. Vatche Deyirmenjian.

The official description of this course was:

The general structure of wave mechanics; eigenfunctions and eigenvalues; operators; orbital angular momentum; spherical harmonics; central potential; separation of variables, hydrogen atom; Dirac notation; operator methods; harmonic oscillator and spin.

This document contains a few things

• My lecture notes.
Typos, if any, are probably mine (Peeter), and no claim nor attempt of spelling or grammar correctness will be made. I chose not to take notes for the first four lectures since they followed the text very closely.
• Notes from reading of the text. This includes observations, notes on what seem like errors, and some solved problems. None of these problems have been graded. Note that my informal errata sheet for the text has been separated out from this document.
• Some assigned problems. I have corrected some of the errors after receiving grading feedback, and where I have not done so I at least recorded some of the grading comments as a reference.
• Some worked problems associated with exam preparation.

Singular Value Decomposition

October 2, 2014 ece1254


We’ve been presented with the definition of the singular value decomposition or SVD, but not how to compute it.

Recall that the definition was

Singular value decomposition (SVD)

Given \( M \in \mathbb{R}^{m \times n} \), we can find a representation of \( M \)

\begin{equation}\label{eqn:svdNotes:81}
M = U \Sigma V^\T,
\end{equation}

where \( U \) and \( V\) are orthogonal matrices such that \( U^\T U = 1 \), and \( V^\T V = 1 \), and

\begin{equation}\label{eqn:svdNotes:101}
\Sigma =
\begin{bmatrix}
\sigma_1 & & & & & &\\
& \sigma_2 & & & & &\\
& & \ddots & & & &\\
& & & \sigma_r & & &\\
& & & & 0 & & \\
& & & & & \ddots & \\
& & & & & & 0 \\
\end{bmatrix}
\end{equation}

The values \( \sigma_i, i \le \min(n,m) \) are called the singular values of \( M \). The singular values are subject to the ordering

\begin{equation}\label{eqn:svdNotes:261}
\sigma_{1} \ge \sigma_{2} \ge \cdots \ge 0
\end{equation}

If \(r\) is the rank of \( M \), then the \( \sigma_r \) above is the minimum non-zero singular value (but the zeros are also called singular values).

Here I’ll review some of the key ideas from the MIT OCW SVD lecture by Prof. Gilbert Strang. This is largely to avoid having to rewatch this again in a few years as I just did.

The idea behind the SVD is to find an orthogonal basis that relates the image space of the transformation, as well as the basis for the vectors that the transformation applies to. That is a relation of the form

\begin{equation}\label{eqn:svdNotes:281}
\sigma_i \Bu_i = M \Bv_i,
\end{equation}

where \( \Bv_i \) are orthonormal basis vectors, and \( \Bu_i \) are orthonormal basis vectors for the image space.

Let’s suppose that such a set of vectors can be computed and that \( M \) has a representation of the desired form

\begin{equation}\label{eqn:svdNotes:301}
M = U \Sigma V^\conj
\end{equation}

where
\begin{equation}\label{eqn:svdNotes:321}
U =
\begin{bmatrix}
\Bu_1 & \Bu_2 & \cdots & \Bu_m
\end{bmatrix},
\end{equation}

and

\begin{equation}\label{eqn:svdNotes:341}
V =
\begin{bmatrix}
\Bv_1 & \Bv_2 & \cdots & \Bv_n
\end{bmatrix}.
\end{equation}

By left or right multiplication of \( M \) with its (conjugate) transpose, we will see that the decomposed forms of these products are very simple

\begin{equation}\label{eqn:svdNotes:361}
M^\conj M
=
V \Sigma^\conj U^\conj U \Sigma V^\conj
=
V \Sigma^\conj \Sigma V^\conj,
\end{equation}
\begin{equation}\label{eqn:svdNotes:381}
M M^\conj
=
U \Sigma V^\conj V \Sigma^\conj U^\conj
=
U \Sigma \Sigma^\conj U^\conj.
\end{equation}

Because \( \Sigma \) is diagonal, the products \( \Sigma^\conj \Sigma \) and \( \Sigma \Sigma^\conj \) are also both diagonal, and populated with the absolute squares of the singular values that we have presumed to exist. \( \Sigma \Sigma^\conj \) is an \( m \times m \) matrix, whereas \( \Sigma^\conj \Sigma \) is an \( n \times n \) matrix, so the numbers of zeros in each of these will differ, but each will have the structure

\begin{equation}\label{eqn:svdNotes:401}
\Sigma^\conj \Sigma \sim \Sigma \Sigma^\conj \sim
\begin{bmatrix}
\Abs{\sigma_1}^2 & & & & & &\\
& \Abs{ \sigma_2 }^2 & & & & &\\
& & \ddots & & & &\\
& & & \Abs{ \sigma_r }^2 & & &\\
& & & & 0 & & \\
& & & & & \ddots & \\
& & & & & & 0 \\
\end{bmatrix}
\end{equation}

This shows us one method of computing the singular value decomposition (for full rank systems), because we need only solve for the eigensystem of either \( M M^\conj \) or \( M^\conj M \) to find the singular values, and one of \( U \) or \( V \). To form \( U \) we will be able to find \( r \) orthonormal eigenvectors of \( M M^\conj \) and then supplement that with a mutually orthonormal set of vectors from the null space of \( M^\conj \). We can form \( \Sigma \) by taking the square roots of the eigenvalues of \( M M^\conj \). With both \( \Sigma \) and \( U \) computed, we can then compute \( V \) by inversion. We could similarly solve for \( \Sigma \) and \( V \) by computing the eigensystem of \( M^\conj M \), supplementing those eigenvectors with vectors from the null space of \( M \), and then compute \( U \) by inversion.
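
Here is a sketch of that construction in numpy, for a full rank square matrix (the example matrix from the next section):

```python
# Sketch: build the SVD from the eigensystem of M^T M, full rank case.
import numpy as np

M = np.array([[4., 4.], [3., -3.]])
evals, V = np.linalg.eigh(M.T @ M)
order = np.argsort(evals)[::-1]       # sort singular values in descending order
sigma = np.sqrt(evals[order])
V = V[:, order]
U = M @ V @ np.diag(1/sigma)          # U by inversion, as described above
print(np.allclose(U @ np.diag(sigma) @ V.T, M))  # True
```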

Let’s work the examples from the lecture to see how this works.

Example: Full rank 2×2 matrix

In the lecture the SVD decomposition is computed for

\begin{equation}\label{eqn:svdNotes:421}
M =
\begin{bmatrix}
4 & 4 \\
3 & -3
\end{bmatrix}
\end{equation}

For this we have

\begin{equation}\label{eqn:svdNotes:441}
M M^\conj
=
\begin{bmatrix}
32 & 0 \\
0 & 18
\end{bmatrix}
\end{equation}
\begin{equation}\label{eqn:svdNotes:461}
M^\conj M
=
\begin{bmatrix}
25 & 7 \\
7 & 25
\end{bmatrix}
\end{equation}

The first is already diagonalized so \( U = I \), and the singular values are found by inspection \( \{ \sqrt{32}, \sqrt{18} \} \), or

\begin{equation}\label{eqn:svdNotes:541}
\Sigma =
\begin{bmatrix}
\sqrt{32} & 0 \\
0 & \sqrt{18}
\end{bmatrix}
\end{equation}

Because the system is full rank we can solve for \( V \) by inversion

\begin{equation}\label{eqn:svdNotes:481}
\Sigma^{-1} U^\conj M = V^\conj,
\end{equation}

or
\begin{equation}\label{eqn:svdNotes:501}
V = M^\conj U \lr{ \Sigma^{-1} }^\conj.
\end{equation}

In this case that is

\begin{equation}\label{eqn:svdNotes:521}
V = \inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
\end{equation}

We could alternately compute \( V \) directly by diagonalizing \( M^\conj M \). We see that the eigenvectors are \( \lr{ 1, \pm 1 }/\sqrt{2} \), with respective eigenvalues \( \{ 32, 18 \} \).

This gives us \ref{eqn:svdNotes:541} and \ref{eqn:svdNotes:521}. Again, because the system is full rank, we can compute \( U \) by inversion. That is

\begin{equation}\label{eqn:svdNotes:561}
U = M V \Sigma^{-1}.
\end{equation}

Carrying out this calculation recovers \( U = I \) as expected. Looks like I used a different matrix than Prof. Strang used in his lecture (alternate signs on the 3’s). He had some trouble that arose from independent calculation of the respective eigenspaces. Calculating one from the other avoids that trouble, since there are different eigenvector signs that can be chosen, and we are looking for specific mappings between the eigenspaces that satisfy the \( \sigma_i \Bu_i = M \Bv_i \) constraints encoded by the relationship \( M = U \Sigma V^\conj \).

Let’s work a non-full rank example, as in the lecture.

Example: 2×2 matrix without full rank

How about

\begin{equation}\label{eqn:svdNotes:581}
M =
\begin{bmatrix}
1 & 1 \\
2 & 2
\end{bmatrix}.
\end{equation}

For this we have

\begin{equation}\label{eqn:svdNotes:601}
M^\conj M =
\begin{bmatrix}
5 & 5 \\
5 & 5
\end{bmatrix}
\end{equation}
\begin{equation}\label{eqn:svdNotes:621}
M M^\conj =
\begin{bmatrix}
2 & 4 \\
4 & 8
\end{bmatrix}
\end{equation}

For which the non-zero eigenvalue is \( 10 \), and the corresponding eigenvector of \( M^\conj M \) is
\begin{equation}\label{eqn:svdNotes:641}
\Bv =
\inv{\sqrt{2}}
\begin{bmatrix}
1 \\
1
\end{bmatrix}.
\end{equation}

This gives us

\begin{equation}\label{eqn:svdNotes:661}
\Sigma =
\begin{bmatrix}
\sqrt{10} & 0 \\
0 & 0
\end{bmatrix}
\end{equation}

Since we require \( V \) to be orthonormal, there is only one choice (up to a sign) for the vector from the null space.

Let’s try

\begin{equation}\label{eqn:svdNotes:681}
V =
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
\end{equation}

We find that \( M M^\conj \) has eigenvalues \( \{ 10, 0 \} \) as expected. The eigenvector for the non-zero eigenvalue is found to be

\begin{equation}\label{eqn:svdNotes:701}
\Bu = \inv{\sqrt{5}}
\begin{bmatrix}
1 \\
2
\end{bmatrix}.
\end{equation}

It’s easy to expand this to an orthonormal basis, but do we have to pick a specific sign relative to the choice that we’ve made for \( V \)?

Let’s try

\begin{equation}\label{eqn:svdNotes:721}
U = \inv{\sqrt{5}}
\begin{bmatrix}
1 & 2 \\
2 & -1
\end{bmatrix}.
\end{equation}

Multiplying out \( U \Sigma V^\conj \) we have

\begin{equation}\label{eqn:svdNotes:741}
\begin{aligned}
\inv{\sqrt{5}}
\begin{bmatrix}
1 & 2 \\
2 & -1
\end{bmatrix}
\begin{bmatrix}
\sqrt{10} & 0 \\
0 & 0
\end{bmatrix}
\inv{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix}
&=
\begin{bmatrix}
1 & 2 \\
2 & -1
\end{bmatrix}
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\begin{bmatrix}
1 & 0 \\
2 & 0
\end{bmatrix}
\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix} \\
&=
\begin{bmatrix}
1 & 1 \\
2 & 2
\end{bmatrix}.
\end{aligned}
\end{equation}

It appears that this works, although we haven’t demonstrated why that should be, and we could have gotten lucky with signs. There’s some theoretical work to do here, but let’s leave that for another day (or rely on software to do this computational task).
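
Both examples can also be cross-checked directly against np.linalg.svd, which makes its own sign choices for the columns of \( U \) and \( V \):

```python
# Sketch: cross-check both example decompositions against numpy's svd.
import numpy as np

for M in (np.array([[4., 4.], [3., -3.]]), np.array([[1., 1.], [2., 2.]])):
    U, s, Vt = np.linalg.svd(M)
    print(s, np.allclose(U @ np.diag(s) @ Vt, M))
# [5.65685425 4.24264069] True   (i.e. sqrt(32), sqrt(18))
# [3.16227766 0.        ] True   (i.e. sqrt(10), 0)
```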