
Can anticommuting operators have a simultaneous eigenket?

September 28, 2015 phy1520


Question: Can anticommuting operators have a simultaneous eigenket? ([1] pr. 1.16)

Two Hermitian operators anticommute

\begin{equation}\label{eqn:anticommutingOperatorWithSimulaneousEigenket:20}
\symmetric{A}{B} = A B + B A = 0.
\end{equation}

Is it possible to have a simultaneous eigenket of \( A \) and \( B \)? Prove or illustrate your assertion.

Answer

Suppose that such a simultaneous non-zero eigenket \( \ket{\alpha} \) exists, so that

\begin{equation}\label{eqn:anticommutingOperatorWithSimulaneousEigenket:40}
A \ket{\alpha} = a \ket{\alpha},
\end{equation}

and

\begin{equation}\label{eqn:anticommutingOperatorWithSimulaneousEigenket:60}
B \ket{\alpha} = b \ket{\alpha}.
\end{equation}

This gives

\begin{equation}\label{eqn:anticommutingOperatorWithSimulaneousEigenket:80}
\lr{ A B + B A } \ket{\alpha}
=
\lr{A b + B a} \ket{\alpha}
= 2 a b \ket{\alpha}.
\end{equation}

For the anticommutator to vanish we require \( 2 a b = 0 \), so at least one of the eigenvalues must be zero. Knowing that, we can construct an example of such operators. In matrix form, let

\begin{equation}\label{eqn:anticommutingOperatorWithSimulaneousEigenket:120}
A =
\begin{bmatrix}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & a \\
\end{bmatrix}
\end{equation}
\begin{equation}\label{eqn:anticommutingOperatorWithSimulaneousEigenket:140}
B =
\begin{bmatrix}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & b \\
\end{bmatrix}.
\end{equation}

These are both Hermitian (for real \( a, b \)), and anticommute provided at least one of \( a, b \) is zero. They have a common eigenket

\begin{equation}\label{eqn:anticommutingOperatorWithSimulaneousEigenket:160}
\ket{\alpha} =
\begin{bmatrix}
0 \\
0 \\
1
\end{bmatrix}.
\end{equation}

Note that a zero eigenvalue of one of the operators is a necessary condition for such a simultaneous eigenket, but it is not a sufficient condition for anticommutation.
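
As a sanity check, here is a small numpy sketch (not part of the problem itself) verifying the claim for the \( a = b = 0 \) case: the matrices above anticommute, and the ket above is a simultaneous eigenket of both, with eigenvalue zero for each.

```python
import numpy as np

# The example matrices above, with the corner entries a = b = 0.
A = np.diag([1.0, -1.0, 0.0])
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
alpha = np.array([0.0, 0.0, 1.0])   # the claimed common eigenket

print(A @ B + B @ A)                # zero matrix, so A and B anticommute
print(A @ alpha, B @ alpha)         # both zero vectors: eigenvalue 0 for each operator
```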

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

PHY1520H Graduate Quantum Mechanics. Lecture 3: Density matrix (cont.). Taught by Prof. Arun Paramekanti

September 24, 2015 phy1520


Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 3 content.

Density matrix (cont.)

An example of a partitioned system with four total states (two spin 1/2 particles) is sketched in fig. 1.

fig. 1. Two spins

An example of a partitioned system with eight total states (three spin 1/2 particles) is sketched in fig. 2.

fig. 2. Three spins

The density matrix

\begin{equation}\label{eqn:qmLecture3:20}
\hat{\rho} = \ket{\Psi}\bra{\Psi}
\end{equation}

is clearly an operator as can be seen by applying it to a state

\begin{equation}\label{eqn:qmLecture3:40}
\hat{\rho} \ket{\phi} = \ket{\Psi} \lr{ \braket{ \Psi }{\phi} }.
\end{equation}

The quantity in braces is just a complex number.

After expanding the pure state \( \ket{\Psi} \) in terms of basis states for each of the two partitions

\begin{equation}\label{eqn:qmLecture3:60}
\ket{\Psi}
= \sum_{m,n} C_{m, n} \ket{m}_{\textrm{L}} \ket{n}_{\textrm{R}},
\end{equation}

with \( \textrm{L} \) and \( \textrm{R} \) implied for the \( \ket{m} \) and \( \ket{n} \) indexed states respectively, this can be written

\begin{equation}\label{eqn:qmLecture3:460}
\ket{\Psi}
= \sum_{m,n} C_{m, n} \ket{m} \ket{n}.
\end{equation}

The density operator is

\begin{equation}\label{eqn:qmLecture3:80}
\hat{\rho} =
\sum_{m,n}
\sum_{m’,n’}
C_{m, n}
C_{m’, n’}^\conj
\ket{m} \ket{n}
\bra{m’} \bra{n’}.
\end{equation}

Suppose we trace over the right partition of the state space, defining such a trace as the reduced density operator \( \hat{\rho}_{\textrm{red}} \)

\begin{equation}\label{eqn:qmLecture3:100}
\begin{aligned}
\hat{\rho}_{\textrm{red}}
&\equiv
\textrm{Tr}_{\textrm{R}}(\hat{\rho}) \\
&= \sum_{\tilde{n}} \bra{\tilde{n}} \hat{\rho} \ket{ \tilde{n}} \\
&= \sum_{\tilde{n}}
\bra{\tilde{n} }
\lr{
\sum_{m,n}
C_{m, n}
\ket{m} \ket{n}
}
\lr{
\sum_{m’,n’}
C_{m’, n’}^\conj
\bra{m’} \bra{n’}
}
\ket{ \tilde{n} } \\
&=
\sum_{\tilde{n}}
\sum_{m,n}
\sum_{m’,n’}
C_{m, n}
C_{m’, n’}^\conj
\ket{m} \delta_{\tilde{n} n}
\bra{m’ }
\delta_{ \tilde{n} n’ } \\
&=
\sum_{\tilde{n}, m, m’}
C_{m, \tilde{n}}
C_{m’, \tilde{n}}^\conj
\ket{m} \bra{m’}.
\end{aligned}
\end{equation}

Computing a diagonal matrix element of \( \hat{\rho}_{\textrm{red}} \), we have

\begin{equation}\label{eqn:qmLecture3:120}
\begin{aligned}
\bra{\tilde{m}} \hat{\rho}_{\textrm{red}} \ket{\tilde{m}}
&=
\sum_{m, m’, \tilde{n}} C_{m, \tilde{n}} C_{m’, \tilde{n}}^\conj \braket{ \tilde{m}}{m} \braket{m’}{\tilde{m}} \\
&=
\sum_{\tilde{n}} \Abs{C_{\tilde{m}, \tilde{n}} }^2.
\end{aligned}
\end{equation}

This is the probability that the left partition is in state \( \tilde{m} \).
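
This partial trace is easy to play with numerically. Here is a numpy sketch (the coefficients \( C_{m,n} \) are made up for illustration, with the left index as the row) that computes \( \hat{\rho}_{\textrm{red}} \) for a pair of spin 1/2 particles and checks that its diagonal entries are the left-partition probabilities.

```python
import numpy as np

# |Psi> = sum_{m,n} C[m, n] |m>_L |n>_R for two spin 1/2 particles.
C = np.array([[0.5, 0.5j],
              [0.5, -0.5]])
C /= np.linalg.norm(C)                 # normalize the state

# Reduced density matrix: rho_red[m, m'] = sum_n C[m, n] C[m', n]^*
rho_red = np.einsum('mn,pn->mp', C, C.conj())

# Diagonal entries are the probabilities of finding the left spin in state m.
print(np.real_if_close(np.diag(rho_red)))
print(np.isclose(np.trace(rho_red), 1.0))   # the reduced state has unit trace
```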

Average of an observable

Suppose we have two spin half particles. For such a system the total magnetization is

\begin{equation}\label{eqn:qmLecture3:140}
S_{\textrm{Total}} =
S_1^z
+
S_2^z,
\end{equation}

as sketched in fig. 3.

fig. 3. Magnetic moments from two spins.

The average of some observable is

\begin{equation}\label{eqn:qmLecture3:160}
\expectation{\hatA}
= \sum_{m, n, m’, n’} C_{m, n}^\conj C_{m’, n’}
\bra{m}\bra{n} \hatA \ket{n’} \ket{m’}.
\end{equation}

Consider the trace of the density operator observable product

\begin{equation}\label{eqn:qmLecture3:180}
\textrm{Tr}( \hat{\rho} \hatA )
= \sum_{m, n} \braket{m n}{\Psi} \bra{\Psi} \hatA \ket{m, n}.
\end{equation}

Let

\begin{equation}\label{eqn:qmLecture3:200}
\ket{\Psi} = \sum_{m, n} C_{m n} \ket{m, n},
\end{equation}

so that

\begin{equation}\label{eqn:qmLecture3:220}
\begin{aligned}
\textrm{Tr}( \hat{\rho} \hatA )
&= \sum_{m, n, m’, n’, m”, n”} C_{m’, n’} C_{m”, n”}^\conj
\braket{m n}{m’, n’} \bra{m”, n”} \hatA \ket{m, n} \\
&= \sum_{m, n, m”, n”} C_{m, n} C_{m”, n”}^\conj
\bra{m”, n”} \hatA \ket{m, n}.
\end{aligned}
\end{equation}

This is just

\begin{equation}\label{eqn:qmLecture3:240}
\boxed{
\bra{\Psi} \hatA \ket{\Psi} = \textrm{Tr}( \hat{\rho} \hatA ).
}
\end{equation}
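
A quick numerical verification of this boxed identity (a numpy sketch with a random state and a random Hermitian observable, nothing specific to the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)              # a random normalized state

M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T                      # a random Hermitian observable

rho = np.outer(psi, psi.conj())         # pure state density matrix |Psi><Psi|

print(np.allclose(psi.conj() @ A @ psi, np.trace(rho @ A)))   # True
```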

Left observables

Consider

\begin{equation}\label{eqn:qmLecture3:260}
\begin{aligned}
\bra{\Psi} \hatA_{\textrm{L}} \ket{\Psi}
&= \textrm{Tr}(\hat{\rho} \hatA_{\textrm{L}}) \\
&=
\textrm{Tr}_{\textrm{L}}
\textrm{Tr}_{\textrm{R}}
(\hat{\rho} \hatA_{\textrm{L}}) \\
&=
\textrm{Tr}_{\textrm{L}}
\lr{
\lr{
\textrm{Tr}_{\textrm{R}} \hat{\rho}
}
\hatA_{\textrm{L}}
} \\
&=
\textrm{Tr}_{\textrm{L}}
\lr{
\hat{\rho}_{\textrm{red}}
\hatA_{\textrm{L}}
}.
\end{aligned}
\end{equation}

We see

\begin{equation}\label{eqn:qmLecture3:280}
\bra{\Psi} \hatA_{\textrm{L}} \ket{\Psi}
=
\textrm{Tr}_{\textrm{L}} \lr{ \hat{\rho}_{\textrm{red}, \textrm{L}} \hatA_{\textrm{L}} }.
\end{equation}

We find that we don’t need to know the state of the complete system to answer questions about one of its parts. The reduced density operator \( \hat{\rho}_{\textrm{red}} \), a “probability operator”, provides all the required information about that partition of the system.
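
To illustrate, here is a numpy sketch (assuming the ordering \( \textrm{left} \otimes \textrm{right} \), with made-up coefficients) showing that an observable acting only on the left spin can be averaged using the reduced density matrix alone:

```python
import numpy as np

rng = np.random.default_rng(1)

C = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
C /= np.linalg.norm(C)                  # coefficients C[m, n] of |Psi>
psi = C.reshape(-1)                     # 4-component ket, left index major

M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A_L = M + M.conj().T                    # Hermitian observable on the left spin

rho_red = np.einsum('mn,pn->mp', C, C.conj())   # trace over the right index

lhs = psi.conj() @ np.kron(A_L, np.eye(2)) @ psi   # <Psi| (A_L x I) |Psi>
rhs = np.trace(rho_red @ A_L)                      # Tr_L(rho_red A_L)
print(np.allclose(lhs, rhs))                       # True
```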

Pure states vs. mixed states

For pure states we can assign a state vector and talk about reduced scenarios. For mixed states we must work with the reduced density matrix.

Example: Two particle spin half pure states

Consider

\begin{equation}\label{eqn:qmLecture3:300}
\ket{\psi_1} = \inv{\sqrt{2}} \lr{ \ket{ \uparrow \downarrow } - \ket{ \downarrow \uparrow } },
\end{equation}

\begin{equation}\label{eqn:qmLecture3:320}
\ket{\psi_2} = \inv{\sqrt{2}} \lr{ \ket{ \uparrow \downarrow } + \ket{ \uparrow \uparrow } }.
\end{equation}

For the first pure state the density operator is
\begin{equation}\label{eqn:qmLecture3:360}
\hat{\rho} = \inv{2}
\lr{ \ket{ \uparrow \downarrow } - \ket{ \downarrow \uparrow } }
\lr{ \bra{ \uparrow \downarrow } - \bra{ \downarrow \uparrow } }.
\end{equation}

What are the reduced density matrices?

\begin{equation}\label{eqn:qmLecture3:340}
\begin{aligned}
\hat{\rho}_{\textrm{L}}
&= \textrm{Tr}_{\textrm{R}} \lr{ \hat{\rho} } \\
&=
\inv{2} (-1)(-1) \ket{\downarrow}\bra{\downarrow}
+\inv{2} (+1)(+1) \ket{\uparrow}\bra{\uparrow},
\end{aligned}
\end{equation}

so the matrix representation of this reduced density operator is

\begin{equation}\label{eqn:qmLecture3:380}
\hat{\rho}_{\textrm{L}}
=
\inv{2}
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}.
\end{equation}

For the second pure state the density operator is
\begin{equation}\label{eqn:qmLecture3:400}
\hat{\rho} = \inv{2}
\lr{ \ket{ \uparrow \downarrow } + \ket{ \uparrow \uparrow } }
\lr{ \bra{ \uparrow \downarrow } + \bra{ \uparrow \uparrow } }.
\end{equation}

This has the reduced density matrix

\begin{equation}\label{eqn:qmLecture3:420}
\begin{aligned}
\hat{\rho}_{\textrm{L}}
&= \textrm{Tr}_{\textrm{R}} \lr{ \hat{\rho} } \\
&=
\inv{2} \ket{\uparrow}\bra{\uparrow}
+\inv{2} \ket{\uparrow}\bra{\uparrow} \\
&=
\ket{\uparrow}\bra{\uparrow} .
\end{aligned}
\end{equation}

This has a matrix representation

\begin{equation}\label{eqn:qmLecture3:440}
\hat{\rho}_{\textrm{L}}
=
\begin{bmatrix}
1 & 0 \\
0 & 0
\end{bmatrix}.
\end{equation}

In this second example we have more information about the left partition; that will show up as a zero entanglement entropy in the problem set. In contrast, we have less information about the left partition for the first state, and will find a positive entanglement entropy in that case.
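
As a quick illustration (my own addition, not from the lecture), here is a numpy sketch computing the von Neumann entanglement entropy \( -\textrm{Tr}\lr{ \hat{\rho}_{\textrm{L}} \ln \hat{\rho}_{\textrm{L}} } \) for the two reduced density matrices found above:

```python
import numpy as np

def entanglement_entropy(rho_L):
    """von Neumann entropy -Tr(rho ln rho) of a reduced density matrix."""
    evals = np.linalg.eigvalsh(rho_L)
    evals = evals[evals > 1e-12]        # treat 0 ln 0 as 0
    return -np.sum(evals * np.log(evals))

rho_L_1 = 0.5 * np.eye(2)               # first state: (1/2) x identity
rho_L_2 = np.array([[1.0, 0.0],
                    [0.0, 0.0]])        # second state: |up><up|

print(entanglement_entropy(rho_L_1))    # ln 2 ~ 0.693 (entangled)
print(entanglement_entropy(rho_L_2))    # 0.0 (no entanglement)
```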

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Operator matrix element

August 29, 2015 phy1520



Weird dreams

I woke up today having a dream still in my head from the night, but it was a strange one. I was expanding out the Dirac notation representation of an operator in matrix form, but the symbols in the kets were elaborate pictures of Disney princesses that I was drawing with forestry scenery in the background, including little bears. At the point that I woke up from the dream, I noticed that I’d gotten the proportion of the bears wrong in one of the pictures, and they looked like they were ready to eat one of the princess characters.

Guts

As a side effect of this weird dream I actually started thinking about matrix element representation of operators.

When forming the matrix element of an operator using Dirac notation the elements are of the form \( \bra{\textrm{row}} A \ket{\textrm{column}} \). I’ve gotten that mixed up a couple of times, so I thought it would be helpful to write this out explicitly for a \( 2 \times 2 \) operator representation for clarity.

To start, consider a change of basis for a single matrix element, from the basis \( \setlr{\ket{q}, \ket{r} } \) to the basis \( \setlr{\ket{a}, \ket{b} } \)

\begin{equation}\label{eqn:operatorMatrixElement:20}
\begin{aligned}
\bra{q} A \ket{r}
&=
\braket{q}{a} \bra{a} A \ket{r}
+
\braket{q}{b} \bra{b} A \ket{r} \\
&=
\braket{q}{a} \bra{a} A \ket{a}\braket{a}{r}
+ \braket{q}{a} \bra{a} A \ket{b}\braket{b}{r} \\
&+ \braket{q}{b} \bra{b} A \ket{a}\braket{a}{r}
+ \braket{q}{b} \bra{b} A \ket{b}\braket{b}{r} \\
&=
\braket{q}{a}
\begin{bmatrix}
\bra{a} A \ket{a} & \bra{a} A \ket{b}
\end{bmatrix}
\begin{bmatrix}
\braket{a}{r} \\
\braket{b}{r}
\end{bmatrix}
+
\braket{q}{b}
\begin{bmatrix}
\bra{b} A \ket{a} & \bra{b} A \ket{b}
\end{bmatrix}
\begin{bmatrix}
\braket{a}{r} \\
\braket{b}{r}
\end{bmatrix} \\
&=
\begin{bmatrix}
\braket{q}{a} &
\braket{q}{b}
\end{bmatrix}
\begin{bmatrix}
\bra{a} A \ket{a} & \bra{a} A \ket{b} \\
\bra{b} A \ket{a} & \bra{b} A \ket{b}
\end{bmatrix}
\begin{bmatrix}
\braket{a}{r} \\
\braket{b}{r}
\end{bmatrix}.
\end{aligned}
\end{equation}

Suppose that the matrix representations of \( \ket{q} \) and \( \ket{r} \) are respectively

\begin{equation}\label{eqn:operatorMatrixElement:40}
\begin{aligned}
\ket{q} &\sim
\begin{bmatrix}
\braket{a}{q} \\
\braket{b}{q} \\
\end{bmatrix} \\
\ket{r} &\sim
\begin{bmatrix}
\braket{a}{r} \\
\braket{b}{r} \\
\end{bmatrix} \\
\end{aligned},
\end{equation}

then

\begin{equation}\label{eqn:operatorMatrixElement:60}
\bra{q} \sim
{\begin{bmatrix}
\braket{a}{q} \\
\braket{b}{q} \\
\end{bmatrix}}^\dagger
=
\begin{bmatrix}
\braket{q}{a} &
\braket{q}{b}
\end{bmatrix}.
\end{equation}

The matrix element is then

\begin{equation}\label{eqn:operatorMatrixElement:80}
\bra{q} A \ket{r}
\sim
\bra{q}
\begin{bmatrix}
\bra{a} A \ket{a} & \bra{a} A \ket{b} \\
\bra{b} A \ket{a} & \bra{b} A \ket{b}
\end{bmatrix}
\ket{r},
\end{equation}

and the corresponding matrix representation of the operator is

\begin{equation}\label{eqn:operatorMatrixElement:100}
A \sim
\begin{bmatrix}
\bra{a} A \ket{a} & \bra{a} A \ket{b} \\
\bra{b} A \ket{a} & \bra{b} A \ket{b}
\end{bmatrix}.
\end{equation}

bra-ket manipulation problems

July 22, 2015 phy1520


Some bra-ket manipulation problems ([1] pr. 1.4).

Using bra-ket logic, expand

(a)

\begin{equation}\label{eqn:braketManip:20}
\textrm{tr}{X Y}
\end{equation}

(b)

\begin{equation}\label{eqn:braketManip:40}
(X Y)^\dagger
\end{equation}

(c)

\begin{equation}\label{eqn:braketManip:60}
e^{i f(A)},
\end{equation}

where \( A \) is Hermitian with a complete set of eigenkets.

(d)

\begin{equation}\label{eqn:braketManip:80}
\sum_{a’} \Psi_{a’}(\Bx’)^\conj \Psi_{a’}(\Bx”),
\end{equation}

where \( \Psi_{a’}(\Bx’) = \braket{\Bx’}{a’} \).

Answers

(a)

\begin{equation}\label{eqn:braketManip:100}
\begin{aligned}
\textrm{tr}{X Y}
&= \sum_a \bra{a} X Y \ket{a} \\
&= \sum_{a,b} \bra{a} X \ket{b}\bra{b} Y \ket{a} \\
&= \sum_{a,b}
\bra{b} Y \ket{a}
\bra{a} X \ket{b} \\
&= \sum_{a,b}
\bra{b} Y
X \ket{b} \\
&= \textrm{tr}{ Y X }.
\end{aligned}
\end{equation}
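
A one-line numerical spot check of this cyclic property (numpy, random non-Hermitian operators):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Y = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

print(np.allclose(np.trace(X @ Y), np.trace(Y @ X)))   # True
```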

(b)

\begin{equation}\label{eqn:braketManip:120}
\begin{aligned}
\bra{a} \lr{ X Y}^\dagger \ket{b}
&=
\lr{ \bra{b} X Y \ket{a} }^\conj \\
&=
\sum_c \lr{ \bra{b} X \ket{c}\bra{c} Y \ket{a} }^\conj \\
&=
\sum_c \lr{ \bra{b} X \ket{c} }^\conj \lr{ \bra{c} Y \ket{a} }^\conj \\
&=
\sum_c
\lr{ \bra{c} Y \ket{a} }^\conj
\lr{ \bra{b} X \ket{c} }^\conj \\
&=
\sum_c
\bra{a} Y^\dagger \ket{c}
\bra{c} X^\dagger \ket{b} \\
&=
\bra{a} Y^\dagger
X^\dagger \ket{b},
\end{aligned}
\end{equation}

so \( \lr{ X Y }^\dagger = Y^\dagger X^\dagger \).
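
The same sort of numerical spot check works for the adjoint reversal rule:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Y = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

print(np.allclose((X @ Y).conj().T, Y.conj().T @ X.conj().T))   # True
```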

(c)

Let’s presume that the function \( f \) has a Taylor series representation

\begin{equation}\label{eqn:braketManip:140}
f(A) = \sum_r b_r A^r.
\end{equation}

If the eigenvalues of \( A \) are given by

\begin{equation}\label{eqn:braketManip:160}
A \ket{a_s} = a_s \ket{a_s},
\end{equation}

this operator can be expanded as

\begin{equation}\label{eqn:braketManip:180}
\begin{aligned}
A
&= \sum_{a_s} A \ket{a_s} \bra{a_s} \\
&= \sum_{a_s} a_s \ket{a_s} \bra{a_s}.
\end{aligned}
\end{equation}

To compute powers of this operator, consider first the square

\begin{equation}\label{eqn:braketManip:200}
\begin{aligned}
A^2
&=
\sum_{a_s} a_s \ket{a_s} \bra{a_s}
\sum_{a_r} a_r \ket{a_r} \bra{a_r} \\
&=
\sum_{a_s, a_r} a_s a_r \ket{a_s} \bra{a_s} \ket{a_r} \bra{a_r} \\
&=
\sum_{a_s, a_r} a_s a_r \ket{a_s} \delta_{s r} \bra{a_r} \\
&=
\sum_{a_s} a_s^2 \ket{a_s} \bra{a_s}.
\end{aligned}
\end{equation}

The pattern for higher powers will clearly just be

\begin{equation}\label{eqn:braketManip:220}
A^k =
\sum_{a_s} a_s^k \ket{a_s} \bra{a_s},
\end{equation}

so the expansion of \( f(A) \) will be

\begin{equation}\label{eqn:braketManip:240}
\begin{aligned}
f(A)
&= \sum_r b_r A^r \\
&= \sum_r b_r
\sum_{a_s} a_s^r \ket{a_s} \bra{a_s} \\
&=
\sum_{a_s} \lr{ \sum_r b_r a_s^r } \ket{a_s} \bra{a_s} \\
&=
\sum_{a_s} f(a_s) \ket{a_s} \bra{a_s}.
\end{aligned}
\end{equation}

The exponential expansion is

\begin{equation}\label{eqn:braketManip:260}
\begin{aligned}
e^{i f(A)}
&=
\sum_t \frac{i^t}{t!} f^t(A) \\
&=
\sum_t \frac{i^t}{t!}
\lr{ \sum_{a_s} f(a_s) \ket{a_s} \bra{a_s} }^t \\
&=
\sum_t \frac{i^t}{t!}
\sum_{a_s} f^t(a_s) \ket{a_s} \bra{a_s} \\
&=
\sum_{a_s}
e^{i f(a_s) }
\ket{a_s} \bra{a_s}.
\end{aligned}
\end{equation}
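
This spectral form is easy to check numerically. Here is a numpy/scipy sketch (the function \( f(x) = x^2 \) is an arbitrary illustrative choice) comparing \( \sum_{a_s} e^{i f(a_s)} \ket{a_s}\bra{a_s} \) against a direct matrix exponential of \( i f(A) \):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T                      # a random Hermitian operator

f = lambda x: x**2                      # arbitrary smooth function for illustration

evals, evecs = np.linalg.eigh(A)

# Spectral construction: sum_s exp(i f(a_s)) |a_s><a_s|
spectral = (evecs * np.exp(1j * f(evals))) @ evecs.conj().T

# Direct matrix exponential of i f(A), with f(A) built from the same eigenbasis.
fA = (evecs * f(evals)) @ evecs.conj().T
print(np.allclose(spectral, expm(1j * fA)))   # True
```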

(d)

\begin{equation}\label{eqn:braketManip:n}
\begin{aligned}
\sum_{a’} \Psi_{a’}(\Bx’)^\conj \Psi_{a’}(\Bx”)
&=
\sum_{a’}
\braket{\Bx’}{a’}^\conj
\braket{\Bx”}{a’} \\
&=
\sum_{a’}
\braket{a’}{\Bx’}
\braket{\Bx”}{a’} \\
&=
\sum_{a’}
\braket{\Bx”}{a’}
\braket{a’}{\Bx’} \\
&=
\braket{\Bx”}{\Bx’} \\
&= \delta(\Bx” - \Bx’).
\end{aligned}
\end{equation}
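
For a finite dimensional system the analogue of this completeness relation is just the unitarity of the eigenvector matrix. A numpy sketch, with discrete indices standing in for \( \Bx’, \Bx” \):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T                      # a random Hermitian operator

_, evecs = np.linalg.eigh(A)            # columns are the eigenkets |a'>

# psi[x, a'] = <x|a'>, so sum_{a'} psi(x')^* psi(x'') = (evecs evecs^dagger)[x'', x']
completeness = evecs @ evecs.conj().T
print(np.allclose(completeness, np.eye(4)))   # Kronecker delta, as expected
```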

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Update to old phy356 (Quantum Mechanics I) notes.

February 12, 2015 math and physics play

It’s been a long time since I took QM I. My notes from that class were pretty rough, but I’ve cleaned them up a bit.

The main value of these notes is that I worked a number of introductory Quantum Mechanics problems.

These were my personal lecture notes for the Fall 2010, University of Toronto Quantum mechanics I course (PHY356H1F), taught by Prof. Vatche Deyirmenjian.

The official description of this course was:

The general structure of wave mechanics; eigenfunctions and eigenvalues; operators; orbital angular momentum; spherical harmonics; central potential; separation of variables, hydrogen atom; Dirac notation; operator methods; harmonic oscillator and spin.

This document contains a few things:

• My lecture notes.
Typos, if any, are probably mine (Peeter), and no claim of spelling or grammar correctness is made. I chose not to take notes for the first four lectures since they followed the text very closely.
• Notes from reading of the text. This includes observations, notes on what seem like errors, and some solved problems. None of these problems have been graded. Note that my informal errata sheet for the text has been separated out from this document.
• Some assigned problems. I have corrected some of the errors after receiving grading feedback, and where I have not done so I at least recorded some of the grading comments as a reference.
• Some worked problems associated with exam preparation.