Explicit form of the square root of p . sigma.

December 10, 2018 phy2403

With the help of Mathematica, a fairly compact form was found for the root of \( p \cdot \sigma \)
\begin{equation}\label{eqn:DiracUVmatricesExplicit:121}
\sqrt{ p \cdot \sigma }
=
\inv{
\sqrt{ \omega_\Bp - \Norm{\Bp} } + \sqrt{ \omega_\Bp + \Norm{\Bp} }
}
\begin{bmatrix}
\omega_\Bp - p^3 + \sqrt{ \omega_\Bp^2 - \Bp^2 } & - p^1 + i p^2 \\
- p^1 - i p^2 & \omega_\Bp + p^3 + \sqrt{ \omega_\Bp^2 - \Bp^2 }
\end{bmatrix}.
\end{equation}
A bit of examination shows that we can do much better. The leading scalar term can be simplified by squaring it
\begin{equation}\label{eqn:squarerootpsigma:140}
\begin{aligned}
\lr{ \sqrt{ \omega_\Bp - \Norm{\Bp} } + \sqrt{ \omega_\Bp + \Norm{\Bp} } }^2
&=
\omega_\Bp - \Norm{\Bp} + \omega_\Bp + \Norm{\Bp} + 2 \sqrt{ \omega_\Bp^2 - \Bp^2 } \\
&=
2 \omega_\Bp + 2 m,
\end{aligned}
\end{equation}
where the on-shell value of the energy \( \omega_\Bp^2 = m^2 + \Bp^2 \) has been inserted. Using that again in the matrix, we have
\begin{equation}\label{eqn:squarerootpsigma:160}
\begin{aligned}
\sqrt{ p \cdot \sigma }
&=
\inv{\sqrt{ 2 \omega_\Bp + 2 m }}
\begin{bmatrix}
\omega_\Bp - p^3 + m & - p^1 + i p^2 \\
- p^1 - i p^2 & \omega_\Bp + p^3 + m
\end{bmatrix} \\
&=
\inv{\sqrt{ 2 \omega_\Bp + 2 m }}
\lr{
(\omega_\Bp + m) \sigma^0
-p^1 \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ \end{bmatrix}
-p^2 \begin{bmatrix} 0 & -i \\ i & 0 \\ \end{bmatrix}
-p^3 \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix}
} \\
&=
\inv{\sqrt{ 2 \omega_\Bp + 2 m }}
\lr{
(\omega_\Bp + m) \sigma^0
-p^1 \sigma^1
-p^2 \sigma^2
-p^3 \sigma^3
} \\
&=
\inv{\sqrt{ 2 \omega_\Bp + 2 m }}
\lr{
(\omega_\Bp + m) \sigma^0 - \Bsigma \cdot \Bp
}.
\end{aligned}
\end{equation}

We’ve now found a nice algebraic form for these matrix roots
\begin{equation}\label{eqn:squarerootpsigma:180}
\boxed{
\begin{aligned}
\sqrt{p \cdot \sigma} &= \inv{\sqrt{ 2 \omega_\Bp + 2 m }} \lr{ m + p \cdot \sigma } \\
\sqrt{p \cdot \overline{\sigma}} &= \inv{\sqrt{ 2 \omega_\Bp + 2 m }} \lr{ m + p \cdot \overline{\sigma}}.
\end{aligned}}
\end{equation}

As a check, let’s square one of these explicitly
\begin{equation}\label{eqn:squarerootpsigma:101}
\begin{aligned}
\lr{ \sqrt{p \cdot \sigma} }^2
&= \inv{2 \omega_\Bp + 2 m }
\lr{ m^2 + (p \cdot \sigma)^2 + 2 m (p \cdot \sigma) } \\
&= \inv{2 \omega_\Bp + 2 m }
\lr{ m^2 + (\omega_\Bp^2 - 2 \omega_\Bp \Bsigma \cdot \Bp + \Bp^2) + 2 m (p \cdot \sigma) } \\
&= \inv{2 \omega_\Bp + 2 m }
\lr{ 2 \omega_\Bp^2 - 2 \omega_\Bp \Bsigma \cdot \Bp + 2 m (\omega_\Bp - \Bsigma \cdot \Bp) } \\
&= \inv{2 \omega_\Bp + 2 m }
\lr{ 2 \omega_\Bp \lr{ \omega_\Bp + m } - (2 \omega_\Bp + 2 m) \Bsigma \cdot \Bp } \\
&=
\omega_\Bp - \Bsigma \cdot \Bp \\
&=
p \cdot \sigma,
\end{aligned}
\end{equation}
which validates the result.
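
As an additional numerical spot check (my own, not from the Mathematica notebook), here is a small NumPy/SciPy sketch that builds \( p \cdot \sigma \) for an arbitrarily chosen mass and momentum, and verifies that the boxed form squares back to \( p \cdot \sigma \) and agrees with the principal matrix square root. The numerical values are just assumptions for the check.

```python
import numpy as np
from scipy.linalg import sqrtm

# Pauli matrices
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

m = 1.0
p = np.array([0.3, -0.7, 1.1])                 # arbitrary 3-momentum (my choice)
w = np.sqrt(m**2 + p @ p)                      # on-shell energy omega_p

p_dot_sigma = w * s0 - (p[0]*s1 + p[1]*s2 + p[2]*s3)

# boxed result: sqrt(p . sigma) = (m + p . sigma) / sqrt(2 omega_p + 2 m)
root = (m * s0 + p_dot_sigma) / np.sqrt(2*w + 2*m)

print(np.allclose(root @ root, p_dot_sigma))   # squares back to p . sigma
print(np.allclose(root, sqrtm(p_dot_sigma)))   # matches the principal matrix root
```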

Explicit expansion of the Dirac u,v matrices

December 9, 2018 phy2403

We found that the solutions for the \( u(p), v(p) \) matrices were
\begin{equation}\label{eqn:DiracUVmatricesExplicit:20}
\begin{aligned}
u(p) &=
\begin{bmatrix}
\sqrt{p \cdot \sigma} \zeta \\
\sqrt{p \cdot \overline{\sigma}} \zeta \\
\end{bmatrix} \\
v(p) &=
\begin{bmatrix}
\sqrt{p \cdot \sigma} \eta \\
-\sqrt{p \cdot \overline{\sigma}} \eta \\
\end{bmatrix},
\end{aligned}
\end{equation}
where
\begin{equation}\label{eqn:DiracUVmatricesExplicit:40}
\begin{aligned}
p \cdot \sigma &= p_0 \sigma_0 - \Bsigma \cdot \Bp \\
p \cdot \overline{\sigma} &= p_0 \sigma_0 + \Bsigma \cdot \Bp.
\end{aligned}
\end{equation}
It was pointed out that these square roots can be conceptualized (in the right basis) as diagonal matrices of the square roots of the eigenvalues.

It was also pointed out that we don't tend to need the explicit form of these square roots. We saw that to be the case in all our calculations, where they always showed up in the end in quadratic combinations like \( \sqrt{ (p \cdot \sigma)^2 }, \sqrt{ (p \cdot \sigma)(p \cdot \overline{\sigma})}, \cdots \), which reduced nicely each time without requiring the explicit matrix roots.

I encountered a case where it would have been nice to have the explicit representation. In particular, I wanted to use Mathematica to symbolically expand \( \overline{\Psi} i \gamma^\mu \partial_\mu \Psi \) in terms of the \( a^s_\Bp, b^r_\Bp, \cdots \) operators, to verify that the operators obtained from the massless Dirac Lagrangian are in fact the energy and momentum operators (and to compare with the explicit form of the momentum operator found in eq. 3.105 of [1]). For that mechanical task, I needed explicit representations of all the \( u^s(p), v^r(p) \) matrices to plug in.

It happens that \( 2 \times 2 \) matrices can be square-rooted symbolically (FIXME: link to squarerootOfFourSigmaDotP.nb notebook). In particular, the matrices \( p \cdot \sigma, p \cdot \overline{\sigma} \) have nice simple eigenvalues \( \pm \Norm{\Bp} + \omega_\Bp \). The corresponding unnormalized eigenvectors for \( p \cdot \sigma \) are
\begin{equation}\label{eqn:DiracUVmatricesExplicit:60}
\begin{aligned}
e_1 &=
\begin{bmatrix}
– p_x + i p_y \\
p_z + \Norm{\Bp}
\end{bmatrix} \\
e_2 &=
\begin{bmatrix}
– p_x + i p_y \\
p_z – \Norm{\Bp}
\end{bmatrix}.
\end{aligned}
\end{equation}
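
The notebook isn't linked here, but as a rough stand-in, a SymPy version of the same symbolic eigen-computation looks like the following (the symbol names \( p_x, p_y, p_z, \omega \) are my own, declared positive so the radicals behave).

```python
import sympy as sp

px, py, pz, w = sp.symbols('p_x p_y p_z omega', positive=True)
P = sp.sqrt(px**2 + py**2 + pz**2)             # |p|

p_dot_sigma = sp.Matrix([[w - pz, -px + sp.I*py],
                         [-px - sp.I*py, w + pz]])

# eigenvalues omega +/- |p|, each with multiplicity one
print(p_dot_sigma.eigenvals())

# e_1 above is an eigenvector for the eigenvalue omega + |p|
e1 = sp.Matrix([-px + sp.I*py, pz + P])
print(sp.simplify(p_dot_sigma*e1 - (w + P)*e1))   # zero column vector
```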
This means that we can diagonalize \( p \cdot \sigma \) as
\begin{equation}\label{eqn:DiracUVmatricesExplicit:80}
p \cdot \sigma
= U
\begin{bmatrix}
\omega_\Bp+ \Norm{\Bp} & 0 \\
0 & \omega_\Bp- \Norm{\Bp}
\end{bmatrix}
U^\dagger,
\end{equation}
where \( U \) is the matrix of the normalized eigenvectors, each column scaled by its own length (\( \Norm{e_1}^2 = 2 \Bp^2 + 2 p_z \Norm{\Bp} \), \( \Norm{e_2}^2 = 2 \Bp^2 - 2 p_z \Norm{\Bp} \))
\begin{equation}\label{eqn:DiracUVmatricesExplicit:100}
U =
\begin{bmatrix}
e_1' & e_2'
\end{bmatrix}
=
\begin{bmatrix}
\frac{-p_x + i p_y}{\sqrt{ 2 \Bp^2 + 2 p_z \Norm{\Bp} }} & \frac{-p_x + i p_y}{\sqrt{ 2 \Bp^2 - 2 p_z \Norm{\Bp} }} \\
\frac{p_z + \Norm{\Bp}}{\sqrt{ 2 \Bp^2 + 2 p_z \Norm{\Bp} }} & \frac{p_z - \Norm{\Bp}}{\sqrt{ 2 \Bp^2 - 2 p_z \Norm{\Bp} }}
\end{bmatrix}.
\end{equation}
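
Here's a quick NumPy spot check (again my own, with arbitrary numerical values) that this \( U \) is unitary and reproduces the diagonalization \ref{eqn:DiracUVmatricesExplicit:80}.

```python
import numpy as np

m, (px, py, pz) = 1.0, (0.3, -0.7, 1.1)        # arbitrary mass and momentum
P = np.sqrt(px**2 + py**2 + pz**2)
w = np.sqrt(m**2 + P**2)

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
A = w*np.eye(2) - (px*s1 + py*s2 + pz*s3)      # p . sigma

# eigenvectors e_1, e_2, each scaled by its own norm
e1 = np.array([-px + 1j*py, pz + P]) / np.sqrt(2*P**2 + 2*pz*P)
e2 = np.array([-px + 1j*py, pz - P]) / np.sqrt(2*P**2 - 2*pz*P)
U = np.column_stack([e1, e2])
D = np.diag([w + P, w - P])

print(np.allclose(U @ U.conj().T, np.eye(2)))  # U is unitary
print(np.allclose(U @ D @ U.conj().T, A))      # recovers p . sigma
```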

Letting Mathematica churn through the matrix products in \ref{eqn:DiracUVmatricesExplicit:80} verifies the diagonalization, and for the root we find
\begin{equation}\label{eqn:DiracUVmatricesExplicit:120}
\sqrt{ p \cdot \sigma }
=
\inv{
\sqrt{ \omega_\Bp- \Norm{\Bp} } + \sqrt{ \omega_\Bp+ \Norm{\Bp} }
}
\begin{bmatrix}
\omega_\Bp - p_z + \sqrt{ \omega_\Bp^2 - \Bp^2 } & - p_x + i p_y \\
- p_x - i p_y & \omega_\Bp + p_z + \sqrt{ \omega_\Bp^2 - \Bp^2 }
\end{bmatrix}.
\end{equation}
Now we can plug in \( \zeta^{1\T} = (1,0), \zeta^{2\T} = (0,1), \eta^{1\T} = (1,0), \eta^{2\T} = (0,1) \) to find the explicit form of our \( u\)’s and \( v\)’s
\begin{equation}\label{eqn:DiracUVmatricesExplicit:140}
\begin{aligned}
u^1(p) &=
\inv{
\sqrt{ \omega_\Bp - \Norm{\Bp} } + \sqrt{ \omega_\Bp + \Norm{\Bp} }
}
\begin{bmatrix}
\omega_\Bp - p_z + \sqrt{ \omega_\Bp^2 - \Bp^2 } \\
- p_x - i p_y \\
\omega_\Bp + p_z + \sqrt{ \omega_\Bp^2 - \Bp^2 } \\
p_x + i p_y \\
\end{bmatrix} \\
u^2(p) &=
\inv{
\sqrt{ \omega_\Bp - \Norm{\Bp} } + \sqrt{ \omega_\Bp + \Norm{\Bp} }
}
\begin{bmatrix}
- p_x + i p_y \\
\omega_\Bp + p_z + \sqrt{ \omega_\Bp^2 - \Bp^2 } \\
p_x - i p_y \\
\omega_\Bp - p_z + \sqrt{ \omega_\Bp^2 - \Bp^2 } \\
\end{bmatrix} \\
v^1(p) &=
\inv{
\sqrt{ \omega_\Bp - \Norm{\Bp} } + \sqrt{ \omega_\Bp + \Norm{\Bp} }
}
\begin{bmatrix}
\omega_\Bp - p_z + \sqrt{ \omega_\Bp^2 - \Bp^2 } \\
- p_x - i p_y \\
-\omega_\Bp - p_z - \sqrt{ \omega_\Bp^2 - \Bp^2 } \\
-p_x - i p_y \\
\end{bmatrix} \\
v^2(p) &=
\inv{
\sqrt{ \omega_\Bp - \Norm{\Bp} } + \sqrt{ \omega_\Bp + \Norm{\Bp} }
}
\begin{bmatrix}
- p_x + i p_y \\
\omega_\Bp + p_z + \sqrt{ \omega_\Bp^2 - \Bp^2 } \\
-p_x + i p_y \\
-\omega_\Bp + p_z - \sqrt{ \omega_\Bp^2 - \Bp^2 } \\
\end{bmatrix}.
\end{aligned}
\end{equation}
This is now a convenient form to try the next symbolic manipulation task. If nothing else, this takes some of the mystery out of the original compact notation, since we see that the \( u, v \)'s are just four-element column vectors, and we know their explicit form should we want it.
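
As a sanity check on these explicit column vectors, here is a NumPy sketch (the numerical values and the Weyl-basis gamma matrices of [1] are my assumptions for the check) verifying that each \( u^s(p) \) satisfies \( (\gamma^\mu p_\mu - m) u = 0 \) and each \( v^s(p) \) satisfies \( (\gamma^\mu p_\mu + m) v = 0 \).

```python
import numpy as np

# Pauli matrices and an arbitrary on-shell momentum (my choice for the check)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
m = 1.0
px, py, pz = 0.2, -0.5, 0.9
P = np.sqrt(px**2 + py**2 + pz**2)
w = np.sqrt(m**2 + P**2)
r = np.sqrt(w**2 - P**2)                       # = m, kept in the form used above
D = np.sqrt(w - P) + np.sqrt(w + P)

# spinors transcribed from the explicit column vectors above
u1 = np.array([w - pz + r, -px - 1j*py,  w + pz + r,  px + 1j*py]) / D
u2 = np.array([-px + 1j*py, w + pz + r,  px - 1j*py,  w - pz + r]) / D
v1 = np.array([w - pz + r, -px - 1j*py, -w - pz - r, -px - 1j*py]) / D
v2 = np.array([-px + 1j*py, w + pz + r, -px + 1j*py, -w + pz - r]) / D

# Weyl-basis gamma matrices, as in [1]
I2, Z2 = np.eye(2), np.zeros((2, 2))
g0 = np.block([[Z2, I2], [I2, Z2]])
gs = [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]
pslash = w*g0 - (px*gs[0] + py*gs[1] + pz*gs[2])

for u in (u1, u2):
    print(np.allclose(pslash @ u, m*u))        # (pslash - m) u = 0
for v in (v1, v2):
    print(np.allclose(pslash @ v, -m*v))       # (pslash + m) v = 0
```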

Also note that in class it was pointed out that we should take the positive roots of the diagonal eigenvalue matrix. It doesn't look like that is actually required. We need not even use the same sign for each root, since squaring the resulting matrix root will recover the original \( p \cdot \sigma \) matrix either way, as the quick check below illustrates.
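
A minimal numerical illustration of that sign freedom (my own sketch, using a numerical eigendecomposition rather than the symbolic one): every choice of signs on the diagonal square roots squares back to the same \( p \cdot \sigma \).

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
m, (px, py, pz) = 1.0, (0.3, -0.7, 1.1)        # arbitrary values
w = np.sqrt(m**2 + px**2 + py**2 + pz**2)
A = w*np.eye(2) - (px*s1 + py*s2 + pz*s3)      # p . sigma

lam, U = np.linalg.eigh(A)                     # numerical eigendecomposition
for signs in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    root = U @ np.diag(np.array(signs) * np.sqrt(lam)) @ U.conj().T
    print(np.allclose(root @ root, A))         # True for every sign choice
```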

References

[1] Michael E. Peskin and Daniel V. Schroeder. An Introduction to Quantum Field Theory. Westview Press, 1995.

Reflection using Pauli matrices.

November 22, 2018 phy2403

In class yesterday (lecture 19, notes not yet posted) we used \( \Bsigma^\T = -\sigma_2 \Bsigma \sigma_2 \), which implicitly shows that taking the transpose of \( \Bsigma \cdot \Bx \) amounts to reflecting \( \Bx \) in the plane normal to the y-axis (flipping its y component).
This form of reflection will be familiar to a student of geometric algebra (see [1] — a great book, one copy of which is in the physics library). I can’t recall any mention of the geometrical reflection identity from when I took QM. It’s a fun exercise to demonstrate the reflection identity when constrained to the Pauli matrix notation.

Theorem: Reflection about a normal.

Given a unit vector \( \ncap \in \mathbb{R}^3 \) and a vector \( \Bx \in \mathbb{R}^3 \) the reflection of \( \Bx \) about a plane with normal \( \ncap \) can be represented in Pauli notation as
\begin{equation*}
-\Bsigma \cdot \ncap \Bsigma \cdot \Bx \Bsigma \cdot \ncap.
\end{equation*}

To prove this, first note that in standard vector notation, we can decompose a vector into its projective and rejective components
\begin{equation}\label{eqn:reflection:20}
\Bx = (\Bx \cdot \ncap) \ncap + \lr{ \Bx – (\Bx \cdot \ncap) \ncap }.
\end{equation}
A reflection about the plane normal to \( \ncap \) just flips the component in the direction of \( \ncap \), leaving the rest unchanged. That is
\begin{equation}\label{eqn:reflection:40}
-(\Bx \cdot \ncap) \ncap + \lr{ \Bx – (\Bx \cdot \ncap) \ncap }
=
\Bx – 2 (\Bx \cdot \ncap) \ncap.
\end{equation}
We may write this in \( \Bsigma \) notation as
\begin{equation}\label{eqn:reflection:60}
\Bsigma \cdot \Bx - 2 (\Bx \cdot \ncap) \Bsigma \cdot \ncap.
\end{equation}
We also know that
\begin{equation}\label{eqn:reflection:80}
\begin{aligned}
\Bsigma \cdot \Ba \Bsigma \cdot \Bb &= \Ba \cdot \Bb + i \Bsigma \cdot (\Ba \cross \Bb) \\
\Bsigma \cdot \Bb \Bsigma \cdot \Ba &= \Ba \cdot \Bb - i \Bsigma \cdot (\Ba \cross \Bb),
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:reflection:100}
\Ba \cdot \Bb = \inv{2} \symmetric{\Bsigma \cdot \Ba}{\Bsigma \cdot \Bb},
\end{equation}
where \( \symmetric{\Ba}{\Bb} \) is the anticommutator of \( \Ba, \Bb \).
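
These identities are easy to spot check numerically; here is a quick NumPy sketch with arbitrary vectors (my own choices).

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def sdot(v):
    """Return sigma . v for a 3-vector v."""
    return sum(vi * si for vi, si in zip(v, s))

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 0.7, -1.1])

# sigma.a sigma.b = a.b + i sigma.(a x b)
print(np.allclose(sdot(a) @ sdot(b), (a @ b)*np.eye(2) + 1j*sdot(np.cross(a, b))))

# anticommutator: {sigma.a, sigma.b} = 2 (a.b) I
print(np.allclose(sdot(a) @ sdot(b) + sdot(b) @ sdot(a), 2*(a @ b)*np.eye(2)))
```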
Inserting \ref{eqn:reflection:100} into \ref{eqn:reflection:60} we find that the reflection is
\begin{equation}\label{eqn:reflection:120}
\begin{aligned}
\Bsigma \cdot \Bx -
\symmetric{\Bsigma \cdot \ncap}{\Bsigma \cdot \Bx}
\Bsigma \cdot \ncap
&=
\Bsigma \cdot \Bx -
{\Bsigma \cdot \ncap}{\Bsigma \cdot \Bx}
\Bsigma \cdot \ncap
-
{\Bsigma \cdot \Bx}{\Bsigma \cdot \ncap}
\Bsigma \cdot \ncap \\
&=
\Bsigma \cdot \Bx -
{\Bsigma \cdot \ncap}{\Bsigma \cdot \Bx}
\Bsigma \cdot \ncap
-
{\Bsigma \cdot \Bx} \\
&=
-
{\Bsigma \cdot \ncap}{\Bsigma \cdot \Bx}
\Bsigma \cdot \ncap,
\end{aligned}
\end{equation}
where the second equality used \( (\Bsigma \cdot \ncap)^2 = \ncap \cdot \ncap = 1 \), which completes the proof.

When we expand \( (\Bsigma \cdot \Bx)^\T \) and find
\begin{equation}\label{eqn:reflection:n}
(\Bsigma \cdot \Bx)^\T
=
\sigma^1 x^1 - \sigma^2 x^2 + \sigma^3 x^3,
\end{equation}
it is clear that this coordinate expansion flips the sign of the y component, i.e. it is a reflection in the plane normal to the y-axis. Knowing the reflection formula above provides a rationale for why we might want to write this in the compact form \( -\sigma^2 (\Bsigma \cdot \Bx) \sigma^2 \), which might not be obvious otherwise.
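
To close the loop, here is a small NumPy check (arbitrary vector and normal, chosen by me) that the Pauli sandwich \( -\Bsigma \cdot \ncap \, \Bsigma \cdot \Bx \, \Bsigma \cdot \ncap \) agrees with the usual vector form of the reflection, and that for \( \ncap \) along the y-axis it reproduces the transpose.

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def sdot(v):
    """Return sigma . v for a 3-vector v."""
    return sum(vi * si for vi, si in zip(v, s))

x = np.array([0.4, -1.2, 2.0])                          # arbitrary vector
n = np.array([1.0, 2.0, -2.0]); n /= np.linalg.norm(n)  # arbitrary unit normal

x_ref = x - 2 * (x @ n) * n                             # flip the component along n

# Pauli sandwich form of the same reflection
print(np.allclose(-sdot(n) @ sdot(x) @ sdot(n), sdot(x_ref)))

# special case n along y: -sigma^2 (sigma.x) sigma^2 = (sigma.x)^T
print(np.allclose(-s[1] @ sdot(x) @ s[1], sdot(x).T))
```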

References

[1] C. Doran and A.N. Lasenby. Geometric Algebra for Physicists. Cambridge University Press, Cambridge, UK, 1st edition, 2003.