## PHY2403H Quantum Field Theory. Lecture 3: Lorentz transformations and a scalar action. Taught by Prof. Erich Poppitz

September 18, 2018

### DISCLAIMER: Very rough notes from class. Some additional side notes, but otherwise barely edited.

These are notes for the UofT course PHY2403H, Quantum Field Theory, taught by Prof. Erich Poppitz.

## Determinant of Lorentz transformations

We require that Lorentz transformations leave the dot product invariant, that is $$x \cdot y = x' \cdot y'$$, or
\label{eqn:qftLecture3:20}
x^\mu g_{\mu\nu} y^\nu = {x'}^\mu g_{\mu\nu} {y'}^\nu.

Explicitly, with coordinate transformations
\label{eqn:qftLecture3:40}
\begin{aligned}
{x'}^\mu &= {\Lambda^\mu}_\rho x^\rho \\
{y'}^\mu &= {\Lambda^\mu}_\rho y^\rho
\end{aligned}

such a requirement is equivalent to demanding that
\label{eqn:qftLecture3:500}
\begin{aligned}
x^\mu g_{\mu\nu} y^\nu
&=
{\Lambda^\mu}_\rho x^\rho
g_{\mu\nu}
{\Lambda^\nu}_\kappa y^\kappa \\
&=
x^\mu
{\Lambda^\alpha}_\mu
g_{\alpha\beta}
{\Lambda^\beta}_\nu
y^\nu,
\end{aligned}

or
\label{eqn:qftLecture3:60}
g_{\mu\nu}
=
{\Lambda^\alpha}_\mu
g_{\alpha\beta}
{\Lambda^\beta}_\nu

multiplying by the inverse we find
\label{eqn:qftLecture3:200}
\begin{aligned}
g_{\mu\nu}
{\lr{\Lambda^{-1}}^\nu}_\lambda
&=
{\Lambda^\alpha}_\mu
g_{\alpha\beta}
{\Lambda^\beta}_\nu
{\lr{\Lambda^{-1}}^\nu}_\lambda \\
&=
{\Lambda^\alpha}_\mu
g_{\alpha\lambda} \\
&=
g_{\lambda\alpha}
{\Lambda^\alpha}_\mu.
\end{aligned}

This is now amenable to expressing in matrix form
\label{eqn:qftLecture3:220}
\begin{aligned}
(G \Lambda^{-1})_{\mu\lambda}
&=
(G \Lambda)_{\lambda\mu} \\
&=
((G \Lambda)^\T)_{\mu\lambda} \\
&=
(\Lambda^\T G)_{\mu\lambda},
\end{aligned}

or
\label{eqn:qftLecture3:80}
G \Lambda^{-1}
=
(G \Lambda)^\T.

Taking determinants (using the normal identities for products of determinants, determinants of transposes and inverses), we find
\label{eqn:qftLecture3:100}
det(G)
det(\Lambda^{-1})
=
det(G) det(\Lambda),

or
\label{eqn:qftLecture3:120}
det(\Lambda)^2 = 1,

or
$$det(\Lambda) = \pm 1$$. We will generally ignore the case of reflections in spacetime, which have a negative determinant.

Smart-alec Peeter pointed out after class last time that we can do the same thing more easily in matrix notation
\label{eqn:qftLecture3:140}
\begin{aligned}
x' &= \Lambda x \\
y' &= \Lambda y
\end{aligned}

where
\label{eqn:qftLecture3:160}
\begin{aligned}
x' \cdot y'
&=
(x')^\T G y' \\
&=
x^\T \Lambda^\T G \Lambda y,
\end{aligned}

which we require to be $$x \cdot y = x^\T G y$$ for all four vectors $$x, y$$, that is
\label{eqn:qftLecture3:180}
\Lambda^\T G \Lambda = G.

We can find the result \ref{eqn:qftLecture3:120} immediately without having to first translate from index notation to matrices.
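As a quick aside (mine, not from the lecture), the invariance condition \ref{eqn:qftLecture3:180} and the unit determinant are easy to verify numerically for a sample boost:

```python
import numpy as np

# Minkowski metric with signature (+,-,-,-)
G = np.diag([1.0, -1.0, -1.0, -1.0])

# A boost along x, parametrized by rapidity eta
eta = 0.7
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(eta)
L[0, 1] = L[1, 0] = np.sinh(eta)

# The invariance condition Lambda^T G Lambda = G
assert np.allclose(L.T @ G @ L, G)

# Proper Lorentz transformations have unit determinant
assert np.isclose(np.linalg.det(L), 1.0)
```

The same check passes for any rotation; a spatial reflection satisfies the invariance condition but has determinant $$-1$$.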

## Field theory

The electrostatic potential is an example of a scalar field $$\phi(\Bx)$$ unchanged by SO(3) rotations
\label{eqn:qftLecture3:240}
\Bx \rightarrow \Bx’ = O \Bx,

that is
\label{eqn:qftLecture3:260}
\phi'(\Bx’) = \phi(\Bx).

Here $$\phi'(\Bx’)$$ is the value of the (electrostatic) scalar potential in a primed frame.

However, the electrostatic potential is not a Lorentz scalar, since it transforms as one component of a four vector.
We postulate that there is some scalar field
\label{eqn:qftLecture3:280}
\phi'(x') = \phi(x),

where $$x' = \Lambda x$$ is an SO(1,3) transformation. There are actually no stable particles (fields that persist at long distances) described by Lorentz scalar fields, although there are some unstable scalar particles such as the Higgs, pions, and kaons. However, much of our homework and discussion will be focused on scalar fields, since they are the easiest to start with.

We need to first understand how derivatives $$\partial_\mu \phi(x)$$ transform. Using the chain rule
\label{eqn:qftLecture3:300}
\begin{aligned}
\PD{x^\mu}{\phi(x)}
&=
\PD{x^\mu}{\phi'(x')} \\
&=
\PD{{x'}^\nu}{\phi'(x')}
\PD{{x}^\mu}{{x'}^\nu} \\
&=
\PD{{x'}^\nu}{\phi'(x')}
\partial_\mu \lr{
{\Lambda^\nu}_\rho x^\rho
} \\
&=
\PD{{x'}^\nu}{\phi'(x')}
{\Lambda^\nu}_\mu \\
&=
\PD{{x'}^\nu}{\phi(x)}
{\Lambda^\nu}_\mu.
\end{aligned}

Multiplying by the inverse $${\lr{\Lambda^{-1}}^\mu}_\kappa$$ we get
\label{eqn:qftLecture3:320}
\PD{{x'}^\kappa}{}
=
{\lr{\Lambda^{-1}}^\mu}_\kappa \PD{x^\mu}{}.

This should be familiar, as it is the analogue of the transformation of a lower index (covariant) four vector
\label{eqn:qftLecture3:340}
x'_\kappa = {\lr{\Lambda^{-1}}^\mu}_\kappa x_\mu.
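As a non-lecture sanity check, this derivative transformation rule can be verified symbolically in 1+1 dimensions with sympy, using a made up scalar field:

```python
import sympy as sp

t, x, eta = sp.symbols('t x eta', real=True)

# 1+1 dimensional boost {x'}^mu = {Lambda^mu}_rho x^rho, rapidity eta
Lam = sp.Matrix([[sp.cosh(eta), sp.sinh(eta)],
                 [sp.sinh(eta), sp.cosh(eta)]])
Lam_inv = sp.simplify(Lam.inv())  # the boost with rapidity -eta

# An arbitrary (made up) scalar field phi(t, x)
phi = sp.exp(t) + x**3

# phi'(x') = phi(x) means phi'(y) = phi(Lambda^{-1} y)
tp, xp = sp.symbols('tp xp', real=True)
old = Lam_inv * sp.Matrix([tp, xp])
phip = phi.subs({t: old[0], x: old[1]})

# Left side: d phi'/d x'^kappa, evaluated at x' = Lambda x
new = Lam * sp.Matrix([t, x])
lhs = sp.Matrix([sp.diff(phip, tp), sp.diff(phip, xp)])
lhs = lhs.subs({tp: new[0], xp: new[1]})

# Right side: {(Lambda^{-1})^mu}_kappa d phi/d x^mu
rhs = Lam_inv.T * sp.Matrix([sp.diff(phi, t), sp.diff(phi, x)])

# The two sides agree at a sample point
vals = {eta: 0.3, t: 0.5, x: 1.2}
assert all(abs(float(v)) < 1e-10 for v in (lhs - rhs).subs(vals))
```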

## Actions

We will start with a classical action, and quantize to determine a QFT. In mechanics we have the particle position $$q(t)$$, which is a classical field in 1+0 time and space dimensions. Our action is
\label{eqn:qftLecture3:360}
S
= \int dt \LL(t)
= \int dt \lr{
\inv{2} \dot{q}^2 - V(q)
}.

This action is local in time: it depends on the position of the particle only at a single time. One could imagine a more complicated action that depends on the position at future or past times
\label{eqn:qftLecture3:380}
S
= \int dt' \, q(t') K( t' - t ),

but we don’t seem to find such actions in classical mechanics.

### Principles determining the form of the action.

• relativity (the action is invariant under Lorentz transformation).
• locality (the action depends on the fields and their derivatives only at a given $$(t, \Bx)$$).
• gauge principle (the action should be invariant under gauge transformations). We won’t discuss this in detail right now since we will start with studying scalar fields.
Recall that for Maxwell’s equations a gauge transformation has the form
\label{eqn:qftLecture3:520}
\phi \rightarrow \phi + \dot{\chi}, \BA \rightarrow \BA - \spacegrad \chi.

Suppose we have a real scalar field $$\phi(x)$$ where $$x \in \mathbb{R}^{1,d-1}$$. We will be integrating over space and time $$\int dt d^{d-1} x$$ which we will write as $$\int d^d x$$. Our action is
\label{eqn:qftLecture3:400}
S = \int d^d x \lr{ \text{Some action density to be determined } }

The analogue of $$\dot{q}^2$$ is
\label{eqn:qftLecture3:420}
\begin{aligned}
\lr{ \PD{x^\mu}{\phi} }
\lr{ \PD{x^\nu}{\phi} }
g^{\mu\nu}
&=
(\partial_\mu \phi) (\partial_\nu \phi) g^{\mu\nu} \\
&= \partial^\mu \phi \partial_\mu \phi.
\end{aligned}

This has both time and spatial components, that is
\label{eqn:qftLecture3:440}
\partial^\mu \phi \partial_\mu \phi = \dot{\phi}^2 - (\spacegrad \phi)^2,

so the desired simplest scalar action is
\label{eqn:qftLecture3:460}
S = \int d^d x \lr{ \dot{\phi}^2 - (\spacegrad \phi)^2 }.

The measure transforms using a Jacobian, which we have seen is the Lorentz transform matrix, and has unit determinant
\label{eqn:qftLecture3:480}
d^d x’ = d^d x \Abs{ det( \Lambda^{-1} ) } = d^d x.

## Question: Four vector form of the Maxwell gauge transformation.

Show that the transformation
\label{eqn:qftLecture3:580}
A^\mu \rightarrow A^\mu + \partial^\mu \chi

is the desired four-vector form of the gauge transformation \ref{eqn:qftLecture3:520}, that is
\label{eqn:qftLecture3:540}
\begin{aligned}
j^\nu
&= \partial_\mu {F’}^{\mu\nu} \\
&= \partial_\mu F^{\mu\nu}.
\end{aligned}

Also relate this four-vector gauge transformation to the spacetime split.

\label{eqn:qftLecture3:560}
\begin{aligned}
\partial_\mu {F'}^{\mu\nu}
&=
\partial_\mu \lr{ \partial^\mu {A'}^\nu - \partial^\nu {A'}^\mu } \\
&=
\partial_\mu \lr{
\partial^\mu \lr{ A^\nu + \partial^\nu \chi }
- \partial^\nu \lr{ A^\mu + \partial^\mu \chi }
} \\
&=
\partial_\mu {F}^{\mu\nu}
+
\partial_\mu \partial^\mu \partial^\nu \chi
-
\partial_\mu \partial^\nu \partial^\mu \chi \\
&=
\partial_\mu {F}^{\mu\nu},
\end{aligned}

by equality of mixed partials. Expanding \ref{eqn:qftLecture3:580} explicitly we find
\label{eqn:qftLecture3:600}
{A'}^\mu = A^\mu + \partial^\mu \chi,

which is
\label{eqn:qftLecture3:620}
\begin{aligned}
\phi' = {A'}^0 &= A^0 + \partial^0 \chi = \phi + \dot{\chi} \\
\BA' \cdot \Be_k = {A'}^k &= A^k + \partial^k \chi = \lr{ \BA - \spacegrad \chi } \cdot \Be_k.
\end{aligned}

The last of which can be written in vector notation as $$\BA' = \BA - \spacegrad \chi$$.
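As a side note (not part of the solution), the gauge invariance of $$F^{\mu\nu}$$ can also be checked mechanically with a symbolic algebra package. This is a sketch using sympy, with arbitrary made up potential and gauge functions:

```python
import sympy as sp

# Coordinates; the metric is diag(1,-1,-1,-1) and is its own inverse
t, x, y, z = sp.symbols('t x y z', real=True)
X = [t, x, y, z]
sign = [1, -1, -1, -1]  # g^{mu mu} for the diagonal metric

# A hypothetical four potential A^mu and gauge function chi
A = [sp.Function(f'A{mu}')(*X) for mu in range(4)]
chi = sp.Function('chi')(*X)

def d_up(mu, f):
    # partial^mu f = g^{mu nu} partial_nu f (diagonal metric)
    return sign[mu] * sp.diff(f, X[mu])

def F(pot):
    # F^{mu nu} = partial^mu A^nu - partial^nu A^mu
    return [[d_up(m, pot[n]) - d_up(n, pot[m]) for n in range(4)]
            for m in range(4)]

# Gauge transformed potential: A'^mu = A^mu + partial^mu chi
Ap = [A[m] + d_up(m, chi) for m in range(4)]

FA, FAp = F(A), F(Ap)

# F is unchanged, by equality of mixed partials
assert all(sp.simplify(FAp[m][n] - FA[m][n]) == 0
           for m in range(4) for n in range(4))
```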

## UofT QFT Fall 2018 Lecture 2. Units, scales, and Lorentz transformations. Taught by Prof. Erich Poppitz

September 17, 2018

## Natural units.

\label{eqn:qftLecture2:20}
\begin{aligned}
[\Hbar] &= [\text{action}] = M \frac{L^2}{T^2} T = \frac{M L^2}{T} \\
[c] &= [\text{velocity}] = \frac{L}{T} \\
[\text{energy}] &= M \frac{L^2}{T^2}.
\end{aligned}

Setting $$c = 1$$ means
\label{eqn:qftLecture2:240}
\frac{L}{T} = 1

and setting $$\Hbar = 1$$ means
\label{eqn:qftLecture2:260}
[\Hbar] = [\text{action}] = M L {\frac{L}{T}} = M L

therefore
\label{eqn:qftLecture2:280}
[L] = \inv{\text{mass}}

and
\label{eqn:qftLecture2:300}
[\text{energy}] = M \frac{L^2}{T^2} = M, \qquad \text{(measured in eV)}.

Summary

• $$\text{energy} \sim \text{eV}$$
• $$\text{distance} \sim \inv{M}$$
• $$\text{time} \sim \inv{M}$$

From the fine structure constant
\label{eqn:qftLecture2:320}
\alpha = \frac{e^2}{4 \pi {\Hbar c}},

which is dimensionless ($$\sim 1/137$$), we see that electric charge is also dimensionless in these units.

Some useful numbers in natural units

\label{eqn:qftLecture2:40}
\begin{aligned}
m_\txte &\sim 10^{-27} \text{g} \sim 0.5 \text{MeV} \\
m_\txtp &\sim 2000 m_\txte \sim 1 \text{GeV} \\
m_\pi &\sim 140 \text{MeV} \\
m_\mu &\sim 105 \text{MeV} \\
\Hbar c &\sim 200 \text{MeV} \,\text{fm} = 1
\end{aligned}
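These rules of thumb are easy to spot check numerically. The values below are approximate CODATA numbers hard coded by me, not from the lecture:

```python
# Approximate physical constants (hard coded, roughly CODATA values)
hbar_c = 197.327   # hbar c in MeV fm
m_e = 0.511        # electron rest energy in MeV
m_p = 938.272      # proton rest energy in MeV

# m_p ~ 2000 m_e (more precisely ~1836)
assert 1800 < m_p / m_e < 1900

# With hbar = c = 1, a mass m corresponds to the length hbar c/(m c^2):
# here, the reduced electron Compton wavelength in fm
length_e = hbar_c / m_e
assert 385 < length_e < 387  # ~386 fm
```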

## Gravity

Interaction energy of two particles

\label{eqn:qftLecture2:60}
G_\txtN \frac{m_1 m_2}{r}

\label{eqn:qftLecture2:80}
[\text{energy}] \sim [G_\txtN] \frac{M^2}{L}

\label{eqn:qftLecture2:100}
[G_\txtN]
\sim
[\text{energy}] \frac{L}{M^2}

but energy times distance is dimensionless (an action) in our units, so

\label{eqn:qftLecture2:120}
[G_\txtN]
\sim
\frac{\text{dimensionless}}{M^2}

\label{eqn:qftLecture2:140}
\frac{G_\txtN}{\Hbar c} \sim \inv{M^2} \sim \inv{\lr{10^{19} \text{GeV}}^2}

Planck mass

\label{eqn:qftLecture2:160}
M_{\text{Planck}} \sim \sqrt{\frac{\Hbar c}{G_\txtN}}
\sim 10^{-5} \text{g} \sim 10^{19} \text{GeV}
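A quick numeric check of this estimate (my addition, using approximate SI values), which gives $$\sim 2 \times 10^{-5}$$ g, i.e. $$\sim 10^{19}$$ GeV:

```python
import math

# Approximate SI values (hard coded)
hbar = 1.0546e-34  # J s
c = 2.9979e8       # m/s
G_N = 6.674e-11    # m^3 / (kg s^2)

M_planck = math.sqrt(hbar * c / G_N)   # kg
assert 2.1e-8 < M_planck < 2.2e-8      # ~2.18e-8 kg = 2.18e-5 g

# In energy units, M_planck c^2 in GeV (1 GeV ~ 1.602e-10 J)
M_planck_GeV = M_planck * c**2 / 1.602e-10
assert 1.1e19 < M_planck_GeV < 1.3e19  # ~1.22e19 GeV
```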

We can revisit the scale diagram from last lecture in terms of MeV mass/energy values, as sketched in fig. 1.

fig. 1. Scales, take II.

At the classical electron radius scale, we consider phenomena such as back reaction of radiation, the self energy of electrons. At the Compton wavelength we have to allow for production of multiple particle pairs. At Bohr radius scales we must start using QM instead of classical mechanics.

## Cross section.

Verbal discussion of cross sections, not captured in these notes. Roughly, the cross section measures the number of scattering events per unit time, normalized by the flux of some source through an area.

We’ll compute the cross section of a number of different systems in this course. The cross section is relevant in scattering such as the electron-electron scattering sketched in fig. 2.

fig. 2. Electron electron scattering.

We assume that QED is highly relativistic. In natural units, our scale factor is basically the square of the electric charge
\label{eqn:qftLecture2:180}
\alpha \sim e^2,

so the cross section has the form
\label{eqn:qftLecture2:200}
\sigma \sim \frac{\alpha^2}{E^2} \lr{ 1 + O(\alpha) + O(\alpha^2) + \cdots }

In gravity we could consider scattering of electrons, where $$G_\txtN$$ takes the place of $$\alpha$$. However, $$G_\txtN$$ has dimensions.

For electron-electron scattering due to gravitons

\label{eqn:qftLecture2:220}
\sigma \sim G_\txtN^2 E^2 \lr{ 1 + G_\txtN E^2 + \cdots }

Now the cross section grows with energy. This will cause some problems (violating unitarity: probabilities greater than 1!) when $$O(G_\txtN E^2) = 1$$.

In any quantum field theory where the coupling constant is not dimensionless, we have the same sort of problem at some scale.

The point is that we can get far considering just dimensional analysis.

If the coupling constant has a dimension $$(1/\text{mass})^N\,, N > 0$$, then unitarity will be violated at high energy. One such theory is the four-fermion (Fermi) theory of beta decay, a precursor of electroweak theory, which had a coupling constant with dimensions of inverse mass squared, $$G_\txtF \sim (1/{100 \text{GeV}})^2$$. Restoring unitarity was the motivation for introducing the Higgs theory.
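As a rough check of that scale (my numbers, not the lecture's), with the measured Fermi constant the energy at which $$G_\txtF E^2 \sim 1$$ is:

```python
import math

# Measured Fermi constant, approximately (hard coded)
G_F = 1.166e-5  # GeV^-2

# Unitarity trouble sets in when G_F E^2 ~ 1
E_star = 1.0 / math.sqrt(G_F)  # GeV
assert 290 < E_star < 300      # ~293 GeV, i.e. the ~100 GeV electroweak scale
```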

## Lorentz transformations.

The goal, perhaps not for today, is to study the simplest (relativistic) scalar field theory. First studied classically, and then consider such a quantum field theory.

How is relativity implemented when we write the Lagrangian and action?

Our first step must be to consider Lorentz transformations and the Lorentz group.

Spacetime (Minkowski space) is $$\mathbb{R}^{3,1}$$ (or $$\mathbb{R}^{d-1,1}$$). Our coordinates are

\label{eqn:qftLecture2:340}
(c t, x^1, x^2, x^3) = (c t, \Br).

Here, we’ve scaled the time scale by $$c$$ so that we measure time and space in the same dimensions. We write this as

\label{eqn:qftLecture2:360}
x^\mu = (x^0, x^1, x^2, x^3),

where $$\mu = 0, 1, 2, 3$$, and call this a “4-vector”. These are called the space-time coordinates of an event, which tell us where and when an event occurs.

For two events whose spacetime coordinates differ by $$dx^0, dx^1, dx^2, dx^3$$ we introduce the notion of a space time \underline{interval}

\label{eqn:qftLecture2:380}
\begin{aligned}
ds^2
&= c^2 dt^2
- (dx^1)^2
- (dx^2)^2
- (dx^3)^2 \\
&=
\sum_{\mu, \nu = 0}^3 g_{\mu\nu} dx^\mu dx^\nu.
\end{aligned}

Here $$g_{\mu\nu}$$ is the Minkowski space metric, an object with two indexes that run from 0 to 3. It can be represented as a diagonal matrix

\label{eqn:qftLecture2:400}
g_{\mu\nu} \sim
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
\end{bmatrix}

i.e.
\label{eqn:qftLecture2:420}
\begin{aligned}
g_{00} &= 1 \\
g_{11} &= -1 \\
g_{22} &= -1 \\
g_{33} &= -1 \\
\end{aligned}

We will use the Einstein summation convention, where any repeated upper and lower indexes are considered summed over. That is \ref{eqn:qftLecture2:380} is written with an implied sum
\label{eqn:qftLecture2:440}
ds^2 = g_{\mu\nu} dx^\mu dx^\nu.

Explicit expansion:
\label{eqn:qftLecture2:460}
\begin{aligned}
ds^2
&= g_{\mu\nu} dx^\mu dx^\nu \\
&=
g_{00} dx^0 dx^0
+ g_{11} dx^1 dx^1
+ g_{22} dx^2 dx^2
+ g_{33} dx^3 dx^3 \\
&=
(1) dx^0 dx^0
+ (-1) dx^1 dx^1
+ (-1) dx^2 dx^2
+ (-1) dx^3 dx^3.
\end{aligned}

Recall that rotations (with orthogonal matrix representations) are transformations that leave the dot product unchanged, that is

\label{eqn:qftLecture2:480}
\begin{aligned}
(R \Bx) \cdot (R \By)
&= \Bx^\T R^\T R \By \\
&= \Bx^\T \By \\
&= \Bx \cdot \By,
\end{aligned}

where $$R$$ is an orthogonal 3×3 rotation matrix. The transformations that leave the dot product unchanged have orthogonal matrix representations, $$R^\T R = 1$$. We call the set of such transformations with unit determinant the group SO(3).

We call a linear transformation acting on 4 vectors a Lorentz transformation if it leaves the spacetime interval (i.e. the inner product of 4 vectors) invariant. That is, a transformation that leaves
\label{eqn:qftLecture2:500}
x^\mu y^\nu g_{\mu\nu} = x^0 y^0 - x^1 y^1 - x^2 y^2 - x^3 y^3

unchanged.

Suppose that transformation has a 4×4 matrix form

\label{eqn:qftLecture2:520}
{x'}^\mu = {\Lambda^\mu}_\nu x^\nu

For an example of a possible $$\Lambda$$, consider the transformation sketched in fig. 3.

fig. 3. Boost transformation.

We know that boost has the form
\label{eqn:qftLecture2:540}
\begin{aligned}
x &= \frac{x' + v t'}{\sqrt{1 - v^2/c^2}} \\
y &= y' \\
z &= z' \\
t &= \frac{t' + (v/c^2) x'}{\sqrt{1 - v^2/c^2}}
\end{aligned}

(this is a boost along the x-axis, not y as I’d drawn),
or
\label{eqn:qftLecture2:560}
\begin{bmatrix}
c t \\
x \\
y \\
z
\end{bmatrix}
=
\begin{bmatrix}
\inv{\sqrt{1 - v^2/c^2}} & \frac{v/c}{\sqrt{1 - v^2/c^2}} & 0 & 0 \\
\frac{v/c}{\sqrt{1 - v^2/c^2}} & \inv{\sqrt{1 - v^2/c^2}} & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
c t' \\
x' \\
y' \\
z'
\end{bmatrix}

Other examples include rotations ($${\Lambda^0}_0 = 1$$, zeros in $${\Lambda^0}_k$$ and $${\Lambda^k}_0$$, and a rotation matrix in the remainder.)
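As a quick numerical aside (not from the lecture), the explicit boost matrix above really does leave the interval unchanged, as one can check for a sample event:

```python
import numpy as np

v = 0.6  # velocity in units of c
gamma = 1.0 / np.sqrt(1.0 - v**2)

# The x-axis boost matrix taking (ct', x', y', z') to (ct, x, y, z)
L = np.array([[gamma, gamma * v, 0.0, 0.0],
              [gamma * v, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

G = np.diag([1.0, -1.0, -1.0, -1.0])

xp = np.array([2.0, 0.5, -1.0, 3.0])  # some event in the primed frame
x = L @ xp

# The interval g_{mu nu} x^mu x^nu is frame independent
assert np.isclose(x @ G @ x, xp @ G @ xp)
```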

Back to Lorentz transformations ($$\text{SO}(1,3)^+$$), let
\label{eqn:qftLecture2:600}
\begin{aligned}
{x'}^\mu &= {\Lambda^\mu}_\nu x^\nu \\
{y'}^\kappa &= {\Lambda^\kappa}_\rho y^\rho
\end{aligned}

The dot product
\label{eqn:qftLecture2:620}
g_{\mu \kappa}
{x'}^\mu
{y'}^\kappa
=
g_{\mu \kappa}
{\Lambda^\mu}_\nu
{\Lambda^\kappa}_\rho
x^\nu
y^\rho
=
g_{\nu\rho}
x^\nu
y^\rho,

where the last step introduces the invariance requirement of the transformation. That is

\label{eqn:qftLecture2:640}
\boxed{
g_{\nu\rho}
=
g_{\mu \kappa}
{\Lambda^\mu}_\nu
{\Lambda^\kappa}_\rho.
}

### Upper and lower indexes

We’ve defined

\label{eqn:qftLecture2:660}
x^\mu = (t, x^1, x^2, x^3)

We could also define a four vector with lower indexes
\label{eqn:qftLecture2:680}
x_\nu = g_{\nu\mu} x^\mu = (t, -x^1, -x^2, -x^3).

That is
\label{eqn:qftLecture2:700}
\begin{aligned}
x_0 &= x^0 \\
x_1 &= -x^1 \\
x_2 &= -x^2 \\
x_3 &= -x^3.
\end{aligned}

which allows us to write the dot product as simply $$x^\mu y_\mu$$.

We can also define a metric tensor with upper indexes

\label{eqn:qftLecture2:401}
g^{\mu\nu} \sim
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
\end{bmatrix}

This is the inverse matrix of $$g_{\mu\nu}$$, and it satisfies
\label{eqn:qftLecture2:720}
g^{\mu \nu} g_{\nu\rho} = {\delta^\mu}_\rho

Exercise: Check:
\label{eqn:qftLecture2:740}
\begin{aligned}
g_{\mu\nu} x^\mu y^\nu
&= x_\nu y^\nu \\
&= x^\nu y_\nu \\
&= g^{\mu\nu} x_\mu y_\nu \\
&= {\delta^\mu}_\nu x_\mu y^\nu.
\end{aligned}
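These equalities are trivial to verify numerically, which may help build confidence with the index gymnastics (a non-lecture aside):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])  # g_{mu nu}; g^{mu nu} is the same matrix

x_up = np.array([1.0, 2.0, 3.0, 4.0])   # x^mu
y_up = np.array([0.5, -1.0, 2.0, 0.0])  # y^mu
x_dn = g @ x_up                          # x_mu = g_{mu nu} x^nu
y_dn = g @ y_up

dot = x_up @ g @ y_up                    # g_{mu nu} x^mu y^nu
assert np.isclose(dot, x_dn @ y_up)      # x_nu y^nu
assert np.isclose(dot, x_up @ y_dn)      # x^nu y_nu
assert np.isclose(dot, x_dn @ g @ y_dn)  # g^{mu nu} x_mu y_nu
```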

Class ended around this point, but it appeared that we were heading this direction:

Returning to the Lorentz invariant and multiplying both sides of
\ref{eqn:qftLecture2:640} with an inverse Lorentz transformation $$\Lambda^{-1}$$, we find
\label{eqn:qftLecture2:760}
\begin{aligned}
g_{\nu\rho}
{\lr{\Lambda^{-1}}^\rho}_\alpha
&=
g_{\mu \kappa}
{\Lambda^\mu}_\nu
{\Lambda^\kappa}_\rho
{\lr{\Lambda^{-1}}^\rho}_\alpha \\
&=
g_{\mu \kappa}
{\Lambda^\mu}_\nu
{\delta^\kappa}_\alpha \\
&=
g_{\mu \alpha}
{\Lambda^\mu}_\nu,
\end{aligned}

or
\label{eqn:qftLecture2:780}
\lr{\Lambda^{-1}}_{\nu \alpha} = \Lambda_{\alpha \nu}.

This is clearly analogous to $$R^\T = R^{-1}$$, although the index notation obscures things considerably. Prof. Poppitz said that next week this would all lead to showing that the determinant of any Lorentz transformation was $$\pm 1$$.

For what it’s worth, it seems to me that this index notation makes life a lot harder than it needs to be, at least for a matrix related question (i.e. determinant of the transformation). In matrix/column-(4)-vector notation, let $$x' = \Lambda x, y' = \Lambda y$$ be two four vector transformations, then
\label{eqn:qftLecture2:800}
x' \cdot y' = {x'}^T G y' = (\Lambda x)^T G \Lambda y = x^T ( \Lambda^T G \Lambda) y = x^T G y,

so
\label{eqn:qftLecture2:820}
\boxed{
\Lambda^T G \Lambda = G.
}

Taking determinants of both sides gives $$-(det(\Lambda))^2 = -1$$, and thus $$det(\Lambda) = \pm 1$$.

## UofT QFT Fall 2018 phy2403 ; Lecture 1, What is a field? Taught by Prof. Erich Poppitz

September 14, 2018


## What is a field?

A field is a map from space(time) to some set of numbers. This set of numbers may be organized in some way, possibly as scalars or vectors, …

One example is a function of the familiar spacetime coordinates, where $$\Bx \in \mathbb{R}^{d}$$

\label{eqn:qftLecture1:20}
(\Bx, t) \in \mathbb{R}^{\lr{d,1}}.

Examples of fields:

1. $$0 + 1$$ dimensional “QFT”, where the spatial dimension is zero dimensional and we have one time dimension. Fields in this case are just functions of time $$x(t)$$. That is, particle mechanics is a 0 + 1 dimensional classical field theory. We know that classical mechanics is described by the action
\label{eqn:qftLecture1:40}
S = \frac{m}{2} \int dt \xdot^2.

This is non-relativistic. We can make this relativistic by recognizing it as the leading order term in the Taylor expansion of
\label{eqn:qftLecture1:60}
S = - m c^2 \int dt \sqrt{ 1 - \xdot^2/c^2 }.

This is the classical field theory of $$x(t)$$. The “QFT” of $$x(t)$$ is just quantum mechanics.
All of you know quantum mechanics. If you don’t just leave. Not this way (pointing to the window), but this way (pointing to the door).
The quantum mechanical amplitude for a particle to propagate from $$x'$$ to $$x$$ is
\label{eqn:qftLecture1:80}
\bra{x} e^{-i H t/\,\hbar } \ket{x'},

which can be found by evaluating the “Feynman path integral”
\label{eqn:qftLecture1:100}
\sum_{\text{all paths } x} e^{i S[x]/\,\hbar}

This will be particularly useful for QFT, despite the fact that such a sum is really hard to evaluate (try it for the Hydrogen atom for example).
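As an aside, the claim above, that the nonrelativistic action is the leading order of the relativistic one, is a one liner to verify with sympy:

```python
import sympy as sp

m, c, v = sp.symbols('m c v', positive=True)

# Relativistic free particle Lagrangian
L_rel = -m * c**2 * sp.sqrt(1 - v**2 / c**2)

# Taylor expand in v: the rest energy constant plus the Newtonian kinetic term
expansion = sp.series(L_rel, v, 0, 4).removeO()
assert sp.expand(expansion + m * c**2 - m * v**2 / 2) == 0
```

The constant $$-m c^2$$ term does not affect the equations of motion, which is why the nonrelativistic action omits it.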
2. $$3 + 0$$ dimensional field theory, where we have 3 spatial dimensions and 0 time dimensions. Classical equilibrium static systems. The field may have a structure like
\label{eqn:qftLecture1:120}
\Bx \rightarrow \BM(\Bx),

for example, magnetization.
We can write the solution to such a system using the partition function
\label{eqn:qftLecture1:140}
Z \sim \sum_{\text{all } \BM(\Bx)} e^{-E[\BM]/\kB T}.

For such a system the energy function may be like
\label{eqn:qftLecture1:160}
E[\BM] = \int d^3 \Bx \lr{ a \BM^2(\Bx) + b \BM^4(\Bx) + c \sum_{i = 1}^3 \lr{ \PD{x_i}{} \BM }
\cdot \lr{ \PD{x_i}{} \BM }
}.

There is an analogy between the partition function and the Feynman path integral, as both are sums over all possible configurations, weighted by an exponential.
This will probably be the last time that we mention the partition function and condensed matter physics in this term for this class.
3. $$3 + 1$$ dimensional field theories, with 3 spatial dimensions and 1 time dimension.
Example, electromagnetism with $$\BE(\Bx, t), \BB(\Bx, t)$$ or better use $$\BA(\Bx, t), \phi(\Bx, t)$$. The action is
\label{eqn:qftLecture1:180}
S = -\inv{16 \pi c} \int d^3 \Bx dt \lr{ \BE^2 - \BB^2 }.

This is our first example of a relativistic field theory in $$3 + 1$$ dimensions. It will take us a while to get there.

These are examples of classical field theories, as are fluid dynamics and general relativity. We want to consider electromagnetism because that is where everything starts to fall apart (i.e. blackbody radiation, relating to the equilibrium states of radiating matter). Part of the resolution of this was the quantization of the energy states, where we studied the normal modes of electromagnetic radiation in a box. These modes can be considered an infinite number of radiating oscillators, which resulted in the ultraviolet catastrophe. This was resolved by Planck by requiring those energy states to be quantized (an excellent discussion of this can be found in [1]). In that sense you have already seen quantum field theory.

For electromagnetism the classical description is not always good. For example, the self energy $$e^2/r_\txte$$ of a point charge diverges as $$r_\txte \rightarrow 0$$.
We can define the classical radius of the electron by
\label{eqn:qftLecture1:200}
\frac{e^2}{r^{\textrm{cl}}_{\txte}} \sim m_\txte c^2,

or
\label{eqn:qftLecture1:220}
r^{\textrm{cl}}_{\txte} \sim \frac{e^2}{m_\txte c^2} \sim 10^{-15} \text{m}

Don’t treat this very seriously, but it becomes useful at frequencies $$\omega \sim c/r_\txte$$, where $$r_\txte/c$$ is approximately the time for light to cross a distance $$r_\txte$$.
At frequencies like this, we should not believe the solutions that are obtained by classical electrodynamics.
In particular, self-accelerating solutions appear at these frequencies in classical EM. This is approximately $$\omega_\conj \sim 10^{23} \,\text{Hz}$$, or
\label{eqn:qftLecture1:240}
\begin{aligned}
\,\hbar \omega_\conj
&\sim \lr{ 10^{-21} \,\text{MeV s}} \lr{ 10^{23} \,\text{1/s} }\\
&\sim 100 \text{MeV}.
\end{aligned}

At such frequencies particle creation becomes possible.

## Scales

A (dimensionless) value that is very useful in determining scale is
\label{eqn:qftLecture1:260}
\alpha = \frac{e^2}{4 \pi \,\hbar c} \sim \inv{137},

called the fine structure constant, which relates three important scales relevant to quantum mechanics, as sketched in fig. 1.

fig. 1. Interesting scales in quantum mechanics.

• The Bohr radius (large end of the scale).
• The Compton wavelength of the electron.
• The classical radius of the electron.

A quick motivation for the Bohr radius was mentioned in passing in class while discussing scale, following the high school method of deriving the Balmer series ([2]).

That method assumes a circular electron trajectory ($$i = \Be_1 \Be_2$$)
\label{eqn:qftLecture1:280}
\begin{aligned}
\Br &= r \Be_1 e^{i \omega t} \\
\Bv &= \omega r \Be_2 e^{i \omega t} \\
\Ba &= -\omega^2 r \Be_1 e^{i \omega t}
\end{aligned}

The Coulomb force (in cgs units) on the electron is
\label{eqn:qftLecture1:300}
\BF = m\Ba = -m \omega^2 r \Be_1 e^{i \omega t} = \frac{-e (e)}{r^2} \Be_1 e^{i \omega t},

or
\label{eqn:qftLecture1:320}
m \lr{ \frac{v}{r}}^2 r = \frac{e^2}{r^2},

giving
\label{eqn:qftLecture1:340}
m v^2 = \frac{e^2}{r}.

The energy of the system, including both kinetic and potential contributions (with the potential reference point at infinity), is
\label{eqn:qftLecture1:360}
\begin{aligned}
E
&= \inv{2} m v^2 - \frac{e^2}{r} \\
&= - \inv{2} m v^2 \sim \,\hbar \omega = \,\hbar \frac{v}{r},
\end{aligned}

or
\label{eqn:qftLecture1:380}
m v r \sim \,\hbar.

Eliminating $$v$$ using \ref{eqn:qftLecture1:340}, assuming a ground state radius $$r = a_0$$ gives

\label{eqn:qftLecture1:400}
a_0 \sim \frac{\hbar^2}{m e^2}.

The Bohr radius is of the order $$10^{-10} \text{m}$$.

### Compton wavelength.

When a particle’s momentum approaches $$m_\txte c$$ (i.e. relativistic velocities), the uncertainty relation ($$\Delta x \Delta p \sim \,\hbar$$) implies that the variation in position must be of the order
\label{eqn:qftLecture1:420}
\lambda_\txtc \sim \frac{\hbar}{m_\txte c},

called the Compton wavelength.
Similarly, when the length scales are reduced to the Compton wavelength, the momentum increases to relativistic levels.
Because of the relativistic velocities at the Compton wavelength, particle creation and annihilation occurs and any theory has to account for multiple particle states.

### Relations.

Scaling the Bohr radius once by the fine structure constant, we obtain the Compton wavelength (after dropping factors of $$4\pi$$)
\label{eqn:qftLecture1:440}
\begin{aligned}
a_0 \alpha
&= \frac{\hbar^2}{m e^2}
\frac{e^2}{4 \pi \,\hbar c} \\
&= \frac{\hbar}{4 \pi m c} \\
&\sim
\frac{\hbar}{m c} \\
&= \lambda_\txtc.
\end{aligned}

Scaling once more, we obtain (after dropping another $$4\pi$$) the classical electron radius
\label{eqn:qftLecture1:n}
\begin{aligned}
\lambda_\txtc \alpha
&=
\frac{e^2}{4 \pi m c^2} \\
&\sim
\frac{e^2}{m c^2}.
\end{aligned}
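Numerically, this ladder of scales works out as follows (approximate SI values hard coded by me, with factors of $$4\pi$$ dropped as above):

```python
# Approximate SI values (hard coded)
alpha = 1 / 137.036  # fine structure constant
hbar = 1.0546e-34    # J s
m_e = 9.109e-31      # electron mass, kg
c = 2.9979e8         # m/s

a0 = hbar / (m_e * c * alpha)  # Bohr radius ~ hbar^2/(m e^2)
lambda_c = a0 * alpha          # reduced Compton wavelength hbar/(m c)
r_cl = lambda_c * alpha        # classical electron radius ~ e^2/(m c^2)

assert 5.2e-11 < a0 < 5.4e-11        # ~0.53 Angstrom
assert 3.8e-13 < lambda_c < 3.9e-13  # ~3.9e-13 m
assert 2.7e-15 < r_cl < 2.9e-15      # ~2.8e-15 m
```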

# References

[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.

[2] A.P. French and E.F. Taylor. An Introduction to Quantum Physics. CRC Press, 1998.

## Applied vanity press

Amazon’s createspace turns out to be a very cost effective way to get a personal color copy of large pdf (>250 pages) to markup for review. The only hassle was having to use their app to create cover art (although that took less time than commuting downtown to one of the cheap copy shops near the university.)

As a side effect, after I edit it, I’d have something I could actually list for sale.  Worldwide, I’d guess at least three people would buy it, that is, if they weren’t happy with the pdf version already available.

## The book.

A draft of my book: Geometric Algebra for Electrical Engineers, is now available. I’ve supplied limited distribution copies of some of the early drafts and have had some good review comments of the chapter I (introduction to geometric algebra), and chapter II (multivector calculus) material, but none on the electromagnetism content. In defense of the reviewers, the initial version of the electromagnetism chapter, while it had a lot of raw content, was pretty exploratory and very rough. It’s been cleaned up significantly and is hopefully now more reader friendly.

## Why I wrote this book.

I have been working on a part time M.Eng degree for a number of years. I wasn’t happy with the UofT ECE electromagnetics offerings in recent years, which have been inconsistently offered or unsatisfactory.  For example: the microwave circuits course which sounded interesting, and had an interesting text book, was mind numbing, almost entirely about Smith charts.  I had to go elsewhere to obtain the M.Eng degree requirements. That elsewhere was a project course.

I proposed an electromagnetism project with the following goals:

1. Perform a literature review of applications of geometric algebra to the study of electromagnetism.
2. Identify the subset of the literature that had direct relevance to electrical engineering.
3. Create a complete, and as compact as possible, introduction to the prerequisites required for a graduate or advanced undergraduate electrical engineering student to be able to apply geometric algebra to problems in electromagnetism. With those prerequisites in place, work through the fundamentals of electromagnetism in a geometric algebra context.

In retrospect, doing this project was a mistake. I could have done this work outside of an academic context without paying so much (in both time and money). Somewhere along the way I lost track of the fact that I enrolled on the M.Eng to learn (it provided a way to take grad physics courses on a part time schedule), and got side tracked by degree requirements. Basically I fell victim to a “I may as well graduate” sentiment that would have been better to ignore. All that coupled with the fact that I did not actually get any feedback from my “supervisor”, who did not even read my work (at least so far after one year), made this project-course very frustrating. On the bright side, I really like what I produced, even if I had to do so in isolation.

## Why geometric algebra?

Geometric algebra generalizes vectors, providing algebraic representations of not just directed line segments, but also points, plane segments, volumes, and higher degree geometric objects (hypervolumes). The geometric algebra representation of planes, volumes and hypervolumes requires a vector dot product, a vector multiplication operation, and a generalized addition operation. The dot product provides the length of a vector and a test for whether or not any two vectors are perpendicular. The vector multiplication operation is used to construct directed plane segments (bivectors), and directed volumes (trivectors), which are built from the respective products of two or three mutually perpendicular vectors. The addition operation allows for sums of scalars, vectors, or any products of vectors. Such a sum is called a multivector.

The power to add scalars, vectors, and products of vectors can be exploited to simplify much of electromagnetism. In particular, Maxwell’s equations for isotropic media can be merged into a single multivector equation
\label{eqn:quaternion2maxwellWithGA:20}
\lr{ \spacegrad + \inv{c} \PD{t}{}} \lr{ \BE + I c \BB } = \eta\lr{ c \rho – \BJ },

where $$\spacegrad$$ is the gradient, $$I = \Be_1 \Be_2 \Be_3$$ is the ordered product of the three R^3 basis vectors, $$c = 1/\sqrt{\mu\epsilon}$$ is the group velocity of the medium, $$\eta = \sqrt{\mu/\epsilon}$$, $$\BE, \BB$$ are the electric and magnetic fields, and $$\rho$$ and $$\BJ$$ are the charge and current densities. This can be written as a single equation
\label{eqn:ece2500report:40}
\lr{ \spacegrad + \inv{c} \PD{t}{}} F = J,

where $$F = \BE + I c \BB$$ is the combined (multivector) electromagnetic field, and $$J = \eta\lr{ c \rho – \BJ }$$ is the multivector current.

Encountering Maxwell’s equation in its geometric algebra form leaves the student with more questions than answers. Yes, it is a compact representation, but so are the tensor and differential forms (or even the quaternionic) representations of Maxwell’s equations. The student needs to know how to work with the representation if it is to be useful. It should also be clear how to use the existing conventional mathematical tools of applied electromagnetism, or how to generalize those appropriately. Individually, there are answers available to many of the questions that are generated attempting to apply the theory, but they are scattered and in many cases not easily accessible.

Much of the geometric algebra literature for electrodynamics is presented with a relativistic bias, or assumes high levels of mathematical or physics sophistication. The aim of this work was to make the study of electromagnetism using geometric algebra more accessible, especially to other dumb engineering undergraduates like myself. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. The end product of this project was a fairly small self-contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on $$R^2$$ and $$R^3$$ (64 pages), an introduction to geometric calculus and multivector Green’s functions (64 pages), applications to electromagnetism (82 pages), and some appendices. Many of the fundamental results of electromagnetism are derived directly from the multivector Maxwell’s equation, in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra literature. As a conceptual bridge, the book includes many examples of how to extract familiar conventional results from simpler multivector representations. Also included in the book are some sample calculations exploiting unique capabilities that geometric algebra provides. In particular, vectors in a plane may be manipulated much like complex numbers, which has a number of advantages over working with coordinates explicitly.

## Followup.

In many ways this work only scratches the surface. Many more worked examples, problems, figures, and computer algebra listings should be added. In-depth applications of the derived geometric algebra relationships to problems customarily tackled with separate electric and magnetic field equations should also be incorporated. There are also theoretical holes, topics covered in any conventional introductory electromagnetism text, that are missing. Examples include the Fresnel relationships for transmission and reflection at an interface, in-depth treatment of waveguides, dipole radiation and the motion of charged particles, bound charges, and metamaterials, to name a few. Many of these topics can probably be handled in a coordinate free fashion using geometric algebra. Much work remains to bridge the gap between formalism and application and to make applied electromagnetism using geometric algebra truly accessible, but it is my belief that this book makes some good first steps down this path.

The choice that I made to completely avoid the space time algebra (STA) is somewhat unfortunate. It is exceedingly elegant, especially in a relativistic context. Despite that, I think that this was still a good choice from a pedagogical point of view, as most of the prerequisites for an STA based study will have been taken care of as a side effect, making that study much more accessible.

## Potential solutions to the static Maxwell’s equation using geometric algebra

When neither the electromagnetic field strength $$F = \BE + I \eta \BH$$, nor current $$J = \eta (c \rho – \BJ) + I(c\rho_m – \BM)$$ is a function of time, then the geometric algebra form of Maxwell’s equations is the first order multivector (gradient) equation
\label{eqn:staticPotentials:20}
\spacegrad F = J.

While direct solutions to this equation are possible with the multivector Green’s function for the gradient
\label{eqn:staticPotentials:40}
G(\Bx, \Bx’) = \inv{4\pi} \frac{\Bx – \Bx’}{\Norm{\Bx – \Bx’}^3 },

the aim in this post is to explore second order (potential) solutions in a geometric algebra context. Can we assume that it is possible to find a multivector potential $$A$$ for which
\label{eqn:staticPotentials:60}
F = \spacegrad A,

is a solution to the Maxwell statics equation? If such a solution exists, then Maxwell’s equation is simply
\label{eqn:staticPotentials:80}
\spacegrad^2 A = J,

which can be easily solved using the scalar Green’s function for the Laplacian
\label{eqn:staticPotentials:240}
G(\Bx, \Bx') = -\inv{4\pi} \inv{\Norm{\Bx - \Bx'} },

a beastie that may be easier to convolve than the vector valued Green’s function for the gradient.
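
As a numeric sanity check on these normalizations, the sketch below verifies by central differences that the gradient of the Laplacian Green's function, taken with the conventional $$-1/(4 \pi \Norm{\Bx - \Bx'})$$ normalization, reproduces the vector valued Green's function for the gradient. The evaluation points are arbitrary.

```python
import math

def phi(x, xp):
    # Scalar Green's function for the Laplacian: -1/(4 pi |x - x'|).
    return -1.0 / (4.0 * math.pi * math.dist(x, xp))

def grad_green(x, xp):
    # Vector Green's function for the gradient: (x - x')/(4 pi |x - x'|^3).
    r = math.dist(x, xp)
    return tuple((a - b) / (4.0 * math.pi * r**3) for a, b in zip(x, xp))

def grad_fd(f, x, h=1e-6):
    # Central-difference estimate of the gradient of f at x.
    g = []
    for i in range(3):
        xplus, xminus = list(x), list(x)
        xplus[i] += h
        xminus[i] -= h
        g.append((f(tuple(xplus)) - f(tuple(xminus))) / (2.0 * h))
    return tuple(g)

x, xsrc = (1.0, -0.5, 2.0), (0.2, 0.3, -0.1)   # arbitrary field/source points
numeric = grad_fd(lambda y: phi(y, xsrc), x)
exact = grad_green(x, xsrc)
```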

It is immediately clear that some restrictions must be imposed on the multivector potential $$A$$. In particular, since the field $$F$$ has only vector and bivector grades, this gradient must have no scalar, nor pseudoscalar grades. That is
\label{eqn:staticPotentials:100}
\gpgrade{\spacegrad A}{0,3} = 0.

This constraint on the potential can be avoided if a grade selection operation is built directly into the assumed potential solution, requiring that the field is given by
\label{eqn:staticPotentials:120}
F = \gpgrade{\spacegrad A}{1,2}.

However, after imposing such a constraint, Maxwell’s equation has a much less friendly form
\label{eqn:staticPotentials:140}
\spacegrad \gpgrade{\spacegrad A}{1,2} = J.

Luckily, it is possible to introduce a transformation of potentials, called a gauge transformation, that eliminates the ugly grade selection term, and allows the potential equation to be expressed as a plain old Laplacian. We do so by assuming first that it is possible to find a solution of the Laplacian equation that has the desired grade restrictions. That is
\label{eqn:staticPotentials:160}
\begin{aligned}
\spacegrad^2 A' &= J \\
\gpgrade{\spacegrad A'}{0,3} &= 0,
\end{aligned}

for which $$F = \spacegrad A’$$ is a grade 1,2 solution to $$\spacegrad F = J$$. Suppose that $$A$$ is any formal solution, free of any grade restrictions, to $$\spacegrad^2 A = J$$, and $$F = \gpgrade{\spacegrad A}{1,2}$$. Can we find a function $$\tilde{A}$$ for which $$A = A’ + \tilde{A}$$?

Maxwell’s equation in terms of $$A$$ is
\label{eqn:staticPotentials:180}
\begin{aligned}
J
&= \spacegrad \gpgrade{\spacegrad A}{1,2} \\
&= \spacegrad \lr{ \spacegrad A - \gpgrade{\spacegrad A}{0,3} } \\
&= \spacegrad^2 A - \spacegrad \gpgrade{\spacegrad A}{0,3},
\end{aligned}

or
\label{eqn:staticPotentials:200}
\spacegrad^2 \tilde{A} = \spacegrad \gpgrade{\spacegrad A}{0,3}.

This is a non-homogeneous Laplacian equation that can be solved as-is for $$\tilde{A}$$ using the Green’s function for the Laplacian. Alternatively, we may also solve the equivalent first order system using the Green’s function for the gradient.
\label{eqn:staticPotentials:220}
\spacegrad \tilde{A} = \gpgrade{\spacegrad A}{0,3}.

Clearly $$\tilde{A}$$ is not unique, as we can add any function $$\psi$$ satisfying the homogeneous Laplacian equation $$\spacegrad^2 \psi = 0$$.

In summary, if $$A$$ is any multivector solution to $$\spacegrad^2 A = J$$, that is
\label{eqn:staticPotentials:260}
A(\Bx)
= \int dV' G(\Bx, \Bx') J(\Bx')
= -\inv{4\pi} \int dV' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} },

then $$F = \spacegrad A’$$ is a solution to Maxwell’s equation, where $$A’ = A – \tilde{A}$$, and $$\tilde{A}$$ is a solution to the non-homogeneous Laplacian equation or the non-homogeneous gradient equation above.

### Integral form of the gauge transformation.

Additional insight is possible by considering the gauge transformation in integral form. Suppose that
\label{eqn:staticPotentials:280}
A(\Bx) = -\inv{4\pi} \int_V dV' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \tilde{A}(\Bx),

is a solution of $$\spacegrad^2 A = J$$, where $$\tilde{A}$$ is a multivector solution to the homogeneous Laplacian equation $$\spacegrad^2 \tilde{A} = 0$$. Let’s look at the constraints on $$\tilde{A}$$ that must be imposed for $$F = \spacegrad A$$ to be a valid (i.e. grade 1,2) solution of Maxwell’s equation.
\label{eqn:staticPotentials:300}
\begin{aligned}
F
&=
\spacegrad A \\
&=
-\inv{4\pi} \int_V dV' \lr{ \spacegrad \inv{\Norm{\Bx - \Bx'} } } J(\Bx') - \spacegrad \tilde{A}(\Bx) \\
&=
\inv{4\pi} \int_V dV' \lr{ \spacegrad' \inv{\Norm{\Bx - \Bx'} } } J(\Bx') - \spacegrad \tilde{A}(\Bx) \\
&=
\inv{4\pi} \int_V dV' \spacegrad' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \inv{4\pi} \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} } - \spacegrad \tilde{A}(\Bx) \\
&=
\inv{4\pi} \int_{\partial V} dA' \ncap' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \inv{4\pi} \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} } - \spacegrad \tilde{A}(\Bx).
\end{aligned}

Where $$\ncap’ = (\Bx’ – \Bx)/\Norm{\Bx’ – \Bx}$$, and the fundamental theorem of geometric calculus has been used to transform the gradient volume integral into an integral over the bounding surface. Operating on Maxwell’s equation with the gradient gives $$\spacegrad^2 F = \spacegrad J$$, which has only grades 1,2 on the left hand side, meaning that $$J$$ is constrained in a way that requires $$\spacegrad J$$ to have only grades 1,2. This means that $$F$$ has grades 1,2 if
\label{eqn:staticPotentials:320}
\spacegrad \tilde{A}
=
\inv{4\pi} \int_{\partial V} dA' \frac{ \gpgrade{\ncap' J(\Bx')}{0,3} }{\Norm{\Bx - \Bx'} }.

The grade 0,3 selection of the product $$\ncap J$$ expands to
\label{eqn:staticPotentials:340}
\begin{aligned}
\gpgrade{\ncap J}{0,3}
&=
\gpgrade{ \ncap \lr{ \eta\lr{ c \rho - \BJ } + I \lr{ c\rho_m - \BM } } }{0,3} \\
&=
\ncap \cdot \lr{ -\eta \BJ } + \gpgradethree{\ncap (-I \BM)} \\
&= -\eta \ncap \cdot \BJ - I \ncap \cdot \BM,
\end{aligned}

so
\label{eqn:staticPotentials:360}
\spacegrad \tilde{A}
=
-\inv{4\pi} \int_{\partial V} dA' \frac{ \eta \ncap' \cdot \BJ(\Bx') + I \ncap' \cdot \BM(\Bx')}{\Norm{\Bx - \Bx'} }.

Observe that if there is no flux of current density $$\BJ$$ and (fictitious) magnetic current density $$\BM$$ through the surface, then $$F = \spacegrad A$$ is a solution to Maxwell’s equation without any gauge transformation. Alternatively $$F = \spacegrad A$$ is also a solution if $$\lim_{\Bx’ \rightarrow \infty} \BJ(\Bx’)/\Norm{\Bx – \Bx’} = \lim_{\Bx’ \rightarrow \infty} \BM(\Bx’)/\Norm{\Bx – \Bx’} = 0$$ and the bounding volume is taken to infinity.


## Generalizing Ampere’s law using geometric algebra.

The question I’d like to explore in this post is how Ampere’s law, the relationship between the line integral of the magnetic field and the enclosed current
\label{eqn:flux:20}
\oint_{\partial A} d\Bx \cdot \BH = -\int_A dA \ncap \cdot \BJ,

generalizes to geometric algebra, where Maxwell’s equation for a statics configuration (all time derivatives zero) is
\label{eqn:flux:40}
\spacegrad F = J,

where the multivector fields and currents are
\label{eqn:flux:60}
\begin{aligned}
F &= \BE + I \eta \BH \\
J &= \eta \lr{ c \rho – \BJ } + I \lr{ c \rho_\txtm – \BM }.
\end{aligned}

Here the (fictitious) magnetic charge and current densities that can be useful in antenna theory have been included in the multivector current for generality.

My presumption is that it should be possible to apply the fundamental theorem of geometric calculus, which relates the integral over an oriented surface to an integral over its boundary, directly to Maxwell’s equation. That integral theorem has the form
\label{eqn:flux:80}
\int_A d^2 \Bx \boldpartial F = \oint_{\partial A} d\Bx F,

where $$d^2 \Bx = d\Ba \wedge d\Bb$$ is a two parameter bivector valued surface element, and $$\boldpartial$$ is the vector derivative, the projection of the gradient onto the tangent space. I won’t try to explain all of geometric calculus here, and refer the interested reader to [1], which is an excellent reference on geometric calculus and integration theory.

The gotcha is that we actually want a surface integral with $$\spacegrad F$$. We can split the gradient into the vector derivative and a normal component
\label{eqn:flux:160}
\spacegrad = \boldpartial + \ncap \lr{ \ncap \cdot \spacegrad },

so
\label{eqn:flux:100}
\int_A d^2 \Bx \spacegrad F
=
\int_A d^2 \Bx \boldpartial F
+
\int_A d^2 \Bx \ncap \lr{ \ncap \cdot \spacegrad } F,

which gives
\label{eqn:flux:120}
\begin{aligned}
\oint_{\partial A} d\Bx F
&=
\int_A d^2 \Bx \lr{ J – \ncap \lr{ \ncap \cdot \spacegrad } F } \\
&=
\int_A dA \lr{ I \ncap J – \lr{ \ncap \cdot \spacegrad } I F }
\end{aligned}

This is not nearly as nice as the Ampere’s law relationship that we started with, where the current and field contributions were cleanly separated. The $$d\Bx F$$ product has all possible grades, as does the $$d^2 \Bx J$$ product (in general). Observe however, that the normal term on the right has only grades 1,2, so we can split our line integral relations into pairs with and without grade 1,2 components
\label{eqn:flux:140}
\begin{aligned}
\oint_{\partial A} \gpgrade{ d\Bx F }{0,3}
&=
\int_A dA \gpgrade{ I \ncap J }{0,3} \\
\oint_{\partial A} \gpgrade{ d\Bx F }{1,2}
&=
\int_A dA \lr{ \gpgrade{ I \ncap J }{1,2} - \lr{ \ncap \cdot \spacegrad } I F }.
\end{aligned}

Let’s expand these explicitly in terms of the component fields and densities to check against the conventional relationships, and see if things look right. The line integrand expands to
\label{eqn:flux:180}
\begin{aligned}
d\Bx F
&=
d\Bx \lr{ \BE + I \eta \BH }
=
d\Bx \cdot \BE + I \eta d\Bx \cdot \BH
+
d\Bx \wedge \BE + I \eta d\Bx \wedge \BH \\
&=
d\Bx \cdot \BE
– \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
+ I \eta (d\Bx \cdot \BH),
\end{aligned}

the current integrand expands to
\label{eqn:flux:200}
\begin{aligned}
I \ncap J
&=
I \ncap
\lr{
\frac{\rho}{\epsilon} – \eta \BJ + I \lr{ c \rho_\txtm – \BM }
} \\
&=
\ncap I \frac{\rho}{\epsilon} – \eta \ncap I \BJ – \ncap c \rho_\txtm + \ncap \BM \\
&=
\ncap \cdot \BM
+ \eta (\ncap \cross \BJ)
– \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
– \eta I (\ncap \cdot \BJ).
\end{aligned}
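
This expansion is easy to get wrong by hand, so here is a numeric spot check using a minimal from-scratch $$R^3$$ geometric algebra (blade-bitmask representation; all media constants and source values below are arbitrary made-up numbers):

```python
import math

def reorder_sign(a, b):
    # Sign acquired reordering basis blades a, b (bitmask encoded, Euclidean metric).
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return 1 - 2 * (swaps & 1)

def gp(x, y):
    # Geometric product of 8-component multivectors (1, e1, e2, e12, e3, e13, e23, e123).
    out = [0.0] * 8
    for a in range(8):
        for b in range(8):
            if x[a] and y[b]:
                out[a ^ b] += reorder_sign(a, b) * x[a] * y[b]
    return out

def scalar(s):
    return [s] + [0.0] * 7

def vec(v):
    m = [0.0] * 8
    m[0b001], m[0b010], m[0b100] = v
    return m

def add(*ms):
    return [sum(cs) for cs in zip(*ms)]

def smul(s, m):
    return [s * c for c in m]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

I = [0.0] * 7 + [1.0]                      # pseudoscalar e1 e2 e3

eps, mu = 0.125, 2.0                       # arbitrary media constants
eta, c = math.sqrt(mu / eps), 1.0 / math.sqrt(mu * eps)
rho, rho_m = 0.5, 0.25                     # electric and magnetic charge densities
n = (0.6, 0.8, 0.0)                        # unit normal
Jv = (1.0, -2.0, 0.5)                      # electric current density
M = (0.5, 1.5, -1.0)                       # magnetic current density

# multivector current J = eta (c rho - J) + I (c rho_m - M)
J = add(smul(eta, add(scalar(c * rho), smul(-1.0, vec(Jv)))),
        gp(I, add(scalar(c * rho_m), smul(-1.0, vec(M)))))

lhs = gp(I, gp(vec(n), J))                 # I ncap J, computed blindly

# the expansion claimed above, term by term
rhs = add(scalar(dot(n, M)),               # ncap . M
          smul(eta, vec(cross(n, Jv))),    # eta (ncap x J)
          smul(-c * rho_m, vec(n)),        # - ncap c rho_m
          gp(I, vec(cross(n, M))),         # I (ncap x M)
          smul(rho / eps, gp(vec(n), I)),  # ncap I rho/eps
          smul(-eta * dot(n, Jv), I))      # - eta I (ncap . J)
```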

We are left with
\label{eqn:flux:220}
\begin{aligned}
\oint_{\partial A}
\lr{
d\Bx \cdot \BE + I \eta (d\Bx \cdot \BH)
}
&=
\int_A dA
\lr{
\ncap \cdot \BM – \eta I (\ncap \cdot \BJ)
} \\
\oint_{\partial A}
\lr{
– \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
}
&=
\int_A dA
\lr{
\eta (\ncap \cross \BJ)
– \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
-\PD{n}{} \lr{ I \BE – \eta \BH }
}.
\end{aligned}

This is a crazy mess of dots, crosses, fields and sources. We can split it into one equation for each grade, which will probably look a little more regular. That is
\label{eqn:flux:240}
\begin{aligned}
\oint_{\partial A} d\Bx \cdot \BE &= \int_A dA \ncap \cdot \BM \\
\oint_{\partial A} d\Bx \cross \BH
&=
\int_A dA
\lr{
– \ncap \cross \BJ
+ \frac{ \ncap \rho_\txtm }{\mu}
– \PD{n}{\BH}
} \\
\oint_{\partial A} d\Bx \cross \BE &=
\int_A dA
\lr{
\ncap \cross \BM
+ \frac{\ncap \rho}{\epsilon}
– \PD{n}{\BE}
} \\
\oint_{\partial A} d\Bx \cdot \BH &= -\int_A dA \ncap \cdot \BJ \\
\end{aligned}

The first and last equations could have been obtained much more easily from Maxwell’s equations in their conventional form. The two cross product equations with the normal derivatives are not familiar to me, even without the fictitious magnetic sources. It is somewhat remarkable that so much can be packed into one multivector equation:
\label{eqn:flux:260}
\oint_{\partial A} d\Bx F
=
I \int_A dA \lr{ \ncap J – \PD{n}{F} }.

# References

[1] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

## Solving Maxwell’s equation in freespace: Multivector plane wave representation

The geometric algebra form of Maxwell’s equations in free space (or source free isotropic media with group velocity $$c$$) is the multivector equation
\label{eqn:planewavesMultivector:20}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F(\Bx, t) = 0.

Here $$F = \BE + I c \BB$$ is a multivector with grades 1 and 2 (vector and bivector components). The velocity $$c$$ is called the group velocity since $$F$$, or its components $$\BE, \BB$$, satisfy the wave equation, which can be seen by pre-multiplying with $$\spacegrad - (1/c)\PDi{t}{}$$ to find
\label{eqn:planewavesMultivector:n}
\lr{ \spacegrad^2 – \inv{c^2}\PDSq{t}{} } F(\Bx, t) = 0.

Let’s look at the frequency domain solution of this equation with a presumed phasor representation
\label{eqn:planewavesMultivector:40}
F(\Bx, t) = \textrm{Re} \lr{ F(\Bk) e^{-j \Bk \cdot \Bx + j \omega t} },

where $$j$$ is a scalar imaginary, not necessarily with any geometric interpretation.

Maxwell’s equation reduces to just
\label{eqn:planewavesMultivector:60}
0
=
-j \lr{ \Bk – \frac{\omega}{c} } F(\Bk).

If $$F(\Bk)$$ has a left multivector factor
\label{eqn:planewavesMultivector:80}
F(\Bk) =
\lr{ \Bk + \frac{\omega}{c} } \tilde{F},

where $$\tilde{F}$$ is a multivector to be determined, then
\label{eqn:planewavesMultivector:100}
\begin{aligned}
\lr{ \Bk – \frac{\omega}{c} }
F(\Bk)
&=
\lr{ \Bk – \frac{\omega}{c} }
\lr{ \Bk + \frac{\omega}{c} } \tilde{F} \\
&=
\lr{ \Bk^2 – \lr{\frac{\omega}{c}}^2 } \tilde{F},
\end{aligned}

which is zero if $$\Norm{\Bk} = \ifrac{\omega}{c}$$.

Let $$\kcap = \ifrac{\Bk}{\Norm{\Bk}}$$, and $$\Norm{\Bk} \tilde{F} = F_0 + F_1 + F_2 + F_3$$, where $$F_0, F_1, F_2,$$ and $$F_3$$ have grades 0, 1, 2, and 3 respectively. Then
\label{eqn:planewavesMultivector:120}
\begin{aligned}
F(\Bk)
&= \lr{ 1 + \kcap } \lr{ F_0 + F_1 + F_2 + F_3 } \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap F_1 + \kcap F_2 + \kcap F_3 \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap \cdot F_1 + \kcap \cdot F_2 + \kcap \cdot F_3
+
\kcap \wedge F_1 + \kcap \wedge F_2 \\
&=
\lr{
F_0 + \kcap \cdot F_1
}
+
\lr{
F_1 + \kcap F_0 + \kcap \cdot F_2
}
+
\lr{
F_2 + \kcap \cdot F_3 + \kcap \wedge F_1
}
+
\lr{
F_3 + \kcap \wedge F_2
}.
\end{aligned}

Since the field $$F$$ has only vector and bivector grades, the grades zero and three components of the expansion above must be zero, or
\label{eqn:planewavesMultivector:140}
\begin{aligned}
F_0 &= – \kcap \cdot F_1 \\
F_3 &= – \kcap \wedge F_2,
\end{aligned}

so
\label{eqn:planewavesMultivector:160}
\begin{aligned}
F(\Bk)
&=
\lr{ 1 + \kcap } \lr{
F_1 – \kcap \cdot F_1 +
F_2 – \kcap \wedge F_2
} \\
&=
\lr{ 1 + \kcap } \lr{
F_1 – \kcap F_1 + \kcap \wedge F_1 +
F_2 – \kcap F_2 + \kcap \cdot F_2
}.
\end{aligned}

The multivector $$1 + \kcap$$ has the projective property of gobbling any leading factors of $$\kcap$$
\label{eqn:planewavesMultivector:180}
\begin{aligned}
(1 + \kcap)\kcap
&= \kcap + 1 \\
&= 1 + \kcap,
\end{aligned}

so for $$F_i \in \setlr{ F_1, F_2 }$$
\label{eqn:planewavesMultivector:200}
(1 + \kcap) ( F_i – \kcap F_i )
=
(1 + \kcap) ( F_i – F_i )
= 0,

leaving
\label{eqn:planewavesMultivector:220}
F(\Bk)
=
\lr{ 1 + \kcap } \lr{
\kcap \cdot F_2 +
\kcap \wedge F_1
}.

For $$\kcap \cdot F_2$$ to be non-zero $$F_2$$ must be a bivector that lies in a plane containing $$\kcap$$, and $$\kcap \cdot F_2$$ is a vector in that plane that is perpendicular to $$\kcap$$. On the other hand $$\kcap \wedge F_1$$ is non-zero only if $$F_1$$ has a non-zero component that does not lie along the $$\kcap$$ direction, but $$\kcap \wedge F_1$$, like $$F_2$$, describes a plane containing $$\kcap$$. This means that having both bivector and vector free variables $$F_2$$ and $$F_1$$ provides more degrees of freedom than required. For example, if $$\BE$$ is any vector, and $$F_2 = \kcap \wedge \BE$$, then
\label{eqn:planewavesMultivector:240}
\begin{aligned}
\lr{ 1 + \kcap }
\kcap \cdot F_2
&=
\lr{ 1 + \kcap }
\kcap \cdot \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\lr{
\BE - \kcap \lr{ \kcap \cdot \BE }
} \\
&=
\lr{ 1 + \kcap }
\kcap \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\kcap \wedge \BE,
\end{aligned}

which has the form $$\lr{ 1 + \kcap } \lr{ \kcap \wedge F_1 }$$, so the solution of the free space Maxwell’s equation can be written
\label{eqn:planewavesMultivector:260}
\boxed{
F(\Bx, t)
=
\textrm{Re} \lr{
\lr{ 1 + \kcap }
\BE\,
e^{-j \Bk \cdot \Bx + j \omega t}
}
,
}

where $$\BE$$ is any vector for which $$\BE \cdot \Bk = 0$$.
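
The boxed solution can be spot checked numerically: with a unit $$\kcap$$ and any $$\BE$$ perpendicular to it, the amplitude $$(1 + \kcap)\BE$$ should be annihilated by the factor $$\kcap - 1$$ from the frequency domain equation, and should contain only vector and bivector grades. A minimal from-scratch sketch (toy code, not a geometric algebra library):

```python
def reorder_sign(a, b):
    # Sign from reordering the product of basis blades a, b (Euclidean metric).
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return 1 - 2 * (swaps & 1)

def gp(x, y):
    # Geometric product of 8-component multivectors (1, e1, e2, e12, e3, e13, e23, e123).
    out = [0.0] * 8
    for a in range(8):
        for b in range(8):
            if x[a] and y[b]:
                out[a ^ b] += reorder_sign(a, b) * x[a] * y[b]
    return out

def vec(v):
    m = [0.0] * 8
    m[0b001], m[0b010], m[0b100] = v
    return m

def add(x, y):
    return [a + b for a, b in zip(x, y)]

def sub(x, y):
    return [a - b for a, b in zip(x, y)]

one = [1.0] + [0.0] * 7
kcap = vec((0.0, 0.0, 1.0))            # unit propagation direction
E = vec((3.0, -1.5, 0.0))              # arbitrary vector with E . k = 0

F0 = gp(add(one, kcap), E)             # multivector amplitude (1 + kcap) E
residual = gp(sub(kcap, one), F0)      # (kcap - 1)(1 + kcap) E = (kcap^2 - 1) E = 0
```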

## The many faces of Maxwell’s equations


The following is a possible introduction for a report for a UofT ECE2500 project associated with writing a small book: “Geometric Algebra for Electrical Engineers”. Given the space constraints for the report I may have to drop much of this, but some of the history of Maxwell’s equations may be of interest, so I thought I’d share before the knife hits the latex.

## Goals of the project.

This project had a few goals

1. Perform a literature review of applications of geometric algebra to the study of electromagnetism. Geometric algebra will be defined precisely later, along with bivector, trivector, multivector and other geometric algebra generalizations of the vector.
2. Identify the subset of the literature that had direct relevance to electrical engineering.
3. Create a complete, and as compact as possible, introduction of the prerequisites required to apply geometric algebra to problems in electromagnetism.

## The many faces of electromagnetism.

There is a long history of attempts to find more elegant, compact and powerful ways of encoding and working with Maxwell’s equations.

### Maxwell’s formulation.

Maxwell [12] employs some differential operators, including the gradient $$\spacegrad$$ and Laplacian $$\spacegrad^2$$, but the divergence and gradient are always written out in full using coordinates, usually in integral form. Reading the original Treatise highlights how important notation can be, as most modern engineering or physics practitioners would find his original work incomprehensible. A nice translation from Maxwell’s notation to the modern Heaviside-Gibbs notation can be found in [16].

### Quaternion representation.

In his second volume [11] the equations of electromagnetism are stated using quaternions (an extension of complex numbers with three imaginary units), but quaternions are not used elsewhere in the work. The modern form of Maxwell’s equations in quaternion form is
\label{eqn:ece2500report:220}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } – \inv{2} \symmetric{ \frac{d}{dr} } { c \BD } &= c \rho + \BJ \\
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \BB } &= 0,
\end{aligned}

where $$\ifrac{d}{dr} = (1/c) \PDi{t}{} + \Bi \PDi{x}{} + \Bj \PDi{y}{} + \Bk \PDi{z}{}$$ [7] acts bidirectionally, and vectors are expressed in terms of the quaternion basis $$\setlr{ \Bi, \Bj, \Bk }$$, subject to the relations $$\Bi^2 = \Bj^2 = \Bk^2 = -1, \quad \Bi \Bj = \Bk = -\Bj \Bi, \quad \Bj \Bk = \Bi = -\Bk \Bj, \quad \Bk \Bi = \Bj = -\Bi \Bk$$.
There is clearly more structure to these equations than the traditional Heaviside-Gibbs representation that we are used to, which says something for the quaternion model. However, this structure requires notation that is arguably non-intuitive. The fact that the quaternion representation was abandoned long ago by most electromagnetism researchers and engineers supports such an argument.
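
Those basis relations are easy to verify with a few lines of code. The following sketch (not part of the original report) implements the Hamilton product on $$(w, x, y, z)$$ coefficient tuples and checks the multiplication rules quoted above:

```python
def qmul(q, r):
    # Hamilton product of quaternions represented as (w, x, y, z) tuples.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```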

### Minkowski tensor representation.

Minkowski introduced the concept of a complex time coordinate $$x_4 = i c t$$ for special relativity [3]. Such a four-vector representation can be used for many of the relativistic four-vector pairs of electromagnetism, such as the current $$(c\rho, \BJ)$$, and the energy-momentum Lorentz force relations, and can also be applied to Maxwell’s equations
\label{eqn:ece2500report:140}
\begin{aligned}
\sum_{\mu= 1}^4 \PD{x_\mu}{F_{\mu\nu}} &= - 4 \pi j_\nu \\
\sum_{\lambda\rho\mu=1}^4
\epsilon_{\mu\nu\lambda\rho}
\PD{x_\mu}{F_{\lambda\rho}} &= 0,
\end{aligned}

where
\label{eqn:ece2500report:160}
F
=
\begin{bmatrix}
0 & B_z & -B_y & -i E_x \\
-B_z & 0 & B_x & -i E_y \\
B_y & -B_x & 0 & -i E_z \\
i E_x & i E_y & i E_z & 0
\end{bmatrix}.

A rank-2 complex (Hermitian) tensor contains all six of the field components. Transformation of coordinates for this representation of the field may be performed exactly like the transformation for any other four-vector. This formalism is described nicely in [13], where the structure used is motivated by transformational requirements. One of the costs of this tensor representation is that we lose the clear separation of the electric and magnetic fields that we are so comfortable with. Another cost is that we lose the distinction between space and time, as separate space and time coordinates have to be projected out of a larger four vector. Both of these costs have theoretical benefits in some applications, particularly for high energy problems where relativity is important, but for the low velocity problems near and dear to electrical engineers who can freely treat space and time independently, the advantages are not clear.

### Modern tensor formalism.

The Minkowski representation fell out of favour in theoretical physics, which settled on a real tensor representation that utilizes an explicit metric tensor $$g_{\mu\nu} = \pm \textrm{diag}(1, -1, -1, -1)$$ to represent the complex inner products of special relativity. In this tensor formalism, Maxwell’s equations are also reduced to a set of two tensor relationships ([10], [8], [5]).
\label{eqn:ece2500report:40}
\begin{aligned}
\partial_\mu F^{\mu \nu} &= \mu_0 J^\nu \\
\epsilon^{\alpha \beta \mu \nu} \partial_\beta F_{\mu \nu} &= 0,
\end{aligned}

where $$F^{\mu\nu}$$ is a \textit{real} rank-2 antisymmetric tensor that contains all six electric and magnetic field components, and $$J^\nu$$ is a four-vector current containing both charge density and current density components. \Cref{eqn:ece2500report:40} provides a unified and simpler theoretical framework for electromagnetism, and is used extensively in physics but not engineering.

### Differential forms.

It has been argued that a differential forms treatment of electromagnetism provides some of the same theoretical advantages as the tensor formalism, without the disadvantages of introducing a hellish mess of index manipulation into the mix. With differential forms it is also possible to express Maxwell’s equations as two equations. The free-space differential forms equivalent [4] to the tensor equations is
\label{eqn:ece2500report:60}
\begin{aligned}
d \alpha &= 0 \\
d *\alpha &= 0,
\end{aligned}

where
\label{eqn:ece2500report:180}
\alpha = \lr{ E_1 dx^1 + E_2 dx^2 + E_3 dx^3 }(c dt) + H_1 dx^2 dx^3 + H_2 dx^3 dx^1 + H_3 dx^1 dx^2.

One of the advantages of this representation is that it is valid even for curvilinear coordinate representations, which are handled naturally in differential forms. However, this formalism also comes with a number of costs. One cost (or benefit), like that of the tensor formalism, is that this is implicitly a relativistic approach subject to non-Euclidean orthonormality conditions $$(dx^i, dx^j) = \delta^{ij}, (dx^i, c dt) = 0, (c dt, c dt) = -1$$. Most grievous of the costs is the requirement to use differentials $$dx^1, dx^2, dx^3, c dt$$, instead of a more familiar set of basis vectors, even for non-curvilinear coordinates. This requirement is easily viewed as unnatural, and likely one of the reasons that electromagnetism with differential forms has never become popular.

### Vector formalism.

Euclidean vector algebra, in particular the vector algebra and calculus of $$R^3$$, is the de-facto language of electrical engineering for electromagnetism. Maxwell’s equations in the Heaviside-Gibbs vector formalism are
\label{eqn:ece2500report:20}
\begin{aligned}
\spacegrad \cross \BE &= - \PD{t}{\BB} \\
\spacegrad \cross \BH &= \BJ + \PD{t}{\BD} \\
\spacegrad \cdot \BD &= \rho \\
\spacegrad \cdot \BB &= 0.
\end{aligned}

We are all intimately familiar with these equations, with the dot and the cross products, and with gradient, divergence and curl operations that are used to express them.
Given how comfortable we are with this mathematical formalism, there has to be a really good reason to switch to something else.

### Space time algebra (geometric algebra).

An alternative to any of the electrodynamics formalisms described above is STA, the Space Time Algebra. STA is a relativistic geometric algebra that allows Maxwell’s equations to be combined into one equation ([2], [6])
\label{eqn:ece2500report:80}
\grad F = J,

where
\label{eqn:ece2500report:200}
F = \BE + I c \BB \qquad (= \BE + I \eta \BH)

is a bivector field containing both the electric and magnetic field “vectors”, $$\grad = \gamma^\mu \partial_\mu$$ is the spacetime gradient, $$J$$ is a four vector containing electric charge and current components, and $$I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$$ is the spacetime pseudoscalar, the ordered product of the basis vectors $$\setlr{ \gamma_\mu }$$. The STA representation is explicitly relativistic, with non-Euclidean relationships between the basis vectors $$\gamma_0 \cdot \gamma_0 = 1 = -\gamma_k \cdot \gamma_k, \forall k > 0$$. In this formalism “spatial” vectors $$\Bx = \sum_{k>0} \gamma_k \gamma_0 x^k$$ are represented as spacetime bivectors, requiring a small sleight of hand when switching between STA notation and conventional vector representation. Uncoincidentally $$F$$ has exactly the same structure as the 2-form $$\alpha$$ above, provided the differential 1-forms $$dx^\mu$$ are replaced by the basis vectors $$\gamma_\mu$$. However, there is a simple complex structure inherent in the STA form that is not obvious in the 2-form equivalent. The bivector representation of the field $$F$$ directly encodes the antisymmetric nature of $$F^{\mu\nu}$$ from the tensor formalism, and the tensor equivalents of most STA results can be calculated easily.

Having a single PDE for all of Maxwell’s equations allows for direct Green’s function solution of the field, and has a number of other advantages. There is extensive literature exploring selected applications of STA to electrodynamics. Many theoretical results have been derived using this formalism that require significantly more complex approaches using conventional vector or tensor analysis. Unfortunately, much of the STA literature is inaccessible to the engineering student, practising engineers, or engineering instructors. To even start reading the literature, one must learn geometric algebra, aspects of special relativity and non-Euclidean geometry, generalized integration theory, and even some tensor analysis.

### Paravector formalism (geometric algebra).

In the geometric algebra literature, there are a few authors who have endorsed the use of Euclidean geometric algebras for relativistic applications ([1], [14]).
These authors use a Euclidean basis “vector” $$\Be_0 = 1$$ for the timelike direction, along with a standard Euclidean basis $$\setlr{ \Be_i }$$ for the spatial directions. A hybrid scalar plus vector representation of four vectors, called paravectors, is employed. Maxwell’s equation is written as a multivector equation
\label{eqn:ece2500report:120}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = J,

where $$J$$ is a multivector source containing both the electric charge and currents, and $$c$$ is the group velocity for the medium (assumed uniform and isotropic). $$J$$ may optionally include the (fictitious) magnetic charge and currents useful in antenna theory. The paravector formalism uses the hybrid electromagnetic field representation of STA above, however, $$I = \Be_1 \Be_2 \Be_3$$ is interpreted as the $$R^3$$ pseudoscalar, the ordered product of the basis vectors $$\setlr{ \Be_i }$$, and $$F$$ represents a multivector with vector and bivector components. Unlike STA where $$\BE$$ and $$\BB$$ (or $$\BH$$) are interpreted as spacetime bivectors, here they are plain old Euclidean vectors in $$R^3$$, entirely consistent with conventional Heaviside-Gibbs notation. Like the STA Maxwell’s equation, the paravector form is directly invertible using Green’s function techniques, without requiring the solution of equivalent second order potential problems, nor any requirement to take the derivatives of those potentials to determine the fields.

Lorentz transformation and manipulation of paravectors requires a variety of conjugation, real and imaginary operators, unlike STA where such operations have the same complex exponential structure as any 3D rotation expressed in geometric algebra. The advocates of the paravector representation argue that this provides an effective pedagogical bridge from Euclidean geometry to the Minkowski geometry of special relativity. This author agrees that this form of Maxwell’s equations is the natural choice for an introduction to electromagnetism using geometric algebra, but for relativistic operations, STA is a much more natural and less confusing choice.

## Results.

The end product of this project was a fairly small self-contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on $$R^2$$ and $$R^3$$ (64 pages), an introduction to geometric calculus and multivector Green’s functions (64 pages), and applications to electromagnetism (75 pages). This report summarizes results from this book, omitting most derivations, and attempts to provide an overview that may be used as a road map to the book for further exploration. Many of the fundamental results of electromagnetism are derived directly from the geometric algebra form of Maxwell’s equation in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra STA and paravector literature. It will be clear to the reader that it is often simpler to have the electric and magnetic fields on equal footing, and the book demonstrates this by deriving most results in terms of the total electromagnetic field $$F$$. Many examples of how to extract the conventional electric and magnetic fields from the geometric algebra results expressed in terms of $$F$$ are given as a bridge between the multivector and vector representations.

The aim of this work was to remove some of the prerequisite conceptual roadblocks that make electromagnetism using geometric algebra inaccessible. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. After derivation from the conventional Heaviside-Gibbs representation of Maxwell’s equations, the paravector representation of Maxwell’s equation is used as the starting point for all subsequent analysis. However, the paravector literature includes a confusing set of conjugation and real and imaginary selection operations that are tailored for relativistic applications. These are not necessary for low velocity applications, and have been avoided completely with the aim of making the subject more accessible to the engineer.

In the book an attempt has been made to introduce as little new notation as possible. For example, some authors use special notation for the bivector valued magnetic field $$I \BB$$, such as $$\boldsymbol{\mathcal{b}}$$ or $$\Bcap$$. Given the inconsistencies in the literature, $$I \BB$$ (or $$I \BH$$) is used explicitly for the bivector (magnetic) components of the total electromagnetic field $$F$$. In the geometric algebra literature, there are conflicting conventions for the operator $$\spacegrad + (1/c) \PDi{t}{}$$, which we will call the spacetime gradient after the STA equivalent. For examples of different notations for the spacetime gradient, see [9], [1], and [15]. In the book the spacetime gradient is always written out in full to avoid picking from, or having to explain, the subtleties of the competing notations.

Some researchers will find it distasteful that STA and relativity have been avoided completely in this book. Maxwell’s equations are inherently relativistic, and STA expresses the relativistic aspects of electromagnetism in an exceptional and beautiful fashion. However, a student of this book will have learned the geometric algebra and calculus prerequisites of STA. This makes the STA literature much more accessible, especially since most of the results in the book can be trivially translated into STA notation.

# References

[1] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer Science \& Business Media, 2004.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] Albert Einstein. Relativity: The special and the general theory, chapter Minkowski’s Four-Dimensional Space. Princeton University Press, 2015. URL http://www.gutenberg.org/ebooks/5001.

[4] H. Flanders. Differential Forms With Applications to the Physical Sciences. Courier Dover Publications, 1989.

[5] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[6] David Hestenes. Space-time algebra, volume 1. Springer, 1966.

[7] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[8] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[9] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics. World Scientific, 1988.

[10] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980. ISBN 0750627689.

[11] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.

[12] James Clerk Maxwell. A treatise on electricity and magnetism, third edition, volume I. Dover publications, 1891.

[13] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

[14] Chappell et al. A simplified approach to electromagnetism using geometric algebra. arXiv preprint arXiv:1010.4947, 2010.

[15] Chappell et al. Geometric algebra for electrical and electronic engineers. 2014.


## Motivation.

The quaternion form of Maxwell’s equations as stated in [2] is nearly indecipherable. The modern quaternionic form of these equations can be found in [1]. The search for this representation was driven by the question of whether or not the compact geometric algebra representation of Maxwell’s equations, $$\grad F = J$$, is possible using a quaternion representation of the fields.

As quaternions may be viewed as the even subalgebra of GA(3,0), it is possible to derive the quaternion representation of Maxwell’s equations using only geometric algebra, including source terms and independent of the heat considerations discussed in [1]. Such a derivation will be performed here. Examination of the results appears to answer the question about the compact representation in the negative.

## Quaternions as multivectors.

Quaternions are vector plus scalar sums, where the vector basis $$\setlr{ \Bi, \Bj, \Bk }$$ is subject to the complex-like multiplication rules
\label{eqn:complex:240}
\begin{aligned}
\Bi^2 &= \Bj^2 = \Bk^2 = -1 \\
\Bi \Bj &= \Bk = -\Bj \Bi \\
\Bj \Bk &= \Bi = -\Bk \Bj \\
\Bk \Bi &= \Bj = -\Bi \Bk.
\end{aligned}
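
As an aside (not part of the original text), these multiplication rules are easy to spot check in code. The sketch below implements the Hamilton product on plain $$(w, x, y, z)$$ tuples and asserts each rule; all names here are illustrative.

```python
# Minimal sketch: verify the quaternion multiplication rules above
# with a hand-rolled Hamilton product on (w, x, y, z) tuples.

def qmul(a, b):
    """Hamilton product of two quaternions a = (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

assert qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # i j = k = -j i
assert qmul(j, k) == i and qmul(k, i) == j
```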

We can represent these basis vectors in terms of the \R{3} unit bivectors
\label{eqn:quaternion2maxwellWithGA:260}
\begin{aligned}
\Bi &= \Be_{3} \Be_{2} = -I \Be_1 \\
\Bj &= \Be_{1} \Be_{3} = -I \Be_2 \\
\Bk &= \Be_{2} \Be_{1} = -I \Be_3,
\end{aligned}

where $$I = \Be_1 \Be_2 \Be_3$$ is the ordered product of the \R{3} basis elements. Within geometric algebra, the quaternion basis “vectors” are more properly viewed as a bivector space basis that happens to have dimension three.
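
This identification can be verified concretely in the familiar 2x2 Pauli-matrix representation of GA(3,0), mapping $$\Be_k$$ to the Pauli matrix $$\sigma_k$$. This representation is an illustrative assumption for the check below, not something the derivation depends on; in it the pseudoscalar $$I$$ becomes $$i$$ times the identity.

```python
# Sketch (assumption: Pauli-matrix representation of GA(3,0), e_k -> sigma_k)
# verifying Bi = e3 e2 = -I e1, etc., and the quaternion multiplication rules.
import numpy as np

e1 = np.array([[0, 1], [1, 0]], dtype=complex)     # sigma_1
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)  # sigma_2
e3 = np.array([[1, 0], [0, -1]], dtype=complex)    # sigma_3
Id = np.eye(2)
I = e1 @ e2 @ e3                                   # pseudoscalar e1 e2 e3

assert np.allclose(I, 1j * Id)                     # I commutes; I^2 = -1
Bi, Bj, Bk = e3 @ e2, e1 @ e3, e2 @ e1             # quaternion "vectors"
assert np.allclose(Bi, -I @ e1)
assert np.allclose(Bj, -I @ e2)
assert np.allclose(Bk, -I @ e3)
# these bivectors obey the quaternion rules
assert np.allclose(Bi @ Bi, -Id) and np.allclose(Bj @ Bj, -Id)
assert np.allclose(Bi @ Bj, Bk) and np.allclose(Bj @ Bi, -Bk)
```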

Following [1], we may introduce a quaternionic spacetime gradient, and express that in terms of geometric algebra
\label{eqn:quaternion2maxwellWithGA:280}
\frac{d}{dr} = \inv{c} \PD{t}{}
+ \Bi \PD{x}{}
+ \Bj \PD{y}{}
+ \Bk \PD{z}{}
=
\inv{c} \PD{t}{}
- I \spacegrad.

Of particular interest is how to write the curl, divergence, and time partial derivatives in terms of the quaternionic spacetime gradient or its components. Like [1], we will use modern commutator notation for an antisymmetric difference of products
\label{eqn:quaternion2maxwellWithGA:600}
\antisymmetric{a}{b} = a b - b a,

and anticommutator notation for a symmetric difference of products
\label{eqn:quaternion2maxwellWithGA:620}
\symmetric{a}{b} = a b + b a.

The curl of a vector $$\Bf$$ in terms of vector products with the gradient is
\label{eqn:quaternion2maxwellWithGA:300}
\begin{aligned}
\spacegrad \cross \Bf
&= -I \lr{ \spacegrad \wedge \Bf } \\
&= \inv{2} \lr{ \lr{ -I \spacegrad } \Bf - \Bf \lr{ -I \spacegrad } } \\
&= \inv{2} \antisymmetric{ -I \spacegrad }{ \Bf } \\
&= \inv{2} \antisymmetric{ \frac{d}{dr} }{ \Bf },
\end{aligned}

where the last step takes advantage of the fact that the timelike contribution of the spacetime gradient commutes with any vector $$\Bf$$ due to its scalar nature, so cancels out of the commutator. In a similar fashion, the dot product may be written as an anticommutator
\label{eqn:quaternion2maxwellWithGA:480}
\spacegrad \cdot \Bf
= \inv{2} \lr{ \spacegrad \Bf + \Bf \spacegrad }
= \inv{2} \symmetric{ \spacegrad }{ \Bf },

as can the scalar time derivative
\label{eqn:quaternion2maxwellWithGA:500}
\PD{t}{\Bf}
= \inv{2} \symmetric{ \inv{c} \PD{t}{} } { c \Bf }.
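
These commutator and anticommutator identities are purely algebraic in the basis elements, so they can be spot checked numerically by letting constant vectors stand in for the gradient. The sketch below (an aside, not from the original text) again assumes the Pauli-matrix representation of GA(3,0).

```python
# Numeric spot check (assumption: Pauli representation, e_k -> sigma_k;
# constant vectors stand in for the gradient, valid since the identities
# are algebraic in the basis elements).
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_1
     np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_2
     np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_3
Id = np.eye(2)
I = 1j * Id  # R^3 pseudoscalar in this representation

def vec(a):
    """Embed a Euclidean vector a as the matrix a . sigma."""
    return sum(ai * si for ai, si in zip(a, s))

a = np.array([1.0, -2.0, 3.0])   # stand-in for the gradient
f = np.array([0.5, 4.0, -1.0])   # stand-in for the field

# curl analogue: (1/2)[-I a, f] = (a x f) . sigma
comm = 0.5 * ((-I @ vec(a)) @ vec(f) - vec(f) @ (-I @ vec(a)))
assert np.allclose(comm, vec(np.cross(a, f)))

# divergence analogue: (1/2){a, f} = (a . f) times the identity
anti = 0.5 * (vec(a) @ vec(f) + vec(f) @ vec(a))
assert np.allclose(anti, np.dot(a, f) * Id)
```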

## Quaternionic form of Maxwell’s equations.

Using geometric algebra as an intermediate transformation, let’s see directly how to express Maxwell’s equations in terms of this quaternionic operator. Our starting point is Maxwell’s equations in their standard macroscopic form

\label{eqn:ece2500report:20}
\spacegrad \cross \BH = \BJ + \PD{t}{\BD}

\label{eqn:quaternion2maxwellWithGA:340}
\spacegrad \cdot \BD = \rho

\label{eqn:quaternion2maxwellWithGA:360}
\spacegrad \cross \BE = - \PD{t}{\BB}

\label{eqn:quaternion2maxwellWithGA:380}
\spacegrad \cdot \BB = 0.

Inserting these into the Maxwell-Faraday equation and into Gauss’s law for magnetism, we have
\label{eqn:quaternion2maxwellWithGA:400}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } &= -\inv{2} \symmetric{ \inv{c}\PD{t}{} }{ c \BB } \\
\inv{2} \symmetric{ \spacegrad }{ c \BB } &= 0,
\end{aligned}

or
\label{eqn:quaternion2maxwellWithGA:420}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ -I \BE } + \inv{2} \symmetric{ \inv{c}\PD{t}{} }{ -I c \BB } &= 0 \\
\inv{2} \symmetric{ -I \spacegrad }{ -I c \BB } &= 0.
\end{aligned}

We can introduce quaternionic electric and magnetic field “vectors” (really bivectors)
\label{eqn:quaternion2maxwellWithGA:440}
\begin{aligned}
\boldsymbol{\mathcal{E}} &= -I \BE = \Bi E_x + \Bj E_y + \Bk E_z \\
\boldsymbol{\mathcal{B}} &= -I \BB = \Bi B_x + \Bj B_y + \Bk B_z,
\end{aligned}

and substitute these, summing to find the quaternionic representation of the two source-free Maxwell’s equations
\label{eqn:quaternion2maxwellWithGA:460}
\boxed{
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \boldsymbol{\mathcal{E}} } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \boldsymbol{\mathcal{B}} } = 0.
}
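
As a sanity check (an aside, not in the original text), the boxed equation can be verified symbolically for an assumed vacuum plane wave $$\BE = \Be_1 \cos(z - ct)$$, $$c \BB = \Be_2 \cos(z - ct)$$, using sympy and the Pauli-matrix representation of GA(3,0).

```python
# Symbolic check (assumptions: Pauli representation e_k -> sigma_k, and a
# +z travelling vacuum plane wave as the test field).
import sympy as sp

x, y, z, t, c = sp.symbols('x y z t c', real=True, positive=True)
e = [sp.Matrix([[0, 1], [1, 0]]),          # sigma_1
     sp.Matrix([[0, -sp.I], [sp.I, 0]]),   # sigma_2
     sp.Matrix([[1, 0], [0, -1]])]         # sigma_3
I = e[0] * e[1] * e[2]                     # pseudoscalar, equals i * identity

phase = sp.cos(z - c * t)
E = -I * (phase * e[0])    # script-E = -I E,   E = xhat cos(z - c t)
cB = -I * (phase * e[1])   # c script-B = -I c B, c B = yhat cos(z - c t)

def comm(F):
    """(1/2)[d/dr, F]; the scalar (1/c) d/dt part commutes and drops out."""
    return sum(((sp.Rational(1, 2)
                 * ((-I * e[i]) * sp.diff(F, v) - sp.diff(F, v) * (-I * e[i])))
                for i, v in enumerate((x, y, z))), sp.zeros(2, 2))

def anti(F):
    """(1/2){d/dr, F}."""
    return sp.diff(F, t) / c + sum(
        ((sp.Rational(1, 2)
          * ((-I * e[i]) * sp.diff(F, v) + sp.diff(F, v) * (-I * e[i])))
         for i, v in enumerate((x, y, z))), sp.zeros(2, 2))

# the boxed source-free equation: (1/2)[d/dr, E'] + (1/2){d/dr, c B'} = 0
residual = sp.simplify(comm(E) + anti(cB))
assert residual == sp.zeros(2, 2)
```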

Inserting the quaternion curl, divergence, and time derivative representations into the Ampere-Maxwell law and Gauss’s law gives
\label{eqn:quaternion2maxwellWithGA:520}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } &= \BJ + \inv{2} \symmetric{ \inv{c} \PD{t}{} } { c \BD } \\
\inv{2} \symmetric{ \spacegrad }{ c \BD } &= c \rho,
\end{aligned}

\label{eqn:quaternion2maxwellWithGA:540}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ -I \BH } - \inv{2} \symmetric{ \inv{c} \PD{t}{} } { -I c \BD } &= -I \BJ \\
-\inv{2} \symmetric{ -I \spacegrad }{ -I c \BD } &= c \rho.
\end{aligned}

With quaternionic displacement, magnetic field intensity, and current density “vectors”
\label{eqn:quaternion2maxwellWithGA:580}
\begin{aligned}
\boldsymbol{\mathcal{D}} &= -I \BD = \Bi D_x + \Bj D_y + \Bk D_z \\
\boldsymbol{\mathcal{H}} &= -I \BH = \Bi H_x + \Bj H_y + \Bk H_z \\
\boldsymbol{\mathcal{J}} &= -I \BJ = \Bi J_x + \Bj J_y + \Bk J_z,
\end{aligned}

and summing yields the remaining two Maxwell equations in their quaternionic form
\label{eqn:quaternion2maxwellWithGA:560}
\boxed{
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \boldsymbol{\mathcal{H}} } - \inv{2} \symmetric{ \frac{d}{dr} } { c \boldsymbol{\mathcal{D}} } = c \rho + \boldsymbol{\mathcal{J}}.
}
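
This result can also be spot checked symbolically (an aside, not from the original text). The sketch below assumes a static configuration with uniform charge density $$\rho_0$$: $$\BD = (\rho_0/3)(x, y, z)$$ so that $$\spacegrad \cdot \BD = \rho_0$$, with $$\BH = 0$$ and $$\BJ = 0$$, for which the boxed equation reduces to $$-\inv{2} \symmetric{ \frac{d}{dr} }{ c \boldsymbol{\mathcal{D}} } = c \rho$$.

```python
# Symbolic check (assumptions: Pauli representation e_k -> sigma_k, and a
# static uniform-charge-density test configuration with H = 0, J = 0).
import sympy as sp

x, y, z, t, c, rho0 = sp.symbols('x y z t c rho0', real=True, positive=True)
e = [sp.Matrix([[0, 1], [1, 0]]),          # sigma_1
     sp.Matrix([[0, -sp.I], [sp.I, 0]]),   # sigma_2
     sp.Matrix([[1, 0], [0, -1]])]         # sigma_3
I = e[0] * e[1] * e[2]                     # pseudoscalar, equals i * identity
Id = sp.eye(2)

# D = (rho0/3)(x, y, z), so div D = rho0; quaternionic c script-D = -I c D
cD = -I * (c * sp.Rational(1, 3) * rho0 * (x * e[0] + y * e[1] + z * e[2]))

def anti(F):
    """(1/2){d/dr, F} in the Pauli representation."""
    return sp.diff(F, t) / c + sum(
        ((sp.Rational(1, 2)
          * ((-I * e[i]) * sp.diff(F, v) + sp.diff(F, v) * (-I * e[i])))
         for i, v in enumerate((x, y, z))), sp.zeros(2, 2))

# with H = 0 and J = 0 the boxed equation reduces to -(1/2){d/dr, cD'} = c rho
lhs = sp.simplify(-anti(cD))
assert lhs == c * rho0 * Id
```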

## Conclusions.

Maxwell’s equations in the quaternion representation have a structure that is not apparent in the Heaviside-Gibbs notation. There is some elegance to this result, but it comes at the cost of having to use commutator and anticommutator operators, which are arguably non-intuitive. The compact geometric algebra representation of Maxwell’s equation does not appear to be possible with a quaternion representation, as an additional complex degree of freedom would be required (biquaternions?). Such a degree of freedom might also allow a quaternion representation of the (fictitious) magnetic sources that are useful in antenna theory. Magnetic sources are easily incorporated into the current multivector in geometric algebra, but if that is done in the derivation above, the result is an odd-grade multivector source that has no quaternion representation.

# References

[1] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[2] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.