## PHY2403H Quantum Field Theory. Lecture 3: Lorentz transformations and a scalar action. Taught by Prof. Erich Poppitz

September 18, 2018

### DISCLAIMER: Very rough notes from class. Some additional side notes, but otherwise barely edited.

These are notes for the UofT course PHY2403H, Quantum Field Theory, taught by Prof. Erich Poppitz.

## Determinant of Lorentz transformations

We require that Lorentz transformations leave the dot product invariant, that is $$x \cdot y = x’ \cdot y’$$, or
\label{eqn:qftLecture3:20}
x^\mu g_{\mu\nu} y^\nu = {x’}^\mu g_{\mu\nu} {y’}^\nu.

Explicitly, with coordinate transformations
\label{eqn:qftLecture3:40}
\begin{aligned}
{x’}^\mu &= {\Lambda^\mu}_\rho x^\rho \\
{y’}^\mu &= {\Lambda^\mu}_\rho y^\rho
\end{aligned}

such a requirement is equivalent to demanding that
\label{eqn:qftLecture3:500}
\begin{aligned}
x^\mu g_{\mu\nu} y^\nu
&=
{\Lambda^\mu}_\rho x^\rho
g_{\mu\nu}
{\Lambda^\nu}_\kappa y^\kappa \\
&=
x^\mu
{\Lambda^\alpha}_\mu
g_{\alpha\beta}
{\Lambda^\beta}_\nu
y^\nu,
\end{aligned}

or
\label{eqn:qftLecture3:60}
g_{\mu\nu}
=
{\Lambda^\alpha}_\mu
g_{\alpha\beta}
{\Lambda^\beta}_\nu

multiplying by the inverse we find
\label{eqn:qftLecture3:200}
\begin{aligned}
g_{\mu\nu}
{\lr{\Lambda^{-1}}^\nu}_\lambda
&=
{\Lambda^\alpha}_\mu
g_{\alpha\beta}
{\Lambda^\beta}_\nu
{\lr{\Lambda^{-1}}^\nu}_\lambda \\
&=
{\Lambda^\alpha}_\mu
g_{\alpha\lambda} \\
&=
g_{\lambda\alpha}
{\Lambda^\alpha}_\mu.
\end{aligned}

This is now amenable to expressing in matrix form
\label{eqn:qftLecture3:220}
\begin{aligned}
(G \Lambda^{-1})_{\mu\lambda}
&=
(G \Lambda)_{\lambda\mu} \\
&=
((G \Lambda)^\T)_{\mu\lambda} \\
&=
(\Lambda^\T G)_{\mu\lambda},
\end{aligned}

or
\label{eqn:qftLecture3:80}
G \Lambda^{-1}
=
(G \Lambda)^\T.

Taking determinants (using the normal identities for products of determinants, determinants of transposes and inverses), we find
\label{eqn:qftLecture3:100}
det(G)
det(\Lambda^{-1})
=
det(G) det(\Lambda),

or
\label{eqn:qftLecture3:120}
det(\Lambda)^2 = 1,

or
$$det(\Lambda) = \pm 1$$. We will generally ignore the case of reflections in spacetime that have a negative determinant.

Smart-alec Peeter pointed out after class last time that we can do the same thing more easily in matrix notation
\label{eqn:qftLecture3:140}
\begin{aligned}
x’ &= \Lambda x \\
y’ &= \Lambda y
\end{aligned}

where
\label{eqn:qftLecture3:160}
\begin{aligned}
x’ \cdot y’
&=
(x’)^\T G y’ \\
&=
x^\T \Lambda^\T G \Lambda y,
\end{aligned}

which we require to be $$x \cdot y = x^\T G y$$ for all four vectors $$x, y$$, that is
\label{eqn:qftLecture3:180}
\Lambda^\T G \Lambda = G.

We can find the result \ref{eqn:qftLecture3:120} immediately without having to first translate from index notation to matrices.
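As a quick numerical sanity check (my own addition, not from the lecture), a short numpy sketch verifies the index relation \ref{eqn:qftLecture3:60} and the determinant condition for a sample boost; the rapidity value is an arbitrary choice:

```python
import numpy as np

# Minkowski metric G = diag(1, -1, -1, -1)
G = np.diag([1.0, -1.0, -1.0, -1.0])

# A sample Lorentz boost along x with rapidity eta (beta = tanh(eta))
eta = 0.3
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(eta)
L[0, 1] = L[1, 0] = np.sinh(eta)

# Index form: g_{mu nu} = Lambda^alpha_mu g_{alpha beta} Lambda^beta_nu
lhs = np.einsum('am,ab,bn->mn', L, G, L)
assert np.allclose(lhs, G)

# det(Lambda)^2 = 1
assert np.isclose(np.linalg.det(L) ** 2, 1.0)
```

The einsum subscripts mirror the index placement in the equation: the first index of each matrix is the upper (row) index.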

## Field theory

The electrostatic potential is an example of a scalar field $$\phi(\Bx)$$ unchanged by SO(3) rotations
\label{eqn:qftLecture3:240}
\Bx \rightarrow \Bx’ = O \Bx,

that is
\label{eqn:qftLecture3:260}
\phi'(\Bx’) = \phi(\Bx).

Here $$\phi'(\Bx’)$$ is the value of the (electrostatic) scalar potential in a primed frame.

However, the electrostatic field is not invariant under Lorentz transformation.
We postulate that there is some scalar field
\label{eqn:qftLecture3:280}
\phi'(x’) = \phi(x),

where $$x’ = \Lambda x$$ is an SO(1,3) transformation. There are actually no stable particles (fields that persist at long distances) described by Lorentz scalar fields, although there are some unstable scalar particles such as the Higgs, pions, and kaons. However, much of our homework and discussion will be focused on scalar fields, since they are the easiest to start with.

We need to first understand how derivatives $$\partial_\mu \phi(x)$$ transform. Using the chain rule
\label{eqn:qftLecture3:300}
\begin{aligned}
\PD{x^\mu}{\phi(x)}
&=
\PD{x^\mu}{\phi'(x’)} \\
&=
\PD{{x’}^\nu}{\phi'(x’)}
\PD{{x}^\mu}{{x’}^\nu} \\
&=
\PD{{x’}^\nu}{\phi'(x’)}
\partial_\mu \lr{
{\Lambda^\nu}_\rho x^\rho
} \\
&=
\PD{{x’}^\nu}{\phi'(x’)}
{\Lambda^\nu}_\mu \\
&=
\PD{{x’}^\nu}{\phi(x)}
{\Lambda^\nu}_\mu.
\end{aligned}

Multiplying by the inverse $${\lr{\Lambda^{-1}}^\mu}_\kappa$$ we get
\label{eqn:qftLecture3:320}
\PD{{x’}^\kappa}{}
=
{\lr{\Lambda^{-1}}^\mu}_\kappa \PD{x^\mu}{}
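A finite-difference check of this transformation rule is easy to set up (my own sketch, not from class); the sample field, boost, and evaluation point below are arbitrary choices:

```python
import numpy as np

# Sample boost (rapidity 0.2) and its inverse
eta = 0.2
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(eta)
L[0, 1] = L[1, 0] = np.sinh(eta)
Linv = np.linalg.inv(L)

# A sample scalar field, and its primed-frame version phi'(x') = phi(x)
phi = lambda x: np.sin(x[0]) * np.exp(-x[1]**2 - x[2]**2 - x[3]**2)
phi_prime = lambda xp: phi(Linv @ xp)

def grad(f, x, h=1e-6):
    # central-difference gradient of f at the point x
    g = np.zeros(4)
    for mu in range(4):
        e = np.zeros(4)
        e[mu] = h
        g[mu] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([0.3, 0.1, -0.2, 0.4])
xp = L @ x

# d phi'/d x'^kappa = (Lambda^{-1})^mu_kappa d phi/d x^mu
lhs = grad(phi_prime, xp)
rhs = np.einsum('mk,m->k', Linv, grad(phi, x))
assert np.allclose(lhs, rhs, atol=1e-5)
```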

This should be familiar, since it is the analogue of the transformation of the lower index (covariant) coordinates
\label{eqn:qftLecture3:340}
{x’}_\mu = {\lr{\Lambda^{-1}}^\nu}_\mu x_\nu.

## Actions

We will start with a classical action, and quantize to determine a QFT. In mechanics we have the particle position $$q(t)$$, which is a classical field in 1+0 time and space dimensions. Our action is
\label{eqn:qftLecture3:360}
S
= \int dt \LL(t)
= \int dt \lr{
\inv{2} \dot{q}^2 - V(q)
}.
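As a concrete illustration (my addition, not from the lecture), this action can be evaluated numerically for a sampled trajectory; the harmonic potential $$V(q) = q^2/2$$ and the trial path $$q = \cos t$$ are arbitrary choices:

```python
import numpy as np

# Sample trajectory q(t) = cos(t) in a harmonic potential V(q) = q^2/2
t = np.linspace(0, 2 * np.pi, 2001)
q = np.cos(t)

qdot = np.gradient(q, t)                 # finite-difference velocity
lagrangian = 0.5 * qdot**2 - 0.5 * q**2  # L = qdot^2/2 - V(q)

# Trapezoid rule for S = int dt L(t)
S = np.sum(0.5 * (lagrangian[1:] + lagrangian[:-1]) * np.diff(t))

# For q = cos t, L = (sin^2 t - cos^2 t)/2 = -cos(2t)/2, which
# integrates to zero over a full period.
assert abs(S) < 1e-3
```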

This action is local in time, depending on the position of the particle only at a single instant. You could imagine a more complex action that also depends on the position at future or past times
\label{eqn:qftLecture3:380}
S
= \int dt’ q(t’) K( t’ - t ),

but we don’t seem to find such actions in classical mechanics.

### Principles determining the form of the action.

• relativity (action is invariant under Lorentz transformation)
• locality (action depends on fields and their derivatives at a given $$(t, \Bx)$$).
• Gauge principle (the action should be invariant under gauge transformation). We won’t discuss this in detail right now since we will start with studying scalar fields.
Recall that for Maxwell’s equations a gauge transformation has the form
\label{eqn:qftLecture3:520}
\phi \rightarrow \phi + \dot{\chi}, \BA \rightarrow \BA - \spacegrad \chi.

Suppose we have a real scalar field $$\phi(x)$$ where $$x \in \mathbb{R}^{1,d-1}$$. We will be integrating over space and time $$\int dt d^{d-1} x$$ which we will write as $$\int d^d x$$. Our action is
\label{eqn:qftLecture3:400}
S = \int d^d x \lr{ \text{Some action density to be determined } }

The analogue of $$\dot{q}^2$$ is
\label{eqn:qftLecture3:420}
\begin{aligned}
\lr{ \PD{x^\mu}{\phi} }
\lr{ \PD{x^\nu}{\phi} }
g^{\mu\nu}
&=
(\partial_\mu \phi) (\partial_\nu \phi) g^{\mu\nu} \\
&= \partial^\mu \phi \partial_\mu \phi.
\end{aligned}

This has both time and spatial components, that is
\label{eqn:qftLecture3:440}
\partial^\mu \phi \partial_\mu \phi = \dotphi^2 - (\spacegrad \phi)^2,

so the desired simplest scalar action is
\label{eqn:qftLecture3:460}
S = \int d^d x \lr{ \dotphi^2 - (\spacegrad \phi)^2 }.

The measure transforms using a Jacobian, which we have seen is the determinant of the (inverse) Lorentz transformation, and has unit magnitude
\label{eqn:qftLecture3:480}
d^d x’ = d^d x \Abs{ det( \Lambda^{-1} ) } = d^d x.

## Question: Four vector form of the Maxwell gauge transformation.

Show that the transformation
\label{eqn:qftLecture3:580}
A^\mu \rightarrow A^\mu + \partial^\mu \chi

is the desired four-vector form of the gauge transformation \ref{eqn:qftLecture3:520}, that is
\label{eqn:qftLecture3:540}
\begin{aligned}
j^\nu
&= \partial_\mu {F’}^{\mu\nu} \\
&= \partial_\mu F^{\mu\nu}.
\end{aligned}

Also relate this four-vector gauge transformation to the spacetime split.

\label{eqn:qftLecture3:560}
\begin{aligned}
\partial_\mu {F'}^{\mu\nu}
&=
\partial_\mu \lr{ \partial^\mu {A'}^\nu - \partial^\nu {A'}^\mu } \\
&=
\partial_\mu \lr{
\partial^\mu \lr{ A^\nu + \partial^\nu \chi }
- \partial^\nu \lr{ A^\mu + \partial^\mu \chi }
} \\
&=
\partial_\mu {F}^{\mu\nu}
+
\partial_\mu \partial^\mu \partial^\nu \chi
-
\partial_\mu \partial^\nu \partial^\mu \chi \\
&=
\partial_\mu {F}^{\mu\nu},
\end{aligned}

by equality of mixed partials. Expanding \ref{eqn:qftLecture3:580} explicitly we find
\label{eqn:qftLecture3:600}
{A’}^\mu = A^\mu + \partial^\mu \chi,

which is
\label{eqn:qftLecture3:620}
\begin{aligned}
\phi’ = {A’}^0 &= A^0 + \partial^0 \chi = \phi + \dot{\chi} \\
\BA’ \cdot \Be_k = {A’}^k &= A^k + \partial^k \chi = \lr{ \BA – \spacegrad \chi } \cdot \Be_k.
\end{aligned}

The last of which can be written in vector notation as $$\BA’ = \BA – \spacegrad \chi$$.
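This gauge invariance can also be verified symbolically (a sketch of my own, assuming sympy; the potentials and gauge function are arbitrary undefined functions):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)
g = sp.diag(1, -1, -1, -1)   # Minkowski metric, its own inverse

def d_up(f, mu):
    # partial^mu f = g^{mu nu} partial_nu f
    return sum(g[mu, nu] * sp.diff(f, X[nu]) for nu in range(4))

# Arbitrary symbolic potentials A^mu and gauge function chi
A = [sp.Function(f'A{mu}')(*X) for mu in range(4)]
chi = sp.Function('chi')(*X)

def F(A):
    # F^{mu nu} = partial^mu A^nu - partial^nu A^mu
    return sp.Matrix(4, 4, lambda m, n: d_up(A[n], m) - d_up(A[m], n))

A_gauged = [A[mu] + d_up(chi, mu) for mu in range(4)]

# F^{mu nu} is unchanged by A^mu -> A^mu + partial^mu chi
assert sp.simplify(F(A_gauged) - F(A)) == sp.zeros(4, 4)
```

The cancellation is exactly the equality of mixed partials used above.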

## UofT QFT Fall 2018 Lecture 2. Units, scales, and Lorentz transformations. Taught by Prof. Erich Poppitz

September 17, 2018

## Natural units.

\label{eqn:qftLecture2:20}
\begin{aligned}
[\Hbar] &= [\text{action}] = M \frac{L^2}{T^2} T = \frac{M L^2}{T} \\
[c] &= [\text{velocity}] = \frac{L}{T} \\
[\text{energy}] &= M \frac{L^2}{T^2}.
\end{aligned}

Setting $$c = 1$$ means
\label{eqn:qftLecture2:240}
\frac{L}{T} = 1

and setting $$\Hbar = 1$$ means
\label{eqn:qftLecture2:260}
[\Hbar] = [\text{action}] = M L {\frac{L}{T}} = M L

therefore
\label{eqn:qftLecture2:280}
[L] = \inv{\text{mass}}

and
\label{eqn:qftLecture2:300}
[\text{energy}] = M {\frac{L^2}{T^2}} = \text{mass} \sim \text{eV}.

Summary

• $$\text{energy} \sim \text{eV}$$
• $$\text{distance} \sim \inv{M}$$
• $$\text{time} \sim \inv{M}$$

From:
\label{eqn:qftLecture2:320}
\alpha = \frac{e^2}{4 \pi {\Hbar c}}

which is dimensionless ($$1/137$$), so electric charge is dimensionless.

Some useful numbers in natural units

\label{eqn:qftLecture2:40}
\begin{aligned}
m_\txte &\sim 10^{-27} \text{g} \sim 0.5 \text{MeV} \\
m_\txtp &\sim 2000 m_\txte \sim 1 \text{GeV} \\
m_\pi &\sim 140 \text{MeV} \\
m_\mu &\sim 105 \text{MeV} \\
\Hbar c &\sim 200 \text{MeV} \,\text{fm} = 1
\end{aligned}
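These numbers make unit conversions one-liners. As a sketch (my own, not from class), using $$\Hbar c \sim 197 \,\text{MeV}\,\text{fm}$$ to convert masses to length scales:

```python
# hbar*c ~ 197 MeV*fm lets us convert masses to lengths: lambda = hbar c / (m c^2)
hbar_c = 197.3           # MeV * fm
m_e = 0.511              # MeV (electron)
m_pi = 140.0             # MeV (pion)

lambda_e = hbar_c / m_e      # electron Compton wavelength, fm
lambda_pi = hbar_c / m_pi    # pion Compton wavelength, fm

assert 380 < lambda_e < 390  # ~386 fm
assert 1 < lambda_pi < 2     # ~1.4 fm, roughly the nuclear scale
```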

## Gravity

Interaction energy of two particles

\label{eqn:qftLecture2:60}
G_\txtN \frac{m_1 m_2}{r}

\label{eqn:qftLecture2:80}
[\text{energy}] \sim [G_\txtN] \frac{M^2}{L}

\label{eqn:qftLecture2:100}
[G_\txtN]
\sim
[\text{energy}] \frac{L}{M^2}

but energy x distance is dimensionless (action) in our units

\label{eqn:qftLecture2:120}
[G_\txtN]
\sim
\frac{\text{dimensionless}}{M^2}

\label{eqn:qftLecture2:140}
\frac{G_\txtN}{\Hbar c} \sim \inv{M^2} \sim \inv{\lr{10^{19} \text{GeV}}^2}

Planck mass

\label{eqn:qftLecture2:160}
M_{\text{Planck}} \sim \sqrt{\frac{\Hbar c}{G_\txtN}}
\sim 10^{-5} \text{g} \sim 10^{19} \text{GeV}
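This estimate can be reproduced directly from the SI constants (my own check, not part of the lecture):

```python
import math

# SI values
hbar = 1.0546e-34   # J s
c = 2.9979e8        # m/s
G_N = 6.674e-11     # m^3 / (kg s^2)
GeV = 1.602e-10     # J

M_planck_kg = math.sqrt(hbar * c / G_N)
M_planck_GeV = M_planck_kg * c**2 / GeV   # rest energy in GeV

assert 2.0e-8 < M_planck_kg < 2.4e-8      # ~2.2e-8 kg ~ 1e-5 g
assert 1.1e19 < M_planck_GeV < 1.3e19     # ~1.2e19 GeV
```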

We can revisit the scale diagram from last lecture in terms of MeV mass/energy values, as sketched in fig. 1.

fig. 1. Scales, take II.

At the classical electron radius scale, we consider phenomena such as back reaction of radiation, the self energy of electrons. At the Compton wavelength we have to allow for production of multiple particle pairs. At Bohr radius scales we must start using QM instead of classical mechanics.

## Cross section.

Verbal discussion of cross section, not captured in these notes. Roughly, the cross section relates the rate of scattering events to the flux of some source through an area, and itself carries units of area.

We’ll compute the cross section of a number of different systems in this course. The cross section is relevant in scattering such as the electron-electron scattering sketched in fig. 2.

fig. 2. Electron electron scattering.

We assume that QED is highly relativistic. In natural units, our scale factor is basically the square of the electric charge
\label{eqn:qftLecture2:180}
\alpha \sim e^2,

so the cross section has the form
\label{eqn:qftLecture2:200}
\sigma \sim \frac{\alpha^2}{E^2} \lr{ 1 + O(\alpha) + O(\alpha^2) + \cdots }

In gravity we could consider scattering of electrons, where $$G_\txtN$$ takes the place of $$\alpha$$. However, $$G_\txtN$$ has dimensions.

For electron-electron scattering due to gravitons

\label{eqn:qftLecture2:220}
\sigma \sim G_\txtN^2 E^2 \lr{ 1 + G_\txtN E^2 + \cdots }

Now the cross section grows with energy. This will cause some problems (violating unitarity: probabilities greater than 1!) when $$O(G_\txtN E^2) = 1$$.

In any quantum field theory where the coupling constant is not dimensionless we have the same sort of problems at some scale.

The point is that we can get far considering just dimensional analysis.

If the coupling constant has a dimension $$(1/\text{mass})^N\,, N > 0$$, then unitarity will be violated at high energy. One such theory is the four-Fermi theory of beta decay (a precursor of electroweak theory), whose coupling constant has dimensions of inverse mass squared, $$G_\txtF \sim (1/{100 \text{GeV}})^2$$. Restoring unitarity was the motivation for introducing the Higgs mechanism.
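The same dimensional estimate pins down the scale at which the Fermi theory breaks down. A one-line check (my addition), using the measured value of the Fermi constant:

```python
import math

# Fermi constant (measured): G_F ~ 1.166e-5 GeV^-2 in natural units
G_F = 1.166e-5   # GeV^-2

# Dimensional analysis says unitarity trouble sets in near E ~ 1/sqrt(G_F)
E_unitarity = 1 / math.sqrt(G_F)   # GeV

assert 250 < E_unitarity < 350     # ~293 GeV, the electroweak scale
```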

## Lorentz transformations.

The goal, perhaps not for today, is to study the simplest (relativistic) scalar field theory. First studied classically, and then consider such a quantum field theory.

How is relativity implemented when we write the Lagrangian and action?

Our first step must be to consider Lorentz transformations and the Lorentz group.

Spacetime (Minkowski space) is $$\R{3,1}$$ (or $$\R{d-1,1}$$). Our coordinates are

\label{eqn:qftLecture2:340}
(c t, x^1, x^2, x^3) = (c t, \Br).

Here, we’ve scaled the time scale by $$c$$ so that we measure time and space in the same dimensions. We write this as

\label{eqn:qftLecture2:360}
x^\mu = (x^0, x^1, x^2, x^3),

where $$\mu = 0, 1, 2, 3$$, and call this a “4-vector”. These are called the space-time coordinates of an event, which tell us where and when an event occurs.

For two events whose spacetime coordinates differ by $$dx^0, dx^1, dx^2, dx^3$$ we introduce the notion of a space time \underline{interval}

\label{eqn:qftLecture2:380}
\begin{aligned}
ds^2
&= c^2 dt^2
- (dx^1)^2
- (dx^2)^2
- (dx^3)^2 \\
&=
\sum_{\mu, \nu = 0}^3 g_{\mu\nu} dx^\mu dx^\nu
\end{aligned}

Here $$g_{\mu\nu}$$ is the Minkowski space metric, an object with two indexes that run from 0-3. i.e. this is a diagonal matrix

\label{eqn:qftLecture2:400}
g_{\mu\nu} \sim
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
\end{bmatrix}

i.e.
\label{eqn:qftLecture2:420}
\begin{aligned}
g_{00} &= 1 \\
g_{11} &= -1 \\
g_{22} &= -1 \\
g_{33} &= -1 \\
\end{aligned}

We will use the Einstein summation convention, where any repeated upper and lower indexes are considered summed over. That is \ref{eqn:qftLecture2:380} is written with an implied sum
\label{eqn:qftLecture2:440}
ds^2 = g_{\mu\nu} dx^\mu dx^\nu.

Explicit expansion:
\label{eqn:qftLecture2:460}
\begin{aligned}
ds^2
&= g_{\mu\nu} dx^\mu dx^\nu \\
&=
g_{00} dx^0 dx^0
+g_{11} dx^1 dx^1
+g_{22} dx^2 dx^2
+g_{33} dx^3 dx^3 \\
&=
(1) dx^0 dx^0
+ (-1) dx^1 dx^1
+ (-1) dx^2 dx^2
+ (-1) dx^3 dx^3.
\end{aligned}
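The implied sums map directly onto an einsum (a numpy sketch of my own; the displacement values are arbitrary):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
dx = np.array([2.0, 1.0, 0.5, 0.25])   # (dx^0, dx^1, dx^2, dx^3)

# ds^2 = g_{mu nu} dx^mu dx^nu, with the implied sums over mu, nu
ds2 = np.einsum('mn,m,n->', g, dx, dx)

assert np.isclose(ds2, 2.0**2 - 1.0**2 - 0.5**2 - 0.25**2)
```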

Recall that rotations (with orthogonal matrix representations) are transformations that leave the dot product unchanged, that is

\label{eqn:qftLecture2:480}
\begin{aligned}
(R \Bx) \cdot (R \By)
&= \Bx^\T R^\T R \By \\
&= \Bx^\T \By \\
&= \Bx \cdot \By,
\end{aligned}

where $$R$$ is an orthogonal 3×3 rotation matrix. The set of transformations that leave the dot product unchanged have orthogonal matrix representations $$R^\T R = 1$$. We call the subset of such transformations with unit determinant the SO(3) group.

We call a linear transformation acting on 4-vectors a Lorentz transformation if it leaves the spacetime interval (i.e. the inner product of 4-vectors) invariant. That is, a transformation that leaves
\label{eqn:qftLecture2:500}
x^\mu y^\nu g_{\mu\nu} = x^0 y^0 - x^1 y^1 - x^2 y^2 - x^3 y^3

unchanged.

Suppose that the transformation has a 4×4 matrix form

\label{eqn:qftLecture2:520}
{x’}^\mu = {\Lambda^\mu}_\nu x^\nu

For an example of a possible $$\Lambda$$, consider the transformation sketched in fig. 3.

fig. 3. Boost transformation.

We know that a boost has the form
\label{eqn:qftLecture2:540}
\begin{aligned}
x &= \frac{x’ + v t’}{\sqrt{1 - v^2/c^2}} \\
y &= y’ \\
z &= z’ \\
t &= \frac{t’ + (v/c^2) x’}{\sqrt{1 - v^2/c^2}} \\
\end{aligned}

(this is a boost along the x-axis, not y as I’d drawn),
or
\label{eqn:qftLecture2:560}
\begin{bmatrix}
c t \\
x \\
y \\
z
\end{bmatrix}
=
\begin{bmatrix}
\inv{\sqrt{1 - v^2/c^2}} & \frac{v/c}{\sqrt{1 - v^2/c^2}} & 0 & 0 \\
\frac{v/c}{\sqrt{1 - v^2/c^2}} & \frac{1}{\sqrt{1 - v^2/c^2}} & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
c t’ \\
x’ \\
y’ \\
z’
\end{bmatrix}
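We can verify numerically that such a boost leaves the Minkowski dot product invariant (my own sketch, not from class; the velocity and sample vectors are arbitrary, with $$c = 1$$):

```python
import numpy as np

beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)

# x-axis boost in the (ct, x, y, z) basis, as in the matrix above
L = np.array([
    [gamma, gamma * beta, 0, 0],
    [gamma * beta, gamma, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])
G = np.diag([1.0, -1.0, -1.0, -1.0])

x = np.array([1.0, 0.5, -0.3, 0.2])
y = np.array([0.7, -0.1, 0.4, 0.9])

# The boost preserves the Minkowski dot product x . y = x^T G y
assert np.isclose((L @ x) @ G @ (L @ y), x @ G @ y)
```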

Other examples include rotations: $${\Lambda^0}_0 = 1$$, zeros in $${\Lambda^0}_k$$ and $${\Lambda^k}_0$$, and a 3×3 rotation matrix in the remaining spatial block.

Back to Lorentz transformations ($$\text{SO}(1,3)^+$$), let
\label{eqn:qftLecture2:600}
\begin{aligned}
{x’}^\mu &= {\Lambda^\mu}_\nu x^\nu \\
{y’}^\kappa &= {\Lambda^\kappa}_\rho y^\rho
\end{aligned}

The dot product
\label{eqn:qftLecture2:620}
g_{\mu \kappa}
{x’}^\mu
{y’}^\kappa
=
g_{\mu \kappa}
{\Lambda^\mu}_\nu
{\Lambda^\kappa}_\rho
x^\nu
y^\rho
=
g_{\nu\rho}
x^\nu
y^\rho,

where the last step introduces the invariance requirement of the transformation. That is

\label{eqn:qftLecture2:640}
\boxed{
g_{\nu\rho}
=
g_{\mu \kappa}
{\Lambda^\mu}_\nu
{\Lambda^\kappa}_\rho.
}

### Upper and lower indexes

We’ve defined

\label{eqn:qftLecture2:660}
x^\mu = (t, x^1, x^2, x^3)

We could also define a four vector with lower indexes
\label{eqn:qftLecture2:680}
x_\nu = g_{\nu\mu} x^\mu = (t, -x^1, -x^2, -x^3).

That is
\label{eqn:qftLecture2:700}
\begin{aligned}
x_0 &= x^0 \\
x_1 &= -x^1 \\
x_2 &= -x^2 \\
x_3 &= -x^3.
\end{aligned}

which allows us to write the dot product as simply $$x^\mu y_\mu$$.
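Numerically, index lowering and the dot product are simple matrix operations (my own sketch; the sample components are arbitrary):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
x_up = np.array([1.0, 2.0, 3.0, 4.0])   # x^mu
y_up = np.array([0.5, 1.0, -1.0, 2.0])  # y^mu

# x_nu = g_{nu mu} x^mu flips the sign of the spatial components
x_low = g @ x_up
assert np.allclose(x_low, [1.0, -2.0, -3.0, -4.0])

# x^mu y_mu reproduces g_{mu nu} x^mu y^nu
assert np.isclose(x_up @ (g @ y_up), np.einsum('mn,m,n->', g, x_up, y_up))
```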

We can also define a metric tensor with upper indexes

\label{eqn:qftLecture2:401}
g^{\mu\nu} \sim
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
\end{bmatrix}

This is the inverse matrix of $$g_{\mu\nu}$$, and it satisfies
\label{eqn:qftLecture2:720}
g^{\mu \nu} g_{\nu\rho} = {\delta^\mu}_\rho

Exercise: Check:
\label{eqn:qftLecture2:740}
\begin{aligned}
g_{\mu\nu} x^\mu y^\nu
&= x_\nu y^\nu \\
&= x^\nu y_\nu \\
&= g^{\mu\nu} x_\mu y_\nu \\
&= {\delta^\mu}_\nu x_\mu y^\nu.
\end{aligned}

Class ended around this point, but it appeared that we were heading this direction:

Returning to the Lorentz invariant and multiplying both sides of
\ref{eqn:qftLecture2:640} with an inverse Lorentz transformation $$\Lambda^{-1}$$, we find
\label{eqn:qftLecture2:760}
\begin{aligned}
g_{\nu\rho}
{\lr{\Lambda^{-1}}^\rho}_\alpha
&=
g_{\mu \kappa}
{\Lambda^\mu}_\nu
{\Lambda^\kappa}_\rho
{\lr{\Lambda^{-1}}^\rho}_\alpha \\
&=
g_{\mu \kappa}
{\Lambda^\mu}_\nu
{\delta^\kappa}_\alpha \\
&=
g_{\mu \alpha}
{\Lambda^\mu}_\nu,
\end{aligned}

or
\label{eqn:qftLecture2:780}
\lr{\Lambda^{-1}}_{\nu \alpha} = \Lambda_{\alpha \nu}.

This is clearly analogous to $$R^\T = R^{-1}$$, although the index notation obscures things considerably. Prof. Poppitz said that next week this would all lead to showing that the determinant of any Lorentz transformation was $$\pm 1$$.

For what it’s worth, it seems to me that this index notation makes life a lot harder than it needs to be, at least for a matrix related question (i.e. determinant of the transformation). In matrix/column-(4)-vector notation, let $$x’ = \Lambda x, y’ = \Lambda y$$ be two four vector transformations, then
\label{eqn:qftLecture2:800}
x’ \cdot y’ = {x’}^T G y’ = (\Lambda x)^T G \Lambda y = x^T ( \Lambda^T G \Lambda) y = x^T G y.

so
\label{eqn:qftLecture2:820}
\boxed{
\Lambda^T G \Lambda = G.
}

Taking determinants of both sides gives $$-(det(\Lambda))^2 = -1$$, and thus $$det(\Lambda) = \pm 1$$.

## The many faces of Maxwell’s equations


The following is a possible introduction for a report for a UofT ECE2500 project associated with writing a small book: “Geometric Algebra for Electrical Engineers”. Given the space constraints for the report I may have to drop much of this, but some of the history of Maxwell’s equations may be of interest, so I thought I’d share before the knife hits the latex.

## Goals of the project.

This project had a few goals

1. Perform a literature review of applications of geometric algebra to the study of electromagnetism. Geometric algebra will be defined precisely later, along with bivector, trivector, multivector and other geometric algebra generalizations of the vector.
2. Identify the subset of the literature that had direct relevance to electrical engineering.
3. Create a complete, and as compact as possible, introduction to the prerequisites required to apply geometric algebra to problems in electromagnetism.

## The many faces of electromagnetism.

There is a long history of attempts to find more elegant, compact and powerful ways of encoding and working with Maxwell’s equations.

### Maxwell’s formulation.

Maxwell [12] employs some differential operators, including the gradient $$\spacegrad$$ and Laplacian $$\spacegrad^2$$, but the divergence and gradient are always written out in full using coordinates, usually in integral form. Reading the original Treatise highlights how important notation can be, as most modern engineering or physics practitioners would find his original work incomprehensible. A nice translation from Maxwell’s notation to the modern Heaviside-Gibbs notation can be found in [16].

### Quaternion representation.

In his second volume [11] the equations of electromagnetism are stated using quaternions (an extension of complex numbers to three dimensions), although quaternions are not otherwise used in the work. The modern form of Maxwell’s equations in quaternion form is
\label{eqn:ece2500report:220}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } - \inv{2} \symmetric{ \frac{d}{dr} } { c \BD } &= c \rho + \BJ \\
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \BB } &= 0,
\end{aligned}

where $$\ifrac{d}{dr} = (1/c) \PDi{t}{} + \Bi \PDi{x}{} + \Bj \PDi{y}{} + \Bk \PDi{z}{}$$ [7] acts bidirectionally, and vectors are expressed in terms of the quaternion basis $$\setlr{ \Bi, \Bj, \Bk }$$, subject to the relations $$\Bi^2 = \Bj^2 = \Bk^2 = -1, \quad \Bi \Bj = \Bk = -\Bj \Bi, \quad \Bj \Bk = \Bi = -\Bk \Bj, \quad \Bk \Bi = \Bj = -\Bi \Bk$$.
There is clearly more structure to these equations than the traditional Heaviside-Gibbs representation that we are used to, which says something for the quaternion model. However, this structure requires notation that is arguably non-intuitive. The fact that the quaternion representation was abandoned long ago by most electromagnetism researchers and engineers supports such an argument.

### Minkowski tensor representation.

Minkowski introduced the concept of a complex time coordinate $$x_4 = i c t$$ for special relativity [3]. Such a four-vector representation can be used for many of the relativistic four-vector pairs of electromagnetism, such as the current $$(c\rho, \BJ)$$, and the energy-momentum Lorentz force relations, and can also be applied to Maxwell’s equations
\label{eqn:ece2500report:140}
\sum_{\mu= 1}^4 \PD{x_\mu}{F_{\mu\nu}} = - 4 \pi j_\nu, \qquad
\sum_{\mu,\lambda,\rho=1}^4
\epsilon_{\mu\nu\lambda\rho}
\PD{x_\mu}{F_{\lambda\rho}} = 0,

where
\label{eqn:ece2500report:160}
F
=
\begin{bmatrix}
0 & B_z & -B_y & -i E_x \\
-B_z & 0 & B_x & -i E_y \\
B_y & -B_x & 0 & -i E_z \\
i E_x & i E_y & i E_z & 0
\end{bmatrix}.

A rank-2 complex (Hermitian) tensor contains all six of the field components. Transformation of coordinates for this representation of the field may be performed exactly like the transformation for any other four-vector. This formalism is described nicely in [13], where the structure used is motivated by transformational requirements. One of the costs of this tensor representation is that we lose the clear separation of the electric and magnetic fields that we are so comfortable with. Another cost is that we lose the distinction between space and time, as separate space and time coordinates have to be projected out of a larger four vector. Both of these costs have theoretical benefits in some applications, particularly for high energy problems where relativity is important, but for the low velocity problems near and dear to electrical engineers who can freely treat space and time independently, the advantages are not clear.

### Modern tensor formalism.

The Minkowski representation fell out of favour in theoretical physics, which settled on a real tensor representation that utilizes an explicit metric tensor $$g_{\mu\nu} = \pm \textrm{diag}(1, -1, -1, -1)$$ to represent the complex inner products of special relativity. In this tensor formalism, Maxwell’s equations are also reduced to a set of two tensor relationships ([10], [8], [5]).
\label{eqn:ece2500report:40}
\begin{aligned}
\partial_\mu F^{\mu \nu} &= \mu_0 J^\nu \\
\epsilon^{\alpha \beta \mu \nu} \partial_\beta F_{\mu \nu} &= 0,
\end{aligned}

where $$F^{\mu\nu}$$ is a \textit{real} rank-2 antisymmetric tensor that contains all six electric and magnetic field components, and $$J^\nu$$ is a four-vector current containing both charge density and current density components. \Cref{eqn:ece2500report:40} provides a unified and simpler theoretical framework for electromagnetism, and is used extensively in physics but not engineering.

### Differential forms.

It has been argued that a differential forms treatment of electromagnetism provides some of the same theoretical advantages as the tensor formalism, without the disadvantages of introducing a hellish mess of index manipulation into the mix. With differential forms it is also possible to express Maxwell’s equations as two equations. The free-space differential forms equivalent [4] to the tensor equations is
\label{eqn:ece2500report:60}
\begin{aligned}
d \alpha &= 0 \\
d *\alpha &= 0,
\end{aligned}

where
\label{eqn:ece2500report:180}
\alpha = \lr{ E_1 dx^1 + E_2 dx^2 + E_3 dx^3 }(c dt) + H_1 dx^2 dx^3 + H_2 dx^3 dx^1 + H_3 dx^1 dx^2.

One of the advantages of this representation is that it is valid even for curvilinear coordinate representations, which are handled naturally in differential forms. However, this formalism also comes with a number of costs. One cost (or benefit), like that of the tensor formalism, is that this is implicitly a relativistic approach subject to non-Euclidean orthonormality conditions $$(dx^i, dx^j) = \delta^{ij}, (dx^i, c dt) = 0, (c dt, c dt) = -1$$. Most grievous of the costs is the requirement to use differentials $$dx^1, dx^2, dx^3, c dt$$, instead of a more familiar set of basis vectors, even for non-curvilinear coordinates. This requirement is easily viewed as unnatural, and likely one of the reasons that electromagnetism with differential forms has never become popular.

### Vector formalism.

Euclidean vector algebra, in particular the vector algebra and calculus of $$R^3$$, is the de-facto language of electrical engineering for electromagnetism. Maxwell’s equations in the Heaviside-Gibbs vector formalism are
\label{eqn:ece2500report:20}
\begin{aligned}
\spacegrad \cross \BE &= - \PD{t}{\BB} \\
\spacegrad \cross \BH &= \BJ + \PD{t}{\BD} \\
\spacegrad \cdot \BD &= \rho \\
\spacegrad \cdot \BB &= 0.
\end{aligned}

We are all intimately familiar with these equations, with the dot and the cross products, and with gradient, divergence and curl operations that are used to express them.
Given how comfortable we are with this mathematical formalism, there has to be a really good reason to switch to something else.

### Space time algebra (geometric algebra).

An alternative to any of the electrodynamics formalisms described above is STA, the Space Time Algebra. STA is a relativistic geometric algebra that allows Maxwell’s equations to be combined into one equation ([2], [6])
\label{eqn:ece2500report:80}
\grad F = J,
where
\label{eqn:ece2500report:200}
F = \BE + I c \BB \qquad (= \BE + I \eta \BH)

is a bivector field containing both the electric and magnetic field “vectors”, $$\grad = \gamma^\mu \partial_\mu$$ is the spacetime gradient, $$J$$ is a four vector containing electric charge and current components, and $$I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$$ is the spacetime pseudoscalar, the ordered product of the basis vectors $$\setlr{ \gamma_\mu }$$. The STA representation is explicitly relativistic, with non-Euclidean relationships between the basis vectors $$\gamma_0 \cdot \gamma_0 = 1 = -\gamma_k \cdot \gamma_k, \forall k > 0$$. In this formalism “spatial” vectors $$\Bx = \sum_{k>0} \gamma_k \gamma_0 x^k$$ are represented as spacetime bivectors, requiring a small sleight of hand when switching between STA notation and conventional vector representation. Not coincidentally, $$F$$ has exactly the same structure as the 2-form $$\alpha$$ above, provided the differential 1-forms $$dx^\mu$$ are replaced by the basis vectors $$\gamma_\mu$$. However, there is a simple complex structure inherent in the STA form that is not obvious in the 2-form equivalent. The bivector representation of the field $$F$$ directly encodes the antisymmetric nature of $$F^{\mu\nu}$$ from the tensor formalism, and the tensor equivalents of most STA results can be calculated easily.

Having a single PDE for all of Maxwell’s equations allows for direct Green’s function solution of the field, and has a number of other advantages. There is extensive literature exploring selected applications of STA to electrodynamics. Many theoretical results have been derived using this formalism that require significantly more complex approaches using conventional vector or tensor analysis. Unfortunately, much of the STA literature is inaccessible to the engineering student, practising engineers, or engineering instructors. To even start reading the literature, one must learn geometric algebra, aspects of special relativity and non-Euclidean geometry, generalized integration theory, and even some tensor analysis.

### Paravector formalism (geometric algebra).

In the geometric algebra literature, there are a few authors who have endorsed the use of Euclidean geometric algebras for relativistic applications ([1], [14]). These authors use a Euclidean basis “vector” $$\Be_0 = 1$$ for the timelike direction, along with a standard Euclidean basis $$\setlr{ \Be_i }$$ for the spatial directions. A hybrid scalar plus vector representation of four vectors, called paravectors, is employed. Maxwell’s equation is written as a multivector equation
\label{eqn:ece2500report:120}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = J,

where $$J$$ is a multivector source containing both the electric charge and currents, and $$c$$ is the group velocity for the medium (assumed uniform and isotropic). $$J$$ may optionally include the (fictitious) magnetic charge and currents useful in antenna theory. The paravector formalism uses the hybrid electromagnetic field representation of STA above, however, $$I = \Be_1 \Be_2 \Be_3$$ is interpreted as the $$R^3$$ pseudoscalar, the ordered product of the basis vectors $$\setlr{ \Be_i }$$, and $$F$$ represents a multivector with vector and bivector components. Unlike STA, where $$\BE$$ and $$\BB$$ (or $$\BH$$) are interpreted as spacetime bivectors, here they are plain old Euclidean vectors in $$R^3$$, entirely consistent with conventional Heaviside-Gibbs notation. Like the STA Maxwell’s equation, the paravector form is directly invertible using Green’s function techniques, without requiring the solution of equivalent second order potential problems, nor any requirement to take the derivatives of those potentials to determine the fields.

Lorentz transformation and manipulation of paravectors requires a variety of conjugation, real and imaginary operators, unlike STA where such operations have the same complex exponential structure as any 3D rotation expressed in geometric algebra. The advocates of the paravector representation argue that this provides an effective pedagogical bridge from Euclidean geometry to the Minkowski geometry of special relativity. This author agrees that this form of Maxwell’s equations is the natural choice for an introduction to electromagnetism using geometric algebra, but for relativistic operations, STA is a much more natural and less confusing choice.

## Results.

The end product of this project was a fairly small self contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on $$R^2$$ and $$R^3$$ (64 pages), an introduction to geometric calculus and multivector Green’s functions (64 pages), and applications to electromagnetism (75 pages). This report summarizes results from this book, omitting most derivations, and attempts to provide an overview that may be used as a road map for the book for further exploration. Many of the fundamental results of electromagnetism are derived directly from the geometric algebra form of Maxwell’s equation in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra STA and paravector literature. The book demonstrates that it is often simpler to place the electric and magnetic fields on an equal footing, deriving most results in terms of the total electromagnetic field $$F$$. Many examples of how to extract the conventional electric and magnetic fields from the geometric algebra results expressed in terms of $$F$$ are given as a bridge between the multivector and vector representations.

The aim of this work was to remove some of the prerequisite conceptual roadblocks that make electromagnetism using geometric algebra inaccessible. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. After derivation from the conventional Heaviside-Gibbs representation of Maxwell’s equations, the paravector representation of Maxwell’s equation is used as the starting point for all subsequent analysis. However, the paravector literature includes a confusing set of conjugation and real and imaginary selection operations that are tailored for relativistic applications. These are not necessary for low velocity applications, and have been avoided completely with the aim of making the subject more accessible to the engineer.

In the book an attempt has been made to introduce as little new notation as possible. For example, some authors use special notation for the bivector valued magnetic field $$I \BB$$, such as $$\boldsymbol{\mathcal{b}}$$ or $$\Bcap$$. Given the inconsistencies in the literature, $$I \BB$$ (or $$I \BH$$) will be used explicitly for the bivector (magnetic) components of the total electromagnetic field $$F$$. In the geometric algebra literature, there are conflicting conventions for the operator $$\spacegrad + (1/c) \PDi{t}{}$$, which we will call the spacetime gradient after the STA equivalent. For examples of different notations for the spacetime gradient, see [9], [1], and [15]. In the book the spacetime gradient is always written out in full to avoid picking from, or explaining, some of the subtleties of the competing notations.

Some researchers will find it distasteful that STA and relativity have been avoided completely in this book. Maxwell’s equations are inherently relativistic, and STA expresses the relativistic aspects of electromagnetism in an exceptional and beautiful fashion. However, a student of this book will have learned the geometric algebra and calculus prerequisites of STA. This makes the STA literature much more accessible, especially since most of the results in the book can be trivially translated into STA notation.

# References

[1] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer Science \& Business Media, 2004.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] Albert Einstein. Relativity: The special and the general theory, chapter Minkowski’s Four-Dimensional Space. Princeton University Press, 2015. URL http://www.gutenberg.org/ebooks/5001.

[4] H. Flanders. Differential Forms With Applications to the Physical Sciences. Courier Dover Publications, 1989.

[5] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[6] David Hestenes. Space-time algebra, volume 1. Springer, 1966.

[7] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[8] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[9] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics. World Scientific, 1988.

[10] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980. ISBN 0750627689.

[11] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.

[12] James Clerk Maxwell. A treatise on electricity and magnetism, third edition, volume I. Dover publications, 1891.

[13] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

[14] Chappell et al. A simplified approach to electromagnetism using geometric algebra. arXiv preprint arXiv:1010.4947, 2010.

[15] Chappell et al. Geometric algebra for electrical and electronic engineers. Proceedings of the IEEE, 102(9), 2014.


## Motivation.

The notation I prefer for relativistic geometric algebra uses Hestenes’ space time algebra (STA) [2], where the basis $$\setlr{ \gamma_\mu }$$ spans a four dimensional space, subject to Dirac matrix like relations $$\gamma_\mu \cdot \gamma_\nu = \eta_{\mu \nu}$$.

In this formalism, a four-vector is just the sum of products of coordinates and basis vectors, for example, using summation convention

\label{eqn:boostToParavector:160}
x = x^\mu \gamma_\mu.

The invariant for a four-vector in STA is just the square of that vector

\label{eqn:boostToParavector:180}
\begin{aligned}
x^2
&= (x^\mu \gamma_\mu) \cdot (x^\nu \gamma_\nu) \\
&= \sum_\mu (x^\mu)^2 (\gamma_\mu)^2 \\
&= (x^0)^2 - \sum_{k = 1}^3 (x^k)^2 \\
&= (ct)^2 - \Bx^2.
\end{aligned}

Recall that a four-vector is time-like if this squared-length is positive, spacelike if negative, and light-like when zero.
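As a quick aside (my own sanity check, not part of the material being summarized), this classification is easy to verify numerically with plain numpy and the $$(+,-,-,-)$$ metric:

```python
import numpy as np

# Minkowski metric with signature (+,-,-,-)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def square(x):
    """Invariant square x . x = (x^0)^2 - |x|^2."""
    return x @ eta @ x

timelike  = np.array([2.0, 1.0, 0.0, 0.0])   # square > 0
lightlike = np.array([1.0, 1.0, 0.0, 0.0])   # square == 0
spacelike = np.array([1.0, 2.0, 0.0, 0.0])   # square < 0

assert square(timelike) > 0
assert np.isclose(square(lightlike), 0.0)
assert square(spacelike) < 0
```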

Time-like projections are possible by dotting with the “lab-frame” time like basis vector $$\gamma_0$$

\label{eqn:boostToParavector:200}
ct = x \cdot \gamma_0 = x^0,

and space-like projections are wedges with the same

\label{eqn:boostToParavector:220}
\Bx = x \wedge \gamma_0 = x^k \sigma_k,

where sums over Latin indexes $$k \in \setlr{1,2,3}$$ are implied, and where the elements

\label{eqn:boostToParavector:80}
\sigma_k = \gamma_k \gamma_0,

which are bivectors in STA, can be viewed as a Euclidean vector basis $$\setlr{ \sigma_k }$$.

Rotations in STA involve exponentials of spacelike bivectors $$\theta = a_{ij} \gamma_i \wedge \gamma_j$$

\label{eqn:boostToParavector:240}
x’ = e^{ \theta/2 } x e^{ -\theta/2 }.

Boosts, on the other hand, have exactly the same form, but the exponentials are with respect to space-time bivectors arguments, such as $$\theta = a \wedge \gamma_0$$, where $$a$$ is any four-vector.

Observe that both boosts and rotations necessarily conserve the space-time length of a four vector (or any multivector with a scalar square).

\label{eqn:boostToParavector:260}
\begin{aligned}
\lr{x’}^2
&=
\lr{ e^{ \theta/2 } x e^{ -\theta/2 } } \lr{ e^{ \theta/2 } x e^{ -\theta/2 } } \\
&=
e^{ \theta/2 } x \lr{ e^{ -\theta/2 } e^{ \theta/2 } } x e^{ -\theta/2 } \\
&=
e^{ \theta/2 } x^2 e^{ -\theta/2 } \\
&=
x^2 e^{ \theta/2 } e^{ -\theta/2 } \\
&=
x^2.
\end{aligned}
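This invariance can be checked numerically by picking a concrete matrix representation for the $$\gamma_\mu$$ (here the Dirac representation) and exponentiating a spacetime bivector. This is a sketch of my own, not part of the derivation above; it uses numpy and scipy:

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Dirac representation: g0^2 = I, gk^2 = -I, all mutually anticommuting
g0 = block(I2, Z, Z, -I2)
g = [g0] + [block(Z, s, -s, Z) for s in (s1, s2, s3)]

xc = np.array([5.0, 1.0, 2.0, 3.0])          # coordinates x^mu
x = sum(c * gm for c, gm in zip(xc, g))

# x^2 is the scalar (x^0)^2 - |x|^2 times the identity
assert np.allclose(x @ x, (xc[0]**2 - xc[1]**2 - xc[2]**2 - xc[3]**2) * np.eye(4))

phi = 0.7                                     # rapidity
theta = phi * (g[1] @ g0)                     # spacetime bivector gamma_1 gamma_0
xp = expm(theta / 2) @ x @ expm(-theta / 2)   # boost as a sandwich product

# the boost preserves the invariant square
assert np.allclose(xp @ xp, x @ x)
```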

## Paravectors.

Paravectors, as used by Baylis [1], represent four-vectors using a Euclidean multivector basis $$\setlr{ \Be_\mu }$$, where $$\Be_0 = 1$$. The conversion between STA and paravector notation requires only multiplication with the timelike basis vector for the lab frame $$\gamma_0$$

\label{eqn:boostToParavector:40}
\begin{aligned}
X
&= x \gamma_0 \\
&= \lr{ x^0 \gamma_0 + x^k \gamma_k } \gamma_0 \\
&= x^0 + x^k \gamma_k \gamma_0 \\
&= x^0 + \Bx \\
&= c t + \Bx.
\end{aligned}

We need a different structure for the invariant length in paravector form. That invariant length is
\label{eqn:boostToParavector:280}
\begin{aligned}
x^2
&=
\lr{ \lr{ ct + \Bx } \gamma_0 }
\lr{ \lr{ ct + \Bx } \gamma_0 } \\
&=
\lr{ \lr{ ct + \Bx } \gamma_0 }
\lr{ \gamma_0 \lr{ ct - \Bx } } \\
&=
\lr{ ct + \Bx }
\lr{ ct - \Bx }.
\end{aligned}

Baylis introduces an involution operator $$\overline{{M}}$$, which toggles the sign of the vector and bivector grades of a multivector. For example, if $$M = a + \Ba + I \Bb + I c$$, where $$a,c \in \mathbb{R}$$ and $$\Ba, \Bb \in \mathbb{R}^3$$, is a multivector with all grades $$0,1,2,3$$, then the involution of $$M$$ is

\label{eqn:boostToParavector:300}
\overline{{M}} = a – \Ba – I \Bb + I c.

Utilizing this operator, the invariant length for a paravector $$X$$ is $$X \overline{{X}}$$.
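An aside of mine, not from Baylis: a paravector with scalar and vector grades maps naturally onto a Hermitian $$2 \times 2$$ matrix via the Pauli matrices, and the invariant $$X \overline{{X}}$$ is then just the determinant of that matrix. A quick numpy check:

```python
import numpy as np

# Pauli matrices as a representation of the Euclidean basis vectors
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

ct, bx = 5.0, np.array([1.0, 2.0, 3.0])
X    = ct * np.eye(2) + sum(c * sk for c, sk in zip(bx, s))
Xbar = ct * np.eye(2) - sum(c * sk for c, sk in zip(bx, s))  # involution flips the vector grade

XXbar = X @ Xbar
assert np.allclose(XXbar, (ct**2 - bx @ bx) * np.eye(2))

# equivalently, the invariant is the determinant of the 2x2 representation
assert np.isclose(np.linalg.det(X).real, ct**2 - bx @ bx)
```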

Let’s consider how boosts and rotations can be expressed in the paravector form. The half angle operator for a boost along the spacelike $$\Bv = v \vcap$$ direction has the form

\label{eqn:boostToParavector:120}
L = e^{ -\vcap \phi/2 },

so that the boosted paravector is

\label{eqn:boostToParavector:140}
\begin{aligned}
X’
&=
c t’ + \Bx’ \\
&=
x’ \gamma_0 \\
&=
L x L^\dagger \gamma_0 \\
&=
e^{ -\vcap \phi/2 } x^\mu \gamma_\mu
e^{ \vcap \phi/2 } \gamma_0 \\
&=
e^{ -\vcap \phi/2 } x^\mu \gamma_\mu \gamma_0
e^{ -\vcap \phi/2 } \\
&=
e^{ -\vcap \phi/2 } \lr{ x^0 + \Bx } e^{ -\vcap \phi/2 } \\
&=
L X L.
\end{aligned}

Because the involution operator toggles the sign of vector grades, it is easy to see that the required invariance is maintained

\label{eqn:boostToParavector:320}
\begin{aligned}
X’ \overline{{X’}}
&=
L X L
\overline{{ L X L }} \\
&=
L X L
\overline{{ L }} \overline{{ X }} \overline{{ L }} \\
&=
L X \overline{{ X }} \overline{{ L }} \\
&=
X \overline{{ X }} L \overline{{ L }} \\
&=
X \overline{{ X }}.
\end{aligned}

Let’s explicitly expand the transformation of \ref{eqn:boostToParavector:140}, so that we can relate the rapidity angle $$\phi$$ to the magnitude of the velocity. This is most easily done by splitting the spacelike component $$\Bx$$ of the four-vector into its projective and rejective components

\label{eqn:boostToParavector:340}
\begin{aligned}
\Bx
&= \vcap \vcap \Bx \\
&= \vcap \lr{ \vcap \cdot \Bx + \vcap \wedge \Bx } \\
&= \vcap \lr{ \vcap \cdot \Bx } + \vcap \lr{ \vcap \wedge \Bx } \\
&= \Bx_\parallel + \Bx_\perp.
\end{aligned}
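A small numeric check of this projective/rejective split (my own sketch, plain numpy):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 0.0, 4.0])
vhat = v / np.linalg.norm(v)

x_par  = vhat * (vhat @ x)   # projection of x onto vhat
x_perp = x - x_par           # rejection, the component perpendicular to vhat

assert np.allclose(x_par + x_perp, x)     # the split recovers x
assert np.isclose(x_perp @ vhat, 0.0)     # rejection is perpendicular to vhat
```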

The exponential

\label{eqn:boostToParavector:360}
e^{-\vcap \phi/2}
=
\cosh\lr{ \phi/2 }
- \vcap \sinh\lr{ \phi/2 },

commutes with any scalar grades and with $$\Bx_\parallel$$, but anticommutes with $$\Bx_\perp$$, so

\label{eqn:boostToParavector:380}
\begin{aligned}
X’
&=
\lr{ c t + \Bx_\parallel } e^{ -\vcap \phi/2 } e^{ -\vcap \phi/2 }
+
\Bx_\perp e^{ \vcap \phi/2 } e^{ -\vcap \phi/2 } \\
&=
\lr{ c t + \Bx_\parallel } e^{ -\vcap \phi }
+
\Bx_\perp \\
&=
\lr{ c t + \vcap \lr{ \vcap \cdot \Bx } } \lr{ \cosh \phi - \vcap \sinh \phi }
+
\Bx_\perp \\
&=
\Bx_\perp
+
\lr{ c t \cosh\phi - \lr{ \vcap \cdot \Bx} \sinh \phi }
+
\vcap \lr{ \lr{ \vcap \cdot \Bx } \cosh\phi - c t \sinh \phi } \\
&=
\Bx_\perp
+
\cosh\phi \lr{ c t - \lr{ \vcap \cdot \Bx} \tanh \phi }
+
\vcap \cosh\phi \lr{ \vcap \cdot \Bx - c t \tanh \phi }.
\end{aligned}

Employing the argument from [3],
we want $$\phi$$ defined so that this has the structure of a Galilean transformation in the limit $$\phi \rightarrow 0$$. This means we equate

\label{eqn:boostToParavector:400}
\tanh \phi = \frac{v}{c},

so that for small $$\phi$$

\label{eqn:boostToParavector:420}
\Bx’ = \Bx - \Bv t.

We can solve for $$\sinh^2 \phi$$ and $$\cosh^2 \phi$$ in terms of $$v/c$$ using

\label{eqn:boostToParavector:440}
\tanh^2 \phi
= \frac{v^2}{c^2}
=
\frac{ \sinh^2 \phi }{1 + \sinh^2 \phi}
=
\frac{ \cosh^2 \phi - 1 }{\cosh^2 \phi},

which, after picking the positive roots required for Galilean equivalence, gives
\label{eqn:boostToParavector:460}
\begin{aligned}
\cosh \phi &= \frac{1}{\sqrt{1 - (\Bv/c)^2}} \equiv \gamma \\
\sinh \phi &= \frac{v/c}{\sqrt{1 - (\Bv/c)^2}} = \gamma v/c.
\end{aligned}
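These rapidity relations are easy to confirm numerically (my own check):

```python
import math

beta = 0.6                      # v/c
phi = math.atanh(beta)          # rapidity, from tanh(phi) = v/c
gamma = 1 / math.sqrt(1 - beta**2)

assert math.isclose(math.cosh(phi), gamma)         # cosh(phi) = gamma
assert math.isclose(math.sinh(phi), gamma * beta)  # sinh(phi) = gamma v/c
```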

The Lorentz boost, written out in full, is

\label{eqn:boostToParavector:480}
ct’ + \Bx’
=
\Bx_\perp
+
\gamma \lr{ c t - \frac{\Bv}{c} \cdot \Bx }
+
\gamma \lr{ \vcap \lr{ \vcap \cdot \Bx } - \Bv t }
.
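As a final sanity check (my own aside, again using the Pauli matrix representation of paravectors), the sandwich product $$L X L$$ reproduces exactly this explicit $$\gamma$$ form of the boost:

```python
import numpy as np
from scipy.linalg import expm

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def pv(ct, bx):
    """Paravector ct + x as a 2x2 Hermitian matrix."""
    return ct * np.eye(2) + sum(c * sk for c, sk in zip(bx, s))

ct, bx = 5.0, np.array([1.0, 2.0, 3.0])
beta = 0.6                              # v/c
vhat = np.array([0.0, 0.0, 1.0])        # boost direction
phi = np.arctanh(beta)                  # rapidity
gamma = 1 / np.sqrt(1 - beta**2)

L = expm(-phi / 2 * sum(c * sk for c, sk in zip(vhat, s)))
Xp = L @ pv(ct, bx) @ L                 # X' = L X L

# compare with the explicit gamma form of the boost
ctp = gamma * (ct - beta * (vhat @ bx))
bxp = bx - vhat * (vhat @ bx) + gamma * (vhat * (vhat @ bx) - beta * ct * vhat)
assert np.allclose(Xp, pv(ctp, bxp))
```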

Authors like Chappell et al., who also use paravectors [4], specify the form of the Lorentz transformation for the electromagnetic field, but for that transformation reversion is used instead of involution.
I plan to explore that in a later post, starting from the STA formalism that I already understand, and see if I can make sense
of the underlying rationale.

# References

[1] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer Science \& Business Media, 2004.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] L. Landau and E. Lifshitz. The Classical theory of fields. Addison-Wesley, 1951.

[4] James M Chappell, Samuel P Drake, Cameron L Seidel, Lachlan J Gunn, and Derek Abbott. Geometric algebra for electrical and electronic engineers. Proceedings of the IEEE, 102(9), 2014.