## Notes.

Due to limitations in the MathJax-Latex package, all the oriented integrals in this blog post should be interpreted as having a clockwise orientation. [See the PDF version of this post for more sophisticated formatting.]

## Guts.

Given a two dimensional generating vector space, there are two instances of the fundamental theorem for multivector integration
\label{eqn:unpackingFundamentalTheorem:20}
\int_S F d\Bx \lrpartial G = \evalbar{F G}{\Delta S},

and
\label{eqn:unpackingFundamentalTheorem:40}
\int_S F d^2\Bx \lrpartial G = \oint_{\partial S} F d\Bx G.

The first case is trivial. Given a parameterized curve $$x = x(u)$$, it just states
\label{eqn:unpackingFundamentalTheorem:60}
\int_{u(0)}^{u(1)} du \PD{u}{}\lr{FG} = F(u(1))G(u(1)) – F(u(0))G(u(0)),

for all multivectors $$F, G$$, regardless of the signature of the underlying space.

The surface integral is more interesting. Let’s first look at the area element for this surface integral, which is
\label{eqn:unpackingFundamentalTheorem:80}
d^2 \Bx = d\Bx_u \wedge d \Bx_v.

Geometrically, this has the area of the parallelogram spanned by $$d\Bx_u$$ and $$d\Bx_v$$, but weighted by the pseudoscalar of the space. This is explored algebraically in the following problem and illustrated in fig. 1.

fig. 1. 2D vector space and area element.

## Problem: Expansion of 2D area bivector.

Let $$\setlr{e_1, e_2}$$ be an orthonormal basis for a two dimensional space, with reciprocal frame $$\setlr{e^1, e^2}$$. Expand the area bivector $$d^2 \Bx$$ in coordinates relating the bivector to the Jacobian and the pseudoscalar.

With parameterization $$x = x(u,v) = x^\alpha e_\alpha = x_\alpha e^\alpha$$, we have
\label{eqn:unpackingFundamentalTheorem:120}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x^\alpha} e_\alpha } \wedge
\lr{ \PD{v}{x^\beta} e_\beta }
=
\PD{u}{x^\alpha}
\PD{v}{x^\beta}
e_\alpha \wedge e_\beta
=
\PD{(u,v)}{(x^1,x^2)} e_1 e_2,

or
\label{eqn:unpackingFundamentalTheorem:160}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x_\alpha} e^\alpha } \wedge
\lr{ \PD{v}{x_\beta} e^\beta }
=
\PD{u}{x_\alpha}
\PD{v}{x_\beta}
e^\alpha \wedge e^\beta
=
\PD{(u,v)}{(x_1,x_2)} e^1 e^2.

The upper and lower index pseudoscalars are related by
\label{eqn:unpackingFundamentalTheorem:180}
e^1 e^2 e_1 e_2 =
-e^1 e^2 e_2 e_1 =
-1,

so with $$I = e_1 e_2$$,
\label{eqn:unpackingFundamentalTheorem:200}
e^1 e^2 = -I^{-1},

leaving us with
\label{eqn:unpackingFundamentalTheorem:140}
d^2 \Bx
= \PD{(u,v)}{(x^1,x^2)} du dv\, I
= -\PD{(u,v)}{(x_1,x_2)} du dv\, I^{-1}.

We see that the area bivector is proportional to either the upper or lower index Jacobian and to the pseudoscalar for the space.

We may write the fundamental theorem for a 2D space as
\label{eqn:unpackingFundamentalTheorem:680}
\int_S du dv \, \PD{(u,v)}{(x^1,x^2)} F I \lrgrad G = \oint_{\partial S} F d\Bx G,

where we have dispensed with the vector derivative and use the gradient instead, since they are identical in a two parameter two dimensional space. Of course, unless we are using $$x^1, x^2$$ as our parameterization, we still want the curvilinear representation of the gradient $$\grad = \Bx^u \PDi{u}{} + \Bx^v \PDi{v}{}$$.

## Problem: Standard basis expansion of fundamental surface relation.

For a parameterization $$x = x^1 e_1 + x^2 e_2$$, where $$\setlr{ e_1, e_2 }$$ is a standard (orthogonal) basis, expand the fundamental theorem for surface integrals for the single sided $$F = 1$$ case. Consider functions $$G$$ of each grade (scalar, vector, bivector.)

From \ref{eqn:unpackingFundamentalTheorem:140} we see that the fundamental theorem takes the form
\label{eqn:unpackingFundamentalTheorem:220}
\int_S dx^1 dx^2\, F I \lrgrad G = \oint_{\partial S} F d\Bx G.

In a Euclidean space, the operator $$I \lrgrad$$ is a $$\pi/2$$ rotation of the gradient, but it retains this rotation-like structure in any metric:
\label{eqn:unpackingFundamentalTheorem:240}
I \lrgrad
=
e_1 e_2 \lr{ e^1 \partial_1 + e^2 \partial_2 }
=
-e_2 \partial_1 + e_1 \partial_2.

• $$F = 1$$ and $$G \in \bigwedge^0$$ or $$G \in \bigwedge^2$$. For $$F = 1$$ and scalar or bivector $$G$$ we have
\label{eqn:unpackingFundamentalTheorem:260}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } G = \oint_{\partial S} d\Bx G,

where, for $$x^1 \in [x^1(0),x^1(1)]$$ and $$x^2 \in [x^2(0),x^2(1)]$$, the RHS written explicitly is
\label{eqn:unpackingFundamentalTheorem:280}
\oint_{\partial S} d\Bx G
=
\int dx^1 e_1
\lr{ G(x^1, x^2(1)) – G(x^1, x^2(0)) }
– dx^2 e_2
\lr{ G(x^1(1),x^2) – G(x^1(0), x^2) }.

This is sketched in fig. 2. Since a 2D bivector $$G$$ can be written as $$G = I g$$, where $$g$$ is a scalar, we may write the pseudoscalar case as
\label{eqn:unpackingFundamentalTheorem:300}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } g = \oint_{\partial S} d\Bx g,

after right multiplying both sides with $$I^{-1}$$. Algebraically the scalar and pseudoscalar cases can be thought of as identical scalar relationships.
• $$F = 1, G \in \bigwedge^1$$. For $$F = 1$$ and vector $$G$$ the 2D fundamental theorem for surfaces can be split into scalar
\label{eqn:unpackingFundamentalTheorem:320}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G = \oint_{\partial S} d\Bx \cdot G,

and bivector relations
\label{eqn:unpackingFundamentalTheorem:340}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G = \oint_{\partial S} d\Bx \wedge G.

To expand \ref{eqn:unpackingFundamentalTheorem:320}, let
\label{eqn:unpackingFundamentalTheorem:360}
G = g_1 e^1 + g_2 e^2,

for which
\label{eqn:unpackingFundamentalTheorem:380}
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G
=
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot
\lr{ g_1 e^1 + g_2 e^2 }
=
\partial_2 g_1 – \partial_1 g_2,

and
\label{eqn:unpackingFundamentalTheorem:400}
d\Bx \cdot G
=
\lr{ dx^1 e_1 – dx^2 e_2 } \cdot \lr{ g_1 e^1 + g_2 e^2 }
=
dx^1 g_1 – dx^2 g_2,

so \ref{eqn:unpackingFundamentalTheorem:320} expands to
\label{eqn:unpackingFundamentalTheorem:500}
\int_S dx^1 dx^2\, \lr{ \partial_2 g_1 – \partial_1 g_2 }
=
\int
\evalbar{dx^1 g_1}{\Delta x^2} – \evalbar{ dx^2 g_2 }{\Delta x^1}.

This coordinate expansion illustrates how the pseudoscalar nature of the area element results in a duality transformation, as we end up with a curl like operation on the LHS, despite the dot product nature of the decomposition that we used. That can also be seen directly for vector $$G$$, since
\label{eqn:unpackingFundamentalTheorem:560}
dA \lr{ I \grad } \cdot G
=
dA \gpgradezero{ I \grad G }
=
dA I \lr{ \grad \wedge G },

since the scalar selection of $$I \lr{ \grad \cdot G }$$ is zero. In the grade-2 relation \ref{eqn:unpackingFundamentalTheorem:340}, we expect a pseudoscalar cancellation on both sides, leaving a scalar (divergence-like) relationship. This time, we use upper index coordinates for the vector $$G$$, letting
\label{eqn:unpackingFundamentalTheorem:440}
G = g^1 e_1 + g^2 e_2,

so
\label{eqn:unpackingFundamentalTheorem:460}
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G
=
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge
\lr{ g^1 e_1 + g^2 e_2 }
=
e_1 e_2 \lr{ \partial_1 g^1 + \partial_2 g^2 },

and
\label{eqn:unpackingFundamentalTheorem:480}
d\Bx \wedge G
=
\lr{ dx^1 e_1 – dx^2 e_2 } \wedge
\lr{ g^1 e_1 + g^2 e_2 }
=
e_1 e_2 \lr{ dx^1 g^2 + dx^2 g^1 }.

So \ref{eqn:unpackingFundamentalTheorem:340}, after multiplication of both sides by $$I^{-1}$$, is
\label{eqn:unpackingFundamentalTheorem:520}
\int_S dx^1 dx^2\,
\lr{ \partial_1 g^1 + \partial_2 g^2 }
=
\int
\evalbar{dx^1 g^2}{\Delta x^2} + \evalbar{dx^2 g^1 }{\Delta x^1}.

As before, we’ve implicitly performed a duality transformation, and end up with a divergence operation. That can be seen directly without coordinate expansion, by rewriting the wedge as a grade two selection, and expanding the gradient action on the vector $$G$$, as follows
\label{eqn:unpackingFundamentalTheorem:580}
dA \lr{ I \grad } \wedge G
=
dA \gpgrade{ I \grad G }{2}
=
dA I \lr{ \grad \cdot G },

since $$I \lr{ \grad \wedge G }$$ has only a scalar component.

fig. 2. Line integral around rectangular boundary.

## Theorem 1.1: Green’s theorem [1].

Let $$S$$ be a Jordan region with a piecewise-smooth boundary $$C$$. If $$P, Q$$ are continuously differentiable on an open set that contains $$S$$, then
\begin{equation*}
\int dx dy \lr{ \PD{y}{P} – \PD{x}{Q} } = \oint P dx + Q dy.
\end{equation*}

## Problem: Relationship to Green’s theorem.

If the space is Euclidean, show that \ref{eqn:unpackingFundamentalTheorem:500} and \ref{eqn:unpackingFundamentalTheorem:520} are both instances of Green’s theorem with suitable choices of $$P$$ and $$Q$$.

I will omit the subtleties related to general regions and consider just the case of an infinitesimal square region.

### Start proof:

Let’s start with \ref{eqn:unpackingFundamentalTheorem:500}, with $$g_1 = P$$ and $$g_2 = Q$$, and $$x^1 = x, x^2 = y$$, for which the LHS is
\label{eqn:unpackingFundamentalTheorem:600}
\int dx dy \lr{ \PD{y}{P} – \PD{x}{Q} }.

On the RHS we have
\label{eqn:unpackingFundamentalTheorem:620}
\int \evalbar{dx P}{\Delta y} – \evalbar{ dy Q }{\Delta x}
=
\int dx \lr{ P(x, y_1) – P(x, y_0) } – \int dy \lr{ Q(x_1, y) – Q(x_0, y) }.

This pair of integrals is plotted in fig. 3, from which we see that \ref{eqn:unpackingFundamentalTheorem:620} can be expressed as the line integral, leaving us with
\label{eqn:unpackingFundamentalTheorem:640}
\int dx dy \lr{ \PD{y}{P} – \PD{x}{Q} }
=
\oint dx P + dy Q,

which is Green’s theorem over the infinitesimal square integration region.

For the equivalence of \ref{eqn:unpackingFundamentalTheorem:520} to Green’s theorem, let $$g^2 = P$$, and $$g^1 = -Q$$. Plugging into the LHS, we find the Green’s theorem integrand. On the RHS, the integrand expands to
\label{eqn:unpackingFundamentalTheorem:660}
\evalbar{dx g^2}{\Delta y} + \evalbar{dy g^1 }{\Delta x}
=
dx \lr{ P(x,y_1) – P(x, y_0)}
+
dy \lr{ -Q(x_1, y) + Q(x_0, y)},

which is exactly what we found in \ref{eqn:unpackingFundamentalTheorem:620}.

### End proof.

fig. 3. Path for Green’s theorem.

We may also relate multivector gradient integrals in 2D to the normal integral around the boundary of the bounding curve. That relationship is as follows.

## Theorem 1.2: 2D gradient integrals.

\begin{equation*}
\begin{aligned}
\int J du dv \rgrad G &= \oint I^{-1} d\Bx G = \int J \lr{ \Bx^v du + \Bx^u dv } G \\
\int J du dv F \lgrad &= \oint F I^{-1} d\Bx = \int J F \lr{ \Bx^v du + \Bx^u dv },
\end{aligned}
\end{equation*}
where $$J = \partial(x^1, x^2)/\partial(u,v)$$ is the Jacobian of the parameterization $$x = x(u,v)$$. In terms of the coordinates $$x^1, x^2$$, this reduces to
\begin{equation*}
\begin{aligned}
\int dx^1 dx^2 \rgrad G &= \oint I^{-1} d\Bx G = \int \lr{ e^2 dx^1 + e^1 dx^2 } G \\
\int dx^1 dx^2 F \lgrad &= \oint F I^{-1} d\Bx = \int F \lr{ e^2 dx^1 + e^1 dx^2 }.
\end{aligned}
\end{equation*}
The vector $$I^{-1} d\Bx$$ is orthogonal to the tangent vector along the boundary, and for Euclidean spaces it can be identified as the outwards normal.

### Start proof:

Respectively setting $$F = 1$$, and $$G = 1$$ in \ref{eqn:unpackingFundamentalTheorem:680}, we have
\label{eqn:unpackingFundamentalTheorem:940}
\int I^{-1} d^2 \Bx \rgrad G = \oint I^{-1} d\Bx G,

and
\label{eqn:unpackingFundamentalTheorem:960}
\int F d^2 \Bx \lgrad I^{-1} = \oint F d\Bx I^{-1}.

Starting with \ref{eqn:unpackingFundamentalTheorem:940} we find
\label{eqn:unpackingFundamentalTheorem:700}
\int I^{-1} J du dv I \rgrad G = \oint I^{-1} d\Bx G,

which gives $$\int dx^1 dx^2 \rgrad G = \oint I^{-1} d\Bx G$$, as desired. In terms of a parameterization $$x = x(u,v)$$, the pseudoscalar for the space is
\label{eqn:unpackingFundamentalTheorem:720}
I = \frac{\Bx_u \wedge \Bx_v}{J},

so
\label{eqn:unpackingFundamentalTheorem:740}
I^{-1} = \frac{J}{\Bx_u \wedge \Bx_v}.

Also note that $$\lr{\Bx_u \wedge \Bx_v}^{-1} = \Bx^v \wedge \Bx^u$$, so
\label{eqn:unpackingFundamentalTheorem:760}
I^{-1} = J \lr{ \Bx^v \wedge \Bx^u },

and
\label{eqn:unpackingFundamentalTheorem:780}
I^{-1} d\Bx
= I^{-1} \cdot d\Bx
= J \lr{ \Bx^v \wedge \Bx^u } \cdot \lr{ \Bx_u du – \Bx_v dv }
= J \lr{ \Bx^v du + \Bx^u dv },

so the right acting gradient integral is
\label{eqn:unpackingFundamentalTheorem:800}
\int J du dv \grad G =
\int
\evalbar{J \Bx^v G}{\Delta v} du + \evalbar{ J \Bx^u G }{\Delta u} dv,

which we write in abbreviated form as $$\int J \lr{ \Bx^v du + \Bx^u dv} G$$.

For the $$G = 1$$ case, from \ref{eqn:unpackingFundamentalTheorem:960} we find
\label{eqn:unpackingFundamentalTheorem:820}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1}.

However, in a 2D space, regardless of metric, we have $$I a = – a I$$ for any vector $$a$$ (i.e. $$\grad$$ or $$d\Bx$$), so we may commute the outer pseudoscalars in
\label{eqn:unpackingFundamentalTheorem:840}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1},

so
\label{eqn:unpackingFundamentalTheorem:850}
-\int J du dv F I I^{-1} \lgrad = -\oint F I^{-1} d\Bx.

After cancelling the negative sign on both sides, we have the claimed result.

To see that $$I a$$, for any vector $$a$$ is normal to $$a$$, we can compute the dot product
\label{eqn:unpackingFundamentalTheorem:860}
\lr{ I a } \cdot a
=
\gpgradezero{ I a a }
=
a^2 \gpgradezero{ I }
= 0,

since the scalar selection of a bivector is zero. Since $$I^{-1} = \pm I$$, the same argument shows that $$I^{-1} d\Bx$$ must be orthogonal to $$d\Bx$$.

### End proof.

Let’s look at the geometry of the normal $$I^{-1} d\Bx$$ in a couple of 2D vector spaces. We use an integration region of a unit square to simplify the boundary term expressions.

• Euclidean: With a parameterization $$x(u,v) = u\Be_1 + v \Be_2$$, and Euclidean basis vectors $$(\Be_1)^2 = (\Be_2)^2 = 1$$, the fundamental theorem integrated over the rectangle $$[x_0,x_1] \times [y_0,y_1]$$ is
\label{eqn:unpackingFundamentalTheorem:880}
\int dx dy \grad G =
\int
\Be_2 \lr{ G(x,y_1) – G(x,y_0) } dx +
\Be_1 \lr{ G(x_1,y) - G(x_0,y) } dy.

Each of the terms in the integrand above is illustrated in fig. 4, and we see that this is a path integral weighted by the outwards normal. (A small symbolic check of this Euclidean case is included after this list.)

fig. 4. Outwards oriented normal for Euclidean space.

• Spacetime: Let $$x(u,v) = u \gamma_0 + v \gamma_1$$, where $$(\gamma_0)^2 = -(\gamma_1)^2 = 1$$. With $$u = t, v = x$$, the gradient integral over a $$[t_0,t_1] \times [x_0,x_1]$$ of spacetime is
\label{eqn:unpackingFundamentalTheorem:900}
\begin{aligned}
\int dt dx \grad G
&=
\int
\gamma^1 dt \lr{ G(t, x_1) - G(t, x_0) }
+
\gamma^0 dx \lr{ G(t_1, x) - G(t_0, x) } \\
&=
\int
\gamma_1 dt \lr{ -G(t, x_1) + G(t, x_0) }
+
\gamma_0 dx \lr{ G(t_1, x) - G(t_0, x) }
.
\end{aligned}

With $$t$$ plotted along the horizontal axis, and $$x$$ along the vertical, each of the terms of this integrand is illustrated graphically in fig. 5. For this mixed signature space, there is no longer any good geometrical characterization of the normal.

fig. 5. Orientation of the boundary normal for a spacetime basis.

• Spacelike:
Let $$x(u,v) = u \gamma_1 + v \gamma_2$$, where $$(\gamma_1)^2 = (\gamma_2)^2 = -1$$. With $$u = x, v = y$$, the gradient integral over a $$[x_0,x_1] \times [y_0,y_1]$$ of this space is
\label{eqn:unpackingFundamentalTheorem:920}
\begin{aligned}
\int dx dy \grad G
&=
\int
\gamma^2 dx \lr{ G(x, y_1) - G(x, y_0) }
+
\gamma^1 dy \lr{ G(x_1, y) - G(x_0, y) } \\
&=
\int
\gamma_2 dx \lr{ -G(x, y_1) + G(x, y_0) }
+
\gamma_1 dy \lr{ -G(x_1, y) + G(x_0, y) }
.
\end{aligned}

Referring to fig. 6. where the elements of the integrand are illustrated, we see that the normal $$I^{-1} d\Bx$$ for the boundary of this region can be characterized as inwards.

fig. 6. Inwards oriented normal for a Dirac spacelike basis.

# References

[1] S.L. Salas and E. Hille. Calculus: one and several variables. Wiley New York, 1990.

## Fundamental theorem of geometric calculus for line integrals (relativistic.)

[This post is best viewed in PDF form, due to latex elements that I could not format with wordpress mathjax.]


## Motivation.

I’ve been slowly working my way towards a statement of the fundamental theorem of integral calculus, where the functions being integrated are elements of the Dirac algebra (space time multivectors in the geometric algebra parlance.)

This is interesting because we want to be able to do line, surface, 3-volume and 4-volume space time integrals. We have many $$\mathbb{R}^3$$ integral theorems
\label{eqn:fundamentalTheoremOfGC:40a}
\int_A^B d\Bl \cdot \spacegrad f = f(B) – f(A),

\label{eqn:fundamentalTheoremOfGC:60a}
\int_S dA\, \ncap \cross \spacegrad f = \int_{\partial S} d\Bx\, f,

\label{eqn:fundamentalTheoremOfGC:80a}
\int_S dA\, \ncap \cdot \lr{ \spacegrad \cross \Bf} = \int_{\partial S} d\Bx \cdot \Bf,

\label{eqn:fundamentalTheoremOfGC:100a}
\int_S dx dy \lr{ \PD{y}{P} – \PD{x}{Q} }
=
\int_{\partial S} P dx + Q dy,

\label{eqn:fundamentalTheoremOfGC:120a}
\int_V dV\, \spacegrad f = \int_{\partial V} dA\, \ncap f,

\label{eqn:fundamentalTheoremOfGC:140a}
\int_V dV\, \spacegrad \cross \Bf = \int_{\partial V} dA\, \ncap \cross \Bf,

\label{eqn:fundamentalTheoremOfGC:160a}
\int_V dV\, \spacegrad \cdot \Bf = \int_{\partial V} dA\, \ncap \cdot \Bf,

and want to know how to generalize these to four dimensions and also make sure that we are handling the relativistic mixed signature correctly. If our starting point was the mess of equations above, we’d be in trouble, since it is not obvious how these generalize. All the theorems with unit normals have to be handled completely differently in four dimensions since we don’t have a unique normal to any given spacetime plane.
What comes to our rescue is the Fundamental Theorem of Geometric Calculus (FTGC), which has the form
\label{eqn:fundamentalTheoremOfGC:40}
\int_S F d^n \Bx\, \lrpartial G = \oint_{\partial S} F d^{n-1} \Bx\, G,

where $$F,G$$ are multivector functions (i.e. sums of products of vectors.) We’ve seen ([2], [1]) that all the identities above are special cases of the fundamental theorem.

Do we need any special care to state the FTGC correctly for our relativistic case? It turns out that the answer is no! Tangent and reciprocal frame vectors do all the heavy lifting, and we can use the fundamental theorem as is, even in our mixed signature space. The only real change that we need to make is to use spacetime gradient and vector derivative operators instead of their spatial equivalents. We will see how this works below. Note that instead of starting with \ref{eqn:fundamentalTheoremOfGC:40} directly, I will attempt to build up to that point in a progressive fashion that hopefully does not require the reader to make too many unjustified mental leaps.

## Multivector line integrals.

We want to define multivector line integrals to start with. Recall that in $$\mathbb{R}^3$$ we would say that for scalar functions $$f$$, the integral
\label{eqn:fundamentalTheoremOfGC:180b}
\int d\Bx\, f = \int f d\Bx,

is a line integral. Also, for vector functions $$\Bf$$ we call
\label{eqn:fundamentalTheoremOfGC:200}
\int d\Bx \cdot \Bf = \inv{2} \int \lr{ d\Bx\, \Bf + \Bf d\Bx },

a line integral. In order to generalize line integrals to multivector functions, we will allow our multivector functions to be placed on either or both sides of the differential.

## Definition 1.1: Line integral.

Given a single variable parameterization $$x = x(u)$$, we write $$d^1\Bx = \Bx_u du$$, and call
\label{eqn:fundamentalTheoremOfGC:220a}
\int F d^1\Bx\, G,

a line integral, where $$F,G$$ are arbitrary multivector functions.

We must be careful not to reorder any of the factors in the integrand, since the differential may not commute with either $$F$$ or $$G$$. Here is a simple example where the integrand has a product of a vector and differential.

## Problem: Circular parameterization.

Let $$x(\theta) = \gamma_1 e^{-i\theta}$$ be a circular parameterization, where $$i = \gamma_1 \gamma_2$$ is the unit bivector for the $$x,y$$ plane. Compute the line integral
\label{eqn:fundamentalTheoremOfGC:100}
\int_0^{\pi/4} F(\theta)\, d^1 \Bx\, G(\theta),

where $$F(\theta) = \Bx^\theta + \gamma_3 + \gamma_1 \gamma_0$$ is a multivector valued function, and $$G(\theta) = \gamma_0$$ is vector valued.

The tangent vector for the curve is
\label{eqn:fundamentalTheoremOfGC:60}
\Bx_\theta
= -\gamma_1 \gamma_1 \gamma_2 e^{-i\theta}
= \gamma_2 e^{-i\theta},

with reciprocal vector $$\Bx^\theta = e^{i \theta} \gamma^2$$. The differential element is $$d^1 \Bx = \gamma_2 e^{-i\theta} d\theta$$, so the integral is
\label{eqn:fundamentalTheoremOfGC:80}
\begin{aligned}
\int_0^{\pi/4} \lr{ \Bx^\theta + \gamma_3 + \gamma_1 \gamma_0 } d^1 \Bx\, \gamma_0
&=
\int_0^{\pi/4} \lr{ e^{i\theta} \gamma^2 + \gamma_3 + \gamma_1 \gamma_0 } \gamma_2 e^{-i\theta} d\theta\, \gamma_0 \\
&=
\frac{\pi}{4} \gamma_0 + \lr{ \gamma_{32} + \gamma_{102} } \inv{-i} \lr{ e^{-i\pi/4} – 1 } \gamma_0 \\
&=
\frac{\pi}{4} \gamma_0 + \lr{ \gamma_{32} + \gamma_{102} } \lr{ \inv{\sqrt{2}} \gamma_{120} \lr{ 1 - \gamma_{12} } - \gamma_{120} } \\
&=
\frac{\pi}{4} \gamma_0 + \inv{\sqrt{2}} \lr{ \gamma_{310} + 1 } \lr{ 1 - \gamma_{12} } - \lr{ \gamma_{310} + 1 }.
\end{aligned}

Observe how care is required not to reorder any terms. This particular end result is a multivector with scalar, vector, bivector, and trivector grades, but no pseudoscalar component. The grades in the end result depend on both the function in the integrand and on the path. For example, had we integrated all the way around the circle, the end result would have been the vector $$2 \pi \gamma_0$$ (i.e. a $$\gamma_0$$ weighted unit circle circumference), as all the other grades would have been killed by the complex exponential integrated over a full period.

## Problem: Line integral for boosted time direction vector.

Let $$x = e^{\vcap \alpha/2} \gamma_0 e^{-\vcap \alpha/2}$$ represent the spacetime curve of all the boosts of $$\gamma_0$$ along a specific velocity direction vector, where $$\vcap = (v \wedge \gamma_0)/\Norm{v \wedge \gamma_0}$$ is a unit spatial bivector for any constant vector $$v$$. Compute the line integral
\label{eqn:fundamentalTheoremOfGC:240}
\int x\, d^1 \Bx.

Observe that $$\vcap$$ and $$\gamma_0$$ anticommute, so we may write our boost as a one sided exponential
\label{eqn:fundamentalTheoremOfGC:260}
x(\alpha) = \gamma_0 e^{-\vcap \alpha} = e^{\vcap \alpha} \gamma_0 = \lr{ \cosh\alpha + \vcap \sinh\alpha } \gamma_0.

The tangent vector is just
\label{eqn:fundamentalTheoremOfGC:280}
\Bx_\alpha = \PD{\alpha}{x} = e^{\vcap\alpha} \vcap \gamma_0.

Let’s get a bit of intuition about the nature of this vector. Its square is
\label{eqn:fundamentalTheoremOfGC:300}
\begin{aligned}
\Bx_\alpha^2
&=
e^{\vcap\alpha} \vcap \gamma_0
e^{\vcap\alpha} \vcap \gamma_0 \\
&=
-e^{\vcap\alpha} \vcap e^{-\vcap\alpha} \vcap (\gamma_0)^2 \\
&=
-1,
\end{aligned}

so we see that the tangent vector is a spacelike unit vector. As the vector representing points on the curve is necessarily timelike (due to Lorentz invariance), these two must be orthogonal at all points. Let’s confirm this algebraically
\label{eqn:fundamentalTheoremOfGC:320}
\begin{aligned}
x \cdot \Bx_\alpha
&=
\gpgradezero{ e^{\vcap \alpha} \gamma_0 e^{\vcap \alpha} \vcap \gamma_0 } \\
&=
\gpgradezero{ e^{-\vcap \alpha} e^{\vcap \alpha} \vcap (\gamma_0)^2 } \\
&=
\gpgradezero{ \vcap } \\
&= 0.
\end{aligned}

Here we used $$e^{\vcap \alpha} \gamma_0 = \gamma_0 e^{-\vcap \alpha}$$, and $$\gpgradezero{A B} = \gpgradezero{B A}$$. Geometrically, we have the curious fact that the direction vectors to points on the curve are perpendicular (with respect to our relativistic dot product) to the tangent vectors on the curve, as illustrated in fig. 1.

fig. 1. Tangent perpendicularity in mixed metric.

### Perfect differentials.

Having seen a couple examples of multivector line integrals, let’s now move on to figure out the structure of a line integral that has a “perfect” differential integrand. We can take a hint from the $$\mathbb{R}^3$$ vector result that we already know, namely
\label{eqn:fundamentalTheoremOfGC:120}
\int_A^B d\Bl \cdot \spacegrad f = f(B) – f(A).

It seems reasonable to guess that the relativistic generalization of this is
\label{eqn:fundamentalTheoremOfGC:140}
\int_A^B dx \cdot \grad f = f(B) – f(A).

Let’s check that, by expanding in coordinates
\label{eqn:fundamentalTheoremOfGC:160}
\begin{aligned}
\int_A^B dx \cdot \grad f
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \partial_\mu f \\
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \PD{x^\mu}{f} \\
&=
\int_A^B d\tau \frac{df}{d\tau} \\
&=
f(B) – f(A).
\end{aligned}

If we drop the dot product, will we have such a nice result? Let’s see:
\label{eqn:fundamentalTheoremOfGC:180}
\begin{aligned}
\int_A^B dx\, \grad f
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \gamma_\mu \gamma^\nu \partial_\nu f \\
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \PD{x^\mu}{f}
+
\int_A^B
d\tau
\sum_{\mu \ne \nu} \gamma_\mu \gamma^\nu
\frac{dx^\mu}{d\tau} \PD{x^\nu}{f}.
\end{aligned}

The scalar component of this integrand is a perfect differential, but the bivector part of the integrand is a complete mess that we have no hope of integrating in general. It happens that if we consider one of the simplest parameterization examples, we can get a strong hint of how to generalize the differential operator to one that ends up providing a perfect differential. In particular, let’s integrate over a linear constant path, such as $$x(\tau) = \tau \gamma_0$$. For this path, we have
\label{eqn:fundamentalTheoremOfGC:200a}
\begin{aligned}
\int_A^B dx\, \grad f
&=
\int_A^B \gamma_0 d\tau \lr{
\gamma^0 \partial_0 +
\gamma^1 \partial_1 +
\gamma^2 \partial_2 +
\gamma^3 \partial_3 } f \\
&=
\int_A^B d\tau \lr{
\PD{\tau}{f} +
\gamma_0 \gamma^1 \PD{x^1}{f} +
\gamma_0 \gamma^2 \PD{x^2}{f} +
\gamma_0 \gamma^3 \PD{x^3}{f}
}.
\end{aligned}

Just because the path does not have any $$x^1, x^2, x^3$$ component dependencies does not mean that these last three partials are necessarily zero. For example $$f = f(x(\tau)) = \lr{ x^0 }^2 \gamma_0 + x^1 \gamma_1$$ will have a non-zero contribution from the $$\partial_1$$ operator. In that particular case, we can easily integrate $$f$$, but we have to know the specifics of the function to do the integral. However, if we had a differential operator that did not include any component off the integration path, we would have a perfect differential. That is, if we were to replace the gradient with the projection of the gradient onto the tangent space, we would have a perfect differential. We see that the dot product in \ref{eqn:fundamentalTheoremOfGC:140} has the same effect, as it rejects any component of the gradient that does not lie on the tangent space.

## Definition 1.2: Vector derivative.

Given a spacetime manifold parameterized by $$x = x(u^0, \cdots u^{N-1})$$, with tangent vectors $$\Bx_\mu = \PDi{u^\mu}{x}$$, and reciprocal vectors $$\Bx^\mu \in \textrm{Span}\setlr{\Bx_\nu}$$, such that $$\Bx^\mu \cdot \Bx_\nu = {\delta^\mu}_\nu$$, the vector derivative is defined as
\label{eqn:fundamentalTheoremOfGC:240a}
\partial = \sum_{\mu = 0}^{N-1} \Bx^\mu \PD{u^\mu}{}.

Observe that if this is a full parameterization of the space ($$N = 4$$), then the vector derivative is identical to the gradient. The vector derivative is the projection of the gradient onto the tangent space at the point of evaluation. Furthermore, we designate $$\lrpartial$$ as the vector derivative allowed to act bidirectionally, as follows
\label{eqn:fundamentalTheoremOfGC:260a}
R \lrpartial S
=
R \Bx^\mu \PD{u^\mu}{S}
+
\PD{u^\mu}{R} \Bx^\mu S,

where $$R, S$$ are multivectors, and summation convention is implied. In this bidirectional action,
the vector factors of the vector derivative must stay in place (as they do not necessarily commute with $$R,S$$), but the derivative operators apply in a chain rule like fashion to both functions.

Noting that $$\Bx_u \cdot \grad = \Bx_u \cdot \partial$$, we may rewrite the scalar line integral identity \ref{eqn:fundamentalTheoremOfGC:140} as
\label{eqn:fundamentalTheoremOfGC:220}
\int_A^B dx \cdot \partial f = f(B) – f(A).

However, as our example hinted at, the fundamental theorem for line integrals has a multivector generalization that does not rely on a dot product to do the tangent space filtering, and is more powerful. That generalization has the following form.

## Theorem 1.1: Fundamental theorem for line integrals.

Given multivector functions $$F, G$$, and a single parameter curve $$x(u)$$ with line element $$d^1 \Bx = \Bx_u du$$, then
\label{eqn:fundamentalTheoremOfGC:280a}
\int_A^B F d^1\Bx \lrpartial G = F(B) G(B) – F(A) G(A).

### Start proof:

Writing out the integrand explicitly, we find
\label{eqn:fundamentalTheoremOfGC:340}
\int_A^B F d^1\Bx \lrpartial G
=
\int_A^B \lr{
\PD{\alpha}{F} d\alpha\, \Bx_\alpha \Bx^\alpha G
+
F d\alpha\, \Bx_\alpha \Bx^\alpha \PD{\alpha}{G }
}

However for a single parameter curve, we have $$\Bx^\alpha = 1/\Bx_\alpha$$, so we are left with
\label{eqn:fundamentalTheoremOfGC:360}
\begin{aligned}
\int_A^B F d^1\Bx \lrpartial G
&=
\int_A^B d\alpha\, \PD{\alpha}{(F G)} \\
&=
\evalbar{F G}{B}
-
\evalbar{F G}{A}.
\end{aligned}

## More to come.

In the next installment we will explore surface integrals in spacetime, and the generalization of the fundamental theorem to multivector space time integrals.

# References

[1] Peeter Joot. Geometric Algebra for Electrical Engineers. Kindle Direct Publishing, 2019.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

## Maxwell’s equation Lagrangian (geometric algebra and tensor formalism)

Maxwell’s equation using geometric algebra Lagrangian.

## Motivation.

In my classical mechanics notes, I’ve got computations of Maxwell’s equation (singular in its geometric algebra form) from a Lagrangian in various ways (using tensor, scalar and multivector Lagrangians), but all of these seem more convoluted than they should be.
Here we do this from scratch, starting with the action principle for field variables, covering:

• Derivation of the relativistic form of the Euler-Lagrange field equations from the covariant form of the action,
• Derivation of Maxwell’s equation (in its STA form) from the Maxwell Lagrangian,
• Relationship of the STA Maxwell Lagrangian to the tensor equivalent,
• Relationship of the STA form of Maxwell’s equation to its tensor equivalents,
• Relationship of the STA Maxwell’s equation to its conventional Gibbs form.
• Show that we may use a multivector valued Lagrangian with all of $$F^2$$, not just the scalar part.

It is assumed that the reader is thoroughly familiar with the STA formalism, and if that is not the case, there is no better reference than [1].

## Theorem 1.1: Relativistic Euler-Lagrange field equations.

Let $$\phi \rightarrow \phi + \delta \phi$$ be any variation of the field, such that the variation
$$\delta \phi$$ vanishes at the boundaries of the action integral
\label{eqn:maxwells:2120}
S = \int d^4 x \LL(\phi, \partial_\nu \phi).

The extreme value of the action is found when the Euler-Lagrange equations
\label{eqn:maxwells:2140}
0 = \PD{\phi}{\LL} – \partial_\nu \PD{(\partial_\nu \phi)}{\LL},

are satisfied. For a Lagrangian with multiple field variables, there will be one such equation for each field.

### Start proof:

To ease the visual burden, designate the variation of the field by $$\delta \phi = \epsilon$$, and perform a first order expansion of the varied Lagrangian
\label{eqn:maxwells:20}
\begin{aligned}
\LL
&\rightarrow
\LL(\phi + \epsilon, \partial_\nu (\phi + \epsilon)) \\
&=
\LL(\phi, \partial_\nu \phi)
+
\PD{\phi}{\LL} \epsilon +
\PD{(\partial_\nu \phi)}{\LL} \partial_\nu \epsilon.
\end{aligned}

The variation of the Lagrangian is
\label{eqn:maxwells:40}
\begin{aligned}
\delta \LL
&=
\PD{\phi}{\LL} \epsilon +
\PD{(\partial_\nu \phi)}{\LL} \partial_\nu \epsilon \\
&=
\PD{\phi}{\LL} \epsilon +
\partial_\nu \lr{ \PD{(\partial_\nu \phi)}{\LL} \epsilon }
-
\epsilon \partial_\nu \PD{(\partial_\nu \phi)}{\LL},
\end{aligned}

which we may plug into the action integral to find
\label{eqn:maxwells:60}
\delta S
=
\int d^4 x \epsilon \lr{
\PD{\phi}{\LL}
-
\partial_\nu \PD{(\partial_\nu \phi)}{\LL}
}
+
\int d^4 x
\partial_\nu \lr{ \PD{(\partial_\nu \phi)}{\LL} \epsilon }.

The last integral can be evaluated along the $$dx^\nu$$ direction, leaving
\label{eqn:maxwells:80}
\int d^3 x
\evalbar{ \PD{(\partial_\nu \phi)}{\LL} \epsilon }{\Delta x^\nu},

where $$d^3 x = dx^\alpha dx^\beta dx^\gamma$$ is the product of differentials that does not include $$dx^\nu$$. By construction, $$\epsilon$$ vanishes on the boundary of the action integral so \ref{eqn:maxwells:80} is zero. The action takes its extreme value when
\label{eqn:maxwells:100}
0 = \delta S
=
\int d^4 x \epsilon \lr{
\PD{\phi}{\LL}
-
\partial_\nu \PD{(\partial_\nu \phi)}{\LL}
}.

The proof is complete after noting that this must hold for all variations of the field $$\epsilon$$, which means that we must have
\label{eqn:maxwells:120}
0 =
\PD{\phi}{\LL}
-
\partial_\nu \PD{(\partial_\nu \phi)}{\LL}.

### End proof.

Armed with the Euler-Lagrange equations, we can apply them to the Maxwell’s equation Lagrangian, which we will claim has the following form.

## Theorem 1.2: Maxwell’s equation Lagrangian.

Application of the Euler-Lagrange equations to the Lagrangian
\label{eqn:maxwells:2160}
\LL = – \frac{\epsilon_0 c}{2} F \cdot F + J \cdot A,

where $$F = \grad \wedge A$$, yields the vector portion of Maxwell’s equation
\label{eqn:maxwells:2180}
\grad \cdot F = \inv{\epsilon_0 c} J,

which implies
\label{eqn:maxwells:2200}
\grad F = \inv{\epsilon_0 c} J.

This is Maxwell’s equation.

### Start proof:

We wish to apply all of the Euler-Lagrange equations simultaneously (i.e. once for each of the four $$A_\mu$$ components of the potential), and cast it into four-vector form
\label{eqn:maxwells:140}
0 = \gamma_\nu \lr{ \PD{A_\nu}{} – \partial_\mu \PD{(\partial_\mu A_\nu)}{} } \LL.

Since our Lagrangian splits nicely into kinetic and interaction terms, this gives us
\label{eqn:maxwells:160}
0 = \gamma_\nu \lr{ \PD{A_\nu}{(A \cdot J)} + \frac{\epsilon_0 c}{2} \partial_\mu \PD{(\partial_\mu A_\nu)}{ (F \cdot F)} }.

The interaction term above is just
\label{eqn:maxwells:180}
\gamma_\nu \PD{A_\nu}{(A \cdot J)}
=
\gamma_\nu \PD{A_\nu}{(A_\mu J^\mu)}
=
\gamma_\nu J^\nu
=
J,

but the kinetic term takes a bit more work. Let’s start with evaluating
\label{eqn:maxwells:200}
\begin{aligned}
\PD{(\partial_\mu A_\nu)}{ (F \cdot F)}
&=
\PD{(\partial_\mu A_\nu)}{ F } \cdot F
+
F \cdot \PD{(\partial_\mu A_\nu)}{ F } \\
&=
2 \PD{(\partial_\mu A_\nu)}{ F } \cdot F \\
&=
2 \PD{(\partial_\mu A_\nu)}{ (\partial_\alpha A_\beta) } \lr{ \gamma^\alpha \wedge \gamma^\beta } \cdot F \\
&=
2 \lr{ \gamma^\mu \wedge \gamma^\nu } \cdot F.
\end{aligned}

We hit this with the $$\mu$$-partial and expand as a scalar selection to find
\label{eqn:maxwells:220}
\begin{aligned}
\partial_\mu \PD{(\partial_\mu A_\nu)}{ (F \cdot F)}
&=
2 \lr{ \partial_\mu \gamma^\mu \wedge \gamma^\nu } \cdot F \\
&=
– 2 (\gamma^\nu \wedge \grad) \cdot F \\
&=
- 2 \gpgradezero{ \lr{ \gamma^\nu \wedge \grad } F } \\
&=
- 2 \gpgradezero{ \gamma^\nu \grad F } \\
&=
– 2 \gamma^\nu \cdot \lr{ \grad \cdot F }.
\end{aligned}

Putting all the pieces together yields
\label{eqn:maxwells:240}
0
= J – \epsilon_0 c \gamma_\nu \lr{ \gamma^\nu \cdot \lr{ \grad \cdot F } }
= J – \epsilon_0 c \lr{ \grad \cdot F },

but
\label{eqn:maxwells:260}
\begin{aligned}
\grad F
&=
\grad \cdot F + \grad \wedge F \\
&=
\grad \cdot F + \grad \wedge \lr{ \grad \wedge A } \\
&=
\grad \cdot F,
\end{aligned}

so the multivector field equations for this Lagrangian are
\label{eqn:maxwells:280}
\grad F = \inv{\epsilon_0 c} J,

as claimed.

## Problem: Correspondence with tensor formalism.

Cast the Lagrangian of \ref{eqn:maxwells:2160} into the conventional tensor form
\label{eqn:maxwells:300}
\LL = \frac{\epsilon_0 c}{4} F_{\mu\nu} F^{\mu\nu} + A^\mu J_\mu.

Also show that the four-vector component of Maxwell’s equation $$\grad \cdot F = J/(\epsilon_0 c)$$ is equivalent to the conventional tensor form of the Gauss-Ampere law
\label{eqn:maxwells:320}
\partial_\mu F^{\mu\nu} = \inv{\epsilon_0 c} J^\nu,

where $$F^{\mu\nu} = \partial^\mu A^\nu – \partial^\nu A^\mu$$ as usual. Also show that the trivector component of Maxwell’s equation $$\grad \wedge F = 0$$ is equivalent to the tensor form of the Gauss-Faraday law
\label{eqn:maxwells:340}
\partial_\alpha \lr{ \epsilon^{\alpha \beta \mu \nu} F_{\mu\nu} } = 0.

To show the Lagrangian correspondence we must expand $$F \cdot F$$ in coordinates
\label{eqn:maxwells:360}
\begin{aligned}
F \cdot F
&=
( \grad \wedge A ) \cdot
( \grad \wedge A ) \\
&=
\lr{ (\gamma^\mu \partial_\mu) \wedge (\gamma^\nu A_\nu) }
\cdot
\lr{ (\gamma^\alpha \partial_\alpha) \wedge (\gamma^\beta A_\beta) } \\
&=
\lr{ \gamma^\mu \wedge \gamma^\nu } \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta }
(\partial_\mu A_\nu )
(\partial^\alpha A^\beta ) \\
&=
\lr{
{\delta^\mu}_\beta
{\delta^\nu}_\alpha
-
{\delta^\mu}_\alpha
{\delta^\nu}_\beta
}
(\partial_\mu A_\nu )
(\partial^\alpha A^\beta ) \\
&=
– \partial_\mu A_\nu \lr{
\partial^\mu A^\nu
-
\partial^\nu A^\mu
} \\
&=
– \partial_\mu A_\nu F^{\mu\nu} \\
&=
– \inv{2} \lr{
\partial_\mu A_\nu F^{\mu\nu}
+
\partial_\nu A_\mu F^{\nu\mu}
} \\
&=
– \inv{2} \lr{
\partial_\mu A_\nu
-
\partial_\nu A_\mu
}
F^{\mu\nu} \\
&=
-
\inv{2}
F_{\mu\nu}
F^{\mu\nu}.
\end{aligned}

With a substitution of this and $$A \cdot J = A_\mu J^\mu$$ back into the Lagrangian, we recover the tensor form of the Lagrangian.

To recover the tensor form of Maxwell’s equation, we first split it into vector and trivector parts
\label{eqn:maxwells:1580}
\grad F = \grad \cdot F + \grad \wedge F = \inv{\epsilon_0 c} J.
Now the vector component may be expanded in coordinates by dotting both sides with $$\gamma^\nu$$ to find
\label{eqn:maxwells:1600}
\inv{\epsilon_0 c} \gamma^\nu \cdot J = \inv{\epsilon_0 c} J^\nu,

and
\label{eqn:maxwells:1620}
\begin{aligned}
\gamma^\nu \cdot \lr{ \grad \cdot F }
&=
\partial_\mu \gamma^\nu \cdot \lr{ \gamma^\mu \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta } \partial^\alpha A^\beta } \\
&=
\lr{
{\delta^\mu}_\alpha
{\delta^\nu}_\beta
-
{\delta^\nu}_\alpha
{\delta^\mu}_\beta
}
\partial_\mu
\partial^\alpha A^\beta \\
&=
\partial_\mu
\lr{
\partial^\mu A^\nu
-
\partial^\nu A^\mu
} \\
&=
\partial_\mu F^{\mu\nu}.
\end{aligned}

Equating \ref{eqn:maxwells:1600} and \ref{eqn:maxwells:1620} finishes the first part of the job. For the trivector component, we have
\label{eqn:maxwells:1640}
0
= (\gamma^\mu \partial_\mu) \wedge \lr{ \gamma^\alpha \wedge \gamma^\beta } \partial_\alpha A_\beta
= \inv{2} (\gamma^\mu \partial_\mu) \wedge \lr{ \gamma^\alpha \wedge \gamma^\beta } F_{\alpha \beta}.

Wedging with $$\gamma^\tau$$ and then multiplying by $$-2 I$$ we find
\label{eqn:maxwells:1660}
0 = – \lr{ \gamma^\mu \wedge \gamma^\alpha \wedge \gamma^\beta \wedge \gamma^\tau } I \partial_\mu F_{\alpha \beta},

but
\label{eqn:maxwells:1680}
\gamma^\mu \wedge \gamma^\alpha \wedge \gamma^\beta \wedge \gamma^\tau = -I \epsilon^{\mu \alpha \beta \tau},

which leaves us with
\label{eqn:maxwells:1700}
\epsilon^{\mu \alpha \beta \tau} \partial_\mu F_{\alpha \beta} = 0,

as expected.

## Problem: Correspondence of tensor and Gibbs forms of Maxwell’s equations.

Given the identifications

\label{eqn:lorentzForceCovariant:1500}
F^{k0} = E^k,

and
\label{eqn:lorentzForceCovariant:1520}
F^{rs} = -\epsilon^{rst} B^t,

and
\label{eqn:maxwells:1560}
J^\mu = \lr{ c \rho, \BJ },

the reader should satisfy themselves that the traditional Gibbs form of Maxwell’s equations can be recovered from \ref{eqn:maxwells:320}.

The reader is referred to Exercise 3.4 “Electrodynamics, variational principle.” from [2].

## Problem: Correspondence with grad and curl form of Maxwell’s equations.

With $$J = c \rho \gamma_0 + J^k \gamma_k$$ and $$F = \BE + I c \BB$$, show that Maxwell’s equation, as stated in \ref{eqn:maxwells:2200}, expands to the conventional div and curl expressions for Maxwell’s equations.

To obtain Maxwell’s equations in their traditional vector forms, we pre-multiply both sides with $$\gamma_0$$
\label{eqn:maxwells:1720}
\gamma_0 \grad F = \inv{\epsilon_0 c} \gamma_0 J,

and then select each grade separately. First observe that the RHS above has scalar and bivector components, as
\label{eqn:maxwells:1740}
\gamma_0 J
=
c \rho + J^k \gamma_0 \gamma_k.

In terms of the spatial bivector basis $$\Be_k = \gamma_k \gamma_0$$, the RHS of \ref{eqn:maxwells:1720} is
\label{eqn:maxwells:1760}
\gamma_0 \frac{J}{\epsilon_0 c} = \frac{\rho}{\epsilon_0} – \mu_0 c \BJ.

For the LHS, first note that
\label{eqn:maxwells:1780}
\begin{aligned}
\gamma_0 \grad
&=
\gamma_0
\lr{
\gamma_0 \partial^0 +
\gamma_k \partial^k
} \\
&=
\partial_0 - \gamma_0 \gamma_k \partial_k \\
&=
\inv{c} \PD{t}{} + \spacegrad.
\end{aligned}

We can express all of the LHS of \ref{eqn:maxwells:1720} in the bivector spatial basis, so that Maxwell’s equation in multivector form is
\label{eqn:maxwells:1800}
\lr{ \inv{c} \PD{t}{} + \spacegrad } \lr{ \BE + I c \BB } = \frac{\rho}{\epsilon_0} – \mu_0 c \BJ.

Selecting the scalar, vector, bivector, and trivector grades of both sides (in the spatial basis) gives the following set of respective equations
\label{eqn:maxwells:1840}
\spacegrad \cdot \BE = \frac{\rho}{\epsilon_0}

\label{eqn:maxwells:1860}
\inv{c} \partial_t \BE + I c \spacegrad \wedge \BB = - \mu_0 c \BJ

\label{eqn:maxwells:1880}
\spacegrad \wedge \BE + I \partial_t \BB = 0

\label{eqn:maxwells:1900}
I c \spacegrad \cdot \BB = 0,

which, after some duality transformations (and noting that $$\mu_0 \epsilon_0 c^2 = 1$$), we can rewrite as
\label{eqn:maxwells:1940}
\spacegrad \cdot \BE = \frac{\rho}{\epsilon_0}

\label{eqn:maxwells:1960}
\spacegrad \cross \BB - \mu_0 \epsilon_0 \PD{t}{\BE} = \mu_0 \BJ

\label{eqn:maxwells:1980}
\spacegrad \cross \BE + \PD{t}{\BB} = 0

\label{eqn:maxwells:2000}
\spacegrad \cdot \BB = 0,
which are Maxwell’s equations in their traditional form.

## Problem: Alternative multivector Lagrangian.

Show that a scalar+pseudoscalar Lagrangian of the following form
\label{eqn:maxwells:2220}
\LL = – \frac{\epsilon_0 c}{2} F^2 + J \cdot A,

which uses the full multivector square $$F^2$$, rather than just the scalar selection $$F \cdot F$$ used in \ref{eqn:maxwells:2160}, also represents Maxwell’s equation. Discuss the scalar and pseudoscalar components of $$F^2$$, and show why the pseudoscalar inclusion is irrelevant.

The quantity $$F^2 = F \cdot F + F \wedge F$$ has both scalar and pseudoscalar
components. Note that unlike vectors, the wedge of a 4D bivector with itself need not be zero (example: $$\gamma_0 \gamma_1 + \gamma_2 \gamma_3$$ wedged with itself).
We can see this multivector nature nicely by expansion in terms of the electric and magnetic fields
\label{eqn:maxwells:2020}
\begin{aligned}
F^2
&= \lr{ \BE + I c \BB }^2 \\
&= \BE^2 – c^2 \BB^2 + I c \lr{ \BE \BB + \BB \BE } \\
&= \BE^2 – c^2 \BB^2 + 2 I c \BE \cdot \BB.
\end{aligned}

Both the scalar and pseudoscalar parts of $$F^2$$ are Lorentz invariant, a requirement of our Lagrangian, but most Maxwell equation Lagrangians only include the scalar $$\BE^2 – c^2 \BB^2$$ component of the field square. If we allow the Lagrangian to be multivector valued, and evaluate the Euler-Lagrange equations, we quickly find the same results
\label{eqn:maxwells:2040}
\begin{aligned}
0
&= \gamma_\nu \lr{ \PD{A_\nu}{} – \partial_\mu \PD{(\partial_\mu A_\nu)}{} } \LL \\
&= \gamma_\nu \lr{ J^\nu + \frac{\epsilon_0 c}{2} \partial_\mu
\lr{
(\gamma^\mu \wedge \gamma^\nu) F
+
F (\gamma^\mu \wedge \gamma^\nu)
}
}.
\end{aligned}

Here some steps are skipped, building on our previous scalar Euler-Lagrange evaluation experience. We have a symmetric product of two bivectors, which we can express as a 0,4 grade selection, since
\label{eqn:maxwells:2060}
\gpgrade{ X F }{0,4} = \inv{2} \lr{ X F + F X },

for any two bivectors $$X, F$$. This leaves
\label{eqn:maxwells:2080}
\begin{aligned}
0
&= J + \epsilon_0 c \gamma_\nu \gpgrade{ (\grad \wedge \gamma^\nu) F }{0,4} \\
&= J + \epsilon_0 c \gamma_\nu \gpgrade{ -\gamma^\nu \grad F + (\gamma^\nu \cdot \grad) F }{0,4} \\
&= J + \epsilon_0 c \gamma_\nu \gpgrade{ -\gamma^\nu \grad F }{0,4} \\
&= J – \epsilon_0 c \gamma_\nu
\lr{
\gamma^\nu \cdot \lr{ \grad \cdot F } + \gamma^\nu \wedge \grad \wedge F
}.
\end{aligned}

However, since $$\grad \wedge F = \grad \wedge \grad \wedge A = 0$$, we see that there is no contribution from the $$F \wedge F$$ pseudoscalar component of the Lagrangian, and we are left with
\label{eqn:maxwells:2100}
\begin{aligned}
0
&= J – \epsilon_0 c (\grad \cdot F) \\
&= J – \epsilon_0 c \grad F,
\end{aligned}

which is Maxwell’s equation, as before.

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] Peeter Joot. Quantum field theory. Kindle Direct Publishing, 2018.

## Potential solutions to the static Maxwell’s equation using geometric algebra

When neither the electromagnetic field strength $$F = \BE + I \eta \BH$$, nor current $$J = \eta (c \rho – \BJ) + I(c\rho_m – \BM)$$ is a function of time, then the geometric algebra form of Maxwell’s equations is the first order multivector (gradient) equation
\label{eqn:staticPotentials:20}
\spacegrad F = J.
While direct solutions to this equation are possible with the multivector Green’s function for the gradient
\label{eqn:staticPotentials:40}
G(\Bx, \Bx’) = \inv{4\pi} \frac{\Bx – \Bx’}{\Norm{\Bx – \Bx’}^3 },

the aim in this post is to explore second order (potential) solutions in a geometric algebra context. Can we assume that it is possible to find a multivector potential $$A$$ for which
\label{eqn:staticPotentials:60}
F = \spacegrad A,
is a solution to the Maxwell statics equation? If such a solution exists, then Maxwell’s equation is simply
\label{eqn:staticPotentials:80}
\spacegrad^2 A = J,
which can be easily solved using the scalar Green’s function for the Laplacian
\label{eqn:staticPotentials:240}
G(\Bx, \Bx’) = -\inv{\Norm{\Bx – \Bx’} },

a beastie that may be easier to convolve than the vector valued Green’s function for the gradient.

It is immediately clear that some restrictions must be imposed on the multivector potential $$A$$. In particular, since the field $$F$$ has only vector and bivector grades, this gradient must have no scalar, nor pseudoscalar grades. That is
\label{eqn:staticPotentials:100}
\gpgrade{\spacegrad A}{0,3} = 0.
This constraint on the potential can be avoided if a grade selection operation is built directly into the assumed potential solution, requiring that the field is given by
\label{eqn:staticPotentials:120}
F = \gpgrade{\spacegrad A}{1,2}.
However, after imposing such a constraint, Maxwell’s equation has a much less friendly form
\label{eqn:staticPotentials:140}
\spacegrad \gpgrade{\spacegrad A}{1,2} = J.
Luckily, it is possible to introduce a transformation of potentials, called a gauge transformation, that eliminates the ugly grade selection term, and allows the potential equation to be expressed as a plain old Laplacian. We do so by assuming first that it is possible to find a solution of the Laplacian equation that has the desired grade restrictions. That is
\label{eqn:staticPotentials:160}
\begin{aligned}
\spacegrad^2 A' &= J \\
\gpgrade{\spacegrad A'}{0,3} &= 0,
\end{aligned}

for which $$F = \spacegrad A’$$ is a grade 1,2 solution to $$\spacegrad F = J$$. Suppose that $$A$$ is any formal solution, free of any grade restrictions, to $$\spacegrad^2 A = J$$, and $$F = \gpgrade{\spacegrad A}{1,2}$$. Can we find a function $$\tilde{A}$$ for which $$A = A’ + \tilde{A}$$?

Maxwell’s equation in terms of $$A$$ is
\label{eqn:staticPotentials:180}
\begin{aligned}
J
&= \spacegrad \gpgrade{\spacegrad A}{1,2} \\
&= \spacegrad \lr{ \spacegrad A - \gpgrade{\spacegrad A}{0,3} } \\
&= \spacegrad^2 \lr{ A' + \tilde{A} } - \spacegrad \gpgrade{\spacegrad A}{0,3} \\
&= J + \spacegrad^2 \tilde{A} - \spacegrad \gpgrade{\spacegrad A}{0,3},
\end{aligned}

or
\label{eqn:staticPotentials:200}
\spacegrad^2 \tilde{A} = \spacegrad \gpgrade{\spacegrad A}{0,3}.
This non-homogeneous Laplacian equation can be solved as is for $$\tilde{A}$$ using the Green’s function for the Laplacian. Alternatively, we may also solve the equivalent first order system using the Green’s function for the gradient.
\label{eqn:staticPotentials:220}
\spacegrad \tilde{A} = \gpgrade{\spacegrad A}{0,3}.
Clearly $$\tilde{A}$$ is not unique, as we can add any function $$\psi$$ satisfying the homogeneous Laplacian equation $$\spacegrad^2 \psi = 0$$.

In summary, if $$A$$ is any multivector solution to $$\spacegrad^2 A = J$$, that is
\label{eqn:staticPotentials:260}
A(\Bx)
= \int dV’ G(\Bx, \Bx’) J(\Bx’)
= -\int dV’ \frac{J(\Bx’)}{\Norm{\Bx – \Bx’} },

then $$F = \spacegrad A’$$ is a solution to Maxwell’s equation, where $$A’ = A – \tilde{A}$$, and $$\tilde{A}$$ is a solution to the non-homogeneous Laplacian equation or the non-homogeneous gradient equation above.

### Integral form of the gauge transformation.

Additional insight is possible by considering the gauge transformation in integral form. Suppose that
\label{eqn:staticPotentials:280}
A(\Bx) = -\int_V dV’ \frac{J(\Bx’)}{\Norm{\Bx – \Bx’} } – \tilde{A}(\Bx),

is a solution of $$\spacegrad^2 A = J$$, where $$\tilde{A}$$ is a multivector solution to the homogeneous Laplacian equation $$\spacegrad^2 \tilde{A} = 0$$. Let’s look at the constraints on $$\tilde{A}$$ that must be imposed for $$F = \spacegrad A$$ to be a valid (i.e. grade 1,2) solution of Maxwell’s equation.
\label{eqn:staticPotentials:300}
\begin{aligned}
F
&=
-\int_V dV' \lr{ \spacegrad \inv{\Norm{\Bx - \Bx'} } } J(\Bx') - \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \lr{ \spacegrad' \inv{\Norm{\Bx - \Bx'} } } J(\Bx') - \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \spacegrad' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} } - \spacegrad \tilde{A}(\Bx) \\
&=
\int_{\partial V} dA' \ncap' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} } - \spacegrad \tilde{A}(\Bx).
\end{aligned}

where $$\ncap' = (\Bx' - \Bx)/\Norm{\Bx' - \Bx}$$, and the fundamental theorem of geometric calculus has been used to transform the gradient volume integral into an integral over the bounding surface. Operating on Maxwell’s equation with the gradient gives $$\spacegrad^2 F = \spacegrad J$$, which has only grades 1,2 on the left hand side, meaning that $$J$$ is constrained in a way that requires $$\spacegrad J$$ to have only grades 1,2. This means that $$F$$ has grades 1,2 if
\label{eqn:staticPotentials:320}
\spacegrad \tilde{A} = \int_{\partial V} dA' \frac{ \gpgrade{\ncap' J(\Bx')}{0,3} }{\Norm{\Bx - \Bx'} }.

The grade 0,3 components of the product $$\ncap J$$ expand to
\label{eqn:staticPotentials:340}
\begin{aligned}
\gpgrade{\ncap J}{0,3}
&=
\gpgrade{ \ncap \lr{ \eta \lr{ c \rho - \BJ } + I \lr{ c \rho_m - \BM } } }{0,3} \\
&=
\ncap \cdot (-\eta \BJ) + \gpgradethree{\ncap (-I \BM)} \\
&=
- \eta \ncap \cdot \BJ - I \ncap \cdot \BM,
\end{aligned}

so
\label{eqn:staticPotentials:360}
\spacegrad \tilde{A}
=
-\int_{\partial V} dA' \frac{ \eta \ncap' \cdot \BJ(\Bx') + I \ncap' \cdot \BM(\Bx')}{\Norm{\Bx - \Bx'} }.

Observe that if there is no flux of current density $$\BJ$$ and (fictitious) magnetic current density $$\BM$$ through the surface, then $$F = \spacegrad A$$ is a solution to Maxwell’s equation without any gauge transformation. Alternatively $$F = \spacegrad A$$ is also a solution if $$\lim_{\Bx’ \rightarrow \infty} \BJ(\Bx’)/\Norm{\Bx – \Bx’} = \lim_{\Bx’ \rightarrow \infty} \BM(\Bx’)/\Norm{\Bx – \Bx’} = 0$$ and the bounding volume is taken to infinity.


## Generalizing Ampere’s law using geometric algebra.

The question I’d like to explore in this post is how Ampere’s law, the relationship between the line integral of the magnetic field and the enclosed current
\label{eqn:flux:20}
\oint_{\partial A} d\Bx \cdot \BH = -\int_A dA\, \ncap \cdot \BJ,

generalizes to geometric algebra where Maxwell’s equations for a statics configuration (all time derivatives zero) is
\label{eqn:flux:40}
\spacegrad F = J,
where the multivector fields and currents are
\label{eqn:flux:60}
\begin{aligned}
F &= \BE + I \eta \BH \\
J &= \eta \lr{ c \rho – \BJ } + I \lr{ c \rho_\txtm – \BM }.
\end{aligned}

Here the (fictitious) magnetic charge and current densities that can be useful in antenna theory have been included in the multivector current for generality.

My presumption is that it should be possible to utilize the fundamental theorem of geometric calculus for relating the integral over an oriented surface to an integral over its boundary, applied directly to Maxwell’s equation. That integral theorem has the form
\label{eqn:flux:80}
\int_A d^2 \Bx \boldpartial F = \oint_{\partial A} d\Bx F,

where $$d^2 \Bx = d\Ba \wedge d\Bb$$ is a two parameter bivector valued surface element, and $$\boldpartial$$ is the vector derivative, the projection of the gradient onto the tangent space. I won’t try to explain all of geometric calculus here, and refer the interested reader to [1], which is an excellent reference on geometric calculus and integration theory.

The gotcha is that we actually want a surface integral with $$\spacegrad F$$. We can split the gradient into the vector derivative and a normal component
\label{eqn:flux:160}
\spacegrad = \boldpartial + \ncap \lr{ \ncap \cdot \spacegrad },
so
\label{eqn:flux:100}
\int_A d^2 \Bx \spacegrad F
=
\int_A d^2 \Bx \boldpartial F
+
\int_A d^2 \Bx \ncap \lr{ \ncap \cdot \spacegrad } F,

so
\label{eqn:flux:120}
\begin{aligned}
\oint_{\partial A} d\Bx F
&=
\int_A d^2 \Bx \lr{ J – \ncap \lr{ \ncap \cdot \spacegrad } F } \\
&=
\int_A dA \lr{ I \ncap J – \lr{ \ncap \cdot \spacegrad } I F }
\end{aligned}

This is not nearly as nice as the Ampere’s law relationship \ref{eqn:flux:20}, where the current and field were cleanly separated. The $$d\Bx F$$ product has all possible grades, as does the $$d^2 \Bx J$$ product (in general). Observe however, that the normal term on the right has only grades 1,2, so we can split our line integral relations into pairs with and without grade 1,2 components
\label{eqn:flux:140}
\begin{aligned}
\oint_{\partial A} \gpgrade{ d\Bx F }{0,3}
&=
\int_A dA \gpgrade{ I \ncap J }{0,3} \\
\oint_{\partial A} \gpgrade{ d\Bx F }{1,2}
&=
\int_A dA \lr{ \gpgrade{ I \ncap J }{1,2} - \lr{ \ncap \cdot \spacegrad } I F }.
\end{aligned}

Let’s expand these explicitly in terms of the component fields and densities to check against the conventional relationships, and see if things look right. The line integrand expands to
\label{eqn:flux:180}
\begin{aligned}
d\Bx F
&=
d\Bx \lr{ \BE + I \eta \BH }
=
d\Bx \cdot \BE + I \eta d\Bx \cdot \BH
+
d\Bx \wedge \BE + I \eta d\Bx \wedge \BH \\
&=
d\Bx \cdot \BE
– \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
+ I \eta (d\Bx \cdot \BH),
\end{aligned}

the current integrand expands to
\label{eqn:flux:200}
\begin{aligned}
I \ncap J
&=
I \ncap
\lr{
\frac{\rho}{\epsilon} – \eta \BJ + I \lr{ c \rho_\txtm – \BM }
} \\
&=
\ncap I \frac{\rho}{\epsilon} – \eta \ncap I \BJ – \ncap c \rho_\txtm + \ncap \BM \\
&=
\ncap \cdot \BM
+ \eta (\ncap \cross \BJ)
– \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
– \eta I (\ncap \cdot \BJ).
\end{aligned}

We are left with
\label{eqn:flux:220}
\begin{aligned}
\oint_{\partial A}
\lr{
d\Bx \cdot \BE + I \eta (d\Bx \cdot \BH)
}
&=
\int_A dA
\lr{
\ncap \cdot \BM – \eta I (\ncap \cdot \BJ)
} \\
\oint_{\partial A}
\lr{
– \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
}
&=
\int_A dA
\lr{
\eta (\ncap \cross \BJ)
– \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
-\PD{n}{} \lr{ I \BE – \eta \BH }
}.
\end{aligned}

This is a crazy mess of dots, crosses, fields and sources. We can split it into one equation for each grade, which will probably look a little more regular. That is
\label{eqn:flux:240}
\begin{aligned}
\oint_{\partial A} d\Bx \cdot \BE &= \int_A dA \ncap \cdot \BM \\
\oint_{\partial A} d\Bx \cross \BH
&=
\int_A dA
\lr{
– \ncap \cross \BJ
+ \frac{ \ncap \rho_\txtm }{\mu}
– \PD{n}{\BH}
} \\
\oint_{\partial A} d\Bx \cross \BE &=
\int_A dA
\lr{
\ncap \cross \BM
+ \frac{\ncap \rho}{\epsilon}
– \PD{n}{\BE}
} \\
\oint_{\partial A} d\Bx \cdot \BH &= -\int_A dA \ncap \cdot \BJ \\
\end{aligned}

The first and last equations could have been obtained much more easily from Maxwell’s equations in their conventional form. The two cross product equations with the normal derivatives are not familiar to me, even without the fictitious magnetic sources. It is somewhat remarkable that so much can be packed into one multivector equation:
\label{eqn:flux:260}
\oint_{\partial A} d\Bx F
=
I \int_A dA \lr{ \ncap J – \PD{n}{F} }.

# References

[1] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

## Solving Maxwell’s equation in freespace: Multivector plane wave representation

The geometric algebra form of Maxwell’s equations in free space (or source free isotropic media with group velocity $$c$$) is the multivector equation
\label{eqn:planewavesMultivector:20}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F(\Bx, t) = 0.

Here $$F = \BE + I c \BB$$ is a multivector with grades 1 and 2 (vector and bivector components). The velocity $$c$$ is called the group velocity since $$F$$, or its components $$\BE, \BB$$ satisfy the wave equation, which can be seen by pre-multiplying with $$\spacegrad - (1/c)\PDi{t}{}$$ to find
\label{eqn:planewavesMultivector:n}
\lr{ \spacegrad^2 – \inv{c^2}\PDSq{t}{} } F(\Bx, t) = 0.

Let’s look at the frequency domain solution of this equation with a presumed phasor representation
\label{eqn:planewavesMultivector:40}
F(\Bx, t) = \textrm{Re} \lr{ F(\Bk) e^{-j \Bk \cdot \Bx + j \omega t} },

where $$j$$ is a scalar imaginary, not necessarily with any geometric interpretation.

Maxwell’s equation reduces to just
\label{eqn:planewavesMultivector:60}
0
=
-j \lr{ \Bk – \frac{\omega}{c} } F(\Bk).

If $$F(\Bk)$$ has a left multivector factor
\label{eqn:planewavesMultivector:80}
F(\Bk) =
\lr{ \Bk + \frac{\omega}{c} } \tilde{F},

where $$\tilde{F}$$ is a multivector to be determined, then
\label{eqn:planewavesMultivector:100}
\begin{aligned}
\lr{ \Bk – \frac{\omega}{c} }
F(\Bk)
&=
\lr{ \Bk – \frac{\omega}{c} }
\lr{ \Bk + \frac{\omega}{c} } \tilde{F} \\
&=
\lr{ \Bk^2 – \lr{\frac{\omega}{c}}^2 } \tilde{F},
\end{aligned}

which is zero if $$\Norm{\Bk} = \ifrac{\omega}{c}$$.

Let $$\kcap = \ifrac{\Bk}{\Norm{\Bk}}$$, and $$\Norm{\Bk} \tilde{F} = F_0 + F_1 + F_2 + F_3$$, where $$F_0, F_1, F_2,$$ and $$F_3$$ respectively have grades 0,1,2,3. Then
\label{eqn:planewavesMultivector:120}
\begin{aligned}
F(\Bk)
&= \lr{ 1 + \kcap } \lr{ F_0 + F_1 + F_2 + F_3 } \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap F_1 + \kcap F_2 + \kcap F_3 \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap \cdot F_1 + \kcap \cdot F_2 + \kcap \cdot F_3
+
\kcap \wedge F_1 + \kcap \wedge F_2 \\
&=
\lr{
F_0 + \kcap \cdot F_1
}
+
\lr{
F_1 + \kcap F_0 + \kcap \cdot F_2
}
+
\lr{
F_2 + \kcap \cdot F_3 + \kcap \wedge F_1
}
+
\lr{
F_3 + \kcap \wedge F_2
}.
\end{aligned}

Since the field $$F$$ has only vector and bivector grades, the grades zero and three components of the expansion above must be zero, or
\label{eqn:planewavesMultivector:140}
\begin{aligned}
F_0 &= – \kcap \cdot F_1 \\
F_3 &= – \kcap \wedge F_2,
\end{aligned}

so
\label{eqn:planewavesMultivector:160}
\begin{aligned}
F(\Bk)
&=
\lr{ 1 + \kcap } \lr{
F_1 – \kcap \cdot F_1 +
F_2 – \kcap \wedge F_2
} \\
&=
\lr{ 1 + \kcap } \lr{
F_1 – \kcap F_1 + \kcap \wedge F_1 +
F_2 – \kcap F_2 + \kcap \cdot F_2
}.
\end{aligned}

The multivector $$1 + \kcap$$ has the projective property of gobbling any leading factors of $$\kcap$$
\label{eqn:planewavesMultivector:180}
\begin{aligned}
(1 + \kcap)\kcap
&= \kcap + 1 \\
&= 1 + \kcap,
\end{aligned}

so for $$F_i \in F_1, F_2$$
\label{eqn:planewavesMultivector:200}
(1 + \kcap) ( F_i – \kcap F_i )
=
(1 + \kcap) ( F_i – F_i )
= 0,

leaving
\label{eqn:planewavesMultivector:220}
F(\Bk)
=
\lr{ 1 + \kcap } \lr{
\kcap \cdot F_2 +
\kcap \wedge F_1
}.

For $$\kcap \cdot F_2$$ to be non-zero, $$F_2$$ must be a bivector that lies in a plane containing $$\kcap$$, and $$\kcap \cdot F_2$$ is a vector in that plane that is perpendicular to $$\kcap$$. On the other hand, $$\kcap \wedge F_1$$ is non-zero only if $$F_1$$ has a non-zero component that does not lie along the $$\kcap$$ direction, but $$\kcap \wedge F_1$$, like $$F_2$$, describes a plane containing $$\kcap$$. This means that having both bivector and vector free variables $$F_2$$ and $$F_1$$ provides more degrees of freedom than required. For example, if $$\BE$$ is any vector, and $$F_2 = \kcap \wedge \BE$$, then
\label{eqn:planewavesMultivector:240}
\begin{aligned}
\lr{ 1 + \kcap }
\kcap \cdot F_2
&=
\lr{ 1 + \kcap }
\kcap \cdot \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\lr{
\BE
-
\kcap \lr{ \kcap \cdot \BE }
} \\
&=
\lr{ 1 + \kcap }
\kcap \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\kcap \wedge \BE,
\end{aligned}

which has the form $$\lr{ 1 + \kcap } \lr{ \kcap \wedge F_1 }$$, so the solution of the free space Maxwell’s equation can be written
\label{eqn:planewavesMultivector:260}
\boxed{
F(\Bx, t)
=
\textrm{Re} \lr{
\lr{ 1 + \kcap }
\BE\,
e^{-j \Bk \cdot \Bx + j \omega t}
}
,
}

where $$\BE$$ is any vector for which $$\BE \cdot \Bk = 0$$.
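
Since $$\BE \cdot \kcap = 0$$, the vector grade of the boxed solution is just $$\BE$$, and the bivector grade gives $$I c \BB = \kcap \wedge \BE = I \lr{ \kcap \cross \BE }$$, or $$c \BB = \kcap \cross \BE$$. As a quick sanity check, here is a sympy sketch (assuming, for concreteness only, $$\kcap = \Be_3$$ and $$\BE$$ along $$\Be_1$$) that these extracted fields satisfy the conventional source-free Maxwell’s equations:

```python
import sympy as sp

# Sketch: E = Re( E_0 e1 exp(-j k z + j w t) ), c B = khat x E, with khat = e3.
x, y, z, t, w, c, E0 = sp.symbols('x y z t omega c E_0', real=True, positive=True)
k = w / c                                       # dispersion relation |k| = omega/c
phase = -k * z + w * t

E = sp.Matrix([E0 * sp.cos(phase), 0, 0])       # E . khat = 0
B = sp.Matrix([0, E0 * sp.cos(phase) / c, 0])   # c B = khat x E

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

div = lambda F: sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

assert all(sp.simplify(e) == 0 for e in curl(E) + sp.diff(B, t))          # Faraday
assert all(sp.simplify(e) == 0 for e in curl(B) - sp.diff(E, t) / c**2)   # Ampere (vacuum)
assert sp.simplify(div(E)) == 0 and sp.simplify(div(B)) == 0
```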

## The many faces of Maxwell’s equations


The following is a possible introduction for a report for a UofT ECE2500 project associated with writing a small book: “Geometric Algebra for Electrical Engineers”. Given the space constraints for the report I may have to drop much of this, but some of the history of Maxwell’s equations may be of interest, so I thought I’d share before the knife hits the latex.

## Goals of the project.

This project had a few goals

1. Perform a literature review of applications of geometric algebra to the study of electromagnetism. Geometric algebra will be defined precisely later, along with bivector, trivector, multivector and other geometric algebra generalizations of the vector.
2. Identify the subset of the literature that had direct relevance to electrical engineering.
3. Create a complete, and as compact as possible, introduction of the prerequisites required to apply geometric algebra to problems in electromagnetism.

## The many faces of electromagnetism.

There is a long history of attempts to find more elegant, compact and powerful ways of encoding and working with Maxwell’s equations.

### Maxwell’s formulation.

Maxwell [12] employs some differential operators, including the gradient $$\spacegrad$$ and Laplacian $$\spacegrad^2$$, but the divergence and gradient are always written out in full using coordinates, usually in integral form. Reading the original Treatise highlights how important notation can be, as most modern engineering or physics practitioners would find his original work incomprehensible. A nice translation from Maxwell’s notation to the modern Heaviside-Gibbs notation can be found in [16].

### Quaternion representation.

In his second volume [11] the equations of electromagnetism are stated using quaternions (an extension of complex numbers to three dimensions), but quaternions are not used in the work. The modern form of Maxwell’s equations in quaternion form is
\label{eqn:ece2500report:220}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } – \inv{2} \symmetric{ \frac{d}{dr} } { c \BD } &= c \rho + \BJ \\
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \BB } &= 0,
\end{aligned}

where $$\ifrac{d}{dr} = (1/c) \PDi{t}{} + \Bi \PDi{x}{} + \Bj \PDi{y}{} + \Bk \PDi{z}{}$$ [7] acts bidirectionally, and vectors are expressed in terms of the quaternion basis $$\setlr{ \Bi, \Bj, \Bk }$$, subject to the relations $$\Bi^2 = \Bj^2 = \Bk^2 = -1, \quad \Bi \Bj = \Bk = -\Bj \Bi, \quad \Bj \Bk = \Bi = -\Bk \Bj, \quad \Bk \Bi = \Bj = -\Bi \Bk$$.
There is clearly more structure to these equations than the traditional Heaviside-Gibbs representation that we are used to, which says something for the quaternion model. However, this structure requires notation that is arguably non-intuitive. The fact that the quaternion representation was abandoned long ago by most electromagnetism researchers and engineers supports such an argument.

### Minkowski tensor representation.

Minkowski introduced the concept of a complex time coordinate $$x_4 = i c t$$ for special relativity [3]. Such a four-vector representation can be used for many of the relativistic four-vector pairs of electromagnetism, such as the current $$(c\rho, \BJ)$$, and the energy-momentum Lorentz force relations, and can also be applied to Maxwell’s equations
\label{eqn:ece2500report:140}
\begin{aligned}
\sum_{\mu= 1}^4 \PD{x_\mu}{F_{\mu\nu}} &= - 4 \pi j_\nu \\
\sum_{\lambda\rho\mu=1}^4
\epsilon_{\mu\nu\lambda\rho}
\PD{x_\mu}{F_{\lambda\rho}} &= 0,
\end{aligned}

where
\label{eqn:ece2500report:160}
F
=
\begin{bmatrix}
0 & B_z & -B_y & -i E_x \\
-B_z & 0 & B_x & -i E_y \\
B_y & -B_x & 0 & -i E_z \\
i E_x & i E_y & i E_z & 0
\end{bmatrix}.

A complex rank-2 antisymmetric tensor contains all six of the field components. Transformation of coordinates for this representation of the field may be performed exactly like the transformation for any other four-vector. This formalism is described nicely in [13], where the structure used is motivated by transformational requirements. One of the costs of this tensor representation is that we lose the clear separation of the electric and magnetic fields that we are so comfortable with. Another cost is that we lose the distinction between space and time, as separate space and time coordinates have to be projected out of a larger four vector. Both of these costs have theoretical benefits in some applications, particularly for high energy problems where relativity is important, but for the low velocity problems near and dear to electrical engineers who can freely treat space and time independently, the advantages are not clear.

### Modern tensor formalism.

The Minkowski representation fell out of favour in theoretical physics, which settled on a real tensor representation that utilizes an explicit metric tensor $$g_{\mu\nu} = \pm \textrm{diag}(1, -1, -1, -1)$$ to represent the complex inner products of special relativity. In this tensor formalism, Maxwell’s equations are also reduced to a set of two tensor relationships ([10], [8], [5]).
\label{eqn:ece2500report:40}
\begin{aligned}
\partial_\mu F^{\mu \nu} &= \mu_0 J^\nu \\
\epsilon^{\alpha \beta \mu \nu} \partial_\beta F_{\mu \nu} &= 0,
\end{aligned}

where $$F^{\mu\nu}$$ is a \textit{real} rank-2 antisymmetric tensor that contains all six electric and magnetic field components, and $$J^\nu$$ is a four-vector current containing both charge density and current density components. \Cref{eqn:ece2500report:40} provides a unified and simpler theoretical framework for electromagnetism, and is used extensively in physics but not engineering.

### Differential forms.

It has been argued that a differential forms treatment of electromagnetism provides some of the same theoretical advantages as the tensor formalism, without the disadvantages of introducing a hellish mess of index manipulation into the mix. With differential forms it is also possible to express Maxwell’s equations as two equations. The free-space differential forms equivalent [4] to the tensor equations is
\label{eqn:ece2500report:60}
\begin{aligned}
d \alpha &= 0 \\
d *\alpha &= 0,
\end{aligned}

where
\label{eqn:ece2500report:180}
\alpha = \lr{ E_1 dx^1 + E_2 dx^2 + E_3 dx^3 }(c dt) + H_1 dx^2 dx^3 + H_2 dx^3 dx^1 + H_3 dx^1 dx^2.

One of the advantages of this representation is that it is valid even for curvilinear coordinate representations, which are handled naturally in differential forms. However, this formalism also comes with a number of costs. One cost (or benefit), like that of the tensor formalism, is that this is implicitly a relativistic approach subject to non-Euclidean orthonormality conditions $$(dx^i, dx^j) = \delta^{ij}, (dx^i, c dt) = 0, (c dt, c dt) = -1$$. Most grievous of the costs is the requirement to use differentials $$dx^1, dx^2, dx^3, c dt$$, instead of a more familiar set of basis vectors, even for non-curvilinear coordinates. This requirement is easily viewed as unnatural, and likely one of the reasons that electromagnetism with differential forms has never become popular.

### Vector formalism.

Euclidean vector algebra, in particular the vector algebra and calculus of $$R^3$$, is the de-facto language of electrical engineering for electromagnetism. Maxwell’s equations in the Heaviside-Gibbs vector formalism are
\label{eqn:ece2500report:20}
\begin{aligned}
\spacegrad \cross \BE &= – \PD{t}{\BB} \\
\spacegrad \cross \BH &= \BJ + \PD{t}{\BD} \\
\spacegrad \cdot \BD &= \rho \\
\spacegrad \cdot \BB &= 0.
\end{aligned}

We are all intimately familiar with these equations, with the dot and the cross products, and with gradient, divergence and curl operations that are used to express them.
Given how comfortable we are with this mathematical formalism, there has to be a really good reason to switch to something else.

### Space time algebra (geometric algebra).

An alternative to any of the electrodynamics formalisms described above is STA, the Space Time Algebra. STA is a relativistic geometric algebra that allows Maxwell’s equations to be combined into one equation ([2], [6])
\label{eqn:ece2500report:80}
\grad F = J,

where
\label{eqn:ece2500report:200}
F = \BE + I c \BB \qquad (= \BE + I \eta \BH)

is a bivector field containing both the electric and magnetic field “vectors”, $$\grad = \gamma^\mu \partial_\mu$$ is the spacetime gradient, $$J$$ is a four vector containing electric charge and current components, and $$I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$$ is the spacetime pseudoscalar, the ordered product of the basis vectors $$\setlr{ \gamma_\mu }$$. The STA representation is explicitly relativistic with non-Euclidean relationships between the basis vectors $$\gamma_0 \cdot \gamma_0 = 1 = -\gamma_k \cdot \gamma_k, \forall k > 0$$. In this formalism “spatial” vectors $$\Bx = \sum_{k>0} \gamma_k \gamma_0 x^k$$ are represented as spacetime bivectors, requiring a small sleight of hand when switching between STA notation and conventional vector representation. Uncoincidentally $$F$$ has exactly the same structure as the 2-form $$\alpha$$ above, provided the differential 1-forms $$dx^\mu$$ are replaced by the basis vectors $$\gamma_\mu$$. However, there is a simple complex structure inherent in the STA form that is not obvious in the 2-form equivalent. The bivector representation of the field $$F$$ directly encodes the antisymmetric nature of $$F^{\mu\nu}$$ from the tensor formalism, and the tensor equivalents of most STA results can be calculated easily.

Having a single PDE for all of Maxwell’s equations allows for direct Green’s function solution of the field, and has a number of other advantages. There is extensive literature exploring selected applications of STA to electrodynamics. Many theoretical results have been derived using this formalism that require significantly more complex approaches using conventional vector or tensor analysis. Unfortunately, much of the STA literature is inaccessible to the engineering student, practising engineers, or engineering instructors. To even start reading the literature, one must learn geometric algebra, aspects of special relativity and non-Euclidean geometry, generalized integration theory, and even some tensor analysis.

### Paravector formalism (geometric algebra).

In the geometric algebra literature, there are a few authors who have endorsed the use of Euclidean geometric algebras for relativistic applications ([1], [14]).
These authors use a Euclidean basis “vector” $$\Be_0 = 1$$ for the timelike direction, along with a standard Euclidean basis $$\setlr{ \Be_i }$$ for the spatial directions. A hybrid scalar plus vector representation of four vectors, called paravectors, is employed. Maxwell’s equation is written as a multivector equation
\label{eqn:ece2500report:120}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = J,

where $$J$$ is a multivector source containing both the electric charge and currents, and $$c$$ is the group velocity for the medium (assumed uniform and isotropic). $$J$$ may optionally include the (fictitious) magnetic charge and currents useful in antenna theory. The paravector formalism uses the same hybrid electromagnetic field representation as STA above, however, $$I = \Be_1 \Be_2 \Be_3$$ is interpreted as the $$R^3$$ pseudoscalar, the ordered product of the basis vectors $$\setlr{ \Be_i }$$, and $$F$$ represents a multivector with vector and bivector components. Unlike STA where $$\BE$$ and $$\BB$$ (or $$\BH$$) are interpreted as spacetime bivectors, here they are plain old Euclidean vectors in $$R^3$$, entirely consistent with conventional Heaviside-Gibbs notation. Like the STA Maxwell’s equation, the paravector form is directly invertible using Green’s function techniques, without requiring the solution of equivalent second order potential problems, nor any requirement to take the derivatives of those potentials to determine the fields.

Lorentz transformation and manipulation of paravectors requires a variety of conjugation, real and imaginary operators, unlike STA where such operations have the same complex exponential structure as any 3D rotation expressed in geometric algebra. The advocates of the paravector representation argue that this provides an effective pedagogical bridge from Euclidean geometry to the Minkowski geometry of special relativity. This author agrees that this form of Maxwell’s equations is the natural choice for an introduction to electromagnetism using geometric algebra, but for relativistic operations, STA is a much more natural and less confusing choice.

## Results.

The end product of this project was a fairly small self-contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on $$R^2$$ and $$R^3$$ (64 pages), an introduction to geometric calculus and multivector Green’s functions (64 pages), and applications to electromagnetism (75 pages). This report summarizes results from this book, omitting most derivations, and attempts to provide an overview that may be used as a road map for the book for further exploration. Many of the fundamental results of electromagnetism are derived directly from the geometric algebra form of Maxwell’s equation in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra STA and paravector literature. It will be clear to the reader that it is often simpler to have the electric and magnetic fields on equal footing, and the book demonstrates this by deriving most results in terms of the total electromagnetic field $$F$$. Many examples of how to extract the conventional electric and magnetic fields from the geometric algebra results expressed in terms of $$F$$ are given as a bridge between the multivector and vector representations.

The aim of this work was to remove some of the prerequisite conceptual roadblocks that make electromagnetism using geometric algebra inaccessible. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. After derivation from the conventional Heaviside-Gibbs representation of Maxwell’s equations, the paravector representation of Maxwell’s equation is used as the starting point for all subsequent analysis. However, the paravector literature includes a confusing set of conjugation and real and imaginary selection operations that are tailored for relativistic applications. These are not necessary for low velocity applications, and have been avoided completely with the aim of making the subject more accessible to the engineer.

In the book an attempt has been made to introduce as little new notation as possible. For example, some authors use special notation for the bivector valued magnetic field $$I \BB$$, such as $$\boldsymbol{\mathcal{b}}$$ or $$\Bcap$$. Given the inconsistencies in the literature, $$I \BB$$ (or $$I \BH$$) will be used explicitly for the bivector (magnetic) components of the total electromagnetic field $$F$$. In the geometric algebra literature, there are conflicting conventions for the operator $$\spacegrad + (1/c) \PDi{t}{}$$, which we will call the spacetime gradient after the STA equivalent. For examples of different notations for the spacetime gradient, see [9], [1], and [15]. In the book the spacetime gradient is always written out in full to avoid picking from or explaining some of the subtleties of the competing notations.

Some researchers will find it distasteful that STA and relativity have been avoided completely in this book. Maxwell’s equations are inherently relativistic, and STA expresses the relativistic aspects of electromagnetism in an exceptional and beautiful fashion. However, a student of this book will have learned the geometric algebra and calculus prerequisites of STA. This makes the STA literature much more accessible, especially since most of the results in the book can be trivially translated into STA notation.

# References

[1] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer Science \& Business Media, 2004.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] Albert Einstein. Relativity: The special and the general theory, chapter Minkowski’s Four-Dimensional Space. Princeton University Press, 2015. URL http://www.gutenberg.org/ebooks/5001.

[4] H. Flanders. Differential Forms With Applications to the Physical Sciences. Courier Dover Publications, 1989.

[5] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[6] David Hestenes. Space-time algebra, volume 1. Springer, 1966.

[7] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[8] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[9] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics. World Scientific, 1988.

[10] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980. ISBN 0750627689.

[11] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.

[12] James Clerk Maxwell. A treatise on electricity and magnetism, third edition, volume I. Dover publications, 1891.

[13] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

[14] Chappell et al. A simplified approach to electromagnetism using geometric algebra. arXiv preprint arXiv:1010.4947, 2010.

[15] Chappell et al. Geometric algebra for electrical and electronic engineers. 2014.

[16] Chappell et al. Geometric Algebra for Electrical and Electronic Engineers. 2014.

## Motivation.

The quaternion form of Maxwell’s equations as stated in [2] is nearly indecipherable. The modern quaternionic form of these equations can be found in [1]. Looking for this representation was driven by the question of whether or not the compact geometric algebra representation of Maxwell’s equations, $$\grad F = J$$, was possible using a quaternion representation of the fields.

As quaternions may be viewed as the even subalgebra of GA(3,0), it is possible to derive the quaternion representation of Maxwell’s equations using only geometric algebra, including source terms and independent of the heat considerations discussed in [1]. Such a derivation will be performed here. Examination of the results appears to answer the question about the compact representation in the negative.

## Quaternions as multivectors.

Quaternions are vector plus scalar sums, where the vector basis $$\setlr{ \Bi, \Bj, \Bk }$$ is subject to the complex-like multiplication rules
\label{eqn:complex:240}
\begin{aligned}
\Bi^2 &= \Bj^2 = \Bk^2 = -1 \\
\Bi \Bj &= \Bk = -\Bj \Bi \\
\Bj \Bk &= \Bi = -\Bk \Bj \\
\Bk \Bi &= \Bj = -\Bi \Bk.
\end{aligned}

We can represent these basis vectors in terms of the $$\mathbb{R}^{3}$$ unit bivectors
\label{eqn:quaternion2maxwellWithGA:260}
\begin{aligned}
\Bi &= \Be_{3} \Be_{2} = -I \Be_1 \\
\Bj &= \Be_{1} \Be_{3} = -I \Be_2 \\
\Bk &= \Be_{2} \Be_{1} = -I \Be_3,
\end{aligned}

where $$I = \Be_1 \Be_2 \Be_3$$ is the ordered product of the $$\mathbb{R}^{3}$$ basis elements. Within geometric algebra, the quaternion basis “vectors” are more properly viewed as a bivector space basis that happens to have dimension three.
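
As a cross check of these identifications, here is a small numeric sketch using the Pauli matrices as a concrete matrix representation of the $$\mathbb{R}^{3}$$ geometric algebra ($$\Be_k \leftrightarrow \sigma_k$$, with the geometric product becoming the matrix product). This representation is only a computational aid here, not something used in the derivation itself:

```python
import numpy as np

# Pauli representation of e1, e2, e3.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2)

i, j, k = s3 @ s2, s1 @ s3, s2 @ s1    # Bi, Bj, Bk as the bivectors above
I = s1 @ s2 @ s3                       # pseudoscalar e_1 e_2 e_3

# quaternion multiplication rules
assert all(np.allclose(q @ q, -one) for q in (i, j, k))
assert np.allclose(i @ j, k) and np.allclose(j @ i, -k)
assert np.allclose(j @ k, i) and np.allclose(k @ j, -i)
assert np.allclose(k @ i, j) and np.allclose(i @ k, -j)

# the -I e_k form of each quaternion unit
assert np.allclose(i, -I @ s1) and np.allclose(j, -I @ s2) and np.allclose(k, -I @ s3)
```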

Following [1], we may introduce a quaternionic spacetime gradient, and express that in terms of geometric algebra
\label{eqn:quaternion2maxwellWithGA:280}
\frac{d}{dr} = \inv{c} \PD{t}{}
+ \Bi \PD{x}{}
+ \Bj \PD{y}{}
+ \Bk \PD{z}{}
=
\inv{c} \PD{t}{} - I \spacegrad.

Of particular interest is how do we write the curl, divergence and time partials in terms of the quaternionic spacetime gradient or its components. Like [1], we will use modern commutator notation for an antisymmetric difference of products
\label{eqn:quaternion2maxwellWithGA:600}
\antisymmetric{a}{b} = a b – b a,

and anticommutator notation for a symmetric difference of products
\label{eqn:quaternion2maxwellWithGA:620}
\symmetric{a}{b} = a b + b a.

The curl of a vector $$\Bf$$ in terms of vector products with the gradient is
\label{eqn:quaternion2maxwellWithGA:300}
\begin{aligned}
\spacegrad \cross \Bf
&= -I \lr{ \spacegrad \wedge \Bf } \\
&= -I \inv{2} \lr{ \spacegrad \Bf - \Bf \spacegrad } \\
&= \inv{2} \antisymmetric{ -I \spacegrad }{ \Bf } \\
&= \inv{2} \antisymmetric{ \frac{d}{dr} }{ \Bf },
\end{aligned}

where the last step takes advantage of the fact that the timelike contribution of the spacetime gradient commutes with any vector $$\Bf$$ due to its scalar nature, so cancels out of the commutator. In a similar fashion, the dot product may be written as an anticommutator
\label{eqn:quaternion2maxwellWithGA:480}
\spacegrad \cdot \Bf
= \inv{2} \lr{ \spacegrad \Bf + \Bf \spacegrad }
= \inv{2} \symmetric{ \spacegrad }{ \Bf },

as can the scalar time derivative
\label{eqn:quaternion2maxwellWithGA:500}
\PD{t}{\Bf}
= \inv{2} \symmetric{ \inv{c} \PD{t}{} } { c \Bf }.
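
The differential nature of these operators does not change the underlying algebra, so the identities can be spot checked numerically by letting a fixed vector stand in for the gradient. A short numpy sketch, again using the Pauli matrices as a stand-in representation of the $$\mathbb{R}^{3}$$ algebra:

```python
import numpy as np

# a x f = (1/2)[-I a, f] and a . f = (1/2){a, f}, with a standing in for the gradient.
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I = s[0] @ s[1] @ s[2]                           # pseudoscalar e_123
vec = lambda v: sum(vi * si for vi, si in zip(v, s))

rng = np.random.default_rng(0)
a, f = rng.normal(size=3), rng.normal(size=3)
A, F = vec(a), vec(f)

curl_like = ((-I @ A) @ F - F @ (-I @ A)) / 2    # (1/2)[-I a, f]
assert np.allclose(curl_like, vec(np.cross(a, f)))

dot_like = (A @ F + F @ A) / 2                   # (1/2){a, f}
assert np.allclose(dot_like, np.dot(a, f) * np.eye(2))
```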

## Quaternionic form of Maxwell’s equations.

Using geometric algebra as an intermediate transformation, let’s see directly how to express Maxwell’s equations in terms of this quaternionic operator. Our starting point is Maxwell’s equations in their standard macroscopic form

\label{eqn:quaternion2maxwellWithGA:320}
\spacegrad \cross \BH = \BJ + \PD{t}{\BD}

\label{eqn:quaternion2maxwellWithGA:340}
\spacegrad \cdot \BD = \rho

\label{eqn:quaternion2maxwellWithGA:360}
\spacegrad \cross \BE = - \PD{t}{\BB}

\label{eqn:quaternion2maxwellWithGA:380}
\spacegrad \cdot \BB = 0.

Inserting these into Maxwell-Faraday and into Gauss’s law for magnetism we have
\label{eqn:quaternion2maxwellWithGA:400}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } &= -\inv{2} \symmetric{ \inv{c}\PD{t}{} }{ c \BB } \\
\inv{2} \symmetric{ \spacegrad }{ c \BB } &= 0,
\end{aligned}

or
\label{eqn:quaternion2maxwellWithGA:420}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ -I \BE } + \inv{2} \symmetric{ \inv{c}\PD{t}{} }{ -I c \BB } &= 0 \\
\inv{2} \symmetric{ -I \spacegrad }{ -I c \BB } &= 0.
\end{aligned}

We can introduce quaternionic electric and magnetic field “vectors” (really bivectors)
\label{eqn:quaternion2maxwellWithGA:440}
\begin{aligned}
\boldsymbol{\mathcal{E}} &= -I \BE = \Bi E_x + \Bj E_y + \Bk E_z \\
\boldsymbol{\mathcal{B}} &= -I \BB = \Bi B_x + \Bj B_y + \Bk B_z,
\end{aligned}

and substitute these and sum to find the quaternionic representation of the two source free Maxwell’s equations
\label{eqn:quaternion2maxwellWithGA:460}
\boxed{
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \boldsymbol{\mathcal{E}} } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \boldsymbol{\mathcal{B}} } = 0.
}

Inserting the quaternion curl, div and time derivative representations into Ampere-Maxwell’s law and Gauss’s law, gives
\label{eqn:quaternion2maxwellWithGA:520}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } &= \BJ + \inv{2} \symmetric{ \inv{c} \PD{t}{} } { c \BD } \\
\inv{2} \symmetric{ \spacegrad }{ c \BD } &= c \rho,
\end{aligned}

\label{eqn:quaternion2maxwellWithGA:540}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ -I \BH } – \inv{2} \symmetric{ \inv{c} \PD{t}{} } { -I c \BD } &= -I \BJ \\
-\inv{2} \symmetric{ -I \spacegrad }{ -I c \BD } &= c \rho.
\end{aligned}

With the quaternionic displacement, magnetic field, and current density vectors
\label{eqn:quaternion2maxwellWithGA:580}
\begin{aligned}
\boldsymbol{\mathcal{D}} &= -I \BD = \Bi D_x + \Bj D_y + \Bk D_z \\
\boldsymbol{\mathcal{H}} &= -I \BH = \Bi H_x + \Bj H_y + \Bk H_z \\
\boldsymbol{\mathcal{J}} &= -I \BJ = \Bi J_x + \Bj J_y + \Bk J_z,
\end{aligned}

and summing yields the remaining two Maxwell equations in their quaternionic form
\label{eqn:quaternion2maxwellWithGA:560}
\boxed{
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \boldsymbol{\mathcal{H}} } – \inv{2} \symmetric{ \frac{d}{dr} } { c \boldsymbol{\mathcal{D}} } = c \rho + \boldsymbol{\mathcal{J}}.
}

## Conclusions.

Maxwell’s equations in the quaternion representation have a structure that is not apparent in the Heaviside-Gibbs notation. There is some elegance to this result, but it comes with the cost of having to use commutator and anticommutator operators, which are arguably non-intuitive. The compact geometric algebra representation of Maxwell’s equation does not appear possible with a quaternion representation, as an additional complex degree of freedom would be required (biquaternions?). Such a degree of freedom may also allow a quaternion representation of the (fictitious) magnetic sources that are useful in antenna theory. Magnetic sources are easily incorporated into the current multivector in geometric algebra, but if included in the derivation above, they yield an odd grade multivector source which has no quaternion representation.

# References

[1] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[2] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.

## Motivation.

The notation I prefer for relativistic geometric algebra uses Hestenes’ space time algebra (STA) [2], where the basis $$\setlr{ \gamma_\mu }$$ spans a four dimensional space, subject to Dirac matrix like relations $$\gamma_\mu \cdot \gamma_\nu = \eta_{\mu \nu}$$.

In this formalism a four vector is just the sum of the products of coordinates and basis vectors, for example, using summation convention

\label{eqn:boostToParavector:160}
x = x^\mu \gamma_\mu.

The invariant for a four-vector in STA is just the square of that vector

\label{eqn:boostToParavector:180}
\begin{aligned}
x^2
&= (x^\mu \gamma_\mu) \cdot (x^\nu \gamma_\nu) \\
&= \sum_\mu (x^\mu)^2 (\gamma_\mu)^2 \\
&= (x^0)^2 – \sum_{k = 1}^3 (x^k)^2 \\
&= (ct)^2 – \Bx^2.
\end{aligned}

Recall that a four-vector is time-like if this squared-length is positive, spacelike if negative, and light-like when zero.

Time-like projections are possible by dotting with the “lab-frame” time like basis vector $$\gamma_0$$

\label{eqn:boostToParavector:200}
ct = x \cdot \gamma_0 = x^0,

and space-like projections are wedges with the same

\label{eqn:boostToParavector:220}
\Bx = x \wedge \gamma_0 = x^k \sigma_k,

where sums over Latin indexes $$k \in \setlr{1,2,3}$$ are implied, and where the elements $$\sigma_k$$

\label{eqn:boostToParavector:80}
\sigma_k = \gamma_k \gamma_0,

which are bivectors in STA, can be viewed as a Euclidean vector basis $$\setlr{ \sigma_k }$$.
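
These relations are easy to verify numerically. Here is a small sketch using the Dirac-representation gamma matrices as one concrete stand-in for the STA basis (any faithful matrix representation would do):

```python
import numpy as np

# gamma_0^2 = 1, gamma_k^2 = -1, and sigma_k = gamma_k gamma_0 behave as a Euclidean basis.
pauli = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
zero2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

g0 = np.block([[I2, zero2], [zero2, -I2]])
gk = [np.block([[zero2, p], [-p, zero2]]) for p in pauli]

I4 = np.eye(4)
assert np.allclose(g0 @ g0, I4)
assert all(np.allclose(g @ g, -I4) for g in gk)

sigma = [g @ g0 for g in gk]
assert all(np.allclose(s @ s, I4) for s in sigma)              # sigma_k^2 = +1
assert np.allclose(sigma[0] @ sigma[1], -sigma[1] @ sigma[0])  # anticommutation
```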

Rotations in STA involve exponentials of space like bivectors $$\theta = a_{ij} \gamma_i \wedge \gamma_j$$

\label{eqn:boostToParavector:240}
x' = e^{ \theta/2 } x e^{ -\theta/2 }.

Boosts, on the other hand, have exactly the same form, but the exponentials have spacetime bivector arguments, such as $$\theta = a \wedge \gamma_0$$, where $$a$$ is any four-vector.

Observe that both boosts and rotations necessarily conserve the space-time length of a four vector (or any multivector with a scalar square).

\label{eqn:boostToParavector:260}
\begin{aligned}
\lr{x'}^2
&=
\lr{ e^{ \theta/2 } x e^{ -\theta/2 } } \lr{ e^{ \theta/2 } x e^{ -\theta/2 } } \\
&=
e^{ \theta/2 } x \lr{ e^{ -\theta/2 } e^{ \theta/2 } } x e^{ -\theta/2 } \\
&=
e^{ \theta/2 } x^2 e^{ -\theta/2 } \\
&=
x^2 e^{ \theta/2 } e^{ -\theta/2 } \\
&=
x^2.
\end{aligned}

## Paravectors.

Paravectors, as used by Baylis [1], represent four-vectors using a Euclidean multivector basis $$\setlr{ \Be_\mu }$$, where $$\Be_0 = 1$$. The conversion between STA and paravector notation requires only multiplication with the timelike basis vector for the lab frame $$\gamma_0$$

\label{eqn:boostToParavector:40}
\begin{aligned}
X
&= x \gamma_0 \\
&= \lr{ x^0 \gamma_0 + x^k \gamma_k } \gamma_0 \\
&= x^0 + x^k \gamma_k \gamma_0 \\
&= x^0 + \Bx \\
&= c t + \Bx,
\end{aligned}

We need a different structure for the invariant length in paravector form. That invariant length is
\label{eqn:boostToParavector:280}
\begin{aligned}
x^2
&=
\lr{ \lr{ ct + \Bx } \gamma_0 }
\lr{ \lr{ ct + \Bx } \gamma_0 } \\
&=
\lr{ \lr{ ct + \Bx } \gamma_0 }
\lr{ \gamma_0 \lr{ ct – \Bx } } \\
&=
\lr{ ct + \Bx }
\lr{ ct – \Bx }.
\end{aligned}

Baylis introduces an involution operator $$\overline{{M}}$$ which toggles the sign of any vector or bivector grades of a multivector. For example, if $$M = a + \Ba + I \Bb + I c$$, where $$a,c \in \mathbb{R}$$ and $$\Ba, \Bb \in \mathbb{R}^3$$ is a multivector with all grades $$0,1,2,3$$, then the involution of $$M$$ is

\label{eqn:boostToParavector:300}
\overline{{M}} = a – \Ba – I \Bb + I c.

Utilizing this operator, the invariant length for a paravector $$X$$ is $$X \overline{{X}}$$.

Let’s consider how boosts and rotations can be expressed in the paravector form. The half angle operator for a boost along the spacelike $$\Bv = v \vcap$$ direction has the form

\label{eqn:boostToParavector:120}
L = e^{ -\vcap \phi/2 },

\label{eqn:boostToParavector:140}
\begin{aligned}
X'
&=
c t' + \Bx' \\
&=
x' \gamma_0 \\
&=
L x L^\dagger \\
&=
e^{ -\vcap \phi/2 } x^\mu \gamma_\mu
e^{ \vcap \phi/2 } \gamma_0 \\
&=
e^{ -\vcap \phi/2 } x^\mu \gamma_\mu \gamma_0
e^{ -\vcap \phi/2 } \\
&=
e^{ -\vcap \phi/2 } \lr{ x^0 + \Bx } e^{ -\vcap \phi/2 } \\
&=
L X L.
\end{aligned}

Because the involution operator toggles the sign of vector grades, it is easy to see that the required invariance is maintained

\label{eqn:boostToParavector:320}
\begin{aligned}
X' \overline{{X'}}
&=
L X L
\overline{{ L X L }} \\
&=
L X L
\overline{{ L }} \overline{{ X }} \overline{{ L }} \\
&=
L X \overline{{ X }} \overline{{ L }} \\
&=
X \overline{{ X }} L \overline{{ L }} \\
&=
X \overline{{ X }}.
\end{aligned}
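
This invariance is easy to spot check numerically. In the Pauli matrix representation of the $$\mathbb{R}^{3}$$ algebra a paravector $$c t + \Bx$$ maps to the matrix $$c t \, 1 + x^k \sigma_k$$, and the involution is just $$\overline{{M}} = \textrm{tr}(M) 1 - M$$. A sketch with arbitrary sample values (not part of the derivation above):

```python
import numpy as np

# X' = L X L with L = cosh(phi/2) - vcap sinh(phi/2); check X' Xbar' = X Xbar.
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
one = np.eye(2, dtype=complex)
paravector = lambda ct, x: ct * one + sum(xi * si for xi, si in zip(x, s))
bar = lambda M: np.trace(M) * one - M       # toggles the vector and bivector grades

vcap = np.array([0.6, 0.8, 0.0])            # unit boost direction (sample values)
phi = 0.9                                    # rapidity (sample value)
L = np.cosh(phi / 2) * one - np.sinh(phi / 2) * paravector(0.0, vcap)

ct, x = 2.0, np.array([1.0, 3.0, -2.0])
X = paravector(ct, x)
Xp = L @ X @ L

assert np.allclose(Xp @ bar(Xp), X @ bar(X))     # invariant length preserved
# scalar part of X' is ct' = ct cosh(phi) - (vcap . x) sinh(phi)
assert np.isclose(np.trace(Xp).real / 2, ct * np.cosh(phi) - (vcap @ x) * np.sinh(phi))
```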

Let’s explicitly expand the transformation of \ref{eqn:boostToParavector:140}, so we can relate the rapidity angle $$\phi$$ to the magnitude of the velocity. This is most easily done by splitting the spacelike component $$\Bx$$ of the four vector into its projective and rejective components

\label{eqn:boostToParavector:340}
\begin{aligned}
\Bx
&= \vcap \vcap \Bx \\
&= \vcap \lr{ \vcap \cdot \Bx + \vcap \wedge \Bx } \\
&= \vcap \lr{ \vcap \cdot \Bx } + \vcap \lr{ \vcap \wedge \Bx } \\
&= \Bx_\parallel + \Bx_\perp.
\end{aligned}

The exponential

\label{eqn:boostToParavector:360}
e^{-\vcap \phi/2}
=
\cosh\lr{ \phi/2 }
– \vcap \sinh\lr{ \phi/2 },

commutes with any scalar grades and with $$\Bx_\parallel$$, but anticommutes with $$\Bx_\perp$$, so

\label{eqn:boostToParavector:380}
\begin{aligned}
X'
&=
\lr{ c t + \Bx_\parallel } e^{ -\vcap \phi/2 } e^{ -\vcap \phi/2 }
+
\Bx_\perp e^{ \vcap \phi/2 } e^{ -\vcap \phi/2 } \\
&=
\lr{ c t + \Bx_\parallel } e^{ -\vcap \phi }
+
\Bx_\perp \\
&=
\lr{ c t + \vcap \lr{ \vcap \cdot \Bx } } \lr{ \cosh \phi – \vcap \sinh \phi }
+
\Bx_\perp \\
&=
\Bx_\perp
+
\lr{ c t \cosh\phi – \lr{ \vcap \cdot \Bx} \sinh \phi }
+
\vcap \lr{ \lr{ \vcap \cdot \Bx } \cosh\phi – c t \sinh \phi } \\
&=
\Bx_\perp
+
\cosh\phi \lr{ c t – \lr{ \vcap \cdot \Bx} \tanh \phi }
+
\vcap \cosh\phi \lr{ \vcap \cdot \Bx – c t \tanh \phi }.
\end{aligned}

Employing the argument from [3],
we want $$\phi$$ defined so that this has the structure of a Galilean transformation in the limit where $$\phi \rightarrow 0$$. This means we equate

\label{eqn:boostToParavector:400}
\tanh \phi = \frac{v}{c},

so that for small $$\phi$$

\label{eqn:boostToParavector:420}
\Bx' = \Bx - \Bv t.

We can solve for $$\sinh^2 \phi$$ and $$\cosh^2 \phi$$ in terms of $$v/c$$ using

\label{eqn:boostToParavector:440}
\tanh^2 \phi
= \frac{v^2}{c^2}
=
\frac{ \sinh^2 \phi }{1 + \sinh^2 \phi}
=
\frac{ \cosh^2 \phi - 1 }{\cosh^2 \phi},

which after picking the positive root required for Galilean equivalence gives
\label{eqn:boostToParavector:460}
\begin{aligned}
\cosh \phi &= \frac{1}{\sqrt{1 – (\Bv/c)^2}} \equiv \gamma \\
\sinh \phi &= \frac{v/c}{\sqrt{1 – (\Bv/c)^2}} = \gamma v/c.
\end{aligned}

The Lorentz boost, written out in full is

\label{eqn:boostToParavector:480}
ct' + \Bx'
=
\Bx_\perp
+
\gamma \lr{ c t – \frac{\Bv}{c} \cdot \Bx }
+
\gamma \lr{ \vcap \lr{ \vcap \cdot \Bx } – \Bv t }
.
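
A quick numeric sketch (arbitrary sample values) of this explicit form, checking that the interval $$(ct)^2 - \Bx \cdot \Bx$$ is preserved and that the rapidity relations above hold:

```python
import numpy as np

c = 1.0
v = np.array([0.6, 0.0, 0.0])                  # boost velocity, |v| < c
vcap = v / np.linalg.norm(v)
gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)

ct, x = 2.0, np.array([1.0, 3.0, -2.0])        # an arbitrary event
x_par = vcap * (vcap @ x)
x_perp = x - x_par

ct_p = gamma * (ct - (v @ x) / c)
x_p = x_perp + gamma * (x_par - v * ct / c)    # v t = v (ct)/c

assert np.isclose(ct**2 - x @ x, ct_p**2 - x_p @ x_p)

phi = np.arctanh(np.linalg.norm(v) / c)        # tanh(phi) = v/c
assert np.isclose(np.cosh(phi), gamma)
assert np.isclose(np.sinh(phi), gamma * np.linalg.norm(v) / c)
```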

Authors like Chappell et al., who also use paravectors [4], specify the form of the Lorentz transformation for the electromagnetic field, but for that transformation reversion is used instead of involution.
I plan to explore that in a later post, starting from the STA formalism that I already understand, and see if I can make sense
of the underlying rationale.

# References

[1] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer Science \& Business Media, 2004.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] L. Landau and E. Lifshitz. The Classical theory of fields. Addison-Wesley, 1951.

[4] James M Chappell, Samuel P Drake, Cameron L Seidel, Lachlan J Gunn, and Derek Abbott. Geometric algebra for electrical and electronic engineers. Proceedings of the IEEE, 102(9), 2014.

## Spherical gradient, divergence, curl and Laplacian

### Unit vectors

Two of the spherical unit vectors we can immediately write by inspection.

\label{eqn:sphericalLaplacian:20}
\begin{aligned}
\rcap &= \Be_1 \sin\theta \cos\phi + \Be_2 \sin\theta \sin\phi + \Be_3 \cos\theta \\
\phicap &= -\Be_1 \sin\phi + \Be_2 \cos\phi
\end{aligned}

We can compute $$\thetacap$$ by utilizing the right hand triplet property

\label{eqn:sphericalLaplacian:40}
\begin{aligned}
\thetacap
&=
\phicap \cross \rcap \\
&=
\begin{vmatrix}
\Be_1 & \Be_2 & \Be_3 \\
-S_\phi & C_\phi & 0 \\
S_\theta C_\phi & S_\theta S_\phi & C_\theta \\
\end{vmatrix} \\
&=
\Be_1 \lr{ C_\theta C_\phi }
+\Be_2 \lr{ C_\theta S_\phi }
+\Be_3 \lr{ -S_\theta \lr{ S_\phi^2 + C_\phi^2 } } \\
&=
\Be_1 \cos\theta \cos\phi
+\Be_2 \cos\theta \sin\phi
-\Be_3 \sin\theta.
\end{aligned}

Here I’ve used $$C_\theta = \cos\theta, S_\phi = \sin\phi, \cdots$$ as a convenient shorthand. Observe that with $$i = \Be_1 \Be_2$$, these unit vectors admit a small factorization that makes further manipulation easier

\label{eqn:sphericalLaplacian:80}
\boxed{
\begin{aligned}
\rcap &= \Be_1 e^{i\phi} \sin\theta + \Be_3 \cos\theta \\
\thetacap &= \cos\theta \Be_1 e^{i\phi} – \sin\theta \Be_3 \\
\phicap &= \Be_2 e^{i\phi}
\end{aligned}
}

It should also be the case that $$\rcap \thetacap \phicap = I$$, where $$I = \Be_1 \Be_2 \Be_3 = \Be_{123}$$ is the \R{3} pseudoscalar, which is straightforward to check

\label{eqn:sphericalLaplacian:60}
\begin{aligned}
\rcap \thetacap \phicap
&=
\lr{ \Be_1 e^{i\phi} \sin\theta + \Be_3 \cos\theta }
\lr{ \cos\theta \Be_1 e^{i\phi} – \sin\theta \Be_3 }
\Be_2 e^{i\phi} \\
&=
\lr{ \sin\theta \cos\theta – \cos\theta \sin\theta + \Be_{31} e^{i\phi} \lr{ \cos^2\theta + \sin^2\theta } }
\Be_2 e^{i\phi} \\
&=
\Be_{31} \Be_2 e^{-i\phi} e^{i\phi} \\
&=
\Be_{123}.
\end{aligned}

This property could also have been used to compute $$\thetacap$$.
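
Both the factorizations and the triple product are also easy to verify numerically, for example with the Pauli matrices standing in for $$\Be_1, \Be_2, \Be_3$$ (sample angles, purely a sanity check):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = s1 @ s2 @ s3                                              # pseudoscalar e_123

theta, phi = 0.7, 1.3
eiphi = np.cos(phi) * np.eye(2) + np.sin(phi) * (s1 @ s2)     # e^{i phi}, i = e1 e2

rcap = np.sin(theta) * (s1 @ eiphi) + np.cos(theta) * s3
thetacap = np.cos(theta) * (s1 @ eiphi) - np.sin(theta) * s3
phicap = s2 @ eiphi

# agree with the coordinate expansions
assert np.allclose(rcap, np.sin(theta)*np.cos(phi)*s1 + np.sin(theta)*np.sin(phi)*s2 + np.cos(theta)*s3)
assert np.allclose(phicap, -np.sin(phi)*s1 + np.cos(phi)*s2)
# right handed triplet multiplies to the pseudoscalar
assert np.allclose(rcap @ thetacap @ phicap, I)
```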

To compute the gradient, note that the coordinate vectors for the spherical parameterization are
\label{eqn:sphericalLaplacian:120}
\begin{aligned}
\Bx_r
&= \PD{r}{\Br} \\
&= \PD{r}{\lr{r \rcap}} \\
&= \rcap + r \PD{r}{\rcap} \\
&= \rcap,
\end{aligned}

\label{eqn:sphericalLaplacian:140}
\begin{aligned}
\Bx_\theta
&= \PD{\theta}{\lr{r \rcap} } \\
&= r \PD{\theta}{} \lr{ S_\theta \Be_1 e^{i\phi} + C_\theta \Be_3 } \\
&= r \lr{ C_\theta \Be_1 e^{i\phi} - S_\theta \Be_3 } \\
&= r \thetacap,
\end{aligned}

\label{eqn:sphericalLaplacian:160}
\begin{aligned}
\Bx_\phi
&= \PD{\phi}{\lr{r \rcap} } \\
&= r \PD{\phi}{} \lr{ S_\theta \Be_1 e^{i\phi} + C_\theta \Be_3 } \\
&= r S_\theta \Be_2 e^{i\phi} \\
&= r \sin\theta \phicap.
\end{aligned}

Since these are all mutually orthogonal, the dual vectors, defined by $$\Bx^j \cdot \Bx_k = \delta^j_k$$, can be obtained by inspection
\label{eqn:sphericalLaplacian:180}
\begin{aligned}
\Bx^r &= \rcap \\
\Bx^\theta &= \inv{r} \thetacap \\
\Bx^\phi &= \inv{r \sin\theta} \phicap.
\end{aligned}

The gradient is
\label{eqn:sphericalLaplacian:200}
\spacegrad =
\Bx^r \PD{r}{} +
\Bx^\theta \PD{\theta}{} +
\Bx^\phi \PD{\phi}{},

or
\label{eqn:sphericalLaplacian:240}
\boxed{
\spacegrad
=
\rcap \PD{r}{} +
\frac{\thetacap}{r} \PD{\theta}{} +
\frac{\phicap}{r\sin\theta} \PD{\phi}{}.
}

More information on this general dual-vector technique of computing the gradient in curvilinear coordinate systems can be found in
[2].

### Partials

To compute the divergence, curl and Laplacian, we’ll need the partials of each of the unit vectors $$\PDi{\theta}{\rcap}, \PDi{\phi}{\rcap}, \PDi{\theta}{\thetacap}, \PDi{\phi}{\thetacap}, \PDi{\phi}{\phicap}$$.

The $$\thetacap$$ partials are

\label{eqn:sphericalLaplacian:260}
\begin{aligned}
\PD{\theta}{\thetacap}
&=
\PD{\theta}{} \lr{
C_\theta \Be_1 e^{i\phi} – S_\theta \Be_3
} \\
&=
-S_\theta \Be_1 e^{i\phi} – C_\theta \Be_3 \\
&=
-\rcap,
\end{aligned}

\label{eqn:sphericalLaplacian:280}
\begin{aligned}
\PD{\phi}{\thetacap}
&=
\PD{\phi}{} \lr{
C_\theta \Be_1 e^{i\phi} – S_\theta \Be_3
} \\
&=
C_\theta \Be_2 e^{i\phi} \\
&=
C_\theta \phicap.
\end{aligned}

The $$\phicap$$ partials are

\label{eqn:sphericalLaplacian:300}
\begin{aligned}
\PD{\theta}{\phicap}
&=
\PD{\theta}{} \Be_2 e^{i\phi} \\
&=
0.
\end{aligned}

\label{eqn:sphericalLaplacian:320}
\begin{aligned}
\PD{\phi}{\phicap}
&=
\PD{\phi}{} \Be_2 e^{i \phi} \\
&=
-\Be_1 e^{i \phi} \\
&=
-\rcap \gpgradezero{ \rcap \Be_1 e^{i \phi} }
– \thetacap \gpgradezero{ \thetacap \Be_1 e^{i \phi} }
– \phicap \gpgradezero{ \phicap \Be_1 e^{i \phi} } \\
&=
-\rcap \gpgradezero{ \lr{
\Be_1 e^{i\phi} S_\theta + \Be_3 C_\theta
} \Be_1 e^{i \phi} }
- \thetacap \gpgradezero{ \lr{
C_\theta \Be_1 e^{i\phi} - S_\theta \Be_3
} \Be_1 e^{i \phi} } \\
&=
-\rcap \gpgradezero{ e^{-i\phi} S_\theta e^{i \phi} }
– \thetacap \gpgradezero{ C_\theta e^{-i\phi} e^{i \phi} } \\
&=
-\rcap S_\theta
– \thetacap C_\theta.
\end{aligned}

The $$\rcap$$ partials were computed as a side effect of evaluating $$\Bx_\theta$$ and $$\Bx_\phi$$, and are

\label{eqn:sphericalLaplacian:340}
\PD{\theta}{\rcap}
=
\thetacap,

\label{eqn:sphericalLaplacian:360}
\PD{\phi}{\rcap}
=
S_\theta \phicap.

In summary
\label{eqn:sphericalLaplacian:380}
\boxed{
\begin{aligned}
\partial_{\theta}{\rcap} &= \thetacap \\
\partial_{\phi}{\rcap} &= S_\theta \phicap \\
\partial_{\theta}{\thetacap} &= -\rcap \\
\partial_{\phi}{\thetacap} &= C_\theta \phicap \\
\partial_{\theta}{\phicap} &= 0 \\
\partial_{\phi}{\phicap} &= -\rcap S_\theta – \thetacap C_\theta.
\end{aligned}
}
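
These partials follow mechanically from the coordinate expansions of the unit vectors, so they can be verified with a few lines of sympy (a sketch, using the component forms rather than the $$e^{i\phi}$$ factorization):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
rcap = sp.Matrix([sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta)])
thetacap = sp.Matrix([sp.cos(theta)*sp.cos(phi), sp.cos(theta)*sp.sin(phi), -sp.sin(theta)])
phicap = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])

def is_zero(m):
    return all(sp.simplify(e) == 0 for e in m)

assert is_zero(sp.diff(rcap, theta) - thetacap)
assert is_zero(sp.diff(rcap, phi) - sp.sin(theta)*phicap)
assert is_zero(sp.diff(thetacap, theta) + rcap)
assert is_zero(sp.diff(thetacap, phi) - sp.cos(theta)*phicap)
assert is_zero(sp.diff(phicap, theta))
assert is_zero(sp.diff(phicap, phi) + sp.sin(theta)*rcap + sp.cos(theta)*thetacap)
```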

### Divergence and curl.

The divergence and curl can both be computed from the geometric product of the spherical coordinate gradient and the spherical representation of a vector. That is

\label{eqn:sphericalLaplacian:400}
\spacegrad \BA = \spacegrad \cdot \BA + \spacegrad \wedge \BA.

Expanding this product gives
\label{eqn:sphericalLaplacian:420}
\begin{aligned}
\spacegrad \BA
&=
\lr{
\rcap \partial_{r}
+ \frac{\thetacap}{r} \partial_{\theta}
+ \frac{\phicap}{rS_\theta} \partial_{\phi}
}
\lr{ \rcap A_r + \thetacap A_\theta + \phicap A_\phi} \\
&=
\rcap \partial_{r}
\lr{ \rcap A_r + \thetacap A_\theta + \phicap A_\phi} \\
&+ \frac{\thetacap}{r} \partial_{\theta}
\lr{ \rcap A_r + \thetacap A_\theta + \phicap A_\phi} \\
&+ \frac{\phicap}{rS_\theta} \partial_{\phi}
\lr{ \rcap A_r + \thetacap A_\theta + \phicap A_\phi} \\
&=
\lr{ \partial_r A_r + \rcap \thetacap \partial_r A_\theta + \rcap \phicap \partial_r A_\phi} \\
&+ \frac{1}{r}
\lr{
\thetacap (\partial_\theta \rcap) A_r + \thetacap (\partial_\theta \thetacap) A_\theta + \thetacap (\partial_\theta \phicap) A_\phi
+\thetacap \rcap \partial_\theta A_r + \partial_\theta A_\theta + \thetacap \phicap \partial_\theta A_\phi
} \\
&+ \frac{1}{rS_\theta}
\lr{
\phicap (\partial_\phi \rcap) A_r + \phicap (\partial_\phi \thetacap) A_\theta + \phicap (\partial_\phi \phicap) A_\phi
+\phicap \rcap \partial_\phi A_r + \phicap \thetacap \partial_\phi A_\theta + \partial_\phi A_\phi
} \\
&=
\lr{ \partial_r A_r + \rcap \thetacap \partial_r A_\theta + \rcap \phicap \partial_r A_\phi} \\
&+ \frac{1}{r}
\lr{
\thetacap (\thetacap) A_r + \thetacap (-\rcap) A_\theta + \thetacap (0) A_\phi
+\thetacap \rcap \partial_\theta A_r + \partial_\theta A_\theta + \thetacap \phicap \partial_\theta A_\phi
} \\
&+ \frac{1}{r S_\theta}
\lr{
\phicap (S_\theta \phicap) A_r + \phicap (C_\theta \phicap) A_\theta – \phicap (\rcap S_\theta + \thetacap C_\theta) A_\phi
+\phicap \rcap \partial_\phi A_r + \phicap \thetacap \partial_\phi A_\theta + \partial_\phi A_\phi
}.
\end{aligned}

The scalar component of this is the divergence
\label{eqn:sphericalLaplacian:440}
\begin{aligned}
\spacegrad \cdot \BA
&=
\partial_r A_r
+ \frac{A_r}{r}
+ \inv{r} \partial_\theta A_\theta
+ \frac{1}{r S_\theta}
\lr{ S_\theta A_r + C_\theta A_\theta + \partial_\phi A_\phi
} \\
&=
\partial_r A_r
+ 2 \frac{A_r}{r}
+ \inv{r} \partial_\theta A_\theta
+ \frac{1}{r S_\theta}
C_\theta A_\theta
+ \frac{1}{r S_\theta} \partial_\phi A_\phi,
\end{aligned}

which can be factored as
\label{eqn:sphericalLaplacian:460}
\boxed{
\spacegrad \cdot \BA
=
\inv{r^2} \partial_r (r^2 A_r)
+ \inv{r S_\theta} \partial_\theta (S_\theta A_\theta)
+ \frac{1}{r S_\theta} \partial_\phi A_\phi.
}
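
The factoring step is easy to confirm symbolically (a short sympy sketch with generic component functions):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
Ar, At, Ap = [sp.Function(n)(r, theta, phi) for n in ('A_r', 'A_theta', 'A_phi')]

expanded = (sp.diff(Ar, r) + 2*Ar/r
            + sp.diff(At, theta)/r
            + sp.cos(theta)*At/(r*sp.sin(theta))
            + sp.diff(Ap, phi)/(r*sp.sin(theta)))

factored = (sp.diff(r**2*Ar, r)/r**2
            + sp.diff(sp.sin(theta)*At, theta)/(r*sp.sin(theta))
            + sp.diff(Ap, phi)/(r*sp.sin(theta)))

assert sp.simplify(expanded - factored) == 0
```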

The bivector grade of $$\spacegrad \BA$$ is the bivector curl
\label{eqn:sphericalLaplacian:480}
\begin{aligned}
\spacegrad \wedge \BA
&=
\lr{
\rcap \thetacap \partial_r A_\theta + \rcap \phicap \partial_r A_\phi
} \\
&+ \frac{1}{r}
\lr{
\thetacap (-\rcap) A_\theta
+\thetacap \rcap \partial_\theta A_r + \thetacap \phicap \partial_\theta A_\phi
} \\
&+ \frac{1}{r S_\theta}
\lr{
-\phicap (\rcap S_\theta + \thetacap C_\theta) A_\phi
+\phicap \rcap \partial_\phi A_r + \phicap \thetacap \partial_\phi A_\theta
} \\
&=
\lr{
\rcap \thetacap \partial_r A_\theta - \phicap \rcap \partial_r A_\phi
} \\
&+ \frac{1}{r}
\lr{
\rcap \thetacap A_\theta
-\rcap \thetacap \partial_\theta A_r + \thetacap \phicap \partial_\theta A_\phi
} \\
&+ \frac{1}{r S_\theta}
\lr{
-\phicap \rcap S_\theta A_\phi + \thetacap \phicap C_\theta A_\phi
+\phicap \rcap \partial_\phi A_r - \thetacap \phicap \partial_\phi A_\theta
} \\
&=
\thetacap \phicap \lr{
\inv{r S_\theta} C_\theta A_\phi
+\frac{1}{r} \partial_\theta A_\phi
-\frac{1}{r S_\theta} \partial_\phi A_\theta
} \\
&+ \phicap \rcap \lr{
-\partial_r A_\phi
+
\frac{1}{r S_\theta}
\lr{
-S_\theta A_\phi
+ \partial_\phi A_r
}
} \\
&+ \rcap \thetacap \lr{
\partial_r A_\theta
+ \frac{1}{r} A_\theta
- \inv{r} \partial_\theta A_r
} \\
&=
I
\rcap \lr{
\inv{r S_\theta} \partial_\theta (S_\theta A_\phi)
-\frac{1}{r S_\theta} \partial_\phi A_\theta
}
+ I \thetacap \lr{
\frac{1}{r S_\theta} \partial_\phi A_r
-\inv{r} \partial_r (r A_\phi)
}
+ I \phicap \lr{
\inv{r} \partial_r (r A_\theta)
– \inv{r} \partial_\theta A_r
}
\end{aligned}

This gives
\label{eqn:sphericalLaplacian:500}
\boxed{
\spacegrad \cross \BA
=
\rcap \lr{
\inv{r S_\theta} \partial_\theta (S_\theta A_\phi)
-\frac{1}{r S_\theta} \partial_\phi A_\theta
}
+ \thetacap \lr{
\frac{1}{r S_\theta} \partial_\phi A_r
-\inv{r} \partial_r (r A_\phi)
}
+ \phicap \lr{
\inv{r} \partial_r (r A_\theta)
– \inv{r} \partial_\theta A_r
}.
}

This and the divergence result above both check against the back cover of [1].
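
They can also be cross checked directly against a Cartesian computation. Here is a sympy sketch that takes an arbitrary (sample) smooth field specified by its spherical components, computes the Cartesian derivatives via the chain rule, and compares divergence and curl with the boxed results:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

rcap = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])
thcap = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph), -sp.sin(th)])
phcap = sp.Matrix([-sp.sin(ph), sp.cos(ph), 0])

Ar, At, Ap = r*sp.sin(th), r*sp.cos(ph), r*th       # sample spherical components
A = Ar*rcap + At*thcap + Ap*phcap                   # Cartesian components of A
pos = r*rcap                                        # (x, y, z) in terms of (r, theta, phi)

# dA_i/dx_j via the chain rule
D = (A.jacobian([r, th, ph]) * pos.jacobian([r, th, ph]).inv()).applyfunc(sp.simplify)

S = sp.sin(th)
div_sph = sp.diff(r**2*Ar, r)/r**2 + sp.diff(S*At, th)/(r*S) + sp.diff(Ap, ph)/(r*S)
assert sp.simplify(D.trace() - div_sph) == 0

curl_cart = sp.Matrix([D[2, 1]-D[1, 2], D[0, 2]-D[2, 0], D[1, 0]-D[0, 1]])
curl_sph = ((sp.diff(S*Ap, th) - sp.diff(At, ph))/(r*S)*rcap
            + (sp.diff(Ar, ph)/(r*S) - sp.diff(r*Ap, r)/r)*thcap
            + (sp.diff(r*At, r) - sp.diff(Ar, th))/r*phcap)
assert all(sp.simplify(e) == 0 for e in curl_cart - curl_sph)
```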

### Laplacian

Using the divergence and curl it’s possible to compute the Laplacian from those, but we saw in cylindrical coordinates that it was much harder to do it that way than to do it directly.

\label{eqn:sphericalLaplacian:540}
\begin{aligned}
\spacegrad^2 \psi
&=
\lr{
\rcap \partial_{r} +
\frac{\thetacap}{r} \partial_{\theta} +
\frac{\phicap}{r S_\theta} \partial_{\phi}
}
\lr{
\rcap \partial_{r} \psi
+ \frac{\thetacap}{r} \partial_{\theta} \psi
+ \frac{\phicap}{r S_\theta} \partial_{\phi} \psi
} \\
&=
\partial_{rr} \psi
+ \rcap \thetacap \partial_r \lr{ \inv{r} \partial_\theta \psi}
+ \rcap \phicap \inv{S_\theta} \partial_r \lr{ \inv{r} \partial_\phi \psi } \\
&
\quad + \frac{\thetacap}{r} \partial_{\theta} \lr{ \rcap \partial_{r} \psi }
+ \frac{\thetacap}{r^2} \partial_{\theta} \lr{ \thetacap \partial_{\theta} \psi }
+ \frac{\thetacap}{r^2} \partial_{\theta} \lr{ \frac{\phicap}{S_\theta} \partial_{\phi} \psi } \\
&
\quad + \frac{\phicap}{r S_\theta} \partial_{\phi} \lr{ \rcap \partial_{r} \psi }
+ \frac{\phicap}{r^2 S_\theta} \partial_{\phi} \lr{ \thetacap \partial_{\theta} \psi }
+ \frac{\phicap}{r^2 S_\theta^2} \partial_{\phi} \lr{ \phicap \partial_{\phi} \psi } \\
&=
\partial_{rr} \psi
+ \rcap \thetacap \partial_r \lr{ \inv{r} \partial_\theta \psi}
+ \rcap \phicap \inv{S_\theta} \partial_r \lr{ \inv{r} \partial_\phi \psi } \\
&
\quad + \frac{\thetacap\rcap}{r} \partial_{\theta} \lr{ \partial_{r} \psi }
+ \frac{1}{r^2} \partial_{\theta \theta} \psi
+ \frac{\thetacap \phicap}{r^2} \partial_{\theta} \lr{ \frac{1}{S_\theta} \partial_{\phi} \psi } \\
&
\quad + \frac{\phicap \rcap}{r S_\theta} \partial_{\phi r} \psi
+ \frac{\phicap\thetacap}{r^2 S_\theta} \partial_{\phi\theta} \psi
+ \frac{1}{r^2 S_\theta^2} \partial_{\phi \phi} \psi \\
&
\quad + \frac{\thetacap}{r} (\partial_\theta \rcap) \partial_{r} \psi
+ \frac{\thetacap}{r^2} (\partial_\theta \thetacap) \partial_{\theta} \psi
+ \frac{\thetacap}{r^2} (\partial_\theta \phicap) \frac{\phicap}{S_\theta} \partial_{\phi} \psi \\
&
\quad + \frac{\phicap}{r S_\theta} (\partial_\phi \rcap) \partial_{r} \psi
+ \frac{\phicap}{r^2 S_\theta} (\partial_\phi \thetacap) \partial_{\theta} \psi
+ \frac{\phicap}{r^2 S_\theta^2} (\partial_\phi \phicap) \partial_{\phi} \psi \\
&=
\partial_{rr} \psi
+ \rcap \thetacap \partial_r \lr{ \inv{r} \partial_\theta \psi}
+ \rcap \phicap \inv{S_\theta} \partial_r \lr{ \inv{r} \partial_\phi \psi } \\
&
\quad + \frac{\thetacap\rcap}{r} \partial_{\theta} \lr{ \partial_{r} \psi }
+ \frac{1}{r^2} \partial_{\theta \theta} \psi
+ \frac{\thetacap \phicap}{r^2} \partial_{\theta} \lr{ \frac{1}{S_\theta} \partial_{\phi} \psi } \\
&
\quad + \frac{\phicap \rcap}{r S_\theta} \partial_{\phi r} \psi
+ \frac{\phicap\thetacap}{r^2 S_\theta} \partial_{\phi\theta} \psi
+ \frac{1}{r^2 S_\theta^2} \partial_{\phi \phi} \psi \\
&
\quad + \frac{\thetacap}{r} (\thetacap) \partial_{r} \psi
+ \frac{\thetacap}{r^2} (-\rcap) \partial_{\theta} \psi
+ \frac{\thetacap}{r^2} (0) \frac{\phicap}{S_\theta} \partial_{\phi} \psi \\
&
\quad + \frac{\phicap}{r S_\theta} (S_\theta \phicap) \partial_{r} \psi
+ \frac{\phicap}{r^2 S_\theta} (C_\theta \phicap) \partial_{\theta} \psi
+ \frac{\phicap}{r^2 S_\theta^2} (-\rcap S_\theta – \thetacap C_\theta) \partial_{\phi} \psi
\end{aligned}

All the bivector factors are expected to cancel out, but this should be checked. Those with an $$\rcap \thetacap$$ factor are

\label{eqn:sphericalLaplacian:560}
\partial_r \lr{ \inv{r} \partial_\theta \psi}
– \frac{1}{r} \partial_{\theta r} \psi
+ \frac{1}{r^2} \partial_{\theta} \psi
=
-\inv{r^2} \partial_\theta \psi
+\inv{r} \partial_{r \theta} \psi
– \frac{1}{r} \partial_{\theta r} \psi
+ \frac{1}{r^2} \partial_{\theta} \psi
= 0,

and those with a $$\thetacap \phicap$$ factor are
\label{eqn:sphericalLaplacian:580}
\frac{1}{r^2} \partial_{\theta} \lr{ \frac{1}{S_\theta} \partial_{\phi} \psi }
– \frac{1}{r^2 S_\theta} \partial_{\phi\theta} \psi
+ \frac{1}{r^2 S_\theta^2} C_\theta \partial_{\phi} \psi
=
– \frac{1}{r^2} \frac{C_\theta}{S_\theta^2} \partial_{\phi} \psi
+ \frac{1}{r^2 S_\theta} \partial_{\theta \phi} \psi
– \frac{1}{r^2 S_\theta} \partial_{\phi\theta} \psi
+ \frac{1}{r^2 S_\theta^2} C_\theta \partial_{\phi} \psi
= 0,

and those with a $$\phicap \rcap$$ factor are
\label{eqn:sphericalLaplacian:600}
– \inv{S_\theta} \partial_r \lr{ \inv{r} \partial_\phi \psi }
+ \frac{1}{r S_\theta} \partial_{\phi r} \psi
– \frac{1}{r^2 S_\theta^2} S_\theta \partial_{\phi} \psi
=
\inv{S_\theta} \frac{1}{r^2} \partial_\phi \psi
– \inv{r S_\theta} \partial_{r \phi} \psi
+ \frac{1}{r S_\theta} \partial_{\phi r} \psi
– \frac{1}{r^2 S_\theta} \partial_{\phi} \psi
= 0.

This leaves
\label{eqn:sphericalLaplacian:620}
\spacegrad^2 \psi
=
\partial_{rr} \psi
+ \frac{2}{r} \partial_{r} \psi
+ \frac{1}{r^2} \partial_{\theta \theta} \psi
+ \frac{1}{r^2 S_\theta} C_\theta \partial_{\theta} \psi
+ \frac{1}{r^2 S_\theta^2} \partial_{\phi \phi} \psi.

This factors nicely as

\label{eqn:sphericalLaplacian:640}
\boxed{
\spacegrad^2 \psi
=
\inv{r^2} \PD{r}{} \lr{ r^2 \PD{r}{ \psi} }
+ \frac{1}{r^2 \sin\theta} \PD{\theta}{} \lr{ \sin\theta \PD{\theta}{ \psi } }
+ \frac{1}{r^2 \sin^2\theta} \PDSq{\phi}{ \psi}
,
}

which checks against the back cover of Jackson. Here it has been demonstrated explicitly that this operator expression is valid for multivector fields $$\psi$$ as well as scalar fields $$\psi$$.
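
As one final sanity check, here is a sympy sketch applying the boxed operator to a scalar whose Cartesian Laplacian is known exactly ($$\psi = x^2 + y z$$, with Laplacian $$2$$):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = r*sp.sin(th)*sp.cos(ph)
y = r*sp.sin(th)*sp.sin(ph)
z = r*sp.cos(th)

psi = x**2 + y*z

lap = (sp.diff(r**2*sp.diff(psi, r), r)/r**2
       + sp.diff(sp.sin(th)*sp.diff(psi, th), th)/(r**2*sp.sin(th))
       + sp.diff(psi, ph, 2)/(r**2*sp.sin(th)**2))

assert sp.simplify(lap) == 2
```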

# References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.