## Notes.

Due to limitations in the MathJax-Latex package, all the oriented integrals in this blog post should be interpreted as having a clockwise orientation. [See the PDF version of this post for more sophisticated formatting.]

## Guts.

Given a two dimensional generating vector space, there are two instances of the fundamental theorem for multivector integration
\label{eqn:unpackingFundamentalTheorem:20}
\int_S F d\Bx \lrpartial G = \evalbar{F G}{\Delta S},

and
\label{eqn:unpackingFundamentalTheorem:40}
\int_S F d^2\Bx \lrpartial G = \oint_{\partial S} F d\Bx G.

The first case is trivial. Given a parameterized curve $$x = x(u)$$, it just states
\label{eqn:unpackingFundamentalTheorem:60}
\int_{u(0)}^{u(1)} du \PD{u}{}\lr{FG} = F(u(1))G(u(1)) - F(u(0))G(u(0)),

for all multivectors $$F, G$$, regardless of the signature of the underlying space.

The surface integral is more interesting. Let’s first look at the area element for this surface integral, which is
\label{eqn:unpackingFundamentalTheorem:80}
d^2 \Bx = d\Bx_u \wedge d \Bx_v.

Geometrically, this has the area of the parallelogram spanned by $$d\Bx_u$$ and $$d\Bx_v$$, but weighted by the pseudoscalar of the space. This is explored algebraically in the following problem and illustrated in fig. 1.

fig. 1. 2D vector space and area element.

## Problem: Expansion of 2D area bivector.

Let $$\setlr{e_1, e_2}$$ be an orthonormal basis for a two dimensional space, with reciprocal frame $$\setlr{e^1, e^2}$$. Expand the area bivector $$d^2 \Bx$$ in coordinates relating the bivector to the Jacobian and the pseudoscalar.

With parameterization $$x = x(u,v) = x^\alpha e_\alpha = x_\alpha e^\alpha$$, we have
\label{eqn:unpackingFundamentalTheorem:120}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x^\alpha} e_\alpha } \wedge
\lr{ \PD{v}{x^\beta} e_\beta }
=
\PD{u}{x^\alpha}
\PD{v}{x^\beta}
e_\alpha \wedge e_\beta
=
\PD{(u,v)}{(x^1,x^2)} e_1 e_2,

or
\label{eqn:unpackingFundamentalTheorem:160}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x_\alpha} e^\alpha } \wedge
\lr{ \PD{v}{x_\beta} e^\beta }
=
\PD{u}{x_\alpha}
\PD{v}{x_\beta}
e^\alpha \wedge e^\beta
=
\PD{(u,v)}{(x_1,x_2)} e^1 e^2.

The upper and lower index pseudoscalars are related by
\label{eqn:unpackingFundamentalTheorem:180}
e^1 e^2 e_1 e_2 =
-e^1 e^2 e_2 e_1 =
-1,

so with $$I = e_1 e_2$$,
\label{eqn:unpackingFundamentalTheorem:200}
e^1 e^2 = -I^{-1},

leaving us with
\label{eqn:unpackingFundamentalTheorem:140}
d^2 \Bx
= \PD{(u,v)}{(x^1,x^2)} du dv\, I
= -\PD{(u,v)}{(x_1,x_2)} du dv\, I^{-1}.

We see that the area bivector is proportional to either the upper or lower index Jacobian and to the pseudoscalar for the space.
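This proportionality is easy to check numerically. The following sympy sketch (my own verification, using a hypothetical plane polar parameterization, in a Euclidean metric where $$e^i = e_i$$) confirms that the bivector coefficient of $$\Bx_u \wedge \Bx_v$$ is exactly the Jacobian:

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)

# Hypothetical parameterization: plane polar coordinates in a Euclidean 2D space,
# x(u, v) = u cos v e_1 + u sin v e_2.
x1 = u * sp.cos(v)
x2 = u * sp.sin(v)

# Tangent vectors x_u, x_v as coordinate pairs.
xu = (sp.diff(x1, u), sp.diff(x2, u))
xv = (sp.diff(x1, v), sp.diff(x2, v))

# For 2D vectors a, b: a ^ b = (a1 b2 - a2 b1) e_1 e_2, so the bivector
# coefficient of x_u ^ x_v should be the Jacobian d(x1,x2)/d(u,v).
wedge_coeff = sp.simplify(xu[0] * xv[1] - xu[1] * xv[0])
jacobian = sp.Matrix([[sp.diff(x1, u), sp.diff(x1, v)],
                      [sp.diff(x2, u), sp.diff(x2, v)]]).det()

print(wedge_coeff)                          # u
print(sp.simplify(wedge_coeff - jacobian))  # 0
```

For this parameterization the common value is the familiar polar area density $$\rho$$, here called $$u$$.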

We may write the fundamental theorem for a 2D space as
\label{eqn:unpackingFundamentalTheorem:680}
\int_S du dv \, \PD{(u,v)}{(x^1,x^2)} F I \lrgrad G = \oint_{\partial S} F d\Bx G,

where we have dispensed with the vector derivative and use the gradient instead, since they are identical in a two parameter two dimensional space. Of course, unless we are using $$x^1, x^2$$ as our parameterization, we still want the curvilinear representation of the gradient $$\grad = \Bx^u \PDi{u}{} + \Bx^v \PDi{v}{}$$.

## Problem: Standard basis expansion of fundamental surface relation.

For a parameterization $$x = x^1 e_1 + x^2 e_2$$, where $$\setlr{ e_1, e_2 }$$ is a standard (orthogonal) basis, expand the fundamental theorem for surface integrals for the single sided $$F = 1$$ case. Consider functions $$G$$ of each grade (scalar, vector, bivector.)

From \ref{eqn:unpackingFundamentalTheorem:140} we see that the fundamental theorem takes the form
\label{eqn:unpackingFundamentalTheorem:220}
\int_S dx^1 dx^2\, F I \lrgrad G = \oint_{\partial S} F d\Bx G.

In a Euclidean space, the operator $$I \lrgrad$$ is a $$\pi/2$$ rotation of the gradient, but it has this rotation-like structure in any metric:
\label{eqn:unpackingFundamentalTheorem:240}
I \lrgrad
=
e_1 e_2 \lr{ e^1 \partial_1 + e^2 \partial_2 }
=
-e_2 \partial_1 + e_1 \partial_2.

• $$F = 1$$ and $$G \in \bigwedge^0$$ or $$G \in \bigwedge^2$$. For $$F = 1$$ and scalar or bivector $$G$$ we have
\label{eqn:unpackingFundamentalTheorem:260}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } G = \oint_{\partial S} d\Bx G,

where, for $$x^1 \in [x^1(0),x^1(1)]$$ and $$x^2 \in [x^2(0),x^2(1)]$$, the RHS written explicitly is
\label{eqn:unpackingFundamentalTheorem:280}
\oint_{\partial S} d\Bx G
=
\int dx^1 e_1
\lr{ G(x^1, x^2(1)) - G(x^1, x^2(0)) }
- dx^2 e_2
\lr{ G(x^1(1),x^2) - G(x^1(0), x^2) }.

This is sketched in fig. 2. Since a 2D bivector $$G$$ can be written as $$G = I g$$, where $$g$$ is a scalar, we may write the pseudoscalar case as
\label{eqn:unpackingFundamentalTheorem:300}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } g = \oint_{\partial S} d\Bx g,

after right multiplying both sides with $$I^{-1}$$. Algebraically the scalar and pseudoscalar cases can be thought of as identical scalar relationships.
• $$F = 1, G \in \bigwedge^1$$. For $$F = 1$$ and vector $$G$$ the 2D fundamental theorem for surfaces can be split into scalar
\label{eqn:unpackingFundamentalTheorem:320}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G = \oint_{\partial S} d\Bx \cdot G,

and bivector relations
\label{eqn:unpackingFundamentalTheorem:340}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G = \oint_{\partial S} d\Bx \wedge G.

To expand \ref{eqn:unpackingFundamentalTheorem:320}, let
\label{eqn:unpackingFundamentalTheorem:360}
G = g_1 e^1 + g_2 e^2,

for which
\label{eqn:unpackingFundamentalTheorem:380}
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G
=
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot
\lr{ g_1 e^1 + g_2 e^2 }
=
\partial_2 g_1 - \partial_1 g_2,

and
\label{eqn:unpackingFundamentalTheorem:400}
d\Bx \cdot G
=
\lr{ dx^1 e_1 - dx^2 e_2 } \cdot \lr{ g_1 e^1 + g_2 e^2 }
=
dx^1 g_1 - dx^2 g_2,

so \ref{eqn:unpackingFundamentalTheorem:320} expands to
\label{eqn:unpackingFundamentalTheorem:500}
\int_S dx^1 dx^2\, \lr{ \partial_2 g_1 - \partial_1 g_2 }
=
\int
\evalbar{dx^1 g_1}{\Delta x^2} - \evalbar{ dx^2 g_2 }{\Delta x^1}.

This coordinate expansion illustrates how the pseudoscalar nature of the area element results in a duality transformation, as we end up with a curl-like operation on the LHS, despite the dot-product nature of the decomposition that we used. That can also be seen directly for vector $$G$$, since
\label{eqn:unpackingFundamentalTheorem:560}
dA (I \grad) \cdot G
=
dA \gpgradezero{ I \grad G }
=
dA \gpgradezero{ I \lr{ \grad \cdot G + \grad \wedge G } }
=
dA I \lr{ \grad \wedge G },

since the scalar selection of $$I \lr{ \grad \cdot G }$$ is zero.

In the grade-2 relation \ref{eqn:unpackingFundamentalTheorem:340}, we expect a pseudoscalar cancellation on both sides, leaving a scalar (divergence-like) relationship. This time, we use upper index coordinates for the vector $$G$$, letting
\label{eqn:unpackingFundamentalTheorem:440}
G = g^1 e_1 + g^2 e_2,

so
\label{eqn:unpackingFundamentalTheorem:460}
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G
=
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge
\lr{ g^1 e_1 + g^2 e_2 }
=
e_1 e_2 \lr{ \partial_1 g^1 + \partial_2 g^2 },

and
\label{eqn:unpackingFundamentalTheorem:480}
d\Bx \wedge G
=
\lr{ dx^1 e_1 - dx^2 e_2 } \wedge
\lr{ g^1 e_1 + g^2 e_2 }
=
e_1 e_2 \lr{ dx^1 g^2 + dx^2 g^1 }.

So \ref{eqn:unpackingFundamentalTheorem:340}, after multiplication of both sides by $$I^{-1}$$, is
\label{eqn:unpackingFundamentalTheorem:520}
\int_S dx^1 dx^2\,
\lr{ \partial_1 g^1 + \partial_2 g^2 }
=
\int
\evalbar{dx^1 g^2}{\Delta x^2} + \evalbar{dx^2 g^1 }{\Delta x^1}.

As before, we’ve implicitly performed a duality transformation, and end up with a divergence operation. That can be seen directly without coordinate expansion, by rewriting the wedge as a grade two selection, and expanding the gradient action on the vector $$G$$, as follows
\label{eqn:unpackingFundamentalTheorem:580}
dA (I \grad) \wedge G
=
dA \gpgradetwo{ I \grad G }
=
dA \gpgradetwo{ I \lr{ \grad \cdot G + \grad \wedge G } }
=
dA I \lr{ \grad \cdot G },

since $$I \lr{ \grad \wedge G }$$ has only a scalar component.

fig. 2. Line integral around rectangular boundary.
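The divergence-style relation \ref{eqn:unpackingFundamentalTheorem:520} can also be verified symbolically. Here is a small sympy sketch (my own check, with arbitrary sample components $$g^1, g^2$$ that are not from the text), integrating over a unit square:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Arbitrary (hypothetical) sample components for G = g^1 e_1 + g^2 e_2.
gu1 = x1 * x2**2
gu2 = sp.sin(x1) * x2

# LHS of eqn. 520: the divergence integrated over the unit square.
lhs = sp.integrate(sp.diff(gu1, x1) + sp.diff(gu2, x2), (x1, 0, 1), (x2, 0, 1))

# RHS: the boundary evaluation terms.
rhs = (sp.integrate(gu2.subs(x2, 1) - gu2.subs(x2, 0), (x1, 0, 1))
       + sp.integrate(gu1.subs(x1, 1) - gu1.subs(x1, 0), (x2, 0, 1)))

print(sp.simplify(lhs - rhs))  # 0
```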

## Theorem 1.1: Green’s theorem [1].

Let $$S$$ be a Jordan region with a piecewise-smooth boundary $$C$$. If $$P, Q$$ are continuously differentiable on an open set that contains $$S$$, then
\begin{equation*}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} } = \oint P dx + Q dy.
\end{equation*}

## Problem: Relationship to Green’s theorem.

If the space is Euclidean, show that \ref{eqn:unpackingFundamentalTheorem:500} and \ref{eqn:unpackingFundamentalTheorem:520} are both instances of Green’s theorem with suitable choices of $$P$$ and $$Q$$.

I will omit the subtleties related to general regions and consider just the case of an infinitesimal square region.

### Start proof:

Let’s start with \ref{eqn:unpackingFundamentalTheorem:500}, with $$g_1 = P$$, $$g_2 = Q$$, and $$x^1 = x, x^2 = y$$. The LHS is
\label{eqn:unpackingFundamentalTheorem:600}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }.

On the RHS we have
\label{eqn:unpackingFundamentalTheorem:620}
\int \evalbar{dx P}{\Delta y} - \evalbar{ dy Q }{\Delta x}
=
\int dx \lr{ P(x, y_1) - P(x, y_0) } - \int dy \lr{ Q(x_1, y) - Q(x_0, y) }.

This pair of integrals is plotted in fig. 3, from which we see that \ref{eqn:unpackingFundamentalTheorem:620} can be expressed as the line integral, leaving us with
\label{eqn:unpackingFundamentalTheorem:640}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }
=
\oint dx P + dy Q,

which is Green’s theorem over the infinitesimal square integration region.

For the equivalence of \ref{eqn:unpackingFundamentalTheorem:520} to Green’s theorem, let $$g^2 = P$$, and $$g^1 = -Q$$. Plugging into the LHS, we find the Green’s theorem integrand. On the RHS, the integrand expands to
\label{eqn:unpackingFundamentalTheorem:660}
\evalbar{dx g^2}{\Delta y} + \evalbar{dy g^1 }{\Delta x}
=
dx \lr{ P(x,y_1) - P(x, y_0)}
+
dy \lr{ -Q(x_1, y) + Q(x_0, y)},

which is exactly what we found in \ref{eqn:unpackingFundamentalTheorem:620}.

### End proof.

fig. 3. Path for Green’s theorem.
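For a concrete sanity check of this equivalence (my own, with arbitrary sample choices of $$P$$ and $$Q$$), sympy confirms Green's theorem in the sign convention of Theorem 1.1 over the unit square:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Arbitrary (hypothetical) sample functions, continuously differentiable.
P = x**2 * y + sp.sin(y)
Q = x * y**3

# Area integral side of Theorem 1.1.
lhs = sp.integrate(sp.diff(P, y) - sp.diff(Q, x), (x, 0, 1), (y, 0, 1))

# Boundary side, written as the two evaluation integrals of eqn. 620.
rhs = (sp.integrate(P.subs(y, 1) - P.subs(y, 0), (x, 0, 1))
       - sp.integrate(Q.subs(x, 1) - Q.subs(x, 0), (y, 0, 1)))

print(sp.simplify(lhs - rhs))  # 0
```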

We may also relate multivector gradient integrals in 2D to the normal integral around the boundary of the bounding curve. That relationship is as follows.

## Theorem 1.2: 2D gradient integrals.

\begin{equation*}
\begin{aligned}
\int J du dv \rgrad G &= \oint I^{-1} d\Bx G = \int J \lr{ \Bx^v du + \Bx^u dv } G \\
\int J du dv F \lgrad &= \oint F I^{-1} d\Bx = \int J F \lr{ \Bx^v du + \Bx^u dv },
\end{aligned}
\end{equation*}
where $$J = \partial(x^1, x^2)/\partial(u,v)$$ is the Jacobian of the parameterization $$x = x(u,v)$$. In terms of the coordinates $$x^1, x^2$$, this reduces to
\begin{equation*}
\begin{aligned}
\int dx^1 dx^2 \rgrad G &= \oint I^{-1} d\Bx G = \int \lr{ e^2 dx^1 + e^1 dx^2 } G \\
\int dx^1 dx^2 F \lgrad &= \oint F I^{-1} d\Bx = \int F \lr{ e^2 dx^1 + e^1 dx^2 }.
\end{aligned}
\end{equation*}
The vector $$I^{-1} d\Bx$$ is orthogonal to the tangent vector along the boundary, and for Euclidean spaces it can be identified as the outwards normal.

### Start proof:

Respectively setting $$F = 1$$, and $$G = 1$$ in \ref{eqn:unpackingFundamentalTheorem:680}, we have
\label{eqn:unpackingFundamentalTheorem:940}
\int I^{-1} d^2 \Bx \rgrad G = \oint I^{-1} d\Bx G,

and
\label{eqn:unpackingFundamentalTheorem:960}
\int F d^2 \Bx \lgrad I^{-1} = \oint F d\Bx I^{-1}.

Starting with \ref{eqn:unpackingFundamentalTheorem:940} we find
\label{eqn:unpackingFundamentalTheorem:700}
\int I^{-1} J du dv I \rgrad G = \oint I^{-1} d\Bx G,

and since $$I^{-1} I = 1$$ and $$J du dv = dx^1 dx^2$$, this reduces to $$\int dx^1 dx^2 \rgrad G = \oint I^{-1} d\Bx G$$, as desired. In terms of a parameterization $$x = x(u,v)$$, the pseudoscalar for the space is
\label{eqn:unpackingFundamentalTheorem:720}
I = \frac{\Bx_u \wedge \Bx_v}{J},

so
\label{eqn:unpackingFundamentalTheorem:740}
I^{-1} = \frac{J}{\Bx_u \wedge \Bx_v}.

Also note that $$\lr{\Bx_u \wedge \Bx_v}^{-1} = \Bx^v \wedge \Bx^u$$, so
\label{eqn:unpackingFundamentalTheorem:760}
I^{-1} = J \lr{ \Bx^v \wedge \Bx^u },

and
\label{eqn:unpackingFundamentalTheorem:780}
I^{-1} d\Bx
= I^{-1} \cdot d\Bx
= J \lr{ \Bx^v \wedge \Bx^u } \cdot \lr{ \Bx_u du - \Bx_v dv }
= J \lr{ \Bx^v du + \Bx^u dv },

so the right acting gradient integral is
\label{eqn:unpackingFundamentalTheorem:800}
\int J du dv \rgrad G =
\int
\evalbar{J \Bx^v G}{\Delta v}\, du + \evalbar{J \Bx^u G}{\Delta u}\, dv,

which we write in abbreviated form as $$\int J \lr{ \Bx^v du + \Bx^u dv} G$$.
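The key boundary identity $$I^{-1} d\Bx = J \lr{ \Bx^v du + \Bx^u dv }$$ can be checked symbolically. This sympy sketch is my own verification, using a hypothetical Euclidean polar parameterization, where $$I^{-1} = -I$$ and left multiplication by $$I^{-1}$$ acts on coordinates as $$(a_1, a_2) \mapsto (-a_2, a_1)$$:

```python
import sympy as sp

u, v, du, dv = sp.symbols('u v du dv')

# Hypothetical Euclidean polar parameterization x(u, v) = (u cos v, u sin v).
x = sp.Matrix([u * sp.cos(v), u * sp.sin(v)])
xu = x.diff(u)
xv = x.diff(v)
J = sp.simplify(sp.Matrix.hstack(xu, xv).det())  # Jacobian d(x1,x2)/d(u,v)

# Euclidean reciprocal frame: rows of M^{-1}, where M has columns x_u, x_v.
R = sp.Matrix.hstack(xu, xv).inv()
xu_r = R.row(0).T   # x^u
xv_r = R.row(1).T   # x^v

# Left multiplication by I^{-1} = -e1 e2 sends (a1, a2) -> (-a2, a1).
def I_inv(a):
    return sp.Matrix([-a[1], a[0]])

# Boundary element d\Bx = x_u du - x_v dv, the orientation used in eqn. 780.
dxb = xu * du - xv * dv

lhs = sp.simplify(I_inv(dxb))
rhs = sp.simplify(J * (xv_r * du + xu_r * dv))
print(sp.simplify(lhs - rhs).T)  # Matrix([[0, 0]])
```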

For the $$G = 1$$ case, from \ref{eqn:unpackingFundamentalTheorem:960} we find
\label{eqn:unpackingFundamentalTheorem:820}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1}.

However, in a 2D space, regardless of metric, we have $$I a = -a I$$ for any vector $$a$$ (i.e. $$\grad$$ or $$d\Bx$$), so commuting the rightmost $$I^{-1}$$ through the vector on each side picks up a sign on each side, leaving
\label{eqn:unpackingFundamentalTheorem:850}
-\int J du dv F I I^{-1} \lgrad = -\oint F I^{-1} d\Bx.

After cancelling the negative sign on both sides, we have the claimed result.

To see that $$I a$$ is normal to $$a$$ for any vector $$a$$, we can compute the dot product
\label{eqn:unpackingFundamentalTheorem:860}
\lr{ I a } \cdot a
=
\gpgradezero{ I a a }
=
a^2 \gpgradezero{ I }
= 0,

since the scalar selection of a bivector is zero. Since $$I^{-1} = \pm I$$, the same argument shows that $$I^{-1} d\Bx$$ must be orthogonal to $$d\Bx$$.

### End proof.
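The scalar-selection argument can also be checked concretely with a $$2 \times 2$$ real matrix representation of the Euclidean plane algebra (the representation choice here is my own, not from the post; grade-0 selection is half the trace in this model):

```python
import numpy as np

# A standard 2x2 real matrix representation of the Euclidean plane algebra Cl(2,0).
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
I = e1 @ e2  # pseudoscalar e1 e2

def scalar_part(M):
    # Grade-0 selection in this representation is half the trace.
    return np.trace(M) / 2.0

# An arbitrary vector a = a1 e1 + a2 e2.
a = 3.0 * e1 - 2.0 * e2

# <I a a> = a^2 <I> = 0: (I a) . a vanishes, so I a is normal to a.
print(scalar_part(I @ a @ a))  # 0.0
```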

Let’s look at the geometry of the normal $$I^{-1} d\Bx$$ in a couple of 2D vector spaces. We use an integration volume of a unit square to simplify the boundary term expressions.

• Euclidean: With a parameterization $$x(u,v) = u\Be_1 + v \Be_2$$, and Euclidean basis vectors $$(\Be_1)^2 = (\Be_2)^2 = 1$$, the fundamental theorem integrated over the rectangle $$[x_0,x_1] \times [y_0,y_1]$$ is
\label{eqn:unpackingFundamentalTheorem:880}
\int dx dy \grad G =
\int
\Be_2 \lr{ G(x,y_1) - G(x,y_0) } dx +
\Be_1 \lr{ G(x_1,y) - G(x_0,y) } dy.

Each of the terms in the integrand above is illustrated in fig. 4, and we see that this is a path integral weighted by the outwards normal.

fig. 4. Outwards oriented normal for Euclidean space.

• Spacetime: Let $$x(u,v) = u \gamma_0 + v \gamma_1$$, where $$(\gamma_0)^2 = -(\gamma_1)^2 = 1$$. With $$u = t, v = x$$, the gradient integral over a $$[t_0,t_1] \times [x_0,x_1]$$ of spacetime is
\label{eqn:unpackingFundamentalTheorem:900}
\begin{aligned}
\int dt dx \grad G
&=
\int
\gamma^1 dt \lr{ G(t, x_1) - G(t, x_0) }
+
\gamma^0 dx \lr{ G(t_1, x) - G(t_0, x) } \\
&=
\int
\gamma_1 dt \lr{ -G(t, x_1) + G(t, x_0) }
+
\gamma_0 dx \lr{ G(t_1, x) - G(t_0, x) }
.
\end{aligned}

With $$t$$ plotted along the horizontal axis, and $$x$$ along the vertical, each of the terms of this integrand is illustrated graphically in fig. 5. For this mixed signature space, there is no longer any good geometrical characterization of the normal.

fig. 5. Orientation of the boundary normal for a spacetime basis.

• Spacelike:
Let $$x(u,v) = u \gamma_1 + v \gamma_2$$, where $$(\gamma_1)^2 = (\gamma_2)^2 = -1$$. With $$u = x, v = y$$, the gradient integral over a $$[x_0,x_1] \times [y_0,y_1]$$ of this space is
\label{eqn:unpackingFundamentalTheorem:920}
\begin{aligned}
\int dx dy \grad G
&=
\int
\gamma^2 dx \lr{ G(x, y_1) - G(x, y_0) }
+
\gamma^1 dy \lr{ G(x_1, y) - G(x_0, y) } \\
&=
\int
\gamma_2 dx \lr{ -G(x, y_1) + G(x, y_0) }
+
\gamma_1 dy \lr{ -G(x_1, y) + G(x_0, y) }
.
\end{aligned}

Referring to fig. 6. where the elements of the integrand are illustrated, we see that the normal $$I^{-1} d\Bx$$ for the boundary of this region can be characterized as inwards.

fig. 6. Inwards oriented normal for a Dirac spacelike basis.

# References

[1] S.L. Salas and E. Hille. Calculus: one and several variables. Wiley New York, 1990.

## Curvilinear coordinates and gradient in spacetime, and reciprocal frames.


## Motivation.

I started pondering some aspects of spacetime integration theory, and found that there were some aspects of the concepts of reciprocal frames that were not clear to me. In the process of sorting those ideas out for myself, I wrote up the following notes.

In the notes below, I will introduce many of the prerequisite ideas needed to express and apply the fundamental theorem of geometric calculus in a 4D relativistic context. The focus will be the Dirac algebra of special relativity, known as STA (Space Time Algebra) in geometric algebra parlance. If desired, it should be clear how to apply these ideas to lower or higher dimensional spaces, and to plain old Euclidean metrics.

### On notation.

In Euclidean space we use bold face reciprocal frame vectors $$\Bx^i \cdot \Bx_j = {\delta^i}_j$$, which nicely distinguishes them from the generalized coordinates $$x_i, x^j$$ associated with the basis or the reciprocal frame, that is
\label{eqn:reciprocalblog:640}
\Bx = x^i \Bx_i = x_j \Bx^j.

On the other hand, it is conventional to use non-bold face for both the four-vectors and their coordinates in STA, such as the following standard basis decomposition
\label{eqn:reciprocalblog:660}
x = x^\mu \gamma_\mu = x_\mu \gamma^\mu.

If we use non-bold face $$x^\mu, x_\nu$$ for the coordinates with respect to a specified frame, then we cannot also use non-bold face for the curvilinear basis vectors.

To resolve this notational ambiguity, I’ve chosen to use bold face $$\Bx^\mu, \Bx_\nu$$ symbols as the curvilinear basis elements in this relativistic context, as we do for Euclidean spaces.

## Definition 1.1: Standard Dirac basis.

The Dirac basis elements are $$\setlr{ \gamma_0, \gamma_1, \gamma_2, \gamma_3 }$$, satisfying
\label{eqn:reciprocalblog:1940}
\gamma_0^2 = 1 = -\gamma_k^2, \quad \forall k = 1,2,3,

and
\label{eqn:reciprocalblog:740}
\gamma_\mu \cdot \gamma_\nu = 0, \quad \forall \mu \ne \nu.

A conventional way of summarizing these orthogonality relationships is $$\gamma_\mu \cdot \gamma_\nu = \eta_{\mu\nu}$$, where $$\eta_{\mu\nu}$$ are the elements of the metric $$G = \text{diag}(+,-,-,-)$$.

## Definition 1.2: Reciprocal basis for the standard Dirac basis.

We define a reciprocal basis $$\setlr{ \gamma^0, \gamma^1, \gamma^2, \gamma^3}$$ satisfying $$\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu, \forall \mu,\nu \in 0,1,2,3$$.

## Theorem 1.1: Reciprocal basis uniqueness.

This reciprocal basis is unique, and for our choice of metric has the values
\label{eqn:reciprocalblog:1960}
\gamma^0 = \gamma_0, \quad \gamma^k = -\gamma_k, \quad \forall k = 1,2,3.

Proof is left to the reader.
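Although the proof is omitted, the claimed values are easy to sanity check numerically, for example with the standard Dirac-representation gamma matrices (a concrete matrix model assumed here, not part of the post; in this model the scalar coefficient of a product is a quarter of its trace):

```python
import numpy as np

# Dirac-representation gamma matrices, built from the Pauli matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)

g = [np.block([[I2, Z], [Z, -I2]])] + \
    [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]

# Claimed reciprocal basis: gamma^0 = gamma_0, gamma^k = -gamma_k.
gr = [g[0]] + [-gk for gk in g[1:]]

def dot(a, b):
    # Symmetrized product; the scalar coefficient is trace/4.
    return np.trace(a @ b + b @ a).real / 8.0

table = np.array([[dot(gr[mu], g[nu]) for nu in range(4)] for mu in range(4)])
print(table)  # 4x4 identity
```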

## Definition 1.3: Coordinates.

We define the coordinates of a vector with respect to the standard basis as $$x^\mu$$ satisfying
\label{eqn:reciprocalblog:1980}
x = x^\mu \gamma_\mu,

and define the coordinates of a vector with respect to the reciprocal basis as $$x_\mu$$ satisfying
\label{eqn:reciprocalblog:2000}
x = x_\mu \gamma^\mu.

## Theorem 1.2: Coordinates.

Given the definitions above, we may compute the coordinates of a vector, simply by dotting with the basis elements
\label{eqn:reciprocalblog:2020}
x^\mu = x \cdot \gamma^\mu,

and
\label{eqn:reciprocalblog:2040}
x_\mu = x \cdot \gamma_\mu.

### Start proof:

This follows by straightforward computation
\label{eqn:reciprocalblog:840}
\begin{aligned}
x \cdot \gamma^\mu
&=
\lr{ x^\nu \gamma_\nu } \cdot \gamma^\mu \\
&=
x^\nu \lr{ \gamma_\nu \cdot \gamma^\mu } \\
&=
x^\nu {\delta_\nu}^\mu \\
&=
x^\mu,
\end{aligned}

and
\label{eqn:reciprocalblog:860}
\begin{aligned}
x \cdot \gamma_\mu
&=
\lr{ x_\nu \gamma^\nu } \cdot \gamma_\mu \\
&=
x_\nu \lr{ \gamma^\nu \cdot \gamma_\mu } \\
&=
x_\nu {\delta^\nu}_\mu \\
&=
x_\mu.
\end{aligned}

### End proof.

## Derivative operators.

We’d like to determine the form of the (spacetime) gradient operator. The gradient can be defined in terms of coordinates directly, but we choose an implicit definition, in terms of the directional derivative.

## Definition 1.4: Directional derivative and gradient.

Let $$F = F(x)$$ be a four-vector parameterized multivector. The directional derivative of $$F$$ with respect to the (four-vector) direction $$a$$ is denoted
\label{eqn:reciprocalblog:2060}
\lr{ a \cdot \grad } F = \lim_{\epsilon \rightarrow 0} \frac{ F(x + \epsilon a) - F(x) }{ \epsilon },

where $$\grad$$ is called the spacetime gradient.

## Theorem 1.3: Gradient.

The standard basis representation of the gradient is
\label{eqn:reciprocalblog:2080}
\grad = \gamma^\mu \partial_\mu,

where
\label{eqn:reciprocalblog:2100}
\partial_\mu = \PD{x^\mu}{}.

### Start proof:

The Dirac gradient pops naturally out of the coordinate representation of the directional derivative, as we can see by expanding $$F(x + \epsilon a)$$ in Taylor series
\label{eqn:reciprocalblog:900}
\begin{aligned}
F(x + \epsilon a)
&= F(x) + \epsilon \frac{dF(x + \epsilon a)}{d\epsilon} + O(\epsilon^2) \\
&= F(x) + \epsilon \PD{\lr{x^\mu + \epsilon a^\mu}}{F} \PD{\epsilon}{\lr{x^\mu + \epsilon a^\mu}} \\
&= F(x) + \epsilon \PD{\lr{x^\mu + \epsilon a^\mu}}{F} a^\mu.
\end{aligned}

The directional derivative is
\label{eqn:reciprocalblog:920}
\begin{aligned}
\lim_{\epsilon \rightarrow 0}
\frac{F(x + \epsilon a) - F(x)}{\epsilon}
&=
\lim_{\epsilon \rightarrow 0}\,
a^\mu
\PD{\lr{x^\mu + \epsilon a^\mu}}{F} \\
&=
a^\mu
\PD{x^\mu}{F} \\
&=
\lr{a^\nu \gamma_\nu} \cdot \gamma^\mu \PD{x^\mu}{F} \\
&=
a \cdot \lr{ \gamma^\mu \partial_\mu } F.
\end{aligned}

### End proof.
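The coordinate form of the directional derivative is easy to check numerically. This sketch (my own, with an arbitrary sample function of the coordinates, not from the text) compares a finite difference against $$a^\mu \partial_\mu F$$:

```python
import numpy as np

# A sample scalar function of the coordinates x^mu, and its coordinate gradient.
def F(x):
    return x[0]**2 * x[1] + np.sin(x[3]) * x[2]

def dF(x):
    # Partial derivatives dF/dx^mu, computed by hand for this sample F.
    return np.array([2*x[0]*x[1], x[0]**2, np.sin(x[3]), np.cos(x[3])*x[2]])

x = np.array([1.2, -0.7, 2.0, 0.3])
a = np.array([0.5, 1.0, -2.0, 0.25])   # an arbitrary direction a = a^mu gamma_mu

# a . grad F = a^mu dF/dx^mu, since a . gamma^mu = a^mu.
exact = a @ dF(x)

eps = 1e-6
fd = (F(x + eps * a) - F(x)) / eps
print(abs(fd - exact) < 1e-4)  # True
```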

## Curvilinear bases.

Curvilinear bases are the foundation of the fundamental theorem of multivector calculus. This form of integral calculus is defined over parameterized surfaces (called manifolds) that satisfy some specific non-degeneracy and continuity requirements.

A parameterized vector $$x(u,v, \cdots w)$$ can be thought of as tracing out a hypersurface (curve, surface, volume, …), where the dimension of the hypersurface depends on the number of parameters. At each point, a basis can be constructed from the differentials of the parameterized vector. Such a basis is called the tangent space to the surface at the point in question. Our curvilinear bases will be related to these differentials. We will also be interested in a dual basis that is restricted to the span of the tangent space. This dual basis is called the reciprocal frame, and, like the basis of the tangent space itself, varies from point to point on the surface.

Fig 1a. One parameter curve, with illustration of tangent space along the curve.

Fig 1b. Two parameter surface, with illustration of tangent space along the surface.

One and two parameter spaces are illustrated in fig. 1a, and 1b. The tangent space basis at a specific point of a two parameter surface, $$x(u^0, u^1)$$, is illustrated in fig. 2. The differential directions that span the tangent space are
\label{eqn:reciprocalblog:1040}
\begin{aligned}
d\Bx_0 &= \PD{u^0}{x} du^0 \\
d\Bx_1 &= \PD{u^1}{x} du^1,
\end{aligned}

and the tangent space itself is $$\mbox{Span}\setlr{ d\Bx_0, d\Bx_1 }$$. We may form an oriented surface area element $$d\Bx_0 \wedge d\Bx_1$$ over this surface.

Fig 2. Two parameter surface.

Tangent spaces associated with 3 or more parameters cannot be easily visualized in three dimensions, but the idea generalizes algebraically without trouble.

## Definition 1.5: Tangent basis and space.

Given a parameterization $$x = x(u^0, \cdots, u^N)$$, where $$N < 4$$, the span of the vectors
\label{eqn:reciprocalblog:2120}
\Bx_\mu = \PD{u^\mu}{x},

is called the tangent space for the hypersurface associated with the parameterization, and its basis is
$$\setlr{ \Bx_\mu }$$.

Later we will see that parameterization constraints must be imposed, as not all surfaces generated by a set of parameterizations are useful for integration theory. In particular, degenerate parameterizations for which the wedge products of the tangent space basis vectors are zero, or those wedge products cannot be inverted, are not physically meaningful. Properly behaved surfaces of this sort are called manifolds.

Having introduced curvilinear coordinates associated with a parameterization, we can now determine the form of the gradient with respect to a parameterization of spacetime.

## Theorem 1.4: Gradient, curvilinear representation.

Given a spacetime parameterization $$x = x(u^0, u^1, u^2, u^3)$$, the gradient with respect to the parameters $$u^\mu$$ is
\label{eqn:reciprocalblog:2140}
\grad = \sum_\mu \Bx^\mu
\PD{u^\mu}{},

where
\label{eqn:reciprocalblog:2160}
\Bx^\mu = \grad u^\mu.

The vectors $$\Bx^\mu$$ are called the reciprocal frame vectors, and the ordered set $$\setlr{ \Bx^0, \Bx^1, \Bx^2, \Bx^3 }$$ is called the reciprocal basis.

It is convenient to define $$\partial_\mu \equiv \PDi{u^\mu}{}$$, so that the gradient can be expressed in mixed index representation
\label{eqn:reciprocalblog:2180}
\grad = \Bx^\mu \partial_\mu.

This introduces some notational ambiguity, since we used $$\partial_\mu = \PDi{x^\mu}{}$$ for the standard basis derivative operators too, but we will be careful to be explicit when there is any doubt about what is intended.

### Start proof:

The proof follows by application of the chain rule.
\label{eqn:reciprocalblog:960}
\begin{aligned}
\grad F
&=
\gamma^\alpha \PD{x^\alpha}{F} \\
&=
\gamma^\alpha
\PD{x^\alpha}{u^\mu}
\PD{u^\mu}{F} \\
&=
\lr{ \grad u^\mu } \PD{u^\mu}{F} \\
&=
\Bx^\mu \PD{u^\mu}{F}.
\end{aligned}

### End proof.

## Theorem 1.5: Reciprocal relationship.

The vectors $$\Bx^\mu = \grad u^\mu$$, and $$\Bx_\mu = \PDi{u^\mu}{x}$$ satisfy the reciprocal relationship
\label{eqn:reciprocalblog:2200}
\Bx^\mu \cdot \Bx_\nu = {\delta^\mu}_\nu.

### Start proof:

\label{eqn:reciprocalblog:1020}
\begin{aligned}
\Bx^\mu \cdot \Bx_\nu
&=
\lr{ \grad u^\mu } \cdot \PD{u^\nu}{x} \\
&=
\lr{
\gamma^\alpha \PD{x^\alpha}{u^\mu}
}
\cdot
\lr{
\PD{u^\nu}{x^\beta} \gamma_\beta
} \\
&=
{\delta^\alpha}_\beta \PD{x^\alpha}{u^\mu}
\PD{u^\nu}{x^\beta} \\
&=
\PD{x^\alpha}{u^\mu} \PD{u^\nu}{x^\alpha} \\
&=
\PD{u^\nu}{u^\mu} \\
&=
{\delta^\mu}_\nu
.
\end{aligned}

### End proof.

It is instructive to consider an example. Here is a parameterization that scales the proper time parameter, and uses polar coordinates in the $$x-y$$ plane.

## Problem: Compute the curvilinear and reciprocal basis.

Given
\label{eqn:reciprocalblog:2360}
x(t,\rho,\theta,z) = c t \gamma_0 + \gamma_1 \rho e^{i \theta} + z \gamma_3,

where $$i = \gamma_1 \gamma_2$$, compute the curvilinear frame vectors and their reciprocals.

The frame vectors are all easy to compute
\label{eqn:reciprocalblog:1180}
\begin{aligned}
\Bx_0 &= \PD{t}{x} = c \gamma_0 \\
\Bx_1 &= \PD{\rho}{x} = \gamma_1 e^{i \theta} \\
\Bx_2 &= \PD{\theta}{x} = \rho \gamma_1 \gamma_1 \gamma_2 e^{i \theta} = -\rho \gamma_2 e^{i \theta} \\
\Bx_3 &= \PD{z}{x} = \gamma_3.
\end{aligned}

The $$\Bx_1$$ vector is radial, and $$\Bx_2$$ is perpendicular to it, tangent to the circle of radius $$\rho$$, as plotted in fig. 3.

Fig 3: Tangent space direction vectors.

All of these particular frame vectors happen to be mutually perpendicular, something that will not generally be true for a more arbitrary parameterization.

To compute the reciprocal frame vectors, we must express our parameters in terms of the $$x^\mu$$ coordinates, and use implicit differentiation techniques to deal with the coupling of the rotational terms. First observe that
\label{eqn:reciprocalblog:1200}
\gamma_1 e^{i\theta}
= \gamma_1 \lr{ \cos\theta + \gamma_1 \gamma_2 \sin\theta }
= \gamma_1 \cos\theta - \gamma_2 \sin\theta,

so
\label{eqn:reciprocalblog:1220}
\begin{aligned}
x^0 &= c t \\
x^1 &= \rho \cos\theta \\
x^2 &= -\rho \sin\theta \\
x^3 &= z.
\end{aligned}

We can easily evaluate the $$t, z$$ gradients
\label{eqn:reciprocalblog:1240}
\begin{aligned}
\grad t &= \frac{\gamma^0}{c} \\
\grad z &= \gamma^3,
\end{aligned}

but the $$\rho, \theta$$ gradients are not as easy. First writing
\label{eqn:reciprocalblog:1260}
\rho^2 = \lr{x^1}^2 + \lr{x^2}^2,

we find
\label{eqn:reciprocalblog:1280}
\begin{aligned}
2 \rho \grad \rho = 2 \lr{ x^1 \grad x^1 + x^2 \grad x^2 }
&= 2 \rho \lr{ \cos\theta \gamma^1 - \sin\theta \gamma^2 } \\
&= 2 \rho \gamma^1 \lr{ \cos\theta - \gamma_1 \gamma^2 \sin\theta } \\
&= 2 \rho \gamma^1 e^{i\theta},
\end{aligned}

so
\label{eqn:reciprocalblog:1300}
\grad \rho = \gamma^1 e^{i\theta}.

For the $$\theta$$ gradient, we can write
\label{eqn:reciprocalblog:1320}
\tan\theta = -\frac{x^2}{x^1},

so
\label{eqn:reciprocalblog:1340}
\begin{aligned}
\inv{\cos^2 \theta} \grad \theta
&= -\frac{\gamma^2}{x^1} - x^2 \frac{-\gamma^1}{\lr{x^1}^2} \\
&= \inv{\lr{x^1}^2} \lr{ -\gamma^2 x^1 + \gamma^1 x^2 } \\
&= \frac{\rho}{\rho^2 \cos^2\theta } \lr{ -\gamma^2 \cos\theta - \gamma^1 \sin\theta } \\
&= -\frac{1}{\rho \cos^2\theta } \gamma^2 \lr{ \cos\theta + \gamma_2 \gamma^1 \sin\theta } \\
&= -\frac{\gamma^2 e^{i\theta} }{\rho \cos^2\theta },
\end{aligned}

or
\label{eqn:reciprocalblog:1360}
\grad\theta = -\inv{\rho} \gamma^2 e^{i\theta}.

In summary,
\label{eqn:reciprocalblog:1380}
\begin{aligned}
\Bx^0 &= \frac{\gamma^0}{c} \\
\Bx^1 &= \gamma^1 e^{i\theta} \\
\Bx^2 &= -\inv{\rho} \gamma^2 e^{i\theta} \\
\Bx^3 &= \gamma^3.
\end{aligned}
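As a sanity check (my own verification, not in the original), we can confirm the reciprocal relationship $$\Bx^\mu \cdot \Bx_\nu = {\delta^\mu}_\nu$$ for this parameterization with sympy, representing each vector by its $$\gamma_\mu$$ coordinates and computing dot products through the metric:

```python
import sympy as sp

c, rho, theta = sp.symbols('c rho theta', positive=True)
C, S = sp.cos(theta), sp.sin(theta)
eta = sp.diag(1, -1, -1, -1)

# gamma_mu coordinates of the frame vectors (eqn. 1180, expanded).
frame = [
    sp.Matrix([c, 0, 0, 0]),              # x_0
    sp.Matrix([0, C, -S, 0]),             # x_1
    sp.Matrix([0, -rho*S, -rho*C, 0]),    # x_2
    sp.Matrix([0, 0, 0, 1]),              # x_3
]

# gamma_mu coordinates of the claimed reciprocals (eqn. 1380, expanded).
recip = [
    sp.Matrix([1/c, 0, 0, 0]),            # x^0
    sp.Matrix([0, -C, S, 0]),             # x^1
    sp.Matrix([0, S/rho, C/rho, 0]),      # x^2
    sp.Matrix([0, 0, 0, -1]),             # x^3
]

# x^mu . x_nu computed with the metric: a . b = a^T eta b.
table = sp.Matrix(4, 4, lambda m, n: sp.simplify((recip[m].T * eta * frame[n])[0]))
print(table)  # identity matrix
```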

Despite being a fairly simple parameterization, it was still fairly difficult to solve for the gradients when the parameterization introduced coupling between the coordinates. In this particular case, we could have solved for the parameters in terms of the coordinates (but it was easier not to), but that will not generally be true. We want a less labor intensive strategy to find the reciprocal frame. When we have a full parameterization of spacetime, then we can do this with nothing more than a matrix inversion.

## Theorem 1.6: Reciprocal frame matrix equations.

Given a spacetime basis $$\setlr{\Bx_0, \cdots \Bx_3}$$, let $$[\Bx_\mu]$$ and $$[\Bx^\nu]$$ be column matrices with the coordinates of these vectors and their reciprocals, with respect to the standard basis $$\setlr{\gamma_0, \gamma_1, \gamma_2, \gamma_3 }$$. Let
\label{eqn:reciprocalblog:2220}
A =
\begin{bmatrix}
[\Bx_0] & \cdots & [\Bx_{3}]
\end{bmatrix}, \qquad
X =
\begin{bmatrix}
[\Bx^0] & \cdots & [\Bx^{3}]
\end{bmatrix}.

The coordinates of the reciprocal frame vectors can be found by solving
\label{eqn:reciprocalblog:2240}
A^\T G X = 1,

where $$G = \text{diag}(1,-1,-1,-1)$$ and the RHS is a $$4 \times 4$$ identity matrix.

### Start proof:

Let $$\Bx_\mu = {a_\mu}^\alpha \gamma_\alpha, \Bx^\nu = b^{\nu\beta} \gamma_\beta$$, so that
\label{eqn:reciprocalblog:140}
A =
\begin{bmatrix}
{a_\nu}^\mu
\end{bmatrix},

and
\label{eqn:reciprocalblog:160}
X =
\begin{bmatrix}
b^{\nu\mu}
\end{bmatrix},

where $$\mu \in [0,3]$$ are the row indexes and $$\nu \in [0,3]$$ are the column indexes. The reciprocal frame satisfies $$\Bx_\mu \cdot \Bx^\nu = {\delta_\mu}^\nu$$, which has the coordinate representation of
\label{eqn:reciprocalblog:180}
\begin{aligned}
\Bx_\mu \cdot \Bx^\nu
&=
\lr{
{a_\mu}^\alpha \gamma_\alpha
}
\cdot
\lr{
b^{\nu\beta} \gamma_\beta
} \\
&=
{a_\mu}^\alpha
\eta_{\alpha\beta}
b^{\nu\beta} \\
&=
{[A^\T G X]_\mu}^\nu,
\end{aligned}

where $$\mu$$ is the row index and $$\nu$$ is the column index. Demanding that this equal $${\delta_\mu}^\nu$$, the identity matrix, is precisely the matrix equation $$A^\T G X = 1$$.

### End proof.

## Problem: Matrix inversion reciprocals.

For the parameterization of \ref{eqn:reciprocalblog:2360}, find the reciprocal frame vectors by matrix inversion.

We expanded $$\Bx_1$$ explicitly in \ref{eqn:reciprocalblog:1200}. Doing the same for $$\Bx_2$$, we have
\label{eqn:reciprocalblog:1201}
\Bx_2 =
-\rho \gamma_2 e^{i\theta}
= -\rho \gamma_2 \lr{ \cos\theta + \gamma_1 \gamma_2 \sin\theta }
= -\rho \lr{ \gamma_2 \cos\theta + \gamma_1 \sin\theta}.

Reading off the coordinates of our frame vectors, we have
\label{eqn:reciprocalblog:1400}
A =
\begin{bmatrix}
c & 0 & 0 & 0 \\
0 & C & -\rho S & 0 \\
0 & -S & -\rho C & 0 \\
0 & 0 & 0 & 1 \\
\end{bmatrix},

where $$C = \cos\theta$$ and $$S = \sin\theta$$. Solving \ref{eqn:reciprocalblog:2240} for the reciprocal coordinates, we want
\label{eqn:reciprocalblog:1420}
X =
{\begin{bmatrix}
c & 0 & 0 & 0 \\
0 & -C & S & 0 \\
0 & \rho S & \rho C & 0 \\
0 & 0 & 0 & -1 \\
\end{bmatrix}}^{-1}
=
\begin{bmatrix}
\inv{c} & 0 & 0 & 0 \\
0 & -C & \frac{S}{\rho} & 0 \\
0 & S & \frac{C}{\rho} & 0 \\
0 & 0 & 0 & -1 \\
\end{bmatrix}.

We can read off the coordinates of the reciprocal frame vectors
\label{eqn:reciprocalblog:1440}
\begin{aligned}
\Bx^0 &= \inv{c} \gamma_0 \\
\Bx^1 &= -\cos\theta \gamma_1 + \sin\theta \gamma_2 \\
\Bx^2 &= \inv{\rho} \lr{ \sin\theta \gamma_1 + \cos\theta \gamma_2 } \\
\Bx^3 &= -\gamma_3.
\end{aligned}
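This inversion is easy to sanity check numerically. A minimal numpy sketch, using arbitrary sample values for $$c, \rho, \theta$$ (the variable names are my own):

```python
import numpy as np

# Arbitrary sample values for the scale factor and cylindrical parameters.
c, rho, theta = 2.0, 3.0, 0.7
C, S = np.cos(theta), np.sin(theta)

# Metric for the gamma basis: diag(1, -1, -1, -1).
G = np.diag([1.0, -1.0, -1.0, -1.0])

# Columns of X hold the coordinates of the frame vectors x_0, x_1, x_2, x_3.
X = np.array([
    [c, 0, 0, 0],
    [0, C, -rho * S, 0],
    [0, -S, -rho * C, 0],
    [0, 0, 0, 1.0],
])

# Reciprocal frame coordinates solve X^T G Y = I.
Y = np.linalg.inv(X.T @ G)

# Closed form read off in the text.
Y_expected = np.array([
    [1 / c, 0, 0, 0],
    [0, -C, S / rho, 0],
    [0, S, C / rho, 0],
    [0, 0, 0, -1.0],
])

assert np.allclose(Y, Y_expected)
assert np.allclose(X.T @ G @ Y, np.eye(4))
```

The final assertion confirms the defining relation $$X^\T G Y = 1$$ directly.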

Factoring out $$\gamma^1$$ from the $$\Bx^1$$ terms, we find
\label{eqn:reciprocalblog:1460}
\begin{aligned}
\Bx^1
&= -\cos\theta \gamma_1 + \sin\theta \gamma_2 \\
&= \gamma^1 \lr{ \cos\theta + \gamma_1 \gamma_2 \sin\theta } \\
&= \gamma^1 e^{i\theta}.
\end{aligned}

Similarly for $$\Bx^2$$,
\label{eqn:reciprocalblog:1480}
\begin{aligned}
\Bx^2
&= \inv{\rho} \lr{ \sin\theta \gamma_1 + \cos\theta \gamma_2 } \\
&= \frac{\gamma^2}{\rho} \lr{ \sin\theta \gamma_2 \gamma_1 - \cos\theta } \\
&= -\frac{\gamma^2}{\rho} e^{i\theta}.
\end{aligned}

This matches \ref{eqn:reciprocalblog:1380}, as expected, but required only algebraic work to compute.

There will be circumstances where we parameterize only a subset of spacetime, and are interested in calculating quantities associated with such a surface. For example, suppose that
\label{eqn:reciprocalblog:1500}
x(\rho,\theta) = \gamma_1 \rho e^{i \theta},

where $$i = \gamma_1 \gamma_2$$ as before. We are now parameterizing only the $$x-y$$ plane. We will still find
\label{eqn:reciprocalblog:1520}
\begin{aligned}
\Bx_1 &= \gamma_1 e^{i \theta} \\
\Bx_2 &= -\gamma_2 \rho e^{i \theta}.
\end{aligned}

We can compute the reciprocals of these vectors using the gradient method. It’s possible to state matrix equations representing the reciprocal relationship of \ref{eqn:reciprocalblog:2200}, which, in this case, is $$X^\T G Y = 1$$, where the RHS is a $$2 \times 2$$ identity matrix, and $$X, Y$$ are $$4\times 2$$ matrices of coordinates, with
\label{eqn:reciprocalblog:1540}
X =
\begin{bmatrix}
0 & 0 \\
C & -\rho S \\
-S & -\rho C \\
0 & 0
\end{bmatrix}.

We no longer have a square matrix problem to solve, and our solution set is multivalued. In particular, this matrix equation has solutions
\label{eqn:reciprocalblog:1560}
\begin{aligned}
\Bx^1 &= \gamma^1 e^{i\theta} + \alpha \gamma^0 + \beta \gamma^3 \\
\Bx^2 &= -\frac{\gamma^2}{\rho} e^{i\theta} + \alpha' \gamma^0 + \beta' \gamma^3,
\end{aligned}

where $$\alpha, \alpha', \beta, \beta'$$ are arbitrary constants. In the example we considered, we saw that our $$\rho, \theta$$ parameters were functions of only $$x^1, x^2$$, so taking gradients could not introduce any $$\gamma^0, \gamma^3$$ dependence in $$\Bx^1, \Bx^2$$. It seems reasonable to seek an algebraic method of computing a set of vectors that satisfies the reciprocal relationships, with that set of vectors restricted to the tangent space. We will still need to figure out how to prove that this reciprocal construction is identical to the parameter gradients, but let’s start by figuring out what such a tangent space restricted solution looks like.
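As a numerical aside, the minimum norm solution of this underdetermined matrix system coincides with the tangent space solution for this example. A small numpy sketch (the use of the pseudoinverse here is my own verification device, not part of the construction above):

```python
import numpy as np

# Arbitrary sample parameter values.
rho, theta = 3.0, 0.7
C, S = np.cos(theta), np.sin(theta)

G = np.diag([1.0, -1.0, -1.0, -1.0])

# Columns hold the coordinates of x_1, x_2 for the x-y plane parameterization.
X = np.array([
    [0.0, 0.0],
    [C, -rho * S],
    [-S, -rho * C],
    [0.0, 0.0],
])

# Minimum norm solution of the underdetermined system X^T G Y = I.
Y = np.linalg.pinv(X.T @ G)

assert np.allclose(X.T @ G @ Y, np.eye(2))
# No gamma_0, gamma_3 components: the alpha = beta = 0 (tangent space) solution.
assert np.allclose(Y[0], 0) and np.allclose(Y[3], 0)
assert np.allclose(Y[:, 0], [0, -C, S, 0])              # x^1 = gamma^1 e^{i theta}
assert np.allclose(Y[:, 1], [0, S / rho, C / rho, 0])   # x^2 = -(gamma^2/rho) e^{i theta}
```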

## Theorem 1.7: Reciprocal frame for two parameter subspace.

Given two vectors, $$\Bx_1, \Bx_2$$, the vectors $$\Bx^1, \Bx^2 \in \mbox{Span}\setlr{ \Bx_1, \Bx_2 }$$ such that $$\Bx^\mu \cdot \Bx_\nu = {\delta^\mu}_\nu$$ are given by
\label{eqn:reciprocalblog:2260}
\begin{aligned}
\Bx^1 &= \Bx_2 \cdot \inv{\Bx_1 \wedge \Bx_2} \\
\Bx^2 &= -\Bx_1 \cdot \inv{\Bx_1 \wedge \Bx_2},
\end{aligned}

provided $$\Bx_1 \wedge \Bx_2 \ne 0$$ and
$$\lr{ \Bx_1 \wedge \Bx_2 }^2 \ne 0$$.

### Start proof:

The most general set of vectors that satisfy the span constraint are
\label{eqn:reciprocalblog:1580}
\begin{aligned}
\Bx^1 &= a \Bx_1 + b \Bx_2 \\
\Bx^2 &= c \Bx_1 + d \Bx_2.
\end{aligned}

We can use wedge products with either $$\Bx_1$$ or $$\Bx_2$$ to eliminate the other from the RHS
\label{eqn:reciprocalblog:1600}
\begin{aligned}
\Bx^1 \wedge \Bx_2 &= a \lr{ \Bx_1 \wedge \Bx_2 } \\
\Bx^1 \wedge \Bx_1 &= – b \lr{ \Bx_1 \wedge \Bx_2 } \\
\Bx^2 \wedge \Bx_2 &= c \lr{ \Bx_1 \wedge \Bx_2 } \\
\Bx^2 \wedge \Bx_1 &= – d \lr{ \Bx_1 \wedge \Bx_2 },
\end{aligned}

and then dot both sides with $$\Bx_1 \wedge \Bx_2$$ to produce four scalar equations
\label{eqn:reciprocalblog:1640}
\begin{aligned}
a \lr{ \Bx_1 \wedge \Bx_2 }^2
&= \lr{ \Bx^1 \wedge \Bx_2 } \cdot \lr{ \Bx_1 \wedge \Bx_2 } \\
&=
\lr{ \Bx_2 \cdot \Bx_1 } \lr{ \Bx^1 \cdot \Bx_2 }
-
\lr{ \Bx_2 \cdot \Bx_2 } \lr{ \Bx^1 \cdot \Bx_1 } \\
&=
\lr{ \Bx_2 \cdot \Bx_1 } (0)
-
\lr{ \Bx_2 \cdot \Bx_2 } (1) \\
&= - \Bx_2 \cdot \Bx_2
\end{aligned}

\label{eqn:reciprocalblog:1660}
\begin{aligned}
- b \lr{ \Bx_1 \wedge \Bx_2 }^2
&=
\lr{ \Bx^1 \wedge \Bx_1 } \cdot \lr{ \Bx_1 \wedge \Bx_2 } \\
&=
\lr{ \Bx^1 \cdot \Bx_2 } \lr{ \Bx_1 \cdot \Bx_1 }
-
\lr{ \Bx^1 \cdot \Bx_1 } \lr{ \Bx_1 \cdot \Bx_2 } \\
&=
(0) \lr{ \Bx_1 \cdot \Bx_1 }
-
(1) \lr{ \Bx_1 \cdot \Bx_2 } \\
&= - \Bx_1 \cdot \Bx_2
\end{aligned}

\label{eqn:reciprocalblog:1680}
\begin{aligned}
c \lr{ \Bx_1 \wedge \Bx_2 }^2
&= \lr{ \Bx^2 \wedge \Bx_2 } \cdot \lr{ \Bx_1 \wedge \Bx_2 } \\
&=
\lr{ \Bx_2 \cdot \Bx_1 } \lr{ \Bx^2 \cdot \Bx_2 }
-
\lr{ \Bx_2 \cdot \Bx_2 } \lr{ \Bx^2 \cdot \Bx_1 } \\
&=
\lr{ \Bx_2 \cdot \Bx_1 } (1)
-
\lr{ \Bx_2 \cdot \Bx_2 } (0) \\
&= \Bx_2 \cdot \Bx_1
\end{aligned}

\label{eqn:reciprocalblog:1700}
\begin{aligned}
- d \lr{ \Bx_1 \wedge \Bx_2 }^2
&= \lr{ \Bx^2 \wedge \Bx_1 } \cdot \lr{ \Bx_1 \wedge \Bx_2 } \\
&=
\lr{ \Bx_1 \cdot \Bx_1 } \lr{ \Bx^2 \cdot \Bx_2 }
-
\lr{ \Bx_1 \cdot \Bx_2 } \lr{ \Bx^2 \cdot \Bx_1 } \\
&=
\lr{ \Bx_1 \cdot \Bx_1 } (1)
-
\lr{ \Bx_1 \cdot \Bx_2 } (0) \\
&= \Bx_1 \cdot \Bx_1.
\end{aligned}

Putting the pieces together we have
\label{eqn:reciprocalblog:1740}
\begin{aligned}
\Bx^1
&= \frac{ - \lr{ \Bx_2 \cdot \Bx_2 } \Bx_1 + \lr{ \Bx_1 \cdot \Bx_2 } \Bx_2
}{\lr{\Bx_1 \wedge \Bx_2}^2} \\
&=
\frac{
\Bx_2 \cdot \lr{ \Bx_1 \wedge \Bx_2 }
}{\lr{\Bx_1 \wedge \Bx_2}^2} \\
&=
\Bx_2 \cdot \inv{\Bx_1 \wedge \Bx_2}
\end{aligned}

\label{eqn:reciprocalblog:1760}
\begin{aligned}
\Bx^2
&=
\frac{ \lr{ \Bx_1 \cdot \Bx_2 } \Bx_1 - \lr{ \Bx_1 \cdot \Bx_1 } \Bx_2
}{\lr{\Bx_1 \wedge \Bx_2}^2} \\
&=
\frac{ -\Bx_1 \cdot \lr{ \Bx_1 \wedge \Bx_2 } }
{\lr{\Bx_1 \wedge \Bx_2}^2} \\
&=
-\Bx_1 \cdot \inv{\Bx_1 \wedge \Bx_2}.
\end{aligned}

### End proof.
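The result is easy to verify numerically. A small numpy sketch, using the coordinate expansions above with a spacetime metric and arbitrarily chosen (non-degenerate) tangent vectors:

```python
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])   # spacetime metric

def dot(u, v):
    return u @ G @ v

# Two arbitrary, non-degenerate tangent vectors.
x1 = np.array([1.0, 2.0, 0.5, -1.0])
x2 = np.array([0.3, -1.0, 2.0, 1.5])

# (x1 ^ x2)^2 = (x1 . x2)^2 - x1^2 x2^2, valid for any metric.
W2 = dot(x1, x2) ** 2 - dot(x1, x1) * dot(x2, x2)

# Coordinate expansions of x^1 = x2 . (x1 ^ x2)^{-1}, x^2 = -x1 . (x1 ^ x2)^{-1}.
r1 = (-dot(x2, x2) * x1 + dot(x1, x2) * x2) / W2
r2 = (dot(x1, x2) * x1 - dot(x1, x1) * x2) / W2

assert np.isclose(dot(r1, x1), 1) and np.isclose(dot(r1, x2), 0)
assert np.isclose(dot(r2, x2), 1) and np.isclose(dot(r2, x1), 0)
```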

## Lemma 1.1: Distribution identity.

Given k-vectors $$B, C$$ and a vector $$a$$, where the grade of $$C$$ is greater than that of $$B$$, then
\label{eqn:reciprocalblog:2280}
\lr{a \wedge B} \cdot C = a \cdot \lr{ B \cdot C }.

See [1] for a proof.

## Theorem 1.8: Higher order tangent space reciprocals.

Given an $$N$$ parameter tangent space with basis $$\setlr{ \Bx_0, \Bx_1, \cdots \Bx_{N-1} }$$, the reciprocals are given by
\label{eqn:reciprocalblog:2300}
\Bx^\mu = (-1)^\mu
\lr{ \Bx_0 \wedge \cdots \check{\Bx_\mu} \cdots \wedge \Bx_{N-1} } \cdot I_N^{-1},

where the checked term ($$\check{\Bx_\mu}$$) indicates that all terms are included in the wedges except the $$\Bx_\mu$$ term, and $$I_N = \Bx_0 \wedge \cdots \Bx_{N-1}$$ is the pseudoscalar for the tangent space.

### Start proof:

I’ll outline the proof for the three parameter tangent space case, from which the pattern will be clear. The motivation for this proof is a reexamination of the algebraic structure of the two vector solution. Suppose we have a tangent space basis $$\setlr{\Bx_0, \Bx_1}$$, for which we’ve shown that
\label{eqn:reciprocalblog:1860}
\begin{aligned}
\Bx^0
&= \Bx_1 \cdot \inv{\Bx_0 \wedge \Bx_1} \\
&= \frac{\Bx_1 \cdot \lr{\Bx_0 \wedge \Bx_1} }{\lr{ \Bx_0 \wedge \Bx_1}^2 }.
\end{aligned}

If we dot with $$\Bx_0$$ and $$\Bx_1$$ respectively, we find
\label{eqn:reciprocalblog:1800}
\begin{aligned}
\Bx_0 \cdot \Bx^0
&=
\Bx_0 \cdot \frac{ \Bx_1 \cdot \lr{ \Bx_0 \wedge \Bx_1 } }{\lr{ \Bx_0 \wedge \Bx_1}^2 } \\
&=
\lr{ \Bx_0 \wedge \Bx_1 } \cdot \frac{ \Bx_0 \wedge \Bx_1 }{\lr{ \Bx_0 \wedge \Bx_1}^2 }.
\end{aligned}

We end up with unity as expected. Here the
“factored” out vector is reincorporated into the pseudoscalar using the distribution identity \ref{eqn:reciprocalblog:2280}.
Similarly, dotting with $$\Bx_1$$, we find
\label{eqn:reciprocalblog:0810}
\begin{aligned}
\Bx_1 \cdot \Bx^0
&=
\Bx_1 \cdot \frac{ \Bx_1 \cdot \lr{ \Bx_0 \wedge \Bx_1 } }{\lr{ \Bx_0 \wedge \Bx_1}^2 } \\
&=
\lr{ \Bx_1 \wedge \Bx_1 } \cdot \frac{ \Bx_0 \wedge \Bx_1 }{\lr{ \Bx_0 \wedge \Bx_1}^2 }.
\end{aligned}

This is zero, since wedging a vector with itself is zero. We can perform such an operation in reverse, taking the square of the tangent space pseudoscalar, and factoring out one of the basis vectors. After this, division by that squared pseudoscalar will normalize things.

For a three parameter tangent space with basis $$\setlr{ \Bx_0, \Bx_1, \Bx_2 }$$, we can factor out any of the tangent vectors like so
\label{eqn:reciprocalblog:1880}
\begin{aligned}
\lr{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 }^2
&= \Bx_0 \cdot \lr{ \lr{ \Bx_1 \wedge \Bx_2 } \cdot \lr{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 } } \\
&= (-1) \Bx_1 \cdot \lr{ \lr{ \Bx_0 \wedge \Bx_2 } \cdot \lr{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 } } \\
&= (-1)^2 \Bx_2 \cdot \lr{ \lr{ \Bx_0 \wedge \Bx_1 } \cdot \lr{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 } }.
\end{aligned}

The toggling of sign reflects the number of permutations required to move the vector of interest to the front of the wedge sequence. Having factored out any one of the vectors, we can rearrange to find the vector that is its reciprocal, perpendicular to all the others.
\label{eqn:reciprocalblog:1900}
\begin{aligned}
\Bx^0 &= (-1)^0 \lr{ \Bx_1 \wedge \Bx_2 } \cdot \inv{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 } \\
\Bx^1 &= (-1)^1 \lr{ \Bx_0 \wedge \Bx_2 } \cdot \inv{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 } \\
\Bx^2 &= (-1)^2 \lr{ \Bx_0 \wedge \Bx_1 } \cdot \inv{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 }.
\end{aligned}

### End proof.

In the fashion above, should we want the reciprocal frame for all of spacetime, given a four parameter tangent space, we can state it trivially
\label{eqn:reciprocalblog:1920}
\begin{aligned}
\Bx^0 &= (-1)^0 \lr{ \Bx_1 \wedge \Bx_2 \wedge \Bx_3 } \cdot \inv{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 \wedge \Bx_3 } \\
\Bx^1 &= (-1)^1 \lr{ \Bx_0 \wedge \Bx_2 \wedge \Bx_3 } \cdot \inv{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 \wedge \Bx_3 } \\
\Bx^2 &= (-1)^2 \lr{ \Bx_0 \wedge \Bx_1 \wedge \Bx_3 } \cdot \inv{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 \wedge \Bx_3 } \\
\Bx^3 &= (-1)^3 \lr{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 } \cdot \inv{ \Bx_0 \wedge \Bx_1 \wedge \Bx_2 \wedge \Bx_3 }.
\end{aligned}

This is probably not an efficient way to compute all these reciprocals, since we can utilize a single matrix inversion to solve them in one shot. However, there are theoretical advantages to this construction that will be useful when we get to integration theory.

### On degeneracy.

Degeneracy was mentioned briefly above. Regardless of metric, $$\Bx_0 \wedge \Bx_1 = 0$$ means that this pair of vectors is collinear. A tangent space with such a pseudoscalar is clearly undesirable, and we must construct parameterizations for which the area element is non-zero in all regions of interest.

Things get more interesting in mixed signature spaces where we can have vectors that square to zero (i.e. lightlike). If the tangent space pseudoscalar has a lightlike factor, then that pseudoscalar will not be invertible. Such a degeneracy will likely lead to many other troubles, and parameterizations of this sort should be avoided.

The following problem illustrates an example of this sort of degenerate parameterization.

## Problem: Degenerate surface parameterization.

Given a spacetime plane parameterization $$x(u,v) = u a + v b$$, where
\label{eqn:reciprocalblog:480}
a = \gamma_0 + \gamma_1 + \gamma_2 + \gamma_3,

\label{eqn:reciprocalblog:500}
b = \gamma_0 - \gamma_1 + \gamma_2 - \gamma_3,

show that this is a degenerate parameterization, and find the bivector that represents the tangent space. Are these vectors lightlike, spacelike, or timelike? Comment on whether this parameterization represents a physically relevant spacetime surface.

To characterize the vectors, we square them
\label{eqn:reciprocalblog:1080}
a^2 = b^2 =
\gamma_0^2 +
\gamma_1^2 +
\gamma_2^2 +
\gamma_3^2
=
1 - 3
= -2,

so $$a, b$$ are both spacelike vectors. The tangent space is clearly just $$\mbox{Span}\setlr{ a, b } = \mbox{Span}\setlr{ e, f }$$ where
\label{eqn:reciprocalblog:1100}
\begin{aligned}
e &= \gamma_0 + \gamma_2 \\
f &= \gamma_1 + \gamma_3.
\end{aligned}

Observe that $$a = e + f, b = e – f$$, and $$e$$ is lightlike ($$e^2 = 0$$), whereas $$f$$ is spacelike ($$f^2 = -2$$), and $$e \cdot f = 0$$, so $$e f = – f e$$. The bivector for the tangent plane is
\label{eqn:reciprocalblog:1120}
a \wedge b
=
\gpgradetwo{ (e + f) (e - f) }
=
\gpgradetwo{ e^2 - f^2 - 2 e f }
= -2 e f,

where
\label{eqn:reciprocalblog:1140}
e f = \gamma_{01} + \gamma_{21} + \gamma_{23} + \gamma_{03}.

Because $$e$$ is lightlike (zero square), and $$e f = – f e$$,
the bivector $$e f$$ squares to zero
\label{eqn:reciprocalblog:1780}
\lr{ e f }^2
= -e^2 f^2
= 0,

which shows that the parameterization is degenerate.
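These products can all be checked with the Dirac matrix representation of the $$\gamma_\mu$$ basis, in which matrix multiplication faithfully implements the geometric product. A minimal numpy sketch (the matrix representation is just a verification device, not part of the argument):

```python
import numpy as np

# Dirac representation of the gamma matrices: matrix products faithfully
# implement geometric products in the spacetime algebra.
I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

g0 = block(I2, 0 * I2, 0 * I2, -I2)
g1, g2, g3 = (block(0 * I2, s, -s, 0 * I2) for s in (s1, s2, s3))

e = g0 + g2   # lightlike
f = g1 + g3   # spacelike

I4 = np.eye(4)
assert np.allclose(e @ e, 0 * I4)              # e^2 = 0
assert np.allclose(f @ f, -2 * I4)             # f^2 = -2
assert np.allclose(e @ f, -(f @ e))            # e f = -f e
assert np.allclose((e @ f) @ (e @ f), 0 * I4)  # degenerate bivector

a, b = e + f, e - f
assert np.allclose(a @ b, 2 * I4 - 2 * (e @ f))  # a b = a . b + a ^ b
```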

This parameterization can also be expressed as
\label{eqn:reciprocalblog:1160}
x(u,v)
= u ( e + f ) + v ( e - f )
= (u + v) e + (u - v) f,

a linear combination of a lightlike and a spacelike vector. Intuitively, we expect that a physically meaningful spacetime surface involves linear combinations of spacelike vectors, or combinations of a timelike vector with spacelike vectors. This beastie is something entirely different.

### Final notes.

There are a few loose ends above. In particular, we haven’t conclusively proven that the set of reciprocal vectors $$\Bx^\mu = \grad u^\mu$$ are exactly those obtained through algebraic means. For a full parameterization of spacetime, they are necessarily the same, since both are unique. So we know that \ref{eqn:reciprocalblog:1920} must equal the reciprocals obtained by evaluating the gradient for a full parameterization (and this must also equal the reciprocals that we can obtain through matrix inversion.) We have also not proved explicitly that the three parameter construction of the reciprocals in \ref{eqn:reciprocalblog:1900} is in the tangent space, but that is a fairly trivial observation, so it can be left as an exercise for the reader. Some additional thought about this is probably required, but it seems reasonable to put that on the back burner and move on to some applications.

# References

[1] Peeter Joot. Geometric Algebra for Electrical Engineers. Kindle Direct Publishing, 2019.

## Potential solutions to the static Maxwell’s equation using geometric algebra

[Click here for a PDF of this post with nicer formatting]

When neither the electromagnetic field strength $$F = \BE + I \eta \BH$$, nor current $$J = \eta (c \rho - \BJ) + I(c\rho_m - \BM)$$ is a function of time, then the geometric algebra form of Maxwell’s equations is the first order multivector (gradient) equation
\label{eqn:staticPotentials:20}
\spacegrad F = J.

While direct solutions to this equation are possible with the multivector Green’s function for the gradient
\label{eqn:staticPotentials:40}
G(\Bx, \Bx') = \inv{4\pi} \frac{\Bx - \Bx'}{\Norm{\Bx - \Bx'}^3 },

the aim in this post is to explore second order (potential) solutions in a geometric algebra context. Can we assume that it is possible to find a multivector potential $$A$$ for which
\label{eqn:staticPotentials:60}
F = \spacegrad A,

is a solution to the Maxwell statics equation? If such a solution exists, then Maxwell’s equation is simply
\label{eqn:staticPotentials:80}
\spacegrad^2 A = J,

which can be easily solved using the scalar Green’s function for the Laplacian
\label{eqn:staticPotentials:240}
G(\Bx, \Bx') = -\inv{\Norm{\Bx - \Bx'} },

a beastie that may be easier to convolve than the vector valued Green’s function for the gradient.
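As a sanity check on the relation between the two Green's functions, the gradient of the conventionally normalized scalar Green's function $$-1/(4\pi \Norm{\Bx - \Bx'})$$ reproduces the vector valued Green's function for the gradient. A finite difference sketch (the point values are arbitrary, and the $$1/4\pi$$ normalization is my addition for this check):

```python
import numpy as np

# Central difference check that grad of -1/(4 pi |x - xp|) equals
# (1/(4 pi)) (x - xp)/|x - xp|^3, the Green's function for the gradient.
xp = np.array([0.2, -0.5, 1.0])   # source point (arbitrary)
x = np.array([1.5, 0.7, -0.3])    # field point (arbitrary)

def G_laplacian(y):
    return -1 / (4 * np.pi * np.linalg.norm(y - xp))

def G_gradient(y):
    d = y - xp
    return d / (4 * np.pi * np.linalg.norm(d) ** 3)

h = 1e-6
grad = np.array([
    (G_laplacian(x + h * ei) - G_laplacian(x - h * ei)) / (2 * h)
    for ei in np.eye(3)
])

assert np.allclose(grad, G_gradient(x), atol=1e-8)
```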

It is immediately clear that some restrictions must be imposed on the multivector potential $$A$$. In particular, since the field $$F$$ has only vector and bivector grades, this gradient must have no scalar, nor pseudoscalar grades. That is
\label{eqn:staticPotentials:100}
\gpgrade{\spacegrad A}{0,3} = 0.
This constraint on the potential can be avoided if a grade selection operation is built directly into the assumed potential solution, requiring that the field is given by
\label{eqn:staticPotentials:120}
F = \gpgrade{\spacegrad A}{1,2}.
However, after imposing such a constraint, Maxwell’s equation has a much less friendly form
\label{eqn:staticPotentials:140}
\spacegrad^2 A - \spacegrad \gpgrade{\spacegrad A}{0,3} = J.
Luckily, it is possible to introduce a transformation of potentials, called a gauge transformation, that eliminates the ugly grade selection term, and allows the potential equation to be expressed as a plain old Laplacian. We do so by assuming first that it is possible to find a solution of the Laplacian equation that has the desired grade restrictions. That is
\label{eqn:staticPotentials:160}
\begin{aligned}
\spacegrad^2 A' &= J \\
\gpgrade{\spacegrad A'}{0,3} &= 0,
\end{aligned}

for which $$F = \spacegrad A'$$ is a grade 1,2 solution to $$\spacegrad F = J$$. Suppose that $$A$$ is any formal solution, free of any grade restrictions, to $$\spacegrad^2 A = J$$, and $$F = \gpgrade{\spacegrad A}{1,2}$$. Can we find a function $$\tilde{A}$$ for which $$A = A' + \tilde{A}$$?

Maxwell’s equation in terms of $$A$$ is
\label{eqn:staticPotentials:180}
\begin{aligned}
J
&= \spacegrad \gpgrade{ \spacegrad A }{1,2} \\
&= \spacegrad \lr{ \spacegrad A - \gpgrade{ \spacegrad A }{0,3} } \\
&= \spacegrad^2 (A' + \tilde{A}) - \spacegrad \gpgrade{ \spacegrad A }{0,3} \\
&= J + \spacegrad^2 \tilde{A} - \spacegrad \gpgrade{ \spacegrad A }{0,3},
\end{aligned}

or
\label{eqn:staticPotentials:200}
\spacegrad^2 \tilde{A} = \spacegrad \gpgrade{ \spacegrad A }{0,3}.

This is a non-homogeneous Laplacian equation that can be solved as is for $$\tilde{A}$$ using the Green’s function for the Laplacian. Alternatively, we may also solve the equivalent first order system using the Green’s function for the gradient
\label{eqn:staticPotentials:220}
\spacegrad \tilde{A} = \gpgrade{ \spacegrad A }{0,3}.
Clearly $$\tilde{A}$$ is not unique, as we can add any function $$\psi$$ satisfying the homogeneous Laplacian equation $$\spacegrad^2 \psi = 0$$.

In summary, if $$A$$ is any multivector solution to $$\spacegrad^2 A = J$$, that is
\label{eqn:staticPotentials:260}
A(\Bx)
= \int dV' G(\Bx, \Bx') J(\Bx')
= -\int dV' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} },

then $$F = \spacegrad A'$$ is a solution to Maxwell’s equation, where $$A' = A - \tilde{A}$$, and $$\tilde{A}$$ is a solution to the non-homogeneous Laplacian equation or the non-homogeneous gradient equation above.

### Integral form of the gauge transformation.

Additional insight is possible by considering the gauge transformation in integral form. Suppose that
\label{eqn:staticPotentials:280}
A(\Bx) = -\int_V dV' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \tilde{A}(\Bx),

is a solution of $$\spacegrad^2 A = J$$, where $$\tilde{A}$$ is a multivector solution to the homogeneous Laplacian equation $$\spacegrad^2 \tilde{A} = 0$$. Let’s look at the constraints on $$\tilde{A}$$ that must be imposed for $$F = \spacegrad A$$ to be a valid (i.e. grade 1,2) solution of Maxwell’s equation.
\label{eqn:staticPotentials:300}
\begin{aligned}
F
&= \spacegrad A \\
&=
-\int_V dV' \lr{ \spacegrad \inv{\Norm{\Bx - \Bx'} } } J(\Bx')
- \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \lr{ \spacegrad' \inv{\Norm{\Bx - \Bx'} } } J(\Bx')
- \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \spacegrad' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} }
- \spacegrad \tilde{A}(\Bx) \\
&=
\int_{\partial V} dA' \ncap' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} }
- \spacegrad \tilde{A}(\Bx),
\end{aligned}

where $$\ncap' = (\Bx' - \Bx)/\Norm{\Bx' - \Bx}$$, and the fundamental theorem of geometric calculus has been used to transform the gradient volume integral into an integral over the bounding surface. Operating on Maxwell’s equation with the gradient gives $$\spacegrad^2 F = \spacegrad J$$, which has only grades 1,2 on the left hand side, meaning that $$J$$ is constrained in a way that requires $$\spacegrad J$$ to have only grades 1,2. This means that $$F$$ has grades 1,2 if
\label{eqn:staticPotentials:320}
\spacegrad \tilde{A}(\Bx)
= \int_{\partial V} dA' \frac{ \gpgrade{\ncap' J(\Bx')}{0,3} }{\Norm{\Bx - \Bx'} }.

The grade 0,3 components of the product $$\ncap J$$ expand to
\label{eqn:staticPotentials:340}
\begin{aligned}
\gpgrade{\ncap J}{0,3}
&=
\gpgrade{ \ncap \lr{ \eta (c \rho - \BJ) + I (c \rho_m - \BM) } }{0,3} \\
&=
\ncap \cdot (-\eta \BJ) + \gpgradethree{\ncap (-I \BM)} \\
&= -\eta \ncap \cdot \BJ - I \ncap \cdot \BM,
\end{aligned}

so
\label{eqn:staticPotentials:360}
\spacegrad \tilde{A}(\Bx)
=
-\int_{\partial V} dA' \frac{ \eta \ncap' \cdot \BJ(\Bx') + I \ncap' \cdot \BM(\Bx')}{\Norm{\Bx - \Bx'} }.

Observe that if there is no flux of current density $$\BJ$$ and (fictitious) magnetic current density $$\BM$$ through the surface, then $$F = \spacegrad A$$ is a solution to Maxwell’s equation without any gauge transformation. Alternatively $$F = \spacegrad A$$ is also a solution if $$\lim_{\Bx' \rightarrow \infty} \BJ(\Bx')/\Norm{\Bx - \Bx'} = \lim_{\Bx' \rightarrow \infty} \BM(\Bx')/\Norm{\Bx - \Bx'} = 0$$ and the bounding volume is taken to infinity.

## Generalizing Ampere’s law using geometric algebra.

The question I’d like to explore in this post is how Ampere’s law, the relationship between the line integral of the magnetic field and the enclosed current,
\label{eqn:flux:20}
\oint_{\partial A} d\Bx \cdot \BH = -\int_A dA \ncap \cdot \BJ,

generalizes to geometric algebra where Maxwell’s equations for a statics configuration (all time derivatives zero) is
\label{eqn:flux:40}
\spacegrad F = J,

where the multivector fields and currents are
\label{eqn:flux:60}
\begin{aligned}
F &= \BE + I \eta \BH \\
J &= \eta \lr{ c \rho - \BJ } + I \lr{ c \rho_\txtm - \BM }.
\end{aligned}

Here the (fictitious) magnetic charge and current densities that can be useful in antenna theory have been included in the multivector current for generality.

My presumption is that it should be possible to utilize the fundamental theorem of geometric calculus, which relates the integral over an oriented surface to an integral over its boundary, applied directly to Maxwell’s equation. That integral theorem has the form
\label{eqn:flux:80}
\int_A d^2 \Bx \boldpartial F = \oint_{\partial A} d\Bx F,

where $$d^2 \Bx = d\Ba \wedge d\Bb$$ is a two parameter bivector valued surface, and $$\boldpartial$$ is the vector derivative, the projection of the gradient onto the tangent space. I won’t try to explain all of geometric calculus here, and refer the interested reader to [1], which is an excellent reference on geometric calculus and integration theory.

The gotcha is that we actually want a surface integral with $$\spacegrad F$$. We can split the gradient into the vector derivative and a normal component
\label{eqn:flux:160}
\spacegrad = \boldpartial + \ncap (\ncap \cdot \spacegrad),

so
\label{eqn:flux:100}
\int_A d^2 \Bx \spacegrad F
=
\int_A d^2 \Bx \boldpartial F
+
\int_A d^2 \Bx \ncap \lr{ \ncap \cdot \spacegrad } F,

so
\label{eqn:flux:120}
\begin{aligned}
\oint_{\partial A} d\Bx F
&=
\int_A d^2 \Bx \lr{ J – \ncap \lr{ \ncap \cdot \spacegrad } F } \\
&=
\int_A dA \lr{ I \ncap J - \lr{ \ncap \cdot \spacegrad } I F }.
\end{aligned}

This is not nearly as nice as the Ampere’s law relationship above, where the current and field contributions were cleanly separated. The $$d\Bx F$$ product has all possible grades, as does the $$d^2 \Bx J$$ product (in general). Observe however, that the normal term on the right has only grades 1,2, so we can split our line integral relations into pairs with and without grade 1,2 components
\label{eqn:flux:140}
\begin{aligned}
\oint_{\partial A} \gpgrade{d\Bx F}{0,3}
&=
\int_A dA \gpgrade{ I \ncap J }{0,3} \\
\oint_{\partial A} \gpgrade{d\Bx F}{1,2}
&=
\int_A dA \lr{ \gpgrade{ I \ncap J }{1,2} – \lr{ \ncap \cdot \spacegrad } I F }.
\end{aligned}

Let’s expand these explicitly in terms of the component fields and densities to check against the conventional relationships, and see if things look right. The line integrand expands to
\label{eqn:flux:180}
\begin{aligned}
d\Bx F
&=
d\Bx \lr{ \BE + I \eta \BH }
=
d\Bx \cdot \BE + I \eta d\Bx \cdot \BH
+
d\Bx \wedge \BE + I \eta d\Bx \wedge \BH \\
&=
d\Bx \cdot \BE
– \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
+ I \eta (d\Bx \cdot \BH),
\end{aligned}

the current integrand expands to
\label{eqn:flux:200}
\begin{aligned}
I \ncap J
&=
I \ncap
\lr{
\frac{\rho}{\epsilon} - \eta \BJ + I \lr{ c \rho_\txtm - \BM }
} \\
&=
\ncap I \frac{\rho}{\epsilon} - \eta \ncap I \BJ - \ncap c \rho_\txtm + \ncap \BM \\
&=
\ncap \cdot \BM
+ \eta (\ncap \cross \BJ)
- \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
- \eta I (\ncap \cdot \BJ).
\end{aligned}
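Both expansions can be checked numerically in the Pauli matrix representation of the 3D geometric algebra, where a vector maps to its Pauli matrix expansion and the pseudoscalar $$I$$ maps to the scalar imaginary. A verification sketch with arbitrary sample values (the representation choice and all the names are mine):

```python
import numpy as np

# Pauli representation of 3D GA: vector v -> v . sigma, pseudoscalar I -> 1j.
sigma = np.array([
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
])

def vec(v):
    return np.einsum('i,ijk->jk', v, sigma)

rng = np.random.default_rng(3)
dx, E, H, Jc, M, n = rng.normal(size=(6, 3))
n /= np.linalg.norm(n)
eta, rho, rho_m, c, eps = 1.7, 0.4, 0.2, 3.0, 0.5   # arbitrary sample constants
I2 = np.eye(2)

# dx F = dx . E - eta (dx x H) + I (dx x E) + I eta (dx . H).
F = vec(E) + 1j * eta * vec(H)
lhs = vec(dx) @ F
rhs = (dx @ E * I2 - eta * vec(np.cross(dx, H))
       + 1j * vec(np.cross(dx, E)) + 1j * eta * (dx @ H) * I2)
assert np.allclose(lhs, rhs)

# I n J expansion, with J = rho/eps - eta Jc + I (c rho_m - M).
Jmv = (rho / eps) * I2 - eta * vec(Jc) + 1j * (c * rho_m * I2 - vec(M))
lhs2 = 1j * vec(n) @ Jmv
rhs2 = (n @ M * I2 + eta * vec(np.cross(n, Jc)) - c * rho_m * vec(n)
        + 1j * vec(np.cross(n, M)) + 1j * (rho / eps) * vec(n)
        - 1j * eta * (n @ Jc) * I2)
assert np.allclose(lhs2, rhs2)
```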

We are left with
\label{eqn:flux:220}
\begin{aligned}
\oint_{\partial A}
\lr{
d\Bx \cdot \BE + I \eta (d\Bx \cdot \BH)
}
&=
\int_A dA
\lr{
\ncap \cdot \BM - \eta I (\ncap \cdot \BJ)
} \\
\oint_{\partial A}
\lr{
- \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
}
&=
\int_A dA
\lr{
\eta (\ncap \cross \BJ)
- \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
-\PD{n}{} \lr{ I \BE – \eta \BH }
}.
\end{aligned}

This is a crazy mess of dots, crosses, fields and sources. We can split it into one equation for each grade, which will probably look a little more regular. That is
\label{eqn:flux:240}
\begin{aligned}
\oint_{\partial A} d\Bx \cdot \BE &= \int_A dA \ncap \cdot \BM \\
\oint_{\partial A} d\Bx \cross \BH
&=
\int_A dA
\lr{
- \ncap \cross \BJ
+ \frac{ \ncap \rho_\txtm }{\mu}
- \PD{n}{\BH}
} \\
\oint_{\partial A} d\Bx \cross \BE &=
\int_A dA
\lr{
\ncap \cross \BM
+ \frac{\ncap \rho}{\epsilon}
- \PD{n}{\BE}
} \\
\oint_{\partial A} d\Bx \cdot \BH &= -\int_A dA \ncap \cdot \BJ \\
\end{aligned}

The first and last equations could have been obtained much more easily from Maxwell’s equations in their conventional form. The two cross product equations with the normal derivatives are not familiar to me, even without the fictitious magnetic sources. It is somewhat remarkable that so much can be packed into one multivector equation:
\label{eqn:flux:260}
\oint_{\partial A} d\Bx F
=
I \int_A dA \lr{ \ncap J - \PD{n}{F} }.

# References

[1] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

## Solving Maxwell’s equation in freespace: Multivector plane wave representation

[Click here for a PDF of this post with nicer formatting]

The geometric algebra form of Maxwell’s equations in free space (or source free isotropic media with group velocity $$c$$) is the multivector equation
\label{eqn:planewavesMultivector:20}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F(\Bx, t) = 0.

Here $$F = \BE + I c \BB$$ is a multivector with grades 1 and 2 (vector and bivector components). The velocity $$c$$ is called the group velocity since $$F$$, or its components $$\BE, \BB$$, satisfy the wave equation, which can be seen by pre-multiplying with $$\spacegrad - (1/c)\PDi{t}{}$$ to find
\label{eqn:planewavesMultivector:n}
\lr{ \spacegrad^2 – \inv{c^2}\PDSq{t}{} } F(\Bx, t) = 0.

Let’s look at the frequency domain solution of this equation with a presumed phasor representation
\label{eqn:planewavesMultivector:40}
F(\Bx, t) = \textrm{Re} \lr{ F(\Bk) e^{-j \Bk \cdot \Bx + j \omega t} },

where $$j$$ is a scalar imaginary, not necessarily with any geometric interpretation.

Maxwell’s equation reduces to just
\label{eqn:planewavesMultivector:60}
0
=
-j \lr{ \Bk - \frac{\omega}{c} } F(\Bk).

If $$F(\Bk)$$ has a left multivector factor
\label{eqn:planewavesMultivector:80}
F(\Bk) =
\lr{ \Bk + \frac{\omega}{c} } \tilde{F},

where $$\tilde{F}$$ is a multivector to be determined, then
\label{eqn:planewavesMultivector:100}
\begin{aligned}
\lr{ \Bk - \frac{\omega}{c} }
F(\Bk)
&=
\lr{ \Bk - \frac{\omega}{c} }
\lr{ \Bk + \frac{\omega}{c} } \tilde{F} \\
&=
\lr{ \Bk^2 - \lr{\frac{\omega}{c}}^2 } \tilde{F},
\end{aligned}

which is zero if $$\Norm{\Bk} = \ifrac{\omega}{c}$$.

Let $$\kcap = \ifrac{\Bk}{\Norm{\Bk}}$$, and $$\Norm{\Bk} \tilde{F} = F_0 + F_1 + F_2 + F_3$$, where $$F_0, F_1, F_2,$$ and $$F_3$$ respectively have grades 0,1,2,3. Then
\label{eqn:planewavesMultivector:120}
\begin{aligned}
F(\Bk)
&= \lr{ 1 + \kcap } \lr{ F_0 + F_1 + F_2 + F_3 } \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap F_1 + \kcap F_2 + \kcap F_3 \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap \cdot F_1 + \kcap \cdot F_2 + \kcap \cdot F_3
+
\kcap \wedge F_1 + \kcap \wedge F_2 \\
&=
\lr{
F_0 + \kcap \cdot F_1
}
+
\lr{
F_1 + \kcap F_0 + \kcap \cdot F_2
}
+
\lr{
F_2 + \kcap \cdot F_3 + \kcap \wedge F_1
}
+
\lr{
F_3 + \kcap \wedge F_2
}.
\end{aligned}

Since the field $$F$$ has only vector and bivector grades, the grades zero and three components of the expansion above must be zero, or
\label{eqn:planewavesMultivector:140}
\begin{aligned}
F_0 &= - \kcap \cdot F_1 \\
F_3 &= - \kcap \wedge F_2,
\end{aligned}

so
\label{eqn:planewavesMultivector:160}
\begin{aligned}
F(\Bk)
&=
\lr{ 1 + \kcap } \lr{
F_1 - \kcap \cdot F_1 +
F_2 - \kcap \wedge F_2
} \\
&=
\lr{ 1 + \kcap } \lr{
F_1 - \kcap F_1 + \kcap \wedge F_1 +
F_2 - \kcap F_2 + \kcap \cdot F_2
}.
\end{aligned}

The multivector $$1 + \kcap$$ has the projective property of gobbling any leading factors of $$\kcap$$
\label{eqn:planewavesMultivector:180}
\begin{aligned}
(1 + \kcap)\kcap
&= \kcap + 1 \\
&= 1 + \kcap,
\end{aligned}
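This gobbling property is easy to check in the Pauli matrix representation, where $$\kcap$$ maps to a matrix that squares to the identity (a small verification sketch; the representation is my own device):

```python
import numpy as np

# Pauli representation: the unit vector khat maps to khat . sigma,
# which squares to the identity, so (1 + khat) khat = 1 + khat.
sigma = np.array([
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
])

khat = np.array([1.0, 2.0, -2.0]) / 3.0   # arbitrary unit vector
K = np.einsum('i,ijk->jk', khat, sigma)
I2 = np.eye(2)

assert np.allclose(K @ K, I2)             # khat^2 = 1
assert np.allclose((I2 + K) @ K, I2 + K)  # gobbling property
```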

so for $$F_i \in \setlr{ F_1, F_2 }$$
\label{eqn:planewavesMultivector:200}
(1 + \kcap) ( F_i - \kcap F_i )
=
(1 + \kcap) ( F_i - F_i )
= 0,

leaving
\label{eqn:planewavesMultivector:220}
F(\Bk)
=
\lr{ 1 + \kcap } \lr{
\kcap \cdot F_2 +
\kcap \wedge F_1
}.

For $$\kcap \cdot F_2$$ to be non-zero $$F_2$$ must be a bivector that lies in a plane containing $$\kcap$$, and $$\kcap \cdot F_2$$ is a vector in that plane that is perpendicular to $$\kcap$$. On the other hand $$\kcap \wedge F_1$$ is non-zero only if $$F_1$$ has a non-zero component that does not lie along the $$\kcap$$ direction, but $$\kcap \wedge F_1$$, like $$F_2$$, describes a plane containing $$\kcap$$. This means that having both bivector and vector free variables $$F_2$$ and $$F_1$$ provides more degrees of freedom than required. For example, if $$\BE$$ is any vector, and $$F_2 = \kcap \wedge \BE$$, then
\label{eqn:planewavesMultivector:240}
\begin{aligned}
\lr{ 1 + \kcap }
\kcap \cdot F_2
&=
\lr{ 1 + \kcap }
\kcap \cdot \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\lr{
\BE
-
\kcap \lr{ \kcap \cdot \BE }
} \\
&=
\lr{ 1 + \kcap }
\kcap \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\kcap \wedge \BE,
\end{aligned}

which has the form $$\lr{ 1 + \kcap } \lr{ \kcap \wedge F_1 }$$, so the solution of the free space Maxwell’s equation can be written
\label{eqn:planewavesMultivector:260}
\boxed{
F(\Bx, t)
=
\textrm{Re} \lr{
\lr{ 1 + \kcap }
\BE\,
e^{-j \Bk \cdot \Bx + j \omega t}
}
,
}

where $$\BE$$ is any vector for which $$\BE \cdot \Bk = 0$$.
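As a final numerical check, the boxed solution satisfies the frequency domain equation $$\lr{ \Bk - \omega/c } F(\Bk) = 0$$, which can be confirmed in the Pauli matrix representation (the sample propagation vector and names are arbitrary):

```python
import numpy as np

# Check that F(k) = (1 + khat) E satisfies (k - omega/c) F(k) = 0
# when |k| = omega/c and E . k = 0, using the Pauli representation.
sigma = np.array([
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
])

def vec(v):
    return np.einsum('i,ijk->jk', v, sigma)

k = np.array([2.0, -1.0, 2.0])        # arbitrary propagation vector
omega_c = np.linalg.norm(k)           # dispersion relation: |k| = omega/c
khat = k / omega_c
E = np.cross(khat, [0.0, 0.0, 1.0])   # any vector with E . k = 0

I2 = np.eye(2)
F = (I2 + vec(khat)) @ vec(E)

assert np.isclose(E @ k, 0)
assert np.allclose(vec(k) @ F, omega_c * F)
```

The last assertion works because $$(\Bk - \omega/c)(1 + \kcap) = \Norm{\Bk}(\kcap - 1)(1 + \kcap) = \Norm{\Bk}(\kcap^2 - 1) = 0$$.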

## The many faces of Maxwell’s equations

[Click here for a PDF of this post with nicer formatting (including equation numbering and references)]

The following is a possible introduction for a report for a UofT ECE2500 project associated with writing a small book: “Geometric Algebra for Electrical Engineers”. Given the space constraints for the report I may have to drop much of this, but some of the history of Maxwell’s equations may be of interest, so I thought I’d share before the knife hits the latex.

## Goals of the project.

This project had a few goals

1. Perform a literature review of applications of geometric algebra to the study of electromagnetism. Geometric algebra will be defined precisely later, along with bivector, trivector, multivector and other geometric algebra generalizations of the vector.
2. Identify the subset of the literature that had direct relevance to electrical engineering.
3. Create a complete, and as compact as possible, introduction of the prerequisites required
for a graduate or advanced undergraduate electrical engineering student to be able to apply
geometric algebra to problems in electromagnetism.

## The many faces of electromagnetism.

There is a long history of attempts to find more elegant, compact and powerful ways of encoding and working with Maxwell’s equations.

### Maxwell’s formulation.

Maxwell [12] employs some differential operators, including the gradient $$\spacegrad$$ and Laplacian $$\spacegrad^2$$, but the divergence and gradient are always written out in full using coordinates, usually in integral form. Reading the original Treatise highlights how important notation can be, as most modern engineering or physics practitioners would find his original work incomprehensible. A nice translation from Maxwell’s notation to the modern Heaviside-Gibbs notation can be found in [16].

### Quaternion representation.

In his second volume [11] the equations of electromagnetism are stated using quaternions (an extension of complex numbers to three dimensions), but quaternions are not otherwise used in that work. The modern quaternionic form of Maxwell’s equations is
\label{eqn:ece2500report:220}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } - \inv{2} \symmetric{ \frac{d}{dr} } { c \BD } &= c \rho + \BJ \\
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \BB } &= 0,
\end{aligned}

where $$\ifrac{d}{dr} = (1/c) \PDi{t}{} + \Bi \PDi{x}{} + \Bj \PDi{y}{} + \Bk \PDi{z}{}$$ [7] acts bidirectionally, and vectors are expressed in terms of the quaternion basis $$\setlr{ \Bi, \Bj, \Bk }$$, subject to the relations $$\Bi^2 = \Bj^2 = \Bk^2 = -1, \quad \Bi \Bj = \Bk = -\Bj \Bi, \quad \Bj \Bk = \Bi = -\Bk \Bj, \quad \Bk \Bi = \Bj = -\Bi \Bk$$.
There is clearly more structure to these equations than in the traditional Heaviside-Gibbs representation that we are used to, which says something for the quaternion model. However, this structure requires notation that is arguably non-intuitive. The fact that the quaternion representation was abandoned long ago by most electromagnetism researchers and engineers supports such an argument.
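The quaternion basis relations above are easy to verify mechanically. Here is a minimal sketch (a hand-rolled `Quaternion` class, not from the report; the names and values are illustrative only):

```python
# Minimal quaternion type, used only to verify the stated basis relations
# i^2 = j^2 = k^2 = -1, ij = k = -ji, jk = i = -kj, ki = j = -ik.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    w: float = 0.0  # scalar part
    x: float = 0.0  # i component
    y: float = 0.0  # j component
    z: float = 0.0  # k component

    def __mul__(self, o):
        # Hamilton product, which encodes the multiplication rules above.
        return Quaternion(
            self.w * o.w - self.x * o.x - self.y * o.y - self.z * o.z,
            self.w * o.x + self.x * o.w + self.y * o.z - self.z * o.y,
            self.w * o.y - self.x * o.z + self.y * o.w + self.z * o.x,
            self.w * o.z + self.x * o.y - self.y * o.x + self.z * o.w,
        )

i, j, k = Quaternion(x=1), Quaternion(y=1), Quaternion(z=1)
minus_one = Quaternion(w=-1)

assert i * i == j * j == k * k == minus_one
assert i * j == k and j * i == Quaternion(z=-1)
assert j * k == i and k * j == Quaternion(x=-1)
assert k * i == j and i * k == Quaternion(y=-1)
print("quaternion basis relations verified")
```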

### Minkowski tensor representation.

Minkowski introduced the concept of a complex time coordinate $$x_4 = i c t$$ for special relativity [3]. Such a four-vector representation can be used for many of the relativistic four-vector pairs of electromagnetism, such as the current $$(c\rho, \BJ)$$, and the energy-momentum Lorentz force relations, and can also be applied to Maxwell’s equations
\label{eqn:ece2500report:140}
\sum_{\mu= 1}^4 \PD{x_\mu}{F_{\mu\nu}} = - 4 \pi j_\nu,
\sum_{\lambda\rho\mu=1}^4
\epsilon_{\mu\nu\lambda\rho}
\PD{x_\mu}{F_{\lambda\rho}} = 0,

where
\label{eqn:ece2500report:160}
F
=
\begin{bmatrix}
0 & B_z & -B_y & -i E_x \\
-B_z & 0 & B_x & -i E_y \\
B_y & -B_x & 0 & -i E_z \\
i E_x & i E_y & i E_z & 0
\end{bmatrix}.

A rank-2, complex, antisymmetric tensor contains all six of the field components. Transformation of coordinates for this representation of the field may be performed exactly like the transformation for any other four-vector. This formalism is described nicely in [13], where the structure used is motivated by transformational requirements. One of the costs of this tensor representation is that we lose the clear separation of the electric and magnetic fields that we are so comfortable with. Another cost is that we lose the distinction between space and time, as separate space and time coordinates have to be projected out of a larger four-vector. Both of these costs have theoretical benefits in some applications, particularly for high energy problems where relativity is important, but for the low velocity problems near and dear to electrical engineers, who can freely treat space and time independently, the advantages are not clear.
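That antisymmetry is quick to check numerically. A small sketch (the sample field component values are arbitrary choices, not from the text):

```python
# Build the Minkowski field matrix for arbitrary sample components and verify
# that it is antisymmetric, F^T = -F. Values are illustrative only.
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 4.0, 5.0, 6.0

F = [
    [0,        Bz,      -By,      -1j * Ex],
    [-Bz,      0,        Bx,      -1j * Ey],
    [By,      -Bx,       0,       -1j * Ez],
    [1j * Ex,  1j * Ey,  1j * Ez,  0],
]

# Antisymmetry: every component satisfies F[mu][nu] == -F[nu][mu].
assert all(F[m][n] == -F[n][m] for m in range(4) for n in range(4))
print("F is antisymmetric")
```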

### Modern tensor formalism.

The Minkowski representation fell out of favour in theoretical physics, which settled on a real tensor representation that utilizes an explicit metric tensor $$g_{\mu\nu} = \pm \textrm{diag}(1, -1, -1, -1)$$ to represent the complex inner products of special relativity. In this tensor formalism, Maxwell’s equations are also reduced to a set of two tensor relationships ([10], [8], [5]).
\label{eqn:ece2500report:40}
\begin{aligned}
\partial_\mu F^{\mu \nu} &= \mu_0 J^\nu \\
\epsilon^{\alpha \beta \mu \nu} \partial_\beta F_{\mu \nu} &= 0,
\end{aligned}

where $$F^{\mu\nu}$$ is a \textit{real} rank-2 antisymmetric tensor that contains all six electric and magnetic field components, and $$J^\nu$$ is a four-vector current containing both charge density and current density components. \Cref{eqn:ece2500report:40} provides a unified and simpler theoretical framework for electromagnetism, and is used extensively in physics but not engineering.

### Differential forms.

It has been argued that a differential forms treatment of electromagnetism provides some of the same theoretical advantages as the tensor formalism, without the disadvantages of introducing a hellish mess of index manipulation into the mix. With differential forms it is also possible to express Maxwell’s equations as two equations. The free-space differential forms equivalent [4] to the tensor equations is
\label{eqn:ece2500report:60}
\begin{aligned}
d \alpha &= 0 \\
d *\alpha &= 0,
\end{aligned}

where
\label{eqn:ece2500report:180}
\alpha = \lr{ E_1 dx^1 + E_2 dx^2 + E_3 dx^3 }(c dt) + H_1 dx^2 dx^3 + H_2 dx^3 dx^1 + H_3 dx^1 dx^2.

One of the advantages of this representation is that it is valid even for curvilinear coordinate representations, which are handled naturally in differential forms. However, this formalism also comes with a number of costs. One cost (or benefit), like that of the tensor formalism, is that this is implicitly a relativistic approach subject to non-Euclidean orthonormality conditions $$(dx^i, dx^j) = \delta^{ij}, (dx^i, c dt) = 0, (c dt, c dt) = -1$$. Most grievous of the costs is the requirement to use differentials $$dx^1, dx^2, dx^3, c dt$$, instead of a more familiar set of basis vectors, even for non-curvilinear coordinates. This requirement is easily viewed as unnatural, and is likely one of the reasons that electromagnetism with differential forms has never become popular.

### Vector formalism.

Euclidean vector algebra, in particular the vector algebra and calculus of $$R^3$$, is the de-facto language of electrical engineering for electromagnetism. Maxwell’s equations in the Heaviside-Gibbs vector formalism are
\label{eqn:ece2500report:20}
\begin{aligned}
\spacegrad \cross \BE &= - \PD{t}{\BB} \\
\spacegrad \cross \BH &= \BJ + \PD{t}{\BD} \\
\spacegrad \cdot \BD &= \rho \\
\spacegrad \cdot \BB &= 0.
\end{aligned}

We are all intimately familiar with these equations, with the dot and the cross products, and with gradient, divergence and curl operations that are used to express them.
Given how comfortable we are with this mathematical formalism, there has to be a really good reason to switch to something else.

### Space time algebra (geometric algebra).

An alternative to any of the electrodynamics formalisms described above is STA, the Space Time Algebra. STA is a relativistic geometric algebra that allows Maxwell’s equations to be combined into one equation ([2], [6])
\label{eqn:ece2500report:80}
\grad F = J,

where
\label{eqn:ece2500report:200}
F = \BE + I c \BB \qquad (= \BE + I \eta \BH)

is a bivector field containing both the electric and magnetic field “vectors”, $$\grad = \gamma^\mu \partial_\mu$$ is the spacetime gradient, $$J$$ is a four-vector containing electric charge and current components, and $$I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$$ is the spacetime pseudoscalar, the ordered product of the basis vectors $$\setlr{ \gamma_\mu }$$. The STA representation is explicitly relativistic, with non-Euclidean relationships between the basis vectors $$\gamma_0 \cdot \gamma_0 = 1 = -\gamma_k \cdot \gamma_k, \forall k > 0$$. In this formalism “spatial” vectors $$\Bx = \sum_{k>0} \gamma_k \gamma_0 x^k$$ are represented as spacetime bivectors, requiring a small sleight of hand when switching between STA notation and conventional vector representation. Not coincidentally, $$F$$ has exactly the same structure as the 2-form $$\alpha$$ above, provided the differential 1-forms $$dx^\mu$$ are replaced by the basis vectors $$\gamma_\mu$$. However, there is a simple complex structure inherent in the STA form that is not obvious in the 2-form equivalent. The bivector representation of the field $$F$$ directly encodes the antisymmetric nature of $$F^{\mu\nu}$$ from the tensor formalism, and the tensor equivalents of most STA results can be calculated easily.

Having a single PDE for all of Maxwell’s equations allows for direct Green’s function solution of the field, and has a number of other advantages. There is extensive literature exploring selected applications of STA to electrodynamics. Many theoretical results have been derived using this formalism that require significantly more complex approaches using conventional vector or tensor analysis. Unfortunately, much of the STA literature is inaccessible to the engineering student, practising engineers, or engineering instructors. To even start reading the literature, one must learn geometric algebra, aspects of special relativity and non-Euclidean geometry, generalized integration theory, and even some tensor analysis.

### Paravector formalism (geometric algebra).

In the geometric algebra literature, there are a few authors who have endorsed the use of Euclidean geometric algebras for relativistic applications ([1], [14]).
These authors use a Euclidean basis “vector” $$\Be_0 = 1$$ for the timelike direction, along with a standard Euclidean basis $$\setlr{ \Be_i }$$ for the spatial directions. A hybrid scalar plus vector representation of four-vectors, called paravectors, is employed. Maxwell’s equation is written as a multivector equation
\label{eqn:ece2500report:120}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = J,

where $$J$$ is a multivector source containing both the electric charge and currents, and $$c$$ is the group velocity for the medium (assumed uniform and isotropic). $$J$$ may optionally include the (fictitious) magnetic charge and currents useful in antenna theory. The paravector formalism uses the hybrid electromagnetic field representation of STA above, however $$I = \Be_1 \Be_2 \Be_3$$ is interpreted as the $$R^3$$ pseudoscalar, the ordered product of the basis vectors $$\setlr{ \Be_i }$$, and $$F$$ represents a multivector with vector and bivector components. Unlike STA, where $$\BE$$ and $$\BB$$ (or $$\BH$$) are interpreted as spacetime bivectors, here they are plain old Euclidean vectors in $$R^3$$, entirely consistent with conventional Heaviside-Gibbs notation. Like the STA Maxwell’s equation, the paravector form is directly invertible using Green’s function techniques, without requiring the solution of equivalent second order potential problems, nor any requirement to take the derivatives of those potentials to determine the fields.

Lorentz transformation and manipulation of paravectors requires a variety of conjugation, real and imaginary operators, unlike STA where such operations have the same complex exponential structure as any 3D rotation expressed in geometric algebra. The advocates of the paravector representation argue that this provides an effective pedagogical bridge from Euclidean geometry to the Minkowski geometry of special relativity. This author agrees that this form of Maxwell’s equations is the natural choice for an introduction to electromagnetism using geometric algebra, but for relativistic operations, STA is a much more natural and less confusing choice.

## Results.

The end product of this project was a fairly small self-contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on $$R^2$$ and $$R^3$$ (64 pages), an introduction to geometric calculus and multivector Green’s functions (64 pages), and applications to electromagnetism (75 pages). This report summarizes results from this book, omitting most derivations, and attempts to provide an overview that may be used as a road map for further exploration of the book. Many of the fundamental results of electromagnetism are derived directly from the geometric algebra form of Maxwell’s equation in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra STA and paravector literature. It will be clear to the reader that it is often simpler to have the electric and magnetic fields on equal footing, and the book demonstrates this by deriving most results in terms of the total electromagnetic field $$F$$. Many examples of how to extract the conventional electric and magnetic fields from the geometric algebra results expressed in terms of $$F$$ are given as a bridge between the multivector and vector representations.

The aim of this work was to remove some of the prerequisite conceptual roadblocks that make electromagnetism using geometric algebra inaccessible. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. After derivation from the conventional Heaviside-Gibbs representation of Maxwell’s equations, the paravector representation of Maxwell’s equation is used as the starting point for all subsequent analysis. However, the paravector literature includes a confusing set of conjugation and real and imaginary selection operations that are tailored for relativistic applications. These are not necessary for low velocity applications, and have been avoided completely with the aim of making the subject more accessible to the engineer.

In the book an attempt has been made to introduce as little new notation as possible. For example, some authors use special notation for the bivector valued magnetic field $$I \BB$$, such as $$\boldsymbol{\mathcal{b}}$$ or $$\Bcap$$. Given the inconsistencies in the literature, $$I \BB$$ (or $$I \BH$$) will be used explicitly for the bivector (magnetic) components of the total electromagnetic field $$F$$. In the geometric algebra literature, there are conflicting conventions for the operator $$\spacegrad + (1/c) \PDi{t}{}$$, which we will call the spacetime gradient after the STA equivalent. For examples of different notations for the spacetime gradient, see [9], [1], and [15]. In the book the spacetime gradient is always written out in full to avoid picking from or explaining some of the subtleties of the competing notations.

Some researchers will find it distasteful that STA and relativity have been avoided completely in this book. Maxwell’s equations are inherently relativistic, and STA expresses the relativistic aspects of electromagnetism in an exceptional and beautiful fashion. However, a student of this book will have learned the geometric algebra and calculus prerequisites of STA. This makes the STA literature much more accessible, especially since most of the results in the book can be trivially translated into STA notation.

# References

[1] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer Science \& Business Media, 2004.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] Albert Einstein. Relativity: The special and the general theory, chapter Minkowski’s Four-Dimensional Space. Princeton University Press, 2015. URL http://www.gutenberg.org/ebooks/5001.

[4] H. Flanders. Differential Forms With Applications to the Physical Sciences. Courier Dover Publications, 1989.

[5] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[6] David Hestenes. Space-time algebra, volume 1. Springer, 1966.

[7] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[8] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[9] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics. World Scientific, 1988.

[10] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980. ISBN 0750627689.

[11] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.

[12] James Clerk Maxwell. A treatise on electricity and magnetism, third edition, volume I. Dover publications, 1891.

[13] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

[14] Chappell et al. A simplified approach to electromagnetism using geometric algebra. arXiv preprint arXiv:1010.4947, 2010.

[15] Chappell et al. Geometric algebra for electrical and electronic engineers. 2014.

[16] Chappell et al. Geometric Algebra for Electrical and Electronic Engineers. 2014.

## A derivation of the quaternion Maxwell’s equations using geometric algebra.

[Click here for a PDF of this post with nicer formatting]

## Motivation.

The quaternion form of Maxwell’s equations as stated in [2] is nearly indecipherable. The modern quaternionic form of these equations can be found in [1]. Looking for this representation was driven by the question of whether or not the compact geometric algebra representation of Maxwell’s equations, $$\grad F = J$$, is possible using a quaternion representation of the fields.

As quaternions may be viewed as the even subalgebra of GA(3,0), it is possible to derive the quaternion representation of Maxwell’s equations using only geometric algebra, including source terms and independent of the heat considerations discussed in [1]. Such a derivation will be performed here. Examination of the results appears to answer the question about the compact representation in the negative.

## Quaternions as multivectors.

Quaternions are vector plus scalar sums, where the vector basis $$\setlr{ \Bi, \Bj, \Bk }$$ is subject to the complex-like multiplication rules
\label{eqn:complex:240}
\begin{aligned}
\Bi^2 &= \Bj^2 = \Bk^2 = -1 \\
\Bi \Bj &= \Bk = -\Bj \Bi \\
\Bj \Bk &= \Bi = -\Bk \Bj \\
\Bk \Bi &= \Bj = -\Bi \Bk.
\end{aligned}

We can represent these basis vectors in terms of the $$\mathbb{R}^{3}$$ unit bivectors
\label{eqn:quaternion2maxwellWithGA:260}
\begin{aligned}
\Bi &= \Be_{3} \Be_{2} = -I \Be_1 \\
\Bj &= \Be_{1} \Be_{3} = -I \Be_2 \\
\Bk &= \Be_{2} \Be_{1} = -I \Be_3,
\end{aligned}

where $$I = \Be_1 \Be_2 \Be_3$$ is the ordered product of the $$\mathbb{R}^{3}$$ basis elements. Within geometric algebra, the quaternion basis “vectors” are more properly viewed as a bivector space basis that happens to have dimension three.
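This identification can be checked mechanically with a toy implementation of the geometric product. The following sketch (not from the post) uses the standard bitmask blade representation, with bit $$k$$ set meaning the factor $$\Be_{k+1}$$ is present, and multivectors stored as `{blade: coefficient}` dicts:

```python
# Toy GA(3,0) geometric product, used to re-verify the stated bivector
# identifications and the quaternion multiplication rules.

def reorder_sign(a, b):
    # Sign accumulated by moving the basis vectors of blade b past those of
    # blade a into canonical order: one sign flip per crossing.
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def mv_mul(x, y):
    # Geometric product of multivector dicts; shared basis vectors square to
    # +1 (Euclidean metric), which the xor handles after the sign is computed.
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            blade = a ^ b
            out[blade] = out.get(blade, 0) + reorder_sign(a, b) * ca * cb
    return {blade: c for blade, c in out.items() if c}

def neg(x):
    return {blade: -c for blade, c in x.items()}

e1, e2, e3 = {0b001: 1}, {0b010: 1}, {0b100: 1}
I = mv_mul(mv_mul(e1, e2), e3)  # pseudoscalar e1 e2 e3
minus_one = {0: -1}

i = mv_mul(e3, e2)  # e3 e2
j = mv_mul(e1, e3)  # e1 e3
k = mv_mul(e2, e1)  # e2 e1

# The identifications i = -I e1, j = -I e2, k = -I e3:
assert i == neg(mv_mul(I, e1))
assert j == neg(mv_mul(I, e2))
assert k == neg(mv_mul(I, e3))

# The quaternion relations emerge from the geometric product of bivectors:
assert mv_mul(i, i) == mv_mul(j, j) == mv_mul(k, k) == minus_one
assert mv_mul(i, j) == k and mv_mul(j, k) == i and mv_mul(k, i) == j
print("quaternion basis realized as GA(3,0) bivectors")
```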

Following [1], we may introduce a quaternionic spacetime gradient, and express that in terms of geometric algebra
\label{eqn:quaternion2maxwellWithGA:280}
\frac{d}{dr} = \inv{c} \PD{t}{}
+ \Bi \PD{x}{}
+ \Bj \PD{y}{}
+ \Bk \PD{z}{}
=
\inv{c} \PD{t}{} - I \spacegrad.

Of particular interest is how to write the curl, divergence and time partials in terms of the quaternionic spacetime gradient or its components. Like [1], we will use modern commutator notation for an antisymmetric difference of products
\label{eqn:quaternion2maxwellWithGA:600}
\antisymmetric{a}{b} = a b - b a,

and anticommutator notation for a symmetric difference of products
\label{eqn:quaternion2maxwellWithGA:620}
\symmetric{a}{b} = a b + b a.

The curl of a vector $$\Bf$$ in terms of vector products with the gradient is
\label{eqn:quaternion2maxwellWithGA:300}
\begin{aligned}
\spacegrad \cross \Bf
&= -I(\spacegrad \wedge \Bf) \\
&= -\frac{I}{2} \lr{ \spacegrad \Bf - \Bf \spacegrad } \\
&= \frac{1}{2} \lr{ (-I \spacegrad) \Bf - \Bf (-I\spacegrad) } \\
&= \inv{2} \antisymmetric{ -I \spacegrad }{ \Bf } \\
&= \inv{2} \antisymmetric{ \frac{d}{dr} }{ \Bf },
\end{aligned}

where the last step takes advantage of the fact that the timelike contribution of the spacetime gradient commutes with any vector $$\Bf$$ due to its scalar nature, so cancels out of the commutator. In a similar fashion, the dot product may be written as an anticommutator
\label{eqn:quaternion2maxwellWithGA:480}
\spacegrad \cdot \Bf
=
\inv{2} \lr{ \spacegrad \Bf + \Bf \spacegrad }
=
\inv{2} \symmetric{ \spacegrad}{ \Bf },

as can the scalar time derivative
\label{eqn:quaternion2maxwellWithGA:500}
\PD{t}{\Bf}
= \inv{2} \symmetric{ \inv{c} \PD{t}{} } { c \Bf }.
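These commutator and anticommutator identities lend themselves to a numerical sanity check. In the sketch below (not from the post) the sample field $$\Bf = (y^2, z^2, x^2)$$ and the step size are arbitrary choices, the vector field is represented by its pure-quaternion image, and the timelike term of $$d/dr$$ is omitted from the computation since its scalar nature makes it cancel in the commutator:

```python
# Check curl f = (1/2) [ d/dr, f ] by comparing the analytic curl of a sample
# field against central finite differences of the quaternion commutator.

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z) tuples.
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    )

def qsub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def f(x, y, z):
    # Sample vector field (y^2, z^2, x^2) as a pure quaternion (arbitrary).
    return (0.0, y*y, z*z, x*x)

def commutator_curl(x, y, z, h=1e-5):
    # (1/2) sum_u ( e_u (d_u f) - (d_u f) e_u ), derivatives taken by
    # central differences; the commuting (1/c) d/dt term is dropped.
    basis = [(0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]  # i, j, k
    steps = [(h, 0, 0), (0, h, 0), (0, 0, h)]
    total = (0.0, 0.0, 0.0, 0.0)
    for e_u, (dx, dy, dz) in zip(basis, steps):
        df = qsub(f(x + dx, y + dy, z + dz), f(x - dx, y - dy, z - dz))
        df = tuple(c / (2 * h) for c in df)
        term = qsub(qmul(e_u, df), qmul(df, e_u))
        total = tuple(t + 0.5 * c for t, c in zip(total, term))
    return total

x, y, z = 1.0, 2.0, 3.0
# Analytic curl of (y^2, z^2, x^2) is (-2z, -2x, -2y).
expected = (0.0, -2*z, -2*x, -2*y)
got = commutator_curl(x, y, z)
assert all(abs(a - b) < 1e-6 for a, b in zip(got, expected))
print("curl matches (1/2)[d/dr, f]")
```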

## Quaternionic form of Maxwell’s equations.

Using geometric algebra as an intermediate transformation, let’s see directly how to express Maxwell’s equations in terms of this quaternionic operator. Our starting point is Maxwell’s equations in their standard macroscopic form

\label{eqn:quaternion2maxwellWithGA:320}
\spacegrad \cross \BH = \BJ + \PD{t}{\BD}

\label{eqn:quaternion2maxwellWithGA:340}
\spacegrad \cdot \BD = \rho

\label{eqn:quaternion2maxwellWithGA:360}
\spacegrad \cross \BE = - \PD{t}{\BB}

\label{eqn:quaternion2maxwellWithGA:380}
\spacegrad \cdot \BB = 0.

Inserting the quaternion representations of the curl, divergence, and time derivative into the Maxwell-Faraday equation and Gauss’s law for magnetism, we have
\label{eqn:quaternion2maxwellWithGA:400}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } &= -\inv{2} \symmetric{ \inv{c}\PD{t}{} }{ c \BB } \\
\inv{2} \symmetric{ \spacegrad }{ c \BB } &= 0,
\end{aligned}

or
\label{eqn:quaternion2maxwellWithGA:420}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ -I \BE } + \inv{2} \symmetric{ \inv{c}\PD{t}{} }{ -I c \BB } &= 0 \\
\inv{2} \symmetric{ -I \spacegrad }{ -I c \BB } &= 0.
\end{aligned}

We can introduce quaternionic electric and magnetic field “vectors” (really bivectors)
\label{eqn:quaternion2maxwellWithGA:440}
\begin{aligned}
\boldsymbol{\mathcal{E}} &= -I \BE = \Bi E_x + \Bj E_y + \Bk E_z \\
\boldsymbol{\mathcal{B}} &= -I \BB = \Bi B_x + \Bj B_y + \Bk B_z,
\end{aligned}

and substitute these and sum to find the quaternionic representation of the two source-free Maxwell’s equations
\label{eqn:quaternion2maxwellWithGA:460}
\boxed{
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \boldsymbol{\mathcal{E}} } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \boldsymbol{\mathcal{B}} } = 0.
}

Inserting the quaternion representations of the curl, divergence, and time derivative into the Ampere-Maxwell law and Gauss’s law gives
\label{eqn:quaternion2maxwellWithGA:520}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } &= \BJ + \inv{2} \symmetric{ \inv{c} \PD{t}{} } { c \BD } \\
\inv{2} \symmetric{ \spacegrad }{ c \BD } &= c \rho,
\end{aligned}
or

\label{eqn:quaternion2maxwellWithGA:540}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ -I \BH } - \inv{2} \symmetric{ \inv{c} \PD{t}{} } { -I c \BD } &= -I \BJ \\
-\inv{2} \symmetric{ -I \spacegrad }{ -I c \BD } &= c \rho.
\end{aligned}

With the quaternionic displacement vector, magnetic field intensity, and current density
\label{eqn:quaternion2maxwellWithGA:580}
\begin{aligned}
\boldsymbol{\mathcal{D}} &= -I \BD = \Bi D_x + \Bj D_y + \Bk D_z \\
\boldsymbol{\mathcal{H}} &= -I \BH = \Bi H_x + \Bj H_y + \Bk H_z \\
\boldsymbol{\mathcal{J}} &= -I \BJ = \Bi J_x + \Bj J_y + \Bk J_z,
\end{aligned}

and summing yields the remaining two Maxwell’s equations in their quaternionic form
\label{eqn:quaternion2maxwellWithGA:560}
\boxed{
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \boldsymbol{\mathcal{H}} } - \inv{2} \symmetric{ \frac{d}{dr} } { c \boldsymbol{\mathcal{D}} } = c \rho + \boldsymbol{\mathcal{J}}.
}

## Conclusions.

Maxwell’s equations in the quaternion representation have a structure that is not apparent in the Heaviside-Gibbs notation. There is some elegance to this result, but it comes with the cost of having to use commutator and anticommutator operators, which are arguably non-intuitive. The compact geometric algebra representation of Maxwell’s equation does not appear possible with a quaternion representation, as an additional complex degree of freedom would be required (biquaternions?). Such a degree of freedom may also allow a quaternion representation of the (fictitious) magnetic sources that are useful in antenna theory. Magnetic sources are easily incorporated into the current multivector in geometric algebra, but if done so in the derivation above, they yield an odd grade multivector source which has no quaternion representation.

# References

[1] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[2] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.

## A comparison of Geometric Algebra electrodynamic potential methods

[Click here for a PDF of this post with nicer formatting]

## Motivation

Geometric algebra (GA) allows for a compact description of Maxwell’s equations in either an explicit 3D representation or a STA (SpaceTime Algebra [2]) representation. The 3D GA and STA representations of Maxwell’s equation both have the form

\label{eqn:potentialMethods:1280}
L \boldsymbol{\mathcal{F}} = J,

where $$J$$ represents the sources, $$L$$ is a multivector gradient operator that includes partial derivative operator components for each of the space and time coordinates, and

\label{eqn:potentialMethods:1020}
\boldsymbol{\mathcal{F}} = \boldsymbol{\mathcal{E}} + \eta I \boldsymbol{\mathcal{H}},

is an electromagnetic field multivector, $$I = \Be_1 \Be_2 \Be_3$$ is the $$\mathbb{R}^3$$ pseudoscalar, and $$\eta = \sqrt{\mu/\epsilon}$$ is the impedance of the medium.

When Maxwell’s equations are extended to include magnetic sources in addition to conventional electric sources (as used in antenna theory [1] and microwave engineering [3]), they take the form

\label{eqn:chapter3Notes:20}
\spacegrad \cross \boldsymbol{\mathcal{E}} = - \boldsymbol{\mathcal{M}} - \PD{t}{\boldsymbol{\mathcal{B}}}

\label{eqn:chapter3Notes:40}
\spacegrad \cross \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{J}} + \PD{t}{\boldsymbol{\mathcal{D}}}

\label{eqn:chapter3Notes:60}
\spacegrad \cdot \boldsymbol{\mathcal{D}} = q_{\textrm{e}}

\label{eqn:chapter3Notes:80}
\spacegrad \cdot \boldsymbol{\mathcal{B}} = q_{\textrm{m}}.

The corresponding GA Maxwell equations in their respective 3D and STA forms are

\label{eqn:potentialMethods:300}
\lr{ \spacegrad + \inv{v} \PD{t}{} } \boldsymbol{\mathcal{F}}
=
\eta
\lr{ v q_{\textrm{e}} - \boldsymbol{\mathcal{J}} }
+ I \lr{ v q_{\textrm{m}} - \boldsymbol{\mathcal{M}} }

\label{eqn:potentialMethods:320}
\grad \boldsymbol{\mathcal{F}} = \eta J - I M,

where the wave group velocity in the medium is $$v = 1/\sqrt{\epsilon\mu}$$, and the medium is isotropic with
$$\boldsymbol{\mathcal{B}} = \mu \boldsymbol{\mathcal{H}}$$, and $$\boldsymbol{\mathcal{D}} = \epsilon \boldsymbol{\mathcal{E}}$$. In the STA representation, $$\grad, J, M$$ are all four-vectors, the specific meanings of which will be spelled out below.

How to determine the potential equations and the field representation using the conventional distinct Maxwell’s equations \ref{eqn:chapter3Notes:20}, … is well known. The basic procedure is to consider the electric and magnetic sources in turn, and observe that in each case one of the electric or magnetic fields must have a curl representation. The STA approach is similar, except that it can be observed that the field must have a four-curl representation for each type of source. In the explicit 3D GA formalism
\ref{eqn:potentialMethods:300}, how to formulate a natural potential representation is not as obvious. There is no longer any reason to set any component of the field equal to a curl, and the representation of the four-curl from the STA approach is awkward. Additionally, it is not obvious what form gauge invariance takes in the 3D GA representation.

### Ideas explored in these notes

• GA representation of Maxwell’s equations including magnetic sources.
• STA GA formalism for Maxwell’s equations including magnetic sources.
• Explicit form of the GA potential representation including both electric and magnetic sources.
• Demonstration of exactly how the 3D and STA potentials are related.
• Explore the structure of gauge transformations when magnetic sources are included.
• Explore the structure of gauge transformations in the 3D GA formalism.
• Specify the form of the Lorentz gauge in the 3D GA formalism.

## Traditional vector algebra

### No magnetic sources

When magnetic sources are omitted, it follows from \ref{eqn:chapter3Notes:80} that there is some $$\boldsymbol{\mathcal{A}}^{\mathrm{e}}$$ for which

\label{eqn:potentialMethods:20}
\boxed{
\boldsymbol{\mathcal{B}} = \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}}.
}

Substitution into Faraday’s law \ref{eqn:chapter3Notes:20} gives

\label{eqn:potentialMethods:40}
\spacegrad \cross \boldsymbol{\mathcal{E}} = - \PD{t}{}\lr{ \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}} },

or
\label{eqn:potentialMethods:60}
\spacegrad \cross \lr{ \boldsymbol{\mathcal{E}} + \PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } } = 0.

A gradient representation of this curled quantity, say $$-\spacegrad \phi$$, will provide the required zero

\label{eqn:potentialMethods:80}
\boxed{
\boldsymbol{\mathcal{E}} = -\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }.
}

The final two Maxwell equations yield

\label{eqn:potentialMethods:100}
\begin{aligned}
-\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \spacegrad \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} } &= \mu \lr{ \boldsymbol{\mathcal{J}} + \epsilon \PD{t}{} \lr{ -\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } } } \\
\spacegrad \cdot \lr{ -\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } } &= q_e/\epsilon,
\end{aligned}

or
\label{eqn:potentialMethods:120}
\boxed{
\begin{aligned}
\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{e}} - \inv{v^2} \PDSq{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } - \spacegrad \lr{
\inv{v^2} \PD{t}{\phi}
+ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}}
}
&= -\mu \boldsymbol{\mathcal{J}} \\
\spacegrad^2 \phi + \PD{t}{} \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} } &= -q_e/\epsilon.
\end{aligned}
}

Note that the Lorentz condition $$\PDi{t}{(\phi/v^2)} + \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} = 0$$ can be imposed to decouple these, leaving non-homogeneous wave equations for the vector and scalar potentials respectively.
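Spelling that decoupling out (a restatement for clarity, not an equation from the original post): substituting $$\spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} = -(1/v^2) \PDi{t}{\phi}$$ into both constraints leaves

```latex
\begin{aligned}
\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{e}} - \inv{v^2} \PDSq{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } &= -\mu \boldsymbol{\mathcal{J}} \\
\spacegrad^2 \phi - \inv{v^2} \PDSq{t}{ \phi } &= -q_e/\epsilon.
\end{aligned}
```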

### No electric sources

Without electric sources, a curl representation of the electric field can be assumed, satisfying Gauss’s law

\label{eqn:potentialMethods:140}
\boxed{
\boldsymbol{\mathcal{D}} = - \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{m}}.
}

Substitution into the Ampere-Maxwell law gives
\label{eqn:potentialMethods:160}
\spacegrad \cross \lr{ \boldsymbol{\mathcal{H}} + \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}} } = 0.

This is satisfied with any gradient, say, $$-\spacegrad \phi_m$$, providing a potential representation for the magnetic field

\label{eqn:potentialMethods:180}
\boxed{
\boldsymbol{\mathcal{H}} = -\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}.
}

The remaining Maxwell equations provide the required constraints on the potentials

\label{eqn:potentialMethods:220}
-\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{m}} + \spacegrad \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } = -\epsilon
\lr{
-\boldsymbol{\mathcal{M}} - \mu \PD{t}{}
\lr{
-\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}
}
}

\label{eqn:potentialMethods:240}
\spacegrad \cdot
\lr{
-\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}
}
= \inv{\mu} q_m,

or
\label{eqn:potentialMethods:260}
\boxed{
\begin{aligned}
\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{m}} - \inv{v^2} \PDSq{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}} - \spacegrad \lr{ \inv{v^2} \PD{t}{\phi_m} + \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } &= -\epsilon \boldsymbol{\mathcal{M}} \\
\spacegrad^2 \phi_m + \PD{t}{}\lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } &= -\inv{\mu} q_m.
\end{aligned}
}

The general solution to Maxwell’s equations is therefore
\label{eqn:potentialMethods:280}
\begin{aligned}
\boldsymbol{\mathcal{E}} &=
-\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
- \inv{\epsilon} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
\boldsymbol{\mathcal{H}} &=
\inv{\mu} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}}
-\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}},
\end{aligned}

subject to the constraints \ref{eqn:potentialMethods:120} and \ref{eqn:potentialMethods:260}.

### Potential operator structure

Knowing that there is a simple underlying structure to the potential representation of the electromagnetic field in the STA formalism inspires the question of whether that structure can be found directly using the scalar and vector potentials determined above.

Specifically, what is the multivector representation \ref{eqn:potentialMethods:1020} of the electromagnetic field in terms of all the individual potential variables, and can an underlying structure for that field representation be found? The composite field is

\label{eqn:potentialMethods:280b}
\boldsymbol{\mathcal{F}}
=
-\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
- \inv{\epsilon} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
+ I \eta
\lr{
\inv{\mu} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}}
-\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}
}.

Can this be factored into a multivector operator and multivector potentials? Expanding the cross products provides some direction.

\label{eqn:potentialMethods:1040}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
- \PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
- \eta \PD{t}{I \boldsymbol{\mathcal{A}}^{\mathrm{m}}}
- \spacegrad \lr{ \phi + \eta I \phi_m } \\
&\quad + \frac{\eta}{2 \mu} \lr{ \rspacegrad \boldsymbol{\mathcal{A}}^{\mathrm{e}} - \boldsymbol{\mathcal{A}}^{\mathrm{e}} \lspacegrad }
+ \frac{1}{2 \epsilon} \lr{ \rspacegrad I \boldsymbol{\mathcal{A}}^{\mathrm{m}} - I \boldsymbol{\mathcal{A}}^{\mathrm{m}} \lspacegrad }.
\end{aligned}

Observe that the
gradient and the time partials can be grouped together

\label{eqn:potentialMethods:1060}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
- \PD{t}{ } \lr{\boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \boldsymbol{\mathcal{A}}^{\mathrm{m}}}
- \spacegrad \lr{ \phi + \eta I \phi_m }
+ \frac{v}{2} \lr{ \rspacegrad (\boldsymbol{\mathcal{A}}^{\mathrm{e}} + I \eta \boldsymbol{\mathcal{A}}^{\mathrm{m}}) - (\boldsymbol{\mathcal{A}}^{\mathrm{e}} + I \eta \boldsymbol{\mathcal{A}}^{\mathrm{m}}) \lspacegrad } \\
&=
\inv{2} \lr{
\lr{ \rspacegrad - \inv{v} {\stackrel{ \rightarrow }{\partial_t}} } \lr{ v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}} }
-
\lr{ v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}}} \lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
} \\
&\quad + \inv{2} \lr{
\lr{ \rspacegrad - \inv{v} {\stackrel{ \rightarrow }{\partial_t}} } \lr{ -\phi - \eta I \phi_m }
- \lr{ \phi + \eta I \phi_m } \lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
}
,
\end{aligned}

or

\label{eqn:potentialMethods:1080}
\boxed{
\boldsymbol{\mathcal{F}}
=
\inv{2} \Biglr{
\lr{ \rspacegrad - \inv{v} {\stackrel{ \rightarrow }{\partial_t}} }
\lr{
- \phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta I v \boldsymbol{\mathcal{A}}^{\mathrm{m}}
- \eta I \phi_m
}
-
\lr{
\phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta I v \boldsymbol{\mathcal{A}}^{\mathrm{m}}
+ \eta I \phi_m
}
\lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
}
.
}

There’s a conjugate structure to the potential on each side of the curl operation where we see a sign change for the scalar and pseudoscalar elements only. The reason for this becomes more clear in the STA formalism.

## Potentials in the STA formalism.

Maxwell’s equation in its explicit 3D form \ref{eqn:potentialMethods:300} can be
converted to STA form by introducing a four-vector basis $$\setlr{ \gamma_\mu }$$, where the spatial basis
$$\setlr{ \Be_k = \gamma_k \gamma_0 }$$
is expressed in terms of the Dirac basis $$\setlr{ \gamma_\mu }$$.
Multiplying from the left with $$\gamma_0$$, an STA form of Maxwell’s equation
\ref{eqn:potentialMethods:320}
is obtained,
where
\label{eqn:potentialMethods:340}
\begin{aligned}
J &= \gamma^\mu J_\mu = ( v q_e, \boldsymbol{\mathcal{J}} ) \\
M &= \gamma^\mu M_\mu = ( v q_m, \boldsymbol{\mathcal{M}} ) \\
\grad &= \gamma^\mu \partial_\mu = ( (1/v) \partial_t, \spacegrad ) \\
I &= \gamma_0 \gamma_1 \gamma_2 \gamma_3,
\end{aligned}

Here the metric choice is $$\gamma_0^2 = 1 = -\gamma_k^2$$. Note that in this representation the electromagnetic field $$\boldsymbol{\mathcal{F}} = \boldsymbol{\mathcal{E}} + \eta I \boldsymbol{\mathcal{H}}$$ is a bivector, not a multivector as it is in the explicit (frame-dependent) 3D representation of \ref{eqn:potentialMethods:300}.

A potential representation can be obtained as before by considering electric and magnetic sources in sequence and using superposition to assemble a complete potential.

### No magnetic sources

Without magnetic sources, Maxwell’s equation splits into vector and trivector terms of the form

\label{eqn:potentialMethods:380}
\grad \cdot \boldsymbol{\mathcal{F}} = \eta J

\label{eqn:potentialMethods:400}
\grad \wedge \boldsymbol{\mathcal{F}} = 0.

A four-vector curl representation of the field will satisfy \ref{eqn:potentialMethods:400} allowing an immediate potential solution

\label{eqn:potentialMethods:560}
\boxed{
\begin{aligned}
&\boldsymbol{\mathcal{F}} = \grad \wedge {A^{\mathrm{e}}} \\
&\grad^2 {A^{\mathrm{e}}} - \grad \lr{ \grad \cdot {A^{\mathrm{e}}} } = \eta J.
\end{aligned}
}

This can be put into correspondence with \ref{eqn:potentialMethods:120} by noting that

\label{eqn:potentialMethods:460}
\begin{aligned}
\grad^2 &= (\gamma^\mu \partial_\mu) \cdot (\gamma^\nu \partial_\nu) = \inv{v^2} \partial_{tt} - \spacegrad^2 \\
\gamma_0 {A^{\mathrm{e}}} &= \gamma_0 \gamma^\mu {A^{\mathrm{e}}}_\mu = {A^{\mathrm{e}}}_0 + \Be_k {A^{\mathrm{e}}}_k = {A^{\mathrm{e}}}_0 + \BA^{\mathrm{e}} \\
\gamma_0 \grad &= \gamma_0 \gamma^\mu \partial_\mu = \inv{v} \partial_t + \spacegrad \\
\grad \cdot {A^{\mathrm{e}}} &= \partial_\mu {A^{\mathrm{e}}}^\mu = \inv{v} \partial_t {A^{\mathrm{e}}}_0 - \spacegrad \cdot \BA^{\mathrm{e}},
\end{aligned}

so multiplying from the left with $$\gamma_0$$ gives

\label{eqn:potentialMethods:480}
\lr{ \inv{v^2} \partial_{tt} - \spacegrad^2 } \lr{ {A^{\mathrm{e}}}_0 + \BA^{\mathrm{e}} } - \lr{ \inv{v} \partial_t + \spacegrad }\lr{ \inv{v} \partial_t {A^{\mathrm{e}}}_0 - \spacegrad \cdot \BA^{\mathrm{e}} } = \eta( v q_e - \boldsymbol{\mathcal{J}} ),

or

\label{eqn:potentialMethods:520}
\lr{ \inv{v^2} \partial_{tt} - \spacegrad^2 } \BA^{\mathrm{e}} - \spacegrad \lr{ \inv{v} \partial_t {A^{\mathrm{e}}}_0 - \spacegrad \cdot \BA^{\mathrm{e}} } = -\eta \boldsymbol{\mathcal{J}}

\label{eqn:potentialMethods:540}
\spacegrad^2 {A^{\mathrm{e}}}_0 - \inv{v} \partial_t \lr{ \spacegrad \cdot \BA^{\mathrm{e}} } = -q_e/\epsilon.

So $${A^{\mathrm{e}}}_0 = \phi$$ and $$-\ifrac{\BA^{\mathrm{e}}}{v} = \boldsymbol{\mathcal{A}}^{\mathrm{e}}$$, or

\label{eqn:potentialMethods:600}
\boxed{
{A^{\mathrm{e}}} = \gamma_0\lr{ \phi - v \boldsymbol{\mathcal{A}}^{\mathrm{e}} }.
}

### No electric sources

Without electric sources, Maxwell’s equation now splits into

\label{eqn:potentialMethods:640}
\grad \cdot \boldsymbol{\mathcal{F}} = 0

\label{eqn:potentialMethods:660}
\grad \wedge \boldsymbol{\mathcal{F}} = -I M.

Here the dual of an STA curl yields a solution

\label{eqn:potentialMethods:680}
\boxed{
\boldsymbol{\mathcal{F}} = I ( \grad \wedge {A^{\mathrm{m}}} ).
}

Substituting this gives

\label{eqn:potentialMethods:720}
\begin{aligned}
0
&=
\grad \cdot (I ( \grad \wedge {A^{\mathrm{m}}} ) ) \\
&=
\gpgradeone{ \grad I ( \grad \wedge {A^{\mathrm{m}}} ) } \\
&=
-I \grad \wedge ( \grad \wedge {A^{\mathrm{m}}} ).
\end{aligned}

\label{eqn:potentialMethods:740}
\begin{aligned}
-I M
&=
\grad \wedge (I ( \grad \wedge {A^{\mathrm{m}}} ) ) \\
&=
\gpgradethree{ \grad I ( \grad \wedge {A^{\mathrm{m}}} ) } \\
&=
-I \grad \cdot ( \grad \wedge {A^{\mathrm{m}}} ).
\end{aligned}

The $$\grad \cdot \boldsymbol{\mathcal{F}}$$ relation \ref{eqn:potentialMethods:720} is identically zero as desired, leaving

\label{eqn:potentialMethods:760}
\boxed{
\grad^2 {A^{\mathrm{m}}} - \grad \lr{ \grad \cdot {A^{\mathrm{m}}} } = M.
}

So the general solution with both electric and magnetic sources is

\label{eqn:potentialMethods:800}
\boxed{
\boldsymbol{\mathcal{F}} = \grad \wedge {A^{\mathrm{e}}} + I (\grad \wedge {A^{\mathrm{m}}}),
}

subject to the constraints of \ref{eqn:potentialMethods:560} and \ref{eqn:potentialMethods:760}. As before, the four-potential $${A^{\mathrm{m}}}$$ can be put into correspondence with the conventional scalar and vector potentials by left multiplying with $$\gamma_0$$, which gives

\label{eqn:potentialMethods:820}
\lr{ \inv{v^2} \partial_{tt} - \spacegrad^2 } \lr{ {A^{\mathrm{m}}}_0 + \BA^{\mathrm{m}} } - \lr{ \inv{v} \partial_t + \spacegrad }\lr{ \inv{v} \partial_t {A^{\mathrm{m}}}_0 - \spacegrad \cdot \BA^{\mathrm{m}} } = v q_m - \boldsymbol{\mathcal{M}},

or
\label{eqn:potentialMethods:860}
\lr{ \inv{v^2} \partial_{tt} - \spacegrad^2 } \BA^{\mathrm{m}} - \spacegrad \lr{ \inv{v} \partial_t {A^{\mathrm{m}}}_0 - \spacegrad \cdot \BA^{\mathrm{m}} } = - \boldsymbol{\mathcal{M}}

\label{eqn:potentialMethods:880}
\spacegrad^2 {A^{\mathrm{m}}}_0 - \inv{v} \partial_t \spacegrad \cdot \BA^{\mathrm{m}} = -v q_m.

Comparing with \ref{eqn:potentialMethods:260} shows that $${A^{\mathrm{m}}}_0/v = \mu \phi_m$$ and $$-\ifrac{\BA^{\mathrm{m}}}{v^2} = \mu \boldsymbol{\mathcal{A}}^{\mathrm{m}}$$, or

\label{eqn:potentialMethods:900}
\boxed{
{A^{\mathrm{m}}} = \gamma_0 \eta \lr{ \phi_m - v \boldsymbol{\mathcal{A}}^{\mathrm{m}} }.
}

### Potential operator structure

Observe that there is an underlying uniform structure of the differential operator that acts on the potential to produce the electromagnetic field. Expressed as a linear operator of the
gradient and the potentials, that is

$$\boldsymbol{\mathcal{F}} = L(\lrgrad, {A^{\mathrm{e}}}, {A^{\mathrm{m}}})$$

\label{eqn:potentialMethods:980}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
L(\grad, {A^{\mathrm{e}}}, {A^{\mathrm{m}}}) \\
&= \grad \wedge {A^{\mathrm{e}}} + I (\grad \wedge {A^{\mathrm{m}}}) \\
&=
\inv{2} \lr{ \rgrad {A^{\mathrm{e}}} - {A^{\mathrm{e}}} \lgrad }
+ \frac{I}{2} \lr{ \rgrad {A^{\mathrm{m}}} - {A^{\mathrm{m}}} \lgrad } \\
&=
\inv{2} \lr{ \rgrad {A^{\mathrm{e}}} - {A^{\mathrm{e}}} \lgrad }
+ \frac{1}{2} \lr{ -\rgrad I {A^{\mathrm{m}}} - I {A^{\mathrm{m}}} \lgrad } \\
&=
\inv{2} \lr{ \rgrad ({A^{\mathrm{e}}} -I {A^{\mathrm{m}}}) - ({A^{\mathrm{e}}} + I {A^{\mathrm{m}}}) \lgrad }
,
\end{aligned}

or
\label{eqn:potentialMethods:1000}
\boxed{
\boldsymbol{\mathcal{F}}
=
\inv{2} \lr{ \rgrad ({A^{\mathrm{e}}} -I {A^{\mathrm{m}}}) - ({A^{\mathrm{e}}} - I {A^{\mathrm{m}}})^\dagger \lgrad }
.
}

Observe that \ref{eqn:potentialMethods:1000} can be
put into correspondence with \ref{eqn:potentialMethods:1080} using a factoring of unity $$1 = \gamma_0 \gamma_0$$

\label{eqn:potentialMethods:1100}
\boldsymbol{\mathcal{F}}
=
\inv{2} \lr{ (-\rgrad \gamma_0) (-\gamma_0 ({A^{\mathrm{e}}} -I {A^{\mathrm{m}}})) - (({A^{\mathrm{e}}} + I {A^{\mathrm{m}}}) \gamma_0)(\gamma_0 \lgrad) },

where

\label{eqn:potentialMethods:1140}
\begin{aligned}
-\grad \gamma_0
&=
-(\gamma^0 \partial_0 + \gamma^k \partial_k) \gamma_0 \\
&=
-\partial_0 - \gamma^k \gamma_0 \partial_k \\
&=
-\inv{v} \partial_t + \spacegrad
,
\end{aligned}

\label{eqn:potentialMethods:1160}
\begin{aligned}
\gamma_0 \grad
&=
\gamma_0 (\gamma^0 \partial_0 + \gamma^k \partial_k) \\
&=
\partial_0 - \gamma^k \gamma_0 \partial_k \\
&=
\inv{v} \partial_t + \spacegrad
,
\end{aligned}

and
\label{eqn:potentialMethods:1200}
\begin{aligned}
-\gamma_0 ( {A^{\mathrm{e}}} – I {A^{\mathrm{m}}} )
&=
-\gamma_0 \gamma_0 \lr{ \phi -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \lr{ \phi_m - v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } } \\
&=
-\lr{ \phi -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \phi_m - \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}} } \\
&=
- \phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}}
- \eta I \phi_m
\end{aligned}

\label{eqn:potentialMethods:1220}
\begin{aligned}
( {A^{\mathrm{e}}} + I {A^{\mathrm{m}}} )\gamma_0
&=
\lr{ \gamma_0 \lr{ \phi -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} } + I \gamma_0 \eta \lr{ \phi_m - v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } } \gamma_0 \\
&=
\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + I \eta \phi_m + I \eta v \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
&=
\phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}}
+ \eta I \phi_m
,
\end{aligned}

This recovers \ref{eqn:potentialMethods:1080} as desired.

## Potentials in the 3D Euclidean formalism

In the conventional scalar plus vector differential representation of Maxwell’s equations \ref{eqn:chapter3Notes:20}…, given electric(magnetic) sources the structure of the electric(magnetic) potential follows from first setting the magnetic(electric) field equal to the curl of a vector potential. The procedure for the STA GA form of Maxwell’s equation was similar, where it was immediately evident that the field could be set to the four-curl of a four-vector potential (or the dual of such a curl for magnetic sources).

In the 3D GA representation, there is no immediate rationale for introducing a curl or the equivalent to a four-curl representation of the field. This is reconciled by recognizing that representability of the field (or a component of it) by a curl is not actually fundamental. Instead, observe that the two sided gradient action on a potential to generate the electromagnetic field in the STA representation of \ref{eqn:potentialMethods:1000} serves to select the grade two component of the product of the gradient and the multivector potential $${A^{\mathrm{e}}} - I {A^{\mathrm{m}}}$$, and that this can in fact be written as
a single sided gradient operation on a potential, provided the multivector product is filtered with a four-bivector grade selection operation

\label{eqn:potentialMethods:1240}
\boxed{
\boldsymbol{\mathcal{F}} = \gpgradetwo{ \grad \lr{ {A^{\mathrm{e}}} - I {A^{\mathrm{m}}} } }.
}

Similarly, it can be observed that the
specific function of the conjugate structure in the two sided potential representation of
\ref{eqn:potentialMethods:1080}
is to discard all the scalar and pseudoscalar grades in the multivector product. This means that a single sided potential can also be used, provided it is wrapped in a grade selection operation

\label{eqn:potentialMethods:1260}
\boxed{
\boldsymbol{\mathcal{F}} =
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} }
\lr{
- \phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta I v \boldsymbol{\mathcal{A}}^{\mathrm{m}}
- \eta I \phi_m
} }{1,2}.
}

It is this grade selection operation that is really the fundamental defining action in the potential of the STA and conventional 3D representations of Maxwell’s equations. So, given Maxwell’s equation in the 3D GA representation, defining a potential representation for the field is really just a demand that the field have the structure

\label{eqn:potentialMethods:1320}
\boldsymbol{\mathcal{F}} = \gpgrade{ (\alpha \spacegrad + \beta \partial_t)\lr{ A_0 + A_1 + I( A_0' + A_1' ) } }{1,2}.

This is a mandate that the electromagnetic field is the grades 1 and 2 components of the product of the space and time derivative operators acting on a multivector field $$A = \sum_{k=0}^3 A_k = A_0 + A_1 + I( A_0' + A_1' )$$ that can potentially have any grade components. There are more degrees of freedom in this specification than required, since the multivector can absorb one of the $$\alpha$$ or $$\beta$$ coefficients, so without loss of generality, one of these (say $$\alpha$$) can be set to 1.

Expanding \ref{eqn:potentialMethods:1320} gives

\label{eqn:potentialMethods:1340}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
\spacegrad A_0
+ \beta \partial_t A_1
- \spacegrad \cross A_1'
+ I (\spacegrad \cross A_1
+ \beta \partial_t A_1'
+ \spacegrad A_0') \\
&=
\boldsymbol{\mathcal{E}} + I \eta \boldsymbol{\mathcal{H}}.
\end{aligned}

This naturally has all the right mixes of curls, gradients and time derivatives, all following as direct consequences of applying a grade selection operation to the action of a “spacetime gradient” on a general multivector potential.

The conclusion is that the potential representation of the field is

\label{eqn:potentialMethods:1360}
\boldsymbol{\mathcal{F}} =
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } A }{1,2},

where $$A$$ is a multivector potentially containing all grades: grades 0,1 are required for electric sources, and grades 2,3 for magnetic sources. When it is desirable to refer back to the conventional scalar and vector potentials this multivector potential can be written as $$A = -\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \lr{ -\phi_m + v \boldsymbol{\mathcal{A}}^{\mathrm{m}} }$$.
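This grade selected representation is easy to spot check numerically. The sketch below (a verification aid, not part of the derivation) implements the Cl(3,0) geometric product on an 8 component blade basis, builds $$A$$ from arbitrary polynomial potential components, and compares the grades 1,2 selection of the spacetime gradient product against the conventional fields \ref{eqn:potentialMethods:280}, with all derivatives taken by central differences. The component functions and the values of $$\epsilon, \mu$$ are hypothetical test choices.

```python
import math

# Multivectors in Cl(3,0) as 8-component lists indexed by blade bitmask:
# bit 1 -> e1, bit 2 -> e2, bit 4 -> e3 (index 3 = e1 e2, index 7 = e1 e2 e3).

def gp(x, y):
    # geometric product, Euclidean metric
    out = [0.0] * 8
    for a in range(8):
        if x[a] == 0.0:
            continue
        for b in range(8):
            if y[b] == 0.0:
                continue
            s, u = x[a] * y[b], a >> 1
            while u:  # canonical reordering sign
                if bin(u & b).count("1") & 1:
                    s = -s
                u >>= 1
            out[a ^ b] += s
    return out

def add(*ms): return [sum(c) for c in zip(*ms)]
def smul(k, m): return [k * c for c in m]
def grade12(m): return [c if bin(i).count("1") in (1, 2) else 0.0 for i, c in enumerate(m)]
def vec(v): return [0.0, v[0], v[1], 0.0, v[2], 0.0, 0.0, 0.0]
def scal(s): return [s, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

E1, E2, E3 = vec((1, 0, 0)), vec((0, 1, 0)), vec((0, 0, 1))
I3 = [0.0] * 7 + [1.0]

eps, mu = 2.0, 3.0                      # hypothetical media constants
v, eta = 1.0 / math.sqrt(eps * mu), math.sqrt(mu / eps)

# arbitrary polynomial potentials (test data only)
phi  = lambda t, x, y, z: x * y + t * z
Ae   = lambda t, x, y, z: (y * z, x * z + t * x, x * y + t * t)
phim = lambda t, x, y, z: x * z + y * t
Am   = lambda t, x, y, z: (x * x + t * y, y * z, z * x + t * z)

def A(t, x, y, z):
    # A = -phi + v Ae + eta I (-phi_m + v Am)
    return add(scal(-phi(t, x, y, z)),
               smul(v, vec(Ae(t, x, y, z))),
               smul(eta, gp(I3, add(scal(-phim(t, x, y, z)),
                                    smul(v, vec(Am(t, x, y, z)))))))

h = 1e-4
def d(f, i, pt):
    # central difference in coordinate i (0 = t, 1..3 = x, y, z)
    q = list(pt); q[i] += h; a = f(*q); q[i] -= 2.0 * h; b = f(*q)
    if isinstance(a, (int, float)):
        return (a - b) / (2.0 * h)
    return type(a)((p - m) / (2.0 * h) for p, m in zip(a, b))

pt = (0.3, 1.1, 0.7, -0.4)

# F = <(grad - (1/v) d/dt) A>_{1,2}
gradA = add(gp(E1, d(A, 1, pt)), gp(E2, d(A, 2, pt)), gp(E3, d(A, 3, pt)))
F_ga = grade12(add(gradA, smul(-1.0 / v, d(A, 0, pt))))

# conventional fields of eqn 280
def curl(f, pt):
    dx, dy, dz = d(f, 1, pt), d(f, 2, pt), d(f, 3, pt)
    return (dy[2] - dz[1], dz[0] - dx[2], dx[1] - dy[0])

gphi, gphim = [tuple(d(f, i, pt) for i in (1, 2, 3)) for f in (phi, phim)]
dtAe, dtAm, cAe, cAm = d(Ae, 0, pt), d(Am, 0, pt), curl(Ae, pt), curl(Am, pt)
E = tuple(-gphi[i] - dtAe[i] - cAm[i] / eps for i in range(3))
H = tuple(cAe[i] / mu - gphim[i] - dtAm[i] for i in range(3))
F_conv = add(vec(E), smul(eta, gp(I3, vec(H))))
```

At the sample point the grade selected product matches $$\boldsymbol{\mathcal{E}} + I \eta \boldsymbol{\mathcal{H}}$$ component by component, and the scalar and pseudoscalar slots of the selection are identically zero.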

## Gauge transformations

Recall that for electric sources the magnetic field is of the form

\label{eqn:potentialMethods:1380}
\boldsymbol{\mathcal{B}} = \spacegrad \cross \boldsymbol{\mathcal{A}},

so adding the gradient of any scalar field to the potential $$\boldsymbol{\mathcal{A}}' = \boldsymbol{\mathcal{A}} + \spacegrad \psi$$
does not change the magnetic field

\label{eqn:potentialMethods:1400}
\begin{aligned}
\boldsymbol{\mathcal{B}}'
&= \spacegrad \cross \lr{ \boldsymbol{\mathcal{A}} + \spacegrad \psi } \\
&= \spacegrad \cross \boldsymbol{\mathcal{A}} \\
&= \boldsymbol{\mathcal{B}}.
\end{aligned}

The electric field with this changed potential is

\label{eqn:potentialMethods:1420}
\begin{aligned}
\boldsymbol{\mathcal{E}}'
&= -\spacegrad \phi' - \partial_t \lr{ \BA + \spacegrad \psi} \\
&= -\spacegrad \lr{ \phi' + \partial_t \psi } - \partial_t \BA,
\end{aligned}

so if
\label{eqn:potentialMethods:1440}
\phi' = \phi - \partial_t \psi,

the electric field will also be unaltered by this transformation.
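Since this argument only uses equality of mixed partials, it is easy to verify numerically. In the sketch below the potentials and the gauge function are arbitrary polynomial test choices (not anything from the text), and all derivatives are central differences.

```python
h = 1e-4

# hypothetical smooth test fields
phi = lambda t, x, y, z: x * y - 2.0 * t * z
A   = lambda t, x, y, z: (x * y * z, t * x + x * x, y * z - t * x)
psi = lambda t, x, y, z: x * y * z + t * x * y

def d(f, i, pt):
    # central difference in coordinate i (0 = t, 1..3 = x, y, z)
    q = list(pt); q[i] += h; a = f(*q); q[i] -= 2.0 * h; b = f(*q)
    if isinstance(a, tuple):
        return tuple((p - m) / (2.0 * h) for p, m in zip(a, b))
    return (a - b) / (2.0 * h)

def grad(f, pt):
    return tuple(d(f, i, pt) for i in (1, 2, 3))

def curl(F, pt):
    dx, dy, dz = d(F, 1, pt), d(F, 2, pt), d(F, 3, pt)
    return (dy[2] - dz[1], dz[0] - dx[2], dx[1] - dy[0])

def E_field(phi_, A_, pt):
    g, dtA = grad(phi_, pt), d(A_, 0, pt)
    return tuple(-g[i] - dtA[i] for i in range(3))

# gauge transformed potentials: A' = A + grad psi, phi' = phi - dt psi
Ap   = lambda t, x, y, z: tuple(a + g for a, g in zip(A(t, x, y, z), grad(psi, (t, x, y, z))))
phip = lambda t, x, y, z: phi(t, x, y, z) - d(psi, 0, (t, x, y, z))

pt = (0.2, 0.9, -0.5, 1.3)
B, Bp = curl(A, pt), curl(Ap, pt)
E, Ep = E_field(phi, A, pt), E_field(phip, Ap, pt)
```

Both $$\boldsymbol{\mathcal{B}}$$ and $$\boldsymbol{\mathcal{E}}$$ come back unchanged to numerical precision at the sample point.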

In the STA representation, the potential can similarly be altered by adding any (four-)gradient without changing the field. For example, with only electric sources

\label{eqn:potentialMethods:1460}
\boldsymbol{\mathcal{F}} = \grad \wedge (A + \grad \psi) = \grad \wedge A

and for electric or magnetic sources

\label{eqn:potentialMethods:1480}
\boldsymbol{\mathcal{F}} = \grad \wedge \lr{ {A^{\mathrm{e}}} + \grad \psi } + I \lr{ \grad \wedge \lr{ {A^{\mathrm{m}}} + \grad \psi } } = \grad \wedge {A^{\mathrm{e}}} + I \lr{ \grad \wedge {A^{\mathrm{m}}} }.

In the 3D GA representation, where the field is given by \ref{eqn:potentialMethods:1360}, there is no field that is being curled to add a gradient to. However, if the scalar and vector potentials transform as

\label{eqn:potentialMethods:1500}
\begin{aligned}
\boldsymbol{\mathcal{A}} &\rightarrow \boldsymbol{\mathcal{A}} + \spacegrad \psi \\
\phi &\rightarrow \phi - \partial_t \psi,
\end{aligned}

then the multivector potential transforms as
\label{eqn:potentialMethods:1520}
-\phi + v \boldsymbol{\mathcal{A}}
\rightarrow -\phi + v \boldsymbol{\mathcal{A}} + \partial_t \psi + v \spacegrad \psi,

so the electromagnetic field is unchanged when the multivector potential is transformed as

\label{eqn:potentialMethods:1540}
A \rightarrow A + \lr{ \spacegrad + \inv{v} \partial_t } \psi,

where $$\psi$$ is any field that has scalar or pseudoscalar grades. Viewed in terms of grade selection, this makes perfect sense, since the transformed field is

\label{eqn:potentialMethods:1560}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&\rightarrow
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } \lr{ A + \lr{ \spacegrad + \inv{v} \partial_t } \psi } }{1,2} \\
&=
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } A + \lr{ \spacegrad^2 - \inv{v^2} \partial_{tt} } \psi }{1,2} \\
&=
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } A }{1,2}.
\end{aligned}

The $$\psi$$ contribution to the grade selection operator is killed because it has scalar or pseudoscalar grades.

## Lorenz gauge

Maxwell’s equations are completely decoupled if the potential can be found such that

\label{eqn:potentialMethods:1580}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } A }{1,2} \\
&=
\lr{ \spacegrad - \inv{v} \PD{t}{} } A.
\end{aligned}

When this is the case, Maxwell’s equations are reduced to four non-homogeneous potential wave equations

\label{eqn:potentialMethods:1620}
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } A = J,

that is

\label{eqn:potentialMethods:1600}
\begin{aligned}
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \phi &= - \inv{\epsilon} q_e \\
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \boldsymbol{\mathcal{A}}^{\mathrm{e}} &= - \mu \boldsymbol{\mathcal{J}} \\
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \phi_m &= - \inv{\mu} q_m \\
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \boldsymbol{\mathcal{A}}^{\mathrm{m}} &= - \epsilon \boldsymbol{\mathcal{M}}.
\end{aligned}
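As a sanity check of this decoupled wave structure, any profile propagating rigidly at speed $$v$$ should be annihilated by the wave operator away from sources. A minimal numeric check with an arbitrary cubic profile (hypothetical test data):

```python
v = 0.5
f = lambda u: u ** 3 + 2.0 * u      # arbitrary smooth profile
phi = lambda t, x: f(x - v * t)     # rigidly propagating candidate solution

h = 1e-3
def d2(g, i, t, x):
    # second central difference in coordinate i (0 = t, 1 = x)
    q = [t, x]; q[i] += h; a = g(*q); q[i] -= 2.0 * h; b = g(*q)
    return (a - 2.0 * g(t, x) + b) / h ** 2

t0, x0 = 0.7, 1.2
wave = d2(phi, 1, t0, x0) - d2(phi, 0, t0, x0) / v ** 2
```

The residual vanishes to roundoff, consistent with the homogeneous form of these wave equations.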

There should be no a priori assumption that such a field representation has no scalar, nor pseudoscalar components. That explicit expansion in grades is

\label{eqn:potentialMethods:1640}
\begin{aligned}
\lr{ \spacegrad - \inv{v} \PD{t}{} } A
&=
\lr{ \spacegrad - \inv{v} \PD{t}{} } \lr{ -\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \lr{ -\phi_m + v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } } \\
&=
\inv{v} \partial_t \phi
+ v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
&\quad - \spacegrad \phi
+ I \eta v \spacegrad \wedge \boldsymbol{\mathcal{A}}^{\mathrm{m}}
- \partial_t \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
&\quad + v \spacegrad \wedge \boldsymbol{\mathcal{A}}^{\mathrm{e}}
- \eta I \spacegrad \phi_m
- I \eta \partial_t \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
&\quad + \eta I \inv{v} \partial_t \phi_m
+ I \eta v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}},
\end{aligned}

so if this potential representation has only vector and bivector grades, it must be true that

\label{eqn:potentialMethods:1660}
\begin{aligned}
\inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} &= 0 \\
\inv{v} \partial_t \phi_m + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} &= 0.
\end{aligned}

The first is the well known Lorenz gauge condition, whereas the second is the dual of that condition for magnetic sources.

Should one of these conditions, say the Lorenz condition for the electric source potentials, be non-zero, then it is possible to make a potential transformation for which this condition is zero

\label{eqn:potentialMethods:1680}
\begin{aligned}
0
&\ne
\inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
&=
\inv{v} \partial_t (\phi' - \partial_t \psi) + v \spacegrad \cdot (\boldsymbol{\mathcal{A}}' + \spacegrad \psi) \\
&=
\inv{v} \partial_t \phi' + v \spacegrad \cdot \boldsymbol{\mathcal{A}}'
+ v \lr{ \spacegrad^2 - \inv{v^2} \partial_{tt} } \psi,
\end{aligned}

so if $$\inv{v} \partial_t \phi' + v \spacegrad \cdot \boldsymbol{\mathcal{A}}'$$ is to be zero, $$\psi$$ must be found such that
\label{eqn:potentialMethods:1700}
\inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}}
= v \lr{ \spacegrad^2 - \inv{v^2} \partial_{tt} } \psi.


## Gradient, divergence, curl and Laplacian in cylindrical coordinates

In class it was suggested that the identity

\label{eqn:laplacianCylindrical:20}
\spacegrad^2 \BA
=
\spacegrad \lr{ \spacegrad \cdot \BA }
-\spacegrad \cross \lr{ \spacegrad \cross \BA },

can be used to compute the Laplacian in non-rectangular coordinates. Is that the easiest way to do this?

How about just sequential applications of the gradient on the vector? Let’s start with the vector product of the gradient and the vector. First recall that the cylindrical representation of the gradient is

\label{eqn:laplacianCylindrical:80}
\spacegrad = \rhocap \partial_\rho + \frac{\phicap}{\rho} \partial_\phi + \zcap \partial_z,

where
\label{eqn:laplacianCylindrical:100}
\begin{aligned}
\rhocap &= \Be_1 e^{\Be_1 \Be_2 \phi} \\
\phicap &= \Be_2 e^{\Be_1 \Be_2 \phi}.
\end{aligned}

Taking $$\phi$$ derivatives of \ref{eqn:laplacianCylindrical:100}, we have

\label{eqn:laplacianCylindrical:120}
\begin{aligned}
\partial_\phi \rhocap &= \Be_1 \Be_1 \Be_2 e^{\Be_1 \Be_2 \phi} = \Be_2 e^{\Be_1 \Be_2 \phi} = \phicap \\
\partial_\phi \phicap &= \Be_2 \Be_1 \Be_2 e^{\Be_1 \Be_2 \phi} = -\Be_1 e^{\Be_1 \Be_2 \phi} = -\rhocap.
\end{aligned}
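In coordinates, $$\rhocap = (\cos\phi, \sin\phi)$$ and $$\phicap = (-\sin\phi, \cos\phi)$$, so these rotation relations are simple to verify numerically:

```python
import math

rhocap = lambda p: (math.cos(p), math.sin(p))
phicap = lambda p: (-math.sin(p), math.cos(p))

h = 1e-6
def dphi(f, p):
    # central difference derivative of a tuple valued function of phi
    a, b = f(p + h), f(p - h)
    return tuple((u - w) / (2.0 * h) for u, w in zip(a, b))

p = 0.7
dr, dp = dphi(rhocap, p), dphi(phicap, p)
# expect dr = phicap(p) and dp = -rhocap(p)
```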

The gradient of a vector $$\BA = \rhocap A_\rho + \phicap A_\phi + \zcap A_z$$ is

\label{eqn:laplacianCylindrical:60}
\begin{aligned}
\spacegrad \BA
&=
\lr{ \rhocap \partial_\rho + \frac{\phicap}{\rho} \partial_\phi + \zcap \partial_z }
\lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z } \\
&=
\quad \rhocap \partial_\rho \lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z } \\
&\quad + \frac{\phicap}{\rho} \partial_\phi \lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z } \\
&\quad + \zcap \partial_z \lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z } \\
&=
\quad \rhocap \lr{ \rhocap \partial_\rho A_\rho + \phicap \partial_\rho A_\phi + \zcap \partial_\rho A_z } \\
&\quad + \frac{\phicap}{\rho} \lr{ \partial_\phi(\rhocap A_\rho) + \partial_\phi(\phicap A_\phi) + \zcap \partial_\phi A_z } \\
&\quad + \zcap \lr{ \rhocap \partial_z A_\rho + \phicap \partial_z A_\phi + \zcap \partial_z A_z } \\
&=
\quad \partial_\rho A_\rho + \rhocap \phicap \partial_\rho A_\phi + \rhocap \zcap \partial_\rho A_z \\
&\quad +\frac{1}{\rho} \lr{ A_\rho + \phicap \rhocap \partial_\phi A_\rho - \phicap \rhocap A_\phi + \partial_\phi A_\phi + \phicap \zcap \partial_\phi A_z } \\
&\quad + \zcap \rhocap \partial_z A_\rho + \zcap \phicap \partial_z A_\phi + \partial_z A_z \\
&=
\quad \partial_\rho A_\rho + \frac{1}{\rho} \lr{ A_\rho + \partial_\phi A_\phi } + \partial_z A_z \\
&\quad + \zcap \rhocap \lr{
\partial_z A_\rho
-\partial_\rho A_z
} \\
&\quad + \phicap \zcap \lr{
\inv{\rho} \partial_\phi A_z
- \partial_z A_\phi
} \\
&\quad + \rhocap \phicap \lr{
\partial_\rho A_\phi
- \inv{\rho} \lr{ \partial_\phi A_\rho - A_\phi }
},
\end{aligned}

As expected, we see that the gradient splits nicely into a dot and curl

\label{eqn:laplacianCylindrical:160}
\begin{aligned}
\spacegrad \BA
&= \spacegrad \cdot \BA + \spacegrad \wedge \BA \\
&= \spacegrad \cdot \BA + \rhocap \phicap \zcap (\spacegrad \cross \BA ),
\end{aligned}

where the cylindrical representation of the divergence is seen to be

\label{eqn:laplacianCylindrical:140}
\spacegrad \cdot \BA
=
\inv{\rho} \partial_\rho (\rho A_\rho) + \frac{1}{\rho} \partial_\phi A_\phi + \partial_z A_z,

and the cylindrical representation of the curl is

\label{eqn:laplacianCylindrical:180}
\spacegrad \cross \BA
=
\rhocap
\lr{
\inv{\rho} \partial_\phi A_z
- \partial_z A_\phi
}
+
\phicap
\lr{
\partial_z A_\rho
-\partial_\rho A_z
}
+
\inv{\rho} \zcap \lr{
\partial_\rho ( \rho A_\phi )
- \partial_\phi A_\rho
}.
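Both of these coordinate expansions can be spot checked against Cartesian central differences. In this sketch the cylindrical field components are arbitrary test functions (not from the text), converted to Cartesian components for the comparison; the divergence and the $$\zcap$$ component of the curl are compared at one sample point:

```python
import math

h = 1e-5

# arbitrary cylindrical-component test field
A_rho = lambda r, p, z: r * math.sin(p) + z
A_phi = lambda r, p, z: r * r * math.cos(p)
A_z   = lambda r, p, z: r * z * math.sin(p)

def cart(x, y, z):
    # the same field expressed in Cartesian components
    r, p = math.hypot(x, y), math.atan2(y, x)
    ar, ap = A_rho(r, p, z), A_phi(r, p, z)
    c, s = math.cos(p), math.sin(p)
    return (ar * c - ap * s, ar * s + ap * c, A_z(r, p, z))

def d(f, i, pt):
    # central difference in coordinate i
    q = list(pt); q[i] += h; a = f(*q); q[i] -= 2.0 * h; b = f(*q)
    if isinstance(a, tuple):
        return tuple((u - w) / (2.0 * h) for u, w in zip(a, b))
    return (a - b) / (2.0 * h)

x, y, z = 1.1, 0.6, 0.4
r, p = math.hypot(x, y), math.atan2(y, x)

dx, dy, dz = d(cart, 0, (x, y, z)), d(cart, 1, (x, y, z)), d(cart, 2, (x, y, z))
div_cart, curlz_cart = dx[0] + dy[1] + dz[2], dx[1] - dy[0]

# the claimed cylindrical representations, also by central differences
div_cyl = (d(lambda r_, p_, z_: r_ * A_rho(r_, p_, z_), 0, (r, p, z)) / r
           + d(A_phi, 1, (r, p, z)) / r
           + d(A_z, 2, (r, p, z)))
curlz_cyl = (d(lambda r_, p_, z_: r_ * A_phi(r_, p_, z_), 0, (r, p, z))
             - d(A_rho, 1, (r, p, z))) / r
```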

Should we want to, it is now possible to evaluate the Laplacian of $$\BA$$ using
\ref{eqn:laplacianCylindrical:20}
, which will have the following components

\label{eqn:laplacianCylindrical:220}
\begin{aligned}
\rhocap \cdot \lr{ \spacegrad^2 \BA }
&=
\partial_\rho
\lr{
\inv{\rho} \partial_\rho (\rho A_\rho) + \frac{1}{\rho} \partial_\phi A_\phi + \partial_z A_z
}
-
\lr{
\inv{\rho} \partial_\phi \lr{
\inv{\rho} \lr{
\partial_\rho ( \rho A_\phi ) - \partial_\phi A_\rho
}
}
- \partial_z \lr{
\partial_z A_\rho -\partial_\rho A_z
}
} \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho (\rho A_\rho)}
+ \partial_\rho \lr{ \frac{1}{\rho} \partial_\phi A_\phi}
+ \partial_{\rho z} A_z
- \inv{\rho^2}\partial_{\phi \rho} ( \rho A_\phi )
+ \inv{\rho^2}\partial_{\phi\phi} A_\rho
+ \partial_{zz} A_\rho
- \partial_{z\rho} A_z \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho (\rho A_\rho)}
+ \inv{\rho^2}\partial_{\phi\phi} A_\rho
+ \partial_{zz} A_\rho
- \frac{1}{\rho^2} \partial_\phi A_\phi
+ \frac{1}{\rho} \partial_{\rho\phi} A_\phi
- \inv{\rho^2}\partial_{\phi} A_\phi
- \inv{\rho}\partial_{\phi\rho} A_\phi \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho (\rho A_\rho)}
+ \inv{\rho^2}\partial_{\phi\phi} A_\rho
+ \partial_{zz} A_\rho
- \frac{2}{\rho^2} \partial_\phi A_\phi \\
&=
\inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\rho}
+ \inv{\rho^2}\partial_{\phi\phi} A_\rho
+ \partial_{zz} A_\rho
- \frac{A_\rho}{\rho^2}
- \frac{2}{\rho^2} \partial_\phi A_\phi,
\end{aligned}

\label{eqn:laplacianCylindrical:240}
\begin{aligned}
\phicap \cdot \lr{ \spacegrad^2 \BA }
&=
\inv{\rho} \partial_\phi
\lr{
\inv{\rho} \partial_\rho (\rho A_\rho) + \frac{1}{\rho} \partial_\phi A_\phi + \partial_z A_z
}
-
\lr{
\lr{
\partial_z \lr{
\inv{\rho} \partial_\phi A_z - \partial_z A_\phi
}
-\partial_\rho \lr{
\inv{\rho} \lr{ \partial_\rho ( \rho A_\phi ) - \partial_\phi A_\rho}
}
}
} \\
&=
\inv{\rho^2} \partial_{\phi\rho} (\rho A_\rho)
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \inv{\rho}\partial_{\phi z} A_z
– \inv{\rho} \partial_{z\phi} A_z
+ \partial_{z z} A_\phi
+\partial_\rho \lr{ \inv{\rho} \partial_\rho ( \rho A_\phi ) }
- \partial_\rho \lr{ \inv{\rho} \partial_\phi A_\rho} \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho ( \rho A_\phi ) }
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \partial_{z z} A_\phi
+ \inv{\rho^2} \partial_{\phi\rho} (\rho A_\rho)
+ \inv{\rho}\partial_{\phi z} A_z
- \inv{\rho} \partial_{z\phi} A_z
- \partial_\rho \lr{ \inv{\rho} \partial_\phi A_\rho} \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho ( \rho A_\phi ) }
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \partial_{z z} A_\phi
+ \inv{\rho^2} \partial_{\phi} A_\rho
+ \inv{\rho} \partial_{\phi\rho} A_\rho
+ \inv{\rho^2} \partial_\phi A_\rho
- \inv{\rho} \partial_{\rho\phi} A_\rho \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho ( \rho A_\phi ) }
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \partial_{z z} A_\phi
+ \frac{2}{\rho^2} \partial_{\phi} A_\rho \\
&=
\inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\phi }
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \partial_{z z} A_\phi
+ \frac{2}{\rho^2} \partial_{\phi} A_\rho
- \frac{A_\phi}{\rho^2},
\end{aligned}

\label{eqn:laplacianCylindrical:260}
\begin{aligned}
\zcap \cdot \lr{ \spacegrad^2 \BA }
&=
\partial_z
\lr{
\inv{\rho} \partial_\rho (\rho A_\rho) + \frac{1}{\rho} \partial_\phi A_\phi + \partial_z A_z
}
-
\inv{\rho} \lr{
\partial_\rho \lr{ \rho \lr{
\partial_z A_\rho -\partial_\rho A_z
}
}
– \partial_\phi \lr{
\inv{\rho} \partial_\phi A_z - \partial_z A_\phi
}
} \\
&=
\inv{\rho} \partial_{z\rho} (\rho A_\rho)
+ \frac{1}{\rho} \partial_{z\phi} A_\phi
+ \partial_{zz} A_z
- \inv{\rho}\partial_\rho \lr{ \rho \partial_z A_\rho }
+ \inv{\rho}\partial_\rho \lr{ \rho \partial_\rho A_z }
+ \inv{\rho^2} \partial_{\phi\phi} A_z
- \inv{\rho} \partial_{\phi z} A_\phi \\
&=
\inv{\rho}\partial_\rho \lr{ \rho \partial_\rho A_z }
+ \inv{\rho^2} \partial_{\phi\phi} A_z
+ \partial_{zz} A_z
+ \inv{\rho} \partial_{z} A_\rho
+\partial_{z\rho} A_\rho
+ \frac{1}{\rho} \partial_{z\phi} A_\phi
- \inv{\rho}\partial_z A_\rho
- \partial_{\rho z} A_\rho
- \inv{\rho} \partial_{\phi z} A_\phi \\
&=
\inv{\rho}\partial_\rho \lr{ \rho \partial_\rho A_z }
+ \inv{\rho^2} \partial_{\phi\phi} A_z
+ \partial_{zz} A_z.
\end{aligned}

Evaluating these was a fairly tedious and mechanical job, better suited to a computer algebra system than to the hand expansion done here.
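Short of a full computer algebra treatment, a numeric spot check is straightforward: compute the Cartesian componentwise Laplacian of a test field, project it onto $$\rhocap$$, and compare against the final form of \ref{eqn:laplacianCylindrical:220}. The field components below are arbitrary choices (with $$A_z = 0$$ for brevity):

```python
import math

h = 1e-3

# hypothetical cylindrical-component test field (A_z = 0)
A_rho = lambda r, p, z: r * math.sin(p) + z * z
A_phi = lambda r, p, z: r * r * math.cos(p)

def cart(x, y, z):
    r, p = math.hypot(x, y), math.atan2(y, x)
    ar, ap = A_rho(r, p, z), A_phi(r, p, z)
    c, s = math.cos(p), math.sin(p)
    return (ar * c - ap * s, ar * s + ap * c)

def lap(f, x, y, z):
    # Cartesian Laplacian of a scalar function, by second central differences
    tot = 0.0
    for i in range(3):
        q = [x, y, z]
        q[i] += h; fp = f(*q)
        q[i] -= 2.0 * h; fm = f(*q)
        tot += (fp - 2.0 * f(x, y, z) + fm) / h ** 2
    return tot

def d1(f, i, r, p, z):
    q = [r, p, z]; q[i] += h; a = f(*q); q[i] -= 2.0 * h; b = f(*q)
    return (a - b) / (2.0 * h)

def d2(f, i, r, p, z):
    q = [r, p, z]; q[i] += h; a = f(*q); q[i] -= 2.0 * h; b = f(*q)
    return (a - 2.0 * f(r, p, z) + b) / h ** 2

x, y, z = 1.2, 0.5, 0.3
r, p = math.hypot(x, y), math.atan2(y, x)
c, s = math.cos(p), math.sin(p)

# rho-hat component of the Cartesian vector Laplacian
lhs = c * lap(lambda *q: cart(*q)[0], x, y, z) + s * lap(lambda *q: cart(*q)[1], x, y, z)

# the claimed cylindrical form of the rho-hat component
rhs = (d2(A_rho, 0, r, p, z) + d1(A_rho, 0, r, p, z) / r
       + d2(A_rho, 1, r, p, z) / r ** 2 + d2(A_rho, 2, r, p, z)
       - A_rho(r, p, z) / r ** 2 - 2.0 * d1(A_phi, 1, r, p, z) / r ** 2)
```

The $$\phicap$$ and $$\zcap$$ components can be checked the same way.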

### Explicit cylindrical Laplacian

Let’s try this a different way. The most obvious potential strategy is to just apply the Laplacian to the vector itself, but we need to include the unit vectors in such an operation

\label{eqn:laplacianCylindrical:280}
\spacegrad^2 \lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z }.

First we need to know the explicit form of the cylindrical Laplacian. From the painful expansion, we can guess that it is

\label{eqn:laplacianCylindrical:300}
\spacegrad^2 \psi
=
\inv{\rho}\partial_\rho \lr{ \rho \partial_\rho \psi }
+ \inv{\rho^2} \partial_{\phi\phi} \psi
+ \partial_{zz} \psi.

Let’s check that explicitly. Here I use the vector product where $$\rhocap^2 = \phicap^2 = \zcap^2 = 1$$, and these vectors anticommute when different

\label{eqn:laplacianCylindrical:320}
\begin{aligned}
\spacegrad^2 \psi
&=
\lr{ \rhocap \partial_\rho + \frac{\phicap}{\rho} \partial_\phi + \zcap \partial_z }
\lr{ \rhocap \partial_\rho \psi + \frac{\phicap}{\rho} \partial_\phi \psi + \zcap \partial_z \psi } \\
&=
\rhocap \partial_\rho
\lr{ \rhocap \partial_\rho \psi + \frac{\phicap}{\rho} \partial_\phi \psi + \zcap \partial_z \psi }
+ \frac{\phicap}{\rho} \partial_\phi
\lr{ \rhocap \partial_\rho \psi + \frac{\phicap}{\rho} \partial_\phi \psi + \zcap \partial_z \psi }
+ \zcap \partial_z
\lr{ \rhocap \partial_\rho \psi + \frac{\phicap}{\rho} \partial_\phi \psi + \zcap \partial_z \psi } \\
&=
\partial_{\rho\rho} \psi
+ \rhocap \phicap \partial_\rho \lr{ \frac{1}{\rho} \partial_\phi \psi}
+ \rhocap \zcap \partial_{\rho z} \psi
+ \frac{\phicap}{\rho} \partial_\phi \lr{ \rhocap \partial_\rho \psi }
+ \frac{\phicap}{\rho} \partial_\phi \lr{ \frac{\phicap}{\rho} \partial_\phi \psi }
+ \frac{\phicap \zcap }{\rho} \partial_{\phi z} \psi
+ \zcap \rhocap \partial_{z\rho} \psi
+ \frac{\zcap \phicap}{\rho} \partial_{z\phi} \psi
+ \partial_{zz} \psi \\
&=
\partial_{\rho\rho} \psi
+ \inv{\rho} \partial_\rho \psi
+ \frac{1}{\rho^2} \partial_{\phi \phi} \psi
+ \partial_{zz} \psi
+ \rhocap \phicap
\lr{
-\frac{1}{\rho^2} \partial_\phi \psi
+\frac{1}{\rho} \partial_{\rho \phi} \psi
-\inv{\rho} \partial_{\phi \rho} \psi
+ \frac{1}{\rho^2} \partial_\phi \psi
}
+ \zcap \rhocap \lr{
-\partial_{\rho z} \psi
+ \partial_{z\rho} \psi
}
+ \phicap \zcap \lr{
\inv{\rho} \partial_{\phi z} \psi
– \inv{\rho} \partial_{z\phi} \psi
} \\
&=
\partial_{\rho\rho} \psi
+ \inv{\rho} \partial_\rho \psi
+ \frac{1}{\rho^2} \partial_{\phi \phi} \psi
+ \partial_{zz} \psi,
\end{aligned}

so the Laplacian operator is

\label{eqn:laplacianCylindrical:340}
\boxed{
\spacegrad^2
=
\inv{\rho} \PD{\rho}{} \lr{ \rho \PD{\rho}{} }
+ \frac{1}{\rho^2} \PDSq{\phi}{}
+ \PDSq{z}{}.
}

All the bivector grades of the Laplacian operator are seen to explicitly cancel, regardless of the grade of $$\psi$$, just as if we had expanded the scalar Laplacian as a dot product
$$\spacegrad^2 \psi = \spacegrad \cdot \lr{ \spacegrad \psi}$$.
Unlike such a scalar expansion, this derivation is seen to be valid for any grade $$\psi$$. We know now that we can trust this result when $$\psi$$ is a scalar, a vector, a bivector, a trivector, or even a multivector.
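As a numeric sanity check, the boxed operator can be compared against the Cartesian Laplacian of a test function. Here is a small Python sketch using central finite differences; the field $$\psi = x^2 y + z^3$$, with exact Laplacian $$2 y + 6 z$$, is an arbitrary choice:

```python
import math

def d1(f, a, h=1e-4):
    # first derivative by central difference
    return (f(a + h) - f(a - h)) / (2.0 * h)

def d2(f, a, h=1e-4):
    # second derivative by central difference
    return (f(a + h) - 2.0 * f(a) + f(a - h)) / (h * h)

# Arbitrary test field: psi = x^2 y + z^3, with exact Laplacian 2 y + 6 z.
def psi_cart(x, y, z):
    return x * x * y + z ** 3

def psi_cyl(rho, phi, z):
    return psi_cart(rho * math.cos(phi), rho * math.sin(phi), z)

def laplacian_cyl(rho, phi, z):
    # (1/rho) d_rho( rho d_rho psi ) + (1/rho^2) d_phiphi psi + d_zz psi
    radial = d1(lambda r: r * d1(lambda s: psi_cyl(s, phi, z), r), rho) / rho
    angular = d2(lambda p: psi_cyl(rho, p, z), phi) / rho ** 2
    axial = d2(lambda t: psi_cyl(rho, phi, t), z)
    return radial + angular + axial

x, y, z = 1.2, 0.7, -0.4
rho, phi = math.hypot(x, y), math.atan2(y, x)
print(abs(laplacian_cyl(rho, phi, z) - (2 * y + 6 * z)) < 1e-5)  # True
```

Agreement well inside the stated tolerance is expected, since the $$O(h^2)$$ truncation errors here are of order $$10^{-8}$$.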

### Vector Laplacian

Now that we trust that the typical scalar form of the Laplacian applies equally well to multivectors as it does to scalars, that cylindrical coordinate operator can now be applied to a
vector. Consider the projections onto each of the directions in turn

\label{eqn:laplacianCylindrical:360}
\spacegrad^2 \lr{ \rhocap A_\rho }
=
\rhocap \inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\rho }
+ \frac{1}{\rho^2} \partial_{\phi\phi} \lr{\rhocap A_\rho}
+ \rhocap \partial_{zz} A_\rho.

Since the unit vectors depend on $$\phi$$, the $$\phi$$ derivatives have to be treated carefully
\label{eqn:laplacianCylindrical:380}
\begin{aligned}
\partial_{\phi\phi} \lr{\rhocap A_\rho}
&=
\partial_\phi \lr{ \phicap A_\rho + \rhocap \partial_\phi A_\rho } \\
&=
-\rhocap A_\rho
+\phicap \partial_\phi A_\rho
+ \phicap \partial_\phi A_\rho
+ \rhocap \partial_{\phi\phi} A_\rho \\
&=
\rhocap \lr{ \partial_{\phi\phi} A_\rho -A_\rho }
+ 2 \phicap \partial_\phi A_\rho
\end{aligned}

so this component of the vector Laplacian is

\label{eqn:laplacianCylindrical:400}
\begin{aligned}
\spacegrad^2 \lr{ \rhocap A_\rho }
&=
\rhocap
\lr{
\inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\rho }
+ \inv{\rho^2} \partial_{\phi\phi} A_\rho
– \inv{\rho^2} A_\rho
+ \partial_{zz} A_\rho
}
+
\phicap
\lr{
2 \inv{\rho^2} \partial_\phi A_\rho
} \\
&=
\rhocap \lr{
\spacegrad^2 A_\rho
- \inv{\rho^2} A_\rho
}
+
\phicap
\frac{2}{\rho^2} \partial_\phi A_\rho
.
\end{aligned}
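The expansions above rely on the unit vector derivatives $$\partial_\phi \rhocap = \phicap$$ and $$\partial_\phi \phicap = -\rhocap$$. These are easy to spot check numerically from the explicit components $$\rhocap = (\cos\phi, \sin\phi)$$, $$\phicap = (-\sin\phi, \cos\phi)$$; a small Python sketch using central differences:

```python
import math

h = 1e-6

def rhocap(p):
    return (math.cos(p), math.sin(p))

def phicap(p):
    return (-math.sin(p), math.cos(p))

def dphi(vec, p):
    # central difference derivative of a two component vector function of phi
    a, b = vec(p + h), vec(p - h)
    return ((a[0] - b[0]) / (2 * h), (a[1] - b[1]) / (2 * h))

p = 0.9
d_rho, d_phi = dphi(rhocap, p), dphi(phicap, p)
print(all(abs(u - v) < 1e-8 for u, v in zip(d_rho, phicap(p))))  # d rhocap/d phi = phicap
print(all(abs(u + v) < 1e-8 for u, v in zip(d_phi, rhocap(p))))  # d phicap/d phi = -rhocap
```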

The Laplacian for the projection of the vector onto the $$\phicap$$ direction is

\label{eqn:laplacianCylindrical:420}
\spacegrad^2 \lr{ \phicap A_\phi }
=
\phicap \inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\phi }
+ \frac{1}{\rho^2} \partial_{\phi\phi} \lr{\phicap A_\phi}
+ \phicap \partial_{zz} A_\phi.

Again, since the unit vectors are $$\phi$$ dependent, the $$\phi$$ derivatives have to be treated carefully

\label{eqn:laplacianCylindrical:440}
\begin{aligned}
\partial_{\phi\phi} \lr{\phicap A_\phi}
&=
\partial_{\phi} \lr{-\rhocap A_\phi + \phicap \partial_\phi A_\phi} \\
&=
-\phicap A_\phi
-\rhocap \partial_\phi A_\phi
– \rhocap \partial_\phi A_\phi
+ \phicap \partial_{\phi \phi} A_\phi \\
&=
– 2 \rhocap \partial_\phi A_\phi
+
\phicap
\lr{
\partial_{\phi \phi} A_\phi
– A_\phi
},
\end{aligned}

so the Laplacian of this projection is
\label{eqn:laplacianCylindrical:460}
\begin{aligned}
\spacegrad^2 \lr{ \phicap A_\phi }
&=
\phicap
\lr{
\inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\phi }
+ \inv{\rho^2} \partial_{\phi \phi} A_\phi
- \frac{A_\phi}{\rho^2}
+ \partial_{zz} A_\phi
}
- \rhocap \frac{2}{\rho^2} \partial_\phi A_\phi \\
&=
\phicap \lr{
\spacegrad^2 A_\phi
- \frac{A_\phi}{\rho^2}
}
- \rhocap \frac{2}{\rho^2} \partial_\phi A_\phi.
\end{aligned}

Since $$\zcap$$ is fixed we have

\label{eqn:laplacianCylindrical:480}
\spacegrad^2 \lr{ \zcap A_z }
=
\zcap \lr{
\inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_z }
+ \inv{\rho^2} \partial_{\phi\phi} A_z
+ \partial_{zz} A_z
}
=
\zcap \spacegrad^2 A_z.

Putting all the pieces together we have
\label{eqn:laplacianCylindrical:500}
\boxed{
\spacegrad^2 \BA
=
\rhocap \lr{
\spacegrad^2 A_\rho
- \inv{\rho^2} A_\rho
- \frac{2}{\rho^2} \partial_\phi A_\phi
}
+\phicap \lr{
\spacegrad^2 A_\phi
- \frac{A_\phi}{\rho^2}
+ \frac{2}{\rho^2} \partial_\phi A_\rho
}
+ \zcap \spacegrad^2 A_z.
}

This matches the results of \ref{eqn:laplacianCylindrical:220}, …, from the painful expansion of
$$\spacegrad \lr{ \spacegrad \cdot \BA } – \spacegrad \cross \lr{ \spacegrad \cross \BA }$$.
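Since the result matters more than the route, the boxed vector Laplacian is also worth verifying numerically: apply the Cartesian Laplacian componentwise to an arbitrarily chosen field specified by its cylindrical components, rotate back to cylindrical components, and compare with the boxed formula. A Python sketch with central finite differences:

```python
import math

h = 1e-4

def d1(f, a):
    return (f(a + h) - f(a - h)) / (2 * h)

def d2(f, a):
    return (f(a + h) - 2 * f(a) + f(a - h)) / (h * h)

# Arbitrary test field, specified by its cylindrical components.
def A_rho(r, p, z): return r * r * math.cos(p) + z
def A_phi(r, p, z): return r * z * math.sin(p)
def A_z(r, p, z): return r * r + z * z

def cart_components(x, y, z):
    r, p = math.hypot(x, y), math.atan2(y, x)
    ar, ap = A_rho(r, p, z), A_phi(r, p, z)
    return (ar * math.cos(p) - ap * math.sin(p),
            ar * math.sin(p) + ap * math.cos(p),
            A_z(r, p, z))

def lap_cart(f, x, y, z):
    return (d2(lambda t: f(t, y, z), x)
            + d2(lambda t: f(x, t, z), y)
            + d2(lambda t: f(x, y, t), z))

def lap_cyl(f, r, p, z):
    radial = d1(lambda t: t * d1(lambda s: f(s, p, z), t), r) / r
    return (radial
            + d2(lambda t: f(r, t, z), p) / r ** 2
            + d2(lambda t: f(r, p, t), z))

x, y, z = 1.1, 0.6, 0.3
r, p = math.hypot(x, y), math.atan2(y, x)

# Reference: the Cartesian Laplacian acts componentwise; rotate back to cylindrical.
lx = lap_cart(lambda a, b, c: cart_components(a, b, c)[0], x, y, z)
ly = lap_cart(lambda a, b, c: cart_components(a, b, c)[1], x, y, z)
lz = lap_cart(lambda a, b, c: cart_components(a, b, c)[2], x, y, z)
ref_rho = lx * math.cos(p) + ly * math.sin(p)
ref_phi = -lx * math.sin(p) + ly * math.cos(p)

# The boxed formula.
box_rho = (lap_cyl(A_rho, r, p, z) - A_rho(r, p, z) / r ** 2
           - 2 / r ** 2 * d1(lambda t: A_phi(r, t, z), p))
box_phi = (lap_cyl(A_phi, r, p, z) - A_phi(r, p, z) / r ** 2
           + 2 / r ** 2 * d1(lambda t: A_rho(r, t, z), p))

print(abs(box_rho - ref_rho) < 1e-4)
print(abs(box_phi - ref_phi) < 1e-4)
print(abs(lap_cyl(A_z, r, p, z) - lz) < 1e-4)
```

Both sides approximate the same exact quantity, so they should agree to the finite difference error, well under the tolerance used here.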

## Does the divergence and curl uniquely determine the vector?

[Click here for a PDF of this post with nicer formatting]

A problem posed in the ece1228 problem set was the following

### Helmholtz theorem.

Prove the first Helmholtz theorem, i.e. that if a vector $$\BM$$ is defined by its divergence

\label{eqn:emtProblemSet1Problem5:20}
\spacegrad \cdot \BM = s

and its curl
\label{eqn:emtProblemSet1Problem5:40}
\spacegrad \cross \BM = \BC

within a region, and by its normal component $$\BM_{\textrm{n}}$$ over the boundary, then $$\BM$$ is uniquely specified.

### Solution.

This problem screams for an attempt using Geometric Algebra techniques, since
the gradient of this vector can be written as a single even grade multivector

\label{eqn:emtProblemSet1Problem5AppendixGA:60}
\begin{aligned}
\spacegrad \BM
&= \spacegrad \cdot \BM + I \spacegrad \cross \BM \\
&= s + I \BC.
\end{aligned}

Observe that the Laplacian of $$\BM$$ is vector valued

\label{eqn:emtProblemSet1Problem5AppendixGA:400}
\spacegrad^2 \BM = \spacegrad s + I \spacegrad \BC.

This means that $$\spacegrad \BC$$ must be a bivector $$\spacegrad \BC = \spacegrad \wedge \BC$$, or that $$\BC$$ has zero divergence

\label{eqn:emtProblemSet1Problem5AppendixGA:420}
\spacegrad \cdot \BC = 0.

This required constraint on $$\BC$$ will show up in subsequent analysis. An equivalent problem to the one posed
is to show that the even grade multivector equation $$\spacegrad \BM = s + I \BC$$ has an inverse given the constraint
specified by \ref{eqn:emtProblemSet1Problem5AppendixGA:420}.
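As an aside, this constraint is automatic whenever $$\BC$$ really is the curl of some field, since the divergence of a curl is zero. A quick numeric illustration in Python, with an arbitrarily chosen sample field $$\BF$$ and central finite differences:

```python
import math

h = 1e-4

# Arbitrary smooth sample field F.
def F(x, y, z):
    return (math.sin(y) * z, x * x * z, math.exp(x) * y)

def d(f, i, pt):
    # central difference of f with respect to coordinate i at pt
    a, b = list(pt), list(pt)
    a[i] += h
    b[i] -= h
    return (f(*a) - f(*b)) / (2 * h)

def curl(pt):
    return (d(lambda *q: F(*q)[2], 1, pt) - d(lambda *q: F(*q)[1], 2, pt),
            d(lambda *q: F(*q)[0], 2, pt) - d(lambda *q: F(*q)[2], 0, pt),
            d(lambda *q: F(*q)[1], 0, pt) - d(lambda *q: F(*q)[0], 1, pt))

def div_curl(pt):
    return sum(d(lambda *q: curl(q)[i], i, pt) for i in range(3))

print(abs(div_curl((0.4, -0.7, 1.1))) < 1e-5)  # div curl F = 0 to truncation error
```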

### Inverting the gradient equation.

The Green’s function for the gradient can be found in [1], where it is used to generalize the Cauchy integral equations to higher dimensions.

\label{eqn:emtProblemSet1Problem5AppendixGA:80}
\begin{aligned}
G(\Bx, \Bx’) &= \inv{4 \pi} \frac{ \Bx – \Bx’ }{\Abs{\Bx – \Bx’}^3} \\
\spacegrad G(\Bx, \Bx’) &= \spacegrad \cdot G(\Bx, \Bx’) = \delta(\Bx – \Bx’) = -\spacegrad’ G(\Bx, \Bx’).
\end{aligned}

The inversion equation is an application of the Fundamental Theorem of (Geometric) Calculus, with the gradient operating bidirectionally on the Green’s function and the vector function

\label{eqn:emtProblemSet1Problem5AppendixGA:100}
\begin{aligned}
\oint_{\partial V} G(\Bx, \Bx’) d^2 \Bx’ \BM(\Bx’)
&=
\int_V G(\Bx, \Bx’) d^3 \Bx’ \lrspacegrad’ \BM(\Bx’) \\
&=
\int_V d^3 \Bx’ (G(\Bx, \Bx’) \lspacegrad’) \BM(\Bx’)
+
\int_V d^3 \Bx’ G(\Bx, \Bx’) (\spacegrad’ \BM(\Bx’)) \\
&=
-\int_V d^3 \Bx’ \delta(\Bx – \Bx’) \BM(\Bx’)
+
\int_V d^3 \Bx’ G(\Bx, \Bx’) \lr{ s(\Bx’) + I \BC(\Bx’) } \\
&=
-I \BM(\Bx)
+
\inv{4 \pi} \int_V d^3 \Bx’ \frac{ \Bx – \Bx’}{ \Abs{\Bx – \Bx’}^3 } \lr{ s(\Bx’) + I \BC(\Bx’) }.
\end{aligned}

The integrals are in terms of the primed coordinates so that the end result is a function of $$\Bx$$. To rearrange for $$\BM$$, let $$d^3 \Bx’ = I dV’$$, and $$d^2 \Bx’ \ncap(\Bx’) = I dA’$$, then right multiply with the pseudoscalar $$I$$, noting that in \R{3} the pseudoscalar commutes with all grades

\label{eqn:emtProblemSet1Problem5AppendixGA:440}
\begin{aligned}
\BM(\Bx)
&=
I \oint_{\partial V} G(\Bx, \Bx’) I dA’ \ncap \BM(\Bx’)
-
I \inv{4 \pi} \int_V I dV’ \frac{ \Bx – \Bx’}{ \Abs{\Bx – \Bx’}^3 } \lr{ s(\Bx’) + I \BC(\Bx’) } \\
&=
-\oint_{\partial V} dA’ G(\Bx, \Bx’) \ncap \BM(\Bx’)
+
\inv{4 \pi} \int_V dV’ \frac{ \Bx – \Bx’}{ \Abs{\Bx – \Bx’}^3 } \lr{ s(\Bx’) + I \BC(\Bx’) }.
\end{aligned}

This can be decomposed into a vector and a trivector equation. Let $$\Br = \Bx – \Bx’ = r \rcap$$, and note that

\label{eqn:emtProblemSet1Problem5AppendixGA:500}
\begin{aligned}
\gpgradeone{ \rcap I \BC }
&=
\gpgradeone{ I \rcap \BC } \\
&=
I \rcap \wedge \BC \\
&=
-\rcap \cross \BC,
\end{aligned}

so this pair of equations can be written as

\label{eqn:emtProblemSet1Problem5AppendixGA:520}
\begin{aligned}
\BM(\Bx)
&=
-\inv{4 \pi} \oint_{\partial V} dA’ \frac{\gpgradeone{ \rcap \ncap \BM(\Bx’) }}{r^2}
+
\inv{4 \pi} \int_V dV’ \lr{
\frac{\rcap}{r^2} s(\Bx’) –
\frac{\rcap}{r^2} \cross \BC(\Bx’) } \\
0
&=
-\inv{4 \pi} \oint_{\partial V} dA’ \frac{\rcap}{r^2} \wedge \ncap \wedge \BM(\Bx’)
+
\frac{I}{4 \pi} \int_V dV’ \frac{ \rcap \cdot \BC(\Bx’) }{r^2}.
\end{aligned}

Consider the last integral in the pseudoscalar equation above. Since we expect no pseudoscalar components, the pseudoscalar terms must either each be zero or cancel perfectly. It’s not obvious that this is the case, but transforming the volume integral to a surface integral shows what constraints are required for that cancellation. To do so, note

\label{eqn:emtProblemSet1Problem5AppendixGA:540}
\begin{aligned}
\spacegrad \inv{\Bx – \Bx’}
&= -\spacegrad’ \inv{\Bx – \Bx’} \\
&=
-\frac{\Bx – \Bx’}{\Abs{\Bx – \Bx’}^3} \\
&= -\frac{\rcap}{r^2}.
\end{aligned}
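This gradient identity is also easy to spot check numerically; a Python sketch with an arbitrarily chosen pair of points:

```python
import math

h = 1e-6
xp = (0.2, -0.1, 0.5)  # the fixed primed point x'

def inv_r(x, y, z):
    # 1/|x - x'|
    return 1.0 / math.dist((x, y, z), xp)

def grad(f, pt):
    # gradient by central differences
    out = []
    for i in range(3):
        a, b = list(pt), list(pt)
        a[i] += h
        b[i] -= h
        out.append((f(*a) - f(*b)) / (2 * h))
    return out

pt = (1.0, 0.7, -0.3)
rn = math.dist(pt, xp)
expected = [-(pt[i] - xp[i]) / rn ** 3 for i in range(3)]  # -(x - x')/|x - x'|^3
g = grad(inv_r, pt)
print(all(abs(a - b) < 1e-8 for a, b in zip(g, expected)))  # True
```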

Using this and the chain rule we have

\label{eqn:emtProblemSet1Problem5AppendixGA:560}
\begin{aligned}
\frac{I}{4 \pi} \int_V dV’ \frac{ \rcap \cdot \BC(\Bx’) }{r^2}
&=
\frac{I}{4 \pi} \int_V dV’ \lr{ \spacegrad’ \inv{ r } } \cdot \BC(\Bx’) \\
&=
\frac{I}{4 \pi} \int_V dV’ \spacegrad’ \cdot \frac{\BC(\Bx’)}{r}
-
\frac{I}{4 \pi} \int_V dV’ \frac{ \spacegrad’ \cdot \BC(\Bx’) }{r} \\
&=
\frac{I}{4 \pi} \int_V dV’ \spacegrad’ \cdot \frac{\BC(\Bx’)}{r} \\
&=
\frac{I}{4 \pi} \int_{\partial V} dA’ \ncap(\Bx’) \cdot \frac{\BC(\Bx’)}{r}.
\end{aligned}

The divergence of $$\BC$$ above was killed by recalling the constraint \ref{eqn:emtProblemSet1Problem5AppendixGA:420}. This means that the pseudoscalar equation can be rewritten entirely as a surface integral, which eventually reduces to a single triple product

\label{eqn:emtProblemSet1Problem5AppendixGA:580}
\begin{aligned}
0
&=
-\frac{I}{4 \pi} \oint_{\partial V} dA’ \lr{
\frac{\rcap}{r^2} \cdot (\ncap \cross \BM(\Bx’))
-\ncap \cdot \frac{\BC(\Bx’)}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA’ \ncap \cdot \lr{
\frac{\rcap}{r^2} \cross \BM(\Bx’)
+ \frac{\BC(\Bx’)}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA’ \ncap \cdot \lr{
\lr{ \spacegrad’ \inv{r}} \cross \BM(\Bx’)
+ \frac{\BC(\Bx’)}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA’ \ncap \cdot \lr{
\spacegrad’ \cross \frac{\BM(\Bx’)}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA’
\spacegrad’ \cdot \frac{\BM(\Bx’) \cross \ncap}{r}.
\end{aligned}

### Final results.

Assembling things back into a single multivector equation, the complete inversion integral for $$\BM$$ is

\label{eqn:emtProblemSet1Problem5AppendixGA:600}
\BM(\Bx)
=
\inv{4 \pi} \oint_{\partial V} dA’
\lr{
\frac{\BM(\Bx’) \wedge \ncap}{r}
-\frac{\gpgradeone{ \rcap \ncap \BM(\Bx’) }}{r^2}
}
+
\inv{4 \pi} \int_V dV’ \lr{
\frac{\rcap}{r^2} s(\Bx’) –
\frac{\rcap}{r^2} \cross \BC(\Bx’) }.

This shows that the vector $$\BM$$ can be recovered uniquely from $$s, \BC$$ when $$\Abs{\BM}/r^2$$ vanishes on an infinite surface. If we restrict attention to a finite volume, we have to add to that solution a boundary term that depends on the value of $$\BM$$ on the enclosing surface. The vector portion of that surface integrand contains

\label{eqn:emtProblemSet1Problem5AppendixGA:640}
\begin{aligned}
\gpgradeone{ \rcap \ncap \BM }
&=
\rcap (\ncap \cdot \BM )
+
\rcap \cdot (\ncap \wedge \BM ) \\
&=
\rcap (\ncap \cdot \BM )
+
(\rcap \cdot \ncap) \BM
-
(\rcap \cdot \BM ) \ncap.
\end{aligned}
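The dot-of-wedge expansion above is the geometric algebra form of the BAC-CAB rule, $$(\rcap \cdot \ncap) \BM - (\rcap \cdot \BM) \ncap = -\rcap \cross \lr{ \ncap \cross \BM }$$, which can be spot checked with ordinary cross products and arbitrary sample vectors:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Arbitrary sample vectors standing in for rcap, ncap, M.
r = (0.3, -0.5, 0.8)
n = (0.2, 0.9, -0.4)
m = (1.0, 0.5, -0.7)
lhs = [dot(r, n) * mi - dot(r, m) * ni for mi, ni in zip(m, n)]
rhs = [-c for c in cross(r, cross(n, m))]
print(all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs)))  # True
```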

The constraints required for a zero triple product $$\spacegrad’ \cdot (\BM(\Bx’) \cross \ncap(\Bx’))$$ are complicated on such a general finite surface. Consider instead, for simplicity, the case of a spherical surface, which can be analyzed more easily. In that case the outward normal of the surface centred on the test charge point $$\Bx$$ is $$\ncap = -\rcap$$. The pseudoscalar integrand is not generally killed unless the divergence of its tangential component on this surface is zero. One way that this can occur is for $$\BM \cross \ncap = 0$$, so that $$-\gpgradeone{ \rcap \ncap \BM } = \BM = (\BM \cdot \ncap) \ncap = \BM_{\textrm{n}}$$.

This gives

\label{eqn:emtProblemSet1Problem5AppendixGA:620}
\BM(\Bx)
=
\inv{4 \pi} \oint_{\Abs{\Bx – \Bx’} = r} dA’ \frac{\BM_{\textrm{n}}(\Bx’)}{r^2}
+
\inv{4 \pi} \int_V dV’ \lr{
\frac{\rcap}{r^2} s(\Bx’) +
\BC(\Bx’) \cross \frac{\rcap}{r^2} },

or, in terms of potential functions, which is arguably tidier

\label{eqn:emtProblemSet1Problem5AppendixGA:300}
\boxed{
\BM(\Bx)
=
\inv{4 \pi} \oint_{\Abs{\Bx – \Bx’} = r} dA’ \frac{\BM_{\textrm{n}}(\Bx’)}{r^2}
-\spacegrad \int_V dV’ \frac{ s(\Bx’)}{ 4 \pi r }
+\spacegrad \cross \int_V dV’ \frac{ \BC(\Bx’) }{ 4 \pi r }.
}

### Commentary

I attempted this problem in three different ways. My first approach (above) assembled the divergence and curl relations into a single (Geometric Algebra) multivector gradient equation, and applied the vector valued Green’s function for the gradient to invert that equation. That approach leads logically from the differential equation for $$\BM$$ to the solution for $$\BM$$ in terms of $$s$$ and $$\BC$$. However, this strategy introduced some complexities that make me doubt the correctness of the associated boundary analysis.

Even if the details of the boundary handling in my multivector approach are not correct, I thought that approach was interesting enough to share.

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.