
Potentials for multivector Maxwell’s equation (again.)

December 8, 2023 math and physics play

[Click here for the PDF version of this post.]

Motivation.

This revisits my last blog post, where I covered this content in a meandering fashion. This is an attempt to re-express that material in a more compact form, in particular, in a form that is amenable to inclusion in my book. When I wrote the potential section of my book, I cheated, and didn’t try to motivate the results. My cheat was figuring out the multivector potential representation starting with STA, where things are simpler, and then translating it back to a multivector representation, instead of figuring out a reasonable way to motivate things from the foundation already laid.

I’d like to eventually have a less rushed treatment of potentials in my book, where the results are not pulled out of a magic hat. Here is an attempted step in that direction. I’ve opted to put some of the motivational material in problems (with solutions at the chapter end.)

Multivector potentials.

We know from conventional electromagnetism (given no fictitious magnetic sources) that we can represent the six components of the electric and magnetic fields in terms of four scalar fields
\begin{equation}\label{eqn:mvpotentials:80}
\begin{aligned}
\BE &= -\spacegrad \phi - \PD{t}{\BA} \\
\BH &= \inv{\mu} \spacegrad \cross \BA.
\end{aligned}
\end{equation}
The conventional way of constructing these potentials makes use of the identities
\begin{equation}\label{eqn:mvpotentials:60}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \cross \BA } &= 0 \\
\spacegrad \cross \lr{ \spacegrad \phi } &= 0,
\end{aligned}
\end{equation}
applying those to the source free Maxwell’s equations to find representations of \( \BE, \BH \) that automatically satisfy those equations. For that conventional analysis, see section 18-6 [2] (available online), or section 10.1 [3], or section 6.4 [4]. We can also find such a potential representation using geometric algebra methods that are cross product free (problem 1.)

For Maxwell’s equations with fictitious magnetic sources, it can be shown that a potential representation of the field
\begin{equation}\label{eqn:mvpotentials:100}
\begin{aligned}
\BH &= -\spacegrad \phi_m - \PD{t}{\BF} \\
\BE &= -\inv{\epsilon} \spacegrad \cross \BF.
\end{aligned}
\end{equation}
satisfies the source-free grades of Maxwell’s equation.
See [1], and [5] for such derivations. As with the conventional source potentials, we can also apply our geometric algebra toolbox to easily find these results (problem 2.)

We have a mix of time partials and curls that is reminiscent of Maxwell’s equation itself. It’s natural to wonder whether there is a more coherent, integrated form for the potential. This is in fact the case.

Lemma 1.1: Multivector potentials.

For Maxwell’s equation with electric sources, the total field \( F \) can be expressed in multivector potential form
\begin{equation}\label{eqn:mvpotentials:520}
F = \gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } \lr{ -\phi + c \BA } }{1,2}.
\end{equation}
For Maxwell’s equation with only fictitious magnetic sources, the total field \( F \) can be expressed in multivector form
\begin{equation}\label{eqn:mvpotentials:540}
F = \gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } I \eta \lr{ -\phi_m + c \BF } }{1,2}.
\end{equation}

The reader should try to verify this themselves (problem 3.)

Using superposition, we can form a multivector potential that includes all grades.

Definition 1.1: Multivector potential.

We call \( A \), a multivector with all grades, the multivector potential, defining the total field as
\begin{equation}\label{eqn:mvpotentials:600}
\begin{aligned}
F
&=
\gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } A }{1,2} \\
&=
\lr{ \spacegrad - \inv{c} \PD{t}{} } A
-
\gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } A }{0,3}.
\end{aligned}
\end{equation}
Imposition of the constraint
\begin{equation}\label{eqn:mvpotentials:680}
\gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } A }{0,3} = 0,
\end{equation}
is called the Lorentz gauge condition, and allows us to express \( F \) in terms of the potential without any grade selection filters.

Lemma 1.2: Conventional multivector potential.

Let
\begin{equation}\label{eqn:mvpotentials:620}
A = -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF }.
\end{equation}
This results in the conventional potential representation of the electric and magnetic fields
\begin{equation}\label{eqn:mvpotentials:640}
\begin{aligned}
\BE &= -\spacegrad \phi - \PD{t}{\BA} - \inv{\epsilon} \spacegrad \cross \BF \\
\BH &= -\spacegrad \phi_m - \PD{t}{\BF} + \inv{\mu} \spacegrad \cross \BA.
\end{aligned}
\end{equation}
In terms of potentials, the Lorentz gauge condition \ref{eqn:mvpotentials:680} takes the form
\begin{equation}\label{eqn:mvpotentials:660}
\begin{aligned}
0 &= \inv{c} \PD{t}{\phi} + \spacegrad \cdot (c \BA) \\
0 &= \inv{c} \PD{t}{\phi_m} + \spacegrad \cdot (c \BF).
\end{aligned}
\end{equation}

Start proof:

See problem 4.

End proof.

Problems.

Problem 1: Potentials for no-fictitious sources.

Starting with Maxwell’s equation with only conventional electric sources
\begin{equation}\label{eqn:mvpotentials:120}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F = \gpgrade{J}{0,1},
\end{equation}
show that this may be split by grade into three equations
\begin{equation}\label{eqn:mvpotentials:140}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } F}{0,1} &= \gpgrade{J}{0,1} \\
\spacegrad \wedge \BE + \inv{c}\PD{t}{} \lr{ I \eta \BH } &= 0 \\
\spacegrad \wedge \lr{ I \eta \BH } &= 0.
\end{aligned}
\end{equation}
Then use the identities \( \spacegrad \wedge \spacegrad \wedge \BA = 0 \), for vector \( \BA \) and \( \spacegrad \wedge \spacegrad \phi = 0 \), for scalar \( \phi \) to find the potential representation.

Answer

Taking grade(0,1) and (2,3) selections of Maxwell’s equation, we split our equations into source dependent and source free equations
\begin{equation}\label{eqn:mvpotentials:200}
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } F }{0,1} = \gpgrade{J}{0,1},
\end{equation}
\begin{equation}\label{eqn:mvpotentials:220}
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } F }{2,3} = 0.
\end{equation}

In terms of \( F = \BE + I \eta \BH \), the source free equation expands to
\begin{equation}\label{eqn:mvpotentials:240}
\begin{aligned}
0
&=
\gpgrade{
\lr{ \spacegrad + \inv{c} \PD{t}{} } \lr{ \BE + I \eta \BH }
}{2,3} \\
&=
\gpgradetwo{\spacegrad \BE}
+ \gpgradethree{I \eta \spacegrad \BH} + I \eta \inv{c} \PD{t}{\BH} \\
&=
\spacegrad \wedge \BE
+ \spacegrad \wedge \lr{ I \eta \BH }
+ I \eta \inv{c} \PD{t}{\BH},
\end{aligned}
\end{equation}
which can be further split into a bivector and trivector equation
\begin{equation}\label{eqn:mvpotentials:260}
0 = \spacegrad \wedge \BE + I \eta \inv{c} \PD{t}{\BH}
\end{equation}
\begin{equation}\label{eqn:mvpotentials:280}
0 = \spacegrad \wedge \lr{ I \eta \BH }.
\end{equation}
It’s clear that we want to write the magnetic field as a (bivector) curl, so we let
\begin{equation}\label{eqn:mvpotentials:300}
I \eta \BH = I c \BB = c \spacegrad \wedge \BA,
\end{equation}
or
\begin{equation}\label{eqn:mvpotentials:301}
\BH = \inv{\mu} \spacegrad \cross \BA.
\end{equation}

\Cref{eqn:mvpotentials:260} is reduced to
\begin{equation}\label{eqn:mvpotentials:320}
\begin{aligned}
0
&= \spacegrad \wedge \BE + I \eta \inv{c} \PD{t}{\BH} \\
&= \spacegrad \wedge \BE + \inv{c} \PD{t}{} \spacegrad \wedge \lr{ c \BA } \\
&= \spacegrad \wedge \lr{ \BE + \PD{t}{\BA} }.
\end{aligned}
\end{equation}
We can now let
\begin{equation}\label{eqn:mvpotentials:340}
\BE + \PD{t}{\BA} = -\spacegrad \phi.
\end{equation}
We sneakily adjust the sign of the gradient so that the result matches the conventional representation.

Problem 2: Potentials for fictitious sources.

Starting with Maxwell’s equation with only fictitious magnetic sources
\begin{equation}\label{eqn:mvpotentials:160}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F = \gpgrade{J}{2,3},
\end{equation}
show that this may be split by grade into three equations
\begin{equation}\label{eqn:mvpotentials:180}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } I F}{0,1} &= I \gpgrade{J}{2,3} \\
-\eta \spacegrad \wedge \BH + \inv{c}\PD{t}{(I \BE)} &= 0 \\
\spacegrad \wedge \lr{ I \BE } &= 0.
\end{aligned}
\end{equation}
Then use the identities \( \spacegrad \wedge \spacegrad \wedge \BF = 0 \), for vector \( \BF \) and \( \spacegrad \wedge \spacegrad \phi_m = 0 \), for scalar \( \phi_m \) to find the potential representation \ref{eqn:mvpotentials:100}.

Answer

We multiply \ref{eqn:mvpotentials:160} by \( I \) to find
\begin{equation}\label{eqn:mvpotentials:360}
\lr{ \spacegrad + \inv{c}\PD{t}{} } I F = I \gpgrade{J}{2,3},
\end{equation}
which can be split into
\begin{equation}\label{eqn:mvpotentials:380}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } I F }{0,1} &= I \gpgrade{J}{2,3} \\
\gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } I F }{2,3} &= 0.
\end{aligned}
\end{equation}
We expand the source free equation in terms of \( I F = I \BE - \eta \BH \), to find
\begin{equation}\label{eqn:mvpotentials:400}
\begin{aligned}
0
&= \gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } \lr{ I \BE - \eta \BH } }{2,3} \\
&= \spacegrad \wedge \lr{ I \BE } + \inv{c} \PD{t}{(I \BE)} - \eta \spacegrad \wedge \BH,
\end{aligned}
\end{equation}
which has the respective trivector and bivector grades
\begin{equation}\label{eqn:mvpotentials:420}
0 = \spacegrad \wedge \lr{ I \BE }
\end{equation}
\begin{equation}\label{eqn:mvpotentials:440}
0 = \inv{c} \PD{t}{(I \BE)} - \eta \spacegrad \wedge \BH.
\end{equation}
We can clearly satisfy \ref{eqn:mvpotentials:420} by setting
\begin{equation}\label{eqn:mvpotentials:460}
I \BE = -\inv{\epsilon} \spacegrad \wedge \BF,
\end{equation}
or
\begin{equation}\label{eqn:mvpotentials:461}
\BE = -\inv{\epsilon} \spacegrad \cross \BF.
\end{equation}
Here, once again, the sneaky inclusion of a constant factor \( -1/\epsilon \) is to make the result match the conventional one. Inserting this value for \( I \BE \) into our bivector equation yields
\begin{equation}\label{eqn:mvpotentials:480}
\begin{aligned}
0
&= -\inv{\epsilon} \inv{c} \PD{t}{} (\spacegrad \wedge \BF) - \eta \spacegrad \wedge \BH \\
&= -\eta \spacegrad \wedge \lr{ \PD{t}{\BF} + \BH },
\end{aligned}
\end{equation}
so we set
\begin{equation}\label{eqn:mvpotentials:500}
\PD{t}{\BF} + \BH = -\spacegrad \phi_m,
\end{equation}
and have a field representation that automatically satisfies the source free equations.

Problem 3: Total field in terms of potentials.

Prove lemma 1.1, either by direct expansion, or by trying to discover the multivector form of the field by construction.

Answer

Proof by expansion is straightforward, and left to the reader. We form the respective total electromagnetic fields \( F = \BE + I \eta \BH \) for each case.

We find
\begin{equation}\label{eqn:mvpotentials:560}
\begin{aligned}
F
&= \BE + I \eta \BH \\
&= -\spacegrad \phi - \PD{t}{\BA} + I \frac{\eta}{\mu} \spacegrad \cross \BA \\
&= -\spacegrad \phi - \inv{c} \PD{t}{(c \BA)} + \spacegrad \wedge (c\BA) \\
&= \gpgrade{ -\spacegrad \phi - \inv{c} \PD{t}{(c \BA)} + \spacegrad \wedge (c\BA) }{1,2} \\
&= \gpgrade{ -\spacegrad \phi - \inv{c} \PD{t}{(c \BA)} + \spacegrad (c\BA) }{1,2} \\
&= \gpgrade{ \spacegrad \lr{ -\phi + c \BA } - \inv{c} \PD{t}{(c \BA)} }{1,2} \\
&= \gpgrade{ \lr{ \spacegrad -\inv{c} \PD{t}{} } \lr{ -\phi + c \BA } }{1,2}.
\end{aligned}
\end{equation}

For the field for the fictitious source case, we compute the result in the same way, inserting a no-op grade selection to allow us to simplify, finding
\begin{equation}\label{eqn:mvpotentials:580}
\begin{aligned}
F
&= \BE + I \eta \BH \\
&= -\inv{\epsilon} \spacegrad \cross \BF + I \eta \lr{ -\spacegrad \phi_m - \PD{t}{\BF} } \\
&= \inv{\epsilon c} I \lr{ \spacegrad \wedge (c \BF)} + I \eta \lr{ -\spacegrad \phi_m - \inv{c} \PD{t}{(c \BF)} } \\
&= I \eta \lr{ \spacegrad \wedge (c \BF) + \lr{ -\spacegrad \phi_m - \inv{c} \PD{t}{(c \BF)} } } \\
&= I \eta \gpgrade{ \spacegrad \wedge (c \BF) + \lr{ -\spacegrad \phi_m - \inv{c} \PD{t}{(c \BF)} } }{1,2} \\
&= I \eta \gpgrade{ \spacegrad (c \BF) - \spacegrad \phi_m - \inv{c} \PD{t}{(c \BF)} }{1,2} \\
&= I \eta \gpgrade{ \spacegrad (-\phi_m + c \BF) - \inv{c} \PD{t}{(c \BF)} }{1,2} \\
&= I \eta \gpgrade{ \lr{ \spacegrad -\inv{c} \PD{t}{} } (-\phi_m + c \BF) }{1,2}.
\end{aligned}
\end{equation}

Problem 4: Fields in terms of potentials.

Prove lemma 1.2.

Answer

Let’s expand and then group by grade
\begin{equation}\label{eqn:mvpotentials:n}
\begin{aligned}
\lr{ \spacegrad - \inv{c} \PD{t}{} } A
&=
\lr{ \spacegrad - \inv{c} \PD{t}{} } \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF }} \\
&=
-\spacegrad \phi + c \spacegrad \BA + I \eta \lr{ -\spacegrad \phi_m + c \spacegrad \BF }
+\inv{c} \PD{t}{\phi} - c \inv{c} \PD{t}{ \BA } + I \eta \lr{ \inv{c} \PD{t}{\phi_m} - c \inv{c} \PD{t}{\BF} } \\
&=
- \spacegrad \phi
+ I \eta c \spacegrad \wedge \BF
- c \inv{c} \PD{t}{\BA}
\quad + c \spacegrad \wedge \BA
-I \eta \spacegrad \phi_m
- c I \eta \inv{c} \PD{t}{\BF} \\
&\quad + c \spacegrad \cdot \BA
+\inv{c} \PD{t}{\phi}
\quad + I \eta \lr{ c \spacegrad \cdot \BF
+ \inv{c} \PD{t}{\phi_m} } \\
&=
- \spacegrad \phi
- \inv{\epsilon} \spacegrad \cross \BF
- \PD{t}{\BA}
\quad + I \eta \lr{
\inv{\mu} \spacegrad \cross \BA
- \spacegrad \phi_m
- \PD{t}{\BF}
} \\
&\quad + c \spacegrad \cdot \BA
+\inv{c} \PD{t}{\phi}
\quad + I \eta \lr{ c \spacegrad \cdot \BF
+ \inv{c} \PD{t}{\phi_m} }.
\end{aligned}
\end{equation}
Observing that \( F = \gpgrade{ \lr{ \spacegrad -(1/c) \partial_t } A }{1,2} = \BE + I \eta \BH \), completes the problem. If the Lorentz gauge condition is assumed, the scalar and pseudoscalar components above are obliterated, leaving just
\( F = \lr{ \spacegrad -(1/c) \partial_t } A \).
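As an aside, this is where the gauge condition pays off: since the gradient and the time partial commute, the two first order operators compose into a wave operator, and Maxwell’s equation takes a forced wave equation form
\begin{equation*}
J = \lr{ \spacegrad + \inv{c} \PD{t}{} } F
= \lr{ \spacegrad + \inv{c} \PD{t}{} } \lr{ \spacegrad - \inv{c} \PD{t}{} } A
= \lr{ \spacegrad^2 - \inv{c^2} \frac{\partial^2 }{\partial t^2} } A.
\end{equation*}
This is the same decoupling that is worked through in more detail in the (earlier) post that follows below.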

References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 3rd edition, 2005.

[2] R.P. Feynman, R.B. Leighton, and M.L. Sands. Feynman lectures on physics, Volume II.[Lectures on physics], chapter The Maxwell Equations. Addison-Wesley Publishing Company. Reading, Massachusetts, 1963. URL https://www.feynmanlectures.caltech.edu/II_18.html.

[3] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[4] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[5] David M Pozar. Microwave engineering. John Wiley & Sons, 2009.

Potentials in geometric algebra.

December 2, 2023 math and physics play

[Click here for a PDF version of this post]

Conventional formulation.

The idea behind introducing the scalar potential \( \phi \) and vector potential \( \BA \) is that we can impose a constraint on the form of our observable fields \( \BE, \BB \), (or \( \BD, \BH \)), that reduces the complexity and coupling of Maxwell’s equations. These potentials are not unique, but the types of allowed variations in those potentials (gauge transformations) do not change the observable fields.

The basic idea is that we are looking for representations of the fields that automatically satisfy the pair of source free Maxwell’s equations
\begin{equation}\label{eqn:gapotentials:40}
\begin{aligned}
\spacegrad \cdot \BB &= 0 \\
c \partial_0 \BB + \spacegrad \cross \BE &= 0,
\end{aligned}
\end{equation}
so that the problem is reduced to solving just the remaining source dependent Maxwell’s equations.

The conventional way of constructing these potentials makes use of the identities
\begin{equation}\label{eqn:gapotentials:60}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \cross \Bf } &= 0 \\
\spacegrad \cross \lr{ \spacegrad \chi } &= 0,
\end{aligned}
\end{equation}
where \( \Bf \) is a vector, and \( \chi \) is a scalar. This approach is straightforward. Instead of replicating it, here are a few well known references where such a treatment can be found

  1. section 18-6 potentials and the wave equation in [2] (available online),
  2. section 10.1 The potential formulation in [3], and
  3. section 6.4 Vector and Scalar Potentials, in [4],

Multivector potentials in geometric algebra.

The multivector form of Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:820}
\lr{ \spacegrad + \partial_0 } F = J,
\end{equation}
where \( \partial_0 = (1/c)\partial/\partial t \), the electromagnetic field \( F = \BE + I c \BB = \BE + I \eta \BH \) has grades(1,2), and \( J \) is a multivector charge and current density. Grades(0,1) of the current are the charge and current densities respectively, and if desired, the grade(2,3) portion of the current holds the fictitious magnetic charge and current densities (used in microwave and antenna engineering.)

It’s best to consider the case of electric sources, separately from the case of (fictitious) magnetic sources, and then use superposition to construct a potential representation that includes both.

We require a tool that generalizes the \(\mathbb{R}^3\) cross product curl identities above.

Lemma 1.1: Curl of curl.

Let \( A \in \bigwedge^k \) be a blade of grade \( k \). Then
\begin{equation*}
\nabla \wedge \nabla \wedge A = 0.
\end{equation*}

Observe that for scalar \( A \), this reduces to
\begin{equation}\label{eqn:gapotentials:1740}
\nabla \wedge \nabla A = 0.
\end{equation}
We’ve recently proved this, so we won’t do it again now.

Now we are ready to figure out the structure of the potentials.

Case I. No (fictitious) magnetic sources.

Without magnetic sources, Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:840}
\lr{ \spacegrad + \partial_0 } F = \gpgrade{J}{0,1},
\end{equation}
This can be split into two equations, one that has just the sources, and one that is source free
\begin{equation}\label{eqn:gapotentials:860}
\gpgrade{ \lr{ \spacegrad + \partial_0 } F }{0,1} = \gpgrade{J}{0,1},
\end{equation}
\begin{equation}\label{eqn:gapotentials:880}
\gpgrade{ \lr{ \spacegrad + \partial_0 } F }{2,3} = 0.
\end{equation}
If you are clever, or have the benefit of having worked out the answer already, you can look directly at \ref{eqn:gapotentials:880} and guess the multivector form for the potential. Hint: you want something closely related to \( F = \lr{ \spacegrad - \partial_0 } A \), where \( A \) has grades(0,1).

If you aren’t that clever, or don’t have a time machine that lets you look that clever, you’ll have to work it out systematically like the rest of us. We can start by breaking down \( F \) into its constituent observer dependent fields. That means that we want to find values for \( \BE, \BH \) that satisfy
\begin{equation}\label{eqn:gapotentials:900}
\gpgrade{ \lr{ \spacegrad + \partial_0 } \lr{ \BE + I \eta \BH } }{2,3} = 0.
\end{equation}
Expanding the multivector factors gives us
\begin{equation}\label{eqn:gapotentials:920}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \partial_0 } \lr{ \BE + I \eta \BH } }{2,3}
&=\gpgradetwo{\spacegrad \BE} + \gpgradethree{I \eta \spacegrad \BH} + I \eta \partial_0 \BH \\
&=
\spacegrad \wedge \BE
+ \spacegrad \wedge \lr{ I \eta \BH }
+ I \eta \partial_0 \BH.
\end{aligned}
\end{equation}
Splitting this into one equation for each grade, leaves us with
\begin{equation}\label{eqn:gapotentials:940}
0 = \spacegrad \wedge \BE + I \eta \partial_0 \BH
\end{equation}
\begin{equation}\label{eqn:gapotentials:960}
0 = \spacegrad \wedge \lr{ I \eta \BH }.
\end{equation}
Observe that we could have also written \ref{eqn:gapotentials:960} as \( 0 = I \eta \lr{ \spacegrad \cdot \BH } \), which is the starting point of the conventional non-GA approach.
It’s clear that we want to write \( I \eta \BH = I c \BB \) as a (bivector) curl, and let
\begin{equation}\label{eqn:gapotentials:980}
I \eta \BH = c \spacegrad \wedge \BA.
\end{equation}
It’s a bit sneaky to toss that factor of \( c \) in here, but that’s done to make the units of \( \BA \) turn out in a way that matches the conventional vector potential. If it makes you feel better, you can think of this as an undetermined multiplicative constant that will be used to adjust the dimensions of \( \BA \) down the line.

Having made that choice, \ref{eqn:gapotentials:960} is automatically satisfied, and \ref{eqn:gapotentials:940} is reduced to
\begin{equation}\label{eqn:gapotentials:1000}
\begin{aligned}
0
&= \spacegrad \wedge \BE + I \eta \partial_0 \BH \\
&= \spacegrad \wedge \BE + \partial_0 \spacegrad \wedge \lr{ c \BA } \\
&= \spacegrad \wedge \lr{ \BE + c \partial_0 \BA }.
\end{aligned}
\end{equation}
We can now let
\begin{equation}\label{eqn:gapotentials:1020}
\BE + \partial_0 c \BA = -\spacegrad \phi.
\end{equation}
Again, we had the option of including an arbitrary multiplicative constant, but this time, we managed to find the right switch for our time machine, and look ahead to see that we want that constant to be \( -1 \) in order to have agreement with the conventional result.

We are left with a potential construction for our individual field components
\begin{equation}\label{eqn:gapotentials:1040}
\begin{aligned}
\BE &= -\spacegrad \phi - c \partial_0 \BA \\
I \eta \BH &= c \spacegrad \wedge \BA,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:1060}
F = -\spacegrad \phi - c \partial_0 \BA + c \spacegrad \wedge \BA.
\end{equation}
This automatically satisfies the grades of Maxwell’s equation that are source free, leaving us to solve just
\begin{equation}\label{eqn:gapotentials:1080}
\gpgrade{ \lr{ \spacegrad + \partial_0 } F }{0,1} = \gpgrade{J}{0,1}.
\end{equation}

Multivector potential.

It’s natural to wonder if there is a more structured form for \( F \) than \ref{eqn:gapotentials:1060}, just as we found a GA structure for Maxwell’s equation that eliminated the crazy mix of divs and curls that we had in the original Gibbs representation. Let’s find that structure. To do so, we can enclose \( F \) in a no-op grade selection operation
\begin{equation}\label{eqn:gapotentials:1100}
\begin{aligned}
F
&= \gpgrade{ -\spacegrad \phi - c \partial_0 \BA + c \spacegrad \wedge \BA }{1,2} \\
&= \gpgrade{ -\spacegrad \phi - c \partial_0 \BA + c \spacegrad \BA }{1,2} \\
&= \gpgrade{ \spacegrad \lr{ -\phi + c \BA } - c \partial_0 \BA + \lr{ \partial_0 \phi - \partial_0 \phi } }{1,2} \\
&= \gpgrade{ \lr{ \spacegrad - \partial_0 } \lr{ -\phi + c \BA } }{1,2}.
\end{aligned}
\end{equation}

We can now introduce a multivector potential, and express the remaining non-zero grades of Maxwell’s equation in terms of this potential
\begin{equation}\label{eqn:gapotentials:1120}
\begin{aligned}
A &= -\phi + c \BA \\
F &= \gpgrade{ \lr{ \spacegrad - \partial_0 } A }{1,2} \\
\gpgrade{J}{0,1} &= \gpgrade{ \lr{ \spacegrad + \partial_0 } F }{0,1}.
\end{aligned}
\end{equation}

Lorentz gauge.

The grade selection in our representation of \( F \) is a bit annoying, and can be eliminated if we impose additional constraints on the potential. We can write
\begin{equation}\label{eqn:gapotentials:1140}
F =
\lr{ \spacegrad - \partial_0 } A
-
\gpgrade{ \lr{ \spacegrad - \partial_0 } A }{0,3},
\end{equation}
and then ask what conditions are required for this grade(0,3) selection to be zero. In terms of our constituent potentials, that is
\begin{equation}\label{eqn:gapotentials:1160}
\begin{aligned}
0 &=
\gpgrade{ \lr{ \spacegrad - \partial_0 } A }{0,3} \\
&=
\gpgrade{ \lr{ \spacegrad - \partial_0 } \lr{ -\phi + c \BA } }{0,3} \\
&=
c \spacegrad \cdot \BA + \partial_0 \phi.
\end{aligned}
\end{equation}
This is the Lorentz gauge condition, recognized a bit more easily if written out in terms of the time partials explicitly
\begin{equation}\label{eqn:gapotentials:1180}
\inv{c^2} \PD{t}{\phi} + \spacegrad \cdot \BA = 0.
\end{equation}

We can now write Maxwell’s equations, in the potential formulation, as
\begin{equation}\label{eqn:gapotentials:1200}
\begin{aligned}
A &= -\phi + c \BA \\
F &= \lr{ \spacegrad - \partial_0 } A \\
0 &= \inv{c} \gpgrade{ \lr{ \spacegrad - \partial_0 } A }{0,3} = \inv{c^2} \PD{t}{\phi} + \spacegrad \cdot \BA \\
\gpgrade{J}{0,1} &= \gpgrade{ \lr{ \spacegrad + \partial_0 } F }{0,1} = \lr{ \spacegrad^2 - \partial_{00} } A.
\end{aligned}
\end{equation}
This is quite nice. We have a one to one decoupled relationship between the potential and the current, and are free to use the well known techniques for solving the wave equation (using convolution and a superposition of advanced and retarded Green’s functions for the wave equation operator.)
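For reference, assuming the standard retarded Green’s function for this wave operator, that solution takes the familiar convolution form
\begin{equation*}
A(\mathbf{x}, t) = -\inv{4 \pi} \int \frac{ \gpgrade{J}{0,1}\lr{ \mathbf{x}', t - \left\lvert \mathbf{x} - \mathbf{x}' \right\rvert/c } }{ \left\lvert \mathbf{x} - \mathbf{x}' \right\rvert } d^3 x',
\end{equation*}
with one such integral for each grade of the source.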

Gauge transformation.

There’s one more thing that we should look at before moving on to the magnetic sources case, and that’s the question of gauge freedom. We’ve said that the potentials are not unique, but this non-uniqueness has a very specific form.

Since we’ve constructed \( F \) with a grade selection as
\begin{equation}\label{eqn:gapotentials:1220}
F = \gpgrade{ \lr{ \spacegrad - \partial_0 } A }{1,2},
\end{equation}
so it’s clear that any transformation
\begin{equation}\label{eqn:gapotentials:1240}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } \psi_{0,3},
\end{equation}
where \( \psi_{0,3} \) is any multivector with grades(0,3) components, will leave \( F \) invariant. That is
\begin{equation}\label{eqn:gapotentials:1260}
\begin{aligned}
A &= -\phi + c \BA \\
&\rightarrow
-\phi + c \BA + \lr{ \spacegrad + \partial_0 } \psi_{0,3} \\
&=
-\phi + c \BA + \lr{ \spacegrad + \partial_0 } \lr{ c \psi + I \bar{\psi} } \\
&=
\lr{ -\phi + c \partial_0 \psi }
+ c \lr{ \BA + \spacegrad \psi }
+ I \spacegrad \bar{\psi}
+ I \partial_0 \bar{\psi}.
\end{aligned}
\end{equation}
We see that the contributions of \( \bar{\psi} \) result in grade(2,3) terms, which are not of interest, and we find that a paired transformation of the potentials
\begin{equation}\label{eqn:gapotentials:1280}
\begin{aligned}
\phi &\rightarrow \phi - \PD{t}{\psi} \\
\BA &\rightarrow \BA + \spacegrad \psi,
\end{aligned}
\end{equation}
called a gauge transformation, leaves the field \( F \) unchanged. This can be expressed slightly more compactly as
\begin{equation}\label{eqn:gapotentials:1300}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } c \psi,
\end{equation}
where, once again, the multiplicative constant \( c \) is included for consistency with the conventional expression for the potential gauge transformation.
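As a quick check that this leaves \( F \) unchanged: the gradient and the time partial commute, so the added term contributes only a scalar, which the grade selection annihilates
\begin{equation*}
\gpgrade{ \lr{ \spacegrad - \partial_0 } \lr{ A + \lr{ \spacegrad + \partial_0 } c \psi } }{1,2}
= F + c \gpgrade{ \lr{ \spacegrad^2 - \partial_{00} } \psi }{1,2}
= F.
\end{equation*}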

Case II. With (fictitious) magnetic sources.

With magnetic sources, Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:1500}
\lr{ \spacegrad + \partial_0 } F = \gpgrade{J}{2,3}.
\end{equation}
We put this in dual form
\begin{equation}\label{eqn:gapotentials:1520}
\lr{ \spacegrad + \partial_0 } I F = I \gpgrade{J}{2,3},
\end{equation}
which now has all of the sources in grades (0,1), as we just analyzed. The dual field \( I F \), like \( F \), has only grade(1,2) components.

Expanding the source free Maxwell’s equations in terms of \( \BE, \BH \), we have
\begin{equation}\label{eqn:gapotentials:1340}
\begin{aligned}
0
&= \gpgrade{ \lr{ \spacegrad + \partial_0 } I F}{2,3} \\
&= \gpgrade{ \lr{ \spacegrad + \partial_0 } \lr{I \BE - \eta \BH } }{2,3} \\
&= \gpgrade{ I \spacegrad \BE - \eta \spacegrad \BH + I \partial_0 \BE - \eta \partial_0 \BH }{2,3} \\
&= \spacegrad \wedge \lr{ I \BE } - \eta \spacegrad \wedge \BH + I \partial_0 \BE,
\end{aligned}
\end{equation}
or, by grade
\begin{equation}\label{eqn:gapotentials:1360}
0 = \spacegrad \wedge \lr{ I \BE },
\end{equation}
\begin{equation}\label{eqn:gapotentials:1361}
0 = - \eta \spacegrad \wedge \BH + I \partial_0 \BE.
\end{equation}
We see that the dual electric field needs to be a curl to satisfy \ref{eqn:gapotentials:1360}
\begin{equation}\label{eqn:gapotentials:1400}
I \BE = -\eta \spacegrad \wedge c \BF,
\end{equation}
and after substitution into \ref{eqn:gapotentials:1361} we are left with
\begin{equation}\label{eqn:gapotentials:1540}
\begin{aligned}
0
&= - \eta \spacegrad \wedge \BH + \partial_0 \lr{ - \eta \spacegrad \wedge c \BF } \\
&= \eta \spacegrad \wedge \lr{ -\BH - \partial_0 c \BF }.
\end{aligned}
\end{equation}
We set
\begin{equation}\label{eqn:gapotentials:1420}
-\BH - \partial_0 c \BF = \spacegrad \phi_m.
\end{equation}
Our fields are
\begin{equation}\label{eqn:gapotentials:1440}
\begin{aligned}
\BE &= - \inv{\epsilon} \spacegrad \cross \BF \\
\BH &= -\spacegrad \phi_m - \PD{t}{\BF}.
\end{aligned}
\end{equation}
This has the structure that matches the potential conventions from antenna theory, for example as stated in [1].

Multivector potential.

As with the electrical sources, we expect that we can write this as something like
\begin{equation}\label{eqn:gapotentials:1460}
F = \gpgrade{ \lr{ \spacegrad - \partial_0 } I A }{1,2}.
\end{equation}
Let’s verify that this is the case.
\begin{equation}\label{eqn:gapotentials:1480}
\begin{aligned}
F
&= I \eta \spacegrad \wedge (c \BF) -I \eta \spacegrad \phi_m - I \eta \partial_0 c \BF \\
&= \gpgrade{ I \eta \spacegrad \wedge (c \BF) -I \eta \spacegrad \phi_m - I \eta \partial_0 c \BF }{1,2} \\
&= \gpgrade{ I \eta \spacegrad c \BF -I \eta \spacegrad \phi_m - I \eta \partial_0 c \BF }{1,2} \\
&= \gpgrade{ I \eta \lr{ \spacegrad \lr{ - \phi_m + c \BF } - \partial_0 c \BF + \partial_0 \phi_m - \partial_0 \phi_m} }{1,2} \\
&= \gpgrade{ \lr{ \spacegrad - \partial_0 } I \eta \lr{ - \phi_m + c \BF } }{1,2}.
\end{aligned}
\end{equation}

Lorentz gauge.

Let’s see what constraints we need to write our field in terms of a potential without a grade selection, that is
\begin{equation}\label{eqn:gapotentials:1560}
F = \lr{ \spacegrad - \partial_0 } I \eta \lr{ - \phi_m + c \BF }.
\end{equation}
We need the grade(0,3) components of this multivector to be zero. Those components are
\begin{equation}\label{eqn:gapotentials:1580}
\begin{aligned}
0 &=
\gpgrade{ \lr{ \spacegrad - \partial_0 } I \eta \lr{ - \phi_m + c \BF }}{0,3} \\
&=
\gpgrade{-\spacegrad I \eta \phi_m+\spacegrad I \eta c \BF+ \partial_0 I \eta \phi_m - \partial_0 I \eta c \BF }{0,3} \\
&=
\gpgradethree{ \spacegrad I \eta c \BF }
+ \partial_0 I \eta \phi_m \\
&=
I \eta \lr{ c \lr{ \spacegrad \cdot \BF} + \partial_0 \phi_m },
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:1600}
0 = \inv{c^2} \PD{t}{\phi_m} + \spacegrad \cdot \BF.
\end{equation}
This is the Lorentz gauge condition. With this condition we can express Maxwell’s equation with magnetic sources as a forced wave equation
\begin{equation}\label{eqn:gapotentials:1620}
\begin{aligned}
A &= I \eta \lr{ -\phi_m + c \BF } \\
F &= \lr{ \spacegrad - \partial_0 } A \\
0 &= \inv{c} \gpgrade{ \lr{ \spacegrad - \partial_0 } A }{0,3} = \inv{c^2} \PD{t}{\phi_m} + \spacegrad \cdot \BF \\
\gpgrade{J}{2,3} &= \gpgrade{ \lr{ \spacegrad + \partial_0 } F }{2,3} = \lr{ \spacegrad^2 - \partial_{00} } A.
\end{aligned}
\end{equation}

Gauge transformation.

Without the Lorentz gauge assumption, our potential representation for the field is
\begin{equation}\label{eqn:gapotentials:1640}
\begin{aligned}
A &= I \eta \lr{ -\phi_m + c \BF } \\
F &= \gpgrade{ \lr{ \spacegrad - \partial_0 } A }{1,2}.
\end{aligned}
\end{equation}
It’s clear that any transformation of the form
\begin{equation}\label{eqn:gapotentials:1660}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } \psi_{0,3},
\end{equation}
leaves the field unchanged.
\begin{equation}\label{eqn:gapotentials:1680}
\begin{aligned}
A &= I \eta \lr{ -\phi_m + c \BF } \\
&\rightarrow
I \eta \lr{ -\phi_m + c \BF } + \lr{ \spacegrad + \partial_0 } \psi_{0,3} \\
&=
I \eta \lr{ -\phi_m + c \BF } + \lr{ \spacegrad + \partial_0 } \lr{ \psi + I \eta c \bar{\psi} } \\
&=
I \eta \lr{
-\phi_m
+ c \partial_0 \bar{\psi}
+ c \BF
+ c \spacegrad \bar{\psi}
}
+ \lr{ \spacegrad + \partial_0 } \psi.
\end{aligned}
\end{equation}
We can drop the \( \psi \) contributions, since this time we want only grades(2,3) in our potential, and find that the
desired form of the gauge transformation, for scalar \( \bar{\psi} \), is
\begin{equation}\label{eqn:gapotentials:1700}
\begin{aligned}
\phi_m &\rightarrow \phi_m - \PD{t}{\bar{\psi}} \\
\BF &\rightarrow \BF + \spacegrad \bar{\psi}.
\end{aligned}
\end{equation}
The multivector form of this is
\begin{equation}\label{eqn:gapotentials:1720}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } I \eta c \bar{\psi}.
\end{equation}

Superposition.

We can now use superposition to construct a potential representation that works for both conventional electric and fictitious magnetic charges and currents.

Without a Lorentz gauge assumption, that is
\begin{equation}\label{eqn:gapotentials:1760}
\begin{aligned}
A &= -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } \\
F &= \gpgrade{ \lr{ \spacegrad - \partial_0 } A }{1,2} \\
J &= \lr{ \spacegrad + \partial_0 } F,
\end{aligned}
\end{equation}
where, given scalar functions \( \psi, \bar{\psi} \), we are free to make gauge transformations of the multivector potential that satisfy
\begin{equation}\label{eqn:gapotentials:1800}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } \lr{ c \psi + I \eta c \bar{\psi} },
\end{equation}

With a Lorentz gauge constraint, we have a wave equation operator acting on \( A \), with the multivector current as a forcing term.
\begin{equation}\label{eqn:gapotentials:1780}
\begin{aligned}
A &= -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } \\
0 &= \gpgrade{ \lr{ \spacegrad - \partial_0 } A }{0,3} \\
F &= \lr{ \spacegrad - \partial_0 } A \\
J &= \lr{ \spacegrad^2 - \partial_{00} } A.
\end{aligned}
\end{equation}

Check.

It’s worth expanding this to verify that we got all the dimensional constants right, and to compare the results to Maxwell’s equations in their Gibbs form.

Let’s start with an expansion of \( F \) in terms of the potentials
\begin{equation}\label{eqn:gapotentials:1820}
\begin{aligned}
F &=
\gpgrade{\lr{ \spacegrad - \partial_0 } A }{1,2} \\
&= \gpgrade{ \lr{ \spacegrad - \partial_0 } \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } } }{1,2} \\
&=
\gpgrade{ \spacegrad \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } } -\partial_0 \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } } }{1,2} \\
&=
\gpgrade{ \spacegrad \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } } -\partial_0 \lr{ c \BA + I \eta c \BF } }{1,2} \\
&=
-\spacegrad \phi + c \spacegrad \wedge \BA - I \eta \spacegrad \phi_m + I \eta c \spacegrad \wedge \BF
-\partial_0 \lr{ c \BA + I \eta c \BF }.
\end{aligned}
\end{equation}
That is
\begin{equation}\label{eqn:gapotentials:1840}
\begin{aligned}
\BE &= -\spacegrad \phi + I \eta c \spacegrad \wedge \BF -c \partial_0 \BA \\
I \eta \BH &= c \spacegrad \wedge \BA - I \eta \spacegrad \phi_m - I \eta c \partial_0 \BF,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:1860}
\begin{aligned}
\BE &= - \spacegrad \phi -\partial_t \BA - \inv{\epsilon} \spacegrad \cross \BF \\
\BH &= - \spacegrad \phi_m - \partial_t \BF + \inv{\mu} \spacegrad \cross \BA.
\end{aligned}
\end{equation}
All is good. This is exactly the form that we expect.

Let’s expand out Maxwell’s equation in terms of this potential representation and see what we get.

Let’s write the total field without the grade(1,2) selection, by subtracting off any grade(0,3) contributions
\begin{equation}\label{eqn:gapotentials:1880}
F = \lr{ \spacegrad - \partial_0 } A - \gpgrade{ \lr{ \spacegrad - \partial_0 } A }{0,3}.
\end{equation}
That difference term is
\begin{equation}\label{eqn:gapotentials:1900}
\begin{aligned}
- \gpgrade{ \lr{ \spacegrad - \partial_0 } A }{0,3}
&=
- \gpgrade{ \lr{ \spacegrad - \partial_0 } \lr{ -\phi + c \BA - I \eta \phi_m + I \eta c \BF } }{0,3} \\
&=
- c \spacegrad \cdot \BA - I \eta c \spacegrad \cdot \BF - \partial_0 \phi - I \eta \partial_0 \phi_m.
\end{aligned}
\end{equation}
The field is nicely split into a multivector term that depends directly on the full multivector potential \( A \), and a difference term that wipes out any scalar and pseudoscalar terms
\begin{equation}\label{eqn:gapotentials:1920}
F
=
\lr{ \spacegrad - \partial_0 } A
- \lr{ \partial_0 \phi + c \spacegrad \cdot \BA } - I \eta \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF }.
\end{equation}

Maxwell’s equations are now reduced to
\begin{equation}\label{eqn:gapotentials:1940}
\lr{ \spacegrad^2 - \partial_{00} } A
-
\lr{ \spacegrad + \partial_0 }
\lr{ \partial_0 \phi + c \spacegrad \cdot \BA }
-
\lr{ \spacegrad + \partial_0 }
I \eta \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF }
= J.
\end{equation}
This splits nicely into a single equation for each grade of \( A, J \) respectively. We write
\begin{equation}\label{eqn:gapotentials:1960}
J = \eta\lr{ c \rho - \BJ } + I \lr{ c \rho_m - \BM },
\end{equation}
so
\begin{equation}\label{eqn:gapotentials:1980}
\begin{aligned}
\lr{ \spacegrad^2 - \partial_{00} } (-\phi) - \partial_0 \lr{ \partial_0 \phi + c \spacegrad \cdot \BA } &= \eta c \rho \\
\lr{ \spacegrad^2 - \partial_{00} } (c \BA) - \spacegrad \lr{ \partial_0 \phi + c \spacegrad \cdot \BA } &= -\eta \BJ \\
\lr{ \spacegrad^2 - \partial_{00} } (I \eta c \BF) - I \eta \partial_0 \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF } &= -I \BM \\
\lr{ \spacegrad^2 - \partial_{00} } (-I \eta \phi_m) - I \eta \spacegrad \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF } &= I c \rho_m.
\end{aligned}
\end{equation}
If we choose the Lorentz gauge conditions
\begin{equation}\label{eqn:gapotentials:2000}
0 = \lr{ \partial_0 \phi + c \spacegrad \cdot \BA } = \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF },
\end{equation}
all of these equations decouple nicely, leaving us with 8 (scalar) equations in 8 unknowns
\begin{equation}\label{eqn:gapotentials:2020}
\begin{aligned}
\lr{ \spacegrad^2 - \partial_{00} } \phi &= -\frac{\rho}{\epsilon} \\
\lr{ \spacegrad^2 - \partial_{00} } \BA &= -\mu \BJ \\
\lr{ \spacegrad^2 - \partial_{00} } \BF &= -\epsilon \BM \\
\lr{ \spacegrad^2 - \partial_{00} } \phi_m &= - \frac{\rho_m}{\mu}.
\end{aligned}
\end{equation}
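As a sanity check, the first of these is just the conventional wave equation for the scalar potential, with the familiar retarded solution
\begin{equation*}
\phi(\mathbf{x}, t) = \inv{4 \pi \epsilon} \int \frac{ \rho\lr{ \mathbf{x}', t - \left\lvert \mathbf{x} - \mathbf{x}' \right\rvert/c } }{ \left\lvert \mathbf{x} - \mathbf{x}' \right\rvert } d^3 x'.
\end{equation*}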

Potentials in STA (space time algebra).

All of this was very convoluted. Maxwell’s equation in STA form is considerably simpler, as is the potential formulation.

STA form of Maxwell’s equation.

We identify
\begin{equation}\label{eqn:gapotentials:2040}
\begin{aligned}
\Be_k &= \gamma_k \gamma_0 \\
I &= \Be_1 \Be_2 \Be_3 = \gamma_0 \gamma_1 \gamma_2 \gamma_3 \\
\gamma^\mu \cdot \gamma_\nu &= {\delta^\mu}_\nu.
\end{aligned}
\end{equation}
Our field multivector
\begin{equation}\label{eqn:gapotentials:2060}
\begin{aligned}
F
&= \BE + I \eta \BH \\
&= \gamma_{k0} E^k + \eta \gamma_{0123k0} H^k \\
&= \gamma_{k0} E^k + \eta \gamma_{123k} H^k,
\end{aligned}
\end{equation}
now has a pure bivector representation in STA (since \( k \) will always clobber one of the \( 1,2,3 \) indexes.) To find the STA representation of Maxwell’s equation, we simply multiply both sides of our multivector representation, from the left, by \( \gamma_0 \).
\begin{equation}\label{eqn:gapotentials:2080}
\gamma_0 \lr{ \spacegrad + \partial_0 } F = \gamma_0 \lr{ \eta \lr{ c \rho - \BJ } + I \lr{ c \rho_m - \BM } }.
\end{equation}
The LHS is just the spacetime gradient of \( F \), which we can see by expanding the product
\begin{equation}\label{eqn:gapotentials:2100}
\begin{aligned}
\gamma_0 \lr{ \spacegrad + \partial_0 }
&=
\gamma_0 \lr{ \gamma_{k0} \PD{x^k}{} + \PD{x^0}{} } \\
&=
-\gamma_{k} \PD{x^k}{} + \gamma_0 \PD{x^0}{}.
\end{aligned}
\end{equation}
This is the spacetime gradient
\begin{equation}\label{eqn:gapotentials:2120}
\grad \equiv \gamma^k \PD{x^k}{} + \gamma^0 \PD{x^0}{} = \gamma^\mu \partial_\mu.
\end{equation}
Our RHS is
\begin{equation}\label{eqn:gapotentials:2140}
\begin{aligned}
\gamma_0 \lr{ \eta \lr{ c \rho – \BJ } + I \lr{ c \rho_m – \BM } }
&=
\gamma_0 \frac{\rho}{\epsilon} - \gamma_{0k0} \eta (\BJ \cdot \Be_k)
- I \lr{ c \rho_m \gamma_0 - \gamma_{0k0} (\BM \cdot \Be_k) } \\
&=
\gamma_0 \frac{\rho}{\epsilon} + \gamma_k \eta (\BJ \cdot \Be_k)
- I \lr{ c \rho_m \gamma_0 + \gamma_{k} (\BM \cdot \Be_k) }.
\end{aligned}
\end{equation}
If we let
\begin{equation}\label{eqn:gapotentials:2160}
\begin{aligned}
J_e^0 &= \frac{\rho}{\epsilon} \\
J_e^k &= \eta (\BJ \cdot \Be_k) \\
J_m^0 &= c \rho_m \\
J_m^k &= (\BM \cdot \Be_k) \\
J_e &= J_e^\mu \gamma_\mu \\
J_m &= J_m^\mu \gamma_\mu,
\end{aligned}
\end{equation}
then we are left with
\begin{equation}\label{eqn:gapotentials:2180}
\grad F = J_e - I J_m,
\end{equation}
or just
\begin{equation}\label{eqn:gapotentials:2640}
\grad F = J,
\end{equation}
where we now give a different meaning to \( J \) than we had in the multivector formulation. This \( J \) is now a multivector with grade(1,3) components.

Case I: potential formulation for conventional sources.

Much like we did to find the potential formulation for the multivector form of Maxwell’s equation, we use superposition, tackling the conventional sources and the fictitious magnetic sources separately.

With no fictitious sources, Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:2200}
\grad F = J_e,
\end{equation}
which we may split into vector and trivector components
\begin{equation}\label{eqn:gapotentials:2220}
\begin{aligned}
\grad \cdot F &= J_e \\
\grad \wedge F &= 0.
\end{aligned}
\end{equation}
Clearly, the trivector equation can be satisfied by setting
\begin{equation}\label{eqn:gapotentials:2240}
F = \grad \wedge A,
\end{equation}
for some vector \( A \). We may also make gauge transformations of \( A \) of the form
\begin{equation}\label{eqn:gapotentials:2260}
A \rightarrow A + \grad \psi,
\end{equation}
without changing \( F \), showing that \( A \) is not uniquely determined. With such a representation, Maxwell’s equation is now reduced to
\begin{equation}\label{eqn:gapotentials:2280}
\grad \cdot F = J_e,
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:2300}
\begin{aligned}
J_e
&=
\grad \cdot \lr{ \grad \wedge A } \\
&=
\grad^2 A – \grad \lr{ \grad \cdot A }.
\end{aligned}
\end{equation}
Clearly the equivalent of the Lorentz gauge condition is now just
\begin{equation}\label{eqn:gapotentials:2320}
\grad \cdot A = 0,
\end{equation}
so the Lorentz gauge potential form of Maxwell’s equation is just
\begin{equation}\label{eqn:gapotentials:n}
\grad^2 A = J_e.
\end{equation}

Case II: potential formulation for fictitious sources.

If we have only fictitious sources, Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:2340}
\grad F = -I J_m,
\end{equation}
or, after left multiplication by \( I \) (using \( I^2 = -1 \), and the fact that the spacetime gradient anticommutes with the grade four pseudoscalar, so that \( I \grad F = -\grad I F \)), we have
\begin{equation}\label{eqn:gapotentials:2360}
\grad I F = -J_m.
\end{equation}
Let \( G = I F \) be the dual field, which is still a bivector. As before, we can split Maxwell’s equations into vector and trivector components
\begin{equation}\label{eqn:gapotentials:2380}
\begin{aligned}
\grad \cdot G &= -J_m \\
\grad \wedge G &= 0.
\end{aligned}
\end{equation}
We may set
\begin{equation}\label{eqn:gapotentials:2400}
G = \grad \wedge K,
\end{equation}
for vector \( K \). Maxwell’s equation is now reduced to
\begin{equation}\label{eqn:gapotentials:2420}
\grad \cdot G = -J_m,
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:2440}
\begin{aligned}
-J_m
&=
\grad \cdot \lr{ \grad \wedge K } \\
&=
\grad^2 K – \grad \lr{ \grad \cdot K }.
\end{aligned}
\end{equation}

As before, we may make gauge transformations by adding a gradient to our potential
\begin{equation}\label{eqn:gapotentials:2460}
K \rightarrow K + \grad \bar{\psi},
\end{equation}
which will not change \( G \). For such sources, the Lorentz gauge condition is \( \grad \cdot K = 0 \). With the Lorentz gauge, Maxwell’s equation is reduced to
\begin{equation}\label{eqn:gapotentials:2480}
\grad^2 K = -J_m.
\end{equation}

Superposition.

For non-fictitious sources, we have
\begin{equation}\label{eqn:gapotentials:2500}
F = \grad \wedge A
\end{equation}
and for fictitious sources, we have
\begin{equation}\label{eqn:gapotentials:2520}
I F = G = \grad \wedge K,
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:2540}
F = -I G = -I \lr{ \grad \wedge K }.
\end{equation}
Combining these results, we have
\begin{equation}\label{eqn:gapotentials:2560}
\begin{aligned}
F
&= \grad \wedge A -I \lr{ \grad \wedge K } \\
&= \gpgradetwo{ \grad \wedge A -I \lr{ \grad \wedge K } } \\
&= \gpgradetwo{ \grad A -I \lr{ \grad K } } \\
&= \gpgradetwo{ \grad \lr{ A + I K } },
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:2580}
F = \grad \lr{ A + I K } - \gpgrade{ \grad \lr{ A + I K } }{0,4}.
\end{equation}
Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:2600}
\grad^2 \lr{ A + I K } - \grad \gpgrade{ \grad \lr{ A + I K } }{0,4} = J.
\end{equation}
With the Lorentz gauge, this splits nicely into one forced wave equation for each vector potential
\begin{equation}\label{eqn:gapotentials:2620}
\begin{aligned}
\grad^2 A &= J_e \\
\grad^2 K &= -J_m.
\end{aligned}
\end{equation}

References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 3rd edition, 2005.

[2] R.P. Feynman, R.B. Leighton, and M.L. Sands. Feynman lectures on physics, Volume II.[Lectures on physics], chapter The Maxwell Equations. Addison-Wesley Publishing Company. Reading, Massachusetts, 1963. URL https://www.feynmanlectures.caltech.edu/II_18.html.

[3] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[4] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Geometric calculus relationships to differential forms, and vector calculus identities.

November 12, 2023 math and physics play

[Click here for a PDF version of this post]

Motivation.

I was asked about the geometric algebra equivalents of some of the vector calculus identities from [1]. I’ll call the specific page of those calculus notes “the article”. The article includes identities like
\begin{equation}\label{eqn:formAndCurl:20}
\begin{aligned}
\spacegrad (f g) &= f \spacegrad g + g \spacegrad f \\
\spacegrad \cross (f \BF) &= f \spacegrad \cross \BF + (\spacegrad f) \cross \BF \\
\spacegrad \cdot (f \BF) &= f \spacegrad \cdot \BF + (\spacegrad f) \cdot \BF \\
\spacegrad \cdot \lr{ \BF \cross \BG } &= \BG \cdot \lr{ \spacegrad \cross \BF } - \BF \cdot \lr{ \spacegrad \cross \BG },
\end{aligned}
\end{equation}
but the point of these particular lecture notes is the interface between traditional Gibbs vector calculus and differential forms. That’s a much bigger topic, and perhaps not what I was actually being asked about. It is, however, an interesting topic, so let’s explore it.

Comparisons.

The article identifies the cross product representation of the curl \( \spacegrad \cross \BF \) as the equivalent to the exterior derivative of a one form (which has been mapped to a vector function.) In geometric algebra, this isn’t the identification we would use. Instead we should identify the “bivector curl” \( \spacegrad \wedge \BF \) as the logical equivalent of the exterior derivative of that one form, and in general identify \( \spacegrad \wedge A_k \) as the exterior derivative of a k-form (k-blade). In my notes to follow, the wedge of the gradient with a function, will be called the curl of that function, even if we are operating in \(\mathbb{R}^3\) where the cross product is defined.

The starting place of the article was to define a one form and its exterior derivative, essentially as follows

Definition 1.1: The exterior derivative of a one form.

Let \( f : \mathbb{R}^N \rightarrow \mathbb{R} \) be a zero form. Its exterior derivative is
\begin{equation*}
df = \sum_i dx_i \PD{x_i}{f}.
\end{equation*}

I’ve stated that the GA equivalent of the exterior derivative is a curl \( \spacegrad \wedge A \), and this doesn’t look anything like a curl, so right away, we have some trouble to deal with. To resolve that trouble, let’s step back to the gradient, which we haven’t defined yet. In the article, the gradient (of a scalar function) was defined as a coordinate triplet
\begin{equation}\label{eqn:formAndCurl:60}
\spacegrad f = \lr{ \PD{x}{f}, \PD{y}{f}, \PD{z}{f} }.
\end{equation}
In GA we don’t like representations where the basis vectors are implicit, so we’d prefer to define

Definition 1.2: The gradient of a function.

We define the gradient of multivector \( f(x_1, x_2, \cdots, x_N) \), and denote it by \( \spacegrad f \)
\begin{equation*}
\spacegrad f = \sum_{i=1}^N \Be_i \PD{x_i}{f},
\end{equation*}
where \( \setlr{ \Be_1, \cdots \Be_N } \) is an orthonormal basis for \(\mathbb{R}^N\).

Unlike the article, we do not restrict \( f \) to be a scalar function, since we do not have a problem with a vector valued operator directly multiplying a vector or any product of vectors. Instead \( f \) can be a multivector function, with scalar, vector, bivector, trivector, … components, and we define the gradient the same way.
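For example, applied to a vector valued function \( \Bf = \sum_j \Be_j f_j \), this gradient produces a scalar plus a bivector
\begin{equation*}
\spacegrad \Bf = \sum_{i,j} \Be_i \Be_j \PD{x_i}{f_j} = \spacegrad \cdot \Bf + \spacegrad \wedge \Bf,
\end{equation*}
that is, the divergence and the (bivector) curl packaged into a single geometric product.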

In order to define the curl of a k-blade, we need a reminder of how we define the wedge of a vector with a k-blade. Recall that this is how we generally define the wedge between two blades.

Definition 1.3:

Let \( A_r \) be a r-blade, and \( B_s \) a s-blade. The wedge of \( A_r \) with \( B_s \) is
\begin{equation}\label{eqn:formAndCurl:120}
A_r \wedge B_s = \gpgrade{A_r B_s}{r+s}.
\end{equation}

In particular, if \( \Ba \) is a vector, then the wedge with an s-blade \( B_s \) is
\begin{equation}\label{eqn:formAndCurl:140}
\Ba \wedge B_s = \gpgrade{\Ba B_s}{s+1},
\end{equation}
which is just the \( s+1 \) grade selection of their product. Furthermore, if \( f \) is a scalar, then
\begin{equation}\label{eqn:formAndCurl:160}
\Ba \wedge f = \gpgrade{\Ba f}{1} = \Ba f.
\end{equation}
We can now state the curl of a k-blade

Definition 1.4: Curl of a k-blade.

Let \( A_k \) be a k-blade. We define the curl of a k-blade as the wedge product of the gradient with that k-blade, designated
\begin{equation*}
\spacegrad \wedge A_k.
\end{equation*}

Observe, given our generalized wedge product definition above, that the curl of a scalar function \( f \), is in fact just the gradient of that function
\begin{equation}\label{eqn:formAndCurl:200}
\spacegrad \wedge f = \spacegrad f = \sum_i \Be_i \PD{x_i}{f}.
\end{equation}
This has exactly the structure of the exterior derivative of a one form, as stated in “Definition: The exterior derivative of a one form”, but we have replaced \( dx_i \) with a basis vector \( \Be_i \).
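As a concrete illustration of that correspondence, for the zero form \( f(x, y) = x y^2 \), we have
\begin{equation*}
df = y^2 \, dx + 2 x y \, dy,
\qquad
\spacegrad f = \Be_1 y^2 + \Be_2 \, 2 x y,
\end{equation*}
with identical coefficients, and only the \( dx_i \leftrightarrow \Be_i \) substitution distinguishing the two.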

Definition 1.5: Exterior derivative of a one-form.

Let \( \omega = f_i dx_i \) be a one-form. Its exterior derivative \( d \omega \) is
\begin{equation*}
d\omega = \sum_i d( f_i ) \wedge dx_i.
\end{equation*}

Lemma 1.1: Exterior derivative of a one-form.

Let \( \omega = f_i dx_i \) be a one-form. The exterior derivative \( d \omega \) can be expanded into a Jacobian form
\begin{equation*}
d\omega
=
\sum_{i < j} \lr{
\PD{x_i}{f_j}
-
\PD{x_j}{f_i}
} dx_i \wedge dx_j.
\end{equation*}

Start proof:

\begin{equation}\label{eqn:formAndCurl:220}
\begin{aligned}
d\omega
&= \sum_j d( f_j dx_j ) \\
&= \sum_j d( f_j ) \wedge dx_j \\
&= \sum_j \lr{ \sum_i dx_i \PD{x_i}{f_j} } \wedge dx_j \\
&= \sum_{ji} \PD{x_i}{f_j} dx_i \wedge dx_j \\
&= \sum_{j \ne i} \PD{x_i}{f_j} dx_i \wedge dx_j \\
&=
\sum_{i < j} \PD{x_i}{f_j} dx_i \wedge dx_j
+
\sum_{j < i} \PD{x_i}{f_j} dx_i \wedge dx_j \\
&=
\sum_{i < j} \PD{x_i}{f_j} dx_i \wedge dx_j
+
\sum_{i < j} \PD{x_j}{f_i} dx_j \wedge dx_i \\
&=
\sum_{i < j} \lr{
\PD{x_i}{f_j}
-
\PD{x_j}{f_i}
} dx_i \wedge dx_j.
\end{aligned}
\end{equation}

End proof.

Lemma 1.2: Curl of a vector.

Let \( \Bf = \sum_i \Be_i f_i \in \mathbb{R}^N \) be a vector. The curl of \( \Bf \) has a Jacobian structure
\begin{equation*}
\spacegrad \wedge \Bf
=
\sum_{i < j}
\lr{ \PD{x_i}{f_j} - \PD{x_j}{f_i} }
\lr{ \Be_i \wedge \Be_j }
.
\end{equation*}

Start proof:

The antisymmetry of the wedges of differentials in the exterior derivative and the curl clearly has a one to one correspondence. Let’s show this explicitly by expansion
\begin{equation}\label{eqn:formAndCurl:240}
\begin{aligned}
\spacegrad \wedge \Bf
&=
\sum_{ij} \lr{ \Be_i \PD{x_i}{} } \wedge \lr{ \Be_j f_j } \\
&=
\sum_{ij} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j} \\
&=
\sum_{i \ne j} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j} \\
&=
\sum_{i < j} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j}
+
\sum_{j < i} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j} \\
&=
\sum_{i < j} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j}
+
\sum_{i < j} \lr{ \Be_j \wedge \Be_i } \PD{x_j}{f_i} \\
&=
\sum_{i < j} \lr{ \Be_i \wedge \Be_j } \lr{ \PD{x_i}{f_j} - \PD{x_j}{f_i} }.
\end{aligned}
\end{equation}

End proof.

If we are translating from differential forms, again, we see that we simply replace any differentials \( dx_i \) with the basis vectors \( \Be_i \) (at least for the zero-form and one-form cases, which is all that we have looked at here.)

Note that in differential forms, we often assume that there is an implicit wedge product between any different one form elements, writing
\begin{equation}\label{eqn:formAndCurl:260}
dx_1 \wedge dx_2 = dx_1 dx_2.
\end{equation}
This works out fine when we map differentials to basis vectors, since
\begin{equation}\label{eqn:formAndCurl:280}
\Be_1 \Be_2 =
\Be_1 \cdot \Be_2
+
\Be_1 \wedge \Be_2
=
\Be_1 \wedge \Be_2.
\end{equation}
However, we have to be more careful in GA when using indexed expressions, since
\begin{equation}\label{eqn:formAndCurl:300}
\Be_i \Be_j = \Be_i \cdot \Be_j + \Be_i \wedge \Be_j.
\end{equation}
The dot product portion of the RHS is only zero if \( i \ne j \).
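For example, with a repeated index the two products differ
\begin{equation*}
\Be_1 \Be_1 = \Be_1 \cdot \Be_1 = 1,
\qquad
\Be_1 \wedge \Be_1 = 0,
\end{equation*}
so the implicit-wedge shorthand only maps cleanly onto the geometric product when \( i \ne j \).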

Now let’s look at the equivalence between the exterior derivative of a two-form with the curl.

Definition 1.6: Exterior derivative of a two-form.

Let \( \eta = \sum_{ij} f_{ij} dx_i \wedge dx_j \) be a two-form. The exterior derivative of \( \eta \) is
\begin{equation*}
d\eta =
\sum_{ij} d( f_{ij} ) \wedge dx_i \wedge dx_j.
\end{equation*}

Lemma 1.3: Exterior derivative of a two-form.

Let \( \eta = \sum_{ij} f_{ij} dx_i \wedge dx_j \) be a two-form. The exterior derivative of \( \eta \) can be expanded as
\begin{equation*}
d \eta
=
\sum_{i,j,k} \PD{x_k}{f_{ij}} dx_i \wedge dx_j \wedge dx_k.
\end{equation*}

Start proof:

The exterior derivative of \( \eta \) is
\begin{equation}\label{eqn:formAndCurl:340}
\begin{aligned}
d \eta
&=
\sum_{i,j} d( f_{ij} dx_i \wedge dx_j ) \\
&=
\sum_{i,j,k} \lr{ \PD{x_k}{f_{ij}} dx_k } \wedge dx_i \wedge dx_j \\
&=
\sum_{i,j,k} \PD{x_k}{f_{ij}} dx_i \wedge dx_j \wedge dx_k.
\end{aligned}
\end{equation}

End proof.

Let’s compare that to the curl of a bivector valued function.

Lemma 1.4: Curl of a 2-blade.

Let \( B = \sum_{i \ne j} f_{ij} \Be_i \wedge \Be_j \) be a 2-blade. The curl of \( B \) is
\begin{equation*}
\spacegrad \wedge B
=
\sum_{i,j,k} \PD{x_k}{f_{ij}} \Be_i \wedge \Be_j \wedge \Be_k.
\end{equation*}

Start proof:

\begin{equation}\label{eqn:formAndCurl:380}
\begin{aligned}
\spacegrad \wedge B
&=
\lr{ \sum_k \Be_k \PD{x_k}{} } \wedge \lr{ \sum_{i \ne j} f_{ij} \Be_i \wedge \Be_j } \\
&=
\sum_{k, i \ne j} \PD{x_k}{f_{ij}} \Be_k \wedge \Be_i \wedge \Be_j \\
&=
\sum_{i,j,k} \PD{x_k}{f_{ij}} \Be_i \wedge \Be_j \wedge \Be_k.
\end{aligned}
\end{equation}

End proof.

Again, we see an exact correspondence with the exterior derivative \( d \eta \) of a two-form, and the curl \( \spacegrad \wedge B \), of a 2-blade.

The article established a correspondence between the exterior derivative of a two form over \(\mathbb{R}^3\) and the divergence. The way we would express this in GA (also for \(\mathbb{R}^3\)) is to write
\begin{equation}\label{eqn:formAndCurl:400}
B = I \Bb,
\end{equation}
where \( I = \Be_1 \Be_2 \Be_3 \) is the \(\mathbb{R}^3\) pseudoscalar (a “unit” trivector.) Forming the curl of \( B \) we have
\begin{equation}\label{eqn:formAndCurl:420}
\begin{aligned}
\spacegrad \wedge B
&= \gpgradethree{ \spacegrad B } \\
&= \gpgradethree{ \spacegrad I \Bb } \\
&= \gpgradethree{ I (\spacegrad \Bb) } \\
&= \gpgradethree{ I (\spacegrad \cdot \Bb + \spacegrad \wedge \Bb) } \\
&= I (\spacegrad \cdot \Bb).
\end{aligned}
\end{equation}
The equivalence relationships that we have developed must then imply that the differential forms representation of this relationship is
\begin{equation}\label{eqn:formAndCurl:440}
d B = dx_1 \wedge dx_2 \wedge dx_3 (\spacegrad \cdot \Bb)
= dx \wedge dy \wedge dz \lr{ \PD{x}{b_1} + \PD{y}{b_2} + \PD{z}{b_3} },
\end{equation}
as defined in the article.

Here is the GA equivalent of Lemma 4.4.10 from the article

Lemma 1.5: Repeated curl identities.

Let \( A \) be a smooth k-blade, then
\begin{equation*}
\spacegrad \wedge \lr{ \spacegrad \wedge A } = 0.
\end{equation*}
For \(\mathbb{R}^3\), this result can be stated, for a scalar function \( f \) and a vector function \( \Bf \), in terms of the cross product, as
\begin{equation}\label{eqn:formAndCurl:560}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad f } &= 0 \\
\spacegrad \cdot \lr{ \spacegrad \cross \Bf } &= 0.
\end{aligned}
\end{equation}

It shouldn’t be surprising that this is the equivalent of \( d^2 A = 0 \) from differential forms. Let’s prove this, first considering the 0-blade case

Start proof:

\begin{equation}\label{eqn:formAndCurl:480}
\begin{aligned}
\spacegrad \wedge \lr{ \spacegrad \wedge A }
&=
\spacegrad \wedge \lr{ \spacegrad A } \\
&=
\sum_{ij} \Be_i \wedge \Be_j \frac{\partial^2 A}{\partial x_i \partial x_j} \\
&= 0.
\end{aligned}
\end{equation}
The smoothness criterion for the function \( A \) is assumed to imply that we have equality of mixed partials, and since each term in the sum is a product of a factor that is antisymmetric in the indexes \( i, j \) (the wedge) and a factor that is symmetric in \( i, j \) (the partials), the sum is zero overall.

Now consider a k-blade \( A, k > 0 \). Expanding the gradients, we have
\begin{equation}\label{eqn:formAndCurl:500}
\spacegrad \wedge \lr{ \spacegrad \wedge A }
=
\sum_{ij} \Be_i \wedge \Be_j \wedge \frac{\partial^2 A}{\partial x_i \partial x_j}.
\end{equation}
It may be obvious that this is zero for the same reasons as above (sum of product of symmetric and antisymmetric entities). We can, however, make it more obvious, at the cost of some hellish indexing, by expressing \( A \) in coordinate form. Let
\begin{equation}\label{eqn:formAndCurl:520}
A = \sum_{i_1, i_2, \cdots, i_k}
A_{i_1, i_2, \cdots, i_k} \Be_{i_1} \wedge \Be_{i_2} \wedge \cdots \wedge \Be_{i_k},
\end{equation}
then
\begin{equation}\label{eqn:formAndCurl:540}
\begin{aligned}
\spacegrad \wedge \lr{ \spacegrad \wedge A }
&=
\sum_{i,j,i_1, i_2, \cdots, i_k} \Be_i \wedge \Be_j \wedge \Be_{i_1} \wedge \Be_{i_2} \wedge \cdots \wedge \Be_{i_k}
\frac{\partial^2 }{\partial x_i \partial x_j} A_{i_1, i_2, \cdots, i_k} \\
&= 0.
\end{aligned}
\end{equation}
Now we clearly have a sum over products of an antisymmetric factor (the wedges) and a symmetric factor (assuming smooth \( A \) implies equality of mixed partials), so the sum is zero.

Finally, for the \(\mathbb{R}^3\) identities, we have
\begin{equation}\label{eqn:formAndCurl:580}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad f}
&=
-I \lr{ \spacegrad \wedge \lr{ \spacegrad f } } \\
&=
0,
\end{aligned}
\end{equation}
since \( \spacegrad \wedge \lr{ \spacegrad f } = 0 \). For a vector \( \Bf \), we have
\begin{equation}\label{eqn:formAndCurl:600}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \cross \Bf}
&=
\gpgradezero{
\spacegrad \lr{ \spacegrad \cross \Bf}
} \\
&=
\gpgradezero{
\spacegrad (-I) \lr{ \spacegrad \wedge \Bf}
} \\
&=
-\gpgradezero{
I \spacegrad \lr{ \spacegrad \wedge \Bf}
} \\
&=
-I \spacegrad \wedge \lr{ \spacegrad \wedge \Bf} \\
&= 0,
\end{aligned}
\end{equation}
again, because \( \spacegrad \wedge \lr{ \spacegrad \wedge \Bf} = 0 \).

End proof.
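
As a cross check (my addition, not part of the original post), both of these \(\mathbb{R}^3\) corollaries are easy to verify symbolically. Here is a little sympy.vector sketch that does so for arbitrarily chosen polynomial fields:

```python
from sympy import simplify
from sympy.vector import CoordSys3D, gradient, curl, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# arbitrary sample smooth fields
f = x**2 * y + y * z**3
F = x*y*N.i + y*z*N.j + z*x*N.k

# curl of a gradient is zero
assert all(simplify(curl(gradient(f)).dot(e)) == 0 for e in (N.i, N.j, N.k))

# divergence of a curl is zero
assert simplify(divergence(curl(F))) == 0
```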

Identities.

There are a number of chain rule identities in the article. Here is the GA equivalent, along with its corollaries.

Lemma 1.6: Chain rule identities.

Let \( f \) be a scalar function and \( A \) be a k-blade, then
\begin{equation*}
\spacegrad \lr{ f A } = \lr{ \spacegrad f } A + f \lr{ \spacegrad A }.
\end{equation*}
For \( A \) with grade \( k > 0 \), the grade \( k-1 \) and \( k+1 \) components of this product are
\begin{equation*}
\begin{aligned}
\spacegrad \cdot \lr{ f A } &= \lr{ \spacegrad f } \cdot A + f \lr{ \spacegrad \cdot A } \\
\spacegrad \wedge \lr{ f A } &= \lr{ \spacegrad f } \wedge A + f \lr{ \spacegrad \wedge A }.
\end{aligned}
\end{equation*}
For \(\mathbb{R}^3\), and a vector valued \( A \), the wedge product relation above can be written in dual (cross product) form as
\begin{equation*}
\spacegrad \cross \lr{ f A } = \lr{ \spacegrad f } \cross A + f \lr{ \spacegrad \cross A }.
\end{equation*}

Proving this is left to the reader.
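
Here is a quick symbolic spot check of the cross product form of this chain rule (my addition, using sympy.vector with arbitrary sample fields, for vector valued \( A \)):

```python
from sympy import simplify
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = x*y*z                        # sample scalar field
A = y*N.i + z*x*N.j + x**2*N.k   # sample vector field

lhs = curl(f * A)
rhs = gradient(f).cross(A) + f * curl(A)

# grad x (f A) = (grad f) x A + f (grad x A)
assert all(simplify((lhs - rhs).dot(e)) == 0 for e in (N.i, N.j, N.k))
```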

We have some chain rule identities left in the article to verify and find GA equivalents for. Before doing so, we need a couple of miscellaneous identities relating triple cross products to wedge-dots.

Lemma 1.7: Triple cross products.

Let \( \Ba, \Bb, \Bc \) be vectors in \(\mathbb{R}^3\). Then
\begin{equation*}
\begin{aligned}
\Ba \cross \lr{ \Bb \cross \Bc } &= – \Ba \cdot \lr{ \Bb \wedge \Bc } \\
\lr{ \Ba \cross \Bb } \cross \Bc &= – \lr{ \Ba \wedge \Bb } \cdot \Bc.
\end{aligned}
\end{equation*}

Start proof:

\begin{equation}\label{eqn:formAndCurl:720}
\begin{aligned}
\Ba \cross \lr{ \Bb \cross \Bc }
&=
\gpgradeone{ -I \lr{ \Ba \wedge \lr{ \Bb \cross \Bc } } } \\
&=
\gpgradeone{ -I \lr{ \Ba \lr{ \Bb \cross \Bc } } } \\
&=
\gpgradeone{ (-I)^2 \lr{ \Ba \lr{ \Bb \wedge \Bc } } } \\
&=
-\Ba \cdot \lr{ \Bb \wedge \Bc },
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:formAndCurl:740}
\begin{aligned}
\lr{ \Ba \cross \Bb } \cross \Bc
&=
\gpgradeone{ -I \lr{ \Ba \cross \Bb } \wedge \Bc } \\
&=
\gpgradeone{ -I \lr{ \Ba \cross \Bb } \Bc } \\
&=
\gpgradeone{ (-I)^2 \lr{ \Ba \wedge \Bb } \Bc } \\
&=
– \lr{ \Ba \wedge \Bb } \cdot \Bc.
\end{aligned}
\end{equation}

End proof.
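
Since sympy.vector has no wedge product, a spot check of the first of these has to use the known expansion \( \Ba \cdot \lr{ \Bb \wedge \Bc } = \lr{ \Ba \cdot \Bb } \Bc - \lr{ \Ba \cdot \Bc } \Bb \). With that substitution, the lemma is just the BAC-CAB rule, which is easy to verify (my addition, a sketch only):

```python
from sympy import symbols, simplify
from sympy.vector import CoordSys3D

N = CoordSys3D('N')
a1, a2, a3, b1, b2, b3, c1, c2, c3 = symbols('a1:4 b1:4 c1:4')

a = a1*N.i + a2*N.j + a3*N.k
b = b1*N.i + b2*N.j + b3*N.k
c = c1*N.i + c2*N.j + c3*N.k

# a x (b x c) = -a . (b ^ c), where a . (b ^ c) = (a . b) c - (a . c) b
lhs = a.cross(b.cross(c))
rhs = -((a.dot(b))*c - (a.dot(c))*b)

assert all(simplify((lhs - rhs).dot(e)) == 0 for e in (N.i, N.j, N.k))
```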

Next up is another chain rule identity

Lemma 1.8: Gradient of dot product.

If \( \Ba, \Bb \) are vectors, then
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb } =
\lr{ \Ba \cdot \spacegrad } \Bb
+
\lr{ \Bb \cdot \spacegrad } \Ba
+
\lr{ \spacegrad \wedge \Bb }
\cdot
\Ba
+
\lr{ \spacegrad \wedge \Ba }
\cdot
\Bb
\end{equation*}
For \(\mathbb{R}^3\), this can be written as
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb }
=
\lr{ \Ba \cdot \spacegrad } \Bb
+
\lr{ \Bb \cdot \spacegrad } \Ba
+
\Ba \cross \lr{ \spacegrad \cross \Bb }
+
\Bb \cross \lr{ \spacegrad \cross \Ba }
\end{equation*}

Start proof:

We will use \( \rspacegrad \) to indicate that the gradient operates on everything to the right, \( \lrspacegrad \) to indicate that the gradient operates bidirectionally, and \( \spacegrad’ A B’ \) to indicate that the gradient’s scope is limited to the ticked entity (just on \( B \) in this case.)
\begin{equation}\label{eqn:formAndCurl:760}
\begin{aligned}
\rspacegrad \lr{ \Ba \cdot \Bb }
&=
\gpgradeone{
\rspacegrad \lr{ \Ba \Bb – \Ba \wedge \Bb }
} \\
&=
\gpgradeone{
\spacegrad’ \Ba’ \Bb
+
\spacegrad’ \Ba \Bb’
}
– \rspacegrad \cdot \lr{ \Ba \wedge \Bb }
\\
&=
\lr{ \spacegrad \cdot \Ba} \Bb
+
\lr{ \spacegrad \wedge \Ba} \cdot \Bb
+
\gpgradeone{
– \Ba \spacegrad \Bb + 2 \lr{ \Ba \cdot \spacegrad } \Bb
}
– \spacegrad’ \cdot \lr{ \Ba’ \wedge \Bb }
– \spacegrad’ \cdot \lr{ \Ba \wedge \Bb’ }
\\
&=
\lr{ \spacegrad \cdot \Ba} \Bb
+
\lr{ \spacegrad \wedge \Ba} \cdot \Bb
-
\Ba \lr{ \spacegrad \cdot \Bb }
-
\Ba \cdot \lr{ \spacegrad \wedge \Bb }
+ 2 \lr{ \Ba \cdot \spacegrad } \Bb
– \spacegrad’ \cdot \lr{ \Ba’ \wedge \Bb }
– \spacegrad’ \cdot \lr{ \Ba \wedge \Bb’ }.
\end{aligned}
\end{equation}
We are running out of room, and have not had any cancellation yet, so let’s expand those last two terms separately
\begin{equation}\label{eqn:formAndCurl:780}
\begin{aligned}
– \spacegrad’ \cdot \lr{ \Ba’ \wedge \Bb }
– \spacegrad’ \cdot \lr{ \Ba \wedge \Bb’ }
&=
– \lr{ \spacegrad’ \cdot \Ba’ } \Bb
+ \lr{ \spacegrad’ \cdot \Bb } \Ba’
– \lr{ \spacegrad’ \cdot \Ba } \Bb’
+ \lr{ \spacegrad’ \cdot \Bb’ } \Ba
\\
&=
– \lr{ \spacegrad \cdot \Ba } \Bb
+ \lr{ \Bb \cdot \spacegrad } \Ba
– \lr{ \Ba \cdot \spacegrad } \Bb
+ \lr{ \spacegrad \cdot \Bb } \Ba.
\end{aligned}
\end{equation}
Now we can cancel some terms, leaving
\begin{equation}\label{eqn:formAndCurl:800}
\begin{aligned}
\rspacegrad \lr{ \Ba \cdot \Bb }
&=
\lr{ \spacegrad \wedge \Ba} \cdot \Bb
-
\Ba \cdot \lr{ \spacegrad \wedge \Bb }
+ \lr{ \Ba \cdot \spacegrad } \Bb
+ \lr{ \Bb \cdot \spacegrad } \Ba.
\end{aligned}
\end{equation}
After adjusting the order and sign of the second term, we see that this is the result we wanted. To show the \(\mathbb{R}^3\) formulation, we need only apply “Lemma: Triple cross products”.

End proof.
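
Here is a symbolic spot check of the \(\mathbb{R}^3\) form (my addition; sympy.vector has no built-in \( \lr{ \Ba \cdot \spacegrad } \Bb \) operator, so the little adv helper below expands it in components for arbitrary sample fields):

```python
from sympy import simplify
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

a = x*y*N.i + y*z*N.j + z**2*N.k
b = z*N.i + x**2*N.j + x*y*N.k

def comp(v):
    return (v.dot(N.i), v.dot(N.j), v.dot(N.k))

def adv(u, v):
    """Directional derivative (u . grad) v, expanded in components."""
    ux, uy, uz = comp(u)
    d = lambda s: ux*s.diff(x) + uy*s.diff(y) + uz*s.diff(z)
    vx, vy, vz = comp(v)
    return d(vx)*N.i + d(vy)*N.j + d(vz)*N.k

lhs = gradient(a.dot(b))
rhs = adv(a, b) + adv(b, a) + a.cross(curl(b)) + b.cross(curl(a))

assert all(simplify((lhs - rhs).dot(e)) == 0 for e in (N.i, N.j, N.k))
```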

Lemma 1.9: Divergence of a bivector.

Let \( \Ba, \Bb \in \mathbb{R}^N \) be vectors. The divergence of their wedge can be written
\begin{equation*}
\spacegrad \cdot \lr{ \Ba \wedge \Bb }
=
\Bb \lr{ \spacegrad \cdot \Ba }
– \Ba \lr{ \spacegrad \cdot \Bb }
– \lr{ \Bb \cdot \spacegrad } \Ba
+ \lr{ \Ba \cdot \spacegrad } \Bb.
\end{equation*}
For \(\mathbb{R}^3\), this can also be written in triple cross product form
\begin{equation*}
\spacegrad \cdot \lr{ \Ba \wedge \Bb }
=
-\spacegrad \cross \lr{ \Ba \cross \Bb }.
\end{equation*}

Start proof:

\begin{equation}\label{eqn:formAndCurl:860}
\begin{aligned}
\rspacegrad \cdot \lr{ \Ba \wedge \Bb }
&=
\spacegrad’ \cdot \lr{ \Ba’ \wedge \Bb }
+ \spacegrad’ \cdot \lr{ \Ba \wedge \Bb’ } \\
&=
\lr{ \spacegrad’ \cdot \Ba’ } \Bb
– \lr{ \spacegrad’ \cdot \Bb } \Ba’
+ \lr{ \spacegrad’ \cdot \Ba } \Bb’
– \lr{ \spacegrad’ \cdot \Bb’ } \Ba
\\
&=
\lr{ \spacegrad \cdot \Ba } \Bb
– \lr{ \Bb \cdot \spacegrad } \Ba
+ \lr{ \Ba \cdot \spacegrad } \Bb
– \lr{ \spacegrad \cdot \Bb } \Ba.
\end{aligned}
\end{equation}
For the \(\mathbb{R}^3\) part of the story, we have
\begin{equation}\label{eqn:formAndCurl:870}
\begin{aligned}
\spacegrad \cross \lr{ \Ba \cross \Bb }
&=
\gpgradeone{
-I \lr{ \spacegrad \wedge \lr{ \Ba \cross \Bb } }
} \\
&=
\gpgradeone{
-I \spacegrad \lr{ \Ba \cross \Bb }
} \\
&=
\gpgradeone{
(-I)^2 \spacegrad \lr{ \Ba \wedge \Bb }
} \\
&=
-\spacegrad \cdot \lr{ \Ba \wedge \Bb }.
\end{aligned}
\end{equation}

End proof.
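
As with the previous lemma, a sympy.vector spot check has to work with the dual (cross product) statement. The following sketch (my addition) verifies that \( -\spacegrad \cross \lr{ \Ba \cross \Bb } \) matches the four term expansion of \( \spacegrad \cdot \lr{ \Ba \wedge \Bb } \) for sample fields:

```python
from sympy import simplify
from sympy.vector import CoordSys3D, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

a = x*z*N.i + y**2*N.j + x*y*N.k
b = y*N.i + z*x*N.j + z**2*N.k

def comp(v):
    return (v.dot(N.i), v.dot(N.j), v.dot(N.k))

def adv(u, v):
    """(u . grad) v, expanded in components."""
    ux, uy, uz = comp(u)
    d = lambda s: ux*s.diff(x) + uy*s.diff(y) + uz*s.diff(z)
    vx, vy, vz = comp(v)
    return d(vx)*N.i + d(vy)*N.j + d(vz)*N.k

# div(a ^ b) = b div(a) - a div(b) - (b . grad) a + (a . grad) b = -curl(a x b)
wedge_div = divergence(a)*b - divergence(b)*a - adv(b, a) + adv(a, b)
lhs = -curl(a.cross(b))

assert all(simplify((lhs - wedge_div).dot(e)) == 0 for e in (N.i, N.j, N.k))
```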

We have just one identity left in the article to find the GA equivalent of, but will split that into two logical pieces.

Lemma 1.10: Dual of triple wedge.

If \( \Ba, \Bb, \Bc \in \mathbb{R}^3 \) are vectors, then
\begin{equation*}
\Ba \cdot \lr{ \Bb \cross \Bc } = -I \lr{ \Ba \wedge \Bb \wedge \Bc }.
\end{equation*}

Start proof:

\begin{equation}\label{eqn:formAndCurl:680}
\begin{aligned}
\Ba \cdot \lr{ \Bb \cross \Bc }
&=
\gpgradezero{
\Ba \lr{ \Bb \cross \Bc }
} \\
&=
\gpgradezero{
\Ba (-I) \lr{ \Bb \wedge \Bc }
} \\
&=
\gpgradezero{
-I \lr{
\Ba \cdot \lr{ \Bb \wedge \Bc }
+
\Ba \wedge \lr{ \Bb \wedge \Bc }
}
} \\
&=
\gpgradezero{
-I \lr{ \Ba \wedge \lr{ \Bb \wedge \Bc } }
} \\
&=
-I \lr{ \Ba \wedge \lr{ \Bb \wedge \Bc } }.
\end{aligned}
\end{equation}

End proof.
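
Since \( \Ba \wedge \Bb \wedge \Bc \) is \( I \) times the determinant of the coordinate matrix, the lemma says that \( \Ba \cdot \lr{ \Bb \cross \Bc } \) is exactly that determinant. That reduction is easy to check symbolically (my addition, a sketch only):

```python
from sympy import symbols, simplify, Matrix
from sympy.vector import CoordSys3D

N = CoordSys3D('N')
a1, a2, a3, b1, b2, b3, c1, c2, c3 = symbols('a1:4 b1:4 c1:4')

a = a1*N.i + a2*N.j + a3*N.k
b = b1*N.i + b2*N.j + b3*N.k
c = c1*N.i + c2*N.j + c3*N.k

# a ^ b ^ c = det([a; b; c]) I, so the lemma reduces to the scalar triple product
det = Matrix([[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]]).det()

assert simplify(a.dot(b.cross(c)) - det) == 0
```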

Lemma 1.11: Curl of a wedge of gradients (divergence of a cross product of gradients.)

Let \( f, g, h \) be smooth functions with smooth derivatives. Then
\begin{equation*}
\spacegrad \wedge \lr{ f \lr{ \spacegrad g \wedge \spacegrad h } }
=
\spacegrad f
\wedge
\spacegrad g
\wedge
\spacegrad h.
\end{equation*}
For \(\mathbb{R}^3\) this can be written as
\begin{equation*}
\spacegrad \cdot \lr{ f \lr{ \spacegrad g \cross \spacegrad h } }
=
\spacegrad f
\cdot
\lr{
\spacegrad g
\cross
\spacegrad h
}.
\end{equation*}

Start proof:

The GA identity follows by chain rule and application of “Lemma: Repeated curl identities”.
\begin{equation}\label{eqn:formAndCurl:910}
\begin{aligned}
\spacegrad \wedge \lr{ f \lr{ \spacegrad g \wedge \spacegrad h } }
&=
\spacegrad f \wedge \lr{ \spacegrad g \wedge \spacegrad h }
+
f
\spacegrad \wedge \lr{ \spacegrad g \wedge \spacegrad h } \\
&=
\spacegrad f \wedge \spacegrad g \wedge \spacegrad h.
\end{aligned}
\end{equation}
The \(\mathbb{R}^3\) part of the lemma follows from “Lemma: Dual of triple wedge”, applied twice
\begin{equation}\label{eqn:formAndCurl:970}
\begin{aligned}
\spacegrad \cdot \lr{ (f \spacegrad g) \cross \spacegrad h }
&=
-I \lr{ \spacegrad \wedge (f \spacegrad g) \wedge \spacegrad h } \\
&=
-I \lr{ \spacegrad f \wedge \spacegrad g \wedge \spacegrad h } \\
&=
\spacegrad f \cdot \lr{ \spacegrad g \cross \spacegrad h}.
\end{aligned}
\end{equation}

End proof.
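
Here is a sympy.vector spot check of the \(\mathbb{R}^3\) form of this lemma for arbitrary polynomial \( f, g, h \) (my addition):

```python
from sympy import simplify
from sympy.vector import CoordSys3D, gradient, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

f = x*y + z**2
g = x**2 * z
h = y*z + x

lhs = divergence(f * gradient(g).cross(gradient(h)))
rhs = gradient(f).dot(gradient(g).cross(gradient(h)))

assert simplify(lhs - rhs) == 0
```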

Lemma 1.12: Curl of a bivector.

Let \( \Ba, \Bb \) be vectors. The curl of their wedge is
\begin{equation*}
\spacegrad \wedge \lr{ \Ba \wedge \Bb } = \Bb \wedge \lr{ \spacegrad \wedge \Ba } – \Ba \wedge \lr{ \spacegrad \wedge \Bb }
\end{equation*}
For \(\mathbb{R}^3\), this can be expressed as the divergence of a cross product
\begin{equation*}
\spacegrad \cdot \lr{ \Ba \cross \Bb } = \Bb \cdot \lr{ \spacegrad \cross \Ba } – \Ba \cdot \lr{ \spacegrad \cross \Bb }
\end{equation*}

Start proof:

The GA case is a trivial chain rule application
\begin{equation}\label{eqn:formAndCurl:950}
\begin{aligned}
\rspacegrad \wedge \lr{ \Ba \wedge \Bb }
&=
\lr{ \spacegrad’ \wedge \Ba’} \wedge \Bb
+
\lr{ \spacegrad’ \wedge \Ba } \wedge \Bb’ \\
&= \Bb \wedge \lr{ \spacegrad \wedge \Ba } – \Ba \wedge \lr{ \spacegrad \wedge \Bb }.
\end{aligned}
\end{equation}
The \(\mathbb{R}^3\) case is less obvious by inspection, but follows from “Lemma: Dual of triple wedge”.

End proof.
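
Finally, here is a sympy.vector spot check of that cross product form (my addition, with arbitrary sample fields):

```python
from sympy import simplify
from sympy.vector import CoordSys3D, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

a = y*z*N.i + x**2*N.j + z*N.k
b = x*N.i + y*z*N.j + x*y*N.k

# div(a x b) = b . curl(a) - a . curl(b)
lhs = divergence(a.cross(b))
rhs = b.dot(curl(a)) - a.dot(curl(b))

assert simplify(lhs - rhs) == 0
```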

Summary.

We found that we have an isomorphism between the exterior derivative of differential forms and the curl operation of geometric algebra, as follows
\begin{equation}\label{eqn:formAndCurl:990}
\begin{aligned}
df &\rightleftharpoons \spacegrad \wedge f \\
dx_i &\rightleftharpoons \Be_i.
\end{aligned}
\end{equation}
We didn’t look at how the Hodge dual translates to GA duality (pseudoscalar multiplication.) The relationship between the exterior derivative of an \(\mathbb{R}^3\) two-form and the divergence really requires that formalism, and has only been examined in a cursory fashion.

We also translated a number of vector and gradient identities from conventional vector algebra (i.e., using cross products) into their wedge product equivalents. The GA identities are often simpler, and in some cases provide nice mechanisms to derive conventional identities that would be more cumbersome to determine without the GA toolbox.

References

[1] Vincent Bouchard. Math 215: Calculus iv: 4.4 the exterior derivative and vector calculus, 2023. URL https://sites.ualberta.ca/ vbouchar/MATH215/section_exterior_vector.html. [Online; accessed 11-November-2023].

Gauge freedom and four-potentials in the STA form of Maxwell’s equation.

March 27, 2022 math and physics play

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

Motivation.

In a recent video on the tensor structure of Maxwell’s equation, I made a little side trip down the road of potential solutions and gauge transformations. I thought that was worth writing up in text form.

The initial point of that side trip was just to point out that the Faraday tensor can be expressed in terms of four potential coordinates
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:20}
F_{\mu\nu} = \partial_\mu A_\nu – \partial_\nu A_\mu,
\end{equation}
but before I got there I tried to motivate this. In this post, I’ll outline the same ideas.

STA representation of Maxwell’s equation.

We’d gone through the work to show that Maxwell’s equation has the STA form
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:40}
\grad F = J.
\end{equation}
This is a deceptively compact representation, as it requires all of the following definitions
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:60}
\grad = \gamma^\mu \partial_\mu = \gamma_\mu \partial^\mu,
\end{equation}
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:80}
\partial_\mu = \PD{x^\mu}{},
\end{equation}
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:100}
\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu,
\end{equation}
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:160}
\gamma_\mu \cdot \gamma_\nu = g_{\mu\nu},
\end{equation}
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:120}
\begin{aligned}
F
&= \BE + I c \BB \\
&= -E^k \gamma^k \gamma^0 – \inv{2} c B^r \gamma^s \gamma^t \epsilon^{r s t} \\
&= \inv{2} \gamma^{\mu} \wedge \gamma^{\nu} F_{\mu\nu},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:140}
\begin{aligned}
J &= \gamma_\mu J^\mu \\
&= \frac{\rho}{\epsilon} \gamma_0 + \eta \lr{ \BJ \cdot \Be_k } \gamma_k.
\end{aligned}
\end{equation}

Four-potentials in the STA representation.

In order to find the tensor form of Maxwell’s equation (starting from the STA representation), we first split the equation into two, since
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:180}
\grad F = \grad \cdot F + \grad \wedge F = J.
\end{equation}
The dot product is a four-vector, the wedge term is a trivector, and the current is a four-vector, so we have one grade-1 equation and one grade-3 equation
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:200}
\begin{aligned}
\grad \cdot F &= J \\
\grad \wedge F &= 0.
\end{aligned}
\end{equation}
The potential comes into the mix, since the curl equation above means that \( F \) necessarily can be written as the curl of some four-vector
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:220}
F = \grad \wedge A.
\end{equation}
One justification of this is that \( a \wedge (a \wedge b) = 0 \), for any vectors \( a, b \). Expanding such a double-curl out in coordinates is also worthwhile
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:240}
\begin{aligned}
\grad \wedge \lr{ \grad \wedge A }
&=
\lr{ \gamma_\mu \partial^\mu }
\wedge
\lr{ \gamma_\nu \partial^\nu }
\wedge
A \\
&=
\gamma^\mu \wedge \gamma^\nu \wedge \lr{ \partial_\mu \partial_\nu A }.
\end{aligned}
\end{equation}
Provided we have equality of mixed partials, this is a product of an antisymmetric factor and a symmetric factor, so the full sum is zero.
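
As an aside (my addition, not part of the original post), the same sort of coordinate expansion applied to \( F = \grad \wedge A \) itself recovers the tensor components \ref{eqn:gaugeFreedomAndPotentialsMaxwell:20} that motivated this discussion
\begin{equation*}
\begin{aligned}
F = \grad \wedge A
&=
\lr{ \gamma^\mu \partial_\mu } \wedge \lr{ \gamma^\nu A_\nu } \\
&=
\gamma^\mu \wedge \gamma^\nu \partial_\mu A_\nu \\
&=
\inv{2} \gamma^\mu \wedge \gamma^\nu \lr{ \partial_\mu A_\nu - \partial_\nu A_\mu },
\end{aligned}
\end{equation*}
where the last step splits the sum in half, swaps the dummy indexes in one of the halves, and uses the antisymmetry \( \gamma^\nu \wedge \gamma^\mu = -\gamma^\mu \wedge \gamma^\nu \). Comparing to \ref{eqn:gaugeFreedomAndPotentialsMaxwell:120} shows that the tensor \( F_{\mu\nu} \) is just the set of coordinates of the bivector \( \grad \wedge A \).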

Things get interesting if one imposes a \( \grad \cdot A = \partial_\mu A^\mu = 0 \) constraint on the potential. If we do so, then
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:260}
\grad F = \grad^2 A = J.
\end{equation}
Observe that \( \grad^2 \) is the wave equation operator (often written as a square-box symbol.) That is
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:280}
\begin{aligned}
\grad^2
&= \partial^\mu \partial_\mu \\
&= \partial_0 \partial_0
– \partial_1 \partial_1
– \partial_2 \partial_2
– \partial_3 \partial_3 \\
&= \inv{c^2} \PDSq{t}{} – \spacegrad^2.
\end{aligned}
\end{equation}
This is also an operator for which the Green’s function is well known ([1]), which means that we can immediately write the solutions
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:300}
A(x) = \int G(x,x’) J(x’) d^4 x’.
\end{equation}
However, we have no a priori guarantee that such a solution has zero divergence. We can fix that by making a gauge transformation of the form
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:320}
A \rightarrow A – \grad \chi.
\end{equation}
Observe that such a transformation does not change the electromagnetic field
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:340}
F = \grad \wedge A \rightarrow \grad \wedge \lr{ A – \grad \chi },
\end{equation}
since
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:360}
\grad \wedge \grad \chi = 0,
\end{equation}
(also by equality of mixed partials.) Suppose that \( \tilde{A} \) is a solution of \( \grad^2 \tilde{A} = J \), and \( \tilde{A} = A + \grad \chi \), where \( A \) is a zero divergence field to be determined, then
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:380}
\grad \cdot \tilde{A}
=
\grad \cdot A + \grad^2 \chi,
\end{equation}
or
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:400}
\grad^2 \chi = \grad \cdot \tilde{A}.
\end{equation}
So if \( \tilde{A} \) does not have zero divergence, we can find a \( \chi \)
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:420}
\chi(x) = \int G(x,x’) \grad’ \cdot \tilde{A}(x’) d^4 x’,
\end{equation}
so that \( A = \tilde{A} – \grad \chi \) does have zero divergence.

References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Vector gradients in dyadic notation and geometric algebra: Part II.

March 6, 2022 math and physics play

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

Symmetrization and antisymmetrization of the vector differential in GA.

There was an error in yesterday’s post. This decomposition was correct:
\begin{equation}\label{eqn:dyadicVsGa:460}
d\Bv
= (d\Bx \cdot \spacegrad) \Bv
= d\Bx (\spacegrad \cdot \Bv)
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{equation}
However, identifying these terms with the symmetric and antisymmetric splits of \( \spacegrad \otimes \Bv \) was wrong.
Brian pointed out that a purely incompressible flow is one for which \(\spacegrad \cdot \Bv = 0\), yet, in general, an incompressible flow can have a non-zero deformation tensor.

Also, given the nature of the matrix expansion of the antisymmetric tensor, we should have had a curl term in the mix and we do not. The conclusion must be that \ref{eqn:dyadicVsGa:460} is a split into divergence and non-divergence terms, but we really wanted a split into curl and non-curl terms.

Symmetrization and antisymmetrization of the vector differential in GA: Take II.

Identification of \( \ifrac{1}{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger } \) with the divergence was incorrect.

Let’s expand our symmetric tensor component explicitly to see what it really yields, without guessing.
\begin{equation}\label{eqn:dyadicVsGa:480}
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
d\Bx \cdot
\inv{2}
\lr{
\begin{bmatrix}
\partial_i v_j
\end{bmatrix}
+
\begin{bmatrix}
\partial_j v_i
\end{bmatrix}
} \\
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix}.
\end{aligned}
\end{equation}
The symmetric matrix that represents this direct product tensor is
\begin{equation}\label{eqn:dyadicVsGa:500}
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
=
\inv{2}
\begin{bmatrix}
2 \partial_1 v_1 & \partial_1 v_2 + \partial_2 v_1 & \partial_1 v_3 + \partial_3 v_1 \\
\partial_2 v_1 + \partial_1 v_2 & 2 \partial_2 v_2 & \partial_2 v_3 + \partial_3 v_2 \\
\partial_3 v_1 + \partial_1 v_3 & \partial_3 v_2 + \partial_2 v_3 & 2 \partial_3 v_3 \\
\end{bmatrix}
.
\end{equation}
This certainly isn’t isomorphic to the divergence. Instead, the trace of this matrix is the portion that is isomorphic to the divergence. The rest is something else. Let’s put the tensors into vector form to understand what they really represent.

For the symmetric part we have
\begin{equation}\label{eqn:dyadicVsGa:520}
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix} \\
&=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
},
\end{aligned}
\end{equation}
and, similarly, for the antisymmetric tensor component, we have
\begin{equation}\label{eqn:dyadicVsGa:540}
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv – \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j –
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix} \\
&=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv – \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv}.
\end{aligned}
\end{equation}
We find an isomorphism of the antisymmetric term with the curl, but the symmetric term has a divergence component, plus more.
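
Here is a quick component-level check of that antisymmetric vector form, using sympy.vector with a constant symbolic \( d\Bx \) and an arbitrary sample polynomial flow (my addition; the field choice and helper code are just for illustration):

```python
from sympy import symbols, simplify, Rational
from sympy.vector import CoordSys3D, gradient, Vector

N = CoordSys3D('N')
coords = (N.x, N.y, N.z)
basis = (N.i, N.j, N.k)

dx1, dx2, dx3 = symbols('dx1 dx2 dx3')          # constant components of dx
dxc = (dx1, dx2, dx3)
dX = dx1*N.i + dx2*N.j + dx3*N.k

v = N.x*N.y*N.i + N.y*N.z*N.j + N.z*N.x*N.k     # sample flow field
vc = [v.dot(e) for e in basis]

# contraction with the antisymmetric tensor: sum_ij dx_i (1/2)(d_i v_j - d_j v_i) e_j
anti = Vector.zero
# (dx . grad) v, built up component by component
adv = Vector.zero
for i in range(3):
    for j in range(3):
        anti += Rational(1, 2) * dxc[i] * (vc[j].diff(coords[i]) - vc[i].diff(coords[j])) * basis[j]
        adv += dxc[i] * vc[j].diff(coords[i]) * basis[j]

# claimed vector form: (1/2)( (dx . grad) v - grad( dx . v ) )
vec_form = Rational(1, 2) * (adv - gradient(dX.dot(v)))

assert all(simplify((anti - vec_form).dot(e)) == 0 for e in basis)
```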

If we want to, we can split the symmetric component into its divergence and non-divergence terms, to get
\begin{equation}\label{eqn:dyadicVsGa:560}
\begin{aligned}
d\Bx \cdot \Bd
&=
\inv{2}
\lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \spacegrad \cdot \lr{ d\Bx \wedge \Bv } + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad \lr{ d\Bx \wedge \Bv } + \spacegrad \lr{ d\Bx \cdot \Bv } }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad d\Bx\, \Bv }
},
\end{aligned}
\end{equation}
so for incompressible flow, the GA representation is a single grade one selection
\begin{equation}\label{eqn:dyadicVsGa:600}
d\Bx \cdot \Bd = \inv{2} \gpgradeone{ \spacegrad d\Bx\, \Bv }.
\end{equation}
It is a little unfortunate that we cannot factor out the \( d\Bx \) term. We can do that for the
GA representation of the antisymmetric tensor contribution, which is just
\begin{equation}\label{eqn:dyadicVsGa:580}
\BOmega
=
\inv{2} \spacegrad \wedge \Bv.
\end{equation}

Let’s see what the antisymmetric tensor equivalent looks like in the incompressible case, by subtracting a divergence term
\begin{equation}\label{eqn:dyadicVsGa:680}
\begin{aligned}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv } – d\Bx \lr{ \spacegrad \cdot \Bv }
&=
\gpgradeone{ d\Bx \lr{ \spacegrad \wedge \Bv } – d\Bx \lr{ \spacegrad \cdot \Bv } } \\
&=
\gpgradeone{ -d\Bx \lr{ \Bv \wedge \spacegrad } – d\Bx \lr{ \Bv \cdot \spacegrad } } \\
&=
-\gpgradeone{ d\Bx \Bv \spacegrad },
\end{aligned}
\end{equation}
so we have
\begin{equation}\label{eqn:dyadicVsGa:700}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv } = d\Bx \lr{ \spacegrad \cdot \Bv } – \gpgradeone{ d\Bx\, \Bv \spacegrad }.
\end{equation}
Both the symmetric and antisymmetric tensors have compressible components.

Summary.

We found that it was possible to split the vector differential into divergence and incompressible components, as follows
\begin{equation}\label{eqn:dyadicVsGa:620}
\begin{aligned}
d\Bv
&= \lr{ d\Bx \cdot \spacegrad } \Bv \\
&= d\Bx (\spacegrad \cdot \Bv)
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{aligned}
\end{equation}

With
\begin{equation}\label{eqn:dyadicVsGa:720}
\begin{aligned}
d\Bv
&= d\Bx \cdot
\lr{
\inv{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
+
\inv{2} \lr{ \spacegrad \otimes \Bv – \lr{ \spacegrad \otimes \Bv }^\dagger }
} \\
&= d\Bx \cdot \lr{ \Bd + \BOmega },
\end{aligned}
\end{equation}
we found the following correspondences between the symmetric and antisymmetric tensor product components
\begin{equation}\label{eqn:dyadicVsGa:640}
\begin{aligned}
d\Bx \cdot \Bd &=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad d\Bx\, \Bv }
}
\end{aligned},
\end{equation}
and
\begin{equation}\label{eqn:dyadicVsGa:660}
\begin{aligned}
d\Bx \cdot \BOmega
&=
\inv{2} d\Bx \cdot \lr{ \spacegrad \wedge \Bv } \\
&=
\inv{2} \lr{
d\Bx \lr{ \spacegrad \cdot \Bv } – \gpgradeone{ d\Bx\, \Bv \spacegrad }
}.
\end{aligned}
\end{equation}

In the incompressible case where \( \spacegrad \cdot \Bv = 0 \), we have
\begin{equation}\label{eqn:dyadicVsGa:740}
\begin{aligned}
d\Bx \cdot \Bd &= \inv{2} \gpgradeone{ \spacegrad d\Bx\, \Bv } \\
d\Bx \cdot \BOmega &= -\inv{2} \gpgradeone{ d\Bx\, \Bv \spacegrad },
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:dyadicVsGa:760}
\begin{aligned}
d\Bv
&= d\Bx \cdot \lr{ \Bd + \BOmega } \\
&= \inv{2} \gpgradeone{ \spacegrad d\Bx\, \Bv – d\Bx\, \Bv \spacegrad } \\
&= \spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{aligned}
\end{equation}