Geometric Algebra

An absurd COBOL library: 2D Euclidean GA

December 31, 2023 COBOL, math and physics play

I’ve achieved a new pinnacle of obscurity, and have now written a rudimentary COBOL implementation of a geometric algebra library for \( \mathbb{R}^2 \) calculations.

Who will use this?  Absolutely nobody.  Effectively, nobody knows geometric algebra.  Nobody wants to know COBOL, but some know it anyway.  The intersection of those two groups is vanishingly small (probably exactly one person, as argued below.)

I understand that some Opus Dei members have taught themselves COBOL, since looking at COBOL has been found to be just as painful as a course of self-flagellation.

Figure 0. A flagellation representation of COBOL.

Assuming that no Opus Dei practitioners know geometric algebra, that means that there is exactly one person in the world who knows both COBOL and geometric algebra.  Me.

Why did I write this little library?  Well, I was tickled to write something so completely stupid, and I’ve been laughing at the absurdity of it. I also thought I might learn a few things about COBOL in the process of trying to use it for something slightly non-trivial.  I’m adept at writing simple test programs that exercise various obscure compiler features, but those are usually fairly small.  On the flip side of complexity, I have to debug through a number of horribly complicated customer programs as part of my compiler validation work.  A simple real life test scenario might run 100+ COBOL programs in a set of CICS transactions, executing thousands of EXEC DLI and EXEC CICS statements as well as all of the rest of the COBOL language statements!  Despite having gained familiarity with COBOL from that sort of observational use, walking through stuff in the debugger doesn’t provide the same level of comfort with the language as writing code from scratch.  Since I have no interest in simulating a boring business application, why not do something just for fun as a learning game.

The compiler I am using does not seem to support object-COBOL (which would have been nicely suited for this project), so I’ve written my little toy in conventional COBOL, using one external procedure for each type of mathematical operation.  In the huge set of customer COBOL code that I’ve examined and done test compilations of, none of it has used object-COBOL.  I am guessing that the object-COBOL community is as large as the user base for my little toy COBOL geometric algebra library will ever be.

I’ve implemented methods to construct multivectors with scalar, vector and pseudoscalar components, or a general multivector with all of the above.  I’ve also implemented multiply, add, subtract, scalar multiplication, grade selection, and a DISPLAY function to write a multivector to SYSOUT (stdout equivalent.)

The multivector “type”

Figure 1 shows the implementation of my multivector type, implemented in a copybook (include file) named MVI.  I have an alternate MV copybook that doesn’t have the VALUE (initialization) clauses, as you don’t want initialization for LINKAGE SECTION values (i.e.: program parameters.)

Figure 1. Copybook with multivector declaration and initialization.

If you are wondering what the hell a ‘PIC S9(9) USAGE IS COMP-5’ is, well, that’s the “easy to remember” way to declare a 32-bit signed integer in COBOL.  A COMP-2, on the other hand, is a floating point value.
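
For a rough idea of the layout, here’s a sketch of what such a copybook can look like.  This is a reconstruction based on the field names visible in the debugger session below, not the verbatim MVI copybook, and the :TAG: pseudo-text replacement is my assumption about how the A- and B- prefixes get generated:

      *    Sketch only, not the actual MVI copybook; reconstructed
      *    from the debugger output below.  The :TAG: pseudo-text is
      *    swapped for a prefix like A or B at each inclusion.
       01  :TAG:-MV.
           05  :TAG:-GRADE    PIC S9(9) USAGE IS COMP-5 VALUE 0.
           05  :TAG:-G02.
               10  :TAG:-SC   USAGE IS COMP-2 VALUE 0.
               10  :TAG:-PS   USAGE IS COMP-2 VALUE 0.
           05  :TAG:-G1.
               10  :TAG:-X    USAGE IS COMP-2 VALUE 0.
               10  :TAG:-Y    USAGE IS COMP-2 VALUE 0.

Each inclusion would then presumably look something like COPY MVI REPLACING ==:TAG:== BY ==A==., one COPY per multivector “instance”.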

Figure 2 shows an example of the use of this copybook:

Figure 2. Using the multivector copybook.

Figure 3 shows these two copybook declarations after preprocessor expansion:

Figure 3. Multivector global variable examples after preprocessing.

The global variable declarations above are roughly equivalent to the following pseudo C++ code (pretending that we can have anonymous unions that match the COBOL declarations above):

#include <complex>

using complex = std::complex<double>;

struct ga20{
   int grade{};
   union {
      struct { double sc{}; double ps{}; };
      complex g02{};
   };
   union { 
      struct { double x{}; double y{}; };
      complex g1{};
   };
};

ga20 a;
ga20 b;

COBOL is inherently untyped, but requires matching types for CALL parameters, or else all hell ensues, so you have to rely on naming conventions and other mechanisms to enforce the required type equivalences.  In this toy GA library, I’ve used copybooks to enforce the types required for everything.  Global variable declarations like these A-MV and B-MV variables are declared only using a copybook that knows the representation required, and all the uses in sub-programs of the effective -MV “type” use a matching copybook for their declarations.  However, I’ve also made use of the lack of typing to treat A-G02, B-G02, A-G1, and B-G1 as if they were complex numbers, and pass those “variables” off to complex number sub-programs, knowing that I’ve constructed the parameters to those programs in a way that is bit compatible with the MV field values.  You can screw things up really nicely doing stuff like this, especially because all COBOL sub-program parameters are (generally) passed by reference.  If you don’t match up the types right “fun ensues.”
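
For instance, something like the following (the sub-program name and result field here are made up for illustration) leans entirely on A-G02 and B-G02 having the byte layout that the complex number sub-program expects for its by-reference parameters:

      *    Hypothetical illustration (invented names).  A-G02 is not
      *    declared as a complex number anywhere, but its two COMP-2
      *    fields are bit compatible with one, so it can be passed by
      *    reference straight into a complex multiply sub-program.
           CALL CPX-MULT USING A-G02, B-G02, WS-RESULT-G02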

Also observe that the nested level specifiers are optional in COBOL.  For nested fields in C++, we might write a.g1.x.  With a nested variable like this in COBOL, we could write something equivalent to that, like:

A-X OF A-G1 OF A-MV

but we can leave out any of the intermediate “level” specifications if we want.  This gets really confusing in complicated real-life COBOL code.  If you are looking to see where something is modified, you have to not only look for the variable of interest, but also any of the higher level fields, since any of those could have been passed off to other code, which implicitly wrote the value you are interested in.

Here’s what one of these multivectors looks like in memory on my (Linux x86-64) system:

(lldb) c
Process 3903259 resuming
Process 3903259 stopped
* thread #10, name = 'GA20', stop reason = breakpoint 7.1
    frame #0: 0x00007fffd9189a02 PJOOT.GA20V01.LOADLIB(MULT).ec73dc4b`MULT at MULT.cob:50:1
   47              CALL GA-MKVECTOR-MODIFY USING C-MV, A-X, A-Y
   48              CALL GA-MKPSEUDO-MODIFY USING D-MV, A-PS
   49  
-> 50              MOVE 'A' TO WS-DISPPARM-N
   51              CALL GA-DISPLAY USING
   52                WS-DISPPARM-N,
   53                A-MV
(lldb) p A-MV
(A-MV) A-MV = {
  A-GRADE = -1
  A-G02 = (A-SC = 1, A-PS = 4)
  A-G1 = (A-X = 2, A-Y = 3)
}

i.e.: this has the value \( 1 + 2 \mathbf{e}_1 + 3 \mathbf{e}_2 + 4 \mathbf{e}_{12} \).

Looking at the multivector in its hex representation:

(lldb) fr v -format x A-MV
(A-MV) A-MV = {
  A-GRADE = 0xffffffff
  A-G02 = {
    A-SC = 0x3ff0000000000000
    A-PS = 0x4010000000000000
  }
  A-G1 = {
    A-X = 0x4000000000000000
    A-Y = 0x4008000000000000
  }
}

we see that the debugger is showing an underlying IEEE floating point representation for the COMP-2 variables in the program as it was compiled.

I have a multivector print routine that prints multivectors to SYSOUT:

Figure 4. Calling the multivector DISPLAY function.

where WS-DISPPARM-N is a PIC X(20) (i.e.: a fixed-size character array.)  Output for the A-MV value shown in the debug session above looks like:

A                     ( .10000000000000000E 01)                                                                         
                    + ( .20000000000000000E 01) e_1 + ( .30000000000000000E 01) e_2                                     
                    + ( .40000000000000000E 01) e_{12}            

End of sentence required for nested IFs?

I encountered a curious language issue in my multivector multiply function.  Here’s an example of how I’ve been coding IF statements:

Figure 5. An IF END-IF pair without a period to terminate the sentence.

Notice that I don’t do anything special between the END-IF and the statement that follows it.  However, if I have an IF statement that includes nested IF END-IFs, then it appears that I need a period after the final END-IF, like so:

Figure 6. An IF with nested conditions that seems to require a period to terminate the sentence.
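
Roughly, the shape in question is something like this (hypothetical conditions and statements, not the code from the figure); the important part is the period after the outermost END-IF:

      *    Hypothetical example, not the figure's code: a nested IF
      *    whose outermost END-IF needs a period to end the sentence.
           IF A-GRADE = 0
              MOVE A-SC TO WS-SCALAR-RESULT
           ELSE
              IF A-GRADE = 1
                 PERFORM DISPLAY-VECTOR-PART
              ELSE
                 PERFORM DISPLAY-ALL-GRADES
              END-IF
           END-IF.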

If I didn’t include that period after the final END-IF (ending the COBOL sentence), then in some circumstances, I saw the program exit after the last interior basic block within this nested IF was executed.  In COBOL parlance, it seems as if a GOBACK (i.e.: return) was implicitly executed once we fell out of the big nested IF.  Why is that period required for a nested IF, but not for a simple IF?

In my copy of “Murach’s mainframe COBOL”, the author ends ALL IF statements with a period, even simple IFs.  I don’t see a rationale for that anywhere in the book, but it’s a ~700 page book, so perhaps he says why at some point.

I’ve asked our compiler guys if this is a bug or expected behaviour, but I am guessing the latter…. I just don’t know why.

The multiplication kernel for this library

The workhorse of this GA(2,0) implementation is a multivector multiplication operation, which can be implemented in two lines in Mathematica (or C++):

multivector /: multivector[_, m1_, m2_] ** multivector[_, n1_, n2_] := 
   multivector[-1, m1 n1 + Conjugate[m2] n2, n1 m2 + Conjugate[m1] n2 ]
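
For reference, here’s a quick sketch of why a pair of complex numbers is enough to encode this product.  Write a GA(2,0) multivector as \( M = z_1 + \mathbf{e}_1 z_2 \), where \( z_1 \) carries the scalar and pseudoscalar parts, \( z_2 \) encodes the vector part, and both live in the “complex plane” spanned by \( 1, \mathbf{e}_{12} \).  Using the conjugation rule \( \mathbf{e}_1 z = \bar{z} \mathbf{e}_1 \) (conjugation flips the sign of the \( \mathbf{e}_{12} \) component), the product of two such multivectors is \( M N = (z_1 + \mathbf{e}_1 z_2)(w_1 + \mathbf{e}_1 w_2) = ( z_1 w_1 + \bar{z}_2 w_2 ) + \mathbf{e}_1 ( \bar{z}_1 w_2 + z_2 w_1 ) \), which is exactly the two-line rule above.  The leading \( -1 \) in the Mathematica rule is presumably just the grade tag for a general multivector, matching the A-GRADE = -1 value seen in the debug session earlier.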

In COBOL, it takes a lot more, and as usual, COBOL verbosity obfuscates things considerably. Here’s the equivalent code in my library:

Figure 7. GA(2,0) multiplication kernel in COBOL.

The library and a little test program.

If you are curious, you can poke around in the code for this library and the test program on github.  The sample/test program is src/MULT.cob, and running the job gives the following SYSOUT:

Figure 8. Sample SYSOUT for MULT.cob

Potentials for multivector Maxwell’s equation (again.)

December 8, 2023 math and physics play

[Click here for the PDF version of this post.]

Motivation.

This revisits my last blog post, where I covered this content in a meandering fashion. This is an attempt to re-express it in a more compact form, in particular, in a form that is amenable to inclusion in my book. When I wrote the potential section of my book, I cheated, and didn’t try to motivate the results. My cheat was figuring out the multivector potential representation starting with STA, where things are simpler, and then translating it back to a multivector representation, instead of figuring out a reasonable way to motivate things from the foundation already laid.

I’d like to eventually have a less rushed treatment of potentials in my book, where the results are not pulled out of a magic hat. Here is an attempted step in that direction. I’ve opted to put some of the motivational material in problems (with solutions at the chapter end.)

Multivector potentials.

We know from conventional electromagnetism (given no fictitious magnetic sources) that we can represent the six components of the electric and magnetic fields in terms of four scalar fields
\begin{equation}\label{eqn:mvpotentials:80}
\begin{aligned}
\BE &= -\spacegrad \phi - \PD{t}{\BA} \\
\BH &= \inv{\mu} \spacegrad \cross \BA.
\end{aligned}
\end{equation}
The conventional way of constructing these potentials makes use of the identities
\begin{equation}\label{eqn:mvpotentials:60}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \cross \BA } &= 0 \\
\spacegrad \cross \lr{ \spacegrad \phi } &= 0,
\end{aligned}
\end{equation}
applying those to the source free Maxwell’s equations to find representations of \( \BE, \BH \) that automatically satisfy those equations. For that conventional analysis, see section 18-6 [2] (available online), or section 10.1 [3], or section 6.4 [4]. We can also find such a potential representation using geometric algebra methods that are cross product free (problem 1.)

For Maxwell’s equations with fictitious magnetic sources, it can be shown that a potential representation of the field
\begin{equation}\label{eqn:mvpotentials:100}
\begin{aligned}
\BH &= -\spacegrad \phi_m - \PD{t}{\BF} \\
\BE &= -\inv{\epsilon} \spacegrad \cross \BF.
\end{aligned}
\end{equation}
satisfies the source-free grades of Maxwell’s equation.
See [1], and [5] for such derivations. As with the conventional source potentials, we can also apply our geometric algebra toolbox to easily find these results (problem 2.)

We have a mix of time partials and curls that is reminiscent of Maxwell’s equation itself. It’s natural to wonder whether there is a more coherent integrated form for the potential. This is in fact the case.

Lemma 1.1: Multivector potentials.

For Maxwell’s equation with electric sources, the total field \( F \) can be expressed in multivector potential form
\begin{equation}\label{eqn:mvpotentials:520}
F = \gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } \lr{ -\phi + c \BA } }{1,2}.
\end{equation}
For Maxwell’s equation with only fictitious magnetic sources, the total field \( F \) can be expressed in multivector form
\begin{equation}\label{eqn:mvpotentials:540}
F = \gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } I \eta \lr{ -\phi_m + c \BF } }{1,2}.
\end{equation}

The reader should try to verify this themselves (problem 3.)

Using superposition, we can form a multivector potential that includes all grades.

Definition 1.1: Multivector potential.

We call \( A \), a multivector with all grades, the multivector potential, defining the total field as
\begin{equation}\label{eqn:mvpotentials:600}
\begin{aligned}
F
&=
\gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } A }{1,2} \\
&=
\lr{ \spacegrad - \inv{c} \PD{t}{} } A
-
\gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } A }{0,3}.
\end{aligned}
\end{equation}
Imposition of the constraint
\begin{equation}\label{eqn:mvpotentials:680}
\gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } A }{0,3} = 0,
\end{equation}
is called the Lorentz gauge condition, and allows us to express \( F \) in terms of the potential without any grade selection filters.

Lemma 1.2: Conventional multivector potential.

Let
\begin{equation}\label{eqn:mvpotentials:620}
A = -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF }.
\end{equation}
This results in the conventional potential representation of the electric and magnetic fields
\begin{equation}\label{eqn:mvpotentials:640}
\begin{aligned}
\BE &= -\spacegrad \phi - \PD{t}{\BA} - \inv{\epsilon} \spacegrad \cross \BF \\
\BH &= -\spacegrad \phi_m - \PD{t}{\BF} + \inv{\mu} \spacegrad \cross \BA.
\end{aligned}
\end{equation}
In terms of potentials, the Lorentz gauge condition \ref{eqn:mvpotentials:680} takes the form
\begin{equation}\label{eqn:mvpotentials:660}
\begin{aligned}
0 &= \inv{c} \PD{t}{\phi} + \spacegrad \cdot (c \BA) \\
0 &= \inv{c} \PD{t}{\phi_m} + \spacegrad \cdot (c \BF).
\end{aligned}
\end{equation}

Start proof:

See problem 4.

End proof.

Problems.

Problem 1: Potentials for non-fictitious sources.

Starting with Maxwell’s equation with only conventional electric sources
\begin{equation}\label{eqn:mvpotentials:120}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F = \gpgrade{J}{0,1},
\end{equation}
show that this may be split by grade into three equations
\begin{equation}\label{eqn:mvpotentials:140}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } F}{0,1} &= \gpgrade{J}{0,1} \\
\spacegrad \wedge \BE + \inv{c}\PD{t}{} \lr{ I \eta \BH } &= 0 \\
\spacegrad \wedge \lr{ I \eta \BH } &= 0.
\end{aligned}
\end{equation}
Then use the identities \( \spacegrad \wedge \spacegrad \wedge \BA = 0 \), for vector \( \BA \) and \( \spacegrad \wedge \spacegrad \phi = 0 \), for scalar \( \phi \) to find the potential representation.

Answer

Taking grade(0,1) and (2,3) selections of Maxwell’s equation, we split our equations into source dependent and source free equations
\begin{equation}\label{eqn:mvpotentials:200}
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } F }{0,1} = \gpgrade{J}{0,1},
\end{equation}
\begin{equation}\label{eqn:mvpotentials:220}
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } F }{2,3} = 0.
\end{equation}

In terms of \( F = \BE + I \eta \BH \), the source free equation expands to
\begin{equation}\label{eqn:mvpotentials:240}
\begin{aligned}
0
&=
\gpgrade{
\lr{ \spacegrad + \inv{c} \PD{t}{} } \lr{ \BE + I \eta \BH }
}{2,3} \\
&=
\gpgradetwo{\spacegrad \BE}
+ \gpgradethree{I \eta \spacegrad \BH} + I \eta \inv{c} \PD{t}{\BH} \\
&=
\spacegrad \wedge \BE
+ \spacegrad \wedge \lr{ I \eta \BH }
+ I \eta \inv{c} \PD{t}{\BH},
\end{aligned}
\end{equation}
which can be further split into a bivector and trivector equation
\begin{equation}\label{eqn:mvpotentials:260}
0 = \spacegrad \wedge \BE + I \eta \inv{c} \PD{t}{\BH}
\end{equation}
\begin{equation}\label{eqn:mvpotentials:280}
0 = \spacegrad \wedge \lr{ I \eta \BH }.
\end{equation}
It’s clear that we want to write the magnetic field as a (bivector) curl, so we let
\begin{equation}\label{eqn:mvpotentials:300}
I \eta \BH = I c \BB = c \spacegrad \wedge \BA,
\end{equation}
or
\begin{equation}\label{eqn:mvpotentials:301}
\BH = \inv{\mu} \spacegrad \cross \BA.
\end{equation}

\Cref{eqn:mvpotentials:260} is reduced to
\begin{equation}\label{eqn:mvpotentials:320}
\begin{aligned}
0
&= \spacegrad \wedge \BE + I \eta \inv{c} \PD{t}{\BH} \\
&= \spacegrad \wedge \BE + \inv{c} \PD{t}{} \spacegrad \wedge \lr{ c \BA } \\
&= \spacegrad \wedge \lr{ \BE + \PD{t}{\BA} }.
\end{aligned}
\end{equation}
We can now let
\begin{equation}\label{eqn:mvpotentials:340}
\BE + \PD{t}{\BA} = -\spacegrad \phi.
\end{equation}
We sneakily adjust the sign of the gradient so that the result matches the conventional representation.

Problem 2: Potentials for fictitious sources.

Starting with Maxwell’s equation with only fictitious magnetic sources
\begin{equation}\label{eqn:mvpotentials:160}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F = \gpgrade{J}{2,3},
\end{equation}
show that this may be split by grade into three equations
\begin{equation}\label{eqn:mvpotentials:180}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } I F}{0,1} &= I \gpgrade{J}{2,3} \\
-\eta \spacegrad \wedge \BH + \inv{c}\PD{t}{(I \BE)} &= 0 \\
\spacegrad \wedge \lr{ I \BE } &= 0.
\end{aligned}
\end{equation}
Then use the identities \( \spacegrad \wedge \spacegrad \wedge \BF = 0 \), for vector \( \BF \) and \( \spacegrad \wedge \spacegrad \phi_m = 0 \), for scalar \( \phi_m \) to find the potential representation \ref{eqn:mvpotentials:100}.

Answer

We multiply \ref{eqn:mvpotentials:160} by \( I \) to find
\begin{equation}\label{eqn:mvpotentials:360}
\lr{ \spacegrad + \inv{c}\PD{t}{} } I F = I \gpgrade{J}{2,3},
\end{equation}
which can be split into
\begin{equation}\label{eqn:mvpotentials:380}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } I F }{1,2} &= I \gpgrade{J}{2,3} \\
\gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } I F }{0,3} &= 0.
\end{aligned}
\end{equation}
We expand the source free equation in terms of \( I F = I \BE - \eta \BH \), to find
\begin{equation}\label{eqn:mvpotentials:400}
\begin{aligned}
0
&= \gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } \lr{ I \BE – \eta \BH } }{0,3} \\
&= \spacegrad \wedge \lr{ I \BE } + \inv{c} \PD{t}{(I \BE)} – \eta \spacegrad \wedge \BH,
\end{aligned}
\end{equation}
which has the respective bivector and trivector grades
\begin{equation}\label{eqn:mvpotentials:420}
0 = \spacegrad \wedge \lr{ I \BE }
\end{equation}
\begin{equation}\label{eqn:mvpotentials:440}
0 = \inv{c} \PD{t}{(I \BE)} – \eta \spacegrad \wedge \BH.
\end{equation}
We can clearly satisfy \ref{eqn:mvpotentials:420} by setting
\begin{equation}\label{eqn:mvpotentials:460}
I \BE = -\inv{\epsilon} \spacegrad \wedge \BF,
\end{equation}
or
\begin{equation}\label{eqn:mvpotentials:461}
\BE = -\inv{\epsilon} \spacegrad \cross \BF.
\end{equation}
Here, once again, the sneaky inclusion of a constant factor \( -1/\epsilon \) is to make the result match the conventional one. Inserting this value for \( I \BE \) into our bivector equation yields
\begin{equation}\label{eqn:mvpotentials:480}
\begin{aligned}
0
&= -\inv{\epsilon} \inv{c} \PD{t}{} (\spacegrad \wedge \BF) – \eta \spacegrad \wedge \BH \\
&= -\eta \spacegrad \wedge \lr{ \PD{t}{\BF} + \BH },
\end{aligned}
\end{equation}
so we set
\begin{equation}\label{eqn:mvpotentials:500}
\PD{t}{\BF} + \BH = -\spacegrad \phi_m,
\end{equation}
and have a field representation that automatically satisfies the source free equations.

Problem 3: Total field in terms of potentials.

Prove lemma 1.1, either by direct expansion, or by trying to discover the multivector form of the field by construction.

Answer

Proof by expansion is straightforward, and left to the reader. We form the respective total electromagnetic fields \( F = \BE + I \eta \BH \) for each case.

We find
\begin{equation}\label{eqn:mvpotentials:560}
\begin{aligned}
F
&= \BE + I \eta \BH \\
&= -\spacegrad \phi – \PD{t}{\BA} + I \frac{\eta}{\mu} \spacegrad \cross \BA \\
&= -\spacegrad \phi – \inv{c} \PD{t}{(c \BA)} + \spacegrad \wedge (c\BA) \\
&= \gpgrade{ -\spacegrad \phi – \inv{c} \PD{t}{(c \BA)} + \spacegrad \wedge (c\BA) }{1,2} \\
&= \gpgrade{ -\spacegrad \phi – \inv{c} \PD{t}{(c \BA)} + \spacegrad (c\BA) }{1,2} \\
&= \gpgrade{ \spacegrad \lr{ -\phi + c \BA } – \inv{c} \PD{t}{(c \BA)} }{1,2} \\
&= \gpgrade{ \lr{ \spacegrad -\inv{c} \PD{t}{} } \lr{ -\phi + c \BA } }{1,2}.
\end{aligned}
\end{equation}

For the field for the fictitious source case, we compute the result in the same way, inserting a no-op grade selection to allow us to simplify, finding
\begin{equation}\label{eqn:mvpotentials:580}
\begin{aligned}
F
&= \BE + I \eta \BH \\
&= -\inv{\epsilon} \spacegrad \cross \BF + I \eta \lr{ -\spacegrad \phi_m – \PD{t}{\BF} } \\
&= \inv{\epsilon c} I \lr{ \spacegrad \wedge (c \BF)} + I \eta \lr{ -\spacegrad \phi_m – \inv{c} \PD{t}{(c \BF)} } \\
&= I \eta \lr{ \spacegrad \wedge (c \BF) + \lr{ -\spacegrad \phi_m – \inv{c} \PD{t}{(c \BF)} } } \\
&= I \eta \gpgrade{ \spacegrad \wedge (c \BF) + \lr{ -\spacegrad \phi_m – \inv{c} \PD{t}{(c \BF)} } }{1,2} \\
&= I \eta \gpgrade{ \spacegrad (c \BF) – \spacegrad \phi_m – \inv{c} \PD{t}{(c \BF)} }{1,2} \\
&= I \eta \gpgrade{ \spacegrad (-\phi_m + c \BF) – \inv{c} \PD{t}{(c \BF)} }{1,2} \\
&= I \eta \gpgrade{ \lr{ \spacegrad -\inv{c} \PD{t}{} } (-\phi_m + c \BF) }{1,2}.
\end{aligned}
\end{equation}

Problem 4: Fields in terms of potentials.

Prove lemma 1.2.

Answer

Let’s expand and then group by grade
\begin{equation}\label{eqn:mvpotentials:n}
\begin{aligned}
\lr{ \spacegrad – \inv{c} \PD{t}{} } A
&=
\lr{ \spacegrad – \inv{c} \PD{t}{} } \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF }} \\
&=
-\spacegrad \phi + c \spacegrad \BA + I \eta \lr{ -\spacegrad \phi_m + c \spacegrad \BF }
-\inv{c} \PD{t}{\phi} + c \inv{c} \PD{t}{ \BA } + I \eta \lr{ -\inv{c} \PD{t}{\phi_m} + c \inv{c} \PD{t}{\BF} } \\
&=
– \spacegrad \phi
+ I \eta c \spacegrad \wedge \BF
– c \inv{c} \PD{t}{\BA}
\quad + c \spacegrad \wedge \BA
-I \eta \spacegrad \phi_m
– c I \eta \inv{c} \PD{t}{\BF} \\
&\quad + c \spacegrad \cdot \BA
+\inv{c} \PD{t}{\phi}
\quad + I \eta \lr{ c \spacegrad \cdot \BF
+ \inv{c} \PD{t}{\phi_m} } \\
&=
– \spacegrad \phi
– \inv{\epsilon} \spacegrad \cross \BF
– \PD{t}{\BA}
\quad + I \eta \lr{
\inv{\mu} \spacegrad \cross \BA
– \spacegrad \phi_m
– \PD{t}{\BF}
} \\
&\quad + c \spacegrad \cdot \BA
+\inv{c} \PD{t}{\phi}
\quad + I \eta \lr{ c \spacegrad \cdot \BF
+ \inv{c} \PD{t}{\phi_m} }.
\end{aligned}
\end{equation}
Observing that \( F = \gpgrade{ \lr{ \spacegrad -(1/c) \partial_t } A }{1,2} = \BE + I \eta \BH \) completes the problem. If the Lorentz gauge condition is assumed, the scalar and pseudoscalar components above are obliterated, leaving just
\( F = \lr{ \spacegrad -(1/c) \partial_t } A \).

References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 3rd edition, 2005.

[2] R.P. Feynman, R.B. Leighton, and M.L. Sands. Feynman lectures on physics, Volume II.[Lectures on physics], chapter The Maxwell Equations. Addison-Wesley Publishing Company. Reading, Massachusetts, 1963. URL https://www.feynmanlectures.caltech.edu/II_18.html.

[3] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[4] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[5] David M Pozar. Microwave engineering. John Wiley & Sons, 2009.

The evil of COBOL: everything is in global variables

December 7, 2023 COBOL

COBOL does not have stack variables.  Everything is a global variable.  There is a loose equivalent of a function, called a paragraph, which can be called using a PERFORM statement, but a paragraph does not have any input or output variables, and no return code, so if you want it to behave like a function, you have to construct some sort of complicated naming convention using your global variables.
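
As a sketch of what that convention ends up looking like (all names here are invented for illustration), the “parameters” and the “return value” are just WORKING-STORAGE globals that the caller and the paragraph agree on:

      *    Hypothetical illustration (invented names).  The caller
      *    fills in CPX-ADD-IN1-RE/IM and CPX-ADD-IN2-RE/IM, does a
      *    PERFORM CPX-ADD, then reads CPX-ADD-OUT-RE/IM.
       CPX-ADD.
           COMPUTE CPX-ADD-OUT-RE = CPX-ADD-IN1-RE + CPX-ADD-IN2-RE
           COMPUTE CPX-ADD-OUT-IM = CPX-ADD-IN1-IM + CPX-ADD-IN2-IM.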

I’ve seen real customer COBOL programs with many thousands of global variables.  A production COBOL program is usually a giant sequence of MOVEs, MOVE A TO B, MOVE B TO C, MOVE C TO D, MOVE D TO E, … with various PERFORMs or GOTOs, or other things in between.  If you find that your variable has a bad value in it, that is probably because it has been copied from something that was copied from something, that was copied from something, that’s the output of something else, that was copied from something, 9 or 10 times.

I was toying around with the idea of coding up a COBOL implementation of 2D Euclidean geometric algebra, just as a joke, as it is surely the worst language in the world.  Yes, I work on a COBOL compiler project. The project is a lot of fun, and the people I work with are awesome, but I don’t have to like the language.

If I were to implement this simplest geometric algebra in COBOL, the logical starting place for that would be to implement complex numbers in COBOL first.  That is because we can use a pair of complex numbers to implement a 2D multivector, with one complex number for the vector part, and another for the scalar and pseudoscalar parts.  That technique has been detailed on this blog previously, and also in a Mathematica module Cl20.m.
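
Concretely, treating \( \mathbf{e}_{12} \) as the unit imaginary, a 2D multivector \( M = a + x \mathbf{e}_1 + y \mathbf{e}_2 + b \mathbf{e}_{12} \) can be packed into the complex pair \( ( a + b \mathbf{e}_{12},\; x + y \mathbf{e}_{12} ) \), since \( x \mathbf{e}_1 + y \mathbf{e}_2 = \mathbf{e}_1 ( x + y \mathbf{e}_{12} ) \).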

Trying to implement a couple of complex number operations in COBOL got absurd really fast.  Here’s an example.  First step was to create some complex number types.  I did that with a copybook (include file), like so:

This can be included multiple times, each time with a different name, like so:

The way that I structured all my helper functions was with one set of global variables for input (at least one), and if appropriate, one output global variable.  Here’s an example:

So, if I want to compute and display a value, I have a whole pile of stupid MOVEs to do in and out of the appropriate global variables for each of the helper routines in question:
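
The general shape is something like this (invented names, not the actual program text), for helpers structured like the hypothetical CPX-ADD paragraph sketched above:

      *    Hypothetical shape (invented names): marshal the inputs
      *    into the helper's globals, PERFORM it, then marshal the
      *    result into the next helper's input globals.
           MOVE A-RE TO CPX-MULT-IN1-RE
           MOVE A-IM TO CPX-MULT-IN1-IM
           MOVE B-RE TO CPX-MULT-IN2-RE
           MOVE B-IM TO CPX-MULT-IN2-IM
           PERFORM CPX-MULT
           MOVE CPX-MULT-OUT-RE TO CPX-DISPLAY-IN-RE
           MOVE CPX-MULT-OUT-IM TO CPX-DISPLAY-IN-IM
           PERFORM CPX-DISPLAY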

I wrote enough of this little complex number library that I could do conjugate, real, imaginary, multiply, inverse, and divide operations.  I can run that little program with the following JCL:

//COMPLEX JOB
//A EXEC PGM=COMPLEX
//SYSOUT   DD SYSOUT=*
//STEPLIB  DD DSN=PJOOT.SAMPLE.COMPLEX,
//  DISP=SHR

and get this SYSOUT:

STEP A SYSOUT:
A                    =  .10000000000000000E 01 + ( .20000000000000000E 01) I
B                    =  .30000000000000000E 01 + ( .40000000000000000E 01) I
CONJ(A)              =  .10000000000000000E 01 + (-.20000000000000000E 01) I
RE(A)                =  .10000000000000000E 01
IM(A)                =  .20000000000000000E 01
A * B                = -.50000000000000000E 01 + ( .10000000000000000E 02) I
1/A                  =  .20000000000000000E 00 + (-.40000000000000000E 00) I
A/B                  =  .44000000000000000E 00 + ( .80000000000000000E-01) I

If you would like your eyes burned further, you can access the full program on github here. It takes almost 200 lines of code to do almost nothing.

Derivatives of spherical polar vector representation.

December 6, 2023 math and physics play

[Click here for a PDF version of this post]

On discord, on the bivector server, ‘stationaryactionprinciple’ asked a question that I really liked.
It’s a question that nagged me before too, but I hadn’t taken the time to puzzle through it properly.

The main character in this question is the spherical polar form of a radial vector, which has the form
\begin{equation}\label{eqn:dexpquestion:20}
\begin{aligned}
i &= \Be_{12} \\
j &= \Be_{31} e^{i\phi} \\
\Bx(r,\theta,\phi) &= r \Be_3 e^{j \theta},
\end{aligned}
\end{equation}
as illustrated in Fig. 1

Fig. 1. Spherical polar conventions.

Notice that all the \( \phi \) dependence comes from the bivector \( j = j(\phi) \), which makes life a bit tricky. We can take \( r, \theta \) or \( \phi \) partials of \( \Bx \), but need to be particularly careful how we do this for the \( \phi \) partials of the exponential factor.

One correct way to compute such a partial is to first expand the exponential in its trig constituents, as
\begin{equation}\label{eqn:dexpquestion:120}
e^{j \theta} = \cos\theta + j \sin\theta,
\end{equation}
and then take the derivative with respect to \(\phi\). If we do so, we get
\begin{equation}\label{eqn:dexpquestion:140}
\PD{\phi}{} e^{j\theta} = \PD{\phi}{j} \sin\theta.
\end{equation}
On the other hand, should we just directly take derivatives of the exponential, one might think that the result is
\begin{equation}\label{eqn:dexpquestion:160}
\PD{\phi}{} e^{j\theta} = \PD{\phi}{(j\theta)} e^{j\theta} = \theta \PD{\phi}{j} e^{j\theta},
\end{equation}
but this is not correct, for a subtle reason. To understand why, we can step back to the power series representation of the exponential, and compute
\begin{equation}\label{eqn:dexpquestion:60}
\begin{aligned}
\PD{\phi}{e^{j\theta}}
&= \sum_{k = 0}^\infty \PD{\phi}{} \frac{ (j \theta)^k }{k!} \\
&= \sum_{k = 1}^\infty \PD{\phi}{j^k} \frac{ \theta^k }{k!}.
\end{aligned}
\end{equation}
If you treat \( j \) as a complex number, this then reduces to
\begin{equation}\label{eqn:dexpquestion:80}
\begin{aligned}
\PD{\phi}{e^{j\theta}}
&= \sum_{k = 1}^\infty k \PD{\phi}{j} j^{k-1} \frac{ \theta^k }{k!} \\
&=
\theta \PD{\phi}{j} \sum_{k = 1}^\infty \frac{ (j\theta)^{k-1} }{(k-1)!} \\
&=
\theta \PD{\phi}{j} e^{j\theta}.
\end{aligned}
\end{equation}
But, as we have said, this is wrong. The reason that this is wrong is because \( \PDi{\phi}{j} \) does not commute with \( j \), so
\begin{equation}\label{eqn:dexpquestion:100}
\PD{\phi}{j^k} = \PD{\phi}{j} j^{k-1} + j \PD{\phi}{j} j^{k-2} + \cdots,
\end{equation}
not \( k (\PDi{\phi}{j}) j^{k-1} \).

This non-commutativity, sneakily hiding in the power series for the exponential, messes us up. If we are careful, though, we should still be able to compute the correct result using the power series representation of the exponential. To do so, we need to understand the commutation relations for \( j \) and \( j' \). Writing \( j' = \PDi{\phi}{j} \), those two bivectors are
\begin{equation}\label{eqn:dexpquestion:180}
\begin{aligned}
j &= \Be_{31} e^{i\phi} \\
j' &= \Be_{32} e^{i\phi},
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:dexpquestion:200}
\begin{aligned}
j j'
&= \Be_{31} e^{i\phi} \Be_{32} e^{i\phi} \\
&= \Be_{3132} e^{-i\phi} e^{i\phi} \\
&= -\Be_{12},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:dexpquestion:220}
\begin{aligned}
j' j
&= \Be_{32} e^{i\phi} \Be_{31} e^{i\phi} \\
&= \Be_{3231} e^{-i\phi} e^{i\phi} \\
&= \Be_{12}.
\end{aligned}
\end{equation}
We find that \( j \) and \( j' \), in this case, anticommute
\begin{equation}\label{eqn:dexpquestion:240}
j j' = -j' j.
\end{equation}
We can now compute
\begin{equation}\label{eqn:dexpquestion:260}
\begin{aligned}
\PD{\phi}{j^k}
&= j' j^{k-1} + j j' j^{k-2} + j^2 j' j^{k-3} \cdots \\
&= j' j^{k-1} - j' j^{k-1} + (-1)^2 j' j^{k-1} \cdots
\end{aligned}
\end{equation}
This is zero for any even \( k \), and \( j' j^{k-1} \) for odd \( k \).

Plugging this back into our Taylor series for the derivative (before we messed it up), we find
\begin{equation}\label{eqn:dexpquestion:280}
\begin{aligned}
\PD{\phi}{e^{j\theta}}
&= \sum_{k = 1, k \in \mathrm{odd}}^\infty j' j^{k-1} \frac{ \theta^k }{k!} \\
&= j' \inv{j}
\sum_{k = 1,\, k \in \mathrm{odd}}^\infty \frac{ (j\theta)^k }{k!} \\
&= j' \inv{j} \sinh( j \theta ) \\
&= j' \inv{j} j \sin( \theta ) \\
&= j' \sin( \theta ).
\end{aligned}
\end{equation}
This is exactly the result that we had when we expanded \( e^{j\theta} \) in its cis form, and then took derivatives, so we have now reconciled the two different approaches.

Observe that, as a side effect of this exploration, we now also know how to compute the derivative of \( e^{j\theta} \) for the special case where \( j j' = -j' j \), which will be the case for any \( j \) where \( j^2 = \mathrm{constant} \), since differentiating \( j^2 = \mathrm{constant} \) gives \( j' j + j j' = 0 \).

Potentials in geometric algebra.

December 2, 2023 math and physics play

[Click here for a PDF version of this post]

Conventional formulation.

The idea behind introducing the scalar potential \( \phi \) and vector potential \( \BA \) is that we can impose a constraint on the form of our observable fields \( \BE, \BB \), (or \( \BD, \BH \)), that reduces the complexity and coupling of Maxwell’s equations. These potentials are not unique, but the types of allowed variations in those potentials (gauge transformations) do not change the observable fields.

The basic idea is that we are looking for representations of the fields that automatically satisfy the pair of source free Maxwell’s equations
\begin{equation}\label{eqn:gapotentials:40}
\begin{aligned}
\spacegrad \cdot \BB &= 0 \\
c \partial_0 \BB + \spacegrad \cross \BE &= 0,
\end{aligned}
\end{equation}
so that the problem is reduced to solving just the remaining source dependent Maxwell’s equations.

The conventional way of constructing these potentials makes use of the identities
\begin{equation}\label{eqn:gapotentials:60}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \cross \Bf } &= 0 \\
\spacegrad \cross \lr{ \spacegrad \chi } &= 0,
\end{aligned}
\end{equation}
where \( \Bf \) is a vector, and \( \chi \) is a scalar. This approach is straightforward. Instead of replicating it, here are a few well known references where such a treatment can be found

  1. section 18-6 potentials and the wave equation in [2] (available online),
  2. section 10.1 The potential formulation in [3], and
  3. section 6.4 Vector and Scalar Potentials, in [4].

Multivector potentials in geometric algebra.

The multivector form of Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:820}
\lr{ \spacegrad + \partial_0 } F = J,
\end{equation}
where \( \partial_0 = (1/c)\partial/\partial t \), the electromagnetic field \( F = \BE + I c \BB = \BE + I \eta H \) has grades(1,2), and a multivector charge and current density \( J \). Grades(0,1) of the current are the charge and current densities respectively, and if desired, the grade(2,3) portion of the current has the fictitious magnetic charge and current densities (used in microwave and antenna engineering.)

It’s best to consider the case of electric sources, separately from the case of (fictitious) magnetic sources, and then use superposition to construct a potential representation that includes both.

We require a tool that generalizes the \(\mathbb{R}^3\) cross product curl identities above.

Lemma 1.1: Curl of curl.

Let \( A \in \bigwedge^k \) be a blade of grade \( k \). Then
\begin{equation*}
\nabla \wedge \nabla \wedge A = 0.
\end{equation*}

Observe that for scalar \( A \), this reduces to
\begin{equation}\label{eqn:gapotentials:1740}
\nabla \wedge \nabla A = 0.
\end{equation}
We’ve recently proved this, so we won’t do it again now.

Now we are ready to figure out the structure of the potentials.

Case I. No (fictitious) magnetic sources.

Without magnetic sources, Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:840}
\lr{ \spacegrad + \partial_0 } F = \gpgrade{J}{0,1},
\end{equation}
This can be split into two equations, one that has just the sources, and one that is source free
\begin{equation}\label{eqn:gapotentials:860}
\gpgrade{ \lr{ \spacegrad + \partial_0 } F }{0,1} = \gpgrade{J}{0,1},
\end{equation}
\begin{equation}\label{eqn:gapotentials:880}
\gpgrade{ \lr{ \spacegrad + \partial_0 } F }{2,3} = 0.
\end{equation}
If you are clever, or have the benefit of having worked out the answer already, you can look directly at \ref{eqn:gapotentials:880} and guess the multivector form for the potential. Hint: you want something closely related to \( F = \lr{ \spacegrad – \partial_0 } A \), where \( A \) has grades(0,1).

If you aren’t that clever, or don’t have a time machine that lets you look that clever, you’ll have to work it out systematically like the rest of us. We can start by breaking down \( F \) into its constituent observer dependent fields. That means that we want to find values for \( \BE, \BH \) that satisfy
\begin{equation}\label{eqn:gapotentials:900}
\gpgrade{ \lr{ \spacegrad + \partial_0 } \lr{ \BE + I \eta \BH } }{2,3} = 0.
\end{equation}
Expanding the multivector factors gives us
\begin{equation}\label{eqn:gapotentials:920}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \partial_0 } \lr{ \BE + I \eta \BH } }{2,3}
&=\gpgradetwo{\spacegrad \BE} + \gpgradethree{I \eta \spacegrad \BH} + I \eta \partial_0 \BH \\
&=
\spacegrad \wedge \BE
+ \spacegrad \wedge \lr{ I \eta \BH }
+ I \eta \partial_0 \BH.
\end{aligned}
\end{equation}
Splitting this into one equation for each grade, leaves us with
\begin{equation}\label{eqn:gapotentials:940}
0 = \spacegrad \wedge \BE + I \eta \partial_0 \BH
\end{equation}
\begin{equation}\label{eqn:gapotentials:960}
0 = \spacegrad \wedge \lr{ I \eta \BH }.
\end{equation}
Observe that we could have also written \ref{eqn:gapotentials:960} as \( 0 = I \eta \lr{ \spacegrad \cdot \BH } \), which is the starting point of the conventional non-GA approach.
It’s clear that we want to write \( I \eta \BH = I c \BB \) as a (bivector) curl, and let
\begin{equation}\label{eqn:gapotentials:980}
I \eta \BH = c \spacegrad \wedge \BA.
\end{equation}
It’s a bit sneaky to toss that factor of \( c \) in here, but that’s done to make the units of \( \BA \) turn out in a way that matches the conventional vector potential. If it makes you feel better, you can think of this as an undetermined multiplicative constant that will be used to adjust the dimensions of \( \BA \) down the line.

Having made that choice, \ref{eqn:gapotentials:960} is automatically satisfied, and \ref{eqn:gapotentials:940} is reduced to
\begin{equation}\label{eqn:gapotentials:1000}
\begin{aligned}
0
&= \spacegrad \wedge \BE + I \eta \partial_0 \BH \\
&= \spacegrad \wedge \BE + \partial_0 \spacegrad \wedge \lr{ c \BA } \\
&= \spacegrad \wedge \lr{ \BE + c \partial_0 \BA }.
\end{aligned}
\end{equation}
We can now let
\begin{equation}\label{eqn:gapotentials:1020}
\BE + \partial_0 c \BA = -\spacegrad \phi.
\end{equation}
Again, we had the option of including an arbitrary multiplicative constant, but this time, we managed to find the right switch for our time machine, and look ahead to see that we want that constant to be \( -1 \) in order to have agreement with the conventional result.

We are left with a potential construction for our individual field components
\begin{equation}\label{eqn:gapotentials:1040}
\begin{aligned}
\BE &= -\spacegrad \phi – c \partial_0 \BA \\
I \eta \BH &= c \spacegrad \wedge \BA,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:1060}
F = -\spacegrad \phi – c \partial_0 \BA + c \spacegrad \wedge \BA.
\end{equation}
This automatically satisfies the grades of Maxwell’s equation that are source free, leaving us to solve just
\begin{equation}\label{eqn:gapotentials:1080}
\gpgrade{ \lr{ \spacegrad + \partial_0 } F }{0,1} = \gpgrade{J}{0,1}.
\end{equation}

Multivector potential.

It’s natural to wonder if there is a more structured form for \( F \) than \ref{eqn:gapotentials:1060}, just as we found a GA structure for Maxwell’s equation that eliminated the crazy mix of divs and curls that we had in the original Gibbs representation. Let’s find that structure. To do so, we can enclose \( F \) in a no-op grade selection operation
\begin{equation}\label{eqn:gapotentials:1100}
\begin{aligned}
F
&= \gpgrade{ -\spacegrad \phi – c \partial_0 \BA + c \spacegrad \wedge \BA }{1,2} \\
&= \gpgrade{ -\spacegrad \phi – c \partial_0 \BA + c \spacegrad \BA }{1,2} \\
&= \gpgrade{ \spacegrad \lr{ -\phi + c \BA } – c \partial_0 \BA + \lr{ \partial_0 \phi – \partial_0 \phi } }{1,2} \\
&= \gpgrade{ \lr{ \spacegrad – \partial_0 } \lr{ -\phi + c \BA } }{1,2}.
\end{aligned}
\end{equation}

We can now introduce a multivector potential, and express the remaining non-zero grades of Maxwell’s equation in terms of this potential
\begin{equation}\label{eqn:gapotentials:1120}
\begin{aligned}
A &= -\phi + c \BA \\
F &= \gpgrade{ \lr{ \spacegrad – \partial_0 } A }{1,2} \\
\gpgrade{J}{0,1} &= \gpgrade{ \lr{ \spacegrad + \partial_0 } F }{0,1}.
\end{aligned}
\end{equation}

Lorentz gauge.

The grade selection in our representation of \( F \) is a bit annoying, and can be eliminated if we impose additional constraints on the potential. We can write
\begin{equation}\label{eqn:gapotentials:1140}
F =
\lr{ \spacegrad - \partial_0 } A
-
\gpgrade{ \lr{ \spacegrad - \partial_0 } A }{0,3},
\end{equation}
and then ask what conditions are required for this grade(0,3) selection to be zero. In terms of our constituent potentials, that is
\begin{equation}\label{eqn:gapotentials:1160}
\begin{aligned}
0 &=
\gpgrade{ \lr{ \spacegrad – \partial_0 } A }{0,3} \\
&=
\gpgrade{ \lr{ \spacegrad – \partial_0 } \lr{ -\phi + c \BA } }{0,3} \\
&=
c \spacegrad \cdot \BA + \partial_0 \phi,
\end{aligned}
\end{equation}
This is the Lorentz gauge condition, recognized a bit more easily if written out in terms of the time partials explicitly
\begin{equation}\label{eqn:gapotentials:1180}
\inv{c^2} \PD{t}{\phi} + \spacegrad \cdot \BA = 0.
\end{equation}

We can now write Maxwell’s equations, in the potential formulation, as
\begin{equation}\label{eqn:gapotentials:1200}
\begin{aligned}
A &= -\phi + c \BA \\
F &= \lr{ \spacegrad – \partial_0 } A \\
0 &= \inv{c} \gpgrade{ \lr{ \spacegrad – \partial_0 } A }{0,3} = \inv{c^2} \PD{t}{\phi} + \spacegrad \cdot \BA \\
\gpgrade{J}{0,1} &= \gpgrade{ \lr{ \spacegrad + \partial_0 } F }{0,1} = \lr{ \spacegrad^2 – \partial_{00} } A.
\end{aligned}
\end{equation}
This is quite nice. We have a one to one decoupled relationship between the potential and the current, and are free to use the well known techniques for solving the wave equation (using convolution and a superposition of advanced and retarded Green’s functions for the wave equation operator.)

Gauge transformation.

There’s one more thing that we should look at before moving on to the magnetic sources case, and that’s the question of gauge freedom. We’ve said that the potentials are not unique, but this non-uniqueness has a very specific form.

We’ve constructed \( F \) with a grade selection as
\begin{equation}\label{eqn:gapotentials:1220}
F = \gpgrade{ \lr{ \spacegrad – \partial_0 } A }{1,2},
\end{equation}
so it’s clear that any transformation
\begin{equation}\label{eqn:gapotentials:1240}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } \psi_{0,3},
\end{equation}
where \( \psi_{0,3} \) is any multivector with grades(0,3) components, will leave \( F \) invariant. That is
\begin{equation}\label{eqn:gapotentials:1260}
\begin{aligned}
A &= -\phi + c \BA \\
&\rightarrow
-\phi + c \BA + \lr{ \spacegrad + \partial_0 } \psi_{0,3} \\
&=
-\phi + c \BA + \lr{ \spacegrad + \partial_0 } \lr{ c \psi + I \bar{\psi} } \\
&=
\lr{ -\phi + c \partial_0 \psi }
+ c \lr{ \BA + \spacegrad \psi }
+ I \spacegrad \bar{\psi}
+ I \partial_0 \bar{\psi}.
\end{aligned}
\end{equation}
We see that the contributions of \( \bar{\psi} \) result in grade(2,3) terms, which are not of interest, and we find that a paired transformation of the potentials
\begin{equation}\label{eqn:gapotentials:1280}
\begin{aligned}
\phi &\rightarrow \phi – \PD{t}{\psi} \\
\BA &\rightarrow \BA + \spacegrad \psi,
\end{aligned}
\end{equation}
called a gauge transformation, leaves the field \( F \) unchanged. This can be expressed slightly more compactly as
\begin{equation}\label{eqn:gapotentials:1300}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } c \psi,
\end{equation}
where, once again, the multiplicative constant \( c \) is included for consistency with the conventional expression for potential gauge transformation.

Case II. With (fictitious) magnetic sources.

With magnetic sources, Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:1500}
\lr{ \spacegrad + \partial_0 } F = \gpgrade{J}{2,3}.
\end{equation}
We put this in dual form
\begin{equation}\label{eqn:gapotentials:1520}
\lr{ \spacegrad + \partial_0 } I F = I \gpgrade{J}{2,3},
\end{equation}
which now has the sources all with grades (0,1), as we just analyzed. The dual field \( I F \), like \( F \), has only grade(1,2) components.

Expanding the source free Maxwell’s equations in terms of \( \BE, \BH \), we have
\begin{equation}\label{eqn:gapotentials:1340}
\begin{aligned}
0
&= \gpgrade{ \lr{ \spacegrad + \partial_0 } I F}{2,3} \\
&= \gpgrade{ \lr{ \spacegrad + \partial_0 } \lr{I \BE – \eta \BH } }{2,3} \\
&= \gpgrade{ I \spacegrad \BE – \eta \spacegrad \BH + I \partial_0 \BE – \eta \partial_0 \BH }{2,3} \\
&= \spacegrad \wedge \lr{ I \BE } – \eta \spacegrad \wedge \BH + I \partial_0 \BE,
\end{aligned}
\end{equation}
or, by grade
\begin{equation}\label{eqn:gapotentials:1360}
0 = \spacegrad \wedge \lr{ I \BE },
\end{equation}
\begin{equation}\label{eqn:gapotentials:1361}
0 = – \eta \spacegrad \wedge \BH + I \partial_0 \BE.
\end{equation}
We see that the dual electric field needs to be a curl to satisfy \ref{eqn:gapotentials:1360}
\begin{equation}\label{eqn:gapotentials:1400}
I \BE = -\eta \spacegrad \wedge c \BF,
\end{equation}
and after substitution into \ref{eqn:gapotentials:1361} we are left with
\begin{equation}\label{eqn:gapotentials:1540}
\begin{aligned}
0
&= – \eta \spacegrad \wedge \BH + \partial_0 \lr{ – \eta \spacegrad \wedge c \BF } \\
&= \eta \spacegrad \wedge \lr{ -\BH – \partial_0 c \BF } \\
\end{aligned}
\end{equation}
We set
\begin{equation}\label{eqn:gapotentials:1420}
-\BH - \partial_0 c \BF = \spacegrad \phi_m,
\end{equation}
so that our fields are
\begin{equation}\label{eqn:gapotentials:1440}
\begin{aligned}
\BE &= – \inv{\epsilon} \spacegrad \cross \BF \\
\BH &= -\spacegrad \phi_m – \PD{t}{\BF}.
\end{aligned}
\end{equation}
This has the structure that matches the potential conventions from antenna theory, for example as stated in [1].

Multivector potential.

As with the electrical sources, we expect that we can write this as something like
\begin{equation}\label{eqn:gapotentials:1460}
F = \gpgrade{ \lr{ \spacegrad – \partial_0 } I A }{1,2}.
\end{equation}
Let’s verify that this is the case.
\begin{equation}\label{eqn:gapotentials:1480}
\begin{aligned}
F
&= I \eta \spacegrad \wedge (c \BF) -I \eta \spacegrad \phi_m – I \eta \partial_0 c \BF \\
&= \gpgrade{ I \eta \spacegrad \wedge (c \BF) -I \eta \spacegrad \phi_m – I \eta \partial_0 c \BF }{1,2} \\
&= \gpgrade{ I \eta \spacegrad c \BF -I \eta \spacegrad \phi_m – I \eta \partial_0 c \BF }{1,2} \\
&= \gpgrade{ I \eta \lr{ \spacegrad \lr{ – \phi_m + c \BF } – \partial_0 c \BF + \partial_0 \phi_m – \partial_0 \phi_m} }{1,2} \\
&= \gpgrade{ \lr{ \spacegrad – \partial_0 } I \eta \lr{ – \phi_m + c \BF } }{1,2}.
\end{aligned}
\end{equation}

Lorentz gauge.

Let’s see what constraints we need to write our field in terms of a potential without a grade selection, that is
\begin{equation}\label{eqn:gapotentials:1560}
F = \lr{ \spacegrad – \partial_0 } I \eta \lr{ – \phi_m + c \BF }.
\end{equation}
We need the grade(0,3) components of this multivector to be zero. Those components are
\begin{equation}\label{eqn:gapotentials:1580}
\begin{aligned}
0 &=
\gpgrade{ \lr{ \spacegrad – \partial_0 } I \eta \lr{ – \phi_m + c \BF }}{0,3} \\
&=
\gpgrade{-\spacegrad I \eta \phi_m+\spacegrad I \eta c \BF+ \partial_0 I \eta \phi_m – \partial_0 I \eta c \BF }{0,3} \\
&=
\gpgradethree{ \spacegrad I \eta c \BF }
+ \partial_0 I \eta \phi_m \\
&=
I \eta \lr{ c \lr{ \spacegrad \cdot \BF} + \partial_0 \phi_m },
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:1600}
0 = \inv{c^2} \PD{t}{\phi_m} + \spacegrad \cdot \BF.
\end{equation}
This is the Lorentz gauge condition. With this condition we can express Maxwell’s equation with magnetic sources as a forced wave equation
\begin{equation}\label{eqn:gapotentials:1620}
\begin{aligned}
A &= I \eta \lr{ -\phi_m + c \BF } \\
F &= \lr{ \spacegrad – \partial_0 } A \\
0 &= \inv{c} \gpgrade{ \lr{ \spacegrad – \partial_0 } A }{0,3} = \inv{c^2} \PD{t}{\phi_m} + \spacegrad \cdot \BF \\
\gpgrade{J}{2,3} &= \gpgrade{ \lr{ \spacegrad + \partial_0 } F }{2,3} = \lr{ \spacegrad^2 – \partial_{00} } A.
\end{aligned}
\end{equation}

Gauge transformation.

Without the Lorentz gauge assumption, our potential representation for the field is
\begin{equation}\label{eqn:gapotentials:1640}
\begin{aligned}
A &= I \eta \lr{ -\phi_m + c \BF } \\
F &= \gpgrade{ \lr{ \spacegrad – \partial_0 } A }{1,2}.
\end{aligned}
\end{equation}
It’s clear that any transformation of the form
\begin{equation}\label{eqn:gapotentials:1660}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } \psi_{0,3},
\end{equation}
leaves the field unchanged.
\begin{equation}\label{eqn:gapotentials:1680}
\begin{aligned}
A &= I \eta \lr{ -\phi_m + c \BF } \\
&\rightarrow
I \eta \lr{ -\phi_m + c \BF } + \lr{ \spacegrad + \partial_0 } \psi_{0,3} \\
&=
I \eta \lr{ -\phi_m + c \BF } + \lr{ \spacegrad + \partial_0 } \lr{ \psi + I \eta c \bar{\psi} } \\
&=
I \eta \lr{
-\phi_m
+ c \partial_0 \bar{\psi}
+ c \BF
+ c \spacegrad \bar{\psi}
}
+ \lr{ \spacegrad + \partial_0 } \psi.
\end{aligned}
\end{equation}
We can drop the \( \psi \) contributions, since this time we want only grades(2,3) in our potential, and find that the
desired form of the gauge transformation, for scalar \( \bar{\psi} \), is
\begin{equation}\label{eqn:gapotentials:1700}
\begin{aligned}
\phi_m &\rightarrow \phi_m – \PD{t}{\bar{\psi}} \\
\BF &\rightarrow \BF + \spacegrad \bar{\psi}.
\end{aligned}
\end{equation}
The multivector form of this is
\begin{equation}\label{eqn:gapotentials:1720}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } I \eta c \bar{\psi}.
\end{equation}

Superposition.

We can now use superposition to construct a potential representation that works for both conventional electric and fictitious magnetic charges and currents.

Without a Lorentz gauge assumption, that is
\begin{equation}\label{eqn:gapotentials:1760}
\begin{aligned}
A &= -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } \\
F &= \gpgrade{ \lr{ \spacegrad – \partial_0 } A }{1,2} \\
J &= \lr{ \spacegrad + \partial_0 } F,
\end{aligned}
\end{equation}
where, given scalar functions \( \psi, \bar{\psi} \), we are free to make gauge transformations of the multivector potential that satisfy
\begin{equation}\label{eqn:gapotentials:1800}
A \rightarrow A + \lr{ \spacegrad + \partial_0 } \lr{ c \psi + I \eta c \bar{\psi} },
\end{equation}

With a Lorentz gauge constraint, we have a wave equation operator acting on \( A \), with the multivector current as a forcing term.
\begin{equation}\label{eqn:gapotentials:1780}
\begin{aligned}
A &= -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } \\
0 &= \gpgrade{ \lr{ \spacegrad – \partial_0 } A }{0,3} \\
F &= \lr{ \spacegrad – \partial_0 } A \\
J &= \lr{ \spacegrad^2 – \partial_{00} } A.
\end{aligned}
\end{equation}

Check.

It’s worth expanding things out to verify that we got all the dimensional constants right, and to compare the results to Maxwell’s equations in their Gibbs form.

Let’s start with an expansion of \( F \) in terms of the potentials
\begin{equation}\label{eqn:gapotentials:1820}
\begin{aligned}
F &=
\gpgrade{\lr{ \spacegrad – \partial_0 } A }{1,2} \\
&= \gpgrade{ \lr{ \spacegrad – \partial_0 } \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } } }{1,2} \\
&=
\gpgrade{ \spacegrad \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } } -\partial_0 \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } } }{1,2} \\
&=
\gpgrade{ \spacegrad \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF } } -\partial_0 \lr{ c \BA + I \eta c \BF } }{1,2} \\
&=
-\spacegrad \phi + c \spacegrad \wedge \BA – I \eta \spacegrad \phi_m + I \eta c \spacegrad \wedge \BF
-\partial_0 \lr{ c \BA + I \eta c \BF }.
\end{aligned}
\end{equation}
That is
\begin{equation}\label{eqn:gapotentials:1840}
\begin{aligned}
\BE &= -\spacegrad \phi + I \eta c \spacegrad \wedge \BF -c \partial_0 \BA \\
I \eta \BH &= c \spacegrad \wedge \BA – I \eta \spacegrad \phi_m – I \eta c \partial_0 \BF,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:1860}
\begin{aligned}
\BE &= – \spacegrad \phi -\partial_t \BA – \inv{\epsilon} \spacegrad \cross \BF \\
\BH &= – \spacegrad \phi_m – \partial_t \BF + \inv{\mu} \spacegrad \cross \BA.
\end{aligned}
\end{equation}
All is good. This is exactly the form that we expect.

Let’s expand out Maxwell’s equation in terms of this potential representation and see what we get.

Let’s write the total field without the grade(1,2) selection, by subtracting off any grade(0,3) contributions
\begin{equation}\label{eqn:gapotentials:1880}
F = \lr{ \spacegrad – \partial_0 } A – \gpgrade{ \lr{ \spacegrad – \partial_0 } A }{0,3}.
\end{equation}
That difference term is
\begin{equation}\label{eqn:gapotentials:1900}
\begin{aligned}
– \gpgrade{ \lr{ \spacegrad – \partial_0 } A }{0,3}
&=
– \gpgrade{ \lr{ \spacegrad – \partial_0 } \lr{ -\phi + c \BA – I \eta \phi_m + I \eta c \BF } }{0,3} \\
&=
– c \spacegrad \cdot \BA – I \eta c \spacegrad \cdot \BF – \partial_0 \phi – I \eta \partial_0 \phi_m.
\end{aligned}
\end{equation}
The field is nicely split into a multivector term that depends directly on the full multivector potential \( A \), and a difference term that wipes out any scalar and pseudoscalar terms
\begin{equation}\label{eqn:gapotentials:1920}
F
=
\lr{ \spacegrad – \partial_0 } A
– \lr{ \partial_0 \phi + c \spacegrad \cdot \BA } – I \eta \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF }.
\end{equation}

Maxwell’s equations are now reduced to
\begin{equation}\label{eqn:gapotentials:1940}
\lr{ \spacegrad^2 - \partial_{00} } A
-
\lr{ \spacegrad + \partial_0 }
\lr{ \partial_0 \phi + c \spacegrad \cdot \BA }
-
\lr{ \spacegrad + \partial_0 }
I \eta \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF }
= J.
\end{equation}
This splits nicely into a single equation for each grade of \( A, J \) respectively. We write
\begin{equation}\label{eqn:gapotentials:1960}
J = \eta\lr{ c \rho - \BJ } + I \lr{ c \rho_m - \BM },
\end{equation}
so
\begin{equation}\label{eqn:gapotentials:1980}
\begin{aligned}
\lr{ \spacegrad^2 – \partial_{00} } (-\phi) – \partial_0 \lr{ \partial_0 \phi + c \spacegrad \cdot \BA } &= \eta c \rho \\
\lr{ \spacegrad^2 – \partial_{00} } (c \BA) – \spacegrad \lr{ \partial_0 \phi + c \spacegrad \cdot \BA } &= -\eta \BJ \\
\lr{ \spacegrad^2 – \partial_{00} } (I \eta c \BF) – I \eta \partial_0 \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF } &= -I \BM \\
\lr{ \spacegrad^2 – \partial_{00} } (-I \eta \phi_m) – I \eta \spacegrad \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF } &= I c \rho_m.
\end{aligned}
\end{equation}
If we choose the Lorentz gauge conditions
\begin{equation}\label{eqn:gapotentials:2000}
0 = \lr{ \partial_0 \phi + c \spacegrad \cdot \BA } = \lr{ \partial_0 \phi_m + c \spacegrad \cdot \BF },
\end{equation}
all of these equations decouple nicely, leaving us with 8 (scalar) equations in 8 unknowns
\begin{equation}\label{eqn:gapotentials:2020}
\begin{aligned}
\lr{ \spacegrad^2 – \partial_{00} } \phi &= -\frac{\rho}{\epsilon} \\
\lr{ \spacegrad^2 – \partial_{00} } \BA &= -\mu \BJ \\
\lr{ \spacegrad^2 – \partial_{00} } \BF &= -\epsilon \BM \\
\lr{ \spacegrad^2 – \partial_{00} } \phi_m &= – \frac{\rho_m}{\mu}.
\end{aligned}
\end{equation}

Potentials in STA (space time algebra).

All of this was very convoluted. Maxwell’s equation in STA form is considerably simpler, as is the potential formulation.

STA form of Maxwell’s equation.

We identify
\begin{equation}\label{eqn:gapotentials:2040}
\begin{aligned}
\Be_k &= \gamma_k \gamma_0 \\
I &= \Be_1 \Be_2 \Be_3 = \gamma_0 \gamma_1 \gamma_2 \gamma_3 \\
\gamma^\mu \cdot \gamma_\nu &= {\delta^\mu}_\nu.
\end{aligned}
\end{equation}
Our field multivector
\begin{equation}\label{eqn:gapotentials:2060}
\begin{aligned}
F
&= \BE + I \eta \BH \\
&= \gamma_{k0} E^k + \eta \gamma_{0123k0} H^k \\
&= \gamma_{k0} E^k + \eta \gamma_{123k} H^k,
\end{aligned}
\end{equation}
now has a pure bivector representation in STA (since \( k \) will always clobber one of the \( 1,2,3 \) indexes.) To find the STA representation of Maxwell’s equation, we simply multiply both sides of our multivector representation, from the left, by \( \gamma_0 \).
\begin{equation}\label{eqn:gapotentials:2080}
\gamma_0 \lr{ \spacegrad + \partial_0 } F = \gamma_0 \lr{ \eta \lr{ c \rho – \BJ } + I \lr{ c \rho_m – \BM } }.
\end{equation}
The LHS is just the spacetime gradient of \( F \), which we can see by expanding the product
\begin{equation}\label{eqn:gapotentials:2100}
\begin{aligned}
\gamma_0 \lr{ \spacegrad + \partial_0 }
&=
\gamma_0 \lr{ \gamma_{k0} \PD{x^k}{} + \PD{x^0}{} } \\
&=
-\gamma_{k} \PD{x^k}{} + \gamma_0 \PD{x^0}{}.
\end{aligned}
\end{equation}
This is the spacetime gradient
\begin{equation}\label{eqn:gapotentials:2120}
\grad \equiv \gamma^k \PD{x^k}{} + \gamma^0 \PD{x^0}{} = \gamma^\mu \partial_\mu.
\end{equation}
Our RHS is
\begin{equation}\label{eqn:gapotentials:2140}
\begin{aligned}
\gamma_0 \lr{ \eta \lr{ c \rho – \BJ } + I \lr{ c \rho_m – \BM } }
&=
\gamma_0 \frac{\rho}{\epsilon} – \gamma_{0k0} \eta (\BJ \cdot \Be_k)
– I \lr{ c \rho_m \gamma_0 – \gamma_{0k0} (\BM \cdot \Be_k) } \\
&=
\gamma_0 \frac{\rho}{\epsilon} + \gamma_k \eta (\BJ \cdot \Be_k)
– I \lr{ c \rho_m \gamma_0 + \gamma_{k} (\BM \cdot \Be_k) }.
\end{aligned}
\end{equation}
If we let
\begin{equation}\label{eqn:gapotentials:2160}
\begin{aligned}
J_e^0 &= \frac{\rho}{\epsilon} \\
J_e^k &= \eta (\BJ \cdot \Be_k) \\
J_m^0 &= c \rho_m \\
J_m^k &= (\BM \cdot \Be_k) \\
J_e &= J_e^\mu \gamma_\mu \\
J_m &= J_m^\mu \gamma_\mu,
\end{aligned}
\end{equation}
then we are left with
\begin{equation}\label{eqn:gapotentials:2180}
\grad F = J_e – I J_m,
\end{equation}
or just
\begin{equation}\label{eqn:gapotentials:2640}
\grad F = J,
\end{equation}
where we now give a different meaning to \( J \) than we had in the multivector formulation. This \( J \) is now a multivector with grade(1,3) components.

Case I: potential formulation for conventional sources.

Much like we did to find the potential formulation for the multivector form of Maxwell’s equation, we use superposition, and tackle the conventional sources and the fictitious magnetic sources separately.

With no fictitious sources, Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:2200}
\grad F = J_e,
\end{equation}
which we may split into vector and trivector components
\begin{equation}\label{eqn:gapotentials:2220}
\begin{aligned}
\grad \cdot F &= J_e \\
\grad \wedge F &= 0.
\end{aligned}
\end{equation}
Clearly, the trivector equation can be satisfied by setting
\begin{equation}\label{eqn:gapotentials:2240}
F = \grad \wedge A,
\end{equation}
for some vector \( A \). We may also make gauge transformations of \( A \) of the form
\begin{equation}\label{eqn:gapotentials:2260}
A \rightarrow A + \grad \psi,
\end{equation}
without changing \( F \), showing that \( A \) is not uniquely determined. With such a representation, Maxwell’s equation is now reduced to
\begin{equation}\label{eqn:gapotentials:2280}
\grad \cdot F = J_e,
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:2300}
\begin{aligned}
J_e
&=
\grad \cdot \lr{ \grad \wedge A } \\
&=
\grad^2 A – \grad \lr{ \grad \cdot A }.
\end{aligned}
\end{equation}
Clearly the equivalent of the Lorentz gauge condition is now just
\begin{equation}\label{eqn:gapotentials:2320}
\grad \cdot A = 0,
\end{equation}
so the Lorentz gauge potential form of Maxwell’s equation is just
\begin{equation}\label{eqn:gapotentials:n}
\grad^2 A = J_e.
\end{equation}

Case II: potential formulation for fictitious sources.

If we have only fictitious sources, Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:2340}
\grad F = -I J_m,
\end{equation}
or after left multiplication by \( I \) we have
\begin{equation}\label{eqn:gapotentials:2360}
\grad I F = J_m.
\end{equation}
Let \( G = I F \), for the dual field, which is still a bivector. As before, we can split Maxwell’s equations into vector and trivector components
\begin{equation}\label{eqn:gapotentials:2380}
\begin{aligned}
\grad \cdot G &= J_m \\
\grad \wedge G &= 0.
\end{aligned}
\end{equation}
We may set
\begin{equation}\label{eqn:gapotentials:2400}
G = \grad \wedge K,
\end{equation}
for vector \( K \). Maxwell’s equation is now reduced to
\begin{equation}\label{eqn:gapotentials:2420}
\grad \cdot G = J_m,
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:2440}
\begin{aligned}
J_m
&=
\grad \cdot \lr{ \grad \wedge K } \\
&=
\grad^2 K – \grad \lr{ \grad \cdot K }.
\end{aligned}
\end{equation}

As before, we may make gauge transformations by adding a gradient to our potential
\begin{equation}\label{eqn:gapotentials:2460}
K \rightarrow K + \grad \bar{\psi},
\end{equation}
which will not change \( G \). For such sources, the Lorentz gauge condition is \( \grad \cdot K = 0 \). With the Lorentz gauge, Maxwell’s equation is reduced to
\begin{equation}\label{eqn:gapotentials:2480}
\grad^2 K = J_m.
\end{equation}

Superposition.

For non-fictitious sources, we have
\begin{equation}\label{eqn:gapotentials:2500}
F = \grad \wedge A
\end{equation}
and for fictitious sources, we have
\begin{equation}\label{eqn:gapotentials:2520}
I F = G = \grad \wedge K,
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:2540}
F = -I G = -I \lr{ \grad \wedge K }.
\end{equation}
Combining these results, we have
\begin{equation}\label{eqn:gapotentials:2560}
\begin{aligned}
F
&= \grad \wedge A -I \lr{ \grad \wedge K } \\
&= \gpgradetwo{ \grad \wedge A -I \lr{ \grad \wedge K } } \\
&= \gpgradetwo{ \grad A -I \lr{ \grad K } } \\
&= \gpgradetwo{ \grad \lr{ A + I K } },
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:gapotentials:2580}
F = \grad \lr{ A + I K } – \gpgrade{ \grad \lr{ A + I K } }{0,4}.
\end{equation}
Maxwell’s equation is
\begin{equation}\label{eqn:gapotentials:2600}
\grad^2 \lr{ A + I K } – \grad \gpgrade{ \grad \lr{ A + I K } }{0,4} = J.
\end{equation}
With the Lorentz gauge, this splits nicely into one forced wave equation for each vector potential
\begin{equation}\label{eqn:gapotentials:2620}
\begin{aligned}
\grad^2 A &= J_e \\
\grad^2 K &= -J_m.
\end{aligned}
\end{equation}

References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 3rd edition, 2005.

[2] R.P. Feynman, R.B. Leighton, and M.L. Sands. Feynman lectures on physics, Volume II.[Lectures on physics], chapter The Maxwell Equations. Addison-Wesley Publishing Company. Reading, Massachusetts, 1963. URL https://www.feynmanlectures.caltech.edu/II_18.html.

[3] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[4] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.