Geometric Algebra

Some line integral examples of the Fundamental theorem of geometric calculus

January 20, 2026 math and physics play

[Click here for a PDF version of this post]

On my discord server, Frank asked about his attempt to demonstrate an example line integral computation of the fundamental theorem of geometric calculus.

Before working through his example, and some others, it is first worth restating the
line integral specialization of the \textit{Fundamental theorem of geometric calculus}:

Theorem 1.1: Fundamental theorem of geometric calculus (line integral version.)

Given multivectors \(F, G \), a single variable parameterization \( \Bx = \Bx(u) \), with line element \( d\Bx = du \Bx_u \), \( \Bx_u = \PDi{u}{\Bx} \), \( \boldpartial = \Bx^u \PDi{u}{} \), and \( \Bx^u \cdot \Bx_u = 1 \), then
the line integral is related to the boundary by
\begin{equation*}
\int F d\Bx \boldpartial G = \evalbar{F G}{\Delta u},
\end{equation*}
(with the \( \boldpartial \) acting bidirectionally on \( F, G \).)

It is very important to point out that the derivative operator here is the vector derivative, and not the gradient. Roughly speaking, the vector derivative is the projection of the gradient onto the tangent space. In this case, the tangent space is just the line in the direction \( \Bx_u \), which may vary along the parameterized path.
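One way to make that statement precise: by the chain rule, for any multivector function \( G \) evaluated on the curve,
\begin{equation*}
\boldpartial G = \Bx^u \PD{u}{G} = \Bx^u \lr{ \Bx_u \cdot \spacegrad } G,
\end{equation*}
so only the directional derivative along \( \Bx_u \) survives, weighted by the reciprocal frame vector \( \Bx^u \).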

Here are some examples of one variable parameterizations, all in two dimensions:

  1. \( \Bx = u \Be_1 + y_0 \Be_2 \).
    We compute
    \begin{equation}\label{eqn:lineintegralExamples:20}
    \begin{aligned}
    \Bx_u &= \PD{u}{\Bx} = \Be_1 \\
    \Bx^u &= \Be_1 \\
    d\Bx &= du \Be_1 \\
    \boldpartial &= \Be_1 \PD{u}{}.
    \end{aligned}
    \end{equation}
    and \( d\Bx \boldpartial = du \PDi{u}{} \).
    The fundamental theorem is really just a statement that
    \begin{equation}\label{eqn:lineintegralExamples:40}
    \int \PD{u}{} \lr{ F G } du = \evalbar{ F G }{\Delta u}.
    \end{equation}

  2. \( \Bx = \alpha u \Be_1 + \beta u \Be_2 \), where \( \alpha, \beta \) are constants. i.e.: a line again, but not necessarily horizontal this time.
    This time, we compute
    \begin{equation}\label{eqn:lineintegralExamples:60}
    \begin{aligned}
    \Bx_u &= \alpha \Be_1 + \beta \Be_2 \\
    \Bx^u &= \inv{\Bx_u} = \frac{\alpha \Be_1 + \beta \Be_2}{\alpha^2 + \beta^2} \\
    d\Bx &= du \lr{ \alpha \Be_1 + \beta \Be_2 } \\
    \boldpartial &= \inv{\alpha \Be_1 + \beta \Be_2} \PD{u}{}.
    \end{aligned}
    \end{equation}
    Again, we have \( d\Bx \boldpartial = du \PDi{u}{} \), and the story repeats.

  3. \( \Bx = R \Be_1 e^{i\theta}, i = \Be_1 \Be_2 \). This time we are going along a circular arc.

    Let \( \rcap = \Be_1 e^{i\theta} \), and \(\thetacap = \Be_2 e^{i\theta} \). We can compute
    \begin{equation}\label{eqn:lineintegralExamples:80}
    \begin{aligned}
    \Bx_\theta &= R \Be_2 e^{i\theta} = R \thetacap \\
    \Bx^\theta &= \inv{\Bx_\theta} = \inv{ R \Be_2 e^{i\theta} } = \inv{R} \thetacap \\
    d\Bx &= R \, d\theta \, \thetacap \\
    \boldpartial &= \frac{\thetacap}{R} \PD{\theta}{}.
    \end{aligned}
    \end{equation}
    This time, probably to no surprise, we have \( d\Bx \boldpartial = d\theta \PDi{\theta}{} \), so the fundamental theorem for this parameterization is a statement that
    \begin{equation}\label{eqn:lineintegralExamples:100}
    \int \PD{\theta}{} \lr{ F G } d\theta = \evalbar{ F G }{\Delta \theta}.
    \end{equation}

  4. \( \Bx = r \Be_1 e^{i\theta_0} \), where \( \theta_0 \) is a constant. We’ve already computed this above with a Cartesian representation of a line, but can do it again with an explicitly radial parameterization. We compute
    \begin{equation}\label{eqn:lineintegralExamples:120}
    \begin{aligned}
    \Bx_r &= \Be_1 e^{i \theta_0} \\
    \Bx^r &= \inv{\Bx_r} = \Be_1 e^{i \theta_0} \\
    d\Bx &= dr \Be_1 e^{i \theta_0} \\
    \boldpartial &= \Be_1 e^{i \theta_0} \PD{r}{}.
    \end{aligned}
    \end{equation}
    This time, \( d\Bx \boldpartial = dr \PDi{r}{} \), and the fundamental theorem for this parameterization is a statement that
    \begin{equation}\label{eqn:lineintegralExamples:140}
    \int \PD{r}{} \lr{ F G } dr = \evalbar{ F G }{\Delta r}.
    \end{equation}

Observe that we do not get the same result if we use the gradient instead of the vector derivative. We may only make a gradient substitution for the vector derivative when the dimension of the hypervolume integral equals the dimension of the vector space itself. For a line integral that would mean we are restricting the domain of the underlying vector space to \(\mathbb{R}^1\), which isn’t a very interesting case for geometric algebra.

In Frank’s example, he was working with a generating vector space of \(\mathbb{R}^2\), with the horizontal parameterization \( \Bx = u \Be_1 + y_0 \Be_2 \) that we used in the first example (with \( F = 1, G = x y i \), where \( i = \Be_1 \Be_2 \), the pseudoscalar for the space).
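For comparison, the fundamental theorem evaluation for this case is just the boundary term. With \( F = 1 \), and \( x = u, y = y_0 \) on the path,
\begin{equation*}
\int d\Bx \boldpartial \lr{ x y i } = \int du \PD{u}{} \lr{ u y_0 i } = \evalbar{ u y_0 i }{\Delta u} = \lr{ \Delta x } y_0 i.
\end{equation*}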

Let’s see what happens if we compute a similar integral, but swapping out the vector derivative with the gradient
\begin{equation}\label{eqn:lineintegralExamples:160}
\begin{aligned}
\int d\Bx \spacegrad x y i
&=
\int du \Be_1 \lr{ \Be_1 \partial_x + \Be_2 \partial_y } ( x y i ) \\
&=
\int du \Be_1 \lr{ \Be_1 y + \Be_2 x } i \\
&=
\int du \lr{ y + i x } i \\
&=
\int du \lr{ y_0 + i u } i \\
&=
\lr{\Delta x} y_0 i - \frac{x_1^2}{2} + \frac{x_0^2}{2}.
\end{aligned}
\end{equation}
In addition to the pseudoscalar term that we found when evaluating the fundamental theorem integral, we now have an extra scalar term, a contribution that traces back to the \( y \) component of the gradient. There is nothing wrong with performing such an integral, but it’s not an instance of the fundamental theorem, and the same tidy answer should not be expected. In Frank’s original example, he also didn’t put the \( d\Bx \) adjacent to the differential operator, which is required to get the perfect cancellation of the tangent space vectors that we’ve seen in the evaluations above.

Curl of Curl. Tensor and GA expansion, and GA equivalent identity.

November 12, 2025 math and physics play

[Click here for a PDF version of this post]

In this blog post, we will expand \(\spacegrad \cross \lr{ \spacegrad \cross \Bf } = -\spacegrad^2 \Bf + \spacegrad \lr{ \spacegrad \cdot \Bf } \) two different ways, using tensor index gymnastics and using geometric algebra.

The tensor way.

To expand the curl using a tensor expansion, let’s first expand the cross product in coordinates
\begin{equation}\label{eqn:curlcurl2:20}
\begin{aligned}
\Ba \cross \Bb
&=
\lr{ \Be_r \cross \Be_s } a_r b_s \\
&=
\Be_t \cdot \lr{ \Be_r \cross \Be_s } \Be_t a_r b_s \\
&=
\epsilon_{rst} a_r b_s \Be_t.
\end{aligned}
\end{equation}
Here \( \epsilon_{rst} \) is the completely antisymmetric (Levi-Civita) tensor, which allows us to compactly express the geometrical nature of the triple product.

We can then expand the curl of the curl by applying this twice
\begin{equation}\label{eqn:curlcurl2:40}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\epsilon_{rst} \partial_r \lr{ \spacegrad \cross \Bf }_s \Be_t \\
&=
\epsilon_{rst} \partial_r \lr{ \epsilon_{uvw} \partial_u f_v \Be_w }_s \Be_t \\
&=
\epsilon_{rst} \partial_r \epsilon_{uvs} \partial_u f_v \Be_t.
\end{aligned}
\end{equation}

It turns out that there’s a nice identity to reduce the single index contraction of a pair of Levi-Civita tensors.
\begin{equation}\label{eqn:curlcurl2:60}
\epsilon_{abt} \epsilon_{cdt} = \delta_{ac} \delta_{bd} - \delta_{ad} \delta_{bc}.
\end{equation}
To show this, consider the \( t = 1 \) term of this sum \( \epsilon_{ab1} \epsilon_{cd1} \). This is non-zero only for \( a,b,c,d \in \setlr{2,3} \). If \( a,b = c,d \), this is one, and if \( a,b = d,c \), this is minus one. We may summarize that as
\begin{equation}\label{eqn:curlcurl2:80}
\epsilon_{ab1} \epsilon_{cd1} = \delta_{ac} \delta_{bd} - \delta_{ad} \delta_{bc},
\end{equation}
but this holds for \( t = 2,3 \) too, so \ref{eqn:curlcurl2:60} holds generally.
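The identity \ref{eqn:curlcurl2:60} is also small enough to verify exhaustively. Here is a standalone C++ brute force check (my own throwaway verification, not part of the original derivation):

#include <cassert>

// Levi-Civita symbol for indices in {0,1,2}: +1/-1 for even/odd permutations, 0 otherwise.
constexpr int eps(int a, int b, int c) {
    return (a - b) * (b - c) * (c - a) / 2;
}

constexpr int delta(int a, int b) { return a == b ? 1 : 0; }

int main() {
    // Check eps_{abt} eps_{cdt} = delta_{ac} delta_{bd} - delta_{ad} delta_{bc}
    // for all values of the free indices.
    for (int a = 0; a < 3; ++a)
        for (int b = 0; b < 3; ++b)
            for (int c = 0; c < 3; ++c)
                for (int d = 0; d < 3; ++d) {
                    int lhs = 0;
                    for (int t = 0; t < 3; ++t)
                        lhs += eps(a, b, t) * eps(c, d, t);
                    assert(lhs == delta(a, c) * delta(b, d) - delta(a, d) * delta(b, c));
                }
    return 0;
}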

We may now contract the tensors to find
\begin{equation}\label{eqn:curlcurl2:100}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\epsilon_{rst} \epsilon_{uvs} \Be_t \partial_r \partial_u f_v \\
&=
-\epsilon_{rts} \epsilon_{uvs} \Be_t \partial_r \partial_u f_v \\
&=
-\lr{ \delta_{ru} \delta_{tv} - \delta_{rv} \delta_{tu} } \Be_t \partial_r \partial_u f_v \\
&=
- \Be_v \partial_u \partial_u f_v
+ \Be_u \partial_v \partial_u f_v \\
&=
-\spacegrad^2 \Bf + \spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{aligned}
\end{equation}

Using geometric algebra.

Now let’s pull out the GA toolbox. We start by introducing a no-op grade-1 selection, and using the identity \( \Ba \cross \Bb = -I \lr{ \Ba \wedge \Bb } \)
\begin{equation}\label{eqn:curlcurl2:120}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\gpgradeone{
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
} \\
&=
\gpgradeone{
-I \lr{ \spacegrad \wedge \lr{ \spacegrad \cross \Bf } }
}
\end{aligned}
\end{equation}
We can now expand \( \Ba \wedge \Bb = \Ba \Bb - \Ba \cdot \Bb \)
\begin{equation}\label{eqn:curlcurl2:140}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
=
\gpgradeone{
-I \spacegrad \lr{ \spacegrad \cross \Bf }
+I \lr{ \spacegrad \cdot \lr{ \spacegrad \cross \Bf } }
}
\end{equation}
but that dot product is a scalar, leaving just a pseudoscalar, which has a zero grade-1 selection. This leaves
\begin{equation}\label{eqn:curlcurl2:160}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\gpgradeone{
-I \spacegrad \lr{ -I \lr{ \spacegrad \wedge \Bf } }
} \\
&=
-\gpgradeone{
\spacegrad \lr{ \spacegrad \wedge \Bf }
}.
\end{aligned}
\end{equation}
We use \( \Ba \wedge \Bb = \Ba \Bb - \Ba \cdot \Bb \) once more
\begin{equation}\label{eqn:curlcurl2:180}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
-\gpgradeone{
\spacegrad \lr{ \spacegrad \Bf }
-\spacegrad \lr{ \spacegrad \cdot \Bf }
}
\\
&=
-\spacegrad^2 \Bf
+\spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{aligned}
\end{equation}

GA identity.

It’s also worth noting that there’s a natural GA formulation of the curl of a curl. From the Laplacian and divergence relationship that we ended up with, we need only factor out the gradient
\begin{equation}\label{eqn:curlcurl2:200}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
-\spacegrad^2 \Bf +\spacegrad \lr{ \spacegrad \cdot \Bf } \\
&=
-\spacegrad \lr{ \spacegrad \Bf - \spacegrad \cdot \Bf } \\
&=
-\spacegrad \lr{ \spacegrad \wedge \Bf }.
\end{aligned}
\end{equation}
Because \( \spacegrad \wedge \lr{ \spacegrad \wedge \Bf } = 0 \), we may also write this as
\begin{equation}\label{eqn:curlcurl2:220}
\boxed{
\spacegrad \cdot \lr{ \spacegrad \wedge \Bf } = -\spacegrad \cross \lr{ \spacegrad \cross \Bf }.
}
\end{equation}
From the GA LHS, we see by inspection that
\begin{equation}\label{eqn:curlcurl2:240}
\spacegrad \cdot \lr{ \spacegrad \wedge \Bf } = \spacegrad^2 \Bf - \spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{equation}

A fun application of Green’s functions and geometric algebra: Residue calculus

November 2, 2025 math and physics play

[Click here for a PDF version of this post]

Motivation.

A fun application of both Green’s functions and geometric algebra is to show how the Cauchy integral equation can be expressed in terms of the Green’s function for the 2D gradient. This is covered, almost as an aside, in [1]. I found that treatment a bit hard to understand, so I am going to work through it here at my own pace.

Complex numbers in geometric algebra.

Anybody who has studied geometric algebra is likely familiar with a variety of ways to construct complex numbers from geometric objects. For example, complex numbers can be constructed for any plane. If \( \Be_1, \Be_2 \) is a pair of orthonormal vectors for some plane in \(\mathbb{R}^N\), then any vector in that plane,
\begin{equation}\label{eqn:residueGreens:20}
\Bf = \Be_1 u + \Be_2 v,
\end{equation}
has an associated complex representation, found by simply multiplying that vector by one of those basis vectors. For example, if we pre-multiply \( \Bf \) by \( \Be_1 \), forming
\begin{equation}\label{eqn:residueGreens:40}
\begin{aligned}
z
&= \Be_1 \Bf \\
&= \Be_1 \lr{ \Be_1 u + \Be_2 v } \\
&= u + \Be_1 \Be_2 v.
\end{aligned}
\end{equation}
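This map is invertible, since multiplying once more by \( \Be_1 \) recovers the vector
\begin{equation*}
\Be_1 z = \Be_1 \lr{ u + \Be_1 \Be_2 v } = \Be_1 u + \Be_2 v = \Bf.
\end{equation*}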

We may identify the unit bivector \( \Be_1 \Be_2 \) as an imaginary, designated by \( i \), since it has the expected behavior
\begin{equation}\label{eqn:residueGreens:60}
\begin{aligned}
i^2 &=
\lr{\Be_1 \Be_2}^2 \\
&=
\lr{\Be_1 \Be_2}
\lr{\Be_1 \Be_2} \\
&=
\Be_1 \lr{\Be_2
\Be_1} \Be_2 \\
&=
-\Be_1 \lr{\Be_1
\Be_2} \Be_2 \\
&=
-\lr{\Be_1 \Be_1}
\lr{\Be_2 \Be_2} \\
&=
-1.
\end{aligned}
\end{equation}

Complex numbers are seen to be isomorphic to even grade multivectors in a planar subspace. The imaginary is the grade-two pseudoscalar, and geometrically is an oriented unit area (bivector.)

Cauchy equations in terms of the gradient.

It is natural to wonder about the geometric algebra equivalents of various complex-number relationships and identities. Of particular interest for this discussion is the geometric algebra equivalent of the Cauchy equations that specify required conditions for a function to be differentiable.

If a complex function \( f(z) = u(z) + i v(z) \) is differentiable, then we must be able to find the limit of
\begin{equation}\label{eqn:residueGreens:80}
\frac{\Delta f(z_0)}{\Delta z} = \frac{f(z_0 + h) - f(z_0)}{h},
\end{equation}
for any complex \( h \rightarrow 0 \), for any possible trajectory of \( z_0 + h \) toward \( z_0 \). In particular, for real \( h = \epsilon \),
\begin{equation}\label{eqn:residueGreens:100}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0 + \epsilon, y_0) + i v(x_0 + \epsilon, y_0) - u(x_0, y_0) - i v(x_0, y_0)}{\epsilon}
=
\PD{x}{u(z_0)} + i \PD{x}{v(z_0)},
\end{equation}
and for imaginary \( h = i \epsilon \)
\begin{equation}\label{eqn:residueGreens:120}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0, y_0 + \epsilon) + i v(x_0, y_0 + \epsilon) - u(x_0, y_0) - i v(x_0, y_0)}{i \epsilon}
=
-i\lr{ \PD{y}{u(z_0)} + i \PD{y}{v(z_0)} }.
\end{equation}
Equating real and imaginary parts, we see that existence of the derivative requires
\begin{equation}\label{eqn:residueGreens:140}
\begin{aligned}
\PD{x}{u} &= \PD{y}{v} \\
\PD{x}{v} &= -\PD{y}{u}.
\end{aligned}
\end{equation}
These are the Cauchy equations (better known as the Cauchy-Riemann equations). When the derivative exists in a given neighbourhood, we say that the function is analytic in that region. If we use a bivector interpretation of the imaginary, with \( i = \Be_1 \Be_2 \), the Cauchy equations are also satisfied if the gradient of the complex function is zero, since
\begin{equation}\label{eqn:residueGreens:160}
\begin{aligned}
\spacegrad f
&=
\lr{ \Be_1 \partial_x + \Be_2 \partial_y } \lr{ u + \Be_1 \Be_2 v } \\
&=
\Be_1 \lr{ \partial_x u - \partial_y v } + \Be_2 \lr{ \partial_y u + \partial_x v }.
\end{aligned}
\end{equation}
We see that the geometric algebra equivalent of the Cauchy equations is simply
\begin{equation}\label{eqn:residueGreens:200}
\spacegrad f = 0.
\end{equation}
Roughly speaking, we may say that a function is analytic in a region if the Cauchy equations are satisfied (equivalently, if the gradient is zero) in a neighbourhood of all points in that region.
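As a quick check of this gradient form of the analytic condition, consider \( f(z) = z^2 \), for which \( u = x^2 - y^2 \) and \( v = 2 x y \)
\begin{equation*}
\spacegrad f = \Be_1 \lr{ \partial_x u - \partial_y v } + \Be_2 \lr{ \partial_y u + \partial_x v } = \Be_1 \lr{ 2 x - 2 x } + \Be_2 \lr{ -2 y + 2 y } = 0.
\end{equation*}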

A special case of the fundamental theorem of geometric calculus.

Given an even grade multivector \( \psi \) in the geometric algebra of \(\mathbb{R}^2\) (i.e.: a complex number), we can show that
\begin{equation}\label{eqn:residueGreens:220}
\int_A \spacegrad \psi d^2\Bx = \oint_{\partial A} d\Bx \psi.
\end{equation}
Let’s get an idea why this works by expanding the area integral for a rectangular parameterization
\begin{equation}\label{eqn:residueGreens:240}
\begin{aligned}
\int_A \spacegrad \psi d^2\Bx
&=
\int_A \lr{ \Be_1 \partial_1 + \Be_2 \partial_2 } \psi I dx dy \\
&=
\int \Be_1 I dy \evalrange{\psi}{x_0}{x_1}
+
\int \Be_2 I dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int \Be_2 dy \evalrange{\psi}{x_0}{x_1}
-
\int \Be_1 dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int d\By \evalrange{\psi}{x_0}{x_1}
-
\int d\Bx \evalrange{\psi}{y_0}{y_1}.
\end{aligned}
\end{equation}
We took advantage of the fact that the \(\mathbb{R}^2\) pseudoscalar commutes with \( \psi \). The end result, illustrated in fig. 1, shows pictorially that the remaining integral is an oriented line integral.

fig. 1. Oriented multivector line integral.


If we want to approximate a more general area, we may do so with additional tiles, as illustrated in fig. 2. With such a tiling, we may evaluate the area integral using the line integral over just the exterior boundary, since any overlapping opposing boundary contributions cancel exactly.

fig. 2. A crude circular tiling approximation.


The reason that this is interesting is that it allows us to re-express a complex integral as a corresponding multivector area integral. With \( d\Bx = \Be_1 dz \), we have
\begin{equation}\label{eqn:residueGreens:260}
\oint dz\, \psi = \Be_1 \int \spacegrad \psi d^2\Bx.
\end{equation}
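To see this, observe that \( \Be_1 \) is constant, so it can be brought inside the boundary integral of \ref{eqn:residueGreens:220}
\begin{equation*}
\Be_1 \int_A \spacegrad \psi d^2\Bx = \oint_{\partial A} \Be_1 d\Bx\, \psi = \oint_{\partial A} \Be_1 \Be_1 dz\, \psi = \oint_{\partial A} dz\, \psi.
\end{equation*}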

The Cauchy kernel as a Green’s function.

We’ve previously derived the Green’s function for the 2D Laplacian, and found
\begin{equation}\label{eqn:residueGreens:280}
\tilde{G}(\Bx, \Bx') = \inv{2\pi} \ln \Abs{\Bx - \Bx'},
\end{equation}
which satisfies
\begin{equation}\label{eqn:residueGreens:300}
\delta^2(\Bx - \Bx') = \spacegrad^2 \tilde{G}(\Bx, \Bx') = \spacegrad \lr{ \spacegrad \tilde{G}(\Bx, \Bx') }.
\end{equation}
This means that \( G(\Bx, \Bx') = \spacegrad \tilde{G}(\Bx, \Bx') \) is the Green’s function for the gradient. That Green’s function is
\begin{equation}\label{eqn:residueGreens:320}
\begin{aligned}
G(\Bx, \Ba)
&= \inv{2 \pi} \frac{\spacegrad \Abs{\Bx - \Ba}}{\Abs{\Bx - \Ba}} \\
&= \inv{2 \pi} \frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2}.
\end{aligned}
\end{equation}
We may cast this Green’s function into complex form with \( z = \Be_1 \Bx, a = \Be_1 \Ba \). In particular
\begin{equation}\label{eqn:residueGreens:340}
\begin{aligned}
\inv{z - a}
&=
\frac{(z - a)^\conj}{\Abs{z - a}^2} \\
&=
\frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2} \Be_1 \\
&=
2 \pi G(\Bx, \Ba) \Be_1.
\end{aligned}
\end{equation}

Cauchy’s integral.

With
\begin{equation}\label{eqn:residueGreens:360}
\psi = \frac{f(z)}{z - a},
\end{equation}
using \ref{eqn:residueGreens:260}, we can now evaluate
\begin{equation}\label{eqn:residueGreens:265}
\begin{aligned}
\oint dz\, \frac{f(z)}{z - a}
&= \Be_1 \int \spacegrad \frac{f(z)}{z - a} d^2\Bx \\
&= \Be_1 \int \lr{ \frac{\spacegrad f(z)}{z - a} + \lr{ \spacegrad \inv{z - a}} f(z) } I dA \\
&= \Be_1 \int f(z) \spacegrad 2 \pi G(\Bx, \Ba) \Be_1 I dA \\
&= 2 \pi \Be_1 \int \delta^2(\Bx - \Ba) \Be_1 f(\Bx) I dA \\
&= 2 \pi \Be_1^2 f(\Ba) I \\
&= 2 \pi I f(a),
\end{aligned}
\end{equation}
where we’ve made use of the analytic condition \( \spacegrad f = 0 \), and the fact that \( f \) and \( 1/(z-a) \), both even multivectors, commute.

The Cauchy integral equation
\begin{equation}\label{eqn:residueGreens:380}
f(a) = \inv{2 \pi I} \oint dz\, \frac{f(z)}{z - a},
\end{equation}
falls out naturally. This sort of residue calculation always seemed a bit miraculous. By introducing a geometric algebra encoding of complex numbers, we get a new and interesting interpretation. In particular,

  1. the imaginary factor in the geometric algebra formulation of this identity is an oriented unit area coming directly from the area element,
  2. the factor of \( 2 \pi \) comes directly from the Green’s function for the gradient,
  3. the fact that this particular form of integral picks up only the contribution at the point \( z = a \) no longer seems mysterious: it is a direct consequence of delta-function filtering.

Also, if we are looking for an understanding of how to generalize the Cauchy integral equation to more general multivector functions, we now have a good clue how that would be done.

References

[1] C. Doran and A.N. Lasenby. Geometric Algebra for Physicists. Cambridge University Press, Cambridge, UK, 1st edition, 2003.

Transverse electric and magnetic field relations.

August 10, 2025 math and physics play

[Click here for a PDF version of this post]

I found a sign error in my book. Here I’ll re-derive all the results in a standalone fashion, verifying signs as I go.

Setup

Suppose that a field is propagating in a medium along the z-axis. We may represent that field as the real part of
\begin{equation}\label{eqn:transverseField:20}
F = F(x,y) e^{j(\omega t - k z)}.
\end{equation}
This is a doubly complex relationship, as we have a scalar complex imaginary \( j \), as well as the spatial imaginary \(I = \Be_1 \Be_2 \Be_3 \) that is part of the multivector field itself
\begin{equation}\label{eqn:transverseField:40}
F = \BE + I \eta \BH.
\end{equation}

Let’s call
\begin{equation}\label{eqn:transverseField:60}
F_z = \lr{ \BE \cdot \Be_3} \Be_3 + I \eta \lr{ \BH \cdot \Be_3 } \Be_3,
\end{equation}
the propagation component of the field and \( F_t = F - F_z \) the transverse component of the field. We can write these in a more symmetric fashion by expanding the dot products and regrouping
\begin{equation}\label{eqn:transverseField:80}
\begin{aligned}
F_z
&= \lr{ \BE \cdot \Be_3} \Be_3 + I \eta \lr{ \BH \cdot \Be_3 } \Be_3 \\
&= \inv{2} \lr{ \BE \Be_3 + \Be_3 \BE } \Be_3 + \frac{I \eta}{2} \lr{ \BH \Be_3 + \Be_3 \BH} \Be_3 \\
&= \inv{2} \lr{ \BE + \Be_3 \BE \Be_3 } + \frac{I \eta}{2} \lr{ \BH + \Be_3 \BH \Be_3} \\
&= \inv{2} \lr{ F + \Be_3 F \Be_3 }.
\end{aligned}
\end{equation}
By subtraction, we also have
\begin{equation}\label{eqn:transverseField:100}
F_t = \inv{2} \lr{ F - \Be_3 F \Be_3 }.
\end{equation}
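Note that this splits \( F \) into the \( \pm 1 \) eigenspaces of the sandwich operation \( F \rightarrow \Be_3 F \Be_3 \),
\begin{equation*}
\Be_3 F_z \Be_3 = F_z, \qquad \Be_3 F_t \Be_3 = -F_t,
\end{equation*}
a fact that is implicit in the grouping below.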

Relating the transverse and propagation direction fields

The multivector form of Maxwell’s equation, for source free conditions, is
\begin{equation}\label{eqn:transverseField:120}
0 = \lr{ \spacegrad + \inv{c} \partial_t } F.
\end{equation}
We split the gradient into a propagation direction component and a transverse component \( \spacegrad_t \)
\begin{equation}\label{eqn:transverseField:140}
\spacegrad = \spacegrad_t + \Be_3 \partial_z,
\end{equation}
so
\begin{equation}\label{eqn:transverseField:160}
\begin{aligned}
0
&= \lr{ \spacegrad_t + \Be_3 \partial_z + \inv{c} \partial_t } F \\
&= \lr{ \spacegrad_t + \Be_3 \partial_z + \inv{c} \partial_t } F(x,y) e^{j(\omega t - k z) } \\
&= \lr{ \spacegrad_t - j\Be_3 k + j\frac{\omega}{c} } F(x,y) e^{j(\omega t - k z) },
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:transverseField:180}
-j \lr{ \frac{\omega}{c} - k \Be_3 } F = \spacegrad_t F.
\end{equation}

Observe that
\begin{equation}\label{eqn:transverseField:200}
-j \lr{ \frac{\omega}{c} - k \Be_3 } \Be_3 F \Be_3 = -\spacegrad_t \Be_3 F \Be_3,
\end{equation}
which means that
\begin{equation}\label{eqn:transverseField:220}
-j \lr{ \frac{\omega}{c} - k \Be_3 } \inv{2} \lr{ F \pm \Be_3 F \Be_3 } = \spacegrad_t \inv{2} \lr{ F \mp \Be_3 F \Be_3 },
\end{equation}
or
\begin{equation}\label{eqn:transverseField:240}
\begin{aligned}
-j \lr{ \frac{\omega}{c} - k \Be_3 } F_z &= \spacegrad_t F_t \\
-j \lr{ \frac{\omega}{c} - k \Be_3 } F_t &= \spacegrad_t F_z.
\end{aligned}
\end{equation}

Provided \( \omega^2 \ne k^2 c^2 \), this can be inverted, meaning that \( F_t \), if known, fully determines \( F_z \), and conversely.
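The inversion step uses
\begin{equation*}
\lr{ \frac{\omega}{c} - k \Be_3 } \lr{ \frac{\omega}{c} + k \Be_3 } = \frac{\omega^2}{c^2} - k^2 = \omega^2 \mu \epsilon - k^2,
\end{equation*}
which follows from \( \Be_3^2 = 1 \) and \( 1/c^2 = \mu \epsilon \).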

That inversion provides the propagation direction field in terms of the transverse
\begin{equation}\label{eqn:transverseField:260a}
F_z = j \frac{ \frac{\omega}{c} + k \Be_3 }{ \omega^2 \mu \epsilon - k^2 } \spacegrad_t F_t,
\end{equation}
and the transverse field in terms of the propagation direction field
\begin{equation}\label{eqn:transverseField:260b}
F_t = j \frac{ \frac{\omega}{c} + k \Be_3 }{ \omega^2 \mu \epsilon - k^2 } \spacegrad_t F_z.
\end{equation}

Transverse field in terms of propagation

Let’s expand \ref{eqn:transverseField:260b} in terms of component electric and magnetic fields. First note that
\begin{equation}\label{eqn:transverseField:280}
\begin{aligned}
\spacegrad_t F_z
&= \spacegrad_t \Be_3 \lr{ E_z + I \eta H_z } \\
&= -\Be_3 \spacegrad_t \lr{ E_z + I \eta H_z },
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:transverseField:300}
F_t = -j \frac{ \frac{\omega}{c} \Be_3 + k }{ \omega^2 \mu \epsilon - k^2 } \spacegrad_t \lr{ E_z + I \eta H_z }.
\end{equation}
This may now be split into electric and magnetic fields, but first note that the multivector operator
\begin{equation}\label{eqn:transverseField:320}
\begin{aligned}
\Be_3 \spacegrad_t
&=
\Be_3 \cdot \spacegrad_t + \Be_3 \wedge \spacegrad_t \\
&=
\Be_3 \wedge \spacegrad_t,
\end{aligned}
\end{equation}
has only a bivector component.
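We will also make repeated use of the duality relation
\begin{equation*}
\Be_3 \wedge \spacegrad_t = I \lr{ \Be_3 \cross \spacegrad_t },
\end{equation*}
which converts such wedge products into cross products after a multiplication by \( I \) (recall \( I^2 = -1 \)).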

For the transverse electric field component, we have
\begin{equation}\label{eqn:transverseField:340}
\begin{aligned}
\gpgradeone{ \lr{ \frac{\omega}{c} \Be_3 + k } \spacegrad_t \lr{ E_z + I \eta H_z } }
&=
k \spacegrad_t E_z + \frac{\omega}{c} \Be_3 \wedge \spacegrad_t \lr{ I \eta H_z } \\
&=
k \spacegrad_t E_z - \frac{\eta \omega}{c} \Be_3 \cross \spacegrad_t H_z,
\end{aligned}
\end{equation}
and for the magnetic field component
\begin{equation}\label{eqn:transverseField:360}
\begin{aligned}
\gpgradetwo{ \lr{ \frac{\omega}{c} \Be_3 + k } \spacegrad_t \lr{ E_z + I \eta H_z } }
=
\frac{\omega}{c} \Be_3 \wedge \spacegrad_t E_z + I \eta k \spacegrad_t H_z.
\end{aligned}
\end{equation}

This means that
\begin{equation}\label{eqn:transverseField:380}
\begin{aligned}
\BE_t &= \frac{j}{\omega^2 \mu \epsilon - k^2 } \lr{ -k \spacegrad_t E_z + \frac{\eta \omega}{c} \Be_3 \cross \spacegrad_t H_z } \\
\eta I \BH_t &= -\frac{j}{\omega^2 \mu \epsilon - k^2 } \lr{ \frac{\omega}{c} \Be_3 \wedge \spacegrad_t E_z + I \eta k \spacegrad_t H_z }.
\end{aligned}
\end{equation}

Cancelling out the \( \eta I \) factors in the magnetic field component, and substituting \( \eta/c = \mu, 1/(c\eta) = \epsilon \), leaves us with
\begin{equation}\label{eqn:transverseField:400}
\begin{aligned}
\BE_t &= \frac{j}{\omega^2 \mu \epsilon - k^2 } \lr{ -k \spacegrad_t E_z + \mu \omega \Be_3 \cross \spacegrad_t H_z } \\
\BH_t &= -\frac{j}{\omega^2 \mu \epsilon - k^2 } \lr{ \epsilon \omega \Be_3 \cross \spacegrad_t E_z + k \spacegrad_t H_z }.
\end{aligned}
\end{equation}

Propagation field in terms of transverse.

Now let’s expand \ref{eqn:transverseField:260a} similarly. We seek the grade selections
\begin{equation}\label{eqn:transverseField:420}
\gpgrade{ \lr{ \frac{\omega}{c} + k \Be_3 } \spacegrad_t F_t }{1,2}
\end{equation}

Performing each of these four grade selections in turn, for the \( \spacegrad_t F_t \) products we have
\begin{equation}\label{eqn:transverseField:440}
\begin{aligned}
\gpgradeone{ \spacegrad_t F_t }
&=
\gpgradeone{ \spacegrad_t \lr{ \BE_t + I \eta \BH_t } } \\
&=
\eta \gpgradeone{ I \spacegrad_t \BH_t } \\
&=
\eta I \lr{ \spacegrad_t \wedge \BH_t } \\
&=
-\eta \lr{ \spacegrad_t \cross \BH_t }.
\end{aligned}
\end{equation}
Because \( \spacegrad_t \BE_t \) has only grades 0,2, its grade-one selection is zero, leaving us with only \( \BH_t \) dependence.

For the grade two selection of the same, we have
\begin{equation}\label{eqn:transverseField:460}
\begin{aligned}
\gpgradetwo{ \spacegrad_t F_t }
&=
\gpgradetwo{ \spacegrad_t \lr{ \BE_t + I \eta \BH_t } } \\
&=
\spacegrad_t \wedge \BE_t \\
&=
I \lr{ \spacegrad_t \cross \BE_t }.
\end{aligned}
\end{equation}
This time we note that the vector-bivector product \( \spacegrad_t (I \BH_t) \) has only 1,3 grades, and is killed by the grade-2 selection.

For the \( \Be_3 \spacegrad_t F_t \) products, we have
\begin{equation}\label{eqn:transverseField:480}
\begin{aligned}
\gpgradeone{ \Be_3 \spacegrad_t F_t }
&=
\gpgradeone{ \Be_3 \spacegrad_t \lr{ \BE_t + I \eta \BH_t } } \\
&=
\gpgradeone{ \lr{ \Be_3 \cdot \spacegrad_t + \Be_3 \wedge \spacegrad_t } \BE_t }
+
\eta \gpgradeone{ I \Be_3 \lr{ \spacegrad_t \cdot \BH_t + \spacegrad_t \wedge \BH_t } } \\
&=
\gpgradeone{ I \lr{ \Be_3 \cross \spacegrad_t } \BE_t } \\
&=
-\lr{ \Be_3 \cross \spacegrad_t } \cross \BE_t.
\end{aligned}
\end{equation}
Observe that we’ve made use of \( \Be_3 \cdot \spacegrad_t = 0 \), regardless of what it operates on. For the \( \BH_t \) dependence, we had a bivector-scalar product \( (I \Be_3) (\spacegrad_t \cdot \BH_t) \), and a bivector-bivector product \( (I \Be_3) (\spacegrad_t \wedge \BH_t) \), neither of which have any vector grades.

Finally
\begin{equation}\label{eqn:transverseField:500}
\begin{aligned}
\gpgradetwo{ \Be_3 \spacegrad_t F_t }
&=
\eta \gpgradetwo{ I \Be_3 \spacegrad_t \BH_t } \\
&=
-\eta \gpgradetwo{ \lr{\Be_3 \cross \spacegrad_t} \BH_t } \\
&=
-\eta I \lr{\Be_3 \cross \spacegrad_t} \cross \BH_t.
\end{aligned}
\end{equation}
Here we’ve discarded the \( \BE_t \) dependent terms, since the bivector-vector product \( \lr{ \Be_3 \wedge \spacegrad_t } \BE_t \) has only grades 1,3, and we seek grade 2 only.

Putting all the pieces together, noting that \( \eta/c = \mu \) and \( 1/(c \eta) = \epsilon \), we have
\begin{equation}\label{eqn:transverseField:520}
\BE_z = -\frac{j}{\omega^2 \mu \epsilon - k^2 } \lr{ \omega \mu \lr{ \spacegrad_t \cross \BH_t } + k \lr{ \Be_3 \cross \spacegrad_t } \cross \BE_t },
\end{equation}
and
\begin{equation}\label{eqn:transverseField:540}
\BH_z = \frac{j}{\omega^2 \mu \epsilon - k^2 } \lr{ \omega \epsilon \lr{ \spacegrad_t \cross \BE_t } - k \lr{\Be_3 \cross \spacegrad_t} \cross \BH_t }.
\end{equation}

An absurd COBOL library: 2D Euclidean GA

December 31, 2023 COBOL, math and physics play

I’ve achieved a new pinnacle of obscurity, and have now written a rudimentary COBOL implementation of a geometric algebra library for \( \mathbb{R}^2 \) calculations.

Who will use this?  Absolutely nobody.  Effectively, nobody knows geometric algebra.  Nobody wants to know COBOL, but some do.  The intersection of those two groups is vanishingly small (probably exactly one: argued below.)

I understand that some Opus Dei members have taught themselves COBOL, as looking at COBOL has been found to be as painful as a course of self-flagellation.

Figure 0. A flagellation representation of COBOL.

Assuming that no Opus Dei practitioners know geometric algebra, that means that there is exactly one person in the world that both knows COBOL and geometric algebra.  Me.

Why did I write this little library?  Well, I was tickled to write something so completely stupid, and I’ve been laughing at the absurdity of it. I also thought I might learn a few things about COBOL in the process of trying to use it for something slightly non-trivial.  I’m adept at writing simple test programs that exercise various obscure compiler features, but those are usually fairly small.  On the flip side of complexity, I have to debug through a number of horribly complicated customer programs as part of my compiler validation work.  A simple real life test scenario might run 100+ COBOL programs in a set of CICS transactions, executing thousands of EXEC DLI and EXEC CICS statements as well as all of the rest of the COBOL language statements!  Despite having gained familiarity with COBOL from that sort of observational use, walking through stuff in the debugger doesn’t provide the same level of comfort with the language as writing code from scratch.  Since I have no interest in simulating a boring business application, why not do something just for fun as a learning game.

The compiler I am using does not seem to support object-COBOL (which would have been nicely suited for this project), so I’ve written my little toy in conventional COBOL, using one external procedure for each type of mathematical operation.  In the huge set of customer COBOL code that I’ve examined and done test compilations of, none of it has used object-COBOL.  I am guessing that the object-COBOL community is as large as the user base for my little toy COBOL geometric algebra library will ever be.

I’ve implemented methods to construct multivectors with scalar, vector and pseudoscalar components, or a general multivector with all of the above.  I’ve also implemented multiply, add, subtract, scalar multiplication, grade selection, and a DISPLAY function to write a multivector to SYSOUT (stdout equivalent.)

The multivector “type”

Figure 1 shows my multivector type, implemented in a copybook (include file) named MVI.  I have an alternate MV copybook that doesn’t have the VALUE (initialization) clauses, as you don’t want initialization for LINKAGE-SECTION values (i.e.: program parameters.)

Figure 1. Copybook with multivector declaration and initialization.

If you are wondering what the hell a ‘PIC S9(9) USAGE IS COMP-5’ is, well, that’s the “easy to remember” way to declare a 32-bit signed integer in COBOL.  A COMP-2, on the other hand, is a double precision floating point value.

Figure 2 shows an example of the use of this copybook:

Figure 2. Using the multivector copybook.

Figure 3 shows these two copybook declarations after preprocessor expansion

Figure 3. Multivector global variable examples after preprocessing.

The global variable declarations above are roughly equivalent to the following pseudo C++ code (pretending that we can have anonymous unions that match the COBOL declarations above):

#include <complex>

using complex = std::complex<double>;

struct ga20{
   int grade{};
   union {
      struct { double sc{}; double ps{}; };
      complex g02{};
   };
   union { 
      struct { double x{}; double y{}; };
      complex g1{};
   };
};

ga20 a;
ga20 b;

COBOL is inherently untyped, but requires matching types for CALL parameters, or else all hell ensues, so you have to rely on naming conventions and other mechanisms to enforce the required type equivalences.  In this toy GA library, I’ve used copybooks to enforce the types required for everything.  Global variable declarations like these A-MV and B-MV variables are declared only using a copybook that knows the representation required, and all the uses in sub-programs of the effective -MV “type” use a matching copybook for their declarations.  However, I’ve also made use of the lack of typing to treat A-G02, B-G02, A-G1, and B-G1 as if they were complex numbers, and pass those “variables” off to complex number sub-programs, knowing that I’ve constructed the parameters to those programs in a way that is bit compatible with the MV field values.  You can screw things up really nicely doing stuff like this, especially because all COBOL sub-program parameters are (generally) passed by reference.  If you don’t match up the types right “fun ensues.”

Also observe that the nested level specifiers are optional in COBOL.  For nested fields in C++, we might write a.g1.x.  With a nested variable like this in COBOL, we could write something equivalent to that, like:

A-X OF A-G1 OF A-MV

but we can leave out any of the intermediate “level” specifications if we want.  This gets really confusing in complicated real-life COBOL code.  If you are looking to see where something is modified, you have to not only look for the variable of interest, but also any of the higher level fields, since any of those could have been passed off to other code, which implicitly wrote the value you are interested in.

Here’s what one of these multivectors looks like in memory on my (Linux x86-64) system

(lldb) c
Process 3903259 resuming
Process 3903259 stopped
* thread #10, name = 'GA20', stop reason = breakpoint 7.1
    frame #0: 0x00007fffd9189a02 PJOOT.GA20V01.LOADLIB(MULT).ec73dc4b`MULT at MULT.cob:50:1
   47              CALL GA-MKVECTOR-MODIFY USING C-MV, A-X, A-Y
   48              CALL GA-MKPSEUDO-MODIFY USING D-MV, A-PS
   49  
-> 50              MOVE 'A' TO WS-DISPPARM-N
   51              CALL GA-DISPLAY USING
   52                WS-DISPPARM-N,
   53                A-MV
(lldb) p A-MV
(A-MV) A-MV = {
  A-GRADE = -1
  A-G02 = (A-SC = 1, A-PS = 4)
  A-G1 = (A-X = 2, A-Y = 3)
}

i.e.: this has the value \( 1 + 2 \mathbf{e}_1 + 3 \mathbf{e}_2 + 4 \mathbf{e}_{12} \).

Looking at the multivector in its hex representation:

(lldb) fr v -format x A-MV
(A-MV) A-MV = {
  A-GRADE = 0xffffffff
  A-G02 = {
    A-SC = 0x3ff0000000000000
    A-PS = 0x4010000000000000
  }
  A-G1 = {
    A-X = 0x4000000000000000
    A-Y = 0x4008000000000000
  }
}

we see that the debugger is showing an underlying IEEE floating point representation for the COMP-2 variables in the program as it was compiled.

I have a multivector print routine that prints multivectors to SYSOUT:

Figure 4. Calling the multivector DISPLAY function.

where WS-DISPPARM-N is a PIC X(20).  (i.e.: a fixed size character array.)  Output for the A-MV value showing in the debug session above looks like:

A                     ( .10000000000000000E 01)                                                                         
                    + ( .20000000000000000E 01) e_1 + ( .30000000000000000E 01) e_2                                     
                    + ( .40000000000000000E 01) e_{12}            

End of sentence required for nested IFs?

I encountered a curious language issue in my multivector multiply function.  Here’s an example of how I’ve been coding IF statements

Figure 5. An IF END-IF pair without a period to terminate the sentence.

Notice that I don’t do anything special between the END-IF and the statement that follows it.  However, if I have an IF statement that includes nested IF END-IFs, then it appears that I need a period after the final END-IF, like so:

Figure 6. An IF with nested conditions that seems to require a period to terminate the sentence.

If I don’t include that period after the final END-IF (ending the COBOL sentence), then in some circumstances, I was seeing the program exit after the last interior basic block within this nested IF was executed.  In COBOL parlance, it seems as if a GOBACK (i.e.: return) was implicitly executed once we fell out of the big nested IF.  Why is that period required for a nested IF, but not for a simple IF?

In my copy of “Murach’s Mainframe COBOL”, the author ends ALL IF statements with a period, even simple IFs.  I don’t see a rationale for that in the book anywhere, but it’s a ~700 page book, so perhaps he says why at some point.

I’ve asked our compiler guys if this is a bug or expected behaviour, but I am guessing the latter…. I just don’t know why.

The multiplication kernel for this library

The workhorse of this GA(2,0) implementation is a multivector multiplication operation, which can be implemented in two lines in Mathematica (or C++)

multivector /: multivector[_, m1_, m2_] ** multivector[_, n1_, n2_] := 
   multivector[-1, m1 n1 + Conjugate[m2] n2, n1 m2 + Conjugate[m1] n2 ]
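Expanded into compilable C++, that kernel is still tiny. Here's a sketch (my own naming, not the library's; it uses the complex-pair representation of the multivector described above, and omits the grade-tracking slot of the Mathematica rule):

#include <complex>

using complex = std::complex<double>;

// GA(2,0) multivector M = g02 + e1 g1, where g02 = scalar + e12 pseudoscalar,
// and g1 encodes the vector x e1 + y e2 as the complex number x + e12 y.
struct mv20 {
    complex g02{};  // grades 0,2
    complex g1{};   // grade 1
};

// Geometric product, using e1 z = conj(z) e1 for any even-grade z.
mv20 operator*(const mv20 &m, const mv20 &n) {
    return {
        m.g02 * n.g02 + std::conj(m.g1) * n.g1,
        n.g02 * m.g1 + std::conj(m.g02) * n.g1,
    };
}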

In COBOL, it takes a lot more, and as usual, COBOL verbosity obfuscates things considerably. Here’s the equivalent code in my library:

Figure 7. GA(2,0) multiplication kernel in COBOL.

The library and a little test program.

If you are curious, you can poke around in the code for this library and the test program on github.  The sample/test program is src/MULT.cob, and running the job gives the following SYSOUT:

Figure 8. Sample SYSOUT for MULT.cob