math and physics play

Some line integral examples of the Fundamental theorem of geometric calculus

January 20, 2026 math and physics play

[Click here for a PDF version of this post]

On my discord server, Frank asked about his attempt to demonstrate an example line integral computation of the fundamental theorem of geometric calculus.

Before working through his example, and some others, it is first worth restating the
line integral specialization of the \textit{Fundamental theorem of geometric calculus}:

Theorem 1.1: Fundamental theorem of geometric calculus (line integral version).

Given multivectors \(F, G \), a single variable parameterization \( \Bx = \Bx(u) \), with line element \( d\Bx = du \Bx_u \), \( \Bx_u = \PDi{u}{\Bx} \), \( \boldpartial = \Bx^u \PDi{u}{} \), and \( \Bx^u \cdot \Bx_u = 1 \), then
the line integral is related to the boundary by
\begin{equation*}
\int F d\Bx \boldpartial G = \evalbar{F G}{\Delta u},
\end{equation*}
(with the \( \boldpartial \) acting bidirectionally on \( F, G \).)

It is very important to point out that the derivative operator here is the vector derivative, and not the gradient. Roughly speaking, the vector derivative is the projection of the gradient onto the tangent space. In this case, the tangent space is just the line in the direction \( \Bx_u \), which may vary along the parameterized path.

Here are some examples of one variable parameterizations, all in two dimensions.

  1. \( \Bx = u \Be_1 + y_0 \Be_2 \).
    We compute
    \begin{equation}\label{eqn:lineintegralExamples:20}
    \begin{aligned}
    \Bx_u &= \PD{\Bx}{u} = \Be_1 \\
    \Bx^u &= \Be_1 \\
    d\Bx &= du \Be_1 \\
    \boldpartial &= \Be_1 \PD{u}{}.
    \end{aligned}
    \end{equation}
    and \( d\Bx \boldpartial = du \PDi{u}{} \).
    The fundamental theorem is really just a statement that
    \begin{equation}\label{eqn:lineintegralExamples:40}
    \int \PD{u}{} \lr{ F G } du = \evalbar{ F G }{\Delta u}.
    \end{equation}

  2. \( \Bx = \alpha u \Be_1 + \beta u \Be_2 \), where \( \alpha, \beta \) are constants, i.e., a line again, but not necessarily horizontal this time.
    This time, we compute
    \begin{equation}\label{eqn:lineintegralExamples:60}
    \begin{aligned}
    \Bx_u &= \alpha \Be_1 + \beta \Be_2 \\
    \Bx^u &= \inv{\Bx_u} = \frac{\alpha \Be_1 + \beta \Be_2}{\alpha^2 + \beta^2} \\
    d\Bx &= du \lr{ \alpha \Be_1 + \beta \Be_2 } \\
    \boldpartial &= \inv{\alpha \Be_1 + \beta \Be_2} \PD{u}{}.
    \end{aligned}
    \end{equation}
    Again, we have \( d\Bx \boldpartial = du \PDi{u}{} \), and the story repeats.

  3. \( \Bx = R \Be_1 e^{i\theta}, i = \Be_1 \Be_2 \). This time we are going along a circular arc.

    Let \( \rcap = \Be_1 e^{i\theta} \), and \(\thetacap = \Be_2 e^{i\theta} \). We can compute
    \begin{equation}\label{eqn:lineintegralExamples:80}
    \begin{aligned}
    \Bx_\theta &= R \Be_2 e^{i\theta} = R \thetacap \\
    \Bx^\theta &= \inv{\Bx_\theta} = \inv{ R \Be_2 e^{i\theta} } = \inv{R} \thetacap \\
    d\Bx &= R \, d\theta \, \thetacap \\
    \boldpartial &= \frac{\thetacap}{R} \PD{\theta}{}.
    \end{aligned}
    \end{equation}
    This time, probably to no surprise, we have \( d\Bx \boldpartial = d\theta \PDi{\theta}{} \), so the fundamental theorem for this parameterization is a statement that
    \begin{equation}\label{eqn:lineintegralExamples:100}
    \int \PD{\theta}{} \lr{ F G } d\theta = \evalbar{ F G }{\Delta \theta}.
    \end{equation}

  4. \( \Bx = r e^{i\theta_0} \), where \( \theta_0 \) is a constant. We’ve already computed this above with a Cartesian representation of a line, but can do it again this time with an explicitly radial parameterization. We compute
    \begin{equation}\label{eqn:lineintegralExamples:120}
    \begin{aligned}
    \Bx_r &= \Be_1 e^{i \theta_0} \\
    \Bx^r &= \inv{\Bx_r} = \Be_1 e^{i \theta_0} \\
    d\Bx &= dr \Be_1 e^{i \theta_0} \\
    \boldpartial &= \Be_1 e^{i \theta_0} \PD{r}{}.
    \end{aligned}
    \end{equation}
    This time, \( d\Bx \boldpartial = dr \PDi{r}{} \), and the fundamental theorem for this parameterization is a statement that
    \begin{equation}\label{eqn:lineintegralExamples:140}
    \int \PD{r}{} \lr{ F G } dr = \evalbar{ F G }{\Delta r}.
    \end{equation}
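None of the reciprocal frame claims above need to be taken on faith. Here is a quick numeric sanity check (not part of the original derivation) of the circular arc case, using the standard 2x2 real-matrix representation of the GA of \(\mathbb{R}^2\); the choice of representation, and the sample values of \( R \) and \( \theta \), are assumptions of this sketch.

```python
import math

# 2x2 real matrix representation of the GA of R^2 (an assumption of this
# sketch): e1 -> [[1,0],[0,-1]], e2 -> [[0,1],[1,0]], so i = e1 e2.
def mmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def smul(s, a):
    return [[s * a[i][j] for j in range(2)] for i in range(2)]

E1 = [[1.0, 0.0], [0.0, -1.0]]
E2 = [[0.0, 1.0], [1.0, 0.0]]
I2 = mmul(E1, E2)          # pseudoscalar i = e1 e2

def rotor_exp(theta):
    # e^{i theta} = cos(theta) + i sin(theta), with i = e1 e2
    ident = [[1.0, 0.0], [0.0, 1.0]]
    return madd(smul(math.cos(theta), ident), smul(math.sin(theta), I2))

R, theta = 2.5, 0.7
thetacap = mmul(E2, rotor_exp(theta))   # unit tangent e2 e^{i theta}
x_theta = smul(R, thetacap)             # tangent vector x_theta = R thetacap
x_up = smul(1.0 / R, thetacap)          # claimed reciprocal x^theta = thetacap/R

# x^theta x_theta should be the identity matrix (so x^theta . x_theta = 1)
prod = mmul(x_up, x_theta)
print(prod)
```

The product comes out as the identity matrix to floating point precision, which encodes \( \Bx^\theta \cdot \Bx_\theta = 1 \) and the collapse \( d\Bx \boldpartial = d\theta \PDi{\theta}{} \).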

Observe that we do not get the same result if we use the gradient instead of the vector derivative. We may only make a gradient substitution for the vector derivative when the dimension of the hypervolume integral equals the dimension of the vector space itself. For a line integral that would mean we are restricting the domain of the underlying vector space to \(\mathbb{R}^1\), which isn’t a very interesting case for geometric algebra.

In Frank’s example, he was working with a generating vector space of \(\mathbb{R}^2\), with the horizontal parameterization \( \Bx = u \Be_1 + y_0 \Be_2 \) that we used in the first example (with \( F = 1, G = x y i \), where \( i = \Be_1 \Be_2 \), the pseudoscalar for the space).

Let’s see what happens if we compute a similar integral, but swapping out the vector derivative with the gradient
\begin{equation}\label{eqn:lineintegralExamples:160}
\begin{aligned}
\int d\Bx \spacegrad x y i
&=
\int du \Be_1 \lr{ \Be_1 \partial_x + \Be_2 \partial_y } ( x y i ) \\
&=
\int du \Be_1 \lr{ \Be_1 y + \Be_2 x } i \\
&=
\int du \lr{ y + i x } i \\
&=
\int du \lr{ y_0 + i u } i \\
&=
\lr{\Delta x} y_0 i - \frac{x_1^2}{2} + \frac{x_0^2}{2}.
\end{aligned}
\end{equation}
In addition to the pseudoscalar term that we had when evaluating the fundamental theorem integral, we now have an extra scalar term, a contribution that traces back to the \( y \) component of the gradient. There is nothing wrong with performing such an integral, but it is not an instance of the fundamental theorem, and the same tidy answer should not be expected. In Frank’s original example, he also didn’t put the \( d\Bx \) adjacent to the differential operator, which is required to get the perfect cancellation of the tangent space vectors that we’ve seen in the evaluations above.
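Since the integrand above is even grade (scalar plus pseudoscalar), it can be checked with ordinary complex arithmetic by mapping \( i = \Be_1 \Be_2 \) to the complex imaginary. Here is a little Python sketch (the values of \( x_0, x_1, y_0 \) are arbitrary choices, not from the post) comparing a midpoint-rule evaluation against the closed form:

```python
# Check the gradient line integral result numerically: the even subalgebra of
# the GA of R^2 is isomorphic to the complex numbers, so the integrand
# (y0 + i u) i can be evaluated with complex arithmetic.
x0, x1, y0 = 0.25, 1.75, 0.5   # arbitrary endpoints and height
n = 1000
du = (x1 - x0) / n
total = 0j
for k in range(n):
    u = x0 + (k + 0.5) * du     # midpoint rule; exact for a linear integrand
    total += (y0 + 1j * u) * 1j * du

expected = (x1 - x0) * y0 * 1j - x1**2 / 2 + x0**2 / 2
print(total, expected)
```

The two agree to rounding error, confirming both the pseudoscalar boundary term and the extra scalar term.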

Curl of Curl. Tensor and GA expansion, and GA equivalent identity.

November 12, 2025 math and physics play


In this blog post, we will expand \(\spacegrad \cross \lr{ \spacegrad \cross \Bf } = -\spacegrad^2 \Bf + \spacegrad \lr{ \spacegrad \cdot \Bf } \) in two different ways: using tensor index gymnastics, and using geometric algebra.

The tensor way.

To expand the curl using a tensor expansion, let’s first expand the cross product in coordinates
\begin{equation}\label{eqn:curlcurl2:20}
\begin{aligned}
\Ba \cross \Bb
&=
\lr{ \Be_r \cross \Be_s } a_r b_s \\
&=
\Be_t \cdot \lr{ \Be_r \cross \Be_s } \Be_t a_r b_s \\
&=
\epsilon_{rst} a_r b_s \Be_t.
\end{aligned}
\end{equation}
Here \( \epsilon_{rst} \) is the completely antisymmetric (Levi-Civita) tensor, and allows us to compactly express the geometrical nature of the triple product.

We can then expand the curl of the curl by applying this twice
\begin{equation}\label{eqn:curlcurl2:40}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\epsilon_{rst} \partial_r \lr{ \spacegrad \cross \Bf }_s \Be_t \\
&=
\epsilon_{rst} \partial_r \lr{ \epsilon_{uvw} \partial_u f_v \Be_w }_s \Be_t \\
&=
\epsilon_{rst} \partial_r \epsilon_{uvs} \partial_u f_v \Be_t.
\end{aligned}
\end{equation}

It turns out that there’s a nice identity to reduce the single index contraction of a pair of Levi-Civita tensors.
\begin{equation}\label{eqn:curlcurl2:60}
\epsilon_{abt} \epsilon_{cdt} = \delta_{ac} \delta_{bd} - \delta_{ad} \delta_{bc}.
\end{equation}
To show this, consider the \( t = 1 \) term of this sum \( \epsilon_{ab1} \epsilon_{cd1} \). This is non-zero only for \( a,b,c,d \in \setlr{2,3} \). If \( a,b = c,d \), this is one, and if \( a,b = d,c \), this is minus one. We may summarize that as
\begin{equation}\label{eqn:curlcurl2:80}
\epsilon_{ab1} \epsilon_{cd1} = \delta_{ac} \delta_{bd} - \delta_{ad} \delta_{bc},
\end{equation}
but this holds for \( t = 2,3 \) too, so \ref{eqn:curlcurl2:60} holds generally.
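This contraction identity is also easy to verify by brute force. A short Python check (not part of the original argument) that loops over all 81 index combinations:

```python
# Brute-force check of the Levi-Civita contraction identity
#   eps_{abt} eps_{cdt} = delta_{ac} delta_{bd} - delta_{ad} delta_{bc},
# with indices in {1,2,3} and an implied sum on t.
def eps(i, j, k):
    # Levi-Civita symbol via the sign-of-permutation product formula
    return (j - i) * (k - i) * (k - j) // 2

def delta(i, j):
    return 1 if i == j else 0

for a in range(1, 4):
    for b in range(1, 4):
        for c in range(1, 4):
            for d in range(1, 4):
                lhs = sum(eps(a, b, t) * eps(c, d, t) for t in range(1, 4))
                rhs = delta(a, c) * delta(b, d) - delta(a, d) * delta(b, c)
                assert lhs == rhs, (a, b, c, d)
print("identity verified for all 81 index combinations")
```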

We may now contract the tensors to find
\begin{equation}\label{eqn:curlcurl2:100}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\epsilon_{rst} \epsilon_{uvs} \Be_t \partial_r \partial_u f_v \\
&=
-\epsilon_{rts} \epsilon_{uvs} \Be_t \partial_r \partial_u f_v \\
&=
-\lr{ \delta_{ru} \delta_{tv} - \delta_{rv} \delta_{tu} } \Be_t \partial_r \partial_u f_v \\
&=
- \Be_v \partial_u \partial_u f_v
+ \Be_u \partial_v \partial_u f_v \\
&=
-\spacegrad^2 \Bf + \spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{aligned}
\end{equation}
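If index gymnastics feel error prone, the end result can be spot checked numerically. The following Python sketch (an assumption here is that central differences on a low-order polynomial field are accurate enough, and the test field \( \Bf = \lr{ x^2 y, y^2 z, z^2 x } \) is an arbitrary choice) compares \( \spacegrad \cross \lr{ \spacegrad \cross \Bf } \) against \( -\spacegrad^2 \Bf + \spacegrad \lr{ \spacegrad \cdot \Bf } \):

```python
# Finite-difference spot check of curl(curl f) = -lap f + grad(div f).
h = 1e-3

def f(p):
    x, y, z = p
    return (x * x * y, y * y * z, z * z * x)   # arbitrary polynomial test field

def partial(g, p, i):
    # central difference of the scalar-valued function g along coordinate i
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (g(q1) - g(q2)) / (2 * h)

def curl(g):
    def c(p):
        return (partial(lambda q: g(q)[2], p, 1) - partial(lambda q: g(q)[1], p, 2),
                partial(lambda q: g(q)[0], p, 2) - partial(lambda q: g(q)[2], p, 0),
                partial(lambda q: g(q)[1], p, 0) - partial(lambda q: g(q)[0], p, 1))
    return c

def div(g):
    return lambda p: sum(partial(lambda q, i=i: g(q)[i], p, i) for i in range(3))

def grad(s):
    return lambda p: tuple(partial(s, p, i) for i in range(3))

def lap_component(g, i, p):
    # Laplacian of component i: sum of unmixed second derivatives
    return sum(partial(lambda q, j=j: partial(lambda qq: g(qq)[i], q, j), p, j)
               for j in range(3))

p0 = (0.3, -0.7, 1.1)                          # arbitrary evaluation point
lhs = curl(curl(f))(p0)
gd = grad(div(f))(p0)
rhs = tuple(-lap_component(f, i, p0) + gd[i] for i in range(3))
print(lhs)
print(rhs)
```

Both sides agree to high precision at the sample point.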

Using geometric algebra.

Now let’s pull out the GA toolbox. We start by introducing a no-op grade-1 selection, and using the identity \( \Ba \cross \Bb = -I \lr{ \Ba \wedge \Bb } \)
\begin{equation}\label{eqn:curlcurl2:120}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\gpgradeone{
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
} \\
&=
\gpgradeone{
-I \lr{ \spacegrad \wedge \lr{ \spacegrad \cross \Bf } }
} \\
\end{aligned}
\end{equation}
We can now expand \( \Ba \wedge \Bb = \Ba \Bb - \Ba \cdot \Bb \)
\begin{equation}\label{eqn:curlcurl2:140}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
=
\gpgradeone{
-I \spacegrad \lr{ \spacegrad \cross \Bf }
+I \lr{ \spacegrad \cdot \lr{ \spacegrad \cross \Bf } }
}
\end{equation}
but that dot product is a scalar, leaving just a pseudoscalar, which has a zero grade-1 selection. This leaves
\begin{equation}\label{eqn:curlcurl2:160}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\gpgradeone{
-I \spacegrad \lr{ -I \lr{ \spacegrad \wedge \Bf } }
} \\
&=
-\gpgradeone{
\spacegrad \lr{ \spacegrad \wedge \Bf }
}.
\end{aligned}
\end{equation}
We use \( \Ba \wedge \Bb = \Ba \Bb - \Ba \cdot \Bb \) once more
\begin{equation}\label{eqn:curlcurl2:180}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
-\gpgradeone{
\spacegrad \lr{ \spacegrad \Bf }
-\spacegrad \lr{ \spacegrad \cdot \Bf }
}
\\
&=
-\spacegrad^2 \Bf
+\spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{aligned}
\end{equation}

GA identity.

It’s also worth noting that there’s a natural GA formulation of the curl of a curl. From the Laplacian and divergence relationship that we ended up with, we need only factor out the gradient
\begin{equation}\label{eqn:curlcurl2:200}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
-\spacegrad^2 \Bf +\spacegrad \lr{ \spacegrad \cdot \Bf } \\
&=
-\spacegrad \lr{ \spacegrad \Bf - \spacegrad \cdot \Bf } \\
&=
-\spacegrad \lr{ \spacegrad \wedge \Bf }.
\end{aligned}
\end{equation}
Because \( \spacegrad \wedge \lr{ \spacegrad \wedge \Bf } = 0 \), we may also write this as
\begin{equation}\label{eqn:curlcurl2:220}
\boxed{
\spacegrad \cdot \lr{ \spacegrad \wedge \Bf } = -\spacegrad \cross \lr{ \spacegrad \cross \Bf }.
}
\end{equation}
From the GA LHS, we see by inspection that
\begin{equation}\label{eqn:curlcurl2:240}
\spacegrad \cdot \lr{ \spacegrad \wedge \Bf } = \spacegrad^2 \Bf – \spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{equation}
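This "by inspection" claim is just the observation that, since \( \spacegrad \wedge \lr{ \spacegrad \wedge \Bf } = 0 \), the dot product is the whole geometric product
\begin{equation*}
\spacegrad \cdot \lr{ \spacegrad \wedge \Bf }
= \spacegrad \lr{ \spacegrad \wedge \Bf }
= \spacegrad \lr{ \spacegrad \Bf - \spacegrad \cdot \Bf }
= \spacegrad^2 \Bf - \spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{equation*}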

A fun application of Green’s functions and geometric algebra: Residue calculus

November 2, 2025 math and physics play


Motivation.

A fun application of both Green’s functions and geometric algebra is to show how the Cauchy integral equation can be expressed in terms of the Green’s function for the 2D gradient. This is covered, almost as an aside, in [1]. I found that treatment a bit hard to understand, so I am going to work through it here at my own pace.

Complex numbers in geometric algebra.

Anybody who has studied geometric algebra is likely familiar with a variety of ways to construct complex numbers from geometric objects. For example, complex numbers can be constructed for any plane. If \( \Be_1, \Be_2 \) is a pair of orthonormal vectors for some plane in \(\mathbb{R}^N\), then any vector in that plane,
\begin{equation}\label{eqn:residueGreens:20}
\Bf = \Be_1 u + \Be_2 v,
\end{equation}
has an associated complex representation, obtained by multiplying that vector by one of those basis vectors. For example, if we pre-multiply \( \Bf \) by \( \Be_1 \), forming
\begin{equation}\label{eqn:residueGreens:40}
\begin{aligned}
z
&= \Be_1 \Bf \\
&= \Be_1 \lr{ \Be_1 u + \Be_2 v } \\
&= u + \Be_1 \Be_2 v.
\end{aligned}
\end{equation}

We may identify the unit bivector \( \Be_1 \Be_2 \) as an imaginary, designated by \( i \), since it has the expected behavior
\begin{equation}\label{eqn:residueGreens:60}
\begin{aligned}
i^2 &=
\lr{\Be_1 \Be_2}^2 \\
&=
\lr{\Be_1 \Be_2}
\lr{\Be_1 \Be_2} \\
&=
\Be_1 \lr{\Be_2
\Be_1} \Be_2 \\
&=
-\Be_1 \lr{\Be_1
\Be_2} \Be_2 \\
&=
-\lr{\Be_1 \Be_1}
\lr{\Be_2 \Be_2} \\
&=
-1.
\end{aligned}
\end{equation}

Complex numbers are seen to be isomorphic to even grade multivectors in a planar subspace. The imaginary is the grade-two pseudoscalar, and geometrically is an oriented unit area (bivector.)

The Cauchy-Riemann equations in terms of the gradient.

It is natural to wonder about the geometric algebra equivalents of various complex-number relationships and identities. Of particular interest for this discussion is the geometric algebra equivalent of the Cauchy-Riemann equations that specify the required conditions for a function to be differentiable.

If a complex function \( f(z) = u(z) + i v(z) \) is differentiable, then we must be able to find the limit of
\begin{equation}\label{eqn:residueGreens:80}
\frac{\Delta f(z_0)}{\Delta z} = \frac{f(z_0 + h) - f(z_0)}{h},
\end{equation}
as complex \( h \rightarrow 0 \), along any possible trajectory of \( z_0 + h \) toward \( z_0 \). In particular, for real \( h = \epsilon \),
\begin{equation}\label{eqn:residueGreens:100}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0 + \epsilon, y_0) + i v(x_0 + \epsilon, y_0) - u(x_0, y_0) - i v(x_0, y_0)}{\epsilon}
=
\PD{x}{u(z_0)} + i \PD{x}{v(z_0)},
\end{equation}
and for imaginary \( h = i \epsilon \)
\begin{equation}\label{eqn:residueGreens:120}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0, y_0 + \epsilon) + i v(x_0, y_0 + \epsilon) - u(x_0, y_0) - i v(x_0, y_0)}{i \epsilon}
=
-i\lr{ \PD{y}{u(z_0)} + i \PD{y}{v(z_0)} }.
\end{equation}
Equating real and imaginary parts, we see that existence of the derivative requires
\begin{equation}\label{eqn:residueGreens:140}
\begin{aligned}
\PD{x}{u} &= \PD{y}{v} \\
\PD{x}{v} &= -\PD{y}{u}.
\end{aligned}
\end{equation}
These are the Cauchy-Riemann equations. When the derivative exists in a given neighbourhood, we say that the function is analytic in that region. If we use a bivector interpretation of the imaginary, with \( i = \Be_1 \Be_2 \), the Cauchy-Riemann equations are also satisfied if the gradient of the complex function is zero, since
\begin{equation}\label{eqn:residueGreens:160}
\begin{aligned}
\spacegrad f
&=
\lr{ \Be_1 \partial_x + \Be_2 \partial_y } \lr{ u + \Be_1 \Be_2 v } \\
&=
\Be_1 \lr{ \partial_x u - \partial_y v } + \Be_2 \lr{ \partial_y u + \partial_x v }.
\end{aligned}
\end{equation}
We see that the geometric algebra equivalent of the Cauchy-Riemann equations is simply
\begin{equation}\label{eqn:residueGreens:200}
\spacegrad f = 0.
\end{equation}
Roughly speaking, we may say that a function is analytic in a region if the Cauchy-Riemann equations are satisfied (equivalently, if the gradient is zero) in a neighbourhood of all points in that region.
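Multiplying \( \spacegrad f \) on the left by \( \Be_1 \) shows that, in the complex encoding, the zero gradient condition is equivalent to \( \lr{ \partial_x + i \partial_y } \lr{ u + i v } = 0 \). Here is a quick finite-difference check (not in the original post; the sample functions and evaluation point are arbitrary) on an analytic function, \( z^2 \), and a non-analytic one, \( \bar{z} \):

```python
# For w = u + i v, grad f = 0 corresponds (after a left multiply by e1) to
# (d/dx + i d/dy) w = 0.  Check via central differences.
h = 1e-6

def db(w, z):
    # central-difference approximation of (d/dx + i d/dy) w at z
    dx = (w(z + h) - w(z - h)) / (2 * h)
    dy = (w(z + 1j * h) - w(z - 1j * h)) / (2 * h)
    return dx + 1j * dy

z0 = 0.4 + 0.9j                                 # arbitrary sample point
analytic = db(lambda z: z * z, z0)              # should vanish
not_analytic = db(lambda z: z.conjugate(), z0)  # should not vanish
print(abs(analytic), abs(not_analytic))
```

The analytic case gives zero to rounding error; the conjugate gives exactly 2.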

A special case of the fundamental theorem of geometric calculus.

Given an even grade multivector \( \psi \) (i.e., a complex number) in the geometric algebra of \(\mathbb{R}^2\), we can show that
\begin{equation}\label{eqn:residueGreens:220}
\int_A \spacegrad \psi d^2\Bx = \oint_{\partial A} d\Bx \psi.
\end{equation}
Let’s get an idea why this works by expanding the area integral for a rectangular parameterization
\begin{equation}\label{eqn:residueGreens:240}
\begin{aligned}
\int_A \spacegrad \psi d^2\Bx
&=
\int_A \lr{ \Be_1 \partial_1 + \Be_2 \partial_2 } \psi I dx dy \\
&=
\int \Be_1 I dy \evalrange{\psi}{x_0}{x_1}
+
\int \Be_2 I dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int \Be_2 dy \evalrange{\psi}{x_0}{x_1}
-
\int \Be_1 dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int d\By \evalrange{\psi}{x_0}{x_1}
-
\int d\Bx \evalrange{\psi}{y_0}{y_1}.
\end{aligned}
\end{equation}
We took advantage of the fact that the \(\mathbb{R}^2\) pseudoscalar commutes with \( \psi \). The end result, illustrated in fig. 1, shows pictorially that the remaining integral is an oriented line integral.

fig. 1. Oriented multivector line integral.


If we want to approximate a more general area, we may do so with additional tiles, as illustrated in fig. 2. We may evaluate the area integral using the line integral over just the exterior boundary using such a tiling, as any overlapping opposing boundary contributions cancel exactly.

fig. 2. A crude circular tiling approximation.


The reason that this is interesting is that it allows us to re-express a complex integral as a corresponding multivector area integral. With \( d\Bx = \Be_1 dz \), we have
\begin{equation}\label{eqn:residueGreens:260}
\oint dz\, \psi = \Be_1 \int \spacegrad \psi d^2\Bx.
\end{equation}

The Cauchy kernel as a Green’s function.

We’ve previously derived the Green’s function for the 2D Laplacian, and found
\begin{equation}\label{eqn:residueGreens:280}
\tilde{G}(\Bx, \Bx') = \inv{2\pi} \ln \Abs{\Bx - \Bx'},
\end{equation}
which satisfies
\begin{equation}\label{eqn:residueGreens:300}
\delta^2(\Bx - \Bx') = \spacegrad^2 \tilde{G}(\Bx, \Bx') = \spacegrad \lr{ \spacegrad \tilde{G}(\Bx, \Bx') }.
\end{equation}
This means that \( G(\Bx, \Bx') = \spacegrad \tilde{G}(\Bx, \Bx') \) is the Green’s function for the gradient. That Green’s function is
\begin{equation}\label{eqn:residueGreens:320}
\begin{aligned}
G(\Bx, \Ba)
&= \inv{2 \pi} \frac{\spacegrad \Abs{\Bx - \Ba}}{\Abs{\Bx - \Ba}} \\
&= \inv{2 \pi} \frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2}.
\end{aligned}
\end{equation}
We may cast this Green’s function into complex form with \( z = \Be_1 \Bx, a = \Be_1 \Ba \). In particular
\begin{equation}\label{eqn:residueGreens:340}
\begin{aligned}
\inv{z - a}
&=
\frac{(z - a)^\conj}{\Abs{z - a}^2} \\
&=
\frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2} \Be_1 \\
&=
2 \pi G(\Bx, \Ba) \Be_1.
\end{aligned}
\end{equation}

Cauchy’s integral.

With
\begin{equation}\label{eqn:residueGreens:360}
\psi = \frac{f(z)}{z - a},
\end{equation}
using \ref{eqn:residueGreens:260}, we can now evaluate
\begin{equation}\label{eqn:residueGreens:265}
\begin{aligned}
\oint dz\, \frac{f(z)}{z - a}
&= \Be_1 \int \spacegrad \frac{f(z)}{z - a} d^2\Bx \\
&= \Be_1 \int \lr{ \frac{\spacegrad f(z)}{z - a} + \lr{ \spacegrad \inv{z - a}} f(z) } I dA \\
&= \Be_1 \int f(z) \spacegrad 2 \pi G(\Bx, \Ba) \Be_1 I dA \\
&= 2 \pi \Be_1 \int \delta^2(\Bx - \Ba) \Be_1 f(\Bx) I dA \\
&= 2 \pi \Be_1^2 f(\Ba) I \\
&= 2 \pi I f(a),
\end{aligned}
\end{equation}
where we’ve made use of the analytic condition \( \spacegrad f = 0 \), and the fact that \( f \) and \( 1/(z-a) \), both even multivectors, commute.

The Cauchy integral equation
\begin{equation}\label{eqn:residueGreens:380}
f(a) = \inv{2 \pi I} \oint dz\, \frac{f(z)}{z - a},
\end{equation}
falls out naturally. This sort of residue calculation always seemed a bit miraculous. By introducing a geometric algebra encoding of complex numbers, we get a new and interesting interpretation. In particular,

  1. the imaginary factor in the geometric algebra formulation of this identity is an oriented unit area coming directly from the area element,
  2. the factor of \( 2 \pi \) comes directly from the Green’s function for the gradient,
  3. the fact that this particular form of integral picks up only the contribution at the point \( z = a \) no longer seems mysterious. This is directly due to delta-function filtering.

Also, if we are looking for an understanding of how to generalize the Cauchy integral formula to more general multivector functions, we now have a good clue how that might be done.
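The delta-filtering point of view is easy to check numerically. Here is a small Python verification (the sample function, pole location, and contour are arbitrary choices, not from the post) of the Cauchy integral formula on a circular contour:

```python
import math, cmath

# Numeric check of f(a) = 1/(2 pi i) \oint f(z)/(z - a) dz on a circle.
def cauchy(f, a, center=0j, radius=1.0, n=2000):
    total = 0j
    for k in range(n):
        t = 2 * math.pi * k / n
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * math.pi / n)
        total += f(z) / (z - a) * dz
    return total / (2j * math.pi)

a = 0.3 - 0.2j                         # pole inside the unit circle
approx = cauchy(lambda z: z * z + cmath.exp(z), a)
exact = a * a + cmath.exp(a)
print(approx, exact)
```

The periodic trapezoid rule converges spectrally here, so even a modest number of contour points reproduces \( f(a) \) to near machine precision.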

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

Summary of some gradient related Green’s functions

October 28, 2025 math and physics play


Here is a summary of Green’s functions for a number of gradient-related differential operators (many of which are of interest for electrodynamics, and most of which have been derived in recent blog posts). These Green’s functions all satisfy
\begin{equation}\label{eqn:deltaFunctions:120}
\delta(\Bx - \Bx') = L G(\Bx, \Bx').
\end{equation}

Let \( \Br = \Bx - \Bx' \), \( r = \Norm{\Br} \), \( \mathbf{\hat{r}} = \Br/r \), and \( \tau = t - t' \), then

  1. Gradient operator, \( L = \spacegrad \), in 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:25}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \frac{\mathbf{\hat{r}}}{2} \\
    G\lr{ \Bx, \Bx' } &= \frac{1}{2 \pi} \frac{\mathbf{\hat{r}}}{r} \\
    G\lr{ \Bx, \Bx' } &= \inv{4 \pi} \frac{\mathbf{\hat{r}}}{r^2}.
    \end{aligned}
    \end{equation}

  2. Laplacian operator, \( L = \spacegrad^2 \), in 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:20}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \frac{r}{2} \\
    G\lr{ \Bx, \Bx' } &= \frac{1}{2 \pi} \ln r \\
    G\lr{ \Bx, \Bx' } &= -\frac{1}{4 \pi r}.
    \end{aligned}
    \end{equation}

  3. Second order Helmholtz operator, \( L = \spacegrad^2 + k^2 \) for 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:60}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \pm \frac{1}{2 j k} e^{\pm j k r} \\
    G\lr{ \Bx, \Bx' } &= \frac{1}{4 j} H_0^{(1)}(\pm k r) \\
    G\lr{ \Bx, \Bx' } &= -\frac{1}{4 \pi} \frac{e^{\pm j k r }}{r}.
    \end{aligned}
    \end{equation}

  4. First order Helmholtz operator, \( L = \spacegrad + j k \), in 1D, 2D and 3D respectively

    \begin{equation}\label{eqn:deltaFunctions:80}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \frac{j}{2} \lr{ \mathbf{\hat{r}} \mp 1 } e^{\pm j k r} \\
    G\lr{ \Bx, \Bx' } &= \frac{k}{4} \lr{ \pm j \mathbf{\hat{r}} H_1^{(1)}(\pm k r) - H_0^{(1)}(\pm k r) } \\
    G\lr{ \Bx, \Bx' } &= \frac{e^{\pm j k r}}{4 \pi r} \lr{ jk \lr{ 1 \mp \mathbf{\hat{r}} } + \frac{\mathbf{\hat{r}}}{r} }.
    \end{aligned}
    \end{equation}

    This is also the Green’s function for the left acting operator \( G(\Bx, \Bx') \lr{ -\lspacegrad + j k } = \delta(\Bx - \Bx') \).

  5. Wave equation, \( L = \spacegrad^2 - (1/c^2) \partial_{tt} \), in 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:140}
    \begin{aligned}
    G(\Br, \tau) &= -\frac{c}{2} \Theta( \pm \tau - r/c ) \\
    G(\Br, \tau) &= -\inv{2 \pi \sqrt{ \tau^2 - r^2/c^2 } } \Theta( \pm \tau - r/c ) \\
    G(\Br, \tau) &= -\inv{4 \pi r} \delta( \pm \tau - r/c ),
    \end{aligned}
    \end{equation}
    The positive sign is for the retarded solution, and the negative for the advanced one.

  6. Spacetime gradient \( L = \spacegrad + (1/c) \partial_t \), satisfying \( L G(\Bx - \Bx', t - t') = \delta(\Bx - \Bx') \delta(t - t') \), in 1D, 2D, and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:100}
    \begin{aligned}
    G(\Br, \tau)
    &= \inv{2} \lr{ \mathbf{\hat{r}} \pm 1 } \delta(\pm \tau - r/c) \\
    G(\Br, \tau)
    &=
    \frac{
    \lr{\tau^2 - r^2/c^2}^{-3/2}
    }{2 \pi c^2}
    \lr{
    c \lr{ \mathbf{\hat{r}} \pm 1 }
    \lr{\tau^2 - r^2/c^2}
    \delta(\pm \tau - r/c)
    -\lr{ \Br + c \tau }
    \Theta(\pm \tau - r/c)
    }
    \\
    G(\Br, \tau)
    &= \inv{4 \pi r} \delta(\pm \tau - r/c)
    \lr{
    \frac{\mathbf{\hat{r}}}{r}
    +
    \lr{ \mathbf{\hat{r}} \pm 1} \inv{c} \PD{t'}{}
    }
    \end{aligned}
    \end{equation}
    The plus sign is for the retarded solution, and the minus for the advanced one.

Green’s function for the spacetime gradient (and solution of Maxwell’s equation)

October 28, 2025 math and physics play


Motivation

I’ve been assembling a table of all the Green’s functions that can be used in electrodynamics. There’s one set of those Green’s functions left to fill in, the Green’s functions for the spacetime gradient:
\begin{equation}\label{eqn:spacetimeGradientGreens:20}
\lr{\spacegrad + \inv{c}\PD{t}{}} G(\Bx, \Bx', t, t') = \delta(\Bx - \Bx')\delta(t - t').
\end{equation}
I’d like to compute the retarded and advanced Green’s function for this operator for the 1D, 2D and 3D cases.

In [2] I used the retarded time Green’s function for the spacetime gradient to derive Jefimenko’s equations. However, in retrospect, my handling of that material was sloppy. The starting point was the retarded wave equation Green’s function, but I didn’t even derive it, instead just lazily pointing to other authors who did.
I never actually stated the spacetime gradient Green’s function itself, instead just using a sequence of intermediate results from what would have been that derivation. Even worse, all of that is scattered roughshod across both chapters II and III, as well as the appendix.

The idea.

Suppose that we know the Green’s functions for the wave equation
\begin{equation}\label{eqn:spacetimeGradientGreens:40}
\lr{\spacegrad^2 - \inv{c^2}\frac{\partial^2}{\partial t^2}} G_r(\Bx, \Bx', t, t') = \delta(\Bx - \Bx')\delta(t - t'),
\end{equation}
which factors as
\begin{equation}\label{eqn:spacetimeGradientGreens:60}
\lr{\spacegrad + \inv{c}\frac{\partial}{\partial t}} \lr{\spacegrad - \inv{c}\frac{\partial}{\partial t}} G_r(\Bx, \Bx', t, t') = \delta(\Bx - \Bx')\delta(t - t').
\end{equation}
This means that the Green’s function for the spacetime gradient, a multivector valued entity, satisfying \ref{eqn:spacetimeGradientGreens:20}, is
\begin{equation}\label{eqn:spacetimeGradientGreens:80}
G(\Bx, \Bx', t, t') = \lr{\spacegrad - \inv{c}\frac{\partial}{\partial t}} G_r(\Bx, \Bx', t, t').
\end{equation}
So if we have a Green’s function for the wave equation, it’s just a matter of taking derivatives to figure out the Green’s function for the spacetime gradient.

Why do we care? Recall that the multivector form of Maxwell’s equations is just
\begin{equation}\label{eqn:spacetimeGradientGreens:100}
\lr{\spacegrad + \inv{c}\frac{\partial}{\partial t}} F = J,
\end{equation}
so, if we know the Green’s function for this non-homogeneous problem, we may simply invert this equation for \( F \) with a convolution. This is how we can obtain Jefimenko’s equations in one fell swoop.

Now let’s evaluate these derivatives.

3D case.

Retarded case.

I’m going to start with the 3D retarded case, since I know the answer for that, and at least nominally, have all the composite parts of that derivation at hand. Then we can move on and compute the same for the advanced case, and then the 2D and 1D variants for fun. It’s not clear to me that we necessarily care about the 1D and 2D cases. I can imagine that there are circumstances where weird geometries or constraints force 1D and 2D solutions, but perhaps the 1D and 2D solutions will be academic and not practical.

Recall that the 3D retarded Green’s function for the wave equation was found to be
\begin{equation}\label{eqn:spacetimeGradientGreens:120}
G_r = -\inv{4 \pi r} \delta\lr{ t - t' - r/c },
\end{equation}
where \( \Br = \Bx - \Bx' \), and \( r = \Abs{\Br} \).

Lemma 1.1: Gradient of \(\Abs{\Bx – \Bx’} \).

The gradient of the scalar \( r = \Abs{\Bx - \Bx'} \) is
\begin{equation*}
\spacegrad \Abs{\Bx - \Bx'} = \frac{\Br}{r}.
\end{equation*}
This will be written as \( \spacegrad r = \rcap \), with \( \rcap = \Br/r \).

Start proof:

\begin{equation}\label{eqn:spacetimeGradientGreens:140}
\begin{aligned}
\spacegrad \Abs{\Bx - \Bx'}
&=
\sum_m \Be_m \partial_m \sqrt{ \sum_n (x_n - x_n')^2 } \\
&=
\sum_m \Be_m \inv{2} \frac{ 2 \lr{ x_m - x_m' } }{r} \\
&=
\sum_m \Be_m \frac{x_m - x_m'}{r} \\
&= \frac{\Br}{r}.
\end{aligned}
\end{equation}

End proof.
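This gradient can also be checked by finite differences. A small Python sketch (the sample points are arbitrary choices):

```python
import math

# Finite-difference check that grad |x - x'| = rcap = (x - x')/|x - x'|.
h = 1e-6
xp = (0.2, -0.5, 1.0)      # the fixed point x'
x = (1.3, 0.7, -0.4)       # evaluation point

def r(p):
    return math.sqrt(sum((p[i] - xp[i]) ** 2 for i in range(3)))

grad_r = []
for i in range(3):
    q1, q2 = list(x), list(x)
    q1[i] += h
    q2[i] -= h
    grad_r.append((r(q1) - r(q2)) / (2 * h))

rcap = [(x[i] - xp[i]) / r(x) for i in range(3)]
print(grad_r, rcap)
```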

This means, suppressing the arguments of the delta function, that
\begin{equation}\label{eqn:spacetimeGradientGreens:160}
\begin{aligned}
\lr{ \spacegrad -(1/c) \partial_t } G_r
&= -\inv{4 \pi} \lr{
(\spacegrad r) \frac{\partial_r \delta}{r} + (\spacegrad r) \lr{ -\frac{1}{r^2}}\delta
- \inv{c r} \partial_t \delta
} \\
&= -\inv{4 \pi} \lr{ \frac{\rcap}{r} \partial_r \delta -\frac{\rcap}{r^2} \delta - \inv{c r} \partial_t \delta} \\
&= -\inv{4 \pi r} \lr{ \rcap \partial_r \delta - \frac{\rcap}{r} \delta - \inv{c} \partial_t \delta}.
\end{aligned}
\end{equation}

Lemma 1.2: Derivatives of the delta function.

The derivative of the delta function (with respect to a non-integration variable parameter \( u \)) is
\begin{equation*}
\frac{d}{du} \delta( a u + b - t' ) = a \delta( a u + b - t' ) \frac{d}{dt'},
\end{equation*}
where \( t' \) is the integration parameter for the delta function.

Observe that this is different from the usual identity
\begin{equation}\label{eqn:spacetimeGradientGreens:200}
\frac{d}{dt'} \delta(t') = -\delta(t') \frac{d}{dt'}.
\end{equation}

Start proof:

As usual, we figure out the meaning of these delta function derivatives by their action on a test function in a convolution.
\begin{equation}\label{eqn:spacetimeGradientGreens:220}
\int_{-\infty}^\infty \frac{d}{du} \delta( a u + b - t' ) f(t') dt'.
\end{equation}

Let’s start with a change of variables \( z = a u + b - t' \), for which we find
\begin{equation}\label{eqn:spacetimeGradientGreens:240}
\begin{aligned}
t' &= a u + b - z \\
dz &= - dt' \\
\frac{d}{du} &= \frac{dz}{du} \frac{d}{dz} = a \frac{d}{dz}.
\end{aligned}
\end{equation}

Substitution back into \ref{eqn:spacetimeGradientGreens:220} gives
\begin{equation}\label{eqn:spacetimeGradientGreens:260}
\begin{aligned}
\int_{-\infty}^\infty \frac{d}{du} \delta( a u + b - t' ) f(t') dt'
&=
a \int_{\infty}^{-\infty} \lr{ \frac{d}{dz} \delta( z ) } f( a u + b - z ) (-dz) \\
&=
a \int_{-\infty}^{\infty} \lr{ \frac{d}{dz} \delta( z ) } f( a u + b - z ) dz \\
&=
\evalrange{a \delta(z) f( a u + b - z)}{-\infty}{\infty} \\
&\qquad -
a \int_{-\infty}^{\infty} \delta( z ) \frac{d}{dz} f( a u + b - z ) dz \\
&=
- \evalbar{ a \frac{d}{dz} f( a u + b - z ) }{z = 0} \\
&=
- \evalbar{ a \frac{d}{d(au + b - t')} f( t' ) }{t' = a u + b} \\
&=
+ \evalbar{ a \frac{d}{d(t' -(au + b))} f( t' ) }{t' = a u + b} \\
&=
\evalbar{ a \frac{dt'}{d(t' - (a u + b))} \frac{d}{dt'} f( t' ) }{t' = a u + b} \\
&=
\evalbar{ a \frac{d}{dt'} f( t' ) }{t' = a u + b} \\
&=
\int_{-\infty}^\infty a \delta(a u + b - t') \frac{df(t')}{dt'} dt'.
\end{aligned}
\end{equation}

End proof.
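The lemma can also be illustrated numerically by replacing the delta function with a narrow Gaussian. This Python sketch (the Gaussian width, test function, and constants are all arbitrary choices) compares both sides of the identity:

```python
import math

# Numeric illustration of the lemma with a nascent delta: both sides of
#   d/du \int delta(a u + b - t') f(t') dt' = a \int delta(a u + b - t') f'(t') dt'
# should agree once delta is replaced by a narrow Gaussian delta_eps.
eps = 0.05
a, b, u = -2.0, 0.3, 1.1

def delta_eps(s):
    return math.exp(-s * s / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def f(t):
    return math.sin(t) + 0.5 * t * t

def fp(t):
    return math.cos(t) + t

def quad(g, lo=-10.0, hi=10.0, n=4000):
    dt = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * dt) for k in range(n)) * dt

def smoothed(u):
    return quad(lambda t: delta_eps(a * u + b - t) * f(t))

du = 1e-4
lhs = (smoothed(u + du) - smoothed(u - du)) / (2 * du)   # d/du of the LHS
rhs = a * quad(lambda t: delta_eps(a * u + b - t) * fp(t))
print(lhs, rhs)
```

The two sides agree to the accuracy of the quadrature and the finite difference.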

In particular, this means that
\begin{equation}\label{eqn:spacetimeGradientGreens:280}
\begin{aligned}
\partial_r \delta(t - t' - r/c) &= -\frac{1}{c} \delta(t - t' - r/c) \PD{t'}{} \\
\partial_t \delta(t - t' - r/c) &= \delta(t - t' - r/c) \PD{t'}{}
\end{aligned}
\end{equation}

Application to \ref{eqn:spacetimeGradientGreens:160} gives
\begin{equation}\label{eqn:spacetimeGradientGreens:300}
\begin{aligned}
\lr{ \spacegrad -(1/c) \partial_t } G_r
&=
\inv{4 \pi r} \delta(t - t' - r/c)
\lr{
\frac{\rcap}{r}
+
\lr{ \rcap + 1} \inv{c} \PD{t'}{}
}
\end{aligned}
\end{equation}
With \( t_r = t - r/c \), \ref{eqn:spacetimeGradientGreens:80} is found to be
\begin{equation}\label{eqn:spacetimeGradientGreens:320}
G(\Bx, \Bx', t, t') = \inv{4 \pi r} \delta(t_r - t')
\lr{
\frac{\rcap}{r}
+
\lr{ \rcap + 1} \inv{c} \PD{t_r}{}
}.
\end{equation}

Advanced case.

The advanced Green’s function for the wave equation is
\begin{equation}\label{eqn:spacetimeGradientGreens:340}
G_a(\Bx, \Bx', t, t') = -\inv{4 \pi r} \delta\lr{ t' - t - r/c },
\end{equation}
so with \( t_a = t + r/c \), we must evaluate the delta function derivatives
\begin{equation}\label{eqn:spacetimeGradientGreens:360}
\begin{aligned}
\partial_r \delta\lr{ t' - t - r/c } &= -\inv{c} \delta\lr{ t' - t_a } \frac{d}{dt_a} \\
\partial_t \delta\lr{ t' - t - r/c } &= - \delta\lr{ t' - t_a } \frac{d}{dt_a}.
\end{aligned}
\end{equation}
So the advanced Green’s function for the spacetime gradient is
\begin{equation}\label{eqn:spacetimeGradientGreens:380}
\begin{aligned}
G(\Bx, \Bx’, t, t’)
&= -\inv{4 \pi r} \lr{ \rcap \partial_r \delta – \frac{\rcap}{r} \delta – \inv{c} \partial_t \delta} \\
&= \inv{4 \pi r} \delta\lr{t’ – t_a} \lr{ \frac{\rcap}{r} + \lr{ \rcap – 1} \inv{c} \frac{d}{d t_a}}.
\end{aligned}
\end{equation}

Application: Maxwell's equation.

Let's use this to solve Maxwell's equation. Finding a specific solution is now trivial. The retarded solution is
\begin{equation}\label{eqn:spacetimeGradientGreens:400}
\begin{aligned}
F(\Bx, t)
&= \int dV' dt' \gpgrade{
G(\Bx, \Bx', t, t') J(\Bx', t')
}{1,2} \\
&= \inv{ 4 \pi } \int d^3 \Bx' dt'
\delta(t_r - t')
\gpgrade{
\inv{r}
\lr{
\frac{\rcap}{r}
+
\lr{ \rcap + 1} \inv{c} \PD{t_r}{}
}
J(\Bx', t')
}{1,2} \\
&=
\inv{ 4 \pi } \int d^3 \Bx'
\gpgrade{
\inv{r}
\lr{
\frac{\rcap}{r} J(\Bx', t_r)
+
\lr{ \rcap + 1} \inv{c} J'(\Bx', t_r)
}
}{1,2},
\end{aligned}
\end{equation}
where \( J'(\Bx', t_r) = \PDi{t_r}{J(\Bx', t_r)} \).
Similarly, the advanced solution is
\begin{equation}\label{eqn:spacetimeGradientGreens:520}
F(\Bx, t) =
\inv{ 4 \pi } \int d^3 \Bx'
\gpgrade{
\inv{r}
\lr{
\frac{\rcap}{r} J(\Bx', t_a)
+
\lr{ \rcap - 1} \inv{c} J'(\Bx', t_a)
}
}{1,2},
\end{equation}
where derivatives are with respect to \( t_a \). In general, we are free to form a superposition of both the retarded and advanced solutions, as well as any solution of the homogeneous equation for charge and current free space \( \lr{ \spacegrad + (1/c) \partial_t } F = 0 \).

There's a lot of abstraction baked into these solutions. One is the multivector charge and current density \( J \)
\begin{equation}\label{eqn:spacetimeGradientGreens:420}
J = \eta \lr{ c \rho - \BJ } + I \lr{ c \rho_\txtm - \BM },
\end{equation}
where \( \rho_\txtm, \BM \) are the fictitious magnetic sources that are used in engineering antenna and microwave circuit theory. We can ignore those if we choose. We also have the abstraction of the multivector field \( F = \BE + I \eta \BH = \BE + I c \BB \) itself on the LHS.
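The constant juggling in the reductions that follow leans on the relations \( \eta c = 1/\epsilon \) and \( c/\eta = 1/\mu \), which follow directly from \( \eta = \sqrt{\mu/\epsilon} \) and \( c = 1/\sqrt{\mu \epsilon} \). A quick numeric confirmation (the values of \( \mu, \epsilon \) here are arbitrary, non-physical test constants):

```python
import numpy as np

# eta = sqrt(mu/eps), c = 1/sqrt(mu eps)  =>  eta c = 1/eps, c/eta = 1/mu.
mu, eps = 3.0, 2.0   # arbitrary test constants
c, eta = 1 / np.sqrt(mu * eps), np.sqrt(mu / eps)

assert np.isclose(eta * c, 1 / eps)
assert np.isclose(c / eta, 1 / mu)
print("eta c = 1/eps and c/eta = 1/mu verified")
```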

Let's unpack this solution into its constituent electric and magnetic field components, to see if the result looks more familiar. First note that
\begin{equation}\label{eqn:spacetimeGradientGreens:440}
\begin{aligned}
\gpgrade{\rcap J}{1}
&=
\gpgrade{
\rcap \eta \lr{ c \rho - \BJ } + \rcap I \lr{ c \rho_\txtm - \BM }
}{1} \\
&=
\eta c \rho \rcap
- I \rcap \wedge \BM \\
&=
\frac{\rho}{\epsilon} \rcap
+ \rcap \cross \BM,
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:spacetimeGradientGreens:460}
\begin{aligned}
\gpgrade{\rcap J}{2}
&=
\gpgrade{
\rcap \eta \lr{ c \rho - \BJ } + \rcap I \lr{ c \rho_\txtm - \BM }
}{2} \\
&=
I \lr{
- \eta \rcap \cross \BJ
+ \rcap c \rho_\txtm
} \\
&=
I \eta \lr{
\BJ \cross \rcap
+ \rcap \frac{\rho_\txtm}{\mu}
}.
\end{equation}
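These grade selections are easy to get wrong by a sign, so here is a numeric spot check, using a minimal hand-rolled bitmask implementation of the \( \text{Cl}(3,0) \) geometric product (a sketch, not any particular GA library; all source values are random test data).

```python
import numpy as np

# Minimal Cl(3,0): a multivector is an 8-vector indexed by blade bitmask
# (bit i set => the blade contains e_{i+1}); e.g. index 0b011 is e1 e2.
def reorder_sign(a, b):
    """Sign from sorting the blade product a b into canonical order."""
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s & 1 else 1.0

def gp(x, y):
    """Geometric product (Euclidean metric: every e_i squares to +1)."""
    out = np.zeros(8)
    for a in range(8):
        for b in range(8):
            out[a ^ b] += reorder_sign(a, b) * x[a] * y[b]
    return out

def scal(s):
    m = np.zeros(8); m[0] = s
    return m

def vec(v):
    m = np.zeros(8); m[1], m[2], m[4] = v
    return m

def grade(x, k):
    return np.array([x[a] if bin(a).count("1") == k else 0.0 for a in range(8)])

I = np.zeros(8); I[7] = 1.0   # pseudoscalar e1 e2 e3

rng = np.random.default_rng(1)
eps, mu = 2.0, 3.0            # arbitrary test constants
c, eta = 1 / np.sqrt(mu * eps), np.sqrt(mu / eps)
rho, rho_m = rng.normal(), rng.normal()
Jv, M = rng.normal(size=3), rng.normal(size=3)
rhat = rng.normal(size=3); rhat /= np.linalg.norm(rhat)

# J = eta (c rho - J) + I (c rho_m - M)
J = eta * (scal(c * rho) - vec(Jv)) + gp(I, scal(c * rho_m) - vec(M))
rJ = gp(vec(rhat), J)

# <rhat J>_1 = (rho/eps) rhat + rhat x M
assert np.allclose(grade(rJ, 1), vec(rho / eps * rhat + np.cross(rhat, M)))

# <rhat J>_2 = I eta (J x rhat + rhat rho_m / mu)
assert np.allclose(grade(rJ, 2),
                   eta * gp(I, vec(np.cross(Jv, rhat) + rho_m / mu * rhat)))
print("grade selection identities verified")
```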
Selecting the vector and bivector components of the field \( F = \BE + I \eta \BH \), we have
\begin{equation}\label{eqn:spacetimeGradientGreens:480}
\BE(\Bx, t)
=
\inv{4 \pi \epsilon}
\int d^3 \Bx'
\lr{
\frac{\rho}{r^2} \rcap
+ \frac{\rho'}{c r} \rcap
+ \epsilon \frac{\rcap}{r^2} \cross \BM
+ \frac{\epsilon \rcap}{c r} \cross \BM'
\mp \frac{1}{c^2 r} \BJ'
}
\end{equation}
and
\begin{equation}\label{eqn:spacetimeGradientGreens:500}
\BH(\Bx, t)
=
\inv{4 \pi \mu}
\int d^3 \Bx'
\lr{
\frac{\rho_\txtm}{r^2} \rcap
+ \frac{\rho_\txtm'}{c r} \rcap
+ \mu \BJ \cross \frac{\rcap}{r^2}
+ \mu \BJ' \cross \frac{\rcap}{c r}
\mp \inv{c^2 r} \BM'
},
\end{equation}
where the negative sign is for the retarded solution, with times and derivatives with respect to the retarded time \( t_r = t - \Abs{\Bx - \Bx'}/c \), and the positive sign for the advanced solution, where times are evaluated at the advanced time \( t_a = t + \Abs{\Bx - \Bx'}/c \).
For the retarded case, if we zero the fictitious sources, setting \( \rho_\txtm = 0, \BM = 0 \), these are Jefimenko's equations, as seen in [1]. Griffiths derives them by first solving for the potential functions that satisfy the second order scalar wave equations, and then computing all the derivatives.

1D case.

The Green's function for the 1D spacetime gradient is easy to compute
\begin{equation}\label{eqn:spacetimeGradientGreens:540}
\begin{aligned}
G
&= -\frac{c}{2} \lr{ \spacegrad - \inv{c} \partial_t } \Theta(\pm (t - t') - r/c) \\
&=
-\frac{c}{2} \lr{
-\inv{c} \rcap - \inv{c} (\pm 1)
}
\delta(\pm (t - t') - r/c) \\
&=
\inv{2} \lr{ \rcap \pm 1 } \delta(\pm (t - t') - r/c).
\end{aligned}
\end{equation}
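This computation can be verified mechanically with sympy (assuming it is available), treating \( \rcap \) as a commuting symbol, which is valid here since it only multiplies scalars:

```python
import sympy as sp

# Verify G = -(c/2)(rcap d/dr - (1/c) d/dtau) Theta(s tau - r/c)
#          = (1/2)(rcap + s) delta(s tau - r/c),
# for s = +1 (retarded) and s = -1 (advanced), with tau = t - t'.
r, tau, c, rcap = sp.symbols('r tau c rcap', positive=True)

for s in (1, -1):
    theta = sp.Heaviside(s * tau - r / c)
    G = -c / 2 * (rcap * sp.diff(theta, r) - sp.diff(theta, tau) / c)
    expected = sp.Rational(1, 2) * (rcap + s) * sp.DiracDelta(s * tau - r / c)
    assert sp.simplify(G - expected) == 0
print("1D Green's function verified for both branches")
```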

2D case.

The Green's function for the 2D spacetime gradient is
\begin{equation}\label{eqn:spacetimeGradientGreens:560}
G = -\inv{2 \pi}
\lr{ \spacegrad - \inv{c} \partial_t }
\frac{\Theta(\pm (t - t') - r/c) }{
\sqrt{ \tau^2 - r^2/c^2 }
},
\end{equation}
where \( \tau = t - t' \).

The derivatives of the step function are
\begin{equation}\label{eqn:spacetimeGradientGreens:580}
\begin{aligned}
\lr{ \spacegrad - \inv{c} \partial_t } \Theta(\pm (t - t') - r/c)
&=
\lr{
-\inv{c} \rcap -\inv{c} (\pm 1)
}
\delta(\pm (t - t') - r/c) \\
&=
-\inv{c} \lr{ \rcap \pm 1 }
\delta(\pm \tau - r/c),
\end{aligned}
\end{equation}
and the derivative of the denominator factor is
\begin{equation}\label{eqn:spacetimeGradientGreens:600}
\begin{aligned}
\lr{ \spacegrad - \inv{c} \partial_t }
\lr{(t - t')^2 - r^2/c^2}^{-1/2}
&=
-\inv{2}(2) \lr{ -\inv{c^2} r \rcap -\inv{c} (t - t') }
\lr{(t - t')^2 - r^2/c^2}^{-3/2} \\
&=
\inv{c^2} \lr{ \Br + c \tau }
\lr{\tau^2 - r^2/c^2}^{-3/2},
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:spacetimeGradientGreens:620}
G(r, \tau) =
\frac{
\lr{\tau^2 - r^2/c^2}^{-3/2}
}{2 \pi c^2}
\lr{
c \lr{ \rcap \pm 1 }
\lr{\tau^2 - r^2/c^2}
\delta(\pm \tau - r/c)
-\lr{ \Br + c \tau }
\Theta(\pm \tau - r/c)
}.
\end{equation}
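The 2D assembly involves enough product rule bookkeeping that a symbolic check is comforting. Again treating \( \rcap \) as a commuting symbol (and assuming sympy is available):

```python
import sympy as sp

# Verify the 2D Green's function assembly:
#   G = -(1/(2 pi)) (rcap d/dr - (1/c) d/dtau) Theta(s tau - r/c)/sqrt(tau^2 - r^2/c^2)
# matches the stated closed form, for s = +1 (retarded) and s = -1 (advanced).
r, tau, c, rcap = sp.symbols('r tau c rcap', positive=True)

for s in (1, -1):
    F = sp.Heaviside(s * tau - r / c) / sp.sqrt(tau**2 - r**2 / c**2)
    G = -(rcap * sp.diff(F, r) - sp.diff(F, tau) / c) / (2 * sp.pi)

    q = tau**2 - r**2 / c**2
    expected = q**sp.Rational(-3, 2) / (2 * sp.pi * c**2) * (
        c * (rcap + s) * q * sp.DiracDelta(s * tau - r / c)
        - (r * rcap + c * tau) * sp.Heaviside(s * tau - r / c))
    assert sp.simplify(G - expected) == 0
print("2D Green's function verified for both branches")
```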

References

[1] David J. Griffiths. Introduction to Electrodynamics. Prentice Hall, Upper Saddle River, NJ, 3rd edition, 1999.

[2] Peeter Joot. Geometric Algebra for Electrical Engineers. Kindle Direct Publishing, Toronto, 2019.