Some line integral examples of the Fundamental theorem of geometric calculus

January 20, 2026 math and physics play

[Click here for a PDF version of this post]

On my discord server, Frank asked about his attempt to demonstrate an example line integral computation of the fundamental theorem of geometric calculus.

Before working through his example, and some others, it is first worth restating the
line integral specialization of the \textit{Fundamental theorem of geometric calculus}:

Theorem 1.1: Fundamental theorem of geometric calculus (line integral version).

Given multivectors \(F, G \), a single variable parameterization \( \Bx = \Bx(u) \), with line element \( d\Bx = du \Bx_u \), \( \Bx_u = \PDi{u}{\Bx} \), \( \boldpartial = \Bx^u \PDi{u}{} \), and \( \Bx^u \cdot \Bx_u = 1 \), then
the line integral is related to the boundary by
\begin{equation*}
\int F d\Bx \boldpartial G = \evalbar{F G}{\Delta u},
\end{equation*}
(with the \( \boldpartial \) acting bidirectionally on \( F, G \).)

It is very important to point out that the derivative operator here is the vector derivative, and not the gradient. Roughly speaking, the vector derivative is the projection of the gradient onto the tangent space. In this case, the tangent space is just the line in the direction \( \Bx_u \), which may vary along the parameterized path.

Here are some examples of one variable parameterizations, all in two dimensions:

  1. \( \Bx = u \Be_1 + y_0 \Be_2 \).
    We compute
    \begin{equation}\label{eqn:lineintegralExamples:20}
    \begin{aligned}
    \Bx_u &= \PD{\Bx}{u} = \Be_1 \\
    \Bx^u &= \Be_1 \\
    d\Bx &= du \Be_1 \\
    \boldpartial &= \Be_1 \PD{u}{}.
    \end{aligned}
    \end{equation}
    and \( d\Bx \boldpartial = du\, \PDi{u}{} \).
    The fundamental theorem is really just a statement that
    \begin{equation}\label{eqn:lineintegralExamples:40}
    \int \PD{u}{} \lr{ F G } du = \evalbar{ F G }{\Delta u}.
    \end{equation}

  2. \( \Bx = \alpha u \Be_1 + \beta u \Be_2 \), where \( \alpha, \beta \) are constants, i.e.: a line, but not necessarily horizontal this time.
    This time, we compute
    \begin{equation}\label{eqn:lineintegralExamples:60}
    \begin{aligned}
    \Bx_u &= \alpha \Be_1 + \beta \Be_2 \\
    \Bx^u &= \inv{\Bx_u} = \frac{\alpha \Be_1 + \beta \Be_2}{\alpha^2 + \beta^2} \\
    d\Bx &= du \lr{ \alpha \Be_1 + \beta \Be_2 } \\
    \boldpartial &= \inv{\alpha \Be_1 + \beta \Be_2} \PD{u}{}.
    \end{aligned}
    \end{equation}
    Again, we have \( d\Bx \boldpartial = du\, \PDi{u}{} \), and the story repeats.

  3. \( \Bx = R \Be_1 e^{i\theta}, i = \Be_1 \Be_2 \). This time we are going along a circular arc.

    Let \( \rcap = \Be_1 e^{i\theta} \), and \(\thetacap = \Be_2 e^{i\theta} \). We can compute
    \begin{equation}\label{eqn:lineintegralExamples:80}
    \begin{aligned}
    \Bx_\theta &= R \Be_2 e^{i\theta} = R \thetacap \\
    \Bx^\theta &= \inv{\Bx_\theta} = \inv{ R \Be_2 e^{i\theta} } = \inv{R} \thetacap \\
    d\Bx &= R \, d\theta \, \thetacap \\
    \boldpartial &= \frac{\thetacap}{R} \PD{\theta}{}.
    \end{aligned}
    \end{equation}
    This time, probably to no surprise, we have \( d\Bx \boldpartial = d\theta\, \PDi{\theta}{} \), so the fundamental theorem for this parameterization is a statement that
    \begin{equation}\label{eqn:lineintegralExamples:100}
    \int \PD{\theta}{} \lr{ F G } d\theta = \evalbar{ F G }{\Delta \theta}.
    \end{equation}

  4. \( \Bx = r e^{i\theta_0} \), where \( \theta_0 \) is a constant. We’ve already computed this above with a Cartesian representation of a line, but can do it again this time with an explicitly radial parameterization. We compute
    \begin{equation}\label{eqn:lineintegralExamples:120}
    \begin{aligned}
    \Bx_r &= \Be_1 e^{i \theta_0} \\
    \Bx^r &= \inv{\Bx_r} = \Be_1 e^{i \theta_0} \\
    d\Bx &= dr \Be_1 e^{i \theta_0} \\
    \boldpartial &= \Be_1 e^{i \theta_0} \PD{r}{}.
    \end{aligned}
    \end{equation}
    This time, \( d\Bx \boldpartial = dr\, \PDi{r}{} \), and the fundamental theorem for this parameterization is a statement that
    \begin{equation}\label{eqn:lineintegralExamples:140}
    \int \PD{r}{} \lr{ F G } dr = \evalbar{ F G }{\Delta r}.
    \end{equation}
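Computations like these are easy to spot check numerically. Here is a minimal sketch (my addition, not part of the original discussion), assuming a \( 2 \times 2 \) real matrix representation of the \( \mathbb{R}^2 \) basis vectors, where the geometric product is just the matrix product. It verifies, for the circular arc parameterization, that \( \Bx^\theta = \thetacap/R \), and that \( \Bx^\theta \Bx_\theta = 1 \).

```python
import math

# An assumed 2x2 real matrix representation of the GA(2) basis vectors
# (any faithful representation would do; these matrices are a choice made
# for illustration only).
E1 = [[1.0, 0.0], [0.0, -1.0]]
E2 = [[0.0, 1.0], [1.0, 0.0]]
ID = [[1.0, 0.0], [0.0, 1.0]]

def mm(a, b):  # 2x2 matrix product = geometric product in this representation
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def smul(s, a):
    return [[s * a[i][j] for j in range(2)] for i in range(2)]

def inv(a):  # matrix inverse; for a vector v this coincides with v/|v|^2
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / d, -a[0][1] / d], [-a[1][0] / d, a[0][0] / d]]

I2 = mm(E1, E2)                      # the unit bivector i = e1 e2

def rot(theta):                      # e^{i theta} = cos(theta) + i sin(theta)
    return add(smul(math.cos(theta), ID), smul(math.sin(theta), I2))

R, theta = 2.5, 0.7
thetacap = mm(E2, rot(theta))        # thetahat = e2 e^{i theta}
x_theta = smul(R, thetacap)          # tangent vector x_theta = R thetahat
x_up = inv(x_theta)                  # reciprocal x^theta

# x^theta should equal thetahat / R
expect = smul(1.0 / R, thetacap)
assert all(abs(x_up[i][j] - expect[i][j]) < 1e-12 for i in range(2) for j in range(2))

# and x^theta x_theta = 1 (the identity matrix in this representation)
p = mm(x_up, x_theta)
assert all(abs(p[i][j] - ID[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

Since vectors square to scalars in this representation, the matrix inverse coincides with the multivector inverse \( \Bv^{-1} = \Bv/\Bv^2 \), which is what makes the `inv` helper legitimate here.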

Observe that we do not get the same result if we use the gradient instead of the vector derivative. We may only make a gradient substitution for the vector derivative when the dimension of the hypervolume integral equals the dimension of the vector space itself. For a line integral that would mean we are restricting the domain of the underlying vector space to \(\mathbb{R}^1\), which isn’t a very interesting case for geometric algebra.

In Frank’s example, he was working with a generating vector space of \(\mathbb{R}^2\), with the horizontal parameterization \( \Bx = u \Be_1 + y_0 \Be_2 \) that we used in the first example (with \( F = 1, G = x y i \), where \( i = \Be_1 \Be_2 \), the pseudoscalar for the space).

Let’s see what happens if we compute a similar integral, but swapping out the vector derivative with the gradient
\begin{equation}\label{eqn:lineintegralExamples:160}
\begin{aligned}
\int d\Bx \spacegrad x y i
&=
\int du \Be_1 \lr{ \Be_1 \partial_x + \Be_2 \partial_y } ( x y i ) \\
&=
\int du \Be_1 \lr{ \Be_1 y + \Be_2 x } i \\
&=
\int du \lr{ y + i x } i \\
&=
\int du \lr{ y_0 + i u } i \\
&=
\lr{\Delta x} y_0 i - \frac{x_1^2}{2} + \frac{x_0^2}{2}.
\end{aligned}
\end{equation}
In addition to the pseudoscalar term that we had when evaluating the fundamental theorem integral, this time we have an extra scalar term, a contribution that comes from the \( y \) component of the gradient. There is nothing wrong with performing such an integral, but it’s not an instance of the fundamental theorem, and the same tidy answer should not be expected. In Frank’s original example, he also didn’t put the \( d\Bx \) adjacent to the differential operator, which is required to get the perfect cancellation of the tangent space vectors that we’ve seen in the evaluations above.
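As a numeric cross check of this closed form (my addition), we can collapse the even grade arithmetic to ordinary complex numbers, treating the pseudoscalar \( i = \Be_1 \Be_2 \) as the complex imaginary, and compare a crude midpoint rule evaluation of the integral to the result above. The endpoint values here are arbitrary.

```python
# Midpoint-rule check of: \int du (y0 + i u) i = (Delta x) y0 i - x1^2/2 + x0^2/2
x0, x1, y0 = 0.25, 1.75, 0.5
i = 1j  # the pseudoscalar acts like the complex imaginary on even-grade terms

n = 20000
du = (x1 - x0) / n
total = 0j
for k in range(n):
    u = x0 + (k + 0.5) * du
    total += (y0 + i * u) * i * du

closed = (x1 - x0) * y0 * i - x1**2 / 2 + x0**2 / 2
assert abs(total - closed) < 1e-9
```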

A fun application of Green’s functions and geometric algebra: Residue calculus

November 2, 2025 math and physics play


Motivation.

A fun application of both Green’s functions and geometric algebra is to show how the Cauchy integral equation can be expressed in terms of the Green’s function for the 2D gradient. This is covered, almost as an aside, in [1]. I found that treatment a bit hard to understand, so I am going to work through it here at my own pace.

Complex numbers in geometric algebra.

Anybody who has studied geometric algebra is likely familiar with a variety of ways to construct complex numbers from geometric objects. For example, complex numbers can be constructed for any plane. If \( \Be_1, \Be_2 \) is a pair of orthonormal vectors for some plane in \(\mathbb{R}^N\), then any vector in that plane has the form
\begin{equation}\label{eqn:residueGreens:20}
\Bf = \Be_1 u + \Be_2 v,
\end{equation}
and has an associated complex representation, found by simply multiplying that vector by one of those basis vectors. For example, if we pre-multiply \( \Bf \) by \( \Be_1 \), forming
\begin{equation}\label{eqn:residueGreens:40}
\begin{aligned}
z
&= \Be_1 \Bf \\
&= \Be_1 \lr{ \Be_1 u + \Be_2 v } \\
&= u + \Be_1 \Be_2 v.
\end{aligned}
\end{equation}

We may identify the unit bivector \( \Be_1 \Be_2 \) as an imaginary, designated \( i \), since it has the expected behavior
\begin{equation}\label{eqn:residueGreens:60}
\begin{aligned}
i^2 &=
\lr{\Be_1 \Be_2}^2 \\
&=
\lr{\Be_1 \Be_2}
\lr{\Be_1 \Be_2} \\
&=
\Be_1 \lr{\Be_2
\Be_1} \Be_2 \\
&=
-\Be_1 \lr{\Be_1
\Be_2} \Be_2 \\
&=
-\lr{\Be_1 \Be_1}
\lr{\Be_2 \Be_2} \\
&=
-1.
\end{aligned}
\end{equation}

Complex numbers are seen to be isomorphic to even grade multivectors in a planar subspace. The imaginary is the grade-two pseudoscalar, and geometrically is an oriented unit area (bivector.)
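This isomorphism is easy to exhibit concretely. Here is a minimal sketch (an illustration of my own, assuming one possible \( 2 \times 2 \) real matrix representation of \( \Be_1, \Be_2 \)), checking both \( i^2 = -1 \) and the \( z = \Be_1 \Bf = u + i v \) identification above.

```python
# One faithful 2x2 matrix representation of e1, e2 (an assumed choice)
E1 = [[1, 0], [0, -1]]
E2 = [[0, 1], [1, 0]]
ID = [[1, 0], [0, 1]]

def mm(a, b):  # matrix product = geometric product in this representation
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def lin(*terms):  # linear combination of 2x2 matrices: (coefficient, matrix) pairs
    return [[sum(c * m[i][j] for c, m in terms) for j in range(2)] for i in range(2)]

I2 = mm(E1, E2)                           # the unit bivector i = e1 e2
assert mm(I2, I2) == [[-1, 0], [0, -1]]   # i^2 = -1

u, v = 3, 5
Bf = lin((u, E1), (v, E2))                # Bf = u e1 + v e2
z = mm(E1, Bf)                            # z = e1 Bf
assert z == lin((u, ID), (v, I2))         # z = u + i v
```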

Cauchy-Riemann equations in terms of the gradient.

It is natural to wonder about the geometric algebra equivalents of various complex-number relationships and identities. Of particular interest for this discussion is the geometric algebra equivalent of the Cauchy-Riemann equations that specify the required conditions for a function to be differentiable.

If a complex function \( f(z) = u(z) + i v(z) \) is differentiable, then we must be able to find the limit of
\begin{equation}\label{eqn:residueGreens:80}
\frac{\Delta f(z_0)}{\Delta z} = \frac{f(z_0 + h) - f(z_0)}{h},
\end{equation}
as complex \( h \rightarrow 0 \), along any possible trajectory of \( z_0 + h \) toward \( z_0 \). In particular, for real \( h = \epsilon \),
\begin{equation}\label{eqn:residueGreens:100}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0 + \epsilon, y_0) + i v(x_0 + \epsilon, y_0) - u(x_0, y_0) - i v(x_0, y_0)}{\epsilon}
=
\PD{x}{u(z_0)} + i \PD{x}{v(z_0)},
\end{equation}
and for imaginary \( h = i \epsilon \)
\begin{equation}\label{eqn:residueGreens:120}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0, y_0 + \epsilon) + i v(x_0, y_0 + \epsilon) - u(x_0, y_0) - i v(x_0, y_0)}{i \epsilon}
=
-i\lr{ \PD{y}{u(z_0)} + i \PD{y}{v(z_0)} }.
\end{equation}
Equating real and imaginary parts, we see that existence of the derivative requires
\begin{equation}\label{eqn:residueGreens:140}
\begin{aligned}
\PD{x}{u} &= \PD{y}{v} \\
\PD{x}{v} &= -\PD{y}{u}.
\end{aligned}
\end{equation}
These are the Cauchy-Riemann equations. When the derivative exists in a given neighbourhood, we say that the function is analytic in that region. If we use a bivector interpretation of the imaginary, with \( i = \Be_1 \Be_2 \), the Cauchy-Riemann equations are satisfied exactly when the gradient of the complex function is zero, since
\begin{equation}\label{eqn:residueGreens:160}
\begin{aligned}
\spacegrad f
&=
\lr{ \Be_1 \partial_x + \Be_2 \partial_y } \lr{ u + \Be_1 \Be_2 v } \\
&=
\Be_1 \lr{ \partial_x u - \partial_y v } + \Be_2 \lr{ \partial_y u + \partial_x v }.
\end{aligned}
\end{equation}
We see that the geometric algebra equivalent of the Cauchy-Riemann equations is simply
\begin{equation}\label{eqn:residueGreens:200}
\spacegrad f = 0.
\end{equation}
Roughly speaking, we may say that a function is analytic in a region if the Cauchy-Riemann equations are satisfied (equivalently, if the gradient is zero) in a neighbourhood of every point in that region.
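As a numeric illustration (not part of the original derivation), finite differences confirm that an analytic function kills both components of the gradient, while a non-analytic one does not. The sample points and functions are arbitrary.

```python
def grad_f_components(f, z, h=1e-6):
    # central finite differences of f at z = x + i y
    fx = (f(z + h) - f(z - h)) / (2 * h)            # df/dx = u_x + i v_x
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # df/dy = u_y + i v_y
    return fx.real, fy.real, fx.imag, fy.imag       # u_x, u_y, v_x, v_y

z0 = 0.4 + 0.3j

# analytic: f(z) = z^2, so e1 (u_x - v_y) + e2 (u_y + v_x) should vanish
ux, uy, vx, vy = grad_f_components(lambda z: z * z, z0)
assert abs(ux - vy) < 1e-6 and abs(uy + vx) < 1e-6

# non-analytic: f(z) = conj(z) violates the Cauchy-Riemann conditions
ux, uy, vx, vy = grad_f_components(lambda z: z.conjugate(), z0)
assert abs(ux - vy) > 1.0
```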

A special case of the fundamental theorem of geometric calculus.

Given an even grade multivector \( \psi \) in the geometric algebra of \(\mathbb{R}^2\) (i.e.: a complex number), we can show that
\begin{equation}\label{eqn:residueGreens:220}
\int_A \spacegrad \psi d^2\Bx = \oint_{\partial A} d\Bx \psi.
\end{equation}
Let’s get an idea why this works by expanding the area integral for a rectangular parameterization
\begin{equation}\label{eqn:residueGreens:240}
\begin{aligned}
\int_A \spacegrad \psi d^2\Bx
&=
\int_A \lr{ \Be_1 \partial_1 + \Be_2 \partial_2 } \psi I dx dy \\
&=
\int \Be_1 I dy \evalrange{\psi}{x_0}{x_1}
+
\int \Be_2 I dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int \Be_2 dy \evalrange{\psi}{x_0}{x_1}
-
\int \Be_1 dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int d\By \evalrange{\psi}{x_0}{x_1}
-
\int d\Bx \evalrange{\psi}{y_0}{y_1}.
\end{aligned}
\end{equation}
We took advantage of the fact that the \(\mathbb{R}^2\) pseudoscalar commutes with \( \psi \). The end result, illustrated in fig. 1, shows pictorially that the remaining integral is an oriented line integral.

fig. 1. Oriented multivector line integral.

 

If we want to approximate a more general area, we may do so with additional tiles, as illustrated in fig. 2. With such a tiling, we may evaluate the area integral using the line integral over just the exterior boundary, as any overlapping opposing boundary contributions cancel exactly.

fig. 2. A crude circular tiling approximation.

 

The reason that this is interesting is that it allows us to re-express a complex integral as a corresponding multivector area integral. With \( d\Bx = \Be_1 dz \), we have
\begin{equation}\label{eqn:residueGreens:260}
\oint dz\, \psi = \Be_1 \int \spacegrad \psi d^2\Bx.
\end{equation}
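A quick numeric check of this relationship (my own addition): for analytic \( \psi \) the gradient is zero, so the area integral, and therefore the loop integral, should vanish, while a non-analytic integrand need not vanish. Here is a crude midpoint discretization of the unit circle contour.

```python
import cmath, math

def loop_integral(psi, n=20000, radius=1.0):
    # midpoint discretization of a circular contour: sum psi(z_mid) * dz
    total = 0j
    for k in range(n):
        t0 = 2 * math.pi * k / n
        t1 = 2 * math.pi * (k + 1) / n
        zm = radius * cmath.exp(1j * (t0 + t1) / 2)
        dz = radius * (cmath.exp(1j * t1) - cmath.exp(1j * t0))
        total += psi(zm) * dz
    return total

# analytic psi (grad psi = 0) => the loop integral vanishes
assert abs(loop_integral(lambda z: z * z + 3 * z + 1)) < 1e-6

# a non-analytic psi need not vanish: loop of conj(z) around the unit circle is 2 pi i
val = loop_integral(lambda z: z.conjugate())
assert abs(val - 2j * math.pi) < 1e-6
```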

The Cauchy kernel as a Green’s function.

We’ve previously derived the Green’s function for the 2D Laplacian, and found
\begin{equation}\label{eqn:residueGreens:280}
\tilde{G}(\Bx, \Bx') = \inv{2\pi} \ln \Abs{\Bx - \Bx'},
\end{equation}
which satisfies
\begin{equation}\label{eqn:residueGreens:300}
\delta^2(\Bx - \Bx') = \spacegrad^2 \tilde{G}(\Bx, \Bx') = \spacegrad \lr{ \spacegrad \tilde{G}(\Bx, \Bx') }.
\end{equation}
This means that \( G(\Bx, \Bx') = \spacegrad \tilde{G}(\Bx, \Bx') \) is the Green’s function for the gradient. That Green’s function is
\begin{equation}\label{eqn:residueGreens:320}
\begin{aligned}
G(\Bx, \Ba)
&= \inv{2 \pi} \frac{\spacegrad \Abs{\Bx - \Ba}}{\Abs{\Bx - \Ba}} \\
&= \inv{2 \pi} \frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2}.
\end{aligned}
\end{equation}
We may cast this Green’s function into complex form with \( z = \Be_1 \Bx, a = \Be_1 \Ba \). In particular
\begin{equation}\label{eqn:residueGreens:340}
\begin{aligned}
\inv{z - a}
&=
\frac{(z - a)^\conj}{\Abs{z - a}^2} \\
&=
\frac{\lr{\Bx - \Ba} \Be_1}{\Abs{\Bx - \Ba}^2} \\
&=
2 \pi G(\Bx, \Ba) \Be_1.
\end{aligned}
\end{equation}
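In component form, with \( z - a = \Delta x + i \Delta y \), this identification says \( 1/(z - a) = (\Delta x - i \Delta y)/(\Delta x^2 + \Delta y^2) \), since \( \lr{\Bx - \Ba} \Be_1 = \Delta x - i \Delta y \) in this encoding. That is easy to spot check numerically (sample points arbitrary):

```python
# check 1/(z - a) = 2 pi G(x, a) e1 in the complex encoding:
# 1/(z - a) should equal (dx - i dy) / (dx^2 + dy^2)
a = 0.3 + 0.7j
for z in (1.1 + 0.2j, -0.4 + 2.0j, 2.0 - 1.5j):
    dx, dy = (z - a).real, (z - a).imag
    r2 = dx * dx + dy * dy
    assert abs(1 / (z - a) - complex(dx, -dy) / r2) < 1e-12
```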

Cauchy’s integral.

With
\begin{equation}\label{eqn:residueGreens:360}
\psi = \frac{f(z)}{z - a},
\end{equation}
using \ref{eqn:residueGreens:260}, we can now evaluate
\begin{equation}\label{eqn:residueGreens:265}
\begin{aligned}
\oint dz\, \frac{f(z)}{z - a}
&= \Be_1 \int \spacegrad \frac{f(z)}{z - a} d^2\Bx \\
&= \Be_1 \int \lr{ \frac{\spacegrad f(z)}{z - a} + \lr{ \spacegrad \inv{z - a}} f(z) } I dA \\
&= \Be_1 \int 2 \pi \lr{ \spacegrad G(\Bx, \Ba) } \Be_1 f(z) I dA \\
&= 2 \pi \Be_1 \int \delta^2(\Bx - \Ba) \Be_1 f(\Bx) I dA \\
&= 2 \pi \Be_1^2 f(\Ba) I \\
&= 2 \pi I f(a),
\end{aligned}
\end{equation}
where we’ve made use of the analytic condition \( \spacegrad f = 0 \), and the fact that \( f \) and \( 1/(z-a) \), both even multivectors, commute.

The Cauchy integral equation
\begin{equation}\label{eqn:residueGreens:380}
f(a) = \inv{2 \pi I} \oint dz\, \frac{f(z)}{z - a},
\end{equation}
falls out naturally. This sort of residue calculation always seemed a bit miraculous. By introducing a geometric algebra encoding of complex numbers, we get a new and interesting interpretation. In particular,

  1. the imaginary factor in the geometric algebra formulation of this identity is an oriented unit area coming directly from the area element,
  2. the factor of \( 2 \pi \) comes directly from the Green’s function for the gradient,
  3. the fact that this particular form of integral picks up only the contribution at the point \( z = a \) is no longer mysterious; it follows directly from delta function filtering.

Also, if we are looking for an understanding of how to generalize the Cauchy integral equation to more general multivector functions, we now also have a good clue how that would be done.
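Here is a numeric sanity check of the integral formula (a sketch of my own, using a simple midpoint discretization of a unit circle contour; the helper name `cauchy` and the function choices are arbitrary):

```python
import cmath, math

def cauchy(f, a, n=20000, radius=1.0, center=0j):
    # (1 / 2 pi i) * contour integral of f(z) / (z - a) around a circle
    total = 0j
    for k in range(n):
        t0 = 2 * math.pi * k / n
        t1 = 2 * math.pi * (k + 1) / n
        zm = center + radius * cmath.exp(1j * (t0 + t1) / 2)
        dz = radius * (cmath.exp(1j * t1) - cmath.exp(1j * t0))
        total += f(zm) / (zm - a) * dz
    return total / (2j * math.pi)

a = 0.3 + 0.2j   # a point inside the unit circle
assert abs(cauchy(cmath.exp, a) - cmath.exp(a)) < 1e-6
assert abs(cauchy(lambda z: z * z, a) - a * a) < 1e-6
```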

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

Summary of some gradient related Green’s functions

October 28, 2025 math and physics play


Here is a summary of Green’s functions for a number of gradient related differential operators (many of which are of interest for electrodynamics, and most of them have been derived recently in blog posts.) These Green’s functions all satisfy
\begin{equation}\label{eqn:deltaFunctions:120}
\delta(\Bx - \Bx') = L G(\Bx, \Bx').
\end{equation}

Let \( \Br = \Bx - \Bx' \), \( r = \Norm{\Br} \), \( \mathbf{\hat{r}} = \Br/r \), and \( \tau = t - t' \), then

  1. Gradient operator, \( L = \spacegrad \), in 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:25}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \frac{\mathbf{\hat{r}}}{2} \\
    G\lr{ \Bx, \Bx' } &= \frac{1}{2 \pi} \frac{\mathbf{\hat{r}}}{r} \\
    G\lr{ \Bx, \Bx' } &= \inv{4 \pi} \frac{\mathbf{\hat{r}}}{r^2}.
    \end{aligned}
    \end{equation}

  2. Laplacian operator, \( L = \spacegrad^2 \), in 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:20}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \frac{r}{2} \\
    G\lr{ \Bx, \Bx' } &= \frac{1}{2 \pi} \ln r \\
    G\lr{ \Bx, \Bx' } &= -\frac{1}{4 \pi r}.
    \end{aligned}
    \end{equation}

  3. Second order Helmholtz operator, \( L = \spacegrad^2 + k^2 \) for 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:60}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \pm \frac{1}{2 j k} e^{\pm j k r} \\
    G\lr{ \Bx, \Bx' } &= \frac{1}{4 j} H_0^{(1)}(\pm k r) \\
    G\lr{ \Bx, \Bx' } &= -\frac{1}{4 \pi} \frac{e^{\pm j k r }}{r}.
    \end{aligned}
    \end{equation}

  4. First order Helmholtz operator, \( L = \spacegrad + j k \), in 1D, 2D and 3D respectively

    \begin{equation}\label{eqn:deltaFunctions:80}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \frac{j}{2} \lr{ \mathbf{\hat{r}} \mp 1 } e^{\pm j k r} \\
    G\lr{ \Bx, \Bx' } &= \frac{k}{4} \lr{ \pm j \mathbf{\hat{r}} H_1^{(1)}(\pm k r) - H_0^{(1)}(\pm k r) } \\
    G\lr{ \Bx, \Bx' } &= \frac{e^{\pm j k r}}{4 \pi r} \lr{ jk \lr{ 1 \mp \mathbf{\hat{r}} } + \frac{\mathbf{\hat{r}}}{r} }.
    \end{aligned}
    \end{equation}

    This is also the Green’s function for a left acting operator \( G(\Bx, \Bx') \lr{ -\lspacegrad + j k } = \delta(\Bx - \Bx') \).

  5. Wave equation, \( L = \spacegrad^2 - (1/c^2) \partial_{tt} \), in 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:140}
    \begin{aligned}
    G(\Br, \tau) &= -\frac{c}{2} \Theta( \pm \tau - r/c ) \\
    G(\Br, \tau) &= -\inv{2 \pi \sqrt{ \tau^2 - r^2/c^2 } } \Theta( \pm \tau - r/c ) \\
    G(\Br, \tau) &= -\inv{4 \pi r} \delta( \pm \tau - r/c ),
    \end{aligned}
    \end{equation}
    The positive sign is for the retarded solution, and the negative for the advanced solution.

  6. Spacetime gradient \( L = \spacegrad + (1/c) \partial_t \), satisfying \( L G(\Bx - \Bx', t - t') = \delta(\Bx - \Bx') \delta(t - t') \), in 1D, 2D, and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:100}
    \begin{aligned}
    G(\Br, \tau)
    &= \inv{2} \lr{ \mathbf{\hat{r}} \pm 1 } \delta(\pm \tau - r/c) \\
    G(\Br, \tau)
    &=
    \frac{
    \lr{\tau^2 - r^2/c^2}^{-3/2}
    }{2 \pi c^2}
    \lr{
    c \lr{ \mathbf{\hat{r}} \pm 1 }
    \lr{\tau^2 - r^2/c^2}
    \delta(\pm \tau - r/c)
    -\lr{ \Br + c \tau }
    \Theta(\pm \tau - r/c)
    }
    \\
    G(\Br, \tau)
    &= \inv{4 \pi r} \delta(\pm \tau - r/c)
    \lr{
    \frac{\mathbf{\hat{r}}}{r}
    +
    \lr{ \mathbf{\hat{r}} \pm 1} \inv{c} \PD{t'}{}
    }
    \end{aligned}
    \end{equation}
    The plus sign is for the retarded solution, and the negative for the advanced solution.
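As one spot check of this table (an illustration of my own, not a derivation), the 1D Laplacian Green’s function \( G = r/2 \) can be verified by convolving it against a known second derivative, which should recover the original function. Here \( f(x) = e^{-x^2} \), so \( f''(x) = (4x^2 - 2)e^{-x^2} \), and \( \int G(x, x') f''(x')\, dx' \) should reproduce \( f(x) \) for rapidly decaying \( f \).

```python
import math

def f(x):   return math.exp(-x * x)
def fpp(x): return (4 * x * x - 2) * math.exp(-x * x)

def G1(x, xp):  # 1D Green's function for the Laplacian: |x - x'| / 2
    return abs(x - xp) / 2

x0 = 0.4
lo, hi, n = -12.0, 12.0, 100000
h = (hi - lo) / n

# midpoint-rule convolution: int G(x0, x') f''(x') dx'
total = sum(G1(x0, lo + (k + 0.5) * h) * fpp(lo + (k + 0.5) * h) for k in range(n)) * h
assert abs(total - f(x0)) < 1e-4
```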

Green’s function for the wave equation: 1D and 2D cases.

October 4, 2025 math and physics play


The Green’s function(s) \( G(\Br, \tau) \) for the 3D wave equation
\begin{equation}\label{eqn:waveEquationGreens:40}
\lr{ \spacegrad^2 – \inv{c^2}\frac{\partial^2}{\partial t^2} } G(\Br, \tau) = \delta(\Br) \delta(\tau),
\end{equation}
where
\begin{equation}\label{eqn:waveEquationGreens:20}
\begin{aligned}
\Br &= \Bx - \Bx' \\
r &= \Abs{\Br} \\
\tau &= t - t',
\end{aligned}
\end{equation}
is
\begin{equation}\label{eqn:waveEquationGreens:60}
G(\Br, \tau) = -\inv{4 \pi r} \delta( \pm \tau - r/c ).
\end{equation}
Here the positive case is the retarded solution, and the negative the advanced solution. The derivation of these Green’s functions can be found in many places, including [1], [2], and [3].

I wasn’t familiar with the 1D and 2D Green’s functions for the wave equation. Grok says they are, respectively
\begin{equation}\label{eqn:waveEquationGreens:80}
\begin{aligned}
G(\Br, \tau) &= -\frac{c}{2} \Theta( \pm \tau - r/c ) \\
G(\Br, \tau) &= -\inv{2 \pi \sqrt{ \tau^2 - r^2/c^2 } } \Theta( \pm \tau - r/c ).
\end{aligned}
\end{equation}
I thought that I’d attempt to verify these, instead of deriving them, at least for the time being. For the 1D case, this turns out to be fairly straightforward. Perhaps unexpectedly, that isn’t true for the 2D case, and I’ll have to revisit that case in other ways. In this post, I’ll show the verification of the 1D Green’s function, and my partial attempt to verify the 2D case.

1D Green’s function verification.

We will use the Heaviside theta representation of the absolute value.
\begin{equation}\label{eqn:waveEquationGreens:100}
\Abs{x} = x \Theta(x) - x \Theta(-x).
\end{equation}
Recall that the derivative of the absolute value function is a sign function
\begin{equation}\label{eqn:waveEquationGreens:120}
\begin{aligned}
\Abs{x}'
&= \Theta(x) - \Theta(-x) + x \delta(x) + x \delta(-x) \\
&= \Theta(x) - \Theta(-x) + 2 x \delta(x) \\
&= \Theta(x) - \Theta(-x) \\
&= \textrm{sgn}(x),
\end{aligned}
\end{equation}
where \( x \delta(x) \) is zero in a distributional sense (zero if applied to a test function.) The sign function, in turn, has a doubled delta function derivative
\begin{equation}\label{eqn:waveEquationGreens:140}
\begin{aligned}
\textrm{sgn}(x)'
&= \Theta(x)' - \Theta(-x)' \\
&= \delta(x) + \delta(-x) \\
&= 2 \delta(x).
\end{aligned}
\end{equation}

Now let’s evaluate the \( x \) partials.
\begin{equation}\label{eqn:waveEquationGreens:160}
\begin{aligned}
\PD{x}{} \Theta(\tau - r/c)
&=
-\inv{c} \delta\lr{ \tau - r/c } \PD{x}{} \Abs{x - x'} \\
&=
-\inv{c} \delta\lr{ \tau - r/c } \textrm{sgn}(x - x').
\end{aligned}
\end{equation}
The second derivative is
\begin{equation}\label{eqn:waveEquationGreens:180}
\begin{aligned}
\frac{\partial^2}{\partial x^2} \Theta(\tau - r/c)
&=
-\inv{c}
\lr{
-\inv{c} \delta'\lr{ \tau - r/c } (\textrm{sgn}(x - x'))^2
+
\delta\lr{ \tau - r/c } 2 \delta(x - x')
} \\
&=
\inv{c^2} \delta'\lr{ \tau - r/c } - \frac{2}{c} \delta\lr{ \tau} \delta(x - x').
\end{aligned}
\end{equation}
The transformation above from \( \delta\lr{ \tau - r/c } \rightarrow \delta(\tau) \) is because the spatial delta function \( \delta(x - x') \) is zero unless \( x = x' \), and \( r = 0 \) at that point.

The time derivatives are easier to compute
\begin{equation}\label{eqn:waveEquationGreens:200}
\begin{aligned}
\frac{\partial^2}{\partial t^2} \Theta(\tau - r/c)
&=
\PD{t}{} \delta(\tau - r/c) \\
&=
\delta'(\tau - r/c).
\end{aligned}
\end{equation}

Putting the pieces together, we have
\begin{equation}\label{eqn:waveEquationGreens:220}
\begin{aligned}
\lr{ \spacegrad^2 - \inv{c^2}\frac{\partial^2}{\partial t^2} } \Theta(\tau - r/c)
&=
\inv{c^2} \delta'\lr{ \tau - r/c } - \frac{2}{c} \delta\lr{ \tau} \delta(x - x')
- \inv{c^2} \delta'(\tau - r/c)
\\
&=
- \frac{2}{c} \delta\lr{ \tau} \delta(x - x').
\end{aligned}
\end{equation}
Dividing through by \( -2/c \) gives us
\begin{equation}\label{eqn:waveEquationGreens:240}
\lr{ \spacegrad^2 - \inv{c^2}\frac{\partial^2}{\partial t^2} } G(\Bx - \Bx', t - t') = \delta\lr{t - t'} \delta\lr{\Bx - \Bx'},
\end{equation}
as desired. The \( \delta \) derivative terms can be given meaning, but they conveniently cancel out, so we don’t have to think about that this time.

It’s easy to see that the advanced Green’s function has the same behaviour, since the two time partials will bring down a factor of \( (\pm 1)^2 = 1 \) in general, which does not change anything above.

Attempted verification of the claimed 2D Green’s function.

Now let’s try to verify Grok’s claim for the 2D Green’s function, starting with a few helpful side calculations.

\begin{equation}\label{eqn:waveEquationGreens:260}
\begin{aligned}
\spacegrad r
&= \sum_m \Be_m \partial_m \sqrt{ \sum_n \lr{x_n - x_n'}^2 } \\
&= \inv{2} 2 \frac{\Bx - \Bx'}{\Abs{\Bx - \Bx'}} \\
&= \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:280}
\begin{aligned}
\spacegrad \lr{ \tau^2 - r^2/c^2 }^{-1/2}
&=
-\inv{2} \lr{ \tau^2 - r^2/c^2 }^{-3/2} \lr{-\frac{2 r}{c^2}} \spacegrad r \\
&=
-\inv{2} \lr{ \tau^2 - r^2/c^2 }^{-3/2} \lr{-\frac{2 r}{c^2}} \rcap \\
&=
\frac{r}{c^2} \lr{ \tau^2 - r^2/c^2 }^{-3/2} \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:300}
\begin{aligned}
\spacegrad \lr{ \tau^2 - r^2/c^2 }^{-3/2}
&=
-\frac{3}{2} \lr{ \tau^2 - r^2/c^2 }^{-5/2} \lr{-\frac{2 r}{c^2}} \spacegrad r \\
&=
-\frac{3}{2} \lr{ \tau^2 - r^2/c^2 }^{-5/2} \lr{-\frac{2 r}{c^2}} \rcap \\
&=
\frac{3 r}{c^2} \lr{ \tau^2 - r^2/c^2 }^{-5/2} \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:320}
\begin{aligned}
\spacegrad \Theta\lr{ \pm \tau - r/c }
&=
-\inv{c} \delta\lr{ \pm \tau - r/c } \spacegrad r \\
&=
-\inv{c} \delta\lr{ \pm \tau - r/c } \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:340}
\begin{aligned}
\spacegrad \delta\lr{ \pm \tau - r/c }
&=
-\inv{c} \delta'\lr{ \pm \tau - r/c } \spacegrad r \\
&=
-\inv{c} \delta'\lr{ \pm \tau - r/c } \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:360}
\begin{aligned}
\spacegrad \cdot \rcap
&=
\spacegrad \cdot \frac{\Bx - \Bx'}{r} \\
&=
\inv{r} \spacegrad \cdot \lr{\Bx - \Bx'} + \lr{\Bx - \Bx'} \cdot \spacegrad \inv{r} \\
&=
\frac{2}{r} + \lr{\Bx - \Bx'} \cdot \lr{ -\inv{r^2} \rcap } \\
&=
\frac{2}{r} - \inv{r} \\
&=
\frac{1}{r}.
\end{aligned}
\end{equation}
In summary, with \( X = \tau^2 - r^2/c^2 \)
\begin{equation}\label{eqn:waveEquationGreens:540}
\begin{aligned}
\spacegrad r &= \rcap \\
\spacegrad X^{-1/2} &= \inv{c^2} r \rcap X^{-3/2} \\
\spacegrad X^{-3/2} &= \inv{c^2} 3 r \rcap X^{-5/2} \\
\spacegrad \Theta &= -\inv{c} \delta \rcap \\
\spacegrad \delta &= -\inv{c} \rcap \delta' \\
\spacegrad \cdot \rcap &= \frac{1}{r}.
\end{aligned}
\end{equation}
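The last of these, \( \spacegrad \cdot \rcap = 1/r \) in 2D, is easy to spot check with finite differences (a numeric aside of my own; the sample points are arbitrary):

```python
import math

xp, yp = 0.3, -0.2       # arbitrary location of x'

def rcap(x, y):          # unit vector (x - x')/|x - x'| in 2D, by components
    dx, dy = x - xp, y - yp
    r = math.hypot(dx, dy)
    return dx / r, dy / r

x, y = 1.1, 0.7
h = 1e-5
# central-difference divergence of rcap
div = (rcap(x + h, y)[0] - rcap(x - h, y)[0]) / (2 * h) \
    + (rcap(x, y + h)[1] - rcap(x, y - h)[1]) / (2 * h)

r = math.hypot(x - xp, y - yp)
assert abs(div - 1 / r) < 1e-6
```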

We will want a couple helper Laplacian operations, including
\begin{equation}\label{eqn:waveEquationGreens:580}
\begin{aligned}
\spacegrad^2 X^{-1/2}
&=
\spacegrad \cdot \lr{ \inv{c^2} r \rcap X^{-3/2} } \\
&=
\inv{c^2} \lr{ \spacegrad \cdot \rcap} \lr{ r X^{-3/2} }
+ \inv{c^2} \lr{ \rcap \cdot \spacegrad r } X^{-3/2}
+ \frac{r}{c^2} \lr{ \rcap \cdot \spacegrad X^{-3/2} } \\
&=
\inv{c^2} X^{-3/2}
+ \inv{c^2} X^{-3/2}
+ \frac{r}{c^2} \lr{ \inv{c^2} 3 r X^{-5/2} } \\
&=
\frac{2}{c^2} X^{-3/2}
+ \frac{3 r^2}{c^4} X^{-5/2}.
\end{aligned}
\end{equation}

The Laplacian of the step is
\begin{equation}\label{eqn:waveEquationGreens:600}
\begin{aligned}
\spacegrad^2 \Theta
&=
\spacegrad \cdot \lr{ -\inv{c} \delta \rcap } \\
&=
-\inv{c}
\lr{ \spacegrad \cdot \rcap } \delta
-\inv{c}
\rcap \cdot \spacegrad \delta \\
&=
-\inv{r c} \delta
-\inv{c}
\rcap \cdot \lr{
-\inv{c} \rcap \delta'
} \\
&=
-\inv{r c} \delta
+\inv{c^2} \delta'.
\end{aligned}
\end{equation}

We are now ready to compute the Laplacian of \( \Theta X^{-1/2} \). Let’s expand the chain rule for that, so that the rest of the job is just algebra
\begin{equation}\label{eqn:waveEquationGreens:620}
\begin{aligned}
\spacegrad^2 \lr{ f g }
&=
\spacegrad \cdot \lr{ f \spacegrad g }
+
\spacegrad \cdot \lr{ g \spacegrad f } \\
&=
f \spacegrad^2 g + \spacegrad f \cdot \spacegrad g
+
g \spacegrad^2 f + \spacegrad g \cdot \spacegrad f \\
&=
f \spacegrad^2 g + 2 \spacegrad f \cdot \spacegrad g + g \spacegrad^2 f.
\end{aligned}
\end{equation}
We want to sub in
\begin{equation}\label{eqn:waveEquationGreens:640}
\begin{aligned}
\spacegrad^2 \Theta &= -\inv{r c} \delta + \inv{c^2} \delta' \\
\spacegrad^2 X^{-1/2} &= \frac{2}{c^2} X^{-3/2} + \frac{3 r^2}{c^4} X^{-5/2} \\
\spacegrad X^{-1/2} &= \inv{c^2} r \rcap X^{-3/2} \\
\spacegrad \Theta &= -\inv{c} \delta \rcap.
\end{aligned}
\end{equation}
We get
\begin{equation}\label{eqn:waveEquationGreens:660}
\begin{aligned}
\spacegrad^2 \lr{ \Theta X^{-1/2} }
&=
\lr{ -\inv{r c} \delta + \inv{c^2} \delta' } X^{-1/2}
+ \lr{ \frac{2}{c^2} X^{-3/2} + \frac{3 r^2}{c^4} X^{-5/2} } \Theta
- 2 \inv{c^2} r X^{-3/2} \inv{c} \delta \\
&=
\inv{c^2} X^{-1/2} \delta'
+ \inv{c^2} \lr{ 2 \lr{\tau^2 - r^2/c^2} + \frac{3 r^2}{c^2} } X^{-5/2} \Theta
- \inv{r c} \lr{ \tau^2 - r^2/c^2 + 2 r^2/c^2 } X^{-3/2} \delta \\
&=
\inv{c^2} X^{-1/2} \delta'
+ \inv{c^2} \lr{ 2 \tau^2 + \frac{r^2}{c^2} } X^{-5/2} \Theta
- \inv{r c} \lr{ \tau^2 + \frac{r^2}{c^2} } X^{-3/2} \delta.
\end{aligned}
\end{equation}

We are ready to evaluate the time derivatives now. Let’s try it the same way with
\begin{equation}\label{eqn:waveEquationGreens:680}
\begin{aligned}
\partial_{tt} \lr{ f g }
&=
\partial_t \lr{ f \partial_t g + g \partial_t f } \\
&=
g \partial_{tt} f
+
f \partial_{tt} g
+ 2 \lr{ \partial_t f } \lr{ \partial_t g }.
\end{aligned}
\end{equation}
A couple of the time partials can be computed by inspection
\begin{equation}\label{eqn:waveEquationGreens:700}
\begin{aligned}
\partial_t \Theta &= \pm \delta \\
\partial_{tt} \Theta &= \lr{\pm 1}^2 \delta' = \delta',
\end{aligned}
\end{equation}
and for the rest, we have
\begin{equation}\label{eqn:waveEquationGreens:720}
\begin{aligned}
\partial_t X^{-1/2}
&=
-\inv{2} X^{-3/2} \partial_t X \\
&=
-\inv{2} X^{-3/2} 2 \tau \\
&=
-\tau X^{-3/2},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:waveEquationGreens:740}
\begin{aligned}
\partial_{tt} X^{-1/2}
&=
- X^{-3/2}
- \tau \partial_t X^{-3/2} \\
&=
- X^{-3/2}
+ 3 \tau^2 X^{-5/2}.
\end{aligned}
\end{equation}
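This second time derivative can be spot checked numerically away from the light cone, where \( X > 0 \) and everything is smooth (a numeric aside of my own; parameter values arbitrary):

```python
import math

r_c = 0.5          # r/c, held fixed

def F(tau):        # X^{-1/2} with X = tau^2 - (r/c)^2, valid for tau > r/c
    return (tau * tau - r_c * r_c) ** -0.5

tau = 1.3
h = 1e-4
# second central difference approximation of d^2/dt^2 X^{-1/2}
num = (F(tau + h) - 2 * F(tau) + F(tau - h)) / (h * h)

X = tau * tau - r_c * r_c
exact = -X ** -1.5 + 3 * tau * tau * X ** -2.5
assert abs(num - exact) < 1e-4
```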
Assembling the pieces, we have
\begin{equation}\label{eqn:waveEquationGreens:760}
\begin{aligned}
\partial_{tt} \lr{ \Theta X^{-1/2} }
&=
\lr{
- X^{-3/2}
+ 3 \tau^2 X^{-5/2}
} \Theta
+
\delta' X^{-1/2}
+ 2 \lr{ \pm \delta } \lr{ -\tau X^{-3/2} } \\
&=
\delta' X^{-1/2}
+ \lr{ -\lr{ \tau^2 - r^2/c^2 } + 3 \tau^2 } X^{-5/2} \Theta
\mp 2 \tau X^{-3/2} \delta \\
&=
\delta' X^{-1/2}
+ \lr{ 2 \tau^2 + r^2/c^2 } X^{-5/2} \Theta
\mp 2 \tau X^{-3/2} \delta.
\end{aligned}
\end{equation}

The wave equation operation on \( \Theta X^{-1/2} \) is
\begin{equation}\label{eqn:waveEquationGreens:780}
\begin{aligned}
\lr{ \spacegrad^2 - (1/c^2) \partial_{tt} } \Theta X^{-1/2}
&=
\inv{c^2} \lr{ 2 \tau^2 + \frac{r^2}{c^2} } X^{-5/2} \Theta
- \inv{r c} \lr{ \tau^2 + \frac{r^2}{c^2} } X^{-3/2} \delta \\
&- \inv{c^2} \lr{ 2 \tau^2 + r^2/c^2 } X^{-5/2} \Theta
\pm \frac{2}{c^2} \tau X^{-3/2} \delta \\
&=
- \inv{r c} \lr{ \tau^2 + \frac{r^2}{c^2} } X^{-3/2} \delta
\pm \frac{2}{c^2} \tau X^{-3/2} \delta \\
&=
\inv{c^2} \lr{
- \frac{c \tau^2}{r}
- \frac{r}{c}
\pm 2 \tau
}
X^{-3/2} \delta.
\end{aligned}
\end{equation}

So, after all that we have
\begin{equation}\label{eqn:waveEquationGreens:800}
\lr{ \spacegrad^2 - (1/c^2) \partial_{tt} } G =
-\inv{2 \pi c^2} \lr{
- \frac{c \tau^2}{r}
- \frac{r}{c}
\pm 2 \tau
}
\frac{\delta(\pm \tau - r/c)}{\lr{\tau^2 - r^2/c^2}^{3/2}}.
\end{equation}

This is a very problematic expression. The delta function is zero everywhere but \( \pm \tau = r/c \), but the denominator blows up at \( \pm \tau = r/c \), and the leading factor is also zero at that point:
\begin{equation}\label{eqn:waveEquationGreens:820}
\begin{aligned}
\evalbar{ \lr{ -\frac{c}{r} \tau^2 - \frac{r}{c} \pm 2 \tau }}{\pm \tau = r/c}
&=
-\frac{c}{r} \lr{ \frac{r}{c} }^2 - \frac{r}{c} + 2 \frac{r}{c} \\
&=
0.
\end{aligned}
\end{equation}
So, we’ve computed something that has a \( 0 \times \infty / 0 \) structure at \( \pm \tau = r/c \). Presumably, that indeterminate structure hides the \( \delta(x - x') \delta(y - y') \delta(t - t') \) that we are looking for at that point.

I think that the root problem here is that the derivatives of \( \lr{ \tau^2 - r^2/c^2 }^{-1/2} \) are not defined where \( \tau = \pm r/c \), so we have a zero result for any region of spacetime where that is not the case, but can’t say much about it at other points without additional work.

Attempting to describe this physically, I think that we’d say that we have discovered that a constant velocity wave of this form has to propagate on the “light cone”. We see something like that for the 3D Green’s function too, which is explicitly zero off the light cone, not just after application of the wave equation operator.

Followup:

  1. Is there a better representation of the 2D Green’s function than this one? I think it’s time to look up some more advanced handling of Green’s functions to get a better handle on this. I’d guess that there’s a Green’s function for the 2D wave equation related to Bessel functions, like that of the 2D Helmholtz operator.
  2. It should also be possible to perform a limiting convolution verification, in the neighbourhood of the light cone, and then look at the limit of that convolution. I’d expect that to be better behaved, as it should avoid the singularity itself.


An “easy” integral from Jackson’s electrodynamics

September 6, 2025 math and physics play


[Click here for a PDF version of this post]

Once again, I was reading my Jackson [1], which characteristically had the statement “the […] integral can easily be shown to have the value \( 4 \pi \)”, in a discussion of electrostatic energy and self energy.

The integral (where \( \Bn \) is a unit vector) is
\begin{equation}\label{eqn:selfEnergyIntegral:20}
I = \int \frac{\Brho}{\rho^3} \cdot \frac{\Brho + \Bn}{\Norm{\Brho + \Bn}^3} d^3 \rho.
\end{equation}

This is something that I figured out once before (see [2] appendix C). However, trying to do it a second time around, I think that I found the “easy” way.

As Jackson hints, the starting point is
\begin{equation}\label{eqn:selfEnergyIntegral:40}
\frac{\Bx}{\Norm{\Bx}^3}
=
-\spacegrad \inv{\Norm{\Bx}},
\end{equation}
but we don’t have to apply it to both of the vector terms, as I did in my initial attempt (which results in a Laplacian to reduce.) Inserting this and applying the chain rule, we find
\begin{equation}\label{eqn:selfEnergyIntegral:60}
\begin{aligned}
I
&= -\int \frac{\Brho}{\rho^3} \cdot \spacegrad_\Brho \inv{\Norm{\Brho + \Bn}} d^3 \rho \\
&=
-\int
\spacegrad_\Brho \cdot \lr{
\frac{\Brho}{\rho^3}
\inv{\Norm{\Brho + \Bn}}
}
d^3 \rho
+
\int
\lr{
\spacegrad_\Brho \cdot
\frac{\Brho}{\rho^3}
}
\inv{\Norm{\Brho + \Bn}}
d^3 \rho
\end{aligned}
\end{equation}
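As a sanity check, the gradient identity used above, \( \Bx/\Norm{\Bx}^3 = -\spacegrad \inv{\Norm{\Bx}} \), is easy to verify component-wise with sympy (a sketch, not part of the original text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# -grad(1/|x|), component by component.
grad = [sp.diff(-1/r, v) for v in (x, y, z)]

# This should match x/|x|^3 in every component.
target = [v / r**3 for v in (x, y, z)]
print(all(sp.simplify(g - t) == 0 for g, t in zip(grad, target)))  # True
```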

The first integral is a total divergence, so it can be evaluated as a surface integral over an infinite spherical shell
\begin{equation}\label{eqn:selfEnergyIntegral:80}
\begin{aligned}
I_1
&= -\int
\spacegrad_\Brho \cdot \lr{
\frac{\Brho}{\rho^3}
\inv{\Norm{\Brho + \Bn}}
}
d^3 \rho \\
&=
\lim_{\rho \rightarrow \infty}
-\frac{\Brho}{\rho} \cdot \lr{
\frac{\Brho}{\rho^3}
\inv{\Norm{\Brho + \Bn}}
} 4 \pi \rho^2 \\
&=
\lim_{\rho \rightarrow \infty}
\frac{-4 \pi}{\Norm{\Brho + \Bn}} \\
&=
0.
\end{aligned}
\end{equation}

The divergence term in the second integral, provided \( \Bx \ne 0 \), has the form
\begin{equation}\label{eqn:selfEnergyIntegral:100}
\begin{aligned}
\spacegrad \cdot \frac{\Bx}{\Norm{\Bx}^3}
&=
\inv{\Norm{\Bx}^3} \spacegrad \cdot \Bx
+
\lr{ \Bx \cdot \spacegrad } \inv{\Norm{\Bx}^3} \\
&=
\frac{3}{\Norm{\Bx}^3}
+
2 \frac{x_k x_j}{\Norm{\Bx}^5} \lr{-\frac{3}{2}} \partial_k x_j \\
&=
\frac{3}{\Norm{\Bx}^3}
- \frac{3}{\Norm{\Bx}^3} \\
&= 0.
\end{aligned}
\end{equation}
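The vanishing of this divergence away from the origin can also be confirmed mechanically with sympy (again, just a sketch):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# div(x/|x|^3), valid anywhere except the origin.
div = sum(sp.diff(v / r**3, v) for v in (x, y, z))
print(sp.simplify(div))  # 0
```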
However, in a neighbourhood of the origin, this actually has a delta function structure. We can see that from Gauss’s law, where we have
\begin{equation}\label{eqn:selfEnergyIntegral:120}
\spacegrad \cdot \BE = \frac{\rho}{\epsilon_0}.
\end{equation}
If we plug in the integral representation of \( \BE \) on the LHS, we have
\begin{equation}\label{eqn:selfEnergyIntegral:140}
\begin{aligned}
\spacegrad \cdot \BE
&=
\spacegrad \cdot \int \frac{\rho(\Bx')}{4 \pi \epsilon_0} \frac{\Bx - \Bx'}{\Norm{\Bx - \Bx'}^3} d^3 x' \\
&=
\int \frac{\rho(\Bx')}{4 \pi \epsilon_0} \spacegrad \cdot \frac{\Bx - \Bx'}{\Norm{\Bx - \Bx'}^3} d^3 x' \\
&=
-\int \frac{\rho(\Bx')}{4 \pi \epsilon_0} \spacegrad' \cdot \frac{\Bx - \Bx'}{\Norm{\Bx - \Bx'}^3} d^3 x'.
\end{aligned}
\end{equation}
Comparing the LHS and RHS, we must have
\begin{equation}\label{eqn:selfEnergyIntegral:160}
\spacegrad' \cdot \frac{\Bx' - \Bx}{\Norm{\Bx' - \Bx}^3} = 4 \pi \delta^3\lr{\Bx' - \Bx}.
\end{equation}

We can now substitute that into the second integral to find
\begin{equation}\label{eqn:selfEnergyIntegral:180}
\begin{aligned}
I_2 &=
\int
\lr{
\spacegrad_\Brho \cdot
\frac{\Brho}{\rho^3}
}
\inv{\Norm{\Brho + \Bn}}
d^3 \rho \\
&=
\frac{4 \pi}{\Norm{\Bn}} \\
&=
4 \pi.
\end{aligned}
\end{equation}

Sure enough, the integral has a \( 4 \pi \) value. But was that easy?  I think Hitler would disagree.
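As an independent numerical sanity check (a sketch using scipy, with the assumed choice \( \Bn = \Be_3 \), \( \Norm{\Bn} = 1 \)), the integral reduces in spherical coordinates, by azimuthal symmetry, to \( I = 2 \pi \int_0^\infty d\rho \int_0^\pi d\theta \, \lr{ \rho + \cos\theta } \sin\theta / \lr{ \rho^2 + 1 + 2 \rho \cos\theta }^{3/2} \):

```python
import numpy as np
from scipy.integrate import quad

def inner(rho):
    # Angular integral at fixed radius rho, for n along e_3 with |n| = 1.
    f = lambda th: ((rho + np.cos(th)) * np.sin(th)
                    / (rho**2 + 1 + 2*rho*np.cos(th))**1.5)
    return quad(f, 0, np.pi)[0]

# Split the radial integral at rho = 1, where the inner integral jumps
# (it is 0 inside the unit sphere, and 2/rho^2 outside).
I = 2*np.pi * (quad(inner, 0, 1)[0] + quad(inner, 1, np.inf)[0])
print(I / np.pi)  # approximately 4
```

The jump of the angular integral at \( \rho = 1 \) is another way of seeing the delta function at work: the entire \( 4 \pi \) comes from the region outside the unit sphere.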

References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[2] Peeter Joot. Electromagnetic Theory. Kindle Direct Publishing, Toronto, 2016.