
Curl of Curl. Tensor and GA expansion, and GA equivalent identity.

November 12, 2025 math and physics play

[Click here for a PDF version of this post]

In this blog post, we will expand \(\spacegrad \cross \lr{ \spacegrad \cross \Bf } = -\spacegrad^2 \Bf + \spacegrad \lr{ \spacegrad \cdot \Bf } \) two different ways, using tensor index gymnastics and using geometric algebra.

The tensor way.

To expand the curl using a tensor expansion, let’s first expand the cross product in coordinates
\begin{equation}\label{eqn:curlcurl2:20}
\begin{aligned}
\Ba \cross \Bb
&=
\lr{ \Be_r \cross \Be_s } a_r b_s \\
&=
\Be_t \cdot \lr{ \Be_r \cross \Be_s } \Be_t a_r b_s \\
&=
\epsilon_{rst} a_r b_s \Be_t.
\end{aligned}
\end{equation}
Here \( \epsilon_{rst} \) is the completely antisymmetric (Levi-Civita) tensor, which allows us to express the geometrical nature of the triple product compactly.

We can then expand the curl of the curl by applying this twice
\begin{equation}\label{eqn:curlcurl2:40}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\epsilon_{rst} \partial_r \lr{ \spacegrad \cross \Bf }_s \Be_t \\
&=
\epsilon_{rst} \partial_r \lr{ \epsilon_{uvw} \partial_u f_v \Be_w }_s \Be_t \\
&=
\epsilon_{rst} \partial_r \epsilon_{uvs} \partial_u f_v \Be_t.
\end{aligned}
\end{equation}

It turns out that there’s a nice identity to reduce the single index contraction of a pair of Levi-Civita tensors.
\begin{equation}\label{eqn:curlcurl2:60}
\epsilon_{abt} \epsilon_{cdt} = \delta_{ac} \delta_{bd} - \delta_{ad} \delta_{bc}.
\end{equation}
To show this, consider the \( t = 1 \) term of this sum \( \epsilon_{ab1} \epsilon_{cd1} \). This is non-zero only for \( a,b,c,d \in \setlr{2,3} \). If \( a,b = c,d \), this is one, and if \( a,b = d,c \), this is minus one. We may summarize that as
\begin{equation}\label{eqn:curlcurl2:80}
\epsilon_{ab1} \epsilon_{cd1} = \delta_{ac} \delta_{bd} - \delta_{ad} \delta_{bc},
\end{equation}
but this holds for \( t = 2,3 \) too, so \ref{eqn:curlcurl2:60} holds generally.
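As a quick machine check (my own addition, not part of the derivation), the contraction identity \ref{eqn:curlcurl2:60} can be verified by brute force over all index values, using 0-based indices:

```python
# Brute-force check of the Levi-Civita contraction identity
#   epsilon_{abt} epsilon_{cdt} = delta_{ac} delta_{bd} - delta_{ad} delta_{bc},
# summing t over all values. Indices run 0,1,2 here instead of 1,2,3.

def eps(i, j, k):
    """Levi-Civita symbol: sign of the permutation (i, j, k) of (0, 1, 2), else 0."""
    return (j - i) * (k - i) * (k - j) // 2

def delta(i, j):
    return 1 if i == j else 0

for a in range(3):
    for b in range(3):
        for c in range(3):
            for d in range(3):
                lhs = sum(eps(a, b, t) * eps(c, d, t) for t in range(3))
                rhs = delta(a, c) * delta(b, d) - delta(a, d) * delta(b, c)
                assert lhs == rhs, (a, b, c, d)
```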

We may now contract the tensors to find
\begin{equation}\label{eqn:curlcurl2:100}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\epsilon_{rst} \epsilon_{uvs} \Be_t \partial_r \partial_u f_v \\
&=
-\epsilon_{rts} \epsilon_{uvs} \Be_t \partial_r \partial_u f_v \\
&=
-\lr{ \delta_{ru} \delta_{tv} - \delta_{rv} \delta_{tu} } \Be_t \partial_r \partial_u f_v \\
&=
- \Be_v \partial_u \partial_u f_v
+ \Be_u \partial_v \partial_u f_v \\
&=
-\spacegrad^2 \Bf + \spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{aligned}
\end{equation}
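This final result is also easy to spot-check symbolically. Here's a sympy verification (my own addition) for an arbitrary polynomial test field, with the curl written explicitly in components:

```python
# Check curl(curl F) = -lap F + grad(div F) componentwise with sympy,
# for a sample polynomial vector field F.
from sympy import symbols, diff, simplify

x, y, z = symbols('x y z')
X = [x, y, z]
F = [x**2 * y, y * z**3, x * y * z]  # arbitrary test field

def curl(A):
    return [diff(A[2], y) - diff(A[1], z),
            diff(A[0], z) - diff(A[2], x),
            diff(A[1], x) - diff(A[0], y)]

div = diff(F[0], x) + diff(F[1], y) + diff(F[2], z)
lap = lambda f: diff(f, x, 2) + diff(f, y, 2) + diff(f, z, 2)

lhs = curl(curl(F))
rhs = [-lap(F[i]) + diff(div, X[i]) for i in range(3)]
assert all(simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```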

Using geometric algebra.

Now let’s pull out the GA toolbox. We start by introducing a no-op grade-1 selection, and using the identity \( \Ba \cross \Bb = -I \lr{ \Ba \wedge \Bb } \)
\begin{equation}\label{eqn:curlcurl2:120}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\gpgradeone{
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
} \\
&=
\gpgradeone{
-I \lr{ \spacegrad \wedge \lr{ \spacegrad \cross \Bf } }
} \\
\end{aligned}
\end{equation}
We can now expand \( \Ba \wedge \Bb = \Ba \Bb - \Ba \cdot \Bb \)
\begin{equation}\label{eqn:curlcurl2:140}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
=
\gpgradeone{
-I \spacegrad \lr{ \spacegrad \cross \Bf }
+I \lr{ \spacegrad \cdot \lr{ \spacegrad \cross \Bf } }
}
\end{equation}
but that dot product is a scalar, leaving just a pseudoscalar, which has a zero grade-1 selection. This leaves
\begin{equation}\label{eqn:curlcurl2:160}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
\gpgradeone{
-I \spacegrad \lr{ -I \lr{ \spacegrad \wedge \Bf } }
} \\
&=
-\gpgradeone{
\spacegrad \lr{ \spacegrad \wedge \Bf }
}.
\end{aligned}
\end{equation}
We use \( \Ba \wedge \Bb = \Ba \Bb - \Ba \cdot \Bb \) once more
\begin{equation}\label{eqn:curlcurl2:180}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
-\gpgradeone{
\spacegrad \lr{ \spacegrad \Bf }
-\spacegrad \lr{ \spacegrad \cdot \Bf }
}
\\
&=
-\spacegrad^2 \Bf
+\spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{aligned}
\end{equation}

GA identity.

It’s also worth noting that there’s a natural GA formulation of the curl of a curl. From the Laplacian and divergence relationship that we ended up with, we need only factor out the gradient
\begin{equation}\label{eqn:curlcurl2:200}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad \cross \Bf }
&=
-\spacegrad^2 \Bf +\spacegrad \lr{ \spacegrad \cdot \Bf } \\
&=
-\spacegrad \lr{ \spacegrad \Bf - \spacegrad \cdot \Bf } \\
&=
-\spacegrad \lr{ \spacegrad \wedge \Bf }.
\end{aligned}
\end{equation}
Because \( \spacegrad \wedge \lr{ \spacegrad \wedge \Bf } = 0 \), we may also write this as
\begin{equation}\label{eqn:curlcurl2:220}
\boxed{
\spacegrad \cdot \lr{ \spacegrad \wedge \Bf } = -\spacegrad \cross \lr{ \spacegrad \cross \Bf }.
}
\end{equation}
From the GA LHS, we see by inspection that
\begin{equation}\label{eqn:curlcurl2:240}
\spacegrad \cdot \lr{ \spacegrad \wedge \Bf } = \spacegrad^2 \Bf - \spacegrad \lr{ \spacegrad \cdot \Bf }.
\end{equation}

A fun application of Green’s functions and geometric algebra: Residue calculus

November 2, 2025 math and physics play


Motivation.

A fun application of both Green’s functions and geometric algebra is to show how the Cauchy integral equation can be expressed in terms of the Green’s function for the 2D gradient. This is covered, almost as an aside, in [1]. I found that treatment a bit hard to understand, so I am going to work through it here at my own pace.

Complex numbers in geometric algebra.

Anybody who has studied geometric algebra is likely familiar with a variety of ways to construct complex numbers from geometric objects. For example, complex numbers can be constructed for any plane. If \( \Be_1, \Be_2 \) is a pair of orthonormal vectors for some plane in \(\mathbb{R}^N\), then any vector in that plane has the form
\begin{equation}\label{eqn:residueGreens:20}
\Bf = \Be_1 u + \Be_2 v.
\end{equation}
Such a vector has an associated complex representation, obtained by simply multiplying it by one of those basis vectors. For example, if we pre-multiply \( \Bf \) by \( \Be_1 \), forming
\begin{equation}\label{eqn:residueGreens:40}
\begin{aligned}
z
&= \Be_1 \Bf \\
&= \Be_1 \lr{ \Be_1 u + \Be_2 v } \\
&= u + \Be_1 \Be_2 v.
\end{aligned}
\end{equation}

We may identify the unit bivector \( \Be_1 \Be_2 \) as an imaginary, designated by \( i \), since it has the expected behavior
\begin{equation}\label{eqn:residueGreens:60}
\begin{aligned}
i^2 &=
\lr{\Be_1 \Be_2}^2 \\
&=
\lr{\Be_1 \Be_2}
\lr{\Be_1 \Be_2} \\
&=
\Be_1 \lr{\Be_2
\Be_1} \Be_2 \\
&=
-\Be_1 \lr{\Be_1
\Be_2} \Be_2 \\
&=
-\lr{\Be_1 \Be_1}
\lr{\Be_2 \Be_2} \\
&=
-1.
\end{aligned}
\end{equation}

Complex numbers are seen to be isomorphic to even grade multivectors in a planar subspace. The imaginary is the grade-two pseudoscalar, and geometrically is an oriented unit area (bivector.)
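To make that isomorphism concrete, here is a toy numeric model of the plane algebra on the basis \( \setlr{1, \Be_1, \Be_2, \Be_1 \Be_2} \) (my own sketch, not a general GA implementation), checking both \( i^2 = -1 \) and the \( z = \Be_1 \Bf \) construction:

```python
# Minimal geometric product for the basis {1, e1, e2, e12} of the plane
# algebra Cl(2,0). Multivectors are 4-tuples of coefficients.

# TABLE[i][j] = (sign, k): basis_i * basis_j = sign * basis_k
TABLE = [
    [(1, 0), (1, 1), (1, 2), (1, 3)],    # 1  * {1, e1, e2, e12}
    [(1, 1), (1, 0), (1, 3), (1, 2)],    # e1 e2 = e12, e1 e12 = e2
    [(1, 2), (-1, 3), (1, 0), (-1, 1)],  # e2 e1 = -e12, e2 e12 = -e1
    [(1, 3), (-1, 2), (1, 1), (-1, 0)],  # e12 e1 = -e2, e12 e12 = -1
]

def gp(a, b):
    """Geometric product of two multivectors, each a list of 4 coefficients."""
    out = [0.0] * 4
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            sign, k = TABLE[i][j]
            out[k] += sign * ai * bj
    return out

e1 = [0, 1, 0, 0]
e12 = [0, 0, 0, 1]
assert gp(e12, e12) == [-1, 0, 0, 0]  # i^2 = -1
u, v = 3.0, 5.0
f = [0, u, v, 0]                      # f = e1 u + e2 v
assert gp(e1, f) == [u, 0, 0, v]      # e1 f = u + e12 v
```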

Cauchy equations in terms of the gradient.

It is natural to wonder about the geometric algebra equivalents of various complex-number relationships and identities. Of particular interest for this discussion is the geometric algebra equivalent of the Cauchy equations that specify required conditions for a function to be differentiable.

If a complex function \( f(z) = u(z) + i v(z) \) is differentiable, then we must be able to find the limit of
\begin{equation}\label{eqn:residueGreens:80}
\frac{\Delta f(z_0)}{\Delta z} = \frac{f(z_0 + h) - f(z_0)}{h},
\end{equation}
for any complex \( h \rightarrow 0 \), along any possible trajectory of \( z_0 + h \) toward \( z_0 \). In particular, for real \( h = \epsilon \),
\begin{equation}\label{eqn:residueGreens:100}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0 + \epsilon, y_0) + i v(x_0 + \epsilon, y_0) - u(x_0, y_0) - i v(x_0, y_0)}{\epsilon}
=
\PD{x}{u(z_0)} + i \PD{x}{v(z_0)},
\end{equation}
and for imaginary \( h = i \epsilon \)
\begin{equation}\label{eqn:residueGreens:120}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0, y_0 + \epsilon) + i v(x_0, y_0 + \epsilon) - u(x_0, y_0) - i v(x_0, y_0)}{i \epsilon}
=
-i\lr{ \PD{y}{u(z_0)} + i \PD{y}{v(z_0)} }.
\end{equation}
Equating real and imaginary parts, we see that existence of the derivative requires
\begin{equation}\label{eqn:residueGreens:140}
\begin{aligned}
\PD{x}{u} &= \PD{y}{v} \\
\PD{x}{v} &= -\PD{y}{u}.
\end{aligned}
\end{equation}
These are the Cauchy equations. When the derivative exists in a given neighbourhood, we say that the function is analytic in that region. If we use a bivector interpretation of the imaginary, with \( i = \Be_1 \Be_2 \), the Cauchy equations are also satisfied if the gradient of the complex function is zero, since
\begin{equation}\label{eqn:residueGreens:160}
\begin{aligned}
\spacegrad f
&=
\lr{ \Be_1 \partial_x + \Be_2 \partial_y } \lr{ u + \Be_1 \Be_2 v } \\
&=
\Be_1 \lr{ \partial_x u - \partial_y v } + \Be_2 \lr{ \partial_y u + \partial_x v }.
\end{aligned}
\end{equation}
We see that the geometric algebra equivalent of the Cauchy equations is simply
\begin{equation}\label{eqn:residueGreens:200}
\spacegrad f = 0.
\end{equation}
Roughly speaking, we may say that a function is analytic in a region, if the Cauchy equations are satisfied, or the gradient is zero, in a neighbourhood of all points in that region.
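As a spot-check (my own addition), \( f(z) = z^2 \), with \( u = x^2 - y^2 \) and \( v = 2 x y \), satisfies both Cauchy equations, so both gradient components above vanish identically:

```python
# Verify that f(z) = z^2 (u = x^2 - y^2, v = 2 x y) satisfies the Cauchy
# equations, i.e. that both components of grad f above are zero.
from sympy import symbols, diff, simplify

x, y = symbols('x y', real=True)
u = x**2 - y**2
v = 2 * x * y

# grad f = e1 (u_x - v_y) + e2 (u_y + v_x)
assert simplify(diff(u, x) - diff(v, y)) == 0
assert simplify(diff(u, y) + diff(v, x)) == 0
```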

A special case of the fundamental theorem of geometric calculus.

Given an even grade multivector \( \psi \) in the geometric algebra of \(\mathbb{R}^2\) (i.e., a complex number), we can show that
\begin{equation}\label{eqn:residueGreens:220}
\int_A \spacegrad \psi d^2\Bx = \oint_{\partial A} d\Bx \psi.
\end{equation}
Let’s get an idea why this works by expanding the area integral for a rectangular parameterization
\begin{equation}\label{eqn:residueGreens:240}
\begin{aligned}
\int_A \spacegrad \psi d^2\Bx
&=
\int_A \lr{ \Be_1 \partial_1 + \Be_2 \partial_2 } \psi I dx dy \\
&=
\int \Be_1 I dy \evalrange{\psi}{x_0}{x_1}
+
\int \Be_2 I dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int \Be_2 dy \evalrange{\psi}{x_0}{x_1}
-
\int \Be_1 dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int d\By \evalrange{\psi}{x_0}{x_1}
-
\int d\Bx \evalrange{\psi}{y_0}{y_1}.
\end{aligned}
\end{equation}
We took advantage of the fact that the \(\mathbb{R}^2\) pseudoscalar commutes with \( \psi \). The end result, illustrated in fig. 1, shows pictorially that the remaining integral is an oriented line integral.

fig. 1. Oriented multivector line integral.


If we want to approximate a more general area, we may do so with additional tiles, as illustrated in fig. 2. We may evaluate the area integral using the line integral over just the exterior boundary using such a tiling, as any overlapping opposing boundary contributions cancel exactly.

fig. 2. A crude circular tiling approximation.


The reason that this is interesting is that it allows us to re-express a complex integral as a corresponding multivector area integral. With \( d\Bx = \Be_1 dz \), we have
\begin{equation}\label{eqn:residueGreens:260}
\oint dz\, \psi = \Be_1 \int \spacegrad \psi d^2\Bx.
\end{equation}

The Cauchy kernel as a Green’s function.

We’ve previously derived the Green’s function for the 2D Laplacian, and found
\begin{equation}\label{eqn:residueGreens:280}
\tilde{G}(\Bx, \Bx') = \inv{2\pi} \ln \Abs{\Bx - \Bx'},
\end{equation}
which satisfies
\begin{equation}\label{eqn:residueGreens:300}
\delta^2(\Bx - \Bx') = \spacegrad^2 \tilde{G}(\Bx, \Bx') = \spacegrad \lr{ \spacegrad \tilde{G}(\Bx, \Bx') }.
\end{equation}
This means that \( G(\Bx, \Bx') = \spacegrad \tilde{G}(\Bx, \Bx') \) is the Green’s function for the gradient. That Green’s function is
\begin{equation}\label{eqn:residueGreens:320}
\begin{aligned}
G(\Bx, \Ba)
&= \inv{2 \pi} \frac{\spacegrad \Abs{\Bx - \Ba}}{\Abs{\Bx - \Ba}} \\
&= \inv{2 \pi} \frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2}.
\end{aligned}
\end{equation}
We may cast this Green’s function into complex form with \( z = \Be_1 \Bx, a = \Be_1 \Ba \). In particular
\begin{equation}\label{eqn:residueGreens:340}
\begin{aligned}
\inv{z - a}
&=
\frac{(z - a)^\conj}{\Abs{z - a}^2} \\
&=
\frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2} \Be_1 \\
&=
2 \pi G(\Bx, \Ba) \Be_1.
\end{aligned}
\end{equation}

Cauchy’s integral.

With
\begin{equation}\label{eqn:residueGreens:360}
\psi = \frac{f(z)}{z - a},
\end{equation}
using \ref{eqn:residueGreens:260}, we can now evaluate
\begin{equation}\label{eqn:residueGreens:265}
\begin{aligned}
\oint dz\, \frac{f(z)}{z - a}
&= \Be_1 \int \spacegrad \frac{f(z)}{z - a} d^2\Bx \\
&= \Be_1 \int \lr{ \frac{\spacegrad f(z)}{z - a} + \lr{ \spacegrad \inv{z - a}} f(z) } I dA \\
&= \Be_1 \int f(z) \spacegrad 2 \pi G(\Bx, \Ba) \Be_1 I dA \\
&= 2 \pi \Be_1 \int \delta^2(\Bx - \Ba) \Be_1 f(\Bx) I dA \\
&= 2 \pi \Be_1^2 f(\Ba) I \\
&= 2 \pi I f(a),
\end{aligned}
\end{equation}
where we’ve made use of the analytic condition \( \spacegrad f = 0 \), and the fact that \( f \) and \( 1/(z-a) \), both even multivectors, commute.

The Cauchy integral equation
\begin{equation}\label{eqn:residueGreens:380}
f(a) = \inv{2 \pi I} \oint dz\, \frac{f(z)}{z - a},
\end{equation}
falls out naturally. This sort of residue calculation always seemed a bit miraculous. By introducing a geometric algebra encoding of complex numbers, we get a new and interesting interpretation. In particular,

  1. the imaginary factor in the geometric algebra formulation of this identity is an oriented unit area coming directly from the area element,
  2. the factor of \( 2 \pi \) comes directly from the Green’s function for the gradient,
  3. the fact that this particular form of integral picks up only the contribution at the point \( z = a \) no longer seems mysterious. This is directly due to delta-function filtering.

Also, if we are looking for an understanding of how to generalize the Cauchy equation to more general multivector functions, we now also have a good clue how that would be done.

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

Summary of some gradient related Green’s functions

October 28, 2025 math and physics play


Here is a summary of Green’s functions for a number of gradient related differential operators (many of which are of interest for electrodynamics, and most of them have been derived recently in blog posts.) These Green’s functions all satisfy
\begin{equation}\label{eqn:deltaFunctions:120}
\delta(\Bx - \Bx') = L G(\Bx, \Bx').
\end{equation}

Let \( \Br = \Bx - \Bx' \), \( r = \Norm{\Br} \), \( \mathbf{\hat{r}} = \Br/r \), and \( \tau = t - t' \), then

  1. Gradient operator, \( L = \spacegrad \), in 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:25}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \frac{\mathbf{\hat{r}}}{2} \\
    G\lr{ \Bx, \Bx' } &= \frac{1}{2 \pi} \frac{\mathbf{\hat{r}}}{r} \\
    G\lr{ \Bx, \Bx' } &= \inv{4 \pi} \frac{\mathbf{\hat{r}}}{r^2}.
    \end{aligned}
    \end{equation}

  2. Laplacian operator, \( L = \spacegrad^2 \), in 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:20}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \frac{r}{2} \\
    G\lr{ \Bx, \Bx' } &= \frac{1}{2 \pi} \ln r \\
    G\lr{ \Bx, \Bx' } &= -\frac{1}{4 \pi r}.
    \end{aligned}
    \end{equation}

  3. Second order Helmholtz operator, \( L = \spacegrad^2 + k^2 \) for 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:60}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \pm \frac{1}{2 j k} e^{\pm j k r} \\
    G(\Bx, \Bx') &= \frac{1}{4 j} H_0^{(1)}(\pm k r) \\
    G\lr{ \Bx, \Bx' } &= -\frac{1}{4 \pi} \frac{e^{\pm j k r }}{r}.
    \end{aligned}
    \end{equation}

  4. First order Helmholtz operator, \( L = \spacegrad + j k \), in 1D, 2D and 3D respectively

    \begin{equation}\label{eqn:deltaFunctions:80}
    \begin{aligned}
    G\lr{ \Bx, \Bx' } &= \frac{j}{2} \lr{ \mathbf{\hat{r}} \mp 1 } e^{\pm j k r} \\
    G\lr{ \Bx, \Bx' } &= \frac{k}{4} \lr{ \pm j \mathbf{\hat{r}} H_1^{(1)}(\pm k r) - H_0^{(1)}(\pm k r) } \\
    G\lr{ \Bx, \Bx' } &= \frac{e^{\pm j k r}}{4 \pi r} \lr{ jk \lr{ 1 \mp \mathbf{\hat{r}} } + \frac{\mathbf{\hat{r}}}{r} }.
    \end{aligned}
    \end{equation}

    This is also the Green’s function for a left-acting operator \( G(\Bx, \Bx') \lr{ -\lspacegrad + j k } = \delta(\Bx - \Bx') \).

  5. Wave equation, \( L = \spacegrad^2 - (1/c^2) \partial_{tt} \), in 1D, 2D and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:140}
    \begin{aligned}
    G(\Br, \tau) &= -\frac{c}{2} \Theta( \pm \tau - r/c ) \\
    G(\Br, \tau) &= -\inv{2 \pi \sqrt{ \tau^2 - r^2/c^2 } } \Theta( \pm \tau - r/c ) \\
    G(\Br, \tau) &= -\inv{4 \pi r} \delta( \pm \tau - r/c ),
    \end{aligned}
    \end{equation}
    The positive sign is for the retarded solution, and the negative for the advanced.

  6. Spacetime gradient \( L = \spacegrad + (1/c) \partial_t \), satisfying \( L G(\Bx - \Bx', t - t') = \delta(\Bx - \Bx') \delta(t - t') \), in 1D, 2D, and 3D respectively
    \begin{equation}\label{eqn:deltaFunctions:100}
    \begin{aligned}
    G(\Br, \tau)
    &= \inv{2} \lr{ \mathbf{\hat{r}} \pm 1 } \delta(\pm \tau - r/c) \\
    G(\Br, \tau)
    &=
    \frac{
    \lr{\tau^2 - r^2/c^2}^{-3/2}
    }{2 \pi c^2}
    \lr{
    c \lr{ \mathbf{\hat{r}} \pm 1 }
    \lr{\tau^2 - r^2/c^2}
    \delta(\pm \tau - r/c)
    -\lr{ \Br + c \tau }
    \Theta(\pm \tau - r/c)
    }
    \\
    G(\Br, \tau)
    &= \inv{4 \pi r} \delta(\pm \tau - r/c)
    \lr{
    \frac{\mathbf{\hat{r}}}{r}
    +
    \lr{ \mathbf{\hat{r}} \pm 1} \inv{c} \PD{t'}{}
    }
    \end{aligned}
    \end{equation}
    The plus sign is for the retarded solution, and the negative for the advanced.
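As a partial cross-check of this table (my own addition), sympy confirms that the 2D and 3D Laplacian entries and the 3D second order Helmholtz entry are annihilated by their operators away from the source point, which is the only place the delta function lives:

```python
# Away from r = 0, each Green's function must be annihilated by its operator.
# Spot-check the 2D/3D Laplacian entries and the 3D Helmholtz entry.
from sympy import symbols, sqrt, log, exp, pi, I, simplify, diff

x, y, z, k = symbols('x y z k', positive=True)

def lap3(f):
    return diff(f, x, 2) + diff(f, y, 2) + diff(f, z, 2)

r2 = sqrt(x**2 + y**2)
r3 = sqrt(x**2 + y**2 + z**2)

G_2d = log(r2) / (2 * pi)
assert simplify(diff(G_2d, x, 2) + diff(G_2d, y, 2)) == 0

assert simplify(lap3(-1 / (4 * pi * r3))) == 0

G_helm = -exp(I * k * r3) / (4 * pi * r3)
assert simplify(lap3(G_helm) + k**2 * G_helm) == 0
```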

Green’s function for the wave equation: 1D and 2D cases.

October 4, 2025 math and physics play


The Green’s function(s) \( G(\Br, \tau) \) for the 3D wave equation
\begin{equation}\label{eqn:waveEquationGreens:40}
\lr{ \spacegrad^2 – \inv{c^2}\frac{\partial^2}{\partial t^2} } G(\Br, \tau) = \delta(\Br) \delta(\tau),
\end{equation}
where
\begin{equation}\label{eqn:waveEquationGreens:20}
\begin{aligned}
\Br &= \Bx - \Bx' \\
r &= \Abs{\Br} \\
\tau &= t - t',
\end{aligned}
\end{equation}
is
\begin{equation}\label{eqn:waveEquationGreens:60}
G(\Br, \tau) = -\inv{4 \pi r} \delta( \pm \tau - r/c ).
\end{equation}
Here the positive case is the retarded solution, and the negative the advanced solution. The derivation of these Green’s functions can be found in many places, including [1], [2], and [3].

I wasn’t familiar with the 1D and 2D Green’s functions for the wave equation. Grok says they are, respectively
\begin{equation}\label{eqn:waveEquationGreens:80}
\begin{aligned}
G(\Br, \tau) &= -\frac{c}{2} \Theta( \pm \tau - r/c ) \\
G(\Br, \tau) &= -\inv{2 \pi \sqrt{ \tau^2 - r^2/c^2 } } \Theta( \pm \tau - r/c ).
\end{aligned}
\end{equation}
At least for the time being, I thought I would attempt to verify these, instead of deriving them. For the 1D case, this turns out to be fairly straightforward. Perhaps unexpectedly, that isn’t true for the 2D case, and I’ll have to revisit that case in other ways. In this post, I’ll show the verification of the 1D Green’s function, and my partial attempt to verify the 2D case.

1D Green’s function verification.

We will use the Heaviside theta representation of the absolute value.
\begin{equation}\label{eqn:waveEquationGreens:100}
\Abs{x} = x \Theta(x) - x \Theta(-x).
\end{equation}
Recall that the derivative of the absolute value function is a sign function
\begin{equation}\label{eqn:waveEquationGreens:120}
\begin{aligned}
\Abs{x}'
&= \Theta(x) - \Theta(-x) + x \delta(x) + x \delta(-x) \\
&= \Theta(x) - \Theta(-x) + 2 x \delta(x) \\
&= \Theta(x) - \Theta(-x) \\
&= \textrm{sgn}(x),
\end{aligned}
\end{equation}
where \( x \delta(x) \) is zero in a distributional sense (zero if applied to a test function.) Differentiating once more, the derivative of the sign function is a delta function
\begin{equation}\label{eqn:waveEquationGreens:140}
\begin{aligned}
\textrm{sgn}(x)'
&= \Theta(x)' - \Theta(-x)' \\
&= \delta(x) + \delta(-x) \\
&= 2 \delta(x).
\end{aligned}
\end{equation}
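These distributional identities can also be checked numerically by integrating against a test function: \( \int \textrm{sgn}'(x) \phi(x) dx = -\int \textrm{sgn}(x) \phi'(x) dx = 2 \phi(0) \). Here is that check with a Gaussian \( \phi \) (my own addition):

```python
# Numeric check that sgn'(x) = 2 delta(x) in the distributional sense:
# -integral(sgn(x) phi'(x)) should equal 2 phi(0) for a test function phi.
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
h = x[1] - x[0]
phi = np.exp(-x**2)            # Gaussian test function, phi(0) = 1
dphi = -2 * x * np.exp(-x**2)  # phi'(x)

# <sgn', phi> = -<sgn, phi'>
lhs = -np.sum(np.sign(x) * dphi) * h
assert abs(lhs - 2 * phi[len(x) // 2]) < 1e-6
```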

Now let’s evaluate the \( x \) partials.
\begin{equation}\label{eqn:waveEquationGreens:160}
\begin{aligned}
\PD{x}{} \Theta(\tau - r/c)
&=
-\inv{c} \delta\lr{ \tau - r/c } \PD{x}{} \Abs{x - x'} \\
&=
-\inv{c} \delta\lr{ \tau - r/c } \textrm{sgn}(x - x').
\end{aligned}
\end{equation}
The second derivative is
\begin{equation}\label{eqn:waveEquationGreens:180}
\begin{aligned}
\frac{\partial^2}{\partial x^2} \Theta(\tau - r/c)
&=
-\inv{c}
\lr{
-\inv{c} \delta'\lr{ \tau - r/c } (\textrm{sgn}(x - x'))^2
+
\delta\lr{ \tau - r/c } 2 \delta(x - x')
} \\
&=
\inv{c^2} \delta'\lr{ \tau - r/c } - \frac{2}{c} \delta\lr{ \tau} \delta(x - x').
\end{aligned}
\end{equation}
The transformation above from \( \delta\lr{ \tau - r/c } \rightarrow \delta(\tau) \) is because the spatial delta function \( \delta(x - x') \) is zero unless \( x = x' \), and \( r = 0 \) at that point.

The time derivatives are easier to compute
\begin{equation}\label{eqn:waveEquationGreens:200}
\begin{aligned}
\frac{\partial^2}{\partial t^2} \Theta(\tau - r/c)
&=
\PD{t}{} \delta(\tau - r/c) \\
&=
\delta'(\tau - r/c).
\end{aligned}
\end{equation}

Putting the pieces together, we have
\begin{equation}\label{eqn:waveEquationGreens:220}
\begin{aligned}
\lr{ \spacegrad^2 - \inv{c^2}\frac{\partial^2}{\partial t^2} } \Theta(\tau - r/c)
&=
\inv{c^2} \delta'\lr{ \tau - r/c } - \frac{2}{c} \delta\lr{ \tau} \delta(x - x')
- \inv{c^2} \delta'(\tau - r/c)
\\
&=
- \frac{2}{c} \delta\lr{ \tau} \delta(x - x').
\end{aligned}
\end{equation}
Dividing through by \( -2/c \) gives us
\begin{equation}\label{eqn:waveEquationGreens:240}
\lr{ \spacegrad^2 - \inv{c^2}\frac{\partial^2}{\partial t^2} } G(\Bx - \Bx', t - t') = \delta\lr{t - t'} \delta\lr{\Bx - \Bx'},
\end{equation}
as desired. The \( \delta \) derivative terms can be given meaning, but they conveniently cancel out, so we don’t have to think about that this time.

It’s easy to see that the advanced Green’s function has the same behaviour, since the two time partials will bring down a factor of \( (\pm 1)^2 = 1 \) in general, which does not change anything above.
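Since the wave operator is self-adjoint, we can also verify the retarded Green's function numerically by moving the operator onto a Gaussian test function, expecting \( \int G \lr{ \phi_{xx} - \phi_{tt}/c^2 } dx dt = \phi(0,0) \). Here is that check with \( c = 1 \) (my own addition):

```python
# Distributional check of G = -(c/2) Theta(tau - |x|/c) for the 1D wave
# operator, with c = 1: integrate G against (phi_xx - phi_tt) for a
# Gaussian phi and compare with phi(0, 0) = 1.
import numpy as np

x = np.linspace(-8.0, 8.0, 801)
t = np.linspace(-8.0, 8.0, 801)
h = x[1] - x[0]
X, T = np.meshgrid(x, t, indexing='ij')

phi = np.exp(-X**2 - T**2)
phi_xx = (4 * X**2 - 2) * phi
phi_tt = (4 * T**2 - 2) * phi

G = -0.5 * (T - np.abs(X) >= 0)  # -(c/2) Theta(tau - r/c), c = 1
integral = np.sum(G * (phi_xx - phi_tt)) * h * h
assert abs(integral - 1.0) < 0.05
```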

Attempted verification of the claimed 2D Green’s function.

Now let’s try to verify Grok’s claim for the 2D Green’s function, starting with a few helpful side calculations.

\begin{equation}\label{eqn:waveEquationGreens:260}
\begin{aligned}
\spacegrad \Abs{r}
&= \sum_m \Be_m \partial_m \sqrt{ \sum_n \lr{x_n - x_n'}^2 } \\
&= \inv{2} 2 \frac{\Bx - \Bx'}{\Abs{\Bx - \Bx'}} \\
&= \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:280}
\begin{aligned}
\spacegrad \lr{ \tau^2 - r^2/c^2 }^{-1/2}
&=
-\inv{2} \lr{ \tau^2 - r^2/c^2 }^{-3/2} \lr{-\frac{2 r}{c^2}} \spacegrad r \\
&=
-\inv{2} \lr{ \tau^2 - r^2/c^2 }^{-3/2} \lr{-\frac{2 r}{c^2}} \rcap \\
&=
\frac{r}{c^2} \lr{ \tau^2 - r^2/c^2 }^{-3/2} \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:300}
\begin{aligned}
\spacegrad \lr{ \tau^2 - r^2/c^2 }^{-3/2}
&=
-\frac{3}{2} \lr{ \tau^2 - r^2/c^2 }^{-5/2} \lr{-\frac{2 r}{c^2}} \spacegrad r \\
&=
-\frac{3}{2} \lr{ \tau^2 - r^2/c^2 }^{-5/2} \lr{-\frac{2 r}{c^2}} \rcap \\
&=
\frac{3 r}{c^2} \lr{ \tau^2 - r^2/c^2 }^{-5/2} \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:320}
\begin{aligned}
\spacegrad \Theta\lr{ \pm \tau - r/c }
&=
-\inv{c} \delta\lr{ \pm \tau - r/c } \spacegrad r \\
&=
-\inv{c} \delta\lr{ \pm \tau - r/c } \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:340}
\begin{aligned}
\spacegrad \delta\lr{ \pm \tau - r/c }
&=
-\inv{c} \delta'\lr{ \pm \tau - r/c } \spacegrad r \\
&=
-\inv{c} \delta'\lr{ \pm \tau - r/c } \rcap.
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:waveEquationGreens:360}
\begin{aligned}
\spacegrad \cdot \rcap
&=
\spacegrad \cdot \frac{\Bx - \Bx'}{r} \\
&=
\inv{r} \spacegrad \cdot \lr{\Bx - \Bx'} + \lr{\Bx - \Bx'} \cdot \spacegrad \inv{r} \\
&=
\frac{2}{r} + \lr{\Bx - \Bx'} \cdot \lr{ -\inv{r^2} \rcap } \\
&=
\frac{2}{r} - \inv{r} \\
&=
\frac{1}{r}.
\end{aligned}
\end{equation}
In summary, with \( X = \tau^2 - r^2/c^2 \)
\begin{equation}\label{eqn:waveEquationGreens:540}
\begin{aligned}
\spacegrad \Abs{r} &= \rcap \\
\spacegrad X^{-1/2} &= \inv{c^2} r \rcap X^{-3/2} \\
\spacegrad X^{-3/2} &= \inv{c^2} 3 r \rcap X^{-5/2} \\
\spacegrad \Theta &= -\inv{c} \delta \rcap \\
\spacegrad \delta &= -\inv{c} \rcap \delta' \\
\spacegrad \cdot \rcap &= \frac{1}{r}.
\end{aligned}
\end{equation}
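Before using these, here is a sympy double-check of a couple of the 2D entries in this table (my own addition), with \( \Bx' = 0 \) for brevity:

```python
# Verify grad r = rcap and div rcap = 1/r in 2D, taking x' = 0.
from sympy import symbols, sqrt, simplify, diff

x, y = symbols('x y', positive=True)
r = sqrt(x**2 + y**2)
rx, ry = x / r, y / r  # components of rcap

assert simplify(diff(r, x) - rx) == 0
assert simplify(diff(r, y) - ry) == 0
assert simplify(diff(rx, x) + diff(ry, y) - 1 / r) == 0
```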

We will want a couple of helper Laplacian evaluations, including
\begin{equation}\label{eqn:waveEquationGreens:580}
\begin{aligned}
\spacegrad^2 X^{-1/2}
&=
\spacegrad \cdot \lr{ \inv{c^2} r \rcap X^{-3/2} } \\
&=
\inv{c^2} \lr{ \spacegrad \cdot \rcap} \lr{ r X^{-3/2} }
+ \inv{c^2} \lr{ \rcap \cdot \spacegrad r } X^{-3/2}
+ \frac{r}{c^2} \lr{ \rcap \cdot \spacegrad X^{-3/2} } \\
&=
\inv{c^2} X^{-3/2}
+ \inv{c^2} X^{-3/2}
+ \frac{r}{c^2} \lr{ \inv{c^2} 3 r X^{-5/2} } \\
&=
\frac{2}{c^2} X^{-3/2}
+ \frac{3 r^2}{c^4} X^{-5/2}.
\end{aligned}
\end{equation}

The Laplacian of the step is
\begin{equation}\label{eqn:waveEquationGreens:600}
\begin{aligned}
\spacegrad^2 \Theta
&=
\spacegrad \cdot \lr{ – \inv{c} \delta \rcap } \\
&=
-\inv{c}
\lr{ \spacegrad \cdot \rcap } \delta
-\inv{c}
\rcap \cdot \spacegrad \delta \\
&=
-\inv{r c} \delta
-\inv{c}
\rcap \cdot \lr{
- \inv{c} \rcap \delta'
} \\
&=
-\inv{r c} \delta
+\inv{c^2} \delta'.
\end{aligned}
\end{equation}

We are now ready to compute the Laplacian of \( \Theta X^{-1/2} \). Let’s expand the product rule for that Laplacian, so that the rest of the job is just algebra
\begin{equation}\label{eqn:waveEquationGreens:620}
\begin{aligned}
\spacegrad^2 \lr{ f g }
&=
\spacegrad \cdot \lr{ f \spacegrad g }
+
\spacegrad \cdot \lr{ g \spacegrad f } \\
&=
f \spacegrad^2 g + \spacegrad f \cdot \spacegrad g
+
g \spacegrad^2 f + \spacegrad g \cdot \spacegrad f \\
&=
f \spacegrad^2 g + 2 \spacegrad f \cdot \spacegrad g + g \spacegrad^2 f.
\end{aligned}
\end{equation}
We want to sub in
\begin{equation}\label{eqn:waveEquationGreens:640}
\begin{aligned}
\spacegrad^2 \Theta &= -\inv{r c} \delta +\inv{c^2} \delta' \\
\spacegrad^2 X^{-1/2} &= \frac{2}{c^2} X^{-3/2} + \frac{3 r^2}{c^4} X^{-5/2} \\
\spacegrad X^{-1/2} &= \inv{c^2} r \rcap X^{-3/2} \\
\spacegrad \Theta &= -\inv{c} \delta \rcap.
\end{aligned}
\end{equation}
We get
\begin{equation}\label{eqn:waveEquationGreens:660}
\begin{aligned}
\spacegrad^2 \lr{ \Theta X^{-1/2} }
&=
\lr{ -\inv{r c} \delta +\inv{c^2} \delta' } X^{-1/2}
+ \lr{ \frac{2}{c^2} X^{-3/2} + \frac{3 r^2}{c^4} X^{-5/2} } \Theta
- 2 \inv{c^2} r X^{-3/2} \inv{c} \delta \\
&=
\inv{c^2} X^{-1/2} \delta'
+ \inv{c^2} \lr{ 2 \lr{\tau^2 - r^2/c^2} + \frac{3 r^2}{c^2} } X^{-5/2} \Theta
- \inv{r c} \lr{ \tau^2 - r^2/c^2 + 2 r^2/c^2 } X^{-3/2} \delta \\
&=
\inv{c^2} X^{-1/2} \delta'
+ \inv{c^2} \lr{ 2 \tau^2 + \frac{r^2}{c^2} } X^{-5/2} \Theta
- \inv{r c} \lr{ \tau^2 + \frac{r^2}{c^2} } X^{-3/2} \delta.
\end{aligned}
\end{equation}

We are ready to evaluate the time derivatives now. Let’s try it the same way with
\begin{equation}\label{eqn:waveEquationGreens:680}
\begin{aligned}
\partial_{tt} \lr{ f g }
&=
\partial_t \lr{ f \partial_t g + g \partial_t f } \\
&=
g \partial_{tt} f
+
f \partial_{tt} g
+ 2 \lr{ \partial_t f } \lr{ \partial_t g }.
\end{aligned}
\end{equation}
A couple of the time partials can be computed by inspection
\begin{equation}\label{eqn:waveEquationGreens:700}
\begin{aligned}
\partial_t \Theta &= \pm \delta \\
\partial_{tt} \Theta &= \lr{\pm 1}^2 \delta',
\end{aligned}
\end{equation}
and for the rest, we have
\begin{equation}\label{eqn:waveEquationGreens:720}
\begin{aligned}
\partial_t X^{-1/2}
&=
-\inv{2} X^{-3/2} \partial_t X \\
&=
-\inv{2} X^{-3/2} 2 \tau \\
&=
-\tau X^{-3/2},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:waveEquationGreens:740}
\begin{aligned}
\partial_{tt} X^{-1/2}
&=
- X^{-3/2}
- \tau \partial_t X^{-3/2} \\
&=
- X^{-3/2}
+ 3 \tau^2 X^{-5/2}.
\end{aligned}
\end{equation}
Assembling the pieces, we have
\begin{equation}\label{eqn:waveEquationGreens:760}
\begin{aligned}
\partial_{tt} \lr{ \Theta X^{-1/2} }
&=
\lr{
- X^{-3/2}
+ 3 \tau^2 X^{-5/2}
} \Theta
+
\delta' X^{-1/2}
+ 2 \lr{ \pm \delta } \lr{ -\tau X^{-3/2} } \\
&=
\delta' X^{-1/2}
+ \lr{ -\lr{ \tau^2 - r^2/c^2 } + 3 \tau^2 } X^{-5/2} \Theta
\mp 2 \tau X^{-3/2} \delta \\
&=
\delta' X^{-1/2}
+ \lr{ 2 \tau^2 + r^2/c^2 } X^{-5/2} \Theta
\mp 2 \tau X^{-3/2} \delta.
\end{aligned}
\end{equation}

The wave equation operation on \( \Theta X^{-1/2} \) is
\begin{equation}\label{eqn:waveEquationGreens:780}
\begin{aligned}
\lr{ \spacegrad^2 – (1/c^2) \partial_{tt} } \Theta X^{-1/2}
&=
\inv{c^2} \lr{ 2 \tau^2 + \frac{r^2}{c^2} } X^{-5/2} \Theta
- \inv{r c} \lr{ \tau^2 + \frac{r^2}{c^2} } X^{-3/2} \delta \\
&- \inv{c^2} \lr{ 2 \tau^2 + r^2/c^2 } X^{-5/2} \Theta
\pm \frac{2}{c^2} \tau X^{-3/2} \delta \\
&=
- \inv{r c} \lr{ \tau^2 + \frac{r^2}{c^2} } X^{-3/2} \delta
\pm \frac{2}{c^2} \tau X^{-3/2} \delta \\
&=
\inv{c^2} \lr{
- \frac{c \tau^2}{r}
- \frac{r}{c}
\pm 2 \tau
}
X^{-3/2} \delta.
\end{aligned}
\end{equation}

So, after all that we have
\begin{equation}\label{eqn:waveEquationGreens:800}
\lr{ \spacegrad^2 – (1/c^2) \partial_{tt} } G =
-\inv{2 \pi c^2} \lr{
- \frac{c \tau^2}{r}
- \frac{r}{c}
\pm 2 \tau
}
\frac{\delta(\pm \tau - r/c)}{\lr{\tau^2 - r^2/c^2}^{3/2}}.
\end{equation}

This is a very problematic expression. The delta function is zero everywhere but \( \pm \tau = r/c \), but the denominator blows up at \( \pm \tau = r/c \), and the leading factor is also zero at that point:
\begin{equation}\label{eqn:waveEquationGreens:820}
\begin{aligned}
\evalbar{ \lr{ -\frac{c}{r} \tau^2 - \frac{r}{c} \pm 2 \tau }}{\pm \tau = r/c}
&=
-\frac{c}{r} \lr{ \frac{r}{c} }^2 - \frac{r}{c} + 2 \frac{r}{c} \\
&=
0.
\end{aligned}
\end{equation}
So, we’ve computed something that has a \( 0 \times \infty / 0 \) structure at \( \pm \tau = r/c \). Presumably, this structure encodes the singular value \( \delta(x - x') \delta(y - y') \delta(t - t') \) at that point.

I think that the root problem here is that the derivatives of \( \lr{ \tau^2 - r^2/c^2 }^{-1/2} \) are not defined where \( \tau = \pm r/c \), so we have a zero result for any region of spacetime where that is not the case, but can’t say much about it at other points without additional work.

Attempting to describe this physically, I think that we’d say that we have discovered that a constant velocity wave of this form has to propagate on the “light cone”. We see something like that for the 3D Green’s function too, which is explicitly zero off the light cone, not just after application of the wave equation operator.

Followup:

  1. Is there a better representation of the 2D Green’s function than this one? I think it’s time to look up some more advanced handling of Green’s functions, to get a better handle on this. I’d guess that there’s a Green’s function for the 2D wave equation related to Bessel functions, like that of the 2D Helmholtz operator.
  2. It should also be possible to perform a limiting convolution verification, in the neighbourhood of the light cone, and then look at the limit of that convolution. I’d expect that to be better behaved, as it should avoid the singularity itself.

References

[1] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover Publications, 1992.

[2] J.D. Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[3] J. Schwinger, L.L. DeRaad Jr., K.A. Milton, and W-Y. Tsai. Classical Electrodynamics. Perseus Books, 1998.

A trilogy in 7+ parts: A better check of the 1D Helmholtz Green’s function.

September 26, 2025 math and physics play

[Click here for a PDF version of this post, and the previous posts in this series.]

We used a complicated limiting argument to show that the \( \mathrm{sgn}(x - x') \) factor in the contour integral derivation of the Helmholtz operator Green’s function was wrong.

Having discovered, even if slightly by accident, what the correct form of that Green’s function is, we can check it more directly. This time, we use the Heaviside theta technique that we used to verify the 1D Laplacian Green’s function.

The goal is to show that
\begin{equation}\label{eqn:helmholtzGreens:2060}
\lr{ \spacegrad^2 + k^2 } G(x, x') = \delta(x - x'),
\end{equation}
where
\begin{equation}\label{eqn:helmholtzGreens:2080}
G(x, x') = \frac{e^{j k \Abs{x - x'}}}{2 j k}.
\end{equation}
Let’s start with an \( r = x - x' \) change of variables, for which
\begin{equation}\label{eqn:helmholtzGreens:2100}
\frac{d}{dx} = \frac{dr}{dx} \frac{d}{dr} = \frac{d}{dr}.
\end{equation}
This means that
\begin{equation}\label{eqn:helmholtzGreens:2120}
\spacegrad^2 e^{j k \Abs{x - x'}} = \frac{d^2}{dr^2} e^{j k \Abs{r}}.
\end{equation}

Starting with the first derivative we have
\begin{equation}\label{eqn:helmholtzGreens:2140}
\begin{aligned}
\frac{d}{dr} e^{j k \Abs{r} }
&=
j k e^{j k \Abs{r} } \frac{d\Abs{r}}{dr} \\
&=
j k e^{j k \Abs{r} } \frac{d}{dr} \lr{ r \Theta(r) - r \Theta(-r) } \\
&=
j k e^{j k \Abs{r} } \lr{ \Theta(r) - \Theta(-r) + 2 r \delta(r) } \\
&=
j k e^{j k \Abs{r} } \lr{ \Theta(r) - \Theta(-r) } \\
&=
j k e^{j k \Abs{r} } \mathrm{sgn}(r).
\end{aligned}
\end{equation}
Using that Heaviside representation of the sign function, we have \( \mathrm{sgn}(r)' = 2 \delta(r) \), so
\begin{equation}\label{eqn:helmholtzGreens:2160}
\frac{d^2}{dr^2} e^{j k \Abs{r}}
=
\lr{ j k \mathrm{sgn}(r) }^2 e^{j k \Abs{r} } + 2 j k e^{j k \Abs{r} } \delta(r).
\end{equation}
We can identify \( e^{j k \Abs{r} } \delta(r) = \delta(r) \), just as we identified \( r \delta(r) = 0 \), by application to a test function. That is
\begin{equation}\label{eqn:helmholtzGreens:2180}
\begin{aligned}
\int e^{j k \Abs{r} } \delta(r) f(r) dr
&=
\evalbar{e^{j k \Abs{r} } f(r)}{r = 0} \\
&=
f(0) \\
&=
\int \delta(r) f(r) dr.
\end{aligned}
\end{equation}
With that identification
\begin{equation}\label{eqn:helmholtzGreens:2200}
\spacegrad^2 e^{j k \Abs{r} } = -k^2 e^{j k \Abs{r} } + 2 j k \delta(r),
\end{equation}
or
\begin{equation}\label{eqn:helmholtzGreens:2220}
\boxed{
\lr{ \spacegrad^2 + k^2 } \frac{e^{j k \Abs{x - x'} }}{2 j k} = \delta(x - x').
}
\end{equation}