A trilogy in 7+ parts: A better check of the 1D Helmholtz Green’s function.

September 26, 2025 math and physics play

[Click here for a PDF version of this post, and the previous posts in this series.]

We used a complicated limiting argument to show that the \( \mathrm{sgn}(x – x’) \) factor in the contour integral derivation of the Helmholtz operator Green’s function was wrong.

Having discovered, even if slightly by accident, what the correct form of that Green’s function is, we can check it more directly. This time, we use the Heaviside theta technique that we used to verify the 1D Laplacian Green’s function.

The goal is to show that
\begin{equation}\label{eqn:helmholtzGreens:2060}
\lr{ \spacegrad^2 + k^2 } G(x, x’) = \delta(x – x’),
\end{equation}
where
\begin{equation}\label{eqn:helmholtzGreens:2080}
G(x, x’) = \frac{e^{j k \Abs{x – x’}}}{2 j k}.
\end{equation}
Let’s start with an \( r = x – x’ \) change of variables, for which
\begin{equation}\label{eqn:helmholtzGreens:2100}
\frac{d}{dx} = \frac{dr}{dx} \frac{d}{dr} = \frac{d}{dr}.
\end{equation}
This means that
\begin{equation}\label{eqn:helmholtzGreens:2120}
\spacegrad^2 e^{j k \Abs{x – x’}} = \frac{d^2}{dr^2} e^{j k \Abs{r}}.
\end{equation}

Starting with the first derivative we have
\begin{equation}\label{eqn:helmholtzGreens:2140}
\begin{aligned}
\frac{d}{dr} e^{j k \Abs{r} }
&=
j k e^{j k \Abs{r} } \frac{d\Abs{r}}{dr} \\
&=
j k e^{j k \Abs{r} } \frac{d}{dr} \lr{ r \Theta(r) – r \Theta(-r) } \\
&=
j k e^{j k \Abs{r} } \lr{ \Theta(r) – \Theta(-r) + 2 r \delta(r) } \\
&=
j k e^{j k \Abs{r} } \lr{ \Theta(r) – \Theta(-r) } \\
&=
j k e^{j k \Abs{r} } \mathrm{sgn}(r).
\end{aligned}
\end{equation}
Using that Heaviside representation of the sign function, we have \( \mathrm{sgn}(r)’ = 2 \delta(r) \), so
\begin{equation}\label{eqn:helmholtzGreens:2160}
\frac{d^2}{dr^2} e^{j k \Abs{r}}
=
\lr{ j k \mathrm{sgn}(r) }^2 e^{j k \Abs{r} } + 2 j k e^{j k \Abs{r} } \delta(r).
\end{equation}
We can identify \( e^{j k \Abs{r} } \delta(r) = \delta(r) \), just as we identified \( r \delta(r) = 0 \), by application to a test function. That is
\begin{equation}\label{eqn:helmholtzGreens:2180}
\begin{aligned}
\int e^{j k \Abs{r} } \delta(r) f(r) dr
&=
\evalbar{e^{j k \Abs{r} } f(r)}{r = 0} \\
&=
f(0) \\
&=
\int \delta(r) f(r) dr.
\end{aligned}
\end{equation}
With that identification
\begin{equation}\label{eqn:helmholtzGreens:2200}
\spacegrad^2 e^{j k \Abs{r} } = -k^2 e^{j k \Abs{r} } + 2 j k \delta(r),
\end{equation}
or
\begin{equation}\label{eqn:helmholtzGreens:2220}
\boxed{
\lr{ \spacegrad^2 + k^2 } \frac{e^{j k \Abs{x – x’} }}{2 j k} = \delta(x – x’).
}
\end{equation}

A trilogy in six+ parts: 1D Laplacian Green’s function

September 25, 2025 math and physics play

[Click here for a PDF version of this post, and the others in this series.]

Having blundered our way to what appears to be the correct Green’s function for the 1D Helmholtz operator, let’s further validate that by deriving the Green’s function for the 1D Laplacian. We should also be able to verify that it has the correct delta function semantics.

The 1D Laplacian Green’s function.

Expanding the Helmholtz Green’s function in series around \( k \Abs{r} \) we have
\begin{equation}\label{eqn:helmholtzGreens:1780}
\begin{aligned}
G(r)
&= -\frac{j}{2k} \lr{ 1 + j k \Abs{r} + O((k \Abs{r})^2) } \\
&= -\frac{j}{2} \lr{ \inv{k} + j \Abs{r} + \inv{k} O((k \Abs{r})^2) }.
\end{aligned}
\end{equation}
This means that to first order in \( k \), we have
\begin{equation}\label{eqn:helmholtzGreens:1800}
G(r) + \frac{j}{2k} = \frac{\Abs{r}}{2}.
\end{equation}
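That first order reduction is easy to double check with sympy (a sketch, not part of the original post, taking \( r \) to stand for \( \Abs{r} > 0 \)):

```python
import sympy as sp

r, k = sp.symbols('r k', positive=True)    # r stands for |r| here
G = -sp.I / (2 * k) * sp.exp(sp.I * k * r)

# to first order in k|r|, G + j/(2k) is |r|/2
low_order = sp.series(G + sp.I / (2 * k), r, 0, 2).removeO()
assert sp.simplify(low_order - r / 2) == 0
```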
As before, we are free to add constant terms to the Green’s function for the Laplacian, and we conclude that the 1D Green’s function for the Laplacian is
\begin{equation}\label{eqn:helmholtzGreens:1820}
\boxed{
G(r) = \frac{\Abs{r}}{2}.
}
\end{equation}

Observing the delta-function semantics of our Laplacian Green’s function through convolution.

We can now attempt to validate that this has the desired delta function semantics, operating on the convolution with the Laplacian. We are interested in evaluating
\begin{equation}\label{eqn:helmholtzGreens:1840}
\spacegrad^2 \int \frac{\Abs{x – x’}}{2} V(x’) dx’ = \int V(x + r) \frac{d^2}{dr^2} \frac{\Abs{r}}{2} dr.
\end{equation}
If all goes well, this should evaluate to \( V(x) \), indicating that \( \spacegrad^2 \Abs{x – x’}/2 = \delta(x – x’) \). As a first step, we expect \( \spacegrad^2 G = 0 \), for \( x \ne x’ \). Consider first \( r > 0 \), where
\begin{equation}\label{eqn:helmholtzGreens:1860}
\frac{d}{dr} \Abs{r}
=
\frac{d}{dr} r
= 1,
\end{equation}
and for \( r < 0 \) where
\begin{equation}\label{eqn:helmholtzGreens:1880}
\frac{d}{dr} \Abs{r}
=
\frac{d}{dr} (-r)
= -1.
\end{equation}
This means that, away from the origin \( d\Abs{r}/dr = \mathrm{sgn}(r) \), and \( d^2 \Abs{r}/dr^2 = 0\). We can conclude that, for some non-zero positive epsilon that we will eventually let approach zero, we have
\begin{equation}\label{eqn:helmholtzGreens:1900}
\begin{aligned}
\spacegrad^2 \int \frac{\Abs{x – x’}}{2} V(x’) dx’
&= \int_{-\epsilon}^\epsilon V(x + r) \frac{d^2}{dr^2} \frac{\Abs{r}}{2} dr \\
&= \int_{-\epsilon}^\epsilon \lr{
\frac{d}{dr} \lr{ V(x + r) \frac{d}{dr} \frac{\Abs{r}}{2} }
– \frac{dV(x + r)}{dr} \frac{d}{dr} \frac{\Abs{r}}{2}
}
dr \\
&=
\evalrange{ V(x + r) \frac{d}{dr} \frac{\Abs{r}}{2} }{-\epsilon}{\epsilon}
– \int_{-\epsilon}^\epsilon
\lr{
\frac{d}{dr} \lr{ \frac{dV(x + r)}{dr} \frac{\Abs{r}}{2} }
-
\frac{d^2V(x + r)}{dr^2} \frac{\Abs{r}}{2}
} dr \\
&=
\evalrange{ V(x + r) \frac{d}{dr} \frac{\Abs{r}}{2} }{-\epsilon}{\epsilon}
-\evalrange{ \frac{dV(x + r)}{dr} \frac{\Abs{r}}{2} }{-\epsilon}{\epsilon}
+
\int_{-\epsilon}^\epsilon \frac{d^2V(x + r)}{dr^2} \frac{\Abs{r}}{2} dr \\
&=
\inv{2} \lr{ V(x + \epsilon) + V(x – \epsilon) } \\
&\quad - \frac{\epsilon}{2}\lr{
\frac{dV(x + \epsilon)}{dr}
-
\frac{dV(x – \epsilon)}{dr}
} \\
&+
\frac{\epsilon^2}{2} \lr{
\frac{d^2V(x + \epsilon)}{dr^2}
+
\frac{d^2V(x – \epsilon)}{dr^2}
}.
\end{aligned}
\end{equation}
In the limit we have
\begin{equation}\label{eqn:helmholtzGreens:1920}
\boxed{
\spacegrad^2 \int G(x, x’) V(x’) dx’ = \inv{2} \lr{ V(x^+) + V(x^-) }.
}
\end{equation}

If the test (or driving) function is continuous at \( x’ = x \), then this is exactly the delta-function semantics that we expect of a Green’s function. It’s interesting that this check provides us with precise semantics for the Green’s function for discontinuous functions too.

Extracting the delta-function semantics of the Laplacian Green’s function directly.

There’s a more direct, but less satisfying way to do this same computation. We can compute \( d^2 G(r)/dr^2 \). We need the trick
\begin{equation}\label{eqn:helmholtzGreens:1940}
\Abs{r} = r \Theta(r) – r \Theta(-r),
\end{equation}
and the identification \(\Theta'(r) = \delta(r) \). We find
\begin{equation}\label{eqn:helmholtzGreens:1960}
\begin{aligned}
\Abs{r}’
&= \Theta(r) – \Theta(-r) + r \delta(r) – r (-1) \delta(-r) \\
&= \Theta(r) – \Theta(-r) + 2 r \delta(r).
\end{aligned}
\end{equation}
To give \( r \delta(r) \) meaning, we can apply it to a test function
\begin{equation}\label{eqn:helmholtzGreens:1980}
\int r \delta(r) f(r) dr = \evalbar{ r f(r) }{r = 0} = 0,
\end{equation}
so
\begin{equation}\label{eqn:helmholtzGreens:2000}
\Abs{r}’ = \Theta(r) – \Theta(-r).
\end{equation}
Now we can take second derivatives
\begin{equation}\label{eqn:helmholtzGreens:2020}
\Abs{r}” = \delta(r) + \delta(-r) = 2 \delta(r).
\end{equation}
This means that
\begin{equation}\label{eqn:helmholtzGreens:2040}
\boxed{
\frac{d^2}{dr^2} \frac{\Abs{r}}{2} = \delta(r).
}
\end{equation}

A trilogy in five+ parts: Confirming an error in the derived 1D Helmholtz Green’s function.

September 24, 2025 math and physics play

[Click here for a PDF version of this, and previous, posts in this series].

The discontinuity in the derived 1D Helmholtz Green’s function is somewhat surprising. Let’s try to verify that this works or find what does. The first thing to check is that
\begin{equation}\label{eqn:helmholtzGreens:1220}
\lr{ \spacegrad^2 + k^2} G(x,x’) = 0,
\end{equation}
at locations where \( x \ne x’ \). Since we are avoiding the origin (where the annoying sign function kicks in), we want to evaluate:
\begin{equation}\label{eqn:helmholtzGreens:1240}
\lr{ k^2 + \frac{d^2}{dx^2} } e^{j k \Abs{x – x’}},
\end{equation}
and expect that this will be zero. Let’s make a change of variables \( r = x’ – x \), and evaluate
\begin{equation}\label{eqn:helmholtzGreens:1260}
\lr{ k^2 + \frac{d^2}{dr^2} } e^{j k \Abs{r}},
\end{equation}
assuming that we are omitting a neighbourhood around \( r = 0 \) where the absolute value causes trouble. For \( r > 0 \)
\begin{equation}\label{eqn:helmholtzGreens:1280}
\begin{aligned}
\frac{d}{dr} e^{j k \Abs{r}}
&= \frac{d}{dr} e^{j k r} \\
&= j k e^{j k r},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:helmholtzGreens:1300}
\begin{aligned}
\frac{d^2}{dr^2} e^{j k \Abs{r}}
&= \frac{d}{dr} j k e^{j k r} \\
&= (j k)^2 e^{j k r}.
\end{aligned}
\end{equation}
Similarly, for \( r < 0 \), we have
\begin{equation}\label{eqn:helmholtzGreens:1320}
\frac{d^2}{dr^2} e^{j k \Abs{r}} = (-jk)^2 e^{j k \Abs{r}}.
\end{equation}
In both cases, provided we are in a neighbourhood that omits \( r = 0 \), we have
\begin{equation}\label{eqn:helmholtzGreens:1340}
\lr{ k^2 + \frac{d^2}{dr^2} } e^{j k \Abs{r}} = 0,
\end{equation}
as desired. The takeaway is that we have
\begin{equation}\label{eqn:helmholtzGreens:1360}
\begin{aligned}
\lr{ \spacegrad^2 + k^2 }\int_{-\infty}^\infty G(x, x’) V(x’) dx’
&= \int_{\Abs{x’ – x} \le \epsilon} V(x’) \lr{ \frac{d^2}{d{x’}^2} + k^2 } G(x, x’) dx’ \\
&= \int_{-\epsilon}^\epsilon V(x + r) \lr{ \frac{d^2}{dr^2} + k^2 } G(r = x’ – x) dr,
\end{aligned}
\end{equation}
for some arbitrarily small value of \( \epsilon \). Observe that after bringing the operator into the integral, we also made a change of variables, first to \( x’ \) for the Laplacian, and then to \( r = x’ – x \). We’d like the 1D equivalent of Green’s theorem to reduce this, so let’s work that out first.
\begin{equation}\label{eqn:helmholtzGreens:1380}
\begin{aligned}
\int dx\, v \frac{d^2 u}{dx^2} – \int dx\, u \frac{d^2 v}{dx^2}
&=
\int dx\, \lr{ \frac{d}{dx} \lr{ v \frac{du}{dx} } – \frac{dv}{dx} \frac{du}{dx} }
– \int dx\, \lr{ \frac{d}{dx} \lr{ u \frac{dv}{dx} } – \frac{du}{dx} \frac{dv}{dx} } \\
&=
\int dx\, \frac{d}{dx} \lr{ v \frac{du}{dx} } – \int dx\, \frac{d}{dx} \lr{ u \frac{dv}{dx} } \\
&=
v \frac{du}{dx} – u \frac{dv}{dx},
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:helmholtzGreens:1400}
\boxed{
\int_a^b dx\, v \frac{d^2 u}{dx^2} = \int_a^b dx\, u \frac{d^2 v}{dx^2} + \evalrange{v \frac{du}{dx}}{a}{b} – \evalrange{u \frac{dv}{dx}}{a}{b}.
}
\end{equation}
Let’s try applying that to the function \( G(r) = e^{j k \Abs{r} } \), and see what happens.
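As a quick aside (not part of the original post), that boxed integration by parts identity is easy to spot check with sympy for arbitrary smooth functions:

```python
import sympy as sp

x, a, b = sp.symbols('x a b', real=True)
u = sp.sin(2 * x)                      # arbitrary smooth test functions
v = sp.exp(x)

lhs = sp.integrate(v * sp.diff(u, x, 2), (x, a, b))
rhs = (sp.integrate(u * sp.diff(v, x, 2), (x, a, b))
       + (v * sp.diff(u, x)).subs(x, b) - (v * sp.diff(u, x)).subs(x, a)
       - (u * sp.diff(v, x)).subs(x, b) + (u * sp.diff(v, x)).subs(x, a))
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```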
That is
\begin{equation}\label{eqn:helmholtzGreens:1420}
\begin{aligned}
\lr{ \spacegrad^2 + k^2 }\int_{\Abs{x’- x} \le \epsilon} e^{j k \Abs{x – x’}} V(x’) dx’
&= \int_{-\epsilon}^\epsilon e^{j k \Abs{r}} \lr{ k^2 + \frac{d^2}{dr^2} } V(x + r) dr \\
&\quad + \evalrange{ V(x + r) \frac{d}{dr} e^{j k \Abs{r}} }{-\epsilon}{\epsilon}
– \evalrange{ e^{j k \Abs{r}} \frac{d}{dr} V(x+r) }{-\epsilon}{\epsilon}.
\end{aligned}
\end{equation}
If we can assume that \( V \) and its first and second derivatives are all continuous over this small interval, then the first integral is approximately
\begin{equation}\label{eqn:helmholtzGreens:1440}
\begin{aligned}
\int_{-\epsilon}^\epsilon e^{j k \Abs{r}} \lr{ k^2 + \frac{d^2}{dr^2} } V(x + r) dr
&\sim
\lr{ \lr{ k^2 + \frac{d^2}{dr^2} } V(x + r) } \int_{-\epsilon}^\epsilon e^{j k \Abs{r}} dr \\
&= \frac{2 j}{k} \lr{ 1 – e^{ j k \epsilon} } \lr{ \lr{ k^2 + \frac{d^2}{dr^2} } V(x + r) } \\
&\rightarrow 0.
\end{aligned}
\end{equation}
Similarly, given the \( dV/dr \) continuity condition, that last term is also zero. For \( \epsilon \) sufficiently small, we are left with
\begin{equation}\label{eqn:helmholtzGreens:1460}
\lr{ \spacegrad^2 + k^2 }\int_{\Abs{x’- x} \le \epsilon} e^{j k \Abs{x – x’}} V(x’) dx’ = V(x) \evalrange{ \frac{d}{dr} e^{j k \Abs{r}} }{-\epsilon}{\epsilon}.
\end{equation}
This is an extremely problematic derivative around the origin, however. The core problem is evaluating
\begin{equation}\label{eqn:helmholtzGreens:1480}
\frac{d}{dr} e^{j k \Abs{r}} = j k e^{j k \Abs{r} } \frac{d\Abs{r}}{dr}.
\end{equation}
In conventional mathematics, we’d have to say that this is undefined at the origin. In physics, on the other hand, where we play fast and loose with the mathematics, we express the absolute value in terms of Heaviside theta functions
\begin{equation}\label{eqn:helmholtzGreens:1500}
\Abs{r} = r \Theta(r) – r \Theta(-r).
\end{equation}
We may now take derivatives
\begin{equation}\label{eqn:helmholtzGreens:1520}
\begin{aligned}
\Abs{r}’
&= \Theta(r) – \Theta(-r) + r \delta(r) + r \delta(-r) \\
&= \mathrm{sgn}(r) + 2 r \delta(r).
\end{aligned}
\end{equation}
Evaluating \( e^{j k \Abs{r} } \Abs{r}’ \) over the \( [-\epsilon, \epsilon] \) range, we have
\begin{equation}\label{eqn:helmholtzGreens:1540}
\begin{aligned}
\evalrange{ e^{j k \Abs{r} } \Abs{r}’ }{-\epsilon}{\epsilon}
&= e^{j k \epsilon} \lr{ \evalrange{ \mathrm{sgn}(r) + 2 r \delta(r) }{-\epsilon}{\epsilon} } \\
&= e^{j k \epsilon} \lr{ 2 + 2 \epsilon \delta(\epsilon) }.
\end{aligned}
\end{equation}
Again, playing fast and loose, we evaluate this range before taking the limit, where \( \delta(\epsilon) = 0 \) for \( \epsilon > 0 \). We are left with
\begin{equation}\label{eqn:helmholtzGreens:1560}
\lim_{\epsilon \rightarrow 0} \lr{ \spacegrad^2 + k^2 }\int_{\Abs{x’- x} \le \epsilon} e^{j k \Abs{x – x’}} V(x’) dx’
=
2 j k V(x),
\end{equation}
provided \( V \) and its first and second derivatives are continuous.

Under those constraints, the implication is that one valid Green’s function for the 1D Helmholtz operator is
\begin{equation}\label{eqn:helmholtzGreens:1580}
G(r) = -\frac{j}{2k} e^{j k \Abs{r} }.
\end{equation}
The \( \mathrm{sgn}(r) \) scale factor that was part of the Green’s function that we derived using contour integration does not appear to be required.

What happens if we retain the sign function factor? Doing so, we have
\begin{equation}\label{eqn:helmholtzGreens:1600}
\begin{aligned}
\lr{ \spacegrad^2 + k^2 }\int_{\Abs{x’- x} \le \epsilon} \mathrm{sgn}(x – x’) e^{j k \Abs{x – x’}} V(x’) dx’
&=
-\int_{-\epsilon}^\epsilon \mathrm{sgn}(r) e^{j k \Abs{r}} \lr{ k^2 + \frac{d^2}{dr^2} } V(x + r) dr \\
&\quad
-
\evalrange{ V(x + r) \frac{d}{dr} \mathrm{sgn}(r) e^{j k \Abs{r}} }{-\epsilon}{\epsilon}
+
\evalrange{ \mathrm{sgn}(r) e^{j k \Abs{r}} \frac{d}{dr} V(x+r) }{-\epsilon}{\epsilon}.
\end{aligned}
\end{equation}
This time, we note that
\begin{equation}\label{eqn:helmholtzGreens:1620}
\int_{-\epsilon}^\epsilon \mathrm{sgn}(r) e^{j k \Abs{r}} dr = 0, \quad \forall \epsilon > 0,
\end{equation}
even without evaluating the limit. However, we have problems with the other two terms. The last term doesn’t zero out as desired, instead
\begin{equation}\label{eqn:helmholtzGreens:1640}
\evalrange{ \mathrm{sgn}(r) e^{j k \Abs{r}} \frac{d}{dr} V(x+r) }{-\epsilon}{\epsilon} \rightarrow 2 V'(x).
\end{equation}
To evaluate the \( V(x) \) factor, we write
\begin{equation}\label{eqn:helmholtzGreens:1660}
\mathrm{sgn}(r) = \Theta(r) – \Theta(-r),
\end{equation}
so
\begin{equation}\label{eqn:helmholtzGreens:1680}
\begin{aligned}
\mathrm{sgn}(r)’
&= \delta(r) + \delta(-r) \\
&= 2 \delta(r).
\end{aligned}
\end{equation}
That means that
\begin{equation}\label{eqn:helmholtzGreens:1700}
\begin{aligned}
\frac{d}{dr} \lr{ \mathrm{sgn}(r) e^{j k \Abs{r}} }
&=
2 \delta(r) e^{j k \Abs{r}} + j k \mathrm{sgn}(r) e^{j k \Abs{r}} \lr{ \mathrm{sgn}(r) + 2 r \delta(r) } \\
&=
e^{j k \Abs{r}} \lr{ 2 \delta(r) + j k \lr{ 1 + 2 r \mathrm{sgn}(r) \delta(r)} } \\
&=
j k e^{j k \Abs{r}},
\end{aligned}
\end{equation}
for \( r \ne 0 \), so
\begin{equation}\label{eqn:helmholtzGreens:1760}
-\evalrange{ V(x + r) \frac{d}{dr} \mathrm{sgn}(r) e^{j k \Abs{r}} }{-\epsilon}{\epsilon}
\rightarrow -V(x) j k \lr{ e^{j k \epsilon} – e^{j k \epsilon} } = 0.
\end{equation}

All in, we are left with
\begin{equation}\label{eqn:helmholtzGreens:1720}
\lim_{\epsilon \rightarrow 0} \lr{ \spacegrad^2 + k^2 }\int_{\Abs{x’- x} \le \epsilon} \frac{-j}{2k} \mathrm{sgn}(x – x’) e^{j k \Abs{x – x’}} V(x’) dx’
= -\frac{j}{k} V'(x),
\end{equation}
but for a Green’s function, we expected just \( V(x) \).
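A numerical check (not part of the original post) confirms this: convolving the sgn-weighted kernel with a smooth source and applying the Helmholtz operator produces \( -(j/k) V'(x) \) rather than \( V(x) \):

```python
import numpy as np

# Convolve -(j/2k) sgn(x - x') e^{jk|x - x'|} with a Gaussian source V and
# apply (d^2/dx^2 + k^2); the result should track -(j/k) V'(x), not V(x).
k = 2.0
x = np.linspace(-8.0, 8.0, 1601)
dx = x[1] - x[0]
V = np.exp(-x**2)
Vp = -2 * x * np.exp(-x**2)            # analytic V'

d = x[:, None] - x[None, :]
G = (-1j / (2 * k)) * np.sign(d) * np.exp(1j * k * np.abs(d))
u = (G * dx) @ V

upp = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
lhs = upp + k**2 * u[1:-1]
target = (-1j / k) * Vp[1:-1]
print(np.max(np.abs((lhs - target)[100:-100])))
```

The interior mismatch is \( O(dx^2) \), so the sgn-weighted kernel really is differentiating the source instead of reproducing it.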

It seems that the sign factor in the contour integration result is definitively wrong. That result was
\begin{equation}\label{eqn:helmholtzGreens:300b}
G(u) = -\frac{j \mathrm{sgn}(u)}{2k} e^{j k \Abs{u}},
\end{equation}
but what we really want is
\begin{equation}\label{eqn:helmholtzGreens:1740}
\boxed{
G(u) = -\frac{j}{2k} e^{j k \Abs{u}}.
}
\end{equation}

Unfortunately, I don’t see any errors in the original contour integration, so I’m at a loss where things went wrong.

An “easy” integral from Jackson’s electrodynamics

September 6, 2025 math and physics play


[Click here for a PDF version of this post]

Once again, I was reading my Jackson [1], which characteristically had the statement “the […] integral can easily be shown to have the value \( 4 \pi \)”, in a discussion of electrostatic energy and self energy.

The integral is
\begin{equation}\label{eqn:selfEnergyIntegral:20}
I = \int \frac{\Brho}{\rho^3} \cdot \frac{\Brho + \Bn}{\Norm{\Brho + \Bn}^3} d^3 \rho.
\end{equation}

This is something that I figured out once before (see [2] appendix C). However, trying to do it a second time around, I think that I found the “easy” way.

As Jackson hints, the starting point is
\begin{equation}\label{eqn:selfEnergyIntegral:40}
\frac{\Bx}{\Norm{\Bx}^3}
=
-\spacegrad \inv{\Norm{\Bx}},
\end{equation}
but we don’t have to apply it to both the vector terms, as I did in my initial attempt (which results in a Laplacian to reduce.) Inserting this and applying chain rule, we find
\begin{equation}\label{eqn:selfEnergyIntegral:60}
\begin{aligned}
I
&= -\int \frac{\Brho}{\rho^3} \cdot \spacegrad_\Brho \inv{\Norm{\Brho + \Bn}} d^3 \rho \\
&=
-\int
\spacegrad_\Brho \cdot \lr{
\frac{\Brho}{\rho^3}
\inv{\Norm{\Brho + \Bn}}
}
d^3 \rho
+
\int
\lr{
\spacegrad_\Brho \cdot
\frac{\Brho}{\rho^3}
}
\inv{\Norm{\Brho + \Bn}}
d^3 \rho.
\end{aligned}
\end{equation}

The first integral can be evaluated using an infinite spherical shell
\begin{equation}\label{eqn:selfEnergyIntegral:80}
\begin{aligned}
I_1
&= -\int
\spacegrad_\Brho \cdot \lr{
\frac{\Brho}{\rho^3}
\inv{\Norm{\Brho + \Bn}}
}
d^3 \rho \\
&=
\lim_{\rho \rightarrow \infty}
-\frac{\Brho}{\rho} \cdot \lr{
\frac{\Brho}{\rho^3}
\inv{\Norm{\Brho + \Bn}}
} 4 \pi \rho^2 \\
&=
\lim_{\rho \rightarrow \infty}
\frac{-4 \pi}{\Norm{\Brho + \Bn}} \\
&=
0.
\end{aligned}
\end{equation}

The divergence term in the second integral, provided \( \Bx \ne 0 \), has the form
\begin{equation}\label{eqn:selfEnergyIntegral:100}
\begin{aligned}
\spacegrad \cdot \frac{\Bx}{\Norm{\Bx}^3}
&=
\inv{\Norm{\Bx}^3} \spacegrad \cdot \Bx
+
\lr{ \Bx \cdot \spacegrad } \inv{\Norm{\Bx}^3} \\
&=
\frac{3}{\Norm{\Bx}^3}
+
2 \frac{x_k x_j}{\Norm{\Bx}^5} \lr{-\frac{3}{2}} \partial_k x_j \\
&=
\frac{3}{\Norm{\Bx}^3}
– \frac{3}{\Norm{\Bx}^3} \\
&= 0.
\end{aligned}
\end{equation}
However, in a neighbourhood of the origin, this actually has a delta function structure. We can see that from Gauss’s law, where we have
\begin{equation}\label{eqn:selfEnergyIntegral:120}
\spacegrad \cdot \BE = \frac{\rho}{\epsilon_0}.
\end{equation}
If we plug in the integral representation of \( \BE \) on the LHS, we have
\begin{equation}\label{eqn:selfEnergyIntegral:140}
\begin{aligned}
\spacegrad \cdot \BE
&=
\spacegrad \cdot \int \frac{\rho(\Bx’)}{4 \pi \epsilon_0} \frac{\Bx – \Bx’}{\Norm{\Bx – \Bx’}^3} d^3 x’ \\
&=
\int \frac{\rho(\Bx’)}{4 \pi \epsilon_0} \spacegrad \cdot \frac{\Bx – \Bx’}{\Norm{\Bx – \Bx’}^3} d^3 x’ \\
&=
-\int \frac{\rho(\Bx’)}{4 \pi \epsilon_0} \spacegrad’ \cdot \frac{\Bx – \Bx’}{\Norm{\Bx – \Bx’}^3} d^3 x’.
\end{aligned}
\end{equation}
Comparing the LHS and RHS, we must have
\begin{equation}\label{eqn:selfEnergyIntegral:160}
\spacegrad’ \cdot \frac{\Bx’ – \Bx}{\Norm{\Bx’ – \Bx}^3} = 4 \pi \delta^3\lr{\Bx’ – \Bx}.
\end{equation}

We can now substitute that into the second integral, and use the fact that \( \Bn \) is a unit vector, to find
\begin{equation}\label{eqn:selfEnergyIntegral:n}
\begin{aligned}
I_2 &=
\int
\lr{
\spacegrad_\Brho \cdot
\frac{\Brho}{\rho^3}
}
\inv{\Norm{\Brho + \Bn}}
d^3 \rho \\
&=
\frac{4 \pi}{\Norm{\Bn}} \\
&=
4 \pi.
\end{aligned}
\end{equation}
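None of this is in the original post, but the \( 4 \pi \) value is easy to confirm numerically. Taking \( \Bn \) along the \( z \)-axis with \( \Norm{\Bn} = 1 \), and writing \( \mu = \cos\theta \), spherical coordinates reduce the integral to \( I = 2 \pi \int_0^\infty d\rho \int_{-1}^1 d\mu\, (\rho + \mu)/(\rho^2 + 2 \rho \mu + 1)^{3/2} \). The inner (angular) integral works out to zero inside the unit sphere and \( 2/\rho^2 \) outside, so \( I = 2\pi \int_1^\infty (2/\rho^2) d\rho = 4 \pi \). A numpy spot check of that angular integral:

```python
import numpy as np

def inner(rho, n=200001):
    # trapezoid rule for the mu integral at fixed rho:
    #   integral_{-1}^{1} (rho + mu) / (rho^2 + 2 rho mu + 1)^{3/2} dmu
    mu = np.linspace(-1.0, 1.0, n)
    dmu = mu[1] - mu[0]
    f = (rho + mu) / (rho**2 + 2 * rho * mu + 1)**1.5
    return dmu * (f.sum() - 0.5 * (f[0] + f[-1]))

print(inner(0.5))   # inside the unit sphere: approximately 0
print(inner(2.0))   # outside: approximately 2/rho^2 = 0.5
```

(The sample points deliberately avoid the integrable singularity at \( \rho = 1, \mu = -1 \), which is all the piecewise claim requires.)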

Sure enough, the integral has a \( 4 \pi \) value. But was that easy?  I think Hitler would disagree.

References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[2] Peeter Joot. Electromagnetic Theory. Kindle Direct Publishing, Toronto, 2016.

A Green’s function solution to falling with resistance problem.

January 30, 2025 math and physics play

[Click here for a PDF version of this post]

Motivation.

In a fun twitter/x post, we have a Green’s function solution to a constant acceleration problem with drag. The post is meant to be a joke, as the stated problem is: “A boy drops a ball from a height \( h \). What is the speed of the ball when it reaches the floor (no drag)?”

The joke is that nobody would solve this problem using Green’s functions, and nobody would solve the more general case, allowing for drag, using Green’s functions either. Instead, you’d just solve this using energy balance, which makes the problem trivial.

That said, there are actually lots of cool ideas in the Green’s function side of the joke solution.

So let’s play along with the joke and solve the general damped problem with Green’s functions. Along the way, we can fill in the missing details, and also explore some supplemental ideas that are worth understanding.

Setup.

The equation of motion is
\begin{equation}\label{eqn:greensDropWithResistance:20}
m \frac{d^2 \Bx}{dt^2} = – \gamma \frac{d \Bx}{dt} – m \Bg,
\end{equation}
where \( \Bg \) is the constant (positively oriented) gravitational acceleration. The first detail that needs to be included is that this isn’t the differential equation for the stated problem, and it will become problematic should we attempt to apply Green’s function methods. We have to account for the “boy drops” part of the problem statement, and solve with a different forcing function, namely
\begin{equation}\label{eqn:greensDropWithResistance:40}
m \frac{d^2 \Bx}{dt^2} = – \gamma \frac{d \Bx}{dt} – m \Bg \Theta(t).
\end{equation}
This revised model of the system begins the application of the constant (gravitational) force, at time \( t = 0 \). This is now a system that will yield to Green’s function methods.

Fourier transform solution.

The joke solution has strong hints that Fourier transform methods were part of the story. In particular, it appears that the following definitions of the transform pair were used
\begin{equation}\label{eqn:greensDropWithResistance:60}
\begin{aligned}
\hatU(\omega) = F(u(t)) &= \int_{-\infty}^\infty u(t) e^{-i\omega t} dt \\
u(t) = F^{-1}(\hatU(\omega)) &= \inv{2\pi} \int_{-\infty}^\infty \hatU(\omega) e^{i\omega t} d\omega.
\end{aligned}
\end{equation}
However, if we are using Fourier transforms, why bother with Green’s functions? Instead, we can just solve for the system response using Fourier transforms. When looking for the system response, we usually pose the problem with more generality. For example, instead of the specific theta-weighted constant gravitational forcing function above, we seek to find the solution of
\begin{equation}\label{eqn:greensDropWithResistance:80}
m \frac{d^2 \Bx}{dt^2} + \gamma \frac{d \Bx}{dt} = \BF(t).
\end{equation}
We start by assuming that the Fourier transforms of \( \Bx(t), \BF(t) \) are \( \hat{\BX}(\omega), \hat{\BF}(\omega) \) so
\begin{equation}\label{eqn:greensDropWithResistance:100}
\Bx(t) = \inv{2\pi} \int_{-\infty}^\infty e^{i\omega t} \hat{\BX}(\omega) d\omega.
\end{equation}
Derivatives of this presumed Fourier representation are trivial
\begin{equation}\label{eqn:greensDropWithResistance:120}
\begin{aligned}
\Bx'(t) &= \inv{2\pi} \int_{-\infty}^\infty \lr{ i\omega } e^{i\omega t} \hat{\BX}(\omega) d\omega \\
\Bx”(t) &= \inv{2\pi} \int_{-\infty}^\infty \lr{ i\omega }^2 e^{i\omega t} \hat{\BX}(\omega) d\omega,
\end{aligned}
\end{equation}
so the frequency representation of our system is
\begin{equation}\label{eqn:greensDropWithResistance:140}
\inv{2\pi} \int_{-\infty}^\infty \lr{ m \lr{ i\omega }^2 + \gamma \lr{ i\omega} } e^{i\omega t} \hat{\BX}(\omega) d\omega
=
\inv{2\pi} \int_{-\infty}^\infty e^{i\omega t} \hat{\BF}(\omega) d\omega,
\end{equation}
or
\begin{equation}\label{eqn:greensDropWithResistance:160}
\hat{\BX}(\omega) = \frac{\hat{\BF}(\omega)}{-m \omega^2 + i \omega \gamma}.
\end{equation}
We now only have to inverse Fourier transform to find a solution, namely
\begin{equation}\label{eqn:greensDropWithResistance:180}
\begin{aligned}
\Bx(t)
&= \inv{2\pi} \int_{-\infty}^\infty e^{i\omega t} \frac{\hat{\BF}(\omega)}{-m \omega^2 + i \omega \gamma} d\omega \\
&= \inv{2\pi} \int_{-\infty}^\infty e^{i\omega t} \frac{1}{-m \omega^2 + i \omega \gamma} d\omega
\int_{-\infty}^\infty \BF(t’) e^{-i \omega t’} dt’ \\
&= \int_{-\infty}^\infty \lr{ -\inv{2\pi} \int_{-\infty}^\infty \frac{ e^{i\omega (t-t’)} }{m \omega^2 – i \omega \gamma} d\omega } \BF(t’) dt’,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:greensDropWithResistance:200}
\Bx(t) = \int_{-\infty}^\infty G(t – t’) \BF(t’) dt’,
\end{equation}
where
\begin{equation}\label{eqn:greensDropWithResistance:220}
G(\tau) = -\inv{2\pi} \int_{-\infty}^\infty \frac{ e^{i\omega \tau} }{\omega\lr{ m \omega – i \gamma}} d\omega.
\end{equation}

We’ve been fast and loose above, swapping order of integration without proper justification, and have assumed that all Fourier transforms and inverse transforms exist. Given all those assumptions, we now have a general solution for the system, requiring only the convolution of our driving force \( \BF(t) \) with the system response function \( G(t) \). The only caveat is that we have to be able to perform the integral for the system response function, and that integral does not exist.

There are lots of integrals that do not strictly exist when playing the fast and loose physicist game with Fourier transforms. One such example can be found by looking at any transform pair. For example, given \( u(t) = F^{-1}(\hatU(\omega)) \), we have
\begin{equation}\label{eqn:greensDropWithResistance:240}
\begin{aligned}
u(t)
&= \inv{2\pi} \int_{-\infty}^\infty \hatU(\omega) e^{i\omega t} d\omega \\
&= \inv{2\pi} \int_{-\infty}^\infty \lr{ \int_{-\infty}^\infty u(t’) e^{-i\omega t’} dt’ } e^{i\omega t} d\omega \\
&= \int_{-\infty}^\infty u(t’) \lr{ \inv{2\pi} \int_{-\infty}^\infty e^{i\omega (t-t’)} d\omega } dt’.
\end{aligned}
\end{equation}
This is exactly the sort of integration order swapping that we did to find the system response function above, and we are left with a statement that \( u(t) \) is the convolution of \( u(t) \), with another, also non-integrable, convolution kernel. Any physics student will recognize that kernel as a representation of the Dirac delta function, and without blinking, would just write
\begin{equation}\label{eqn:greensDropWithResistance:260}
\delta(\tau) = \inv{2\pi} \int_{-\infty}^\infty e^{i\omega \tau} d\omega,
\end{equation}
without worrying that it is not possible to evaluate this integral. Somebody who is trying to use the right mathematical language would say that this isn’t a function, but is, instead, a distribution. Just like this delta function distribution, our system response integral, something that we also cannot actually evaluate in a strict sense, is a distribution. It’s a beastie that has delta function like characteristics, and if we want to try to integrate it, we have to play sneaky games.
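We can see those delta function semantics numerically (a sketch, not part of the original post): truncating the frequency integral at \( \pm \Omega \) gives the nascent delta \( \sin(\Omega \tau)/(\pi \tau) \), and integrating that kernel against a test function approaches \( f(0) \) as \( \Omega \) grows:

```python
import numpy as np

# (1/2 pi) * integral_{-Omega}^{Omega} e^{i omega tau} d omega
#   = sin(Omega tau) / (pi tau),
# a nascent delta; integrated against f it should approach f(0).
Omega = 50.0
tau = np.linspace(-10.0, 10.0, 200001)
dtau = tau[1] - tau[0]
kernel = (Omega / np.pi) * np.sinc(Omega * tau / np.pi)   # sin(Omega tau)/(pi tau)
f = np.exp(-tau**2)                                        # f(0) = 1

approx_f0 = dtau * (kernel * f).sum()
print(approx_f0)   # approximately 1
```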

Let’s put off evaluating that integral for now, and return to the Green’s function description of the story.

The Green’s function picture.

Using Fourier transforms, we found that it is theoretically possible to find a convolution solution to the system, and found the convolution kernel for the system. The rough idea behind Green’s functions is to assume that such a convolution exists, say
\begin{equation}\label{eqn:greensDropWithResistance:280}
\Bx(t) = \Bx_0(t) + \int_{-\infty}^\infty G(t,t’) \BF(t’) dt’,
\end{equation}
where \( \Bx_0(t) \) is any solution of the homogeneous problem satisfying, in this case,
\begin{equation}\label{eqn:greensDropWithResistance:300}
m \frac{d^2}{dt^2} \Bx_0(t) + \gamma \frac{d}{dt} \Bx_0(t) = 0,
\end{equation}
and \( G(t,t’) \) is a convolution kernel, representing the system response, to be determined.
If we plug this presumed solution into our differential equation, we find
\begin{equation}\label{eqn:greensDropWithResistance:320}
\int_{-\infty}^\infty \lr{
m \frac{\partial^2}{\partial t^2} G(t,t’)
+ \gamma \frac{\partial}{\partial t} G(t,t’)
} \BF(t’) dt’
=
\BF(t),
\end{equation}
but
\begin{equation}\label{eqn:greensDropWithResistance:340}
\BF(t) = \int_{-\infty}^\infty \BF(t’) \delta(t – t’) dt’,
\end{equation}
so, if we can find \( G \) satisfying
\begin{equation}\label{eqn:greensDropWithResistance:360}
m \frac{\partial^2}{\partial t^2} G(t,t’) + \gamma \frac{\partial}{\partial t} G(t,t’) = \delta(t – t’),
\end{equation}
then we have solved the system. We can simplify this slightly by presuming that the \( t,t’ \) dependence is always a difference, and seek \( G(\tau) \) such that
\begin{equation}\label{eqn:greensDropWithResistance:380}
m G”(\tau) + \gamma G'(\tau) = \delta(\tau).
\end{equation}
We now pull the Fourier transform out of our toolbox again, assuming that
\begin{equation}\label{eqn:greensDropWithResistance:400}
G(\tau) = \inv{2 \pi} \int_{-\infty}^\infty \hat{G}(\omega) e^{i\omega\tau} d\omega,
\end{equation}
for which
\begin{equation}\label{eqn:greensDropWithResistance:420}
\inv{2 \pi} \int_{-\infty}^\infty \lr{ m \lr{ i \omega }^2 + \gamma \lr{ i \omega } } \hat{G}(\omega) e^{i\omega \tau} d\omega
=
\inv{2 \pi } \int_{-\infty}^\infty e^{i\omega \tau} d\omega,
\end{equation}
or
\begin{equation}\label{eqn:greensDropWithResistance:440}
\hat{G}(\omega) = \inv{ m \lr{ i \omega }^2 + \gamma \lr{ i \omega } }.
\end{equation}
This is the Fourier transform of the Green’s function, and is exactly what we found earlier using pure Fourier transforms. Our starting point was different this time, as we just blatantly assumed that the solution had a convolution structure. We then found a differential equation for that convolution kernel, the Green’s function. Only then did we pull the Fourier transform out of the toolbox to attempt to find the structure of that Green’s function.

Evaluating the Green’s function integral.

We can’t go any further without figuring out what to do with our nasty little divergent integral \ref{eqn:greensDropWithResistance:220}. We may coerce this into something that we can evaluate using standard contour integration, if we offset the pole at the origin slightly. Given \( \epsilon > 0 \), let’s evaluate
\begin{equation}\label{eqn:greensDropWithResistance:460}
G(\tau, \epsilon) = -\inv{2\pi} \oint \frac{ e^{i z \tau} }{\lr{ z – i \epsilon } \lr{ m z – i \gamma}} dz.
\end{equation}
We can evaluate this integral using infinite semicircular contours, using an upper half plane contour for \( \tau > 0 \) and a lower half plane contour for \( \tau < 0 \), as illustrated in fig. 1 and fig. 2.

fig. 1. Contour for tau > 0.

fig. 2. Contour for tau < 0.

By Jordan’s lemma, the infinite semicircular part of the contour integral is zero in the upper half plane for the \( \tau > 0 \) case, and in the lower half plane for the \( \tau < 0 \) case. We can proceed with the residue calculations. In the upper half plane, we have both of the enclosed poles, so
\begin{equation}\label{eqn:greensDropWithResistance:480}
\begin{aligned}
G(\tau > 0, \epsilon)
&= -\inv{2\pi m } \int_{-\infty}^\infty \frac{ e^{i \omega \tau} }{\lr{ \omega – i \epsilon } \lr{ \omega – i \gamma/m}} d\omega \\
&= -\frac{ 2 \pi i }{ 2 \pi m} \lr{
\evalbar{ \frac{ e^{i z \tau} }{ z – i \gamma/m} }{z = i \epsilon}
+
\evalbar{ \frac{ e^{i z \tau} }{ z – i \epsilon } }{ z = i \gamma/m}
} \\
&=
-\frac{i}{m} \lr{
\frac{ e^{-\epsilon \tau} }{ i \epsilon – i \gamma/m}
+
\frac{ e^{-\gamma\tau/m} }{ i \gamma/m – i \epsilon }
} \\
&=
-\lr{
\frac{e^{-\epsilon \tau}}{ m \epsilon – \gamma }
+
\frac{ e^{-\gamma\tau/m} }{ \gamma – m \epsilon }
},
\end{aligned}
\end{equation}
and for the lower half plane, where there are no enclosed poles, we have \( G(\tau < 0, \epsilon) = 0 \). In the \( \epsilon \rightarrow 0 \) limit, we are left with
\begin{equation}\label{eqn:greensDropWithResistance:500}
G(\tau) = \inv{\gamma} \lr{ 1 – e^{-\gamma \tau/m} } \Theta(\tau).
\end{equation}
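It’s easy to verify this candidate Green’s function with sympy (not part of the original post): it satisfies the homogeneous equation for \( \tau > 0 \), vanishes at \( \tau = 0 \) matching the \( \tau < 0 \) branch, and its first derivative jumps by \( 1/m \), which is exactly what \( m G'' + \gamma G' = \delta(\tau) \) requires:

```python
import sympy as sp

tau, m, gamma = sp.symbols('tau m gamma', positive=True)
G = (1 - sp.exp(-gamma * tau / m)) / gamma      # the tau > 0 branch

# homogeneous for tau > 0
assert sp.simplify(m * sp.diff(G, tau, 2) + gamma * sp.diff(G, tau)) == 0
# continuous at tau = 0, matching G = 0 for tau < 0 ...
assert G.subs(tau, 0) == 0
# ... while G' jumps from 0 to 1/m, so m G'' integrates to a unit delta
assert sp.simplify(sp.diff(G, tau).subs(tau, 0) - 1 / m) == 0
```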

Back to the original problem.

We may now go and find the specific solution for the original problem where \( F(t) = – m g \Be_2 \Theta(t) \). That solution is
\begin{equation}\label{eqn:greensDropWithResistance:520}
\begin{aligned}
\Bx(t)
&= \Bx(0) + \int_{-\infty}^\infty G(t – t’) \lr{ – m g \Be_2 \Theta(t’) } dt’ \\
&= \Bx(0) – m g \Be_2 \int_{-\infty}^\infty \frac{\Theta(t – t’)}{\gamma} \lr{ 1 – e^{-\gamma \lr{ t – t’}/m } } \Theta(t’) dt’ \\
&= \Bx(0) – m g \Be_2 \int_{0}^\infty \frac{\Theta(t – t’)}{\gamma} \lr{ 1 – e^{-\gamma \lr{ t – t’}/m } } dt’ \\
&= \Bx(0) – \frac{m g}{\gamma} \Be_2 \int_{0}^t \lr{ 1 – e^{-\gamma \lr{ t – t’}/m } } dt’ \\
&= \Bx(0) – \frac{m g}{\gamma} \Be_2 \int_0^t \lr{ 1 – e^{-\gamma u/m } } du \\
&= \Bx(0) – \frac{m g}{\gamma} \Be_2 \evalrange{ \lr{ u – \frac{e^{-\gamma u/m } }{-\gamma/m} } }{u=0}{t} \\
&= \Bx(0) – \frac{m g}{\gamma} \Be_2 \lr{ t + \frac{m e^{-\gamma t/m }}{\gamma} – \frac{m}{\gamma} } \\
&= \Bx(0) – \frac{m g t}{\gamma} \Be_2 + \frac{m^2 g}{\gamma^2} \Be_2 \lr{ 1 – e^{-\gamma t/m } }.
\end{aligned}
\end{equation}

Ignoring the missing factor of \( g \) on the last term in the twitter slide, this is the final result before the limiting argument on that slide.
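The signs are easy to get wrong here, so here is a quick sympy check (not in the original post) that the \( \Be_2 \) component of the displacement, written as \( y(t) = -m g t/\gamma + (m^2 g/\gamma^2)\lr{1 - e^{-\gamma t/m}} \), satisfies the equation of motion with the ball released from rest:

```python
import sympy as sp

t, m, g, gamma = sp.symbols('t m g gamma', positive=True)
# e_2 component of x(t) - x(0) from the convolution result
y = -m * g * t / gamma + m**2 * g / gamma**2 * (1 - sp.exp(-gamma * t / m))

v = sp.diff(y, t)
# equation of motion m y'' = -gamma y' - m g, starting at rest from y = 0
assert sp.simplify(m * sp.diff(v, t) + gamma * v + m * g) == 0
assert v.subs(t, 0) == 0
assert y.subs(t, 0) == 0
```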

Having found the Green’s function for this system, we could then, fairly trivially, use it to solve similar systems with different forcing functions. For example, suppose we have a mass on a table, with friction, and a forcing function (perhaps sinusoidal) moving that mass. We could then figure out the time response for that particular forcing function, and would only have a convolution integral to evaluate. That general applicability is one of the beauties of these transform or Green’s function methods.