
A Green's function solution to the falling with resistance problem.

January 30, 2025 math and physics play


Motivation.

In a fun twitter/x post, we have a Green’s function solution to a constant acceleration problem with drag. The post is meant to be a joke, as the stated problem is: “A boy drops a ball from a height \( h \). What is the speed of the ball when it reaches the floor (no drag)?”

The joke is that nobody would solve this problem using Green's functions, nor would they use Green's functions for the more general case that allows for drag. Instead, you'd just solve the stated problem using energy balance, which makes it trivial.

That said, there are actually lots of cool ideas lurking in the Green's function method used in the joke solution.

So let’s play along with the joke and solve the general damped problem with Green’s functions. Along the way, we can fill in the missing details, and also explore some supplemental ideas that are worth understanding.

Setup.

The equation of motion is
\begin{equation}\label{eqn:greensDropWithResistance:20}
m \frac{d^2 \Bx}{dt^2} = - \gamma \frac{d \Bx}{dt} - m \Bg,
\end{equation}
where \( \Bg \) is a constant (positively oriented) gravitational acceleration. The first detail that needs attention is that this isn't the differential equation for the stated problem, and it will become problematic should we attempt to apply Green's function methods. We have to account for the “boy drops” part of the problem statement, and solve with a different forcing function, namely
\begin{equation}\label{eqn:greensDropWithResistance:40}
m \frac{d^2 \Bx}{dt^2} = - \gamma \frac{d \Bx}{dt} - m \Bg \Theta(t).
\end{equation}
This revised model of the system begins the application of the constant (gravitational) force, at time \( t = 0 \). This is now a system that will yield to Green’s function methods.

Fourier transform solution.

The joke solution has strong hints that Fourier transform methods were part of the story. In particular, it appears that the following definitions of the transform pair were used
\begin{equation}\label{eqn:greensDropWithResistance:60}
\begin{aligned}
\hatU(\omega) = F(u(t)) &= \int_{-\infty}^\infty u(t) e^{-i\omega t} dt \\
u(t) = F^{-1}(\hatU(\omega)) &= \inv{2\pi} \int_{-\infty}^\infty \hatU(\omega) e^{i\omega t} d\omega.
\end{aligned}
\end{equation}
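As a quick numerical plausibility check of this convention (my aside, not part of the joke solution), we can compare the forward transform of a Gaussian against its known closed form: with these conventions, \( u(t) = e^{-t^2/2} \) has \( \hatU(\omega) = \sqrt{2\pi}\, e^{-\omega^2/2} \). A minimal sketch, using a direct Riemann sum for the integral:

```python
import numpy as np

# Gaussian test signal; with the transform conventions above,
# its transform is sqrt(2 pi) exp(-w^2 / 2).
t = np.linspace(-50, 50, 20001)
dt = t[1] - t[0]
u = np.exp(-t**2 / 2)

for w in (0.0, 0.5, 1.0, 2.0):
    uhat = np.sum(u * np.exp(-1j * w * t)) * dt  # forward transform at frequency w
    print(w, uhat.real, np.sqrt(2 * np.pi) * np.exp(-w**2 / 2))
```

The two printed columns agree closely, which gives some confidence that no stray \( 2\pi \) factors have crept in.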
However, if we are using Fourier transforms, why bother with Green’s functions? Instead, we can just solve for the system response using Fourier transforms. When looking for the system response, we usually pose the problem with more generality. For example, instead of the specific theta-weighted constant gravitational forcing function above, we seek to find the solution of
\begin{equation}\label{eqn:greensDropWithResistance:80}
m \frac{d^2 \Bx}{dt^2} + \gamma \frac{d \Bx}{dt} = \BF(t).
\end{equation}
We start by assuming that the Fourier transforms of \( \Bx(t), \BF(t) \) are \( \hat{\BX}(\omega), \hat{\BF}(\omega) \) so
\begin{equation}\label{eqn:greensDropWithResistance:100}
\Bx(t) = \inv{2\pi} \int_{-\infty}^\infty e^{i\omega t} \hat{\BX}(\omega) d\omega.
\end{equation}
Derivatives of this presumed Fourier representation are trivial
\begin{equation}\label{eqn:greensDropWithResistance:120}
\begin{aligned}
\Bx'(t) &= \inv{2\pi} \int_{-\infty}^\infty \lr{ i\omega } e^{i\omega t} \hat{\BX}(\omega) d\omega \\
\Bx''(t) &= \inv{2\pi} \int_{-\infty}^\infty \lr{ i\omega }^2 e^{i\omega t} \hat{\BX}(\omega) d\omega,
\end{aligned}
\end{equation}
so the frequency representation of our system is
\begin{equation}\label{eqn:greensDropWithResistance:140}
\inv{2\pi} \int_{-\infty}^\infty \lr{ m \lr{ i\omega }^2 + \gamma \lr{ i\omega} } e^{i\omega t} \hat{\BX}(\omega) d\omega
=
\inv{2\pi} \int_{-\infty}^\infty e^{i\omega t} \hat{\BF}(\omega) d\omega,
\end{equation}
or
\begin{equation}\label{eqn:greensDropWithResistance:160}
\hat{\BX}(\omega) = \frac{\hat{\BF}(\omega)}{-m \omega^2 + i \omega \gamma}.
\end{equation}
We now only have to inverse Fourier transform to find a solution, namely
\begin{equation}\label{eqn:greensDropWithResistance:180}
\begin{aligned}
\Bx(t)
&= \inv{2\pi} \int_{-\infty}^\infty e^{i\omega t} \frac{\hat{\BF}(\omega)}{-m \omega^2 + i \omega \gamma} d\omega \\
&= \inv{2\pi} \int_{-\infty}^\infty e^{i\omega t} \frac{1}{-m \omega^2 + i \omega \gamma} d\omega
\int_{-\infty}^\infty \BF(t') e^{-i \omega t'} dt' \\
&= \int_{-\infty}^\infty \lr{ -\inv{2\pi} \int_{-\infty}^\infty \frac{ e^{i\omega (t-t')} }{m \omega^2 - i \omega \gamma} d\omega } \BF(t') dt',
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:greensDropWithResistance:200}
\Bx(t) = \int_{-\infty}^\infty G(t - t') \BF(t') dt',
\end{equation}
where
\begin{equation}\label{eqn:greensDropWithResistance:220}
G(\tau) = -\inv{2\pi} \int_{-\infty}^\infty \frac{ e^{i\omega \tau} }{\omega\lr{ m \omega - i \gamma}} d\omega.
\end{equation}

We've been fast and loose above, swapping order of integration without proper justification, and have assumed that all the Fourier transforms and inverse transforms exist. Given all those assumptions, we now have a general solution for the system, requiring only the convolution of our driving force \( \BF(t) \) with the system response function \( G(t) \). The only caveat is that we have to be able to perform the integral for the system response function, and that integral, as written, does not exist, since the integrand has a pole at \( \omega = 0 \) sitting right on the integration path.

There are lots of integrals that do not strictly exist when playing the fast and loose physicist game with Fourier transforms. One such example can be found by looking at any transform pair. For example, given \( u(t) = F^{-1}(\hatU(\omega)) \), we have
\begin{equation}\label{eqn:greensDropWithResistance:240}
\begin{aligned}
u(t)
&= \inv{2\pi} \int_{-\infty}^\infty \hatU(\omega) e^{i\omega t} d\omega \\
&= \inv{2\pi} \int_{-\infty}^\infty \lr{ \int_{-\infty}^\infty u(t') e^{-i\omega t'} dt' } e^{i\omega t} d\omega \\
&= \int_{-\infty}^\infty u(t') \lr{ \inv{2\pi} \int_{-\infty}^\infty e^{i\omega (t-t')} d\omega } dt'.
\end{aligned}
\end{equation}
This is exactly the sort of integration order swapping that we did to find the system response function above, and we are left with a statement that \( u(t) \) is the convolution of \( u(t) \) with another, also non-integrable, convolution kernel. Any physics student will recognize that kernel as a representation of the Dirac delta function, and without blinking, would just write
\begin{equation}\label{eqn:greensDropWithResistance:260}
\delta(\tau) = \inv{2\pi} \int_{-\infty}^\infty e^{i\omega \tau} d\omega,
\end{equation}
without worrying that it is not possible to evaluate this integral. Somebody who is trying to use the right mathematical language would say that this isn't a function, but is, instead, a distribution. Just like this delta function distribution, our system response integral, something that we also cannot actually evaluate in a strict sense, is a distribution. It's a beastie that has delta function like characteristics, and if we want to try to integrate it, we have to play sneaky games.
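One standard way to make sense of such a kernel (a supplemental detail, not part of the original joke thread) is to truncate the integration range. For any finite cutoff \( \Omega \),
\begin{equation}
\inv{2\pi} \int_{-\Omega}^\Omega e^{i\omega \tau} d\omega = \frac{\sin( \Omega \tau )}{\pi \tau},
\end{equation}
which is a perfectly well behaved function that has unit area for any \( \Omega > 0 \), and becomes ever more concentrated around \( \tau = 0 \) as \( \Omega \rightarrow \infty \). The delta function is the distributional limit of this nascent delta sequence.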

Let’s put off evaluating that integral for now, and return to the Green’s function description of the story.

The Green’s function picture.

Using Fourier transforms, we found that it is theoretically possible to find a convolution solution to the system, and we found the convolution kernel for the system. The rough idea behind Green's functions is to assume that such a convolution exists, say
\begin{equation}\label{eqn:greensDropWithResistance:280}
\Bx(t) = \Bx_0(t) + \int_{-\infty}^\infty G(t,t') \BF(t') dt',
\end{equation}
where \( \Bx_0(t) \) is any solution of the homogeneous problem satisfying, in this case,
\begin{equation}\label{eqn:greensDropWithResistance:300}
m \frac{d^2}{dt^2} \Bx_0(t) + \gamma \frac{d}{dt} \Bx_0(t) = 0,
\end{equation}
and \( G(t,t') \) is a convolution kernel, representing the system response, to be determined.
If we plug this presumed solution into our differential equation, we find
\begin{equation}\label{eqn:greensDropWithResistance:320}
\int_{-\infty}^\infty \lr{
m \frac{\partial^2}{\partial t^2} G(t,t')
+ \gamma \frac{\partial}{\partial t} G(t,t')
} \BF(t') dt'
=
\BF(t),
\end{equation}
but
\begin{equation}\label{eqn:greensDropWithResistance:340}
\BF(t) = \int_{-\infty}^\infty \BF(t') \delta(t - t') dt',
\end{equation}
so, if we can find \( G \) satisfying
\begin{equation}\label{eqn:greensDropWithResistance:360}
m \frac{\partial^2}{\partial t^2} G(t,t') + \gamma \frac{\partial}{\partial t} G(t,t') = \delta(t - t'),
\end{equation}
then we have solved the system. We can simplify this slightly by presuming that the \( t,t' \) dependence is always a difference, and seek \( G(\tau) \) such that
\begin{equation}\label{eqn:greensDropWithResistance:380}
m G''(\tau) + \gamma G'(\tau) = \delta(\tau).
\end{equation}
We now pull the Fourier transform out of our toolbox again, assuming that
\begin{equation}\label{eqn:greensDropWithResistance:400}
G(\tau) = \inv{2 \pi} \int_{-\infty}^\infty \hat{G}(\omega) e^{i\omega\tau} d\omega,
\end{equation}
for which
\begin{equation}\label{eqn:greensDropWithResistance:420}
\inv{2 \pi} \int_{-\infty}^\infty \lr{ m \lr{ i \omega }^2 + \gamma \lr{ i \omega } } \hat{G}(\omega) e^{i\omega \tau} d\omega
=
\inv{2 \pi } \int_{-\infty}^\infty e^{i\omega \tau} d\omega,
\end{equation}
or
\begin{equation}\label{eqn:greensDropWithResistance:440}
\hat{G}(\omega) = \inv{ m \lr{ i \omega }^2 + \gamma \lr{ i \omega } }.
\end{equation}
This is the Fourier transform of the Green’s function, and is exactly what we found earlier using pure Fourier transforms. Our starting point was different this time, as we just blatantly assumed that the solution had a convolution structure. We then found a differential equation for that convolution kernel, the Green’s function. Only then did we pull the Fourier transform out of the toolbox to attempt to find the structure of that Green’s function.

Evaluating the Green’s function integral.

We can’t go any further without figuring out what to do with our nasty little divergent integral \ref{eqn:greensDropWithResistance:220}. We may coerce this into something that we can evaluate using standard contour integration, if we offset the pole at the origin slightly. Given \( \epsilon > 0 \), let’s evaluate
\begin{equation}\label{eqn:greensDropWithResistance:460}
G(\tau, \epsilon) = -\inv{2\pi} \oint \frac{ e^{i z \tau} }{\lr{ z - i \epsilon } \lr{ m z - i \gamma}} dz.
\end{equation}
We can evaluate this integral using infinite semicircular contours, using an upper half plane contour for \( \tau > 0 \) and a lower half plane contour for \( \tau < 0 \), as illustrated in fig. 1, and fig. 2.

fig. 1. Contour for \( \tau > 0 \).

fig. 2. Contour for \( \tau < 0 \).

By Jordan's lemma, the infinite semicircular part of the contour integral vanishes in the upper half plane for the \( \tau > 0 \) case, and in the lower half plane for the \( \tau < 0 \) case, so we can proceed with the residue calculations. In the upper half plane, we have both of the enclosed poles, so
\begin{equation}\label{eqn:greensDropWithResistance:480}
\begin{aligned}
G(\tau > 0, \epsilon)
&= -\inv{2\pi m } \int_{-\infty}^\infty \frac{ e^{i \omega \tau} }{\lr{ \omega - i \epsilon } \lr{ \omega - i \gamma/m}} d\omega \\
&= -\frac{ 2 \pi i }{ 2 \pi m} \lr{
\evalbar{ \frac{ e^{i z \tau} }{ z - i \gamma/m} }{z = i \epsilon}
+
\evalbar{ \frac{ e^{i z \tau} }{ z - i \epsilon } }{ z = i \gamma/m}
} \\
&=
-\frac{i}{m} \lr{
\frac{ e^{-\epsilon \tau} }{ i \epsilon - i \gamma/m}
+
\frac{ e^{-\gamma\tau/m} }{ i \gamma/m - i \epsilon }
} \\
&=
-\lr{
\frac{e^{-\epsilon \tau}}{ m \epsilon - \gamma }
+
\frac{ e^{-\gamma\tau/m} }{ \gamma - m \epsilon }
},
\end{aligned}
\end{equation}
and for the lower half plane, where there are no enclosed poles, we have \( G(\tau < 0, \epsilon) = 0 \). In the \( \epsilon \rightarrow 0 \) limit, we are left with
\begin{equation}\label{eqn:greensDropWithResistance:500}
G(\tau) = \inv{\gamma} \lr{ 1 - e^{-\gamma \tau/m} } \Theta(\tau).
\end{equation}
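As a sanity check (a detail skipped in the joke solution), this \( G \) really does satisfy \ref{eqn:greensDropWithResistance:380}. For \( \tau > 0 \), \( G'(\tau) = (1/m) e^{-\gamma \tau/m} \) and \( G''(\tau) = -(\gamma/m^2) e^{-\gamma \tau/m} \), so
\begin{equation}
m G'' + \gamma G' = -\frac{\gamma}{m} e^{-\gamma \tau/m} + \frac{\gamma}{m} e^{-\gamma \tau/m} = 0,
\end{equation}
as required away from the origin. At the origin, \( G \) is continuous with \( G(0) = 0 \), while \( G' \) jumps from zero to \( 1/m \), so \( m G'' \) supplies exactly the \( \delta(\tau) \) that we need. Physically, this is the statement that a unit impulse imparts velocity \( 1/m \) to a mass \( m \) initially at rest.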

Back to the original problem.

We may now go and find the specific solution for the original problem, where \( \BF(t) = - m g \Be_2 \Theta(t) \). That solution is
\begin{equation}\label{eqn:greensDropWithResistance:520}
\begin{aligned}
\Bx(t)
&= \Bx(0) + \int_{-\infty}^\infty G(t - t') \lr{ - m g \Be_2 \Theta(t') } dt' \\
&= \Bx(0) - m g \Be_2 \int_{-\infty}^\infty \frac{\Theta(t - t')}{\gamma} \lr{ 1 - e^{-\gamma \lr{ t - t'}/m } } \Theta(t') dt' \\
&= \Bx(0) - m g \Be_2 \int_{0}^\infty \frac{\Theta(t - t')}{\gamma} \lr{ 1 - e^{-\gamma \lr{ t - t'}/m } } dt' \\
&= \Bx(0) - \frac{m g}{\gamma} \Be_2 \int_{0}^t \lr{ 1 - e^{-\gamma \lr{ t - t'}/m } } dt' \\
&= \Bx(0) - \frac{m g}{\gamma} \Be_2 \int_0^t \lr{ 1 - e^{-\gamma u/m } } du \\
&= \Bx(0) - \frac{m g}{\gamma} \Be_2 \evalrange{ \lr{ u + \frac{m}{\gamma} e^{-\gamma u/m } } }{u=0}{t} \\
&= \Bx(0) - \frac{m g}{\gamma} \Be_2 \lr{ t + \frac{m e^{-\gamma t/m }}{\gamma} - \frac{m}{\gamma} } \\
&= \Bx(0) - \frac{m g t}{\gamma} \Be_2 + \frac{m^2 g}{\gamma^2} \Be_2 \lr{ 1 - e^{-\gamma t/m } }.
\end{aligned}
\end{equation}

Ignoring the missing factor of \( g \) on the last term in the twitter slide, this is the final result before the limiting argument on that slide.

Having found the Green’s function for this system, we could then, fairly trivially, use it to solve similar systems with different forcing functions. For example, suppose we have a mass on a table, with friction, and a forcing function (perhaps sinusoidal) moving that mass. We could then figure out the time response for that particular forcing function, and would only have a convolution integral to evaluate. That general applicability is one of the beauties of these transform or Green’s function methods.
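As a final cross check (mine, not from the twitter thread), the convolution solution is easy to verify numerically. The sketch below picks arbitrary values for \( m, \gamma, g \), treats the problem as scalar (one component along \( \Be_2 \)), convolves the Green's function with the step forcing, and compares against the closed form solution above; swapping in a different forcing function, such as a sinusoid for a friction-plus-driving problem, is a one line change.

```python
import numpy as np

m, gamma, g = 1.0, 0.5, 9.81          # arbitrary test values
t = np.linspace(0, 20, 4001)
dt = t[1] - t[0]

# Green's function for m x'' + gamma x' = F(t), sampled for tau >= 0.
G = (1.0 / gamma) * (1.0 - np.exp(-gamma * t / m))

# Step forcing: scalar stand-in for -m g e_2 Theta(t).
F = -m * g * np.ones_like(t)
# F = np.sin(2.0 * t)                 # e.g. a sinusoidal driving force instead

# x(t) = integral of G(t - t') F(t') dt', discretized as a convolution.
x = np.convolve(G, F)[: len(t)] * dt

# Closed form result from above, with x(0) = 0.
x_exact = -m * g * t / gamma + (m**2 * g / gamma**2) * (1.0 - np.exp(-gamma * t / m))

print(np.max(np.abs(x - x_exact)))    # small, and shrinks as dt is reduced
```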

Gauge freedom and four-potentials in the STA form of Maxwell’s equation.

March 27, 2022 math and physics play


Motivation.

In a recent video on the tensor structure of Maxwell’s equation, I made a little side trip down the road of potential solutions and gauge transformations. I thought that was worth writing up in text form.

The initial point of that side trip was just to point out that the Faraday tensor can be expressed in terms of the coordinates of a four-vector potential
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:20}
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
\end{equation}
but before I got there I tried to motivate this. In this post, I’ll outline the same ideas.

STA representation of Maxwell’s equation.

We’d gone through the work to show that Maxwell’s equation has the STA form
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:40}
\grad F = J.
\end{equation}
This is a deceptively compact representation, as it requires all of the following definitions
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:60}
\grad = \gamma^\mu \partial_\mu = \gamma_\mu \partial^\mu,
\end{equation}
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:80}
\partial_\mu = \PD{x^\mu}{},
\end{equation}
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:100}
\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu,
\end{equation}
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:160}
\gamma_\mu \cdot \gamma_\nu = g_{\mu\nu},
\end{equation}
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:120}
\begin{aligned}
F
&= \BE + I c \BB \\
&= -E^k \gamma^k \gamma^0 - \inv{2} c B^r \gamma^s \gamma^t \epsilon^{r s t} \\
&= \inv{2} \gamma^{\mu} \wedge \gamma^{\nu} F_{\mu\nu},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:140}
\begin{aligned}
J &= \gamma_\mu J^\mu \\
J^0 &= \frac{\rho}{\epsilon} \\
J^k &= \eta \lr{ \BJ \cdot \Be_k }.
\end{aligned}
\end{equation}

Four-potentials in the STA representation.

In order to find the tensor form of Maxwell’s equation (starting from the STA representation), we first split the equation into two, since
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:180}
\grad F = \grad \cdot F + \grad \wedge F = J.
\end{equation}
The dot product is a four-vector, the wedge term is a trivector, and the current is a four-vector, so we have one grade-1 equation and one grade-3 equation
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:200}
\begin{aligned}
\grad \cdot F &= J \\
\grad \wedge F &= 0.
\end{aligned}
\end{equation}
The potential comes into the mix, since the curl equation above means that \( F \) necessarily can be written as the curl of some four-vector
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:220}
F = \grad \wedge A.
\end{equation}
One justification of this is that \( a \wedge (a \wedge b) = 0 \), for any vectors \( a, b \). Expanding such a double-curl out in coordinates is also worthwhile
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:240}
\begin{aligned}
\grad \wedge \lr{ \grad \wedge A }
&=
\lr{ \gamma_\mu \partial^\mu }
\wedge
\lr{ \gamma_\nu \partial^\nu }
\wedge
A \\
&=
\gamma^\mu \wedge \gamma^\nu \wedge \lr{ \partial_\mu \partial_\nu A }.
\end{aligned}
\end{equation}
Provided we have equality of mixed partials, this is a product of an antisymmetric factor and a symmetric factor, so the full sum is zero.
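The same sort of coordinate expansion, applied to a single curl, makes contact with the tensor form \ref{eqn:gaugeFreedomAndPotentialsMaxwell:20} that motivated this post
\begin{equation}
\begin{aligned}
F = \grad \wedge A
&= \lr{ \gamma^\mu \partial_\mu } \wedge \lr{ \gamma_\nu A^\nu } \\
&= \gamma^\mu \wedge \gamma^\nu \partial_\mu A_\nu \\
&= \inv{2} \gamma^\mu \wedge \gamma^\nu \lr{ \partial_\mu A_\nu - \partial_\nu A_\mu } \\
&= \inv{2} \gamma^\mu \wedge \gamma^\nu F_{\mu\nu},
\end{aligned}
\end{equation}
where the antisymmetry of the wedge has been used to antisymmetrize the \( \partial_\mu A_\nu \) factor, recovering \( F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \) as the coordinate representation of the field.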

Things get interesting if one imposes a \( \grad \cdot A = \partial_\mu A^\mu = 0 \) constraint on the potential, a constraint usually called the Lorenz gauge condition. If we do so, then
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:260}
\grad F = \grad^2 A = J.
\end{equation}
Observe that \( \grad^2 \) is the wave equation operator (often written as a square-box symbol.) That is
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:280}
\begin{aligned}
\grad^2
&= \partial^\mu \partial_\mu \\
&= \partial_0 \partial_0
– \partial_1 \partial_1
– \partial_2 \partial_2
– \partial_3 \partial_3 \\
&= \inv{c^2} \PDSq{t}{} – \spacegrad^2.
\end{aligned}
\end{equation}
This is also an operator for which the Green’s function is well known ([1]), which means that we can immediately write the solutions
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:300}
A(x) = \int G(x,x') J(x') d^4 x'.
\end{equation}
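For reference (and hedging on conventions here, since the sign and factors of \( c \) in the source term vary from author to author), the retarded Green's function for this wave operator has the well known form ([1])
\begin{equation}
G(x,x') = \frac{ \delta\lr{ t - t' - \Norm{\Bx - \Bx'}/c } }{ 4 \pi \Norm{\Bx - \Bx'} },
\end{equation}
so the four-potential inherits the familiar retarded time structure: the potential at \( \Bx, t \) depends on the source evaluated at the earlier time \( t - \Norm{\Bx - \Bx'}/c \).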
However, we have no a priori guarantee that such a solution has zero divergence. We can fix that by making a gauge transformation of the form
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:320}
A \rightarrow A - \grad \chi.
\end{equation}
Observe that such a transformation does not change the electromagnetic field
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:340}
F = \grad \wedge A \rightarrow \grad \wedge \lr{ A - \grad \chi },
\end{equation}
since
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:360}
\grad \wedge \grad \chi = 0,
\end{equation}
(also by equality of mixed partials.) Suppose that \( \tilde{A} \) is a solution of \( \grad^2 \tilde{A} = J \), with \( \tilde{A} = A + \grad \chi \), where \( A \) is a zero divergence field to be determined. Then
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:380}
\grad \cdot \tilde{A}
=
\grad \cdot A + \grad^2 \chi,
\end{equation}
or
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:400}
\grad^2 \chi = \grad \cdot \tilde{A}.
\end{equation}
So if \( \tilde{A} \) does not have zero divergence, we can find a \( \chi \)
\begin{equation}\label{eqn:gaugeFreedomAndPotentialsMaxwell:420}
\chi(x) = \int G(x,x') \grad' \cdot \tilde{A}(x') d^4 x',
\end{equation}
so that \( A = \tilde{A} - \grad \chi \) does have zero divergence.

References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Switching from YouTube to Odysee as a video sharing platform

March 10, 2022 math and physics play

YouTube's rampant censorship over the last two years (and even before that) has been increasingly hard to stomach.

In light of that, I am going to transition to Odysee as my primary video sharing platform.  I'll probably post backup copies on YouTube too, but will treat that platform as a secondary mirror (despite the fact that subscribers and viewers will probably find stuff there first.)

In the grand scheme of things, my viewership is infinitesimal and will surely stay that way, and I don't monetize anything anyways, so this switch has zero impact on me, and is more of a conceptual switch than anything else.  I'm talking about math and physics, which I can't imagine YouTube would ever find a reason to censor, but we should all start treating it as a compromised platform, and breaking any dependencies that we have on it.

As a first step towards this transition, I've uploaded all my geometric algebra videos to Odysee.  I've also uploaded my Green's function videos (so far all related to the damped forced harmonic oscillator), but haven't put those in a playlist yet; it will be here when I do.

I have a couple of cool Manim based geometric algebra videos that I have been working on for a while.  I'll post those soon too.

Potential solutions to the static Maxwell’s equation using geometric algebra

March 20, 2018 math and physics play


When neither the electromagnetic field strength \( F = \BE + I \eta \BH \), nor current \( J = \eta (c \rho - \BJ) + I(c\rho_m - \BM) \) is a function of time, then the geometric algebra form of Maxwell's equations is the first order multivector (gradient) equation
\begin{equation}\label{eqn:staticPotentials:20}
\spacegrad F = J.
\end{equation}

While direct solutions to this equation are possible with the multivector Green's function for the gradient
\begin{equation}\label{eqn:staticPotentials:40}
G(\Bx, \Bx') = \inv{4\pi} \frac{\Bx - \Bx'}{\Norm{\Bx - \Bx'}^3 },
\end{equation}
the aim in this post is to explore second order (potential) solutions in a geometric algebra context. Can we assume that it is possible to find a multivector potential \( A \) for which
\begin{equation}\label{eqn:staticPotentials:60}
F = \spacegrad A,
\end{equation}
is a solution to the Maxwell statics equation? If such a solution exists, then Maxwell’s equation is simply
\begin{equation}\label{eqn:staticPotentials:80}
\spacegrad^2 A = J,
\end{equation}
which can be easily solved using the scalar Green’s function for the Laplacian
\begin{equation}\label{eqn:staticPotentials:240}
G(\Bx, \Bx') = -\inv{4\pi \Norm{\Bx - \Bx'} },
\end{equation}
a beastie that may be easier to convolve than the vector valued Green’s function for the gradient.
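To be clear about the sense in which this is a Green's function: it is the convolution inverse of the Laplacian, satisfying \( \spacegrad^2 G(\Bx, \Bx') = \delta^3(\Bx - \Bx') \). A quick symbolic check (my aside, not in the original post) that the Laplacian of this kernel vanishes away from the singular point:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
G = -1 / (4 * sp.pi * r)

# Laplacian of G; identically zero for r != 0.  The delta function
# content lives entirely at the origin.
lap = sum(sp.diff(G, v, 2) for v in (x, y, z))
print(sp.simplify(lap))   # -> 0
```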

It is immediately clear that some restrictions must be imposed on the multivector potential \(A\). In particular, since the field \( F \) has only vector and bivector grades, this gradient must have no scalar, nor pseudoscalar grades. That is
\begin{equation}\label{eqn:staticPotentials:100}
\gpgrade{\spacegrad A}{0,3} = 0.
\end{equation}
This constraint on the potential can be avoided if a grade selection operation is built directly into the assumed potential solution, requiring that the field is given by
\begin{equation}\label{eqn:staticPotentials:120}
F = \gpgrade{\spacegrad A}{1,2}.
\end{equation}
However, after imposing such a constraint, Maxwell’s equation has a much less friendly form
\begin{equation}\label{eqn:staticPotentials:140}
\spacegrad^2 A - \spacegrad \gpgrade{\spacegrad A}{0,3} = J.
\end{equation}
Luckily, it is possible to introduce a transformation of potentials, called a gauge transformation, that eliminates the ugly grade selection term, and allows the potential equation to be expressed as a plain old Laplacian. We do so by assuming first that it is possible to find a solution of the Laplacian equation that has the desired grade restrictions. That is
\begin{equation}\label{eqn:staticPotentials:160}
\begin{aligned}
\spacegrad^2 A' &= J \\
\gpgrade{\spacegrad A'}{0,3} &= 0,
\end{aligned}
\end{equation}
for which \( F = \spacegrad A' \) is a grade 1,2 solution to \( \spacegrad F = J \). Suppose that \( A \) is any formal solution, free of any grade restrictions, to \( \spacegrad^2 A = J \), and \( F = \gpgrade{\spacegrad A}{1,2} \). Can we find a function \( \tilde{A} \) for which \( A = A' + \tilde{A} \)?

Maxwell’s equation in terms of \( A \) is
\begin{equation}\label{eqn:staticPotentials:180}
\begin{aligned}
J
&= \spacegrad \gpgrade{\spacegrad A}{1,2} \\
&= \spacegrad^2 A
- \spacegrad \gpgrade{\spacegrad A}{0,3} \\
&= \spacegrad^2 (A' + \tilde{A})
- \spacegrad \gpgrade{\spacegrad A}{0,3}
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:staticPotentials:200}
\spacegrad^2 \tilde{A} = \spacegrad \gpgrade{\spacegrad A}{0,3}.
\end{equation}
This is a non-homogeneous Laplacian equation that can be solved as is for \( \tilde{A} \) using the Green's function for the Laplacian. Alternatively, we may also solve the equivalent first order system using the Green's function for the gradient.
\begin{equation}\label{eqn:staticPotentials:220}
\spacegrad \tilde{A} = \gpgrade{\spacegrad A}{0,3}.
\end{equation}
Clearly \( \tilde{A} \) is not unique, as we can add any function \( \psi \) satisfying the homogeneous Laplacian equation \( \spacegrad^2 \psi = 0 \).

In summary, if \( A \) is any multivector solution to \( \spacegrad^2 A = J \), that is
\begin{equation}\label{eqn:staticPotentials:260}
A(\Bx)
= \int dV' G(\Bx, \Bx') J(\Bx')
= -\inv{4\pi} \int dV' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} },
\end{equation}
then \( F = \spacegrad A' \) is a solution to Maxwell's equation, where \( A' = A - \tilde{A} \), and \( \tilde{A} \) is a solution to the non-homogeneous Laplacian equation or the non-homogeneous gradient equation above.

Integral form of the gauge transformation.

Additional insight is possible by considering the gauge transformation in integral form. Suppose that
\begin{equation}\label{eqn:staticPotentials:280}
A(\Bx) = -\inv{4\pi} \int_V dV' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \tilde{A}(\Bx),
\end{equation}
is a solution of \( \spacegrad^2 A = J \), where \( \tilde{A} \) is a multivector solution to the homogeneous Laplacian equation \( \spacegrad^2 \tilde{A} = 0 \). Let’s look at the constraints on \( \tilde{A} \) that must be imposed for \( F = \spacegrad A \) to be a valid (i.e. grade 1,2) solution of Maxwell’s equation.
\begin{equation}\label{eqn:staticPotentials:300}
\begin{aligned}
F
&= \spacegrad A \\
&=
-\inv{4\pi} \int_V dV' \lr{ \spacegrad \inv{\Norm{\Bx - \Bx'} } } J(\Bx')
- \spacegrad \tilde{A}(\Bx) \\
&=
\inv{4\pi} \int_V dV' \lr{ \spacegrad' \inv{\Norm{\Bx - \Bx'} } } J(\Bx')
- \spacegrad \tilde{A}(\Bx) \\
&=
\inv{4\pi} \int_V dV' \spacegrad' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \inv{4\pi} \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} }
- \spacegrad \tilde{A}(\Bx) \\
&=
\inv{4\pi} \int_{\partial V} dA' \ncap' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \inv{4\pi} \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} }
- \spacegrad \tilde{A}(\Bx).
\end{aligned}
\end{equation}
where \( \ncap' = (\Bx' - \Bx)/\Norm{\Bx' - \Bx} \), and the fundamental theorem of geometric calculus has been used to transform the gradient volume integral into an integral over the bounding surface. Operating on Maxwell's equation with the gradient gives \( \spacegrad^2 F = \spacegrad J \), which has only grades 1,2 on the left hand side, meaning that \( J \) is constrained in a way that requires \( \spacegrad J \) to have only grades 1,2. This means that \( F \) has grades 1,2 if
\begin{equation}\label{eqn:staticPotentials:320}
\spacegrad \tilde{A}(\Bx)
= \inv{4\pi} \int_{\partial V} dA' \frac{ \gpgrade{\ncap' J(\Bx')}{0,3} }{\Norm{\Bx - \Bx'} }.
\end{equation}
Writing \( J_1 \) and \( J_2 \) for the grade 1 and grade 2 components of the current \( J \), the product \( \ncap J \) expands to
\begin{equation}\label{eqn:staticPotentials:340}
\begin{aligned}
\ncap J
&=
\gpgradezero{\ncap J_1} + \gpgradethree{\ncap J_2} \\
&=
\ncap \cdot (-\eta \BJ) + \gpgradethree{\ncap (-I \BM)} \\
&=- \eta \ncap \cdot \BJ -I \ncap \cdot \BM,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:staticPotentials:360}
\spacegrad \tilde{A}(\Bx)
=
-\inv{4\pi} \int_{\partial V} dA' \frac{ \eta \ncap' \cdot \BJ(\Bx') + I \ncap' \cdot \BM(\Bx')}{\Norm{\Bx - \Bx'} }.
\end{equation}
Observe that if there is no flux of current density \( \BJ \) and (fictitious) magnetic current density \( \BM \) through the surface, then \( F = \spacegrad A \) is a solution to Maxwell's equation without any gauge transformation. Alternatively, \( F = \spacegrad A \) is also a solution if \( \lim_{\Bx' \rightarrow \infty} \BJ(\Bx')/\Norm{\Bx - \Bx'} = \lim_{\Bx' \rightarrow \infty} \BM(\Bx')/\Norm{\Bx - \Bx'} = 0 \) and the bounding volume is taken to infinity.


The many faces of Maxwell’s equations

March 5, 2018 math and physics play


The following is a possible introduction for a report for a UofT ECE2500 project associated with writing a small book: “Geometric Algebra for Electrical Engineers”. Given the space constraints for the report I may have to drop much of this, but some of the history of Maxwell’s equations may be of interest, so I thought I’d share before the knife hits the latex.

Goals of the project.

This project had a few goals

  1. Perform a literature review of applications of geometric algebra to the study of electromagnetism. Geometric algebra will be defined precisely later, along with bivector, trivector, multivector and other geometric algebra generalizations of the vector.
  2. Identify the subset of the literature that had direct relevance to electrical engineering.
  3. Create a complete, and as compact as possible, introduction of the prerequisites required for a graduate or advanced undergraduate electrical engineering student to be able to apply geometric algebra to problems in electromagnetism.

The many faces of electromagnetism.

There is a long history of attempts to find more elegant, compact and powerful ways of encoding and working with Maxwell’s equations.

Maxwell’s formulation.

Maxwell [12] employs some differential operators, including the gradient \( \spacegrad \) and Laplacian \( \spacegrad^2 \), but the divergence and curl are always written out in full using coordinates, usually in integral form. Reading the original Treatise highlights how important notation can be, as most modern engineering or physics practitioners would find his original work incomprehensible. A nice translation from Maxwell's notation to the modern Heaviside-Gibbs notation can be found in [16].

Quaternion representation.

In the second volume of his Treatise [11] the equations of electromagnetism are stated using quaternions (an extension of complex numbers to three dimensions), but quaternion algebra is not actually used in the development. The modern form of Maxwell's equations in quaternion form is
\begin{equation}\label{eqn:ece2500report:220}
\begin{aligned}
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BH } - \inv{2} \symmetric{ \frac{d}{dr} } { c \BD } &= c \rho + \BJ \\
\inv{2} \antisymmetric{ \frac{d}{dr} }{ \BE } + \inv{2} \symmetric{ \frac{d}{dr} }{ c \BB } &= 0,
\end{aligned}
\end{equation}
where \( \ifrac{d}{dr} = (1/c) \PDi{t}{} + \Bi \PDi{x}{} + \Bj \PDi{y}{} + \Bk \PDi{z}{} \) [7] acts bidirectionally, and vectors are expressed in terms of the quaternion basis \( \setlr{ \Bi, \Bj, \Bk } \), subject to the relations \(
\Bi^2 = \Bj^2 = \Bk^2 = -1, \quad
\Bi \Bj = \Bk = -\Bj \Bi, \quad
\Bj \Bk = \Bi = -\Bk \Bj, \quad
\Bk \Bi = \Bj = -\Bi \Bk \).
There is clearly more structure to these equations than the traditional Heaviside-Gibbs representation that we are used to, which says something for the quaternion model. However, this structure requires notation that is arguably non-intuitive. The fact that the quaternion representation was abandoned long ago by most electromagnetism researchers and engineers supports such an argument.

Minkowski tensor representation.

Minkowski introduced the concept of a complex time coordinate \( x_4 = i c t \) for special relativity [3]. Such a four-vector representation can be used for many of the relativistic four-vector pairs of electromagnetism, such as the current \((c\rho, \BJ)\), and the energy-momentum Lorentz force relations, and can also be applied to Maxwell’s equations
\begin{equation}\label{eqn:ece2500report:140}
\sum_{\mu= 1}^4 \PD{x_\mu}{F_{\mu\nu}} = - 4 \pi j_\nu,
\qquad
\sum_{\lambda\rho\mu=1}^4
\epsilon_{\mu\nu\lambda\rho}
\PD{x_\mu}{F_{\lambda\rho}} = 0,
\end{equation}
where
\begin{equation}\label{eqn:ece2500report:160}
F
=
\begin{bmatrix}
0 & B_z & -B_y & -i E_x \\
-B_z & 0 & B_x & -i E_y \\
B_y & -B_x & 0 & -i E_z \\
i E_x & i E_y & i E_z & 0
\end{bmatrix}.
\end{equation}
A rank-2 complex antisymmetric tensor contains all six of the field components. Transformation of coordinates for this representation of the field may be performed exactly like the transformation for any other four-vector. This formalism is described nicely in [13], where the structure used is motivated by transformational requirements. One of the costs of this tensor representation is that we lose the clear separation of the electric and magnetic fields that we are so comfortable with. Another cost is that we lose the distinction between space and time, as separate space and time coordinates have to be projected out of a larger four-vector. Both of these costs have theoretical benefits in some applications, particularly for high energy problems where relativity is important, but for the low velocity problems near and dear to electrical engineers, who can freely treat space and time independently, the advantages are not clear.

Modern tensor formalism.

The Minkowski representation fell out of favour in theoretical physics, which settled on a real tensor representation that utilizes an explicit metric tensor \( g_{\mu\nu} = \pm \textrm{diag}(1, -1, -1, -1) \) to represent the complex inner products of special relativity. In this tensor formalism, Maxwell’s equations are also reduced to a set of two tensor relationships ([10], [8], [5]).
\begin{equation}\label{eqn:ece2500report:40}
\begin{aligned}
\partial_\mu F^{\mu \nu} &= \mu_0 J^\nu \\
\epsilon^{\alpha \beta \mu \nu} \partial_\beta F_{\mu \nu} &= 0,
\end{aligned}
\end{equation}
where \( F^{\mu\nu} \) is a \textit{real} rank-2 antisymmetric tensor that contains all six electric and magnetic field components, and \( J^\nu \) is a four-vector current containing both charge density and current density components. These two equations provide a unified and simpler theoretical framework for electromagnetism, and are used extensively in physics, but not in engineering.
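As a small worked example (mine, not from the report, using one common sign convention in which \( F^{i 0} = E^i/c \) and \( J^0 = c \rho \)), the \( \nu = 0 \) component of the first of these equations recovers Gauss's law
\begin{equation}
\partial_\mu F^{\mu 0} = \partial_i F^{i 0} = \inv{c} \spacegrad \cdot \BE = \mu_0 J^0 = \mu_0 c \rho,
\end{equation}
that is, \( \spacegrad \cdot \BE = \mu_0 c^2 \rho = \rho/\epsilon_0 \).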

Differential forms.

It has been argued that a differential forms treatment of electromagnetism provides some of the same theoretical advantages as the tensor formalism, without the disadvantages of introducing a hellish mess of index manipulation into the mix. With differential forms it is also possible to express Maxwell’s equations as two equations. The free-space differential forms equivalent [4] to the tensor equations is
\begin{equation}\label{eqn:ece2500report:60}
\begin{aligned}
d \alpha &= 0 \\
d *\alpha &= 0,
\end{aligned}
\end{equation}
where
\begin{equation}\label{eqn:ece2500report:180}
\alpha = \lr{ E_1 dx^1 + E_2 dx^2 + E_3 dx^3 }(c dt) + H_1 dx^2 dx^3 + H_2 dx^3 dx^1 + H_3 dx^1 dx^2.
\end{equation}
One of the advantages of this representation is that it is valid even for curvilinear coordinate representations, which are handled naturally in differential forms. However, this formalism also comes with a number of costs. One cost (or benefit), like that of the tensor formalism, is that this is implicitly a relativistic approach subject to non-Euclidean orthonormality conditions \( (dx^i, dx^j) = \delta^{ij}, (dx^i, c dt) = 0, (c dt, c dt) = -1 \). Most grievous of the costs is the requirement to use differentials \( dx^1, dx^2, dx^3, c dt \), instead of a more familiar set of basis vectors, even for non-curvilinear coordinates. This requirement is easily viewed as unnatural, and is likely one of the reasons that electromagnetism with differential forms has never become popular.

Vector formalism.

Euclidean vector algebra, in particular the vector algebra and calculus of \( R^3 \), is the de facto language of electrical engineering for electromagnetism. Maxwell's equations in the Heaviside-Gibbs vector formalism are
\begin{equation}\label{eqn:ece2500report:20}
\begin{aligned}
\spacegrad \cross \BE &= - \PD{t}{\BB} \\
\spacegrad \cross \BH &= \BJ + \PD{t}{\BD} \\
\spacegrad \cdot \BD &= \rho \\
\spacegrad \cdot \BB &= 0.
\end{aligned}
\end{equation}
We are all intimately familiar with these equations, with the dot and the cross products, and with gradient, divergence and curl operations that are used to express them.
Given how comfortable we are with this mathematical formalism, there has to be a really good reason to switch to something else.

Space time algebra (geometric algebra).

An alternative to any of the electrodynamics formalisms described above is STA, the Space Time Algebra. STA is a relativistic geometric algebra that allows Maxwell’s equations to be combined into one equation ([2], [6])
\begin{equation}\label{eqn:ece2500report:80}
\grad F = J,
\end{equation}
where
\begin{equation}\label{eqn:ece2500report:200}
F = \BE + I c \BB \qquad (= \BE + I \eta \BH)
\end{equation}
is a bivector field containing both the electric and magnetic field “vectors”, \( \grad = \gamma^\mu \partial_\mu \) is the spacetime gradient, \( J \) is a four-vector containing electric charge and current components, and \( I = \gamma_0 \gamma_1 \gamma_2 \gamma_3 \) is the spacetime pseudoscalar, the ordered product of the basis vectors \( \setlr{ \gamma_\mu } \). The STA representation is explicitly relativistic, with a non-Euclidean relationship between the basis vectors \( \gamma_0 \cdot \gamma_0 = 1 = -\gamma_k \cdot \gamma_k, \forall k > 0 \). In this formalism “spatial” vectors \( \Bx = \sum_{k>0} \gamma_k \gamma_0 x^k \) are represented as spacetime bivectors, requiring a small sleight of hand when switching between STA notation and conventional vector representation. Not coincidentally, \( F \) has exactly the same structure as the 2-form \(\alpha\) above, provided the differential 1-forms \( dx^\mu \) are replaced by the basis vectors \( \gamma_\mu \). However, there is a simple complex structure inherent in the STA form that is not obvious in the 2-form equivalent. The bivector representation of the field \( F \) directly encodes the antisymmetric nature of \( F^{\mu\nu} \) from the tensor formalism, and the tensor equivalents of most STA results can be calculated easily.

Having a single PDE for all of Maxwell’s equations allows for direct Green’s function solution of the field, and has a number of other advantages. There is extensive literature exploring selected applications of STA to electrodynamics. Many theoretical results have been derived using this formalism that require significantly more complex approaches using conventional vector or tensor analysis. Unfortunately, much of the STA literature is inaccessible to the engineering student, practising engineers, or engineering instructors. To even start reading the literature, one must learn geometric algebra, aspects of special relativity and non-Euclidean geometry, generalized integration theory, and even some tensor analysis.

Paravector formalism (geometric algebra).

In the geometric algebra literature, there are a few authors who have endorsed the use of Euclidean geometric algebras for relativistic applications ([1], [14]).
These authors use a Euclidean basis “vector” \( \Be_0 = 1 \) for the timelike direction, along with a standard Euclidean basis \( \setlr{ \Be_i } \) for the spatial directions. A hybrid scalar plus vector representation of four-vectors, called paravectors, is employed. Maxwell's equation is written as a multivector equation
\begin{equation}\label{eqn:ece2500report:120}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = J,
\end{equation}
where \( J \) is a multivector source containing both the electric charge and currents, and \( c \) is the group velocity for the medium (assumed uniform and isotropic). \( J \) may optionally include the (fictitious) magnetic charge and currents useful in antenna theory. The paravector formalism uses the hybrid electromagnetic field representation of STA above, however, \( I = \Be_1 \Be_2 \Be_3 \) is interpreted as the \( R^3 \) pseudoscalar, the ordered product of the basis vectors \( \setlr{ \Be_i } \), and \( F \) represents a multivector with vector and bivector components. Unlike STA, where \( \BE \) and \( \BB \) (or \( \BH \)) are interpreted as spacetime bivectors, here they are plain old Euclidean vectors in \( R^3 \), entirely consistent with conventional Heaviside-Gibbs notation. Like the STA Maxwell's equation, the paravector form is directly invertible using Green's function techniques, without requiring the solution of equivalent second order potential problems, nor any requirement to take the derivatives of those potentials to determine the fields.

Lorentz transformation and manipulation of paravectors requires a variety of conjugation, real and imaginary operators, unlike STA where such operations have the same complex exponential structure as any 3D rotation expressed in geometric algebra. The advocates of the paravector representation argue that this provides an effective pedagogical bridge from Euclidean geometry to the Minkowski geometry of special relativity. This author agrees that this form of Maxwell’s equations is the natural choice for an introduction to electromagnetism using geometric algebra, but for relativistic operations, STA is a much more natural and less confusing choice.

Results.

The end product of this project was a fairly small self contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on \( R^2 \) and \( R^3 \) (64 pages), an introduction to geometric calculus and multivector Green's functions (64 pages), and applications to electromagnetism (75 pages). This report summarizes results from this book, omitting most derivations, and attempts to provide an overview that may be used as a road map of the book for further exploration. Many of the fundamental results of electromagnetism are derived directly from the geometric algebra form of Maxwell's equation in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra STA and paravector literature. It will be clear to the reader that it is often simpler to have the electric and magnetic fields on equal footing, and the book demonstrates this by deriving most results in terms of the total electromagnetic field \( F \). Many examples of how to extract the conventional electric and magnetic fields from the geometric algebra results expressed in terms of \( F \) are given, as a bridge between the multivector and vector representations.

The aim of this work was to remove some of the prerequisite conceptual roadblocks that make electromagnetism using geometric algebra inaccessible. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. After derivation from the conventional Heaviside-Gibbs representation of Maxwell's equations, the paravector representation of Maxwell's equation is used as the starting point for all subsequent analysis. However, the paravector literature includes a confusing set of conjugation and real and imaginary selection operations that are tailored for relativistic applications. These are not necessary for low velocity applications, and have been avoided completely, with the aim of making the subject more accessible to the engineer.

In the book an attempt has been made to introduce as little new notation as possible. For example, some authors use special notation for the bivector valued magnetic field \( I \BB \), such as \( \boldsymbol{\mathcal{b}} \) or \( \Bcap \). Given the inconsistencies in the literature, \( I \BB \) (or \( I \BH \)) will be used explicitly for the bivector (magnetic) components of the total electromagnetic field \( F \). In the geometric algebra literature, there are conflicting conventions for the operator \( \spacegrad + (1/c) \PDi{t}{} \), which we will call the spacetime gradient after the STA equivalent. For examples of different notations for the spacetime gradient, see [9], [1], and [15]. In the book the spacetime gradient is always written out in full, to avoid picking from or explaining some of the subtleties of the competing notations.

Some researchers will find it distasteful that STA and relativity have been avoided completely in this book. Maxwell’s equations are inherently relativistic, and STA expresses the relativistic aspects of electromagnetism in an exceptional and beautiful fashion. However, a student of this book will have learned the geometric algebra and calculus prerequisites of STA. This makes the STA literature much more accessible, especially since most of the results in the book can be trivially translated into STA notation.

References

[1] William Baylis. Electrodynamics: a modern geometric approach, volume 17. Springer Science \& Business Media, 2004.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] Albert Einstein. Relativity: The special and the general theory, chapter Minkowski’s Four-Dimensional Space. Princeton University Press, 2015. URL http://www.gutenberg.org/ebooks/5001.

[4] H. Flanders. Differential Forms With Applications to the Physical Sciences. Courier Dover Publications, 1989.

[5] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[6] David Hestenes. Space-time algebra, volume 1. Springer, 1966.

[7] Peter Michael Jack. Physical space as a quaternion structure, i: Maxwell equations. a brief note. arXiv preprint math-ph/0307038, 2003. URL https://arxiv.org/abs/math-ph/0307038.

[8] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[9] Bernard Jancewicz. Multivectors and Clifford algebra in electrodynamics. World Scientific, 1988.

[10] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980. ISBN 0750627689.

[11] James Clerk Maxwell. A treatise on electricity and magnetism, volume II. Merchant Books, 1881.

[12] James Clerk Maxwell. A treatise on electricity and magnetism, third edition, volume I. Dover publications, 1891.

[13] M. Schwartz. Principles of Electrodynamics. Dover Publications, 1987.

[14] Chappell et al. A simplified approach to electromagnetism using geometric algebra. arXiv preprint arXiv:1010.4947, 2010.

[15] Chappell et al. Geometric algebra for electrical and electronic engineers. 2014.

[16] Chappell et al. Geometric Algebra for Electrical and Electronic Engineers, 2014.