## Gauge freedom and four-potentials in the STA form of Maxwell’s equation.

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

## Motivation.

In a recent video on the tensor structure of Maxwell’s equation, I made a little side trip down the road of potential solutions and gauge transformations. I thought that was worth writing up in text form.

The initial point of that side trip was just to point out that the Faraday tensor can be expressed in terms of four potential coordinates
\label{eqn:gaugeFreedomAndPotentialsMaxwell:20}
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,

but before I got there I tried to motivate this. In this post, I’ll outline the same ideas.

## STA representation of Maxwell’s equation.

We’d gone through the work to show that Maxwell’s equation has the STA form
\label{eqn:gaugeFreedomAndPotentialsMaxwell:40}
\grad F = J.

This is a deceptively compact representation, as it requires all of the following definitions
\label{eqn:gaugeFreedomAndPotentialsMaxwell:60}
\grad = \gamma^\mu \partial_\mu = \gamma_\mu \partial^\mu,

\label{eqn:gaugeFreedomAndPotentialsMaxwell:80}
\partial_\mu = \PD{x^\mu}{},

\label{eqn:gaugeFreedomAndPotentialsMaxwell:100}
\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu,

\label{eqn:gaugeFreedomAndPotentialsMaxwell:160}
\gamma_\mu \cdot \gamma_\nu = g_{\mu\nu},

\label{eqn:gaugeFreedomAndPotentialsMaxwell:120}
\begin{aligned}
F
&= \BE + I c \BB \\
&= -E^k \gamma^k \gamma^0 - \inv{2} c B^r \gamma^s \gamma^t \epsilon^{r s t} \\
&= \inv{2} \gamma^{\mu} \wedge \gamma^{\nu} F_{\mu\nu},
\end{aligned}

and
\label{eqn:gaugeFreedomAndPotentialsMaxwell:140}
\begin{aligned}
J &= \gamma_\mu J^\mu \\
&= \frac{\rho}{\epsilon} \gamma_0 + \eta \lr{ \BJ \cdot \Be_k } \gamma_k.
\end{aligned}

## Four-potentials in the STA representation.

In order to find the tensor form of Maxwell’s equation (starting from the STA representation), we first split the equation into two, since
\label{eqn:gaugeFreedomAndPotentialsMaxwell:180}
\grad F = \grad \cdot F + \grad \wedge F = J.

The dot product is a four-vector, the wedge term is a trivector, and the current is a four-vector, so we have one grade-1 equation and one grade-3 equation
\label{eqn:gaugeFreedomAndPotentialsMaxwell:200}
\begin{aligned}
\grad \cdot F &= J \\
\grad \wedge F &= 0.
\end{aligned}

The potential comes into the mix, since the curl equation above means that $$F$$ necessarily can be written as the curl of some four-vector
\label{eqn:gaugeFreedomAndPotentialsMaxwell:220}
F = \grad \wedge A.

One justification of this is that $$a \wedge (a \wedge b) = 0$$, for any vectors $$a, b$$. Expanding such a double-curl out in coordinates is also worthwhile
\label{eqn:gaugeFreedomAndPotentialsMaxwell:240}
\begin{aligned}
\grad \wedge \lr{ \grad \wedge A }
&=
\lr{ \gamma_\mu \partial^\mu }
\wedge
\lr{ \gamma_\nu \partial^\nu }
\wedge
A \\
&=
\gamma^\mu \wedge \gamma^\nu \wedge \lr{ \partial_\mu \partial_\nu A }.
\end{aligned}

Provided we have equality of mixed partials, this is a product of an antisymmetric factor and a symmetric factor, so the full sum is zero.

Things get interesting if one imposes a $$\grad \cdot A = \partial_\mu A^\mu = 0$$ constraint on the potential. If we do so, then
\label{eqn:gaugeFreedomAndPotentialsMaxwell:260}
\grad F = \grad^2 A = J.

Observe that $$\grad^2$$ is the wave equation operator (often written as a square-box symbol.) That is
\label{eqn:gaugeFreedomAndPotentialsMaxwell:280}
\begin{aligned}
\grad^2
&= \partial^\mu \partial_\mu \\
&= \partial_0 \partial_0
- \partial_1 \partial_1
- \partial_2 \partial_2
- \partial_3 \partial_3 \\
&= \inv{c^2} \PDSq{t}{} - \spacegrad^2.
\end{aligned}

This is also an operator for which the Green’s function is well known ([1]), which means that we can immediately write the solutions
\label{eqn:gaugeFreedomAndPotentialsMaxwell:300}
A(x) = \int G(x,x') J(x') d^4 x'.

However, we have no a priori guarantee that such a solution has zero divergence. We can fix that by making a gauge transformation of the form
\label{eqn:gaugeFreedomAndPotentialsMaxwell:320}
A \rightarrow A - \grad \chi.

Observe that such a transformation does not change the electromagnetic field
\label{eqn:gaugeFreedomAndPotentialsMaxwell:340}
F = \grad \wedge A \rightarrow \grad \wedge \lr{ A - \grad \chi },

since
\label{eqn:gaugeFreedomAndPotentialsMaxwell:360}
\grad \wedge \grad \chi = 0
(also by equality of mixed partials.) Suppose that $$\tilde{A}$$ is a solution of $$\grad^2 \tilde{A} = J$$, and $$\tilde{A} = A + \grad \chi$$, where $$A$$ is a zero divergence field to be determined, then
\label{eqn:gaugeFreedomAndPotentialsMaxwell:380}
\begin{aligned}
\grad \cdot \tilde{A}
&= \grad \cdot \lr{ A + \grad \chi } \\
&= \grad^2 \chi,
\end{aligned}

or
\label{eqn:gaugeFreedomAndPotentialsMaxwell:400}
\grad^2 \chi = \grad \cdot \tilde{A}.
So if $$\tilde{A}$$ does not have zero divergence, we can find a $$\chi$$
\label{eqn:gaugeFreedomAndPotentialsMaxwell:420}
\chi(x) = \int G(x,x') \grad' \cdot \tilde{A}(x') d^4 x',

so that $$A = \tilde{A} - \grad \chi$$ does have zero divergence.

# References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

## Green’s function video series is now done

I’ve been working on a Green’s function video series, available on both Odysee and the old legacy CensorshipTube.  In this series, I am working from the great Dover book [1].  You can think of this book as a one stop shop containing most of the advanced mathematical tricks that any graduate student in physics or engineering would ever need.

I chose to leisurely visit most of the single variable Green’s function content from chapter 7 of this book in this video series, with focus on the damped forced harmonic oscillator problem
\label{eqn:greens:20}
\LL x(t) = F(t),

where
\label{eqn:greens:40}
\LL = \frac{d^2}{dt^2} + 2 \gamma \frac{d}{dt} + \omega_0^2.

In more pedestrian notation, this problem is the differential equation
\label{eqn:greens:60}
x''(t) + 2 \gamma x'(t) + \omega_0^2 x(t) = F(t).

## Green’s function solution to the forced damped harmonic oscillator

In the first video, of what I thought would probably be three videos, we formally solve this problem by attacking it with the Fourier transform pair
\label{eqn:greens:80}
\begin{aligned}
\hat{f}(k) &= \inv{\sqrt{2\pi}} \int_{-\infty}^\infty e^{i k t} f(t) dt \\
f(t) &= \inv{\sqrt{2\pi}} \int_{-\infty}^\infty e^{-i k t} \hat{f}(k) dk,
\end{aligned}

and find a specific solution to the forcing problem
\label{eqn:greens:100}
x(t) = \int_{-\infty}^\infty G(t,t') F(t') dt',

where our convolution kernel (later shown to satisfy the Green’s function criteria) is
\label{eqn:greens:120}
G(t,t') = -\inv{2 \pi} \int_{-\infty}^\infty \frac{e^{-ik(t-t')}}{k^2 + 2 i \gamma k - \omega_0^2} dk.
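As a quick numerical sanity check (my own, with arbitrary sample parameters), this kernel integral can be evaluated by brute force over a large finite interval, and compared against the closed form $$e^{-\gamma \tau} \sin(\alpha \tau)/\alpha$$, $$\tau = t - t' > 0$$, that the contour integration below produces:

```python
import numpy as np

# arbitrary underdamped sample parameters (my own choices)
gamma, omega0 = 0.25, 2.0
alpha = np.sqrt(omega0**2 - gamma**2)
tau = 1.5  # t - t' > 0

# brute force Riemann sum of the kernel integral over a large finite interval
k = np.linspace(-400.0, 400.0, 400001)
dk = k[1] - k[0]
integrand = np.exp(-1j * k * tau) / (k**2 + 2j * gamma * k - omega0**2)
G_numeric = (-1.0 / (2.0 * np.pi)) * np.sum(integrand).real * dk

# closed form that the contour integration produces for tau > 0
G_closed = np.exp(-gamma * tau) * np.sin(alpha * tau) / alpha

print(G_numeric, G_closed)  # agree to roughly the truncation error of the tails
```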

We also find that the homogeneous solutions have the form
\label{eqn:greens:140}
x(t) = e^{-\gamma t \pm i \alpha t},

where $$\alpha = \sqrt{ \omega_0^2 - \gamma^2 }$$.

## Evaluating the Fourier convolution kernel for the forced damped harmonic oscillator

In the second video we proceed to dig out our contour integration techniques and use them to evaluate the convolution kernel. I do a very quick, non-rigorous refresher and justification of contour integration and residue analysis, and then proceed to tackle our convolution kernel, rewritten as
\label{eqn:greens:160}
G(t,t') = -\inv{2 \pi} \int_{-\infty}^\infty \frac{e^{-ik(t-t')}}{\lr{k - k_1}\lr{ k - k_2}} dk.

We evaluate this using infinite closed semi-circular contours in the top and bottom half planes (both of which include the real axis component that we are interested in).  The upper half plane semi-circular path contributes nothing for $$t - t' < 0$$, and since that contour encloses no poles, the entire integral is zero for $$t - t' < 0$$.  The lower half plane semi-circular path contributes nothing for $$t - t' > 0$$, so the real axis integral can be evaluated by computing the two residues.  In the end we find
\label{eqn:greens:180}
G(t,t') = \Theta(t - t') e^{-\gamma(t-t')} \frac{\sin(\alpha (t - t'))}{\alpha}.

Incidentally, we see that the Green’s function, in this case, is a Heaviside-theta weighted superposition of homogeneous solutions. Specifically, if
\label{eqn:greens:200}
\begin{aligned}
x_1(t) &= e^{-\gamma t + i \alpha t} \\
x_2(t) &= e^{-\gamma t - i \alpha t},
\end{aligned}

then
\label{eqn:greens:220}
G(t,t') = \Theta(t-t') \lr{
\frac{x_1(-t')}{2 i \alpha} x_1(t)
-
\frac{x_2(-t')}{2 i \alpha} x_2(t)
}.

This becomes relevant later in the series when we derive and utilize the Wronskian determinant form of the Green’s function.

## Showing that the convolution kernel for the forced damped harmonic oscillator is a Green’s function

In the third video, we demonstrate that the convolution kernel that we derived using Fourier transforms, and contour integration, is in fact a Green’s function for the problem. That is
\label{eqn:greens:240}
\LL G(t,t') = \delta(t - t').

This is a formal way of expressing the fact that the Green’s function is an inverse of the linear operator. Specifically, given
\label{eqn:greens:260}
x(t) = \int_{-\infty}^{\infty} G(t,t') F(t') dt',

then application of our linear operator to both sides gives
\label{eqn:greens:280}
\LL x(t) = \int_{-\infty}^{\infty} \LL G(t,t') F(t') dt',

so if \ref{eqn:greens:240} is true, we have
\label{eqn:greens:300}
\LL x(t) = F(t),

as desired.
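Both halves of this statement are easy to spot check numerically (a sketch, with my own arbitrary parameter choices): for $$t > t'$$ the kernel satisfies the homogeneous equation, and the delta function shows up as a unit jump in $$dG/dt$$ at $$t = t'$$.

```python
import numpy as np

gamma, omega0 = 0.25, 2.0  # arbitrary underdamped sample values
alpha = np.sqrt(omega0**2 - gamma**2)

def G(tau):
    # the Green's function, as a function of tau = t - t'
    return np.where(tau > 0, np.exp(-gamma * tau) * np.sin(alpha * tau) / alpha, 0.0)

# 1. for tau > 0, G satisfies the homogeneous equation G'' + 2 gamma G' + omega0^2 G = 0
h = 1e-4
tau = np.linspace(0.5, 5.0, 100)
d1 = (G(tau + h) - G(tau - h)) / (2 * h)
d2 = (G(tau + h) - 2 * G(tau) + G(tau - h)) / h**2
residual = np.max(np.abs(d2 + 2 * gamma * d1 + omega0**2 * G(tau)))

# 2. the delta function shows up as a unit jump in dG/dtau across tau = 0
jump = (G(h) - G(0.0)) / h - (G(0.0) - G(-h)) / h

print(residual, jump)  # residual ~ 0, jump ~ 1
```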

## Green’s function for a first order linear system: two different ways

My trilogy in four parts steps backwards slightly in preparation for examination of the Wronskian method of Green’s function construction. Here I tackle one of the simplest first order single variable systems, that of
\label{eqn:greens:320}
\LL = \frac{d}{dt} + \alpha.

We derive the Green’s function, first using the now familiar Fourier transform and contour integration methods, and then attempt to find the Green’s function by demanding that it has the structure of a piecewise superposition of homogeneous solutions, which is the method used in the book for second order systems. Since we have a first order system, our superposition is trivially simple, as it requires only scaling our homogeneous solution $$x_1(t) = e^{-\alpha t}$$ in each of the domains
\label{eqn:greens:340}
\begin{aligned}
G(t,t') &= A x_1(t), \quad t - t' > 0 \\
G(t,t') &= B x_1(t), \quad t - t' < 0.
\end{aligned}

We find that
\label{eqn:greens:360}
G(t,t') = B e^{-\alpha t} + \Theta(t - t') e^{-\alpha (t - t') }.

The second term is precisely what we found by direct Fourier transformation, and the first is related to the boundary conditions for the Green’s function itself, something that we address in the final video.
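The causal ($$B = 0$$) part of this Green’s function is easy to check numerically against a case with a known solution: for a unit step force, $$x' + \alpha x = 1$$ with $$x(0) = 0$$ has solution $$x = (1 - e^{-\alpha t})/\alpha$$. A sketch, with my own sample decay constant:

```python
import numpy as np

a = 0.7  # my own sample value for the alpha of the operator
t = np.linspace(0.0, 5.0, 2001)
dt = t[1] - t[0]

def G(t_, tp):
    # the causal (B = 0) part of the Green's function
    return np.where(t_ - tp > 0, np.exp(-a * (t_ - tp)), 0.0)

# x(t) = integral of G(t, t') F(t') dt', with a unit step force F = 1 for t' >= 0
x = np.array([np.sum(G(ti, t)) * dt for ti in t])

exact = (1 - np.exp(-a * t)) / a  # solves x' + a x = 1, x(0) = 0
print(np.max(np.abs(x - exact)))  # small, limited by the crude Riemann sum
```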

## Wronskian form for the Green’s Function of a general 2nd order one variable differential equation

In this part of my trilogy in five parts, we derive the Wronskian form of the Green’s function for a second order differential equation. Given
\label{eqn:greens:380}
\LL = f_0(t) \frac{d^2}{dt^2} + f_1(t) \frac{d}{dt} + f_2(t),

and two homogeneous solutions $$x_1(t), x_2(t)$$, we find
\label{eqn:greens:400}
G(t,t') = \alpha x_1(t) + \beta x_2(t) +
\frac{\Theta(t - t')}{f_0(t')} \frac{
\begin{vmatrix}
x_1(t') & x_2(t') \\
x_1(t) & x_2(t) \\
\end{vmatrix}
}
{
\begin{vmatrix}
x_1(t') & x_2(t') \\
x_1'(t') & x_2'(t') \\
\end{vmatrix}
}.

We use this to re-derive the Green’s function for the forced, damped, harmonic oscillator, finding the previous result from Fourier-transform and contour integration (provided we set $$\alpha = \beta = 0$$.)
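That equivalence is also easy to verify numerically. A sketch (my own arbitrary sample values), evaluating the Wronskian form with $$\alpha = \beta = 0$$, $$f_0 = 1$$, against the contour integration result:

```python
import numpy as np

gamma, omega0 = 0.3, 1.5  # arbitrary underdamped sample values
alpha = np.sqrt(omega0**2 - gamma**2)
r1, r2 = -gamma + 1j * alpha, -gamma - 1j * alpha
x1 = lambda t: np.exp(r1 * t)  # homogeneous solutions
x2 = lambda t: np.exp(r2 * t)

def G_wronskian(t, tp):
    # ratio of the two determinants, with f0 = 1 and x_i'(t') = r_i x_i(t')
    num = x1(tp) * x2(t) - x2(tp) * x1(t)
    den = x1(tp) * r2 * x2(tp) - x2(tp) * r1 * x1(tp)
    return (num / den).real if t > tp else 0.0

def G_fourier(t, tp):
    # the earlier Fourier transform and contour integration result
    tau = t - tp
    return np.exp(-gamma * tau) * np.sin(alpha * tau) / alpha if tau > 0 else 0.0

diffs = [abs(G_wronskian(t, tp) - G_fourier(t, tp))
         for t, tp in ((1.0, 0.2), (2.5, 0.2), (4.0, 3.1))]
print(max(diffs))  # ~ 0: the two forms agree
```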

## Green’s function boundary value conditions

In this final sixth video, my channelling of Douglas Adams (trilogy in four and then five parts) fails completely. However, I do finally address boundary conditions for the Green’s function itself. I don’t use the damped forced harmonic oscillator, but the very simplest second order system
\label{eqn:greens:420}
x''(t) = F(t).

I chose this equation, and not the damped forced HO, because the Green’s function for this system was derived twice in the text by direct integration. Once for the single point boundary condition
\label{eqn:greens:440}
\begin{aligned}
x(a) &= x_0 \\
x'(a) &= \bar{x}_0,
\end{aligned}

and once for a two point boundary condition
\label{eqn:greens:460}
\begin{aligned}
x(0) &= x_0 \\
x(1) &= x_1.
\end{aligned}

I apply the Wronskian method to derive the Green’s function for this differential operator, which is just
\label{eqn:greens:480}
G(t,t') = \alpha + \beta t + \lr{t-t'} \Theta(t-t'),

and then proceed to apply the pair of boundary conditions to the Green’s function, fixing the $$\alpha$$ and $$\beta$$ constants for each. There’s a bit of subtlety and hand waving required to get the right results, so it is probably worth repeating the problem for some more complex cases in the future and making sure that I do fully understand how this works. I am able to rederive the Green’s functions from the text for each of the two boundary condition cases.
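As a concrete example (my own, not one from the text): with the homogeneous two point conditions $$x(0) = x(1) = 0$$, demanding $$G(0,t') = G(1,t') = 0$$ fixes $$\alpha = 0$$ and $$\beta = t' - 1$$, and for a constant force $$F = 1$$ the convolution reproduces the known solution $$x(t) = t(t-1)/2$$:

```python
import numpy as np

def G(t, tp):
    # G = alpha + beta t + (t - t') Theta(t - t'), with alpha = 0 and
    # beta = t' - 1 fixed by the homogeneous conditions G(0,t') = G(1,t') = 0
    return (tp - 1.0) * t + np.where(t - tp > 0, t - tp, 0.0)

# midpoint rule for x(t) = integral_0^1 G(t,t') F(t') dt', with F = 1
n = 20000
tp = (np.arange(n) + 0.5) / n

errs = []
for t in (0.25, 0.5, 0.9):
    x = np.sum(G(t, tp)) / n
    exact = t * (t - 1.0) / 2.0  # solves x'' = 1, x(0) = x(1) = 0
    errs.append(abs(x - exact))
print(max(errs))  # ~ 0: the convolution reproduces the known solution
```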

This business of applying the boundary conditions to the Green’s function itself is very important, as I found back when I took QM-I (phy356): if you don’t do it, you get the wrong answers. Perhaps, now finally armed with a better understanding of the tools, I should go back, find that problem again, and try it anew.

# References

[1] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover Publications, 1992.

## Switching from YouTube to Odysee as a video sharing platform

March 10, 2022 math and physics play

YouTube’s rampant censorship over the last two years (and even before that), has been increasingly hard to stomach.

In light of that, I am going to transition to odysee as my primary video sharing platform.  I’ll probably post backup copies on YouTube too, but will treat that platform as a secondary mirror (despite the fact that subscribers and viewers will probably find stuff there first.)

In the grand scheme of things, my viewership is infinitesimal and will surely stay that way, and I don’t monetize anything anyways, so this switch has zero impact to me, and is more of a conceptual switch than anything else.  I’m talking about math and physics, which I can’t imagine YouTube would ever find a reason to censor, but we should all start treating it as a compromised platform, and breaking any dependencies that we have on it.

As a first step towards this transition, I’ve uploaded all my geometric algebra videos to odysee.  I’ve also uploaded my Green’s function videos (so far all related to the damped forced harmonic oscillator), but haven’t put those in a playlist yet; it will be here when I do.

I have a couple cool Manim based geometric algebra videos that I have been working on for a while.  I’ll post those soon too.

## Pondering the Human Calculator’s “rule of nine”

On Mike Rowe’s most recent podcast (ep. 241, the Maddest March Ever), with Scott Flansburg (aka the Human Calculator) as guest, Scott mentioned what he called the “Rule of Nine”.

Take any 2 digit number, for example, $$71$$, and:

• Add the digits (in this case that gives: $$8$$)
• Subtract that from the original number, giving, in this case: $$71-8 = 63$$
• The result will always be nine times the first digit.

Listening in double speed (and I think Scott may be a fast talker anyways), this sounds impressive, perhaps even mysterious, but it is easy to decode:

Let’s represent that two digit number as ‘ab’, i.e.:

$$a b \equiv a * 10 + b.$$

The algorithm gives us:

$$a * 10 + b - (a + b) = a * (10 - 1) = 9 * a$$
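That decode can be checked exhaustively with a throwaway script (mine, not Scott’s):

```python
# check the "Rule of Nine" for every two digit number
for n in range(10, 100):
    a, b = divmod(n, 10)     # n = a * 10 + b
    result = n - (a + b)     # subtract the digit sum
    assert result == 9 * a   # always nine times the first digit
    assert sum(divmod(result, 10)) == 9  # and the digits of 9a sum to nine
print("rule of nine holds for 10..99")
```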

We see that this “Rule of Nine” algorithm above has a built-in distractor, the second digit.  You could express it more simply as:

Take any 2 digit number, for example, $$71$$, and:

• Ignore the second digit, giving in this case $$70$$
• Subtract that first digit from the original number, giving, in this case: $$70-7 = 63$$
• The result will always be nine times that first digit.

But it’s not as cool to point out that $$7*10 - 7 = 9 * 7$$.  It’s kind of cool that the digits of the result will again always sum to nine: $$6 + 3 = 9$$, which I am sure was also mentioned in the episode, but we can also decode that secondary rule of nine.  You know that second rule of nine from ancient history, when you learned your times tables, but algebraically, it is nothing more than:

$$10 * (x - 1) + 9 - (x - 1) = 9 * x - 10 + 10 = 9 * x.$$

For example, for $$x = 7$$

$$10 * 6 + (9-6) = 63$$

but we can also write that as

$$10 * (7-1) + (9-(7-1)) = 9 * (7-1) + 9 = 9 * 7,$$

which isn’t as cool.

## Verifying the GA form for the symmetric and antisymmetric components of the rate of strain tensor.

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

We found geometric algebra representations for the symmetric and antisymmetric components of a gradient-vector direct product. In particular, given
\label{eqn:tensorComponents:20}
d\Bv = d\Bx \cdot \lr{ \spacegrad \otimes \Bv }

we found (where the gradients are understood to act only on $$\Bv$$, not on $$d\Bx$$)
\label{eqn:tensorComponents:40}
\begin{aligned}
d\Bx \cdot \Bd
&=
\inv{2} d\Bx \cdot \lr{
\spacegrad \otimes \Bv
+
\lr{\spacegrad \otimes \Bv }^\dagger
} \\
&=
\inv{2} \lr{
d\Bx \lr{ \spacegrad \cdot \Bv }
+
\gpgradeone{ \spacegrad d\Bx \Bv }
},
\end{aligned}

and
\label{eqn:tensorComponents:60}
\begin{aligned}
d\Bx \cdot \BOmega
&=
\inv{2} d\Bx \cdot \lr{
\spacegrad \otimes \Bv
-
\lr{\spacegrad \otimes \Bv }^\dagger
} \\
&=
\inv{2} \lr{
d\Bx \lr{ \spacegrad \cdot \Bv }
-
\gpgradeone{ d\Bx \Bv \spacegrad }
}.
\end{aligned}

Let’s expand each of these in coordinates to verify that these are correct. For the symmetric component, that is
\label{eqn:tensorComponents:80}
\begin{aligned}
d\Bx \cdot \Bd
&=
\inv{2}
\lr{
dx_i \partial_j v_j \Be_i
+
\partial_j dx_i v_k \gpgradeone{ \Be_j \Be_i \Be_k }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
+
\partial_j v_k \lr{ \delta_{ji} \Be_k + \lr{ \Be_j \wedge \Be_i } \cdot \Be_k }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
+
\partial_j v_k \lr{ \delta_{ji} \Be_k + \delta_{ik} \Be_j - \delta_{jk} \Be_i }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
+
\partial_i v_k \Be_k
+
\partial_j v_i \Be_j
-
\partial_j v_j \Be_i
} \\
&=
\inv{2} dx_i
\lr{
\partial_i v_k \Be_k
+
\partial_j v_i \Be_j
} \\
&=
dx_i \inv{2} \lr{ \partial_i v_j + \partial_j v_i } \Be_j.
\end{aligned}

Sure enough, we see that the product contains the matrix element of the symmetric component of $$\spacegrad \otimes \Bv$$.
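The grade selection identity used in these expansions, $$\gpgradeone{ \Be_i \Be_j \Be_k } = \delta_{ij} \Be_k + \delta_{jk} \Be_i - \delta_{ik} \Be_j$$, can also be spot checked with a tiny bitmask implementation of the Euclidean geometric product (a throwaway sketch of my own, not any standard GA library):

```python
import numpy as np
from itertools import product

def gp(a, b):
    # geometric product of two basis blades in bitmask representation,
    # orthonormal Euclidean metric, so e_i e_i = 1
    s, x = 0, a >> 1
    while x:  # count the transpositions needed to merge the factors
        s += bin(x & b).count("1")
        x >>= 1
    return (-1) ** s, a ^ b

def grade1(i, j, k):
    # <e_i e_j e_k>_1 as a coordinate vector
    s1, m = gp(1 << i, 1 << j)
    s2, m = gp(m, 1 << k)
    out = np.zeros(3)
    if bin(m).count("1") == 1:  # keep only the grade one part
        out[m.bit_length() - 1] = s1 * s2
    return out

ok = True
for i, j, k in product(range(3), repeat=3):
    expected = np.zeros(3)
    expected[k] += i == j
    expected[i] += j == k
    expected[j] -= i == k
    ok = ok and np.allclose(grade1(i, j, k), expected)
print(ok)  # True
```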

Now let’s verify that our GA antisymmetric tensor product representation works out.
\label{eqn:tensorComponents:100}
\begin{aligned}
d\Bx \cdot \BOmega
&=
\inv{2}
\lr{
dx_i \partial_j v_j \Be_i
-
dx_i \partial_k v_j \gpgradeone{ \Be_i \Be_j \Be_k }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
-
\partial_k v_j
\lr{ \delta_{ij} \Be_k + \delta_{jk} \Be_i - \delta_{ik} \Be_j }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
-
\partial_k v_i \Be_k
-
\partial_k v_k \Be_i
+
\partial_i v_j \Be_j
} \\
&=
\inv{2} dx_i
\lr{
\partial_i v_j \Be_j
-
\partial_k v_i \Be_k
} \\
&=
dx_i
\inv{2}
\lr{
\partial_i v_j
-
\partial_j v_i
}
\Be_j.
\end{aligned}

As expected, we see that this product contains the matrix element of the antisymmetric component of $$\spacegrad \otimes \Bv$$.

We also found previously that $$\BOmega$$ is just a curl, namely
\label{eqn:tensorComponents:120}
\BOmega = \inv{2} \lr{ \spacegrad \wedge \Bv } = \inv{2} \lr{ \partial_i v_j } \Be_i \wedge \Be_j,

which directly encodes the antisymmetric component of $$\spacegrad \otimes \Bv$$. We can also see that by fully expanding $$d\Bx \cdot \BOmega$$, which gives
\label{eqn:tensorComponents:140}
\begin{aligned}
d\Bx \cdot \BOmega
&=
dx_i \inv{2} \lr{ \partial_j v_k }
\Be_i \cdot \lr{ \Be_j \wedge \Be_k } \\
&=
dx_i \inv{2} \lr{ \partial_j v_k }
\lr{
\delta_{ij} \Be_k
-
\delta_{ik} \Be_j
} \\
&=
dx_i \inv{2}
\lr{
\lr{ \partial_i v_k } \Be_k
-
\lr{ \partial_j v_i }
\Be_j
} \\
&=
dx_i \inv{2}
\lr{
\partial_i v_j - \partial_j v_i
}
\Be_j,
\end{aligned}

as expected.