This is the 6th part of a series on finding Maxwell’s equations (including the fictitious magnetic sources that are useful in engineering) from a multivector Lagrangian representation.

[Click here for a PDF version of this series of posts, up to and including this one.]  The first, second, third, fourth, and fifth parts are also available here on this blog.

We managed to find Maxwell’s equation in its STA form by variation of a multivector Lagrangian with respect to the four-vector field (the potential) itself. That approach differed from the usual variation with respect to the coordinates of that four-vector, and from the use of the Euler-Lagrange equations in terms of those coordinates.

Euler-Lagrange equations.

Having done so, an immediate question is whether we can express the Euler-Lagrange equations with respect to the four-potential in its entirety, instead of with respect to the coordinates of that vector. I have some intuition about how to avoid coordinates completely, but for now we can get part way there.

Consider a general Lagrangian, dependent on a field \( A \) and all its derivatives \( \partial_\mu A \)
\begin{equation}\label{eqn:fsquared:1180}
\LL = \LL( A, \partial_\mu A ).
\end{equation}

The variational principle requires
\begin{equation}\label{eqn:fsquared:1200}
0 = \delta S = \int d^4 x \delta \LL( A, \partial_\mu A ).
\end{equation}
That variation can be expressed as a limiting parametric operation as follows
\begin{equation}\label{eqn:fsquared:1220}
\delta S
= \int d^4 x
\lr{
\lim_{t \rightarrow 0} \ddt{} \LL( A + t \delta A )
+
\sum_\mu
\lim_{t \rightarrow 0} \ddt{} \LL( \partial_\mu A + t \delta \partial_\mu A )
}.
\end{equation}
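As a tiny warm up, here’s a one dimensional sympy sketch of this limiting parametric recipe (the toy Lagrangian density and the names in it are mine, picked only for illustration): substitute \( A \rightarrow A + t \delta A \), differentiate with respect to \( t \), then set \( t = 0 \).

```python
# Toy 1D check of the "limiting parametric" form of the variation, using sympy.
# The Lagrangian density below is an arbitrary example, not one from this series.
import sympy as sp

t, x = sp.symbols('t x')
A = sp.Function('A')(x)         # the field
dA = sp.Function('deltaA')(x)   # its variation

L = lambda f: sp.Rational(1, 2) * sp.diff(f, x)**2 - f**2   # toy Lagrangian density
first_variation = sp.diff(L(A + t * dA), t).subs(t, 0)
print(sp.expand(first_variation))
# prints A'(x) * deltaA'(x) - 2 A(x) deltaA(x), the expected first variation
```

The manipulations that follow do the same thing for the four-vector field \( A \) and its derivatives.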
We eventually want a coordinate-free expression for the variation, but we’ll use coordinates to get there. We can expand the first derivative using the chain rule as
\begin{equation}\label{eqn:fsquared:1240}
\begin{aligned}
\lim_{t \rightarrow 0} \ddt{} \LL( A + t \delta A )
&=
\lim_{t \rightarrow 0} \PD{(A^\alpha + t \delta A^\alpha)}{\LL} \PD{t}{}(A^\alpha + t \delta A^\alpha) \\
&=
\PD{A^\alpha}{\LL} \delta A^\alpha.
\end{aligned}
\end{equation}
This has the structure of a directional derivative with respect to \( A \), taken in the direction \( \delta A \). In particular, let
\begin{equation}\label{eqn:fsquared:1260}
\grad_A = \gamma^\alpha \PD{A^\alpha}{},
\end{equation}
so we have
\begin{equation}\label{eqn:fsquared:1280}
\lim_{t \rightarrow 0} \ddt{} \LL( A + t \delta A )
= \lr{ \delta A \cdot \grad_A } \LL.
\end{equation}
Similarly,
\begin{equation}\label{eqn:fsquared:1300}
\lim_{t \rightarrow 0} \ddt{} \LL( \partial_\mu A + t \delta \partial_\mu A )
=
\PD{(\partial_\mu A^\alpha)}{\LL} \delta \partial_\mu A^\alpha,
\end{equation}
so we can define a gradient with respect to each of the derivatives of \(A \) as
\begin{equation}\label{eqn:fsquared:1320}
\grad_{\partial_\mu A} = \gamma^\alpha \PD{(\partial_\mu A^\alpha)}{}.
\end{equation}
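To get a feel for these gradients (a quick illustrative aside, not part of the original development), consider the quadratic forms \( \inv{2} A \cdot A \) and \( \inv{2} \lr{ \partial_\nu A } \cdot \lr{ \partial^\nu A } \), for which
\begin{equation}
\begin{aligned}
\grad_A \lr{ \inv{2} A \cdot A }
&= \gamma^\alpha \PD{A^\alpha}{} \inv{2} A^\beta A_\beta
= \gamma^\alpha A_\alpha = A, \\
\grad_{\partial_\mu A} \lr{ \inv{2} \lr{ \partial_\nu A } \cdot \lr{ \partial^\nu A } }
&= \gamma^\alpha \PD{(\partial_\mu A^\alpha)}{} \inv{2} \partial_\nu A^\beta \partial^\nu A_\beta
= \gamma^\alpha \partial^\mu A_\alpha = \partial^\mu A.
\end{aligned}
\end{equation}
In particular, \( \lr{ \delta A \cdot \grad_A } \inv{2} A \cdot A = \delta A \cdot A \), exactly what the limiting definition gives directly.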
Our variation can now be expressed in a somewhat coordinate-free form
\begin{equation}\label{eqn:fsquared:1340}
\delta S = \int d^4 x \lr{
\lr{\delta A \cdot \grad_A} \LL + \lr{ \lr{\delta \partial_\mu A} \cdot \grad_{\partial_\mu A} } \LL
}.
\end{equation}
Here we sum implicitly over the repeated index \( \mu \) (i.e., we are treating \( \grad_{\partial_\mu A} \) as an upper index entity). Exchanging the order of the variation and the partial derivatives, and then integrating by parts, we find
\begin{equation}\label{eqn:fsquared:1360}
\begin{aligned}
\delta S
&= \int d^4 x \lr{
\lr{\delta A \cdot \grad_A} \LL + \lr{ \lr{\delta \partial_\mu A} \cdot \grad_{\partial_\mu A} } \LL
} \\
&= \int d^4 x \lr{
\lr{\delta A \cdot \grad_A} \LL + \lr{ \lr{\partial_\mu \delta A} \cdot \grad_{\partial_\mu A} } \LL
} \\
&= \int d^4 x \lr{
\lr{\delta A \cdot \grad_A} \LL
+ \partial_\mu \lr{ \lr{ \delta A \cdot \grad_{\partial_\mu A} } \LL }
- \lr{\PD{x^\mu}{} \delta A \cdot \grad_{\partial_\mu A} \LL}_{\delta A}
}.
\end{aligned}
\end{equation}
As usual, we kill off the boundary term by insisting that \( \delta A = 0 \) on the boundary, leaving us with a four-vector form of the field Euler-Lagrange equations
\begin{equation}\label{eqn:fsquared:1380}
\lr{\delta A \cdot \grad_A} \LL = \lr{\PD{x^\mu}{} \delta A \cdot \grad_{\partial_\mu A} \LL}_{\delta A},
\end{equation}
where the RHS derivatives are taken with \(\delta A \) held fixed. We seek solutions of this equation that hold for all variations \( \delta A \).
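As a simple check of this four-vector form (again an aside, using the quadratic gradients from the example above, not the Maxwell Lagrangian), take \( \LL = \inv{2} \lr{ \partial_\nu A } \cdot \lr{ \partial^\nu A } - \inv{2} A \cdot A \). The left hand side is \( \lr{ \delta A \cdot \grad_A } \LL = -\delta A \cdot A \), while the right hand side is
\begin{equation}
\lr{\PD{x^\mu}{} \delta A \cdot \grad_{\partial_\mu A} \LL}_{\delta A}
= \PD{x^\mu}{} \lr{ \delta A \cdot \partial^\mu A }
= \delta A \cdot \partial_\mu \partial^\mu A.
\end{equation}
Since \( -\delta A \cdot A = \delta A \cdot \partial_\mu \partial^\mu A \) must hold for all \( \delta A \), we recover the expected (unit mass) vector wave equation \( \partial_\mu \partial^\mu A + A = 0 \).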

Application to the Maxwell Lagrangian.

For the Maxwell application we need a few helper calculations. The first, for a multivector \( B \) that has no \( A \) dependence, is
\begin{equation}\label{eqn:fsquared:1400}
\begin{aligned}
\lr{ \delta A \cdot \grad_A } A B
&=
\delta A^\alpha \PD{A^\alpha}{} \gamma_\beta A^\beta B \\
&=
\delta A^\alpha \gamma_\alpha B \\
&=
\lr{ \delta A } B.
\end{aligned}
\end{equation}

Next, for a multivector \( B \) with no dependence on the derivatives \( \partial_\mu A \), let’s compute
\begin{equation}\label{eqn:fsquared:1420}
\begin{aligned}
\lr{ \delta A \cdot \grad_{\partial_\mu A} } B F
&=
\delta A^\alpha \PD{(\partial_\mu A^\alpha)}{} B \lr{ \gamma^\beta \wedge \partial_\beta \lr{ \gamma_\pi A^\pi } } \\
&=
\delta A^\alpha B \lr{ \gamma^\mu \wedge \gamma_\alpha } \\
&=
B \lr{ \gamma^\mu \wedge \delta A }.
\end{aligned}
\end{equation}
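Since \( F \) is linear in the derivatives \( \partial_\mu A^\alpha \), this helper identity is easy to spot check numerically with a finite difference. Here is a sketch using the Python clifford package (the package itself, its Cl(1,3) constructor, the default basis names e1..e4, and the hand-built reciprocal frame below are my assumptions, not anything defined in these posts):

```python
# Finite difference spot check (my own aside) of the helper identity
# (deltaA . grad_{partial_mu A}) B F = B (gamma^mu ^ deltaA), using the `clifford`
# package; the Cl(1, 3) call and e1..e4 basis names are assumptions about its defaults.
import numpy as np
from clifford import Cl

layout, blades = Cl(1, 3)        # e1 (square +1) plays gamma_0; e2, e3, e4 (square -1) play gamma_1..3
gd = [blades['e1'], blades['e2'], blades['e3'], blades['e4']]   # gamma_mu
gu = [gd[0], -gd[1], -gd[2], -gd[3]]                            # reciprocal frame gamma^mu

def F_of(d):
    """F = (gamma^beta ^ gamma_pi) d[beta, pi], with d[beta, pi] playing partial_beta A^pi."""
    return sum(d[b, p] * (gu[b] ^ gd[p]) for b in range(4) for p in range(4))

rng = np.random.default_rng(0)
d = rng.normal(size=(4, 4))      # arbitrary values of partial_mu A^alpha
delta = rng.normal(size=4)       # components deltaA^alpha of the variation
deltaA = sum(delta[a] * gd[a] for a in range(4))
B = 1 + (gd[0] ^ gd[1]) + 3 * (gd[1] ^ gd[2] ^ gd[3])   # an arbitrary fixed multivector

mu = 2                           # which partial_mu slot to differentiate
bump = np.zeros((4, 4))
bump[mu, :] = delta              # perturb only the partial_mu A^alpha components
lhs = B * F_of(d + bump) - B * F_of(d)   # exact difference quotient, since F is linear in d
rhs = B * (gu[mu] ^ deltaA)
assert np.allclose((lhs - rhs).value, 0)
```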

Our Lagrangian is
\begin{equation}\label{eqn:fsquared:1440}
\LL = \inv{2} F^2 - \gpgrade{A \lr{ J - I M } }{0,4},
\end{equation}
so
\begin{equation}\label{eqn:fsquared:1460}
\lr{\delta A \cdot \grad_A} \LL
=
-\gpgrade{ \lr{ \delta A } \lr{ J - I M } }{0,4},
\end{equation}
and
\begin{equation}\label{eqn:fsquared:1480}
\begin{aligned}
\lr{ \delta A \cdot \grad_{\partial_\mu A} } \inv{2} F^2
&=
\inv{2} \lr{ F \lr{ \gamma^\mu \wedge \delta A } + \lr{ \gamma^\mu \wedge \delta A } F } \\
&=
\gpgrade{
\lr{ \gamma^\mu \wedge \delta A } F
}{0,4} \\
&=
-\gpgrade{
\lr{ \delta A \wedge \gamma^\mu } F
}{0,4} \\
&=
-\gpgrade{
\delta A \gamma^\mu F
-
\lr{ \delta A \cdot \gamma^\mu } F
}{0,4} \\
&=
-\gpgrade{
\delta A \gamma^\mu F
}{0,4}.
\end{aligned}
\end{equation}
In the final step we’ve dropped the \( \lr{ \delta A \cdot \gamma^\mu } F \) term, since it is a scalar multiple of the bivector \( F \) and therefore has no grade \( 0,4 \) components. Taking derivatives (holding \( \delta A \) fixed), and equating the two variational contributions as the Euler-Lagrange equations require, we have
\begin{equation}\label{eqn:fsquared:1500}
\begin{aligned}
-\gpgrade{ \lr{ \delta A } \lr{ J - I M } }{0,4}
&=
-\gpgrade{
\delta A \partial_\mu \gamma^\mu F
}{0,4} \\
&=
-\gpgrade{
\delta A \grad F
}{0,4}.
\end{aligned}
\end{equation}
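The grade selection manipulations above are also easy to spot check numerically. The sketch below (same clifford package assumptions as before) verifies the symmetric sum identity \( \inv{2} \lr{ F X + X F } = \gpgrade{ X F }{0,4} \) for bivectors \( X \) and \( F \), and the fact that the \( \lr{ \delta A \cdot \gamma^\mu } F \) term contributes nothing to the grade \( 0,4 \) selection.

```python
# Numeric spot check (my own aside) of two grade selection facts used above, using
# the `clifford` package; its Cl(1, 3) call and e1..e4 basis names are assumptions.
import numpy as np
from clifford import Cl

layout, blades = Cl(1, 3)
g = [blades['e1'], blades['e2'], blades['e3'], blades['e4']]
rng = np.random.default_rng(1)

def random_bivector():
    c = rng.normal(size=(4, 4))
    return sum(c[i, j] * (g[i] ^ g[j]) for i in range(4) for j in range(i + 1, 4))

def grade04(M):
    return M(0) + M(4)           # <M>_{0,4}: grade 0 plus grade 4 parts

X, F = random_bivector(), random_bivector()

# (1/2)(F X + X F) = <X F>_{0,4} for bivectors X, F.
assert np.allclose((0.5 * (F * X + X * F) - grade04(X * F)).value, 0)

# <(dA ^ b) F>_{0,4} = <dA b F>_{0,4}: the (dA . b) F piece is a scalar times a
# bivector, so it has no grade 0 or grade 4 part and drops out of the selection.
dA = sum(rng.normal() * gi for gi in g)   # a random vector standing in for delta A
b = sum(rng.normal() * gi for gi in g)    # a random vector standing in for gamma^mu
assert np.allclose((grade04((dA ^ b) * F) - grade04(dA * b * F)).value, 0)
```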
We’ve already seen that, since this must hold for all variations \( \delta A \), it can be expressed without the \( \delta A \) factors or the grade selection as
\begin{equation}\label{eqn:fsquared:1520}
\grad F = \lr{ J - I M },
\end{equation}
which is Maxwell’s equation in its STA form. It’s not clear that this is really any less work, but it is a step towards a coordinate-free evaluation of the Maxwell Lagrangian (at least we do not have to use the coordinates \( A^\mu \) as we do in the tensor formalism).