
Canonical bivectors in spacetime algebra.

December 5, 2022 math and physics play

[Click here for a PDF version of this post]

I’ve been enjoying XylyXylyX’s QED Prerequisites Geometric Algebra: Spacetime YouTube series, which is doing a thorough walk-through of [1], filling in missing details. The last episode, QED Prerequisites Geometric Algebra 15: Complex Structure, left things with a bit of a cliffhanger, mentioning a “canonical” form for STA bivectors that was intriguing.

The idea is that STA bivectors, like spacetime vectors, can be spacelike, timelike, or lightlike (i.e.: positive, negative, or zero square), but can also have a complex signature (squaring to a 0,4-multivector.)

The only context I knew of where one wants to square an STA bivector is the electrodynamic field Lagrangian, which has an \( F^2 \) term. In no other context that I knew of was the signature of \( F \), the electrodynamic field, of interest, so I’d never considered this “Canonical form” representation.

Here are some examples:
\begin{equation}\label{eqn:canonicalbivectors:20}
\begin{aligned}
F &= \gamma_{10}, \quad F^2 = 1 \\
F &= \gamma_{23}, \quad F^2 = -1 \\
F &= 4 \gamma_{10} + \gamma_{13}, \quad F^2 = 15 \\
F &= \gamma_{10} + \gamma_{13}, \quad F^2 = 0 \\
F &= \gamma_{10} + 4 \gamma_{13}, \quad F^2 = -15 \\
F &= \gamma_{10} + \gamma_{23}, \quad F^2 = -2 I \\
F &= \gamma_{10} - 2 \gamma_{23}, \quad F^2 = -3 + 4 I.
\end{aligned}
\end{equation}
You can see in this table that all the \( F \)’s that are purely electric have a positive signature, and all the purely magnetic fields have a negative signature, but when there is a mix, anything goes. The idea behind the canonical representation in the paper is to write
\begin{equation}\label{eqn:canonicalbivectors:40}
F = f e^{I \phi},
\end{equation}
where \( f^2 \) is real and positive, assuming that \( F \) is not lightlike.
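
As a quick sanity check, these squares are easy to verify numerically. Here is a minimal Python sketch (not from the paper), using the standard Dirac matrices as a stand-in representation for the \( \gamma_\mu \)'s (any four mutually anticommuting matrices squaring to \( +1, -1, -1, -1 \) would do), with the grade 0 and grade 4 parts of \( F^2 \) read off by trace projection:

import numpy as np

# Stand-in matrices for gamma_0 .. gamma_3: the standard Dirac representation,
# satisfying gamma_mu gamma_nu + gamma_nu gamma_mu = 2 eta_{mu nu}, eta = diag(+,-,-,-).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))
Ps = g0 @ g1 @ g2 @ g3          # pseudoscalar I = gamma_{0123}, squares to -1

g10, g13, g23 = g1 @ g0, g1 @ g3, g2 @ g3

def grade04(M):
    """Scalar and pseudoscalar coefficients of a multivector a + b I, by trace projection."""
    return np.trace(M).real / 4, np.trace(-Ps @ M).real / 4

examples = [("g10", g10), ("g23", g23), ("4 g10 + g13", 4 * g10 + g13),
            ("g10 + g13", g10 + g13), ("g10 + 4 g13", g10 + 4 * g13),
            ("g10 + g23", g10 + g23), ("g10 - 2 g23", g10 - 2 * g23)]
for label, F in examples:
    a, b = grade04(F @ F)
    print(f"{label:12s} F^2 = {a:+g} {b:+g} I")

The printed values reproduce the table above.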

The paper gives a formula for computing \( f \) and \( \phi\), but let’s do this by example, putting all the \( F^2 \)’s above into their complex polar form representation, like so
\begin{equation}\label{eqn:canonicalbivectors:60}
\begin{aligned}
F &= \gamma_{10}, \quad F^2 = 1 \\
F &= \gamma_{23}, \quad F^2 = 1 e^{\pi I} \\
F &= 4 \gamma_{10} + \gamma_{13}, \quad F^2 = 15 \\
F &= \gamma_{10} + \gamma_{13}, \quad F^2 = 0 \\
F &= \gamma_{10} + 4 \gamma_{13}, \quad F^2 = 15 e^{\pi I} \\
F &= \gamma_{10} + \gamma_{23}, \quad F^2 = 2 e^{-(\pi/2) I} \\
F &= \gamma_{10} - 2 \gamma_{23}, \quad F^2 = 5 e^{ (\pi - \arctan(4/3)) I}
\end{aligned}
\end{equation}

Since we can put \( F^2 \) in polar form, we can factor out half of that phase angle, so that we are left with a bivector that has a positive square. If we write
\begin{equation}\label{eqn:canonicalbivectors:80}
F^2 = \Abs{F^2} e^{2 \phi I},
\end{equation}
we can then form
\begin{equation}\label{eqn:canonicalbivectors:100}
f = F e^{-\phi I}.
\end{equation}
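This \( f \) has a real and positive square by construction, since the pseudoscalar commutes with bivectors, so
\begin{equation}\label{eqn:canonicalbivectors:110}
f^2 = F e^{-\phi I} F e^{-\phi I} = F^2 e^{-2 \phi I} = \Abs{F^2} e^{2 \phi I} e^{-2 \phi I} = \Abs{F^2}.
\end{equation}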

If we want an equation for \( \phi \), we can just write
\begin{equation}\label{eqn:canonicalbivectors:120}
2 \phi = \mathrm{Arg}( F^2 ).
\end{equation}
This is a bit better (I think) than the form given in the paper, since it will uniformly rotate \( F^2 \) toward the positive region of the real axis, whereas the paper’s formula sometimes rotates towards the negative reals, which is a strange-seeming polar form to use.

Let’s compute \( f \) for \( F = \gamma_{10} - 2 \gamma_{23} \), using
\begin{equation}\label{eqn:canonicalbivectors:140}
2 \phi = \pi - \arctan(4/3).
\end{equation}
The exponential expands to
\begin{equation}\label{eqn:canonicalbivectors:160}
e^{-\phi I} = \inv{\sqrt{5}} \lr{ 1 - 2 I }.
\end{equation}

Multiplying each of the bivector components by \(1 - 2 I\), we find
\begin{equation}\label{eqn:canonicalbivectors:180}
\begin{aligned}
\gamma_{10} \lr{ 1 - 2 I}
&=
\gamma_{10} - 2 \gamma_{100123} \\
&=
\gamma_{10} - 2 \gamma_{1123} \\
&=
\gamma_{10} + 2 \gamma_{23},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:canonicalbivectors:200}
\begin{aligned}
- 2 \gamma_{23} \lr{ 1 - 2 I}
&=
- 2 \gamma_{23}
+ 4 \gamma_{230123} \\
&=
- 2 \gamma_{23}
+ 4 \gamma_{23}^2 \gamma_{01} \\
&=
- 2 \gamma_{23}
+ 4 \gamma_{10},
\end{aligned}
\end{equation}
leaving
\begin{equation}\label{eqn:canonicalbivectors:220}
f = \sqrt{5} \gamma_{10},
\end{equation}
so the canonical form is
\begin{equation}\label{eqn:canonicalbivectors:240}
F = \gamma_{10} - 2 \gamma_{23} = \sqrt{5} \gamma_{10} \frac{1 + 2 I}{\sqrt{5}}.
\end{equation}

It’s interesting here that \( f \), in this case, is a spatial bivector (i.e.: pure electric field), but that clearly isn’t always going to be the case, since we can have a field like,
\begin{equation}\label{eqn:canonicalbivectors:260}
F = 4 \gamma_{10} + \gamma_{13} = 4 \gamma_{10} + \gamma_{20} I,
\end{equation}
from the table above, that has both electric and magnetic field components, yet is already in the canonical form, with \( F^2 = 15 \). The canonical \( f \), despite having a positive square, is not necessarily a spatial bivector (as it may have both grades 1,2 in the spatial representation, not just the electric field, spatial grade-1 component.)
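
To close the loop numerically (again just a sketch, using the Dirac matrices as stand-ins for the \( \gamma_\mu \)'s), the canonical factorization of the \( F = \gamma_{10} - 2 \gamma_{23} \) example can be checked by computing \( 2 \phi = \mathrm{Arg}(F^2) \) from the grade 0 and grade 4 parts of \( F^2 \):

import numpy as np

# Dirac matrices as a representation of gamma_0 .. gamma_3, signature (+,-,-,-).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz))
Ps = g0 @ g1 @ g2 @ g3                       # pseudoscalar I, Ps @ Ps = -1
g10, g23 = g1 @ g0, g2 @ g3

F = g10 - 2 * g23
F2 = F @ F
a = np.trace(F2).real / 4                    # scalar part of F^2: -3
b = np.trace(-Ps @ F2).real / 4              # pseudoscalar part of F^2: 4

phi = 0.5 * np.arctan2(b, a)                 # 2 phi = Arg(F^2) = pi - arctan(4/3)
exp_minus = np.cos(phi) * np.eye(4) - np.sin(phi) * Ps   # exp(-phi I)
f = F @ exp_minus

print(np.allclose(f, np.sqrt(5) * g10))      # True: f = sqrt(5) gamma_{10}
print(np.trace(f @ f).real / 4)              # 5 (up to rounding): the positive square |F^2|

This reproduces \ref{eqn:canonicalbivectors:220} and \ref{eqn:canonicalbivectors:240}.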

References

[1] Justin Dressel, Konstantin Y Bliokh, and Franco Nori. Spacetime algebra as a powerful tool for electromagnetism. Physics Reports, 589:1–71, 2015.

A multivector Lagrangian for Maxwell’s equation, w/ electric and magnetic current density four-vector sources

June 29, 2022 math and physics play

[Click here for a PDF version of this and previous related posts.]

Initially I had trouble generalizing the multivector Lagrangian to include both the electric and magnetic sources without using two independent potentials. However, this can be done, provided one is careful enough. Recall that we found that a useful formulation for the field in terms of two potentials is
\begin{equation}\label{eqn:maxwellLagrangian:2050}
F = F_{\mathrm{e}} + I F_{\mathrm{m}},
\end{equation}
where
\begin{equation}\label{eqn:maxwellLagrangian:2070}
\begin{aligned}
F_{\mathrm{e}} &= \grad \wedge A \\
F_{\mathrm{m}} &= \grad \wedge K,
\end{aligned}
\end{equation}
and where \( A, K \) are arbitrary four-vector potentials.
Use of two potentials allowed us to decouple Maxwell’s equations into two separate gradient equations. We don’t want to do that now, but let’s see how we can combine the two fields into a single multivector potential. Letting the gradient act bidirectionally, and introducing a dummy grade-two selection into the mix, we have
\begin{equation}\label{eqn:maxwellLagrangian:2090}
\begin{aligned}
F
&= \rgrad \wedge A + I \lr{ \rgrad \wedge K } \\
&= - A \wedge \lgrad - I \lr{ K \wedge \lgrad } \\
&= -\gpgradetwo{ A \wedge \lgrad + I \lr{ K \wedge \lgrad } } \\
&= -\gpgradetwo{ A \lgrad + I K \lgrad } \\
&= -\gpgradetwo{ \lr{ A + I K } \lgrad }.
\end{aligned}
\end{equation}
Now, we call
\begin{equation}\label{eqn:maxwellLagrangian:2110}
N = A + I K,
\end{equation}
(a 1,3 multivector) the multivector potential, and write the electromagnetic field not explicitly in terms of curls, but using a grade-2 selection filter
\begin{equation}\label{eqn:maxwellLagrangian:2130}
F = -\gpgradetwo{ N \lgrad }.
\end{equation}

We can now form the following multivector Lagrangian
\begin{equation}\label{eqn:maxwellLagrangian:2150}
\LL = \inv{2} F^2 - \gpgrade{ N \lr{ J - I M } }{0,4},
\end{equation}
and vary the action to (eventually) find our multivector Maxwell’s equation, without ever resorting to coordinates. We have
\begin{equation}\label{eqn:maxwellLagrangian:2170}
\begin{aligned}
\delta S
&= \int d^4 x \inv{2} \lr{ \lr{ \delta F } F + F \lr{ \delta F } } - \gpgrade{ \delta N \lr{ J - I M } }{0,4} \\
&= \int d^4 x \gpgrade{ \lr{ \delta F } F - \lr{ \delta N } \lr{ J - I M } }{0,4} \\
&= \int d^4 x \gpgrade{ -\gpgradetwo{ \lr{ \delta N} \lgrad } F - \lr{ \delta N } \lr{ J - I M } }{0,4} \\
&= \int d^4 x \gpgrade{ -\gpgradetwo{ \lr{ \delta N} \lrgrad } F +\gpgradetwo{ \lr{ \delta N} \rgrad } F - \lr{ \delta N } \lr{ J - I M } }{0,4}.
\end{aligned}
\end{equation}
The \( \lrgrad \) term can be evaluated using the fundamental theorem of GC, and will be zero, as \( \delta N = 0 \) on the boundary. Let’s look at the next integrand term a bit more carefully
\begin{equation}\label{eqn:maxwellLagrangian:2190}
\begin{aligned}
\gpgrade{ \gpgradetwo{ \lr{ \delta N} \rgrad } F }{0,4}
&=
\gpgrade{ \gpgradetwo{ \lr{ \lr{ \delta A } + I \lr{ \delta K } } \rgrad } F }{0,4} \\
&=
\gpgrade{ \lr{ \lr{\delta A} \wedge \rgrad + I \lr{ \lr{ \delta K } \wedge \rgrad }} F }{0,4} \\
&=
\gpgrade{ \lr{\delta A} \rgrad F - \lr{ \lr{\delta A} \cdot \rgrad} F + I \lr{ \delta K } \rgrad F - I \lr{ \lr{ \delta K } \cdot \rgrad} F }{0,4} \\
&=
\gpgrade{ \lr{\delta A} \rgrad F + I \lr{ \delta K } \rgrad F }{0,4} \\
&=
\gpgrade{ \lr{ \lr{\delta A} + I \lr{ \delta K} } \rgrad F }{0,4} \\
&=
\gpgrade{ \lr{ \delta N} \rgrad F }{0,4},
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:maxwellLagrangian:2210}
\begin{aligned}
\delta S
&= \int d^4 x \gpgrade{ \lr{ \delta N} \rgrad F - \lr{ \delta N } \lr{ J - I M } }{0,4} \\
&= \int d^4 x \gpgrade{ \lr{ \delta N} \lr{ \rgrad F - \lr{ J - I M } } }{0,4}.
\end{aligned}
\end{equation}
For this to be zero for all variations \( \delta N \) of the 1,3-multivector potential \( N \), we must have
\begin{equation}\label{eqn:maxwellLagrangian:2230}
\grad F = J - I M.
\end{equation}
This is Maxwell’s equation, as desired, including both electric and (if desired) magnetic sources.
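
It is also worth expanding the source coupling term of this Lagrangian, to see what it contains. Since \( I \) anticommutes with four-vectors,
\begin{equation}\label{eqn:maxwellLagrangian:2250}
\gpgrade{ N \lr{ J - I M } }{0,4}
= \gpgrade{ \lr{ A + I K } \lr{ J - I M } }{0,4}
= A \cdot J - K \cdot M + I \lr{ A \cdot M + K \cdot J },
\end{equation}
so, along with the expected \( A \cdot J \) and \( K \cdot M \) type couplings, the multivector Lagrangian contains pseudoscalar couplings of each potential to the opposite source.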

A multivector Lagrangian for Maxwell’s equation: A summary of previous exploration.

June 21, 2022 math and physics play

This summarizes the significant parts of the last 8 blog posts.

[Click here for a PDF version of this post]

STA form of Maxwell’s equation.

Maxwell’s equations, with electric and fictional magnetic sources (useful for antenna theory and other engineering applications), are
\begin{equation}\label{eqn:maxwellLagrangian:220}
\begin{aligned}
\spacegrad \cdot \BE &= \frac{\rho}{\epsilon} \\
\spacegrad \cross \BE &= -\BM - \mu \PD{t}{\BH} \\
\spacegrad \cdot \BH &= \frac{\rho_\txtm}{\mu} \\
\spacegrad \cross \BH &= \BJ + \epsilon \PD{t}{\BE}.
\end{aligned}
\end{equation}
We can assemble these into a single geometric algebra equation,
\begin{equation}\label{eqn:maxwellLagrangian:240}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = \eta \lr{ c \rho - \BJ } + I \lr{ c \rho_{\mathrm{m}} - \BM },
\end{equation}
where \( F = \BE + \eta I \BH = \BE + I c \BB \), \( c = 1/\sqrt{\mu\epsilon} \), and \( \eta = \sqrt{\mu/\epsilon} \).

By multiplying through by \( \gamma_0 \), making the identification \( \Be_k = \gamma_k \gamma_0 \), and
\begin{equation}\label{eqn:maxwellLagrangian:300}
\begin{aligned}
J^0 &= \frac{\rho}{\epsilon}, \quad J^k = \eta \lr{ \BJ \cdot \Be_k }, \quad J = J^\mu \gamma_\mu \\
M^0 &= c \rho_{\mathrm{m}}, \quad M^k = \BM \cdot \Be_k, \quad M = M^\mu \gamma_\mu \\
\grad &= \gamma^\mu \partial_\mu,
\end{aligned}
\end{equation}
we find the STA form of Maxwell’s equation, including magnetic sources
\begin{equation}\label{eqn:maxwellLagrangian:320}
\grad F = J - I M.
\end{equation}

Decoupling the electric and magnetic fields and sources.

We can utilize two separate four-vector potential fields to split Maxwell’s equation into two parts. Let
\begin{equation}\label{eqn:maxwellLagrangian:1740}
F = F_{\mathrm{e}} + I F_{\mathrm{m}},
\end{equation}
where
\begin{equation}\label{eqn:maxwellLagrangian:1760}
\begin{aligned}
F_{\mathrm{e}} &= \grad \wedge A \\
F_{\mathrm{m}} &= \grad \wedge K,
\end{aligned}
\end{equation}
and \( A, K \) are independent four-vector potential fields. Plugging this into Maxwell’s equation, and employing a duality transformation, gives us two coupled vector grade equations
\begin{equation}\label{eqn:maxwellLagrangian:1780}
\begin{aligned}
\grad \cdot F_{\mathrm{e}} - I \lr{ \grad \wedge F_{\mathrm{m}} } &= J \\
\grad \cdot F_{\mathrm{m}} + I \lr{ \grad \wedge F_{\mathrm{e}} } &= M.
\end{aligned}
\end{equation}
However, since \( \grad \wedge F_{\mathrm{m}} = \grad \wedge F_{\mathrm{e}} = 0 \), by construction, the curls above are killed. We may also add in \( \grad \wedge F_{\mathrm{e}} = 0 \) and \( \grad \wedge F_{\mathrm{m}} = 0 \) respectively, yielding two independent gradient equations
\begin{equation}\label{eqn:maxwellLagrangian:1810}
\begin{aligned}
\grad F_{\mathrm{e}} &= J \\
\grad F_{\mathrm{m}} &= M,
\end{aligned}
\end{equation}
one for each of the electric and magnetic sources and their associated fields.

Tensor formulation.

The electromagnetic field \( F \) is a vector-bivector multivector in the multivector representation of Maxwell’s equation, but is a bivector in the STA representation. The split of \( F \) into its electric and magnetic field components is observer dependent, but we may write it without reference to a specific observer frame as
\begin{equation}\label{eqn:maxwellLagrangian:1830}
F = \inv{2} \gamma_\mu \wedge \gamma_\nu F^{\mu\nu},
\end{equation}
where \( F^{\mu\nu} \) is an arbitrary antisymmetric 2nd rank tensor. Maxwell’s equation has a vector and trivector component, which may be split out explicitly using grade selection, to find
\begin{equation}\label{eqn:maxwellLagrangian:360}
\begin{aligned}
\grad \cdot F &= J \\
\grad \wedge F &= -I M.
\end{aligned}
\end{equation}
Further dotting and wedging these equations with \( \gamma^\mu \) allows for extraction of scalar relations
\begin{equation}\label{eqn:maxwellLagrangian:460}
\partial_\nu F^{\nu\mu} = J^{\mu}, \quad \partial_\nu G^{\nu\mu} = M^{\mu},
\end{equation}
where \( G^{\mu\nu} = -(1/2) \epsilon^{\mu\nu\alpha\beta} F_{\alpha\beta} \) is also an antisymmetric 2nd rank tensor.

If we treat \( F^{\mu\nu} \) and \( G^{\mu\nu} \) as independent fields, this pair of equations is the coordinate equivalent to \ref{eqn:maxwellLagrangian:1760}, also decoupling the electric and magnetic source contributions to Maxwell’s equation.

Coordinate representation of the Lagrangian.

As observed above, we may choose to express the decoupled fields as curls \( F_{\mathrm{e}} = \grad \wedge A \) or \( F_{\mathrm{m}} = \grad \wedge K \). The coordinate expansion of either field component, given such a representation, is straightforward. For example
\begin{equation}\label{eqn:maxwellLagrangian:1850}
\begin{aligned}
F_{\mathrm{e}}
&= \lr{ \gamma_\mu \partial^\mu } \wedge \lr{ \gamma_\nu A^\nu } \\
&= \inv{2} \lr{ \gamma_\mu \wedge \gamma_\nu } \lr{ \partial^\mu A^\nu - \partial^\nu A^\mu }.
\end{aligned}
\end{equation}

We make the identification \( F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu \), the usual definition of \( F^{\mu\nu} \) in the tensor formalism. In that tensor formalism, the Maxwell Lagrangian is
\begin{equation}\label{eqn:maxwellLagrangian:1870}
\LL = - \inv{4} F_{\mu\nu} F^{\mu\nu} - A_\mu J^\mu.
\end{equation}
We may show this through application of the Euler-Lagrange equations
\begin{equation}\label{eqn:maxwellLagrangian:600}
\PD{A_\mu}{\LL} = \partial_\nu \PD{(\partial_\nu A_\mu)}{\LL}.
\end{equation}
\begin{equation}\label{eqn:maxwellLagrangian:1930}
\begin{aligned}
\PD{(\partial_\nu A_\mu)}{\LL}
&= -\inv{4} (2) \lr{ \PD{(\partial_\nu A_\mu)}{F_{\alpha\beta}} } F^{\alpha\beta} \\
&= -\inv{2} \delta^{[\nu\mu]}_{\alpha\beta} F^{\alpha\beta} \\
&= -\inv{2} \lr{ F^{\nu\mu} - F^{\mu\nu} } \\
&= F^{\mu\nu}.
\end{aligned}
\end{equation}
Since \( \PDi{A_\mu}{\LL} = -J^\mu \), this gives \( -J^\mu = \partial_\nu F^{\mu\nu} \), or \( \partial_\nu F^{\nu\mu} = J^\mu \), the equivalent of \( \grad \cdot F = J \), as expected.

Coordinate-free representation and variation of the Lagrangian.

Because
\begin{equation}\label{eqn:maxwellLagrangian:200}
F^2 =
-\inv{2}
F^{\mu\nu} F_{\mu\nu}
+
\lr{ \gamma_\alpha \wedge \gamma^\beta }
F_{\alpha\mu}
F^{\beta\mu}
+
\frac{I}{4}
\epsilon_{\mu\nu\alpha\beta} F^{\mu\nu} F^{\alpha\beta},
\end{equation}
we may express the Lagrangian \ref{eqn:maxwellLagrangian:1870} in a coordinate free representation
\begin{equation}\label{eqn:maxwellLagrangian:1890}
\LL = \inv{2} F \cdot F - A \cdot J,
\end{equation}
where \( F = \grad \wedge A \).
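
That scalar part identity, \( F \cdot F = -\inv{2} F^{\mu\nu} F_{\mu\nu} \), can also be spot checked numerically for a random antisymmetric \( F^{\mu\nu} \). A small sketch (using the Dirac matrices as a stand-in representation of the \( \gamma_\mu \)'s):

import numpy as np

# Dirac matrices as a representation of gamma_0 .. gamma_3, signature (+,-,-,-).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)] + \
    [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])
rng = np.random.default_rng(0)
Fup = rng.normal(size=(4, 4))
Fup = Fup - Fup.T                            # F^{mu nu}, antisymmetric
Fdn = eta @ Fup @ eta                        # F_{mu nu}

# F = (1/2) F^{mu nu} gamma_mu ^ gamma_nu; for mu != nu the wedge is just the product.
F = sum(0.5 * Fup[m, n] * (g[m] @ g[n]) for m in range(4) for n in range(4) if m != n)

scalar = np.trace(F @ F).real / 4            # <F^2>_0 = F . F, by trace projection
print(np.isclose(scalar, -0.5 * np.sum(Fup * Fdn)))   # True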

We will now show that it is also possible to apply the variational principle to the following multivector Lagrangian
\begin{equation}\label{eqn:maxwellLagrangian:1910}
\LL = \inv{2} F^2 - A \cdot J,
\end{equation}
and recover the geometric algebra form \( \grad F = J \) of Maxwell’s equation in its entirety, including both vector and trivector components in one shot.

We will need a few geometric algebra tools to do this.

The first such tool is the notational freedom to let the gradient act bidirectionally on multivectors to the left and right. We will designate such action with over-arrows, sometimes also using braces to limit the scope of the action in question. If \( Q, R \) are multivectors, then the bidirectional action of the gradient in a \( Q, R \) sandwich is
\begin{equation}\label{eqn:maxwellLagrangian:1950}
\begin{aligned}
Q \lrgrad R
&= Q \lgrad R + Q \rgrad R \\
&= \lr{ Q \gamma^\mu \lpartial_\mu } R + Q \lr{ \gamma^\mu \rpartial_\mu R } \\
&= \lr{ \partial_\mu Q } \gamma^\mu R + Q \gamma^\mu \lr{ \partial_\mu R }.
\end{aligned}
\end{equation}
In the final statement, the partials are acting exclusively on \( Q \) and \( R \) respectively, but the \( \gamma^\mu \) factors must remain in place, as they do not necessarily commute with any of the multivector factors.

This bidirectional action is a critical aspect of the Fundamental Theorem of Geometric calculus, another tool that we will require. The specific form of that theorem that we will utilize here is
\begin{equation}\label{eqn:maxwellLagrangian:1970}
\int_V Q d^4 \Bx \lrgrad R = \int_{\partial V} Q d^3 \Bx R,
\end{equation}
where \( d^4 \Bx = I d^4 x \) is the pseudoscalar four-volume element associated with a parameterization of spacetime. For our purposes, we may assume that the parameterization is the standard coordinate one, associated with the basis \( \setlr{ \gamma_0, \gamma_1, \gamma_2, \gamma_3 } \). The surface differential form \( d^3 \Bx \) can be given specific meaning, but we do not actually care what that form is here, as all our surface integrals will be zero due to the boundary constraints of the variational principle.

Finally, we will utilize the fact that bivector products can be split into grade \(0,4\) and \( 2 \) components using anticommutator and commutator products, namely, given two bivectors \( F, G \), we have
\begin{equation}\label{eqn:maxwellLagrangian:1990}
\begin{aligned}
\gpgrade{ F G }{0,4} &= \inv{2} \lr{ F G + G F } \\
\gpgrade{ F G }{2} &= \inv{2} \lr{ F G - G F }.
\end{aligned}
\end{equation}
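
As a small numeric illustration of that split (a sketch only, again using the Dirac matrices for the \( \gamma_\mu \)'s), the symmetrized product of two random bivectors has nothing left but grades 0 and 4, that is, it has the form \( a + b I \):

import numpy as np

# Dirac matrices as a representation of gamma_0 .. gamma_3, signature (+,-,-,-).
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)] + \
    [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
Ps = g[0] @ g[1] @ g[2] @ g[3]               # pseudoscalar I

rng = np.random.default_rng(0)
def random_bivector():
    return sum(rng.normal() * (g[m] @ g[n]) for m in range(4) for n in range(m + 1, 4))

F, G = random_bivector(), random_bivector()
S = (F @ G + G @ F) / 2                      # candidate <F G>_{0,4}
a = np.trace(S).real / 4                     # scalar part
b = np.trace(-Ps @ S).real / 4               # pseudoscalar part
print(np.allclose(S, a * np.eye(4) + b * Ps))   # True: only grades 0 and 4 survive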

We may now proceed to evaluate the variation of the action for our presumed Lagrangian
\begin{equation}\label{eqn:maxwellLagrangian:2010}
S = \int d^4 x \lr{ \inv{2} F^2 - A \cdot J }.
\end{equation}
We seek solutions of the variational equation \( \delta S = 0 \), that are satisfied for all variations \( \delta A \), where the four-potential variations \( \delta A \) are zero on the boundaries of this action volume (i.e. an infinite spherical surface.)

We may start our variation in terms of \( F \) and \( A \)
\begin{equation}\label{eqn:maxwellLagrangian:1540}
\begin{aligned}
\delta S
&=
\int d^4 x \inv{2} \lr{ \lr{ \delta F } F + F \lr{ \delta F } } - \lr{ \delta A } \cdot J \\
&=
\int d^4 x \gpgrade{ \lr{ \delta F } F - \lr{ \delta A } J }{0,4} \\
&=
\int d^4 x \gpgrade{ \lr{ \grad \wedge \lr{\delta A} } F - \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \lgrad } F - \lr{ \lr{ \delta A } \cdot \lgrad } F + \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \lgrad } F + \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{\delta A} \lrgrad F - \lr{\delta A} \rgrad F + \lr{ \delta A } J }{0,4},
\end{aligned}
\end{equation}
where we have used arrows, when required, to indicate the directional action of the gradient.

Writing \( d^4 x = -I d^4 \Bx \), we have
\begin{equation}\label{eqn:maxwellLagrangian:1600}
\begin{aligned}
\delta S
&=
-\int_V d^4 x \gpgrade{ \lr{\delta A} \lrgrad F - \lr{\delta A} \rgrad F + \lr{ \delta A } J }{0,4} \\
&=
-\int_V \gpgrade{ -\lr{\delta A} I d^4 \Bx \lrgrad F - d^4 x \lr{\delta A} \rgrad F + d^4 x \lr{ \delta A } J }{0,4} \\
&=
\int_{\partial V} \gpgrade{ \lr{\delta A} I d^3 \Bx F }{0,4}
+ \int_V d^4 x \gpgrade{ \lr{\delta A} \lr{ \rgrad F - J } }{0,4}.
\end{aligned}
\end{equation}
The first integral is killed since \( \delta A = 0 \) on the boundary. The remaining integrand can be simplified to
\begin{equation}\label{eqn:maxwellLagrangian:1660}
\gpgrade{ \lr{\delta A} \lr{ \rgrad F - J } }{0,4} =
\gpgrade{ \lr{\delta A} \lr{ \grad F - J } }{0},
\end{equation}
where the grade-4 filter has also been discarded: since \( \grad \wedge F = \grad \wedge \grad \wedge A = 0 \) by construction, we have \( \grad F = \grad \cdot F + \grad \wedge F = \grad \cdot F \), which implies that the only non-zero grades in the multivector \( \grad F - J \) are vector grades. Also, the directional indicator on the gradient has been dropped, since there is no longer any ambiguity. We seek solutions of \( \gpgrade{ \lr{\delta A} \lr{ \grad F - J } }{0} = 0 \) for all variations \( \delta A \), namely
\begin{equation}\label{eqn:maxwellLagrangian:1620}
\boxed{
\grad F = J.
}
\end{equation}
This is Maxwell’s equation in its coordinate free STA form, found using the variational principle from a coordinate free multivector Maxwell Lagrangian, without having to resort to a coordinate expansion of that Lagrangian.

Lagrangian for fictitious magnetic sources.

The generalization of the Lagrangian to include magnetic charge and current densities can be as simple as utilizing two independent four-potential fields
\begin{equation}\label{eqn:maxwellLagrangian:n}
\LL = \inv{2} \lr{ \grad \wedge A }^2 - A \cdot J + \alpha \lr{ \inv{2} \lr{ \grad \wedge K }^2 - K \cdot M },
\end{equation}
where \( \alpha \) is an arbitrary multivector constant.

Variation of this Lagrangian provides two independent equations
\begin{equation}\label{eqn:maxwellLagrangian:1840}
\begin{aligned}
\grad \lr{ \grad \wedge A } &= J \\
\grad \lr{ \grad \wedge K } &= M.
\end{aligned}
\end{equation}
We may add these, scaling the second by \( -I \) (recall that \( I, \grad \) anticommute), to find
\begin{equation}\label{eqn:maxwellLagrangian:1860}
\grad \lr{ F_{\mathrm{e}} + I F_{\mathrm{m}} } = J - I M,
\end{equation}
which is \( \grad F = J - I M \), as desired.

It would be interesting to explore whether it is possible to find a Lagrangian that is dependent on a multivector potential, and that would yield \( \grad F = J - I M \) directly, instead of requiring a superposition operation from the two independent solutions. One such possible potential is \( \tilde{A} = A - I K \), for which \( F = \gpgradetwo{ \grad \tilde{A} } = \grad \wedge A + I \lr{ \grad \wedge K } \). The author was not successful in constructing such a Lagrangian.

A coordinate free variation of the Maxwell equation multivector Lagrangian.

June 18, 2022 math and physics play

This is the 7th part of a series on finding Maxwell’s equations (including the fictitious magnetic sources that are useful in engineering) from a multivector Lagrangian representation.

[Click here for a PDF version of this series of posts, up to and including this one.]  The first, second, third, fourth, fifth, and sixth parts are also available here on this blog.

For what is now (probably) the final step in this exploration, we now wish to evaluate the variation of the multivector Maxwell Lagrangian
\begin{equation}\label{eqn:fsquared:1440x}
\LL = \inv{2} F^2 - \gpgrade{A \lr{ J - I M } }{0,4},
\end{equation}
without resorting to coordinate expansion of any part of \( F = \grad \wedge A \). We’d initially evaluated this, expanding both \( \grad \) and \( A \) in coordinates, and then just \( \grad \), but we can avoid both.
In particular, given a coordinate free Lagrangian, and a coordinate free form of Maxwell’s equation as the final destination, there must be a way to get there directly.

It is clear how to work through the first part of the action variation argument, without resorting to any sort of coordinate expansion
\begin{equation}\label{eqn:fsquared:1540}
\begin{aligned}
\delta S
&=
\int d^4 x \inv{2} \lr{ \lr{ \delta F } F + F \lr{ \delta F } } - \gpgrade{ \lr{ \delta A } \lr{ J - I M } }{0,4} \\
&=
\int d^4 x \gpgrade{ \lr{ \delta F } F - \lr{ \delta A } \lr{ J - I M } }{0,4} \\
&=
\int d^4 x \gpgrade{ \lr{ \grad \wedge \lr{\delta A} } F - \lr{ \delta A } \lr{ J - I M } }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \grad } F - \lr{ \lr{ \delta A } \cdot \grad } F + \lr{ \delta A } \lr{ J - I M } }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \grad } F + \lr{ \delta A } \lr{ J - I M } }{0,4}.
\end{aligned}
\end{equation}

In the last three lines, it is important to note that \( \grad \), despite being written to the right of \( \delta A \), acts only on \( \delta A \), and not on \( F \).
In particular, if \( B, C \) are multivectors, we interpret the bidirectional action of the gradient as
\begin{equation}\label{eqn:fsquared:1560}
\begin{aligned}
B \lrgrad C &=
B \gamma^\mu \lrpartial_\mu C \\
&=
(\partial_\mu B) \gamma^\mu C
+
B \gamma^\mu (\partial_\mu C),
\end{aligned}
\end{equation}
where the partial operators on the first line are bidirectionally acting, and parentheses have been used in the last line to indicate the scope of the operators in the chain rule expansion.

Let’s also use arrows to clarify the directionality of this first part of the action variation, writing
\begin{equation}\label{eqn:fsquared:1580}
\begin{aligned}
\delta S
&=
-\int d^4 x \gpgrade{ \lr{\delta A} \lgrad F + \lr{ \delta A } \lr{ J - I M } }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{\delta A} \lrgrad F - \lr{\delta A} \rgrad F + \lr{ \delta A } \lr{ J - I M } }{0,4}.
\end{aligned}
\end{equation}
We can cast the first term into an integrand that can be evaluated using the Fundamental Theorem of Geometric Calculus, by introducing a parameterization \( x = x(a_\mu) \), for which the tangent space basis vectors are \( \Bx_{a_\mu} = \PDi{a_\mu}{x} \), and the pseudoscalar volume element is
\begin{equation}\label{eqn:fsquared:1640}
d^4 \Bx = \lr{ \Bx_{a_0} \wedge \Bx_{a_1} \wedge \Bx_{a_2} \wedge \Bx_{a_3} } da_0 da_1 da_2 da_3 = I d^4 x.
\end{equation}
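For the standard coordinate parameterization \( x = x^\mu \gamma_\mu \), for example, the tangent space basis vectors are just \( \Bx_{x^\mu} = \gamma_\mu \), and
\begin{equation}\label{eqn:fsquared:1645}
d^4 \Bx = \lr{ \gamma_0 \wedge \gamma_1 \wedge \gamma_2 \wedge \gamma_3 } dx^0 dx^1 dx^2 dx^3 = I d^4 x.
\end{equation}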
Writing \( d^4 x = -I d^4 \Bx \), we have
\begin{equation}\label{eqn:fsquared:1600}
\begin{aligned}
\delta S
&=
-\int_V d^4 x \gpgrade{ \lr{\delta A} \lrgrad F - \lr{\delta A} \rgrad F + \lr{ \delta A } \lr{ J - I M } }{0,4} \\
&=
-\int_V \gpgrade{ -\lr{\delta A} I d^4 \Bx \lrgrad F - d^4 x \lr{\delta A} \rgrad F + d^4 x \lr{ \delta A } \lr{ J - I M } }{0,4} \\
&=
\int_{\partial V} \gpgrade{ \lr{\delta A} I d^3 \Bx F }{0,4}
+ \int_V d^4 x \gpgrade{ \lr{\delta A} \lr{ \rgrad F - J + I M } }{0,4}.
\end{aligned}
\end{equation}
The first integral is killed since \( \delta A = 0 \) on the boundary. For the second integral to be zero for all variations \( \delta A \), we must have
\begin{equation}\label{eqn:fsquared:1660}
\gpgrade{ \lr{\delta A} \lr{ \rgrad F - J + I M } }{0,4} = 0,
\end{equation}
but we have argued previously that we can drop the grade selection, leaving
\begin{equation}\label{eqn:fsquared:1620}
\boxed{
\grad F = J - I M
},
\end{equation}
where the directional indicator on our gradient has been dropped, since there is no longer any ambiguity. This is Maxwell’s equation in its coordinate free STA form, found using the variational principle from a coordinate free multivector Maxwell Lagrangian, without having to resort to a coordinate expansion of that Lagrangian.

Progressing towards coordinate free form of the Euler-Lagrange equations for Maxwell’s equation

June 17, 2022 math and physics play

This is the 6th part of a series on finding Maxwell’s equations (including the fictitious magnetic sources that are useful in engineering) from a multivector Lagrangian representation.

[Click here for a PDF version of this series of posts, up to and including this one.]  The first, second, third, fourth, and fifth parts are also available here on this blog.

We managed to find Maxwell’s equation in its STA form by variation of a multivector Lagrangian with respect to a four-vector field (the potential). That approach differed from the usual variation with respect to the coordinates of that four-vector, or from the use of the Euler-Lagrange equations with respect to those coordinates.

Euler-Lagrange equations.

Having done so, an immediate question is whether we can express the Euler-Lagrange equations with respect to the four-potential in its entirety, instead of the coordinates of that vector. I have some intuition about how to completely avoid that use of coordinates, but first we can get part way there.

Consider a general Lagrangian, dependent on a field \( A \) and all its derivatives \( \partial_\mu A \)
\begin{equation}\label{eqn:fsquared:1180}
\LL = \LL( A, \partial_\mu A ).
\end{equation}

The variational principle requires
\begin{equation}\label{eqn:fsquared:1200}
0 = \delta S = \int d^4 x \delta \LL( A, \partial_\mu A ).
\end{equation}
That variation can be expressed as a limiting parametric operation as follows
\begin{equation}\label{eqn:fsquared:1220}
\delta S
= \int d^4 x
\lr{
\lim_{t \rightarrow 0} \ddt{} \LL( A + t \delta A )
+
\sum_\mu
\lim_{t \rightarrow 0} \ddt{} \LL( \partial_\mu A + t \delta \partial_\mu A )
}
\end{equation}
We eventually want a coordinate free expression for the variation, but we’ll use coordinates to get there. We can expand the first derivative by chain rule as
\begin{equation}\label{eqn:fsquared:1240}
\begin{aligned}
\lim_{t \rightarrow 0} \ddt{} \LL( A + t \delta A )
&=
\lim_{t \rightarrow 0} \PD{(A^\alpha + t \delta A^\alpha)}{\LL} \PD{t}{}(A^\alpha + t \delta A^\alpha) \\
&=
\PD{A^\alpha}{\LL} \delta A^\alpha.
\end{aligned}
\end{equation}
This has the structure of a directional derivative along \( \delta A \). In particular, let
\begin{equation}\label{eqn:fsquared:1260}
\grad_A = \gamma^\alpha \PD{A^\alpha}{},
\end{equation}
so we have
\begin{equation}\label{eqn:fsquared:1280}
\lim_{t \rightarrow 0} \ddt{} \LL( A + t \delta A )
= \lr{ \delta A \cdot \grad_A } \LL.
\end{equation}
Similarly,
\begin{equation}\label{eqn:fsquared:1300}
\lim_{t \rightarrow 0} \ddt{} \LL( \partial_\mu A + t \delta \partial_\mu A )
=
\PD{(\partial_\mu A^\alpha)}{\LL} \delta \partial_\mu A^\alpha,
\end{equation}
so we can define a gradient with respect to each of the derivatives of \(A \) as
\begin{equation}\label{eqn:fsquared:1320}
\grad_{\partial_\mu A} = \gamma^\alpha \PD{(\partial_\mu A^\alpha)}{}.
\end{equation}
Our variation can now be expressed in a somewhat coordinate free form
\begin{equation}\label{eqn:fsquared:1340}
\delta S = \int d^4 x \lr{
\lr{\delta A \cdot \grad_A} \LL + \lr{ \lr{\delta \partial_\mu A} \cdot \grad_{\partial_\mu A} } \LL
}.
\end{equation}
We now sum implicitly over repeated indices \( \mu \) (i.e. we are treating \( \grad_{\partial_\mu A} \) as an upper index entity). We can now proceed with our chain rule expansion
\begin{equation}\label{eqn:fsquared:1360}
\begin{aligned}
\delta S
&= \int d^4 x \lr{
\lr{\delta A \cdot \grad_A} \LL + \lr{ \lr{\delta \partial_\mu A} \cdot \grad_{\partial_\mu A} } \LL
} \\
&= \int d^4 x \lr{
\lr{\delta A \cdot \grad_A} \LL + \lr{ \lr{\partial_\mu \delta A} \cdot \grad_{\partial_\mu A} } \LL
} \\
&= \int d^4 x \lr{
\lr{\delta A \cdot \grad_A} \LL
+ \partial_\mu \lr{ \lr{ \delta A \cdot \grad_{\partial_\mu A} } \LL }
- \lr{\PD{x^\mu}{} \delta A \cdot \grad_{\partial_\mu A} \LL}_{\delta A}
}.
\end{aligned}
\end{equation}
As usual, we kill off the boundary term, by insisting that \( \delta A = 0 \) on the boundary, leaving us with a four-vector form of the field Euler-Lagrange equations
\begin{equation}\label{eqn:fsquared:1380}
\lr{\delta A \cdot \grad_A} \LL = \lr{\PD{x^\mu}{} \delta A \cdot \grad_{\partial_\mu A} \LL}_{\delta A},
\end{equation}
where the RHS derivatives are taken with \(\delta A \) held fixed. We seek solutions of this equation that hold for all variations \( \delta A \).
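
As a sanity check, picking the specific variation \( \delta A = \gamma^\alpha \) recovers the familiar coordinate form of the field Euler-Lagrange equations, since \( \lr{ \gamma^\alpha \cdot \grad_A } \LL = \PDi{A_\alpha}{\LL} \) and \( \lr{ \gamma^\alpha \cdot \grad_{\partial_\mu A} } \LL = \PDi{(\partial_\mu A_\alpha)}{\LL} \), so that \ref{eqn:fsquared:1380} reduces to
\begin{equation}\label{eqn:fsquared:1385}
\PD{A_\alpha}{\LL} = \PD{x^\mu}{} \PD{(\partial_\mu A_\alpha)}{\LL}.
\end{equation}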

Application to the Maxwell Lagrangian.

For the Maxwell application we need a few helper calculations. The first, given a multivector \( B \), is
\begin{equation}\label{eqn:fsquared:1400}
\begin{aligned}
\lr{ \delta A \cdot \grad_A } A B
&=
\delta A^\alpha \PD{A^\alpha}{} \gamma_\beta A^\beta B \\
&=
\delta A^\alpha \gamma_\alpha B \\
&=
\lr{ \delta A } B.
\end{aligned}
\end{equation}

Now let’s compute, for multivector \( B \)
\begin{equation}\label{eqn:fsquared:1420}
\begin{aligned}
\lr{ \delta A \cdot \grad_{\partial_\mu A} } B F
&=
\delta A^\alpha \PD{(\partial_\mu A^\alpha)}{} B \lr{ \gamma^\beta \wedge \partial_\beta \lr{ \gamma_\pi A^\pi } } \\
&=
\delta A^\alpha B \lr{ \gamma^\mu \wedge \gamma_\alpha } \\
&=
B \lr{ \gamma^\mu \wedge \delta A }.
\end{aligned}
\end{equation}

Our Lagrangian is
\begin{equation}\label{eqn:fsquared:1440}
\LL = \inv{2} F^2 - \gpgrade{A \lr{ J - I M } }{0,4},
\end{equation}
so
\begin{equation}\label{eqn:fsquared:1460}
\lr{\delta A \cdot \grad_A} \LL
=
-\gpgrade{ \lr{ \delta A } \lr{ J - I M } }{0,4},
\end{equation}
and
\begin{equation}\label{eqn:fsquared:1480}
\begin{aligned}
\lr{ \delta A \cdot \grad_{\partial_\mu A} } \inv{2} F^2
&=
\inv{2} \lr{ F \lr{ \gamma^\mu \wedge \delta A } + \lr{ \gamma^\mu \wedge \delta A } F } \\
&=
\gpgrade{
\lr{ \gamma^\mu \wedge \delta A } F
}{0,4} \\
&=
-\gpgrade{
\lr{ \delta A \wedge \gamma^\mu } F
}{0,4} \\
&=
-\gpgrade{
\delta A \gamma^\mu F
-
\lr{ \delta A \cdot \gamma^\mu } F
}{0,4} \\
&=
-\gpgrade{
\delta A \gamma^\mu F
}{0,4}.
\end{aligned}
\end{equation}
Taking derivatives (holding \( \delta A \) fixed), we have
\begin{equation}\label{eqn:fsquared:1500}
\begin{aligned}
-\gpgrade{ \lr{ \delta A } \lr{ J - I M } }{0,4}
&=
-\gpgrade{
\delta A \partial_\mu \gamma^\mu F
}{0,4} \\
&=
-\gpgrade{
\delta A \grad F
}{0,4}.
\end{aligned}
\end{equation}
We’ve already seen that the solution can be expressed without grade selection as
\begin{equation}\label{eqn:fsquared:1520}
\grad F = \lr{ J - I M },
\end{equation}
which is Maxwell’s equation in it’s STA form. It’s not clear that this is really any less work, but it’s a step towards a coordinate free evaluation of the Maxwell Lagrangian (at least not having to use the coordinates \( A^\mu \) as we have to do in the tensor formalism.)