## A multivector Lagrangian for Maxwell’s equation: A summary of previous exploration.

This summarizes the significant parts of the last 8 blog posts.

## STA form of Maxwell’s equation.

Maxwell’s equations, with electric and fictitious magnetic sources (useful for antenna theory and other engineering applications), are
\label{eqn:maxwellLagrangian:220}
\begin{aligned}
\spacegrad \cdot \BE &= \frac{\rho}{\epsilon} \\
\spacegrad \cross \BE &= -\BM - \mu \PD{t}{\BH} \\
\spacegrad \cdot \BH &= \frac{\rho_\txtm}{\mu} \\
\spacegrad \cross \BH &= \BJ + \epsilon \PD{t}{\BE}.
\end{aligned}

We can assemble these into a single geometric algebra equation,
\label{eqn:maxwellLagrangian:240}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = \eta \lr{ c \rho - \BJ } + I \lr{ c \rho_{\mathrm{m}} - \BM },

where $$F = \BE + \eta I \BH = \BE + I c \BB$$, $$c = 1/\sqrt{\mu\epsilon}$$, and $$\eta = \sqrt{\mu/\epsilon}$$.

By multiplying through by $$\gamma_0$$, making the identification $$\Be_k = \gamma_k \gamma_0$$, and
\label{eqn:maxwellLagrangian:300}
\begin{aligned}
J^0 &= \frac{\rho}{\epsilon}, \quad J^k = \eta \lr{ \BJ \cdot \Be_k }, \quad J = J^\mu \gamma_\mu \\
M^0 &= c \rho_{\mathrm{m}}, \quad M^k = \BM \cdot \Be_k, \quad M = M^\mu \gamma_\mu \\
\end{aligned}

we find the STA form of Maxwell’s equation, including magnetic sources
\label{eqn:maxwellLagrangian:320}
\grad F = J - I M.

## Decoupling the electric and magnetic fields and sources.

We can utilize two separate four-vector potential fields to split Maxwell’s equation into two parts. Let
\label{eqn:maxwellLagrangian:1740}
F = F_{\mathrm{e}} + I F_{\mathrm{m}},

where
\label{eqn:maxwellLagrangian:1760}
\begin{aligned}
F_{\mathrm{e}} &= \grad \wedge A \\
F_{\mathrm{m}} &= \grad \wedge K,
\end{aligned}

and $$A, K$$ are independent four-vector potential fields. Plugging this into Maxwell’s equation, and employing a duality transformation, gives us two coupled vector grade equations
\label{eqn:maxwellLagrangian:1780}
\begin{aligned}
\grad \cdot F_{\mathrm{e}} - I \lr{ \grad \wedge F_{\mathrm{m}} } &= J \\
\grad \cdot F_{\mathrm{m}} + I \lr{ \grad \wedge F_{\mathrm{e}} } &= M.
\end{aligned}

However, since $$\grad \wedge F_{\mathrm{m}} = \grad \wedge F_{\mathrm{e}} = 0$$ by construction, the curl terms above are killed. We may also add in the (zero) terms $$\grad \wedge F_{\mathrm{e}}$$ and $$\grad \wedge F_{\mathrm{m}}$$ respectively, promoting each dot product to a full gradient, yielding two independent gradient equations
\label{eqn:maxwellLagrangian:1810}
\begin{aligned}
\grad F_{\mathrm{e}} &= J \\
\grad F_{\mathrm{m}} &= M,
\end{aligned}

one for each of the electric and magnetic sources and their associated fields.

## Tensor formulation.

The electromagnetic field $$F$$ is a vector-bivector multivector in the multivector representation of Maxwell’s equation, but is a bivector in the STA representation. The split of $$F$$ into its electric and magnetic field components is observer dependent, but we may write it without reference to a specific observer frame as
\label{eqn:maxwellLagrangian:1830}
F = \inv{2} \gamma_\mu \wedge \gamma_\nu F^{\mu\nu},

where $$F^{\mu\nu}$$ is an arbitrary antisymmetric 2nd rank tensor. Maxwell’s equation has a vector and trivector component, which may be split out explicitly using grade selection, to find
\label{eqn:maxwellLagrangian:360}
\begin{aligned}
\grad \cdot F &= J \\
\grad \wedge F &= -I M.
\end{aligned}

Further dotting and wedging these equations with $$\gamma^\mu$$ allows for extraction of scalar relations
\label{eqn:maxwellLagrangian:460}
\partial_\nu F^{\nu\mu} = J^{\mu}, \quad \partial_\nu G^{\nu\mu} = M^{\mu},

where $$G^{\mu\nu} = -(1/2) \epsilon^{\mu\nu\alpha\beta} F_{\alpha\beta}$$ is also an antisymmetric 2nd rank tensor.

If we treat $$F^{\mu\nu}$$ and $$G^{\mu\nu}$$ as independent fields, this pair of equations is the coordinate equivalent to \ref{eqn:maxwellLagrangian:1760}, also decoupling the electric and magnetic source contributions to Maxwell’s equation.

## Coordinate representation of the Lagrangian.

As observed above, we may choose to express the decoupled fields as curls $$F_{\mathrm{e}} = \grad \wedge A$$ or $$F_{\mathrm{m}} = \grad \wedge K$$. The coordinate expansion of either field component, given such a representation, is straightforward. For example
\label{eqn:maxwellLagrangian:1850}
\begin{aligned}
F_{\mathrm{e}}
&= \lr{ \gamma_\mu \partial^\mu } \wedge \lr{ \gamma_\nu A^\nu } \\
&= \inv{2} \lr{ \gamma_\mu \wedge \gamma_\nu } \lr{ \partial^\mu A^\nu - \partial^\nu A^\mu }.
\end{aligned}

We make the identification $$F^{\mu\nu} = \partial^\mu A^\nu – \partial^\nu A^\mu$$, the usual definition of $$F^{\mu\nu}$$ in the tensor formalism. In that tensor formalism, the Maxwell Lagrangian is
\label{eqn:maxwellLagrangian:1870}
\LL = - \inv{4} F_{\mu\nu} F^{\mu\nu} - A_\mu J^\mu.

We may verify this through application of the Euler-Lagrange equations
\label{eqn:maxwellLagrangian:600}
\PD{A_\mu}{\LL} = \partial_\nu \PD{(\partial_\nu A_\mu)}{\LL}.

\label{eqn:maxwellLagrangian:1930}
\begin{aligned}
\PD{(\partial_\nu A_\mu)}{\LL}
&= -\inv{4} (2) \lr{ \PD{(\partial_\nu A_\mu)}{F_{\alpha\beta}} } F^{\alpha\beta} \\
&= -\inv{2} \delta^{[\nu\mu]}_{\alpha\beta} F^{\alpha\beta} \\
&= -\inv{2} \lr{ F^{\nu\mu} - F^{\mu\nu} } \\
&= F^{\mu\nu}.
\end{aligned}

So $$\partial_\nu F^{\nu\mu} = J^\mu$$, the equivalent of $$\grad \cdot F = J$$, as expected.
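
The index manipulations above are easy to get wrong, so a numeric spot check is comforting. Here is a small finite-difference sketch (Python, with my own helper names, using the $$(+,-,-,-)$$ metric) verifying that the derivative of the field portion of the Lagrangian with respect to $$\partial_\nu A_\mu$$ is $$F^{\mu\nu}$$; the $$-A_\mu J^\mu$$ term is omitted since it does not depend on the derivatives of the potential:

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.diag([1.0, -1.0, -1.0, -1.0])        # metric, signature (+,-,-,-)
dA = rng.normal(size=(4, 4))                # dA[m, n] = partial_m A_n

def lagrangian(dA):
    dA_up = g @ dA @ g                      # partial^m A^n (diagonal metric)
    F_up = dA_up - dA_up.T                  # F^{mn} = partial^m A^n - partial^n A^m
    F_dn = g @ F_up @ g                     # F_{mn}
    return -0.25 * np.sum(F_dn * F_up)      # -(1/4) F_{mn} F^{mn}

def dL_ddA(dA, eps=1e-6):
    # central-difference dL/d(partial_nu A_mu), indexed [nu, mu]
    out = np.zeros((4, 4))
    for nu in range(4):
        for mu in range(4):
            d = np.zeros((4, 4))
            d[nu, mu] = eps
            out[nu, mu] = (lagrangian(dA + d) - lagrangian(dA - d)) / (2 * eps)
    return out

dA_up = g @ dA @ g
F_up = dA_up - dA_up.T
print(np.max(np.abs(dL_ddA(dA) - F_up.T)))  # ~0: dL/d(partial_nu A_mu) = F^{mu nu}
```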

## Coordinate-free representation and variation of the Lagrangian.

Because
\label{eqn:maxwellLagrangian:200}
F^2 =
-\inv{2}
F^{\mu\nu} F_{\mu\nu}
+
\lr{ \gamma_\alpha \wedge \gamma^\beta }
F_{\alpha\mu}
F^{\beta\mu}
+
\frac{I}{4}
\epsilon_{\mu\nu\alpha\beta} F^{\mu\nu} F^{\alpha\beta},

we may express the Lagrangian \ref{eqn:maxwellLagrangian:1870} in a coordinate free representation
\label{eqn:maxwellLagrangian:1890}
\LL = \inv{2} F \cdot F - A \cdot J,

where $$F = \grad \wedge A$$.

We will now show that it is also possible to apply the variational principle to the following multivector Lagrangian
\label{eqn:maxwellLagrangian:1910}
\LL = \inv{2} F^2 - A \cdot J,

and recover the geometric algebra form $$\grad F = J$$ of Maxwell’s equation in its entirety, including both vector and trivector components in one shot.

We will need a few geometric algebra tools to do this.

The first such tool is the notational freedom to let the gradient act bidirectionally on multivectors to the left and right. We will designate such action with over-arrows, sometimes also using braces to limit the scope of the action in question. If $$Q, R$$ are multivectors, then the bidirectional action of the gradient in a $$Q, R$$ sandwich is
\label{eqn:maxwellLagrangian:1950}
\begin{aligned}
Q \lrgrad R &= \lr{ Q \gamma^\mu \lpartial_\mu } R + Q \lr{ \gamma^\mu \rpartial_\mu R } \\
&= \lr{ \partial_\mu Q } \gamma^\mu R + Q \gamma^\mu \lr{ \partial_\mu R }.
\end{aligned}

In the final statement, the partials are acting exclusively on $$Q$$ and $$R$$ respectively, but the $$\gamma^\mu$$ factors must remain in place, as they do not necessarily commute with any of the multivector factors.

This bidirectional action is a critical aspect of the Fundamental Theorem of Geometric calculus, another tool that we will require. The specific form of that theorem that we will utilize here is
\label{eqn:maxwellLagrangian:1970}
\int_V Q d^4 \Bx \lrgrad R = \int_{\partial V} Q d^3 \Bx R,

where $$d^4 \Bx = I d^4 x$$ is the pseudoscalar four-volume element associated with a parameterization of spacetime. For our purposes, we may assume that the parameterization is the standard coordinates associated with the basis $$\setlr{ \gamma_0, \gamma_1, \gamma_2, \gamma_3 }$$. The surface differential form $$d^3 \Bx$$ can be given specific meaning, but we do not actually care what that form is here, as all our surface integrals will be zero due to the boundary constraints of the variational principle.

Finally, we will utilize the fact that bivector products can be split into grade $$0,4$$ and $$2$$ components using anticommutator and commutator products, namely, given two bivectors $$F, G$$, we have
\label{eqn:maxwellLagrangian:1990}
\begin{aligned}
\gpgrade{ F G }{0,4} &= \inv{2} \lr{ F G + G F } \\
\gpgrade{ F G }{2} &= \inv{2} \lr{ F G - G F }.
\end{aligned}

We may now proceed to evaluate the variation of the action for our presumed Lagrangian
\label{eqn:maxwellLagrangian:2010}
S = \int d^4 x \lr{ \inv{2} F^2 - A \cdot J }.

We seek solutions of the variational equation $$\delta S = 0$$ that hold for all variations $$\delta A$$, where the four-potential variations are zero on the boundary of the action volume (i.e. an infinite spherical surface.)

We may start our variation in terms of $$F$$ and $$A$$
\label{eqn:maxwellLagrangian:1540}
\begin{aligned}
\delta S
&=
\int d^4 x \lr{ \inv{2} \lr{ \lr{ \delta F } F + F \lr{ \delta F } } - \lr{ \delta A } \cdot J } \\
&=
\int d^4 x \gpgrade{ \lr{ \delta F } F - \lr{ \delta A } J }{0,4} \\
&=
\int d^4 x \gpgrade{ \lr{ \grad \wedge \lr{\delta A} } F - \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \lgrad } F - \lr{ \lr{ \delta A } \cdot \lgrad } F + \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \lgrad } F + \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{\delta A} \lrgrad F - \lr{\delta A} \rgrad F + \lr{ \delta A } J }{0,4},
\end{aligned}

where we have used arrows, when required, to indicate the directional action of the gradient.

Writing $$d^4 x = -I d^4 \Bx$$, we have
\label{eqn:maxwellLagrangian:1600}
\begin{aligned}
\delta S
&=
-\int_V d^4 x \gpgrade{ \lr{\delta A} \lrgrad F - \lr{\delta A} \rgrad F + \lr{ \delta A } J }{0,4} \\
&=
-\int_V \gpgrade{ -\lr{\delta A} I d^4 \Bx \lrgrad F - d^4 x \lr{\delta A} \rgrad F + d^4 x \lr{ \delta A } J }{0,4} \\
&=
\int_{\partial V} \gpgrade{ \lr{\delta A} I d^3 \Bx F }{0,4}
+ \int_V d^4 x \gpgrade{ \lr{\delta A} \lr{ \rgrad F - J } }{0,4}.
\end{aligned}

The first integral is killed since $$\delta A = 0$$ on the boundary. The remaining integrand can be simplified to
\label{eqn:maxwellLagrangian:1660}
\gpgrade{ \lr{\delta A} \lr{ \grad F - J } }{0},

where the grade-4 filter has also been discarded, since $$\grad F = \grad \cdot F + \grad \wedge F = \grad \cdot F$$, as $$\grad \wedge F = \grad \wedge \grad \wedge A = 0$$ by construction, which implies that the only non-zero grades in the multivector $$\grad F - J$$ are vector grades. The directional indicator on the gradient has been dropped as well, since there is no longer any ambiguity. We seek solutions of $$\gpgrade{ \lr{\delta A} \lr{ \grad F - J } }{0} = 0$$ for all variations $$\delta A$$, namely
\label{eqn:maxwellLagrangian:1620}
\boxed{
\grad F = J.
}

This is Maxwell’s equation in its coordinate free STA form, found using the variational principle from a coordinate free multivector Maxwell Lagrangian, without having to resort to a coordinate expansion of that Lagrangian.

## Lagrangian for fictitious magnetic sources.

The generalization of the Lagrangian to include magnetic charge and current densities can be as simple as utilizing two independent four-potential fields
\label{eqn:maxwellLagrangian:n}
\LL = \inv{2} \lr{ \grad \wedge A }^2 - A \cdot J + \alpha \lr{ \inv{2} \lr{ \grad \wedge K }^2 - K \cdot M },

where $$\alpha$$ is an arbitrary multivector constant.

Variation of this Lagrangian provides two independent equations
\label{eqn:maxwellLagrangian:1840}
\begin{aligned}
\grad F_{\mathrm{e}} &= J \\
\grad F_{\mathrm{m}} &= M.
\end{aligned}

We may add these, scaling the second by $$-I$$ (recall that $$I, \grad$$ anticommute), to find
\label{eqn:maxwellLagrangian:1860}
\grad \lr{ F_{\mathrm{e}} + I F_{\mathrm{m}} } = J - I M,

which is $$\grad F = J - I M$$, as desired.

It would be interesting to explore whether it is possible to find a Lagrangian that is dependent on a multivector potential, and that would yield $$\grad F = J - I M$$ directly, instead of requiring a superposition operation over the two independent solutions. One such possible potential is $$\tilde{A} = A - I K$$, for which $$F = \gpgradetwo{ \grad \tilde{A} } = \grad \wedge A + I \lr{ \grad \wedge K }$$. The author was not successful constructing such a Lagrangian.

## A better 3D generalization of the Mandelbrot set.

I’ve been exploring 3D generalizations of the Mandelbrot set:

The iterative equation for the Mandelbrot set can be written in vector form ([1]) as:

\begin{aligned}
\Bz
&\rightarrow
\Bz \Be_1 \Bz + \Bc \\
&=
\Bz \lr{ \Be_1 \cdot \Bz }
+
\Bz \cdot \lr{ \Be_1 \wedge \Bz }
+ \Bc \\
&=
2 \Bz \lr{ \Be_1 \cdot \Bz }
-
\Bz^2\, \Be_1
+ \Bc
\end{aligned}

Plotting this in 3D was an interesting challenge, but showed that the Mandelbrot set expressed above has rotational symmetry about the x-axis, which is kind of boring.
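
The iteration is easy to prototype numerically. Here is a minimal escape-time sketch (Python; the function and parameter names are my own, not from the repository mentioned below); for points in the x-y plane the components reduce to $$(a^2 - b^2, 2ab) + (c_x, c_y)$$, exactly the familiar complex map $$z \rightarrow z^2 + c$$:

```python
import numpy as np

def escape_time(c, max_iter=64, bailout=2.0):
    """Escape time of the vector iteration z -> 2 z (e1 . z) - z^2 e1 + c."""
    e1 = np.array([1.0, 0.0, 0.0])
    z = np.zeros(3)
    for n in range(max_iter):
        z = 2.0 * z * np.dot(e1, z) - np.dot(z, z) * e1 + c
        if np.dot(z, z) > bailout * bailout:
            return n
    return max_iter

print(escape_time(np.array([-1.0, 0.0, 0.0])))  # in the set: returns max_iter
```

The rotational symmetry about the x-axis also shows up numerically: points related by a rotation about $$\Be_1$$ have identical escape times.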

If all we require for a 3D fractal is to iterate a vector equation that is (presumably) at least quadratic, then we have lots of options. Here’s the first one that comes to mind:

\begin{aligned}
\Bz
&\rightarrow
\gpgradeone{ \Ba \Bz \Bb \Bz \Bc } + \Bd \\
&=
\lr{ \Ba \cdot \Bz } \lr{ \Bz \cross \lr{ \Bc \cross \Bz } }
+
\lr{ \Ba \cross \Bz } \lr{ \Bz \cdot \lr{ \Bc \cross \Bz } }
+ \Bd
.
\end{aligned}

where we iterate starting, as usual, with $$\Bz = 0$$, and where $$\Bd$$ is the point of interest to test for inclusion in the set. I tried this with
\label{eqn:mandel3:n}
\begin{aligned}
\Ba &= (1,1,1) \\
\Bb &= (1,0,0) \\
\Bc &= (1,-1,0).
\end{aligned}
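
For iterations like this one, it is handy to have a small numeric geometric product available, rather than trusting a hand expansion of $$\gpgradeone{ \Ba \Bz \Bb \Bz \Bc }$$. A minimal $$\mathbb{R}^3$$ geometric algebra sketch (basis blades as bitmasks, with the standard reordering-sign trick; all names here are my own):

```python
import numpy as np

def reorder_sign(a, b):
    # sign from reordering the basis vector factors of blades a, b (bitmask form)
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count('1')
        a >>= 1
    return -1.0 if s & 1 else 1.0

def gmul(x, y):
    # geometric product in R^3; a multivector is a length-8 coefficient array,
    # indexed by bitmask (bit k set means a factor of e_{k+1})
    out = np.zeros(8)
    for a in range(8):
        if x[a] != 0.0:
            for b in range(8):
                if y[b] != 0.0:
                    out[a ^ b] += reorder_sign(a, b) * x[a] * y[b]
    return out

def vec(v):
    m = np.zeros(8)
    m[1], m[2], m[4] = v                    # e1, e2, e3 live at bitmasks 1, 2, 4
    return m

def grade1(m):
    return np.array([m[1], m[2], m[4]])

def step(z, a, b, c, d):
    # z -> <a z b z c>_1 + d
    p = gmul(gmul(gmul(gmul(vec(a), vec(z)), vec(b)), vec(z)), vec(c))
    return grade1(p) + d
```

As a sanity check, $$\gpgradeone{\Bz \Be_1 \Bz}$$ computed this way can be compared against the expansion $$2 \Bz \lr{ \Be_1 \cdot \Bz } - \Bz^2 \Be_1$$ used for the Mandelbrot iteration above.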

Here are some slice plots at various values of z

and an animation of the slices with respect to the z-axis:

Here are a couple snapshots from a 3D Paraview rendering of a netCDF dataset of all the escape time values

Data collection and image creation used commit b042acf6ab7a5ba09865490b3f1fedaf0bd6e773 from my Mandelbrot generalization experimentation repository.

# References

[1] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

## Lorentz transformations in Space Time Algebra (STA)

## Motivation.

One of the remarkable features of geometric algebra is the complex exponential sandwich that can be used to encode rotations in any dimension, or rotation-like operations such as Lorentz transformations in Minkowski spaces. In this post, we show some examples that unpack the geometric algebra expressions for Lorentz transformations of this sort. In particular, we will look at the exponential sandwich operations for spatial rotations and Lorentz boosts in the Dirac algebra, known as Space Time Algebra (STA) in geometric algebra circles, and demonstrate that these sandwiches do have the desired effects.

## Theorem 1.1: Lorentz transformation.

The transformation
\label{eqn:lorentzTransform:580}
x \rightarrow e^{B} x e^{-B} = x',

where $$B = a \wedge b$$ is an STA 2-blade for any two linearly independent four-vectors $$a, b$$, is norm preserving, that is,
\label{eqn:lorentzTransform:600}
x^2 = {x'}^2.

### Start proof:

The proof is disturbingly trivial in this geometric algebra form
\label{eqn:lorentzTransform:40}
\begin{aligned}
{x'}^2
&=
e^{B} x e^{-B} e^{B} x e^{-B} \\
&=
e^{B} x x e^{-B} \\
&=
x^2 e^{B} e^{-B} \\
&=
x^2.
\end{aligned}

### End proof.

In particular, observe that we did not need to construct the usual infinitesimal representations of rotation and boost transformation matrices or tensors in order to demonstrate that we have spacetime invariance for the transformations. The rough idea of such a transformation is that the exponential commutes with components of the four-vector that lie off the spacetime plane specified by the bivector $$B$$, and anticommutes with components of the four-vector that lie in the plane. The end result is that the sandwich operation simplifies to
\label{eqn:lorentzTransform:60}
x' = x_\parallel e^{-2B} + x_\perp,

where $$x = x_\perp + x_\parallel$$ and $$x_\perp \cdot B = 0$$, and $$x_\parallel \wedge B = 0$$. In particular, using $$x = x B B^{-1} = \lr{ x \cdot B + x \wedge B } B^{-1}$$, we find that
\label{eqn:lorentzTransform:80}
\begin{aligned}
x_\parallel &= \lr{ x \cdot B } B^{-1} \\
x_\perp &= \lr{ x \wedge B } B^{-1}.
\end{aligned}

When $$B$$ is a spacetime plane $$B = b \wedge \gamma_0$$, then this exponential has a hyperbolic nature, and we end up with a Lorentz boost. When $$B$$ is a spatial bivector, we end up with a single complex exponential, encoding our plain old 3D rotation. More general $$B$$’s that encode composite boosts and rotations are also possible, but $$B$$ must be invertible (it should have no lightlike factors.) The rough geometry of these projections is illustrated in fig 1, where the spacetime plane is represented by $$B$$.

fig 1. Projection and rejection geometry.

What is not so obvious is how to pick $$B$$’s that correspond to specific rotation axes or boost directions. Let’s consider each of those cases in turn.

## Theorem 1.2: Boost.

The boost along a direction vector $$\vcap$$ with rapidity $$\alpha$$ is given by
\label{eqn:lorentzTransform:620}
x' = e^{-\vcap \alpha/2} x e^{\vcap \alpha/2},

where $$\vcap = \gamma_{k0} \cos\theta^k$$ is an STA bivector representing a spatial direction with direction cosines $$\cos\theta^k$$.

### Start proof:

We want to demonstrate that this is equivalent to the usual boost formulation. We can start with decomposition of the four-vector $$x$$ into components that lie in and off of the spacetime plane $$\vcap$$.
\label{eqn:lorentzTransform:100}
\begin{aligned}
x
&= \lr{ x^0 + \Bx } \gamma_0 \\
&= \lr{ x^0 + \Bx \vcap^2 } \gamma_0 \\
&= \lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap + \lr{ \Bx \wedge \vcap} \vcap } \gamma_0,
\end{aligned}

where $$\Bx = x \wedge \gamma_0$$. The first two components lie in the boost plane, whereas the last is the spatial component of the vector that lies perpendicular to the boost plane. Observe that $$\vcap$$ anticommutes with the dot product term and commutes with the wedge product term, so we have
\label{eqn:lorentzTransform:120}
\begin{aligned}
x'
&=
\lr{ x^0 + \lr{ \Bx \cdot \vcap } \vcap } \gamma_0
e^{\vcap \alpha/2 }
e^{\vcap \alpha/2 }
+
\lr{ \Bx \wedge \vcap } \vcap \gamma_0
e^{-\vcap \alpha/2 }
e^{\vcap \alpha/2 } \\
&=
\lr{ x^0 + \lr{ \Bx \cdot \vcap } \vcap } \gamma_0
e^{\vcap \alpha }
+
\lr{ \Bx \wedge \vcap } \vcap \gamma_0.
\end{aligned}

Noting that $$\vcap^2 = 1$$, we may expand the exponential in hyperbolic functions, and find that the boosted portion of the vector expands as
\label{eqn:lorentzTransform:260}
\begin{aligned}
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \gamma_0 e^{\vcap \alpha}
&=
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \gamma_0 \lr{ \cosh\alpha + \vcap \sinh \alpha} \\
&=
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \lr{ \cosh\alpha - \vcap \sinh \alpha} \gamma_0 \\
&=
\lr{ x^0 \cosh\alpha - \lr{ \Bx \cdot \vcap} \sinh \alpha} \gamma_0
+
\lr{ -x^0 \sinh \alpha + \lr{ \Bx \cdot \vcap} \cosh \alpha } \vcap \gamma_0.
\end{aligned}

We are left with
\label{eqn:lorentzTransform:320}
\begin{aligned}
x'
&=
\lr{ x^0 \cosh\alpha - \lr{ \Bx \cdot \vcap} \sinh \alpha} \gamma_0
+
\lr{ \lr{ \Bx \cdot \vcap} \cosh \alpha -x^0 \sinh \alpha } \vcap \gamma_0
+
\lr{ \Bx \wedge \vcap} \vcap \gamma_0 \\
&=
\begin{bmatrix}
\gamma_0 & \vcap \gamma_0
\end{bmatrix}
\begin{bmatrix}
\cosh\alpha & -\sinh\alpha \\
-\sinh\alpha & \cosh\alpha
\end{bmatrix}
\begin{bmatrix}
x^0 \\
\Bx \cdot \vcap
\end{bmatrix}
+
\lr{ \Bx \wedge \vcap} \vcap \gamma_0,
\end{aligned}

which has the desired Lorentz boost structure. Of course, this is usually seen with $$\vcap = \gamma_{10}$$ so that the components in the coordinate column vector are $$(ct, x)$$.
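
The matrix form above can be spot checked numerically. A small Python sketch (my own helper names) that applies the boost to the $$(x^0, \Bx \cdot \vcap)$$ components, and confirms that the spacetime norm $$x^2 = (x^0)^2 - \Bx^2$$ is preserved:

```python
import numpy as np

def boost(x, vhat, alpha):
    # x = (x^0, x^1, x^2, x^3); vhat: unit spatial boost direction (direction cosines)
    x0, xs = x[0], np.asarray(x[1:], dtype=float)
    xv = np.dot(xs, vhat)                  # component along the boost direction
    xperp = xs - xv * vhat                 # spatial part normal to the boost plane
    x0p = x0 * np.cosh(alpha) - xv * np.sinh(alpha)
    xvp = -x0 * np.sinh(alpha) + xv * np.cosh(alpha)
    return np.concatenate([[x0p], xvp * vhat + xperp])

def norm2(x):
    # Minkowski norm squared, (+,-,-,-) signature
    return x[0] ** 2 - np.dot(x[1:], x[1:])

x = np.array([1.0, 2.0, -0.5, 3.0])
vhat = np.array([1.0, 2.0, 2.0]) / 3.0     # a unit direction vector
xp = boost(x, vhat, 0.7)
print(norm2(xp) - norm2(x))                # ~0: the norm is preserved
```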

## Theorem 1.3: Spatial rotation.

Given two linearly independent spatial bivectors $$\Ba = a^k \gamma_{k0}, \Bb = b^k \gamma_{k0}$$, a rotation of $$\theta$$ radians in the plane of $$\Ba, \Bb$$ from $$\Ba$$ towards $$\Bb$$, is given by
\label{eqn:lorentzTransform:640}
x' = e^{-i\theta/2} x e^{i\theta/2},

where $$i = (\Ba \wedge \Bb)/\Abs{\Ba \wedge \Bb}$$, is a unit (spatial) bivector.

### Start proof:

Without loss of generality, we may pick $$i = \acap \bcap$$, where $$\acap^2 = \bcap^2 = 1$$, and $$\acap \cdot \bcap = 0$$. With such an orthonormal basis for the plane, we can decompose our four vector into portions that lie in and off the plane
\label{eqn:lorentzTransform:400}
\begin{aligned}
x
&= \lr{ x^0 + \Bx } \gamma_0 \\
&= \lr{ x^0 + \Bx i i^{-1} } \gamma_0 \\
&= \lr{ x^0 + \lr{ \Bx \cdot i } i^{-1} + \lr{ \Bx \wedge i } i^{-1} } \gamma_0.
\end{aligned}

The projective term lies in the plane of rotation, whereas the timelike and spatial rejection terms are perpendicular. That is
\label{eqn:lorentzTransform:420}
\begin{aligned}
x_\parallel &= \lr{ \Bx \cdot i } i^{-1} \gamma_0 \\
x_\perp &= \lr{ x^0 + \lr{ \Bx \wedge i } i^{-1} } \gamma_0,
\end{aligned}

where $$x_\parallel \wedge i = 0$$, and $$x_\perp \cdot i = 0$$. The plane pseudoscalar $$i$$ anticommutes with $$x_\parallel$$, and commutes with $$x_\perp$$, so
\label{eqn:lorentzTransform:440}
\begin{aligned}
x'
&= e^{-i\theta/2} \lr{ x_\parallel + x_\perp } e^{i\theta/2} \\
&= x_\parallel e^{i\theta} + x_\perp.
\end{aligned}

However
\label{eqn:lorentzTransform:460}
\begin{aligned}
\lr{ \Bx \cdot i } i^{-1}
&=
\lr{ \Bx \cdot \lr{ \acap \wedge \bcap } } \bcap \acap \\
&=
\lr{\Bx \cdot \acap} \bcap \bcap \acap
-\lr{\Bx \cdot \bcap} \acap \bcap \acap \\
&=
\lr{\Bx \cdot \acap} \acap
+\lr{\Bx \cdot \bcap} \bcap,
\end{aligned}

so
\label{eqn:lorentzTransform:480}
\begin{aligned}
x_\parallel e^{i\theta}
&=
\lr{
\lr{\Bx \cdot \acap} \acap
+
\lr{\Bx \cdot \bcap} \bcap
}
\gamma_0
\lr{
\cos\theta + \acap \bcap \sin\theta
} \\
&=
\acap \lr{
\lr{\Bx \cdot \acap} \cos\theta
-
\lr{\Bx \cdot \bcap} \sin\theta
}
\gamma_0
+
\bcap \lr{
\lr{\Bx \cdot \acap} \sin\theta
+
\lr{\Bx \cdot \bcap} \cos\theta
}
\gamma_0,
\end{aligned}

so
\label{eqn:lorentzTransform:500}
x'
=
\begin{bmatrix}
\acap & \bcap
\end{bmatrix}
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix}
\begin{bmatrix}
\Bx \cdot \acap \\
\Bx \cdot \bcap \\
\end{bmatrix}
\gamma_0
+
\lr{ x^0 + \lr{ \Bx \wedge i } i^{-1} } \gamma_0.

Observe that this rejection term can be explicitly expanded to
\label{eqn:lorentzTransform:520}
\lr{ x^0 + \lr{ \Bx \wedge i} i^{-1} } \gamma_0 =
x -
\lr{ \Bx \cdot \acap } \acap \gamma_0
-
\lr{ \Bx \cdot \bcap } \bcap \gamma_0.

This is the timelike component of the vector, plus the spatial component that is normal to the plane. This exponential sandwich transformation rotates only the portion of the vector that lies in the plane, and leaves the rest (timelike and normal) untouched.
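
The same kind of numeric spot check applies here. A sketch (my own names, assuming orthonormal spatial directions $$\acap, \bcap$$) that rotates the in-plane components and leaves the timelike and normal components untouched:

```python
import numpy as np

def rotate(x, ahat, bhat, theta):
    # rotate the spatial projection of x = (x^0, x^1, x^2, x^3) in the
    # plane of the orthonormal spatial vectors ahat, bhat, by theta radians
    xs = np.asarray(x[1:], dtype=float)
    xa, xb = np.dot(xs, ahat), np.dot(xs, bhat)
    rest = xs - xa * ahat - xb * bhat          # normal (rejected) spatial part
    xap = xa * np.cos(theta) - xb * np.sin(theta)
    xbp = xa * np.sin(theta) + xb * np.cos(theta)
    return np.concatenate([[x[0]], xap * ahat + xbp * bhat + rest])

x = np.array([1.5, 1.0, 2.0, 3.0])
ahat = np.array([1.0, 0.0, 0.0])
bhat = np.array([0.0, 1.0, 0.0])
xp = rotate(x, ahat, bhat, np.pi / 2)
print(xp)   # timelike (1.5) and normal (3.0) components unchanged
```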

## Problem: Verify components relative to boost direction.

In the proof of thm. 1.2, the vector $$x$$ was expanded in terms of the spacetime split. An alternate approach is to expand as
\label{eqn:lorentzTransform:340}
\begin{aligned}
x
&= x \vcap^2 \\
&= \lr{ x \cdot \vcap + x \wedge \vcap } \vcap \\
&= \lr{ x \cdot \vcap } \vcap + \lr{ x \wedge \vcap } \vcap.
\end{aligned}

Show that
\label{eqn:lorentzTransform:360}
\lr{ x \cdot \vcap } \vcap
=
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \gamma_0,

and
\label{eqn:lorentzTransform:380}
\lr{ x \wedge \vcap } \vcap
=
\lr{ \Bx \wedge \vcap} \vcap \gamma_0.

Let $$x = x^\mu \gamma_\mu$$, so that
\label{eqn:lorentzTransform:160}
\begin{aligned}
x \cdot \vcap
&=
\gpgradeone{ x^\mu \gamma_\mu \cos\theta^b \gamma_{b 0} } \\
&=
x^\mu \cos\theta^b \gpgradeone{ \gamma_\mu \gamma_{b 0} }
.
\end{aligned}

The $$\mu = 0$$ component of this grade selection is
\label{eqn:lorentzTransform:180}
\gpgradeone{ \gamma_0 \gamma_{b 0} }
=
-\gamma_b,

and for $$\mu = a \ne 0$$, we have
\label{eqn:lorentzTransform:200}
\gpgradeone{ \gamma_a \gamma_{b 0} }
=
-\delta_{a b} \gamma_0,

so we have
\label{eqn:lorentzTransform:220}
\begin{aligned}
x \cdot \vcap
&=
x^0 \cos\theta^b (-\gamma_b)
+
x^a \cos\theta^b (-\delta_{ab} \gamma_0 ) \\
&=
-x^0 \vcap \gamma_0
-
x^b \cos\theta^b \gamma_0 \\
&=
-\lr{ x^0 \vcap + \Bx \cdot \vcap } \gamma_0,
\end{aligned}

where $$\Bx = x \wedge \gamma_0$$ is the spatial portion of the four vector $$x$$ relative to the stationary observer frame. Since $$\vcap$$ anticommutes with $$\gamma_0$$, the component of $$x$$ in the spacetime plane $$\vcap$$ is
\label{eqn:lorentzTransform:240}
\lr{ x \cdot \vcap } \vcap =
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \gamma_0,

as expected.

For the rejection term, we have
\label{eqn:lorentzTransform:280}
x \wedge \vcap
=
x^\mu \cos\theta^s \gpgradethree{ \gamma_\mu \gamma_{s 0} }.

The $$\mu = 0$$ term clearly contributes nothing, leaving us with:
\label{eqn:lorentzTransform:300}
\begin{aligned}
\lr{ x \wedge \vcap } \vcap
&=
\lr{ x \wedge \vcap } \cdot \vcap \\
&=
x^r \cos\theta^s \cos\theta^t \lr{ \lr{ \gamma_r \wedge \gamma_{s}} \gamma_0 } \cdot \lr{ \gamma_{t0} } \\
&=
x^r \cos\theta^s \cos\theta^t \gpgradeone{
\lr{ \gamma_r \wedge \gamma_{s} } \gamma_0 \gamma_{t0}
} \\
&=
-x^r \cos\theta^s \cos\theta^t \lr{ \gamma_r \wedge \gamma_{s}} \cdot \gamma_t \\
&=
-x^r \cos\theta^s \cos\theta^t \lr{ -\gamma_r \delta_{st} + \gamma_s \delta_{rt} } \\
&=
x^r \cos\theta^t \cos\theta^t \gamma_r
-
x^t \cos\theta^s \cos\theta^t \gamma_s \\
&=
\Bx \gamma_0
- (\Bx \cdot \vcap) \vcap \gamma_0 \\
&=
\lr{ \Bx \wedge \vcap} \vcap \gamma_0,
\end{aligned}

as expected. Is there a clever way to demonstrate this without resorting to coordinates?

## Problem: Rotation transformation components.

Given a unit spatial bivector $$i = \acap \bcap$$, where $$\acap \cdot \bcap = 0$$ and $$i^2 = -1$$, show that
\label{eqn:lorentzTransform:540}
\lr{ x \cdot i } i^{-1}
=
\lr{ \Bx \cdot i } i^{-1} \gamma_0
=
\lr{\Bx \cdot \acap } \acap \gamma_0
+
\lr{\Bx \cdot \bcap } \bcap \gamma_0,

and
\label{eqn:lorentzTransform:560}
\lr{ x \wedge i } i^{-1}
=
x^0 \gamma_0 + \lr{ \Bx \wedge i } i^{-1} \gamma_0
=
x -
\lr{\Bx \cdot \acap } \acap \gamma_0
-
\lr{\Bx \cdot \bcap } \bcap \gamma_0.

Also show that $$i$$ anticommutes with $$\lr{ x \cdot i } i^{-1}$$ and commutes with $$\lr{ x \wedge i } i^{-1}$$.

This problem is left for the reader, as I don’t feel like typing out my solution.

The first part of this problem can be done in the tedious coordinate approach used above, but hopefully there is a better way.

For the last (commutation) part of the problem, here is a hint. Let $$x \wedge i = n i$$, where $$n \cdot i = 0$$. The result then follows easily.

## Potential solutions to the static Maxwell’s equation using geometric algebra

When neither the electromagnetic field strength $$F = \BE + I \eta \BH$$, nor the current $$J = \eta (c \rho - \BJ) + I(c\rho_m - \BM)$$, is a function of time, then the geometric algebra form of Maxwell’s equations is the first order multivector (gradient) equation
\label{eqn:staticPotentials:20}
\spacegrad F = J.

While direct solutions to this equation are possible with the multivector Green’s function for the gradient
\label{eqn:staticPotentials:40}
G(\Bx, \Bx') = \inv{4\pi} \frac{\Bx - \Bx'}{\Norm{\Bx - \Bx'}^3 },

the aim in this post is to explore second order (potential) solutions in a geometric algebra context. Can we assume that it is possible to find a multivector potential $$A$$ for which
\label{eqn:staticPotentials:60}
F = \spacegrad A,

is a solution to the Maxwell statics equation? If such a solution exists, then Maxwell’s equation is simply
\label{eqn:staticPotentials:80}
\spacegrad^2 A = J,

which can be easily solved using the scalar Green’s function for the Laplacian
\label{eqn:staticPotentials:240}
G(\Bx, \Bx') = -\inv{\Norm{\Bx - \Bx'} },

a beastie that may be easier to convolve than the vector valued Green’s function for the gradient.
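
As a quick numeric sanity check, this Green's function is harmonic away from the source point, which a finite-difference Laplacian confirms (a Python sketch with my own helper names; normalization constants are ignored here):

```python
import numpy as np

def G(x, xp):
    # scalar Green's function for the 3D Laplacian, up to normalization
    return -1.0 / np.linalg.norm(x - xp)

def laplacian(f, x, h=1e-3):
    # second order central-difference Laplacian
    acc = 0.0
    for k in range(3):
        e = np.zeros(3)
        e[k] = h
        acc += (f(x + e) - 2.0 * f(x) + f(x - e)) / h ** 2
    return acc

print(laplacian(lambda x: G(x, np.zeros(3)), np.array([1.0, 2.0, -0.5])))  # ~0 away from the source
```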

It is immediately clear that some restrictions must be imposed on the multivector potential $$A$$. In particular, since the field $$F$$ has only vector and bivector grades, this gradient must have no scalar, nor pseudoscalar grades. That is
\label{eqn:staticPotentials:100}
\gpgrade{ \spacegrad A }{0,3} = 0.

This constraint on the potential can be avoided if a grade selection operation is built directly into the assumed potential solution, requiring that the field is given by
\label{eqn:staticPotentials:120}
F = \gpgrade{ \spacegrad A }{1,2}.

However, after imposing such a constraint, Maxwell’s equation has a much less friendly form
\label{eqn:staticPotentials:140}
\spacegrad^2 A - \spacegrad \gpgrade{ \spacegrad A }{0,3} = J.

Luckily, it is possible to introduce a transformation of potentials, called a gauge transformation, that eliminates the ugly grade selection term, and allows the potential equation to be expressed as a plain old Laplacian. We do so by assuming first that it is possible to find a solution of the Laplacian equation that has the desired grade restrictions. That is
\label{eqn:staticPotentials:160}
\begin{aligned}
\spacegrad^2 A' &= J \\
\gpgrade{ \spacegrad A' }{0,3} &= 0,
\end{aligned}

for which $$F = \spacegrad A'$$ is a grade 1,2 solution to $$\spacegrad F = J$$. Suppose that $$A$$ is any formal solution, free of any grade restrictions, to $$\spacegrad^2 A = J$$, and that $$F = \gpgrade{\spacegrad A}{1,2}$$. Can we find a function $$\tilde{A}$$ for which $$A = A' + \tilde{A}$$?

Maxwell’s equation in terms of $$A$$ is
\label{eqn:staticPotentials:180}
\begin{aligned}
J
&= \spacegrad \gpgrade{ \spacegrad A }{1,2} \\
&= \spacegrad \lr{ \spacegrad A - \gpgrade{ \spacegrad A }{0,3} } \\
&= \spacegrad^2 \lr{ A' + \tilde{A} } - \spacegrad \gpgrade{ \spacegrad A }{0,3} \\
&= J + \spacegrad^2 \tilde{A} - \spacegrad \gpgrade{ \spacegrad A }{0,3},
\end{aligned}

or
\label{eqn:staticPotentials:200}
\spacegrad^2 \tilde{A} = \spacegrad \gpgrade{ \spacegrad A }{0,3}.

This is a non-homogeneous Laplacian equation that can be solved as is for $$\tilde{A}$$ using the Green’s function for the Laplacian. Alternatively, we may also solve the equivalent first order system using the Green’s function for the gradient.
\label{eqn:staticPotentials:220}
\spacegrad \tilde{A} = \gpgrade{ \spacegrad A }{0,3}.

Clearly $$\tilde{A}$$ is not unique, as we can add any function $$\psi$$ satisfying the homogeneous Laplacian equation $$\spacegrad^2 \psi = 0$$.

In summary, if $$A$$ is any multivector solution to $$\spacegrad^2 A = J$$, that is
\label{eqn:staticPotentials:260}
A(\Bx)
= \int dV' G(\Bx, \Bx') J(\Bx')
= -\int dV' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} },

then $$F = \spacegrad A'$$ is a solution to Maxwell’s equation, where $$A' = A - \tilde{A}$$, and $$\tilde{A}$$ is a solution to the non-homogeneous Laplacian equation or the non-homogeneous gradient equation above.

### Integral form of the gauge transformation.

Additional insight is possible by considering the gauge transformation in integral form. Suppose that
\label{eqn:staticPotentials:280}
A(\Bx) = -\int_V dV' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \tilde{A}(\Bx),

is a solution of $$\spacegrad^2 A = J$$, where $$\tilde{A}$$ is a multivector solution to the homogeneous Laplacian equation $$\spacegrad^2 \tilde{A} = 0$$. Let’s look at the constraints on $$\tilde{A}$$ that must be imposed for $$F = \spacegrad A$$ to be a valid (i.e. grade 1,2) solution of Maxwell’s equation.
\label{eqn:staticPotentials:300}
\begin{aligned}
F
&=
-\int_V dV' \lr{ \spacegrad \inv{\Norm{\Bx - \Bx'} } } J(\Bx') - \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \lr{ \spacegrad' \inv{\Norm{\Bx - \Bx'} } } J(\Bx') - \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \spacegrad' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} } - \spacegrad \tilde{A}(\Bx) \\
&=
\int_{\partial V} dA' \ncap' \frac{J(\Bx')}{\Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{\Norm{\Bx - \Bx'} } - \spacegrad \tilde{A}(\Bx).
\end{aligned}

where $$\ncap' = (\Bx' - \Bx)/\Norm{\Bx' - \Bx}$$, and the fundamental theorem of geometric calculus has been used to transform the gradient volume integral into an integral over the bounding surface. Operating on Maxwell’s equation with the gradient gives $$\spacegrad^2 F = \spacegrad J$$, which has only grades 1,2 on the left hand side, meaning that $$J$$ is constrained in a way that requires $$\spacegrad J$$ to have only grades 1,2. This means that $$F$$ has grades 1,2 if
\label{eqn:staticPotentials:320}
\spacegrad \tilde{A}
= \int_{\partial V} dA' \frac{ \gpgrade{\ncap' J(\Bx')}{0,3} }{\Norm{\Bx - \Bx'} }.

The grade 0,3 component of the product $$\ncap J$$ expands to
\label{eqn:staticPotentials:340}
\begin{aligned}
\gpgrade{\ncap J}{0,3}
&=
\gpgrade{\ncap \lr{ \eta \lr{ c \rho - \BJ } + I \lr{ c \rho_\txtm - \BM } } }{0,3} \\
&=
\ncap \cdot \lr{ -\eta \BJ } + \gpgradethree{\ncap \lr{ -I \BM } } \\
&=
-\eta \ncap \cdot \BJ - I \ncap \cdot \BM,
\end{aligned}

so
\label{eqn:staticPotentials:360}
\spacegrad \tilde{A}
=
-\int_{\partial V} dA' \frac{ \eta \ncap' \cdot \BJ(\Bx') + I \ncap' \cdot \BM(\Bx')}{\Norm{\Bx - \Bx'} }.

Observe that if there is no flux of current density $$\BJ$$ and (fictitious) magnetic current density $$\BM$$ through the surface, then $$F = \spacegrad A$$ is a solution to Maxwell’s equation without any gauge transformation. Alternatively, $$F = \spacegrad A$$ is also a solution if $$\lim_{\Bx' \rightarrow \infty} \BJ(\Bx')/\Norm{\Bx - \Bx'} = \lim_{\Bx' \rightarrow \infty} \BM(\Bx')/\Norm{\Bx - \Bx'} = 0$$ and the bounding volume is taken to infinity.


## Generalizing Ampere’s law using geometric algebra.

The question I’d like to explore in this post is how Ampere’s law, the relationship between the line integral of the magnetic field and the enclosed current
\label{eqn:flux:20}
\oint_{\partial A} d\Bx \cdot \BH = -\int_A dA \ncap \cdot \BJ,

generalizes to geometric algebra, where Maxwell’s equation for a statics configuration (all time derivatives zero) is
\label{eqn:flux:40}
\spacegrad F = J,
where the multivector fields and currents are
\label{eqn:flux:60}
\begin{aligned}
F &= \BE + I \eta \BH \\
J &= \eta \lr{ c \rho – \BJ } + I \lr{ c \rho_\txtm – \BM }.
\end{aligned}

Here the (fictitious) magnetic charge and current densities that can be useful in antenna theory have been included in the multivector current for generality.

My presumption is that it should be possible to utilize the fundamental theorem of geometric calculus, which relates the integral over an oriented surface to an integral over its boundary, applied directly to Maxwell’s equation. That integral theorem has the form
\label{eqn:flux:80}
\int_A d^2 \Bx \boldpartial F = \oint_{\partial A} d\Bx F,

where $$d^2 \Bx = d\Ba \wedge d\Bb$$ is a two-parameter bivector-valued surface element, and $$\boldpartial$$ is the vector derivative, the projection of the gradient onto the tangent space. I won’t try to explain all of geometric calculus here, and refer the interested reader to [1], which is an excellent reference on geometric calculus and integration theory.
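For a scalar field on a flat region this theorem reduces to Green’s theorem, which makes for an easy numerical sanity check. The sketch below verifies the two-dimensional instance on the unit square. Note that conventions for pairing the orientation of $$d^2 \Bx$$ with the boundary traversal direction vary between references; the pairing used here (counterclockwise boundary with $$d^2 \Bx = \Be_2 \Be_1 \, dx \, dy$$) was fixed by requiring the identity to hold for this known case, and the field $$f = x y$$ is an arbitrary choice:

```python
# Check int_A d^2x (e1 f_x + e2 f_y) = oint dx f on the unit square.
# With d^2x = e2 e1 dx dy, and using e2e1 e1 = e2, e2e1 e2 = -e1,
# the surface integrand has components (-f_y, f_x) along (e1, e2).

N = 400
h = 1.0 / N
f  = lambda x, y: x * y
fx = lambda x, y: y        # exact partial df/dx
fy = lambda x, y: x        # exact partial df/dy

# surface integral, midpoint rule (exact for this linear integrand)
s1 = s2 = 0.0
for i in range(N):
    for j in range(N):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        s1 += -fy(x, y) * h * h      # e1 component
        s2 += fx(x, y) * h * h       # e2 component

# boundary integral oint dx f, counterclockwise: bottom, right, top, left
b1 = b2 = 0.0
for k in range(N):
    t = (k + 0.5) * h
    b1 += f(t, 0.0) * h              # bottom: dl = +e1 h
    b2 += f(1.0, t) * h              # right:  dl = +e2 h
    b1 -= f(t, 1.0) * h              # top:    dl = -e1 h
    b2 -= f(0.0, t) * h              # left:   dl = -e2 h

assert abs(s1 - b1) < 1e-9
assert abs(s2 - b2) < 1e-9
```

Both components of the directed surface integral match the directed line integral, as expected.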

The gotcha is that we actually want a surface integral with $$\spacegrad F$$. We can split the gradient into the vector derivative and a normal component
\label{eqn:flux:160}
\spacegrad = \boldpartial + \ncap \lr{ \ncap \cdot \spacegrad },
so
\label{eqn:flux:100}
\int_A d^2 \Bx \spacegrad F
=
\int_A d^2 \Bx \boldpartial F
+
\int_A d^2 \Bx \ncap \lr{ \ncap \cdot \spacegrad } F,

so
\label{eqn:flux:120}
\begin{aligned}
\oint_{\partial A} d\Bx F
&=
\int_A d^2 \Bx \lr{ J – \ncap \lr{ \ncap \cdot \spacegrad } F } \\
&=
\int_A dA I \ncap \lr{ J – \ncap \lr{ \ncap \cdot \spacegrad } F } \\
&=
\int_A dA \lr{ I \ncap J – \lr{ \ncap \cdot \spacegrad } I F },
\end{aligned}

where the second step uses $$d^2 \Bx = I \ncap dA$$, and the last uses $$\ncap^2 = 1$$ and the fact that the three-dimensional pseudoscalar $$I$$ commutes with all multivectors.

This is not nearly as nice as the magnetic flux relationship, where the current and field contributions were cleanly separated. The $$d\Bx F$$ product has all possible grades, as does the $$d^2 \Bx J$$ product (in general). Observe, however, that the normal term on the right has only grades 1,2, so we can split our line integral relations into a pair of equations, one for grades 0,3 and one for grades 1,2
\label{eqn:flux:140}
\begin{aligned}
\oint_{\partial A} \gpgrade{d\Bx F}{0,3}
&=
\int_A dA \gpgrade{ I \ncap J }{0,3} \\
\oint_{\partial A} \gpgrade{d\Bx F}{1,2}
&=
\int_A dA \lr{ \gpgrade{ I \ncap J }{1,2} – \lr{ \ncap \cdot \spacegrad } I F }.
\end{aligned}

Let’s expand these explicitly in terms of the component fields and densities to check against the conventional relationships, and see if things look right. The line integrand expands to
\label{eqn:flux:180}
\begin{aligned}
d\Bx F
&=
d\Bx \lr{ \BE + I \eta \BH }
=
d\Bx \cdot \BE + I \eta d\Bx \cdot \BH
+
d\Bx \wedge \BE + I \eta d\Bx \wedge \BH \\
&=
d\Bx \cdot \BE
– \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
+ I \eta (d\Bx \cdot \BH),
\end{aligned}

the current integrand expands to
\label{eqn:flux:200}
\begin{aligned}
I \ncap J
&=
I \ncap
\lr{
\frac{\rho}{\epsilon} – \eta \BJ + I \lr{ c \rho_\txtm – \BM }
} \\
&=
\ncap I \frac{\rho}{\epsilon} – \eta \ncap I \BJ – \ncap c \rho_\txtm + \ncap \BM \\
&=
\ncap \cdot \BM
+ \eta (\ncap \cross \BJ)
– \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
– \eta I (\ncap \cdot \BJ).
\end{aligned}
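Both of these expansions can be checked numerically. The sketch below uses a tiny throwaway $$Cl(3,0)$$ implementation (the `bmul`, `gp`, and related helper names are ad hoc, not from any library; the constants and field values are arbitrary), verifying the $$d\Bx F$$ and $$I \ncap J$$ expansions term by term:

```python
import numpy as np

# Minimal Euclidean Cl(3,0): a multivector is a dict {blade: coeff}, with
# blades as sorted index tuples, e.g. () scalar, (0,) = e1, (0, 1, 2) = I.

def bmul(a, b):
    """Product of two basis blades; returns (sign, blade)."""
    lst, sign, i = list(a) + list(b), 1, 0
    while i < len(lst) - 1:
        if lst[i] == lst[i + 1]:          # e_i e_i = +1
            del lst[i:i + 2]; i = max(i - 1, 0)
        elif lst[i] > lst[i + 1]:         # e_j e_i = -e_i e_j
            lst[i], lst[i + 1] = lst[i + 1], lst[i]; sign, i = -sign, max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(lst)

def gp(x, y):
    """Geometric product of two multivectors."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, blade = bmul(ba, bb)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return out

def add(*xs):
    out = {}
    for x in xs:
        for b, c in x.items():
            out[b] = out.get(b, 0.0) + c
    return out

smul = lambda s, x: {b: s * c for b, c in x.items()}
vec = lambda v: {(0,): v[0], (1,): v[1], (2,): v[2]}
approx = lambda x, y: all(abs(x.get(b, 0.0) - y.get(b, 0.0)) < 1e-12
                          for b in set(x) | set(y))
I = {(0, 1, 2): 1.0}

rng = np.random.default_rng(2)
eta, c = 1.7, 2.5
eps = 1.0 / (eta * c)
E, H = rng.normal(size=3), rng.normal(size=3)
a = rng.normal(size=3)                          # stand-in for the line element dx

# dx F = dx.E - eta (dx x H) + I (dx x E) + I eta (dx.H)
F = add(vec(E), smul(eta, gp(I, vec(H))))       # F = E + I eta H
lhs = gp(vec(a), F)
rhs = add({(): np.dot(a, E)},
          vec(-eta * np.cross(a, H)),
          gp(I, vec(np.cross(a, E))),
          smul(eta * np.dot(a, H), I))
assert approx(lhs, rhs)

# I n J = n.M + eta (n x J) - c rho_m n + I (n x M) + (rho/eps) I n - eta I (n.J)
n = rng.normal(size=3); n /= np.linalg.norm(n)
Jv, Mv = rng.normal(size=3), rng.normal(size=3)
rho, rho_m = rng.normal(), rng.normal()
Jmv = add({(): rho / eps}, smul(-eta, vec(Jv)),
          smul(c * rho_m, I), smul(-1.0, gp(I, vec(Mv))))
lhs2 = gp(I, gp(vec(n), Jmv))
rhs2 = add({(): np.dot(n, Mv)},
           vec(eta * np.cross(n, Jv)),
           vec(-c * rho_m * n),
           gp(I, vec(np.cross(n, Mv))),
           smul(rho / eps, gp(I, vec(n))),
           smul(-eta * np.dot(n, Jv), I))
assert approx(lhs2, rhs2)
```

Both assertions pass for random inputs, so each of the six terms in the current integrand expansion has the right grade and sign.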

We are left with
\label{eqn:flux:220}
\begin{aligned}
\oint_{\partial A}
\lr{
d\Bx \cdot \BE + I \eta (d\Bx \cdot \BH)
}
&=
\int_A dA
\lr{
\ncap \cdot \BM – \eta I (\ncap \cdot \BJ)
} \\
\oint_{\partial A}
\lr{
– \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
}
&=
\int_A dA
\lr{
\eta (\ncap \cross \BJ)
– \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
-\PD{n}{} \lr{ I \BE – \eta \BH }
}.
\end{aligned}

This is a crazy mess of dots, crosses, fields and sources. We can split it into one equation for each grade, which will probably look a little more regular. That is
\label{eqn:flux:240}
\begin{aligned}
\oint_{\partial A} d\Bx \cdot \BE &= \int_A dA \ncap \cdot \BM \\
\oint_{\partial A} d\Bx \cross \BH
&=
\int_A dA
\lr{
– \ncap \cross \BJ
+ \frac{ \ncap \rho_\txtm }{\mu}
– \PD{n}{\BH}
} \\
\oint_{\partial A} d\Bx \cross \BE &=
\int_A dA
\lr{
\ncap \cross \BM
+ \frac{\ncap \rho}{\epsilon}
– \PD{n}{\BE}
} \\
\oint_{\partial A} d\Bx \cdot \BH &= -\int_A dA \ncap \cdot \BJ
\end{aligned}

The first and last equations could have been obtained much more easily from Maxwell’s equations in their conventional form. The two cross product equations with the normal derivatives are not familiar to me, even without the fictitious magnetic sources. It is somewhat remarkable that so much can be packed into one multivector equation:
\label{eqn:flux:260}
\oint_{\partial A} d\Bx F
=
I \int_A dA \lr{ \ncap J – \PD{n}{F} }.

# References

[1] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.