
Maxwell’s equation Lagrangian (geometric algebra and tensor formalism)

November 1, 2020 math and physics play


Maxwell’s equation using geometric algebra Lagrangian.

Motivation.

In my classical mechanics notes, I’ve got computations of Maxwell’s equation (singular, in its geometric algebra form) from a Lagrangian in various ways (using tensor, scalar and multivector Lagrangians), but all of these seem more convoluted than they should be.
Here we do this from scratch, starting with the action principle for field variables, covering:

  • Derivation of the relativistic form of the Euler-Lagrange field equations from the covariant form of the action,
  • Derivation of Maxwell’s equation (in its STA form) from the Maxwell Lagrangian,
  • Relationship of the STA Maxwell Lagrangian to the tensor equivalent,
  • Relationship of the STA form of Maxwell’s equation to its tensor equivalents,
  • Relationship of the STA Maxwell’s equation to its conventional Gibbs form,
  • Demonstration that we may use a multivector valued Lagrangian with all of \( F^2 \), not just the scalar part.

It is assumed that the reader is thoroughly familiar with the STA formalism, and if that is not the case, there is no better reference than [1].

Field action.

Theorem 1.1: Relativistic Euler-Lagrange field equations.

Let \( \phi \rightarrow \phi + \delta \phi \) be any variation of the field, such that the variation
\( \delta \phi \) vanishes at the boundaries of the action integral
\begin{equation}\label{eqn:maxwells:2120}
S = \int d^4 x \LL(\phi, \partial_\nu \phi).
\end{equation}
The extreme value of the action is found when the Euler-Lagrange equations
\begin{equation}\label{eqn:maxwells:2140}
0 = \PD{\phi}{\LL} – \partial_\nu \PD{(\partial_\nu \phi)}{\LL},
\end{equation}
are satisfied. For a Lagrangian with multiple field variables, there will be one such equation for each field.

Start proof:

To ease the visual burden, designate the variation of the field by \( \delta \phi = \epsilon \), and perform a first order expansion of the varied Lagrangian
\begin{equation}\label{eqn:maxwells:20}
\begin{aligned}
\LL
&\rightarrow
\LL(\phi + \epsilon, \partial_\nu (\phi + \epsilon)) \\
&=
\LL(\phi, \partial_\nu \phi)
+
\PD{\phi}{\LL} \epsilon +
\PD{(\partial_\nu \phi)}{\LL} \partial_\nu \epsilon.
\end{aligned}
\end{equation}
The variation of the Lagrangian is
\begin{equation}\label{eqn:maxwells:40}
\begin{aligned}
\delta \LL
&=
\PD{\phi}{\LL} \epsilon +
\PD{(\partial_\nu \phi)}{\LL} \partial_\nu \epsilon \\
&=
\PD{\phi}{\LL} \epsilon +
\partial_\nu \lr{ \PD{(\partial_\nu \phi)}{\LL} \epsilon }
-
\epsilon \partial_\nu \PD{(\partial_\nu \phi)}{\LL},
\end{aligned}
\end{equation}
which we may plug into the action integral to find
\begin{equation}\label{eqn:maxwells:60}
\delta S
=
\int d^4 x \epsilon \lr{
\PD{\phi}{\LL}
-
\partial_\nu \PD{(\partial_\nu \phi)}{\LL}
}
+
\int d^4 x
\partial_\nu \lr{ \PD{(\partial_\nu \phi)}{\LL} \epsilon }.
\end{equation}
The last integral can be evaluated along the \( dx^\nu \) direction, leaving
\begin{equation}\label{eqn:maxwells:80}
\int d^3 x
\evalbar{ \PD{(\partial_\nu \phi)}{\LL} \epsilon }{\Delta x^\nu},
\end{equation}
where \( d^3 x = dx^\alpha dx^\beta dx^\gamma \) is the product of differentials that does not include \( dx^\nu \). By construction, \( \epsilon \) vanishes on the boundary of the action integral so \ref{eqn:maxwells:80} is zero. The action takes its extreme value when
\begin{equation}\label{eqn:maxwells:100}
0 = \delta S
=
\int d^4 x \epsilon \lr{
\PD{\phi}{\LL}
-
\partial_\nu \PD{(\partial_\nu \phi)}{\LL}
}.
\end{equation}
The proof is complete after noting that this must hold for all variations of the field \( \epsilon \), which means that we must have
\begin{equation}\label{eqn:maxwells:120}
0 =
\PD{\phi}{\LL}
-
\partial_\nu \PD{(\partial_\nu \phi)}{\LL}.
\end{equation}

End proof.
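
As a quick sanity check of the theorem (independent of the Maxwell application below), consider a free scalar field with the Klein-Gordon Lagrangian
\begin{equation*}
\LL = \inv{2} \partial_\nu \phi \partial^\nu \phi - \inv{2} m^2 \phi^2,
\end{equation*}
for which \( \PD{\phi}{\LL} = -m^2 \phi \) and \( \PD{(\partial_\nu \phi)}{\LL} = \partial^\nu \phi \), so the Euler-Lagrange equations give the Klein-Gordon equation
\begin{equation*}
0 = -m^2 \phi - \partial_\nu \partial^\nu \phi.
\end{equation*}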

Armed with the Euler-Lagrange equations, we can apply them to the Maxwell’s equation Lagrangian, which we will claim has the following form.

Theorem 1.2: Maxwell’s equation Lagrangian.

Application of the Euler-Lagrange equations to the Lagrangian
\begin{equation}\label{eqn:maxwells:2160}
\LL = – \frac{\epsilon_0 c}{2} F \cdot F + J \cdot A,
\end{equation}
where \( F = \grad \wedge A \), yields the vector portion of Maxwell’s equation
\begin{equation}\label{eqn:maxwells:2180}
\grad \cdot F = \inv{\epsilon_0 c} J,
\end{equation}
which implies
\begin{equation}\label{eqn:maxwells:2200}
\grad F = \inv{\epsilon_0 c} J.
\end{equation}
This is Maxwell’s equation.

Start proof:

We wish to apply all of the Euler-Lagrange equations simultaneously (i.e. once for each of the four \(A_\mu\) components of the potential), and to cast them into four-vector form
\begin{equation}\label{eqn:maxwells:140}
0 = \gamma_\nu \lr{ \PD{A_\nu}{} – \partial_\mu \PD{(\partial_\mu A_\nu)}{} } \LL.
\end{equation}
Since our Lagrangian splits nicely into kinetic and interaction terms, this gives us
\begin{equation}\label{eqn:maxwells:160}
0 = \gamma_\nu \lr{ \PD{A_\nu}{(A \cdot J)} + \frac{\epsilon_0 c}{2} \partial_\mu \PD{(\partial_\mu A_\nu)}{ (F \cdot F)} }.
\end{equation}
The interaction term above is just
\begin{equation}\label{eqn:maxwells:180}
\gamma_\nu \PD{A_\nu}{(A \cdot J)}
=
\gamma_\nu \PD{A_\nu}{(A_\mu J^\mu)}
=
\gamma_\nu J^\nu
=
J,
\end{equation}
but the kinetic term takes a bit more work. Let’s start with evaluating
\begin{equation}\label{eqn:maxwells:200}
\begin{aligned}
\PD{(\partial_\mu A_\nu)}{ (F \cdot F)}
&=
\PD{(\partial_\mu A_\nu)}{ F } \cdot F
+
F \cdot \PD{(\partial_\mu A_\nu)}{ F } \\
&=
2 \PD{(\partial_\mu A_\nu)}{ F } \cdot F \\
&=
2 \PD{(\partial_\mu A_\nu)}{ (\partial_\alpha A_\beta) } \lr{ \gamma^\alpha \wedge \gamma^\beta } \cdot F \\
&=
2 \lr{ \gamma^\mu \wedge \gamma^\nu } \cdot F.
\end{aligned}
\end{equation}
We hit this with the \(\mu\)-partial and expand as a scalar selection to find
\begin{equation}\label{eqn:maxwells:220}
\begin{aligned}
\partial_\mu \PD{(\partial_\mu A_\nu)}{ (F \cdot F)}
&=
2 \lr{ \partial_\mu \gamma^\mu \wedge \gamma^\nu } \cdot F \\
&=
– 2 (\gamma^\nu \wedge \grad) \cdot F \\
&=
– 2 \gpgradezero{ (\gamma^\nu \wedge \grad) F } \\
&=
– 2 \gpgradezero{ \gamma^\nu \grad F – \gamma^\nu \cdot \grad F } \\
&=
– 2 \gamma^\nu \cdot \lr{ \grad \cdot F }.
\end{aligned}
\end{equation}
Putting all the pieces together yields
\begin{equation}\label{eqn:maxwells:240}
0
= J – \epsilon_0 c \gamma_\nu \lr{ \gamma^\nu \cdot \lr{ \grad \cdot F } }
= J – \epsilon_0 c \lr{ \grad \cdot F },
\end{equation}
but
\begin{equation}\label{eqn:maxwells:260}
\begin{aligned}
\grad \cdot F
&=
\grad F – \grad \wedge F \\
&=
\grad F – \grad \wedge (\grad \wedge A) \\
&=
\grad F,
\end{aligned}
\end{equation}
so the multivector field equations for this Lagrangian are
\begin{equation}\label{eqn:maxwells:280}
\grad F = \inv{\epsilon_0 c} J,
\end{equation}
as claimed.

End proof.
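
As an aside (an immediate consequence of the theorem, not part of the proof above), since \( F = \grad \wedge A = \grad A - \grad \cdot A \), the multivector Maxwell’s equation may also be written in terms of the potential as
\begin{equation*}
\grad^2 A - \grad \lr{ \grad \cdot A } = \inv{\epsilon_0 c} J,
\end{equation*}
which reduces to a wave equation \( \grad^2 A = J/(\epsilon_0 c) \) if the Lorenz gauge condition \( \grad \cdot A = 0 \) is imposed.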

Problem: Correspondence with tensor formalism.

Cast the Lagrangian of \ref{eqn:maxwells:2160} into the conventional tensor form
\begin{equation}\label{eqn:maxwells:300}
\LL = \frac{\epsilon_0 c}{4} F_{\mu\nu} F^{\mu\nu} + A^\mu J_\mu.
\end{equation}
Also show that the four-vector component of Maxwell’s equation \( \grad \cdot F = J/(\epsilon_0 c) \) is equivalent to the conventional tensor form of the Gauss-Ampere law
\begin{equation}\label{eqn:maxwells:320}
\partial_\mu F^{\mu\nu} = \inv{\epsilon_0 c} J^\nu,
\end{equation}
where \( F^{\mu\nu} = \partial^\mu A^\nu – \partial^\nu A^\mu \) as usual. Also show that the trivector component of Maxwell’s equation \( \grad \wedge F = 0 \) is equivalent to the tensor form of the Gauss-Faraday law
\begin{equation}\label{eqn:maxwells:340}
\partial_\alpha \lr{ \epsilon^{\alpha \beta \mu \nu} F_{\mu\nu} } = 0.
\end{equation}

Answer

To show the Lagrangian correspondence we must expand \( F \cdot F \) in coordinates
\begin{equation}\label{eqn:maxwells:360}
\begin{aligned}
F \cdot F
&=
( \grad \wedge A ) \cdot
( \grad \wedge A ) \\
&=
\lr{ (\gamma^\mu \partial_\mu) \wedge (\gamma^\nu A_\nu) }
\cdot
\lr{ (\gamma^\alpha \partial_\alpha) \wedge (\gamma^\beta A_\beta) } \\
&=
\lr{ \gamma^\mu \wedge \gamma^\nu } \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta }
(\partial_\mu A_\nu )
(\partial^\alpha A^\beta ) \\
&=
\lr{
{\delta^\mu}_\beta
{\delta^\nu}_\alpha
-
{\delta^\mu}_\alpha
{\delta^\nu}_\beta
}
(\partial_\mu A_\nu )
(\partial^\alpha A^\beta ) \\
&=
– \partial_\mu A_\nu \lr{
\partial^\mu A^\nu
-
\partial^\nu A^\mu
} \\
&=
– \partial_\mu A_\nu F^{\mu\nu} \\
&=
– \inv{2} \lr{
\partial_\mu A_\nu F^{\mu\nu}
+
\partial_\nu A_\mu F^{\nu\mu}
} \\
&=
– \inv{2} \lr{
\partial_\mu A_\nu
-
\partial_\nu A_\mu
}
F^{\mu\nu} \\
&=
-
\inv{2}
F_{\mu\nu}
F^{\mu\nu}.
\end{aligned}
\end{equation}
With a substitution of this and \( A \cdot J = A_\mu J^\mu \) back into the Lagrangian, we recover the tensor form of the Lagrangian.
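
Explicitly, that substitution gives
\begin{equation*}
\LL
= -\frac{\epsilon_0 c}{2} \lr{ -\inv{2} F_{\mu\nu} F^{\mu\nu} } + A_\mu J^\mu
= \frac{\epsilon_0 c}{4} F_{\mu\nu} F^{\mu\nu} + A^\mu J_\mu,
\end{equation*}
matching \ref{eqn:maxwells:300}.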

To recover the tensor form of Maxwell’s equation, we first split it into vector and trivector parts
\begin{equation}\label{eqn:maxwells:1580}
\grad \cdot F + \grad \wedge F = \inv{\epsilon_0 c} J.
\end{equation}
Now the vector component may be expanded in coordinates by dotting both sides with \( \gamma^\nu \) to find
\begin{equation}\label{eqn:maxwells:1600}
\inv{\epsilon_0 c} \gamma^\nu \cdot J = \inv{\epsilon_0 c} J^\nu,
\end{equation}
and
\begin{equation}\label{eqn:maxwells:1620}
\begin{aligned}
\gamma^\nu \cdot
\lr{ \grad \cdot F }
&=
\partial_\mu \gamma^\nu \cdot \lr{ \gamma^\mu \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta } \partial^\alpha A^\beta } \\
&=
\lr{
{\delta^\mu}_\alpha
{\delta^\nu}_\beta
-
{\delta^\nu}_\alpha
{\delta^\mu}_\beta
}
\partial_\mu
\partial^\alpha A^\beta \\
&=
\partial_\mu
\lr{
\partial^\mu A^\nu
-
\partial^\nu A^\mu
} \\
&=
\partial_\mu F^{\mu\nu}.
\end{aligned}
\end{equation}
Equating \ref{eqn:maxwells:1600} and \ref{eqn:maxwells:1620} finishes the first part of the job. For the trivector component, we have
\begin{equation}\label{eqn:maxwells:1640}
0
= \grad \wedge F
= (\gamma^\mu \partial_\mu) \wedge \lr{ \gamma^\alpha \wedge \gamma^\beta } \partial_\alpha A_\beta
= \inv{2} (\gamma^\mu \partial_\mu) \wedge \lr{ \gamma^\alpha \wedge \gamma^\beta } F_{\alpha \beta}.
\end{equation}
Wedging with \( \gamma^\tau \) and then multiplying by \( -2 I \) we find
\begin{equation}\label{eqn:maxwells:1660}
0 = – \lr{ \gamma^\mu \wedge \gamma^\alpha \wedge \gamma^\beta \wedge \gamma^\tau } I \partial_\mu F_{\alpha \beta},
\end{equation}
but
\begin{equation}\label{eqn:maxwells:1680}
\gamma^\mu \wedge \gamma^\alpha \wedge \gamma^\beta \wedge \gamma^\tau = -I \epsilon^{\mu \alpha \beta \tau},
\end{equation}
which leaves us with
\begin{equation}\label{eqn:maxwells:1700}
\epsilon^{\mu \alpha \beta \tau} \partial_\mu F_{\alpha \beta} = 0,
\end{equation}
as expected.
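
For reference, \ref{eqn:maxwells:1700} is equivalent to the familiar cyclic (Bianchi identity) form of the Gauss-Faraday law, since contraction with the fully antisymmetric \( \epsilon^{\mu \alpha \beta \tau} \) antisymmetrizes the derivatives
\begin{equation*}
\partial_\mu F_{\alpha\beta} + \partial_\alpha F_{\beta\mu} + \partial_\beta F_{\mu\alpha} = 0.
\end{equation*}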

Problem: Correspondence of tensor and Gibbs forms of Maxwell’s equations.

Given the identifications

\begin{equation}\label{eqn:lorentzForceCovariant:1500}
F^{k0} = E^k,
\end{equation}
and
\begin{equation}\label{eqn:lorentzForceCovariant:1520}
F^{rs} = -\epsilon^{rst} B^t,
\end{equation}
and
\begin{equation}\label{eqn:maxwells:1560}
J^\mu = \lr{ c \rho, \BJ },
\end{equation}
the reader should satisfy themselves that the traditional Gibbs form of Maxwell’s equations can be recovered from \ref{eqn:maxwells:320}.

Answer

The reader is referred to Exercise 3.4 “Electrodynamics, variational principle.” from [2].

Problem: Correspondence with grad and curl form of Maxwell’s equations.

With \( J = c \rho \gamma_0 + J^k \gamma_k \) and \( F = \BE + I c \BB \), show that Maxwell’s equation, as stated in \ref{eqn:maxwells:2200}, expands to the conventional div and curl expressions for Maxwell’s equations.

Answer

To obtain Maxwell’s equations in their traditional vector forms, we pre-multiply both sides with \( \gamma_0 \)
\begin{equation}\label{eqn:maxwells:1720}
\gamma_0 \grad F = \inv{\epsilon_0 c} \gamma_0 J,
\end{equation}
and then select each grade separately. First observe that the RHS above has scalar and bivector components, as
\begin{equation}\label{eqn:maxwells:1740}
\gamma_0 J
=
c \rho + J^k \gamma_0 \gamma_k.
\end{equation}
In terms of the spatial bivector basis \( \Be_k = \gamma_k \gamma_0 \), the RHS of \ref{eqn:maxwells:1720} is
\begin{equation}\label{eqn:maxwells:1760}
\gamma_0 \frac{J}{\epsilon_0 c} = \frac{\rho}{\epsilon_0} – \mu_0 c \BJ.
\end{equation}
For the LHS, first note that
\begin{equation}\label{eqn:maxwells:1780}
\begin{aligned}
\gamma_0 \grad
&=
\gamma_0
\lr{
\gamma_0 \partial^0 +
\gamma_k \partial^k
} \\
&=
\partial_0 – \gamma_0 \gamma_k \partial_k \\
&=
\inv{c} \PD{t}{} + \spacegrad.
\end{aligned}
\end{equation}
We can express the entire LHS of \ref{eqn:maxwells:1720} in the bivector spatial basis, so that Maxwell’s equation in multivector form is
\begin{equation}\label{eqn:maxwells:1800}
\lr{ \inv{c} \PD{t}{} + \spacegrad } \lr{ \BE + I c \BB } = \frac{\rho}{\epsilon_0} – \mu_0 c \BJ.
\end{equation}
Selecting the scalar, vector, bivector, and trivector grades of both sides (in the spatial basis) gives the following set of respective equations
\begin{equation}\label{eqn:maxwells:1840}
\spacegrad \cdot \BE = \frac{\rho}{\epsilon_0}
\end{equation}
\begin{equation}\label{eqn:maxwells:1860}
\inv{c} \partial_t \BE + I c \spacegrad \wedge \BB = – \mu_0 c \BJ
\end{equation}
\begin{equation}\label{eqn:maxwells:1880}
\spacegrad \wedge \BE + I \partial_t \BB = 0
\end{equation}
\begin{equation}\label{eqn:maxwells:1900}
I c \spacegrad \cdot \BB = 0,
\end{equation}
After some duality transformations (and noting that \( \mu_0 \epsilon_0 c^2 = 1 \)), these can be rewritten as
\begin{equation}\label{eqn:maxwells:1940}
\spacegrad \cdot \BE = \frac{\rho}{\epsilon_0}
\end{equation}
\begin{equation}\label{eqn:maxwells:1960}
\spacegrad \cross \BB – \mu_0 \epsilon_0 \PD{t}{\BE} = \mu_0 \BJ
\end{equation}
\begin{equation}\label{eqn:maxwells:1980}
\spacegrad \cross \BE + \PD{t}{\BB} = 0
\end{equation}
\begin{equation}\label{eqn:maxwells:2000}
\spacegrad \cdot \BB = 0,
\end{equation}
which are Maxwell’s equations in their traditional form.

Problem: Alternative multivector Lagrangian.

Show that a scalar+pseudoscalar Lagrangian of the following form
\begin{equation}\label{eqn:maxwells:2220}
\LL = – \frac{\epsilon_0 c}{2} F^2 + J \cdot A,
\end{equation}
which omits the scalar selection of the Lagrangian in \ref{eqn:maxwells:2160}, also represents Maxwell’s equation. Discuss the scalar and pseudoscalar components of \( F^2 \), and show why the pseudoscalar inclusion is irrelevant.

Answer

The quantity \( F^2 = F \cdot F + F \wedge F \) has both scalar and pseudoscalar
components. Note that, unlike the wedge of a vector with itself, the wedge of a 4D bivector with itself need not be zero (example: \( \gamma_0 \gamma_1 + \gamma_2 \gamma_3 \) wedged with itself).
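
For that example, a direct expansion (picking off the grade-4 part of the product) gives
\begin{equation*}
\lr{ \gamma_0 \gamma_1 + \gamma_2 \gamma_3 } \wedge \lr{ \gamma_0 \gamma_1 + \gamma_2 \gamma_3 }
= 2 \gamma_0 \gamma_1 \gamma_2 \gamma_3 = 2 I \ne 0.
\end{equation*}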
We can see this multivector nature nicely by expansion in terms of the electric and magnetic fields
\begin{equation}\label{eqn:maxwells:2020}
\begin{aligned}
F^2
&= \lr{ \BE + I c \BB }^2 \\
&= \BE^2 – c^2 \BB^2 + I c \lr{ \BE \BB + \BB \BE } \\
&= \BE^2 – c^2 \BB^2 + 2 I c \BE \cdot \BB.
\end{aligned}
\end{equation}
Both the scalar and pseudoscalar parts of \( F^2 \) are Lorentz invariant, a requirement of our Lagrangian, but most Maxwell equation Lagrangians only include the scalar \( \BE^2 – c^2 \BB^2 \) component of the field square. If we allow the Lagrangian to be multivector valued, and evaluate the Euler-Lagrange equations, we quickly find the same results
\begin{equation}\label{eqn:maxwells:2040}
\begin{aligned}
0
&= \gamma_\nu \lr{ \PD{A_\nu}{} – \partial_\mu \PD{(\partial_\mu A_\nu)}{} } \LL \\
&= \gamma_\nu \lr{ J^\nu + \frac{\epsilon_0 c}{2} \partial_\mu
\lr{
(\gamma^\mu \wedge \gamma^\nu) F
+
F (\gamma^\mu \wedge \gamma^\nu)
}
}.
\end{aligned}
\end{equation}
Here some steps are skipped, building on our previous scalar Euler-Lagrange evaluation experience. We have a symmetric product of two bivectors, which we can express as a 0,4 grade selection, since
\begin{equation}\label{eqn:maxwells:2060}
\gpgrade{ X F }{0,4} = \inv{2} \lr{ X F + F X },
\end{equation}
for any two bivectors \( X, F \). This leaves
\begin{equation}\label{eqn:maxwells:2080}
\begin{aligned}
0
&= J + \epsilon_0 c \gamma_\nu \gpgrade{ (\grad \wedge \gamma^\nu) F }{0,4} \\
&= J + \epsilon_0 c \gamma_\nu \gpgrade{ -\gamma^\nu \grad F + (\gamma^\nu \cdot \grad) F }{0,4} \\
&= J + \epsilon_0 c \gamma_\nu \gpgrade{ -\gamma^\nu \grad F }{0,4} \\
&= J – \epsilon_0 c \gamma_\nu
\lr{
\gamma^\nu \cdot \lr{ \grad \cdot F } + \gamma^\nu \wedge \grad \wedge F
}.
\end{aligned}
\end{equation}
However, since \( \grad \wedge F = \grad \wedge \grad \wedge A = 0 \), we see that there is no contribution from the \( F \wedge F \) pseudoscalar component of the Lagrangian, and we are left with
\begin{equation}\label{eqn:maxwells:2100}
\begin{aligned}
0
&= J – \epsilon_0 c (\grad \cdot F) \\
&= J – \epsilon_0 c \grad F,
\end{aligned}
\end{equation}
which is Maxwell’s equation, as before.

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] Peeter Joot. Quantum field theory. Kindle Direct Publishing, 2018.

Lagrangian for the Lorentz force equation.

October 24, 2020 math and physics play


Motivation.

In my old classical mechanics notes it appears that I did covariant derivations of the Lorentz force equations a number of times, using different trial Lagrangians (relativistic and non-relativistic), and using both geometric algebra and tensor methods. However, none of these appear to have been done concisely, and a number not even coherently.

The following document has been drafted as replacement text for those incoherent classical mechanics notes. I’ll attempt to cover

  • a lightning review of the geometric algebra STA (Space Time Algebra),
  • relations between Dirac matrix algebra and STA,
  • derivation of the relativistic form of the Euler-Lagrange equations from the covariant form of the action,
  • relationship of the STA form of the Euler-Lagrange equations to their tensor equivalents,
  • derivation of the Lorentz force equation from the STA Lorentz force Lagrangian,
  • relationship of the STA Lorentz force equation to its equivalent in the tensor formalism,
  • relationship of the STA Lorentz force equation to the traditional vector form.

Note that some of the prerequisite ideas and auxiliary details are presented as problems with solutions. If the reader has sufficient background to attempt those problems themselves, they are encouraged to do so.

The STA and geometric algebra ideas used here are not complete to learn from in isolation. The reader is referred to [1] for a more complete exposition of both STA and geometric algebra.

Conventions.

Definition 1.1: Index conventions.

Latin indexes \( i, j, k, r, s, t, \cdots \) are used to designate values in the range \( \setlr{ 1,2,3 } \). Greek indexes \( \alpha, \beta, \mu, \nu, \cdots \) are used for indexes of spacetime quantities, taking values in \( \setlr{0,1,2,3} \).
The Einstein convention of implied summation for mixed upper and lower Greek indexes will be used, for example
\begin{equation*}
x^\alpha x_\alpha \equiv \sum_{\alpha = 0}^3 x^\alpha x_\alpha.
\end{equation*}

Space Time Algebra (STA.)

In the geometric algebra literature, the Dirac algebra of quantum field theory has been rebranded Space Time Algebra (STA). The differences between STA and the Dirac theory that uses matrices (\( \gamma_\mu \)) are as follows

  • STA completely omits any representation of the Dirac basis vectors \( \gamma_\mu \). In particular, any possible matrix representation is irrelevant.
  • STA provides a rich set of fundamental operations (grade selection, generalized dot and wedge products for multivector elements, rotation and reflection operations, …)
  • Matrix trace, and commutator and anticommutator operations are nowhere to be found in STA, as geometrically grounded equivalents are available instead.
  • The “slashed” quantities from Dirac theory, such as \( \gamma_\mu p^\mu \) are nothing more than vectors in their entirety in STA (where the basis is no longer implicit, as is the case for coordinates.)

Our basis vectors have the following properties.

Definition 1.2: Standard basis.

Let the four-vector standard basis be designated \( \setlr{\gamma_0, \gamma_1, \gamma_2, \gamma_3 } \), where the basis vectors satisfy
\begin{equation}\label{eqn:lorentzForceCovariant:1540}
\begin{aligned}
\gamma_0^2 &= -\gamma_i^2 = 1 \\
\gamma_\alpha \cdot \gamma_\beta &= 0, \forall \alpha \ne \beta.
\end{aligned}
\end{equation}

Problem: Commutator properties of the STA basis.

In Dirac theory, the anticommutator properties of the Dirac matrices are considered fundamental, namely
\begin{equation*}
\symmetric{\gamma_\mu}{\gamma_\nu} = 2 \eta_{\mu\nu}.
\end{equation*}

Show that this follows from the axiomatic assumptions of geometric algebra, and describe how the dot and wedge products are related to the anticommutator and commutator products of Dirac theory.

Answer

The anticommutator is defined as symmetric sum of products
\begin{equation}\label{eqn:lorentzForceCovariant:1040}
\symmetric{\gamma_\mu}{\gamma_\nu}
\equiv
\gamma_\mu \gamma_\nu
+
\gamma_\nu \gamma_\mu,
\end{equation}
but this is just twice the dot product, which has the geometric algebra form \( a \cdot b = (a b + b a)/2 \). Observe that the properties of the basis vectors defined in \ref{eqn:lorentzForceCovariant:1540} may be summarized as
\begin{equation}\label{eqn:lorentzForceCovariant:1060}
\gamma_\mu \cdot \gamma_\nu = \eta_{\mu\nu},
\end{equation}
where \( \eta_{\mu\nu} = \text{diag}(+,-,-,-)
=
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
\end{bmatrix}
\) is the conventional metric tensor. This means
\begin{equation}\label{eqn:lorentzForceCovariant:1080}
\symmetric{\gamma_\mu}{\gamma_\nu} = 2 \gamma_\mu \cdot \gamma_\nu = 2 \eta_{\mu\nu},
\end{equation}
as claimed.

Similarly, observe that the commutator, defined as the antisymmetric sum of products
\begin{equation}\label{eqn:lorentzForceCovariant:1100}
\antisymmetric{\gamma_\mu}{\gamma_\nu} \equiv
\gamma_\mu \gamma_\nu
-
\gamma_\nu \gamma_\mu,
\end{equation}
is twice the wedge product \( a \wedge b = (a b - b a)/2 \). This provides geometric identifications for the anticommutator and commutator products respectively
\begin{equation}\label{eqn:lorentzForceCovariant:1120}
\begin{aligned}
\symmetric{\gamma_\mu}{\gamma_\nu} &= 2 \gamma_\mu \cdot \gamma_\nu \\
\antisymmetric{\gamma_\mu}{\gamma_\nu} &= 2 \gamma_\mu \wedge \gamma_\nu,
\end{aligned}
\end{equation}

Definition 1.3: Pseudoscalar.

The pseudoscalar for the space is denoted \( I = \gamma_0 \gamma_1 \gamma_2 \gamma_3 \).

Problem: Pseudoscalar.

Show that the STA pseudoscalar \( I \) defined by \ref{eqn:lorentzForceCovariant:1540} satisfies
\begin{equation*}
\tilde{I} = I,
\end{equation*}
where the tilde operator designates reversion. Also show that \( I \) has the properties of an imaginary number
\begin{equation*}
I^2 = -1.
\end{equation*}
Finally, show that, unlike the spatial pseudoscalar that commutes with all grades, \( I \) anticommutes with any vector or trivector, and commutes with any bivector.

Answer

Since \( \gamma_\alpha \gamma_\beta = -\gamma_\beta \gamma_\alpha \) for any \( \alpha \ne \beta \), any permutation of the factors of \( I \) changes the sign once. In particular
\begin{equation}\label{eqn:lorentzForceCovariant:680}
\begin{aligned}
I &=
\gamma_0
\gamma_1
\gamma_2
\gamma_3 \\
&=
-
\gamma_1
\gamma_2
\gamma_3
\gamma_0 \\
&=
-
\gamma_2
\gamma_3
\gamma_1
\gamma_0 \\
&=
+
\gamma_3
\gamma_2
\gamma_1
\gamma_0
= \tilde{I}.
\end{aligned}
\end{equation}
Using this, we have
\begin{equation}\label{eqn:lorentzForceCovariant:700}
\begin{aligned}
I^2
&= I \tilde{I} \\
&=
(
\gamma_0
\gamma_1
\gamma_2
\gamma_3
)(
\gamma_3
\gamma_2
\gamma_1
\gamma_0
) \\
&=
\lr{\gamma_0}^2
\lr{\gamma_1}^2
\lr{\gamma_2}^2
\lr{\gamma_3}^2 \\
&=
(+1)
(-1)
(-1)
(-1) \\
&= -1.
\end{aligned}
\end{equation}
To illustrate the anticommutation property with any vector basis element, consider the following two examples:
\begin{equation}\label{eqn:lorentzForceCovariant:720}
\begin{aligned}
I \gamma_0 &=
\gamma_0
\gamma_1
\gamma_2
\gamma_3
\gamma_0 \\
&=
-
\gamma_0
\gamma_0
\gamma_1
\gamma_2
\gamma_3 \\
&=
-
\gamma_0 I,
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:lorentzForceCovariant:740}
\begin{aligned}
I \gamma_2
&=
\gamma_0
\gamma_1
\gamma_2
\gamma_3
\gamma_2 \\
&=
-
\gamma_0
\gamma_1
\gamma_2
\gamma_2
\gamma_3 \\
&=
-
\gamma_2
\gamma_0
\gamma_1
\gamma_2
\gamma_3 \\
&= -\gamma_2 I.
\end{aligned}
\end{equation}
A total of three sign swaps is required to “percolate” any given \(\gamma_\alpha\) through the factors of \( I \), resulting in an overall sign change of \( -1 \).

For any bivector basis element \( \gamma_\alpha \gamma_\beta \), where \( \alpha \ne \beta \),
\begin{equation}\label{eqn:lorentzForceCovariant:760}
\begin{aligned}
I \gamma_\alpha \gamma_\beta
&=
-\gamma_\alpha I \gamma_\beta \\
&=
+\gamma_\alpha \gamma_\beta I.
\end{aligned}
\end{equation}

Similarly, for any trivector basis element \( \gamma_\alpha \gamma_\beta \gamma_\sigma \), where \( \alpha \ne \beta \ne \sigma \),
\begin{equation}\label{eqn:lorentzForceCovariant:780}
\begin{aligned}
I \gamma_\alpha \gamma_\beta \gamma_\sigma
&=
-\gamma_\alpha I \gamma_\beta \gamma_\sigma \\
&=
+\gamma_\alpha \gamma_\beta I \gamma_\sigma \\
&=
-\gamma_\alpha \gamma_\beta \gamma_\sigma I.
\end{aligned}
\end{equation}
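
For readers who want a numeric cross-check of these pseudoscalar properties, here is a small sketch using numpy and the standard Dirac-representation gamma matrices as a matrix stand-in for the STA basis (STA itself needs no matrix representation; this is only a verification aid):

import numpy as np

# Pauli matrices, used to build the Dirac representation gamma matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
z2 = np.zeros((2, 2), dtype=complex)
i2 = np.eye(2, dtype=complex)

# gamma_0 = diag(1, 1, -1, -1), gamma_k = [[0, sigma_k], [-sigma_k, 0]].
g = [np.block([[i2, z2], [z2, -i2]])] + \
    [np.block([[z2, sk], [-sk, z2]]) for sk in (s1, s2, s3)]

I = g[0] @ g[1] @ g[2] @ g[3]

assert np.allclose(I @ I, -np.eye(4))                    # I^2 = -1
assert all(np.allclose(I @ gm + gm @ I, 0) for gm in g)  # I anticommutes with any vector
assert np.allclose(I @ g[1] @ g[2], g[1] @ g[2] @ I)     # I commutes with a bivector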

Definition 1.4: Reciprocal basis.

The reciprocal basis \( \setlr{ \gamma^0, \gamma^1, \gamma^2, \gamma^3 } \) is defined such that the property \( \gamma^\alpha \cdot \gamma_\beta = {\delta^\alpha}_\beta \) holds.

Observe that \( \gamma^0 = \gamma_0 \) and \( \gamma^i = -\gamma_i \).

Theorem 1.1: Coordinates.

Coordinates are defined in terms of dot products with the standard basis, or reciprocal basis
\begin{equation*}
\begin{aligned}
x^\alpha &= x \cdot \gamma^\alpha \\
x_\alpha &= x \cdot \gamma_\alpha,
\end{aligned}
\end{equation*}

Start proof:

Suppose that a coordinate representation of the following form is assumed
\begin{equation}\label{eqn:lorentzForceCovariant:820}
x = x^\alpha \gamma_\alpha = x_\beta \gamma^\beta.
\end{equation}
We wish to determine the representation of the \( x^\alpha \) or \( x_\beta \) coordinates in terms of \( x\) and the basis elements. Taking the dot product with any standard basis element, we find
\begin{equation}\label{eqn:lorentzForceCovariant:840}
\begin{aligned}
x \cdot \gamma_\mu
&= (x_\beta \gamma^\beta) \cdot \gamma_\mu \\
&= x_\beta {\delta^\beta}_\mu \\
&= x_\mu,
\end{aligned}
\end{equation}
as claimed. Similarly, dotting with a reciprocal frame vector, we find
\begin{equation}\label{eqn:lorentzForceCovariant:860}
\begin{aligned}
x \cdot \gamma^\mu
&= (x^\beta \gamma_\beta) \cdot \gamma^\mu \\
&= x^\beta {\delta_\beta}^\mu \\
&= x^\mu.
\end{aligned}
\end{equation}

End proof.

Observe that raising or lowering a spatial index toggles the sign of a coordinate, but a timelike index leaves the coordinate unchanged.
\begin{equation}\label{eqn:lorentzForceCovariant:880}
\begin{aligned}
x^0 &= x_0 \\
x^i &= -x_i \\
\end{aligned}
\end{equation}

Definition 1.5: Spacetime gradient.

The spacetime gradient operator is
\begin{equation*}
\grad = \gamma^\mu \partial_\mu = \gamma_\nu \partial^\nu,
\end{equation*}
where
\begin{equation*}
\partial_\mu = \PD{x^\mu}{},
\end{equation*}
and
\begin{equation*}
\partial^\mu = \PD{x_\mu}{}.
\end{equation*}

This definition of gradient is consistent with the Dirac gradient (sometimes denoted as a slashed \(\partial\)).

Definition 1.6: Timelike and spacelike components of a four-vector.

Given a four vector \( x = \gamma_\mu x^\mu \), that would be designated \( x^\mu = \setlr{ x^0, \Bx} \) in conventional special relativity, we write
\begin{equation*}
x^0 = x \cdot \gamma_0,
\end{equation*}
and
\begin{equation*}
\Bx = x \wedge \gamma_0,
\end{equation*}
or
\begin{equation*}
x = (x^0 + \Bx) \gamma_0.
\end{equation*}

The spacetime split of a four-vector \( x \) is relative to the frame. In the relativistic lingo, one would say that it is “observer dependent”, as the same operations with \( {\gamma_0}’ \), the timelike basis vector for a different frame, would yield a different set of coordinates.

While the dot and wedge products above provide an effective mechanism to split a four vector into a set of timelike and spacelike quantities, the spatial component of a vector has a bivector representation in STA. Consider the following coordinate expansion of a spatial vector
\begin{equation}\label{eqn:lorentzForceCovariant:1000}
\Bx =
x \wedge \gamma_0
=
\lr{ x^\mu \gamma_\mu } \wedge \gamma_0
=
\sum_{k = 1}^3 x^k \gamma_k \gamma_0.
\end{equation}

Definition 1.7: Spatial basis.

We designate
\begin{equation}\label{eqn:lorentzForceCovariant:1560}
\Be_i = \gamma_i \gamma_0,
\end{equation}
as the standard basis vectors for \(\mathbb{R}^3\).

In the literature, this bivector representation of the spatial basis may be designated \( \sigma_i = \gamma_i \gamma_0 \), as these bivectors have the properties of the Pauli matrices \( \sigma_i \). Because I intend to expand these notes to include purely non-relativistic applications, I won’t use the Pauli notation here.

Problem: Orthonormality of the spatial basis.

Show that the spatial basis \( \setlr{ \Be_1, \Be_2, \Be_3 } \), defined by \ref{eqn:lorentzForceCovariant:1560}, is orthonormal.

Answer

\begin{equation}\label{eqn:lorentzForceCovariant:620}
\begin{aligned}
\Be_i \cdot \Be_j
&= \gpgradezero{ \gamma_i \gamma_0 \gamma_j \gamma_0 } \\
&= -\gpgradezero{ \gamma_i \gamma_j } \\
&= – \gamma_i \cdot \gamma_j.
\end{aligned}
\end{equation}
This is zero for all \( i \ne j \), and unity for any \( i = j \).

Problem: Spatial pseudoscalar.

Show that the STA pseudoscalar \( I = \gamma_0 \gamma_1 \gamma_2 \gamma_3 \) equals the spatial pseudoscalar \( I = \Be_1 \Be_2 \Be_3 \).

Answer

The spatial pseudoscalar, expanded in terms of the STA basis vectors, is
\begin{equation}\label{eqn:lorentzForceCovariant:1020}
\begin{aligned}
I
&= \Be_1 \Be_2 \Be_3 \\
&= \lr{ \gamma_1 \gamma_0 }
\lr{ \gamma_2 \gamma_0 }
\lr{ \gamma_3 \gamma_0 } \\
&= \lr{ \gamma_1 \gamma_0 } \gamma_2 \lr{ \gamma_0 \gamma_3 } \gamma_0 \\
&= \lr{ -\gamma_0 \gamma_1 } \gamma_2 \lr{ -\gamma_3 \gamma_0 } \gamma_0 \\
&= \gamma_0 \gamma_1 \gamma_2 \gamma_3 \lr{ \gamma_0 \gamma_0 } \\
&= \gamma_0 \gamma_1 \gamma_2 \gamma_3,
\end{aligned}
\end{equation}
as claimed.

Problem: Characteristics of the Pauli matrices.

The Pauli matrices obey the following anticommutation relations:
\begin{equation}\label{eqn:lorentzForceCovariant:660}
\symmetric{ \sigma_a}{\sigma_b } = 2 \delta_{a b},
\end{equation}
and commutation relations:
\begin{equation}\label{eqn:lorentzForceCovariant:640}
\antisymmetric{ \sigma_a}{ \sigma_b } = 2 i \epsilon_{a b c}\,\sigma_c,
\end{equation}
Show how these relate to the geometric algebra dot and wedge products, and determine the geometric algebra representation of the imaginary \( i \) above.
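
(Not a full solution, just a sketch of where this leads: identifying \( \sigma_a \) with the spatial basis vectors \( \Be_a = \gamma_a \gamma_0 \) of definition 1.7, and taking the dot and wedge products in that spatial algebra,
\begin{equation*}
\begin{aligned}
\symmetric{\Be_a}{\Be_b} &= 2 \Be_a \cdot \Be_b = 2 \delta_{ab} \\
\antisymmetric{\Be_a}{\Be_b} &= 2 \Be_a \wedge \Be_b = 2 I \epsilon_{a b c} \Be_c,
\end{aligned}
\end{equation*}
so the imaginary \( i \) of the Pauli algebra is identified with the pseudoscalar \( I = \Be_1 \Be_2 \Be_3 \).)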

Euler-Lagrange equations.

I’ll start at ground zero, with the derivation of the relativistic form of the Euler-Lagrange equations from the action. A relativistic action for a single particle system has the form
\begin{equation}\label{eqn:lorentzForceCovariant:20}
S = \int d\tau L(x, \dot{x}),
\end{equation}
where \( x \) is the spacetime coordinate, \( \dot{x} = dx/d\tau \) is the four-velocity, and \( \tau \) is proper time.

Theorem 1.2: Relativistic Euler-Lagrange equations.

Let \( x \rightarrow x + \delta x \) be any variation of the Lagrangian four-vector coordinates, where \( \delta x = 0 \) at the boundaries of the action integral. The variation of the action is
\begin{equation}\label{eqn:lorentzForceCovariant:1580}
\delta S = \int d\tau \delta x \cdot \delta L(x, \dot{x}),
\end{equation}
where
\begin{equation}\label{eqn:lorentzForceCovariant:1600}
\delta L = \grad L – \frac{d}{d\tau} (\grad_v L),
\end{equation}
where \( \grad = \gamma^\mu \partial_\mu \), and where we construct a similar velocity-gradient with respect to the proper-time derivatives of the coordinates, \( \grad_v = \gamma^\mu \partial/\partial \dot{x}^\mu \). The action is extremized when \( \delta S = 0 \), or when \( \delta L = 0 \). This latter condition is the statement of the Euler-Lagrange equations.

Start proof:

Let \( \epsilon = \delta x \), and expand the Lagrangian in Taylor series to first order
\begin{equation}\label{eqn:lorentzForceCovariant:60}
\begin{aligned}
S &\rightarrow S + \delta S \\
&= \int d\tau L( x + \epsilon, \dot{x} + \dot{\epsilon}) \\
&=
\int d\tau \lr{
L(x, \dot{x}) + \epsilon \cdot \grad L + \dot{\epsilon} \cdot \grad_v L
}.
\end{aligned}
\end{equation}
Subtracting off \( S \) and integrating by parts leaves
\begin{equation}\label{eqn:lorentzForceCovariant:80}
\delta S =
\int d\tau \epsilon \cdot \lr{
\grad L – \frac{d}{d\tau} \grad_v L
}
+
\int d\tau \frac{d}{d\tau} (\grad_v L ) \cdot \epsilon.
\end{equation}
The boundary integral
\begin{equation}\label{eqn:lorentzForceCovariant:100}
\int d\tau \frac{d}{d\tau} (\grad_v L ) \cdot \epsilon
=
\evalbar{(\grad_v L ) \cdot \epsilon}{\Delta \tau} = 0,
\end{equation}
is zero since the variation \( \epsilon \) is required to vanish on the boundaries. So, if \( \delta S = 0 \), we must have
\begin{equation}\label{eqn:lorentzForceCovariant:120}
0 =
\int d\tau \epsilon \cdot \lr{
\grad L – \frac{d}{d\tau} \grad_v L
},
\end{equation}
for all variations \( \epsilon \). Clearly, this requires that
\begin{equation}\label{eqn:lorentzForceCovariant:140}
\delta L = \grad L – \frac{d}{d\tau} (\grad_v L) = 0,
\end{equation}
or
\begin{equation}\label{eqn:lorentzForceCovariant:145}
\grad L = \frac{d}{d\tau} (\grad_v L),
\end{equation}
which is the coordinate free statement of the Euler-Lagrange equations.

End proof.
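
As a trivial check of the theorem (not needed for what follows), the free particle Lagrangian \( L = (1/2) m \dot{x}^2 \) has \( \grad L = 0 \) and \( \grad_v L = m \dot{x} \), so the Euler-Lagrange equations reduce to
\begin{equation*}
0 = \grad L - \frac{d}{d\tau} \lr{ \grad_v L } = -m \frac{d^2 x}{d\tau^2},
\end{equation*}
that is, constant proper velocity.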

Problem: Coordinate form of the Euler-Lagrange equations.

Working in coordinates, use the action argument to show that the Euler-Lagrange equations have the form
\begin{equation*}
\PD{x^\mu}{L} = \frac{d}{d\tau} \PD{\dot{x}^\mu}{L}
\end{equation*}
Observe that this is identical to the statement of \ref{eqn:lorentzForceCovariant:1600} after contraction with \( \gamma^\mu \).

Answer

In terms of coordinates, the first order Taylor expansion of the action is
\begin{equation}\label{eqn:lorentzForceCovariant:180}
\begin{aligned}
S &\rightarrow S + \delta S \\
&= \int d\tau L( x^\alpha + \epsilon^\alpha, \dot{x}^\alpha + \dot{\epsilon}^\alpha) \\
&=
\int d\tau \lr{
L(x^\alpha, \dot{x}^\alpha) + \epsilon^\mu \PD{x^\mu}{L} + \dot{\epsilon}^\mu \PD{\dot{x}^\mu}{L}
}.
\end{aligned}
\end{equation}
As before, we integrate by parts to separate out a pure boundary term
\begin{equation}\label{eqn:lorentzForceCovariant:200}
\delta S =
\int d\tau \epsilon^\mu
\lr{
\PD{x^\mu}{L} – \frac{d}{d\tau} \PD{\dot{x}^\mu}{L}
}
+
\int d\tau \frac{d}{d\tau} \lr{
\epsilon^\mu \PD{\dot{x}^\mu}{L}
}.
\end{equation}
The boundary term is killed since \( \epsilon^\mu = 0 \) at the end points of the action integral. We conclude that extremization of the action (\( \delta S = 0 \), for all \( \epsilon^\mu \)) requires
\begin{equation}\label{eqn:lorentzForceCovariant:220}
\PD{x^\mu}{L} – \frac{d}{d\tau} \PD{\dot{x}^\mu}{L} = 0.
\end{equation}

Lorentz force equation.

Theorem 1.3: Lorentz force.

The relativistic Lagrangian for a charged particle is
\begin{equation}\label{eqn:lorentzForceCovariant:1640}
L = \inv{2} m v^2 + q A \cdot v/c.
\end{equation}
Application of the Euler-Lagrange equations to this Lagrangian yields the Lorentz-force equation
\begin{equation}\label{eqn:lorentzForceCovariant:1660}
\frac{dp}{d\tau} = q F \cdot v/c,
\end{equation}
where \( p = m v \) is the proper momentum, \( F \) is the Faraday bivector \( F = \grad \wedge A \), and \( c \) is the speed of light.

Start proof:

To make life easier, let’s take advantage of the linearity of the Lagrangian, and break it into the free particle Lagrangian \( L_0 = (1/2) m v^2 \) and a potential term \( L_1 = q A \cdot v/c \). For the free particle case we have
\begin{equation}\label{eqn:lorentzForceCovariant:240}
\begin{aligned}
\delta L_0
&= \grad L_0 – \frac{d}{d\tau} (\grad_v L_0) \\
&= – \frac{d}{d\tau} (m v) \\
&= – \frac{dp}{d\tau}.
\end{aligned}
\end{equation}
For the potential contribution we have
\begin{equation}\label{eqn:lorentzForceCovariant:260}
\begin{aligned}
\delta L_1
&= \grad L_1 – \frac{d}{d\tau} (\grad_v L_1) \\
&= \frac{q}{c} \lr{ \grad (A \cdot v) – \frac{d}{d\tau} \lr{ \grad_v (A \cdot v)} } \\
&= \frac{q}{c} \lr{ \grad (A \cdot v) – \frac{dA}{d\tau} }.
\end{aligned}
\end{equation}
The proper time derivative can be evaluated using the chain rule
\begin{equation}\label{eqn:lorentzForceCovariant:280}
\frac{dA}{d\tau}
=
\frac{\partial x^\mu}{\partial \tau} \partial_\mu A
= (v \cdot \grad) A.
\end{equation}
Putting all the pieces back together we have
\begin{equation}\label{eqn:lorentzForceCovariant:300}
\begin{aligned}
0
&= \delta L \\
&=
-\frac{dp}{d\tau} + \frac{q}{c} \lr{ \grad (A \cdot v) – (v \cdot \grad) A } \\
&=
-\frac{dp}{d\tau} + \frac{q}{c} \lr{ \grad \wedge A } \cdot v.
\end{aligned}
\end{equation}

End proof.

Problem: Gradient of a squared position vector.

Show that
\begin{equation*}
\grad (a \cdot x) = a,
\end{equation*}
and
\begin{equation*}
\grad x^2 = 2 x.
\end{equation*}
It should be clear that the same ideas can be used for the velocity gradient, where we obtain \( \grad_v (v^2) = 2 v \), and \( \grad_v (A \cdot v) = A \), as used in the derivation above.

Answer

The first identity follows easily by expansion in coordinates
\begin{equation}\label{eqn:lorentzForceCovariant:320}
\begin{aligned}
\grad (a \cdot x)
&=
\gamma^\mu \partial_\mu a_\alpha x^\alpha \\
&=
\gamma^\mu a_\alpha \delta_\mu^\alpha \\
&=
\gamma^\mu a_\mu \\
&=
a.
\end{aligned}
\end{equation}
The second identity follows from the first, with an application of the product rule
\begin{equation}\label{eqn:lorentzForceCovariant:340}
\begin{aligned}
\grad x^2
&=
\grad (x \cdot x) \\
&=
\evalbar{\lr{\grad (x \cdot a)}}{a = x}
+
\evalbar{\lr{\grad (b \cdot x)}}{b = x} \\
&=
\evalbar{a}{a = x}
+
\evalbar{b}{b = x} \\
&=
2x.
\end{aligned}
\end{equation}

It is desirable to put this relativistic Lorentz force equation into the usual vector and tensor forms for comparison.

Theorem 1.4: Tensor form of the Lorentz force equation.

The tensor form of the Lorentz force equation is
\begin{equation}\label{eqn:lorentzForceCovariant:1620}
\frac{dp^\mu}{d\tau} = \frac{q}{c} F^{\mu\nu} v_\nu,
\end{equation}
where the antisymmetric Faraday tensor is defined as \( F^{\mu\nu} = \partial^\mu A^\nu – \partial^\nu A^\mu \).

Start proof:

We have only to dot both sides with \( \gamma^\mu \). On the left we have
\begin{equation}\label{eqn:lorentzForceCovariant:380}
\gamma^\mu \cdot \frac{dp}{d\tau}
=
\frac{dp^\mu}{d\tau}.
\end{equation}
On the right, we have
\begin{equation}\label{eqn:lorentzForceCovariant:400}
\begin{aligned}
\gamma^\mu \cdot \lr{ \frac{q}{c} F \cdot v }
&=
\frac{q}{c} (( \grad \wedge A ) \cdot v ) \cdot \gamma^\mu \\
&=
\frac{q}{c} ( \grad ( A \cdot v ) – (v \cdot \grad) A ) \cdot \gamma^\mu \\
&=
\frac{q}{c} \lr{ (\partial^\mu A^\nu) v_\nu – v_\nu \partial^\nu A^\mu } \\
&=
\frac{q}{c} F^{\mu\nu} v_\nu.
\end{aligned}
\end{equation}

End proof.

Problem: Tensor expansion of \(F\).

An alternate way to demonstrate \ref{eqn:lorentzForceCovariant:1620} is to first expand \( F = \grad \wedge A \) in terms of coordinates, an expansion that can be expressed in terms of a second rank antisymmetric tensor \( F^{\mu\nu} \). Find that expansion, and re-evaluate the dot products of \ref{eqn:lorentzForceCovariant:400} using that.

Answer

\begin{equation}\label{eqn:lorentzForceCovariant:900}
\begin{aligned}
F &=
\grad \wedge A \\
&=
\lr{ \gamma_\mu \partial^\mu } \wedge \lr{ \gamma_\nu A^\nu } \\
&=
\lr{ \gamma_\mu \wedge \gamma_\nu } \partial^\mu A^\nu.
\end{aligned}
\end{equation}
To this we can apply the usual tensor trick (add the expression to itself with the indexes swapped, and divide by two), to give
\begin{equation}\label{eqn:lorentzForceCovariant:920}
\begin{aligned}
F &=
\inv{2} \lr{
\lr{ \gamma_\mu \wedge \gamma_\nu } \partial^\mu A^\nu
+
\lr{ \gamma_\nu \wedge \gamma_\mu } \partial^\nu A^\mu
} \\
&=
\inv{2}
\lr{ \gamma_\mu \wedge \gamma_\nu } \lr{
\partial^\mu A^\nu

\partial^\nu A^\mu
},
\end{aligned}
\end{equation}
which is just
\begin{equation}\label{eqn:lorentzForceCovariant:940}
F =
\inv{2} \lr{ \gamma_\mu \wedge \gamma_\nu } F^{\mu\nu}.
\end{equation}
Now, let’s expand \( (F \cdot v) \cdot \gamma^\mu \) to compare to the earlier expansion in terms of \( \grad \) and \( A \).
\begin{equation}\label{eqn:lorentzForceCovariant:960}
\begin{aligned}
(F \cdot v) \cdot \gamma^\mu
&=
\inv{2}
F^{\alpha\nu}
\lr{ \lr{ \gamma_\alpha \wedge \gamma_\nu } \cdot \lr{ \gamma^\beta v_\beta } } \cdot \gamma^\mu \\
&=
\inv{2}
F^{\alpha\nu} v_\beta
\lr{
{\delta_\nu}^\beta {\delta_\alpha}^\mu
-
{\delta_\alpha}^\beta {\delta_\nu}^\mu
} \\
&=
\inv{2}
\lr{
F^{\mu\beta} v_\beta

F^{\beta\mu} v_\beta
} \\
&=
F^{\mu\nu} v_\nu.
\end{aligned}
\end{equation}
This alternate expansion illustrates some of the connectivity between the geometric algebra approach and the traditional tensor formalism.

Problem: Lorentz force direct tensor derivation.

Instead of using the geometric algebra form of the Lorentz force equation as a stepping stone, we may derive the tensor form from the Lagrangian directly, provided the Lagrangian is put into tensor form
\begin{equation*}
L = \inv{2} m v^\mu v_\mu + q A^\mu v_\mu /c.
\end{equation*}
Evaluate the Euler-Lagrange equations in coordinate form and compare to \ref{eqn:lorentzForceCovariant:1620}.

Answer

Let \( \delta_\mu L = \gamma_\mu \cdot \delta L \), so that we can write the Euler-Lagrange equations as
\begin{equation}\label{eqn:lorentzForceCovariant:460}
0 = \delta_\mu L = \PD{x^\mu}{L} – \frac{d}{d\tau} \PD{\dot{x}^\mu}{L}.
\end{equation}
Operating on the kinetic term of the Lagrangian, we have
\begin{equation}\label{eqn:lorentzForceCovariant:480}
\delta_\mu L_0 = – \frac{d}{d\tau} m v_\mu.
\end{equation}
For the potential term
\begin{equation}\label{eqn:lorentzForceCovariant:500}
\begin{aligned}
\delta_\mu L_1
&=
\frac{q}{c} \lr{
v_\nu \PD{x^\mu}{A^\nu} – \frac{d}{d\tau} A_\mu
} \\
&=
\frac{q}{c} \lr{
v_\nu \PD{x^\mu}{A^\nu} – \frac{dx_\alpha}{d\tau} \PD{x_\alpha}{ A_\mu }
} \\
&=
\frac{q}{c} v^\nu \lr{
\partial_\mu A_\nu – \partial_\nu A_\mu
} \\
&=
\frac{q}{c} v^\nu F_{\mu\nu}.
\end{aligned}
\end{equation}
Putting the pieces together gives
\begin{equation}\label{eqn:lorentzForceCovariant:520}
\frac{d}{d\tau} (m v_\mu) = \frac{q}{c} v^\nu F_{\mu\nu},
\end{equation}
which is identical\footnote{Some minor index raising and lowering gymnastics are required.} to the tensor form that we found by expanding the geometric algebra form of the Lorentz force equation in coordinates.

Theorem 1.5: Vector Lorentz force equation.

Relative to a fixed observer’s frame, the Lorentz force equation of \ref{eqn:lorentzForceCovariant:1660} splits into a spatial rate of change of momentum, and (timelike component) rate of change of energy, as follows
\begin{equation}\label{eqn:lorentzForceCovariant:1680}
\begin{aligned}
\ddt{(\gamma m \Bv)} &= q \lr{ \BE + \Bv \cross \BB } \\
\ddt{(\gamma m c^2)} &= q \Bv \cdot \BE,
\end{aligned}
\end{equation}
where \( F = \BE + I c \BB \), \( \gamma = 1/\sqrt{1 – \Bv^2/c^2 }\).

Start proof:

The first step is to eliminate the proper time dependencies in the Lorentz force equation. Consider first the coordinate representation of an arbitrary position four-vector \( x \)
\begin{equation}\label{eqn:lorentzForceCovariant:1140}
x = c t \gamma_0 + x^k \gamma_k.
\end{equation}
The corresponding four-vector velocity is
\begin{equation}\label{eqn:lorentzForceCovariant:1160}
v = \ddtau{x} = c \ddtau{t} \gamma_0 + \ddtau{t} \ddt{x^k} \gamma_k.
\end{equation}
By construction, \( v^2 = c^2 \) is a Lorentz invariant quantity (this is one of the relativistic postulates), so the RHS of \ref{eqn:lorentzForceCovariant:1160} must have the same square. That is
\begin{equation}\label{eqn:lorentzForceCovariant:1240}
c^2 = \lr{ \ddtau{t} }^2 \lr{ c^2 – \Bv^2 },
\end{equation}
where \( \Bv = \ddt{}\lr{ x \wedge \gamma_0 } \) is the conventional velocity vector. This shows that we may make the identification
\begin{equation}\label{eqn:lorentzForceCovariant:1260}
\gamma = \ddtau{t} = \inv{\sqrt{ 1 - \Bv^2/c^2 }},
\end{equation}
and
\begin{equation}\label{eqn:lorentzForceCovariant:1280}
\ddtau{} = \ddtau{t} \ddt{} = \gamma \ddt{}.
\end{equation}
We may now factor the four-velocity \( v \) into its spacetime split
\begin{equation}\label{eqn:lorentzForceCovariant:1300}
v = \gamma \lr{ c + \Bv } \gamma_0.
\end{equation}
In particular the LHS of the Lorentz force equation can be rewritten as
\begin{equation}\label{eqn:lorentzForceCovariant:1320}
\ddtau{p} = m \gamma \ddt{}\lr{ \gamma \lr{ c + \Bv } } \gamma_0,
\end{equation}
and the RHS of the Lorentz force equation can be rewritten as
\begin{equation}\label{eqn:lorentzForceCovariant:1340}
\frac{q}{c} F \cdot v
=
\frac{\gamma q}{c} F \cdot \lr{ (c + \Bv) \gamma_0 }.
\end{equation}
Equating timelike and spacelike components leaves us
\begin{equation}\label{eqn:lorentzForceCovariant:1380}
\ddt{ (m \gamma c) } = \frac{q}{c} \lr{ F \cdot \lr{ (c + \Bv) \gamma_0 } } \cdot \gamma_0,
\end{equation}
\begin{equation}\label{eqn:lorentzForceCovariant:1400}
\ddt{ (m \gamma \Bv) } = \frac{q}{c} \lr{ F \cdot \lr{ (c + \Bv) \gamma_0 } } \wedge \gamma_0,
\end{equation}
Evaluating these products requires some care, but is an essentially manual process. The reader is encouraged to do so once, but the end result may also be obtained easily using software (see lorentzForce.nb in [2]). One finds
\begin{equation}\label{eqn:lorentzForceCovariant:1440}
F = \BE + I c \BB
=
E^1 \gamma_{10}
+ E^2 \gamma_{20}
+ E^3 \gamma_{30}
- c B^1 \gamma_{23}
- c B^2 \gamma_{31}
- c B^3 \gamma_{12},
\end{equation}
\begin{equation}\label{eqn:lorentzForceCovariant:1460}
\frac{q}{c} \lr{ F \cdot \lr{ (c + \Bv) \gamma_0 } } \cdot \gamma_0
= \frac{q}{c} \BE \cdot \Bv,
\end{equation}
\begin{equation}\label{eqn:lorentzForceCovariant:1480}
\frac{q}{c} \lr{ F \cdot \lr{ (c + \Bv) \gamma_0 } } \wedge \gamma_0
= q \lr{ \BE + \Bv \cross \BB }.
\end{equation}

End proof.

Problem: Algebraic spacetime split of the Lorentz force equation.

Derive the results of \ref{eqn:lorentzForceCovariant:1440} through \ref{eqn:lorentzForceCovariant:1480} algebraically.

Problem: Spacetime split of the Lorentz force tensor equation.

Show that \ref{eqn:lorentzForceCovariant:1680} also follows from the tensor form of the Lorentz force equation (\ref{eqn:lorentzForceCovariant:1620}) provided we identify
\begin{equation}\label{eqn:lorentzForceCovariant:1500}
F^{k0} = E^k,
\end{equation}
and
\begin{equation}\label{eqn:lorentzForceCovariant:1520}
F^{rs} = -\epsilon^{rst} B^t.
\end{equation}

Also verify that the identifications of \ref{eqn:lorentzForceCovariant:1500} and \ref{eqn:lorentzForceCovariant:1520} are consistent with the geometric algebra Faraday bivector \( F = \BE + I c \BB \), and the associated coordinate expansion of the field \( F = (1/2) (\gamma_\mu \wedge \gamma_\nu) F^{\mu\nu} \).

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] Peeter Joot. Mathematica modules for Geometric Algebra’s GA(2,0), GA(3,0), and GA(1,3), 2017. URL https://github.com/peeterjoot/gapauli. [Online; accessed 24-Oct-2020].

Three more geometric algebra tutorials on youtube.

January 28, 2018 math and physics play

Here are three more fairly short Geometric Algebra related tutorials that I’ve posted on youtube.

second experiment in screen recording

July 17, 2017 math and physics play

Here’s a second attempt at recording a blackboard style screen recording:

 

To handle the screen transitions, equivalent to clearing my small blackboard, I switched to using a black background and just moved the text as I filled things up.  This worked much better.  I still drew with mischief, and recorded with OBS, but then did a small post production edit in iMovie to remove a little bit of dead air and to edit out one particularly bad flub.

This talk covers the product of two vectors, defines the dot and wedge products, and shows how the 3D wedge product is related to the cross product.  I recorded some additional discussion of duality that I left out of this video, which was long enough without it.

A comparison of Geometric Algebra electrodynamic potential methods

January 7, 2017 math and physics play


Motivation

Geometric algebra (GA) allows for a compact description of Maxwell’s equations in either an explicit 3D representation or a STA (SpaceTime Algebra [2]) representation. The 3D GA and STA representations of Maxwell’s equation both have the form

\begin{equation}\label{eqn:potentialMethods:1280}
L \boldsymbol{\mathcal{F}} = J,
\end{equation}

where \( J \) represents the sources, \( L \) is a multivector gradient operator that includes partial derivative operator components for each of the space and time coordinates, and

\begin{equation}\label{eqn:potentialMethods:1020}
\boldsymbol{\mathcal{F}} = \boldsymbol{\mathcal{E}} + \eta I \boldsymbol{\mathcal{H}},
\end{equation}

is an electromagnetic field multivector, \( I = \Be_1 \Be_2 \Be_3 \) is the \R{3} pseudoscalar, and \( \eta = \sqrt{\mu/\epsilon} \) is the impedance of the media.

When Maxwell’s equations are extended to include magnetic sources in addition to conventional electric sources (as used in antenna-theory [1] and microwave engineering [3]), they take the form

\begin{equation}\label{eqn:chapter3Notes:20}
\spacegrad \cross \boldsymbol{\mathcal{E}} = – \boldsymbol{\mathcal{M}} – \PD{t}{\boldsymbol{\mathcal{B}}}
\end{equation}
\begin{equation}\label{eqn:chapter3Notes:40}
\spacegrad \cross \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{J}} + \PD{t}{\boldsymbol{\mathcal{D}}}
\end{equation}
\begin{equation}\label{eqn:chapter3Notes:60}
\spacegrad \cdot \boldsymbol{\mathcal{D}} = q_{\textrm{e}}
\end{equation}
\begin{equation}\label{eqn:chapter3Notes:80}
\spacegrad \cdot \boldsymbol{\mathcal{B}} = q_{\textrm{m}}.
\end{equation}

The corresponding GA Maxwell equations in their respective 3D and STA forms are

\begin{equation}\label{eqn:potentialMethods:300}
\lr{ \spacegrad + \inv{v} \PD{t}{} } \boldsymbol{\mathcal{F}}
=
\eta
\lr{ v q_{\textrm{e}} – \boldsymbol{\mathcal{J}} }
+ I \lr{ v q_{\textrm{m}} – \boldsymbol{\mathcal{M}} }
\end{equation}
\begin{equation}\label{eqn:potentialMethods:320}
\grad \boldsymbol{\mathcal{F}} = \eta J – I M,
\end{equation}

where the wave group velocity in the medium is \( v = 1/\sqrt{\epsilon\mu} \), and the medium is isotropic with
\( \boldsymbol{\mathcal{B}} = \mu \boldsymbol{\mathcal{H}} \), and \( \boldsymbol{\mathcal{D}} = \epsilon \boldsymbol{\mathcal{E}} \). In the STA representation, \( \grad, J, M \) are all four-vectors, the specific meanings of which will be spelled out below.

How to determine the potential equations and the field representation using the conventional distinct Maxwell’s equations \ref{eqn:chapter3Notes:20}, … is well known. The basic procedure is to consider the electric and magnetic sources in turn, and observe that in each case one of the electric or magnetic fields must have a curl representation. The STA approach is similar, except that it can be observed that the field must have a four-curl representation for each type of source. In the explicit 3D GA formalism
\ref{eqn:potentialMethods:300}, how to formulate a natural potential representation is not as obvious. There is no longer a reason to set any component of the field equal to a curl, and the representation of the four curl from the STA approach is awkward. Additionally, it is not obvious what form gauge invariance takes in the 3D GA representation.

Ideas explored in these notes

  • GA representation of Maxwell’s equations including magnetic sources.
  • STA GA formalism for Maxwell’s equations including magnetic sources.
  • Explicit form of the GA potential representation including both electric and magnetic sources.
  • Demonstration of exactly how the 3D and STA potentials are related.
  • Explore the structure of gauge transformations when magnetic sources are included.
  • Explore the structure of gauge transformations in the 3D GA formalism.
  • Specify the form of the Lorentz gauge in the 3D GA formalism.

Traditional vector algebra

No magnetic sources

When magnetic sources are omitted, it follows from \ref{eqn:chapter3Notes:80} that there is some \( \boldsymbol{\mathcal{A}}^{\mathrm{e}} \) for which

\begin{equation}\label{eqn:potentialMethods:20}
\boxed{
\boldsymbol{\mathcal{B}} = \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}},
}
\end{equation}

Substitution into Faraday’s law \ref{eqn:chapter3Notes:20} gives

\begin{equation}\label{eqn:potentialMethods:40}
\spacegrad \cross \boldsymbol{\mathcal{E}} = – \PD{t}{}\lr{ \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}} },
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:60}
\spacegrad \cross \lr{ \boldsymbol{\mathcal{E}} + \PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } } = 0.
\end{equation}

A gradient representation of this curled quantity, say \( -\spacegrad \phi \), will provide the required zero

\begin{equation}\label{eqn:potentialMethods:80}
\boxed{
\boldsymbol{\mathcal{E}} = -\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }.
}
\end{equation}

The final two Maxwell equations yield

\begin{equation}\label{eqn:potentialMethods:100}
\begin{aligned}
-\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \spacegrad \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} } &= \mu \lr{ \boldsymbol{\mathcal{J}} + \epsilon \PD{t}{} \lr{ -\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } } } \\
\spacegrad \cdot \lr{ -\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } } &= q_e/\epsilon,
\end{aligned}
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:120}
\boxed{
\begin{aligned}
\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{e}} – \inv{v^2} \PDSq{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
– \spacegrad \lr{
\inv{v^2} \PD{t}{\phi}
+\spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}}
}
&= -\mu \boldsymbol{\mathcal{J}} \\
\spacegrad^2 \phi + \PD{t}{} \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} } &= -q_e/\epsilon.
\end{aligned}
}
\end{equation}

Note that the Lorentz condition \( \PDi{t}{(\phi/v^2)} + \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} = 0 \) can be imposed to decouple these, leaving non-homogeneous wave equations for the vector and scalar potentials respectively.
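
Written out explicitly, imposing that condition on \ref{eqn:potentialMethods:120} leaves
\begin{equation*}
\begin{aligned}
\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{e}} - \inv{v^2} \PDSq{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } &= -\mu \boldsymbol{\mathcal{J}} \\
\spacegrad^2 \phi - \inv{v^2} \PDSq{t}{ \phi } &= -q_e/\epsilon.
\end{aligned}
\end{equation*}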

No electric sources

Without electric sources, a curl representation of the electric field can be assumed, satisfying Gauss’s law

\begin{equation}\label{eqn:potentialMethods:140}
\boxed{
\boldsymbol{\mathcal{D}} = – \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{m}}.
}
\end{equation}

Substitution into the Ampère-Maxwell law \ref{eqn:chapter3Notes:40} gives
\begin{equation}\label{eqn:potentialMethods:160}
\spacegrad \cross \lr{ \boldsymbol{\mathcal{H}} + \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}} } = 0.
\end{equation}

This is satisfied with any gradient, say, \( -\spacegrad \phi_m \), providing a potential representation for the magnetic field

\begin{equation}\label{eqn:potentialMethods:180}
\boxed{
\boldsymbol{\mathcal{H}} = -\spacegrad \phi_m – \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}.
}
\end{equation}

The remaining Maxwell equations provide the required constraints on the potentials

\begin{equation}\label{eqn:potentialMethods:220}
-\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{m}} + \spacegrad \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } = -\epsilon
\lr{
-\boldsymbol{\mathcal{M}} – \mu \PD{t}{}
\lr{
-\spacegrad \phi_m – \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}
}
}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:240}
\spacegrad \cdot
\lr{
-\spacegrad \phi_m – \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}
}
= \inv{\mu} q_m,
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:260}
\boxed{
\begin{aligned}
\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{m}} – \inv{v^2} \PDSq{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}} – \spacegrad \lr{ \inv{v^2} \PD{t}{\phi_m} + \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } &= -\epsilon \boldsymbol{\mathcal{M}} \\
\spacegrad^2 \phi_m + \PD{t}{}\lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } &= -\inv{\mu} q_m.
\end{aligned}
}
\end{equation}

The general solution to Maxwell’s equations is therefore
\begin{equation}\label{eqn:potentialMethods:280}
\begin{aligned}
\boldsymbol{\mathcal{E}} &=
-\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
– \inv{\epsilon} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
\boldsymbol{\mathcal{H}} &=
\inv{\mu} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}}
-\spacegrad \phi_m – \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}},
\end{aligned}
\end{equation}

subject to the constraints \ref{eqn:potentialMethods:120} and \ref{eqn:potentialMethods:260}.

Potential operator structure

Knowing that there is a simple underlying structure to the potential representation of the electromagnetic field in the STA formalism inspires the question of whether that structure can be found directly using the scalar and vector potentials determined above.

Specifically, what is the multivector representation \ref{eqn:potentialMethods:1020} of the electromagnetic field in terms of all the individual potential variables, and can an underlying structure for that field representation be found? The composite field is

\begin{equation}\label{eqn:potentialMethods:280b}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
-\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
- \inv{\epsilon} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
&\quad + I \eta
\lr{
\inv{\mu} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}}
-\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}
}.
\end{aligned}
\end{equation}

Can this be factored into a multivector operator acting on multivector potentials? Expanding the cross products provides some direction

\begin{equation}\label{eqn:potentialMethods:1040}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
– \PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
– \eta \PD{t}{I \boldsymbol{\mathcal{A}}^{\mathrm{m}}}
- \spacegrad \lr{ \phi + \eta I \phi_m } \\
&\quad + \frac{\eta}{2 \mu} \lr{ \rspacegrad \boldsymbol{\mathcal{A}}^{\mathrm{e}} – \boldsymbol{\mathcal{A}}^{\mathrm{e}} \lspacegrad }
+ \frac{1}{2 \epsilon} \lr{ \rspacegrad I \boldsymbol{\mathcal{A}}^{\mathrm{m}} – I \boldsymbol{\mathcal{A}}^{\mathrm{m}} \lspacegrad }.
\end{aligned}
\end{equation}
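
The cross products above were eliminated using the 3D duality relation between the cross and wedge products (together with \( I^2 = -1 \)), for example

\begin{equation}
\spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}}
= -I \lr{ \spacegrad \wedge \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
= -\frac{I}{2} \lr{ \rspacegrad \boldsymbol{\mathcal{A}}^{\mathrm{e}} - \boldsymbol{\mathcal{A}}^{\mathrm{e}} \lspacegrad },
\end{equation}

and similarly for \( \boldsymbol{\mathcal{A}}^{\mathrm{m}} \).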

Observe that the
gradient and the time partials can be grouped together

\begin{equation}\label{eqn:potentialMethods:1060}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
– \PD{t}{ } \lr{\boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \boldsymbol{\mathcal{A}}^{\mathrm{m}}}
– \spacegrad \lr{ \phi + \eta I \phi_m }
+ \frac{v}{2} \lr{ \rspacegrad (\boldsymbol{\mathcal{A}}^{\mathrm{e}} + I \eta \boldsymbol{\mathcal{A}}^{\mathrm{m}}) – (\boldsymbol{\mathcal{A}}^{\mathrm{e}} + I \eta \boldsymbol{\mathcal{A}}^{\mathrm{m}}) \lspacegrad } \\
&=
\inv{2} \lr{
\lr{ \rspacegrad – \inv{v} {\stackrel{ \rightarrow }{\partial_t}} } \lr{ v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}} }

\lr{ v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}}} \lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
} \\
&+\quad \inv{2} \lr{
\lr{ \rspacegrad – \inv{v} {\stackrel{ \rightarrow }{\partial_t}} } \lr{ -\phi – \eta I \phi_m }
– \lr{ \phi + \eta I \phi_m } \lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
}
,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:potentialMethods:1080}
\boxed{
\boldsymbol{\mathcal{F}}
=
\inv{2} \Biglr{
\lr{ \rspacegrad – \inv{v} {\stackrel{ \rightarrow }{\partial_t}} }
\lr{
– \phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta I v \boldsymbol{\mathcal{A}}^{\mathrm{m}}
– \eta I \phi_m
}

\lr{
\phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta I v \boldsymbol{\mathcal{A}}^{\mathrm{m}}
+ \eta I \phi_m
}
\lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
}
.
}
\end{equation}

There’s a conjugate structure to the potential on each side of this two sided gradient operation, where we see a sign change for the scalar and pseudoscalar elements only. The reason for this becomes more clear in the STA formalism.

Potentials in the STA formalism.

Maxwell’s equation in its explicit 3D form \ref{eqn:potentialMethods:300} can be
converted to STA form by introducing a four-vector (Dirac) basis \( \setlr{ \gamma_\mu } \), in terms of which the spatial basis vectors are \( \Be_k = \gamma_k \gamma_0 \).
Multiplying from the left with \( \gamma_0 \) yields an STA form of Maxwell’s equation
\ref{eqn:potentialMethods:320},
where
where
\begin{equation}\label{eqn:potentialMethods:340}
\begin{aligned}
J &= \gamma^\mu J_\mu = ( v q_e, \boldsymbol{\mathcal{J}} ) \\
M &= \gamma^\mu M_\mu = ( v q_m, \boldsymbol{\mathcal{M}} ) \\
\grad &= \gamma^\mu \partial_\mu = ( (1/v) \partial_t, \spacegrad ) \\
I &= \gamma_0 \gamma_1 \gamma_2 \gamma_3,
\end{aligned}
\end{equation}

Here the metric choice is \( \gamma_0^2 = 1 = -\gamma_k^2 \). Note that in this representation the electromagnetic field \( \boldsymbol{\mathcal{F}} = \boldsymbol{\mathcal{E}} + \eta I \boldsymbol{\mathcal{H}} \) is a bivector, not a multivector as it is in the explicit (frame dependent) 3D representation \ref{eqn:potentialMethods:300}.
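
Explicitly, consistent with the vector and trivector splits considered below (\ref{eqn:potentialMethods:380}, \ref{eqn:potentialMethods:400}, \ref{eqn:potentialMethods:640}, \ref{eqn:potentialMethods:660}), this STA form of Maxwell’s equation is

\begin{equation}
\grad \boldsymbol{\mathcal{F}} = \eta J - I M.
\end{equation}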

A potential representation can be obtained as before by considering electric and magnetic sources in sequence and using superposition to assemble a complete potential.

No magnetic sources

Without magnetic sources, Maxwell’s equation splits into vector and trivector terms of the form

\begin{equation}\label{eqn:potentialMethods:380}
\grad \cdot \boldsymbol{\mathcal{F}} = \eta J
\end{equation}
\begin{equation}\label{eqn:potentialMethods:400}
\grad \wedge \boldsymbol{\mathcal{F}} = 0.
\end{equation}

A four-vector curl representation of the field will satisfy \ref{eqn:potentialMethods:400} allowing an immediate potential solution

\begin{equation}\label{eqn:potentialMethods:560}
\boxed{
\begin{aligned}
&\boldsymbol{\mathcal{F}} = \grad \wedge {A^{\mathrm{e}}} \\
&\grad^2 {A^{\mathrm{e}}} – \grad \lr{ \grad \cdot {A^{\mathrm{e}}} } = \eta J.
\end{aligned}
}
\end{equation}

This can be put into correspondence with \ref{eqn:potentialMethods:120} by noting that

\begin{equation}\label{eqn:potentialMethods:460}
\begin{aligned}
\grad^2 &= (\gamma^\mu \partial_\mu) \cdot (\gamma^\nu \partial_\nu) = \inv{v^2} \partial_{tt} – \spacegrad^2 \\
\gamma_0 {A^{\mathrm{e}}} &= \gamma_0 \gamma^\mu {A^{\mathrm{e}}}_\mu = {A^{\mathrm{e}}}_0 + \Be_k {A^{\mathrm{e}}}_k = {A^{\mathrm{e}}}_0 + \BA^{\mathrm{e}} \\
\gamma_0 \grad &= \gamma_0 \gamma^\mu \partial_\mu = \inv{v} \partial_t + \spacegrad \\
\grad \cdot {A^{\mathrm{e}}} &= \partial_\mu {A^{\mathrm{e}}}^\mu = \inv{v} \partial_t {A^{\mathrm{e}}}_0 – \spacegrad \cdot \BA^{\mathrm{e}},
\end{aligned}
\end{equation}

so multiplying from the left with \( \gamma_0 \) gives

\begin{equation}\label{eqn:potentialMethods:480}
\lr{ \inv{v^2} \partial_{tt} – \spacegrad^2 } \lr{ {A^{\mathrm{e}}}_0 + \BA^{\mathrm{e}} } – \lr{ \inv{v} \partial_t + \spacegrad }\lr{ \inv{v} \partial_t {A^{\mathrm{e}}}_0 – \spacegrad \cdot \BA^{\mathrm{e}} } = \eta( v q_e – \boldsymbol{\mathcal{J}} ),
\end{equation}

or

\begin{equation}\label{eqn:potentialMethods:520}
\lr{ \inv{v^2} \partial_{tt} – \spacegrad^2 } \BA^{\mathrm{e}} – \spacegrad \lr{ \inv{v} \partial_t {A^{\mathrm{e}}}_0 – \spacegrad \cdot \BA^{\mathrm{e}} } = -\eta \boldsymbol{\mathcal{J}}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:540}
\spacegrad^2 {A^{\mathrm{e}}}_0 – \inv{v} \partial_t \lr{ \spacegrad \cdot \BA^{\mathrm{e}} } = -q_e/\epsilon.
\end{equation}

So \( {A^{\mathrm{e}}}_0 = \phi \) and \( -\ifrac{\BA^{\mathrm{e}}}{v} = \boldsymbol{\mathcal{A}}^{\mathrm{e}} \), or

\begin{equation}\label{eqn:potentialMethods:600}
\boxed{
{A^{\mathrm{e}}} = \gamma_0\lr{ \phi – v \boldsymbol{\mathcal{A}}^{\mathrm{e}} }.
}
\end{equation}
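
As a quick check, substituting \( {A^{\mathrm{e}}}_0 = \phi \) and \( \BA^{\mathrm{e}} = -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} \) into \ref{eqn:potentialMethods:540} recovers the second equation of \ref{eqn:potentialMethods:120}

\begin{equation}
\spacegrad^2 \phi - \inv{v} \partial_t \lr{ \spacegrad \cdot \lr{ -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} } }
=
\spacegrad^2 \phi + \PD{t}{} \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
= -q_e/\epsilon.
\end{equation}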

No electric sources

Without electric sources, Maxwell’s equation now splits into

\begin{equation}\label{eqn:potentialMethods:640}
\grad \cdot \boldsymbol{\mathcal{F}} = 0
\end{equation}
\begin{equation}\label{eqn:potentialMethods:660}
\grad \wedge \boldsymbol{\mathcal{F}} = -I M.
\end{equation}

Here the dual of an STA curl yields a solution

\begin{equation}\label{eqn:potentialMethods:680}
\boxed{
\boldsymbol{\mathcal{F}} = I ( \grad \wedge {A^{\mathrm{m}}} ).
}
\end{equation}

Substituting this gives

\begin{equation}\label{eqn:potentialMethods:720}
\begin{aligned}
0
&=
\grad \cdot (I ( \grad \wedge {A^{\mathrm{m}}} ) ) \\
&=
\gpgradeone{ \grad I ( \grad \wedge {A^{\mathrm{m}}} ) } \\
&=
-I \grad \wedge ( \grad \wedge {A^{\mathrm{m}}} ).
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:740}
\begin{aligned}
-I M
&=
\grad \wedge (I ( \grad \wedge {A^{\mathrm{m}}} ) ) \\
&=
\gpgradethree{ \grad I ( \grad \wedge {A^{\mathrm{m}}} ) } \\
&=
-I \grad \cdot ( \grad \wedge {A^{\mathrm{m}}} ).
\end{aligned}
\end{equation}

The \( \grad \cdot \boldsymbol{\mathcal{F}} \) relation \ref{eqn:potentialMethods:720} is identically zero as desired, leaving

\begin{equation}\label{eqn:potentialMethods:760}
\boxed{
\grad^2 {A^{\mathrm{m}}} – \grad \lr{ \grad \cdot {A^{\mathrm{m}}} }
=
M.
}
\end{equation}

So the general solution with both electric and magnetic sources is

\begin{equation}\label{eqn:potentialMethods:800}
\boxed{
\boldsymbol{\mathcal{F}} = \grad \wedge {A^{\mathrm{e}}} + I (\grad \wedge {A^{\mathrm{m}}}),
}
\end{equation}

subject to the constraints of \ref{eqn:potentialMethods:560} and \ref{eqn:potentialMethods:760}. As before, the four-potential \( {A^{\mathrm{m}}} \) can be put into correspondence with the conventional scalar and vector potentials by left multiplying with \( \gamma_0 \), which gives

\begin{equation}\label{eqn:potentialMethods:820}
\lr{ \inv{v^2} \partial_{tt} – \spacegrad^2 } \lr{ {A^{\mathrm{m}}}_0 + \BA^{\mathrm{m}} } – \lr{ \inv{v} \partial_t + \spacegrad }\lr{ \inv{v} \partial_t {A^{\mathrm{m}}}_0 – \spacegrad \cdot \BA^{\mathrm{m}} } = v q_m – \boldsymbol{\mathcal{M}},
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:860}
\lr{ \inv{v^2} \partial_{tt} – \spacegrad^2 } \BA^{\mathrm{m}} – \spacegrad \lr{ \inv{v} \partial_t {A^{\mathrm{m}}}_0 – \spacegrad \cdot \BA^{\mathrm{m}} } = – \boldsymbol{\mathcal{M}}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:880}
\spacegrad^2 {A^{\mathrm{m}}}_0 – \inv{v} \partial_t \spacegrad \cdot \BA^{\mathrm{m}} = -v q_m.
\end{equation}

Comparing with \ref{eqn:potentialMethods:260} shows that \( {A^{\mathrm{m}}}_0/v = \mu \phi_m \) and \( -\ifrac{\BA^{\mathrm{m}}}{v^2} = \mu \boldsymbol{\mathcal{A}}^{\mathrm{m}} \), or

\begin{equation}\label{eqn:potentialMethods:900}
\boxed{
{A^{\mathrm{m}}} = \gamma_0 \eta \lr{ \phi_m – v \boldsymbol{\mathcal{A}}^{\mathrm{m}} }.
}
\end{equation}
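
The same sort of check applies here: substituting \( {A^{\mathrm{m}}}_0 = v \mu \phi_m \) and \( \BA^{\mathrm{m}} = -v^2 \mu \boldsymbol{\mathcal{A}}^{\mathrm{m}} \) into \ref{eqn:potentialMethods:880} and dividing through by \( v \mu \) recovers the second equation of \ref{eqn:potentialMethods:260}

\begin{equation}
\spacegrad^2 \phi_m + \PD{t}{} \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } = -\inv{\mu} q_m.
\end{equation}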

Potential operator structure

Observe that there is an underlying uniform structure of the differential operator that acts on the potential to produce the electromagnetic field. Expressed as a linear operator of the
gradient and the potentials, that is

\( \boldsymbol{\mathcal{F}} = L(\lrgrad, {A^{\mathrm{e}}}, {A^{\mathrm{m}}}) \)

\begin{equation}\label{eqn:potentialMethods:980}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
L(\grad, {A^{\mathrm{e}}}, {A^{\mathrm{m}}}) \\
&= \grad \wedge {A^{\mathrm{e}}} + I (\grad \wedge {A^{\mathrm{m}}}) \\
&=
\inv{2} \lr{ \rgrad {A^{\mathrm{e}}} – {A^{\mathrm{e}}} \lgrad }
+ \frac{I}{2} \lr{ \rgrad {A^{\mathrm{m}}} – {A^{\mathrm{m}}} \lgrad } \\
&=
\inv{2} \lr{ \rgrad {A^{\mathrm{e}}} – {A^{\mathrm{e}}} \lgrad }
+ \frac{1}{2} \lr{ -\rgrad I {A^{\mathrm{m}}} – I {A^{\mathrm{m}}} \lgrad } \\
&=
\inv{2} \lr{ \rgrad ({A^{\mathrm{e}}} -I {A^{\mathrm{m}}}) – ({A^{\mathrm{e}}} + I {A^{\mathrm{m}}}) \lgrad }
,
\end{aligned}
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:1000}
\boxed{
\boldsymbol{\mathcal{F}}
=
\inv{2} \lr{ \rgrad ({A^{\mathrm{e}}} -I {A^{\mathrm{m}}}) – ({A^{\mathrm{e}}} – I {A^{\mathrm{m}}})^\dagger \lgrad }
.
}
\end{equation}
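
The reversion (dagger) here accounts for the sign difference between the two factors: the four-vector \( {A^{\mathrm{e}}} \) is unchanged by reversion, while the trivector \( I {A^{\mathrm{m}}} \) changes sign

\begin{equation}
\lr{ {A^{\mathrm{e}}} - I {A^{\mathrm{m}}} }^\dagger
=
{A^{\mathrm{e}}} - \lr{ I {A^{\mathrm{m}}} }^\dagger
=
{A^{\mathrm{e}}} + I {A^{\mathrm{m}}}.
\end{equation}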

Observe that \ref{eqn:potentialMethods:1000} can be
put into correspondence with \ref{eqn:potentialMethods:1080} using a factoring of unity \( 1 = \gamma_0 \gamma_0 \)

\begin{equation}\label{eqn:potentialMethods:1100}
\boldsymbol{\mathcal{F}}
=
\inv{2} \lr{ (-\rgrad \gamma_0) (-\gamma_0 ({A^{\mathrm{e}}} -I {A^{\mathrm{m}}})) – (({A^{\mathrm{e}}} + I {A^{\mathrm{m}}}) \gamma_0)(\gamma_0 \lgrad) },
\end{equation}

where

\begin{equation}\label{eqn:potentialMethods:1140}
\begin{aligned}
-\grad \gamma_0
&=
-(\gamma^0 \partial_0 + \gamma^k \partial_k) \gamma_0 \\
&=
-\partial_0 – \gamma^k \gamma_0 \partial_k \\
&=
\spacegrad
-\inv{v} \partial_t
,
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:1160}
\begin{aligned}
\gamma_0 \grad
&=
\gamma_0 (\gamma^0 \partial_0 + \gamma^k \partial_k) \\
&=
\partial_0 – \gamma^k \gamma_0 \partial_k \\
&=
\spacegrad
+ \inv{v} \partial_t
,
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:potentialMethods:1200}
\begin{aligned}
-\gamma_0 ( {A^{\mathrm{e}}} – I {A^{\mathrm{m}}} )
&=
-\gamma_0 \gamma_0 \lr{ \phi -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \lr{ \phi_m – v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } } \\
&=
-\lr{ \phi -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \phi_m – \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}} } \\
&=
– \phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}}
– \eta I \phi_m
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:1220}
\begin{aligned}
( {A^{\mathrm{e}}} + I {A^{\mathrm{m}}} )\gamma_0
&=
\lr{ \gamma_0 \lr{ \phi -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} } + I \gamma_0 \eta \lr{ \phi_m – v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } } \gamma_0 \\
&=
\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + I \eta \phi_m + I \eta v \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
&=
\phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}}
+ \eta I \phi_m
,
\end{aligned}
\end{equation}

This recovers \ref{eqn:potentialMethods:1080} as desired.

Potentials in the 3D Euclidean formalism

In the conventional scalar plus vector differential representation of Maxwell’s equations \ref{eqn:chapter3Notes:20}…, given electric (magnetic) sources, the structure of the electric (magnetic) potential follows from first setting the magnetic (electric) field equal to the curl of a vector potential. The procedure for the STA GA form of Maxwell’s equation was similar, where it was immediately evident that the field could be set to the four-curl of a four-vector potential (or the dual of such a curl for magnetic sources).

In the 3D GA representation, there is no immediate rationale for introducing a curl or the equivalent of a four-curl representation of the field. This is reconciled by recognizing that representing the field (or a component of it) by a curl is not actually fundamental. Instead, observe that the two sided gradient action on a potential that generates the electromagnetic field in the STA representation \ref{eqn:potentialMethods:1000} serves to select the grade two component of the product of the gradient and the multivector potential \( {A^{\mathrm{e}}} - I {A^{\mathrm{m}}} \), and that this can in fact be written as
a single sided gradient operation on a potential, provided the multivector product is filtered with a four-bivector grade selection operation

\begin{equation}\label{eqn:potentialMethods:1240}
\boxed{
\boldsymbol{\mathcal{F}} = \gpgradetwo{ \grad \lr{ {A^{\mathrm{e}}} – I {A^{\mathrm{m}}} } }.
}
\end{equation}

Similarly, it can be observed that the
specific function of the conjugate structure in the two sided potential representation of
\ref{eqn:potentialMethods:1080}
is to discard all the scalar and pseudoscalar grades in the multivector product. This means that a single sided potential can also be used, provided it is wrapped in a grade selection operation

\begin{equation}\label{eqn:potentialMethods:1260}
\boxed{
\boldsymbol{\mathcal{F}} =
\gpgrade{ \lr{ \spacegrad – \inv{v} \PD{t}{} }
\lr{
– \phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta I v \boldsymbol{\mathcal{A}}^{\mathrm{m}}
– \eta I \phi_m
} }{1,2}.
}
\end{equation}

It is this grade selection operation that is really the fundamental defining action in the potential representations of both the STA and the conventional 3D forms of Maxwell’s equations. So, given Maxwell’s equation in the 3D GA representation, defining a potential representation for the field is really just a demand that the field have the structure

\begin{equation}\label{eqn:potentialMethods:1320}
\boldsymbol{\mathcal{F}} = \gpgrade{ (\alpha \spacegrad + \beta \partial_t)\lr{ A_0 + A_1 + I \lr{ A_0' + A_1' } } }{1,2}.
\end{equation}

This is a mandate that the electromagnetic field is given by the grades 1 and 2 components of the product of a vector valued (space plus time) derivative operator with a multivector field \( A = \sum_{k=0}^3 A_k = A_0 + A_1 + I( A_0' + A_1' ) \) that can potentially have any grade components. There are more degrees of freedom in this specification than required, since the multivector can absorb one of the \( \alpha \) or \( \beta \) coefficients, so without loss of generality, one of these (say \( \alpha\)) can be set to 1.
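
For example, the potential representation \ref{eqn:potentialMethods:1260} has this structure with \( \alpha = 1 \), \( \beta = -1/v \), and

\begin{equation}
A_0 = -\phi, \qquad
A_1 = v \boldsymbol{\mathcal{A}}^{\mathrm{e}}, \qquad
A_0' = -\eta \phi_m, \qquad
A_1' = \eta v \boldsymbol{\mathcal{A}}^{\mathrm{m}}.
\end{equation}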

Expanding \ref{eqn:potentialMethods:1320} gives

\begin{equation}\label{eqn:potentialMethods:1340}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
\spacegrad A_0
+ \beta \partial_t A_1
- \spacegrad \cross A_1'
+ I (\spacegrad \cross A_1
+ \beta \partial_t A_1'
+ \spacegrad A_0') \\
&=
\boldsymbol{\mathcal{E}} + I \eta \boldsymbol{\mathcal{H}}.
\end{aligned}
\end{equation}
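
Reading off the vector and bivector grades respectively gives the identifications

\begin{equation}
\boldsymbol{\mathcal{E}} = \spacegrad A_0 + \beta \partial_t A_1 - \spacegrad \cross A_1',
\qquad
\eta \boldsymbol{\mathcal{H}} = \spacegrad A_0' + \beta \partial_t A_1' + \spacegrad \cross A_1.
\end{equation}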

This naturally has all the right mixes of curls, gradients and time derivatives, all following as direct consequences of applying a grade selection operation to the action of a “spacetime gradient” on a general multivector potential.

The conclusion is that the potential representation of the field is

\begin{equation}\label{eqn:potentialMethods:1360}
\boldsymbol{\mathcal{F}} =
\gpgrade{ \lr{ \spacegrad – \inv{v} \PD{t}{} } A }{1,2},
\end{equation}

where \( A \) is a multivector potentially containing all grades: grades 0,1 are required for electric sources, and grades 2,3 for magnetic sources. When it is desirable to refer back to the conventional scalar and vector potentials, this multivector potential can be written as \( A = -\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \lr{ -\phi_m + v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } \).

Gauge transformations

Recall that for electric sources the magnetic field is of the form

\begin{equation}\label{eqn:potentialMethods:1380}
\boldsymbol{\mathcal{B}} = \spacegrad \cross \boldsymbol{\mathcal{A}},
\end{equation}

so adding the gradient of any scalar field to the potential \( \boldsymbol{\mathcal{A}}' = \boldsymbol{\mathcal{A}} + \spacegrad \psi \)
does not change the magnetic field

\begin{equation}\label{eqn:potentialMethods:1400}
\begin{aligned}
\boldsymbol{\mathcal{B}}'
&= \spacegrad \cross \lr{ \boldsymbol{\mathcal{A}} + \spacegrad \psi } \\
&= \spacegrad \cross \boldsymbol{\mathcal{A}} \\
&= \boldsymbol{\mathcal{B}}.
\end{aligned}
\end{equation}

The electric field computed with this changed potential is

\begin{equation}\label{eqn:potentialMethods:1420}
\begin{aligned}
\boldsymbol{\mathcal{E}}'
&= -\spacegrad \phi' - \partial_t \lr{ \boldsymbol{\mathcal{A}} + \spacegrad \psi} \\
&= -\spacegrad \lr{ \phi' + \partial_t \psi } - \partial_t \boldsymbol{\mathcal{A}},
\end{aligned}
\end{equation}

so if the scalar potential is transformed simultaneously as
\begin{equation}\label{eqn:potentialMethods:1440}
\phi' = \phi - \partial_t \psi,
\end{equation}

the electric field will also be unaltered by this transformation.

In the STA representation, the potential can similarly be altered by adding any (four-)gradient of a scalar without changing the field. For example, with only electric sources

\begin{equation}\label{eqn:potentialMethods:1460}
\boldsymbol{\mathcal{F}} = \grad \wedge (A + \grad \psi) = \grad \wedge A
\end{equation}

and for electric or magnetic sources

\begin{equation}\label{eqn:potentialMethods:1480}
\boldsymbol{\mathcal{F}} = \gpgradetwo{ \grad (A + \grad \psi) } = \gpgradetwo{ \grad A }.
\end{equation}
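
The last equality follows because, for a scalar gauge function \( \psi \), the added term contributes no grade two components

\begin{equation}
\gpgradetwo{ \grad \grad \psi } = \gpgradetwo{ \grad^2 \psi } = 0.
\end{equation}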

In the 3D GA representation, where the field is given by \ref{eqn:potentialMethods:1360}, there is no explicit curl operation to which such a gradient can be added. However, if the scalar and vector potentials transform as

\begin{equation}\label{eqn:potentialMethods:1500}
\begin{aligned}
\boldsymbol{\mathcal{A}} &\rightarrow \boldsymbol{\mathcal{A}} + \spacegrad \psi \\
\phi &\rightarrow \phi – \partial_t \psi,
\end{aligned}
\end{equation}

then the multivector potential transforms as
\begin{equation}\label{eqn:potentialMethods:1520}
-\phi + v \boldsymbol{\mathcal{A}}
\rightarrow -\phi + v \boldsymbol{\mathcal{A}} + \partial_t \psi + v \spacegrad \psi,
\end{equation}

so, absorbing the constant factor of \( v \) into a redefinition of \( \psi \), the electromagnetic field is unchanged when the multivector potential is transformed as

\begin{equation}\label{eqn:potentialMethods:1540}
A \rightarrow A + \lr{ \spacegrad + \inv{v} \partial_t } \psi,
\end{equation}

where \( \psi \) is any field that has scalar or pseudoscalar grades. Viewed in terms of grade selection, this makes perfect sense, since the transformed field is

\begin{equation}\label{eqn:potentialMethods:1560}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&\rightarrow
\gpgrade{ \lr{ \spacegrad – \inv{v} \PD{t}{} } \lr{ A + \lr{ \spacegrad + \inv{v} \partial_t } \psi } }{1,2} \\
&=
\gpgrade{ \lr{ \spacegrad – \inv{v} \PD{t}{} } A + \lr{ \spacegrad^2 – \inv{v^2} \partial_{tt} } \psi }{1,2} \\
&=
\gpgrade{ \lr{ \spacegrad – \inv{v} \PD{t}{} } A }{1,2}.
\end{aligned}
\end{equation}

The \( \psi \) contribution is killed by the grade selection operation because it has only scalar and pseudoscalar grades.

Lorenz gauge

Maxwell’s equations are completely decoupled if the potential can be found such that

\begin{equation}\label{eqn:potentialMethods:1580}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
\gpgrade{ \lr{ \spacegrad – \inv{v} \PD{t}{} } A }{1,2} \\
&=
\lr{ \spacegrad – \inv{v} \PD{t}{} } A.
\end{aligned}
\end{equation}

When this is the case, Maxwell’s equations are reduced to four non-homogeneous potential wave equations, all contained in the single multivector equation

\begin{equation}\label{eqn:potentialMethods:1620}
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } A = \eta \lr{ v q_e - \boldsymbol{\mathcal{J}} } + I \lr{ v q_m - \boldsymbol{\mathcal{M}} },
\end{equation}

that is

\begin{equation}\label{eqn:potentialMethods:1600}
\begin{aligned}
\lr{ \spacegrad^2 – \inv{v^2} \PDSq{t}{} } \phi &= – \inv{\epsilon} q_e \\
\lr{ \spacegrad^2 – \inv{v^2} \PDSq{t}{} } \boldsymbol{\mathcal{A}}^{\mathrm{e}} &= – \mu \boldsymbol{\mathcal{J}} \\
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \phi_m &= - \inv{\mu} q_m \\
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \boldsymbol{\mathcal{A}}^{\mathrm{m}} &= - \epsilon \boldsymbol{\mathcal{M}}.
\end{aligned}
\end{equation}
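
Here the source constants follow grade by grade from \ref{eqn:potentialMethods:1620}, using (with the usual definitions \( v = 1/\sqrt{\mu\epsilon} \), \( \eta = \sqrt{\mu/\epsilon} \))

\begin{equation}
\eta v = \inv{\epsilon}, \qquad \frac{\eta}{v} = \mu, \qquad \inv{\eta v} = \epsilon, \qquad \frac{v}{\eta} = \inv{\mu}.
\end{equation}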

There should be no a priori assumption that such a field representation has no scalar or pseudoscalar components. The explicit expansion in grades is

\begin{equation}\label{eqn:potentialMethods:1640}
\begin{aligned}
\lr{ \spacegrad – \inv{v} \PD{t}{} } A
&=
\lr{ \spacegrad – \inv{v} \PD{t}{} } \lr{ -\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \lr{ -\phi_m + v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } } \\
&=
\inv{v} \partial_t \phi
+ v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
&-\spacegrad \phi
+ I \eta v \spacegrad \wedge \boldsymbol{\mathcal{A}}^{\mathrm{m}}
– \partial_t \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
&+ v \spacegrad \wedge \boldsymbol{\mathcal{A}}^{\mathrm{e}}
– \eta I \spacegrad \phi_m
– I \eta \partial_t \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
&+ \eta I \inv{v} \partial_t \phi_m
+ I \eta v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}},
\end{aligned}
\end{equation}

so if this potential representation has only vector and bivector grades, it must be true that

\begin{equation}\label{eqn:potentialMethods:1660}
\begin{aligned}
\inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} &= 0 \\
\inv{v} \partial_t \phi_m + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} &= 0.
\end{aligned}
\end{equation}

The first is the well known Lorenz gauge condition, whereas the second is the dual of that condition for magnetic sources.

Should one of these conditions, say the Lorenz condition for the electric source potentials, be non-zero, it is possible to make a potential transformation for which this condition is zero

\begin{equation}\label{eqn:potentialMethods:1680}
\begin{aligned}
0
&\ne
\inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
&=
\inv{v} \partial_t (\phi' - \partial_t \psi) + v \spacegrad \cdot (\boldsymbol{\mathcal{A}}' + \spacegrad \psi) \\
&=
\inv{v} \partial_t \phi' + v \spacegrad \cdot \boldsymbol{\mathcal{A}}'
+ v \lr{ \spacegrad^2 – \inv{v^2} \partial_{tt} } \psi,
\end{aligned}
\end{equation}

so if \( \inv{v} \partial_t \phi' + v \spacegrad \cdot \boldsymbol{\mathcal{A}}' \) is to be zero, \( \psi \) must be found such that
\begin{equation}\label{eqn:potentialMethods:1700}
\inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}}
= v \lr{ \spacegrad^2 – \inv{v^2} \partial_{tt} } \psi.
\end{equation}
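
That is, \( \psi \) is itself determined (up to solutions of the homogeneous wave equation) by a non-homogeneous wave equation, with the original Lorenz condition violation acting as its source

\begin{equation}
\lr{ \spacegrad^2 - \inv{v^2} \partial_{tt} } \psi
= \inv{v} \lr{ \inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} }.
\end{equation}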

References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley \& Sons, 3rd edition, 2005.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] David M Pozar. Microwave engineering. John Wiley \& Sons, 2009.