## A couple more reciprocal frame examples.

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

This post logically follows both of the following:

The PDF linked above contains all the content from this post, plus that of (1.) above [to be edited later into a more logical sequence.]

## More examples.

Here are a few additional examples of reciprocal frame calculations.

## Problem: Unidirectional arbitrary functional dependence.

Let
\label{eqn:reciprocal:2540}
x = a f(u),

where $$a$$ is a constant vector and $$f(u)$$ is some arbitrary differentiable function with a non-zero derivative in the region of interest.

Here we have just a single tangent space direction (a line in spacetime) with tangent vector
\label{eqn:reciprocal:2400}
\Bx_u = a \PD{u}{f} = a f_u,

so we see that the tangent space vectors are just rescaled values of the direction vector $$a$$.
This is a simple enough parameterization that we can compute the reciprocal frame vector explicitly using the gradient. We expect that $$\Bx^u = 1/\Bx_u$$, and find
\label{eqn:reciprocal:2420}
\inv{a} \cdot x = f(u),

but for constant $$a$$, we know that $$\grad \lr{ a \cdot x } = a$$, so taking gradients of both sides we find
\label{eqn:reciprocal:2440}
\inv{a} = \grad f = f_u \grad u,

so the reciprocal vector is
\label{eqn:reciprocal:2460}
\Bx^u = \grad u = \inv{a f_u},

as expected.
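As a quick numeric sanity check of this reciprocity (a Euclidean $$\mathbb{R}^3$$ stand-in, with an arbitrarily chosen $$a$$ and $$f$$ of my own, not anything from the spacetime treatment above), we can verify the defining property $$\Bx^u \cdot \Bx_u = 1$$:

```python
import numpy as np

# x(u) = a f(u): tangent Bx_u = a f', reciprocal Bx^u = 1/(a f') = a/(a^2 f').
a = np.array([1.0, 2.0, -0.5])      # constant direction vector (arbitrary)
fp = lambda u: np.exp(u)            # f' for f(u) = e^u, nonzero everywhere

u = 0.7
x_u = a * fp(u)                     # tangent vector Bx_u
x_up = a / ((a @ a) * fp(u))        # reciprocal vector Bx^u = 1/(a f_u)

print(x_up @ x_u)                   # reciprocity: Bx^u . Bx_u = 1
```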

## Problem: Linear two variable parameterization.

Let $$x = a u + b v$$, where $$x \wedge a \wedge b = 0$$ represents spacetime plane (also the tangent space.) Find the curvilinear coordinates and their reciprocals.

The frame vectors are easy to compute, as they are just
\label{eqn:reciprocal:1960}
\begin{aligned}
\Bx_u &= \PD{u}{x} = a \\
\Bx_v &= \PD{v}{x} = b.
\end{aligned}

This is an example of a parametric equation that we can easily invert, as we have
\label{eqn:reciprocal:1980}
\begin{aligned}
x \wedge a &= -v \lr{ a \wedge b } \\
x \wedge b &= u \lr{ a \wedge b },
\end{aligned}

so
\label{eqn:reciprocal:2000}
\begin{aligned}
u
&= \inv{ a \wedge b } \cdot \lr{ x \wedge b } \\
&= \inv{ \lr{a \wedge b}^2 } \lr{ a \wedge b } \cdot \lr{ x \wedge b } \\
&=
\frac{
\lr{b \cdot x} \lr{ a \cdot b }
-
\lr{a \cdot x} \lr{ b \cdot b }
}{ \lr{a \wedge b}^2 }
\end{aligned}

\label{eqn:reciprocal:2020}
\begin{aligned}
v &= -\inv{ a \wedge b } \cdot \lr{ x \wedge a } \\
&= -\inv{ \lr{a \wedge b}^2 } \lr{ a \wedge b } \cdot \lr{ x \wedge a } \\
&=
-\frac{
\lr{b \cdot x} \lr{ a \cdot a }
-
\lr{a \cdot x} \lr{ a \cdot b }
}{ \lr{a \wedge b}^2 }
\end{aligned}

Recall that $$\grad \lr{ a \cdot x} = a$$, if $$a$$ is a constant, so our gradients are just
\label{eqn:reciprocal:2040}
\begin{aligned}
\grad u
&=
\frac{
b \lr{ a \cdot b }
-
a \lr{ b \cdot b }
}{ \lr{a \wedge b}^2 } \\
&=
b \cdot \inv{ a \wedge b },
\end{aligned}

and
\label{eqn:reciprocal:2060}
\begin{aligned}
\grad v
&=
-\frac{
b \lr{ a \cdot a }
-
a \lr{ a \cdot b }
}{ \lr{a \wedge b}^2 } \\
&=
-a \cdot \inv{ a \wedge b }.
\end{aligned}

Expressed in terms of the frame vectors, this is just
\label{eqn:reciprocal:2080}
\begin{aligned}
\Bx^u &= \Bx_v \cdot \inv{ \Bx_u \wedge \Bx_v } \\
\Bx^v &= -\Bx_u \cdot \inv{ \Bx_u \wedge \Bx_v },
\end{aligned}

so we were able to show, for this special two parameter linear case, that the explicit evaluation of the gradients has exactly the structure that we intuited the reciprocals must have, provided they are constrained to the spacetime plane $$a \wedge b$$. It is interesting to observe how directly this structure falls out of the linear system solution.
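The biorthogonality of these gradients can be checked numerically. Here is a sketch using a Euclidean $$\mathbb{R}^3$$ stand-in (where the scalar $$(a \wedge b)^2 = (a \cdot b)^2 - a^2 b^2$$), with arbitrary vectors of my own choosing:

```python
import numpy as np

# x = a u + b v: frame vectors are a, b; the computed gradients should be
# the reciprocal frame, i.e. grad u . a = 1, grad u . b = 0, and so on.
a = np.array([1.0, 0.5, 0.0])
b = np.array([0.2, 1.0, 0.3])

ab = a @ b
wedge_sq = ab**2 - (a @ a) * (b @ b)        # (a ^ b)^2 as a scalar

xu = (b * ab - a * (b @ b)) / wedge_sq      # Bx^u = grad u
xv = -(b * (a @ a) - a * ab) / wedge_sq     # Bx^v = grad v

# biorthogonality: expect [1, 0, 0, 1] up to rounding
print(np.round([xu @ a, xu @ b, xv @ a, xv @ b], 12))
```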

## Problem: Quadratic two variable parameterization.

Now consider a variation of the previous problem, with $$x = a u^2 + b v^2$$. Find the curvilinear coordinates and their reciprocals.

\label{eqn:reciprocal:2100}
\begin{aligned}
\Bx_u &= \PD{u}{x} = 2 u a \\
\Bx_v &= \PD{v}{x} = 2 v b.
\end{aligned}

Our tangent space is still the $$a \wedge b$$ plane (as is the surface itself), but the spacing of the cells starts getting wider in proportion to $$u, v$$.
Utilizing the work from the previous problem, we have
\label{eqn:reciprocal:2120}
\begin{aligned}
\Bx^u = \grad u &= \inv{2u} b \cdot \inv{ a \wedge b } \\
\Bx^v = \grad v &= -\inv{2v} a \cdot \inv{ a \wedge b }.
\end{aligned}

A bit of rearrangement shows that this is equivalent to the reciprocal frame identities. This is a second demonstration that the gradient and the algebraic formulations for the reciprocals match, at least for these special cases of non-coupled parameterizations.
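A numeric check of this scaling (again a Euclidean $$\mathbb{R}^3$$ stand-in, and assuming, as the chain rule suggests, that the linear-case gradients just pick up $$1/2u, 1/2v$$ factors — that assumption is what the check below exercises):

```python
import numpy as np

# x = a u^2 + b v^2: tangent vectors Bx_u = 2u a, Bx_v = 2v b.  The
# linear-case gradients scaled by 1/(2u), 1/(2v) should be reciprocal.
a = np.array([1.0, 0.5, 0.0])
b = np.array([0.2, 1.0, 0.3])
u, v = 1.3, -0.4

ab = a @ b
wedge_sq = ab**2 - (a @ a) * (b @ b)            # (a ^ b)^2

xu_lin = (b * ab - a * (b @ b)) / wedge_sq      # linear-case grad u
xv_lin = -(b * (a @ a) - a * ab) / wedge_sq     # linear-case grad v

Bxu, Bxv = 2 * u * a, 2 * v * b                 # tangent vectors
xu, xv = xu_lin / (2 * u), xv_lin / (2 * v)     # scaled reciprocals

# expect [1, 0, 0, 1] up to rounding
print(np.round([xu @ Bxu, xu @ Bxv, xv @ Bxu, xv @ Bxv], 12))
```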

## Problem: Reciprocal frame for generalized cylindrical parameterization.

Let the vector parameterization be $$x(\rho,\theta) = \rho e^{-i\theta/2} x(\rho_0, \theta_0) e^{i \theta/2}$$, where $$i^2 = \pm 1$$ is a unit bivector ($$+1$$ for a boost, and $$-1$$ for a rotation), and where $$\theta, \rho$$ are scalars. Find the tangent space vectors and their reciprocals.

fig. 1. “Cylindrical” boost parameterization.

Note that this is a cylindrical parameterization for the rotation case, and that it traces out hyperbolic regions for the boost case. The boost case is illustrated in fig. 1, where the hyperbolas within the light cone are boosts of $$\gamma_0$$ for various values of $$\rho$$, and the spacelike hyperbolas are boosts of $$\gamma_1$$, again for various values of $$\rho$$.

The tangent space vectors are
\label{eqn:reciprocal:2480}
\Bx_\rho = \frac{x}{\rho},

and

\label{eqn:reciprocal:2500}
\begin{aligned}
\Bx_\theta
&= -\frac{i}{2} x + x \frac{i}{2} \\
&= x \cdot i.
\end{aligned}

Recall that $$x \cdot i$$ lies perpendicular to $$x$$ (in the plane $$i$$), as illustrated in fig. 2. This means that $$\Bx_\rho$$ and $$\Bx_\theta$$ are orthogonal, so we can find the reciprocal vectors by just inverting them
\label{eqn:reciprocal:2520}
\begin{aligned}
\Bx^\rho &= \frac{\rho}{x} \\
\Bx^\theta &= \frac{1}{x \cdot i}.
\end{aligned}

fig. 2. Projection and rejection geometry.
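For the rotation case, both the orthogonality claim and the inverted reciprocals can be spot checked numerically in the Euclidean plane, where a 90 degree rotation plays the role of $$x \cdot i$$, by comparing against finite-difference gradients of the inverse map (the sample point and the planar stand-in are my own choices, not from the post):

```python
import numpy as np

# x(rho, theta) = rho R(theta) x0, with |x0| = 1.  Then Bx_rho = x/rho and
# Bx_theta is x rotated 90 degrees; since these are orthogonal the
# reciprocals are just the inverted vectors.  Compare with finite-difference
# gradients of rho = |x| and theta = atan2.
def R(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

x0 = np.array([1.0, 0.0])
rho, theta = 2.0, 0.6
x = rho * (R(theta) @ x0)

x_rho = x / rho                          # Bx_rho (unit length here)
x_theta = np.array([-x[1], x[0]])        # x . i: 90 degree rotation of x

xr = x_rho / (x_rho @ x_rho)             # Bx^rho = 1/Bx_rho
xt = x_theta / (x_theta @ x_theta)       # Bx^theta = 1/Bx_theta

eps = 1e-6
e = np.eye(2)
grad_rho = np.array([(np.linalg.norm(x + eps*e[k]) - np.linalg.norm(x - eps*e[k]))
                     / (2*eps) for k in range(2)])
th = lambda p: np.arctan2(p[1], p[0])
grad_theta = np.array([(th(x + eps*e[k]) - th(x - eps*e[k])) / (2*eps)
                       for k in range(2)])

print(np.allclose(grad_rho, xr, atol=1e-5),
      np.allclose(grad_theta, xt, atol=1e-5))
```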

## Parameterization of a general linear transformation.

Given $$N$$ parameters $$u^0, u^1, \cdots u^{N-1}$$, a general linear transformation from the parameter space to the vector space has the form
\label{eqn:reciprocal:2160}
x =
{a^\alpha}_\beta \gamma_\alpha u^\beta,

where $$\beta \in [0, \cdots, N-1]$$ and $$\alpha \in [0,3]$$.
For such a general transformation, observe that the curvilinear basis vectors are
\label{eqn:reciprocal:2180}
\begin{aligned}
\Bx_\mu
&= \PD{u^\mu}{x} \\
&= \PD{u^\mu}{}
{a^\alpha}_\beta \gamma_\alpha u^\beta \\
&=
{a^\alpha}_\mu \gamma_\alpha.
\end{aligned}

We find an interpretation of $${a^\alpha}_\mu$$ by dotting $$\Bx_\mu$$ with the reciprocal frame vectors of the standard basis
\label{eqn:reciprocal:2200}
\begin{aligned}
\Bx_\mu \cdot \gamma^\nu
&=
{a^\alpha}_\mu \lr{ \gamma_\alpha \cdot \gamma^\nu } \\
&=
{a^\nu}_\mu,
\end{aligned}

so
\label{eqn:reciprocal:2220}
x = \Bx_\mu u^\mu.

We are able to reinterpret \ref{eqn:reciprocal:2160} as a contraction of the tangent space vectors with the parameters, scaling and summing these direction vectors to characterize all the points in the tangent plane.
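This contraction structure can be illustrated numerically: with a matrix $$A$$ whose columns are the tangent vectors, the reciprocal frame vectors (with respect to a Euclidean dot product, an assumption of this sketch, along with the sample matrix itself) are the rows of the Moore-Penrose pseudoinverse:

```python
import numpy as np

# x = A u: the columns of A are the tangent vectors Bx_mu.  The rows of
# pinv(A) satisfy pinv(A) @ A = I, i.e. Bx^mu . Bx_nu = delta^mu_nu, and
# lie in the column span of A, so they are the reciprocal frame vectors.
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0],
              [1.0, -1.0]])           # a^alpha_beta as a 4 x 2 matrix

recip = np.linalg.pinv(A)             # rows are the Bx^mu

u = np.array([0.3, -1.2])
x = A @ u

print(np.allclose(recip @ A, np.eye(2)))   # biorthogonality
print(np.allclose(recip @ x, u))           # coordinates recovered: u^mu = Bx^mu . x
```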

## Theorem 1.1: Projecting onto the tangent space.

Let $$T$$ represent the tangent space. The projection of a vector onto the tangent space has the form
\label{eqn:reciprocal:2560}
\textrm{Proj}_{\textrm{T}} y = \lr{ y \cdot \Bx^\mu } \Bx_\mu = \lr{ y \cdot \Bx_\mu } \Bx^\mu.

### Start proof:

Let’s designate $$a$$ as the portion of the vector $$y$$ that lies outside of the tangent space
\label{eqn:reciprocal:2260}
y = y^\mu \Bx_\mu + a.

If we knew the coordinates $$y^\mu$$, we would have a recipe for the projection.
Algebraically, requiring that $$a$$ lie outside of the tangent space is equivalent to stating $$a \cdot \Bx_\mu = a \cdot \Bx^\mu = 0$$. We use that fact, and then take dot products
\label{eqn:reciprocal:2280}
\begin{aligned}
y \cdot \Bx^\nu
&= \lr{ y^\mu \Bx_\mu + a } \cdot \Bx^\nu \\
&= y^\nu,
\end{aligned}

so
\label{eqn:reciprocal:2300}
y = \lr{ y \cdot \Bx^\mu } \Bx_\mu + a.

Similarly, the tangent space projection can be expressed as a linear combination of reciprocal basis elements
\label{eqn:reciprocal:2320}
y = y_\mu \Bx^\mu + a.

Dotting with $$\Bx_\mu$$, we have
\label{eqn:reciprocal:2340}
\begin{aligned}
y \cdot \Bx_\mu
&= \lr{ y_\alpha \Bx^\alpha + a } \cdot \Bx_\mu \\
&= y_\mu,
\end{aligned}

so
\label{eqn:reciprocal:2360}
y = \lr{ y \cdot \Bx_\mu } \Bx^\mu + a.

Subtracting $$a$$, the component outside of the tangent space, from either decomposition gives the two stated ways of computing the projection.

Observe that, for the special case that all of $$\setlr{ \Bx_\mu }$$ are orthogonal, the equivalence of these two projection methods follows directly, since
\label{eqn:reciprocal:2380}
\begin{aligned}
\lr{ y \cdot \Bx^\mu } \Bx_\mu
&=
\lr{ y \cdot \inv{\Bx_\mu} } \inv{\Bx^\mu} \\
&=
\lr{ y \cdot \frac{\Bx_\mu}{\lr{\Bx_\mu}^2 } } \frac{\Bx^\mu}{\lr{\Bx^\mu}^2} \\
&=
\lr{ y \cdot \Bx_\mu } \Bx^\mu.
\end{aligned}
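Both forms of the projection can also be checked numerically. In this sketch (Euclidean $$\mathbb{R}^3$$, with a deliberately non-orthogonal frame of my own choosing) the reciprocal frame is computed from the inverse of the Gram matrix, and the two projection formulas are compared against a standard least-squares projection:

```python
import numpy as np

# A non-orthogonal frame {Bx_1, Bx_2} spanning a plane in R^3.
X = np.array([[1.0, 0.2],
              [0.0, 1.0],
              [1.0, 1.0]])            # columns are the Bx_mu

G = X.T @ X                           # Gram matrix Bx_mu . Bx_nu
Xr = X @ np.linalg.inv(G)             # columns are the reciprocals Bx^mu

y = np.array([0.5, -1.0, 2.0])

proj1 = X @ (Xr.T @ y)                # (y . Bx^mu) Bx_mu
proj2 = Xr @ (X.T @ y)                # (y . Bx_mu) Bx^mu
proj_ls = X @ np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares projection

print(np.allclose(proj1, proj2), np.allclose(proj1, proj_ls))
# the rejection y - proj lies outside the tangent space:
print(np.round(X.T @ (y - proj1), 12))
```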

## Lorentz transformations in Space Time Algebra (STA)


## Motivation.

One of the remarkable features of geometric algebra is the complex exponential sandwich that can be used to encode rotations in any dimension, or rotation-like operations such as Lorentz transformations in Minkowski spaces. In this post, we show some examples that unpack the geometric algebra expressions for such Lorentz transformations. In particular, we will look at the exponential sandwich operations for spatial rotations and Lorentz boosts in the Dirac algebra, known as Space Time Algebra (STA) in geometric algebra circles, and demonstrate that these sandwiches do have the desired effects.

## Theorem 1.1: Lorentz transformation.

The transformation
\label{eqn:lorentzTransform:580}
x \rightarrow e^{B} x e^{-B} = x',

where $$B = a \wedge b$$ is an STA 2-blade for any two linearly independent four-vectors $$a, b$$, is norm preserving, that is
\label{eqn:lorentzTransform:600}
x^2 = {x'}^2.

### Start proof:

The proof is disturbingly trivial in this geometric algebra form
\label{eqn:lorentzTransform:40}
\begin{aligned}
{x'}^2
&=
e^{B} x e^{-B} e^{B} x e^{-B} \\
&=
e^{B} x x e^{-B} \\
&=
x^2 e^{B} e^{-B} \\
&=
x^2.
\end{aligned}

### End proof.
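The proof can also be spot checked numerically using the 4x4 Dirac-matrix representation of STA (the matrix representation and the sample vectors are assumptions of this sketch, not anything used above):

```python
import numpy as np
from scipy.linalg import expm

# Dirac matrices: gamma_0^2 = +1, gamma_k^2 = -1.
I2 = np.eye(2)
Z = np.zeros((2, 2), dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.block([[I2, Z], [Z, -I2]])] + \
    [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]

def vec(c):
    """Four-vector c^mu gamma_mu as a matrix."""
    return sum(cm * gm for cm, gm in zip(c, g))

x = vec([1.0, 0.3, -0.7, 0.2])
a = vec([1.0, 0.1, 0.0, 0.5])
b = vec([0.2, 1.0, -0.3, 0.0])
B = (a @ b - b @ a) / 2                 # the 2-blade B = a ^ b

xp = expm(B) @ x @ expm(-B)             # the sandwich x' = e^B x e^{-B}
print(np.allclose(xp @ xp, x @ x))      # x'^2 = x^2: norm preserved
```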

In particular, observe that we did not need to construct the usual infinitesimal representations of rotation and boost transformation matrices or tensors in order to demonstrate that we have spacetime invariance for the transformations. The rough idea of such a transformation is that the exponential commutes with components of the four-vector that lie off the spacetime plane specified by the bivector $$B$$, and anticommutes with components of the four-vector that lie in the plane. The end result is that the sandwich operation simplifies to
\label{eqn:lorentzTransform:60}
x' = x_\parallel e^{-2B} + x_\perp,

where $$x = x_\perp + x_\parallel$$ and $$x_\perp \cdot B = 0$$, and $$x_\parallel \wedge B = 0$$. In particular, using $$x = x B B^{-1} = \lr{ x \cdot B + x \wedge B } B^{-1}$$, we find that
\label{eqn:lorentzTransform:80}
\begin{aligned}
x_\parallel &= \lr{ x \cdot B } B^{-1} \\
x_\perp &= \lr{ x \wedge B } B^{-1}.
\end{aligned}

When $$B$$ is a spacetime plane $$B = b \wedge \gamma_0$$, then this exponential has a hyperbolic nature, and we end up with a Lorentz boost. When $$B$$ is a spatial bivector, we end up with a single complex exponential, encoding our plain old 3D rotation. More general $$B$$’s that encode composite boosts and rotations are also possible, but $$B$$ must be invertible (it should have no lightlike factors.) The rough geometry of these projections is illustrated in fig 1, where the spacetime plane is represented by $$B$$.

fig 1. Projection and rejection geometry.

What is not so obvious is how to pick $$B$$’s that correspond to specific rotation axes or boost directions. Let’s consider each of those cases in turn.

## Theorem 1.2: Boost.

The boost along a direction vector $$\vcap$$ and rapidity $$\alpha$$ is given by
\label{eqn:lorentzTransform:620}
x' = e^{-\vcap \alpha/2} x e^{\vcap \alpha/2},

where $$\vcap = \gamma_{k0} \cos\theta^k$$ is an STA bivector representing a spatial direction with direction cosines $$\cos\theta^k$$.

### Start proof:

We want to demonstrate that this is equivalent to the usual boost formulation. We can start with decomposition of the four-vector $$x$$ into components that lie in and off of the spacetime plane $$\vcap$$.
\label{eqn:lorentzTransform:100}
\begin{aligned}
x
&= \lr{ x^0 + \Bx } \gamma_0 \\
&= \lr{ x^0 + \Bx \vcap^2 } \gamma_0 \\
&= \lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap + \lr{ \Bx \wedge \vcap} \vcap } \gamma_0,
\end{aligned}

where $$\Bx = x \wedge \gamma_0$$. The first two components lie in the boost plane, whereas the last is the spatial component of the vector that lies perpendicular to the boost plane. Observe that $$\vcap$$ anticommutes with the dot product term and commutes with the wedge product term, so we have
\label{eqn:lorentzTransform:120}
\begin{aligned}
x'
&=
\lr{ x^0 + \lr{ \Bx \cdot \vcap } \vcap } \gamma_0
e^{\vcap \alpha/2 }
e^{\vcap \alpha/2 }
+
\lr{ \Bx \wedge \vcap } \vcap \gamma_0
e^{-\vcap \alpha/2 }
e^{\vcap \alpha/2 } \\
&=
\lr{ x^0 + \lr{ \Bx \cdot \vcap } \vcap } \gamma_0
e^{\vcap \alpha }
+
\lr{ \Bx \wedge \vcap } \vcap \gamma_0.
\end{aligned}

Noting that $$\vcap^2 = 1$$, we may expand the exponential in hyperbolic functions, and find that the boosted portion of the vector expands as
\label{eqn:lorentzTransform:260}
\begin{aligned}
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \gamma_0 e^{\vcap \alpha}
&=
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \gamma_0 \lr{ \cosh\alpha + \vcap \sinh \alpha} \\
&=
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \lr{ \cosh\alpha - \vcap \sinh \alpha} \gamma_0 \\
&=
\lr{ x^0 \cosh\alpha - \lr{ \Bx \cdot \vcap} \sinh \alpha} \gamma_0
+
\lr{ -x^0 \sinh \alpha + \lr{ \Bx \cdot \vcap} \cosh \alpha } \vcap \gamma_0.
\end{aligned}

We are left with
\label{eqn:lorentzTransform:320}
\begin{aligned}
x'
&=
\lr{ x^0 \cosh\alpha - \lr{ \Bx \cdot \vcap} \sinh \alpha} \gamma_0
+
\lr{ \lr{ \Bx \cdot \vcap} \cosh \alpha - x^0 \sinh \alpha } \vcap \gamma_0
+
\lr{ \Bx \wedge \vcap} \vcap \gamma_0 \\
&=
\begin{bmatrix}
\gamma_0 & \vcap \gamma_0
\end{bmatrix}
\begin{bmatrix}
\cosh\alpha & -\sinh\alpha \\
-\sinh\alpha & \cosh\alpha
\end{bmatrix}
\begin{bmatrix}
x^0 \\
\Bx \cdot \vcap
\end{bmatrix}
+
\lr{ \Bx \wedge \vcap} \vcap \gamma_0,
\end{aligned}

which has the desired Lorentz boost structure. Of course, this is usually seen with $$\vcap = \gamma_{10}$$ so that the components in the coordinate column vector are $$(ct, x)$$.
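Here is a numeric spot check of that block structure for $$\vcap = \gamma_{10}$$, again using a Dirac-matrix representation (the representation and sample values are my own choices for the check):

```python
import numpy as np
from scipy.linalg import expm

# Dirac matrices, as before.
I2 = np.eye(2)
Z = np.zeros((2, 2), dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z], [Z, -I2]])
g1, g2, g3 = (np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3))

vcap = g1 @ g0                          # gamma_{10}, with vcap^2 = +1
alpha = 0.8
xmu = np.array([1.0, 0.4, -0.2, 0.7])   # (x^0, x^1, x^2, x^3)
x = xmu[0]*g0 + xmu[1]*g1 + xmu[2]*g2 + xmu[3]*g3

xp = expm(-vcap * alpha / 2) @ x @ expm(vcap * alpha / 2)

# Extract components by trace: tr(gamma_mu gamma_nu)/4 = +/- delta_{mu nu}.
x0p = np.trace(xp @ g0).real / 4        # gamma_0^2 = +1
x1p = -np.trace(xp @ g1).real / 4       # gamma_1^2 = -1

M = np.array([[np.cosh(alpha), -np.sinh(alpha)],
              [-np.sinh(alpha), np.cosh(alpha)]])
print(np.allclose([x0p, x1p], M @ xmu[:2]))   # matches the boost matrix
```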

## Theorem 1.3: Spatial rotation.

Given two linearly independent spatial bivectors $$\Ba = a^k \gamma_{k0}, \Bb = b^k \gamma_{k0}$$, a rotation of $$\theta$$ radians in the plane of $$\Ba, \Bb$$ from $$\Ba$$ towards $$\Bb$$, is given by
\label{eqn:lorentzTransform:640}
x' = e^{-i\theta} x e^{i\theta},

where $$i = (\Ba \wedge \Bb)/\Abs{\Ba \wedge \Bb}$$ is a unit (spatial) bivector.

### Start proof:

Without loss of generality, we may pick $$i = \acap \bcap$$, where $$\acap^2 = \bcap^2 = 1$$, and $$\acap \cdot \bcap = 0$$. With such an orthonormal basis for the plane, we can decompose our four vector into portions that lie in and off the plane
\label{eqn:lorentzTransform:400}
\begin{aligned}
x
&= \lr{ x^0 + \Bx } \gamma_0 \\
&= \lr{ x^0 + \Bx i i^{-1} } \gamma_0 \\
&= \lr{ x^0 + \lr{ \Bx \cdot i } i^{-1} + \lr{ \Bx \wedge i } i^{-1} } \gamma_0.
\end{aligned}

The projective term lies in the plane of rotation, whereas the timelike and spatial rejection terms are perpendicular. That is
\label{eqn:lorentzTransform:420}
\begin{aligned}
x_\parallel &= \lr{ \Bx \cdot i } i^{-1} \gamma_0 \\
x_\perp &= \lr{ x^0 + \lr{ \Bx \wedge i } i^{-1} } \gamma_0,
\end{aligned}

where $$x_\parallel \wedge i = 0$$, and $$x_\perp \cdot i = 0$$. The plane pseudoscalar $$i$$ anticommutes with $$x_\parallel$$, and commutes with $$x_\perp$$, so
\label{eqn:lorentzTransform:440}
\begin{aligned}
x'
&= e^{-i\theta/2} \lr{ x_\parallel + x_\perp } e^{i\theta/2} \\
&= x_\parallel e^{i\theta} + x_\perp.
\end{aligned}

However
\label{eqn:lorentzTransform:460}
\begin{aligned}
\lr{ \Bx \cdot i } i^{-1}
&=
\lr{ \Bx \cdot \lr{ \acap \wedge \bcap } } \bcap \acap \\
&=
\lr{\Bx \cdot \acap} \bcap \bcap \acap
-\lr{\Bx \cdot \bcap} \acap \bcap \acap \\
&=
\lr{\Bx \cdot \acap} \acap
+\lr{\Bx \cdot \bcap} \bcap,
\end{aligned}

so
\label{eqn:lorentzTransform:480}
\begin{aligned}
x_\parallel e^{i\theta}
&=
\lr{
\lr{\Bx \cdot \acap} \acap
+
\lr{\Bx \cdot \bcap} \bcap
}
\gamma_0
\lr{
\cos\theta + \acap \bcap \sin\theta
} \\
&=
\acap \lr{
\lr{\Bx \cdot \acap} \cos\theta
-
\lr{\Bx \cdot \bcap} \sin\theta
}
\gamma_0
+
\bcap \lr{
\lr{\Bx \cdot \acap} \sin\theta
+
\lr{\Bx \cdot \bcap} \cos\theta
}
\gamma_0,
\end{aligned}

so
\label{eqn:lorentzTransform:500}
x'
=
\begin{bmatrix}
\acap & \bcap
\end{bmatrix}
\begin{bmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{bmatrix}
\begin{bmatrix}
\Bx \cdot \acap \\
\Bx \cdot \bcap \\
\end{bmatrix}
\gamma_0
+
\lr{ x \wedge i} i^{-1} \gamma_0.

Observe that this rejection term can be explicitly expanded to
\label{eqn:lorentzTransform:520}
\lr{ \Bx \wedge i} i^{-1} \gamma_0 =
x -
\lr{ \Bx \cdot \acap } \acap \gamma_0
-
\lr{ \Bx \cdot \bcap } \bcap \gamma_0.

This is the timelike component of the vector, plus the spatial component that is normal to the plane. This exponential sandwich transformation rotates only the portion of the vector that lies in the plane, and leaves the rest (timelike and normal) untouched.
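A numeric spot check of the rotation sandwich, with $$\acap = \gamma_{10}, \bcap = \gamma_{20}$$, so that $$i = \acap \bcap = -\gamma_1 \gamma_2$$, in a Dirac-matrix representation (the representation and sample values are assumptions of this sketch):

```python
import numpy as np
from scipy.linalg import expm

# Dirac matrices, as before.
I2 = np.eye(2)
Z = np.zeros((2, 2), dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z], [Z, -I2]])
g1, g2, g3 = (np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3))

i_biv = g1 @ g0 @ g2 @ g0               # acap bcap = -gamma_1 gamma_2
theta = 0.5
xmu = np.array([1.0, 0.4, -0.2, 0.7])
x = xmu[0]*g0 + xmu[1]*g1 + xmu[2]*g2 + xmu[3]*g3

xp = expm(-i_biv*theta/2) @ x @ expm(i_biv*theta/2)

x1p = -np.trace(xp @ g1).real / 4       # gamma_k^2 = -1, so negate the trace
x2p = -np.trace(xp @ g2).real / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose([x1p, x2p], R @ [xmu[1], xmu[2]]))   # in-plane rotation
print(np.isclose(np.trace(xp @ g0).real/4, xmu[0]))    # timelike untouched
```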

## Problem: Verify components relative to boost direction.

In the proof of thm. 1.2, the vector $$x$$ was expanded in terms of the spacetime split. An alternate approach is to expand as
\label{eqn:lorentzTransform:340}
\begin{aligned}
x
&= x \vcap^2 \\
&= \lr{ x \cdot \vcap + x \wedge \vcap } \vcap \\
&= \lr{ x \cdot \vcap } \vcap + \lr{ x \wedge \vcap } \vcap.
\end{aligned}

Show that
\label{eqn:lorentzTransform:360}
\lr{ x \cdot \vcap } \vcap
=
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \gamma_0,

and
\label{eqn:lorentzTransform:380}
\lr{ x \wedge \vcap } \vcap
=
\lr{ \Bx \wedge \vcap} \vcap \gamma_0.

Let $$x = x^\mu \gamma_\mu$$, so that
\label{eqn:lorentzTransform:160}
\begin{aligned}
x \cdot \vcap
&=
\gpgradeone{ x^\mu \gamma_\mu \cos\theta^b \gamma_{b 0} } \\
&=
x^\mu \cos\theta^b \gpgradeone{ \gamma_\mu \gamma_{b 0} }
.
\end{aligned}

The $$\mu = 0$$ component of this grade selection is
\label{eqn:lorentzTransform:180}
\gpgradeone{ \gamma_0 \gamma_{b 0} }
=
-\gamma_b,

and for $$\mu = a \ne 0$$, we have
\label{eqn:lorentzTransform:200}
\gpgradeone{ \gamma_a \gamma_{b 0} }
=
-\delta_{a b} \gamma_0,

so we have
\label{eqn:lorentzTransform:220}
\begin{aligned}
x \cdot \vcap
&=
x^0 \cos\theta^b (-\gamma_b)
+
x^a \cos\theta^b (-\delta_{ab} \gamma_0 ) \\
&=
-x^0 \vcap \gamma_0
-
x^b \cos\theta^b \gamma_0 \\
&=
-\lr{ x^0 \vcap + \Bx \cdot \vcap } \gamma_0,
\end{aligned}

where $$\Bx = x \wedge \gamma_0$$ is the spatial portion of the four vector $$x$$ relative to the stationary observer frame. Since $$\vcap$$ anticommutes with $$\gamma_0$$, the component of $$x$$ in the spacetime plane $$\vcap$$ is
\label{eqn:lorentzTransform:240}
\lr{ x \cdot \vcap } \vcap =
\lr{ x^0 + \lr{ \Bx \cdot \vcap} \vcap } \gamma_0,

as expected.

For the rejection term, we have
\label{eqn:lorentzTransform:280}
x \wedge \vcap
=
x^\mu \cos\theta^s \gpgradethree{ \gamma_\mu \gamma_{s 0} }.

The $$\mu = 0$$ term clearly contributes nothing, leaving us with:
\label{eqn:lorentzTransform:300}
\begin{aligned}
\lr{ x \wedge \vcap } \vcap
&=
\lr{ x \wedge \vcap } \cdot \vcap \\
&=
x^r \cos\theta^s \cos\theta^t \lr{ \lr{ \gamma_r \wedge \gamma_{s}} \gamma_0 } \cdot \lr{ \gamma_{t0} } \\
&=
x^r \cos\theta^s \cos\theta^t \gpgradeone{
\lr{ \gamma_r \wedge \gamma_{s} } \gamma_0 \gamma_{t0}
} \\
&=
-x^r \cos\theta^s \cos\theta^t \lr{ \gamma_r \wedge \gamma_{s}} \cdot \gamma_t \\
&=
-x^r \cos\theta^s \cos\theta^t \lr{ -\gamma_r \delta_{st} + \gamma_s \delta_{rt} } \\
&=
x^r \cos\theta^t \cos\theta^t \gamma_r
-
x^t \cos\theta^s \cos\theta^t \gamma_s \\
&=
\Bx \gamma_0
- (\Bx \cdot \vcap) \vcap \gamma_0 \\
&=
\lr{ \Bx \wedge \vcap} \vcap \gamma_0,
\end{aligned}

as expected. Is there a clever way to demonstrate this without resorting to coordinates?

## Problem: Rotation transformation components.

Given a unit spatial bivector $$i = \acap \bcap$$, where $$\acap \cdot \bcap = 0$$ and $$i^2 = -1$$, show that
\label{eqn:lorentzTransform:540}
\lr{ x \cdot i } i^{-1}
=
\lr{ \Bx \cdot i } i^{-1} \gamma_0
=
\lr{\Bx \cdot \acap } \acap \gamma_0
+
\lr{\Bx \cdot \bcap } \bcap \gamma_0,

and
\label{eqn:lorentzTransform:560}
\lr{ x \wedge i } i^{-1}
=
\lr{ \Bx \wedge i } i^{-1} \gamma_0
=
x -
\lr{\Bx \cdot \acap } \acap \gamma_0
-
\lr{\Bx \cdot \bcap } \bcap \gamma_0.

Also show that $$i$$ anticommutes with $$\lr{ x \cdot i } i^{-1}$$ and commutes with $$\lr{ x \wedge i } i^{-1}$$.

This problem is left for the reader, as I don’t feel like typing out my solution.

The first part of this problem can be done in the tedious coordinate approach used above, but hopefully there is a better way.

For the last (commutation) part of the problem, here is a hint. Let $$x \wedge i = n i$$, where $$n \cdot i = 0$$. The result then follows easily.

## Stokes Theorem

The Fundamental Theorem of (Geometric) Calculus is a generalization of Stokes’ theorem to multivector integrals. Notationally, it looks like Stokes’ theorem with all the dot and wedge products removed. It is worth restating Stokes’ theorem and all the definitions associated with it for reference.

## Stokes’ Theorem

For blades $$F \in \bigwedge^{s}$$, and a $$k$$ volume element $$d^k \Bx$$, $$s < k$$, \begin{equation*} \int_V d^k \Bx \cdot (\boldpartial \wedge F) = \oint_{\partial V} d^{k-1} \Bx \cdot F. \end{equation*} This is a loaded and abstract statement, and requires many definitions to make it useful

• The volume integral is over a $$k$$ dimensional surface (manifold).
• Integration over the boundary of the manifold $$V$$ is indicated by $$\partial V$$.
• This manifold is assumed to be spanned by a parameterized vector $$\Bx(u^1, u^2, \cdots, u^k)$$.
• A curvilinear coordinate basis $$\setlr{ \Bx_i }$$ can be defined on the manifold by
\label{eqn:fundamentalTheoremOfCalculus:40}
\Bx_i \equiv \PD{u^i}{\Bx} \equiv \partial_i \Bx.

• A dual basis $$\setlr{\Bx^i}$$ reciprocal to the tangent vector basis $$\Bx_i$$ can be calculated subject to the requirement $$\Bx_i \cdot \Bx^j = \delta_i^j$$.
• The vector derivative $$\boldpartial$$, the projection of the gradient onto the tangent space of the manifold, is defined by
\label{eqn:fundamentalTheoremOfCalculus:100}
\boldpartial = \Bx^i \partial_i = \sum_{i=1}^k \Bx^i \PD{u^i}{}.

• The volume element is defined by
\label{eqn:fundamentalTheoremOfCalculus:60}
d^k \Bx = d\Bx_1 \wedge d\Bx_2 \cdots \wedge d\Bx_k,

where

\label{eqn:fundamentalTheoremOfCalculus:80}
d\Bx_i = \Bx_i du^i,\qquad \text{(no sum)}.

• The volume element is non-zero on the manifold, or $$\Bx_1 \wedge \cdots \wedge \Bx_k \ne 0$$.
• The surface area element $$d^{k-1} \Bx$$, is defined by
\label{eqn:fundamentalTheoremOfCalculus:120}
d^{k-1} \Bx = \sum_{i = 1}^k (-1)^{k-i} d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k,

where $$\widehat{d\Bx_i}$$ indicates the omission of $$d\Bx_i$$.

• My proof for this theorem was restricted to a simple “rectangular” volume parameterized by the ranges
$$[u^1(0), u^1(1) ] \otimes [u^2(0), u^2(1) ] \otimes \cdots \otimes [u^k(0), u^k(1) ]$$

• The precise meaning that should be given to the oriented area integral is
\label{eqn:fundamentalTheoremOfCalculus:140}
\oint_{\partial V} d^{k-1} \Bx \cdot F
=
\sum_{i = 1}^k (-1)^{k-i} \int \evalrange{
\lr{ \lr{ d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k } \cdot F }
}{u^i = u^i(0)}{u^i(1)},

where both the area form and the blade $$F$$ are evaluated at the end points of the parameterization range.

After the work of stating exactly what is meant by this theorem, most of the proof follows from the fact that for $$s < k$$ the volume curl dot product can be expanded as $$\label{eqn:fundamentalTheoremOfCalculus:160} \int_V d^k \Bx \cdot (\boldpartial \wedge F) = \int_V d^k \Bx \cdot (\Bx^i \wedge \partial_i F) = \int_V \lr{ d^k \Bx \cdot \Bx^i } \cdot \partial_i F.$$ Each of the $$du^i$$ integrals can be evaluated directly, since each of the remaining $$d\Bx_j = du^j \PDi{u^j}{\Bx}, i \ne j$$ is calculated with $$u^i$$ held fixed. This allows for the integration over a “rectangular” parameterization region, proving the theorem for such a volume parameterization. A more general proof requires a triangulation of the volume and surface, but the basic principle of the theorem is evident, without that additional work.
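A two dimensional instance of this rectangular argument can be verified numerically. The sketch below checks Green’s theorem on $$[0,1] \otimes [0,2]$$, comparing the area integral of the curl against the boundary integral, with an arbitrary smooth choice of $$P, Q$$ (my own example functions, not from the text):

```python
import numpy as np

# Green's theorem on a rectangle: integral of (dQ/dx - dP/dy) over the area
# should equal the counterclockwise boundary integral of P dx + Q dy, each
# du^i integral evaluated at the endpoints of its range.
P = lambda x, y: np.sin(x) * y
Q = lambda x, y: x * y**2

h = 1e-3
xs = np.arange(h/2, 1.0, h)             # midpoint samples in x
ys = np.arange(h/2, 2.0, h)             # midpoint samples in y
X, Y = np.meshgrid(xs, ys, indexing="ij")

dQdx = (Q(X + h/2, Y) - Q(X - h/2, Y)) / h
dPdy = (P(X, Y + h/2) - P(X, Y - h/2)) / h
curl_integral = np.sum(dQdx - dPdy) * h * h

# counterclockwise boundary integral of P dx + Q dy
boundary_integral = (np.sum(P(xs, 0.0)) * h      # bottom, left to right
                     - np.sum(P(xs, 2.0)) * h    # top, right to left
                     + np.sum(Q(1.0, ys)) * h    # right side, upward
                     - np.sum(Q(0.0, ys)) * h)   # left side, downward

print(abs(curl_integral - boundary_integral) < 1e-4)
```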

## Fundamental Theorem of Calculus

There is a Geometric Algebra generalization of Stokes theorem that does not have the blade grade restriction of Stokes theorem. In [2] this is stated as

\label{eqn:fundamentalTheoremOfCalculus:180}
\int_V d^k \Bx \boldpartial F = \oint_{\partial V} d^{k-1} \Bx F.

A similar expression is used in [1] where it is also pointed out there is a variant with the vector derivative acting to the left

\label{eqn:fundamentalTheoremOfCalculus:200}
\int_V F d^k \Bx \boldpartial = \oint_{\partial V} F d^{k-1} \Bx.

In [3] it is pointed out that a bidirectional formulation is possible, providing the most general expression of the Fundamental Theorem of (Geometric) Calculus

\label{eqn:fundamentalTheoremOfCalculus:220}
\boxed{
\int_V F d^k \Bx \boldpartial G = \oint_{\partial V} F d^{k-1} \Bx G.
}

Here the vector derivative acts both to the left and right on $$F$$ and $$G$$. The specific action of this operator is
\label{eqn:fundamentalTheoremOfCalculus:240}
\begin{aligned}
F \boldpartial G
&=
(F \boldpartial) G
+
F (\boldpartial G) \\
&=
(\partial_i F) \Bx^i G
+
F \Bx^i (\partial_i G).
\end{aligned}

The fundamental theorem can be demonstrated by direct expansion. With the vector derivative $$\boldpartial$$ and its partials $$\partial_i$$ acting bidirectionally, that is

\label{eqn:fundamentalTheoremOfCalculus:260}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F d^k \Bx \Bx^i \partial_i G \\
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i + d^k \Bx \wedge \Bx^i } \partial_i G.
\end{aligned}

Both the reciprocal frame vectors and the curvilinear basis span the tangent space of the manifold, since we can write any reciprocal frame vector as a set of projections in the curvilinear basis

\label{eqn:fundamentalTheoremOfCalculus:280}
\Bx^i = \sum_j \lr{ \Bx^i \cdot \Bx^j } \Bx_j,

so $$\Bx^i \in \textrm{span} \setlr{ \Bx_j, j \in [1,k] }$$.
This means that $$d^k \Bx \wedge \Bx^i = 0$$, and

\label{eqn:fundamentalTheoremOfCalculus:300}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i } \partial_i G \\
&=
\sum_{i = 1}^{k}
\int_V
du^1 du^2 \cdots \widehat{ du^i} \cdots du^k
F \lr{
(-1)^{k-i}
\Bx_1 \wedge \Bx_2 \cdots \widehat{\Bx_i} \cdots \wedge \Bx_k } \partial_i G du^i \\
&=
\sum_{i = 1}^{k}
(-1)^{k-i}
\int_{u^1}
\int_{u^2}
\cdots
\int_{u^{i-1}}
\int_{u^{i+1}}
\cdots
\int_{u^k}
\evalrange{ \lr{
F d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k G
}
}{u^i = u^i(0)}{u^i(1)}.
\end{aligned}

Adding in the same notational sugar that we used in Stokes theorem, this proves the Fundamental theorem \ref{eqn:fundamentalTheoremOfCalculus:220} for “rectangular” parameterizations. Note that such a parameterization need not actually be rectangular.

## Example: Application to Maxwell’s equation


Maxwell’s equation is an example of a first order gradient equation

\label{eqn:fundamentalTheoremOfCalculus:320}
\grad F = \inv{\epsilon_0 c} J.

Integrating over a four-volume (where the vector derivative equals the gradient), and applying the Fundamental theorem, we have

\label{eqn:fundamentalTheoremOfCalculus:340}
\inv{\epsilon_0 c} \int d^4 x J = \oint d^3 x F.

Observe that the surface area element product with $$F$$ has both vector and trivector terms. This can be demonstrated by considering some examples

\label{eqn:fundamentalTheoremOfCalculus:360}
\begin{aligned}
\gamma_{012} \gamma_{01} &\propto \gamma_2 \\
\gamma_{012} \gamma_{23} &\propto \gamma_{023}.
\end{aligned}

On the other hand, the four volume integral of $$J$$ has only trivector parts. This means that the integral can be split into a pair of same-grade equations

\label{eqn:fundamentalTheoremOfCalculus:380}
\begin{aligned}
\inv{\epsilon_0 c} \int d^4 x \cdot J &=
\oint \gpgradethree{ d^3 x F} \\
0 &=
\oint d^3 x \cdot F.
\end{aligned}

The first can be put into a slightly tidier form using a duality transformation
\label{eqn:fundamentalTheoremOfCalculus:400}
\begin{aligned}
\gpgradethree{ d^3 x F }
&=
-\gpgradethree{ d^3 x I^2 F} \\
&=
\gpgradethree{ I d^3 x I F} \\
&=
(I d^3 x) \wedge (I F).
\end{aligned}

Letting $$n \Abs{d^3 x} = I d^3 x$$, this gives

\label{eqn:fundamentalTheoremOfCalculus:420}
\oint \Abs{d^3 x} n \wedge (I F) = \inv{\epsilon_0 c} \int d^4 x \cdot J.

Note that this normal is normal to a three-volume subspace of the spacetime volume. For example, if one component of that spacetime surface area element is $$\gamma_{012} c dt dx dy$$, then the normal to that area component is $$\gamma_3$$.

A second set of duality transformations

\label{eqn:fundamentalTheoremOfCalculus:440}
\begin{aligned}
n \wedge (IF)
&=
\gpgradethree{ n I F } \\
&=
-\gpgradethree{ I n F } \\
&=
-\gpgradethree{ I (n \cdot F)} \\
&=
-I (n \cdot F),
\end{aligned}

and
\label{eqn:fundamentalTheoremOfCalculus:460}
\begin{aligned}
I d^4 x \cdot J
&=
\gpgradeone{ I d^4 x \cdot J } \\
&=
\gpgradeone{ I d^4 x J } \\
&=
\gpgradeone{ (I d^4 x) J } \\
&=
(I d^4 x) J,
\end{aligned}

can further tidy things up, leaving us with

\label{eqn:fundamentalTheoremOfCalculus:500}
\boxed{
\begin{aligned}
\oint \Abs{d^3 x} n \cdot F &= \inv{\epsilon_0 c} \int (I d^4 x) J \\
\oint d^3 x \cdot F &= 0.
\end{aligned}
}

The Fundamental theorem of calculus immediately provides relations between the Faraday bivector $$F$$ and the four-current $$J$$.

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[3] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21(1):221–231, 2011. URL https://arxiv.org/abs/0809.4526.

## Geometric algebra notes collection split into two volumes

I’ve now split my (way too big) Exploring physics with Geometric Algebra into two volumes:

Each of these is now a much more manageable size, which should facilitate removing the redundancies in these notes, and making them more properly book like.

Also note I’ve also previously moved “Exploring Geometric Algebra” content related to:

• Lagrangians
• Hamiltonians
• Noether’s theorem

into my classical mechanics collection (449 pages).

## Motivation

In [2] the Schwartz inequality

\label{eqn:qmSchwartz:20}
\boxed{
\braket{a}{a}
\braket{b}{b}
\ge \Abs{\braket{a}{b}}^2,
}

is used in the derivation of the uncertainty relation. The proof of the Schwartz inequality uses a sneaky substitution that doesn’t seem obvious, and is even less obvious since there is a typo in the value to be substituted. Let’s understand where that sneakiness is coming from.

## Without being sneaky

My ancient first year linear algebra text [1] contains a non-sneaky proof, but it only works for real vector spaces. Recast in bra-ket notation, this method examines the bounds of the norms of sums and differences of unit states (i.e. $$\braket{a}{a} = \braket{b}{b} = 1$$.)

\label{eqn:qmSchwartz:40}
\braket{a - b}{a - b}
= \braket{a}{a} + \braket{b}{b} - \braket{a}{b} - \braket{b}{a}
= 2 - 2 \textrm{Re} \braket{a}{b}
\ge 0,

so
\label{eqn:qmSchwartz:60}
1 \ge \textrm{Re} \braket{a}{b}.

Similarly,

\label{eqn:qmSchwartz:80}
\braket{a + b}{a + b}
= \braket{a}{a} + \braket{b}{b} + \braket{a}{b} + \braket{b}{a}
= 2 + 2 \textrm{Re} \braket{a}{b}
\ge 0,

so
\label{eqn:qmSchwartz:100}
\textrm{Re} \braket{a}{b} \ge -1.

This means that for normalized state vectors

\label{eqn:qmSchwartz:120}
-1 \le \textrm{Re} \braket{a}{b} \le 1,

or
\label{eqn:qmSchwartz:140}
\Abs{\textrm{Re} \braket{a}{b}} \le 1.

Writing out the unit vectors explicitly, that last inequality is

\label{eqn:qmSchwartz:180}
\Abs{ \textrm{Re} \braket{ \frac{a}{\sqrt{\braket{a}{a}}} }{ \frac{b}{\sqrt{\braket{b}{b}}} } } \le 1,

squaring and rearranging gives

\label{eqn:qmSchwartz:200}
\Abs{\textrm{Re} \braket{a}{b}}^2 \le
\braket{a}{a}
\braket{b}{b}.

This is similar to, but not identical to, the Schwartz inequality. Since $$\Abs{\textrm{Re} \braket{a}{b}} \le \Abs{\braket{a}{b}}$$, the Schwartz inequality cannot be demonstrated with this argument. This first-year algebra method works nicely for demonstrating the inequality for real vector spaces, but a different argument is required for a complex vector space (i.e. a quantum mechanics state space.)
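To see concretely why the real-part bound is weaker, here is a minimal numpy sanity check. The state vectors are hypothetical values chosen so that the inner product is purely imaginary: the real part carries none of the magnitude, so a bound on $$\textrm{Re} \braket{a}{b}$$ alone says nothing about $$\Abs{\braket{a}{b}}$$.

```python
import numpy as np

# Two normalized complex state vectors (hypothetical example values) whose
# inner product is purely imaginary.
a = np.array([1.0, 0.0], dtype=complex)
b = np.array([1.0j, 0.0], dtype=complex)

inner = np.vdot(a, b)   # np.vdot conjugates its first argument: <a|b> = i
print(abs(inner.real))  # 0.0
print(abs(inner))       # 1.0 -- |Re<a|b>| < |<a|b>| here
```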

## Arguing with projected and rejected components

Notice that equality holds when the vectors are collinear, and that the inequality is most pronounced when the vectors are normal to each other. Given those geometrical observations, it seems reasonable to examine the norms of projected or rejected components of a vector. To do so in bra-ket notation, the correct form of a projection operation must be determined. Care is required to get the ordering of the bra-kets right when expressing such a projection.

Suppose we wish to calculate the rejection of $$\ket{a}$$ from $$\ket{b}$$, that is, $$\ket{b - \alpha a}$$, such that

\label{eqn:qmSchwartz:220}
0
= \braket{a}{b - \alpha a}
= \braket{a}{b} - \alpha \braket{a}{a},

or
\label{eqn:qmSchwartz:240}
\alpha =
\frac{\braket{a}{b} }{ \braket{a}{a} }.

Therefore, the projection of $$\ket{b}$$ on $$\ket{a}$$ is

\label{eqn:qmSchwartz:260}
\textrm{Proj}_{\ket{a}} \ket{b}
= \frac{\braket{a}{b} }{ \braket{a}{a} } \ket{a}
= \frac{\braket{b}{a}^\conj }{ \braket{a}{a} } \ket{a}.

The conventional way to write this in QM is in the operator form

\label{eqn:qmSchwartz:300}
\textrm{Proj}_{\ket{a}} \ket{b}
= \frac{\ket{a}\bra{a}}{\braket{a}{a}} \ket{b}.

In this form the rejection of $$\ket{a}$$ from $$\ket{b}$$ can be expressed as

\label{eqn:qmSchwartz:280}
\textrm{Rej}_{\ket{a}} \ket{b} = \ket{b} - \frac{\ket{a}\bra{a}}{\braket{a}{a}} \ket{b}.

This state vector is normal to $$\ket{a}$$ as desired

\label{eqn:qmSchwartz:320}
\braket{a}{b - \frac{\braket{a}{b} }{ \braket{a}{a} } a }
=
\braket{a}{ b} - \frac{ \braket{a}{b} }{ \braket{a}{a} } \braket{a}{a}
=
\braket{a}{ b} - \braket{a}{b}
= 0.
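As a quick numeric sanity check of the projection coefficient and this orthogonality, here is a small numpy sketch. The random complex kets and the `braket` helper are illustrative assumptions, not from any QM library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex "kets" for a numeric check.
a = rng.normal(size=4) + 1j * rng.normal(size=4)
b = rng.normal(size=4) + 1j * rng.normal(size=4)

def braket(u, v):
    """<u|v>, antilinear in the first slot (np.vdot conjugates its first argument)."""
    return np.vdot(u, v)

# Projection of |b> onto |a>, and the rejection of |a> from |b>.
proj = (braket(a, b) / braket(a, a)) * a
rej = b - proj

# The rejection is orthogonal to |a>, matching the cancellation above.
print(abs(braket(a, rej)) < 1e-12)  # True
```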

How about its length? That is

\label{eqn:qmSchwartz:340}
\begin{aligned}
\braket{b - \frac{\braket{a}{b} }{ \braket{a}{a} } a}{b - \frac{\braket{a}{b} }{ \braket{a}{a} } a }
&=
\braket{b}{b} - 2 \frac{\Abs{\braket{a}{b}}^2}{\braket{a}{a}} + \frac{\Abs{\braket{a}{b}}^2 }{ \braket{a}{a}^2 } \braket{a}{a} \\
&=
\braket{b}{b} - \frac{\Abs{\braket{a}{b}}^2}{\braket{a}{a}}.
\end{aligned}

Observe that this must be greater than or equal to zero, so

\label{eqn:qmSchwartz:360}
\braket{b}{b} \ge \frac{ \Abs{ \braket{a}{b} }^2 }{ \braket{a}{a} }.

Rearranging this gives \ref{eqn:qmSchwartz:20} as desired. The Schwartz proof in [2] obscures the geometry involved and starts with

\label{eqn:qmSchwartz:380}
\braket{b + \lambda a}{b + \lambda a} \ge 0,

where the “proof” is nothing more than a statement that one can “pick” $$\lambda = -\braket{b}{a}/\braket{a}{a}$$. The Pythagorean context of the Schwartz inequality is not mentioned, and without thinking about it, one is left wondering what sort of magic hat that $$\lambda$$ selection came from.
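The whole argument is easy to verify numerically: picking $$\lambda$$ as the negated rejection coefficient makes $$\ket{b + \lambda a}$$ exactly the rejection computed above, and its non-negative squared norm is the inequality in disguise. A small numpy sketch with hypothetical test vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=5) + 1j * rng.normal(size=5)
b = rng.normal(size=5) + 1j * rng.normal(size=5)

aa = np.vdot(a, a).real  # <a|a>
bb = np.vdot(b, b).real  # <b|b>
ab = np.vdot(a, b)       # <a|b>

# The Schwartz inequality: <a|a><b|b> >= |<a|b>|^2.
print(aa * bb >= abs(ab) ** 2)  # True

# The "magic" substitution is the rejection coefficient derived above:
# with lam = -<a|b>/<a|a>, |b + lam a> is the rejection of |a> from |b>,
# whose squared norm is <b|b> - |<a|b>|^2/<a|a> >= 0.
lam = -ab / aa
r = b + lam * a
print(abs(np.vdot(r, r).real - (bb - abs(ab) ** 2 / aa)) < 1e-9)  # True
```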

# References

[1] W Keith Nicholson. Elementary linear algebra, with applications. PWS-Kent Publishing Company, 1990.

[2] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

## Parallel projection of electromagnetic fields with Geometric Algebra

When computing the components of a polarized reflecting ray that were parallel or perpendicular to the reflecting surface, it was found that the electric and magnetic fields could be written as

\label{eqn:gaFieldProjection:280}
\BE = \lr{ \BE \cdot \pcap } \pcap + \lr{ \BE \cdot \qcap } \qcap = E_\parallel \pcap + E_\perp \qcap

\label{eqn:gaFieldProjection:300}
\BH = \lr{ \BH \cdot \pcap } \pcap + \lr{ \BH \cdot \qcap } \qcap = H_\parallel \pcap + H_\perp \qcap.

where the unit vector $$\pcap$$, which lies both in the reflecting plane and in the electromagnetic plane (tangential to the wave vector direction), is

\label{eqn:gaFieldProjection:340}
\pcap = \frac{\kcap \cross \ncap}{\Abs{\kcap \cross \ncap}}

\label{eqn:gaFieldProjection:360}
\qcap = \kcap \cross \pcap.

Here $$\qcap$$ is perpendicular to $$\pcap$$ but lies in the electromagnetic plane. This logically subdivides the fields into two pairs, one with the electric field parallel to the reflection plane

\label{eqn:gaFieldProjection:240}
\begin{aligned}
\BE_1 &= \lr{ \BE \cdot \pcap } \pcap = E_\parallel \pcap \\
\BH_1 &= \lr{ \BH \cdot \qcap } \qcap = H_\perp \qcap,
\end{aligned}

and one with the magnetic field parallel to the reflection plane

\label{eqn:gaFieldProjection:380}
\begin{aligned}
\BH_2 &= \lr{ \BH \cdot \pcap } \pcap = H_\parallel \pcap \\
\BE_2 &= \lr{ \BE \cdot \qcap } \qcap = E_\perp \qcap.
\end{aligned}
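This split is easy to verify numerically. The sketch below uses a hypothetical geometry (a 45-degree wave vector and a $$\zcap$$ surface normal) and drops the $$1/\mu_0$$ constant, so the array `H` is only proportional to the magnetic field:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Hypothetical geometry: 45-degree wave vector khat, surface normal nhat.
khat = unit(np.array([1.0, 0.0, 1.0]))
nhat = np.array([0.0, 0.0, 1.0])

phat = unit(np.cross(khat, nhat))  # in the reflection plane and the EM plane
qhat = np.cross(khat, phat)        # in the EM plane, perpendicular to phat

E = 2.0 * phat + 3.0 * qhat        # any transverse electric field
H = np.cross(khat, E)              # proportional to the magnetic field (1/mu_0 dropped)

# The (E1, H1) and (E2, H2) pairs of the text:
E1, H1 = np.dot(E, phat) * phat, np.dot(H, qhat) * qhat
E2, H2 = np.dot(E, qhat) * qhat, np.dot(H, phat) * phat

print(np.allclose(E1 + E2, E), np.allclose(H1 + H2, H))  # True True
```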

Expressed in Geometric Algebra form, each of these pairs of fields should be thought of as components of a single multivector field. That is

\label{eqn:gaFieldProjection:400}
F_1 = \BE_1 + c \mu_0 \BH_1 I

\label{eqn:gaFieldProjection:460}
F_2 = \BE_2 + c \mu_0 \BH_2 I

where the original total field is

\label{eqn:gaFieldProjection:420}
F = \BE + c \mu_0 \BH I.

In \ref{eqn:gaFieldProjection:400} we have a composite projection operation, finding the portion of the electric field that lies in the reflection plane, and simultaneously finding the component of the magnetic field that lies perpendicular to that (while still lying in the tangential plane of the electromagnetic field). In \ref{eqn:gaFieldProjection:460} the magnetic field is projected onto the reflection plane and a component of the electric field that lies in the tangential (to the wave vector direction) plane is computed.

If we operate only on the complete multivector field, can we find these composite projection field components in a single operation, instead of working with the individual electric and magnetic fields?

Working towards this goal, it is worthwhile to point out consequences of the assumption that the fields are plane waves (or equivalently, far-field spherical waves). For such a wave we have

\label{eqn:gaFieldProjection:480}
\begin{aligned}
\BH
&= \inv{\mu_0} \kcap \cross \BE \\
&= \inv{\mu_0} (-I)\lr{ \kcap \wedge \BE } \\
&= \inv{\mu_0} (-I)\lr{ \kcap \BE - \kcap \cdot \BE} \\
&= -\frac{I}{\mu_0} \kcap \BE,
\end{aligned}

or

\label{eqn:gaFieldProjection:520}
\mu_0 \BH I = \kcap \BE.

This made use of the identity $$\Ba \wedge \Bb = I \lr{\Ba \cross \Bb}$$, and the fact that the electric field is perpendicular to the wave vector direction. The total multivector field is

\label{eqn:gaFieldProjection:500}
\begin{aligned}
F
&= \BE + c \mu_0 \BH I \\
&= \lr{ 1 + c \kcap } \BE.
\end{aligned}
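These multivector identities can be checked with the Pauli-matrix representation of the 3D geometric algebra, a sketch device not used in the text: a vector $$\Bv$$ maps to $$\Bv \cdot \Bsigma$$, the geometric product becomes the matrix product, and the pseudoscalar $$I = \sigma_1 \sigma_2 \sigma_3$$ is $$i$$ times the identity. With hypothetical field values and constants dropped (so `H` stands for $$\mu_0 \BH$$ and $$c = 1$$):

```python
import numpy as np

# Pauli matrices: vector v maps to v . sigma, geometric product = matrix product.
s = np.array([[[0, 1], [1, 0]],
              [[0, -1j], [1j, 0]],
              [[1, 0], [0, -1]]], dtype=complex)

def vec(v):
    return np.einsum('i,ijk->jk', np.asarray(v, dtype=complex), s)

I3 = s[0] @ s[1] @ s[2]  # pseudoscalar: 1j times the identity

# Hypothetical transverse plane wave: khat . E = 0.
khat = np.array([0.0, 0.0, 1.0])
E = np.array([1.0, 2.0, 0.0])
H = np.cross(khat, E)    # stands for mu_0 H

# mu_0 H I = khat E:
print(np.allclose(vec(H) @ I3, vec(khat) @ vec(E)))   # True
# F = E + c mu_0 H I = (1 + c khat) E, with c = 1:
print(np.allclose(vec(E) + vec(H) @ I3,
                  (np.eye(2) + vec(khat)) @ vec(E)))  # True
```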

Expansion of the magnetic field component that is perpendicular to the reflection plane gives

\label{eqn:gaFieldProjection:540}
\begin{aligned}
\mu_0 H_\perp
&= \mu_0 \BH \cdot \qcap \\
&= \gpgradezero{ \lr{-\kcap \BE I} \qcap } \\
&= -\gpgradezero{ \kcap \BE I \lr{ \kcap \cross \pcap} } \\
&= \gpgradezero{ \kcap \BE I I \lr{ \kcap \wedge \pcap} } \\
&= -\gpgradezero{ \kcap \BE \kcap \pcap } \\
&= \gpgradezero{ \kcap \kcap \BE \pcap } \\
&= \BE \cdot \pcap,
\end{aligned}

so

\label{eqn:gaFieldProjection:560}
F_1
= (\pcap + c I \qcap ) \BE \cdot \pcap.

Since $$\qcap \kcap \pcap = I$$, the component of the complete multivector field in the $$\pcap$$ direction is

\label{eqn:gaFieldProjection:580}
\begin{aligned}
F_1
&= (\pcap - c \pcap \kcap ) \BE \cdot \pcap \\
&= \pcap (1 - c \kcap ) \BE \cdot \pcap \\
&= (1 + c \kcap ) \pcap \BE \cdot \pcap.
\end{aligned}

It is reasonable to expect that $$F_2$$ has a similar form, but with $$\pcap \rightarrow \qcap$$. This is verified by expansion

\label{eqn:gaFieldProjection:600}
\begin{aligned}
F_2
&= E_\perp \qcap + c \lr{ \mu_0 H_\parallel } \pcap I \\
&= \lr{\BE \cdot \qcap} \qcap + c \gpgradezero{ - \kcap \BE I \kcap \qcap I } \lr{\kcap \qcap I} I \\
&= \lr{\BE \cdot \qcap} \qcap + c \gpgradezero{ \kcap \BE \kcap \qcap } \kcap \qcap (-1) \\
&= \lr{\BE \cdot \qcap} \qcap + c \gpgradezero{ \kcap \BE (-\qcap \kcap) } \kcap \qcap (-1) \\
&= \lr{\BE \cdot \qcap} \qcap + c \gpgradezero{ \kcap \kcap \BE \qcap } \kcap \qcap \\
&= \lr{ 1 + c \kcap } \qcap \lr{ \BE \cdot \qcap }.
\end{aligned}

This, like \ref{eqn:gaFieldProjection:580} before it, makes a lot of sense. The original field can be written

\label{eqn:gaFieldProjection:620}
F = \lr{ \Ecap + c \lr{ \kcap \cross \Ecap } I } \BE \cdot \Ecap,

where the leading multivector term contains all the directional dependence of the electric and magnetic field components, and the trailing scalar has the magnitude of the field with respect to the reference direction $$\Ecap$$.

We have the same structure after projecting $$\BE$$ onto either the $$\pcap$$, or $$\qcap$$ directions respectively

\label{eqn:gaFieldProjection:660}
F_1 = \lr{ \pcap + c \lr{ \kcap \cross \pcap } I} \BE \cdot \pcap

\label{eqn:gaFieldProjection:680}
F_2 = \lr{ \qcap + c \lr{ \kcap \cross \qcap } I} \BE \cdot \qcap.

The next question is how to achieve this projection operation directly in terms of $$F$$ and $$\pcap, \qcap$$, without resorting to expressing $$F$$ in terms of $$\BE$$ and $$\BH$$. I’ve not yet been able to determine the structure of that operation.
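Using the same Pauli-matrix representation as a sketch device (not from the text), one can at least verify numerically that $$\qcap \kcap \pcap = I$$ and that the two projected fields recover the total, $$F = F_1 + F_2$$. The geometry and field values below are hypothetical, with $$c = 1$$ and constants dropped:

```python
import numpy as np

# Pauli-matrix representation: vectors map to v . sigma.
s = np.array([[[0, 1], [1, 0]],
              [[0, -1j], [1j, 0]],
              [[1, 0], [0, -1]]], dtype=complex)

def vec(v):
    return np.einsum('i,ijk->jk', np.asarray(v, dtype=complex), s)

def unit(v):
    return v / np.linalg.norm(v)

# Hypothetical geometry.
khat = unit(np.array([1.0, 0.0, 1.0]))
nhat = np.array([0.0, 0.0, 1.0])
phat = unit(np.cross(khat, nhat))
qhat = np.cross(khat, phat)

# qhat khat phat is the pseudoscalar, as claimed in the text.
print(np.allclose(vec(qhat) @ vec(khat) @ vec(phat), 1j * np.eye(2)))  # True

E = 2.0 * phat - 5.0 * qhat           # any transverse field
one_plus_k = np.eye(2) + vec(khat)    # the (1 + c khat) factor, c = 1

F = one_plus_k @ vec(E)
F1 = one_plus_k @ vec(phat) * np.dot(E, phat)
F2 = one_plus_k @ vec(qhat) * np.dot(E, qhat)

print(np.allclose(F, F1 + F2))  # True
```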