
Fundamental Theorem of Geometric Calculus

September 20, 2016 math and physics play

[Click here for a PDF of this post with nicer formatting]

Stokes Theorem

The Fundamental Theorem of (Geometric) Calculus is a generalization of Stokes theorem to multivector integrals. Notationally, it looks like Stokes theorem with all the dot and wedge products removed. It is worth restating Stokes theorem, and all the definitions associated with it, for reference.

Stokes’ Theorem

For blades \(F \in \bigwedge^{s}\), and a \(k\) volume element \(d^k \Bx\), \( s < k \),

\begin{equation*}
\int_V d^k \Bx \cdot (\boldpartial \wedge F) = \oint_{\partial V} d^{k-1} \Bx \cdot F.
\end{equation*}

This is a loaded and abstract statement, and requires many definitions to make it useful.

  • The volume integral is over a \(k\) dimensional surface (manifold).
  • Integration over the boundary of the manifold \(V\) is indicated by \( \partial V \).
  • This manifold is assumed to be spanned by a parameterized vector \( \Bx(u^1, u^2, \cdots, u^k) \).
  • A curvilinear coordinate basis \( \setlr{ \Bx_i } \) can be defined on the manifold by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:40}
    \Bx_i \equiv \PD{u^i}{\Bx} \equiv \partial_i \Bx.
    \end{equation}

  • A dual basis \( \setlr{\Bx^i} \) reciprocal to the tangent vector basis \( \Bx_i \) can be calculated subject to the requirement \( \Bx_i \cdot \Bx^j = \delta_i^j \).
  • The vector derivative \(\boldpartial\), the projection of the gradient onto the tangent space of the manifold, is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:100}
    \boldpartial = \Bx^i \partial_i = \sum_{i=1}^k \Bx^i \PD{u^i}{}.
    \end{equation}
    A worked example illustrating these definitions follows the list.

  • The volume element is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:60}
    d^k \Bx = d\Bx_1 \wedge d\Bx_2 \cdots \wedge d\Bx_k,
    \end{equation}

    where

    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:80}
    d\Bx_i = \Bx_i du^i,\qquad \text{(no sum)}.
    \end{equation}

  • The volume element is non-zero on the manifold, or \( \Bx_1 \wedge \cdots \wedge \Bx_k \ne 0 \).
  • The surface area element \( d^{k-1} \Bx \), is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:120}
    d^{k-1} \Bx = \sum_{i = 1}^k (-1)^{k-i} d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k,
    \end{equation}

    where \( \widehat{d\Bx_i} \) indicates the omission of \( d\Bx_i \).

  • My proof for this theorem was restricted to a simple “rectangular” volume parameterized by the ranges
    \(
    [u^1(0), u^1(1) ] \otimes
    [u^2(0), u^2(1) ] \otimes \cdots \otimes
    [u^k(0), u^k(1) ] \)

  • The precise meaning that should be given to the oriented area integral is
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:140}
    \oint_{\partial V} d^{k-1} \Bx \cdot F
    =
    \sum_{i = 1}^k (-1)^{k-i} \int \evalrange{
    \lr{ \lr{ d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k } \cdot F }
    }{u^i = u^i(0)}{u^i(1)},
    \end{equation}

    where both the area form and the blade \( F \) are evaluated at the end points of the parameterization range.
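
To make the above definitions concrete, consider a polar parameterization of the plane, \( \Bx(u^1, u^2) = u^1 \lr{ \Be_1 \cos u^2 + \Be_2 \sin u^2 } \), writing \( \Be_1, \Be_2 \) for a fixed orthonormal basis. The definitions above give

\begin{equation*}
\begin{aligned}
\Bx_1 &= \Be_1 \cos u^2 + \Be_2 \sin u^2 \\
\Bx_2 &= u^1 \lr{ -\Be_1 \sin u^2 + \Be_2 \cos u^2 } \\
\Bx^1 &= \Bx_1 \\
\Bx^2 &= \frac{\Bx_2}{(u^1)^2} \\
\boldpartial &= \Bx_1 \PD{u^1}{} + \frac{\Bx_2}{(u^1)^2} \PD{u^2}{} \\
d^2 \Bx &= \Bx_1 \wedge \Bx_2 \, du^1 du^2 = u^1 \Be_1 \Be_2 \, du^1 du^2.
\end{aligned}
\end{equation*}

It is easily verified that \( \Bx_i \cdot \Bx^j = \delta_i^j \), and the volume element is non-zero everywhere except at the origin.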

After the work of stating exactly what is meant by this theorem, most of the proof follows from the fact that for \( s < k \) the volume curl dot product can be expanded as

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:160}
\int_V d^k \Bx \cdot (\boldpartial \wedge F) = \int_V d^k \Bx \cdot (\Bx^i \wedge \partial_i F) = \int_V \lr{ d^k \Bx \cdot \Bx^i } \cdot \partial_i F.
\end{equation}

Each of the \(du^i\) integrals can be evaluated directly, since each of the remaining \( d\Bx_j = \Bx_j du^j \), \( j \ne i \), is calculated with \( u^i \) held fixed. This allows for the integration over a “rectangular” parameterization region, proving the theorem for such a volume parameterization. A more general proof requires a triangulation of the volume and surface, but the basic principle of the theorem is evident without that additional work.
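
As a sanity check, consider the simplest case \( k = 1, s = 0 \): a scalar function \( f \) on a curve \( \Bx(u^1) \), with the wedge of the vector derivative with a scalar interpreted as the ordinary product. Then \( d^1 \Bx = \Bx_1 du^1 \), \( \boldpartial \wedge f = \Bx^1 \partial_1 f \), and

\begin{equation*}
\int_V d^1 \Bx \cdot \lr{ \boldpartial \wedge f }
= \int_{u^1(0)}^{u^1(1)} du^1 \lr{ \Bx_1 \cdot \Bx^1 } \PD{u^1}{f}
= f(u^1(1)) - f(u^1(0)),
\end{equation*}

which is the ordinary fundamental theorem of calculus, with the boundary integral reduced to evaluation of \( f \) at the endpoints.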

Fundamental Theorem of Calculus

There is a Geometric Algebra generalization of Stokes theorem that does not have the blade grade restriction. In [2] this is stated as

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:180}
\int_V d^k \Bx \boldpartial F = \oint_{\partial V} d^{k-1} \Bx F.
\end{equation}

A similar expression is used in [1], where it is also pointed out that there is a variant with the vector derivative acting to the left

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:200}
\int_V F d^k \Bx \boldpartial = \oint_{\partial V} F d^{k-1} \Bx.
\end{equation}

In [3] it is pointed out that a bidirectional formulation is possible, providing the most general expression of the Fundamental Theorem of (Geometric) Calculus

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:220}
\boxed{
\int_V F d^k \Bx \boldpartial G = \oint_{\partial V} F d^{k-1} \Bx G.
}
\end{equation}

Here the vector derivative acts both to the left and right on \( F \) and \( G \). The specific action of this operator is
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:240}
\begin{aligned}
F \boldpartial G
&=
(F \boldpartial) G
+
F (\boldpartial G) \\
&=
(\partial_i F) \Bx^i G
+
F \Bx^i (\partial_i G).
\end{aligned}
\end{equation}

Setting \( F = 1 \) or \( G = 1 \) in \ref{eqn:fundamentalTheoremOfCalculus:220} recovers \ref{eqn:fundamentalTheoremOfCalculus:180} and \ref{eqn:fundamentalTheoremOfCalculus:200} respectively. The fundamental theorem can be demonstrated by direct expansion. With the vector derivative \( \boldpartial \) and its partials \( \partial_i \) acting bidirectionally, we have

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:260}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F d^k \Bx \Bx^i \partial_i G \\
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i + d^k \Bx \wedge \Bx^i } \partial_i G.
\end{aligned}
\end{equation}

Both the reciprocal frame vectors and the curvilinear basis span the tangent space of the manifold, since any reciprocal frame vector can be expanded in the curvilinear basis

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:280}
\Bx^i = \sum_j \lr{ \Bx^i \cdot \Bx^j } \Bx_j,
\end{equation}

so \( \Bx^i \in \textrm{span} \setlr{ \Bx_j, j \in [1,k] } \).
This means that \( d^k \Bx \wedge \Bx^i = 0 \), and

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:300}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i } \partial_i G \\
&=
\sum_{i = 1}^{k}
\int_V
du^1 du^2 \cdots \widehat{ du^i} \cdots du^k
F \lr{
(-1)^{k-i}
\Bx_1 \wedge \Bx_2 \cdots \widehat{\Bx_i} \cdots \wedge \Bx_k } \partial_i G du^i \\
&=
\sum_{i = 1}^{k}
(-1)^{k-i}
\int_{u^1}
\int_{u^2}
\cdots
\int_{u^{i-1}}
\int_{u^{i+1}}
\cdots
\int_{u^k}
\evalrange{ \lr{
F d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k G
}
}{u^i = u^i(0)}{u^i(1)}.
\end{aligned}
\end{equation}
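
The contraction used in the first step of \ref{eqn:fundamentalTheoremOfCalculus:300} follows from the standard blade-vector dot product expansion together with \( \Bx_j \cdot \Bx^i = \delta_j^i \), which leaves only a single surviving term

\begin{equation*}
d^k \Bx \cdot \Bx^i
= \lr{ \Bx_1 \wedge \Bx_2 \cdots \wedge \Bx_k } \cdot \Bx^i \, du^1 du^2 \cdots du^k
= (-1)^{k-i} \lr{ \Bx_1 \wedge \Bx_2 \cdots \widehat{\Bx_i} \cdots \wedge \Bx_k } du^1 du^2 \cdots du^k.
\end{equation*}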

Adding in the same notational sugar that we used in Stokes theorem, this proves the Fundamental theorem \ref{eqn:fundamentalTheoremOfCalculus:220} for “rectangular” parameterizations. Note that such a parameterization need not actually be rectangular.

Example: Application to Maxwell’s equation


Maxwell’s equation is an example of a first order gradient equation

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:320}
\grad F = \inv{\epsilon_0 c} J.
\end{equation}
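
Here \( F \) is the Faraday bivector and \( J \) the spacetime current. Those conventions are not restated in this post; one choice consistent with the \( 1/\epsilon_0 c \) factor, assumed for what follows, is

\begin{equation*}
F = \BE + I c \BB,
\qquad
J = \gamma_\mu J^\mu = \gamma_0 c \rho + \gamma_k J^k,
\end{equation*}

where \( I = \gamma_{0123} \) is the spacetime pseudoscalar.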

Integrating over a four-volume (where the vector derivative equals the gradient), and applying the Fundamental theorem, we have

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:340}
\inv{\epsilon_0 c} \int d^4 x J = \oint d^3 x F.
\end{equation}

Observe that the surface area element product with \( F \) has both vector and trivector terms. This can be demonstrated by considering some examples

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:360}
\begin{aligned}
\gamma_{012} \gamma_{01} &\propto \gamma_2 \\
\gamma_{012} \gamma_{23} &\propto \gamma_{013}.
\end{aligned}
\end{equation}
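
For instance, the first of these products can be expanded directly; since \( \gamma_0^2 \gamma_1^2 = -1 \) in either signature convention,

\begin{equation*}
\gamma_{012} \gamma_{01}
= \gamma_0 \gamma_1 \gamma_2 \gamma_0 \gamma_1
= \gamma_0 \gamma_1 \gamma_0 \gamma_1 \gamma_2
= - \gamma_0^2 \gamma_1^2 \gamma_2
= \gamma_2.
\end{equation*}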

On the other hand, the four-volume integral of \( J \) has only trivector parts. This means that the integral can be split into a pair of same-grade equations

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:380}
\begin{aligned}
\inv{\epsilon_0 c} \int d^4 x \cdot J &=
\oint \gpgradethree{ d^3 x F} \\
0 &=
\oint d^3 x \cdot F.
\end{aligned}
\end{equation}

The first can be put into a slightly tidier form using a duality transformation
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:400}
\begin{aligned}
\gpgradethree{ d^3 x F}
&=
-\gpgradethree{ d^3 x I^2 F} \\
&=
\gpgradethree{ I d^3 x I F} \\
&=
(I d^3 x) \wedge (I F).
\end{aligned}
\end{equation}

Letting \( n \Abs{d^3 x} = I d^3 x \), this gives

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:420}
\oint \Abs{d^3 x} n \wedge (I F) = \inv{\epsilon_0 c} \int d^4 x \cdot J.
\end{equation}

Note that \( n \) is normal to a three-volume subspace of the spacetime volume. For example, if one component of the spacetime surface area element is \( \gamma_{012} c dt dx dy \), then the normal to that component is \( \gamma_3 \).
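
That normal can also be computed directly. With the \( (+,-,-,-) \) signature assumed here, \( \gamma_{012}^2 = -1 \) and \( \gamma_3 \gamma_{012} = -\gamma_{012} \gamma_3 \), so

\begin{equation*}
I \gamma_{012}
= \gamma_{012} \gamma_3 \gamma_{012}
= -\gamma_{012} \gamma_{012} \gamma_3
= \gamma_3.
\end{equation*}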

A second set of duality transformations

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:440}
\begin{aligned}
n \wedge (IF)
&=
\gpgradethree{ n I F} \\
&=
-\gpgradethree{ I n F} \\
&=
-\gpgradethree{ I (n \cdot F)} \\
&=
-I (n \cdot F),
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:460}
\begin{aligned}
I d^4 x \cdot J
&=
\gpgradeone{ I d^4 x \cdot J } \\
&=
\gpgradeone{ I d^4 x J } \\
&=
\gpgradeone{ (I d^4 x) J } \\
&=
(I d^4 x) J,
\end{aligned}
\end{equation}

can further tidy things up, leaving us with

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:500}
\boxed{
\begin{aligned}
\oint \Abs{d^3 x} n \cdot F &= \inv{\epsilon_0 c} \int (I d^4 x) J \\
\oint d^3 x \cdot F &= 0.
\end{aligned}
}
\end{equation}

The Fundamental theorem of calculus immediately provides relations between the Faraday bivector \( F \) and the four-current \( J \).

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[3] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21(1):221–231, 2011. URL https://arxiv.org/abs/0809.4526.

Geometric algebra notes collection split into two volumes

November 10, 2015 math and physics play

I’ve now split my (way too big) Exploring physics with Geometric Algebra into two volumes:

Each of these is now a much more manageable size, which should facilitate removing the redundancies in these notes and making them more properly book-like.

Note that I’ve also previously moved “Exploring Geometric Algebra” content related to:

  • Lagrangians
  • Hamiltonians
  • Noether’s theorem

into my classical mechanics collection (449 pages).

Schwartz inequality in bra-ket notation

July 6, 2015 phy1520

[Click here for a PDF of this post with nicer formatting]

Motivation

In [2] the Schwartz inequality

\begin{equation}\label{eqn:qmSchwartz:20}
\boxed{
\braket{a}{a}
\braket{b}{b}
\ge \Abs{\braket{a}{b}}^2,
}
\end{equation}

is used in the derivation of the uncertainty relation. The proof of the Schwartz inequality uses a sneaky substitution that doesn’t seem obvious, and is even less obvious since there is a typo in the value to be substituted. Let’s understand where that sneakiness is coming from.
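
As a quick concrete check of \ref{eqn:qmSchwartz:20}, for two-component states with \( \ket{a} = (1, 0) \) and \( \ket{b} = (1, i) \),

\begin{equation*}
\braket{a}{a} \braket{b}{b} = (1)(2) = 2 \ge \Abs{\braket{a}{b}}^2 = 1.
\end{equation*}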

Without being sneaky

My ancient first-year linear algebra text [1] contains a non-sneaky proof, but it only works for real vector spaces. Recast in bra-ket notation, this method examines the bounds of the norms of sums and differences of unit states (i.e. \( \braket{a}{a} = \braket{b}{b} = 1 \)).

\begin{equation}\label{eqn:qmSchwartz:40}
\braket{a - b}{a - b}
= \braket{a}{a} + \braket{b}{b} - \braket{a}{b} - \braket{b}{a}
= 2 - 2 \textrm{Re} \braket{a}{b}
\ge 0,
\end{equation}

so
\begin{equation}\label{eqn:qmSchwartz:60}
1 \ge \textrm{Re} \braket{a}{b}.
\end{equation}

Similarly

\begin{equation}\label{eqn:qmSchwartz:80}
\braket{a + b}{a + b}
= \braket{a}{a} + \braket{b}{b} + \braket{a}{b} + \braket{b}{a}
= 2 + 2 \textrm{Re} \braket{a}{b}
\ge 0,
\end{equation}

so
\begin{equation}\label{eqn:qmSchwartz:100}
\textrm{Re} \braket{a}{b} \ge -1.
\end{equation}

This means that for normalized state vectors

\begin{equation}\label{eqn:qmSchwartz:120}
-1 \le \textrm{Re} \braket{a}{b} \le 1,
\end{equation}

or
\begin{equation}\label{eqn:qmSchwartz:140}
\Abs{\textrm{Re} \braket{a}{b}} \le 1.
\end{equation}

Writing out the unit vectors explicitly, that last inequality is

\begin{equation}\label{eqn:qmSchwartz:180}
\Abs{ \textrm{Re} \braket{ \frac{a}{\sqrt{\braket{a}{a}}} }{ \frac{b}{\sqrt{\braket{b}{b}}} } } \le 1,
\end{equation}

squaring and rearranging gives

\begin{equation}\label{eqn:qmSchwartz:200}
\Abs{\textrm{Re} \braket{a}{b}}^2 \le
\braket{a}{a}
\braket{b}{b}.
\end{equation}

This is similar to, but not identical to, the Schwartz inequality. Since \( \Abs{\textrm{Re} \braket{a}{b}} \le \Abs{\braket{a}{b}} \), the Schwartz inequality cannot be demonstrated with this argument. This first-year algebra method works nicely for demonstrating the inequality for real vector spaces, so a different argument is required for a complex vector space (i.e. a quantum mechanical state space).

Arguing with projected and rejected components

Notice that equality holds when the vectors are collinear, and that the inequality gap is largest when the vectors are normal to each other. Given those geometrical observations, it seems reasonable to examine the norms of projected or rejected components of a vector. To do so in bra-ket notation, the correct form of a projection operation must be determined. Care is required to get the ordering of the bra-kets right when expressing such a projection.

Suppose we wish to calculate the rejection of \( \ket{a} \) from \( \ket{b} \), that is \( \ket{b - \alpha a}\), such that

\begin{equation}\label{eqn:qmSchwartz:220}
0
= \braket{a}{b - \alpha a}
= \braket{a}{b} - \alpha \braket{a}{a},
\end{equation}

or
\begin{equation}\label{eqn:qmSchwartz:240}
\alpha =
\frac{\braket{a}{b} }{ \braket{a}{a} }.
\end{equation}

Therefore, the projection of \( \ket{b} \) on \( \ket{a} \) is

\begin{equation}\label{eqn:qmSchwartz:260}
\textrm{Proj}_{\ket{a}} \ket{b}
= \frac{\braket{a}{b} }{ \braket{a}{a} } \ket{a}
= \frac{\braket{b}{a}^\conj }{ \braket{a}{a} } \ket{a}.
\end{equation}

The conventional way to write this in QM is in the operator form

\begin{equation}\label{eqn:qmSchwartz:300}
\textrm{Proj}_{\ket{a}} \ket{b}
= \frac{\ket{a}\bra{a}}{\braket{a}{a}} \ket{b}.
\end{equation}

In this form the rejection of \( \ket{a} \) from \( \ket{b} \) can be expressed as

\begin{equation}\label{eqn:qmSchwartz:280}
\textrm{Rej}_{\ket{a}} \ket{b} = \ket{b} - \frac{\ket{a}\bra{a}}{\braket{a}{a}} \ket{b}.
\end{equation}

This state vector is normal to \( \ket{a} \) as desired

\begin{equation}\label{eqn:qmSchwartz:320}
\braket{a}{b - \frac{\braket{a}{b} }{ \braket{a}{a} } a }
=
\braket{a}{ b} - \frac{ \braket{a}{b} }{ \braket{a}{a} } \braket{a}{a}
=
\braket{a}{ b} - \braket{a}{b}
= 0.
\end{equation}

How about its length? That is

\begin{equation}\label{eqn:qmSchwartz:340}
\begin{aligned}
\braket{b - \frac{\braket{a}{b} }{ \braket{a}{a} } a}{b - \frac{\braket{a}{b} }{ \braket{a}{a} } a }
&=
\braket{b}{b} - 2 \frac{\Abs{\braket{a}{b}}^2}{\braket{a}{a}} +\frac{\Abs{\braket{a}{b}}^2 }{ \braket{a}{a}^2 } \braket{a}{a} \\
&=
\braket{b}{b} - \frac{\Abs{\braket{a}{b}}^2}{\braket{a}{a}}.
\end{aligned}
\end{equation}

Observe that this must be greater than or equal to zero, so

\begin{equation}\label{eqn:qmSchwartz:360}
\braket{b}{b} \ge \frac{ \Abs{ \braket{a}{b} }^2 }{ \braket{a}{a} }.
\end{equation}

Rearranging this gives \ref{eqn:qmSchwartz:20} as desired. The Schwartz proof in [2] obscures the geometry involved and starts with

\begin{equation}\label{eqn:qmSchwartz:380}
\braket{b + \lambda a}{b + \lambda a} \ge 0,
\end{equation}

where the “proof” is nothing more than a statement that one can “pick” \( \lambda = -\braket{a}{b}/\braket{a}{a} \). The Pythagorean context of the Schwartz inequality is not mentioned, and without thinking about it, one is left wondering from what sort of magic hat that \( \lambda \) selection was pulled.
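
That choice is just the negative of the projection coefficient \ref{eqn:qmSchwartz:240}, so the substitution rebuilds exactly the squared length of the rejection

\begin{equation*}
\braket{b + \lambda a}{b + \lambda a}
= \braket{b}{b} + \lambda \braket{b}{a} + \lambda^\conj \braket{a}{b} + \Abs{\lambda}^2 \braket{a}{a}
= \braket{b}{b} - \frac{\Abs{\braket{a}{b}}^2}{\braket{a}{a}},
\end{equation*}

and requiring this to be non-negative is exactly \ref{eqn:qmSchwartz:360}.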

References

[1] W Keith Nicholson. Elementary linear algebra, with applications. PWS-Kent Publishing Company, 1990.

[2] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.

Parallel projection of electromagnetic fields with Geometric Algebra

March 8, 2015 ece1229

[Click here for a PDF of this post with nicer formatting]

When computing the components of a polarized reflecting ray that were parallel or perpendicular to the reflecting surface, it was found that the electric and magnetic fields could be written as

\begin{equation}\label{eqn:gaFieldProjection:280}
\BE = \lr{ \BE \cdot \pcap } \pcap + \lr{ \BE \cdot \qcap } \qcap = E_\parallel \pcap + E_\perp \qcap
\end{equation}
\begin{equation}\label{eqn:gaFieldProjection:300}
\BH = \lr{ \BH \cdot \pcap } \pcap + \lr{ \BH \cdot \qcap } \qcap = H_\parallel \pcap + H_\perp \qcap.
\end{equation}

where the unit vector \( \pcap \), which lies both in the reflecting plane and in the electromagnetic plane (tangential to the wave vector direction), was

\begin{equation}\label{eqn:gaFieldProjection:340}
\pcap = \frac{\kcap \cross \ncap}{\Abs{\kcap \cross \ncap}}
\end{equation}
\begin{equation}\label{eqn:gaFieldProjection:360}
\qcap = \kcap \cross \pcap.
\end{equation}

Here \( \qcap \) is perpendicular to \( \pcap \) but lies in the electromagnetic plane. This logically subdivides the fields into two pairs, one with the electric field parallel to the reflection plane

\begin{equation}\label{eqn:gaFieldProjection:240}
\begin{aligned}
\BE_1 &= \lr{ \BE \cdot \pcap } \pcap = E_\parallel \pcap \\
\BH_1 &= \lr{ \BH \cdot \qcap } \qcap = H_\perp \qcap,
\end{aligned}
\end{equation}

and one with the magnetic field parallel to the reflection plane

\begin{equation}\label{eqn:gaFieldProjection:380}
\begin{aligned}
\BH_2 &= \lr{ \BH \cdot \pcap } \pcap = H_\parallel \pcap \\
\BE_2 &= \lr{ \BE \cdot \qcap } \qcap = E_\perp \qcap.
\end{aligned}
\end{equation}

Expressed in Geometric Algebra form, each of these pairs of fields should be thought of as components of a single multivector field. That is

\begin{equation}\label{eqn:gaFieldProjection:400}
F_1 = \BE_1 + c \mu_0 \BH_1 I
\end{equation}
\begin{equation}\label{eqn:gaFieldProjection:460}
F_2 = \BE_2 + c \mu_0 \BH_2 I
\end{equation}

where the original total field is

\begin{equation}\label{eqn:gaFieldProjection:420}
F = \BE + c \mu_0 \BH I.
\end{equation}

In \ref{eqn:gaFieldProjection:400} we have a composite projection operation, finding the portion of the electric field that lies in the reflection plane, and simultaneously finding the component of the magnetic field that lies perpendicular to that (while still lying in the tangential plane of the electromagnetic field). In \ref{eqn:gaFieldProjection:460} the magnetic field is projected onto the reflection plane and a component of the electric field that lies in the tangential (to the wave vector direction) plane is computed.

If we operate only on the complete multivector field, can we find these composite projection field components in a single operation, instead of working with the individual electric and magnetic fields?

Working towards this goal, it is worthwhile to point out consequences of the assumption that the fields are plane waves (or equivalently, far field spherical waves). For such a wave we have

\begin{equation}\label{eqn:gaFieldProjection:480}
\begin{aligned}
\BH
&= \inv{\mu_0} \kcap \cross \BE \\
&= \inv{\mu_0} (-I)\lr{ \kcap \wedge \BE } \\
&= \inv{\mu_0} (-I)\lr{ \kcap \BE - \kcap \cdot \BE} \\
&= -\frac{I}{\mu_0} \kcap \BE,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:gaFieldProjection:520}
\mu_0 \BH I = \kcap \BE.
\end{equation}

This made use of the identity \( \Ba \wedge \Bb = I \lr{\Ba \cross \Bb} \), and the fact that the electric field is perpendicular to the wave vector direction. The total multivector field is

\begin{equation}\label{eqn:gaFieldProjection:500}
\begin{aligned}
F
&= \BE + c \mu_0 \BH I \\
&= \lr{ 1 + c \kcap } \BE.
\end{aligned}
\end{equation}

Expansion of the magnetic field component that is perpendicular to the reflection plane gives

\begin{equation}\label{eqn:gaFieldProjection:540}
\begin{aligned}
\mu_0 H_\perp
&= \mu_0 \BH \cdot \qcap \\
&= \gpgradezero{ \lr{-\kcap \BE I} \qcap } \\
&= -\gpgradezero{ \kcap \BE I \lr{ \kcap \cross \pcap} } \\
&= \gpgradezero{ \kcap \BE I I \lr{ \kcap \wedge \pcap} } \\
&= -\gpgradezero{ \kcap \BE \kcap \pcap } \\
&= \gpgradezero{ \kcap \kcap \BE \pcap } \\
&= \BE \cdot \pcap,
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:gaFieldProjection:560}
F_1
= (\pcap + c I \qcap ) \BE \cdot \pcap.
\end{equation}
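
Note that \( \pcap, \qcap, \kcap \) form a right-handed orthonormal triad, since \( \pcap \cdot \kcap = 0 \) and

\begin{equation*}
\pcap \cross \qcap = \pcap \cross \lr{ \kcap \cross \pcap } = \kcap \lr{ \pcap \cdot \pcap } - \pcap \lr{ \pcap \cdot \kcap } = \kcap,
\end{equation*}

so \( \pcap \qcap \kcap = \qcap \kcap \pcap = \kcap \pcap \qcap = I \).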

Since \( \qcap \kcap \pcap = I \), the component of the complete multivector field in the \( \pcap \) direction is

\begin{equation}\label{eqn:gaFieldProjection:580}
\begin{aligned}
F_1
&= (\pcap - c \pcap \kcap ) \BE \cdot \pcap \\
&= \pcap (1 - c \kcap ) \BE \cdot \pcap \\
&= (1 + c \kcap ) \pcap \BE \cdot \pcap.
\end{aligned}
\end{equation}

It is reasonable to expect that \( F_2 \) has a similar form, but with \( \pcap \rightarrow \qcap \). This is verified by expansion

\begin{equation}\label{eqn:gaFieldProjection:600}
\begin{aligned}
F_2
&= E_\perp \qcap + c \lr{ \mu_0 H_\parallel } \pcap I \\
&= \lr{\BE \cdot \qcap} \qcap + c \gpgradezero{ - \kcap \BE I \kcap \qcap I } \lr{\kcap \qcap I} I \\
&= \lr{\BE \cdot \qcap} \qcap + c \gpgradezero{ \kcap \BE \kcap \qcap } \kcap \qcap (-1) \\
&= \lr{\BE \cdot \qcap} \qcap + c \gpgradezero{ \kcap \BE (-\qcap \kcap) } \kcap \qcap (-1) \\
&= \lr{\BE \cdot \qcap} \qcap + c \gpgradezero{ \kcap \kcap \BE \qcap } \kcap \qcap \\
&= \lr{ 1 + c \kcap } \qcap \lr{ \BE \cdot \qcap }.
\end{aligned}
\end{equation}

This, like \ref{eqn:gaFieldProjection:580} before it, makes a lot of sense. The original field can be written

\begin{equation}\label{eqn:gaFieldProjection:620}
F = \lr{ \Ecap + c \lr{ \kcap \cross \Ecap } I } \BE \cdot \Ecap,
\end{equation}

where the leading multivector term contains all the directional dependence of the electric and magnetic field components, and the trailing scalar has the magnitude of the field with respect to the reference direction \( \Ecap \).

We have the same structure after projecting \( \BE \) onto either the \( \pcap \) or the \( \qcap \) direction respectively

\begin{equation}\label{eqn:gaFieldProjection:660}
F_1 = \lr{ \pcap + c \lr{ \kcap \cross \pcap } I} \BE \cdot \pcap
\end{equation}
\begin{equation}\label{eqn:gaFieldProjection:680}
F_2 = \lr{ \qcap + c \lr{ \kcap \cross \qcap } I} \BE \cdot \qcap.
\end{equation}

The next question is how to achieve this projection operation directly in terms of \( F \) and \( \pcap, \qcap \), without resorting to expressing \( F \) in terms of \( \BE \) and \( \BH \). I’ve not yet been able to determine the structure of that operation.