## Motivation.

I was asked about geometric algebra equivalents for a couple identities found in [1], one for line integrals
\label{eqn:more_feynmans_trick:20}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx =
\int_{C(t)} \lr{
\PD{t}{\Bf} + \spacegrad \lr{ \Bv \cdot \Bf } - \Bv \cross \lr{ \spacegrad \cross \Bf }
}
\cdot d\Bx,

and one for area integrals
\label{eqn:more_feynmans_trick:40}
\ddt{} \int_{S(t)} \Bf \cdot d\BA =
\int_{S(t)} \lr{
\PD{t}{\Bf} + \Bv \lr{ \spacegrad \cdot \Bf } - \spacegrad \cross \lr{ \Bv \cross \Bf }
}
\cdot d\BA.

Both of these look questionable at first glance, because neither has a boundary term. However, they can be transformed with Stokes' theorem to
\label{eqn:more_feynmans_trick:60}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx
=
\int_{C(t)} \lr{
\PD{t}{\Bf} - \Bv \cross \lr{ \spacegrad \cross \Bf }
}
\cdot d\Bx
+
\evalbar{\Bv \cdot \Bf }{\Delta C},

and
\label{eqn:more_feynmans_trick:80}
\ddt{} \int_{S(t)} \Bf \cdot d\BA =
\int_{S(t)} \lr{
\PD{t}{\Bf} + \Bv \lr{ \spacegrad \cdot \Bf }
}
\cdot d\BA
-
\oint_{\partial S(t)} \lr{ \Bv \cross \Bf } \cdot d\Bx.

The area integral derivative is now seen to be a variation of one of the special cases of the Leibniz integral rule; see, for example, [2]. The line integral relationship is admittedly less well used, and doesn't show up on the Wikipedia page.

My end goal will be to evaluate the derivative of a general multivector line integral
\label{eqn:more_feynmans_trick:100}
\ddt{} \int_{C(t)} F d\Bx G,

and area integral
\label{eqn:more_feynmans_trick:120}
\ddt{} \int_{S(t)} F d^2\Bx G.

We’ve derived that line integral result in a different fashion previously, but it’s interesting to see a different approach. Perhaps this approach will lend itself nicely to non-scalar integrands?

## Definition 1.1: Convective derivative.

The convective derivative of $$\phi(t, \Bx(t))$$ is defined as
\begin{equation*}
\frac{D \phi}{D t} = \lim_{\Delta t \rightarrow 0} \frac{ \phi(t + \Delta t, \Bx + \Delta t \Bv) - \phi(t, \Bx)}{\Delta t},
\end{equation*}
where $$\Bv = d\Bx/dt$$.

## Theorem 1.1: Convective derivative.

The convective derivative operator may be written
\begin{equation*}
\frac{D}{D t} = \PD{t}{} + \Bv \cdot \spacegrad.
\end{equation*}

### Start proof:

Let’s write
\label{eqn:more_feynmans_trick:140}
\begin{aligned}
v_0 &= 1 \\
u_0 &= t + v_0 h \\
u_k &= x_k + v_k h, k \in [1,3] \\
\end{aligned}

The limit, if it exists, must equal the sum of the individual limits
\label{eqn:more_feynmans_trick:160}
\frac{D \phi}{D t} = \sum_{\alpha = 0}^3 \lim_{h \rightarrow 0} \frac{ \phi(u_\alpha) - \phi(t, \Bx)}{h},

but that is just a sum of derivatives, which can be evaluated by the chain rule
\label{eqn:more_feynmans_trick:180}
\begin{aligned}
\frac{D \phi}{D t}
&= \sum_{\alpha = 0}^{3} \evalbar{ \PD{u_\alpha}{\phi(u_\alpha)} \PD{h}{u_\alpha} }{h = 0} \\
&= \PD{t}{\phi} + \sum_{k = 1}^3 v_k \PD{x_k}{\phi} \\
&= \lr{ \PD{t}{} + \Bv \cdot \spacegrad } \phi.
\end{aligned}
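As a sanity check, we can verify this convective derivative expansion with sympy, picking a hypothetical scalar field and trajectory (both chosen arbitrarily for illustration):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)

# hypothetical example: a scalar field and a moving point (illustration only)
phi = x**2 * y + t * z
X = sp.Matrix([sp.cos(t), sp.sin(t), t**2])   # x(t)
v = X.diff(t)                                 # v = dx/dt

on_path = {x: X[0], y: X[1], z: X[2]}

# left side: differentiate phi along the trajectory directly
lhs = sp.diff(phi.subs(on_path), t)

# right side: (d_t + v . grad) phi, then evaluate on the trajectory
grad_phi = sp.Matrix([sp.diff(phi, c) for c in (x, y, z)])
rhs = (sp.diff(phi, t) + v.dot(grad_phi)).subs(on_path)

print(sp.simplify(lhs - rhs))  # 0
```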

## Definition 1.2: Hestenes overdot notation.

We may use a dot or a tick with a derivative operator, to designate the scope of that operator, allowing it to operate bidirectionally, or in a restricted fashion, holding specific multivector elements constant. This is called the Hestenes overdot notation. Illustrating by example, with multivectors $$F, G$$, and allowing the gradient to act bidirectionally, we have
\begin{equation*}
\begin{aligned}
F \stackrel{ \leftrightarrow }{\spacegrad} G
&=
\dot{F} \dot{\spacegrad} G
+
F \dot{\spacegrad} \dot{G} \\
&=
\sum_i \lr{ \partial_i F } \Be_i G + \sum_i F \Be_i \lr{ \partial_i G }.
\end{aligned}
\end{equation*}
The last step is a precise statement of the meaning of the overdot notation, showing that we hold the position of the vector elements of the gradient constant, while the (scalar) partials are allowed to commute, acting on the designated elements.

We will need one additional identity.

## Lemma 1.1: Gradient of dot product (one constant vector.)

Given vectors $$\Ba, \Bb$$ the gradient of their dot product is given by
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb }
= \lr{ \Bb \cdot \spacegrad } \Ba - \Bb \cdot \lr{ \spacegrad \wedge \Ba }
+ \lr{ \Ba \cdot \spacegrad } \Bb - \Ba \cdot \lr{ \spacegrad \wedge \Bb }.
\end{equation*}
If $$\Bb$$ is constant, this reduces to
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb }
=
\dot{\spacegrad} \lr{ \dot{\Ba} \cdot \Bb }
= \lr{ \Bb \cdot \spacegrad } \Ba - \Bb \cdot \lr{ \spacegrad \wedge \Ba }.
\end{equation*}

### Start proof:

The $$\Bb$$ constant case is trivial to prove. We use $$\Ba \cdot \lr{ \Bb \wedge \Bc } = \lr{ \Ba \cdot \Bb} \Bc - \Bb \lr{ \Ba \cdot \Bc }$$, and simply expand the dot product of $$\Bb$$ with the curl
\label{eqn:more_feynmans_trick:200}
\Bb \cdot \lr{ \spacegrad \wedge \Ba }
=
\Bb \cdot \lr{ \dot{\spacegrad} \wedge \dot{\Ba} }
= \lr{ \Bb \cdot \dot{\spacegrad} } \dot{\Ba} - \dot{\spacegrad} \lr{ \dot{\Ba} \cdot \Bb }.
Rearrangement proves the constant $$\Bb$$ identity. The more general statement follows from a chain rule evaluation of the gradient, holding each vector constant in turn
\label{eqn:more_feynmans_trick:320}
\spacegrad \lr{ \Ba \cdot \Bb }
=
\dot{\spacegrad} \lr{ \dot{\Ba} \cdot \Bb }
+
\dot{\spacegrad} \lr{ \dot{\Bb} \cdot \Ba }.
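Translating to conventional vector algebra, where $$\Bb \cdot \lr{ \spacegrad \wedge \Ba } = -\Bb \cross \lr{ \spacegrad \cross \Ba }$$, the lemma is the classic expansion of $$\spacegrad \lr{ \Ba \cdot \Bb }$$. Here is a sympy spot check of that vector algebra form, using arbitrarily chosen (hypothetical) fields:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

# hypothetical example fields, chosen only for illustration
a = sp.Matrix([x*y, y*z, z*x])
b = sp.Matrix([y**2, z, x*z])

def grad(s):
    return sp.Matrix([sp.diff(s, c) for c in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def directional(v, F):
    # (v . grad) F, applied componentwise
    return sp.Matrix([v.dot(grad(F[i])) for i in range(3)])

# b . (grad ^ a) = -b x (curl a), so the lemma in cross product form reads:
lhs = grad(a.dot(b))
rhs = directional(b, a) + b.cross(curl(a)) + directional(a, b) + a.cross(curl(b))

print(sp.simplify(lhs - rhs))  # zero vector
```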

## Time derivative of a line integral of a vector field.

We now have all our tools assembled, and can proceed to evaluate the derivative of the line integral. We want to show that

## Theorem 1.2:

Given a path parameterized by $$\Bx(\lambda)$$, where $$d\Bx = (\PDi{\lambda}{\Bx}) d\lambda$$, with points along a curve $$C(t)$$ moving through space at a velocity $$\Bv(\Bx(\lambda))$$, and a vector function $$\Bf = \Bf(t, \Bx(\lambda))$$,
\begin{equation*}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx =
\int_{C(t)} \lr{
\PD{t}{\Bf} + \spacegrad \lr{ \Bf \cdot \Bv } + \Bv \cdot \lr{ \spacegrad \wedge \Bf}
} \cdot d\Bx.
\end{equation*}

### Start proof:

I’m going to avoid thinking about the rigorous details, like any requirements for curve continuity and smoothness. We will, however, specify that the end points are given by $$[\lambda_1, \lambda_2]$$. Expanding out the parameterization, we seek to evaluate
\label{eqn:more_feynmans_trick:240}
\int_{C(t)} \Bf \cdot d\Bx
=
\int_{\lambda_1}^{\lambda_2} \Bf(t, \Bx(\lambda) ) \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda.

The parametric form nicely moves all the boundary time dependence into the integrand, allowing us to write
\label{eqn:more_feynmans_trick:260}
\begin{aligned}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx
&=
\lim_{\Delta t \rightarrow 0}
\inv{\Delta t}
\int_{\lambda_1}^{\lambda_2}
\lr{ \Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda))) \cdot \frac{\partial}{\partial \lambda} \lr{ \Bx + \Delta t \Bv(\Bx(\lambda)) } - \Bf(t, \Bx(\lambda)) \cdot \frac{\partial \Bx}{\partial \lambda} } d\lambda \\
&=
\lim_{\Delta t \rightarrow 0}
\inv{\Delta t}
\int_{\lambda_1}^{\lambda_2}
\lr{ \Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda))) - \Bf(t, \Bx) } \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda \\
&\quad +
\lim_{\Delta t \rightarrow 0}
\int_{\lambda_1}^{\lambda_2}
\Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda))) \cdot \PD{\lambda}{} \Bv(\Bx(\lambda)) d\lambda \\
&=
\int_{\lambda_1}^{\lambda_2}
\frac{D \Bf}{Dt} \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda +
\lim_{\Delta t \rightarrow 0}
\int_{\lambda_1}^{\lambda_2}
\Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda))) \cdot \frac{\partial}{\partial \lambda} \Bv(\Bx(\lambda)) d\lambda \\
&=
\int_{\lambda_1}^{\lambda_2}
\lr{ \PD{t}{\Bf} + \lr{ \Bv \cdot \spacegrad } \Bf } \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda
+
\int_{\lambda_1}^{\lambda_2}
\Bf \cdot \frac{\partial \Bv}{\partial \lambda} d\lambda.
\end{aligned}

At this point, we have a $$d\Bx$$ in the first integrand, and a $$d\Bv$$ in the second. We can expand the second integrand, evaluating the derivative using the chain rule, to find
\label{eqn:more_feynmans_trick:280}
\begin{aligned}
\Bf \cdot \PD{\lambda}{\Bv}
&=
\sum_i \Bf \cdot \PD{x_i}{\Bv} \PD{\lambda}{x_i} \\
&=
\sum_{i,j} f_j \PD{x_i}{v_j} \PD{\lambda}{x_i} \\
&=
\sum_{j} f_j \lr{ \spacegrad v_j } \cdot \PD{\lambda}{\Bx} \\
&=
\sum_{j} \lr{ \dot{\spacegrad} f_j \dot{v_j} } \cdot \PD{\lambda}{\Bx} \\
&=
\dot{\spacegrad} \lr{ \Bf \cdot \dot{\Bv} } \cdot \PD{\lambda}{\Bx}.
\end{aligned}

Substitution gives
\label{eqn:more_feynmans_trick:300}
\begin{aligned}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx
&=
\int_{C(t)}
\lr{ \PD{t}{\Bf} + \lr{ \Bv \cdot \spacegrad } \Bf + \dot{\spacegrad} \lr{ \Bf \cdot \dot{\Bv} } } \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda \\
&=
\int_{C(t)}
\lr{ \PD{t}{\Bf}
+ \spacegrad \lr{ \Bf \cdot \Bv }
+ \lr{ \Bv \cdot \spacegrad } \Bf
- \dot{\spacegrad} \lr{ \dot{\Bf} \cdot \Bv }
} \cdot d\Bx \\
&=
\int_{C(t)}
\lr{ \PD{t}{\Bf}
+ \spacegrad \lr{ \Bf \cdot \Bv }
+ \Bv \cdot \lr{ \spacegrad \wedge \Bf }
} \cdot d\Bx,
\end{aligned}

where the last simplification utilizes lemma 1.1.

### End proof.

Since $$\Ba \cdot \lr{ \Bb \wedge \Bc } = -\Ba \cross \lr{ \Bb \cross \Bc }$$, observe that we have also recovered \ref{eqn:more_feynmans_trick:20}.
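For the skeptical reader, here is a sympy spot check of \ref{eqn:more_feynmans_trick:20}, for a time independent field. The field, velocity field, and curve (a quarter circle rigidly rotated by the flow) are all hypothetical choices, picked only for illustration:

```python
import sympy as sp

x, y, z, lam, t = sp.symbols('x y z lambda t', real=True)

# hypothetical static field f, and a rotational velocity field v
f = sp.Matrix([y**2, x, 0])
v = sp.Matrix([-y, x, 0])

def grad(s):
    return sp.Matrix([sp.diff(s, c) for c in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# right hand side integrand for static f: grad(v . f) - v x (curl f)
rhs_field = grad(v.dot(f)) - v.cross(curl(f))

# the curve C(t): a quarter circle carried along by the rotational flow,
# so that the point velocity dX/dt matches v evaluated on the curve
X = sp.Matrix([sp.cos(lam + t), sp.sin(lam + t), 0])
Xlam = X.diff(lam)
on_curve = {x: X[0], y: X[1], z: X[2]}

lhs = sp.diff(sp.integrate(f.subs(on_curve).dot(Xlam), (lam, 0, sp.pi / 2)), t)
rhs = sp.integrate(rhs_field.subs(on_curve).dot(Xlam), (lam, 0, sp.pi / 2))

print(sp.simplify(lhs - rhs))  # 0
```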

## Time derivative of a line integral of a bivector field.

For a bivector line integral, we have

## Theorem 1.3:

Given a path parameterized by $$\Bx(\lambda)$$, where $$d\Bx = (\PDi{\lambda}{\Bx}) d\lambda$$, with points along a $$C(t)$$ moving through space at a velocity $$\Bv(\Bx(\lambda))$$, and a bivector function $$B = B(t, \Bx(\lambda))$$,
\begin{equation*}
\ddt{} \int_{C(t)} B \cdot d\Bx =
\int_{C(t)}
\PD{t}{B} \cdot d\Bx + \lr{ d\Bx \cdot \spacegrad } \lr{ B \cdot \Bv } + \lr{ \lr{ \Bv \wedge d\Bx } \cdot \spacegrad } \cdot B.
\end{equation*}

### Start proof:

Skipping the steps that follow our previous procedure exactly, we have
\label{eqn:more_feynmans_trick:340}
\ddt{} \int_{C(t)} B \cdot d\Bx =
\int_{C(t)}
\PD{t}{B} \cdot d\Bx + \lr{ \Bv \cdot \spacegrad } B \cdot d\Bx + B \cdot d\Bv.

Since
\label{eqn:more_feynmans_trick:360}
\begin{aligned}
B \cdot d\Bv
&= B \cdot \PD{\lambda}{\Bv} d\lambda \\
&= B \cdot \PD{x_i}{\Bv} \PD{\lambda}{x_i} d\lambda \\
&= B \cdot \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv },
\end{aligned}

we have
\label{eqn:more_feynmans_trick:380}
\ddt{} \int_{C(t)} B \cdot d\Bx
=
\int_{C(t)}
\PD{t}{B} \cdot d\Bx + \lr{ \Bv \cdot \spacegrad } B \cdot d\Bx + B \cdot \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv }.

Let’s reduce the last two terms in this integrand
\label{eqn:more_feynmans_trick:400}
\begin{aligned}
\lr{ \Bv \cdot \spacegrad } B \cdot d\Bx + B \cdot \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv }
&=
\lr{ \Bv \cdot \spacegrad } B \cdot d\Bx -
\lr{ d\Bx \cdot \dot{\spacegrad} } \lr{ \dot{\Bv} \cdot B } \\
&=
\lr{ \Bv \cdot \spacegrad } B \cdot d\Bx
- \lr{ d\Bx \cdot \spacegrad} \lr{ \Bv \cdot B }
+ \lr{ d\Bx \cdot \dot{\spacegrad} } \lr{ \Bv \cdot \dot{B} } \\
&=
\lr{ d\Bx \cdot \spacegrad} \lr{ B \cdot \Bv }
+ \lr{ \Bv \cdot \dot{\spacegrad} } \dot{B} \cdot d\Bx
+ \lr{ d\Bx \cdot \dot{\spacegrad} } \lr{ \Bv \cdot \dot{B} } \\
&=
\lr{ d\Bx \cdot \spacegrad} \lr{ B \cdot \Bv }
+ \lr{ \Bv \lr{ d\Bx \cdot \spacegrad } - d\Bx \lr{ \Bv \cdot \spacegrad } } \cdot B \\
&=
\lr{ d\Bx \cdot \spacegrad} \lr{ B \cdot \Bv }
+ \lr{ \lr{ \Bv \wedge d\Bx } \cdot \spacegrad } \cdot B.
\end{aligned}

Back substitution finishes the job.

## Theorem 1.4: Time derivative of multivector line integral.

Given a path parameterized by $$\Bx(\lambda)$$, where $$d\Bx = (\PDi{\lambda}{\Bx}) d\lambda$$, with points along a $$C(t)$$ moving through space at a velocity $$\Bv(\Bx(\lambda))$$, and multivector functions $$M = M(t, \Bx(\lambda)), N = N(t, \Bx(\lambda))$$,
\begin{equation*}
\ddt{} \int_{C(t)} M d\Bx N =
\int_{C(t)}
\frac{D}{D t} M d\Bx N + M \lr{ \lr{ d\Bx \cdot \dot{\spacegrad} } \dot{\Bv} } N.
\end{equation*}

It is useful to write this out explicitly for clarity
\label{eqn:more_feynmans_trick:420}
\ddt{} \int_{C(t)} M d\Bx N =
\int_{C(t)}
\PD{t}{M} d\Bx N + M d\Bx \PD{t}{N}
+ \dot{M} \lr{ \Bv \cdot \dot{\spacegrad} } d\Bx N
+ M \lr{ \Bv \cdot \dot{\spacegrad} } d\Bx \dot{N}
+ M \lr{ \lr{ d\Bx \cdot \dot{\spacegrad} } \dot{\Bv} } N.

Proof is left to the reader, but follows the patterns above.

It’s not obvious whether there is a nice way to reduce this, as we did for the scalar valued line integral of a vector function, and the vector valued line integral of a bivector function. In particular, our vector and bivector results had $$\spacegrad \lr{ \Bf \cdot \Bv }$$, and $$\spacegrad \lr{ B \cdot \Bv }$$ terms respectively, which allows for the boundary term to be evaluated using Stokes’ theorem. Is such a manipulation possible here?

# References

[1] Nicholas Kemmer. Vector Analysis: A physicist’s guide to the mathematics of fields in three dimensions. CUP Archive, 1977.

[2] Wikipedia contributors. Leibniz integral rule — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Leibniz_integral_rule&oldid=1223666713, 2024. [Online; accessed 22-May-2024].

## Goal.

Here we will explore the multivector form of the Leibniz integral theorem (aka. Feynman’s trick in one dimension), as discussed in [1].

Given a boundary $$\Omega(t)$$ that varies in time, we seek to evaluate
\label{eqn:LeibnizIntegralTheorem:20}
\ddt{} \int_{\Omega(t)} F d^p \Bx \lrpartial G.

Recall that when the bounding volume is fixed, we have
\label{eqn:LeibnizIntegralTheorem:40}
\int_{\Omega} F d^p \Bx \lrpartial G = \int_{\partial \Omega} F d^{p-1} \Bx G,

and expect a few terms that are variations of the RHS if we take derivatives.

## Simplest case: scalar function, one variable.

With
\label{eqn:LeibnizIntegralTheorem:60}
A(t) = \int_{a(t)}^{b(t)} f(u, t) du,

if we can find an antiderivative $$F$$ such that
\label{eqn:LeibnizIntegralTheorem:80}
\PD{u}{F(u,t)} = f(u, t),

or
\label{eqn:LeibnizIntegralTheorem:90}
F(u, t) = \int f(u, t) du,

then
\label{eqn:LeibnizIntegralTheorem:100}
\begin{aligned}
A(t)
&=
\int_{a(t)}^{b(t)} f(u, t) du \\
&=
\int_{a(t)}^{b(t)} \PD{u}{F(u,t)} du \\
&= F( b(t), t ) - F( a(t), t ).
\end{aligned}

Should we attempt to take derivatives, we have a contribution from the first parameter that is entirely dependent on the boundary, and a contribution from the second parameter that is entirely independent of the boundary. That is
\label{eqn:LeibnizIntegralTheorem:120}
\begin{aligned}
\ddt{} \int_{a(t)}^{b(t)} f(u, t) du
&=
\PD{b}{ F } \PD{t}{b}
-\PD{a}{ F } \PD{t}{a}
+ \evalrange{\PD{t}{F(u, t)}}{u = a(t)}{b(t)} \\
&=
f(b(t), t) b'(t) -
f(a(t), t) a'(t)
+ \int_{a(t)}^{b(t)} \PD{t}{} f(u, t) du.
\end{aligned}

In the second step, the antiderivative function $$F$$ has been restated in its original integral form \ref{eqn:LeibnizIntegralTheorem:90}. We are able to take the derivative into the integral, since we first evaluate that derivative, independent of the boundary, and then evaluate the result at the respective end points of the boundary.
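Here's a sympy verification of \ref{eqn:LeibnizIntegralTheorem:120}, with hypothetical (arbitrarily chosen) integrand and boundary functions:

```python
import sympy as sp

u, t = sp.symbols('u t', positive=True)

# hypothetical example: f(u, t) = u t^2, a(t) = t, b(t) = t^2 (illustration only)
f = u * t**2
a = t
b = t**2

A = sp.integrate(f, (u, a, b))   # A(t)
lhs = sp.diff(A, t)              # d/dt of the integral

# Leibniz rule: boundary terms, plus the integral of the time partial
rhs = f.subs(u, b) * sp.diff(b, t) - f.subs(u, a) * sp.diff(a, t) \
    + sp.integrate(sp.diff(f, t), (u, a, b))

print(sp.simplify(lhs - rhs))  # 0
```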

## Next simplest case: Multivector line integral (perfect derivative.)

Given an $$N$$ dimensional vector space, and a path parameterized by the vector $$\Bx = \Bx(u)$$, the line integral special case of the fundamental theorem of calculus is found by evaluating
\label{eqn:LeibnizIntegralTheorem:140}
\int F(u) d\Bx \lrpartial G(u),

where $$F, G$$ are multivectors, and
\label{eqn:LeibnizIntegralTheorem:160}
\begin{aligned}
d\Bx &= \PD{u}{\Bx} du = \Bx_u du \\
\lrpartial &= \Bx^u \stackrel{ \leftrightarrow }{\PD{u}{}},
\end{aligned}

where $$\Bx_u \Bx^u = \Bx_u \cdot \Bx^u = 1$$.

Evaluating the integral, we have
\label{eqn:LeibnizIntegralTheorem:180}
\begin{aligned}
\int F(u) d\Bx \lrpartial G(u)
&=
\int F(u) \Bx_u du \Bx^u \stackrel{ \leftrightarrow }{\PD{u}{}} G(u) \\
&=
\int du \PD{u}{} \lr{ F(u) G(u) } \\
&=
F(u) G(u).
\end{aligned}

If we allow $$F, G, \Bx$$ to each have time dependence
\label{eqn:LeibnizIntegralTheorem:200}
\begin{aligned}
F &= F(u, t) \\
G &= G(u, t) \\
\Bx &= \Bx(u, t),
\end{aligned}

then we have
\label{eqn:LeibnizIntegralTheorem:220}
\ddt{} \int_{u = a(t)}^{b(t)} F(u, t) d\Bx \lrpartial G(u, t)
=
\evalrange{ \ddt{u} \PD{u}{} \lr{ F(u, t) G(u, t) } }{u = a(t)}{b(t)}
+ \evalrange{\ddt{} \lr{ F(u, t) G(u, t) } }{u = a(t)}{b(t)}
.

## General multivector line integral.

Now suppose that we have a general multivector line integral
\label{eqn:LeibnizIntegralTheorem:240}
A(t) = \int_{a(t)}^{b(t)} F(u, t) d\Bx G(u, t),

where $$d\Bx = \Bx_u du$$, $$\Bx_u = \partial \Bx(u, t)/\partial u$$. Writing out the integrand explicitly, we have
\label{eqn:LeibnizIntegralTheorem:260}
A(t) = \int_{a(t)}^{b(t)} du F(u, t) \Bx_u(u, t) G(u, t).

Following our logic with the first scalar case, let
\label{eqn:LeibnizIntegralTheorem:280}
\PD{u}{B(u, t)} = F(u, t) \Bx_u(u, t) G(u, t).

We can now evaluate the derivative
\label{eqn:LeibnizIntegralTheorem:300}
\ddt{A(t)} = \evalrange{ \ddt{u} \PD{u}{B} }{u = a(t)}{b(t)} + \evalrange{ \PD{t}{}B(u, t) }{u = a(t)}{b(t)}.

Writing \ref{eqn:LeibnizIntegralTheorem:280} in integral form, we have
\label{eqn:LeibnizIntegralTheorem:320}
B(u, t) = \int du F(u, t) \Bx_u(u, t) G(u, t),

so
\label{eqn:LeibnizIntegralTheorem:340}
\begin{aligned}
\ddt{A(t)}
&= \evalrange{ \ddt{u} \PD{u}{B} }{u = a(t)}{b(t)} +
\evalbar{ \PD{t'}{} \int_{a(t)}^{b(t)} du F(u, t') \Bx_u(u, t') G(u, t') }{t' = t} \\
&= \evalrange{ \ddt{u} F(u, t) \Bx_u(u, t) G(u, t) }{u = a(t)}{b(t)} +
\int_{a(t)}^{b(t)} \PD{t}{} F(u, t) d\Bx(u, t) G(u, t),
\end{aligned}

so
\label{eqn:LeibnizIntegralTheorem:360}
\ddt{} \int_{a(t)}^{b(t)} F(u, t) d\Bx(u, t) G(u, t)
= \evalrange{ F(u, t) \ddt{\Bx}(u, t) G(u, t) }{u = a(t)}{b(t)} +
\int_{a(t)}^{b(t)} \PD{t}{} F(u, t) d\Bx(u, t) G(u, t).

This is perhaps clearer if just written as
\label{eqn:LeibnizIntegralTheorem:380}
\ddt{} \int_{a(t)}^{b(t)} F d\Bx G
= \evalrange{ F \ddt{\Bx} G }{u = a(t)}{b(t)} +
\int_{a(t)}^{b(t)} \PD{t}{} F d\Bx G.

As a check, it’s worth pointing out that we can recover the one dimensional result, writing $$\Bx = u \Be_1$$, $$f = F \Be_1^{-1}$$, and $$G = 1$$, for
\label{eqn:LeibnizIntegralTheorem:400}
\ddt{} \int_{a(t)}^{b(t)} f du
= \evalrange{ f(u) \ddt{u} }{u = a(t)}{b(t)} +
\int_{a(t)}^{b(t)} du \PD{t}{f}.

## Next steps.

I’ve tried a couple times on paper to do surface integral variations of this (allowing the surface to vary with time), and don’t think that I’ve gotten it right. Will try again (or perhaps just look it up and see what the result is supposed to look like, then see how that translates into the GC formalism.)

# References

[1] Wikipedia contributors. Leibniz integral rule — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Leibniz_integral_rule&oldid=1223666713, 2024. [Online; accessed 22-May-2024].

## Motivation.

This revisits my last blog post where I covered this content in a meandering fashion. This is an attempt to re-express this in a more compact form. In particular, in a form that is amenable to include in my book. When I wrote the potential section of my book, I cheated, and didn’t try to motivate the results. My cheat was figuring out the multivector potential representation starting with STA where things are simpler, and then translating it back to a multivector representation, instead of figuring out a reasonable way to motivate things from the foundation already laid.

I’d like to eventually have a less rushed treatment of potentials in my book, where the results are not pulled out of a magic hat. Here is an attempted step in that direction. I’ve opted to put some of the motivational material in problems (with solutions at the chapter end.)

## Multivector potentials.

We know from conventional electromagnetism (given no fictitious magnetic sources) that we can represent the six components of the electric and magnetic fields in terms of four scalar fields
\label{eqn:mvpotentials:80}
\begin{aligned}
\BE &= -\spacegrad \phi - \PD{t}{\BA} \\
\BH &= \inv{\mu} \spacegrad \cross \BA.
\end{aligned}

The conventional way of constructing these potentials makes use of the identities
\label{eqn:mvpotentials:60}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \cross \BA } &= 0 \\
\spacegrad \cross \lr{ \spacegrad \phi } &= 0,
\end{aligned}

applying those to the source free Maxwell’s equations to find representations of $$\BE, \BH$$ that automatically satisfy those equations. For that conventional analysis, see section 18-6 [2] (available online), or section 10.1 [3], or section 6.4 [4]. We can also find such a potential representation using geometric algebra methods that are cross product free (problem 1.)

For Maxwell’s equations with fictitious magnetic sources, it can be shown that a potential representation of the field
\label{eqn:mvpotentials:100}
\begin{aligned}
\BH &= -\spacegrad \phi_m - \PD{t}{\BF} \\
\BE &= -\inv{\epsilon} \spacegrad \cross \BF,
\end{aligned}

satisfies the source-free grades of Maxwell’s equation.
See [1] and [5] for such derivations. As with the conventional source potentials, we can also apply our geometric algebra toolbox to easily find these results (problem 2.)

We have a mix of time partials and curls that is reminiscent of Maxwell’s equation itself. It’s natural to wonder whether there is a more coherent integrated form for the potential. This is in fact the case.

## Lemma 1.1: Multivector potentials.

For Maxwell’s equation with electric sources, the total field $$F$$ can be expressed in multivector potential form
\label{eqn:mvpotentials:520}
F = \gpgrade{ \lr{ \spacegrad – \inv{c} \PD{t}{} } \lr{ -\phi + c \BA } }{1,2}.

For Maxwell’s equation with only fictitious magnetic sources, the total field $$F$$ can be expressed in multivector form
\label{eqn:mvpotentials:540}
F = \gpgrade{ \lr{ \spacegrad – \inv{c} \PD{t}{} } I \eta \lr{ -\phi_m + c \BF } }{1,2}.

The reader should try to verify this themselves (problem 3.)

Using superposition, we can form a multivector potential that includes all grades.

## Definition 1.1: Multivector potential.

We call $$A$$, a multivector with all grades, the multivector potential, defining the total field as
\label{eqn:mvpotentials:600}
\begin{aligned}
F
&=
\BE + I \eta \BH \\
&=
\gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } A }{1,2}.
\end{aligned}

Imposition of the constraint
\label{eqn:mvpotentials:680}
\gpgrade{ \lr{ \spacegrad - \inv{c} \PD{t}{} } A }{0,3} = 0,

is called the Lorentz gauge condition, and allows us to express $$F$$ in terms of the potential without any grade selection filters.

## Lemma 1.2: Conventional multivector potential.

Let
\label{eqn:mvpotentials:620}
A = -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF }.

This results in the conventional potential representation of the electric and magnetic fields
\label{eqn:mvpotentials:640}
\begin{aligned}
\BE &= -\spacegrad \phi - \PD{t}{\BA} - \inv{\epsilon} \spacegrad \cross \BF \\
\BH &= -\spacegrad \phi_m - \PD{t}{\BF} + \inv{\mu} \spacegrad \cross \BA.
\end{aligned}

In terms of potentials, the Lorentz gauge condition \ref{eqn:mvpotentials:680} takes the form
\label{eqn:mvpotentials:660}
\begin{aligned}
0 &= \inv{c} \PD{t}{\phi} + \spacegrad \cdot (c \BA) \\
0 &= \inv{c} \PD{t}{\phi_m} + \spacegrad \cdot (c \BF).
\end{aligned}

See problem 4.

## Problem 1: Potentials for no fictitious sources.

Starting with Maxwell’s equation with only conventional electric sources
\label{eqn:mvpotentials:120}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = \eta \lr{ c \rho - \BJ },

show that this may be split by grade into three equations
\label{eqn:mvpotentials:140}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } F }{0,1} &= \eta \lr{ c \rho - \BJ } \\
\spacegrad \wedge \BE + \inv{c}\PD{t}{} \lr{ I \eta \BH } &= 0 \\
\spacegrad \wedge \lr{ I \eta \BH } &= 0.
\end{aligned}

Then use the identities $$\spacegrad \wedge \spacegrad \wedge \BA = 0$$, for vector $$\BA$$ and $$\spacegrad \wedge \spacegrad \phi = 0$$, for scalar $$\phi$$ to find the potential representation.

Taking grade(0,1) and (2,3) selections of Maxwell’s equation, we split our equations into source dependent and source free equations
\label{eqn:mvpotentials:200}
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } F }{0,1} = \eta \lr{ c \rho - \BJ },

\label{eqn:mvpotentials:220}
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } F }{2,3} = 0.

In terms of $$F = \BE + I \eta \BH$$, the source free equation expands to
\label{eqn:mvpotentials:240}
\begin{aligned}
0
&=
\gpgrade{
\lr{ \spacegrad + \inv{c} \PD{t}{} } \lr{ \BE + I \eta \BH }
}{2,3} \\
&=
\spacegrad \wedge \BE
+ \spacegrad \wedge \lr{ I \eta \BH }
+ I \eta \inv{c} \PD{t}{\BH},
\end{aligned}

which can be further split into a bivector and trivector equation
\label{eqn:mvpotentials:260}
0 = \spacegrad \wedge \BE + I \eta \inv{c} \PD{t}{\BH}

\label{eqn:mvpotentials:280}
0 = \spacegrad \wedge \lr{ I \eta \BH }.

It’s clear that we want to write the magnetic field as a (bivector) curl, so we let
\label{eqn:mvpotentials:300}
I \eta \BH = I c \BB = c \spacegrad \wedge \BA,

or
\label{eqn:mvpotentials:301}
\BH = \inv{\mu} \spacegrad \cross \BA.

\Cref{eqn:mvpotentials:260} is reduced to
\label{eqn:mvpotentials:320}
\begin{aligned}
0
&= \spacegrad \wedge \BE + I \eta \inv{c} \PD{t}{\BH} \\
&= \spacegrad \wedge \BE + \inv{c} \PD{t}{} \spacegrad \wedge \lr{ c \BA } \\
&= \spacegrad \wedge \lr{ \BE + \PD{t}{\BA} }.
\end{aligned}

We can now let
\label{eqn:mvpotentials:340}
\BE + \PD{t}{\BA} = -\spacegrad \phi.

We sneakily adjust the sign of the gradient so that the result matches the conventional representation.
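As a cross check, potentials of this form satisfy the source free equations (conventionally: Faraday's law and $$\spacegrad \cdot \BB = 0$$) for any $$\phi, \BA$$. The specific potentials below are hypothetical choices, picked only so that sympy can verify this:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
mu = sp.symbols('mu', positive=True)

def grad(s):
    return sp.Matrix([sp.diff(s, c) for c in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# hypothetical potentials (illustration only)
phi = x * y * sp.sin(t)
A = sp.Matrix([y * z * t, x**2, z * sp.cos(t)])

E = -grad(phi) - A.diff(t)
H = curl(A) / mu

# source free equations: curl E + dB/dt = 0, and div B = 0
faraday = curl(E) + mu * H.diff(t)
monopole = div(mu * H)

print(sp.simplify(faraday).T, sp.simplify(monopole))
```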

## Problem 2: Potentials for fictitious sources.

Starting with Maxwell’s equation with only fictitious magnetic sources
\label{eqn:mvpotentials:160}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = I \lr{ c \rho_m - \BM },

show that this may be split by grade into three equations
\label{eqn:mvpotentials:180}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } F }{2,3} &= I \lr{ c \rho_m - \BM } \\
-\eta \spacegrad \wedge \BH + \inv{c}\PD{t}{(I \BE)} &= 0 \\
\spacegrad \wedge \lr{ I \BE } &= 0.
\end{aligned}

Then use the identities $$\spacegrad \wedge \spacegrad \wedge \BF = 0$$, for vector $$\BF$$ and $$\spacegrad \wedge \spacegrad \phi_m = 0$$, for scalar $$\phi_m$$ to find the potential representation \ref{eqn:mvpotentials:100}.

We multiply \ref{eqn:mvpotentials:160} by $$I$$ to find
\label{eqn:mvpotentials:360}
\lr{ \spacegrad + \inv{c} \PD{t}{} } I F = -\lr{ c \rho_m - \BM },

which can be split into
\label{eqn:mvpotentials:380}
\begin{aligned}
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } I F }{0,1} &= -\lr{ c \rho_m - \BM } \\
\gpgrade{ \lr{ \spacegrad + \inv{c} \PD{t}{} } I F }{2,3} &= 0.
\end{aligned}

We expand the source free equation in terms of $$I F = I \BE – \eta \BH$$, to find
\label{eqn:mvpotentials:400}
\begin{aligned}
0
&= \gpgrade{ \lr{ \spacegrad + \inv{c}\PD{t}{} } \lr{ I \BE - \eta \BH } }{2,3} \\
&= \spacegrad \wedge \lr{ I \BE } + \inv{c} \PD{t}{(I \BE)} - \eta \spacegrad \wedge \BH,
\end{aligned}

which has the respective trivector and bivector grades
\label{eqn:mvpotentials:420}
0 = \spacegrad \wedge \lr{ I \BE }

\label{eqn:mvpotentials:440}
0 = \inv{c} \PD{t}{(I \BE)} – \eta \spacegrad \wedge \BH.

We can clearly satisfy \ref{eqn:mvpotentials:420} by setting
\label{eqn:mvpotentials:460}
I \BE = -\inv{\epsilon} \spacegrad \wedge \BF,

or
\label{eqn:mvpotentials:461}
\BE = -\inv{\epsilon} \spacegrad \cross \BF.

Here, once again, the sneaky inclusion of a constant factor $$-1/\epsilon$$ is to make the result match the conventional form. Inserting this value for $$I \BE$$ into our bivector equation yields
\label{eqn:mvpotentials:480}
\begin{aligned}
0
&= -\inv{\epsilon} \inv{c} \PD{t}{} (\spacegrad \wedge \BF) - \eta \spacegrad \wedge \BH \\
&= -\eta \spacegrad \wedge \lr{ \PD{t}{\BF} + \BH },
\end{aligned}

so we set
\label{eqn:mvpotentials:500}
\PD{t}{\BF} + \BH = -\spacegrad \phi_m,

and have a field representation that automatically satisfies the source free equations.
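Again, we can let sympy check that this representation satisfies the source free equations (conventionally: the source free Ampere's law and $$\spacegrad \cdot \BE = 0$$), using hypothetical potentials $$\phi_m, \BF$$ chosen arbitrarily for illustration:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
eps = sp.symbols('epsilon', positive=True)

def grad(s):
    return sp.Matrix([sp.diff(s, c) for c in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# hypothetical potentials phi_m, F (illustration only)
phi_m = x * z * sp.cos(t)
Fv = sp.Matrix([x * y, y * z * t, sp.sin(t) * z])

H = -grad(phi_m) - Fv.diff(t)
E = -curl(Fv) / eps

# source free equations: curl H = eps dE/dt, and div E = 0
ampere = curl(H) - eps * E.diff(t)
gauss = div(E)

print(sp.simplify(ampere).T, sp.simplify(gauss))
```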

## Problem 3: Total field in terms of potentials.

Prove lemma 1.1, either by direct expansion, or by trying to discover the multivector form of the field by construction.

Proof by expansion is straightforward, and left to the reader. We form the respective total electromagnetic fields $$F = \BE + I \eta \BH$$ for each case.

We find
\label{eqn:mvpotentials:560}
\begin{aligned}
F
&= \BE + I \eta \BH \\
&= -\spacegrad \phi - \PD{t}{\BA} + I \frac{\eta}{\mu} \spacegrad \cross \BA \\
&= -\spacegrad \phi - \inv{c} \PD{t}{(c \BA)} + \spacegrad \wedge (c\BA) \\
&= \gpgrade{ \spacegrad \lr{ -\phi + c \BA } - \inv{c} \PD{t}{(c \BA)} }{1,2} \\
&= \gpgrade{ \lr{ \spacegrad -\inv{c} \PD{t}{} } \lr{ -\phi + c \BA } }{1,2}.
\end{aligned}

For the fictitious source case, we compute the result in the same way, inserting a no-op grade selection to allow us to simplify, finding
\label{eqn:mvpotentials:580}
\begin{aligned}
F
&= \BE + I \eta \BH \\
&= -\inv{\epsilon} \spacegrad \cross \BF + I \eta \lr{ -\spacegrad \phi_m - \PD{t}{\BF} } \\
&= \inv{\epsilon c} I \lr{ \spacegrad \wedge (c \BF)} + I \eta \lr{ -\spacegrad \phi_m - \inv{c} \PD{t}{(c \BF)} } \\
&= I \eta \lr{ \spacegrad \wedge (c \BF) + \lr{ -\spacegrad \phi_m - \inv{c} \PD{t}{(c \BF)} } } \\
&= I \eta \gpgrade{ \spacegrad \wedge (c \BF) + \lr{ -\spacegrad \phi_m - \inv{c} \PD{t}{(c \BF)} } }{1,2} \\
&= I \eta \gpgrade{ \spacegrad (-\phi_m + c \BF) - \inv{c} \PD{t}{(c \BF)} }{1,2} \\
&= I \eta \gpgrade{ \lr{ \spacegrad -\inv{c} \PD{t}{} } (-\phi_m + c \BF) }{1,2}.
\end{aligned}

## Problem 4: Fields in terms of potentials.

Prove lemma 1.2.

Let’s expand and then group by grade
\label{eqn:mvpotentials:n}
\begin{aligned}
\lr{ \spacegrad - \inv{c} \PD{t}{} } A
&=
\lr{ \spacegrad - \inv{c} \PD{t}{} } \lr{ -\phi + c \BA + I \eta \lr{ -\phi_m + c \BF }} \\
&=
-\spacegrad \phi + c \spacegrad \BA + I \eta \lr{ -\spacegrad \phi_m + c \spacegrad \BF }
+ \inv{c} \PD{t}{\phi} - \PD{t}{\BA}
+ I \eta \lr{ \inv{c} \PD{t}{\phi_m} - \PD{t}{\BF} } \\
&=
\lr{ -\spacegrad \phi - \PD{t}{\BA} + I \eta c \spacegrad \wedge \BF }
+ \lr{ c \spacegrad \wedge \BA + I \eta \lr{ -\spacegrad \phi_m - \PD{t}{\BF} } } \\
&\quad
+ \lr{ c \spacegrad \cdot \BA + \inv{c} \PD{t}{\phi} }
+ I \eta \lr{ c \spacegrad \cdot \BF + \inv{c} \PD{t}{\phi_m} }.
\end{aligned}

Observing that $$F = \gpgrade{ \lr{ \spacegrad -(1/c) \partial_t } A }{1,2} = \BE + I \eta \BH$$, completes the problem. If the Lorentz gauge condition is assumed, the scalar and pseudoscalar components above are obliterated, leaving just
$$F = \lr{ \spacegrad -(1/c) \partial_t } A$$.

# References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 3rd edition, 2005.

[2] R.P. Feynman, R.B. Leighton, and M.L. Sands. Feynman lectures on physics, Volume II.[Lectures on physics], chapter The Maxwell Equations. Addison-Wesley Publishing Company. Reading, Massachusetts, 1963. URL https://www.feynmanlectures.caltech.edu/II_18.html.

[3] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[4] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

[5] David M Pozar. Microwave engineering. John Wiley & Sons, 2009.

## Exact system.

Recall that we can use the wedge product to solve linear systems. For example, assuming that $$\Ba, \Bb$$ are not collinear, the system
\label{eqn:cramersProjection:20}
x \Ba + y \Bb = \Bc,

if it has a solution, can be solved for $$x$$ and $$y$$ by wedging with $$\Bb$$, and $$\Ba$$ respectively.
For example, wedging with $$\Bb$$, from the right, gives
\label{eqn:cramersProjection:40}
x \lr{ \Ba \wedge \Bb } + y \lr{ \Bb \wedge \Bb } = \Bc \wedge \Bb,

but since $$\Bb \wedge \Bb = 0$$, we are left with
\label{eqn:cramersProjection:60}
x \lr{ \Ba \wedge \Bb } = \Bc \wedge \Bb,

and since $$\Ba, \Bb$$ are not collinear, which means that $$\Ba \wedge \Bb \ne 0$$, we have
\label{eqn:cramersProjection:80}
x = \inv{ \Ba \wedge \Bb } \Bc \wedge \Bb.

Similarly, we can wedge with $$\Ba$$ (from the left), to find
\label{eqn:cramersProjection:100}
y = \inv{ \Ba \wedge \Bb } \Ba \wedge \Bc.

This works because, if the system has a solution, the bivectors $$\Ba \wedge \Bb$$, $$\Ba \wedge \Bc$$, and $$\Bb \wedge \Bc$$ are all scalar multiples of each other, so we can just divide the two bivectors, and the results must be scalars.

## Cramer’s rule.

Incidentally, observe that for $$\mathbb{R}^2$$, this is the “Cramer’s rule” solution to the system, since
\label{eqn:cramersProjection:180}
\Bx \wedge \By = \begin{vmatrix} \Bx & \By \end{vmatrix} \Be_1 \Be_2,

where we are treating $$\Bx$$ and $$\By$$ here as column vectors of the coordinates. This means that, after dividing out the plane pseudoscalar $$\Be_1 \Be_2$$, we have
\label{eqn:cramersProjection:200}
\begin{aligned}
x
&=
\frac{
\begin{vmatrix}
\Bc & \Bb \\
\end{vmatrix}
}{
\begin{vmatrix}
\Ba & \Bb
\end{vmatrix}
} \\
y
&=
\frac{
\begin{vmatrix}
\Ba & \Bc \\
\end{vmatrix}
}{
\begin{vmatrix}
\Ba & \Bb
\end{vmatrix}
}.
\end{aligned}

This follows the usual Cramer’s rule prescription, where we form determinants of the coordinates of the spanning vectors, replace either of the original vectors in the numerator with the target vector (depending on which variable we seek), and then take ratios of the two determinants.
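Here's a quick numeric illustration of this wedge (determinant ratio) solution, for a hypothetical system chosen arbitrarily:

```python
import numpy as np

# hypothetical 2x2 system (illustration only): x a + y b = c
a = np.array([2.0, 1.0])
b = np.array([1.0, 3.0])
c = np.array([4.0, 7.0])

def det2(u, v):
    # the e1 e2 coefficient of u ^ v
    return u[0] * v[1] - u[1] * v[0]

x = det2(c, b) / det2(a, b)
y = det2(a, c) / det2(a, b)

print(np.allclose(x * a + y * b, c))  # True
```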

## Least squares solution, using geometry.

Now, let’s consider the case where the system \ref{eqn:cramersProjection:20} cannot be solved exactly. Geometrically, the best we can do is to solve the related “least squares” problem
\label{eqn:cramersProjection:120}
x \Ba + y \Bb = \Bc_\parallel,

where $$\Bc_\parallel$$ is the projection of $$\Bc$$ onto the plane spanned by $$\Ba, \Bb$$. Regardless of the value of $$\Bc$$, we can always find a solution to this problem. For example, solving for $$x$$, we have
\label{eqn:cramersProjection:160}
\begin{aligned}
x
&= \inv{ \Ba \wedge \Bb } \Bc_\parallel \wedge \Bb \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\parallel \wedge \Bb } \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb } - \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb }.
\end{aligned}

Let’s look at the second term, which can be written
\label{eqn:cramersProjection:140}
\begin{aligned}
- \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb }
&=
- \frac{ \Ba \wedge \Bb }{ \lr{ \Ba \wedge \Bb}^2 } \cdot \lr{ \Bc_\perp \wedge \Bb } \\
&\propto
\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb } \\
&=
\lr{ \lr{ \Ba \wedge \Bb } \cdot \Bc_\perp } \cdot \Bb \\
&=
\lr{ \Ba \lr{ \Bb \cdot \Bc_\perp} - \Bb \lr{ \Ba \cdot \Bc_\perp} } \cdot \Bb \\
&=
0.
\end{aligned}

The zero above follows because $$\Bc_\perp$$ is perpendicular to both $$\Ba$$ and $$\Bb$$ by construction. Geometrically, we are trying to dot two perpendicular bivectors, where $$\Bb$$ is a common factor of those two bivectors, as illustrated in fig. 1.

fig. 1. Perpendicular bivectors.

We see that our least squares solution to this two-variable linear system is
\label{eqn:cramersProjection:220}
x = \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb },

and
\label{eqn:cramersProjection:240}
y = \inv{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc }.

The interesting thing here is how we have connected the analytic notion of an optimal least squares solution (which we could compute with the Moore-Penrose inverse, or with an SVD (singular value decomposition)) with the entirely geometric notion of selecting the portion of the desired solution that lies within the span of the input vectors, provided those spanning vectors are linearly independent.
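We can check this claim numerically. In $$\mathbb{R}^3$$ the bivector $$\Ba \wedge \Bb$$ is dual to the cross product $$\Ba \cross \Bb$$, and the signs introduced by that duality cancel in the ratios, so the dotted wedge ratios reduce to ratios of dotted cross products. A numpy sketch (random sample vectors are my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))  # c is generally not in span{a, b}

# Duality: (a ^ b) . (c ^ b) = -(a x b) . (c x b), and (a ^ b)^2 = -|a x b|^2,
# so the signs cancel in the ratios below.
n = np.cross(a, b)
x = n @ np.cross(c, b) / (n @ n)
y = n @ np.cross(a, c) / (n @ n)

# Compare against a conventional least squares solve.
sol, *_ = np.linalg.lstsq(np.column_stack([a, b]), c, rcond=None)
assert np.allclose([x, y], sol)
```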

## Least squares solution, using calculus.

I’ve called the projection solution a least squares solution without full justification. Here’s that justification. We define the usual error function, the squared distance of our superposition in the plane from the target
\label{eqn:cramersProjection:300}
\epsilon = \lr{ \Bc - x \Ba - y \Bb }^2,

and then take partials with respect to $$x, y$$, equating each to zero
\label{eqn:cramersProjection:320}
\begin{aligned}
0 &= \PD{x}{\epsilon} = 2 \lr{ \Bc - x \Ba - y \Bb } \cdot (-\Ba) \\
0 &= \PD{y}{\epsilon} = 2 \lr{ \Bc - x \Ba - y \Bb } \cdot (-\Bb).
\end{aligned}

This is a two equation, two unknown system, which can be expressed in matrix form as
\label{eqn:cramersProjection:340}
\begin{bmatrix}
\Ba^2 & \Ba \cdot \Bb \\
\Ba \cdot \Bb & \Bb^2
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix}
=
\begin{bmatrix}
\Ba \cdot \Bc \\
\Bb \cdot \Bc \\
\end{bmatrix}.

This has solution
\label{eqn:cramersProjection:360}
\begin{bmatrix}
x \\
y
\end{bmatrix}
=
\inv{
\begin{vmatrix}
\Ba^2 & \Ba \cdot \Bb \\
\Ba \cdot \Bb & \Bb^2
\end{vmatrix}
}
\begin{bmatrix}
\Bb^2 & -\Ba \cdot \Bb \\
-\Ba \cdot \Bb & \Ba^2
\end{bmatrix}
\begin{bmatrix}
\Ba \cdot \Bc \\
\Bb \cdot \Bc \\
\end{bmatrix}
=
\frac{
\begin{bmatrix}
\Bb^2 \lr{ \Ba \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Bb \cdot \Bc } \\
\Ba^2 \lr{ \Bb \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Ba \cdot \Bc } \\
\end{bmatrix}
}{
\Ba^2 \Bb^2 - \lr{ \Ba \cdot \Bb }^2
}.

All of these differences can be expressed as dot products of wedge products, using the following expansion in reverse
\label{eqn:cramersProjection:420}
\begin{aligned}
\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bd }
&=
\Ba \cdot \lr{ \Bb \cdot \lr{ \Bc \wedge \Bd } } \\
&=
\Ba \cdot \lr{ \lr{\Bb \cdot \Bc} \Bd – \lr{\Bb \cdot \Bd} \Bc } \\
&=
\lr{ \Ba \cdot \Bd } \lr{\Bb \cdot \Bc} - \lr{ \Ba \cdot \Bc }\lr{\Bb \cdot \Bd}.
\end{aligned}
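As an aside, this expansion is easy to verify numerically. In $$\mathbb{R}^3$$ we can use the duality $$\Ba \wedge \Bb = I \lr{ \Ba \cross \Bb }$$, which gives $$\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bd } = -\lr{ \Ba \cross \Bb } \cdot \lr{ \Bc \cross \Bd }$$ (a numpy sketch with random sample vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 3))

# In R^3, a ^ b is dual to a x b, and for bivectors
# (a ^ b) . (c ^ d) = -(a x b) . (c x d).
lhs = -np.cross(a, b) @ np.cross(c, d)
rhs = (a @ d) * (b @ c) - (a @ c) * (b @ d)
assert np.allclose(lhs, rhs)
```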

We find
\label{eqn:cramersProjection:380}
\begin{aligned}
x
&= \frac{\Bb^2 \lr{ \Ba \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Bb \cdot \Bc }}{-\lr{ \Ba \wedge \Bb }^2 } \\
&= \frac{\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bb \wedge \Bc }}{ -\lr{ \Ba \wedge \Bb }^2 } \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb },
\end{aligned}

and
\label{eqn:cramersProjection:400}
\begin{aligned}
y
&= \frac{\Ba^2 \lr{ \Bb \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Ba \cdot \Bc } }{-\lr{ \Ba \wedge \Bb }^2 } \\
&= \frac{- \lr{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc } }{ -\lr{ \Ba \wedge \Bb }^2 } \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc }.
\end{aligned}

Sure enough, we find what was dubbed our least squares solution, which we now know can be written out as a ratio of (dotted) wedge products.
From \ref{eqn:cramersProjection:340}, it wasn’t obvious that the least squares solution would have an almost Cramer’s-rule-like structure, but having solved this problem using geometry alone, we knew to expect it. It was therefore natural to write the results in terms of wedge product factors, and find the simplest statement of the end result. That end result reduces to Cramer’s rule for the $$\mathbb{R}^2$$ special case where the system has an exact solution.
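For completeness, here is a numerical sketch of the calculus route (numpy, with random sample vectors of my own choosing): build the Gram system from the zeroed partials, solve it, and check it against the explicit $$2 \times 2$$ inverse form:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.standard_normal((3, 3))

# Normal equations (the 2x2 Gram system) from setting the partials to zero.
G = np.array([[a @ a, a @ b],
              [a @ b, b @ b]])
rhs = np.array([a @ c, b @ c])
x, y = np.linalg.solve(G, rhs)

# Explicit 2x2 inverse form of the same solution.
den = (a @ a) * (b @ b) - (a @ b) ** 2
assert np.allclose(x, ((b @ b) * (a @ c) - (a @ b) * (b @ c)) / den)
assert np.allclose(y, ((a @ a) * (b @ c) - (a @ b) * (a @ c)) / den)
```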

## Static load with two forces in a plane, solved a few different ways.

There’s a class of simple statics problems that are pervasive in high school physics and first year engineering classes (for me, that was CIV102). These problems are illustrated in the figures below. Here we have a static load under gravity, and two supporting members (rigid beams or wire lines), which can be under compression or tension, depending on the geometry.

The problem, given the geometry, is to find the magnitudes of the forces in the two members. The equation to solve is of the form
\label{eqn:twoForceStaticsProblem:20}
\BF_s + \BF_r + m \Bg = 0.

The usual way to solve such a problem is to resolve the forces into components. We will do that first here as a review, but then also solve the system using GA techniques, which are arguably simpler or more direct.

## Solving as a conventional vector equation.

If we were back in high school we could have written our forces out in vector form
\label{eqn:twoForceStaticsProblem:160}
\begin{aligned}
\BF_r &= f_r \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } \\
\BF_s &= f_s \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } \\
\Bg &= g \Be_1.
\end{aligned}

Here the gravitational direction has been pointed along the x-axis.

Our equation to solve is now
\label{eqn:twoForceStaticsProblem:180}
f_r \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } + f_s \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } + m g \Be_1 = 0.

This we can solve as a set of scalar equations, one for each of the $$\Be_1$$ and $$\Be_2$$ directions
\label{eqn:twoForceStaticsProblem:200}
\begin{aligned}
f_r \cos\alpha + f_s \cos\beta + m g &= 0 \\
f_r \sin\alpha + f_s \sin\beta &= 0.
\end{aligned}

Our solution is
\label{eqn:twoForceStaticsProblem:220}
\begin{aligned}
\begin{bmatrix}
f_r \\
f_s
\end{bmatrix}
&=
{\begin{bmatrix}
\cos\alpha & \cos\beta \\
\sin\alpha & \sin\beta
\end{bmatrix}}^{-1}
\begin{bmatrix}
- m g \\
0
\end{bmatrix} \\
&=
\inv{
\cos\alpha \sin\beta - \cos\beta \sin\alpha
}
\begin{bmatrix}
\sin\beta & -\cos\beta \\
-\sin\alpha & \cos\alpha
\end{bmatrix}
\begin{bmatrix}
- m g \\
0
\end{bmatrix} \\
&=
\frac{ m g }{ \cos\alpha \sin\beta - \cos\beta \sin\alpha }
\begin{bmatrix}
-\sin\beta \\
\sin\alpha
\end{bmatrix} \\
&=
\frac{ m g }{ \sin\lr{ \beta - \alpha } }
\begin{bmatrix}
-\sin\beta \\
\sin\alpha
\end{bmatrix}.
\end{aligned}

We had to haul out a trig identity to make the final simplification, but we have found the solution to the system.
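This solution is easy to check numerically, by verifying that the three forces sum to zero (a numpy sketch; the load and angles are arbitrary values of my own choosing):

```python
import numpy as np

m_g = 9.8               # m g, an arbitrary load
alpha, beta = 0.3, 2.1  # member angles, arbitrary but distinct

# f_r, f_s = m g / sin(beta - alpha) * (-sin(beta), sin(alpha))
f_r, f_s = m_g / np.sin(beta - alpha) * np.array([-np.sin(beta), np.sin(alpha)])

# Verify static equilibrium: the three forces sum to zero.
total = (f_r * np.array([np.cos(alpha), np.sin(alpha)])
         + f_s * np.array([np.cos(beta), np.sin(beta)])
         + np.array([m_g, 0.0]))
assert np.allclose(total, 0.0)
```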

Another approach is to take cross products with the unit force directions. First note that
\label{eqn:twoForceStaticsProblem:240}
\begin{aligned}
\lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } \cross \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta }
&=
\Be_3 \lr{
\cos\alpha \sin\beta - \sin\alpha \cos\beta
} \\
&=
\Be_3 \sin\lr{ \beta - \alpha }.
\end{aligned}

If we take cross products with each of the unit vectors, we find
\label{eqn:twoForceStaticsProblem:260}
\begin{aligned}
f_r \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } \cross \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } + m g \Be_1 \cross \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } &= 0 \\
f_s \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } \cross \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } + m g \Be_1 \cross \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } &= 0,
\end{aligned}

or
\label{eqn:twoForceStaticsProblem:280}
\begin{aligned}
\Be_3 f_r \sin\lr{ \beta - \alpha } + m g \Be_3 \sin\beta &= 0 \\
-\Be_3 f_s \sin\lr{ \beta - \alpha } + m g \Be_3 \sin\alpha &= 0.
\end{aligned}

After cancelling the $$\Be_3$$’s, we find the same result as we did solving the scalar system. This was a fairly direct way to solve the system, but the intermediate cross products were a bit messy. We will now repeat this approach using the wedge product. Switching from the cross product to the wedge product will not, by itself, make things any simpler or more complicated, but we can use the complex exponential form of the unit vectors for the forces, and that will simplify things.

## Geometric algebra setup and solution.

As usual for planar problems, let’s write $$i = \Be_1 \Be_2$$ for the plane pseudoscalar, which allows us to write the forces in polar form
\label{eqn:twoForceStaticsProblem:40}
\begin{aligned}
\BF_r &= f_r \Be_1 e^{i\alpha} \\
\BF_s &= f_s \Be_1 e^{i\beta} \\
\Bg &= g \Be_1.
\end{aligned}

Our equation to solve is now
\label{eqn:twoForceStaticsProblem:60}
f_r \Be_1 e^{i\alpha} + f_s \Be_1 e^{i\beta} + m g \Be_1 = 0.

The solution for either $$f_r$$ or $$f_s$$ is now trivial, as we only have to take wedge products with the force direction vectors to solve for the magnitudes. That is
\label{eqn:twoForceStaticsProblem:80}
\begin{aligned}
f_r \lr{ \Be_1 e^{i\alpha} } \wedge \lr{ \Be_1 e^{i\beta} } + m g \Be_1 \wedge \lr{ \Be_1 e^{i\beta} } &= 0 \\
f_s \lr{ \Be_1 e^{i\beta} } \wedge \lr{ \Be_1 e^{i\alpha} } + m g \Be_1 \wedge \lr{ \Be_1 e^{i\alpha} } &= 0.
\end{aligned}

Writing the wedges as grade two selections, and noting that $$e^{i\theta} \Be_1 = \Be_1 e^{-i\theta }$$, we have
\label{eqn:twoForceStaticsProblem:100}
\begin{aligned}
f_r &= - m g \frac{ \gpgradetwo{\Be_1^2 e^{i\beta}} }{ \gpgradetwo{ \Be_1^2 e^{-i\alpha} e^{i\beta} } } = - m g \frac{ \sin\beta }{ \sin\lr{ \beta - \alpha } } \\
f_s &= - m g \frac{ \gpgradetwo{\Be_1^2 e^{i\alpha}} }{ \gpgradetwo{ \Be_1^2 e^{-i\beta} e^{i\alpha} } } = m g \frac{ \sin\alpha }{ \sin\lr{ \beta - \alpha } }.
\end{aligned}

The grade selections left a unit pseudoscalar factor in both the numerator and denominator, which cancelled out to give the final scalar results.

## As a complex variable problem.

Observe that we could have reframed the problem as a multivector problem by left multiplying \ref{eqn:twoForceStaticsProblem:60} by $$\Be_1$$ to find
\label{eqn:twoForceStaticsProblem:120}
f_r e^{i\alpha} + f_s e^{i\beta} + m g = 0.

Alternatively, we could have written the equations this way directly as a complex variable problem.

We can now solve for $$f_r$$ or $$f_s$$ by multiplying by the conjugate of one of the complex exponentials. That is
\label{eqn:twoForceStaticsProblem:140}
\begin{aligned}
f_r + f_s e^{i\beta} e^{-i\alpha} + m g e^{-i\alpha} &= 0 \\
f_r e^{i\alpha} e^{-i\beta} + f_s + m g e^{-i\beta} &= 0.
\end{aligned}

Selecting the bivector part of these equations (if interpreted as a multivector equation), or selecting the imaginary (if interpreting as a complex variables equation), will eliminate one of the force magnitudes from each equation, after which we find the same result.
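This conjugate-multiplication procedure translates directly into code (a numpy sketch, using Python complex numbers; the load and angles are arbitrary values of mine):

```python
import numpy as np

m_g = 1.0
alpha, beta = 0.4, 1.9

# f_r e^{i alpha} + f_s e^{i beta} + m g = 0.  Multiplying by e^{-i alpha}
# (or e^{-i beta}) and taking imaginary parts isolates one magnitude.
f_s = -m_g * np.imag(np.exp(-1j * alpha)) / np.imag(np.exp(1j * (beta - alpha)))
f_r = -m_g * np.imag(np.exp(-1j * beta)) / np.imag(np.exp(1j * (alpha - beta)))

# Check the original complex equation.
assert np.isclose(f_r * np.exp(1j * alpha) + f_s * np.exp(1j * beta) + m_g, 0)
```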

This last approach, treating the problem as either a complex number problem (selecting imaginaries), or a multivector problem (selecting bivectors), seems the simplest. We have no messy cross products, nor do we have to haul out any trig identities (the sine difference in the denominator comes practically for free, as it did with the wedge product method.)