
Area within closed boundary

August 11, 2024 math and physics play

[Click here for a PDF version of this post]

Motivation.

On vacation I was reading some more of [1]. It was mentioned in passing that the area contained within a closed parameterized curve is given by
\begin{equation}\label{eqn:containedArea:20}
A = \inv{2} \int_{t_0}^{t_1} \lr{x y’ – y x’} dt,
\end{equation}
where \( x = x(t), y = y(t), t \in [t_0, t_1] \). This has the look of a Stokes theorem coordinate expansion (specifically, the Green’s theorem special case of Stokes’), but with a somewhat mysterious looking factor of one half out in front. My aim in this post is to understand the origins of this area relationship, and play with it a bit.

Circular coordinates example.

The book suggests that the reader verify this for a circular parameterization, so we’ll do that here too.

Let
\begin{equation}\label{eqn:containedArea:40}
\begin{aligned}
x(t) &= r \cos t \\
y(t) &= r \sin t,
\end{aligned}
\end{equation}
where \( t \in [0, 2 \pi] \). Plugging this in, we have
\begin{equation}\label{eqn:containedArea:60}
\begin{aligned}
A
&= \inv{2} \int_0^{2 \pi} \lr{ r \cos t \lr{ r \cos t } – r \sin t \lr{ – r \sin t } } dt \\
&= \frac{r^2}{2} \int_0^{2 \pi} \lr{ \cos^2 t + \sin^2 t } dt \\
&= \frac{2 \pi r^2}{2} \\
&= \pi r^2.
\end{aligned}
\end{equation}
This simple example works out.
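The integral is also easy to check numerically. Here is a minimal Python sketch of that check (assuming numpy and scipy are available; the radius value is an arbitrary choice):

# Numeric check of A = (1/2) \int (x y' - y x') dt for a circle of radius r.
import numpy as np
from scipy.integrate import quad

r = 2.0
integrand = lambda t: (r * np.cos(t)) * (r * np.cos(t)) - (r * np.sin(t)) * (-r * np.sin(t))
area, _ = quad(integrand, 0.0, 2.0 * np.pi)
print(0.5 * area, np.pi * r**2)   # both print 12.566..., i.e. pi r^2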

Piecewise linear parameterization example.

One parameterization of the unit parallelogram depicted in fig. 1 is

\begin{equation}\label{eqn:containedArea:340}
\begin{aligned}
(x,y) &= (t, 0),\quad t \in [0,1] \\
&= (t, t – 1),\quad t \in [1,2] \\
&= (4 – t, 1),\quad t \in [2,3] \\
&= (4 – t, 4 – t),\quad t \in [3,4]
\end{aligned}
\end{equation}

fig. 1. Parallelogram with unit area.

Evaluating \( x y’ – y x’ \) in each of these regions, respectively, gives
\begin{equation}\label{eqn:containedArea:360}
\begin{aligned}
(t) (0) – (0)(1) &= 0 \\
(t) (1) – (t-1)(1) &= 1 \\
(4-t)(0) – (1)(-1) &= 1 \\
(4-t)(-1) – (4-t)(-1) &= 0,
\end{aligned}
\end{equation}
and integrating
\begin{equation}\label{eqn:containedArea:380}
\inv{2} \int_0^4 \lr{ x y’ – y x’} dt = \frac{2}{2} = 1,
\end{equation}
as expected. In this example, the derivative of the parameterization is not continuous at the corners of the parallelogram, but that is not a requirement (nor should it be, since the area is well defined despite the corners.)
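The same sort of numeric check works here too, provided we avoid evaluating the derivative exactly at the corners. A midpoint rule sketch in Python (no libraries required):

# Midpoint rule check of (1/2) \int_0^4 (x y' - y x') dt for the parallelogram above.
def xy(t):
    if t <= 1: return (t, 0.0)
    if t <= 2: return (t, t - 1.0)
    if t <= 3: return (4.0 - t, 1.0)
    return (4.0 - t, 4.0 - t)

def dxy(t):   # piecewise derivative, undefined exactly at the corners t = 1, 2, 3
    if t <= 1: return (1.0, 0.0)
    if t <= 2: return (1.0, 1.0)
    if t <= 3: return (-1.0, 0.0)
    return (-1.0, -1.0)

n, area = 1000, 0.0
for k in range(4 * n):
    t = (k + 0.5) / n               # midpoints, so the corners are never sampled
    (x, y), (dx, dy) = xy(t), dxy(t)
    area += 0.5 * (x * dy - y * dx) / n
print(area)                         # 1.0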

Can we discover this relationship using the Jacobian?

Graphically, I can imagine that we could find this area relationship by considering a parameterization of a family of nested closed curves, as depicted in fig. 2.

fig. 2. Family of nested closed curves.

For such a parameterization, calculating the area is just a Jacobian evaluation
\begin{equation}\label{eqn:containedArea:80}
\begin{aligned}
A
&= \iint \frac{\partial(x, y)}{\partial(u,t)} du dt \\
&= \iint \lr{ \PD{u}{x} \PD{t}{y} – \PD{u}{y} \PD{t}{x} } du dt \\
&= \iint \lr{ \PD{u}{x} y’ – \PD{u}{y} x’ } du dt.
\end{aligned}
\end{equation}
Let’s try to eliminate the \( u \) derivatives using integration by parts, and see what we get.
\begin{equation}\label{eqn:containedArea:100}
\begin{aligned}
A
&= \iint \lr{ \PD{u}{x} y’ – \PD{u}{y} x’ } du dt \\
&= \iint \frac{d}{du} \lr{ x y’ – y x’ } du dt – \iint \lr{ x \PD{u}{y’} – y \PD{u}{x’} } du dt \\
&= \int \lr{ x y’ – y x’ } dt – \iint \lr{ x \PD{u}{y’} – y \PD{u}{x’} } du dt.
\end{aligned}
\end{equation}
This is interesting, as the first term (in which the \( du \) integration just evaluates \( x y’ – y x’ \) on the outermost curve, assuming the innermost curve of the family shrinks to a point) is the area integral that we are interested in, times two. We are also left with a strange new area integral. Essentially, assuming we trust the claim in the book, we have found that
\begin{equation}\label{eqn:containedArea:120}
A = 2 A – \iint \lr{ x \PD{u}{y’} – y \PD{u}{x’} } du dt,
\end{equation}
so it seems that the area can also be expressed as
\begin{equation}\label{eqn:containedArea:140}
A = \iint \lr{ x \frac{\partial^2 y}{\partial u \partial t} – y \frac{\partial^2 x}{\partial u \partial t} } du dt.
\end{equation}
Let’s again use the circular parameterization to verify that this works. I won’t try to prove this second derivative formula directly. Instead, we’ll use Stokes’ theorem to prove the stated result, from which this formula follows as a side effect, by virtue of our integration by parts expansion above.

For the circular parameterization, we have
\begin{equation}\label{eqn:containedArea:160}
\begin{aligned}
A
&= \int_{r = 0}^R dr \int_{t = 0}^{2 \pi} dt \lr{ x \frac{\partial^2 y}{\partial r \partial t} – y \frac{\partial^2 x}{\partial r \partial t} } \\
&= \int_{r = 0}^R dr \int_{t = 0}^{2 \pi} dt \lr{ r \cos t \frac{\partial \sin t}{\partial t} – r \sin t \frac{\partial \cos t}{\partial t} } \\
&= \int_{r = 0}^R r dr \int_{t = 0}^{2 \pi} dt \lr{ \cos^2 t + \sin^2 t } \\
&= \frac{R^2}{2} 2 \pi \\
&= \pi R^2.
\end{aligned}
\end{equation}
This checks out, at least for this one specific circular parameterization.
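A non-circular check is almost as easy. For a family of nested ellipses \( (x, y) = (a u \cos t, b u \sin t) \), \( u \in [0,1] \), the mixed partials are \( \partial^2 y/\partial u \partial t = b \cos t \) and \( \partial^2 x/\partial u \partial t = -a \sin t \), and \ref{eqn:containedArea:140} should produce \( \pi a b \). A Python sketch of that check (assuming scipy, with arbitrary values for \( a, b \)):

# Check A = \iint ( x d^2y/(du dt) - y d^2x/(du dt) ) du dt for nested ellipses.
import numpy as np
from scipy.integrate import dblquad

a, b = 3.0, 2.0
def integrand(t, u):
    x, y = a * u * np.cos(t), b * u * np.sin(t)
    d2y, d2x = b * np.cos(t), -a * np.sin(t)     # mixed partials w.r.t. u, t
    return x * d2y - y * d2x

area, _ = dblquad(integrand, 0.0, 1.0, 0.0, 2.0 * np.pi)   # u outer, t inner
print(area, np.pi * a * b)                                 # both ~ 18.8495...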

Area formula derivation using Stokes’ theorem.

Theorem 1.1: Green’s theorem.

\begin{equation}\label{eqn:containedArea:260}
\iint dx dy \lr{ \PD{x}{M} – \PD{y}{L} } = \oint L dx + M dy.
\end{equation}

Start proof:

We start with the general two parameter integration theorem
\begin{equation}\label{eqn:containedArea:180}
\iint F d^2 \Bx \lrpartial G = -\oint F d\Bx G,
\end{equation}
set \( F = 1, G = \Bf \), and apply scalar selection
\begin{equation}\label{eqn:containedArea:200}
\iint \gpgradezero{ d^2 \Bx \lrpartial \Bf } = -\oint d\Bx \cdot \Bf,
\end{equation}
to find the two parameter form of Stokes’ theorem
\begin{equation}\label{eqn:containedArea:220}
\iint d^2 \Bx \cdot \lr{ \spacegrad \wedge \Bf } = -\oint d\Bx \cdot \Bf.
\end{equation}

With a planar parameterization, say \( \Bf = L \Be_1 + M \Be_2 \), we have \( d\Bx \cdot \Bf = L dx + M dy \), and for the LHS
\begin{equation}\label{eqn:containedArea:240}
\begin{aligned}
\iint d^2 \Bx \cdot \lr{ \spacegrad \wedge \Bf }
&=
\iint dx dy \Be_{12}^2
\begin{vmatrix}
\partial_1 & \partial_2 \\
L & M
\end{vmatrix} \\
&=
-\iint dx dy \lr{ \PD{x}{M} – \PD{y}{L} }.
\end{aligned}
\end{equation}

End proof.

Parameterized area equation.

If we wish to evaluate an elementary area, we can pick \( L, M \) such that \( \PDi{x}{M} – \PDi{y}{L} = 1 \). One such selection is
\begin{equation}\label{eqn:containedArea:280}
\begin{aligned}
M &= \frac{x}{2} \\
L &= -\frac{y}{2},
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:containedArea:300}
A = \inv{2} \oint -y dx + x dy = \inv{2} \int \lr{ x y’ – y x’ } dt.
\end{equation}
Clearly, there are other possible choices of \( L, M \) that we could use to find alternate area equations, but this symmetric choice works nicely, regardless of the shape of the region.
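For example, \( L = 0, M = x \) and \( L = -y, M = 0 \) both also satisfy \( \PDi{x}{M} - \PDi{y}{L} = 1 \), giving the area forms \( \oint x dy \) and \( -\oint y dx \). Here is a small Python sketch comparing all three for an ellipse boundary (assuming numpy, with arbitrary semi-axes):

# Compare three boundary-integral area formulas for the ellipse x = a cos t, y = b sin t,
# corresponding to different L, M choices with dM/dx - dL/dy = 1.
import numpy as np

a, b = 3.0, 2.0
n = 100000
t = (np.arange(n) + 0.5) * (2.0 * np.pi / n)     # midpoints over [0, 2 pi]
dt = 2.0 * np.pi / n
x, y = a * np.cos(t), b * np.sin(t)
xp, yp = -a * np.sin(t), b * np.cos(t)

half_sym  = 0.5 * np.sum(x * yp - y * xp) * dt   # L = -y/2, M = x/2
x_dy      = np.sum(x * yp) * dt                  # L = 0,    M = x
minus_ydx = -np.sum(y * xp) * dt                 # L = -y,   M = 0
print(half_sym, x_dy, minus_ydx, np.pi * a * b)  # all ~ 18.8495...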

References

[1] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover Publications, 1992.

More on time derivatives of integrals.

June 9, 2024 math and physics play

[Click here for a PDF version of this post]

Motivation.

I was asked about geometric algebra equivalents for a couple identities found in [1], one for line integrals
\begin{equation}\label{eqn:more_feynmans_trick:20}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx =
\int_{C(t)} \lr{
\PD{t}{\Bf} + \spacegrad \lr{ \Bv \cdot \Bf } – \Bv \cross \lr{ \spacegrad \cross \Bf }
}
\cdot d\Bx,
\end{equation}
and one for area integrals
\begin{equation}\label{eqn:more_feynmans_trick:40}
\ddt{} \int_{S(t)} \Bf \cdot d\BA =
\int_{S(t)} \lr{
\PD{t}{\Bf} + \Bv \lr{ \spacegrad \cdot \Bf } – \spacegrad \cross \lr{ \Bv \cross \Bf }
}
\cdot d\BA.
\end{equation}

Both of these look questionable at first glance, because neither has a boundary term. However, they can be transformed with Stokes’ theorem to
\begin{equation}\label{eqn:more_feynmans_trick:60}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx
=
\int_{C(t)} \lr{
\PD{t}{\Bf} – \Bv \cross \lr{ \spacegrad \cross \Bf }
}
\cdot d\Bx
+
\evalbar{\Bv \cdot \Bf }{\Delta C},
\end{equation}
and
\begin{equation}\label{eqn:more_feynmans_trick:80}
\ddt{} \int_{S(t)} \Bf \cdot d\BA =
\int_{S(t)} \lr{
\PD{t}{\Bf} + \Bv \lr{ \spacegrad \cdot \Bf }
}
\cdot d\BA
–
\oint_{\partial S(t)} \lr{ \Bv \cross \Bf } \cdot d\Bx.
\end{equation}
The area integral derivative is now seen to be a variation of one of the special cases of the Leibniz integral rule; see for example [2]. The author admits that the line integral relationship is not commonly used, and it doesn’t show up in the Wikipedia page.

My end goal will be to evaluate the derivative of a general multivector line integral
\begin{equation}\label{eqn:more_feynmans_trick:100}
\ddt{} \int_{C(t)} F d\Bx G,
\end{equation}
and area integral
\begin{equation}\label{eqn:more_feynmans_trick:120}
\ddt{} \int_{S(t)} F d^2\Bx G.
\end{equation}
We’ve derived that line integral result in a different fashion previously, but it’s interesting to see a different approach. Perhaps this approach will lend itself nicely to non-scalar integrands?

Prerequisites.

Definition 1.1: Convective derivative.

The convective derivative of \( \phi(t, \Bx(t)) \) is defined as
\begin{equation*}
\frac{D \phi}{D t} = \lim_{\Delta t \rightarrow 0} \frac{ \phi(t + \Delta t, \Bx + \Delta t \Bv) – \phi(t, \Bx)}{\Delta t},
\end{equation*}
where \( \Bv = d\Bx/dt \).

Theorem 1.1: Convective derivative.

The convective derivative operator may be written
\begin{equation*}
\frac{D}{D t} = \PD{t}{} + \Bv \cdot \spacegrad.
\end{equation*}

Start proof:

Let’s write
\begin{equation}\label{eqn:more_feynmans_trick:140}
\begin{aligned}
v_0 &= 1 \\
u_0 &= t + v_0 h \\
u_k &= x_k + v_k h, \quad k \in [1,3].
\end{aligned}
\end{equation}

The limit, if it exists, must equal the sum of the individual limits, where only one argument is varied in each term
\begin{equation}\label{eqn:more_feynmans_trick:160}
\frac{D \phi}{D t} = \sum_{\alpha = 0}^3 \lim_{h \rightarrow 0} \frac{ \phi(\cdots, u_\alpha, \cdots) – \phi(t, \Bx)}{h},
\end{equation}
but that is just a sum of derivatives, which can be evaluated by the chain rule
\begin{equation}\label{eqn:more_feynmans_trick:180}
\begin{aligned}
\frac{D \phi}{D t}
&= \sum_{\alpha = 0}^{3} \evalbar{ \PD{u_\alpha}{\phi(u_\alpha)} \PD{h}{u_\alpha} }{h = 0} \\
&= \PD{t}{\phi} + \sum_{k = 1}^3 v_k \PD{x_k}{\phi} \\
&= \lr{ \PD{t}{} + \Bv \cdot \spacegrad } \phi.
\end{aligned}
\end{equation}

End proof.
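As a sanity check of this operator form, we can compare a direct time derivative of \( \phi(t, \Bx(t)) \) along a trajectory against \( \lr{ \PDi{t}{} + \Bv \cdot \spacegrad } \phi \). A sympy sketch (the field \( \phi \) and the trajectory here are arbitrary choices):

# Check d/dt phi(t, x(t)) = (partial_t + v . grad) phi for a sample field and trajectory.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
phi = sp.exp(-t) * x * y + sp.sin(z)            # arbitrary test field
traj = (sp.cos(t), sp.sin(t), t**2)             # arbitrary trajectory x(t)
v = [sp.diff(c, t) for c in traj]               # velocity dx/dt

lhs = sp.diff(phi.subs(dict(zip((x, y, z), traj))), t)

rhs = sp.diff(phi, t) + sum(vi * sp.diff(phi, xi) for vi, xi in zip(v, (x, y, z)))
rhs = rhs.subs(dict(zip((x, y, z), traj)))

print(sp.simplify(lhs - rhs))                   # 0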

Definition 1.2: Hestenes overdot notation.

We may use a dot or a tick with a derivative operator, to designate the scope of that operator, allowing it to operate bidirectionally, or in a restricted fashion, holding specific multivector elements constant. This is called the Hestenes overdot notation. Illustrating by example, with multivectors \( F, G \), and allowing the gradient to act bidirectionally, we have
\begin{equation*}
\begin{aligned}
F \spacegrad G
&=
\dot{F} \dot{\spacegrad} G
+
F \dot{\spacegrad} \dot{G} \\
&=
\sum_i \lr{ \partial_i F } \Be_i G + \sum_i F \Be_i \lr{ \partial_i G }.
\end{aligned}
\end{equation*}
The last step is a precise statement of the meaning of the overdot notation, showing that we hold the position of the vector elements of the gradient constant, while the (scalar) partials are allowed to commute, acting on the designated elements.

We will need one additional identity.

Lemma 1.1: Gradient of dot product (one constant vector.)

Given vectors \( \Ba, \Bb \) the gradient of their dot product is given by
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb }
= \lr{ \Bb \cdot \spacegrad } \Ba – \Bb \cdot \lr{ \spacegrad \wedge \Ba }
+ \lr{ \Ba \cdot \spacegrad } \Bb – \Ba \cdot \lr{ \spacegrad \wedge \Bb }.
\end{equation*}
If \( \Bb \) is constant, this reduces to
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb }
=
\dot{\spacegrad} \lr{ \dot{\Ba} \cdot \Bb }
= \lr{ \Bb \cdot \spacegrad } \Ba – \Bb \cdot \lr{ \spacegrad \wedge \Ba }.
\end{equation*}

Start proof:

The \( \Bb \) constant case is trivial to prove. We use \( \Ba \cdot \lr{ \Bb \wedge \Bc } = \lr{ \Ba \cdot \Bb} \Bc – \Bb \lr{ \Ba \cdot \Bc } \), and simply expand the dot product of \( \Bb \) with the curl
\begin{equation}\label{eqn:more_feynmans_trick:200}
\Bb \cdot \lr{ \spacegrad \wedge \Ba }
=
\Bb \cdot \lr{ \dot{\spacegrad} \wedge \dot{\Ba} }
= \lr{ \Bb \cdot \dot{\spacegrad} } \dot{\Ba} – \dot{\spacegrad} \lr{ \dot{\Ba} \cdot \Bb }.
\end{equation}
Rearrangement proves the \( \Bb \) constant identity. The more general statement follows from a chain rule evaluation of the gradient, holding each vector constant in turn
\begin{equation}\label{eqn:more_feynmans_trick:320}
\spacegrad \lr{ \Ba \cdot \Bb }
=
\dot{\spacegrad} \lr{ \dot{\Ba} \cdot \Bb }
+
\dot{\spacegrad} \lr{ \dot{\Bb} \cdot \Ba }.
\end{equation}

End proof.
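In conventional vector algebra form, using \( \Bb \cdot \lr{ \spacegrad \wedge \Ba } = -\Bb \cross \lr{ \spacegrad \cross \Ba } \), the lemma is the familiar identity \( \spacegrad \lr{ \Ba \cdot \Bb } = \lr{ \Bb \cdot \spacegrad } \Ba + \Bb \cross \lr{ \spacegrad \cross \Ba } + \lr{ \Ba \cdot \spacegrad } \Bb + \Ba \cross \lr{ \spacegrad \cross \Bb } \), which is easy to check componentwise. A sympy sketch of that check, with arbitrary sample fields:

# Componentwise check of grad(a.b) = (b.grad)a + b x (curl a) + (a.grad)b + a x (curl b).
import sympy as sp

x, y, z = sp.symbols('x y z')
X = sp.Matrix([x, y, z])
a = sp.Matrix([x * y, sp.sin(z), x * z**2])     # arbitrary sample fields
b = sp.Matrix([sp.exp(x), y * z, x + y])

grad = lambda f: sp.Matrix([sp.diff(f, xi) for xi in X])
curl = lambda F: sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                            sp.diff(F[0], z) - sp.diff(F[2], x),
                            sp.diff(F[1], x) - sp.diff(F[0], y)])
ddir = lambda u, F: sp.Matrix([u.dot(grad(Fi)) for Fi in F])   # (u . grad) F

lhs = grad(a.dot(b))
rhs = ddir(b, a) + b.cross(curl(a)) + ddir(a, b) + a.cross(curl(b))
print((lhs - rhs).applyfunc(sp.simplify))       # zero column vector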

Time derivative of a line integral of a vector field.

We now have all our tools assembled, and can proceed to evaluate the derivative of the line integral. We want to show that

Theorem 1.2:

Given a path parameterized by \( \Bx(\lambda) \), where \( d\Bx = (\PDi{\lambda}{\Bx}) d\lambda \), with points along the curve \( C(t) \) moving through space with velocity \( \Bv(\Bx(\lambda)) \), and a vector function \( \Bf = \Bf(t, \Bx(\lambda)) \),
\begin{equation*}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx =
\int_{C(t)} \lr{
\PD{t}{\Bf} + \spacegrad \lr{ \Bf \cdot \Bv } + \Bv \cdot \lr{ \spacegrad \wedge \Bf}
} \cdot d\Bx
\end{equation*}

Start proof:

I’m going to avoid thinking about the rigorous details, like any requirements for curve continuity and smoothness. We will, however, specify that the end points are given by \( [\lambda_1, \lambda_2] \). Expanding out the parameterization, we seek to evaluate
\begin{equation}\label{eqn:more_feynmans_trick:240}
\int_{C(t)} \Bf \cdot d\Bx
=
\int_{\lambda_1}^{\lambda_2} \Bf(t, \Bx(\lambda) ) \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda.
\end{equation}
The parametric form nicely moves all the boundary time dependence into the integrand, allowing us to write
\begin{equation}\label{eqn:more_feynmans_trick:260}
\begin{aligned}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx
&=
\lim_{\Delta t \rightarrow 0}
\inv{\Delta t}
\int_{\lambda_1}^{\lambda_2}
\lr{ \Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda)) ) \cdot \frac{\partial}{\partial \lambda} \lr{ \Bx + \Delta t \Bv(\Bx(\lambda)) } – \Bf(t, \Bx(\lambda)) \cdot \frac{\partial \Bx}{\partial \lambda} } d\lambda \\
&=
\lim_{\Delta t \rightarrow 0}
\inv{\Delta t}
\int_{\lambda_1}^{\lambda_2}
\lr{ \Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda)) ) – \Bf(t, \Bx)} \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda \\
&\quad+
\lim_{\Delta t \rightarrow 0}
\int_{\lambda_1}^{\lambda_2}
\Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda) )) \cdot \PD{\lambda}{}\Bv(\Bx(\lambda)) d\lambda \\
&=
\int_{\lambda_1}^{\lambda_2}
\frac{D \Bf}{Dt} \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda +
\lim_{\Delta t \rightarrow 0}
\int_{\lambda_1}^{\lambda_2}
\Bf(t + \Delta t, \Bx(\lambda) + \Delta t \Bv(\Bx(\lambda)) ) \cdot \frac{\partial}{\partial \lambda} \Bv(\Bx(\lambda)) d\lambda \\
&=
\int_{\lambda_1}^{\lambda_2}
\lr{ \PD{t}{\Bf} + \lr{ \Bv \cdot \spacegrad } \Bf } \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda
+
\int_{\lambda_1}^{\lambda_2}
\Bf \cdot \frac{\partial \Bv}{\partial \lambda} d\lambda
\end{aligned}
\end{equation}
At this point, we have a \( d\Bx \) in the first integrand, and a \( d\Bv \) in the second. We can expand the second integrand, evaluating the derivative using the chain rule, to find
\begin{equation}\label{eqn:more_feynmans_trick:280}
\begin{aligned}
\Bf \cdot \PD{\lambda}{\Bv}
&=
\sum_i \Bf \cdot \PD{x_i}{\Bv} \PD{\lambda}{x_i} \\
&=
\sum_{i,j} f_j \PD{x_i}{v_j} \PD{\lambda}{x_i} \\
&=
\sum_{j} f_j \lr{ \spacegrad v_j } \cdot \PD{\lambda}{\Bx} \\
&=
\sum_{j} \lr{ \dot{\spacegrad} f_j \dot{v_j} } \cdot \PD{\lambda}{\Bx} \\
&=
\dot{\spacegrad} \lr{ \Bf \cdot \dot{\Bv} } \cdot \PD{\lambda}{\Bx}.
\end{aligned}
\end{equation}
Substitution gives
\begin{equation}\label{eqn:more_feynmans_trick:300}
\begin{aligned}
\ddt{} \int_{C(t)} \Bf \cdot d\Bx
&=
\int_{C(t)}
\lr{ \PD{t}{\Bf} + \lr{ \Bv \cdot \spacegrad } \Bf + \dot{\spacegrad} \lr{ \Bf \cdot \dot{\Bv} } } \cdot \frac{\partial \Bx}{\partial \lambda} d\lambda \\
&=
\int_{C(t)}
\lr{ \PD{t}{\Bf}
+ \spacegrad \lr{ \Bf \cdot \Bv }
+ \lr{ \Bv \cdot \spacegrad } \Bf
– \dot{\spacegrad} \lr{ \dot{\Bf} \cdot \Bv }
} \cdot d\Bx \\
&=
\int_{C(t)}
\lr{ \PD{t}{\Bf}
+ \spacegrad \lr{ \Bf \cdot \Bv }
+ \Bv \cdot \lr{ \spacegrad \wedge \Bf }
} \cdot d\Bx,
\end{aligned}
\end{equation}
where the last simplification utilizes lemma 1.1.

End proof.

Since \( \Ba \cdot \lr{ \Bb \wedge \Bc } = -\Ba \cross \lr{ \Bb \cross \Bc } \), observe that we have also recovered \ref{eqn:more_feynmans_trick:20}.
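A spot check is worthwhile here. Working at the integrand level (i.e., before the \( \lambda \) integration), the theorem is the pointwise statement that \( d/dt \lr{ \Bf(t, \Bx(t,\lambda)) \cdot \PDi{\lambda}{\Bx} } \) equals the claimed integrand dotted with \( \PDi{\lambda}{\Bx} \). Here is a sympy sketch of that check for the rotational flow \( \Bv = (-y, x, 0) \); the specific \( \Bf \) and initial curve are arbitrary choices of mine:

# Integrand-level check of the line integral derivative, for the rotation flow
# v = (-y, x, 0):  d/dt [ f(t, X) . dX/dlam ] = [ df/dt + grad(v.f) - v x curl f ] . dX/dlam.
import sympy as sp

t, lam, x, y, z = sp.symbols('t lam x y z')
xyz = (x, y, z)

f = sp.Matrix([sp.sin(x) + t * y, x * z, sp.cos(t) + x**2])   # arbitrary field f(t, x)
v = sp.Matrix([-y, x, 0])                                     # rotation velocity field

x0 = sp.Matrix([lam, lam**2, sp.sin(lam)])                    # arbitrary initial curve
X = sp.Matrix([x0[0] * sp.cos(t) - x0[1] * sp.sin(t),         # flow of v applied to x0
               x0[0] * sp.sin(t) + x0[1] * sp.cos(t),
               x0[2]])

grad = lambda s: sp.Matrix([sp.diff(s, xi) for xi in xyz])
curl = lambda F: sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                            sp.diff(F[0], z) - sp.diff(F[2], x),
                            sp.diff(F[1], x) - sp.diff(F[0], y)])

at_X = dict(zip(xyz, X))
lhs = sp.diff(f.subs(at_X).dot(sp.diff(X, lam)), t)
rhs = (sp.diff(f, t) + grad(v.dot(f)) - v.cross(curl(f))).subs(at_X).dot(sp.diff(X, lam))
print(sp.simplify(sp.expand(lhs - rhs)))                      # 0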

Time derivative of a line integral of a bivector field.

For a bivector line integral, we have

Theorem 1.3:

Given a path parameterized by \( \Bx(\lambda) \), where \( d\Bx = (\PDi{\lambda}{\Bx}) d\lambda \), with points along the curve \( C(t) \) moving through space with velocity \( \Bv(\Bx(\lambda)) \), and a bivector function \( B = B(t, \Bx(\lambda)) \),
\begin{equation*}
\ddt{} \int_{C(t)} B \cdot d\Bx =
\int_{C(t)}
\PD{t}{B} \cdot d\Bx + \lr{ d\Bx \cdot \spacegrad } \lr{ B \cdot \Bv } + \lr{ \lr{ \Bv \wedge d\Bx } \cdot \spacegrad } \cdot B.
\end{equation*}

Start proof:

Skipping the steps that follow our previous procedure exactly, we have
\begin{equation}\label{eqn:more_feynmans_trick:340}
\ddt{} \int_{C(t)} B \cdot d\Bx =
\int_{C(t)}
\PD{t}{B} \cdot d\Bx + \lr{ \Bv \cdot \spacegrad } B \cdot d\Bx + B \cdot d\Bv.
\end{equation}
Since
\begin{equation}\label{eqn:more_feynmans_trick:360}
\begin{aligned}
B \cdot d\Bv
&= B \cdot \PD{\lambda}{\Bv} d\lambda \\
&= B \cdot \PD{x_i}{\Bv} \PD{\lambda}{x_i} d\lambda \\
&= B \cdot \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv },
\end{aligned}
\end{equation}
we have
\begin{equation}\label{eqn:more_feynmans_trick:380}
\ddt{} \int_{C(t)} B \cdot d\Bx
=
\int_{C(t)}
\PD{t}{B} \cdot d\Bx + \lr{ \Bv \cdot \spacegrad } B \cdot d\Bx + B \cdot \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv }.
\end{equation}
Let’s reduce the last two terms in this integrand
\begin{equation}\label{eqn:more_feynmans_trick:400}
\begin{aligned}
\lr{ \Bv \cdot \spacegrad } B \cdot d\Bx + B \cdot \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv }
&=
\lr{ \Bv \cdot \spacegrad } B \cdot d\Bx –
\lr{ d\Bx \cdot \dot{\spacegrad} } \lr{ \dot{\Bv} \cdot B } \\
&=
\lr{ \Bv \cdot \spacegrad } B \cdot d\Bx
– \lr{ d\Bx \cdot \spacegrad} \lr{ \Bv \cdot B }
+ \lr{ d\Bx \cdot \dot{\spacegrad} } \lr{ \Bv \cdot \dot{B} } \\
&=
\lr{ d\Bx \cdot \spacegrad} \lr{ B \cdot \Bv }
+ \lr{ \Bv \cdot \dot{\spacegrad} } \dot{B} \cdot d\Bx
+ \lr{ d\Bx \cdot \dot{\spacegrad} } \lr{ \Bv \cdot \dot{B} } \\
&=
\lr{ d\Bx \cdot \spacegrad} \lr{ B \cdot \Bv }
+ \lr{ \Bv \lr{ d\Bx \cdot \spacegrad } – d\Bx \lr{ \Bv \cdot \spacegrad } } \cdot B \\
&=
\lr{ d\Bx \cdot \spacegrad} \lr{ B \cdot \Bv }
+ \lr{ \lr{ \Bv \wedge d\Bx } \cdot \spacegrad } \cdot B.
\end{aligned}
\end{equation}
Back substitution finishes the job.

End proof.

Time derivative of a multivector line integral.

Theorem 1.4: Time derivative of multivector line integral.

Given a path parameterized by \( \Bx(\lambda) \), where \( d\Bx = (\PDi{\lambda}{\Bx}) d\lambda \), with points along a \( C(t) \) moving through space at a velocity \( \Bv(\Bx(\lambda)) \), and multivector functions \( M = M(t, \Bx(\lambda)), N = N(t, \Bx(\lambda)) \),
\begin{equation*}
\ddt{} \int_{C(t)} M d\Bx N =
\int_{C(t)}
\frac{D}{D t} M d\Bx N + M \lr{ \lr{ d\Bx \cdot \dot{\spacegrad} } \dot{\Bv} } N.
\end{equation*}

It is useful to write this out explicitly for clarity
\begin{equation}\label{eqn:more_feynmans_trick:420}
\ddt{} \int_{C(t)} M d\Bx N =
\int_{C(t)}
\PD{t}{M} d\Bx N + M d\Bx \PD{t}{N}
+ \dot{M} \lr{ \Bv \cdot \dot{\spacegrad} } d\Bx N
+ M d\Bx \lr{ \Bv \cdot \dot{\spacegrad} } \dot{N}
+ M \lr{ \lr{ d\Bx \cdot \dot{\spacegrad} } \dot{\Bv} } N.
\end{equation}

The proof is left to the reader, but follows the patterns above.

It’s not obvious whether there is a nice way to reduce this, as we did for the scalar valued line integral of a vector function, and the vector valued line integral of a bivector function. In particular, our vector and bivector results had \( \spacegrad \lr{ \Bf \cdot \Bv } \), and \( \spacegrad \lr{ B \cdot \Bv } \) terms respectively, which allows for the boundary term to be evaluated using Stokes’ theorem. Is such a manipulation possible here?

Coming later: surface integrals!

References

[1] Nicholas Kemmer. Vector Analysis: A physicist’s guide to the mathematics of fields in three dimensions. CUP Archive, 1977.

[2] Wikipedia contributors. Leibniz integral rule — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Leibniz_integral_rule&oldid=1223666713, 2024. [Online; accessed 22-May-2024].

Triangle area problem: REVISITED.

March 31, 2024 math and physics play

[Click here for a PDF version of this post]

On LinkedIn, James asked for ideas about how to solve “What is the total area of ABC? You should be able to solve this!” using geometric algebra.

I found a couple ways, and this last variation is pretty cool.

fig. 1. Triangle with given areas.

To start with I’ve re-sketched the triangle with the areas slightly more to scale in fig. 1, where areas \( A_1 = 40, A_2 = 30, A_3 = 35, A_4 = 84 \) are given. The aim is to find the total area \( \sum A_i \).

If we had the vertex and center locations as vectors, we could easily compute the total area, but we don’t. We also don’t know the locations of the edge intersections, but can calculate those, as they satisfy
\begin{equation}\label{eqn:triangle_area_problem:20}
\begin{aligned}
\BD &= s_1 \BA = \BB + t_1 \lr{ \BC – \BB } \\
\BE &= s_2 \BC = \BA + t_2 \lr{ \BB – \BA } \\
\BF &= s_3 \BB = \BA + t_3 \lr{ \BC – \BA }.
\end{aligned}
\end{equation}
It turns out that the problem is overspecified, and we will only need \( \BD, \BE \). To find those, we may eliminate the \( t_i \)’s by wedging appropriately (or equivalently, using Cramer’s rule), to find
\begin{equation}\label{eqn:triangle_area_problem:40}
\begin{aligned}
s_1 \BA \wedge \lr{ \BC – \BB } &= \BB \wedge \lr{ \BC – \BB } \\
s_2 \BC \wedge \lr{ \BB – \BA } &= \BA \wedge \lr{ \BB – \BA },
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:triangle_area_problem:60}
\begin{aligned}
s_1 &= \frac{\BB \wedge \BC }{\BA \wedge \lr{ \BC – \BB }} \\
s_2 &= \frac{\BA \wedge \BB }{\BC \wedge \lr{ \BB – \BA }}.
\end{aligned}
\end{equation}
Now let’s introduce some scalar area variables, each a bivector area element multiplied by the inverse pseudoscalar, with \( i = \Be_{1} \Be_2 \)
\begin{equation}\label{eqn:triangle_area_problem:81}
\begin{aligned}
X &= \lr{ \BA \wedge \BB } i^{-1} = \begin{vmatrix} \BA & \BB \end{vmatrix} \\
Y &= \lr{ \BC \wedge \BB } i^{-1} = \begin{vmatrix} \BC & \BB \end{vmatrix} \\
Z &= \lr{ \BA \wedge \BC } i^{-1} = \begin{vmatrix} \BA & \BC \end{vmatrix},
\end{aligned}
\end{equation}
Note that the orientations of all of these have been picked to be positive, matching the figure, and that the triangle area that we seek for this problem is \( 1/2 \Abs{ \BA \wedge \BB } = X/2 \).

The intersection parameters, after cancelling pseudoscalar factors, are
\begin{equation}\label{eqn:triangle_area_problem:100}
\begin{aligned}
s_1 &= \frac{\BB \wedge \BC }{\BA \wedge \BC – \BA \wedge \BB } = \frac{-Y}{Z – X} \\
s_2 &= \frac{\BA \wedge \BB }{\BC \wedge \BB – \BC \wedge \BA } = \frac{X}{Y + Z},
\end{aligned}
\end{equation}
so the intersection points are
\begin{equation}\label{eqn:triangle_area_problem:120}
\begin{aligned}
\BD &= \BA \frac{Y}{X – Z} \\
\BE &= \BC \frac{X}{Y + Z}.
\end{aligned}
\end{equation}
Observe that both scalar factors are positive (i.e.: \( X > Z \).)

We may now express all the known areas in terms of our area variables
\begin{equation}\label{eqn:triangle_area_problem:140}
\begin{aligned}
A_1 &= \inv{2} \lr{ \BD \wedge \BC } i^{-1} \\
A_1 + A_2 &= \inv{2} \lr{ \BA \wedge \BC } i^{-1} \\
A_1 + A_2 + A_3 &= \inv{2} \lr{ \BA \wedge \BE } i^{-1} \\
A_2 &= \inv{2} \lr{ \lr{\BA – \BD} \wedge \lr{ \BC – \BD } } i^{-1}\\
A_3 &= \inv{2} \lr{ \lr{\BA – \BC} \wedge \lr{ \BE – \BC } } i^{-1}\\
A_5 &= \inv{2} \lr{ \lr{\BB – \BC} \wedge \lr{ \BF – \BC } } i^{-1}.
\end{aligned}
\end{equation}

As mentioned, the problem is overspecified, and we can get away with just the first three of these relations to solve for the total area. Eliminating \( \BD, \BE \) from those gives us
\begin{equation}\label{eqn:triangle_area_problem:180}
A_1 = \inv{2} \frac{Y}{X – Z} \lr{ \BA \wedge \BC } i^{-1} = \frac{Z}{2} \lr{ \frac{Y}{X – Z} },
\end{equation}
\begin{equation}\label{eqn:triangle_area_problem:460}
A_1 + A_2 = \inv{2} \lr{ \BA \wedge \BC } i^{-1} = \frac{Z}{2},
\end{equation}
and
\begin{equation}\label{eqn:triangle_area_problem:400}
\begin{aligned}
A_1 + A_2 + A_3 &= \inv{2} \lr{ \BA \wedge \BE } i^{-1} \\
&= \inv{2} \lr{ \BA \wedge \BC } i^{-1} \frac{X}{Y + Z} \\
&= \frac{Z}{2} \frac{X}{Y + Z}.
\end{aligned}
\end{equation}

Let’s eliminate \( Z \) to start with, leaving
\begin{equation}\label{eqn:triangle_area_problem:420}
\begin{aligned}
A_1 \lr{ X – 2 A_1 – 2 A_2 } &= Y \lr{ A_1 + A_2 } \\
\lr{ A_1 + A_2 + A_3 } \lr{ Y + 2 A_1 + 2 A_2 } &= \lr{ A_1 + A_2 } X.
\end{aligned}
\end{equation}
Solving for \( Y \) yields
\begin{equation}\label{eqn:triangle_area_problem:380}
Y = – 2 A_1 – 2 A_2 + \frac{ \lr{A_1 + A_2} X }{ A_1 + A_2 + A_3 } = \lr{ A_1 + A_2 } \lr{ -2 + \frac{X}{A_1 + A_2 + A_3 } },
\end{equation}
and back substitution leaves us with a linear equation in \( X \)
\begin{equation}\label{eqn:triangle_area_problem:480}
\lr{ A_1 + A_2}^2 \lr{ -2 + \frac{X}{A_1 + A_2 + A_3 } } = A_1 \lr{ X – 2 A_1 – 2 A_2 }.
\end{equation}

This is easily solved to find
\begin{equation}\label{eqn:triangle_area_problem:500}
\frac{X}{2} = \frac{ \lr{ A_1 + A_2} A_2 \lr{ A_1 + A_2 + A_3 } }{A_2 \lr{ A_1 + A_2} – A_1 A_3 }.
\end{equation}
Plugging in the numeric values for the problem solves it, giving a total triangular area of \( \inv{2} \lr{\BA \wedge \BB } i^{-1} = X/2 = 315 \).
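Since it is so easy to make sign errors in this sort of calculation, here is a quick Python consistency check of \ref{eqn:triangle_area_problem:500}: pick arbitrary vectors \( \BA, \BB, \BC \), construct \( \BD, \BE \) from \ref{eqn:triangle_area_problem:120}, compute \( A_1, A_2, A_3 \) from the first three relations of \ref{eqn:triangle_area_problem:140}, and confirm that the formula reproduces \( X/2 \). The coordinates used are arbitrary, and the numeric answer is evaluated at the end.

# Consistency check of the closed form total area formula, plus the numeric answer.
import numpy as np

det2 = lambda u, w: u[0] * w[1] - u[1] * w[0]   # X = |A B|, Y = |C B|, Z = |A C|

A, B, C = np.array([6.0, 1.0]), np.array([2.0, 5.0]), np.array([3.0, 2.0])
X, Y, Z = det2(A, B), det2(C, B), det2(A, C)

D = A * Y / (X - Z)
E = C * X / (Y + Z)

A1 = 0.5 * det2(D, C)
A2 = 0.5 * det2(A, C) - A1
A3 = 0.5 * det2(A, E) - A1 - A2

total = (A1 + A2) * A2 * (A1 + A2 + A3) / (A2 * (A1 + A2) - A1 * A3)
print(total, X / 2)                             # both 14.0 for these vectors

A1, A2, A3 = 40.0, 30.0, 35.0                   # the given areas from the problem
print((A1 + A2) * A2 * (A1 + A2 + A3) / (A2 * (A1 + A2) - A1 * A3))   # 315.0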

Now, I’ll have to watch the video and see how he solved it.

Triangle area problem

March 30, 2024 math and physics play

[Click here for a PDF version of this post]

On LinkedIn, James asked for ideas about how to solve “What is the total area of ABC? You should be able to solve this!” using geometric algebra.

I found one way, but suspect it’s not the easiest way to solve the problem.

To start with I’ve re-sketched the triangle with the areas slightly more to scale in fig. 1, where areas \( A_1, A_2, A_3, A_5\) are given. The aim is to find the total area \( \sum A_i \).

fig. 1. Triangle with given areas.


If we had the vertex and center locations as vectors, we could easily compute the total area, but we don’t. We also don’t know the locations of the edge intersections, but can calculate those, as they satisfy
\begin{equation}\label{eqn:triangle_area_problem:20}
\begin{aligned}
\BD &= s_1 \BA = \BB + t_1 \lr{ \BC – \BB } \\
\BE &= s_2 \BC = \BA + t_2 \lr{ \BB – \BA } \\
\BF &= s_3 \BB = \BA + t_3 \lr{ \BC – \BA }.
\end{aligned}
\end{equation}
Eliminating the \( t_i \) constants by wedging appropriately (or equivalently, using Cramer’s rule), we find
\begin{equation}\label{eqn:triangle_area_problem:40}
\begin{aligned}
s_1 \BA \wedge \lr{ \BC – \BB } &= \BB \wedge \lr{ \BC – \BB } \\
s_2 \BC \wedge \lr{ \BB – \BA } &= \BA \wedge \lr{ \BB – \BA } \\
s_3 \BB \wedge \lr{ \BC – \BA } &= \BA \wedge \lr{ \BC – \BA },
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:triangle_area_problem:60}
\begin{aligned}
s_1 &= \frac{\BB \wedge \BC }{\BA \wedge \lr{ \BC – \BB }} \\
s_2 &= \frac{\BA \wedge \BB }{\BC \wedge \lr{ \BB – \BA }} \\
s_3 &= \frac{\BA \wedge \BC }{\BB \wedge \lr{ \BC – \BA }}.
\end{aligned}
\end{equation}
Introducing bivector (signed-area) unknowns
\begin{equation}\label{eqn:triangle_area_problem:80}
\begin{aligned}
\alpha &= \BA \wedge \BB = \begin{vmatrix} \BA & \BB \end{vmatrix} \Be_{12} \\
\beta &= \BB \wedge \BC = \begin{vmatrix} \BB & \BC \end{vmatrix} \Be_{12} \\
\gamma &= \BC \wedge \BA = \begin{vmatrix} \BC & \BA \end{vmatrix} \Be_{12},
\end{aligned}
\end{equation}
the intersection parameters are
\begin{equation}\label{eqn:triangle_area_problem:100}
\begin{aligned}
s_1 &= \frac{\BB \wedge \BC }{\BA \wedge \BC – \BA \wedge \BB } = \frac{\beta}{-\gamma – \alpha} \\
s_2 &= \frac{\BA \wedge \BB }{\BC \wedge \BB – \BC \wedge \BA } = \frac{\alpha}{-\beta – \gamma} \\
s_3 &= \frac{\BA \wedge \BC }{\BB \wedge \BC – \BB \wedge \BA } = \frac{-\gamma}{\beta + \alpha},
\end{aligned}
\end{equation}
so the intersection points are
\begin{equation}\label{eqn:triangle_area_problem:120}
\begin{aligned}
\BD &= -\BA \frac{\beta}{\gamma + \alpha} \\
\BE &= -\BC \frac{\alpha}{\beta + \gamma} \\
\BF &= -\BB \frac{\gamma}{\beta + \alpha}.
\end{aligned}
\end{equation}

We may now express the known areas in terms of these unknown vectors
\begin{equation}\label{eqn:triangle_area_problem:140}
\begin{aligned}
A_1 &= \inv{2} \Abs{ \BD \wedge \BC } \\
A_2 &= \inv{2} \Abs{ \lr{\BC – \BA} \wedge \lr{ \BD – \BA } } \\
A_3 &= \inv{2} \Abs{ \lr{\BA – \BC} \wedge \lr{ \BE – \BC } } \\
A_5 &= \inv{2} \Abs{ \lr{\BB – \BC} \wedge \lr{ \BF – \BC } },
\end{aligned}
\end{equation}
but
\begin{equation}\label{eqn:triangle_area_problem:160}
\begin{aligned}
\BD – \BA &= -\BA \lr{ 1 + \frac{\beta}{\gamma + \alpha} } \\
\BE – \BC &= -\BC \lr{ 1 + \frac{\alpha}{\beta + \gamma} } \\
\BF – \BC &= -\BB \frac{\gamma}{\beta + \alpha} – \BC,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:triangle_area_problem:180}
\begin{aligned}
A_1 &= \inv{2} \Abs{ \BA \wedge \BC \frac{\beta}{\gamma + \alpha} } = \inv{2} \Abs{ \gamma} \Abs{ \frac{\beta}{\gamma + \alpha} } \\
A_2 &= \inv{2} \Abs{ \lr{\BC – \BA} \wedge \BA } \Abs{ 1 + \frac{\beta}{\gamma + \alpha} } = \inv{2} \Abs{\gamma} \Abs{ 1 + \frac{\beta}{\gamma + \alpha} } \\
A_3 &= \inv{2} \Abs{ \lr{\BA – \BC} \wedge \BC } \Abs{ 1 + \frac{\alpha}{\beta + \gamma} } = \inv{2} \Abs{\gamma} \Abs{ 1 + \frac{\alpha}{\beta + \gamma} },
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:triangle_area_problem:200}
\begin{aligned}
A_5
&= \inv{2} \Abs{ \lr{\BB – \BC} \wedge \lr{ \BB \frac{\gamma}{\beta + \alpha} + \BC } } \\
&= \inv{2} \Abs{ \BB \wedge \BC – \BC \wedge \BB \frac{\gamma}{\beta + \alpha} } \\
&= \inv{2} \Abs{ \beta } \Abs{ 1 + \frac{\gamma}{\beta + \alpha} }.
\end{aligned}
\end{equation}

This gives us four equations in three (bivector) unknowns
\begin{equation}\label{eqn:triangle_area_problem:220}
\begin{aligned}
4 A_1^2 \lr{ \gamma + \alpha }^2 &= -\gamma^2 \beta^2 \\
4 A_2^2 \lr{ \gamma + \alpha }^2 &= -\gamma^2 \lr{ \alpha + \beta + \gamma }^2 \\
4 A_3^2 \lr{ \gamma + \beta }^2 &= -\gamma^2 \lr{ \alpha + \beta + \gamma }^2 \\
4 A_4^2 \lr{ \alpha + \beta }^2 &= -\beta^2 \lr{ \alpha + \beta + \gamma }^2.
\end{aligned}
\end{equation}
Let’s recast this in terms of area determinants, to eliminate the bivector variables in these equations. To do so, write
\begin{equation}\label{eqn:triangle_area_problem:240}
\begin{aligned}
X &= \begin{vmatrix} \BA & \BB \end{vmatrix} \\
Y &= \begin{vmatrix} \BB & \BC \end{vmatrix} \\
Z &= \begin{vmatrix} \BC & \BA \end{vmatrix}.
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:triangle_area_problem:300}
\begin{aligned}
4 A_1^2 \lr{ Z + X }^2 &= Z^2 Y^2 \\
4 A_2^2 \lr{ Z + X }^2 &= Z^2 \lr{ X + Y + Z }^2 \\
4 A_3^2 \lr{ Z + Y }^2 &= Z^2 \lr{ X + Y + Z }^2 \\
4 A_4^2 \lr{ X + Y }^2 &= Y^2 \lr{ X + Y + Z }^2.
\end{aligned}
\end{equation}
The goal is now to solve this system for \( X \). That solution (courtesy of Mathematica), for the numeric values in the original problem \( A_1 = 40, A_2 = 30, A_3 = 35, A_4 = 84 \), is:

\begin{equation}\label{eqn:triangle_area_problem:280}
\begin{aligned}
X &= \pm 630 \\
Y &= \mp 280 \\
Z &= \mp 140,
\end{aligned}
\end{equation}
so the total area is \( 315 \).
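For anybody without Mathematica handy, a spot check of that quoted solution against \ref{eqn:triangle_area_problem:300} is trivial in plain Python:

# Verify that (X, Y, Z) = (630, -280, -140) satisfies the four equations above.
A1, A2, A3, A4 = 40, 30, 35, 84
X, Y, Z = 630, -280, -140
print([
    4 * A1**2 * (Z + X)**2 - Z**2 * Y**2,
    4 * A2**2 * (Z + X)**2 - Z**2 * (X + Y + Z)**2,
    4 * A3**2 * (Z + Y)**2 - Z**2 * (X + Y + Z)**2,
    4 * A4**2 * (X + Y)**2 - Y**2 * (X + Y + Z)**2,
])                                              # [0, 0, 0, 0]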

Now, I’ll have to watch the video and see how he solved it. I’d guess in a considerably simpler way.

Simplifying the previous adjoint matrix results.

January 17, 2024 math and physics play

[Click here for a PDF version of this (and the previous) post]

We previously found determinant expressions for the matrix elements of the adjoint for 2D and 3D matrices \( M \). However, we can extract additional structure from each of those results.

2D case.

Given a matrix expressed in block matrix form in terms of its columns
\begin{equation}\label{eqn:adjoint:500}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix},
\end{equation}
we found that the adjoint \( A \) satisfying \( M A = \Abs{M} I \) had the structure
\begin{equation}\label{eqn:adjoint:520}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} \\
& \\
\begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix}
\end{bmatrix}.
\end{equation}
We initially had wedge product expressions for each of these matrix elements, and can uncover that structure by putting back those wedge products. Modulo sign, each of these matrix elements has the form
\begin{equation}\label{eqn:adjoint:540}
\begin{aligned}
\begin{vmatrix} \Be_i & \Bm_j \end{vmatrix}
&=
\lr{ \Be_i \wedge \Bm_j } i^{-1} \\
&=
\gpgradezero{
\lr{ \Be_i \wedge \Bm_j } i^{-1}
} \\
&=
\gpgradezero{
\lr{ \Be_i \Bm_j – \Be_i \cdot \Bm_j } i^{-1}
} \\
&=
\gpgradezero{
\Be_i \Bm_j i^{-1}
} \\
&=
\Be_i \cdot \lr{ \Bm_j i^{-1} },
\end{aligned}
\end{equation}
where \( i = \Be_{12} \). The adjoint matrix is
\begin{equation}\label{eqn:adjoint:560}
A =
\begin{bmatrix}
-\lr{ \Bm_2 i } \cdot \Be_1 & -\lr{ \Bm_2 i } \cdot \Be_2 \\
\lr{ \Bm_1 i } \cdot \Be_1 & \lr{ \Bm_1 i } \cdot \Be_2 \\
\end{bmatrix}.
\end{equation}
If we use a column vector representation of the vectors \( \Bm_j i^{-1} \), we can write the adjoint in a compact hybrid geometric-algebra matrix form
\begin{equation}\label{eqn:adjoint:640}
A =
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \\
\lr{ \Bm_1 i }^\T
\end{bmatrix}.
\end{equation}

Check:

Let’s see if this works, by multiplying with \( M \)
\begin{equation}\label{eqn:adjoint:580}
\begin{aligned}
A M &=
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \\
\lr{ \Bm_1 i }^\T
\end{bmatrix}
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix} \\
&=
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \Bm_1 & -\lr{ \Bm_2 i }^\T \Bm_2 \\
\lr{ \Bm_1 i }^\T \Bm_1 & \lr{ \Bm_1 i }^\T \Bm_2
\end{bmatrix}.
\end{aligned}
\end{equation}
Those dot products have the form
\begin{equation}\label{eqn:adjoint:600}
\begin{aligned}
\lr{ \Bm_j i }^\T \Bm_i
&=
\lr{ \Bm_j i } \cdot \Bm_i \\
&=
\gpgradezero{ \lr{ \Bm_j i } \Bm_i } \\
&=
\gpgradezero{ -i \Bm_j \Bm_i } \\
&=
-i \lr{ \Bm_j \wedge \Bm_i },
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:adjoint:620}
\begin{aligned}
A M &=
\begin{bmatrix}
i \lr{ \Bm_2 \wedge \Bm_1 } & 0 \\
0 & -i \lr { \Bm_1 \wedge \Bm_2 }
\end{bmatrix} \\
&=
\Abs{M} I.
\end{aligned}
\end{equation}
We find the determinant weighted identity that we expected. Our methods are a bit schizophrenic, switching fluidly between matrix and geometric algebra representations, but provided we are careful enough, this isn’t problematic.
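For completeness, here is a numeric version of that check in Python (assuming numpy). Since \( i = \Be_{12} \), the map \( \Bm \rightarrow \Bm i \) is just a 90 degree rotation, \( (m_x, m_y) \rightarrow (-m_y, m_x) \):

# Numeric check of the 2D adjoint block form for a random matrix M.
import numpy as np

rot = lambda m: np.array([-m[1], m[0]])        # m -> m i, with i = e_{12}

M = np.random.rand(2, 2)
m1, m2 = M[:, 0], M[:, 1]
A = np.vstack([-rot(m2), rot(m1)])             # rows -(m_2 i)^T and (m_1 i)^T

d = np.linalg.det(M)
print(np.allclose(A @ M, d * np.eye(2)), np.allclose(M @ A, d * np.eye(2)))   # True True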

3D case.

Now, let’s look at the 3D case, where we assume a column vector representation of the matrix of interest
\begin{equation}\label{eqn:adjoint:660}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix},
\end{equation}
and try to simplify the expression we found for the adjoint
\begin{equation}\label{eqn:adjoint:680}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_2 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_3 & \Bm_1 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_1 & \Bm_2 \end{vmatrix}
\end{bmatrix}.
\end{equation}
As with the 2D case, let’s re-express these determinants in wedge product form. We’ll write \( I = \Be_{123} \), and find
\begin{equation}\label{eqn:adjoint:700}
\begin{aligned}
\begin{vmatrix} \Be_i & \Bm_j & \Bm_k \end{vmatrix}
&=
\lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} \\
&=
\gpgradezero{ \lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} } \\
&=
\gpgradezero{ \lr{
\Be_i \lr{ \Bm_j \wedge \Bm_k }
– \Be_i \cdot \lr{ \Bm_j \wedge \Bm_k }
} I^{-1} } \\
&=
\gpgradezero{
\Be_i \lr{ \Bm_j \wedge \Bm_k }
I^{-1} } \\
&=
\gpgradezero{
\Be_i \lr{ \Bm_j \cross \Bm_k } I
I^{-1} } \\
&=
\Be_i \cdot \lr{ \Bm_j \cross \Bm_k }.
\end{aligned}
\end{equation}
We see that we can put the adjoint in block matrix form
\begin{equation}\label{eqn:adjoint:720}
A =
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \\
\end{bmatrix}.
\end{equation}

Check:

\begin{equation}\label{eqn:adjoint:740}
\begin{aligned}
A M
&=
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \\
\end{bmatrix}
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix} \\
&=
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_1 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_2 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_3 \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_1 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_2 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_3 \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_1 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_2 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_3
\end{bmatrix} \\
&=
\Abs{M} I.
\end{aligned}
\end{equation}

Essentially, we found that the rows of the adjoint matrix are each parallel to the reciprocal frame vectors of the columns of \( M \). This makes sense, as the reciprocal frame encodes a generalized inverse of sorts.
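That observation is also easy to check numerically: the rows of \( A \), scaled by \( 1/\Abs{M} \), should be exactly the rows of \( M^{-1} \). A Python sketch (assuming numpy):

# Numeric check of the 3D adjoint block form, and of the reciprocal frame remark.
import numpy as np

M = np.random.rand(3, 3)
m1, m2, m3 = M[:, 0], M[:, 1], M[:, 2]
A = np.vstack([np.cross(m2, m3), np.cross(m3, m1), np.cross(m1, m2)])

d = np.linalg.det(M)
print(np.allclose(A @ M, d * np.eye(3)))       # True
print(np.allclose(A / d, np.linalg.inv(M)))    # True: rows of A/|M| are the reciprocal frame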