## Motivation.

I was asked about the geometric algebra equivalents of some of the vector calculus identities from [1]. I’ll call the specific page of those calculus notes “the article”. The article includes identities like
\label{eqn:formAndCurl:20}
\begin{aligned}
\spacegrad \cdot \lr{ \BF \cross \BG } &= \BG \cdot \lr{ \spacegrad \cross \BF } - \BF \cdot \lr{ \spacegrad \cross \BG },
\end{aligned}

but the point of these particular lecture notes is the interface between traditional Gibbs vector calculus and differential forms. That’s a much bigger topic, and perhaps not what I was actually being asked about. It is, however, an interesting topic, so let’s explore it.

## Comparisons.

The article identifies the cross product representation of the curl $$\spacegrad \cross \BF$$ as the equivalent of the exterior derivative of a one form (one that has been mapped to a vector function.) In geometric algebra, this isn’t the identification we would use. Instead, we should identify the “bivector curl” $$\spacegrad \wedge \BF$$ as the logical equivalent of the exterior derivative of that one form, and in general identify $$\spacegrad \wedge A_k$$ as the exterior derivative of a k-form (k-blade). In the notes to follow, the wedge of the gradient with a function will be called the curl of that function, even when we are operating in $$\mathbb{R}^3$$ where the cross product is defined.

The starting place of the article was to define a zero form and its exterior derivative, essentially as follows.

## Definition 1.1: The exterior derivative of a zero form.

Let $$f : \mathbb{R}^N \rightarrow \mathbb{R}$$ be a zero form. Its exterior derivative is
\begin{equation*}
df = \sum_i dx_i \PD{x_i}{f}.
\end{equation*}

I’ve stated that the GA equivalent of the exterior derivative is a curl $$\spacegrad \wedge A$$, and this doesn’t look anything like a curl, so right away we have some trouble to deal with. To resolve that trouble, let’s step back to the gradient, which we haven’t defined yet. In the article, the gradient (of a scalar function) was defined as a coordinate triplet
\label{eqn:formAndCurl:60}
\spacegrad f = \lr{ \PD{x}{f}, \PD{y}{f}, \PD{z}{f} }.

In GA we don’t like representations where the basis vectors are implicit, so we’d prefer to define

## Definition 1.2: The gradient of a function.

We define the gradient of a multivector function $$f(x_1, x_2, \cdots, x_N)$$, and denote it by $$\spacegrad f$$
\begin{equation*}
\spacegrad f = \sum_{i=1}^N \Be_i \PD{x_i}{f},
\end{equation*}
where $$\setlr{ \Be_1, \cdots, \Be_N }$$ is an orthonormal basis for $$\mathbb{R}^N$$.

Unlike the article, we do not restrict $$f$$ to be a scalar function, since we do not have a problem with a vector valued operator directly multiplying a vector or any product of vectors. Instead $$f$$ can be a multivector function, with scalar, vector, bivector, trivector, … components, and we define the gradient the same way.

In order to define the curl of a k-blade, we need a reminder of how we define the wedge of a vector with a k-blade. Recall that this is how we generally define the wedge between two blades.

## Definition 1.3: The wedge product of two blades.

Let $$A_r$$ be an r-blade, and $$B_s$$ an s-blade. The wedge of $$A_r$$ with $$B_s$$ is
\label{eqn:formAndCurl:120}
A_r \wedge B_s = \gpgrade{A_r B_s}{r+s}.

In particular, if $$\Ba$$ is a vector, then the wedge with an s-blade $$B_s$$ is
\label{eqn:formAndCurl:140}
\Ba \wedge B_s = \gpgrade{\Ba B_s}{s+1},

which is just the $$s+1$$ grade selection of their product. Furthermore, if $$f$$ is a scalar, then
\label{eqn:formAndCurl:160}
\Ba \wedge f = \gpgrade{\Ba f}{1} = \Ba f.
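None of this is from the article; the following is just a computational spot check of the grade selection definitions above. It is a minimal, illustration-only $$\mathbb{R}^3$$ geometric algebra sketch in Python, using a (hypothetical) bitmask representation of basis blades:

```python
# Minimal R^3 geometric algebra sketch (illustration only, not a library.)
# A multivector is a dict: basis-blade bitmask -> coefficient, where bit i
# set means the blade contains e_{i+1} (so 0b011 represents e1 e2).

def blade_sign(a, b):
    """Sign from reordering the product of basis blades a and b into
    canonical order, for an orthonormal Euclidean basis."""
    swaps = 0
    a >>= 1
    while a:
        swaps += bin(a & b).count('1')
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(A, B):
    """Geometric product of two multivectors."""
    out = {}
    for ea, ca in A.items():
        for eb, cb in B.items():
            k = ea ^ eb  # e_i e_i = 1 annihilates repeated vectors
            out[k] = out.get(k, 0) + blade_sign(ea, eb) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def grade_select(A, r):
    """<A>_r : keep only the grade r terms."""
    return {k: v for k, v in A.items() if bin(k).count('1') == r}

def grade(A):
    """Grade of a blade (0 for an empty or scalar multivector)."""
    return max((bin(k).count('1') for k in A), default=0)

def wedge(A, B):
    """Definition 1.3: A_r ^ B_s = <A_r B_s>_{r+s}."""
    return grade_select(gp(A, B), grade(A) + grade(B))

e1, e2, e3 = {0b001: 1}, {0b010: 1}, {0b100: 1}

assert wedge(e1, e2) == {0b011: 1}             # e1 ^ e2 = e1 e2
assert wedge(e1, e1) == {}                     # e1 ^ e1 = 0
assert wedge(wedge(e1, e2), e3) == {0b111: 1}  # trivector e1 e2 e3
a = {0b001: 1, 0b010: 2}                       # a = e1 + 2 e2
assert wedge(a, wedge(e1, e2)) == {}           # a in the plane of e1 ^ e2
```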

We can now define the curl of a k-blade.

## Definition 1.4: Curl of a k-blade.

Let $$A_k$$ be a k-blade. We define the curl of a k-blade as the wedge product of the gradient with that k-blade, designated
\begin{equation*}
\spacegrad \wedge A_k = \gpgrade{ \spacegrad A_k }{k+1}.
\end{equation*}

Observe, given our generalized wedge product definition above, that the curl of a scalar function $$f$$ is in fact just the gradient of that function
\label{eqn:formAndCurl:200}
\spacegrad \wedge f = \gpgrade{ \spacegrad f }{1} = \spacegrad f = \sum_i \Be_i \PD{x_i}{f}.
This has exactly the structure of the exterior derivative of a zero form, as stated in “Definition: The exterior derivative of a zero form”, but we have replaced $$dx_i$$ with a basis vector $$\Be_i$$.

## Definition 1.5: Exterior derivative of a one-form.

Let $$\omega = f_i dx_i$$ be a one-form. The exterior derivative $$d \omega$$ is
\begin{equation*}
d\omega = \sum_i d( f_i ) \wedge dx_i.
\end{equation*}

## Lemma 1.1: Exterior derivative of a one-form.

Let $$\omega = f_i dx_i$$ be a one-form. The exterior derivative $$d \omega$$ can be expanded into a Jacobian form
\begin{equation*}
d\omega
=
\sum_{i < j} \lr{
\PD{x_i}{f_j}
-
\PD{x_j}{f_i}
} dx_i \wedge dx_j.
\end{equation*}

### Start proof:

\label{eqn:formAndCurl:220}
\begin{aligned}
d\omega
&= \sum_j d( f_j dx_j ) \\
&= \sum_j d( f_j ) \wedge dx_j \\
&= \sum_j \lr{ \sum_i dx_i \PD{x_i}{f_j} } \wedge dx_j \\
&= \sum_{ij} \PD{x_i}{f_j} dx_i \wedge dx_j \\
&= \sum_{j \ne i} \PD{x_i}{f_j} dx_i \wedge dx_j \\
&=
\sum_{i < j} \PD{x_i}{f_j} dx_i \wedge dx_j
+
\sum_{j < i} \PD{x_i}{f_j} dx_i \wedge dx_j \\
&=
\sum_{i < j} \PD{x_i}{f_j} dx_i \wedge dx_j
+
\sum_{i < j} \PD{x_j}{f_i} dx_j \wedge dx_i \\
&=
\sum_{i < j} \lr{
\PD{x_i}{f_j}
-
\PD{x_j}{f_i}
} dx_i \wedge dx_j.
\end{aligned}

### End proof.

## Lemma 1.2: Curl of a vector.

Let $$\Bf = \sum_i \Be_i f_i \in \mathbb{R}^N$$ be a vector. The curl of $$\Bf$$ has a Jacobian structure
\begin{equation*}
\spacegrad \wedge \Bf
=
\sum_{i < j}
\lr{ \PD{x_i}{f_j} - \PD{x_j}{f_i} }
\lr{ \Be_i \wedge \Be_j }.
\end{equation*}

### Start proof:

The antisymmetry of the wedges of differentials in the exterior derivative clearly has a one to one correspondence with the curl. Let’s show this explicitly by expansion.
\label{eqn:formAndCurl:240}
\begin{aligned}
\spacegrad \wedge \Bf
&=
\sum_{ij} \lr{ \Be_i \PD{x_i}{} } \wedge \lr{ \Be_j f_j } \\
&=
\sum_{ij} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j} \\
&=
\sum_{i \ne j} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j} \\
&=
\sum_{i < j} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j}
+
\sum_{j < i} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j} \\
&=
\sum_{i < j} \lr{ \Be_i \wedge \Be_j } \PD{x_i}{f_j}
+
\sum_{i < j} \lr{ \Be_j \wedge \Be_i } \PD{x_j}{f_i} \\
&=
\sum_{i < j} \lr{ \Be_i \wedge \Be_j } \lr{ \PD{x_i}{f_j} - \PD{x_j}{f_i} }.
\end{aligned}

### End proof.

If we are translating from differential forms, again, we see that we simply replace any differentials $$dx_i$$ with the basis vectors $$\Be_i$$ (at least for the zero-form and one-form cases, which is all that we have looked at here.)
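As a numeric sanity check of this correspondence (a throwaway sketch with an arbitrarily chosen field, not anything from the article), we can compute the $$\Be_i \wedge \Be_j$$ coefficients of $$\spacegrad \wedge \BF$$ by central differences and compare them against the components of the conventional cross product curl, using $$\spacegrad \wedge \BF = I \lr{ \spacegrad \cross \BF }$$:

```python
# Numerical check (R^3): the e_i ^ e_j coefficient of grad ^ F is
# dF_j/dx_i - dF_i/dx_j, and up to duality these coefficients are the
# components of the conventional curl.  The field F is arbitrary.

def F(x, y, z):
    return (x * y * z, x + y * y, z * x)

def jacobian(f, p, h=1e-6):
    """J[i][j] = dF_j/dx_i by central differences."""
    J = []
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        J.append([(u - v) / (2 * h) for u, v in zip(f(*q1), f(*q2))])
    return J

p = (0.3, -0.7, 1.1)
J = jacobian(F, p)
# bivector curl coefficients, from Lemma 1.2:
curl_wedge = {(i, j): J[i][j] - J[j][i] for i in range(3) for j in range(i + 1, 3)}
# conventional cross product curl:
curl_cross = (J[1][2] - J[2][1], J[2][0] - J[0][2], J[0][1] - J[1][0])

# grad ^ F = I (grad x F), so the components correspond as follows:
assert abs(curl_wedge[(0, 1)] - curl_cross[2]) < 1e-6
assert abs(curl_wedge[(0, 2)] + curl_cross[1]) < 1e-6
assert abs(curl_wedge[(1, 2)] - curl_cross[0]) < 1e-6
```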

Note that in differential forms, we often assume that there is an implicit wedge product between any distinct one form elements, writing
\label{eqn:formAndCurl:260}
dx_1 \wedge dx_2 = dx_1 dx_2.

This works out fine when we map differentials to basis vectors, since
\label{eqn:formAndCurl:280}
\Be_1 \Be_2 =
\Be_1 \cdot \Be_2
+
\Be_1 \wedge \Be_2
=
\Be_1 \wedge \Be_2.

However, we have to be more careful in GA when using indexed expressions, since
\label{eqn:formAndCurl:300}
\Be_i \Be_j = \Be_i \cdot \Be_j + \Be_i \wedge \Be_j.

The dot product portion of the RHS is only zero if $$i \ne j$$.

Now let’s look at the equivalence between the exterior derivative of a two-form with the curl.

## Definition 1.6: Exterior derivative of a two-form.

Let $$\eta = \sum_{ij} f_{ij} dx_i \wedge dx_j$$ be a two-form. The exterior derivative of $$\eta$$ is
\begin{equation*}
d\eta =
\sum_{ij} d( f_{ij} ) \wedge dx_i \wedge dx_j.
\end{equation*}

## Lemma 1.3: Exterior derivative of a two-form.

Let $$\eta = \sum_{ij} f_{ij} dx_i \wedge dx_j$$ be a two-form. The exterior derivative of $$\eta$$ can be expanded as
\begin{equation*}
d \eta
=
\sum_{i,j,k} \PD{x_k}{f_{ij}} dx_i \wedge dx_j \wedge dx_k.
\end{equation*}

### Start proof:

The exterior derivative of $$\eta$$ is
\label{eqn:formAndCurl:340}
\begin{aligned}
d \eta
&=
\sum_{i,j} d( f_{ij} dx_i \wedge dx_j ) \\
&=
\sum_{i,j,k} \lr{ \PD{x_k}{f_{ij}} dx_k } \wedge dx_i \wedge dx_j \\
&=
\sum_{i,j,k} \PD{x_k}{f_{ij}} dx_i \wedge dx_j \wedge dx_k.
\end{aligned}

### End proof.

Let’s compare that to the curl of a bivector valued function.

## Lemma 1.4: Curl of a 2-blade.

Let $$B = \sum_{i \ne j} f_{ij} \Be_i \wedge \Be_j$$ be a 2-blade. The curl of $$B$$ is
\begin{equation*}
\spacegrad \wedge B
=
\sum_{i,j,k} \PD{x_k}{f_{ij}} \Be_i \wedge \Be_j \wedge \Be_k.
\end{equation*}

### Start proof:

\label{eqn:formAndCurl:380}
\begin{aligned}
\spacegrad \wedge B
&=
\lr{ \sum_k \Be_k \PD{x_k}{} } \wedge \lr{ \sum_{i \ne j} f_{ij} \Be_i \wedge \Be_j } \\
&=
\sum_{k, i \ne j} \PD{x_k}{f_{ij}} \Be_k \wedge \Be_i \wedge \Be_j \\
&=
\sum_{i,j,k} \PD{x_k}{f_{ij}} \Be_i \wedge \Be_j \wedge \Be_k.
\end{aligned}

### End proof.

Again, we see an exact correspondence with the exterior derivative $$d \eta$$ of a two-form, and the curl $$\spacegrad \wedge B$$, of a 2-blade.

The article established a correspondence between the exterior derivative of a two-form over $$\mathbb{R}^3$$ and the divergence. The way we would express this in GA (also for $$\mathbb{R}^3$$) is to write
\label{eqn:formAndCurl:400}
B = I \Bb,

where $$I = \Be_1 \Be_2 \Be_3$$ is the $$\mathbb{R}^3$$ pseudoscalar (a “unit” trivector.) Forming the curl of $$B$$, we have
\label{eqn:formAndCurl:420}
\begin{aligned}
\spacegrad \wedge B
&= \gpgrade{ \spacegrad I \Bb }{3} \\
&= I \gpgradezero{ \spacegrad \Bb } \\
&= I \lr{ \spacegrad \cdot \Bb }.
\end{aligned}

The equivalence relationships that we have developed must then imply that the differential forms representation of this relationship is
\label{eqn:formAndCurl:440}
d B = dx_1 \wedge dx_2 \wedge dx_3 (\spacegrad \cdot \Bb)
= dx \wedge dy \wedge dz \lr{ \PD{x}{b_1} + \PD{y}{b_2} + \PD{z}{b_3} },

as defined in the article.

Here is the GA equivalent of Lemma 4.4.10 from the article.

## Lemma 1.5: Repeated curl identities.

Let $$A$$ be a smooth k-blade, then
\begin{equation*}
\spacegrad \wedge \lr{ \spacegrad \wedge A } = 0.
\end{equation*}
For $$\mathbb{R}^3$$, this result can be stated for a scalar function $$f$$, and a vector function $$\Bf$$, in terms of the cross product, as
\label{eqn:formAndCurl:560}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad f } &= 0 \\
\spacegrad \cdot \lr{ \spacegrad \cross \Bf } &= 0.
\end{aligned}

It shouldn’t be surprising that this is the equivalent of $$d^2 A = 0$$ from differential forms. Let’s prove it, first considering the 0-blade case.

### Start proof:

\label{eqn:formAndCurl:480}
\begin{aligned}
\spacegrad \wedge \lr{ \spacegrad \wedge A }
&=
\spacegrad \wedge \lr{ \spacegrad A } \\
&=
\sum_{ij} \Be_i \wedge \Be_j \frac{\partial^2 A}{\partial x_i \partial x_j} \\
&= 0.
\end{aligned}

The smoothness requirement for the function $$A$$ is assumed to imply that we have equality of mixed partials. Since this is a sum of terms antisymmetric in the indexes $$i, j$$ (the wedge) times terms symmetric in $$i, j$$ (the partials), the total is zero.
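The symmetric times antisymmetric cancellation used here is easy to spot check numerically (an illustrative toy, with arbitrarily chosen matrices):

```python
# Any sum sum_{ij} S_ij A_ij vanishes when S is symmetric (like the mixed
# partials of a smooth function) and A is antisymmetric (like the
# coefficients of e_i ^ e_j).
N = 4
S = [[(i + 1) * (j + 1) for j in range(N)] for i in range(N)]  # S_ij = S_ji
A = [[i - j for j in range(N)] for i in range(N)]              # A_ij = -A_ji
assert all(S[i][j] == S[j][i] for i in range(N) for j in range(N))
assert all(A[i][j] == -A[j][i] for i in range(N) for j in range(N))
assert sum(S[i][j] * A[i][j] for i in range(N) for j in range(N)) == 0
```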

Now consider a k-blade $$A, k > 0$$. Expanding the gradients, we have
\label{eqn:formAndCurl:500}
\spacegrad \wedge \lr{ \spacegrad \wedge A }
=
\sum_{ij} \Be_i \wedge \Be_j \wedge \frac{\partial^2 A}{\partial x_i \partial x_j}.

It may be obvious that this is zero for the same reasons as above (sum of product of symmetric and antisymmetric entities). We can, however, make it more obvious, at the cost of some hellish indexing, by expressing $$A$$ in coordinate form. Let
\label{eqn:formAndCurl:520}
A = \sum_{i_1, i_2, \cdots, i_k}
A_{i_1, i_2, \cdots, i_k} \Be_{i_1} \wedge \Be_{i_2} \wedge \cdots \wedge \Be_{i_k},

then
\label{eqn:formAndCurl:540}
\begin{aligned}
\spacegrad \wedge \lr{ \spacegrad \wedge A }
&=
\sum_{i,j,i_1, i_2, \cdots, i_k} \Be_i \wedge \Be_j \wedge \Be_{i_1} \wedge \Be_{i_2} \wedge \cdots \wedge \Be_{i_k}
\frac{\partial^2 }{\partial x_i \partial x_j} A_{i_1, i_2, \cdots, i_k} \\
&= 0.
\end{aligned}

Now we clearly have a sum of an antisymmetric term (the wedges), and a symmetric term (assuming smooth $$A$$ means that we have equality of mixed partials), so the sum is zero.

Finally, for the $$\mathbb{R}^3$$ identities, we have
\label{eqn:formAndCurl:580}
\begin{aligned}
\spacegrad \cross \lr{ \spacegrad f }
&=
-I \lr{ \spacegrad \wedge \lr{ \spacegrad f } } \\
&=
0,
\end{aligned}

since $$\spacegrad \wedge \lr{ \spacegrad f } = 0$$. For a vector $$\Bf$$, we have
\label{eqn:formAndCurl:600}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \cross \Bf }
&=
\gpgradezero{
\spacegrad \lr{ \spacegrad \cross \Bf }
} \\
&=
-\gpgradezero{
\spacegrad I \lr{ \spacegrad \wedge \Bf }
} \\
&=
-\gpgradezero{
I \lr{
\spacegrad \cdot \lr{ \spacegrad \wedge \Bf }
+
\spacegrad \wedge \lr{ \spacegrad \wedge \Bf }
}
} \\
&=
-\gpgradezero{
I \spacegrad \wedge \lr{ \spacegrad \wedge \Bf }
} \\
&= 0,
\end{aligned}

again, because $$\spacegrad \wedge \lr{ \spacegrad \wedge \Bf} = 0$$.

### End proof.

## Identities.

We have a number of chain rule identities in the article. Here are the GA equivalents, and their corollaries.

## Lemma 1.6: Chain rule identities.

Let $$f$$ be a scalar function and $$A$$ be a k-blade, then
\begin{equation*}
\spacegrad \lr{ f A } = \lr{ \spacegrad f } A + f \lr{ \spacegrad A }.
\end{equation*}
For $$A$$ with grade $$k > 0$$, the grade $$k-1$$ and $$k+1$$ components of this product are
\begin{equation*}
\begin{aligned}
\spacegrad \cdot \lr{ f A } &= \lr{ \spacegrad f } \cdot A + f \lr{ \spacegrad \cdot A } \\
\spacegrad \wedge \lr{ f A } &= \lr{ \spacegrad f } \wedge A + f \lr{ \spacegrad \wedge A }.
\end{aligned}
\end{equation*}
For $$\mathbb{R}^3$$, the wedge product relation above can be written in dual form as
\begin{equation*}
\spacegrad \cross \lr{ f A } = \lr{ \spacegrad f } \cross A + f \lr{ \spacegrad \cross A }.
\end{equation*}

Proving this is left to the reader.

We have some chain rule identities left in the article to verify, and to find GA equivalents of. Before doing so, we need a couple of miscellaneous identities relating triple cross products to wedge-dots.

## Lemma 1.7: Triple cross products.

Let $$\Ba, \Bb, \Bc$$ be vectors in $$\mathbb{R}^3$$. Then
\begin{equation*}
\begin{aligned}
\Ba \cross \lr{ \Bb \cross \Bc } &= - \Ba \cdot \lr{ \Bb \wedge \Bc } \\
\lr{ \Ba \cross \Bb } \cross \Bc &= - \lr{ \Ba \wedge \Bb } \cdot \Bc.
\end{aligned}
\end{equation*}

### Start proof:

\label{eqn:formAndCurl:720}
\begin{aligned}
\Ba \cross \lr{ \Bb \cross \Bc }
&=
\gpgradeone{ -I \lr{ \Ba \wedge \lr{ \Bb \cross \Bc } } } \\
&=
\gpgradeone{ -I \lr{ \Ba \lr{ \Bb \cross \Bc } } } \\
&=
\gpgradeone{ (-I)^2 \lr{ \Ba \lr{ \Bb \wedge \Bc } } } \\
&=
-\Ba \cdot \lr{ \Bb \wedge \Bc },
\end{aligned}

\label{eqn:formAndCurl:740}
\begin{aligned}
\lr{ \Ba \cross \Bb } \cross \Bc
&=
\gpgradeone{ -I \lr{ \Ba \cross \Bb } \wedge \Bc } \\
&=
\gpgradeone{ -I \lr{ \Ba \cross \Bb } \Bc } \\
&=
\gpgradeone{ (-I)^2 \lr{ \Ba \wedge \Bb } \Bc } \\
&=
- \lr{ \Ba \wedge \Bb } \cdot \Bc.
\end{aligned}

### End proof.
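A quick numeric check of both identities (sample vectors chosen arbitrarily), using the expansions $$\Ba \cdot \lr{ \Bb \wedge \Bc } = \lr{\Ba \cdot \Bb} \Bc - \lr{\Ba \cdot \Bc} \Bb$$ and $$\lr{ \Ba \wedge \Bb } \cdot \Bc = \Ba \lr{\Bb \cdot \Bc} - \Bb \lr{\Ba \cdot \Bc}$$ for the wedge-dot sides:

```python
# Check: a x (b x c) = -a . (b ^ c) and (a x b) x c = -(a ^ b) . c,
# with the vector-bivector dot products expanded per the formulas above.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

a, b, c = (1.0, 2.0, 3.0), (-2.0, 0.5, 1.0), (0.3, -1.0, 2.0)

# -a . (b ^ c) = (a . c) b - (a . b) c
lhs1 = cross(a, cross(b, c))
rhs1 = tuple(dot(a, c) * bi - dot(a, b) * ci for bi, ci in zip(b, c))
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs1, rhs1))

# -(a ^ b) . c = (a . c) b - (b . c) a
lhs2 = cross(cross(a, b), c)
rhs2 = tuple(dot(a, c) * bi - dot(b, c) * ai for ai, bi in zip(a, b))
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs2, rhs2))
```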

Next up is another chain rule identity.

## Lemma 1.8: Gradient of dot product.

If $$\Ba, \Bb$$ are vectors, then
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb } =
\lr{ \Ba \cdot \spacegrad } \Bb
+
\lr{ \Bb \cdot \spacegrad } \Ba
+
\lr{ \spacegrad \wedge \Bb }
\cdot
\Ba
+
\lr{ \spacegrad \wedge \Ba }
\cdot
\Bb.
\end{equation*}
For $$\mathbb{R}^3$$, this can be written as
\begin{equation*}
\spacegrad \lr{ \Ba \cdot \Bb }
=
\lr{ \Ba \cdot \spacegrad } \Bb
+
\lr{ \Bb \cdot \spacegrad } \Ba
+
\Ba \cross \lr{ \spacegrad \cross \Bb }
+
\Bb \cross \lr{ \spacegrad \cross \Ba }
\end{equation*}

### Start proof:

We will use $$\rspacegrad$$ to indicate that the gradient operates on everything to the right, $$\lrspacegrad$$ to indicate that the gradient operates bidirectionally, and $$\spacegrad' A B'$$ to indicate that the gradient’s scope is limited to the ticked entity (just on $$B$$ in this case.)
\label{eqn:formAndCurl:760}
\begin{aligned}
\rspacegrad \lr{ \Ba \cdot \Bb }
&=
\rspacegrad \lr{ \Ba \Bb - \Ba \wedge \Bb } \\
&=
\gpgradeone{
\spacegrad' \Ba' \Bb
+
\spacegrad' \Ba \Bb'
}
- \rspacegrad \cdot \lr{ \Ba \wedge \Bb }
\\
&=
\lr{ \spacegrad \cdot \Ba } \Bb
+
\lr{ \spacegrad \wedge \Ba} \cdot \Bb
+
\gpgradeone{
- \Ba \spacegrad \Bb + 2 \lr{ \Ba \cdot \spacegrad } \Bb
}
- \spacegrad' \cdot \lr{ \Ba' \wedge \Bb }
- \spacegrad' \cdot \lr{ \Ba \wedge \Bb' }
\\
&=
\lr{ \spacegrad \cdot \Ba } \Bb
+
\lr{ \spacegrad \wedge \Ba} \cdot \Bb
-
\Ba \lr{ \spacegrad \cdot \Bb }
-
\Ba \cdot \lr{ \spacegrad \wedge \Bb }
+ 2 \lr{ \Ba \cdot \spacegrad } \Bb
- \spacegrad' \cdot \lr{ \Ba' \wedge \Bb }
- \spacegrad' \cdot \lr{ \Ba \wedge \Bb' }.
\end{aligned}

We are running out of room, and have not had any cancellation yet, so let’s expand those last two terms separately
\label{eqn:formAndCurl:780}
\begin{aligned}
- \spacegrad' \cdot \lr{ \Ba' \wedge \Bb }
- \spacegrad' \cdot \lr{ \Ba \wedge \Bb' }
&=
- \lr{ \spacegrad' \cdot \Ba' } \Bb
+ \lr{ \spacegrad' \cdot \Bb } \Ba'
- \lr{ \spacegrad' \cdot \Ba } \Bb'
+ \lr{ \spacegrad' \cdot \Bb' } \Ba
\\
&=
- \lr{ \spacegrad \cdot \Ba } \Bb
+ \lr{ \Bb \cdot \spacegrad } \Ba
- \lr{ \Ba \cdot \spacegrad } \Bb
+ \lr{ \spacegrad \cdot \Bb } \Ba.
\end{aligned}

Now we can cancel some terms, leaving
\label{eqn:formAndCurl:800}
\begin{aligned}
\rspacegrad \lr{ \Ba \cdot \Bb }
&=
\lr{ \spacegrad \wedge \Ba} \cdot \Bb
-
\Ba \cdot \lr{ \spacegrad \wedge \Bb }
+ \lr{ \Ba \cdot \spacegrad } \Bb
+ \lr{ \Bb \cdot \spacegrad } \Ba.
\end{aligned}

After adjustment of the order and sign of the second term, we see that this is the result we wanted. To show the $$\mathbb{R}^3$$ formulation, we have only to apply “Lemma: Triple cross products”.

### End proof.
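The $$\mathbb{R}^3$$ form of this lemma is easy to verify numerically. This is a throwaway central difference check with arbitrarily chosen fields, not part of the derivation:

```python
# Check: grad(a . b) = (a . grad) b + (b . grad) a + a x (grad x b) + b x (grad x a)
# for sample vector fields a(x,y,z), b(x,y,z), using central differences.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def A(x, y, z):
    return (y * z, x * x, z)

def B(x, y, z):
    return (x + z, y * z, x * y)

def jacobian(f, p, h=1e-6):
    """J[i][j] = df_j/dx_i by central differences."""
    J = []
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        J.append([(u - v) / (2 * h) for u, v in zip(f(*q1), f(*q2))])
    return J

def grad_scalar(f, p, h=1e-6):
    g = []
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        g.append((f(*q1) - f(*q2)) / (2 * h))
    return g

p = (0.4, -0.2, 0.9)
a, b = A(*p), B(*p)
Ja, Jb = jacobian(A, p), jacobian(B, p)
curl = lambda J: (J[1][2] - J[2][1], J[2][0] - J[0][2], J[0][1] - J[1][0])

lhs = grad_scalar(lambda x, y, z: sum(u * v for u, v in zip(A(x, y, z), B(x, y, z))), p)
adgb = [sum(a[i] * Jb[i][j] for i in range(3)) for j in range(3)]  # (a . grad) b
bdga = [sum(b[i] * Ja[i][j] for i in range(3)) for j in range(3)]  # (b . grad) a
axcb = cross(a, curl(Jb))                                          # a x (grad x b)
bxca = cross(b, curl(Ja))                                          # b x (grad x a)
rhs = [adgb[j] + bdga[j] + axcb[j] + bxca[j] for j in range(3)]
assert all(abs(x - y) < 1e-5 for x, y in zip(lhs, rhs))
```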

## Lemma 1.9: Divergence of a bivector.

Let $$\Ba, \Bb \in \mathbb{R}^N$$ be vectors. The divergence of their wedge can be written
\begin{equation*}
\spacegrad \cdot \lr{ \Ba \wedge \Bb }
=
\Bb \lr{ \spacegrad \cdot \Ba }
- \Ba \lr{ \spacegrad \cdot \Bb }
- \lr{ \Bb \cdot \spacegrad } \Ba
+ \lr{ \Ba \cdot \spacegrad } \Bb.
\end{equation*}
For $$\mathbb{R}^3$$, this can also be written in triple cross product form
\begin{equation*}
\spacegrad \cdot \lr{ \Ba \wedge \Bb }
=
-\spacegrad \cross \lr{ \Ba \cross \Bb }.
\end{equation*}

### Start proof:

\label{eqn:formAndCurl:860}
\begin{aligned}
\rspacegrad \cdot \lr{ \Ba \wedge \Bb }
&=
\spacegrad' \cdot \lr{ \Ba' \wedge \Bb }
+ \spacegrad' \cdot \lr{ \Ba \wedge \Bb' } \\
&=
\lr{ \spacegrad' \cdot \Ba' } \Bb
- \lr{ \spacegrad' \cdot \Bb } \Ba'
+ \lr{ \spacegrad' \cdot \Ba } \Bb'
- \lr{ \spacegrad' \cdot \Bb' } \Ba
\\
&=
\lr{ \spacegrad \cdot \Ba } \Bb
- \lr{ \Bb \cdot \spacegrad } \Ba
+ \lr{ \Ba \cdot \spacegrad } \Bb
- \lr{ \spacegrad \cdot \Bb } \Ba.
\end{aligned}

For the $$\mathbb{R}^3$$ part of the story, we have
\label{eqn:formAndCurl:870}
\begin{aligned}
\spacegrad \cross \lr{ \Ba \cross \Bb }
&=
\gpgradeone{
-I \lr{ \spacegrad \wedge \lr{ \Ba \cross \Bb } }
} \\
&=
\gpgradeone{
-I \spacegrad \lr{ \Ba \cross \Bb }
} \\
&=
\gpgradeone{
(-I)^2 \spacegrad \lr{ \Ba \wedge \Bb }
} \\
&=
-
\spacegrad \cdot \lr{ \Ba \wedge \Bb }.
\end{aligned}

### End proof.
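The $$\mathbb{R}^3$$ dual form of this lemma says that the four term expansion equals $$-\spacegrad \cross \lr{ \Ba \cross \Bb }$$; here is a quick central difference spot check (fields chosen arbitrarily, illustration only):

```python
# Check: (grad . a) b - (grad . b) a - (b . grad) a + (a . grad) b
#        = - grad x (a x b), for sample fields, by central differences.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def A(x, y, z):
    return (x * y, z, y * z)

def B(x, y, z):
    return (z * z, x, x + y)

def jacobian(f, p, h=1e-6):
    """J[i][j] = df_j/dx_i by central differences."""
    J = []
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        J.append([(u - v) / (2 * h) for u, v in zip(f(*q1), f(*q2))])
    return J

p = (1.2, 0.5, -0.8)
a, b = A(*p), B(*p)
Ja, Jb = jacobian(A, p), jacobian(B, p)
div = lambda J: J[0][0] + J[1][1] + J[2][2]
curl = lambda J: (J[1][2] - J[2][1], J[2][0] - J[0][2], J[0][1] - J[1][0])

lhs = [div(Ja) * b[j] - div(Jb) * a[j]
       - sum(b[i] * Ja[i][j] for i in range(3))
       + sum(a[i] * Jb[i][j] for i in range(3)) for j in range(3)]
Jab = jacobian(lambda x, y, z: cross(A(x, y, z), B(x, y, z)), p)
rhs = [-c for c in curl(Jab)]
assert all(abs(x - y) < 1e-5 for x, y in zip(lhs, rhs))
```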

We have just one identity left in the article to find the GA equivalent of, but will split that into two logical pieces.

## Lemma 1.10: Dual of triple wedge.

If $$\Ba, \Bb, \Bc \in \mathbb{R}^3$$ are vectors, then
\begin{equation*}
\Ba \cdot \lr{ \Bb \cross \Bc } = -I \lr{ \Ba \wedge \Bb \wedge \Bc }.
\end{equation*}

### Start proof:

\label{eqn:formAndCurl:680}
\begin{aligned}
\Ba \cdot \lr{ \Bb \cross \Bc }
&=
\gpgradezero{
\Ba \lr{ \Bb \cross \Bc }
} \\
&=
\gpgradezero{
\Ba (-I) \lr{ \Bb \wedge \Bc }
} \\
&=
\gpgradezero{
-I \lr{
\Ba \cdot \lr{ \Bb \wedge \Bc }
+
\Ba \wedge \lr{ \Bb \wedge \Bc }
}
} \\
&=
\gpgradezero{
-I \lr{ \Ba \wedge \lr{ \Bb \wedge \Bc } }
} \\
&=
-I \lr{ \Ba \wedge \lr{ \Bb \wedge \Bc } }.
\end{aligned}

### End proof.

## Lemma 1.11: Curl of a wedge of gradients (divergence of a cross product of gradients.)

Let $$f, g, h$$ be smooth functions with smooth derivatives. Then
\begin{equation*}
\spacegrad \wedge \lr{ f \lr{ \spacegrad g \wedge \spacegrad h } }
=
\spacegrad f
\wedge
\spacegrad g
\wedge
\spacegrad h.
\end{equation*}
For $$\mathbb{R}^3$$ this can be written as
\begin{equation*}
\spacegrad \cdot \lr{ f \lr{ \spacegrad g \cross \spacegrad h } }
=
\spacegrad f
\cdot
\lr{
\spacegrad g
\cross
\spacegrad h
}.
\end{equation*}

### Start proof:

The GA identity follows by chain rule and application of “Lemma: Repeated curl identities”
\label{eqn:formAndCurl:910}
\begin{aligned}
\spacegrad \wedge \lr{ f \lr{ \spacegrad g \wedge \spacegrad h } }
&=
\lr{ \spacegrad f } \wedge \lr{ \spacegrad g \wedge \spacegrad h }
+
f \lr{ \spacegrad \wedge \lr{ \spacegrad g \wedge \spacegrad h } } \\
&=
\spacegrad f \wedge \spacegrad g \wedge \spacegrad h,
\end{aligned}
where the second term vanishes since $$\spacegrad g \wedge \spacegrad h = \spacegrad \wedge \lr{ g \spacegrad h }$$, so it is a repeated curl $$\spacegrad \wedge \lr{ \spacegrad \wedge \lr{ g \spacegrad h } } = 0$$.

The $$\mathbb{R}^3$$ part of the lemma follows from “Lemma: Dual of triple wedge”, applied twice
\label{eqn:formAndCurl:970}
\begin{aligned}
\spacegrad \cdot \lr{ f \lr{ \spacegrad g \cross \spacegrad h } }
&=
-I \lr{ \spacegrad \wedge \lr{ f \lr{ \spacegrad g \wedge \spacegrad h } } } \\
&=
-I \lr{ \spacegrad f \wedge \spacegrad g \wedge \spacegrad h } \\
&=
\spacegrad f \cdot \lr{ \spacegrad g \cross \spacegrad h }.
\end{aligned}

### End proof.

## Lemma 1.12: Curl of a bivector.

Let $$\Ba, \Bb$$ be vectors. The curl of their wedge is
\begin{equation*}
\spacegrad \wedge \lr{ \Ba \wedge \Bb } = \Bb \wedge \lr{ \spacegrad \wedge \Ba } - \Ba \wedge \lr{ \spacegrad \wedge \Bb }.
\end{equation*}
For $$\mathbb{R}^3$$, this can be expressed as the divergence of a cross product
\begin{equation*}
\spacegrad \cdot \lr{ \Ba \cross \Bb } = \Bb \cdot \lr{ \spacegrad \cross \Ba } - \Ba \cdot \lr{ \spacegrad \cross \Bb }.
\end{equation*}

### Start proof:

The GA case is a trivial chain rule application
\label{eqn:formAndCurl:950}
\begin{aligned}
\rspacegrad \wedge \lr{ \Ba \wedge \Bb }
&=
\lr{ \spacegrad' \wedge \Ba' } \wedge \Bb
+
\lr{ \spacegrad' \wedge \Ba } \wedge \Bb' \\
&= \Bb \wedge \lr{ \spacegrad \wedge \Ba } - \Ba \wedge \lr{ \spacegrad \wedge \Bb }.
\end{aligned}

The $$\mathbb{R}^3$$ case is less obvious by inspection, but follows from “Lemma: Dual of triple wedge”.

### End proof.

## Summary.

We found that we have an isomorphism between the exterior derivative of differential forms and the curl operation of geometric algebra, as follows
\label{eqn:formAndCurl:990}
\begin{aligned}
df &\rightleftharpoons \spacegrad \wedge f \\
dx_i &\rightleftharpoons \Be_i.
\end{aligned}

We didn’t look at how the Hodge dual translates to GA duality (pseudoscalar multiplication.) The correspondence between the exterior derivative of an $$\mathbb{R}^3$$ two-form and the divergence really requires that formalism, and has only been examined in a cursory fashion.

We also translated a number of vector and gradient identities from conventional vector algebra (i.e., using cross products) into wedge product equivalents. The GA identities are often simpler, and in some cases provide nice mechanisms to derive the conventional identities, which would be more cumbersome to determine without the GA toolbox.

# References

[1] Vincent Bouchard. Math 215: Calculus IV: 4.4 The exterior derivative and vector calculus, 2023. URL https://sites.ualberta.ca/~vbouchar/MATH215/section_exterior_vector.html. [Online; accessed 11-November-2023].

## Exact system.

Recall that we can use the wedge product to solve linear systems. For example, assuming that $$\Ba, \Bb$$ are not colinear, the system
\label{eqn:cramersProjection:20}
x \Ba + y \Bb = \Bc,

if it has a solution, can be solved for $$x$$ and $$y$$ by wedging with $$\Bb$$, and $$\Ba$$ respectively.
For example, wedging with $$\Bb$$, from the right, gives
\label{eqn:cramersProjection:40}
x \lr{ \Ba \wedge \Bb } + y \lr{ \Bb \wedge \Bb } = \Bc \wedge \Bb,

but since $$\Bb \wedge \Bb = 0$$, we are left with
\label{eqn:cramersProjection:60}
x \lr{ \Ba \wedge \Bb } = \Bc \wedge \Bb,

and since $$\Ba, \Bb$$ are not colinear, so that $$\Ba \wedge \Bb \ne 0$$, we have
\label{eqn:cramersProjection:80}
x = \inv{ \Ba \wedge \Bb } \Bc \wedge \Bb.

Similarly, we can wedge with $$\Ba$$ (from the left), to find
\label{eqn:cramersProjection:100}
y = \inv{ \Ba \wedge \Bb } \Ba \wedge \Bc.

This works because, if the system has a solution, the bivectors $$\Ba \wedge \Bb$$, $$\Ba \wedge \Bc$$, and $$\Bb \wedge \Bc$$ are all scalar multiples of each other, so we can divide any two of them, and the results must be scalars.
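In $$\mathbb{R}^2$$ the wedge of two vectors has a single component (the pseudoscalar coefficient), so the wedge solution is a two line computation. A quick illustrative sketch, with an arbitrarily chosen system:

```python
# Solve x a + y b = c in R^2 by wedging: x (a ^ b) = c ^ b, y (a ^ b) = a ^ c.
# In R^2 a wedge is just its e1 e2 coefficient, so the bivector ratios are
# ordinary scalar divisions.

def wedge2(u, v):
    return u[0] * v[1] - u[1] * v[0]

a, b = (2.0, 1.0), (1.0, 3.0)
c = (7.0, 11.0)                   # built from x = 2, y = 3

x = wedge2(c, b) / wedge2(a, b)
y = wedge2(a, c) / wedge2(a, b)
assert abs(x - 2.0) < 1e-12 and abs(y - 3.0) < 1e-12
assert all(abs(x * ai + y * bi - ci) < 1e-12 for ai, bi, ci in zip(a, b, c))
```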

## Cramer’s rule.

Incidentally, observe that for $$\mathbb{R}^2$$, this is the “Cramer’s rule” solution to the system, since
\label{eqn:cramersProjection:180}
\Bx \wedge \By = \begin{vmatrix} \Bx & \By \end{vmatrix} \Be_1 \Be_2,

where we are treating $$\Bx$$ and $$\By$$ here as column vectors of the coordinates. This means that, after dividing out the plane pseudoscalar $$\Be_1 \Be_2$$, we have
\label{eqn:cramersProjection:200}
\begin{aligned}
x
&=
\frac{
\begin{vmatrix}
\Bc & \Bb \\
\end{vmatrix}
}{
\begin{vmatrix}
\Ba & \Bb
\end{vmatrix}
} \\
y
&=
\frac{
\begin{vmatrix}
\Ba & \Bc \\
\end{vmatrix}
}{
\begin{vmatrix}
\Ba & \Bb
\end{vmatrix}
}.
\end{aligned}

This follows the usual Cramer’s rule prescription, where we form determinants of the coordinates of the spanning vectors, replace either of the original vectors in the numerator with the target vector (depending on which variable we seek), and then take ratios of the two determinants.

## Least squares solution, using geometry.

Now let’s consider the case where the system \ref{eqn:cramersProjection:20} cannot be solved exactly. Geometrically, the best we can do is to try to solve the related “least squares” problem
\label{eqn:cramersProjection:120}
x \Ba + y \Bb = \Bc_\parallel,

where $$\Bc_\parallel$$ is the projection of $$\Bc$$ onto the plane spanned by $$\Ba, \Bb$$. Regardless of the value of $$\Bc$$, we can always find a solution to this problem. For example, solving for $$x$$, we have
\label{eqn:cramersProjection:160}
\begin{aligned}
x
&= \inv{ \Ba \wedge \Bb } \Bc_\parallel \wedge \Bb \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\parallel \wedge \Bb } \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb } - \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb }.
\end{aligned}

Let’s look at the second term, which can be written
\label{eqn:cramersProjection:140}
\begin{aligned}
- \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb }
&=
- \frac{ \Ba \wedge \Bb }{ \lr{ \Ba \wedge \Bb}^2 } \cdot \lr{ \Bc_\perp \wedge \Bb } \\
&\propto
\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb } \\
&=
\lr{ \lr{ \Ba \wedge \Bb } \cdot \Bc_\perp } \cdot \Bb \\
&=
\lr{ \Ba \lr{ \Bb \cdot \Bc_\perp} - \Bb \lr{ \Ba \cdot \Bc_\perp} } \cdot \Bb \\
&=
0.
\end{aligned}

The zero above follows because $$\Bc_\perp$$ is perpendicular to both $$\Ba$$ and $$\Bb$$ by construction. Geometrically, we are trying to dot two perpendicular bivectors, where $$\Bb$$ is a common factor of those two bivectors, as illustrated in fig. 1.

fig. 1. Perpendicular bivectors.

We see that our least squares solution to this two variable linear system problem is
\label{eqn:cramersProjection:220}
x = \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb }.

\label{eqn:cramersProjection:240}
y = \inv{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc }.

The interesting thing here is how we have connected the optimal solution, the equivalent of a least squares solution (which we could compute with the Moore-Penrose inverse, or with an SVD (Singular Value Decomposition)), with the entirely geometric notion of selecting the portion of the desired solution that lies within the span of the input vectors, provided that the spanning vectors for that hyperplane are linearly independent.
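Using $$\mathbb{R}^3$$ duality ($$\Ba \wedge \Bb = I \lr{ \Ba \cross \Bb }$$, so bivector dot products become negated cross product dot products), the projective solution reduces to cross products. An illustrative sketch, with arbitrary sample vectors and $$\Bc$$ deliberately out of the $$\Ba, \Bb$$ plane:

```python
# Least squares solution of x a + y b ~ c in R^3 via the wedge formulas:
#   x = inv(a ^ b) . (c ^ b),   y = inv(a ^ b) . (a ^ c).
# With the R^3 duality a ^ b = I (a x b), the bivector ratios become
#   x = (a x b) . (c x b) / |a x b|^2, and similarly for y.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

a, b = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
c = (1.0, 2.0, 5.0)               # not in span(a, b)

ab = cross(a, b)
x = dot(ab, cross(c, b)) / dot(ab, ab)
y = dot(ab, cross(a, c)) / dot(ab, ab)

# the residual must be perpendicular to both a and b (normal equations):
r = tuple(ci - x * ai - y * bi for ai, bi, ci in zip(a, b, c))
assert abs(dot(r, a)) < 1e-12 and abs(dot(r, b)) < 1e-12
assert abs(x - (-1.0)) < 1e-12 and abs(y - 2.0) < 1e-12
```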

## Least squares solution, using calculus.

I’ve called the projection solution a least-squares solution, without full justification. Here’s that justification. We define the usual error function, the squared distance of the target from our superposition position in the plane
\label{eqn:cramersProjection:300}
\epsilon = \lr{ \Bc - x \Ba - y \Bb }^2,

and then take partials with respect to $$x, y$$, equating each to zero
\label{eqn:cramersProjection:320}
\begin{aligned}
0 &= \PD{x}{\epsilon} = 2 \lr{ \Bc - x \Ba - y \Bb } \cdot (-\Ba) \\
0 &= \PD{y}{\epsilon} = 2 \lr{ \Bc - x \Ba - y \Bb } \cdot (-\Bb).
\end{aligned}

This is a two equation, two unknown system, which can be expressed in matrix form as
\label{eqn:cramersProjection:340}
\begin{bmatrix}
\Ba^2 & \Ba \cdot \Bb \\
\Ba \cdot \Bb & \Bb^2
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix}
=
\begin{bmatrix}
\Ba \cdot \Bc \\
\Bb \cdot \Bc \\
\end{bmatrix}.

This has solution
\label{eqn:cramersProjection:360}
\begin{bmatrix}
x \\
y
\end{bmatrix}
=
\inv{
\begin{vmatrix}
\Ba^2 & \Ba \cdot \Bb \\
\Ba \cdot \Bb & \Bb^2
\end{vmatrix}
}
\begin{bmatrix}
\Bb^2 & -\Ba \cdot \Bb \\
-\Ba \cdot \Bb & \Ba^2
\end{bmatrix}
\begin{bmatrix}
\Ba \cdot \Bc \\
\Bb \cdot \Bc \\
\end{bmatrix}
=
\frac{
\begin{bmatrix}
\Bb^2 \lr{ \Ba \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Bb \cdot \Bc } \\
\Ba^2 \lr{ \Bb \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Ba \cdot \Bc } \\
\end{bmatrix}
}{
\Ba^2 \Bb^2 - \lr{ \Ba \cdot \Bb }^2
}.

All of these differences can be expressed as wedge dot products, using the following expansions in reverse
\label{eqn:cramersProjection:420}
\begin{aligned}
\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bd }
&=
\Ba \cdot \lr{ \Bb \cdot \lr{ \Bc \wedge \Bd } } \\
&=
\Ba \cdot \lr{ \lr{\Bb \cdot \Bc} \Bd - \lr{\Bb \cdot \Bd} \Bc } \\
&=
\lr{ \Ba \cdot \Bd } \lr{\Bb \cdot \Bc} - \lr{ \Ba \cdot \Bc }\lr{\Bb \cdot \Bd}.
\end{aligned}

We find
\label{eqn:cramersProjection:380}
\begin{aligned}
x
&= \frac{\Bb^2 \lr{ \Ba \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Bb \cdot \Bc }}{-\lr{ \Ba \wedge \Bb }^2 } \\
&= \frac{\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bb \wedge \Bc }}{ -\lr{ \Ba \wedge \Bb }^2 } \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb },
\end{aligned}

and
\label{eqn:cramersProjection:400}
\begin{aligned}
y
&= \frac{\Ba^2 \lr{ \Bb \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Ba \cdot \Bc } }{-\lr{ \Ba \wedge \Bb }^2 } \\
&= \frac{- \lr{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc } }{ -\lr{ \Ba \wedge \Bb }^2 } \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc }.
\end{aligned}

Sure enough, we find what was dubbed our least squares solution, which we now know can be written out as a ratio of (dotted) wedge products.
From \ref{eqn:cramersProjection:340}, it wasn’t obvious that the least squares solution would have an almost Cramer’s rule like structure, but having solved this problem using geometry alone, we knew to expect that. It was therefore natural to write the results in terms of wedge product factors, and find the simplest statement of the end result. That end result reduces to Cramer’s rule for the $$\mathbb{R}^2$$ special case where the system has an exact solution.

## New video: Elliptical motion from Newton’s law of gravitation.

This blog post is a text version of the video below, available in a few forms:

We found previously that
\label{eqn:solarellipse:20}
\mathbf{\hat{r}}’ = \inv{r} \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bx’ }.

Somewhat remarkably, we can use this identity to demonstrate that orbits governed by the gravitational force are elliptical (or parabolic, or hyperbolic.) This ends up being possible because the angular momentum of the system is a conserved quantity, and this identity introduces angular momentum into the mix in a fundamental way. In particular,
\label{eqn:solarellipse:40}
\mathbf{\hat{r}}’ = \inv{m r^2} \mathbf{\hat{r}} L,

where we define the angular momentum bivector as
\label{eqn:solarellipse:60}
L = \Bx \wedge \Bp.

Our gravitational law is
\label{eqn:solarellipse:80}
m \ddt{\Bv} = - G m M \frac{\mathbf{\hat{r}}}{r^2},

or
\label{eqn:solarellipse:100}
-\inv{G M} \ddt{\Bv} = \frac{\mathbf{\hat{r}}}{r^2}.

Combining the gravitational law with our $$\mathbf{\hat{r}}$$ derivative identity, we have
\label{eqn:solarellipse:120}
\begin{aligned}
\ddt{ \mathbf{\hat{r}} }
&= \inv{m} \frac{\mathbf{\hat{r}}}{r^2} L \\
&= -\inv{G m M} \ddt{\Bv} L \\
&= -\inv{G m M} \lr{ \ddt{(\Bv L)} - \Bv \ddt{L} }.
\end{aligned}

Since angular momentum is a constant of motion of the system, we have
\label{eqn:solarellipse:140}
\ddt{L} = 0,

and our equation of motion is integrable
\label{eqn:solarellipse:160}
\ddt{ \mathbf{\hat{r}} } = -\inv{G m M} \ddt{(\Bv L)}.

Introducing a vector valued integration constant $$-\Be$$, we have
\label{eqn:solarellipse:180}
\mathbf{\hat{r}} = -\inv{G m M} \Bv L - \Be.

We’ve transformed our second order differential equation to a first order equation, one that does not look easy to integrate one more time. Luckily, we do not have to integrate, and can partially solve this algebraically, enough to describe the orbit in a compact fashion.

Before trying that, it’s worth quickly demonstrating that this equation is not a multivector equation, but a vector equation, since the multivector $$\Bv L$$ is, in fact, vector valued.
\label{eqn:solarellipse:200}
\begin{aligned}
\Bv L
&= \Bv \lr{ \Bx \wedge (m \Bv) } \\
&\propto \mathbf{\hat{v}} \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } \\
&= \mathbf{\hat{v}} \cdot \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } + \mathbf{\hat{v}} \wedge \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } \\
&= \mathbf{\hat{v}} \cdot \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } \\
&= \lr{ \mathbf{\hat{v}} \cdot \mathbf{\hat{r}} } \mathbf{\hat{v}} - \mathbf{\hat{r}},
\end{aligned}

which is a vector (i.e.: a vector that is directed along the portion of $$\Bx$$ that is perpendicular to $$\Bv$$.)

We can reduce \ref{eqn:solarellipse:180} to a scalar equation by dotting with $$\Bx = r \mathbf{\hat{r}}$$, leaving
\label{eqn:solarellipse:220}
\begin{aligned}
r
&= -\inv{G m M} \gpgradezero{ \Bx \Bv L } - \Bx \cdot \Be \\
&= -\inv{G m^2 M} \gpgradezero{ \Bx \Bp L } - \Bx \cdot \Be \\
&= -\inv{G m^2 M} \gpgradezero{ \lr{ \Bx \cdot \Bp + L } L } - \Bx \cdot \Be \\
&= -\inv{G m^2 M} L^2 - \Bx \cdot \Be,
\end{aligned}

or
\label{eqn:solarellipse:240}
r = -\frac{L^2}{G M m^2} - r e \cos\theta,

or
\label{eqn:solarellipse:260}
r \lr{ 1 + e \cos\theta } = -\frac{L^2}{G M m^2}.

Observe that the RHS is a positive constant, since $$L^2 \le 0$$. This has the structure of a conic section, if we write
\label{eqn:solarellipse:280}
-\frac{L^2}{G M m^2} = e d.

This is an ellipse, for $$e \in [0,1)$$, a parabola for $$e = 1$$, and hyperbola for $$e > 1$$ ([1] theorem 10.3.1).

fig. 1. Ellipse with e = 0.75

In fig. 1 is a plot with $$e = 0.75$$ (changing $$d$$ doesn’t change the shape of the figure, just the size.)
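As a spot check that the polar form $$r = ed/(1 + e \cos\theta)$$ really is an ellipse for $$e < 1$$, we can sample it and verify that the points satisfy the Cartesian ellipse equation with one focus at the origin. The parameter values here are arbitrary, and not taken from the figure:

```python
# Focus-directrix sanity check: points of r(theta) = ell/(1 + e cos(theta)),
# 0 <= e < 1, lie on an ellipse with semi-major axis a = ell/(1 - e^2),
# semi-minor axis b = ell/sqrt(1 - e^2), centered at (-a e, 0), focus at origin.
import math

e, ell = 0.75, 2.0                # eccentricity, semi-latus rectum ell = e d
a = ell / (1 - e * e)
b = ell / math.sqrt(1 - e * e)
cdist = a * e                     # focus-to-center distance

for k in range(24):
    theta = 2 * math.pi * k / 24
    r = ell / (1 + e * math.cos(theta))
    X, Y = r * math.cos(theta), r * math.sin(theta)
    assert abs(((X + cdist) / a) ** 2 + (Y / b) ** 2 - 1) < 1e-12
```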

# References

[1] S.L. Salas and E. Hille. Calculus: one and several variables. Wiley New York, 1990.

## New video: Velocity and angular momentum with geometric algebra

In this video, we compute velocity in a radial representation $$\mathbf{x} = r \mathbf{\hat{r}}$$.

We use a scalar radial coordinate $$r$$, and leave all the angular dependence implicitly encoded in a radial unit vector $$\mathbf{\hat{r}}$$.

We find the geometric algebra structure of $$\mathbf{\hat{r}}'$$ in two different ways, to find

$$\mathbf{\hat{r}}’ = \frac{\mathbf{\hat{r}}}{r} \left( \mathbf{\hat{r}} \wedge \mathbf{\hat{x}}’ \right),$$

then derive the conventional triple vector cross product equivalent for reference:

$$\mathbf{\hat{r}}’ = \left( \mathbf{\hat{r}} \times \mathbf{\hat{x}}’ \right) \times \frac{\mathbf{\hat{r}}}{r}.$$

We then compute kinetic energy in this representation, and show how a bivector-valued angular momentum $$L = \mathbf{x} \wedge \mathbf{p}$$, falls naturally from that computation, where we have

$$\frac{m}{2} \mathbf{v}^2 = \frac{1}{2 m} {(m r’)}^2 – \frac{1}{2 m r^2 } L^2.$$
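The kinetic energy split quoted above is easy to verify numerically for a sample planar trajectory, since for planar motion the bivector square is negative, $$L^2 = -(m r^2 \dot{\phi})^2$$. This is an illustrative sketch with an arbitrarily chosen trajectory, not from the video:

```python
# Check: (m/2) v^2 = (1/2m)(m r')^2 - (1/(2 m r^2)) L^2 for a sample planar
# trajectory x(t) = r(t) (cos(phi(t)), sin(phi(t))), where the angular
# momentum bivector satisfies L^2 = -(m r^2 phi')^2.
import math

m = 2.0
r = lambda t: 1.0 + 0.1 * t * t
phi = lambda t: 0.5 * t

def deriv(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

t = 1.3
x = lambda s: (r(s) * math.cos(phi(s)), r(s) * math.sin(phi(s)))
vx = deriv(lambda s: x(s)[0], t)
vy = deriv(lambda s: x(s)[1], t)
v2 = vx * vx + vy * vy

rp = deriv(r, t)
phip = deriv(phi, t)
L2 = -(m * r(t) ** 2 * phip) ** 2      # bivector square is negative

lhs = 0.5 * m * v2
rhs = (m * rp) ** 2 / (2 * m) - L2 / (2 * m * r(t) ** 2)
assert abs(lhs - rhs) < 1e-6
```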

Prerequisites: calculus (derivatives and chain rule), and geometric algebra basics (vector multiplication, commutation relationships for vectors and bivectors in a plane, wedge and cross product equivalencies, …)

Errata: at around 4:12 I used $$\mathbf{r}$$ instead of $$\mathbf{x}$$, then kept doing so every time after that when the value for $$L$$ was stated.

As well as being posted to Google’s censorship-tube, this video can also be found on odysee.

## Video: Spherical basis vectors in geometric algebra

I’ve made a new manim-based video with a geometric algebra application.

In the video, the geometric algebra form for the spherical unit vectors are derived, then unpacked to find the conventional vector algebra form. We will then use our new tools to find the expression for the kinetic energy of a particle in spherical coordinates.

Prerequisites: calculus (derivatives and chain rule), complex numbers (exponential polar form), and geometric algebra basics (single sided rotations, vector multiplication, vector commutation sign changes, …)

You can find the video on Google’s censorship-tube, or on odysee.