## Does the divergence and curl uniquely determine the vector?

A problem posed in the ece1228 problem set was the following

### Helmholtz theorem.

Prove the first Helmholtz's theorem, i.e. if vector $$\BM$$ is defined by its divergence

\label{eqn:emtProblemSet1Problem5:20}
\spacegrad \cdot \BM = s,

and its curl

\label{eqn:emtProblemSet1Problem5:40}
\spacegrad \cross \BM = \BC,

within a region and its normal component $$\BM_{\textrm{n}}$$ over the boundary, then $$\BM$$ is uniquely specified.

### Solution.

This problem screams for an attempt using Geometric Algebra techniques, since
the gradient of this vector can be written as a single even grade multivector

\label{eqn:emtProblemSet1Problem5AppendixGA:60}
\begin{aligned}
\spacegrad \BM
&= \spacegrad \cdot \BM + \spacegrad \wedge \BM \\
&= \spacegrad \cdot \BM + I \spacegrad \cross \BM \\
&= s + I \BC.
\end{aligned}

Observe that the Laplacian of $$\BM$$ is vector valued

\label{eqn:emtProblemSet1Problem5AppendixGA:400}
\spacegrad^2 \BM = \spacegrad s + I \spacegrad \BC.

This means that $$\spacegrad \BC$$ must be a bivector, $$\spacegrad \BC = \spacegrad \wedge \BC$$, or that $$\BC$$ has zero divergence

\label{eqn:emtProblemSet1Problem5AppendixGA:420}
\spacegrad \cdot \BC = 0.

This required constraint on $$\BC$$ will show up in subsequent analysis. An equivalent problem to the one posed
is to show that the even grade multivector equation $$\spacegrad \BM = s + I \BC$$ has an inverse given the constraint
specified by \ref{eqn:emtProblemSet1Problem5AppendixGA:420}.

The Green’s function for the gradient can be found in [1], where it is used to generalize the Cauchy integral equations to higher dimensions.

\label{eqn:emtProblemSet1Problem5AppendixGA:80}
G(\Bx ; \Bx') = \inv{4 \pi} \frac{ \Bx - \Bx' }{\Abs{\Bx - \Bx'}^3}.

The inversion equation is an application of the Fundamental Theorem of (Geometric) Calculus, with the gradient operating bidirectionally on the Green’s function and the vector function

\label{eqn:emtProblemSet1Problem5AppendixGA:100}
\begin{aligned}
\oint_{\partial V} G(\Bx, \Bx') d^2 \Bx' \BM(\Bx')
&=
\int_V G(\Bx, \Bx') d^3 \Bx' \lrspacegrad' \BM(\Bx') \\
&=
\int_V d^3 \Bx' (G(\Bx, \Bx') \lspacegrad') \BM(\Bx')
+
\int_V d^3 \Bx' G(\Bx, \Bx') (\spacegrad' \BM(\Bx')) \\
&=
-\int_V d^3 \Bx' \delta(\Bx - \Bx') \BM(\Bx')
+
\int_V d^3 \Bx' G(\Bx, \Bx') \lr{ s(\Bx') + I \BC(\Bx') } \\
&=
-I \BM(\Bx)
+
\inv{4 \pi} \int_V d^3 \Bx' \frac{ \Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \lr{ s(\Bx') + I \BC(\Bx') }.
\end{aligned}

The integrals are in terms of the primed coordinates so that the end result is a function of $$\Bx$$. To rearrange for $$\BM$$, let $$d^3 \Bx’ = I dV’$$, and $$d^2 \Bx’ \ncap(\Bx’) = I dA’$$, then right multiply with the pseudoscalar $$I$$, noting that in \R{3} the pseudoscalar commutes with any grades

\label{eqn:emtProblemSet1Problem5AppendixGA:440}
\begin{aligned}
\BM(\Bx)
&=
I \oint_{\partial V} G(\Bx, \Bx') I dA' \ncap \BM(\Bx')
-
I \inv{4 \pi} \int_V I dV' \frac{ \Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \lr{ s(\Bx') + I \BC(\Bx') } \\
&=
-\oint_{\partial V} dA' G(\Bx, \Bx') \ncap \BM(\Bx')
+
\inv{4 \pi} \int_V dV' \frac{ \Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \lr{ s(\Bx') + I \BC(\Bx') }.
\end{aligned}

This can be decomposed into a vector and a trivector equation. Let $$\Br = \Bx – \Bx’ = r \rcap$$, and note that

\label{eqn:emtProblemSet1Problem5AppendixGA:500}
\begin{aligned}
\gpgradeone{ \rcap I \BC }
&=
\gpgradeone{ I \rcap \BC } \\
&=
I \rcap \wedge \BC \\
&=
-\rcap \cross \BC,
\end{aligned}

so this pair of equations can be written as

\label{eqn:emtProblemSet1Problem5AppendixGA:520}
\begin{aligned}
\BM(\Bx)
&=
-\inv{4 \pi} \oint_{\partial V} dA' \frac{\gpgradeone{ \rcap \ncap \BM(\Bx') }}{r^2}
+
\inv{4 \pi} \int_V dV' \lr{
\frac{\rcap}{r^2} s(\Bx') -
\frac{\rcap}{r^2} \cross \BC(\Bx') } \\
0
&=
-\inv{4 \pi} \oint_{\partial V} dA' \frac{\rcap}{r^2} \wedge \ncap \wedge \BM(\Bx')
+
\frac{I}{4 \pi} \int_V dV' \frac{ \rcap \cdot \BC(\Bx') }{r^2}.
\end{aligned}
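The pseudoscalar factoring used above, $$\gpgradeone{ \rcap I \BC } = -\rcap \cross \BC$$, can be spot checked numerically in the Pauli matrix representation of the $$\R{3}$$ basis vectors (a numpy sketch of my own; the `vec` and `grade1` helpers are hypothetical names, not anything from the text):

```python
import numpy as np

# Pauli matrices: a 2x2 matrix representation of the R^3 GA basis vectors e_k
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
I = sigma[0] @ sigma[1] @ sigma[2]      # pseudoscalar, equals 1j times the identity

def vec(a):
    """Embed an R^3 vector as a matrix-represented multivector."""
    return sum(ak * sk for ak, sk in zip(a, sigma))

def grade1(M):
    """Extract the vector (grade 1) part: a_k = Re tr(M sigma_k)/2."""
    return np.array([np.trace(M @ sk).real / 2 for sk in sigma])

rng = np.random.default_rng(1)
r = rng.standard_normal(3)
rhat = r / np.linalg.norm(r)
C = rng.standard_normal(3)

lhs = grade1(vec(rhat) @ I @ vec(C))    # < rcap I C >_1
rhs = -np.cross(rhat, C)                # -rcap x C
assert np.allclose(lhs, rhs)
```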

Consider the last integral in the pseudoscalar equation above. Since we expect no pseudoscalar components, this must be zero, or cancel perfectly. It’s not obvious that this is the case, but a transformation to a surface integral shows the constraints required for that to be the case. To do so note

\label{eqn:emtProblemSet1Problem5AppendixGA:540}
\begin{aligned}
\spacegrad \inv{\Abs{\Bx - \Bx'}}
&= -\spacegrad' \inv{\Abs{\Bx - \Bx'}} \\
&=
-\frac{\Bx - \Bx'}{\Abs{\Bx - \Bx'}^3} \\
&= -\frac{\rcap}{r^2}.
\end{aligned}

Using this and the chain rule we have

\label{eqn:emtProblemSet1Problem5AppendixGA:560}
\begin{aligned}
\frac{I}{4 \pi} \int_V dV' \frac{ \rcap \cdot \BC(\Bx') }{r^2}
&=
\frac{I}{4 \pi} \int_V dV' \lr{ \spacegrad' \inv{ r } } \cdot \BC(\Bx') \\
&=
\frac{I}{4 \pi} \int_V dV' \spacegrad' \cdot \frac{\BC(\Bx')}{r}
-
\frac{I}{4 \pi} \int_V dV' \frac{ \spacegrad' \cdot \BC(\Bx') }{r} \\
&=
\frac{I}{4 \pi} \int_V dV' \spacegrad' \cdot \frac{\BC(\Bx')}{r} \\
&=
\frac{I}{4 \pi} \int_{\partial V} dA' \ncap(\Bx') \cdot \frac{\BC(\Bx')}{r}.
\end{aligned}

The divergence of $$\BC$$ above was killed by recalling the constraint \ref{eqn:emtProblemSet1Problem5AppendixGA:420}. This means that the pseudoscalar equation can be rewritten entirely as a surface integral, which eventually reduces to a single triple product

\label{eqn:emtProblemSet1Problem5AppendixGA:580}
\begin{aligned}
0
&=
-\frac{I}{4 \pi} \oint_{\partial V} dA' \lr{
\frac{\rcap}{r^2} \cdot (\ncap \cross \BM(\Bx'))
-\ncap \cdot \frac{\BC(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA' \ncap \cdot \lr{
\frac{\rcap}{r^2} \cross \BM(\Bx')
+ \frac{\BC(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA' \ncap \cdot \lr{
\lr{ \spacegrad' \inv{r} } \cross \BM(\Bx')
+ \frac{\BC(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA' \ncap \cdot \lr{
\spacegrad' \cross \frac{\BM(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA'
\spacegrad' \cdot \frac{\BM(\Bx') \cross \ncap}{r}.
\end{aligned}
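The cyclic shuffle of the scalar triple product used in the first step above can be sanity checked numerically (a quick numpy sketch, not part of the original derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
a, n, M = rng.standard_normal((3, 3))

# a . (n x M) = -n . (a x M): swapping two rows of the determinant flips its sign
assert np.isclose(np.dot(a, np.cross(n, M)), -np.dot(n, np.cross(a, M)))
```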

### Final results.

Assembling things back into a single multivector equation, the complete inversion integral for $$\BM$$ is

\label{eqn:emtProblemSet1Problem5AppendixGA:600}
\BM(\Bx)
=
\inv{4 \pi} \oint_{\partial V} dA'
\lr{
-\frac{\gpgradeone{ \rcap \ncap \BM(\Bx') }}{r^2}
+
\spacegrad' \wedge \frac{\BM(\Bx') \wedge \ncap}{r}
}
+
\inv{4 \pi} \int_V dV' \lr{
\frac{\rcap}{r^2} s(\Bx') -
\frac{\rcap}{r^2} \cross \BC(\Bx') }.

This shows that the vector $$\BM$$ can be recovered uniquely from $$s, \BC$$ when $$\Abs{\BM}/r^2$$ vanishes on an infinite surface. If we restrict attention to a finite surface, we must add to that volume integral solution a term that depends on the value of $$\BM$$ on the bounding surface. The vector portion of that surface integrand contains

\label{eqn:emtProblemSet1Problem5AppendixGA:640}
\begin{aligned}
\gpgradeone{ \rcap \ncap \BM }
&=
\rcap (\ncap \cdot \BM )
+
\rcap \cdot (\ncap \wedge \BM ) \\
&=
\rcap (\ncap \cdot \BM )
+
(\rcap \cdot \ncap) \BM
-
(\rcap \cdot \BM ) \ncap.
\end{aligned}
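The dot-with-wedge expansion above is the Geometric Algebra form of the familiar BAC-CAB rule, $$\rcap \cdot (\ncap \wedge \BM) = -\rcap \cross (\ncap \cross \BM)$$, which can be confirmed numerically (a numpy sketch of my own):

```python
import numpy as np

rng = np.random.default_rng(3)
r, n, M = rng.standard_normal((3, 3))

# r . (n ^ M) in cross product form: -r x (n x M) = (r . n) M - (r . M) n
lhs = -np.cross(r, np.cross(n, M))
rhs = np.dot(r, n) * M - np.dot(r, M) * n
assert np.allclose(lhs, rhs)
```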

The constraints required for a zero triple product $$\spacegrad' \cdot (\BM(\Bx') \cross \ncap(\Bx'))$$ are complicated on such a general finite surface. Consider instead, for simplicity, the case of a spherical surface, which can be analyzed more easily. In that case the outward normal of the surface centred on the test charge point $$\Bx$$ is $$\ncap = -\rcap$$. The pseudoscalar integrand is not generally killed unless the divergence of its tangential component on this surface is zero. One way that this can occur is for $$\BM \cross \ncap = 0$$, so that $$-\gpgradeone{ \rcap \ncap \BM } = \BM = (\BM \cdot \ncap) \ncap = \BM_{\textrm{n}}$$.

This gives

\label{eqn:emtProblemSet1Problem5AppendixGA:620}
\BM(\Bx)
=
\inv{4 \pi} \oint_{\Abs{\Bx - \Bx'} = r} dA' \frac{\BM_{\textrm{n}}(\Bx')}{r^2}
+
\inv{4 \pi} \int_V dV' \lr{
\frac{\rcap}{r^2} s(\Bx') +
\BC(\Bx') \cross \frac{\rcap}{r^2} },

or, in terms of potential functions, which is arguably tidier

\label{eqn:emtProblemSet1Problem5AppendixGA:300}
\boxed{
\BM(\Bx)
=
\inv{4 \pi} \oint_{\Abs{\Bx - \Bx'} = r} dA' \frac{\BM_{\textrm{n}}(\Bx')}{r^2}
-\spacegrad \int_V dV' \frac{ s(\Bx')}{ 4 \pi r }
+\spacegrad \cross \int_V dV' \frac{ \BC(\Bx') }{ 4 \pi r }.
}

### Commentary

I attempted this problem in three different ways. My first approach (above) assembled the divergence and curl relations into a single (Geometric Algebra) multivector gradient equation, and applied the vector valued Green's function for the gradient to invert that equation. That approach logically led from the differential equation for $$\BM$$ to the solution for $$\BM$$ in terms of $$s$$ and $$\BC$$. However, this strategy introduced some complexities that make me doubt the correctness of the associated boundary analysis.

Even if the details of the boundary handling in my multivector approach are not correct, I thought that approach was interesting enough to share.

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

## Motivation

I initially thought that I might submit a problem set solution for ece1228 using Geometric Algebra. In order to justify this, I needed to add an appendix to that problem set that outlined enough of the ideas that such a solution might make sense to the grader.

I ended up changing my mind and reworked the problem entirely, removing any use of GA. Here’s the tutorial I initially considered submitting with that problem.

## Geometric Algebra in a nutshell.

Geometric Algebra defines a non-commutative, associative vector product

\label{eqn:gaTutorial:20}
\begin{aligned}
\Ba \Bb \Bc
&=
(\Ba \Bb) \Bc \\
&=
\Ba (\Bb \Bc),
\end{aligned}

where the square of a vector equals the squared vector magnitude

\label{eqn:gaTutorial:40}
\Ba^2 = \Abs{\Ba}^2.

In Euclidean spaces such a squared vector is always positive, but that is not necessarily the case in the mixed signature spaces used in special relativity.

There are a number of consequences of these two simple vector multiplication rules.

• Squared unit vectors have a unit magnitude (up to a sign). In a Euclidean space such a product is always positive

\label{eqn:gaTutorial:60}
(\Be_1)^2 = 1.

• Products of perpendicular vectors anticommute.

\label{eqn:gaTutorial:80}
\begin{aligned}
2
&=
(\Be_1 + \Be_2)^2 \\
&= (\Be_1 + \Be_2)(\Be_1 + \Be_2) \\
&= \Be_1^2 + \Be_2 \Be_1 + \Be_1 \Be_2 + \Be_2^2 \\
&= 2 + \Be_2 \Be_1 + \Be_1 \Be_2.
\end{aligned}

A product of two perpendicular vectors is called a bivector, and can be used to represent an oriented plane. The last line above shows an example of a scalar and bivector sum, called a multivector. In general Geometric Algebra allows scalars, vectors, bivectors, and higher grade analogues to be summed.

Comparison of the RHS and LHS of \ref{eqn:gaTutorial:80} shows that we must have

\label{eqn:gaTutorial:100}
\Be_2 \Be_1 = -\Be_1 \Be_2.

It is true in general that the product of two perpendicular vectors anticommutes. When, as above, such a product is a product of
two orthonormal vectors, it behaves like a non-commutative imaginary quantity, since it has a negative square in Euclidean spaces

\label{eqn:gaTutorial:120}
\begin{aligned}
(\Be_1 \Be_2)^2
&=
(\Be_1 \Be_2)
(\Be_1 \Be_2) \\
&=
\Be_1 (\Be_2
\Be_1) \Be_2 \\
&=
-\Be_1 (\Be_1
\Be_2) \Be_2 \\
&=
-(\Be_1 \Be_1)
(\Be_2 \Be_2) \\
&=-1.
\end{aligned}

Such “imaginary” (unit bivectors) have important applications describing rotations in Euclidean spaces, and boosts in Minkowski spaces.

• The product of three perpendicular vectors, such as

\label{eqn:gaTutorial:140}
I = \Be_1 \Be_2 \Be_3,

is called a trivector. In \R{3}, the product of three orthonormal vectors is called a pseudoscalar for the space, and can represent an oriented volume element. The quantity $$I$$ above is the typical orientation picked for the \R{3} unit pseudoscalar. This quantity also has characteristics of an imaginary number

\label{eqn:gaTutorial:160}
\begin{aligned}
I^2
&=
(\Be_1 \Be_2 \Be_3)
(\Be_1 \Be_2 \Be_3) \\
&=
\Be_1 \Be_2 (\Be_3
\Be_1) \Be_2 \Be_3 \\
&=
-\Be_1 \Be_2 \Be_1
\Be_3 \Be_2 \Be_3 \\
&=
-\Be_1 (\Be_2 \Be_1)
(\Be_3 \Be_2) \Be_3 \\
&=
-\Be_1 (\Be_1 \Be_2)
(\Be_2 \Be_3) \Be_3 \\
&=
-\Be_1^2 \Be_2^2 \Be_3^2 \\
&=
-1.
\end{aligned}

• The product of two vectors in \R{3} can be expressed as the sum of a symmetric scalar product and antisymmetric bivector product

\label{eqn:gaTutorial:480}
\begin{aligned}
\Ba \Bb
&=
\sum_{i,j = 1}^n \Be_i \Be_j a_i b_j \\
&=
\sum_{i = 1}^n \Be_i^2 a_i b_i
+
\sum_{0 < i \ne j \le n} \Be_i \Be_j a_i b_j \\
&=
\sum_{i = 1}^n a_i b_i
+
\sum_{0 < i < j \le n} \Be_i \Be_j (a_i b_j - a_j b_i).
\end{aligned}

The first (symmetric) term is clearly the dot product. The antisymmetric term is designated the wedge product. In general these are written

\label{eqn:gaTutorial:500}
\Ba \Bb = \Ba \cdot \Bb + \Ba \wedge \Bb,

where

\label{eqn:gaTutorial:520}
\begin{aligned}
\Ba \cdot \Bb &\equiv \inv{2} \lr{ \Ba \Bb + \Bb \Ba } \\
\Ba \wedge \Bb &\equiv \inv{2} \lr{ \Ba \Bb - \Bb \Ba }.
\end{aligned}

The coordinate expansion of both can be seen above, but in \R{3} the wedge can also be written

\label{eqn:gaTutorial:540}
\Ba \wedge \Bb = \Be_1 \Be_2 \Be_3 (\Ba \cross \Bb) = I (\Ba \cross \Bb).

This allows for a handy dot plus cross product expansion of the vector product

\label{eqn:gaTutorial:180}
\Ba \Bb = \Ba \cdot \Bb + I (\Ba \cross \Bb).

This result should be familiar to the student of quantum spin states, where one writes

\label{eqn:gaTutorial:200}
(\Bsigma \cdot \Ba) (\Bsigma \cdot \Bb) = (\Ba \cdot \Bb) + i (\Ba \cross \Bb) \cdot \Bsigma.

This correspondence holds because the Pauli spin basis is a specific matrix representation of a Geometric Algebra, satisfying the same commutator and anticommutator relationships. A number of other algebraic structures, such as complex numbers and quaternions, can also be modelled as Geometric Algebra elements.

• It is often useful to utilize the grade selection operator
$$\gpgrade{M}{n}$$ and scalar grade selection operator $$\gpgradezero{M} = \gpgrade{M}{0}$$
to select the scalar, vector, bivector, trivector, or higher grade algebraic elements. For example, operating on vectors $$\Ba, \Bb, \Bc$$, we have

\label{eqn:gaTutorial:580}
\begin{aligned}
\gpgradezero{ \Ba \Bb }
&= \Ba \cdot \Bb \\
\gpgradeone{ \Ba \Bb \Bc }
&=
\Ba (\Bb \cdot \Bc)
+
\Ba \cdot (\Bb \wedge \Bc) \\
&=
\Ba (\Bb \cdot \Bc)
+
(\Ba \cdot \Bb) \Bc
-
(\Ba \cdot \Bc) \Bb \\
\gpgrade{ \Ba \Bb }{2}
&=
\Ba \wedge \Bb \\
\gpgrade{ \Ba \Bb \Bc }{3}
&=
\Ba \wedge \Bb \wedge \Bc.
\end{aligned}

Note that the wedge product of any number of vectors such as $$\Ba \wedge \Bb \wedge \Bc$$ is associative, and can be expressed in terms of the complete antisymmetrization of the product of those vectors. A consequence is that a wedge product that includes any colinear vectors is zero.
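All of the multiplication rules above can be exercised concretely with the Pauli matrix representation mentioned in the spin state comparison (a numpy sketch of my own; the matrices stand in for $$\Be_1, \Be_2, \Be_3$$):

```python
import numpy as np

# Pauli matrices as a 2x2 matrix representation of the R^3 basis e_1, e_2, e_3
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2)

assert np.allclose(e1 @ e1, Id)                 # unit vectors square to 1
assert np.allclose(e2 @ e1, -(e1 @ e2))         # perpendicular vectors anticommute
assert np.allclose((e1 @ e2) @ (e1 @ e2), -Id)  # the unit bivector squares to -1
I = e1 @ e2 @ e3                                # pseudoscalar, equals 1j * Id
assert np.allclose(I @ I, -Id)                  # I^2 = -1

# a b = a . b + I (a x b), checked on random vectors
rng = np.random.default_rng(0)
a, b = rng.standard_normal((2, 3))
amat = a[0] * e1 + a[1] * e2 + a[2] * e3
bmat = b[0] * e1 + b[1] * e2 + b[2] * e3
c = np.cross(a, b)
cmat = c[0] * e1 + c[1] * e2 + c[2] * e3
assert np.allclose(amat @ bmat, np.dot(a, b) * Id + I @ cmat)
```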

## Example: Helmholtz equations.

As an example of the power of \ref{eqn:gaTutorial:180}, consider the following Helmholtz equation derivation (wave equations for the electric and magnetic fields in the frequency domain.)

Application of \ref{eqn:gaTutorial:180} to
Maxwell equations in the frequency domain for source free simple media gives

\label{eqn:emtProblemSet1Problem6:360}
\spacegrad \BE = -j \omega I \BB,

\label{eqn:emtProblemSet1Problem6:380}
\spacegrad I \BB = -j \omega \mu \epsilon \BE.

These equations use the engineering (not physics) sign convention for the phasors, where the time domain fields are of the form $$\boldsymbol{\mathcal{E}}(\Br, t) = \textrm{Re}( \BE e^{j\omega t} )$$.

Operation with the gradient from the left produces the Helmholtz equation for each of the fields using nothing more than multiplication and simple substitution

\label{eqn:emtProblemSet1Problem6:420}
\spacegrad^2 \BE = -\mu \epsilon \omega^2 \BE,

\label{eqn:emtProblemSet1Problem6:440}
\spacegrad^2 I \BB = -\mu \epsilon \omega^2 I \BB.

There was no reason to go through the headache of looking up or deriving the expansion of $$\spacegrad \cross (\spacegrad \cross \BA )$$ as is required with the traditional vector algebra demonstration of these identities.

Observe that the usual Helmholtz equation for $$\BB$$ doesn’t have a pseudoscalar factor. That result can be obtained by just cancelling the factors $$I$$ since the \R{3} Euclidean pseudoscalar commutes with all grades (this isn’t the case in \R{2} nor in Minkowski spaces.)

## Example: Factoring the Laplacian.

There are various ways to demonstrate the identity

\label{eqn:gaTutorial:660}
\spacegrad \cross \lr{ \spacegrad \cross \BA } = \spacegrad \lr{ \spacegrad \cdot \BA } - \spacegrad^2 \BA,

such as the use of (somewhat obscure) tensor contraction techniques. We can also do this with Geometric Algebra (using a different set of obscure techniques) by factoring the Laplacian action on a vector

\label{eqn:gaTutorial:700}
\begin{aligned}
\spacegrad^2 \BA
&=
\spacegrad \lr{ \spacegrad \BA } \\
&=
\spacegrad \lr{ \spacegrad \cdot \BA + \spacegrad \wedge \BA } \\
&=
\spacegrad \lr{ \spacegrad \cdot \BA }
+
\spacegrad \cdot \lr{ \spacegrad \wedge \BA }
+
\spacegrad \wedge \lr{ \spacegrad \wedge \BA } \\
&=
\spacegrad \lr{ \spacegrad \cdot \BA }
+
\spacegrad \cdot \lr{ \spacegrad \wedge \BA },
\end{aligned}

since the completely antisymmetrized term $$\spacegrad \wedge \spacegrad \wedge \BA = 0$$.

Should we wish to express the last term using cross products, a grade one selection operation can be used

\label{eqn:gaTutorial:680}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \wedge \BA }
&=
\gpgradeone{ \spacegrad \lr{ \spacegrad \wedge \BA } } \\
&=
\gpgradeone{ \spacegrad I \lr{ \spacegrad \cross \BA } } \\
&=
\gpgradeone{ I \spacegrad \wedge \lr{ \spacegrad \cross \BA } } \\
&=
\gpgradeone{ I^2 \spacegrad \cross \lr{ \spacegrad \cross \BA } } \\
&=
-\spacegrad \cross \lr{ \spacegrad \cross \BA }.
\end{aligned}

Here coordinate expansion was not required in any step.
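The resulting identity \ref{eqn:gaTutorial:660} can be verified symbolically for an arbitrary polynomial vector field (a sympy sketch of my own; the sample field is an arbitrary choice, not from the text):

```python
from sympy import diff, simplify
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
# an arbitrary smooth vector field A
A = (x**2 * y) * N.i + (y * z**2) * N.j + (x * y * z) * N.k

lhs = curl(curl(A))
grad_div = gradient(divergence(A))
for e in (N.i, N.j, N.k):
    # componentwise vector Laplacian
    lap_component = sum(diff(A.dot(e), c, 2) for c in (x, y, z))
    # curl curl A = grad div A - laplacian A, component by component
    assert simplify(lhs.dot(e) - (grad_div.dot(e) - lap_component)) == 0
```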


## Green’s function inversion of the magnetostatic equation

A previous example of inverting a gradient equation was the electrostatics equation. We can do the same for the magnetostatics equation, which has the following Geometric Algebra form in linear media

\label{eqn:biotSavartGreens:20}
\spacegrad I \BB = -\mu \BJ.

The Green’s inversion of this is
\label{eqn:biotSavartGreens:40}
\begin{aligned}
I \BB(\Bx)
&= \int_V dV' G(\Bx, \Bx') \spacegrad' I \BB(\Bx') \\
&= \int_V dV' G(\Bx, \Bx') (-\mu \BJ(\Bx')) \\
&= \inv{4\pi} \int_V dV' \frac{\Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } (-\mu \BJ(\Bx')).
\end{aligned}

We expect the LHS to be a bivector, so the scalar component of this should be zero. That can be demonstrated with some of the usual trickery
\label{eqn:biotSavartGreens:60}
\begin{aligned}
-\frac{\mu}{4\pi} \int_V dV' \frac{\Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \cdot \BJ(\Bx')
&= \frac{\mu}{4\pi} \int_V dV' \lr{ \spacegrad \inv{ \Abs{\Bx - \Bx'} }} \cdot \BJ(\Bx') \\
&= -\frac{\mu}{4\pi} \int_V dV' \lr{ \spacegrad' \inv{ \Abs{\Bx - \Bx'} }} \cdot \BJ(\Bx') \\
&= -\frac{\mu}{4\pi} \int_V dV' \lr{
\spacegrad' \cdot \frac{\BJ(\Bx')}{ \Abs{\Bx - \Bx'} }
-
\frac{\spacegrad' \cdot \BJ(\Bx')}{ \Abs{\Bx - \Bx'} }
}.
\end{aligned}

The current $$\BJ$$ is not unconstrained. This can be seen by premultiplying \ref{eqn:biotSavartGreens:20} by the gradient

\label{eqn:biotSavartGreens:80}
\spacegrad^2 I \BB = -\mu \spacegrad \BJ.

On the LHS we have a bivector so must have $$\spacegrad \BJ = \spacegrad \wedge \BJ$$, or $$\spacegrad \cdot \BJ = 0$$. This kills the $$\spacegrad’ \cdot \BJ(\Bx’)$$ integrand numerator in \ref{eqn:biotSavartGreens:60}, leaving

\label{eqn:biotSavartGreens:100}
\begin{aligned}
-\frac{\mu}{4\pi} \int_V dV' \frac{\Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \cdot \BJ(\Bx')
&= -\frac{\mu}{4\pi} \int_V dV' \spacegrad' \cdot \frac{\BJ(\Bx')}{ \Abs{\Bx - \Bx'} } \\
&= -\frac{\mu}{4\pi} \int_{\partial V} dA' \ncap \cdot \frac{\BJ(\Bx')}{ \Abs{\Bx - \Bx'} }.
\end{aligned}

This shows that the scalar part of the equation is zero, provided the normal component of $$\BJ/\Abs{\Bx – \Bx’}$$ vanishes on the boundary of the infinite sphere. This leaves the Biot-Savart law as a bivector equation

\label{eqn:biotSavartGreens:120}
I \BB(\Bx)
= \frac{\mu}{4\pi} \int_V dV' \BJ(\Bx') \wedge \frac{\Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 }.

Observe that the traditional vector form of the Biot-Savart law can be obtained by premultiplying both sides with $$-I$$, leaving

\label{eqn:biotSavartGreens:140}
\BB(\Bx)
= \frac{\mu}{4\pi} \int_V dV' \BJ(\Bx') \cross \frac{\Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 }.

This checks against a trusted source such as [1] (eq. 5.39).
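As a further numerical sanity check, a discretized Biot-Savart sum for a long straight wire reproduces the familiar $$B = \mu I/(2 \pi \rho)$$ field (a numpy sketch of my own; the wire length, grid resolution, and unit normalizations are arbitrary choices):

```python
import numpy as np

# Biot-Savart sum for a long straight wire along z carrying current `cur`,
# compared against the textbook magnitude B = mu cur / (2 pi rho)
mu, cur = 1.0, 1.0
L, n = 2000.0, 400001
z = np.linspace(-L, L, n)
dz = z[1] - z[0]
dl = np.array([0.0, 0.0, dz])            # current element, directed along z

x = np.array([0.5, 0.0, 0.0])            # field point, rho = 0.5
r = x[None, :] - np.stack([np.zeros(n), np.zeros(n), z], axis=1)
integrand = np.cross(np.broadcast_to(dl, r.shape), r)
integrand /= np.linalg.norm(r, axis=1)[:, None] ** 3
B = mu * cur / (4 * np.pi) * integrand.sum(axis=0)

B_exact = mu * cur / (2 * np.pi * 0.5)   # field magnitude, along +y at this point
assert np.allclose(B, [0.0, B_exact, 0.0], rtol=1e-3, atol=1e-6)
```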

# References

[1] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

## Green’s function for the gradient in Euclidean spaces.

In [1] it is stated that the Green's function for the gradient is

G(x, x') = \inv{S_n} \frac{x - x'}{\Abs{x-x'}^n},

where $$n$$ is the dimension of the space, and $$S_n$$ is the area of the unit sphere in that space.

What I'd like to do here is verify that this Green's function operates as asserted. Here, as in some parts of the text, I am following a convention where vectors are written without boldface.

Let’s start with checking that the gradient of the Green’s function is zero everywhere that $$x \ne x’$$

\begin{aligned}
\grad \inv{\Abs{x - x'}^{n}}
&=
-\frac{n}{2} \frac{e^\nu \partial_\nu (x_\mu - x_\mu')(x^\mu - {x^\mu}')}{\Abs{x - x'}^{n+2}} \\
&=
-\frac{n}{2} 2 \frac{e^\nu (x_\mu - x_\mu') \delta_\nu^\mu }{\Abs{x - x'}^{n+2}} \\
&=
-n \frac{ x - x'}{\Abs{x - x'}^{n+2}}.
\end{aligned}

This means that we have, everywhere that $$x \ne x’$$

\begin{aligned}
\grad \cdot G(x, x')
&=
\inv{S_n} \lr{ \frac{\grad \cdot \lr{x - x'}}{\Abs{x - x'}^{n}} + \lr{ \grad \inv{\Abs{x - x'}^{n}} } \cdot \lr{ x - x'} } \\
&=
\inv{S_n} \lr{ \frac{n}{\Abs{x - x'}^{n}} + \lr{ -n \frac{x - x'}{\Abs{x - x'}^{n+2} } } \cdot \lr{ x - x'} } \\
&= 0.
\end{aligned}
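For the $$n = 3$$ case this zero divergence (and, anticipating the next step, the zero curl) can also be checked numerically with central differences (a numpy sketch of my own; the sample point and step size are arbitrary):

```python
import numpy as np

# central-difference check that G(x, x') = (1/4 pi)(x - x')/|x - x'|^3
# has zero divergence and curl away from x = x' (here x' = 0)
def G(p):
    r = np.linalg.norm(p)
    return p / (4 * np.pi * r**3)

x0 = np.array([0.7, -0.3, 1.1])   # an arbitrary point away from the singularity
h = 1e-5

J = np.zeros((3, 3))              # Jacobian dG_i / dx_j
for j in range(3):
    dp = np.zeros(3); dp[j] = h
    J[:, j] = (G(x0 + dp) - G(x0 - dp)) / (2 * h)

div = np.trace(J)
curl = np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])
assert abs(div) < 1e-6
assert np.all(np.abs(curl) < 1e-6)
```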

Next, consider the curl of the Green’s function. Zero curl will mean that we have $$\grad G = \grad \cdot G = G \lgrad$$.

\begin{aligned}
\grad \wedge G(x, x')
&=
\inv{S_n} \frac{\grad \wedge (x - x')}{\Abs{x - x'}^{n}}
+
\inv{S_n} \lr{ \grad \inv{\Abs{x - x'}^{n}} } \wedge (x-x') \\
&=
\inv{S_n} \frac{\grad \wedge (x - x')}{\Abs{x - x'}^{n}}
- \frac{n}{S_n}
\frac{(x - x') \wedge (x-x')}{\Abs{x - x'}^{n+2}} \\
&=
\inv{S_n} \frac{\grad \wedge (x - x')}{\Abs{x - x'}^{n}},
\end{aligned}

where the second term was killed since $$(x - x') \wedge (x - x') = 0$$.

However,

\begin{aligned}
\grad \wedge (x - x')
&=
\grad \wedge x \\
&=
e^\mu \wedge e_\nu \partial_\mu x^\nu \\
&=
e^\mu \wedge e_\nu \delta_\mu^\nu \\
&=
e^\mu \wedge e_\mu.
\end{aligned}

For any metric where $$e_\mu \propto e^\mu$$, which is the case in all the ones with physical interest (i.e. \R{3} and Minkowski space), $$\grad \wedge G$$ is zero.

Having shown that the gradient of the (presumed) Green’s function is zero everywhere that $$x \ne x’$$, the guts of the
demonstration can now proceed. We wish to evaluate the gradient weighted convolution of the Green’s function using the Fundamental Theorem of (Geometric) Calculus. Here the gradient acts bidirectionally on both the gradient and the test function. Working in primed coordinates so that the final result is in terms of the unprimed, we have

\int_V G(x,x’) d^n x’ \lrgrad’ F(x’)
= \int_{\partial V} G(x,x’) d^{n-1} x’ F(x’).

Let $$d^n x' = dV' I$$, $$d^{n-1} x' n = dA' I$$, where $$n = n(x')$$ is the outward normal to the area element $$d^{n-1} x'$$. From this point on, let's restrict attention to Euclidean spaces, where $$n^2 = 1$$. In that case

\begin{aligned}
\int_V dV' G(x,x') \lrgrad' F(x')
&=
\int_V dV' \lr{ G(x,x') \lgrad' } F(x')
+
\int_V dV' G(x,x') \lr{ \rgrad' F(x') } \\
&= \int_{\partial V} dA' G(x,x') n F(x').
\end{aligned}

Here, the pseudoscalar $$I$$ has been factored out by commuting it with $$G$$, using $$G I = (-1)^{n-1} I G$$, and then pre-multiplication with $$1/((-1)^{n-1} I )$$.

Each of these integrals can be considered in sequence. A convergence bound is required of the multivector test function $$F(x’)$$ on the infinite surface $$\partial V$$. Since it’s true that

\Abs{ \int_{\partial V} dA' G(x,x') n F(x') }
\le
\int_{\partial V} dA' \Abs{ G(x,x') n F(x') },

then it is sufficient to require that

\lim_{x' \rightarrow \infty} \Abs{ \frac{x - x'}{\Abs{x - x'}^n} n(x') F(x') } \rightarrow 0,

in order to kill off the surface integral. Evaluating this on a hypersphere centred on $$x$$, where $$x' - x = n \Abs{x - x'}$$, that requirement is

\label{eqn:gradientGreensFunction:260}
\lim_{x' \rightarrow \infty} \frac{ \Abs{F(x')}}{\Abs{x - x'}^{n-1}} \rightarrow 0.

Given such a constraint, that leaves

\int_V dV' \lr{ G(x,x') \lgrad' } F(x')
=
-\int_V dV' G(x,x') \lr{ \rgrad' F(x') }.

The LHS is zero everywhere that $$x \ne x'$$, so the integral can be restricted to a spherical ball around $$x$$, which allows the test function $$F$$ to be pulled out of the integral, and the Fundamental Theorem to be applied a second time.

\begin{aligned}
\int_V dV' \lr{ G(x,x') \lgrad' } F(x')
&=
\lim_{\epsilon \rightarrow 0}
\int_{\Abs{x - x'} < \epsilon} dV' \lr{G(x,x') \lgrad'} F(x') \\
&= \lr{ \lim_{\epsilon \rightarrow 0} I^{-1} \int_{\Abs{x - x'} < \epsilon} I dV' \lr{G(x,x') \lgrad'} } F(x) \\
&= \lr{ \lim_{\epsilon \rightarrow 0} (-1)^{n-1} I^{-1} \int_{\Abs{x - x'} < \epsilon} G(x,x') d^n x' \lgrad' } F(x) \\
&= \lr{ \lim_{\epsilon \rightarrow 0} (-1)^{n-1} I^{-1} \int_{\Abs{x - x'} = \epsilon} G(x,x') d^{n-1} x' } F(x) \\
&= \lr{ \lim_{\epsilon \rightarrow 0} (-1)^{n-1} I^{-1} \int_{\Abs{x - x'} = \epsilon} G(x,x') dA' I n } F(x) \\
&= \lr{ \lim_{\epsilon \rightarrow 0} \int_{\Abs{x - x'} = \epsilon} dA' G(x,x') n } F(x) \\
&= \lr{ \lim_{\epsilon \rightarrow 0} \int_{\Abs{x - x'} = \epsilon} dA' \frac{\epsilon (-n)}{S_n \epsilon^n} n } F(x) \\
&= -\lim_{\epsilon \rightarrow 0} \frac{F(x)}{S_n \epsilon^{n-1}} \int_{\Abs{x - x'} = \epsilon} dA' \\
&= -\lim_{\epsilon \rightarrow 0} \frac{F(x)}{S_n \epsilon^{n-1}} S_n \epsilon^{n-1} \\
&= -F(x).
\end{aligned}

This essentially calculates the divergence integral around an infinitesimal hypersphere, without assuming that the gradient commutes with the test function in this infinitesimal region.

So, provided the test function is constrained by \ref{eqn:gradientGreensFunction:260}, we have

\label{eqn:gradientGreensFunction:280}
F(x) = \int_V dV' G(x,x') \lr{ \grad' F(x') }.

In particular, should we have a first order gradient equation

\label{eqn:gradientGreensFunction:300}
\spacegrad' F(x') = M(x'),

the inverse of this equation is given by

\label{eqn:gradientGreensFunction:320}
\boxed{
F(x) = \int_V dV' G(x,x') M(x').
}

Note that the sign of the Green's function is explicitly tied to the definition of the convolution integral that is used. This is important since the conventions for the sign of the Green's function or the parameters in the convolution integral often vary.
What's cool about this result is that it applies not only to gradient equations in Euclidean spaces, but also to multivector (or even just vector) fields $$F$$, instead of the scalar functions that we typically apply Green's functions to.

## Example: Electrostatics

As a check of the sign consider the electrostatics equation

\spacegrad \BE = \frac{\rho}{\epsilon_0},

for which we have after substitution into \ref{eqn:gradientGreensFunction:320}

\BE(\Bx) = \inv{4 \pi \epsilon_0} \int_V dV' \frac{\Bx - \Bx'}{\Abs{\Bx - \Bx'}^3} \rho(\Bx').

This matches the sign found in a trusted reference such as [2].
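The Coulomb integral itself can also be evaluated numerically for a known configuration: a uniformly charged spherical shell should produce the point charge field at an exterior point (a Monte Carlo numpy sketch of my own; $$\epsilon_0 = 1$$, $$q = 1$$ are arbitrary normalizations):

```python
import numpy as np

# Monte Carlo check: field of a uniformly charged shell (charge q, radius 1)
# at an exterior point should equal q/(4 pi eps0 r^2) rhat
rng = np.random.default_rng(0)
eps0, q, N = 1.0, 1.0, 200000

# uniform samples on the unit sphere
pts = rng.standard_normal((N, 3))
pts /= np.linalg.norm(pts, axis=1)[:, None]

x = np.array([0.0, 0.0, 3.0])               # exterior field point, r = 3
d = x[None, :] - pts
E = (q / N) / (4 * np.pi * eps0) * (d / np.linalg.norm(d, axis=1)[:, None]**3).sum(axis=0)

E_exact = q / (4 * np.pi * eps0 * 3.0**2)   # point charge magnitude at r = 3
assert np.allclose(E, [0.0, 0.0, E_exact], rtol=2e-2, atol=2e-4)
```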

### Future thought.

Does this Green’s function also work for mixed metric spaces? If so, in such a metric, what does it mean to
calculate the surface area of a unit sphere in a mixed signature space?

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

## Stokes Theorem

The Fundamental Theorem of (Geometric) Calculus is a generalization of Stokes theorem to multivector integrals. Notationally, it looks like Stokes theorem with all the dot and wedge products removed. It is worth restating Stokes theorem and all the definitions associated with it for reference

## Stokes’ Theorem

For blades $$F \in \bigwedge^{s}$$, and a $$k$$ volume element $$d^k \Bx$$, $$s < k$$,

\begin{equation*}
\int_V d^k \Bx \cdot (\boldpartial \wedge F) = \oint_{\partial V} d^{k-1} \Bx \cdot F.
\end{equation*}

This is a loaded and abstract statement, and requires many definitions to make it useful.

• The volume integral is over a $$k$$ dimensional surface (manifold).
• Integration over the boundary of the manifold $$V$$ is indicated by $$\partial V$$.
• This manifold is assumed to be spanned by a parameterized vector $$\Bx(u^1, u^2, \cdots, u^k)$$.
• A curvilinear coordinate basis $$\setlr{ \Bx_i }$$ can be defined on the manifold by
\label{eqn:fundamentalTheoremOfCalculus:40}
\Bx_i \equiv \PD{u^i}{\Bx} \equiv \partial_i \Bx.

• A dual basis $$\setlr{\Bx^i}$$ reciprocal to the tangent vector basis $$\Bx_i$$ can be calculated subject to the requirement $$\Bx_i \cdot \Bx^j = \delta_i^j$$.
• The vector derivative $$\boldpartial$$, the projection of the gradient onto the tangent space of the manifold, is defined by
\label{eqn:fundamentalTheoremOfCalculus:100}
\boldpartial = \Bx^i \partial_i = \sum_{i=1}^k \Bx^i \PD{u^i}{}.

• The volume element is defined by
\label{eqn:fundamentalTheoremOfCalculus:60}
d^k \Bx = d\Bx_1 \wedge d\Bx_2 \cdots \wedge d\Bx_k,

where

\label{eqn:fundamentalTheoremOfCalculus:80}
d\Bx_k = \Bx_k du^k,\qquad \text{(no sum)}.

• The volume element is non-zero on the manifold, or $$\Bx_1 \wedge \cdots \wedge \Bx_k \ne 0$$.
• The surface area element $$d^{k-1} \Bx$$, is defined by
\label{eqn:fundamentalTheoremOfCalculus:120}
d^{k-1} \Bx = \sum_{i = 1}^k (-1)^{k-i} d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k,

where $$\widehat{d\Bx_i}$$ indicates the omission of $$d\Bx_i$$.

• My proof for this theorem was restricted to a simple “rectangular” volume parameterized by the ranges
$$[u^1(0), u^1(1) ] \otimes [u^2(0), u^2(1) ] \otimes \cdots \otimes [u^k(0), u^k(1) ]$$

• The precise meaning that should be given to oriented area integral is
\label{eqn:fundamentalTheoremOfCalculus:140}
\oint_{\partial V} d^{k-1} \Bx \cdot F
=
\sum_{i = 1}^k (-1)^{k-i} \int \evalrange{
\lr{ \lr{ d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k } \cdot F }
}{u^i = u^i(0)}{u^i(1)},

where both the area form and the blade $$F$$ are evaluated at the end points of the parameterization range.

After the work of stating exactly what is meant by this theorem, most of the proof follows from the fact that for $$s < k$$ the volume curl dot product can be expanded as

\label{eqn:fundamentalTheoremOfCalculus:160}
\int_V d^k \Bx \cdot (\boldpartial \wedge F)
=
\int_V d^k \Bx \cdot (\Bx^i \wedge \partial_i F)
=
\int_V \lr{ d^k \Bx \cdot \Bx^i } \cdot \partial_i F.

Each of the $$du^i$$ integrals can be evaluated directly, since each of the remaining $$d\Bx_j = \Bx_j du^j, i \ne j$$ is calculated with $$u^i$$ held fixed. This allows for integration over a “rectangular” parameterization region, proving the theorem for such a volume parameterization. A more general proof requires a triangulation of the volume and surface, but the basic principle of the theorem is evident without that additional work.
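For a concrete sanity check of the simplest nontrivial case, $$k = 2$$, $$s = 1$$ in the plane (where the theorem reduces to Green's theorem), a midpoint rule evaluation of both sides agrees (a numpy sketch of my own, with arbitrarily chosen polynomial fields):

```python
import numpy as np

# Green's theorem on the unit square:
# oint (P dx + Q dy) = iint (dQ/dx - dP/dy) dA, with P = x^2 y, Q = x y^3
P = lambda x, y: x**2 * y
Q = lambda x, y: x * y**3

n = 400
c = (np.arange(n) + 0.5) / n                 # midpoint rule nodes on [0, 1]
X, Y = np.meshgrid(c, c)
area = (Y**3 - X**2).mean()                  # dQ/dx - dP/dy averaged = integral

# boundary integral, counterclockwise around the square
t = c
boundary = (
    np.sum(P(t, 0.0)) / n        # bottom edge: y = 0, dx > 0
    + np.sum(Q(1.0, t)) / n      # right edge: x = 1, dy > 0
    - np.sum(P(t, 1.0)) / n      # top edge: y = 1, dx < 0
    - np.sum(Q(0.0, t)) / n      # left edge: x = 0, dy < 0
)
assert np.isclose(area, boundary, atol=1e-4)   # both sides equal -1/12
```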

## Fundamental Theorem of Calculus

There is a Geometric Algebra generalization of Stokes theorem that does not have the blade grade restriction of Stokes theorem. In [2] this is stated as

\label{eqn:fundamentalTheoremOfCalculus:180}
\int_V d^k \Bx \boldpartial F = \oint_{\partial V} d^{k-1} \Bx F.

A similar expression is used in [1] where it is also pointed out there is a variant with the vector derivative acting to the left

\label{eqn:fundamentalTheoremOfCalculus:200}
\int_V F d^k \Bx \boldpartial = \oint_{\partial V} F d^{k-1} \Bx.

In [3] it is pointed out that a bidirectional formulation is possible, providing the most general expression of the Fundamental Theorem of (Geometric) Calculus

\label{eqn:fundamentalTheoremOfCalculus:220}
\boxed{
\int_V F d^k \Bx \boldpartial G = \oint_{\partial V} F d^{k-1} \Bx G.
}

Here the vector derivative acts both to the left and right on $$F$$ and $$G$$. The specific action of this operator is
\label{eqn:fundamentalTheoremOfCalculus:240}
\begin{aligned}
F \boldpartial G
&=
(F \boldpartial) G
+
F (\boldpartial G) \\
&=
(\partial_i F) \Bx^i G
+
F \Bx^i (\partial_i G).
\end{aligned}

The fundamental theorem can be demonstrated by direct expansion. With the vector derivative $$\boldpartial$$ and its partials $$\partial_i$$ acting bidirectionally, that is

\label{eqn:fundamentalTheoremOfCalculus:260}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F d^k \Bx \Bx^i \partial_i G \\
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i + d^k \Bx \wedge \Bx^i } \partial_i G.
\end{aligned}

Both the reciprocal frame vectors and the curvilinear basis span the tangent space of the manifold, since we can write any reciprocal frame vector as a set of projections in the curvilinear basis

\label{eqn:fundamentalTheoremOfCalculus:280}
\Bx^i = \sum_j \lr{ \Bx^i \cdot \Bx^j } \Bx_j,

so $$\Bx^i \in \textrm{span} \setlr{ \Bx_j, j \in [1,k] }$$.
This means that $$d^k \Bx \wedge \Bx^i = 0$$, and

\label{eqn:fundamentalTheoremOfCalculus:300}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i } \partial_i G \\
&=
\sum_{i = 1}^{k}
\int_V
du^1 du^2 \cdots \widehat{ du^i} \cdots du^k
F \lr{
(-1)^{k-i}
\Bx_1 \wedge \Bx_2 \cdots \widehat{\Bx_i} \cdots \wedge \Bx_k } \partial_i G du^i \\
&=
\sum_{i = 1}^{k}
(-1)^{k-i}
\int_{u^1}
\int_{u^2}
\cdots
\int_{u^{i-1}}
\int_{u^{i+1}}
\cdots
\int_{u^k}
\evalrange{ \lr{
F d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k G
}
}{u^i = u^i(0)}{u^i(1)}.
\end{aligned}

Adding in the same notational sugar that we used in Stokes theorem, this proves the Fundamental theorem \ref{eqn:fundamentalTheoremOfCalculus:220} for “rectangular” parameterizations. Note that such a parameterization need not actually be rectangular.

## Example: Application to Maxwell’s equation

{example:fundamentalTheoremOfCalculus:1}

Maxwell’s equation is an example of a first order gradient equation

\label{eqn:fundamentalTheoremOfCalculus:320}
\grad F = \inv{\epsilon_0 c} J.

Integrating over a four-volume (where the vector derivative equals the gradient), and applying the Fundamental theorem, we have

\label{eqn:fundamentalTheoremOfCalculus:340}
\inv{\epsilon_0 c} \int d^4 x J = \oint d^3 x F.

Observe that the surface area element product with $$F$$ has both vector and trivector terms. This can be demonstrated by considering some examples

\label{eqn:fundamentalTheoremOfCalculus:360}
\begin{aligned}
\gamma_{012} \gamma_{01} &\propto \gamma_2 \\
\gamma_{012} \gamma_{23} &\propto \gamma_{013}.
\end{aligned}
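The grades of these products can be checked directly with a Dirac matrix representation of the $$\gamma_\mu$$ basis (a numpy sketch of my own, using the standard Dirac representation with signature $$(+,-,-,-)$$):

```python
import numpy as np

# standard Dirac representation gamma matrices, built from Pauli blocks
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2))
g0 = np.block([[np.eye(2), Z], [Z, -np.eye(2)]]).astype(complex)
g = [g0] + [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]

g012 = g[0] @ g[1] @ g[2]
# trivector times bivector contains a vector part: gamma_012 gamma_01 = gamma_2
assert np.allclose(g012 @ (g[0] @ g[1]), g[2])
# trivector times bivector can also land on a trivector: gamma_012 gamma_23 = -gamma_013
assert np.allclose(g012 @ (g[2] @ g[3]), -(g[0] @ g[1] @ g[3]))
```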

On the other hand, the four volume integral of $$J$$ has only trivector parts. This means that the integral can be split into a pair of same-grade equations

\label{eqn:fundamentalTheoremOfCalculus:380}
\begin{aligned}
\inv{\epsilon_0 c} \int d^4 x \cdot J &=
\oint \gpgradethree{ d^3 x F} \\
0 &=
\oint d^3 x \cdot F.
\end{aligned}

The first can be put into a slightly tidier form using a duality transformation
\label{eqn:fundamentalTheoremOfCalculus:400}
\begin{aligned}
\gpgradethree{ d^3 x F }
&=
-\gpgradethree{ d^3 x I^2 F} \\
&=
\gpgradethree{ I d^3 x I F} \\
&=
(I d^3 x) \wedge (I F).
\end{aligned}

Letting $$n \Abs{d^3 x} = I d^3 x$$, this gives

\label{eqn:fundamentalTheoremOfCalculus:420}
\oint \Abs{d^3 x} n \wedge (I F) = \inv{\epsilon_0 c} \int d^4 x \cdot J.

Note that this normal is normal to a three-volume subspace of the spacetime volume. For example, if one component of that spacetime surface area element is $$\gamma_{012} c dt dx dy$$, then the normal to that area component is $$\gamma_3$$.

A second set of duality transformations

\label{eqn:fundamentalTheoremOfCalculus:440}
\begin{aligned}
n \wedge (IF)
&=
\gpgradethree{ n I F } \\
&=
-\gpgradethree{ I n F } \\
&=
-\gpgradethree{ I (n \cdot F)} \\
&=
-I (n \cdot F),
\end{aligned}

and
\label{eqn:fundamentalTheoremOfCalculus:460}
\begin{aligned}
I d^4 x \cdot J
&=
\gpgradeone{ I d^4 x \cdot J } \\
&=
\gpgradeone{ I d^4 x J } \\
&=
\gpgradeone{ (I d^4 x) J } \\
&=
(I d^4 x) J,
\end{aligned}

can further tidy things up, leaving us with

\label{eqn:fundamentalTheoremOfCalculus:500}
\boxed{
\begin{aligned}
\oint \Abs{d^3 x} n \cdot F &= \inv{\epsilon_0 c} \int (I d^4 x) J \\
\oint d^3 x \cdot F &= 0.
\end{aligned}
}

The Fundamental theorem of calculus immediately provides relations between the Faraday bivector $$F$$ and the four-current $$J$$.

# References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[3] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21(1):221-231, 2011. URL https://arxiv.org/abs/0809.4526.