## Notes.

Due to limitations in the MathJax-Latex package, all the oriented integrals in this blog post should be interpreted as having a clockwise orientation. [See the PDF version of this post for more sophisticated formatting.]

## Guts.

Given a two dimensional generating vector space, there are two instances of the fundamental theorem for multivector integration
\label{eqn:unpackingFundamentalTheorem:20}
\int_S F d\Bx \lrpartial G = \evalbar{F G}{\Delta S},

and
\label{eqn:unpackingFundamentalTheorem:40}
\int_S F d^2\Bx \lrpartial G = \oint_{\partial S} F d\Bx G.

The first case is trivial. Given a parameterized curve $$x = x(u)$$, it just states
\label{eqn:unpackingFundamentalTheorem:60}
\int_{u(0)}^{u(1)} du \PD{u}{}\lr{FG} = F(u(1))G(u(1)) - F(u(0))G(u(0)),

for all multivectors $$F, G$$, regardless of the signature of the underlying space.

The surface integral is more interesting. Let’s first look at the area element for this surface integral, which is
\label{eqn:unpackingFundamentalTheorem:80}
d^2 \Bx = d\Bx_u \wedge d \Bx_v.

Geometrically, this has the area of the parallelogram spanned by $$d\Bx_u$$ and $$d\Bx_v$$, but weighted by the pseudoscalar of the space. This is explored algebraically in the following problem and illustrated in fig. 1.

fig. 1. 2D vector space and area element.

## Problem: Expansion of 2D area bivector.

Let $$\setlr{e_1, e_2}$$ be an orthonormal basis for a two dimensional space, with reciprocal frame $$\setlr{e^1, e^2}$$. Expand the area bivector $$d^2 \Bx$$ in coordinates relating the bivector to the Jacobian and the pseudoscalar.

With parameterization $$x = x(u,v) = x^\alpha e_\alpha = x_\alpha e^\alpha$$, we have
\label{eqn:unpackingFundamentalTheorem:120}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x^\alpha} e_\alpha } \wedge
\lr{ \PD{v}{x^\beta} e_\beta }
=
\PD{u}{x^\alpha}
\PD{v}{x^\beta}
e_\alpha \wedge
e_\beta
=
\PD{(u,v)}{(x^1,x^2)} e_1 e_2,

or
\label{eqn:unpackingFundamentalTheorem:160}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x_\alpha} e^\alpha } \wedge
\lr{ \PD{v}{x_\beta} e^\beta }
=
\PD{u}{x_\alpha}
\PD{v}{x_\beta}
e^\alpha \wedge
e^\beta
=
\PD{(u,v)}{(x_1,x_2)} e^1 e^2.

The upper and lower index pseudoscalars are related by
\label{eqn:unpackingFundamentalTheorem:180}
e^1 e^2 e_1 e_2 =
-e^1 e^2 e_2 e_1 =
-1,

so with $$I = e_1 e_2$$,
\label{eqn:unpackingFundamentalTheorem:200}
e^1 e^2 = -I^{-1},

leaving us with
\label{eqn:unpackingFundamentalTheorem:140}
d^2 \Bx
= \PD{(u,v)}{(x^1,x^2)} du dv\, I
= -\PD{(u,v)}{(x_1,x_2)} du dv\, I^{-1}.

We see that the area bivector is proportional to either the upper or lower index Jacobian and to the pseudoscalar for the space.
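This Jacobian relationship is easy to spot-check symbolically. Here is a quick sympy sanity check of \ref{eqn:unpackingFundamentalTheorem:120}, using a polar parameterization as a sample (the parameterization and sympy are my additions, not part of the derivation above):

```python
import sympy as sp

u, v = sp.symbols('u v', positive=True)

# Sample parameterization (my choice): polar coordinates.
x1 = u * sp.cos(v)
x2 = u * sp.sin(v)

# Tangent vector components on the e_1, e_2 basis.
xu = [sp.diff(x1, u), sp.diff(x2, u)]
xv = [sp.diff(x1, v), sp.diff(x2, v)]

# Coefficient of e_1 ^ e_2 in x_u ^ x_v (antisymmetrized product).
wedge = sp.simplify(xu[0] * xv[1] - xu[1] * xv[0])

# Jacobian d(x^1, x^2)/d(u, v) as a determinant.
J = sp.simplify(sp.Matrix([[xu[0], xv[0]], [xu[1], xv[1]]]).det())

assert sp.simplify(wedge - J) == 0  # x_u ^ x_v = J e_1 e_2
```

For this parameterization both the wedge coefficient and the Jacobian reduce to $$u$$, the familiar polar area factor.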

We may write the fundamental theorem for a 2D space as
\label{eqn:unpackingFundamentalTheorem:680}
\int_S du dv \, \PD{(u,v)}{(x^1,x^2)} F I \lrgrad G = \oint_{\partial S} F d\Bx G,

where we have dispensed with the vector derivative and use the gradient instead, since they are identical in a two parameter two dimensional space. Of course, unless we are using $$x^1, x^2$$ as our parameterization, we still want the curvilinear representation of the gradient $$\grad = \Bx^u \PDi{u}{} + \Bx^v \PDi{v}{}$$.

## Problem: Standard basis expansion of fundamental surface relation.

For a parameterization $$x = x^1 e_1 + x^2 e_2$$, where $$\setlr{ e_1, e_2 }$$ is a standard (orthogonal) basis, expand the fundamental theorem for surface integrals for the single sided $$F = 1$$ case. Consider functions $$G$$ of each grade (scalar, vector, bivector.)

From \ref{eqn:unpackingFundamentalTheorem:140} we see that the fundamental theorem takes the form
\label{eqn:unpackingFundamentalTheorem:220}
\int_S dx^1 dx^2\, F I \lrgrad G = \oint_{\partial S} F d\Bx G.

In a Euclidean space, the operator $$I \lrgrad$$ is a $$\pi/2$$ rotation of the gradient, and it has a similar rotated structure in any metric:
\label{eqn:unpackingFundamentalTheorem:240}
I \lrgrad
=
e_1 e_2 \lr{ e^1 \partial_1 + e^2 \partial_2 }
=
-e_2 \partial_1 + e_1 \partial_2.
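The Euclidean case of the basis products used here ($$e_1 e_2 e_1 = -e_2$$, $$e_1 e_2 e_2 = e_1$$) can be spot-checked with a minimal hand-rolled geometric product on coefficient tuples over the basis $$\setlr{1, e_1, e_2, e_{12}}$$. This toy multiplication table is my own sketch, not a library API:

```python
# Coefficients on the basis (1, e1, e2, e12), Euclidean metric.
def gp(x, y):
    """Geometric product, tabulated from e1 e1 = e2 e2 = 1, e1 e2 = e12."""
    return (
        x[0]*y[0] + x[1]*y[1] + x[2]*y[2] - x[3]*y[3],
        x[0]*y[1] + x[1]*y[0] - x[2]*y[3] + x[3]*y[2],
        x[0]*y[2] + x[2]*y[0] + x[1]*y[3] - x[3]*y[1],
        x[0]*y[3] + x[3]*y[0] + x[1]*y[2] - x[2]*y[1],
    )

e1, e2 = (0, 1, 0, 0), (0, 0, 1, 0)
I = gp(e1, e2)                      # pseudoscalar e1 e2

assert gp(I, e1) == (0, 0, -1, 0)   # e1 e2 e1 = -e2
assert gp(I, e2) == (0, 1, 0, 0)    # e1 e2 e2 = e1
```

These two products are exactly the $$-e_2 \partial_1 + e_1 \partial_2$$ structure of the rotated gradient.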

• $$F = 1$$ and $$G \in \bigwedge^0$$ or $$G \in \bigwedge^2$$. For $$F = 1$$ and scalar or bivector $$G$$ we have
\label{eqn:unpackingFundamentalTheorem:260}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } G = \oint_{\partial S} d\Bx G,

where, for $$x^1 \in [x^1(0),x^1(1)]$$ and $$x^2 \in [x^2(0),x^2(1)]$$, the RHS written explicitly is
\label{eqn:unpackingFundamentalTheorem:280}
\oint_{\partial S} d\Bx G
=
\int dx^1 e_1
\lr{ G(x^1, x^2(1)) - G(x^1, x^2(0)) }
- dx^2 e_2
\lr{ G(x^1(1),x^2) - G(x^1(0), x^2) }.

This is sketched in fig. 2. Since a 2D bivector $$G$$ can be written as $$G = I g$$, where $$g$$ is a scalar, we may write the pseudoscalar case as
\label{eqn:unpackingFundamentalTheorem:300}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } g = \oint_{\partial S} d\Bx g,

after right multiplying both sides with $$I^{-1}$$. Algebraically the scalar and pseudoscalar cases can be thought of as identical scalar relationships.
• $$F = 1, G \in \bigwedge^1$$. For $$F = 1$$ and vector $$G$$ the 2D fundamental theorem for surfaces can be split into scalar
\label{eqn:unpackingFundamentalTheorem:320}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G = \oint_{\partial S} d\Bx \cdot G,

and bivector relations
\label{eqn:unpackingFundamentalTheorem:340}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G = \oint_{\partial S} d\Bx \wedge G.

To expand \ref{eqn:unpackingFundamentalTheorem:320}, let
\label{eqn:unpackingFundamentalTheorem:360}
G = g_1 e^1 + g_2 e^2,

for which
\label{eqn:unpackingFundamentalTheorem:380}
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G
=
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot
\lr{ g_1 e^1 + g_2 e^2 }
=
\partial_2 g_1 - \partial_1 g_2,

and
\label{eqn:unpackingFundamentalTheorem:400}
d\Bx \cdot G
=
\lr{ dx^1 e_1 - dx^2 e_2 } \cdot \lr{ g_1 e^1 + g_2 e^2 }
=
dx^1 g_1 - dx^2 g_2,

so \ref{eqn:unpackingFundamentalTheorem:320} expands to
\label{eqn:unpackingFundamentalTheorem:500}
\int_S dx^1 dx^2\, \lr{ \partial_2 g_1 - \partial_1 g_2 }
=
\int
\evalbar{dx^1 g_1}{\Delta x^2} - \evalbar{ dx^2 g_2 }{\Delta x^1}.

This coordinate expansion illustrates how the pseudoscalar nature of the area element results in a duality transformation: we end up with a curl like operation on the LHS, despite the dot product used in the decomposition. That can also be seen directly for vector $$G$$, since
\label{eqn:unpackingFundamentalTheorem:560}
dA \lr{ I \grad } \cdot G
=
dA \gpgradezero{ I \grad G }
=
dA I \lr{ \grad \wedge G },

since the scalar selection of $$I \lr{ \grad \cdot G }$$ is zero.

In the grade-2 relation \ref{eqn:unpackingFundamentalTheorem:340}, we expect a pseudoscalar cancellation on both sides, leaving a scalar (divergence-like) relationship. This time, we use upper index coordinates for the vector $$G$$, letting
\label{eqn:unpackingFundamentalTheorem:440}
G = g^1 e_1 + g^2 e_2,

so
\label{eqn:unpackingFundamentalTheorem:460}
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G
=
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge
\lr{ g^1 e_1 + g^2 e_2 }
=
e_1 e_2 \lr{ \partial_1 g^1 + \partial_2 g^2 },

and
\label{eqn:unpackingFundamentalTheorem:480}
d\Bx \wedge G
=
\lr{ dx^1 e_1 - dx^2 e_2 } \wedge
\lr{ g^1 e_1 + g^2 e_2 }
=
e_1 e_2 \lr{ dx^1 g^2 + dx^2 g^1 }.

So \ref{eqn:unpackingFundamentalTheorem:340}, after multiplication of both sides by $$I^{-1}$$, is
\label{eqn:unpackingFundamentalTheorem:520}
\int_S dx^1 dx^2\,
\lr{ \partial_1 g^1 + \partial_2 g^2 }
=
\int
\evalbar{dx^1 g^2}{\Delta x^2} + \evalbar{dx^2 g^1 }{\Delta x^1}.

As before, we’ve implicitly performed a duality transformation, and end up with a divergence operation. That can be seen directly without coordinate expansion, by rewriting the wedge as a grade two selection, and expanding the gradient action on the vector $$G$$, as follows
\label{eqn:unpackingFundamentalTheorem:580}
dA \lr{ I \grad } \wedge G
=
dA \gpgrade{ I \grad G }{2}
=
dA I \lr{ \grad \cdot G },

since $$I \lr{ \grad \wedge G }$$ has only a scalar component.
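Both coordinate identities, the curl form \ref{eqn:unpackingFundamentalTheorem:500} and the divergence form \ref{eqn:unpackingFundamentalTheorem:520}, can be verified symbolically over the unit square. The test functions below are arbitrary smooth choices of mine (index placement is cosmetic here, since the check is purely a calculus identity):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Arbitrary smooth test components (my choices) on the unit square.
g1, g2 = sp.sin(x1) * x2**2, sp.exp(x1) * sp.cos(x2)

# Curl form: double integral of (d2 g1 - d1 g2) vs. boundary differences.
curl_lhs = sp.integrate(sp.diff(g1, x2) - sp.diff(g2, x1),
                        (x1, 0, 1), (x2, 0, 1))
curl_rhs = (sp.integrate(g1.subs(x2, 1) - g1.subs(x2, 0), (x1, 0, 1))
            - sp.integrate(g2.subs(x1, 1) - g2.subs(x1, 0), (x2, 0, 1)))
assert sp.simplify(curl_lhs - curl_rhs) == 0

# Divergence form: double integral of (d1 g1 + d2 g2) vs. boundary differences.
div_lhs = sp.integrate(sp.diff(g1, x1) + sp.diff(g2, x2),
                       (x1, 0, 1), (x2, 0, 1))
div_rhs = (sp.integrate(g2.subs(x2, 1) - g2.subs(x2, 0), (x1, 0, 1))
           + sp.integrate(g1.subs(x1, 1) - g1.subs(x1, 0), (x2, 0, 1)))
assert sp.simplify(div_lhs - div_rhs) == 0
```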

fig. 2. Line integral around rectangular boundary.

## Theorem 1.1: Green’s theorem [1].

Let $$S$$ be a Jordan region with a piecewise-smooth boundary $$C$$. If $$P, Q$$ are continuously differentiable on an open set that contains $$S$$, then
\begin{equation*}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} } = \oint P dx + Q dy.
\end{equation*}

## Problem: Relationship to Green’s theorem.

If the space is Euclidean, show that \ref{eqn:unpackingFundamentalTheorem:500} and \ref{eqn:unpackingFundamentalTheorem:520} are both instances of Green’s theorem with suitable choices of $$P$$ and $$Q$$.

I will omit the subtleties related to general regions and consider just the case of an infinitesimal square region.

### Start proof:

Let’s start with \ref{eqn:unpackingFundamentalTheorem:500}, with $$g_1 = P$$ and $$g_2 = Q$$, and $$x^1 = x, x^2 = y$$, for which the LHS is
\label{eqn:unpackingFundamentalTheorem:600}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }.

On the RHS we have
\label{eqn:unpackingFundamentalTheorem:620}
\int \evalbar{dx P}{\Delta y} - \evalbar{ dy Q }{\Delta x}
=
\int dx \lr{ P(x, y_1) - P(x, y_0) } - \int dy \lr{ Q(x_1, y) - Q(x_0, y) }.

This pair of integrals is plotted in fig. 3, from which we see that \ref{eqn:unpackingFundamentalTheorem:620} can be expressed as the line integral, leaving us with
\label{eqn:unpackingFundamentalTheorem:640}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }
=
\oint dx P + dy Q,

which is Green’s theorem over the infinitesimal square integration region.

For the equivalence of \ref{eqn:unpackingFundamentalTheorem:520} to Green’s theorem, let $$g^2 = P$$, and $$g^1 = -Q$$. Plugging into the LHS, we find the Green’s theorem integrand. On the RHS, the integrand expands to
\label{eqn:unpackingFundamentalTheorem:660}
\evalbar{dx g^2}{\Delta y} + \evalbar{dy g^1 }{\Delta x}
=
dx \lr{ P(x,y_1) - P(x, y_0)}
+
dy \lr{ -Q(x_1, y) + Q(x_0, y)},

which is exactly what we found in \ref{eqn:unpackingFundamentalTheorem:620}.

### End proof.

fig. 3. Path for Green’s theorem.

We may also relate multivector gradient integrals in 2D to the normal integral around the boundary of the bounding curve. That relationship is as follows.

## Theorem 1.2: 2D gradient integrals.

\begin{equation*}
\begin{aligned}
\int J du dv \rgrad G &= \oint I^{-1} d\Bx G = \int J \lr{ \Bx^v du + \Bx^u dv } G \\
\int J du dv F \lgrad &= \oint F I^{-1} d\Bx = \int J F \lr{ \Bx^v du + \Bx^u dv },
\end{aligned}
\end{equation*}
where $$J = \partial(x^1, x^2)/\partial(u,v)$$ is the Jacobian of the parameterization $$x = x(u,v)$$. In terms of the coordinates $$x^1, x^2$$, this reduces to
\begin{equation*}
\begin{aligned}
\int dx^1 dx^2 \rgrad G &= \oint I^{-1} d\Bx G = \int \lr{ e^2 dx^1 + e^1 dx^2 } G \\
\int dx^1 dx^2 F \lgrad &= \oint F I^{-1} d\Bx = \int F \lr{ e^2 dx^1 + e^1 dx^2 }.
\end{aligned}
\end{equation*}
The vector $$I^{-1} d\Bx$$ is orthogonal to the tangent vector along the boundary, and for Euclidean spaces it can be identified as the outwards normal.

### Start proof:

Respectively setting $$F = 1$$, and $$G = 1$$ in \ref{eqn:unpackingFundamentalTheorem:680}, we have
\label{eqn:unpackingFundamentalTheorem:940}
\int I^{-1} d^2 \Bx \rgrad G = \oint I^{-1} d\Bx G,

and
\label{eqn:unpackingFundamentalTheorem:960}
\int F d^2 \Bx \lgrad I^{-1} = \oint F d\Bx I^{-1}.

Starting with \ref{eqn:unpackingFundamentalTheorem:940} we find
\label{eqn:unpackingFundamentalTheorem:700}
\int I^{-1} J du dv I \rgrad G = \oint I^{-1} d\Bx G,

to find $$\int dx^1 dx^2 \rgrad G = \oint I^{-1} d\Bx G$$, as desired. In terms of a parameterization $$x = x(u,v)$$, the pseudoscalar for the space is
\label{eqn:unpackingFundamentalTheorem:720}
I = \frac{\Bx_u \wedge \Bx_v}{J},

so
\label{eqn:unpackingFundamentalTheorem:740}
I^{-1} = \frac{J}{\Bx_u \wedge \Bx_v}.

Also note that $$\lr{\Bx_u \wedge \Bx_v}^{-1} = \Bx^v \wedge \Bx^u$$, so
\label{eqn:unpackingFundamentalTheorem:760}
I^{-1} = J \lr{ \Bx^v \wedge \Bx^u },

and
\label{eqn:unpackingFundamentalTheorem:780}
I^{-1} d\Bx
= I^{-1} \cdot d\Bx
= J \lr{ \Bx^v \wedge \Bx^u } \cdot \lr{ \Bx_u du - \Bx_v dv }
= J \lr{ \Bx^v du + \Bx^u dv },

so the right acting gradient integral is
\label{eqn:unpackingFundamentalTheorem:800}
\int J du dv \grad G =
\int
\evalbar{J \Bx^v G}{\Delta v} du + \evalbar{J \Bx^u G}{\Delta u} dv,

which we write in abbreviated form as $$\int J \lr{ \Bx^v du + \Bx^u dv} G$$.

For the $$G = 1$$ case, from \ref{eqn:unpackingFundamentalTheorem:960} we find
\label{eqn:unpackingFundamentalTheorem:820}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1}.

However, in a 2D space, regardless of metric, we have $$I a = -a I$$ for any vector $$a$$ (i.e. $$\grad$$ or $$d\Bx$$), so we may commute the outer pseudoscalars in
\label{eqn:unpackingFundamentalTheorem:840}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1},

so
\label{eqn:unpackingFundamentalTheorem:850}
-\int J du dv F I I^{-1} \lgrad = -\oint F I^{-1} d\Bx.

After cancelling the negative sign on both sides, we have the claimed result.

To see that $$I a$$ is normal to $$a$$ for any vector $$a$$, we can compute the dot product
\label{eqn:unpackingFundamentalTheorem:860}
\lr{ I a } \cdot a
=
\gpgradezero{ I a a }
=
a^2 \gpgradezero{ I }
= 0,

since the scalar selection of a bivector is zero. Since $$I^{-1} = \pm I$$, the same argument shows that $$I^{-1} d\Bx$$ must be orthogonal to $$d\Bx$$.

### End proof.
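This orthogonality argument can be spot-checked across signatures with a small hand-rolled 2D geometric product, parameterized by the metric signs (a toy sketch of my own, not a library):

```python
import sympy as sp

def gp(x, y, g1, g2):
    """Geometric product on coefficients (1, e1, e2, e12), with e1^2 = g1, e2^2 = g2."""
    return (
        x[0]*y[0] + g1*x[1]*y[1] + g2*x[2]*y[2] - g1*g2*x[3]*y[3],
        x[0]*y[1] + x[1]*y[0] - g2*x[2]*y[3] + g2*x[3]*y[2],
        x[0]*y[2] + x[2]*y[0] + g1*x[1]*y[3] - g1*x[3]*y[1],
        x[0]*y[3] + x[3]*y[0] + x[1]*y[2] - x[2]*y[1],
    )

a1, a2 = sp.symbols('a1 a2')
a = (0, a1, a2, 0)       # arbitrary vector
I = (0, 0, 0, 1)         # pseudoscalar e1 e2

# Euclidean, mixed, and spacelike signatures.
for g1, g2 in [(1, 1), (1, -1), (-1, -1)]:
    Ia = gp(I, a, g1, g2)
    # (I a) . a is the scalar grade of (I a) a, and vanishes identically.
    assert sp.expand(gp(Ia, a, g1, g2)[0]) == 0
```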

Let’s look at the geometry of the normal $$I^{-1} d\Bx$$ in a couple of 2D vector spaces. We use an integration volume of a unit square to simplify the boundary term expressions.

• Euclidean: With a parameterization $$x(u,v) = u\Be_1 + v \Be_2$$, and Euclidean basis vectors $$(\Be_1)^2 = (\Be_2)^2 = 1$$, the fundamental theorem integrated over the rectangle $$[x_0,x_1] \times [y_0,y_1]$$ is
\label{eqn:unpackingFundamentalTheorem:880}
\int dx dy \grad G =
\int
\Be_2 \lr{ G(x,y_1) - G(x,y_0) } dx +
\Be_1 \lr{ G(x_1,y) - G(x_0,y) } dy.

Each of the terms in the integrand above is illustrated in fig. 4, and we see that this is a path integral weighted by the outwards normal.

fig. 4. Outwards oriented normal for Euclidean space.

• Spacetime: Let $$x(u,v) = u \gamma_0 + v \gamma_1$$, where $$(\gamma_0)^2 = -(\gamma_1)^2 = 1$$. With $$u = t, v = x$$, the gradient integral over a $$[t_0,t_1] \times [x_0,x_1]$$ of spacetime is
\label{eqn:unpackingFundamentalTheorem:900}
\begin{aligned}
\int dt dx\, \grad G
&=
\int
\gamma^1 dt \lr{ G(t, x_1) - G(t, x_0) }
+
\gamma^0 dx \lr{ G(t_1, x) - G(t_0, x) } \\
&=
\int
\gamma_1 dt \lr{ -G(t, x_1) + G(t, x_0) }
+
\gamma_0 dx \lr{ G(t_1, x) - G(t_0, x) }
.
\end{aligned}

With $$t$$ plotted along the horizontal axis, and $$x$$ along the vertical, each of the terms of this integrand is illustrated graphically in fig. 5. For this mixed signature space, there is no longer any good geometrical characterization of the normal.

fig. 5. Orientation of the boundary normal for a spacetime basis.

• Spacelike:
Let $$x(u,v) = u \gamma_1 + v \gamma_2$$, where $$(\gamma_1)^2 = (\gamma_2)^2 = -1$$. With $$u = x, v = y$$, the gradient integral over a $$[x_0,x_1] \times [y_0,y_1]$$ of this space is
\label{eqn:unpackingFundamentalTheorem:920}
\begin{aligned}
\int dx dy\, \grad G
&=
\int
\gamma^2 dx \lr{ G(x, y_1) - G(x, y_0) }
+
\gamma^1 dy \lr{ G(x_1, y) - G(x_0, y) } \\
&=
\int
\gamma_2 dx \lr{ -G(x, y_1) + G(x, y_0) }
+
\gamma_1 dy \lr{ -G(x_1, y) + G(x_0, y) }
.
\end{aligned}

Referring to fig. 6, where the elements of the integrand are illustrated, we see that the normal $$I^{-1} d\Bx$$ for the boundary of this region can be characterized as inwards.

fig. 6. Inwards oriented normal for a Dirac spacelike basis.

# References

[1] S.L. Salas and E. Hille. Calculus: one and several variables. Wiley New York, 1990.

## Fundamental theorem of geometric calculus for line integrals (relativistic.)

[This post is best viewed in PDF form, due to latex elements that I could not format with wordpress mathjax.]

Background for this particular post can be found in

## Motivation.

I’ve been slowly working my way towards a statement of the fundamental theorem of integral calculus, where the functions being integrated are elements of the Dirac algebra (space time multivectors in the geometric algebra parlance.)

This is interesting because we want to be able to do line, surface, 3-volume and 4-volume space time integrals. We have many $$\mathbb{R}^3$$ integral theorems
\label{eqn:fundamentalTheoremOfGC:40a}
\int_A^B d\Bl \cdot \spacegrad f = f(B) - f(A),

\label{eqn:fundamentalTheoremOfGC:60a}
\int_S dA\, \ncap \cross \spacegrad f = \int_{\partial S} d\Bx\, f,

\label{eqn:fundamentalTheoremOfGC:80a}
\int_S dA\, \ncap \cdot \lr{ \spacegrad \cross \Bf} = \int_{\partial S} d\Bx \cdot \Bf,

\label{eqn:fundamentalTheoremOfGC:100a}
\int_S dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }
=
\int_{\partial S} P dx + Q dy,

\label{eqn:fundamentalTheoremOfGC:120a}
\int_V dV\, \spacegrad f = \int_{\partial V} dA\, \ncap f,

\label{eqn:fundamentalTheoremOfGC:140a}
\int_V dV\, \spacegrad \cross \Bf = \int_{\partial V} dA\, \ncap \cross \Bf,

\label{eqn:fundamentalTheoremOfGC:160a}
\int_V dV\, \spacegrad \cdot \Bf = \int_{\partial V} dA\, \ncap \cdot \Bf,

and want to know how to generalize these to four dimensions and also make sure that we are handling the relativistic mixed signature correctly. If our starting point were the mess of equations above, we’d be in trouble, since it is not obvious how these generalize. All the theorems with unit normals have to be handled completely differently in four dimensions, since we don’t have a unique normal to any given spacetime plane.
What comes to our rescue is the Fundamental Theorem of Geometric Calculus (FTGC), which has the form
\label{eqn:fundamentalTheoremOfGC:40}
\int F d^n \Bx\, \lrpartial G = \oint F d^{n-1} \Bx\, G,

where $$F,G$$ are multivectors functions (i.e. sums of products of vectors.) We’ve seen ([2], [1]) that all the identities above are special cases of the fundamental theorem.

Do we need any special care to state the FTGC correctly for our relativistic case? It turns out that the answer is no! Tangent and reciprocal frame vectors do all the heavy lifting, and we can use the fundamental theorem as is, even in our mixed signature space. The only real change that we need to make is to use spacetime gradient and vector derivative operators instead of their spatial equivalents. We will see how this works below. Note that instead of starting with \ref{eqn:fundamentalTheoremOfGC:40} directly, I will attempt to build up to that point in a progressive fashion that hopefully does not require the reader to make too many unjustified mental leaps.

## Multivector line integrals.

We want to define multivector line integrals to start with. Recall that in $$\mathbb{R}^3$$ we would say that for scalar functions $$f$$, the integral
\label{eqn:fundamentalTheoremOfGC:180b}
\int d\Bx\, f = \int f d\Bx,

is a line integral. Also, for vector functions $$\Bf$$ we call
\label{eqn:fundamentalTheoremOfGC:200}
\int d\Bx \cdot \Bf = \inv{2} \int \lr{ d\Bx\, \Bf + \Bf d\Bx },

a line integral. In order to generalize line integrals to multivector functions, we will allow our multivector functions to be placed on either or both sides of the differential.

## Definition 1.1: Line integral.

Given a single variable parameterization $$x = x(u)$$, we write $$d^1\Bx = \Bx_u du$$, and call
\label{eqn:fundamentalTheoremOfGC:220a}
\int F d^1\Bx\, G,

a line integral, where $$F,G$$ are arbitrary multivector functions.

We must be careful not to reorder any of the factors in the integrand, since the differential may not commute with either $$F$$ or $$G$$. Here is a simple example where the integrand has a product of a vector and differential.

## Problem: Circular parameterization.

Given a circular parameterization $$x(\theta) = \gamma_1 e^{-i\theta}$$, where $$i = \gamma_1 \gamma_2$$ is the unit bivector for the $$x,y$$ plane, compute the line integral
\label{eqn:fundamentalTheoremOfGC:100}
\int_0^{\pi/4} F(\theta)\, d^1 \Bx\, G(\theta),

where $$F(\theta) = \Bx^\theta + \gamma_3 + \gamma_1 \gamma_0$$ is a multivector valued function, and $$G(\theta) = \gamma_0$$ is vector valued.

The tangent vector for the curve is
\label{eqn:fundamentalTheoremOfGC:60}
\Bx_\theta
= -\gamma_1 \gamma_1 \gamma_2 e^{-i\theta}
= \gamma_2 e^{-i\theta},

with reciprocal vector $$\Bx^\theta = e^{i \theta} \gamma^2$$. The differential element is $$d^1 \Bx = \gamma_2 e^{-i\theta} d\theta$$, so the integrand is
\label{eqn:fundamentalTheoremOfGC:80}
\begin{aligned}
\int_0^{\pi/4} \lr{ \Bx^\theta + \gamma_3 + \gamma_1 \gamma_0 } d^1 \Bx\, \gamma_0
&=
\int_0^{\pi/4} \lr{ e^{i\theta} \gamma^2 + \gamma_3 + \gamma_1 \gamma_0 } \gamma_2 e^{-i\theta} d\theta\, \gamma_0 \\
&=
\frac{\pi}{4} \gamma_0 + \lr{ \gamma_{32} + \gamma_{102} } \inv{-i} \lr{ e^{-i\pi/4} - 1 } \gamma_0 \\
&=
\frac{\pi}{4} \gamma_0 + \inv{\sqrt{2}} \lr{ \gamma_{32} + \gamma_{102} } \gamma_{120} \lr{ 1 - \gamma_{12} } \\
&=
\frac{\pi}{4} \gamma_0 + \inv{\sqrt{2}} \lr{ \gamma_{310} + 1 } \lr{ 1 - \gamma_{12} }.
\end{aligned}

Observe how care is required not to reorder any terms. This particular end result is a multivector with scalar, vector, bivector, and trivector grades, but no pseudoscalar component. The grades in the end result depend on both the function in the integrand and on the path. For example, had we integrated all the way around the circle, the end result would have been the vector $$2 \pi \gamma_0$$ (i.e. a $$\gamma_0$$ weighted unit circle circumference), as all the other grades would have been killed by the complex exponential integrated over a full period.
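The full-circle claim can be checked numerically with a matrix representation of the Dirac algebra. The sketch below uses the standard Dirac representation of the $$\gamma_\mu$$; the matrices, numpy, and the midpoint-rule discretization are my additions, and for this trigonometric integrand a uniform midpoint rule over a full period is exact up to rounding:

```python
import numpy as np

# Standard Dirac-representation gamma matrices, metric (+,-,-,-).
g0 = np.diag([1, 1, -1, -1]).astype(complex)
g1 = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]], complex)
g2 = np.array([[0, 0, 0, -1j], [0, 0, 1j, 0], [0, 1j, 0, 0], [-1j, 0, 0, 0]], complex)
g3 = np.array([[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]], complex)

i = g1 @ g2                        # unit bivector for the x,y plane
assert np.allclose(i @ i, -np.eye(4))

def rot(theta):
    """e^{i theta} = cos(theta) + i sin(theta), with the bivector i."""
    return np.cos(theta) * np.eye(4) + np.sin(theta) * i

# Midpoint rule over the full circle.
N = 512
dtheta = 2 * np.pi / N
total = np.zeros((4, 4), complex)
for k in range(N):
    theta = (k + 0.5) * dtheta
    F = rot(theta) @ (-g2) + g3 + g1 @ g0   # x^theta + gamma_3 + gamma_1 gamma_0
    dx = g2 @ rot(-theta) * dtheta          # d^1 x = gamma_2 e^{-i theta} dtheta
    total += F @ dx @ g0                    # right factor G = gamma_0

# Only the vector 2 pi gamma_0 survives a full period.
assert np.allclose(total, 2 * np.pi * g0)
```

Here $$\gamma^2 = -\gamma_2$$ follows from the $$(\gamma_2)^2 = -1$$ metric, and the scalar grade corresponds to a multiple of the identity matrix in this representation.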

## Problem: Line integral for boosted time direction vector.

Let $$x = e^{\vcap \alpha/2} \gamma_0 e^{-\vcap \alpha/2}$$ represent the spacetime curve of all the boosts of $$\gamma_0$$ along a specific velocity direction vector, where $$\vcap = (v \wedge \gamma_0)/\Norm{v \wedge \gamma_0}$$ is a unit spatial bivector for any constant vector $$v$$. Compute the line integral
\label{eqn:fundamentalTheoremOfGC:240}
\int x\, d^1 \Bx.

Observe that $$\vcap$$ and $$\gamma_0$$ anticommute, so we may write our boost as a one sided exponential
\label{eqn:fundamentalTheoremOfGC:260}
x(\alpha) = \gamma_0 e^{-\vcap \alpha} = e^{\vcap \alpha} \gamma_0 = \lr{ \cosh\alpha + \vcap \sinh\alpha } \gamma_0.

The tangent vector is just
\label{eqn:fundamentalTheoremOfGC:280}
\Bx_\alpha = \PD{\alpha}{x} = e^{\vcap\alpha} \vcap \gamma_0.

Let’s get a bit of intuition about the nature of this vector. Its square is
\label{eqn:fundamentalTheoremOfGC:300}
\begin{aligned}
\Bx_\alpha^2
&=
e^{\vcap\alpha} \vcap \gamma_0
e^{\vcap\alpha} \vcap \gamma_0 \\
&=
-e^{\vcap\alpha} \vcap e^{-\vcap\alpha} \vcap (\gamma_0)^2 \\
&=
-1,
\end{aligned}

so we see that the tangent vector is a spacelike unit vector. As the vector representing points on the curve is necessarily timelike (due to Lorentz invariance), these two must be orthogonal at all points. Let’s confirm this algebraically
\label{eqn:fundamentalTheoremOfGC:320}
\begin{aligned}
x \cdot \Bx_\alpha
&=
\gpgradezero{ e^{\vcap \alpha} \gamma_0 e^{\vcap \alpha} \vcap \gamma_0 } \\
&=
\gpgradezero{ e^{-\vcap \alpha} e^{\vcap \alpha} \vcap (\gamma_0)^2 } \\
&=
\gpgradezero{ \vcap } \\
&= 0.
\end{aligned}

Here we used $$e^{\vcap \alpha} \gamma_0 = \gamma_0 e^{-\vcap \alpha}$$, and $$\gpgradezero{A B} = \gpgradezero{B A}$$. Geometrically, we have the curious fact that the direction vectors to points on the curve are perpendicular (with respect to our relativistic dot product) to the tangent vectors on the curve, as illustrated in fig. 1.

fig. 1. Tangent perpendicularity in mixed metric.
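These properties are easy to confirm numerically, again using the standard Dirac representation, and choosing (hypothetically, for concreteness) $$v$$ along $$\gamma_1$$ so that $$\vcap = \gamma_1 \gamma_0$$:

```python
import numpy as np

# Standard Dirac-representation gamma_0 and gamma_1, metric (+,-,-,-).
g0 = np.diag([1, 1, -1, -1]).astype(complex)
g1 = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]], complex)
I4 = np.eye(4)

vcap = g1 @ g0                      # unit spatial bivector, v chosen along gamma_1
assert np.allclose(vcap @ vcap, I4)

for alpha in (0.0, 0.5, 1.3, -2.0):
    boost = np.cosh(alpha) * I4 + np.sinh(alpha) * vcap   # e^{vcap alpha}
    x = boost @ g0                  # point on the curve
    xa = boost @ vcap @ g0          # tangent vector x_alpha

    assert np.allclose(x @ x, I4)       # x^2 = 1 (timelike unit)
    assert np.allclose(xa @ xa, -I4)    # x_alpha^2 = -1 (spacelike unit)
    # Scalar grade of x x_alpha is trace/4, and it vanishes: x . x_alpha = 0.
    assert abs(np.trace(x @ xa)) / 4 < 1e-12
```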

### Perfect differentials.

Having seen a couple examples of multivector line integrals, let’s now move on to figure out the structure of a line integral that has a “perfect” differential integrand. We can take a hint from the $$\mathbb{R}^3$$ vector result that we already know, namely
\label{eqn:fundamentalTheoremOfGC:120}
\int_A^B d\Bl \cdot \spacegrad f = f(B) - f(A).

It seems reasonable to guess that the relativistic generalization of this is
\label{eqn:fundamentalTheoremOfGC:140}
\int_A^B dx \cdot \grad f = f(B) - f(A).

Let’s check that, by expanding in coordinates
\label{eqn:fundamentalTheoremOfGC:160}
\begin{aligned}
\int_A^B dx \cdot \grad f
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \partial_\mu f \\
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \PD{x^\mu}{f} \\
&=
\int_A^B d\tau \frac{df}{d\tau} \\
&=
f(B) - f(A).
\end{aligned}
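This chain of equalities can be spot-checked with sympy for an arbitrary scalar field and worldline (both my own choices, purely for illustration):

```python
import sympy as sp

tau = sp.symbols('tau')
t, x, y, z = sp.symbols('t x y z')

# Arbitrary scalar field and worldline (my choices).
f = t**2 * x + y - z
curve = {t: tau, x: tau**2, y: sp.cos(tau), z: 2 * tau}

# The integrand (dx^mu/dtau) partial_mu f, evaluated on the curve.
integrand = sum(sp.diff(expr, tau) * sp.diff(f, c).subs(curve)
                for c, expr in curve.items())

A, B = 0, 2
lhs = sp.integrate(integrand, (tau, A, B))
rhs = f.subs(curve).subs(tau, B) - f.subs(curve).subs(tau, A)
assert sp.simplify(lhs - rhs) == 0  # perfect differential: integral = f(B) - f(A)
```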

If we drop the dot product, will we have such a nice result? Let’s see:
\label{eqn:fundamentalTheoremOfGC:180}
\begin{aligned}
\int_A^B dx\, \grad f
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \gamma_\mu \gamma^\nu \partial_\nu f \\
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \PD{x^\mu}{f}
+
\int_A^B
d\tau
\sum_{\mu \ne \nu} \gamma_\mu \gamma^\nu
\frac{dx^\mu}{d\tau} \PD{x^\nu}{f}.
\end{aligned}

The scalar component of this integrand is a perfect differential, but the bivector part is a complete mess that we have no hope of integrating in general. It happens that if we consider one of the simplest parameterization examples, we can get a strong hint of how to generalize the differential operator to one that ends up providing a perfect differential. In particular, let’s integrate over a linear path, such as $$x(\tau) = \tau \gamma_0$$. For this path, we have
\label{eqn:fundamentalTheoremOfGC:200a}
\begin{aligned}
\int_A^B dx\, \grad f
&=
\int_A^B \gamma_0 d\tau \lr{
\gamma^0 \partial_0 +
\gamma^1 \partial_1 +
\gamma^2 \partial_2 +
\gamma^3 \partial_3 } f \\
&=
\int_A^B d\tau \lr{
\PD{\tau}{f} +
\gamma_0 \gamma^1 \PD{x^1}{f} +
\gamma_0 \gamma^2 \PD{x^2}{f} +
\gamma_0 \gamma^3 \PD{x^3}{f}
}.
\end{aligned}

Just because the path does not have any $$x^1, x^2, x^3$$ component dependencies does not mean that these last three partials are necessarily zero. For example $$f = f(x(\tau)) = \lr{ x^0 }^2 \gamma_0 + x^1 \gamma_1$$ will have a non-zero contribution from the $$\partial_1$$ operator. In that particular case, we can easily integrate $$f$$, but we have to know the specifics of the function to do the integral. However, if we had a differential operator that did not include any component off the integration path, we would have a perfect differential. That is, if we were to replace the gradient with the projection of the gradient onto the tangent space, we would have a perfect differential. We see that the dot product in \ref{eqn:fundamentalTheoremOfGC:140} serves the same function, as it rejects any component of the gradient that does not lie on the tangent space.

## Definition 1.2: Vector derivative.

Given a spacetime manifold parameterized by $$x = x(u^0, \cdots u^{N-1})$$, with tangent vectors $$\Bx_\mu = \PDi{u^\mu}{x}$$, and reciprocal vectors $$\Bx^\mu \in \textrm{Span}\setlr{\Bx_\nu}$$, such that $$\Bx^\mu \cdot \Bx_\nu = {\delta^\mu}_\nu$$, the vector derivative is defined as
\label{eqn:fundamentalTheoremOfGC:240a}
\partial = \sum_{\mu = 0}^{N-1} \Bx^\mu \PD{u^\mu}{}.

Observe that if this is a full parameterization of the space ($$N = 4$$), then the vector derivative is identical to the gradient. The vector derivative is the projection of the gradient onto the tangent space at the point of evaluation. Furthermore, we designate $$\lrpartial$$ as the vector derivative allowed to act bidirectionally, as follows
\label{eqn:fundamentalTheoremOfGC:260a}
R \lrpartial S
=
R \Bx^\mu \PD{u^\mu}{S}
+
\PD{u^\mu}{R} \Bx^\mu S,

where $$R, S$$ are multivectors, and summation convention is implied. In this bidirectional action, the vector factors of the vector derivative must stay in place (as they do not necessarily commute with $$R,S$$), but the derivative operators apply in a chain rule like fashion to both functions.

Noting that $$\Bx_u \cdot \grad = \Bx_u \cdot \partial$$, we may rewrite the scalar line integral identity \ref{eqn:fundamentalTheoremOfGC:140} as
\label{eqn:fundamentalTheoremOfGC:220}
\int_A^B dx \cdot \partial f = f(B) - f(A).

However, as our example hinted at, the fundamental theorem for line integrals has a multivector generalization that does not rely on a dot product to do the tangent space filtering, and is more powerful. That generalization has the following form.

## Theorem 1.1: Fundamental theorem for line integrals.

Given multivector functions $$F, G$$, and a single parameter curve $$x(u)$$ with line element $$d^1 \Bx = \Bx_u du$$, then
\label{eqn:fundamentalTheoremOfGC:280a}
\int_A^B F d^1\Bx \lrpartial G = F(B) G(B) - F(A) G(A).

### Start proof:

Writing out the integrand explicitly, we find
\label{eqn:fundamentalTheoremOfGC:340}
\int_A^B F d^1\Bx \lrpartial G
=
\int_A^B \lr{
\PD{\alpha}{F} d\alpha\, \Bx_\alpha \Bx^\alpha G
+
F d\alpha\, \Bx_\alpha \Bx^\alpha \PD{\alpha}{G }
}

However for a single parameter curve, we have $$\Bx^\alpha = 1/\Bx_\alpha$$, so we are left with
\label{eqn:fundamentalTheoremOfGC:360}
\begin{aligned}
\int_A^B F d^1\Bx \lrpartial G
&=
\int_A^B d\alpha\, \PD{\alpha}{(F G)} \\
&=
\evalbar{F G}{B}
-
\evalbar{F G}{A}.
\end{aligned}

### End proof.

## More to come.

In the next installment we will explore surface integrals in spacetime, and the generalization of the fundamental theorem to multivector space time integrals.

# References

[1] Peeter Joot. Geometric Algebra for Electrical Engineers. Kindle Direct Publishing, 2019.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

## Generalizing Ampere’s law using geometric algebra.

The question I’d like to explore in this post is how Ampere’s law, the relationship between the line integral of the magnetic field and the enclosed current,
\label{eqn:flux:20}
\oint_{\partial A} d\Bx \cdot \BH = -\int_A \ncap \cdot \BJ,

generalizes to geometric algebra, where Maxwell’s equation for a statics configuration (all time derivatives zero) is
\label{eqn:flux:40}
\spacegrad F = J,
where the multivector fields and currents are
\label{eqn:flux:60}
\begin{aligned}
F &= \BE + I \eta \BH \\
J &= \eta \lr{ c \rho - \BJ } + I \lr{ c \rho_\txtm - \BM }.
\end{aligned}

Here the (fictitious) magnetic charge and current densities, which can be useful in antenna theory, have been included in the multivector current for generality.

My presumption is that it should be possible to utilize the fundamental theorem of geometric calculus for expressing the integral over an oriented surface to its boundary, but applied directly to Maxwell’s equation. That integral theorem has the form
\label{eqn:flux:80}
\int_A d^2 \Bx \boldpartial F = \oint_{\partial A} d\Bx F,

where $$d^2 \Bx = d\Ba \wedge d\Bb$$ is a two parameter bivector valued surface, and $$\boldpartial$$ is the vector derivative, the projection of the gradient onto the tangent space. I won’t try to explain all of geometric calculus here, and refer the interested reader to [1], which is an excellent reference on geometric calculus and integration theory.

The gotcha is that we actually want a surface integral with $$\spacegrad F$$. We can split the gradient into the vector derivative and a normal component
\label{eqn:flux:160}
\spacegrad = \boldpartial + \ncap \lr{ \ncap \cdot \spacegrad },
so
\label{eqn:flux:100}
\int_A d^2 \Bx \spacegrad F
=
\int_A d^2 \Bx \boldpartial F
+
\int_A d^2 \Bx \ncap \lr{ \ncap \cdot \spacegrad } F,

so
\label{eqn:flux:120}
\begin{aligned}
\oint_{\partial A} d\Bx F
&=
\int_A d^2 \Bx \lr{ J - \ncap \lr{ \ncap \cdot \spacegrad } F } \\
&=
\int_A dA \lr{ I \ncap J - \lr{ \ncap \cdot \spacegrad } I F }.
\end{aligned}

This is not nearly as nice as Ampere’s law \ref{eqn:flux:20}, where the field and current contributions are cleanly separated. The $$d\Bx F$$ product has all possible grades, as does the $$d^2 \Bx J$$ product (in general). Observe however, that the normal term on the right has only grades 1,2, so we can split our line integral relations into pairs with and without grade 1,2 components
\label{eqn:flux:140}
\begin{aligned}
\oint_{\partial A} \gpgrade{ d\Bx F }{0,3}
&=
\int_A dA \gpgrade{ I \ncap J }{0,3} \\
\oint_{\partial A} \gpgrade{ d\Bx F }{1,2}
&=
\int_A dA \lr{ \gpgrade{ I \ncap J }{1,2} - \lr{ \ncap \cdot \spacegrad } I F }.
\end{aligned}

Let’s expand these explicitly in terms of the component fields and densities to check against the conventional relationships, and see if things look right. The line integrand expands to
\label{eqn:flux:180}
\begin{aligned}
d\Bx F
&=
d\Bx \lr{ \BE + I \eta \BH }
=
d\Bx \cdot \BE + I \eta d\Bx \cdot \BH
+
d\Bx \wedge \BE + I \eta d\Bx \wedge \BH \\
&=
d\Bx \cdot \BE
– \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
+ I \eta (d\Bx \cdot \BH),
\end{aligned}

and the current integrand expands to
\label{eqn:flux:200}
\begin{aligned}
I \ncap J
&=
I \ncap
\lr{
\frac{\rho}{\epsilon} – \eta \BJ + I \lr{ c \rho_\txtm – \BM }
} \\
&=
\ncap I \frac{\rho}{\epsilon} – \eta \ncap I \BJ – \ncap c \rho_\txtm + \ncap \BM \\
&=
\ncap \cdot \BM
+ \eta (\ncap \cross \BJ)
– \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
– \eta I (\ncap \cdot \BJ).
\end{aligned}
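These multivector expansions are easy to get wrong by a sign, so here is a small numerical sanity check of both integrands, a sketch assuming the standard Pauli-matrix representation of the 3D geometric algebra ($$e_k \mapsto \sigma_k$$, $$I \mapsto i$$); all numeric values are arbitrary stand-ins for the field and source components.

```python
# Check dBx F and I ncap J expansions in the Pauli representation of Cl(3):
# e_k -> sigma_k, pseudoscalar I -> i, geometric product -> matrix product.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)

def vec(a):
    """Map a real 3-vector onto its representative a . sigma."""
    return a[0] * s1 + a[1] * s2 + a[2] * s3

rng = np.random.default_rng(0)
dx, E, H, n, J, M = (rng.normal(size=3) for _ in range(6))
eta, rho_eps, c_rho_m = 1.5, 0.7, 0.3  # stand-ins for eta, rho/eps, c rho_m

# Line integrand: dx F = dx.E - eta (dx x H) + I (dx x E) + I eta (dx.H)
lhs1 = vec(dx) @ (vec(E) + 1j * eta * vec(H))
rhs1 = (np.dot(dx, E) + 1j * eta * np.dot(dx, H)) * Id \
    + vec(-eta * np.cross(dx, H)) + 1j * vec(np.cross(dx, E))

# Current integrand: I n (rho/eps - eta J + I (c rho_m - M))
lhs2 = 1j * vec(n) @ (rho_eps * Id - eta * vec(J) + 1j * (c_rho_m * Id - vec(M)))
rhs2 = (np.dot(n, M) - 1j * eta * np.dot(n, J)) * Id \
    + vec(eta * np.cross(n, J) - c_rho_m * n) \
    + 1j * vec(np.cross(n, M) + rho_eps * n)

print(np.allclose(lhs1, rhs1), np.allclose(lhs2, rhs2))  # True True
```

Both expansions match the matrix products exactly, term by term.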

We are left with
\label{eqn:flux:220}
\begin{aligned}
\oint_{\partial A}
\lr{
d\Bx \cdot \BE + I \eta (d\Bx \cdot \BH)
}
&=
\int_A dA
\lr{
\ncap \cdot \BM – \eta I (\ncap \cdot \BJ)
} \\
\oint_{\partial A}
\lr{
– \eta (d\Bx \cross \BH)
+ I (d\Bx \cross \BE )
}
&=
\int_A dA
\lr{
\eta (\ncap \cross \BJ)
– \ncap c \rho_\txtm
+ I (\ncap \cross \BM)
+ \ncap I \frac{\rho}{\epsilon}
-\PD{n}{} \lr{ I \BE – \eta \BH }
}.
\end{aligned}

This is a crazy mess of dots, crosses, fields and sources. We can split it into one equation for each grade, which will probably look a little more regular. That is
\label{eqn:flux:240}
\begin{aligned}
\oint_{\partial A} d\Bx \cdot \BE &= \int_A dA \ncap \cdot \BM \\
\oint_{\partial A} d\Bx \cross \BH
&=
\int_A dA
\lr{
– \ncap \cross \BJ
+ \frac{ \ncap \rho_\txtm }{\mu}
– \PD{n}{\BH}
} \\
\oint_{\partial A} d\Bx \cross \BE &=
\int_A dA
\lr{
\ncap \cross \BM
+ \frac{\ncap \rho}{\epsilon}
– \PD{n}{\BE}
} \\
\oint_{\partial A} d\Bx \cdot \BH &= -\int_A dA \ncap \cdot \BJ
\end{aligned}

The first and last equations could have been obtained much more easily from Maxwell’s equations in their conventional form. The two cross product equations with the normal derivatives are not familiar to me, even without the fictitious magnetic sources. It is somewhat remarkable that so much can be packed into one multivector equation:
\label{eqn:flux:260}
\oint_{\partial A} d\Bx F
=
I \int_A dA \lr{ \ncap J – \PD{n}{F} }.

# References

[1] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

## Stokes integrals for Maxwell’s equations in Geometric Algebra

Recall that the relativistic form of Maxwell’s equation in Geometric Algebra is

\label{eqn:maxwellStokes:20}
\grad F = \inv{c \epsilon_0} J.

where $$\grad = \gamma^\mu \partial_\mu$$ is the spacetime gradient, and $$J = (c\rho, \BJ) = J^\mu \gamma_\mu$$ is the four-vector current density. The pseudoscalar for the space is denoted $$I = \gamma_0 \gamma_1 \gamma_2 \gamma_3$$, where the basis elements satisfy $$\gamma_0^2 = 1 = -\gamma_k^2$$, and a dual basis satisfies $$\gamma_\mu \cdot \gamma^\nu = \delta_\mu^\nu$$. The electromagnetic field $$F$$ is a composite multivector $$F = \BE + I c \BB$$. This is actually a bivector because spatial vectors have a bivector representation in the space time algebra of the form $$\BE = E^k \gamma_k \gamma_0$$.

Previously, I wrote out the Stokes integrals for Maxwell’s equation in GA form using some three parameter spacetime manifold volumes. This time I’m going to use two and three parameter spatial volumes, again with the Geometric Algebra form of Stokes theorem.

Multiplication by a timelike unit vector transforms Maxwell’s equation from its relativistic form. When that vector is the standard basis timelike unit vector $$\gamma_0$$, we obtain Maxwell’s equations from the point of view of a stationary observer

\label{eqn:stokesMaxwellSpaceTimeSplit:40}
\lr{\partial_0 + \spacegrad} \lr{ \BE + c I \BB } = \inv{\epsilon_0 c} \lr{ c \rho - \BJ }.

Extracting the scalar, vector, bivector, and trivector grades respectively, we have
\label{eqn:stokesMaxwellSpaceTimeSplit:60}
\begin{aligned}
\spacegrad \cdot \BE &= \frac{\rho}{\epsilon_0} \\
c I \spacegrad \wedge \BB &= -\partial_0 \BE – \inv{\epsilon_0 c} \BJ \\
\spacegrad \wedge \BE &= – I c \partial_0 \BB \\
c I \spacegrad \cdot \BB &= 0.
\end{aligned}
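The algebraic content of this grade split can be spot-checked numerically by replacing $$\spacegrad$$ with a constant vector $$k$$ (a plane-wave style substitution, so all derivatives become multiplications, and the $$\partial_0$$ terms are dropped); the Pauli-matrix representation of the spatial algebra ($$e_k \mapsto \sigma_k$$, $$I \mapsto i$$) is an assumption of this sketch.

```python
# Grade split of k (E + c I B), with a constant vector k standing in for grad,
# in the Pauli representation of Cl(3): e_k -> sigma_k, I -> i.
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Id = np.eye(2, dtype=complex)

def vec(a):
    return sum(a[k] * s[k] for k in range(3))

def grades(M):
    """Return (scalar, vector, dual of bivector, pseudoscalar coefficient)."""
    t = np.trace(M) / 2
    comp = np.array([np.trace(M @ sk) / 2 for sk in s])
    return t.real, comp.real, comp.imag, t.imag

rng = np.random.default_rng(1)
k, E, B = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
c = 3.0  # arbitrary stand-in for the speed of light

scal, v, b, p = grades(vec(k) @ (vec(E) + 1j * c * vec(B)))
print(np.isclose(scal, np.dot(k, E)),        # scalar grade:       k . E
      np.allclose(v, -c * np.cross(k, B)),   # vector grade:       -c (k x B)
      np.allclose(b, np.cross(k, E)),        # bivector grade:     I (k x E)
      np.isclose(p, c * np.dot(k, B)))       # pseudoscalar grade: c I (k . B)
```

With $$k \rightarrow \spacegrad$$ these four grades are exactly the divergence, displacement-current, curl, and no-monopole pieces above.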

Each of these can be written as a curl equation

\label{eqn:stokesMaxwellSpaceTimeSplit:80}
\boxed{
\begin{aligned}
\spacegrad \wedge (I \BE) &= I \frac{\rho}{\epsilon_0} \\
\inv{\mu_0} \spacegrad \wedge \BB &= \epsilon_0 I \partial_t \BE + I \BJ \\
\spacegrad \wedge \BE &= -I \partial_t \BB \\
\spacegrad \wedge (I \BB) &= 0,
\end{aligned}
}

a form that allows for direct application of Stokes integrals. The first and last of these require a three parameter volume element, whereas the two bivector grade equations can be integrated using either two or three parameter volume elements. Suppose that we can parameterize the space with parameters $$u, v, w$$, for which the gradient has the representation

\label{eqn:stokesMaxwellSpaceTimeSplit:100}
\spacegrad = \Bx^u \partial_u + \Bx^v \partial_v + \Bx^w \partial_w,

but we integrate over a two parameter subset of this space spanned by $$\Bx(u,v)$$, with area element

\label{eqn:stokesMaxwellSpaceTimeSplit:120}
\begin{aligned}
d^2 \Bx
&= d\Bx_u \wedge d\Bx_v \\
&=
\PD{u}{\Bx}
\wedge
\PD{v}{\Bx}
\,du dv \\
&=
\Bx_u
\wedge
\Bx_v
\,du dv,
\end{aligned}

as illustrated in fig. 1.

fig. 1. Two parameter manifold.

Our curvilinear coordinates $$\Bx_u, \Bx_v, \Bx_w$$ are dual to the reciprocal basis $$\Bx^u, \Bx^v, \Bx^w$$, but we won’t actually have to calculate that reciprocal basis. Instead we need only know that it can be calculated, and that it is defined by the relations $$\Bx_a \cdot \Bx^b = \delta_a^b$$. Knowing that, we can reduce (say),

\label{eqn:stokesMaxwellSpaceTimeSplit:140}
\begin{aligned}
d^2 \Bx \cdot ( \spacegrad \wedge \BE )
&=
d^2 \Bx \cdot ( \Bx^a \partial_a \wedge \BE ) \\
&=
(\Bx_u \wedge \Bx_v) \cdot ( \Bx^a \wedge \partial_a \BE ) \,du dv \\
&=
\lr{ (\Bx_u \wedge \Bx_v) \cdot \Bx^a } \cdot \partial_a \BE \,du dv \\
&=
d\Bx_u \cdot \partial_v \BE \,dv
-d\Bx_v \cdot \partial_u \BE \,du.
\end{aligned}

Because each of the differentials, for example $$d\Bx_u = (\PDi{u}{\Bx}) du$$, is calculated with the other parameter (i.e. $$v$$) held constant, this is directly integrable, leaving

\label{eqn:stokesMaxwellSpaceTimeSplit:160}
\begin{aligned}
\int d^2 \Bx \cdot ( \spacegrad \wedge \BE )
&=
\int \evalrange{\lr{d\Bx_u \cdot \BE}}{v=0}{v=1}
-\int \evalrange{\lr{d\Bx_v \cdot \BE}}{u=0}{u=1} \\
&=
\oint d\Bx \cdot \BE.
\end{aligned}

That direct integration of one of the parameters, while the others are held constant, is the basic idea behind Stokes theorem.
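The derivation only needed the existence of the reciprocal frame. If one does want it numerically, the defining relations $$\Bx_a \cdot \Bx^b = \delta_a^b$$ make it a matrix inverse; a sketch with an arbitrary (hypothetical) non-orthogonal frame:

```python
# Reciprocal frame from the defining relations x_a . x^b = delta_a^b.
# The frame vectors here are arbitrary stand-ins for x_u, x_v, x_w.
import numpy as np

X = np.array([[1.0, 0.2, 0.0],   # x_u
              [0.0, 1.0, 0.3],   # x_v
              [0.1, 0.0, 1.0]])  # x_w; rows form the (non-orthogonal) frame

R = np.linalg.inv(X).T  # rows of R are the reciprocal vectors x^u, x^v, x^w

print(np.allclose(X @ R.T, np.eye(3)))  # x_a . x^b = delta_a^b -> True
```

For an orthonormal frame $$X$$ is orthogonal, and the reciprocal frame coincides with the frame itself.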

The pseudoscalar grade Maxwell’s equations from \ref{eqn:stokesMaxwellSpaceTimeSplit:80} require a three parameter volume element for application of Stokes’ theorem. Again, allowing for curvilinear coordinates, such a differential expands as

\label{eqn:stokesMaxwellSpaceTimeSplit:180}
\begin{aligned}
d^3 \Bx \cdot (\spacegrad \wedge (I\BB))
&=
(( \Bx_u \wedge \Bx_v \wedge \Bx_w ) \cdot \Bx^a ) \cdot \partial_a (I\BB) \,du dv dw \\
&=
(d\Bx_u \wedge d\Bx_v) \cdot \partial_w (I\BB) dw
+(d\Bx_v \wedge d\Bx_w) \cdot \partial_u (I\BB) du
+(d\Bx_w \wedge d\Bx_u) \cdot \partial_v (I\BB) dv.
\end{aligned}

Like the two parameter volume, this is directly integrable

\label{eqn:stokesMaxwellSpaceTimeSplit:200}
\int
d^3 \Bx \cdot (\spacegrad \wedge (I\BB))
=
\int \evalbar{(d\Bx_u \wedge d\Bx_v) \cdot (I\BB) }{\Delta w}
+\int \evalbar{(d\Bx_v \wedge d\Bx_w) \cdot (I\BB)}{\Delta u}
+\int \evalbar{(d\Bx_w \wedge d\Bx_u) \cdot (I\BB)}{\Delta v}.

After some thought (or a craft project such as that of fig. 2) it can be observed that this is conceptually an oriented surface integral.

fig. 2. Oriented three parameter surface.
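For the pseudoscalar grade equation this oriented surface integral is the classical divergence theorem in disguise, since $$\spacegrad \wedge (I\BB) = I \lr{ \spacegrad \cdot \BB }$$. A midpoint-rule sketch on the unit cube, with an arbitrarily chosen field standing in for $$\BB$$:

```python
# Midpoint-rule check of volume integral vs. oriented boundary flux:
# with B = (x^2, y^2, z^2) on the unit cube, div B = 2x + 2y + 2z
# integrates to 3, matching the outward flux through the six faces.
import numpy as np

n = 50
mid = (np.arange(n) + 0.5) / n  # midpoint sample points
h = 1.0 / n

X, Y, Z = np.meshgrid(mid, mid, mid, indexing="ij")
vol = np.sum(2 * X + 2 * Y + 2 * Z) * h**3  # volume integral of div B

def Bfield(x, y, z):
    return np.array([x**2, y**2, z**2])

U, V = np.meshgrid(mid, mid, indexing="ij")
flux = 0.0
for axis in range(3):                    # pair of faces normal to each axis
    for w, sign in ((1.0, +1.0), (0.0, -1.0)):
        coords = [U, V]
        coords.insert(axis, w + 0 * U)   # hold one coordinate at the face value
        flux += sign * np.sum(Bfield(*coords)[axis]) * h**2

print(round(vol, 6), round(flux, 6))  # both 3.0
```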

Noting that

\label{eqn:stokesMaxwellSpaceTimeSplit:221}
\begin{aligned}
d^2 \Bx \cdot (I\Bf)
&= \gpgradezero{ d^2 \Bx I \Bf } \\
&= I (d^2\Bx \wedge \Bf),
\end{aligned}

we can now write down the results of application of Stokes theorem to each of Maxwell’s equations in their curl forms

\label{eqn:stokesMaxwellSpaceTimeSplit:220}
\boxed{
\begin{aligned}
\oint d\Bx \cdot \BE &= -I \partial_t \int d^2 \Bx \wedge \BB \\
\inv{\mu_0} \oint d\Bx \cdot \BB &= \epsilon_0 I \partial_t \int d^2 \Bx \wedge \BE + I \int d^2 \Bx \wedge \BJ \\
\oint d^2 \Bx \wedge \BE &= \inv{\epsilon_0} \int (d^3 \Bx \cdot I) \rho \\
\oint d^2 \Bx \wedge \BB &= 0.
\end{aligned}
}
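The circulation-equals-flux step underlying the first two boxed equations can be spot-checked numerically in classical vector form (a static field, chosen arbitrarily, on a flat unit square with midpoint quadrature):

```python
# Midpoint-rule check of circulation vs. curl flux on the unit square:
# for E = (-y^3, x^3, 0), (curl E)_z = 3x^2 + 3y^2, and the
# counterclockwise boundary circulation should match its flux.
import numpy as np

n = 400
mid = (np.arange(n) + 0.5) / n  # midpoint sample points
h = 1.0 / n

X, Y = np.meshgrid(mid, mid, indexing="ij")
flux = np.sum(3 * X**2 + 3 * Y**2) * h * h

def Ex(x, y): return -y**3 + 0.0 * x
def Ey(x, y): return x**3 + 0.0 * y

circ = (np.sum(Ex(mid, 0.0)) * h     # bottom edge, traversed +x
        + np.sum(Ey(1.0, mid)) * h   # right edge, traversed +y
        - np.sum(Ex(mid, 1.0)) * h   # top edge, traversed -x
        - np.sum(Ey(0.0, mid)) * h)  # left edge, traversed -y

print(abs(flux - circ) < 1e-4)  # True
```

Both sides evaluate to 2 (up to quadrature error in the flux).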

In the three parameter surface integrals, the specific meaning of $$d^2 \Bx \wedge \Bf$$ is
\label{eqn:stokesMaxwellSpaceTimeSplit:240}
\oint d^2 \Bx \wedge \Bf
=
\int \evalbar{\lr{d\Bx_u \wedge d\Bx_v \wedge \Bf}}{\Delta w}
+\int \evalbar{\lr{d\Bx_v \wedge d\Bx_w \wedge \Bf}}{\Delta u}
+\int \evalbar{\lr{d\Bx_w \wedge d\Bx_u \wedge \Bf}}{\Delta v}.

Note that in each case only the component of the vector $$\Bf$$ that is projected onto the normal to the area element contributes.
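That claim can be verified directly, since in three dimensions $$\Bx_u \wedge \Bx_v \wedge \Bf$$ is the determinant of the three vectors times the pseudoscalar; a quick sketch with random vectors:

```python
# x_u ^ x_v ^ f = det([x_u, x_v, f]) I in 3D, so replacing f by its
# projection onto the normal of the x_u, x_v plane leaves it unchanged:
# the tangential part of f lies in span(x_u, x_v) and kills the determinant.
import numpy as np

rng = np.random.default_rng(2)
xu, xv, f = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

nhat = np.cross(xu, xv)
nhat /= np.linalg.norm(nhat)
f_normal = np.dot(f, nhat) * nhat  # normal projection of f

full = np.linalg.det(np.array([xu, xv, f]))
proj = np.linalg.det(np.array([xu, xv, f_normal]))

print(np.isclose(full, proj))  # True
```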