
A fun application of Green’s functions and geometric algebra: Residue calculus

November 2, 2025 math and physics play


Motivation.

A fun application of both Green’s functions and geometric algebra is to show how the Cauchy integral equation can be expressed in terms of the Green’s function for the 2D gradient. This is covered, almost as an aside, in [1]. I found that treatment a bit hard to understand, so I am going to work through it here at my own pace.

Complex numbers in geometric algebra.

Anybody who has studied geometric algebra is likely familiar with a variety of ways to construct complex numbers from geometric objects. For example, complex numbers can be constructed for any plane. If \( \Be_1, \Be_2 \) is a pair of orthonormal vectors for some plane in \(\mathbb{R}^N\), then any vector in that plane has the form
\begin{equation}\label{eqn:residueGreens:20}
\Bf = \Be_1 u + \Be_2 v,
\end{equation}
and has an associated complex representation, obtained by simply multiplying that vector by one of those basis vectors. For example, if we pre-multiply \( \Bf \) by \( \Be_1 \), forming
\begin{equation}\label{eqn:residueGreens:40}
\begin{aligned}
z
&= \Be_1 \Bf \\
&= \Be_1 \lr{ \Be_1 u + \Be_2 v } \\
&= u + \Be_1 \Be_2 v.
\end{aligned}
\end{equation}

We may identify the unit bivector \( \Be_1 \Be_2 \) as an imaginary, designated by \( i \), since it has the expected behavior
\begin{equation}\label{eqn:residueGreens:60}
\begin{aligned}
i^2 &=
\lr{\Be_1 \Be_2}^2 \\
&=
\lr{\Be_1 \Be_2}
\lr{\Be_1 \Be_2} \\
&=
\Be_1 \lr{\Be_2
\Be_1} \Be_2 \\
&=
-\Be_1 \lr{\Be_1
\Be_2} \Be_2 \\
&=
-\lr{\Be_1 \Be_1}
\lr{\Be_2 \Be_2} \\
&=
-1.
\end{aligned}
\end{equation}

Complex numbers are seen to be isomorphic to even grade multivectors in a planar subspace. The imaginary is the grade-two pseudoscalar, and geometrically is an oriented unit area (bivector.)
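
As a quick sanity check of this isomorphism (an aside, not needed for anything that follows), here is a small Python sketch. It assumes a \( 2 \times 2 \) real matrix realization of the planar basis vectors, which is one convenient, but by no means unique, way to make the algebra concrete.

```python
# Sketch only: realize the Euclidean GA(2) basis with 2x2 real matrices and
# confirm that the unit bivector e1 e2 acts as an imaginary, and that
# pre-multiplication by e1 sends the vector f = e1 u + e2 v to u + (e1 e2) v.
import numpy as np

e1 = np.array([[1.0, 0.0], [0.0, -1.0]])   # e1^2 = +1
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])    # e2^2 = +1, anticommutes with e1
one = np.eye(2)
i = e1 @ e2                                # the unit bivector

assert np.allclose(i @ i, -one)            # (e1 e2)^2 = -1

u, v = 3.0, -2.0
f = u * e1 + v * e2                        # a vector in the plane
z = e1 @ f                                 # pre-multiply by e1
assert np.allclose(z, u * one + v * i)     # z = u + (e1 e2) v
print("(e1 e2)^2 = -1, and e1 f = u + i v, as claimed")
```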

Cauchy-Riemann equations in terms of the gradient.

It is natural to wonder about the geometric algebra equivalents of various complex-number relationships and identities. Of particular interest for this discussion is the geometric algebra equivalent of the Cauchy-Riemann equations, which specify the conditions required for a complex function to be differentiable.

If a complex function \( f(z) = u(z) + i v(z) \) is differentiable, then we must be able to find the limit of
\begin{equation}\label{eqn:residueGreens:80}
\frac{\Delta f(z_0)}{\Delta z} = \frac{f(z_0 + h) - f(z_0)}{h},
\end{equation}
for any complex \( h \rightarrow 0 \), for any possible trajectory of \( z_0 + h \) toward \( z_0 \). In particular, for real \( h = \epsilon \),
\begin{equation}\label{eqn:residueGreens:100}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0 + \epsilon, y_0) + i v(x_0 + \epsilon, y_0) - u(x_0, y_0) - i v(x_0, y_0)}{\epsilon}
=
\PD{x}{u(z_0)} + i \PD{x}{v(z_0)},
\end{equation}
and for imaginary \( h = i \epsilon \)
\begin{equation}\label{eqn:residueGreens:120}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0, y_0 + \epsilon) + i v(x_0, y_0 + \epsilon) - u(x_0, y_0) - i v(x_0, y_0)}{i \epsilon}
=
-i\lr{ \PD{y}{u(z_0)} + i \PD{y}{v(z_0)} }.
\end{equation}
Equating real and imaginary parts, we see that existence of the derivative requires
\begin{equation}\label{eqn:residueGreens:140}
\begin{aligned}
\PD{x}{u} &= \PD{y}{v} \\
\PD{x}{v} &= -\PD{y}{u}.
\end{aligned}
\end{equation}
These are the Cauchy-Riemann equations. When the derivative exists in a given neighbourhood, we say that the function is analytic in that region. If we use a bivector interpretation of the imaginary, with \( i = \Be_1 \Be_2 \), the Cauchy-Riemann equations are satisfied exactly when the gradient of the complex function is zero, since
\begin{equation}\label{eqn:residueGreens:160}
\begin{aligned}
\spacegrad f
&=
\lr{ \Be_1 \partial_x + \Be_2 \partial_y } \lr{ u + \Be_1 \Be_2 v } \\
&=
\Be_1 \lr{ \partial_x u - \partial_y v } + \Be_2 \lr{ \partial_y u + \partial_x v }.
\end{aligned}
\end{equation}
We see that the geometric algebra equivalent of the Cauchy-Riemann equations is simply
\begin{equation}\label{eqn:residueGreens:200}
\spacegrad f = 0.
\end{equation}
Roughly speaking, we may say that a function is analytic in a region if the Cauchy-Riemann equations are satisfied, or equivalently, if the gradient is zero, in a neighbourhood of every point in that region.
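
Here is a minimal sympy sketch of that claim for one concrete analytic function, \( f(z) = z^2 \) (an arbitrary choice for illustration), checking that both vector components of \( \spacegrad f \) in \ref{eqn:residueGreens:160} vanish identically.

```python
# For f(z) = z^2 we have u = x^2 - y^2, v = 2 x y.  Both coefficients of the
# gradient, (u_x - v_y) for e1 and (u_y + v_x) for e2, should be zero.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = x**2 - y**2
v = 2 * x * y

e1_component = sp.simplify(sp.diff(u, x) - sp.diff(v, y))
e2_component = sp.simplify(sp.diff(u, y) + sp.diff(v, x))
print(e1_component, e2_component)   # both 0, so grad f = 0
```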

A special case of the fundamental theorem of geometric calculus.

Given an even grade multivector \( \psi \) (i.e., a complex number) in the geometric algebra of \(\mathbb{R}^2\), we can show that
\begin{equation}\label{eqn:residueGreens:220}
\int_A \spacegrad \psi d^2\Bx = \oint_{\partial A} d\Bx \psi.
\end{equation}
Let’s get an idea why this works by expanding the area integral for a rectangular parameterization
\begin{equation}\label{eqn:residueGreens:240}
\begin{aligned}
\int_A \spacegrad \psi d^2\Bx
&=
\int_A \lr{ \Be_1 \partial_1 + \Be_2 \partial_2 } \psi I dx dy \\
&=
\int \Be_1 I dy \evalrange{\psi}{x_0}{x_1}
+
\int \Be_2 I dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int \Be_2 dy \evalrange{\psi}{x_0}{x_1}
-
\int \Be_1 dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int d\By \evalrange{\psi}{x_0}{x_1}
-
\int d\Bx \evalrange{\psi}{y_0}{y_1}.
\end{aligned}
\end{equation}
We took advantage of the fact that the \(\mathbb{R}^2\) pseudoscalar commutes with \( \psi \). The end result, illustrated in fig. 1, shows pictorially that the remaining integral is an oriented line integral.

fig. 1. Oriented multivector line integral.

 

If we want to approximate a more general area, we may do so with additional tiles, as illustrated in fig. 2. We may evaluate the area integral using the line integral over just the exterior boundary using such a tiling, as any overlapping opposing boundary contributions cancel exactly.

fig. 2. A crude circular tiling approximation.

 

The reason that this is interesting is that it allows us to re-express a complex integral as a corresponding multivector area integral. With \( d\Bx = \Be_1 dz \), we have
\begin{equation}\label{eqn:residueGreens:260}
\oint dz\, \psi = \Be_1 \int \spacegrad \psi d^2\Bx.
\end{equation}
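
As a numeric spot check of this relation (my own aside, using just its complex number shadow), take the non-analytic function \( \psi = \bar{z} \). From \ref{eqn:residueGreens:160} we have \( \spacegrad \psi = 2 \Be_1 \), so the right hand side of \ref{eqn:residueGreens:260} is \( 2 I \times \textrm{Area} \), and the counterclockwise contour integral of \( \bar{z} \, dz \) around the unit square should come out to \( 2 i \).

```python
# Midpoint-rule contour integration of conj(z) dz around the unit square.
import numpy as np

def contour_integral(psi, corners, n=2000):
    """Integrate psi(z) dz along the closed polygon through the corners."""
    total = 0.0 + 0.0j
    for z0, z1 in zip(corners, corners[1:] + corners[:1]):
        t = (np.arange(n) + 0.5) / n
        z = z0 + t * (z1 - z0)
        total += np.sum(psi(z)) * (z1 - z0) / n
    return total

square = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]   # counterclockwise unit square
result = contour_integral(np.conj, square)
print(result)                                # ~ 2j = 2 I * Area
assert np.allclose(result, 2j)
```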

The Cauchy kernel as a Green’s function.

We’ve previously derived the Green’s function for the 2D Laplacian, and found
\begin{equation}\label{eqn:residueGreens:280}
\tilde{G}(\Bx, \Bx') = \inv{2\pi} \ln \Abs{\Bx - \Bx'},
\end{equation}
which satisfies
\begin{equation}\label{eqn:residueGreens:300}
\delta^2(\Bx - \Bx') = \spacegrad^2 \tilde{G}(\Bx, \Bx') = \spacegrad \lr{ \spacegrad \tilde{G}(\Bx, \Bx') }.
\end{equation}
This means that \( G(\Bx, \Bx') = \spacegrad \tilde{G}(\Bx, \Bx') \) is the Green’s function for the gradient. That Green’s function is
\begin{equation}\label{eqn:residueGreens:320}
\begin{aligned}
G(\Bx, \Ba)
&= \inv{2 \pi} \frac{\spacegrad \Abs{\Bx - \Ba}}{\Abs{\Bx - \Ba}} \\
&= \inv{2 \pi} \frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2}.
\end{aligned}
\end{equation}
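
A small numeric aside: one way to see the delta function behaviour of this Green's function is through the divergence theorem, which says that the flux of \( G \) through any closed curve enclosing \( \Ba \) should be exactly one, and zero for a curve that does not enclose \( \Ba \). The numpy sketch below checks this for circles (the specific centers and radius are arbitrary choices).

```python
import numpy as np

def flux_through_circle(center, radius, a, n=4000):
    """Midpoint-rule flux of G(x,a) = (x-a)/(2 pi |x-a|^2) through a circle."""
    theta = 2 * np.pi * (np.arange(n) + 0.5) / n
    normal = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # outward unit normal
    points = center + radius * normal
    d = points - a
    G = d / (2 * np.pi * np.sum(d * d, axis=1, keepdims=True))
    return np.sum(np.sum(G * normal, axis=1)) * (2 * np.pi * radius / n)

a = np.array([0.3, -0.1])
print(flux_through_circle(np.array([0.0, 0.0]), 1.0, a))   # ~ 1 (a enclosed)
print(flux_through_circle(np.array([5.0, 5.0]), 1.0, a))   # ~ 0 (a not enclosed)
```
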
We may cast this Green’s function into complex form with \( z = \Be_1 \Bx, a = \Be_1 \Ba \). In particular
\begin{equation}\label{eqn:residueGreens:340}
\begin{aligned}
\inv{z - a}
&=
\frac{(z - a)^\conj}{\Abs{z - a}^2} \\
&=
\frac{\lr{ \Be_1 \lr{ \Bx - \Ba } }^\conj}{\Abs{\Bx - \Ba}^2} \\
&=
\frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2} \Be_1 \\
&=
2 \pi G(\Bx, \Ba) \Be_1.
\end{aligned}
\end{equation}

Cauchy’s integral.

With
\begin{equation}\label{eqn:residueGreens:360}
\psi = \frac{f(z)}{z - a},
\end{equation}
using \ref{eqn:residueGreens:260}, we can now evaluate
\begin{equation}\label{eqn:residueGreens:265}
\begin{aligned}
\oint dz\, \frac{f(z)}{z - a}
&= \Be_1 \int \spacegrad \lr{ \frac{f(z)}{z - a} } d^2\Bx \\
&= \Be_1 \int \lr{ \frac{\spacegrad f(z)}{z - a} + \lr{ \spacegrad \inv{z - a}} f(z) } I dA \\
&= \Be_1 \int \lr{ \spacegrad 2 \pi G(\Bx, \Ba) } \Be_1 f(z) I dA \\
&= 2 \pi \Be_1 \int \delta^2(\Bx - \Ba) \Be_1 f(\Bx) I dA \\
&= 2 \pi \Be_1^2 f(\Ba) I \\
&= 2 \pi I f(a),
\end{aligned}
\end{equation}
where we’ve made use of the analytic condition \( \spacegrad f = 0 \), and the fact that \( f \) and \( 1/(z-a) \), both even multivectors, commute.

The Cauchy integral equation
\begin{equation}\label{eqn:residueGreens:380}
f(a) = \inv{2 \pi I} \oint dz\, \frac{f(z)}{z - a},
\end{equation}
falls out naturally. This sort of residue calculation always seemed a bit miraculous. By introducing a geometric algebra encoding of complex numbers, we get a new and interesting interpretation. In particular,

  1. the imaginary factor in the geometric algebra formulation of this identity is an oriented unit area coming directly from the area element,
  2. the factor of \( 2 \pi \) comes directly from the Green’s function for the gradient,
  3. the fact that this particular form of integral picks up only the contribution at the point \( z = a \) no longer seems mysterious. This is due directly to delta-function filtering.

Also, if we are looking for an understanding of how to generalize the Cauchy integral formula to more general multivector functions, we now have a good clue about how that would be done.
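
For completeness, here is a direct numeric confirmation of \ref{eqn:residueGreens:380} in its ordinary complex form, using an arbitrarily chosen analytic function \( f(z) = e^z \), an arbitrarily chosen interior point \( a \), and a unit circle contour.

```python
import numpy as np

def cauchy(f, a, center=0.0, radius=1.0, n=4000):
    """(1/(2 pi i)) * contour integral of f(z)/(z - a) around a circle."""
    theta = 2 * np.pi * (np.arange(n) + 0.5) / n
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n)
    return np.sum(f(z) / (z - a) * dz) / (2j * np.pi)

a = 0.3 + 0.2j
print(cauchy(np.exp, a), np.exp(a))   # the two values agree closely
```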

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

Multivector form of Leibniz integral theorem for line integrals.

June 2, 2024 math and physics play


Goal.

Here we will explore the multivector form of the Leibniz integral theorem (aka. Feynman’s trick in one dimension), as discussed in [1].

Given a time dependent integration volume \( \Omega(t) \), we seek to evaluate
\begin{equation}\label{eqn:LeibnizIntegralTheorem:20}
\ddt{} \int_{\Omega(t)} F d^p \Bx \lrpartial G.
\end{equation}
Recall that when the bounding volume is fixed, we have
\begin{equation}\label{eqn:LeibnizIntegralTheorem:40}
\int_{\Omega} F d^p \Bx \lrpartial G = \int_{\partial \Omega} F d^{p-1} \Bx G,
\end{equation}
and expect a few terms that are variations of the RHS if we take derivatives.

Simplest case: scalar function, one variable.

With
\begin{equation}\label{eqn:LeibnizIntegralTheorem:60}
A(t) = \int_{a(t)}^{b(t)} f(u, t) du,
\end{equation}
if we can find an antiderivative, such that
\begin{equation}\label{eqn:LeibnizIntegralTheorem:80}
\PD{u}{F(u,t)} = f(u, t),
\end{equation}
or
\begin{equation}\label{eqn:LeibnizIntegralTheorem:90}
F(u, t) = \int f(u, t) du,
\end{equation}
then the integral is trivial to evaluate
\begin{equation}\label{eqn:LeibnizIntegralTheorem:100}
\begin{aligned}
A(t)
&=
\int_{a(t)}^{b(t)} f(u, t) du \\
&=
\int_{a(t)}^{b(t)} \PD{u}{F(u,t)} du \\
&= F( b(t), t ) - F( a(t), t ).
\end{aligned}
\end{equation}
Should we attempt to take derivatives, we have a contribution from the first parameter that is entirely dependent on the boundary, and a contribution from the second parameter that is entirely independent of the boundary. That is
\begin{equation}\label{eqn:LeibnizIntegralTheorem:120}
\begin{aligned}
\ddt{} \int_{a(t)}^{b(t)} f(u, t) du
&=
\PD{b}{ F } \PD{t}{b}
-\PD{a}{ F } \PD{t}{a}
+ \evalrange{\PD{t}{F(u, t)}}{u = a(t)}{b(t)} \\
&=
f(b(t), t) b'(t) -
f(a(t), t) a'(t)
+ \int_{a(t)}^{b(t)} \PD{t}{} f(u, t) du.
\end{aligned}
\end{equation}
In the second step, the antiderivative function \( F \) has been restated in its original integral form \ref{eqn:LeibnizIntegralTheorem:90}. We are able to take the derivative into the integral, since we first evaluate that derivative, independent of the boundary, and then evaluate the result at the respective end points of the boundary.
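
Here is a quick numeric check (not part of the argument) of \ref{eqn:LeibnizIntegralTheorem:120}, using the arbitrary choices \( f(u,t) = \sin(u t) \), \( a(t) = t^2 \), \( b(t) = 1 + t \), a simple trapezoid quadrature, and a central difference for the time derivative.

```python
import numpy as np

def f(u, t):   return np.sin(u * t)
def ft(u, t):  return u * np.cos(u * t)      # partial f / partial t
def a(t):      return t**2
def b(t):      return 1.0 + t

def integrate(g, lo, hi, n=20000):
    """Trapezoid rule for a scalar function g on [lo, hi]."""
    u = np.linspace(lo, hi, n + 1)
    y = g(u)
    return (hi - lo) / n * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def A(t):
    return integrate(lambda u: f(u, t), a(t), b(t))

t, h = 0.7, 1e-5
lhs = (A(t + h) - A(t - h)) / (2 * h)        # d/dt of the integral
rhs = (f(b(t), t) * 1.0                       # f(b(t),t) b'(t), with b' = 1
       - f(a(t), t) * 2 * t                   # f(a(t),t) a'(t), with a' = 2t
       + integrate(lambda u: ft(u, t), a(t), b(t)))
print(lhs, rhs)                               # agree up to quadrature/difference error
```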

Next simplest case: Multivector line integral (perfect derivative.)

Given an \( N \) dimensional vector space, and a path parameterized by a vector \( \Bx = \Bx(u) \), the line integral special case of the fundamental theorem of calculus is found by evaluating
\begin{equation}\label{eqn:LeibnizIntegralTheorem:140}
\int F(u) d\Bx \lrpartial G(u),
\end{equation}
where \( F, G \) are multivectors, and
\begin{equation}\label{eqn:LeibnizIntegralTheorem:160}
\begin{aligned}
d\Bx &= \PD{u}{\Bx} du = \Bx_u du \\
\lrpartial &= \Bx^u \stackrel{ \leftrightarrow }{\PD{u}{}},
\end{aligned}
\end{equation}
where \( \Bx_u \Bx^u = \Bx_u \cdot \Bx^u = 1 \).

Evaluating the integral, we have
\begin{equation}\label{eqn:LeibnizIntegralTheorem:180}
\begin{aligned}
\int F(u) d\Bx \lrpartial G(u)
&=
\int F(u) \Bx_u du \Bx^u \stackrel{ \leftrightarrow }{\PD{u}{}} G(u) \\
&=
\int du \PD{u}{} \lr{ F(u) G(u) } \\
&=
F(u) G(u).
\end{aligned}
\end{equation}

If we allow \( F, G, \Bx \) to each have time dependence
\begin{equation}\label{eqn:LeibnizIntegralTheorem:200}
\begin{aligned}
F &= F(u, t) \\
G &= G(u, t) \\
\Bx &= \Bx(u, t),
\end{aligned}
\end{equation}
then we have
\begin{equation}\label{eqn:LeibnizIntegralTheorem:220}
\ddt{} \int_{u = a(t)}^{b(t)} F(u, t) d\Bx \lrpartial G(u, t)
=
\evalrange{ \ddt{u} \PD{u}{} \lr{ F(u, t) G(u, t) } }{u = a(t)}{b(t)}
+ \evalrange{\PD{t}{} \lr{ F(u, t) G(u, t) } }{u = a(t)}{b(t)}
.
\end{equation}

General multivector line integral.

Now suppose that we have a general multivector line integral
\begin{equation}\label{eqn:LeibnizIntegralTheorem:240}
A(t) = \int_{a(t)}^{b(t)} F(u, t) d\Bx G(u, t),
\end{equation}
where \( d\Bx = \Bx_u du \), \( \Bx_u = \partial \Bx(u, t)/\partial u \). Writing out the integrand explicitly, we have
\begin{equation}\label{eqn:LeibnizIntegralTheorem:260}
A(t) = \int_{a(t)}^{b(t)} du F(u, t) \Bx_u(u, t) G(u, t).
\end{equation}
Following our logic with the first scalar case, let
\begin{equation}\label{eqn:LeibnizIntegralTheorem:280}
\PD{u}{B(u, t)} = F(u, t) \Bx_u(u, t) G(u, t).
\end{equation}
We can now evaluate the derivative
\begin{equation}\label{eqn:LeibnizIntegralTheorem:300}
\ddt{A(t)} = \evalrange{ \ddt{u} \PD{u}{B} }{u = a(t)}{b(t)} + \evalrange{ \PD{t}{}B(u, t) }{u = a(t)}{b(t)}.
\end{equation}
Writing \ref{eqn:LeibnizIntegralTheorem:280} in integral form, we have
\begin{equation}\label{eqn:LeibnizIntegralTheorem:320}
B(u, t) = \int du F(u, t) \Bx_u(u, t) G(u, t),
\end{equation}
so
\begin{equation}\label{eqn:LeibnizIntegralTheorem:340}
\begin{aligned}
\ddt{A(t)}
&= \evalrange{ \ddt{u} \PD{u}{B} }{u = a(t)}{b(t)} +
\evalbar{ \PD{t'}{} \int_{a(t)}^{b(t)} du\, F(u, t') \Bx_u(u, t') G(u, t') }{t' = t} \\
&= \evalrange{ \ddt{u} F(u, t) \Bx_u(u, t) G(u, t) }{u = a(t)}{b(t)} +
\int_{a(t)}^{b(t)} \PD{t}{} F(u, t) d\Bx(u, t) G(u, t),
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:LeibnizIntegralTheorem:360}
\ddt{} \int_{a(t)}^{b(t)} F(u, t) d\Bx(u, t) G(u, t)
= \evalrange{ F(u, t) \ddt{\Bx}(u, t) G(u, t) }{u = a(t)}{b(t)} +
\int_{a(t)}^{b(t)} \PD{t}{} F(u, t) d\Bx(u, t) G(u, t).
\end{equation}

This is perhaps clearer if written simply as:
\begin{equation}\label{eqn:LeibnizIntegralTheorem:380}
\ddt{} \int_{a(t)}^{b(t)} F d\Bx G
= \evalrange{ F \ddt{\Bx} G }{u = a(t)}{b(t)} +
\int_{a(t)}^{b(t)} \PD{t}{} F d\Bx G.
\end{equation}
As a check, it’s worth pointing out that we can recover the one dimensional result, writing \( \Bx = u \Be_1 \), \( f = F \Be_1^{-1} \), and \( G = 1 \), for
\begin{equation}\label{eqn:LeibnizIntegralTheorem:400}
\ddt{} \int_{a(t)}^{b(t)} f du
= \evalrange{ f(u) \ddt{u} }{u = a(t)}{b(t)} +
\int_{a(t)}^{b(t)} du \PD{t}{f}.
\end{equation}
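
As a numeric sanity check of \ref{eqn:LeibnizIntegralTheorem:380} itself, here is a sketch that realizes GA(2) with \( 2 \times 2 \) real matrices and uses arbitrarily chosen multivector valued \( F(u,t), G(u,t) \). To keep \( d\Bx/dt \) unambiguous, the curve is chosen with no explicit time dependence, \( \Bx = \Bx(u) \), so that the boundary velocity is just \( \Bx_u \, du/dt \).

```python
import numpy as np

e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
one = np.eye(2)
e12 = e1 @ e2

def x_u(u):   return e1 + 2 * u * e2                         # x(u) = u e1 + u^2 e2
def F(u, t):  return np.cos(u * t) * one + u * e1 + t * e12  # arbitrary multivector
def Ft(u, t): return -u * np.sin(u * t) * one + e12
def G(u, t):  return one + np.sin(u + t) * e2
def Gt(u, t): return np.cos(u + t) * e2
def a(t):     return 0.2 + 0.1 * t                           # a'(t) = 0.1
def b(t):     return 1.0 + 0.3 * t                           # b'(t) = 0.3

def integrate(mfunc, lo, hi, n=4000):
    """Trapezoid rule for a matrix valued integrand."""
    us = np.linspace(lo, hi, n + 1)
    vals = np.array([mfunc(u) for u in us])
    return (hi - lo) / n * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))

def A(t):
    return integrate(lambda u: F(u, t) @ x_u(u) @ G(u, t), a(t), b(t))

t, h = 0.5, 1e-5
lhs = (A(t + h) - A(t - h)) / (2 * h)
boundary = (F(b(t), t) @ (0.3 * x_u(b(t))) @ G(b(t), t)
            - F(a(t), t) @ (0.1 * x_u(a(t))) @ G(a(t), t))
interior = integrate(lambda u: Ft(u, t) @ x_u(u) @ G(u, t)
                               + F(u, t) @ x_u(u) @ Gt(u, t), a(t), b(t))
print(np.max(np.abs(lhs - (boundary + interior))))   # small: quadrature + difference error only
```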

Next steps.

I’ve tried a couple times on paper to do surface integral variations of this (allowing the surface to vary with time), and don’t think that I’ve gotten it right. Will try again (or perhaps just look it up and see what the result is supposed to look like, then see how that translates into the GC formalism.)

References

[1] Wikipedia contributors. Leibniz integral rule — Wikipedia, the free encyclopedia. https://en.wikipedia.org/w/index.php?title=Leibniz_integral_rule&oldid=1223666713, 2024. [Online; accessed 22-May-2024].

Unpacking the fundamental theorem of multivector calculus in two dimensions

January 18, 2021 math and physics play

Notes.

Due to limitations in the MathJax-Latex package, all the oriented integrals in this blog post should be interpreted as having a clockwise orientation. [See the PDF version of this post for more sophisticated formatting.]

Guts.

Given a two dimensional generating vector space, there are two instances of the fundamental theorem for multivector integration
\begin{equation}\label{eqn:unpackingFundamentalTheorem:20}
\int_S F d\Bx \lrpartial G = \evalbar{F G}{\Delta S},
\end{equation}
and
\begin{equation}\label{eqn:unpackingFundamentalTheorem:40}
\int_S F d^2\Bx \lrpartial G = \oint_{\partial S} F d\Bx G.
\end{equation}
The first case is trivial. Given a parameterized curve \( x = x(u) \), it just states
\begin{equation}\label{eqn:unpackingFundamentalTheorem:60}
\int_{u(0)}^{u(1)} du \PD{u}{}\lr{FG} = F(u(1))G(u(1)) - F(u(0))G(u(0)),
\end{equation}
for all multivectors \( F, G\), regardless of the signature of the underlying space.

The surface integral is more interesting. Let’s first look at the area element for this surface integral, which is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:80}
d^2 \Bx = d\Bx_u \wedge d \Bx_v.
\end{equation}
Geometrically, this has the area of the parallelogram spanned by \( d\Bx_u \) and \( d\Bx_v \), but weighted by the pseudoscalar of the space. This is explored algebraically in the following problem and illustrated in fig. 1.

fig. 1. 2D vector space and area element.

Problem: Expansion of 2D area bivector.

Let \( \setlr{e_1, e_2} \) be an orthonormal basis for a two dimensional space, with reciprocal frame \( \setlr{e^1, e^2} \). Expand the area bivector \( d^2 \Bx \) in coordinates relating the bivector to the Jacobian and the pseudoscalar.

Answer

With parameterization \( x = x(u,v) = x^\alpha e_\alpha = x_\alpha e^\alpha \), we have
\begin{equation}\label{eqn:unpackingFundamentalTheorem:120}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x^\alpha} e_\alpha } \wedge
\lr{ \PD{v}{x^\beta} e_\beta }
=
\PD{u}{x^\alpha}
\PD{v}{x^\beta}
e_\alpha \wedge e_\beta
=
\PD{(u,v)}{(x^1,x^2)} e_1 e_2,
\end{equation}
or
\begin{equation}\label{eqn:unpackingFundamentalTheorem:160}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x_\alpha} e^\alpha } \wedge
\lr{ \PD{v}{x_\beta} e^\beta }
=
\PD{u}{x_\alpha}
\PD{v}{x_\beta}
e^\alpha \wedge e^\beta
=
\PD{(u,v)}{(x_1,x_2)} e^1 e^2.
\end{equation}
The upper and lower index pseudoscalars are related by
\begin{equation}\label{eqn:unpackingFundamentalTheorem:180}
e^1 e^2 e_1 e_2 =
-e^1 e^2 e_2 e_1 =
-1,
\end{equation}
so with \( I = e_1 e_2 \),
\begin{equation}\label{eqn:unpackingFundamentalTheorem:200}
e^1 e^2 = -I^{-1},
\end{equation}
leaving us with
\begin{equation}\label{eqn:unpackingFundamentalTheorem:140}
d^2 \Bx
= \PD{(u,v)}{(x^1,x^2)} du dv\, I
= -\PD{(u,v)}{(x_1,x_2)} du dv\, I^{-1}.
\end{equation}
We see that the area bivector is proportional to either the upper or lower index Jacobian and to the pseudoscalar for the space.
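
Here is a small sympy spot check of \ref{eqn:unpackingFundamentalTheorem:120} for one concrete parameterization (a polar-like map, chosen arbitrarily), again using a \( 2 \times 2 \) real matrix realization of the orthonormal frame as a stand-in for the algebra.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True, positive=True)
e1 = sp.Matrix([[1, 0], [0, -1]])
e2 = sp.Matrix([[0, 1], [1, 0]])

x1, x2 = u * sp.cos(v), u * sp.sin(v)          # sample coordinates x^1(u,v), x^2(u,v)
x = x1 * e1 + x2 * e2
xu, xv = sp.diff(x, u), sp.diff(x, v)

wedge = sp.simplify((xu * xv - xv * xu) / 2)   # x_u ^ x_v, as the antisymmetric product
J = sp.simplify(sp.Matrix([[sp.diff(x1, u), sp.diff(x1, v)],
                           [sp.diff(x2, u), sp.diff(x2, v)]]).det())
print(J)                                       # u, the Jacobian
print(sp.simplify(wedge - J * e1 * e2))        # the zero matrix: x_u ^ x_v = J e1 e2
```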

We may write the fundamental theorem for a 2D space as
\begin{equation}\label{eqn:unpackingFundamentalTheorem:680}
\int_S du dv \, \PD{(u,v)}{(x^1,x^2)} F I \lrgrad G = \oint_{\partial S} F d\Bx G,
\end{equation}
where we have dispensed with the vector derivative and use the gradient instead, since they are identical in a two parameter two dimensional space. Of course, unless we are using \( x^1, x^2 \) as our parameterization, we still want the curvilinear representation of the gradient \( \grad = \Bx^u \PDi{u}{} + \Bx^v \PDi{v}{} \).

Problem: Standard basis expansion of fundamental surface relation.

For a parameterization \( x = x^1 e_1 + x^2 e_2 \), where \( \setlr{ e_1, e_2 } \) is a standard (orthogonal) basis, expand the fundamental theorem for surface integrals for the single sided \( F = 1 \) case. Consider functions \( G \) of each grade (scalar, vector, bivector.)

Answer

From \ref{eqn:unpackingFundamentalTheorem:140} we see that the fundamental theorem takes the form
\begin{equation}\label{eqn:unpackingFundamentalTheorem:220}
\int_S dx^1 dx^2\, F I \lrgrad G = \oint_{\partial S} F d\Bx G.
\end{equation}
In a Euclidean space, the operator \( I \lrgrad \) is a \( \pi/2 \) rotation of the gradient, but it has an analogous rotated structure in any metric:
\begin{equation}\label{eqn:unpackingFundamentalTheorem:240}
I \grad
=
e_1 e_2 \lr{ e^1 \partial_1 + e^2 \partial_2 }
=
-e_2 \partial_1 + e_1 \partial_2.
\end{equation}

  • \( F = 1 \) and \( G \in \bigwedge^0 \) or \( G \in \bigwedge^2 \). For \( F = 1 \) and scalar or bivector \( G \) we have
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:260}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } G = \oint_{\partial S} d\Bx G,
    \end{equation}
    where, for \( x^1 \in [x^1(0),x^1(1)] \) and \( x^2 \in [x^2(0),x^2(1)] \), the RHS written explicitly is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:280}
    \oint_{\partial S} d\Bx G
    =
    \int dx^1 e_1
    \lr{ G(x^1, x^2(1)) - G(x^1, x^2(0)) }
    - dx^2 e_2
    \lr{ G(x^1(1),x^2) - G(x^1(0), x^2) }.
    \end{equation}
    This is sketched in fig. 2. Since a 2D bivector \( G \) can be written as \( G = I g \), where \( g \) is a scalar, we may write the pseudoscalar case as
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:300}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } g = \oint_{\partial S} d\Bx g,
    \end{equation}
    after right multiplying both sides with \( I^{-1} \). Algebraically the scalar and pseudoscalar cases can be thought of as identical scalar relationships.
  • \( F = 1, G \in \bigwedge^1 \). For \( F = 1 \) and vector \( G \) the 2D fundamental theorem for surfaces can be split into scalar
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:320}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G = \oint_{\partial S} d\Bx \cdot G,
    \end{equation}
    and bivector relations
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:340}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G = \oint_{\partial S} d\Bx \wedge G.
    \end{equation}
    To expand \ref{eqn:unpackingFundamentalTheorem:320}, let
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:360}
    G = g_1 e^1 + g_2 e^2,
    \end{equation}
    for which
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:380}
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G
    =
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot
    \lr{ g_1 e^1 + g_2 e^2 }
    =
    \partial_2 g_1 - \partial_1 g_2,
    \end{equation}
    and
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:400}
    d\Bx \cdot G
    =
    \lr{ dx^1 e_1 - dx^2 e_2 } \cdot \lr{ g_1 e^1 + g_2 e^2 }
    =
    dx^1 g_1 - dx^2 g_2,
    \end{equation}
    so \ref{eqn:unpackingFundamentalTheorem:320} expands to
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:500}
    \int_S dx^1 dx^2\, \lr{ \partial_2 g_1 - \partial_1 g_2 }
    =
    \int
    \evalbar{dx^1 g_1}{\Delta x^2} - \evalbar{ dx^2 g_2 }{\Delta x^1}.
    \end{equation}
    This coordinate expansion illustrates how the pseudoscalar nature of the area element results in a duality transformation, as we end up with a curl like operation on the LHS, despite the dot product nature of the decomposition that we used. That can also be seen directly for vector \( G \), since
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:560}
    dA (I \grad) \cdot G
    =
    dA \gpgradezero{ I \grad G }
    =
    dA I \lr{ \grad \wedge G },
    \end{equation}
    since the scalar selection of \( I \lr{ \grad \cdot G } \) is zero. In the grade-2 relation \ref{eqn:unpackingFundamentalTheorem:340}, we expect a pseudoscalar cancellation on both sides, leaving a scalar (divergence-like) relationship. This time, we use upper index coordinates for the vector \( G \), letting
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:440}
    G = g^1 e_1 + g^2 e_2,
    \end{equation}
    so
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:460}
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G
    =
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge
    \lr{ g^1 e_1 + g^2 e_2 }
    =
    e_1 e_2 \lr{ \partial_1 g^1 + \partial_2 g^2 },
    \end{equation}
    and
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:480}
    d\Bx \wedge G
    =
    \lr{ dx^1 e_1 - dx^2 e_2 } \wedge
    \lr{ g^1 e_1 + g^2 e_2 }
    =
    e_1 e_2 \lr{ dx^1 g^2 + dx^2 g^1 }.
    \end{equation}
    So \ref{eqn:unpackingFundamentalTheorem:340}, after multiplication of both sides by \( I^{-1} \), is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:520}
    \int_S dx^1 dx^2\,
    \lr{ \partial_1 g^1 + \partial_2 g^2 }
    =
    \int
    \evalbar{dx^1 g^2}{\Delta x^2} + \evalbar{dx^2 g^1 }{\Delta x^1}.
    \end{equation}

As before, we’ve implicitly performed a duality transformation, and end up with a divergence operation. That can be seen directly without coordinate expansion, by rewriting the wedge as a grade two selection, and expanding the gradient action on the vector \( G \), as follows
\begin{equation}\label{eqn:unpackingFundamentalTheorem:580}
dA (I \grad) \wedge G
=
dA \gpgradetwo{ I \grad G }
=
dA I \lr{ \grad \cdot G },
\end{equation}
since \( I \lr{ \grad \wedge G } \) has only a scalar component.

 

fig. 2. Line integral around rectangular boundary.

Theorem 1.1: Green’s theorem [1].

Let \( S \) be a Jordan region with a piecewise-smooth boundary \( C \). If \( P, Q \) are continuously differentiable on an open set that contains \( S \), then
\begin{equation*}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} } = \oint P dx + Q dy.
\end{equation*}

Problem: Relationship to Green’s theorem.

If the space is Euclidean, show that \ref{eqn:unpackingFundamentalTheorem:500} and \ref{eqn:unpackingFundamentalTheorem:520} are both instances of Green’s theorem with suitable choices of \( P \) and \( Q \).

Answer

I will omit the subtleties related to general regions and consider just the case of an infinitesimal square region.

Start proof:

Let’s start with \ref{eqn:unpackingFundamentalTheorem:500}, with \( g_1 = P \) and \( g_2 = Q \), and \( x^1 = x, x^2 = y \). The LHS is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:600}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }.
\end{equation}
On the RHS we have
\begin{equation}\label{eqn:unpackingFundamentalTheorem:620}
\int \evalbar{dx P}{\Delta y} - \evalbar{ dy Q }{\Delta x}
=
\int dx \lr{ P(x, y_1) - P(x, y_0) } - \int dy \lr{ Q(x_1, y) - Q(x_0, y) }.
\end{equation}
This pair of integrals is plotted in fig. 3, from which we see that \ref{eqn:unpackingFundamentalTheorem:620} can be expressed as the line integral, leaving us with
\begin{equation}\label{eqn:unpackingFundamentalTheorem:640}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }
=
\oint dx P + dy Q,
\end{equation}
which is Green’s theorem over the infinitesimal square integration region.

For the equivalence of \ref{eqn:unpackingFundamentalTheorem:520} to Green’s theorem, let \( g^2 = P \), and \( g^1 = -Q \). Plugging into the LHS, we find the Green’s theorem integrand. On the RHS, the integrand expands to
\begin{equation}\label{eqn:unpackingFundamentalTheorem:660}
\evalbar{dx g^2}{\Delta y} + \evalbar{dy g^1 }{\Delta x}
=
dx \lr{ P(x,y_1) - P(x, y_0)}
+
dy \lr{ -Q(x_1, y) + Q(x_0, y)},
\end{equation}
which is exactly what we found in \ref{eqn:unpackingFundamentalTheorem:620}.

End proof.

 

fig. 3. Path for Green’s theorem.
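
A quick numeric confirmation (mine, not from the text) of Theorem 1.1 in the clockwise oriented form used here, for the arbitrary choices \( P = x y^2 \), \( Q = x^3 - y \) over the unit square:

```python
import numpy as np

P  = lambda x, y: x * y**2
Q  = lambda x, y: x**3 - y
Py = lambda x, y: 2 * x * y        # dP/dy
Qx = lambda x, y: 3 * x**2         # dQ/dx

# Area integral of (dP/dy - dQ/dx) over the unit square, midpoint rule.
n = 400
c = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(c, c)
area = np.sum(Py(X, Y) - Qx(X, Y)) / n**2

# Boundary integral of P dx + Q dy, traversed clockwise per the orientation note.
def segment(p0, p1, m=4000):
    t = (np.arange(m) + 0.5) / m
    x = p0[0] + t * (p1[0] - p0[0])
    y = p0[1] + t * (p1[1] - p0[1])
    dx, dy = (p1[0] - p0[0]) / m, (p1[1] - p0[1]) / m
    return np.sum(P(x, y) * dx + Q(x, y) * dy)

corners = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]   # clockwise
boundary = sum(segment(corners[k], corners[k + 1]) for k in range(4))
print(area, boundary)   # both approximately -0.5
```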

We may also relate multivector gradient integrals in 2D to integrals weighted by the normal along the bounding curve. That relationship is as follows.

Theorem 1.2: 2D gradient integrals.

\begin{equation*}
\begin{aligned}
\int J du dv \rgrad G &= \oint I^{-1} d\Bx G = \int J \lr{ \Bx^v du + \Bx^u dv } G \\
\int J du dv F \lgrad &= \oint F I^{-1} d\Bx = \int J F \lr{ \Bx^v du + \Bx^u dv },
\end{aligned}
\end{equation*}
where \( J = \partial(x^1, x^2)/\partial(u,v) \) is the Jacobian of the parameterization \( x = x(u,v) \). In terms of the coordinates \( x^1, x^2 \), this reduces to
\begin{equation*}
\begin{aligned}
\int dx^1 dx^2 \rgrad G &= \oint I^{-1} d\Bx G = \int \lr{ e^2 dx^1 + e^1 dx^2 } G \\
\int dx^1 dx^2 F \lgrad &= \oint F I^{-1} d\Bx = \int F \lr{ e^2 dx^1 + e^1 dx^2 }.
\end{aligned}
\end{equation*}
The vector \( I^{-1} d\Bx \) is orthogonal to the tangent vector along the boundary, and for Euclidean spaces it can be identified as the outwards normal.

Start proof:

Respectively setting \( F = 1 \), and \( G = 1\) in \ref{eqn:unpackingFundamentalTheorem:680}, we have
\begin{equation}\label{eqn:unpackingFundamentalTheorem:940}
\int I^{-1} d^2 \Bx \rgrad G = \oint I^{-1} d\Bx G,
\end{equation}
and
\begin{equation}\label{eqn:unpackingFundamentalTheorem:960}
\int F d^2 \Bx \lgrad I^{-1} = \oint F d\Bx I^{-1}.
\end{equation}
Starting with \ref{eqn:unpackingFundamentalTheorem:940} we find
\begin{equation}\label{eqn:unpackingFundamentalTheorem:700}
\int I^{-1} J du dv I \rgrad G = \oint I^{-1} d\Bx G,
\end{equation}
to find \( \int dx^1 dx^2 \rgrad G = \oint I^{-1} d\Bx G \), as desired. In terms of a parameterization \( x = x(u,v) \), the pseudoscalar for the space is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:720}
I = \frac{\Bx_u \wedge \Bx_v}{J},
\end{equation}
so
\begin{equation}\label{eqn:unpackingFundamentalTheorem:740}
I^{-1} = \frac{J}{\Bx_u \wedge \Bx_v}.
\end{equation}
Also note that \( \lr{\Bx_u \wedge \Bx_v}^{-1} = \Bx^v \wedge \Bx^u \), so
\begin{equation}\label{eqn:unpackingFundamentalTheorem:760}
I^{-1} = J \lr{ \Bx^v \wedge \Bx^u },
\end{equation}
and
\begin{equation}\label{eqn:unpackingFundamentalTheorem:780}
I^{-1} d\Bx
= I^{-1} \cdot d\Bx
= J \lr{ \Bx^v \wedge \Bx^u } \cdot \lr{ \Bx_u du - \Bx_v dv }
= J \lr{ \Bx^v du + \Bx^u dv },
\end{equation}
so the right acting gradient integral is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:800}
\int J du dv \grad G =
\int
\evalbar{J \Bx^v G}{\Delta v} du + \evalbar{J \Bx^u G dv}{\Delta u},
\end{equation}
which we write in abbreviated form as \( \int J \lr{ \Bx^v du + \Bx^u dv} G \).

For the \( G = 1 \) case, from \ref{eqn:unpackingFundamentalTheorem:960} we find
\begin{equation}\label{eqn:unpackingFundamentalTheorem:820}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1}.
\end{equation}
However, in a 2D space, regardless of metric, we have \( I a = -a I \) for any vector \( a \) (i.e. \( \grad \) or \( d\Bx\)), so we may commute the outer pseudoscalars in
\begin{equation}\label{eqn:unpackingFundamentalTheorem:840}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1},
\end{equation}
so
\begin{equation}\label{eqn:unpackingFundamentalTheorem:850}
-\int J du dv F I I^{-1} \lgrad = -\oint F I^{-1} d\Bx.
\end{equation}
After cancelling the negative sign on both sides, we have the claimed result.

To see that \( I a \), for any vector \( a \) is normal to \( a \), we can compute the dot product
\begin{equation}\label{eqn:unpackingFundamentalTheorem:860}
\lr{ I a } \cdot a
=
\gpgradezero{ I a a }
=
a^2 \gpgradezero{ I }
= 0,
\end{equation}
since the scalar selection of a bivector is zero. Since \( I^{-1} = \pm I \), the same argument shows that \( I^{-1} d\Bx \) must be orthogonal to \( d\Bx \).

End proof.
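
The orthogonality argument above is also easy to check numerically. The sketch below (an illustrative aside, using \( 2 \times 2 \) real matrix realizations of the algebra, one Euclidean and one of mixed signature) confirms that the scalar grade of \( I \Ba \Ba \) vanishes for a handful of random vectors \( \Ba \).

```python
import numpy as np

rng = np.random.default_rng(0)

def check(e1, e2, label):
    I = e1 @ e2
    scalar = lambda M: np.trace(M) / 2          # scalar grade = half the trace here
    for _ in range(5):
        alpha, beta = rng.normal(size=2)
        a = alpha * e1 + beta * e2
        # (I a) . a is the scalar grade of I a a = a^2 I, and I is traceless.
        assert abs(scalar(I @ a @ a)) < 1e-12
    print(label, ": I a is orthogonal to a")

# Euclidean R^2: e1^2 = e2^2 = +1.
check(np.diag([1.0, -1.0]), np.array([[0.0, 1.0], [1.0, 0.0]]), "Euclidean")
# Mixed signature: e1^2 = +1, e2^2 = -1.
check(np.diag([1.0, -1.0]), np.array([[0.0, 1.0], [-1.0, 0.0]]), "(+,-) signature")
```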

Let’s look at the geometry of the normal \( I^{-1} d\Bx \) in a couple of 2D vector spaces. We use an integration volume of a unit square to simplify the boundary term expressions.

  • Euclidean: With a parameterization \( x(u,v) = u\Be_1 + v \Be_2 \), and Euclidean basis vectors \( (\Be_1)^2 = (\Be_2)^2 = 1 \), the fundamental theorem integrated over the rectangle \( [x_0,x_1] \times [y_0,y_1] \) is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:880}
    \int dx dy \grad G =
    \int
    \Be_2 \lr{ G(x,y_1) - G(x,y_0) } dx +
    \Be_1 \lr{ G(x_1,y) - G(x_0,y) } dy.
    \end{equation}
    Each of the terms in the integrand above is illustrated in fig. 4, and we see that this is a path integral weighted by the outwards normal.

    fig. 4. Outwards oriented normal for Euclidean space.

  • Spacetime: Let \( x(u,v) = u \gamma_0 + v \gamma_1 \), where \( (\gamma_0)^2 = -(\gamma_1)^2 = 1 \). With \( u = t, v = x \), the gradient integral over a \([t_0,t_1] \times [x_0,x_1]\) region of spacetime is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:900}
    \begin{aligned}
    \int dt dx \grad G
    &=
    \int
    \gamma^1 dt \lr{ G(t, x_1) - G(t, x_0) }
    +
    \gamma^0 dx \lr{ G(t_1, x) - G(t_0, x) } \\
    &=
    \int
    \gamma_1 dt \lr{ -G(t, x_1) + G(t, x_0) }
    +
    \gamma_0 dx \lr{ G(t_1, x) - G(t_0, x) }
    .
    \end{aligned}
    \end{equation}
    With \( t \) plotted along the horizontal axis, and \( x \) along the vertical, each of the terms of this integrand is illustrated graphically in fig. 5. For this mixed signature space, there is no longer any good geometrical characterization of the normal.

    fig. 5. Orientation of the boundary normal for a spacetime basis.

  • Spacelike:
    Let \( x(u,v) = u \gamma_1 + v \gamma_2 \), where \( (\gamma_1)^2 = (\gamma_2)^2 = -1 \). With \( u = x, v = y \), the gradient integral over a \([x_0,x_1] \times [y_0,y_1]\) region of this space is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:920}
    \begin{aligned}
    \int dx dy \grad G
    &=
    \int
    \gamma^2 dx \lr{ G(x, y_1) - G(x, y_0) }
    +
    \gamma^1 dy \lr{ G(x_1, y) - G(x_0, y) } \\
    &=
    \int
    \gamma_2 dx \lr{ -G(x, y_1) + G(x, y_0) }
    +
    \gamma_1 dy \lr{ -G(x_1, y) + G(x_0, y) }
    .
    \end{aligned}
    \end{equation}
    Referring to fig. 6. where the elements of the integrand are illustrated, we see that the normal \( I^{-1} d\Bx \) for the boundary of this region can be characterized as inwards.

    fig. 6. Inwards oriented normal for a Dirac spacelike basis.

References

[1] S.L. Salas and E. Hille. Calculus: one and several variables. Wiley New York, 1990.

New version of classical mechanics notes

January 1, 2021 Uncategorized

I’ve posted a new version of my classical mechanics notes compilation.  This version is not yet live on amazon, but you shouldn’t buy a copy of this “book” anyways, as it is horribly rough (if you want a copy, grab the free PDF instead.)  [I am going to buy a copy so that I can continue to edit a paper copy of it, but nobody else should.]

This version includes additional background material on Space Time Algebra (STA), i.e. the geometric algebra name for the Dirac/Clifford-algebra in 3+1 dimensions.  In particular, I’ve added material on reciprocal frames, the gradient and vector derivatives, line and surface integrals and the fundamental theorem for both.  Some of the integration theory content might make sense to move to a different book, but I’ll keep it with the rest of these STA notes for now.

Relativistic multivector surface integrals

December 31, 2020 math and physics play


Background.

This post is a continuation of the previous multivector line integral posts.

Surface integrals.


We’ve now covered line integrals and the fundamental theorem for line integrals, so it’s now time to move on to surface integrals.

Definition 1.1: Surface integral.

Given a two variable parameterization \( x = x(u,v) \), we write \( d^2\Bx = \Bx_u \wedge \Bx_v du dv \), and call
\begin{equation*}
\int F d^2\Bx\, G,
\end{equation*}
a surface integral, where \( F,G \) are arbitrary multivector functions.

Like our multivector line integral, this is intrinsically multivector valued, with a product of \( F \) with arbitrary grades, a bivector \( d^2 \Bx \), and \( G \), also potentially with arbitrary grades. Let’s consider an example.

Problem: Surface area integral example.

Given the hyperbolic surface parameterization \( x(\rho,\alpha) = \rho \gamma_0 e^{-\vcap \alpha} \), where \( \vcap = \gamma_{20} \), evaluate the indefinite integral
\begin{equation}\label{eqn:relativisticSurface:40}
\int \gamma_1 e^{\gamma_{21}\alpha} d^2 \Bx\, \gamma_2.
\end{equation}

Answer

We have \( \Bx_\rho = \gamma_0 e^{-\vcap \alpha} \) and \( \Bx_\alpha = \rho\gamma_{2} e^{-\vcap \alpha} \), so
\begin{equation}\label{eqn:relativisticSurface:60}
\begin{aligned}
d^2 \Bx
&=
(\Bx_\rho \wedge \Bx_\alpha) d\rho d\alpha \\
&=
\gpgradetwo{
\gamma_{0} e^{-\vcap \alpha} \rho\gamma_{2} e^{-\vcap \alpha}
}
d\rho d\alpha \\
&=
\rho \gamma_{02} d\rho d\alpha,
\end{aligned}
\end{equation}
so the integral is
\begin{equation}\label{eqn:relativisticSurface:80}
\begin{aligned}
\int \rho \gamma_1 e^{\gamma_{21}\alpha} \gamma_{022} d\rho d\alpha
&=
-\inv{2} \rho^2 \int \gamma_1 e^{\gamma_{21}\alpha} \gamma_{0} d\alpha \\
&=
\frac{\gamma_{01}}{2} \rho^2 \int e^{\gamma_{21}\alpha} d\alpha \\
&=
\frac{\gamma_{01}}{2} \rho^2 \gamma^{12} e^{\gamma_{21}\alpha} \\
&=
\frac{\rho^2 \gamma_{20}}{2} e^{\gamma_{21}\alpha}.
\end{aligned}
\end{equation}
Because \( F \) and \( G \) were both vectors, the resulting integral could only have been a multivector with grades 0,2,4. As it happens, there were no scalar nor pseudoscalar grades in the end result, and we ended up with the spacetime plane between \( \gamma_0 \) and \( \gamma_2 e^{\gamma_{21}\alpha} \), the latter being a rotation of \(\gamma_2\) in the x,y plane. This is illustrated in fig. 1 (omitting scale and sign factors.)

fig. 1. Spacetime plane.
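
As an aside, this area element computation can be verified mechanically. The sympy sketch below builds the Dirac representation gamma matrices explicitly (one arbitrary faithful representation choice), and confirms that \( \Bx_\rho \Bx_\alpha = \rho \gamma_0 \gamma_2 \), which is already a pure bivector, and hence equal to the wedge \( \Bx_\rho \wedge \Bx_\alpha \).

```python
import sympy as sp

I2, Z = sp.eye(2), sp.zeros(2)
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])      # Pauli sigma_2

def block(A, B, C, D):
    return sp.Matrix.vstack(sp.Matrix.hstack(A, B), sp.Matrix.hstack(C, D))

g0 = block(I2, Z, Z, -I2)                     # gamma_0, Dirac representation
g2 = block(Z, s2, -s2, Z)                     # gamma_2

rho, alpha = sp.symbols('rho alpha', real=True, positive=True)
vhat = g2 * g0                                # vcap = gamma_{20}, with vcap^2 = +1
assert vhat * vhat == sp.eye(4)

E = sp.cosh(alpha) * sp.eye(4) - sp.sinh(alpha) * vhat    # e^{-vcap alpha}
x = rho * g0 * E

x_rho = sp.diff(x, rho)
x_alpha = sp.diff(x, alpha)
d2x = x_rho * x_alpha                         # dropping the d rho d alpha factor
print(sp.simplify(d2x - rho * g0 * g2) == sp.zeros(4))    # True
```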

Fundamental theorem for surfaces.

For line integrals we saw that \( d\Bx \cdot \grad = \gpgradezero{ d\Bx \partial } \), and obtained the fundamental theorem for multivector line integrals by omitting the grade selection and using the multivector operator \( d\Bx \partial \) in the integrand directly. We have the same situation for surface integrals. In particular, we know that the \(\mathbb{R}^3\) Stokes theorem can be expressed in terms of \( d^2 \Bx \cdot \spacegrad \).

Problem: GA form of 3D Stokes’ theorem integrand.

Given an \(\mathbb{R}^3\) vector field \( \Bf \), show that
\begin{equation}\label{eqn:relativisticSurface:180}
\int dA \ncap \cdot \lr{ \spacegrad \cross \Bf }
=
-\int \lr{d^2\Bx \cdot \spacegrad } \cdot \Bf.
\end{equation}

Answer

Let \( d^2 \Bx = I \ncap dA \), implicitly fixing the relative orientation of the bivector area element compared to the chosen surface normal direction.
\begin{equation}\label{eqn:relativisticSurface:200}
\begin{aligned}
\int \lr{d^2\Bx \cdot \spacegrad } \cdot \Bf
&=
\int dA \gpgradeone{I \ncap \spacegrad } \cdot \Bf \\
&=
\int dA \lr{ I \lr{ \ncap \wedge \spacegrad} } \cdot \Bf \\
&=
\int dA \gpgradezero{ I^2 \lr{ \ncap \cross \spacegrad} \Bf } \\
&=
-\int dA \lr{ \ncap \cross \spacegrad} \cdot \Bf \\
&=
-\int dA \ncap \cdot \lr{ \spacegrad \cross \Bf }.
\end{aligned}
\end{equation}

The moral of the story is that the conventional dual form of the \(\mathbb{R}^3\) Stokes’ theorem can be written directly by projecting the gradient onto the surface area element. Geometrically, this projection operation has a rotational effect as well, since for bivector \( B \), and vector \( x \), the bivector-vector dot product \( B \cdot x \) is the component of \( x \) that lies in the plane \( B \wedge x = 0 \), but also rotated 90 degrees.
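
The duality identity at the heart of that reduction, \( I \lr{ \ncap \wedge \Bg } = - \ncap \cross \Bg \) for vectors \( \ncap, \Bg \) (with an ordinary vector \( \Bg \) standing in for the gradient), is easy to spot check numerically. Here is a numpy sketch using the Pauli matrix representation of \(\mathbb{R}^3\), which is purely an illustration choice.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [s1, s2, s3]
I3 = s1 @ s2 @ s3                          # R^3 pseudoscalar (= i times identity)

def vec(v):                                # embed an R^3 vector in the Pauli algebra
    return sum(c * s for c, s in zip(v, basis))

rng = np.random.default_rng(1)
n, g = rng.normal(size=3), rng.normal(size=3)
wedge = (vec(n) @ vec(g) - vec(g) @ vec(n)) / 2       # n ^ g
assert np.allclose(I3 @ wedge, -vec(np.cross(n, g)))  # I (n ^ g) = -(n x g)
print("I (n ^ g) = -(n x g) holds for these random vectors")
```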

For multivector integration, we do not want an integral operator that includes such dot products. In the line integral case, we were able to achieve the same projective operation by using the vector derivative instead of a dot product, and we can do the same for the surface integral case. In particular

Theorem 1.1: Projection of gradient onto the tangent space.

Given a curvilinear representation of the gradient with respect to parameters \( u^0, u^1, u^2, u^3 \)
\begin{equation*}
\grad = \sum_\mu \Bx^\mu \PD{u^\mu}{},
\end{equation*}
the surface projection onto the tangent space associated with any two of those parameters, satisfies
\begin{equation*}
d^2 \Bx \cdot \grad = \gpgradeone{ d^2 \Bx \partial }.
\end{equation*}

Start proof:

Without loss of generality, we may pick \( u^0, u^1 \) as the parameters associated with the tangent space. The area element for the surface is
\begin{equation}\label{eqn:relativisticSurface:100}
d^2 \Bx = \Bx_0 \wedge \Bx_1 \,
du^0 du^1.
\end{equation}
Dotting this with the gradient gives
\begin{equation}\label{eqn:relativisticSurface:120}
\begin{aligned}
d^2 \Bx \cdot \grad
&=
du^0 du^1
\lr{ \Bx_0 \wedge \Bx_1 } \cdot \Bx^\mu \PD{u^\mu}{} \\
&=
du^0 du^1
\lr{
\Bx_0
\lr{\Bx_1 \cdot \Bx^\mu }
-
\Bx_1
\lr{\Bx_0 \cdot \Bx^\mu }
}
\PD{u^\mu}{} \\
&=
du^0 du^1
\lr{
\Bx_0 \PD{u^1}{}
-
\Bx_1 \PD{u^0}{}
}.
\end{aligned}
\end{equation}
On the other hand, the vector derivative for this surface is
\begin{equation}\label{eqn:relativisticSurface:140}
\partial
=
\Bx^0 \PD{u^0}{}
+
\Bx^1 \PD{u^1}{},
\end{equation}
so
\begin{equation}\label{eqn:relativisticSurface:160}
\begin{aligned}
\gpgradeone{d^2 \Bx \partial}
&=
du^0 du^1\,
\lr{ \Bx_0 \wedge \Bx_1 } \cdot
\lr{
\Bx^0 \PD{u^0}{}
+
\Bx^1 \PD{u^1}{}
} \\
&=
du^0 du^1
\lr{
\Bx_0 \PD{u^1}{}
-
\Bx_1 \PD{u^0}{}
}.
\end{aligned}
\end{equation}

End proof.

We now want to formulate the geometric algebra form of the fundamental theorem for surface integrals.

Theorem 1.2: Fundamental theorem for surface integrals.

Given multivector functions \( F, G \), and surface area element \( d^2 \Bx = \lr{ \Bx_u \wedge \Bx_v }\, du dv \), associated with a two parameter curve \( x(u,v) \), then
\begin{equation*}
\int_S F d^2\Bx \lrpartial G = \int_{\partial S} F d^1\Bx G,
\end{equation*}
where \( S \) is the integration surface, \( \partial S \) designates its boundary, and the line integral on the RHS is really shorthand for
\begin{equation*}
\int
\evalbar{ \lr{ F (-d\Bx_v) G } }{\Delta u}
+
\int
\evalbar{ \lr{ F (d\Bx_u) G } }{\Delta v},
\end{equation*}
which is a line integral that traverses the boundary of the surface with the opposite orientation to the circulation of the area element.

Start proof:

The vector derivative for this surface is
\begin{equation}\label{eqn:relativisticSurface:220}
\partial =
\Bx^u \PD{u}{}
+
\Bx^v \PD{v}{},
\end{equation}
so
\begin{equation}\label{eqn:relativisticSurface:240}
F d^2\Bx \lrpartial G
=
\PD{u}{} \lr{ F d^2\Bx\, \Bx^u G }
+
\PD{v}{} \lr{ F d^2\Bx\, \Bx^v G },
\end{equation}
where \( d^2\Bx\, \Bx^u \) is held constant with respect to \( u \), and \( d^2\Bx\, \Bx^v \) is held constant with respect to \( v \) (since the partials of the vector derivative act on \( F, G \), but not on the area element, nor on the reciprocal vectors of \( \lrpartial \) itself.) Note that
\begin{equation}\label{eqn:relativisticSurface:260}
d^2\Bx \wedge \Bx^u
=
du dv\, \lr{ \Bx_u \wedge \Bx_v } \wedge \Bx^u = 0,
\end{equation}
since \( \Bx^u \in \mathrm{span} \setlr{ \Bx_u, \Bx_v } \), so
\begin{equation}\label{eqn:relativisticSurface:280}
\begin{aligned}
d^2\Bx\, \Bx^u
&=
d^2\Bx \cdot \Bx^u
+
d^2\Bx \wedge \Bx^u \\
&=
d^2\Bx \cdot \Bx^u \\
&=
du dv\, \lr{ \Bx_u \wedge \Bx_v } \cdot \Bx^u \\
&=
-du dv\, \Bx_v.
\end{aligned}
\end{equation}
Similarly
\begin{equation}\label{eqn:relativisticSurface:300}
\begin{aligned}
d^2\Bx\, \Bx^v
&=
d^2\Bx \cdot \Bx^v \\
&=
du dv\, \lr{ \Bx_u \wedge \Bx_v } \cdot \Bx^v \\
&=
du dv\, \Bx_u.
\end{aligned}
\end{equation}
This leaves us with
\begin{equation}\label{eqn:relativisticSurface:320}
F d^2\Bx \lrpartial G
=
-du dv\,
\PD{u}{} \lr{ F \Bx_v G }
+
du dv\,
\PD{v}{} \lr{ F \Bx_u G },
\end{equation}
where \( \Bx_v, \Bx_u \) are held constant with respect to \( u,v \) respectively. Fortuitously, this constant condition can be dropped, since the antisymmetry of the wedge in the area element results in perfect cancellation. If these line elements are not held constant then
\begin{equation}\label{eqn:relativisticSurface:340}
\PD{v}{} \lr{ F \Bx_u G }
-
\PD{u}{} \lr{ F \Bx_v G }
=
F \lr{
\PD{v}{\Bx_u}
-
\PD{u}{\Bx_v}
} G
+
\lr{
\PD{v}{F} \Bx_u G
+
F \Bx_u \PD{v}{G}
}
-
\lr{
\PD{u}{F} \Bx_v G
+
F \Bx_v \PD{u}{G}
}
,
\end{equation}
but the mixed partial contribution is zero
\begin{equation}\label{eqn:relativisticSurface:360}
\begin{aligned}
\PD{v}{\Bx_u}
-
\PD{u}{\Bx_v}
&=
\PD{v}{} \PD{u}{x}
-
\PD{u}{} \PD{v}{x} \\
&=
0,
\end{aligned}
\end{equation}
by equality of mixed partials. We have two perfect differentials, and can evaluate each of these integrals
\begin{equation}\label{eqn:relativisticSurface:380}
\begin{aligned}
\int F d^2\Bx \lrpartial G
&=
-\int
du dv\,
\PD{u}{} \lr{ F \Bx_v G }
+
\int
du dv\,
\PD{v}{} \lr{ F \Bx_u G } \\
&=
-\int
dv\,
\evalbar{ \lr{ F \Bx_v G } }{\Delta u}
+
\int
du\,
\evalbar{ \lr{ F \Bx_u G } }{\Delta v} \\
&=
\int
\evalbar{ \lr{ F (-d\Bx_v) G } }{\Delta u}
+
\int
\evalbar{ \lr{ F (d\Bx_u) G } }{\Delta v}.
\end{aligned}
\end{equation}
We use the shorthand \( d^1 \Bx = d\Bx_u - d\Bx_v \) to write
\begin{equation}\label{eqn:relativisticSurface:400}
\int_S F d^2\Bx \lrpartial G = \int_{\partial S} F d^1\Bx G,
\end{equation}
with the understanding that this is really an instruction to evaluate the line integrals in the last step of \ref{eqn:relativisticSurface:380}.

End proof.
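
The key algebraic step in that proof, namely \ref{eqn:relativisticSurface:280} and \ref{eqn:relativisticSurface:300}, can also be checked numerically. The sketch below uses a Euclidean GA(2) matrix realization and one arbitrary non-orthogonal pair of tangent vectors; the identities themselves are metric independent, so this is only a representative spot check.

```python
import numpy as np

e1 = np.array([[1.0, 0.0], [0.0, -1.0]])    # Euclidean GA(2): e1^2 = e2^2 = +1
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])

xu = 1.0 * e1 + 0.5 * e2                     # arbitrary, non-orthogonal tangent vectors
xv = -0.25 * e1 + 2.0 * e2

B = (xu @ xv - xv @ xu) / 2                  # x_u ^ x_v (the du dv factor is dropped)
Binv = np.linalg.inv(B)                      # matrix inverse = inverse of the blade
xu_up = (xv @ Binv - Binv @ xv) / 2          # x^u = x_v . (x_u ^ x_v)^{-1}
xv_up = -(xu @ Binv - Binv @ xu) / 2         # x^v = -x_u . (x_u ^ x_v)^{-1}

assert np.allclose(B @ xu_up, -xv)           # d^2x x^u = -du dv x_v
assert np.allclose(B @ xv_up, xu)            # d^2x x^v = +du dv x_u
print("reciprocal frame contractions match the proof")
```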

Problem: Integration in the t,y plane.

Let \( x(t,y) = c t \gamma_0 + y \gamma_2 \). Write out both sides of the fundamental theorem explicitly.

Answer

Let’s designate the tangent basis vectors as
\begin{equation}\label{eqn:relativisticSurface:420}
\Bx_0 = \PD{t}{x} = c \gamma_0,
\end{equation}
and
\begin{equation}\label{eqn:relativisticSurface:440}
\Bx_2 = \PD{y}{x} = \gamma_2,
\end{equation}
so the vector derivative is
\begin{equation}\label{eqn:relativisticSurface:460}
\partial
= \inv{c} \gamma^0 \PD{t}{}
+ \gamma^2 \PD{y}{},
\end{equation}
and the area element is
\begin{equation}\label{eqn:relativisticSurface:480}
d^2 \Bx = c \gamma_0 \gamma_2 \, dt \, dy.
\end{equation}
The fundamental theorem of surface integrals is just a statement that
\begin{equation}\label{eqn:relativisticSurface:500}
\int_{t_0}^{t_1} c dt
\int_{y_0}^{y_1} dy
F \gamma_0 \gamma_2 \lr{
\inv{c} \gamma^0 \PD{t}{}
+ \gamma^2 \PD{y}{}
} G
=
\int F \lr{ c \gamma_0 dt - \gamma_2 dy } G,
\end{equation}
where the RHS, when stated explicitly, really means
\begin{equation}\label{eqn:relativisticSurface:520}
\begin{aligned}
\int &F \lr{ c \gamma_0 dt - \gamma_2 dy } G
=
\int_{t_0}^{t_1} c dt \lr{ F(t,y_1) \gamma_0 G(t, y_1) - F(t,y_0) \gamma_0 G(t, y_0) } \\
&\qquad -
\int_{y_0}^{y_1} dy \lr{ F(t_1,y) \gamma_2 G(t_1, y) - F(t_0,y) \gamma_2 G(t_0, y) }.
\end{aligned}
\end{equation}
In this particular case, since \( \Bx_0 = c \gamma_0, \Bx_2 = \gamma_2 \) are both constant functions that depend on neither \( t \) nor \( y \), it is easy to derive the full expansion of \ref{eqn:relativisticSurface:520} directly from the LHS of \ref{eqn:relativisticSurface:500}.

Problem: A cylindrical hyperbolic surface.

Generalizing the example surface integral from \ref{eqn:relativisticSurface:40}, let
\begin{equation}\label{eqn:relativisticSurface:540}
x(\rho, \alpha) = \rho e^{-\vcap \alpha/2} x(0,1) e^{\vcap \alpha/2},
\end{equation}
where \( \rho \) is a scalar, \( \vcap = \cos\theta_k\gamma_{k0} \) is a unit spatial bivector, and \( \cos\theta_k \) are the direction cosines of its spatial direction. This is a composite transformation, where the \( \alpha \) variation boosts the \( x(0,1) \) four-vector, and the \( \rho \) parameter scales the magnitude of this vector, resulting in \( x \) spanning a hyperbolic region of spacetime.

Compute the tangent and reciprocal basis, the area element for the surface, and explicitly state both sides of the fundamental theorem.

Answer

For the tangent basis vectors we have
\begin{equation}\label{eqn:relativisticSurface:560}
\Bx_\rho = \PD{\rho}{x} =
e^{-\vcap \alpha/2} x(0,1) e^{\vcap \alpha/2} = \frac{x}{\rho},
\end{equation}
and
\begin{equation}\label{eqn:relativisticSurface:580}
\Bx_\alpha = \PD{\alpha}{x} =
\lr{-\vcap/2} x
+
x \lr{ \vcap/2 }
=
x \cdot \vcap.
\end{equation}
These vectors \( \Bx_\rho, \Bx_\alpha \) are orthogonal, as \( x \cdot \vcap \) is the projection of \( x \) onto the spacetime plane \( x \wedge \vcap = 0 \), but rotated so that \( x \cdot \lr{ x \cdot \vcap } = 0 \). Because of this orthogonality, the vector derivative for this tangent space is
\begin{equation}\label{eqn:relativisticSurface:600}
\partial =
\inv{x \cdot \vcap} \PD{\alpha}{}
+
\frac{\rho}{x}
\PD{\rho}{}
.
\end{equation}
The area element is
\begin{equation}\label{eqn:relativisticSurface:620}
\begin{aligned}
d^2 \Bx
&=
d\rho d\alpha\,
\frac{x}{\rho} \wedge \lr{ x \cdot \vcap } \\
&=
\inv{\rho} d\rho d\alpha\,
x \lr{ x \cdot \vcap }
.
\end{aligned}
\end{equation}
The full statement of the fundamental theorem for this surface is
\begin{equation}\label{eqn:relativisticSurface:640}
\int_S
d\rho d\alpha\,
F
\lr{
\inv{\rho} x \lr{ x \cdot \vcap }
}
\lr{
\inv{x \cdot \vcap} \PD{\alpha}{}
+
\frac{\rho}{x}
\PD{\rho}{}
}
G
=
\int_{\partial S}
F \lr{ d\rho \frac{x}{\rho} - d\alpha \lr{ x \cdot \vcap } } G.
\end{equation}
As in the previous example, due to the orthogonality of the tangent basis vectors, it’s easy to find the RHS directly from the LHS.

Problem: Simple example with non-orthogonal tangent space basis vectors.

Let \( x(u,v) = u a + v b \), where \( u,v \) are scalar parameters, and \( a, b \) are non-null and non-colinear constant four-vectors. Write out the fundamental theorem for surfaces with respect to this parameterization.

Answer

The tangent basis vectors are just \( \Bx_u = a, \Bx_v = b \), with reciprocals
\begin{equation}\label{eqn:relativisticSurface:660}
\Bx^u = \Bx_v \cdot \inv{ \Bx_u \wedge \Bx_v } = b \cdot \inv{ a \wedge b },
\end{equation}
and
\begin{equation}\label{eqn:relativisticSurface:680}
\Bx^v = -\Bx_u \cdot \inv{ \Bx_u \wedge \Bx_v } = -a \cdot \inv{ a \wedge b }.
\end{equation}
The fundamental theorem, with respect to this surface, when written out explicitly takes the form
\begin{equation}\label{eqn:relativisticSurface:700}
\int F \, du dv\, \lr{ a \wedge b } \inv{ a \wedge b } \cdot \lr{ a \PD{v}{} - b \PD{u}{} } G
=
\int F \lr{ a du - b dv } G.
\end{equation}
This is a good example to illustrate the geometry of the line integral circulation.
Suppose that we are integrating over \( u \in [0,1], v \in [0,1] \). In this case, the line integral really means
\begin{equation}\label{eqn:relativisticSurface:720}
\begin{aligned}
\int &F \lr{ a du - b dv } G
=
+
\int F(u,1) (+a du) G(u,1)
+
\int F(u,0) (-a du) G(u,0) \\
&\quad+
\int F(1,v) (-b dv) G(1,v)
+
\int F(0,v) (+b dv) G(0,v),
\end{aligned}
\end{equation}
which is a path around the spacetime parallelogram spanned by \( u, v \), as illustrated in fig. 2, which shows the orientation of the bivector area element with arrows around the exterior of the parallelogram: \( 0 \rightarrow a \rightarrow a + b \rightarrow b \rightarrow 0 \).

fig. 2. Line integral orientation.