Some slices.

As a follow-up to the previous post, here is a bit more experimentation with Paraview slice filtering. This time I saved all the point data with the escape time counts, and rendered it with a few different contours.

The default slice filter places the plane in the x-y orientation:

but we can also tilt it in the Paraview render UI

and suppress the contour view to see just the slice

As a very GUI-challenged user, I don't find the interface particularly intuitive, but I have at least figured out this one particular slicing task, which is kind of cool.  It's impressive that the UI can drive interesting computational tasks without having to regenerate or reload any of the raw data itself.  This time I was using the Mac OS X Paraview client, which is nicer looking than the Windows version, but has some weird glitches in the file dialogs.

Analysis.

The graphing play above shows some apparent rotational symmetry of our vector equivalent to the Mandelbrot equation

\Bx \rightarrow \Bx \Be_1 \Bx + \Bc.

It was not clear to me whether this symmetry really existed, as there were artifacts in the plots that made it appear that there was irregularity. However, some thought shows that this apparent irregularity is strictly due to sampling error, and perhaps also due to limitations in the plotting software, as such an uneven surface is probably tricky to deal with.

To see this, here are the first few iterations of the Mandelbrot sequence for an arbitrary starting vector $$\Bc$$.

\begin{aligned}
\Bx_0 &= \Bc \\
\Bx_1 &= \Bc \Be_1 \Bc + \Bc \\
\Bx_2 &= \lr{ \Bc \Be_1 \Bc + \Bc } \Be_1 \lr{ \Bc \Be_1 \Bc + \Bc } + \Bc \Be_1 \Bc + \Bc.
\end{aligned}

Now, what happens when we rotate the starting vector $$\Bc$$ in the $$y-z$$ plane? The rotor for such a rotation is

R = \exp\lr{ e_{23} \theta/2 },

and the rotation acts as

\Bc \rightarrow R \Bc \tilde{R}.

Observe that if $$\Bc$$ is parallel to the x-axis, then this rotation leaves the starting point invariant, as $$\Be_1$$ commutes with $$R$$. That is

R \Be_1 \tilde{R} =
\Be_1 R \tilde{R} = \Be_1.

Let $$\Bc' = R \Bc \tilde{R}$$, so that

\Bx_0' = R \Bc \tilde{R} = R \Bx_0 \tilde{R}.

\begin{aligned}
\Bx_1'
&= R \Bc \tilde{R} \Be_1 R \Bc \tilde{R} + R \Bc \tilde{R} \\
&= R \Bc \Be_1 \Bc \tilde{R} + R \Bc \tilde{R} \\
&= R \lr{ \Bc \Be_1 \Bc + \Bc } \tilde{R} \\
&= R \Bx_1 \tilde{R}.
\end{aligned}

\label{eqn:m2:n}
\begin{aligned}
\Bx_2'
&= \Bx_1' \Be_1 \Bx_1' + \Bc' \\
&= R \Bx_1 \tilde{R} \Be_1 R \Bx_1 \tilde{R} + R \Bc \tilde{R} \\
&= R \Bx_1 \Be_1 \Bx_1 \tilde{R} + R \Bc \tilde{R} \\
&= R \lr{ \Bx_1 \Be_1 \Bx_1 + \Bc } \tilde{R} \\
&= R \Bx_2 \tilde{R}.
\end{aligned}

The pattern is clear. If we rotate the starting point in the y-z plane, iterating the Mandelbrot sequence results in precisely the same rotation of the x-y plane Mandelbrot sequence. So the apparent rotational symmetry in the 3D iteration of the Mandelbrot vector equation is exactly that. This is an unfortunately boring 3D fractal. All of the interesting fractal nature occurs in the 2D plane, and the rest is just a consequence of rotating that image around the x-axis. We get some interesting fractal artifacts if we slice the rotated Mandelbrot image.

Some 3D renderings of the Mandelbrot set.

As mentioned previously, using geometric algebra we can convert the iterative equation for the Mandelbrot set from complex number form

z \rightarrow z^2 + c,

to an equivalent vector form

\mathbf{x} \rightarrow \mathbf{x} \mathbf{e} \mathbf{x} + \mathbf{c},

where $$\mathbf{e}$$ represents the x-axis (say). Geometrically, each iteration takes $$\mathbf{e}$$ and reflects it about the direction of $$\mathbf{x}$$, then scales that by $$\mathbf{x}^2$$ and adds $$\mathbf{c}$$.

To get the usual 2D Mandelbrot set, one iterates with vectors that lie only in the x-y plane, but we can treat the Mandelbrot set as a 3D solid if we remove the x-y plane restriction.

Last time I animated slices of the 3D set, but after a whole lot of messing around I managed to save the data for all the interior points of the 3D set in netCDF format, and render the solid using Paraview. Paraview has tons of filters available, and experimenting with them is pretty time consuming, but here are some initial screenshots of the 3D Mandelbrot set:

It's interesting that much of the characteristic detail of the Mandelbrot set is not visible in the 3D volume, but if we slice that volume, you can see it again.  Here's a slice taken close to the z=0 plane (but far enough away that the "CN tower" portion of the set is not visible):

You can also see some of that detail if the opacity of the rendering is turned way down:

If you look carefully at the images above, you'll see that the axis labels are wrong.  I think that I've screwed up one of the stride-related parameters to my putVar call, so I end up with x and z transposed in the axis labels when visualized.

Visualizing the 3D Mandelbrot set.

February 1, 2021 C/C++ development and debugging. 3 comments

In “Geometric Algebra for Computer Science” is a fractal problem based on a vectorization of the Mandelbrot equation, which allows for generalization to $$N$$ dimensions.

I finally got around to trying the 3D variation of this problem.  Recall that the Mandelbrot set is a visualization of iteration of the following complex number equation:

z \rightarrow z^2 + c,

where the idea is that $$z$$ starts as the constant $$c$$, and if this sequence remains bounded, then the point $$c$$ is in the set.

The idea in the problem is that this equation can be cast as a vector equation, instead of a complex number equation. All we have to do is set $$z = \Be_1 \Bx$$, where $$\Be_1$$ is the x-axis unit vector, and $$\Bx$$ is an $$\mathbb{R}^2$$ vector. Expanding in coordinates, with $$\Bx = \Be_1 x + \Be_2 y$$, we have

z
= \Be_1 \lr{ \Be_1 x + \Be_2 y }
= x + \Be_1 \Be_2 y,

but since the bivector $$\Be_1 \Be_2$$ squares to $$-1$$, we can represent complex numbers as even grade multivectors. Making the same substitution in the Mandelbrot equation, we have

\Be_1 \Bx \rightarrow \Be_1 \Bx \Be_1 \Bx + \Be_1 \Bc,

or

\Bx \rightarrow \Bx \Be_1 \Bx + \Bc.

Voila! This is a vector version of the Mandelbrot equation, and we can use it in 2, 3, or $$N$$ dimensions, as desired.  Observe that despite all the vector products above, the result is still a vector, since $$\Bx \Be_1 \Bx$$ is the geometric algebra form of a reflection of $$\Be_1$$ about the direction of $$\Bx$$, scaled by $$\Bx^2$$.

The problem with generalizing this from 2D is really one of visualization. How can we visualize a 3D Mandelbrot set? One idea I had was to use ray tracing, so that only the points on the surface visible from the desired viewpoint need be evaluated. I don't think I've ever written a ray tracer, but I thought that there has got to be a quick and dirty way to do this.  Also, figuring out how to make a ray tracer interact with an irregular surface like this is probably non-trivial!

What I did instead was a brute force evaluation of all the points in the upper half space around the origin, one slice of the set at a time. Here's the result:

Code for the visualization can be found on github. I've used Pauli matrices to do the evaluation, which is actually pretty quick (but slower than plain std::complex<double> evaluation), and the C++ ImageMagick API to save individual png files for the slices. There are better coloring schemes for the Mandelbrot set, and if I scrounge up some more time, I may try one of those instead.

Example of PL/I macro

January 27, 2021 Mainframe No comments

Up until last week PL/I macros were a bit of a mystery.  Most of the ones that I’d seen in customer code were impressively inscrutable, and if I had to look at any of them, my reaction was to throw my hands in the air and plead with the compiler backend guys for help.  Implementing one such macro has been very helpful to understanding how these work.

Here is a C program that roughly models some PL/I code of interest

The documentation for the ‘foo’ function says of the final return code parameter that it is 12 bytes long, and that the ‘rcvalues.h’ header file has a set of RCNNN constants and a RCCHECK macro that can be used to test for any one of those constants.  A possible C implementation of that header might look something like:

/* rcvalues.h */
/* Constant objects, rather than bare literal macros, so that RCCHECK can
 * take their address (mirroring the named variables of the PL/I copybook). */
static const long long RC000 = 0x0000000000000000LL;
static const long long RC001 = 0x0000000123456789LL;
/* ... */

#define RCCHECK( urc, crc ) ( memcmp( &(urc), &(crc), 8 ) == 0 )


PL/I APIs do not typically use modern constructs like typedefs.  The closest that I have seen, for an API header file (copybook in the mainframe lingo), is to declare a variable (which becomes a local variable with a specific name in the including module), which the programmer can refer to using the LIKE keyword, as in the following example:

I believe there is also a DEFINE keyword available in newer PL/I compilers, which provides a typedef like mechanism, but most existing code probably doesn’t use such new-fangled nonsense, when cut and paste has far superior maintenance characteristics.  For that reason, the API would be unlikely to have a typedef equivalent for the return code structure.  Instead, the PL/I equivalent of the C code above, would probably look like:

(i.e. the C code is really modeled on the PL/I code of this form, and if this was a C API, the API would have a struct declaration or a typedef for the return code structure)

The RCNNN constants would actually be found as named variables (not immutable constants) in the copy book, perhaps declared something like:

I struggled a bit to figure out what the PL/I equivalent of my C RCCHECK macro would be.  The following inner function correctly did the required type casting and comparisons:

The implementation is very long, since the entire declaration of the input parameter type has to be duplicated.

If I were to put this RCCHECK implementation above into my RCVALUES.inc header file, it would only work if all the customer declarations of their return code structure objects were field by field compatible.  What I really want is for my RCCHECK function to take the address of the parameter, and pass that instead of the underlying type.  Figuring out how to do that was not at all obvious, but with some help, I was eventually able to construct a PL/I macro (with a helper inner-function) of the following form:

It’s clearly no longer a one liner.  Some notes on this PL/I macro:

• The PL/I macro body looks like a regular PL/I function, but the begin-PROCEDURE and END statements start with % (% is not part of the PROC name.)
• Macro parameters and return values are explicit strings, regardless of the types of the parameters that were actually passed.
• In PL/I the || symbol is used for string concatenation, so this constructs output that inserts an ADDR() call around ARG1 token and then passes the ARG2 token as is.
• I don’t know if there’s a way to implement this macro in a way that doesn’t require a helper function, and still have the output work in the context of an IF statement.
• You have to explicitly enable the macro, using %ACTIVATE.  In my case, without %ACTIVATE, the RCCHECK symbol ends up as an undeclared external entry, and there was no call to the @RCCHECK_HELPER function $${}^{[1]}$$.
• Observe that the PL/I macro provides a mechanism to jam whatever you want into the code, as the compiler’s macro preprocessor replaces the macro call tokens with the string that you have provided, leaving that string for the final compiler pass to interpret instead.

If I compile the code using this macro version of RCCHECK, the preprocessor output looks like:

I’m still pretty horrified at some of the macros that I’ve seen in customer code — they almost seem like the source equivalent of self modifying code.  You can’t figure out what is going on without also looking at all the output of the precompiler passes.  This is especially evil, since you can write PL/I preprocessor macros that generate preprocessor macros and require multiple preprocessor passes to produce the final desired output!

Footnotes

[1] Note that @ is a valid PL/I character to use in a symbol name, as are # and $, so if you want your functions to look like swear words, this is a language where that is possible. Something like the following is probably valid PL/I: V = #@$1A#@(1);


For added entertainment, your file names (i.e. PDS member names) can also be like '#@$1A#@'. Storing files with names like that on a Unix filesystem results in hours of fun, as you are then left with the task of figuring out how to properly quote file names with embedded $'s and #'s in scripts and makefiles.

Notes.

Due to limitations in the MathJax-Latex package, all the oriented integrals in this blog post should be interpreted as having a clockwise orientation. [See the PDF version of this post for more sophisticated formatting.]

Guts.

Given a two dimensional generating vector space, there are two instances of the fundamental theorem for multivector integration
\label{eqn:unpackingFundamentalTheorem:20}
\int_S F d\Bx \lrpartial G = \evalbar{F G}{\Delta S},

and
\label{eqn:unpackingFundamentalTheorem:40}
\int_S F d^2\Bx \lrpartial G = \oint_{\partial S} F d\Bx G.

The first case is trivial. Given a parameterized curve $$x = x(u)$$, it just states
\label{eqn:unpackingFundamentalTheorem:60}
\int_{u(0)}^{u(1)} du \PD{u}{}\lr{FG} = F(u(1))G(u(1)) - F(u(0))G(u(0)),

for all multivectors $$F, G$$, regardless of the signature of the underlying space.

The surface integral is more interesting. Let’s first look at the area element for this surface integral, which is
\label{eqn:unpackingFundamentalTheorem:80}
d^2 \Bx = d\Bx_u \wedge d \Bx_v.

Geometrically, this has the area of the parallelogram spanned by $$d\Bx_u$$ and $$d\Bx_v$$, but weighted by the pseudoscalar of the space. This is explored algebraically in the following problem and illustrated in fig. 1.

fig. 1. 2D vector space and area element.

Problem: Expansion of 2D area bivector.

Let $$\setlr{e_1, e_2}$$ be an orthonormal basis for a two dimensional space, with reciprocal frame $$\setlr{e^1, e^2}$$. Expand the area bivector $$d^2 \Bx$$ in coordinates relating the bivector to the Jacobian and the pseudoscalar.

With parameterization $$x = x(u,v) = x^\alpha e_\alpha = x_\alpha e^\alpha$$, we have
\label{eqn:unpackingFundamentalTheorem:120}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x^\alpha} e_\alpha } \wedge
\lr{ \PD{v}{x^\beta} e_\beta }
=
\PD{u}{x^\alpha}
\PD{v}{x^\beta}
e_\alpha \wedge
e_\beta
=
\PD{(u,v)}{(x^1,x^2)} e_1 e_2,

or
\label{eqn:unpackingFundamentalTheorem:160}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x_\alpha} e^\alpha } \wedge
\lr{ \PD{v}{x_\beta} e^\beta }
=
\PD{u}{x_\alpha}
\PD{v}{x_\beta}
e^\alpha \wedge
e^\beta
=
\PD{(u,v)}{(x_1,x_2)} e^1 e^2.

The upper and lower index pseudoscalars are related by
\label{eqn:unpackingFundamentalTheorem:180}
e^1 e^2 e_1 e_2 =
-e^1 e^2 e_2 e_1 =
-1,

so with $$I = e_1 e_2$$,
\label{eqn:unpackingFundamentalTheorem:200}
e^1 e^2 = -I^{-1},

leaving us with
\label{eqn:unpackingFundamentalTheorem:140}
d^2 \Bx
= \PD{(u,v)}{(x^1,x^2)} du dv\, I
= -\PD{(u,v)}{(x_1,x_2)} du dv\, I^{-1}.

We see that the area bivector is proportional to either the upper or lower index Jacobian and to the pseudoscalar for the space.
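As a concrete example, consider the polar parameterization $$x = x(r,\theta) = r \cos\theta e_1 + r \sin\theta e_2$$ of a Euclidean plane, for which $$\Bx_r = \cos\theta e_1 + \sin\theta e_2$$ and $$\Bx_\theta = r \lr{ -\sin\theta e_1 + \cos\theta e_2 }$$, so

\begin{aligned}
d^2 \Bx
&= dr d\theta\, \Bx_r \wedge \Bx_\theta \\
&= r \lr{ \cos^2\theta + \sin^2\theta } dr d\theta\, e_1 e_2 \\
&= r\, dr d\theta\, I.
\end{aligned}

This is the familiar polar form of the area element, with Jacobian $$\PD{(r,\theta)}{(x^1,x^2)} = r$$, but weighted by the pseudoscalar.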

We may write the fundamental theorem for a 2D space as
\label{eqn:unpackingFundamentalTheorem:680}
\int_S du dv \, \PD{(u,v)}{(x^1,x^2)} F I \lrgrad G = \oint_{\partial S} F d\Bx G,

where we have dispensed with the vector derivative and use the gradient instead, since they are identical in a two parameter two dimensional space. Of course, unless we are using $$x^1, x^2$$ as our parameterization, we still want the curvilinear representation of the gradient $$\grad = \Bx^u \PDi{u}{} + \Bx^v \PDi{v}{}$$.

Problem: Standard basis expansion of fundamental surface relation.

For a parameterization $$x = x^1 e_1 + x^2 e_2$$, where $$\setlr{ e_1, e_2 }$$ is a standard (orthogonal) basis, expand the fundamental theorem for surface integrals for the single sided $$F = 1$$ case. Consider functions $$G$$ of each grade (scalar, vector, bivector.)

From \ref{eqn:unpackingFundamentalTheorem:140} we see that the fundamental theorem takes the form
\label{eqn:unpackingFundamentalTheorem:220}
\int_S dx^1 dx^2\, F I \lrgrad G = \oint_{\partial S} F d\Bx G.

In a Euclidean space, the operator $$I \lrgrad$$ is a $$\pi/2$$ rotation of the gradient, but it has this rotated structure in any metric:
\label{eqn:unpackingFundamentalTheorem:240}
I \lrgrad
=
e_1 e_2 \lr{ e^1 \partial_1 + e^2 \partial_2 }
=
-e_2 \partial_1 + e_1 \partial_2.

• $$F = 1$$ and $$G \in \bigwedge^0$$ or $$G \in \bigwedge^2$$. For $$F = 1$$ and scalar or bivector $$G$$ we have
\label{eqn:unpackingFundamentalTheorem:260}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } G = \oint_{\partial S} d\Bx G,

where, for $$x^1 \in [x^1(0),x^1(1)]$$ and $$x^2 \in [x^2(0),x^2(1)]$$, the RHS written explicitly is
\label{eqn:unpackingFundamentalTheorem:280}
\oint_{\partial S} d\Bx G
=
\int dx^1 e_1
\lr{ G(x^1, x^2(1)) - G(x^1, x^2(0)) }
- dx^2 e_2
\lr{ G(x^1(1),x^2) - G(x^1(0), x^2) }.

This is sketched in fig. 2. Since a 2D bivector $$G$$ can be written as $$G = I g$$, where $$g$$ is a scalar, we may write the pseudoscalar case as
\label{eqn:unpackingFundamentalTheorem:300}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } g = \oint_{\partial S} d\Bx g,

after right multiplying both sides with $$I^{-1}$$. Algebraically the scalar and pseudoscalar cases can be thought of as identical scalar relationships.
• $$F = 1, G \in \bigwedge^1$$. For $$F = 1$$ and vector $$G$$ the 2D fundamental theorem for surfaces can be split into scalar
\label{eqn:unpackingFundamentalTheorem:320}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G = \oint_{\partial S} d\Bx \cdot G,

and bivector relations
\label{eqn:unpackingFundamentalTheorem:340}
\int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G = \oint_{\partial S} d\Bx \wedge G.

To expand \ref{eqn:unpackingFundamentalTheorem:320}, let
\label{eqn:unpackingFundamentalTheorem:360}
G = g_1 e^1 + g_2 e^2,

for which
\label{eqn:unpackingFundamentalTheorem:380}
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G
=
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot
\lr{ g_1 e^1 + g_2 e^2 }
=
\partial_2 g_1 - \partial_1 g_2,

and
\label{eqn:unpackingFundamentalTheorem:400}
d\Bx \cdot G
=
\lr{ dx^1 e_1 - dx^2 e_2 } \cdot \lr{ g_1 e^1 + g_2 e^2 }
=
dx^1 g_1 - dx^2 g_2,

so \ref{eqn:unpackingFundamentalTheorem:320} expands to
\label{eqn:unpackingFundamentalTheorem:500}
\int_S dx^1 dx^2\, \lr{ \partial_2 g_1 - \partial_1 g_2 }
=
\int
\evalbar{dx^1 g_1}{\Delta x^2} - \evalbar{ dx^2 g_2 }{\Delta x^1}.

This coordinate expansion illustrates how the pseudoscalar nature of the area element results in a duality transformation, as we end up with a curl like operation on the LHS, despite the dot product nature of the decomposition that we used. That can also be seen directly for vector $$G$$, since
\label{eqn:unpackingFundamentalTheorem:560}
dA \lr{ I \grad } \cdot G
=
dA \gpgradezero{ I \grad G }
=
dA I \lr{ \grad \wedge G },

since the scalar selection of $$I \lr{ \grad \cdot G }$$ is zero.

In the grade-2 relation \ref{eqn:unpackingFundamentalTheorem:340}, we expect a pseudoscalar cancellation on both sides, leaving a scalar (divergence-like) relationship. This time, we use upper index coordinates for the vector $$G$$, letting
\label{eqn:unpackingFundamentalTheorem:440}
G = g^1 e_1 + g^2 e_2,

so
\label{eqn:unpackingFundamentalTheorem:460}
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G
=
\lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge
\lr{ g^1 e_1 + g^2 e_2 }
=
e_1 e_2 \lr{ \partial_1 g^1 + \partial_2 g^2 },

and
\label{eqn:unpackingFundamentalTheorem:480}
d\Bx \wedge G
=
\lr{ dx^1 e_1 - dx^2 e_2 } \wedge
\lr{ g^1 e_1 + g^2 e_2 }
=
e_1 e_2 \lr{ dx^1 g^2 + dx^2 g^1 }.

So \ref{eqn:unpackingFundamentalTheorem:340}, after multiplication of both sides by $$I^{-1}$$, is
\label{eqn:unpackingFundamentalTheorem:520}
\int_S dx^1 dx^2\,
\lr{ \partial_1 g^1 + \partial_2 g^2 }
=
\int
\evalbar{dx^1 g^2}{\Delta x^2} + \evalbar{dx^2 g^1 }{\Delta x^1}.

As before, we've implicitly performed a duality transformation, and we end up with a divergence operation. That can be seen directly without coordinate expansion, by rewriting the wedge as a grade two selection, and expanding the gradient action on the vector $$G$$, as follows
\label{eqn:unpackingFundamentalTheorem:580}
dA \lr{ I \grad } \wedge G
=
dA \gpgradetwo{ I \grad G }
=
dA I \lr{ \grad \cdot G },

since $$I \lr{ \grad \wedge G }$$ has only a scalar component.

fig. 2. Line integral around rectangular boundary.

Theorem 1.1: Green’s theorem [1].

Let $$S$$ be a Jordan region with a piecewise-smooth boundary $$C$$. If $$P, Q$$ are continuously differentiable on an open set that contains $$S$$, then
\begin{equation*}
\int dx dy \lr{ \PD{y}{P} – \PD{x}{Q} } = \oint P dx + Q dy.
\end{equation*}

Problem: Relationship to Green’s theorem.

If the space is Euclidean, show that \ref{eqn:unpackingFundamentalTheorem:500} and \ref{eqn:unpackingFundamentalTheorem:520} are both instances of Green’s theorem with suitable choices of $$P$$ and $$Q$$.

I will omit the subtleties related to general regions and consider just the case of an infinitesimal square region.

Start proof:

Let's start with \ref{eqn:unpackingFundamentalTheorem:500}, with $$g_1 = P$$, $$g_2 = Q$$, and $$x^1 = x, x^2 = y$$. The LHS is
\label{eqn:unpackingFundamentalTheorem:600}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }.

On the RHS we have
\label{eqn:unpackingFundamentalTheorem:620}
\int \evalbar{dx P}{\Delta y} - \evalbar{ dy Q }{\Delta x}
=
\int dx \lr{ P(x, y_1) - P(x, y_0) } - \int dy \lr{ Q(x_1, y) - Q(x_0, y) }.

This pair of integrals is plotted in fig. 3, from which we see that \ref{eqn:unpackingFundamentalTheorem:620} can be expressed as the line integral, leaving us with
\label{eqn:unpackingFundamentalTheorem:640}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }
=
\oint dx P + dy Q,

which is Green’s theorem over the infinitesimal square integration region.

For the equivalence of \ref{eqn:unpackingFundamentalTheorem:520} to Green’s theorem, let $$g^2 = P$$, and $$g^1 = -Q$$. Plugging into the LHS, we find the Green’s theorem integrand. On the RHS, the integrand expands to
\label{eqn:unpackingFundamentalTheorem:660}
\evalbar{dx g^2}{\Delta y} + \evalbar{dy g^1 }{\Delta x}
=
dx \lr{ P(x,y_1) - P(x, y_0)}
+
dy \lr{ -Q(x_1, y) + Q(x_0, y)},

which is exactly what we found in \ref{eqn:unpackingFundamentalTheorem:620}.

End proof.

fig. 3. Path for Green’s theorem.

We may also relate multivector gradient integrals in 2D to the normal integral around the boundary of the bounding curve. That relationship is as follows.

\begin{equation*}
\begin{aligned}
\int J du dv \rgrad G &= \oint I^{-1} d\Bx G = \int J \lr{ \Bx^v du + \Bx^u dv } G \\
\int J du dv F \lgrad &= \oint F I^{-1} d\Bx = \int J F \lr{ \Bx^v du + \Bx^u dv },
\end{aligned}
\end{equation*}
where $$J = \partial(x^1, x^2)/\partial(u,v)$$ is the Jacobian of the parameterization $$x = x(u,v)$$. In terms of the coordinates $$x^1, x^2$$, this reduces to
\begin{equation*}
\begin{aligned}
\int dx^1 dx^2 \rgrad G &= \oint I^{-1} d\Bx G = \int \lr{ e^2 dx^1 + e^1 dx^2 } G \\
\int dx^1 dx^2 F \lgrad &= \oint F I^{-1} d\Bx = \int F \lr{ e^2 dx^1 + e^1 dx^2 }.
\end{aligned}
\end{equation*}
The vector $$I^{-1} d\Bx$$ is orthogonal to the tangent vector along the boundary, and for Euclidean spaces it can be identified as the outwards normal.

Start proof:

Respectively setting $$F = 1$$, and $$G = 1$$ in \ref{eqn:unpackingFundamentalTheorem:680}, we have
\label{eqn:unpackingFundamentalTheorem:940}
\int I^{-1} d^2 \Bx \rgrad G = \oint I^{-1} d\Bx G,

and
\label{eqn:unpackingFundamentalTheorem:960}
\int F d^2 \Bx \lgrad I^{-1} = \oint F d\Bx I^{-1}.

Starting with \ref{eqn:unpackingFundamentalTheorem:940} we find
\label{eqn:unpackingFundamentalTheorem:700}
\int I^{-1} J du dv I \rgrad G = \oint I^{-1} d\Bx G,

to find $$\int dx^1 dx^2 \rgrad G = \oint I^{-1} d\Bx G$$, as desired. In terms of a parameterization $$x = x(u,v)$$, the pseudoscalar for the space is
\label{eqn:unpackingFundamentalTheorem:720}
I = \frac{\Bx_u \wedge \Bx_v}{J},

so
\label{eqn:unpackingFundamentalTheorem:740}
I^{-1} = \frac{J}{\Bx_u \wedge \Bx_v}.

Also note that $$\lr{\Bx_u \wedge \Bx_v}^{-1} = \Bx^v \wedge \Bx^u$$, so
\label{eqn:unpackingFundamentalTheorem:760}
I^{-1} = J \lr{ \Bx^v \wedge \Bx^u },

and
\label{eqn:unpackingFundamentalTheorem:780}
I^{-1} d\Bx
= I^{-1} \cdot d\Bx
= J \lr{ \Bx^v \wedge \Bx^u } \cdot \lr{ \Bx_u du - \Bx_v dv }
= J \lr{ \Bx^v du + \Bx^u dv },

so the right acting gradient integral is
\label{eqn:unpackingFundamentalTheorem:800}
\int J du dv \grad G =
\int
\evalbar{J \Bx^v G}{\Delta v} du + \evalbar{J \Bx^u G}{\Delta u} dv,

which we write in abbreviated form as $$\int J \lr{ \Bx^v du + \Bx^u dv} G$$.

For the $$G = 1$$ case, from \ref{eqn:unpackingFundamentalTheorem:960} we find
\label{eqn:unpackingFundamentalTheorem:820}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1}.

However, in a 2D space, regardless of metric, we have $$I a = – a I$$ for any vector $$a$$ (i.e. $$\grad$$ or $$d\Bx$$), so we may commute the outer pseudoscalars in
\label{eqn:unpackingFundamentalTheorem:840}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1},

so
\label{eqn:unpackingFundamentalTheorem:850}
-\int J du dv F I I^{-1} \lgrad = -\oint F I^{-1} d\Bx.

After cancelling the negative sign on both sides, we have the claimed result.

To see that $$I a$$, for any vector $$a$$, is normal to $$a$$, we can compute the dot product
\label{eqn:unpackingFundamentalTheorem:860}
\lr{ I a } \cdot a
=
\gpgradezero{ I a a }
=
a^2 \gpgradezero{ I }
= 0,

since the scalar selection of a bivector is zero. Since $$I^{-1} = \pm I$$, the same argument shows that $$I^{-1} d\Bx$$ must be orthogonal to $$d\Bx$$.

End proof.

Let's look at the geometry of the normal $$I^{-1} d\Bx$$ in a couple of 2D vector spaces. We use an integration volume of a unit square to simplify the boundary term expressions.

• Euclidean: With a parameterization $$x(u,v) = u\Be_1 + v \Be_2$$, and Euclidean basis vectors $$(\Be_1)^2 = (\Be_2)^2 = 1$$, the fundamental theorem integrated over the rectangle $$[x_0,x_1] \times [y_0,y_1]$$ is
\label{eqn:unpackingFundamentalTheorem:880}
\int dx dy \grad G =
\int
\Be_2 \lr{ G(x,y_1) - G(x,y_0) } dx +
\Be_1 \lr{ G(x_1,y) - G(x_0,y) } dy.

Each of the terms in the integrand above is illustrated in fig. 4, and we see that this is a path integral weighted by the outwards normal.

fig. 4. Outwards oriented normal for Euclidean space.

• Spacetime: Let $$x(u,v) = u \gamma_0 + v \gamma_1$$, where $$(\gamma_0)^2 = -(\gamma_1)^2 = 1$$. With $$u = t, v = x$$, the gradient integral over a $$[t_0,t_1] \times [x_0,x_1]$$ region of spacetime is
\label{eqn:unpackingFundamentalTheorem:900}
\begin{aligned}
\int dt dx \grad G
&=
\int
\gamma^1 dt \lr{ G(t, x_1) - G(t, x_0) }
+
\gamma^0 dx \lr{ G(t_1, x) - G(t_0, x) } \\
&=
\int
\gamma_1 dt \lr{ -G(t, x_1) + G(t, x_0) }
+
\gamma_0 dx \lr{ G(t_1, x) - G(t_0, x) }
.
\end{aligned}

With $$t$$ plotted along the horizontal axis, and $$x$$ along the vertical, each of the terms of this integrand is illustrated graphically in fig. 5. For this mixed signature space, there is no longer any good geometrical characterization of the normal.

fig. 5. Orientation of the boundary normal for a spacetime basis.

• Spacelike:
Let $$x(u,v) = u \gamma_1 + v \gamma_2$$, where $$(\gamma_1)^2 = (\gamma_2)^2 = -1$$. With $$u = x, v = y$$, the gradient integral over a $$[x_0,x_1] \times [y_0,y_1]$$ region of this space is
\label{eqn:unpackingFundamentalTheorem:920}
\begin{aligned}
\int dx dy \grad G
&=
\int
\gamma^2 dx \lr{ G(x, y_1) - G(x, y_0) }
+
\gamma^1 dy \lr{ G(x_1, y) - G(x_0, y) } \\
&=
\int
\gamma_2 dx \lr{ -G(x, y_1) + G(x, y_0) }
+
\gamma_1 dy \lr{ -G(x_1, y) + G(x_0, y) }
.
\end{aligned}

Referring to fig. 6. where the elements of the integrand are illustrated, we see that the normal $$I^{-1} d\Bx$$ for the boundary of this region can be characterized as inwards.

fig. 6. Inwards oriented normal for a Dirac spacelike basis.

References

[1] S.L. Salas and E. Hille. Calculus: One and Several Variables. Wiley, New York, 1990.

Switching from screen to tmux

January 11, 2021 perl and general scripting hackery 3 comments

RHEL8 (Red Hat Enterprise Linux 8) has dropped support for my old friend screen.  I had found a package somewhere that still worked for one new RHEL8 installation, but didn't record where, and the version I installed on my most recently upgraded machine is crashing horribly.

Screen

Screen was originally recommended to me by Sam Bortman when I worked at IBM, and I am forever grateful to him, as it has been a godsend over the years.  The basic idea is that you have a single terminal session that not only saves all state, but also allows you to have multiple terminal “tabs” all controlled by that single master session.  Since then, I no longer use nohup, and no longer try to run many background jobs.  Attempting to either background or nohup a job can be problematic, as there are a surprising number of tools and scripts that expect an active terminal.  As well as the multiplexing functionality, running screen ensures that if you lose your network connection, or switch from wired to wireless and back, or go home or go to work, in all cases you can resume your work where you left it.

A typical session looks something like the following:

i.e. a plain old terminal, but with three little “tabs” at the bottom, each representing a different shell on the same machine.  In this case, I have my ovpn client running in window 0, am in my Tests/scripts/ directory in window 1, and have 'git log --graph --decorate' running in window 2.  The second screenshot above shows the screen menu, listing all the different active windows.

screen can do window splitting vertically and horizontally too, but I’ve never used that.  My needs are pretty simple:

• multiple windows, each with a different shell,
• an easy way to tab between the windows,
• an easy way to start a new shell.

I always found the screen key bindings to be somewhat cumbersome (example: control-A " to start the window menu), and it didn't take me long before I'd constructed a standard .screenrc for myself with a couple of handy key bindings:

• F5: -1th window
• F6: previous window (after switching explicitly using key bindings or the menu)
• F8: +1th window
• F9: new window

I’ve used those key bindings for so many years that I feel lost without them!

With screen crashing on me constantly, my options were to find a stable package somewhere, build it myself (which I used to do all the time at IBM, when I had to work on many Unix platforms simultaneously), or bite the bullet and see what it would take to switch to tmux.

tmux attach

I chose the latter, and with the help of some tutorials, it was pretty easy to make the switch to tmux.  Startup is pretty easy:

tmux


and

tmux at


(at is short for attach; it's what to use instead of screen -dr)

tmux bindings

All my trusty key bindings were easy to reimplement, requiring the following in my .tmux.conf:

bind-key -T root F4 list-windows
bind-key -T root F5 select-window -p
bind-key -T root F6 select-window -l
bind-key -T root F8 select-window -n
bind-key -T root F9 new-window


tmux command line

One of the nice things about tmux is that you don’t need a whole bunch of complex key bindings that are hard to remember, as you can do it all on the command line from within any tmux slave window. This means that you can also alias your tmux commands easily! Here are a couple examples:

alias weekly='tmux new-window -c ~/weeklyreports/01 -n weekly -t 1'
alias master='tmux new-window -n master -c ~/master'
alias tests='tmux new-window -n tests -c ~/Tests'


These new-window aliases change the name displayed in the bottom bar, and open a new terminal in a set of specific directories.

The UI is pretty much identical, and a session might look something like:

tmux prefix binding

The only other customization that I made to tmux was to override the default key binding, as tmux uses control-b instead of screen’s control-a. control-b is much easier to type than control-a, but messes up paging in vim, so I’ve reset it to control-n using:

unbind C-b
set -g prefix ^N
bind n send-prefix


With this, the rename window command becomes ‘control-n ,’.

I can’t think of anything that uses control-n, but if that choice ends up being intrusive, I’ll probably just unbind control-b and not bother with a prefix binding at all, since tmux has a fully functional command line interface, and I can use easier to remember (or look up) aliases.

Incompatibilities.

It looks like the bindings that I used above are valid with RHEL8 tmux-2.7, but not with RHEL7’s tmux-1.8.  That’s a bit of a pain, and means that I’ll have to

1. find alternate newer tmux packages for RHEL7, or
2. figure out how to do the same bindings with tmux-1.8 and have different dot files, or
3. keep on using screen until I’ve managed to upgrade all my machines to RHEL8.
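
One possible way to tackle option 2 is a version-conditional config. This is an untested sketch, not something from my setup, and the sourced file names are hypothetical:

```
# ~/.tmux.conf -- version branching sketch (untested).
# Source a legacy config on tmux 1.x, and the modern one otherwise.
# ~/.tmux-legacy.conf and ~/.tmux-modern.conf are hypothetical file names.
if-shell "tmux -V | grep -q 'tmux 1\.'" \
    "source-file ~/.tmux-legacy.conf" \
    "source-file ~/.tmux-modern.conf"
```

The legacy file would presumably use the older -n (no prefix) flag in place of -T root, i.e. something like ‘bind-key -n F9 new-window’.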

Nothing is ever easy ;)

Raccoons vs. Cake: “Oh, come on kids, …, it’s still good!”

January 10, 2021 Incoherent ramblings

Life comes in cycles, and here’s an old chapter replaying itself.

When I was a teenager, we spent weekdays with mom, and weekends with dad. Both of them lived a subsistence existence, but with the rent expenses that mom also had, she really struggled to pay the bills at that stage of our lives. I don’t remember the occasion, but one hot summer day, she had saved enough to buy the eggs, flour and other ingredients that she needed to make us all a cake as a special treat. After the cake was cooked, she put it on the kitchen table to cool enough that she could ice it (she probably would have used her classic cream-cheese and sugar recipe.)

That rental property did not have air conditioning, and the doors were always wide open in the summer. Imagine the smell of fresh baked cake pervading the air in the house, and then a blood curdling scream. It was the scream of a horrific physical injury, perhaps that of somebody with a foreign object embedded deep in the flesh of their leg. We all rushed down to find out what happened, and it turned out that the smell of the cake was not just inviting to us, but also to a family of raccoons. Mom walked into the kitchen to find a mother raccoon and her little kids all sitting politely at the table in a circle around the now-cool cake, helping themselves to dainty little handfuls.  What sounded like the scream of mortal injury was the scream of a struggling mom, whose plan to spoil her kids was being eaten in front of her eyes.

From the kitchen you could enter the back room, or the hallway to the front door, and from the front door you could enter the “piano room”, which also had a door to the back room and back to the kitchen.  The scene degenerated into chaos at this point, with mom and the rest of us chasing crying and squealing raccoons in circles all around the first floor of the house along that circular path, with cake crumbs flying in all directions.  I don’t know how many laps we and the raccoons made of the house before we managed to shoo them all out the front or back door, but eventually we were left with just the crumb trail and the remains of the cake.

The icing on the cake was mom’s reaction though. She went over to the cake and cut all the raccoon handprints out of it. We didn’t want to eat it, and I still remember her pleading with us, “Oh come on kids, try it. It’s still good!” Poor mom.  She even took sample bites from the cake to demonstrate it was still edible, and convince us to partake in the treat that she’d worked so hard to make for us.  I don’t think that we ate her cake, despite her pleading.

Thirty years later, it’s my turn. I spent an hour making chili today, and after dinner I put the leftovers out on the back porch to cool in the slow cooker pot with the lid on. I’d planned to bag and freeze part of it, and put the rest in the fridge as leftovers for the week. It was cold enough out that I didn’t think that the raccoons would be out that early, but figured it would have been fair game had I left it out all night in the “outside fridge”. Well, those little buggers were a lot more industrious than I gave them credit for, and by the time I’d come back from walking the dog, they’d helped themselves to a portion, lifting the lid of the slow cooker pot, and making a big mess of as much chili as they wanted.  They ate quite a bit, but perhaps it had more spice than they cared for, as they left a fair amount behind:

Judging by the chili covered hand prints on the back deck I think they enjoyed themselves, despite the spices.

When I went upstairs to let Sofia know what had happened, she immediately connected the dots to this cake story that I’ve told so many times, and said in response: “Oh come on kids, it’s still good!”, at which point we both started laughing.

The total cost of the chili itself was probably only \$17, plus one hour of time.  However, I didn’t intend to try to talk anybody into eating the remains.  It is just not worth getting raccoon-carried Giardia or some stomach bug.  I was sad to see my work wasted and the leftovers ruined.  I wish Mom was still with us, so that I could share this with her.  I can imagine her visiting on this very day, where I could have scooped everything off the top, and then offered her a spoonful, saying “Oh come on Mom, it’s still good!”  I think that she would have gotten a kick out of that, even if she was always embarrassed about this story and how poor we were at the time.

Final thoughts.

There were 4 cans of beans in that pot of chili.  I have to wonder if we are going to have a family of farting raccoons in the neighbourhood for a few days?

Some experiments in youtube mathematics videos

A couple years ago I was curious how easy it would be to use a graphics tablet as a virtual chalkboard, and produced a handful of very rough YouTube videos to get a feel for the basics of streaming and video editing (much of which I’ve now forgotten how to do). These were the videos in chronological order:

• Introduction to Geometric (Clifford) algebra. Interpretation of products of unit vectors, rules for reducing products of unit vectors, and the axioms that justify those rules.
• Geometric Algebra: dot, wedge, cross and vector products. Shows the relation between the vector dot and wedge products, and the cross product.
• Solution of two line intersection using geometric algebra.
• Linear system solution using the wedge product. This video provides a standalone introduction to the wedge product, the geometry of the wedge product and some properties, and linear system solution as a sample application. In this video the wedge product is introduced independently of any geometric (Clifford) algebra, as an antisymmetric and associative operator. You’ll see that we get Cramer’s rule for free from this solution technique.
• Exponential form of vector products in geometric algebra. In this video, I discussed the exponential form of the product of two vectors.

I showed an example of how two unit vectors, each a rotation of $$\zcap$$ in orthonormal $$\mathbb{R}^3$$ planes, produce a “complex” exponential in the plane that spans these two vectors.

• Velocity and acceleration in cylindrical coordinates using geometric algebra. I derived the cylindrical coordinate representations of the velocity and acceleration vectors, showing the radial and azimuthal components of each vector.

I also showed how these are related to the dot and wedge product with the radial unit vector.

• Duality transformations in geometric algebra. Duality transformations (pseudoscalar multiplication) will be demonstrated in $$\mathbb{R}^2$$ and $$\mathbb{R}^3$$.

A polar parameterized vector in $$\mathbb{R}^2$$, written in complex exponential form, is multiplied by a unit pseudoscalar for the x-y plane. We see that the result is a vector normal to that vector, with the direction of the normal dependent on the order of multiplication, and the orientation of the pseudoscalar used.

In $$\mathbb{R}^3$$ we see that a vector multiplied by a pseudoscalar yields the bivector that represents the plane that is normal to that vector. The sign of that bivector (or its cyclic orientation) depends on the orientation of the pseudoscalar. The order of multiplication was not mentioned in this case since the $$\mathbb{R}^3$$ pseudoscalar commutes with any grade object (assumed, not proved). An example of a vector with two components in a plane, multiplied by a pseudoscalar was also given, which allowed for a visualization of the bivector that is normal to the original vector.

• Math bait and switch: Fractional integer exponents. When I was a kid, my dad asked me to explain fractional exponents, and perhaps other exponents that are not positive integers, to him. He objected to the idea of multiplying something by itself $$1/2$$ times.

I failed to answer the question to his satisfaction. My own son is now reviewing the rules of exponentiation, and it occurred to me (30 years later) why my explanation to Dad failed.

Essentially, there’s a small bait and switch required, and my dad didn’t fall for it.

The meaning that my dad gave to exponentiation was that $$x^n$$ equals $$x$$ times itself $$n$$ times.

Using this rule, it is easy to demonstrate that $$x^a x^b = x^{a + b}$$, and this can be used to justify expressions like $$x^{1/2}$$. However, doing this really means that we’ve switched the definition of exponential, defining an exponential as any number that satisfies the relationship:

$$x^a x^b = x^{a+b}$$,

where $$x^1 = x$$. This sleight of hand is required to give meaning to $$x^{1/2}$$, or any other power where the exponent is not a positive integer.
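
As a concrete illustration (my addition, not part of the original videos), this extended definition forces the usual square root interpretation, since

\begin{aligned}
x^{1/2} x^{1/2} = x^{1/2 + 1/2} = x^1 = x,
\end{aligned}

so $$x^{1/2}$$ must be a number whose square is $$x$$, that is $$x^{1/2} = \sqrt{x}$$, even though “multiplying $$x$$ by itself half a time” never enters the picture.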

Of these videos I just relistened to the wedge product episode, as I had a new lone comment on it, and I couldn’t even remember what I had said. It wasn’t completely horrible, despite the low tech. I was, however, very surprised how soft and gentle my voice was. When I am talking math in person, I get very animated, but attempting to manage the tech was distracting and all the excitement that I’d normally have was obliterated.

I’d love to attempt a manim based presentation of some of this material, but suspect if I do something completely scripted like that, I may not be a very good narrator.

New version of classical mechanics notes

I’ve posted a new version of my classical mechanics notes compilation.  This version is not yet live on amazon, but you shouldn’t buy a copy of this “book” anyways, as it is horribly rough (if you want a copy, grab the free PDF instead.)  [I am going to buy a copy so that I can continue to edit a paper copy of it, but nobody else should.]

This version includes additional background material on Space Time Algebra (STA), i.e. the geometric algebra name for the Dirac/Clifford-algebra in 3+1 dimensions.  In particular, I’ve added material on reciprocal frames, the gradient and vector derivatives, line and surface integrals and the fundamental theorem for both.  Some of the integration theory content might make sense to move to a different book, but I’ll keep it with the rest of these STA notes for now.

Relativistic multivector surface integrals

Background.

This post is a continuation of:

Surface integrals.

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

We’ve now covered line integrals and the fundamental theorem for line integrals, so it’s now time to move on to surface integrals.

Definition 1.1: Surface integral.

Given a two variable parameterization $$x = x(u,v)$$, we write $$d^2\Bx = \Bx_u \wedge \Bx_v du dv$$, and call
\begin{equation*}
\int F d^2\Bx\, G,
\end{equation*}
a surface integral, where $$F,G$$ are arbitrary multivector functions.

Like our multivector line integral, this is intrinsically multivector valued, with a product of $$F$$ with arbitrary grades, a bivector $$d^2 \Bx$$, and $$G$$, also potentially with arbitrary grades. Let’s consider an example.

Problem: Surface area integral example.

Given the hyperbolic surface parameterization $$x(\rho,\alpha) = \rho \gamma_0 e^{-\vcap \alpha}$$, where $$\vcap = \gamma_{20}$$, evaluate the indefinite integral
\label{eqn:relativisticSurface:40}
\int \gamma_1 e^{\gamma_{21}\alpha} d^2 \Bx\, \gamma_2.

We have $$\Bx_\rho = \gamma_0 e^{-\vcap \alpha}$$ and $$\Bx_\alpha = \rho\gamma_{2} e^{-\vcap \alpha}$$, so
\label{eqn:relativisticSurface:60}
\begin{aligned}
d^2 \Bx
&=
(\Bx_\rho \wedge \Bx_\alpha) d\rho d\alpha \\
&=
\lr{ \gamma_{0} e^{-\vcap \alpha} } \wedge \lr{ \rho\gamma_{2} e^{-\vcap \alpha} }
d\rho d\alpha \\
&=
\rho \gamma_{02} d\rho d\alpha,
\end{aligned}

so the integral is
\label{eqn:relativisticSurface:80}
\begin{aligned}
\int \rho \gamma_1 e^{\gamma_{21}\alpha} \gamma_{022} d\rho d\alpha
&=
-\inv{2} \rho^2 \int \gamma_1 e^{\gamma_{21}\alpha} \gamma_{0} d\alpha \\
&=
\frac{\gamma_{01}}{2} \rho^2 \int e^{\gamma_{21}\alpha} d\alpha \\
&=
\frac{\gamma_{01}}{2} \rho^2 \gamma^{12} e^{\gamma_{21}\alpha} \\
&=
\frac{\rho^2 \gamma_{20}}{2} e^{\gamma_{21}\alpha}.
\end{aligned}

Because $$F$$ and $$G$$ were both vectors, the resulting integral could only have been a multivector with grades 0,2,4. As it happens, there were no scalar nor pseudoscalar grades in the end result, and we ended up with the spacetime plane between $$\gamma_0$$, and $$\gamma_2 e^{\gamma_{21}\alpha}$$, which are rotations of $$\gamma_2$$ in the x,y plane. This is illustrated in fig. 1 (omitting scale and sign factors.)

fig. 1. Spacetime plane.
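
As a numerical sanity check of that area element reduction (my own check, not part of the original post), we can represent the STA frame with the standard Dirac matrices, where $$\gamma_0^2 = 1$$ and $$\gamma_k^2 = -1$$, and verify that the geometric product $$\Bx_\rho \Bx_\alpha$$ collapses to $$\rho \gamma_{02}$$:

```python
# Check d^2 x = rho gamma_02 drho dalpha for x(rho, alpha) = rho gamma_0 e^{-vhat alpha},
# using the standard Dirac matrix representation of the STA basis vectors.
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])            # Pauli sigma_2
g0 = np.block([[I2, Z2], [Z2, -I2]])          # gamma_0
g2 = np.block([[Z2, s2], [-s2, Z2]])          # gamma_2

vhat = g2 @ g0                                # vhat = gamma_20
assert np.allclose(vhat @ vhat, np.eye(4))    # vhat^2 = +1, so the exponential is hyperbolic

def boost(alpha):
    # e^{-vhat alpha} = cosh(alpha) - vhat sinh(alpha), since vhat^2 = 1
    return np.cosh(alpha) * np.eye(4) - np.sinh(alpha) * vhat

rho, alpha = 1.3, 0.7                         # arbitrary parameter point
x_rho = g0 @ boost(alpha)                     # dx/drho
x_alpha = rho * g2 @ boost(alpha)             # dx/dalpha

# The geometric product x_rho x_alpha reduces to rho gamma_0 gamma_2, a pure
# bivector, so it equals the wedge product appearing in d^2 x.
assert np.allclose(x_rho @ x_alpha, rho * g0 @ g2)
```

The exponentials cancel because $$\gamma_0$$ anticommutes with $$\vcap$$ while $$\gamma_0 \gamma_2$$ commutes with it, which the matrix arithmetic confirms for any parameter point.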

Fundamental theorem for surfaces.

For line integrals we saw that $$d\Bx \cdot \grad = \gpgradezero{ d\Bx \partial }$$, and obtained the fundamental theorem for multivector line integrals by omitting the grade selection and using the multivector operator $$d\Bx \partial$$ in the integrand directly. We have the same situation for surface integrals. In particular, we know that the $$\mathbb{R}^3$$ Stokes theorem can be expressed in terms of $$d^2 \Bx \cdot \spacegrad$$.

Problem: GA form of 3D Stokes’ theorem integrand.

Given an $$\mathbb{R}^3$$ vector field $$\Bf$$, show that
\label{eqn:relativisticSurface:180}
\int dA \ncap \cdot \lr{ \spacegrad \cross \Bf }
=
-\int \lr{d^2\Bx \cdot \spacegrad } \cdot \Bf.

Let $$d^2 \Bx = I \ncap dA$$, implicitly fixing the relative orientation of the bivector area element compared to the chosen surface normal direction.
\label{eqn:relativisticSurface:200}
\begin{aligned}
\int \lr{d^2\Bx \cdot \spacegrad } \cdot \Bf
&=
\int dA \lr{ \lr{ I \ncap } \cdot \spacegrad } \cdot \Bf \\
&=
\int dA \lr{ I \lr{ \ncap \wedge \spacegrad} } \cdot \Bf \\
&=
-\int dA \lr{ \ncap \cross \spacegrad} \cdot \Bf \\
&=
-\int dA \ncap \cdot \lr{ \spacegrad \cross \Bf }.
\end{aligned}

The moral of the story is that the conventional dual form of the $$\mathbb{R}^3$$ Stokes’ theorem can be written directly by projecting the gradient onto the surface area element. Geometrically, this projection operation has a rotational effect as well, since for bivector $$B$$, and vector $$x$$, the bivector-vector dot product $$B \cdot x$$ is the component of $$x$$ that lies in the plane $$B \wedge x = 0$$, but also rotated 90 degrees.
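
The pointwise identity behind that manipulation can be spot-checked numerically (my own check, not from the post), representing $$\mathbb{R}^3$$ vectors with Pauli matrices, where the pseudoscalar is $$I = \sigma_1 \sigma_2 \sigma_3$$. With a stand-in vector $$g$$ playing the role of the gradient, we verify $$\lr{ \lr{I \ncap} \cdot g } \cdot \Bf = -\lr{\ncap \cross g} \cdot \Bf$$:

```python
# Check ((I nhat) . g) . f = -(nhat x g) . f for R^3 vectors in the Pauli
# matrix representation, where I = sigma_1 sigma_2 sigma_3 (the pseudoscalar).
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(v):
    # embed an R^3 vector as v_k sigma_k
    return sum(c * sk for c, sk in zip(v, s))

I = s[0] @ s[1] @ s[2]                 # pseudoscalar; equals i times the identity

n = np.array([0.0, 0.0, 1.0])          # unit normal (z-axis for concreteness)
g = np.array([0.3, -1.1, 0.4])         # arbitrary stand-in for the gradient
f = np.array([1.2, 0.5, -0.7])         # the vector field value at a point

B = I @ vec(n)                         # the bivector I nhat
Bg = (B @ vec(g) - vec(g) @ B) / 2     # bivector . vector: grade-1 (antisymmetric) part
lhs = (Bg @ vec(f) + vec(f) @ Bg) / 2  # vector . vector: scalar (symmetric) part
rhs = -np.dot(np.cross(n, g), f) * np.eye(2)

assert np.allclose(lhs, rhs)
```

The same identity holds for any choice of the three vectors, which is what lets the dual Stokes integrand be rewritten wholesale.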

For multivector integration, we do not want an integral operator that includes such dot products. In the line integral case, we were able to achieve the same projective operation by using vector derivative instead of a dot product, and can do the same for the surface integral case. In particular

Theorem 1.1: Projection of gradient onto the tangent space.

Given a curvilinear representation of the gradient with respect to parameters $$u^0, u^1, u^2, u^3$$
\begin{equation*}
\grad = \Bx^\mu \PD{u^\mu}{},
\end{equation*}
the surface projection onto the tangent space associated with any two of those parameters, satisfies
\begin{equation*}
d^2 \Bx \cdot \grad = d^2 \Bx \cdot \partial.
\end{equation*}

Start proof:

Without loss of generality, we may pick $$u^0, u^1$$ as the parameters associated with the tangent space. The area element for the surface is
\label{eqn:relativisticSurface:100}
d^2 \Bx = \Bx_0 \wedge \Bx_1 \,
du^0 du^1.

Dotting this with the gradient gives
\label{eqn:relativisticSurface:120}
\begin{aligned}
d^2 \Bx \cdot \grad
&=
du^0 du^1\,
\lr{ \Bx_0 \wedge \Bx_1 } \cdot \Bx^\mu \PD{u^\mu}{} \\
&=
du^0 du^1
\lr{
\Bx_0
\lr{\Bx_1 \cdot \Bx^\mu }
-
\Bx_1
\lr{\Bx_0 \cdot \Bx^\mu }
}
\PD{u^\mu}{} \\
&=
du^0 du^1
\lr{
\Bx_0 \PD{u^1}{}
-
\Bx_1 \PD{u^0}{}
}.
\end{aligned}

On the other hand, the vector derivative for this surface is
\label{eqn:relativisticSurface:140}
\partial
=
\Bx^0 \PD{u^0}{}
+
\Bx^1 \PD{u^1}{},

so
\label{eqn:relativisticSurface:160}
\begin{aligned}
d^2 \Bx \cdot \partial
&=
du^0 du^1\,
\lr{ \Bx_0 \wedge \Bx_1 } \cdot
\lr{
\Bx^0 \PD{u^0}{}
+
\Bx^1 \PD{u^1}{}
} \\
&=
du^0 du^1
\lr{
\Bx_0 \PD{u^1}{}
-
\Bx_1 \PD{u^0}{}
}.
\end{aligned}

End proof.

We now want to formulate the geometric algebra form of the fundamental theorem for surface integrals.

Theorem 1.2: Fundamental theorem for surface integrals.

Given multivector functions $$F, G$$, and surface area element $$d^2 \Bx = \lr{ \Bx_u \wedge \Bx_v }\, du dv$$, associated with a two parameter curve $$x(u,v)$$, then
\begin{equation*}
\int_S F d^2\Bx \lrpartial G = \int_{\partial S} F d^1\Bx G,
\end{equation*}
where $$S$$ is the integration surface, $$\partial S$$ designates its boundary, and the line integral on the RHS is really shorthand for
\begin{equation*}
\int
\evalbar{ \lr{ F (-d\Bx_v) G } }{\Delta u}
+
\int
\evalbar{ \lr{ F (d\Bx_u) G } }{\Delta v},
\end{equation*}
which is a line integral that traverses the boundary of the surface with the opposite orientation to the circulation of the area element.

Start proof:

The vector derivative for this surface is
\label{eqn:relativisticSurface:220}
\partial =
\Bx^u \PD{u}{}
+
\Bx^v \PD{v}{},

so
\label{eqn:relativisticSurface:240}
F d^2\Bx \lrpartial G
=
\PD{u}{} \lr{ F d^2\Bx\, \Bx^u G }
+
\PD{v}{} \lr{ F d^2\Bx\, \Bx^v G },

where $$d^2\Bx\, \Bx^u$$ is held constant with respect to $$u$$, and $$d^2\Bx\, \Bx^v$$ is held constant with respect to $$v$$ (since the partials of the vector derivative act on $$F, G$$, but not on the area element, nor on the reciprocal vectors of $$\lrpartial$$ itself.) Note that
\label{eqn:relativisticSurface:260}
d^2\Bx \wedge \Bx^u
=
du dv\, \lr{ \Bx_u \wedge \Bx_v } \wedge \Bx^u = 0,

since $$\Bx^u \in \textrm{span} \setlr{ \Bx_u, \Bx_v }$$, so
\label{eqn:relativisticSurface:280}
\begin{aligned}
d^2\Bx\, \Bx^u
&=
d^2\Bx \cdot \Bx^u
+
d^2\Bx \wedge \Bx^u \\
&=
d^2\Bx \cdot \Bx^u \\
&=
du dv\, \lr{ \Bx_u \wedge \Bx_v } \cdot \Bx^u \\
&=
-du dv\, \Bx_v.
\end{aligned}

Similarly
\label{eqn:relativisticSurface:300}
\begin{aligned}
d^2\Bx\, \Bx^v
&=
d^2\Bx \cdot \Bx^v \\
&=
du dv\, \lr{ \Bx_u \wedge \Bx_v } \cdot \Bx^v \\
&=
du dv\, \Bx_u.
\end{aligned}

This leaves us with
\label{eqn:relativisticSurface:320}
F d^2\Bx \lrpartial G
=
-du dv\,
\PD{u}{} \lr{ F \Bx_v G }
+
du dv\,
\PD{v}{} \lr{ F \Bx_u G },

where $$\Bx_v, \Bx_u$$ are held constant with respect to $$u,v$$ respectively. Fortuitously, this constant condition can be dropped, since the antisymmetry of the wedge in the area element results in perfect cancellation. If these line elements are not held constant then
\label{eqn:relativisticSurface:340}
\PD{v}{} \lr{ F \Bx_u G }
-
\PD{u}{} \lr{ F \Bx_v G }
=
F \lr{
\PD{v}{\Bx_u}
-
\PD{u}{\Bx_v}
} G
+
\lr{
\PD{v}{F} \Bx_u G
+
F \Bx_u \PD{v}{G}
}
-
\lr{
\PD{u}{F} \Bx_v G
+
F \Bx_v \PD{u}{G}
}
,

but the mixed partial contribution is zero
\label{eqn:relativisticSurface:360}
\begin{aligned}
\PD{v}{\Bx_u}
-
\PD{u}{\Bx_v}
&=
\PD{v}{} \PD{u}{x}
-
\PD{u}{} \PD{v}{x} \\
&=
0,
\end{aligned}

by equality of mixed partials. We have two perfect differentials, and can evaluate each of these integrals
\label{eqn:relativisticSurface:380}
\begin{aligned}
\int F d^2\Bx \lrpartial G
&=
-\int
du dv\,
\PD{u}{} \lr{ F \Bx_v G }
+
\int
du dv\,
\PD{v}{} \lr{ F \Bx_u G } \\
&=
-\int
dv\,
\evalbar{ \lr{ F \Bx_v G } }{\Delta u}
+
\int
du\,
\evalbar{ \lr{ F \Bx_u G } }{\Delta v} \\
&=
\int
\evalbar{ \lr{ F (-d\Bx_v) G } }{\Delta u}
+
\int
\evalbar{ \lr{ F (d\Bx_u) G } }{\Delta v}.
\end{aligned}

We use the shorthand $$d^1 \Bx = d\Bx_u - d\Bx_v$$ to write
\label{eqn:relativisticSurface:400}
\int_S F d^2\Bx \lrpartial G = \int_{\partial S} F d^1\Bx G,

with the understanding that this is really instructions to evaluate the line integrals in the last step of \ref{eqn:relativisticSurface:380}.

Problem: Integration in the t,y plane.

Let $$x(t,y) = c t \gamma_0 + y \gamma_2$$. Write out both sides of the fundamental theorem explicitly.

Let’s designate the tangent basis vectors as
\label{eqn:relativisticSurface:420}
\Bx_0 = \PD{t}{x} = c \gamma_0,

and
\label{eqn:relativisticSurface:440}
\Bx_2 = \PD{y}{x} = \gamma_2,

so the vector derivative is
\label{eqn:relativisticSurface:460}
\partial
= \inv{c} \gamma^0 \PD{t}{}
+ \gamma^2 \PD{y}{},

and the area element is
\label{eqn:relativisticSurface:480}
d^2 \Bx = c \gamma_0 \gamma_2\, dt dy.

The fundamental theorem of surface integrals is just a statement that
\label{eqn:relativisticSurface:500}
\int_{t_0}^{t_1} c dt
\int_{y_0}^{y_1} dy
F \gamma_0 \gamma_2 \lr{
\inv{c} \gamma^0 \PD{t}{}
+ \gamma^2 \PD{y}{}
} G
=
\int F \lr{ c \gamma_0 dt - \gamma_2 dy } G,

where the RHS, when stated explicitly, really means
\label{eqn:relativisticSurface:520}
\begin{aligned}
\int &F \lr{ c \gamma_0 dt - \gamma_2 dy } G \\
&=
\int_{t_0}^{t_1} c dt \lr{ F(t,y_1) \gamma_0 G(t, y_1) - F(t,y_0) \gamma_0 G(t, y_0) } \\
&\quad -
\int_{y_0}^{y_1} dy \lr{ F(t_1,y) \gamma_2 G(t_1, y) - F(t_0,y) \gamma_2 G(t_0, y) }.
\end{aligned}

In this particular case, since $$\Bx_0 = c \gamma_0, \Bx_2 = \gamma_2$$ are both constant functions that depend on neither $$t$$ nor $$y$$, it is easy to derive the full expansion of \ref{eqn:relativisticSurface:520} directly from the LHS of \ref{eqn:relativisticSurface:500}.
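
Both sides can also be compared numerically (my own check, not from the post), again using the Dirac matrix representation, with $$F = 1$$ and a hypothetical test function $$G(t,y) = t y M$$ for a constant multivector $$M$$, over the unit square:

```python
# Numeric check of both sides of the surface fundamental theorem for
# x(t,y) = c t gamma_0 + y gamma_2, with F = 1 and G(t,y) = t y M.
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
g0 = np.block([[I2, Z2], [Z2, -I2]])       # gamma_0; reciprocal gamma^0 = gamma_0
g2 = np.block([[Z2, s2], [-s2, Z2]])       # gamma_2; reciprocal gamma^2 = -gamma_2

c = 3.0
M = g0 + 2.0 * g2                          # arbitrary constant multivector coefficient
G = lambda t, y: t * y * M                 # dG/dt = y M, dG/dy = t M

n = 101
t = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] /= 2; w[-1] /= 2   # trapezoid weights on [0,1]

# LHS: int c dt dy  F gamma_0 gamma_2 (1/c gamma^0 dG/dt + gamma^2 dG/dy)
lhs = np.zeros((4, 4), dtype=complex)
for i, ti in enumerate(t):
    for j, yj in enumerate(y):
        integrand = g0 @ g2 @ ((1 / c) * g0 @ (yj * M) + (-g2) @ (ti * M))
        lhs += c * w[i] * w[j] * integrand

# RHS: boundary line integral of F (c gamma_0 dt - gamma_2 dy) G
rhs = np.zeros((4, 4), dtype=complex)
for i, ti in enumerate(t):
    rhs += c * w[i] * g0 @ (G(ti, 1.0) - G(ti, 0.0))
for j, yj in enumerate(y):
    rhs -= w[j] * g2 @ (G(1.0, yj) - G(0.0, yj))

assert np.allclose(lhs, rhs)
```

Since the integrands are linear in each parameter, the trapezoid rule evaluates both sides essentially exactly, and the two matrices agree.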

Problem: A cylindrical hyperbolic surface.

Generalizing the example surface integral from \ref{eqn:relativisticSurface:40}, let
\label{eqn:relativisticSurface:540}
x(\rho, \alpha) = \rho e^{-\vcap \alpha/2} x(0,1) e^{\vcap \alpha/2},

where $$\rho$$ is a scalar, and $$\vcap = \cos\theta_k\gamma_{k0}$$ is a unit spatial bivector, and $$\cos\theta_k$$ are direction cosines of that vector. This is a composite transformation, where the $$\alpha$$ variation boosts the $$x(0,1)$$ four-vector, and the $$\rho$$ parameter contracts or increases the magnitude of this vector, resulting in $$x$$ spanning a hyperbolic region of spacetime.

Compute the tangent and reciprocal basis, the area element for the surface, and explicitly state both sides of the fundamental theorem.

For the tangent basis vectors we have
\label{eqn:relativisticSurface:560}
\Bx_\rho = \PD{\rho}{x} =
e^{-\vcap \alpha/2} x(0,1) e^{\vcap \alpha/2} = \frac{x}{\rho},

and
\label{eqn:relativisticSurface:580}
\Bx_\alpha = \PD{\alpha}{x} =
\lr{-\vcap/2} x
+
x \lr{ \vcap/2 }
=
x \cdot \vcap.

These vectors $$\Bx_\rho, \Bx_\alpha$$ are orthogonal, as $$x \cdot \vcap$$ is the projection of $$x$$ onto the spacetime plane $$x \wedge \vcap = 0$$, but rotated so that $$x \cdot \lr{ x \cdot \vcap } = 0$$. Because of this orthogonality, the vector derivative for this tangent space is
\label{eqn:relativisticSurface:600}
\partial =
\inv{x \cdot \vcap} \PD{\alpha}{}
+
\frac{\rho}{x}
\PD{\rho}{}
.

The area element is
\label{eqn:relativisticSurface:620}
\begin{aligned}
d^2 \Bx
&=
d\rho d\alpha\,
\frac{x}{\rho} \wedge \lr{ x \cdot \vcap } \\
&=
\inv{\rho} d\rho d\alpha\,
x \lr{ x \cdot \vcap }
.
\end{aligned}

The full statement of the fundamental theorem for this surface is
\label{eqn:relativisticSurface:640}
\int_S
d\rho d\alpha\,
F
\lr{
\inv{\rho} x \lr{ x \cdot \vcap }
}
\lr{
\inv{x \cdot \vcap} \PD{\alpha}{}
+
\frac{\rho}{x}
\PD{\rho}{}
}
G
=
\int_{\partial S}
F \lr{ d\rho \frac{x}{\rho} - d\alpha \lr{ x \cdot \vcap } } G.

As in the previous example, due to the orthogonality of the tangent basis vectors, it’s easy to find the RHS directly from the LHS.

Problem: Simple example with non-orthogonal tangent space basis vectors.

Let $$x(u,v) = u a + v b$$, where $$u,v$$ are scalar parameters, and $$a, b$$ are non-null and non-colinear constant four-vectors. Write out the fundamental theorem for surfaces with respect to this parameterization.

The tangent basis vectors are just $$\Bx_u = a, \Bx_v = b$$, with reciprocals
\label{eqn:relativisticSurface:660}
\Bx^u = \Bx_v \cdot \inv{ \Bx_u \wedge \Bx_v } = b \cdot \inv{ a \wedge b },

and
\label{eqn:relativisticSurface:680}
\Bx^v = -\Bx_u \cdot \inv{ \Bx_u \wedge \Bx_v } = -a \cdot \inv{ a \wedge b }.
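
These reciprocal relations are easy to spot-check numerically (my own check, not from the post), using the Dirac matrix representation and hypothetical sample vectors $$a, b$$:

```python
# Verify x^u . x_u = 1, x^u . x_v = 0 (and similarly for x^v) for
# x^u = b . (a ^ b)^{-1}, x^v = -a . (a ^ b)^{-1}.
import numpy as np

I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1 = np.block([[Z2, s1], [-s1, Z2]])
g2 = np.block([[Z2, s2], [-s2, Z2]])

a = g0 + 0.5 * g1                   # a^2 = 0.75: non-null
b = g1 + 0.2 * g2                   # b^2 = -1.04: non-null, not colinear with a

B = (a @ b - b @ a) / 2             # a ^ b: antisymmetric (bivector) part of ab
Binv = np.linalg.inv(B)             # (a ^ b)^{-1}; B^2 is a nonzero scalar here

def vdotB(v, W):
    return (v @ W - W @ v) / 2      # vector . bivector: grade-1 part

def vdotv(u, v):
    return ((u @ v + v @ u) / 2)[0, 0].real   # vector . vector: scalar part

xu = vdotB(b, Binv)                 # x^u
xv = -vdotB(a, Binv)                # x^v

assert np.isclose(vdotv(xu, a), 1.0) and np.isclose(vdotv(xu, b), 0.0)
assert np.isclose(vdotv(xv, b), 1.0) and np.isclose(vdotv(xv, a), 0.0)
```

Any non-null, non-colinear pair would do; the choice above just keeps $$\lr{a \wedge b}^2$$ comfortably away from zero so the inverse exists.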

The fundamental theorem, with respect to this surface, when written out explicitly takes the form
\label{eqn:relativisticSurface:700}
\int F \, du dv\, \lr{ a \wedge b } \inv{ a \wedge b } \cdot \lr{ a \PD{v}{} - b \PD{u}{} } G
=
\int F \lr{ a du – b dv } G.

This is a good example to illustrate the geometry of the line integral circulation.
Suppose that we are integrating over $$u \in [0,1], v \in [0,1]$$. In this case, the line integral really means
\label{eqn:relativisticSurface:720}
\begin{aligned}
\int &F \lr{ a du - b dv } G \\
&=
\int F(u,1) (+a du) G(u,1)
+
\int F(u,0) (-a du) G(u,0) \\
&\quad+
\int F(1,v) (-b dv) G(1,v)
+
\int F(0,v) (+b dv) G(0,v),
\end{aligned}

which is a path around the spacetime parallelogram spanned by $$a, b$$, as illustrated in fig. 1, where the arrows around the exterior of the parallelogram show the orientation of the bivector area element: $$0 \rightarrow a \rightarrow a + b \rightarrow b \rightarrow 0$$.