Unpacking the fundamental theorem of multivector calculus in two dimensions

January 18, 2021 math and physics play No comments

Notes.

Due to limitations in the MathJax-Latex package, all the oriented integrals in this blog post should be interpreted as having a clockwise orientation. [See the PDF version of this post for more sophisticated formatting.]

Guts.

Given a two dimensional generating vector space, there are two instances of the fundamental theorem for multivector integration
\begin{equation}\label{eqn:unpackingFundamentalTheorem:20}
\int_S F d\Bx \lrpartial G = \evalbar{F G}{\Delta S},
\end{equation}
and
\begin{equation}\label{eqn:unpackingFundamentalTheorem:40}
\int_S F d^2\Bx \lrpartial G = \oint_{\partial S} F d\Bx G.
\end{equation}
The first case is trivial. Given a parameterized curve \( x = x(u) \), it just states
\begin{equation}\label{eqn:unpackingFundamentalTheorem:60}
\int_{u(0)}^{u(1)} du \PD{u}{}\lr{FG} = F(u(1))G(u(1)) - F(u(0))G(u(0)),
\end{equation}
for all multivectors \( F, G\), regardless of the signature of the underlying space.

The surface integral is more interesting. Let’s first look at the area element for this surface integral, which is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:80}
d^2 \Bx = d\Bx_u \wedge d \Bx_v.
\end{equation}
Geometrically, this has the area of the parallelogram spanned by \( d\Bx_u \) and \( d\Bx_v \), but weighted by the pseudoscalar of the space. This is explored algebraically in the following problem and illustrated in fig. 1.

fig. 1. 2D vector space and area element.

Problem: Expansion of 2D area bivector.

Let \( \setlr{e_1, e_2} \) be an orthonormal basis for a two dimensional space, with reciprocal frame \( \setlr{e^1, e^2} \). Expand the area bivector \( d^2 \Bx \) in coordinates relating the bivector to the Jacobian and the pseudoscalar.

Answer

With parameterization \( x = x(u,v) = x^\alpha e_\alpha = x_\alpha e^\alpha \), we have
\begin{equation}\label{eqn:unpackingFundamentalTheorem:120}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x^\alpha} e_\alpha } \wedge
\lr{ \PD{v}{x^\beta} e_\beta }
=
\PD{u}{x^\alpha}
\PD{v}{x^\beta}
e_\alpha \wedge
e_\beta
=
\PD{(u,v)}{(x^1,x^2)} e_1 e_2,
\end{equation}
or
\begin{equation}\label{eqn:unpackingFundamentalTheorem:160}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x_\alpha} e^\alpha } \wedge
\lr{ \PD{v}{x_\beta} e^\beta }
=
\PD{u}{x_\alpha}
\PD{v}{x_\beta}
e^\alpha \wedge
e^\beta
=
\PD{(u,v)}{(x_1,x_2)} e^1 e^2.
\end{equation}
The upper and lower index pseudoscalars are related by
\begin{equation}\label{eqn:unpackingFundamentalTheorem:180}
e^1 e^2 e_1 e_2 =
-e^1 e^2 e_2 e_1 =
-1,
\end{equation}
so with \( I = e_1 e_2 \),
\begin{equation}\label{eqn:unpackingFundamentalTheorem:200}
e^1 e^2 = -I^{-1},
\end{equation}
leaving us with
\begin{equation}\label{eqn:unpackingFundamentalTheorem:140}
d^2 \Bx
= \PD{(u,v)}{(x^1,x^2)} du dv\, I
= -\PD{(u,v)}{(x_1,x_2)} du dv\, I^{-1}.
\end{equation}
We see that the area bivector is proportional to either the upper or lower index Jacobian and to the pseudoscalar for the space.
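As a quick sanity check of this expansion (not part of the original derivation), here is a small sympy sketch that computes the \( e_1 e_2 \) coordinate of \( \Bx_u \wedge \Bx_v \) for an arbitrarily chosen polar style parameterization, and compares it to the Jacobian:

import sympy as sp

# Hypothetical polar-style parameterization x(u, v) = (u cos v, u sin v) in a
# Euclidean orthonormal basis.
u, v = sp.symbols('u v', real=True, positive=True)
x1, x2 = u * sp.cos(v), u * sp.sin(v)

xu = (sp.diff(x1, u), sp.diff(x2, u))   # components of x_u on e_1, e_2
xv = (sp.diff(x1, v), sp.diff(x2, v))   # components of x_v on e_1, e_2

# In 2D the wedge x_u ^ x_v has a single e_1 e_2 component.
wedge_12 = sp.simplify(xu[0] * xv[1] - xu[1] * xv[0])

# Jacobian \partial(x^1, x^2)/\partial(u, v)
J = sp.simplify(sp.Matrix([[sp.diff(x1, u), sp.diff(x1, v)],
                           [sp.diff(x2, u), sp.diff(x2, v)]]).det())

print(wedge_12, J, sp.simplify(wedge_12 - J) == 0)   # u, u, True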

We may write the fundamental theorem for a 2D space as
\begin{equation}\label{eqn:unpackingFundamentalTheorem:680}
\int_S du dv \, \PD{(u,v)}{(x^1,x^2)} F I \lrgrad G = \oint_{\partial S} F d\Bx G,
\end{equation}
where we have dispensed with the vector derivative and use the gradient instead, since they are identical in a two parameter two dimensional space. Of course, unless we are using \( x^1, x^2 \) as our parameterization, we still want the curvilinear representation of the gradient \( \grad = \Bx^u \PDi{u}{} + \Bx^v \PDi{v}{} \).

Problem: Standard basis expansion of fundamental surface relation.

For a parameterization \( x = x^1 e_1 + x^2 e_2 \), where \( \setlr{ e_1, e_2 } \) is a standard (orthogonal) basis, expand the fundamental theorem for surface integrals for the single sided \( F = 1 \) case. Consider functions \( G \) of each grade (scalar, vector, bivector.)

Answer

From \ref{eqn:unpackingFundamentalTheorem:140} we see that the fundamental theorem takes the form
\begin{equation}\label{eqn:unpackingFundamentalTheorem:220}
\int_S dx^1 dx^2\, F I \lrgrad G = \oint_{\partial S} F d\Bx G.
\end{equation}
In a Euclidean space, the operator \( I \lrgrad \) is a \( \pi/2 \) rotation of the gradient, but it has a similar rotation-like structure in any metric:
\begin{equation}\label{eqn:unpackingFundamentalTheorem:240}
I \grad
=
e_1 e_2 \lr{ e^1 \partial_1 + e^2 \partial_2 }
=
-e_2 \partial_1 + e_1 \partial_2.
\end{equation}

  • \( F = 1 \) and \( G \in \bigwedge^0 \) or \( G \in \bigwedge^2 \). For \( F = 1 \) and scalar or bivector \( G \) we have
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:260}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } G = \oint_{\partial S} d\Bx G,
    \end{equation}
    where, for \( x^1 \in [x^1(0),x^1(1)] \) and \( x^2 \in [x^2(0),x^2(1)] \), the RHS written explicitly is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:280}
    \oint_{\partial S} d\Bx G
    =
    \int dx^1 e_1
    \lr{ G(x^1, x^2(1)) - G(x^1, x^2(0)) }
    - dx^2 e_2
    \lr{ G(x^1(1),x^2) - G(x^1(0), x^2) }.
    \end{equation}
    This is sketched in fig. 2. Since a 2D bivector \( G \) can be written as \( G = I g \), where \( g \) is a scalar, we may write the pseudoscalar case as
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:300}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } g = \oint_{\partial S} d\Bx g,
    \end{equation}
    after right multiplying both sides with \( I^{-1} \). Algebraically the scalar and pseudoscalar cases can be thought of as identical scalar relationships.
  • \( F = 1, G \in \bigwedge^1 \). For \( F = 1 \) and vector \( G \) the 2D fundamental theorem for surfaces can be split into scalar
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:320}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G = \oint_{\partial S} d\Bx \cdot G,
    \end{equation}
    and bivector relations
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:340}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G = \oint_{\partial S} d\Bx \wedge G.
    \end{equation}
    To expand \ref{eqn:unpackingFundamentalTheorem:320}, let
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:360}
    G = g_1 e^1 + g_2 e^2,
    \end{equation}
    for which
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:380}
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G
    =
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot
    \lr{ g_1 e^1 + g_2 e^2 }
    =
    \partial_2 g_1 - \partial_1 g_2,
    \end{equation}
    and
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:400}
    d\Bx \cdot G
    =
    \lr{ dx^1 e_1 - dx^2 e_2 } \cdot \lr{ g_1 e^1 + g_2 e^2 }
    =
    dx^1 g_1 - dx^2 g_2,
    \end{equation}
    so \ref{eqn:unpackingFundamentalTheorem:320} expands to
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:500}
    \int_S dx^1 dx^2\, \lr{ \partial_2 g_1 - \partial_1 g_2 }
    =
    \int
    \evalbar{dx^1 g_1}{\Delta x^2} - \evalbar{ dx^2 g_2 }{\Delta x^1}.
    \end{equation}
    This coordinate expansion illustrates how the pseudoscalar nature of the area element results in a duality transformation, as we end up with a curl like operation on the LHS, despite the dot product nature of the decomposition that we used. That can also be seen directly for vector \( G \), since
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:560}
    dA (I \grad) \cdot G
    =
    dA \gpgradezero{ I \grad G }
    =
    dA I \lr{ \grad \wedge G },
    \end{equation}
    since the scalar selection of \( I \lr{ \grad \cdot G } \) is zero. In the grade-2 relation \ref{eqn:unpackingFundamentalTheorem:340}, we expect a pseudoscalar cancellation on both sides, leaving a scalar (divergence-like) relationship. This time, we use upper index coordinates for the vector \( G \), letting
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:440}
    G = g^1 e_1 + g^2 e_2,
    \end{equation}
    so
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:460}
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G
    =
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge
    \lr{ g^1 e_1 + g^2 e_2 }
    =
    e_1 e_2 \lr{ \partial_1 g^1 + \partial_2 g^2 },
    \end{equation}
    and
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:480}
    d\Bx \wedge G
    =
    \lr{ dx^1 e_1 - dx^2 e_2 } \wedge
    \lr{ g^1 e_1 + g^2 e_2 }
    =
    e_1 e_2 \lr{ dx^1 g^2 + dx^2 g^1 }.
    \end{equation}
    So \ref{eqn:unpackingFundamentalTheorem:340}, after multiplication of both sides by \( I^{-1} \), is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:520}
    \int_S dx^1 dx^2\,
    \lr{ \partial_1 g^1 + \partial_2 g^2 }
    =
    \int
    \evalbar{dx^1 g^2}{\Delta x^2} + \evalbar{dx^2 g^1 }{\Delta x^1}.
    \end{equation}

As before, we’ve implicitly performed a duality transformation, and end up with a divergence operation. That can be seen directly without coordinate expansion, by rewriting the wedge as a grade two selection, and expanding the gradient action on the vector \( G \), as follows
\begin{equation}\label{eqn:unpackingFundamentalTheorem:580}
dA (I \grad) \wedge G
=
dA \gpgradetwo{ I \grad G }
=
dA I \lr{ \grad \cdot G },
\end{equation}
since \( I \lr{ \grad \wedge G } \) has only a scalar component.

 

fig. 2. Line integral around rectangular boundary.

Theorem 1.1: Green’s theorem [1].

Let \( S \) be a Jordan region with a piecewise-smooth boundary \( C \). If \( P, Q \) are continuously differentiable on an open set that contains \( S \), then
\begin{equation*}
\int dx dy \lr{ \PD{y}{P} – \PD{x}{Q} } = \oint P dx + Q dy.
\end{equation*}

Problem: Relationship to Green’s theorem.

If the space is Euclidean, show that \ref{eqn:unpackingFundamentalTheorem:500} and \ref{eqn:unpackingFundamentalTheorem:520} are both instances of Green’s theorem with suitable choices of \( P \) and \( Q \).

Answer

I will omit the subtleties related to general regions and consider just the case of an infinitesimal square region.

Start proof:

Let’s start with \ref{eqn:unpackingFundamentalTheorem:500}. With \( g_1 = P \), \( g_2 = Q \), and \( x^1 = x, x^2 = y \), the LHS is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:600}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }.
\end{equation}
On the RHS we have
\begin{equation}\label{eqn:unpackingFundamentalTheorem:620}
\int \evalbar{dx P}{\Delta y} - \evalbar{ dy Q }{\Delta x}
=
\int dx \lr{ P(x, y_1) - P(x, y_0) } - \int dy \lr{ Q(x_1, y) - Q(x_0, y) }.
\end{equation}
This pair of integrals is plotted in fig. 3, from which we see that \ref{eqn:unpackingFundamentalTheorem:620} can be expressed as the line integral, leaving us with
\begin{equation}\label{eqn:unpackingFundamentalTheorem:640}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }
=
\oint dx P + dy Q,
\end{equation}
which is Green’s theorem over the infinitesimal square integration region.

For the equivalence of \ref{eqn:unpackingFundamentalTheorem:520} to Green’s theorem, let \( g^2 = P \), and \( g^1 = -Q \). Plugging into the LHS, we find the Green’s theorem integrand. On the RHS, the integrand expands to
\begin{equation}\label{eqn:unpackingFundamentalTheorem:660}
\evalbar{dx g^2}{\Delta y} + \evalbar{dy g^1 }{\Delta x}
=
dx \lr{ P(x,y_1) - P(x, y_0)}
+
dy \lr{ -Q(x_1, y) + Q(x_0, y)},
\end{equation}
which is exactly what we found in \ref{eqn:unpackingFundamentalTheorem:620}.

End proof.

 

fig. 3. Path for Green’s theorem.
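For a concrete cross check of \ref{eqn:unpackingFundamentalTheorem:500} (equivalently, the clockwise oriented Green’s theorem of theorem 1.1), here is a small sympy sketch over the unit square, with arbitrarily chosen \( g_1, g_2 \). This is not part of the original argument, just a numerical sanity check:

import sympy as sp

x, y = sp.symbols('x y', real=True)

# arbitrarily chosen lower index coordinates g_1, g_2 of the vector G
g1 = x**2 * y
g2 = x * y**3

# LHS of the surface relation: the "curl like" area integral over [0,1] x [0,1]
lhs = sp.integrate(sp.diff(g1, y) - sp.diff(g2, x), (x, 0, 1), (y, 0, 1))

# RHS: the boundary evaluations over the same unit square
rhs = (sp.integrate(g1.subs(y, 1) - g1.subs(y, 0), (x, 0, 1))
       - sp.integrate(g2.subs(x, 1) - g2.subs(x, 0), (y, 0, 1)))

print(lhs, rhs, sp.simplify(lhs - rhs) == 0)   # 1/12, 1/12, True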

We may also relate multivector gradient integrals in 2D to a normal weighted integral around the bounding curve. That relationship is as follows.

Theorem 1.2: 2D gradient integrals.

\begin{equation*}
\begin{aligned}
\int J du dv \rgrad G &= \oint I^{-1} d\Bx G = \int J \lr{ \Bx^v du + \Bx^u dv } G \\
\int J du dv F \lgrad &= \oint F I^{-1} d\Bx = \int J F \lr{ \Bx^v du + \Bx^u dv },
\end{aligned}
\end{equation*}
where \( J = \partial(x^1, x^2)/\partial(u,v) \) is the Jacobian of the parameterization \( x = x(u,v) \). In terms of the coordinates \( x^1, x^2 \), this reduces to
\begin{equation*}
\begin{aligned}
\int dx^1 dx^2 \rgrad G &= \oint I^{-1} d\Bx G = \int \lr{ e^2 dx^1 + e^1 dx^2 } G \\
\int dx^1 dx^2 F \lgrad &= \oint F I^{-1} d\Bx = \int F \lr{ e^2 dx^1 + e^1 dx^2 }.
\end{aligned}
\end{equation*}
The vector \( I^{-1} d\Bx \) is orthogonal to the tangent vector along the boundary, and for Euclidean spaces it can be identified as the outwards normal.

Start proof:

Respectively setting \( F = 1 \), and \( G = 1\) in \ref{eqn:unpackingFundamentalTheorem:680}, and multiplying by the constant \( I^{-1} \) (from the left and right respectively), we have
\begin{equation}\label{eqn:unpackingFundamentalTheorem:940}
\int I^{-1} d^2 \Bx \rgrad G = \oint I^{-1} d\Bx G,
\end{equation}
and
\begin{equation}\label{eqn:unpackingFundamentalTheorem:960}
\int F d^2 \Bx \lgrad I^{-1} = \oint F d\Bx I^{-1}.
\end{equation}
Starting with \ref{eqn:unpackingFundamentalTheorem:940} we find
\begin{equation}\label{eqn:unpackingFundamentalTheorem:700}
\int I^{-1} J du dv I \rgrad G = \oint I^{-1} d\Bx G,
\end{equation}
which is \( \int dx^1 dx^2 \rgrad G = \oint I^{-1} d\Bx G \), as desired. In terms of a parameterization \( x = x(u,v) \), the pseudoscalar for the space is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:720}
I = \frac{\Bx_u \wedge \Bx_v}{J},
\end{equation}
so
\begin{equation}\label{eqn:unpackingFundamentalTheorem:740}
I^{-1} = \frac{J}{\Bx_u \wedge \Bx_v}.
\end{equation}
Also note that \( \lr{\Bx_u \wedge \Bx_v}^{-1} = \Bx^v \wedge \Bx^u \), so
\begin{equation}\label{eqn:unpackingFundamentalTheorem:760}
I^{-1} = J \lr{ \Bx^v \wedge \Bx^u },
\end{equation}
and
\begin{equation}\label{eqn:unpackingFundamentalTheorem:780}
I^{-1} d\Bx
= I^{-1} \cdot d\Bx
= J \lr{ \Bx^v \wedge \Bx^u } \cdot \lr{ \Bx_u du - \Bx_v dv }
= J \lr{ \Bx^v du + \Bx^u dv },
\end{equation}
so the right acting gradient integral is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:800}
\int J du dv \grad G =
\int
\evalbar{J \Bx^v G}{\Delta v} du + \evalbar{J \Bx^u G}{\Delta u} dv,
\end{equation}
which we write in abbreviated form as \( \int J \lr{ \Bx^v du + \Bx^u dv} G \).

For the \( G = 1 \) case, from \ref{eqn:unpackingFundamentalTheorem:960} we find
\begin{equation}\label{eqn:unpackingFundamentalTheorem:820}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1}.
\end{equation}
However, in a 2D space, regardless of metric, we have \( I a = - a I \) for any vector \( a \) (i.e. \( \grad \) or \( d\Bx\)), so we may commute the outer pseudoscalars in \ref{eqn:unpackingFundamentalTheorem:820} toward the center, finding
\begin{equation}\label{eqn:unpackingFundamentalTheorem:850}
-\int J du dv F I I^{-1} \lgrad = -\oint F I^{-1} d\Bx.
\end{equation}
After cancelling the negative sign on both sides, we have the claimed result.

To see that \( I a \), for any vector \( a \), is normal to \( a \), we can compute the dot product
\begin{equation}\label{eqn:unpackingFundamentalTheorem:860}
\lr{ I a } \cdot a
=
\gpgradezero{ I a a }
=
a^2 \gpgradezero{ I }
= 0,
\end{equation}
since the scalar selection of a bivector is zero. Since \( I^{-1} = \pm I \), the same argument shows that \( I^{-1} d\Bx \) must be orthogonal to \( d\Bx \).

End proof.
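As a numerical aside (not part of the proof above), here is a small numpy check that \( \lr{ I a } \cdot a = 0 \) holds independent of the signature. The component expansion of \( I a \) used in the helper follows from \( e_1 e_2 e_1 = -e_1^2 e_2 \) and \( e_1 e_2 e_2 = e_2^2 e_1 \):

import numpy as np

def i_times(a, s):
    # Left multiply the vector a = a1 e1 + a2 e2 by the pseudoscalar I = e1 e2,
    # for basis squares s = (s1, s2):  I a = a2 s2 e1 - a1 s1 e2.
    a1, a2 = a
    s1, s2 = s
    return np.array([a2 * s2, -a1 * s1])

def metric_dot(a, b, s):
    # dot product for the diagonal metric with e1^2 = s1, e2^2 = s2
    return a[0] * b[0] * s[0] + a[1] * b[1] * s[1]

rng = np.random.default_rng(0)
for s in [(1, 1), (1, -1), (-1, -1)]:   # Euclidean, spacetime, spacelike signatures
    a = rng.normal(size=2)
    print(s, np.isclose(metric_dot(i_times(a, s), a, s), 0))   # True in every case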

Let’s look at the geometry of the normal \( I^{-1} d\Bx \) in a couple of 2D vector spaces. We use a rectangular integration region to simplify the boundary term expressions.

  • Euclidean: With a parameterization \( x(u,v) = u\Be_1 + v \Be_2 \), and Euclidean basis vectors \( (\Be_1)^2 = (\Be_2)^2 = 1 \), the fundamental theorem integrated over the rectangle \( [x_0,x_1] \times [y_0,y_1] \) is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:880}
    \int dx dy \grad G =
    \int
    \Be_2 \lr{ G(x,y_1) - G(x,y_0) } dx +
    \Be_1 \lr{ G(x_1,y) - G(x_0,y) } dy.
    \end{equation}
    Each of the terms in the integrand above is illustrated in fig. 4, and we see that this is a path integral weighted by the outwards normal.

    fig. 4. Outwards oriented normal for Euclidean space.

  • Spacetime: Let \( x(u,v) = u \gamma_0 + v \gamma_1 \), where \( (\gamma_0)^2 = -(\gamma_1)^2 = 1 \). With \( u = t, v = x \), the gradient integral over a \([t_0,t_1] \times [x_0,x_1]\) of spacetime is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:900}
    \begin{aligned}
    \int dt dx \grad G
    &=
    \int
    \gamma^1 dt \lr{ G(t, x_1) - G(t, x_0) }
    +
    \gamma^0 dx \lr{ G(t_1, x) - G(t_0, x) } \\
    &=
    \int
    \gamma_1 dt \lr{ -G(t, x_1) + G(t, x_0) }
    +
    \gamma_0 dx \lr{ G(t_1, x) - G(t_0, x) }
    .
    \end{aligned}
    \end{equation}
    With \( t \) plotted along the horizontal axis, and \( x \) along the vertical, each of the terms of this integrand is illustrated graphically in fig. 5. For this mixed signature space, there is no longer any good geometrical characterization of the normal.

    fig. 5. Orientation of the boundary normal for a spacetime basis.

  • Spacelike:
    Let \( x(u,v) = u \gamma_1 + v \gamma_2 \), where \( (\gamma_1)^2 = (\gamma_2)^2 = -1 \). With \( u = x, v = y \), the gradient integral over a \([x_0,x_1] \times [y_0,y_1]\) of this space is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:920}
    \begin{aligned}
    \int dx dy \grad G
    &=
    \int
    \gamma^2 dx \lr{ G(x, y_1) - G(x, y_0) }
    +
    \gamma^1 dy \lr{ G(x_1, y) - G(x_0, y) } \\
    &=
    \int
    \gamma_2 dx \lr{ -G(x, y_1) + G(x, y_0) }
    +
    \gamma_1 dy \lr{ -G(x_1, y) + G(x_0, y) }
    .
    \end{aligned}
    \end{equation}
    Referring to fig. 6. where the elements of the integrand are illustrated, we see that the normal \( I^{-1} d\Bx \) for the boundary of this region can be characterized as inwards.

    fig. 6. Inwards oriented normal for a Dirac spacelike basis.

References

[1] S.L. Salas and E. Hille. Calculus: One and Several Variables. Wiley, New York, 1990.

Switching from screen to tmux

January 11, 2021 perl and general scripting hackery 2 comments

RHEL8 (Red Hat Enterprise Linux 8) has dropped support for my old friend screen.  I had found a package somewhere that still worked for one new RHEL8 installation, but didn’t record where, and the version I installed on my most recently upgraded machine is crashing horribly.

Screen

Screen was originally recommended to me by Sam Bortman when I worked at IBM, and I am forever grateful to him, as it has been a godsend over the years.  The basic idea is that you have a single terminal session that not only saves all state, but also allows you to have multiple terminal “tabs” all controlled by that single master session.  Since then, I no longer use nohup, and no longer try to run many background jobs.  Both attempting to background or nohup a job can be problematic, as there are a surprising number of tools and scripts that expect an active terminal.  As well as the multiplexing functionality, running screen ensures that if you lose your network connection, or switch from wired to wireless and back, or go home or go to work, in all cases, you can resume your work where you left it.

A typical session looks something like the following:

i.e. plain old terminal, but three little “tabs” at the bottom, each representing a different shell on the same machine.  In this case, I have my ovpn client running in window 0, am in my Tests/scripts/ directory in window 1, and have ‘git log --graph --decorate’ running in window 2.  The second screenshot above shows the screen menu, listing all the different active windows.

screen can do window splitting vertically and horizontally too, but I’ve never used that.  My needs are pretty simple:

  • multiple windows, each with a different shell,
  • an easy way to tab between the windows,
  • an easy way to start a new shell.

I always found the screen key bindings to be somewhat cumbersome (example: control-A ” to start the window menu), and it didn’t take me long before I’d constructed a standard .screenrc for myself with a couple handy key bindings:

  • F4: window menu
  • F5: previous window (numerically)
  • F6: previous window (after switching explicitly using key bindings or the menu)
  • F8: next window (numerically)
  • F9: new window

I’ve used those key bindings for so many years that I feel lost without them!

With screen crashing on me constantly, my options were to find a stable package somewhere, build it myself (which I used to do all the time at IBM when I had to work on many Unix platforms simultaneously), or bite the bullet and see what it would take to switch to tmux.

tmux attach

I chose the latter, and with the help of some tutorials, it was pretty easy to make the switch to tmux.  Startup is pretty easy:

tmux

(instead of screen -q)

and

tmux at

(at is short for attach, what to use instead of screen -dr)

tmux bindings

All my trusty key bindings were easy to reimplement, requiring the following in my .tmux.conf:

# F4: list windows
bind-key -T root F4 list-windows
# F5: previous window (by index)
bind-key -T root F5 select-window -p
# F6: last selected window
bind-key -T root F6 select-window -l
# F8: next window (by index)
bind-key -T root F8 select-window -n
# F9: create a new window
bind-key -T root F9 new-window

tmux command line

One of the nice things about tmux is that you don’t need a whole bunch of complex key bindings that are hard to remember, as you can do it all on the command line from within any tmux slave window. This means that you can also alias your tmux commands easily! Here are a couple examples:

alias weekly='tmux new-window -c ~/weeklyreports/01 -n weekly -t 1'
alias master='tmux new-window -n master -c ~/master'
alias tests='tmux new-window -n tests -c ~/Tests'

These new-window aliases change the name displayed in the bottom bar, and open a new terminal in a set of specific directories.

The UI is pretty much identical, and a session might look something like:

tmux prefix binding

The only other customization that I made to tmux was to override the default key binding, as tmux uses control-b instead of screen’s control-a. control-b is much easier to type than control-a, but messes up paging in vim, so I’ve reset it to control-n using:

unbind C-b
set -g prefix ^N
bind n send-prefix

With this, the rename window command becomes ‘control-n ,’.

I can’t think of anything that uses control-n, but if that choice ends up being intrusive, I’ll probably just unbind control-b and not bother with a prefix binding, since tmux has fully functional command line options, and I can use easier to remember (or lookup) aliases.

Incompatibilities.

It looks like the bindings that I used above are valid with RHEL8 tmux-2.7, but not with RHEL7’s tmux-1.8.  That’s a bit of a pain, and means that I’ll have to

  1. find alternate newer tmux packages for RHEL7, or
  2. figure out how to do the same bindings with tmux-1.8 and have different dot files, or
  3. keep on using screen until I’ve managed to upgrade all my machines to RHEL8.

Nothing is ever easy;)

Raccoons vs. Cake: “Oh, come on kids, …, it’s still good!”

January 10, 2021 Incoherent ramblings No comments

Life comes in cycles, and here’s an old chapter replaying itself.

When I was a teenager, we spent weekdays with mom, and weekends with dad. Both of them lived a subsistence existence, but with the rent expenses that mom also had, she really struggled to pay the bills at that stage of our lives. I don’t remember the occasion, but one hot summer day, she had saved enough to buy the eggs, flour and other ingredients that she needed to make us all a cake as a special treat. After the cake was cooked, she put it on the kitchen table to cool enough that she could ice it (she probably would have used her classic cream-cheese and sugar recipe.)

That rental property did not have air conditioning, and the doors were always wide open in the summer. Imagine the smell of fresh baked cake pervading the air in the house, and then a blood curdling scream. It was the scream of a horrific physical injury, perhaps that of somebody with a foreign object embedded deep in the flesh of their leg. We all rushed down to find out what happened, and it turned out that the smell of the cake was not just inviting to us, but also to a family of raccoons. Mom walked into the kitchen to find a mother raccoon and her little kids all sitting politely at the table in a circle around the now cool cake, helping themselves to dainty little handfuls.  What sounded like the scream of mortal injury was the scream of a struggling mom, whose plan to spoil her kids was being eaten in front of her eyes.

From the kitchen you could enter the back room, or the hallway to the front door, and from the front door you could enter the “piano room”, which also had a door to the back room and back to the kitchen.  The scene degenerates into chaos at this point, with mom and the rest of us chasing crying and squealing raccoons in circles all around the first floor of the house along that circular path, with cake crumbs flying in all directions.  I don’t know how many laps we and the raccoons made of the house before we managed to shoo them all out the front or back door, but eventually we were left with just the crumb trail and the remains of the cake.

The icing on the cake was mom’s reaction though. She went over to the cake and cut all the raccoon handprints out of it. We didn’t want to eat it, and I still remember her pleading with us, “Oh come on kids, try it. It’s still good!” Poor mom.  She even took sample bites from the cake to demonstrate it was still edible, and convince us to partake in the treat that she’d worked so hard to make for us.  I don’t think that we ate her cake, despite her pleading.

Thirty years later, it’s my turn. I spent an hour making chili today, and after dinner I put the leftovers out on the back porch to cool in the slow cooker pot with the lid on. I’d planned to bag and freeze part of it, and put the rest in the fridge as leftovers for the week. It was cold enough out that I didn’t think that the raccoons would be out that early, but figured it would have been fair game had I left it out all night in the “outside fridge”. Well, those little buggers were a lot more industrious than I gave them credit for, and by the time I’d come back from walking the dog, they’d helped themselves to a portion, lifting the lid of the slow cooker pot, and making a big mess of as much chili as they wanted.  They ate quite a lot, but perhaps it had more spice than they cared for, as they left quite a lot:

Judging by the chili covered hand prints on the back deck I think they enjoyed themselves, despite the spices.

When I went upstairs to let Sofia know what had happened, she immediately connected the dots to this cake story that I’ve told so many times, and said in response: “Oh come on kids, it’s still good!”, at which point we both started laughing.

The total cost of the chili itself was probably only $17, plus one hour of time.  However, I didn’t intend to try to talk anybody into eating the remains.  It is just not worth getting raccoon carried Giardia or some stomach bug.  I was sad to see my work wasted and the leftovers ruined.  I wish Mom was still with us, so that I could share this with her.  I can imagine her visiting on this very day, where I could have scooped everything off the top, and then offered her a spoonful, saying “Oh come on Mom, it’s still good!”  I think that she would have gotten a kick out of that, even if she was always embarrassed about this story and how poor we were at the time.

Final thoughts.

There were 4 cans of beans in that pot of chili.  I have to wonder if we are going to have a family of farting raccoons in the neighbourhood for a few days?

Some experiments in youtube mathematics videos

January 3, 2021 math and physics play No comments

A couple years ago I was curious how easy it would be to use a graphics tablet as a virtual chalkboard, and produced a handful of very rough YouTube videos to get a feel for the basics of streaming and video editing (much of which I’ve now forgotten how to do). These were the videos in chronological order:

  • Introduction to Geometric (Clifford) Algebra. Interpretation of products of unit vectors, rules for reducing products of unit vectors, and the axioms that justify those rules.
  • Geometric Algebra: dot, wedge, cross and vector products. Geometric (Clifford) Algebra introduction, showing the relation between the vector product, the dot and wedge products, and the cross product.
  • Solution of two line intersection using geometric algebra.
  • Linear system solution using the wedge product. This video provides a standalone introduction to the wedge product, the geometry of the wedge product and some properties, and linear system solution as a sample application. In this video the wedge product is introduced independently of any geometric (Clifford) algebra, as an antisymmetric and associative operator. You’ll see that we get Cramer’s rule for free from this solution technique.
  • Exponential form of vector products in geometric algebra. In this video, I discussed the exponential form of the product of two vectors.

    I showed an example of how two unit vectors, each a rotation of zcap in orthonormal \(\mathbb{R}^3\) planes, produce a “complex” exponential in the plane that spans these two vectors.

  • Velocity and acceleration in cylindrical coordinates using geometric algebra. I derived the cylindrical coordinate representations of the velocity and acceleration vectors, showing the radial and azimuthal components of each vector.

    I also showed how these are related to the dot and wedge product with the radial unit vector.

  • Duality transformations in geometric algebra. Duality transformations (pseudoscalar multiplication) will be demonstrated in \(\mathbb{R}^2\) and \(\mathbb{R}^3\).

    A polar parameterized vector in \(\mathbb{R}^2\), written in complex exponential form, is multiplied by a unit pseudoscalar for the x-y plane. We see that the result is a vector normal to that vector, with the direction of the normal dependent on the order of multiplication, and the orientation of the pseudoscalar used.

    In \(\mathbb{R}^3\) we see that a vector multiplied by a pseudoscalar yields the bivector that represents the plane that is normal to that vector. The sign of that bivector (or its cyclic orientation) depends on the orientation of the pseudoscalar. The order of multiplication was not mentioned in this case since the \(\mathbb{R}^3\) pseudoscalar commutes with any grade object (assumed, not proved). An example of a vector with two components in a plane, multiplied by a pseudoscalar was also given, which allowed for a visualization of the bivector that is normal to the original vector.

  • Math bait and switch: Fractional integer exponents. When I was a kid, my dad asked me to explain fractional exponents, and perhaps any exponents that are not positive integers, to him. He objected to the idea of multiplying something by itself \(1/2\) times.

    I failed to answer the question to his satisfaction. My own son is now reviewing the rules of exponentiation, and it occurred to me (30 years later) why my explanation to Dad failed.

    Essentially, there’s a small bait and switch required, and my dad didn’t fall for it.

    The meaning that my dad gave to exponentiation was that \( x^n\) equals \(x\) times itself \(n\) times.

    Using this rule, it is easy to demonstrate that \(x^a x^b = x^{a + b}\), and this can be used to justify expressions like \(x^{1/2}\). However, doing this really means that we’ve switched the definition of exponential, defining an exponential as any number that satisfies the relationship:

    \(x^a x^b = x^{a+b}\),

    where \(x^1 = x\). This sleight of hand is required to give meaning to \(x^{1/2}\) or other exponentials where the exponent is not a positive integer.

Of these videos I just relistened to the wedge product episode, as I had a new lone comment on it, and I couldn’t even remember what I had said. It wasn’t completely horrible, despite the low tech. I was, however, very surprised how soft and gentle my voice was. When I am talking math in person, I get very animated, but attempting to manage the tech was distracting and all the excitement that I’d normally have was obliterated.

I’d love to attempt a manim based presentation of some of this material, but suspect if I do something completely scripted like that, I may not be a very good narrator.

New version of classical mechanics notes

January 1, 2021 Uncategorized No comments

I’ve posted a new version of my classical mechanics notes compilation.  This version is not yet live on amazon, but you shouldn’t buy a copy of this “book” anyways, as it is horribly rough (if you want a copy, grab the free PDF instead.)  [I am going to buy a copy so that I can continue to edit a paper copy of it, but nobody else should.]

This version includes additional background material on Space Time Algebra (STA), i.e. the geometric algebra name for the Dirac/Clifford-algebra in 3+1 dimensions.  In particular, I’ve added material on reciprocal frames, the gradient and vector derivatives, line and surface integrals and the fundamental theorem for both.  Some of the integration theory content might make sense to move to a different book, but I’ll keep it with the rest of these STA notes for now.

Relativistic multivector surface integrals

December 31, 2020 math and physics play No comments


Background.

This post is a continuation of the previous posts covering multivector line integrals and the fundamental theorem for line integrals.

Surface integrals.


We’ve now covered line integrals and the fundamental theorem for line integrals, so it’s now time to move on to surface integrals.

Definition 1.1: Surface integral.

Given a two variable parameterization \( x = x(u,v) \), we write \( d^2\Bx = \Bx_u \wedge \Bx_v du dv \), and call
\begin{equation*}
\int F d^2\Bx\, G,
\end{equation*}
a surface integral, where \( F,G \) are arbitrary multivector functions.

Like our multivector line integral, this is intrinsically multivector valued, with a product of \( F \) with arbitrary grades, a bivector \( d^2 \Bx \), and \( G \), also potentially with arbitrary grades. Let’s consider an example.

Problem: Surface area integral example.

Given the hyperbolic surface parameterization \( x(\rho,\alpha) = \rho \gamma_0 e^{-\vcap \alpha} \), where \( \vcap = \gamma_{20} \), evaluate the indefinite integral
\begin{equation}\label{eqn:relativisticSurface:40}
\int \gamma_1 e^{\gamma_{21}\alpha} d^2 \Bx\, \gamma_2.
\end{equation}

Answer

We have \( \Bx_\rho = \gamma_0 e^{-\vcap \alpha} \) and \( \Bx_\alpha = \rho\gamma_{2} e^{-\vcap \alpha} \), so
\begin{equation}\label{eqn:relativisticSurface:60}
\begin{aligned}
d^2 \Bx
&=
(\Bx_\rho \wedge \Bx_\alpha) d\rho d\alpha \\
&=
\gpgradetwo{
\gamma_{0} e^{-\vcap \alpha} \rho\gamma_{2} e^{-\vcap \alpha}
}
d\rho d\alpha \\
&=
\rho \gamma_{02} d\rho d\alpha,
\end{aligned}
\end{equation}
so the integral is
\begin{equation}\label{eqn:relativisticSurface:80}
\begin{aligned}
\int \rho \gamma_1 e^{\gamma_{21}\alpha} \gamma_{022} d\rho d\alpha
&=
-\inv{2} \rho^2 \int \gamma_1 e^{\gamma_{21}\alpha} \gamma_{0} d\alpha \\
&=
\frac{\gamma_{01}}{2} \rho^2 \int e^{\gamma_{21}\alpha} d\alpha \\
&=
\frac{\gamma_{01}}{2} \rho^2 \gamma^{12} e^{\gamma_{21}\alpha} \\
&=
\frac{\rho^2 \gamma_{20}}{2} e^{\gamma_{21}\alpha}.
\end{aligned}
\end{equation}
Because \( F \) and \( G \) were both vectors, the resulting integral could only have been a multivector with grades 0,2,4. As it happens, there were no scalar nor pseudoscalar grades in the end result, and we ended up with the spacetime plane between \( \gamma_0 \), and \( \gamma_2 e^{\gamma_{21}\alpha} \), which are rotations of \(\gamma_2\) in the x,y plane. This is illustrated in fig. 1 (omitting scale and sign factors.)

fig. 1. Spacetime plane.
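The result above can be sanity checked numerically. One way (an assumption of this sketch, not something used in the post) is to represent \( \gamma_0, \gamma_1, \gamma_2 \) with Dirac matrices, so that the geometric product becomes the matrix product, and to verify that the mixed \( \rho, \alpha \) partial of the claimed antiderivative recovers the integrand of \ref{eqn:relativisticSurface:80}:

import numpy as np
from scipy.linalg import expm

# Pauli matrices, used to build a Dirac representation of gamma_0, gamma_1, gamma_2
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])    # g0^2 = +1
g1 = np.block([[Z2, s1], [-s1, Z2]])    # g1^2 = -1
g2 = np.block([[Z2, s2], [-s2, Z2]])    # g2^2 = -1, and all three anticommute

rho, alpha = 1.3, 0.7
R = expm(g2 @ g1 * alpha)               # e^{gamma_{21} alpha}

# integrand of the first line of the integral: rho gamma_1 e^{gamma_{21} alpha} gamma_{022}
integrand = rho * g1 @ R @ (g0 @ g2 @ g2)

# mixed partial d^2/(d rho d alpha) of the claimed result (rho^2/2) gamma_{20} e^{gamma_{21} alpha}
derivative = rho * (g2 @ g0) @ (g2 @ g1) @ R

print(np.allclose(integrand, derivative))   # True

The same matrix representation trick can be used to spot check any of the STA manipulations that follow.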

Fundamental theorem for surfaces.

For line integrals we saw that \( d\Bx \cdot \grad = \gpgradezero{ d\Bx \partial } \), and obtained the fundamental theorem for multivector line integrals by omitting the grade selection and using the multivector operator \( d\Bx \partial \) in the integrand directly. We have the same situation for surface integrals. In particular, we know that the \(\mathbb{R}^3\) Stokes’ theorem can be expressed in terms of \( d^2 \Bx \cdot \spacegrad \).

Problem: GA form of 3D Stokes’ theorem integrand.

Given an \(\mathbb{R}^3\) vector field \( \Bf \), show that
\begin{equation}\label{eqn:relativisticSurface:180}
\int dA \ncap \cdot \lr{ \spacegrad \cross \Bf }
=
-\int \lr{d^2\Bx \cdot \spacegrad } \cdot \Bf.
\end{equation}

Answer

Let \( d^2 \Bx = I \ncap dA \), implicitly fixing the relative orientation of the bivector area element compared to the chosen surface normal direction.
\begin{equation}\label{eqn:relativisticSurface:200}
\begin{aligned}
\int \lr{d^2\Bx \cdot \spacegrad } \cdot \Bf
&=
\int dA \gpgradeone{I \ncap \spacegrad } \cdot \Bf \\
&=
\int dA \lr{ I \lr{ \ncap \wedge \spacegrad} } \cdot \Bf \\
&=
\int dA \gpgradezero{ I^2 \lr{ \ncap \cross \spacegrad} \Bf } \\
&=
-\int dA \lr{ \ncap \cross \spacegrad} \cdot \Bf \\
&=
-\int dA \ncap \cdot \lr{ \spacegrad \cross \Bf }.
\end{aligned}
\end{equation}

The moral of the story is that the conventional dual form of the \(\mathbb{R}^3\) Stokes’ theorem can be written directly by projecting the gradient onto the surface area element. Geometrically, this projection operation has a rotational effect as well, since for bivector \( B \), and vector \( x \), the bivector-vector dot product \( B \cdot x \) is the component of \( x \) that lies in the plane \( B \wedge x = 0 \), but also rotated 90 degrees.

For multivector integration, we do not want an integral operator that includes such dot products. In the line integral case, we were able to achieve the same projective operation by using the vector derivative instead of a dot product, and we can do the same for the surface integral case. In particular

Theorem 1.1: Projection of gradient onto the tangent space.

Given a curvilinear representation of the gradient with respect to parameters \( u^0, u^1, u^2, u^3 \)
\begin{equation*}
\grad = \sum_\mu \Bx^\mu \PD{u^\mu}{},
\end{equation*}
the surface projection onto the tangent space associated with any two of those parameters, satisfies
\begin{equation*}
d^2 \Bx \cdot \grad = \gpgradeone{ d^2 \Bx \partial }.
\end{equation*}

Start proof:

Without loss of generality, we may pick \( u^0, u^1 \) as the parameters associated with the tangent space. The area element for the surface is
\begin{equation}\label{eqn:relativisticSurface:100}
d^2 \Bx = \Bx_0 \wedge \Bx_1 \,
du^0 du^1.
\end{equation}
Dotting this with the gradient gives
\begin{equation}\label{eqn:relativisticSurface:120}
\begin{aligned}
d^2 \Bx \cdot \grad
&=
du^0 du^1
\lr{ \Bx_0 \wedge \Bx_1 } \cdot \Bx^\mu \PD{u^\mu}{} \\
&=
du^0 du^1
\lr{
\Bx_0
\lr{\Bx_1 \cdot \Bx^\mu }
-
\Bx_1
\lr{\Bx_0 \cdot \Bx^\mu }
}
\PD{u^\mu}{} \\
&=
du^0 du^1
\lr{
\Bx_0 \PD{u^1}{}
-
\Bx_1 \PD{u^0}{}
}.
\end{aligned}
\end{equation}
On the other hand, the vector derivative for this surface is
\begin{equation}\label{eqn:relativisticSurface:140}
\partial
=
\Bx^0 \PD{u^0}{}
+
\Bx^1 \PD{u^1}{},
\end{equation}
so
\begin{equation}\label{eqn:relativisticSurface:160}
\begin{aligned}
\gpgradeone{d^2 \Bx \partial}
&=
du^0 du^1\,
\lr{ \Bx_0 \wedge \Bx_1 } \cdot
\lr{
\Bx^0 \PD{u^0}{}
+
\Bx^1 \PD{u^1}{}
} \\
&=
du^0 du^1
\lr{
\Bx_0 \PD{u^1}{}

\Bx_1 \PD{u^0}{}
}.
\end{aligned}
\end{equation}

End proof.

We now want to formulate the geometric algebra form of the fundamental theorem for surface integrals.

Theorem 1.2: Fundamental theorem for surface integrals.

Given multivector functions \( F, G \), and surface area element \( d^2 \Bx = \lr{ \Bx_u \wedge \Bx_v }\, du dv \), associated with a two parameter surface \( x(u,v) \), then
\begin{equation*}
\int_S F d^2\Bx \lrpartial G = \int_{\partial S} F d^1\Bx G,
\end{equation*}
where \( S \) is the integration surface, and \( \partial S \) designates its boundary, and the line integral on the RHS is really short hand for
\begin{equation*}
\int
\evalbar{ \lr{ F (-d\Bx_v) G } }{\Delta u}
+
\int
\evalbar{ \lr{ F (d\Bx_u) G } }{\Delta v},
\end{equation*}
which is a line integral that traverses the boundary of the surface with the opposite orientation to the circulation of the area element.

Start proof:

The vector derivative for this surface is
\begin{equation}\label{eqn:relativisticSurface:220}
\partial =
\Bx^u \PD{u}{}
+
\Bx^v \PD{v}{},
\end{equation}
so
\begin{equation}\label{eqn:relativisticSurface:240}
F d^2\Bx \lrpartial G
=
\PD{u}{} \lr{ F d^2\Bx\, \Bx^u G }
+
\PD{v}{} \lr{ F d^2\Bx\, \Bx^v G },
\end{equation}
where \( d^2\Bx\, \Bx^u \) is held constant with respect to \( u \), and \( d^2\Bx\, \Bx^v \) is held constant with respect to \( v \) (since the partials of the vector derivative act on \( F, G \), but not on the area element, nor on the reciprocal vectors of \( \lrpartial \) itself.) Note that
\begin{equation}\label{eqn:relativisticSurface:260}
d^2\Bx \wedge \Bx^u
=
du dv\, \lr{ \Bx_u \wedge \Bx_v } \wedge \Bx^u = 0,
\end{equation}
since \( \Bx^u \in \textrm{span} \setlr{ \Bx_u, \Bx_v } \), so
\begin{equation}\label{eqn:relativisticSurface:280}
\begin{aligned}
d^2\Bx\, \Bx^u
&=
d^2\Bx \cdot \Bx^u
+
d^2\Bx \wedge \Bx^u \\
&=
d^2\Bx \cdot \Bx^u \\
&=
du dv\, \lr{ \Bx_u \wedge \Bx_v } \cdot \Bx^u \\
&=
-du dv\, \Bx_v.
\end{aligned}
\end{equation}
Similarly
\begin{equation}\label{eqn:relativisticSurface:300}
\begin{aligned}
d^2\Bx\, \Bx^v
&=
d^2\Bx \cdot \Bx^v \\
&=
du dv\, \lr{ \Bx_u \wedge \Bx_v } \cdot \Bx^v \\
&=
du dv\, \Bx_u.
\end{aligned}
\end{equation}
This leaves us with
\begin{equation}\label{eqn:relativisticSurface:320}
F d^2\Bx \lrpartial G
=
-du dv\,
\PD{u}{} \lr{ F \Bx_v G }
+
du dv\,
\PD{v}{} \lr{ F \Bx_u G },
\end{equation}
where \( \Bx_v, \Bx_u \) are held constant with respect to \( u,v \) respectively. Fortuitously, this constant condition can be dropped, since the antisymmetry of the wedge in the area element results in perfect cancellation. If these line elements are not held constant then
\begin{equation}\label{eqn:relativisticSurface:340}
\PD{u}{} \lr{ F \Bx_v G }
-
\PD{v}{} \lr{ F \Bx_u G }
=
-F \lr{
\PD{v}{\Bx_u}
-
\PD{u}{\Bx_v}
} G
+
\lr{
\PD{u}{F} \Bx_v G
+
F \Bx_v \PD{u}{G}
}
-
\lr{
\PD{v}{F} \Bx_u G
+
F \Bx_u \PD{v}{G}
}
,
\end{equation}
but the mixed partial contribution is zero
\begin{equation}\label{eqn:relativisticSurface:360}
\begin{aligned}
\PD{v}{\Bx_u}
-
\PD{u}{\Bx_v}
&=
\PD{v}{} \PD{u}{x}
-
\PD{u}{} \PD{v}{x} \\
&=
0,
\end{aligned}
\end{equation}
by equality of mixed partials. We have two perfect differentials, and can evaluate each of these integrals
\begin{equation}\label{eqn:relativisticSurface:380}
\begin{aligned}
\int F d^2\Bx \lrpartial G
&=
-\int
du dv\,
\PD{u}{} \lr{ F \Bx_v G }
+
\int
du dv\,
\PD{v}{} \lr{ F \Bx_u G } \\
&=
-\int
dv\,
\evalbar{ \lr{ F \Bx_v G } }{\Delta u}
+
\int
du\,
\evalbar{ \lr{ F \Bx_u G } }{\Delta v} \\
&=
\int
\evalbar{ \lr{ F (-d\Bx_v) G } }{\Delta u}
+
\int
\evalbar{ \lr{ F (d\Bx_u) G } }{\Delta v}.
\end{aligned}
\end{equation}
We use the shorthand \( d^1 \Bx = d\Bx_u - d\Bx_v \) to write
\begin{equation}\label{eqn:relativisticSurface:400}
\int_S F d^2\Bx \lrpartial G = \int_{\partial S} F d^1\Bx G,
\end{equation}
with the understanding that this is really instructions to evaluate the line integrals in the last step of \ref{eqn:relativisticSurface:380}.

End proof.

Problem: Integration in the t,y plane.

Let \( x(t,y) = c t \gamma_0 + y \gamma_2 \). Write out both sides of the fundamental theorem explicitly.

Answer

Let’s designate the tangent basis vectors as
\begin{equation}\label{eqn:relativisticSurface:420}
\Bx_0 = \PD{t}{x} = c \gamma_0,
\end{equation}
and
\begin{equation}\label{eqn:relativisticSurface:440}
\Bx_2 = \PD{y}{x} = \gamma_2,
\end{equation}
so the vector derivative is
\begin{equation}\label{eqn:relativisticSurface:460}
\partial
= \inv{c} \gamma^0 \PD{t}{}
+ \gamma^2 \PD{y}{},
\end{equation}
and the area element is
\begin{equation}\label{eqn:relativisticSurface:480}
d^2 \Bx = c \gamma_0 \gamma_2\, dt dy.
\end{equation}
The fundamental theorem of surface integrals is just a statement that
\begin{equation}\label{eqn:relativisticSurface:500}
\int_{t_0}^{t_1} c dt
\int_{y_0}^{y_1} dy
F \gamma_0 \gamma_2 \lr{
\inv{c} \gamma^0 \PD{t}{}
+ \gamma^2 \PD{y}{}
} G
=
\int F \lr{ c \gamma_0 dt - \gamma_2 dy } G,
\end{equation}
where the RHS, when stated explicitly, really means
\begin{equation}\label{eqn:relativisticSurface:520}
\begin{aligned}
\int &F \lr{ c \gamma_0 dt - \gamma_2 dy } G
=
\int_{t_0}^{t_1} c dt \lr{ F(t,y_1) \gamma_0 G(t, y_1) - F(t,y_0) \gamma_0 G(t, y_0) } \\
&\qquad -
\int_{y_0}^{y_1} dy \lr{ F(t_1,y) \gamma_2 G(t_1, y) - F(t_0,y) \gamma_2 G(t_0, y) }.
\end{aligned}
\end{equation}
In this particular case, since \( \Bx_0 = c \gamma_0, \Bx_2 = \gamma_2 \) are both constant functions that depend on neither \( t \) nor \( y \), it is easy to derive the full expansion of \ref{eqn:relativisticSurface:520} directly from the LHS of \ref{eqn:relativisticSurface:500}.
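Here is a small numeric check of \ref{eqn:relativisticSurface:500} and \ref{eqn:relativisticSurface:520}, again using a Dirac matrix representation of the basis (an assumption of this sketch), with an arbitrarily chosen constant vector \( F \) and scalar \( G \):

import numpy as np
import sympy as sp

# Dirac matrices standing in for gamma_0, gamma_1, gamma_2 (a representation assumed
# for this sketch only): g0^2 = +1, g1^2 = g2^2 = -1, all mutually anticommuting.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1 = np.block([[Z2, s1], [-s1, Z2]])
g2 = np.block([[Z2, s2], [-s2, Z2]])
g0_up, g2_up = g0, -g2            # gamma^0 = gamma_0, gamma^2 = -gamma_2

c = 3.0
t, y = sp.symbols('t y', real=True)
t0, t1, y0, y1 = 0, 1, 0, 2       # the integration rectangle
gfun = t**2 * y + y**3            # G(t, y): an arbitrarily chosen scalar multivector
F = g1                            # an arbitrarily chosen constant vector

def iint(e):                      # exact area integral over the rectangle
    return float(sp.integrate(e, (t, t0, t1), (y, y0, y1)))
def it(e):                        # exact integral over t
    return float(sp.integrate(e, (t, t0, t1)))
def iy(e):                        # exact integral over y
    return float(sp.integrate(e, (y, y0, y1)))

# LHS: since F and the area element are constant, and G is a scalar, the scalar
# integrals can be pulled out of the matrix products.
lhs = c * F @ g0 @ g2 @ ((1 / c) * g0_up * iint(sp.diff(gfun, t))
                         + g2_up * iint(sp.diff(gfun, y)))

# RHS: the boundary (line integral) form.
rhs = (c * it(gfun.subs(y, y1) - gfun.subs(y, y0)) * F @ g0
       - iy(gfun.subs(t, t1) - gfun.subs(t, t0)) * F @ g2)

print(np.allclose(lhs, rhs))      # True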

Problem: A cylindrical hyperbolic surface.

Generalizing the example surface integral from \ref{eqn:relativisticSurface:40}, let
\begin{equation}\label{eqn:relativisticSurface:540}
x(\rho, \alpha) = \rho e^{-\vcap \alpha/2} x(0,1) e^{\vcap \alpha/2},
\end{equation}
where \( \rho \) is a scalar, and \( \vcap = \cos\theta_k\gamma_{k0} \) is a unit spatial bivector, and \( \cos\theta_k \) are direction cosines of that vector. This is a composite transformation, where the \( \alpha \) variation boosts the \( x(0,1) \) four-vector, and the \( \rho \) parameter contracts or increases the magnitude of this vector, resulting in \( x \) spanning a hyperbolic region of spacetime.

Compute the tangent and reciprocal basis, the area element for the surface, and explicitly state both sides of the fundamental theorem.

Answer

For the tangent basis vectors we have
\begin{equation}\label{eqn:relativisticSurface:560}
\Bx_\rho = \PD{\rho}{x} =
e^{-\vcap \alpha/2} x(0,1) e^{\vcap \alpha/2} = \frac{x}{\rho},
\end{equation}
and
\begin{equation}\label{eqn:relativisticSurface:580}
\Bx_\alpha = \PD{\alpha}{x} =
\lr{-\vcap/2} x
+
x \lr{ \vcap/2 }
=
x \cdot \vcap.
\end{equation}
These vectors \( \Bx_\rho, \Bx_\alpha \) are orthogonal, as \( x \cdot \vcap \) is the projection of \( x \) onto the spacetime plane \( x \wedge \vcap = 0 \), but rotated so that \( x \cdot \lr{ x \cdot \vcap } = 0 \). Because of this orthogonality, the vector derivative for this tangent space is
\begin{equation}\label{eqn:relativisticSurface:600}
\partial =
\inv{x \cdot \vcap} \PD{\alpha}{}
+
\frac{\rho}{x}
\PD{\rho}{}
.
\end{equation}
The area element is
\begin{equation}\label{eqn:relativisticSurface:620}
\begin{aligned}
d^2 \Bx
&=
d\rho d\alpha\,
\frac{x}{\rho} \wedge \lr{ x \cdot \vcap } \\
&=
\inv{\rho} d\rho d\alpha\,
x \lr{ x \cdot \vcap }
.
\end{aligned}
\end{equation}
The full statement of the fundamental theorem for this surface is
\begin{equation}\label{eqn:relativisticSurface:640}
\int_S
d\rho d\alpha\,
F
\lr{
\inv{\rho} x \lr{ x \cdot \vcap }
}
\lr{
\inv{x \cdot \vcap} \PD{\alpha}{}
+
\frac{\rho}{x}
\PD{\rho}{}
}
G
=
\int_{\partial S}
F \lr{ d\rho \frac{x}{\rho} - d\alpha \lr{ x \cdot \vcap } } G.
\end{equation}
As in the previous example, due to the orthogonality of the tangent basis vectors, it’s easy to find the RHS directly from the LHS.

Problem: Simple example with non-orthogonal tangent space basis vectors.

Let \( x(u,v) = u a + v b \), where \( u,v \) are scalar parameters, and \( a, b \) are non-null and non-colinear constant four-vectors. Write out the fundamental theorem for surfaces with respect to this parameterization.

Answer

The tangent basis vectors are just \( \Bx_u = a, \Bx_v = b \), with reciprocals
\begin{equation}\label{eqn:relativisticSurface:660}
\Bx^u = \Bx_v \cdot \inv{ \Bx_u \wedge \Bx_v } = b \cdot \inv{ a \wedge b },
\end{equation}
and
\begin{equation}\label{eqn:relativisticSurface:680}
\Bx^v = -\Bx_u \cdot \inv{ \Bx_u \wedge \Bx_v } = -a \cdot \inv{ a \wedge b }.
\end{equation}
The fundamental theorem, with respect to this surface, when written out explicitly takes the form
\begin{equation}\label{eqn:relativisticSurface:700}
\int F \, du dv\, \lr{ a \wedge b } \lr{ \inv{ a \wedge b } \cdot \lr{ a \PD{u}{} - b \PD{v}{} } } G
=
\int F \lr{ a du - b dv } G.
\end{equation}
This is a good example to illustrate the geometry of the line integral circulation.
Suppose that we are integrating over \( u \in [0,1], v \in [0,1] \). In this case, the line integral really means
\begin{equation}\label{eqn:relativisticSurface:720}
\begin{aligned}
\int &F \lr{ a du - b dv } G
=
+
\int F(u,1) (+a du) G(u,1)
+
\int F(u,0) (-a du) G(u,0) \\
&\quad+
\int F(1,v) (-b dv) G(1,v)
+
\int F(0,v) (+b dv) G(0,v),
\end{aligned}
\end{equation}
which is a path around the spacetime parallelogram spanned by \( a \) and \( b \), as illustrated in fig. 2. The orientation of the bivector area element is indicated by the arrows around the exterior of the parallelogram: \( 0 \rightarrow a \rightarrow a + b \rightarrow b \rightarrow 0 \).

fig. 2. Line integral orientation.

Sabine Hossenfelder’s “Lost in Math”

December 27, 2020 Incoherent ramblings No comments

“Lost in Math” is a book that I’ve been curious to read, as I’ve been a subscriber to Sabine’s blog and youtube channel for quite a while.  On her blog and channel, she provides overviews of many topics in physics that are well articulated, as well as what appear to be very well reasoned and researched criticisms of a number of topics (mostly physics related.)  Within the small population of people interested in theoretical physics, I think that she is also very well known for her complete fearlessness, as she appears to have none of the usual social resistance to offending somebody should her statements not be aligned with popular consensus.

This book has a few aspects:

  • Interviews with a number of interesting and prominent physicists
  • A brutal take on the failures of string theory, supersymmetry, theories of everything, and other research programs that have consumed significant research budgets, but are detached from experimental and observational constraints.
  • An argument against the use of beauty, naturalness, and fine tuning avoidance in the constructions of physical theory.  Through the many interviews, we get a glimpse of the specific meanings of these words in the context of modern high level physical theories.
  • Some arguments against bigger colliders, given that the current ones have not delivered on their promises of producing new physics.
  • A considerable history of modern physics, and background for those wondering what the problems that string theory and supersymmetry have been trying to solve in the first place.
  • Some going-forward recommendations.

While there were no equations in this book, it is not a terribly easy read.  I felt that reading this requires considerable physics sophistication.  To level set, while I haven’t studied particle physics or the standard model, I have studied special relativity, electromagnetism, quantum mechanics, and even some introductory QFT, but still found this book fairly difficult (and I admit to nodding off a few times as a result.)  I don’t think this is really a book that is aimed at the general public.

If you do have the background to attempt this book, you will probably learn a fair amount, on topics that include, for example: the standard model, general relativity, symmetry breaking, coupling constants, and the cosmological constant.  An example was her nice illustration of symmetry breaking.  We remember touching on this briefly in QFT I, but it was presented in an algebraic and abstract fashion.  At the time I didn’t get the high level view of what this meant (something with higher energy can have symmetries that are impossible at lower energies.)  In this book, this concept is illustrated by a spinning top, which when spinning fast is stable and has rotational symmetry, but once frictional effects start to slow it down, it will start to precess and wobble, and the symmetry that is evident at higher spin rates weakens.  This was a particularly apt justification for the title of the book, as her description of symmetry breaking did not require any mathematics!

Deep in the book, it was pointed out that the equations of the standard model cannot generally be solved, but have to be dealt with using perturbation methods.  In retrospect, this shouldn’t have surprised me, since we generally can’t solve non-harmonic oscillator problems in closed form, and have to resort to numerical methods for most interesting problems.

There were a number of biting statements that triggered laughs while reading this book.  I wish that I’d made notes of more of those while I read it, but here are two to whet your appetite:

  • If you’d been sucking away on a giant jawbreaker for a century, wouldn’t you hope to finally get close to the gum?
  • It’s easy enough for us to discard philosophy as useless — because it is useless.

On the picture above.

I like reading in the big living room chair behind my desk that our dog Tessa has claimed as her own, so as soon as I get up for coffee (or anything else), she will usually come and plop herself in the chair so that it’s no longer available to me.  If she was lying on the floor, and my wife sits on “her” chair, she will almost always occupy it once Sofia gets up.  Ironically, the picture above was taken just after I had gotten to the section where she was interviewing Chad Orzel, of “How to Teach Quantum Mechanics to your Dog” fame.

On pet physics theories, Scientology, cosmology, relativity and libertarian tendencies.

December 26, 2020 Incoherent ramblings 1 comment

In a recent Brian Keating podcast, he asked people to comment if they had their own theory of physics.

I’ve done a lot of exploration of conventional physics, both on my own, and with some in class studies (non-degree undergrad physics courses at UofT, plus grad QM and QFTI), but I don’t have my own personal theories of physics.  It’s enough of a challenge to figure out the existing theories without making up your own\({}^{1}\).

However, I have had one close encounter with an alternate physics theory, as I had a classmate in undergrad QMI (phy356) that had a personal “Aether” theory of relativity.  He shared that theory with me, but it came in the form of about 50 pages of dense text without equations.  For all intents and purposes, this was a theory that was written in an entirely different language than the rest of physics.  To him, it was all self evident, and he got upset with the suggestion of trying to mathematize it.  A lot of work and thought went into that theory, but it is work that has very little chance of paying off, since it was packaged in a form that made it unpalatable to anybody who is studying conventional physics.  There is also a great deal of work that would be required to “complete” the theory (presuming that could be done), since he would have to show that his theory is not inconsistent with many measurements and experiments, and would not predict nonphysical phenomena.  That was really an impossible task, which he would have found had he attempted to do so.  However, instead of attempting to do that work, he seemed to think that the onus should fall on others to do so.  He had done the work to write what he believed to be a self consistent logical theory that was self evidently true, and shouldn’t have to do anything more.

It is difficult to fully comprehend how he would have arrived at such certainty about his Aether theory, when he did not have the mathematical sophistication to handle the physics found in the theories that he believed his own should supplant.  However, in his defence, there are some parts of what I imagine was his thought process that I can sympathize with.  The amount of knowledge required to understand the functioning of even a simple digital watch (not to mention the cell “phone” computers that we now all carry) is absolutely phenomenal.  We are surrounded by so much sophisticated technology that understanding the mechanisms behind it all is practically unknowable.  Much of the world around us is effectively magic to most people, even those with technical sophistication.  Should there be some sort of catastrophe that wipes out civilization, requiring us to relearn or redevelop everything from first principles, nobody really has the breadth required to reconstruct the world around us.  It is rather humbling to ponder that.

One way of coping with the fact that it is effectively impossible to know how everything works is to not believe in any consensus theories, period.  I think that is the origin of the recent popularization of flat earth models.  I think this was a factor in my classmate’s theory, as he also went on to believe that quantum mechanics was false (or perhaps already believed that when I knew him, but never stated it to me).  People understand that it is impossible to know everything required to build their own satellites, imaging systems, rockets, et al. (i.e. the sophisticated means of disproving the flat earth theory) and decide to disbelieve everything that they cannot personally prove.  That’s an interesting defence mechanism, but it takes things to a rather extreme conclusion.

I have a lot of sympathy for those that do not believe in consensus theories.  Without such disbelief I would not have my current understanding of the world.  It happens that the prevailing consensus theory that I knew growing up was that of Scientology.  Among the many ideas that one finds in Scientology is a statement that relativity is wrong\({}^2\).  It has been too many years for me to accurately recall the reasons Hubbard gave for relativity being incorrect, but I do seem to recall that one of the arguments had to do with the speed of light being non-constant when bent by a prism\({}^3\).  I carried some Scientology-derived skepticism of relativity into the undergrad “relativistic electrodynamics”\({}^4\) course that I took back around 2010, but had I not been willing to disregard the consensus beliefs that I had been exposed to up to that point in time, I would not have learned anything from that class.  Throwing away beliefs so that you can form your own conclusions is the key to being able to learn and grow.

I would admit to still carrying around baggage from my early indoctrination, despite not having practised Scientology for 25+ years.  This baggage spans multiple domains.  One example is that I do not subscribe to the religious belief that government and police are my friends.  It is hard to see your father, whom you love and respect, persecuted, and not come away with disrespect for the persecuting institutions.  I now have a rough idea of what Dad, back in the Scientology Guardian’s Office, did that triggered the ire of the Ontario crown attorneys\({}^5\).  However, that history definitely colored my views and current attitudes.  In particular, I recognize that back history as a key factor that pushed me so strongly in a libertarian direction.  The libertarian characterization of government as an entity that infringes on personal property and rights seems very reasonable, and aligns perfectly with my experience\({}^6\).

A second example of indoctrination-based disbelief that I recognize I carry with me is not subscribing to the current popular cosmological models of physics.  The big bang, and the belief that we know to picosecond granularity what the universe was like at its beginning, seems very religious to me.  That belief requires too much extrapolation, and it does not seem any more convincing to me than the Scientology cosmology.  The Scientology cosmology is somewhat ambiguous, and contains both a multiverse story and a steady state but finite model.  In the steady state aspect of that cosmology, the universe that we inhabit is dated with an age of 76 trillion years, but I don’t recall any sort of origin story for the beginning portion of that 76 trillion.  Given the little bits of things that we can actually measure and observe, I have no inclination to sign up for the big bang testament any more than any other mythical origin story.  Thankfully, I can study almost anything that has practical application in physics or engineering, and no amount of belief in or disdain for the big bang or other less popular “physics” cosmologies makes any difference.  All of these, whether they be the big bang, cyclic theories, multiverses (of the quantum, thetan-created\({}^7\), or inflationary varieties), or even the old Scientology 76 trillion year old cosmology of my youth, cannot be measured, proven, or disproved.  Just about any cosmology has no impact on anything real.

This throw-it-all-out point of view of cosmology is probably too cynical and harsh a treatment of the subject.  It is certainly not the point of view that most practising physicists would take, but it is eminently practical.  There’s too much that is unknowable, so why waste time on the least knowable aspects of the unknowable when there are so many concrete things that we can learn.

 

Footnotes:

[1] The closest that I have come to my own theory of physics is somewhat zealous advocacy for the use of real Clifford algebras in engineering and physics (aka. geometric algebra.)  However, that is not a new theory, it is just a highly effective way to compactly represent many of the entities that we encounter in more clumsy forms in conventional physics.

[2] Hubbard’s sci-fi writing shows that he had knowledge of special relativistic time-dilation, and length-contraction effects.  I seem to recall that Lorentz transformations were mentioned in passing (on either the Student hat course, or in the “PDC” lectures).  I don’t believe that Hubbard had the mathematical sophistication to describe a Lorentz transformation in a quantitative sense.

[3] The traversal of light through matter is a complex affair, considerably different from light in vacuum, where the relativistic constancy applies.  It would be interesting to have a quantitative understanding of the chances of a photon getting through a piece of glass without interacting (absorption and subsequent time-delayed spontaneous re-emission of new photons as the lattice atoms drop back into lower energy states.)  There are probably also interactions of photons with the phonons of the lattice itself, and I don’t know how those would be quantified.  However, in short, I bet there is a large chance that most of the light that exits a transparent piece of matter is not the same light that went in, as it is going to come out as photons with different momentum, polarization, frequency, and so forth.  If we measure characteristics of a pulse of light going into and back out of matter, it’s probably somewhat akin to measuring the characteristics of a clementine orange that is thrown at a piece of heavy chicken wire at fastball speeds.  You’ll get some orange peel, seeds, pulp, and other constituent components out the other side of the mesh, but shouldn’t really consider the inputs and the outputs to be equivalent entities.

[4] Relativistic electrodynamics was an extremely redundant course title, but was used to distinguish the class from the 3rd year applied electrodynamics course that had none of the tensor, relativistic, Lagrangian, or four-vector baggage.

[5] Some information about that court case is publicly available, but it would be interesting to see whether I could use the Canadian or Ontario equivalent to the US freedom of information laws to request records from the Ontario crown and the RCMP about the specifics of Dad’s case.  Dad has passed, and was never terribly approachable about the subject when I could have asked him personally.  I did get his spin on the events as well as the media spin, and suspect that the truth is somewhere in between.

[6] This last year will probably push many people towards libertarianism (at least the subset of people that are not happy to be conforming sheep, or are too scared not to conform.)  We’ve had countless examples of watching evil bastards in government positions of power impose dictatorial and oppressive covid lockdowns on the poorest and most unfortunate people that they supposedly represent.  Instead, we see the corruption at full scale, with the little guys clobbered, and this covid lockdown scheme essentially being a way to efficiently channel money into the pockets of the rich.  The little guys lose their savings and livelihoods, and get their businesses shut down by fat corrupt bastards that believe they have the authority to decide whether or not you as an individual are essential.  The fat bastards that have the stamp of government authority do not believe that you should have the right to make up your own mind about what levels of risk are acceptable to you or your family.

[7] In Scientology, a sufficiently capable individual is considered capable of creating their own universes, independent of the 76 trillion year old universe that we currently inhabit.  Thetan is the label for the non-corporeal form of that individual (i.e. what would be called the spirit or the soul in other religions.)

 

Just watched Cloonie’s “Midnight Sky”

December 26, 2020 Incoherent ramblings 4 comments , , , , , ,

I just watched George Clooney’s “Midnight Sky” on Netflix.

The movie is visually striking, set on a space ship and on an apocalyptic Earth roughly 30 years from now.  Some sort of unspecified radioactive disaster has pretty much wiped out all livable space on Earth.  The movie focuses on the attempt of a sick astronomer to communicate with a space ship that has been off exploring a newly found habitable moon of Jupiter.  They have been out of communication with Earth for a couple of years.

I really didn’t understand the foundational premise of the movie.  We have been able to receive communications from satellites that we’ve sent to Jupiter, and a quick check says it’s only ~35 light minutes between Jupiter and Earth at closest approach.  Even at maximum separation the one-way delay only grows to something under an hour, so a round trip message is a couple of hours at most (guesstimating).  Why would the ship have gone completely out of communication with Earth for years while they were on their mission?
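For a back-of-the-envelope check of that guess (my own rough arithmetic, assuming Jupiter orbits at roughly 5.2 AU, Earth at 1 AU, and one AU of distance is about 8.3 light minutes), the one-way signal delay is bracketed by
\begin{equation}
t_{\text{min}} \approx (5.2 - 1) \times 8.3\,\text{min} \approx 35\,\text{min},
\qquad
t_{\text{max}} \approx (5.2 + 1) \times 8.3\,\text{min} \approx 52\,\text{min},
\end{equation}
so the couple of hours guessed above for a round trip is, if anything, an overestimate.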

There were lots of other holes in the movie, and I wonder if some of those missing pieces were detailed in the book?

Incidentally, the astronomy facility looked really cosy and comfortable for something located in Antarctica!  There was mention of the poles late in the movie, but early on there was the famous picture of the explorer Scott with his four companions on the wall, which I assumed was meant to give away the location (I recognized that picture from Brian Keating’s book, “Losing the Nobel Prize”.)

Brian Keating’s “Losing the Nobel Prize”

December 24, 2020 Incoherent ramblings 2 comments , , , , , , ,

I’ve just finished “Losing the Nobel Prize”, by Brian Keating.  I’d heard the book mentioned in episodes of his “Into the impossible” podcast\({}^1\).

This is a pretty fun and interesting book, with a few distinct threads woven through it:

  • his astronomical and cosmological work,
  • a pretty thorough background on a number of astronomical principles and history,
  • rationale for a number of the current and past cosmological models,
  • how he got close to but missed the Nobel target with his work,
  • discussion and criticisms of the Nobel nomination process and rules, and
  • DUST!

I had no idea that dust has been the nemesis of astronomers for so many hundreds of years, and will likely continue to be so for hundreds more.  This is not just dust on the lenses, but the dust and other fine matter that pervades the universe and mucks up measurements.  It will be fitting for his book to end up dusty on bookshelves around the world once all its purchasers have read it.

The author clearly knows his material well, and presents a thorough background lesson on the history of cosmology, starting way back at the Earth centered model, and moving through the history of competing narratives to the current big bang and inflationary models that seem to have popular consensus.

I’ve never thought much of cosmological ideas, as they go so deep into the territory of extrapolation that they seem worthless to me.  How can you argue that you know what happened \( 10^{-17} \) seconds after the beginning of the universe\({}^2\), when we can’t solve a three-body problem without chaos getting into the mix?  The level of extrapolation that is required for some of these models makes arguments about them seem akin to arguing about how many angels fit on the head of a pin.

What’s kind of sad about cosmological models is how little difference they make.  It doesn’t matter if you subscribe to the current big bang religion, cyclic variations of bang and collapse, steady state, multiverses, or anything else: none of the theories have any practical application to anything that we can see or hear or touch.  I don’t think that my preconceived ideas about the uselessness of cosmology have been changed much by reading this book.  However, I do have a new appreciation for the careful and thorough thought, measurement, and experiment that has gone into building and discarding various models over time.  This book details many of the key experiments and concepts that lie behind some of the models.  It would take a lot of work to fully understand the ideas that were outlined in this book, and that’s not work that I’m inclined to do, but I did enjoy his thorough overview.

Okay, that’s enough of a rant against cosmology.  Don’t let my distaste for that subject dissuade you from reading this book, which is well written, entertaining, informative, and thoughtful.

As a small teaser, here are a couple of selected lines that give a taste for the clever wit that is casually interlaced into the book:

  • Trying to interest others in astronomy: “If you can imagine teaching music appreciation to a class filled with tone-deaf students, it was like that, only more disheartening.”
  • “It was all worth it, he assured me: because there was only going to be one sunset and one sunrise in the next year at the South Pole, he would take home $75,000 for a single night’s work!”
  • “By the time I arrived at the Pole, it was chilly for summer: -30 C (-25 F).”

Footnotes

[1]  I have not worked through all of his back episodes, but his line up of recent guests (Penrose, Susskind, Wilczek, Glashow, …) has been pretty spectacular.

[2] I am probably wrong about the precise level of granularity that is claimed to be known, but I do recall from my teenage reading of Hawking’s Brief History that he insisted we “know” what happened down to insane levels of precision.
