
Bivector transformation, and reciprocal frame for column vectors of a transformation

January 21, 2024 math and physics play

[Click here for a PDF version of this and previous two posts]

The author of a book draft I am reading pointed out that if a vector transforms as
\begin{equation}\label{eqn:adjoint:760}
\Bv \rightarrow M \Bv,
\end{equation}
then cross products must transform as
\begin{equation}\label{eqn:adjoint:780}
\Ba \cross \Bb \rightarrow \lr{ \textrm{adj}\, M }^\T \lr{ \Ba \cross \Bb }.
\end{equation}
Bivectors clearly must transform in the same fashion. We also noticed that the adjoint is related to the reciprocal frame vectors of the columns of \( M \), but didn’t examine the reciprocal frame formulation of the adjoint in any detail.

Before we do that, let’s consider a slightly simpler case, the transformation of a pseudoscalar. That is
\begin{equation}\label{eqn:adjoint:800}
\begin{aligned}
\Ba \wedge \Bb \wedge \Bc
&\rightarrow
M(\Ba) \wedge M(\Bb) \wedge M(\Bc) \\
&=
\sum_{ijk}
\lr{ \Bm_i a_i } \wedge
\lr{ \Bm_j b_j } \wedge
\lr{ \Bm_k c_k } \\
&=
\sum_{ijk}
\lr{ \Bm_i \wedge \Bm_j \wedge \Bm_k } a_i b_j c_k \\
&=
\sum_{ijk}
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 } \epsilon_{ijk} a_i b_j c_k \\
&=
\Abs{M} I
\sum_{ijk} \epsilon_{ijk} a_i b_j c_k \\
&=
\Abs{M} \lr{ \Ba \wedge \Bb \wedge \Bc }.
\end{aligned}
\end{equation}
This is a well-known geometric algebra result (called an outermorphism transformation.)

It’s somewhat amusing that an outermorphism transformation with two wedged vectors is a bit more complicated to express than the same for three. Let’s see if we can find a coordinate free form for such a transformation.
\begin{equation}\label{eqn:adjoint:820}
\begin{aligned}
M(\Ba) \wedge M(\Bb)
&=
\sum_{ij} \lr{ \Bm_i a_i } \wedge \lr{ \Bm_j b_j } \\
&=
\sum_{ij} \lr{ \Bm_i \wedge \Bm_j } a_i b_j \\
&=
\sum_{i < j} \lr{ \Bm_i \wedge \Bm_j }
\begin{vmatrix}
a_i & a_j \\
b_i & b_j
\end{vmatrix} \\
&=
\sum_{i < j} \lr{ \Bm_i \wedge \Bm_j } \lr{ \lr{ \Ba \wedge \Bb } \cdot \lr{ \Be_j \wedge \Be_i } }.
\end{aligned}
\end{equation}

Recall that the reciprocal frame with respect to the basis \( \setlr{ \Bm_1, \Bm_2, \Bm_3 } \), assuming this is a non-degenerate basis, has elements of the form
\begin{equation}\label{eqn:adjoint:840}
\begin{aligned}
\Bm^1 &= \lr{ \Bm_2 \wedge \Bm_3 } \inv{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 } \\
\Bm^2 &= \lr{ \Bm_3 \wedge \Bm_1 } \inv{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 } \\
\Bm^3 &= \lr{ \Bm_1 \wedge \Bm_2 } \inv{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }.
\end{aligned}
\end{equation}
This can be flipped around as
\begin{equation}\label{eqn:adjoint:860}
\begin{aligned}
\Bm_2 \wedge \Bm_3 &= \Bm^1 \Abs{M} I \\
\Bm_3 \wedge \Bm_1 &= \Bm^2 \Abs{M} I \\
\Bm_1 \wedge \Bm_2 &= \Bm^3 \Abs{M} I.
\end{aligned}
\end{equation}
Substituting, we find
\begin{equation}\label{eqn:adjoint:880}
\begin{aligned}
M&(\Ba) \wedge M(\Bb) \\
&=
\lr{ \Bm_1 \wedge \Bm_2 } \lr{ \lr{ \Ba \wedge \Bb } \cdot \lr{ \Be_2 \wedge \Be_1 } }
+
\lr{ \Bm_2 \wedge \Bm_3 } \lr{ \lr{ \Ba \wedge \Bb } \cdot \lr{ \Be_3 \wedge \Be_2 } }
+
\lr{ \Bm_3 \wedge \Bm_1 } \lr{ \lr{ \Ba \wedge \Bb } \cdot \lr{ \Be_1 \wedge \Be_3 } } \\
&=
I \Abs{M} \lr{
\Bm^3 \lr{ \lr{ \Ba \wedge \Bb } \cdot \lr{ \Be_2 \wedge \Be_1 } }
+
\Bm^1 \lr{ \lr{ \Ba \wedge \Bb } \cdot \lr{ \Be_3 \wedge \Be_2 } }
+
\Bm^2 \lr{ \lr{ \Ba \wedge \Bb } \cdot \lr{ \Be_1 \wedge \Be_3 } }
}.
\end{aligned}
\end{equation}

Let’s see if we can simplify one of these double-index quantities.
\begin{equation}\label{eqn:adjoint:900}
\begin{aligned}
I \lr{ \lr{ \Ba \wedge \Bb } \cdot \lr{ \Be_2 \wedge \Be_1 } }
&=
\gpgradethree{ I \lr{ \lr{ \Ba \wedge \Bb } \cdot \lr{ \Be_2 \wedge \Be_1 } } } \\
&=
\gpgradethree{ I \lr{ \Ba \wedge \Bb } \lr{ \Be_2 \wedge \Be_1 } } \\
&=
\gpgradethree{ \lr{ \Ba \wedge \Bb } \Be_{12321} } \\
&=
\gpgradethree{ \lr{ \Ba \wedge \Bb } \Be_{3} } \\
&=
\Ba \wedge \Bb \wedge \Be_3.
\end{aligned}
\end{equation}
We have
\begin{equation}\label{eqn:adjoint:920}
M(\Ba) \wedge M(\Bb) = \Abs{M} \lr{
\lr{ \Ba \wedge \Bb \wedge \Be_1 } \Bm^1
+
\lr{ \Ba \wedge \Bb \wedge \Be_2 } \Bm^2
+
\lr{ \Ba \wedge \Bb \wedge \Be_3 } \Bm^3
}.
\end{equation}

Using summation convention, we can now express the transformation of a bivector \( B \) as just
\begin{equation}\label{eqn:adjoint:940}
B \rightarrow \Abs{M} \lr{ B \wedge \Be_i } \Bm^i.
\end{equation}
If we are interested in the transformation of a pseudovector \( \Bv \), defined implicitly as the dual of a bivector \( B = I \Bv \), we note that
\begin{equation}\label{eqn:adjoint:960}
B \wedge \Be_i = \gpgradethree{ I \Bv \Be_i } = I \lr{ \Bv \cdot \Be_i },
\end{equation}
so we are left with a transformation rule for cross products, equivalent to the adjoint relation \ref{eqn:adjoint:780}
\begin{equation}\label{eqn:adjoint:980}
\lr{ \Ba \cross \Bb } \rightarrow \Abs{M} \lr{ \lr{ \Ba \cross \Bb } \cdot \Be_i } \Bm^i.
\end{equation}
As intuited, the determinant weighted reciprocal frame vectors for the columns of the transformation \( M \) are the components of the adjoint. That is
\begin{equation}\label{eqn:adjoint:1000}
\lr{ \textrm{adj}\, M }^\T = \Abs{M}
\begin{bmatrix}
\Bm^1 & \Bm^2 & \Bm^3
\end{bmatrix}.
\end{equation}
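Here is a quick numeric sanity check of both the cross product transformation rule and this reciprocal frame form of the adjoint (a numpy sketch of my own; the matrix and vectors are arbitrary test values):

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.normal(size=(3, 3))
a, b = rng.normal(size=3), rng.normal(size=3)

det = np.linalg.det(M)
adjT = det * np.linalg.inv(M).T                  # (adj M)^T = |M| M^{-T}

# cross products transform with the transposed adjoint:
assert np.allclose(np.cross(M @ a, M @ b), adjT @ np.cross(a, b))

# reciprocal frame vectors for the columns m_1, m_2, m_3:
# m^1 = (m_2 x m_3)/|M|, and cyclic permutations.
m1, m2, m3 = M.T
recip = np.array([np.cross(m2, m3), np.cross(m3, m1), np.cross(m1, m2)]) / det
assert np.allclose(recip @ M, np.eye(3))         # m^i . m_j = delta_ij
assert np.allclose(adjT, det * recip.T)          # (adj M)^T = |M| [ m^1 m^2 m^3 ]
```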

Simplifying the previous adjoint matrix results.

January 17, 2024 math and physics play

[Click here for a PDF version of this (and the previous) post]

We previously found determinant expressions for the matrix elements of the adjoint for 2D and 3D matrices \( M \). However, we can extract additional structure from each of those results.

2D case.

Given a matrix expressed in block matrix form in terms of its columns
\begin{equation}\label{eqn:adjoint:500}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix},
\end{equation}
we found that the adjoint \( A \) satisfying \( M A = \Abs{M} I \) had the structure
\begin{equation}\label{eqn:adjoint:520}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} \\
& \\
\begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix}
\end{bmatrix}.
\end{equation}
We initially had wedge product expressions for each of these matrix elements, and can recover that structure by putting back those wedge products. Modulo sign, each of these matrix elements has the form
\begin{equation}\label{eqn:adjoint:540}
\begin{aligned}
\begin{vmatrix} \Be_i & \Bm_j \end{vmatrix}
&=
\lr{ \Be_i \wedge \Bm_j } i^{-1} \\
&=
\gpgradezero{
\lr{ \Be_i \wedge \Bm_j } i^{-1}
} \\
&=
\gpgradezero{
\lr{ \Be_i \Bm_j - \Be_i \cdot \Bm_j } i^{-1}
} \\
&=
\gpgradezero{
\Be_i \Bm_j i^{-1}
} \\
&=
\Be_i \cdot \lr{ \Bm_j i^{-1} },
\end{aligned}
\end{equation}
where \( i = \Be_{12} \). The adjoint matrix is
\begin{equation}\label{eqn:adjoint:560}
A =
\begin{bmatrix}
-\lr{ \Bm_2 i } \cdot \Be_1 & -\lr{ \Bm_2 i } \cdot \Be_2 \\
\lr{ \Bm_1 i } \cdot \Be_1 & \lr{ \Bm_1 i } \cdot \Be_2 \\
\end{bmatrix}.
\end{equation}
If we use a column vector representation of the vectors \( \Bm_j i^{-1} \), we can write the adjoint in a compact hybrid geometric-algebra matrix form
\begin{equation}\label{eqn:adjoint:640}
A =
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \\
\lr{ \Bm_1 i }^\T
\end{bmatrix}.
\end{equation}

Check:

Let’s see if this works by multiplying with \( M \)
\begin{equation}\label{eqn:adjoint:580}
\begin{aligned}
A M &=
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \\
\lr{ \Bm_1 i }^\T
\end{bmatrix}
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix} \\
&=
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \Bm_1 & -\lr{ \Bm_2 i }^\T \Bm_2 \\
\lr{ \Bm_1 i }^\T \Bm_1 & \lr{ \Bm_1 i }^\T \Bm_2
\end{bmatrix}.
\end{aligned}
\end{equation}
Those dot products have the form
\begin{equation}\label{eqn:adjoint:600}
\begin{aligned}
\lr{ \Bm_j i }^\T \Bm_i
&=
\lr{ \Bm_j i } \cdot \Bm_i \\
&=
\gpgradezero{ \lr{ \Bm_j i } \Bm_i } \\
&=
\gpgradezero{ -i \Bm_j \Bm_i } \\
&=
-i \lr{ \Bm_j \wedge \Bm_i },
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:adjoint:620}
\begin{aligned}
A M &=
\begin{bmatrix}
i \lr{ \Bm_2 \wedge \Bm_1 } & 0 \\
0 & -i \lr{ \Bm_1 \wedge \Bm_2 }
\end{bmatrix} \\
&=
\Abs{M} I.
\end{aligned}
\end{equation}
We find the determinant weighted identity that we expected. Our methods switch fluidly between matrix and geometric algebra representations, but provided we are careful enough, this isn’t problematic.
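Here is a numeric illustration of the 2D block form above (my own numpy sketch, using an arbitrary test matrix.) In coordinates, right multiplication by \( i = \Be_{12} \) is a 90 degree rotation, mapping \( (x, y) \) to \( (-y, x) \).

```python
import numpy as np

def times_i(v):
    """Right multiply the vector v by i = e_12: (x, y) -> (-y, x)."""
    return np.array([-v[1], v[0]])

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
m1, m2 = M.T

# A = [ -(m_2 i)^T ; (m_1 i)^T ]
A = np.vstack([-times_i(m2), times_i(m1)])
assert np.allclose(A @ M, np.linalg.det(M) * np.eye(2))
```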

3D case.

Now, let’s look at the 3D case, where we assume a column vector representation of the matrix of interest
\begin{equation}\label{eqn:adjoint:660}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix},
\end{equation}
and try to simplify the expression we found for the adjoint
\begin{equation}\label{eqn:adjoint:680}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_2 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_3 & \Bm_1 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_1 & \Bm_2 \end{vmatrix}
\end{bmatrix}.
\end{equation}
As with the 2D case, let’s re-express these determinants in wedge product form. We’ll write \( I = \Be_{123} \), and find
\begin{equation}\label{eqn:adjoint:700}
\begin{aligned}
\begin{vmatrix} \Be_i & \Bm_j & \Bm_k \end{vmatrix}
&=
\lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} \\
&=
\gpgradezero{ \lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} } \\
&=
\gpgradezero{ \lr{
\Be_i \lr{ \Bm_j \wedge \Bm_k }
-
\Be_i \cdot \lr{ \Bm_j \wedge \Bm_k }
} I^{-1} } \\
&=
\gpgradezero{
\Be_i \lr{ \Bm_j \wedge \Bm_k }
I^{-1} } \\
&=
\gpgradezero{
\Be_i \lr{ \Bm_j \cross \Bm_k } I
I^{-1} } \\
&=
\Be_i \cdot \lr{ \Bm_j \cross \Bm_k }.
\end{aligned}
\end{equation}
We see that we can put the adjoint in block matrix form
\begin{equation}\label{eqn:adjoint:720}
A =
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \\
\end{bmatrix}.
\end{equation}

Check:

\begin{equation}\label{eqn:adjoint:740}
\begin{aligned}
A M
&=
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \\
\end{bmatrix}
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix} \\
&=
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_1 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_2 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_3 \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_1 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_2 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_3 \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_1 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_2 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_3
\end{bmatrix} \\
&=
\Abs{M} I.
\end{aligned}
\end{equation}

Essentially, we found that the rows of the adjoint matrix are each parallel to the reciprocal frame vectors of the columns of \( M \). This makes sense, as the reciprocal frame encodes a generalized inverse of sorts.
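As a final numeric confirmation of this 3D block structure (a numpy sketch, with an arbitrary invertible test matrix):

```python
import numpy as np

M = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 10.0]])
m1, m2, m3 = M.T
det = np.linalg.det(M)

# rows of the adjoint are cross products of pairs of columns of M
A = np.vstack([np.cross(m2, m3),
               np.cross(m3, m1),
               np.cross(m1, m2)])
assert np.allclose(A @ M, det * np.eye(3))
# each row of A is |M| times a reciprocal frame vector, i.e. a row of M^{-1}
assert np.allclose(A / det, np.linalg.inv(M))
```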

New version of classical mechanics notes

January 1, 2021 Uncategorized

I’ve posted a new version of my classical mechanics notes compilation.  This version is not yet live on amazon, but you shouldn’t buy a copy of this “book” anyways, as it is horribly rough (if you want a copy, grab the free PDF instead.)  [I am going to buy a copy so that I can continue to edit a paper copy of it, but nobody else should.]

This version includes additional background material on Space Time Algebra (STA), i.e. the geometric algebra name for the Dirac/Clifford-algebra in 3+1 dimensions.  In particular, I’ve added material on reciprocal frames, the gradient and vector derivatives, line and surface integrals and the fundamental theorem for both.  Some of the integration theory content might make sense to move to a different book, but I’ll keep it with the rest of these STA notes for now.

Fundamental theorem of geometric calculus for line integrals (relativistic.)

December 16, 2020 math and physics play

[This post is best viewed in PDF form, due to LaTeX elements that I could not format with WordPress MathJax.]

Background for this particular post can be found in

  1. Curvilinear coordinates and gradient in spacetime, and reciprocal frames, and
  2. Lorentz transformations in Space Time Algebra (STA)
  3. A couple more reciprocal frame examples.

Motivation.

I’ve been slowly working my way towards a statement of the fundamental theorem of integral calculus, where the functions being integrated are elements of the Dirac algebra (space time multivectors in the geometric algebra parlance.)

This is interesting because we want to be able to do line, surface, 3-volume and 4-volume space time integrals. We have many \(\mathbb{R}^3\) integral theorems
\begin{equation}\label{eqn:fundamentalTheoremOfGC:40a}
\int_A^B d\Bl \cdot \spacegrad f = f(B) - f(A),
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:60a}
\int_S dA\, \ncap \cross \spacegrad f = \int_{\partial S} d\Bx\, f,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:80a}
\int_S dA\, \ncap \cdot \lr{ \spacegrad \cross \Bf} = \int_{\partial S} d\Bx \cdot \Bf,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:100a}
\int_S dx dy \lr{ \PD{x}{Q} - \PD{y}{P} }
=
\int_{\partial S} P dx + Q dy,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:120a}
\int_V dV\, \spacegrad f = \int_{\partial V} dA\, \ncap f,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:140a}
\int_V dV\, \spacegrad \cross \Bf = \int_{\partial V} dA\, \ncap \cross \Bf,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:160a}
\int_V dV\, \spacegrad \cdot \Bf = \int_{\partial V} dA\, \ncap \cdot \Bf,
\end{equation}
and want to know how to generalize these to four dimensions and also make sure that we are handling the relativistic mixed signature correctly. If our starting point were the mess of equations above, we’d be in trouble, since it is not obvious how these generalize. All the theorems with unit normals have to be handled completely differently in four dimensions since we don’t have a unique normal to any given spacetime plane.
What comes to our rescue is the Fundamental Theorem of Geometric Calculus (FTGC), which has the form
\begin{equation}\label{eqn:fundamentalTheoremOfGC:40}
\int_V F d^n \Bx\, \lrpartial G = \int_{\partial V} F d^{n-1} \Bx\, G,
\end{equation}
where \(F,G\) are multivector functions (i.e. sums of products of vectors.) We’ve seen ([2], [1]) that all the identities above are special cases of the fundamental theorem.

Do we need any special care to state the FTGC correctly for our relativistic case? It turns out that the answer is no! Tangent and reciprocal frame vectors do all the heavy lifting, and we can use the fundamental theorem as is, even in our mixed signature space. The only real change that we need to make is to use spacetime gradient and vector derivative operators instead of their spatial equivalents. We will see how this works below. Note that instead of starting with \ref{eqn:fundamentalTheoremOfGC:40} directly, I will attempt to build up to that point in a progressive fashion that hopefully does not require the reader to make too many unjustified mental leaps.

Multivector line integrals.

We want to define multivector line integrals to start with. Recall that in \(\mathbb{R}^3\) we would say that for scalar functions \( f\), the integral
\begin{equation}\label{eqn:fundamentalTheoremOfGC:180b}
\int d\Bx\, f = \int f d\Bx,
\end{equation}
is a line integral. Also, for vector functions \( \Bf \) we call
\begin{equation}\label{eqn:fundamentalTheoremOfGC:200}
\int d\Bx \cdot \Bf = \inv{2} \int \lr{ d\Bx\, \Bf + \Bf d\Bx },
\end{equation}
a line integral. In order to generalize line integrals to multivector functions, we will allow our multivector functions to be placed on either or both sides of the differential.

Definition 1.1: Line integral.

Given a single variable parameterization \( x = x(u) \), we write \( d^1\Bx = \Bx_u du \), and call
\begin{equation}\label{eqn:fundamentalTheoremOfGC:220a}
\int F d^1\Bx\, G,
\end{equation}
a line integral, where \( F,G \) are arbitrary multivector functions.

We must be careful not to reorder any of the factors in the integrand, since the differential may not commute with either \( F \) or \( G \). Here is a simple example where the integrand has a product of a vector and differential.

Problem: Circular parameterization.

Given a circular parameterization \( x(\theta) = \gamma_1 e^{-i\theta} \), where \( i = \gamma_1 \gamma_2 \) is the unit bivector for the \(x,y\) plane, compute the line integral
\begin{equation}\label{eqn:fundamentalTheoremOfGC:100}
\int_0^{\pi/4} F(\theta)\, d^1 \Bx\, G(\theta),
\end{equation}
where \( F(\theta) = \Bx^\theta + \gamma_3 + \gamma_1 \gamma_0 \) is a multivector valued function, and \( G(\theta) = \gamma_0 \) is vector valued.

Answer

The tangent vector for the curve is
\begin{equation}\label{eqn:fundamentalTheoremOfGC:60}
\Bx_\theta
= -\gamma_1 \gamma_1 \gamma_2 e^{-i\theta}
= \gamma_2 e^{-i\theta},
\end{equation}
with reciprocal vector \( \Bx^\theta = e^{i \theta} \gamma^2 \). The differential element is \( d^1 \Bx = \gamma_2 e^{-i\theta} d\theta \), so the integral is
\begin{equation}\label{eqn:fundamentalTheoremOfGC:80}
\begin{aligned}
\int_0^{\pi/4} \lr{ \Bx^\theta + \gamma_3 + \gamma_1 \gamma_0 } d^1 \Bx\, \gamma_0
&=
\int_0^{\pi/4} \lr{ e^{i\theta} \gamma^2 + \gamma_3 + \gamma_1 \gamma_0 } \gamma_2 e^{-i\theta} d\theta\, \gamma_0 \\
&=
\frac{\pi}{4} \gamma_0 + \lr{ \gamma_{32} + \gamma_{102} } \inv{-i} \lr{ e^{-i\pi/4} - 1 } \gamma_0 \\
&=
\frac{\pi}{4} \gamma_0 + \inv{\sqrt{2}} \lr{ \gamma_{32} + \gamma_{102} } \gamma_{120} \lr{ 1 - \gamma_{12} } - \lr{ \gamma_{32} + \gamma_{102} } \gamma_{120} \\
&=
\frac{\pi}{4} \gamma_0 + \inv{\sqrt{2}} \lr{ \gamma_{310} + 1 } \lr{ 1 - \gamma_{12} } - \lr{ \gamma_{310} + 1 }.
\end{aligned}
\end{equation}
Observe how care is required not to reorder any terms. This particular end result is a multivector with scalar, vector, bivector, and trivector grades, but no pseudoscalar component. The grades in the end result depend on both the function in the integrand and on the path. For example, had we integrated all the way around the circle, the end result would have been the vector \( 2 \pi \gamma_0 \) (i.e. a \( \gamma_0 \) weighted unit circle circumference), as all the other grades would have been killed by the complex exponential integrated over a full period.
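Since it is easy to drop terms in computations like this, here is a numeric cross-check of the result, using a Dirac matrix representation of the STA basis vectors (a numpy sketch of my own; any faithful matrix representation would do):

```python
import numpy as np

# Dirac representation: g0^2 = +1, gk^2 = -1, all mutually anticommuting.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[np.eye(2), Z2], [Z2, -np.eye(2)]]).astype(complex)
g1, g2, g3 = (np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3))
I4 = np.eye(4, dtype=complex)
i12 = g1 @ g2                        # the unit bivector i = gamma_1 gamma_2

def rot(theta):                      # e^{-i theta} = cos(theta) - sin(theta) i
    return np.cos(theta) * I4 - np.sin(theta) * i12

def integrand(theta):
    F = rot(-theta) @ (-g2) + g3 + g1 @ g0  # x^theta = e^{i theta} gamma^2, gamma^2 = -gamma_2
    dx_dtheta = g2 @ rot(theta)             # x_theta = gamma_2 e^{-i theta}
    return F @ dx_dtheta @ g0

# trapezoidal integration of the matrix valued integrand over [0, pi/4]
thetas = np.linspace(0, np.pi / 4, 2001)
vals = np.array([integrand(t) for t in thetas])
w = np.full(len(thetas), thetas[1] - thetas[0])
w[[0, -1]] /= 2
result = np.tensordot(w, vals, axes=1)

g310 = g3 @ g1 @ g0
closed = (np.pi / 4) * g0 \
    + (I4 + g310) @ (I4 - i12) / np.sqrt(2) \
    - (I4 + g310)
assert np.allclose(result, closed, atol=1e-6)
```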

Problem: Line integral for boosted time direction vector.

Let \( x = e^{\vcap \alpha/2} \gamma_0 e^{-\vcap \alpha/2} \) represent the spacetime curve of all the boosts of \( \gamma_0 \) along a specific velocity direction vector, where \( \vcap = (v \wedge \gamma_0)/\Norm{v \wedge \gamma_0} \) is a unit spatial bivector for any constant vector \( v \). Compute the line integral
\begin{equation}\label{eqn:fundamentalTheoremOfGC:240}
\int x\, d^1 \Bx.
\end{equation}

Answer

Observe that \( \vcap \) and \( \gamma_0 \) anticommute, so we may write our boost as a one-sided exponential
\begin{equation}\label{eqn:fundamentalTheoremOfGC:260}
x(\alpha) = \gamma_0 e^{-\vcap \alpha} = e^{\vcap \alpha} \gamma_0 = \lr{ \cosh\alpha + \vcap \sinh\alpha } \gamma_0.
\end{equation}
The tangent vector is just
\begin{equation}\label{eqn:fundamentalTheoremOfGC:280}
\Bx_\alpha = \PD{\alpha}{x} = e^{\vcap\alpha} \vcap \gamma_0.
\end{equation}
Let’s get a bit of intuition about the nature of this vector. Its square is
\begin{equation}\label{eqn:fundamentalTheoremOfGC:300}
\begin{aligned}
\Bx_\alpha^2
&=
e^{\vcap\alpha} \vcap \gamma_0
e^{\vcap\alpha} \vcap \gamma_0 \\
&=
-e^{\vcap\alpha} \vcap e^{-\vcap\alpha} \vcap (\gamma_0)^2 \\
&=
-1,
\end{aligned}
\end{equation}
so we see that the tangent vector is a spacelike unit vector. As the vector representing points on the curve is necessarily timelike (due to Lorentz invariance), these two must be orthogonal at all points. Let’s confirm this algebraically
\begin{equation}\label{eqn:fundamentalTheoremOfGC:320}
\begin{aligned}
x \cdot \Bx_\alpha
&=
\gpgradezero{ e^{\vcap \alpha} \gamma_0 e^{\vcap \alpha} \vcap \gamma_0 } \\
&=
\gpgradezero{ e^{-\vcap \alpha} e^{\vcap \alpha} \vcap (\gamma_0)^2 } \\
&=
\gpgradezero{ \vcap } \\
&= 0.
\end{aligned}
\end{equation}
Here we used \( e^{\vcap \alpha} \gamma_0 = \gamma_0 e^{-\vcap \alpha} \), and \( \gpgradezero{A B} = \gpgradezero{B A} \). Geometrically, we have the curious fact that the direction vectors to points on the curve are perpendicular (with respect to our relativistic dot product) to the tangent vectors on the curve, as illustrated in fig. 1.

fig. 1. Tangent perpendicularity in mixed metric.
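For the numerically inclined, these square and orthogonality properties are easy to verify with a matrix representation (a numpy sketch of my own, picking \( \vcap = \gamma_1 \gamma_0 \) as a specific boost plane):

```python
import numpy as np

# minimal Dirac representation pieces: g0^2 = +1, g1^2 = -1, g0 g1 = -g1 g0
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[np.eye(2), Z2], [Z2, -np.eye(2)]]).astype(complex)
g1 = np.block([[Z2, s1], [-s1, Z2]])
I4 = np.eye(4, dtype=complex)

vhat = g1 @ g0                       # unit spatial bivector, vhat^2 = +1
assert np.allclose(vhat @ vhat, I4)

def scalar(A):                       # grade-0 part: <A> = trace(A)/4
    return np.trace(A).real / 4

for alpha in (0.0, 0.5, 1.3):
    E = np.cosh(alpha) * I4 + np.sinh(alpha) * vhat  # e^{vhat alpha}
    x = E @ g0                                       # boosted gamma_0
    x_alpha = E @ vhat @ g0                          # tangent vector
    assert np.isclose(scalar(x @ x), 1.0)            # x is a unit timelike vector
    assert np.isclose(scalar(x_alpha @ x_alpha), -1.0)  # spacelike unit tangent
    assert np.isclose(scalar(x @ x_alpha), 0.0)      # x . x_alpha = 0
```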

Perfect differentials.

Having seen a couple of examples of multivector line integrals, let’s now move on to figure out the structure of a line integral that has a “perfect” differential integrand. We can take a hint from the \(\mathbb{R}^3\) vector result that we already know, namely
\begin{equation}\label{eqn:fundamentalTheoremOfGC:120}
\int_A^B d\Bl \cdot \spacegrad f = f(B) - f(A).
\end{equation}
It seems reasonable to guess that the relativistic generalization of this is
\begin{equation}\label{eqn:fundamentalTheoremOfGC:140}
\int_A^B dx \cdot \grad f = f(B) - f(A).
\end{equation}
Let’s check that, by expanding in coordinates
\begin{equation}\label{eqn:fundamentalTheoremOfGC:160}
\begin{aligned}
\int_A^B dx \cdot \grad f
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \partial_\mu f \\
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \PD{x^\mu}{f} \\
&=
\int_A^B d\tau \frac{df}{d\tau} \\
&=
f(B) - f(A).
\end{aligned}
\end{equation}
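Here is a numeric illustration of this perfect differential (a numpy sketch of my own, with an arbitrary test path and scalar function.) Note that in coordinates \( dx \cdot \grad f = dx^\mu \partial_\mu f \), with no metric signs surviving, since \( \gamma_\mu \cdot \gamma^\nu = {\delta_\mu}^\nu \).

```python
import numpy as np

def f(x):                             # arbitrary scalar field on spacetime
    return x[0] ** 2 * x[1] + np.sin(x[2]) + x[3]

def path(tau):                        # arbitrary curve x(tau)
    return np.array([tau, np.cos(tau), tau ** 2, 2 * tau])

def grad_f(x, h=1e-6):                # numeric partials d_mu f
    g = np.zeros(4)
    for mu in range(4):
        dx = np.zeros(4)
        dx[mu] = h
        g[mu] = (f(x + dx) - f(x - dx)) / (2 * h)
    return g

taus = np.linspace(0, 1, 10001)
dtau = taus[1] - taus[0]
total = 0.0
for t in taus[:-1]:
    dx_dtau = (path(t + dtau) - path(t)) / dtau
    total += dx_dtau @ grad_f(path(t + dtau / 2)) * dtau   # midpoint rule

assert np.isclose(total, f(path(1.0)) - f(path(0.0)), atol=1e-6)
```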
If we drop the dot product, will we have such a nice result? Let’s see:
\begin{equation}\label{eqn:fundamentalTheoremOfGC:180}
\begin{aligned}
\int_A^B dx \grad f
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \gamma_\mu \gamma^\nu \partial_\nu f \\
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \PD{x^\mu}{f}
+
\int_A^B
d\tau
\sum_{\mu \ne \nu} \gamma_\mu \gamma^\nu
\frac{dx^\mu}{d\tau} \PD{x^\nu}{f}.
\end{aligned}
\end{equation}
The scalar component of this integrand is a perfect differential, but the bivector part of the integrand is a complete mess that we have no hope of integrating in general. It happens that if we consider one of the simplest parameterization examples, we can get a strong hint of how to generalize the differential operator to one that ends up providing a perfect differential. In particular, let’s integrate over a linear constant path, such as \( x(\tau) = \tau \gamma_0 \). For this path, we have
\begin{equation}\label{eqn:fundamentalTheoremOfGC:200a}
\begin{aligned}
\int_A^B dx \grad f
&=
\int_A^B \gamma_0 d\tau \lr{
\gamma^0 \partial_0 +
\gamma^1 \partial_1 +
\gamma^2 \partial_2 +
\gamma^3 \partial_3 } f \\
&=
\int_A^B d\tau \lr{
\PD{\tau}{f} +
\gamma_0 \gamma^1 \PD{x^1}{f} +
\gamma_0 \gamma^2 \PD{x^2}{f} +
\gamma_0 \gamma^3 \PD{x^3}{f}
}.
\end{aligned}
\end{equation}
Just because the path does not have any \( x^1, x^2, x^3 \) component dependencies does not mean that these last three partials are necessarily zero. For example \( f = f(x(\tau)) = \lr{ x^0 }^2 \gamma_0 + x^1 \gamma_1 \) will have a non-zero contribution from the \( \partial_1 \) operator. In that particular case, we can easily integrate \( f \), but we have to know the specifics of the function to do the integral. However, if we had a differential operator that did not include any component off the integration path, we would have a perfect differential. That is, if we were to replace the gradient with the projection of the gradient onto the tangent space, we would have a perfect differential. We see that the dot product in \ref{eqn:fundamentalTheoremOfGC:140} serves the same function, as it rejects any component of the gradient that does not lie in the tangent space.

Definition 1.2: Vector derivative.

Given a spacetime manifold parameterized by \( x = x(u^0, \cdots u^{N-1}) \), with tangent vectors \( \Bx_\mu = \PDi{u^\mu}{x} \), and reciprocal vectors \( \Bx^\mu \in \textrm{Span}\setlr{\Bx_\nu} \), such that \( \Bx^\mu \cdot \Bx_\nu = {\delta^\mu}_\nu \), the vector derivative is defined as
\begin{equation}\label{eqn:fundamentalTheoremOfGC:240a}
\partial = \sum_{\mu = 0}^{N-1} \Bx^\mu \PD{u^\mu}{}.
\end{equation}
Observe that if this is a full parameterization of the space (\(N = 4\)), then the vector derivative is identical to the gradient. The vector derivative is the projection of the gradient onto the tangent space at the point of evaluation. Furthermore, we designate \( \lrpartial \) as the vector derivative allowed to act bidirectionally, as follows
\begin{equation}\label{eqn:fundamentalTheoremOfGC:260a}
R \lrpartial S
=
R \Bx^\mu \PD{u^\mu}{S}
+
\PD{u^\mu}{R} \Bx^\mu S,
\end{equation}
where \( R, S \) are multivectors, and summation convention is implied. In this bidirectional action, the vector factors of the vector derivative must stay in place (as they do not necessarily commute with \( R,S\)), but the derivative operators apply in a chain rule like fashion to both functions.

Noting that \( \Bx_u \cdot \grad = \Bx_u \cdot \partial \), we may rewrite the scalar line integral identity \ref{eqn:fundamentalTheoremOfGC:140} as
\begin{equation}\label{eqn:fundamentalTheoremOfGC:220}
\int_A^B dx \cdot \partial f = f(B) - f(A).
\end{equation}
However, as our example hinted at, the fundamental theorem for line integrals has a multivector generalization that does not rely on a dot product to do the tangent space filtering, and is more powerful. That generalization has the following form.

Theorem 1.1: Fundamental theorem for line integrals.

Given multivector functions \( F, G \), and a single parameter curve \( x(u) \) with line element \( d^1 \Bx = \Bx_u du \), then
\begin{equation}\label{eqn:fundamentalTheoremOfGC:280a}
\int_A^B F d^1\Bx \lrpartial G = F(B) G(B) - F(A) G(A).
\end{equation}

Start proof:

Writing out the integrand explicitly, we find
\begin{equation}\label{eqn:fundamentalTheoremOfGC:340}
\int_A^B F d^1\Bx \lrpartial G
=
\int_A^B \lr{
\PD{u}{F} du\, \Bx_u \Bx^u G
+
F du\, \Bx_u \Bx^u \PD{u}{G}
}.
However, for a single parameter curve, we have \( \Bx^u = 1/\Bx_u \), so we are left with
\begin{equation}\label{eqn:fundamentalTheoremOfGC:360}
\begin{aligned}
\int_A^B F d^1\Bx \lrpartial G
&=
\int_A^B du\, \PD{u}{(F G)} \\
&=
\evalbar{F G}{B}
-
\evalbar{F G}{A}.
\end{aligned}
\end{equation}

End proof.

More to come.

In the next installment we will explore surface integrals in spacetime, and the generalization of the fundamental theorem to multivector space time integrals.

References

[1] Peeter Joot. Geometric Algebra for Electrical Engineers. Kindle Direct Publishing, 2019.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

A couple more reciprocal frame examples.

December 14, 2020 math and physics play

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

This post logically follows both of the following:

  1. Curvilinear coordinates and gradient in spacetime, and reciprocal frames, and
  2. Lorentz transformations in Space Time Algebra (STA)

The PDF linked above contains all the content from this post plus (1.) above [to be edited later into a more logical sequence.]

More examples.

Here are a few additional examples of reciprocal frame calculations.

Problem: Unidirectional arbitrary functional dependence.

Let
\begin{equation}\label{eqn:reciprocal:2540}
x = a f(u),
\end{equation}
where \( a \) is a constant vector and \( f(u)\) is some arbitrary differentiable function with a non-zero derivative in the region of interest. Find the tangent space vector and its reciprocal.

Answer

Here we have just a single tangent space direction (a line in spacetime) with tangent vector
\begin{equation}\label{eqn:reciprocal:2400}
\Bx_u = a \PD{u}{f} = a f_u,
\end{equation}
so we see that the tangent space vectors are just rescaled values of the direction vector \( a \).
This is a simple enough parameterization that we can compute the reciprocal frame vector explicitly using the gradient. We expect that \( \Bx^u = 1/\Bx_u \), and find
\begin{equation}\label{eqn:reciprocal:2420}
\inv{a} \cdot x = f(u),
\end{equation}
but for constant \( a \), we know that \( \grad \lr{ a \cdot x } = a \), so taking gradients of both sides we find
\begin{equation}\label{eqn:reciprocal:2440}
\inv{a} = \grad f = \PD{u}{f} \grad u,
\end{equation}
so the reciprocal vector is
\begin{equation}\label{eqn:reciprocal:2460}
\Bx^u = \grad u = \inv{a f_u},
\end{equation}
as expected.

Problem: Linear two variable parameterization.

Let \( x = a u + b v \), where \( x \wedge a \wedge b = 0 \) represents a spacetime plane (which is also the tangent space.) Find the curvilinear coordinates and their reciprocals.

Answer

The frame vectors are easy to compute, as they are just
\begin{equation}\label{eqn:reciprocal:1960}
\begin{aligned}
\Bx_u &= \PD{u}{x} = a \\
\Bx_v &= \PD{v}{x} = b.
\end{aligned}
\end{equation}
This is an example of a parametric equation that we can easily invert, as we have
\begin{equation}\label{eqn:reciprocal:1980}
\begin{aligned}
x \wedge a &= -v \lr{ a \wedge b } \\
x \wedge b &= u \lr{ a \wedge b },
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:reciprocal:2000}
\begin{aligned}
u
&= \inv{ a \wedge b } \cdot \lr{ x \wedge b } \\
&= \inv{ \lr{a \wedge b}^2 } \lr{ a \wedge b } \cdot \lr{ x \wedge b } \\
&=
\frac{
\lr{b \cdot x} \lr{ a \cdot b }

\lr{a \cdot x} \lr{ b \cdot b }
}{ \lr{a \wedge b}^2 }
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:reciprocal:2020}
\begin{aligned}
v &= -\inv{ a \wedge b } \cdot \lr{ x \wedge a } \\
&= -\inv{ \lr{a \wedge b}^2 } \lr{ a \wedge b } \cdot \lr{ x \wedge a } \\
&=
-\frac{
\lr{b \cdot x} \lr{ a \cdot a }

\lr{a \cdot x} \lr{ a \cdot b }
}{ \lr{a \wedge b}^2 }
\end{aligned}
\end{equation}
Recall that \( \grad \lr{ a \cdot x} = a \), if \( a \) is a constant, so our gradients are just
\begin{equation}\label{eqn:reciprocal:2040}
\begin{aligned}
\grad u
&=
\frac{
b \lr{ a \cdot b }

a
\lr{ b \cdot b }
}{ \lr{a \wedge b}^2 } \\
&=
b \cdot \inv{ a \wedge b },
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:reciprocal:2060}
\begin{aligned}
\grad v
&=
-\frac{
b \lr{ a \cdot a }

a \lr{ a \cdot b }
}{ \lr{a \wedge b}^2 } \\
&=
-a \cdot \inv{ a \wedge b }.
\end{aligned}
\end{equation}
Expressed in terms of the frame vectors, this is just
\begin{equation}\label{eqn:reciprocal:2080}
\begin{aligned}
\Bx^u &= \Bx_v \cdot \inv{ \Bx_u \wedge \Bx_v } \\
\Bx^v &= -\Bx_u \cdot \inv{ \Bx_u \wedge \Bx_v },
\end{aligned}
\end{equation}
so we were able to show, for this special two parameter linear case, that the explicit evaluation of the gradients has the exact structure that we intuited the reciprocals must have, provided they are constrained to the spacetime plane \( a \wedge b \). It is interesting to observe how this structure falls out of the linear system solution so directly. Also note that these reciprocals are not defined at the origin of the \( (u,v) \) parameter space.
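Here is a numeric check of these reciprocals (a numpy sketch of my own, using \( \lr{ a \wedge b }^2 = \lr{ a \cdot b }^2 - a^2 b^2 \) to expand the bivector inverse, with arbitrary test vectors in a \( +,-,-,- \) metric):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def dot(p, q):                        # spacetime dot product
    return p @ eta @ q

a = np.array([1.0, 0.2, -0.3, 0.1])
b = np.array([0.4, 1.0, 0.5, -0.2])

denom = dot(a, b) ** 2 - dot(a, a) * dot(b, b)   # (a ^ b)^2
xu = (b * dot(a, b) - a * dot(b, b)) / denom     # grad u = b . 1/(a ^ b)
xv = -(b * dot(a, a) - a * dot(a, b)) / denom    # grad v = -a . 1/(a ^ b)

# x^u . x_u = 1, x^u . x_v = 0, and similarly for x^v, with x_u = a, x_v = b
assert np.isclose(dot(xu, a), 1.0) and np.isclose(dot(xu, b), 0.0)
assert np.isclose(dot(xv, b), 1.0) and np.isclose(dot(xv, a), 0.0)
```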

Problem: Quadratic two variable parameterization.

Now consider a variation of the previous problem, with \( x = a u^2 + b v^2 \). Find the curvilinear coordinates and their reciprocals.

Answer

\begin{equation}\label{eqn:reciprocal:2100}
\begin{aligned}
\Bx_u &= \PD{u}{x} = 2 u a \\
\Bx_v &= \PD{v}{x} = 2 v b.
\end{aligned}
\end{equation}
Our tangent space is still the \( a \wedge b \) plane (as is the surface itself), but the spacing of the cells starts getting wider in proportion to \( u, v \).
Utilizing the work from the previous problem, we have
\begin{equation}\label{eqn:reciprocal:2120}
\begin{aligned}
2 u \grad u &=
b \cdot \inv{ a \wedge b } \\
2 v \grad v &=
-a \cdot \inv{ a \wedge b }.
\end{aligned}
\end{equation}
A bit of rearrangement can show that this is equivalent to the reciprocal frame identities. This is a second demonstration that the gradient and the algebraic formulations for the reciprocals match, at least for these special cases of linear non-coupled parameterizations.

Problem: Reciprocal frame for generalized cylindrical parameterization.

Let the vector parameterization be \( x(\rho,\theta) = \rho e^{-i\theta/2} x(\rho_0, \theta_0) e^{i \theta/2} \), where \( i \) is a unit bivector with \( i^2 = \pm 1 \) (\(+1\) for a boost, and \(-1\) for a rotation), and where \(\theta, \rho\) are scalars. Find the tangent space vectors and their reciprocals.

fig. 1. “Cylindrical” boost parameterization.

Note that this is a cylindrical parameterization for the rotation case, and traces out hyperbolic regions for the boost case. The boost case is illustrated in fig. 1 where hyperbolas in the light cone are found for boosts of \( \gamma_0\) with various values of \(\rho\), and the spacelike hyperbolas are boosts of \( \gamma_1 \), again for various values of \( \rho \).

Answer

The tangent space vectors are
\begin{equation}\label{eqn:reciprocal:2480}
\Bx_\rho = \frac{x}{\rho},
\end{equation}
and

\begin{equation}\label{eqn:reciprocal:2500}
\begin{aligned}
\Bx_\theta
&= -\frac{i}{2} x + x \frac{i}{2} \\
&= x \cdot i.
\end{aligned}
\end{equation}
Recall that \( x \cdot i \) lies perpendicular to \( x \) (in the plane \( i \)), as illustrated in fig. 2. This means that \( \Bx_\rho \) and \( \Bx_\theta \) are orthogonal, so we can find the reciprocal vectors by just inverting them
\begin{equation}\label{eqn:reciprocal:2520}
\begin{aligned}
\Bx^\rho &= \frac{\rho}{x} \\
\Bx^\theta &= \frac{1}{x \cdot i}.
\end{aligned}
\end{equation}

fig. 2. Projection and rejection geometry.

Parameterization of a general linear transformation.

Given \( N \) parameters \( u^0, u^1, \cdots u^{N-1} \), a general linear transformation from the parameter space to the vector space has the form
\begin{equation}\label{eqn:reciprocal:2160}
x =
{a^\alpha}_\beta \gamma_\alpha u^\beta,
\end{equation}
where \( \beta \in [0, \cdots, N-1] \) and \( \alpha \in [0,3] \).
For such a general transformation, observe that the curvilinear basis vectors are
\begin{equation}\label{eqn:reciprocal:2180}
\begin{aligned}
\Bx_\mu
&= \PD{u^\mu}{x} \\
&= \PD{u^\mu}{}
{a^\alpha}_\beta \gamma_\alpha u^\beta \\
&=
{a^\alpha}_\mu \gamma_\alpha.
\end{aligned}
\end{equation}
We find an interpretation of \( {a^\alpha}_\mu \) by dotting \( \Bx_\mu \) with the reciprocal frame vectors of the standard basis
\begin{equation}\label{eqn:reciprocal:2200}
\begin{aligned}
\Bx_\mu \cdot \gamma^\nu
&=
{a^\alpha}_\mu \lr{ \gamma_\alpha \cdot \gamma^\nu } \\
&=
{a^\nu}_\mu,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:reciprocal:2220}
x = \Bx_\mu u^\mu.
\end{equation}
We are able to reinterpret \ref{eqn:reciprocal:2160} as a contraction of the tangent space vectors with the parameters, scaling and summing these direction vectors to characterize all the points in the tangent plane.

Theorem 1.1: Projecting onto the tangent space.

Let \( T \) represent the tangent space. The projection of a vector onto the tangent space has the form
\begin{equation}\label{eqn:reciprocal:2560}
\textrm{Proj}_{\textrm{T}} y = \lr{ y \cdot \Bx^\mu } \Bx_\mu = \lr{ y \cdot \Bx_\mu } \Bx^\mu.
\end{equation}

Start proof:

Let’s designate \( a \) as the portion of the vector \( y \) that lies outside of the tangent space
\begin{equation}\label{eqn:reciprocal:2260}
y = y^\mu \Bx_\mu + a.
\end{equation}
If we knew the coordinates \( y^\mu \), we would have a recipe for the projection.
Algebraically, requiring that \( a \) lies outside of the tangent space is equivalent to stating \( a \cdot \Bx_\mu = a \cdot \Bx^\mu = 0 \). We use that fact, and then take dot products
\begin{equation}\label{eqn:reciprocal:2280}
\begin{aligned}
y \cdot \Bx^\nu
&= \lr{ y^\mu \Bx_\mu + a } \cdot \Bx^\nu \\
&= y^\nu,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:reciprocal:2300}
y = \lr{ y \cdot \Bx^\mu } \Bx_\mu + a.
\end{equation}
Similarly, the tangent space projection can be expressed as a linear combination of reciprocal basis elements
\begin{equation}\label{eqn:reciprocal:2320}
y = y_\mu \Bx^\mu + a.
\end{equation}
Dotting with \( \Bx_\mu \), we have
\begin{equation}\label{eqn:reciprocal:2340}
\begin{aligned}
y \cdot \Bx_\mu
&= \lr{ y_\alpha \Bx^\alpha + a } \cdot \Bx_\mu \\
&= y_\mu,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:reciprocal:2360}
y = \lr{ y \cdot \Bx_\mu } \Bx^\mu + a.
\end{equation}
We find the two stated ways of computing the projection.

Observe that, for the special case that all of \( \setlr{ \Bx_\mu } \) are orthogonal, the equivalence of these two projection methods follows directly, since
\begin{equation}\label{eqn:reciprocal:2380}
\begin{aligned}
\lr{ y \cdot \Bx^\mu } \Bx_\mu
&=
\lr{ y \cdot \inv{\Bx_\mu} } \inv{\Bx^\mu} \\
&=
\lr{ y \cdot \frac{\Bx_\mu}{\lr{\Bx_\mu}^2 } } \frac{\Bx^\mu}{\lr{\Bx^\mu}^2} \\
&=
\lr{ y \cdot \Bx_\mu } \Bx^\mu.
\end{aligned}
\end{equation}

End proof.
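Here is a numeric sketch of this projection (my own numpy illustration.) The reciprocals are computed from the inverse of the Gram matrix \( g_{\mu\nu} = \Bx_\mu \cdot \Bx_\nu \), which, for a non-degenerate tangent space, is equivalent to the wedge product construction used earlier; the tangent vectors are arbitrary test values.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def dot(p, q):                        # spacetime dot product
    return p @ eta @ q

# two tangent vectors spanning a 2D tangent space
X = np.array([[1.0, 0.1, 0.2, 0.0],
              [0.3, 1.0, 0.0, 0.5]])
G = np.array([[dot(p, q) for q in X] for p in X])   # Gram matrix g_{mu nu}
Xr = np.linalg.inv(G) @ X                           # reciprocals x^mu = g^{mu nu} x_nu
assert np.allclose([[dot(r, x) for x in X] for r in Xr], np.eye(2))

y = np.array([0.7, -0.4, 1.1, 0.6])
proj1 = sum(dot(y, r) * x for r, x in zip(Xr, X))   # (y . x^mu) x_mu
proj2 = sum(dot(y, x) * r for r, x in zip(Xr, X))   # (y . x_mu) x^mu
assert np.allclose(proj1, proj2)                    # the two forms agree
# projecting a second time changes nothing
assert np.allclose(proj1, sum(dot(proj1, r) * x for r, x in zip(Xr, X)))
```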