Simplifying the previous adjoint matrix results.

January 17, 2024 math and physics play

[Click here for a PDF version of this (and the previous) post]

We previously found determinant expressions for the matrix elements of the adjoint for 2D and 3D matrices \( M \). However, we can extract additional structure from each of those results.

2D case.

Given a matrix expressed in block matrix form in terms of its columns
\begin{equation}\label{eqn:adjoint:500}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix},
\end{equation}
we found that the adjoint \( A \) satisfying \( M A = \Abs{M} I \) had the structure
\begin{equation}\label{eqn:adjoint:520}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} \\
& \\
\begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix}
\end{bmatrix}.
\end{equation}
We initially had wedge product expressions for each of these matrix elements, and can recover that structure by putting back those wedge products. Modulo sign, each of these matrix elements has the form
\begin{equation}\label{eqn:adjoint:540}
\begin{aligned}
\begin{vmatrix} \Be_i & \Bm_j \end{vmatrix}
&=
\lr{ \Be_i \wedge \Bm_j } i^{-1} \\
&=
\gpgradezero{
\lr{ \Be_i \wedge \Bm_j } i^{-1}
} \\
&=
\gpgradezero{
\lr{ \Be_i \Bm_j - \Be_i \cdot \Bm_j } i^{-1}
} \\
&=
\gpgradezero{
\Be_i \Bm_j i^{-1}
} \\
&=
\Be_i \cdot \lr{ \Bm_j i^{-1} },
\end{aligned}
\end{equation}
where \( i = \Be_{12} \), so that \( i^{-1} = -i \), since \( i^2 = -1 \). The adjoint matrix is
\begin{equation}\label{eqn:adjoint:560}
A =
\begin{bmatrix}
-\lr{ \Bm_2 i } \cdot \Be_1 & -\lr{ \Bm_2 i } \cdot \Be_2 \\
\lr{ \Bm_1 i } \cdot \Be_1 & \lr{ \Bm_1 i } \cdot \Be_2 \\
\end{bmatrix}.
\end{equation}
If we use a column vector representation of the vectors \( \Bm_j i \), we can write the adjoint in a compact hybrid geometric-algebra matrix form
\begin{equation}\label{eqn:adjoint:640}
A =
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \\
\lr{ \Bm_1 i }^\T
\end{bmatrix}.
\end{equation}

Check:

Let’s see if this works, by multiplying with \( M \)
\begin{equation}\label{eqn:adjoint:580}
\begin{aligned}
A M &=
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \\
\lr{ \Bm_1 i }^\T
\end{bmatrix}
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix} \\
&=
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \Bm_1 & -\lr{ \Bm_2 i }^\T \Bm_2 \\
\lr{ \Bm_1 i }^\T \Bm_1 & \lr{ \Bm_1 i }^\T \Bm_2
\end{bmatrix}.
\end{aligned}
\end{equation}
Those dot products have the form
\begin{equation}\label{eqn:adjoint:600}
\begin{aligned}
\lr{ \Bm_j i }^\T \Bm_i
&=
\lr{ \Bm_j i } \cdot \Bm_i \\
&=
\gpgradezero{ \lr{ \Bm_j i } \Bm_i } \\
&=
\gpgradezero{ -i \Bm_j \Bm_i } \\
&=
-i \lr{ \Bm_j \wedge \Bm_i },
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:adjoint:620}
\begin{aligned}
A M &=
\begin{bmatrix}
i \lr{ \Bm_2 \wedge \Bm_1 } & 0 \\
0 & -i \lr { \Bm_1 \wedge \Bm_2 }
\end{bmatrix} \\
&=
\Abs{M} I.
\end{aligned}
\end{equation}
Since \( \Bm_1 \wedge \Bm_2 = \Abs{M} i \) and \( i^2 = -1 \), both diagonal entries reduce to \( \Abs{M} \), and we find the determinant weighted identity that we expected. Our methods are a bit schizophrenic, switching fluidly between matrix and geometric algebra representations, but provided we are careful enough, this isn’t problematic.
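
Here’s a minimal numpy sketch of this check (the helper times_i is just an illustration of right multiplication by \( i = \Be_{12} \), which maps \( (x, y) \) to \( (-y, x) \)):

import numpy as np

# Right multiplication by the pseudoscalar i = e_12 maps (x, y) -> (-y, x),
# since e_1 e_12 = e_2, and e_2 e_12 = -e_1.
def times_i(v):
    return np.array([-v[1], v[0]])

rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2))
m1, m2 = M[:, 0], M[:, 1]

# Rows of the adjoint: -(m_2 i)^T, then (m_1 i)^T.
A = np.vstack([-times_i(m2), times_i(m1)])

assert np.allclose(A @ M, np.linalg.det(M) * np.eye(2))
assert np.allclose(M @ A, np.linalg.det(M) * np.eye(2))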

3D case.

Now, let’s look at the 3D case, where we assume a column vector representation of the matrix of interest
\begin{equation}\label{eqn:adjoint:660}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix},
\end{equation}
and try to simplify the expression we found for the adjoint
\begin{equation}\label{eqn:adjoint:680}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_2 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_3 & \Bm_1 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_1 & \Bm_2 \end{vmatrix}
\end{bmatrix}.
\end{equation}
As with the 2D case, let’s re-express these determinants in wedge product form. We’ll write \( I = \Be_{123} \), and find
\begin{equation}\label{eqn:adjoint:700}
\begin{aligned}
\begin{vmatrix} \Be_i & \Bm_j & \Bm_k \end{vmatrix}
&=
\lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} \\
&=
\gpgradezero{ \lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} } \\
&=
\gpgradezero{ \lr{
\Be_i \lr{ \Bm_j \wedge \Bm_k }
- \Be_i \cdot \lr{ \Bm_j \wedge \Bm_k }
} I^{-1} } \\
&=
\gpgradezero{
\Be_i \lr{ \Bm_j \wedge \Bm_k }
I^{-1} } \\
&=
\gpgradezero{
\Be_i \lr{ \Bm_j \cross \Bm_k } I
I^{-1} } \\
&=
\Be_i \cdot \lr{ \Bm_j \cross \Bm_k }.
\end{aligned}
\end{equation}
We see that we can put the adjoint in block matrix form
\begin{equation}\label{eqn:adjoint:720}
A =
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \\
\end{bmatrix}.
\end{equation}

Check:

\begin{equation}\label{eqn:adjoint:740}
\begin{aligned}
A M
&=
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \\
\end{bmatrix}
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix} \\
&=
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_1 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_2 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_3 \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_1 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_2 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_3 \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_1 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_2 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_3
\end{bmatrix} \\
&=
\Abs{M} I.
\end{aligned}
\end{equation}

The off-diagonal dot products vanish because each cross product is perpendicular to both of its factors, and each diagonal entry is the scalar triple product \( \Bm_1 \cdot \lr{ \Bm_2 \cross \Bm_3 } = \Abs{M} \). Essentially, we found that the rows of the adjoint matrix are each parallel to the reciprocal frame vectors of the columns of \( M \). This makes sense, as the reciprocal frame encodes a generalized inverse of sorts.
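
Here’s a minimal numpy sketch of both facts, the cross product form of the adjoint, and the reciprocal frame interpretation:

import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
m1, m2, m3 = M[:, 0], M[:, 1], M[:, 2]

# Rows of the adjoint are cross products of pairs of columns of M.
A = np.vstack([np.cross(m2, m3), np.cross(m3, m1), np.cross(m1, m2)])
d = np.linalg.det(M)

assert np.allclose(A @ M, d * np.eye(3))
# Dividing out the determinant leaves the reciprocal frame vectors as rows,
# satisfying m^i . m_j = delta_ij.
assert np.allclose((A / d) @ M, np.eye(3))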

Computing the adjoint matrix

January 16, 2024 math and physics play

[Click here for a PDF version of this post]

I started reviewing a book draft that mentions the adjoint in passing, but I’ve forgotten what I knew about the adjoint (not counting self-adjoint operators, which are a different concept.) I do recall that adjoint matrices were covered in high school linear algebra (now 30+ years ago!), but I never really used them after that.

It appears that the basic property of the adjoint \( A \) of a matrix \( M \), when it exists, is
\begin{equation}\label{eqn:adjoint:20}
M A = \Abs{M} I,
\end{equation}
so, when the determinant is non-zero, the adjoint is proportional to the inverse, with the determinant of the matrix as the numerical factor. Let’s try to compute this beastie for the 1D, 2D, and 3D cases.

Simplest case: \(1 \times 1\) matrix.

For a one by one matrix, say
\begin{equation}\label{eqn:adjoint:40}
M =
\begin{bmatrix}
m_{11}
\end{bmatrix},
\end{equation}
the determinant is just \( \Abs{M} = m_{11} \), so our adjoint is the identity matrix
\begin{equation}\label{eqn:adjoint:60}
A =
\begin{bmatrix}
1
\end{bmatrix}.
\end{equation}
Not too interesting. Let’s try the 2D case.

Less trivial case: \(2 \times 2\) matrix.

For the 2D case, let’s define our matrix as a pair of column vectors
\begin{equation}\label{eqn:adjoint:80}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix},
\end{equation}
and let’s write the adjoint out in full in coordinates as
\begin{equation}\label{eqn:adjoint:100}
A =
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}.
\end{equation}
We seek solutions to a pair of vector equations
\begin{equation}\label{eqn:adjoint:120}
\begin{aligned}
\Bm_1 a_{11} + \Bm_2 a_{21} &= \Abs{M} \Be_1 \\
\Bm_1 a_{12} + \Bm_2 a_{22} &= \Abs{M} \Be_2.
\end{aligned}
\end{equation}
We can immediately solve either of these, by taking wedge products, yielding
\begin{equation}\label{eqn:adjoint:140}
\begin{aligned}
\lr{ \Bm_1 \wedge \Bm_2 } a_{11} + \lr{ \Bm_2 \wedge \Bm_2 } a_{21} &= \Abs{M} \lr{ \Be_1 \wedge \Bm_2 } \\
\lr{ \Bm_1 \wedge \Bm_1 } a_{11} + \lr{ \Bm_1 \wedge \Bm_2 } a_{21} &= \Abs{M} \lr{ \Bm_1 \wedge \Be_1 } \\
\lr{ \Bm_1 \wedge \Bm_2 } a_{12} + \lr{ \Bm_2 \wedge \Bm_2 } a_{22} &= \Abs{M} \lr{ \Be_2 \wedge \Bm_2 } \\
\lr{ \Bm_1 \wedge \Bm_1 } a_{12} + \lr{ \Bm_1 \wedge \Bm_2 } a_{22} &= \Abs{M} \lr{ \Bm_1 \wedge \Be_2}.
\end{aligned}
\end{equation}
Any wedge with a repeated vector is zero.
Provided the determinant is non-zero, we can divide both sides by \( \Bm_1 \wedge \Bm_2 = \Abs{M} \Be_{12} \) to find a single determinant for each element in the adjoint
\begin{equation}\label{eqn:adjoint:160}
\begin{aligned}
a_{11} &= \begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} \\
a_{21} &= \begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} \\
a_{12} &= \begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} \\
a_{22} &= \begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix}
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:adjoint:400}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} \\
& \\
\begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix}
\end{bmatrix},
\end{equation}
or
\begin{equation}\label{eqn:adjoint:440}
A_{ij} =
\epsilon_{ir}
\begin{vmatrix}
\Be_j & \Bm_r
\end{vmatrix},
\end{equation}
where \( \epsilon_{ir} \) is the completely antisymmetric tensor, and the Einstein summation convention is in effect (summation implied over any repeated indexes.)

Check:

We should verify that expanding these determinants explicitly reproduces the usual representation of the 2D adjoint:
\begin{equation}\label{eqn:adjoint:420}
\begin{aligned}
\begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} &= \begin{vmatrix} 1 & m_{12} \\ 0 & m_{22} \end{vmatrix} = m_{22} \\
\begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} &= \begin{vmatrix} m_{11} & 1 \\ m_{21} & 0 \end{vmatrix} = -m_{21} \\
\begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} &= \begin{vmatrix} 0 & m_{12} \\ 1 & m_{22} \end{vmatrix} = -m_{12} \\
\begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix} &= \begin{vmatrix} m_{11} & 0 \\ m_{21} & 1 \end{vmatrix} = m_{11},
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:adjoint:180}
A =
\begin{bmatrix}
m_{22} & -m_{12} \\
-m_{21} & m_{11}
\end{bmatrix}.
\end{equation}

Multiplying everything out should give us the determinant weighted identity
\begin{equation}\label{eqn:adjoint:200}
\begin{aligned}
M A
&=
\begin{bmatrix}
m_{11} & m_{12} \\
m_{21} & m_{22}
\end{bmatrix}
\begin{bmatrix}
m_{22} & -m_{12} \\
-m_{21} & m_{11}
\end{bmatrix} \\
&=
\lr{ m_{11} m_{22} - m_{12} m_{21} }
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix} \\
&= \Abs{M} I,
\end{aligned}
\end{equation}
as expected.

3D case: \(3 \times 3\) matrix.

For the 3D case, let’s also define our matrix as column vectors
\begin{equation}\label{eqn:adjoint:220}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix},
\end{equation}
and let’s write the adjoint out in full in coordinates as
\begin{equation}\label{eqn:adjoint:240}
A =
\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}.
\end{equation}
This time, we seek solutions to three vector equations
\begin{equation}\label{eqn:adjoint:260}
\begin{aligned}
\Bm_1 a_{11} + \Bm_2 a_{21} + \Bm_3 a_{31} &= \Abs{M} \Be_1 \\
\Bm_1 a_{12} + \Bm_2 a_{22} + \Bm_3 a_{32} &= \Abs{M} \Be_2 \\
\Bm_1 a_{13} + \Bm_2 a_{23} + \Bm_3 a_{33} &= \Abs{M} \Be_3,
\end{aligned}
\end{equation}
and can immediately solve, once again, by taking wedge products, yielding
\begin{equation}\label{eqn:adjoint:280}
\begin{aligned}
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{11} + \lr{ \Bm_2 \wedge \Bm_2 \wedge \Bm_3 }a_{21} + \lr{ \Bm_3 \wedge \Bm_2 \wedge \Bm_3 }a_{31} &= \Abs{M} \Be_1 \wedge \Bm_2 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_1 \wedge \Bm_3 }a_{11} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{21} + \lr{ \Bm_1 \wedge \Bm_3 \wedge \Bm_3 }a_{31} &= \Abs{M} \Bm_1 \wedge \Be_1 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_1 }a_{11} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_2 }a_{21} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{31} &= \Abs{M} \Bm_1 \wedge \Bm_2 \wedge \Be_1 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{12} + \lr{ \Bm_2 \wedge \Bm_2 \wedge \Bm_3 }a_{22} + \lr{ \Bm_3 \wedge \Bm_2 \wedge \Bm_3 }a_{32} &= \Abs{M} \Be_2 \wedge \Bm_2 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_1 \wedge \Bm_3 }a_{12} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{22} + \lr{ \Bm_1 \wedge \Bm_3 \wedge \Bm_3 }a_{32} &= \Abs{M} \Bm_1 \wedge \Be_2 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_1 }a_{12} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_2 }a_{22} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{32} &= \Abs{M} \Bm_1 \wedge \Bm_2 \wedge \Be_2 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{13} + \lr{ \Bm_2 \wedge \Bm_2 \wedge \Bm_3 }a_{23} + \lr{ \Bm_3 \wedge \Bm_2 \wedge \Bm_3 }a_{33} &= \Abs{M} \Be_3 \wedge \Bm_2 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_1 \wedge \Bm_3 }a_{13} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{23} + \lr{ \Bm_1 \wedge \Bm_3 \wedge \Bm_3 }a_{33} &= \Abs{M} \Bm_1 \wedge \Be_3 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_1 }a_{13} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_2 }a_{23} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{33} &= \Abs{M} \Bm_1 \wedge \Bm_2 \wedge \Be_3.
\end{aligned}
\end{equation}
Any wedge with a repeated vector is zero.
Like before, provided the determinant is non-zero, we can divide both sides by \( \Bm_1 \wedge \Bm_2 \wedge \Bm_3 = \Abs{M} \Be_{123} \) to find a single determinant for each element in the adjoint
\begin{equation}\label{eqn:adjoint:360}
\begin{aligned}
A &=
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_2 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Bm_1 & \Be_1 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_3 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Bm_1 & \Bm_2 & \Be_1 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Bm_2 & \Be_2 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Bm_2 & \Be_3 \end{vmatrix}
\end{bmatrix} \\
&=
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_2 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_3 & \Bm_1 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_1 & \Bm_2 \end{vmatrix}
\end{bmatrix},
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:adjoint:380}
A_{ij} = \frac{\epsilon_{irs}}{2!} \begin{vmatrix} \Be_j & \Bm_r & \Bm_s \end{vmatrix}.
\end{equation}
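For example (a spot check of the normalization), picking \( i = 2, j = 1 \), the sum has two equal non-zero terms, with the \( 2! \) cancelling the double counting
\begin{equation}\label{eqn:adjoint:385}
A_{21} = \frac{1}{2} \lr{ \epsilon_{231} \begin{vmatrix} \Be_1 & \Bm_3 & \Bm_1 \end{vmatrix} + \epsilon_{213} \begin{vmatrix} \Be_1 & \Bm_1 & \Bm_3 \end{vmatrix} } = \begin{vmatrix} \Be_1 & \Bm_3 & \Bm_1 \end{vmatrix},
\end{equation}
which matches the \( 21 \) element above.
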
Observe that the inclusion of the \( \Be_j \) column vector in this determinant means that we really only need to compute a \( 2 \times 2 \) determinant for each adjoint matrix element. That is

\begin{equation}\label{eqn:adjoint:480}
A_{ij} = \frac{\epsilon_{irs}\epsilon_{jab}}{(2!)^2}
\begin{vmatrix}
m_{ar} & m_{as} \\
m_{br} & m_{bs}
\end{vmatrix}
.
\end{equation}
This looks a lot like the usual minor/cofactor recipe, but written out explicitly for each element, using the antisymmetric tensors to encode the index alternation. Note that the row indexes \( a, b \) of the minor are paired with \( j \), and the column indexes \( r, s \) with \( i \): that index swap builds in the transposition required to match the usual convention, since wikipedia defines the adjoint as the transpose of the cofactor matrix, see: [1].

General case: \(n \times n\) matrix.

It appears that if we wanted an induction hypothesis for the general \( n > 1 \) case, the \( ij \) element of the adjoint matrix is likely
\begin{equation}\label{eqn:adjoint:460}
\begin{aligned}
A_{ij} &= \frac{\epsilon_{i s_1 s_2 \cdots s_{n-1}}}{(n-1)!} \begin{vmatrix} \Be_j & \Bm_{s_1} & \Bm_{s_2} & \cdots & \Bm_{s_{n-1}} \end{vmatrix} \\
&= \frac{\epsilon_{i s_1 s_2 \cdots s_{n-1}} \epsilon_{j r_1 r_2 \cdots r_{n-1}} }{\lr{(n-1)!}^2}
\begin{vmatrix}
m_{r_1 s_1} & \cdots & m_{r_1 s_{n-1}} \\
\vdots & & \vdots \\
m_{r_{n-1} s_{1}} & \cdots & m_{r_{n-1} s_{n-1}}
\end{vmatrix}.
\end{aligned}
\end{equation}
I’m not going to try to prove this, inductively or otherwise.
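
A brute force numerical check is easy though. Here’s a minimal Python sketch of the first form (the helper names are mine), computing the Levi-Civita sign by counting inversions:

import math
from itertools import permutations

import numpy as np

def levi_civita(idx):
    # Sign of an index tuple as a permutation (0 if any index repeats).
    sign = 1
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            if idx[a] == idx[b]:
                return 0
            if idx[a] > idx[b]:
                sign = -sign
    return sign

def adjoint(M):
    # A_ij = eps_{i s_1 .. s_{n-1}} |e_j m_{s_1} .. m_{s_{n-1}}| / (n-1)!
    n = M.shape[0]
    e = np.eye(n)
    A = np.zeros((n, n))
    for i in range(n):
        others = [k for k in range(n) if k != i]
        for j in range(n):
            acc = 0.0
            for s in permutations(others):
                B = np.column_stack([e[:, j]] + [M[:, k] for k in s])
                acc += levi_civita((i,) + s) * np.linalg.det(B)
            A[i, j] = acc / math.factorial(n - 1)
    return A

rng = np.random.default_rng(1)
for n in (2, 3, 4, 5):
    M = rng.normal(size=(n, n))
    assert np.allclose(M @ adjoint(M), np.linalg.det(M) * np.eye(n))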

References

[1] Wikipedia contributors. Minor (linear algebra) — Wikipedia, the free encyclopedia, 2023. URL https://en.wikipedia.org/w/index.php?title=Minor_(linear_algebra)&oldid=1182988311. [Online; accessed 16-January-2024].

A hardcopy of my book for myself.

January 14, 2024 Geometric Algebra for Electrical Engineers

I hadn’t printed a copy of my book for myself for about 4 years, and since I’ve added a lot since then, I wanted a new version to mark up.   This new version (V0.3.5) is now up to 313 pages, whereas my May 2019 V0.1.15-6 version weighed in at a much skinnier 258 pages.

This time, so I could see what it looked like, I got myself a hardcover copy:

The hard cover has a nice feel and thickness, and the book has a nice weight.  It also opens fairly flat, which is nice for a textbook style book.  All in all, I’m pretty impressed with the binding.  My only complaint is a small curvature to the covers.

A router table extension for my table saw.

January 7, 2024 Wood working

I’ve got a nice little job site table saw, but my “workshop” is an 8×10 shed (approximately), and space is at a premium, so I don’t have room for many bigger wood working tools.  I saw a number of YouTube videos showing various router table extensions for table saws, and I’ve started building one for myself.  I’d like to avoid any modification that drills into the saw frame itself, as some of those builds did, and I want to be able to use my table saw sliding fence as the router table fence too.

Here’s my progress so far

My router came with two bases, one plunge base that can be used as a fixed base or in plunge mode, and one fixed base.  I unscrewed the circular plate, and replaced it with a small rectangular piece of plexiglass, embedded in the board that I used for the table extension.  One YouTuber who did a project like this said that he just used his thickness planer to get the height of that table extension right.  Well, I don’t have a thickness planer (nor do I have space for one), so I routed the very edge of my extension slightly, to remove a bit of the thickness.  There are a lot of weird cuts on my little extension, but it fits nicely.

I have a couple clamps that I should be able to use to make a router table guide that I can attach to my table saw fence, so that I can use it as my router fence without any destructive modifications.  When I’m done, I’ll post my own little YouTube video of my creation.  I didn’t try to record the creation process, which included way too much trial and error, but I’ll show it in action if it all works out.

An absurd COBOL library: 2D Euclidean GA

December 31, 2023 COBOL, math and physics play

I’ve achieved a new pinnacle of obscurity, and have now written a rudimentary COBOL implementation of a geometric algebra library for \( \mathbb{R}^2 \) calculations.

Who will use this?  Absolutely nobody.  Effectively, nobody knows geometric algebra.  Nobody wants to know COBOL, but some do.  The union of those two groups is vanishingly small (probably one: argued below.)

I understand that some Opus Dei members have taught themselves COBOL, as looking at COBOL has been found to be just as painful as a course of self-flagellation.

Figure 0. A flagellation representation of COBOL.

Assuming that no Opus Dei practitioners know geometric algebra, that means that there is exactly one person in the world that both knows COBOL and geometric algebra.  Me.

Why did I write this little library?  Well, I was tickled to write something so completely stupid, and I’ve been laughing at the absurdity of it. I also thought I might learn a few things about COBOL in the process of trying to use it for something slightly non-trivial.  I’m adept at writing simple test programs that exercise various obscure compiler features, but those are usually fairly small.  On the flip side of complexity, I have to debug through a number of horribly complicated customer programs as part of my compiler validation work.  A simple real life test scenario might run 100+ COBOL programs in a set of CICS transactions, executing thousands of EXEC DLI and EXEC CICS statements as well as all of the rest of the COBOL language statements!  Despite having gained familiarity with COBOL from that sort of observational use, walking through stuff in the debugger doesn’t provide the same level of comfort with the language as writing code from scratch.  Since I have no interest in simulating a boring business application, why not do something just for fun, as a learning game?

The compiler I am using does not seem to support object-COBOL (which would have been nicely suited for this project), so I’ve written my little toy in conventional COBOL, using one external procedure for each type of mathematical operation.  In the huge set of customer COBOL code that I’ve examined and done test compilations of, none of it has used object-COBOL.  I am guessing that the object-COBOL community is as large as the user base for my little toy COBOL geometric algebra library will ever be.

I’ve implemented methods to construct multivectors with scalar, vector and pseudoscalar components, or a general multivector with all of the above.  I’ve also implemented multiply, add, subtract, scalar multiplication, grade selection, and a DISPLAY function to write a multivector to SYSOUT (stdout equivalent.)

The multivector “type”

Figure 1 shows the implementation of my multivector type, implemented in a copybook (include file) named MVI.  I have an alternate MV copybook that doesn’t have the VALUE (initialization) clauses, since you don’t want initialization for LINKAGE-SECTION values (i.e.: program parameters.)

Figure 1. Copybook with multivector declaration and initialization.

If you are wondering what the hell a ‘PIC S9(9) USAGE IS COMP-5’ is, well, that’s the “easy to remember” way to declare a 32-bit signed integer in COBOL.  A COMP-2, on the other hand, is a double-precision (64-bit) floating point value.

Figure 2 shows an example of the use of this copybook:

Figure 2. Using the multivector copybook.

Figure 3 shows these two copybook declarations after preprocessor expansion

Figure 3. Multivector global variable examples after preprocessing.

The global variable declarations above are roughly equivalent to the following pseudo C++ code (pretending that we can have anonymous unions that match the COBOL declarations above):

#include <complex>

using complex = std::complex<double>;

struct ga20{
   int grade{};
   union {
      struct { double sc{}; double ps{}; };
      complex g02{};
   };
   union { 
      struct { double x{}; double y{}; };
      complex g1{};
   };
};

ga20 a;
ga20 b;

COBOL is inherently untyped, but requires matching types for CALL parameters, or else all hell ensues, so you have to rely on naming conventions and other mechanisms to enforce the required type equivalences.  In this toy GA library, I’ve used copybooks to enforce the types required for everything.  Global variables like A-MV and B-MV are declared only using a copybook that knows the required representation, and all the uses in sub-programs of the effective -MV “type” use a matching copybook for their declarations.  However, I’ve also made use of the lack of typing to treat A-G02, B-G02, A-G1, and B-G1 as if they were complex numbers, and pass those “variables” off to complex number sub-programs, knowing that I’ve constructed the parameters to those programs in a way that is bit-compatible with the MV field values.  You can screw things up really nicely doing stuff like this, especially because all COBOL sub-program parameters are (generally) passed by reference.  If you don’t match up the types right, “fun ensues.”

Also observe that the nested level specifiers are optional in COBOL.  For nested fields in C++, we might write a.g1.x.  With a nested variable like this in COBOL, we could write something equivalent to that, like:

A-X OF A-G1 OF A-MV

but we can leave out any of the intermediate “level” specifications if we want.  This gets really confusing in complicated real-life COBOL code.  If you are looking to see where something is modified, you have to look not only for the variable of interest, but also for any of the higher level fields, since any of those could have been passed off to other code, which implicitly wrote the value you are interested in.

Here’s what one of these multivectors looks like in memory on my (Linux x86-64) system

(lldb) c
Process 3903259 resuming
Process 3903259 stopped
* thread #10, name = 'GA20', stop reason = breakpoint 7.1
    frame #0: 0x00007fffd9189a02 PJOOT.GA20V01.LOADLIB(MULT).ec73dc4b`MULT at MULT.cob:50:1
   47              CALL GA-MKVECTOR-MODIFY USING C-MV, A-X, A-Y
   48              CALL GA-MKPSEUDO-MODIFY USING D-MV, A-PS
   49  
-> 50              MOVE 'A' TO WS-DISPPARM-N
   51              CALL GA-DISPLAY USING
   52                WS-DISPPARM-N,
   53                A-MV
(lldb) p A-MV
(A-MV) A-MV = {
  A-GRADE = -1
  A-G02 = (A-SC = 1, A-PS = 4)
  A-G1 = (A-X = 2, A-Y = 3)
}

i.e.: this has the value \( 1 + 2 \mathbf{e}_1 + 3 \mathbf{e}_2 + 4 \mathbf{e}_{12} \).

Looking at the multivector in its hex representation:

(lldb) fr v -format x A-MV
(A-MV) A-MV = {
  A-GRADE = 0xffffffff
  A-G02 = {
    A-SC = 0x3ff0000000000000
    A-PS = 0x4010000000000000
  }
  A-G1 = {
    A-X = 0x4000000000000000
    A-Y = 0x4008000000000000
  }
}

we see that the debugger is showing an underlying IEEE floating point representation for the COMP-2 variables in the program as it was compiled.
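
As a cross check, a couple of lines of Python (my sketch, using the standard struct module) reproduces those big-endian IEEE 754 double encodings:

import struct

# COMP-2 values compiled to IEEE format land in memory as 64-bit doubles;
# big-endian packing matches the hex lldb showed for A-SC, A-X, A-Y, A-PS.
for value, expected in [
    (1.0, "3ff0000000000000"),
    (2.0, "4000000000000000"),
    (3.0, "4008000000000000"),
    (4.0, "4010000000000000"),
]:
    assert struct.pack(">d", value).hex() == expected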

I have a multivector print routine that prints multivectors to SYSOUT:

Figure 4. Calling the multivector DISPLAY function.

where WS-DISPPARM-N is a PIC X(20).  (i.e.: a fixed size character array.)  Output for the A-MV value showing in the debug session above looks like:

A                     ( .10000000000000000E 01)                                                                         
                    + ( .20000000000000000E 01) e_1 + ( .30000000000000000E 01) e_2                                     
                    + ( .40000000000000000E 01) e_{12}            

End of sentence required for nested IFs?

I encountered a curious language issue in my multivector multiply function.  Here’s an example of how I’ve been coding IF statements

Figure 5. An IF END-IF pair without a period to terminate the sentence.

Notice that I don’t do anything special between the END-IF and the statement that follows it.  However, if I have an IF statement that includes nested IF END-IFs, then it appears that I need a period after the final END-IF, like so:

Figure 6. An IF with nested conditions that seems to require a period to terminate the sentence.

If I didn’t include that period after the final END-IF (ending the COBOL sentence), then in some circumstances the program would exit after the last interior basic block within this nested IF was executed.  In COBOL parlance, it seems as if a GOBACK (i.e.: return) was implicitly executed once we fell out of the big nested IF.  Why is that period required for a nested IF, but not for a simple IF?

In my copy of “Murach’s Mainframe COBOL”, the author ends ALL IF statements with a period, even simple IFs.  I don’t see a rationale for that in the book anywhere, but it’s a ~700 page book, so perhaps he says why at some point.

I’ve asked our compiler guys if this is a bug or expected behaviour, but I am guessing the latter…. I just don’t know why.

The multiplication kernel for this library

The workhorse of this GA(2,0) implementation is a multivector multiplication operation, which can be implemented in two lines of Mathematica (or C++)

multivector /: multivector[_, m1_, m2_] ** multivector[_, n1_, n2_] := 
   multivector[-1, m1 n1 + Conjugate[m2] n2, n1 m2 + Conjugate[m1] n2 ]
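
The same rule is easy to sketch in Python (mine, not part of the library), treating a multivector as a pair of complex numbers: the even part \( z_0 \) (scalar plus pseudoscalar), and \( z_1 \), encoding the vector part as \( \mathbf{e}_1 z_1 \):

# A multivector is a pair (z0, z1): even part z0 = scalar + pseudoscalar,
# and vector part e_1 z1.  The product uses e_1 z e_1 = conjugate(z).
def mv_mult(m, n):
    m1, m2 = m
    n1, n2 = n
    return (m1 * n1 + m2.conjugate() * n2,
            n1 * m2 + m1.conjugate() * n2)

# Example: 1 + 2 e_1 + 3 e_2 + 4 e_12 is (1 + 4j, 2 + 3j) in this encoding.
a = (1 + 4j, 2 + 3j)
print(mv_mult(a, a))  # (-2+8j, 4+6j), i.e. -2 + 4 e_1 + 6 e_2 + 8 e_12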

In COBOL, it takes a lot more, and as usual, COBOL verbosity obfuscates things considerably. Here’s the equivalent code in my library:

Figure 7. GA(2,0) multiplication kernel in COBOL.

The library and a little test program.

If you are curious, you can poke around in the code for this library and the test program on github.  The sample/test program is src/MULT.cob, and running the job gives the following SYSOUT:

Figure 8. Sample SYSOUT for MULT.cob