
We previously found determinant expressions for the matrix elements of the adjoint for 2D and 3D matrices \( M \). However, we can extract additional structure from each of those results.

### 2D case.

Given a matrix expressed in block matrix form in terms of its columns

\begin{equation}\label{eqn:adjoint:500}

M =

\begin{bmatrix}

\Bm_1 & \Bm_2

\end{bmatrix},

\end{equation}

we found that the adjoint \( A \) satisfying \( M A = \Abs{M} I \) had the structure

\begin{equation}\label{eqn:adjoint:520}

A =

\begin{bmatrix}

\begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} \\

& \\

\begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix}

\end{bmatrix}.

\end{equation}

We initially had wedge product expressions for each of these matrix elements, and can recover that structure by putting the wedge products back. Modulo sign, each of these matrix elements has the form

\begin{equation}\label{eqn:adjoint:540}

\begin{aligned}

\begin{vmatrix} \Be_i & \Bm_j \end{vmatrix}

&=

\lr{ \Be_i \wedge \Bm_j } i^{-1} \\

&=

\gpgradezero{

\lr{ \Be_i \wedge \Bm_j } i^{-1}

} \\

&=

\gpgradezero{

\lr{ \Be_i \Bm_j - \Be_i \cdot \Bm_j } i^{-1}

} \\

&=

\gpgradezero{

\Be_i \Bm_j i^{-1}

} \\

&=

\Be_i \cdot \lr{ \Bm_j i^{-1} },

\end{aligned}

\end{equation}

where \( i = \Be_{12} \). The adjoint matrix is

\begin{equation}\label{eqn:adjoint:560}

A =

\begin{bmatrix}

-\lr{ \Bm_2 i } \cdot \Be_1 & -\lr{ \Bm_2 i } \cdot \Be_2 \\

\lr{ \Bm_1 i } \cdot \Be_1 & \lr{ \Bm_1 i } \cdot \Be_2 \\

\end{bmatrix}.

\end{equation}

If we use a column vector representation of the vectors \( \Bm_j i^{-1} \), we can write the adjoint in a compact hybrid geometric-algebra matrix form

\begin{equation}\label{eqn:adjoint:640}

A =

\begin{bmatrix}

-\lr{ \Bm_2 i }^\T \\

\lr{ \Bm_1 i }^\T

\end{bmatrix}.

\end{equation}
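As a quick numerical sanity check of this form (a numpy sketch of my own, not part of the derivation): in 2D, right multiplication of a vector by \( i = \Be_{12} \) is a 90 degree rotation, sending \( (a, b) \) to \( (-b, a) \), so the rows of \( A \) are easy to construct directly.

```python
import numpy as np

def times_i(v):
    # Right multiplication of a 2D vector by the pseudoscalar i = e1 e2:
    # (a e1 + b e2) i = -b e1 + a e2, a 90 degree rotation.
    return np.array([-v[1], v[0]])

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2))
m1, m2 = M[:, 0], M[:, 1]

# A = [ -(m2 i)^T ; (m1 i)^T ], stacked as rows.
A = np.vstack([-times_i(m2), times_i(m1)])

assert np.allclose(A @ M, np.linalg.det(M) * np.eye(2))
```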

### Check:

Let’s see if this works by multiplying with \( M \)

\begin{equation}\label{eqn:adjoint:580}

\begin{aligned}

A M &=

\begin{bmatrix}

-\lr{ \Bm_2 i }^\T \\

\lr{ \Bm_1 i }^\T

\end{bmatrix}

\begin{bmatrix}

\Bm_1 & \Bm_2

\end{bmatrix} \\

&=

\begin{bmatrix}

-\lr{ \Bm_2 i }^\T \Bm_1 & -\lr{ \Bm_2 i }^\T \Bm_2 \\

\lr{ \Bm_1 i }^\T \Bm_1 & \lr{ \Bm_1 i }^\T \Bm_2

\end{bmatrix}.

\end{aligned}

\end{equation}

Those dot products have the form

\begin{equation}\label{eqn:adjoint:600}

\begin{aligned}

\lr{ \Bm_j i }^\T \Bm_i

&=

\lr{ \Bm_j i } \cdot \Bm_i \\

&=

\gpgradezero{ \lr{ \Bm_j i } \Bm_i } \\

&=

\gpgradezero{ -i \Bm_j \Bm_i } \\

&=

-i \lr{ \Bm_j \wedge \Bm_i },

\end{aligned}

\end{equation}

so

\begin{equation}\label{eqn:adjoint:620}

\begin{aligned}

A M &=

\begin{bmatrix}

i \lr{ \Bm_2 \wedge \Bm_1 } & 0 \\

0 & -i \lr { \Bm_1 \wedge \Bm_2 }

\end{bmatrix} \\

&=

\Abs{M} I.

\end{aligned}

\end{equation}

We find the determinant weighted identity that we expected. Our method switches fluidly between matrix and geometric algebra representations, but provided we are careful enough, this isn’t problematic.

### 3D case.

Now, let’s look at the 3D case, where we assume a column vector representation of the matrix of interest

\begin{equation}\label{eqn:adjoint:660}

M =

\begin{bmatrix}

\Bm_1 & \Bm_2 & \Bm_3

\end{bmatrix},

\end{equation}

and try to simplify the expression we found for the adjoint

\begin{equation}\label{eqn:adjoint:680}

A =

\begin{bmatrix}

\begin{vmatrix} \Be_1 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_2 & \Bm_3 \end{vmatrix} \\

& & \\

\begin{vmatrix} \Be_1 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_3 & \Bm_1 \end{vmatrix} \\

& & \\

\begin{vmatrix} \Be_1 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_1 & \Bm_2 \end{vmatrix}

\end{bmatrix}.

\end{equation}
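This determinant form translates directly into a numerical check (a numpy sketch of my own, with the cyclic column index pairs hard coded):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
m = [M[:, j] for j in range(3)]
e = np.eye(3)  # e[c] is the standard basis vector e_{c+1}

# Cyclic column pairs for each row of A: (m2, m3), (m3, m1), (m1, m2).
pairs = [(1, 2), (2, 0), (0, 1)]

# A[r][c] = det[ e_{c+1}, m_j, m_k ], exactly as in the block matrix above.
A = np.array([[np.linalg.det(np.column_stack([e[c], m[j], m[k]]))
               for c in range(3)]
              for (j, k) in pairs])

assert np.allclose(M @ A, np.linalg.det(M) * np.eye(3))
```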

As with the 2D case, let’s re-express these determinants in wedge product form. We’ll write \( I = \Be_{123} \), and find

\begin{equation}\label{eqn:adjoint:700}

\begin{aligned}

\begin{vmatrix} \Be_i & \Bm_j & \Bm_k \end{vmatrix}

&=

\lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} \\

&=

\gpgradezero{ \lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} } \\

&=

\gpgradezero{ \lr{

\Be_i \lr{ \Bm_j \wedge \Bm_k }

-

\Be_i \cdot \lr{ \Bm_j \wedge \Bm_k }

} I^{-1} } \\

&=

\gpgradezero{

\Be_i \lr{ \Bm_j \wedge \Bm_k }

I^{-1} } \\

&=

\gpgradezero{

\Be_i \lr{ \Bm_j \cross \Bm_k } I

I^{-1} } \\

&=

\Be_i \cdot \lr{ \Bm_j \cross \Bm_k }.

\end{aligned}

\end{equation}

We see that we can put the adjoint in block matrix form

\begin{equation}\label{eqn:adjoint:720}

A =

\begin{bmatrix}

\lr{ \Bm_2 \cross \Bm_3 }^\T \\

\lr{ \Bm_3 \cross \Bm_1 }^\T \\

\lr{ \Bm_1 \cross \Bm_2 }^\T \\

\end{bmatrix}.

\end{equation}

### Check:

\begin{equation}\label{eqn:adjoint:740}

\begin{aligned}

A M

&=

\begin{bmatrix}

\lr{ \Bm_2 \cross \Bm_3 }^\T \\

\lr{ \Bm_3 \cross \Bm_1 }^\T \\

\lr{ \Bm_1 \cross \Bm_2 }^\T \\

\end{bmatrix}

\begin{bmatrix}

\Bm_1 & \Bm_2 & \Bm_3

\end{bmatrix} \\

&=

\begin{bmatrix}

\lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_1 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_2 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_3 \\

\lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_1 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_2 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_3 \\

\lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_1 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_2 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_3

\end{bmatrix} \\

&=

\Abs{M} I.

\end{aligned}

\end{equation}

The diagonal entries above are each the scalar triple product \( \Bm_1 \cdot \lr{ \Bm_2 \cross \Bm_3 } = \Abs{M} \), while the off diagonal entries vanish, since each is the dot product of a vector with a cross product containing that same vector. Essentially, we found that the rows of the adjoint matrix are each parallel to the reciprocal frame vectors of the columns of \( M \). This makes sense, as the reciprocal frame encodes a generalized inverse of sorts.
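That reciprocal frame observation is easy to verify numerically (again a numpy sketch of my own): scaling the cross product rows of \( A \) by \( 1/\Abs{M} \) produces vectors \( \Bm^j \) satisfying \( \Bm^j \cdot \Bm_k = {\delta^j}_k \).

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
m1, m2, m3 = M[:, 0], M[:, 1], M[:, 2]

# Rows of the adjoint are cross products of pairs of columns of M.
A = np.vstack([np.cross(m2, m3), np.cross(m3, m1), np.cross(m1, m2)])
det = np.linalg.det(M)

# Scaling by 1/det gives the reciprocal frame of the columns of M:
# each row dotted with the columns yields the Kronecker delta.
recip = A / det
assert np.allclose(recip @ M, np.eye(3))
```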