
Simplifying the previous adjoint matrix results.

January 17, 2024 math and physics play


We previously found determinant expressions for the matrix elements of the adjoint for 2D and 3D matrices \( M \). However, we can extract additional structure from each of those results.

2D case.

Given a matrix expressed in block matrix form in terms of its columns
\begin{equation}\label{eqn:adjoint:500}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix},
\end{equation}
we found that the adjoint \( A \) satisfying \( M A = \Abs{M} I \) had the structure
\begin{equation}\label{eqn:adjoint:520}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} \\
& \\
\begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix}
\end{bmatrix}.
\end{equation}
We initially had wedge product expressions for each of these matrix elements, and can recover that structure by putting back those wedge products. Modulo sign, each of these matrix elements has the form
\begin{equation}\label{eqn:adjoint:540}
\begin{aligned}
\begin{vmatrix} \Be_i & \Bm_j \end{vmatrix}
&=
\lr{ \Be_i \wedge \Bm_j } i^{-1} \\
&=
\gpgradezero{
\lr{ \Be_i \wedge \Bm_j } i^{-1}
} \\
&=
\gpgradezero{
\lr{ \Be_i \Bm_j - \Be_i \cdot \Bm_j } i^{-1}
} \\
&=
\gpgradezero{
\Be_i \Bm_j i^{-1}
} \\
&=
\Be_i \cdot \lr{ \Bm_j i^{-1} },
\end{aligned}
\end{equation}
where \( i = \Be_{12} \). Since \( i^{-1} = -i \), the adjoint matrix is
\begin{equation}\label{eqn:adjoint:560}
A =
\begin{bmatrix}
-\lr{ \Bm_2 i } \cdot \Be_1 & -\lr{ \Bm_2 i } \cdot \Be_2 \\
\lr{ \Bm_1 i } \cdot \Be_1 & \lr{ \Bm_1 i } \cdot \Be_2 \\
\end{bmatrix}.
\end{equation}
If we use a column vector representation of the vectors \( \Bm_j i^{-1} \), we can write the adjoint in a compact hybrid geometric-algebra matrix form
\begin{equation}\label{eqn:adjoint:640}
A =
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \\
\lr{ \Bm_1 i }^\T
\end{bmatrix}.
\end{equation}

Check:

Let’s see if this works by multiplying with \( M \)
\begin{equation}\label{eqn:adjoint:580}
\begin{aligned}
A M &=
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \\
\lr{ \Bm_1 i }^\T
\end{bmatrix}
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix} \\
&=
\begin{bmatrix}
-\lr{ \Bm_2 i }^\T \Bm_1 & -\lr{ \Bm_2 i }^\T \Bm_2 \\
\lr{ \Bm_1 i }^\T \Bm_1 & \lr{ \Bm_1 i }^\T \Bm_2
\end{bmatrix}.
\end{aligned}
\end{equation}
Those dot products have the form
\begin{equation}\label{eqn:adjoint:600}
\begin{aligned}
\lr{ \Bm_j i }^\T \Bm_i
&=
\lr{ \Bm_j i } \cdot \Bm_i \\
&=
\gpgradezero{ \lr{ \Bm_j i } \Bm_i } \\
&=
\gpgradezero{ -i \Bm_j \Bm_i } \\
&=
-i \lr{ \Bm_j \wedge \Bm_i },
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:adjoint:620}
\begin{aligned}
A M &=
\begin{bmatrix}
i \lr{ \Bm_2 \wedge \Bm_1 } & 0 \\
0 & -i \lr { \Bm_1 \wedge \Bm_2 }
\end{bmatrix} \\
&=
\Abs{M} I.
\end{aligned}
\end{equation}
Since \( \Bm_1 \wedge \Bm_2 = \Abs{M} i \) and \( i^2 = -1 \), both diagonal elements reduce to \( \Abs{M} \), and we find the determinant weighted identity that we expected. Our methods are a bit schizophrenic, switching fluidly between matrix and geometric algebra representations, but provided we are careful enough, this isn’t problematic.
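
For the suspicious, here's a small numerical sanity check of \ref{eqn:adjoint:640}, a minimal numpy sketch (the sample matrix is arbitrary), using the fact that right multiplication by \( i \) maps \( (x, y) \) to \( (-y, x) \):

```python
import numpy as np

# Arbitrary sample matrix, in column vector form.
M = np.array([[1.0, 2.0],
              [3.0, 5.0]])
m1, m2 = M[:, 0], M[:, 1]

def times_i(v):
    # Right multiplication by the pseudoscalar i = e1 e2: (x, y) -> (-y, x).
    return np.array([-v[1], v[0]])

# Rows of the adjoint: -(m2 i)^T and (m1 i)^T, per the block form above.
A = np.vstack([-times_i(m2), times_i(m1)])

# Check A M = |M| I (M A = |M| I holds equally well).
assert np.allclose(A @ M, np.linalg.det(M) * np.eye(2))
```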

3D case.

Now, let’s look at the 3D case, where we assume a column vector representation of the matrix of interest
\begin{equation}\label{eqn:adjoint:660}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix},
\end{equation}
and try to simplify the expression we found for the adjoint
\begin{equation}\label{eqn:adjoint:680}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_2 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_3 & \Bm_1 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_1 & \Bm_2 \end{vmatrix}
\end{bmatrix}.
\end{equation}
As with the 2D case, let’s re-express these determinants in wedge product form. We’ll write \( I = \Be_{123} \), and find
\begin{equation}\label{eqn:adjoint:700}
\begin{aligned}
\begin{vmatrix} \Be_i & \Bm_j & \Bm_k \end{vmatrix}
&=
\lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} \\
&=
\gpgradezero{ \lr{ \Be_i \wedge \Bm_j \wedge \Bm_k } I^{-1} } \\
&=
\gpgradezero{ \lr{
\Be_i \lr{ \Bm_j \wedge \Bm_k }
-
\Be_i \cdot \lr{ \Bm_j \wedge \Bm_k }
} I^{-1} } \\
&=
\gpgradezero{
\Be_i \lr{ \Bm_j \wedge \Bm_k }
I^{-1} } \\
&=
\gpgradezero{
\Be_i \lr{ \Bm_j \cross \Bm_k } I
I^{-1} } \\
&=
\Be_i \cdot \lr{ \Bm_j \cross \Bm_k }.
\end{aligned}
\end{equation}
We see that we can put the adjoint in block matrix form
\begin{equation}\label{eqn:adjoint:720}
A =
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \\
\end{bmatrix}.
\end{equation}

Check:

\begin{equation}\label{eqn:adjoint:740}
\begin{aligned}
A M
&=
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \\
\end{bmatrix}
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix} \\
&=
\begin{bmatrix}
\lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_1 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_2 & \lr{ \Bm_2 \cross \Bm_3 }^\T \Bm_3 \\
\lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_1 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_2 & \lr{ \Bm_3 \cross \Bm_1 }^\T \Bm_3 \\
\lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_1 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_2 & \lr{ \Bm_1 \cross \Bm_2 }^\T \Bm_3
\end{bmatrix} \\
&=
\Abs{M} I.
\end{aligned}
\end{equation}

The diagonal elements above are the scalar triple products \( \Bm_1 \cdot \lr{ \Bm_2 \cross \Bm_3 } = \Bm_2 \cdot \lr{ \Bm_3 \cross \Bm_1 } = \Bm_3 \cdot \lr{ \Bm_1 \cross \Bm_2 } = \Abs{M} \), and the off-diagonal elements are all zero, since each cross product is perpendicular to both of its factors. Essentially, we found that the rows of the adjoint matrix are each parallel to the reciprocal frame vectors of the columns of \( M \). This makes sense, as the reciprocal frame encodes a generalized inverse of sorts.
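
This cross product form is also easy to check numerically; here's a minimal numpy sketch of \ref{eqn:adjoint:720} (sample matrix arbitrary):

```python
import numpy as np

# Arbitrary sample matrix, in column vector form.
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
m1, m2, m3 = M[:, 0], M[:, 1], M[:, 2]

# Rows of the adjoint are cross products of the columns, in cyclic order.
A = np.vstack([np.cross(m2, m3), np.cross(m3, m1), np.cross(m1, m2)])

assert np.allclose(A @ M, np.linalg.det(M) * np.eye(3))
```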

Computing the adjoint matrix

January 16, 2024 math and physics play


I started reviewing a book draft that mentions the adjoint in passing, but I’ve forgotten what I knew about the adjoint (not counting self-adjoint operators, which are a different concept.) I do recall that adjoint matrices were covered in high school linear algebra (now 30+ years ago!), but I never really used them after that.

It appears that the basic property of the adjoint \( A \) of a matrix \( M \), when it exists, is
\begin{equation}\label{eqn:adjoint:20}
M A = \Abs{M} I,
\end{equation}
so it’s proportional to the inverse, where the numerical factor is the determinant of that matrix. Let’s try to compute this beastie for 1D, 2D, and 3D cases.
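
When \( M \) is invertible, \ref{eqn:adjoint:20} forces \( A = \Abs{M} M^{-1} \), which gives a cheap numerical way to peek at the answer before computing anything by hand (the adjoint also exists for singular \( M \), where this shortcut does not apply). A minimal numpy sketch, with an arbitrary sample matrix:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [7.0, 4.0]])

# For invertible M, the defining property M A = |M| I gives A = |M| M^{-1}.
A = np.linalg.det(M) * np.linalg.inv(M)

assert np.allclose(M @ A, np.linalg.det(M) * np.eye(2))
```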

Simplest case: \(1 \times 1\) matrix.

For a one by one matrix, say
\begin{equation}\label{eqn:adjoint:40}
M =
\begin{bmatrix}
m_{11}
\end{bmatrix},
\end{equation}
the determinant is just \( \Abs{M} = m_{11} \), so our adjoint is the identity matrix
\begin{equation}\label{eqn:adjoint:60}
A =
\begin{bmatrix}
1
\end{bmatrix}.
\end{equation}
Not too interesting. Let’s try the 2D case.

Less trivial case: \(2 \times 2\) matrix.

For the 2D case, let’s define our matrix as a pair of column vectors
\begin{equation}\label{eqn:adjoint:80}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2
\end{bmatrix},
\end{equation}
and let’s write the adjoint out in full in coordinates as
\begin{equation}\label{eqn:adjoint:100}
A =
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{bmatrix}.
\end{equation}
We seek solutions to a pair of vector equations
\begin{equation}\label{eqn:adjoint:120}
\begin{aligned}
\Bm_1 a_{11} + \Bm_2 a_{21} &= \Abs{M} \Be_1 \\
\Bm_1 a_{12} + \Bm_2 a_{22} &= \Abs{M} \Be_2.
\end{aligned}
\end{equation}
We can immediately solve either of these by taking wedge products, yielding
\begin{equation}\label{eqn:adjoint:140}
\begin{aligned}
\lr{ \Bm_1 \wedge \Bm_2 } a_{11} + \lr{ \Bm_2 \wedge \Bm_2 } a_{21} &= \Abs{M} \lr{ \Be_1 \wedge \Bm_2 } \\
\lr{ \Bm_1 \wedge \Bm_1 } a_{11} + \lr{ \Bm_1 \wedge \Bm_2 } a_{21} &= \Abs{M} \lr{ \Bm_1 \wedge \Be_1 } \\
\lr{ \Bm_1 \wedge \Bm_2 } a_{12} + \lr{ \Bm_2 \wedge \Bm_2 } a_{22} &= \Abs{M} \lr{ \Be_2 \wedge \Bm_2 } \\
\lr{ \Bm_1 \wedge \Bm_1 } a_{12} + \lr{ \Bm_1 \wedge \Bm_2 } a_{22} &= \Abs{M} \lr{ \Bm_1 \wedge \Be_2}.
\end{aligned}
\end{equation}
Any wedge with a repeated vector is zero.
Provided the determinant is non-zero, we can divide both sides by \( \Bm_1 \wedge \Bm_2 = \Abs{M} \Be_{12} \) to find a single determinant for each element in the adjoint
\begin{equation}\label{eqn:adjoint:160}
\begin{aligned}
a_{11} &= \begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} \\
a_{21} &= \begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} \\
a_{12} &= \begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} \\
a_{22} &= \begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix}
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:adjoint:400}
A =
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} \\
& \\
\begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix}
\end{bmatrix},
\end{equation}
or
\begin{equation}\label{eqn:adjoint:440}
A_{ij} =
\epsilon_{ir}
\begin{vmatrix}
\Be_j & \Bm_r
\end{vmatrix},
\end{equation}
where \( \epsilon_{ir} \) is the completely antisymmetric tensor, and the Einstein summation convention is in effect (summation implied over any repeated indexes.)
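
Before expanding these determinants symbolically, here's a direct (0-indexed) numpy transcription of \ref{eqn:adjoint:440} as a sanity check, with an arbitrary sample matrix:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [7.0, 4.0]])
E = np.eye(2)

def eps2(i, r):
    # Two index Levi-Civita symbol on {0, 1}: eps2(0, 1) = 1, eps2(1, 0) = -1.
    return r - i if abs(r - i) == 1 else 0

A = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        for r in range(2):
            # The determinant | e_j  m_r |, with e_j and m_r as columns.
            A[i, j] += eps2(i, r) * np.linalg.det(np.column_stack([E[:, j], M[:, r]]))

assert np.allclose(M @ A, np.linalg.det(M) * np.eye(2))
```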

Check:

We should verify that expanding these determinants explicitly reproduces the usual representation of the 2D adjoint:
\begin{equation}\label{eqn:adjoint:420}
\begin{aligned}
\begin{vmatrix} \Be_1 & \Bm_2 \end{vmatrix} &= \begin{vmatrix} 1 & m_{12} \\ 0 & m_{22} \end{vmatrix} = m_{22} \\
\begin{vmatrix} \Bm_1 & \Be_1 \end{vmatrix} &= \begin{vmatrix} m_{11} & 1 \\ m_{21} & 0 \end{vmatrix} = -m_{21} \\
\begin{vmatrix} \Be_2 & \Bm_2 \end{vmatrix} &= \begin{vmatrix} 0 & m_{12} \\ 1 & m_{22} \end{vmatrix} = -m_{12} \\
\begin{vmatrix} \Bm_1 & \Be_2 \end{vmatrix} &= \begin{vmatrix} m_{11} & 0 \\ m_{21} & 1 \end{vmatrix} = m_{11},
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:adjoint:180}
A =
\begin{bmatrix}
m_{22} & -m_{12} \\
-m_{21} & m_{11}
\end{bmatrix}.
\end{equation}

Multiplying everything out should give us the determinant weighted identity
\begin{equation}\label{eqn:adjoint:200}
\begin{aligned}
M A
&=
\begin{bmatrix}
m_{11} & m_{12} \\
m_{21} & m_{22}
\end{bmatrix}
\begin{bmatrix}
m_{22} & -m_{12} \\
-m_{21} & m_{11}
\end{bmatrix} \\
&=
\lr{ m_{11} m_{22} - m_{12} m_{21} }
\begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix} \\
&= \Abs{M} I,
\end{aligned}
\end{equation}
as expected.

3D case: \(3 \times 3\) matrix.

For the 3D case, let’s also define our matrix as column vectors
\begin{equation}\label{eqn:adjoint:220}
M =
\begin{bmatrix}
\Bm_1 & \Bm_2 & \Bm_3
\end{bmatrix},
\end{equation}
and let’s write the adjoint out in full in coordinates as
\begin{equation}\label{eqn:adjoint:240}
A =
\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}.
\end{equation}
This time, we seek solutions to three vector equations
\begin{equation}\label{eqn:adjoint:260}
\begin{aligned}
\Bm_1 a_{11} + \Bm_2 a_{21} + \Bm_3 a_{31} &= \Abs{M} \Be_1 \\
\Bm_1 a_{12} + \Bm_2 a_{22} + \Bm_3 a_{32} &= \Abs{M} \Be_2 \\
\Bm_1 a_{13} + \Bm_2 a_{23} + \Bm_3 a_{33} &= \Abs{M} \Be_3,
\end{aligned}
\end{equation}
and can immediately solve, once again, by taking wedge products, yielding
\begin{equation}\label{eqn:adjoint:280}
\begin{aligned}
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{11} + \lr{ \Bm_2 \wedge \Bm_2 \wedge \Bm_3 }a_{21} + \lr{ \Bm_3 \wedge \Bm_2 \wedge \Bm_3 }a_{31} &= \Abs{M} \Be_1 \wedge \Bm_2 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_1 \wedge \Bm_3 }a_{11} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{21} + \lr{ \Bm_1 \wedge \Bm_3 \wedge \Bm_3 }a_{31} &= \Abs{M} \Bm_1 \wedge \Be_1 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_1 }a_{11} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_2 }a_{21} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{31} &= \Abs{M} \Bm_1 \wedge \Bm_2 \wedge \Be_1 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{12} + \lr{ \Bm_2 \wedge \Bm_2 \wedge \Bm_3 }a_{22} + \lr{ \Bm_3 \wedge \Bm_2 \wedge \Bm_3 }a_{32} &= \Abs{M} \Be_2 \wedge \Bm_2 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_1 \wedge \Bm_3 }a_{12} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{22} + \lr{ \Bm_1 \wedge \Bm_3 \wedge \Bm_3 }a_{32} &= \Abs{M} \Bm_1 \wedge \Be_2 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_1 }a_{12} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_2 }a_{22} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{32} &= \Abs{M} \Bm_1 \wedge \Bm_2 \wedge \Be_2 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{13} + \lr{ \Bm_2 \wedge \Bm_2 \wedge \Bm_3 }a_{23} + \lr{ \Bm_3 \wedge \Bm_2 \wedge \Bm_3 }a_{33} &= \Abs{M} \Be_3 \wedge \Bm_2 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_1 \wedge \Bm_3 }a_{13} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{23} + \lr{ \Bm_1 \wedge \Bm_3 \wedge \Bm_3 }a_{33} &= \Abs{M} \Bm_1 \wedge \Be_3 \wedge \Bm_3 \\
\lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_1 }a_{13} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_2 }a_{23} + \lr{ \Bm_1 \wedge \Bm_2 \wedge \Bm_3 }a_{33} &= \Abs{M} \Bm_1 \wedge \Bm_2 \wedge \Be_3.
\end{aligned}
\end{equation}
Any wedge with a repeated vector is zero.
Like before, provided the determinant is non-zero, we can divide both sides by \( \Bm_1 \wedge \Bm_2 \wedge \Bm_3 = \Abs{M} \Be_{123} \) to find a single determinant for each element in the adjoint
\begin{equation}\label{eqn:adjoint:360}
\begin{aligned}
A &=
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_2 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Bm_1 & \Be_1 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Be_3 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Bm_1 & \Bm_2 & \Be_1 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Bm_2 & \Be_2 \end{vmatrix} & \begin{vmatrix} \Bm_1 & \Bm_2 & \Be_3 \end{vmatrix}
\end{bmatrix} \\
&=
\begin{bmatrix}
\begin{vmatrix} \Be_1 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_2 & \Bm_3 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_2 & \Bm_3 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_3 & \Bm_1 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_3 & \Bm_1 \end{vmatrix} \\
& & \\
\begin{vmatrix} \Be_1 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_2 & \Bm_1 & \Bm_2 \end{vmatrix} & \begin{vmatrix} \Be_3 & \Bm_1 & \Bm_2 \end{vmatrix}
\end{bmatrix},
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:adjoint:380}
A_{ij} = \frac{\epsilon_{irs}}{2!} \begin{vmatrix} \Be_j & \Bm_r & \Bm_s \end{vmatrix}.
\end{equation}
Observe that the inclusion of the \( \Be_j \) column vector in this determinant means that we really need only compute a \( 2 \times 2 \) determinant for each adjoint matrix element. That is

\begin{equation}\label{eqn:adjoint:480}
A_{ij} = \frac{\epsilon_{irs}\epsilon_{jab}}{(2!)^2}
\begin{vmatrix}
m_{ar} & m_{as} \\
m_{br} & m_{bs}
\end{vmatrix}
.
\end{equation}
This looks a lot like the usual minor/cofactor recipe, but written out explicitly for each element, using the antisymmetric tensors to encode the index alternation. Note that the transpose in the conventional definition (Wikipedia defines the adjoint as the transpose of the cofactor matrix, see: [1]) is built in here, since the \( \epsilon_{jab} \) sum selects the minor complementary to row \( j \), making \( A_{ij} \) the \( ji \) cofactor.
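
Here's a brute force numpy check of \ref{eqn:adjoint:480} (0-indexed, sample matrix arbitrary), summing over all the tensor indexes directly:

```python
import numpy as np

def eps3(i, j, k):
    # Three index Levi-Civita symbol on {0, 1, 2}.
    return ((i - j) * (j - k) * (k - i)) // 2

M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])

A = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        for r in range(3):
            for s in range(3):
                for a in range(3):
                    for b in range(3):
                        # The 2x2 determinant, rows (a, b), columns (r, s).
                        minor = M[a, r] * M[b, s] - M[a, s] * M[b, r]
                        A[i, j] += eps3(i, r, s) * eps3(j, a, b) * minor / 4.0

assert np.allclose(M @ A, np.linalg.det(M) * np.eye(3))
```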

General case: \(n \times n\) matrix.

It appears that if we wanted an induction hypothesis for the general \( n > 1 \) case, the \( ij \) element of the adjoint matrix is likely
\begin{equation}\label{eqn:adjoint:460}
\begin{aligned}
A_{ij} &= \frac{\epsilon_{i s_1 s_2 \cdots s_{n-1}}}{(n-1)!} \begin{vmatrix} \Be_j & \Bm_{s_1} & \Bm_{s_2} & \cdots & \Bm_{s_{n-1}} \end{vmatrix} \\
&= \frac{\epsilon_{i r_1 r_2 \cdots r_{n-1}} \epsilon_{j s_1 s_2 \cdots s_{n-1}} }{\lr{(n-1)!}^2}
\begin{vmatrix}
m_{r_1 s_1} & \cdots & m_{r_1 s_{n-1}} \\
\vdots & & \vdots \\
m_{r_{n-1} s_{1}} & \cdots & m_{r_{n-1} s_{n-1}}
\end{vmatrix}.
\end{aligned}
\end{equation}
I’m not going to try to prove this, inductively or otherwise.
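
It is, however, easy to spot check numerically. The first determinant form above says that \( A_{ij} \) is just the determinant of \( M \) with its \( i \)th column replaced by \( \Be_j \) (a Cramer's rule flavoured restatement of the per-element determinants found in the 2D and 3D cases), which is trivial to code up; a numpy sketch with a random \( 4 \times 4 \) sample matrix:

```python
import numpy as np

def adjoint(M):
    # A[i, j] is det(M) with the i-th column of M replaced by e_j,
    # matching the per-element determinants found above (0-indexed).
    n = M.shape[0]
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            Mij = M.astype(float).copy()
            Mij[:, i] = np.eye(n)[:, j]
            A[i, j] = np.linalg.det(Mij)
    return A

rng = np.random.default_rng(0)
M = rng.integers(-3, 4, size=(4, 4)).astype(float)
A = adjoint(M)

assert np.allclose(M @ A, np.linalg.det(M) * np.eye(4))
```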

References

[1] Wikipedia contributors. Minor (linear algebra) — Wikipedia, the free encyclopedia, 2023. URL https://en.wikipedia.org/w/index.php?title=Minor_(linear_algebra)&oldid=1182988311. [Online; accessed 16-January-2024].

Matrix form for discrete time Fourier transform

December 4, 2014 ece1254


Matrix form

The discrete time Fourier transform has been seen to have the form

\begin{equation}\label{eqn:discreteFourierMatrixForm:200}
x_k = \sum_{n = -N}^N X_n e^{ 2 \pi j n k /(2 N + 1)}
\end{equation}
\begin{equation}\label{eqn:discreteFourierMatrixForm:220}
X_n = \inv{2 N + 1} \sum_{k = -N}^N x_k e^{- 2 \pi j n k /(2 N + 1)}.
\end{equation}
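
Here's a quick numpy round trip check of this transform pair before putting it in matrix form (the value of \( N \) and the signal are arbitrary, and the arrays are indexed in ascending order of \( k, n \), unlike the vectors below):

```python
import numpy as np

N = 4
k = np.arange(-N, N + 1)   # time indexes -N, ..., N
n = np.arange(-N, N + 1)   # frequency indexes -N, ..., N

rng = np.random.default_rng(1)
x = rng.standard_normal(2 * N + 1)

# X_n = 1/(2N+1) sum_k x_k exp(-2 pi j n k / (2N+1))
X = np.array([np.sum(x * np.exp(-2j * np.pi * m * k / (2 * N + 1)))
              for m in n]) / (2 * N + 1)

# x_k = sum_n X_n exp(2 pi j n k / (2N+1))
x_back = np.array([np.sum(X * np.exp(2j * np.pi * n * kk / (2 * N + 1)))
                   for kk in k])

assert np.allclose(x, x_back)
```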

A matrix representation of this form is desired. Let

\begin{equation}\label{eqn:discreteFourierMatrixForm:240}
\Bx =
\begin{bmatrix}
x_N \\
\vdots \\
x_0 \\
\vdots \\
x_{-N}
\end{bmatrix}
\end{equation}
\begin{equation}\label{eqn:discreteFourierMatrixForm:260}
\BX =
\begin{bmatrix}
X_N \\
\vdots \\
X_0 \\
\vdots \\
X_{-N}
\end{bmatrix}
\end{equation}

\ref{eqn:discreteFourierMatrixForm:200} written out in full is
\begin{equation}\label{eqn:discreteFourierMatrixForm:280}
\begin{aligned}
x_k
&= X_N e^{ 2 \pi j N k /(2 N + 1)} \\
&+ X_{N-1} e^{ 2 \pi j \lr{N-1} k /(2 N + 1)} \\
&+ \cdots \\
&+ X_0 \\
&+ \cdots \\
&+ X_{1-N} e^{ -2 \pi j \lr{N-1} k /(2 N + 1)} \\
&+ X_{-N} e^{ -2 \pi j N k /(2 N + 1)}.
\end{aligned}
\end{equation}

With \( \alpha = e^{ 2 \pi j /(2 N + 1) } \) the matrix form is

\begin{equation}\label{eqn:discreteFourierMatrixForm:300}
\Bx =
\begin{bmatrix}
\alpha^{ N N } & \alpha^{ \lr{N-1} N } & \cdots & 1 & \cdots & \alpha^{ -\lr{N-1} N } & \alpha^{ -N N } \\
\alpha^{ N \lr{N-1} } & \alpha^{ \lr{N-1} \lr{N-1} } & \cdots & 1 & \cdots & \alpha^{ -\lr{N-1} \lr{N-1} } & \alpha^{ -N \lr{N-1} } \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\alpha^{ -N \lr{N-1} } & \alpha^{ -\lr{N-1} \lr{N-1} } & \cdots & 1 & \cdots & \alpha^{ \lr{N-1} \lr{N-1} } & \alpha^{ N \lr{N-1} } \\
\alpha^{ -N N } & \alpha^{ -\lr{N-1} N } & \cdots & 1 & \cdots & \alpha^{ \lr{N-1} N } & \alpha^{ N N } \\
\end{bmatrix}
\BX
\end{equation}

Similarly, from \ref{eqn:discreteFourierMatrixForm:220}, the inverse relation expands out to

\begin{equation}\label{eqn:discreteFourierMatrixForm:320}
\begin{aligned}
( 2 N + 1 ) X_n
&= x_N e^{- 2 \pi j n N /(2 N + 1)} \\
&+ x_{N-1} e^{- 2 \pi j n \lr{N-1}/(2 N + 1)} \\
&+ \cdots \\
&+ x_0 \\
&+ \cdots \\
&+ x_{1-N} e^{ 2 \pi j n \lr{ N-1 }/(2 N + 1)} \\
&+ x_{-N} e^{ 2 \pi j n N/(2 N + 1)},
\end{aligned}
\end{equation}

with a matrix form of

\begin{equation}\label{eqn:discreteFourierMatrixForm:340}
( 2 N + 1 ) \BX =
\begin{bmatrix}
\alpha^{- N N } & \alpha^{- N \lr{N-1}} &\cdots & 1 &\cdots & \alpha^{ N \lr{ N-1 }} & \alpha^{ N N} \\
\alpha^{- \lr{N-1} N } & \alpha^{- \lr{N-1} \lr{N-1}} &\cdots & 1 &\cdots & \alpha^{ \lr{N-1} \lr{ N-1 }} & \alpha^{ \lr{N-1} N} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\alpha^{\lr{N-1} N } & \alpha^{ \lr{N-1} \lr{N-1}} &\cdots & 1 &\cdots & \alpha^{ -\lr{N-1} \lr{ N-1 }} & \alpha^{ -\lr{N-1} N} \\
\alpha^{ N N } & \alpha^{ N \lr{N-1}} &\cdots & 1 &\cdots & \alpha^{ -N \lr{ N-1 }} & \alpha^{ -N N} \\
\end{bmatrix}
\Bx.
\end{equation}

Letting

\begin{equation}\label{eqn:discreteFourierMatrixForm:360}
\BF =
\begin{bmatrix}
\alpha^{ N N } & \alpha^{ \lr{N-1} N } & \cdots & 1 & \cdots & \alpha^{ -\lr{N-1} N } & \alpha^{ -N N } \\
\alpha^{ N \lr{N-1} } & \alpha^{ \lr{N-1} \lr{N-1} } & \cdots & 1 & \cdots & \alpha^{ -\lr{N-1} \lr{N-1} } & \alpha^{ -N \lr{N-1} } \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\alpha^{ -N \lr{N-1} } & \alpha^{ -\lr{N-1} \lr{N-1} } & \cdots & 1 & \cdots & \alpha^{ \lr{N-1} \lr{N-1} } & \alpha^{ N \lr{N-1} } \\
\alpha^{ -N N } & \alpha^{ -\lr{N-1} N } & \cdots & 1 & \cdots & \alpha^{ \lr{N-1} N } & \alpha^{ N N } \\
\end{bmatrix},
\end{equation}

the discrete transform pair has the compact matrix representation

\begin{equation}\label{eqn:discreteFourierMatrixForm:380}
\Bx = \BF \BX
\end{equation}
\begin{equation}\label{eqn:discreteFourierMatrixForm:400}
\BX = \inv{2 N + 1} \overline{{\BF}} \Bx,
\end{equation}

where \( \overline{{\BF}} \) is the complex conjugate of \( \BF \).
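
This pair is easy to verify numerically; a small numpy sketch that builds \( \BF \) with the \( N, N-1, \cdots, -N \) index ordering used above, and checks that \( \overline{\BF} \BF = (2 N + 1) I \):

```python
import numpy as np

N = 3
alpha = np.exp(2j * np.pi / (2 * N + 1))

# Row and column indexes both run N, N-1, ..., -N, matching x and X.
idx = np.arange(N, -N - 1, -1)
F = alpha ** np.outer(idx, idx)   # F[k, n] = alpha^{n k}

# The conjugate matrix inverts F, up to the 2N + 1 factor.
assert np.allclose(np.conj(F) @ F, (2 * N + 1) * np.eye(2 * N + 1))
```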
