
Verifying the GA form for the symmetric and antisymmetric components of the differential rate of strain.

March 8, 2022


We found geometric algebra representations for the symmetric and antisymmetric components of a gradient-vector direct product. In particular, given
\begin{equation}\label{eqn:tensorComponents:20}
d\Bv = d\Bx \cdot \lr{ \spacegrad \otimes \Bv }
\end{equation}
we found
\begin{equation}\label{eqn:tensorComponents:40}
\begin{aligned}
d\Bx \cdot \Bd
&=
\inv{2} d\Bx \cdot \lr{
\spacegrad \otimes \Bv
+
\lr{\spacegrad \otimes \Bv }^\dagger
} \\
&=
\inv{2} \lr{
d\Bx \lr{ \spacegrad \cdot \Bv }
+
\gpgradeone{ \spacegrad d\Bx \Bv }
},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:tensorComponents:60}
\begin{aligned}
d\Bx \cdot \BOmega
&=
\inv{2} d\Bx \cdot \lr{
\spacegrad \otimes \Bv
-
\lr{\spacegrad \otimes \Bv }^\dagger
} \\
&=
\inv{2} \lr{
d\Bx \lr{ \spacegrad \cdot \Bv }
-
\gpgradeone{ d\Bx \Bv \spacegrad }
}.
\end{aligned}
\end{equation}

Let’s expand each of these in coordinates to verify that these are correct. For the symmetric component, that is
\begin{equation}\label{eqn:tensorComponents:80}
\begin{aligned}
d\Bx \cdot \Bd
&=
\inv{2}
\lr{
dx_i \partial_j v_j \Be_i
+
\partial_j dx_i v_k \gpgradeone{ \Be_j \Be_i \Be_k }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
+
\partial_j v_k \lr{ \delta_{ji} \Be_k + \lr{ \Be_j \wedge \Be_i } \cdot \Be_k }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
+
\partial_j v_k \lr{ \delta_{ji} \Be_k + \delta_{ik} \Be_j - \delta_{jk} \Be_i }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
+
\partial_i v_k \Be_k
+
\partial_j v_i \Be_j
-
\partial_j v_j \Be_i
} \\
&=
\inv{2} dx_i
\lr{
\partial_i v_k \Be_k
+
\partial_j v_i \Be_j
} \\
&=
dx_i \inv{2} \lr{ \partial_i v_j + \partial_j v_i } \Be_j.
\end{aligned}
\end{equation}
Sure enough, we see that this product contains the matrix elements of the symmetric component of \( \spacegrad \otimes \Bv \).
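Every step above reduces to products of orthonormal basis vectors, so the key grade selection identity \( \gpgradeone{ \Be_j \Be_i \Be_k } = \delta_{ji} \Be_k + \delta_{ik} \Be_j - \delta_{jk} \Be_i \) can be checked mechanically. Here is a minimal Python sketch (the helper names `gmul`, `gprod`, `grade1` are my own, not from any GA library) that multiplies basis blades of \(\mathbb{R}^3\) and verifies the identity for all index triples:

```python
from itertools import product

def gmul(a, b):
    """Geometric product of two basis blades (tuples of 1-based indices)
    for an orthonormal Euclidean basis. Returns (sign, blade)."""
    arr = list(a) + list(b)
    sign = 1
    # bubble sort, flipping sign per adjacent transposition (e_i e_j = -e_j e_i)
    for _ in range(len(arr)):
        for m in range(len(arr) - 1):
            if arr[m] > arr[m + 1]:
                arr[m], arr[m + 1] = arr[m + 1], arr[m]
                sign = -sign
    # contract equal adjacent pairs (e_i e_i = 1)
    out = []
    for idx in arr:
        if out and out[-1] == idx:
            out.pop()
        else:
            out.append(idx)
    return sign, tuple(out)

def gprod(x, y):
    """Geometric product of multivectors stored as {blade: coeff} dicts."""
    result = {}
    for (ba, ca), (bb, cb) in product(x.items(), y.items()):
        sign, blade = gmul(ba, bb)
        result[blade] = result.get(blade, 0) + sign * ca * cb
    return {b: c for b, c in result.items() if c != 0}

def grade1(x):
    """Select the vector (grade 1) part."""
    return {b: c for b, c in x.items() if len(b) == 1}

def e(i):
    return {(i,): 1}

def delta(i, j):
    return 1 if i == j else 0

# verify <e_j e_i e_k>_1 = delta_ji e_k + delta_ik e_j - delta_jk e_i
for i, j, k in product((1, 2, 3), repeat=3):
    lhs = grade1(gprod(gprod(e(j), e(i)), e(k)))
    rhs = {}
    for coeff, idx in ((delta(j, i), k), (delta(i, k), j), (-delta(j, k), i)):
        rhs[(idx,)] = rhs.get((idx,), 0) + coeff
    rhs = {b: c for b, c in rhs.items() if c != 0}
    assert lhs == rhs, (i, j, k, lhs, rhs)
print("grade-1 identity verified for all i, j, k")
```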

Now let’s verify that our GA antisymmetric tensor product representation works out.
\begin{equation}\label{eqn:tensorComponents:100}
\begin{aligned}
d\Bx \cdot \BOmega
&=
\inv{2}
\lr{
dx_i \partial_j v_j \Be_i
-
dx_i \partial_k v_j \gpgradeone{ \Be_i \Be_j \Be_k }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
-
\partial_k v_j
\lr{ \delta_{ij} \Be_k + \delta_{jk} \Be_i - \delta_{ik} \Be_j }
} \\
&=
\inv{2} dx_i
\lr{
\partial_j v_j \Be_i
-
\partial_k v_i \Be_k
-
\partial_k v_k \Be_i
+
\partial_i v_j \Be_j
} \\
&=
\inv{2} dx_i
\lr{
\partial_i v_j \Be_j
-
\partial_k v_i \Be_k
} \\
&=
dx_i
\inv{2}
\lr{
\partial_i v_j
-
\partial_j v_i
}
\Be_j.
\end{aligned}
\end{equation}
As expected, we see that this product contains the matrix elements of the antisymmetric component of \( \spacegrad \otimes \Bv \).

We also found previously that \( \BOmega \) is just a curl, namely
\begin{equation}\label{eqn:tensorComponents:120}
\BOmega = \inv{2} \lr{ \spacegrad \wedge \Bv } = \inv{2} \lr{ \partial_i v_j } \Be_i \wedge \Be_j,
\end{equation}
which directly encodes the antisymmetric component of \( \spacegrad \otimes \Bv \). We can also see that by fully expanding \( d\Bx \cdot \BOmega \), which gives
\begin{equation}\label{eqn:tensorComponents:140}
\begin{aligned}
d\Bx \cdot \BOmega
&=
dx_i \inv{2} \lr{ \partial_j v_k }
\Be_i \cdot \lr{ \Be_j \wedge \Be_k } \\
&=
dx_i \inv{2} \lr{ \partial_j v_k }
\lr{
\delta_{ij} \Be_k
-
\delta_{ik} \Be_j
} \\
&=
dx_i \inv{2}
\lr{
\lr{ \partial_i v_k } \Be_k
-
\lr{ \partial_j v_i }
\Be_j
} \\
&=
dx_i \inv{2}
\lr{
\partial_i v_j - \partial_j v_i
}
\Be_j,
\end{aligned}
\end{equation}
as expected.
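This coordinate expansion can also be spot checked numerically. Treating a matrix \( J_{ij} = \partial_i v_j \) as a stand-in Jacobian, the contraction \( d\Bx \cdot \lr{ \spacegrad \wedge \Bv } \) has \( j \)-th component \( dx_i \lr{ \partial_i v_j - \partial_j v_i } \), which should match the conventional dual form \( -d\Bx \cross \lr{ \spacegrad \cross \Bv } \). A quick numpy sketch (assuming numpy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 3))   # stand-in for the matrix [partial_i v_j]
dx = rng.standard_normal(3)

# j-th component of dx . (grad ^ v) is dx_i (partial_i v_j - partial_j v_i)
lhs = (J.T - J) @ dx

# dual form: (grad x v)_k = eps_kij partial_i v_j, read off from J
curl = np.array([J[1, 2] - J[2, 1], J[2, 0] - J[0, 2], J[0, 1] - J[1, 0]])
rhs = -np.cross(dx, curl)

assert np.allclose(lhs, rhs)
print("dx . (grad ^ v) = -dx x (grad x v)")
```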

Vector gradients in dyadic notation and geometric algebra: Part II.

March 6, 2022


Symmetrization and antisymmetrization of the vector differential in GA.

There was an error in yesterday’s post. This decomposition was correct:
\begin{equation}\label{eqn:dyadicVsGa:460}
d\Bv
= (d\Bx \cdot \spacegrad) \Bv
= d\Bx (\spacegrad \cdot \Bv)
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{equation}
However, identifying these terms with the symmetric and antisymmetric splits of \( \spacegrad \otimes \Bv \) was wrong.
Brian pointed out that a purely incompressible flow is one for which \(\spacegrad \cdot \Bv = 0\), yet, in general, an incompressible flow can have a non-zero deformation tensor.

Also, given the nature of the matrix expansion of the antisymmetric tensor, we should have a curl term in the mix, and we do not. The conclusion must be that \ref{eqn:dyadicVsGa:460} is a split into divergence and non-divergence terms, when we really wanted a split into curl and non-curl terms.

Symmetrization and antisymmetrization of the vector differential in GA: Take II.

Identification of \( \ifrac{1}{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger } \) with the divergence was incorrect.

Let’s explicitly expand out our symmetric tensor component fully to see what it really yields, without guessing.
\begin{equation}\label{eqn:dyadicVsGa:480}
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
d\Bx \cdot
\inv{2}
\lr{
\begin{bmatrix}
\partial_i v_j
\end{bmatrix}
+
\begin{bmatrix}
\partial_j v_i
\end{bmatrix}
} \\
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix}.
\end{aligned}
\end{equation}
The symmetric matrix that represents this direct product tensor is
\begin{equation}\label{eqn:dyadicVsGa:500}
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
=
\inv{2}
\begin{bmatrix}
2 \partial_1 v_1 & \partial_1 v_2 + \partial_2 v_1 & \partial_1 v_3 + \partial_3 v_1 \\
\partial_2 v_1 + \partial_1 v_2 & 2 \partial_2 v_2 & \partial_2 v_3 + \partial_3 v_2 \\
\partial_3 v_1 + \partial_1 v_3 & \partial_3 v_2 + \partial_2 v_3 & 2 \partial_3 v_3 \\
\end{bmatrix}
.
\end{equation}
This certainly isn’t isomorphic to the divergence. Instead, the trace of this matrix is the portion that is isomorphic to the divergence. The rest is something else. Let’s put the tensors into vector form to understand what they really represent.
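A small numeric sketch makes the trace observation concrete: for a random stand-in Jacobian \( J_{ij} = \partial_i v_j \), the trace of the symmetric part is the divergence, and subtracting \( \lr{ \spacegrad \cdot \Bv }/3 \) times the identity leaves a traceless remainder. Assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((3, 3))   # stand-in for the matrix [partial_i v_j]

sym = (J + J.T) / 2               # matrix of d
antisym = (J - J.T) / 2           # matrix of Omega

div = np.trace(J)                 # grad . v
assert np.isclose(np.trace(sym), div)       # the divergence lives in the trace of d
assert np.isclose(np.trace(antisym), 0.0)   # Omega is traceless

shear = sym - (div / 3) * np.eye(3)         # remove the divergence portion
assert np.isclose(np.trace(shear), 0.0)     # what remains is traceless
print("trace of d is the divergence; the remainder is traceless")
```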

For the symmetric part we have
\begin{equation}\label{eqn:dyadicVsGa:520}
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix} \\
&=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
},
\end{aligned}
\end{equation}
and, similarly, for the antisymmetric tensor component, we have
\begin{equation}\label{eqn:dyadicVsGa:540}
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv - \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j -
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix} \\
&=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv - \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv}.
\end{aligned}
\end{equation}
We find an isomorphism of the antisymmetric term with the curl, but the symmetric term has a divergence component, plus more.
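The identification \( d\Bx \cdot \Bd = \inv{2} \lr{ \lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv } } \) can be sanity checked with finite differences for a concrete field. The field `v` and the point `x0` below are arbitrary choices of mine, not anything from the post:

```python
import numpy as np

def v(x):
    # an arbitrary concrete field chosen for the check
    return np.array([x[0]**2 * x[1], np.sin(x[1]) * x[2], x[0] * np.exp(x[2])])

def partial(f, x, i, h=1e-6):
    """Central difference approximation of partial_i f."""
    e = np.zeros(3); e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

x0 = np.array([0.3, -0.7, 0.4])
dx = np.array([0.2, 0.1, -0.3])

J = np.array([partial(v, x0, i) for i in range(3)])  # J[i, j] = partial_i v_j

# 1/2 [ (dx . grad) v + grad (dx . v) ], with grad(dx . v) computed directly
grad_dot = np.array([partial(lambda y: dx @ v(y), x0, j) for j in range(3)])
lhs = 0.5 * (J.T @ dx + grad_dot)

# dx_i (1/2)(partial_i v_j + partial_j v_i): contraction with the symmetric matrix
rhs = 0.5 * (J + J.T) @ dx

assert np.allclose(lhs, rhs, atol=1e-5)
print("symmetric contraction matches the rate of strain matrix elements")
```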

If we wish, we can split the symmetric component into its divergence and non-divergence terms, finding
\begin{equation}\label{eqn:dyadicVsGa:560}
\begin{aligned}
d\Bx \cdot \Bd
&=
\inv{2}
\lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \spacegrad \cdot \lr{ d\Bx \wedge \Bv } + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad \lr{ d\Bx \wedge \Bv } + \spacegrad \lr{ d\Bx \cdot \Bv } }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad d\Bx\, \Bv }
},
\end{aligned}
\end{equation}
so for incompressible flow, the GA representation is a single grade one selection
\begin{equation}\label{eqn:dyadicVsGa:600}
d\Bx \cdot \Bd = \inv{2} \gpgradeone{ \spacegrad d\Bx\, \Bv }.
\end{equation}
It is a little unfortunate that we cannot factor out the \( d\Bx \) term. We can do that for the
GA representation of the antisymmetric tensor contribution, which is just
\begin{equation}\label{eqn:dyadicVsGa:580}
\BOmega
=
\inv{2} \spacegrad \wedge \Bv.
\end{equation}

Let’s see what the antisymmetric tensor equivalent looks like in the incompressible case, by subtracting a divergence term
\begin{equation}\label{eqn:dyadicVsGa:680}
\begin{aligned}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv } - d\Bx \lr{ \spacegrad \cdot \Bv }
&=
\gpgradeone{ d\Bx \lr{ \spacegrad \wedge \Bv } - d\Bx \lr{ \spacegrad \cdot \Bv } } \\
&=
\gpgradeone{ -d\Bx \lr{ \Bv \wedge \spacegrad } - d\Bx \lr{ \Bv \cdot \spacegrad } } \\
&=
-\gpgradeone{ d\Bx \Bv \spacegrad },
\end{aligned}
\end{equation}
so we have
\begin{equation}\label{eqn:dyadicVsGa:700}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv } = d\Bx \lr{ \spacegrad \cdot \Bv } - \gpgradeone{ d\Bx\, \Bv \spacegrad }.
\end{equation}
Both the symmetric and antisymmetric tensors have compressible components.
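In matrix terms, with \( J_{ij} = \partial_i v_j \), the same blade identity used in the coordinate expansions reduces \( \gpgradeone{ d\Bx\, \Bv \spacegrad } \) to \( J\,d\Bx + \textrm{tr}(J)\,d\Bx - J^\dagger d\Bx \) (my own reduction, so treat it as an assumption to be checked), and the relation above becomes a pure matrix identity:

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.standard_normal((3, 3))   # stand-in for the matrix [partial_i v_j]
dx = rng.standard_normal(3)

# matrix forms used below:
#   dx . (grad ^ v)  -> (J^T - J) dx
#   dx (grad . v)    -> tr(J) dx
#   <dx v grad>_1    -> J dx + tr(J) dx - J^T dx   (assumed reduction)
lhs = (J.T - J) @ dx
rhs = np.trace(J) * dx - (J @ dx + np.trace(J) * dx - J.T @ dx)

assert np.allclose(lhs, rhs)
print("dx . (grad ^ v) = dx (grad . v) - <dx v grad>_1")
```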

Summary.

We found that it was possible to split the vector differential into divergence and incompressible components, as follows
\begin{equation}\label{eqn:dyadicVsGa:620}
\begin{aligned}
d\Bv
&= \lr{ d\Bx \cdot \spacegrad } \Bv \\
&= d\Bx (\spacegrad \cdot \Bv)
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{aligned}
\end{equation}

With
\begin{equation}\label{eqn:dyadicVsGa:720}
\begin{aligned}
d\Bv
&= d\Bx \cdot
\lr{
\inv{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
+
\inv{2} \lr{ \spacegrad \otimes \Bv – \lr{ \spacegrad \otimes \Bv }^\dagger }
} \\
&= d\Bx \cdot \lr{ \Bd + \BOmega },
\end{aligned}
\end{equation}
we found the following correspondences between the symmetric and antisymmetric tensor product components
\begin{equation}\label{eqn:dyadicVsGa:640}
\begin{aligned}
d\Bx \cdot \Bd &=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad d\Bx\, \Bv }
},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:dyadicVsGa:660}
\begin{aligned}
d\Bx \cdot \BOmega
&=
\inv{2} d\Bx \cdot \lr{ \spacegrad \wedge \Bv } \\
&=
\inv{2} \lr{
d\Bx \lr{ \spacegrad \cdot \Bv } - \gpgradeone{ d\Bx\, \Bv \spacegrad }
}.
\end{aligned}
\end{equation}

In the incompressible case where \( \spacegrad \cdot \Bv = 0 \), we have
\begin{equation}\label{eqn:dyadicVsGa:740}
\begin{aligned}
d\Bx \cdot \Bd &= \inv{2} \gpgradeone{ \spacegrad d\Bx\, \Bv } \\
d\Bx \cdot \BOmega &= -\inv{2} \gpgradeone{ d\Bx\, \Bv \spacegrad },
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:dyadicVsGa:760}
\begin{aligned}
d\Bv
&= d\Bx \cdot \lr{ \Bd + \BOmega } \\
&= \inv{2} \gpgradeone{ \spacegrad d\Bx\, \Bv – d\Bx\, \Bv \spacegrad } \\
&= \spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{aligned}
\end{equation}

Vector gradients in dyadic notation and geometric algebra.

March 5, 2022


This is an exploration of the dyadic representation of the gradient acting on a vector in \(\mathbb{R}^3\), where we determine a tensor product formulation of a vector differential. Such a tensor product formulation can be split into symmetric and antisymmetric components. The geometric algebra (GA) equivalents of such a split are determined.

There is an error in part of the analysis below, which is addressed in a followup post made the next day.

GA gradient of a vector.

In GA we are free to express the product of the gradient and a vector field by adjacency. In coordinates (summation over repeated indices assumed), such a product has the form
\begin{equation}\label{eqn:dyadicVsGa:20}
\spacegrad \Bv
= \lr{ \Be_i \partial_i } \lr{ v_j \Be_j }
= \lr{ \partial_i v_j } \Be_i \Be_j.
\end{equation}
In this sum, any terms with \( i = j \) are scalars since \( \Be_i^2 = 1 \), and the remaining terms are bivectors. This can be written compactly as
\begin{equation}\label{eqn:dyadicVsGa:40}
\spacegrad \Bv = \spacegrad \cdot \Bv + \spacegrad \wedge \Bv,
\end{equation}
or for \(\mathbb{R}^3\)
\begin{equation}\label{eqn:dyadicVsGa:60}
\spacegrad \Bv = \spacegrad \cdot \Bv + I \lr{ \spacegrad \cross \Bv},
\end{equation}
either of which breaks the vector gradient into divergence and curl components. In \ref{eqn:dyadicVsGa:40} this vector gradient is expressed using the bivector valued curl operator \( (\spacegrad \wedge \Bv) \), whereas \ref{eqn:dyadicVsGa:60} is expressed using the vector valued dual form of the curl \( (\spacegrad \cross \Bv) \) from conventional vector algebra.

It is worth noting that order matters in the GA coordinate expansion of \ref{eqn:dyadicVsGa:20}. It is not correct to write
\begin{equation}\label{eqn:dyadicVsGa:80}
\spacegrad \Bv
= \lr{ \partial_i v_j } \Be_j \Be_i,
\end{equation}
which is only true when the curl \( \spacegrad \wedge \Bv \) is zero.

Dyadic representation.

Given a vector field \( \Bv = \Bv(\Bx) \), the differential of that field can be computed by chain rule
\begin{equation}\label{eqn:dyadicVsGa:100}
d\Bv = \PD{x_i}{\Bv} dx_i = \lr{ d\Bx \cdot \spacegrad} \Bv,
\end{equation}
where \( d\Bx = \Be_i dx_i \). This is a representation invariant form of the differential, where we have a scalar operator \( d\Bx \cdot \spacegrad \) acting on the vector field \( \Bv \). The matrix representation of this differential can be written as
\begin{equation}\label{eqn:dyadicVsGa:120}
d\Bv = \lr{
{\begin{bmatrix}
d\Bx
\end{bmatrix}}^\dagger
\begin{bmatrix}
\spacegrad
\end{bmatrix}
}
\begin{bmatrix}
\Bv
\end{bmatrix}
,
\end{equation}
where we are using the dagger to designate transposition, and each of the terms on the right are the coordinate matrices of the vectors with respect to the standard basis
\begin{equation}\label{eqn:dyadicVsGa:140}
\begin{bmatrix}
d\Bx
\end{bmatrix}
=
\begin{bmatrix}
dx_1 \\
dx_2 \\
dx_3
\end{bmatrix},\quad
\begin{bmatrix}
\Bv
\end{bmatrix}
=
\begin{bmatrix}
v_1 \\
v_2 \\
v_3
\end{bmatrix},\quad
\begin{bmatrix}
\spacegrad
\end{bmatrix}
=
\begin{bmatrix}
\partial_1 \\
\partial_2 \\
\partial_3
\end{bmatrix}.
\end{equation}

In \ref{eqn:dyadicVsGa:120} the parens are very important, as the expression is meaningless without them. With the parens we have a \((1 \times 3)(3 \times 1)\) matrix (i.e. a scalar) multiplied with a \(3 \times 1\) matrix. If we drop the parens, we are left with an ill-formed product of two \(3 \times 1\) matrices. The dyadic notation, which introduces a tensor product into the mix, is a mechanism for making sense of such a product. Can we make sense of an expression like \( \spacegrad \Bv \) without the geometric product in our toolbox?

Stepping towards that question, let’s examine the coordinate expansion of our vector differential \ref{eqn:dyadicVsGa:100}, which is
\begin{equation}\label{eqn:dyadicVsGa:160}
d\Bv = dx_i \lr{ \partial_i v_j } \Be_j.
\end{equation}
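This coordinate form is easy to sanity check numerically: for a concrete field (my arbitrary choice below) and a small \( d\Bx \), the first order prediction \( dx_i \lr{ \partial_i v_j } \Be_j \) should match \( \Bv(\Bx + d\Bx) - \Bv(\Bx) \) up to second order terms. Assuming numpy:

```python
import numpy as np

def v(x):
    # an arbitrary concrete field chosen for the check
    return np.array([np.sin(x[0]) * x[1], x[1] * x[2], np.exp(x[0]) - x[2]**2])

def partial(f, x, i, h=1e-6):
    """Central difference approximation of partial_i f."""
    e = np.zeros(3); e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

x0 = np.array([0.3, 0.5, -0.4])
dx = 1e-4 * np.array([1.0, -2.0, 0.5])

J = np.array([partial(v, x0, i) for i in range(3)])  # J[i, j] = partial_i v_j

dv_linear = J.T @ dx              # dx_i (partial_i v_j) e_j
dv_actual = v(x0 + dx) - v(x0)

assert np.allclose(dv_actual, dv_linear, atol=1e-6)
print("dv matches dx_i (partial_i v_j) e_j to first order")
```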
If we allow a matrix of vectors, the coordinate expansion \ref{eqn:dyadicVsGa:160} has a block matrix form
\begin{equation}\label{eqn:dyadicVsGa:180}
d\Bv =
{\begin{bmatrix}
d\Bx
\end{bmatrix}}^\dagger
\begin{bmatrix}
\spacegrad \otimes \Bv
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix}
.
\end{equation}
Here we introduce the tensor product
\begin{equation}\label{eqn:dyadicVsGa:200}
\spacegrad \otimes \Bv
= \partial_i v_j \, \Be_i \otimes \Be_j,
\end{equation}
and designate the matrix of coordinates \( \partial_i v_j \), a second order tensor, by \(
\begin{bmatrix}
\spacegrad \otimes \Bv
\end{bmatrix}
\).

We have succeeded in factoring out a vector gradient. We can introduce a dot product between a vector and a direct product of vectors by observing that \ref{eqn:dyadicVsGa:180} has the structure of a quadratic form, and define
\begin{equation}\label{eqn:dyadicVsGa:220}
\Bx \cdot (\Ba \otimes \Bb) \equiv
{\begin{bmatrix}
\Bx
\end{bmatrix}}^\dagger
\begin{bmatrix}
\Ba \otimes \Bb
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix},
\end{equation}
so that \ref{eqn:dyadicVsGa:180} takes the form
\begin{equation}\label{eqn:dyadicVsGa:240}
d\Bv = d\Bx \cdot \lr{ \spacegrad \otimes \Bv }.
\end{equation}
Such a dot product gives operational meaning to the gradient-vector tensor product.
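The quadratic-form definition is easy to exercise numerically: with the coordinate matrix \( \lr{\Ba \otimes \Bb}_{ij} = a_i b_j \), the contraction \( \Bx \cdot \lr{ \Ba \otimes \Bb } \) should reduce to \( \lr{ \Bx \cdot \Ba } \Bb \). A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, x = rng.standard_normal((3, 3))

ab = np.outer(a, b)   # coordinate matrix of a (x) b, entries a_i b_j

lhs = x @ ab          # coordinates of x . (a (x) b)
rhs = (x @ a) * b     # coordinates of (x . a) b

assert np.allclose(lhs, rhs)
print("x . (a (x) b) = (x . a) b")
```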

Symmetrization and antisymmetrization of the vector differential in GA.

Using the dyadic notation, it’s possible to split a vector derivative into symmetric and antisymmetric components with respect to the gradient-vector direct product
\begin{equation}\label{eqn:dyadicVsGa:260}
d\Bv
= d\Bx \cdot
\lr{
\inv{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
+
\inv{2} \lr{ \spacegrad \otimes \Bv - \lr{ \spacegrad \otimes \Bv }^\dagger }
},
\end{equation}
or \( d\Bv = d\Bx \cdot \lr{ \Bd + \BOmega } \), where \( \Bd \) is a symmetric tensor, and \( \BOmega \) is a traceless antisymmetric tensor.
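In matrix terms this split is just the usual symmetric/antisymmetric decomposition of the Jacobian stand-in \( J_{ij} = \partial_i v_j \). A short numpy check that the pieces recompose the tensor and that the contraction splits accordingly:

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.standard_normal((3, 3))   # stand-in for the matrix [partial_i v_j]
dx = rng.standard_normal(3)

d = (J + J.T) / 2          # symmetric part
Omega = (J - J.T) / 2      # antisymmetric part

assert np.allclose(d + Omega, J)          # the split recomposes the tensor
assert np.allclose(Omega, -Omega.T)       # antisymmetry
assert np.isclose(np.trace(Omega), 0.0)   # traceless

# the contraction dv_j = dx_i J_ij splits the same way
assert np.allclose(J.T @ dx, d.T @ dx + Omega.T @ dx)
print("d + Omega recomposes grad (x) v; the contraction splits accordingly")
```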

A question of potential interest is “what is the GA equivalent of this expression?” There are two identities that are helpful for extracting this equivalence, the first of which is the set of k-blade vector product identities. Given a k-blade \( B_k \) (i.e.: a product of \( k \) orthogonal vectors, or the wedge of \( k \) vectors), and a vector \( \Ba \), the dot product of the two is
\begin{equation}\label{eqn:dyadicVsGa:280}
B_k \cdot \Ba = \inv{2} \lr{ B_k \Ba + (-1)^{k+1} \Ba B_k }.
\end{equation}
Specifically, given two vectors \( \Ba, \Bb \), the vector dot product can be written as a symmetric sum
\begin{equation}\label{eqn:dyadicVsGa:300}
\Ba \cdot \Bb = \inv{2} \lr{ \Ba \Bb + \Bb \Ba } = \Bb \cdot \Ba,
\end{equation}
and given a bivector \( B \) and a vector \( \Ba \), the bivector-vector dot product can be written as an antisymmetric sum
\begin{equation}\label{eqn:dyadicVsGa:320}
B \cdot \Ba = \inv{2} \lr{ B \Ba - \Ba B } = - \Ba \cdot B.
\end{equation}

We may apply these to expressions where one of the vector terms is the gradient, but must allow for the gradient to act bidirectionally. That is, given multivectors \( M, N \)
\begin{equation}\label{eqn:dyadicVsGa:340}
M \spacegrad N
=
\partial_i (M \Be_i N)
=
(\partial_i M) \Be_i N + M \Be_i (\partial_i N),
\end{equation}
where parens have been used to indicate the scope of applicability of the partials. In particular, this means that we may write the divergence as a GA symmetric sum
\begin{equation}\label{eqn:dyadicVsGa:360}
\spacegrad \cdot \Bv = \inv{2} \lr{
\spacegrad \Bv + \Bv \spacegrad },
\end{equation}
which clearly corresponds to the symmetric term \( \Bd = (1/2) \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger } \) from \ref{eqn:dyadicVsGa:260}.

Let’s assume that we can write our vector differential in terms of a divergence term isomorphic to the symmetric sum in \ref{eqn:dyadicVsGa:260}, and a “something else”, \(\BX\). That is
\begin{equation}\label{eqn:dyadicVsGa:380}
\begin{aligned}
d\Bv
&= \lr{ d\Bx \cdot \spacegrad } \Bv \\
&= d\Bx (\spacegrad \cdot \Bv) + \BX,
\end{aligned}
\end{equation}
where
\begin{equation}\label{eqn:dyadicVsGa:400}
\BX = \lr{ d\Bx \cdot \spacegrad } \Bv - d\Bx (\spacegrad \cdot \Bv),
\end{equation}
is a vector expression to be reduced to something simpler. That reduction is possible using the distribution identity
\begin{equation}\label{eqn:dyadicVsGa:420}
\Bc \cdot (\Ba \wedge \Bb)
=
(\Bc \cdot \Ba) \Bb
- (\Bc \cdot \Bb) \Ba,
\end{equation}
so we find
\begin{equation}\label{eqn:dyadicVsGa:440}
\BX = \spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{equation}
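The distribution identity \ref{eqn:dyadicVsGa:420} is easy to verify numerically, since in \(\mathbb{R}^3\) it is equivalent to the BAC-CAB rule: \( \Bc \cdot \lr{ \Ba \wedge \Bb } = -\Bc \cross \lr{ \Ba \cross \Bb } \). Assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, c = rng.standard_normal((3, 3))

lhs = (c @ a) * b - (c @ b) * a          # (c . a) b - (c . b) a
rhs = -np.cross(c, np.cross(a, b))       # -c x (a x b), the dual form

assert np.allclose(lhs, rhs)
print("c . (a ^ b) = (c . a) b - (c . b) a")
```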

We find the following GA split of the vector differential into symmetric and antisymmetric terms
\begin{equation}\label{eqn:dyadicVsGa:460}
\boxed{
d\Bv
= (d\Bx \cdot \spacegrad) \Bv
= d\Bx (\spacegrad \cdot \Bv)
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
}
\end{equation}
Such a split avoids the indeterminate nature of the tensor product, to which we only give meaning by introducing the quadratic form based dot product of \ref{eqn:dyadicVsGa:220}.
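The boxed result can be checked with finite differences. Writing the bivector \( d\Bx \wedge \Bv \) with coordinates \( B_{ij} = dx_i v_j - dx_j v_i \), its divergence has components \( \lr{ \spacegrad \cdot B }_j = \partial_i B_{ij} \); for a concrete field (my arbitrary choice below), the split should hold componentwise:

```python
import numpy as np

def v(x):
    # an arbitrary concrete field chosen for the check
    return np.array([x[0] * x[1]**2, np.cos(x[0]) + x[2], x[1] * np.exp(x[0])])

dx = np.array([0.2, -0.1, 0.3])   # constant dx coordinates

def B(x):
    """Bivector coordinates of dx ^ v: B[i, j] = dx_i v_j - dx_j v_i."""
    return np.outer(dx, v(x)) - np.outer(v(x), dx)

def partial(f, x, i, h=1e-6):
    """Central difference approximation of partial_i f."""
    e = np.zeros(3); e[i] = h
    return (f(x + e) - f(x - e)) / (2 * h)

x0 = np.array([0.4, 0.8, -0.2])
J = np.array([partial(v, x0, i) for i in range(3)])  # J[i, j] = partial_i v_j

dv = J.T @ dx                                        # (dx . grad) v
div_v = np.trace(J)                                  # grad . v
divB = sum(partial(B, x0, i)[i] for i in range(3))   # (grad . (dx ^ v))_j = partial_i B_ij

assert np.allclose(dv, dx * div_v + divB, atol=1e-5)
print("dv = dx (grad . v) + grad . (dx ^ v) checked numerically")
```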

Hardcover physics class notes.

March 13, 2021

Amazon’s kindle direct publishing invited me to their hardcover trial program, and I’ve now made hardcover versions available of most of my interesting physics notes compilations:

Instead of making hardcover versions of my classical mechanics, antenna theory, and electromagnetic theory notes, I have unpublished the paperback versions. These are low quality notes, and I don’t want more people to waste money on them (some have). The free PDFs of all those notes are still available.

My geometric algebra book is also available in both paperback and hardcover (black and white). I’ve unpublished the color version, as it has a much higher print cost, and I thought it was too confusing to have all the permutations of black-and-white/color and paperback/hardcover.

A better 3D generalization of the Mandelbrot set.

February 9, 2021

I’ve been exploring 3D generalizations of the Mandelbrot set:

The iterative equation for the Mandelbrot set can be written in vector form ([1]) as:
\begin{equation}
\begin{aligned}
\Bz
&\rightarrow
\Bz \Be_1 \Bz + \Bc \\
&=
\Bz \lr{ \Be_1 \cdot \Bz }
+
\Bz \cdot \lr{ \Be_1 \wedge \Bz }
+ \Bc \\
&=
2 \Bz \lr{ \Be_1 \cdot \Bz }
-
\Bz^2\, \Be_1
+ \Bc
\end{aligned}
\end{equation}
Plotting this in 3D was an interesting challenge, but showed that the Mandelbrot set expressed above has rotational symmetry about the x-axis, which is kind of boring.
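That symmetry is easy to probe numerically. Here is a minimal escape-time sketch of the iteration \( \Bz \rightarrow 2 \Bz \lr{ \Be_1 \cdot \Bz } - \Bz^2 \Be_1 + \Bc \) (the helper names, iteration limit, and bailout radius are my choices; a real plot would sweep \( \Bc \) over a grid), together with a check that the escape time is invariant under rotations of \( \Bc \) about the x-axis:

```python
import numpy as np

E1 = np.array([1.0, 0.0, 0.0])

def escape_time(c, max_iter=64, bailout=4.0):
    """Iterate z -> 2 z (e1 . z) - z^2 e1 + c from z = 0; return the
    iteration count at escape, or max_iter if no escape occurs."""
    z = np.zeros(3)
    for n in range(max_iter):
        if z @ z > bailout:
            return n
        z = 2.0 * z * z[0] - (z @ z) * E1 + c
    return max_iter

def rot_x(theta):
    """Rotation matrix about the x-axis."""
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])

# rotational symmetry about the x-axis: the escape time should be
# invariant under rotating the test point c about e1
c = np.array([-0.4, 0.3, 0.2])
for theta in (0.5, 1.3, 2.9):
    assert escape_time(rot_x(theta) @ c) == escape_time(c)
print("escape time is invariant under rotations about the x-axis")
```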

If all we require for a 3D fractal is to iterate a vector equation that is (presumably) at least quadratic, then we have lots of options. Here’s the first one that comes to mind:
\begin{equation}
\begin{aligned}
\Bz
&\rightarrow
\gpgradeone{ \Ba \Bz \Bb \Bz \Bc } + \Bd \\
&=
\lr{ \Ba \cdot \Bz } \lr{ \Bz \cross \lr{ \Bc \cross \Bz } }
+
\lr{ \Ba \cross \Bz } \lr{ \Bz \cdot \lr{ \Bc \cross \Bz } }
+ \Bd
.
\end{aligned}
\end{equation}
where we iterate starting, as usual, with \( \Bz = 0 \), and where \( \Bd \) is the point of interest to test for inclusion in the set. I tried this with
\begin{equation}\label{eqn:mandel3:n}
\begin{aligned}
\Ba &= (1,1,1) \\
\Bb &= (1,0,0) \\
\Bc &= (1,-1,0).
\end{aligned}
\end{equation}
Here are some slice plots at various values of z

and an animation of the slices with respect to the z-axis:

Here are a couple snapshots from a 3D Paraview rendering of a netCDF dataset of all the escape time values

Data collection and image creation used commit b042acf6ab7a5ba09865490b3f1fedaf0bd6e773 from my Mandelbrot generalization experimentation repository.

References

[1] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.