Month: March 2022

Vector gradients in dyadic notation and geometric algebra: Part II.

March 6, 2022 math and physics play

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

Symmetrization and antisymmetrization of the vector differential in GA.

There was an error in yesterday’s post. This decomposition was correct:
\begin{equation}\label{eqn:dyadicVsGa:460}
d\Bv
= (d\Bx \cdot \spacegrad) \Bv
= d\Bx (\spacegrad \cdot \Bv)
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{equation}
However, identifying these terms with the symmetric and antisymmetric splits of \( \spacegrad \otimes \Bv \) was wrong.
Brian pointed out that a purely incompressible flow is one for which \(\spacegrad \cdot \Bv = 0\), yet, in general, an incompressible flow can have a non-zero deformation tensor.

Also, given the nature of the matrix expansion of the antisymmetric tensor, we should have had a curl term in the mix, but we do not. The conclusion must be that \ref{eqn:dyadicVsGa:460} is a split into divergence and non-divergence terms, but we really wanted a split into curl and non-curl terms.

Symmetrization and antisymmetrization of the vector differential in GA: Take II.

Identification of \( \ifrac{1}{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger } \) with the divergence was incorrect.

Let’s explicitly expand out our symmetric tensor component fully to see what it really yields, without guessing.
\begin{equation}\label{eqn:dyadicVsGa:480}
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
d\Bx \cdot
\inv{2}
\lr{
\begin{bmatrix}
\partial_i v_j
\end{bmatrix}
+
\begin{bmatrix}
\partial_j v_i
\end{bmatrix}
} \\
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix}.
\end{aligned}
\end{equation}
The symmetric matrix that represents this direct product tensor is
\begin{equation}\label{eqn:dyadicVsGa:500}
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
=
\inv{2}
\begin{bmatrix}
2 \partial_1 v_1 & \partial_1 v_2 + \partial_2 v_1 & \partial_1 v_3 + \partial_3 v_1 \\
\partial_2 v_1 + \partial_1 v_2 & 2 \partial_2 v_2 & \partial_2 v_3 + \partial_3 v_2 \\
\partial_3 v_1 + \partial_1 v_3 & \partial_3 v_2 + \partial_2 v_3 & 2 \partial_3 v_3 \\
\end{bmatrix}
.
\end{equation}
This certainly isn’t isomorphic to the divergence. Instead, the trace of this matrix is the portion that is isomorphic to the divergence. The rest is something else. Let’s put the tensors into vector form to understand what they really represent.

For the symmetric part we have
\begin{equation}\label{eqn:dyadicVsGa:520}
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix} \\
&=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
},
\end{aligned}
\end{equation}
and, similarly, for the antisymmetric tensor component, we have
\begin{equation}\label{eqn:dyadicVsGa:540}
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv - \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j -
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix} \\
&=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv - \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv}.
\end{aligned}
\end{equation}
We find an isomorphism of the antisymmetric term with the curl, but the symmetric term has a divergence component, plus more.
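Both of these reductions are easy to machine-check. The following is a minimal sympy sketch (assuming sympy is available; the helper names are my own), with the \( v_j \) left as generic functions of the coordinates, and the components of \( d\Bx \) held constant:

import sympy as sp

x = sp.symbols('x1:4')                        # coordinates x1, x2, x3
dx = sp.symbols('dx1:4')                      # constant components of dx
v = [sp.Function(f'v{j + 1}')(*x) for j in range(3)]

def pd(i, j):                                 # partial_i v_j
    return sp.diff(v[j], x[i])

dot_dx_v = sum(dx[i] * v[i] for i in range(3))    # dx . v

for j in range(3):
    # symmetric contraction: dx_i (1/2)(partial_i v_j + partial_j v_i)
    sym = sum(dx[i] * (pd(i, j) + pd(j, i)) for i in range(3)) / 2
    # j-th component of (1/2)((dx . grad) v + grad (dx . v))
    sym_target = (sum(dx[i] * pd(i, j) for i in range(3))
                  + sp.diff(dot_dx_v, x[j])) / 2
    assert sp.expand(sym - sym_target) == 0

    # antisymmetric contraction: dx_i (1/2)(partial_i v_j - partial_j v_i),
    # which is also the j-th component of (1/2) dx . (grad ^ v)
    asym = sum(dx[i] * (pd(i, j) - pd(j, i)) for i in range(3)) / 2
    asym_target = (sum(dx[i] * pd(i, j) for i in range(3))
                   - sp.diff(dot_dx_v, x[j])) / 2
    assert sp.expand(asym - asym_target) == 0

print('symmetric and antisymmetric contractions verified')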

If we want, we can split the symmetric component into its divergence and non-divergence terms, giving
\begin{equation}\label{eqn:dyadicVsGa:560}
\begin{aligned}
d\Bx \cdot \Bd
&=
\inv{2}
\lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \spacegrad \cdot \lr{ d\Bx \wedge \Bv } + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad \lr{ d\Bx \wedge \Bv } + \spacegrad \lr{ d\Bx \cdot \Bv } }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad d\Bx\, \Bv }
},
\end{aligned}
\end{equation}
so for incompressible flow, the GA representation is a single grade one selection
\begin{equation}\label{eqn:dyadicVsGa:600}
d\Bx \cdot \Bd = \inv{2} \gpgradeone{ \spacegrad d\Bx\, \Bv }.
\end{equation}
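The grade selection in that last step can be cross-checked in coordinates. Using the grade one identity \( \gpgradeone{\Ba \Bb \Bc} = (\Ba \cdot \Bb) \Bc - (\Ba \cdot \Bc) \Bb + (\Bb \cdot \Bc) \Ba \), and remembering that the gradient acts only on \( \Bv \), the \( j \)-th component of \( \gpgradeone{ \spacegrad d\Bx\, \Bv } \) is \( \sum_k \partial_k \lr{ dx_k v_j - v_k dx_j + \delta_{kj} \lr{ d\Bx \cdot \Bv } } \). Here is a sympy sketch of \ref{eqn:dyadicVsGa:560} along these lines (helper names mine):

import sympy as sp

x = sp.symbols('x1:4')
dx = sp.symbols('dx1:4')
v = [sp.Function(f'v{j + 1}')(*x) for j in range(3)]
dot_dx_v = sum(dx[i] * v[i] for i in range(3))        # dx . v
div_v = sum(sp.diff(v[i], x[i]) for i in range(3))    # grad . v

for j in range(3):
    # j-th component of <grad dx v>_1, via the <abc>_1 expansion
    grade1 = sum(sp.diff(dx[k] * v[j] - v[k] * dx[j]
                         + (dot_dx_v if k == j else 0), x[k])
                 for k in range(3))
    lhs = (dx[j] * div_v + grade1) / 2       # (1/2)(dx (grad . v) + <grad dx v>_1)
    rhs = (sum(dx[i] * sp.diff(v[j], x[i]) for i in range(3))
           + sp.diff(dot_dx_v, x[j])) / 2    # (1/2)((dx . grad) v + grad (dx . v))
    assert sp.expand(lhs - rhs) == 0

print('grade one form of dx . d verified')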
It is a little unfortunate that we cannot factor out the \( d\Bx \) term. We can do that for the
GA representation of the antisymmetric tensor contribution, which is just
\begin{equation}\label{eqn:dyadicVsGa:580}
\BOmega
=
\inv{2} \spacegrad \wedge \Bv.
\end{equation}

Let’s see what the antisymmetric tensor equivalent looks like in the incompressible case, by subtracting a divergence term
\begin{equation}\label{eqn:dyadicVsGa:680}
\begin{aligned}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv } - d\Bx \lr{ \spacegrad \cdot \Bv }
&=
\gpgradeone{ d\Bx \lr{ \spacegrad \wedge \Bv } - d\Bx \lr{ \spacegrad \cdot \Bv } } \\
&=
\gpgradeone{ -d\Bx \lr{ \Bv \wedge \spacegrad } - d\Bx \lr{ \Bv \cdot \spacegrad } } \\
&=
-\gpgradeone{ d\Bx \Bv \spacegrad },
\end{aligned}
\end{equation}
so we have
\begin{equation}\label{eqn:dyadicVsGa:700}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv } = d\Bx \lr{ \spacegrad \cdot \Bv } - \gpgradeone{ d\Bx\, \Bv \spacegrad }.
\end{equation}
Both the symmetric and antisymmetric tensors have compressible components.
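A concrete spot check of \ref{eqn:dyadicVsGa:700} is also possible, expanding \( \gpgradeone{ d\Bx\, \Bv \spacegrad } \), where the gradient acts leftward on \( \Bv \), with the same grade one identity \( \gpgradeone{\Ba \Bb \Bc} = (\Ba \cdot \Bb) \Bc - (\Ba \cdot \Bc) \Bb + (\Bb \cdot \Bc) \Ba \). Here is a sketch with an arbitrary polynomial test field (sympy assumed):

import sympy as sp

x = sp.symbols('x1:4')
dx = sp.symbols('dx1:4')
v = [x[0] * x[1], x[1] * x[2] ** 2, x[0] + x[2] ** 3]   # arbitrary test field
div_v = sum(sp.diff(v[i], x[i]) for i in range(3))
dot_dx_v = sum(dx[i] * v[i] for i in range(3))

for j in range(3):
    # (dx . (grad ^ v))_j = dx_i (partial_i v_j - partial_j v_i)
    curl_dot = sum(dx[i] * (sp.diff(v[j], x[i]) - sp.diff(v[i], x[j]))
                   for i in range(3))
    # (<dx v grad>_1)_j, with the partials acting leftward on v
    grade1 = sum(sp.diff((dot_dx_v if k == j else 0)
                         - dx[k] * v[j] + v[k] * dx[j], x[k])
                 for k in range(3))
    assert sp.expand(curl_dot - (dx[j] * div_v - grade1)) == 0

print('dx . (grad ^ v) = dx (grad . v) - <dx v grad>_1 verified')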

Summary.

We found that it was possible to split the vector differential into divergence and incompressible components, as follows
\begin{equation}\label{eqn:dyadicVsGa:620}
\begin{aligned}
d\Bv
&= \lr{ d\Bx \cdot \spacegrad } \Bv \\
&= d\Bx (\spacegrad \cdot \Bv)
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{aligned}
\end{equation}

With
\begin{equation}\label{eqn:dyadicVsGa:720}
\begin{aligned}
d\Bv
&= d\Bx \cdot
\lr{
\inv{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
+
\inv{2} \lr{ \spacegrad \otimes \Bv - \lr{ \spacegrad \otimes \Bv }^\dagger }
} \\
&= d\Bx \cdot \lr{ \Bd + \BOmega },
\end{aligned}
\end{equation}
we found the following correspondences between the symmetric and antisymmetric tensor product components
\begin{equation}\label{eqn:dyadicVsGa:640}
\begin{aligned}
d\Bx \cdot \Bd &=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad d\Bx\, \Bv }
}
\end{aligned},
\end{equation}
and
\begin{equation}\label{eqn:dyadicVsGa:660}
\begin{aligned}
d\Bx \cdot \BOmega
&=
\inv{2} d\Bx \cdot \lr{ \spacegrad \wedge \Bv } \\
&=
\inv{2} \lr{
d\Bx \lr{ \spacegrad \cdot \Bv } - \gpgradeone{ d\Bx\, \Bv \spacegrad }
}.
\end{aligned}
\end{equation}

In the incompressible case where \( \spacegrad \cdot \Bv = 0 \), we have
\begin{equation}\label{eqn:dyadicVsGa:740}
\begin{aligned}
d\Bx \cdot \Bd &= \inv{2} \gpgradeone{ \spacegrad d\Bx\, \Bv } \\
d\Bx \cdot \BOmega &= -\inv{2} \gpgradeone{ d\Bx\, \Bv \spacegrad },
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:dyadicVsGa:760}
\begin{aligned}
d\Bv
&= d\Bx \cdot \lr{ \Bd + \BOmega } \\
&= \inv{2} \gpgradeone{ \spacegrad d\Bx\, \Bv - d\Bx\, \Bv \spacegrad } \\
&= \spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{aligned}
\end{equation}
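As an end to end sanity check of this incompressible summary, here is a sympy sketch with one hand-picked divergence-free field, comparing \( d\Bv = \lr{ d\Bx \cdot \spacegrad } \Bv \) against the components \( \partial_i \lr{ dx_i v_j - dx_j v_i } \) of \( \spacegrad \cdot \lr{ d\Bx \wedge \Bv } \):

import sympy as sp

x = sp.symbols('x1:4')
dx = sp.symbols('dx1:4')
v = [-x[1], x[0], sp.sin(x[0])]               # grad . v = 0 by inspection
assert sum(sp.diff(v[i], x[i]) for i in range(3)) == 0

for j in range(3):
    dv_j = sum(dx[i] * sp.diff(v[j], x[i]) for i in range(3))
    rhs_j = sum(sp.diff(dx[i] * v[j] - dx[j] * v[i], x[i]) for i in range(3))
    assert sp.expand(dv_j - rhs_j) == 0

print('d v = grad . (dx ^ v) for the incompressible test field')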

Vector gradients in dyadic notation and geometric algebra.

March 5, 2022 math and physics play

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

This is an exploration of the dyadic representation of the gradient acting on a vector in \(\mathbb{R}^3\), where we determine a tensor product formulation of a vector differential. Such a tensor product formulation can be split into symmetric and antisymmetric components. The geometric algebra (GA) equivalents of such a split are determined.

There is an error in part of the analysis below, which is addressed in a followup post made the next day.

GA gradient of a vector.

In GA we are free to express the product of the gradient and a vector field by adjacency. In coordinates (summation over repeated indexes assumed), such a product has the form
\begin{equation}\label{eqn:dyadicVsGa:20}
\spacegrad \Bv
= \lr{ \Be_i \partial_i } \lr{ v_j \Be_j }
= \lr{ \partial_i v_j } \Be_i \Be_j.
\end{equation}
In this sum, any terms with \( i = j \) are scalars since \( \Be_i^2 = 1 \), and the remaining terms are bivectors. This can be written compactly as
\begin{equation}\label{eqn:dyadicVsGa:40}
\spacegrad \Bv = \spacegrad \cdot \Bv + \spacegrad \wedge \Bv,
\end{equation}
or for \(\mathbb{R}^3\)
\begin{equation}\label{eqn:dyadicVsGa:60}
\spacegrad \Bv = \spacegrad \cdot \Bv + I \lr{ \spacegrad \cross \Bv},
\end{equation}
either of which breaks the gradient into divergence and curl components. In \ref{eqn:dyadicVsGa:40} this vector gradient is expressed using the bivector valued curl operator \( (\spacegrad \wedge \Bv) \), whereas \ref{eqn:dyadicVsGa:60} is expressed using the vector valued dual form of the curl \( (\spacegrad \cross \Bv) \) from conventional vector algebra.
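The sign and component bookkeeping in this duality is where slips tend to occur, so here is a small sympy sketch pairing the \( \Be_i \wedge \Be_j \) coefficients of \( \spacegrad \wedge \Bv \) with the components of \( \spacegrad \cross \Bv \), using \( I \Bc = c_1 \Be_2 \Be_3 + c_2 \Be_3 \Be_1 + c_3 \Be_1 \Be_2 \) for a vector \( \Bc \):

import sympy as sp

x = sp.symbols('x1:4')
v = [sp.Function(f'v{j + 1}')(*x) for j in range(3)]
curl = [sp.diff(v[2], x[1]) - sp.diff(v[1], x[2]),
        sp.diff(v[0], x[2]) - sp.diff(v[2], x[0]),
        sp.diff(v[1], x[0]) - sp.diff(v[0], x[1])]      # grad x v

# wedge coefficients of grad ^ v, paired with their dual curl components
pairs = [((0, 1), curl[2]),   # e1 ^ e2  <->  (grad x v)_3
         ((1, 2), curl[0]),   # e2 ^ e3  <->  (grad x v)_1
         ((2, 0), curl[1])]   # e3 ^ e1  <->  (grad x v)_2
for (i, j), c in pairs:
    wedge_ij = sp.diff(v[j], x[i]) - sp.diff(v[i], x[j])
    assert sp.expand(wedge_ij - c) == 0

print('grad ^ v = I (grad x v) verified componentwise')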

It is worth noting that order matters in the GA coordinate expansion of \ref{eqn:dyadicVsGa:20}. It is not correct to write
\begin{equation}\label{eqn:dyadicVsGa:80}
\spacegrad \Bv
= \lr{ \partial_i v_j } \Be_j \Be_i,
\end{equation}
which is only true when the curl is zero, that is \( \spacegrad \wedge \Bv = 0 \).

Dyadic representation.

Given a vector field \( \Bv = \Bv(\Bx) \), the differential of that field can be computed by chain rule
\begin{equation}\label{eqn:dyadicVsGa:100}
d\Bv = \PD{x_i}{\Bv} dx_i = \lr{ d\Bx \cdot \spacegrad} \Bv,
\end{equation}
where \( d\Bx = \Be_i dx_i \). This is a representation invariant form of the differential, where we have a scalar operator \( d\Bx \cdot \spacegrad \) acting on the vector field \( \Bv \). The matrix representation of this differential can be written as
\begin{equation}\label{eqn:dyadicVsGa:120}
d\Bv = \lr{
{\begin{bmatrix}
d\Bx
\end{bmatrix}}^\dagger
\begin{bmatrix}
\spacegrad
\end{bmatrix}
}
\begin{bmatrix}
\Bv
\end{bmatrix}
,
\end{equation}
where we are using the dagger to designate transposition, and each of the terms on the right are the coordinate matrices of the vectors with respect to the standard basis
\begin{equation}\label{eqn:dyadicVsGa:140}
\begin{bmatrix}
d\Bx
\end{bmatrix}
=
\begin{bmatrix}
dx_1 \\
dx_2 \\
dx_3
\end{bmatrix},\quad
\begin{bmatrix}
\Bv
\end{bmatrix}
=
\begin{bmatrix}
v_1 \\
v_2 \\
v_3
\end{bmatrix},\quad
\begin{bmatrix}
\spacegrad
\end{bmatrix}
=
\begin{bmatrix}
\partial_1 \\
\partial_2 \\
\partial_3
\end{bmatrix}.
\end{equation}

In \ref{eqn:dyadicVsGa:120} the parens are very important, as the expression is meaningless without them. With the parens we have a \((1 \times 3)(3 \times 1)\) matrix (i.e. a scalar) multiplied with a \(3\times 1\) matrix. If we drop the parens the expression becomes ill-formed, since we are left with an incompatible \((3\times1)(3\times1)\) matrix product on the right. The dyadic notation, which introduces a tensor product into the mix, is a mechanism for making sense of such a product. Can we make sense of an expression like \( \spacegrad \Bv \) without the geometric product in our toolbox?
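A CAS makes the parenthesization point concrete. In this sympy sketch (the gradient column is a symbolic stand-in, and all names are illustrative), the parenthesized product collapses to a scalar, while the unparenthesized one raises a shape error:

import sympy as sp

dx = sp.Matrix(sp.symbols('dx1:4'))       # [dx], a 3 x 1 column
g = sp.Matrix(sp.symbols('g1:4'))         # stand-in column for [grad]
v = sp.Matrix(sp.symbols('v1:4'))         # [v], a 3 x 1 column

scalar = (dx.T * g)[0, 0]                 # (1 x 3)(3 x 1) collapses to a scalar
print((scalar * v).shape)                 # (3, 1): the well formed d v column

try:
    print((g * v).shape)                  # (3 x 1)(3 x 1): incompatible
except sp.ShapeError as err:
    print('dropping the parens fails:', err)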

Stepping towards that question, let’s examine the coordinate expansion of our vector differential \ref{eqn:dyadicVsGa:100}, which is
\begin{equation}\label{eqn:dyadicVsGa:160}
d\Bv = dx_i \lr{ \partial_i v_j } \Be_j.
\end{equation}
If we allow a matrix of vectors, this has a block matrix form
\begin{equation}\label{eqn:dyadicVsGa:180}
d\Bv =
{\begin{bmatrix}
d\Bx
\end{bmatrix}}^\dagger
\begin{bmatrix}
\spacegrad \otimes \Bv
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix}
.
\end{equation}
Here we introduce the tensor product
\begin{equation}\label{eqn:dyadicVsGa:200}
\spacegrad \otimes \Bv
= \partial_i v_j \, \Be_i \otimes \Be_j,
\end{equation}
and designate the matrix of coordinates \( \partial_i v_j \), a second order tensor, by \(
\begin{bmatrix}
\spacegrad \otimes \Bv
\end{bmatrix}
\).

We have succeeded in factoring out a vector gradient. We can introduce a dot product between a vector and a direct product of vectors by observing that \ref{eqn:dyadicVsGa:180} has the structure of a quadratic form, and define
\begin{equation}\label{eqn:dyadicVsGa:220}
\Bx \cdot (\Ba \otimes \Bb) \equiv
{\begin{bmatrix}
\Bx
\end{bmatrix}}^\dagger
\begin{bmatrix}
\Ba \otimes \Bb
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix},
\end{equation}
so that \ref{eqn:dyadicVsGa:180} takes the form
\begin{equation}\label{eqn:dyadicVsGa:240}
d\Bv = d\Bx \cdot \lr{ \spacegrad \otimes \Bv }.
\end{equation}
Such a dot product gives operational meaning to the gradient-vector tensor product.
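Here is a sympy sketch of that definition (names mine): build the coordinate matrix \( \begin{bmatrix} \spacegrad \otimes \Bv \end{bmatrix} = [\partial_i v_j] \), contract it with \( {\begin{bmatrix} d\Bx \end{bmatrix}}^\dagger \) on the left, and compare with the chain rule differential \( \lr{ d\Bx \cdot \spacegrad } \Bv \), component by component:

import sympy as sp

x = sp.symbols('x1:4')
dxs = sp.symbols('dx1:4')
v = [x[0] ** 2 * x[2], x[0] + x[1] * x[2], sp.exp(x[1])]  # arbitrary test field

J = sp.Matrix(3, 3, lambda i, j: sp.diff(v[j], x[i]))  # [grad (x) v]_{ij}
dxm = sp.Matrix([dxs])                                 # 1 x 3, i.e. [dx]^T
coords = dxm * J                                       # 1 x 3 row of e_j coefficients

for j in range(3):
    dv_j = sum(dxs[i] * sp.diff(v[j], x[i]) for i in range(3))
    assert sp.expand(coords[0, j] - dv_j) == 0

print('dx . (grad (x) v) reproduces (dx . grad) v')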

Symmetrization and antisymmetrization of the vector differential in GA.

Using the dyadic notation, it’s possible to split a vector derivative into symmetric and antisymmetric components with respect to the gradient-vector direct product
\begin{equation}\label{eqn:dyadicVsGa:260}
d\Bv
= d\Bx \cdot
\lr{
\inv{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
+
\inv{2} \lr{ \spacegrad \otimes \Bv - \lr{ \spacegrad \otimes \Bv }^\dagger }
},
\end{equation}
or \( d\Bv = d\Bx \cdot \lr{ \Bd + \BOmega } \), where \( \Bd \) is a symmetric tensor, and \( \BOmega \) is a traceless antisymmetric tensor.

A question of potential interest is “what is the GA equivalent of this expression?” There are two identities that are helpful for extracting this equivalence, the first of which is the k-blade vector product identity. Given a k-blade \( B_k \) (i.e.: a product of \( k \) orthogonal vectors, or the wedge of \( k \) vectors), and a vector \( \Ba \), the dot product of the two is
\begin{equation}\label{eqn:dyadicVsGa:280}
B_k \cdot \Ba = \inv{2} \lr{ B_k \Ba + (-1)^{k+1} \Ba B_k }.
\end{equation}
Specifically, given two vectors \( \Ba, \Bb \), the vector dot product can be written as a symmetric sum
\begin{equation}\label{eqn:dyadicVsGa:300}
\Ba \cdot \Bb = \inv{2} \lr{ \Ba \Bb + \Bb \Ba } = \Bb \cdot \Ba,
\end{equation}
and given a bivector \( B \) and a vector \( \Ba \), the bivector-vector dot product can be written as an antisymmetric sum
\begin{equation}\label{eqn:dyadicVsGa:320}
B \cdot \Ba = \inv{2} \lr{ B \Ba - \Ba B } = - \Ba \cdot B.
\end{equation}
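Both identities can be verified mechanically with a tiny geometric product over the orthonormal \(\mathbb{R}^3\) basis. This is a bare-bones sketch, not a GA library: a multivector is a dict mapping a sorted index tuple (a basis blade) to a sympy coefficient, and all the helper names are my own.

import sympy as sp

def blade_mul(a, b):
    """Product of orthonormal basis blades e_a e_b -> (sign, sorted blade)."""
    idx, sign, i = list(a + b), 1, 0
    while i < len(idx) - 1:                 # gnome sort, tracking swap parity
        if idx[i] > idx[i + 1]:
            idx[i], idx[i + 1] = idx[i + 1], idx[i]
            sign, i = -sign, max(i - 1, 0)
        else:
            i += 1
    out, i = [], 0
    while i < len(idx):                     # contract e_i e_i = 1
        if i + 1 < len(idx) and idx[i] == idx[i + 1]:
            i += 2
        else:
            out.append(idx[i])
            i += 1
    return sign, tuple(out)

def gmul(X, Y):                             # geometric product of multivectors
    Z = {}
    for ba, ca in X.items():
        for bb, cb in Y.items():
            s, blade = blade_mul(ba, bb)
            Z[blade] = Z.get(blade, 0) + s * ca * cb
    return Z

def grade(X, k):                            # grade-k selection
    return {b: c for b, c in X.items() if len(b) == k}

def mv_eq(X, Y):                            # coefficient-wise equality
    return all(sp.expand(X.get(b, 0) - Y.get(b, 0)) == 0
               for b in set(X) | set(Y))

def vec(name):                              # a1 e1 + a2 e2 + a3 e3
    return {(i,): s for i, s in zip((1, 2, 3), sp.symbols(f'{name}1:4'))}

a, b, c = vec('a'), vec('b'), vec('c')

# a . b = <a b>_0 equals the symmetric sum (1/2)(a b + b a)
ab, ba = gmul(a, b), gmul(b, a)
sym = {k: (ab.get(k, 0) + ba.get(k, 0)) / 2 for k in set(ab) | set(ba)}
assert mv_eq(sym, grade(ab, 0))

# for the bivector B = b ^ c, B . a = <B a>_1 equals (1/2)(B a - a B)
B = grade(gmul(b, c), 2)
Ba, aB = gmul(B, a), gmul(a, B)
asym = {k: (Ba.get(k, 0) - aB.get(k, 0)) / 2 for k in set(Ba) | set(aB)}
assert mv_eq(asym, grade(Ba, 1))

print('both k-blade dot product identities verified')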

We may apply these to expressions where one of the vector terms is the gradient, but must allow for the gradient to act bidirectionally. That is, given multivectors \( M, N \)
\begin{equation}\label{eqn:dyadicVsGa:340}
M \spacegrad N
=
\partial_i (M \Be_i N)
=
(\partial_i M) \Be_i N + M \Be_i (\partial_i N),
\end{equation}
where parens have been used to indicate the scope of applicability of the partials. In particular, this means that we may write the divergence as a GA symmetric sum
\begin{equation}\label{eqn:dyadicVsGa:360}
\spacegrad \cdot \Bv = \inv{2} \lr{
\spacegrad \Bv + \Bv \spacegrad },
\end{equation}
which clearly corresponds to the symmetric term \( \Bd = (1/2) \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger } \) from \ref{eqn:dyadicVsGa:260}.
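As a sanity check of \ref{eqn:dyadicVsGa:360} itself, the left and right coordinate products can be compared componentwise: the scalar parts each contribute a copy of the divergence, while the \( \Be_i \wedge \Be_j \) contributions, \( \partial_i v_j - \partial_j v_i \) and \( \partial_j v_i - \partial_i v_j \), cancel. A sympy sketch:

import sympy as sp

x = sp.symbols('x1:4')
v = [sp.Function(f'v{j + 1}')(*x) for j in range(3)]
div_v = sum(sp.diff(v[i], x[i]) for i in range(3))

for i in range(3):
    for j in range(i + 1, 3):
        left_ij = sp.diff(v[j], x[i]) - sp.diff(v[i], x[j])    # from grad v
        right_ij = sp.diff(v[i], x[j]) - sp.diff(v[j], x[i])   # from v grad
        assert sp.expand(left_ij + right_ij) == 0              # bivectors cancel

# both scalar parts are div v, so (1/2)(grad v + v grad) = grad . v
print('bidirectional symmetric sum reduces to the divergence:', div_v)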

Let’s assume that we can write our vector differential in terms of a divergence term isomorphic to the symmetric sum in \ref{eqn:dyadicVsGa:260}, and a “something else”, \(\BX\). That is
\begin{equation}\label{eqn:dyadicVsGa:380}
\begin{aligned}
d\Bv
&= \lr{ d\Bx \cdot \spacegrad } \Bv \\
&= d\Bx (\spacegrad \cdot \Bv) + \BX,
\end{aligned}
\end{equation}
where
\begin{equation}\label{eqn:dyadicVsGa:400}
\BX = \lr{ d\Bx \cdot \spacegrad } \Bv - d\Bx (\spacegrad \cdot \Bv),
\end{equation}
is a vector expression to be reduced to something simpler. That reduction is possible using the distribution identity
\begin{equation}\label{eqn:dyadicVsGa:420}
\Bc \cdot (\Ba \wedge \Bb)
=
(\Bc \cdot \Ba) \Bb
- (\Bc \cdot \Bb) \Ba,
\end{equation}
so we find
\begin{equation}\label{eqn:dyadicVsGa:440}
\BX = \spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{equation}

We find the following GA split of the vector differential into symmetric and antisymmetric terms
\begin{equation}\label{eqn:dyadicVsGa:460}
\boxed{
d\Bv
= (d\Bx \cdot \spacegrad) \Bv
= d\Bx (\spacegrad \cdot \Bv)
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
}
\end{equation}
Such a split avoids the indeterminate nature of the tensor product, to which we only give meaning by introducing the quadratic form based dot product of \ref{eqn:dyadicVsGa:220}.
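As one final mechanical check, the boxed result can be confirmed in coordinates: the bivector \( B = d\Bx \wedge \Bv \) has components \( B_{ij} = dx_i v_j - dx_j v_i \), and \( \spacegrad \cdot B \) has \( j \)-th component \( \partial_i B_{ij} \), so the sum \( d\Bx \lr{ \spacegrad \cdot \Bv } + \spacegrad \cdot \lr{ d\Bx \wedge \Bv } \) should reproduce \( \lr{ d\Bx \cdot \spacegrad } \Bv \). A sympy sketch with generic \( v_j \):

import sympy as sp

x = sp.symbols('x1:4')
dx = sp.symbols('dx1:4')
v = [sp.Function(f'v{j + 1}')(*x) for j in range(3)]
div_v = sum(sp.diff(v[i], x[i]) for i in range(3))

for j in range(3):
    dv_j = sum(dx[i] * sp.diff(v[j], x[i]) for i in range(3))
    div_B_j = sum(sp.diff(dx[i] * v[j] - dx[j] * v[i], x[i]) for i in range(3))
    assert sp.expand(dv_j - (dx[j] * div_v + div_B_j)) == 0

print('d v = dx (grad . v) + grad . (dx ^ v) verified')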

Finally, some sensible cause and effect analysis of the Ukraine conflict.

March 1, 2022 Incoherent ramblings

When you see the media all moving in lock step beating the drums of war, it’s clear that there’s a heavy propaganda element to the story, and that there must be deeper issues at play.  It seemed obvious to me that there was surely US-funded covert conflict undergirding this story, as has repeatedly been the case in so many other world conflicts.

I’ve not been going out of my way to ferret out that info, but it was inevitable that some would eventually cross my path.  I’m sure there will be more, but here’s one little bit of the story:

Ep. 2074 Russia, Ukraine, and NATO

This was, in my judgment, a sensible cause and effect analysis of the Ukraine conflict, one that doesn’t just try to paint things as a reaction to NATO incursion (which is surely some part of the story).

In that interview, Tom Woods’ guest, Gilbert Doctorow, expounds on two specific points that are significant.  The first is that there has been an informal, undeclared war against the Russian Ukrainian states for 8 years (with active shelling of Ukrainian/Russian civilians in those areas, and explicit disregard for the existing negotiated treaties).  The second is that the Ukrainian President recently declared his intent to start a Ukrainian nuclear weapons program.

In addition to those points, recall that psychopathic elements in the US government and power broker circles financed a coup in the Ukraine to the tune of $5 billion in 2014 (Obama era).  Some of that financing went to literal NeoNazis! We have multiple generations of US government corruption in play, with Trump keeping up the game by coordinating US weapon sales to Ukraine, and with Biden’s family playing money laundering games (and who knows what else) in the region.

I’m not excusing Putin and the psychopathic elements that surely also exist on the Russian side. There is plausible reporting that Putin bombed, or attempted to bomb, his own people in Moscow to justify the Chechen war. It takes a special kind of evil to kill your own people to justify killing other people. There’s also considerable reporting on the disappearances and deaths of Russian reporters and dissidents. I’d be very surprised if there was not truth to a considerable portion of that reporting, given Putin’s KGB origin story.  Putin is not a good guy in this story, even if there is unreported rationale for his actions.

It takes a lot of work to get a full scale conflict to play out, and this one is big enough that everybody seems to have simultaneously forgotten all the covid fear mongering, and the government and medical tyranny, of the last two years. One thing that we can be certain of is that a lot of people will make money from this war and the financial gouging that is enabled by it, regardless of the extent to which it is taken.

Lots of people on both sides will die, as powerful factions on both sides profit from the chaos.