### Month: March 2022

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

## Symmetrization and antisymmetrization of the vector differential in GA.

There was an error in yesterday’s post. This decomposition was correct:
\begin{equation}\label{eqn:dyadicVsGa:460}
d\Bv
= d\Bx \lr{ \spacegrad \cdot \Bv }
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{equation}

However, identifying these terms with the symmetric and antisymmetric splits of $$\spacegrad \otimes \Bv$$ was wrong.
Brian pointed out that a purely incompressible flow is one for which $$\spacegrad \cdot \Bv = 0$$, yet, in general, an incompressible flow can have a non-zero deformation tensor.
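Brian's observation is easy to machine check. Here is a minimal sympy sketch (the shear field $$\Bv = (y, 0, 0)$$ is my arbitrary choice of divergence-free flow, not one from the original discussion), showing zero divergence with a non-zero symmetric (deformation) tensor:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
v = [y, sp.Integer(0), sp.Integer(0)]  # simple shear flow

# matrix of coordinates J[i, j] = partial_i v_j
J = sp.Matrix(3, 3, lambda i, j: sp.diff(v[j], coords[i]))

divergence = sum(sp.diff(v[i], coords[i]) for i in range(3))
d = (J + J.T) / 2  # symmetric (deformation) tensor

assert divergence == 0          # incompressible,
assert d != sp.zeros(3, 3)      # yet the deformation tensor does not vanish
```

The off diagonal $$1/2$$ entries of $$\Bd$$ survive even though $$\spacegrad \cdot \Bv = 0$$, which is exactly the point.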

Also, given the nature of the matrix expansion of the antisymmetric tensor, we should have had a curl term in the mix and we do not. The conclusion must be that \ref{eqn:dyadicVsGa:460} is a split into divergence and non-divergence terms, but we really wanted a split into curl and non-curl terms.

## Symmetrization and antisymmetrization of the vector differential in GA: Take II.

Identification of $$\ifrac{1}{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }$$ with the divergence was incorrect.

Let’s explicitly expand out our symmetric tensor component fully to see what it really yields, without guessing.
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
d\Bx \cdot
\inv{2}
\lr{
\begin{bmatrix}
\partial_i v_j
\end{bmatrix}
+
\begin{bmatrix}
\partial_j v_i
\end{bmatrix}
} \\
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix}.
\end{aligned}

The symmetric matrix that represents this direct product tensor is
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
=
\inv{2}
\begin{bmatrix}
2 \partial_1 v_1 & \partial_1 v_2 + \partial_2 v_1 & \partial_1 v_3 + \partial_3 v_1 \\
\partial_2 v_1 + \partial_1 v_2 & 2 \partial_2 v_2 & \partial_2 v_3 + \partial_3 v_2 \\
\partial_3 v_1 + \partial_1 v_3 & \partial_3 v_2 + \partial_2 v_3 & 2 \partial_3 v_3 \\
\end{bmatrix}
.

This certainly isn’t isomorphic to the divergence. Instead, the trace of this matrix is the portion that is isomorphic to the divergence. The rest is something else. Let’s put the tensors into vector form to understand what they really represent.
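That trace observation can be confirmed for a general field with a few lines of sympy (a sketch; $$f, g, h$$ are arbitrary placeholder component functions):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]
f, g, h = (sp.Function(n)(x, y, z) for n in 'fgh')
v = [f, g, h]

# symmetric tensor (1/2)(partial_i v_j + partial_j v_i)
d = sp.Matrix(3, 3, lambda i, j:
              (sp.diff(v[j], coords[i]) + sp.diff(v[i], coords[j])) / 2)

divergence = sum(sp.diff(v[i], coords[i]) for i in range(3))

# the trace is exactly the divergence; the off-trace remainder is something else
assert sp.simplify(d.trace() - divergence) == 0
```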

For the symmetric part we have
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j +
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix} \\
&=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
},
\end{aligned}

and, similarly, for the antisymmetric tensor component, we have
\begin{aligned}
d\Bx \cdot
\inv{2}
\lr{ \spacegrad \otimes \Bv - \lr{ \spacegrad \otimes \Bv }^\dagger }
&=
dx_i
\inv{2}
\begin{bmatrix}
\partial_i v_j -
\partial_j v_i
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix} \\
&=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv - \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv}.
\end{aligned}

We find an isomorphism of the antisymmetric term with the curl, but the symmetric term has a divergence component, plus more.
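The curl isomorphism can be double checked against conventional vector algebra, where the same contraction is $$\inv{2} \lr{ \spacegrad \cross \Bv } \cross d\Bx$$. A sympy sketch (again with placeholder components $$f, g, h$$):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
dx1, dx2, dx3 = sp.symbols('dx1 dx2 dx3')
coords = [x, y, z]
f, g, h = (sp.Function(n)(x, y, z) for n in 'fgh')
v = sp.Matrix([f, g, h])
dX = sp.Matrix([dx1, dx2, dx3])

# antisymmetric tensor Omega_ij = (1/2)(partial_i v_j - partial_j v_i)
Omega = sp.Matrix(3, 3, lambda i, j:
                  (sp.diff(v[j], coords[i]) - sp.diff(v[i], coords[j])) / 2)

# dX . Omega, with components dx_i Omega_ij
lhs = Omega.T * dX

# (1/2) (curl v) x dX, the conventional vector algebra dual form
curl = sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                  sp.diff(v[0], z) - sp.diff(v[2], x),
                  sp.diff(v[1], x) - sp.diff(v[0], y)])
rhs = curl.cross(dX) / 2

assert (lhs - rhs).expand() == sp.zeros(3, 1)
```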

If we want, we can split the symmetric component into its divergence and non-divergence terms:
\begin{aligned}
d\Bx \cdot \Bd
&=
\inv{2}
\lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \spacegrad \cdot \lr{ d\Bx \wedge \Bv } + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad \lr{ d\Bx \cdot \Bv } + \spacegrad \lr{ d\Bx \wedge \Bv } }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad d\Bx \Bv }
},
\end{aligned}

so for incompressible flow, the GA representation of the symmetric tensor term is a single grade one selection
d\Bx \cdot \Bd = \inv{2} \gpgradeone{ \spacegrad d\Bx \Bv }.

It is a little unfortunate that we cannot factor out the $$d\Bx$$ term. We can do that for the
GA representation of the antisymmetric tensor contribution, which is just
\BOmega
=
\inv{2} \spacegrad \wedge \Bv.

Let’s see what the antisymmetric tensor equivalent looks like in the incompressible case, by subtracting a divergence term
\begin{aligned}
d\Bx \cdot \lr{ \spacegrad \wedge \Bv } - d\Bx \lr{ \spacegrad \cdot \Bv }
&=
\lr{ d\Bx \cdot \spacegrad } \Bv - \spacegrad \lr{ d\Bx \cdot \Bv } - d\Bx \lr{ \spacegrad \cdot \Bv } \\
&=
\spacegrad \cdot \lr{ d\Bx \wedge \Bv } - \spacegrad \lr{ d\Bx \cdot \Bv } \\
&=
-\gpgradeone{ \spacegrad \Bv d\Bx },
\end{aligned}

so we have
d\Bx \cdot \BOmega
=
\inv{2} \lr{ d\Bx \lr{ \spacegrad \cdot \Bv } - \gpgradeone{ \spacegrad \Bv d\Bx } }.

Both the symmetric and antisymmetric tensors have compressible components.

## Summary.

We found that it was possible to split the vector differential into divergence and incompressible components, as follows
\begin{aligned}
d\Bv
&= \lr{ d\Bx \cdot \spacegrad } \Bv \\
&= d\Bx \lr{ \spacegrad \cdot \Bv }
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{aligned}

With
\begin{aligned}
d\Bv
&= d\Bx \cdot
\lr{
\inv{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
+
\inv{2} \lr{ \spacegrad \otimes \Bv - \lr{ \spacegrad \otimes \Bv }^\dagger }
} \\
&= d\Bx \cdot \lr{ \Bd + \BOmega },
\end{aligned}

we found the following correspondences between the symmetric and antisymmetric tensor product components
\begin{aligned}
d\Bx \cdot \Bd &=
\inv{2} \lr{
\lr{ d\Bx \cdot \spacegrad } \Bv + \spacegrad \lr{ d\Bx \cdot \Bv }
} \\
&=
\inv{2}
\lr{
d\Bx \lr{ \spacegrad \cdot \Bv } + \gpgradeone{ \spacegrad d\Bx \Bv }
},
\end{aligned}

and
\begin{aligned}
d\Bx \cdot \BOmega
&=
\inv{2} d\Bx \cdot \lr{ \spacegrad \wedge \Bv } \\
&=
\inv{2} \lr{
d\Bx \lr{ \spacegrad \cdot \Bv } - \gpgradeone{ \spacegrad \Bv d\Bx }
}.
\end{aligned}

In the incompressible case where $$\spacegrad \cdot \Bv = 0$$, we have
\begin{aligned}
d\Bx \cdot \Bd &= \inv{2} \gpgradeone{ \spacegrad d\Bx \Bv }, \\
d\Bx \cdot \BOmega &= -\inv{2} \gpgradeone{ \spacegrad \Bv d\Bx },
\end{aligned}

and
\begin{aligned}
d\Bv
&= d\Bx \cdot \lr{ \Bd + \BOmega } \\
&= \spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
\end{aligned}

[If mathjax doesn’t display properly for you, click here for a PDF of this post]

This is an exploration of the dyadic representation of the gradient acting on a vector in $$\mathbb{R}^3$$, where we determine a tensor product formulation of a vector differential. Such a tensor product formulation can be split into symmetric and antisymmetric components. The geometric algebra (GA) equivalents of such a split are determined.

There is an error in part of the analysis below, which is addressed in a followup post made the next day.

## GA gradient of a vector.

In GA we are free to express the product of the gradient and a vector field by adjacency. In coordinates (summation over repeated indexes assumed), such a product has the form
\begin{equation}\label{eqn:dyadicVsGa:20}
\spacegrad \Bv
= \lr{ \Be_i \partial_i } \lr{ v_j \Be_j }
= \lr{ \partial_i v_j } \Be_i \Be_j.
\end{equation}

In this sum, any terms with $$i = j$$ are scalars since $$\Be_i^2 = 1$$, and the remaining terms are bivectors. This can be written compactly as
\begin{equation}\label{eqn:dyadicVsGa:40}
\spacegrad \Bv = \spacegrad \cdot \Bv + \spacegrad \wedge \Bv,
\end{equation}
or, for $$\mathbb{R}^3$$,
\begin{equation}\label{eqn:dyadicVsGa:60}
\spacegrad \Bv = \spacegrad \cdot \Bv + I \lr{ \spacegrad \cross \Bv },
\end{equation}
either of which breaks the gradient into divergence and curl components. In \ref{eqn:dyadicVsGa:40} this vector gradient is expressed using the bivector valued curl operator $$(\spacegrad \wedge \Bv)$$, whereas \ref{eqn:dyadicVsGa:60} is expressed using the vector valued dual form of the curl $$(\spacegrad \cross \Bv)$$ from conventional vector algebra, where $$I = \Be_1 \Be_2 \Be_3$$ is the $$\mathbb{R}^3$$ pseudoscalar.

It is worth noting that order matters in the GA coordinate expansion of \ref{eqn:dyadicVsGa:20}. It is not correct to write
\spacegrad \Bv
= \lr{ \partial_i v_j } \Be_j \Be_i,

which is only true when the curl, $$\spacegrad \wedge \Bv$$, is zero.

Given a vector field $$\Bv = \Bv(\Bx)$$, the differential of that field can be computed by chain rule
\begin{equation}\label{eqn:dyadicVsGa:100}
d\Bv = \PD{x_i}{\Bv} dx_i = \lr{ d\Bx \cdot \spacegrad} \Bv,
\end{equation}

where $$d\Bx = \Be_i dx_i$$. This is a representation invariant form of the differential, where we have a scalar operator $$d\Bx \cdot \spacegrad$$ acting on the vector field $$\Bv$$. The matrix representation of this differential can be written as
\begin{equation}\label{eqn:dyadicVsGa:120}
d\Bv = \lr{
{\begin{bmatrix}
d\Bx
\end{bmatrix}}^\dagger
\begin{bmatrix}
\spacegrad
\end{bmatrix}
}
\begin{bmatrix}
\Bv
\end{bmatrix}
,
\end{equation}

where we are using the dagger to designate transposition, and each of the terms on the right are the coordinate matrixes of the vectors with respect to the standard basis
\begin{bmatrix}
d\Bx
\end{bmatrix}
=
\begin{bmatrix}
dx_1 \\
dx_2 \\
dx_3
\end{bmatrix},
\quad
\begin{bmatrix}
\Bv
\end{bmatrix}
=
\begin{bmatrix}
v_1 \\
v_2 \\
v_3
\end{bmatrix},
\quad
\begin{bmatrix}
\spacegrad
\end{bmatrix}
=
\begin{bmatrix}
\partial_1 \\
\partial_2 \\
\partial_3
\end{bmatrix}.

In \ref{eqn:dyadicVsGa:120} the parens are very important, as the expression is meaningless without them. With the parens we have a $$(1 \times 3)(3 \times 1)$$ matrix (i.e. a scalar) multiplied with a $$3\times 1$$ matrix. That becomes ill-formed if we drop the parens, since we are left with an incompatible $$(3\times1)(3\times1)$$ matrix product on the right. The dyadic notation, which introduces a tensor product into the mix, is a mechanism to make sense of such a product. Can we make sense of an expression like $$\spacegrad \Bv$$ without the geometric product in our toolbox?

Stepping towards that question, let’s examine the coordinate expansion of our vector differential \ref{eqn:dyadicVsGa:100}, which is
d\Bv = dx_i \lr{ \partial_i v_j } \Be_j.

If we allow a matrix of vectors, this has a block matrix form
\begin{equation}\label{eqn:dyadicVsGa:180}
d\Bv =
{\begin{bmatrix}
d\Bx
\end{bmatrix}}^\dagger
\begin{bmatrix}
\spacegrad \otimes \Bv
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix}
.
\end{equation}

Here we introduce the tensor product
\spacegrad \otimes \Bv
= \partial_i v_j \, \Be_i \otimes \Be_j,

and designate the matrix of coordinates $$\partial_i v_j$$, a second order tensor, by $$\begin{bmatrix} \spacegrad \otimes \Bv \end{bmatrix}$$.

We have succeeded in factoring out a vector gradient. We can introduce a dot product between a vector and a direct product of vectors, by observing that \ref{eqn:dyadicVsGa:180} has the structure of a quadratic form, and define
\begin{equation}\label{eqn:dyadicVsGa:220}
\Bx \cdot (\Ba \otimes \Bb) \equiv
{\begin{bmatrix}
\Bx
\end{bmatrix}}^\dagger
\begin{bmatrix}
\Ba \otimes \Bb
\end{bmatrix}
\begin{bmatrix}
\Be_1 \\
\Be_2 \\
\Be_3
\end{bmatrix},
\end{equation}

so that \ref{eqn:dyadicVsGa:180} takes the form
d\Bv = d\Bx \cdot \lr{ \spacegrad \otimes \Bv }.

Such a dot product gives operational meaning to the gradient-vector tensor product.
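Numerically, this dot product is just the $$[d\Bx]^\dagger \, [\spacegrad \otimes \Bv]$$ contraction against a matrix of first partials. A small numpy sketch (the field and evaluation point are arbitrary choices for illustration) confirms that it reproduces the first order change in the field:

```python
import numpy as np

def v(p):
    """Sample vector field, chosen arbitrarily for this sketch."""
    x, y, z = p
    return np.array([x * y, np.sin(z), x + z**2])

def grad_tensor(p, h=1e-6):
    """Matrix of coordinates J[i, j] = partial_i v_j via central differences."""
    J = np.zeros((3, 3))
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        J[i] = (v(p + e) - v(p - e)) / (2 * h)
    return J

p = np.array([0.5, -1.0, 2.0])
dx = 1e-4 * np.array([1.0, 2.0, -1.0])

# d v = dX . (grad (x) v): the quadratic-form contraction dX^T [J]
dv_tensor = dx @ grad_tensor(p)

# compare against the actual (first order) change in the field
dv_direct = v(p + dx) - v(p)

assert np.allclose(dv_tensor, dv_direct, atol=1e-7)
```

The agreement is only to first order in $$\Abs{d\Bx}$$, which is why a loose absolute tolerance is used.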

## Symmetrization and antisymmetrization of the vector differential in GA.

Using the dyadic notation, it’s possible to split a vector derivative into symmetric and antisymmetric components with respect to the gradient-vector direct product
\begin{equation}\label{eqn:dyadicVsGa:260}
d\Bv
= d\Bx \cdot
\lr{
\inv{2} \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }
+
\inv{2} \lr{ \spacegrad \otimes \Bv - \lr{ \spacegrad \otimes \Bv }^\dagger }
},
\end{equation}

or $$d\Bv = d\Bx \cdot \lr{ \Bd + \BOmega }$$, where $$\Bd$$ is a symmetric tensor, and $$\BOmega$$ is a traceless antisymmetric tensor.

A question of potential interest is “what is the GA equivalent of this expression?” There are two identities that are helpful for extracting this equivalence, the first of which is the k-blade vector product identity. Given a k-blade $$B_k$$ (i.e.: a product of $$k$$ orthogonal vectors, or the wedge of $$k$$ vectors), and a vector $$\Ba$$, the dot product of the two is
B_k \cdot \Ba = \inv{2} \lr{ B_k \Ba + (-1)^{k+1} \Ba B_k }.

Specifically, given two vectors $$\Ba, \Bb$$, the vector dot product can be written as a symmetric sum
\Ba \cdot \Bb = \inv{2} \lr{ \Ba \Bb + \Bb \Ba } = \Bb \cdot \Ba,

and given a bivector $$B$$ and a vector $$\Ba$$, the bivector-vector dot product can be written as an antisymmetric sum
B \cdot \Ba = \inv{2} \lr{ B \Ba – \Ba B } = – \Ba \cdot B.

We may apply these to expressions where one of the vector terms is the gradient, but must allow for the gradient to act bidirectionally. That is, given multivectors $$M, N$$
M \spacegrad N
=
\partial_i \lr{ M \Be_i N }
=
\lr{ \partial_i M } \Be_i N + M \Be_i \lr{ \partial_i N },

where parens have been used to indicate the scope of applicability of the partials. In particular, this means that we may write the divergence as a GA symmetric sum
\spacegrad \cdot \Bv = \inv{2} \lr{ \spacegrad \Bv + \Bv \spacegrad },

which clearly corresponds to the symmetric term $$\Bd = (1/2) \lr{ \spacegrad \otimes \Bv + \lr{ \spacegrad \otimes \Bv }^\dagger }$$ from \ref{eqn:dyadicVsGa:260}.

Let’s assume that we can write our vector differential in terms of a divergence term isomorphic to the symmetric sum in \ref{eqn:dyadicVsGa:260}, and a “something else”, $$\BX$$. That is
\begin{aligned}
d\Bv
&= \lr{ d\Bx \cdot \spacegrad } \Bv \\
&= d\Bx (\spacegrad \cdot \Bv) + \BX,
\end{aligned}

where
\BX = \lr{ d\Bx \cdot \spacegrad } \Bv – d\Bx (\spacegrad \cdot \Bv),

is a vector expression to be reduced to something simpler. That reduction is possible using the distribution identity
\Bc \cdot (\Ba \wedge \Bb)
=
(\Bc \cdot \Ba) \Bb
– (\Bc \cdot \Bb) \Ba,

so we find
\BX = \spacegrad \cdot \lr{ d\Bx \wedge \Bv }.

We find the following GA split of the vector differential into symmetric and antisymmetric terms
\boxed{
d\Bv = d\Bx \lr{ \spacegrad \cdot \Bv }
+
\spacegrad \cdot \lr{ d\Bx \wedge \Bv }.
}

Such a split avoids the indeterminate nature of the tensor product, which we only give meaning to by introducing the quadratic form based dot product of \ref{eqn:dyadicVsGa:220}.
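The boxed split is also easy to machine check componentwise. A sympy sketch for an arbitrary field (placeholder components $$f, g, h$$), where $$\spacegrad \cdot \lr{ d\Bx \wedge \Bv }$$ is computed independently from the antisymmetric coordinate matrix of the wedge product:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
dx1, dx2, dx3 = sp.symbols('dx1 dx2 dx3')
coords = [x, y, z]
f, g, h = (sp.Function(n)(x, y, z) for n in 'fgh')
v = [f, g, h]
dX = [dx1, dx2, dx3]

divv = sum(sp.diff(v[i], coords[i]) for i in range(3))

# left hand side: d v_k = (dX . grad) v_k
dv = [sum(dX[i] * sp.diff(v[k], coords[i]) for i in range(3)) for k in range(3)]

# grad . (dX ^ v), from the wedge's antisymmetric coordinates B_jk
B = sp.Matrix(3, 3, lambda j, k: dX[j] * v[k] - dX[k] * v[j])
divB = [sum(sp.diff(B[j, k], coords[j]) for j in range(3)) for k in range(3)]

# boxed split: d v = dX (grad . v) + grad . (dX ^ v)
rhs = [dX[k] * divv + divB[k] for k in range(3)]

assert all(sp.expand(dv[k] - rhs[k]) == 0 for k in range(3))
```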

## Finally, some sensible cause and effect analysis of the Ukraine conflict.

When you see the media all moving in lock step beating the drums of war, it’s clear that there’s a heavy propaganda element to the story, and that there must be deeper issues at play.  It seemed obvious to me that there was surely US funded covert conflict undergirding this story, as has repeatedly been the case in so many other world conflicts.

I’ve not been going out of my way to ferret out that info, but it was inevitable that some would eventually cross my path.  I’m sure there will be more, but here’s one little bit of the story:

Ep. 2074 Russia, Ukraine, and NATO

This was, in my judgment, a sensible cause and effect analysis on the Ukraine conflict, that doesn’t just try to paint things as a reaction to NATO incursion (which is surely some part of the story.)

In that interview, Tom Wood’s guest, Gilbert Doctorow, expounds on two specific points that are significant.  The first is that there has been an informal undeclared war against the Russian Ukrainian states for 8 years (with active shelling of Ukrainian/Russian civilians in those areas, and explicit disregard for the existing negotiated treaties).  The second point is that the Ukrainian President recently declared his intent to start a Ukrainian nuclear weapons program.

In addition to those points, recall that psychopathic elements in the US government and power broker circles financed a coup in the Ukraine to the tune of \$5 billion in 2014 (Obama era).  Some of that financing went to literal NeoNazis! We have multiple generations of US government corruption in play, with Trump keeping up the game by coordinating US weapon sales to Ukraine, and with Biden’s family playing money laundering games (and who knows what else) in the region.

I’m not excusing Putin and the psychopathic elements that surely also exist on the Russian side. There is plausible reporting on Putin’s use of bombing or attempting to bomb his own people in Moscow to justify the Chechen war. It takes a special kind of evil to kill your own people to justify killing other people. There’s also considerable reporting on the disappearing of and deaths of Russian reporters and dissidents. I’d be very surprised if there was not truth to a considerable portion of that reporting given Putin’s KGB origin story.  Putin is not a good guy in this story, even if there is unreported rationale for his actions.

It takes a lot of work to get a full scale conflict to play out, and this one is big enough that apparently everybody seems to have simultaneously forgotten all the covid fear mongering, and government and medical tyranny of the last two years. One thing that we can be certain of is that a lot of people will make money from this war and the financial gouging that is enabled by it, regardless of the extent that it is taken.

Lots of people on both sides will die, as powerful factions on both sides profit from the chaos.