
Static load with two forces in a plane, solved a few different ways.

February 12, 2023 math and physics play


There’s a class of simple statics problems that are pervasive in high school physics and first year engineering classes (for me, that was CIV102.)  These problems are illustrated in the figures below. Here we have a static load under gravity, and two supporting members (rigid beams or wire lines), which can be under compression or tension, depending on the geometry.

The problem, given the geometry, is to find the magnitudes of the forces in the two members. The equation to solve is of the form
\begin{equation}\label{eqn:twoForceStaticsProblem:20}
\BF_s + \BF_r + m \Bg = 0.
\end{equation}
The usual way to solve such a problem is to resolve the forces into components. We will do that first here as a review, but then also solve the system using GA techniques, which are arguably simpler or more direct.

Solving as a conventional vector equation.

If we were back in high school we could have written our forces out in vector form
\begin{equation}\label{eqn:twoForceStaticsProblem:160}
\begin{aligned}
\BF_r &= f_r \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } \\
\BF_s &= f_s \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } \\
\Bg &= g \Be_1.
\end{aligned}
\end{equation}
Here the gravitational direction has been pointed along the x-axis.

Our equation to solve is now
\begin{equation}\label{eqn:twoForceStaticsProblem:180}
f_r \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } + f_s \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } + m g \Be_1 = 0.
\end{equation}
This we can solve as a set of scalar equations, one for each of the \( \Be_1 \) and \( \Be_2 \) directions
\begin{equation}\label{eqn:twoForceStaticsProblem:200}
\begin{aligned}
f_r \cos\alpha + f_s \cos\beta + m g &= 0 \\
f_r \sin\alpha + f_s \sin\beta &= 0.
\end{aligned}
\end{equation}
Our solution is
\begin{equation}\label{eqn:twoForceStaticsProblem:220}
\begin{aligned}
\begin{bmatrix}
f_r \\
f_s
\end{bmatrix}
&=
{\begin{bmatrix}
\cos\alpha & \cos\beta \\
\sin\alpha & \sin\beta
\end{bmatrix}}^{-1}
\begin{bmatrix}
- m g \\
0
\end{bmatrix} \\
&=
\inv{
\cos\alpha \sin\beta - \cos\beta \sin\alpha
}
\begin{bmatrix}
\sin\beta & -\cos\beta \\
-\sin\alpha & \cos\alpha
\end{bmatrix}
\begin{bmatrix}
- m g \\
0
\end{bmatrix} \\
&=
\frac{ m g }{ \cos\alpha \sin\beta - \cos\beta \sin\alpha }
\begin{bmatrix}
-\sin\beta \\
\sin\alpha
\end{bmatrix} \\
&=
\frac{ m g }{ \sin\lr{ \beta - \alpha } }
\begin{bmatrix}
-\sin\beta \\
\sin\alpha
\end{bmatrix}.
\end{aligned}
\end{equation}
We have to haul out a trig identity to make the final simplification, but we do find a solution to the system.
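As a quick numeric sanity check (a minimal sketch in Mathematica, with arbitrarily chosen angles and \( m g = 1 \); the specific values are mine, not tied to any particular geometry), the matrix solution and the closed form agree:

(* numerical check: LinearSolve vs. the closed form solution, for arbitrary angles (radians) *)
With[{alpha = 2.0, beta = 4.0, mg = 1.0},
 {LinearSolve[{{Cos[alpha], Cos[beta]}, {Sin[alpha], Sin[beta]}}, {-mg, 0}],
  mg/Sin[beta - alpha] {-Sin[beta], Sin[alpha]}}]

Both rows of the output are identical, as expected.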

Another approach is to take cross products with the unit force directions.  First note that
\begin{equation}\label{eqn:twoForceStaticsProblem:240}
\begin{aligned}
\lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } \cross \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta }
&=
\Be_3 \lr{
\cos\alpha \sin\beta - \sin\alpha \cos\beta
} \\
&=
\Be_3 \sin\lr{ \beta - \alpha }.
\end{aligned}
\end{equation}

If we take cross products with each of the unit vectors, we find
\begin{equation}\label{eqn:twoForceStaticsProblem:260}
\begin{aligned}
f_r \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } \cross \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } + m g \Be_1 \cross \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } &= 0 \\
f_s \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } \cross \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } + m g \Be_1 \cross \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } &= 0,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:twoForceStaticsProblem:280}
\begin{aligned}
\Be_3 f_r \sin\lr{ \beta - \alpha } + m g \Be_3 \sin\beta &= 0 \\
-\Be_3 f_s \sin\lr{ \beta - \alpha } + m g \Be_3 \sin\alpha &= 0.
\end{aligned}
\end{equation}
After cancelling the \( \Be_3 \)’s, we find the same result as we did solving the scalar system. This was a fairly direct way to solve the system, but the intermediate cross products were a bit messy. We will now redo the calculation using the wedge product. Switching from the cross product to the wedge product, by itself, makes things neither simpler nor more complicated, but we can use the complex exponential form of the unit vectors for the forces, and that will simplify things.

Geometric algebra setup and solution.

As usual for planar problems, let’s write \( i = \Be_1 \Be_2 \) for the plane pseudoscalar, which allows us to write the forces in polar form
\begin{equation}\label{eqn:twoForceStaticsProblem:40}
\begin{aligned}
\BF_r &= f_r \Be_1 e^{i\alpha} \\
\BF_s &= f_s \Be_1 e^{i\beta} \\
\Bg &= g \Be_1.
\end{aligned}
\end{equation}
Our equation to solve is now
\begin{equation}\label{eqn:twoForceStaticsProblem:60}
f_r \Be_1 e^{i\alpha} + f_s \Be_1 e^{i\beta} + m g \Be_1 = 0.
\end{equation}
The solution for either \( f_r \) or \( f_s \) is now trivial, as we only have to take wedge products with the force direction vectors to solve for the magnitudes.  That is
\begin{equation}\label{eqn:twoForceStaticsProblem:80}
\begin{aligned}
f_r \lr{ \Be_1 e^{i\alpha} } \wedge \lr{ \Be_1 e^{i\beta} } + m g \Be_1 \wedge \lr{ \Be_1 e^{i\beta} } &= 0 \\
f_s \lr{ \Be_1 e^{i\beta} } \wedge \lr{ \Be_1 e^{i\alpha} } + m g \Be_1 \wedge \lr{ \Be_1 e^{i\alpha} } &= 0.
\end{aligned}
\end{equation}
Writing the wedges as grade two selections, and noting that \( e^{i\theta} \Be_1 = \Be_1 e^{-i\theta } \), we have
\begin{equation}\label{eqn:twoForceStaticsProblem:100}
\begin{aligned}
f_r &= - m g \frac{ \gpgradetwo{\Be_1^2 e^{i\beta}} }{ \gpgradetwo{ \Be_1^2 e^{-i\alpha} e^{i\beta} } } = - m g \frac{ \sin\beta }{ \sin\lr{ \beta - \alpha } } \\
f_s &= - m g \frac{ \gpgradetwo{\Be_1^2 e^{i\alpha}} }{ \gpgradetwo{ \Be_1^2 e^{-i\beta} e^{i\alpha} } } = m g \frac{ \sin\alpha }{ \sin\lr{ \beta - \alpha } }.
\end{aligned}
\end{equation}
The grade selection picks out a unit pseudoscalar factor in both the numerator and denominator, which cancels, leaving the final scalar result.
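To make that cancellation explicit, since \( \Be_1^2 = 1 \), the grade two selections are just
\begin{equation}
\gpgradetwo{ e^{i\beta} } = i \sin\beta, \qquad \gpgradetwo{ e^{-i\alpha} e^{i\beta} } = i \sin\lr{ \beta - \alpha },
\end{equation}
and similarly for the \( f_s \) ratio, so the factors of \( i \) cancel, leaving the scalar ratios above.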

As a complex variable problem.

Observe that we could have reframed the problem as a multivector problem by left multiplying \ref{eqn:twoForceStaticsProblem:60} by \( \Be_1 \) to find
\begin{equation}\label{eqn:twoForceStaticsProblem:120}
f_r e^{i\alpha} + f_s e^{i\beta} + m g = 0.
\end{equation}
Alternatively, we could have written the equations this way directly as a complex variable problem.

We can now solve for \( f_r \) or \( f_s \) by multiplying by the conjugate of one of the complex exponentials. That is
\begin{equation}\label{eqn:twoForceStaticsProblem:140}
\begin{aligned}
f_r + f_s e^{i\beta} e^{-i\alpha} + m g e^{-i\alpha} &= 0 \\
f_r e^{i\alpha} e^{-i\beta} + f_s + m g e^{-i\beta} &= 0.
\end{aligned}
\end{equation}
Selecting the bivector part of these equations (if interpreted as multivector equations), or the imaginary part (if interpreted as complex variable equations), eliminates one of the force magnitudes from each equation, after which we find the same result.
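Explicitly, the imaginary (bivector) parts of \ref{eqn:twoForceStaticsProblem:140} are
\begin{equation}
\begin{aligned}
f_s \sin\lr{ \beta - \alpha } - m g \sin\alpha &= 0 \\
-f_r \sin\lr{ \beta - \alpha } - m g \sin\beta &= 0,
\end{aligned}
\end{equation}
which recover \ref{eqn:twoForceStaticsProblem:100} immediately.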

This last approach, treating the problem as either a complex number problem (selecting imaginary parts), or a multivector problem (selecting bivectors), seems the simplest. We have no messy cross products, nor do we have to haul out any trig identities (the sine difference in the denominator comes practically for free, as it did with the wedge product method.)

Canonical bivectors in spacetime algebra.

December 5, 2022 math and physics play


I’ve been enjoying XylyXylyX’s QED Prerequisites Geometric Algebra: Spacetime YouTube series, which is doing a thorough walk through of [1], filling in missing details. The last episode, QED Prerequisites Geometric Algebra 15: Complex Structure, left things with a bit of a cliffhanger, mentioning a “canonical” form for STA bivectors that was intriguing.

The idea is that STA bivectors, like spacetime vectors, can be spacelike, timelike, or lightlike (i.e.: positive, negative, or zero square), but can also have a complex signature (squaring to a 0,4-multivector.)

The only context in which I knew one wanted to square an STA bivector was the electrodynamic field Lagrangian, which has an \( F^2 \) term. In no other context that I knew of was the signature of \( F \), the electrodynamic field, of interest, so I’d never considered this “canonical form” representation.

Here are some examples:
\begin{equation}\label{eqn:canonicalbivectors:20}
\begin{aligned}
F &= \gamma_{10}, \quad F^2 = 1 \\
F &= \gamma_{23}, \quad F^2 = -1 \\
F &= 4 \gamma_{10} + \gamma_{13}, \quad F^2 = 15 \\
F &= \gamma_{10} + \gamma_{13}, \quad F^2 = 0 \\
F &= \gamma_{10} + 4 \gamma_{13}, \quad F^2 = -15 \\
F &= \gamma_{10} + \gamma_{23}, \quad F^2 = -2 I \\
F &= \gamma_{10} - 2 \gamma_{23}, \quad F^2 = -3 + 4 I.
\end{aligned}
\end{equation}
You can see in this table that all the \( F \)’s that are purely electric have a positive signature, and all the purely magnetic fields have a negative signature, but when there is a mix, anything goes. The idea behind the canonical representation in the paper is to write
\begin{equation}\label{eqn:canonicalbivectors:40}
F = f e^{I \phi},
\end{equation}
where \( f^2 \) is real and positive, assuming that \( F \) is not lightlike.

The paper gives a formula for computing \( f \) and \( \phi\), but let’s do this by example, putting all the \( F^2 \)’s above into their complex polar form representation, like so
\begin{equation}\label{eqn:canonicalbivectors:60}
\begin{aligned}
F &= \gamma_{10}, \quad F^2 = 1 \\
F &= \gamma_{23}, \quad F^2 = 1 e^{\pi I} \\
F &= 4 \gamma_{10} + \gamma_{13}, \quad F^2 = 15 \\
F &= \gamma_{10} + \gamma_{13}, \quad F^2 = 0 \\
F &= \gamma_{10} + 4 \gamma_{13}, \quad F^2 = 15 e^{\pi I} \\
F &= \gamma_{10} + \gamma_{23}, \quad F^2 = 2 e^{-(\pi/2) I} \\
F &= \gamma_{10} - 2 \gamma_{23}, \quad F^2 = 5 e^{ (\pi - \arctan(4/3)) I}.
\end{aligned}
\end{equation}
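For instance, for the last entry, the modulus is \( \Abs{-3 + 4 I} = \sqrt{3^2 + 4^2} = 5 \), and since \( -3 + 4 I \) lies in the second quadrant of the (scalar, pseudoscalar) plane, the argument is \( \pi - \arctan(4/3) \).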

Since we can put \( F^2 \) in polar form, we can factor out half of that phase angle, so that we are left with a bivector that has a positive square. If we write
\begin{equation}\label{eqn:canonicalbivectors:80}
F^2 = \Abs{F^2} e^{2 \phi I},
\end{equation}
we can then form
\begin{equation}\label{eqn:canonicalbivectors:100}
f = F e^{-\phi I}.
\end{equation}

If we want an equation for \( \phi \), we can just write
\begin{equation}\label{eqn:canonicalbivectors:120}
2 \phi = \mathrm{Arg}( F^2 ).
\end{equation}
This is a bit better (I think) than the form given in the paper, since it will uniformly rotate \( F^2 \) toward the positive real axis, whereas the paper’s formula sometimes rotates towards the negative reals, which is a strange-seeming polar form to use.
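Numerically, since \( F^2 \) has only scalar and pseudoscalar grades, we can compute \( \phi \) by treating those two components as an ordinary complex number. Here is a minimal Mathematica sketch (the function names are mine, not from the paper):

(* phi from the scalar part s and pseudoscalar coefficient p of F^2 *)
canonicalPhi[s_, p_] := Arg[s + I p]/2
(* scalar and pseudoscalar coefficients of Exp[-phi I] *)
rotorParts[s_, p_] := With[{phi = canonicalPhi[s, p]}, {Cos[phi], -Sin[phi]}]
N[rotorParts[-3, 4]]  (* -> {0.447214, -0.894427} *)

The last line reproduces the \( \lr{ 1 - 2 I }/\sqrt{5} \) factor used in the example below.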

Let’s compute \( f \) for \( F = \gamma_{10} - 2 \gamma_{23} \), using
\begin{equation}\label{eqn:canonicalbivectors:140}
2 \phi = \pi - \arctan(4/3).
\end{equation}
The exponential expands to
\begin{equation}\label{eqn:canonicalbivectors:160}
e^{-\phi I} = \inv{\sqrt{5}} \lr{ 1 - 2 I }.
\end{equation}
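To see where this comes from, note that \( \cos 2\phi = -3/5 \) and \( \sin 2\phi = 4/5 \), so the half angle values (both positive, since \( \phi \in (0, \pi/2) \)) are
\begin{equation}
\cos\phi = \sqrt{ \frac{1 + \cos 2\phi}{2} } = \inv{\sqrt{5}}, \qquad
\sin\phi = \sqrt{ \frac{1 - \cos 2\phi}{2} } = \frac{2}{\sqrt{5}},
\end{equation}
giving \( e^{-\phi I} = \cos\phi - I \sin\phi = \lr{ 1 - 2 I }/\sqrt{5} \).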

Multiplying each of the bivector components by \(1 - 2 I\), we find
\begin{equation}\label{eqn:canonicalbivectors:180}
\begin{aligned}
\gamma_{10} \lr{ 1 - 2 I}
&=
\gamma_{10} - 2 \gamma_{100123} \\
&=
\gamma_{10} - 2 \gamma_{1123} \\
&=
\gamma_{10} + 2 \gamma_{23},
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:canonicalbivectors:200}
\begin{aligned}
- 2 \gamma_{23} \lr{ 1 - 2 I}
&=
- 2 \gamma_{23}
+ 4 \gamma_{230123} \\
&=
- 2 \gamma_{23}
+ 4 \gamma_{23}^2 \gamma_{01} \\
&=
- 2 \gamma_{23}
+ 4 \gamma_{10},
\end{aligned}
\end{equation}
leaving
\begin{equation}\label{eqn:canonicalbivectors:220}
f = \sqrt{5} \gamma_{10},
\end{equation}
so the canonical form is
\begin{equation}\label{eqn:canonicalbivectors:240}
F = \gamma_{10} - 2 \gamma_{23} = \sqrt{5} \gamma_{10} \frac{1 + 2 I}{\sqrt{5}}.
\end{equation}
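As a check, multiplying back out, using \( \gamma_{10} I = -\gamma_{23} \) from \ref{eqn:canonicalbivectors:180}, we have
\begin{equation}
f e^{\phi I} = \sqrt{5} \gamma_{10} \frac{1 + 2 I}{\sqrt{5}} = \gamma_{10} + 2 \gamma_{10} I = \gamma_{10} - 2 \gamma_{23} = F,
\end{equation}
and \( f^2 = 5 = \Abs{F^2} \), as desired.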

It’s interesting here that \( f \), in this case, is a spatial bivector (i.e.: pure electric field), but that clearly isn’t always going to be the case, since we can have a case like,
\begin{equation}\label{eqn:canonicalbivectors:260}
F = 4 \gamma_{10} + \gamma_{13} = 4 \gamma_{10} + \gamma_{20} I,
\end{equation}
from the table above, that has both electric and magnetic field components, yet is already in the canonical form, with \( F^2 = 15 \). The canonical \( f \), despite having a positive square, is not necessarily a spatial bivector (as it may have both grades 1,2 in the spatial representation, not just the electric field, spatial grade-1 component.)

References

[1] Justin Dressel, Konstantin Y Bliokh, and Franco Nori. Spacetime algebra as a powerful tool for electromagnetism. Physics Reports, 589:1–71, 2015.

Exploring 0^0, x^x, and z^z.

May 10, 2020 math and physics play

My YouTube home page knows that I’m geeky enough to watch math videos.  Today it suggested Eddie Woo’s video about \(0^0\).

Mr. Woo, who has great enthusiasm, must be an awesome teacher to have in person.  He reminds his class about the exponent laws, which allow for an interpretation that \(0^0\) would be equal to 1.  He then points out that \(0^n = 0\) for any positive integer \(n\), which admits a second, contradictory, value for \( 0^0 \) if this were true for \(n=0\) too.

When reviewing the exponent laws Woo points out that the exponent law for subtraction \( a^{n-n} \) requires \(a\) to be non-zero.  Given that restriction, we really ought to have no expectation that \(0^{n-n} = 1\).

To attempt to determine a reasonable value for \( 0^0 \), resolving these two contradictory possibilities (neither of which we actually have any reason to assume is valid), he asks the class to perform a proof by calculator, computing a limit table for \( x \rightarrow 0^+ \). I stopped at that point and tried it by myself, constructing such a table in Mathematica. Here is what I used

(* griddisp: display a framed two column table, with headers labelc1, labelc2, of values and f applied to each value *)
griddisp[labelc1_, labelc2_, f_, values_] := Grid[({
   ({{labelc1}, values}) // Flatten,
   ({{labelc2}, f[#] & /@ values}) // Flatten
   }) // Transpose,
  Frame -> All]
(* decimalFractions: the list 10^-1, 10^-2, ..., 10^-n *)
decimalFractions[n_] := ((10^(-#)) & /@ Range[n])
(* limit tables for x -> 0, approaching from above and from below *)
With[{m = 10}, griddisp[x, x^x, #^# &, N[decimalFractions[m], 10]]]
With[{m = 10}, griddisp[x, x^x, #^# &, -N[decimalFractions[m], 10]]]

Observe that I calculated the limits from both above and below. The results are

and for the negative limit

Sure enough, from both below and above, we see numerically that \(\lim_{\epsilon\rightarrow 0} \epsilon^\epsilon = 1\), as if the exponent law argument for \( 0^0 = 1 \) was actually valid.  We see that this limit appears to hold despite the fact that \( x^x \) can be complex valued — that is, ignoring the fact that a rigorous limit argument should be valid for any path in a neighbourhood of \( x = 0 \), and not just along two specific (real valued) paths.
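As a single point spot check in Mathematica (the outputs, shown in the comments, are approximate):

(* x^x near zero, approaching from above and from below *)
N[#^# &[10.^-10]]   (* -> 1. *)
N[#^# &[-10.^-10]]  (* -> 1. - 3.14159*10^-10 I *)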

Let’s get a better idea where the imaginary component of \((-x)^{-x}\) comes from.  To do so, consider \( f(z) = z^z \) for complex values of \( z \) where \( z = r e^{i \theta} \). The logarithm of such a beast is

\begin{equation}\label{eqn:xtox:20}
\begin{aligned}
\ln z^z
&= z \ln \lr{ r e^{i\theta} } \\
&= z \ln r + i \theta z \\
&= e^{i\theta} \ln r^r + i \theta z \\
&= \lr{ \cos\theta + i \sin\theta } \ln r^r + i r \theta \lr{ \cos\theta + i \sin\theta } \\
&= \cos\theta \ln r^r - r \theta \sin\theta
+ i r \lr{ \sin\theta \ln r + \theta \cos\theta },
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:xtox:40}
z^z =
e^{ r \lr{ \cos\theta \ln r – \theta \sin\theta}} \times
e^{i r \lr{ \sin\theta \ln r + \theta \cos\theta }}.
\end{equation}
In particular, picking the \( \theta = \pi \) branch, we have, for any \( x > 0 \)
\begin{equation}\label{eqn:xtox:60}
(-x)^{-x} = e^{-x \ln x - i x \pi } = \frac{e^{ - i x \pi }}{x^x}.
\end{equation}
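A quick numerical check of this branch formula (Mathematica’s fractional power uses the same principal branch), at an arbitrarily chosen point:

(* verify (-x)^(-x) = Exp[-I Pi x]/x^x on the principal branch *)
With[{x = 0.3}, {(-x)^(-x), Exp[-I Pi x]/x^x}]
(* -> both entries approximately 0.8435 - 1.1610 I *)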

Let’s get some visual appreciation for this interesting \(z^z\) beastie, first plotting it for real values of \(z\)


Manipulate[
Plot[ {Re[x^x], Im[x^x]}, {x, -r, r}
, PlotRange -> {{-r, r}, {-r^r, r^r}}
, PlotLegends -> {Re[x^x], Im[x^x]}
], {{r, 2.25}, 0.0000001, 10}]

From this display, we see that the imaginary part of \( x^x \) is zero for integer values of \( x \).  That’s easy enough to verify explicitly: \( (-1)^{-1} = -1, (-2)^{-2} = 1/4, (-3)^{-3} = -1/27, \cdots \).

The newest version of Mathematica has a few nice new complex number visualization options.  Here are two that I found illuminating. The first is an absolute value plot that highlights the poles and zeros, while also showing some of the phase action:

Manipulate[
ComplexPlot[ x^x, {x, s (-1 - I), s (1 + I)},
PlotLegends -> Automatic, ColorFunction -> "GlobalAbs"], {{s, 4},
0.00001, 10}]

We see the branch cut nicely, the tendency to zero in the left half plane, as well as some of the phase periodicity in the intermediate regions between the zeros and the poles.  We can also plot just the phase, which shows its interesting periodic nature:


Manipulate[
ComplexPlot[ x^x, {x, s (-1 - I), s (1 + I)},
PlotLegends -> Automatic, ColorFunction -> "CyclicArg"], {{s, 6},
0.00001, 10}]

I’d like to take the time to play with some of the other ComplexPlot ColorFunction options; ComplexPlot appears to be a powerful and flexible visualization tool.