
A fun application of Green’s functions and geometric algebra: Residue calculus

November 2, 2025 math and physics play


Motivation.

A fun application of both Green’s functions and geometric algebra is to show how the Cauchy integral equation can be expressed in terms of the Green’s function for the 2D gradient. This is covered, almost as an aside, in [1]. I found that treatment a bit hard to understand, so I am going to work through it here at my own pace.

Complex numbers in geometric algebra.

Anybody who has studied geometric algebra is likely familiar with a variety of ways to construct complex numbers from geometric objects. For example, complex numbers can be constructed for any plane. If \( \Be_1, \Be_2 \) is a pair of orthonormal vectors for some plane in \(\mathbb{R}^N\), then any vector in that plane has the form
\begin{equation}\label{eqn:residueGreens:20}
\Bf = \Be_1 u + \Be_2 v,
\end{equation}
and has an associated complex representation, obtained by simply multiplying that vector by one of those basis vectors. For example, if we pre-multiply \( \Bf \) by \( \Be_1 \), forming
\begin{equation}\label{eqn:residueGreens:40}
\begin{aligned}
z
&= \Be_1 \Bf \\
&= \Be_1 \lr{ \Be_1 u + \Be_2 v } \\
&= u + \Be_1 \Be_2 v.
\end{aligned}
\end{equation}

We may identify the unit bivector \( \Be_1 \Be_2 \) as an imaginary, designated \( i \), since it has the expected behavior
\begin{equation}\label{eqn:residueGreens:60}
\begin{aligned}
i^2 &=
\lr{\Be_1 \Be_2}^2 \\
&=
\lr{\Be_1 \Be_2}
\lr{\Be_1 \Be_2} \\
&=
\Be_1 \lr{\Be_2
\Be_1} \Be_2 \\
&=
-\Be_1 \lr{\Be_1
\Be_2} \Be_2 \\
&=
-\lr{\Be_1 \Be_1}
\lr{\Be_2 \Be_2} \\
&=
-1.
\end{aligned}
\end{equation}

Complex numbers are seen to be isomorphic to even grade multivectors in a planar subspace. The imaginary is the grade-two pseudoscalar, and geometrically is an oriented unit area (bivector.)
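This isomorphism is easy to probe numerically. Here is a small Python sketch (my addition, not part of the original discussion), using a real \( 2 \times 2 \) matrix representation of \( \Be_1, \Be_2 \) (two matrices that square to one and anticommute), to confirm that \( \lr{\Be_1 \Be_2}^2 = -1 \) and that \( \Be_1 \Bf \) has the form \( u + i v \):

```python
# Matrix model (an addition): e1, e2 as real 2x2 matrices that square to +1
# and anticommute, so that the bivector e1 e2 squares to -1.
def mmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def madd(a, b):
    return [[a[r][c] + b[r][c] for c in range(2)] for r in range(2)]

def scale(s, a):
    return [[s * a[r][c] for c in range(2)] for r in range(2)]

e1 = [[1, 0], [0, -1]]
e2 = [[0, 1], [1, 0]]
ident = [[1, 0], [0, 1]]

i = mmul(e1, e2)                          # the unit bivector e1 e2
assert mmul(i, i) == scale(-1, ident)     # i^2 = -1

u, v = 3, 4
f = madd(scale(u, e1), scale(v, e2))      # f = e1 u + e2 v
z = mmul(e1, f)
assert z == madd(scale(u, ident), scale(v, i))   # z = u + i v
```

The resulting matrix for \( z \) is the familiar \( 2 \times 2 \) real matrix representation of the complex number \( u + i v \).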

Cauchy equations in terms of the gradient.

It is natural to wonder about the geometric algebra equivalents of various complex-number relationships and identities. Of particular interest for this discussion is the geometric algebra equivalent of the Cauchy equations that specify required conditions for a function to be differentiable.

If a complex function \( f(z) = u(z) + i v(z) \) is differentiable, then we must be able to find the limit of
\begin{equation}\label{eqn:residueGreens:80}
\frac{\Delta f(z_0)}{\Delta z} = \frac{f(z_0 + h) - f(z_0)}{h},
\end{equation}
as complex \( h \rightarrow 0 \), along any possible trajectory of \( z_0 + h \) toward \( z_0 \). In particular, for real \( h = \epsilon \),
\begin{equation}\label{eqn:residueGreens:100}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0 + \epsilon, y_0) + i v(x_0 + \epsilon, y_0) - u(x_0, y_0) - i v(x_0, y_0)}{\epsilon}
=
\PD{x}{u(z_0)} + i \PD{x}{v(z_0)},
\end{equation}
and for imaginary \( h = i \epsilon \)
\begin{equation}\label{eqn:residueGreens:120}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0, y_0 + \epsilon) + i v(x_0, y_0 + \epsilon) - u(x_0, y_0) - i v(x_0, y_0)}{i \epsilon}
=
-i\lr{ \PD{y}{u(z_0)} + i \PD{y}{v(z_0)} }.
\end{equation}
Equating real and imaginary parts, we see that existence of the derivative requires
\begin{equation}\label{eqn:residueGreens:140}
\begin{aligned}
\PD{x}{u} &= \PD{y}{v} \\
\PD{x}{v} &= -\PD{y}{u}.
\end{aligned}
\end{equation}
These are the Cauchy equations. When the derivative exists in a given neighbourhood, we say that the function is analytic in that region. If we use a bivector interpretation of the imaginary, with \( i = \Be_1 \Be_2 \), the Cauchy equations are also satisfied if the gradient of the complex function is zero, since
\begin{equation}\label{eqn:residueGreens:160}
\begin{aligned}
\spacegrad f
&=
\lr{ \Be_1 \partial_x + \Be_2 \partial_y } \lr{ u + \Be_1 \Be_2 v } \\
&=
\Be_1 \lr{ \partial_x u - \partial_y v } + \Be_2 \lr{ \partial_y u + \partial_x v }.
\end{aligned}
\end{equation}
We see that the geometric algebra equivalent of the Cauchy equations is simply
\begin{equation}\label{eqn:residueGreens:200}
\spacegrad f = 0.
\end{equation}
Roughly speaking, we may say that a function is analytic in a region, if the Cauchy equations are satisfied, or the gradient is zero, in a neighbourhood of all points in that region.
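As a quick numerical sanity check of this gradient form (an addition, not part of the original argument), we can verify with finite differences that the analytic function \( f(z) = z^2 \), for which \( u = x^2 - y^2 \) and \( v = 2 x y \), has vanishing gradient components:

```python
# Finite difference check (an addition): for the analytic f(z) = z^2,
# with u = x^2 - y^2 and v = 2 x y, both components of grad f vanish.
def u(x, y): return x * x - y * y
def v(x, y): return 2 * x * y

def ddx(g, x, y, h=1e-6): return (g(x + h, y) - g(x - h, y)) / (2 * h)
def ddy(g, x, y, h=1e-6): return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 1.3, -0.7
e1_component = ddx(u, x0, y0) - ddy(v, x0, y0)   # coefficient of e1
e2_component = ddy(u, x0, y0) + ddx(v, x0, y0)   # coefficient of e2
assert abs(e1_component) < 1e-8
assert abs(e2_component) < 1e-8
```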

A special case of the fundamental theorem of geometric calculus.

Given an even grade multivector \( \psi \) in \(\mathbb{R}^2\) (i.e.: a complex number), we can show that
\begin{equation}\label{eqn:residueGreens:220}
\int_A \spacegrad \psi d^2\Bx = \oint_{\partial A} d\Bx \psi.
\end{equation}
Let’s get an idea why this works by expanding the area integral for a rectangular parameterization
\begin{equation}\label{eqn:residueGreens:240}
\begin{aligned}
\int_A \spacegrad \psi d^2\Bx
&=
\int_A \lr{ \Be_1 \partial_1 + \Be_2 \partial_2 } \psi I dx dy \\
&=
\int \Be_1 I dy \evalrange{\psi}{x_0}{x_1}
+
\int \Be_2 I dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int \Be_2 dy \evalrange{\psi}{x_0}{x_1}
-
\int \Be_1 dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int d\By \evalrange{\psi}{x_0}{x_1}
-
\int d\Bx \evalrange{\psi}{y_0}{y_1}.
\end{aligned}
\end{equation}
We took advantage of the fact that the \(\mathbb{R}^2\) pseudoscalar commutes with \( \psi \). The end result, illustrated in fig. 1, shows pictorially that the remaining integral is an oriented line integral.

fig. 1. Oriented multivector line integral.

 

If we want to approximate a more general area, we may do so with additional tiles, as illustrated in fig. 2. We may evaluate the area integral using the line integral over just the exterior boundary using such a tiling, as any overlapping opposing boundary contributions cancel exactly.

fig. 2. A crude circular tiling approximation.

 

The reason that this is interesting is that it allows us to re-express a complex integral as a corresponding multivector area integral. With \( d\Bx = \Be_1 dz \), we have
\begin{equation}\label{eqn:residueGreens:260}
\oint dz\, \psi = \Be_1 \int \spacegrad \psi d^2\Bx.
\end{equation}
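For non-analytic \( \psi \) this relation has real content, and it can be checked numerically. Premultiplying by \( \Be_1 \) maps the gradient to \( \partial_x + i \partial_y \), so (this reduction is my addition) for the non-analytic \( \psi = \bar{z} = x - i y \) the right hand side is \( i \int 2 \, dA \), twice \( i \) times the enclosed area. On the unit circle:

```python
import cmath
import math

# Numerical check (an addition): e1 grad psi = (d/dx + i d/dy) psi, so for
# psi = conj(z) = x - i y, the line/area relation above reads
#   oint psi dz = 2 i * (enclosed area) = 2 i pi  on the unit circle.
N = 10000
lhs = 0j
for n in range(N):
    theta = 2 * math.pi * n / N
    z = cmath.exp(1j * theta)          # point on the unit circle
    dz = 1j * z * (2 * math.pi / N)    # oriented line element
    lhs += z.conjugate() * dz

assert abs(lhs - 2j * math.pi) < 1e-9
```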

The Cauchy kernel as a Green’s function.

We’ve previously derived the Green’s function for the 2D Laplacian, and found
\begin{equation}\label{eqn:residueGreens:280}
\tilde{G}(\Bx, \Bx') = \inv{2\pi} \ln \Abs{\Bx - \Bx'},
\end{equation}
which satisfies
\begin{equation}\label{eqn:residueGreens:300}
\delta^2(\Bx - \Bx') = \spacegrad^2 \tilde{G}(\Bx, \Bx') = \spacegrad \lr{ \spacegrad \tilde{G}(\Bx, \Bx') }.
\end{equation}
This means that \( G(\Bx, \Bx’) = \spacegrad \tilde{G}(\Bx, \Bx’) \) is the Green’s function for the gradient. That Green’s function is
\begin{equation}\label{eqn:residueGreens:320}
\begin{aligned}
G(\Bx, \Ba)
&= \inv{2 \pi} \frac{\spacegrad \Abs{\Bx - \Ba}}{\Abs{\Bx - \Ba}} \\
&= \inv{2 \pi} \frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2}.
\end{aligned}
\end{equation}
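A quick finite difference check of that gradient (an addition, not in the original):

```python
import math

# Finite difference check (an addition): the gradient of ln|x - a|/(2 pi)
# is (x - a)/(2 pi |x - a|^2).
a, b = 0.4, -1.1          # components of the point a
x0, y0 = 2.0, 1.0         # evaluation point
h = 1e-6

def G_tilde(x, y):
    return math.log(math.hypot(x - a, y - b)) / (2 * math.pi)

gx = (G_tilde(x0 + h, y0) - G_tilde(x0 - h, y0)) / (2 * h)
gy = (G_tilde(x0, y0 + h) - G_tilde(x0, y0 - h)) / (2 * h)

r2 = (x0 - a) ** 2 + (y0 - b) ** 2
assert abs(gx - (x0 - a) / (2 * math.pi * r2)) < 1e-8
assert abs(gy - (y0 - b) / (2 * math.pi * r2)) < 1e-8
```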
We may cast this Green’s function into complex form with \( z = \Be_1 \Bx, a = \Be_1 \Ba \). In particular
\begin{equation}\label{eqn:residueGreens:340}
\begin{aligned}
\inv{z - a}
&=
\frac{(z - a)^\conj}{\Abs{z - a}^2} \\
&=
\frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2} \Be_1 \\
&=
2 \pi G(\Bx, \Ba) \Be_1.
\end{aligned}
\end{equation}

Cauchy’s integral.

With
\begin{equation}\label{eqn:residueGreens:360}
\psi = \frac{f(z)}{z - a},
\end{equation}
using \ref{eqn:residueGreens:260}, we can now evaluate
\begin{equation}\label{eqn:residueGreens:265}
\begin{aligned}
\oint dz\, \frac{f(z)}{z - a}
&= \Be_1 \int \spacegrad \frac{f(z)}{z - a} d^2\Bx \\
&= \Be_1 \int \lr{ \frac{\spacegrad f(z)}{z - a} + \lr{ \spacegrad \inv{z - a}} f(z) } I dA \\
&= \Be_1 \int \lr{ \spacegrad 2 \pi G(\Bx, \Ba) } \Be_1 f(z) I dA \\
&= 2 \pi \Be_1 \int \delta^2(\Bx - \Ba) \Be_1 f(\Bx) I dA \\
&= 2 \pi \Be_1^2 f(\Ba) I \\
&= 2 \pi I f(a),
\end{aligned}
\end{equation}
where we’ve made use of the analytic condition \( \spacegrad f = 0 \), and the fact that \( f \) and \( 1/(z-a) \), both even multivectors, commute.

The Cauchy integral equation
\begin{equation}\label{eqn:residueGreens:380}
f(a) = \inv{2 \pi I} \oint dz\, \frac{f(z)}{z - a},
\end{equation}
falls out naturally. This sort of residue calculation always seemed a bit miraculous. By introducing a geometric algebra encoding of complex numbers, we get a new and interesting interpretation. In particular,

  1. the imaginary factor in the geometric algebra formulation of this identity is an oriented unit area coming directly from the area element,
  2. the factor of \( 2 \pi \) comes directly from the Green’s function for the gradient,
  3. the fact that this particular form of integral picks up only the contribution at the point \( z = a \) no longer seems mysterious. It is directly due to delta-function filtering.

Also, if we are looking for an understanding of how to generalize the Cauchy equation to more general multivector functions, we now also have a good clue how that would be done.
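As a final sanity check (my addition), the integral form is easy to verify in ordinary complex arithmetic, say with \( f(z) = z^2 \) and a unit circle contour enclosing \( a \):

```python
import cmath
import math

# Numerical verification (an addition) of the Cauchy integral formula
#   oint f(z)/(z - a) dz = 2 pi i f(a)
# for f(z) = z^2 on the unit circle, with a inside.
def f(z):
    return z * z

a = 0.3 + 0.2j
N = 2000
total = 0j
for n in range(N):
    theta = 2 * math.pi * n / N
    z = cmath.exp(1j * theta)
    dz = 1j * z * (2 * math.pi / N)
    total += f(z) / (z - a) * dz

assert abs(total - 2j * math.pi * f(a)) < 1e-8
```

The equispaced sum over a circle converges extremely quickly here, since the integrand is smooth on the contour.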

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

A generalized Gaussian integral.

July 26, 2025 math and physics play


Here’s another problem from [1]. The point is to show that
\begin{equation}\label{eqn:generalizedGaussian:20}
G(x,x',\tau) = \inv{2\pi} \int_{-\infty}^\infty e^{i k\lr{ x - x' } } e^{-k^2 \tau} dk,
\end{equation}
has the value
\begin{equation}\label{eqn:generalizedGaussian:40}
G(x,x',\tau) = \inv{\sqrt{4 \pi \tau} } e^{-\lr{ x - x'}^2/4 \tau },
\end{equation}
not just for real \(\tau\), but also for purely imaginary \(\tau\).

Real case.

The authors claim the real case is easy, but I don’t think it is entirely trivial. The trivial part is basically just completing the square. Writing \( x - x' = \Delta x \), that is
\begin{equation}\label{eqn:generalizedGaussian:60}
\begin{aligned}
-k^2 \tau + i k\lr{ x - x' }
&=
-\tau \lr{ k^2 - i k \Delta x/\tau } \\
&=
-\tau \lr{ \lr{ k - i \Delta x/2\tau }^2 - \lr{ - i \Delta x/2\tau }^2 } \\
&=
-\tau \lr{ \lr{ k - i \Delta x/2\tau }^2 + \lr{ \Delta x/2\tau }^2 }.
\end{aligned}
\end{equation}
So we have
\begin{equation}\label{eqn:generalizedGaussian:80}
G(x,x',\tau) = \inv{2\pi} e^{-(\Delta x/2)^2/\tau} \int_{-\infty}^\infty e^{-\tau \lr{ k - i \Delta x/2\tau }^2 } dk.
\end{equation}
Let’s call this integral factor \( I \)
\begin{equation}\label{eqn:generalizedGaussian:100}
I = \int_{-\infty}^\infty e^{-\tau \lr{ k - i \Delta x/2\tau }^2 } dk.
\end{equation}
However, making a change of variables makes this an integral over a complex path
\begin{equation}\label{eqn:generalizedGaussian:120}
I = \int_{-\infty - i \Delta x/2\tau }^{\infty - i \Delta x/2\tau} e^{-\tau k^2 } dk.
\end{equation}
If you are lazy you could say that \( \pm \infty \) adjusted by a constant, even if that constant is imaginary, leaves the integration limits unchanged. That’s clearly true if the constant is real, but I don’t think it’s that obvious if the constant is imaginary.

To answer that question more exactly, let’s consider the integral
\begin{equation}\label{eqn:generalizedGaussian:140}
0 = I + J + K + L = \oint e^{-\tau z^2} dz,
\end{equation}
where the path is illustrated in fig. 1.

fig. 1. Contour for the Gaussian.

Since there are no enclosed poles, we have
\begin{equation}\label{eqn:generalizedGaussian:160}
I = \int_{-\infty}^\infty e^{-\tau k^2 } dk + K + L,
\end{equation}
where
\begin{equation}\label{eqn:generalizedGaussian:180}
\begin{aligned}
K &= \int_{- i \Delta x/2\tau}^0 e^{-\tau z^2} dz \\
L &= \int_0^{- i \Delta x/2\tau} e^{-\tau z^2} dz.
\end{aligned}
\end{equation}
We now see that \( K \) and \( L \) cancel exactly, which justifies the change of variables and the corresponding integration limits.
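That cancellation can also be probed numerically (an addition to the post): the value of the shifted Gaussian integral does not depend on the imaginary offset.

```python
import cmath
import math

# Numerical check (an addition) that an imaginary shift of the integration
# limits is harmless: the integral of exp(-(k - i c)^2) dk over the real
# line equals sqrt(pi) for any real c (here tau = 1).
def shifted_gaussian(c, L=10.0, N=20000):
    h = 2 * L / N
    total = 0j
    for n in range(N + 1):
        k = -L + n * h
        w = 0.5 if n in (0, N) else 1.0    # trapezoid weights
        total += w * cmath.exp(-((k - 1j * c) ** 2))
    return total * h

for c in (0.0, 0.5, 1.5):
    assert abs(shifted_gaussian(c) - math.sqrt(math.pi)) < 1e-9
```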

We can use the usual trick to evaluate \( I^2 \), to find
\begin{equation}\label{eqn:generalizedGaussian:200}
\begin{aligned}
I^2
&=
\int_{-\infty}^\infty e^{-\tau k^2 } dk
\int_{-\infty}^\infty e^{-\tau m^2 } dm \\
&=
2 \pi \int_0^\infty e^{-\tau r^2} r dr \\
&=
2 \pi \evalrange{ -\frac{e^{-\tau r^2}}{2 \tau} }{0}{\infty} \\
&=
\frac{\pi}{\tau}.
\end{aligned}
\end{equation}

So, for real values of \( \tau \) we have
\begin{equation}\label{eqn:generalizedGaussian:220}
G(x,x',\tau) = \inv{2\pi} \sqrt{\frac{\pi}{\tau}}e^{-(\Delta x/2)^2/\tau},
\end{equation}
as expected.
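As a numerical check of this real \(\tau\) result (my addition, not part of the original solution), the defining Fourier integral can be computed directly and compared against the closed form:

```python
import math

# Numerical check (an addition): for real tau the defining Fourier integral
# reproduces exp(-(dx)^2/(4 tau))/sqrt(4 pi tau).
tau, dx = 0.5, 1.2
L, N = 12.0, 20000
h = 2 * L / N
total = 0.0
for n in range(N + 1):
    k = -L + n * h
    w = 0.5 if n in (0, N) else 1.0        # trapezoid weights
    total += w * math.exp(-tau * k * k) * math.cos(k * dx)
G_numeric = total * h / (2 * math.pi)

G_closed = math.exp(-dx * dx / (4 * tau)) / math.sqrt(4 * math.pi * tau)
assert abs(G_numeric - G_closed) < 1e-8
```

Only the cosine part of \( e^{i k \Delta x} \) survives, since the sine part is odd in \( k \).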

Imaginary case.

For the imaginary case, let \( \tau = i \alpha \). Let’s recomplete the square from scratch with this substitution
\begin{equation}\label{eqn:generalizedGaussian:240}
\begin{aligned}
-k^2 i \alpha + i k \Delta x
&=
- i \alpha \lr{ k^2 - \frac{i k \Delta x}{i \alpha} } \\
&=
- i \alpha \lr{ k^2 - \frac{k \Delta x}{\alpha} } \\
&=
- i \alpha \lr{ \lr{ k - \frac{\Delta x}{2 \alpha} }^2 - \lr{ \frac{\Delta x}{2 \alpha} }^2 }.
\end{aligned}
\end{equation}
So we have
\begin{equation}\label{eqn:generalizedGaussian:260}
\begin{aligned}
G(x,x',\tau)
&= \inv{2\pi} e^{i \alpha(\Delta x/2\alpha)^2} \int_{-\infty}^\infty e^{- i \alpha \lr{ k - \frac{\Delta x}{2 \alpha } }^2 } dk \\
&= \inv{\sqrt{\pi^2 \alpha}} e^{- (\Delta x)^2/4\tau} \int_0^\infty e^{- i m^2 } dm.
\end{aligned}
\end{equation}
This time we can make a \( m = \sqrt{\alpha} \lr{k – \frac{\Delta x}{2 \alpha }} \) change of variables, but don’t have to worry about imaginary displacements of the integration limits.

The task is now reduced to the evaluation of an imaginary Gaussian-like integral, and we are given the hint to integrate \( e^{-z^2} \) over the pie shaped contour of fig. 2.

fig. 2. Pie shaped integration contour.

Over \( I \) we set \( z = x, x \in [0, R] \), over \( J \), we set \( z = R e^{i\theta}, \theta \in [0, \pi/4] \), and on \( K \) we set \( z = u e^{i\pi/4}, u \in [R, 0] \). Since there are no enclosed poles we have
\begin{equation}\label{eqn:generalizedGaussian:280}
\begin{aligned}
0 &= I + J + K \\
&= \int_0^R e^{- x^2} dx + \int_0^{\pi/4} e^{-R^2 e^{2 i \theta} } i R e^{i\theta} d\theta - \int_0^R e^{-i u^2 } e^{i \pi/4} du.
\end{aligned}
\end{equation}
In the limit we have
\begin{equation}\label{eqn:generalizedGaussian:300}
\begin{aligned}
\int_0^\infty e^{-i u^2 } du
&= e^{-i\pi/4} \int_0^\infty e^{- x^2} dx + L \\
&= e^{-i\pi/4} \sqrt{\pi}/2 + L,
\end{aligned}
\end{equation}
where
\begin{equation}\label{eqn:generalizedGaussian:320}
L = \lim_{R\rightarrow \infty} e^{-i\pi/4} \int_0^{\pi/4} e^{-R^2 e^{2 i \theta} } i R e^{i\theta} d\theta.
\end{equation}
We hope that this is zero in the limit, but showing this requires a bit of care around the \( \pi/4 \) endpoint. We start with
\begin{equation}\label{eqn:generalizedGaussian:340}
\begin{aligned}
\Abs{L}
&\le \lim_{R\rightarrow \infty} \int_0^{\pi/4} R e^{-R^2 \cos\lr{2 \theta} } d\theta \\
&= \lim_{R\rightarrow \infty} \inv{2} \int_{\pi/4 - \epsilon/2}^{\pi/4} R e^{-R^2 \cos\lr{2 \theta} } (2 d\theta),
\end{aligned}
\end{equation}
This equality holds in the limit, since over \( [0, \pi/4 - \epsilon/2] \) we have \( \cos 2\theta \ge \sin\epsilon > 0 \), so that portion of the integral is bounded by \( (\pi/4) R e^{-R^2 \sin\epsilon} \rightarrow 0 \). For the remaining portion we make a change of variables
\begin{equation}\label{eqn:generalizedGaussian:360}
t = \frac{\pi}{2} - 2 \theta.
\end{equation}
At the limits we have
\begin{equation}\label{eqn:generalizedGaussian:380}
\begin{aligned}
t(\pi/4 - \epsilon/2) &= \epsilon \\
t(\pi/4) &= 0.
\end{aligned}
\end{equation}
Also,
\begin{equation}\label{eqn:generalizedGaussian:400}
\begin{aligned}
\cos\lr{ 2 \theta }
&= \textrm{Re} \lr{ e^{2 i \theta} } \\
&= \textrm{Re} \lr{ e^{i \lr{\pi/2 - t} } } \\
&= \textrm{Re} \lr{ i e^{-i t } } \\
&= \sin t,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:generalizedGaussian:420}
\Abs{L} \le \lim_{R\rightarrow \infty} \inv{2} \int_0^\epsilon R e^{-R^2 \sin t} dt.
\end{equation}
Since we have forced \( t \) small, we can use the small angle approximation for the sine
\begin{equation}\label{eqn:generalizedGaussian:440}
\begin{aligned}
\int_0^\epsilon R e^{-R^2 \sin t} dt
&\approx
\int_0^\epsilon R e^{-R^2 t} dt \\
&= \evalrange{ -\inv{R} e^{-R^2 t} }{0}{\epsilon} \\
&= \frac{ 1 - e^{-R^2 \epsilon }}{R}.
\end{aligned}
\end{equation}
The numerator goes to zero as \( \epsilon \rightarrow 0 \), and for fixed \( \epsilon \) it is bounded while the denominator goes to infinity as \( R \rightarrow \infty \), so we have the desired zero in the limit. This means that
\begin{equation}\label{eqn:generalizedGaussian:460}
\begin{aligned}
G(x,x',\tau)
&= \frac{e^{-i\pi/4}}{\sqrt{4 \pi \alpha}} e^{- (\Delta x)^2/4\tau} \\
&= \frac{1}{\sqrt{4 \pi i \alpha}} e^{- (\Delta x)^2/4\tau} \\
&= \frac{1}{\sqrt{4 \pi \tau}} e^{- (\Delta x)^2/4\tau},
\end{aligned}
\end{equation}
as expected.

A fun application, also noted in the problem, is that we can decompose the imaginary integral
\begin{equation}\label{eqn:generalizedGaussian:480}
\int_{-\infty}^\infty e^{-i u^2} du = \sqrt{\pi} e^{-i\pi/4},
\end{equation}
into real and imaginary parts, to find
\begin{equation}\label{eqn:generalizedGaussian:500}
\int_{-\infty}^\infty \cos u^2 du = \int_{-\infty}^\infty \sin u^2 du = \sqrt{\frac{\pi}{2}}.
\end{equation}
Despite being real valued integrals, it is not at all obvious how one would go about finding those without these contour integration tricks.

References

[1] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover Publications, 1992.

Exploring 0^0, x^x, and z^z.

May 10, 2020 math and physics play

My Youtube home page knows that I’m geeky enough to watch math videos.  Today it suggested Eddie Woo’s video about \(0^0\).

Mr. Woo has great enthusiasm, and must be an awesome teacher to have in person. He reminds his class about the exponent laws, which allow for an interpretation that \(0^0\) would be equal to 1. He points out that \(0^n = 0\) for any positive integer \( n \), which admits a second contradictory value for \( 0^0 \), if this were true for \(n=0\) too.

When reviewing the exponent laws Woo points out that the exponent law for subtraction \( a^{n-n} \) requires \(a\) to be non-zero.  Given that restriction, we really ought to have no expectation that \(0^{n-n} = 1\).

To attempt to determine a reasonable value for this question, resolving the two contradictory possibilities (neither of which we actually have any reason to assume is valid), he asks the class to perform a proof by calculator, computing a limit table for \( x \rightarrow 0^+ \). I stopped at that point and tried it myself, constructing such a table in Mathematica. Here is what I used

griddisp[labelc1_, labelc2_, f_, values_] := Grid[({
({{labelc1}, values}) // Flatten,
({ {labelc2}, f[#] & /@ values} ) // Flatten
}) // Transpose,
Frame -> All]
decimalFractions[n_] := ((10^(-#)) & /@ Range[n])
With[{m = 10}, griddisp[x, x^x, #^# &, N[decimalFractions[m], 10]]]
With[{m = 10}, griddisp[x, x^x, #^# &, -N[decimalFractions[m], 10]]]

Observe that I calculated the limits from both above and below, tabulating \( x^x \) for the positive limit, and then for the negative limit.
Sure enough, from both below and above, we see numerically that \(\lim_{\epsilon\rightarrow 0} \epsilon^\epsilon = 1\), as if the exponent law argument for \( 0^0 = 1 \) was actually valid. This limit appears to hold despite the fact that \( x^x \) can be complex valued, and ignores the fact that a rigorous limit argument should be valid for any path in a neighbourhood of \( x = 0 \), not just along two specific (real valued) paths.
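For anybody who prefers Python to Mathematica, here is an equivalent limit table (my addition); Python's complex power uses the same principal branch for the negative side:

```python
# A Python version (an addition; the original used Mathematica) of the
# calculator limit table: x^x from both sides of zero.
for k in range(1, 11):
    eps = 10.0 ** (-k)
    above = eps ** eps                  # real valued for x > 0
    below = (-eps + 0j) ** (-eps)       # complex valued for x < 0
    print(f"{eps:10.1e}  {above:.10f}  {below:.10f}")

# both approach one
assert abs(above - 1) < 1e-8
assert abs(below - 1) < 1e-8
```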

Let’s get a better idea where the imaginary component of \((-x)^{-x}\) comes from.  To do so, consider \( f(z) = z^z \) for complex values of \( z \) where \( z = r e^{i \theta} \). The logarithm of such a beast is

\begin{equation}\label{eqn:xtox:20}
\begin{aligned}
\ln z^z
&= z \ln \lr{ r e^{i\theta} } \\
&= z \ln r + i \theta z \\
&= e^{i\theta} \ln r^r + i \theta z \\
&= \lr{ \cos\theta + i \sin\theta } \ln r^r + i r \theta \lr{ \cos\theta + i \sin\theta } \\
&= \cos\theta \ln r^r - r \theta \sin\theta
+ i r \lr{ \sin\theta \ln r + \theta \cos\theta },
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:xtox:40}
z^z =
e^{ r \lr{ \cos\theta \ln r - \theta \sin\theta}} \times
e^{i r \lr{ \sin\theta \ln r + \theta \cos\theta }}.
\end{equation}
In particular, picking the \( \theta = \pi \) branch, we have, for any \( x > 0 \)
\begin{equation}\label{eqn:xtox:60}
(-x)^{-x} = e^{-x \ln x - i x \pi } = \frac{e^{ - i x \pi }}{x^x}.
\end{equation}
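This branch formula is easy to verify against the principal valued complex power (a check I've added):

```python
import cmath
import math

# Check (an addition) of the theta = pi branch formula: for x > 0,
# (-x)^(-x) = exp(-x ln x) exp(-i pi x), which matches the principal
# branch complex power.
for x in (0.25, 0.5, 1.5, 2.0):
    lhs = (-x + 0j) ** (-x)
    rhs = math.exp(-x * math.log(x)) * cmath.exp(-1j * math.pi * x)
    assert abs(lhs - rhs) < 1e-10

# the imaginary part vanishes at negative integers, e.g. (-2)^(-2) = 1/4
assert abs(((-2 + 0j) ** (-2)) - 0.25) < 1e-12
```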

Let’s get some visual appreciation for this interesting \(z^z\) beastie, first plotting it for real values of \(z\)


Manipulate[
Plot[ {Re[x^x], Im[x^x]}, {x, -r, r}
, PlotRange -> {{-r, r}, {-r^r, r^r}}
, PlotLegends -> {Re[x^x], Im[x^x]}
], {{r, 2.25}, 0.0000001, 10}]

From this display, we see that the imaginary part of \( x^x \) is zero for integer values of \( x \).  That’s easy enough to verify explicitly: \( (-1)^{-1} = -1, (-2)^{-2} = 1/4, (-3)^{-3} = -1/27, \cdots \).

The newest version of Mathematica has a few nice new complex number visualization options.  Here’s two that I found illuminating, an absolute value plot that highlights the poles and zeros, also showing some of the phase action:

Manipulate[
ComplexPlot[ x^x, {x, s (-1 - I), s (1 + I)},
PlotLegends -> Automatic, ColorFunction -> "GlobalAbs"], {{s, 4},
0.00001, 10}]

We see the branch cut nicely, the tendency to zero in the left half plane, as well as some of the phase periodicity in the regions that are in the intermediate regions between the zeros and the poles.  We can also plot just the phase, which shows its interesting periodic nature


Manipulate[
ComplexPlot[ x^x, {x, s (-1 - I), s (1 + I)},
PlotLegends -> Automatic, ColorFunction -> "CyclicArg"], {{s, 6},
0.00001, 10}]

I’d like to take the time to play with some of the other ComplexPlot ColorFunction options, which appears to be a powerful and flexible visualization tool.

Geometric Algebra in a nutshell.

September 29, 2016 math and physics play


Motivation

I initially thought that I might submit a problem set solution for ece1228 using Geometric Algebra. In order to justify this, I needed to add an appendix to that problem set that outlined enough of the ideas that such a solution might make sense to the grader.

I ended up changing my mind and reworked the problem entirely, removing any use of GA. Here’s the tutorial I initially considered submitting with that problem.

Geometric Algebra in a nutshell.

Geometric Algebra defines a non-commutative, associative vector product

\begin{equation}\label{eqn:gaTutorial:20}
\begin{aligned}
\Ba \Bb \Bc
&=
(\Ba \Bb) \Bc \\
&=
\Ba (\Bb \Bc),
\end{aligned}
\end{equation}

where the square of a vector equals the squared vector magnitude

\begin{equation}\label{eqn:gaTutorial:40}
\Ba^2 = \Abs{\Ba}^2.
\end{equation}

In Euclidean spaces such a squared vector is always positive, but that is not necessarily the case in the mixed signature spaces used in special relativity.

There are a number of consequences of these two simple vector multiplication rules.

  • Squared unit vectors have a unit magnitude (up to a sign). In a Euclidean space such a product is always positive

    \begin{equation}\label{eqn:gaTutorial:60}
    (\Be_1)^2 = 1.
    \end{equation}

  • Products of perpendicular vectors anticommute.

    \begin{equation}\label{eqn:gaTutorial:80}
    \begin{aligned}
    2
    &=
    (\Be_1 + \Be_2)^2 \\
    &= (\Be_1 + \Be_2)(\Be_1 + \Be_2) \\
    &= \Be_1^2 + \Be_2 \Be_1 + \Be_1 \Be_2 + \Be_2^2 \\
    &= 2 + \Be_2 \Be_1 + \Be_1 \Be_2.
    \end{aligned}
    \end{equation}

    A product of two perpendicular vectors is called a bivector, and can be used to represent an oriented plane. The last line above shows an example of a scalar and bivector sum, called a multivector. In general, Geometric Algebra allows scalars, vectors, bivectors, and higher grade analogues to be summed.

    Comparison of the RHS and LHS of \ref{eqn:gaTutorial:80} shows that we must have

    \begin{equation}\label{eqn:gaTutorial:100}
    \Be_2 \Be_1 = -\Be_1 \Be_2.
    \end{equation}

    It is true in general that the product of two perpendicular vectors anticommutes. When, as above, such a product is a product of
    two orthonormal vectors, it behaves like a non-commutative imaginary quantity, as it has a negative square in Euclidean spaces

    \begin{equation}\label{eqn:gaTutorial:120}
    \begin{aligned}
    (\Be_1 \Be_2)^2
    &=
    (\Be_1 \Be_2)
    (\Be_1 \Be_2) \\
    &=
    \Be_1 (\Be_2
    \Be_1) \Be_2 \\
    &=
    -\Be_1 (\Be_1
    \Be_2) \Be_2 \\
    &=
    -(\Be_1 \Be_1)
    (\Be_2 \Be_2) \\
    &=-1.
    \end{aligned}
    \end{equation}

    Such “imaginary” (unit bivectors) have important applications describing rotations in Euclidean spaces, and boosts in Minkowski spaces.

  • The product of three perpendicular vectors, such as

    \begin{equation}\label{eqn:gaTutorial:140}
    I = \Be_1 \Be_2 \Be_3,
    \end{equation}

    is called a trivector. In \R{3}, the product of three orthonormal vectors is called a pseudoscalar for the space, and can represent an oriented volume element. The quantity \( I \) above is the typical orientation picked for the \R{3} unit pseudoscalar. This quantity also has characteristics of an imaginary number

    \begin{equation}\label{eqn:gaTutorial:160}
    \begin{aligned}
    I^2
    &=
    (\Be_1 \Be_2 \Be_3)
    (\Be_1 \Be_2 \Be_3) \\
    &=
    \Be_1 \Be_2 (\Be_3
    \Be_1) \Be_2 \Be_3 \\
    &=
    -\Be_1 \Be_2 \Be_1
    \Be_3 \Be_2 \Be_3 \\
    &=
    -\Be_1 (\Be_2 \Be_1)
    (\Be_3 \Be_2) \Be_3 \\
    &=
    -\Be_1 (\Be_1 \Be_2)
    (\Be_2 \Be_3) \Be_3 \\
    &=
    -\Be_1^2
    \Be_2^2
    \Be_3^2 \\
    &=
    -1.
    \end{aligned}
    \end{equation}

  • The product of two vectors in \R{3} can be expressed as the sum of a symmetric scalar product and antisymmetric bivector product

    \begin{equation}\label{eqn:gaTutorial:480}
    \begin{aligned}
    \Ba \Bb
    &=
    \sum_{i,j = 1}^n \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n \Be_i^2 a_i b_i
    +
    \sum_{0 < i \ne j \le n} \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n a_i b_i
    +
    \sum_{0 < i < j \le n} \Be_i \Be_j (a_i b_j - a_j b_i).
    \end{aligned}
    \end{equation}

    The first (symmetric) term is clearly the dot product. The antisymmetric term is designated the wedge product. In general these are written

    \begin{equation}\label{eqn:gaTutorial:500}
    \Ba \Bb = \Ba \cdot \Bb + \Ba \wedge \Bb,
    \end{equation}

    where

    \begin{equation}\label{eqn:gaTutorial:520}
    \begin{aligned}
    \Ba \cdot \Bb &\equiv \inv{2} \lr{ \Ba \Bb + \Bb \Ba } \\
    \Ba \wedge \Bb &\equiv \inv{2} \lr{ \Ba \Bb - \Bb \Ba }.
    \end{aligned}
    \end{equation}

    The coordinate expansion of both can be seen above, but in \R{3} the wedge can also be written

    \begin{equation}\label{eqn:gaTutorial:540}
    \Ba \wedge \Bb = \Be_1 \Be_2 \Be_3 (\Ba \cross \Bb) = I (\Ba \cross \Bb).
    \end{equation}

    This allows for a handy dot plus cross product expansion of the vector product

    \begin{equation}\label{eqn:gaTutorial:180}
    \Ba \Bb = \Ba \cdot \Bb + I (\Ba \cross \Bb).
    \end{equation}

    This result should be familiar to the student of quantum spin states, where one writes

    \begin{equation}\label{eqn:gaTutorial:200}
    (\Bsigma \cdot \Ba) (\Bsigma \cdot \Bb) = (\Ba \cdot \Bb) + i (\Ba \cross \Bb) \cdot \Bsigma.
    \end{equation}

    This correspondence arises because the Pauli spin basis is a specific matrix representation of a Geometric Algebra, satisfying the same commutator and anticommutator relationships. A number of other algebraic structures, such as complex numbers and quaternions, can also be modelled as Geometric Algebra elements.

  • It is often useful to utilize the grade selection operator
    \( \gpgrade{M}{n} \) and scalar grade selection operator \( \gpgradezero{M} = \gpgrade{M}{0} \)
    to select the scalar, vector, bivector, trivector, or higher grade algebraic elements. For example, operating on vectors \( \Ba, \Bb, \Bc \), we have

    \begin{equation}\label{eqn:gaTutorial:580}
    \begin{aligned}
    \gpgradezero{ \Ba \Bb }
    &= \Ba \cdot \Bb \\
    \gpgradeone{ \Ba \Bb \Bc }
    &=
    \Ba (\Bb \cdot \Bc)
    +
    \Ba \cdot (\Bb \wedge \Bc) \\
    &=
    \Ba (\Bb \cdot \Bc)
    +
    (\Ba \cdot \Bb) \Bc
    -
    (\Ba \cdot \Bc) \Bb \\
    \gpgradetwo{\Ba \Bb} &=
    \Ba \wedge \Bb \\
    \gpgradethree{\Ba \Bb \Bc} &=
    \Ba \wedge \Bb \wedge \Bc.
    \end{aligned}
    \end{equation}

    Note that the wedge product of any number of vectors such as \( \Ba \wedge \Bb \wedge \Bc \) is associative and can be expressed in terms of the complete antisymmetrization of the product of those vectors. A consequence of that is the fact that a wedge product that includes any colinear vectors is zero.
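The Pauli correspondence mentioned above suggests a concrete way (my addition, not part of the original tutorial) to verify the dot plus cross product expansion numerically, using \( 2 \times 2 \) complex matrices for \( \Be_1, \Be_2, \Be_3 \):

```python
# Pauli matrix check (an addition): with sigma_k as 2x2 complex matrices,
# (sigma.a)(sigma.b) = (a.b) I + i (a x b).sigma, which is the matrix
# image of the GA identity  a b = a . b + I (a x b).
def mmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def mlin(coeffs, mats):
    # linear combination of 2x2 matrices
    return [[sum(w * M[r][c] for w, M in zip(coeffs, mats))
             for c in range(2)] for r in range(2)]

I2 = [[1, 0], [0, 1]]
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
sigma = [s1, s2, s3]

a = [1.0, 2.0, 3.0]
b = [-2.0, 0.5, 1.0]
dot = sum(x * y for x, y in zip(a, b))
cross = [a[1] * b[2] - a[2] * b[1],
         a[2] * b[0] - a[0] * b[2],
         a[0] * b[1] - a[1] * b[0]]

lhs = mmul(mlin(a, sigma), mlin(b, sigma))
rhs = mlin([dot, 1j * cross[0], 1j * cross[1], 1j * cross[2]], [I2] + sigma)
assert all(abs(lhs[r][c] - rhs[r][c]) < 1e-12
           for r in range(2) for c in range(2))
```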

Example: Helmholtz equations.

As an example of the power of \ref{eqn:gaTutorial:180}, consider the following Helmholtz equation derivation (wave equations for the electric and magnetic fields in the frequency domain.)

Application of \ref{eqn:gaTutorial:180} to
Maxwell equations in the frequency domain for source free simple media gives

\begin{equation}\label{eqn:emtProblemSet1Problem6:360}
\spacegrad \BE = -j \omega I \BB
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:380}
\spacegrad I \BB = -j \omega \mu \epsilon \BE.
\end{equation}

These equations use the engineering (not physics) sign convention for the phasors, where the time domain fields are of the form \( \boldsymbol{\mathcal{E}}(\Br, t) = \textrm{Re}\lr{ \BE e^{j\omega t} } \).

Operation with the gradient from the left produces the Helmholtz equation for each of the fields using nothing more than multiplication and simple substitution

\begin{equation}\label{eqn:emtProblemSet1Problem6:420}
\spacegrad^2 \BE = – \mu \epsilon \omega^2 \BE
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:440}
\spacegrad^2 I \BB = – \mu \epsilon \omega^2 I \BB.
\end{equation}

There was no reason to go through the headache of looking up or deriving the expansion of \( \spacegrad \cross (\spacegrad \cross \BA ) \) as is required with the traditional vector algebra demonstration of these identities.

Observe that the usual Helmholtz equation for \( \BB \) doesn’t have a pseudoscalar factor. That result can be obtained by just cancelling the factors \( I \) since the \R{3} Euclidean pseudoscalar commutes with all grades (this isn’t the case in \R{2} nor in Minkowski spaces.)

Example: Factoring the Laplacian.

There are various ways to demonstrate the identity

\begin{equation}\label{eqn:gaTutorial:660}
\spacegrad \cross \lr{ \spacegrad \cross \BA } = \spacegrad \lr{ \spacegrad \cdot \BA } – \spacegrad^2 \BA,
\end{equation}

such as the use of (somewhat obscure) tensor contraction techniques. We can also do this with Geometric Algebra (using a different set of obscure techniques) by factoring the Laplacian action on a vector

\begin{equation}\label{eqn:gaTutorial:700}
\begin{aligned}
\spacegrad^2 \BA
&=
\spacegrad (\spacegrad \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA + \spacegrad \wedge \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA)
+
\spacegrad \wedge \spacegrad \wedge \BA \\
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA),
\end{aligned}
\end{equation}

where the trivector term \( \spacegrad \wedge \spacegrad \wedge \BA \) vanishes, since mixed partials commute.

Should we wish to express the last term using cross products, a grade one selection operation can be used
\begin{equation}\label{eqn:gaTutorial:680}
\begin{aligned}
\spacegrad \cdot (\spacegrad \wedge \BA)
&=
\gpgradeone{ \spacegrad (\spacegrad \wedge \BA) } \\
&=
\gpgradeone{ \spacegrad I (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I \spacegrad \wedge (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I^2 \spacegrad \cross (\spacegrad \cross \BA) } \\
&=
-\spacegrad \cross (\spacegrad \cross \BA).
\end{aligned}
\end{equation}

Here coordinate expansion was not required in any step.
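As a numerical cross-check of \ref{eqn:gaTutorial:660} (my addition, using finite differences on a sample polynomial field):

```python
# Finite difference check (an addition) of
#   curl(curl A) = grad(div A) - laplacian A
# at a point, for the sample field A = (x^2 y, y z^2, x y z).
H = 1e-3

def A(x, y, z):
    return (x * x * y, y * z * z, x * y * z)

def comp(j):
    return lambda x, y, z: A(x, y, z)[j]

def d(f, i, x, y, z, h=H):
    # central first difference of scalar f along axis i
    q = [x, y, z]; q[i] += h; fp = f(*q)
    q = [x, y, z]; q[i] -= h; fm = f(*q)
    return (fp - fm) / (2 * h)

def d2(f, i, x, y, z, h=H):
    # central second difference of scalar f along axis i
    q = [x, y, z]
    qp = list(q); qp[i] += h
    qm = list(q); qm[i] -= h
    return (f(*qp) - 2 * f(*q) + f(*qm)) / (h * h)

def curl(F, x, y, z):
    Fj = [lambda x, y, z, j=j: F(x, y, z)[j] for j in range(3)]
    return (d(Fj[2], 1, x, y, z) - d(Fj[1], 2, x, y, z),
            d(Fj[0], 2, x, y, z) - d(Fj[2], 0, x, y, z),
            d(Fj[1], 0, x, y, z) - d(Fj[0], 1, x, y, z))

def div(x, y, z):
    return sum(d(comp(i), i, x, y, z) for i in range(3))

p = (0.7, -1.2, 0.4)
lhs = curl(lambda x, y, z: curl(A, x, y, z), *p)
grad_div = tuple(d(div, j, *p) for j in range(3))
lap = tuple(sum(d2(comp(j), i, *p) for i in range(3)) for j in range(3))
rhs = tuple(g - l for g, l in zip(grad_div, lap))
assert all(abs(s - t) < 1e-5 for s, t in zip(lhs, rhs))
```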

Learning more.

Some references that may be helpful to learn more about Geometric Algebra are [2], [1], [4], and [3].

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[4] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.