
A fun application of Green’s functions and geometric algebra: Residue calculus

November 2, 2025 math and physics play


Motivation.

A fun application of both Green’s functions and geometric algebra is to show how the Cauchy integral equation can be expressed in terms of the Green’s function for the 2D gradient. This is covered, almost as an aside, in [1]. I found that treatment a bit hard to understand, so I am going to work through it here at my own pace.

Complex numbers in geometric algebra.

Anybody who has studied geometric algebra is likely familiar with a variety of ways to construct complex numbers from geometric objects. For example, complex numbers can be constructed for any plane. If \( \Be_1, \Be_2 \) is a pair of orthonormal vectors for some plane in \(\mathbb{R}^N\), then any vector in that plane has the form
\begin{equation}\label{eqn:residueGreens:20}
\Bf = \Be_1 u + \Be_2 v,
\end{equation}
and has an associated complex representation, obtained by simply multiplying that vector by one of those basis vectors. For example, if we pre-multiply \( \Bf \) by \( \Be_1 \), forming
\begin{equation}\label{eqn:residueGreens:40}
\begin{aligned}
z
&= \Be_1 \Bf \\
&= \Be_1 \lr{ \Be_1 u + \Be_2 v } \\
&= u + \Be_1 \Be_2 v.
\end{aligned}
\end{equation}

We may identify the unit bivector \( \Be_1 \Be_2 \) as an imaginary, designated by \( i \), since it has the expected behavior
\begin{equation}\label{eqn:residueGreens:60}
\begin{aligned}
i^2 &=
\lr{\Be_1 \Be_2}^2 \\
&=
\lr{\Be_1 \Be_2}
\lr{\Be_1 \Be_2} \\
&=
\Be_1 \lr{\Be_2
\Be_1} \Be_2 \\
&=
-\Be_1 \lr{\Be_1
\Be_2} \Be_2 \\
&=
-\lr{\Be_1 \Be_1}
\lr{\Be_2 \Be_2} \\
&=
-1.
\end{aligned}
\end{equation}

Complex numbers are seen to be isomorphic to even grade multivectors in a planar subspace. The imaginary is the grade-two pseudoscalar, and geometrically is an oriented unit area (bivector.)
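A quick way to see this concretely is to model \( \Be_1, \Be_2 \) with matrices. Here is a minimal numpy sketch (the particular 2x2 real matrix representation chosen here is an arbitrary illustrative assumption, not something from the original treatment):

```python
import numpy as np

# An (arbitrary) 2x2 real matrix representation of the orthonormal
# basis vectors e1, e2 of the plane.
e1 = np.array([[1.0, 0.0], [0.0, -1.0]])
e2 = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

# Orthonormality: e1^2 = e2^2 = 1, and e1 e2 = -e2 e1.
assert np.allclose(e1 @ e1, I2)
assert np.allclose(e2 @ e2, I2)
assert np.allclose(e1 @ e2, -(e2 @ e1))

# The unit bivector i = e1 e2 squares to -1, as derived above.
i = e1 @ e2
assert np.allclose(i @ i, -I2)

# Pre-multiplying f = e1 u + e2 v by e1 gives the even grade
# multivector z = u + i v.
u, v = 3.0, 4.0
f = e1 * u + e2 * v
z = e1 @ f
assert np.allclose(z, u * I2 + v * i)
print("complex number checks pass")
```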

Cauchy-Riemann equations in terms of the gradient.

It is natural to wonder about the geometric algebra equivalents of various complex-number relationships and identities. Of particular interest for this discussion is the geometric algebra equivalent of the Cauchy-Riemann equations, which specify necessary conditions for a complex function to be differentiable.

If a complex function \( f(z) = u(z) + i v(z) \) is differentiable, then we must be able to find the limit of
\begin{equation}\label{eqn:residueGreens:80}
\frac{\Delta f(z_0)}{\Delta z} = \frac{f(z_0 + h) - f(z_0)}{h},
\end{equation}
for any complex \( h \rightarrow 0 \), for any possible trajectory of \( z_0 + h \) toward \( z_0 \). In particular, for real \( h = \epsilon \),
\begin{equation}\label{eqn:residueGreens:100}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0 + \epsilon, y_0) + i v(x_0 + \epsilon, y_0) - u(x_0, y_0) - i v(x_0, y_0)}{\epsilon}
=
\PD{x}{u(z_0)} + i \PD{x}{v(z_0)},
\end{equation}
and for imaginary \( h = i \epsilon \)
\begin{equation}\label{eqn:residueGreens:120}
\lim_{\epsilon \rightarrow 0} \frac{u(x_0, y_0 + \epsilon) + i v(x_0, y_0 + \epsilon) - u(x_0, y_0) - i v(x_0, y_0)}{i \epsilon}
=
-i\lr{ \PD{y}{u(z_0)} + i \PD{y}{v(z_0)} }.
\end{equation}
Equating real and imaginary parts, we see that existence of the derivative requires
\begin{equation}\label{eqn:residueGreens:140}
\begin{aligned}
\PD{x}{u} &= \PD{y}{v} \\
\PD{x}{v} &= -\PD{y}{u}.
\end{aligned}
\end{equation}
These are the Cauchy-Riemann equations. When the derivative exists in a given neighbourhood, we say that the function is analytic in that region. If we use a bivector interpretation of the imaginary, with \( i = \Be_1 \Be_2 \), the Cauchy-Riemann equations are satisfied exactly when the gradient of the complex function is zero, since
\begin{equation}\label{eqn:residueGreens:160}
\begin{aligned}
\spacegrad f
&=
\lr{ \Be_1 \partial_x + \Be_2 \partial_y } \lr{ u + \Be_1 \Be_2 v } \\
&=
\Be_1 \lr{ \partial_x u - \partial_y v } + \Be_2 \lr{ \partial_y u + \partial_x v }.
\end{aligned}
\end{equation}
We see that the geometric algebra equivalent of the Cauchy-Riemann equations is simply
\begin{equation}\label{eqn:residueGreens:200}
\spacegrad f = 0.
\end{equation}
Roughly speaking, we may say that a function is analytic in a region if the Cauchy-Riemann equations are satisfied (equivalently, if the gradient is zero) in a neighbourhood of every point in that region.
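Here is a small numerical illustration (a sketch, not part of the original treatment): for the analytic function \( f(z) = z^2 \), with \( u = x^2 - y^2 \) and \( v = 2 x y \), finite differences confirm that both components of the gradient vanish:

```python
import numpy as np

# f(z) = z^2 has real part u = x^2 - y^2 and imaginary part v = 2 x y.
def u(x, y): return x**2 - y**2
def v(x, y): return 2.0 * x * y

# Central finite differences at an arbitrarily chosen sample point.
x0, y0, h = 1.3, -0.7, 1e-6
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)

# Cauchy-Riemann: u_x = v_y and v_x = -u_y, which is exactly the
# statement that both vector components of grad f are zero.
assert abs(ux - vy) < 1e-6   # e1 component: u_x - v_y
assert abs(uy + vx) < 1e-6   # e2 component: u_y + v_x
print("grad f = 0 verified for f(z) = z^2")
```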

A special case of the fundamental theorem of geometric calculus.

Given an even grade multivector \( \psi \) (i.e., a complex number) in the geometric algebra of \(\mathbb{R}^2\), we can show that
\begin{equation}\label{eqn:residueGreens:220}
\int_A \spacegrad \psi d^2\Bx = \oint_{\partial A} d\Bx \psi.
\end{equation}
Let’s get an idea why this works by expanding the area integral for a rectangular parameterization
\begin{equation}\label{eqn:residueGreens:240}
\begin{aligned}
\int_A \spacegrad \psi d^2\Bx
&=
\int_A \lr{ \Be_1 \partial_1 + \Be_2 \partial_2 } \psi I dx dy \\
&=
\int \Be_1 I dy \evalrange{\psi}{x_0}{x_1}
+
\int \Be_2 I dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int \Be_2 dy \evalrange{\psi}{x_0}{x_1}
-
\int \Be_1 dx \evalrange{\psi}{y_0}{y_1} \\
&=
\int d\By \evalrange{\psi}{x_0}{x_1}
-
\int d\Bx \evalrange{\psi}{y_0}{y_1}.
\end{aligned}
\end{equation}
We took advantage of the fact that the \(\mathbb{R}^2\) pseudoscalar commutes with \( \psi \). The end result, illustrated in fig. 1, shows pictorially that the remaining integral is an oriented line integral.

fig. 1. Oriented multivector line integral.

 

If we want to approximate a more general area, we may do so with additional tiles, as illustrated in fig. 2. We may evaluate the area integral using the line integral over just the exterior boundary using such a tiling, as any overlapping opposing boundary contributions cancel exactly.

fig. 2. A crude circular tiling approximation.

 

The reason that this is interesting is that it allows us to re-express a complex integral as a corresponding multivector area integral. With \( d\Bx = \Be_1 dz \), we have
\begin{equation}\label{eqn:residueGreens:260}
\oint dz\, \psi = \Be_1 \int \spacegrad \psi d^2\Bx.
\end{equation}

The Cauchy kernel as a Green’s function.

We’ve previously derived the Green’s function for the 2D Laplacian, and found
\begin{equation}\label{eqn:residueGreens:280}
\tilde{G}(\Bx, \Bx') = \inv{2\pi} \ln \Abs{\Bx - \Bx'},
\end{equation}
which satisfies
\begin{equation}\label{eqn:residueGreens:300}
\delta^2(\Bx - \Bx') = \spacegrad^2 \tilde{G}(\Bx, \Bx') = \spacegrad \lr{ \spacegrad \tilde{G}(\Bx, \Bx') }.
\end{equation}
This means that \( G(\Bx, \Bx') = \spacegrad \tilde{G}(\Bx, \Bx') \) is the Green's function for the gradient. That Green's function is
\begin{equation}\label{eqn:residueGreens:320}
\begin{aligned}
G(\Bx, \Ba)
&= \inv{2 \pi} \frac{\spacegrad \Abs{\Bx - \Ba}}{\Abs{\Bx - \Ba}} \\
&= \inv{2 \pi} \frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2}.
\end{aligned}
\end{equation}
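This gradient evaluation is easy to spot check numerically. A small sketch (illustrative, with an arbitrarily chosen point \( \Ba \) and sample point):

```python
import numpy as np

# The 2D Laplacian Green's function: G_tilde = ln|x - a| / (2 pi).
a = np.array([0.2, -0.5])

def G_tilde(x):
    return np.log(np.linalg.norm(x - a)) / (2.0 * np.pi)

# Central finite difference approximation of grad G_tilde.
x = np.array([1.1, 0.7])
h = 1e-6
grad = np.array([
    (G_tilde(x + np.array([h, 0.0])) - G_tilde(x - np.array([h, 0.0]))) / (2 * h),
    (G_tilde(x + np.array([0.0, h])) - G_tilde(x - np.array([0.0, h]))) / (2 * h),
])

# Compare against the claimed closed form (x - a)/(2 pi |x - a|^2).
expected = (x - a) / (2.0 * np.pi * np.linalg.norm(x - a) ** 2)
assert np.allclose(grad, expected, atol=1e-8)
print("grad G_tilde matches (x - a)/(2 pi |x - a|^2)")
```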
We may cast this Green’s function into complex form with \( z = \Be_1 \Bx, a = \Be_1 \Ba \). In particular
\begin{equation}\label{eqn:residueGreens:340}
\begin{aligned}
\inv{z - a}
&=
\frac{(z - a)^\conj}{\Abs{z - a}^2} \\
&=
\frac{\Bx - \Ba}{\Abs{\Bx - \Ba}^2} \Be_1 \\
&=
2 \pi G(\Bx, \Ba) \Be_1.
\end{aligned}
\end{equation}

Cauchy’s integral.

With
\begin{equation}\label{eqn:residueGreens:360}
\psi = \frac{f(z)}{z - a},
\end{equation}
using \ref{eqn:residueGreens:260}, we can now evaluate
\begin{equation}\label{eqn:residueGreens:265}
\begin{aligned}
\oint dz\, \frac{f(z)}{z - a}
&= \Be_1 \int \spacegrad \frac{f(z)}{z - a} d^2\Bx \\
&= \Be_1 \int \lr{ \frac{\spacegrad f(z)}{z - a} + \lr{ \spacegrad \inv{z - a}} f(z) } I dA \\
&= \Be_1 \int f(z) \spacegrad 2 \pi G(\Bx, \Ba) \Be_1 I dA \\
&= 2 \pi \Be_1 \int \delta^2(\Bx - \Ba) \Be_1 f(\Bx) I dA \\
&= 2 \pi \Be_1^2 f(\Ba) I \\
&= 2 \pi I f(a),
\end{aligned}
\end{equation}
where we’ve made use of the analytic condition \( \spacegrad f = 0 \), and the fact that \( f \) and \( 1/(z-a) \), both even multivectors, commute.

The Cauchy integral equation
\begin{equation}\label{eqn:residueGreens:380}
f(a) = \inv{2 \pi I} \oint dz\, \frac{f(z)}{z - a},
\end{equation}
falls out naturally. This sort of residue calculation always seemed a bit miraculous. By introducing a geometric algebra encoding of complex numbers, we get a new and interesting interpretation. In particular,

  1. the imaginary factor in the geometric algebra formulation of this identity is an oriented unit area coming directly from the area element,
  2. the factor of \( 2 \pi \) comes directly from the Green’s function for the gradient,
  3. the fact that this particular form of integral picks up only the contribution at the point \( z = a \) no longer seems mysterious: it is due directly to delta-function filtering.

Also, if we are looking for an understanding of how to generalize the Cauchy equation to more general multivector functions, we now also have a good clue how that would be done.
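The integral formula itself is easy to verify numerically. Here is a sketch (with an arbitrary choice of analytic function and contour, not from the original treatment), discretizing the contour integral with the trapezoid rule:

```python
import numpy as np

# Verify f(a) = 1/(2 pi i) * \oint f(z)/(z - a) dz for f(z) = exp(z),
# over the unit circle, with a inside the contour.
f = np.exp
a = 0.3 + 0.2j

N = 2000
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
z = np.exp(1j * theta)            # unit circle
dz = 1j * z * (2.0 * np.pi / N)   # dz = i e^{i theta} d theta

approx = np.sum(f(z) / (z - a) * dz) / (2j * np.pi)
assert abs(approx - f(a)) < 1e-12
print("Cauchy integral formula verified")
```

The equally spaced trapezoid rule on a periodic integrand converges extremely quickly, which is why such a crude discretization reproduces \( f(a) \) to machine precision.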

References

[1] C. Doran and A.N. Lasenby. Geometric Algebra for Physicists. Cambridge University Press, Cambridge, UK, 1st edition, 2003.

A generalized Gaussian integral.

July 26, 2025 math and physics play


Here’s another problem from [1]. The point is to show that
\begin{equation}\label{eqn:generalizedGaussian:20}
G(x,x',\tau) = \inv{2\pi} \int_{-\infty}^\infty e^{i k\lr{ x - x' } } e^{-k^2 \tau} dk,
\end{equation}
has the value
\begin{equation}\label{eqn:generalizedGaussian:40}
G(x,x',\tau) = \inv{\sqrt{4 \pi \tau} } e^{-\lr{ x - x'}^2/4 \tau },
\end{equation}
not just for real \(\tau\), but also for purely imaginary \(\tau\).
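Before diving in, the claimed closed form is easy to spot check numerically for a real \(\tau\). A sketch (with arbitrarily chosen values standing in for \( \tau \) and \( x - x' \)):

```python
import numpy as np

tau = 0.5
dx = 0.7   # plays the role of x - x'

# Direct quadrature of (1/2 pi) \int exp(i k dx) exp(-k^2 tau) dk.
# The integrand is utterly negligible by |k| = 20 for this tau.
k = np.linspace(-20.0, 20.0, 40001)
dk = k[1] - k[0]
G_numeric = np.sum(np.exp(1j * k * dx - k**2 * tau)) * dk / (2.0 * np.pi)

# The claimed closed form.
G_closed = np.exp(-dx**2 / (4.0 * tau)) / np.sqrt(4.0 * np.pi * tau)
assert abs(G_numeric - G_closed) < 1e-10
print("closed form verified for real tau")
```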

Real case.

The authors claim the real case is easy, but I don't think it is that trivial. The trivial part is basically just completing the square. Writing \( x - x' = \Delta x \), that is
\begin{equation}\label{eqn:generalizedGaussian:60}
\begin{aligned}
-k^2 \tau + i k\lr{ x - x' }
&=
-\tau \lr{ k^2 - i k \Delta x/\tau } \\
&=
-\tau \lr{ \lr{ k - i \Delta x/2\tau }^2 - \lr{ i \Delta x/2\tau }^2 } \\
&=
-\tau \lr{ \lr{ k - i \Delta x/2\tau }^2 + \lr{ \Delta x/2\tau }^2 }.
\end{aligned}
\end{equation}
So we have
\begin{equation}\label{eqn:generalizedGaussian:80}
G(x,x',\tau) = \inv{2\pi} e^{-(\Delta x/2)^2/\tau} \int_{-\infty}^\infty e^{-\tau \lr{ k - i \Delta x/2\tau }^2 } dk.
\end{equation}
Let's call the integral factor \( I \)
\begin{equation}\label{eqn:generalizedGaussian:100}
I = \int_{-\infty}^\infty e^{-\tau \lr{ k - i \Delta x/2\tau }^2 } dk.
\end{equation}
However, making a change of variables makes this an integral over a complex path
\begin{equation}\label{eqn:generalizedGaussian:120}
I = \int_{-\infty - i \Delta x/2\tau }^{\infty - i \Delta x/2\tau} e^{-\tau k^2 } dk.
\end{equation}
If you are lazy you could say that \( \pm \infty \) adjusted by a constant, even if that constant is imaginary, leaves the integration limits unchanged. That’s clearly true if the constant is real, but I don’t think it’s that obvious if the constant is imaginary.

To answer that question more exactly, let’s consider the integral
\begin{equation}\label{eqn:generalizedGaussian:140}
0 = I + J + K + L = \oint e^{-\tau z^2} dz,
\end{equation}
where the path is illustrated in fig. 1.

fig. 1. Contour for the Gaussian.

Since there are no enclosed poles, we have
\begin{equation}\label{eqn:generalizedGaussian:160}
I = \int_{-\infty}^\infty e^{-\tau k^2 } dk + K + L,
\end{equation}
where
\begin{equation}\label{eqn:generalizedGaussian:180}
\begin{aligned}
K &= \int_{- i \Delta x/2\tau}^0 e^{-\tau z^2} dz \\
L &= \int_0^{- i \Delta x/2\tau} e^{-\tau z^2} dz.
\end{aligned}
\end{equation}
We now see that \( K \) and \( L \) cancel perfectly, which justifies the change of variables, and the corresponding integration limits.
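That cancellation can also be spot checked numerically. A small sketch (with arbitrary values standing in for \( \tau \) and \( \Delta x/2\tau \)):

```python
import numpy as np

# Check that an imaginary shift of the integration variable leaves the
# Gaussian integral unchanged:
#   \int exp(-tau (k - i c)^2) dk = \int exp(-tau k^2) dk = sqrt(pi/tau).
tau = 1.0
c = 0.8   # plays the role of Delta x / (2 tau)

k = np.linspace(-10.0, 10.0, 20001)
dk = k[1] - k[0]
shifted = np.sum(np.exp(-tau * (k - 1j * c) ** 2)) * dk
unshifted = np.sum(np.exp(-tau * k**2)) * dk

assert abs(shifted - unshifted) < 1e-10
assert abs(shifted - np.sqrt(np.pi / tau)) < 1e-10
print("imaginary contour shift verified")
```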

We can use the usual trick to evaluate \( I^2 \), to find
\begin{equation}\label{eqn:generalizedGaussian:200}
\begin{aligned}
I^2
&=
\int_{-\infty}^\infty e^{-\tau k^2 } dk
\int_{-\infty}^\infty e^{-\tau m^2 } dm \\
&=
2 \pi \int_0^\infty e^{-\tau r^2} r dr \\
&=
2 \pi \evalrange{ \frac{e^{-\tau r^2}}{-2 \tau} }{0}{\infty} \\
&=
\frac{\pi}{\tau}.
\end{aligned}
\end{equation}

So, for real values of \( \tau \) we have
\begin{equation}\label{eqn:generalizedGaussian:220}
G(x,x',\tau) = \inv{2\pi} \sqrt{\frac{\pi}{\tau}}e^{-(\Delta x/2)^2/\tau},
\end{equation}
as expected.

Imaginary case.

For the imaginary case, let \( \tau = i \alpha \). Let’s recomplete the square from scratch with this substitution
\begin{equation}\label{eqn:generalizedGaussian:240}
\begin{aligned}
-k^2 i \alpha + i k \Delta x
&=
- i \alpha \lr{ k^2 - \frac{i k \Delta x}{i \alpha} } \\
&=
- i \alpha \lr{ k^2 - \frac{k \Delta x}{\alpha} } \\
&=
- i \alpha \lr{ \lr{ k - \frac{\Delta x}{2 \alpha} }^2 - \lr{ \frac{\Delta x}{2 \alpha} }^2 }.
\end{aligned}
\end{equation}
So we have
\begin{equation}\label{eqn:generalizedGaussian:260}
\begin{aligned}
G(x,x',\tau)
&= \inv{2\pi} e^{i \alpha(\Delta x/2\alpha)^2} \int_{-\infty}^\infty e^{- i \alpha \lr{ k - \frac{\Delta x}{2 \alpha } }^2 } dk \\
&= \inv{\sqrt{\pi^2 \alpha}} e^{- (\Delta x)^2/4\tau} \int_0^\infty e^{- i m^2 } dm.
\end{aligned}
\end{equation}
This time we can make the change of variables \( m = \sqrt{\alpha} \lr{k - \frac{\Delta x}{2 \alpha }} \), and don't have to worry about imaginary displacements of the integration limits.

The task is now reduced to the evaluation of an imaginary Gaussian-like integral, and we are given the hint to integrate \( e^{-z^2} \) over the pie shaped contour of fig. 2.

fig. 2. Pie shaped integration contour.

Over \( I \) we set \( z = x, x \in [0, R] \), over \( J \), we set \( z = R e^{i\theta}, \theta \in [0, \pi/4] \), and on \( K \) we set \( z = u e^{i\pi/4}, u \in [R, 0] \). Since there are no enclosed poles we have
\begin{equation}\label{eqn:generalizedGaussian:280}
\begin{aligned}
0 &= I + J + K \\
&= \int_0^R e^{- x^2} dx + \int_0^{\pi/4} e^{-R^2 e^{2 i \theta} } i R e^{i\theta} d\theta - \int_0^R e^{-i u^2 } e^{i \pi/4} du.
\end{aligned}
\end{equation}
In the limit we have
\begin{equation}\label{eqn:generalizedGaussian:300}
\begin{aligned}
\int_0^\infty e^{-i u^2 } du
&= e^{-i\pi/4} \int_0^\infty e^{- x^2} dx + L \\
&= e^{-i\pi/4} \sqrt{\pi}/2 + L,
\end{aligned}
\end{equation}
where
\begin{equation}\label{eqn:generalizedGaussian:320}
L = \lim_{R\rightarrow \infty} e^{-i\pi/4} \int_0^{\pi/4} e^{-R^2 e^{2 i \theta} } i R e^{i\theta} d\theta.
\end{equation}
We hope that this is zero in the limit, but showing this requires a bit of care around the \( \pi/4 \) endpoint. We start with
\begin{equation}\label{eqn:generalizedGaussian:340}
\begin{aligned}
\Abs{L}
&\le \lim_{R\rightarrow \infty} \int_0^{\pi/4} R e^{-R^2 \cos\lr{2 \theta} } d\theta \\
&= \lim_{R\rightarrow \infty} \inv{2} \int_{\pi/4 - \epsilon/2}^{\pi/4} R e^{-R^2 \cos\lr{2 \theta} } (2 d\theta),
\end{aligned}
\end{equation}
where we have dropped the \( \theta \in [0, \pi/4 - \epsilon/2] \) portion of the integral, since \( \cos\lr{2\theta} \ge \sin\epsilon > 0 \) there, making that contribution bounded by \( (\pi/4) R e^{-R^2 \sin\epsilon} \), which vanishes as \( R \rightarrow \infty \).
Now we make a change of variables
\begin{equation}\label{eqn:generalizedGaussian:360}
t = \frac{\pi}{2} - 2 \theta.
\end{equation}
At the limits we have
\begin{equation}\label{eqn:generalizedGaussian:380}
\begin{aligned}
t(\pi/4 - \epsilon/2) &= \epsilon \\
t(\pi/4) &= 0.
\end{aligned}
\end{equation}
Also,
\begin{equation}\label{eqn:generalizedGaussian:400}
\begin{aligned}
\cos\lr{ 2 \theta }
&= \textrm{Re} \lr{ e^{2 i \theta} } \\
&= \textrm{Re} \lr{ e^{i \lr{\pi/2 - t} } } \\
&= \textrm{Re} \lr{ i e^{-i t } } \\
&= \sin t,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:generalizedGaussian:420}
\Abs{L} \le \lim_{R\rightarrow \infty} \inv{2} \int_0^\epsilon R e^{-R^2 \sin t} dt.
\end{equation}
Since we have forced \( t \) small, we can use the small angle approximation for the sine
\begin{equation}\label{eqn:generalizedGaussian:440}
\begin{aligned}
\int_0^\epsilon R e^{-R^2 \sin t} dt
&\approx
\int_0^\epsilon R e^{-R^2 t} dt \\
&= \evalrange{ -\inv{R} e^{-R^2 t} }{0}{\epsilon} \\
&= \frac{ 1 - e^{-R^2 \epsilon }}{R}.
\end{aligned}
\end{equation}
The numerator is bounded by one, and the denominator goes to infinity as \( R \rightarrow \infty \), so we have the desired zero in the limit. This means that
\begin{equation}\label{eqn:generalizedGaussian:460}
\begin{aligned}
G(x,x',\tau)
&= \frac{e^{-i\pi/4}}{\sqrt{4 \pi \alpha}} e^{- (\Delta x)^2/4\tau} \\
&= \frac{1}{\sqrt{4 \pi i \alpha}} e^{- (\Delta x)^2/4\tau} \\
&= \frac{1}{\sqrt{4 \pi \tau}} e^{- (\Delta x)^2/4\tau},
\end{aligned}
\end{equation}
as expected.

A fun application, also noted in the problem, is that we can decompose the imaginary integral
\begin{equation}\label{eqn:generalizedGaussian:480}
\int_{-\infty}^\infty e^{-i u^2} du = \sqrt{\pi} e^{-i\pi/4},
\end{equation}
into real and imaginary parts, to find
\begin{equation}\label{eqn:generalizedGaussian:500}
\int_{-\infty}^\infty \cos u^2 du = \int_{-\infty}^\infty \sin u^2 du = \sqrt{\frac{\pi}{2}}.
\end{equation}
Despite being real valued integrals, it is not at all obvious how one would go about finding them without these contour integration tricks.
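For a numerical cross-check we can lean on scipy's Fresnel integrals (a sketch, assuming scipy is available; note scipy's normalization \( C(t) = \int_0^t \cos(\pi s^2/2)\, ds \), so the change of variables \( u = s\sqrt{\pi/2} \) relates the two forms):

```python
import numpy as np
from scipy.special import fresnel

# scipy normalization: C(t) = int_0^t cos(pi s^2 / 2) ds -> 1/2 as
# t -> infinity, and similarly for S(t).  With u = s sqrt(pi/2),
#   int_{-inf}^{inf} cos(u^2) du = 2 sqrt(pi/2) C(inf) = sqrt(pi/2).
S, C = fresnel(10000.0)   # large argument approximates t -> infinity
approx_cos = 2.0 * np.sqrt(np.pi / 2.0) * C
approx_sin = 2.0 * np.sqrt(np.pi / 2.0) * S

target = np.sqrt(np.pi / 2.0)
assert abs(approx_cos - target) < 1e-3   # Fresnel tails decay like 1/t
assert abs(approx_sin - target) < 1e-3
print("Fresnel integral values verified")
```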

References

[1] F.W. Byron and R.W. Fuller. Mathematics of Classical and Quantum Physics. Dover Publications, 1992.

A less evil COBOL toy complex number library

December 29, 2023 COBOL

In a previous post ‘The evil of COBOL: everything is in global variables’, I discussed the implementation of a toy complex number library in COBOL.

That example code was a single module, having one paragraph for each function. I used a naming convention to work around the fact that COBOL functions (paragraphs) are completely braindead, having no input or output parameters, and that all such functions in a given loadmodule have access to all the variables of the entire program.

Perhaps you will try to call me out on my claim that COBOL doesn't have parameters, nor return values. That's true if you consider COBOL paragraphs to be the equivalent of functions. I've heard paragraphs described as not-really-functions, and there's some truth to that, especially since you can do a PERFORM range that executes a set of paragraphs, with non-intuitive control flow transfers possible between the paragraphs of such a range, all of which is entirely non-function-like.

There is one circumstance where COBOL parameters can be found.  It is actually possible to have both input and output parameters in COBOL, but it can only be done at a program level (i.e.: int main( int argc, char ** )). So, you can write a less braindead COBOL library, with a set of meaningful input and output parameters for each function, by using CALL instead of PERFORM, and a whole set of external programs, one for each of the operations that is desired. With that in mind, I’ve reworked my COBOL complex number toy library to use this program-level organization.  This is still a toy library implementation, but serves to illustrate the ideas.  The previous paragraph implementation can be found in the same repository, in the ../paragraphs-as-library/ directory.

Here are some examples of the functions in this little library, and examples of calls to them.

Multiply code:

And here’s a call to it:

Notice that I’ve opted to use dynamic calls to the COBOL functions, using a copybook that lists all the possible function names:

This frees me from the constraint of having to use inscrutable 8-character function names, which will get confusing as the library grows.

Like everything in COBOL, the verbosity makes it fairly unreadable, but refactoring all the paragraphs into external programs does make the calling code, and even the library functions themselves, much more readable. It still takes 49 lines of code to initialize two complex numbers, multiply them, and display them to stdout.

Compare to the same thing in C++, which is 18 lines for a grow-your-own complex implementation with multiply:

#include <iostream>

struct complex{
   double re_;
   double im_;
};

complex mult(const complex & a, const complex & b) {
   // (a + b i)(c + d i) = a c - b d + i( b c + a d) 
   return complex{ a.re_ * b.re_ - a.im_ * b.im_,
                   a.im_ * b.re_ + a.re_ * b.im_ };
}

int main()
{
   complex a{1,2};
   complex b{3,4};
   complex c = mult(a, b);
   std::cout << "c = " << c.re_ << " +(" << c.im_ << ") I\n";

   return 0;
}

and only 11 lines if we use the standard library complex implementation:

#include <iostream>
#include <complex>

using complex = std::complex<double>;

int main() 
{  
   complex a{1,2}; 
   complex b{3,4};
   complex c = a * b;
   std::cout << "c = " << c << "\n";

   return 0;
}

Basically, we have one line for each operation: init, init, multiply, display, and all the rest is one-time fluff (the includes, main decl, return, …)

It turns out that the so-called OBJECT oriented COBOL extension to the language (circa Y2K), is basically a packaging of external-style-programs into collections that are class prefixed, just as I’ve done above.  This provides the capability for information hiding, and allows functions to have parameters and return values.  However, this doesn’t try to rectify the fundamental failure of the COBOL language: everything has to be in a global variable.  This language extension appears to be a hack that may have been done primarily for Java integration, which is probably why nobody uses it.  You just can’t take the dinosaur out of COBOL.

Sadly, it didn’t take people long to figure out that it’s incredibly dumb to require all variables to be global.  Even PL/I, which is 59 years old at the time I write this (only five years younger than COBOL), got it right.  They added parameters and return values to functions, and allow functions to have variables that are limited to that scope.  PL/I probably went too far, and added lots of features that are also braindead (like the PL/I macro preprocessor), but the basic language is at least sane.  It’s interesting that COBOL never evolved.  A language like C++ may have evolved too much, and still is, but the most flagrant design flaw in the COBOL language has been there since inception, despite every other language in the world figuring out that sort of stupidity should not be propagated.

Note that I work on the development of a COBOL and PL/I compilation stack.  I really like my work, which is challenging and great fun, and I work with awesome people. That doesn’t stop me from acknowledging that COBOL is a language spawned in hell by Satan. I can love my work, which provides tools for customers allowing them to develop, maintain and debug COBOL code, but also have great pity and remorse for those customers, having inherited ancient code built with an ancient language, and having no easy migration path away from that language.

The evil of COBOL: everything is in global variables

December 7, 2023 COBOL

COBOL does not have stack variables.  Everything is a global variable.  There is a loose equivalent of a function, called a paragraph, which can be called using a PERFORM statement, but a paragraph does not have any input or output variables, and no return code, so if you want it to behave like a function, you have to construct some sort of complicated naming convention using your global variables.

I’ve seen real customer COBOL programs with many thousands of global variables.  A production COBOL program is usually a giant sequence of MOVEs, MOVE A TO B, MOVE B TO C, MOVE C TO D, MOVE D TO E, … with various PERFORMs or GOTOs, or other things in between.  If you find that your variable has a bad value in it, that is probably because it has been copied from something that was copied from something, that was copied from something, that’s the output of something else, that was copied from something, 9 or 10 times.

I was toying around with the idea of coding up a COBOL implementation of 2D Euclidean geometric algebra, just as a joke, as it is surely the worst language in the world.  Yes, I work on a COBOL compiler project. The project is a lot of fun, and the people I work with are awesome, but I don’t have to like the language.

If I was to implement this simplest geometric algebra in COBOL, the logical starting place for that would be to implement complex numbers in COBOL first.  That is because we can use a pair of complex numbers to implement a 2D multivector, with one complex number for the vector part, and a complex number for the scalar and pseudoscalar parts.  That technique has been detailed on this blog previously, and also in a Mathematica module Cl20.m.

Trying to implement a couple of complex number operations in COBOL got absurd really fast.  Here’s an example.  First step was to create some complex number types.  I did that with a copybook (include file), like so:

This can be included multiple times, each time with a different name, like so:

The way that I structured all my helper functions, was with one set of global variables for input (at least one), and if appropriate, one output global variable.  Here’s an example:

So, if I want to compute and display a value, I have a whole pile of stupid MOVEs to do in and out of the appropriate global variables for each of the helper routines in question:

I wrote enough of this little complex number library that I could do conjugate, real, imaginary, multiply, inverse, and divide operations.  I can run that little program with the following JCL

//COMPLEX JOB
//A EXEC PGM=COMPLEX
//SYSOUT   DD SYSOUT=*
//STEPLIB  DD DSN=PJOOT.SAMPLE.COMPLEX,
//  DISP=SHR

and get this SYSOUT:

STEP A SYSOUT:
A                    =  .10000000000000000E 01 + ( .20000000000000000E 01) I
B                    =  .30000000000000000E 01 + ( .40000000000000000E 01) I
CONJ(A)              =  .10000000000000000E 01 + (-.20000000000000000E 01) I
RE(A)                =  .10000000000000000E 01
IM(A)                =  .20000000000000000E 01
A * B                = -.50000000000000000E 01 + ( .10000000000000000E 02) I
1/A                  =  .20000000000000000E 00 + (-.40000000000000000E 00) I
A/B                  =  .44000000000000000E 00 + ( .80000000000000000E-01) I

If you would like your eyes burned further, you can access the full program on github here. It takes almost 200 lines of code to do almost nothing.

A silly geometry problem: length of side of square in circular quadrant

September 27, 2023 math and physics play


Problem from:

My solution (before numerical reduction), using basic trig and complex numbers, is illustrated in fig. 1.

fig. 1. With complex numbers.

We have
\begin{equation}\label{eqn:squareInCircle:20}
\begin{aligned}
s &= x \cos\theta \\
y &= x \sin\theta \\
p &= y + x e^{i\theta} \\
q &= i s + x e^{i\theta} \\
\Abs{q} &= y + 5 \\
\Abs{p - q} &= 2.
\end{aligned}
\end{equation}
This can be reduced to
\begin{equation}\label{eqn:squareInCircle:40}
\begin{aligned}
\Abs{ x e^{i\theta} - 5 } &= 2 \\
x \Abs{ i \cos\theta + e^{i\theta} } &= x \sin\theta + 5.
\end{aligned}
\end{equation}

My wife figured out how to do it with just Pythagoras, as illustrated in fig. 2.

fig. 2. With Pythagoras.

\begin{equation}\label{eqn:squareInCircle:60}
\begin{aligned}
\lr{ 5 - s }^2 + y^2 &= 4 \\
\lr{ s + y }^2 + s^2 &= \lr{ y + 5 }^2 \\
x^2 &= s^2 + y^2.
\end{aligned}
\end{equation}
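The Pythagorean system is easy to hand to a numerical solver. A sketch (assuming scipy is available):

```python
import numpy as np
from scipy.optimize import fsolve

# The Pythagorean system:
#   (5 - s)^2 + y^2 = 4
#   (s + y)^2 + s^2 = (y + 5)^2
def equations(vars):
    s, y = vars
    return [
        (5.0 - s) ** 2 + y**2 - 4.0,
        (s + y) ** 2 + s**2 - (y + 5.0) ** 2,
    ]

s, y = fsolve(equations, [4.0, 1.5])
x = np.sqrt(s**2 + y**2)   # x^2 = s^2 + y^2
assert abs(x - 4.12) < 0.01
print(round(x, 2))
```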

Either way, the numerical solution is 4.12. The geometry looks like fig. 3.

fig. 3. Lengths to scale.

A Mathematica notebook to compute the numerical part of the problem (either way), and to plot the figure to scale, can be found in my mathematica github repo.