wedge product

Book update. Now includes recent work on best fit solutions.

October 1, 2023 Geometric Algebra for Electrical Engineers

 

I’ve added a few new pages in the linear systems solution portion of my book, Geometric Algebra for Electrical Engineers.  This now includes the best fit content that was covered in my recent video and blog post on approximate solutions to linear systems.

The geometry that is associated with a Moore-Penrose or SVD-based pseudoinverse is not terribly obvious, and this result, providing the same answer, uses geometry exclusively.  I’ve included it in my book, since it’s a cool application, and not conceptually much trickier than the exact system solution.  This makes the section slightly more formal, as it now includes an up front statement as a theorem, but that’s where the formality ends, as I don’t formally prove the theorem.  I do, however, provide lots of examples and problems (with solutions), sufficient for the industrious to craft their own proof if desired.

The updated version of the book should be available on all Amazon marketplaces within the next 3-5 days.  The free PDF version (and Leanpub edition), both linked above, are already updated.

 

Geometric algebra, exact and least squares solutions of two variable linear system

September 25, 2023 math and physics play

New video (on Google’s CensorshipTube):

Exact system.

Recall that we can use the wedge product to solve linear systems. For example, assuming that \( \Ba, \Bb \) are not collinear, the system
\begin{equation}\label{eqn:cramersProjection:20}
x \Ba + y \Bb = \Bc,
\end{equation}
if it has a solution, can be solved for \( x \) and \( y \) by wedging with \( \Bb \) and \( \Ba \) respectively.
For example, wedging with \( \Bb \), from the right, gives
\begin{equation}\label{eqn:cramersProjection:40}
x \lr{ \Ba \wedge \Bb } + y \lr{ \Bb \wedge \Bb } = \Bc \wedge \Bb,
\end{equation}
but since \( \Bb \wedge \Bb = 0 \), we are left with
\begin{equation}\label{eqn:cramersProjection:60}
x \lr{ \Ba \wedge \Bb } = \Bc \wedge \Bb,
\end{equation}
and since \( \Ba, \Bb \) are not collinear, so that \( \Ba \wedge \Bb \ne 0 \), we have
\begin{equation}\label{eqn:cramersProjection:80}
x = \inv{ \Ba \wedge \Bb } \Bc \wedge \Bb.
\end{equation}
Similarly, we can wedge with \( \Ba \) (from the left), to find
\begin{equation}\label{eqn:cramersProjection:100}
y = \inv{ \Ba \wedge \Bb } \Ba \wedge \Bc.
\end{equation}
This works because, if the system has a solution, the bivectors \( \Ba \wedge \Bb \), \( \Ba \wedge \Bc \), and \( \Bb \wedge \Bc \) are all scalar multiples of each other, so we can divide any two of them, and the result must be a scalar.
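Here is a minimal numerical sketch of this (my own check, with arbitrary sample vectors), representing the \(\mathbb{R}^3\) bivectors by their dual vectors (cross products), so that the ratio of two parallel bivectors reduces to a ratio of their duals:

```python
import numpy as np

# Sketch: x a + y b = c with c in span{a, b}, solved with wedge products.
# R^3 bivectors are represented by their dual vectors (cross products).
a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, -1.0, 3.0])
c = 2.0 * a - 3.0 * b                               # an exactly solvable system

ab = np.cross(a, b)                                 # dual of a ^ b
x = np.dot(np.cross(c, b), ab) / np.dot(ab, ab)     # (1/(a ^ b)) (c ^ b)
y = np.dot(np.cross(a, c), ab) / np.dot(ab, ab)     # (1/(a ^ b)) (a ^ c)

print(x, y)                                         # expect 2.0, -3.0
assert np.allclose(x * a + y * b, c)
```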

Cramer’s rule.

Incidentally, observe that for \(\mathbb{R}^2\), this is the “Cramer’s rule” solution to the system, since
\begin{equation}\label{eqn:cramersProjection:180}
\Bx \wedge \By = \begin{vmatrix} \Bx & \By \end{vmatrix} \Be_1 \Be_2,
\end{equation}
where we are treating \( \Bx \) and \( \By \) here as column vectors of the coordinates. This means that, after dividing out the plane pseudoscalar \( \Be_1 \Be_2 \), we have
\begin{equation}\label{eqn:cramersProjection:200}
\begin{aligned}
x
&=
\frac{
\begin{vmatrix}
\Bc & \Bb \\
\end{vmatrix}
}{
\begin{vmatrix}
\Ba & \Bb
\end{vmatrix}
} \\
y
&=
\frac{
\begin{vmatrix}
\Ba & \Bc \\
\end{vmatrix}
}{
\begin{vmatrix}
\Ba & \Bb
\end{vmatrix}
}.
\end{aligned}
\end{equation}
This follows the usual Cramer’s rule prescription, where we form determinants of the coordinates of the spanning vectors, replace either of the original vectors in the numerator with the target vector (depending on which variable we seek), and then take ratios of the two determinants.
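As a quick sanity check of that prescription, here is a small numpy sketch (my own, with arbitrary sample vectors) for the \(\mathbb{R}^2\) case:

```python
import numpy as np

# Sketch: Cramer's rule for x a + y b = c in R^2, checked against a direct solve.
a = np.array([3.0, 1.0])
b = np.array([1.0, 2.0])
c = np.array([5.0, 7.0])

det = lambda u, v: np.linalg.det(np.column_stack([u, v]))
x = det(c, b) / det(a, b)
y = det(a, c) / det(a, b)

assert np.allclose([x, y], np.linalg.solve(np.column_stack([a, b]), c))
```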

Least squares solution, using geometry.

Now, let’s consider the case where the system \ref{eqn:cramersProjection:20} cannot be solved exactly. Geometrically, the best we can do is to try to solve the related “least squares” problem
\begin{equation}\label{eqn:cramersProjection:120}
x \Ba + y \Bb = \Bc_\parallel,
\end{equation}
where \( \Bc_\parallel \) is the projection of \( \Bc \) onto the plane spanned by \( \Ba, \Bb \). Regardless of the value of \( \Bc \), we can always find a solution to this problem. For example, solving for \( x \), we have
\begin{equation}\label{eqn:cramersProjection:160}
\begin{aligned}
x
&= \inv{ \Ba \wedge \Bb } \Bc_\parallel \wedge \Bb \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\parallel \wedge \Bb } \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb } - \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb }.
\end{aligned}
\end{equation}
Let’s look at the second term, which can be written
\begin{equation}\label{eqn:cramersProjection:140}
\begin{aligned}
- \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb }
&=
- \frac{ \Ba \wedge \Bb }{ \lr{ \Ba \wedge \Bb}^2 } \cdot \lr{ \Bc_\perp \wedge \Bb } \\
&\propto
\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb } \\
&=
\lr{ \lr{ \Ba \wedge \Bb } \cdot \Bc_\perp } \cdot \Bb \\
&=
\lr{ \Ba \lr{ \Bb \cdot \Bc_\perp} - \Bb \lr{ \Ba \cdot \Bc_\perp} } \cdot \Bb \\
&=
0.
\end{aligned}
\end{equation}
The zero above follows because \( \Bc_\perp \) is perpendicular to both \( \Ba \) and \( \Bb \) by construction. Geometrically, we are trying to dot two perpendicular bivectors, where \( \Bb \) is a common factor of those two bivectors, as illustrated in fig. 1.

fig. 1. Perpendicular bivectors.

We see that our least squares solution to this two variable linear system problem is
\begin{equation}\label{eqn:cramersProjection:220}
x = \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb }.
\end{equation}
\begin{equation}\label{eqn:cramersProjection:240}
y = \inv{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc }.
\end{equation}

The interesting thing here is how we have managed to connect the least squares solution, which we could otherwise compute with the Moore-Penrose pseudoinverse or with an SVD (singular value decomposition), to an entirely geometric operation: selecting the portion of the desired solution that lies within the span of the input vectors, provided that the vectors spanning that hyperplane are linearly independent.
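Here is a minimal numerical sketch of that equivalence (my own check, with arbitrary sample vectors), again representing the \(\mathbb{R}^3\) bivectors by their cross product duals, and comparing against numpy’s SVD based least squares solver:

```python
import numpy as np

# Sketch: compare the geometric (wedge/dot) best-fit formulas against
# numpy's SVD-based least squares solver for x a + y b ~= c in R^3.
a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, -1.0, 3.0])
c = np.array([4.0, 1.0, 2.0])                       # deliberately not in span{a, b}

ab = np.cross(a, b)                                 # dual of a ^ b
x = np.dot(np.cross(c, b), ab) / np.dot(ab, ab)     # (1/(a ^ b)) . (c ^ b)
y = np.dot(np.cross(a, c), ab) / np.dot(ab, ab)     # (1/(a ^ b)) . (a ^ c)

xy_lstsq, *_ = np.linalg.lstsq(np.column_stack([a, b]), c, rcond=None)
assert np.allclose([x, y], xy_lstsq)
```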

Least squares solution, using calculus.

I’ve called the projection solution a least-squares solution, without full justification. Here’s that justification. We define the usual error function, the squared distance from the target to our superposition position in the plane
\begin{equation}\label{eqn:cramersProjection:300}
\epsilon = \lr{ \Bc - x \Ba - y \Bb }^2,
\end{equation}
and then take partials with respect to \( x, y \), equating each to zero
\begin{equation}\label{eqn:cramersProjection:320}
\begin{aligned}
0 &= \PD{x}{\epsilon} = 2 \lr{ \Bc - x \Ba - y \Bb } \cdot (-\Ba) \\
0 &= \PD{y}{\epsilon} = 2 \lr{ \Bc - x \Ba - y \Bb } \cdot (-\Bb).
\end{aligned}
\end{equation}
This is a two equation, two unknown system, which can be expressed in matrix form as
\begin{equation}\label{eqn:cramersProjection:340}
\begin{bmatrix}
\Ba^2 & \Ba \cdot \Bb \\
\Ba \cdot \Bb & \Bb^2
\end{bmatrix}
\begin{bmatrix}
x \\
y
\end{bmatrix}
=
\begin{bmatrix}
\Ba \cdot \Bc \\
\Bb \cdot \Bc \\
\end{bmatrix}.
\end{equation}
This has solution
\begin{equation}\label{eqn:cramersProjection:360}
\begin{bmatrix}
x \\
y
\end{bmatrix}
=
\inv{
\begin{vmatrix}
\Ba^2 & \Ba \cdot \Bb \\
\Ba \cdot \Bb & \Bb^2
\end{vmatrix}
}
\begin{bmatrix}
\Bb^2 & -\Ba \cdot \Bb \\
-\Ba \cdot \Bb & \Ba^2
\end{bmatrix}
\begin{bmatrix}
\Ba \cdot \Bc \\
\Bb \cdot \Bc \\
\end{bmatrix}
=
\frac{
\begin{bmatrix}
\Bb^2 \lr{ \Ba \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Bb \cdot \Bc } \\
\Ba^2 \lr{ \Bb \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Ba \cdot \Bc } \\
\end{bmatrix}
}{
\Ba^2 \Bb^2 - \lr{ \Ba \cdot \Bb }^2
}.
\end{equation}

All of these differences can be expressed as dot products of wedge products, using the following expansion in reverse
\begin{equation}\label{eqn:cramersProjection:420}
\begin{aligned}
\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bd }
&=
\Ba \cdot \lr{ \Bb \cdot \lr{ \Bc \wedge \Bd } } \\
&=
\Ba \cdot \lr{ \lr{\Bb \cdot \Bc} \Bd - \lr{\Bb \cdot \Bd} \Bc } \\
&=
\lr{ \Ba \cdot \Bd } \lr{\Bb \cdot \Bc} - \lr{ \Ba \cdot \Bc }\lr{\Bb \cdot \Bd}.
\end{aligned}
\end{equation}

We find
\begin{equation}\label{eqn:cramersProjection:380}
\begin{aligned}
x
&= \frac{\Bb^2 \lr{ \Ba \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Bb \cdot \Bc }}{-\lr{ \Ba \wedge \Bb }^2 } \\
&= \frac{\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bb \wedge \Bc }}{ -\lr{ \Ba \wedge \Bb }^2 } \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb },
\end{aligned}
\end{equation}
and
\begin{equation}\label{eqn:cramersProjection:400}
\begin{aligned}
y
&= \frac{\Ba^2 \lr{ \Bb \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Ba \cdot \Bc } }{-\lr{ \Ba \wedge \Bb }^2 } \\
&= \frac{- \lr{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc } }{ -\lr{ \Ba \wedge \Bb }^2 } \\
&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc }.
\end{aligned}
\end{equation}
Sure enough, we find what we dubbed our least squares solution, which we now know can be written out as a ratio of (dotted) wedge products.
From \ref{eqn:cramersProjection:340}, it wasn’t obvious that the least squares solution would have an almost Cramer’s rule like structure, but having solved this problem using geometry alone, we knew to expect that. It was therefore natural to write the results in terms of wedge product factors, and to find the simplest statement of the end result. That end result reduces to Cramer’s rule for the \(\mathbb{R}^2\) special case where the system has an exact solution.
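For the skeptical, here is a small symbolic sketch (my own, with the \(\mathbb{R}^3\) bivectors again represented by their cross product duals) verifying that the normal equations solution and the wedge/dot formulas agree identically:

```python
import sympy as sp

# Sketch: verify symbolically that the calculus (normal equations) solution
# equals the wedge/dot formula, with R^3 bivectors taken as their duals.
a = sp.Matrix(sp.symbols('a1 a2 a3'))
b = sp.Matrix(sp.symbols('b1 b2 b3'))
c = sp.Matrix(sp.symbols('c1 c2 c3'))

# Normal equations solution, as found in the calculus derivation above.
denom = a.dot(a) * b.dot(b) - a.dot(b)**2
x_normal = (b.dot(b) * a.dot(c) - a.dot(b) * b.dot(c)) / denom
y_normal = (a.dot(a) * b.dot(c) - a.dot(b) * a.dot(c)) / denom

# Wedge/dot formulas, with a ^ b represented by the dual vector a x b.
ab = a.cross(b)
x_wedge = c.cross(b).dot(ab) / ab.dot(ab)
y_wedge = a.cross(c).dot(ab) / ab.dot(ab)

assert sp.simplify(x_normal - x_wedge) == 0
assert sp.simplify(y_normal - y_wedge) == 0
```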

New video: Elliptical motion from Newton’s law of gravitation.

September 14, 2023 math and physics play

This blog post is a text version of the video, which is available in a few forms.

 

We found previously that
\begin{equation}\label{eqn:solarellipse:20}
\mathbf{\hat{r}}' = \inv{r} \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bx' }.
\end{equation}
Somewhat remarkably, we can use this identity to demonstrate that orbits governed by the gravitational force are elliptical (or parabolic, or hyperbolic.) This ends up being possible because the angular momentum of the system is a conserved quantity, and this identity introduces angular momentum into the mix in a fundamental way. In particular,
\begin{equation}\label{eqn:solarellipse:40}
\mathbf{\hat{r}}' = \inv{m r^2} \mathbf{\hat{r}} L,
\end{equation}
where we define the angular momentum bivector as
\begin{equation}\label{eqn:solarellipse:60}
L = \Bx \wedge \Bp.
\end{equation}
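As a quick numerical sanity check of \ref{eqn:solarellipse:40} (a sketch of my own, with an arbitrary sample trajectory), we may represent the bivector \( L = \Bx \wedge \Bp \) by the dual angular momentum vector \( m \Bx \cross \Bv \). The product \( \mathbf{\hat{r}} L \) is then \( -\mathbf{\hat{r}} \cross \lr{ m \Bx \cross \Bv } \) (the trivector part vanishes, since \( \Bx \cross \Bv \) is perpendicular to \( \mathbf{\hat{r}} \)), which we can compare against a finite difference derivative of \( \mathbf{\hat{r}} \):

```python
import numpy as np

# Sketch: check rhat' = rhat L / (m r^2) numerically, representing the bivector
# L = x ^ p by its dual vector l = m x cross v, so that rhat L -> -rhat cross l.
m, t, dt = 1.0, 0.7, 1e-6
x = lambda t: np.array([np.cos(t), 2.0 * np.sin(t), 0.0])   # sample trajectory
v = lambda t: np.array([-np.sin(t), 2.0 * np.cos(t), 0.0])  # its velocity

rhat = lambda t: x(t) / np.linalg.norm(x(t))
rhat_prime_fd = (rhat(t + dt) - rhat(t - dt)) / (2.0 * dt)  # finite difference

r = np.linalg.norm(x(t))
l = m * np.cross(x(t), v(t))                                # dual of L = x ^ p
rhat_prime_ga = -np.cross(rhat(t), l) / (m * r**2)

assert np.allclose(rhat_prime_fd, rhat_prime_ga, atol=1e-6)
```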
Our gravitational law is
\begin{equation}\label{eqn:solarellipse:80}
m \ddt{\Bv} = - G m M \frac{\mathbf{\hat{r}}}{r^2},
\end{equation}
or
\begin{equation}\label{eqn:solarellipse:100}
-\inv{G M} \ddt{\Bv} = \frac{\mathbf{\hat{r}}}{r^2}.
\end{equation}
Combining the gravitational law with our \( \mathbf{\hat{r}} \) derivative identity, we have
\begin{equation}\label{eqn:solarellipse:120}
\begin{aligned}
\ddt{ \mathbf{\hat{r}} }
&= \inv{m} \frac{\mathbf{\hat{r}}}{r^2} L \\
&= -\inv{G m M} \ddt{\Bv} L \\
&= -\inv{G m M} \lr{ \ddt{(\Bv L)} - \Bv \ddt{L} }.
\end{aligned}
\end{equation}
Since angular momentum is a constant of motion of the system, we have
\begin{equation}\label{eqn:solarellipse:140}
\ddt{L} = 0,
\end{equation}
so our equation of motion is directly integrable
\begin{equation}\label{eqn:solarellipse:160}
\ddt{ \mathbf{\hat{r}} } = -\inv{G m M} \ddt{(\Bv L)}.
\end{equation}
Introducing a vector valued integration constant \( -\Be \), we have
\begin{equation}\label{eqn:solarellipse:180}
\mathbf{\hat{r}} = -\inv{G m M} \Bv L - \Be.
\end{equation}
We’ve transformed our second order differential equation to a first order equation, one that does not look easy to integrate one more time. Luckily, we do not have to integrate, and can partially solve this algebraically, enough to describe the orbit in a compact fashion.

Before trying that, it’s worth quickly demonstrating that this equation is not a multivector equation, but a vector equation, since the multivector \( \Bv L \) is, in fact, vector valued.
\begin{equation}\label{eqn:solarellipse:200}
\begin{aligned}
\Bv L
&= \Bv \lr{ \Bx \wedge (m \Bv) } \\
&\propto \mathbf{\hat{v}} \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } \\
&= \mathbf{\hat{v}} \cdot \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } + \mathbf{\hat{v}} \wedge \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } \\
&= \mathbf{\hat{v}} \cdot \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } \\
&= \lr{ \mathbf{\hat{v}} \cdot \mathbf{\hat{r}} } \mathbf{\hat{v}} - \mathbf{\hat{r}},
\end{aligned}
\end{equation}
which is a vector (i.e.: a vector that is directed along the portion of \( \Bx \) that is perpendicular to \( \Bv \).)

We can reduce \ref{eqn:solarellipse:180} to a scalar equation by dotting with \( \Bx = r \mathbf{\hat{r}} \), leaving
\begin{equation}\label{eqn:solarellipse:220}
\begin{aligned}
r
&= -\inv{G m M} \gpgradezero{ \Bx \Bv L } - \Bx \cdot \Be \\
&= -\inv{G m^2 M} \gpgradezero{ \Bx \Bp L } - \Bx \cdot \Be \\
&= -\inv{G m^2 M} \gpgradezero{ \lr{ \Bx \cdot \Bp + L } L } - \Bx \cdot \Be \\
&= -\inv{G m^2 M} L^2 - \Bx \cdot \Be,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:solarellipse:240}
r = -\frac{L^2}{G M m^2} - r e \cos\theta,
\end{equation}
or
\begin{equation}\label{eqn:solarellipse:260}
r \lr{ 1 + e \cos\theta } = -\frac{L^2}{G M m^2}.
\end{equation}
Observe that the RHS constant is a positive constant, since \( L^2 \le 0 \). This has the structure of a conic section, if we write
\begin{equation}\label{eqn:solarellipse:280}
-\frac{L^2}{G M m^2} = e d.
\end{equation}
This is an ellipse for \( e \in [0,1) \), a parabola for \( e = 1 \), and a hyperbola for \( e > 1 \) ([1] theorem 10.3.1).

fig. 1. Ellipse with e = 0.75

In fig. 1 is a plot with \( e = 0.75 \) (changing \( d \) doesn’t change the shape of the figure, just the size.)
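For the record, here is a minimal matplotlib sketch (my own) that reproduces such a plot, with \( e d \) arbitrarily set to one:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch: plot the conic r (1 + e cos(theta)) = e d in polar form, with e = 0.75.
e, ed = 0.75, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 500)
r = ed / (1.0 + e * np.cos(theta))

plt.polar(theta, r)
plt.title('r (1 + e cos(theta)) = e d, with e = 0.75')
plt.show()
```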

References

[1] S.L. Salas and E. Hille. Calculus: One and Several Variables. Wiley, New York, 1990.

Static load with two forces in a plane, solved a few different ways.

February 12, 2023 math and physics play

[Click here for a PDF version of this post]

There’s a class of simple statics problems that are pervasive in high school physics and first year engineering classes (for me that was CIV102.)  These problems are illustrated in the figures below. Here we have a static load under gravity, and two supporting members (rigid beams or wire lines), which can be under compression or tension, depending on the geometry.

The problem, given the geometry, is to find the magnitudes of the forces in the two members. The equation to solve is of the form
\begin{equation}\label{eqn:twoForceStaticsProblem:20}
\BF_s + \BF_r + m \Bg = 0.
\end{equation}
The usual way to solve such a problem is to resolve the forces into components. We will do that first here as a review, but then also solve the system using GA techniques, which are arguably simpler or more direct.

Solving as a conventional vector equation.

If we were back in high school we could have written our forces out in vector form
\begin{equation}\label{eqn:twoForceStaticsProblem:160}
\begin{aligned}
\BF_r &= f_r \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } \\
\BF_s &= f_s \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } \\
\Bg &= g \Be_1.
\end{aligned}
\end{equation}
Here the gravitational direction has been pointed along the x-axis.

Our equation to solve is now
\begin{equation}\label{eqn:twoForceStaticsProblem:180}
f_r \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } + f_s \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } + m g \Be_1 = 0.
\end{equation}
This we can solve as a set of scalar equations, one for each of the \( \Be_1 \) and \( \Be_2 \) directions
\begin{equation}\label{eqn:twoForceStaticsProblem:200}
\begin{aligned}
f_r \cos\alpha + f_s \cos\beta + m g &= 0 \\
f_r \sin\alpha + f_s \sin\beta &= 0.
\end{aligned}
\end{equation}
Our solution is
\begin{equation}\label{eqn:twoForceStaticsProblem:220}
\begin{aligned}
\begin{bmatrix}
f_r \\
f_s
\end{bmatrix}
&=
{\begin{bmatrix}
\cos\alpha & \cos\beta \\
\sin\alpha & \sin\beta
\end{bmatrix}}^{-1}
\begin{bmatrix}
- m g \\
0
\end{bmatrix} \\
&=
\inv{
\cos\alpha \sin\beta - \cos\beta \sin\alpha
}
\begin{bmatrix}
\sin\beta & -\cos\beta \\
-\sin\alpha & \cos\alpha
\end{bmatrix}
\begin{bmatrix}
- m g \\
0
\end{bmatrix} \\
&=
\frac{ m g }{ \cos\alpha \sin\beta - \cos\beta \sin\alpha }
\begin{bmatrix}
-\sin\beta \\
\sin\alpha
\end{bmatrix} \\
&=
\frac{ m g }{ \sin\lr{ \beta - \alpha } }
\begin{bmatrix}
-\sin\beta \\
\sin\alpha
\end{bmatrix}.
\end{aligned}
\end{equation}
We have to haul out a trig identity to make the final simplification, but we do find a solution to the system.
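Here is a quick numerical check of that result (a sketch of my own, with arbitrary angles and load):

```python
import numpy as np

# Sketch: solve the component form of the statics problem directly, and compare
# against the closed form (mg / sin(beta - alpha)) [-sin(beta), sin(alpha)].
m, g = 2.0, 9.8
alpha, beta = np.deg2rad(110.0), np.deg2rad(70.0)

A = np.array([[np.cos(alpha), np.cos(beta)],
              [np.sin(alpha), np.sin(beta)]])
fr, fs = np.linalg.solve(A, [-m * g, 0.0])

scale = m * g / np.sin(beta - alpha)
assert np.allclose([fr, fs], [-scale * np.sin(beta), scale * np.sin(alpha)])
```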

Another approach is to take cross products with the unit force directions.  First note that
\begin{equation}\label{eqn:twoForceStaticsProblem:240}
\begin{aligned}
\lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } \cross \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta }
&=
\Be_3 \lr{
\cos\alpha \sin\beta - \sin\alpha \cos\beta
} \\
&=
\Be_3 \sin\lr{ \beta - \alpha }.
\end{aligned}
\end{equation}

If we take cross products with each of the unit vectors, we find
\begin{equation}\label{eqn:twoForceStaticsProblem:260}
\begin{aligned}
f_r \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } \cross \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } + m g \Be_1 \cross \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } &= 0 \\
f_s \lr{ \Be_1 \cos\beta + \Be_2 \sin\beta } \cross \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } + m g \Be_1 \cross \lr{ \Be_1 \cos\alpha + \Be_2 \sin\alpha } &= 0,
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:twoForceStaticsProblem:280}
\begin{aligned}
\Be_3 f_r \sin\lr{ \beta - \alpha } + m g \Be_3 \sin\beta &= 0 \\
-\Be_3 f_s \sin\lr{ \beta - \alpha } + m g \Be_3 \sin\alpha &= 0.
\end{aligned}
\end{equation}
After cancelling the \( \Be_3 \)’s, we find the same result as we did solving the scalar system. This was a fairly direct way to solve the system, but the intermediate cross products were a bit messy. We will now try the same approach using the wedge product. Switching from the cross to the wedge, by itself, will not make things any simpler or more complicated, but we can use the complex exponential form of the unit vectors for the forces, and that will make things simpler.

Geometric algebra setup and solution.

As usual for planar problems, let’s write \( i = \Be_1 \Be_2 \) for the plane pseudoscalar, which allows us to write the forces in polar form
\begin{equation}\label{eqn:twoForceStaticsProblem:40}
\begin{aligned}
\BF_r &= f_r \Be_1 e^{i\alpha} \\
\BF_s &= f_s \Be_1 e^{i\beta} \\
\Bg &= g \Be_1.
\end{aligned}
\end{equation}
Our equation to solve is now
\begin{equation}\label{eqn:twoForceStaticsProblem:60}
f_r \Be_1 e^{i\alpha} + f_s \Be_1 e^{i\beta} + m g \Be_1 = 0.
\end{equation}
The solution for either \( f_r \) or \( f_s \) is now trivial, as we only have to take wedge products with the force direction vectors to solve for the magnitudes.  That is
\begin{equation}\label{eqn:twoForceStaticsProblem:80}
\begin{aligned}
f_r \lr{ \Be_1 e^{i\alpha} } \wedge \lr{ \Be_1 e^{i\beta} } + m g \Be_1 \wedge \lr{ \Be_1 e^{i\beta} } &= 0 \\
f_s \lr{ \Be_1 e^{i\beta} } \wedge \lr{ \Be_1 e^{i\alpha} } + m g \Be_1 \wedge \lr{ \Be_1 e^{i\alpha} } &= 0.
\end{aligned}
\end{equation}
Writing the wedges as grade two selections, and noting that \( e^{i\theta} \Be_1 = \Be_1 e^{-i\theta } \), we have
\begin{equation}\label{eqn:twoForceStaticsProblem:100}
\begin{aligned}
f_r &= - m g \frac{ \gpgradetwo{\Be_1^2 e^{i\beta}} }{ \gpgradetwo{ \Be_1^2 e^{-i\alpha} e^{i\beta} } } = - m g \frac{ \sin\beta }{ \sin\lr{ \beta - \alpha } } \\
f_s &= - m g \frac{ \gpgradetwo{\Be_1^2 e^{i\alpha}} }{ \gpgradetwo{ \Be_1^2 e^{-i\beta} e^{i\alpha} } } = m g \frac{ \sin\alpha }{ \sin\lr{ \beta - \alpha } }.
\end{aligned}
\end{equation}
The grade selection picks up a unit pseudoscalar factor in both the numerator and denominator, which cancels out to give the final scalar result.

As a complex variable problem.

Observe that we could have reframed the problem as a multivector problem by left multiplying \ref{eqn:twoForceStaticsProblem:60} by \( \Be_1 \) to find
\begin{equation}\label{eqn:twoForceStaticsProblem:120}
f_r e^{i\alpha} + f_s e^{i\beta} + m g = 0.
\end{equation}
Alternatively, we could have written the equations this way directly as a complex variable problem.

We can now solve for \( f_r \) or \( f_s \) by multiplying by the conjugate of one of the complex exponentials. That is
\begin{equation}\label{eqn:twoForceStaticsProblem:140}
\begin{aligned}
f_r + f_s e^{i\beta} e^{-i\alpha} + m g e^{-i\alpha} &= 0 \\
f_r e^{i\alpha} e^{-i\beta} + f_s + m g e^{-i\beta} &= 0.
\end{aligned}
\end{equation}
Selecting the bivector part of these equations (if interpreted as a multivector equation), or selecting the imaginary (if interpreting as a complex variables equation), will eliminate one of the force magnitudes from each equation, after which we find the same result.

This last approach, treating the problem as either a complex number problem (selecting imaginaries), or a multivector problem (selecting bivectors), seems the simplest. We have no messy cross products, nor do we have to haul out the trig identities (the sine difference in the denominator comes practically for free, as it did with the wedge product method.)
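Here is a small numerical sketch of that complex variable procedure (my own, with arbitrary angles and load):

```python
import numpy as np

# Sketch: the complex variable form f_r e^{i alpha} + f_s e^{i beta} + m g = 0.
# Multiplying by e^{-i alpha} or e^{-i beta} and taking imaginary parts isolates
# one magnitude at a time.
m, g = 2.0, 9.8
alpha, beta = np.deg2rad(110.0), np.deg2rad(70.0)

# Im(f_s e^{i(beta - alpha)} + m g e^{-i alpha}) = 0  ->  f_s
fs = -m * g * np.imag(np.exp(-1j * alpha)) / np.imag(np.exp(1j * (beta - alpha)))
# Im(f_r e^{i(alpha - beta)} + m g e^{-i beta}) = 0  ->  f_r
fr = -m * g * np.imag(np.exp(-1j * beta)) / np.imag(np.exp(1j * (alpha - beta)))

scale = m * g / np.sin(beta - alpha)
assert np.allclose([fr, fs], [-scale * np.sin(beta), scale * np.sin(alpha)])
```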

A multivector Lagrangian for Maxwell’s equation: A summary of previous exploration.

June 21, 2022 math and physics play

This summarizes the significant parts of the last 8 blog posts.

[Click here for a PDF version of this post]

STA form of Maxwell’s equation.

Maxwell’s equations, with electric and fictitious magnetic sources (useful for antenna theory and other engineering applications), are
\begin{equation}\label{eqn:maxwellLagrangian:220}
\begin{aligned}
\spacegrad \cdot \BE &= \frac{\rho}{\epsilon} \\
\spacegrad \cross \BE &= - \BM - \mu \PD{t}{\BH} \\
\spacegrad \cdot \BH &= \frac{\rho_\txtm}{\mu} \\
\spacegrad \cross \BH &= \BJ + \epsilon \PD{t}{\BE}.
\end{aligned}
\end{equation}
We can assemble these into a single geometric algebra equation,
\begin{equation}\label{eqn:maxwellLagrangian:240}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F = \eta \lr{ c \rho - \BJ } + I \lr{ c \rho_{\mathrm{m}} - \BM },
\end{equation}
where \( F = \BE + \eta I \BH = \BE + I c \BB \), \( c = 1/\sqrt{\mu\epsilon}, \eta = \sqrt{(\mu/\epsilon)} \).
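As a quick consistency sketch (my own, using the conventions above), the left hand side expands grade by grade as
\begin{equation}\label{eqn:maxwellLagrangian:gradeCheck}
\lr{ \spacegrad + \inv{c} \PD{t}{} } F
=
\spacegrad \cdot \BE
+ \lr{ \inv{c} \PD{t}{\BE} - c \spacegrad \cross \BB }
+ I \lr{ \spacegrad \cross \BE + \PD{t}{\BB} }
+ I c \lr{ \spacegrad \cdot \BB },
\end{equation}
so equating grades with the right hand side, using \( \eta c = 1/\epsilon \), \( \eta/c = \mu \), and \( \BB = \mu \BH \), recovers the four equations of \ref{eqn:maxwellLagrangian:220}, one per grade.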

By multiplying through by \( \gamma_0 \), making the identification \( \Be_k = \gamma_k \gamma_0 \), and
\begin{equation}\label{eqn:maxwellLagrangian:300}
\begin{aligned}
J^0 &= \frac{\rho}{\epsilon}, \quad J^k = \eta \lr{ \BJ \cdot \Be_k }, \quad J = J^\mu \gamma_\mu \\
M^0 &= c \rho_{\mathrm{m}}, \quad M^k = \BM \cdot \Be_k, \quad M = M^\mu \gamma_\mu \\
\grad &= \gamma^\mu \partial_\mu,
\end{aligned}
\end{equation}
we find the STA form of Maxwell’s equation, including magnetic sources
\begin{equation}\label{eqn:maxwellLagrangian:320}
\grad F = J - I M.
\end{equation}

Decoupling the electric and magnetic fields and sources.

We can utilize two separate four-vector potential fields to split Maxwell’s equation into two parts. Let
\begin{equation}\label{eqn:maxwellLagrangian:1740}
F = F_{\mathrm{e}} + I F_{\mathrm{m}},
\end{equation}
where
\begin{equation}\label{eqn:maxwellLagrangian:1760}
\begin{aligned}
F_{\mathrm{e}} &= \grad \wedge A \\
F_{\mathrm{m}} &= \grad \wedge K,
\end{aligned}
\end{equation}
and \( A, K \) are independent four-vector potential fields. Plugging this into Maxwell’s equation, and employing a duality transformation, gives us two coupled vector grade equations
\begin{equation}\label{eqn:maxwellLagrangian:1780}
\begin{aligned}
\grad \cdot F_{\mathrm{e}} - I \lr{ \grad \wedge F_{\mathrm{m}} } &= J \\
\grad \cdot F_{\mathrm{m}} + I \lr{ \grad \wedge F_{\mathrm{e}} } &= M.
\end{aligned}
\end{equation}
However, since \( \grad \wedge F_{\mathrm{m}} = \grad \wedge F_{\mathrm{e}} = 0 \) by construction, the curl terms above are killed. We may also add in \( \grad \wedge F_{\mathrm{e}} = 0 \) and \( \grad \wedge F_{\mathrm{m}} = 0 \) respectively, yielding two independent gradient equations
\begin{equation}\label{eqn:maxwellLagrangian:1810}
\begin{aligned}
\grad F_{\mathrm{e}} &= J \\
\grad F_{\mathrm{m}} &= M,
\end{aligned}
\end{equation}
one for each of the electric and magnetic sources and their associated fields.

Tensor formulation.

The electromagnetic field \( F \) is a vector-bivector multivector in the multivector representation of Maxwell’s equation, but is a bivector in the STA representation. The split of \( F \) into its electric and magnetic field components is observer dependent, but we may write it without reference to a specific observer frame as
\begin{equation}\label{eqn:maxwellLagrangian:1830}
F = \inv{2} \gamma_\mu \wedge \gamma_\nu F^{\mu\nu},
\end{equation}
where \( F^{\mu\nu} \) is an arbitrary antisymmetric 2nd rank tensor. Maxwell’s equation has a vector and trivector component, which may be split out explicitly using grade selection, to find
\begin{equation}\label{eqn:maxwellLagrangian:360}
\begin{aligned}
\grad \cdot F &= J \\
\grad \wedge F &= -I M.
\end{aligned}
\end{equation}
Further dotting and wedging these equations with \( \gamma^\mu \) allows for extraction of scalar relations
\begin{equation}\label{eqn:maxwellLagrangian:460}
\partial_\nu F^{\nu\mu} = J^{\mu}, \quad \partial_\nu G^{\nu\mu} = M^{\mu},
\end{equation}
where \( G^{\mu\nu} = -(1/2) \epsilon^{\mu\nu\alpha\beta} F_{\alpha\beta} \) is also an antisymmetric 2nd rank tensor.

If we treat \( F^{\mu\nu} \) and \( G^{\mu\nu} \) as independent fields, this pair of equations is the coordinate equivalent to \ref{eqn:maxwellLagrangian:1760}, also decoupling the electric and magnetic source contributions to Maxwell’s equation.

Coordinate representation of the Lagrangian.

As observed above, we may choose to express the decoupled fields as curls \( F_{\mathrm{e}} = \grad \wedge A \) or \( F_{\mathrm{m}} = \grad \wedge K \). The coordinate expansion of either field component, given such a representation, is straightforward. For example
\begin{equation}\label{eqn:maxwellLagrangian:1850}
\begin{aligned}
F_{\mathrm{e}}
&= \lr{ \gamma_\mu \partial^\mu } \wedge \lr{ \gamma_\nu A^\nu } \\
&= \inv{2} \lr{ \gamma_\mu \wedge \gamma_\nu } \lr{ \partial^\mu A^\nu - \partial^\nu A^\mu }.
\end{aligned}
\end{equation}

We make the identification \( F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu \), the usual definition of \( F^{\mu\nu} \) in the tensor formalism. In that tensor formalism, the Maxwell Lagrangian is
\begin{equation}\label{eqn:maxwellLagrangian:1870}
\LL = - \inv{4} F_{\mu\nu} F^{\mu\nu} - A_\mu J^\mu.
\end{equation}
We may verify this through application of the Euler-Lagrange equations
\begin{equation}\label{eqn:maxwellLagrangian:600}
\PD{A_\mu}{\LL} = \partial_\nu \PD{(\partial_\nu A_\mu)}{\LL}.
\end{equation}
\begin{equation}\label{eqn:maxwellLagrangian:1930}
\begin{aligned}
\PD{(\partial_\nu A_\mu)}{\LL}
&= -\inv{4} (2) \lr{ \PD{(\partial_\nu A_\mu)}{F_{\alpha\beta}} } F^{\alpha\beta} \\
&= -\inv{2} \delta^{[\nu\mu]}_{\alpha\beta} F^{\alpha\beta} \\
&= -\inv{2} \lr{ F^{\nu\mu} - F^{\mu\nu} } \\
&= F^{\mu\nu}.
\end{aligned}
\end{equation}
Since \( \PD{A_\mu}{\LL} = -J^\mu \), we find \( -J^\mu = \partial_\nu F^{\mu\nu} \), or \( \partial_\nu F^{\nu\mu} = J^\mu \), the equivalent of \( \grad \cdot F = J \), as expected.

Coordinate-free representation and variation of the Lagrangian.

Because
\begin{equation}\label{eqn:maxwellLagrangian:200}
F^2 =
-\inv{2}
F^{\mu\nu} F_{\mu\nu}
+
\lr{ \gamma_\alpha \wedge \gamma^\beta }
F_{\alpha\mu}
F^{\beta\mu}
+
\frac{I}{4}
\epsilon_{\mu\nu\alpha\beta} F^{\mu\nu} F^{\alpha\beta},
\end{equation}
we may express the Lagrangian \ref{eqn:maxwellLagrangian:1870} in a coordinate free representation
\begin{equation}\label{eqn:maxwellLagrangian:1890}
\LL = \inv{2} F \cdot F - A \cdot J,
\end{equation}
where \( F = \grad \wedge A \).

We will now show that it is also possible to apply the variational principle to the following multivector Lagrangian
\begin{equation}\label{eqn:maxwellLagrangian:1910}
\LL = \inv{2} F^2 - A \cdot J,
\end{equation}
and recover the geometric algebra form \( \grad F = J \) of Maxwell’s equation in its entirety, including both vector and trivector components in one shot.

We will need a few geometric algebra tools to do this.

The first such tool is the notational freedom to let the gradient act bidirectionally on multivectors to the left and right. We will designate such action with over-arrows, sometimes also using braces to limit the scope of the action in question. If \( Q, R \) are multivectors, then the bidirectional action of the gradient in a \( Q, R \) sandwich is
\begin{equation}\label{eqn:maxwellLagrangian:1950}
\begin{aligned}
Q \lrgrad R
&= Q \lgrad R + Q \rgrad R \\
&= \lr{ Q \gamma^\mu \lpartial_\mu } R + Q \lr{ \gamma^\mu \rpartial_\mu R } \\
&= \lr{ \partial_\mu Q } \gamma^\mu R + Q \gamma^\mu \lr{ \partial_\mu R }.
\end{aligned}
\end{equation}
In the final statement, the partials are acting exclusively on \( Q \) and \( R \) respectively, but the \( \gamma^\mu \) factors must remain in place, as they do not necessarily commute with any of the multivector factors.

This bidirectional action is a critical aspect of the Fundamental Theorem of Geometric calculus, another tool that we will require. The specific form of that theorem that we will utilize here is
\begin{equation}\label{eqn:maxwellLagrangian:1970}
\int_V Q d^4 \Bx \lrgrad R = \int_{\partial V} Q d^3 \Bx R,
\end{equation}
where \( d^4 \Bx = I d^4 x \) is the pseudoscalar four-volume element associated with a parameterization of space time. For our purposes, we may assume that the parameterization uses the standard coordinates associated with the basis \( \setlr{ \gamma_0, \gamma_1, \gamma_2, \gamma_3 } \). The surface differential form \( d^3 \Bx \) can be given specific meaning, but we do not actually care what that form is here, as all our surface integrals will be zero due to the boundary constraints of the variational principle.

Finally, we will utilize the fact that bivector products can be split into grade \(0,4\) and \( 2 \) components using anticommutator and commutator products, namely, given two bivectors \( F, G \), we have
\begin{equation}\label{eqn:maxwellLagrangian:1990}
\begin{aligned}
\gpgrade{ F G }{0,4} &= \inv{2} \lr{ F G + G F } \\
\gpgrade{ F G }{2} &= \inv{2} \lr{ F G - G F }.
\end{aligned}
\end{equation}
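These follow from reversion (a quick sketch): writing \( \widetilde{F G} \) for the reverse of the product, reversion flips the sign of each bivector, so \( \widetilde{F G} = \lr{ -G } \lr{ -F } = G F \), and since reversion leaves grades \( 0,4 \) unchanged while flipping the sign of grade \( 2 \), we have
\begin{equation}\label{eqn:maxwellLagrangian:reversionSketch}
\gpgrade{ F G }{0,4} = \inv{2} \lr{ F G + \widetilde{F G} }, \qquad
\gpgrade{ F G }{2} = \inv{2} \lr{ F G - \widetilde{F G} },
\end{equation}
recovering \ref{eqn:maxwellLagrangian:1990}.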

We may now proceed to evaluate the variation of the action for our presumed Lagrangian
\begin{equation}\label{eqn:maxwellLagrangian:2010}
S = \int d^4 x \lr{ \inv{2} F^2 - A \cdot J }.
\end{equation}
We seek solutions of the variational equation \( \delta S = 0 \) that are satisfied for all variations \( \delta A \), where the four-potential variations \( \delta A \) are zero on the boundaries of this action volume (i.e. an infinite spherical surface.)

We may start our variation in terms of \( F \) and \( A \)
\begin{equation}\label{eqn:maxwellLagrangian:1540}
\begin{aligned}
\delta S
&=
\int d^4 x \lr{ \inv{2} \lr{ \lr{ \delta F } F + F \lr{ \delta F } } - \lr{ \delta A } \cdot J } \\
&=
\int d^4 x \gpgrade{ \lr{ \delta F } F - \lr{ \delta A } J }{0,4} \\
&=
\int d^4 x \gpgrade{ \lr{ \grad \wedge \lr{\delta A} } F - \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \lgrad } F - \lr{ \lr{ \delta A } \cdot \lgrad } F + \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{ \lr{\delta A} \lgrad } F + \lr{ \delta A } J }{0,4} \\
&=
-\int d^4 x \gpgrade{ \lr{\delta A} \lrgrad F - \lr{\delta A} \rgrad F + \lr{ \delta A } J }{0,4},
\end{aligned}
\end{equation}
where we have used arrows, when required, to indicate the directional action of the gradient.

Writing \( d^4 x = -I d^4 \Bx \), we have
\begin{equation}\label{eqn:maxwellLagrangian:1600}
\begin{aligned}
\delta S
&=
-\int_V d^4 x \gpgrade{ \lr{\delta A} \lrgrad F - \lr{\delta A} \rgrad F + \lr{ \delta A } J }{0,4} \\
&=
-\int_V \gpgrade{ -\lr{\delta A} I d^4 \Bx \lrgrad F - d^4 x \lr{\delta A} \rgrad F + d^4 x \lr{ \delta A } J }{0,4} \\
&=
\int_{\partial V} \gpgrade{ \lr{\delta A} I d^3 \Bx F }{0,4}
+ \int_V d^4 x \gpgrade{ \lr{\delta A} \lr{ \rgrad F - J } }{0,4}.
\end{aligned}
\end{equation}
The first integral is killed since \( \delta A = 0 \) on the boundary. The remaining integrand can be simplified to
\begin{equation}\label{eqn:maxwellLagrangian:1660}
\gpgrade{ \lr{\delta A} \lr{ \rgrad F - J } }{0,4} =
\gpgrade{ \lr{\delta A} \lr{ \grad F - J } }{0},
\end{equation}
where the grade-4 filter has also been discarded, because \( \grad F = \grad \cdot F + \grad \wedge F = \grad \cdot F \), as \( \grad \wedge F = \grad \wedge \grad \wedge A = 0 \) by construction, which implies that the only non-zero grades in the multivector \( \grad F - J \) are vector grades. Also, the directional indicator on the gradient has been dropped, since there is no longer any ambiguity. We seek solutions of \( \gpgrade{ \lr{\delta A} \lr{ \grad F - J } }{0} = 0 \) for all variations \( \delta A \), namely
\begin{equation}\label{eqn:maxwellLagrangian:1620}
\boxed{
\grad F = J.
}
\end{equation}
This is Maxwell’s equation in its coordinate free STA form, found using the variational principle from a coordinate free multivector Maxwell Lagrangian, without having to resort to a coordinate expansion of that Lagrangian.

Lagrangian for fictitious magnetic sources.

The generalization of the Lagrangian to include magnetic charge and current densities can be as simple as utilizing two independent four-potential fields
\begin{equation}\label{eqn:maxwellLagrangian:n}
\LL = \inv{2} \lr{ \grad \wedge A }^2 - A \cdot J + \alpha \lr{ \inv{2} \lr{ \grad \wedge K }^2 - K \cdot M },
\end{equation}
where \( \alpha \) is an arbitrary multivector constant.

Variation of this Lagrangian provides two independent equations
\begin{equation}\label{eqn:maxwellLagrangian:1840}
\begin{aligned}
\grad \lr{ \grad \wedge A } &= J \\
\grad \lr{ \grad \wedge K } &= M.
\end{aligned}
\end{equation}
We may add these, scaling the second by \( -I \) (recall that \( I, \grad \) anticommute), to find
\begin{equation}\label{eqn:maxwellLagrangian:1860}
\grad \lr{ F_{\mathrm{e}} + I F_{\mathrm{m}} } = J - I M,
\end{equation}
which is \( \grad F = J - I M \), as desired.

It would be interesting to explore whether it is possible to find a Lagrangian that is dependent on a multivector potential, and that would yield \( \grad F = J - I M \) directly, instead of requiring a superposition operation from the two independent solutions. One such possible potential is \( \tilde{A} = A - I K \), for which \( F = \gpgradetwo{ \grad \tilde{A} } = \grad \wedge A + I \lr{ \grad \wedge K } \). The author was not successful constructing such a Lagrangian.