Problem from:

My solution (before numerical reduction), using basic trig and complex numbers, is illustrated in fig. 1.

We have

\begin{equation}\label{eqn:squareInCircle:20}

\begin{aligned}

s &= x \cos\theta \\

y &= x \sin\theta \\

p &= y + x e^{i\theta} \\

q &= i s + x e^{i\theta} \\

\Abs{q} &= y + 5 \\

\Abs{p - q} &= 2.

\end{aligned}

\end{equation}

This can be reduced to

\begin{equation}\label{eqn:squareInCircle:40}

\begin{aligned}

\Abs{ x e^{i\theta} - 5 } &= 2 \\

x \Abs{ i \cos\theta + e^{i\theta} } &= x \sin\theta + 5.

\end{aligned}

\end{equation}

My wife figured out how to do it with just Pythagoras, as illustrated in fig. 2.

\begin{equation}\label{eqn:squareInCircle:60}

\begin{aligned}

\lr{ 5 - s }^2 + y^2 &= 4 \\

\lr{ s + y }^2 + s^2 &= \lr{ y + 5 }^2 \\

x^2 &= s^2 + y^2.

\end{aligned}

\end{equation}

Either way, the numerical solution is 4.12. The geometry looks like fig. 3.
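The Pythagorean system is easy to check numerically. Here's a small Python sketch (a plain Newton iteration with an analytic Jacobian; the starting guess is read off the geometry, and nothing here comes from the original notebook):

```python
# Numerically solve the Pythagorean system
#   (5 - s)^2 + y^2 = 4
#   (s + y)^2 + s^2 = (y + 5)^2
# for (s, y), then recover x from x^2 = s^2 + y^2.

def solve_square_in_circle(s=4.0, y=1.0, iters=25):
    for _ in range(iters):
        f1 = (5 - s) ** 2 + y ** 2 - 4
        f2 = (s + y) ** 2 + s ** 2 - (y + 5) ** 2
        # Jacobian entries of (f1, f2) with respect to (s, y)
        j11, j12 = -2 * (5 - s), 2 * y
        j21, j22 = 2 * (s + y) + 2 * s, 2 * (s + y) - 2 * (y + 5)
        det = j11 * j22 - j12 * j21
        # Newton step: (s, y) -= J^{-1} (f1, f2)
        s -= (j22 * f1 - j12 * f2) / det
        y -= (-j21 * f1 + j11 * f2) / det
    return s, y, (s * s + y * y) ** 0.5

s, y, x = solve_square_in_circle()
print(round(x, 2))  # → 4.12
```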

A Mathematica notebook that computes the numerical part of the problem (either way) and plots the figure to scale can be found in my Mathematica GitHub repo.


Recall that we can use the wedge product to solve linear systems. For example, assuming that \( \Ba, \Bb \) are not collinear, the system

\begin{equation}\label{eqn:cramersProjection:20}

x \Ba + y \Bb = \Bc,

\end{equation}

if it has a solution, can be solved for \( x \) and \( y \) by wedging with \( \Bb \), and \( \Ba \) respectively.

For example, wedging with \( \Bb \), from the right, gives

\begin{equation}\label{eqn:cramersProjection:40}

x \lr{ \Ba \wedge \Bb } + y \lr{ \Bb \wedge \Bb } = \Bc \wedge \Bb,

\end{equation}

but since \( \Bb \wedge \Bb = 0 \), we are left with

\begin{equation}\label{eqn:cramersProjection:60}

x \lr{ \Ba \wedge \Bb } = \Bc \wedge \Bb,

\end{equation}

and since \( \Ba, \Bb \) are not collinear, so that \( \Ba \wedge \Bb \ne 0 \), we have

\begin{equation}\label{eqn:cramersProjection:80}

x = \inv{ \Ba \wedge \Bb } \Bc \wedge \Bb.

\end{equation}

Similarly, we can wedge with \( \Ba \) (from the left), to find

\begin{equation}\label{eqn:cramersProjection:100}

y = \inv{ \Ba \wedge \Bb } \Ba \wedge \Bc.

\end{equation}

This works because, if the system has a solution, the bivectors \( \Ba \wedge \Bb \), \( \Ba \wedge \Bc \), and \( \Bb \wedge \Bc \) are all scalar multiples of each other, so we can divide any two of them, and the results must be scalars.

Incidentally, observe that for \(\mathbb{R}^2\), this is the “Cramer’s rule” solution to the system, since

\begin{equation}\label{eqn:cramersProjection:180}

\Bx \wedge \By = \begin{vmatrix} \Bx & \By \end{vmatrix} \Be_1 \Be_2,

\end{equation}

where we are treating \( \Bx \) and \( \By \) here as column vectors of the coordinates. This means that, after dividing out the plane pseudoscalar \( \Be_1 \Be_2 \), we have

\begin{equation}\label{eqn:cramersProjection:200}

\begin{aligned}

x

&=

\frac{

\begin{vmatrix}

\Bc & \Bb \\

\end{vmatrix}

}{

\begin{vmatrix}

\Ba & \Bb

\end{vmatrix}

} \\

y

&=

\frac{

\begin{vmatrix}

\Ba & \Bc \\

\end{vmatrix}

}{

\begin{vmatrix}

\Ba & \Bb

\end{vmatrix}

}.

\end{aligned}

\end{equation}

This follows the usual Cramer’s rule prescription: we form determinants of the coordinates of the spanning vectors, replace either of the original vectors in the numerator with the target vector (depending on which variable we seek), and then take ratios of the two determinants.
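As a quick sanity check, here's a Python sketch of this prescription for \(\mathbb{R}^2\) (the sample vectors below are arbitrary):

```python
# Solve x a + y b = c in R^2 using wedge products (Cramer's rule):
# the wedge of two 2D vectors is the determinant of their coordinates
# (times the pseudoscalar e1 e2, which divides out of the ratios).

def wedge(u, v):
    # u ∧ v = (u1 v2 - u2 v1) e1 e2 ; return the scalar coefficient
    return u[0] * v[1] - u[1] * v[0]

a, b = (1.0, 2.0), (3.0, 1.0)
c = (2 * a[0] + 3 * b[0], 2 * a[1] + 3 * b[1])  # c = 2a + 3b by construction

x = wedge(c, b) / wedge(a, b)
y = wedge(a, c) / wedge(a, b)
print(x, y)  # → 2.0 3.0
```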

Now, let’s consider the case where the system \ref{eqn:cramersProjection:20} cannot be solved exactly. Geometrically, the best we can do is to try to solve the related “least squares” problem

\begin{equation}\label{eqn:cramersProjection:120}

x \Ba + y \Bb = \Bc_\parallel,

\end{equation}

where \( \Bc_\parallel \) is the projection of \( \Bc \) onto the plane spanned by \( \Ba, \Bb \). Regardless of the value of \( \Bc \), we can always find a solution to this problem. For example, solving for \( x \), we have

\begin{equation}\label{eqn:cramersProjection:160}

\begin{aligned}

x

&= \inv{ \Ba \wedge \Bb } \Bc_\parallel \wedge \Bb \\

&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\parallel \wedge \Bb } \\

&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb } - \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb }.

\end{aligned}

\end{equation}

Let’s look at the second term, which can be written

\begin{equation}\label{eqn:cramersProjection:140}

\begin{aligned}

- \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb }

&=

- \frac{ \Ba \wedge \Bb }{ \lr{ \Ba \wedge \Bb}^2 } \cdot \lr{ \Bc_\perp \wedge \Bb } \\

&\propto

\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bc_\perp \wedge \Bb } \\

&=

\lr{ \lr{ \Ba \wedge \Bb } \cdot \Bc_\perp } \cdot \Bb \\

&=

\lr{ \Ba \lr{ \Bb \cdot \Bc_\perp} - \Bb \lr{ \Ba \cdot \Bc_\perp} } \cdot \Bb \\

&=

0.

\end{aligned}

\end{equation}

The zero above follows because \( \Bc_\perp \) is perpendicular to both \( \Ba \) and \( \Bb \) by construction. Geometrically, we are trying to dot two perpendicular bivectors, where \( \Bb \) is a common factor of those two bivectors, as illustrated in fig. 1.

We see that our least squares solution to this two variable linear system is

\begin{equation}\label{eqn:cramersProjection:220}

x = \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb }.

\end{equation}

\begin{equation}\label{eqn:cramersProjection:240}

y = \inv{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc }.

\end{equation}

The interesting thing here is how we have managed to connect the optimal solution in the least squares sense (which we could also compute with the Moore-Penrose inverse, or with an SVD (Singular Value Decomposition)) with the entirely geometric notion of selecting the portion of the desired solution that lies within the span of the input vectors, provided that the spanning vectors for that hyperplane are linearly independent.
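These formulas are easy to exercise numerically. Here's a Python sketch in \(\mathbb{R}^3\) (arbitrary sample vectors), with the bivector products pre-expanded into dot products; the check is the defining least squares property, that the residual is orthogonal to both \( \Ba \) and \( \Bb \):

```python
# Least-squares solution of x a + y b = c in R^3 via the bivector ratios
#   x = (1/(a∧b)) · (c∧b),  y = (1/(a∧b)) · (a∧c),
# which, using (a∧b)·(c∧d) = (a·d)(b·c) - (a·c)(b·d) and
# (a∧b)^2 = (a·b)^2 - a^2 b^2, reduce to dot products alone.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

a, b, c = (1.0, 2.0, 0.0), (0.0, 1.0, 1.0), (3.0, 1.0, 4.0)

D = dot(a, a) * dot(b, b) - dot(a, b) ** 2           # -(a∧b)^2
x = (dot(b, b) * dot(a, c) - dot(a, b) * dot(b, c)) / D
y = (dot(a, a) * dot(b, c) - dot(a, b) * dot(a, c)) / D

# the residual c - x a - y b must be orthogonal to both a and b
r = tuple(ci - x * ai - y * bi for ai, bi, ci in zip(a, b, c))
print(abs(dot(r, a)) < 1e-12 and abs(dot(r, b)) < 1e-12)  # → True
```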

I’ve called the projection solution a least-squares solution, without full justification. Here’s that justification. We define the usual error function, the squared distance of our superposition in the plane from the target

\begin{equation}\label{eqn:cramersProjection:300}

\epsilon = \lr{ \Bc – x \Ba – y \Bb }^2,

\end{equation}

and then take partials with respect to \( x, y \), equating each to zero

\begin{equation}\label{eqn:cramersProjection:320}

\begin{aligned}

0 &= \PD{x}{\epsilon} = 2 \lr{ \Bc – x \Ba – y \Bb } \cdot (-\Ba) \\

0 &= \PD{y}{\epsilon} = 2 \lr{ \Bc – x \Ba – y \Bb } \cdot (-\Bb).

\end{aligned}

\end{equation}

This is a two equation, two unknown system, which can be expressed in matrix form as

\begin{equation}\label{eqn:cramersProjection:340}

\begin{bmatrix}

\Ba^2 & \Ba \cdot \Bb \\

\Ba \cdot \Bb & \Bb^2

\end{bmatrix}

\begin{bmatrix}

x \\

y

\end{bmatrix}

=

\begin{bmatrix}

\Ba \cdot \Bc \\

\Bb \cdot \Bc \\

\end{bmatrix}.

\end{equation}

This has solution

\begin{equation}\label{eqn:cramersProjection:360}

\begin{bmatrix}

x \\

y

\end{bmatrix}

=

\inv{

\begin{vmatrix}

\Ba^2 & \Ba \cdot \Bb \\

\Ba \cdot \Bb & \Bb^2

\end{vmatrix}

}

\begin{bmatrix}

\Bb^2 & -\Ba \cdot \Bb \\

-\Ba \cdot \Bb & \Ba^2

\end{bmatrix}

\begin{bmatrix}

\Ba \cdot \Bc \\

\Bb \cdot \Bc \\

\end{bmatrix}

=

\frac{

\begin{bmatrix}

\Bb^2 \lr{ \Ba \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Bb \cdot \Bc } \\

\Ba^2 \lr{ \Bb \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Ba \cdot \Bc } \\

\end{bmatrix}

}{

\Ba^2 \Bb^2 - \lr{ \Ba \cdot \Bb }^2

}.

\end{equation}

All of these differences can be expressed as dot products of wedge products, using the following expansion in reverse

\begin{equation}\label{eqn:cramersProjection:420}

\begin{aligned}

\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bd }

&=

\Ba \cdot \lr{ \Bb \cdot \lr{ \Bc \wedge \Bd } } \\

&=

\Ba \cdot \lr{ \lr{\Bb \cdot \Bc} \Bd - \lr{\Bb \cdot \Bd} \Bc } \\

&=

\lr{ \Ba \cdot \Bd } \lr{\Bb \cdot \Bc} - \lr{ \Ba \cdot \Bc }\lr{\Bb \cdot \Bd}.

\end{aligned}

\end{equation}
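In \(\mathbb{R}^3\), where \( \Ba \wedge \Bb = I \lr{ \Ba \cross \Bb } \), this expansion is (up to a sign) just the Lagrange identity for \( \lr{ \Ba \cross \Bb } \cdot \lr{ \Bc \cross \Bd } \), which makes for an easy numeric spot check (Python, arbitrary integer vectors):

```python
# Spot check of (a∧b)·(c∧d) = (a·d)(b·c) - (a·c)(b·d) in R^3:
# since a∧b = I (a×b), and bivectors B1 = I u, B2 = I v satisfy
# B1·B2 = -u·v, we have (a∧b)·(c∧d) = -(a×b)·(c×d).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c, d = (1, 2, 3), (0, 1, 1), (2, 0, 1), (1, 1, 0)

lhs = -dot(cross(a, b), cross(c, d))
rhs = dot(a, d) * dot(b, c) - dot(a, c) * dot(b, d)
print(lhs == rhs)  # → True
```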

We find

\begin{equation}\label{eqn:cramersProjection:380}

\begin{aligned}

x

&= \frac{\Bb^2 \lr{ \Ba \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Bb \cdot \Bc }}{-\lr{ \Ba \wedge \Bb }^2 } \\

&= \frac{\lr{ \Ba \wedge \Bb } \cdot \lr{ \Bb \wedge \Bc }}{ -\lr{ \Ba \wedge \Bb }^2 } \\

&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Bc \wedge \Bb },

\end{aligned}

\end{equation}

and

\begin{equation}\label{eqn:cramersProjection:400}

\begin{aligned}

y

&= \frac{\Ba^2 \lr{ \Bb \cdot \Bc } - \lr{ \Ba \cdot \Bb} \lr{ \Ba \cdot \Bc } }{-\lr{ \Ba \wedge \Bb }^2 } \\

&= \frac{- \lr{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc } }{ -\lr{ \Ba \wedge \Bb }^2 } \\

&= \inv{ \Ba \wedge \Bb } \cdot \lr{ \Ba \wedge \Bc }.

\end{aligned}

\end{equation}

Sure enough, we find what was dubbed our least squares solution, which we now know can be written out as a ratio of (dotted) wedge products.

From \ref{eqn:cramersProjection:340}, it wasn’t obvious that the least squares solution would have an almost Cramer’s rule like structure, but having solved this problem using geometry alone, we knew to expect that. It was therefore natural to write the results in terms of wedge product factors, and find the simplest statement of the end result. That end result reduces to Cramer’s rule for the \(\mathbb{R}^2\) special case where the system has an exact solution.


We found previously that

\begin{equation}\label{eqn:solarellipse:20}

\mathbf{\hat{r}}' = \inv{r} \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bx' }.

\end{equation}

Somewhat remarkably, we can use this identity to demonstrate that orbits governed by a gravitational force are elliptical (or parabolic, or hyperbolic.) This ends up being possible because the angular momentum of the system is a conserved quantity, and this identity introduces angular momentum into the mix in a fundamental way. In particular,

\begin{equation}\label{eqn:solarellipse:40}

\mathbf{\hat{r}}' = \inv{m r^2} \mathbf{\hat{r}} L,

\end{equation}

where we define the angular momentum bivector as

\begin{equation}\label{eqn:solarellipse:60}

L = \Bx \wedge \Bp.

\end{equation}

Our gravitational law is

\begin{equation}\label{eqn:solarellipse:80}

m \ddt{\Bv} = - G m M \frac{\mathbf{\hat{r}}}{r^2},

\end{equation}

or

\begin{equation}\label{eqn:solarellipse:100}

-\inv{G M} \ddt{\Bv} = \frac{\mathbf{\hat{r}}}{r^2}.

\end{equation}

Combining the gravitational law with our \( \mathbf{\hat{r}} \) derivative identity, we have

\begin{equation}\label{eqn:solarellipse:120}

\begin{aligned}

\ddt{ \mathbf{\hat{r}} }

&= \inv{m} \frac{\mathbf{\hat{r}}}{r^2} L \\

&= -\inv{G m M} \ddt{\Bv} L \\

&= -\inv{G m M} \lr{ \ddt{(\Bv L)} - \Bv \ddt{L} }.

\end{aligned}

\end{equation}

Angular momentum is a constant of motion of the system, meaning that

\begin{equation}\label{eqn:solarellipse:140}

\ddt{L} = 0,

\end{equation}

so our equation of motion is integrable

\begin{equation}\label{eqn:solarellipse:160}

\ddt{ \mathbf{\hat{r}} } = -\inv{G m M} \ddt{(\Bv L)}.

\end{equation}

Introducing a vector valued integration constant \( -\Be \), we have

\begin{equation}\label{eqn:solarellipse:180}

\mathbf{\hat{r}} = -\inv{G m M} \Bv L - \Be.

\end{equation}

We’ve transformed our second order differential equation to a first order equation, one that does not look easy to integrate one more time. Luckily, we do not have to integrate, and can partially solve this algebraically, enough to describe the orbit in a compact fashion.

Before trying that, it’s worth quickly demonstrating that this equation is not a multivector equation, but a vector equation, since the multivector \( \Bv L \) is, in fact, vector valued.

\begin{equation}\label{eqn:solarellipse:200}

\begin{aligned}

\Bv L

&= \Bv \lr{ \Bx \wedge (m \Bv) } \\

&\propto \mathbf{\hat{v}} \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } \\

&= \mathbf{\hat{v}} \cdot \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } + \mathbf{\hat{v}} \wedge \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } \\

&= \mathbf{\hat{v}} \cdot \lr{ \mathbf{\hat{r}} \wedge \mathbf{\hat{v}} } \\

&= \lr{ \mathbf{\hat{v}} \cdot \mathbf{\hat{r}} } \mathbf{\hat{v}} - \mathbf{\hat{r}},

\end{aligned}

\end{equation}

which is a vector (i.e.: a vector that is directed along the portion of \( \Bx \) that is perpendicular to \( \Bv \).)

We can reduce \ref{eqn:solarellipse:180} to a scalar equation by dotting with \( \Bx = r \mathbf{\hat{r}} \), leaving

\begin{equation}\label{eqn:solarellipse:220}

\begin{aligned}

r

&= -\inv{G m M} \gpgradezero{ \Bx \Bv L } - \Bx \cdot \Be \\

&= -\inv{G m^2 M} \gpgradezero{ \Bx \Bp L } - \Bx \cdot \Be \\

&= -\inv{G m^2 M} \gpgradezero{ \lr{ \Bx \cdot \Bp + L } L } - \Bx \cdot \Be \\

&= -\inv{G m^2 M} L^2 - \Bx \cdot \Be,

\end{aligned}

\end{equation}

or, writing \( \Bx \cdot \Be = r e \cos\theta \), with \( \theta \) the angle between \( \Bx \) and \( \Be \),

\begin{equation}\label{eqn:solarellipse:240}

r = -\frac{L^2}{G M m^2} - r e \cos\theta,

\end{equation}

or

\begin{equation}\label{eqn:solarellipse:260}

r \lr{ 1 + e \cos\theta } = -\frac{L^2}{G M m^2}.

\end{equation}

Observe that the RHS is a positive constant, since \( L^2 \le 0 \) for a bivector. This has the structure of a conic section, if we write

\begin{equation}\label{eqn:solarellipse:280}

-\frac{L^2}{G M m^2} = e d.

\end{equation}

This is an ellipse for \( e \in [0,1) \), a parabola for \( e = 1 \), and a hyperbola for \( e > 1 \) ([1] theorem 10.3.1).

In fig. 1 is a plot with \( e = 0.75 \) (changing \( d \) doesn’t change the shape of the figure, just the size.)
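Such a plot is straightforward to reproduce and sanity check. Here's a Python sketch (assuming \( e = 0.75 \) and semi-latus rectum \( ed = 1 \), not values from the original notebook) that samples the conic and verifies that the points trace an ellipse with one focus at the origin:

```python
import math

# Sample the conic r (1 + e cosθ) = e d with e = 0.75 (an ellipse),
# taking the semi-latus rectum l = e d = 1, and check that the points
# satisfy ((x + c)/a)^2 + (y/b)^2 = 1, an ellipse with one focus at
# the origin, where
#   a = l/(1 - e^2),  b = l/sqrt(1 - e^2),  c = e a.
e, l = 0.75, 1.0
a = l / (1 - e * e)
b = l / math.sqrt(1 - e * e)
c = e * a

ok = True
for k in range(360):
    theta = 2 * math.pi * k / 360
    r = l / (1 + e * math.cos(theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    ok &= abs(((x + c) / a) ** 2 + (y / b) ** 2 - 1) < 1e-9
print(ok)  # → True
```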

[1] S.L. Salas and E. Hille. *Calculus: one and several variables*. Wiley New York, 1990.

In my last couple GA YouTube videos, circular and spherical coordinates were examined.

This post is a text representation of a new video that follows up on those two videos.

We found the form of the unit vector derivatives in both cases. Here we derive those results more generally, writing the position vector in the radial representation

\begin{equation}\label{eqn:radialderivatives:20}

\Bx = r \mathbf{\hat{r}},

\end{equation}

leaving the angular dependence of \( \mathbf{\hat{r}} \) unspecified. We want to find both \( \Bv = \Bx' \) and \( \mathbf{\hat{r}}'\).

The derivative of a spherical length \( r \) can be expressed as

\begin{equation*}

\frac{dr}{dt} = \mathbf{\hat{r}} \cdot \frac{d\Bx}{dt}.

\end{equation*}


We write \( r^2 = \Bx \cdot \Bx \), and take derivatives of both sides, to find

\begin{equation}\label{eqn:radialderivatives:60}

2 r \frac{dr}{dt} = 2 \Bx \cdot \frac{d\Bx}{dt},

\end{equation}

or

\begin{equation}\label{eqn:radialderivatives:80}

\frac{dr}{dt} = \frac{\Bx}{r} \cdot \frac{d\Bx}{dt} = \mathbf{\hat{r}} \cdot \frac{d\Bx}{dt}.

\end{equation}

Application of the chain rule to \ref{eqn:radialderivatives:20} is straightforward

\begin{equation}\label{eqn:radialderivatives:100}

\Bx' = r' \mathbf{\hat{r}} + r \mathbf{\hat{r}}',

\end{equation}

but we don’t know the form for \( \mathbf{\hat{r}}' \). We could proceed with a naive expansion of

\begin{equation}\label{eqn:radialderivatives:120}

\frac{d}{dt} \lr{ \frac{\Bx}{r} },

\end{equation}

but we can be sneaky, and perform a projective and rejective split of \( \Bx' \) with respect to \( \mathbf{\hat{r}} \). That is

\begin{equation}\label{eqn:radialderivatives:140}

\begin{aligned}

\Bx'

&= \mathbf{\hat{r}} \mathbf{\hat{r}} \Bx' \\

&= \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \Bx' } \\

&= \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \cdot \Bx' + \mathbf{\hat{r}} \wedge \Bx'} \\

&= \mathbf{\hat{r}} \lr{ r' + \mathbf{\hat{r}} \wedge \Bx'}.

\end{aligned}

\end{equation}

We used our lemma in the last step above, and after distribution, find

\begin{equation}\label{eqn:radialderivatives:160}

\Bx' = r' \mathbf{\hat{r}} + \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bx' }.

\end{equation}

Comparing to \ref{eqn:radialderivatives:100}, we see that

\begin{equation}\label{eqn:radialderivatives:180}

r \mathbf{\hat{r}}' = \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bx' }.

\end{equation}

We see that the radial unit vector derivative is proportional to the rejection of \( \mathbf{\hat{r}} \) from \( \Bx' \)

\begin{equation}\label{eqn:radialderivatives:200}

\mathbf{\hat{r}}' = \inv{r} \mathrm{Rej}_{\mathbf{\hat{r}}}(\Bx') = \inv{r^3} \Bx \lr{ \Bx \wedge \Bx' }.

\end{equation}

The vector \( \mathbf{\hat{r}}' \) is perpendicular to \( \mathbf{\hat{r}} \) for any parameterization of its orientation, or in symbols

\begin{equation}\label{eqn:radialderivatives:220}

\mathbf{\hat{r}} \cdot \mathbf{\hat{r}}' = 0.

\end{equation}

We saw this for the circular and spherical parameterizations, and see now that this also holds more generally.
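Both claims are easy to verify numerically. Here's a Python sketch (the trajectory is arbitrary, not from the post), using the fact that \( \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bx' } \) is exactly the rejection \( \Bx' - \lr{ \mathbf{\hat{r}} \cdot \Bx' } \mathbf{\hat{r}} \):

```python
import math

# Check r̂' = (1/r) Rej_{r̂}(x'), written in the rejection form
#   r̂' = (x' - (r̂·x') r̂)/r,
# against a central finite difference of x(t)/|x(t)| for an
# arbitrary trajectory x(t) = (cos t, 2 sin t, t/2).

def traj(t):
    return (math.cos(t), 2 * math.sin(t), t / 2)

def traj_prime(t):
    return (-math.sin(t), 2 * math.cos(t), 0.5)

def unit(u):
    n = math.sqrt(sum(ui * ui for ui in u))
    return tuple(ui / n for ui in u)

t, h = 1.0, 1e-6
x, xp = traj(t), traj_prime(t)
r = math.sqrt(sum(xi * xi for xi in x))
rhat = unit(x)
radial = sum(ri * vi for ri, vi in zip(rhat, xp))            # r̂ · x'
formula = tuple((vi - radial * ri) / r for ri, vi in zip(rhat, xp))

numeric = tuple((u1 - u0) / (2 * h)
                for u0, u1 in zip(unit(traj(t - h)), unit(traj(t + h))))
print(all(abs(f - n) < 1e-6 for f, n in zip(formula, numeric)))  # → True
```

The orthogonality \( \mathbf{\hat{r}} \cdot \mathbf{\hat{r}}' = 0 \) holds by construction in the rejection form, and can be asserted on the computed value as well.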

Let’s now write out the momentum \( \Bp = m \Bv \) for a point particle with mass \( m \), and determine the kinetic energy \( m \Bv^2/2 = \Bp^2/2m \) for that particle.

The momentum is

\begin{equation}\label{eqn:radialderivatives:320}

\begin{aligned}

\Bp

&= m r' \mathbf{\hat{r}} + m \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bv } \\

&= m r' \mathbf{\hat{r}} + \inv{r} \mathbf{\hat{r}} \lr{ \Bx \wedge \Bp }.

\end{aligned}

\end{equation}

Observe that \( p_r = m r' \) is the radial component of the momentum. It is natural to introduce a bivector valued angular momentum

\begin{equation}\label{eqn:radialderivatives:340}

L = \Bx \wedge \Bp,

\end{equation}

splitting the momentum into a component that is strictly radial and a component that is strictly tangential to a spherical surface in momentum space. That is

\begin{equation}\label{eqn:radialderivatives:360}

\Bp = p_r \mathbf{\hat{r}} + \inv{r} \mathbf{\hat{r}} L.

\end{equation}

Making use of the fact that \( \mathbf{\hat{r}} \) and \( \mathrm{Rej}_{\mathbf{\hat{r}}}(\Bx') \) are perpendicular (so there are no cross terms when we square the momentum), the kinetic energy is

\begin{equation}\label{eqn:radialderivatives:380}

\begin{aligned}

\inv{2m} \Bp^2

&= \inv{2m} \lr{ p_r \mathbf{\hat{r}} + \inv{r} \mathbf{\hat{r}} L }^2 \\

&= \inv{2m} p_r^2 + \inv{2 m r^2 } \mathbf{\hat{r}} L \mathbf{\hat{r}} L \\

&= \inv{2m} p_r^2 - \inv{2 m r^2 } \mathbf{\hat{r}} L^2 \mathbf{\hat{r}} \\

&= \inv{2m} p_r^2 - \inv{2 m r^2 } L^2 \mathbf{\hat{r}}^2,

\end{aligned}

\end{equation}

where we’ve used the anticommutative nature of \( \mathbf{\hat{r}} \) and \( L \) (i.e.: a sign swap is needed to swap them), and used the fact that \( L^2 \) is a scalar, allowing us to commute \( \mathbf{\hat{r}} \) with \( L^2 \). This leaves us with

\begin{equation}\label{eqn:radialderivatives:400}

E = \inv{2m} \Bp^2 = \inv{2m} p_r^2 - \inv{2 m r^2 } L^2.

\end{equation}

Observe that the radial momentum term and the angular momentum term are both positive, since \( L \) is a bivector, so \( L^2 \le 0 \).
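This energy split can be checked numerically. A Python sketch (arbitrary position and velocity), using the \(\mathbb{R}^3\) correspondence \( L^2 = -\Abs{\Bx \cross \Bp}^2 \) for the bivector square:

```python
import math

# Check the radial/angular split of the kinetic energy,
#   E = p_r^2/(2m) - L^2/(2 m r^2),
# against E = p^2/(2m) directly.  In R^3 the bivector square is
# L^2 = -|x×p|^2, and p_r = m r' = r̂·p.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

m = 2.0
x = (1.0, 2.0, 2.0)                      # arbitrary position, r = 3
v = (0.5, -1.0, 0.25)                    # arbitrary velocity
p = tuple(m * vi for vi in v)

r = math.sqrt(dot(x, x))
p_r = dot(x, p) / r                      # radial momentum r̂·p
L_sq = -dot(cross(x, p), cross(x, p))    # bivector square, L^2 <= 0

E_split = p_r ** 2 / (2 * m) - L_sq / (2 * m * r ** 2)
E_direct = dot(p, p) / (2 * m)
print(abs(E_split - E_direct) < 1e-12)  # → True
```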

Find \ref{eqn:radialderivatives:200} without being sneaky.

\begin{equation}\label{eqn:radialderivatives:280}

\begin{aligned}

\mathbf{\hat{r}}'

&= \frac{d}{dt} \lr{ \frac{\Bx}{r} } \\

&= \inv{r} \Bx' - \inv{r^2} \Bx r' \\

&= \inv{r} \Bx' - \inv{r} \mathbf{\hat{r}} r' \\

&= \inv{r} \lr{ \Bx' - \mathbf{\hat{r}} r' } \\

&= \inv{r} \lr{ \mathbf{\hat{r}} \mathbf{\hat{r}} \Bx' - \mathbf{\hat{r}} r' } \\

&= \inv{r} \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \Bx' - r' } \\

&= \inv{r} \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \Bx' - \mathbf{\hat{r}} \cdot \Bx' } \\

&= \inv{r} \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bx' }.

\end{aligned}

\end{equation}

Show that \ref{eqn:radialderivatives:200} can be expressed as a triple vector cross product

\begin{equation}\label{eqn:radialderivatives:230}

\mathbf{\hat{r}}' = \inv{r^3} \lr{ \Bx \cross \Bx' } \cross \Bx.

\end{equation}

While this may be familiar from elementary calculus, such as in [1], we can show that it follows easily from our GA result

\begin{equation}\label{eqn:radialderivatives:300}

\begin{aligned}

\mathbf{\hat{r}}'

&= \inv{r} \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bx' } \\

&= \inv{r} \gpgradeone{ \mathbf{\hat{r}} \lr{ \mathbf{\hat{r}} \wedge \Bx' } } \\

&= \inv{r} \gpgradeone{ \mathbf{\hat{r}} I \lr{ \mathbf{\hat{r}} \cross \Bx' } } \\

&= \inv{r} \gpgradeone{ I \lr{ \mathbf{\hat{r}} \cdot \lr{ \mathbf{\hat{r}} \cross \Bx' } + \mathbf{\hat{r}} \wedge \lr{ \mathbf{\hat{r}} \cross \Bx' } } } \\

&= \inv{r} \gpgradeone{ I^2 \mathbf{\hat{r}} \cross \lr{ \mathbf{\hat{r}} \cross \Bx' } } \\

&= \inv{r} \lr{ \mathbf{\hat{r}} \cross \Bx' } \cross \mathbf{\hat{r}}.

\end{aligned}

\end{equation}
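The equivalence of the two forms can be spot checked numerically. A Python sketch (arbitrary sample vectors), comparing the triple cross product form against the rejection form \( \lr{ \Bx' - \lr{ \mathbf{\hat{r}} \cdot \Bx' } \mathbf{\hat{r}} }/r \):

```python
import math

# Check that the triple-cross form agrees with the GA rejection form:
#   (1/r^3) (x × x') × x  ==  (x' - (r̂·x') r̂)/r
# for an arbitrary pair of vectors x, x'.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

x, xp = (1.0, 2.0, 2.0), (0.5, -1.0, 0.25)
r = math.sqrt(dot(x, x))
rhat = tuple(xi / r for xi in x)

triple = tuple(ci / r ** 3 for ci in cross(cross(x, xp), x))
rej = tuple((vi - dot(rhat, xp) * ri) / r for vi, ri in zip(xp, rhat))
print(all(abs(p - q) < 1e-12 for p, q in zip(triple, rej)))  # → True
```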

[1] S.L. Salas and E. Hille. *Calculus: one and several variables*. Wiley New York, 1990.

In this video, we compute velocity in a radial representation \( \mathbf{x} = r \mathbf{\hat{r}} \).

We use a scalar radial coordinate \( r \), and leave all the angular dependence implicitly encoded in a radial unit vector \( \mathbf{\hat{r}} \).

We find the geometric algebra structure of the \( \mathbf{\hat{r}}' \) derivative in two different ways, to find

\( \mathbf{\hat{r}}' = \frac{\mathbf{\hat{r}}}{r} \left( \mathbf{\hat{r}} \wedge \mathbf{x}' \right), \)

then derive the conventional triple vector cross product equivalent for reference:

\( \mathbf{\hat{r}}' = \left( \mathbf{\hat{r}} \times \mathbf{x}' \right) \times \frac{\mathbf{\hat{r}}}{r}. \)

We then compute kinetic energy in this representation, and show how a bivector-valued angular momentum \( L = \mathbf{x} \wedge \mathbf{p} \), falls naturally from that computation, where we have

\( \frac{m}{2} \mathbf{v}^2 = \frac{1}{2 m} {(m r')}^2 - \frac{1}{2 m r^2 } L^2. \)

Prerequisites: calculus (derivatives and chain rule), and geometric algebra basics (vector multiplication, commutation relationships for vectors and bivectors in a plane, wedge and cross product equivalencies, …)

Errata: at around 4:12 I used \( \mathbf{r} \) instead of \( \mathbf{x} \), then kept doing so every time after that when the value for \( L \) was stated.

As well as being posted to Google’s censorship-tube, this video can also be found on odysee.

- V0.1.19-2 (Sep 2, 2023)
- Reworked many of the Mathematica generated figures. Now using the MaTeX[] extension to do the figure labelling (that was only done in a couple figures before this), as it looks much better, and is consistent with the fonts in the text.
Each of these is individually a very small change, barely noticeable, but I think they make a nice difference to overall quality.

In many cases, I’ve generated new separate figures for the amazon paper editions of the book, using straight black instead of colors, so they don’t look as washed out, after conversion to black and white.


Here’s an example where just the captioning was changed:

The font is now whatever LaTeX uses for \( \mathbf{n} \), so it matches the text.

I think that the new Mathematica version (13.2) that I am using, also happens to render this 3D figure a bit nicer.

Here’s a comparison of one of the figures that now has a black and white specialization (old, new-color, new-bw):

In this particular case, I chose not to color the labels like I did previously, but I have retained that label color matching in some places.

Like I said, it’s a small difference, but the LaTeX labelling just looks better, period. Notice that the numeric values at the tick marks on the border of the figure are not using a matching font (those are directly generated by Mathematica). I’ll have to figure out how to make those use MaTeX too, and audit all the figures for that, but that’s a game for another day.

Amazon author copies don’t seem to be available in Canada anymore, so I had to buy a regular copy (printed in Bolton, Ontario, Canada!), but did so by setting the price as low as possible on amazon.ca (about $20 CAD each). That means that I got bound and printed books, with 469+503 pages, in 8.5×11″ format, for about $40 (buying an author copy from the US amazon.com would have cost more after shipping and currency conversion.) I don’t think that I could have gotten bound print copies that cheap at one of the St George copy houses that service the university.

Now that I have my copies, I’ll un-publish these from amazon, so that nobody buys them by mistake. I just wanted a copy of each as a reference for myself (as I do refer to parts of them sometimes — like the Pauli matrix/GA-equivalents writeup.)

This leaves me with 9 active titles on amazon (one is my book, and the rest are all course notes.)

I’ve made a new manim-based video with a geometric algebra application.

In the video, the geometric algebra form for the spherical unit vectors is derived, then unpacked to find the conventional vector algebra form. We then use our new tools to find the expression for the kinetic energy of a particle in spherical coordinates.

Prerequisites: calculus (derivatives and chain rule), complex numbers (exponential polar form), and geometric algebra basics (single sided rotations, vector multiplication, vector commutation sign changes, …)

You can find the video on Google’s censorship-tube, or on odysee.

Here’s another geometric algebra video, weighing in at a massive 2:29 (minutes.)

This video is a very short introduction to geometric algebra, showing the most basic concepts and how to apply them to the 2D geometric algebra of the Euclidean plane. Those concepts aren’t developed further in this video, but the idea is just to show the most basic consequences of the definitions.

Prerequisites: basic vector algebra (basis, vector space, dot product space, arrow representation of vectors, graphical vector addition, …)

If you watched yesterday’s video, don’t bother watching this one, since it is extracted from that one with no additions.

You can find the video on Google’s censorship-tube, and on odysee.

Months ago, I used Manim to outline a geometric algebra treatment of the derivation of the circular velocity and acceleration formulas that you would find in a first year undergrad physics course. I never published it, since overlaying audio, and getting the timing of the audio and video right, is hard (at least for me.) I’m also faced with the difficulty of not being able to speak properly when attempting to record myself.

Anyways, I finally finished the audio overlays (it was sitting waiting for me to record the final 10s of audio!), and have posted this little 11 minute video, which includes:

- A reminder of what circular coordinates are.
- A brief outline of what is meant by each of the circular basis vectors.
- A derivation of those basis vectors (just basic geometry, and no GA.)
- A brief introduction to geometric algebra, and geometric algebra for a plane, including the “imaginary” \( i = \Be_1 \Be_2 \), and its use for rotation and polar form.
- How to express the circular basis vectors in polar form.
- Application of all the ideas above to compute velocity and acceleration.
- Circular coordinate examples of velocity and acceleration.

It probably doesn’t actually make sense to try to pack all these ideas into one video, but oh well — that’s what I did.

You can find the video on google’s censorship-tube, and on odysee.