
Unpacking the fundamental theorem of multivector calculus in two dimensions

January 18, 2021 math and physics play

Notes.

Due to limitations in the MathJax-Latex package, all the oriented integrals in this blog post should be interpreted as having a clockwise orientation. [See the PDF version of this post for more sophisticated formatting.]

Guts.

Given a two dimensional generating vector space, there are two instances of the fundamental theorem for multivector integration
\begin{equation}\label{eqn:unpackingFundamentalTheorem:20}
\int_S F d\Bx \lrpartial G = \evalbar{F G}{\Delta S},
\end{equation}
and
\begin{equation}\label{eqn:unpackingFundamentalTheorem:40}
\int_S F d^2\Bx \lrpartial G = \oint_{\partial S} F d\Bx G.
\end{equation}
The first case is trivial. Given a parameterized curve \( x = x(u) \), it just states
\begin{equation}\label{eqn:unpackingFundamentalTheorem:60}
\int_{u(0)}^{u(1)} du \PD{u}{}\lr{FG} = F(u(1))G(u(1)) - F(u(0))G(u(0)),
\end{equation}
for all multivectors \( F, G\), regardless of the signature of the underlying space.

The surface integral is more interesting. Let’s first look at the area element for this surface integral, which is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:80}
d^2 \Bx = d\Bx_u \wedge d \Bx_v.
\end{equation}
Geometrically, this has the area of the parallelogram spanned by \( d\Bx_u \) and \( d\Bx_v \), but weighted by the pseudoscalar of the space. This is explored algebraically in the following problem and illustrated in fig. 1.

fig. 1. 2D vector space and area element.

Problem: Expansion of 2D area bivector.

Let \( \setlr{e_1, e_2} \) be an orthonormal basis for a two dimensional space, with reciprocal frame \( \setlr{e^1, e^2} \). Expand the area bivector \( d^2 \Bx \) in coordinates relating the bivector to the Jacobian and the pseudoscalar.

Answer

With parameterization \( x = x(u,v) = x^\alpha e_\alpha = x_\alpha e^\alpha \), we have
\begin{equation}\label{eqn:unpackingFundamentalTheorem:120}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x^\alpha} e_\alpha } \wedge
\lr{ \PD{v}{x^\beta} e_\beta }
=
\PD{u}{x^\alpha}
\PD{v}{x^\beta}
e_\alpha \wedge
e_\beta
=
\PD{(u,v)}{(x^1,x^2)} e_1 e_2,
\end{equation}
or
\begin{equation}\label{eqn:unpackingFundamentalTheorem:160}
\Bx_u \wedge \Bx_v
=
\lr{ \PD{u}{x_\alpha} e^\alpha } \wedge
\lr{ \PD{v}{x_\beta} e^\beta }
=
\PD{u}{x_\alpha}
\PD{v}{x_\beta}
e^\alpha \wedge
e^\beta
=
\PD{(u,v)}{(x_1,x_2)} e^1 e^2.
\end{equation}
The upper and lower index pseudoscalars are related by
\begin{equation}\label{eqn:unpackingFundamentalTheorem:180}
e^1 e^2 e_1 e_2 =
-e^1 e^2 e_2 e_1 =
-1,
\end{equation}
so with \( I = e_1 e_2 \),
\begin{equation}\label{eqn:unpackingFundamentalTheorem:200}
e^1 e^2 = -I^{-1},
\end{equation}
leaving us with
\begin{equation}\label{eqn:unpackingFundamentalTheorem:140}
d^2 \Bx
= \PD{(u,v)}{(x^1,x^2)} du dv\, I
= -\PD{(u,v)}{(x_1,x_2)} du dv\, I^{-1}.
\end{equation}
We see that the area bivector is proportional to either the upper or lower index Jacobian and to the pseudoscalar for the space.
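This Jacobian times pseudoscalar factorization is easy to spot check with sympy. The sketch below uses a 2x2 real matrix representation of the Euclidean \( \setlr{e_1, e_2} \) algebra and a plane polar parameterization; both choices are illustration assumptions, not anything required by the result.

from sympy import symbols, cos, sin, Matrix, diff, simplify

u, v = symbols('u v', real=True)
e1 = Matrix([[1, 0], [0, -1]])   # e1^2 = 1
e2 = Matrix([[0, 1], [1, 0]])    # e2^2 = 1, and e1 e2 = -e2 e1
I = e1 * e2                      # pseudoscalar in this representation

# sample parameterization: x^1 = u cos v, x^2 = u sin v
x1, x2 = u * cos(v), u * sin(v)
xu = diff(x1, u) * e1 + diff(x2, u) * e2
xv = diff(x1, v) * e1 + diff(x2, v) * e2

wedge = (xu * xv - xv * xu) / 2                # x_u ^ x_v (antisymmetric part of the product)
J = Matrix([x1, x2]).jacobian([u, v]).det()    # d(x^1,x^2)/d(u,v), which is u here
print(simplify(wedge - J * I))                 # zero matrix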

We may write the fundamental theorem for a 2D space as
\begin{equation}\label{eqn:unpackingFundamentalTheorem:680}
\int_S du dv \, \PD{(u,v)}{(x^1,x^2)} F I \lrgrad G = \oint_{\partial S} F d\Bx G,
\end{equation}
where we have dispensed with the vector derivative and used the gradient instead, since they are identical in a two parameter two dimensional space. Of course, unless we are using \( x^1, x^2 \) as our parameterization, we still want the curvilinear representation of the gradient \( \grad = \Bx^u \PDi{u}{} + \Bx^v \PDi{v}{} \).

Problem: Standard basis expansion of fundamental surface relation.

For a parameterization \( x = x^1 e_1 + x^2 e_2 \), where \( \setlr{ e_1, e_2 } \) is a standard (orthogonal) basis, expand the fundamental theorem for surface integrals for the single sided \( F = 1 \) case. Consider functions \( G \) of each grade (scalar, vector, bivector.)

Answer

From \ref{eqn:unpackingFundamentalTheorem:140} we see that the fundamental theorem takes the form
\begin{equation}\label{eqn:unpackingFundamentalTheorem:220}
\int_S dx^1 dx^2\, F I \lrgrad G = \oint_{\partial S} F d\Bx G.
\end{equation}
In a Euclidean space, the operator \( I \lrgrad \) is a \( \pi/2 \) rotation of the gradient, but it has this rotation-like structure in any metric:
\begin{equation}\label{eqn:unpackingFundamentalTheorem:240}
I \grad
=
e_1 e_2 \lr{ e^1 \partial_1 + e^2 \partial_2 }
=
-e_2 \partial_1 + e_1 \partial_2.
\end{equation}

  • \( F = 1 \) and \( G \in \bigwedge^0 \) or \( G \in \bigwedge^2 \). For \( F = 1 \) and scalar or bivector \( G \) we have
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:260}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } G = \oint_{\partial S} d\Bx G,
    \end{equation}
    where, for \( x^1 \in [x^1(0),x^1(1)] \) and \( x^2 \in [x^2(0),x^2(1)] \), the RHS written explicitly is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:280}
    \oint_{\partial S} d\Bx G
    =
    \int dx^1 e_1
    \lr{ G(x^1, x^2(1)) - G(x^1, x^2(0)) }
    - dx^2 e_2
    \lr{ G(x^1(1),x^2) - G(x^1(0), x^2) }.
    \end{equation}
    This is sketched in fig. 2. Since a 2D bivector \( G \) can be written as \( G = I g \), where \( g \) is a scalar, we may write the pseudoscalar case as
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:300}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } g = \oint_{\partial S} d\Bx g,
    \end{equation}
    after right multiplying both sides with \( I^{-1} \). Algebraically the scalar and pseudoscalar cases can be thought of as identical scalar relationships.
  • \( F = 1, G \in \bigwedge^1 \). For \( F = 1 \) and vector \( G \) the 2D fundamental theorem for surfaces can be split into scalar
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:320}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G = \oint_{\partial S} d\Bx \cdot G,
    \end{equation}
    and bivector relations
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:340}
    \int_S dx^1 dx^2\, \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G = \oint_{\partial S} d\Bx \wedge G.
    \end{equation}
    To expand \ref{eqn:unpackingFundamentalTheorem:320}, let
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:360}
    G = g_1 e^1 + g_2 e^2,
    \end{equation}
    for which
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:380}
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot G
    =
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \cdot
    \lr{ g_1 e^1 + g_2 e^2 }
    =
    \partial_2 g_1 - \partial_1 g_2,
    \end{equation}
    and
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:400}
    d\Bx \cdot G
    =
    \lr{ dx^1 e_1 - dx^2 e_2 } \cdot \lr{ g_1 e^1 + g_2 e^2 }
    =
    dx^1 g_1 - dx^2 g_2,
    \end{equation}
    so \ref{eqn:unpackingFundamentalTheorem:320} expands to
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:500}
    \int_S dx^1 dx^2\, \lr{ \partial_2 g_1 - \partial_1 g_2 }
    =
    \int
    \evalbar{dx^1 g_1}{\Delta x^2} - \evalbar{ dx^2 g_2 }{\Delta x^1}.
    \end{equation}
    This coordinate expansion illustrates how the pseudoscalar nature of the area element results in a duality transformation, as we end up with a curl-like operation on the LHS, despite the dot product nature of the decomposition that we used. That can also be seen directly for vector \( G \), since
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:560}
    dA (I \grad) \cdot G
    =
    dA \gpgradezero{ I \grad G }
    =
    dA I \lr{ \grad \wedge G },
    \end{equation}
    since the scalar selection of \( I \lr{ \grad \cdot G } \) is zero. In the grade-2 relation \ref{eqn:unpackingFundamentalTheorem:340}, we expect a pseudoscalar cancellation on both sides, leaving a scalar (divergence-like) relationship. This time, we use upper index coordinates for the vector \( G \), letting
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:440}
    G = g^1 e_1 + g^2 e_2,
    \end{equation}
    so
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:460}
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge G
    =
    \lr{ -e_2 \partial_1 + e_1 \partial_2 } \wedge
    \lr{ g^1 e_1 + g^2 e_2 }
    =
    e_1 e_2 \lr{ \partial_1 g^1 + \partial_2 g^2 },
    \end{equation}
    and
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:480}
    d\Bx \wedge G
    =
    \lr{ dx^1 e_1 - dx^2 e_2 } \wedge
    \lr{ g^1 e_1 + g^2 e_2 }
    =
    e_1 e_2 \lr{ dx^1 g^2 + dx^2 g^1 }.
    \end{equation}
    So \ref{eqn:unpackingFundamentalTheorem:340}, after multiplication of both sides by \( I^{-1} \), is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:520}
    \int_S dx^1 dx^2\,
    \lr{ \partial_1 g^1 + \partial_2 g^2 }
    =
    \int
    \evalbar{dx^1 g^2}{\Delta x^2} + \evalbar{dx^2 g^1 }{\Delta x^1}.
    \end{equation}

As before, we’ve implicitly performed a duality transformation, and end up with a divergence operation. That can be seen directly without coordinate expansion, by rewriting the wedge as a grade two selection, and expanding the gradient action on the vector \( G \), as follows
\begin{equation}\label{eqn:unpackingFundamentalTheorem:580}
dA (I \grad) \wedge G
=
dA \gpgradetwo{ I \grad G }
=
dA I \lr{ \grad \cdot G },
\end{equation}
since \( I \lr{ \grad \wedge G } \) has only a scalar component.

 

fig. 2. Line integral around rectangular boundary.

Theorem 1.1: Green’s theorem [1].

Let \( S \) be a Jordan region with a piecewise-smooth boundary \( C \). If \( P, Q \) are continuously differentiable on an open set that contains \( S \), then
\begin{equation*}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} } = \oint P dx + Q dy.
\end{equation*}

Problem: Relationship to Green’s theorem.

If the space is Euclidean, show that \ref{eqn:unpackingFundamentalTheorem:500} and \ref{eqn:unpackingFundamentalTheorem:520} are both instances of Green’s theorem with suitable choices of \( P \) and \( Q \).

Answer

I will omit the subtleties related to general regions and consider just the case of an infinitesimal square region.

Start proof:

Let’s start with \ref{eqn:unpackingFundamentalTheorem:500}, with \( g_1 = P \), \( g_2 = Q \), and \( x^1 = x, x^2 = y \). The LHS is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:600}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }.
\end{equation}
On the RHS we have
\begin{equation}\label{eqn:unpackingFundamentalTheorem:620}
\int \evalbar{dx P}{\Delta y} - \evalbar{ dy Q }{\Delta x}
=
\int dx \lr{ P(x, y_1) - P(x, y_0) } - \int dy \lr{ Q(x_1, y) - Q(x_0, y) }.
\end{equation}
This pair of integrals is plotted in fig. 3, from which we see that \ref{eqn:unpackingFundamentalTheorem:620} can be expressed as the line integral, leaving us with
\begin{equation}\label{eqn:unpackingFundamentalTheorem:640}
\int dx dy \lr{ \PD{y}{P} - \PD{x}{Q} }
=
\oint dx P + dy Q,
\end{equation}
which is Green’s theorem over the infinitesimal square integration region.

For the equivalence of \ref{eqn:unpackingFundamentalTheorem:520} to Green’s theorem, let \( g^2 = P \), and \( g^1 = -Q \). Plugging into the LHS, we find the Green’s theorem integrand. On the RHS, the integrand expands to
\begin{equation}\label{eqn:unpackingFundamentalTheorem:660}
\evalbar{dx g^2}{\Delta y} + \evalbar{dy g^1 }{\Delta x}
=
dx \lr{ P(x,y_1) - P(x, y_0)}
+
dy \lr{ -Q(x_1, y) + Q(x_0, y)},
\end{equation}
which is exactly what we found in \ref{eqn:unpackingFundamentalTheorem:620}.

End proof.
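The clockwise form of Green’s theorem used above is also easy to verify computationally for a specific region and integrand. Here is a minimal sympy sketch over the unit square, with an arbitrarily chosen polynomial \( P, Q \) (the particular choices are assumptions for illustration only).

from sympy import symbols, integrate, simplify

x, y = symbols('x y', real=True)
P = x * y**2          # sample choices; any smooth P, Q will do
Q = x**2 * y**3

lhs = integrate(integrate(P.diff(y) - Q.diff(x), (x, 0, 1)), (y, 0, 1))

# clockwise boundary of [0,1] x [0,1]: (0,0) -> (0,1) -> (1,1) -> (1,0) -> (0,0)
rhs = (integrate(Q.subs(x, 0), (y, 0, 1))
       + integrate(P.subs(y, 1), (x, 0, 1))
       + integrate(Q.subs(x, 1), (y, 1, 0))
       + integrate(P.subs(y, 0), (x, 1, 0)))

print(simplify(lhs - rhs))   # 0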

 

fig. 3. Path for Green’s theorem.

We may also relate multivector gradient integrals in 2D to the normal integral around the boundary of the bounding curve. That relationship is as follows.

Theorem 1.2: 2D gradient integrals.

\begin{equation*}
\begin{aligned}
\int J du dv \rgrad G &= \oint I^{-1} d\Bx G = \int J \lr{ \Bx^v du + \Bx^u dv } G \\
\int J du dv F \lgrad &= \oint F I^{-1} d\Bx = \int J F \lr{ \Bx^v du + \Bx^u dv },
\end{aligned}
\end{equation*}
where \( J = \partial(x^1, x^2)/\partial(u,v) \) is the Jacobian of the parameterization \( x = x(u,v) \). In terms of the coordinates \( x^1, x^2 \), this reduces to
\begin{equation*}
\begin{aligned}
\int dx^1 dx^2 \rgrad G &= \oint I^{-1} d\Bx G = \int \lr{ e^2 dx^1 + e^1 dx^2 } G \\
\int dx^1 dx^2 F \lgrad &= \oint F I^{-1} d\Bx = \int F \lr{ e^2 dx^1 + e^1 dx^2 }.
\end{aligned}
\end{equation*}
The vector \( I^{-1} d\Bx \) is orthogonal to the tangent vector along the boundary, and for Euclidean spaces it can be identified as the outwards normal.

Start proof:

Respectively setting \( F = 1 \), and \( G = 1\) in \ref{eqn:unpackingFundamentalTheorem:680}, we have
\begin{equation}\label{eqn:unpackingFundamentalTheorem:940}
\int I^{-1} d^2 \Bx \rgrad G = \oint I^{-1} d\Bx G,
\end{equation}
and
\begin{equation}\label{eqn:unpackingFundamentalTheorem:960}
\int F d^2 \Bx \lgrad I^{-1} = \oint F d\Bx I^{-1}.
\end{equation}
Starting with \ref{eqn:unpackingFundamentalTheorem:940} we find
\begin{equation}\label{eqn:unpackingFundamentalTheorem:700}
\int I^{-1} J du dv I \rgrad G = \oint I^{-1} d\Bx G,
\end{equation}
which, after cancelling the pseudoscalar factors, is the desired relation \( \int dx^1 dx^2 \rgrad G = \oint I^{-1} d\Bx G \). In terms of a parameterization \( x = x(u,v) \), the pseudoscalar for the space is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:720}
I = \frac{\Bx_u \wedge \Bx_v}{J},
\end{equation}
so
\begin{equation}\label{eqn:unpackingFundamentalTheorem:740}
I^{-1} = \frac{J}{\Bx_u \wedge \Bx_v}.
\end{equation}
Also note that \( \lr{\Bx_u \wedge \Bx_v}^{-1} = \Bx^v \wedge \Bx^u \), so
\begin{equation}\label{eqn:unpackingFundamentalTheorem:760}
I^{-1} = J \lr{ \Bx^v \wedge \Bx^u },
\end{equation}
and
\begin{equation}\label{eqn:unpackingFundamentalTheorem:780}
I^{-1} d\Bx
= I^{-1} \cdot d\Bx
= J \lr{ \Bx^v \wedge \Bx^u } \cdot \lr{ \Bx_u du - \Bx_v dv }
= J \lr{ \Bx^v du + \Bx^u dv },
\end{equation}
so the right acting gradient integral is
\begin{equation}\label{eqn:unpackingFundamentalTheorem:800}
\int J du dv \grad G =
\int
\evalbar{J \Bx^v G}{\Delta v} du + \evalbar{J \Bx^u G}{\Delta u} dv,
\end{equation}
which we write in abbreviated form as \( \int J \lr{ \Bx^v du + \Bx^u dv} G \).

For the \( G = 1 \) case, from \ref{eqn:unpackingFundamentalTheorem:960} we find
\begin{equation}\label{eqn:unpackingFundamentalTheorem:820}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1}.
\end{equation}
However, in a 2D space, regardless of metric, we have \( I a = – a I \) for any vector \( a \) (i.e. \( \grad \) or \( d\Bx\)), so we may commute the outer pseudoscalars in
\begin{equation}\label{eqn:unpackingFundamentalTheorem:840}
\int J du dv F I \lgrad I^{-1} = \oint F d\Bx I^{-1},
\end{equation}
so
\begin{equation}\label{eqn:unpackingFundamentalTheorem:850}
-\int J du dv F I I^{-1} \lgrad = -\oint F I^{-1} d\Bx.
\end{equation}
After cancelling the negative sign on both sides, we have the claimed result.

To see that \( I a \), for any vector \( a \) is normal to \( a \), we can compute the dot product
\begin{equation}\label{eqn:unpackingFundamentalTheorem:860}
\lr{ I a } \cdot a
=
\gpgradezero{ I a a }
=
a^2 \gpgradezero{ I }
= 0,
\end{equation}
since the scalar selection of a bivector is zero. Since \( I^{-1} = \pm I \), the same argument shows that \( I^{-1} d\Bx \) must be orthogonal to \( d\Bx \).

End proof.
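The metric independence of that orthogonality argument can also be spot checked symbolically. The sketch below uses (assumed) 2x2 matrix representations of a Euclidean and a mixed signature 2D algebra, extracting the scalar grade as half the matrix trace.

from sympy import symbols, Matrix, simplify

a1, a2 = symbols('a1 a2', real=True)

def Ia_dot_a(e1, e2):
    I = e1 * e2
    a = a1 * e1 + a2 * e2
    # (I a) . a = <I a a>_0; the scalar grade is trace/2 in this representation
    return simplify((I * a * a).trace() / 2)

# Euclidean: e1^2 = e2^2 = +1
print(Ia_dot_a(Matrix([[1, 0], [0, -1]]), Matrix([[0, 1], [1, 0]])))    # 0
# mixed signature: e1^2 = +1, e2^2 = -1
print(Ia_dot_a(Matrix([[1, 0], [0, -1]]), Matrix([[0, 1], [-1, 0]])))   # 0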

Let’s look at the geometry of the normal \( I^{-1} d\Bx \) in a couple of 2D vector spaces. We use a unit square integration region to simplify the boundary term expressions.

  • Euclidean: With a parameterization \( x(u,v) = u\Be_1 + v \Be_2 \), and Euclidean basis vectors \( (\Be_1)^2 = (\Be_2)^2 = 1 \), the fundamental theorem integrated over the rectangle \( [x_0,x_1] \times [y_0,y_1] \) is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:880}
    \int dx dy \grad G =
    \int
    \Be_2 \lr{ G(x,y_1) - G(x,y_0) } dx +
    \Be_1 \lr{ G(x_1,y) - G(x_0,y) } dy.
    \end{equation}
    Each of the terms in the integrand above is illustrated in fig. 4, and we see that this is a path integral weighted by the outwards normal.

    fig. 4. Outwards oriented normal for Euclidean space.

  • Spacetime: Let \( x(u,v) = u \gamma_0 + v \gamma_1 \), where \( (\gamma_0)^2 = -(\gamma_1)^2 = 1 \). With \( u = t, v = x \), the gradient integral over a \([t_0,t_1] \times [x_0,x_1]\) of spacetime is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:900}
    \begin{aligned}
    \int dt dx \grad G
    &=
    \int
    \gamma^1 dt \lr{ G(t, x_1) - G(t, x_0) }
    +
    \gamma^0 dx \lr{ G(t_1, x) - G(t_0, x) } \\
    &=
    \int
    \gamma_1 dt \lr{ -G(t, x_1) + G(t, x_0) }
    +
    \gamma_0 dx \lr{ G(t_1, x) - G(t_0, x) }
    .
    \end{aligned}
    \end{equation}
    With \( t \) plotted along the horizontal axis, and \( x \) along the vertical, each of the terms of this integrand is illustrated graphically in fig. 5. For this mixed signature space, there is no longer any good geometrical characterization of the normal.

    fig. 5. Orientation of the boundary normal for a spacetime basis.

  • Spacelike:
    Let \( x(u,v) = u \gamma_1 + v \gamma_2 \), where \( (\gamma_1)^2 = (\gamma_2)^2 = -1 \). With \( u = x, v = y \), the gradient integral over a \([x_0,x_1] \times [y_0,y_1]\) of this space is
    \begin{equation}\label{eqn:unpackingFundamentalTheorem:920}
    \begin{aligned}
    \int dx dy \grad G
    &=
    \int
    \gamma^2 dx \lr{ G(x, y_1) - G(x, y_0) }
    +
    \gamma^1 dy \lr{ G(x_1, y) - G(x_0, y) } \\
    &=
    \int
    \gamma_2 dx \lr{ -G(x, y_1) + G(x, y_0) }
    +
    \gamma_1 dy \lr{ -G(x_1, y) + G(x_0, y) }
    .
    \end{aligned}
    \end{equation}
    Referring to fig. 6. where the elements of the integrand are illustrated, we see that the normal \( I^{-1} d\Bx \) for the boundary of this region can be characterized as inwards.

    fig. 6. Inwards oriented normal for a Dirac spacelike basis.

References

[1] S.L. Salas and E. Hille. Calculus: one and several variables. Wiley New York, 1990.

Gradient, divergence, curl and Laplacian in cylindrical coordinates

November 6, 2016 math and physics play

[Click here for a PDF of this post with nicer formatting]

In class it was suggested that the identity

\begin{equation}\label{eqn:laplacianCylindrical:20}
\spacegrad^2 \BA =
\spacegrad \lr{ \spacegrad \cdot \BA }
-\spacegrad \cross \lr{ \spacegrad \cross \BA },
\end{equation}

can be used to compute the Laplacian in non-rectangular coordinates. Is that the easiest way to do this?

How about just sequential applications of the gradient on the vector? Let’s start with the vector product of the gradient and the vector. First recall that the cylindrical representation of the gradient is

\begin{equation}\label{eqn:laplacianCylindrical:80}
\spacegrad = \rhocap \partial_\rho + \frac{\phicap}{\rho} \partial_\phi + \zcap \partial_z,
\end{equation}

where
\begin{equation}\label{eqn:laplacianCylindrical:100}
\begin{aligned}
\rhocap &= \Be_1 e^{\Be_1 \Be_2 \phi} \\
\phicap &= \Be_2 e^{\Be_1 \Be_2 \phi} \\
\end{aligned}
\end{equation}

Taking \( \phi \) derivatives of \ref{eqn:laplacianCylindrical:100}, we have

\begin{equation}\label{eqn:laplacianCylindrical:120}
\begin{aligned}
\partial_\phi \rhocap &= \Be_1 \Be_1 \Be_2 e^{\Be_1 \Be_2 \phi} = \Be_2 e^{\Be_1 \Be_2 \phi} = \phicap \\
\partial_\phi \phicap &= \Be_2 \Be_1 \Be_2 e^{\Be_1 \Be_2 \phi} = -\Be_1 e^{\Be_1 \Be_2 \phi} = -\rhocap.
\end{aligned}
\end{equation}
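These derivatives are also easy to check with the explicit column vector forms \( \rhocap = (\cos\phi, \sin\phi, 0) \), \( \phicap = (-\sin\phi, \cos\phi, 0) \); a tiny sympy sketch (illustration only):

from sympy import symbols, cos, sin, Matrix, simplify

phi = symbols('phi', real=True)
rhocap = Matrix([cos(phi), sin(phi), 0])
phicap = Matrix([-sin(phi), cos(phi), 0])

print(simplify(rhocap.diff(phi) - phicap))   # zero vector: d(rhocap)/dphi = phicap
print(simplify(phicap.diff(phi) + rhocap))   # zero vector: d(phicap)/dphi = -rhocap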

The gradient of a vector \( \BA = \rhocap A_\rho + \phicap A_\phi + \zcap A_z \) is

\begin{equation}\label{eqn:laplacianCylindrical:60}
\begin{aligned}
\spacegrad \BA
&=
\lr{ \rhocap \partial_\rho + \frac{\phicap}{\rho} \partial_\phi + \zcap \partial_z }
\lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z } \\
&=
\quad \rhocap \partial_\rho \lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z } \\
&\quad + \frac{\phicap}{\rho} \partial_\phi \lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z } \\
&\quad + \zcap \partial_z \lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z } \\
&=
\quad \rhocap \lr{ \rhocap \partial_\rho A_\rho + \phicap \partial_\rho A_\phi + \zcap \partial_\rho A_z } \\
&\quad + \frac{\phicap}{\rho} \lr{ \partial_\phi(\rhocap A_\rho) + \partial_\phi(\phicap A_\phi) + \zcap \partial_\phi A_z } \\
&\quad + \zcap \lr{ \rhocap \partial_z A_\rho + \phicap \partial_z A_\phi + \zcap \partial_z A_z } \\
&=
\quad \partial_\rho A_\rho + \rhocap \phicap \partial_\rho A_\phi + \rhocap \zcap \partial_\rho A_z \\
&\quad +\frac{1}{\rho} \lr{ A_\rho + \phicap \rhocap \partial_\phi A_\rho - \phicap \rhocap A_\phi + \partial_\phi A_\phi + \phicap \zcap \partial_\phi A_z } \\
&\quad + \zcap \rhocap \partial_z A_\rho + \zcap \phicap \partial_z A_\phi + \partial_z A_z \\
&=
\quad \partial_\rho A_\rho + \frac{1}{\rho} \lr{ A_\rho + \partial_\phi A_\phi } + \partial_z A_z \\
&\quad +
\zcap \rhocap \lr{
\partial_z A_\rho
-\partial_\rho A_z
} \\
&\quad +
\phicap \zcap \lr{
\inv{\rho} \partial_\phi A_z
- \partial_z A_\phi
} \\
&\quad +
\rhocap \phicap \lr{
\partial_\rho A_\phi
- \inv{\rho} \lr{ \partial_\phi A_\rho - A_\phi }
}.
\end{aligned}
\end{equation}

As expected, we see that the gradient splits nicely into a dot and curl

\begin{equation}\label{eqn:laplacianCylindrical:160}
\begin{aligned}
\spacegrad \BA
&= \spacegrad \cdot \BA + \spacegrad \wedge \BA \\
&= \spacegrad \cdot \BA + \rhocap \phicap \zcap (\spacegrad \cross \BA ),
\end{aligned}
\end{equation}

where the cylindrical representation of the divergence is seen to be

\begin{equation}\label{eqn:laplacianCylindrical:140}
\spacegrad \cdot \BA
=
\inv{\rho} \partial_\rho (\rho A_\rho) + \frac{1}{\rho} \partial_\phi A_\phi + \partial_z A_z,
\end{equation}

and the cylindrical representation of the curl is

\begin{equation}\label{eqn:laplacianCylindrical:180}
\spacegrad \cross \BA
=
\rhocap
\lr{
\inv{\rho} \partial_\phi A_z
– \partial_z A_\phi
}
+
\phicap
\lr{
\partial_z A_\rho
-\partial_\rho A_z
}
+
\inv{\rho} \zcap \lr{
\partial_\rho ( \rho A_\phi )
– \partial_\phi A_\rho
}.
\end{equation}

Should we want to, it is now possible to evaluate the Laplacian of \( \BA \) using \ref{eqn:laplacianCylindrical:20}, which will have the following components

\begin{equation}\label{eqn:laplacianCylindrical:220}
\begin{aligned}
\rhocap \cdot \lr{ \spacegrad^2 \BA }
&=
\partial_\rho
\lr{
\inv{\rho} \partial_\rho (\rho A_\rho) + \frac{1}{\rho} \partial_\phi A_\phi + \partial_z A_z
}
-
\lr{
\inv{\rho} \partial_\phi \lr{
\inv{\rho} \lr{
\partial_\rho ( \rho A_\phi ) - \partial_\phi A_\rho
}
}
- \partial_z \lr{
\partial_z A_\rho -\partial_\rho A_z
}
} \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho (\rho A_\rho)}
+ \partial_\rho \lr{ \frac{1}{\rho} \partial_\phi A_\phi}
+ \partial_{\rho z} A_z
- \inv{\rho^2}\partial_{\phi \rho} ( \rho A_\phi )
+ \inv{\rho^2}\partial_{\phi\phi} A_\rho
+ \partial_{zz} A_\rho
- \partial_{z\rho} A_z \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho (\rho A_\rho)}
+ \inv{\rho^2}\partial_{\phi\phi} A_\rho
+ \partial_{zz} A_\rho
- \frac{1}{\rho^2} \partial_\phi A_\phi
+ \frac{1}{\rho} \partial_{\rho\phi} A_\phi
- \inv{\rho^2}\partial_{\phi} A_\phi
- \inv{\rho}\partial_{\phi\rho} A_\phi \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho (\rho A_\rho)}
+ \inv{\rho^2}\partial_{\phi\phi} A_\rho
+ \partial_{zz} A_\rho
- \frac{2}{\rho^2} \partial_\phi A_\phi \\
&=
\inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\rho}
+ \inv{\rho^2}\partial_{\phi\phi} A_\rho
+ \partial_{zz} A_\rho
- \frac{A_\rho}{\rho^2}
- \frac{2}{\rho^2} \partial_\phi A_\phi,
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:laplacianCylindrical:240}
\begin{aligned}
\phicap \cdot \lr{ \spacegrad^2 \BA }
&=
\inv{\rho} \partial_\phi
\lr{
\inv{\rho} \partial_\rho (\rho A_\rho) + \frac{1}{\rho} \partial_\phi A_\phi + \partial_z A_z
}
-
\lr{
\lr{
\partial_z \lr{
\inv{\rho} \partial_\phi A_z - \partial_z A_\phi
}
-\partial_\rho \lr{
\inv{\rho} \lr{ \partial_\rho ( \rho A_\phi ) - \partial_\phi A_\rho}
}
}
} \\
&=
\inv{\rho^2} \partial_{\phi\rho} (\rho A_\rho)
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \inv{\rho}\partial_{\phi z} A_z
- \inv{\rho} \partial_{z\phi} A_z
+ \partial_{z z} A_\phi
+\partial_\rho \lr{ \inv{\rho} \partial_\rho ( \rho A_\phi ) }
- \partial_\rho \lr{ \inv{\rho} \partial_\phi A_\rho} \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho ( \rho A_\phi ) }
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \partial_{z z} A_\phi
+ \inv{\rho^2} \partial_{\phi\rho} (\rho A_\rho)
+ \inv{\rho}\partial_{\phi z} A_z
- \inv{\rho} \partial_{z\phi} A_z
- \partial_\rho \lr{ \inv{\rho} \partial_\phi A_\rho} \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho ( \rho A_\phi ) }
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \partial_{z z} A_\phi
+ \inv{\rho^2} \partial_{\phi} A_\rho
+ \inv{\rho} \partial_{\phi\rho} A_\rho
+ \inv{\rho^2} \partial_\phi A_\rho
- \inv{\rho} \partial_{\rho\phi} A_\rho \\
&=
\partial_\rho \lr{ \inv{\rho} \partial_\rho ( \rho A_\phi ) }
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \partial_{z z} A_\phi
+ \frac{2}{\rho^2} \partial_{\phi} A_\rho \\
&=
\inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\phi }
+ \frac{1}{\rho^2} \partial_{\phi\phi} A_\phi
+ \partial_{z z} A_\phi
+ \frac{2}{\rho^2} \partial_{\phi} A_\rho
- \frac{A_\phi}{\rho^2},
\end{aligned}
\end{equation}

\begin{equation}\label{eqn:laplacianCylindrical:260}
\begin{aligned}
\zcap \cdot \lr{ \spacegrad^2 \BA }
&=
\partial_z
\lr{
\inv{\rho} \partial_\rho (\rho A_\rho) + \frac{1}{\rho} \partial_\phi A_\phi + \partial_z A_z
}
-
\inv{\rho} \lr{
\partial_\rho \lr{ \rho \lr{
\partial_z A_\rho -\partial_\rho A_z
}
}
- \partial_\phi \lr{
\inv{\rho} \partial_\phi A_z - \partial_z A_\phi
}
} \\
&=
\inv{\rho} \partial_{z\rho} (\rho A_\rho)
+ \frac{1}{\rho} \partial_{z\phi} A_\phi
+ \partial_{zz} A_z
- \inv{\rho}\partial_\rho \lr{ \rho \partial_z A_\rho }
+ \inv{\rho}\partial_\rho \lr{ \rho \partial_\rho A_z }
+ \inv{\rho^2} \partial_{\phi\phi} A_z
- \inv{\rho} \partial_{\phi z} A_\phi \\
&=
\inv{\rho}\partial_\rho \lr{ \rho \partial_\rho A_z }
+ \inv{\rho^2} \partial_{\phi\phi} A_z
+ \partial_{zz} A_z
+ \inv{\rho} \partial_{z} A_\rho
+\partial_{z\rho} A_\rho
+ \frac{1}{\rho} \partial_{z\phi} A_\phi
- \inv{\rho}\partial_z A_\rho
- \partial_{\rho z} A_\rho
- \inv{\rho} \partial_{\phi z} A_\phi \\
&=
\inv{\rho}\partial_\rho \lr{ \rho \partial_\rho A_z }
+ \inv{\rho^2} \partial_{\phi\phi} A_z
+ \partial_{zz} A_z
\end{aligned}
\end{equation}

Evaluating these was a fairly tedious and mechanical job, one better suited to a computer algebra system than to the by-hand expansion done here.

Explicit cylindrical Laplacian

Let’s try this a different way. The most obvious potential strategy is to just apply the Laplacian to the vector itself, but we need to include the unit vectors in such an operation

\begin{equation}\label{eqn:laplacianCylindrical:280}
\spacegrad^2 \BA =
\spacegrad^2 \lr{ \rhocap A_\rho + \phicap A_\phi + \zcap A_z }.
\end{equation}

First we need to know the explicit form of the cylindrical Laplacian. From the painful expansion, we can guess that it is

\begin{equation}\label{eqn:laplacianCylindrical:300}
\spacegrad^2 \psi
=
\inv{\rho}\partial_\rho \lr{ \rho \partial_\rho \psi }
+ \inv{\rho^2} \partial_{\phi\phi} \psi
+ \partial_{zz} \psi.
\end{equation}

Let’s check that explicitly. Here I use the vector product where \( \rhocap^2 = \phicap^2 = \zcap^2 = 1 \), and these vectors anticommute when different

\begin{equation}\label{eqn:laplacianCylindrical:320}
\begin{aligned}
\spacegrad^2 \psi
&=
\lr{ \rhocap \partial_\rho + \frac{\phicap}{\rho} \partial_\phi + \zcap \partial_z }
\lr{ \rhocap \partial_\rho \psi + \frac{\phicap}{\rho} \partial_\phi \psi + \zcap \partial_z \psi } \\
&=
\rhocap \partial_\rho
\lr{ \rhocap \partial_\rho \psi + \frac{\phicap}{\rho} \partial_\phi \psi + \zcap \partial_z \psi }
+ \frac{\phicap}{\rho} \partial_\phi
\lr{ \rhocap \partial_\rho \psi + \frac{\phicap}{\rho} \partial_\phi \psi + \zcap \partial_z \psi }
+ \zcap \partial_z
\lr{ \rhocap \partial_\rho \psi + \frac{\phicap}{\rho} \partial_\phi \psi + \zcap \partial_z \psi } \\
&=
\partial_{\rho\rho} \psi
+ \rhocap \phicap \partial_\rho \lr{ \frac{1}{\rho} \partial_\phi \psi}
+ \rhocap \zcap \partial_{\rho z} \psi
+ \frac{\phicap}{\rho} \partial_\phi \lr{ \rhocap \partial_\rho \psi }
+ \frac{\phicap}{\rho} \partial_\phi \lr{ \frac{\phicap}{\rho} \partial_\phi \psi }
+ \frac{\phicap \zcap }{\rho} \partial_{\phi z} \psi
+ \zcap \rhocap \partial_{z\rho} \psi
+ \frac{\zcap \phicap}{\rho} \partial_{z\phi} \psi
+ \partial_{zz} \psi \\
&=
\partial_{\rho\rho} \psi
+ \inv{\rho} \partial_\rho \psi
+ \frac{1}{\rho^2} \partial_{\phi \phi} \psi
+ \partial_{zz} \psi
+ \rhocap \phicap
\lr{
-\frac{1}{\rho^2} \partial_\phi \psi
+\frac{1}{\rho} \partial_{\rho \phi} \psi
-\inv{\rho} \partial_{\phi \rho} \psi
+ \frac{1}{\rho^2} \partial_\phi \psi
}
+ \zcap \rhocap \lr{
-\partial_{\rho z} \psi
+ \partial_{z\rho} \psi
}
+ \phicap \zcap \lr{
\inv{\rho} \partial_{\phi z} \psi
- \inv{\rho} \partial_{z\phi} \psi
} \\
&=
\partial_{\rho\rho} \psi
+ \inv{\rho} \partial_\rho \psi
+ \frac{1}{\rho^2} \partial_{\phi \phi} \psi
+ \partial_{zz} \psi,
\end{aligned}
\end{equation}

so the Laplacian operator is

\begin{equation}\label{eqn:laplacianCylindrical:340}
\boxed{
\spacegrad^2
=
\inv{\rho} \PD{\rho}{} \lr{ \rho \PD{\rho}{} }
+ \frac{1}{\rho^2} \PDSq{\phi}{}
+ \PDSq{z}{}.
}
\end{equation}

All the bivector grades of the Laplacian operator are seen to explicitly cancel, regardless of the grade of \( \psi \), just as if we had expanded the scalar Laplacian as a dot product
\( \spacegrad^2 \psi = \spacegrad \cdot \lr{ \spacegrad \psi} \).
Unlike such a scalar expansion, this derivation is seen to be valid for any grade \( \psi \). We know now that we can trust this result when \( \psi \) is a scalar, a vector, a bivector, a trivector, or even a multivector.
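A quick sympy check of the boxed operator against the Cartesian Laplacian, for one sample scalar field written in both coordinate systems (the particular field is an illustration assumption):

from sympy import symbols, cos, sin, simplify

x, y, z, rho, phi = symbols('x y z rho phi', positive=True)

psi_cart = x**2 * y + z**2
psi_cyl = rho**3 * cos(phi)**2 * sin(phi) + z**2   # the same field in cylindrical form

lap_cart = psi_cart.diff(x, 2) + psi_cart.diff(y, 2) + psi_cart.diff(z, 2)
lap_cyl = ((1/rho) * (rho * psi_cyl.diff(rho)).diff(rho)
           + (1/rho**2) * psi_cyl.diff(phi, 2)
           + psi_cyl.diff(z, 2))

print(simplify(lap_cyl - lap_cart.subs({x: rho*cos(phi), y: rho*sin(phi)})))   # 0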

Vector Laplacian

Now that we trust that the typical scalar form of the Laplacian applies equally well to multivectors as it does to scalars, that cylindrical coordinate operator can now be applied to a
vector. Consider the projections onto each of the directions in turn

\begin{equation}\label{eqn:laplacianCylindrical:360}
\spacegrad^2 \lr{ \rhocap A_\rho }
=
\rhocap \inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\rho }
+ \frac{1}{\rho^2} \partial_{\phi\phi} \lr{\rhocap A_\rho}
+ \rhocap \partial_{zz} A_\rho
\end{equation}

\begin{equation}\label{eqn:laplacianCylindrical:380}
\begin{aligned}
\partial_{\phi\phi} \lr{\rhocap A_\rho}
&=
\partial_\phi \lr{ \phicap A_\rho + \rhocap \partial_\phi A_\rho } \\
&=
-\rhocap A_\rho
+\phicap \partial_\phi A_\rho
+ \phicap \partial_\phi A_\rho
+ \rhocap \partial_{\phi\phi} A_\rho \\
&=
\rhocap \lr{ \partial_{\phi\phi} A_\rho -A_\rho }
+ 2 \phicap \partial_\phi A_\rho
\end{aligned}
\end{equation}

so this component of the vector Laplacian is

\begin{equation}\label{eqn:laplacianCylindrical:400}
\begin{aligned}
\spacegrad^2 \lr{ \rhocap A_\rho }
&=
\rhocap
\lr{
\inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\rho }
+ \inv{\rho^2} \partial_{\phi\phi} A_\rho
- \inv{\rho^2} A_\rho
+ \partial_{zz} A_\rho
}
+
\phicap
\lr{
2 \inv{\rho^2} \partial_\phi A_\rho
} \\
&=
\rhocap \lr{
\spacegrad^2 A_\rho
- \inv{\rho^2} A_\rho
}
+
\phicap
\frac{2}{\rho^2} \partial_\phi A_\rho
.
\end{aligned}
\end{equation}

The Laplacian for the projection of the vector onto the \( \phicap \) direction is

\begin{equation}\label{eqn:laplacianCylindrical:420}
\spacegrad^2 \lr{ \phicap A_\phi }
=
\phicap \inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\phi }
+ \frac{1}{\rho^2} \partial_{\phi\phi} \lr{\phicap A_\phi}
+ \phicap \partial_{zz} A_\phi,
\end{equation}

Again, since the unit vectors are \( \phi \) dependent, the \( \phi \) derivatives have to be treated carefully

\begin{equation}\label{eqn:laplacianCylindrical:440}
\begin{aligned}
\partial_{\phi\phi} \lr{\phicap A_\phi}
&=
\partial_{\phi} \lr{-\rhocap A_\phi + \phicap \partial_\phi A_\phi} \\
&=
-\phicap A_\phi
-\rhocap \partial_\phi A_\phi
- \rhocap \partial_\phi A_\phi
+ \phicap \partial_{\phi \phi} A_\phi \\
&=
- 2 \rhocap \partial_\phi A_\phi
+
\phicap
\lr{
\partial_{\phi \phi} A_\phi
- A_\phi
},
\end{aligned}
\end{equation}

so the Laplacian of this projection is
\begin{equation}\label{eqn:laplacianCylindrical:460}
\begin{aligned}
\spacegrad^2 \lr{ \phicap A_\phi }
&=
\phicap
\lr{
\inv{\rho} \partial_\rho \lr{ \rho \partial_\rho A_\phi }
+ \inv{\rho^2} \partial_{\phi \phi} A_\phi
- \frac{A_\phi }{\rho^2}
+ \partial_{zz} A_\phi
}
- \rhocap \frac{2}{\rho^2} \partial_\phi A_\phi \\
&=
\phicap \lr{
\spacegrad^2 A_\phi
- \frac{A_\phi}{\rho^2}
}
- \rhocap \frac{2}{\rho^2} \partial_\phi A_\phi.
\end{aligned}
\end{equation}

Since \( \zcap \) is fixed we have

\begin{equation}\label{eqn:laplacianCylindrical:480}
\spacegrad^2 \zcap A_z
=
\zcap \spacegrad^2 A_z.
\end{equation}

Putting all the pieces together we have
\begin{equation}\label{eqn:laplacianCylindrical:500}
\boxed{
\spacegrad^2 \BA
=
\rhocap \lr{
\spacegrad^2 A_\rho
- \inv{\rho^2} A_\rho
- \frac{2}{\rho^2} \partial_\phi A_\phi
}
+\phicap \lr{
\spacegrad^2 A_\phi
- \frac{A_\phi}{\rho^2}
+ \frac{2}{\rho^2} \partial_\phi A_\rho
}
+
\zcap \spacegrad^2 A_z.
}
\end{equation}

This matches the results of \ref{eqn:laplacianCylindrical:220}, …, from the painful expansion of
\( \spacegrad \lr{ \spacegrad \cdot \BA } - \spacegrad \cross \lr{ \spacegrad \cross \BA } \).
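That painful expansion is exactly the sort of thing a computer algebra system is good for. Here is a sympy sketch that repeats the procedure of this section for one sample field: apply the scalar cylindrical Laplacian to the fixed frame components of \( \BA \) (with the \( \phi \) dependent unit vectors written out), then project back onto \( \rhocap, \phicap, \zcap \) and compare with the boxed result. The sample component functions are assumptions chosen only for the check.

from sympy import symbols, cos, sin, Matrix, simplify

rho, phi, z = symbols('rho phi z', positive=True)

def lap(psi):
    # scalar cylindrical Laplacian, the boxed operator above
    return ((1/rho) * (rho * psi.diff(rho)).diff(rho)
            + (1/rho**2) * psi.diff(phi, 2) + psi.diff(z, 2))

# sample cylindrical components (any smooth choices would do)
A_rho = rho**2 * z * sin(phi)
A_phi = rho * cos(phi)
A_z = rho * z

rhocap = Matrix([cos(phi), sin(phi), 0])
phicap = Matrix([-sin(phi), cos(phi), 0])
zcap = Matrix([0, 0, 1])

A = rhocap * A_rho + phicap * A_phi + zcap * A_z   # fixed frame components of A
lapA = A.applyfunc(lap)                            # Laplacian applied componentwise

print(simplify(rhocap.dot(lapA) - (lap(A_rho) - A_rho/rho**2 - (2/rho**2)*A_phi.diff(phi))))  # 0
print(simplify(phicap.dot(lapA) - (lap(A_phi) - A_phi/rho**2 + (2/rho**2)*A_rho.diff(phi))))  # 0
print(simplify(zcap.dot(lapA) - lap(A_z)))                                                    # 0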

Does the divergence and curl uniquely determine the vector?

September 30, 2016 math and physics play

[Click here for a PDF of this post with nicer formatting]

A problem posed in the ece1228 problem set was the following

Helmholtz theorem.

Prove the first Helmholtz’s theorem, i.e. if vector \(\BM\) is defined by its divergence

\begin{equation}\label{eqn:emtProblemSet1Problem5:20}
\spacegrad \cdot \BM = s
\end{equation}

and its curl
\begin{equation}\label{eqn:emtProblemSet1Problem5:40}
\spacegrad \cross \BM = \BC
\end{equation}

within a region and its normal component \( \BM_{\textrm{n}} \) over the boundary, then \( \BM \) is uniquely specified.

Solution.

This problem screams for an attempt using Geometric Algebra techniques, since
the gradient of this vector can be written as a single even grade multivector

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:60}
\begin{aligned}
\spacegrad \BM
&= \spacegrad \cdot \BM + I \spacegrad \cross \BM \\
&= s + I \BC.
\end{aligned}
\end{equation}

Observe that the Laplacian of \( \BM \) is vector valued

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:400}
\spacegrad^2 \BM
= \spacegrad s + I \spacegrad \BC.
\end{equation}

This means that \( \spacegrad \BC \) must be a bivector \( \spacegrad \BC = \spacegrad \wedge \BC \), or that \( \BC \) has zero divergence

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:420}
\spacegrad \cdot \BC = 0.
\end{equation}

This required constraint on \( \BC \) will show up in subsequent analysis. An equivalent problem to the one posed
is to show that the even grade multivector equation \( \spacegrad \BM = s + I \BC \) has an inverse given the constraint
specified by \ref{eqn:emtProblemSet1Problem5AppendixGA:420}.
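Note that the constraint \ref{eqn:emtProblemSet1Problem5AppendixGA:420} is automatic whenever \( \BC \) really is the curl of something, since the divergence of a curl vanishes. A small sympy check of that identity, with arbitrary (undetermined) components, illustration only:

from sympy import symbols, Function, simplify

x, y, z = symbols('x y z')
Mx, My, Mz = (Function(name)(x, y, z) for name in ('Mx', 'My', 'Mz'))

# C = curl M, componentwise
Cx = Mz.diff(y) - My.diff(z)
Cy = Mx.diff(z) - Mz.diff(x)
Cz = My.diff(x) - Mx.diff(y)

print(simplify(Cx.diff(x) + Cy.diff(y) + Cz.diff(z)))   # 0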

Inverting the gradient equation.

The Green’s function for the gradient can be found in [1], where it is used to generalize the Cauchy integral equations to higher dimensions.

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:80}
\begin{aligned}
G(\Bx, \Bx') &= \inv{4 \pi} \frac{ \Bx - \Bx' }{\Abs{\Bx - \Bx'}^3} \\
\spacegrad G(\Bx, \Bx') &= \spacegrad \cdot G(\Bx, \Bx') = \delta(\Bx - \Bx') = -\spacegrad' G(\Bx, \Bx').
\end{aligned}
\end{equation}
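Away from the singular point this Green’s function has neither divergence nor curl, which is what lets us write \( \spacegrad G = \spacegrad \cdot G \) above; the delta function lives only at \( \Bx = \Bx' \). A quick sympy check, placing \( \Bx' \) at the origin for simplicity (an assumption made only for this sketch):

from sympy import symbols, sqrt, pi, simplify, Matrix

x, y, z = symbols('x y z', positive=True)
r = sqrt(x**2 + y**2 + z**2)            # with x' placed at the origin
G = Matrix([x, y, z]) / (4 * pi * r**3)

div_G = G[0].diff(x) + G[1].diff(y) + G[2].diff(z)
curl_G = Matrix([G[2].diff(y) - G[1].diff(z),
                 G[0].diff(z) - G[2].diff(x),
                 G[1].diff(x) - G[0].diff(y)])

print(simplify(div_G))    # 0 away from the origin
print(simplify(curl_G))   # zero vector, so grad G has no bivector part there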

The inversion equation is an application of the Fundamental Theorem of (Geometric) Calculus, with the gradient operating bidirectionally on the Green’s function and the vector function

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:100}
\begin{aligned}
\oint_{\partial V} G(\Bx, \Bx') d^2 \Bx' \BM(\Bx')
&=
\int_V G(\Bx, \Bx') d^3 \Bx' \lrspacegrad' \BM(\Bx') \\
&=
\int_V d^3 \Bx' (G(\Bx, \Bx') \lspacegrad') \BM(\Bx')
+
\int_V d^3 \Bx' G(\Bx, \Bx') (\spacegrad' \BM(\Bx')) \\
&=
-\int_V d^3 \Bx' \delta(\Bx - \Bx') \BM(\Bx')
+
\int_V d^3 \Bx' G(\Bx, \Bx') \lr{ s(\Bx') + I \BC(\Bx') } \\
&=
-I \BM(\Bx)
+
\inv{4 \pi} \int_V d^3 \Bx' \frac{ \Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \lr{ s(\Bx') + I \BC(\Bx') }.
\end{aligned}
\end{equation}

The integrals are in terms of the primed coordinates so that the end result is a function of \( \Bx \). To rearrange for \( \BM \), let \( d^3 \Bx' = I dV' \), and \( d^2 \Bx' \ncap(\Bx') = I dA' \), then right multiply with the pseudoscalar \( I \), noting that in \R{3} the pseudoscalar commutes with all grades

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:440}
\begin{aligned}
\BM(\Bx)
&=
I \oint_{\partial V} G(\Bx, \Bx') I dA' \ncap \BM(\Bx')
-
I \inv{4 \pi} \int_V I dV' \frac{ \Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \lr{ s(\Bx') + I \BC(\Bx') } \\
&=
-\oint_{\partial V} dA' G(\Bx, \Bx') \ncap \BM(\Bx')
+
\inv{4 \pi} \int_V dV' \frac{ \Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \lr{ s(\Bx') + I \BC(\Bx') }.
\end{aligned}
\end{equation}

This can be decomposed into a vector and a trivector equation. Let \( \Br = \Bx - \Bx' = r \rcap \), and note that

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:500}
\begin{aligned}
\gpgradeone{ \rcap I \BC }
&=
\gpgradeone{ I \rcap \BC } \\
&=
I \rcap \wedge \BC \\
&=
-\rcap \cross \BC,
\end{aligned}
\end{equation}

so this pair of equations can be written as

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:520}
\begin{aligned}
\BM(\Bx)
&=
-\inv{4 \pi} \oint_{\partial V} dA' \frac{\gpgradeone{ \rcap \ncap \BM(\Bx') }}{r^2}
+
\inv{4 \pi} \int_V dV' \lr{
\frac{\rcap}{r^2} s(\Bx') -
\frac{\rcap}{r^2} \cross \BC(\Bx') } \\
0
&=
-\inv{4 \pi} \oint_{\partial V} dA' \frac{\rcap}{r^2} \wedge \ncap \wedge \BM(\Bx')
+
\frac{I}{4 \pi} \int_V dV' \frac{ \rcap \cdot \BC(\Bx') }{r^2}.
\end{aligned}
\end{equation}

Trivector grades.

Consider the last integral in the pseudoscalar equation above. Since we expect no pseudoscalar components, this must be zero, or cancel perfectly. It’s not obvious that this is the case, but a transformation to a surface integral shows the constraints required for that to be the case. To do so note

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:540}
\begin{aligned}
\spacegrad \inv{\Abs{\Bx - \Bx'}}
&= -\spacegrad' \inv{\Abs{\Bx - \Bx'}} \\
&=
-\frac{\Bx - \Bx'}{\Abs{\Bx - \Bx'}^3} \\
&= -\frac{\rcap}{r^2}.
\end{aligned}
\end{equation}

Using this and the chain rule we have

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:560}
\begin{aligned}
\frac{I}{4 \pi} \int_V dV' \frac{ \rcap \cdot \BC(\Bx') }{r^2}
&=
\frac{I}{4 \pi} \int_V dV' \lr{ \spacegrad' \inv{ r } } \cdot \BC(\Bx') \\
&=
\frac{I}{4 \pi} \int_V dV' \spacegrad' \cdot \frac{\BC(\Bx')}{r}
-
\frac{I}{4 \pi} \int_V dV' \frac{ \spacegrad' \cdot \BC(\Bx') }{r} \\
&=
\frac{I}{4 \pi} \int_V dV' \spacegrad' \cdot \frac{\BC(\Bx')}{r} \\
&=
\frac{I}{4 \pi} \int_{\partial V} dA' \ncap(\Bx') \cdot \frac{\BC(\Bx')}{r}.
\end{aligned}
\end{equation}

The divergence of \( \BC \) above was killed by recalling the constraint \ref{eqn:emtProblemSet1Problem5AppendixGA:420}. This means that the pseudoscalar term can be rewritten entirely as a surface integral, and eventually reduced to a single triple product

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:580}
\begin{aligned}
0
&=
-\frac{I}{4 \pi} \oint_{\partial V} dA' \lr{
\frac{\rcap}{r^2} \cdot (\ncap \cross \BM(\Bx'))
-\ncap \cdot \frac{\BC(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA' \ncap \cdot \lr{
\frac{\rcap}{r^2} \cross \BM(\Bx')
+ \frac{\BC(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA' \ncap \cdot \lr{
\lr{ \spacegrad' \inv{r}} \cross \BM(\Bx')
+ \frac{\BC(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA' \ncap \cdot \lr{
\spacegrad' \cross \frac{\BM(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA'
\spacegrad' \cdot
\frac{\BM(\Bx') \cross \ncap}{r}.
\end{aligned}
\end{equation}

Final results.

Assembling things back into a single multivector equation, the complete inversion integral for \( \BM \) is

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:600}
\BM(\Bx)
=
\inv{4 \pi} \oint_{\partial V} dA'
\lr{
\spacegrad' \wedge
\frac{\BM(\Bx') \wedge \ncap}{r}
-\frac{\gpgradeone{ \rcap \ncap \BM(\Bx') }}{r^2}
}
+
\inv{4 \pi} \int_V dV' \lr{
\frac{\rcap}{r^2} s(\Bx') -
\frac{\rcap}{r^2} \cross \BC(\Bx') }.
\end{equation}

This shows that vector \( \BM \) can be recovered uniquely from \( s, \BC \) when \( \Abs{\BM}/r^2 \) vanishes on an infinite surface. If we restrict attention to a finite surface, we have to add to the fixed solution a specific solution that depends on the value of \( \BM \) on that surface. The vector portion of that surface integrand contains

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:640}
\begin{aligned}
\gpgradeone{ \rcap \ncap \BM }
&=
\rcap (\ncap \cdot \BM )
+
\rcap \cdot (\ncap \wedge \BM ) \\
&=
\rcap (\ncap \cdot \BM )
+
(\rcap \cdot \ncap) \BM
-
(\rcap \cdot \BM ) \ncap.
\end{aligned}
\end{equation}
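That grade selection identity is easy to verify numerically. The following sketch uses the Pauli matrix representation of the 3D algebra with random vectors; the representation and the random sampling are purely illustration devices, and the identity holds for any vectors, not just unit ones.

import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(a):                          # vector a -> a_k sigma_k
    return a[0]*s1 + a[1]*s2 + a[2]*s3

def grade1(X):                       # Hermitian traceless part = vector grade
    H = (X + X.conj().T) / 2
    return H - (np.trace(H) / 2) * np.eye(2)

rng = np.random.default_rng(0)
r, n, M = (rng.standard_normal(3) for _ in range(3))

lhs = grade1(vec(r) @ vec(n) @ vec(M))
rhs = vec(r * np.dot(n, M)) + np.dot(r, n) * vec(M) - np.dot(r, M) * vec(n)
print(np.allclose(lhs, rhs))         # True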

The constraints required by a zero triple product \( \spacegrad' \cdot (\BM(\Bx') \cross \ncap(\Bx')) \) are complicated on such a general finite surface. Consider instead, for simplicity, the case of a spherical surface, which can be analyzed more easily. In that case the outward normal of the surface centred on the test charge point \( \Bx \) is \( \ncap = -\rcap \). The pseudoscalar integrand is not generally killed unless the divergence of its tangential component on this surface is zero. One way that this can occur is for \( \BM \cross \ncap = 0 \), so that \( -\gpgradeone{ \rcap \ncap \BM } = \BM = (\BM \cdot \ncap) \ncap = \BM_{\textrm{n}} \).

This gives

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:620}
\BM(\Bx)
=
\inv{4 \pi} \oint_{\Abs{\Bx - \Bx'} = r} dA' \frac{\BM_{\textrm{n}}(\Bx')}{r^2}
+
\inv{4 \pi} \int_V dV' \lr{
\frac{\rcap}{r^2} s(\Bx') +
\BC(\Bx') \cross \frac{\rcap}{r^2} },
\end{equation}

or, in terms of potential functions, which is arguably tidier

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:300}
\boxed{
\BM(\Bx)
=
\inv{4 \pi} \oint_{\Abs{\Bx - \Bx'} = r} dA' \frac{\BM_{\textrm{n}}(\Bx')}{r^2}
-\spacegrad \int_V dV' \frac{ s(\Bx')}{ 4 \pi r }
+\spacegrad \cross \int_V dV' \frac{ \BC(\Bx') }{ 4 \pi r }.
}
\end{equation}

Commentary

I attempted this problem in three different ways. My first approach (above) assembled the divergence and curl relations above into a single (Geometric Algebra) multivector gradient equation and applied the vector valued Green’s function for the gradient to invert that equation. That approach logically led from the differential equation for \( \BM \) to the solution for \( \BM \) in terms of \( s \) and \( \BC \). However, this strategy introduced some complexities that make me doubt the correctness of the associated boundary analysis.

Even if the details of the boundary handling in my multivector approach are not correct, I thought that approach was interesting enough to share.

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

Notes for ece1229 antenna theory

February 4, 2015 ece1229

I’ve now posted a first set of notes for the antenna theory course that I am taking this term at UofT.

Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides that match the textbook so closely that there is little value in taking notes that just replicate the text. Instead, I am annotating my copy of the textbook with little details. My usual notes collection for the class will contain musings about details that were unclear, or in some cases, details that were provided in class but are not in the text (and too long to pencil into my book.)

The notes linked above include:

  • Reading notes for chapter 2 (Fundamental Parameters of Antennas) and chapter 3 (Radiation Integrals and Auxiliary Potential Functions) of the class text.
  • Geometric Algebra musings.  How to formulate Maxwell’s equations when magnetic sources are also included (those modeling magnetic dipoles).
  • Some problems for chapter 2 content.

Dual-Maxwell’s (phasor) equations in Geometric Algebra

February 3, 2015 ece1229

[Click here for a PDF of this post with nicer formatting]

These notes repeat (mostly word for word) the previous notes Maxwell’s (phasor) equations in Geometric Algebra. Electric charges and currents have been replaced with magnetic charges and currents, and the appropriate relations modified accordingly.

Section 3.3 of [1], which treats magnetic charges and currents (and no electric charges and currents), demonstrates the required (curl) form for the electric field, and the potential form for the electric field. Not knowing what to name this, I’ll call the associated equations the dual-Maxwell’s equations.

I was wondering how this derivation would proceed using the Geometric Algebra (GA) formalism.

Dual-Maxwell’s equation in GA phasor form.

The dual-Maxwell’s equations, omitting electric charges and currents, are

\begin{equation}\label{eqn:phasorDualMaxwellsGA:20}
\spacegrad \cross \boldsymbol{\mathcal{E}} = -\PD{t}{\boldsymbol{\mathcal{B}}} -\BM
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:40}
\spacegrad \cross \boldsymbol{\mathcal{H}} = \PD{t}{\boldsymbol{\mathcal{D}}}
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:60}
\spacegrad \cdot \boldsymbol{\mathcal{D}} = 0
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:80}
\spacegrad \cdot \boldsymbol{\mathcal{B}} = \rho_m.
\end{equation}

Assuming linear media \( \boldsymbol{\mathcal{B}} = \mu_0
\boldsymbol{\mathcal{H}} \), \( \boldsymbol{\mathcal{D}} = \epsilon_0
\boldsymbol{\mathcal{E}} \), and phasor relationships of the form \(
\boldsymbol{\mathcal{E}} = \textrm{Re} \lr{ \BE(\Br) e^{j \omega t}} \) for the fields and the currents, these reduce to

\begin{equation}\label{eqn:phasorDualMaxwellsGA:100}
\spacegrad \cross \BE = – j \omega \BB – \BM
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:120}
\spacegrad \cross \BB = j \omega \epsilon_0 \mu_0 \BE
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:140}
\spacegrad \cdot \BE = 0
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:160}
\spacegrad \cdot \BB = \rho_m.
\end{equation}

These four equations can be assembled into a single equation form using the GA identities

\begin{equation}\label{eqn:phasorDualMaxwellsGA:200}
\Bf \Bg
= \Bf \cdot \Bg + \Bf \wedge \Bg
= \Bf \cdot \Bg + I \Bf \cross \Bg.
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:220}
I = \xcap \ycap \zcap.
\end{equation}
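Before using these identities, here is a small numeric sanity check of \ref{eqn:phasorDualMaxwellsGA:200} in the Pauli matrix representation of the 3D algebra. The representation and the random test vectors are illustration assumptions only.

import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = s1 @ s2 @ s3                     # pseudoscalar, equal to 1j times the identity

def vec(a):
    return a[0]*s1 + a[1]*s2 + a[2]*s3

rng = np.random.default_rng(1)
f, g = rng.standard_normal(3), rng.standard_normal(3)

lhs = vec(f) @ vec(g)                                    # geometric product f g
rhs = np.dot(f, g) * np.eye(2) + I @ vec(np.cross(f, g)) # f.g + I (f x g)
print(np.allclose(lhs, rhs))                             # True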

The electric and magnetic field equations, respectively, are

\begin{equation}\label{eqn:phasorDualMaxwellsGA:260}
\spacegrad \BE = – \lr{ \BM + j k c \BB} I
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:280}
\spacegrad c \BB = c \rho_m + j k \BE I
\end{equation}

where \( \omega = k c \), and \( 1 = c^2 \epsilon_0 \mu_0 \) have also been used to eliminate some of the mess of constants.

Summing these (first scaling \ref{eqn:phasorDualMaxwellsGA:280} by \( I \)) gives Maxwell’s equation in its GA phasor form

\begin{equation}\label{eqn:phasorDualMaxwellsGA:300}
\boxed{
\lr{ \spacegrad + j k } \lr{ \BE + I c \BB } = \lr{c \rho_m - \BM} I.
}
\end{equation}

Preliminaries. Dual magnetic form of Maxwell’s equations.

The arguments of the text showing that a potential representation for the electric and magnetic fields is possible translate easily into GA. To perform this translation, some duality lemmas are required.

First consider the cross product of two vectors \( \Bx, \By \), and the right handed dual \( -\By I \), a bivector, of one of these vectors. Noting that the Euclidean pseudoscalar \( I \) commutes with all grade multivectors in a Euclidean geometric algebra space, the cross product can be written

\begin{equation}\label{eqn:phasorDualMaxwellsGA:320}
\begin{aligned}
\lr{ \Bx \cross \By }
&=
-I \lr{ \Bx \wedge \By } \\
&=
-I \inv{2} \lr{ \Bx \By – \By \Bx } \\
&=
\inv{2} \lr{ \Bx (-\By I) – (-\By I) \Bx } \\
&=
\Bx \cdot \lr{ -\By I }.
\end{aligned}
\end{equation}

The last step makes use of the fact that the wedge product of a vector and vector is antisymmetric, whereas the dot product (vector grade selection) of a vector and bivector is antisymmetric. Details on grade selection operators and how to characterize symmetric and antisymmetric products of vectors with blades as either dot or wedge products can be found in [3], [2].

Similarly, the dual of the dot product can be written as

\begin{equation}\label{eqn:phasorDualMaxwellsGA:440}
\begin{aligned}
-I \lr{ \Bx \cdot \By }
&=
-I \inv{2} \lr{ \Bx \By + \By \Bx } \\
&=
\inv{2} \lr{ \Bx (-\By I) + (-\By I) \Bx } \\
&=
\Bx \wedge \lr{ -\By I }.
\end{aligned}
\end{equation}

These duality transformations are motivated by the observation that in the GA form of Maxwell’s equation the magnetic field shows up in its dual form, a bivector. Spelled out in terms of the dual magnetic field, those equations are

\begin{equation}\label{eqn:phasorDualMaxwellsGA:360}
\spacegrad \cdot (-\BE I)= – j \omega \BB – \BM
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:380}
\spacegrad \wedge \BH = j \omega \epsilon_0 \BE I
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:400}
\spacegrad \wedge (-\BE I) = 0
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:420}
\spacegrad \cdot \BB = \rho_m.
\end{equation}

Constructing a potential representation.

The starting point of the argument in the text was the observation that the triple product \( \spacegrad \cdot \lr{ \spacegrad \cross \Bx } = 0 \) for any (sufficiently continuous) vector \( \Bx \). This triple product is a completely antisymmetric sum, and the equivalent statement in GA is \( \spacegrad \wedge \spacegrad \wedge \Bx = 0 \) for any vector \( \Bx \). This follows from \( \Ba \wedge \Ba = 0 \), true for any vector \( \Ba \), including the gradient operator \( \spacegrad \), provided those gradients are acting on a sufficiently continuous blade.

In the absence of electric charges,
\ref{eqn:phasorDualMaxwellsGA:400} shows that the divergence of the dual electric field is zero. It is therefore possible to find a potential \( \BF \) such that

\begin{equation}\label{eqn:phasorDualMaxwellsGA:460}
-\epsilon_0 \BE I = \spacegrad \wedge \BF.
\end{equation}

Substituting this into \ref{eqn:phasorDualMaxwellsGA:380} gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:480}
\spacegrad \wedge \lr{ \BH + j \omega \BF } = 0.
\end{equation}

This relation equates a bivector to zero, and will be satisfied if

\begin{equation}\label{eqn:phasorDualMaxwellsGA:500}
\BH + j \omega \BF = -\spacegrad \phi_m,
\end{equation}

for some scalar \( \phi_m \). Unlike the \( -\epsilon_0 \BE I = \spacegrad \wedge \BF \) solution to \ref{eqn:phasorDualMaxwellsGA:400}, the grade of \( \phi_m \) is fixed by the requirement that \( \BH + j \omega \BF \) is grade one (a vector), so a solution of the form \( \BH + j \omega \BF = \spacegrad \wedge \psi \), for a higher grade blade \( \psi \), would not work, despite satisfying the condition \( \spacegrad \wedge \spacegrad \wedge \psi = 0 \).

Substitution of \ref{eqn:phasorDualMaxwellsGA:500} and \ref{eqn:phasorDualMaxwellsGA:460} into \ref{eqn:phasorDualMaxwellsGA:360} gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:520}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \wedge \BF } &= -\epsilon_0 \BM - j \omega \epsilon_0 \mu_0 \lr{ -\spacegrad \phi_m -j \omega \BF } \\
\spacegrad^2 \BF - \spacegrad \lr{\spacegrad \cdot \BF} &= -\epsilon_0 \BM + j \omega \epsilon_0 \mu_0 \spacegrad \phi_m - \omega^2 \epsilon_0 \mu_0 \BF.
\end{aligned}
\end{equation}

Rearranging gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:540}
\spacegrad^2 \BF + k^2 \BF = -\epsilon_0 \BM + \spacegrad \lr{ \spacegrad \cdot \BF + j \frac{k}{c} \phi_m }.
\end{equation}

The fields \( \BF \) and \( \phi_m \) are assumed to be phasors, say \( \boldsymbol{\mathcal{A}} = \textrm{Re} \BF e^{j k c t} \) and \( \varphi = \textrm{Re} \phi_m e^{j k c t} \). Grouping the scalar and vector potentials into the standard four vector form
\( F^\mu = \lr{\phi_m/c, \BF} \), and expanding the Lorentz gauge condition

\begin{equation}\label{eqn:phasorDualMaxwellsGA:580}
\begin{aligned}
0
&= \partial_\mu \lr{ F^\mu e^{j k c t}} \\
&= \partial_a \lr{ F^a e^{j k c t}} + \inv{c}\PD{t}{} \lr{ \frac{\phi_m}{c}
e^{j k c t}} \\
&= \spacegrad \cdot \BF e^{j k c t} + \inv{c} j k \phi_m e^{j k c t} \\
&= \lr{ \spacegrad \cdot \BF + j k \phi_m/c } e^{j k c t},
\end{aligned}
\end{equation}

shows that in
\ref{eqn:phasorDualMaxwellsGA:540}
the quantity in braces is in fact the Lorentz gauge condition, so in the Lorentz gauge, the vector potential satisfies a non-homogeneous Helmholtz equation.

\begin{equation}\label{eqn:phasorDualMaxwellsGA:550}
\boxed{
\spacegrad^2 \BF + k^2 \BF = -\epsilon_0 \BM.
}
\end{equation}

Maxwell’s equation in four vector form.

The four vector form of Maxwell’s equation follows from \ref{eqn:phasorDualMaxwellsGA:300} after pre-multiplying by \( \gamma^0 \).

With

\begin{equation}\label{eqn:phasorDualMaxwellsGA:620}
F = F^\mu \gamma_\mu = \lr{ \phi_m/c, \BF }
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:640}
G = \grad \wedge F = - \epsilon_0 \lr{ \BE + c \BB I } I
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:660}
\grad = \gamma^\mu \partial_\mu = \gamma^0 \lr{ \spacegrad + j k }
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:680}
M = M^\mu \gamma_\mu = \lr{ c \rho_m, \BM },
\end{equation}

Maxwell’s equation is

\begin{equation}\label{eqn:phasorDualMaxwellsGA:720}
\boxed{
\grad G = -\epsilon_0 M.
}
\end{equation}

Here \( \setlr{ \gamma_\mu } \) is used as the basis of the four vector Minkowski space, with \( \gamma_0^2 = -\gamma_k^2 = 1 \) (i.e. \(\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu \)), and \( \gamma_a \gamma_0 = \sigma_a \) where \( \setlr{ \sigma_a} \) is the Pauli basis (i.e. standard basis vectors for \R{3}).

Let’s demonstrate this, one piece at a time. Observe that the action of the spacetime gradient on a phasor, assuming that all time dependence is in the exponential, is

\begin{equation}\label{eqn:phasorDualMaxwellsGA:740}
\begin{aligned}
\gamma^\mu \partial_\mu \lr{ \psi e^{j k c t} }
&=
\lr{ \gamma^a \partial_a + \gamma_0 \partial_{c t} } \lr{ \psi e^{j k c t} }
\\
&=
\gamma_0 \lr{ \gamma_0 \gamma^a \partial_a + j k } \lr{ \psi e^{j k c t} } \\
&=
\gamma_0 \lr{ \sigma_a \partial_a + j k } \psi e^{j k c t} \\
&=
\gamma_0 \lr{ \spacegrad + j k } \psi e^{j k c t}
\end{aligned}
\end{equation}

This allows the operator identification of \ref{eqn:phasorDualMaxwellsGA:660}. The four current portion of the equation comes from

\begin{equation}\label{eqn:phasorDualMaxwellsGA:760}
\begin{aligned}
c \rho_m - \BM
&=
\gamma_0 \lr{ \gamma_0 c \rho_m – \gamma_0 \gamma_a \gamma_0 M^a } \\
&=
\gamma_0 \lr{ \gamma_0 c \rho_m + \gamma_a M^a } \\
&=
\gamma_0 \lr{ \gamma_\mu M^\mu } \\
&= \gamma_0 M.
\end{aligned}
\end{equation}

Taking the curl of the four potential gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:780}
\begin{aligned}
\grad \wedge F
&=
\lr{ \gamma^a \partial_a + \gamma_0 j k } \wedge \lr{ \gamma_0 \phi_m/c +
\gamma_b F^b } \\
&=
- \sigma_a \partial_a \phi_m/c + \gamma^a \wedge \gamma_b \partial_a F^b - j k
\sigma_b F^b \\
&=
- \sigma_a \partial_a \phi_m/c + \sigma_a \wedge \sigma_b \partial_a F^b - j k
\sigma_b F^b \\
&= \inv{c} \lr{ - \spacegrad \phi_m - j \omega \BF + c \spacegrad \wedge \BF }
\\
&= \epsilon_0 \lr{ c \BB - \BE I } \\
&= - \epsilon_0 \lr{ \BE + c \BB I } I.
\end{aligned}
\end{equation}

Substituting all of these into Maxwell’s \ref{eqn:phasorDualMaxwellsGA:300} gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:800}
-\frac{\gamma_0}{\epsilon_0}\grad G = \gamma_0 M,
\end{equation}

which recovers \ref{eqn:phasorDualMaxwellsGA:720} as desired.

Helmholtz equation directly from the GA form.

It is easier to find \ref{eqn:phasorDualMaxwellsGA:550} from the GA form of Maxwell’s \ref{eqn:phasorDualMaxwellsGA:720} than from the traditional curl and divergence equations. Note that

\begin{equation}\label{eqn:phasorDualMaxwellsGA:820}
\begin{aligned}
\grad G
&=
\grad \lr{ \grad \wedge F } \\
&=
\grad \cdot \lr{ \grad \wedge F }
+
\grad \wedge \lr{ \grad \wedge F } \\
&=
\grad^2 F - \grad \lr{ \grad \cdot F },
\end{aligned}
\end{equation}

however, the Lorentz gauge condition \( \partial_\mu F^\mu = \grad \cdot F = 0 \) kills the latter term above. This leaves

\begin{equation}\label{eqn:phasorDualMaxwellsGA:840}
\begin{aligned}
\grad G
&=
\grad^2 F \\
&=
\gamma_0 \lr{ \spacegrad + j k }
\gamma_0 \lr{ \spacegrad + j k } F \\
&=
\gamma_0^2 \lr{ -\spacegrad + j k }
\lr{ \spacegrad + j k } F \\
&=
-\lr{ \spacegrad^2 + k^2 } F = -\epsilon_0 M.
\end{aligned}
\end{equation}

The timelike component of this gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:860}
\lr{ \spacegrad^2 + k^2 } \phi_m = -\epsilon_0 c \rho_m,
\end{equation}

and the spacelike components give

\begin{equation}\label{eqn:phasorDualMaxwellsGA:880}
\lr{ \spacegrad^2 + k^2 } \BF = -\epsilon_0 \BM,
\end{equation}

recovering \ref{eqn:phasorDualMaxwellsGA:550} as desired.

References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley \& Sons, 3rd edition, 2005.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.
