
A comparison of Geometric Algebra electrodynamic potential methods

January 7, 2017 math and physics play


Motivation

Geometric algebra (GA) allows for a compact description of Maxwell’s equations in either an explicit 3D representation or an STA (SpaceTime Algebra [2]) representation. In both the 3D GA and STA representations, Maxwell’s equations take the form

\begin{equation}\label{eqn:potentialMethods:1280}
L \boldsymbol{\mathcal{F}} = J,
\end{equation}

where \( J \) represents the sources, \( L \) is a multivector gradient operator that includes partial derivative operator components for each of the space and time coordinates, and

\begin{equation}\label{eqn:potentialMethods:1020}
\boldsymbol{\mathcal{F}} = \boldsymbol{\mathcal{E}} + \eta I \boldsymbol{\mathcal{H}},
\end{equation}

is an electromagnetic field multivector, \( I = \Be_1 \Be_2 \Be_3 \) is the \R{3} pseudoscalar, and \( \eta = \sqrt{\mu/\epsilon} \) is the impedance of the media.

When Maxwell’s equations are extended to include magnetic sources in addition to conventional electric sources (as used in antenna-theory [1] and microwave engineering [3]), they take the form

\begin{equation}\label{eqn:chapter3Notes:20}
\spacegrad \cross \boldsymbol{\mathcal{E}} = - \boldsymbol{\mathcal{M}} - \PD{t}{\boldsymbol{\mathcal{B}}}
\end{equation}
\begin{equation}\label{eqn:chapter3Notes:40}
\spacegrad \cross \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{J}} + \PD{t}{\boldsymbol{\mathcal{D}}}
\end{equation}
\begin{equation}\label{eqn:chapter3Notes:60}
\spacegrad \cdot \boldsymbol{\mathcal{D}} = q_{\textrm{e}}
\end{equation}
\begin{equation}\label{eqn:chapter3Notes:80}
\spacegrad \cdot \boldsymbol{\mathcal{B}} = q_{\textrm{m}}.
\end{equation}

The corresponding GA Maxwell equations in their respective 3D and STA forms are

\begin{equation}\label{eqn:potentialMethods:300}
\lr{ \spacegrad + \inv{v} \PD{t}{} } \boldsymbol{\mathcal{F}}
=
\eta
\lr{ v q_{\textrm{e}} - \boldsymbol{\mathcal{J}} }
+ I \lr{ v q_{\textrm{m}} - \boldsymbol{\mathcal{M}} }
\end{equation}
\begin{equation}\label{eqn:potentialMethods:320}
\grad \boldsymbol{\mathcal{F}} = \eta J - I M,
\end{equation}

where the wave group velocity in the medium is \( v = 1/\sqrt{\epsilon\mu} \), and the medium is isotropic with
\( \boldsymbol{\mathcal{B}} = \mu \boldsymbol{\mathcal{H}} \), and \( \boldsymbol{\mathcal{D}} = \epsilon \boldsymbol{\mathcal{E}} \). In the STA representation, \( \grad, J, M \) are all four-vectors, the specific meanings of which will be spelled out below.

How to determine the potential equations and the field representation using the conventional distinct Maxwell’s equations \ref{eqn:chapter3Notes:20}, … is well known. The basic procedure is to consider the electric and magnetic sources in turn, and to observe that in each case one of the electric or magnetic fields must have a curl representation. The STA approach is similar, except that the field must have a four-curl representation for each type of source. In the explicit 3D GA formalism
\ref{eqn:potentialMethods:300} it is not as obvious how to formulate a natural potential representation. There is no longer a reason to set any component of the field equal to a curl, and the four-curl representation of the STA approach is awkward. Additionally, it is not obvious what form gauge invariance takes in the 3D GA representation.

Ideas explored in these notes

  • GA representation of Maxwell’s equations including magnetic sources.
  • STA GA formalism for Maxwell’s equations including magnetic sources.
  • Explicit form of the GA potential representation including both electric and magnetic sources.
  • Demonstration of exactly how the 3D and STA potentials are related.
  • Explore the structure of gauge transformations when magnetic sources are included.
  • Explore the structure of gauge transformations in the 3D GA formalism.
  • Specify the form of the Lorentz gauge in the 3D GA formalism.

Traditional vector algebra

No magnetic sources

When magnetic sources are omitted, it follows from \ref{eqn:chapter3Notes:80} that there is some \( \boldsymbol{\mathcal{A}}^{\mathrm{e}} \) for which

\begin{equation}\label{eqn:potentialMethods:20}
\boxed{
\boldsymbol{\mathcal{B}} = \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}},
}
\end{equation}

Substitution into Faraday’s law \ref{eqn:chapter3Notes:20} gives

\begin{equation}\label{eqn:potentialMethods:40}
\spacegrad \cross \boldsymbol{\mathcal{E}} = - \PD{t}{}\lr{ \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}} },
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:60}
\spacegrad \cross \lr{ \boldsymbol{\mathcal{E}} + \PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } } = 0.
\end{equation}

A gradient representation of this curled quantity, say \( -\spacegrad \phi \), will provide the required zero

\begin{equation}\label{eqn:potentialMethods:80}
\boxed{
\boldsymbol{\mathcal{E}} = -\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }.
}
\end{equation}
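
As a quick sanity check, this pair of potential representations can be verified to satisfy Faraday's law \ref{eqn:chapter3Notes:20} (with \( \boldsymbol{\mathcal{M}} = 0 \)) identically. Here is a minimal symbolic sketch of that check using sympy, where phi and A1, A2, A3 are arbitrary placeholder component functions:

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
phi = sp.Function('phi')(x, y, z, t)
A = sp.Matrix([sp.Function('A%d' % i)(x, y, z, t) for i in (1, 2, 3)])

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

E = -grad(phi) - sp.diff(A, t)   # E = -grad phi - dA/dt
B = curl(A)                      # B = curl A

# Faraday's law with no magnetic sources: curl E + dB/dt should vanish identically.
print((curl(E) + sp.diff(B, t)).applyfunc(sp.simplify))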

The final two Maxwell equations yield

\begin{equation}\label{eqn:potentialMethods:100}
\begin{aligned}
-\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \spacegrad \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} } &= \mu \lr{ \boldsymbol{\mathcal{J}} + \epsilon \PD{t}{} \lr{ -\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } } } \\
\spacegrad \cdot \lr{ -\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } } &= q_e/\epsilon,
\end{aligned}
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:120}
\boxed{
\begin{aligned}
\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{e}} - \inv{v^2} \PDSq{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
- \spacegrad \lr{
\inv{v^2} \PD{t}{\phi}
+\spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}}
}
&= -\mu \boldsymbol{\mathcal{J}} \\
\spacegrad^2 \phi + \PD{t}{} \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} } &= -q_e/\epsilon.
\end{aligned}
}
\end{equation}

Note that the Lorenz condition \( \PDi{t}{(\phi/v^2)} + \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} = 0 \) can be imposed to decouple these, leaving non-homogeneous wave equations for the vector and scalar potentials respectively.
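
Explicitly, with that condition imposed, \ref{eqn:potentialMethods:120} reduces to

\begin{equation}
\begin{aligned}
\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{e}} - \inv{v^2} \PDSq{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} } &= -\mu \boldsymbol{\mathcal{J}} \\
\spacegrad^2 \phi - \inv{v^2} \PDSq{t}{ \phi } &= -q_e/\epsilon.
\end{aligned}
\end{equation}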

No electric sources

Without electric sources, a curl representation of the electric flux density \( \boldsymbol{\mathcal{D}} \) can be assumed, which satisfies Gauss’s law

\begin{equation}\label{eqn:potentialMethods:140}
\boxed{
\boldsymbol{\mathcal{D}} = - \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{m}}.
}
\end{equation}

Substitution into the Ampère-Maxwell law \ref{eqn:chapter3Notes:40} gives
\begin{equation}\label{eqn:potentialMethods:160}
\spacegrad \cross \lr{ \boldsymbol{\mathcal{H}} + \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}} } = 0.
\end{equation}

This is satisfied with any gradient, say, \( -\spacegrad \phi_m \), providing a potential representation for the magnetic field

\begin{equation}\label{eqn:potentialMethods:180}
\boxed{
\boldsymbol{\mathcal{H}} = -\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}.
}
\end{equation}

The remaining Maxwell equations provide the required constraints on the potentials

\begin{equation}\label{eqn:potentialMethods:220}
-\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{m}} + \spacegrad \lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } = -\epsilon
\lr{
-\boldsymbol{\mathcal{M}} - \mu \PD{t}{}
\lr{
-\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}
}
}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:240}
\spacegrad \cdot
\lr{
-\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}
}
= \inv{\mu} q_m,
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:260}
\boxed{
\begin{aligned}
\spacegrad^2 \boldsymbol{\mathcal{A}}^{\mathrm{m}} - \inv{v^2} \PDSq{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}} - \spacegrad \lr{ \inv{v^2} \PD{t}{\phi_m} + \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } &= -\epsilon \boldsymbol{\mathcal{M}} \\
\spacegrad^2 \phi_m + \PD{t}{}\lr{ \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} } &= -\inv{\mu} q_m.
\end{aligned}
}
\end{equation}

The general solution to Maxwell’s equations is therefore
\begin{equation}\label{eqn:potentialMethods:280}
\begin{aligned}
\boldsymbol{\mathcal{E}} &=
-\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
- \inv{\epsilon} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
\boldsymbol{\mathcal{H}} &=
\inv{\mu} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}}
-\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}},
\end{aligned}
\end{equation}

subject to the constraints \ref{eqn:potentialMethods:120} and \ref{eqn:potentialMethods:260}.

Potential operator structure

Knowing that there is a simple underlying structure to the potential representation of the electromagnetic field in the STA formalism inspires the question of whether that structure can be found directly using the scalar and vector potentials determined above.

Specifically, what is the multivector representation \ref{eqn:potentialMethods:1020} of the electromagnetic field in terms of all the individual potential variables, and can an underlying structure for that field representation be found? The composite field is

\begin{equation}\label{eqn:potentialMethods:280b}
\boldsymbol{\mathcal{F}}
=
-\spacegrad \phi -\PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
- \inv{\epsilon} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
+ I \eta
\lr{
\inv{\mu} \spacegrad \cross \boldsymbol{\mathcal{A}}^{\mathrm{e}}
-\spacegrad \phi_m - \PD{t}{\boldsymbol{\mathcal{A}}^{\mathrm{m}}}
}.
\end{equation}

Can this be factored into a multivector operator and multivector potentials? Expanding the cross products provides some direction

\begin{equation}\label{eqn:potentialMethods:1040}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
- \PD{t}{ \boldsymbol{\mathcal{A}}^{\mathrm{e}} }
- \eta \PD{t}{I \boldsymbol{\mathcal{A}}^{\mathrm{m}}}
- \spacegrad \lr{ \phi + \eta I \phi_m } \\
&\quad + \frac{\eta}{2 \mu} \lr{ \rspacegrad \boldsymbol{\mathcal{A}}^{\mathrm{e}} - \boldsymbol{\mathcal{A}}^{\mathrm{e}} \lspacegrad }
+ \frac{1}{2 \epsilon} \lr{ \rspacegrad I \boldsymbol{\mathcal{A}}^{\mathrm{m}} - I \boldsymbol{\mathcal{A}}^{\mathrm{m}} \lspacegrad }.
\end{aligned}
\end{equation}

Observe that the
gradient and the time partials can be grouped together

\begin{equation}\label{eqn:potentialMethods:1060}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
- \PD{t}{ } \lr{\boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \boldsymbol{\mathcal{A}}^{\mathrm{m}}}
- \spacegrad \lr{ \phi + \eta I \phi_m }
+ \frac{v}{2} \lr{ \rspacegrad (\boldsymbol{\mathcal{A}}^{\mathrm{e}} + I \eta \boldsymbol{\mathcal{A}}^{\mathrm{m}}) - (\boldsymbol{\mathcal{A}}^{\mathrm{e}} + I \eta \boldsymbol{\mathcal{A}}^{\mathrm{m}}) \lspacegrad } \\
&=
\inv{2} \lr{
\lr{ \rspacegrad - \inv{v} {\stackrel{ \rightarrow }{\partial_t}} } \lr{ v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}} }
-
\lr{ v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}}} \lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
} \\
&\quad + \inv{2} \lr{
\lr{ \rspacegrad - \inv{v} {\stackrel{ \rightarrow }{\partial_t}} } \lr{ -\phi - \eta I \phi_m }
- \lr{ \phi + \eta I \phi_m } \lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
}
,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:potentialMethods:1080}
\boxed{
\boldsymbol{\mathcal{F}}
=
\inv{2} \Biglr{
\lr{ \rspacegrad - \inv{v} {\stackrel{ \rightarrow }{\partial_t}} }
\lr{
- \phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta I v \boldsymbol{\mathcal{A}}^{\mathrm{m}}
- \eta I \phi_m
}
-
\lr{
\phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta I v \boldsymbol{\mathcal{A}}^{\mathrm{m}}
+ \eta I \phi_m
}
\lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
}
.
}
\end{equation}

There’s a conjugate structure to the potential on each side of the curl operation where we see a sign change for the scalar and pseudoscalar elements only. The reason for this becomes more clear in the STA formalism.
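
For example, writing \( A = -\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}} - \eta I \phi_m \) for the first potential factor (the same multivector potential that reappears below), and \( \bar{A} \) for that multivector with its scalar and pseudoscalar grades negated (a notational convenience used only for this observation), \ref{eqn:potentialMethods:1080} can be written compactly as

\begin{equation}
\boldsymbol{\mathcal{F}}
=
\inv{2} \lr{
\lr{ \rspacegrad - \inv{v} {\stackrel{ \rightarrow }{\partial_t}} } A
-
\bar{A} \lr{ \lspacegrad + \inv{v} {\stackrel{ \leftarrow }{\partial_t}} }
}.
\end{equation}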

Potentials in the STA formalism.

Maxwell’s equation in its explicit 3D form \ref{eqn:potentialMethods:300} can be
converted to STA form, by introducing a four-vector basis \( \setlr{ \gamma_\mu } \), where the spatial basis
\( \setlr{ \Be_k = \gamma_k \gamma_0 } \)
is expressed in terms of the Dirac basis \( \setlr{ \gamma_\mu } \).
By multiplying from the left with \( \gamma_0 \), an STA form of Maxwell’s equation
\ref{eqn:potentialMethods:320}
is obtained,
where
\begin{equation}\label{eqn:potentialMethods:340}
\begin{aligned}
J &= \gamma^\mu J_\mu = ( v q_e, \boldsymbol{\mathcal{J}} ) \\
M &= \gamma^\mu M_\mu = ( v q_m, \boldsymbol{\mathcal{M}} ) \\
\grad &= \gamma^\mu \partial_\mu = ( (1/v) \partial_t, \spacegrad ) \\
I &= \gamma_0 \gamma_1 \gamma_2 \gamma_3,
\end{aligned}
\end{equation}

Here the metric choice is \( \gamma_0^2 = 1 = -\gamma_k^2 \). Note that in this representation the electromagnetic field \( \boldsymbol{\mathcal{F}} = \boldsymbol{\mathcal{E}} + \eta I \boldsymbol{\mathcal{H}} \) is a bivector, not a multivector as it is in the explicit (frame dependent) 3D representation of \ref{eqn:potentialMethods:300}.

A potential representation can be obtained as before by considering electric and magnetic sources in sequence and using superposition to assemble a complete potential.

No magnetic sources

Without magnetic sources, Maxwell’s equation splits into vector and trivector terms of the form

\begin{equation}\label{eqn:potentialMethods:380}
\grad \cdot \boldsymbol{\mathcal{F}} = \eta J
\end{equation}
\begin{equation}\label{eqn:potentialMethods:400}
\grad \wedge \boldsymbol{\mathcal{F}} = 0.
\end{equation}

A four-vector curl representation of the field will satisfy \ref{eqn:potentialMethods:400} allowing an immediate potential solution

\begin{equation}\label{eqn:potentialMethods:560}
\boxed{
\begin{aligned}
&\boldsymbol{\mathcal{F}} = \grad \wedge {A^{\mathrm{e}}} \\
&\grad^2 {A^{\mathrm{e}}} - \grad \lr{ \grad \cdot {A^{\mathrm{e}}} } = \eta J.
\end{aligned}
}
\end{equation}

This can be put into correspondence with \ref{eqn:potentialMethods:120} by noting that

\begin{equation}\label{eqn:potentialMethods:460}
\begin{aligned}
\grad^2 &= (\gamma^\mu \partial_\mu) \cdot (\gamma^\nu \partial_\nu) = \inv{v^2} \partial_{tt} - \spacegrad^2 \\
\gamma_0 {A^{\mathrm{e}}} &= \gamma_0 \gamma^\mu {A^{\mathrm{e}}}_\mu = {A^{\mathrm{e}}}_0 + \Be_k {A^{\mathrm{e}}}_k = {A^{\mathrm{e}}}_0 + \BA^{\mathrm{e}} \\
\gamma_0 \grad &= \gamma_0 \gamma^\mu \partial_\mu = \inv{v} \partial_t + \spacegrad \\
\grad \cdot {A^{\mathrm{e}}} &= \partial_\mu {A^{\mathrm{e}}}^\mu = \inv{v} \partial_t {A^{\mathrm{e}}}_0 - \spacegrad \cdot \BA^{\mathrm{e}},
\end{aligned}
\end{equation}

so multiplying from the left with \( \gamma_0 \) gives

\begin{equation}\label{eqn:potentialMethods:480}
\lr{ \inv{v^2} \partial_{tt} - \spacegrad^2 } \lr{ {A^{\mathrm{e}}}_0 + \BA^{\mathrm{e}} } - \lr{ \inv{v} \partial_t + \spacegrad }\lr{ \inv{v} \partial_t {A^{\mathrm{e}}}_0 - \spacegrad \cdot \BA^{\mathrm{e}} } = \eta( v q_e - \boldsymbol{\mathcal{J}} ),
\end{equation}

or

\begin{equation}\label{eqn:potentialMethods:520}
\lr{ \inv{v^2} \partial_{tt} - \spacegrad^2 } \BA^{\mathrm{e}} - \spacegrad \lr{ \inv{v} \partial_t {A^{\mathrm{e}}}_0 - \spacegrad \cdot \BA^{\mathrm{e}} } = -\eta \boldsymbol{\mathcal{J}}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:540}
\spacegrad^2 {A^{\mathrm{e}}}_0 - \inv{v} \partial_t \lr{ \spacegrad \cdot \BA^{\mathrm{e}} } = -q_e/\epsilon.
\end{equation}

So \( {A^{\mathrm{e}}}_0 = \phi \) and \( -\ifrac{\BA^{\mathrm{e}}}{v} = \boldsymbol{\mathcal{A}}^{\mathrm{e}} \), or

\begin{equation}\label{eqn:potentialMethods:600}
\boxed{
{A^{\mathrm{e}}} = \gamma_0\lr{ \phi - v \boldsymbol{\mathcal{A}}^{\mathrm{e}} }.
}
\end{equation}

No electric sources

Without electric sources, Maxwell’s equation now splits into

\begin{equation}\label{eqn:potentialMethods:640}
\grad \cdot \boldsymbol{\mathcal{F}} = 0
\end{equation}
\begin{equation}\label{eqn:potentialMethods:660}
\grad \wedge \boldsymbol{\mathcal{F}} = -I M.
\end{equation}

Here the dual of an STA curl yields a solution

\begin{equation}\label{eqn:potentialMethods:680}
\boxed{
\boldsymbol{\mathcal{F}} = I ( \grad \wedge {A^{\mathrm{m}}} ).
}
\end{equation}

Substituting this gives

\begin{equation}\label{eqn:potentialMethods:720}
\begin{aligned}
0
&=
\grad \cdot (I ( \grad \wedge {A^{\mathrm{m}}} ) ) \\
&=
\gpgradeone{ \grad I ( \grad \wedge {A^{\mathrm{m}}} ) } \\
&=
-I \grad \wedge ( \grad \wedge {A^{\mathrm{m}}} ).
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:740}
\begin{aligned}
-I M
&=
\grad \wedge (I ( \grad \wedge {A^{\mathrm{m}}} ) ) \\
&=
\gpgradethree{ \grad I ( \grad \wedge {A^{\mathrm{m}}} ) } \\
&=
-I \grad \cdot ( \grad \wedge {A^{\mathrm{m}}} ).
\end{aligned}
\end{equation}

The \( \grad \cdot \boldsymbol{\mathcal{F}} \) relation \ref{eqn:potentialMethods:720} is identically zero as desired, leaving

\begin{equation}\label{eqn:potentialMethods:760}
\boxed{
\grad^2 {A^{\mathrm{m}}} - \grad \lr{ \grad \cdot {A^{\mathrm{m}}} }
=
M.
}
\end{equation}

So the general solution with both electric and magnetic sources is

\begin{equation}\label{eqn:potentialMethods:800}
\boxed{
\boldsymbol{\mathcal{F}} = \grad \wedge {A^{\mathrm{e}}} + I (\grad \wedge {A^{\mathrm{m}}}),
}
\end{equation}

subject to the constraints of \ref{eqn:potentialMethods:560} and \ref{eqn:potentialMethods:760}. As before the four-potential \( {A^{\mathrm{m}}} \) can be put into correspondence with the conventional scalar and vector potentials by left multiplying with \( \gamma_0 \), which gives

\begin{equation}\label{eqn:potentialMethods:820}
\lr{ \inv{v^2} \partial_{tt} - \spacegrad^2 } \lr{ {A^{\mathrm{m}}}_0 + \BA^{\mathrm{m}} } - \lr{ \inv{v} \partial_t + \spacegrad }\lr{ \inv{v} \partial_t {A^{\mathrm{m}}}_0 - \spacegrad \cdot \BA^{\mathrm{m}} } = v q_m - \boldsymbol{\mathcal{M}},
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:860}
\lr{ \inv{v^2} \partial_{tt} - \spacegrad^2 } \BA^{\mathrm{m}} - \spacegrad \lr{ \inv{v} \partial_t {A^{\mathrm{m}}}_0 - \spacegrad \cdot \BA^{\mathrm{m}} } = - \boldsymbol{\mathcal{M}}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:880}
\spacegrad^2 {A^{\mathrm{m}}}_0 - \inv{v} \partial_t \spacegrad \cdot \BA^{\mathrm{m}} = -v q_m.
\end{equation}

Comparing with \ref{eqn:potentialMethods:260} shows that \( {A^{\mathrm{m}}}_0/v = \mu \phi_m \) and \( -\ifrac{\BA^{\mathrm{m}}}{v^2} = \mu \boldsymbol{\mathcal{A}}^{\mathrm{m}} \), or

\begin{equation}\label{eqn:potentialMethods:900}
\boxed{
{A^{\mathrm{m}}} = \gamma_0 \eta \lr{ \phi_m - v \boldsymbol{\mathcal{A}}^{\mathrm{m}} }.
}
\end{equation}

Potential operator structure

Observe that there is an underlying uniform structure of the differential operator that acts on the potential to produce the electromagnetic field. Expressed as a linear operator of the
gradient and the potentials, that is

\( \boldsymbol{\mathcal{F}} = L(\lrgrad, {A^{\mathrm{e}}}, {A^{\mathrm{m}}}) \)

\begin{equation}\label{eqn:potentialMethods:980}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
L(\grad, {A^{\mathrm{e}}}, {A^{\mathrm{m}}}) \\
&= \grad \wedge {A^{\mathrm{e}}} + I (\grad \wedge {A^{\mathrm{m}}}) \\
&=
\inv{2} \lr{ \rgrad {A^{\mathrm{e}}} - {A^{\mathrm{e}}} \lgrad }
+ \frac{I}{2} \lr{ \rgrad {A^{\mathrm{m}}} - {A^{\mathrm{m}}} \lgrad } \\
&=
\inv{2} \lr{ \rgrad {A^{\mathrm{e}}} - {A^{\mathrm{e}}} \lgrad }
+ \frac{1}{2} \lr{ -\rgrad I {A^{\mathrm{m}}} - I {A^{\mathrm{m}}} \lgrad } \\
&=
\inv{2} \lr{ \rgrad ({A^{\mathrm{e}}} -I {A^{\mathrm{m}}}) - ({A^{\mathrm{e}}} + I {A^{\mathrm{m}}}) \lgrad }
,
\end{aligned}
\end{equation}

or
\begin{equation}\label{eqn:potentialMethods:1000}
\boxed{
\boldsymbol{\mathcal{F}}
=
\inv{2} \lr{ \rgrad ({A^{\mathrm{e}}} -I {A^{\mathrm{m}}}) - ({A^{\mathrm{e}}} - I {A^{\mathrm{m}}})^\dagger \lgrad }
.
}
\end{equation}

Observe that \ref{eqn:potentialMethods:1000} can be
put into correspondence with \ref{eqn:potentialMethods:1080} using a factoring of unity \( 1 = \gamma_0 \gamma_0 \)

\begin{equation}\label{eqn:potentialMethods:1100}
\boldsymbol{\mathcal{F}}
=
\inv{2} \lr{ (-\rgrad \gamma_0) (-\gamma_0 ({A^{\mathrm{e}}} -I {A^{\mathrm{m}}})) - (({A^{\mathrm{e}}} + I {A^{\mathrm{m}}}) \gamma_0)(\gamma_0 \lgrad) },
\end{equation}

where

\begin{equation}\label{eqn:potentialMethods:1140}
\begin{aligned}
-\grad \gamma_0
&=
-(\gamma^0 \partial_0 + \gamma^k \partial_k) \gamma_0 \\
&=
-\partial_0 - \gamma^k \gamma_0 \partial_k \\
&=
\spacegrad
-\inv{v} \partial_t
,
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:1160}
\begin{aligned}
\gamma_0 \grad
&=
\gamma_0 (\gamma^0 \partial_0 + \gamma^k \partial_k) \\
&=
\partial_0 - \gamma^k \gamma_0 \partial_k \\
&=
\spacegrad
+ \inv{v} \partial_t
,
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:potentialMethods:1200}
\begin{aligned}
-\gamma_0 ( {A^{\mathrm{e}}} - I {A^{\mathrm{m}}} )
&=
-\gamma_0 \gamma_0 \lr{ \phi -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \lr{ \phi_m - v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } } \\
&=
-\lr{ \phi -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \phi_m - \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}} } \\
&=
- \phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}}
- \eta I \phi_m
\end{aligned}
\end{equation}
\begin{equation}\label{eqn:potentialMethods:1220}
\begin{aligned}
( {A^{\mathrm{e}}} + I {A^{\mathrm{m}}} )\gamma_0
&=
\lr{ \gamma_0 \lr{ \phi -v \boldsymbol{\mathcal{A}}^{\mathrm{e}} } + I \gamma_0 \eta \lr{ \phi_m - v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } } \gamma_0 \\
&=
\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + I \eta \phi_m + I \eta v \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
&=
\phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}}
+ \eta I \phi_m
,
\end{aligned}
\end{equation}

This recovers \ref{eqn:potentialMethods:1080} as desired.

Potentials in the 3D Euclidean formalism

In the conventional scalar plus vector differential representation of Maxwell’s equations \ref{eqn:chapter3Notes:20}…, given electric(magnetic) sources the structure of the electric(magnetic) potential follows from first setting the magnetic(electric) field equal to the curl of a vector potential. The procedure for the STA GA form of Maxwell’s equation was similar, where it was immediately evident that the field could be set to the four-curl of a four-vector potential (or the dual of such a curl for magnetic sources).

In the 3D GA representation, there is no immediate rationale for introducing a curl or the equivalent to a four-curl representation of the field. This can be reconciled by recognizing that the fact that the field (or a component of it) may be represented by a curl is not actually fundamental. Instead, observe that the two sided gradient action on a potential that generates the electromagnetic field in the STA representation of \ref{eqn:potentialMethods:1000} serves to select the grade two component of the product of the gradient and the multivector potential \( {A^{\mathrm{e}}} - I {A^{\mathrm{m}}} \), and that this can in fact be written as
a single sided gradient operation on a potential, provided the multivector product is filtered with a four-bivector grade selection operation

\begin{equation}\label{eqn:potentialMethods:1240}
\boxed{
\boldsymbol{\mathcal{F}} = \gpgradetwo{ \grad \lr{ {A^{\mathrm{e}}} - I {A^{\mathrm{m}}} } }.
}
\end{equation}

Similarly, it can be observed that the
specific function of the conjugate structure in the two sided potential representation of
\ref{eqn:potentialMethods:1080}
is to discard all the scalar and pseudoscalar grades in the multivector product. This means that a single sided potential can also be used, provided it is wrapped in a grade selection operation

\begin{equation}\label{eqn:potentialMethods:1260}
\boxed{
\boldsymbol{\mathcal{F}} =
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} }
\lr{
- \phi
+ v \boldsymbol{\mathcal{A}}^{\mathrm{e}}
+ \eta I v \boldsymbol{\mathcal{A}}^{\mathrm{m}}
- \eta I \phi_m
} }{1,2}.
}
\end{equation}

It is this grade selection operation that is really the fundamental defining action in the potential representation of both the STA and conventional 3D forms of Maxwell’s equations. So, given Maxwell’s equation in the 3D GA representation, defining a potential representation for the field is really just a demand that the field have the structure

\begin{equation}\label{eqn:potentialMethods:1320}
\boldsymbol{\mathcal{F}} = \gpgrade{ (\alpha \spacegrad + \beta \partial_t)( A_0 + A_1 + I( A_0' + A_1' ) ) }{1,2}.
\end{equation}

This is a mandate that the electromagnetic field is the grades 1 and 2 components of the vector product of space and time derivative operators on a multivector field \( A = \sum_{k=0}^3 A_k = A_0 + A_1 + I( A_0' + A_1' ) \) that can potentially have any grade components. There are more degrees of freedom in this specification than required, since the multivector can absorb one of the \( \alpha \) or \( \beta \) coefficients, so without loss of generality, one of these (say \( \alpha\)) can be set to 1.

Expanding \ref{eqn:potentialMethods:1320} gives

\begin{equation}\label{eqn:potentialMethods:1340}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
\spacegrad A_0
+ \beta \partial_t A_1
- \spacegrad \cross A_1'
+ I (\spacegrad \cross A_1
+ \beta \partial_t A_1'
+ \spacegrad A_0') \\
&=
\boldsymbol{\mathcal{E}} + I \eta \boldsymbol{\mathcal{H}}.
\end{aligned}
\end{equation}

This naturally has all the right mixes of curls, gradients and time derivatives, all following as direct consequences of applying a grade selection operation to the action of a “spacetime gradient” on a general multivector potential.

The conclusion is that the potential representation of the field is

\begin{equation}\label{eqn:potentialMethods:1360}
\boldsymbol{\mathcal{F}} =
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } A }{1,2},
\end{equation}

where \( A \) is a multivector potentially containing all grades; grades 0,1 are required for electric sources, and grades 2,3 are required for magnetic sources. When it is desirable to refer back to the conventional scalar and vector potentials, this multivector potential can be written as \( A = -\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \lr{ -\phi_m + v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } \).
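
By grade, this potential is

\begin{equation}
\begin{aligned}
\gpgrade{A}{0} &= -\phi \\
\gpgrade{A}{1} &= v \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
\gpgrade{A}{2} &= \eta v I \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
\gpgrade{A}{3} &= -\eta I \phi_m.
\end{aligned}
\end{equation}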

Gauge transformations

Recall that for electric sources the magnetic field is of the form

\begin{equation}\label{eqn:potentialMethods:1380}
\boldsymbol{\mathcal{B}} = \spacegrad \cross \boldsymbol{\mathcal{A}},
\end{equation}

so adding the gradient of any scalar field to the potential \( \boldsymbol{\mathcal{A}}' = \boldsymbol{\mathcal{A}} + \spacegrad \psi \)
does not change the magnetic field

\begin{equation}\label{eqn:potentialMethods:1400}
\begin{aligned}
\boldsymbol{\mathcal{B}}'
&= \spacegrad \cross \lr{ \boldsymbol{\mathcal{A}} + \spacegrad \psi } \\
&= \spacegrad \cross \boldsymbol{\mathcal{A}} \\
&= \boldsymbol{\mathcal{B}}.
\end{aligned}
\end{equation}

The electric field with this changed potential is

\begin{equation}\label{eqn:potentialMethods:1420}
\begin{aligned}
\boldsymbol{\mathcal{E}}'
&= -\spacegrad \phi' - \partial_t \lr{ \BA + \spacegrad \psi} \\
&= -\spacegrad \lr{ \phi' + \partial_t \psi } - \partial_t \BA,
\end{aligned}
\end{equation}

so if
\begin{equation}\label{eqn:potentialMethods:1440}
\phi' = \phi - \partial_t \psi,
\end{equation}

the electric field will also be unaltered by this transformation.

In the STA representation, the field can similarly be altered by adding any (four)gradient to the potential. For example with only electric sources

\begin{equation}\label{eqn:potentialMethods:1460}
\boldsymbol{\mathcal{F}} = \grad \wedge (A + \grad \psi) = \grad \wedge A
\end{equation}

and for electric or magnetic sources

\begin{equation}\label{eqn:potentialMethods:1480}
\boldsymbol{\mathcal{F}} = \gpgradetwo{ \grad (A + \grad \psi) } = \gpgradetwo{ \grad A }.
\end{equation}

In the 3D GA representation, where the field is given by \ref{eqn:potentialMethods:1360}, there is no field that is being curled to add a gradient to. However, if the scalar and vector potentials transform as

\begin{equation}\label{eqn:potentialMethods:1500}
\begin{aligned}
\boldsymbol{\mathcal{A}} &\rightarrow \boldsymbol{\mathcal{A}} + \spacegrad \psi \\
\phi &\rightarrow \phi – \partial_t \psi,
\end{aligned}
\end{equation}

then the multivector potential transforms as
\begin{equation}\label{eqn:potentialMethods:1520}
-\phi + v \boldsymbol{\mathcal{A}}
\rightarrow -\phi + v \boldsymbol{\mathcal{A}} + \partial_t \psi + v \spacegrad \psi,
\end{equation}

so the electromagnetic field is unchanged when the multivector potential is transformed as

\begin{equation}\label{eqn:potentialMethods:1540}
A \rightarrow A + \lr{ \spacegrad + \inv{v} \partial_t } \psi,
\end{equation}

where \( \psi \) is any field that has scalar or pseudoscalar grades. Viewed in terms of grade selection, this makes perfect sense, since the transformed field is

\begin{equation}\label{eqn:potentialMethods:1560}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&\rightarrow
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } \lr{ A + \lr{ \spacegrad + \inv{v} \partial_t } \psi } }{1,2} \\
&=
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } A + \lr{ \spacegrad^2 - \inv{v^2} \partial_{tt} } \psi }{1,2} \\
&=
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } A }{1,2}.
\end{aligned}
\end{equation}

The \( \psi \) contribution is killed by the grade selection operation because it has only scalar and pseudoscalar grades.

Lorenz gauge

Maxwell’s equations are completely decoupled if the potential can be found such that

\begin{equation}\label{eqn:potentialMethods:1580}
\begin{aligned}
\boldsymbol{\mathcal{F}}
&=
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } A }{1,2} \\
&=
\lr{ \spacegrad - \inv{v} \PD{t}{} } A.
\end{aligned}
\end{equation}

When this is the case, Maxwell’s equations are reduced to four non-homogeneous potential wave equations

\begin{equation}\label{eqn:potentialMethods:1620}
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } A = J,
\end{equation}

that is

\begin{equation}\label{eqn:potentialMethods:1600}
\begin{aligned}
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \phi &= - \inv{\epsilon} q_e \\
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \boldsymbol{\mathcal{A}}^{\mathrm{e}} &= - \mu \boldsymbol{\mathcal{J}} \\
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \phi_m &= - \inv{\mu} q_m \\
\lr{ \spacegrad^2 - \inv{v^2} \PDSq{t}{} } \boldsymbol{\mathcal{A}}^{\mathrm{m}} &= - \epsilon \boldsymbol{\mathcal{M}}.
\end{aligned}
\end{equation}

There should be no a priori assumption that such a field representation has no scalar, nor pseudoscalar components. That explicit expansion in grades is

\begin{equation}\label{eqn:potentialMethods:1640}
\begin{aligned}
\lr{ \spacegrad - \inv{v} \PD{t}{} } A
&=
\lr{ \spacegrad - \inv{v} \PD{t}{} } \lr{ -\phi + v \boldsymbol{\mathcal{A}}^{\mathrm{e}} + \eta I \lr{ -\phi_m + v \boldsymbol{\mathcal{A}}^{\mathrm{m}} } } \\
&=
\inv{v} \partial_t \phi
+ v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
&-\spacegrad \phi
+ I \eta v \spacegrad \wedge \boldsymbol{\mathcal{A}}^{\mathrm{m}}
- \partial_t \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
&+ v \spacegrad \wedge \boldsymbol{\mathcal{A}}^{\mathrm{e}}
- \eta I \spacegrad \phi_m
- I \eta \partial_t \boldsymbol{\mathcal{A}}^{\mathrm{m}} \\
&+ \eta I \inv{v} \partial_t \phi_m
+ I \eta v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}},
\end{aligned}
\end{equation}

so if this potential representation has only vector and bivector grades, it must be true that

\begin{equation}\label{eqn:potentialMethods:1660}
\begin{aligned}
\inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} &= 0 \\
\inv{v} \partial_t \phi_m + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{m}} &= 0.
\end{aligned}
\end{equation}

The first is the well known Lorenz gauge condition, whereas the second is the dual of that condition for magnetic sources.
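
Equivalently, both of these conditions can be expressed as the requirement that the gradient's action on the potential have no scalar nor pseudoscalar grades

\begin{equation}
\gpgrade{ \lr{ \spacegrad - \inv{v} \PD{t}{} } A }{0,3} = 0.
\end{equation}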

Should one of these conditions, say the Lorenz condition for the electric source potentials, be non-zero, then it is possible to make a potential transformation for which this condition is zero

\begin{equation}\label{eqn:potentialMethods:1680}
\begin{aligned}
0
&\ne
\inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}} \\
&=
\inv{v} \partial_t (\phi' - \partial_t \psi) + v \spacegrad \cdot (\boldsymbol{\mathcal{A}}' + \spacegrad \psi) \\
&=
\inv{v} \partial_t \phi' + v \spacegrad \cdot \boldsymbol{\mathcal{A}}'
+ v \lr{ \spacegrad^2 - \inv{v^2} \partial_{tt} } \psi,
\end{aligned}
\end{equation}

so for \( \inv{v} \partial_t \phi' + v \spacegrad \cdot \boldsymbol{\mathcal{A}}' \) to be zero, \( \psi \) must be found such that
\begin{equation}\label{eqn:potentialMethods:1700}
\inv{v} \partial_t \phi + v \spacegrad \cdot \boldsymbol{\mathcal{A}}^{\mathrm{e}}
= v \lr{ \spacegrad^2 - \inv{v^2} \partial_{tt} } \psi.
\end{equation}

References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley \& Sons, 3rd edition, 2005.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] David M Pozar. Microwave engineering. John Wiley \& Sons, 2009.

Calculating the magnetostatic field from the moment

November 14, 2016 math and physics play


The vector potential, to first order, for a magnetostatic localized current distribution was found to be

\begin{equation}\label{eqn:magneticFieldFromMoment:20}
\BA(\Bx) = \frac{\mu_0}{4 \pi} \frac{\Bm \cross \Bx}{\Abs{\Bx}^3}.
\end{equation}

Initially, I tried to calculate the magnetic field from this, but ran into trouble. Here’s a new try.

\begin{equation}\label{eqn:magneticFieldFromMoment:40}
\begin{aligned}
\BB
&=
\frac{\mu_0}{4 \pi}
\spacegrad \cross \lr{ \Bm \cross \frac{\Bx}{r^3} } \\
&=
-\frac{\mu_0}{4 \pi}
\spacegrad \cdot \lr{ \Bm \wedge \frac{\Bx}{r^3} } \\
&=
-\frac{\mu_0}{4 \pi}
\lr{
(\Bm \cdot \spacegrad) \frac{\Bx}{r^3}
-\Bm \spacegrad \cdot \frac{\Bx}{r^3}
} \\
&=
\frac{\mu_0}{4 \pi}
\lr{
-\frac{(\Bm \cdot \spacegrad) \Bx}{r^3}
- \lr{ \Bm \cdot \lr{\spacegrad \inv{r^3} }} \Bx
+\Bm (\spacegrad \cdot \Bx) \inv{r^3}
+\Bm \lr{\spacegrad \inv{r^3} } \cdot \Bx
}.
\end{aligned}
\end{equation}

Here I’ve used \( \Ba \cross \lr{ \Bb \cross \Bc } = -\Ba \cdot \lr{ \Bb \wedge \Bc } \), and then expanded that with \( \Ba \cdot \lr{ \Bb \wedge \Bc } = (\Ba \cdot \Bb) \Bc - (\Ba \cdot \Bc) \Bb \). Since one of these vectors is the gradient, care must be taken to have it operate on the appropriate terms in such an expansion.

Since we have \( \spacegrad \cdot \Bx = 3 \), \( (\Bm \cdot \spacegrad) \Bx = \Bm \), and \( \spacegrad 1/r^n = -n \Bx/r^{n+2} \), this reduces to

\begin{equation}\label{eqn:magneticFieldFromMoment:60}
\begin{aligned}
\BB
&=
\frac{\mu_0}{4 \pi}
\lr{
- \frac{\Bm}{r^3}
+ 3 \frac{(\Bm \cdot \Bx) \Bx}{r^5}
+ 3 \Bm \inv{r^3}
-3 \Bm \frac{\Bx}{r^5} \cdot \Bx
} \\
&=
\frac{\mu_0}{4 \pi}
\frac{3 (\Bm \cdot \ncap) \ncap -\Bm}{r^3},
\end{aligned}
\end{equation}

which is the desired result.
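
That result is also easy to double check symbolically. The following sympy sketch (dropping the constant \( \mu_0/4\pi \) factor) computes the curl of \( \Bm \cross \Bx/r^3 \) directly and verifies that it differs from the dipole form above by zero:

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
m1, m2, m3 = sp.symbols('m1 m2 m3', real=True)

X = sp.Matrix([x, y, z])
m = sp.Matrix([m1, m2, m3])
r = sp.sqrt(x**2 + y**2 + z**2)

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

B = curl(m.cross(X) / r**3)                # curl of m x X / r^3
n = X / r
B_dipole = (3 * m.dot(n) * n - m) / r**3   # (3 (m . n) n - m)/r^3

# clear the common denominator and expand; expect the zero vector
print(((B - B_dipole) * r**5).applyfunc(sp.expand))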

Frequency domain time averaged Poynting theorem

November 8, 2016 math and physics play


The time domain Poynting relationship was found to be

\begin{equation}\label{eqn:poyntingTimeHarmonic:20}
0
=
\spacegrad \cdot \lr{ \BE \cross \BH }
+ \frac{\epsilon}{2} \BE \cdot \PD{t}{\BE}
+ \frac{\mu}{2} \BH \cdot \PD{t}{\BH}
+ \BH \cdot \BM_i
+ \BE \cdot \BJ_i
+ \sigma \BE \cdot \BE.
\end{equation}

Let’s derive the equivalent relationship for the time averaged portion of the time-harmonic Poynting vector. The time domain representation of the Poynting vector in terms of the time-harmonic (phasor) vectors is

\begin{equation}\label{eqn:poyntingTimeHarmonic:40}
\begin{aligned}
\boldsymbol{\mathcal{E}} \cross \boldsymbol{\mathcal{H}}
&= \inv{4}
\lr{
\BE e^{j\omega t}
+ \BE^\conj e^{-j\omega t}
}
\cross
\lr{
\BH e^{j\omega t}
+ \BH^\conj e^{-j\omega t}
} \\
&=
\inv{2} \textrm{Re} \lr{ \BE \cross \BH^\conj + \BE \cross \BH e^{2 j \omega t} },
\end{aligned}
\end{equation}

so if we are looking for the relationships that affect only the time averaged Poynting vector, over integral multiples of the period, we are interested in evaluating the divergence of

\begin{equation}\label{eqn:poyntingTimeHarmonic:60}
\inv{2} \BE \cross \BH^\conj.
\end{equation}
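
That time averaging claim is easy to spot check numerically. Here is a small numpy sketch using arbitrary random phasors and an arbitrary frequency, sampling the time domain fields \( \textrm{Re}(\BE e^{j\omega t}) \), \( \textrm{Re}(\BH e^{j\omega t}) \) over one period:

import numpy as np

rng = np.random.default_rng(0)
omega = 2 * np.pi * 1e9
T = 2 * np.pi / omega

# arbitrary complex phasors for E and H
E = rng.normal(size=3) + 1j * rng.normal(size=3)
H = rng.normal(size=3) + 1j * rng.normal(size=3)

# uniform samples tiling exactly one period (endpoint excluded)
t = np.linspace(0, T, 4096, endpoint=False)
Et = np.real(E[:, None] * np.exp(1j * omega * t))
Ht = np.real(H[:, None] * np.exp(1j * omega * t))

S_avg = np.cross(Et, Ht, axis=0).mean(axis=1)   # time averaged Poynting vector
print(np.allclose(S_avg, 0.5 * np.real(np.cross(E, np.conj(H)))))  # True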

The time-harmonic Maxwell’s equations are
\begin{equation}\label{eqn:poyntingTimeHarmonic:80}
\begin{aligned}
\spacegrad \cross \BE &= - j \omega \mu \BH - \BM_i \\
\spacegrad \cross \BH &= j \omega \epsilon \BE + \BJ_i + \sigma \BE.
\end{aligned}
\end{equation}

The latter after conjugation is

\begin{equation}\label{eqn:poyntingTimeHarmonic:100}
\spacegrad \cross \BH^\conj = -j \omega \epsilon^\conj \BE^\conj + \BJ_i^\conj + \sigma^\conj \BE^\conj.
\end{equation}

For the divergence we have

\begin{equation}\label{eqn:poyntingTimeHarmonic:120}
\begin{aligned}
\spacegrad \cdot \lr{ \BE \cross \BH^\conj }
&=
\BH^\conj \cdot \lr{ \spacegrad \cross \BE }
-\BE \cdot \lr{ \spacegrad \cross \BH^\conj } \\
&=
\BH^\conj \cdot \lr{ - j \omega \mu \BH - \BM_i }
- \BE \cdot \lr{ -j \omega \epsilon^\conj \BE^\conj + \BJ_i^\conj + \sigma^\conj \BE^\conj },
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:poyntingTimeHarmonic:140}
0
=
\spacegrad \cdot \lr{ \BE \cross \BH^\conj }
+
\BH^\conj \cdot \lr{ j \omega \mu \BH + \BM_i }
+ \BE \cdot \lr{ -j \omega \epsilon^\conj \BE^\conj + \BJ_i^\conj + \sigma^\conj \BE^\conj },
\end{equation}

so
\begin{equation}\label{eqn:poyntingTimeHarmonic:160}
\boxed{
0
=
\spacegrad \cdot \inv{2} \lr{ \BE \cross \BH^\conj }
+ \inv{2} \lr{ \BH^\conj \cdot \BM_i
+ \BE \cdot \BJ_i^\conj }
+ \inv{2} j \omega \lr{ \mu \Abs{\BH}^2 - \epsilon^\conj \Abs{\BE}^2 }
+ \inv{2} \sigma^\conj \Abs{\BE}^2.
}
\end{equation}

Vector Area

October 13, 2016 math and physics play


One of the results of this problem is required for a later one on magnetic moments that I’d like to do.

Question: Vector Area. ([1] pr. 1.61)

The integral

\begin{equation}\label{eqn:vectorAreaGriffiths:20}
\Ba = \int_S d\Ba,
\end{equation}

is sometimes called the vector area of the surface \( S \).

(a)

Find the vector area of a hemispherical bowl of radius \( R \).

(b)

Show that \( \Ba = 0 \) for any closed surface.

(c)

Show that \( \Ba \) is the same for all surfaces sharing the same boundary.

(d)

Show that
\begin{equation}\label{eqn:vectorAreaGriffiths:40}
\Ba = \inv{2} \oint \Br \cross d\Bl,
\end{equation}

where the integral is around the boundary line.

(e)

Show that
\begin{equation}\label{eqn:vectorAreaGriffiths:60}
\oint \lr{ \Bc \cdot \Br } d\Bl = \Ba \cross \Bc.
\end{equation}

Answer

(a)

\begin{equation}\label{eqn:vectorAreaGriffiths:80}
\begin{aligned}
\Ba
&=
\int_{0}^{\pi/2} R^2 \sin\theta d\theta \int_0^{2\pi} d\phi
\lr{ \sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta } \\
&=
R^2 \int_{0}^{\pi/2} d\theta \int_0^{2\pi} d\phi
\lr{ \sin^2\theta \cos\phi, \sin^2\theta \sin\phi, \sin\theta\cos\theta } \\
&=
2 \pi R^2 \int_{0}^{\pi/2} d\theta \Be_3
\sin\theta\cos\theta \\
&=
\pi R^2
\Be_3
\int_{0}^{\pi/2} d\theta
\sin(2 \theta) \\
&=
\pi R^2
\Be_3
\evalrange{\lr{\frac{-\cos(2 \theta)}{2}}}{0}{\pi/2} \\
&=
\pi R^2
\Be_3
\lr{ 1 - (-1) }/2 \\
&=
\pi R^2
\Be_3.
\end{aligned}
\end{equation}
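
This is easy to verify numerically. A small numpy sketch (midpoint rule, with \( R = 1 \) assumed) that sums \( \ncap \, dA \) over the hemisphere gives \( \Ba \approx \pi R^2 \Be_3 \):

import numpy as np

R = 1.0
N = 400
# midpoint grids over 0 < theta < pi/2, 0 < phi < 2 pi
theta = (np.arange(N) + 0.5) * (np.pi / 2) / N
phi = (np.arange(2 * N) + 0.5) * (2 * np.pi) / (2 * N)
th, ph = np.meshgrid(theta, phi, indexing='ij')
dtheta, dphi = (np.pi / 2) / N, (2 * np.pi) / (2 * N)

# outward unit normal weighted by the spherical area element R^2 sin(theta) dtheta dphi
nhat = np.stack([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
dA = R**2 * np.sin(th) * dtheta * dphi

a = (nhat * dA).sum(axis=(1, 2))
print(a, np.pi * R**2)   # ~ [0, 0, 3.14159], 3.14159...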

(b)

As hinted in the original problem description, this follows from

\begin{equation}\label{eqn:vectorAreaGriffiths:100}
\int dV \spacegrad T = \oint T d\Ba,
\end{equation}

simply by setting \( T = 1 \).

(c)

Suppose that two surfaces sharing a boundary are parameterized by vectors \( \Bx(u, v), \Bx(a,b) \) respectively. The area integral with the first parameterization is

\begin{equation}\label{eqn:vectorAreaGriffiths:120}
\begin{aligned}
\Ba
&= \int \PD{u}{\Bx} \cross \PD{v}{\Bx} du dv \\
&= \epsilon_{ijk} \Be_i \int \PD{u}{x_j} \PD{v}{x_k} du dv \\
&=
\epsilon_{ijk} \Be_i \int
\lr{
\PD{a}{x_j}
\PD{u}{a}
+
\PD{b}{x_j}
\PD{u}{b}
}
\lr{
\PD{a}{x_k}
\PD{v}{a}
+
\PD{b}{x_k}
\PD{v}{b}
}
du dv \\
&=
\epsilon_{ijk} \Be_i \int
du dv
\lr{
\PD{a}{x_j}
\PD{u}{a}
\PD{a}{x_k}
\PD{v}{a}
+
\PD{b}{x_j}
\PD{u}{b}
\PD{b}{x_k}
\PD{v}{b}
+
\PD{b}{x_j}
\PD{u}{b}
\PD{a}{x_k}
\PD{v}{a}
+
\PD{a}{x_j}
\PD{u}{a}
\PD{b}{x_k}
\PD{v}{b}
} \\
&=
\epsilon_{ijk} \Be_i \int
du dv
\lr{
\PD{a}{x_j}
\PD{a}{x_k}
\PD{u}{a}
\PD{v}{a}
+
\PD{b}{x_j}
\PD{b}{x_k}
\PD{u}{b}
\PD{v}{b}
}
+
\epsilon_{ijk} \Be_i \int
du dv
\lr{
\PD{b}{x_j}
\PD{a}{x_k}
\PD{u}{b}
\PD{v}{a}
-
\PD{a}{x_k}
\PD{b}{x_j}
\PD{u}{a}
\PD{v}{b}
}.
\end{aligned}
\end{equation}

In the last step a \( j,k \) index swap was performed for the last term of the second integral. The first integral is zero, since the integrand is symmetric in \( j,k \). This leaves
\begin{equation}\label{eqn:vectorAreaGriffiths:140}
\begin{aligned}
\Ba
&=
\epsilon_{ijk} \Be_i \int
du dv
\lr{
\PD{b}{x_j}
\PD{a}{x_k}
\PD{u}{b}
\PD{v}{a}
-
\PD{a}{x_k}
\PD{b}{x_j}
\PD{u}{a}
\PD{v}{b}
} \\
&=
\epsilon_{ijk} \Be_i \int
\PD{b}{x_j}
\PD{a}{x_k}
\lr{
\PD{u}{b}
\PD{v}{a}
-
\PD{u}{a}
\PD{v}{b}
}
du dv \\
&=
\epsilon_{ijk} \Be_i \int
\PD{b}{x_j}
\PD{a}{x_k}
\frac{\partial(b,a)}{\partial(u,v)} du dv \\
&=
-\int
\PD{b}{\Bx} \cross \PD{a}{\Bx} da db \\
&=
\int
\PD{a}{\Bx} \cross \PD{b}{\Bx} da db.
\end{aligned}
\end{equation}

However, this is the area integral with the second parameterization, proving that the area-integral for any given boundary is independent of the surface.

(d)

Having proven that the area-integral for a given boundary is independent of the surface that it is evaluated on, the result follows by illustration as hinted in the full problem description. Draw a “cone”, tracing a vector \( \Bx' \) from the origin to the position line element, and divide that cone up into infinitesimal slices as sketched in fig. 1.

Fig 1. Cone subtended by loop

The area of each of these triangular slices is

\begin{equation}\label{eqn:vectorAreaGriffiths:160}
\inv{2} \Bx' \cross d\Bl'.
\end{equation}

Summing those triangles proves the result.
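
For the hemispherical bowl of part (a) the boundary is a circle of radius \( R \), and a direct numerical evaluation of \( \inv{2} \oint \Br \cross d\Bl \) around that circle (a numpy sketch, parameterizing the loop by angle) reproduces the same \( \pi R^2 \Be_3 \):

import numpy as np

R = 1.0
s = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
r = np.stack([R * np.cos(s), R * np.sin(s), np.zeros_like(s)])       # boundary circle
drds = np.stack([-R * np.sin(s), R * np.cos(s), np.zeros_like(s)])   # dr/ds

# (1/2) \oint r x dl, evaluated as a uniform sum over the parameter s
a = 0.5 * np.cross(r, drds, axis=0).mean(axis=1) * (2 * np.pi)
print(a, np.pi * R**2)   # ~ [0, 0, 3.14159], 3.14159...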

(e)

As hinted in the problem, this follows from

\begin{equation}\label{eqn:vectorAreaGriffiths:180}
\int \spacegrad T \cross d\Ba = -\oint T d\Bl.
\end{equation}

Set \( T = \Bc \cdot \Br \), for which

\begin{equation}\label{eqn:vectorAreaGriffiths:240}
\begin{aligned}
\spacegrad T
&= \Be_k \partial_k c_m x_m \\
&= \Be_k c_m \delta_{km} \\
&= \Be_k c_k \\
&= \Bc,
\end{aligned}
\end{equation}

so
\begin{equation}\label{eqn:vectorAreaGriffiths:200}
\begin{aligned}
\int (\spacegrad T) \cross d\Ba
&=
\int \Bc \cross d\Ba \\
&=
\Bc \cross \int d\Ba \\
&=
\Bc \cross \Ba.
\end{aligned}
\end{equation}

so
\begin{equation}\label{eqn:vectorAreaGriffiths:220}
\Bc \cross \Ba = -\oint (\Bc \cdot \Br) d\Bl,
\end{equation}

or
\begin{equation}\label{eqn:vectorAreaGriffiths:260}
\oint (\Bc \cdot \Br) d\Bl
=
\Ba \cross \Bc.
\end{equation}

References

[1] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

Geometric Algebra in a nutshell.

September 29, 2016 math and physics play


Motivation

I initially thought that I might submit a problem set solution for ece1228 using Geometric Algebra. In order to justify this, I needed to add an appendix to that problem set that outlined enough of the ideas that such a solution might make sense to the grader.

I ended up changing my mind and reworked the problem entirely, removing any use of GA. Here’s the tutorial I initially considered submitting with that problem.

Geometric Algebra in a nutshell.

Geometric Algebra defines a non-commutative, associative vector product

\begin{equation}\label{eqn:gaTutorial:20}
\begin{aligned}
\Ba \Bb \Bc
&=
(\Ba \Bb) \Bc \\
&=
\Ba (\Bb \Bc),
\end{aligned}
\end{equation}

where the square of a vector equals the squared vector magnitude

\begin{equation}\label{eqn:gaTutorial:40}
\Ba^2 = \Abs{\Ba}^2,
\end{equation}

In Euclidean spaces such a squared vector is always positive, but that is not necessarily the case in the mixed signature spaces used in special relativity.

There are a number of consequences of these two simple vector multiplication rules.

  • Squared unit vectors have a unit magnitude (up to a sign). In a Euclidean space such a product is always positive

    \begin{equation}\label{eqn:gaTutorial:60}
    (\Be_1)^2 = 1.
    \end{equation}

  • Products of perpendicular vectors anticommute.

    \begin{equation}\label{eqn:gaTutorial:80}
    \begin{aligned}
    2
    &=
    (\Be_1 + \Be_2)^2 \\
    &= (\Be_1 + \Be_2)(\Be_1 + \Be_2) \\
    &= \Be_1^2 + \Be_2 \Be_1 + \Be_1 \Be_2 + \Be_2^2 \\
    &= 2 + \Be_2 \Be_1 + \Be_1 \Be_2.
    \end{aligned}
    \end{equation}

    A product of two perpendicular vectors is called a bivector, and can be used to represent an oriented plane. The last line above shows an example of a scalar and bivector sum, called a multivector. In general Geometric Algebra allows scalars, vectors, bivectors, and higher degree analogues (grades) to be summed.

    Comparison of the RHS and LHS of \ref{eqn:gaTutorial:80} shows that we must have

    \begin{equation}\label{eqn:gaTutorial:100}
    \Be_2 \Be_1 = -\Be_1 \Be_2.
    \end{equation}

    It is true in general that the product of two perpendicular vectors anticommutes. When, as above, such a product is a product of
    two orthonormal vectors, it behaves like a non-commutative imaginary quantity, as it has an imaginary square in Euclidean spaces

    \begin{equation}\label{eqn:gaTutorial:120}
    \begin{aligned}
    (\Be_1 \Be_2)^2
    &=
    (\Be_1 \Be_2)
    (\Be_1 \Be_2) \\
    &=
    \Be_1 (\Be_2
    \Be_1) \Be_2 \\
    &=
    -\Be_1 (\Be_1
    \Be_2) \Be_2 \\
    &=
    -(\Be_1 \Be_1)
    (\Be_2 \Be_2) \\
    &=-1.
    \end{aligned}
    \end{equation}

    Such “imaginary” (unit bivectors) have important applications describing rotations in Euclidean spaces, and boosts in Minkowski spaces.

  • The product of three perpendicular vectors, such as

    \begin{equation}\label{eqn:gaTutorial:140}
    I = \Be_1 \Be_2 \Be_3,
    \end{equation}

    is called a trivector. In \R{3}, the product of three orthonormal vectors is called a pseudoscalar for the space, and can represent an oriented volume element. The quantity \( I \) above is the typical orientation picked for the \R{3} unit pseudoscalar. This quantity also has characteristics of an imaginary number

    \begin{equation}\label{eqn:gaTutorial:160}
    \begin{aligned}
    I^2
    &=
    (\Be_1 \Be_2 \Be_3)
    (\Be_1 \Be_2 \Be_3) \\
    &=
    \Be_1 \Be_2 (\Be_3
    \Be_1) \Be_2 \Be_3 \\
    &=
    -\Be_1 \Be_2 \Be_1
    \Be_3 \Be_2 \Be_3 \\
    &=
    -\Be_1 (\Be_2 \Be_1)
    (\Be_3 \Be_2) \Be_3 \\
    &=
    -\Be_1 (\Be_1 \Be_2)
    (\Be_2 \Be_3) \Be_3 \\
    &=
    -
    \Be_1^2
    \Be_2^2
    \Be_3^2 \\
    &=
    -1.
    \end{aligned}
    \end{equation}

  • The product of two vectors in \R{3} can be expressed as the sum of a symmetric scalar product and antisymmetric bivector product

    \begin{equation}\label{eqn:gaTutorial:480}
    \begin{aligned}
    \Ba \Bb
    &=
    \sum_{i,j = 1}^n \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n \Be_i^2 a_i b_i
    +
    \sum_{0 < i \ne j \le n} \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n a_i b_i
    +
    \sum_{0 < i < j \le n} \Be_i \Be_j (a_i b_j - a_j b_i).
    \end{aligned}
    \end{equation}

    The first (symmetric) term is clearly the dot product. The antisymmetric term is designated the wedge product. In general these are written

    \begin{equation}\label{eqn:gaTutorial:500}
    \Ba \Bb = \Ba \cdot \Bb + \Ba \wedge \Bb,
    \end{equation}

    where

    \begin{equation}\label{eqn:gaTutorial:520}
    \begin{aligned}
    \Ba \cdot \Bb &\equiv \inv{2} \lr{ \Ba \Bb + \Bb \Ba } \\
    \Ba \wedge \Bb &\equiv \inv{2} \lr{ \Ba \Bb - \Bb \Ba }.
    \end{aligned}
    \end{equation}

    The coordinate expansion of both can be seen above, but in \R{3} the wedge can also be written

    \begin{equation}\label{eqn:gaTutorial:540}
    \Ba \wedge \Bb = \Be_1 \Be_2 \Be_3 (\Ba \cross \Bb) = I (\Ba \cross \Bb).
    \end{equation}

    This allows for a handy dot plus cross product expansion of the vector product

    \begin{equation}\label{eqn:gaTutorial:180}
    \Ba \Bb = \Ba \cdot \Bb + I (\Ba \cross \Bb).
    \end{equation}

    This result should be familiar to the student of quantum spin states where one writes

    \begin{equation}\label{eqn:gaTutorial:200}
    (\Bsigma \cdot \Ba) (\Bsigma \cdot \Bb) = (\Ba \cdot \Bb) + i (\Ba \cross \Bb) \cdot \Bsigma.
    \end{equation}

    This correspondence is because the Pauli spin basis is a specific matrix representation of a Geometric Algebra, satisfying the same commutator and anticommutator relationships. A number of other algebraic structures, such as complex numbers and quaternions, can also be modelled as Geometric Algebra elements. A small numerical check of the Pauli matrix correspondence is sketched just after this list.

  • It is often useful to utilize the grade selection operator
    \( \gpgrade{M}{n} \) and scalar grade selection operator \( \gpgradezero{M} = \gpgrade{M}{0} \)
    to select the scalar, vector, bivector, trivector, or higher grade algebraic elements. For example, operating on vectors \( \Ba, \Bb, \Bc \), we have

    \begin{equation}\label{eqn:gaTutorial:580}
    \begin{aligned}
    \gpgradezero{ \Ba \Bb }
    &= \Ba \cdot \Bb \\
    \gpgradeone{ \Ba \Bb \Bc }
    &=
    \Ba (\Bb \cdot \Bc)
    +
    \Ba \cdot (\Bb \wedge \Bc) \\
    &=
    \Ba (\Bb \cdot \Bc)
    +
    (\Ba \cdot \Bb) \Bc
    -
    (\Ba \cdot \Bc) \Bb \\
    \gpgradetwo{\Ba \Bb} &=
    \Ba \wedge \Bb \\
    \gpgradethree{\Ba \Bb \Bc} &=
    \Ba \wedge \Bb \wedge \Bc.
    \end{aligned}
    \end{equation}

    Note that the wedge product of any number of vectors such as \( \Ba \wedge \Bb \wedge \Bc \) is associative and can be expressed in terms of the complete antisymmetrization of the product of those vectors. A consequence of that is the fact that a wedge product that includes any collinear vectors in the product is zero.
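
As mentioned above, the Pauli matrix representation provides an easy numerical spot check of the dot plus cross product expansion of the vector product. A minimal numpy sketch, using arbitrary random vectors:

import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.stack([s1, s2, s3])

rng = np.random.default_rng(1)
a = rng.normal(size=3)
b = rng.normal(size=3)

sa = np.einsum('i,ijk->jk', a, sigma)    # sigma . a
sb = np.einsum('i,ijk->jk', b, sigma)    # sigma . b

rhs = np.dot(a, b) * np.eye(2) + 1j * np.einsum('i,ijk->jk', np.cross(a, b), sigma)
print(np.allclose(sa @ sb, rhs))   # True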

Example: Helmholtz equations.

As an example of the power of \ref{eqn:gaTutorial:180}, consider the following Helmholtz equation derivation (wave equations for the electric and magnetic fields in the frequency domain.)

Application of \ref{eqn:gaTutorial:180} to
Maxwell equations in the frequency domain for source free simple media gives

\label{eqn:emtProblemSet1Problem6:340}
\begin{equation}\label{eqn:emtProblemSet1Problem6:360}
\spacegrad \BE = -j \omega I \BB
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:380}
\spacegrad I \BB = -j \omega \mu \epsilon \BE.
\end{equation}

These equations use the engineering (not physics) sign convention for the phasors where the time domain fields are of the form \( \boldsymbol{\mathcal{E}}(\Br, t) = \textrm{Re}( \BE e^{j\omega t} ) \).

Operation with the gradient from the left produces the Helmholtz equation for each of the fields using nothing more than multiplication and simple substitution

\label{eqn:emtProblemSet1Problem6:400}
\begin{equation}\label{eqn:emtProblemSet1Problem6:420}
\spacegrad^2 \BE = - \mu \epsilon \omega^2 \BE
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:440}
\spacegrad^2 I \BB = - \mu \epsilon \omega^2 I \BB.
\end{equation}

There was no reason to go through the headache of looking up or deriving the expansion of \( \spacegrad \cross (\spacegrad \cross \BA ) \) as is required with the traditional vector algebra demonstration of these identities.

Observe that the usual Helmholtz equation for \( \BB \) doesn’t have a pseudoscalar factor. That result can be obtained by just cancelling the factors \( I \) since the \R{3} Euclidean pseudoscalar commutes with all grades (this isn’t the case in \R{2} nor in Minkowski spaces.)

Example: Factoring the Laplacian.

There are various ways to demonstrate the identity

\begin{equation}\label{eqn:gaTutorial:660}
\spacegrad \cross \lr{ \spacegrad \cross \BA } = \spacegrad \lr{ \spacegrad \cdot \BA } - \spacegrad^2 \BA,
\end{equation}

such as the use of (somewhat obscure) tensor contraction techniques. We can also do this with Geometric Algebra (using a different set of obscure techniques) by factoring the Laplacian action on a vector

\begin{equation}\label{eqn:gaTutorial:700}
\begin{aligned}
\spacegrad^2 \BA
&=
\spacegrad (\spacegrad \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA + \spacegrad \wedge \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA)
+
\cancel{\spacegrad \wedge \spacegrad \wedge \BA} \\
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA).
\end{aligned}
\end{equation}

Should we wish to express the last term using cross products, a grade one selection operation can be used
\begin{equation}\label{eqn:gaTutorial:680}
\begin{aligned}
\spacegrad \cdot (\spacegrad \wedge \BA)
&=
\gpgradeone{ \spacegrad (\spacegrad \wedge \BA) } \\
&=
\gpgradeone{ \spacegrad I (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I \spacegrad \wedge (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I^2 \spacegrad \cross (\spacegrad \cross \BA) } \\
&=
-\spacegrad \cross (\spacegrad \cross \BA).
\end{aligned}
\end{equation}

Here coordinate expansion was not required in any step.
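
For comparison, the identity \ref{eqn:gaTutorial:660} itself is easily confirmed by brute force coordinate expansion. A short sympy sketch, with arbitrary placeholder components A1, A2, A3:

import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([sp.Function('A%d' % i)(x, y, z) for i in (1, 2, 3)])

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

lap = lambda F: sp.Matrix([div(grad(F[i])) for i in range(3)])

# curl(curl A) - (grad(div A) - laplacian A) should vanish identically
print((curl(curl(A)) - grad(div(A)) + lap(A)).applyfunc(sp.simplify))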

Learning more.

Some references that may be helpful to learn more about Geometric Algebra are [2], [1], [4], and [3].

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[4] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

Maxwell equation boundary conditions in media

September 10, 2016 math and physics play


Following [1], Maxwell’s equations in media, including both electric and magnetic sources and currents are

\begin{equation}\label{eqn:boundaryConditionsInMedia:40}
\spacegrad \cross \BE = -\BM - \partial_t \BB
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:60}
\spacegrad \cross \BH = \BJ + \partial_t \BD
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:80}
\spacegrad \cdot \BD = \rho
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:100}
\spacegrad \cdot \BB = \rho_{\textrm{m}}
\end{equation}

In general, it is not possible to assemble these into a single Geometric Algebra equation unless specific assumptions about the permeabilities are made, but we can still use Geometric Algebra to examine the boundary condition question. First, these equations can be expressed in a more natural multivector form

\begin{equation}\label{eqn:boundaryConditionsInMedia:140}
\spacegrad \wedge \BE = -I \lr{ \BM + \partial_t \BB }
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:160}
\spacegrad \wedge \BH = I \lr{ \BJ + \partial_t \BD }
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:180}
\spacegrad \cdot \BD = \rho
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:200}
\spacegrad \cdot \BB = \rho_{\textrm{m}}
\end{equation}

Then duality relations can be used on the divergences to write all four equations in their curl form

\begin{equation}\label{eqn:boundaryConditionsInMedia:240}
\spacegrad \wedge \BE = -I \lr{ \BM + \partial_t \BB }
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:260}
\spacegrad \wedge \BH = I \lr{ \BJ + \partial_t \BD }
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:280}
\spacegrad \wedge (I\BD) = \rho I
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:300}
\spacegrad \wedge (I\BB) = \rho_{\textrm{m}} I.
\end{equation}

Now it is possible to employ Stokes theorem to each of these. The usual procedure is to use both the loops of fig. 1 and the pillbox of fig. 2, where in both cases the height is made infinitesimal.

fig 1. Two surfaces normal to the interface.

fig 2. A pillbox volume encompassing the interface.

With all these relations expressed in curl form as above, we can use just the pillbox configuration to evaluate the Stokes integrals.
Let the height \( h \) be measured along the normal axis, and assume that all the charges and currents are localized to the surface:

\begin{equation}\label{eqn:boundaryConditionsInMedia:320}
\begin{aligned}
\BM &= \BM_{\textrm{s}} \delta( h ) \\
\BJ &= \BJ_{\textrm{s}} \delta( h ) \\
\rho &= \rho_{\textrm{s}} \delta( h ) \\
\rho_{\textrm{m}} &= \rho_{\textrm{m}\textrm{s}} \delta( h ),
\end{aligned}
\end{equation}

With these delta function representations, we can enumerate the Stokes integrals \( \int d^3 \Bx \cdot \lr{ \spacegrad \wedge \BX } = \oint_{\partial V} d^2 \Bx \cdot \BX \). The volume element will be written as \( d^3 \Bx = d^2 \Bx \wedge \ncap\, dh \), giving

\begin{equation}\label{eqn:boundaryConditionsInMedia:360}
\oint_{\partial V} d^2 \Bx \cdot \BE = -\int (d^2 \Bx \wedge \ncap) \cdot \lr{ I \BM_{\textrm{s}} + \partial_t I \BB \Delta h}
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:380}
\oint_{\partial V} d^2 \Bx \cdot \BH = \int (d^2 \Bx \wedge \ncap) \cdot \lr{ I \BJ_{\textrm{s}} + \partial_t I \BD \Delta h}
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:400}
\oint_{\partial V} d^2 \Bx \cdot (I\BD) = \int (d^2 \Bx \wedge \ncap) \cdot \lr{ \rho_{\textrm{s}} I }
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:420}
\oint_{\partial V} d^2 \Bx \cdot (I\BB) = \int (d^2 \Bx \wedge \ncap) \cdot \lr{ \rho_{\textrm{m}\textrm{s}} I }
\end{equation}

In the limit with \( \Delta h \rightarrow 0 \), the LHS integrals are reduced to just the top and bottom surfaces, and the \( \Delta h \) contributions on the RHS are eliminated. With \( i = I \ncap \), and \( d^2 \Bx = dA\, i \) on the top surface (so that \( d^2 \Bx \wedge \ncap = dA\, i \ncap = dA\, I \)), and writing \( \Delta \BE = \BE_2 - \BE_1 \) for the difference between the field values on either side of the interface (with \( \ncap \) taken to point from region 1 into region 2, and similarly for the other fields), we are left with

\begin{equation}\label{eqn:boundaryConditionsInMedia:460}
0 = \int dA \lr{ i \cdot \Delta \BE + I \cdot \lr{ I \BM_{\textrm{s}} } }
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:480}
0 = \int dA \lr{ i \cdot \Delta \BH – I \cdot \lr{ I \BJ_{\textrm{s}} } }
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:500}
0 = \int dA \lr{ i \cdot \Delta (I\BD) + \rho_{\textrm{s}} }
\end{equation}
\begin{equation}\label{eqn:boundaryConditionsInMedia:520}
0 = \int dA \lr{ i \cdot \Delta (I\BB) + \rho_{\textrm{m}\textrm{s}} }
\end{equation}

Consider the first integral. Any component of \( \BE \) that is normal to the plane of the pillbox top (or bottom) has no contribution to the integral, so this constraint is one that affects only the tangential components \( \ncap (\ncap \wedge (\Delta \BE)) \). Writing out the vector portion of the integrand, we have

\begin{equation}\label{eqn:boundaryConditionsInMedia:540}
\begin{aligned}
i \cdot \Delta \BE + I \cdot \lr{ I \BM_{\textrm{s}} }
&=
\gpgradeone{ i \Delta \BE + I^2 \BM_{\textrm{s}} } \\
&=
\gpgradeone{ I \ncap \Delta \BE – \BM_{\textrm{s}} } \\
&=
\gpgradeone{ I \ncap \ncap (\ncap \wedge \Delta \BE) – \BM_{\textrm{s}} } \\
&=
\gpgradeone{ I (\ncap \wedge (\Delta \BE)) – \BM_{\textrm{s}} } \\
&=
\gpgradeone{ -\ncap \cross (\Delta \BE) – \BM_{\textrm{s}} }.
\end{aligned}
\end{equation}

The dot product (a scalar) in the two surface charge integrals can also be reduced

\begin{equation}\label{eqn:boundaryConditionsInMedia:560}
\begin{aligned}
i \cdot \Delta (I\BD)
&=
\gpgradezero{ i \Delta (I\BD) } \\
&=
\gpgradezero{ I \ncap \Delta (I\BD) } \\
&=
\gpgradezero{ -\ncap \Delta \BD } \\
&=
-\ncap \cdot \Delta \BD,
\end{aligned}
\end{equation}

so the integral equations are satisfied provided

\begin{equation}\label{eqn:boundaryConditionsInMedia:580}
\boxed{
\begin{aligned}
\ncap \cross (\BE_2 – \BE_1) &= – \BM_{\textrm{s}} \\
\ncap \cross (\BH_2 – \BH_1) &= \BJ_{\textrm{s}} \\
\ncap \cdot (\BD_2 – \BD_1) &= \rho_{\textrm{s}} \\
\ncap \cdot (\BB_2 – \BB_1) &= \rho_{\textrm{m}\textrm{s}}.
\end{aligned}
}
\end{equation}
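
As a sanity check, for a source free interface (\( \BM_{\textrm{s}} = \BJ_{\textrm{s}} = 0 \), \( \rho_{\textrm{s}} = \rho_{\textrm{m}\textrm{s}} = 0 \)) these reduce to the familiar continuity conditions

\begin{equation}\label{eqn:boundaryConditionsInMedia:620}
\begin{aligned}
\ncap \cross (\BE_2 - \BE_1) &= 0 \\
\ncap \cross (\BH_2 - \BH_1) &= 0 \\
\ncap \cdot (\BD_2 - \BD_1) &= 0 \\
\ncap \cdot (\BB_2 - \BB_1) &= 0,
\end{aligned}
\end{equation}

that is, the tangential components of \( \BE, \BH \) and the normal components of \( \BD, \BB \) are continuous across the interface.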

It is tempting to try to assemble these into a result expressed in terms of a four-vector surface current and composite STA bivector fields like the \( F = \BE + I c \BB \) that we can use for the free space Maxwell’s equation. Dimensionally, we need something with velocity in that mix, but what velocity should be used when the speed of field propagation in each medium is potentially different?

References

[1] Constantine A Balanis. Advanced engineering electromagnetics. Wiley New York, 1989.

Updated notes for ece1229 antenna theory

March 16, 2015 ece1229

I’ve now posted a first update of my notes for the antenna theory course that I am taking this term at UofT.

Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides which go by faster than I can easily take notes on (and some of which match the textbook closely). In class I have annotated my copy of the textbook with little details instead. This set of notes contains musings on details that were unclear, or in some cases, details that were provided in class but are not in the text (and too long to pencil into my book), as well as some notes on the Geometric Algebra formalism for Maxwell’s equations with magnetic sources (something I’ve encountered for the first time in any real detail in this class).

The notes compilation linked above includes all of the following separate notes, some of which have been posted separately on this blog:

Notes for ece1229 antenna theory

February 4, 2015 ece1229

I’ve now posted a first set of notes for the antenna theory course that I am taking this term at UofT.

Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides that match the textbook so closely that there is little value in taking notes that just replicate the text. Instead, I am annotating my copy of the textbook with little details. My usual notes collection for the class will contain musings on details that were unclear, or in some cases, details that were provided in class but are not in the text (and too long to pencil into my book.)

The notes linked above include:

  • Reading notes for chapter 2 (Fundamental Parameters of Antennas) and chapter 3 (Radiation Integrals and Auxiliary Potential Functions) of the class text.
  • Geometric Algebra musings.  How to formulate Maxwell’s equations when magnetic sources are also included (those modeling magnetic dipoles).
  • Some problems for chapter 2 content.

Dual-Maxwell’s (phasor) equations in Geometric Algebra

February 3, 2015 ece1229

[Click here for a PDF of this post with nicer formatting]

These notes repeat (mostly word for word) the previous notes Maxwell’s (phasor) equations in Geometric Algebra. Electric charges and currents have been replaced with magnetic charges and currents, and the appropriate relations modified accordingly.

In [1] section 3.3, treating magnetic charges and currents, and no electric charges and currents, is a demonstration of the required (curl) form for the electric field, and the potential form for the magnetic field. Not knowing what else to call them, I’ll refer to the associated equations as the dual-Maxwell’s equations.

I was wondering how this derivation would proceed using the Geometric Algebra (GA) formalism.

Dual-Maxwell’s equation in GA phasor form.

The dual-Maxwell’s equations, omitting electric charges and currents, are

\begin{equation}\label{eqn:phasorDualMaxwellsGA:20}
\spacegrad \cross \boldsymbol{\mathcal{E}} = -\PD{t}{\boldsymbol{\mathcal{B}}} -\BM
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:40}
\spacegrad \cross \boldsymbol{\mathcal{H}} = \PD{t}{\boldsymbol{\mathcal{D}}}
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:60}
\spacegrad \cdot \boldsymbol{\mathcal{D}} = 0
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:80}
\spacegrad \cdot \boldsymbol{\mathcal{B}} = \rho_m.
\end{equation}

Assuming linear media \( \boldsymbol{\mathcal{B}} = \mu_0
\boldsymbol{\mathcal{H}} \), \( \boldsymbol{\mathcal{D}} = \epsilon_0
\boldsymbol{\mathcal{E}} \), and phasor relationships of the form \(
\boldsymbol{\mathcal{E}} = \textrm{Re} \lr{ \BE(\Br) e^{j \omega t}} \) for the fields and the currents, these reduce to

\begin{equation}\label{eqn:phasorDualMaxwellsGA:100}
\spacegrad \cross \BE = – j \omega \BB – \BM
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:120}
\spacegrad \cross \BB = j \omega \epsilon_0 \mu_0 \BE
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:140}
\spacegrad \cdot \BE = 0
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:160}
\spacegrad \cdot \BB = \rho_m.
\end{equation}

These four equations can be assembled into a single equation form using the GA identities

\begin{equation}\label{eqn:phasorDualMaxwellsGA:200}
\Bf \Bg
= \Bf \cdot \Bg + \Bf \wedge \Bg
= \Bf \cdot \Bg + I \Bf \cross \Bg.
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:220}
I = \xcap \ycap \zcap.
\end{equation}

The electric and magnetic field equations, respectively, are

\begin{equation}\label{eqn:phasorDualMaxwellsGA:260}
\spacegrad \BE = – \lr{ \BM + j k c \BB} I
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:280}
\spacegrad c \BB = c \rho_m + j k \BE I
\end{equation}

where \( \omega = k c \), and \( 1 = c^2 \epsilon_0 \mu_0 \) have also been used to eliminate some of the mess of constants.

Summing these (first scaling \ref{eqn:phasorDualMaxwellsGA:280} by \( I \)), gives the dual-Maxwell’s equation in its GA phasor form

\begin{equation}\label{eqn:phasorDualMaxwellsGA:300}
\boxed{
\lr{ \spacegrad + j k } \lr{ \BE + I c \BB } = \lr{c \rho_m - \BM} I.
}
\end{equation}
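
As a partial check, the grade three (pseudoscalar) components of this equation can be selected. The only grade three term on the left is \( \spacegrad \wedge \lr{ I c \BB } = I c \lr{ \spacegrad \cdot \BB } \), and the only grade three term on the right is \( c \rho_m I \), so

\begin{equation}\label{eqn:phasorDualMaxwellsGA:900}
I c \lr{ \spacegrad \cdot \BB } = c \rho_m I,
\end{equation}

which recovers \( \spacegrad \cdot \BB = \rho_m \) as expected.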

Preliminaries. Dual magnetic form of Maxwell’s equations.

The arguments of the text showing that a potential representation for the electric and magnetic fields is possible easily translates into GA. To perform this translation, some duality lemmas are required

First consider the cross product of two vectors \( \Bx, \By \) and the right handed dual \( -\By I \) of \( \By \), a bivector, of one of these vectors. Noting that the Euclidean pseudoscalar \( I \) commutes with all grade multivectors in a Euclidean geometric algebra space, the cross product can be written

\begin{equation}\label{eqn:phasorDualMaxwellsGA:320}
\begin{aligned}
\lr{ \Bx \cross \By }
&=
-I \lr{ \Bx \wedge \By } \\
&=
-I \inv{2} \lr{ \Bx \By – \By \Bx } \\
&=
\inv{2} \lr{ \Bx (-\By I) – (-\By I) \Bx } \\
&=
\Bx \cdot \lr{ -\By I }.
\end{aligned}
\end{equation}

The last step makes use of the fact that the wedge product of a vector and vector is antisymmetric, whereas the dot product (vector grade selection) of a vector and bivector is antisymmetric. Details on grade selection operators and how to characterize symmetric and antisymmetric products of vectors with blades as either dot or wedge products can be found in [3], [2].
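
As a quick sanity check of this duality relation, let \( \Bx = \Be_1 \) and \( \By = \Be_2 \), for which \( \Bx \cross \By = \Be_3 \). The dual of \( \Be_2 \) is \( -\Be_2 I = -\Be_2 \Be_1 \Be_2 \Be_3 = \Be_1 \Be_3 \), and

\begin{equation}\label{eqn:phasorDualMaxwellsGA:910}
\Be_1 \cdot \lr{ -\Be_2 I }
= \Be_1 \cdot \lr{ \Be_1 \wedge \Be_3 }
= \lr{ \Be_1 \cdot \Be_1 } \Be_3 - \lr{ \Be_1 \cdot \Be_3 } \Be_1
= \Be_3,
\end{equation}

as expected.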

Similarly, the dual of the dot product can be written as

\begin{equation}\label{eqn:phasorDualMaxwellsGA:440}
\begin{aligned}
-I \lr{ \Bx \cdot \By }
&=
-I \inv{2} \lr{ \Bx \By + \By \Bx } \\
&=
\inv{2} \lr{ \Bx (-\By I) + (-\By I) \Bx } \\
&=
\Bx \wedge \lr{ -\By I }.
\end{aligned}
\end{equation}

These duality transformations are motivated by the observation that in the GA form of the dual-Maxwell’s equation the electric field shows up in its dual form, a bivector. Spelled out in terms of the dual electric field, those equations are

\begin{equation}\label{eqn:phasorDualMaxwellsGA:360}
\spacegrad \cdot (-\BE I)= – j \omega \BB – \BM
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:380}
\spacegrad \wedge \BH = j \omega \epsilon_0 \BE I
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:400}
\spacegrad \wedge (-\BE I) = 0
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:420}
\spacegrad \cdot \BB = \rho_m.
\end{equation}

Constructing a potential representation.

The starting point of the argument in the text was the observation that the triple product \( \spacegrad \cdot \lr{ \spacegrad \cross \Bx } = 0 \) for any (sufficiently continuous) vector \( \Bx \). This triple product is a completely antisymmetric sum, and the equivalent statement in GA is \( \spacegrad \wedge \spacegrad \wedge \Bx = 0 \) for any vector \( \Bx \). This follows from \( \Ba \wedge \Ba = 0 \), true for any vector \( \Ba \), including the gradient operator \( \spacegrad \), provided those gradients are acting on a sufficiently continuous blade.
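
A quick coordinate verification of this fact is also possible. In Cartesian coordinates, with \( \spacegrad = \Be_i \partial_i \) and \( \Bx = x_k \Be_k \) (summation implied),

\begin{equation}\label{eqn:phasorDualMaxwellsGA:920}
\spacegrad \wedge \spacegrad \wedge \Bx
= \lr{ \Be_i \wedge \Be_j \wedge \Be_k } \partial_i \partial_j x_k
= 0,
\end{equation}

since the mixed partials \( \partial_i \partial_j \) are symmetric in \( i, j \), while \( \Be_i \wedge \Be_j \) is antisymmetric.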

In the absence of electric charges,
\ref{eqn:phasorDualMaxwellsGA:400} shows that the wedge product of the gradient with the dual electric field \( -\BE I \) is zero. It is therefore possible to find a potential \( \BF \) such that

\begin{equation}\label{eqn:phasorDualMaxwellsGA:460}
-\epsilon_0 \BE I = \spacegrad \wedge \BF.
\end{equation}
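
This is just the GA encoding of the conventional electric vector potential. Since \( \spacegrad \wedge \BF = I \lr{ \spacegrad \cross \BF } \), multiplying both sides by \( I^{-1} = -I \) recovers the familiar curl representation

\begin{equation}\label{eqn:phasorDualMaxwellsGA:930}
\BE = -\inv{\epsilon_0} \spacegrad \cross \BF.
\end{equation}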

Substituting \ref{eqn:phasorDualMaxwellsGA:460} into \ref{eqn:phasorDualMaxwellsGA:380} gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:480}
\spacegrad \wedge \lr{ \BH + j \omega \BF } = 0.
\end{equation}

This relation is a bivector identity with zero, so will be satisfied if

\begin{equation}\label{eqn:phasorDualMaxwellsGA:500}
\BH + j \omega \BF = -\spacegrad \phi_m,
\end{equation}

for some scalar \( \phi_m \). Unlike the \( -\epsilon_0 \BE I = \spacegrad \wedge \BF \) solution to \ref{eqn:phasorDualMaxwellsGA:400}, the grade of \( \phi_m \) is fixed by the requirement that \( \BH + j \omega \BF \) is grade one (a vector), so a solution of the form \( \BH + j \omega \BF = \spacegrad \wedge \psi \), for a higher grade blade \( \psi \), would not work, despite satisfying the condition \( \spacegrad \wedge \spacegrad \wedge \psi = 0 \).

Substitution of \ref{eqn:phasorDualMaxwellsGA:500} and \ref{eqn:phasorDualMaxwellsGA:460} into \ref{eqn:phasorDualMaxwellsGA:360}, using \( \BB = \mu_0 \BH \), gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:520}
\begin{aligned}
\spacegrad \cdot \lr{ \spacegrad \wedge \BF } &= -\epsilon_0 \BM – j \omega \epsilon_0 \mu_0 \lr{ -\spacegrad \phi_m -j \omega \BF } \\
\spacegrad^2 \BF - \spacegrad \lr{\spacegrad \cdot \BF} &= -\epsilon_0 \BM + j \frac{k}{c} \spacegrad \phi_m - k^2 \BF.
\end{aligned}
\end{equation}

Rearranging gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:540}
\spacegrad^2 \BF + k^2 \BF = -\epsilon_0 \BM + \spacegrad \lr{ \spacegrad \cdot \BF + j \frac{k}{c} \phi_m }.
\end{equation}

The fields \( \BF \) and \( \phi_m \) are assumed to be phasors, say \( \boldsymbol{\mathcal{F}} = \textrm{Re} \lr{ \BF e^{j k c t} } \) and \( \varphi_m = \textrm{Re} \lr{ \phi_m e^{j k c t} } \). Grouping the scalar and vector potentials into the standard four vector form
\( F^\mu = \lr{\phi_m/c, \BF} \), and expanding the Lorentz gauge condition

\begin{equation}\label{eqn:phasorDualMaxwellsGA:580}
\begin{aligned}
0
&= \partial_\mu \lr{ F^\mu e^{j k c t}} \\
&= \partial_a \lr{ F^a e^{j k c t}} + \inv{c}\PD{t}{} \lr{ \frac{\phi_m}{c}
e^{j k c t}} \\
&= \spacegrad \cdot \BF e^{j k c t} + \inv{c} j k \phi_m e^{j k c t} \\
&= \lr{ \spacegrad \cdot \BF + j k \phi_m/c } e^{j k c t},
\end{aligned}
\end{equation}

shows that in
\ref{eqn:phasorDualMaxwellsGA:540}
the quantity in braces is in fact the Lorentz gauge condition, so in the Lorentz gauge, the vector potential satisfies a non-homogeneous Helmholtz equation.

\begin{equation}\label{eqn:phasorDualMaxwellsGA:550}
\boxed{
\spacegrad^2 \BF + k^2 \BF = -\epsilon_0 \BM.
}
\end{equation}
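
For reference (not derived here), the outgoing wave solution of this Helmholtz equation in unbounded free space, consistent with the \( e^{j \omega t} \) time convention used above, is the usual Green’s function superposition

\begin{equation}\label{eqn:phasorDualMaxwellsGA:940}
\BF(\Br) = \frac{\epsilon_0}{4 \pi} \int \BM(\Br') \frac{e^{-j k R}}{R} dV',
\end{equation}

where \( R \) is the distance between the source point \( \Br' \) and the field point \( \Br \).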

Maxwell’s equation in Four vector form

The four vector form of Maxwell’s equation follows from \ref{eqn:phasorDualMaxwellsGA:300} after pre-multiplying by \( \gamma^0 \).

With

\begin{equation}\label{eqn:phasorDualMaxwellsGA:620}
F = F^\mu \gamma_\mu = \lr{ \phi_m/c, \BF }
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:640}
G = \grad \wedge F = – \epsilon_0 \lr{ \BE + c \BB I } I
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:660}
\grad = \gamma^\mu \partial_\mu = \gamma^0 \lr{ \spacegrad + j k }
\end{equation}
\begin{equation}\label{eqn:phasorDualMaxwellsGA:680}
M = M^\mu \gamma_\mu = \lr{ c \rho_m, \BM },
\end{equation}

Maxwell’s equation is

\begin{equation}\label{eqn:phasorDualMaxwellsGA:720}
\boxed{
\grad G = \epsilon_0 M.
}
\end{equation}

Here \( \setlr{ \gamma_\mu } \) is used as the basis of the four vector Minkowski space, with \( \gamma_0^2 = -\gamma_k^2 = 1 \) (i.e. \(\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu \)), and \( \gamma_a \gamma_0 = \sigma_a \) where \( \setlr{ \sigma_a} \) is the Pauli basis (i.e. the standard basis vectors for \R{3}).
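
As a consistency check of this notation, note that the relative vectors \( \sigma_a \) square to unity as required for standard basis vectors of \R{3}

\begin{equation}\label{eqn:phasorDualMaxwellsGA:950}
\sigma_a^2 = \gamma_a \gamma_0 \gamma_a \gamma_0 = -\gamma_a^2 \gamma_0^2 = -\lr{-1}\lr{1} = 1.
\end{equation}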

Let’s demonstrate this, one piece at a time. Observe that the action of the spacetime gradient on a phasor, assuming that all time dependence is in the exponential, is

\begin{equation}\label{eqn:phasorDualMaxwellsGA:740}
\begin{aligned}
\gamma^\mu \partial_\mu \lr{ \psi e^{j k c t} }
&=
\lr{ \gamma^a \partial_a + \gamma_0 \partial_{c t} } \lr{ \psi e^{j k c t} }
\\
&=
\gamma_0 \lr{ \gamma_0 \gamma^a \partial_a + j k } \lr{ \psi e^{j k c t} } \\
&=
\gamma_0 \lr{ \sigma_a \partial_a + j k } \psi e^{j k c t} \\
&=
\gamma_0 \lr{ \spacegrad + j k } \psi e^{j k c t}
\end{aligned}
\end{equation}

This allows the operator identification of \ref{eqn:phasorDualMaxwellsGA:660}. The four current portion of the equation comes from

\begin{equation}\label{eqn:phasorDualMaxwellsGA:760}
\begin{aligned}
c \rho_m – \BM
&=
\gamma_0 \lr{ \gamma_0 c \rho_m – \gamma_0 \gamma_a \gamma_0 M^a } \\
&=
\gamma_0 \lr{ \gamma_0 c \rho_m + \gamma_a M^a } \\
&=
\gamma_0 \lr{ \gamma_\mu M^\mu } \\
&= \gamma_0 M.
\end{aligned}
\end{equation}

Taking the curl of the four potential gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:780}
\begin{aligned}
\grad \wedge F
&=
\lr{ \gamma^a \partial_a + \gamma_0 j k } \wedge \lr{ \gamma_0 \phi_m/c +
\gamma_b F^b } \\
&=
– \sigma_a \partial_a \phi_m/c + \gamma^a \wedge \gamma_b \partial_a F^b – j k
\sigma_b F^b \\
&=
– \sigma_a \partial_a \phi_m/c + \sigma_a \wedge \sigma_b \partial_a F^b – j k
\sigma_b F^b \\
&= \inv{c} \lr{ – \spacegrad \phi_m – j \omega \BF + c \spacegrad \wedge \BF }
\\
&= \epsilon_0 \lr{ c \BB – \BE I } \\
&= – \epsilon_0 \lr{ \BE + c \BB I } I.
\end{aligned}
\end{equation}

Substituting all of these into Maxwell’s \ref{eqn:phasorDualMaxwellsGA:300} gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:800}
\frac{\gamma_0}{\epsilon_0}\grad G = \gamma_0 M,
\end{equation}

which recovers \ref{eqn:phasorDualMaxwellsGA:720} as desired.

Helmholtz equation directly from the GA form.

It is easier to find \ref{eqn:phasorDualMaxwellsGA:550} from the GA form of the dual-Maxwell’s equation \ref{eqn:phasorDualMaxwellsGA:720} than from the traditional curl and divergence equations. Note that

\begin{equation}\label{eqn:phasorDualMaxwellsGA:820}
\begin{aligned}
\grad G
&=
\grad \lr{ \grad \wedge F } \\
&=
\grad \cdot \lr{ \grad \wedge F }
+
\grad \wedge \lr{ \grad \wedge F } \\
&=
\grad^2 F - \grad \lr{ \grad \cdot F },
\end{aligned}
\end{equation}

however, the Lorentz gauge condition \( \partial_\mu F^\mu = \grad \cdot F = 0 \) kills the latter term above. This leaves

\begin{equation}\label{eqn:phasorDualMaxwellsGA:840}
\begin{aligned}
\grad G
&=
\grad^2 F \\
&=
\gamma_0 \lr{ \spacegrad + j k }
\gamma_0 \lr{ \spacegrad + j k } F \\
&=
\gamma_0^2 \lr{ -\spacegrad + j k }
\lr{ \spacegrad + j k } F \\
&=
-\lr{ \spacegrad^2 + k^2 } F = \epsilon_0 M.
\end{aligned}
\end{equation}

The timelike component of this gives

\begin{equation}\label{eqn:phasorDualMaxwellsGA:860}
\lr{ \spacegrad^2 + k^2 } \phi_m = -\epsilon_0 c^2 \rho_m = -\rho_m/\mu_0,
\end{equation}

and the spacelike components give

\begin{equation}\label{eqn:phasorDualMaxwellsGA:880}
\lr{ \spacegrad^2 + k^2 } \BF = -\epsilon_0 \BM,
\end{equation}

recovering \ref{eqn:phasorDualMaxwellsGA:550} as desired.

References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley \& Sons, 3rd edition, 2005.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

Maxwell’s (phasor) equations in Geometric Algebra

February 1, 2015 ece1229

[Click here for a PDF of this post with nicer formatting]

In [1] section 3.2 is a demonstration of the required (curl) form for the magnetic field, and potential form for the electric field.

I was wondering how this derivation would proceed using the Geometric Algebra (GA) formalism.

Maxwell’s equation in GA phasor form.

Maxwell’s equations, omitting magnetic charges and currents, are

\begin{equation}\label{eqn:phasorMaxwellsGA:20}
\spacegrad \cross \boldsymbol{\mathcal{E}} = -\PD{t}{\boldsymbol{\mathcal{B}}}
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:40}
\spacegrad \cross \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{J}} + \PD{t}{\boldsymbol{\mathcal{D}}}
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:60}
\spacegrad \cdot \boldsymbol{\mathcal{D}} = \rho
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:80}
\spacegrad \cdot \boldsymbol{\mathcal{B}} = 0.
\end{equation}

Assuming linear media \( \boldsymbol{\mathcal{B}} = \mu_0 \boldsymbol{\mathcal{H}} \), \( \boldsymbol{\mathcal{D}} = \epsilon_0 \boldsymbol{\mathcal{E}} \), and phasor relationships of the form \( \boldsymbol{\mathcal{E}} = \textrm{Re} \lr{ \BE(\Br) e^{j \omega t}} \) for the fields and the currents, these reduce to

\begin{equation}\label{eqn:phasorMaxwellsGA:100}
\spacegrad \cross \BE = – j \omega \BB
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:120}
\spacegrad \cross \BB = \mu_0 \BJ + j \omega \epsilon_0 \mu_0 \BE
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:140}
\spacegrad \cdot \BE = \rho/\epsilon_0
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:160}
\spacegrad \cdot \BB = 0.
\end{equation}

These four equations can be assembled into a single equation form using the GA identities

\begin{equation}\label{eqn:phasorMaxwellsGA:200}
\Bf \Bg
= \Bf \cdot \Bg + \Bf \wedge \Bg
= \Bf \cdot \Bg + I \Bf \cross \Bg.
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:220}
I = \xcap \ycap \zcap.
\end{equation}

The electric and magnetic field equations, respectively, are

\begin{equation}\label{eqn:phasorMaxwellsGA:260}
\spacegrad \BE = \rho/\epsilon_0 -j k c \BB I
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:280}
\spacegrad c \BB = \frac{I}{\epsilon_0 c} \BJ + j k \BE I
\end{equation}

where \( \omega = k c \), and \( 1 = c^2 \epsilon_0 \mu_0 \) have also been used to eliminate some of the mess of constants.

Summing these (first scaling \ref{eqn:phasorMaxwellsGA:280} by \( I \)), gives Maxwell’s equation in its GA phasor form

\begin{equation}\label{eqn:phasorMaxwellsGA:300}
\boxed{
\lr{ \spacegrad + j k } \lr{ \BE + I c \BB } = \inv{\epsilon_0 c}\lr{c \rho – \BJ}.
}
\end{equation}
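
As a partial check, the scalar grade of this equation can be selected. On the left the only scalar contribution is \( \spacegrad \cdot \BE \) (the remaining terms are of grades one, two and three), and on the right it is \( \rho/\epsilon_0 \), so

\begin{equation}\label{eqn:phasorMaxwellsGA:900}
\spacegrad \cdot \BE = \frac{\rho}{\epsilon_0},
\end{equation}

recovering Gauss’s law as expected.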

Preliminaries. Dual magnetic form of Maxwell’s equations.

The arguments of the text showing that a potential representation for the electric and magnetic fields is possible easily translates into GA. To perform this translation, some duality lemmas are required

First consider the cross product of two vectors \( \Bx, \By \) and the right handed dual \( -\By I \) of \( \By \), a bivector, of one of these vectors. Noting that the Euclidean pseudoscalar \( I \) commutes with all grade multivectors in a Euclidean geometric algebra space, the cross product can be written

\begin{equation}\label{eqn:phasorMaxwellsGA:320}
\begin{aligned}
\lr{ \Bx \cross \By }
&=
-I \lr{ \Bx \wedge \By } \\
&=
-I \inv{2} \lr{ \Bx \By – \By \Bx } \\
&=
\inv{2} \lr{ \Bx (-\By I) – (-\By I) \Bx } \\
&=
\Bx \cdot \lr{ -\By I }.
\end{aligned}
\end{equation}

The last step makes use of the fact that the wedge product of a vector and vector is antisymmetric, whereas the dot product (vector grade selection) of a vector and bivector is antisymmetric. Details on grade selection operators and how to characterize symmetric and antisymmetric products of vectors with blades as either dot or wedge products can be found in [3], [2].

Similarly, the dual of the dot product can be written as

\begin{equation}\label{eqn:phasorMaxwellsGA:440}
\begin{aligned}
-I \lr{ \Bx \cdot \By }
&=
-I \inv{2} \lr{ \Bx \By + \By \Bx } \\
&=
\inv{2} \lr{ \Bx (-\By I) + (-\By I) \Bx } \\
&=
\Bx \wedge \lr{ -\By I }.
\end{aligned}
\end{equation}
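
A quick check of this relation with \( \Bx = \By = \Be_1 \): the left hand side is \( -I \lr{ \Be_1 \cdot \Be_1 } = -I \), while the dual of \( \Be_1 \) is \( -\Be_1 I = -\Be_1 \Be_1 \Be_2 \Be_3 = -\Be_2 \Be_3 \), so

\begin{equation}\label{eqn:phasorMaxwellsGA:910}
\Bx \wedge \lr{ -\By I }
= \Be_1 \wedge \lr{ -\Be_2 \Be_3 }
= -\Be_1 \Be_2 \Be_3
= -I,
\end{equation}

as required.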

These duality transformations are motivated by the observation that in the GA form of Maxwell’s equation the magnetic field shows up in its dual form, a bivector. Spelled out in terms of the dual magnetic field, those equations are

\begin{equation}\label{eqn:phasorMaxwellsGA:360}
\spacegrad \wedge \BE = – j \omega \BB I
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:380}
\spacegrad \cdot \lr{ -\BB I } = \mu_0 \BJ + j \omega \epsilon_0 \mu_0 \BE
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:400}
\spacegrad \cdot \BE = \rho/\epsilon_0
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:420}
\spacegrad \wedge (-\BB I) = 0.
\end{equation}

Constructing a potential representation.

The starting point of the argument in the text was the observation that the triple product \( \spacegrad \cdot \lr{ \spacegrad \cross \Bx } = 0 \) for any (sufficiently continuous) vector \( \Bx \). This triple product is a completely antisymmetric sum, and the equivalent statement in GA is \( \spacegrad \wedge \spacegrad \wedge \Bx = 0 \) for any vector \( \Bx \). This follows from \( \Ba \wedge \Ba = 0 \), true for any vector \( \Ba \), including the gradient operator \( \spacegrad \), provided those gradients are acting on a sufficiently continuous blade.

In the absence of magnetic charges, \ref{eqn:phasorMaxwellsGA:420} shows that the wedge product of the gradient with the dual magnetic field \( -\BB I \) is zero. It is therefore possible to find a potential \( \BA \) such that

\begin{equation}\label{eqn:phasorMaxwellsGA:460}
\BB I = \spacegrad \wedge \BA.
\end{equation}
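
This is just the GA encoding of the usual magnetic vector potential. Since \( \spacegrad \wedge \BA = I \lr{ \spacegrad \cross \BA } \), multiplication of both sides by \( I^{-1} = -I \) recovers the familiar

\begin{equation}\label{eqn:phasorMaxwellsGA:920}
\BB = \spacegrad \cross \BA.
\end{equation}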

Substituting \ref{eqn:phasorMaxwellsGA:460} into the Maxwell-Faraday equation \ref{eqn:phasorMaxwellsGA:360} gives

\begin{equation}\label{eqn:phasorMaxwellsGA:480}
\spacegrad \wedge \lr{ \BE + j \omega \BA } = 0.
\end{equation}

This relation is a bivector identity with zero, so will be satisfied if

\begin{equation}\label{eqn:phasorMaxwellsGA:500}
\BE + j \omega \BA = -\spacegrad \phi,
\end{equation}

for some scalar \( \phi \). Unlike the \( \BB I = \spacegrad \wedge \BA \) solution to \ref{eqn:phasorMaxwellsGA:420}, the grade of \( \phi \) is fixed by the requirement that \( \BE + j \omega \BA \) is grade one (a vector), so a solution of the form \( \BE + j \omega \BA = \spacegrad \wedge \psi \), for a higher grade blade \( \psi \), would not work, despite satisfying the condition \( \spacegrad \wedge \spacegrad \wedge \psi = 0 \).

Substitution of \ref{eqn:phasorMaxwellsGA:500} and \ref{eqn:phasorMaxwellsGA:460} into Ampere’s law \ref{eqn:phasorMaxwellsGA:380} gives

\begin{equation}\label{eqn:phasorMaxwellsGA:520}
\begin{aligned}
-\spacegrad \cdot \lr{ \spacegrad \wedge \BA } &= \mu_0 \BJ + j \omega \epsilon_0 \mu_0 \lr{ -\spacegrad \phi -j \omega \BA } \\
-\spacegrad^2 \BA + \spacegrad \lr{\spacegrad \cdot \BA} &= \mu_0 \BJ - j \frac{k}{c} \spacegrad \phi + k^2 \BA.
\end{aligned}
\end{equation}

Rearranging gives

\begin{equation}\label{eqn:phasorMaxwellsGA:540}
\spacegrad^2 \BA + k^2 \BA = -\mu_0 \BJ + \spacegrad \lr{ \spacegrad \cdot \BA + j \frac{k}{c} \phi }.
\end{equation}

The fields \( \BA \) and \( \phi \) are assumed to be phasors, say \( \boldsymbol{\mathcal{A}} = \textrm{Re} \BA e^{j k c t} \) and \( \varphi = \textrm{Re} \phi e^{j k c t} \). Grouping the scalar and vector potentials into the standard four vector form \( A^\mu = \lr{\phi/c, \BA} \), and expanding the Lorentz gauge condition

\begin{equation}\label{eqn:phasorMaxwellsGA:580}
\begin{aligned}
0
&= \partial_\mu \lr{ A^\mu e^{j k c t}} \\
&= \partial_a \lr{ A^a e^{j k c t}} + \inv{c}\PD{t}{} \lr{ \frac{\phi}{c} e^{j k c t}} \\
&= \spacegrad \cdot \BA e^{j k c t} + \inv{c} j k \phi e^{j k c t} \\
&= \lr{ \spacegrad \cdot \BA + j k \phi/c } e^{j k c t},
\end{aligned}
\end{equation}

shows that in \ref{eqn:phasorMaxwellsGA:540} the quantity in braces is in fact the Lorentz gauge condition, so in the Lorentz gauge, the vector potential satisfies a non-homogeneous Helmholtz equation.

\begin{equation}\label{eqn:phasorMaxwellsGA:550}
\boxed{
\spacegrad^2 \BA + k^2 \BA = -\mu_0 \BJ.
}
\end{equation}
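
For reference (not derived here), the outgoing wave solution of this Helmholtz equation in unbounded free space, consistent with the \( e^{j \omega t} \) time convention used above, is

\begin{equation}\label{eqn:phasorMaxwellsGA:930}
\BA(\Br) = \frac{\mu_0}{4 \pi} \int \BJ(\Br') \frac{e^{-j k R}}{R} dV',
\end{equation}

where \( R \) is the distance between the source point \( \Br' \) and the field point \( \Br \).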

Maxwell’s equation in Four vector form

The four vector form of Maxwell’s equation follows from \ref{eqn:phasorMaxwellsGA:300} after pre-multiplying by \( \gamma^0 \).

With

\begin{equation}\label{eqn:phasorMaxwellsGA:620}
A = A^\mu \gamma_\mu = \lr{ \phi/c, \BA }
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:640}
F = \grad \wedge A = \inv{c} \lr{ \BE + c \BB I }
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:660}
\grad = \gamma^\mu \partial_\mu = \gamma^0 \lr{ \spacegrad + j k }
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsGA:680}
J = J^\mu \gamma_\mu = \lr{ c \rho, \BJ },
\end{equation}

Maxwell’s equation is

\begin{equation}\label{eqn:phasorMaxwellsGA:700}
\boxed{
\grad F = \mu_0 J.
}
\end{equation}

Here \( \setlr{ \gamma_\mu } \) is used as the basis of the four vector Minkowski space, with \( \gamma_0^2 = -\gamma_k^2 = 1 \) (i.e. \(\gamma^\mu \cdot \gamma_\nu = {\delta^\mu}_\nu \)), and \( \gamma_a \gamma_0 = \sigma_a \) where \( \setlr{ \sigma_a} \) is the Pauli basis (i.e. the standard basis vectors for \R{3}).
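
It is also worth noting that with this identification the \R{3} pseudoscalar used above is the same object as the spacetime pseudoscalar, since

\begin{equation}\label{eqn:phasorMaxwellsGA:940}
I = \sigma_1 \sigma_2 \sigma_3
= \lr{ \gamma_1 \gamma_0 } \lr{ \gamma_2 \gamma_0 } \lr{ \gamma_3 \gamma_0 }
= \gamma_0 \gamma_1 \gamma_2 \gamma_3.
\end{equation}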

Let’s demonstrate this, one piece at a time. Observe that the action of the spacetime gradient on a phasor, assuming that all time dependence is in the exponential, is

\begin{equation}\label{eqn:phasorMaxwellsGA:740}
\begin{aligned}
\gamma^\mu \partial_\mu \lr{ \psi e^{j k c t} }
&=
\lr{ \gamma^a \partial_a + \gamma_0 \partial_{c t} } \lr{ \psi e^{j k c t} }
\\
&=
\gamma_0 \lr{ \gamma_0 \gamma^a \partial_a + j k } \lr{ \psi e^{j k c t} } \\
&=
\gamma_0 \lr{ \sigma_a \partial_a + j k } \psi e^{j k c t} \\
&=
\gamma_0 \lr{ \spacegrad + j k } \psi e^{j k c t}
\end{aligned}
\end{equation}

This allows the operator identification of \ref{eqn:phasorMaxwellsGA:660}. The four current portion of the equation comes from

\begin{equation}\label{eqn:phasorMaxwellsGA:760}
\begin{aligned}
c \rho – \BJ
&=
\gamma_0 \lr{ \gamma_0 c \rho – \gamma_0 \gamma_a \gamma_0 J^a } \\
&=
\gamma_0 \lr{ \gamma_0 c \rho + \gamma_a J^a } \\
&=
\gamma_0 \lr{ \gamma_\mu J^\mu } \\
&= \gamma_0 J.
\end{aligned}
\end{equation}

Taking the curl of the four potential gives

\begin{equation}\label{eqn:phasorMaxwellsGA:780}
\begin{aligned}
\grad \wedge A
&=
\lr{ \gamma^a \partial_a + \gamma_0 j k } \wedge \lr{ \gamma_0 \phi/c + \gamma_b A^b } \\
&=
– \sigma_a \partial_a \phi/c + \gamma^a \wedge \gamma_b \partial_a A^b – j k
\sigma_b A^b \\
&=
– \sigma_a \partial_a \phi/c + \sigma_a \wedge \sigma_b \partial_a A^b – j k
\sigma_b A^b \\
&= \inv{c} \lr{ – \spacegrad \phi – j \omega \BA + c \spacegrad \wedge \BA }
\\
&= \inv{c} \lr{ \BE + c \BB I }.
\end{aligned}
\end{equation}

Substituting all of these into Maxwell’s \ref{eqn:phasorMaxwellsGA:300} gives

\begin{equation}\label{eqn:phasorMaxwellsGA:800}
\gamma_0 \grad c F = \inv{ \epsilon_0 c } \gamma_0 J,
\end{equation}

which recovers \ref{eqn:phasorMaxwellsGA:700} as desired.

Helmholtz equation directly from the GA form.

It is easier to find \ref{eqn:phasorMaxwellsGA:550} from the GA form of Maxwell’s \ref{eqn:phasorMaxwellsGA:700} than the traditional curl and divergence equations. Note that

\begin{equation}\label{eqn:phasorMaxwellsGA:820}
\grad F
=
\grad \lr{ \grad \wedge A }
=
\grad \cdot \lr{ \grad \wedge A }
+
\grad \wedge \lr{ \grad \wedge A }
=
\grad^2 A – \grad \lr{ \grad \cdot A },
\end{equation}

however, the Lorentz gauge condition \( \partial_\mu A^\mu = \grad \cdot A = 0 \) kills the latter term above. This leaves

\begin{equation}\label{eqn:phasorMaxwellsGA:840}
\begin{aligned}
\grad F
&=
\grad^2 A \\
&=
\gamma_0 \lr{ \spacegrad + j k }
\gamma_0 \lr{ \spacegrad + j k } A \\
&=
\gamma_0^2 \lr{ -\spacegrad + j k }
\lr{ \spacegrad + j k } A \\
&=
-\lr{ \spacegrad^2 + k^2 } A = \mu_0 J.
\end{aligned}
\end{equation}

The timelike component of this gives

\begin{equation}\label{eqn:phasorMaxwellsGA:860}
\lr{ \spacegrad^2 + k^2 } \phi = -\rho/\epsilon_0,
\end{equation}

and the spacelike components give

\begin{equation}\label{eqn:phasorMaxwellsGA:880}
\lr{ \spacegrad^2 + k^2 } \BA = -\mu_0 \BJ,
\end{equation}

recovering \ref{eqn:phasorMaxwellsGA:550} as desired.

References

[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley & Sons, 3rd edition, 2005.

[2] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.