
Geometric Algebra in a nutshell.

September 29, 2016 math and physics play


Motivation

I initially thought that I might submit a problem set solution for ece1228 using Geometric Algebra. In order to justify this, I needed to add an appendix to that problem set that outlined enough of the ideas that such a solution might make sense to the grader.

I ended up changing my mind and reworked the problem entirely, removing any use of GA. Here’s the tutorial I initially considered submitting with that problem.

Geometric Algebra in a nutshell.

Geometric Algebra defines a non-commutative, associative vector product

\begin{equation}\label{eqn:gaTutorial:20}
\begin{aligned}
\Ba \Bb \Bc
&=
(\Ba \Bb) \Bc \\
&=
\Ba (\Bb \Bc),
\end{aligned}
\end{equation}

where the square of a vector equals the squared vector magnitude

\begin{equation}\label{eqn:gaTutorial:40}
\Ba^2 = \Abs{\Ba}^2.
\end{equation}

In Euclidean spaces such a squared vector is always positive, but that is not necessarily the case in the mixed signature spaces used in special relativity.
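
For example, in the spacetime algebra used for special relativity, with basis \( \{ \gamma_0, \gamma_1, \gamma_2, \gamma_3 \} \) and a \( (+,-,-,-) \) metric signature, the basis vectors square to

\begin{equation}
\gamma_0^2 = 1, \qquad \gamma_1^2 = \gamma_2^2 = \gamma_3^2 = -1.
\end{equation}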

There are a number of consequences of these two simple vector multiplication rules.

  • Squared unit vectors have unit magnitude (up to a sign). In a Euclidean space such a square is always positive

    \begin{equation}\label{eqn:gaTutorial:60}
    (\Be_1)^2 = 1.
    \end{equation}

  • Products of perpendicular vectors anticommute.

    \begin{equation}\label{eqn:gaTutorial:80}
    \begin{aligned}
    2
    &=
    (\Be_1 + \Be_2)^2 \\
    &= (\Be_1 + \Be_2)(\Be_1 + \Be_2) \\
    &= \Be_1^2 + \Be_2 \Be_1 + \Be_1 \Be_2 + \Be_2^2 \\
    &= 2 + \Be_2 \Be_1 + \Be_1 \Be_2.
    \end{aligned}
    \end{equation}

    A product of two perpendicular vectors is called a bivector, and can be used to represent an oriented plane. The last line above shows an example of a scalar plus bivector sum, called a multivector. In general, Geometric Algebra allows sums of scalars, vectors, bivectors, and higher grade analogues.

    Comparison of the RHS and LHS of \ref{eqn:gaTutorial:80} shows that we must have

    \begin{equation}\label{eqn:gaTutorial:100}
    \Be_2 \Be_1 = -\Be_1 \Be_2.
    \end{equation}

    It is true in general that the product of two perpendicular vectors anticommutes. When, as above, such a product is a product of
    two orthonormal vectors, it behaves like a non-commutative imaginary quantity, squaring to \( -1 \) in Euclidean spaces

    \begin{equation}\label{eqn:gaTutorial:120}
    \begin{aligned}
    (\Be_1 \Be_2)^2
    &=
    (\Be_1 \Be_2)
    (\Be_1 \Be_2) \\
    &=
    \Be_1 (\Be_2
    \Be_1) \Be_2 \\
    &=
    -\Be_1 (\Be_1
    \Be_2) \Be_2 \\
    &=
    -(\Be_1 \Be_1)
    (\Be_2 \Be_2) \\
    &=-1.
    \end{aligned}
    \end{equation}

    Such “imaginary” quantities (unit bivectors) have important applications in describing rotations in Euclidean spaces, and boosts in Minkowski spaces.

  • The product of three perpendicular vectors, such as

    \begin{equation}\label{eqn:gaTutorial:140}
    I = \Be_1 \Be_2 \Be_3,
    \end{equation}

    is called a trivector. In \R{3}, the product of three orthonormal vectors is called a pseudoscalar for the space, and can represent an oriented volume element. The quantity \( I \) above is the typical orientation picked for the \R{3} unit pseudoscalar. This quantity also has characteristics of an imaginary number

    \begin{equation}\label{eqn:gaTutorial:160}
    \begin{aligned}
    I^2
    &=
    (\Be_1 \Be_2 \Be_3)
    (\Be_1 \Be_2 \Be_3) \\
    &=
    \Be_1 \Be_2 (\Be_3
    \Be_1) \Be_2 \Be_3 \\
    &=
    -\Be_1 \Be_2 \Be_1
    \Be_3 \Be_2 \Be_3 \\
    &=
    -\Be_1 (\Be_2 \Be_1)
    (\Be_3 \Be_2) \Be_3 \\
    &=
    -\Be_1 (\Be_1 \Be_2)
    (\Be_2 \Be_3) \Be_3 \\
    &=
    -\Be_1^2
    \Be_2^2
    \Be_3^2 \\
    &=
    -1.
    \end{aligned}
    \end{equation}

  • The product of two vectors can be expressed as the sum of a symmetric scalar product and an antisymmetric bivector product

    \begin{equation}\label{eqn:gaTutorial:480}
    \begin{aligned}
    \Ba \Bb
    &=
    \sum_{i,j = 1}^n \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n \Be_i^2 a_i b_i
    +
    \sum_{0 < i \ne j \le n} \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n a_i b_i
    +
    \sum_{0 < i < j \le n} \Be_i \Be_j (a_i b_j - a_j b_i).
    \end{aligned}
    \end{equation}

    The first (symmetric) term is clearly the dot product. The antisymmetric term is designated the wedge product. In general these are written

    \begin{equation}\label{eqn:gaTutorial:500}
    \Ba \Bb = \Ba \cdot \Bb + \Ba \wedge \Bb,
    \end{equation}

    where

    \begin{equation}\label{eqn:gaTutorial:520}
    \begin{aligned}
    \Ba \cdot \Bb &\equiv \inv{2} \lr{ \Ba \Bb + \Bb \Ba } \\
    \Ba \wedge \Bb &\equiv \inv{2} \lr{ \Ba \Bb - \Bb \Ba }.
    \end{aligned}
    \end{equation}

    The coordinate expansion of both can be seen above, but in \R{3} the wedge can also be written

    \begin{equation}\label{eqn:gaTutorial:540}
    \Ba \wedge \Bb = \Be_1 \Be_2 \Be_3 (\Ba \cross \Bb) = I (\Ba \cross \Bb).
    \end{equation}

    This allows for a handy dot plus cross product expansion of the vector product

    \begin{equation}\label{eqn:gaTutorial:180}
    \Ba \Bb = \Ba \cdot \Bb + I (\Ba \cross \Bb).
    \end{equation}

    This result should be familiar to the student of quantum spin states, where one writes

    \begin{equation}\label{eqn:gaTutorial:200}
    (\Bsigma \cdot \Ba) (\Bsigma \cdot \Bb) = (\Ba \cdot \Bb) + i (\Ba \cross \Bb) \cdot \Bsigma.
    \end{equation}

    This correspondence exists because the Pauli spin basis is a specific matrix representation of a Geometric Algebra, satisfying the same commutator and anticommutator relationships. A number of other algebraic structures, such as complex numbers and quaternions, can also be modelled as Geometric Algebra elements.

  • It is often useful to utilize the grade selection operator
    \( \gpgrade{M}{n} \) and scalar grade selection operator \( \gpgradezero{M} = \gpgrade{M}{0} \)
    to select the scalar, vector, bivector, trivector, or higher grade algebraic elements. For example, operating on vectors \( \Ba, \Bb, \Bc \), we have

    \begin{equation}\label{eqn:gaTutorial:580}
    \begin{aligned}
    \gpgradezero{ \Ba \Bb }
    &= \Ba \cdot \Bb \\
    \gpgradeone{ \Ba \Bb \Bc }
    &=
    \Ba (\Bb \cdot \Bc)
    +
    \Ba \cdot (\Bb \wedge \Bc) \\
    &=
    \Ba (\Bb \cdot \Bc)
    +
    (\Ba \cdot \Bb) \Bc
    -
    (\Ba \cdot \Bc) \Bb \\
    \gpgradetwo{\Ba \Bb} &=
    \Ba \wedge \Bb \\
    \gpgradethree{\Ba \Bb \Bc} &=
    \Ba \wedge \Bb \wedge \Bc.
    \end{aligned}
    \end{equation}

    Note that the wedge product of any number of vectors, such as \( \Ba \wedge \Bb \wedge \Bc \), is associative and can be expressed as the complete antisymmetrization of the product of those vectors. A consequence is that a wedge product containing any collinear vectors is zero, as the worked example below illustrates.
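
As a concrete illustration of these rules, consider \( \Ba = \Be_1 + 2 \Be_2 \) and \( \Bb = 3 \Be_1 + 4 \Be_2 \). Multiplying out the geometric product gives

\begin{equation}
\begin{aligned}
\Ba \Bb
&= (\Be_1 + 2 \Be_2)(3 \Be_1 + 4 \Be_2) \\
&= 3 \Be_1^2 + 4 \Be_1 \Be_2 + 6 \Be_2 \Be_1 + 8 \Be_2^2 \\
&= 11 + (4 - 6) \Be_1 \Be_2 \\
&= \Ba \cdot \Bb + I (\Ba \cross \Bb),
\end{aligned}
\end{equation}

since \( \Ba \cdot \Bb = 11 \) and \( \Ba \cross \Bb = -2 \Be_3 \), so that \( I (\Ba \cross \Bb) = -2 \Be_1 \Be_2 \Be_3 \Be_3 = -2 \Be_1 \Be_2 \). Observe also that the same expansion applied to \( \Ba \wedge \Ba \) gives zero, as it must for collinear vectors.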

Example: Helmholtz equations.

As an example of the power of \ref{eqn:gaTutorial:180}, consider the following derivation of the Helmholtz equations (wave equations for the electric and magnetic fields in the frequency domain).

Application of \ref{eqn:gaTutorial:180} to
Maxwell equations in the frequency domain for source free simple media gives

\begin{equation}\label{eqn:emtProblemSet1Problem6:360}
\spacegrad \BE = -j \omega I \BB
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:380}
\spacegrad I \BB = -j \omega \mu \epsilon \BE.
\end{equation}

These equations use the engineering (not physics) sign convention for the phasors, where the time domain fields are of the form \( \boldsymbol{\mathcal{E}}(\Br, t) = \textrm{Re}( \BE e^{j\omega t} ) \).

Operation with the gradient from the left produces the Helmholtz equation for each of the fields using nothing more than multiplication and simple substitution

\begin{equation}\label{eqn:emtProblemSet1Problem6:420}
\spacegrad^2 \BE = - \mu \epsilon \omega^2 \BE
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:440}
\spacegrad^2 I \BB = - \mu \epsilon \omega^2 I \BB.
\end{equation}
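
Explicitly, for the electric field, the substitution chain is

\begin{equation}
\spacegrad^2 \BE
= \spacegrad \lr{ \spacegrad \BE }
= \spacegrad \lr{ -j \omega I \BB }
= -j \omega \spacegrad I \BB
= \lr{ -j \omega } \lr{ -j \omega \mu \epsilon } \BE
= -\mu \epsilon \omega^2 \BE,
\end{equation}

and the magnetic field equation follows in exactly the same way, operating on \ref{eqn:emtProblemSet1Problem6:380} instead.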

There was no reason to go through the headache of looking up or deriving the expansion of \( \spacegrad \cross (\spacegrad \cross \BA ) \) as is required with the traditional vector algebra demonstration of these identities.

Observe that the usual Helmholtz equation for \( \BB \) doesn’t have a pseudoscalar factor. That result can be obtained by just cancelling the factor \( I \), since the \R{3} Euclidean pseudoscalar commutes with all grades (this isn’t the case in \R{2}, nor in Minkowski spaces).

Example: Factoring the Laplacian.

There are various ways to demonstrate the identity

\begin{equation}\label{eqn:gaTutorial:660}
\spacegrad \cross \lr{ \spacegrad \cross \BA } = \spacegrad \lr{ \spacegrad \cdot \BA } - \spacegrad^2 \BA,
\end{equation}

such as the use of (somewhat obscure) tensor contraction techniques. We can also do this with Geometric Algebra (using a different set of obscure techniques) by factoring the Laplacian action on a vector

\begin{equation}\label{eqn:gaTutorial:700}
\begin{aligned}
\spacegrad^2 \BA
&=
\spacegrad (\spacegrad \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA + \spacegrad \wedge \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA)
+
\cancel{\spacegrad \wedge (\spacegrad \wedge \BA)} \\
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA).
\end{aligned}
\end{equation}
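
The wedge term \( \spacegrad \wedge \lr{ \spacegrad \wedge \BA } \), which would otherwise appear in the expansion of \( \spacegrad (\spacegrad \wedge \BA) \), vanishes because partial derivatives commute while the wedge product antisymmetrizes them

\begin{equation}
\spacegrad \wedge \spacegrad \wedge \BA
= \sum_{i, j} \Be_i \wedge \Be_j \wedge \partial_i \partial_j \BA
= \inv{2} \sum_{i, j} \lr{ \Be_i \wedge \Be_j + \Be_j \wedge \Be_i } \wedge \partial_i \partial_j \BA
= 0.
\end{equation}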

Should we wish to express the last term using cross products, a grade one selection operation can be used
\begin{equation}\label{eqn:gaTutorial:680}
\begin{aligned}
\spacegrad \cdot (\spacegrad \wedge \BA)
&=
\gpgradeone{ \spacegrad (\spacegrad \wedge \BA) } \\
&=
\gpgradeone{ \spacegrad I (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I \spacegrad \wedge (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I^2 \spacegrad \cross (\spacegrad \cross \BA) } \\
&=
-\spacegrad \cross (\spacegrad \cross \BA).
\end{aligned}
\end{equation}

Here coordinate expansion was not required in any step.

Learning more.

Some references that may be helpful to learn more about Geometric Algebra are [2], [1], [4], and [3].

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[4] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

Maxwell’s equations in tensor form with magnetic sources

February 22, 2015 ece1229


Following the principle that one should always relate new formalisms to things previously learned, I’d like to know what Maxwell’s equations look like in tensor form when magnetic sources are included. As a verification that the previous Geometric Algebra form of Maxwell’s equation that includes magnetic sources is correct, I’ll start with the GA form of Maxwell’s equation, find the tensor form, and then verify that the vector form of Maxwell’s equations can be recovered from the tensor form.

Tensor form

With four-vector potential \( A \), and bivector electromagnetic field \( F = \grad \wedge A \), the GA form of Maxwell’s equation is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:20}
\grad F = \frac{J}{\epsilon_0 c} + M I.
\end{equation}

The left hand side can be unpacked into vector and trivector terms \( \grad F = \grad \cdot F + \grad \wedge F \), which happens to also separate the sources nicely as a side effect

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:60}
\grad \cdot F = \frac{J}{\epsilon_0 c}
\end{equation}
\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:80}
\grad \wedge F = M I.
\end{equation}
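
This split is just a grade selection of each side of Maxwell's equation. Since \( F \) is a bivector, \( \grad \cdot F \) is a vector and \( \grad \wedge F \) is a trivector, matching the vector \( J/(\epsilon_0 c) \) and the trivector \( M I \) respectively. For example, the vector grade selection is

\begin{equation}
\grad \cdot F = \gpgradeone{ \grad F } = \gpgradeone{ \frac{J}{\epsilon_0 c} + M I } = \frac{J}{\epsilon_0 c},
\end{equation}

and the grade three selection gives the magnetic source equation in the same fashion.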

The electric source equation can be unpacked into tensor form by dotting with the four-vector basis vectors. With the usual definition \( F^{\alpha \beta} = \partial^\alpha A^\beta - \partial^\beta A^\alpha \), that is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:100}
\begin{aligned}
\gamma^\mu \cdot \lr{ \grad \cdot F }
&=
\gamma^\mu \cdot \lr{ \grad \cdot \lr{ \grad \wedge A } } \\
&=
\gamma^\mu \cdot \lr{ \gamma^\nu \partial_\nu \cdot
\lr{ \gamma_\alpha \partial^\alpha \wedge \gamma_\beta A^\beta } } \\
&=
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta
} } \partial_\nu \partial^\alpha A^\beta \\
&=
\inv{2}
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta } }
\partial_\nu F^{\alpha \beta} \\
&=
\inv{2} \delta^{\nu \mu}_{[\alpha \beta]} \partial_\nu F^{\alpha \beta} \\
&=
\inv{2} \partial_\nu F^{\nu \mu}
-
\inv{2} \partial_\nu F^{\mu \nu} \\
&=
\partial_\nu F^{\nu \mu}.
\end{aligned}
\end{equation}

So the first tensor equation is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:120}
\boxed{
\partial_\nu F^{\nu \mu} = \inv{c \epsilon_0} J^\mu.
}
\end{equation}

To unpack the magnetic source portion of Maxwell’s equation, put it first into dual form, so that there is a four-vector on each side

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:140}
\begin{aligned}
M
&= - \lr{ \grad \wedge F} I \\
&= -\frac{1}{2} \lr{ \grad F + F \grad } I \\
&= -\frac{1}{2} \lr{ \grad F I - F I \grad } \\
&= - \grad \cdot \lr{ F I }.
\end{aligned}
\end{equation}

Dotting with \( \gamma^\mu \) gives

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:160}
\begin{aligned}
M^\mu
&= \gamma^\mu \cdot \lr{ \grad \cdot \lr{ - F I } } \\
&= \gamma^\mu \cdot \lr{ \gamma^\nu \partial_\nu \cdot \lr{ -\frac{1}{2}
\gamma^\alpha \wedge \gamma^\beta I F_{\alpha \beta} } } \\
&= -\inv{2}
\gpgradezero{
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{ \gamma^\alpha \wedge \gamma^\beta I } }
}
\partial_\nu F_{\alpha \beta}.
\end{aligned}
\end{equation}

This scalar grade selection is a complete antisymmetrization of the indexes

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:180}
\begin{aligned}
\gpgradezero{
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{ \gamma^\alpha \wedge \gamma^\beta I } }
}
&=
\gpgradezero{
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{
\gamma^\alpha \gamma^\beta
\gamma_0 \gamma_1 \gamma_2 \gamma_3
} }
} \\
&=
\gpgradezero{
\gamma_0 \gamma_1 \gamma_2 \gamma_3
\gamma^\mu \gamma^\nu \gamma^\alpha \gamma^\beta
} \\
&=
\delta^{\mu \nu \alpha \beta}_{3 2 1 0} \\
&=
\epsilon^{\mu \nu \alpha \beta },
\end{aligned}
\end{equation}
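
As a spot check of the conventions, picking \( (\mu, \nu, \alpha, \beta) = (0,1,2,3) \) and using \( \gamma^0 \gamma^1 \gamma^2 \gamma^3 = -\gamma_0 \gamma_1 \gamma_2 \gamma_3 \) gives

\begin{equation}
\gpgradezero{
\gamma_0 \gamma_1 \gamma_2 \gamma_3
\gamma^0 \gamma^1 \gamma^2 \gamma^3
}
= -\lr{ \gamma_0 \gamma_1 \gamma_2 \gamma_3 }^2
= 1
= \epsilon^{0 1 2 3},
\end{equation}

as expected, since the spacetime pseudoscalar squares to \( -1 \).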

so the magnetic source portion of Maxwell’s equation, in tensor form, is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:200}
\boxed{
\inv{2} \epsilon^{\nu \alpha \beta \mu}
\partial_\nu F_{\alpha \beta}
=
M^\mu.
}
\end{equation}

Relating the tensor to the fields

The electromagnetic field has been identified with the electric and magnetic fields by

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:220}
F = \boldsymbol{\mathcal{E}} + c \mu_0 \boldsymbol{\mathcal{H}} I ,
\end{equation}

or in coordinates

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:240}
\inv{2} \gamma_\mu \wedge \gamma_\nu F^{\mu \nu}
= E^a \gamma_a \gamma_0 + c \mu_0 H^a \gamma_a \gamma_0 I.
\end{equation}

By forming the dot product sequence \( F^{\alpha \beta} = \gamma^\beta \cdot \lr{ \gamma^\alpha \cdot F } \), the electric and magnetic field components can be related to the tensor components. The electric field components follow by inspection and are

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:260}
E^b = \gamma^0 \cdot \lr{ \gamma^b \cdot F } = F^{b 0}.
\end{equation}
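
For example, only the electric portion of \( F \) survives this pair of contractions, since \( \gamma_a \gamma_0 I = \gamma_a \gamma_1 \gamma_2 \gamma_3 \) contains no \( \gamma_0 \) factor and is killed by the final dot with \( \gamma^0 \). The surviving term is

\begin{equation}
\gamma^0 \cdot \lr{ \gamma^b \cdot \lr{ E^a \gamma_a \gamma_0 } }
=
E^a \gamma^0 \cdot \lr{ \lr{ \gamma^b \cdot \gamma_a } \gamma_0 - \lr{ \gamma^b \cdot \gamma_0 } \gamma_a }
=
E^b \gamma^0 \cdot \gamma_0
=
E^b.
\end{equation}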

The magnetic field’s relation to the tensor components follows from

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:280}
\begin{aligned}
F^{r s}
&= F_{r s} \\
&= \gamma_s \cdot \lr{ \gamma_r \cdot \lr{ c \mu_0 H^a \gamma_a \gamma_0 I
} } \\
&=
c \mu_0 H^a \gpgradezero{ \gamma_s \gamma_r \gamma_a \gamma_0 I } \\
&=
c \mu_0 H^a \gpgradezero{ -\gamma^0 \gamma^1 \gamma^2 \gamma^3
\gamma_s \gamma_r \gamma_a \gamma_0 } \\
&=
c \mu_0 H^a \gpgradezero{ -\gamma^1 \gamma^2 \gamma^3
\gamma_s \gamma_r \gamma_a } \\
&=
- c \mu_0 H^a \delta^{[3 2 1]}_{s r a} \\
&=
c \mu_0 H^a \epsilon_{ s r a }.
\end{aligned}
\end{equation}

Expanding this for each pair of spacelike coordinates gives

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:320}
F^{1 2} = c \mu_0 H^3 \epsilon_{ 2 1 3 } = - c \mu_0 H^3
\end{equation}
\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:340}
F^{2 3} = c \mu_0 H^1 \epsilon_{ 3 2 1 } = - c \mu_0 H^1
\end{equation}
\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:360}
F^{3 1} = c \mu_0 H^2 \epsilon_{ 1 3 2 } = - c \mu_0 H^2,
\end{equation}

or

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:380}
\boxed{
\begin{aligned}
E^1 &= F^{1 0} \\
E^2 &= F^{2 0} \\
E^3 &= F^{3 0} \\
H^1 &= -\inv{c \mu_0} F^{2 3} \\
H^2 &= -\inv{c \mu_0} F^{3 1} \\
H^3 &= -\inv{c \mu_0} F^{1 2}.
\end{aligned}
}
\end{equation}

Recover the vector equations from the tensor equations

Starting with the non-dual Maxwell tensor equation, expanding the timelike index gives

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:480}
\begin{aligned}
\inv{c \epsilon_0} J^0
&= \inv{\epsilon_0} \rho \\
&=
\partial_\nu F^{\nu 0} \\
&=
\partial_1 F^{1 0}
+\partial_2 F^{2 0}
+\partial_3 F^{3 0}
\end{aligned}
\end{equation}
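
With the field identifications \ref{eqn:gaMagneticSourcesToTensorToVector:380}, that is \( F^{b 0} = E^b \), the right hand side is just the divergence

\begin{equation}
\partial_1 E^1 + \partial_2 E^2 + \partial_3 E^3 = \spacegrad \cdot \boldsymbol{\mathcal{E}}.
\end{equation}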

This is Gauss’s law

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:500}
\boxed{
\spacegrad \cdot \boldsymbol{\mathcal{E}}
=
\rho/\epsilon_0.
}
\end{equation}

For a spacelike index, any one is representative. Expanding index 1 gives

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:520}
\begin{aligned}
\inv{c \epsilon_0} J^1
&= \partial_\nu F^{\nu 1} \\
&= \inv{c} \partial_t F^{0 1}
+ \partial_2 F^{2 1}
+ \partial_3 F^{3 1} \\
&= -\inv{c} \partial_t E^1
+ \partial_2 (c \mu_0 H^3)
+ \partial_3 (-c \mu_0 H^2) \\
&=
\lr{ -\inv{c} \PD{t}{\boldsymbol{\mathcal{E}}} + c \mu_0 \spacegrad \cross \boldsymbol{\mathcal{H}} } \cdot \Be_1.
\end{aligned}
\end{equation}

Extending this to the other indexes and multiplying through by \( \epsilon_0 c \) recovers the Ampere-Maxwell equation (assuming linear media)

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:540}
\boxed{
\spacegrad \cross \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{J}} + \PD{t}{\boldsymbol{\mathcal{D}}}.
}
\end{equation}
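
Explicitly, multiplying \ref{eqn:gaMagneticSourcesToTensorToVector:520} through by \( c \epsilon_0 \), and using \( c^2 \mu_0 \epsilon_0 = 1 \) together with \( \boldsymbol{\mathcal{D}} = \epsilon_0 \boldsymbol{\mathcal{E}} \) for linear media, gives the \( \Be_1 \) component of this result

\begin{equation}
J^1 = \lr{ -\PD{t}{\boldsymbol{\mathcal{D}}} + \spacegrad \cross \boldsymbol{\mathcal{H}} } \cdot \Be_1.
\end{equation}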

The expansion of the 0th free (timelike) index of the dual Maxwell tensor equation is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:400}
\begin{aligned}
M^0
&=
\inv{2} \epsilon^{\nu \alpha \beta 0}
\partial_\nu F_{\alpha \beta} \\
&=
-\inv{2} \epsilon^{0 \nu \alpha \beta}
\partial_\nu F_{\alpha \beta} \\
&=
-\inv{2}
\lr{
\partial_1 (F_{2 3} - F_{3 2})
+\partial_2 (F_{3 1} - F_{1 3})
+\partial_3 (F_{1 2} - F_{2 1})
} \\
&=
-
\lr{
\partial_1 F_{2 3}
+\partial_2 F_{3 1}
+\partial_3 F_{1 2}
} \\
&=
-
\lr{
\partial_1 (- c \mu_0 H^1 ) +
\partial_2 (- c \mu_0 H^2 ) +
\partial_3 (- c \mu_0 H^3 )
},
\end{aligned}
\end{equation}
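
The sum in the last line is just \( c \mu_0 \) times the divergence of the magnetic field

\begin{equation}
M^0 = c \mu_0 \lr{ \partial_1 H^1 + \partial_2 H^2 + \partial_3 H^3 } = c \mu_0 \spacegrad \cdot \boldsymbol{\mathcal{H}},
\end{equation}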

but \( M^0 = c \rho_m \), giving us Gauss’s law for magnetism (with magnetic charge density included)

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:420}
\boxed{
\spacegrad \cdot \boldsymbol{\mathcal{H}} = \rho_m/\mu_0.
}
\end{equation}

For the spacelike indexes of the dual Maxwell equation, only one need be computed (say 1), and cyclic permutation will provide the rest. That is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:440}
\begin{aligned}
M^1
&= \inv{2} \epsilon^{\nu \alpha \beta 1} \partial_\nu F_{\alpha \beta} \\
&=
\inv{2} \lr{ \partial_2 \lr{F_{3 0} - F_{0 3}} }
+\inv{2} \lr{ \partial_3 \lr{F_{0 2} - F_{2 0}} }
+\inv{2} \lr{ \partial_0 \lr{F_{2 3} - F_{3 2}} } \\
&=
- \partial_2 F^{3 0}
+ \partial_3 F^{2 0}
+ \partial_0 F_{2 3} \\
&=
-\partial_2 E^3 + \partial_3 E^2 + \inv{c} \PD{t}{} \lr{ - c \mu_0 H^1 } \\
&= - \lr{ \spacegrad \cross \boldsymbol{\mathcal{E}} + \mu_0 \PD{t}{\boldsymbol{\mathcal{H}}} } \cdot \Be_1.
\end{aligned}
\end{equation}

Extending this to the rest of the coordinates gives the Maxwell-Faraday equation (as extended to include magnetic current density sources)

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:460}
\boxed{
\spacegrad \cross \boldsymbol{\mathcal{E}} = -\boldsymbol{\mathcal{M}} - \mu_0 \PD{t}{\boldsymbol{\mathcal{H}}}.
}
\end{equation}

This takes things full circle, going from the vector differential Maxwell’s equations, to the Geometric Algebra form of Maxwell’s equation, to Maxwell’s equations in tensor form, and back to the vector form. Not only is the tensor form of Maxwell’s equations with magnetic sources now known, the translation from the tensor and vector formalism has also been verified, and miraculously no signs or factors of 2 were lost or gained in the process.