
Geometric Algebra in a nutshell.

September 29, 2016 math and physics play


Motivation

I initially thought that I might submit a problem set solution for ece1228 using Geometric Algebra. In order to justify this, I needed to add an appendix to that problem set that outlined enough of the ideas that such a solution might make sense to the grader.

I ended up changing my mind and reworked the problem entirely, removing any use of GA. Here’s the tutorial I initially considered submitting with that problem.

Geometric Algebra in a nutshell.

Geometric Algebra defines a non-commutative, associative vector product

\begin{equation}\label{eqn:gaTutorial:20}
\begin{aligned}
\Ba \Bb \Bc
&=
(\Ba \Bb) \Bc \\
&=
\Ba (\Bb \Bc),
\end{aligned}
\end{equation}

where the square of a vector equals the squared vector magnitude

\begin{equation}\label{eqn:gaTutorial:40}
\Ba^2 = \Abs{\Ba}^2.
\end{equation}

In Euclidean spaces such a squared vector is always positive, but that is not necessarily the case in the mixed signature spaces used in special relativity.

There are a number of consequences of these two simple vector multiplication rules.

  • Squared unit vectors have a unit magnitude (up to a sign). In a Euclidean space such a product is always positive

    \begin{equation}\label{eqn:gaTutorial:60}
    (\Be_1)^2 = 1.
    \end{equation}

  • Products of perpendicular vectors anticommute.

    \begin{equation}\label{eqn:gaTutorial:80}
    \begin{aligned}
    2
    &=
    (\Be_1 + \Be_2)^2 \\
    &= (\Be_1 + \Be_2)(\Be_1 + \Be_2) \\
    &= \Be_1^2 + \Be_2 \Be_1 + \Be_1 \Be_2 + \Be_2^2 \\
    &= 2 + \Be_2 \Be_1 + \Be_1 \Be_2.
    \end{aligned}
    \end{equation}

    A product of two perpendicular vectors is called a bivector, and can be used to represent an oriented plane. The last line above shows an example of a scalar and bivector sum, called a multivector. In general, Geometric Algebra allows scalars, vectors, bivectors, and higher grade analogues to be summed.

    Comparison of the RHS and LHS of \ref{eqn:gaTutorial:80} shows that we must have

    \begin{equation}\label{eqn:gaTutorial:100}
    \Be_2 \Be_1 = -\Be_1 \Be_2.
    \end{equation}

    It is true in general that the product of two perpendicular vectors anticommutes. When, as above, such a product is a product of
    two orthonormal vectors, it behaves like a non-commutative imaginary quantity, as it has a negative square in Euclidean spaces

    \begin{equation}\label{eqn:gaTutorial:120}
    \begin{aligned}
    (\Be_1 \Be_2)^2
    &=
    (\Be_1 \Be_2)
    (\Be_1 \Be_2) \\
    &=
    \Be_1 (\Be_2
    \Be_1) \Be_2 \\
    &=
    -\Be_1 (\Be_1
    \Be_2) \Be_2 \\
    &=
    -(\Be_1 \Be_1)
    (\Be_2 \Be_2) \\
    &=-1.
    \end{aligned}
    \end{equation}

    Such “imaginary” quantities (unit bivectors) have important applications, describing rotations in Euclidean spaces and boosts in Minkowski spaces.

  • The product of three mutually perpendicular vectors, such as

    \begin{equation}\label{eqn:gaTutorial:140}
    I = \Be_1 \Be_2 \Be_3,
    \end{equation}

    is called a trivector. In \R{3}, the product of three orthonormal vectors is called a pseudoscalar for the space, and can represent an oriented volume element. The quantity \( I \) above is the typical orientation picked for the \R{3} unit pseudoscalar. This quantity also has characteristics of an imaginary number

    \begin{equation}\label{eqn:gaTutorial:160}
    \begin{aligned}
    I^2
    &=
    (\Be_1 \Be_2 \Be_3)
    (\Be_1 \Be_2 \Be_3) \\
    &=
    \Be_1 \Be_2 (\Be_3
    \Be_1) \Be_2 \Be_3 \\
    &=
    -\Be_1 \Be_2 \Be_1
    \Be_3 \Be_2 \Be_3 \\
    &=
    -\Be_1 (\Be_2 \Be_1)
    (\Be_3 \Be_2) \Be_3 \\
    &=
    -\Be_1 (\Be_1 \Be_2)
    (\Be_2 \Be_3) \Be_3 \\
    &=
    -\Be_1^2
    \Be_2^2
    \Be_3^2 \\
    &=
    -1.
    \end{aligned}
    \end{equation}

  • The product of two vectors in \R{3} can be expressed as the sum of a symmetric scalar product and antisymmetric bivector product

    \begin{equation}\label{eqn:gaTutorial:480}
    \begin{aligned}
    \Ba \Bb
    &=
    \sum_{i,j = 1}^n \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n \Be_i^2 a_i b_i
    +
    \sum_{0 < i \ne j \le n} \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n a_i b_i
    +
    \sum_{0 < i < j \le n} \Be_i \Be_j (a_i b_j - a_j b_i).
    \end{aligned}
    \end{equation}

    The first (symmetric) term is clearly the dot product. The antisymmetric term is designated the wedge product. In general these are written

    \begin{equation}\label{eqn:gaTutorial:500}
    \Ba \Bb = \Ba \cdot \Bb + \Ba \wedge \Bb,
    \end{equation}

    where

    \begin{equation}\label{eqn:gaTutorial:520}
    \begin{aligned}
    \Ba \cdot \Bb &\equiv \inv{2} \lr{ \Ba \Bb + \Bb \Ba } \\
    \Ba \wedge \Bb &\equiv \inv{2} \lr{ \Ba \Bb - \Bb \Ba }.
    \end{aligned}
    \end{equation}

    The coordinate expansion of both can be seen above, but in \R{3} the wedge can also be written

    \begin{equation}\label{eqn:gaTutorial:540}
    \Ba \wedge \Bb = \Be_1 \Be_2 \Be_3 (\Ba \cross \Bb) = I (\Ba \cross \Bb).
    \end{equation}

    This allows for a handy dot plus cross product expansion of the vector product

    \begin{equation}\label{eqn:gaTutorial:180}
    \Ba \Bb = \Ba \cdot \Bb + I (\Ba \cross \Bb).
    \end{equation}

    This result should be familiar to the student of quantum spin states, where one writes

    \begin{equation}\label{eqn:gaTutorial:200}
    (\Bsigma \cdot \Ba) (\Bsigma \cdot \Bb) = (\Ba \cdot \Bb) + i (\Ba \cross \Bb) \cdot \Bsigma.
    \end{equation}

    This correspondence holds because the Pauli spin basis is a specific matrix representation of a Geometric Algebra, satisfying the same commutator and anticommutator relationships. A number of other algebraic structures, such as complex numbers and quaternions, can also be modelled as Geometric Algebra elements.

  • It is often useful to utilize the grade selection operator
    \( \gpgrade{M}{n} \) and scalar grade selection operator \( \gpgradezero{M} = \gpgrade{M}{0} \)
    to select the scalar, vector, bivector, trivector, or higher grade algebraic elements. For example, operating on vectors \( \Ba, \Bb, \Bc \), we have

    \begin{equation}\label{eqn:gaTutorial:580}
    \begin{aligned}
    \gpgradezero{ \Ba \Bb }
    &= \Ba \cdot \Bb \\
    \gpgradeone{ \Ba \Bb \Bc }
    &=
    \Ba (\Bb \cdot \Bc)
    +
    \Ba \cdot (\Bb \wedge \Bc) \\
    &=
    \Ba (\Bb \cdot \Bc)
    +
    (\Ba \cdot \Bb) \Bc
    -
    (\Ba \cdot \Bc) \Bb \\
    \gpgradetwo{\Ba \Bb} &=
    \Ba \wedge \Bb \\
    \gpgradethree{\Ba \Bb \Bc} &=
    \Ba \wedge \Bb \wedge \Bc.
    \end{aligned}
    \end{equation}

    Note that the wedge product of any number of vectors such as \( \Ba \wedge \Bb \wedge \Bc \) is associative and can be expressed in terms of the complete antisymmetrization of the product of those vectors. A consequence of that is the fact that a wedge product that includes any colinear vectors is zero (a short check of this follows below).
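
As a short check of that last claim, using only the antisymmetry of the wedge in \ref{eqn:gaTutorial:520}, the wedge of any vector with itself vanishes, which kills off any wedge product containing colinear factors

\begin{equation*}
\Ba \wedge \Ba = \inv{2} \lr{ \Ba \Ba - \Ba \Ba } = 0,
\qquad
\Ba \wedge \Bb \wedge \lr{ \alpha \Ba + \beta \Bb }
= \alpha \, \Ba \wedge \Bb \wedge \Ba + \beta \, \Ba \wedge \Bb \wedge \Bb = 0.
\end{equation*}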

Example: Helmholtz equations.

As an example of the power of \ref{eqn:gaTutorial:180}, consider the following Helmholtz equation derivation (the wave equations for the electric and magnetic fields in the frequency domain).

Application of \ref{eqn:gaTutorial:180} to Maxwell’s equations in the frequency domain, for source free simple media, gives

\begin{equation}\label{eqn:emtProblemSet1Problem6:360}
\spacegrad \BE = -j \omega I \BB
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:380}
\spacegrad I \BB = -j \omega \mu \epsilon \BE.
\end{equation}

These equations use the engineering (not physics) sign convention for the phasors, where the time domain fields are of the form \( \boldsymbol{\mathcal{E}}(\Br, t) = \textrm{Re}( \BE e^{j\omega t} ) \).

Operation with the gradient from the left produces the Helmholtz equation for each of the fields using nothing more than multiplication and simple substitution

\begin{equation}\label{eqn:emtProblemSet1Problem6:420}
\spacegrad^2 \BE = - \mu \epsilon \omega^2 \BE
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:440}
\spacegrad^2 I \BB = - \mu \epsilon \omega^2 I \BB.
\end{equation}
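
Explicitly, for the electric field, this is nothing more than substitution of \ref{eqn:emtProblemSet1Problem6:380} into the gradient of \ref{eqn:emtProblemSet1Problem6:360}

\begin{equation*}
\spacegrad^2 \BE
= \spacegrad \lr{ \spacegrad \BE }
= \spacegrad \lr{ -j \omega I \BB }
= -j \omega \lr{ \spacegrad I \BB }
= \lr{ -j \omega } \lr{ -j \omega \mu \epsilon \BE }
= -\mu \epsilon \omega^2 \BE,
\end{equation*}

and the magnetic field equation follows in exactly the same way, substituting \ref{eqn:emtProblemSet1Problem6:360} into the gradient of \ref{eqn:emtProblemSet1Problem6:380}.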

There was no reason to go through the headache of looking up or deriving the expansion of \( \spacegrad \cross (\spacegrad \cross \BA ) \) as is required with the traditional vector algebra demonstration of these identities.

Observe that the usual Helmholtz equation for \( \BB \) doesn’t have a pseudoscalar factor. That result can be obtained by cancelling the factors of \( I \), since the \R{3} Euclidean pseudoscalar commutes with all grades (this isn’t the case in \R{2}, nor in Minkowski spaces).

Example: Factoring the Laplacian.

There are various ways to demonstrate the identity

\begin{equation}\label{eqn:gaTutorial:660}
\spacegrad \cross \lr{ \spacegrad \cross \BA } = \spacegrad \lr{ \spacegrad \cdot \BA } - \spacegrad^2 \BA,
\end{equation}

such as the use of (somewhat obscure) tensor contraction techniques. We can also do this with Geometric Algebra (using a different set of obscure techniques) by factoring the Laplacian action on a vector

\begin{equation}\label{eqn:gaTutorial:700}
\begin{aligned}
\spacegrad^2 \BA
&=
\spacegrad (\spacegrad \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA + \spacegrad \wedge \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA)
+
\cancel{\spacegrad \wedge \spacegrad \wedge \BA} \\
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA).
\end{aligned}
\end{equation}

Should we wish to express the last term using cross products, a grade one selection operation can be used
\begin{equation}\label{eqn:gaTutorial:680}
\begin{aligned}
\spacegrad \cdot (\spacegrad \wedge \BA)
&=
\gpgradeone{ \spacegrad (\spacegrad \wedge \BA) } \\
&=
\gpgradeone{ \spacegrad I (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I \spacegrad \wedge (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I^2 \spacegrad \cross (\spacegrad \cross \BA) } \\
&=
-\spacegrad \cross (\spacegrad \cross \BA).
\end{aligned}
\end{equation}

Here coordinate expansion was not required in any step.
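
Combining \ref{eqn:gaTutorial:700} with \ref{eqn:gaTutorial:680} gives

\begin{equation*}
\spacegrad^2 \BA = \spacegrad \lr{ \spacegrad \cdot \BA } - \spacegrad \cross \lr{ \spacegrad \cross \BA },
\end{equation*}

which is just a rearrangement of \ref{eqn:gaTutorial:660}.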

Learning more.

Some references that may be helpful to learn more about Geometric Algebra are [2], [1], [4], and [3].

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[4] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

Fundamental Theorem of Geometric Calculus

September 20, 2016 math and physics play


Stokes Theorem

The Fundamental Theorem of (Geometric) Calculus is a generalization of Stokes theorem to multivector integrals. Notationally, it looks like Stokes theorem with all the dot and wedge products removed. It is worth restating Stokes theorem and all the definitions associated with it for reference

Stokes’ Theorem

For blades \(F \in \bigwedge^{s}\), and a \(k\) volume element \(d^k \Bx\), \( s < k \),
\begin{equation*}
\int_V d^k \Bx \cdot (\boldpartial \wedge F) = \oint_{\partial V} d^{k-1} \Bx \cdot F.
\end{equation*}
This is a loaded and abstract statement, and requires many definitions to make it useful (a one dimensional sanity check follows the definitions below).

  • The volume integral is over a \(k\) dimensional surface (manifold).
  • Integration over the boundary of the manifold \(V\) is indicated by \( \partial V \).
  • This manifold is assumed to be spanned by a parameterized vector \( \Bx(u^1, u^2, \cdots, u^k) \).
  • A curvilinear coordinate basis \( \setlr{ \Bx_i } \) can be defined on the manifold by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:40}
    \Bx_i \equiv \PD{u^i}{\Bx} \equiv \partial_i \Bx.
    \end{equation}

  • A dual basis \( \setlr{\Bx^i} \) reciprocal to the tangent vector basis \( \Bx_i \) can be calculated subject to the requirement \( \Bx_i \cdot \Bx^j = \delta_i^j \).
  • The vector derivative \(\boldpartial\), the projection of the gradient onto the tangent space of the manifold, is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:100}
    \boldpartial = \Bx^i \partial_i = \sum_{i=1}^k \Bx^i \PD{u^i}{}.
    \end{equation}

  • The volume element is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:60}
    d^k \Bx = d\Bx_1 \wedge d\Bx_2 \cdots \wedge d\Bx_k,
    \end{equation}

    where

    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:80}
    d\Bx_i = \Bx_i du^i,\qquad \text{(no sum)}.
    \end{equation}

  • The volume element is non-zero on the manifold, or \( \Bx_1 \wedge \cdots \wedge \Bx_k \ne 0 \).
  • The surface area element \( d^{k-1} \Bx \), is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:120}
    d^{k-1} \Bx = \sum_{i = 1}^k (-1)^{k-i} d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k,
    \end{equation}

    where \( \widehat{d\Bx_i} \) indicates the omission of \( d\Bx_i \).

  • My proof for this theorem was restricted to a simple “rectangular” volume parameterized by the ranges
    \(
    [u^1(0), u^1(1) ] \otimes
    [u^2(0), u^2(1) ] \otimes \cdots \otimes
    [u^k(0), u^k(1) ] \)

  • The precise meaning that should be given to the oriented area integral is
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:140}
    \oint_{\partial V} d^{k-1} \Bx \cdot F
    =
    \sum_{i = 1}^k (-1)^{k-i} \int \evalrange{
    \lr{ \lr{ d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k } \cdot F }
    }{u^i = u^i(0)}{u^i(1)},
    \end{equation}

    where both the area form and the blade \( F \) are evaluated at the end points of the parameterization range.
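
As a quick sanity check of all these definitions, consider the simplest case: a \( k = 1 \) parameterized curve \( \Bx(u^1) \) and a scalar blade \( F = f \), for which \( \boldpartial \wedge f = \Bx^1 \partial_1 f \), \( d^1 \Bx = \Bx_1 du^1 \), and the boundary element \( d^0 \Bx \) is just the scalar \( 1 \). Then \( d^1 \Bx \cdot \lr{ \boldpartial \wedge f } = du^1 \partial_1 f \), and the theorem reduces to the ordinary fundamental theorem of calculus

\begin{equation*}
\int_{u^1(0)}^{u^1(1)} \PD{u^1}{f} du^1 = \evalrange{f}{u^1(0)}{u^1(1)}.
\end{equation*}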

After the work of stating exactly what is meant by this theorem, most of the proof follows from the fact that for \( s < k \) the volume curl dot product can be expanded as

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:160}
\int_V d^k \Bx \cdot (\boldpartial \wedge F)
=
\int_V d^k \Bx \cdot (\Bx^i \wedge \partial_i F)
=
\int_V \lr{ d^k \Bx \cdot \Bx^i } \cdot \partial_i F.
\end{equation}

Each of the \(du^i\) integrals can be evaluated directly, since each of the remaining \(d\Bx_j = du^j \PDi{u^j}{\Bx}, i \ne j \) is calculated with \( u^i \) held fixed. This allows for the integration over a “rectangular” parameterization region, proving the theorem for such a volume parameterization. A more general proof requires a triangulation of the volume and surface, but the basic principle of the theorem is evident without that additional work.

Fundamental Theorem of Calculus

There is a Geometric Algebra generalization of Stokes theorem that does not have the blade grade restriction. In [2] this is stated as

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:180}
\int_V d^k \Bx \boldpartial F = \oint_{\partial V} d^{k-1} \Bx F.
\end{equation}

A similar expression is used in [1] where it is also pointed out there is a variant with the vector derivative acting to the left

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:200}
\int_V F d^k \Bx \boldpartial = \oint_{\partial V} F d^{k-1} \Bx.
\end{equation}

In [3] it is pointed out that a bidirectional formulation is possible, providing the most general expression of the Fundamental Theorem of (Geometric) Calculus

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:220}
\boxed{
\int_V F d^k \Bx \boldpartial G = \oint_{\partial V} F d^{k-1} \Bx G.
}
\end{equation}

Here the vector derivative acts both to the left and right on \( F \) and \( G \). The specific action of this operator is
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:240}
\begin{aligned}
F \boldpartial G
&=
(F \boldpartial) G
+
F (\boldpartial G) \\
&=
(\partial_i F) \Bx^i G
+
F \Bx^i (\partial_i G).
\end{aligned}
\end{equation}
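
Observe that the unidirectional forms are just the \( F = 1 \) and \( G = 1 \) special cases of \ref{eqn:fundamentalTheoremOfCalculus:220}

\begin{equation*}
\int_V d^k \Bx \boldpartial G = \oint_{\partial V} d^{k-1} \Bx G,
\qquad
\int_V F d^k \Bx \boldpartial = \oint_{\partial V} F d^{k-1} \Bx,
\end{equation*}

recovering \ref{eqn:fundamentalTheoremOfCalculus:180} and \ref{eqn:fundamentalTheoremOfCalculus:200} respectively.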

The fundamental theorem can be demonstrated by direct expansion. With the vector derivative \( \boldpartial \) and its partials \( \partial_i \) acting bidirectionally, we have

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:260}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F d^k \Bx \Bx^i \partial_i G \\
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i + d^k \Bx \wedge \Bx^i } \partial_i G.
\end{aligned}
\end{equation}

Both the reciprocal frame vectors and the curvilinear basis span the tangent space of the manifold, since we can write any reciprocal frame vector as a set of projections in the curvilinear basis

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:280}
\Bx^i = \sum_j \lr{ \Bx^i \cdot \Bx^j } \Bx_j,
\end{equation}

so \( \Bx^i \in \textrm{span} \setlr{ \Bx_j, j \in [1,k] } \).
This means that \( d^k \Bx \wedge \Bx^i = 0 \), and

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:300}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i } \partial_i G \\
&=
\sum_{i = 1}^{k}
\int_V
du^1 du^2 \cdots \widehat{ du^i} \cdots du^k
F \lr{
(-1)^{k-i}
\Bx_1 \wedge \Bx_2 \cdots \widehat{\Bx_i} \cdots \wedge \Bx_k } \partial_i G du^i \\
&=
\sum_{i = 1}^{k}
(-1)^{k-i}
\int_{u^1}
\int_{u^2}
\cdots
\int_{u^{i-1}}
\int_{u^{i+1}}
\cdots
\int_{u^k}
\evalrange{ \lr{
F d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k G
}
}{u^i = u^i(0)}{u^i(1)}.
\end{aligned}
\end{equation}

Adding in the same notational sugar that we used in Stokes theorem, this proves the Fundamental theorem \ref{eqn:fundamentalTheoremOfCalculus:220} for “rectangular” parameterizations. Note that such a parameterization need not actually be rectangular.

Example: Application to Maxwell’s equation


Maxwell’s equation is an example of a first order gradient equation

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:320}
\grad F = \inv{\epsilon_0 c} J.
\end{equation}

Integrating over a four-volume (where the vector derivative equals the gradient), and applying the Fundamental theorem, we have

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:340}
\inv{\epsilon_0 c} \int d^4 x J = \oint d^3 x F.
\end{equation}

Observe that the surface area element product with \( F \) has both vector and trivector terms. This can be demonstrated by considering some examples

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:360}
\begin{aligned}
\gamma_{012} \gamma_{01} &\propto \gamma_2 \\
\gamma_{012} \gamma_{23} &\propto \gamma_{013}.
\end{aligned}
\end{equation}

On the other hand, the four volume integral of \( J \) has only trivector parts. This means that the integral can be split into a pair of same-grade equations

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:380}
\begin{aligned}
\inv{\epsilon_0 c} \int d^4 x \cdot J &=
\oint \gpgradethree{ d^3 x F} \\
0 &=
\oint d^3 x \cdot F.
\end{aligned}
\end{equation}

The first can be put into a slightly tidier form using a duality transformation
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:400}
\begin{aligned}
\gpgradethree{ d^3 x F}
&=
-\gpgradethree{ d^3 x I^2 F} \\
&=
\gpgradethree{ I d^3 x I F} \\
&=
(I d^3 x) \wedge (I F).
\end{aligned}
\end{equation}

Letting \( n \Abs{d^3 x} = I d^3 x \), this gives

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:420}
\oint \Abs{d^3 x} n \wedge (I F) = \inv{\epsilon_0 c} \int d^4 x \cdot J.
\end{equation}

Note that this normal is normal to a three-volume subspace of the spacetime volume. For example, if one component of that spacetime surface area element is \( \gamma_{012} c dt dx dy \), then the normal to that area component is \( \gamma_3 \).

A second set of duality transformations

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:440}
\begin{aligned}
n \wedge (IF)
&=
\gpgradethree{ n I F} \\
&=
-\gpgradethree{ I n F} \\
&=
-\gpgradethree{ I (n \cdot F)} \\
&=
-I (n \cdot F),
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:460}
\begin{aligned}
I d^4 x \cdot J
&=
\gpgradeone{ I d^4 x \cdot J } \\
&=
\gpgradeone{ I d^4 x J } \\
&=
\gpgradeone{ (I d^4 x) J } \\
&=
(I d^4 x) J,
\end{aligned}
\end{equation}

can further tidy things up, leaving us with

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:500}
\boxed{
\begin{aligned}
\oint \Abs{d^3 x} n \cdot F &= \inv{\epsilon_0 c} \int (I d^4 x) J \\
\oint d^3 x \cdot F &= 0.
\end{aligned}
}
\end{equation}

The Fundamental theorem of calculus immediately provides relations between the Faraday bivector \( F \) and the four-current \( J \).

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[3] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21(1): 221–231, 2011. URL https://arxiv.org/abs/0809.4526.

Maxwell equation boundary conditions

September 6, 2016 math and physics play


Motivation


fig 1. Two surfaces normal to the interface.

Most electrodynamics textbooks either start with or contain a treatment of boundary value conditions. These typically involve evaluating Maxwell’s equations over areas or volumes of decreasing height, such as those illustrated in fig. 1, and fig. 2. These represent surfaces and volumes where the height is allowed to decrease to infinitesimal levels, and are traditionally used to find the boundary value constraints of the normal and tangential components of the electric and magnetic fields.


fig 2. A pillbox volume encompassing the interface.

More advanced topics, such as evaluation of the Fresnel reflection and transmission equations, also rely on similar consideration of boundary value constraints. I’ve wondered for a long time how the Fresnel equations could be attacked by looking at the boundary conditions for the combined field \( F = \BE + I c \BB \), instead of considering them separately.

A unified approach.

The Geometric Algebra (and relativistic tensor) formulations of Maxwell’s equations put the electric and magnetic fields on equal footings. It is in fact possible to specify the boundary value constraints on the fields without first separating Maxwell’s equations into their traditional forms. The starting point in Geometric Algebra is Maxwell’s equation, premultiplied by a stationary observer’s timelike basis vector

\begin{equation}\label{eqn:maxwellBoundaryConditions:20}
\gamma_0 \grad F = \inv{\epsilon_0 c} \gamma_0 J,
\end{equation}

or

\begin{equation}\label{eqn:maxwellBoundaryConditions:40}
\lr{ \partial_0 + \spacegrad} F = \frac{\rho}{\epsilon_0} - \frac{\BJ}{\epsilon_0 c}.
\end{equation}

The electrodynamic field \(F = \BE + I c \BB\) is a multivector in this spatial domain (whereas it is a bivector in the spacetime algebra domain), and has vector and bivector components. The product of the spatial gradient and the field can still be split into dot and curl components \(\spacegrad M = \spacegrad \cdot M + \spacegrad \wedge M \). If \(M = \sum M_i \), where \(M_i\) is a grade \(i\) blade, then we use the Hestenes [1] definitions

\begin{equation}\label{eqn:maxwellBoundaryConditions:60}
\begin{aligned}
\spacegrad \cdot M &= \sum_i \gpgrade{\spacegrad M_i}{i-1} \\
\spacegrad \wedge M &= \sum_i \gpgrade{\spacegrad M_i}{i+1}.
\end{aligned}
\end{equation}
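
For example, applied to the multivector field \( F = \BE + I c \BB \), which has grade one and grade two terms, these definitions give

\begin{equation*}
\begin{aligned}
\spacegrad \cdot F &= \gpgradezero{ \spacegrad \BE } + \gpgradeone{ \spacegrad \lr{ I c \BB } } \\
\spacegrad \wedge F &= \gpgradetwo{ \spacegrad \BE } + \gpgradethree{ \spacegrad \lr{ I c \BB } },
\end{aligned}
\end{equation*}

so the dot product of the gradient with \( F \) has grades \( 0,1 \), and the curl has grades \( 2,3 \).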

With that said, Maxwell’s equation can be rearranged into a pair of multivector equations

\begin{equation}\label{eqn:maxwellBoundaryConditions:80}
\begin{aligned}
\spacegrad \cdot F &= \gpgrade{-\partial_0 F + \frac{\rho}{\epsilon_0} - \frac{\BJ}{\epsilon_0 c}}{0,1} \\
\spacegrad \wedge F &= \gpgrade{-\partial_0 F + \frac{\rho}{\epsilon_0} - \frac{\BJ}{\epsilon_0 c}}{2,3}.
\end{aligned}
\end{equation}

The latter (curl) equation can be integrated with Stokes theorem, but we need to apply a duality transformation to the former in order to apply Stokes to it

\begin{equation}\label{eqn:maxwellBoundaryConditions:120}
\begin{aligned}
\spacegrad \cdot F
&=
-I^2 \spacegrad \cdot F \\
&=
-I^2 \gpgrade{\spacegrad F}{0,1} \\
&=
-I \gpgrade{I \spacegrad F}{2,3} \\
&=
-I \spacegrad \wedge (IF),
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:maxwellBoundaryConditions:100}
\begin{aligned}
\spacegrad \wedge (I F) &= I \lr{ -\inv{c} \partial_t \BE + \frac{\rho}{\epsilon_0} - \frac{\BJ}{\epsilon_0 c} } \\
\spacegrad \wedge F &= -I \partial_t \BB.
\end{aligned}
\end{equation}

Integrating each of these over the pillbox volume gives

\begin{equation}\label{eqn:maxwellBoundaryConditions:140}
\begin{aligned}
\oint_{\partial V} d^2 \Bx \cdot (I F)
&=
\int_{V} d^3 \Bx \cdot \lr{ I \lr{ -\inv{c} \partial_t \BE + \frac{\rho}{\epsilon_0} - \frac{\BJ}{\epsilon_0 c} } } \\
\oint_{\partial V} d^2 \Bx \cdot F
&=
- \partial_t \int_{V} d^3 \Bx \cdot \lr{ I \BB }.
\end{aligned}
\end{equation}

In the absence of charges and currents on the surface, and if the height of the volume is reduced to zero, the volume integrals vanish, and only the top and bottom surfaces of the pillbox contribute to the surface integrals.

\begin{equation}\label{eqn:maxwellBoundaryConditions:200}
\begin{aligned}
\oint_{\partial V} d^2 \Bx \cdot (I F) &= 0 \\
\oint_{\partial V} d^2 \Bx \cdot F &= 0.
\end{aligned}
\end{equation}

With a multivector \(F\) in the mix, the geometric meaning of these integrals is not terribly clear. They do describe the boundary conditions, but to see exactly what those are, we can now resort to the split of \(F\) into its electric and magnetic fields. Let’s look at the non-dual integral to start with

\begin{equation}\label{eqn:maxwellBoundaryConditions:160}
\begin{aligned}
\oint_{\partial V} d^2 \Bx \cdot F
&=
\oint_{\partial V} d^2 \Bx \cdot \lr{ \BE + I c \BB } \\
&=
\oint_{\partial V} d^2 \Bx \cdot \BE + I c d^2 \Bx \wedge \BB \\
&=
0.
\end{aligned}
\end{equation}

No component of \(\BE\) that is normal to the surface contributes to \(d^2 \Bx \cdot \BE \), whereas only components of \(\BB\) that are normal contribute to \(d^2 \Bx \wedge \BB \). That means that we must have tangential components of \(\BE\) and the normal components of \(\BB\) matching on the surfaces

\begin{equation}\label{eqn:maxwellBoundaryConditions:180}
\begin{aligned}
\lr{\BE_2 \wedge \ncap} \ncap - \lr{\BE_1 \wedge (-\ncap)} (-\ncap) &= 0 \\
\lr{\BB_2 \cdot \ncap} \ncap - \lr{\BB_1 \cdot (-\ncap)} (-\ncap) &= 0.
\end{aligned}
\end{equation}

Similarly, for the dot product of the dual field, this is

\begin{equation}\label{eqn:maxwellBoundaryConditions:220}
\begin{aligned}
\oint_{\partial V} d^2 \Bx \cdot (I F)
&=
\oint_{\partial V} d^2 \Bx \cdot (I \BE - c \BB) \\
&=
\oint_{\partial V} I d^2 \Bx \wedge \BE - c d^2 \Bx \cdot \BB.
\end{aligned}
\end{equation}

For this integral, only the normal components of \(\BE\) contribute, and only the tangential components of \(\BB\) contribute. This means that

\begin{equation}\label{eqn:maxwellBoundaryConditions:240}
\begin{aligned}
\lr{\BE_2 \cdot \ncap} \ncap - \lr{\BE_1 \cdot (-\ncap)} (-\ncap) &= 0 \\
\lr{\BB_2 \wedge \ncap} \ncap - \lr{\BB_1 \wedge (-\ncap)} (-\ncap) &= 0.
\end{aligned}
\end{equation}

This is why we end up with a seemingly strange mix of tangential and normal components of the electric and magnetic fields. These constraints can be summarized as

\begin{equation}\label{eqn:maxwellBoundaryConditions:260}
\begin{aligned}
( \BE_2 - \BE_1 ) \cdot \ncap &= 0 \\
( \BE_2 - \BE_1 ) \wedge \ncap &= 0 \\
( \BB_2 - \BB_1 ) \cdot \ncap &= 0 \\
( \BB_2 - \BB_1 ) \wedge \ncap &= 0.
\end{aligned}
\end{equation}
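
Since \( \Ba \wedge \Bb = I \lr{ \Ba \cross \Bb } \) for spatial vectors, the wedge conditions above are equivalent to the more familiar cross product statements

\begin{equation*}
\begin{aligned}
( \BE_2 - \BE_1 ) \cross \ncap &= 0 \\
( \BB_2 - \BB_1 ) \cross \ncap &= 0,
\end{aligned}
\end{equation*}

that is, continuity of the tangential components, in addition to the continuity of the normal components expressed by the dot product conditions.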

These relationships are usually expressed in terms of all of \(\BE, \BD, \BB\) and \(\BH \). Because I’d started with Maxwell’s equations for free space, I don’t have the \( \epsilon \) and \( \mu \) factors that produce those more general relationships. Those more general boundary value relationships are usually the starting point for the Fresnel interface analysis. It is also possible to further generalize these relationships to include charges and currents on the surface.

References

[1] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

Application of Stokes Theorem to the Maxwell equation

September 3, 2016 math and physics play


The relativistic form of Maxwell’s equation in Geometric Algebra is

\begin{equation}\label{eqn:maxwellStokes:20}
\grad F = \inv{c \epsilon_0} J,
\end{equation}

where \( \grad = \gamma^\mu \partial_\mu \) is the spacetime gradient, and \( J = (c\rho, \BJ) = J^\mu \gamma_\mu \) is the four (vector) current density. The pseudoscalar for the space is denoted \( I = \gamma_0 \gamma_1 \gamma_2 \gamma_3 \), where the basis elements satisfy \( \gamma_0^2 = 1 = -\gamma_k^2 \), and a dual basis satisfies \( \gamma_\mu \cdot \gamma^\nu = \delta_\mu^\nu \). The electromagnetic field \( F \) is a composite multivector \( F = \BE + I c \BB \). This is actually a bivector because spatial vectors have a bivector representation in the space time algebra of the form \( \BE = E^k \gamma_k \gamma_0 \).

A dual representation, with \( F = I G \), is also possible

\begin{equation}\label{eqn:maxwellStokes:60}
\grad G = \frac{I}{c \epsilon_0} J.
\end{equation}

Either form of Maxwell’s equation can be split into grade one and three components. The standard (non-dual) form is

\begin{equation}\label{eqn:maxwellStokes:40}
\begin{aligned}
\grad \cdot F &= \inv{c \epsilon_0} J \\
\grad \wedge F &= 0,
\end{aligned}
\end{equation}

and the dual form is

\begin{equation}\label{eqn:maxwellStokes:41}
\begin{aligned}
\grad \cdot G &= 0 \\
\grad \wedge G &= \frac{I}{c \epsilon_0} J.
\end{aligned}
\end{equation}

In both cases a potential representation \( F = \grad \wedge A \), where \( A \) is a four vector potential, can be used to kill off the non-current equation. Such a potential representation reduces Maxwell’s equation to

\begin{equation}\label{eqn:maxwellStokes:80}
\grad \cdot F = \inv{c \epsilon_0} J,
\end{equation}

or
\begin{equation}\label{eqn:maxwellStokes:100}
\grad \wedge G = \frac{I}{c \epsilon_0} J.
\end{equation}

In both cases, these reduce to
\begin{equation}\label{eqn:maxwellStokes:120}
\grad^2 A – \grad \lr{ \grad \cdot A } = \inv{c \epsilon_0} J.
\end{equation}
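
For the non-dual case, for example, a sketch of this reduction follows from a grade split of \( \grad \grad A \), using the fact that \( \grad \wedge \grad \wedge A = 0 \)

\begin{equation*}
\grad \cdot F
= \grad \cdot \lr{ \grad \wedge A }
= \grad \lr{ \grad \wedge A }
= \grad \lr{ \grad A - \grad \cdot A }
= \grad^2 A - \grad \lr{ \grad \cdot A }.
\end{equation*}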

This can clearly be further simplified by using the Lorentz gauge, where \( \grad \cdot A = 0 \). However, the aim for now is to try applying Stokes theorem to Maxwell’s equation. The dual form \ref{eqn:maxwellStokes:100} has the curl structure required for the application of Stokes. Suppose that we evaluate this curl over the three parameter volume element \( d^3 x = i\, dx^0 dx^1 dx^2 \), where \( i = \gamma_0 \gamma_1 \gamma_2 \) is the unit pseudoscalar for the spacetime volume element.

\begin{equation}\label{eqn:maxwellStokes:101}
\begin{aligned}
\int_V d^3 x \cdot \lr{ \grad \wedge G }
&=
\int_V d^3 x \cdot \lr{ \gamma^\mu \wedge \partial_\mu G } \\
&=
\int_V \lr{ d^3 x \cdot \gamma^\mu } \cdot \partial_\mu G \\
&=
\sum_{\mu \ne 3} \int_V \lr{ d^3 x \cdot \gamma^\mu } \cdot \partial_\mu G.
\end{aligned}
\end{equation}

This uses the distribution identity \( A_s \cdot (a \wedge A_r) = (A_s \cdot a) \cdot A_r \) which holds for blades \( A_s, A_r \) provided \( s > r > 0 \). Observe that only the component of the gradient that lies in the tangent space of the three volume manifold contributes to the integral, allowing the gradient to be used in the Stokes integral instead of the vector derivative (see: [1]).
Defining the surface area element

\begin{equation}\label{eqn:maxwellStokes:140}
\begin{aligned}
d^2 x
&= \sum_{\mu \ne 3} i \cdot \gamma^\mu \inv{dx^\mu} d^3 x \\
&= \gamma_1 \gamma_2 dx dy
+ c \gamma_2 \gamma_0 dt dy
+ c \gamma_0 \gamma_1 dt dx,
\end{aligned}
\end{equation}

Stokes theorem for this volume element is now completely specified

\begin{equation}\label{eqn:maxwellStokes:200}
\int_V d^3 x \cdot \lr{ \grad \wedge G }
=
\int_{\partial V} d^2 x \cdot G.
\end{equation}

Application to the dual Maxwell equation gives

\begin{equation}\label{eqn:maxwellStokes:160}
\int_{\partial V} d^2 x \cdot G
= \inv{c \epsilon_0} \int_V d^3 x \cdot (I J).
\end{equation}

After some manipulation, this can be restated in the non-dual form

\begin{equation}\label{eqn:maxwellStokes:180}
\boxed{
\int_{\partial V} \inv{I} d^2 x \wedge F
= \frac{1}{c \epsilon_0 I} \int_V d^3 x \wedge J.
}
\end{equation}

It can be demonstrated that using this with each of the standard basis spacetime 3-volume elements recovers Gauss’s law and the Ampere-Maxwell equation. So, what happened to Faraday’s law and Gauss’s law for magnetism? With application of Stokes to the curl equation from \ref{eqn:maxwellStokes:40}, those equations take the form

\begin{equation}\label{eqn:maxwellStokes:240}
\boxed{
\int_{\partial V} d^2 x \cdot F = 0.
}
\end{equation}

Problem 1:

Demonstrate that the Ampere-Maxwell equation and Gauss’s law can be recovered from the trivector (curl) equation \ref{eqn:maxwellStokes:100}.

Answer

The curl equation is a trivector on each side, so dotting it with each of the four possible trivectors \( \gamma_0 \gamma_1 \gamma_2, \gamma_0 \gamma_2 \gamma_3, \gamma_0 \gamma_1 \gamma_3, \gamma_1 \gamma_2 \gamma_3 \) will give four different scalar equations. For example, dotting with \( \gamma_0 \gamma_1 \gamma_2 \), we have for the curl side

\begin{equation}\label{eqn:maxwellStokes:460}
\begin{aligned}
\lr{ \gamma_0 \gamma_1 \gamma_2 } \cdot \lr{ \gamma^\mu \wedge \partial_\mu G }
&=
\lr{ \lr{ \gamma_0 \gamma_1 \gamma_2 } \cdot \gamma^\mu } \cdot \partial_\mu G \\
&=
(\gamma_0 \gamma_1) \cdot \partial_2 G
+(\gamma_2 \gamma_0) \cdot \partial_1 G
+(\gamma_1 \gamma_2) \cdot \partial_0 G,
\end{aligned}
\end{equation}

and for the current side, we have

\begin{equation}\label{eqn:maxwellStokes:480}
\begin{aligned}
\inv{\epsilon_0 c} \lr{ \gamma_0 \gamma_1 \gamma_2 } \cdot \lr{ I J }
&=
\inv{\epsilon_0 c} \gpgradezero{ \gamma_0 \gamma_1 \gamma_2 (\gamma_0 \gamma_1 \gamma_2 \gamma_3) J } \\
&=
\inv{\epsilon_0 c} \gpgradezero{ -\gamma_3 J } \\
&=
\inv{\epsilon_0 c} \gamma^3 \cdot J \\
&=
\inv{\epsilon_0 c} J^3,
\end{aligned}
\end{equation}

so we have
\begin{equation}\label{eqn:maxwellStokes:500}
(\gamma_0 \gamma_1) \cdot \partial_2 G
+(\gamma_2 \gamma_0) \cdot \partial_1 G
+(\gamma_1 \gamma_2) \cdot \partial_0 G
=
\inv{\epsilon_0 c} J^3.
\end{equation}

Similarly, dotting with \( \gamma_{013} \), \( \gamma_{023} \), and \( \gamma_{123} \) respectively yields
\begin{equation}\label{eqn:maxwellStokes:620}
\begin{aligned}
\gamma_{01} \cdot \partial_3 G + \gamma_{30} \cdot \partial_1 G + \gamma_{13} \cdot \partial_0 G &= - \inv{\epsilon_0 c} J^2 \\
\gamma_{02} \cdot \partial_3 G + \gamma_{30} \cdot \partial_2 G + \gamma_{23} \cdot \partial_0 G &= \inv{\epsilon_0 c} J^1 \\
\gamma_{12} \cdot \partial_3 G + \gamma_{31} \cdot \partial_2 G + \gamma_{23} \cdot \partial_1 G &= -\inv{\epsilon_0} \rho.
\end{aligned}
\end{equation}

Expanding the dual electromagnetic field, first in terms of the spatial vectors, and then in the space time basis, we have
\begin{equation}\label{eqn:maxwellStokes:520}
\begin{aligned}
G
&= -I F \\
&= -I \lr{ \BE + I c \BB } \\
&= -I \BE + c \BB \\
&= -I \BE + c B^k \gamma_k \gamma_0 \\
&= \inv{2} \epsilon^{r s t} \gamma_r \gamma_s E^t + c B^k \gamma_k \gamma_0.
\end{aligned}
\end{equation}

So, dotting with a spatial vector will pick up a component of \( \BB \), we have
\begin{equation}\label{eqn:maxwellStokes:540}
\begin{aligned}
\lr{ \gamma_m \wedge \gamma_0 } \cdot \partial_\mu G
&=
\lr{ \gamma_m \wedge \gamma_0 } \cdot \partial_\mu \lr{
\inv{2} \epsilon^{r s t} \gamma_r \gamma_s E^t + c B^k \gamma_k \gamma_0
} \\
&=
c \partial_\mu B^k
\gpgradezero{
\gamma_m \gamma_0 \gamma_k \gamma_0
} \\
&=
c \partial_\mu B^k
\gpgradezero{
\gamma_m \gamma_0 \gamma_0 \gamma^k
} \\
&=
c \partial_\mu B^k
\delta_m^k \\
&=
c \partial_\mu B^m.
\end{aligned}
\end{equation}

Written out explicitly the electric field contributions to \( G \) are

\begin{equation}\label{eqn:maxwellStokes:560}
\begin{aligned}
-I \BE
&=
-\gamma_{0123k0} E^k \\
&=
-\gamma_{123k} E^k \\
&=
\left\{
\begin{array}{l l}
\gamma_{12} E^3 & \quad \mbox{\( k = 3 \)} \\
\gamma_{31} E^2 & \quad \mbox{\( k = 2 \)} \\
\gamma_{23} E^1 & \quad \mbox{\( k = 1 \)} \\
\end{array}
\right.,
\end{aligned}
\end{equation}

so
\begin{equation}\label{eqn:maxwellStokes:580}
\begin{aligned}
\gamma_{23} \cdot G &= -E^1 \\
\gamma_{31} \cdot G &= -E^2 \\
\gamma_{12} \cdot G &= -E^3.
\end{aligned}
\end{equation}

We now have the pieces required to expand \ref{eqn:maxwellStokes:500} and \ref{eqn:maxwellStokes:620}, which are respectively

\begin{equation}\label{eqn:maxwellStokes:501}
\begin{aligned}
- c \partial_2 B^1 + c \partial_1 B^2 - \partial_0 E^3 &= \inv{\epsilon_0 c} J^3 \\
- c \partial_3 B^1 + c \partial_1 B^3 + \partial_0 E^2 &= -\inv{\epsilon_0 c} J^2 \\
- c \partial_3 B^2 + c \partial_2 B^3 - \partial_0 E^1 &= \inv{\epsilon_0 c} J^1 \\
- \partial_3 E^3 - \partial_2 E^2 - \partial_1 E^1 &= - \inv{\epsilon_0} \rho
\end{aligned}
\end{equation}

which are the components of the Ampere-Maxwell equation, and Gauss’s law

\begin{equation}\label{eqn:maxwellStokes:600}
\begin{aligned}
\inv{\mu_0} \spacegrad \cross \BB – \epsilon_0 \PD{t}{\BE} &= \BJ \\
\spacegrad \cdot \BE &= \frac{\rho}{\epsilon_0}.
\end{aligned}
\end{equation}

Problem 2:

Prove \ref{eqn:maxwellStokes:180}.

Answer

The proof just requires the expansion of the dot products using scalar selection

\begin{equation}\label{eqn:maxwellStokes:260}
\begin{aligned}
d^2 x \cdot G
&=
\gpgradezero{ d^2 x (-I) F } \\
&=
-\gpgradezero{ I d^2 x F } \\
&=
-I \lr{ d^2 x \wedge F },
\end{aligned}
\end{equation}

and
for the three volume dot product

\begin{equation}\label{eqn:maxwellStokes:280}
\begin{aligned}
d^3 x \cdot (I J)
&=
\gpgradezero{
d^3 x\, I J
} \\
&=
-\gpgradezero{
I d^3 x\, J
} \\
&=
-I \lr{ d^3 x \wedge J }.
\end{aligned}
\end{equation}

Problem 3:

Using each of the four possible spacetime volume elements, write out the components of the Stokes integral
\ref{eqn:maxwellStokes:180}.

Answer

The four possible volume and associated area elements are
\begin{equation}\label{eqn:maxwellStokes:220}
\begin{aligned}
d^3 x = c \gamma_0 \gamma_1 \gamma_2 dt dx dy & \qquad d^2 x = \gamma_1 \gamma_2 dx dy + c \gamma_2 \gamma_0 dy dt + c \gamma_0 \gamma_1 dt dx \\
d^3 x = c \gamma_0 \gamma_1 \gamma_3 dt dx dz & \qquad d^2 x = \gamma_1 \gamma_3 dx dz + c \gamma_3 \gamma_0 dz dt + c \gamma_0 \gamma_1 dt dx \\
d^3 x = c \gamma_0 \gamma_2 \gamma_3 dt dy dz & \qquad d^2 x = \gamma_2 \gamma_3 dy dz + c \gamma_3 \gamma_0 dz dt + c \gamma_0 \gamma_2 dt dy \\
d^3 x = \gamma_1 \gamma_2 \gamma_3 dx dy dz & \qquad d^2 x = \gamma_1 \gamma_2 dx dy + \gamma_2 \gamma_3 dy dz + \gamma_3 \gamma_1 dz dx \\
\end{aligned}
\end{equation}

Wedging the area element with \( F \) will produce pseudoscalar multiples of the various \( \BE \) and \( \BB \) components, but a recipe for these components is required.

First note that for \( k \ne 0 \), the wedge \( \gamma_k \wedge \gamma_0 \wedge F \) will just select components of \( \BB \). This can be seen first by simplifying

\begin{equation}\label{eqn:maxwellStokes:300}
\begin{aligned}
I \BB
&=
\gamma_{0 1 2 3} B^m \gamma_{m 0} \\
&=
\left\{
\begin{array}{l l}
\gamma_{3 2} B^1 & \quad \mbox{\( m = 1 \)} \\
\gamma_{1 3} B^2 & \quad \mbox{\( m = 2 \)} \\
\gamma_{2 1} B^3 & \quad \mbox{\( m = 3 \)}
\end{array}
\right.,
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:maxwellStokes:320}
I \BB = - \epsilon_{a b c} \gamma_{a b} B^c.
\end{equation}

From this it follows that

\begin{equation}\label{eqn:maxwellStokes:340}
\gamma_k \wedge \gamma_0 \wedge F = I c B^k.
\end{equation}

The electric field components are easier to pick out. Those are selected by

\begin{equation}\label{eqn:maxwellStokes:360}
\begin{aligned}
\gamma_m \wedge \gamma_n \wedge F
&= \gamma_m \wedge \gamma_n \wedge \gamma_k \wedge \gamma_0 E^k \\
&= -I E^k \epsilon_{m n k}.
\end{aligned}
\end{equation}

The respective volume element wedge products with \( J \) are

\begin{equation}\label{eqn:maxwellStokes:400}
\begin{aligned}
\inv{I} d^3 x \wedge J &= \inv{c \epsilon_0} J^3 \\
\inv{I} d^3 x \wedge J &= \inv{c \epsilon_0} J^2 \\
\inv{I} d^3 x \wedge J &= \inv{c \epsilon_0} J^1,
\end{aligned}
\end{equation}

and the respective sum of surface area elements wedged with the electromagnetic field are

\begin{equation}\label{eqn:maxwellStokes:380}
\begin{aligned}
\inv{I} d^2 x \wedge F &= - \evalbar{E^3}{c \Delta t} dx dy + c \lr{ \evalbar{B^2}{\Delta x} dy - \evalbar{B^1}{\Delta y} dx } dt \\
\inv{I} d^2 x \wedge F &= \evalbar{E^2}{c \Delta t} dx dz + c \lr{ \evalbar{B^3}{\Delta x} dz - \evalbar{B^1}{\Delta z} dx } dt \\
\inv{I} d^2 x \wedge F &= - \evalbar{E^1}{c \Delta t} dy dz + c \lr{ \evalbar{B^3}{\Delta y} dz - \evalbar{B^2}{\Delta z} dy } dt \\
\inv{I} d^2 x \wedge F &= - \evalbar{E^3}{\Delta z} dy dx - \evalbar{E^2}{\Delta y} dx dz - \evalbar{E^1}{\Delta x} dz dy,
\end{aligned}
\end{equation}

so
\begin{equation}\label{eqn:maxwellStokes:381}
\begin{aligned}
\int_{\partial V} - \evalbar{E^3}{c \Delta t} dx dy + c \lr{ \evalbar{B^2}{\Delta x} dy - \evalbar{B^1}{\Delta y} dx } dt &=
c \int_V dx dy dt \inv{c \epsilon_0} J^3 \\
\int_{\partial V} \evalbar{E^2}{c \Delta t} dx dz + c \lr{ \evalbar{B^3}{\Delta x} dz - \evalbar{B^1}{\Delta z} dx } dt &=
-c \int_V dx dz dt \inv{c \epsilon_0} J^2 \\
\int_{\partial V} - \evalbar{E^1}{c \Delta t} dy dz + c \lr{ \evalbar{B^3}{\Delta y} dz - \evalbar{B^2}{\Delta z} dy } dt &=
c \int_V dy dz dt \inv{c \epsilon_0} J^1 \\
\int_{\partial V} - \evalbar{E^3}{\Delta z} dy dx - \evalbar{E^2}{\Delta y} dx dz - \evalbar{E^1}{\Delta x} dz dy &=
-\int_V dx dy dz \inv{\epsilon_0} \rho.
\end{aligned}
\end{equation}

Observe that if the volume elements are taken to their infinitesimal limits, we recover the traditional differential forms of the Ampere-Maxwell and Gauss’s law equations.

References

[1] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

Reciprocity theorem in Geometric Algebra

February 19, 2015 ece1229


The reciprocity theorem involves a Poynting-like antisymmetric difference of the following form

\begin{equation}\label{eqn:reciprocityTheoremGA:20}
\BE^{(a)} \cross \BH^{(b)} - \BE^{(b)} \cross \BH^{(a)}.
\end{equation}

This smells like something that can probably be related to the combined electromagnetic field multivectors in some sort of structured fashion. Guessing that this is related to the antisymmetric sum of two electromagnetic field multivectors turns out to be correct. Let

\begin{equation}\label{eqn:reciprocityTheoremGA:60}
F^{(a)} = \BE^{(a)} + I c \BB^{(a)}
\end{equation}
\begin{equation}\label{eqn:reciprocityTheoremGA:80}
F^{(b)} = \BE^{(b)} + I c \BB^{(b)}.
\end{equation}

Now form the antisymmetric sum

\begin{equation}\label{eqn:reciprocityTheoremGA:100}
\begin{aligned}
\inv{2} \lr{ F^{(a)} F^{(b)} – F^{(b)} F^{(a)} }
&=
\inv{2} \lr{\BE^{(a)} + I c \BB^{(a)}}
\lr{\BE^{(b)} + I c \BB^{(b)}} \\
&-
\inv{2} \lr{\BE^{(b)} + I c \BB^{(b)}}
\lr{\BE^{(a)} + I c \BB^{(a)}} \\
&=
\inv{2} \lr{ \BE^{(a)} \BE^{(b)} -\BE^{(b)} \BE^{(a)} }
+ \frac{I c}{2} \lr{ \BE^{(a)} \BB^{(b)} - \BB^{(b)} \BE^{(a)} }\\&
+ \frac{I c}{2} \lr{ \BB^{(a)} \BE^{(b)} - \BE^{(b)} \BB^{(a)} }
+ \frac{c^2}{2} \lr{ \BB^{(b)} \BB^{(a)} - \BB^{(a)} \BB^{(b)} } \\
&=
\BE^{(a)} \wedge \BE^{(b)} + c^2 \lr{ \BB^{(b)} \wedge \BB^{(a)} }
+ I c \lr{
\BE^{(a)} \wedge \BB^{(b)}
+
\BB^{(a)} \wedge \BE^{(b)}
} \\
&=
I \BE^{(a)} \cross \BE^{(b)} + c^2 I \lr{ \BB^{(b)} \cross \BB^{(a)} }
-
c \lr{
\BE^{(a)} \cross \BB^{(b)}
+
\BB^{(a)} \cross \BE^{(b)}
}
\end{aligned}
\end{equation}

This has two components: the first is a bivector (pseudoscalar times vector) that includes all the non-mixed products, and the second is a vector that includes all the mixed terms. We can therefore write the antisymmetric difference of the reciprocity theorem by extracting just the grade one terms of the antisymmetric sum of the combined electromagnetic field

\begin{equation}\label{eqn:reciprocityTheoremGA:120}
\BE^{(a)} \cross \BH^{(b)} - \BE^{(b)} \cross \BH^{(a)}
=
-\frac{1}{2 c \mu_0} \gpgradeone{ \lr{ F^{(a)} F^{(b)} – F^{(b)} F^{(a)} } }.
\end{equation}
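
The \( -1/(2 c \mu_0) \) scaling in \ref{eqn:reciprocityTheoremGA:120} can be verified by applying \( \BB = \mu_0 \BH \) and the antisymmetry of the cross product to the vector grade of \ref{eqn:reciprocityTheoremGA:100}

\begin{equation*}
- c \lr{ \BE^{(a)} \cross \BB^{(b)} + \BB^{(a)} \cross \BE^{(b)} }
=
- c \mu_0 \lr{ \BE^{(a)} \cross \BH^{(b)} - \BE^{(b)} \cross \BH^{(a)} },
\end{equation*}

recalling that \ref{eqn:reciprocityTheoremGA:100} already includes a factor of one half.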

The observation that the antisymmetrization used in the reciprocity theorem is only one portion of the larger electromagnetic field antisymmetrization introduces two new questions

  1. How would the reciprocity theorem be derived directly in terms of \( F^{(a)} F^{(b)} – F^{(b)} F^{(a)} \)?
  2. What is the significance of the other portion of this antisymmetrization \( \BE^{(a)} \cross \BE^{(b)} - c^2 \mu_0^2 \lr{ \BH^{(a)} \cross \BH^{(b)} } \)?

… more to come.