Maxwell’s equation

More satisfying editing of classical mechanics notes.

November 3, 2020 math and physics play 2 comments

I’ve purged about 30 pages of material related to field Lagrangian densities and Maxwell’s equation, replacing it with about 8 pages of new, less incoherent material.

As before, I’ve physically ripped out all the pages that have been replaced, which is satisfying, and makes it easier to see what is left to review.

The new version is now reduced to 333 pages, close to a 100 page reduction from the original mess. I may print myself a new physical copy, as I’ve moved things around so much that I have to search the LaTeX source to figure out where to make changes.

Maxwell’s equation Lagrangian (geometric algebra and tensor formalism)

November 1, 2020 math and physics play 1 comment

[Click here for a PDF of this post with nicer formatting]

Maxwell’s equation from a geometric algebra Lagrangian.

Motivation.

In my classical mechanics notes, I’ve got computations of Maxwell’s equation (singular, since in geometric algebra it takes the form of a single multivector equation) from a Lagrangian in various ways (using tensor, scalar and multivector Lagrangians), but all of these seem more convoluted than they should be.
Here we do this from scratch, starting with the action principle for field variables, covering:

  • Derivation of the relativistic form of the Euler-Lagrange field equations from the covariant form of the action,
  • Derivation of Maxwell’s equation (in its STA form) from the Maxwell Lagrangian,
  • Relationship of the STA Maxwell Lagrangian to its tensor equivalent,
  • Relationship of the STA form of Maxwell’s equation to its tensor equivalents,
  • Relationship of the STA form of Maxwell’s equation to its conventional Gibbs form, and
  • Demonstration that we may use a multivector valued Lagrangian with all of \( F^2 \), not just the scalar part.

It is assumed that the reader is thoroughly familiar with the STA formalism, and if that is not the case, there is no better reference than [1].

Field action.

Theorem 1.1: Relativistic Euler-Lagrange field equations.

Let \( \phi \rightarrow \phi + \delta \phi \) be any variation of the field, such that the variation
\( \delta \phi \) vanishes at the boundaries of the action integral
\begin{equation}\label{eqn:maxwells:2120}
S = \int d^4 x \LL(\phi, \partial_\nu \phi).
\end{equation}
The extreme value of the action is found when the Euler-Lagrange equations
\begin{equation}\label{eqn:maxwells:2140}
0 = \PD{\phi}{\LL} - \partial_\nu \PD{(\partial_\nu \phi)}{\LL},
\end{equation}
are satisfied. For a Lagrangian with multiple field variables, there will be one such equation for each field.

Start proof:

To ease the visual burden, designate the variation of the field by \( \delta \phi = \epsilon \), and perform a first order expansion of the varied Lagrangian
\begin{equation}\label{eqn:maxwells:20}
\begin{aligned}
\LL
&\rightarrow
\LL(\phi + \epsilon, \partial_\nu (\phi + \epsilon)) \\
&=
\LL(\phi, \partial_\nu \phi)
+
\PD{\phi}{\LL} \epsilon +
\PD{(\partial_\nu \phi)}{\LL} \partial_\nu \epsilon.
\end{aligned}
\end{equation}
The variation of the Lagrangian is
\begin{equation}\label{eqn:maxwells:40}
\begin{aligned}
\delta \LL
&=
\PD{\phi}{\LL} \epsilon +
\PD{(\partial_\nu \phi)}{\LL} \partial_\nu \epsilon \\
&=
\PD{\phi}{\LL} \epsilon +
\partial_\nu \lr{ \PD{(\partial_\nu \phi)}{\LL} \epsilon }
-
\epsilon \partial_\nu \PD{(\partial_\nu \phi)}{\LL},
\end{aligned}
\end{equation}
which we may plug into the action integral to find
\begin{equation}\label{eqn:maxwells:60}
\delta S
=
\int d^4 x \epsilon \lr{
\PD{\phi}{\LL}
-
\partial_\nu \PD{(\partial_\nu \phi)}{\LL}
}
+
\int d^4 x
\partial_\nu \lr{ \PD{(\partial_\nu \phi)}{\LL} \epsilon }.
\end{equation}
The last integral can be evaluated along the \( dx^\nu \) direction, leaving
\begin{equation}\label{eqn:maxwells:80}
\int d^3 x
\evalbar{ \PD{(\partial_\nu \phi)}{\LL} \epsilon }{\Delta x^\nu},
\end{equation}
where \( d^3 x = dx^\alpha dx^\beta dx^\gamma \) is the product of differentials that does not include \( dx^\nu \). By construction, \( \epsilon \) vanishes on the boundary of the action integral so \ref{eqn:maxwells:80} is zero. The action takes its extreme value when
\begin{equation}\label{eqn:maxwells:100}
0 = \delta S
=
\int d^4 x \epsilon \lr{
\PD{\phi}{\LL}
-
\partial_\nu \PD{(\partial_\nu \phi)}{\LL}
}.
\end{equation}
The proof is complete after noting that this must hold for all variations of the field \( \epsilon \), which means that we must have
\begin{equation}\label{eqn:maxwells:120}
0 =
\PD{\phi}{\LL}
-
\partial_\nu \PD{(\partial_\nu \phi)}{\LL}.
\end{equation}

End proof.
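Nothing in this theorem is specific to electromagnetism, so it can be sanity checked against a simple example. The sketch below (my own illustration, not part of the notes) feeds a free massless scalar field Lagrangian in 1+1 dimensions through sympy’s `euler_equations` helper, and verifies that the expected wave equation pops out.

```python
from sympy import Function, Rational, diff, simplify, symbols
from sympy.calculus.euler import euler_equations

t, x = symbols('t x')
phi = Function('phi')(t, x)

# Toy Lagrangian density: free massless scalar field, with c = 1.
L = Rational(1, 2) * (diff(phi, t)**2 - diff(phi, x)**2)

# One Euler-Lagrange equation, since there is a single field variable.
eq = euler_equations(L, [phi], [t, x])[0]

# Expect the wave equation phi_tt - phi_xx = 0 (up to an overall sign).
expected = diff(phi, t, 2) - diff(phi, x, 2)
```

With multiple field variables, `euler_equations` would return one equation per field, exactly as stated in the theorem.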

Armed with the Euler-Lagrange equations, we can apply them to the Lagrangian for Maxwell’s equation, which we will claim has the following form.

Theorem 1.2: Maxwell’s equation Lagrangian.

Application of the Euler-Lagrange equations to the Lagrangian
\begin{equation}\label{eqn:maxwells:2160}
\LL = - \frac{\epsilon_0 c}{2} F \cdot F + J \cdot A,
\end{equation}
where \( F = \grad \wedge A \), yields the vector portion of Maxwell’s equation
\begin{equation}\label{eqn:maxwells:2180}
\grad \cdot F = \inv{\epsilon_0 c} J,
\end{equation}
which implies
\begin{equation}\label{eqn:maxwells:2200}
\grad F = \inv{\epsilon_0 c} J.
\end{equation}
This is Maxwell’s equation.

Start proof:

We wish to apply all of the Euler-Lagrange equations simultaneously (i.e. once for each of the four components \( A_\nu \) of the potential), and cast the result into four-vector form
\begin{equation}\label{eqn:maxwells:140}
0 = \gamma_\nu \lr{ \PD{A_\nu}{} – \partial_\mu \PD{(\partial_\mu A_\nu)}{} } \LL.
\end{equation}
Since our Lagrangian splits nicely into kinetic and interaction terms, this gives us
\begin{equation}\label{eqn:maxwells:160}
0 = \gamma_\nu \lr{ \PD{A_\nu}{(A \cdot J)} + \frac{\epsilon_0 c}{2} \partial_\mu \PD{(\partial_\mu A_\nu)}{ (F \cdot F)} }.
\end{equation}
The interaction term above is just
\begin{equation}\label{eqn:maxwells:180}
\gamma_\nu \PD{A_\nu}{(A \cdot J)}
=
\gamma_\nu \PD{A_\nu}{(A_\mu J^\mu)}
=
\gamma_\nu J^\nu
=
J,
\end{equation}
but the kinetic term takes a bit more work. Let’s start with evaluating
\begin{equation}\label{eqn:maxwells:200}
\begin{aligned}
\PD{(\partial_\mu A_\nu)}{ (F \cdot F)}
&=
\PD{(\partial_\mu A_\nu)}{ F } \cdot F
+
F \cdot \PD{(\partial_\mu A_\nu)}{ F } \\
&=
2 \PD{(\partial_\mu A_\nu)}{ F } \cdot F \\
&=
2 \PD{(\partial_\mu A_\nu)}{ (\partial_\alpha A_\beta) } \lr{ \gamma^\alpha \wedge \gamma^\beta } \cdot F \\
&=
2 \lr{ \gamma^\mu \wedge \gamma^\nu } \cdot F.
\end{aligned}
\end{equation}
We hit this with the \(\mu\)-partial and expand as a scalar selection to find
\begin{equation}\label{eqn:maxwells:220}
\begin{aligned}
\partial_\mu \PD{(\partial_\mu A_\nu)}{ (F \cdot F)}
&=
2 \lr{ \partial_\mu \gamma^\mu \wedge \gamma^\nu } \cdot F \\
&=
- 2 (\gamma^\nu \wedge \grad) \cdot F \\
&=
- 2 \gpgradezero{ (\gamma^\nu \wedge \grad) F } \\
&=
- 2 \gpgradezero{ \gamma^\nu \grad F - \gamma^\nu \cdot \grad F } \\
&=
- 2 \gamma^\nu \cdot \lr{ \grad \cdot F }.
\end{aligned}
\end{equation}
Putting all the pieces together yields
\begin{equation}\label{eqn:maxwells:240}
0
= J - \epsilon_0 c \gamma_\nu \lr{ \gamma^\nu \cdot \lr{ \grad \cdot F } }
= J - \epsilon_0 c \lr{ \grad \cdot F },
\end{equation}
but
\begin{equation}\label{eqn:maxwells:260}
\begin{aligned}
\grad \cdot F
&=
\grad F - \grad \wedge F \\
&=
\grad F - \grad \wedge (\grad \wedge A) \\
&=
\grad F,
\end{aligned}
\end{equation}
so the multivector field equations for this Lagrangian are
\begin{equation}\label{eqn:maxwells:280}
\grad F = \inv{\epsilon_0 c} J,
\end{equation}
as claimed.

End proof.

Problem: Correspondence with tensor formalism.

Cast the Lagrangian of \ref{eqn:maxwells:2160} into the conventional tensor form
\begin{equation}\label{eqn:maxwells:300}
\LL = \frac{\epsilon_0 c}{4} F_{\mu\nu} F^{\mu\nu} + A^\mu J_\mu.
\end{equation}
Also show that the four-vector component of Maxwell’s equation \( \grad \cdot F = J/(\epsilon_0 c) \) is equivalent to the conventional tensor form of the Gauss-Ampere law
\begin{equation}\label{eqn:maxwells:320}
\partial_\mu F^{\mu\nu} = \inv{\epsilon_0 c} J^\nu,
\end{equation}
where \( F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu \) as usual. Also show that the trivector component of Maxwell’s equation \( \grad \wedge F = 0 \) is equivalent to the tensor form of the Gauss-Faraday law
\begin{equation}\label{eqn:maxwells:340}
\partial_\alpha \lr{ \epsilon^{\alpha \beta \mu \nu} F_{\mu\nu} } = 0.
\end{equation}

Answer

To show the Lagrangian correspondence we must expand \( F \cdot F \) in coordinates
\begin{equation}\label{eqn:maxwells:360}
\begin{aligned}
F \cdot F
&=
( \grad \wedge A ) \cdot
( \grad \wedge A ) \\
&=
\lr{ (\gamma^\mu \partial_\mu) \wedge (\gamma^\nu A_\nu) }
\cdot
\lr{ (\gamma^\alpha \partial_\alpha) \wedge (\gamma^\beta A_\beta) } \\
&=
\lr{ \gamma^\mu \wedge \gamma^\nu } \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta }
(\partial_\mu A_\nu )
(\partial^\alpha A^\beta ) \\
&=
\lr{
{\delta^\mu}_\beta
{\delta^\nu}_\alpha
-
{\delta^\mu}_\alpha
{\delta^\nu}_\beta
}
(\partial_\mu A_\nu )
(\partial^\alpha A^\beta ) \\
&=
- \partial_\mu A_\nu \lr{
\partial^\mu A^\nu
-
\partial^\nu A^\mu
} \\
&=
- \partial_\mu A_\nu F^{\mu\nu} \\
&=
- \inv{2} \lr{
\partial_\mu A_\nu F^{\mu\nu}
+
\partial_\nu A_\mu F^{\nu\mu}
} \\
&=
- \inv{2} \lr{
\partial_\mu A_\nu
-
\partial_\nu A_\mu
}
F^{\mu\nu} \\
&=
-
\inv{2}
F_{\mu\nu}
F^{\mu\nu}.
\end{aligned}
\end{equation}
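The last few steps above are pure index manipulation, so they are easy to spot check numerically by treating the sixteen values \( \partial_\mu A_\nu \) as free numbers. This little sketch is my own, not from the post; it raises indices with the \( (+,-,-,-) \) metric.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # metric, signature (+,-,-,-)

dA = rng.normal(size=(4, 4))  # dA[m, n] stands in for d_m A_n
F_lo = dA - dA.T              # F_{mn} = d_m A_n - d_n A_m
F_up = eta @ F_lo @ eta       # F^{mn}: raise both indices with the metric

# F . F = -(d_m A_n) F^{mn} = -(1/2) F_{mn} F^{mn}
FdotF_a = -np.einsum('mn,mn->', dA, F_up)
FdotF_b = -0.5 * np.einsum('mn,mn->', F_lo, F_up)
```

The two values agree because only the antisymmetric part of \( \partial_\mu A_\nu \) survives contraction with the antisymmetric \( F^{\mu\nu} \).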
With a substitution of this and \( A \cdot J = A_\mu J^\mu \) back into the Lagrangian, we recover the tensor form of the Lagrangian.

To recover the tensor form of Maxwell’s equation, we first split it into vector and trivector parts
\begin{equation}\label{eqn:maxwells:1580}
\grad \cdot F + \grad \wedge F = \inv{\epsilon_0 c} J.
\end{equation}
Now the vector component may be expanded in coordinates by dotting both sides with \( \gamma^\nu \) to find
\begin{equation}\label{eqn:maxwells:1600}
\inv{\epsilon_0 c} \gamma^\nu \cdot J = \inv{\epsilon_0 c} J^\nu,
\end{equation}
and
\begin{equation}\label{eqn:maxwells:1620}
\begin{aligned}
\gamma^\nu \cdot
\lr{ \grad \cdot F }
&=
\partial_\mu \gamma^\nu \cdot \lr{ \gamma^\mu \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta } \partial^\alpha A^\beta } \\
&=
\lr{
{\delta^\mu}_\alpha
{\delta^\nu}_\beta
-
{\delta^\nu}_\alpha
{\delta^\mu}_\beta
}
\partial_\mu
\partial^\alpha A^\beta \\
&=
\partial_\mu
\lr{
\partial^\mu A^\nu
-
\partial^\nu A^\mu
} \\
&=
\partial_\mu F^{\mu\nu}.
\end{aligned}
\end{equation}
Equating \ref{eqn:maxwells:1600} and \ref{eqn:maxwells:1620} finishes the first part of the job. For the trivector component, we have
\begin{equation}\label{eqn:maxwells:1640}
0
= \grad \wedge F
= (\gamma^\mu \partial_\mu) \wedge \lr{ \gamma^\alpha \wedge \gamma^\beta } \partial_\alpha A_\beta
= \inv{2} (\gamma^\mu \partial_\mu) \wedge \lr{ \gamma^\alpha \wedge \gamma^\beta } F_{\alpha \beta}.
\end{equation}
Wedging with \( \gamma^\tau \) and then multiplying by \( -2 I \) we find
\begin{equation}\label{eqn:maxwells:1660}
0 = - \lr{ \gamma^\mu \wedge \gamma^\alpha \wedge \gamma^\beta \wedge \gamma^\tau } I \partial_\mu F_{\alpha \beta},
\end{equation}
but
\begin{equation}\label{eqn:maxwells:1680}
\gamma^\mu \wedge \gamma^\alpha \wedge \gamma^\beta \wedge \gamma^\tau = -I \epsilon^{\mu \alpha \beta \tau},
\end{equation}
which leaves us with
\begin{equation}\label{eqn:maxwells:1700}
\epsilon^{\mu \alpha \beta \tau} \partial_\mu F_{\alpha \beta} = 0,
\end{equation}
as expected.
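This is really the Bianchi identity: it holds automatically for any potential, since \( \epsilon^{\mu \alpha \beta \tau} \partial_\mu \partial_\alpha A_\beta \) antisymmetrizes a pair of mixed partials. Here is a symbolic spot check, with arbitrary smooth potential components of my own choosing (any others would do as well).

```python
from itertools import product

from sympy import LeviCivita, cos, diff, exp, simplify, sin, symbols

x = symbols('x0:4')
# Arbitrary smooth potential components, chosen for illustration only.
A = [sin(x[0] * x[1]), cos(x[2]), x[3]**3 * x[0], exp(x[1] * x[3])]

def F(a, b):
    """F_{ab} = d_a A_b - d_b A_a."""
    return diff(A[b], x[a]) - diff(A[a], x[b])

# eps^{m a b tau} d_m F_{ab}, one value for each free index tau.
gauss_faraday = [
    simplify(sum(LeviCivita(m, a, b, tau) * diff(F(a, b), x[m])
                 for m, a, b in product(range(4), repeat=3)))
    for tau in range(4)
]
```

All four components vanish identically, independent of the potential chosen.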

Problem: Correspondence of tensor and Gibbs forms of Maxwell’s equations.

Given the identifications

\begin{equation}\label{eqn:lorentzForceCovariant:1500}
F^{k0} = E^k,
\end{equation}
and
\begin{equation}\label{eqn:lorentzForceCovariant:1520}
F^{rs} = -\epsilon^{rst} B^t,
\end{equation}
and
\begin{equation}\label{eqn:maxwells:1560}
J^\mu = \lr{ c \rho, \BJ },
\end{equation}
the reader should satisfy themselves that the traditional Gibbs form of Maxwell’s equations can be recovered from \ref{eqn:maxwells:320}.

Answer

The reader is referred to Exercise 3.4 “Electrodynamics, variational principle.” from [2].

Problem: Correspondence with grad and curl form of Maxwell’s equations.

With \( J = c \rho \gamma_0 + J^k \gamma_k \) and \( F = \BE + I c \BB \), show that Maxwell’s equation, as stated in \ref{eqn:maxwells:2200}, expands to the conventional div and curl forms of Maxwell’s equations.

Answer

To obtain Maxwell’s equations in their traditional vector forms, we pre-multiply both sides with \( \gamma_0 \)
\begin{equation}\label{eqn:maxwells:1720}
\gamma_0 \grad F = \inv{\epsilon_0 c} \gamma_0 J,
\end{equation}
and then select each grade separately. First observe that the RHS above has scalar and bivector components, as
\begin{equation}\label{eqn:maxwells:1740}
\gamma_0 J
=
c \rho + J^k \gamma_0 \gamma_k.
\end{equation}
In terms of the spatial bivector basis \( \Be_k = \gamma_k \gamma_0 \), the RHS of \ref{eqn:maxwells:1720} is
\begin{equation}\label{eqn:maxwells:1760}
\gamma_0 \frac{J}{\epsilon_0 c} = \frac{\rho}{\epsilon_0} - \mu_0 c \BJ.
\end{equation}
For the LHS, first note that
\begin{equation}\label{eqn:maxwells:1780}
\begin{aligned}
\gamma_0 \grad
&=
\gamma_0
\lr{
\gamma_0 \partial^0 +
\gamma_k \partial^k
} \\
&=
\partial_0 - \gamma_0 \gamma_k \partial_k \\
&=
\inv{c} \PD{t}{} + \spacegrad.
\end{aligned}
\end{equation}
We can express the entire LHS of \ref{eqn:maxwells:1720} in the bivector spatial basis, so that Maxwell’s equation in multivector form is
\begin{equation}\label{eqn:maxwells:1800}
\lr{ \inv{c} \PD{t}{} + \spacegrad } \lr{ \BE + I c \BB } = \frac{\rho}{\epsilon_0} - \mu_0 c \BJ.
\end{equation}
Selecting the scalar, vector, bivector, and trivector grades of both sides (in the spatial basis) gives the following set of respective equations
\begin{equation}\label{eqn:maxwells:1840}
\spacegrad \cdot \BE = \frac{\rho}{\epsilon_0}
\end{equation}
\begin{equation}\label{eqn:maxwells:1860}
\inv{c} \partial_t \BE + I c \spacegrad \wedge \BB = - \mu_0 c \BJ
\end{equation}
\begin{equation}\label{eqn:maxwells:1880}
\spacegrad \wedge \BE + I \partial_t \BB = 0
\end{equation}
\begin{equation}\label{eqn:maxwells:1900}
I c \spacegrad \cdot \BB = 0,
\end{equation}
which we can rewrite, after some duality transformations (and noting that \( \mu_0 \epsilon_0 c^2 = 1 \)), as
\begin{equation}\label{eqn:maxwells:1940}
\spacegrad \cdot \BE = \frac{\rho}{\epsilon_0}
\end{equation}
\begin{equation}\label{eqn:maxwells:1960}
\spacegrad \cross \BB - \mu_0 \epsilon_0 \PD{t}{\BE} = \mu_0 \BJ
\end{equation}
\begin{equation}\label{eqn:maxwells:1980}
\spacegrad \cross \BE + \PD{t}{\BB} = 0
\end{equation}
\begin{equation}\label{eqn:maxwells:2000}
\spacegrad \cdot \BB = 0,
\end{equation}
which are Maxwell’s equations in their traditional form.
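The duality transformations used in that last step all reduce to the 3D identity that the wedge of two vectors equals the pseudoscalar times their cross product. As an illustration of my own (not from the post), the basis vectors \( \Be_k \) can be modeled with the Pauli matrices, so that the pseudoscalar \( \Be_1 \Be_2 \Be_3 \) becomes \( i \) times the identity matrix, and the identity can be spot checked numerically.

```python
import numpy as np

# Pauli matrices as a matrix model of the 3D basis vectors e_k.
e = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I = e[0] @ e[1] @ e[2]  # pseudoscalar e1 e2 e3; equals 1j times identity

rng = np.random.default_rng(2)
a, b = rng.normal(size=3), rng.normal(size=3)
av = sum(c * ek for c, ek in zip(a, e))  # the vector a as a matrix
bv = sum(c * ek for c, ek in zip(b, e))

wedge = 0.5 * (av @ bv - bv @ av)  # a ^ b, the grade 2 part of a b
crossv = sum(c * ek for c, ek in zip(np.cross(a, b), e))
```

The bivector `wedge` equals `I @ crossv`, which is the duality relation used to convert the wedge products above into curls.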

Problem: Alternative multivector Lagrangian.

Show that a scalar+pseudoscalar Lagrangian of the following form
\begin{equation}\label{eqn:maxwells:2220}
\LL = - \frac{\epsilon_0 c}{2} F^2 + J \cdot A,
\end{equation}
which omits the scalar selection of the Lagrangian in \ref{eqn:maxwells:2160}, also represents Maxwell’s equation. Discuss the scalar and pseudoscalar components of \( F^2 \), and show why the pseudoscalar inclusion is irrelevant.

Answer

The quantity \( F^2 = F \cdot F + F \wedge F \) has both scalar and pseudoscalar
components. Note that, unlike a vector, the wedge of a 4D bivector with itself need not be zero (for example, \( \gamma_0 \gamma_1 + \gamma_2 \gamma_3 \) wedged with itself is \( 2 \gamma_0 \gamma_1 \gamma_2 \gamma_3 \)).
We can see this multivector nature nicely by expansion in terms of the electric and magnetic fields
\begin{equation}\label{eqn:maxwells:2020}
\begin{aligned}
F^2
&= \lr{ \BE + I c \BB }^2 \\
&= \BE^2 - c^2 \BB^2 + I c \lr{ \BE \BB + \BB \BE } \\
&= \BE^2 - c^2 \BB^2 + 2 I c \BE \cdot \BB.
\end{aligned}
\end{equation}
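This expansion can be verified in a Pauli-matrix representation of the 3D algebra (a model of my own choosing, with a toy value for \( c \)): squaring the matrix for \( F \) should produce exactly the scalar and pseudoscalar parts computed above.

```python
import numpy as np

# Pauli-matrix model of the 3D basis e_k; the pseudoscalar I = e1 e2 e3
# is represented by 1j times the identity matrix.
e = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I = e[0] @ e[1] @ e[2]

rng = np.random.default_rng(3)
E, B = rng.normal(size=3), rng.normal(size=3)
c = 3.0  # toy value, chosen so the numbers stay O(1)

Ev = sum(x * ek for x, ek in zip(E, e))
Bv = sum(x * ek for x, ek in zip(B, e))
F = Ev + c * (I @ Bv)  # F = E + I c B

# Expect F^2 = E^2 - c^2 B^2 + 2 I c (E . B)
expected = (E @ E - c**2 * (B @ B)) * np.eye(2) + 2 * c * (E @ B) * I
```

Since `I` commutes with everything and squares to minus one, the cross terms collapse to the symmetric product \( \BE \BB + \BB \BE = 2 \BE \cdot \BB \), just as in the derivation.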
Both the scalar and pseudoscalar parts of \( F^2 \) are Lorentz invariant, a requirement of our Lagrangian, but most Maxwell equation Lagrangians include only the scalar \( \BE^2 - c^2 \BB^2 \) component of the field square. If we allow the Lagrangian to be multivector valued, and evaluate the Euler-Lagrange equations, we quickly find the same results
\begin{equation}\label{eqn:maxwells:2040}
\begin{aligned}
0
&= \gamma_\nu \lr{ \PD{A_\nu}{} – \partial_\mu \PD{(\partial_\mu A_\nu)}{} } \LL \\
&= \gamma_\nu \lr{ J^\nu + \frac{\epsilon_0 c}{2} \partial_\mu
\lr{
(\gamma^\mu \wedge \gamma^\nu) F
+
F (\gamma^\mu \wedge \gamma^\nu)
}
}.
\end{aligned}
\end{equation}
Here some steps are skipped, building on our previous scalar Euler-Lagrange evaluation experience. We have a symmetric product of two bivectors, which we can express as a 0,4 grade selection, since
\begin{equation}\label{eqn:maxwells:2060}
\gpgrade{ X F }{0,4} = \inv{2} \lr{ X F + F X },
\end{equation}
for any two bivectors \( X, F \). This leaves
\begin{equation}\label{eqn:maxwells:2080}
\begin{aligned}
0
&= J + \epsilon_0 c \gamma_\nu \gpgrade{ (\grad \wedge \gamma^\nu) F }{0,4} \\
&= J + \epsilon_0 c \gamma_\nu \gpgrade{ -\gamma^\nu \grad F + (\gamma^\nu \cdot \grad) F }{0,4} \\
&= J + \epsilon_0 c \gamma_\nu \gpgrade{ -\gamma^\nu \grad F }{0,4} \\
&= J - \epsilon_0 c \gamma_\nu
\lr{
\gamma^\nu \cdot \lr{ \grad \cdot F } + \gamma^\nu \wedge \grad \wedge F
}.
\end{aligned}
\end{equation}
However, since \( \grad \wedge F = \grad \wedge \grad \wedge A = 0 \), we see that there is no contribution from the \( F \wedge F \) pseudoscalar component of the Lagrangian, and we are left with
\begin{equation}\label{eqn:maxwells:2100}
\begin{aligned}
0
&= J - \epsilon_0 c (\grad \cdot F) \\
&= J - \epsilon_0 c \grad F,
\end{aligned}
\end{equation}
which is Maxwell’s equation, as before.
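The 0,4 grade selection identity used above can also be spot checked in a matrix model: the Dirac representation of the gamma matrices obeys the STA multiplication rules, and the symmetric product of two random bivectors should land entirely in the span of the identity and the pseudoscalar. This construction is my own illustration, not part of the original post.

```python
import numpy as np

# Dirac-representation gamma matrices, a matrix model of STA.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Z2, I2 = np.zeros((2, 2)), np.eye(2)
g = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]
g += [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]
Ps = g[0] @ g[1] @ g[2] @ g[3]  # the pseudoscalar gamma_0 gamma_1 gamma_2 gamma_3

rng = np.random.default_rng(4)
biv_basis = [g[m] @ g[n] for m in range(4) for n in range(m + 1, 4)]

def bivector():
    """A random real linear combination of the six bivector basis elements."""
    return sum(c * B for c, B in zip(rng.normal(size=6), biv_basis))

X, F = bivector(), bivector()
M = 0.5 * (X @ F + F @ X)  # the symmetric product, claimed to be <X F>_{0,4}

# Extract the grade 0 and grade 4 coefficients by trace projection.
s0 = np.trace(M) / 4
s4 = np.trace(-Ps @ M) / 4  # Ps^2 = -1, so Ps^{-1} = -Ps
```

The grade 2 (commutator) part of the bivector product is traceless and antisymmetric under exchange, so it drops out of `M`, leaving only the scalar and pseudoscalar pieces.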

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] Peeter Joot. Quantum field theory. Kindle Direct Publishing, 2018.

Some nice positive feedback for my book.

October 31, 2020 math and physics play No comments

Here’s a fun congratulatory email that I received today for my Geometric Algebra for Electrical Engineers book

Peeter ..
I had to email to congratulate you on your geometric algebra book. Like yourself, when I came across it, I was totally blown away and your book, being written from the position of a discoverer rather than an expert, answers most of the questions I was confronted by when reading Doran and Lasenby’s book.
You’re a C++ programmer and from my perspective, when using natural world math, you are constructing a representation of a problem (like code does) except many physicists do not recognize this. They’re doing physics with COBOL (or C with classes!).
congratulations
.. Reader
I couldn’t resist pointing out the irony of his COBOL comment, as my work at LzLabs is now heavily focused on COBOL (and PL/I) compilers and compiler runtimes.  You could say that my work, at work or at play, is all an attempt to transition people away from the evils of legacy COBOL.
For reference the Doran and Lasenby book is phenomenal work, but it is really hard material.  To attempt to read this, you’ll need a thorough understanding of electromagnetism, relativity, tensor algebra, quantum mechanics, advanced classical mechanics, and field theory.  I’m still working on this book, and it’s probably been 12 years since I bought it.  I managed to teach myself some of this material as I went, but also took most of the 4th year UofT undergrad physics courses (and some grad courses) to fill in some of the gaps.
When I titled my book, I included “for Electrical Engineers” in the title.  That titling choice was somewhat derivative, as there were already geometric algebra books “for physicists” and “for computer science”.  However, I thought it was also good shorthand for the prerequisites required for the book, as “for Electrical Engineers” seemed to be shorthand for “for a student that has seen electromagnetism in its div, grad, curl form, and doesn’t know special relativity, field theory, differential forms, tensor algebra, or other topics from more advanced physics.”
The relativistic presentation of electromagnetism in Doran and Lasenby, using the Dirac algebra (aka Space Time Algebra (STA)), is much more beautiful than the form that I have used in my book.  However, I was hoping to present the subject in a way that was accessible, and provided a stepping stone for the STA approach when the reader was ready to tackle a next interval of the “learning curve.”

Applied vanity press

April 9, 2018 math and physics play No comments

Amazon’s CreateSpace turns out to be a very cost effective way to get a personal color copy of a large PDF (>250 pages) to mark up for review. The only hassle was having to use their app to create cover art (although that took less time than commuting downtown to one of the cheap copy shops near the university).

As a side effect, after I edit it, I’d have something I could actually list for sale.  Worldwide, I’d guess at least three people would buy it, that is, if they weren’t happy with the pdf version already available.

I wrote a book: Geometric Algebra for Electrical Engineers

April 5, 2018 math and physics play 6 comments

The book.

A draft of my book, Geometric Algebra for Electrical Engineers, is now available. I’ve supplied limited distribution copies of some of the early drafts and have had some good review comments on the chapter I (introduction to geometric algebra) and chapter II (multivector calculus) material, but none on the electromagnetism content. In defense of the reviewers, the initial version of the electromagnetism chapter, while it had a lot of raw content, was pretty exploratory and very rough. It’s been cleaned up significantly and is hopefully now more reader friendly.

Why I wrote this book.

I have been working on a part time M.Eng degree for a number of years. I wasn’t happy with the UofT ECE electromagnetics offerings in recent years, which have been inconsistently offered or unsatisfactory. For example: the microwave circuits course, which sounded interesting and had an interesting textbook, was mind-numbing, almost entirely about Smith charts. I had to go elsewhere to obtain the M.Eng degree requirements. That elsewhere was a project course.

I proposed an electromagnetism project with the following goals

  1. Perform a literature review of applications of geometric algebra to the study of electromagnetism.
  2. Identify the subset of the literature that had direct relevance to electrical engineering.
  3. Create a complete, and as compact as possible, introduction to the prerequisites required for a graduate or advanced undergraduate electrical engineering student to be able to apply geometric algebra to problems in electromagnetism. With those prerequisites in place, work through the fundamentals of electromagnetism in a geometric algebra context.

In retrospect, doing this project was a mistake. I could have done this work outside of an academic context without paying so much (in both time and money). Somewhere along the way I lost track of the fact that I enrolled on the M.Eng to learn (it provided a way to take grad physics courses on a part time schedule), and got sidetracked by degree requirements. Basically I fell victim to an “I may as well graduate” sentiment that would have been better to ignore. All that, coupled with the fact that I did not actually get any feedback from my “supervisor”, who did not even read my work (at least so far, after one year), made this project course very frustrating. On the bright side, I really like what I produced, even if I had to do so in isolation.

Why geometric algebra?

Geometric algebra generalizes vectors, providing algebraic representations of not just directed line segments, but also points, plane segments, volumes, and higher degree geometric objects (hypervolumes). The geometric algebra representation of planes, volumes and hypervolumes requires a vector dot product, a vector multiplication operation, and a generalized addition operation. The dot product provides the length of a vector and a test for whether or not any two vectors are perpendicular. The vector multiplication operation is used to construct directed plane segments (bivectors), and directed volumes (trivectors), which are built from the respective products of two or three mutually perpendicular vectors. The addition operation allows for sums of scalars, vectors, or any products of vectors. Such a sum is called a multivector.

The power to add scalars, vectors, and products of vectors can be exploited to simplify much of electromagnetism. In particular, Maxwell’s equations for isotropic media can be merged into a single multivector equation
\begin{equation}\label{eqn:quaternion2maxwellWithGA:20}
\lr{ \spacegrad + \inv{c} \PD{t}{}} \lr{ \BE + I c \BB } = \eta\lr{ c \rho – \BJ },
\end{equation}
where \( \spacegrad \) is the gradient, \( I = \Be_1 \Be_2 \Be_3 \) is the ordered product of the three R^3 basis vectors, \( c = 1/\sqrt{\mu\epsilon}\) is the group velocity of the medium, \( \eta = \sqrt{\mu/\epsilon} \), \( \BE, \BB \) are the electric and magnetic fields, and \( \rho \) and \( \BJ \) are the charge and current densities. This can be written as a single equation
\begin{equation}\label{eqn:ece2500report:40}
\lr{ \spacegrad + \inv{c} \PD{t}{}} F = J,
\end{equation}
where \( F = \BE + I c \BB \) is the combined (multivector) electromagnetic field, and \( J = \eta\lr{ c \rho – \BJ } \) is the multivector current.

Encountering Maxwell’s equation in its geometric algebra form leaves the student with more questions than answers. Yes, it is a compact representation, but so are the tensor and differential forms (or even the quaternionic) representations of Maxwell’s equations. The student needs to know how to work with the representation if it is to be useful. It should also be clear how to use the existing conventional mathematical tools of applied electromagnetism, or how to generalize those appropriately. Individually, there are answers available to many of the questions that are generated attempting to apply the theory, but they are scattered and in many cases not easily accessible.

Much of the geometric algebra literature for electrodynamics is presented with a relativistic bias, or assumes high levels of mathematical or physics sophistication. The aim of this work was an attempt to make the study of electromagnetism using geometric algebra more accessible, especially to other dumb engineering undergraduates like myself. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. The end product of this project was a fairly small self contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on R^2 and R^3 (64 pages), an introduction to geometric calculus and multivector Green’s functions (64 pages), applications to electromagnetism (82 pages), and some appendices. Many of the fundamental results of electromagnetism are derived directly from the multivector Maxwell’s equation, in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra literature. As a conceptual bridge, the book includes many examples of how to extract familiar conventional results from simpler multivector representations. Also included in the book are some sample calculations exploiting unique capabilities that geometric algebra provides. In particular, vectors in a plane may be manipulated much like complex numbers, which has a number of advantages over working with coordinates explicitly.

Followup.

In many ways this work only scratches the surface. Many more worked examples, problems, figures and computer algebra listings should be added. In-depth applications of derived geometric algebra relationships to problems customarily tackled with separate electric and magnetic field equations should also be incorporated. There are also theoretical holes, topics covered in any conventional introductory electromagnetism text, that are missing. Examples include the Fresnel relationships for transmission and reflection at an interface, in-depth treatment of waveguides, dipole radiation and motion of charged particles, bound charges, and metamaterials, to name a few. Many of these topics can probably be handled in a coordinate free fashion using geometric algebra. Despite all the work that is required to help bridge the gap between formalism and application, making applied electromagnetism using geometric algebra truly accessible, it is my belief that this book makes some good first steps down this path.

The choice that I made to completely avoid the geometric algebra space-time-algebra (STA) is somewhat unfortunate. It is exceedingly elegant, especially in a relativistic context. Despite that, I think that this was still a good choice from a pedagogical point of view, as most of the prerequisites for an STA based study will have been taken care of as a side effect, making that study much more accessible.

Potential solutions to the static Maxwell’s equation using geometric algebra

March 20, 2018 math and physics play No comments

[Click here for a PDF of this post with nicer formatting]

When neither the electromagnetic field strength \( F = \BE + I \eta \BH \), nor the current \( J = \eta (c \rho - \BJ) + I(c\rho_m - \BM) \), is a function of time, the geometric algebra form of Maxwell’s equations is the first order multivector (gradient) equation
\begin{equation}\label{eqn:staticPotentials:20}
\spacegrad F = J.
\end{equation}

While direct solutions to this equation are possible with the multivector Green’s function for the gradient
\begin{equation}\label{eqn:staticPotentials:40}
G(\Bx, \Bx') = \inv{4\pi} \frac{\Bx - \Bx'}{\Norm{\Bx - \Bx'}^3 },
\end{equation}
the aim in this post is to explore second order (potential) solutions in a geometric algebra context. Can we assume that it is possible to find a multivector potential \( A \) for which
\begin{equation}\label{eqn:staticPotentials:60}
F = \spacegrad A,
\end{equation}
is a solution to the Maxwell statics equation? If such a solution exists, then Maxwell’s equation is simply
\begin{equation}\label{eqn:staticPotentials:80}
\spacegrad^2 A = J,
\end{equation}
which can be easily solved using the scalar Green’s function for the Laplacian
\begin{equation}\label{eqn:staticPotentials:240}
G(\Bx, \Bx') = -\inv{4 \pi \Norm{\Bx - \Bx'} },
\end{equation}
a beastie that may be easier to convolve than the vector valued Green’s function for the gradient.
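As a quick sanity check on that gradient Green’s function, away from the singularity it should be both divergence free and curl free (each component is harmonic). A symbolic verification of my own, with the source point moved to the origin:

```python
from sympy import Matrix, pi, simplify, sqrt, symbols

x, y, z = symbols('x y z', real=True, positive=True)
nr = sqrt(x**2 + y**2 + z**2)

# Gradient Green's function, with x' at the origin: x/(4 pi |x|^3).
G = Matrix([x, y, z]) / (4 * pi * nr**3)

coords = (x, y, z)
div = simplify(sum(G[i].diff(coords[i]) for i in range(3)))
curl = [simplify(G[j].diff(coords[i]) - G[i].diff(coords[j]))
        for i in range(3) for j in range(3) if i < j]
```

Both the divergence and every curl component simplify to zero, consistent with this being the gradient of the (harmonic away from the origin) scalar Green’s function.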

It is immediately clear that some restrictions must be imposed on the multivector potential \(A\). In particular, since the field \( F \) has only vector and bivector grades, this gradient must have neither scalar nor pseudoscalar grades. That is
\begin{equation}\label{eqn:staticPotentials:100}
\gpgrade{\spacegrad A}{0,3} = 0.
\end{equation}
This constraint on the potential can be avoided if a grade selection operation is built directly into the assumed potential solution, requiring that the field is given by
\begin{equation}\label{eqn:staticPotentials:120}
F = \gpgrade{\spacegrad A}{1,2}.
\end{equation}
However, after imposing such a constraint, Maxwell’s equation has a much less friendly form
\begin{equation}\label{eqn:staticPotentials:140}
\spacegrad^2 A – \spacegrad \gpgrade{\spacegrad A}{0,3} = J.
\end{equation}
Luckily, it is possible to introduce a transformation of potentials, called a gauge transformation, that eliminates the ugly grade selection term, and allows the potential equation to be expressed as a plain old Laplacian. We do so by assuming first that it is possible to find a solution of the Laplacian equation that has the desired grade restrictions. That is
\begin{equation}\label{eqn:staticPotentials:160}
\begin{aligned}
\spacegrad^2 A’ &= J \\
\gpgrade{\spacegrad A’}{0,3} &= 0,
\end{aligned}
\end{equation}
for which \( F = \spacegrad A’ \) is a grade 1,2 solution to \( \spacegrad F = J \). Suppose that \( A \) is any formal solution, free of any grade restrictions, to \( \spacegrad^2 A = J \), and \( F = \gpgrade{\spacegrad A}{1,2} \). Can we find a function \( \tilde{A} \) for which \( A = A’ + \tilde{A} \)?

Maxwell’s equation in terms of \( A \) is
\begin{equation}\label{eqn:staticPotentials:180}
\begin{aligned}
J
&= \spacegrad \gpgrade{\spacegrad A}{1,2} \\
&= \spacegrad^2 A
– \spacegrad \gpgrade{\spacegrad A}{0,3} \\
&= \spacegrad^2 (A’ + \tilde{A})
– \spacegrad \gpgrade{\spacegrad A}{0,3}
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:staticPotentials:200}
\spacegrad^2 \tilde{A} = \spacegrad \gpgrade{\spacegrad A}{0,3}.
\end{equation}
This is a non-homogeneous Laplacian equation that can be solved as is for \( \tilde{A} \) using the Green's function for the Laplacian. Alternatively, we may solve the equivalent first order system using the Green's function for the gradient.
\begin{equation}\label{eqn:staticPotentials:220}
\spacegrad \tilde{A} = \gpgrade{\spacegrad A}{0,3}.
\end{equation}
Clearly \( \tilde{A} \) is not unique, as we can add any function \( \psi \) satisfying the homogeneous Laplacian equation \( \spacegrad^2 \psi = 0 \).

In summary, if \( A \) is any multivector solution to \( \spacegrad^2 A = J \), that is
\begin{equation}\label{eqn:staticPotentials:260}
A(\Bx)
= \int dV’ G(\Bx, \Bx’) J(\Bx’)
= -\int dV' \frac{J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} },
\end{equation}
then \( F = \spacegrad A’ \) is a solution to Maxwell’s equation, where \( A’ = A – \tilde{A} \), and \( \tilde{A} \) is a solution to the non-homogeneous Laplacian equation or the non-homogeneous gradient equation above.
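As a numerical sanity check of this potential solution, consider a unit source density \( J = 1 \) inside a unit ball, for which solving \( \spacegrad^2 A = 1 \) directly gives \( A = r^2/6 - 1/2 \) inside the ball, so \( A(0) = -1/2 \). A Monte Carlo sketch of the convolution integral (assuming the normalization \( G(\Bx, \Bx') = -1/(4\pi \Norm{\Bx - \Bx'}) \), for which \( \spacegrad^2 G = \delta^3(\Bx - \Bx') \)):

```python
import numpy as np

# Monte Carlo check of A(0) for J = 1 inside a unit ball:
# A(0) = -(1/(4 pi)) * integral dV' 1/|x'| over the ball = -1/2.
rng = np.random.default_rng(3)
N = 200_000

u = rng.normal(size=(N, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # random directions
r = rng.random(N) ** (1 / 3)                    # radii for uniform ball sampling
xp = u * r[:, None]                             # sample points x' in the ball

V = 4 * np.pi / 3                               # ball volume
estimate = -(1 / (4 * np.pi)) * V * np.mean(1 / np.linalg.norm(xp, axis=1))

assert abs(estimate - (-0.5)) < 0.01
```

The same sampling scheme works for any bounded source \( J(\Bx') \), scalar or multivector valued, since the Green's function is scalar.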

Integral form of the gauge transformation.

Additional insight is possible by considering the gauge transformation in integral form. Suppose that
\begin{equation}\label{eqn:staticPotentials:280}
A(\Bx) = -\int_V dV' \frac{J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} } - \tilde{A}(\Bx),
\end{equation}
is a solution of \( \spacegrad^2 A = J \), where \( \tilde{A} \) is a multivector solution to the homogeneous Laplacian equation \( \spacegrad^2 \tilde{A} = 0 \). Let’s look at the constraints on \( \tilde{A} \) that must be imposed for \( F = \spacegrad A \) to be a valid (i.e. grade 1,2) solution of Maxwell’s equation.
\begin{equation}\label{eqn:staticPotentials:300}
\begin{aligned}
F
&= \spacegrad A \\
&=
-\int_V dV' \lr{ \spacegrad \inv{4 \pi \Norm{\Bx - \Bx'} } } J(\Bx')
- \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \lr{ \spacegrad' \inv{4 \pi \Norm{\Bx - \Bx'} } } J(\Bx')
- \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \spacegrad' \frac{J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} }
- \spacegrad \tilde{A}(\Bx) \\
&=
\int_{\partial V} dA' \ncap' \frac{J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} }
- \spacegrad \tilde{A}(\Bx).
\end{aligned}
\end{equation}
Here \( \ncap' = (\Bx' - \Bx)/\Norm{\Bx' - \Bx} \), and the fundamental theorem of geometric calculus has been used to transform the gradient volume integral into an integral over the bounding surface. Operating on Maxwell's equation with the gradient gives \( \spacegrad^2 F = \spacegrad J \), which has only grades 1,2 on the left hand side, meaning that \( J \) is constrained in a way that requires \( \spacegrad J \) to have only grades 1,2. This means that \( F \) has grades 1,2 if
\begin{equation}\label{eqn:staticPotentials:320}
\spacegrad \tilde{A}(\Bx)
= \int_{\partial V} dA' \frac{ \gpgrade{\ncap' J(\Bx')}{0,3} }{4 \pi \Norm{\Bx - \Bx'} }.
\end{equation}
The product \( \ncap J \) expands to
\begin{equation}\label{eqn:staticPotentials:340}
\begin{aligned}
\ncap J
&=
\gpgradezero{\ncap J_1} + \gpgradethree{\ncap J_2} \\
&=
\ncap \cdot (-\eta \BJ) + \gpgradethree{\ncap (-I \BM)} \\
&= -\eta \ncap \cdot \BJ - I \ncap \cdot \BM,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:staticPotentials:360}
\spacegrad \tilde{A}(\Bx)
=
-\int_{\partial V} dA' \frac{ \eta \ncap' \cdot \BJ(\Bx') + I \ncap' \cdot \BM(\Bx')}{4 \pi \Norm{\Bx - \Bx'} }.
\end{equation}
Observe that if there is no flux of current density \( \BJ \) and (fictitious) magnetic current density \( \BM \) through the surface, then \( F = \spacegrad A \) is a solution to Maxwell’s equation without any gauge transformation. Alternatively \( F = \spacegrad A \) is also a solution if \( \lim_{\Bx’ \rightarrow \infty} \BJ(\Bx’)/\Norm{\Bx – \Bx’} = \lim_{\Bx’ \rightarrow \infty} \BM(\Bx’)/\Norm{\Bx – \Bx’} = 0 \) and the bounding volume is taken to infinity.


Solving Maxwell’s equation in freespace: Multivector plane wave representation

March 14, 2018 math and physics play 1 comment

[Click here for a PDF of this post with nicer formatting]

The geometric algebra form of Maxwell's equations in free space (or source free isotropic media with group velocity \( c \)) is the multivector equation
\begin{equation}\label{eqn:planewavesMultivector:20}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F(\Bx, t) = 0.
\end{equation}
Here \( F = \BE + I c \BB \) is a multivector with grades 1 and 2 (vector and bivector components). The velocity \( c \) is called the group velocity since \( F \), or its components \( \BE, \BB \) satisfy the wave equation, which can be seen by pre-multiplying with \( \spacegrad - (1/c)\PDi{t}{} \) to find
\begin{equation}\label{eqn:planewavesMultivector:n}
\lr{ \spacegrad^2 – \inv{c^2}\PDSq{t}{} } F(\Bx, t) = 0.
\end{equation}

Let’s look at the frequency domain solution of this equation with a presumed phasor representation
\begin{equation}\label{eqn:planewavesMultivector:40}
F(\Bx, t) = \textrm{Re} \lr{ F(\Bk) e^{-j \Bk \cdot \Bx + j \omega t} },
\end{equation}
where \( j \) is a scalar imaginary, not necessarily with any geometric interpretation.

Maxwell’s equation reduces to just
\begin{equation}\label{eqn:planewavesMultivector:60}
0
=
-j \lr{ \Bk – \frac{\omega}{c} } F(\Bk).
\end{equation}

If \( F(\Bk) \) has a left multivector factor
\begin{equation}\label{eqn:planewavesMultivector:80}
F(\Bk) =
\lr{ \Bk + \frac{\omega}{c} } \tilde{F},
\end{equation}
where \( \tilde{F} \) is a multivector to be determined, then
\begin{equation}\label{eqn:planewavesMultivector:100}
\begin{aligned}
\lr{ \Bk – \frac{\omega}{c} }
F(\Bk)
&=
\lr{ \Bk – \frac{\omega}{c} }
\lr{ \Bk + \frac{\omega}{c} } \tilde{F} \\
&=
\lr{ \Bk^2 – \lr{\frac{\omega}{c}}^2 } \tilde{F},
\end{aligned}
\end{equation}
which is zero if \( \Norm{\Bk} = \ifrac{\omega}{c} \).

Let \( \kcap = \ifrac{\Bk}{\Norm{\Bk}} \), and \( \Norm{\Bk} \tilde{F} = F_0 + F_1 + F_2 + F_3 \), where \( F_0, F_1, F_2, \) and \( F_3 \) respectively have grades 0,1,2,3. Then
\begin{equation}\label{eqn:planewavesMultivector:120}
\begin{aligned}
F(\Bk)
&= \lr{ 1 + \kcap } \lr{ F_0 + F_1 + F_2 + F_3 } \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap F_1 + \kcap F_2 + \kcap F_3 \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap \cdot F_1 + \kcap \cdot F_2 + \kcap \cdot F_3
+
\kcap \wedge F_1 + \kcap \wedge F_2 \\
&=
\lr{
F_0 + \kcap \cdot F_1
}
+
\lr{
F_1 + \kcap F_0 + \kcap \cdot F_2
}
+
\lr{
F_2 + \kcap \cdot F_3 + \kcap \wedge F_1
}
+
\lr{
F_3 + \kcap \wedge F_2
}.
\end{aligned}
\end{equation}
Since the field \( F \) has only vector and bivector grades, the grades zero and three components of the expansion above must be zero, or
\begin{equation}\label{eqn:planewavesMultivector:140}
\begin{aligned}
F_0 &= – \kcap \cdot F_1 \\
F_3 &= – \kcap \wedge F_2,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:planewavesMultivector:160}
\begin{aligned}
F(\Bk)
&=
\lr{ 1 + \kcap } \lr{
F_1 – \kcap \cdot F_1 +
F_2 – \kcap \wedge F_2
} \\
&=
\lr{ 1 + \kcap } \lr{
F_1 – \kcap F_1 + \kcap \wedge F_1 +
F_2 – \kcap F_2 + \kcap \cdot F_2
}.
\end{aligned}
\end{equation}
The multivector \( 1 + \kcap \) has the projective property of gobbling any leading factors of \( \kcap \)
\begin{equation}\label{eqn:planewavesMultivector:180}
\begin{aligned}
(1 + \kcap)\kcap
&= \kcap + \kcap^2 \\
&= \kcap + 1 \\
&= 1 + \kcap,
\end{aligned}
\end{equation}
so for \( F_i \in F_1, F_2 \)
\begin{equation}\label{eqn:planewavesMultivector:200}
(1 + \kcap) ( F_i – \kcap F_i )
=
(1 + \kcap) ( F_i – F_i )
= 0,
\end{equation}
leaving
\begin{equation}\label{eqn:planewavesMultivector:220}
F(\Bk)
=
\lr{ 1 + \kcap } \lr{
\kcap \cdot F_2 +
\kcap \wedge F_1
}.
\end{equation}
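This reduction, and the gobbling property that drives it, can be spot checked numerically using the Pauli matrix representation of the 3D geometric algebra (a sketch, assuming the standard map \( e_k \rightarrow \sigma_k \), under which a unit vector squares to the identity):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(1)
k = rng.normal(size=3)
k /= np.linalg.norm(k)                       # unit propagation direction
khat = sum(k[i] * s[i] for i in range(3))

assert np.allclose(khat @ khat, I2)          # a unit vector squares to 1
P = I2 + khat
assert np.allclose(P @ khat, P)              # (1 + khat) khat = 1 + khat
assert np.allclose(P @ P, 2 * P)             # 1 + khat is (twice) a projector
```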

For \( \kcap \cdot F_2 \) to be non-zero \( F_2 \) must be a bivector that lies in a plane containing \( \kcap \), and \( \kcap \cdot F_2 \) is a vector in that plane that is perpendicular to \( \kcap \). On the other hand \( \kcap \wedge F_1 \) is non-zero only if \( F_1 \) has a non-zero component that does not lie along the \( \kcap \) direction, but \( \kcap \wedge F_1 \), like \( F_2 \), describes a plane containing \( \kcap \). This means that having both bivector and vector free variables \( F_2 \) and \( F_1 \) provides more degrees of freedom than required. For example, if \( \BE \) is any vector, and \( F_2 = \kcap \wedge \BE \), then
\begin{equation}\label{eqn:planewavesMultivector:240}
\begin{aligned}
\lr{ 1 + \kcap }
\kcap \cdot F_2
&=
\lr{ 1 + \kcap }
\kcap \cdot \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\lr{
\BE

\kcap \lr{ \kcap \cdot \BE }
} \\
&=
\lr{ 1 + \kcap }
\kcap \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\kcap \wedge \BE,
\end{aligned}
\end{equation}
which has the form \( \lr{ 1 + \kcap } \lr{ \kcap \wedge F_1 } \), so the solution of the free space Maxwell’s equation can be written
\begin{equation}\label{eqn:planewavesMultivector:260}
\boxed{
F(\Bx, t)
=
\textrm{Re} \lr{
\lr{ 1 + \kcap }
\BE\,
e^{-j \Bk \cdot \Bx + j \omega t}
}
,
}
\end{equation}
where \( \BE \) is any vector for which \( \BE \cdot \Bk = 0 \).
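Since the boxed solution is purely algebraic in \( \kcap \) and \( \BE \), it can be spot checked numerically. In the Pauli matrix representation (an assumed model, with \( e_k \rightarrow \sigma_k \)), the frequency domain equation with \( \Norm{\Bk} = \omega/c \) becomes \( (\kcap - 1) F(\Bk) = 0 \), and the perpendicularity of \( \BE \) is what ensures that \( F \) has only vector and bivector grades:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(v):
    """Pauli representation of a 3-vector."""
    return v[0] * s[0] + v[1] * s[1] + v[2] * s[2]

rng = np.random.default_rng(2)
k = rng.normal(size=3)
k /= np.linalg.norm(k)            # khat
e = rng.normal(size=3)
e -= (e @ k) * k                  # make E perpendicular to khat

F = (I2 + vec(k)) @ vec(e)        # F(k) = (1 + khat) E

assert np.allclose((vec(k) - I2) @ F, 0)   # (khat - 1) F(k) = 0
assert np.isclose(np.trace(F), 0)          # no scalar or pseudoscalar grades
```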

Fundamental Theorem of Geometric Calculus

September 20, 2016 math and physics play No comments

[Click here for a PDF of this post with nicer formatting]

Stokes Theorem

The Fundamental Theorem of (Geometric) Calculus is a generalization of Stokes theorem to multivector integrals. Notationally, it looks like Stokes theorem with all the dot and wedge products removed. It is worth restating Stokes theorem and all the definitions associated with it for reference

Stokes’ Theorem

For blades \( F \in \bigwedge^{s} \), and a \( k \) volume element \( d^k \Bx \), \( s < k \),
\begin{equation*}
\int_V d^k \Bx \cdot (\boldpartial \wedge F) = \oint_{\partial V} d^{k-1} \Bx \cdot F.
\end{equation*}
This is a loaded and abstract statement, and requires many definitions to make it useful.

  • The volume integral is over a \( k \) dimensional surface (manifold).
  • Integration over the boundary of the manifold \(V\) is indicated by \( \partial V \).
  • This manifold is assumed to be spanned by a parameterized vector \( \Bx(u^1, u^2, \cdots, u^k) \).
  • A curvilinear coordinate basis \( \setlr{ \Bx_i } \) can be defined on the manifold by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:40}
    \Bx_i \equiv \PD{u^i}{\Bx} \equiv \partial_i \Bx.
    \end{equation}

  • A dual basis \( \setlr{\Bx^i} \) reciprocal to the tangent vector basis \( \Bx_i \) can be calculated subject to the requirement \( \Bx_i \cdot \Bx^j = \delta_i^j \).
  • The vector derivative \(\boldpartial\), the projection of the gradient onto the tangent space of the manifold, is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:100}
    \boldpartial = \Bx^i \partial_i = \sum_{i=1}^k \Bx^i \PD{u^i}{}.
    \end{equation}

  • The volume element is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:60}
    d^k \Bx = d\Bx_1 \wedge d\Bx_2 \cdots \wedge d\Bx_k,
    \end{equation}

    where

    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:80}
    d\Bx_i = \Bx_i du^i,\qquad \text{(no sum)}.
    \end{equation}

  • The volume element is non-zero on the manifold, or \( \Bx_1 \wedge \cdots \wedge \Bx_k \ne 0 \).
  • The surface area element \( d^{k-1} \Bx \), is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:120}
    d^{k-1} \Bx = \sum_{i = 1}^k (-1)^{k-i} d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k,
    \end{equation}

    where \( \widehat{d\Bx_i} \) indicates the omission of \( d\Bx_i \).

  • My proof for this theorem was restricted to a simple “rectangular” volume parameterized by the ranges
    \(
    [u^1(0), u^1(1) ] \otimes
    [u^2(0), u^2(1) ] \otimes \cdots \otimes
    [u^k(0), u^k(1) ] \)

  • The precise meaning that should be given to the oriented area integral is
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:140}
    \oint_{\partial V} d^{k-1} \Bx \cdot F
    =
    \sum_{i = 1}^k (-1)^{k-i} \int \evalrange{
    \lr{ \lr{ d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k } \cdot F }
    }{u^i = u^i(0)}{u^i(1)},
    \end{equation}

    where both the area form and the blade \( F \) are evaluated at the end points of the parameterization range.

After the work of stating exactly what is meant by this theorem, most of the proof follows from the fact that for \( s < k \) the volume curl dot product can be expanded as
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:160}
\int_V d^k \Bx \cdot (\boldpartial \wedge F)
= \int_V d^k \Bx \cdot (\Bx^i \wedge \partial_i F)
= \int_V \lr{ d^k \Bx \cdot \Bx^i } \cdot \partial_i F.
\end{equation}
Each of the \(du^i\) integrals can be evaluated directly, since each of the remaining \( d\Bx_j = \Bx_j du^j \), \( j \ne i \), is calculated with \( u^i \) held fixed. This allows for the integration over a ``rectangular'' parameterization region, proving the theorem for such a volume parameterization. A more general proof requires a triangulation of the volume and surface, but the basic principle of the theorem is evident, without that additional work.
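For a concrete instance of the theorem, the \( k = 2 \), \( s = 1 \) case over a rectangular region is just the classical Green's/Stokes theorem in the plane, which can be verified symbolically. A sketch using sympy, with an arbitrarily chosen polynomial field standing in for \( F \):

```python
import sympy as sp

x, y = sp.symbols('x y')

# An arbitrarily chosen polynomial field F = (Fx, Fy) on the unit square.
Fx, Fy = x**2 * y, x * y**3

# Area integral of the curl over the square.
curl = sp.diff(Fy, x) - sp.diff(Fx, y)
lhs = sp.integrate(curl, (x, 0, 1), (y, 0, 1))

# Counterclockwise boundary integral: bottom, right, top, left edges.
rhs = (sp.integrate(Fx.subs(y, 0), (x, 0, 1))
       + sp.integrate(Fy.subs(x, 1), (y, 0, 1))
       - sp.integrate(Fx.subs(y, 1), (x, 0, 1))
       - sp.integrate(Fy.subs(x, 0), (y, 0, 1)))

assert sp.simplify(lhs - rhs) == 0
```

The four signed edge integrals are exactly the \( (-1)^{k-i} \) evaluations of the oriented area integral above, specialized to \( k = 2 \).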

Fundamental Theorem of Calculus

There is a Geometric Algebra generalization of Stokes theorem that does not have the blade grade restriction of Stokes theorem. In [2] this is stated as

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:180}
\int_V d^k \Bx \boldpartial F = \oint_{\partial V} d^{k-1} \Bx F.
\end{equation}

A similar expression is used in [1] where it is also pointed out there is a variant with the vector derivative acting to the left

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:200}
\int_V F d^k \Bx \boldpartial = \oint_{\partial V} F d^{k-1} \Bx.
\end{equation}

In [3] it is pointed out that a bidirectional formulation is possible, providing the most general expression of the Fundamental Theorem of (Geometric) Calculus

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:220}
\boxed{
\int_V F d^k \Bx \boldpartial G = \oint_{\partial V} F d^{k-1} \Bx G.
}
\end{equation}

Here the vector derivative acts both to the left and right on \( F \) and \( G \). The specific action of this operator is
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:240}
\begin{aligned}
F \boldpartial G
&=
(F \boldpartial) G
+
F (\boldpartial G) \\
&=
(\partial_i F) \Bx^i G
+
F \Bx^i (\partial_i G).
\end{aligned}
\end{equation}
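The simplest instance of this bidirectional action is one dimensional, where it is just the product rule form of the fundamental theorem of calculus, \( \int_a^b \lr{ F' G + F G' } dx = F(b) G(b) - F(a) G(a) \), valid even when \( F \) and \( G \) do not commute. A sympy sketch with arbitrarily chosen matrix valued functions:

```python
import sympy as sp

x = sp.symbols('x')

# Arbitrarily chosen matrix valued (hence non-commuting) functions of x.
F = sp.Matrix([[x, 1], [0, x**2]])
G = sp.Matrix([[sp.sin(x), 0], [x, 1]])

# Bidirectional derivative: F differentiated from the right, G from the left.
integrand = F.diff(x) * G + F * G.diff(x)

lhs = integrand.applyfunc(lambda e: sp.integrate(e, (x, 0, 1)))
rhs = (F * G).subs(x, 1) - (F * G).subs(x, 0)

assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(2, 2)
```

The order of the factors matters here, just as the placement of \( F \) and \( G \) around \( d^k \Bx \boldpartial \) matters in the multivector statement.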

The fundamental theorem can be demonstrated by direct expansion. With the vector derivative \( \boldpartial \) and its partials \( \partial_i \) acting bidirectionally, that is

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:260}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F d^k \Bx \Bx^i \partial_i G \\
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i + d^k \Bx \wedge \Bx^i } \partial_i G.
\end{aligned}
\end{equation}

Both the reciprocal frame vectors and the curvilinear basis span the tangent space of the manifold, since we can write any reciprocal frame vector as a set of projections in the curvilinear basis

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:280}
\Bx^i = \sum_j \lr{ \Bx^i \cdot \Bx^j } \Bx_j,
\end{equation}

so \( \Bx^i \in \textrm{span} \setlr{ \Bx_j, j \in [1,k] } \).
This means that \( d^k \Bx \wedge \Bx^i = 0 \), and

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:300}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i } \partial_i G \\
&=
\sum_{i = 1}^{k}
\int_V
du^1 du^2 \cdots \widehat{ du^i} \cdots du^k
F \lr{
(-1)^{k-i}
\Bx_1 \wedge \Bx_2 \cdots \widehat{\Bx_i} \cdots \wedge \Bx_k } \partial_i G du^i \\
&=
\sum_{i = 1}^{k}
(-1)^{k-i}
\int_{u^1}
\int_{u^2}
\cdots
\int_{u^{i-1}}
\int_{u^{i+1}}
\cdots
\int_{u^k}
\evalrange{ \lr{
F d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k G
}
}{u^i = u^i(0)}{u^i(1)}.
\end{aligned}
\end{equation}

Adding in the same notational sugar that we used in Stokes theorem, this proves the Fundamental theorem \ref{eqn:fundamentalTheoremOfCalculus:220} for “rectangular” parameterizations. Note that such a parameterization need not actually be rectangular.

Example: Application to Maxwell’s equation


Maxwell’s equation is an example of a first order gradient equation

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:320}
\grad F = \inv{\epsilon_0 c} J.
\end{equation}

Integrating over a four-volume (where the vector derivative equals the gradient), and applying the Fundamental theorem, we have

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:340}
\inv{\epsilon_0 c} \int d^4 x J = \oint d^3 x F.
\end{equation}

Observe that the surface area element product with \( F \) has both vector and trivector terms. This can be demonstrated by considering some examples

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:360}
\begin{aligned}
\gamma_{012} \gamma_{01} &\propto \gamma_2 \\
\gamma_{012} \gamma_{23} &\propto \gamma_{013}.
\end{aligned}
\end{equation}
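These products are easy to check numerically in a matrix model of the STA. A sketch, assuming the Dirac representation \( \gamma_0 = \mathrm{diag}(1,1,-1,-1) \), with \( \gamma_k \) built from the Pauli matrices, satisfying \( \gamma_0^2 = 1 = -\gamma_k^2 \):

```python
import numpy as np

# Dirac representation of the STA basis: gamma_0^2 = 1, gamma_k^2 = -1.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
g = [np.diag([1, 1, -1, -1]).astype(complex)]
for sk in s:
    g.append(np.block([[np.zeros((2, 2)), sk], [-sk, np.zeros((2, 2))]]))

def prod(*idx):
    """Geometric product of the listed basis vectors, e.g. prod(0, 1, 2)."""
    out = np.eye(4, dtype=complex)
    for i in idx:
        out = out @ g[i]
    return out

assert np.allclose(prod(0) @ prod(0), np.eye(4))
assert np.allclose(prod(1) @ prod(1), -np.eye(4))
assert np.allclose(prod(0, 1, 2) @ prod(0, 1), prod(2))         # a vector
assert np.allclose(prod(0, 1, 2) @ prod(2, 3), -prod(0, 1, 3))  # a trivector
```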

On the other hand, the four volume integral of \( J \) has only trivector parts. This means that the integral can be split into a pair of same-grade equations

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:380}
\begin{aligned}
\inv{\epsilon_0 c} \int d^4 x \cdot J &=
\oint \gpgradethree{ d^3 x F} \\
0 &=
\oint d^3 x \cdot F.
\end{aligned}
\end{equation}

The first can be put into a slightly tidier form using a duality transformation
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:400}
\begin{aligned}
\gpgradethree{ d^3 x F}
&=
-\gpgradethree{ d^3 x I^2 F} \\
&=
\gpgradethree{ I d^3 x I F} \\
&=
(I d^3 x) \wedge (I F).
\end{aligned}
\end{equation}

Letting \( n \Abs{d^3 x} = I d^3 x \), this gives

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:420}
\oint \Abs{d^3 x} n \wedge (I F) = \inv{\epsilon_0 c} \int d^4 x \cdot J.
\end{equation}

Note that this normal is normal to a three-volume subspace of the spacetime volume. For example, if one component of that spacetime surface area element is \( \gamma_{012} c dt dx dy \), then the normal to that area component is \( \gamma_3 \).

A second set of duality transformations

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:440}
\begin{aligned}
n \wedge (IF)
&=
\gpgradethree{ n I F} \\
&=
-\gpgradethree{ I n F} \\
&=
-\gpgradethree{ I (n \cdot F)} \\
&=
-I (n \cdot F),
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:460}
\begin{aligned}
I d^4 x \cdot J
&=
\gpgradeone{ I d^4 x \cdot J } \\
&=
\gpgradeone{ I d^4 x J } \\
&=
\gpgradeone{ (I d^4 x) J } \\
&=
(I d^4 x) J,
\end{aligned}
\end{equation}

can further tidy things up, leaving us with

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:500}
\boxed{
\begin{aligned}
\oint \Abs{d^3 x} n \cdot F &= \inv{\epsilon_0 c} \int (I d^4 x) J \\
\oint d^3 x \cdot F &= 0.
\end{aligned}
}
\end{equation}

The Fundamental theorem of calculus immediately provides relations between the Faraday bivector \( F \) and the four-current \( J \).

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[3] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21(1):221-231, 2011. URL https://arxiv.org/abs/0809.4526.

Stokes integrals for Maxwell’s equations in Geometric Algebra

September 4, 2016 math and physics play No comments

[Click here for a PDF of this post with nicer formatting]

Recall that the relativistic form of Maxwell’s equation in Geometric Algebra is

\begin{equation}\label{eqn:maxwellStokes:20}
\grad F = \inv{c \epsilon_0} J.
\end{equation}

where \( \grad = \gamma^\mu \partial_\mu \) is the spacetime gradient, and \( J = (c\rho, \BJ) = J^\mu \gamma_\mu \) is the four (vector) current density. The pseudoscalar for the space is denoted \( I = \gamma_0 \gamma_1 \gamma_2 \gamma_3 \), where the basis elements satisfy \( \gamma_0^2 = 1 = -\gamma_k^2 \), and a dual basis satisfies \( \gamma_\mu \cdot \gamma^\nu = \delta_\mu^\nu \). The electromagnetic field \( F \) is a composite multivector \( F = \BE + I c \BB \). This is actually a bivector because spatial vectors have a bivector representation in the space time algebra of the form \( \BE = E^k \gamma_k \gamma_0 \).

Previously, I wrote out the Stokes integrals for Maxwell’s equation in GA form using some three parameter spacetime manifold volumes. This time I’m going to use two and three parameter spatial volumes, again with the Geometric Algebra form of Stokes theorem.

Multiplication by a timelike unit vector transforms Maxwell’s equation from their relativistic form. When that vector is the standard basis timelike unit vector \( \gamma_0 \), we obtain Maxwell’s equations from the point of view of a stationary observer

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:40}
\lr{\partial_0 + \spacegrad} \lr{ \BE + c I \BB } = \inv{\epsilon_0 c} \lr{ c \rho - \BJ }.
\end{equation}

Extracting the scalar, vector, bivector, and trivector grades respectively, we have
\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:60}
\begin{aligned}
\spacegrad \cdot \BE &= \frac{\rho}{\epsilon_0} \\
c I \spacegrad \wedge \BB &= -\partial_0 \BE – \inv{\epsilon_0 c} \BJ \\
\spacegrad \wedge \BE &= – I c \partial_0 \BB \\
c I \spacegrad \cdot \BB &= 0.
\end{aligned}
\end{equation}

Each of these can be written as a curl equation

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:80}
\boxed{
\begin{aligned}
\spacegrad \wedge (I \BE) &= I \frac{\rho}{\epsilon_0} \\
\inv{\mu_0} \spacegrad \wedge \BB &= \epsilon_0 I \partial_t \BE + I \BJ \\
\spacegrad \wedge \BE &= -I \partial_t \BB \\
\spacegrad \wedge (I \BB) &= 0,
\end{aligned}
}
\end{equation}

a form that allows for direct application of Stokes integrals. The first and last of these require a three parameter volume element, whereas the two bivector grade equations can be integrated using either two or three parameter volume elements. Suppose that we can parameterize the space with parameters \( u, v, w \), for which the gradient has the representation

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:100}
\spacegrad = \Bx^u \partial_u + \Bx^v \partial_v + \Bx^w \partial_w,
\end{equation}

but we integrate over a two parameter subset of this space spanned by \( \Bx(u,v) \), with area element

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:120}
\begin{aligned}
d^2 \Bx
&= d\Bx_u \wedge d\Bx_v \\
&=
\PD{u}{\Bx}
\wedge
\PD{v}{\Bx}
\,du dv \\
&=
\Bx_u
\wedge
\Bx_v
\,du dv,
\end{aligned}
\end{equation}

as illustrated in fig. 1.

 


fig. 1. Two parameter manifold.

Our curvilinear coordinates \( \Bx_u, \Bx_v, \Bx_w \) are dual to the reciprocal basis \( \Bx^u, \Bx^v, \Bx^w \), but we won’t actually have to calculate that reciprocal basis. Instead we need only know that it can be calculated and is defined by the relations \( \Bx_a \cdot \Bx^b = \delta_a^b \). Knowing that we can reduce (say),

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:140}
\begin{aligned}
d^2 \Bx \cdot ( \spacegrad \wedge \BE )
&=
d^2 \Bx \cdot ( \Bx^a \partial_a \wedge \BE ) \\
&=
(\Bx_u \wedge \Bx_v) \cdot ( \Bx^a \wedge \partial_a \BE ) \,du dv \\
&=
\lr{ (\Bx_u \wedge \Bx_v) \cdot \Bx^a } \cdot \partial_a \BE \,du dv \\
&=
d\Bx_u \cdot \partial_v \BE \,dv
-d\Bx_v \cdot \partial_u \BE \,du.
\end{aligned}
\end{equation}

Because each of the differentials, for example \( d\Bx_u = (\PDi{u}{\Bx})\, du \), is calculated with the other parameter (\( v \) in this case) held constant, this is directly integrable, leaving

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:160}
\begin{aligned}
\int d^2 \Bx \cdot ( \spacegrad \wedge \BE )
&=
\int \evalrange{\lr{d\Bx_u \cdot \BE}}{v=0}{v=1}
-\int \evalrange{\lr{d\Bx_v \cdot \BE}}{u=0}{u=1} \\
&=
\oint d\Bx \cdot \BE.
\end{aligned}
\end{equation}

That direct integration of one of the parameters, while the others are held constant, is the basic idea behind Stokes theorem.

The pseudoscalar grade Maxwell’s equations from \ref{eqn:stokesMaxwellSpaceTimeSplit:80} require a three parameter volume element to apply Stokes theorem to. Again, allowing for curvilinear coordinates such a differential expands as

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:180}
\begin{aligned}
d^3 \Bx \cdot (\spacegrad \wedge (I\BB))
&=
(( \Bx_u \wedge \Bx_v \wedge \Bx_w ) \cdot \Bx^a ) \cdot \partial_a (I\BB) \,du dv dw \\
&=
(d\Bx_u \wedge d\Bx_v) \cdot \partial_w (I\BB) dw
+(d\Bx_v \wedge d\Bx_w) \cdot \partial_u (I\BB) du
+(d\Bx_w \wedge d\Bx_u) \cdot \partial_v (I\BB) dv.
\end{aligned}
\end{equation}

Like the two parameter volume, this is directly integrable

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:200}
\int
d^3 \Bx \cdot (\spacegrad \wedge (I\BB))
=
\int \evalbar{(d\Bx_u \wedge d\Bx_v) \cdot (I\BB) }{\Delta w}
+\int \evalbar{(d\Bx_v \wedge d\Bx_w) \cdot (I\BB)}{\Delta u}
+\int \evalbar{(d\Bx_w \wedge d\Bx_u) \cdot (I\BB)}{\Delta v}.
\end{equation}
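Observe that the pseudoscalar grade equations, integrated this way, are the geometric algebra packaging of the classical divergence theorem, which is easy to verify symbolically on a unit cube. A sympy sketch with an arbitrarily chosen polynomial field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# An arbitrarily chosen polynomial vector field on the unit cube.
B = (x * y**2, y * z, z**2 * x)

div = sp.diff(B[0], x) + sp.diff(B[1], y) + sp.diff(B[2], z)
lhs = sp.integrate(div, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Outward flux through the six faces of the cube.
rhs = (sp.integrate(B[0].subs(x, 1) - B[0].subs(x, 0), (y, 0, 1), (z, 0, 1))
       + sp.integrate(B[1].subs(y, 1) - B[1].subs(y, 0), (x, 0, 1), (z, 0, 1))
       + sp.integrate(B[2].subs(z, 1) - B[2].subs(z, 0), (x, 0, 1), (y, 0, 1)))

assert sp.simplify(lhs - rhs) == 0
```

The three pairs of face integrals are the \( \Delta u, \Delta v, \Delta w \) evaluations of the three parameter surface integral above, specialized to a rectangular parameterization.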

After some thought (or a craft project such as that of fig. 2) it can be observed that this is conceptually an oriented surface integral.


fig. 2. Oriented three parameter surface.

Noting that

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:221}
\begin{aligned}
d^2 \Bx \cdot (I\Bf)
&= \gpgradezero{ d^2 \Bx I \Bf } \\
&= I (d^2\Bx \wedge \Bf)
\end{aligned}
\end{equation}

we can now write down the results of application of Stokes theorem to each of Maxwell’s equations in their curl forms

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:220}
\boxed{
\begin{aligned}
\oint d\Bx \cdot \BE &= -I \partial_t \int d^2 \Bx \wedge \BB \\
\inv{\mu_0} \oint d\Bx \cdot \BB &= \epsilon_0 I \partial_t \int d^2 \Bx \wedge \BE + I \int d^2 \Bx \wedge \BJ \\
\oint d^2 \Bx \wedge \BE &= \inv{\epsilon_0} \int (d^3 \Bx \cdot I) \rho \\
\oint d^2 \Bx \wedge \BB &= 0.
\end{aligned}
}
\end{equation}

In the three parameter surface integrals the specific meaning to apply to \( d^2 \Bx \wedge \Bf \) is
\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:240}
\oint d^2 \Bx \wedge \Bf
=
\int \evalbar{\lr{d\Bx_u \wedge d\Bx_v \wedge \Bf}}{\Delta w}
+\int \evalbar{\lr{d\Bx_v \wedge d\Bx_w \wedge \Bf}}{\Delta u}
+\int \evalbar{\lr{d\Bx_w \wedge d\Bx_u \wedge \Bf}}{\Delta v}.
\end{equation}

Note that in each case only the component of the vector \( \Bf \) that is projected onto the normal to the area element contributes.

Updated notes for ece1229 antenna theory

March 16, 2015 ece1229 No comments

I’ve now posted a first update of my notes for the antenna theory course that I am taking this term at UofT.

Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides which go by faster than I can easily take notes for (and some of which match the textbook closely). In class I have annotated my copy of the textbook with little details instead. This set of notes contains musings on details that were unclear, or in some cases, details that were provided in class, but are not in the text (and too long to pencil into my book), as well as some notes on the Geometric Algebra formalism for Maxwell's equations with magnetic sources (something I've encountered for the first time in any real detail in this class).

The notes compilation linked above includes all of the following separate notes, some of which have been posted separately on this blog:
