Maxwell’s equation

Applied vanity press

April 9, 2018 math and physics play No comments

Amazon’s CreateSpace turns out to be a very cost-effective way to get a personal color copy of a large pdf (>250 pages) to mark up for review. The only hassle was having to use their app to create cover art (although that took less time than commuting downtown to one of the cheap copy shops near the university.)

As a side effect, after I edit it, I’d have something I could actually list for sale.  Worldwide, I’d guess at least three people would buy it, that is, if they weren’t happy with the pdf version already available.

I wrote a book: Geometric Algebra for Electrical Engineers

April 5, 2018 math and physics play 6 comments

The book.

A draft of my book, Geometric Algebra for Electrical Engineers, is now available. I’ve supplied limited distribution copies of some of the early drafts and have had some good review comments on the chapter I (introduction to geometric algebra) and chapter II (multivector calculus) material, but none on the electromagnetism content. In defense of the reviewers, the initial version of the electromagnetism chapter, while it had a lot of raw content, was pretty exploratory and very rough. It’s been cleaned up significantly and is hopefully now more reader-friendly.

Why I wrote this book.

I have been working on a part time M.Eng degree for a number of years. I wasn’t happy with the UofT ECE electromagnetics offerings of recent years, which have been either inconsistently offered or unsatisfactory. For example, the microwave circuits course, which sounded interesting and had an interesting textbook, was mind numbing, almost entirely about Smith charts. I had to go elsewhere to fulfill the M.Eng degree requirements. That elsewhere was a project course.

I proposed an electromagnetism project with the following goals:

  1. Perform a literature review of applications of geometric algebra to the study of electromagnetism.
  2. Identify the subset of the literature that had direct relevance to electrical engineering.
  3. Create a complete, and as compact as possible, introduction to the prerequisites required for a graduate or advanced undergraduate electrical engineering student to be able to apply geometric algebra to problems in electromagnetism. With those prerequisites in place, work through the fundamentals of electromagnetism in a geometric algebra context.

In retrospect, doing this project was a mistake. I could have done this work outside of an academic context without paying so much (in both time and money). Somewhere along the way I lost track of the fact that I enrolled in the M.Eng to learn (it provided a way to take grad physics courses on a part time schedule), and got sidetracked by degree requirements. Basically, I fell victim to an “I may as well graduate” sentiment that would have been better to ignore. All that, coupled with the fact that I did not actually get any feedback from my “supervisor”, who did not even read my work (at least so far, after one year), made this project course very frustrating. On the bright side, I really like what I produced, even if I had to do so in isolation.

Why geometric algebra?

Geometric algebra generalizes vectors, providing algebraic representations of not just directed line segments, but also points, plane segments, volumes, and higher degree geometric objects (hypervolumes). The geometric algebra representation of planes, volumes and hypervolumes requires a vector dot product, a vector multiplication operation, and a generalized addition operation. The dot product provides the length of a vector and a test for whether or not any two vectors are perpendicular. The vector multiplication operation is used to construct directed plane segments (bivectors), and directed volumes (trivectors), which are built from the respective products of two or three mutually perpendicular vectors. The addition operation allows for sums of scalars, vectors, or any products of vectors. Such a sum is called a multivector.
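
To make that concrete, here is a minimal Python sketch (an illustration added here, not anything from the book); it represents an R^3 multivector as eight basis-blade coefficients, implements the geometric product directly, and shows the product of two vectors splitting into a scalar (dot product) plus a bivector (wedge product). The bitmask blade encoding is an ad-hoc choice for this sketch.

import numpy as np

# Multivector in R^3 as 8 coefficients indexed by a bitmask over {e1, e2, e3}:
# index 0 -> scalar, 1 -> e1, 2 -> e2, 4 -> e3, 3 -> e12, 5 -> e13, 6 -> e23, 7 -> e123.
def blade_sign(a, b):
    """Sign from reordering the product of basis blades a, b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count('1')
        a >>= 1
    return -1.0 if swaps & 1 else 1.0

def gp(x, y):
    """Geometric product of two multivectors (Euclidean metric, e_i^2 = +1)."""
    z = np.zeros(8)
    for a in range(8):
        for b in range(8):
            if x[a] and y[b]:
                z[a ^ b] += blade_sign(a, b) * x[a] * y[b]
    return z

def vector(c1, c2, c3):
    v = np.zeros(8)
    v[1], v[2], v[4] = c1, c2, c3
    return v

u = vector(1.0, 2.0, 0.0)
v = vector(3.0, 0.0, -1.0)
uv = gp(u, v)

# The product of two vectors is a scalar (the dot product) plus a bivector (the wedge).
print("scalar part <uv>_0:", uv[0])                  # equals u . v = 3
print("bivector part <uv>_2:", uv[3], uv[5], uv[6])  # e12, e13, e23 components of u ^ v
print("the sum is a multivector:", uv)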

The power to add scalars, vectors, and products of vectors can be exploited to simplify much of electromagnetism. In particular, Maxwell’s equations for isotropic media can be merged into a single multivector equation
\begin{equation}\label{eqn:quaternion2maxwellWithGA:20}
\lr{ \spacegrad + \inv{c} \PD{t}{}} \lr{ \BE + I c \BB } = \eta\lr{ c \rho - \BJ },
\end{equation}
where \( \spacegrad \) is the gradient, \( I = \Be_1 \Be_2 \Be_3 \) is the ordered product of the three R^3 basis vectors, \( c = 1/\sqrt{\mu\epsilon}\) is the group velocity of the medium, \( \eta = \sqrt{\mu/\epsilon} \), \( \BE, \BB \) are the electric and magnetic fields, and \( \rho \) and \( \BJ \) are the charge and current densities. This can be written as a single equation
\begin{equation}\label{eqn:ece2500report:40}
\lr{ \spacegrad + \inv{c} \PD{t}{}} F = J,
\end{equation}
where \( F = \BE + I c \BB \) is the combined (multivector) electromagnetic field, and \( J = \eta\lr{ c \rho - \BJ } \) is the multivector current.
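
Grade by grade this single equation contains all of the conventional equations. As a quick orientation (an expansion added here for reference, using \( \spacegrad \wedge \Bf = I \lr{ \spacegrad \cross \Bf } \), \( \eta c = 1/\epsilon \), and \( \eta/c = \mu \)), selecting the scalar, vector, bivector and trivector grades of the multivector equation above gives
\begin{equation}
\begin{aligned}
\spacegrad \cdot \BE &= \rho/\epsilon \\
\spacegrad \cross \BB &= \mu \BJ + \mu \epsilon \PD{t}{\BE} \\
\spacegrad \cross \BE &= -\PD{t}{\BB} \\
\spacegrad \cdot \BB &= 0,
\end{aligned}
\end{equation}
which are Maxwell's equations for isotropic media in their familiar form.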

Encountering Maxwell’s equation in its geometric algebra form leaves the student with more questions than answers. Yes, it is a compact representation, but so are the tensor and differential forms (or even the quaternionic) representations of Maxwell’s equations. The student needs to know how to work with the representation if it is to be useful. It should also be clear how to use the existing conventional mathematical tools of applied electromagnetism, or how to generalize those appropriately. Individually, there are answers available to many of the questions that are generated attempting to apply the theory, but they are scattered and in many cases not easily accessible.

Much of the geometric algebra literature for electrodynamics is presented with a relativistic bias, or assumes high levels of mathematical or physics sophistication. The aim of this work was an attempt to make the study of electromagnetism using geometric algebra more accessible, especially to other dumb engineering undergraduates like myself. In particular, this project explored non-relativistic applications of geometric algebra to electromagnetism. The end product of this project was a fairly small self contained book, titled “Geometric Algebra for Electrical Engineers”. This book includes an introduction to Euclidean geometric algebra focused on R^2 and R^3 (64 pages), an introduction to geometric calculus and multivector Green’s functions (64 pages), applications to electromagnetism (82 pages), and some appendices. Many of the fundamental results of electromagnetism are derived directly from the multivector Maxwell’s equation, in a streamlined and compact fashion. This includes some new results, and many of the existing non-relativistic results from the geometric algebra literature. As a conceptual bridge, the book includes many examples of how to extract familiar conventional results from simpler multivector representations. Also included in the book are some sample calculations exploiting unique capabilities that geometric algebra provides. In particular, vectors in a plane may be manipulated much like complex numbers, which has a number of advantages over working with coordinates explicitly.

Followup.

In many ways this work only scratches the surface. Many more worked examples, problems, figures and computer algebra listings should be added. In depth applications of derived geometric algebra relationships to problems customarily tackled with separate electric and magnetic field equations should also be incorporated. There are also theoretical holes, topics covered in any conventional introductory electromagnetism text, that are missing. Examples include the Fresnel relationships for transmission and reflection at an interface, in depth treatment of waveguides, dipole radiation and motion of charged particles, bound charges, and meta materials to name a few. Many of these topics can probably be handled in a coordinate free fashion using geometric algebra. Despite all the work that is required to help bridge the gap between formalism and application, making applied electromagnetism using geometric algebra truly accessible, it is my belief this book makes some good first steps down this path.

The choice that I made to completely avoid the space time algebra (STA) is somewhat unfortunate. It is exceedingly elegant, especially in a relativistic context. Despite that, I think that this was still a good choice from a pedagogical point of view, as most of the prerequisites for an STA based study will have been taken care of as a side effect, making that study much more accessible.

Potential solutions to the static Maxwell’s equation using geometric algebra

March 20, 2018 math and physics play No comments


When neither the electromagnetic field strength \( F = \BE + I \eta \BH \), nor the current \( J = \eta (c \rho - \BJ) + I(c\rho_m - \BM) \) is a function of time, the geometric algebra form of Maxwell's equation is the first order multivector (gradient) equation
\begin{equation}\label{eqn:staticPotentials:20}
\spacegrad F = J.
\end{equation}

While direct solutions to this equation are possible with the multivector Green's function for the gradient
\begin{equation}\label{eqn:staticPotentials:40}
G(\Bx, \Bx') = \inv{4\pi} \frac{\Bx - \Bx'}{\Norm{\Bx - \Bx'}^3 },
\end{equation}
the aim in this post is to explore second order (potential) solutions in a geometric algebra context. Can we assume that it is possible to find a multivector potential \( A \) for which
\begin{equation}\label{eqn:staticPotentials:60}
F = \spacegrad A,
\end{equation}
is a solution to the Maxwell statics equation? If such a solution exists, then Maxwell’s equation is simply
\begin{equation}\label{eqn:staticPotentials:80}
\spacegrad^2 A = J,
\end{equation}
which can be easily solved using the scalar Green’s function for the Laplacian
\begin{equation}\label{eqn:staticPotentials:240}
G(\Bx, \Bx') = -\inv{4 \pi \Norm{\Bx - \Bx'} },
\end{equation}
a beastie that may be easier to convolve than the vector valued Green’s function for the gradient.

It is immediately clear that some restrictions must be imposed on the multivector potential \(A\). In particular, since the field \( F \) has only vector and bivector grades, this gradient must have no scalar, nor pseudoscalar grades. That is
\begin{equation}\label{eqn:staticPotentials:100}
\gpgrade{\spacegrad A}{0,3} = 0.
\end{equation}
This constraint on the potential can be avoided if a grade selection operation is built directly into the assumed potential solution, requiring that the field is given by
\begin{equation}\label{eqn:staticPotentials:120}
F = \gpgrade{\spacegrad A}{1,2}.
\end{equation}
However, after imposing such a constraint, Maxwell’s equation has a much less friendly form
\begin{equation}\label{eqn:staticPotentials:140}
\spacegrad^2 A – \spacegrad \gpgrade{\spacegrad A}{0,3} = J.
\end{equation}
Luckily, it is possible to introduce a transformation of potentials, called a gauge transformation, that eliminates the ugly grade selection term, and allows the potential equation to be expressed as a plain old Laplacian. We do so by assuming first that it is possible to find a solution of the Laplacian equation that has the desired grade restrictions. That is
\begin{equation}\label{eqn:staticPotentials:160}
\begin{aligned}
\spacegrad^2 A' &= J \\
\gpgrade{\spacegrad A'}{0,3} &= 0,
\end{aligned}
\end{equation}
for which \( F = \spacegrad A' \) is a grade 1,2 solution to \( \spacegrad F = J \). Suppose that \( A \) is any formal solution, free of any grade restrictions, to \( \spacegrad^2 A = J \), and \( F = \gpgrade{\spacegrad A}{1,2} \). Can we find a function \( \tilde{A} \) for which \( A = A' + \tilde{A} \)?

Maxwell’s equation in terms of \( A \) is
\begin{equation}\label{eqn:staticPotentials:180}
\begin{aligned}
J
&= \spacegrad \gpgrade{\spacegrad A}{1,2} \\
&= \spacegrad^2 A
– \spacegrad \gpgrade{\spacegrad A}{0,3} \\
&= \spacegrad^2 (A’ + \tilde{A})
– \spacegrad \gpgrade{\spacegrad A}{0,3}
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:staticPotentials:200}
\spacegrad^2 \tilde{A} = \spacegrad \gpgrade{\spacegrad A}{0,3}.
\end{equation}
This is a non-homogeneous Laplacian equation that can be solved as is for \( \tilde{A} \) using the Green's function for the Laplacian. Alternatively, we may solve the equivalent first order system using the Green's function for the gradient.
\begin{equation}\label{eqn:staticPotentials:220}
\spacegrad \tilde{A} = \gpgrade{\spacegrad A}{0,3}.
\end{equation}
Clearly \( \tilde{A} \) is not unique, as we can add any function \( \psi \) satisfying the homogeneous Laplacian equation \( \spacegrad^2 \psi = 0 \).

In summary, if \( A \) is any multivector solution to \( \spacegrad^2 A = J \), that is
\begin{equation}\label{eqn:staticPotentials:260}
A(\Bx)
= \int dV' G(\Bx, \Bx') J(\Bx')
= -\int dV' \frac{J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} },
\end{equation}
then \( F = \spacegrad A' \) is a solution to Maxwell's equation, where \( A' = A - \tilde{A} \), and \( \tilde{A} \) is a solution to the non-homogeneous Laplacian equation or the non-homogeneous gradient equation above.
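
As a sanity check of this superposition form (a numeric sketch added here, not part of the original post), the integral can be evaluated component-wise for a source with a known potential. For a constant source \( J_0 \) supported in a ball of radius \( R \), the exterior solution of \( \spacegrad^2 A = J \) is \( -J_0 (4/3)\pi R^3/(4 \pi \Norm{\Bx}) \), and a brute force midpoint-rule sum of the Green's function superposition reproduces it:

import numpy as np

# Brute-force check of A(x) = -int dV' J(x') / (4 pi |x - x'|) for a constant
# source J0 inside a ball of radius R, evaluated at an exterior point x0.
# (Scalar component only; the multivector solution is built component-wise.)
J0, R = 1.0, 0.5
n = 60
g = np.linspace(-R, R, n)
dV = (g[1] - g[0]) ** 3
X, Y, Z = np.meshgrid(g, g, g, indexing='ij')
pts = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])
pts = pts[np.linalg.norm(pts, axis=1) <= R]     # keep only cells inside the ball

x0 = np.array([1.5, 0.25, -0.5])                # exterior evaluation point
r = np.linalg.norm(pts - x0, axis=1)
A_numeric = -np.sum(J0 / (4.0 * np.pi * r)) * dV

V_ball = 4.0 / 3.0 * np.pi * R ** 3
A_exact = -J0 * V_ball / (4.0 * np.pi * np.linalg.norm(x0))

print(A_numeric, A_exact)    # roughly a fraction of a percent apart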

Integral form of the gauge transformation.

Additional insight is possible by considering the gauge transformation in integral form. Suppose that
\begin{equation}\label{eqn:staticPotentials:280}
A(\Bx) = -\int_V dV' \frac{J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} } - \tilde{A}(\Bx),
\end{equation}
is a solution of \( \spacegrad^2 A = J \), where \( \tilde{A} \) is a multivector solution to the homogeneous Laplacian equation \( \spacegrad^2 \tilde{A} = 0 \). Let’s look at the constraints on \( \tilde{A} \) that must be imposed for \( F = \spacegrad A \) to be a valid (i.e. grade 1,2) solution of Maxwell’s equation.
\begin{equation}\label{eqn:staticPotentials:300}
\begin{aligned}
F
&= \spacegrad A \\
&=
-\int_V dV' \lr{ \spacegrad \inv{4 \pi \Norm{\Bx - \Bx'} } } J(\Bx')
- \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \lr{ \spacegrad' \inv{4 \pi \Norm{\Bx - \Bx'} } } J(\Bx')
- \spacegrad \tilde{A}(\Bx) \\
&=
\int_V dV' \spacegrad' \frac{J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} }
- \spacegrad \tilde{A}(\Bx) \\
&=
\int_{\partial V} dA' \ncap' \frac{J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} } - \int_V dV' \frac{\spacegrad' J(\Bx')}{4 \pi \Norm{\Bx - \Bx'} }
- \spacegrad \tilde{A}(\Bx),
\end{aligned}
\end{equation}
where \( \ncap' = (\Bx' - \Bx)/\Norm{\Bx' - \Bx} \), and the fundamental theorem of geometric calculus has been used to transform the gradient volume integral into an integral over the bounding surface. Operating on Maxwell's equation with the gradient gives \( \spacegrad^2 F = \spacegrad J \), which has only grades 1,2 on the left hand side, meaning that \( J \) is constrained in a way that requires \( \spacegrad J \) to have only grades 1,2. This means that \( F \) has grades 1,2 if
\begin{equation}\label{eqn:staticPotentials:320}
\spacegrad \tilde{A}(\Bx)
= \int_{\partial V} dA' \frac{ \gpgrade{\ncap' J(\Bx')}{0,3} }{4 \pi \Norm{\Bx - \Bx'} }.
\end{equation}
The product \( \ncap J \) expands to
\begin{equation}\label{eqn:staticPotentials:340}
\begin{aligned}
\ncap J
&=
\gpgradezero{\ncap J_1} + \gpgradethree{\ncap J_2} \\
&=
\ncap \cdot (-\eta \BJ) + \gpgradethree{\ncap (-I \BM)} \\
&=- \eta \ncap \cdot \BJ -I \ncap \cdot \BM,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:staticPotentials:360}
\spacegrad \tilde{A}(\Bx)
=
-\int_{\partial V} dA' \frac{ \eta \ncap' \cdot \BJ(\Bx') + I \ncap' \cdot \BM(\Bx')}{4 \pi \Norm{\Bx - \Bx'} }.
\end{equation}
Observe that if there is no flux of current density \( \BJ \) and (fictitious) magnetic current density \( \BM \) through the surface, then \( F = \spacegrad A \) is a solution to Maxwell's equation without any gauge transformation. Alternatively, \( F = \spacegrad A \) is also a solution if \( \lim_{\Bx' \rightarrow \infty} \BJ(\Bx')/\Norm{\Bx - \Bx'} = \lim_{\Bx' \rightarrow \infty} \BM(\Bx')/\Norm{\Bx - \Bx'} = 0 \) and the bounding volume is taken to infinity.


Solving Maxwell’s equation in freespace: Multivector plane wave representation

March 14, 2018 math and physics play 1 comment


The geometric algebra form of Maxwell's equations in free space (or source free isotropic media with group velocity \( c \)) is the multivector equation
\begin{equation}\label{eqn:planewavesMultivector:20}
\lr{ \spacegrad + \inv{c}\PD{t}{} } F(\Bx, t) = 0.
\end{equation}
Here \( F = \BE + I c \BB \) is a multivector with grades 1 and 2 (vector and bivector components). The velocity \( c \) is called the group velocity since \( F \), or its components \( \BE, \BB \), satisfy the wave equation, which can be seen by pre-multiplying with \( \spacegrad - (1/c)\PDi{t}{} \) to find
\begin{equation}\label{eqn:planewavesMultivector:n}
\lr{ \spacegrad^2 – \inv{c^2}\PDSq{t}{} } F(\Bx, t) = 0.
\end{equation}

Let’s look at the frequency domain solution of this equation with a presumed phasor representation
\begin{equation}\label{eqn:planewavesMultivector:40}
F(\Bx, t) = \textrm{Re} \lr{ F(\Bk) e^{-j \Bk \cdot \Bx + j \omega t} },
\end{equation}
where \( j \) is a scalar imaginary, not necessarily with any geometric interpretation.

Maxwell’s equation reduces to just
\begin{equation}\label{eqn:planewavesMultivector:60}
0
=
-j \lr{ \Bk - \frac{\omega}{c} } F(\Bk).
\end{equation}

If \( F(\Bk) \) has a left multivector factor
\begin{equation}\label{eqn:planewavesMultivector:80}
F(\Bk) =
\lr{ \Bk + \frac{\omega}{c} } \tilde{F},
\end{equation}
where \( \tilde{F} \) is a multivector to be determined, then
\begin{equation}\label{eqn:planewavesMultivector:100}
\begin{aligned}
\lr{ \Bk - \frac{\omega}{c} }
F(\Bk)
&=
\lr{ \Bk - \frac{\omega}{c} }
\lr{ \Bk + \frac{\omega}{c} } \tilde{F} \\
&=
\lr{ \Bk^2 - \lr{\frac{\omega}{c}}^2 } \tilde{F},
\end{aligned}
\end{equation}
which is zero if \( \Norm{\Bk} = \ifrac{\omega}{c} \).

Let \( \kcap = \ifrac{\Bk}{\Norm{\Bk}} \), and \( \Norm{\Bk} \tilde{F} = F_0 + F_1 + F_2 + F_3 \), where \( F_0, F_1, F_2, \) and \( F_3 \) respectively have grades 0, 1, 2, and 3. Then
\begin{equation}\label{eqn:planewavesMultivector:120}
\begin{aligned}
F(\Bk)
&= \lr{ 1 + \kcap } \lr{ F_0 + F_1 + F_2 + F_3 } \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap F_1 + \kcap F_2 + \kcap F_3 \\
&=
F_0 + F_1 + F_2 + F_3
+
\kcap F_0 + \kcap \cdot F_1 + \kcap \cdot F_2 + \kcap \cdot F_3
+
\kcap \wedge F_1 + \kcap \wedge F_2 \\
&=
\lr{
F_0 + \kcap \cdot F_1
}
+
\lr{
F_1 + \kcap F_0 + \kcap \cdot F_2
}
+
\lr{
F_2 + \kcap \cdot F_3 + \kcap \wedge F_1
}
+
\lr{
F_3 + \kcap \wedge F_2
}.
\end{aligned}
\end{equation}
Since the field \( F \) has only vector and bivector grades, the grades zero and three components of the expansion above must be zero, or
\begin{equation}\label{eqn:planewavesMultivector:140}
\begin{aligned}
F_0 &= - \kcap \cdot F_1 \\
F_3 &= - \kcap \wedge F_2,
\end{aligned}
\end{equation}
so
\begin{equation}\label{eqn:planewavesMultivector:160}
\begin{aligned}
F(\Bk)
&=
\lr{ 1 + \kcap } \lr{
F_1 - \kcap \cdot F_1 +
F_2 - \kcap \wedge F_2
} \\
&=
\lr{ 1 + \kcap } \lr{
F_1 - \kcap F_1 + \kcap \wedge F_1 +
F_2 - \kcap F_2 + \kcap \cdot F_2
}.
\end{aligned}
\end{equation}
The multivector \( 1 + \kcap \) has the projective property of gobbling any leading factors of \( \kcap \)
\begin{equation}\label{eqn:planewavesMultivector:180}
\begin{aligned}
(1 + \kcap)\kcap
&= \kcap + 1 \\
&= 1 + \kcap,
\end{aligned}
\end{equation}
so for \( F_i \in F_1, F_2 \)
\begin{equation}\label{eqn:planewavesMultivector:200}
(1 + \kcap) ( F_i - \kcap F_i )
=
(1 + \kcap) ( F_i - F_i )
= 0,
\end{equation}
leaving
\begin{equation}\label{eqn:planewavesMultivector:220}
F(\Bk)
=
\lr{ 1 + \kcap } \lr{
\kcap \cdot F_2 +
\kcap \wedge F_1
}.
\end{equation}

For \( \kcap \cdot F_2 \) to be non-zero \( F_2 \) must be a bivector that lies in a plane containing \( \kcap \), and \( \kcap \cdot F_2 \) is a vector in that plane that is perpendicular to \( \kcap \). On the other hand, \( \kcap \wedge F_1 \) is non-zero only if \( F_1 \) has a non-zero component that does not lie along the \( \kcap \) direction, but \( \kcap \wedge F_1 \), like \( F_2 \), describes a plane containing \( \kcap \). This means that having both bivector and vector free variables \( F_2 \) and \( F_1 \) provides more degrees of freedom than required. For example, if \( \BE \) is any vector, and \( F_2 = \kcap \wedge \BE \), then
\begin{equation}\label{eqn:planewavesMultivector:240}
\begin{aligned}
\lr{ 1 + \kcap }
\kcap \cdot F_2
&=
\lr{ 1 + \kcap }
\kcap \cdot \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\lr{
\BE
-
\kcap \lr{ \kcap \cdot \BE }
} \\
&=
\lr{ 1 + \kcap }
\kcap \lr{ \kcap \wedge \BE } \\
&=
\lr{ 1 + \kcap }
\kcap \wedge \BE,
\end{aligned}
\end{equation}
which has the form \( \lr{ 1 + \kcap } \lr{ \kcap \wedge F_1 } \), so the solution of the free space Maxwell’s equation can be written
\begin{equation}\label{eqn:planewavesMultivector:260}
\boxed{
F(\Bx, t)
=
\textrm{Re} \lr{
\lr{ 1 + \kcap }
\BE\,
e^{-j \Bk \cdot \Bx + j \omega t}
}
,
}
\end{equation}
where \( \BE \) is any vector for which \( \BE \cdot \Bk = 0 \).
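
Here is a small numeric spot-check of this solution (a sketch added here, using the same sort of ad-hoc multivector product as earlier): with \( \Norm{\Bk} = \omega/c \) and \( \BE \cdot \Bk = 0 \), the factor \( \lr{ \kcap - 1 } \lr{ 1 + \kcap } \BE \) should vanish, which is exactly the frequency domain condition \( \lr{ \Bk - \omega/c } F(\Bk) = 0 \), and \( F(\Bk) \) should contain only vector and bivector grades.

import numpy as np

# Basis blades of R^3 indexed by bitmask: 1 -> e1, 2 -> e2, 4 -> e3.
def blade_sign(a, b):
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count('1')
        a >>= 1
    return -1.0 if swaps & 1 else 1.0

def gp(x, y):
    z = np.zeros(8)
    for a in range(8):
        for b in range(8):
            if x[a] and y[b]:
                z[a ^ b] += blade_sign(a, b) * x[a] * y[b]
    return z

def vector(v3):
    m = np.zeros(8)
    m[1], m[2], m[4] = v3
    return m

rng = np.random.default_rng(0)
k3 = rng.normal(size=3)
khat3 = k3 / np.linalg.norm(k3)

e3 = rng.normal(size=3)
e3 = e3 - np.dot(e3, khat3) * khat3     # enforce E . k = 0

one = np.zeros(8); one[0] = 1.0
khat = vector(khat3)
E = vector(e3)

F = gp(one + khat, E)                   # F(k), up to the scalar |k| factor
residual = gp(khat - one, F)            # proportional to (k - omega/c) F(k)

print(np.max(np.abs(residual)))         # ~ 1e-16: Maxwell's equation is satisfied
# only vector (indexes 1, 2, 4) and bivector (indexes 3, 5, 6) components appear
print("grades present in F:", [i for i in range(8) if abs(F[i]) > 1e-12])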

Fundamental Theorem of Geometric Calculus

September 20, 2016 math and physics play No comments


Stokes Theorem

The Fundamental Theorem of (Geometric) Calculus is a generalization of Stokes theorem to multivector integrals. Notationally, it looks like Stokes theorem with all the dot and wedge products removed. It is worth restating Stokes theorem and all the definitions associated with it for reference

Stokes’ Theorem

For blades \(F \in \bigwedge^{s}\), and a \(k\) volume element \(d^k \Bx\), with \(s < k\),

\begin{equation*}
\int_V d^k \Bx \cdot (\boldpartial \wedge F) = \oint_{\partial V} d^{k-1} \Bx \cdot F.
\end{equation*}

This is a loaded and abstract statement, and requires many definitions to make it useful

  • The volume integral is over a \(k\) dimensional surface (manifold).
  • Integration over the boundary of the manifold \(V\) is indicated by \( \partial V \).
  • This manifold is assumed to be spanned by a parameterized vector \( \Bx(u^1, u^2, \cdots, u^k) \).
  • A curvilinear coordinate basis \( \setlr{ \Bx_i } \) can be defined on the manifold by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:40}
    \Bx_i \equiv \PD{u^i}{\Bx} \equiv \partial_i \Bx.
    \end{equation}

  • A dual basis \( \setlr{\Bx^i} \) reciprocal to the tangent vector basis \( \Bx_i \) can be calculated subject to the requirement \( \Bx_i \cdot \Bx^j = \delta_i^j \).
  • The vector derivative \(\boldpartial\), the projection of the gradient onto the tangent space of the manifold, is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:100}
    \boldpartial = \Bx^i \partial_i = \sum_{i=1}^k \Bx^i \PD{u^i}{}.
    \end{equation}

  • The volume element is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:60}
    d^k \Bx = d\Bx_1 \wedge d\Bx_2 \cdots \wedge d\Bx_k,
    \end{equation}

    where

    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:80}
    d\Bx_k = \Bx_k du^k,\qquad \text{(no sum)}.
    \end{equation}

  • The volume element is non-zero on the manifold, or \( \Bx_1 \wedge \cdots \wedge \Bx_k \ne 0 \).
  • The surface area element \( d^{k-1} \Bx \), is defined by
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:120}
    d^{k-1} \Bx = \sum_{i = 1}^k (-1)^{k-i} d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k,
    \end{equation}

    where \( \widehat{d\Bx_i} \) indicates the omission of \( d\Bx_i \).

  • My proof for this theorem was restricted to a simple “rectangular” volume parameterized by the ranges
    \(
    [u^1(0), u^1(1) ] \otimes
    [u^2(0), u^2(1) ] \otimes \cdots \otimes
    [u^k(0), u^k(1) ] \)

  • The precise meaning that should be given to the oriented area integral is
    \begin{equation}\label{eqn:fundamentalTheoremOfCalculus:140}
    \oint_{\partial V} d^{k-1} \Bx \cdot F
    =
    \sum_{i = 1}^k (-1)^{k-i} \int \evalrange{
    \lr{ \lr{ d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k } \cdot F }
    }{u^i = u^i(0)}{u^i(1)},
    \end{equation}

    where both the area form and the blade \( F \) are evaluated at the end points of the parameterization range.

After the work of stating exactly what is meant by this theorem, most of the proof follows from the fact that for \( s < k \) the volume curl dot product can be expanded as

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:160}
\int_V d^k \Bx \cdot (\boldpartial \wedge F) = \int_V d^k \Bx \cdot (\Bx^i \wedge \partial_i F) = \int_V \lr{ d^k \Bx \cdot \Bx^i } \cdot \partial_i F.
\end{equation}

Each of the \(du^i\) integrals can be evaluated directly, since each of the remaining \(d\Bx_j = \Bx_j du^j, j \ne i \) is calculated with \( u^i \) held fixed. This allows for the integration over a ``rectangular'' parameterization region, proving the theorem for such a volume parameterization. A more general proof requires a triangulation of the volume and surface, but the basic principle of the theorem is evident, without that additional work.

Fundamental Theorem of Calculus

There is a Geometric Algebra generalization of Stokes theorem that does not have the blade grade restriction of Stokes theorem. In [2] this is stated as

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:180}
\int_V d^k \Bx \boldpartial F = \oint_{\partial V} d^{k-1} \Bx F.
\end{equation}

A similar expression is used in [1] where it is also pointed out there is a variant with the vector derivative acting to the left

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:200}
\int_V F d^k \Bx \boldpartial = \oint_{\partial V} F d^{k-1} \Bx.
\end{equation}

In [3] it is pointed out that a bidirectional formulation is possible, providing the most general expression of the Fundamental Theorem of (Geometric) Calculus

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:220}
\boxed{
\int_V F d^k \Bx \boldpartial G = \oint_{\partial V} F d^{k-1} \Bx G.
}
\end{equation}

Here the vector derivative acts both to the left and right on \( F \) and \( G \). The specific action of this operator is
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:240}
\begin{aligned}
F \boldpartial G
&=
(F \boldpartial) G
+
F (\boldpartial G) \\
&=
(\partial_i F) \Bx^i G
+
F \Bx^i (\partial_i G).
\end{aligned}
\end{equation}

The fundamental theorem can be demonstrated by direct expansion. With the vector derivative \( \boldpartial \) and its partials \( \partial_i \) acting bidirectionally, that is

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:260}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F d^k \Bx \Bx^i \partial_i G \\
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i + d^k \Bx \wedge \Bx^i } \partial_i G.
\end{aligned}
\end{equation}

Both the reciprocal frame vectors and the curvilinear basis span the tangent space of the manifold, since we can write any reciprocal frame vector as a set of projections in the curvilinear basis

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:280}
\Bx^i = \sum_j \lr{ \Bx^i \cdot \Bx^j } \Bx_j,
\end{equation}

so \( \Bx^i \in \textrm{span} \setlr{ \Bx_j, j \in [1,k] } \).
This means that \( d^k \Bx \wedge \Bx^i = 0 \), and

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:300}
\begin{aligned}
\int_V F d^k \Bx \boldpartial G
&=
\int_V F \lr{ d^k \Bx \cdot \Bx^i } \partial_i G \\
&=
\sum_{i = 1}^{k}
\int_V
du^1 du^2 \cdots \widehat{ du^i} \cdots du^k
F \lr{
(-1)^{k-i}
\Bx_1 \wedge \Bx_2 \cdots \widehat{\Bx_i} \cdots \wedge \Bx_k } \partial_i G du^i \\
&=
\sum_{i = 1}^{k}
(-1)^{k-i}
\int_{u^1}
\int_{u^2}
\cdots
\int_{u^{i-1}}
\int_{u^{i+1}}
\cdots
\int_{u^k}
\evalrange{ \lr{
F d\Bx_1 \wedge d\Bx_2 \cdots \widehat{d\Bx_i} \cdots \wedge d\Bx_k G
}
}{u^i = u^i(0)}{u^i(1)}.
\end{aligned}
\end{equation}

Adding in the same notational sugar that we used in Stokes theorem, this proves the Fundamental theorem \ref{eqn:fundamentalTheoremOfCalculus:220} for “rectangular” parameterizations. Note that such a parameterization need not actually be rectangular.
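
As a sanity check (an added example, not from the original post), the \( k = 1 \) case is just the ordinary fundamental theorem of calculus. For a curve \( \Bx(u) \) we have \( d^1 \Bx = \Bx_u \, du \), \( \boldpartial = \Bx^u \partial_u \), and \( \Bx_u \Bx^u = \Bx_u \cdot \Bx^u = 1 \), since a one dimensional tangent space has no wedge contribution, so

\begin{equation*}
\int_V F d^1 \Bx \boldpartial G
= \int du \lr{ (\partial_u F) \Bx_u \Bx^u G + F \Bx_u \Bx^u (\partial_u G) }
= \int du \, \partial_u \lr{ F G }
= \evalrange{ F G }{u(0)}{u(1)},
\end{equation*}

which is the boundary term \( \oint_{\partial V} F d^0 \Bx G \), with \( d^0 \Bx = 1 \) and the boundary consisting of the two endpoints taken with opposite orientations.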

Example: Application to Maxwell’s equation


Maxwell’s equation is an example of a first order gradient equation

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:320}
\grad F = \inv{\epsilon_0 c} J.
\end{equation}

Integrating over a four-volume (where the vector derivative equals the gradient), and applying the Fundamental theorem, we have

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:340}
\inv{\epsilon_0 c} \int d^4 x J = \oint d^3 x F.
\end{equation}

Observe that the surface area element product with \( F \) has both vector and trivector terms. This can be demonstrated by considering some examples

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:360}
\begin{aligned}
\gamma_{012} \gamma_{01} &\propto \gamma_2 \\
\gamma_{012} \gamma_{23} &\propto \gamma_{013}.
\end{aligned}
\end{equation}
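
These products are easy to spot-check numerically. The following sketch (added here; the bitmask blade encoding and metric handling are ad-hoc choices) implements the spacetime basis blade product for \( \gamma_0^2 = 1, \gamma_k^2 = -1 \) and confirms both examples:

# Spacetime basis blades as bitmasks over (gamma_0, gamma_1, gamma_2, gamma_3),
# bit i <-> gamma_i.  Metric signature (+, -, -, -).
METRIC = (1, -1, -1, -1)

def reorder_sign(a, b):
    # sign from counting the swaps needed to merge two blades into canonical order
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count('1')
        a >>= 1
    return -1 if swaps & 1 else 1

def blade_product(a, b):
    """Product of basis blades: returns (sign, resulting blade bitmask)."""
    sign = reorder_sign(a, b)
    for i in range(4):
        if a & b & (1 << i):
            sign *= METRIC[i]
    return sign, a ^ b

def name(blade):
    return 'gamma_{' + ''.join(str(i) for i in range(4) if blade & (1 << i)) + '}'

g012 = 0b0111   # gamma_0 gamma_1 gamma_2
g01 = 0b0011    # gamma_0 gamma_1
g23 = 0b1100    # gamma_2 gamma_3

for b in (g01, g23):
    s, r = blade_product(g012, b)
    print(name(g012), name(b), '->', ('+' if s > 0 else '-') + name(r))
# gamma_{012} gamma_{01} -> +gamma_{2}
# gamma_{012} gamma_{23} -> -gamma_{013}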

On the other hand, the four volume integral of \( J \) has only trivector parts. This means that the integral can be split into a pair of same-grade equations

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:380}
\begin{aligned}
\inv{\epsilon_0 c} \int d^4 x \cdot J &=
\oint \gpgradethree{ d^3 x F} \\
0 &=
\oint d^3 x \cdot F.
\end{aligned}
\end{equation}

The first can be put into a slightly tidier form using a duality transformation
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:400}
\begin{aligned}
\gpgradethree{ d^3 x F}
&=
-\gpgradethree{ d^3 x I^2 F} \\
&=
\gpgradethree{ I d^3 x I F} \\
&=
(I d^3 x) \wedge (I F).
\end{aligned}
\end{equation}

Letting \( n \Abs{d^3 x} = I d^3 x \), this gives

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:420}
\oint \Abs{d^3 x} n \wedge (I F) = \inv{\epsilon_0 c} \int d^4 x \cdot J.
\end{equation}

Note that this normal is normal to a three-volume subspace of the spacetime volume. For example, if one component of that spacetime surface area element is \( \gamma_{012} c dt dx dy \), then the normal to that area component is \( \gamma_3 \).

A second set of duality transformations

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:440}
\begin{aligned}
n \wedge (IF)
&=
\gpgradethree{ n I F} \\
&=
-\gpgradethree{ I n F} \\
&=
-\gpgradethree{ I (n \cdot F)} \\
&=
-I (n \cdot F),
\end{aligned}
\end{equation}

and
\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:460}
\begin{aligned}
I d^4 x \cdot J
&=
\gpgradeone{ I d^4 x \cdot J } \\
&=
\gpgradeone{ I d^4 x J } \\
&=
\gpgradeone{ (I d^4 x) J } \\
&=
(I d^4 x) J,
\end{aligned}
\end{equation}

can further tidy things up, leaving us with

\begin{equation}\label{eqn:fundamentalTheoremOfCalculus:500}
\boxed{
\begin{aligned}
\oint \Abs{d^3 x} n \cdot F &= \inv{\epsilon_0 c} \int (I d^4 x) J \\
\oint d^3 x \cdot F &= 0.
\end{aligned}
}
\end{equation}

The Fundamental theorem of calculus immediately provides relations between the Faraday bivector \( F \) and the four-current \( J \).

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[3] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21(1):221-231, 2011. URL http://arxiv.org/abs/0809.4526.

Stokes integrals for Maxwell’s equations in Geometric Algebra

September 4, 2016 math and physics play No comments


Recall that the relativistic form of Maxwell’s equation in Geometric Algebra is

\begin{equation}\label{eqn:maxwellStokes:20}
\grad F = \inv{c \epsilon_0} J.
\end{equation}

where \( \grad = \gamma^\mu \partial_\mu \) is the spacetime gradient, and \( J = (c\rho, \BJ) = J^\mu \gamma_\mu \) is the four (vector) current density. The pseudoscalar for the space is denoted \( I = \gamma_0 \gamma_1 \gamma_2 \gamma_3 \), where the basis elements satisfy \( \gamma_0^2 = 1 = -\gamma_k^2 \), and a dual basis satisfies \( \gamma_\mu \cdot \gamma^\nu = \delta_\mu^\nu \). The electromagnetic field \( F \) is a composite multivector \( F = \BE + I c \BB \). This is actually a bivector because spatial vectors have a bivector representation in the space time algebra of the form \( \BE = E^k \gamma_k \gamma_0 \).

Previously, I wrote out the Stokes integrals for Maxwell’s equation in GA form using some three parameter spacetime manifold volumes. This time I’m going to use two and three parameter spatial volumes, again with the Geometric Algebra form of Stokes theorem.

Multiplication by a timelike unit vector transforms Maxwell's equation from its relativistic form. When that vector is the standard basis timelike unit vector \( \gamma_0 \), we obtain Maxwell's equations from the point of view of a stationary observer

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:40}
\lr{\partial_0 + \spacegrad} \lr{ \BE + c I \BB } = \inv{\epsilon_0 c} \lr{ c \rho - \BJ },
\end{equation}

Extracting the scalar, vector, bivector, and trivector grades respectively, we have
\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:60}
\begin{aligned}
\spacegrad \cdot \BE &= \frac{\rho}{\epsilon_0} \\
c I \spacegrad \wedge \BB &= -\partial_0 \BE - \inv{\epsilon_0 c} \BJ \\
\spacegrad \wedge \BE &= - I c \partial_0 \BB \\
c I \spacegrad \cdot \BB &= 0.
\end{aligned}
\end{equation}

Each of these can be written as a curl equation

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:80}
\boxed{
\begin{aligned}
\spacegrad \wedge (I \BE) &= I \frac{\rho}{\epsilon_0} \\
\inv{\mu_0} \spacegrad \wedge \BB &= \epsilon_0 I \partial_t \BE + I \BJ \\
\spacegrad \wedge \BE &= -I \partial_t \BB \\
\spacegrad \wedge (I \BB) &= 0,
\end{aligned}
}
\end{equation}

a form that allows for direct application of Stokes integrals. The first and last of these require a three parameter volume element, whereas the two bivector grade equations can be integrated using either two or three parameter volume elements. Suppose that we can parameterize the space with parameters \( u, v, w \), for which the gradient has the representation

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:100}
\spacegrad = \Bx^u \partial_u + \Bx^v \partial_v + \Bx^w \partial_w,
\end{equation}

but we integrate over a two parameter subset of this space spanned by \( \Bx(u,v) \), with area element

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:120}
\begin{aligned}
d^2 \Bx
&= d\Bx_u \wedge d\Bx_v \\
&=
\PD{u}{\Bx}
\wedge
\PD{v}{\Bx}
\,du dv \\
&=
\Bx_u
\wedge
\Bx_v
\,du dv,
\end{aligned}
\end{equation}

as illustrated in fig. 1.

 


fig. 1. Two parameter manifold.

Our curvilinear coordinates \( \Bx_u, \Bx_v, \Bx_w \) are dual to the reciprocal basis \( \Bx^u, \Bx^v, \Bx^w \), but we won’t actually have to calculate that reciprocal basis. Instead we need only know that it can be calculated and is defined by the relations \( \Bx_a \cdot \Bx^b = \delta_a^b \). Knowing that we can reduce (say),

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:140}
\begin{aligned}
d^2 \Bx \cdot ( \spacegrad \wedge \BE )
&=
d^2 \Bx \cdot ( \Bx^a \partial_a \wedge \BE ) \\
&=
(\Bx_u \wedge \Bx_v) \cdot ( \Bx^a \wedge \partial_a \BE ) \,du dv \\
&=
((\Bx_u \wedge \Bx_v) \cdot \Bx^a) \cdot \partial_a \BE \,du dv \\
&=
d\Bx_u \cdot \partial_v \BE \,dv
-d\Bx_v \cdot \partial_u \BE \,du.
\end{aligned}
\end{equation}

Because each of the differentials, for example \( d\Bx_u = (\PDi{u}{\Bx}) du \), is calculated with the other (i.e.\( v \)) held constant, this is directly integrable, leaving

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:160}
\begin{aligned}
\int d^2 \Bx \cdot ( \spacegrad \wedge \BE )
&=
\int \evalrange{\lr{d\Bx_u \cdot \BE}}{v=0}{v=1}
-\int \evalrange{\lr{d\Bx_v \cdot \BE}}{u=0}{u=1} \\
&=
\oint d\Bx \cdot \BE.
\end{aligned}
\end{equation}

That direct integration of one of the parameters, while the others are held constant, is the basic idea behind Stokes theorem.
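
A small numeric illustration (added here, for the planar case \( u, v = x, y \); not part of the original post): for \( \BE = (-y, x, 0) \) the curl is constant, \( \spacegrad \cross \BE = (0,0,2) \), so the curl flux through the unit square is \( 2 \), and a direct numeric evaluation of the boundary circulation \( \oint d\Bx \cdot \BE \) agrees.

import numpy as np

# E = (-y, x, 0); curl E = (0, 0, 2), so the flux of the curl through the
# unit square in the x-y plane is 2.  Compare with the boundary circulation.
def E(p):
    x, y = p
    return np.array([-y, x])

n = 2000
t = (np.arange(n) + 0.5) / n     # midpoints of each boundary segment
dt = 1.0 / n

# boundary of [0,1]^2 traversed counterclockwise: bottom, right, top, left
segments = [
    (lambda s: np.array([s, 0.0]), np.array([1.0, 0.0])),
    (lambda s: np.array([1.0, s]), np.array([0.0, 1.0])),
    (lambda s: np.array([1.0 - s, 1.0]), np.array([-1.0, 0.0])),
    (lambda s: np.array([0.0, 1.0 - s]), np.array([0.0, -1.0])),
]

circulation = 0.0
for path, tangent in segments:
    for s in t:
        circulation += np.dot(E(path(s)), tangent) * dt

print(circulation)   # -> 2.0, matching the area integral of the curl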

The pseudoscalar grade Maxwell’s equations from \ref{eqn:stokesMaxwellSpaceTimeSplit:80} require a three parameter volume element to apply Stokes theorem to. Again, allowing for curvilinear coordinates such a differential expands as

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:180}
\begin{aligned}
d^3 \Bx \cdot (\spacegrad \wedge (I\BB))
&=
(( \Bx_u \wedge \Bx_v \wedge \Bx_w ) \cdot \Bx^a ) \cdot \partial_a (I\BB) \,du dv dw \\
&=
(d\Bx_u \wedge d\Bx_v) \cdot \partial_w (I\BB) dw
+(d\Bx_v \wedge d\Bx_w) \cdot \partial_u (I\BB) du
+(d\Bx_w \wedge d\Bx_u) \cdot \partial_v (I\BB) dv.
\end{aligned}
\end{equation}

Like the two parameter volume, this is directly integrable

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:200}
\int
d^3 \Bx \cdot (\spacegrad \wedge (I\BB))
=
\int \evalbar{(d\Bx_u \wedge d\Bx_v) \cdot (I\BB) }{\Delta w}
+\int \evalbar{(d\Bx_v \wedge d\Bx_w) \cdot (I\BB)}{\Delta u}
+\int \evalbar{(d\Bx_w \wedge d\Bx_u) \cdot (I\BB)}{\Delta v}.
\end{equation}

After some thought (or a craft project such as that of fig. 2) it can be observed that this is conceptually an oriented surface integral.


fig. 2. Oriented three parameter surface.

Noting that

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:221}
\begin{aligned}
d^2 \Bx \cdot (I\Bf)
&= \gpgradezero{ d^2 \Bx I \Bf } \\
&= I (d^2\Bx \wedge \Bf)
\end{aligned}
\end{equation}

we can now write down the results of application of Stokes theorem to each of Maxwell’s equations in their curl forms

\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:220}
\boxed{
\begin{aligned}
\oint d\Bx \cdot \BE &= -I \partial_t \int d^2 \Bx \wedge \BB \\
\inv{\mu_0} \oint d\Bx \cdot \BB &= \epsilon_0 I \partial_t \int d^2 \Bx \wedge \BE + I \int d^2 \Bx \wedge \BJ \\
\oint d^2 \Bx \wedge \BE &= \inv{\epsilon_0} \int (d^3 \Bx \cdot I) \rho \\
\oint d^2 \Bx \wedge \BB &= 0.
\end{aligned}
}
\end{equation}

In the three parameter surface integrals the specific meaning to apply to \( d^2 \Bx \wedge \Bf \) is
\begin{equation}\label{eqn:stokesMaxwellSpaceTimeSplit:240}
\oint d^2 \Bx \wedge \Bf
=
\int \evalbar{\lr{d\Bx_u \wedge d\Bx_v \wedge \Bf}}{\Delta w}
+\int \evalbar{\lr{d\Bx_v \wedge d\Bx_w \wedge \Bf}}{\Delta u}
+\int \evalbar{\lr{d\Bx_w \wedge d\Bx_u \wedge \Bf}}{\Delta v}.
\end{equation}

Note that in each case only the component of the vector \( \Bf \) that is projected onto the normal to the area element contributes.

Updated notes for ece1229 antenna theory

March 16, 2015 ece1229 No comments

I’ve now posted a first update of my notes for the antenna theory course that I am taking this term at UofT.

Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides which go by faster than I can easily take notes for (and some of which match the textbook closely). In class I have annotated my copy of the textbook with little details instead. This set of notes contains musings on details that were unclear, or in some cases, details that were provided in class, but are not in the text (and too long to pencil into my book), as well as some notes on the Geometric Algebra formalism for Maxwell's equations with magnetic sources (something I've encountered for the first time in any real detail in this class).

The notes compilation linked above includes all of the following separate notes, some of which have been posted separately on this blog:

Maxwell’s equations in tensor form with magnetic sources

February 22, 2015 ece1229 No comments


Following the principle that one should always relate new formalisms to things previously learned, I’d like to know what Maxwell’s equations look like in tensor form when magnetic sources are included. As a verification that the previous Geometric Algebra form of Maxwell’s equation that includes magnetic sources is correct, I’ll start with the GA form of Maxwell’s equation, find the tensor form, and then verify that the vector form of Maxwell’s equations can be recovered from the tensor form.

Tensor form

With four-vector potential \( A \), and bivector electromagnetic field \( F = \grad \wedge A \), the GA form of Maxwell’s equation is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:20}
\grad F = \frac{J}{\epsilon_0 c} + M I.
\end{equation}

The left hand side can be unpacked into vector and trivector terms \( \grad F = \grad \cdot F + \grad \wedge F \), which happens to also separate the sources nicely as a side effect

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:60}
\grad \cdot F = \frac{J}{\epsilon_0 c}
\end{equation}
\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:80}
\grad \wedge F = M I.
\end{equation}

The electric source equation can be unpacked into tensor form by dotting with the four vector basis vectors. With the usual definition \( F^{\alpha \beta} = \partial^\alpha A^\beta – \partial^\beta A^\alpha \), that is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:100}
\begin{aligned}
\gamma^\mu \cdot \lr{ \grad \cdot F }
&=
\gamma^\mu \cdot \lr{ \grad \cdot \lr{ \grad \wedge A } } \\
&=
\gamma^\mu \cdot \lr{ \gamma^\nu \partial_\nu \cdot
\lr{ \gamma_\alpha \partial^\alpha \wedge \gamma_\beta A^\beta } } \\
&=
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta
} } \partial_\nu \partial^\alpha A^\beta \\
&=
\inv{2}
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{ \gamma_\alpha \wedge \gamma_\beta } }
\partial_\nu F^{\alpha \beta} \\
&=
\inv{2} \delta^{\nu \mu}_{[\alpha \beta]} \partial_\nu F^{\alpha \beta} \\
&=
\inv{2} \partial_\nu F^{\nu \mu}
-
\inv{2} \partial_\nu F^{\mu \nu} \\
&=
\partial_\nu F^{\nu \mu}.
\end{aligned}
\end{equation}

So the first tensor equation is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:120}
\boxed{
\partial_\nu F^{\nu \mu} = \inv{c \epsilon_0} J^\mu.
}
\end{equation}

To unpack the magnetic source portion of Maxwell’s equation, put it first into dual form, so that it has four vectors on each side

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:140}
\begin{aligned}
M
&= - \lr{ \grad \wedge F} I \\
&= -\frac{1}{2} \lr{ \grad F + F \grad } I \\
&= -\frac{1}{2} \lr{ \grad F I – F I \grad } \\
&= – \grad \cdot \lr{ F I }.
\end{aligned}
\end{equation}

Dotting with \( \gamma^\mu \) gives

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:160}
\begin{aligned}
M^\mu
&= \gamma^\mu \cdot \lr{ \grad \cdot \lr{ – F I } } \\
&= \gamma^\mu \cdot \lr{ \gamma^\nu \partial_\nu \cdot \lr{ -\frac{1}{2}
\gamma^\alpha \wedge \gamma^\beta I F_{\alpha \beta} } } \\
&= -\inv{2}
\gpgradezero{
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{ \gamma^\alpha \wedge \gamma^\beta I } }
}
\partial_\nu F_{\alpha \beta}.
\end{aligned}
\end{equation}

This scalar grade selection is a complete antisymmetrization of the indexes

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:180}
\begin{aligned}
\gpgradezero{
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{ \gamma^\alpha \wedge \gamma^\beta I } }
}
&=
\gpgradezero{
\gamma^\mu \cdot \lr{ \gamma^\nu \cdot \lr{
\gamma^\alpha \gamma^\beta
\gamma_0 \gamma_1 \gamma_2 \gamma_3
} }
} \\
&=
\gpgradezero{
\gamma_0 \gamma_1 \gamma_2 \gamma_3
\gamma^\mu \gamma^\nu \gamma^\alpha \gamma^\beta
} \\
&=
\delta^{\mu \nu \alpha \beta}_{3 2 1 0} \\
&=
\epsilon^{\mu \nu \alpha \beta },
\end{aligned}
\end{equation}

so the magnetic source portion of Maxwell’s equation, in tensor form, is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:200}
\boxed{
\inv{2} \epsilon^{\nu \alpha \beta \mu}
\partial_\nu F_{\alpha \beta}
=
M^\mu.
}
\end{equation}

Relating the tensor to the fields

The electromagnetic field has been identified with the electric and magnetic fields by

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:220}
F = \boldsymbol{\mathcal{E}} + c \mu_0 \boldsymbol{\mathcal{H}} I ,
\end{equation}

or in coordinates

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:240}
\inv{2} \gamma_\mu \wedge \gamma_\nu F^{\mu \nu}
= E^a \gamma_a \gamma_0 + c \mu_0 H^a \gamma_a \gamma_0 I.
\end{equation}

By forming the dot product sequence \( F^{\alpha \beta} = \gamma^\beta \cdot \lr{ \gamma^\alpha \cdot F } \), the electric and magnetic field components can be related to the tensor components. The electric field components follow by inspection and are

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:260}
E^b = \gamma^0 \cdot \lr{ \gamma^b \cdot F } = F^{b 0}.
\end{equation}

The magnetic field relation to the tensor components follow from

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:280}
\begin{aligned}
F^{r s}
&= F_{r s} \\
&= \gamma_s \cdot \lr{ \gamma_r \cdot \lr{ c \mu_0 H^a \gamma_a \gamma_0 I
} } \\
&=
c \mu_0 H^a \gpgradezero{ \gamma_s \gamma_r \gamma_a \gamma_0 I } \\
&=
c \mu_0 H^a \gpgradezero{ -\gamma^0 \gamma^1 \gamma^2 \gamma^3
\gamma_s \gamma_r \gamma_a \gamma_0 } \\
&=
c \mu_0 H^a \gpgradezero{ -\gamma^1 \gamma^2 \gamma^3
\gamma_s \gamma_r \gamma_a } \\
&=
- c \mu_0 H^a \delta^{[3 2 1]}_{s r a} \\
&=
c \mu_0 H^a \epsilon_{ s r a }.
\end{aligned}
\end{equation}

Expanding this for each pair of spacelike coordinates gives

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:320}
F^{1 2} = c \mu_0 H^3 \epsilon_{ 2 1 3 } = - c \mu_0 H^3
\end{equation}
\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:340}
F^{2 3} = c \mu_0 H^1 \epsilon_{ 3 2 1 } = - c \mu_0 H^1
\end{equation}
\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:360}
F^{3 1} = c \mu_0 H^2 \epsilon_{ 1 3 2 } = - c \mu_0 H^2,
\end{equation}

or

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:380}
\boxed{
\begin{aligned}
E^1 &= F^{1 0} \\
E^2 &= F^{2 0} \\
E^3 &= F^{3 0} \\
H^1 &= -\inv{c \mu_0} F^{2 3} \\
H^2 &= -\inv{c \mu_0} F^{3 1} \\
H^3 &= -\inv{c \mu_0} F^{1 2}.
\end{aligned}
}
\end{equation}
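
These identifications can be checked mechanically. The sketch below (a verification added here, not from the original post; the bitmask representation of the spacetime algebra and the sample field values are arbitrary choices) builds \( F = E^a \gamma_a \gamma_0 + c \mu_0 H^a \gamma_a \gamma_0 I \), extracts \( F^{\alpha \beta} = \gamma^\beta \cdot \lr{ \gamma^\alpha \cdot F } \) as the scalar part of \( \gamma^\beta \gamma^\alpha F \), and recovers the tabulated field components.

import numpy as np

# Spacetime algebra product for multivectors stored as 16 coefficients indexed by
# a bitmask over (gamma_0, gamma_1, gamma_2, gamma_3), metric (+, -, -, -).
METRIC = (1.0, -1.0, -1.0, -1.0)

def blade_product(a, b):
    aa, swaps = a >> 1, 0
    while aa:
        swaps += bin(aa & b).count('1')
        aa >>= 1
    sign = -1.0 if swaps & 1 else 1.0
    for i in range(4):
        if a & b & (1 << i):
            sign *= METRIC[i]
    return sign, a ^ b

def gp(x, y):
    z = np.zeros(16)
    for a in range(16):
        for b in range(16):
            if x[a] and y[b]:
                s, blade = blade_product(a, b)
                z[blade] += s * x[a] * y[b]
    return z

def basis(*indices):
    m = np.zeros(16)
    m[sum(1 << i for i in indices)] = 1.0
    return m

gamma = [basis(i) for i in range(4)]
gamma_up = [gamma[0], -gamma[1], -gamma[2], -gamma[3]]   # raised index basis
I = basis(0, 1, 2, 3)

# arbitrary sample values for the check
c, mu0 = 3.0, 0.5
E = np.array([1.0, -2.0, 0.7])
H = np.array([0.3, 1.1, -0.4])

# F = E^a gamma_a gamma_0 + c mu0 H^a gamma_a gamma_0 I
F = np.zeros(16)
for a in range(3):
    F += E[a] * gp(gamma[a + 1], gamma[0])
    F += c * mu0 * H[a] * gp(gp(gamma[a + 1], gamma[0]), I)

def F_tensor(alpha, beta):
    # F^{alpha beta} = gamma^beta . (gamma^alpha . F) = <gamma^beta gamma^alpha F>_0
    return gp(gamma_up[beta], gp(gamma_up[alpha], F))[0]

print([F_tensor(b, 0) for b in (1, 2, 3)], "should equal E:", E)
print([-F_tensor(2, 3) / (c * mu0),
       -F_tensor(3, 1) / (c * mu0),
       -F_tensor(1, 2) / (c * mu0)], "should equal H:", H)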

Recover the vector equations from the tensor equations

Starting with the non-dual Maxwell tensor equation, expanding the timelike index gives

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:480}
\begin{aligned}
\inv{c \epsilon_0} J^0
&= \inv{\epsilon_0} \rho \\
&=
\partial_\nu F^{\nu 0} \\
&=
\partial_1 F^{1 0}
+\partial_2 F^{2 0}
+\partial_3 F^{3 0}
\end{aligned}
\end{equation}

This is Gauss’s law

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:500}
\boxed{
\spacegrad \cdot \boldsymbol{\mathcal{E}}
=
\rho/\epsilon_0.
}
\end{equation}

For a spacelike index, any one is representive. Expanding index 1 gives

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:520}
\begin{aligned}
\inv{c \epsilon_0} J^1
&= \partial_\nu F^{\nu 1} \\
&= \inv{c} \partial_t F^{0 1}
+ \partial_2 F^{2 1}
+ \partial_3 F^{3 1} \\
&= -\inv{c} \partial_t E^1
+ \partial_2 (c \mu_0 H^3)
+ \partial_3 (-c \mu_0 H^2) \\
&=
\lr{ -\inv{c} \PD{t}{\boldsymbol{\mathcal{E}}} + c \mu_0 \spacegrad \cross \boldsymbol{\mathcal{H}} } \cdot \Be_1.
\end{aligned}
\end{equation}

Extending this to the other indexes and multiplying through by \( \epsilon_0 c \) recovers the Ampere-Maxwell equation (assuming linear media)

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:540}
\boxed{
\spacegrad \cross \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{J}} + \PD{t}{\boldsymbol{\mathcal{D}}}.
}
\end{equation}

The expansion of the 0th free (timelike) index of the dual Maxwell tensor equation is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:400}
\begin{aligned}
M^0
&=
\inv{2} \epsilon^{\nu \alpha \beta 0}
\partial_\nu F_{\alpha \beta} \\
&=
-\inv{2} \epsilon^{0 \nu \alpha \beta}
\partial_\nu F_{\alpha \beta} \\
&=
-\inv{2}
\lr{
\partial_1 (F_{2 3} - F_{3 2})
+\partial_2 (F_{3 1} - F_{1 3})
+\partial_3 (F_{1 2} - F_{2 1})
} \\
&=
-
\lr{
\partial_1 F_{2 3}
+\partial_2 F_{3 1}
+\partial_3 F_{1 2}
} \\
&=
-
\lr{
\partial_1 (- c \mu_0 H^1 ) +
\partial_2 (- c \mu_0 H^2 ) +
\partial_3 (- c \mu_0 H^3 )
},
\end{aligned}
\end{equation}

but \( M^0 = c \rho_m \), giving us Gauss’s law for magnetism (with magnetic charge density included)

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:420}
\boxed{
\spacegrad \cdot \boldsymbol{\mathcal{H}} = \rho_m/\mu_0.
}
\end{equation}

For the spacelike indexes of the dual Maxwell equation, only one need be computed (say 1), and cyclic permutation will provide the rest. That is

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:440}
\begin{aligned}
M^1
&= \inv{2} \epsilon^{\nu \alpha \beta 1} \partial_\nu F_{\alpha \beta} \\
&=
\inv{2} \lr{ \partial_2 \lr{F_{3 0} - F_{0 3}} }
+\inv{2} \lr{ \partial_3 \lr{F_{0 2} - F_{2 0}} }
+\inv{2} \lr{ \partial_0 \lr{F_{2 3} - F_{3 2}} } \\
&=
- \partial_2 F^{3 0}
+ \partial_3 F^{2 0}
+ \partial_0 F_{2 3} \\
&=
-\partial_2 E^3 + \partial_3 E^2 + \inv{c} \PD{t}{} \lr{ - c \mu_0 H^1 } \\
&= – \lr{ \spacegrad \cross \boldsymbol{\mathcal{E}} + \mu_0 \PD{t}{\boldsymbol{\mathcal{H}}} } \cdot \Be_1.
\end{aligned}
\end{equation}

Extending this to the rest of the coordinates gives the Maxwell-Faraday equation (as extended to include magnetic current density sources)

\begin{equation}\label{eqn:gaMagneticSourcesToTensorToVector:460}
\boxed{
\spacegrad \cross \boldsymbol{\mathcal{E}} = -\boldsymbol{\mathcal{M}} - \mu_0 \PD{t}{\boldsymbol{\mathcal{H}}}.
}
\end{equation}

This takes things full circle, going from the vector differential Maxwell’s equations, to the Geometric Algebra form of Maxwell’s equation, to Maxwell’s equations in tensor form, and back to the vector form. Not only is the tensor form of Maxwell’s equations with magnetic sources now known, the translation from the tensor and vector formalism has also been verified, and miraculously no signs or factors of 2 were lost or gained in the process.

Notes for ece1229 antenna theory

February 4, 2015 ece1229 No comments

I’ve now posted a first set of notes for the antenna theory course that I am taking this term at UofT.

Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides that match the textbook so closely that there is little value in taking notes that just replicate the text. Instead, I am annotating my copy of the textbook with little details. My usual notes collection for the class will contain musings on details that were unclear, or in some cases, details that were provided in class, but are not in the text (and too long to pencil into my book.)

The notes linked above include:

  • Reading notes for chapter 2 (Fundamental Parameters of Antennas) and chapter 3 (Radiation Integrals and Auxiliary Potential Functions) of the class text.
  • Geometric Algebra musings.  How to formulate Maxwell's equations when magnetic sources are also included (those modeling magnetic dipoles).
  • Some problems for chapter 2 content.

Phasor form of (extended) Maxwell’s equations in Geometric Algebra

February 3, 2015 ece1229 1 comment


Separate examinations of the phasor form of Maxwell’s equation (with electric charges and current densities), and the Dual Maxwell’s equation (i.e. allowing magnetic charges and currents) were just performed. Here the structure of these equations with both electric and magnetic charges and currents will be examined.

The vector curl and divergence form of Maxwell’s equations are

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:20}
\spacegrad \cross \boldsymbol{\mathcal{E}} = -\PD{t}{\boldsymbol{\mathcal{B}}} -\BM
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:40}
\spacegrad \cross \boldsymbol{\mathcal{H}} = \boldsymbol{\mathcal{J}} + \PD{t}{\boldsymbol{\mathcal{D}}}
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:60}
\spacegrad \cdot \boldsymbol{\mathcal{D}} = \rho
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:80}
\spacegrad \cdot \boldsymbol{\mathcal{B}} = \rho_m.
\end{equation}

In phasor form these are

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:100}
\spacegrad \cross \BE = - j k c \BB - \BM
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:120}
\spacegrad \cross \BH = \BJ + j k c \BD
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:140}
\spacegrad \cdot \BD = \rho
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:160}
\spacegrad \cdot \BB = \rho_m.
\end{equation}

Switching to \( \BE = \BD/\epsilon_0, \BB = \mu_0 \BH\) fields (even though these aren’t the primary fields in engineering), gives

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:180}
\spacegrad \cross \BE = - j k (c \BB) - \BM
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:200}
\spacegrad \cross (c \BB) = \frac{\BJ}{\epsilon_0 c} + j k \BE
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:220}
\spacegrad \cdot \BE = \rho/\epsilon_0
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:240}
\spacegrad \cdot (c \BB) = c \rho_m.
\end{equation}

Finally, using

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:260}
\Bf \Bg = \Bf \cdot \Bg + I \Bf \cross \Bg,
\end{equation}

the divergence and curl contributions of each of the fields can be grouped

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:300}
\spacegrad \BE = \rho/\epsilon_0 - \lr{ j k (c \BB) +\BM} I
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:320}
\spacegrad (c \BB I) = c \rho_m I - \lr{ \frac{\BJ}{\epsilon_0 c} + j k \BE },
\end{equation}

or

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:340}
\spacegrad \lr{ \BE + c \BB I }
=
\rho/\epsilon_0 - \lr{ j k (c \BB) +\BM} I
+
c \rho_m I - \lr{ \frac{\BJ}{\epsilon_0 c} + j k \BE }.
\end{equation}

Regrouping gives Maxwell’s equations including both electric and magnetic sources
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:360}
\boxed{
\lr{ \spacegrad + j k } \lr{ \BE + c \BB I }
=
\inv{\epsilon_0 c} \lr{ c \rho - \BJ }
+ \lr{ c \rho_m - \BM } I.
}
\end{equation}
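
The regrouping above hinges entirely on the \( \Bf \Bg = \Bf \cdot \Bg + I \Bf \cross \Bg \) identity, which is easy to spot-check numerically (a small sketch added here, with an ad-hoc multivector product that is not part of the post):

import numpy as np

# R^3 multivectors as 8 coefficients indexed by a bitmask over (e1, e2, e3).
def blade_sign(a, b):
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count('1')
        a >>= 1
    return -1.0 if swaps & 1 else 1.0

def gp(x, y):
    z = np.zeros(8)
    for a in range(8):
        for b in range(8):
            if x[a] and y[b]:
                z[a ^ b] += blade_sign(a, b) * x[a] * y[b]
    return z

def vector(v):
    m = np.zeros(8)
    m[1], m[2], m[4] = v
    return m

I = np.zeros(8); I[7] = 1.0   # pseudoscalar e1 e2 e3

rng = np.random.default_rng(1)
f3, g3 = rng.normal(size=3), rng.normal(size=3)

lhs = gp(vector(f3), vector(g3))              # geometric product f g
rhs = np.zeros(8)
rhs[0] = np.dot(f3, g3)                       # f . g
rhs += gp(I, vector(np.cross(f3, g3)))        # I (f x g)

print(np.max(np.abs(lhs - rhs)))              # ~ 1e-16: the identity holds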

It was observed that these can be put into a tidy four vector form by premultiplying by \( \gamma_0 \), where

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:400}
J = \gamma_\mu J^\mu = \lr{ c \rho, \BJ }
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:420}
M = \gamma_\mu M^\mu = \lr{ c \rho_m, \BM }
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:440}
\grad = \gamma_0 \lr{ \spacegrad + j k } = \gamma^k \partial_k + j k \gamma_0,
\end{equation}

That gives

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:460}
\boxed{
\grad \lr{ \BE + c \BB I } = \frac{J}{\epsilon_0 c} + M I.
}
\end{equation}

When there were only electric sources, it was observed that potential solutions were of the form \( \BE + c \BB I \propto \grad \wedge A \), whereas when there were only magnetic sources it was observed that potential solutions were of the form \( \BE + c \BB I \propto (\grad \wedge F) I \). It seems reasonable to attempt a trial solution that contains both such contributions, say

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:480}
\BE + c \BB I = \grad \wedge A_{\textrm{e}} + \grad \wedge A_{\textrm{m}} I.
\end{equation}

Without any loss of generality Lorentz gauge conditions can be imposed on the four-vector fields \( A_{\textrm{e}}, A_{\textrm{m}} \). Those conditions are

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:500}
\grad \cdot A_{\textrm{e}} = \grad \cdot A_{\textrm{m}} = 0.
\end{equation}

Since \( \grad X = \grad \cdot X + \grad \wedge X \), for any four vector \( X \), the trial solution \ref{eqn:phasorMaxwellsWithElectricAndMagneticCharges:480} is reduced to

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:520}
\BE + c \BB I = \grad A_{\textrm{e}} + \grad A_{\textrm{m}} I.
\end{equation}

Maxwell’s equation is now

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:540}
\begin{aligned}
\frac{J}{\epsilon_0 c} + M I
&=
\grad^2 \lr{ A_{\textrm{e}} + A_{\textrm{m}} I } \\
&=
\gamma_0 \lr{ \spacegrad + j k }
\gamma_0 \lr{ \spacegrad + j k }
\lr{ A_{\textrm{e}} + A_{\textrm{m}} I } \\
&=
\lr{ -\spacegrad + j k }
\lr{ \spacegrad + j k }
\lr{ A_{\textrm{e}} + A_{\textrm{m}} I } \\
&=
-\lr{ \spacegrad^2 + k^2 }
\lr{ A_{\textrm{e}} + A_{\textrm{m}} I }.
\end{aligned}
\end{equation}

Notice how tidily this separates into vector and trivector components. Those are

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:580}
-\lr{ \spacegrad^2 + k^2 } A_{\textrm{e}} = \frac{J}{\epsilon_0 c}
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:600}
-\lr{ \spacegrad^2 + k^2 } A_{\textrm{m}} = M.
\end{equation}

The result is a single Helmholtz equation for each of the electric and magnetic four-potentials, and both can be solved completely independently. This was claimed in class, but now the underlying reason is clear.

Because a single frequency phasor relationship was implied, the scalar component of each of these four potentials is determined by the Lorentz gauge condition. For example

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:620}
\begin{aligned}
0
&=
\spacegrad \cdot \lr{ A_{\textrm{e}} e^{j k c t} } \\
&=
\lr{ \gamma^0 \inv{c} \PD{t}{} + \gamma^k \PD{x^k}{} } \cdot
\lr{
\gamma_0 A_{\textrm{e}}^0 e^{j k c t}
+ \gamma_m A_{\textrm{e}}^m e^{j k c t}
} \\
&=
\lr{ \gamma^0 j k + \gamma^r \PD{x^r}{} } \cdot
\lr{
\gamma_0 A_{\textrm{e}}^0
+ \gamma_s A_{\textrm{e}}^s
}
e^{j k c t} \\
&=
\lr{
j k
A_{\textrm{e}}^0
+
\spacegrad \cdot
\BA_{\textrm{e}}
}
e^{j k c t},
\end{aligned}
\end{equation}

so

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:640}
A_{\textrm{e}}^0
=\frac{ j} { k }
\spacegrad \cdot
\BA_{\textrm{e}}.
\end{equation}

The same sort of relationship will apply to the magnetic potential too. This means that the Helmholtz equations can be solved in the three vector space as

\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:680}
\lr{ \spacegrad^2 + k^2 } \BA_{\textrm{e}} = -\frac{\BJ}{\epsilon_0 c}
\end{equation}
\begin{equation}\label{eqn:phasorMaxwellsWithElectricAndMagneticCharges:700}
\lr{ \spacegrad^2 + k^2 } \BA_{\textrm{m}} = -\BM.
\end{equation}