## Using ltrace to dig into shared libraries

I was trying to find where the clang compiler writes out constant global data values, but didn’t manage to find it by code inspection. Running ltrace (also tracing system calls) shows the point where the ELF object is written out:

std::string::compare(std::string const&) const(0x7ffc8983a190, 0x1e32e60, 7, 254) = 5
std::string::compare(std::string const&) const(0x1e32e60, 0x7ffc8983a190, 7, 254) = 0xfffffffb
std::string::compare(std::string const&) const(0x7ffc8983a190, 0x1e32e60, 7, 254) = 5
write@SYS(4, "\177ELF\002\001\001", 848)         = 848
lseek@SYS(4, 40, 0)                              = 40
write@SYS(4, "\220\001", 8)                      = 8
lseek@SYS(4, 848, 0)                             = 848
lseek@SYS(4, 60, 0)                              = 60
write@SYS(4, "\a", 2)                            = 2
lseek@SYS(4, 848, 0)                             = 848
std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()(0x1e2a2e0, 0x1e2a2e8, 0x1e27978, 0x1e27978) = 0
close@SYS(4)                                     = 0


This is from running:

ltrace -S --demangle \
...


The -S option displays syscalls as well as library calls. To my surprise, this shows calls into libstdc++, but not much from clang itself, just:

clang::DiagnosticsEngine::DiagnosticsEngine
clang::driver::ToolChain::getTargetAndModeFromProgramName
llvm::cl::ExpandResponseFiles
llvm::EnablePrettyStackTrace
llvm::errs
llvm::install_fatal_error_handler
llvm::llvm_shutdown
llvm::PrettyStackTraceEntry::PrettyStackTraceEntry
llvm::PrettyStackTraceEntry::~PrettyStackTraceEntry
llvm::raw_ostream::preferred_buffer_size
llvm::raw_svector_ostream::write_impl
llvm::remove_fatal_error_handler
llvm::StringMapImpl::LookupBucketFor
llvm::StringMapImpl::RehashTable
llvm::sys::PrintStackTraceOnErrorSignal
llvm::sys::Process::FixupStandardFileDescriptors
llvm::sys::Process::GetArgumentVector
llvm::TimerGroup::printAll


There’s got to be a heck of a lot more that the compiler is doing! It turns out that ltrace doesn’t trace all of the function calls that lie in shared libraries (I’m using a shared library + split dwarf build of clang). The default output was a bit deceptive, since I saw some shared library calls, in particular std::… calls (from libstdc++.so), in the ltrace output. My conclusion is that the tool is lying by default.

This can be confirmed by explicitly asking to see the functions from a specific shared lib. For example, if I call ltrace as:

ltrace -S --demangle -e @libLLVMX86CodeGen.so \
/clang/be.b226a0a/bin/clang-3.9 \
-cc1 \
-triple \
x86_64-unknown-linux-gnu \
...


Now I get ~68K calls to libLLVMX86CodeGen.so functions that didn’t show up in the default ltrace output! The ltrace tool won’t show these by default (although the man page seems to suggest that it should), but if I narrow things down to a single shared library, at least I can examine the function calls in that library.
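To cover every shared library the binary pulls in, rather than naming one at a time, the filter list could be generated. This is only a sketch (the clang path is the same illustrative one as above, and I haven’t verified that stacking many -e options this way behaves identically across ltrace versions):

```shell
#!/bin/sh
# Sketch: build an ltrace -e filter naming every shared library the binary
# loads. The clang path is illustrative; substitute your own build.
CLANG=/clang/be.b226a0a/bin/clang-3.9

FILTER=""
# ldd lists the shared objects; keep just the library names.
for lib in $(ldd "$CLANG" | awk '/=>/ {print $1}'); do
    FILTER="$FILTER -e @$lib"
done

# shellcheck disable=SC2086  # word splitting of $FILTER is intentional
ltrace -S --demangle $FILTER "$CLANG" --version
```

Expect the output to be enormous; redirecting to a file and post-processing is probably mandatory at that volume.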

## Magnetostatic force and torque

In Jackson [1], the following equations for the vector potential, magnetostatic force and torque are derived

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:20}
\Bm = \inv{2} \int \Bx’ \cross \BJ(\Bx’) d^3 x’

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:40}
\BF = \spacegrad( \Bm \cdot \BB ),

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:60}
\BN = \Bm \cross \BB,

where $$\BB$$ is an applied external magnetic field and $$\Bm$$ is the magnetic dipole for the current in question. These results (and a similar one derived earlier for the vector potential $$\BA$$) all follow from
an analysis of localized current densities $$\BJ$$, evaluated far enough away from the current sources.

The starting point for the force is one that had me puzzled a bit. Namely

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:80}
\BF = \int \BJ(\Bx) \cross \BB(\Bx) d^3 x

This is clearly the continuum generalization of the point particle Lorentz force equation, which for $$\BE = 0$$ is:

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:100}
\BF = q \Bv \cross \BB

For the point particle, this is the force on the particle when it is in the external field $$\BB$$, i.e. the force at the position of the particle. My question is: what does it mean to sum all the forces on the charge distribution over all space?
How can a force be applied over all space, as opposed to at a single point, or against a surface?

In the special case of a localized current density, this makes some sense. Consider the other half of the force equation, $$\BF = \ddt{}\int \rho_m \Bv dV$$, where $$\rho_m$$ is the mass density of the charged particles making up the continuous current distribution. This half of the $$\BF = m\Ba$$ equation is also an average phenomenon, so we have an average of sorts in both the field contribution and the mass contribution to the force equation. There is probably a centre-of-mass and centre-of-current-density interpretation that would make a bit more sense of this continuum force description.

It’s kind of funny how you can work through all the detailed mathematical steps in a book like Jackson, but then go right back to the beginning and say “Hey, what does that even mean”?

### Force

Moving on from the pondering of the meaning of the equation being manipulated, let’s do the easy part, the derivation of the results that Jackson comes up with.

Writing out \ref{eqn:magnetostaticsJacksonNotesForceAndTorque:80} in coordinates

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:320}
\BF = \epsilon_{ijk} \Be_i \int J_j B_k d^3 x.

To first order, a slowly varying (external) magnetic field can be expanded around a point of interest

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:120}
\BB(\Bx) = \BB(\Bx_0) + \lr{ \Bx - \Bx_0 } \cdot \spacegrad \BB,

where the directional derivative is evaluated at the point $$\Bx_0$$ after the gradient operation. Setting the origin at this point $$\Bx_0$$ gives

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:340}
\begin{aligned}
\BF
&= \epsilon_{ijk} \Be_i
\lr{
\int J_j(\Bx’) B_k(0) d^3 x’
+
\int J_j(\Bx’) (\Bx’ \cdot \spacegrad) B_k(0) d^3 x’
} \\
&=
\epsilon_{ijk} \Be_i
B_k(0) \int J_j(\Bx’) d^3 x’
+
\epsilon_{ijk} \Be_i
\int J_j(\Bx’) (\Bx’ \cdot \spacegrad) B_k(0) d^3 x’.
\end{aligned}

We found earlier that the first integral can be written as a divergence

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:140}
\int J_j(\Bx’) d^3 x’
=
\int \spacegrad’ \cdot \lr{ \BJ(\Bx’) x_j’ } dV’,

which is zero when the integration surface is outside of the current localization region. We also found that

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:160}
\int (\Bx \cdot \Bx’) \BJ
= -\inv{2} \Bx \cross \int \Bx’ \cross \BJ = \Bm \cross \Bx.

so
\label{eqn:magnetostaticsJacksonNotesForceAndTorque:180}
\begin{aligned}
\int (\spacegrad B_k(0) \cdot \Bx’) J_j
&= -\inv{2} \lr{ \spacegrad B_k(0) \cross \int \Bx’ \cross \BJ}_j \\
&= \lr{ \Bm \cross (\spacegrad B_k(0)) }_j.
\end{aligned}

This gives

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:200}
\begin{aligned}
\BF
&= \epsilon_{ijk} \Be_i \lr{ \Bm \cross (\spacegrad B_k(0)) }_j \\
&= \epsilon_{ijk} \Be_i \lr{ \Bm \cross \spacegrad }_j B_k(0) \\
&= (\Bm \cross \spacegrad) \cross \BB(0) \\
&= -\BB(0) \cross (\Bm \cross \lspacegrad) \\
\end{aligned}

The second term is killed by the magnetic Gauss’s law, leaving to first order

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:220}
\BF = \spacegrad \lr{\Bm \cdot \BB}.
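As a numeric sanity check of this result (my own experiment, not Jackson’s), we can discretize a small current loop sitting in a divergence-free field and compare the direct Lorentz force integral against the gradient expression. The field choice $$\BB \propto (x, y, -2z)$$ is an arbitrary one of mine satisfying $$\spacegrad \cdot \BB = 0$$:

```python
import numpy as np

# Check (my own, not from Jackson): for a small loop in a slowly varying
# divergence-free field, the exact Lorentz force F = I * sum dl x B(x)
# should approach F = grad(m . B) to first order.
def B(p):
    # arbitrary field with div B = 0: B = (x, y, -2z)
    x, y, z = p
    return np.array([x, y, -2.0 * z])

I, eps = 3.0, 1e-3                       # current, loop radius
x0 = np.array([0.3, -0.2, 0.5])          # loop centre

theta = np.linspace(0.0, 2.0 * np.pi, 4001)
loop = x0 + eps * np.stack(
    [np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
dl = np.diff(loop, axis=0)               # segment vectors
mid = 0.5 * (loop[:-1] + loop[1:])       # segment midpoints

F_direct = I * np.sum([np.cross(d, B(p)) for d, p in zip(dl, mid)], axis=0)

# m = I pi eps^2 zhat, so grad(m . B) = m_z grad(B_z) = m_z (0, 0, -2)
m_z = I * np.pi * eps**2
F_dipole = np.array([0.0, 0.0, -2.0 * m_z])
```

The two agree to the discretization error, as the first-order derivation predicts for a loop small compared to the field’s variation.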

### Torque

For the torque we have a similar quandary at the starting point. About what point is a continuum torque integral of the following form

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:240}
\BN = \int \Bx’ \cross (\BJ(\Bx’) \cross \BB(\Bx’)) d^3 x’?

Ignoring that detail again, assuming the answer has something to do with the centre of mass and parallel axis theorem, we can proceed with a constant approximation of the magnetic field

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:260}
\begin{aligned}
\BN
&= \int \Bx’ \cross (\BJ(\Bx’) \cross \BB(0)) d^3 x’ \\
&=
-\int (\Bx’ \cdot \BJ(\Bx’)) \BB(0) d^3 x’
+\int (\Bx’ \cdot \BB(0)) \BJ(\Bx’) d^3 x’ \\
&=
-\BB(0) \int (\Bx’ \cdot \BJ(\Bx’)) d^3 x’
+\int (\Bx’ \cdot \BB(0)) \BJ(\Bx’) d^3 x’.
\end{aligned}

Jackson’s trick for killing the first integral is to transform it into a divergence by evaluating

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:280}
\begin{aligned}
\spacegrad \cdot \lr{ \BJ \Abs{\Bx}^2 }
&=
(\spacegrad \cdot \BJ) \Abs{\Bx}^2
+
\BJ \cdot \spacegrad \Abs{\Bx}^2 \\
&=
\BJ \cdot \Be_i \partial_i x_m x_m \\
&=
2 \BJ \cdot \Be_i \delta_{im} x_m \\
&=
2 \BJ \cdot \Bx,
\end{aligned}

so

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:300}
\begin{aligned}
\BN
&=
-\inv{2} \BB(0) \int \spacegrad’ \cdot \lr{ \BJ(\Bx’) \Abs{\Bx’}^2 } d^3 x’
+\int (\Bx’ \cdot \BB(0)) \BJ(\Bx’) d^3 x’ \\
&=
-\inv{2} \BB(0) \oint \Bn \cdot \lr{ \BJ(\Bx’) \Abs{\Bx’}^2 } dA’
+\int (\Bx’ \cdot \BB(0)) \BJ(\Bx’) d^3 x’.
\end{aligned}

Again, the localized current density assumption kills the surface integral. The second integral can be evaluated with \ref{eqn:magnetostaticsJacksonNotesForceAndTorque:160}, so to first order we have

\label{eqn:magnetostaticsJacksonNotesForceAndTorque:360}
\BN
=
\Bm \cross \BB.
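This too can be spot-checked numerically (again a sketch of my own): for a circular loop in a uniform field, the torque integral summed over loop segments should land on $$\Bm \cross \BB$$:

```python
import numpy as np

# Check (mine, not from the text): for a circular loop of radius a carrying
# current I in the x-y plane, the torque integral
#   N = I * sum x' x (dl' x B)
# in a uniform field B should reduce to N = m x B with m = I pi a^2 zhat.
a, I = 0.7, 2.5
B = np.array([0.3, -1.1, 0.8])           # arbitrary uniform field

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
x = a * np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
dl = np.diff(x, axis=0)                  # segment vectors along the loop
xm = 0.5 * (x[:-1] + x[1:])              # segment midpoints

N_direct = I * np.sum(np.cross(xm, np.cross(dl, B)), axis=0)

m = np.array([0.0, 0.0, I * np.pi * a**2])
N_dipole = np.cross(m, B)
```

Note that for the circle the $$\Bx’ \cdot d\Bl’$$ contribution vanishes identically, which is why the agreement is limited only by the polygonal approximation of the loop.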

# References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

## Interesting tidbits in a Hillary Goldman Sachs wikileaks transcript

Here are some notes from a read of the first of the wikileaks transcripts of the Hillary Goldman Sachs talks. There are three transcripts in total.

The main takeaway is that the State department’s role is certainly not about diplomacy. There are lots of mentions of stirring up crap as part of the routine game. Chaos is a desired end goal, so long as it’s controlled or directed.

page 7: North Korea:

We don’t want the North Koreans to
cause more trouble than the system can absorb. So
we’ve got a pretty good thing going with the

What an interesting statement.  The corollary seems to be that they do want North Korea to be stirring up trouble.  It serves to distract and limit China for example, a point made in other parts of the speech.

page 13: Syria:

So the problem for the US and the
Europeans has been from the very beginning: What
is it you — who is it you are going to try to arm?
And you probably read in the papers my view was we
should try to find some of the groups that were
there that we thought we could build relationships
with and develop some covert connections that might
then at least give us some insight into what is
going on inside Syria.

It is well known now that the US has been arming the “Free Syrian Army”, funnelling weapons in through Turkey via the Saudis.  Here Hillary is discussing exactly this process.  She actually expresses regret that the US isn’t as good at this discreet covert warmongering as it used to be.

page 14: Libya:

In Libya we didn’t have that problem.
It’s a huge place. The air defenses were not that
sophisticated and there wasn’t very — in fact,
there were very few civilian casualties.

A psychopath in action.  I’ve heard Hillary’s carpet bombing of Libya described as one of the most brutal and destructive campaigns in recent history, yet she characterizes it as having “very few casualties”.  I don’t actually know the numbers, but it’s certainly interesting to see how casual she is with respect to the death of civilians.

page 15: on Iran? (or perhaps Syria):

Well, you up the pain
that they have to endure by not in any way
occupying or invading them but by bombing their
facilities. I mean, that is the option. It is not
as, we like to say these days, boots on the ground.

Casual talk of bombing other countries is disgusting.  Notice how the word facilities is very vague.  Decoding this a bit: if you are simultaneously talking about “upping the pain” and bombing facilities, this is probably theorizing about bombing targets that have the most terror-inducing and hardship effects on the civilian population (water processing, energy production, schools, hospitals, …).  But that’s okay, so long as it isn’t perceived as “boots on the ground”.

page 36: Russia:

And finally on Afghanistan and Russia.
Look, I would love it if we could continue to build
a more positive relationship with Russia. I worked
very hard on that when I was Secretary, and we made
some progress with Medvedev, who was president in
name but was obviously beholden to Putin, but Putin
kind of let him go and we helped them get into the
WTO for several years, and they were helpful to us
in shipping equipment, even lethal equipment, in
and out of out of Afghanistan.

Russia was a useful ally when they helped with covert wars.  Now that those covert wars are knocking on Russia’s door, the relationship has soured.  It’s hard to imagine why that relationship has deteriorated.

## Vector Area

One of the results of this problem is required for a later one on magnetic moments that I’d like to do.

## Question: Vector Area. ([1] pr. 1.61)

The integral

\label{eqn:vectorAreaGriffiths:20}
\Ba = \int_S d\Ba,

is sometimes called the vector area of the surface $$S$$.

## (a)

Find the vector area of a hemispherical bowl of radius $$R$$.

## (b)

Show that $$\Ba = 0$$ for any closed surface.

## (c)

Show that $$\Ba$$ is the same for all surfaces sharing the same boundary.

## (d)

Show that
\label{eqn:vectorAreaGriffiths:40}
\Ba = \inv{2} \oint \Br \cross d\Bl,

where the integral is around the boundary line.

## (e)

Show that
\label{eqn:vectorAreaGriffiths:60}
\oint \lr{ \Bc \cdot \Br } d\Bl = \Ba \cross \Bc.

## (a)

\label{eqn:vectorAreaGriffiths:80}
\begin{aligned}
\Ba
&=
\int_{0}^{\pi/2} R^2 \sin\theta d\theta \int_0^{2\pi} d\phi
\lr{ \sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta } \\
&=
R^2 \int_{0}^{\pi/2} d\theta \int_0^{2\pi} d\phi
\lr{ \sin^2\theta \cos\phi, \sin^2\theta \sin\phi, \sin\theta\cos\theta } \\
&=
2 \pi R^2 \int_{0}^{\pi/2} d\theta \Be_3
\sin\theta\cos\theta \\
&=
\pi R^2
\Be_3
\int_{0}^{\pi/2} d\theta
\sin(2 \theta) \\
&=
\pi R^2
\Be_3
\evalrange{\lr{\frac{-\cos(2 \theta)}{2}}}{0}{\pi/2} \\
&=
\pi R^2
\Be_3
\lr{ 1 - (-1) }/2 \\
&=
\pi R^2
\Be_3.
\end{aligned}
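A quick numeric double check of this integral (mine, not part of the problem statement):

```python
import numpy as np

# Numeric spot check: integrate dA = R^2 sin(theta) rhat dtheta dphi over
# the hemispherical bowl and compare against pi R^2 e3.
R = 1.3
N = 1000
dt = (np.pi / 2) / N
dp = (2 * np.pi) / N
t = (np.arange(N) + 0.5) * dt            # polar angle midpoints
p = (np.arange(N) + 0.5) * dp            # azimuthal angle midpoints
T, P = np.meshgrid(t, p, indexing="ij")

rhat = np.stack([np.sin(T) * np.cos(P),
                 np.sin(T) * np.sin(P),
                 np.cos(T)], axis=-1)

a = (R**2 * np.sin(T))[..., None] * rhat  # area-weighted normals
a = a.sum(axis=(0, 1)) * dt * dp
```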

## (b)

As hinted in the original problem description, this follows from

\label{eqn:vectorAreaGriffiths:100}
\int dV \spacegrad T = \oint T d\Ba,

simply by setting $$T = 1$$.

## (c)

Suppose that two surfaces sharing a boundary are parameterized by vectors $$\Bx(u, v), \Bx(a,b)$$ respectively. The area integral with the first parameterization is

\label{eqn:vectorAreaGriffiths:120}
\begin{aligned}
\Ba
&= \int \PD{u}{\Bx} \cross \PD{v}{\Bx} du dv \\
&= \epsilon_{ijk} \Be_i \int \PD{u}{x_j} \PD{v}{x_k} du dv \\
&=
\epsilon_{ijk} \Be_i \int
\lr{
\PD{a}{x_j}
\PD{u}{a}
+
\PD{b}{x_j}
\PD{u}{b}
}
\lr{
\PD{a}{x_k}
\PD{v}{a}
+
\PD{b}{x_k}
\PD{v}{b}
}
du dv \\
&=
\epsilon_{ijk} \Be_i \int
du dv
\lr{
\PD{a}{x_j}
\PD{u}{a}
\PD{a}{x_k}
\PD{v}{a}
+
\PD{b}{x_j}
\PD{u}{b}
\PD{b}{x_k}
\PD{v}{b}
+
\PD{b}{x_j}
\PD{u}{b}
\PD{a}{x_k}
\PD{v}{a}
+
\PD{a}{x_j}
\PD{u}{a}
\PD{b}{x_k}
\PD{v}{b}
} \\
&=
\epsilon_{ijk} \Be_i \int
du dv
\lr{
\PD{a}{x_j}
\PD{a}{x_k}
\PD{u}{a}
\PD{v}{a}
+
\PD{b}{x_j}
\PD{b}{x_k}
\PD{u}{b}
\PD{v}{b}
}
+
\epsilon_{ijk} \Be_i \int
du dv
\lr{
\PD{b}{x_j}
\PD{a}{x_k}
\PD{u}{b}
\PD{v}{a}
-
\PD{a}{x_k}
\PD{b}{x_j}
\PD{u}{a}
\PD{v}{b}
}.
\end{aligned}

In the last step a $$j,k$$ index swap was performed for the last term of the second integral. The first integral is zero, since the integrand is symmetric in $$j,k$$. This leaves
\label{eqn:vectorAreaGriffiths:140}
\begin{aligned}
\Ba
&=
\epsilon_{ijk} \Be_i \int
du dv
\lr{
\PD{b}{x_j}
\PD{a}{x_k}
\PD{u}{b}
\PD{v}{a}
-
\PD{a}{x_k}
\PD{b}{x_j}
\PD{u}{a}
\PD{v}{b}
} \\
&=
\epsilon_{ijk} \Be_i \int
\PD{b}{x_j}
\PD{a}{x_k}
\lr{
\PD{u}{b}
\PD{v}{a}
-
\PD{u}{a}
\PD{v}{b}
}
du dv \\
&=
\epsilon_{ijk} \Be_i \int
\PD{b}{x_j}
\PD{a}{x_k}
\frac{\partial(b,a)}{\partial(u,v)} du dv \\
&=
-\int
\PD{b}{\Bx} \cross \PD{a}{\Bx} da db \\
&=
\int
\PD{a}{\Bx} \cross \PD{b}{\Bx} da db.
\end{aligned}

However, this is the area integral with the second parameterization, proving that the area integral for any given boundary is independent of the surface.

## (d)

Having proven that the area-integral for a given boundary is independent of the surface that it is evaluated on, the result follows by illustration as hinted in the full problem description. Draw a “cone”, tracing a vector $$\Bx’$$ from the origin to the position line element, and divide that cone up into infinitesimal slices as sketched in fig. 1.

Fig 1. Cone subtended by loop

The area of each of these triangular slices is

\label{eqn:vectorAreaGriffiths:160}
\inv{2} \Bx’ \cross d\Bl’.

Summing those triangles proves the result.
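For a planar loop this result is easy to verify numerically, since the $$z$$ component of $$\inv{2} \oint \Br \cross d\Bl$$ is exactly the shoelace area formula. A small sketch of my own:

```python
import numpy as np

# For a closed planar polygon, half the sum of r x dl over the boundary
# recovers the enclosed (shoelace) area times the plane normal.
rng = np.random.default_rng(0)
th = np.sort(rng.uniform(0.0, 2.0 * np.pi, 12))   # star-shaped, so simple
rad = rng.uniform(0.5, 1.5, 12)
verts = np.stack([rad * np.cos(th), rad * np.sin(th), np.zeros(12)], axis=1)

dl = np.roll(verts, -1, axis=0) - verts            # edge vectors
a_vec = 0.5 * np.cross(verts, dl).sum(axis=0)

# shoelace area for comparison
x, y = verts[:, 0], verts[:, 1]
shoelace = 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)
```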

## (e)

As hinted in the problem, this follows from

\label{eqn:vectorAreaGriffiths:180}
\int \spacegrad T \cross d\Ba = -\oint T d\Bl.

Set $$T = \Bc \cdot \Br$$, for which

\label{eqn:vectorAreaGriffiths:240}
\begin{aligned}
\spacegrad \lr{ \Bc \cdot \Br }
&= \Be_k \partial_k c_m x_m \\
&= \Be_k c_m \delta_{km} \\
&= \Be_k c_k \\
&= \Bc,
\end{aligned}

so
\label{eqn:vectorAreaGriffiths:200}
\begin{aligned}
\int \spacegrad \lr{ \Bc \cdot \Br } \cross d\Ba
&=
\int \Bc \cross d\Ba \\
&=
\Bc \cross \int d\Ba \\
&=
\Bc \cross \Ba.
\end{aligned}

so
\label{eqn:vectorAreaGriffiths:220}
\Bc \cross \Ba = -\oint (\Bc \cdot \Br) d\Bl,

or
\label{eqn:vectorAreaGriffiths:260}
\oint (\Bc \cdot \Br) d\Bl
=
\Ba \cross \Bc.

# References

[1] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

## Magnetic moment for a localized magnetostatic current

### Motivation.

I was once again reading my Jackson [2]. This time I found that his presentation of magnetic moment didn’t really make sense to me. Here’s my own pass through it, filling in a number of details. As I did last time, I’ll also translate into SI units as I go.

### Vector potential.

The Biot-Savart expression for the magnetic field can be factored into a curl expression using the usual tricks

\label{eqn:magneticMomentJackson:20}
\begin{aligned}
\BB
&= \frac{\mu_0}{4\pi} \int \frac{\BJ(\Bx’) \cross (\Bx – \Bx’)}{\Abs{\Bx – \Bx’}^3} d^3 x’ \\
&= -\frac{\mu_0}{4\pi} \int \BJ(\Bx’) \cross \spacegrad \inv{\Abs{\Bx – \Bx’}} d^3 x’ \\
&= \frac{\mu_0}{4\pi} \spacegrad \cross \int \frac{\BJ(\Bx’)}{\Abs{\Bx – \Bx’}} d^3 x’,
\end{aligned}

so the vector potential, which defines the magnetic field through its curl, $$\BB = \spacegrad \cross \BA$$, is given by

\label{eqn:magneticMomentJackson:40}
\BA(\Bx) = \frac{\mu_0}{4 \pi} \int \frac{\BJ(\Bx’)}{\Abs{\Bx - \Bx’}} d^3 x’.

If the current source is localized (zero outside of some finite region), then there will always be a region for which $$\Abs{\Bx} \gg \Abs{\Bx’}$$, so the denominator yields to Taylor expansion

\label{eqn:magneticMomentJackson:60}
\begin{aligned}
\inv{\Abs{\Bx – \Bx’}}
&=
\inv{\Abs{\Bx}} \lr{1 + \frac{\Abs{\Bx’}^2}{\Abs{\Bx}^2} – 2 \frac{\Bx \cdot \Bx’}{\Abs{\Bx}^2} }^{-1/2} \\
&\approx
\inv{\Abs{\Bx}} \lr{ 1 + \frac{\Bx \cdot \Bx’}{\Abs{\Bx}^2} } \\
&=
\inv{\Abs{\Bx}} + \frac{\Bx \cdot \Bx’}{\Abs{\Bx}^3}.
\end{aligned}
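A quick numeric probe (my own) of this two-term expansion:

```python
import numpy as np

# With |x| >> |x'|, the two-term expansion of 1/|x - x'| should be accurate
# to second order in the ratio |x'|/|x|.
x = np.array([10.0, -4.0, 7.0])          # far-field observation point
xp = np.array([0.02, 0.01, -0.03])       # source point near the origin

exact = 1.0 / np.linalg.norm(x - xp)
nx = np.linalg.norm(x)
approx = 1.0 / nx + np.dot(x, xp) / nx**3

err = abs(exact - approx)
ratio = np.linalg.norm(xp) / nx
```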

so the vector potential, far enough away from the current source, is
\label{eqn:magneticMomentJackson:80}
\BA(\Bx)
=
\frac{\mu_0}{4 \pi} \int \frac{\BJ(\Bx’)}{\Abs{\Bx}} d^3 x’
+\frac{\mu_0}{4 \pi} \int \frac{(\Bx \cdot \Bx’)\BJ(\Bx’)}{\Abs{\Bx}^3} d^3 x’.

Jackson uses a sneaky trick to show that the first integral is killed for a localized source. That trick appears to be based on evaluating the following divergence

\label{eqn:magneticMomentJackson:100}
\begin{aligned}
\spacegrad \cdot (x_i \BJ)
&=
(\spacegrad x_i) \cdot \BJ
+
x_i (\spacegrad \cdot \BJ) \\
&=
(\Be_k \partial_k x_i) \cdot \BJ \\
&=
\delta_{ki} J_k \\
&=
J_i.
\end{aligned}

Note that this made use of the fact that $$\spacegrad \cdot \BJ = 0$$ for magnetostatics. This provides a way to rewrite the current density as a divergence

\label{eqn:magneticMomentJackson:120}
\begin{aligned}
\int \frac{\BJ(\Bx’)}{\Abs{\Bx}} d^3 x’
&=
\Be_i \int \frac{\spacegrad’ \cdot (x_i’ \BJ(\Bx’))}{\Abs{\Bx}} d^3 x’ \\
&=
\frac{\Be_i}{\Abs{\Bx}} \int \spacegrad’ \cdot (x_i’ \BJ(\Bx’)) d^3 x’ \\
&=
\frac{1}{\Abs{\Bx}} \oint \Bx’ (d\Ba \cdot \BJ(\Bx’)).
\end{aligned}

When $$\BJ$$ is localized, this is zero provided we pick the integration surface for the volume outside of that localization region.

It is now desired to rewrite $$\int (\Bx \cdot \Bx’) \BJ$$ as a triple cross product, since the dot product of such a triple cross product has exactly this term in it

\label{eqn:magneticMomentJackson:140}
\begin{aligned}
– \Bx \cross \int \Bx’ \cross \BJ
&=
\int (\Bx \cdot \Bx’) \BJ
-
\int (\Bx \cdot \BJ) \Bx’ \\
&=
\int (\Bx \cdot \Bx’) \BJ
-
\Be_k x_i \int J_i x_k’,
\end{aligned}

so
\label{eqn:magneticMomentJackson:160}
\int (\Bx \cdot \Bx’) \BJ
=
– \Bx \cross \int \Bx’ \cross \BJ
+
\Be_k x_i \int J_i x_k’.

To get rid of this second term, the next sneaky trick is to consider the following divergence

\label{eqn:magneticMomentJackson:180}
\begin{aligned}
\oint d\Ba’ \cdot (\BJ(\Bx’) x_i’ x_j’)
&=
\int dV’ \spacegrad’ \cdot (\BJ(\Bx’) x_i’ x_j’) \\
&=
\int dV’ (\spacegrad’ \cdot \BJ) x_i’ x_j’
+
\int dV’ \BJ \cdot \spacegrad’ (x_i’ x_j’) \\
&=
\int dV’ J_k \lr{ x_i’ \partial_k x_j’ + x_j’ \partial_k x_i’ } \\
&=
\int dV’ \lr{J_k x_i’ \delta_{kj} + J_k x_j’ \delta_{ki}} \\
&=
\int dV’ \lr{J_j x_i’ + J_i x_j’}.
\end{aligned}

The surface integral is once again zero, which means that we have an antisymmetric relationship in integrals of the form

\label{eqn:magneticMomentJackson:200}
\int J_j x_i’ = -\int J_i x_j’.

Now we can use the tensor algebra trick of writing $$y = (y + y)/2$$,

\label{eqn:magneticMomentJackson:220}
\begin{aligned}
\int (\Bx \cdot \Bx’) \BJ
&=
– \Bx \cross \int \Bx’ \cross \BJ
+
\Be_k x_i \int J_i x_k’ \\
&=
– \Bx \cross \int \Bx’ \cross \BJ
+
\inv{2} \Be_k x_i \int \lr{ J_i x_k’ + J_i x_k’ } \\
&=
– \Bx \cross \int \Bx’ \cross \BJ
+
\inv{2} \Be_k x_i \int \lr{ J_i x_k’ – J_k x_i’ } \\
&=
– \Bx \cross \int \Bx’ \cross \BJ
+
\inv{2} \Be_k x_i \int (\BJ \cross \Bx’)_j \epsilon_{ikj} \\
&=
- \Bx \cross \int \Bx’ \cross \BJ
-
\inv{2} \epsilon_{kij} \Be_k x_i \int (\BJ \cross \Bx’)_j \\
&=
- \Bx \cross \int \Bx’ \cross \BJ
-
\inv{2} \Bx \cross \int \BJ \cross \Bx’ \\
&=
– \Bx \cross \int \Bx’ \cross \BJ
+
\inv{2} \Bx \cross \int \Bx’ \cross \BJ \\
&=
-\inv{2} \Bx \cross \int \Bx’ \cross \BJ,
\end{aligned}

so

\label{eqn:magneticMomentJackson:240}
\BA(\Bx) \approx \frac{\mu_0}{4 \pi \Abs{\Bx}^3} \lr{ -\frac{\Bx}{2} } \cross \int \Bx’ \cross \BJ(\Bx’) d^3 x’.

Letting

\label{eqn:magneticMomentJackson:260}
\boxed{
\Bm = \inv{2} \int \Bx’ \cross \BJ(\Bx’) d^3 x’,
}

the far field approximation of the vector potential is
\label{eqn:magneticMomentJackson:280}
\boxed{
\BA(\Bx) = \frac{\mu_0}{4 \pi} \frac{\Bm \cross \Bx}{\Abs{\Bx}^3}.
}

Note that when the current is restricted to an infinitesimally thin loop, the magnetic moment reduces to

\label{eqn:magneticMomentJackson:300}
\Bm = \frac{I}{2} \oint \Bx’ \cross d\Bl’.

Referring to [1] (pr. 1.60), this can be seen to be $$I$$ times the “vector area” integral.

# References

[1] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

[2] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

## Corollaries to Stokes and Divergence theorems

In [1] a few problems are set to prove some variations of Stokes theorem. He gives some cool tricks to prove each one using just the classic 3D Stokes and divergence theorems. We can also do them directly from the more general Stokes theorem $$\int d^k \Bx \cdot (\spacegrad \wedge F) = \oint d^{k-1} \Bx \cdot F$$.

## Question: Stokes theorem on scalar function. ([1] pr. 1.60a)

Prove
\label{eqn:stokesCorollariesGriffiths:20}
\int \spacegrad T dV = \oint T d\Ba.

The direct way to prove this is to apply Stokes theorem

\label{eqn:stokesCorollariesGriffiths:80}
\int d^3 \Bx \cdot (\spacegrad \wedge T) = \oint d^2 \Bx \cdot T

Here $$d^3 \Bx = d\Bx_1 \wedge d\Bx_2 \wedge d\Bx_3$$, a pseudoscalar (trivector) volume element, and the wedge and dot products take their most general meanings. For $$k$$-blade $$F$$, and $$k’$$-blade $$F’$$, that is

\label{eqn:stokesCorollariesGriffiths:100}
\begin{aligned}
F \wedge F’ &= \gpgrade{F F’}{k+k’} \\
F \cdot F’ &= \gpgrade{F F’}{\Abs{k-k’}}
\end{aligned}

With $$d^3\Bx = I dV$$, and $$d^2 \Bx = I \ncap dA = I d\Ba$$, we have

\label{eqn:stokesCorollariesGriffiths:120}
\int I dV \spacegrad T = \oint I d\Ba T.

Cancelling the factors of $$I$$ proves the result.

Griffiths’ trick to do this was to let $$\Bv = \Bc T$$, where $$\Bc$$ is a constant. For this, the divergence theorem integral is

\label{eqn:stokesCorollariesGriffiths:160}
\begin{aligned}
\int dV \spacegrad \cdot (\Bc T)
&=
\int dV \Bc \cdot \spacegrad T \\
&=
\Bc \cdot \int dV \spacegrad T \\
&=
\oint d\Ba \cdot (\Bc T) \\
&=
\Bc \cdot \oint d\Ba T.
\end{aligned}

This is true for any constant $$\Bc$$, so is also true for the unit vectors. This allows for summing projections in each of the unit directions

\label{eqn:stokesCorollariesGriffiths:180}
\begin{aligned}
\int dV \spacegrad T
&=
\sum \Be_k \lr{ \Be_k \cdot \int dV \spacegrad T } \\
&=
\sum \Be_k \lr{ \Be_k \cdot \oint d\Ba T } \\
&=
\oint d\Ba T.
\end{aligned}
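Here is a small numeric check of this corollary (my own sketch, with an arbitrarily chosen $$T$$):

```python
import numpy as np

# Verify the surface sum of T n dA over the unit cube against the volume
# integral of grad T, with the arbitrary choice T = x*y + z**2.
def T(x, y, z):
    return x * y + z**2

N = 400
h = 1.0 / N
c = (np.arange(N) + 0.5) * h             # midpoint grid in each direction
U, V = np.meshgrid(c, c, indexing="ij")

# Surface integral, face by face; opposite faces have opposite normals.
surf = np.array([
    (T(1.0, U, V) - T(0.0, U, V)).sum() * h * h,   # x = 1 minus x = 0
    (T(U, 1.0, V) - T(U, 0.0, V)).sum() * h * h,   # y faces
    (T(U, V, 1.0) - T(U, V, 0.0)).sum() * h * h,   # z faces
])

# Volume integral of grad T = (y, x, 2z) over [0,1]^3, done analytically.
vol = np.array([0.5, 0.5, 1.0])
```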

## Question: ([1] pr. 1.60b)

Prove
\label{eqn:stokesCorollariesGriffiths:40}
\int \spacegrad \cross \Bv dV = -\oint \Bv \cross d\Ba.

This also follows directly from the general Stokes theorem

\label{eqn:stokesCorollariesGriffiths:200}
\int d^3 \Bx \cdot \lr{ \spacegrad \wedge \Bv } = \oint d^2 \Bx \cdot \Bv

The volume integrand is

\label{eqn:stokesCorollariesGriffiths:220}
\begin{aligned}
d^3 \Bx \cdot \lr{ \spacegrad \wedge \Bv }
&=
\gpgradeone{ I dV I \lr{ \spacegrad \cross \Bv } } \\
&=
-dV \spacegrad \cross \Bv,
\end{aligned}

and the surface integrand is
\label{eqn:stokesCorollariesGriffiths:240}
\begin{aligned}
d^2 \Bx \cdot \Bv
&=
\gpgradeone{ I d\Ba \Bv } \\
&=
\gpgradeone{ I (d\Ba \wedge \Bv) } \\
&=
I^2 (d\Ba \cross \Bv) \\
&=
-d\Ba \cross \Bv \\
&=
\Bv \cross d\Ba.
\end{aligned}

Plugging these into \ref{eqn:stokesCorollariesGriffiths:200} proves the result.

Griffiths’ trick for the same is to apply the divergence theorem to $$\Bv \cross \Bc$$. Such a volume integral is

\label{eqn:stokesCorollariesGriffiths:260}
\begin{aligned}
\int dV \spacegrad \cdot (\Bv \cross \Bc)
&=
\int dV \Bc \cdot (\spacegrad \cross \Bv) \\
&=
\Bc \cdot \int dV \spacegrad \cross \Bv.
\end{aligned}

This must equal
\label{eqn:stokesCorollariesGriffiths:280}
\begin{aligned}
\oint d\Ba \cdot (\Bv \cross \Bc)
&=
\Bc \cdot \oint d\Ba \cross \Bv \\
&=
-\Bc \cdot \oint \Bv \cross d\Ba
\end{aligned}

Again, assembling projections, we have
\label{eqn:stokesCorollariesGriffiths:300}
\begin{aligned}
\int dV \spacegrad \cross \Bv
&=
\sum \Be_k \lr{ \Be_k \cdot \int dV \spacegrad \cross \Bv } \\
&=
-\sum \Be_k \lr{ \Be_k \cdot \oint \Bv \cross d\Ba } \\
&=
-\oint \Bv \cross d\Ba.
\end{aligned}

## Question: ([1] pr. 1.60e)

Prove
\label{eqn:stokesCorollariesGriffiths:60}
\int \spacegrad T \cross d\Ba = -\oint T d\Bl.

This one follows from
\label{eqn:stokesCorollariesGriffiths:320}
\int d^2 \Bx \cdot \lr{ \spacegrad \wedge T } = \oint d^1 \Bx \cdot T.

The surface integrand can be written
\label{eqn:stokesCorollariesGriffiths:340}
\begin{aligned}
d^2 \Bx \cdot \lr{ \spacegrad \wedge T }
&=
\gpgradeone{ I d\Ba \spacegrad T } \\
&=
I (d\Ba \wedge \spacegrad T ) \\
&=
I^2 ( d\Ba \cross \spacegrad T ) \\
&=
\spacegrad T \cross d\Ba.
\end{aligned}

The line integrand is

\label{eqn:stokesCorollariesGriffiths:360}
d^1 \Bx \cdot T = d^1 \Bx T.

Given a two parameter representation of the surface area element $$d^2 \Bx = d\Bx_1 \wedge d\Bx_2$$, the line element representation is
\label{eqn:stokesCorollariesGriffiths:380}
\begin{aligned}
d^1 \Bx
&= (\Bx_1 \wedge d\Bx_2) \cdot \Bx^1 + (d\Bx_1 \wedge \Bx_2) \cdot \Bx^2 \\
&= -d\Bx_2 + d\Bx_1,
\end{aligned}

giving

\label{eqn:stokesCorollariesGriffiths:400}
\begin{aligned}
\oint d^1 \Bx T
&=
\int
-\evalbar{\lr{ \PD{u_2}{\Bx} T }}{\Delta u_1} du_2
+\evalbar{\lr{ \PD{u_1}{\Bx} T }}{\Delta u_2} du_1 \\
&=
-\oint d\Bl T,
\end{aligned}

or
\label{eqn:stokesCorollariesGriffiths:420}
\int \spacegrad T \cross d\Ba
=
-\oint d\Bl T.

Griffiths’ trick for the same is to use $$\Bv = \Bc T$$ for constant $$\Bc$$ in the usual 3D Stokes’ theorem. That is

\label{eqn:stokesCorollariesGriffiths:440}
\begin{aligned}
\int d\Ba \cdot (\spacegrad \cross (\Bc T))
&=
\Bc \cdot \int d\Ba \cross \spacegrad T \\
&=
-\Bc \cdot \int \spacegrad T \cross d\Ba \\
&=
\oint d\Bl \cdot (\Bc T) \\
&=
\Bc \cdot \oint d\Bl T.
\end{aligned}

Again assembling projections we have
\label{eqn:stokesCorollariesGriffiths:460}
\begin{aligned}
\int \spacegrad T \cross d\Ba
&=
\sum \Be_k \lr{ \Be_k \cdot \int \spacegrad T \cross d\Ba} \\
&=
-\sum \Be_k \lr{ \Be_k \cdot \oint d\Bl T } \\
&=
-\oint d\Bl T.
\end{aligned}

# References

[1] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.

## Jackson’s electrostatic self energy analysis

### Motivation

I was reading my Jackson [1], which characteristically had the statement “the […] integral can easily be shown to have the value $$4 \pi$$”, in a discussion of electrostatic energy and self energy. After a few attempts and a couple of pages of calculations, I figured out how this can be easily shown.

### Context

Let me walk through the context that leads to the “easy” integral, and then the evaluation of that integral. Unlike my older copy of Jackson, I’ll do this in SI units.

The starting point is a statement that the work done (potential energy) of one charge $$q_i$$ in a set of $$n$$ charges, where that charge is brought to its position $$\Bx_i$$ from infinity, is

\label{eqn:electrostaticJacksonSelfEnergy:20}
W_i = q_i \Phi(\Bx_i),

where the potential energy due to the rest of the charge configuration is

\label{eqn:electrostaticJacksonSelfEnergy:40}
\Phi(\Bx_i) = \inv{4 \pi \epsilon} \sum_{j \ne i} \frac{q_j}{\Abs{\Bx_i - \Bx_j}}.

This means that the total potential energy, making sure not to double count, to move all the charges in from infinity is

\label{eqn:electrostaticJacksonSelfEnergy:60}
W = \inv{4 \pi \epsilon} \sum_{1 \le i < j \le n} \frac{q_i q_j}{\Abs{\Bx_i - \Bx_j}}.

This sum over all unique pairs is somewhat unwieldy, so it can be adjusted by explicitly double counting, with a corresponding divide by two

\label{eqn:electrostaticJacksonSelfEnergy:80}
W = \inv{2} \inv{4 \pi \epsilon} \sum_{1 \le i \ne j \le n} \frac{q_i q_j}{\Abs{\Bx_i - \Bx_j}}.

The point that causes the trouble later is the continuum equivalent to this relationship, which is

\label{eqn:electrostaticJacksonSelfEnergy:100}
W = \inv{8 \pi \epsilon} \int \frac{\rho(\Bx) \rho(\Bx')}{\Abs{\Bx - \Bx'}} d^3 \Bx d^3 \Bx',

or

\label{eqn:electrostaticJacksonSelfEnergy:120}
W = \inv{2} \int \rho(\Bx) \Phi(\Bx) d^3 \Bx.

There's a subtlety here that is often passed over. When the charge densities represent point charges $$\rho(\Bx) = q \delta^3(\Bx - \Bx')$$, notice that this integral equivalent is evaluated over all space, including the points at which the charges are located. Ignoring that subtlety, this potential energy can be expressed in terms of the electric field, and then integrated by parts

\label{eqn:electrostaticJacksonSelfEnergy:140}
\begin{aligned}
W
&= \inv{2} \int (\spacegrad \cdot (\epsilon \BE)) \Phi(\Bx) d^3 \Bx \\
&= \frac{\epsilon}{2} \int \lr{ \spacegrad \cdot (\BE \Phi) - (\spacegrad \Phi) \cdot \BE } d^3 \Bx \\
&= \frac{\epsilon}{2} \oint dA \ncap \cdot (\BE \Phi) + \frac{\epsilon}{2} \int \BE \cdot \BE d^3 \Bx.
\end{aligned}

The presumption is that $$\BE \Phi$$ falls off as the bounds of the integration volume tend to infinity. That leaves us with an energy density proportional to the square of the field

\label{eqn:electrostaticJacksonSelfEnergy:160}
w = \frac{\epsilon}{2} \BE^2.
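The equivalence of the unique-pair sum and the double-counted sum is trivial to verify numerically (a tiny sketch of my own):

```python
import numpy as np

# Check that the sum over unique pairs equals half of the double-counted
# sum over i != j, for random charges and positions (1/(4 pi eps) omitted).
rng = np.random.default_rng(1)
n = 6
q = rng.normal(size=n)
x = rng.normal(size=(n, 3))

W_pairs = sum(q[i] * q[j] / np.linalg.norm(x[i] - x[j])
              for i in range(n) for j in range(i + 1, n))
W_double = 0.5 * sum(q[i] * q[j] / np.linalg.norm(x[i] - x[j])
                     for i in range(n) for j in range(n) if i != j)
```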

### Inconsistency

It’s here that Jackson points out the inconsistency between \ref{eqn:electrostaticJacksonSelfEnergy:160} and the original
discrete analogue \ref{eqn:electrostaticJacksonSelfEnergy:80} that this was based on. The energy density is positive definite, whereas the discrete potential energy can be negative if there is a difference in the sign of the charges.

Here Jackson uses a two particle charge distribution to help resolve this conundrum. For a superposition $$\BE = \BE_1 + \BE_2$$, we have

\label{eqn:electrostaticJacksonSelfEnergy:180}
\BE
=
\inv{4 \pi \epsilon} \frac{q_1 (\Bx - \Bx_1)}{\Abs{\Bx - \Bx_1}^3}
+ \inv{4 \pi \epsilon} \frac{q_2 (\Bx - \Bx_2)}{\Abs{\Bx - \Bx_2}^3},

so the energy density is
\label{eqn:electrostaticJacksonSelfEnergy:200}
w =
\frac{1}{32 \pi^2 \epsilon} \frac{q_1^2}{\Abs{\Bx - \Bx_1}^4 }
+
\frac{1}{32 \pi^2 \epsilon} \frac{q_2^2}{\Abs{\Bx - \Bx_2}^4 }
+
2 \frac{q_1 q_2}{32 \pi^2 \epsilon}
\frac{(\Bx - \Bx_1)}{\Abs{\Bx - \Bx_1}^3} \cdot
\frac{(\Bx - \Bx_2)}{\Abs{\Bx - \Bx_2}^3}.

The discrete potential had only an interaction energy, whereas the potential from this squared field has an interaction energy plus two self energy terms. Those two strictly positive self energy terms are what forces this field energy positive, independent of the sign of the interaction energy density. Jackson makes a change of variables of the form

\label{eqn:electrostaticJacksonSelfEnergy:220}
\begin{aligned}
\Brho &= (\Bx - \Bx_1)/R \\
R &= \Abs{\Bx_1 - \Bx_2} \\
\ncap &= (\Bx_1 - \Bx_2)/R,
\end{aligned}

for which we find

\label{eqn:electrostaticJacksonSelfEnergy:240}
\Bx = \Bx_1 + R \Brho,

so
\label{eqn:electrostaticJacksonSelfEnergy:260}
\Bx - \Bx_2 =
\Bx_1 - \Bx_2 + R \Brho
= R (\ncap + \Brho),

and
\label{eqn:electrostaticJacksonSelfEnergy:280}
d^3 \Bx = R^3 d^3 \Brho,

so the total interaction energy is
\label{eqn:electrostaticJacksonSelfEnergy:300}
\begin{aligned}
W_{\textrm{int}}
&=
\frac{q_1 q_2}{16 \pi^2 \epsilon}
\int d^3 \Bx
\frac{(\Bx - \Bx_1)}{\Abs{\Bx - \Bx_1}^3} \cdot
\frac{(\Bx - \Bx_2)}{\Abs{\Bx - \Bx_2}^3} \\
&=
\frac{q_1 q_2}{16 \pi^2 \epsilon}
\int R^3 d^3 \Brho
\frac{ R \Brho }{ R^3 \Abs{\Brho}^3 } \cdot
\frac{R (\ncap + \Brho)}{R^3 \Abs{\ncap + \Brho}^3} \\
&=
\frac{q_1 q_2}{16 \pi^2 \epsilon R}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}.
\end{aligned}

Evaluating this integral is what Jackson calls easy. The technique required is to express the integrand in terms of gradients in the $$\Brho$$ coordinate system

\label{eqn:electrostaticJacksonSelfEnergy:320}
\begin{aligned}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}
&=
\int d^3 \Brho
\lr{ - \spacegrad_\Brho \inv{\Abs{\Brho}} }
\cdot
\lr{ - \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} } \\
&=
\int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\Brho}} }
\cdot
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }.
\end{aligned}

I found it somewhat non-trivial to find the exact form of the chain rule that is required to simplify this integral, but after some trial and error, figured it out by working backwards from
\label{eqn:electrostaticJacksonSelfEnergy:340}
\spacegrad_\Brho^2 \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}
=
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\Brho}} \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }
+
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\ncap + \Brho}} \spacegrad_\Brho \inv{ \Abs{\Brho} } }.

In integral form this is
\label{eqn:electrostaticJacksonSelfEnergy:360}
\begin{aligned}
\oint dA' \ncap' \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}
&=
\int d^3 \Brho
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\Brho}} \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }
+
\int d^3 \Brho
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\ncap + \Brho}} \spacegrad_\Brho \inv{ \Abs{\Brho} } } \\
&=
\int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\Brho}} } \cdot \lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }
+
\int d^3 \Brho
\inv{\Abs{\Brho}} \spacegrad_\Brho^2 \inv{\Abs{\ncap + \Brho}} \\
&\quad +
\int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} } \cdot \lr{ \spacegrad_\Brho \inv{ \Abs{\Brho} } }
+
\int d^3 \Brho
\inv{\Abs{\ncap + \Brho}} \spacegrad_\Brho^2 \inv{ \Abs{\Brho} } \\
&=
2 \int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\Brho}} } \cdot \lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }
- 4 \pi
\int d^3 \Brho
\inv{\Abs{\Brho}} \delta^3(\Brho + \ncap)
- 4 \pi
\int d^3 \Brho
\inv{\Abs{\ncap + \Brho}} \delta^3(\Brho) \\
&=
2 \int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\Brho}} } \cdot \lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }
- 8 \pi.
\end{aligned}

This used the Laplacian representation of the delta function $$\delta^3(\Bx) = -(1/4\pi) \spacegrad^2 (1/\Abs{\Bx})$$. Back-substitution gives

\label{eqn:electrostaticJacksonSelfEnergy:380}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}
=
4 \pi
+
\inv{2} \oint dA' \ncap' \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}.

We can argue that this last integral tends to zero, since

\label{eqn:electrostaticJacksonSelfEnergy:400}
\begin{aligned}
\oint dA' \ncap' \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}
&=
\oint dA' \ncap' \cdot \lr{
\lr{ \spacegrad_\Brho \inv{ \Abs{\Brho}} } \inv{\Abs{\ncap + \Brho}}
+
\inv{ \Abs{\Brho}} \lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }
} \\
&=
-\oint dA' \ncap' \cdot \lr{
\frac{ \Brho }{ \Abs{\Brho}^3 } \inv{\Abs{\ncap + \Brho}}
+
\inv{ \Abs{\Brho}} \frac{ (\Brho + \ncap) }{ \Abs{\ncap + \Brho}^3 }
} \\
&=
-\oint dA' \inv{\Abs{\Brho} \Abs{\Brho + \ncap}}
\lr{
\frac{ \ncap' \cdot \Brho }{ \Abs{\Brho}^2 }
+
\frac{ \ncap' \cdot (\Brho + \ncap) }{ \Abs{\Brho + \ncap}^2 }
}.
\end{aligned}

The integrand in this surface integral is of $$O(1/\rho^3)$$ so tends to zero on an infinite surface in the $$\Brho$$ coordinate system. This completes the “easy” integral, leaving

\label{eqn:electrostaticJacksonSelfEnergy:420}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}
=
4 \pi.
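As a sanity check on this $$4 \pi$$ result, here is a small numerical sketch of my own (not from Jackson): choosing coordinates with $$\ncap$$ along the z axis and writing $$t = \cos\theta$$, the integrand of the previous equation reduces to $$(t + \rho)/(1 + 2 \rho t + \rho^2)^{3/2}$$ with measure $$2 \pi \, d\rho \, dt$$, and the angular integral turns out to vanish inside the unit sphere and equal $$2/\rho^2$$ outside, giving $$2\pi \int_1^\infty (2/\rho^2) d\rho = 4 \pi$$:

```python
import numpy as np

# With nhat along z and t = cos(theta), d^3 rho = 2 pi rho^2 drho dt, and the
# integrand collapses to (t + rho) / (1 + 2 rho t + rho^2)^(3/2).
t = np.linspace(-1.0, 1.0, 200001)

def inner(rho):
    """Angular part of the integral at fixed rho, via the trapezoid rule."""
    f = (t + rho) / (1.0 + 2.0 * rho * t + rho * rho) ** 1.5
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# The angular integral vanishes inside the unit sphere, and is 2/rho^2 outside.
assert abs(inner(0.5)) < 1e-6
assert abs(inner(2.0) - 2.0 / 2.0**2) < 1e-6

# Radial integral of 2/rho^2 over (1, inf) is 2, so the total is 4 pi.
total = 2.0 * np.pi * 2.0
assert abs(total - 4.0 * np.pi) < 1e-12
```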

The total field energy can now be expressed as a sum of the self energies and the interaction energy
\label{eqn:electrostaticJacksonSelfEnergy:440}
W =
\frac{1}{32 \pi^2 \epsilon} \int d^3 \Bx \frac{q_1^2}{\Abs{\Bx - \Bx_1}^4 }
+
\frac{1}{32 \pi^2 \epsilon} \int d^3 \Bx \frac{q_2^2}{\Abs{\Bx - \Bx_2}^4 }
+ \inv{ 4 \pi \epsilon}
\frac{q_1 q_2}{\Abs{\Bx_1 - \Bx_2} }.

The interaction energy is exactly the potential energy for the two particles, but this total energy in the field is biased in the positive direction by the pair of self energies. It is interesting that the energy obtained from integrating the field energy density contains such self energy terms, but I don't know exactly what to make of them at this point in time.

# References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

## Electric and magnetic fields at an interface

As pointed out in [1] the fields at an interface that is not a perfect conductor on either side are related by

\label{eqn:fieldsAtInterface:20}
\begin{aligned}
\ncap \cdot \lr{ \BD_2 - \BD_1 } &= \rho_{es} \\
\ncap \cross \lr{ \BE_2 - \BE_1 } &= -\BM_s \\
\ncap \cdot \lr{ \BB_2 - \BB_1 } &= \rho_{ms} \\
\ncap \cross \lr{ \BH_2 - \BH_1 } &= \BJ_s.
\end{aligned}

Given the fields in medium 1, and assuming that both media are linear, we can use these relationships to determine the fields in the other medium.

\label{eqn:fieldsAtInterface:40}
\begin{aligned}
\ncap \cdot \BE_2 &= \inv{\epsilon_2} \lr{ \epsilon_1 \ncap \cdot \BE_1 + \rho_{es} } \\
\ncap \wedge \BE_2 &= \ncap \wedge \BE_1 -I \BM_s \\
\ncap \cdot \BB_2 &= \ncap \cdot \BB_1 + \rho_{ms} \\
\ncap \wedge \BB_2 &= \mu_2 \lr{ \inv{\mu_1} \ncap \wedge \BB_1 + I \BJ_s}.
\end{aligned}

Now the fields in medium 2 can be obtained by adding the normal and tangential projections. For the electric field

\label{eqn:fieldsAtInterface:60}
\begin{aligned}
\BE_2
&=
\ncap (\ncap \cdot \BE_2 )
+ \ncap \cdot (\ncap \wedge \BE_2) \\
&=
\inv{\epsilon_2} \ncap \lr{ \epsilon_1 \ncap \cdot \BE_1 + \rho_{es} }
+
\ncap \cdot (\ncap \wedge \BE_1 -I \BM_s).
\end{aligned}

Note that this manipulation can also be done without Geometric Algebra by writing $$\BE_2 = \ncap (\ncap \cdot \BE_2 ) - \ncap \cross (\ncap \cross \BE_2)$$.
Expanding $$\ncap \cdot (\ncap \wedge \BE_1) = \BE_1 - \ncap (\ncap \cdot \BE_1)$$ and $$\ncap \cdot (I \BM_s) = -\ncap \cross \BM_s$$ gives

\label{eqn:fieldsAtInterface:80}
\boxed{
\BE_2
=
\BE_1
+ \ncap (\ncap \cdot \BE_1) \lr{ \frac{\epsilon_1}{\epsilon_2} – 1 }
+ \ncap \frac{\rho_{es}}{\epsilon_2}
+ \ncap \cross \BM_s.
}

For the magnetic field

\label{eqn:fieldsAtInterface:100}
\begin{aligned}
\BB_2
&=
\ncap (\ncap \cdot \BB_2 )
+
\ncap \cdot (\ncap \wedge \BB_2) \\
&=
\ncap \lr{ \ncap \cdot \BB_1 + \rho_{ms} }
+
\mu_2 \ncap \cdot \lr{ \lr{ \inv{\mu_1} \ncap \wedge \BB_1 + I \BJ_s} },
\end{aligned}

which is

\label{eqn:fieldsAtInterface:120}
\boxed{
\BB_2
=
\frac{\mu_2}{\mu_1} \BB_1
+
\ncap (\ncap \cdot \BB_1) \lr{ 1 – \frac{\mu_2}{\mu_1} }
+ \ncap \rho_{ms}
- \mu_2 \ncap \cross \BJ_s.
}
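As a numerical spot-check of my own (the field values, material constants, and surface densities below are arbitrary, not from Balanis), fields built from these expressions can be verified against the original interface conditions. Note the factor of $$\mu_2$$ carried by the $$\BJ_s$$ term, which is what makes $$\ncap \cross (\BH_2 - \BH_1) = \BJ_s$$ come out with $$\BH = \BB/\mu$$:

```python
import numpy as np

n  = np.array([0.0, 0.0, 1.0])    # unit normal, pointing from medium 1 into medium 2
E1 = np.array([1.0, 2.0, 3.0])    # arbitrary fields in medium 1
B1 = np.array([-1.0, 0.5, 2.0])
eps1, eps2 = 2.0, 3.0             # arbitrary linear media constants
mu1, mu2 = 1.5, 4.0
rho_es, rho_ms = 0.7, -0.3        # surface charge densities (electric, magnetic)
Ms = np.array([0.2, -0.5, 0.0])   # tangential magnetic surface current (n . Ms = 0)
Js = np.array([-0.4, 0.1, 0.0])   # tangential electric surface current (n . Js = 0)

# The boxed expressions for the fields in medium 2.
E2 = E1 + n * (n @ E1) * (eps1/eps2 - 1.0) + n * rho_es/eps2 + np.cross(n, Ms)
B2 = (mu2/mu1) * B1 + n * (n @ B1) * (1.0 - mu2/mu1) + n * rho_ms \
     - mu2 * np.cross(n, Js)

# Verify the interface conditions they were derived from.
assert np.isclose(n @ (eps2 * E2 - eps1 * E1), rho_es)   # n . (D2 - D1) = rho_es
assert np.allclose(np.cross(n, E2 - E1), -Ms)            # n x (E2 - E1) = -Ms
assert np.isclose(n @ (B2 - B1), rho_ms)                 # n . (B2 - B1) = rho_ms
assert np.allclose(np.cross(n, B2/mu2 - B1/mu1), Js)     # n x (H2 - H1) = Js
```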

These are kind of pretty results, having none of the explicit angle dependence that we see in the Fresnel relationships. In this analysis, it is assumed there is only a transmitted component of the ray in question, and no reflected component. Can we do a purely vectorial treatment of the Fresnel equations along these same lines?

# References

[1] Constantine A Balanis. Advanced engineering electromagnetics. Wiley New York, 1989.

## How to invoke the 2nd pass of the clang compiler manually


Because the clang front end re-execs itself, breakpoints on the interesting parts of the front end don't get hit by default. Here's an example:

$ cat g2
b llvm::Module::setDataLayout
b BackendConsumer::BackendConsumer
b llvm::TargetMachine::TargetMachine
b llvm::TargetMachine::createDataLayout
run -mbig-endian -m64 -c bytes.c -emit-llvm -o big.bc

$ gdb `which clang`
GNU gdb (GDB) Red Hat Enterprise Linux 7.9.1-19.lz.el7
...
(gdb) source g2
Breakpoint 1 at 0x2c04c3d: llvm::Module::setDataLayout. (2 locations)
Breakpoint 2 at 0x3d08870: file /source/llvm/lib/Target/TargetMachine.cpp, line 47.
Breakpoint 3 at 0x33108ca: file /source/llvm/include/llvm/Target/TargetMachine.h, line 133.
...
Detaching after vfork from child process 15795.
[Inferior 1 (process 15789) exited normally]


(The debugger finishes and exits, hitting none of the breakpoints)

One way to deal with this is to set the fork mode to child:

(gdb) set follow-fork-mode child


An alternate way of dealing with this is to use strace to collect the command line that clang invokes itself with. For example:

$ strace -f -s 1024 -v clang -mbig-endian -m64 big.bc -c 2>&1 | grep exec | tail -2 | head -1

This provides the command line options for the self invocation of clang:

[pid  4650] execve("/usr/local/bin/clang-3.9", ["/usr/local/bin/clang-3.9", "-cc1", "-triple", "aarch64_be-unknown-linux-gnu", "-emit-obj", "-mrelax-all", "-disable-free", "-main-file-name", "big.bc", "-mrelocation-model", "static", "-mthread-model", "posix", "-mdisable-fp-elim", "-fmath-errno", "-masm-verbose", "-mconstructor-aliases", "-fuse-init-array", "-target-cpu", "generic", "-target-feature", "+neon", "-target-abi", "aapcs", "-dwarf-column-info", "-debugger-tuning=gdb", "-coverage-file", "/workspace/pass/run/big.bc", "-resource-dir", "/usr/local/bin/../lib/clang/3.9.0", "-fdebug-compilation-dir", "/workspace/pass/run", "-ferror-limit", "19", "-fmessage-length", "0", "-fallow-half-arguments-and-returns", "-fno-signed-char", "-fobjc-runtime=gcc", "-fdiagnostics-show-option", "-o", "big.o", "-x", "ir", "big.bc"],

With a bit of vim tweaking you can turn this into a command line that can be executed (or debugged) directly:

/usr/local/bin/clang-3.9 -cc1 -triple aarch64_be-unknown-linux-gnu -emit-obj -mrelax-all -disable-free -main-file-name big.bc -mrelocation-model static -mthread-model posix -mdisable-fp-elim -fmath-errno -masm-verbose -mconstructor-aliases -fuse-init-array -target-cpu generic -target-feature +neon -target-abi aapcs -dwarf-column-info -debugger-tuning=gdb -coverage-file /workspace/pass/run/big.bc -resource-dir /usr/local/bin/../lib/clang/3.9.0 -fdebug-compilation-dir /workspace/pass/run -ferror-limit 19 -fmessage-length 0 -fallow-half-arguments-and-returns -fno-signed-char -fobjc-runtime=gcc -fdiagnostics-show-option -o big.o -x ir big.bc

Note that doing this also provides a mechanism to change the compiler triple manually, which is something that I wondered how to do (since clang documents -triple as an option, but seems to ignore it).
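As an aside, instead of the vim tweaking, a small Python sketch can rebuild a runnable command from an execve line. The sample line below is abbreviated and illustrative (real strace output carries the full argv list), and the simple regex assumes no embedded escaped quotes:

```python
import re
import shlex

# Abbreviated sample strace execve line (hypothetical; real output is longer).
line = ('[pid  4650] execve("/usr/local/bin/clang-3.9", '
        '["/usr/local/bin/clang-3.9", "-cc1", "-triple", '
        '"aarch64_be-unknown-linux-gnu", "-o", "big.o", "-x", "ir", "big.bc"], ...')

# The first quoted string is the pathname; the rest are the argv entries.
strings = re.findall(r'"([^"]*)"', line)
cmd = shlex.join(strings[1:])  # re-quote safely for the shell (Python 3.8+)
print(cmd)
```

For this sample line, `cmd` comes out as `/usr/local/bin/clang-3.9 -cc1 -triple aarch64_be-unknown-linux-gnu -o big.o -x ir big.bc`, ready to paste into a shell or a gdb `run` line.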
For example, I'm able to change -triple aarch64_be to aarch64 and get little endian object code from bytecode prepared with -mbig-endian.

## speeding up clang debug and builds

I found the default static library configuration of clang slow to rebuild, so I started building it in shared mode. That loaded pretty slowly in gdb, so I went looking for how to enable split dwarf, and found a nice little presentation on how to speed up clang builds. There's a followup blog post with some speedup conclusions. A shortcoming of that blog post is that it doesn't actually list the cmake commands required to build with all these tricks. Using all the tricks listed there, I'm now trying the following:

mkdir -p ~/freeware
cd ~/freeware

git clone git://sourceware.org/git/binutils-gdb.git
cd binutils-gdb
./configure --prefix=$HOME/local/binutils.gold --enable-gold=default
make
make install

cd ..
git clone git://github.com/ninja-build/ninja.git
cd ninja
./configure.py --bootstrap
mkdir -p ~/local/ninja/bin/
cp ninja ~/local/ninja/bin/


With ninja in my PATH, I can now build clang with:

CC=clang CXX=clang++ \
cmake -G Ninja \
../llvm \
-DLLVM_USE_SPLIT_DWARF=TRUE \
-DLLVM_ENABLE_ASSERTIONS=TRUE \
-DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_INSTALL_PREFIX=$HOME/clang39.be \
-DCMAKE_SHARED_LINKER_FLAGS="-B$HOME/local/binutils.gold/bin -Wl,--gdb-index" \
-DCMAKE_EXE_LINKER_FLAGS="-B$HOME/local/binutils.gold/bin -Wl,--gdb-index" \
-DBUILD_SHARED_LIBS=true \
-DLLVM_TARGETS_TO_BUILD=X86 \
2>&1 | tee o

ninja
ninja install

This does build way faster, both for full builds and incremental builds.

## Build tree size

Dynamic libraries: 4.4 Gb. Static libraries: 19.8 Gb.

## Installed size

Dynamic libraries: 0.7 Gb. Static libraries: 14.7 Gb.

## Results: full build time

Static libraries, non-ninja, all backends:

real    51m6.494s
user    160m47.027s
sys     8m49.429s

Dynamic libraries, ninja, split dwarf, x86 backend only:

real    26m19.360s
user    86m11.477s
sys     3m14.478s

## Results: incremental build

After touching lib/Target/X86/MCTargetDesc/X86MCCodeEmitter.cpp. Static libraries, non-ninja, all backends:

real    2m17.709s
user    6m8.648s
sys     0m28.594s

Dynamic libraries, ninja, split dwarf, x86 backend only:

real    0m3.245s
user    0m6.104s
sys     0m0.802s

## make install times

make:

real    2m6.353s
user    0m7.827s
sys     0m15.316s

ninja:

real    0m2.138s
user    0m0.420s
sys     0m0.831s

The time for rerunning a sharedlib-config 'ninja install' is even faster!

## Results: time for gdb, b main, run, quit

Static libraries:

real    0m45.904s
user    0m32.376s
sys     0m1.787s

Dynamic libraries, with split dwarf:

real    0m44.440s
user    0m37.096s
sys     0m1.067s

This one isn't what I would have expected. The initial gdb load time for the split-dwarf exe is almost instantaneous, but it still takes a long time to break in main and continue to that point. I guess we are taking the hit for a lot of symbol lookup at that point, so it comes out as a wash. Thinking about this, I noticed that the clang make system doesn't seem to add '-Wl,--gdb-index' to the link step along with the addition of -gsplit-dwarf to the compilation command line. I thought that was required to get all the deferred symbol table lookup? Attempting to do so, I found that inserting an alternate linker in my PATH wasn't enough to get clang to use it.
Adding -Wl,--gdb-index into the link flags caused complaints from /usr/bin/ld! The cmake magic required was:

-DCMAKE_SHARED_LINKER_FLAGS="-B$HOME/local/binutils.gold/bin -Wl,--gdb-index" \

With the gdb-index link in place, the same gdb 'b main, run, quit' test gives:

real    0m10.268s