Line charge field and potential.

October 26, 2016 math and physics play


When computing the most general solution of the electrostatic potential in a plane, Jackson [1] mentions that \( -2 \lambda_0 \ln \rho \) is the well known potential for an infinite line charge (up to a unit specific factor). Checking that statement, since I didn’t recall what that potential was offhand, I encountered some inconsistencies and non-convergent integrals, and thought it was worthwhile to explore those a bit more carefully. That is done here.

Using Gauss’s law.

For an infinite length line charge, we can find the radial field contribution using Gauss’s law, imagining a cylinder of length \( \Delta l \) and radius \( \rho \) surrounding this charge, with its midpoint at the origin. Ignoring any non-radial field contribution, we have

\begin{equation}\label{eqn:lineCharge:20}
\int_{-\Delta l/2}^{\Delta l/2} \ncap \cdot \BE (2 \pi \rho) dl = \frac{\lambda_0}{\epsilon_0} \Delta l,
\end{equation}

or

\begin{equation}\label{eqn:lineCharge:40}
\BE = \frac{\lambda_0}{2 \pi \epsilon_0} \frac{\rhocap}{\rho}.
\end{equation}

Since

\begin{equation}\label{eqn:lineCharge:60}
\frac{\rhocap}{\rho} = \spacegrad \ln \rho,
\end{equation}

this means that the potential is

\begin{equation}\label{eqn:lineCharge:80}
\phi = -\frac{2 \lambda_0}{4 \pi \epsilon_0} \ln \rho.
\end{equation}
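As a quick numeric sanity check of the identity used to pass from the field to the potential, a central-difference gradient of \( \ln \rho \) can be compared against \( \rhocap/\rho \). This is a minimal Python sketch (the sample point is an arbitrary choice):

```python
import math

# Central-difference check that grad(ln rho) = rhohat / rho, i.e. that
# the components of grad(ln rho) at (x, y) are (x, y) / rho^2.
def grad_ln_rho(x, y, h=1e-6):
    f = lambda u, v: math.log(math.hypot(u, v))
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

x, y = 1.3, -0.7
rho2 = x * x + y * y
gx, gy = grad_ln_rho(x, y)
assert abs(gx - x / rho2) < 1e-6
assert abs(gy - y / rho2) < 1e-6
```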

Finite line charge potential.

Let’s try both of these calculations for a finite charge distribution. Gauss’s law loses its usefulness, but we can evaluate the integrals directly. For the electric field

\begin{equation}\label{eqn:lineCharge:100}
\BE
= \frac{\lambda_0}{4 \pi \epsilon_0} \int \frac{(\Bx - \Bx')}{\Abs{\Bx - \Bx'}^3} dl'.
\end{equation}

Using cylindrical coordinates with the field point \( \Bx = \rho \rhocap \) for convenience, the charge point \( \Bx' = z' \zcap \), and the charge distributed over \( [a,b] \), this is

\begin{equation}\label{eqn:lineCharge:120}
\BE
= \frac{\lambda_0}{4 \pi \epsilon_0} \int_a^b \frac{(\rho \rhocap - z' \zcap)}{\lr{\rho^2 + (z')^2}^{3/2}} dz'.
\end{equation}

When the charge is uniformly distributed around the origin, \( [a,b] = b[-1,1] \), the \( \zcap \) component of this field is killed because the integrand is odd. This justifies ignoring such contributions in the Gaussian cylinder analysis above. The general solution to this integral is found to be

\begin{equation}\label{eqn:lineCharge:140}
\BE
=
\frac{\lambda_0}{4 \pi \epsilon_0}
\evalrange{
\lr{
\frac{z’ \rhocap }{\rho \sqrt{ \rho^2 + (z’)^2 } }
+\frac{\zcap}{ \sqrt{ \rho^2 + (z’)^2 } }
}
}{a}{b},
\end{equation}

or
\begin{equation}\label{eqn:lineCharge:240}
\boxed{
\BE
=
\frac{\lambda_0}{4 \pi \epsilon_0}
\lr{
\frac{\rhocap }{\rho}
\lr{
\frac{b}{\sqrt{ \rho^2 + b^2 } }
-\frac{a}{\sqrt{ \rho^2 + a^2 } }
}
+ \zcap
\lr{
\frac{1}{ \sqrt{ \rho^2 + b^2 } }
-\frac{1}{ \sqrt{ \rho^2 + a^2 } }
}
}.
}
\end{equation}

When \( b = -a = \Delta l/2 \), this reduces to

\begin{equation}\label{eqn:lineCharge:160}
\BE
=
\frac{\lambda_0}{4 \pi \epsilon_0}
\frac{\rhocap }{\rho}
\frac{\Delta l}{\sqrt{ \rho^2 + (\Delta l/2)^2 } },
\end{equation}

which further reduces to \ref{eqn:lineCharge:40} when \( \Delta l \gg \rho \).
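The boxed closed form is easy to get wrong, so here is a numeric spot check: Simpson quadrature of the field integrand against the closed form, plus the infinite line limit. This is a Python sketch with the factor \( \lambda_0/4 \pi \epsilon_0 \) set to one and an arbitrary sample geometry:

```python
import math

def E_closed(rho, a, b):
    """Boxed closed form, in units of lambda_0/(4 pi eps_0)."""
    sb = math.sqrt(rho**2 + b**2)
    sa = math.sqrt(rho**2 + a**2)
    return ((1.0 / rho) * (b / sb - a / sa),  # rhocap component
            1.0 / sb - 1.0 / sa)              # zcap component

def E_quad(rho, a, b, n=20000):
    """Composite Simpson quadrature of the field integrand."""
    h = (b - a) / n
    e_rho = e_z = 0.0
    for i in range(n + 1):
        zp = a + i * h
        w = (1 if i in (0, n) else (4 if i % 2 else 2)) * h / 3
        d3 = (rho**2 + zp**2) ** 1.5
        e_rho += w * rho / d3
        e_z += w * (-zp) / d3
    return e_rho, e_z

rho, a, b = 0.8, -2.0, 5.0
for num, ref in zip(E_quad(rho, a, b), E_closed(rho, a, b)):
    assert abs(num - ref) < 1e-9

# Infinite line limit: rho * E_rho -> 2, i.e. E -> (lambda_0/2 pi eps_0)/rho.
e_rho, _ = E_closed(1.0, -1e6, 1e6)
assert abs(e_rho - 2.0) < 1e-5
```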

Finite line charge potential. Wrong but illuminating.

Again, putting the field point at \( z = 0 \), we have

\begin{equation}\label{eqn:lineCharge:180}
\phi(\rho)
= \frac{\lambda_0}{4 \pi \epsilon_0} \int_a^b \frac{dz’}{\lr{\rho^2 + (z’)^2}^{1/2}},
\end{equation}

which integrates to
\begin{equation}\label{eqn:lineCharge:260}
\phi(\rho)
= \frac{\lambda_0}{4 \pi \epsilon_0 }
\ln \frac{ b + \sqrt{ \rho^2 + b^2 }}{ a + \sqrt{\rho^2 + a^2}}.
\end{equation}

With \( b = -a = \Delta l/2 \), this approaches

\begin{equation}\label{eqn:lineCharge:200}
\phi
\approx
\frac{\lambda_0}{4 \pi \epsilon_0 }
\ln \frac{ (\Delta l/2) }{ \rho^2/2\Abs{\Delta l/2}}
=
\frac{-2 \lambda_0}{4 \pi \epsilon_0 } \ln \rho
+
\frac{\lambda_0}{4 \pi \epsilon_0 }
\ln \lr{ (\Delta l)^2/2 }.
\end{equation}

Before \( \Delta l \) is allowed to tend to infinity, this is identical (up to a difference in the reference potential) to \ref{eqn:lineCharge:80} found using Gauss’s law. It is, strictly speaking, singular when \( \Delta l \rightarrow \infty \), so it does not seem right to use infinity as a reference point for the potential.
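Numerically, the singular reference point shows up exactly as described: potential differences converge to the \( -2 \ln \rho \) form, while the potential itself grows with the length of the segment. A small Python sketch (with \( \lambda_0/4\pi\epsilon_0 \) set to one):

```python
import math

def phi(rho, a, b):
    """Closed-form segment potential, in units of lambda_0/(4 pi eps_0)."""
    return math.log((b + math.sqrt(rho**2 + b**2)) /
                    (a + math.sqrt(rho**2 + a**2)))

# For a long symmetric segment, potential *differences* approach the
# infinite-line form -2 ln(rho_1/rho_2), even though phi itself diverges
# as the segment length tends to infinity.
L = 1e4
d = phi(2.0, -L / 2, L / 2) - phi(0.5, -L / 2, L / 2)
assert abs(d + 2.0 * math.log(2.0 / 0.5)) < 1e-5

# The potential itself keeps growing with segment length: no finite reference.
assert phi(1.0, -1e6, 1e6) > phi(1.0, -1e2, 1e2)
```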

There’s another weird thing about this result. Since this has no \( z \) dependence, it is not obvious how we would recover the non-radial portion of the electric field from this potential using \( \BE = -\spacegrad \phi \). Let’s calculate the electric field from \ref{eqn:lineCharge:260} explicitly

\begin{equation}\label{eqn:lineCharge:220}
\begin{aligned}
\BE
&=
-\frac{\lambda_0}{4 \pi \epsilon_0}
\spacegrad
\ln \frac{ b + \sqrt{ \rho^2 + b^2 }}{ a + \sqrt{\rho^2 + a^2}} \\
&=
-\frac{\lambda_0 \rhocap}{4 \pi \epsilon_0 }
\PD{\rho}{}
\ln \frac{ b + \sqrt{ \rho^2 + b^2 }}{ a + \sqrt{\rho^2 + a^2}} \\
&=
-\frac{\lambda_0 \rhocap}{4 \pi \epsilon_0}
\lr{
\inv{ b + \sqrt{ \rho^2 + b^2 }} \frac{ \rho }{\sqrt{ \rho^2 + b^2 }}
-\inv{ a + \sqrt{ \rho^2 + a^2 }} \frac{ \rho }{\sqrt{ \rho^2 + a^2 }}
} \\
&=
-\frac{\lambda_0 \rhocap}{4 \pi \epsilon_0 \rho}
\lr{
\frac{ -b + \sqrt{ \rho^2 + b^2 }}{\sqrt{ \rho^2 + b^2 }}
-\frac{ -a + \sqrt{ \rho^2 + a^2 }}{\sqrt{ \rho^2 + a^2 }}
} \\
&=
\frac{\lambda_0 \rhocap}{4 \pi \epsilon_0 \rho}
\lr{
\frac{ b }{\sqrt{ \rho^2 + b^2 }}
-\frac{ a }{\sqrt{ \rho^2 + a^2 }}
}.
\end{aligned}
\end{equation}

This recovers the radial component of the field from \ref{eqn:lineCharge:240}, but where did the \( \zcap \) component go? The required potential appears to be

\begin{equation}\label{eqn:lineCharge:340}
\phi(\rho, z)
=
\frac{\lambda_0}{4 \pi \epsilon_0 }
\ln \frac{ b + \sqrt{ \rho^2 + b^2 }}{ a + \sqrt{\rho^2 + a^2}}
-
\frac{z \lambda_0}{4 \pi \epsilon_0 }
\lr{ \frac{1}{\sqrt{\rho^2 + b^2}}
-\frac{1}{\sqrt{\rho^2 + a^2}}
}.
\end{equation}

When computing the electric field \( \BE(\rho, \theta, z) \), it was convenient to pick the coordinate system so that \( z = 0 \). Doing this with the potential gives the wrong answers. The reason for this appears to be that doing so kills the potential term that is linear in \( z \) before taking its gradient, and we need that term to obtain the \( \zcap \) field component that is expected for a charge distribution that is non-symmetric about the origin on the z-axis!

Finite line charge potential. Take II.

Let the point at which the potential is evaluated be

\begin{equation}\label{eqn:lineCharge:360}
\Bx = \rho \rhocap + z \zcap,
\end{equation}

and the charge point be
\begin{equation}\label{eqn:lineCharge:380}
\Bx’ = z’ \zcap.
\end{equation}

This gives

\begin{equation}\label{eqn:lineCharge:400}
\begin{aligned}
\phi(\rho, z)
&= \frac{\lambda_0}{4\pi \epsilon_0} \int_a^b \frac{dz'}{\sqrt{\rho^2 + (z - z')^2 }} \\
&= \frac{\lambda_0}{4\pi \epsilon_0} \int_{a-z}^{b-z} \frac{du}{ \sqrt{\rho^2 + u^2} } \\
&= \frac{\lambda_0}{4\pi \epsilon_0}
\evalrange{\ln \lr{ u + \sqrt{ \rho^2 + u^2 }}}{a-z}{b-z} \\
&=
\frac{\lambda_0}{4\pi \epsilon_0}
\ln \frac
{ b-z + \sqrt{ \rho^2 + (b-z)^2 }}
{ a-z + \sqrt{ \rho^2 + (a-z)^2 }}.
\end{aligned}
\end{equation}
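As a spot check, this closed form can be compared against direct Simpson quadrature of the integral it came from (Python sketch; \( \lambda_0/4\pi\epsilon_0 \) set to one, arbitrary sample point):

```python
import math

def phi(rho, z, a, b):
    """Closed form of the potential above, in units of lambda_0/(4 pi eps_0)."""
    return math.log((b - z + math.sqrt(rho**2 + (b - z)**2)) /
                    (a - z + math.sqrt(rho**2 + (a - z)**2)))

def phi_quad(rho, z, a, b, n=20000):
    """Composite Simpson quadrature of the defining integral."""
    h = (b - a) / n
    s = 0.0
    for i in range(n + 1):
        zp = a + i * h
        w = (1 if i in (0, n) else (4 if i % 2 else 2)) * h / 3
        s += w / math.sqrt(rho**2 + (z - zp)**2)
    return s

assert abs(phi(0.7, 1.2, -3.0, 4.0) - phi_quad(0.7, 1.2, -3.0, 4.0)) < 1e-9
```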

The limit of this potential as \( a = -\Delta/2 \rightarrow -\infty, b = \Delta/2 \rightarrow \infty \) doesn’t exist in any strict sense. If we are cavalier about the limits, as in \ref{eqn:lineCharge:200}, this can be evaluated as

\begin{equation}\label{eqn:lineCharge:n}
\phi \approx
\frac{\lambda_0}{4\pi \epsilon_0} \lr{ -2 \ln \rho + \textrm{constant} }.
\end{equation}

However, the constant (\( \ln \lr{ \Delta^2/2 } \)) is infinite, so there isn’t really a good justification for using that constant as the potential reference point directly.

It seems that the “right” way to calculate the potential for the infinite distribution is to

  • Calculate the field from the potential.
  • Take the PV limit of that field with the charge distribution extending to infinity.
  • Compute the corresponding potential from this limiting value of the field.

Doing that doesn’t blow up. That field calculation, for the finite case, should include a \( \zcap \) component. To verify, let’s take the respective derivatives

\begin{equation}\label{eqn:lineCharge:420}
\begin{aligned}
-\PD{z}{} \phi
&=
-\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{ -1 + \frac{z - b}{\sqrt{ \rho^2 + (b-z)^2 }} }{
b-z + \sqrt{ \rho^2 + (b-z)^2 }
}
-
\frac{ -1 + \frac{z - a}{\sqrt{ \rho^2 + (a-z)^2 }} }{
a-z + \sqrt{ \rho^2 + (a-z)^2 }
}
} \\
&=
\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{ 1 + \frac{b - z}{\sqrt{ \rho^2 + (b-z)^2 }} }{
b-z + \sqrt{ \rho^2 + (b-z)^2 }
}
-
\frac{ 1 + \frac{a - z}{\sqrt{ \rho^2 + (a-z)^2 }} }{
a-z + \sqrt{ \rho^2 + (a-z)^2 }
}
} \\
&=
\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\inv{\sqrt{ \rho^2 + (b-z)^2 }}
-\inv{\sqrt{ \rho^2 + (a-z)^2 }}
},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:lineCharge:440}
\begin{aligned}
-\PD{\rho}{} \phi
&=
-\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{ \frac{\rho}{\sqrt{ \rho^2 + (b-z)^2 }} }{
b-z + \sqrt{ \rho^2 + (b-z)^2 }
}
-
\frac{ \frac{\rho}{\sqrt{ \rho^2 + (a-z)^2 }} }{
a-z + \sqrt{ \rho^2 + (a-z)^2 }
}
} \\
&=
-\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{\rho \lr{
-(b-z) + \sqrt{ \rho^2 + (b-z)^2 }
}}{ \rho^2 \sqrt{ \rho^2 + (b-z)^2 } }
-
\frac{\rho \lr{
-(a-z) + \sqrt{ \rho^2 + (a-z)^2 }
}}{ \rho^2 \sqrt{ \rho^2 + (a-z)^2 } }
} \\
&=
\frac{\lambda_0}{4\pi \epsilon_0 \rho}
\lr{
\frac{b-z}{\sqrt{ \rho^2 + (b-z)^2 }}
-\frac{a-z}{\sqrt{ \rho^2 + (a-z)^2 }}
}
.
\end{aligned}
\end{equation}

Putting the pieces together, the electric field is
\begin{equation}\label{eqn:lineCharge:460}
\BE =
\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{\rhocap}{\rho} \lr{
\frac{b-z}{\sqrt{ \rho^2 + (b-z)^2 }}
-\frac{a-z}{\sqrt{ \rho^2 + (a-z)^2 }}
}
+
\zcap \lr{
\inv{\sqrt{ \rho^2 + (b-z)^2 }}
-\inv{\sqrt{ \rho^2 + (a-z)^2 }}
}
}.
\end{equation}

This field has a PV limit of \ref{eqn:lineCharge:40} at \( z = 0 \) and, for the finite case, has the \( \zcap \) field component that was obtained when the field was computed by direct integration.
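To double check, a finite-difference gradient of \ref{eqn:lineCharge:400} can be compared against both components of \ref{eqn:lineCharge:460} (Python sketch; everything in units of \( \lambda_0/4 \pi \epsilon_0 \), arbitrary sample point):

```python
import math

def phi(rho, z, a, b):
    """Closed-form potential, in units of lambda_0/(4 pi eps_0)."""
    return math.log((b - z + math.sqrt(rho**2 + (b - z)**2)) /
                    (a - z + math.sqrt(rho**2 + (a - z)**2)))

def E_closed(rho, z, a, b):
    """Closed-form field components (rhocap, zcap), same units."""
    sb = math.sqrt(rho**2 + (b - z)**2)
    sa = math.sqrt(rho**2 + (a - z)**2)
    return ((1.0 / rho) * ((b - z) / sb - (a - z) / sa),
            1.0 / sb - 1.0 / sa)

rho, z, a, b, h = 1.1, 0.4, -2.0, 3.0, 1e-6
# E = -grad(phi), by central differences in rho and z.
E_rho = -(phi(rho + h, z, a, b) - phi(rho - h, z, a, b)) / (2 * h)
E_z = -(phi(rho, z + h, a, b) - phi(rho, z - h, a, b)) / (2 * h)
ref_rho, ref_z = E_closed(rho, z, a, b)
assert abs(E_rho - ref_rho) < 1e-6
assert abs(E_z - ref_z) < 1e-6
```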

Conclusions

  • We have to evaluate the potential at all points in space, not just on the axis that we evaluate the field on (should we choose to do so).
  • In this case, we found that it was not directly meaningful to take the limit of a potential distribution. We can, however, compute the field from a potential for a finite charge distribution,
    take the limit of that field, and then calculate the corresponding potential for the infinite distribution.

Is there a more robust mechanism that can be used to directly calculate the potential for an infinite charge distribution, instead of calculating the potential from the field of such an infinite distribution?

I think that where things go wrong is that the integral of \ref{eqn:lineCharge:180} does not apply to charge distributions that do not vanish on the infinite range \( z \in [-\infty, \infty] \). That solution was obtained by utilizing an all-space Green’s function, and the boundary term in that Green’s analysis was assumed to tend to zero. That isn’t the case when the charge distribution is \( \lambda_0 \delta( z ) \).

References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Jackson’s electrostatic self energy analysis

October 10, 2016 math and physics play


Motivation

I was reading my Jackson [1], which characteristically had the statement “the […] integral can easily be shown to have the value \( 4 \pi \)”, in a discussion of electrostatic energy and self energy. After a few attempts and a couple of pages of calculations, I figured out how this can be easily shown.

Context

Let me walk through the context that leads to the “easy” integral, and then the evaluation of that integral. Unlike my older copy of Jackson, I’ll do this in SI units.

The starting point is a statement that the work done (potential energy) of one charge \( q_i \) in a set of \( n \) charges, where that charge is brought to its position \( \Bx_i \) from infinity, is

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:20}
W_i = q_i \Phi(\Bx_i),
\end{equation}

where the potential energy due to the rest of the charge configuration is

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:40}
\Phi(\Bx_i) = \inv{4 \pi \epsilon} \sum_{j \ne i} \frac{q_j}{\Abs{\Bx_i - \Bx_j}}.
\end{equation}

This means that the total potential energy, making sure not to double count, to move all the charges in from infinity is

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:60}
W = \inv{4 \pi \epsilon} \sum_{1 \le i < j \le n} \frac{q_i q_j}{\Abs{\Bx_i - \Bx_j}}.
\end{equation}

This sum over all unique pairs is somewhat unwieldy, so it can be adjusted by explicitly double counting, with a corresponding divide by two

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:80}
W = \inv{2} \inv{4 \pi \epsilon} \sum_{1 \le i \ne j \le n} \frac{q_i q_j}{\Abs{\Bx_i - \Bx_j}}.
\end{equation}

The point that causes the trouble later is the continuum equivalent to this relationship, which is

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:100}
W = \inv{8 \pi \epsilon} \int \frac{\rho(\Bx) \rho(\Bx')}{\Abs{\Bx - \Bx'}} d^3 \Bx d^3 \Bx',
\end{equation}

or

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:120}
W = \inv{2} \int \rho(\Bx) \Phi(\Bx) d^3 \Bx.
\end{equation}

There’s a subtlety here that is often passed over. When the charge densities represent point charges, \( \rho(\Bx) = q \delta^3(\Bx - \Bx') \), notice that this integral equivalent is evaluated over all space, including the spaces at which the charges are located.

Ignoring that subtlety, this potential energy can be expressed in terms of the electric field, and then integrated by parts

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:140}
\begin{aligned}
W
&= \inv{2 } \int (\spacegrad \cdot (\epsilon \BE)) \Phi(\Bx) d^3 \Bx \\
&= \frac{\epsilon}{2 } \int \lr{ \spacegrad \cdot (\BE \Phi) - (\spacegrad \Phi) \cdot \BE } d^3 \Bx \\
&= \frac{\epsilon}{2 } \oint dA \ncap \cdot (\BE \Phi) + \frac{\epsilon}{2 } \int \BE \cdot \BE d^3 \Bx.
\end{aligned}
\end{equation}

The presumption is that \( \BE \Phi \) falls off as the bounds of the integration volume tend to infinity. That leaves us with an energy density proportional to the square of the field

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:160}
w = \frac{\epsilon}{2 } \BE^2.
\end{equation}

Inconsistency

It’s here that Jackson points out the inconsistency between \ref{eqn:electrostaticJacksonSelfEnergy:160} and the original
discrete analogue \ref{eqn:electrostaticJacksonSelfEnergy:80} that this was based on. The energy density is positive definite, whereas the discrete potential energy can be negative if there is a difference in the sign of the charges.

Here Jackson uses a two particle charge distribution to help resolve this conundrum. For a superposition \( \BE = \BE_1 + \BE_2 \), we have

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:180}
\BE
=
\inv{4 \pi \epsilon} \frac{q_1 (\Bx - \Bx_1)}{\Abs{\Bx - \Bx_1}^3}
+ \inv{4 \pi \epsilon} \frac{q_2 (\Bx - \Bx_2)}{\Abs{\Bx - \Bx_2}^3},
\end{equation}

so the energy density is
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:200}
w =
\frac{1}{32 \pi^2 \epsilon} \frac{q_1^2}{\Abs{\Bx - \Bx_1}^4 }
+
\frac{1}{32 \pi^2 \epsilon} \frac{q_2^2}{\Abs{\Bx - \Bx_2}^4 }
+
2 \frac{q_1 q_2}{32 \pi^2 \epsilon}
\frac{(\Bx - \Bx_1)}{\Abs{\Bx - \Bx_1}^3} \cdot
\frac{(\Bx - \Bx_2)}{\Abs{\Bx - \Bx_2}^3}.
\end{equation}

The discrete potential had only an interaction energy, whereas the potential from this squared field has an interaction energy plus two self energy terms. Those two strictly positive self energy terms are what forces this field energy positive, independent of the sign of the interaction energy density. Jackson makes a change of variables of the form

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:220}
\begin{aligned}
\Brho &= (\Bx - \Bx_1)/R \\
R &= \Abs{\Bx_1 - \Bx_2} \\
\ncap &= (\Bx_1 - \Bx_2)/R,
\end{aligned}
\end{equation}

for which we find

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:240}
\Bx = \Bx_1 + R \Brho,
\end{equation}

so
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:260}
\Bx - \Bx_2 =
\Bx_1 - \Bx_2 + R \Brho
= R (\ncap + \Brho),
\end{equation}

and
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:280}
d^3 \Bx = R^3 d^3 \Brho,
\end{equation}

so the total interaction energy is
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:300}
\begin{aligned}
W_{\textrm{int}}
&=
\frac{q_1 q_2}{16 \pi^2 \epsilon}
\int d^3 \Bx
\frac{(\Bx - \Bx_1)}{\Abs{\Bx - \Bx_1}^3} \cdot
\frac{(\Bx - \Bx_2)}{\Abs{\Bx - \Bx_2}^3} \\
&=
\frac{q_1 q_2}{16 \pi^2 \epsilon}
\int R^3 d^3 \Brho
\frac{ R \Brho }{ R^3 \Abs{\Brho}^3 } \cdot
\frac{R (\ncap + \Brho)}{R^3 \Abs{\ncap + \Brho}^3} \\
&=
\frac{q_1 q_2}{16 \pi^2 \epsilon R}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}.
\end{aligned}
\end{equation}

Evaluating this integral is what Jackson calls easy. The technique required is to express the integrand in terms of gradients in the \( \Brho \) coordinate system

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:320}
\begin{aligned}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}
&=
\int d^3 \Brho
\lr{ - \spacegrad_\Brho \inv{\Abs{\Brho}} }
\cdot
\lr{ - \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} } \\
&=
\int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\Brho}} }
\cdot
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }.
\end{aligned}
\end{equation}

I found it somewhat non-trivial to find the exact form of the chain rule that is required to simplify this integral, but after some trial and error, figured it out by working backwards from
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:340}
\spacegrad_\Brho^2 \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}
=
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\Brho}} \spacegrad_\Brho \inv{ \Abs{\ncap + \Brho} } }
+
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\ncap + \Brho}} \spacegrad_\Brho \inv{ \Abs{\Brho} } }.
\end{equation}
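This is just the product-rule expansion of the Laplacian, \( \spacegrad^2 (f g) = f \spacegrad^2 g + g \spacegrad^2 f + 2 (\spacegrad f) \cdot (\spacegrad g) \), with the right hand side grouped as the two divergences. A finite-difference spot check, with \( f = 1/\Abs{\Brho} \), \( g = 1/\Abs{\ncap + \Brho} \), and \( \ncap = \zcap \) (Python sketch; the evaluation point is an arbitrary nonsingular choice):

```python
import math

def f(p):
    # f = 1/|rho|
    return 1.0 / math.sqrt(p[0]**2 + p[1]**2 + p[2]**2)

def g(p):
    # g = 1/|zhat + rho|
    return 1.0 / math.sqrt(p[0]**2 + p[1]**2 + (p[2] + 1.0)**2)

def lap(func, p, h=1e-4):
    """Central-difference Laplacian."""
    s = 0.0
    for i in range(3):
        q = list(p); q[i] += h; up = func(q)
        q = list(p); q[i] -= h; dn = func(q)
        s += (up - 2.0 * func(p) + dn) / h**2
    return s

def dot_grads(p, h=1e-4):
    """Central-difference grad f . grad g."""
    s = 0.0
    for i in range(3):
        qp = list(p); qp[i] += h
        qm = list(p); qm[i] -= h
        s += ((f(qp) - f(qm)) / (2 * h)) * ((g(qp) - g(qm)) / (2 * h))
    return s

p = [0.7, -0.4, 0.9]
lhs = lap(lambda q: f(q) * g(q), p)
rhs = f(p) * lap(g, p) + g(p) * lap(f, p) + 2.0 * dot_grads(p)
assert abs(lhs - rhs) < 1e-3
# Away from the singular points both f and g are harmonic.
assert abs(lap(f, p)) < 1e-3
```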

In integral form this is
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:360}
\begin{aligned}
\oint dA’ \ncap’ \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}
&=
\int d^3 \Brho’
\spacegrad_{\Brho'} \cdot \lr{ \inv{\Abs{\Brho' - \ncap}} \spacegrad_{\Brho'} \inv{ \Abs{\Brho'} } }
+
\int d^3 \Brho
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\ncap + \Brho}} \spacegrad_\Brho \inv{ \Abs{\Brho} } } \\
&=
\int d^3 \Brho’
\lr{ \spacegrad_{\Brho'} \inv{\Abs{\Brho' - \ncap} } \cdot \spacegrad_{\Brho'} \inv{ \Abs{\Brho'} } }
+
\int d^3 \Brho’
\inv{\Abs{\Brho' - \ncap}} \spacegrad_{\Brho'}^2 \inv{ \Abs{\Brho'} } \\
&+
\int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}}} \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} }
+
\int d^3 \Brho
\inv{\Abs{\ncap + \Brho}} \spacegrad_\Brho^2 \inv{ \Abs{\Brho} } \\
&=
2 \int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}}} \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} } \\
&- 4 \pi
\int d^3 \Brho'
\inv{\Abs{\Brho' - \ncap}} \delta^3(\Brho')
- 4 \pi
\int d^3 \Brho
\inv{\Abs{\Brho + \ncap}} \delta^3(\Brho) \\
&=
2 \int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}}} \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} }
- 8 \pi.
\end{aligned}
\end{equation}

This used the Laplacian representation of the delta function \( \delta^3(\Bx) = -(1/4\pi) \spacegrad^2 (1/\Abs{\Bx}) \). Back-substitution gives

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:380}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}
=
4 \pi
+
\oint dA’ \ncap’ \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}.
\end{equation}

We can argue that this last integral tends to zero, since

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:400}
\begin{aligned}
\oint dA’ \ncap’ \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}
&=
\oint dA’ \ncap’ \cdot \lr{
\lr{ \spacegrad_\Brho \inv{ \Abs{\Brho}} } \inv{\Abs{\ncap + \Brho}}
+
\inv{ \Abs{\Brho}} \lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }
} \\
&=
-\oint dA’ \ncap’ \cdot \lr{
\frac{ \Brho }{ \Abs{\Brho}^3 } \inv{\Abs{\ncap + \Brho}}
+
\inv{ \Abs{\Brho}} \frac{ (\Brho + \ncap) }{ \Abs{\ncap + \Brho}^3 }
} \\
&=
-\oint dA’ \inv{\Abs{\Brho} \Abs{\Brho + \ncap}}
\lr{
\frac{ \ncap’ \cdot \Brho }{
{\Abs{\Brho}}^2 }
+\frac{ \ncap’ \cdot (\Brho + \ncap) }{
{\Abs{\Brho + \ncap}}^2 }
}.
\end{aligned}
\end{equation}

The integrand in this surface integral is \( O(1/\rho^3) \), so it tends to zero on an infinite surface in the \( \Brho \) coordinate system. This completes the “easy” integral, leaving

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:420}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}
=
4 \pi.
\end{equation}
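One way to sanity check this \( 4 \pi \) value numerically (the reduction here is mine, not Jackson’s): pick \( \ncap = \zcap \), which loses no generality since only \( \Abs{\ncap} = 1 \) enters, and work in spherical coordinates in \( \Brho \). The azimuthal integral contributes \( 2 \pi \), leaving \( 2 \pi \int_0^\infty dr \int_{-1}^1 du \, (u + r)/(1 + 2 r u + r^2)^{3/2} \), with \( u = \cos\theta \). The inner integral can be evaluated numerically (a Python sketch):

```python
# Angular integral J(r) = int_{-1}^{1} (u + r) / (1 + 2 r u + r^2)^(3/2) du,
# by composite Simpson quadrature.
def inner(r, n=4000):
    h = 2.0 / n
    s = 0.0
    for i in range(n + 1):
        u = -1.0 + i * h
        w = (1 if i in (0, n) else (4 if i % 2 else 2)) * h / 3
        s += w * (u + r) / (1.0 + 2.0 * r * u + r * r) ** 1.5
    return s

# The angular integral is a step: 0 inside the unit sphere, 2/r^2 outside.
assert abs(inner(0.5)) < 1e-6
assert abs(inner(2.0) - 2.0 / 2.0**2) < 1e-6
```

Since the angular integral vanishes for \( r < 1 \) and equals \( 2/r^2 \) for \( r > 1 \), the radial integral contributes \( \int_1^\infty 2 \, dr/r^2 = 2 \), for a total of \( 2 \pi \times 2 = 4 \pi \).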

The total field energy can now be expressed as a sum of the self energies and the interaction energy
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:440}
W =
\frac{1}{32 \pi^2 \epsilon} \int d^3 \Bx \frac{q_1^2}{\Abs{\Bx - \Bx_1}^4 }
+
\frac{1}{32 \pi^2 \epsilon} \int d^3 \Bx \frac{q_2^2}{\Abs{\Bx - \Bx_2}^4 }
+ \inv{ 4 \pi \epsilon}
\frac{q_1 q_2}{\Abs{\Bx_1 - \Bx_2} }.
\end{equation}

The interaction energy is exactly the potential energy for the two particles, but this total energy in the field is biased in the positive direction by the pair of self energies. It is interesting that the energy obtained from integrating the field energy density contains such self energy terms, but I don’t know exactly what to make of them at this point in time.

References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Helmholtz theorem

October 1, 2016 math and physics play


This is a problem from ece1228. I attempted solutions in a number of ways: one using Geometric Algebra, one devoid of that algebra, and then this method, which combined aspects of both. Of the three methods I tried to obtain this result, this is the most compact and elegant. It does, however, require a fair bit of Geometric Algebra knowledge, including the Fundamental Theorem of Geometric Calculus, as detailed in [1], [3] and [2].

Question: Helmholtz theorem

Prove the first Helmholtz theorem, i.e. if vector \(\BM\) is defined by its divergence

\begin{equation}\label{eqn:helmholtzDerviationMultivector:20}
\spacegrad \cdot \BM = s
\end{equation}

and its curl
\begin{equation}\label{eqn:helmholtzDerviationMultivector:40}
\spacegrad \cross \BM = \BC
\end{equation}

within a region and its normal component \( \BM_{\textrm{n}} \) over the boundary, then \( \BM \) is
uniquely specified.

Answer

The gradient of the vector \( \BM \) can be written as a single even grade multivector

\begin{equation}\label{eqn:helmholtzDerviationMultivector:60}
\spacegrad \BM
= \spacegrad \cdot \BM + I \spacegrad \cross \BM
= s + I \BC.
\end{equation}

We will use this to attempt to discover the relation between the vector \( \BM \) and its divergence and curl. We can express \( \BM \) at the point of interest as a convolution with the delta function at all other points in space

\begin{equation}\label{eqn:helmholtzDerviationMultivector:80}
\BM(\Bx) = \int_V dV' \delta(\Bx - \Bx') \BM(\Bx').
\end{equation}

The Laplacian representation of the delta function in \R{3} is

\begin{equation}\label{eqn:helmholtzDerviationMultivector:100}
\delta(\Bx - \Bx') = -\inv{4\pi} \spacegrad^2 \inv{\Abs{\Bx - \Bx'}},
\end{equation}

so \( \BM \) can be represented as the following convolution

\begin{equation}\label{eqn:helmholtzDerviationMultivector:120}
\BM(\Bx) = -\inv{4\pi} \int_V dV' \spacegrad^2 \inv{\Abs{\Bx - \Bx'}} \BM(\Bx').
\end{equation}

Using this relation and proceeding with a few applications of the chain rule, plus the fact that \( \spacegrad 1/\Abs{\Bx - \Bx'} = -\spacegrad' 1/\Abs{\Bx - \Bx'} \), we find

\begin{equation}\label{eqn:helmholtzDerviationMultivector:720}
\begin{aligned}
-4 \pi \BM(\Bx)
&= \int_V dV' \spacegrad^2 \inv{\Abs{\Bx - \Bx'}} \BM(\Bx') \\
&= \gpgradeone{\int_V dV' \spacegrad^2 \inv{\Abs{\Bx - \Bx'}} \BM(\Bx')} \\
&= -\gpgradeone{\int_V dV' \spacegrad \lr{ \spacegrad' \inv{\Abs{\Bx - \Bx'}}} \BM(\Bx')} \\
&= -\gpgradeone{\spacegrad \int_V dV' \lr{
\spacegrad' \frac{\BM(\Bx')}{\Abs{\Bx - \Bx'}}
-\frac{\spacegrad' \BM(\Bx')}{\Abs{\Bx - \Bx'}}
} } \\
&=
-\gpgradeone{\spacegrad \int_{\partial V} dA'
\ncap \frac{\BM(\Bx')}{\Abs{\Bx - \Bx'}}
}
+\gpgradeone{\spacegrad \int_V dV'
\frac{s(\Bx') + I\BC(\Bx')}{\Abs{\Bx - \Bx'}}
} \\
&=
-\gpgradeone{\spacegrad \int_{\partial V} dA'
\ncap \frac{\BM(\Bx')}{\Abs{\Bx - \Bx'}}
}
+\spacegrad \int_V dV'
\frac{s(\Bx')}{\Abs{\Bx - \Bx'}}
+\spacegrad \cdot \int_V dV'
\frac{I\BC(\Bx')}{\Abs{\Bx - \Bx'}}.
\end{aligned}
\end{equation}

By inserting a no-op grade selection operation in the second step, the trivector terms that would show up in subsequent steps are automatically filtered out. This leaves us with a boundary term dependent on the surface and the normal and tangential components of \( \BM \). Added to that is a pair of volume integrals that provide the unique dependence of \( \BM \) on its divergence and curl. When the surface is taken to infinity, which requires \( \Abs{\BM}/\Abs{\Bx - \Bx'} \rightarrow 0 \), the dependence of \( \BM \) on its divergence and curl is unique.

In order to express the final result in traditional vector algebra form, a couple of transformations are required. The first is that

\begin{equation}\label{eqn:helmholtzDerviationMultivector:800}
\gpgradeone{ \Ba I \Bb } = I^2 \Ba \cross \Bb = -\Ba \cross \Bb.
\end{equation}
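This identity can be spot checked with a tiny numeric model of \( \textrm{Cl}(3,0) \), encoding basis blades as bitmasks (a Python sketch; the sample vectors are arbitrary):

```python
# Minimal Cl(3,0) sketch: a multivector is a dict mapping basis-blade
# bitmasks (bit i set means a factor of e_{i+1}) to coefficients.
def reorder_sign(a, b):
    """Sign from reordering the blade product a b into canonical form."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count('1')
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(x, y):
    """Geometric product of two multivectors (Euclidean metric)."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            blade = ba ^ bb
            out[blade] = out.get(blade, 0.0) + reorder_sign(ba, bb) * ca * cb
    return out

def vec(v):
    return {1: v[0], 2: v[1], 4: v[2]}

def grade(x, k):
    return {b: c for b, c in x.items() if bin(b).count('1') == k}

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

I = {7: 1.0}  # pseudoscalar e1 e2 e3
a, b = (1.0, 2.0, -0.5), (0.5, -1.0, 3.0)
g1 = grade(gp(gp(vec(a), I), vec(b)), 1)
cx = cross(a, b)
for blade, comp in zip((1, 2, 4), cx):
    # <a I b>_1 = -(a cross b), component by component.
    assert abs(g1.get(blade, 0.0) + comp) < 1e-12
```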

For the grade selection in the boundary integral, note that

\begin{equation}\label{eqn:helmholtzDerviationMultivector:740}
\begin{aligned}
\gpgradeone{ \spacegrad \ncap \BX }
&=
\gpgradeone{ \spacegrad (\ncap \cdot \BX) }
+
\gpgradeone{ \spacegrad (\ncap \wedge \BX) } \\
&=
\spacegrad (\ncap \cdot \BX)
+
\gpgradeone{ \spacegrad I (\ncap \cross \BX) } \\
&=
\spacegrad (\ncap \cdot \BX)
-
\spacegrad \cross (\ncap \cross \BX).
\end{aligned}
\end{equation}

These give

\begin{equation}\label{eqn:helmholtzDerviationMultivector:721}
\boxed{
\begin{aligned}
\BM(\Bx)
&=
\spacegrad \inv{4\pi} \int_{\partial V} dA' \ncap \cdot \frac{\BM(\Bx')}{\Abs{\Bx - \Bx'}}
-
\spacegrad \cross \inv{4\pi} \int_{\partial V} dA' \ncap \cross \frac{\BM(\Bx')}{\Abs{\Bx - \Bx'}} \\
&-\spacegrad \inv{4\pi} \int_V dV'
\frac{s(\Bx')}{\Abs{\Bx - \Bx'}}
+\spacegrad \cross \inv{4\pi} \int_V dV'
\frac{\BC(\Bx')}{\Abs{\Bx - \Bx'}}.
\end{aligned}
}
\end{equation}

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[3] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21:221–231, 2011. URL http://arxiv.org/abs/0809.4526.

Does the divergence and curl uniquely determine the vector?

September 30, 2016 math and physics play


A problem posed in the ece1228 problem set was the following

Helmholtz theorem.

Prove the first Helmholtz theorem, i.e. if vector \(\BM\) is defined by its divergence

\begin{equation}\label{eqn:emtProblemSet1Problem5:20}
\spacegrad \cdot \BM = s
\end{equation}

and its curl
\begin{equation}\label{eqn:emtProblemSet1Problem5:40}
\spacegrad \cross \BM = \BC
\end{equation}

within a region and its normal component \( \BM_{\textrm{n}} \) over the boundary, then \( \BM \) is uniquely specified.

Solution.

This problem screams for an attempt using Geometric Algebra techniques, since
the gradient of this vector can be written as a single even grade multivector

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:60}
\begin{aligned}
\spacegrad \BM
&= \spacegrad \cdot \BM + I \spacegrad \cross \BM \\
&= s + I \BC.
\end{aligned}
\end{equation}

Observe that the Laplacian of \( \BM \) is vector valued

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:400}
\spacegrad^2 \BM
= \spacegrad s + I \spacegrad \BC.
\end{equation}

This means that \( \spacegrad \BC \) must be a bivector \( \spacegrad \BC = \spacegrad \wedge \BC \), or that \( \BC \) has zero divergence

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:420}
\spacegrad \cdot \BC = 0.
\end{equation}

This required constraint on \( \BC \) will show up in subsequent analysis. An equivalent problem to the one posed
is to show that the even grade multivector equation \( \spacegrad \BM = s + I \BC \) has an inverse given the constraint
specified by \ref{eqn:emtProblemSet1Problem5AppendixGA:420}.

Inverting the gradient equation.

The Green’s function for the gradient can be found in [1], where it is used to generalize the Cauchy integral equations to higher dimensions.

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:80}
\begin{aligned}
G(\Bx ; \Bx') &= \inv{4 \pi} \frac{ \Bx - \Bx' }{\Abs{\Bx - \Bx'}^3} \\
\spacegrad G(\Bx, \Bx') &= \spacegrad \cdot G(\Bx, \Bx') = \delta(\Bx - \Bx') = -\spacegrad' G(\Bx, \Bx').
\end{aligned}
\end{equation}

The inversion equation is an application of the Fundamental Theorem of (Geometric) Calculus, with the gradient operating bidirectionally on the Green’s function and the vector function

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:100}
\begin{aligned}
\oint_{\partial V} G(\Bx, \Bx') d^2 \Bx' \BM(\Bx')
&=
\int_V G(\Bx, \Bx') d^3 \Bx' \lrspacegrad' \BM(\Bx') \\
&=
\int_V d^3 \Bx' (G(\Bx, \Bx') \lspacegrad') \BM(\Bx')
+
\int_V d^3 \Bx' G(\Bx, \Bx') (\spacegrad' \BM(\Bx')) \\
&=
-\int_V d^3 \Bx' \delta(\Bx - \Bx') \BM(\Bx')
+
\int_V d^3 \Bx' G(\Bx, \Bx') \lr{ s(\Bx') + I \BC(\Bx') } \\
&=
-I \BM(\Bx)
+
\inv{4 \pi} \int_V d^3 \Bx' \frac{ \Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \lr{ s(\Bx') + I \BC(\Bx') }.
\end{aligned}
\end{equation}

The integrals are in terms of the primed coordinates so that the end result is a function of \( \Bx \). To rearrange for \( \BM \), let \( d^3 \Bx’ = I dV’ \), and \( d^2 \Bx’ \ncap(\Bx’) = I dA’ \), then right multiply with the pseudoscalar \( I \), noting that in \R{3} the pseudoscalar commutes with any grades

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:440}
\begin{aligned}
\BM(\Bx)
&=
I \oint_{\partial V} G(\Bx, \Bx') I dA' \ncap \BM(\Bx')
-
I \inv{4 \pi} \int_V I dV' \frac{ \Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \lr{ s(\Bx') + I \BC(\Bx') } \\
&=
-\oint_{\partial V} dA' G(\Bx, \Bx') \ncap \BM(\Bx')
+
\inv{4 \pi} \int_V dV' \frac{ \Bx - \Bx'}{ \Abs{\Bx - \Bx'}^3 } \lr{ s(\Bx') + I \BC(\Bx') }.
\end{aligned}
\end{equation}

This can be decomposed into a vector and a trivector equation. Let \( \Br = \Bx - \Bx' = r \rcap \), and note that

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:500}
\begin{aligned}
\gpgradeone{ \rcap I \BC }
&=
\gpgradeone{ I \rcap \BC } \\
&=
I \rcap \wedge \BC \\
&=
-\rcap \cross \BC,
\end{aligned}
\end{equation}

so this pair of equations can be written as

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:520}
\begin{aligned}
\BM(\Bx)
&=
-\inv{4 \pi} \oint_{\partial V} dA’ \frac{\gpgradeone{ \rcap \ncap \BM(\Bx’) }}{r^2}
+
\inv{4 \pi} \int_V dV' \lr{
\frac{\rcap}{r^2} s(\Bx') -
\frac{\rcap}{r^2} \cross \BC(\Bx') } \\
0
&=
-\inv{4 \pi} \oint_{\partial V} dA’ \frac{\rcap}{r^2} \wedge \ncap \wedge \BM(\Bx’)
+
\frac{I}{4 \pi} \int_V dV’ \frac{ \rcap \cdot \BC(\Bx’) }{r^2}.
\end{aligned}
\end{equation}

Trivector grades.

Consider the last integral in the pseudoscalar equation above. Since we expect no pseudoscalar components, this must be zero, or cancel perfectly. It’s not obvious that this is the case, but a transformation to a surface integral shows the constraints required for that to be the case. To do so note

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:540}
\begin{aligned}
\spacegrad \inv{\Abs{\Bx - \Bx'}}
&= -\spacegrad' \inv{\Abs{\Bx - \Bx'}} \\
&=
-\frac{\Bx - \Bx'}{\Abs{\Bx - \Bx'}^3} \\
&= -\frac{\rcap}{r^2}.
\end{aligned}
\end{equation}

Using this and the chain rule we have

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:560}
\begin{aligned}
\frac{I}{4 \pi} \int_V dV' \frac{ \rcap \cdot \BC(\Bx') }{r^2}
&=
\frac{I}{4 \pi} \int_V dV' \lr{ \spacegrad' \inv{ r } } \cdot \BC(\Bx') \\
&=
\frac{I}{4 \pi} \int_V dV' \spacegrad' \cdot \frac{\BC(\Bx')}{r}
-
\frac{I}{4 \pi} \int_V dV' \frac{ \spacegrad' \cdot \BC(\Bx') }{r} \\
&=
\frac{I}{4 \pi} \int_V dV' \spacegrad' \cdot \frac{\BC(\Bx')}{r} \\
&=
\frac{I}{4 \pi} \int_{\partial V} dA' \ncap(\Bx') \cdot \frac{\BC(\Bx')}{r}.
\end{aligned}
\end{equation}

The divergence of \( \BC \) above was killed by recalling the constraint \ref{eqn:emtProblemSet1Problem5AppendixGA:420}. This means that the pseudoscalar contribution can be rewritten entirely as a surface integral, which eventually reduces to a single triple product

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:580}
\begin{aligned}
0
&=
-\frac{I}{4 \pi} \oint_{\partial V} dA' \lr{
\frac{\rcap}{r^2} \cdot (\ncap \cross \BM(\Bx'))
-\ncap \cdot \frac{\BC(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA' \ncap \cdot \lr{
\frac{\rcap}{r^2} \cross \BM(\Bx')
+ \frac{\BC(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA' \ncap \cdot \lr{
\lr{ \spacegrad' \inv{r}} \cross \BM(\Bx')
+ \frac{\BC(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA' \ncap \cdot \lr{
\spacegrad' \cross \frac{\BM(\Bx')}{r}
} \\
&=
\frac{I}{4 \pi} \oint_{\partial V} dA'
\spacegrad' \cdot
\frac{\BM(\Bx') \cross \ncap}{r}.
\end{aligned}
\end{equation}

Final results.

Assembling things back into a single multivector equation, the complete inversion integral for \( \BM \) is

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:600}
\BM(\Bx)
=
\inv{4 \pi} \oint_{\partial V} dA'
\lr{
\spacegrad' \wedge
\frac{\BM(\Bx') \wedge \ncap}{r}
-\frac{\gpgradeone{ \rcap \ncap \BM(\Bx') }}{r^2}
}
+
\inv{4 \pi} \int_V dV' \lr{
\frac{\rcap}{r^2} s(\Bx') -
\frac{\rcap}{r^2} \cross \BC(\Bx') }.
\end{equation}

This shows that vector \( \BM \) can be recovered uniquely from \( s, \BC \) when \( \Abs{\BM}/r^2 \) vanishes on an infinite surface. If we restrict attention to a finite surface, we have to add to the fixed solution a specific solution that depends on the value of \( \BM \) on that surface. The vector portion of that surface integrand contains

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:640}
\begin{aligned}
\gpgradeone{ \rcap \ncap \BM }
&=
\rcap (\ncap \cdot \BM )
+
\rcap \cdot (\ncap \wedge \BM ) \\
&=
\rcap (\ncap \cdot \BM )
+
(\rcap \cdot \ncap) \BM
-
(\rcap \cdot \BM ) \ncap.
\end{aligned}
\end{equation}
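As a sanity check, this grade selection maps to ordinary vector algebra via \( a \cdot (b \wedge c) = (a \cdot b) c - (a \cdot c) b = -a \cross (b \cross c) \), which is easy to spot check numerically. A minimal sketch, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 3))

# Left contraction of a vector onto a bivector, expressed in
# ordinary vector algebra: a . (b ^ c) = (a.b) c - (a.c) b = -a x (b x c).
lhs = np.dot(a, b) * c - np.dot(a, c) * b
rhs = -np.cross(a, np.cross(b, c))
assert np.allclose(lhs, rhs)
```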

The constraints required by a zero triple product \( \spacegrad' \cdot (\BM(\Bx') \cross \ncap(\Bx')) \) are complicated on such a general finite surface. Consider instead, for simplicity, the case of a spherical surface, which can be analyzed more easily. In that case the outward normal of the surface centred on the test charge point \( \Bx \) is \( \ncap = -\rcap \). The pseudoscalar integrand is not generally killed unless the divergence of its tangential component on this surface is zero. One way that this can occur is for \( \BM \cross \ncap = 0 \), so that \( -\gpgradeone{ \rcap \ncap \BM } = \BM = (\BM \cdot \ncap) \ncap = \BM_{\textrm{n}} \).

This gives

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:620}
\BM(\Bx)
=
\inv{4 \pi} \oint_{\Abs{\Bx - \Bx'} = r} dA' \frac{\BM_{\textrm{n}}(\Bx')}{r^2}
+
\inv{4 \pi} \int_V dV' \lr{
\frac{\rcap}{r^2} s(\Bx') +
\BC(\Bx') \cross \frac{\rcap}{r^2} },
\end{equation}

or, in terms of potential functions, which is arguably tidier

\begin{equation}\label{eqn:emtProblemSet1Problem5AppendixGA:300}
\boxed{
\BM(\Bx)
=
\inv{4 \pi} \oint_{\Abs{\Bx - \Bx'} = r} dA' \frac{\BM_{\textrm{n}}(\Bx')}{r^2}
-\spacegrad \int_V dV' \frac{ s(\Bx')}{ 4 \pi r }
+\spacegrad \cross \int_V dV' \frac{ \BC(\Bx') }{ 4 \pi r }.
}
\end{equation}
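This inversion can be spot checked numerically, at least for the curl free case. Here is a coarse sketch, assuming numpy: take \( \BM = \spacegrad e^{-r^2} \), so that \( \BC = 0 \) and \( s = \spacegrad^2 e^{-r^2} = (4 r^2 - 6) e^{-r^2} \), and evaluate the volume integral on a grid (the surface term vanishes for this decaying field, and a generous tolerance absorbs the midpoint-rule error near the singular kernel):

```python
import numpy as np

# Grid of cell centers over [-4, 4]^3; the field decays like exp(-r^2),
# so the truncation error is negligible.
h = 0.1
c = np.arange(-4 + h / 2, 4, h)
X, Y, Z = np.meshgrid(c, c, c, indexing="ij")
r2 = X**2 + Y**2 + Z**2
s = (4 * r2 - 6) * np.exp(-r2)      # s = div M = laplacian of exp(-r^2)

# Evaluate (1/4 pi) int s(x') (x - x')/|x - x'|^3 dV' at a test point.
x0 = np.array([0.5, 0.0, 0.0])
dx, dy, dz = x0[0] - X, x0[1] - Y, x0[2] - Z
d3 = (dx**2 + dy**2 + dz**2) ** 1.5
Mx_num = np.sum(s * dx / d3) * h**3 / (4 * np.pi)

# x component of M = grad exp(-r^2) at x0
Mx_exact = -2 * x0[0] * np.exp(-x0[0]**2)
assert abs(Mx_num - Mx_exact) < 0.1 * abs(Mx_exact)
```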

Commentary

I attempted this problem in three different ways. My first approach (above) assembled the divergence and curl relations into a single (Geometric Algebra) multivector gradient equation and applied the vector valued Green's function for the gradient to invert that equation. That approach logically led from the differential equation for \( \BM \) to the solution for \( \BM \) in terms of \( s \) and \( \BC \). However, this strategy introduced some complexities that make me doubt the correctness of the associated boundary analysis.

Even if the details of the boundary handling in my multivector approach are not correct, I thought that approach was interesting enough to share.

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

Green’s function for the gradient in Euclidean spaces.

September 26, 2016 math and physics play


In [1] it is stated that the Green’s function for the gradient is

\begin{equation}\label{eqn:gradientGreensFunction:20}
G(x, x') = \inv{S_n} \frac{x - x'}{\Abs{x-x'}^n},
\end{equation}

where \( n \) is the dimension of the space, \( S_n \) is the area of the unit sphere, and
\begin{equation}\label{eqn:gradientGreensFunction:40}
\grad G = \grad \cdot G = \delta(x - x').
\end{equation}

What I’d like to do here is verify that this Green’s function operates as asserted. Here, as in some parts of the text, I am following a convention where vectors are written without boldface.

Let’s start with checking that the gradient of the Green’s function is zero everywhere that \( x \ne x’ \)

\begin{equation}\label{eqn:gradientGreensFunction:100}
\begin{aligned}
\spacegrad \inv{\Abs{x - x'}^n}
&=
-\frac{n}{2} \frac{e^\nu \partial_\nu (x_\mu - x_\mu')(x^\mu - {x^\mu}')}{\Abs{x - x'}^{n+2}} \\
&=
-\frac{n}{2} 2 \frac{e^\nu (x_\mu - x_\mu') \delta_\nu^\mu }{\Abs{x - x'}^{n+2}} \\
&=
-n \frac{ x - x'}{\Abs{x - x'}^{n+2}}.
\end{aligned}
\end{equation}
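This derivative is easy to spot check against central differences. A quick sketch, assuming numpy is available, working in \R{3}:

```python
import numpy as np

def f(x, xp, n):
    # 1/|x - x'|^n
    return 1.0 / np.linalg.norm(x - xp) ** n

def grad_fd(x, xp, n, h=1e-6):
    # central-difference estimate of the gradient with respect to x
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (f(x + e, xp, n) - f(x - e, xp, n)) / (2 * h)
    return g

x = np.array([1.0, -0.5, 2.0])
xp = np.array([0.2, 0.3, -1.0])
for n in (1, 2, 3):
    d = x - xp
    analytic = -n * d / np.linalg.norm(d) ** (n + 2)
    assert np.allclose(grad_fd(x, xp, n), analytic, atol=1e-6)
```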

This means that we have, everywhere that \( x \ne x’ \)

\begin{equation}\label{eqn:gradientGreensFunction:120}
\begin{aligned}
\spacegrad \cdot G
&=
\inv{S_n} \lr{ \frac{\spacegrad \cdot \lr{x - x'}}{\Abs{x - x'}^{n}} + \lr{ \spacegrad \inv{\Abs{x - x'}^{n}} } \cdot \lr{ x - x'} } \\
&=
\inv{S_n} \lr{ \frac{n}{\Abs{x - x'}^{n}} + \lr{ -n \frac{x - x'}{\Abs{x - x'}^{n+2} } } \cdot \lr{ x - x'} } \\
&= 0.
\end{aligned}
\end{equation}
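A numeric spot check of this zero divergence, using central differences at a point away from \( x' \) in \R{3} (a sketch, assuming numpy):

```python
import numpy as np

S3 = 4 * np.pi  # area of the unit sphere in R^3

def G(x, xp):
    d = x - xp
    return d / (S3 * np.linalg.norm(d) ** 3)

def div_fd(x, xp, h=1e-5):
    # central-difference divergence of G with respect to x
    total = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        total += (G(x + e, xp)[i] - G(x - e, xp)[i]) / (2 * h)
    return total

x = np.array([0.7, -1.1, 0.4])
xp = np.array([-0.3, 0.2, 1.5])
assert abs(div_fd(x, xp)) < 1e-8
```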

Next, consider the curl of the Green’s function. Zero curl will mean that we have \( \grad G = \grad \cdot G = G \lgrad \).

\begin{equation}\label{eqn:gradientGreensFunction:140}
\begin{aligned}
S_n (\grad \wedge G)
&=
\frac{\grad \wedge (x-x')}{\Abs{x - x'}^{n}}
+
\lr{ \grad \inv{\Abs{x - x'}^{n}} } \wedge (x-x') \\
&=
\frac{\grad \wedge (x-x')}{\Abs{x - x'}^{n}}
- n
\frac{x - x'}{\Abs{x - x'}^{n+2}} \wedge (x-x') \\
&=
\frac{\grad \wedge (x-x')}{\Abs{x - x'}^{n}}.
\end{aligned}
\end{equation}

However,

\begin{equation}\label{eqn:gradientGreensFunction:160}
\begin{aligned}
\grad \wedge (x-x’)
&=
\grad \wedge x \\
&=
e^\mu \wedge e_\nu \partial_\mu x^\nu \\
&=
e^\mu \wedge e_\nu \delta_\mu^\nu \\
&=
e^\mu \wedge e_\mu.
\end{aligned}
\end{equation}

For any metric where \( e_\mu \propto e^\mu \), which is the case for the spaces of physical interest (i.e. \R{3} and Minkowski space), each \( e^\mu \wedge e_\mu = 0 \), so \( \grad \wedge G \) is zero.
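The vanishing curl can be spot checked the same way in \R{3}, again with central differences (a sketch, assuming numpy):

```python
import numpy as np

S3 = 4 * np.pi  # area of the unit sphere in R^3

def G(x, xp):
    d = x - xp
    return d / (S3 * np.linalg.norm(d) ** 3)

def curl_fd(x, xp, h=1e-5):
    # central-difference Jacobian J[i, j] = dG_i/dx_j, then the curl
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (G(x + e, xp) - G(x - e, xp)) / (2 * h)
    return np.array([J[2, 1] - J[1, 2],
                     J[0, 2] - J[2, 0],
                     J[1, 0] - J[0, 1]])

x = np.array([0.7, -1.1, 0.4])
xp = np.array([-0.3, 0.2, 1.5])
assert np.allclose(curl_fd(x, xp), 0.0, atol=1e-8)
```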

Having shown that the gradient of the (presumed) Green's function is zero everywhere that \( x \ne x' \), the guts of the
demonstration can now proceed. We wish to evaluate the gradient weighted convolution of the Green's function using the Fundamental Theorem of (Geometric) Calculus. Here the gradient acts bidirectionally on both the Green's function and the test function. Working in primed coordinates so that the final result is in terms of the unprimed, we have

\begin{equation}\label{eqn:gradientGreensFunction:60}
\int_V G(x,x’) d^n x’ \lrgrad’ F(x’)
= \int_{\partial V} G(x,x’) d^{n-1} x’ F(x’).
\end{equation}

Let \( d^n x' = dV' I \), \( d^{n-1} x' n = dA' I \), where \( n = n(x') \) is the outward normal to the area element \( d^{n-1} x' \). From this point on, let's restrict attention to Euclidean spaces, where \( n^2 = 1 \). In that case

\begin{equation}\label{eqn:gradientGreensFunction:80}
\begin{aligned}
\int_V dV’ G(x,x’) \lrgrad’ F(x’)
&=
\int_V dV’ \lr{G(x,x’) \lgrad’} F(x’)
+
\int_V dV’ G(x,x’) \lr{ \rgrad’ F(x’) } \\
&= \int_{\partial V} dA’ G(x,x’) n F(x’).
\end{aligned}
\end{equation}

Here, the pseudoscalar \( I \) has been factored out by commuting it with \( G \), using \( G I = (-1)^{n-1} I G \), and then pre-multiplication with \( 1/((-1)^{n-1} I ) \).

Each of these integrals can be considered in sequence. A convergence bound is required of the multivector test function \( F(x’) \) on the infinite surface \( \partial V \). Since it’s true that

\begin{equation}\label{eqn:gradientGreensFunction:180}
\Abs{ \int_{\partial V} dA' G(x,x') n F(x') }
\le
\int_{\partial V} dA' \Abs{ G(x,x') n F(x') },
\end{equation}

then it is sufficient to require that

\begin{equation}\label{eqn:gradientGreensFunction:200}
\lim_{x’ \rightarrow \infty} \Abs{ \frac{x -x’}{\Abs{x – x’}^n} n(x’) F(x’) } \rightarrow 0,
\end{equation}

in order to kill off the surface integral. Evaluating the surface integral on a hypersphere centred on \( x \), where \( x' - x = n \Abs{x - x'} \), this is equivalent to requiring

\begin{equation}\label{eqn:gradientGreensFunction:260}
\lim_{x’ \rightarrow \infty} \frac{ \Abs{F(x’)}}{\Abs{x – x’}^{n-1}} \rightarrow 0.
\end{equation}

Given such a constraint, that leaves

\begin{equation}\label{eqn:gradientGreensFunction:220}
\int_V dV’ \lr{G(x,x’) \lgrad’} F(x’)
=
-\int_V dV’ G(x,x’) \lr{ \rgrad’ F(x’) }.
\end{equation}

The LHS is zero everywhere that \( x \ne x’ \) so it can be restricted to a spherical ball around \( x \), which allows the test function \( F \) to be pulled out of the integral, and a second application of the Fundamental Theorem to be applied.

\begin{equation}\label{eqn:gradientGreensFunction:240}
\begin{aligned}
\int_V dV' \lr{G(x,x') \lgrad'} F(x')
&=
\lim_{\epsilon \rightarrow 0}
\int_{\Abs{x - x'} < \epsilon} dV' \lr{G(x,x') \lgrad'} F(x') \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} I^{-1} \int_{\Abs{x - x'} < \epsilon} I dV' \lr{G(x,x') \lgrad'} } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} (-1)^{n-1} I^{-1} \int_{\Abs{x - x'} < \epsilon} G(x,x') d^n x' \lgrad' } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} (-1)^{n-1} I^{-1} \int_{\Abs{x - x'} = \epsilon} G(x,x') d^{n-1} x' } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} (-1)^{n-1} I^{-1} \int_{\Abs{x - x'} = \epsilon} G(x,x') dA' I n } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} \int_{\Abs{x - x'} = \epsilon} dA' G(x,x') n } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} \int_{\Abs{x - x'} = \epsilon} dA' \frac{\epsilon (-n)}{S_n \epsilon^n} n } F(x) \\
&=
-\lim_{\epsilon \rightarrow 0} \frac{F(x)}{S_n \epsilon^{n-1}} \int_{\Abs{x - x'} = \epsilon} dA' \\
&=
-\lim_{\epsilon \rightarrow 0} \frac{F(x)}{S_n \epsilon^{n-1}} S_n \epsilon^{n-1} \\
&= -F(x).
\end{aligned}
\end{equation}

This essentially calculates the divergence integral around an infinitesimal hypersphere, without assuming that the gradient commutes with the Green's function in this infinitesimal region. So, provided the test function is constrained by \ref{eqn:gradientGreensFunction:260}, we have

\begin{equation}\label{eqn:gradientGreensFunction:280}
F(x) = \int_V dV' G(x,x') \lr{ \grad' F(x') }.
\end{equation}

In particular, should we have a first order gradient equation

\begin{equation}\label{eqn:gradientGreensFunction:300}
\spacegrad' F(x') = M(x'),
\end{equation}

the inverse of this equation is given by

\begin{equation}\label{eqn:gradientGreensFunction:320}
\boxed{
F(x) = \int_V dV' G(x,x') M(x').
}
\end{equation}

Note that the sign of the Green's function is explicitly tied to the definition of the convolution integral that is used. This is important since the conventions for the sign of the Green's function or the parameters in the convolution integral often vary.
What's cool about this result is that it applies not only to gradient equations in Euclidean spaces, but also to multivector (or even just vector) fields \( F \), instead of the scalar functions to which Green's functions are usually applied.

Example: Electrostatics

As a check of the sign consider the electrostatics equation

\begin{equation}\label{eqn:gradientGreensFunction:380}
\spacegrad \BE = \frac{\rho}{\epsilon_0},
\end{equation}

for which we have after substitution into \ref{eqn:gradientGreensFunction:320}
\begin{equation}\label{eqn:gradientGreensFunction:400}
\BE(\Bx) = \inv{4 \pi \epsilon_0} \int_V dV’ \frac{\Bx – \Bx’}{\Abs{\Bx – \Bx’}^3} \rho(\Bx’).
\end{equation}

This matches the sign found in a trusted reference such as [2].

Future thought.

Does this Green’s function also work for mixed metric spaces? If so, in such a metric, what does it mean to
calculate the surface area of a unit sphere in a mixed signature space?

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

PHY1520H Graduate Quantum Mechanics. Lecture 1: Lightning review. Taught by Prof. Arun Paramekanti

September 17, 2015 phy1520


Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 1 content.

Classical mechanics

We'll be talking about one body physics for most of this course. In classical mechanics we can figure out the particle trajectories using both of \( (\Br, \Bp) \), where

\begin{equation}\label{eqn:qmLecture1:20}
\begin{aligned}
\ddt{\Br} &= \inv{m} \Bp \\
\ddt{\Bp} &= -\spacegrad V.
\end{aligned}
\end{equation}

The two dimensional phase space sketched in fig. 1 shows the trajectory of a point particle subject to such equations of motion


fig. 1. One dimensional classical phase space example
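Such a trajectory is easy to generate numerically. A sketch for the harmonic oscillator (assuming numpy, with \( m = k = 1 \) so that \( V = r^2/2 \)), using a symplectic leapfrog step so that the phase space ellipse is preserved:

```python
import numpy as np

def step(r, p, dt):
    # leapfrog update for dr/dt = p, dp/dt = -dV/dr with V = r^2/2
    p_half = p - 0.5 * dt * r
    r_new = r + dt * p_half
    p_new = p_half - 0.5 * dt * r_new
    return r_new, p_new

r, p = 1.0, 0.0
dt = 0.01
for _ in range(int(2 * np.pi / dt)):   # roughly one period
    r, p = step(r, p, dt)

# the energy (and hence the phase-space ellipse) is conserved
assert abs(0.5 * (r**2 + p**2) - 0.5) < 1e-4
```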

Quantum mechanics

For this lecture, we’ll work with natural units, setting

\begin{equation}\label{eqn:qmLecture1:480}
\boxed{
\Hbar = 1.
}
\end{equation}

In QM we are no longer allowed to think in terms of a definite position and momentum, but have to start asking about state vectors \( \ket{\Psi} \).

We’ll consider the state vector with respect to some basis, for example, in a position basis, we write

\begin{equation}\label{eqn:qmLecture1:40}
\braket{ x }{\Psi } = \Psi(x),
\end{equation}

a complex-valued “wave function”, the probability amplitude for a particle in \( \ket{\Psi} \) to be in the vicinity of \( x \).

We could also consider the state in a momentum basis

\begin{equation}\label{eqn:qmLecture1:60}
\braket{ p }{\Psi } = \Psi(p),
\end{equation}

a probability amplitude with respect to momentum \( p \).

More precisely,

\begin{equation}\label{eqn:qmLecture1:80}
\Abs{\Psi(x)}^2 dx \ge 0
\end{equation}

is the probability of finding the particle in the range \( (x, x + dx ) \). To have meaning as a probability, we require

\begin{equation}\label{eqn:qmLecture1:100}
\int_{-\infty}^\infty \Abs{\Psi(x)}^2 dx = 1.
\end{equation}

The average position can be calculated using this probability density function. For example

\begin{equation}\label{eqn:qmLecture1:120}
\expectation{x} = \int_{-\infty}^\infty \Abs{\Psi(x)}^2 x dx,
\end{equation}

or
\begin{equation}\label{eqn:qmLecture1:140}
\expectation{f(x)} = \int_{-\infty}^\infty \Abs{\Psi(x)}^2 f(x) dx.
\end{equation}

Similarly, calculation of an average of a function of momentum can be expressed as

\begin{equation}\label{eqn:qmLecture1:160}
\expectation{f(p)} = \int_{-\infty}^\infty \Abs{\Psi(p)}^2 f(p) dp.
\end{equation}

Transformation from a position to momentum basis

We have a problem if we wish to compute an average in momentum space, such as \( \expectation{p} \), when given a wavefunction \( \Psi(x) \).

How do we convert

\begin{equation}\label{eqn:qmLecture1:180}
\Psi(p)
\stackrel{?}{\leftrightarrow}
\Psi(x),
\end{equation}

or equivalently
\begin{equation}\label{eqn:qmLecture1:200}
\braket{p}{\Psi}
\stackrel{?}{\leftrightarrow}
\braket{x}{\Psi}.
\end{equation}

Such a conversion can be performed by virtue of the assumption that we have a complete orthonormal basis, for which we can introduce identity operations such as

\begin{equation}\label{eqn:qmLecture1:220}
\int_{-\infty}^\infty dp \ket{p}\bra{p} = 1,
\end{equation}

or
\begin{equation}\label{eqn:qmLecture1:240}
\int_{-\infty}^\infty dx \ket{x}\bra{x} = 1.
\end{equation}

Some interpretations:

  1. \( \ket{x_0} \leftrightarrow \text{sits at } x = x_0 \)
  2. \( \braket{x}{x'} \leftrightarrow \delta(x - x') \)
  3. \( \braket{p}{p'} \leftrightarrow \delta(p - p') \)
  4. \( \braket{x}{p'} = \frac{e^{i p' x}}{\sqrt{V}} \), where \( V \) is the volume of the box containing the particle. We'll define the appropriate normalization for an infinite box volume later.

The delta function interpretation of the braket \( \braket{p}{p’} \) justifies the identity operator, since we recover any state in the basis when operating with it. For example, in momentum space

\begin{equation}\label{eqn:qmLecture1:260}
\begin{aligned}
1 \ket{p}
&=
\lr{ \int_{-\infty}^\infty dp’
\ket{p’}\bra{p’} }
\ket{p} \\
&=
\int_{-\infty}^\infty dp’
\ket{p’}
\braket{p’}{p} \\
&=
\int_{-\infty}^\infty dp’
\ket{p’}
\delta(p – p’) \\
&=
\ket{p}.
\end{aligned}
\end{equation}
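A finite dimensional analogue of this completeness relation is the unitarity of the discrete Fourier transform basis, where both the orthonormality \( \braket{p}{p'} = \delta_{p p'} \) and the resolution of identity \( \sum_p \ket{p}\bra{p} = 1 \) can be verified directly (a sketch, assuming numpy):

```python
import numpy as np

# Discrete plane-wave basis u_p[x] = exp(2 pi i p x / N)/sqrt(N);
# the columns of U play the role of the kets |p>.
N = 16
x = np.arange(N)
U = np.exp(2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)

assert np.allclose(U.conj().T @ U, np.eye(N))   # <p|p'> = delta
assert np.allclose(U @ U.conj().T, np.eye(N))   # sum_p |p><p| = 1
```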

This also allows the determination of an integral operator representation for the delta function

\begin{equation}\label{eqn:qmLecture1:500}
\begin{aligned}
\delta(x – x’)
&=
\braket{x}{x’} \\
&=
\int dp \braket{x}{p} \braket{p}{x’} \\
&=
\inv{V} \int dp e^{i p x} e^{-i p x’},
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:qmLecture1:520}
\delta(x – x’)
=
\inv{V} \int dp e^{i p (x- x’)}.
\end{equation}

Here we used the fact that \( \braket{p}{x} = \braket{x}{p}^\conj \).

FIXME: do we have a justification for that conjugation with what was defined here so far?
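This delta function representation can also be explored numerically: truncating the momentum integral at \( \Abs{p} < L \) (taking \( V = 2 \pi \)) gives the kernel \( \sin(L(x - x'))/(\pi (x - x')) \), which has unit area and concentrates at \( x = x' \) as \( L \) grows. A sketch, assuming numpy:

```python
import numpy as np

def kernel(u, L):
    # (1/2 pi) int_{-L}^{L} exp(i p u) dp = sin(L u)/(pi u),
    # written via np.sinc(t) = sin(pi t)/(pi t) to handle u = 0
    return np.sinc(L * u / np.pi) * L / np.pi

x = np.linspace(-20, 20, 200001)
dx = x[1] - x[0]
for L in (10.0, 100.0):
    area = np.sum(kernel(x, L)) * dx
    assert abs(area - 1.0) < 1e-2    # unit area, delta-function-like
```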

The conversion from a position basis to momentum space is now possible

\begin{equation}\label{eqn:qmLecture1:280}
\begin{aligned}
\braket{p}{\Psi}
&= \Psi(p) \\
&= \int_{-\infty}^\infty \braket{p}{x} \braket{x}{\Psi} dx \\
&= \int_{-\infty}^\infty \frac{e^{-ip x}}{\sqrt{V}} \Psi(x) dx.
\end{aligned}
\end{equation}

The momentum space to position space conversion can be written as

\begin{equation}\label{eqn:qmLecture1:300}
\Psi(x)
= \int_{-\infty}^\infty \frac{e^{ip x}}{\sqrt{V}} \Psi(p) dp.
\end{equation}

Now we can go back and figure out an expectation

\begin{equation}\label{eqn:qmLecture1:320}
\begin{aligned}
\expectation{p}
&=
\int \Psi^\conj(p) \Psi(p) p d p \\
&=
\int dp
\lr{
\int_{-\infty}^\infty \frac{e^{ip x}}{\sqrt{V}} \Psi^\conj(x) dx
}
\lr{
\int_{-\infty}^\infty \frac{e^{-ip x’}}{\sqrt{V}} \Psi(x’) dx’
}
p \\
&=\int dp dx dx’
\Psi^\conj(x)
\inv{V} e^{ip (x-x’)} \Psi(x’) p \\
&=
\int dp dx dx’
\Psi^\conj(x)
\inv{V} \lr{ -i\PD{x}{e^{ip (x-x’)}} }\Psi(x’) \\
&=
\int dp dx
\Psi^\conj(x) \lr{ -i \PD{x}{} }
\inv{V} \int dx’ e^{ip (x-x’)} \Psi(x’) \\
&=
\int dx
\Psi^\conj(x) \lr{ -i \PD{x}{} }
\int dx’ \lr{ \inv{V} \int dp e^{ip (x-x’)} } \Psi(x’) \\
&=
\int dx
\Psi^\conj(x) \lr{ -i \PD{x}{} }
\int dx’ \delta(x – x’) \Psi(x’) \\
&=
\int dx
\Psi^\conj(x) \lr{ -i \PD{x}{} }
\Psi(x).
\end{aligned}
\end{equation}

Here we’ve essentially calculated the position space representation of the momentum operator, allowing identifications of the following form

\begin{equation}\label{eqn:qmLecture1:380}
p \leftrightarrow -i \PD{x}{}
\end{equation}
\begin{equation}\label{eqn:qmLecture1:400}
p^2 \leftrightarrow - \PDSq{x}{}.
\end{equation}
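These identifications can be spot checked numerically for a Gaussian wavepacket with momentum \( k_0 \), comparing \( \expectation{p} \) computed in the position representation against the momentum representation (an FFT based sketch, assuming numpy):

```python
import numpy as np

# Gaussian wavepacket psi(x) ~ exp(-x^2/(4 sigma^2) + i k0 x), <p> = k0
N, Lbox = 4096, 40.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma, k0 = 1.0, 2.5
psi = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# position representation: <p> = int psi* (-i d/dx) psi dx
p_position = np.real(np.sum(np.conj(psi) * (-1j) * np.gradient(psi, dx)) * dx)

# momentum representation: <p> = int p |psi(p)|^2 dp
p_grid = 2 * np.pi * np.fft.fftfreq(N, d=dx)
prob_p = np.abs(np.fft.fft(psi))**2
p_momentum = np.sum(p_grid * prob_p) / np.sum(prob_p)

assert abs(p_position - k0) < 1e-3
assert abs(p_momentum - k0) < 1e-3
```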

Alternate starting point.

Most of the above results followed from the claim that \( \braket{x}{p} = e^{i p x} \). Note that this position space representation of the momentum operator can also be taken as the starting point. Given that, the exponential representation of the position-momentum braket follows

\begin{equation}\label{eqn:qmLecture1:540}
\bra{x} P \ket{p}
=
-i \Hbar \PD{x}{} \braket{x}{p},
\end{equation}

but \( \bra{x} P \ket{p} = p \braket{x}{p} \), providing a differential equation for \( \braket{x}{p} \)

\begin{equation}\label{eqn:qmLecture1:560}
p \braket{x}{p} = -i \Hbar \PD{x}{} \braket{x}{p},
\end{equation}

with solution

\begin{equation}\label{eqn:qmLecture1:580}
i p x/\Hbar = \ln \braket{x}{p} + \text{const},
\end{equation}

or
\begin{equation}\label{eqn:qmLecture1:600}
\braket{x}{p} \propto e^{i p x/\Hbar}.
\end{equation}

Matrix interpretation

  1. Kets \( \ket{\Psi} \leftrightarrow \text{column vector} \)
  2. Bras \( \bra{\Psi} \leftrightarrow {(\text{row vector})}^\conj \)
  3. Operators \( \leftrightarrow \) matrices that act on vectors.

\begin{equation}\label{eqn:qmLecture1:420}
\hat{p} \ket{\Psi} \rightarrow \ket{\Psi’}
\end{equation}

Time evolution

For a state subject to the equations of motion given by the Hamiltonian operator \( \hat{H} \)

\begin{equation}\label{eqn:qmLecture1:440}
i \PD{t}{} \ket{\Psi} = \hat{H} \ket{\Psi},
\end{equation}

the time evolution is given by
\begin{equation}\label{eqn:qmLecture1:460}
\ket{\Psi(t)} = e^{-i \hat{H} t} \ket{\Psi(0)}.
\end{equation}
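For a time independent \( \hat{H} \) this exponential can be evaluated directly from the eigendecomposition, which makes a quick numeric check of unitarity and the composition property easy. A two level sketch, assuming numpy:

```python
import numpy as np

# Two-level Hermitian Hamiltonian (hbar = 1)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
E, V = np.linalg.eigh(H)

def U(t):
    # U(t) = exp(-i H t), built from the eigendecomposition of H
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = U(0.7) @ psi0

# evolution is unitary: the norm of the state is preserved
assert abs(np.linalg.norm(psi_t) - 1.0) < 1e-12
# and U(t1) U(t2) = U(t1 + t2)
assert np.allclose(U(0.3) @ U(0.4), U(0.7))
```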

Incomplete information

We'll need to introduce the concept of density matrices. This will bring us to concepts like entanglement.

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.