Line charge field and potential.

October 26, 2016 math and physics play

When computing the most general solution of the electrostatic potential in a plane, Jackson [1] mentions that \( -2 \lambda_0 \ln \rho \) is the well known potential for an infinite line charge (up to the unit specific factor). Checking that statement, since I didn’t recall what that potential was offhand, I encountered some inconsistencies and non-convergent integrals, and thought it was worthwhile to explore those a bit more carefully. This will be done here.

Using Gauss’s law.

For an infinite length line charge, we can find the radial field contribution using Gauss’s law, imagining a cylinder of length \( \Delta l \) and radius \( \rho \) surrounding this charge with the midpoint at the origin. Ignoring any non-radial field contribution, we have

\begin{equation}\label{eqn:lineCharge:20}
\int_{-\Delta l/2}^{\Delta l/2} \ncap \cdot \BE (2 \pi \rho) dl = \frac{\lambda_0}{\epsilon_0} \Delta l,
\end{equation}

or

\begin{equation}\label{eqn:lineCharge:40}
\BE = \frac{\lambda_0}{2 \pi \epsilon_0} \frac{\rhocap}{\rho}.
\end{equation}

Since

\begin{equation}\label{eqn:lineCharge:60}
\frac{\rhocap}{\rho} = \spacegrad \ln \rho,
\end{equation}

this means that the potential is

\begin{equation}\label{eqn:lineCharge:80}
\phi = -\frac{2 \lambda_0}{4 \pi \epsilon_0} \ln \rho.
\end{equation}
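
As a quick sanity check of \ref{eqn:lineCharge:60}, the gradient of \( \ln \rho \) is easy to verify symbolically in Cartesian coordinates. Here is a minimal Python (sympy) sketch of that check; the script and its variable names are mine, and are not part of the original derivation.

# Verify grad(ln rho) = rhocap/rho, with rho = sqrt(x^2 + y^2).
import sympy as sp

x, y = sp.symbols('x y', real=True, positive=True)
rho = sp.sqrt(x**2 + y**2)

grad = [sp.simplify(sp.diff(sp.log(rho), v)) for v in (x, y)]
expected = [x / rho**2, y / rho**2]   # rhocap/rho = (x, y)/rho^2

assert all(sp.simplify(g - e) == 0 for g, e in zip(grad, expected))
print(grad)  # [x/(x**2 + y**2), y/(x**2 + y**2)]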

Finite line charge potential.

Let’s try both these calculations for a finite charge distribution. Gauss’s law loses its usefulness here, but we can evaluate the integrals directly. For the electric field

\begin{equation}\label{eqn:lineCharge:100}
\BE
= \frac{\lambda_0}{4 \pi \epsilon_0} \int \frac{(\Bx – \Bx’)}{\Abs{\Bx – \Bx’}^3} dl’.
\end{equation}

Using cylindrical coordinates with the field point \( \Bx = \rho \rhocap \) for convenience, the charge point \( \Bx' = z' \zcap \), and the charge distributed over \( [a,b] \), this is

\begin{equation}\label{eqn:lineCharge:120}
\BE
= \frac{\lambda_0}{4 \pi \epsilon_0} \int_a^b \frac{(\rho \rhocap – z’ \zcap)}{\lr{\rho^2 + (z’)^2}^{3/2}} dz’.
\end{equation}

When the charge is uniformly distributed around the origin \( [a,b] = b[-1,1] \), the \( \zcap \) component of this field is killed because the integrand is odd. This justifies ignoring such contributions in the Gaussian cylinder analysis above. The general solution to this integral is found to be

\begin{equation}\label{eqn:lineCharge:140}
\BE
=
\frac{\lambda_0}{4 \pi \epsilon_0}
\evalrange{
\lr{
\frac{z’ \rhocap }{\rho \sqrt{ \rho^2 + (z’)^2 } }
+\frac{\zcap}{ \sqrt{ \rho^2 + (z’)^2 } }
}
}{a}{b},
\end{equation}

or
\begin{equation}\label{eqn:lineCharge:240}
\boxed{
\BE
=
\frac{\lambda_0}{4 \pi \epsilon_0}
\lr{
\frac{\rhocap }{\rho}
\lr{
\frac{b}{\sqrt{ \rho^2 + b^2 } }
-\frac{a}{\sqrt{ \rho^2 + a^2 } }
}
+ \zcap
\lr{
\frac{1}{ \sqrt{ \rho^2 + b^2 } }
-\frac{1}{ \sqrt{ \rho^2 + a^2 } }
}
}.
}
\end{equation}

When \( b = -a = \Delta l/2 \), this reduces to

\begin{equation}\label{eqn:lineCharge:160}
\BE
=
\frac{\lambda_0}{4 \pi \epsilon_0}
\frac{\rhocap }{\rho}
\frac{\Delta l}{\sqrt{ \rho^2 + (\Delta l/2)^2 } },
\end{equation}

which further reduces to \ref{eqn:lineCharge:40} when \( \Delta l \gg \rho \).
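
The boxed result \ref{eqn:lineCharge:240} is also easy to spot check numerically against direct quadrature of \ref{eqn:lineCharge:120}. A small Python (scipy) sketch of such a check, with the \( \lambda_0/4 \pi \epsilon_0 \) factor set to one and all names my own, might look like

# Compare direct numerical integration of the finite line charge field
# against the closed form, for the radial and z components separately.
import numpy as np
from scipy.integrate import quad

def E_direct(rho, a, b):
    # integrand of eqn (120), with lambda_0/(4 pi eps_0) = 1
    Erho = quad(lambda zp: rho / (rho**2 + zp**2)**1.5, a, b)[0]
    Ez   = quad(lambda zp: -zp / (rho**2 + zp**2)**1.5, a, b)[0]
    return Erho, Ez

def E_closed(rho, a, b):
    # boxed result, eqn (240)
    Erho = (b / np.sqrt(rho**2 + b**2) - a / np.sqrt(rho**2 + a**2)) / rho
    Ez   = 1 / np.sqrt(rho**2 + b**2) - 1 / np.sqrt(rho**2 + a**2)
    return Erho, Ez

for rho, a, b in [(0.5, -1.0, 2.0), (2.0, -3.0, 3.0)]:
    print(np.allclose(E_direct(rho, a, b), E_closed(rho, a, b)))  # True, True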

Finite line charge potential. Wrong but illuminating.

Again, putting the field point at \( z = 0 \), we have

\begin{equation}\label{eqn:lineCharge:180}
\phi(\rho)
= \frac{\lambda_0}{4 \pi \epsilon_0} \int_a^b \frac{dz’}{\lr{\rho^2 + (z’)^2}^{1/2}},
\end{equation}

which integrates to
\begin{equation}\label{eqn:lineCharge:260}
\phi(\rho)
= \frac{\lambda_0}{4 \pi \epsilon_0 }
\ln \frac{ b + \sqrt{ \rho^2 + b^2 }}{ a + \sqrt{\rho^2 + a^2}}.
\end{equation}

With \( b = -a = \Delta l/2 \), this approaches

\begin{equation}\label{eqn:lineCharge:200}
\phi
\approx
\frac{\lambda_0}{4 \pi \epsilon_0 }
\ln \frac{ (\Delta l/2) }{ \rho^2/2\Abs{\Delta l/2}}
=
\frac{-2 \lambda_0}{4 \pi \epsilon_0 } \ln \rho
+
\frac{\lambda_0}{4 \pi \epsilon_0 }
\ln \lr{ (\Delta l)^2/2 }.
\end{equation}

Before \( \Delta l \) is allowed to tend to infinity, this is identical (up to a difference in the reference potential) to \ref{eqn:lineCharge:80} found using Gauss’s law. It is, strictly speaking, singular when \( \Delta l \rightarrow \infty \), so it does not seem right to use infinity as a reference point for the potential.
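
One way to see that only the reference constant misbehaves is to look at potential differences, which stay finite as \( \Delta l \) grows and tend to the \( -2 \ln (\rho_1/\rho_2) \) value implied by \ref{eqn:lineCharge:80}. A little Python sketch of that (units scaled so \( \lambda_0/4\pi\epsilon_0 = 1 \), names assumed):

# Potential difference between two radial positions for a symmetric
# segment [-L/2, L/2], compared against the infinite line charge value
# -2 ln(rho1/rho2), with lambda_0/(4 pi eps_0) = 1.
import numpy as np

def phi(rho, L):
    a, b = -L / 2, L / 2
    return np.log((b + np.sqrt(rho**2 + b**2)) / (a + np.sqrt(rho**2 + a**2)))

rho1, rho2 = 1.0, 3.0
for L in (10.0, 100.0, 1000.0):
    print(L, phi(rho1, L) - phi(rho2, L))   # -> -2*log(1/3) ~ 2.1972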

There’s another weird thing about this result. Since it has no \( z \) dependence, it is not obvious how we would recover the non-radial portion of the electric field from this potential using \( \BE = -\spacegrad \phi \). Let’s calculate the electric field from \ref{eqn:lineCharge:260} explicitly

\begin{equation}\label{eqn:lineCharge:220}
\begin{aligned}
\BE
&=
-\frac{\lambda_0}{4 \pi \epsilon_0}
\spacegrad
\ln \frac{ b + \sqrt{ \rho^2 + b^2 }}{ a + \sqrt{\rho^2 + a^2}} \\
&=
-\frac{\lambda_0 \rhocap}{4 \pi \epsilon_0 }
\PD{\rho}{}
\ln \frac{ b + \sqrt{ \rho^2 + b^2 }}{ a + \sqrt{\rho^2 + a^2}} \\
&=
-\frac{\lambda_0 \rhocap}{4 \pi \epsilon_0}
\lr{
\inv{ b + \sqrt{ \rho^2 + b^2 }} \frac{ \rho }{\sqrt{ \rho^2 + b^2 }}
-\inv{ a + \sqrt{ \rho^2 + a^2 }} \frac{ \rho }{\sqrt{ \rho^2 + a^2 }}
} \\
&=
-\frac{\lambda_0 \rhocap}{4 \pi \epsilon_0 \rho}
\lr{
\frac{ -b + \sqrt{ \rho^2 + b^2 }}{\sqrt{ \rho^2 + b^2 }}
-\frac{ -a + \sqrt{ \rho^2 + a^2 }}{\sqrt{ \rho^2 + a^2 }}
} \\
&=
\frac{\lambda_0 \rhocap}{4 \pi \epsilon_0 \rho}
\lr{
\frac{ b }{\sqrt{ \rho^2 + b^2 }}
-\frac{ a }{\sqrt{ \rho^2 + a^2 }}
}.
\end{aligned}
\end{equation}

This recovers the radial component of the field from \ref{eqn:lineCharge:240}, but where did the \( \zcap \) component go? The required potential appears to be

\begin{equation}\label{eqn:lineCharge:340}
\phi(\rho, z)
=
\frac{\lambda_0}{4 \pi \epsilon_0 }
\ln \frac{ b + \sqrt{ \rho^2 + b^2 }}{ a + \sqrt{\rho^2 + a^2}}
-
\frac{z \lambda_0}{4 \pi \epsilon_0 }
\lr{ \frac{1}{\sqrt{\rho^2 + b^2}}
-\frac{1}{\sqrt{\rho^2 + a^2}}
}.
\end{equation}

When computing the electric field \( \BE(\rho, \theta, z) \), it was convenient to pick the coordinate system so that \( z = 0 \). Doing this with the potential gives the wrong answer. The reason appears to be that setting \( z = 0 \) kills the potential term that is linear in \( z \) before taking its gradient, and we need that term to obtain the \( \zcap \) field component that is expected for a charge distribution that is not symmetric about the origin on the z-axis!

Finite line charge potential. Take II.

Let the point at which the potential is evaluated be

\begin{equation}\label{eqn:lineCharge:360}
\Bx = \rho \rhocap + z \zcap,
\end{equation}

and the charge point be
\begin{equation}\label{eqn:lineCharge:380}
\Bx’ = z’ \zcap.
\end{equation}

This gives

\begin{equation}\label{eqn:lineCharge:400}
\begin{aligned}
\phi(\rho, z)
&= \frac{\lambda_0}{4\pi \epsilon_0} \int_a^b \frac{dz'}{\sqrt{\rho^2 + (z - z')^2 }} \\
&= \frac{\lambda_0}{4\pi \epsilon_0} \int_{a-z}^{b-z} \frac{du}{ \sqrt{\rho^2 + u^2} } \\
&= \frac{\lambda_0}{4\pi \epsilon_0}
\evalrange{\ln \lr{ u + \sqrt{ \rho^2 + u^2 }}}{a-z}{b-z} \\
&=
\frac{\lambda_0}{4\pi \epsilon_0}
\ln \frac
{ b-z + \sqrt{ \rho^2 + (b-z)^2 }}
{ a-z + \sqrt{ \rho^2 + (a-z)^2 }}.
\end{aligned}
\end{equation}

The limit of this potential as \( a = -\Delta l/2 \rightarrow -\infty, b = \Delta l/2 \rightarrow \infty \) doesn’t exist in any strict sense. If we are cavalier about the limits, as in \ref{eqn:lineCharge:200}, this can be evaluated as

\begin{equation}\label{eqn:lineCharge:n}
\phi \approx
\frac{\lambda_0}{4\pi \epsilon_0} \lr{ -2 \ln \rho + \textrm{constant} }.
\end{equation}

However, the constant (\( \ln \lr{ (\Delta l)^2/2 } \)) is infinite, so there isn’t really a good justification for using that constant as the potential reference point directly.

It seems that the “right” way to calculate the potential for the infinite distribution is to

  • Calculate the field from the potential of the finite distribution.
  • Take the PV limit of that field as the charge distribution extends to infinity.
  • Compute the corresponding potential from this limiting value of the field.

Doing that doesn’t blow up. That field calculation, for the finite case, should include a \( \zcap \) component. To verify, let’s take the respective derivatives

\begin{equation}\label{eqn:lineCharge:420}
\begin{aligned}
-\PD{z}{} \phi
&=
-\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{ -1 + \frac{z – b}{\sqrt{ \rho^2 + (b-z)^2 }} }{
b-z + \sqrt{ \rho^2 + (b-z)^2 }
}
-
\frac{ -1 + \frac{z - a}{\sqrt{ \rho^2 + (a-z)^2 }} }{
a-z + \sqrt{ \rho^2 + (a-z)^2 }
}
} \\
&=
\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{ 1 + \frac{b – z}{\sqrt{ \rho^2 + (b-z)^2 }} }{
b-z + \sqrt{ \rho^2 + (b-z)^2 }
}
-
\frac{ 1 + \frac{a - z}{\sqrt{ \rho^2 + (a-z)^2 }} }{
a-z + \sqrt{ \rho^2 + (a-z)^2 }
}
} \\
&=
\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\inv{\sqrt{ \rho^2 + (b-z)^2 }}
-\inv{\sqrt{ \rho^2 + (a-z)^2 }}
},
\end{aligned}
\end{equation}

and

\begin{equation}\label{eqn:lineCharge:440}
\begin{aligned}
-\PD{\rho}{} \phi
&=
-\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{ \frac{\rho}{\sqrt{ \rho^2 + (b-z)^2 }} }{
b-z + \sqrt{ \rho^2 + (b-z)^2 }
}
-
\frac{ \frac{\rho}{\sqrt{ \rho^2 + (a-z)^2 }} }{
a-z + \sqrt{ \rho^2 + (a-z)^2 }
}
} \\
&=
-\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{\rho \lr{
-(b-z) + \sqrt{ \rho^2 + (b-z)^2 }
}}{ \rho^2 \sqrt{ \rho^2 + (b-z)^2 } }
-
\frac{\rho \lr{
-(a-z) + \sqrt{ \rho^2 + (a-z)^2 }
}}{ \rho^2 \sqrt{ \rho^2 + (a-z)^2 } }
} \\
&=
\frac{\lambda_0}{4\pi \epsilon_0 \rho}
\lr{
\frac{b-z}{\sqrt{ \rho^2 + (b-z)^2 }}
-\frac{a-z}{\sqrt{ \rho^2 + (a-z)^2 }}
}
.
\end{aligned}
\end{equation}

Putting the pieces together, the electric field is
\begin{equation}\label{eqn:lineCharge:460}
\BE =
\frac{\lambda_0}{4\pi \epsilon_0}
\lr{
\frac{\rhocap}{\rho} \lr{
\frac{b-z}{\sqrt{ \rho^2 + (b-z)^2 }}
-\frac{a-z}{\sqrt{ \rho^2 + (a-z)^2 }}
}
+
\zcap \lr{
\inv{\sqrt{ \rho^2 + (b-z)^2 }}
-\inv{\sqrt{ \rho^2 + (a-z)^2 }}
}
}.
\end{equation}

This has a PV limit of \ref{eqn:lineCharge:40} at \( z = 0 \), and for the finite case it also has the \( \zcap \) field component that was obtained when the field was computed by direct integration.
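
That claim is also easy to verify numerically: a central difference gradient of \ref{eqn:lineCharge:400} should reproduce \ref{eqn:lineCharge:460}. A hedged Python sketch of that check (constants dropped, all names mine):

# Check E = -grad(phi) numerically for the off-origin finite line charge.
import numpy as np

a, b = -1.0, 2.0  # arbitrary charge segment endpoints

def phi(rho, z):
    return np.log((b - z + np.sqrt(rho**2 + (b - z)**2))
                  / (a - z + np.sqrt(rho**2 + (a - z)**2)))

def E_closed(rho, z):
    Erho = ((b - z) / np.sqrt(rho**2 + (b - z)**2)
            - (a - z) / np.sqrt(rho**2 + (a - z)**2)) / rho
    Ez = 1 / np.sqrt(rho**2 + (b - z)**2) - 1 / np.sqrt(rho**2 + (a - z)**2)
    return Erho, Ez

rho, z, h = 0.7, 0.3, 1e-6
E_num = (-(phi(rho + h, z) - phi(rho - h, z)) / (2 * h),
         -(phi(rho, z + h) - phi(rho, z - h)) / (2 * h))
print(np.allclose(E_num, E_closed(rho, z)))  # True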

Conclusions

  • We have to evaluate the potential at all points in space, not just on the axis on which we (optionally) choose to evaluate the field, since the gradient requires the full \( z \) dependence.
  • In this case, we found that it was not directly meaningful to take the limit of a potential distribution. We can, however, compute the field from a potential for a finite charge distribution,
    take the limit of that field, and then calculate the corresponding potential for the infinite distribution.

Is there a more robust mechanism that can be used to directly calculate the potential for an infinite charge distribution, instead of calculating the potential from the field of such an infinite distribution?

I think that where things go wrong is that the integral of \ref{eqn:lineCharge:180} does not apply to charge distributions that do not vanish at infinity on the range \( z \in [-\infty, \infty] \). That solution was obtained by utilizing an all-space Green’s function, and the boundary term in that Green’s function analysis was assumed to tend to zero. That isn’t the case for an infinite line charge along the z-axis, with density \( \lambda_0 \delta(x) \delta(y) \).

References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Jackson’s electrostatic self energy analysis

October 10, 2016 math and physics play

Motivation

I was reading my Jackson [1], which characteristically had the statement “the […] integral can easily be shown to have the value \( 4 \pi \)”, in a discussion of electrostatic energy and self energy. After a few attempts and a couple of pages of calculations, I figured out how this can be easily shown.

Context

Let me walk through the context that leads to the “easy” integral, and then the evaluation of that integral. Unlike my older copy of Jackson, I’ll do this in SI units.

The starting point is a statement that the work done (potential energy) of one charge \( q_i \) in a set of \( n \) charges, where that charge is brought to its position \( \Bx_i \) from infinity, is

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:20}
W_i = q_i \Phi(\Bx_i),
\end{equation}

where the potential energy due to the rest of the charge configuration is

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:40}
\Phi(\Bx_i) = \inv{4 \pi \epsilon} \sum_{j \ne i} \frac{q_j}{\Abs{\Bx_i - \Bx_j}}.
\end{equation}

This means that the total potential energy, making sure not to double count, to move all the charges in from infinity is

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:60}
W = \inv{4 \pi \epsilon} \sum_{1 \le i < j \le n} \frac{q_i q_j}{\Abs{\Bx_i - \Bx_j}}.
\end{equation}

This sum over all unique pairs is somewhat unwieldy, so it can be adjusted by explicitly double counting with a corresponding divide by two

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:80}
W = \inv{2} \inv{4 \pi \epsilon} \sum_{1 \le i \ne j \le n} \frac{q_i q_j}{\Abs{\Bx_i - \Bx_j}}.
\end{equation}

The point that causes the trouble later is the continuum equivalent to this relationship, which is

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:100}
W = \inv{8 \pi \epsilon} \int \frac{\rho(\Bx) \rho(\Bx')}{\Abs{\Bx - \Bx'}} d^3 \Bx d^3 \Bx',
\end{equation}

or

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:120}
W = \inv{2} \int \rho(\Bx) \Phi(\Bx) d^3 \Bx.
\end{equation}

There’s a subtlety here that is often passed over. When the charge densities represent point charges \( \rho(\Bx) = q \delta^3(\Bx - \Bx') \), this integral equivalent is evaluated over all space, including the points at which the charges themselves are located.

Ignoring that subtlety, this potential energy can be expressed in terms of the electric field, and then integrated by parts

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:140}
\begin{aligned}
W
&= \inv{2 } \int (\spacegrad \cdot (\epsilon \BE)) \Phi(\Bx) d^3 \Bx \\
&= \frac{\epsilon}{2 } \int \lr{ \spacegrad \cdot (\BE \Phi) - (\spacegrad \Phi) \cdot \BE } d^3 \Bx \\
&= \frac{\epsilon}{2 } \oint dA \ncap \cdot (\BE \Phi) + \frac{\epsilon}{2 } \int \BE \cdot \BE d^3 \Bx.
\end{aligned}
\end{equation}

The presumption is that \( \BE \Phi \) falls off as the bounds of the integration volume tend to infinity. That leaves us with an energy density proportional to the square of the field

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:160}
w = \frac{\epsilon}{2 } \BE^2.
\end{equation}

Inconsistency

It’s here that Jackson points out the inconsistency between \ref{eqn:electrostaticJacksonSelfEnergy:160} and the original
discrete analogue \ref{eqn:electrostaticJacksonSelfEnergy:80} that this was based on. The energy density is positive definite, whereas the discrete potential energy can be negative if there is a difference in the sign of the charges.

Here Jackson uses a two particle charge distribution to help resolve this conundrum. For a superposition \( \BE = \BE_1 + \BE_2 \), we have

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:180}
\BE
=
\inv{4 \pi \epsilon} \frac{q_1 (\Bx – \Bx_1)}{\Abs{\Bx – \Bx_1}^3}
+ \inv{4 \pi \epsilon} \frac{q_2 (\Bx – \Bx_2)}{\Abs{\Bx – \Bx_2}^3},
\end{equation}

so the energy density is
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:200}
w =
\frac{1}{32 \pi^2 \epsilon} \frac{q_1^2}{\Abs{\Bx – \Bx_1}^4 }
+
\frac{1}{32 \pi^2 \epsilon} \frac{q_2^2}{\Abs{\Bx – \Bx_2}^4 }
+
2 \frac{q_1 q_2}{32 \pi^2 \epsilon}
\frac{(\Bx – \Bx_1)}{\Abs{\Bx – \Bx_1}^3} \cdot
\frac{(\Bx – \Bx_2)}{\Abs{\Bx – \Bx_2}^3}.
\end{equation}

The discrete potential had only an interaction energy, whereas the potential from this squared field has an interaction energy plus two self energy terms. Those two strictly positive self energy terms are what force this field energy to be positive, independent of the sign of the interaction energy density. Jackson makes a change of variables of the form

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:220}
\begin{aligned}
\Brho &= (\Bx – \Bx_1)/R \\
R &= \Abs{\Bx_1 – \Bx_2} \\
\ncap &= (\Bx_1 – \Bx_2)/R,
\end{aligned}
\end{equation}

for which we find

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:240}
\Bx = \Bx_1 + R \Brho,
\end{equation}

so
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:260}
\Bx - \Bx_2 =
\Bx_1 - \Bx_2 + R \Brho
= R (\ncap + \Brho),
\end{equation}

and
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:280}
d^3 \Bx = R^3 d^3 \Brho,
\end{equation}

so the total interaction energy is
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:300}
\begin{aligned}
W_{\textrm{int}}
&=
\frac{q_1 q_2}{16 \pi^2 \epsilon}
\int d^3 \Bx
\frac{(\Bx – \Bx_1)}{\Abs{\Bx – \Bx_1}^3} \cdot
\frac{(\Bx – \Bx_2)}{\Abs{\Bx – \Bx_2}^3} \\
&=
\frac{q_1 q_2}{16 \pi^2 \epsilon}
\int R^3 d^3 \Brho
\frac{ R \Brho }{ R^3 \Abs{\Brho}^3 } \cdot
\frac{R (\ncap + \Brho)}{R^3 \Abs{\ncap + \Brho}^3} \\
&=
\frac{q_1 q_2}{16 \pi^2 \epsilon R}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}.
\end{aligned}
\end{equation}

Evaluating this integral is what Jackson calls easy. The technique required is to express the integrand in terms of gradients in the \( \Brho \) coordinate system

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:320}
\begin{aligned}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}
&=
\int d^3 \Brho
\lr{ – \spacegrad_\Brho \inv{\Abs{\Brho}} }
\cdot
\lr{ – \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} } \\
&=
\int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\Brho}} }
\cdot
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }.
\end{aligned}
\end{equation}

I found it somewhat non-trivial to find the exact form of the chain rule that is required to simplify this integral, but after some trial and error, figured it out by working backwards from
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:340}
\spacegrad_\Brho^2 \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}
=
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\Brho}} \spacegrad_\Brho \inv{ \Abs{\ncap + \Brho} } }
+
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\ncap + \Brho}} \spacegrad_\Brho \inv{ \Abs{\Brho} } }.
\end{equation}

In integral form this is
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:360}
\begin{aligned}
\oint dA’ \ncap’ \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}
&=
\int d^3 \Brho’
\spacegrad_{\Brho’} \cdot \lr{ \inv{\Abs{\Brho’ – \ncap}} \spacegrad_{\Brho’} \inv{ \Abs{\Brho’} } }
+
\int d^3 \Brho
\spacegrad_\Brho \cdot \lr{ \inv{\Abs{\ncap + \Brho}} \spacegrad_\Brho \inv{ \Abs{\Brho} } } \\
&=
\int d^3 \Brho’
\lr{ \spacegrad_{\Brho’} \inv{\Abs{\Brho’ – \ncap} } \cdot \spacegrad_{\Brho’} \inv{ \Abs{\Brho’} } }
+
\int d^3 \Brho’
\inv{\Abs{\Brho’ – \ncap}} \spacegrad_{\Brho’}^2 \inv{ \Abs{\Brho’} } \\
&+
\int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}}} \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} }
+
\int d^3 \Brho
\inv{\Abs{\ncap + \Brho}} \spacegrad_\Brho^2 \inv{ \Abs{\Brho} } \\
&=
2 \int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}}} \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} } \\
&- 4 \pi
\int d^3 \Brho’
\inv{\Abs{\Brho’ – \ncap}} \delta^3(\Brho’)
– 4 \pi
\int d^3 \Brho
\inv{\Abs{\Brho + \ncap}} \delta^3(\Brho) \\
&=
2 \int d^3 \Brho
\lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}}} \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} }
– 8 \pi.
\end{aligned}
\end{equation}

This used the Laplacian representation of the delta function \( \delta^3(\Bx) = -(1/4\pi) \spacegrad^2 (1/\Abs{\Bx}) \). Back-substitution gives

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:380}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}
=
4 \pi
+
\inv{2} \oint dA' \ncap' \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}.
\end{equation}

We can argue that this last integral tends to zero, since

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:400}
\begin{aligned}
\oint dA’ \ncap’ \cdot \spacegrad_\Brho \inv{ \Abs{\Brho} \Abs{\ncap + \Brho}}
&=
\oint dA’ \ncap’ \cdot \lr{
\lr{ \spacegrad_\Brho \inv{ \Abs{\Brho}} } \inv{\Abs{\ncap + \Brho}}
+
\inv{ \Abs{\Brho}} \lr{ \spacegrad_\Brho \inv{\Abs{\ncap + \Brho}} }
} \\
&=
-\oint dA’ \ncap’ \cdot \lr{
\frac{ \Brho } { \Abs{\Brho}^3 } \inv{\Abs{\ncap + \Brho}}
+
\inv{ \Abs{\Brho}} \frac{ (\Brho + \ncap) }{ \Abs{\ncap + \Brho}^3 }
} \\
&=
-\oint dA’ \inv{\Abs{\Brho} \Abs{\Brho + \ncap}}
\lr{
\frac{ \ncap’ \cdot \Brho }{
{\Abs{\Brho}}^2 }
+\frac{ \ncap’ \cdot (\Brho + \ncap) }{
{\Abs{\Brho + \ncap}}^2 }
}.
\end{aligned}
\end{equation}

The integrand in this surface integral is \( O(1/\rho^3) \), while the surface area only grows as \( O(\rho^2) \), so the integral tends to zero on an infinite surface in the \( \Brho \) coordinate system. This completes the “easy” integral, leaving

\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:420}
\int d^3 \Brho
\frac{ \Brho }{ \Abs{\Brho}^3 } \cdot
\frac{(\ncap + \Brho)}{ \Abs{\ncap + \Brho}^3}
=
4 \pi.
\end{equation}
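
For the skeptical, this value is easy to confirm numerically. Aligning \( \ncap \) with the z-axis and working in spherical coordinates in the \( \Brho \) space reduces the integral to a two dimensional quadrature. Here is a rough Python (scipy) sketch of such a check; it is my own verification, not anything from Jackson.

# Numerical check that the "easy" integral evaluates to 4 pi.  With ncap
# along the z-axis, rho . (ncap + rho) = r cos(theta) + r^2 and
# |ncap + rho|^2 = 1 + r^2 + 2 r cos(theta), so with u = cos(theta) the
# integral is 2 pi int_0^infty dr int_{-1}^{1} du (r + u)/(1 + r^2 + 2 r u)^{3/2}.
import numpy as np
from scipy.integrate import quad

def angular(r):
    f = lambda u: (r + u) / (1 + r * r + 2 * r * u) ** 1.5
    return quad(f, -1.0, 1.0, limit=200)[0]

# split the radial integral at r = 1, where the angular integral jumps
# (it works out to 0 for r < 1 and 2/r^2 for r > 1)
I = 2 * np.pi * (quad(angular, 0.0, 1.0)[0] + quad(angular, 1.0, np.inf)[0])
print(I, 4 * np.pi)  # ~12.566, 12.566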

The total field energy can now be expressed as a sum of the self energies and the interaction energy
\begin{equation}\label{eqn:electrostaticJacksonSelfEnergy:440}
W =
\frac{1}{32 \pi^2 \epsilon} \int d^3 \Bx \frac{q_1^2}{\Abs{\Bx – \Bx_1}^4 }
+
\frac{1}{32 \pi^2 \epsilon} \int d^3 \Bx \frac{q_2^2}{\Abs{\Bx – \Bx_2}^4 }
+ \inv{ 4 \pi \epsilon}
\frac{q_1 q_2}{\Abs{\Bx_1 – \Bx_2} }.
\end{equation}

The interaction energy is exactly the potential energy for the two particles, so this total energy in the field is biased in the positive direction by the pair of self energies. It is interesting that the energy obtained by integrating the field energy density contains such self energy terms, but I don’t know exactly what to make of them at this point in time.

References

[1] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

Helmholtz theorem

October 1, 2016 math and physics play

This is a problem from ece1228. I attempted solutions in a number of ways: one using Geometric Algebra, one devoid of that algebra, and then this method, which combined aspects of both. Of the three methods I tried to obtain this result, this is the most compact and elegant. It does, however, require a fair bit of Geometric Algebra knowledge, including the Fundamental Theorem of Geometric Calculus, as detailed in [1], [3] and [2].

Question: Helmholtz theorem

Prove the first Helmholtz theorem, i.e. if the vector \(\BM\) is defined by its divergence

\begin{equation}\label{eqn:helmholtzDerviationMultivector:20}
\spacegrad \cdot \BM = s
\end{equation}

and its curl
\begin{equation}\label{eqn:helmholtzDerviationMultivector:40}
\spacegrad \cross \BM = \BC
\end{equation}

within a region and its normal component \( \BM_{\textrm{n}} \) over the boundary, then \( \BM \) is
uniquely specified.

Answer

The gradient of the vector \( \BM \) can be written as a single even grade multivector

\begin{equation}\label{eqn:helmholtzDerviationMultivector:60}
\spacegrad \BM
= \spacegrad \cdot \BM + I \spacegrad \cross \BM
= s + I \BC.
\end{equation}

We will use this to attempt to discover the relation between the vector \( \BM \) and its divergence and curl. We can express \( \BM \) at the point of interest as a convolution with the delta function at all other points in space

\begin{equation}\label{eqn:helmholtzDerviationMultivector:80}
\BM(\Bx) = \int_V dV’ \delta(\Bx – \Bx’) \BM(\Bx’).
\end{equation}

The Laplacian representation of the delta function in \R{3} is

\begin{equation}\label{eqn:helmholtzDerviationMultivector:100}
\delta(\Bx – \Bx’) = -\inv{4\pi} \spacegrad^2 \inv{\Abs{\Bx – \Bx’}},
\end{equation}

so \( \BM \) can be represented as the following convolution

\begin{equation}\label{eqn:helmholtzDerviationMultivector:120}
\BM(\Bx) = -\inv{4\pi} \int_V dV’ \spacegrad^2 \inv{\Abs{\Bx – \Bx’}} \BM(\Bx’).
\end{equation}
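
As an aside, the Laplacian representation of the delta function above can be spot checked numerically by integrating it against a radial test function, moving the Laplacian onto that test function with two integrations by parts. A small Python (scipy) sketch of that check, with a test function and names of my own choosing:

# Check delta^3(x) = -(1/4 pi) grad^2 (1/|x|) against f(r) = exp(-r^2).
# Moving the Laplacian onto f, the claim becomes
#   -(1/4 pi) int (1/r) lap(f) d^3 x = f(0) = 1.
import numpy as np
from scipy.integrate import quad

lap_f = lambda r: (4 * r**2 - 6) * np.exp(-r**2)      # radial Laplacian of f
integrand = lambda r: -(1 / (4 * np.pi)) * (1 / r) * lap_f(r) * 4 * np.pi * r**2
print(quad(integrand, 0, np.inf)[0])                   # ~1.0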

Using this relation and proceeding with a few applications of the chain rule, plus the fact that \( \spacegrad 1/\Abs{\Bx – \Bx’} = -\spacegrad’ 1/\Abs{\Bx – \Bx’} \), we find

\begin{equation}\label{eqn:helmholtzDerviationMultivector:720}
\begin{aligned}
-4 \pi \BM(\Bx)
&= \int_V dV’ \spacegrad^2 \inv{\Abs{\Bx – \Bx’}} \BM(\Bx’) \\
&= \gpgradeone{\int_V dV’ \spacegrad^2 \inv{\Abs{\Bx – \Bx’}} \BM(\Bx’)} \\
&= -\gpgradeone{\int_V dV’ \spacegrad \lr{ \spacegrad’ \inv{\Abs{\Bx – \Bx’}}} \BM(\Bx’)} \\
&= -\gpgradeone{\spacegrad \int_V dV’ \lr{
\spacegrad’ \frac{\BM(\Bx’)}{\Abs{\Bx – \Bx’}}
-\frac{\spacegrad’ \BM(\Bx’)}{\Abs{\Bx – \Bx’}}
} } \\
&=
-\gpgradeone{\spacegrad \int_{\partial V} dA’
\ncap \frac{\BM(\Bx’)}{\Abs{\Bx – \Bx’}}
}
+\gpgradeone{\spacegrad \int_V dV’
\frac{s(\Bx’) + I\BC(\Bx’)}{\Abs{\Bx – \Bx’}}
} \\
&=
-\gpgradeone{\spacegrad \int_{\partial V} dA’
\ncap \frac{\BM(\Bx’)}{\Abs{\Bx – \Bx’}}
}
+\spacegrad \int_V dV’
\frac{s(\Bx’)}{\Abs{\Bx – \Bx’}}
+\spacegrad \cdot \int_V dV’
\frac{I\BC(\Bx’)}{\Abs{\Bx – \Bx’}}.
\end{aligned}
\end{equation}

By inserting a no-op grade selection operation in the second step, the trivector terms that would show up in subsequent steps are automatically filtered out. This leaves us with a boundary term dependent on the surface and the normal and tangential components of \( \BM \). Added to that is a pair of volume integrals that provide the unique dependence of \( \BM \) on its divergence and curl. When the surface is taken to infinity, which requires \( \Abs{\BM}/\Abs{\Bx - \Bx'} \rightarrow 0 \), the dependence of \( \BM \) on just its divergence and curl is unique.

In order to express the final result in traditional vector algebra form, a couple of transformations are required. The first is that

\begin{equation}\label{eqn:helmholtzDerviationMultivector:800}
\gpgradeone{ \Ba I \Bb } = I^2 \Ba \cross \Bb = -\Ba \cross \Bb.
\end{equation}

For the grade selection in the boundary integral, note that

\begin{equation}\label{eqn:helmholtzDerviationMultivector:740}
\begin{aligned}
\gpgradeone{ \spacegrad \ncap \BX }
&=
\gpgradeone{ \spacegrad (\ncap \cdot \BX) }
+
\gpgradeone{ \spacegrad (\ncap \wedge \BX) } \\
&=
\spacegrad (\ncap \cdot \BX)
+
\gpgradeone{ \spacegrad I (\ncap \cross \BX) } \\
&=
\spacegrad (\ncap \cdot \BX)
-
\spacegrad \cross (\ncap \cross \BX).
\end{aligned}
\end{equation}

These give

\begin{equation}\label{eqn:helmholtzDerviationMultivector:721}
\boxed{
\begin{aligned}
\BM(\Bx)
&=
\spacegrad \inv{4\pi} \int_{\partial V} dA' \ncap \cdot \frac{\BM(\Bx')}{\Abs{\Bx - \Bx'}}
-
\spacegrad \cross \inv{4\pi} \int_{\partial V} dA' \ncap \cross \frac{\BM(\Bx')}{\Abs{\Bx - \Bx'}} \\
&-\spacegrad \inv{4\pi} \int_V dV’
\frac{s(\Bx’)}{\Abs{\Bx – \Bx’}}
+\spacegrad \cross \inv{4\pi} \int_V dV’
\frac{\BC(\Bx’)}{\Abs{\Bx – \Bx’}}.
\end{aligned}
}
\end{equation}

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

[3] Garret Sobczyk and Omar León Sánchez. Fundamental theorem of calculus. Advances in Applied Clifford Algebras, 21:221–231, 2011. URL https://arxiv.org/abs/0809.4526.

Geometric Algebra in a nutshell.

September 29, 2016 math and physics play

Motivation

I initially thought that I might submit a problem set solution for ece1228 using Geometric Algebra. In order to justify this, I needed to add an appendix to that problem set that outlined enough of the ideas that such a solution might make sense to the grader.

I ended up changing my mind and reworked the problem entirely, removing any use of GA. Here’s the tutorial I initially considered submitting with that problem.

Geometric Algebra in a nutshell.

Geometric Algebra defines a non-commutative, associative vector product

\begin{equation}\label{eqn:gaTutorial:20}
\begin{aligned}
\Ba \Bb \Bc
&=
(\Ba \Bb) \Bc \\
&=
\Ba (\Bb \Bc),
\end{aligned}
\end{equation}

where the square of a vector equals the squared vector magnitude

\begin{equation}\label{eqn:gaTutorial:40}
\Ba^2 = \Abs{\Ba}^2.
\end{equation}

In Euclidean spaces such a squared vector is always positive, but that is not necessarily the case in the mixed signature spaces used in special relativity.

There are a number of consequences of these two simple vector multiplication rules.

  • Squared unit vectors have a unit magnitude (up to a sign). In a Euclidean space such a product is always positive

    \begin{equation}\label{eqn:gaTutorial:60}
    (\Be_1)^2 = 1.
    \end{equation}

  • Products of perpendicular vectors anticommute.

    \begin{equation}\label{eqn:gaTutorial:80}
    \begin{aligned}
    2
    &=
    (\Be_1 + \Be_2)^2 \\
    &= (\Be_1 + \Be_2)(\Be_1 + \Be_2) \\
    &= \Be_1^2 + \Be_2 \Be_1 + \Be_1 \Be_2 + \Be_2^2 \\
    &= 2 + \Be_2 \Be_1 + \Be_1 \Be_2.
    \end{aligned}
    \end{equation}

    A product of two perpendicular vectors is called a bivector, and can be used to represent an oriented plane. The last line above shows an example of a scalar and bivector sum, called a multivector. In general, Geometric Algebra allows scalars, vectors, bivectors, and higher degree analogues (grades) to be summed.

    Comparison of the RHS and LHS of \ref{eqn:gaTutorial:80} shows that we must have

    \begin{equation}\label{eqn:gaTutorial:100}
    \Be_2 \Be_1 = -\Be_1 \Be_2.
    \end{equation}

    It is true in general that the product of two perpendicular vectors anticommutes. When, as above, such a product is a product of
    two orthonormal vectors, it behaves like a non-commutative imaginary quantity, squaring to \( -1 \) in Euclidean spaces

    \begin{equation}\label{eqn:gaTutorial:120}
    \begin{aligned}
    (\Be_1 \Be_2)^2
    &=
    (\Be_1 \Be_2)
    (\Be_1 \Be_2) \\
    &=
    \Be_1 (\Be_2
    \Be_1) \Be_2 \\
    &=
    -\Be_1 (\Be_1
    \Be_2) \Be_2 \\
    &=
    -(\Be_1 \Be_1)
    (\Be_2 \Be_2) \\
    &=-1.
    \end{aligned}
    \end{equation}

    Such “imaginary” (unit bivectors) have important applications describing rotations in Euclidean spaces, and boosts in Minkowski spaces.

  • The product of three perpendicular vectors, such as

    \begin{equation}\label{eqn:gaTutorial:140}
    I = \Be_1 \Be_2 \Be_3,
    \end{equation}

    is called a trivector. In \R{3}, the product of three orthonormal vectors is called a pseudoscalar for the space, and can represent an oriented volume element. The quantity \( I \) above is the typical orientation picked for the \R{3} unit pseudoscalar. This quantity also has characteristics of an imaginary number

    \begin{equation}\label{eqn:gaTutorial:160}
    \begin{aligned}
    I^2
    &=
    (\Be_1 \Be_2 \Be_3)
    (\Be_1 \Be_2 \Be_3) \\
    &=
    \Be_1 \Be_2 (\Be_3
    \Be_1) \Be_2 \Be_3 \\
    &=
    -\Be_1 \Be_2 \Be_1
    \Be_3 \Be_2 \Be_3 \\
    &=
    -\Be_1 (\Be_2 \Be_1)
    (\Be_3 \Be_2) \Be_3 \\
    &=
    -\Be_1 (\Be_1 \Be_2)
    (\Be_2 \Be_3) \Be_3 \\
    &=
    -\Be_1^2
    \Be_2^2
    \Be_3^2 \\
    &=
    -1.
    \end{aligned}
    \end{equation}

  • The product of two vectors in \R{3} can be expressed as the sum of a symmetric scalar product and antisymmetric bivector product

    \begin{equation}\label{eqn:gaTutorial:480}
    \begin{aligned}
    \Ba \Bb
    &=
    \sum_{i,j = 1}^n \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n \Be_i^2 a_i b_i
    +
    \sum_{0 < i \ne j \le n} \Be_i \Be_j a_i b_j \\
    &=
    \sum_{i = 1}^n a_i b_i
    +
    \sum_{0 < i < j \le n} \Be_i \Be_j (a_i b_j - a_j b_i).
    \end{aligned}
    \end{equation}

    The first (symmetric) term is clearly the dot product. The antisymmetric term is designated the wedge product. In general these are written

    \begin{equation}\label{eqn:gaTutorial:500}
    \Ba \Bb = \Ba \cdot \Bb + \Ba \wedge \Bb,
    \end{equation}

    where

    \begin{equation}\label{eqn:gaTutorial:520}
    \begin{aligned}
    \Ba \cdot \Bb &\equiv \inv{2} \lr{ \Ba \Bb + \Bb \Ba } \\
    \Ba \wedge \Bb &\equiv \inv{2} \lr{ \Ba \Bb - \Bb \Ba }.
    \end{aligned}
    \end{equation}

    The coordinate expansion of both can be seen above, but in \R{3} the wedge can also be written

    \begin{equation}\label{eqn:gaTutorial:540}
    \Ba \wedge \Bb = \Be_1 \Be_2 \Be_3 (\Ba \cross \Bb) = I (\Ba \cross \Bb).
    \end{equation}

    This allows for a handy dot plus cross product expansion of the vector product

    \begin{equation}\label{eqn:gaTutorial:180}
    \Ba \Bb = \Ba \cdot \Bb + I (\Ba \cross \Bb).
    \end{equation}

    This result should be familiar to the student of quantum spin states where one writes

    \begin{equation}\label{eqn:gaTutorial:200}
    (\Bsigma \cdot \Ba) (\Bsigma \cdot \Bb) = (\Ba \cdot \Bb) + i (\Ba \cross \Bb) \cdot \Bsigma.
    \end{equation}

    This correspondence is because the Pauli spin basis is a specific matrix representation of a Geometric Algebra, satisfying the same commutator and anticommutator relationships (a short numerical check of this correspondence follows this list). A number of other algebraic structures, such as complex numbers and quaternions, can also be modelled as Geometric Algebra elements.

  • It is often useful to utilize the grade selection operator
    \( \gpgrade{M}{n} \) and scalar grade selection operator \( \gpgradezero{M} = \gpgrade{M}{0} \)
    to select the scalar, vector, bivector, trivector, or higher grade algebraic elements. For example, operating on vectors \( \Ba, \Bb, \Bc \), we have

    \begin{equation}\label{eqn:gaTutorial:580}
    \begin{aligned}
    \gpgradezero{ \Ba \Bb }
    &= \Ba \cdot \Bb \\
    \gpgradeone{ \Ba \Bb \Bc }
    &=
    \Ba (\Bb \cdot \Bc)
    +
    \Ba \cdot (\Bb \wedge \Bc) \\
    &=
    \Ba (\Bb \cdot \Bc)
    +
    (\Ba \cdot \Bb) \Bc
    -
    (\Ba \cdot \Bc) \Bb \\
    \gpgradetwo{\Ba \Bb} &=
    \Ba \wedge \Bb \\
    \gpgradethree{\Ba \Bb \Bc} &=
    \Ba \wedge \Bb \wedge \Bc.
    \end{aligned}
    \end{equation}

    Note that the wedge product of any number of vectors such as \( \Ba \wedge \Bb \wedge \Bc \) is associative and can be expressed in terms of the complete antisymmetrization of the product of those vectors. A consequence of that is the fact that a wedge product that includes any collinear vectors is zero.
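
Here is the promised numerical check of the Pauli matrix correspondence mentioned above, a small Python (numpy) sketch with arbitrarily chosen vectors:

# Numerical check of (sigma.a)(sigma.b) = (a.b) I + i (a x b).sigma
# using the Pauli matrix representation.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 1.0, 2.0])

lhs = np.einsum('i,ijk->jk', a, sigma) @ np.einsum('i,ijk->jk', b, sigma)
rhs = np.dot(a, b) * np.eye(2) + 1j * np.einsum('i,ijk->jk', np.cross(a, b), sigma)
print(np.allclose(lhs, rhs))  # True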

Example: Helmholtz equations.

As an example of the power of \ref{eqn:gaTutorial:180}, consider the following Helmholtz equation derivation (wave equations for the electric and magnetic fields in the frequency domain.)

Application of \ref{eqn:gaTutorial:180} to
Maxwell equations in the frequency domain for source free simple media gives

\label{eqn:emtProblemSet1Problem6:340}
\begin{equation}\label{eqn:emtProblemSet1Problem6:360}
\spacegrad \BE = -j \omega I \BB
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:380}
\spacegrad I \BB = -j \omega \mu \epsilon \BE.
\end{equation}

These equations use the engineering (not physics) sign convention for the phasors, where the time domain fields are of the form \( \boldsymbol{\mathcal{E}}(\Br, t) = \textrm{Re}( \BE e^{j\omega t} ) \).

Operation with the gradient from the left produces the Helmholtz equation for each of the fields using nothing more than multiplication and simple substitution

\label{eqn:emtProblemSet1Problem6:400}
\begin{equation}\label{eqn:emtProblemSet1Problem6:420}
\spacegrad^2 \BE = – \mu \epsilon \omega^2 \BE
\end{equation}
\begin{equation}\label{eqn:emtProblemSet1Problem6:440}
\spacegrad^2 I \BB = – \mu \epsilon \omega^2 I \BB.
\end{equation}

There was no reason to go through the headache of looking up or deriving the expansion of \( \spacegrad \cross (\spacegrad \cross \BA ) \) as is required with the traditional vector algebra demonstration of these identities.

Observe that the usual Helmholtz equation for \( \BB \) doesn’t have a pseudoscalar factor. That result can be obtained by just cancelling the factors \( I \) since the \R{3} Euclidean pseudoscalar commutes with all grades (this isn’t the case in \R{2} nor in Minkowski spaces.)

Example: Factoring the Laplacian.

There are various ways to demonstrate the identity

\begin{equation}\label{eqn:gaTutorial:660}
\spacegrad \cross \lr{ \spacegrad \cross \BA } = \spacegrad \lr{ \spacegrad \cdot \BA } – \spacegrad^2 \BA,
\end{equation}

such as the use of (somewhat obscure) tensor contraction techniques. We can also do this with Geometric Algebra (using a different set of obscure techniques) by factoring the Laplacian action on a vector

\begin{equation}\label{eqn:gaTutorial:700}
\begin{aligned}
\spacegrad^2 \BA
&=
\spacegrad (\spacegrad \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA + \spacegrad \wedge \BA) \\
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA) \\
%+
%\cancel{\spacegrad \wedge \spacegrad \wedge \BA}
&=
\spacegrad (\spacegrad \cdot \BA)
+
\spacegrad \cdot (\spacegrad \wedge \BA).
\end{aligned}
\end{equation}

Should we wish to express the last term using cross products, a grade one selection operation can be used
\begin{equation}\label{eqn:gaTutorial:680}
\begin{aligned}
\spacegrad \cdot (\spacegrad \wedge \BA)
&=
\gpgradeone{ \spacegrad (\spacegrad \wedge \BA) } \\
&=
\gpgradeone{ \spacegrad I (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I \spacegrad \wedge (\spacegrad \cross \BA) } \\
&=
\gpgradeone{ I^2 \spacegrad \cross (\spacegrad \cross \BA) } \\
&=
-\spacegrad \cross (\spacegrad \cross \BA).
\end{aligned}
\end{equation}

Here coordinate expansion was not required in any step.
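
If desired, the identity itself can also be double checked symbolically for a sample field. A small Python (sympy.vector) sketch of such a check, with an arbitrarily chosen \( \BA \) and a hand-rolled component-wise vector Laplacian:

# Symbolic check of curl(curl(A)) = grad(div(A)) - lap(A) for a sample field,
# with the vector Laplacian assembled component by component.
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
A = x*y*z*N.i + sp.sin(x*y)*N.j + x*sp.exp(z)*N.k   # arbitrary smooth field

def vector_laplacian(F):
    comps = [F.dot(e) for e in (N.i, N.j, N.k)]
    laps = [sp.diff(c, x, 2) + sp.diff(c, y, 2) + sp.diff(c, z, 2) for c in comps]
    return laps[0]*N.i + laps[1]*N.j + laps[2]*N.k

residual = curl(curl(A)) - (gradient(divergence(A)) - vector_laplacian(A))
print(all(sp.simplify(residual.dot(e)) == 0 for e in (N.i, N.j, N.k)))  # True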

Learning more.

Some references that may be helpful to learn more about Geometric Algebra are [2], [1], [4], and [3].

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.

[3] D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.

[4] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

Update to old phy356 (Quantum Mechanics I) notes.

February 12, 2015 math and physics play

It’s been a long time since I took QM I. My notes from that class were pretty rough, but I’ve cleaned them up a bit.

The main value to these notes is that I worked a number of introductory Quantum Mechanics problems.

These were my personal lecture notes for the Fall 2010, University of Toronto Quantum mechanics I course (PHY356H1F), taught by Prof. Vatche Deyirmenjian.

The official description of this course was:

The general structure of wave mechanics; eigenfunctions and eigenvalues; operators; orbital angular momentum; spherical harmonics; central potential; separation of variables, hydrogen atom; Dirac notation; operator methods; harmonic oscillator and spin.

This document contains a few things

• My lecture notes.
Typos, if any, are probably mine (Peeter), and no claim nor attempt of spelling or grammar correctness will be made. I chose not to take notes for the first four lectures since they followed the text very closely.
• Notes from reading of the text. This includes observations, notes on what seem like errors, and some solved problems. None of these problems have been graded. Note that my informal errata sheet for the text has been separated out from this document.
• Some assigned problems. I have corrected some of the errors after receiving grading feedback, and where I have not done so I have at least recorded some of the grading comments as a reference.
• Some worked problems associated with exam preparation.