
Green’s function for the gradient in Euclidean spaces.

September 26, 2016 math and physics play


In [1] it is stated that the Green’s function for the gradient is

\begin{equation}\label{eqn:gradientGreensFunction:20}
G(x, x') = \inv{S_n} \frac{x - x'}{\Abs{x-x'}^n},
\end{equation}

where \( n \) is the dimension of the space, \( S_n \) is the area of the unit sphere, and
\begin{equation}\label{eqn:gradientGreensFunction:40}
\grad G = \grad \cdot G = \delta(x - x').
\end{equation}

What I’d like to do here is verify that this Green’s function operates as asserted. Here, as in some parts of the text, I am following a convention where vectors are written without boldface.

Let’s start with checking that the gradient of the Green’s function is zero everywhere that \( x \ne x' \)

\begin{equation}\label{eqn:gradientGreensFunction:100}
\begin{aligned}
\spacegrad \inv{\Abs{x - x'}^n}
&=
-\frac{n}{2} \frac{e^\nu \partial_\nu (x_\mu - x_\mu')(x^\mu - {x^\mu}')}{\Abs{x - x'}^{n+2}} \\
&=
-\frac{n}{2} 2 \frac{e^\nu (x_\mu - x_\mu') \delta_\nu^\mu }{\Abs{x - x'}^{n+2}} \\
&=
-n \frac{ x - x'}{\Abs{x - x'}^{n+2}}.
\end{aligned}
\end{equation}
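As a quick numerical sanity check (my own addition, assuming numpy is available), this gradient formula can be compared against central finite differences in three dimensions:

```python
import numpy as np

# Finite-difference check of grad 1/|x - x'|^n = -n (x - x')/|x - x'|^{n+2},
# here in n = 3 dimensions.  The points x and x' are arbitrary choices.
n = 3
xp = np.array([0.3, -0.2, 0.5])          # the "primed" point x'

def f(x):
    return 1.0 / np.linalg.norm(x - xp) ** n

x = np.array([1.0, 2.0, -1.5])
h = 1e-6

# central-difference gradient, one component per basis direction
grad_fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(3)])
grad_exact = -n * (x - xp) / np.linalg.norm(x - xp) ** (n + 2)
assert np.allclose(grad_fd, grad_exact, rtol=1e-5)
```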

This means that we have, everywhere that \( x \ne x' \)

\begin{equation}\label{eqn:gradientGreensFunction:120}
\begin{aligned}
\spacegrad \cdot G
&=
\inv{S_n} \lr{ \frac{\spacegrad \cdot \lr{x - x'}}{\Abs{x - x'}^{n}} + \lr{ \spacegrad \inv{\Abs{x - x'}^{n}} } \cdot \lr{ x - x'} } \\
&=
\inv{S_n} \lr{ \frac{n}{\Abs{x - x'}^{n}} + \lr{ -n \frac{x - x'}{\Abs{x - x'}^{n+2} } \cdot \lr{ x - x'} } } \\
&= 0.
\end{aligned}
\end{equation}
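The same finite-difference approach can be used to spot check that this divergence really is zero away from \( x' \) (again my own illustration, assuming numpy):

```python
import numpy as np

# Check that div G = 0 away from x', with G = (1/S_n) (x - x')/|x - x'|^n
# and S_3 = 4 pi in three dimensions.
n = 3
S_n = 4 * np.pi                 # area of the unit sphere in 3D
xp = np.array([0.1, 0.4, -0.3])

def G(x):
    d = x - xp
    return d / (S_n * np.linalg.norm(d) ** n)

x = np.array([2.0, -1.0, 0.5])  # any point with x != x'
h = 1e-6

# divergence as the sum of central-difference partials
div = sum((G(x + h * e)[i] - G(x - h * e)[i]) / (2 * h)
          for i, e in enumerate(np.eye(3)))
assert abs(div) < 1e-8
```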

Next, consider the curl of the Green’s function. Zero curl will mean that we have \( \grad G = \grad \cdot G = G \lgrad \).

\begin{equation}\label{eqn:gradientGreensFunction:140}
\begin{aligned}
S_n (\grad \wedge G)
&=
\frac{\grad \wedge (x-x')}{\Abs{x - x'}^{n}}
+
\grad \inv{\Abs{x - x'}^{n}} \wedge (x-x') \\
&=
\frac{\grad \wedge (x-x')}{\Abs{x - x'}^{n}}
- n
\frac{x - x'}{\Abs{x - x'}^{n+2}} \wedge (x-x') \\
&=
\frac{\grad \wedge (x-x')}{\Abs{x - x'}^{n}}.
\end{aligned}
\end{equation}

However,

\begin{equation}\label{eqn:gradientGreensFunction:160}
\begin{aligned}
\grad \wedge (x-x')
&=
\grad \wedge x \\
&=
e^\mu \wedge e_\nu \partial_\mu x^\nu \\
&=
e^\mu \wedge e_\nu \delta_\mu^\nu \\
&=
e^\mu \wedge e_\mu.
\end{aligned}
\end{equation}

For any metric where \( e_\mu \propto e^\mu \), which is the case in all the ones with physical interest (i.e. \R{3} and Minkowski space), \( \grad \wedge G \) is zero.

Having shown that the gradient of the (presumed) Green’s function is zero everywhere that \( x \ne x' \), the guts of the
demonstration can now proceed. We wish to evaluate the gradient weighted convolution of the Green’s function using the Fundamental Theorem of (Geometric) Calculus. Here the gradient acts bidirectionally on both the Green’s function and the test function. Working in primed coordinates so that the final result is in terms of the unprimed, we have

\begin{equation}\label{eqn:gradientGreensFunction:60}
\int_V G(x,x') d^n x' \lrgrad' F(x')
= \int_{\partial V} G(x,x') d^{n-1} x' F(x').
\end{equation}

Let \( d^n x' = dV' I \), \( d^{n-1} x' n = dA' I \), where \( n = n(x') \) is the outward normal to the area element \( d^{n-1} x' \). From this point on, let’s restrict attention to Euclidean spaces, where \( n^2 = 1 \). In that case

\begin{equation}\label{eqn:gradientGreensFunction:80}
\begin{aligned}
\int_V dV' G(x,x') \lrgrad' F(x')
&=
\int_V dV' \lr{G(x,x') \lgrad'} F(x')
+
\int_V dV' G(x,x') \lr{ \rgrad' F(x') } \\
&= \int_{\partial V} dA' G(x,x') n F(x').
\end{aligned}
\end{equation}

Here, the pseudoscalar \( I \) has been factored out by commuting it with \( G \), using \( G I = (-1)^{n-1} I G \), and then pre-multiplication with \( 1/((-1)^{n-1} I ) \).

Each of these integrals can be considered in sequence. A convergence bound is required of the multivector test function \( F(x’) \) on the infinite surface \( \partial V \). Since it’s true that

\begin{equation}\label{eqn:gradientGreensFunction:180}
\Abs{ \int_{\partial V} dA' G(x,x') n F(x') }
\le
\int_{\partial V} dA' \Abs{ G(x,x') n F(x') },
\end{equation}

then it is sufficient to require that

\begin{equation}\label{eqn:gradientGreensFunction:200}
\lim_{x' \rightarrow \infty} \Abs{ \frac{x - x'}{\Abs{x - x'}^n} n(x') F(x') } \rightarrow 0,
\end{equation}

in order to kill off the surface integral. Evaluating the integral on a hypersphere centred on \( x \) where \( x' - x = n \Abs{x - x'} \), that is

\begin{equation}\label{eqn:gradientGreensFunction:260}
\lim_{x' \rightarrow \infty} \frac{ \Abs{F(x')}}{\Abs{x - x'}^{n-1}} \rightarrow 0.
\end{equation}

Given such a constraint, that leaves

\begin{equation}\label{eqn:gradientGreensFunction:220}
\int_V dV' \lr{G(x,x') \lgrad'} F(x')
=
-\int_V dV' G(x,x') \lr{ \rgrad' F(x') }.
\end{equation}

The LHS is zero everywhere that \( x \ne x’ \) so it can be restricted to a spherical ball around \( x \), which allows the test function \( F \) to be pulled out of the integral, and a second application of the Fundamental Theorem to be applied.

\begin{equation}\label{eqn:gradientGreensFunction:240}
\begin{aligned}
\int_V dV' \lr{G(x,x') \lgrad'} F(x')
&=
\lim_{\epsilon \rightarrow 0}
\int_{\Abs{x - x'} < \epsilon} dV' \lr{G(x,x') \lgrad'} F(x') \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} I^{-1} \int_{\Abs{x - x'} < \epsilon} I dV' \lr{G(x,x') \lgrad'} } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} (-1)^{n-1} I^{-1} \int_{\Abs{x - x'} < \epsilon} G(x,x') d^n x' \lgrad' } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} (-1)^{n-1} I^{-1} \int_{\Abs{x - x'} = \epsilon} G(x,x') d^{n-1} x' } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} (-1)^{n-1} I^{-1} \int_{\Abs{x - x'} = \epsilon} G(x,x') dA' I n } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} \int_{\Abs{x - x'} = \epsilon} dA' G(x,x') n } F(x) \\
&=
\lr{ \lim_{\epsilon \rightarrow 0} \int_{\Abs{x - x'} = \epsilon} dA' \frac{\epsilon (-n)}{S_n \epsilon^n} n } F(x) \\
&=
-\lim_{\epsilon \rightarrow 0} \frac{F(x)}{S_n \epsilon^{n-1}} \int_{\Abs{x - x'} = \epsilon} dA' \\
&=
-\lim_{\epsilon \rightarrow 0} \frac{F(x)}{S_n \epsilon^{n-1}} S_n \epsilon^{n-1} \\
&=
-F(x).
\end{aligned}
\end{equation}

This essentially calculates the divergence integral around an infinitesimal hypersphere, without assuming that the Green’s function commutes with the gradient in this infinitesimal region. So, provided the test function is constrained by \ref{eqn:gradientGreensFunction:260}, we have

\begin{equation}\label{eqn:gradientGreensFunction:280}
F(x) = \int_V dV' G(x,x') \lr{ \grad' F(x') }.
\end{equation}

In particular, should we have a first order gradient equation

\begin{equation}\label{eqn:gradientGreensFunction:300}
\spacegrad' F(x') = M(x'),
\end{equation}

the inverse of this equation is given by

\begin{equation}\label{eqn:gradientGreensFunction:320}
\boxed{
F(x) = \int_V dV' G(x,x') M(x').
}
\end{equation}

Note that the sign of the Green’s function is explicitly tied to the definition of the convolution integral that is used. This is important since the conventions for the sign of the Green’s function or the parameters in the convolution integral often vary.
What's cool about this result is that it applies not only to gradient equations in Euclidean spaces, but also to multivector (or even just vector) fields \( F \), rather than just the scalar functions to which Green's functions are usually applied.
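As an illustrative sanity check of this inversion (my own addition, assuming numpy), consider the simplest case \( n = 1 \), where the “unit sphere” is just the two points \( \pm 1 \), so \( S_1 = 2 \) and \( G(x,x') = \textrm{sgn}(x - x')/2 \):

```python
import numpy as np

# n = 1 check of F(x) = \int dV' G(x,x') grad' F(x'), with
# G(x,x') = sgn(x - x')/2.  F is a Gaussian, which decays more than
# fast enough to satisfy the surface-term constraint.
xs = np.linspace(-15, 15, 6001)
dx = xs[1] - xs[0]
F = np.exp(-xs ** 2)
dF = np.gradient(F, dx)              # grad' F(x')

x0 = 0.8                             # the unprimed evaluation point x
G = 0.5 * np.sign(x0 - xs)
recovered = np.sum(G * dF) * dx      # the convolution integral
assert abs(recovered - np.exp(-x0 ** 2)) < 1e-2
```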

Example: Electrostatics

As a check of the sign consider the electrostatics equation

\begin{equation}\label{eqn:gradientGreensFunction:380}
\spacegrad \BE = \frac{\rho}{\epsilon_0},
\end{equation}

for which we have after substitution into \ref{eqn:gradientGreensFunction:320}
\begin{equation}\label{eqn:gradientGreensFunction:400}
\BE(\Bx) = \inv{4 \pi \epsilon_0} \int_V dV' \frac{\Bx - \Bx'}{\Abs{\Bx - \Bx'}^3} \rho(\Bx').
\end{equation}

This matches the sign found in a trusted reference such as [2].
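That sign can also be spot checked numerically (my own illustration, assuming numpy): evaluating the convolution by Monte Carlo for a uniformly charged unit ball should recover the Coulomb field of the total charge at an exterior point:

```python
import numpy as np

# Monte Carlo evaluation of
#   E(x) = 1/(4 pi eps0) \int dV' (x - x')/|x - x'|^3 rho(x')
# for rho = rho0 inside the unit ball, 0 outside.  Outside the ball
# this should match the point-charge field Q/(4 pi eps0 r^2).
rng = np.random.default_rng(0)
eps0, rho0, R = 1.0, 1.0, 1.0
Q = rho0 * (4.0 / 3.0) * np.pi * R ** 3
x = np.array([3.0, 0.0, 0.0])             # field point at r = 3

N = 200_000
pts = rng.uniform(-R, R, size=(N, 3))     # sample the bounding cube
rho = np.where(np.linalg.norm(pts, axis=1) <= R, rho0, 0.0)
d = x - pts
integrand = rho[:, None] * d / np.linalg.norm(d, axis=1)[:, None] ** 3
E = (2 * R) ** 3 / (4 * np.pi * eps0) * integrand.mean(axis=0)

E_coulomb = Q / (4 * np.pi * eps0 * 3.0 ** 2)
assert abs(E[0] - E_coulomb) / E_coulomb < 0.02   # radial Coulomb field
assert np.allclose(E[1:], 0.0, atol=0.01 * E_coulomb)
```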

Future thought.

Does this Green’s function also work for mixed metric spaces? If so, in such a metric, what does it mean to
calculate the surface area of a unit sphere in a mixed signature space?

References

[1] C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.

[2] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.

PHY1520H Graduate Quantum Mechanics. Lecture 1: Lightning review. Taught by Prof. Arun Paramekanti

September 17, 2015 phy1520


Disclaimer

Peeter’s lecture notes from class. These may be incoherent and rough.

These are notes for the UofT course PHY1520, Graduate Quantum Mechanics, taught by Prof. Paramekanti, covering [1] chap. 1 content.

Classical mechanics

We’ll be talking about one body physics for most of this course. In classical mechanics we can figure out the particle trajectories using both of \( (\Br, \Bp) \), where

\begin{equation}\label{eqn:qmLecture1:20}
\begin{aligned}
\ddt{\Br} &= \inv{m} \Bp \\
\ddt{\Bp} &= -\spacegrad V.
\end{aligned}
\end{equation}

A two dimensional phase space as sketched in fig. 1 shows the trajectory of a point particle subject to some equations of motion.


fig. 1. One dimensional classical phase space example

Quantum mechanics

For this lecture, we’ll work with natural units, setting

\begin{equation}\label{eqn:qmLecture1:480}
\boxed{
\Hbar = 1.
}
\end{equation}

In QM we are no longer able to think in terms of definite position and momentum, but have to start asking about state vectors \( \ket{\Psi} \).

We’ll consider the state vector with respect to some basis. For example, in a position basis we write

\begin{equation}\label{eqn:qmLecture1:40}
\braket{ x }{\Psi } = \Psi(x),
\end{equation}

a complex-valued “wave function”, the probability amplitude for a particle in \( \ket{\Psi} \) to be in the vicinity of \( x \).

We could also consider the state in a momentum basis

\begin{equation}\label{eqn:qmLecture1:60}
\braket{ p }{\Psi } = \Psi(p),
\end{equation}

a probability amplitude with respect to momentum \( p \).

More precisely,

\begin{equation}\label{eqn:qmLecture1:80}
\Abs{\Psi(x)}^2 dx \ge 0
\end{equation}

is the probability of finding the particle in the range \( (x, x + dx ) \). To have meaning as a probability, we require

\begin{equation}\label{eqn:qmLecture1:100}
\int_{-\infty}^\infty \Abs{\Psi(x)}^2 dx = 1.
\end{equation}

The average position can be calculated using this probability density function. For example

\begin{equation}\label{eqn:qmLecture1:120}
\expectation{x} = \int_{-\infty}^\infty \Abs{\Psi(x)}^2 x dx,
\end{equation}

or
\begin{equation}\label{eqn:qmLecture1:140}
\expectation{f(x)} = \int_{-\infty}^\infty \Abs{\Psi(x)}^2 f(x) dx.
\end{equation}
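As a concrete illustration (my own check, assuming numpy), a normalized Gaussian satisfies the unit probability condition, and its \( \expectation{x} \) is just the centre of the packet:

```python
import numpy as np

# A normalized Gaussian wave function: check \int |Psi|^2 dx = 1 and
# that <x> recovers the centre x0 of the packet.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
x0, sigma = 1.5, 0.7
psi = (2 * np.pi * sigma ** 2) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * sigma ** 2))

prob = np.abs(psi) ** 2
assert abs(np.sum(prob) * dx - 1.0) < 1e-6   # normalization
assert abs(np.sum(prob * x) * dx - x0) < 1e-6  # <x> = x0
```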

Similarly, calculation of an average of a function of momentum can be expressed as

\begin{equation}\label{eqn:qmLecture1:160}
\expectation{f(p)} = \int_{-\infty}^\infty \Abs{\Psi(p)}^2 f(p) dp.
\end{equation}

Transformation from a position to momentum basis

We have a problem if we wish to compute an average in momentum space, such as \( \expectation{p} \), when given a wavefunction \( \Psi(x) \).

How do we convert

\begin{equation}\label{eqn:qmLecture1:180}
\Psi(p)
\stackrel{?}{\leftrightarrow}
\Psi(x),
\end{equation}

or equivalently
\begin{equation}\label{eqn:qmLecture1:200}
\braket{p}{\Psi}
\stackrel{?}{\leftrightarrow}
\braket{x}{\Psi}.
\end{equation}

Such a conversion can be performed by virtue of the assumption that we have a complete orthonormal basis, for which we can introduce identity operations such as

\begin{equation}\label{eqn:qmLecture1:220}
\int_{-\infty}^\infty dp \ket{p}\bra{p} = 1,
\end{equation}

or
\begin{equation}\label{eqn:qmLecture1:240}
\int_{-\infty}^\infty dx \ket{x}\bra{x} = 1.
\end{equation}

Some interpretations:

  1. \( \ket{x_0} \leftrightarrow \text{sits at } x = x_0 \)
  2. \( \braket{x}{x'} \leftrightarrow \delta(x - x') \)
  3. \( \braket{p}{p'} \leftrightarrow \delta(p - p') \)
  4. \( \braket{x}{p'} = \frac{e^{i p' x}}{\sqrt{V}} \), where \( V \) is the volume of the box containing the particle. We’ll define the appropriate normalization for an infinite box volume later.

The delta function interpretation of the braket \( \braket{p}{p'} \) justifies the identity operator, since we recover any state in the basis when operating with it. For example, in momentum space

\begin{equation}\label{eqn:qmLecture1:260}
\begin{aligned}
1 \ket{p}
&=
\lr{ \int_{-\infty}^\infty dp'
\ket{p'}\bra{p'} }
\ket{p} \\
&=
\int_{-\infty}^\infty dp'
\ket{p'}
\braket{p'}{p} \\
&=
\int_{-\infty}^\infty dp'
\ket{p'}
\delta(p - p') \\
&=
\ket{p}.
\end{aligned}
\end{equation}

This also allows the determination of an integral representation for the delta function

\begin{equation}\label{eqn:qmLecture1:500}
\begin{aligned}
\delta(x - x')
&=
\braket{x}{x'} \\
&=
\int dp \braket{x}{p} \braket{p}{x'} \\
&=
\inv{V} \int dp e^{i p x} e^{-i p x'},
\end{aligned}
\end{equation}

or

\begin{equation}\label{eqn:qmLecture1:520}
\delta(x - x')
=
\inv{V} \int dp e^{i p (x- x')}.
\end{equation}
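This representation can be spot checked numerically. Here’s an illustration (my own addition), using the infinite box volume normalization \( 1/V \rightarrow 1/2\pi \) that will be defined properly later: truncating the \( p \) integral at \( \Abs{p} < P \) gives the kernel \( \sin(P u)/\pi u \), which acts like a delta function on smooth test functions:

```python
import numpy as np

# Truncating (1/2 pi) \int dp e^{i p (x - x')} at |p| < P gives the
# kernel sin(P u)/(pi u), u = x - x'.  Applied to a smooth test
# function it should reproduce the function's value at x.
P = 50.0
xs = np.linspace(-10, 10, 8001)
dx = xs[1] - xs[0]
f = np.exp(-xs ** 2)                 # smooth test function

x0 = 0.5
u = x0 - xs
# sin(P u)/(pi u), written via the normalized sinc to avoid 0/0 at u = 0
kernel = (P / np.pi) * np.sinc(P * u / np.pi)
approx = np.sum(kernel * f) * dx
assert abs(approx - np.exp(-x0 ** 2)) < 1e-3
```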

Here we used the fact that \( \braket{p}{x} = \braket{x}{p}^\conj \).

FIXME: do we have a justification for that conjugation with what was defined here so far?

The conversion from a position basis to momentum space is now possible

\begin{equation}\label{eqn:qmLecture1:280}
\begin{aligned}
\braket{p}{\Psi}
&= \Psi(p) \\
&= \int_{-\infty}^\infty \braket{p}{x} \braket{x}{\Psi} dx \\
&= \int_{-\infty}^\infty \frac{e^{-ip x}}{\sqrt{V}} \Psi(x) dx.
\end{aligned}
\end{equation}

The momentum space to position space conversion can be written as

\begin{equation}\label{eqn:qmLecture1:300}
\Psi(x)
= \int_{-\infty}^\infty \frac{e^{ip x}}{\sqrt{V}} \Psi(p) dp.
\end{equation}
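These transform pairs can be verified with a discretized FFT round trip (my own illustration, assuming numpy, and the infinite box volume normalization \( 1/\sqrt{V} \rightarrow 1/\sqrt{2\pi} \)):

```python
import numpy as np

# Discretized round trip of the position <-> momentum transforms:
# forward Psi(p) = \int dx e^{-i p x} Psi(x)/sqrt(2 pi) via FFT,
# then the inverse; the wavefunction should be recovered.
x = np.linspace(-20, 20, 2048, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-(x - 1.0) ** 2) * np.exp(2j * x)   # Gaussian wavepacket

p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
# the e^{-i p x[0]} phase accounts for the grid not starting at x = 0
psi_p = dx * np.exp(-1j * p * x[0]) * np.fft.fft(psi) / np.sqrt(2 * np.pi)
psi_back = np.fft.ifft(np.exp(1j * p * x[0]) * psi_p) * np.sqrt(2 * np.pi) / dx
assert np.allclose(psi_back, psi, atol=1e-8)
```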

Now we can go back and figure out an expectation

\begin{equation}\label{eqn:qmLecture1:320}
\begin{aligned}
\expectation{p}
&=
\int \Psi^\conj(p) \Psi(p) p d p \\
&=
\int dp
\lr{
\int_{-\infty}^\infty \frac{e^{ip x}}{\sqrt{V}} \Psi^\conj(x) dx
}
\lr{
\int_{-\infty}^\infty \frac{e^{-ip x'}}{\sqrt{V}} \Psi(x') dx'
}
p \\
&=\int dp dx dx'
\Psi^\conj(x)
\inv{V} e^{ip (x-x')} \Psi(x') p \\
&=
\int dp dx dx'
\Psi^\conj(x)
\inv{V} \lr{ -i\PD{x}{e^{ip (x-x')}} }\Psi(x') \\
&=
\int dp dx
\Psi^\conj(x) \lr{ -i \PD{x}{} }
\inv{V} \int dx' e^{ip (x-x')} \Psi(x') \\
&=
\int dx
\Psi^\conj(x) \lr{ -i \PD{x}{} }
\int dx' \lr{ \inv{V} \int dp e^{ip (x-x')} } \Psi(x') \\
&=
\int dx
\Psi^\conj(x) \lr{ -i \PD{x}{} }
\int dx' \delta(x - x') \Psi(x') \\
&=
\int dx
\Psi^\conj(x) \lr{ -i \PD{x}{} }
\Psi(x).
\end{aligned}
\end{equation}

Here we’ve essentially calculated the position space representation of the momentum operator, allowing identifications of the following form

\begin{equation}\label{eqn:qmLecture1:380}
p \leftrightarrow -i \PD{x}{}
\end{equation}
\begin{equation}\label{eqn:qmLecture1:400}
p^2 \leftrightarrow - \PDSq{x}{}.
\end{equation}
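This identification is easy to spot check numerically (my own illustration, assuming numpy): for a Gaussian wavepacket modulated by \( e^{i k_0 x} \), the expectation \( \expectation{p} \) computed with \( -i \PD{x}{} \) should be \( k_0 \):

```python
import numpy as np

# Check of p <-> -i d/dx (with hbar = 1): for Psi(x) ~ e^{i k0 x} times
# a Gaussian envelope, <p> = \int Psi^* (-i d/dx) Psi dx should be k0.
x = np.linspace(-30, 30, 12001)
dx = x[1] - x[0]
k0, sigma = 2.0, 1.0
psi = (2 * np.pi * sigma ** 2) ** -0.25 \
    * np.exp(1j * k0 * x - x ** 2 / (4 * sigma ** 2))

# central-difference derivative, then the expectation integral
p_avg = np.sum(np.conj(psi) * (-1j) * np.gradient(psi, dx)).real * dx
assert abs(p_avg - k0) < 1e-3
```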

Alternate starting point.

Most of the above results followed from the claim that \( \braket{x}{p} \propto e^{i p x} \). Note that the position space representation of the momentum operator, \( p \leftrightarrow -i \PD{x}{} \), can also be taken as the starting point. Given that, the exponential representation of the position-momentum braket follows

\begin{equation}\label{eqn:qmLecture1:540}
\bra{x} P \ket{p}
=
-i \Hbar \PD{x}{} \braket{x}{p},
\end{equation}

but \( \bra{x} P \ket{p} = p \braket{x}{p} \), providing a differential equation for \( \braket{x}{p} \)

\begin{equation}\label{eqn:qmLecture1:560}
p \braket{x}{p} = -i \Hbar \PD{x}{} \braket{x}{p},
\end{equation}

with solution

\begin{equation}\label{eqn:qmLecture1:580}
i p x/\Hbar = \ln \braket{x}{p} + \text{const},
\end{equation}

or
\begin{equation}\label{eqn:qmLecture1:600}
\braket{x}{p} \propto e^{i p x/\Hbar}.
\end{equation}

Matrix interpretation

  1. Kets \( \ket{\Psi} \leftrightarrow \text{column vector} \)
  2. Bras \( \bra{\Psi} \leftrightarrow {(\text{row vector})}^\conj \)
  3. Operators \( \leftrightarrow \) matrices that act on vectors.

\begin{equation}\label{eqn:qmLecture1:420}
\hat{p} \ket{\Psi} \rightarrow \ket{\Psi'}
\end{equation}

Time evolution

For a state subject to the equations of motion given by the Hamiltonian operator \( \hat{H} \)

\begin{equation}\label{eqn:qmLecture1:440}
i \PD{t}{} \ket{\Psi} = \hat{H} \ket{\Psi},
\end{equation}

the time evolution is given by
\begin{equation}\label{eqn:qmLecture1:460}
\ket{\Psi(t)} = e^{-i \hat{H} t} \ket{\Psi(0)}.
\end{equation}
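As a minimal illustration (my own addition, assuming numpy), the evolution operator for a two level Hamiltonian \( \hat{H} = \sigma_x \) can be built by diagonalization; starting in the first basis state, the survival probability oscillates as \( \cos^2 t \):

```python
import numpy as np

# Two-level illustration of |Psi(t)> = e^{-i H t} |Psi(0)> (hbar = 1),
# with H = sigma_x.  Then e^{-i H t} = cos(t) I - i sin(t) sigma_x, so
# starting from (1, 0) the survival probability is cos^2(t).
H = np.array([[0.0, 1.0], [1.0, 0.0]])   # sigma_x
t = 0.7

w, V = np.linalg.eigh(H)                 # H is Hermitian
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
psit = U @ psi0
assert np.allclose(U.conj().T @ U, np.eye(2))            # U is unitary
assert abs(abs(psit[0]) ** 2 - np.cos(t) ** 2) < 1e-12   # Rabi-type oscillation
```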

Incomplete information

We’ll need to introduce the concept of density matrices. This will bring us to concepts like entanglement.

References

[1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014.