
Fundamental theorem of geometric calculus for line integrals (relativistic.)

December 16, 2020 math and physics play

[This post is best viewed in PDF form, due to latex elements that I could not format with wordpress mathjax.]

Background for this particular post can be found in

  1. Curvilinear coordinates and gradient in spacetime, and reciprocal frames, and
  2. Lorentz transformations in Space Time Algebra (STA)
  3. A couple more reciprocal frame examples.

Motivation.

I’ve been slowly working my way towards a statement of the fundamental theorem of integral calculus, where the functions being integrated are elements of the Dirac algebra (space time multivectors in the geometric algebra parlance.)

This is interesting because we want to be able to do line, surface, 3-volume and 4-volume space time integrals. We have many \(\mathbb{R}^3\) integral theorems
\begin{equation}\label{eqn:fundamentalTheoremOfGC:40a}
\int_A^B d\Bl \cdot \spacegrad f = f(B) - f(A),
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:60a}
\int_S dA\, \ncap \cross \spacegrad f = \int_{\partial S} d\Bx\, f,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:80a}
\int_S dA\, \ncap \cdot \lr{ \spacegrad \cross \Bf} = \int_{\partial S} d\Bx \cdot \Bf,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:100a}
\int_S dx dy \lr{ \PD{x}{Q} - \PD{y}{P} }
=
\int_{\partial S} P dx + Q dy,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:120a}
\int_V dV\, \spacegrad f = \int_{\partial V} dA\, \ncap f,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:140a}
\int_V dV\, \spacegrad \cross \Bf = \int_{\partial V} dA\, \ncap \cross \Bf,
\end{equation}
\begin{equation}\label{eqn:fundamentalTheoremOfGC:160a}
\int_V dV\, \spacegrad \cdot \Bf = \int_{\partial V} dA\, \ncap \cdot \Bf,
\end{equation}
and want to know how to generalize these to four dimensions and also make sure that we are handling the relativistic mixed signature correctly. If our starting point was the mess of equations above, we’d be in trouble, since it is not obvious how these generalize. All the theorems with unit normals have to be handled completely differently in four dimensions since we don’t have a unique normal to any given spacetime plane.
What comes to our rescue is the Fundamental Theorem of Geometric Calculus (FTGC), which has the form
\begin{equation}\label{eqn:fundamentalTheoremOfGC:40}
\int_V F d^n \Bx\, \lrpartial G = \int_{\partial V} F d^{n-1} \Bx\, G,
\end{equation}
where \(F,G\) are multivector functions (i.e. sums of products of vectors.) We’ve seen ([2], [1]) that all the identities above are special cases of the fundamental theorem.

Do we need any special care to state the FTGC correctly for our relativistic case? It turns out that the answer is no! Tangent and reciprocal frame vectors do all the heavy lifting, and we can use the fundamental theorem as is, even in our mixed signature space. The only real change that we need to make is to use the spacetime gradient and vector derivative operators instead of their spatial equivalents. We will see how this works below. Note that instead of starting with \ref{eqn:fundamentalTheoremOfGC:40} directly, I will attempt to build up to that point in a progressive fashion that hopefully does not require the reader to make too many unjustified mental leaps.

Multivector line integrals.

We want to define multivector line integrals to start with. Recall that in \(\mathbb{R}^3\) we would say that for scalar functions \( f\), the integral
\begin{equation}\label{eqn:fundamentalTheoremOfGC:180b}
\int d\Bx\, f = \int f d\Bx,
\end{equation}
is a line integral. Also, for vector functions \( \Bf \) we call
\begin{equation}\label{eqn:fundamentalTheoremOfGC:200}
\int d\Bx \cdot \Bf = \inv{2} \int \lr{ d\Bx\, \Bf + \Bf\, d\Bx },
\end{equation}
a line integral. In order to generalize line integrals to multivector functions, we will allow our multivector functions to be placed on either or both sides of the differential.

Definition 1.1: Line integral.

Given a single variable parameterization \( x = x(u) \), we write \( d^1\Bx = \Bx_u du \), and call
\begin{equation}\label{eqn:fundamentalTheoremOfGC:220a}
\int F d^1\Bx\, G,
\end{equation}
a line integral, where \( F,G \) are arbitrary multivector functions.

We must be careful not to reorder any of the factors in the integrand, since the differential may not commute with either \( F \) or \( G \). Here is a simple example where the integrand has a product of a vector and differential.

Problem: Circular parameterization.

Given a circular parameterization \( x(\theta) = \gamma_1 e^{-i\theta} \), where \( i = \gamma_1 \gamma_2 \) is the unit bivector for the \(x,y\) plane, compute the line integral
\begin{equation}\label{eqn:fundamentalTheoremOfGC:100}
\int_0^{\pi/4} F(\theta)\, d^1 \Bx\, G(\theta),
\end{equation}
where \( F(\theta) = \Bx^\theta + \gamma_3 + \gamma_1 \gamma_0 \) is a multivector valued function, and \( G(\theta) = \gamma_0 \) is vector valued.

Answer

The tangent vector for the curve is
\begin{equation}\label{eqn:fundamentalTheoremOfGC:60}
\Bx_\theta
= -\gamma_1 \gamma_1 \gamma_2 e^{-i\theta}
= \gamma_2 e^{-i\theta},
\end{equation}
with reciprocal vector \( \Bx^\theta = e^{i \theta} \gamma^2 \). The differential element is \( d^1 \Bx = \gamma_2 e^{-i\theta} d\theta \), so the integral is
\begin{equation}\label{eqn:fundamentalTheoremOfGC:80}
\begin{aligned}
\int_0^{\pi/4} \lr{ \Bx^\theta + \gamma_3 + \gamma_1 \gamma_0 } d^1 \Bx\, \gamma_0
&=
\int_0^{\pi/4} \lr{ e^{i\theta} \gamma^2 + \gamma_3 + \gamma_1 \gamma_0 } \gamma_2 e^{-i\theta} d\theta\, \gamma_0 \\
&=
\frac{\pi}{4} \gamma_0 + \lr{ \gamma_{32} + \gamma_{102} } \inv{-i} \lr{ e^{-i\pi/4} - 1 } \gamma_0 \\
&=
\frac{\pi}{4} \gamma_0 + \lr{ \gamma_{32} + \gamma_{102} } \lr{ \inv{\sqrt{2}} \gamma_{120} \lr{ 1 - \gamma_{12} } - \gamma_{120} } \\
&=
\frac{\pi}{4} \gamma_0 + \lr{ \gamma_{310} + 1 } \lr{ \inv{\sqrt{2}} \lr{ 1 - \gamma_{12} } - 1 }.
\end{aligned}
\end{equation}
Observe how care is required not to reorder any terms. This particular end result is a multivector with scalar, vector, bivector, and trivector grades, but no pseudoscalar component. The grades in the end result depend on both the function in the integrand and on the path. For example, had we integrated all the way around the circle, the end result would have been the vector \( 2 \pi \gamma_0 \) (i.e. a \( \gamma_0 \) weighted unit circle circumference), as all the other grades would have been killed by the complex exponential integrated over a full period.
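This sort of arithmetic is easy to get wrong, so here is a minimal numerical cross check, a sketch that assumes numpy and represents the STA basis vectors by the standard 4x4 Dirac matrices (any faithful matrix representation would do). It integrates the same integrand by brute force quadrature and compares against the final multivector expression above.

```python
import numpy as np

# Dirac representation of the gamma matrices, used here only as a faithful
# numerical stand-in for the STA basis vectors gamma_0, ..., gamma_3.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)

g0 = np.block([[I2, Z], [Z, -I2]])
g1, g2, g3 = (np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3))
I4 = np.eye(4, dtype=complex)

i = g1 @ g2                                  # unit bivector for the x,y plane

def rot(theta):
    """e^{i theta} = cos(theta) + i sin(theta), with i = gamma_1 gamma_2."""
    return np.cos(theta) * I4 + np.sin(theta) * i

def integrand(theta):
    F = rot(theta) @ (-g2) + g3 + g1 @ g0    # x^theta = e^{i theta} gamma^2, with gamma^2 = -gamma_2
    dx_dtheta = g2 @ rot(-theta)             # x_theta = gamma_2 e^{-i theta}
    return F @ dx_dtheta @ g0                # F (d^1 x / dtheta) G

# Midpoint rule quadrature of the matrix valued integrand over [0, pi/4].
N = 2000
theta = (np.arange(N) + 0.5) * (np.pi / 4) / N
result = sum(integrand(t) for t in theta) * (np.pi / 4) / N

expected = (np.pi / 4) * g0 \
    + (g3 @ g1 @ g0 + I4) @ ((I4 - g1 @ g2) / np.sqrt(2) - I4)
print(np.allclose(result, expected, atol=1e-6))   # True
```

Replacing the upper limit by \( 2\pi \) in the same script gives \( 2 \pi \gamma_0 \) to quadrature accuracy, matching the full-circle remark above.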

Problem: Line integral for boosted time direction vector.

Let \( x = e^{\vcap \alpha/2} \gamma_0 e^{-\vcap \alpha/2} \) represent the spacetime curve of all the boosts of \( \gamma_0 \) along a specific velocity direction vector, where \( \vcap = (v \wedge \gamma_0)/\Norm{v \wedge \gamma_0} \) is a unit spatial bivector for any constant vector \( v \). Compute the line integral
\begin{equation}\label{eqn:fundamentalTheoremOfGC:240}
\int x\, d^1 \Bx.
\end{equation}

Answer

Observe that \( \vcap \) and \( \gamma_0 \) anticommute, so we may write our boost as a one sided exponential
\begin{equation}\label{eqn:fundamentalTheoremOfGC:260}
x(\alpha) = \gamma_0 e^{-\vcap \alpha} = e^{\vcap \alpha} \gamma_0 = \lr{ \cosh\alpha + \vcap \sinh\alpha } \gamma_0.
\end{equation}
The tangent vector is just
\begin{equation}\label{eqn:fundamentalTheoremOfGC:280}
\Bx_\alpha = \PD{\alpha}{x} = e^{\vcap\alpha} \vcap \gamma_0.
\end{equation}
Let’s get a bit of intuition about the nature of this vector. Its square is
\begin{equation}\label{eqn:fundamentalTheoremOfGC:300}
\begin{aligned}
\Bx_\alpha^2
&=
e^{\vcap\alpha} \vcap \gamma_0
e^{\vcap\alpha} \vcap \gamma_0 \\
&=
-e^{\vcap\alpha} \vcap e^{-\vcap\alpha} \vcap (\gamma_0)^2 \\
&=
-1,
\end{aligned}
\end{equation}
so we see that the tangent vector is a spacelike unit vector. The vector representing points on the curve is a boost of \( \gamma_0 \), so it remains a unit timelike vector (\( x^2 = 1 \)) for all \( \alpha \); differentiating \( x^2 = 1 \) shows that these two vectors must be orthogonal at all points. Let’s confirm this algebraically
\begin{equation}\label{eqn:fundamentalTheoremOfGC:320}
\begin{aligned}
x \cdot \Bx_\alpha
&=
\gpgradezero{ e^{\vcap \alpha} \gamma_0 e^{\vcap \alpha} \vcap \gamma_0 } \\
&=
\gpgradezero{ e^{-\vcap \alpha} e^{\vcap \alpha} \vcap (\gamma_0)^2 } \\
&=
\gpgradezero{ \vcap } \\
&= 0.
\end{aligned}
\end{equation}
Here we used \( e^{\vcap \alpha} \gamma_0 = \gamma_0 e^{-\vcap \alpha} \), and \( \gpgradezero{A B} = \gpgradezero{B A} \). Geometrically, we have the curious fact that the direction vectors to points on the curve are perpendicular (with respect to our relativistic dot product) to the tangent vectors on the curve, as illustrated in fig. 1.

fig. 1. Tangent perpendicularity in mixed metric.
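The integral itself is now easy to evaluate. Assuming a parameter range \( \alpha \in [\alpha_0, \alpha_1] \) (none was specified in the problem statement), observe that the product of the curve and tangent vectors is constant
\begin{equation}
x \Bx_\alpha
= e^{\vcap \alpha} \gamma_0 e^{\vcap \alpha} \vcap \gamma_0
= e^{\vcap \alpha} e^{-\vcap \alpha} \gamma_0 \vcap \gamma_0
= -\vcap,
\end{equation}
so that
\begin{equation}
\int x\, d^1 \Bx = \int_{\alpha_0}^{\alpha_1} x \Bx_\alpha\, d\alpha = -\vcap \lr{ \alpha_1 - \alpha_0 },
\end{equation}
a bivector proportional to the rapidity interval, with no scalar (dot product) part, consistent with the perpendicularity observed above.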

Perfect differentials.

Having seen a couple examples of multivector line integrals, let’s now move on to figure out the structure of a line integral that has a “perfect” differential integrand. We can take a hint from the \(\mathbb{R}^3\) vector result that we already know, namely
\begin{equation}\label{eqn:fundamentalTheoremOfGC:120}
\int_A^B d\Bl \cdot \spacegrad f = f(B) - f(A).
\end{equation}
It seems reasonable to guess that the relativistic generalization of this is
\begin{equation}\label{eqn:fundamentalTheoremOfGC:140}
\int_A^B dx \cdot \grad f = f(B) - f(A).
\end{equation}
Let’s check that, by expanding in coordinates
\begin{equation}\label{eqn:fundamentalTheoremOfGC:160}
\begin{aligned}
\int_A^B dx \cdot \grad f
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \partial_\mu f \\
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \PD{x^\mu}{f} \\
&=
\int_A^B d\tau \frac{df}{d\tau} \\
&=
f(B) - f(A).
\end{aligned}
\end{equation}
If we drop the dot product, will we have such a nice result? Let’s see:
\begin{equation}\label{eqn:fundamentalTheoremOfGC:180}
\begin{aligned}
\int_A^B dx \grad f
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \gamma_\mu \gamma^\nu \partial_\nu f \\
&=
\int_A^B d\tau \frac{dx^\mu}{d\tau} \PD{x^\mu}{f}
+
\int_A^B
d\tau
\sum_{\mu \ne \nu} \gamma_\mu \gamma^\nu
\frac{dx^\mu}{d\tau} \PD{x^\nu}{f}.
\end{aligned}
\end{equation}
The scalar component of this integrand is a perfect differential, but the bivector part of the integrand is a complete mess that we have no hope of integrating in general. It happens that if we consider one of the simplest parameterization examples, we can get a strong hint of how to generalize the differential operator to one that ends up providing a perfect differential. In particular, let’s integrate over a linear path along a constant direction, such as \( x(\tau) = \tau \gamma_0 \). For this path, we have
\begin{equation}\label{eqn:fundamentalTheoremOfGC:200a}
\begin{aligned}
\int_A^B dx \grad f
&=
\int_A^B \gamma_0 d\tau \lr{
\gamma^0 \partial_0 +
\gamma^1 \partial_1 +
\gamma^2 \partial_2 +
\gamma^3 \partial_3 } f \\
&=
\int_A^B d\tau \lr{
\PD{\tau}{f} +
\gamma_0 \gamma^1 \PD{x^1}{f} +
\gamma_0 \gamma^2 \PD{x^2}{f} +
\gamma_0 \gamma^3 \PD{x^3}{f}
}.
\end{aligned}
\end{equation}
Just because the path does not have any \( x^1, x^2, x^3 \) component dependencies does not mean that these last three partials are necessarily zero. For example \( f = f(x(\tau)) = \lr{ x^0 }^2 \gamma_0 + x^1 \gamma_1 \) will have a non-zero contribution from the \( \partial_1 \) operator. In that particular case, we can easily integrate \( f \), but we have to know the specifics of the function to do the integral. However, if we had a differential operator that did not include any component off the integration path, we would have a perfect differential. That is, if we were to replace the gradient with the projection of the gradient onto the tangent space, we would have a perfect differential. We see that the dot product in \ref{eqn:fundamentalTheoremOfGC:140} has the same effect, as it rejects any component of the gradient that does not lie in the tangent space.

Definition 1.2: Vector derivative.

Given a spacetime manifold parameterized by \( x = x(u^0, \cdots u^{N-1}) \), with tangent vectors \( \Bx_\mu = \PDi{u^\mu}{x} \), and reciprocal vectors \( \Bx^\mu \in \textrm{Span}\setlr{\Bx_\nu} \), such that \( \Bx^\mu \cdot \Bx_\nu = {\delta^\mu}_\nu \), the vector derivative is defined as
\begin{equation}\label{eqn:fundamentalTheoremOfGC:240a}
\partial = \sum_{\mu = 0}^{N-1} \Bx^\mu \PD{u^\mu}{}.
\end{equation}
Observe that if this is a full parameterization of the space (\(N = 4\)), then the vector derivative is identical to the gradient. The vector derivative is the projection of the gradient onto the tangent space at the point of evaluation. Furthermore, we designate \( \lrpartial \) as the vector derivative allowed to act bidirectionally, as follows
\begin{equation}\label{eqn:fundamentalTheoremOfGC:260a}
R \lrpartial S
=
R \Bx^\mu \PD{u^\mu}{S}
+
\PD{u^\mu}{R} \Bx^\mu S,
\end{equation}
where \( R, S \) are multivectors, and summation convention is implied. In this bidirectional action,
the vector factors of the vector derivative must stay in place (as they do not necessarily commute with \( R,S \)), but the derivative operators apply in a chain rule like fashion to both functions.
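For instance, for the circular parameterization of the first problem the vector derivative is \( \partial = \Bx^\theta \PDi{\theta}{} = e^{i\theta} \gamma^2 \PDi{\theta}{} \), while for the linear path \( x(\tau) = \tau \gamma_0 \) used above it is just \( \partial = \gamma^0 \PDi{\tau}{} \), so that
\begin{equation}
dx\, \partial f = \gamma_0 d\tau\, \gamma^0 \PD{\tau}{f} = d\tau \PD{\tau}{f},
\end{equation}
which is exactly the perfect differential that the dot product form \ref{eqn:fundamentalTheoremOfGC:140} produced, but now without discarding any grades of \( f \).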

Noting that \( \Bx_u \cdot \grad = \Bx_u \cdot \partial \), we may rewrite the scalar line integral identity \ref{eqn:fundamentalTheoremOfGC:140} as
\begin{equation}\label{eqn:fundamentalTheoremOfGC:220}
\int_A^B dx \cdot \partial f = f(B) - f(A).
\end{equation}
However, as our example hinted at, the fundamental theorem for line integrals has a multivector generalization that does not rely on a dot product to do the tangent space filtering, and is more powerful. That generalization has the following form.

Theorem 1.1: Fundamental theorem for line integrals.

Given multivector functions \( F, G \), and a single parameter curve \( x(u) \) with line element \( d^1 \Bx = \Bx_u du \), then
\begin{equation}\label{eqn:fundamentalTheoremOfGC:280a}
\int_A^B F d^1\Bx \lrpartial G = F(B) G(B) - F(A) G(A).
\end{equation}

Start proof:

Writing out the integrand explicitly, with the curve parameterized by \( \alpha \), we find
\begin{equation}\label{eqn:fundamentalTheoremOfGC:340}
\int_A^B F d^1\Bx \lrpartial G
=
\int_A^B \lr{
\PD{\alpha}{F} d\alpha\, \Bx_\alpha \Bx^\alpha G
+
F d\alpha\, \Bx_\alpha \Bx^\alpha \PD{\alpha}{G }
}
\end{equation}
However for a single parameter curve, we have \( \Bx^\alpha = 1/\Bx_\alpha \), so we are left with
\begin{equation}\label{eqn:fundamentalTheoremOfGC:360}
\begin{aligned}
\int_A^B F d^1\Bx \lrpartial G
&=
\int_A^B d\alpha\, \PD{\alpha}{(F G)} \\
&=
\evalbar{F G}{B}

\evalbar{F G}{A}.
\end{aligned}
\end{equation}

End proof.
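As a quick sanity check, consider the functions \( F(\theta) = \Bx^\theta + \gamma_3 + \gamma_1 \gamma_0 \) and \( G = \gamma_0 \) from the first problem above. For the line integral that includes the bidirectional vector derivative, the theorem gives the answer immediately from the endpoint values: \( G \) is constant, and only the \( \Bx^\theta \) term of \( F \) has any \( \theta \) dependence, so
\begin{equation}
\int_0^{\pi/4} F d^1\Bx \lrpartial G
= \evalbar{F G}{\pi/4} - \evalbar{F G}{0}
= \lr{ \Bx^\theta(\pi/4) - \Bx^\theta(0) } \gamma_0
= \lr{ e^{i\pi/4} - 1 } \gamma^2 \gamma_0.
\end{equation}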

More to come.

In the next installment we will explore surface integrals in spacetime, and the generalization of the fundamental theorem to multivector space time integrals.

References

[1] Peeter Joot. Geometric Algebra for Electrical Engineers. Kindle Direct Publishing, 2019.

[2] A. Macdonald. Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012.

PHY2403H Quantum Field Theory. Lecture 4: Scalar action, least action principle, Euler-Lagrange equations for a field, canonical quantization. Taught by Prof. Erich Poppitz

September 23, 2018 phy2403


DISCLAIMER: Very rough notes from class. May have some additional side notes, but otherwise probably barely edited.

These are notes for the UofT course PHY2403H, Quantum Field Theory I, taught by Prof. Erich Poppitz fall 2018.

Principles (cont.)

  • Lorentz (Poincaré: Lorentz and spacetime translations)
  • locality
  • dimensional analysis
  • gauge invariance

These are the requirements for an action. We postulated an action that had the form
\begin{equation}\label{eqn:qftLecture4:20}
\int d^d x \partial_\mu \phi \partial^\mu \phi,
\end{equation}
called the “Kinetic term”, which mimics \( \int dt \dot{q}^2 \) that we’d see in quantum or classical mechanics. In principle there exists an infinite number of local Poincaré invariant terms that we can write. Examples:

  • \( \partial_\mu \phi \partial^\mu \phi \)
  • \( \partial_\mu \phi \partial_\nu \partial^\nu \partial^\mu \phi \)
  • \( \lr{\partial_\mu \phi \partial^\mu \phi}^2 \)
  • \( f(\phi) \partial_\mu \phi \partial^\mu \phi \)
  • \( f(\phi, \partial_\mu \phi \partial^\mu \phi) \)
  • \( V(\phi) \)

It turns out that nature (i.e. three spatial dimensions and one time dimension) is described by a finite number of terms. We will now utilize dimensional analysis to determine some of the allowed forms of the action for scalar field theories in \( d = 2, 3, 4, 5 \) dimensions. Even though the real world is only \( d = 4 \), some of the \( d < 4 \) theories are relevant in condensed matter studies, and \( d = 5 \) is just for fun (but also applies to string theories.)

With \( [x] \sim \inv{M} \) in natural units, we must define \([\phi]\) such that the kinetic term is dimensionless in \( d \) spacetime dimensions

\begin{equation}\label{eqn:qftLecture4:40}
\begin{aligned}
[d^d x] &\sim \inv{M^d} \\
[\partial_\mu] &\sim M
\end{aligned}
\end{equation}

so it must be that
\begin{equation}\label{eqn:qftLecture4:60}
[\phi] = M^{(d-2)/2}
\end{equation}

It will be easier to characterize the dimensionality of any given term by the power of the mass units, that is

\begin{equation}\label{eqn:qftLecture4:80}
\begin{aligned}
[\text{mass}] &= 1 \\
[d^d x] &= -d \\
[\partial_\mu] &= 1 \\
[\phi] &= (d-2)/2 \\
[S] &= 0.
\end{aligned}
\end{equation}
Since the action is
\begin{equation}\label{eqn:qftLecture4:100}
S = \int d^d x \lr{ \LL(\phi, \partial_\mu \phi) },
\end{equation}
and because the action has dimensions of \( \Hbar \), which is dimensionless in natural units, the Lagrangian density must have mass dimension \( d \). We will abuse language in QFT and call the Lagrangian density the Lagrangian.

\( d = 2 \)

Because \( [\partial_\mu \phi \partial^\mu \phi ] = 2 \), the scalar field must be dimension zero, or in symbols
\begin{equation}\label{eqn:qftLecture4:120}
[\phi] = 0.
\end{equation}
This means that introducing any function \( f(\phi) = 1 + a \phi + b\phi^2 + c \phi^3 + \cdots \) is also dimensionless, and
\begin{equation}\label{eqn:qftLecture4:140}
[f(\phi) \partial_\mu \phi \partial^\mu \phi ] = 2,
\end{equation}
for any \( f(\phi) \). Another implication of this is that a potential term \( V(\phi) \) in the Lagrangian, with \( [V(\phi)] = 0 \), needs a coupling constant of mass dimension 2. Letting \( \mu \) have mass dimensions, our Lagrangian must have the form
\begin{equation}\label{eqn:qftLecture4:160}
f(\phi) \partial_\mu \phi \partial^\mu \phi + \mu^2 V(\phi).
\end{equation}
An infinite number of coupling constants of positive mass dimensions for \( V(\phi) \) are also allowed. If we have higher order derivative terms, then we need coupling constants of negative mass dimension to compensate. An example (still for \( d = 2 \)) is
\begin{equation}\label{eqn:qftLecture4:180}
\LL =
f(\phi) \partial_\mu \phi \partial^\mu \phi + \mu^2 V(\phi) + \inv{{\mu'}^2}\partial_\mu \phi \partial_\nu \partial^\nu \partial^\mu \phi + \lr{ \partial_\mu \phi \partial^\mu \phi }^2 \inv{\tilde{\mu}^2}.
\end{equation}
The last two terms, called \underline{couplings} (i.e. any non-kinetic term), are examples of terms with negative mass dimension. There is an infinite number of those in any theory in any dimension.
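As a check of the counting for one of these, in \( d = 2 \) where \( [\phi] = 0 \), the four-derivative term has
\begin{equation}
[\partial_\mu \phi \partial_\nu \partial^\nu \partial^\mu \phi] = 4,
\end{equation}
which exceeds \( [\LL] = d = 2 \) by two, so its coupling \( 1/{\mu'}^2 \) must carry mass dimension \( -2 \), and similarly for the \( \lr{ \partial_\mu \phi \partial^\mu \phi }^2 \) term.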

Definitions

  • Couplings that are dimensionless are called (classically) marginal.
  • Couplings that have positive mass dimension are called (classically) relevant.
  • Couplings that have negative mass dimension are called (classically) irrelevant.

In QFT we are generally interested in the couplings that are measurable at long distances for some given energy. Classically irrelevant couplings are generally not interesting in \( d > 2 \), so we are very lucky that we do not live in a two dimensional spacetime. This means that we can get away with a finite number of classically marginal and relevant couplings in 3 or 4 dimensions. This was mentioned in Wilczek’s article referenced in the class forum [1]\footnote{There’s currently more in that article that I don’t understand than I do, so it is hard to find it terribly illuminating.}

Long distance physics in any dimension is described by the marginal and relevant couplings. The irrelevant couplings die off at low energy. In two dimensions, a priori, an infinite number of marginal and relevant couplings are possible. 2D is a bad place to live!

\( d = 3 \)

Now we have
\begin{equation}\label{eqn:qftLecture4:200}
[\phi] = \inv{2}
\end{equation}
so that
\begin{equation}\label{eqn:qftLecture4:220}
[\partial_\mu \phi \partial^\mu \phi] = 3.
\end{equation}

A 3D Lagrangian could have local terms such as
\begin{equation}\label{eqn:qftLecture4:240}
\LL = \partial_\mu \phi \partial^\mu \phi + m^2 \phi^2 + \mu^{3/2} \phi^3 + \mu' \phi^4
+ \lr{\mu''}^{1/2} \phi^5
+ \lambda \phi^6,
\end{equation}
where \( m, \mu, \mu', \mu'' \) all have positive mass dimensions, and \( \lambda \) is dimensionless, i.e. \( m, \mu, \mu', \mu'' \) are relevant, and \( \lambda \) is marginal. We stop at the sixth power, since any power after that will be irrelevant.
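The powers of the couplings here follow from \( [\phi] = 1/2 \) and the requirement that each term have dimension \( d = 3 \), for example
\begin{equation}
[\mu^{3/2} \phi^3] = \frac{3}{2} + \frac{3}{2} = 3,
\qquad
[\lambda \phi^6] = 0 + 6 \cdot \inv{2} = 3,
\end{equation}
while a \( \phi^7 \) term would already require a coupling of dimension \( -1/2 \), an irrelevant one.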

\( d = 4 \)

Now we have
\begin{equation}\label{eqn:qftLecture4:260}
[\phi] = 1
\end{equation}
so that
\begin{equation}\label{eqn:qftLecture4:280}
[\partial_\mu \phi \partial^\mu \phi] = 4.
\end{equation}

In this number of dimensions \( \phi^k \partial_\mu \phi \partial^\mu \phi \) is an irrelevant coupling for any \( k \ge 1 \).

A 4D Lagrangian could have local terms such as
\begin{equation}\label{eqn:qftLecture4:300}
\LL = \partial_\mu \phi \partial^\mu \phi + m^2 \phi^2 + \mu \phi^3 + \lambda \phi^4.
\end{equation}
where \( m, \mu \) have mass dimensions, and \( \lambda \) is dimensionless, i.e. \( m, \mu \) are relevant, and \( \lambda \) is marginal.

\( d = 5 \)

Now we have
\begin{equation}\label{eqn:qftLecture4:320}
[\phi] = \frac{3}{2},
\end{equation}
so that
\begin{equation}\label{eqn:qftLecture4:340}
[\partial_\mu \phi \partial^\mu \phi] = 5.
\end{equation}

A 5D Lagrangian could have local terms such as
\begin{equation}\label{eqn:qftLecture4:360}
\LL = \partial_\mu \phi \partial^\mu \phi + m^2 \phi^2 + \sqrt{\mu} \phi^3 + \inv{\mu'} \phi^4.
\end{equation}
where \( m, \mu, \mu' \) all have mass dimensions. In 5D there are no marginal couplings. Dimension 4 is the last dimension where marginal couplings exist. In condensed matter physics 4D is called the “upper critical dimension”.

From the point of view of particle physics, all the terms in the Lagrangian must be the ones that are relevant at long distances.

Least action principle (classical field theory).

Now we want to study 4D scalar theories. We have some action
\begin{equation}\label{eqn:qftLecture4:380}
S[\phi] = \int d^4 x \LL(\phi, \partial_\mu \phi).
\end{equation}

Let’s keep an example such as the following in mind
\begin{equation}\label{eqn:qftLecture4:400}
\LL = \underbrace{\inv{2} \partial_\mu \phi \partial^\mu \phi}_{\text{Kinetic term}} - \underbrace{\lr{ \inv{2} m^2 \phi^2 + \frac{\lambda}{4} \phi^4 }}_{\text{all relevant and marginal couplings}}.
\end{equation}
The even powers can be justified by assuming there is some symmetry that kills the odd powered terms.

fig. 1. Cylindrical spacetime boundary.

We will be integrating over a spacetime region such as that depicted in fig. 1, where a cylindrical spatial cross section is depicted that we allow to tend towards infinity. We demand that the field is fixed on the infinite spatial boundaries; the easiest way to do that is to demand that the field dies off there, that is
\begin{equation}\label{eqn:qftLecture4:420}
\lim_{\Abs{\Bx} \rightarrow \infty} \phi(\Bx) \rightarrow 0.
\end{equation}
The function \( \phi(\Bx, t) \) that we are after is the one that obeys this boundary condition and extremizes \( S[\phi] \).

Extremizing the action means that we seek \( \phi(\Bx, t) \)
\begin{equation}\label{eqn:qftLecture4:440}
\delta S[\phi] = 0 = S[\phi + \delta \phi] - S[\phi].
\end{equation}

How do we compute the variation?
\begin{equation}\label{eqn:qftLecture4:460}
\begin{aligned}
\delta S
&= \int d^d x \lr{ \LL(\phi + \delta \phi, \partial_\mu \phi + \partial_\mu \delta \phi) - \LL(\phi, \partial_\mu \phi) } \\
&= \int d^d x \lr{ \PD{\phi}{\LL} \delta \phi + \PD{(\partial_\mu \phi)}{\LL} (\partial_\mu \delta \phi) } \\
&= \int d^d x \lr{ \PD{\phi}{\LL} \delta \phi
+ \partial_\mu \lr{ \PD{(\partial_\mu \phi)}{\LL} \delta \phi}
- \lr{ \partial_\mu \PD{(\partial_\mu \phi)}{\LL} } \delta \phi
} \\
&=
\int d^d x
\delta \phi
\lr{ \PD{\phi}{\LL}
- \partial_\mu \PD{(\partial_\mu \phi)}{\LL} }
+ \int d^3 \sigma_\mu \lr{ \PD{(\partial_\mu \phi)}{\LL} \delta \phi }
\end{aligned}
\end{equation}

If we are explicit about the boundary term, we write it as
\begin{equation}\label{eqn:qftLecture4:480}
\int dt d^3 \Bx \partial_t \lr{ \PD{(\partial_t \phi)}{\LL} \delta \phi }
- \spacegrad \cdot \lr{ \PD{(\spacegrad \phi)}{\LL} \delta \phi }
=
\int d^3 \Bx \evalrange{ \PD{(\partial_t \phi)}{\LL} \delta \phi }{t = -T}{t = T}
- \int dt d^2 \BS \cdot \lr{ \PD{(\spacegrad \phi)}{\LL} \delta \phi }.
\end{equation}
but \( \delta \phi = 0 \) at \( t = \pm T \) and also at the spatial boundaries of the integration region.

This leaves
\begin{equation}\label{eqn:qftLecture4:500}
\delta S[\phi] = \int d^d x\, \delta \phi
\lr{ \PD{\phi}{\LL} - \partial_\mu \PD{(\partial_\mu \phi)}{\LL} } = 0 \quad \forall\, \delta \phi.
\end{equation}
That is

\begin{equation}\label{eqn:qftLecture4:540}
\boxed{
\PD{\phi}{\LL} - \partial_\mu \PD{(\partial_\mu \phi)}{\LL} = 0.
}
\end{equation}

This is the Euler-Lagrange equation for a single scalar field.

Returning to our sample scalar Lagrangian
\begin{equation}\label{eqn:qftLecture4:560}
\LL = \inv{2} \partial_\mu \phi \partial^\mu \phi - \inv{2} m^2 \phi^2 - \frac{\lambda}{4} \phi^4.
\end{equation}
This example is related to the Ising model which has a \( \phi \rightarrow -\phi \) symmetry. Applying the Euler-Lagrange equations, we have
\begin{equation}\label{eqn:qftLecture4:580}
\PD{\phi}{\LL} = -m^2 \phi - \lambda \phi^3,
\end{equation}
and
\begin{equation}\label{eqn:qftLecture4:600}
\begin{aligned}
\PD{(\partial_\mu \phi)}{\LL}
&=
\PD{(\partial_\mu \phi)}{} \lr{
\inv{2} \partial_\nu \phi \partial^\nu \phi } \\
&=
\inv{2} \partial^\nu \phi
\PD{(\partial_\mu \phi)}{}
\partial_\nu \phi
+
\inv{2} \partial_\nu \phi
\PD{(\partial_\mu \phi)}{}
\partial_\alpha \phi g^{\nu\alpha} \\
&=
\inv{2} \partial^\mu \phi
+
\inv{2} \partial_\nu \phi g^{\nu\mu} \\
&=
\partial^\mu \phi
\end{aligned}
\end{equation}
so we have
\begin{equation}\label{eqn:qftLecture4:620}
\begin{aligned}
0
&=
\PD{\phi}{\LL} -\partial_\mu
\PD{(\partial_\mu \phi)}{\LL} \\
&=
-m^2 \phi - \lambda \phi^3 - \partial_\mu \partial^\mu \phi.
\end{aligned}
\end{equation}
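A minimal sympy sketch of the same Euler-Lagrange computation, as a cross check of the signs (assuming a Python environment with sympy installed), reproduces \ref{eqn:qftLecture4:620}:

```python
from sympy import Function, symbols, Rational, diff
from sympy.calculus.euler import euler_equations

t, x, y, z = symbols('t x y z')
m, lam = symbols('m lambda', positive=True)
phi = Function('phi')(t, x, y, z)

# L = (1/2) d_mu phi d^mu phi - (1/2) m^2 phi^2 - (lambda/4) phi^4,
# written out explicitly with the (+,-,-,-) metric.
L = (Rational(1, 2) * (diff(phi, t)**2 - diff(phi, x)**2
                       - diff(phi, y)**2 - diff(phi, z)**2)
     - Rational(1, 2) * m**2 * phi**2
     - Rational(1, 4) * lam * phi**4)

# Prints the Euler-Lagrange equation, which matches
# -m^2 phi - lambda phi^3 - d_mu d^mu phi = 0.
print(euler_equations(L, [phi], [t, x, y, z]))
```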

For \( \lambda = 0 \), the free field theory limit, this is just
\begin{equation}\label{eqn:qftLecture4:640}
\partial_\mu \partial^\mu \phi + m^2 \phi = 0.
\end{equation}
Written out from the observer frame, this is
\begin{equation}\label{eqn:qftLecture4:660}
(\partial_t)^2 \phi - \spacegrad^2 \phi + m^2 \phi = 0.
\end{equation}

With a non-zero mass term
\begin{equation}\label{eqn:qftLecture4:680}
\lr{ \partial_t^2 - \spacegrad^2 + m^2 } \phi = 0,
\end{equation}
is called the Klein-Gordon equation.
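As a quick sanity check on the signs, substituting a plane wave \( \phi \sim e^{-i(E t - \Bp \cdot \Bx)} \) into the Klein-Gordon equation gives
\begin{equation}
\lr{ -E^2 + \Bp^2 + m^2 } \phi = 0,
\end{equation}
i.e. the expected relativistic dispersion relation \( E^2 = \Bp^2 + m^2 \).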

If we also had \( m = 0 \) we’d have
\begin{equation}\label{eqn:qftLecture4:700}
\lr{ \partial_t^2 - \spacegrad^2 } \phi = 0,
\end{equation}
which is the wave equation (for a massless free field). This is also called the D’Alembert equation, which is familiar from electromagnetism where we have
\begin{equation}\label{eqn:qftLecture4:720}
\begin{aligned}
\lr{ \partial_t^2 - \spacegrad^2 } \BE &= 0 \\
\lr{ \partial_t^2 - \spacegrad^2 } \BB &= 0,
\end{aligned}
\end{equation}
in a source free region.

Canonical quantization.

Consider a one dimensional harmonic oscillator, with Lagrangian
\begin{equation}\label{eqn:qftLecture4:740}
\LL = \inv{2} \dot{q}^2 - \frac{\omega^2}{2} q^2.
\end{equation}
The resulting equation of motion is \(\ddot{q} = -\omega^2 q\).

Let
\begin{equation}\label{eqn:qftLecture4:760}
p = \PD{\dot{q}}{\LL} = \dot{q}
\end{equation}
\begin{equation}\label{eqn:qftLecture4:780}
H(p,q) = \evalbar{p \dot{q} - \LL}{\dot{q}(p, q)}
= p^2 - \inv{2} p^2 + \frac{\omega^2}{2} q^2 = \frac{p^2}{2} + \frac{\omega^2}{2} q^2.
\end{equation}

In QM we quantize by mapping Poisson brackets to commutators.
\begin{equation}\label{eqn:qftLecture4:800}
\antisymmetric{\hatp}{\hat{q}} = -i
\end{equation}
One way to represent this is to say that states are wave functions \( \Psi(q) \), and that \( \hat{q} \) acts by multiplication by \( q \)
\begin{equation}\label{eqn:qftLecture4:820}
\hat{q} \Psi = q \Psi(q)
\end{equation}
With
\begin{equation}\label{eqn:qftLecture4:840}
\hatp = -i \PD{q}{},
\end{equation}
so
\begin{equation}\label{eqn:qftLecture4:860}
\antisymmetric{ -i \PD{q}{} } { q} = -i
\end{equation}
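Acting on an arbitrary wave function makes this commutator value explicit
\begin{equation}
\antisymmetric{ -i \PD{q}{} }{ q } \Psi
= -i \PD{q}{} \lr{ q \Psi } - q \lr{ -i \PD{q}{\Psi} }
= -i \Psi,
\end{equation}
as required.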

Let’s introduce an explicit space time split. We’ll write
\begin{equation}\label{eqn:qftLecture4:880}
L = \int d^3 x \lr{
\inv{2} (\partial_0 \phi(\Bx, t))^2 - \inv{2} \lr{ \spacegrad \phi(\Bx, t) }^2 - \frac{m^2}{2} \phi^2
},
\end{equation}
so that the action is
\begin{equation}\label{eqn:qftLecture4:900}
S = \int dt L.
\end{equation}
The dynamical variables are \( \phi(\Bx) \). We define
\begin{equation}\label{eqn:qftLecture4:920}
\begin{aligned}
\pi(\Bx, t) = \frac{\delta L}{\delta (\partial_0 \phi(\Bx, t))}
&=
\partial_0 \phi(\Bx, t) \\
&=
\dot{\phi}(\Bx, t),
\end{aligned}
\end{equation}
called the canonical momentum, or the momentum conjugate to \( \phi(\Bx, t) \). Why a functional derivative \( \delta \)? Because \( L \) is a spatial integral of the fields; the variation produces an implicit Dirac delta function that eliminates that integral, leaving the integrand evaluated at \( \Bx \).

\begin{equation}\label{eqn:qftLecture4:940}
\begin{aligned}
H
&= \int d^3 x \evalbar{\lr{ \pi(\bar{\Bx}, t) \dot{\phi}(\bar{\Bx}, t) - \LL }}{\dot{\phi}(\bar{\Bx}, t) = \pi(\bar{\Bx}, t) } \\
&= \int d^3 x \lr{ (\pi(\Bx, t))^2 - \inv{2} (\pi(\Bx, t))^2 + \inv{2} (\spacegrad \phi)^2 + \frac{m^2}{2} \phi^2 },
\end{aligned}
\end{equation}
or
\begin{equation}\label{eqn:qftLecture4:960}
H
= \int d^3 x \lr{ \inv{2} (\pi(\Bx, t))^2 + \inv{2} (\spacegrad \phi(\Bx, t))^2 + \frac{m^2}{2} (\phi(\Bx, t))^2 }
\end{equation}

In analogy to the momentum, position commutator in QM
\begin{equation}\label{eqn:qftLecture4:1000}
\antisymmetric{\hat{p}_i}{\hat{q}_j} = -i \delta_{ij},
\end{equation}
we “quantize” the scalar field theory by promoting \( \pi, \phi \) to operators and insisting that they also obey a commutator relationship
\begin{equation}\label{eqn:qftLecture4:980}
\antisymmetric{\pi(\Bx, t)}{\phi(\By, t)} = -i \delta^3(\Bx - \By).
\end{equation}

References

[1] Frank Wilczek. Fundamental constants. arXiv preprint arXiv:0708.4361, 2007. URL https://arxiv.org/abs/0708.4361.